Multivariable Feedback Control 2005
Multivariable Feedback Control: Analysis and Design, Second Edition is an excellent resource for advanced undergraduate and graduate courses studying multivariable control. It is also an invaluable tool for engineers who want to understand multivariable control, its limitations, and how it can be applied in practice. The analysis techniques and the material on control structure design should prove very useful in the new emerging area of systems biology.

Reviews of the first edition:

"Being rich in insights and practical tips on controller design, the book should also prove to be very beneficial to industrial control engineers, both as a reference book and as an educational tool."
Applied Mechanics Reviews

"In summary, this book can be strongly recommended not only as a basic text in multivariable control techniques for graduate and undergraduate students, but also as a valuable source of information for control engineers."
ISBN 0-470-01168-8
CONTENTS
PREFACE
1 INTRODUCTION
1.1 The process of control system design
1.2 The control problem
1.3 Transfer functions
1.4 Scaling
1.5 Deriving linear models
1.6 Notation
robust control, controller design, control structure design and controllability analysis. It may be desirable to teach the material in a different order from that given in the book. For example, in his course at ETH Zurich, Professor Manfred Morari has chosen to start with SISO systems (Chapters 1, 2, 5 and 7) and then system theory (Chapter 4), before moving on to MIMO systems (Chapters 3, 6, 8 and 9).

The book is partly based on a graduate multivariable control course given by the first author in the Cybernetics Department at the Norwegian University of Science and Technology in Trondheim. The course, attended by students from Electrical, Chemical and Mechanical Engineering, has usually consisted of 3 lectures a week for 12 weeks. In addition to regular assignments, the students have been required to complete a 50-hour design project using Matlab. In Appendix B, a project outline is given together with a sample exam.

Examples and Internet

All of the numerical examples have been solved using Matlab. Some sample files are included in the text to illustrate the steps involved. All these files use either the new Robust Control toolbox or the Control toolbox, but the problems could have been solved easily using other software packages.

The following are available over the Internet:
• Matlab files for examples and figures
• Solutions to selected exercises (those marked with a ∗; solutions to the remaining exercises are available to course lecturers by contacting the authors)
• Linear state-space models for plants used in the case studies
• Corrections, comments, extra exercises and exam sets
• Lecture notes for courses based on the book

This information can be accessed from the authors' home pages, which are easily found using a search engine like Google. The current addresses are:
• http://www.nt.ntnu.no/users/skoge
• http://www.le.ac.uk/engineering/staff/postlethwaite

Comments and questions

Please send questions, information on any errors and any comments you may have to the authors. Their email addresses are:
• [email protected]
• [email protected]

Acknowledgements

The contents of the book are strongly influenced by the ideas and courses of Professors John Doyle and Manfred Morari from the first author's time as a graduate student at Caltech during the period 1983-1986, and by the formative years, 1975-1981, the second author spent at Cambridge University with Professor Alistair MacFarlane. We thank the organizers of the 1993 European Control Conference for inviting us to present a short course on applied H∞ control, which was the starting point for our collaboration. The final manuscript for the first edition began to take shape in 1994-1995 during a stay the authors had at the University of California at Berkeley, thanks to Andy Packard, Kameshwar Poolla, Masayoshi Tomizuka and others at the BCCI-lab, and to the stimulating coffee at Brewed Awakening.

We are grateful for the numerous technical and editorial contributions of Yi Cao, Kjetil Havre, Ghassan Murad and Ying Zhao. The computations for Example 4.5 were performed by Roy S. Smith, who shared an office with the authors at Berkeley. Helpful comments, contributions and corrections were provided by Richard Braatz, Jie Chen, Atle C. Christiansen, Wankyun Chung, Bjørn Glemmestad, John Morten Godhavn, Finn Are Michelsen and Per Johan Nicklasson. A number of people have assisted in editing and typing various versions of the manuscript, including Zi-Qin Wang, Yongjiang Yu, Greg Becker, Fen Wu, Regina Raag and Anneli Laur. We also acknowledge the contributions from our graduate students, notably Neale Foster, Morten Hovd, Elling W. Jacobsen, Petter Lundström, John Morud, Raza Samar and Erik A. Wolff.

For the second edition, we are indebted to Vinay Kariwala for many technical contributions and editorial changes. Other researchers at Trondheim have also been helpful and we are especially grateful to Vidar Alstad and Espen Storkaas. From Leicester, Matthew Turner and Guido Herrmann were extremely helpful with the preparation of the new chapter on LMIs. Finally, thanks to colleagues and former colleagues at Trondheim and Caltech from the first author, and at Leicester, Oxford and Cambridge from the second author.

The aero-engine model (Chapters 11 and 13) and the helicopter model (Chapter 13) are provided with the kind permission of Rolls-Royce Military Aero Engines Ltd and the UK Ministry of Defence, DRA (now QinetiQ) Bedford, respectively.

We have made use of material from several books. In particular, we recommend Zhou et al. (1996) as an excellent reference on system theory and H∞ control and The Control Handbook (Levine, 1996) as a good general reference. Of the others we would like to acknowledge, and recommend for further reading, the following: Rosenbrock (1970), Rosenbrock (1974), Kwakernaak and Sivan (1972), Kailath (1980), Chen (1984), Francis (1987), Anderson and Moore (1989), Maciejowski (1989), Morari and Zafiriou (1989), Boyd and Barratt (1991), Doyle et al. (1992), Boyd et al. (1994), Green and Limebeer (1995), and the Matlab toolbox manuals of Grace et al. (1992), Balas et al. (1993), Chiang and Safonov (1992) and Balas et al. (2005).

Second edition

In this second edition, we have corrected a number of minor mistakes and made numerous changes and additions throughout the text, partly arising from the many questions and comments we have received from interested readers and partly to reflect developments in the field. The main additions and changes are:

Chapter 2: Material has been included on unstable plants, the feedback amplifier, the lower gain margin, simple IMC tuning rules for PID control, and the half rule for estimating the effective delay.
Chapter 3: Some material on the relative gain array has been moved in from Chapter 10.
Chapter 4: Changes have been made to the tests of state controllability and observability (of course, they are equivalent to the old ones).
Chapters 5 and 6: New results have been included on fundamental performance limitations introduced by RHP-poles and RHP-zeros.
Chapter 6: The section on limitations imposed by uncertainty has been rewritten.
Chapter 7: The examples of parametric uncertainty have been introduced earlier and shortened.
Chapter 9: A clear strategy is given for incorporating integral action into LQG control.
Chapter 10: The chapter has been reorganized. New material has been included on the selection of controlled variables and self-optimizing control. The section on decentralized control has been rewritten and several examples have been added.
Chapter 12: A complete new chapter on LMIs.
Appendix: Minor changes to positive definite matrices and the all-pass factorization.

In reality, the book has been expanded by more than 100 pages, but this is not reflected in the number of pages in the second edition because the page size has also been increased. All the Matlab programs have been updated for compatibility with the new Robust Control toolbox.

1 INTRODUCTION

In this chapter, we begin with a brief outline of the design process for control systems. We then discuss linear models and transfer functions which are the basic building blocks for the analysis and design techniques presented in this book. The scaling of variables is critical in applications and so we provide a simple procedure for this. An example is given to show how to derive a linear model in terms of deviation variables for a practical application. Finally, we summarize the most important notation used in the book.
where ȳ(s) denotes the Laplace transform of y(t), and so on. To simplify our presentation we will make the usual abuse of notation and replace ȳ(s) by y(s), etc. In addition, we will omit the independent variables s and t when the meaning is clear.

If u(t), x1(t), x2(t) and y(t) represent deviation variables away from a nominal operating point or trajectory, then we can assume x1(t = 0) = x2(t = 0) = 0. The elimination of x1(s) and x2(s) from (1.3) then yields the transfer function

y(s)/u(s) = G(s) = (β1 s + β0)/(s² + a1 s + a0)    (1.4)

Importantly, for linear systems, the transfer function is independent of the input signal (forcing function). Notice that the transfer function in (1.4) may also represent the following system:

ÿ(t) + a1 ẏ(t) + a0 y(t) = β1 u̇(t) + β0 u(t)    (1.5)

with input u(t) and output y(t).

Transfer functions, such as G(s) in (1.4), will be used throughout the book to model systems and their components. More generally, we consider rational transfer functions of the form

G(s) = (β_nz s^nz + ⋯ + β1 s + β0)/(s^n + a_{n-1} s^{n-1} + ⋯ + a1 s + a0)    (1.6)

For multivariable systems, G(s) is a matrix of transfer functions. In (1.6) n is the order of the denominator (or pole polynomial) and is also called the order of the system, and nz is the order of the numerator (or zero polynomial). Then n - nz is referred to as the pole excess or relative order.

Usually we let G(s) represent the effect of the inputs u on the outputs y, whereas Gd(s) represents the effect on y of the disturbances d ("process noise"). We then have the following linear process model in terms of deviation variables

y(s) = G(s)u(s) + Gd(s)d(s)    (1.8)

We have here made use of the superposition principle for linear systems, which implies that a change in a dependent variable (here y) can simply be found by adding together the separate effects resulting from changes in the independent variables (here u and d) considered one at a time.

All the signals u(s), d(s) and y(s) are deviation variables. This is sometimes shown explicitly, for example, by use of the notation δu(s), but since we always use deviation variables when we consider Laplace transforms, the δ is normally omitted.

1.4 Scaling

Scaling is very important in practical applications as it makes model analysis and controller design (weight selection) much simpler. It requires the engineer to make a judgement at the start of the design process about the required performance of the system. To do this, decisions are made on the expected magnitudes of disturbances and reference changes, on the allowed magnitude of each input signal, and on the allowed deviation of each output.

Let the unscaled (or originally scaled) linear model of the process in deviation variables be

ŷ = Ĝ û + Ĝd d̂;  ê = ŷ - r̂    (1.9)

where a hat (ˆ) is used to show that the variables are in their unscaled units. A useful approach for scaling is to make the variables less than 1 in magnitude. This is done by dividing each variable by its maximum expected or allowed change. For disturbances and manipulated inputs, we use the scaled variables

d = d̂/d̂max,  u = û/ûmax    (1.10)

and the outputs, references and control error are scaled with respect to the maximum allowed control error,

y = ŷ/êmax,  r = r̂/êmax,  e = ê/êmax    (1.11)

To formalize the scaling procedure, we introduce the scaling factors

De = êmax,  Du = ûmax,  Dd = d̂max,  Dr = r̂max    (1.12)

For multi-input multi-output (MIMO) systems, each variable in the vectors d̂, r̂, û and ê may have a different maximum value, in which case De, Du, Dd and Dr become diagonal scaling
matrices. This ensures, for example, that all errors (outputs) are of about equal importance in terms of their magnitude.

The corresponding scaled variables to use for control purposes are then

d = Dd⁻¹ d̂,  u = Du⁻¹ û,  y = De⁻¹ ŷ,  e = De⁻¹ ê,  r = De⁻¹ r̂    (1.13)

On substituting (1.13) into (1.9) we get

De y = Ĝ Du u + Ĝd Dd d;  De e = De y - De r

and introduction of the scaled transfer functions

G = De⁻¹ Ĝ Du,  Gd = De⁻¹ Ĝd Dd    (1.14)

yields the following model in terms of scaled variables:

y = G u + Gd d;  e = y - r    (1.15)

Here u and d should be less than 1 in magnitude, and it is useful in some cases to introduce a scaled reference r̃, which is less than 1 in magnitude. This is done by dividing the reference by the maximum expected reference change

r̃ = r̂/r̂max = Dr⁻¹ r̂    (1.16)

We then have that

r = R r̃  where  R ≜ De⁻¹ Dr = r̂max/êmax    (1.17)

Here R is the largest expected change in reference relative to the allowed control error (typically, R ≥ 1). The block diagram for the system in terms of scaled variables may then be written as shown in Figure 1.1, for which the following control objective is relevant:

• In terms of scaled variables we have that |d(t)| ≤ 1 and |r̃(t)| ≤ 1, and our control objective is to manipulate u with |u(t)| ≤ 1 such that |e(t)| = |y(t) - r(t)| ≤ 1 (at least most of the time).

[Figure 1.1: Model in terms of scaled variables]

Remark 1 A number of the interpretations used in the book depend critically on a correct scaling. In particular, this applies to the input-output controllability analysis presented in Chapters 5 and 6. Furthermore, for a MIMO system one cannot correctly make use of the sensitivity function S = (I + GK)⁻¹ unless the output errors are of comparable magnitude.

Remark 2 With the above scalings, the worst-case behaviour of a system is analyzed by considering disturbances d of magnitude 1, and references r̃ of magnitude 1.

Remark 3 The control error is

e = y - r = G u + Gd d - R r̃    (1.18)

and we see that a normalized reference change r̃ may be viewed as a special case of a disturbance with Gd = -R, where R is usually a constant diagonal matrix. We will sometimes use this observation to unify our treatment of disturbances and references.

Remark 4 The scaling of the outputs in (1.11) in terms of the control error is used when analyzing a given plant. However, if the issue is to select which outputs to control, see Section 10.3, then one may choose to scale the outputs with respect to their expected variation (which is usually similar to r̂max).

Remark 5 If the expected or allowed variation of a variable about its nominal value is not symmetric, then to allow for the worst case, we should use the largest variation for the scaling d̂max and the smallest variations for the scalings ûmax and êmax. Specifically, let x̂ denote an original physical variable (before introducing any deviation or scaling), and let x̂* denote its nominal value. Furthermore, assume that in terms of the physical variables we have

d̂low ≤ d̂ ≤ d̂high,  ûlow ≤ û ≤ ûhigh,  êlow ≤ ê ≤ êhigh

where ê = ŷ - r̂. Then we have the following scalings (or "ranges" or "spans"):

d̂max = max(|d̂high - d̂*|, |d̂low - d̂*|)    (1.19)
ûmax = min(|ûhigh - û*|, |ûlow - û*|)    (1.20)
êmax = min(|êhigh|, |êlow|)    (1.21)

For example, if for the unscaled physical input we have 0 ≤ û ≤ 10 with nominal value û* = 4, then the input scaling is ûmax = min(|10 - 4|, |0 - 4|) = min(6, 4) = 4.

Note that to get the worst case, we take the "max" for disturbances and the "min" for inputs and outputs. For example, if the disturbance is -5 ≤ d̂ ≤ 10 with zero nominal value (d̂* = 0), then d̂max = 10, whereas if the manipulated input is -5 ≤ û ≤ 10 with zero nominal value (û* = 0), then ûmax = 5. This approach may be conservative when the variations for several variables are not symmetric. The resulting scaled variables are

d = (d̂ - d̂*)/d̂max    (1.22)
u = (û - û*)/ûmax    (1.23)
y = (ŷ - ŷ*)/êmax    (1.24)

A further discussion on scaling and performance is given in Chapter 5 on page 165.

1.5 Deriving linear models

Linear models may be obtained from physical "first-principle" models, from analyzing input-output data, or from a combination of these two approaches. Although modelling and system identification are not covered in this book, it is always important for a control engineer to have a good understanding of a model's origin. The following steps are usually taken when deriving a linear model for controller design based on a first-principle approach:
1. Formulate a nonlinear state-space model based on physical knowledge.
2. Determine the steady-state operating point (or trajectory) about which to linearize.
3. Introduce deviation variables and linearize the model. There are essentially three parts to this step:
(a) Linearize the equations using a Taylor expansion where second- and higher-order terms are omitted.
(b) Introduce the deviation variables, e.g. δx(t) defined by
δx(t) = x(t) - x*(t)
where the superscript * denotes the steady-state operating point or trajectory along which we are linearizing.
(c) Subtract the steady-state part, which yields the linear model in deviation variables
δẋ(t) = A δx(t) + B δu(t)    (1.26)
Here x and u may be vectors, in which case the Jacobians A and B are matrices.
4. Scale the variables to obtain scaled models which are more suitable for control purposes.

In most cases steps 2 and 3 are performed numerically based on the model obtained in step 1. Also, since (1.26) is in terms of deviation variables, its Laplace transform becomes s δx(s) = A δx(s) + B δu(s), or

δx(s) = (sI - A)⁻¹ B δu(s)    (1.27)

Example 1.1 Physical model of a room heating process. The above steps for deriving a linear model will be illustrated on the simple example depicted in Figure 1.2, where the control problem is to adjust the heat input Q to maintain constant room temperature T (within ±1 K). The outdoor temperature T0 is the main disturbance. Units are shown in square brackets.

[Figure 1.2: Room heating process]

1. Physical model. An energy balance for the room requires that the change in energy in the room must equal the net inflow of energy to the room (per unit of time). This yields the following state-space model:

d(CV T)/dt = Q + α(T0 - T)    (1.28)

where T [K] is the room temperature, CV [J/K] is the heat capacity of the room, Q [W] is the heat input (from some heat source), and the term α(T0 - T) [W] represents the net heat loss due to exchange of air and heat conduction through the walls.

2. Operating point. Consider a case where the heat input Q* is 2000 W and the difference between indoor and outdoor temperatures T* - T0* is 20 K. Then the steady-state energy balance yields α* = 2000/20 = 100 W/K. We assume the room heat capacity is constant, CV = 100 kJ/K. (This value corresponds approximately to the heat capacity of air in a room of about 100 m³; thus we neglect heat accumulation in the walls.)

3. Linear model in deviation variables. Linearizing (1.28) and introducing deviation variables yields

CV dδT(t)/dt = δQ(t) + α(δT0(t) - δT(t))    (1.29)

Remark. If α depended on the state variable (T in this example), or on one of the independent variables of interest (Q or T0 in this example), then one would have to include an extra term (T0* - T*)δα(t) on the right hand side of (1.29).

On taking Laplace transforms in (1.29), assuming δT(t) = 0 at t = 0, and rearranging we get

δT(s) = (1/(τs + 1)) · ((1/α) δQ(s) + δT0(s)),  τ = CV/α    (1.30)

The time constant for this example is τ = 100·10³/100 = 1000 s ≈ 17 min, which is reasonable. It means that for a step increase in heat input it will take about 17 min for the temperature to reach 63% of its steady-state increase.

4. Linear model in scaled variables. We introduce the following scaled variables:

y(s) = δT(s)/δTmax,  u(s) = δQ(s)/δQmax,  d(s) = δT0(s)/δT0,max    (1.31)

In our case the acceptable variations in room temperature T are ±1 K, i.e. δTmax = êmax = 1 K. Furthermore, the heat input can vary between 0 W and 6000 W, and since its nominal value is 2000 W we have δQmax = 2000 W (see Remark 5 on page 7). Finally, the expected variations in outdoor temperature are ±10 K, i.e. δT0,max = 10 K. The model in terms of scaled variables then becomes

G(s) = (1/(τs + 1)) · δQmax/(α δTmax) = 20/(1000s + 1)
Gd(s) = (1/(τs + 1)) · δT0,max/δTmax = 10/(1000s + 1)    (1.32)
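The numbers in Example 1.1 are easy to reproduce; the following Matlab sketch (assuming the Control System Toolbox; the variable names are ours, not from the book) builds the unscaled model (1.30) and then applies the scalings to arrive at (1.32).

  % Room heating process (Example 1.1): physical parameters
  alpha = 100;        % heat transfer coefficient [W/K]
  Cv    = 100e3;      % room heat capacity [J/K]
  tau   = Cv/alpha;   % time constant [s], = 1000 s (about 17 min)

  % Unscaled model (1.30): deltaT = (1/(tau*s+1)) * ( deltaQ/alpha + deltaT0 )
  s     = tf('s');
  Ghat  = (1/alpha)/(tau*s + 1);   % from heat input deltaQ [W] to deltaT [K]
  Gdhat = 1/(tau*s + 1);           % from outdoor temperature deltaT0 [K] to deltaT [K]

  % Scaling factors used in the example
  dTmax  = 1;      % allowed temperature error [K]
  dQmax  = 2000;   % allowed heat input change [W]
  dT0max = 10;     % expected outdoor temperature change [K]

  % Scaled model (1.32)
  G  = Ghat*dQmax/dTmax;     % = 20/(1000s+1)
  Gd = Gdhat*dT0max/dTmax;   % = 10/(1000s+1)
  [dcgain(G) dcgain(Gd)]     % returns the static gains k = 20 and kd = 10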
Note that the static gain for the input is k = 20, whereas the static gain for the disturbance is kd = 10. The fact that kd > 1 means that we need some control (feedback or feedforward) to keep the output within its allowed bound (|e| ≤ 1) when there is a disturbance of magnitude |d| = 1. The fact that |k| > |kd| means that we have enough "power" in the inputs to reject the disturbance at steady state; that is, we can, using an input of magnitude |u| ≤ 1, have perfect disturbance rejection (e = 0) for the maximum disturbance (|d| = 1). We will return with a detailed discussion of this in Section 5.15.2 where we analyze the input-output controllability of the room heating process.

1.6 Notation

There is no standard notation to cover all of the topics covered in this book. We have tried to use the most familiar notation from the literature whenever possible, but an overriding concern has been to be consistent within the book, to ensure that the reader can follow the ideas and techniques through from one chapter to another.

The most important notation is summarized in Figure 1.3, which shows a one degree-of-freedom control configuration with negative feedback, a two degrees-of-freedom control configuration¹, and a general control configuration. The last configuration can be used to represent a wide class of controllers, including the one and two degrees-of-freedom configurations, as well as feedforward and estimation schemes and many others; and, as we will see, it can also be used to formulate optimization problems for controller design. The symbols used in Figure 1.3 are defined in Table 1.1. Apart from the use of v to represent the controller inputs for the general configuration, this notation is reasonably standard.

[Figure 1.3 and Table 1.1: control configurations and the corresponding signal definitions (u: control signals, etc.)]

Lower-case letters are used for vectors and signals (e.g. u, y, n), and upper-case letters for matrices, transfer functions and systems (e.g. G, K). Matrix elements are usually denoted by lower-case letters, so gij is the ij'th element in the matrix G. However, sometimes we use upper-case letters Gij, e.g. if G is partitioned so that Gij is itself a matrix, or to avoid conflicts in notation. The Laplace variable s is often omitted for simplicity, so we often write G when we mean G(s).

For state-space realizations we use the standard (A, B, C, D) notation. That is, a system G with a state-space realization (A, B, C, D) has a transfer function G(s) = C(sI - A)⁻¹B + D. We sometimes write

G(s) ≜ [ A  B ; C  D ]    (1.33)

to mean that the transfer function G(s) has a state-space realization given by the quadruple (A, B, C, D).

For closed-loop transfer functions we use S to denote sensitivity at the plant output, and T = I - S to denote complementary sensitivity. With negative feedback, S = (I + L)⁻¹ and T = L(I + L)⁻¹, where L is the transfer function around the loop as seen from the output. In most cases L = GK, but if we also include measurement dynamics (ym = Gm y + n) then L = GKGm. The corresponding transfer functions as seen from the input of the plant are LI = KG (or LI = KGmG), SI = (I + LI)⁻¹ and TI = LI(I + LI)⁻¹.

To represent uncertainty we use perturbations E (not normalized) or perturbations Δ (normalized such that their magnitude (norm) is less than or equal to 1). The nominal plant model is G, whereas the perturbed model with uncertainty is denoted Gp (usually for a set of possible perturbed plants) or G′ (usually for a particular perturbed plant). For example, with additive uncertainty we may have Gp = G + EA = G + wA ΔA, where wA is a weight representing the magnitude of the uncertainty.

By the right-half plane (RHP) we mean the closed right half of the complex plane, including the imaginary axis (jω-axis). The left-half plane (LHP) is the open left half of the complex plane, excluding the imaginary axis. A RHP-pole (unstable pole) is a pole located in the right-half plane, and thus includes poles on the imaginary axis. Similarly, a RHP-zero ("unstable" zero) is a zero located in the right-half plane.

We use Aᵀ to denote the transpose of a matrix A, and Aᴴ to represent its complex conjugate transpose.

Mathematical terminology

The symbol ≜ is used to denote equal by definition, ⇔ is used to denote equivalent by definition, and A ≡ B means that A is identically equal to B.

Let A and B be logic statements. Then the following expressions are equivalent:
A ⇐ B
A if B, or: If B then A
A is necessary for B
B ⇒ A, or: B implies A
B is sufficient for A
B only if A
not A ⇒ not B

The remaining notation, special terminology and abbreviations will be defined in the text.

¹ The one degree-of-freedom controller has only the control error r - ym as its input, whereas the two degrees-of-freedom controller has two inputs, namely r and ym.
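For MIMO models the scalings (1.13)-(1.14) are just pre- and post-multiplications by diagonal matrices. A minimal Matlab sketch is given below; the 2x2 plant and the limits are made-up numbers used only to illustrate the mechanics.

  % Hypothetical unscaled 2x2 plant and 2x1 disturbance model (illustrative only)
  s = tf('s');
  Ghat  = [10/(s+1)   2/(s+2) ;
            1/(s+1)   5/(2*s+1)];
  Gdhat = [ 3/(s+1) ; 1/(s+2) ];

  emax = [1; 0.5];   % largest allowed control errors (one per output)
  umax = [5; 2];     % largest allowed input changes (one per input)
  dmax = 4;          % largest expected disturbance

  De = diag(emax);  Du = diag(umax);  Dd = diag(dmax);

  % Scaled model (1.14): G = De^-1 * Ghat * Du,  Gd = De^-1 * Gdhat * Dd
  G  = inv(De)*Ghat*Du;
  Gd = inv(De)*Gdhat*Dd;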
2 CLASSICAL FEEDBACK CONTROL
In this chapter, we review the classical frequency response techniques for the analysis and design of
single-loop (single-input single-output, SISO) feedback control systems. These loop-shaping techniques
have been successfully used by industrial control engineers for decades, and have proved to be
indispensable when it comes to providing insight into the benefits, limitations and problems of feedback
control. During the 1980s the classical methods were extended to a more formal method based on shaping closed-loop transfer functions; for example, by considering the H∞ norm of the weighted
sensitivity function. We introduce this method at the end of the chapter.
The same underlying ideas and techniques will recur throughout the book as we present practical
procedures for the analysis and design of multivariable (multi-input multi-output, MIMO) control
systems.
Frequency-by-frequency sinusoids

We now want to give a physical picture of frequency response in terms of a system's response to persistent sinusoids. It is important that the reader has this picture in mind when reading the rest of the book. For example, it is needed to understand the response of a multivariable system in terms of its singular value decomposition. A physical interpretation of the frequency response for a stable linear system y = G(s)u is as follows. Apply a sinusoidal input signal with frequency ω [rad/s] and magnitude u0, such that

u(t) = u0 sin(ωt + α)

This input signal is persistent; that is, it has been applied since t = -∞. Then the output signal is also a persistent sinusoid of the same frequency, namely

y(t) = y0 sin(ωt + β)

Here u0 and y0 represent magnitudes and are therefore both non-negative. Note that the output sinusoid has a different amplitude y0 and is also shifted in phase from the input by φ ≜ β - α.

Importantly, it can be shown that y0/u0 and φ can be obtained directly from the Laplace transform G(s) after inserting the imaginary number s = jω and evaluating the magnitude and phase of the resulting complex number G(jω). We have

y0/u0 = |G(jω)|;  φ = ∠G(jω) [rad]    (2.1)

For example, let G(jω) = a + jb, with real part a = Re G(jω) and imaginary part b = Im G(jω); then

|G(jω)| = √(a² + b²);  ∠G(jω) = arctan(b/a)    (2.2)

In words, (2.1) says that after sending a sinusoidal signal through a system G(s), the signal's magnitude is amplified by a factor |G(jω)| and its phase is shifted by ∠G(jω). In Figure 2.1, this statement is illustrated for the following first-order delay system (time in seconds):

G(s) = k e^(-θs)/(τs + 1);  k = 5, θ = 2, τ = 10    (2.3)

At frequency ω = 0.2 rad/s, we see that the output y lags behind the input by about a quarter of a period and that the amplitude of the output is approximately twice that of the input. More accurately, the amplification is

|G(jω)| = k/√((τω)² + 1) = 5/√((10ω)² + 1) = 2.24

and the phase shift is

φ = ∠G(jω) = -arctan(τω) - θω = -arctan(10ω) - 2ω = -1.51 rad = -86.5°

[Figure 2.1: Sinusoidal response for system G(s) = 5e^(-2s)/(10s + 1) at frequency ω = 0.2 rad/s]

G(jω) is called the frequency response of the system G(s). It describes how the system responds to persistent sinusoidal inputs of frequency ω. The magnitude of the frequency response, |G(jω)|, being equal to |y0(ω)|/|u0(ω)|, is also referred to as the system gain. Sometimes the gain is given in units of dB (decibel) defined as

A [dB] = 20 log10 A    (2.4)

For example, A = 2 corresponds to A = 6.02 dB, A = √2 corresponds to A = 3.01 dB, and A = 1 corresponds to A = 0 dB. Both |G(jω)| and ∠G(jω) depend on the frequency ω. This dependency may be plotted explicitly in Bode plots (with ω as independent variable) or somewhat implicitly in a Nyquist plot (phase plane plot). In Bode plots we usually employ a log-scale for frequency and gain, and a linear scale for the phase.

[Figure 2.2: Frequency response (Bode plots) of G(s) = 5e^(-2s)/(10s + 1)]

In Figure 2.2, the Bode plots are shown for the system in (2.3). We note that in this case both the gain and phase fall monotonically with frequency. This is quite common for process control applications. The delay θ only shifts the sinusoid in time, and thus affects the phase but not the gain. The system gain |G(jω)| is equal to k at low frequencies; this is the steady-state gain and is obtained by setting s = 0 (or ω = 0). The gain remains relatively constant
up to the break frequency 1/τ where it starts falling sharply. Physically, the system responds too slowly to let high-frequency ("fast") inputs have much effect on the outputs.

The frequency response is also useful for an unstable plant G(s), which by itself has no steady-state response. Let G(s) be stabilized by feedback control, and consider applying a sinusoidal forcing signal to the stabilized system. In this case all signals within the system are persistent sinusoids with the same frequency ω, and G(jω) yields as before the sinusoidal response from the input to the output of G(s).

Phasor notation. For any sinusoidal signal

u(t) = u0 sin(ωt + α)

we may introduce the phasor notation by defining the complex number

u(ω) ≜ u0 e^(jα)    (2.5)

We then have that

u0 = |u(ω)|;  α = ∠u(ω)    (2.6)

We use ω as an argument to show explicitly that this notation is used for sinusoidal signals, and also because u0 and α generally depend on ω. Note that u(ω) is not equal to u(s) evaluated at s = ω or s = jω, nor is it equal to u(t) evaluated at t = ω. From Euler's formula for complex numbers, we have that e^(jz) = cos z + j sin z. It then follows that sin(ωt) is equal to the imaginary part of the complex function e^(jωt), and we can write the time domain sinusoidal response in complex form as follows:

u(t) = u0 Im e^(j(ωt+α))  gives, as t → ∞:  y(t) = y0 Im e^(j(ωt+β))    (2.7)

where

y0 = |G(jω)| u0,  β = ∠G(jω) + α    (2.8)

and |G(jω)| and ∠G(jω) are defined in (2.2). Since G(jω) = |G(jω)| e^(j∠G(jω)), the sinusoidal response in (2.7) and (2.8) can be compactly written in phasor notation as follows:

y(ω) e^(jωt) = G(jω) u(ω) e^(jωt)    (2.9)

or, because the term e^(jωt) appears on both sides,

y(ω) = G(jω) u(ω)    (2.10)

For stable minimum-phase systems there is a unique relationship between the gain and the phase of the frequency response. This may be quantified by the Bode gain-phase relationship, which gives the phase of G (normalized¹ such that G(0) > 0) at a given frequency ω0 as a function of |G(jω)| over the entire frequency range:

∠G(jω0) = (1/π) ∫ from -∞ to ∞ of [d ln|G(jω)|/d ln ω] · ln|(ω + ω0)/(ω - ω0)| · dω/ω    (2.11)

where the integration variable is ln ω (dω/ω = d ln ω), and where N(ω) ≜ d ln|G(jω)|/d ln ω is the slope term. The name minimum-phase refers to the fact that such a system has the minimum possible phase lag for the given magnitude response |G(jω)|. The term N(ω) is the slope of the magnitude in log-variables at frequency ω. In particular, the local slope at frequency ω0 is

N(ω0) = [d ln|G(jω)|/d ln ω] evaluated at ω = ω0

The term ln|(ω + ω0)/(ω - ω0)| in (2.11) is infinite at ω = ω0, so it follows that ∠G(jω0) is primarily determined by the local slope N(ω0). Also ∫ from -∞ to ∞ of ln|(ω + ω0)/(ω - ω0)| dω/ω = π²/2, which justifies the commonly used approximation for stable minimum-phase systems

∠G(jω0) ≈ (π/2) N(ω0) [rad] = 90° · N(ω0)    (2.12)

The approximation is exact for the system G(s) = 1/sⁿ (where N(ω) = -n), and it is good for stable minimum-phase systems except at frequencies close to those of resonant (complex) poles or zeros.

RHP-zeros and time delays contribute additional phase lag to a system when compared to that of a minimum-phase system with the same gain (hence the term non-minimum-phase system). For example, the system G(s) = (a - s)/(a + s) with a RHP-zero at s = a has a constant gain of 1, but its phase is -2 arctan(ω/a) [rad] (and not 0 [rad] as it would be for the minimum-phase system G(s) = 1 of the same gain). Similarly, the time delay system e^(-θs) has a constant gain of 1, but its phase is -ωθ [rad].

Straight-line approximations (asymptotes). For the design methods used in this book it is useful to be able to sketch Bode plots quickly, and in particular the magnitude (gain) diagram. The reader is therefore advised to become familiar with asymptotic Bode plots (straight-line approximations). For example, for a transfer function such as L1 in (2.14) below, the asymptotes follow directly from the break frequencies of the poles and zeros.

¹ The normalization is needed because, for example, G and -G have the same gain and may both be stable and minimum-phase, but their phases differ by 180°. Systems with integrators may be treated by replacing 1/s by 1/(s + ε) where ε is a small positive number.
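The gain and phase values quoted for the first-order delay system (2.3) are easily checked numerically. A small Matlab sketch (Control System Toolbox assumed):

  % First-order system with delay, G(s) = 5 e^{-2s}/(10s+1), as in (2.3)
  k = 5; theta = 2; tau = 10;
  G = tf(k, [tau 1], 'InputDelay', theta);

  w   = 0.2;               % frequency [rad/s]
  Gjw = freqresp(G, w);    % the complex number G(jw)
  gain  = abs(Gjw)         % = 2.24, as in the text
  phase = angle(Gjw)       % = -1.51 rad (about -86.5 deg)

  % The same numbers from the analytic expressions used in the text
  gain_check  = k/sqrt((tau*w)^2 + 1);
  phase_check = -atan(tau*w) - theta*w;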
[Figure 2.3: Bode plots of the transfer function L1(s) = 30(s + 1)/((s + 0.01)²(s + 10)). The asymptotes are given by dotted lines. The vertical dotted lines on the upper plot indicate the break frequencies ω1, ω2 and ω3.]

The most significant deviation is around the resonance frequency ω0 for complex poles or zeros with a damping ζ of about 0.3 or less. In Figure 2.3, the Bode plots are shown for

L1(s) = 30 (s + 1)/((s + 0.01)²(s + 10))    (2.14)

The asymptotes (straight-line approximations) are shown by dotted lines. In this example the asymptotic slope of |L1| is 0 up to the first break frequency at ω1 = 0.01 rad/s where we have two poles and then the slope changes to N = -2. Then at ω2 = 1 rad/s there is a zero and the slope changes to N = -1. Finally, there is a break frequency corresponding to a pole at ω3 = 10 rad/s and so the slope is N = -2 at this and higher frequencies. We note that the magnitude follows the asymptotes closely, whereas the phase does not. The asymptotic phase jumps at the break frequency by -90° (LHP-pole or RHP-zero) or +90° (LHP-zero or RHP-pole).

Remark. The phase approximation can be significantly improved if, for each term jω + a, we let the phase contribution be zero for ω < 0.1a and π/2 (90°) for ω > 10a, and then connect these two lines by a third line from (0, ω = 0.1a) to (π/2, ω = 10a), which of course passes through the correct phase π/4 at ω = a. For the terms s² + 2ζω0 s + ω0², ζ < 1, we can better approximate the phase by letting it be zero for ω < 0.1ω0 and π for ω ≥ 10ω0, with a third line connecting (0, ω = 0.1ω0) to (π, ω = 10ω0), which passes through the correct phase π/2 at ω = ω0.

[Figure 2.4: Block diagram of one degree-of-freedom feedback control system]

Here ym = y + n is the measured output and n is the measurement noise. Thus, the input to the plant is

u = K(s)(r - y - n)    (2.15)

The objective of control is to manipulate u (design K) such that the control error e remains small in spite of disturbances d. The control error e is defined as

e = y - r    (2.16)

where r denotes the reference value (setpoint) for the output.

Remark. In the literature, the control error is frequently defined as r - ym, which is often the controller input. However, this is not a good definition of an error variable. First, the error is normally defined as the actual value (here y) minus the desired value (here r). Second, the error should involve the actual value (y) and not the measured value (ym). We therefore use the definition in (2.16).

2.2.2 Closed-loop transfer functions

The plant model is written as

y = G(s)u + Gd(s)d    (2.17)

and for a one degree-of-freedom controller the substitution of (2.15) into (2.17) yields

y = GK(r - y - n) + Gd d    (2.18)

and hence the closed-loop response is

y = (I + GK)⁻¹GK r + (I + GK)⁻¹Gd d - (I + GK)⁻¹GK n = T r + S Gd d - T n    (2.19)

where S ≜ (I + GK)⁻¹ is the sensitivity function and T ≜ (I + GK)⁻¹GK = I - S is the complementary sensitivity function. The control error is

e = y - r = -S r + S Gd d - T n    (2.20)

and the corresponding plant input signal is

u = K S r - K S Gd d - K S n    (2.21)
We see that S is the closed-loop transfer function from the output disturbances to the outputs, while T is the closed-loop transfer function from the reference signals to the outputs. The term complementary sensitivity for T follows from the identity

S + T = I    (2.22)

To derive (2.22), we write S + T = (I + L)⁻¹ + (I + L)⁻¹L and factor out the term (I + L)⁻¹. The term sensitivity function is natural because S gives the sensitivity reduction afforded by feedback. To see this, consider the "open-loop" case, i.e. with no control (K = 0). Then the error is

e = y - r = -r + Gd d + 0 · n    (2.23)

and a comparison with (2.20) shows that, with the exception of noise, the response with feedback is obtained by premultiplying the right hand side by S.

Remark 1 Actually, the above explanation is not the original reason for the name "sensitivity". Bode first called S sensitivity because it gives the relative sensitivity of the closed-loop transfer function T to the relative plant model error. In particular, at a given frequency ω we have for a SISO plant, by straightforward differentiation of T, that

dT/T = S · dG/G    (2.24)

Remark 2 Equations (2.15)-(2.23) are written in matrix form because they also apply to MIMO systems. Of course, for SISO systems we may write S + T = 1, S = 1/(1 + L), T = L/(1 + L), and so on.

Remark 3 In general, closed-loop transfer functions for SISO systems with negative feedback may be obtained from the rule

OUTPUT = ("direct"/(1 + "loop")) · INPUT    (2.25)

where "direct" represents the transfer function for the direct effect of the input on the output (with the feedback path open) and "loop" is the transfer function around the loop (denoted L(s)). In the above case L = GK. If there is also a measurement device, Gm(s), in the loop, then L(s) = GKGm. The rule in (2.25) is easily derived by generalizing (2.18). In Section 3.2, we present a more general form of this rule which also applies to multivariable systems.

[Figure 2.5: Two degrees-of-freedom feedback and feedforward control. Perfect measurements of y and d assumed.]

2.2.3 Two degrees-of-freedom and feedforward control

The control structure in Figure 2.4 is called one degree-of-freedom because the controller K acts on a single signal, namely the difference r - ym. In the two degrees-of-freedom structure of Figure 2.5, we treat the two signals ym and r independently by introducing a "feedforward" controller Kr on the reference². In Figure 2.5 we have also introduced a feedforward controller Kd for the measured disturbance d. The plant input in Figure 2.5 is the sum of the contributions from the feedback controller and the two feedforward controllers,

u = K(r - y) + Kr r - Kd d    (2.26)

where the first term is the feedback contribution and the last two terms are the feedforward contributions, and where for simplicity we have assumed perfect measurements of y and d. After substituting (2.26) into (2.17) and solving with respect to y,

y = (I + GK)⁻¹ [G(K + Kr) r + (Gd - GKd) d]    (2.27)

Using SGK - I = T - I = -S, the resulting control error is

e = y - r = S(-Sr r + Sd Gd d)    (2.28)

where the three "sensitivity" functions, giving the effect of control, are defined by

S = (I + GK)⁻¹,  Sr = I - GKr,  Sd = I - GKd Gd⁻¹    (2.29)

S is the classical feedback sensitivity function, whereas Sr and Sd are the "feedforward sensitivity functions" for reference and disturbance, respectively. Without feedback control (K = 0) we have S = I, and correspondingly without feedforward control (Kd = 0 and Kr = 0) we have Sd = I and Sr = I. We want the sensitivities to be small to get a small error e.

² There are many other ways of introducing two degrees-of-freedom control, see e.g. Figure 2.25 (page 52) for a "prefilter" structure. The form in Figure 2.5 is preferred here because it unifies the treatment of references and disturbances.
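The closed-loop relations (2.19)-(2.22) are easy to verify numerically. The Matlab sketch below uses a hypothetical plant, disturbance model and PI controller (none of them from the book) purely to illustrate the algebra.

  % Hypothetical SISO plant, disturbance model and PI controller (illustrative only)
  s  = tf('s');
  G  = 3/((10*s+1)*(5*s+1));
  Gd = 1/(10*s+1);
  K  = 1.2*(1 + 1/(15*s));

  L = G*K;                  % loop transfer function
  S = feedback(1, L);       % sensitivity, S = (I+GK)^-1
  T = feedback(L, 1);       % complementary sensitivity, T = GK(I+GK)^-1

  norm(1 - minreal(S + T), inf)   % identity (2.22): S + T = I (numerically zero)

  step(T, S*Gd)             % responses y = T r (reference) and y = S Gd d (disturbance)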
Perfect control (e = 0) is obtained with the controller choices

Kr = G⁻¹,  Kd = G⁻¹Gd    (2.30)

since then

y = G(G⁻¹r - G⁻¹Gd d) + Gd d = r

These controllers also give Sr = 0 and Sd = 0 in (2.28). However, note that in (2.30) we must assume that it is possible to realize physically the plant inverse G⁻¹ and that both the plant G and the resulting controller containing the term G⁻¹ are stable. These are serious considerations, but of more general concern is the loss of performance that inevitably arises because (1) the disturbances are never known (measured) exactly, and (2) G is never an exact model. The fundamental reasons for using feedback control are therefore the presence of

1. Signal uncertainty - unknown disturbance (d)
2. Model uncertainty (Δ)
3. An unstable plant

The third reason follows because unstable plants can only be stabilized by feedback (see Remark 2 on internal stability, page 145). In addition, for a nonlinear plant, feedback control provides a linearizing effect on the system's behaviour. This is discussed in the next section.

2.2.5 High-gain feedback

The benefits of feedback control require the use of "high" gains. As seen from (2.30), the perfect feedforward controller uses an explicit model of the plant inverse as part of the controller. With feedback, on the other hand, the use of high gains in GK implicitly generates an inverse. To see this, note that with L = GK large, we get S = (I + GK)⁻¹ ≈ 0 and T = I - S ≈ I. From (2.21) the input signal generated by feedback is u = KS(r - Gd d - n), and from the identity KS = G⁻¹T it follows that with high-gain feedback the input signal is u ≈ G⁻¹(r - Gd d - n) and we get y ≈ r - n. Thus, high-gain feedback generates the inverse without the need for an explicit model, and this also explains why feedback control is much less sensitive to uncertainty than feedforward control.

This is one of the beauties of feedback control; the problem is that high-gain feedback may induce instability. The solution is to use high feedback gains only over a limited frequency range (typically, at low frequencies), and to ensure that the gains "roll off" at higher frequencies where stability is a problem. The design is most critical around the "bandwidth" frequency where the loop gain |L| drops below 1. The design of feedback controllers therefore deals with shaping the loop gain as a function of frequency.

The linearizing effect of high-gain feedback is particularly important for cases where nonlinear effects cause the linear model G to change significantly as we change r. Thus, even though the underlying system is strongly nonlinear (and uncertain), the input-output response from r to y is approximately linear (and certain) with a constant gain of 1.

Example 2.1 Feedback amplifier. The "global" linearizing effect of negative feedback is the basis for feedback amplifiers, first developed by Harold Black in 1927 for telephone communication (Kline, 1993). In the feedback amplifier, we want to magnify the input signal r by a factor α by sending it through an amplifier G with a large gain. In an open-loop (feedforward) arrangement y = Gr and we must adjust the amplifier such that G = α. Black's idea was to leave the high-gain amplifier unchanged, and instead modify the input signal r by subtracting (1/α)y, where y is the measured output signal. This corresponds to inserting a controller K2 = 1/α in the negative feedback path (e.g. see Figure 4.4(d) on page 147) to get y = G(r - K2 y). The closed-loop response becomes y = (G/(1 + GK2)) r, and for |GK2| ≫ 1 (which requires |G| ≫ α) we get y ≈ (1/K2) r = α r, as desired. Note that the closed-loop gain α is set by the feedback network (K2 = 1/α) and is independent of amplifier (G) parameter changes. Furthermore, within the system's closed-loop bandwidth, all signals (with any magnitude or frequency) are amplified by the same amount α, and this property is independent of the amplifier dynamics G(s). Apparently, Black's claimed improvements, with simple negative feedback, over the then-standard feedforward approach, seemed so unlikely that his patent application was initially rejected.

Remark. In Black's design, the amplifier gain must be much larger than the desired closed-loop amplification (i.e. |G| ≫ α). This seems unnecessary, because with feedforward control, it is sufficient to require |G| = α. Indeed, the requirement |G| ≫ α can be avoided if we add integral action to the loop. This may be done by use of a "two degrees-of-freedom" controller where we add a controller K1 before the plant (amplifier) to get y = GK1(r - K2 y) (see Figure 4.4 on page 147). The closed-loop response becomes y = (GK1/(1 + GK1K2)) r, and for |GK1K2| ≫ 1 (which requires |GK1| ≫ α) we get y ≈ (1/K2) r = α r. The requirement |GK1K2| ≫ 1 only needs to hold at those frequencies for which amplification is desired, and may be obtained by choosing K1 as a simple PI (proportional-integral) controller with a proportional gain of 1: that is, K1 = 1 + 1/(τI s), where τI is the adjustable integral time.

Of course, the "global" linearizing effect of negative feedback assumes that high-gain feedback is possible and does not result in closed-loop instability. The latter is well known with audio amplifiers as "singing", "ringing", "squalling" or "howling". In the next section, we consider conditions for closed-loop stability.
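The insensitivity of Black's arrangement to the amplifier gain (Example 2.1) can be seen from a two-line calculation; the numbers below are illustrative and not from the book.

  % Black's feedback amplifier: desired gain alpha, feedback network K2 = 1/alpha
  alpha = 10;  K2 = 1/alpha;

  G   = [1e3 1e4 1e5];        % three very different amplifier gains
  Gcl = G./(1 + G*K2)         % closed-loop gains: 9.90, 9.99, 9.999 (all close to 10)

  % In the open-loop (feedforward) arrangement y = G*r, the same 100x spread in G
  % would give a 100x spread in the overall gain.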
[Figure 2.6: Effect of proportional gain Kc on the closed-loop response y(t) for the inverse response process]

Example 2.2 Inverse response process. Consider the plant (time in seconds)

G(s) = 3(-2s + 1)/((10s + 1)(5s + 1))    (2.31)

This is one of two main example processes used in this chapter to illustrate the techniques of classical control. The model has a right-half plane (RHP) zero at s = 0.5 rad/s. This imposes a fundamental limitation on control, and high controller gains will induce closed-loop instability.

This is illustrated for a proportional (P) controller K(s) = Kc in Figure 2.6, where the response y = Tr = GKc(1 + GKc)⁻¹r to a step change in the reference (r(t) = 1 for t > 0) is shown for four different values of Kc. The system is seen to be stable for Kc < 2.5, and unstable for Kc > 2.5. The controller gain at the limit of instability, Ku = 2.5, is sometimes called the ultimate gain and for this value the system is seen to cycle continuously with a period Pu = 15.2 s, corresponding to the frequency ωu ≜ 2π/Pu = 0.42 rad/s.

Two methods are commonly used to determine closed-loop stability:

1. The poles of the closed-loop system are evaluated. That is, the roots of 1 + L(s) = 0 are found, where L is the transfer function around the loop. The system is stable if and only if all the closed-loop poles are in the open left-half plane (LHP) (i.e. poles on the imaginary axis are considered "unstable"). The poles are also equal to the eigenvalues of the state-space A-matrix, and this is usually how the poles are computed numerically.

2. The frequency response (including negative frequencies) of L(jω) is plotted in the complex plane and the number of encirclements it makes of the critical point -1 is counted. By Nyquist's stability criterion (for which a detailed statement is given in Theorem 4.9) closed-loop stability is inferred by equating the number of encirclements to the number of open-loop unstable poles (RHP-poles).

For open-loop stable systems where ∠L(jω) falls with frequency such that ∠L(jω) crosses -180° only once (from above at frequency ω180), one may equivalently use Bode's stability condition, which says that the closed-loop system is stable if and only if the loop gain |L| is less than 1 at this frequency; that is,

Stability  ⇔  |L(jω180)| < 1    (2.32)

where ω180 is the phase crossover frequency defined by ∠L(jω180) = -180°.

Method 1, which involves computing the poles, is best suited for numerical calculations. However, time delays must first be approximated as rational transfer functions, e.g. Padé approximations. Method 2, which is based on the frequency response, has a nice graphical interpretation, and may also be used for systems with time delays. Furthermore, it provides useful measures of relative stability and forms the basis for several of the robustness tests used later in this book.

Example 2.3 Stability of inverse response process with proportional control. Let us determine the condition for closed-loop stability of the plant G in (2.31) with proportional control; that is, with K(s) = Kc (a constant) and loop transfer function L(s) = KcG(s).

1. The system is stable if and only if all the closed-loop poles are in the LHP. The poles are solutions to 1 + L(s) = 0 or equivalently the roots of

(10s + 1)(5s + 1) + Kc · 3(-2s + 1) = 0
⇔ 50s² + (15 - 6Kc)s + (1 + 3Kc) = 0    (2.33)

But since we are only interested in the half plane location of the poles, it is not necessary to solve (2.33). Rather, one may consider the coefficients ai of the characteristic equation an·sⁿ + ⋯ + a1·s + a0 = 0 in (2.33), and use the Routh-Hurwitz test to check for stability. For second-order systems, this test says that we have stability if and only if all the coefficients have the same sign. This yields the following stability conditions:

(15 - 6Kc) > 0;  (1 + 3Kc) > 0

or equivalently -1/3 < Kc < 2.5. With negative feedback (Kc > 0) only the upper bound is of practical interest, and we find that the maximum allowed gain ("ultimate gain") is Ku = 2.5, which agrees with the simulation in Figure 2.6. The poles at the onset of instability may be found by substituting Kc = Ku = 2.5 into (2.33) to get 50s² + 8.5 = 0, i.e. s = ±j√(8.5/50) = ±j0.412. Thus, at the onset of instability we have two poles on the imaginary axis, and the system will be continuously cycling with a frequency ω = 0.412 rad/s corresponding to a period Pu = 2π/ω = 15.2 s. This agrees with the simulation results in Figure 2.6.

2. Stability may also be evaluated from the frequency response of L(s). A graphical evaluation is most enlightening. The Bode plots of the plant (i.e. L(s) with Kc = 1) are shown in Figure 2.7. From these one finds the frequency ω180 where ∠L is -180° and then reads off the corresponding gain. This yields |L(jω180)| = Kc|G(jω180)| = 0.4Kc, and we get from (2.32) that the system is stable if and only if |L(jω180)| < 1 ⇔ Kc < 2.5 (as found above). Alternatively, the phase crossover frequency may be obtained analytically from

∠L(jω180) = -arctan(2ω180) - arctan(5ω180) - arctan(10ω180) = -180°

which gives ω180 = 0.412 rad/s as found in the pole calculation above. The loop gain at this frequency is

|L(jω180)| = Kc · 3√((2ω180)² + 1) / (√((5ω180)² + 1) · √((10ω180)² + 1)) = 0.4Kc

which is the same as found from the graph in Figure 2.7. The stability condition |L(jω180)| < 1 then yields Kc < 2.5 as expected.
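Both stability tests used in Example 2.3 take one or two lines in Matlab (Control System Toolbox assumed):

  % Inverse response process (2.31) under proportional control
  s  = tf('s');
  G  = 3*(-2*s+1)/((10*s+1)*(5*s+1));
  Kc = 2.5;                        % gain at the stability limit

  % Method 1: closed-loop poles from the characteristic polynomial (2.33)
  roots([50  15-6*Kc  1+3*Kc])     % = +/- j0.412 for Kc = 2.5

  % Method 2: frequency response of the plant (loop with Kc = 1)
  [Gm,Pm,w180,wc] = margin(G);     % Gm = 1/|G(j w180)| = 2.5, w180 = 0.412 rad/s
  Ku = Gm;  Pu = 2*pi/w180         % ultimate gain 2.5 and period 15.2 s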
[Figure 2.7: Bode plots for L(s) = Kc · 3(-2s + 1)/((10s + 1)(5s + 1)) with Kc = 1]

2.4 Evaluating closed-loop performance

Although closed-loop stability is an important issue, the real objective of control is to improve performance; that is, to make the output y(t) behave in a more desirable manner. Actually, the possibility of inducing instability is one of the disadvantages of feedback control which has to be traded off against performance improvement. The objective of this section is to discuss ways of evaluating closed-loop performance.

2.4.1 Typical closed-loop responses

The following example, which considers proportional plus integral (PI) control of the stable inverse response process in (2.31), illustrates what type of closed-loop performance one might expect.

Example 2.4 PI control of the inverse response process. We have already studied the use of a proportional controller for the process in (2.31). We found that a controller gain of Kc = 1.5 gave a reasonably good response, except for a steady-state offset (see Figure 2.6). The reason for this offset is the non-zero steady-state sensitivity function, S(0) = 1/(1 + KcG(0)) = 0.18 (where G(0) = 3 is the steady-state gain of the plant). From e = -Sr in (2.20) it follows that for r = 1 the steady-state control error is -0.18 (as is confirmed by the simulation in Figure 2.6). To remove the steady-state offset we add integral action in the form of a PI controller

K(s) = Kc (1 + 1/(τI s))    (2.34)

The settings for Kc and τI can be determined from the classical tuning rules of Ziegler and Nichols (1942):

Kc = Ku/2.2,  τI = Pu/1.2    (2.35)

where Ku is the maximum (ultimate) P controller gain and Pu is the corresponding period of oscillations. In our case Ku = 2.5 and Pu = 15.2 s (as observed from the simulation in Figure 2.6), and we get Kc = 1.14 and τI = 12.7 s. Alternatively, Ku and Pu can be obtained analytically from the model G(s),

Ku = 1/|G(jωu)|,  Pu = 2π/ωu    (2.36)

where ωu is defined by ∠G(jωu) = -180°.

[Figure 2.8: Closed-loop response to a unit step change in reference for the stable inverse response process (2.31) with PI control]

The closed-loop response, with PI control, to a step change in reference is shown in Figure 2.8. The output y(t) has an initial inverse response due to the RHP-zero, but it then rises quickly and reaches y(t) = 0.9 at about t = 8 s (the rise time). The response is quite oscillatory and it does not settle to within ±5% of the final value until after t = 65 s (the settling time). The overshoot (height of peak relative to the final value) is about 62% which is much larger than one would normally like for reference tracking. The overshoot is due to controller tuning, and could have been avoided by reducing the controller gain. The decay ratio, which is the ratio between subsequent peaks, is about 0.35 which is also a bit large.

Exercise 2.1 * Use (2.36) to compute Ku and Pu for the process in (2.31).

In summary, for this example, the Ziegler-Nichols PI tunings are somewhat "aggressive" and give a closed-loop system with smaller stability margins and a more oscillatory response than would normally be regarded as acceptable. For disturbance rejection the controller settings may be more reasonable, and one can add a prefilter to improve the response for reference tracking, resulting in a two degrees-of-freedom controller. However, this will not change the stability robustness of the system.

[Figure 2.9: Closed-loop response to a unit step change in reference for the unstable process (2.37) with PI control]
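A Matlab sketch of the Ziegler-Nichols calculation (2.35)-(2.36) for the inverse response process, which also serves as a starting point for Exercise 2.1 (Control System Toolbox assumed):

  % Plant (2.31); ultimate gain and period from the frequency response, as in (2.36)
  s = tf('s');
  G = 3*(-2*s+1)/((10*s+1)*(5*s+1));

  [Gm,~,w180] = margin(G);    % w180: frequency where the phase of G is -180 deg
  Ku = Gm                     % ultimate gain = 1/|G(j w180)| = 2.5
  Pu = 2*pi/w180              % ultimate period = 15.2 s

  % Ziegler-Nichols PI settings (2.35) and the resulting closed-loop response
  Kc   = Ku/2.2;   taui = Pu/1.2;    % Kc = 1.14, taui = 12.7 s
  K    = Kc*(1 + 1/(taui*s));        % PI controller (2.34)
  T    = feedback(G*K, 1);           % closed-loop reference-to-output transfer function
  step(T, 100), grid on              % oscillatory response as in Figure 2.8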
Example 2.5 PI control of unstable process. Consider the unstable process

G(s) = 4/((s - 1)(0.02s + 1)²)    (2.37)

Without control (K = 0), the output in response to any input change will eventually go out of bounds. To stabilize, we use a PI controller (2.34) with settings³

Kc = 1.25,  τI = 1.5    (2.38)

The resulting stable closed-loop response to a step change in the reference is shown in Figure 2.9. The response is not oscillatory and the selected tunings are robust with a large gain margin of 18.7 (see Section 2.4.3). The output y(t) has some overshoot (about 30%), which is unavoidable for an unstable process. We note with interest that the input u(t) starts out positive, but that the final steady-state value is negative. That is, the input has an inverse response. This is expected for an unstable process, since the transfer function KS (from the plant output to the plant input) must have a RHP-zero, see page 146.

[Figure 2.10: Characteristics of closed-loop response to step in reference]

Step response analysis. The above examples illustrate the approach often taken by engineers when evaluating the performance of a control system. That is, one simulates the response to a step in the reference input, and considers the following characteristics (see Figure 2.10):

• Rise time (tr): the time it takes for the output to first reach 90% of its final value, which is usually required to be small.
• Settling time (ts): the time after which the output remains within ±ε% of its final value (typically ε = 5), which is usually required to be small.
• Overshoot: the peak value divided by the final value, which should typically be 1.2 (20%) or less.
• Decay ratio: the ratio of the second and first peaks, which should typically be 0.3 or less.
• Steady-state offset: the difference between the final value and the desired final value, which is usually required to be small.

The rise time and settling time are measures of the speed of the response, whereas the overshoot, decay ratio and steady-state offset are related to the quality of the response. Another measure of the quality of the response is:

• Total variation (TV): the total up and down movement of the signal (input or output), which should be as small as possible. The computation of total variation is illustrated in Figure 2.11. In Matlab, TV = sum(abs(diff(y))).

The above measures address the output response, y(t). In addition, one should consider the magnitude of the manipulated input (control signal, u), which usually should be as small and smooth as possible. One measure of "smoothness" is to have a small total variation. Note that attempting to reduce the total variation of the input signal is equivalent to adding a penalty on input movement, as is commonly done when using model predictive control (MPC). If there are important disturbances, then the response to these should also be considered. Finally, one may investigate in simulation how the controller works if the plant model parameters are different from their nominal values.

Remark 1 Another way of quantifying time domain performance is in terms of some norm of the error signal e(t) = y(t) - r(t). For example, one might use the integral squared error (ISE), or its square root which is the 2-norm of the error signal, ‖e(t)‖₂ = √(∫₀^∞ |e(τ)|² dτ). In this way, the various objectives related to both the speed and quality of response are combined into one number. Actually, in most cases minimizing the 2-norm seems to give a reasonable trade-off between the various objectives listed above. Another advantage of the 2-norm is that the resulting optimization problems (such as minimizing ISE) are numerically easy to solve. One can also take input magnitudes into account by considering, for example, J = √(∫₀^∞ (Q|e(t)|² + R|u(t)|²) dt) where Q and R are positive constants. This is similar to linear quadratic (LQ) optimal control, but in LQ control one normally considers an impulse rather than a step change in r(t).

Remark 2 The step response is equal to the integral of the corresponding impulse response, e.g. set u(τ) = 1 in (4.11). Some thought then reveals that one can compute the total variation as the integrated absolute area (1-norm) of the corresponding impulse response (Boyd and Barratt, 1991, p. 98). That is, let y = Tr; then the total variation in y for a step change in r is

TV = ∫₀^∞ |gT(τ)| dτ ≜ ‖gT(t)‖₁    (2.39)

where gT(t) is the impulse response of T, i.e. y(t) resulting from an impulse change in r(t).

³ The PI controller for this unstable process is almost identical to the H∞ S/KS controller obtained using the weights wu = 1 and wP = 1/M + ωB*/s with M = 1.5 and ωB* = 10 in (2.112) and (2.113); see Exercise 2.5 (page 65).
system behaviour in the crossover (bandwidth) region. We will now describe some of the
LOWiso)
important frequency domain measures used to assess performance, e.g. gain and phase ‘~Ie
w +oc
margins, the maximum peaks of S and T, and the various definitions of crossover and
S 0.5
bandwidth frequencies used to characterize speed of response. ‘I
negative feedback. A typical Bode plot and a typical Nyquist plot of LOw) illustrating the S
gain margin (GM) and phase margin (PM) are given in Figures 2.12 and 2.13, respectively. LOw) —
S
From Nyquist’s stability condition, the closeness of the curve L(jw) to the point —1 in the
complex plane is a good measure of how close a stable closed-loop system is to instability.
We see from Figure 2.13 that GM measures the closeness of LOw) to —1 along the real axis,
whereas PM is a measure along the unit circle.
Figure 2.13: Typical Nyquist plot of LOw) for stable plant with PM and GM indicated. Closed-loop
instability occurs if LOw) encircles the critical point —1.
10’
0)
~0
If there is more than one such crossing between —1 and 0, then we take the closest crossing
..~ lao
0O
C,
to —1, corresponding to the largest value of L(jwisa)I. For some systems, e.g. for low-
order minimum-phase plants, there is no such crossing and GM = cc. The GM is the factor
la_Il by which the loop gain ILCIw)I may be increased before the closed-loop system becomes
I 0~ unstable. The GM is thus a direct safeguard against steady-state gain uncertainty (error).
Typically, we require GM > 2. On a Bode plot with a logarithmic axis for ILl, we have that
GM is the vertical distance (in dB) from the unit magnitude line down to L(jwigo)j, see
Figure 2.12. Note that 20 log10 GM is the GM in dB.
In some cases, e.g. for an unstable plant, the Nyquist plot of L crosses the negative real axis
between —cc and —1, and a lower gain margin (or gain reduction margin) can be similarly
Ip defined,
l0~’ 10° GML = 1/IL(jwLlso)l (2.42)
Frequency [rad/s]
where WL1SO is the frequency where the Nyquist curve of LOw) crosses the negative real
Figure 2.12: Typical Bode plot of LOw) with PM and GM indicated
axis between —cc and —1. If there is more than one such crossing, then we take the closest
crossing to —1, corresponding to the smallest value of L(jwigo) For many systems, e.g. for
~.
More precisely, if the Nyquist plot of L crosses the negative real axis between —1 and 0,
most stable plants, there is no such crossing and GML = 0. The value of GMj~ is the factor
then the (upper) gain margin is defined as
by which the loop gain IL(iw)I may be decreased before the closed-loop system becomes
unstable.
GM = 1/~L(jw15o)~ (2.40) The phase margin is defined as
where the phase crossover frequency w150 is where the Nyquist curve of L(jw) crosses the PM LL(jw~) + 180° (2.43)
negative real axis between —1 and 0, i.e.
where the gain c,vssoverfrequency w~ is the frequency where IL(iw) I crosses 1, i.e.
LL(jwi8o) = —180° (2.41) (2.44)
1
L.
34 MULTI VARIABLE FEEDBACK CONTROL CLASSICAL FEEDBACK CONTROL 35
If there is more than one such crossing, then the one giving the smallest value of PM is
taken. The PM tells us how much negative phase (phase lag) we can add to L(s) at frequency
wc before the phase at this frequency becomes —180° which corresponds to closed-loop 0
‘0
instability (see Figure 2.13). Typically, we require PM larger than 30° or more. The PM is z
c
a direct safeguard against time delay uncertainty; the system becomes unstable if we add a a
time delay of
Omax = PM/w0 (2.45)
Note that the units must be consistent, and so if Wc is in [rad/s] then PM must be in radians.
It is also important to note that by decreasing the value of w~ (lowering the closed-loop
bandwidth, resulting in a slower response) the system can tolerate larger time delay errors.
ato
C,
PC
0
~0 0
=10
a50
a
Figure 2.15: Bode magnitude and phase plots of U = GIC, S and P for Pt control of unstable process,
C(s) = ts—1)(O028+112’ K(s) = 1.25(1 + 1-h)
0
-I Exercise 2.2 Prove that the maximum additional delay for which closed-loop stability is ,naintained
.2 -90 is given by (2.45).
I~ process are shown in Figure 2.15. The gain margin (GM), lower gain margin (GML), phase mat-gin
(PM) and peak values of S (Als) and P (Mw) are Maximum peak criteria
Clvi = 18.7, CMt = 0.21, PM = 59.5°, M5 = 1.19, A’IT = 1.38 The maximum peaks of the sensitivity and complementary sensitivity functions are defined
as
In this case, the phase of LOw) crosses —180° twice. First, ZL crosses —180° at a low frequency (w M5 = maxlS(jw)I; Mw = max~T(jw)l (2.46)
w U,
about 0.9) where ILl is about 4.8, and we have that the lower gain margin is GIiJt = 1/4.8 = 0.21.
Second, LL crosses —180° at a high frequency (w about 40) where U is about 0.054, and we have (Note that !i’Ig = II~II~ and MT = II2’lI~ in terms of the ?~tc.D norm introduced later.) Since
that the (upper) gain margin is GM= 1/0.054 = 18.7. Thus, instability is induced by decreasing the S + P = 1, using (A.51), it follows that at any frequency
Il loop gain by afactor 4.8 or increasing it by a factor 1& 7.
lSI-ITII~I5+TI=’
36 CLASSICAL FEEDBACK CONTROL
37
MULTIVARIABLE FEEDBACK CONTROL
Proof of (2.4 7~ and (2.48): To derive the GM inequalities notice that L(jwi8o) = —1/GM (since For underdamped systems with ~ < 1 the poles are complex and yield oscillatory step
CM = l/IL(jwiso)l and L is real and negative at 0180), from which we get
responses. With r(t) = 1 (a unit step change) the values of the overshoot and total variation
—1 .
for y(t) are given, together with MT and M5, as a function of ~ in Table 2.1. From Table 2.1,
TOwiao) = CM ~ SOwiso) = — (2.49) we see that the total variation TV correlates quite well with MT. This is further confirmed by
(A.137) and (2.39) which together yield the following general bounds:
and the GM results follow. To derive the PM inequalities in (2.47) and (2.48) consider Figure 2.16
where we have lS(iw~)l = 1/Il + L(jw~)l = 1/I—I — L(jw~)~ and we obtain MT <TV < (2n + 1)MT
(2.52)
lSUw~)l = lT(iw~)l = 1 (2.50) Here ii is the order of T(s), which is 2 for our prototype system in (2.51). Given that
2 sin(PM/2) the response of many systems can be crudely approximated by fairly low-order systems,
and the inequalities follow. Alternative formulae, which are sometimes used, follow from the identity the bound in (2.52) suggests that MT may provide a reasonable approximation to the total
2sin(PM/2) = ~,./2(1 — cos(PM)). variation. This provides some justification for the use of MT in classical control to evaluate
the quality of the response.
I
38 MULTIVARIABLE FEEDBACK CONTROL 39
CLASSICAL FEEDBACK CONTROL
Table 2.1: Step response characteristics and frequency peaks of prototype second-order system (2.51), response. For tracking performance, the output is y = Tv and since without control P = 0, we may say
see also Table 2.2 that control is effective as long as T is reasonably large, which we may define to be larger than 0.707.
This leads to an alternative definition which has been traditionally used to define the bandwidth of a
Time domain, y(t) Frequency domain control system: The bandwidth in tenns of T, WET, is the highest frequency at which IT(jw)I crosses
( Overshoot Total variation MT Ms 1/v’~ = 0.707 (~ —3 dB) from above. However, we would argue that this alternative definition,
2.0 1 1 1 1.05 although being closer to how the term is used in some other fields, is less useful for feedback control.
1.5 1 I 1 1.08
1.0 I 1 1 1.15 The gain crossover frequency, We, defined as the frequency where IL(jwjI first crosses 1
0.8 1.02 1.03 1 1.22 from above, is also sometimes used to define closed-loop bandwidth. It has the advantage of
0.6 1.09 1.21 1.04 1.35 being simple to compute and usually gives a value between WB and WBT. Specifically, for
0.4 1.25 1.68 1.36 1.66 systems with PM < 900 (most practical systems) we have
0.2 1.53 3.22 2.55 2.73 (2.53)
WB < We < WBT
0.1 1.73 6.39 5.03 5.12
0.01 1.97 63.7 50.0 50.0 Proof of (2.53): Note that IL(jw~)I 1 5° ISOtoc)I IT(jwc)I. Thus, when PM = 90° we get
ISOwc)I = T(jw~)~ = 1707 (see (2.50)), and we have w~ = = WET. For PM < 90° we
get S(jco~)l = IT(iw~)I > 0.707, and since WE is the frequency where IS(jw)I crosses 0.707 from
below we must have WE < w~. Similarly, since WET is the frequency where IT(iw)I crosses 0.707
Table 2.2: Matlab program to generate Table 2.1
% Uses the Control toolbox from above, we must have WET > We. C
tau=l; zeta=01; toO :0.01:100;
P = tft1,ftau~tau 2*tau*zeta 1)); S = From this we have that the situation is generally as follows: Up to the frequency WE, SI
[A,B,C,0]=ssdata(Pl; V steptA,B,C,D,l,t);
overshoot=nax(y) tvsum(abs (diff (yN) is less than 0.7 and control is effective in terms of improving performance. In the frequency
Mt=florm(T,inf,le_4),Nsnorm{s,jnf,le4) range [WB, WBT] control still affects the response, but does not improve performance in —
most cases we find that in this frequency range 181 is larger than 1 and control degrades
performance. Finally, at frequencies higher than WBT we have S 1 and control has no
2.4.5 Bandwidth and crossover frequency
significant effect on the response. The situation just described is illustrated in Example 2.7
The concept of bandwidth is very important in understanding the benefits and trade-offs below (see Figure 2.18).
involved when applying feedback control. Above we considered peaks of closed-loop transfer
Example 2.4 (pages 28 and 34) continued. The plant C(s) = has a RI-IP-zero
functions, M5 and 11’1T’ which are related to the quality of the response. However, for
and the Ziegler—Nichols P1 tunings (IC = 1.14, Ti = 12.7) are quite aggressive with GM 1.63 and
performance we must also consider the speed of the response, and this leads to considering
PM = 19.4°. The bandwidth and crossover frequencies are WE = 0.14, w~ = 0.24 and WET 0.44,
the bandwidth frequency of the system. In general, a large bandwidth corresponds to a smaller
which is in agreement with (2.53).
rise time, since high-frequency signals are more easily passed on to the outputs. A high
bandwidth also indicates a system which is sensitive to noise. Conversely, if the bandwidth is Example 2.6 Consider the simple case of a first-order closed-loop system,
small, the time response will generally be slow, and the system will usually be more robust.
It
Loosely speaking, bandwidth may be defined as the frequency range [w1 , Wa] over which L(s)=~, S(s)=—fK;
control is effective. In most cases we require tight control at steady-state so w1 = 0, and we
then simply call w2 = WB the bandwidth. In this ideal case, all bandwidth and crossover frequencies are identical: We WE WET k.
The word “effective” may be interpreted in different ways, and this may give rise to Furthermore, the phase of L remains constant at —90°, so PM = 90°, Wtso = cc (or really undefined)
different definitions of bandwidth. The interpretation we use is that control is effective if and GM = cc.
we obtain some benefit in terms of performance. For tracking performance the error is
e = y r = —Sr and we get that feedback is effective (in terms of improving performance)
— Example 2.7 Comparison of WE and WET as indicators of performance. An example where WET
as long as the relative error el/In = 181 is reasonably small, which we may define to be is a poor indicator of peiforinance is the following (we are not suggesting this as a good controller
design!):
8~ ≤ 0.707.~ We then get the following definition: —s + z —s+z 1 (2.54)
Definition 2.1 The (closed-loop) bandwidth, Wa is the frequency where SOw) first crosses s(Ts + rz + 2)’ s+z TS+l
= 0.707 (~ —3 dB)from below For this system, both L andT have a RHP-zero at z = 11, and we have GM 2.1, PM 60.1°,
Ms = 1.93 and MT = 1. We find that too = 0.036 and We = 0.054 are both less thou z = 0.1
Remark. Another interpretation is to say that control is effective if it significantly changes the output (as one should a~-pect because speed of response is li,nited by the presence of RHP-zeros), whereas
WET = 1/i- = 1.0 is ten times larger than z. The closed-loop response to a unit step change in the
The reason for choosing the value 0.707 when defining the bandwidth w8 is that, for the simple case of a first- reference is shown in Figure 2.17. The rise time is 31.0 s, which is close to 1/WE 28.0 s, but vety
order closed-loop response with S = s/(s + a), the low-frequency asymptote s/a of S crosses magnitude I at
frequency to = a, and at this frequency ISUw)I = l/~@ = 0.707.
~1
1. Shaping of transfer functions. In this approach the designer specifies the magnitude of
0.5 some transfer function(s) as a function of frequency, and then finds a controller which
gives the desired shape(s).
‘— 0 (a) Loop shaping. This is the classical approach in which the magnitude of the open-
loop transfer function, LOW), is shaped. Usually no optimization is involved and the
—Oi designer aims to obtain ILIJw)I with desired bandwidth, slopes, etc. We will look at
this approach in detail later in Section 2.6. However, classical loop shaping is difficult
4~ 50 to apply for complicated systems, and one may then instead use the Glover—McFarlane
Time [sec] 9-t~ loop-shaping design presented in Chapter 9. The method consists of a second step
where optimization is used to make an initial loop-shaping design more robust.
Figure 2.17: Step response for system T = ~ (b) Shaping of closed-loop transfer functions, such as 5, T and KS. One analytical
approach is the internal model control (IMC) design procedure, where one aims
to specify directly T(s). This works well for simple systems and is discussed in
Section 2.7. However, optimization is more generally used, resulting in various 7-l~
10° optimal control problems such as mixed weighted sensitivity; see Section 2.8 and later
I)
chapters.
z
C
00 2. The signal-based approach. This involves time domain problem formulations resulting
C,
in in the minimization of a norm of a transfer function. Here one considers a particular
disturbance or reference change and then one tries to optimize the closed-loop response.
The “modern” state-space methods from the 1960’s, such as linear quadratic Gaussian
10~ (LQG) control, are based on this signal-oriented approach. In LQG the input signals are
assumed to be stochastic (or alternatively impulses in a deterministic setting) and the
Frequency [rad/s]
expected value of the output variance (or the 2-norm) is minimized. These methods may
Figure 2.18: Plots of ISI and TI for system T = —3+0.1.11...
be generalized to include frequency-dependent weights on the signals leading to what is
3+0.1 s+1 called the Wiener—Hopf (or ?i2 norm) design method.
By considering sinusoidal signals, frequency by frequency, a signal-based 9-l~ optimal
defferent from 1/WET = 1.0 s, illustrating that WE is a better indicator of closed-loop peiformance control methodology can be derived in which the R~ norm of a combination of closed-
than WBT. loop transfer functions is minimized. This approach has attracted significant interest, and
The magnitude Bode plots of S and T are shown in Figure 2.18. We see that ITI 1 up to about may be combined with model uncertainty representations to yield quite complex robust
WET. Howeve, in the fi-equency range from WE to WET the phase of T (not shown) drops from about performance problems requiring js-synthesis, an important topic which will be addressed
—40° to about —220°, ~° b; p’-actice tracking is out of phase and control is poor in tins frequency in later chapters.
range. In approaches 1 and 2, the overall design process is iterative between controller design
and performance (or cost) evaluation. If performance is not satisfactory then one must
In conclusion, WE (which is defined in terms of 181) and w~ (in terms of LI) are good
either adjust the controller parameters directly (e.g. by reducing “c from the value
indicators of closed-loop performance, while WET (in terms of TI) may be misleading in
are well suited for certain nonlinear problems where an explicit feedback controller does not 6. Physical system must be strictly proper: L has to approach 0 at high frequencies.
exist or is difficult to obtain. For example, in the process industry, model predictive control 7. Stability (stable plant): L small.
is used to handle problems with constraints on the inputs and outputs. Online optimization Fortunately, the conflicting design objectives mentioned above are generally in different
approaches are expected to become more popular in the future as faster computers and more frequency ranges, and we can meet most of the objectives by using a large loop gain (ILl > 1)
efficient and reliable computational algorithms are developed. at low frequencies below crossover, and a small gain (ILl < 1) at high frequencies above
crossover.
10-I
Example 2.9 Disturbance process. We now introduce our second 5150 example control problem in
which disturbance rejection is an important objective in addition to command tracking. We assume that
the plant has been appropriately scaled as outlined in Section 1.4.
Problem formulation. Consider the disturbance process described by
Figure 2.21: Magnitude Bode plot of controller (2.59) for loop-shaping design
(2.62)
we want the loop shape to have a slope of —1 around crossover (we), with preferably a
steeper slope before and after crossover, then the phase lag of L at w~ will necessarily be with time in seconds (a block diagratn is shown in Figure 2.23 below). The control objectives are:
at least ~90°, even when there are no RHP-zeros or delays. Therefore, if we assume that for 1. Command tracking: Tile rise time (to reach 90% ofthefinal value) should be less than 0.3s and the
performance and robustness we want a PM of about 35° or more, then the additional phase overshoot should be less than 5%.
contribution from any delays and RHP-zeros at frequency Wc cannot exceed about —55°. 2. Disturbance rejection: The output in response to a unit step disturbance should remain within the
First consider a time delay 8. It yields an additional phase contribution of —9w, which range [—1, 1] at all times, and it should return to 0 as quickly as possible (Iy(t)j should at least be
at frequency w = 1/9 is —1 rad = —57° (which is slightly more than —55°). Thus, for less than 0.1 after 35).
acceptable control performance we need We < 1/0, approximately 3. Input constraints: u(t) should remain within the range Hi, 1] at all times to avoid input saturation
Next consider a real REP-zero at s = z. To avoid an increase in slope caused by this (this is easily satisfied for most designs).
zero we place a pole at s = —z such that the loop transfer function contains the term Analysis. Since Gd(0) = 100 we have that without control the output response to a unit disturbance
the form of which is referred to as all-pass since its magnitude equals 1 at all frequencies. (d = 1) will be 100 times larger than what is deemed to be acceptable. The magnitude IGd(iw)I is
The phase contribution from the all-pass term at w = z/2 is —2 arctan(0.5) = —53° lower at higher frequencies. but it remains larger than 1 up to Wd 10 ;ad/s (where jGd(jwd)I = i)
(which is very close to —55°), so for acceptable control performance we need w~ < 42, Thus, feedback control is needed up to frequency wd, so we need w~ to be approximately equal to
approximately 10 radls for disturbance rejection. On the other hand, we do not want w0 to be larger than necessaly
because of sensitivity to noise and stability problems associated with high-gain feedback. We will thus
aim at a design with w~ 10 radls.
2.6.3 Inverse-based controller design Inverse-based controller design. We will consider the inverse-based design as give?? by (2.60) and
(2.61) with w~ = 10. Since C(s) has a pole excess of three this yields an unrealizable controller; and
In Example 2.6.2, we made sure that L(s) contained the RHP-zero of C(s), but otherwise the therefore we choose to approximate the plant term (0.05s + 1)~ by (0.is + 1) and then in the controller
specified L(s) was independent of G(s). This suggests the following possible approach for we let this tern? be effective over one decade, i.e. we use (0.is + i)/(0.Ols + 1) to give the realizable
a minimum-phase plant (i.e. one with no REP-zeros or time delays). We select a loop shape design
which has a slope of—i throughout the frequency range, namely w0lOs+i 0.ls+l ho(s) ~ ojs+i (2.63)
Ko(s) = ~ 2000.015+1’ = s _-____-_-__-———-—————--——-—————--—
— (0.055+i)2(0.Ois±fl’ We 10
L(s) = (2.60) The response to a step reference is excellent as shown in Figure 2.22(a). The rise time is about 0.16s
where w~ is the desired gain crossover frequency. This loop shape yields a PM of 90° and an and there is no overshoot so the specifications are more than satisfied. However; the response to a step
disturbance (Figure 2.22(b)) is much too sluggish. Although the output stays within the range [—1,1].
infinite GM since the phase of L(jw) never reaches —180°. The controller corresponding to
it is still 0.75 at I = 3 s (whereas it should be less than 0.1). Because of the integral action the output
(2.60) is
does eventually return to zero, bitt it does not drop below 0.1 until after 23 s.
IC(s) = ~G’(s) (2.61)
The above example illustrates that the simple inverse-based design method, where L has a
That is, the controller inverts the plant and adds an integrator (i/s). This is an old idea, and slope of about N = —i at all frequencies, does not always yield satisfactory designs. In the
is also the essential part of the internal model control (IMC) design procedure (Moran and example, reference tracking was excellent, but disturbance rejection was poor. The objective
Zafiriou, 1989) (page 55), which has proved successful in many applications. However, there of the next section is to understand why the disturbance response was so poor, and to propose
are at least three good reasons why this inverse-based controller may not be a good choice: a more desirable loop shape for disturbance rejection.
rzr:rr:r.rrzr!j
-a
49
48 MULTIVARJABLE FEEDBACK CONTROL CLASSICAL FEEDBACK CONTROL
o Notice that a reference change may be viewed as a disturbance directly affecting the output.
1.5
This follows from (1.18), from which we get that a maximum reference changer = 1? may
be viewed as a disturbance d = 1 with Gd(s) = —R where R is usually a constant.
This explains why selecting K to be like G’ (an inverse-based controller) yields good
• For disturbance rejection a good choice for the controller is one which contains the
Example 2.10 Loop-shaping design for the disturbance process. Consider again the p/ant
described by (2.62). The plant can be represented by the block diagram in Figure 2.23, and we see
dynamics (Gd) of the disturbance and inverts the dynamics (0) of the inputs (at least at
that the disturbance enters at the plant input in the sense that G and Cd share the some dominating
frequencies just before crossover). dynamics as represented by the term 200/(lOs ± 1).
o For disturbances entering directly at the plant output, 0d = 1, we get IICminI = 10’I, Step 1. Initial design. Front (2.65) we know that a good initial loop shape looks like IL,~1nI
so aa inverse-based design provides the best trade-off between performance (disturbance
rejection) aad minimum use of feedback.
IGdI = at frequencies up to crossove’; The corresponding controller is K(s) = G1Lmin
o For disturbances entering directly at the plant input (which is a common situation in 0.5(0.05s + 1)2. This controller is not proper (i.e. it has more zeros than po/es), but since the tenn
(0.05s + 1)2 only conies into effect at 1/0.05 = 20 radfs, which is beyond the desired gain crossover
practice often referred to as a load disturbance), we have Gd = G and we get 1Km1, = 1,
—
frequency We = 10 tad/s. we may rep/ace it by a constant gain of 1 resulting in a proportional controller
so a simple proportional controller with unit gain yields a good trade-off between output (2.69)
performance and input usage. ICi(s) = 0.5
—
d Step 3. High-frequency correction. To increase the PM and improve the transient response we
supplement the controller with ‘derivative action” by multiplying 1(2 (s) by a lead—lag term which is
effective over one decade starting at 20 rad/s:
s+2 0.05s+l (2.71)
Ks(s) = 0.5~
s 0.005s + 1
This gives a PM of 51°, and peak values M5 = 1.43 and Mr = 1.23. From Figure 2.24(b), it is seen
that the controller K3 (s) reacts quicker than 1(2(s) and the disturbance response ys(t) stays below 1.
~a~two&~rees-of-freedom)
7.
y 0.5
0 0.5 1 1.5
Time [sec]
+
ii
Figure 2.26: Tracking responses with the one degree-of-freedom controller (1(3) and the two degrees-
of-freedom controller (I(~, Kra) for the disturbance process
Figure 2.25: Two degrees-of-freedom controller
where the tern, 1/(0.03s + 1) was included to avoid the initial peaking of the input signal u(t) above
Let T = L(1 + L)’ (with L = OK,,) denote the complementary sensitivity function 1. The tracking response with this two degrees-of-freedom controller is shown iii Figure 2.26. The rise
for the feedback system. Then for a one degree-of-freedom controller y = Ti’, whereas for a time is 0.25 s which is better than the requirement of 0.3 £ and the overshoot is only 2.3% which is
two degrees-of-freedom controller y = TKrr. If the desired transfer function for reference better than the requirement of 5%. The disturbance response is the same as curve ya in Figure 2.24. in
tracking (often denoted the reference model) is Trer, then the corresponding ideal reference conclusion, we are able to satisfy all specifications using a low-order two degrees-of-freedom controller
prefilter Kr satisfies TKr = Trot, or
Loop shaping applied to a flexible structure
Kr(s) = T~(s)Trer(s) (2.72)
The following example shows how the loop-shaping procedure for disturbance rejection can
Thus, in theory we may design Kr(s) to get any desired tracking response Trot(s). However, be used to design a one degree-of-freedom controller for a very different kind of plant.
in practice it is not so simple because the resulting Kr(s) may be unstable (if C(s) has RHP
zeros) or unrealizable, and also TKr 0 Tr~ if G(s) and thus T(s) is not known exactly.
to-
Remark. A convenient practical choice of prefilter is the lead—lag network
C-,
:r~ /‘—yoL
~0
Kr(s) = floodS + 1 (2.73) a
flag5 +1 ~
Co
10° 0 ~ ~ ~Y~L_,/\
C,
Here we select fload > flag if we want to speed up the response, and 1~lead < flag if we want to slow
down the response. If one does nut require fast reference tracking, which is the case in many process —l
control applications, a simple lag is often used (with flood = 0).
to —2 -
20
io~~2 100 10~ 0 5 10 15
Example 2.11 Two degrees-of-freedom design for the disturbance process. in Example 2.10 we Frequency [radls] Time [sec]
designed a loop-shaping controller Ka(s) for the plant in (2.62) which gave good performance with (b) Open-loop and closed-loop distur
respect to disturbances. However~ the command tracking pe~formance was not quite acceptable as is
(a) Magnitude plot of IGI Cdl bance responses with K = 1
shown by ys in Figure 226. The rise time is 0.16 s which is better than the required value of 0.3 s, but
the overshoot is 24% which is sign ~ficantly higher than the maximum value of 5%. To improve upon Figure 227: Flexible structure in (2.75)
tIns we can use a two degrees-of-freedom controller with K,, = K3. and we design Kr(s) based on
(2.72) with reference model Trot = 1/(OJs + 1) (a first-order response with no overshoot). To get a
low-order Kr(s), we either may use the actual T(s) and then use a low-o,-der approximation Of Kr(s), Example 2.12 Loop shaping for a flexible structure. Consider the following model of a flexible
or we may start with a low-order approximation of T(s). We will do the latter. From the step response structure with, a disturbance occurring at the plant input:
ya in Figure 2.26 we approximate the response by Iwo parts: a fast response with tune constant 0.1 s 2 5~(~2 + i)
and gain 1.5, and a slower response with time consta,zt 05 s and gain —0.5 (the sum of the gains is 1). G(s) = Gd(s) (s2 + 0.52)(s2 + 22) (2.75)
Thus we use T(s) 5-~2~ — = (O.l$+lflO.SS÷l).fm~n which (2.72) yields Kr(s) =
Following closed-loop simulations we modified this slightly to arrive at the design
Fivyn the Bode magnitude plot in Figure 227(a) we see that I Gd (jw) >~ 1 around the resonance
1 frequencies of 0.5 and 2 radls, so control is needed at these frequencies. The dashed line in
Kra(5) — 0.5s + 1 (2.74) Figu;-e 2.2 7(b) shows the open-loop response to a unit step disturba,,ce. The output is seen to cycle
— 0.65s + 1 0.03s ± 1
54 MULTIVARIABLE FEEDBACK CONTROL CLASSICAL FEEDBACK CONTROL 55
between —2 and 2 (outside the allowed range —1 to 1). which confirms that control is needed. The IMC design method (e.g. Moran and Zafiriou, 1989) is simple and has proven to be
From (2.66) a controller which meets the specification JyQu) < 1 for Id(w)I = 1 is given by successful in applications. The idea is to specify the desired closed-loop response and solve
IKmin(jw)I = JG’GdI = 1. Indeed the controller for the resulting controller. This simple idea, also known as “direct synthesis”, results in an
K(s) = 1 (2.76) “inverse-based” controller design. The key step is to specify a good closed-loop response. To
do so, one needs to understand what closed-loop responses are achievable and desirable.
turns out to be a good choice as is verified by the closed-loop disturbance response (solid line) in The first step in the IMC procedure for a stable plant is to factorize the plant model into
Figure 2.27(b); the output goes up to about 0.5 and then returns to zero. The fact that the choice an invertible minimum-phase part (Gm) and a non-invertible all-pass part (Ga). A time delay
L(s) = C(s) gives closed-loop stability is not immediately obvious since IGI has four gain crossover o and non-minimum-phase (RHP) zeros z~ cannot be inverted, because the inverse would be
frequencies. Howei’er~ instability cannot occur because the plant is “passive” with LG > —180° at all
non-causal and unstable, respectively. We therefore have
frequencies.
G(s) GmGa (2.77)
—s + z~
2.6.6 Conclusions on loop shaping Ga(s) = e_OS Re(z~) > 0; 0 > 0 (2.78)
- s+zi
The loop-shaping procedure outlined and illustrated by the examples above is well suited for
relatively simple problems, as might arise for stable plants where L(s) crosses the negative The second step is to specify the desired closed-loop transfer function T from references to
real axis only once. Although the procedure may be extended to more complicated systems outputs, y = Tv. There is no way we can prevent T from including the non-minimum-phase
the effort required by the engineer is considerably greaten In particular, it may be very elements Of Ga, so we specify
difficult to achieve stability. T(s) f(5)Ga(s) (2.79)
Fortunately, there exist alternative methods where the burden on the engineer is much less. where f(s) is a low-pass filter selected by the designer, typically of the form f(s) =
One such approach is the Glover—McFarlane 7-L~ loop-shaping procedure which is discussed lIQrcs + 1)~. The rest is algebra. We have from (2.19) that
in detail in Chapter 9. It is essentially a two-step procedure, where in the first step the T = GK(1 + GK)~ (2.80)
engineer, as outlined in this section, decides on a loop shape, LI (denoted the “shaped plant”
G8), and in the second step an optimization provides the necessary phase corrections to get a so combining (2.77), (2.79) and (2.80), and solving for the controller, yields
stable and robust design. The method is applied to the disturbance process in Example 9.3 on
page 368. K = G’ 1 —T = ~ f~’ Ga (2.81)
An alternative to shaping the open-loop transfer function L(s) is to shape closed-loop
transfer functions. This is discussed next in Sections 2.7 and 2.8. We note that the controller inverts the minimum-phase part Gm of the plant.
I~ Example 2.13 We apply the IMC design method to a stable second-order plus time delay process
I~ C(s) = k rgs2 +1 (2.82)
2.7 IMC design procedure and PID control for stable
plants where ( is the damping factor 1(1 < 1 gives an unde;datnped process with oscillations. We consider
a stable process where ro and ( are non-negative. Factorization yields Ga(s) = e0~, Gm(s)
Specifications directly on the open-loop transfer function L = GK, as in the loop-shaping r~s2 +2roCs+1~ We select a flrst-o,der filter f(s) = 1/fr~s + 1). From (2.79) this specifies that we
design procedures of the previous section, make the design process transparent as it is clear desire, following the unavoidable tune delay a simple first-order tracking response with time constant
how changes in L(s) affect the controller IC(s) and vice versa. An apparent problem with
this approach, however, is that it does not consider directly the closed-loop transferfunctions, T(s) 1 e0~ (2.83)
r~s + 1
such as S and T, which determine the final response. The following approximations apply: From (2.8!), the resulting cont,-oller beco,nes
PIP control. The PID controller, with three adjustahle parameters, is the most widely used We ziote that the PID settings are simpler if we use the cascade form.
control algorithm in industry. There are many variations, but the most common is probably SIMC (SlcogestadJSimple IMC) ~m design for first- or second.order plus delay
the “ideal” (or parallel) form process. Skogestad (2003) has derived simple rules for model reduction and PID tuning
based on the above ideas. He claims these are “probably the best simple PID tuning rules in
1 the world” © In process control, it is common to approximate the process with a first-order
KPID(s) = ICc + + TDS (2.86) .
For ( < 1 we have complex zeros in the controller and it cannot be realized in the cascade PID form
(2.87~. However for overdamped plants with ( > 1, we can write the model (2.82) in the for,n
The corresponding settings for the ideal-form PID controller are obtained using (2.88), but
—Os are more complicated.
C
C(s) = k( + 1)(T2s + 1) (2.91)
The settings in (2.95) and (2.97) follow directly from the model, except for the single
tuning parameter ‘rc that affects the controller gain (and the integral time for near-integrating
restdtingfronz (2.85) in the controller IC(s) = (ris+lflris+1) (rstO)s~ Comparing with (2.87), the processes). The choice of the tuning parameter i-c is based on a trade-off between output
cascade P?D settings become
performance (favoured by a small ‘rc) and robustness and input usage (favoured by a large rc).
— 1 rj For robust and “fast” control, Skogestad (2003) recommends ‘rc = 0 ,which for the model
Kc , T5 —Tj, TD—T2 (2.92)
hm+0 (2.96) gives a sensitivity peak Ms 1.7, gain margin GM and crossover frequency
Using (2.88), the corresponding ideal PID settings become Wc = 0.5/0.
Model reduction and effective delay. To derive a model in the form (2.94) or (2.96),
1 (Ti +r2) _________
TI=Tl+T2, ~ 1+T2/TJ (2.93) where 0 is the effective delay, Skogestad (2003) provides some simple analytic rules for
-‘
59
58 MULTIVARIABLE FEEDBACK CONTROL S CLASSICAL FEEDBACK CONTROL
model reduction. These are based on the approximations r0~ 1 Os (for approximating —
Using the half rule, the process is approximated as a first-order tune delay model with
a RHP-zero as a delay) and e°~ 1/(1 + Os) (for approximating a lag as a delay). The
i
k = 3, ri = 6 + 2.5/2 = 7.25, 0 1.2 + 0.8 + 2.5/2 + 2.5 + 0.4 6.15
lag approximation is conservative, because in terms of control a delay is worse than a lag of
equal magnitude. Thus, when approximating the largest lag, Skogestad (2003) recommends or as a second-order time delay model with
the use of the simple ha If ruleS
I
it = 3, Ti = 6, T2 = 2.5 + 2.5/2 = 3.75, 0 = 1.2 + 0.8 + 2.5/2 + 0.4 = 3.65
o Half rule. The largest neglected lag (denominator~ time constant is distributed equally to The P1 settings based on the first-order model are (choosing Tc = 0 = 6.15)
the effective delay and the smallest retained time constant.
it’. = ~ = 0.169, TI = mm (7.25, 8 6.15) = 725
To illustrate, let the original model be in the form
and the cascade PID settings based oil tile second-order model are (choosillg r0 = 0 = 3.65)
k0=o.274, ?I=6, iD=3.75
Go(s) — fl(T~s+ 1)~_o~~ (2.98)
— JJ~(r~os+1)
We note that a P1 controller results from a first-order model, and a PID controller from a
where the lags r10 are ordered according to their magnitude, and = l/z~0 > 0
second-order model. Since the effective delay 0 is the main limiting factor in terms of control
denote the inverse response (negative numerator) time constants corresponding to the RHP performance, its value gives invaluable insight into the inherent controllability of the process.
zeros located at s = zjo. Then, according to the half rule, to obtain a first-order model With the effective delay computed using the half rule, it follows that P1 control performance
C(s) = c°~/(r1s + 1) (for P1 control), we use is limited by (half of) the magnitude of the second-largest time constant -r2, whereas PID
control performance is limited by (half of) the magnitude of the third-largest time constant,
Tt = flo +
l’oo
0=Oo+
T20
+ Z Tj~ +~
h
+ 2 (2.99)
Tg.
Ti=rlo; T2=T2o+-~-;
Tao
9 =00+ + Z TjO + Z ‘ir + h
(2.100)
Using the half rule, the process is approxilliated as a first-order time delay model with Ic = 200, Ti
10.025 and 8 0.075. The recommended choice for “fast” control is T0 = 0 = 0.075. However;
I on page 47 it was starad that we aim for a gain crossover frequency of about w0 10 [rads].
Since we desire a first-order closed-loop response, this corresponds to Tc 1/U)c = 0.1. With
where h is the sampling period (for cases with digital implementation). The main objective
of the empirical half rule is to maintain the robustness of the proposed P1 and PID tuning
Tc = 0.1 the corresponding SIMC P1 settings are IC = ~ -~g~75)
= 0.286 and TI
mm (10.025,4 . (0.1 + 0.075)) = 0.7. This is an almost-imitegratilig process, and we note that we
rules, which with rc = 0 give Ms about 1.7. This is discussed by Skogestad (2003), who also reduce the integral tilnefromn 10.025 (which would be good for tracking step references) (00.7 in order
provides additional rules for approximating positive numerator time constants (LHP-zeros). to get acceptable pemformance for input disturbances.
To improve further the performance. we use the half rule to obtain a second-order model (Ic
Example 2.14 The process 200, Ti = 10, T2 = 0.075,8 = 0.025) and choose w0 = 0.1 to derive S1MC PID settings
2
Go(s) = (ICe = 0.4, ~I = 0.5, ?v = 0.075). Interestingly, the corresponding controller
(s + 1)(0.2s + 1)
is approximated using tile ha If rule as a first-order wit/i time delay process, G(s) = has /(Ts + 1), IC(s) = 0.4~-~~(0.0755 + 1)
wit/i h = 2,0 = 0.2/2 = 0.1 andr = 1 + 0.2/2 = 1.1. Choosing r~ = 8 = 0.1 the SIMC P1 settings
(2.95) become IC = = 2.75 andrj = min(1.1,42.0.1) = 0.8.
is almost identical to the final controller K3 in (2.71), designed previously using loop-s/loping ideas.
In tins case, we ~nay also consider using a second-order model (2.96) wit!, k = 2, ri = 1, T2 =
0.2, 0 = 0 (no approximation). Since 0 = 0, we cannot c/loose r~ = 0 as it would yield an infinite
controller gain. However; the controller gain is limited by other factors, such as the allowed input 2.8 Shaping closed-loop transfer functions
magnitude, measurement noise and unmodeiled dynamics. Because of such factors, let us assume that
the largest allowed cont,-oller gain is IC = 10. From (2.97) this corresponds to r0 = 0.05, and we get In Section 2.6, we discussed the shaping of the magnitude of the open-loop transfer function
= mm (1,4. 0.05) = 0.2 and -rD = T2 = 0.2. Using (2.88), the corresponding ideal PID settings L(s). In this section, we introduce the reader to the shaping of the magnitudes of closed-
are K0 = 20, rj = 0.4 and TO = 0.1. loop transfer functions, where we synthesize a controller by minimizing an 7-L~ performance
objective. The topic is discussed further in Section 3.5.7 and in more detail in Chapter 9.
Example 2.15 Consider the process
Such a design strategy automates the actual controller design, leaving the engineer with the
(—0.8s + 1) —I.2s task of selecting reasonable bounds (“weights”) on the desired closed-loop transfer functions.
C(s) = 3
(6s + 1)(2.5s + 1)2(0.4s + 1) Before explaining how this may be done in practice, we discuss the terms ~ and 7-L2-
I
60 CLASSICAL FEEDBACK CONTROL 61
MULTIVARIABLE FEEDBACK CONTROL
2.8.1 The terms ~ and ?12 4. Shape of S over selected frequency ranges.
The 7L~ norm of a stable scalar transfer function f(s) is simply the peak value of jf(jw)J as 5. MaximumpeakmagnitudeOfS, JS(jw)II~0 ~ M.
a function of frequency, i.e. The peak specification prevents amplification of noise at high frequencies, and also introduces
IIf(s)II~ ~ max f (.iw)I (2.101) a margin of robustness; typically we select M = 2. Mathematically, these specifications may
Remark. Strictly speaking, we should here replace “max” (the maximum value) by “sup” (the
,
be captured by an upper bound, 1/~wp(s) on the magnitude of 8, where vip(s) is a weight
selected by the designer. The subscript P stands for peiformance since S is mainly used as a
supremum, the least upper bound). This is because the maximum may only be approached as w —* cc
performance indicator, and the performance requirement becomes
and may therefore not actually be achieved. However, for engineering purposes there is no difference
between “sup” and “max’
The terms R~ norm and 7t~ control are intimidating at first, and a name conveying the 10’
engineering significance of 7-L~ would have been better. After all, we are simply talking
about a design method which aims to press down the peak(s) of one or more selected transfer 0) 100
-c
functions. However, the term 7-I~, which is purely mathematical, has now established itself
in the control community. To make the term less forbidding, an explanation of its background 00
C,
may help. First, the symbol cc comes from the fact that the maximum magnitude over IC—’
frequency may be written as
cc i/p 10
IC- I0~’ 100 10’
max~f(jw)~ lim Frequency
‘U
=
p-+00
(f~cc If(iw)IPdw) [rad/s]
1/2
II! (s)l12 4 (1
2K
If(iw)I2dw (2.102)
The factor 1/v’s is introduced to get consistency with the 2-norm of the corresponding Frequency [rad/s]
impulse response; see (4.120). Note that the 7-12 norm of a semi-proper (or bi-proper) transfer
(b) Weighted sensitivity wpS
function (where 1im8÷~ f(s) is a non-zero constant) is infinite, whereas its 7I~ norm is
finite. An example of a semi-proper transfer function (with an infinite 712 norm) is the
Figure 2.28: Case where 1~1 exceeds its bound 1/~wpI, resulting in ~~wpSIIoo > 1
sensitivity function S = (I + GIC)’
IS(iw)I (2.103)
2.8.2 Weighted sensitivity < 1/~wp(jw)~, Vw
As already discussed, the sensitivity function S is a very good indicator of closed-loop wpS~ < ~ Vw ~ IIw~SIIoo <jj (2.104)
performance, both for SISO and MIMO systems. The main advantage of considering S is
that because we ideally want S small, it is sufficient to consider just its magnitude I~I; that The last equivalence follows from the definition of the 7-L~ norm, and in words the
is, we need not worry about its phase. Typical specifications in terms of S include: performance requirement is that the 7-1~ norm of the weighted sensitivity, vip8, must be less
than 1. In Figure 2.28(a), an example is shown where the sensitivity, I~I~ exceeds its upper
I. Minimum bandwidth frequency w~ (defined as the frequency where IS&w) I crosses 0.707 bound, 1/Irvpl, at some frequencies. The resulting weighted sensitivity, sopS~, therefore
from below). exceeds 1 at the same frequencies as is illustrated in Figure 2.28(b). Note that we usually
2. Maximum tracking error at selected frequencies. do not use a log-scale for the magnitude when plotting weighted transfer functions, such as
3. System type, or alternatively the maximum steady-state tracking error, A. IwpSI.
-Vt
—
one can make demands on another closed-loop transfer function, e.g. on the complementary
sensitivity T = I S = 01(8. For instance, one might specify an upper bound 1/IWTI on
—
loG the magnitude ofT to make sure that L rolls off sufficiently fast at high frequencies. Also,
0) to achieve robustness or to restrict the magnitude of the input signals, to = KS(r —
I
‘O
one may place an upper bound, ~ on the magnitude of KS. To combine these “mixed
Co sensitivity” specifications, a “stacking approach” is usually used, resulting in the following
I O~’
overall specification:
tap S
10 -
Figure 2.29: Inverse of performance weight: exact and asymptotic plot of l/~wp (jw)J in (2.105) Here we use the maximum singular value, U(N(jw)), to measure the size of the matrix N
at each frequency. For SISO systems, N is a vector and U(N) is the usual Euclidean vector
norm: _____________________________
Weight selection. An asymptotic plot of a typical upper bound, 1/~wp~, is shown in
~(N) = vfl~~I2 + Iwr~’I2 + Iw~KSI2 (2.108)
Figure 2.29. The weight illustrated may be represented by
After selecting the form of N and the weights, the 9-t~ optimal controller is obtained by
s/M + w3 solving the problem
wp(s)= s+w~A (2.105)
minIIN(K)1100 (2.109)
and we see that 1/~wp(jw)~ (the upper bound on I~I) is equal to A (typically A 0) at where K is a stabilizing controller. A good tutorial introduction to ‘7~tCo control is given by
low frequencies, is equal to i’vI ≥ 1 at high frequencies, and the asymptote crosses I at the Kwakemaak (1993).
frequency w~, which is approximately the bandwidth requirement.
Remark. For this weight the loop shape L = wi/s yields an S which exactly matches the bound Remark I The stacking procedure is selected for mathematical convenience as it does not allow us
(2.104) at frequencies below the bandwidth and easily satisfies (by a factor lvi) the bound at higher to specify exactly the bounds on the individual transfer functions as described above. For example,
frequencies. assume that #i(K) and ~2(K) are two functions of K (which might represent ~b1(K) = tapS and
= tarT) and that we want to achieve
In some cases, in order to improve performance, we may require a steeper slope for L (and
8) below the bandwidth, and then a higher-order weight may be selected. A weight which ~ arid I~2I<1 (2.110)
goes as at frequencies below crossover is
This is similar to, but not quite the same as, the stacked requirement
‘wp(s) —
—
(s/M’/’~
(s+4Au/7j0
+4)~ I (2.106) & [‘] = ~/}~iI° + I~iI~ <1 (2.111)
Exercise 2.4 For ii = 2, make an asy~nptotic plot of 1/Jwp in (2.106) and conpare with the Objectives (2.110) and (2.111) are very similar when either ~ or I~2I is small, but in the “worst”
asymptotic plot of 1/jwp I i’~ (2.105). case when ~i I = I~I, we get from (2.111) that ~ < 0.707 and I~oI < 0.707. That is, there is a
possible “error” in each specification equal to at most a factor v”~1 3 dB. In general, with n stacked
The insights gained in the previous section on loop-shaping design are very useful for requirements the resulting error is at most ~ This inaccuracy in the specifications is something we are
selecting weights. For example, for disturbance rejection we must satisfy JSGd(jw)I < 1 at probably willing to sacrifice in the interests of mathematical convenience. In any case, the specifications
all frequencies (assuming the variables have been scaled to be less than I in magnitude). It are in general rather rough, and are effectively knobs for the engineer to select and adjust until a
then follows that a good initial choice for the performance weight is to let Iwp~w)~ look satisfactory design is reached.
like Gd(jw)l at frequencies where OdI > 1. In other cases, one may first obtain an initial
Remark 2 Let ~7inin = minjc IINU )IICo denote the optimal 71Co norm. An important property of
controller using another design procedure, such as LQG, and the resulting sensitivity lS(iw) I
71~ optimal controllers is that they yield a flat frequency response; that is, a(N(jw)) = ~ at all
may then be used to select a performance weight for a subsequent 9-L~~~ design.
frequencies. The practical implication is that, except for at most a factor \/1, the transfer functions
resulting from a solution to (2.109) will be close to 7mbi times the bounds selected by the designer.
2.8.3 Stacked requirements: mixed sensitivity This gives the designer a mechanism for directly shaping the magnitudes of a(S), &(T), &(KS), and
so on.
The specification iJwpSI~~ < 1 puts a lower bound on the bandwidth, but not an upper
one, and nor does it allow us to specify the roll-off of L(s) above the bandwidth. To do this
-I
Example 2.17 W~ mixed sensitivity design for the disturbance process. Consider again the
plant in (2.62), and consider an 71m mixed sensitivity S/KS design in which
-ax
N ~upS (2.112)
— Lw~ICS
Appropriate scaling of the plant has been performed so that the inputs should be about 1 or less in ax
magnitude, and we therefore select a simple input weight w,~ = 1. The performance weight is chosen,
in the form of(2. 105), as
The inverse of this weight is shown in Figure 2.30, and is seen front the dashed line to cross 1
in magnitude at about the same frequency as weight mm, but it specifies tighter control at lower
frequencies. With the weight mpg, we get a design with an optimal 7~(m norm of 2.19, yielding
Ms = L62, Mr = 1.42, GM = 4.77, PM = 438° andw~ = 1L28 ratUs. (The design is actually
very similar to the loop-shaping design for disturbances, IC3.) The disturbance response is very good,
whereas the tracking response has a somewhat high overshoot; see curve to in Figure 2.31(a).
In conclusion, design 1 is best for reference tracking whereas design 2 is best for disturbance
too in- rejection. To get a design with both good tracking and good disturbance rejection we need a two
Frequency [tad/si degrees-of-freedom controller; as was discussed in Example 2.11 (page 52).
Figure 2.30: Inverse of performance weight (dashed line) and resulting sensitivity function (solid line) Exercise 2.5 ‘ILm design for unstable plant. Obtain S/KS 7-l~ controllers for the unstable process
for two ?L~~ designs (I and 2) for the disturbance process
(2.37) using w,~ = 1 and the performance weights in (2.113) (design 1) and (2.114) (design 2). Plot the
frequency response of the controller for design I together with the P1 controller (2.38) to confirm that
the two controllers are almost identical. You willfind that the response with the design 2 (second—order
weight) is faster; but on the other hand robustness margins are not quite as good:
Table 2.4: Matlab program to synthesize 71~ controller for Example 2.17
% Uses the Robust control toolbox
Gtf(200,convl[lOlJ,conv([O,O51],(O,o51]))); %PlantisG.
7mm = I1N1100 W~ Ms Mr GM GMt. PM
M”l.S; wblO; A=l.e—4; Design 1: 124 4.96 1.17 1.35 1&48 0.20 61.7°
Wp = tf( [1/N wb) , [1 wb*Al vu = 1, % Weights. Design 2: 5.79 8.21 1.31 1.56 11.56 0.23 48.5°
% Find H—infinity optinsi controller:
[khini,ghinfgopt] = nixsyn(GWp,Wu,[]);
Marg = sllmargin(G*khinf) % Gain and phase margins
2.9 Conclusion
For tIns problemit, we achieved an optimal 71~ norm of 1.37, so the weighted sensitivity requirements
are not quite satisfied (see design I in Figure 2.30 where the curve for S~~ is slightly above that for
The main purpose of this chapter has been to present the classical ideas and techniques of
1/ftvpj{). Nevertheless, the design scents good with 115Wm = Ms = 1.30, IITIIm = Mr = 1,
feedback control. We have concentrated on SISO systems so that insights into the necessary
GM = 8, PM = 71.19° and w. = 7.22 radls, and the tracking response is very good as shown
by curve yi in Figure 2.31(a). (The design is actually very sinular to the loop-shaping design for design trade-offs, and the design approaches available, can be properly developed before
references, Ito, which was an inverse-based controller:) MIMO systems are considered. We also introduced the 9~tco problem based on weighted
However; we see from curve ys in Figure 231(b) that the disturbance response is very sluggish sensitivity, for which typical performance weights are given in (2.105) and (2.106).
If disturbance rejection is the main concern, then from our earlier discussion in Section 2.6.4 this
66 MULTIVARIABLE FEEDBACK CONTROL
3
INTRODUCTION TO
MULTIVARIABLE CONTROL
In this chapter, we introduce the reader to multi-input multi-output (MIMO) systems. It is almost
“a book within the book” because a lot of topics are discussed in more detail in later chapters.
Topics include transfer functions for MIMO systems, multivariable frequency response analysis
and the singular value decomposition (SVD), relative gain array (RGA), multivariable control, and
multivariable right-half plane (RHP) zeros. The need for a careful analysis of the effect of uncertainty
in MIMO systems is motivated by two examples. Finally, ‘ye describe a general control configuration
that can be used to formulate control problems. The chapter shnuld be accessible to readers who have
attended a classical 8180 control course.
3.1 Introduction
We consider a MIMO plant with in inputs and I outputs. Thus, the basic transfer function
model is y(s) = G(s)u(s), where y is an I x 1 vector, it is an in x 1 vector and G(s) is an
I x in transfer function matrix.
If we make a change in the first input, it1, then this will generally affect all the outputs,
in, 112, ..., y~; that is, there is interaction between the inputs and outputs. A non-interacting
plant would result if it1 only affects 111, ~2 only affects y2~ and so on.
The main difference between a scalar (8180) system and a MIMO system is the presence
of directions in the latter. Directions are relevant for vectors and matrices, but not for
scalars. However, despite the complicating factor of directions, most of the ideas and
techniques presented in the previous chapter on 8180 systems may be extended to MIMO
systems. The singular value decomposition (SVD) provides a useful way of quantifying
multivariable directionality, and we will see that most 8180 results involving the absolute
value (magnitude) may be generalized to multivariable systems by considering the maximum
singular value. An exception to this is Bode’s stability condition which has no generalization
in terms of singular values. This is related to the fact that it is difficult to find a good measure
of phase for MIMO transfer functions.
The chapter is organized as follows. We start by presenting some rules for determining
multivariable transfer functions from block diagrams. Although most of the formulae
for scalar systems apply, we must exercise some care since matrix multiplication is not
commutative: that is, in general GK $ KG. Then we introduce the singular value
decomposition and show how it may be used to study directions in multivariable systems.
We also give a brief introduction to multivariable control and decoupling. We then consider from afeedback loop then include a term (I — L)’ for positive feedback (or
a simple plant with a multivariable RHP-zero and show how the effect of this zero may be (I + L)1 for negative feedback) where L is the transfer function around that
shifted from one output channel to another. After this we discuss robustness, and study two loop (evaluated against the signalfiow starting at the point of exitfrom the loop).
example plants, each 2 x 2, which demonstrate that the simple gain and phase margins used Parallel branches should be treated independently and their contributions added
for SISO systems do not generalize easily to MIMO systems. Finally, we consider a general 2222
togethet:
control problem formulation.
At this point, the reader may find it useful to browse through Appendix A where Care should be taken when applying this rule to systems with nested loops. For such systems
some important mathematical tools are described. Exercises to test understanding of this it is probably safer to write down the signal equations and eliminate internal variables to get
mathematics are given at the end of this chapter. :1:2:: the transfer function of interest. The rule is best understood by considering an example.
1:2%
0 11+ V
~G1G2,~
2%:::: ± z
w
(a) Cascade system (b) Positive feedback system
Figure 3.1: Block diagrams for the cascade rule and the feedback rule Figure 3.2: Block diagram corresponding to (3.2)
The following three rules are useful when evaluating transfer functions for MIMO systems. 1:::
Example 3.1 The transfer function for the b/ock diagram in Figure 3.2 is given by
1, Cascade rule. For the cascade (selies) interconnection of Gi and G2 in Figure 3.1(a), z = (P11 + P12K(I — P221fl’P21)w (3.2)
the overall transferfunction matrix is 0 = G0G1.
To derive thisfrom the MIMO rule above we start at the output z and move backwards towards to. There
Remark. The order of the transfer function matrices in 0 = 0201 (first G2and then Ci) is the reverse are two branches, one of which gives the term Pu directlg hi the other b,-anch we move backwards and
of the order in which they appear in the block diagram of Figure 3.1(a) (first Cj and then Cl). This has 1 tneet P12 and then K. We then exit front a feedback loop and get a feint (I — L)’ (positive feedback)
led some authors to use block diagrams in which the inputs enter at the right hand side. However, in this with L = P22K, and finally we meet P21.
case the order of the transfer function blocks in a feedback path will be reversed compared with their
order in the formula, so no fundamental benefit is obtained. Exercise 3.2 Use the MIMO rule to derive the transfer functions from u to y and fivmn u to z in
Figure 3.1(b). Use the push—through rule to rewrite the two transfer functions.
2. Feedback rule. With reference to the positive feedback system in Figure 3.1(b), we have
Exercise 3.3 * use the MIMO title to show that (2.19) corresponds to the negative feedback system in
v = (I — L) 1u where L = 0201 is the transferfunctian around the loop.
Figure 2.4.
3. Push-through rule. For matrices of appropriate dimensions
The cascade and feedback rules can be combined into the following MIMO rule for evaluating
closed-loop transfer functions from block diagrams. Figure 3.3: Conventional negative feedback control system
MIMO rule: Startfrom the output and write down the blocks as you meet them For the negative feedback system in Figure 3.3, we define L, to be the loop transfer function
when moving backwards (against the signal flow) towards the input. If you exit
as seen when breaking the loop at the output of the plant. Thus, for the case where the loop
1
70 MULTIVARIABLE FEEDBACK CONTROL INTRODUCTION TO MULTIVARIABLE CONTROL 71
consists of a plant C and a feedback controller K we have Remark 1 The above identities are clearly useful when deriving transfer functions analytically, but
they are also useful for numerical calculations involving state-space realizations, e.g. L(s) = C(sI —
L=GK (3.3) A)’B + D, For example, assume we have been given a state-space realization for L = OK with
it states (so A is an it x it matrix) and we want to find the state-space realization of 7’. Then we can
The sensitivity and complementary sensitivity are then defined as first form S = (I + L)’ with it states, and then multiply it by L to obtain 7’ = SL with 2n states.
However, a minimal realization of 7’ has only it states. This may be obtained numerically using model
S4(I+L)’; T41—S=L(I+L)’ (3.4) reduction, but it is preferable to find it directly using P = I — 5, see (3.7).
In Figure 3.3, T is the transfer function from r to y, and S is the transfer function from d~ to Remark 2 Note also that the right identity in (3.10) can only be used to compute the state-space
y; also see (2.17) to (2.21) which apply to MIMO systems. realization of 7’ if that of L’ exists, so L must be semi-proper with V ~ 0 (which is rarely the
S and T are sometimes called the output sensitivity and output coznplementa,y sensitivity, case in practice). On the other hand, since 1, is square, we can always compute the frequency response
of LOw)~’ (except at frequencies where L(s) has jw-axis poles), and then obtain TOw) from (3.10).
respectively, and to make this explicit one may use the notation L0 E L, So S and
T. This is to distinguish them from the corresponding transfer functions evaluated at Remark 3 In Appendix A.7 we present some factorizations of the sensitivity function which will
the input to the plant. be useful in later applications. For example, (A.l47) relates the sensitivity of a perturbed plant,
We define L1 to be the loop transfer function as seen when breaking the loop at the input 5’ = (I + C’ K) —1, to that of the nominal plant, S = (I + OK) — ~. We have
to the plant with negative feedback assumed. In Figure 3.3 5’ = 5(1 + E0T)’, Eo ~ (0’— G)0Z’ (3.11)
= KG (3.5) where E0 is an output multiplicative perturbation representing the difference between C and C’, and
T is the nominal complementary sensitivity function.
The input sensitivity and input complementary sensitivity functions are then defined as
Si = Sand T1 = T. The transfer function G(s) is a function of the Laplace variables and can be used to represent
a dynamic system. However, if we fix s = 8o then we may view G(so) simply as an I x m
Exercise 3.4 In Figure 3.3, what transferfunction does S1 represent? Evaluate the transferfunctions complex matrix (with in inputs and I outputs), which can be analyzed using standard tools
from d1 and d2 to r — y.
in matrix algebra. In particular, the choice 5o = jw is of interest since COw) represents the
The following relationships are useful: response to a sinusoidal signal of frequency w.
(I+L)~’+L(I+L)’=S+T=I (3.7)
3.3.1 Obtaining the frequency response from G(s)
C(I+KC)—’ = (I+GK)’C (3.8)
GK(I + CK)’ = 0(1 + ICG)’I( = (I + GIC)’GK (3.9)
T = L(I + = (I + (L)’)’ (3.10)
Note that the matrices C and K in (3.7)—(3.10) need not be square whereas L = OK is Figure 3.4: System C(s) with input d and output y
square: (3.7) follows trivially by factorizing out the term (I + L)~’ from the right; (3.8)
says that OS~ = SO and follows from the push-through rule; (3.9) also follows from the The frequency domain is ideal for studying directions in multivariable systems at any given
push-through rule; (3.10) can be derived fi’om the identity Mf4M’ = (M2M~)’. frequency. Consider the system C(s) in Figure 3.4 with input cl(s) and output y(s):
Similar relationships, but with 0 and K interchanged, apply for the transfer functions
evaluated at the plant input. To assist in remembering (3.7)—(3.10) note that C comes first y(s) = C(s)d(s) (3.12)
(because the transfer function is evaluated at the output) and then 0 and K alternate in
sequence. A given transfer matrix never occurs twice in sequence. For example, the closed- (We denote the input here by ci rather than by it to avoid confusion with the matrix U used
loop transfer function G(I+G1C)’ does not exist (unless 0 is repeated in the block diagram, below in the singular value decomposition.) In Section 2.1 we considered the sinusoidal
but then these 0’s would actually represent two different physical entities). response of scalar systems. These results may be directly generalized to multivariable systems
by considering the elements liii of the matrix C. We have
gjj (jw) represents the sinusoidal response from input jto output i.
72 MULTIVARIABLE FEEDbACK CONTROL a INTRODUCTION TO MULTIVARIABLE CONTROL
To be more specific we apply to input channel 3 a scalar sinusoidal signal given by 332 Directions in multivariable systems
it3 (t) = d0 sin(wt + a) (3 13) a For a SISO system y = Gd the gain at a given frequency is simply
This input signal is persistent that is it has been applied since t = —oo Then the ~y(w)~ IGUw)d(w)I = GQjw)( (323)
corresponding persistent output signal in channel z is also a sinusoid with the same frequency Id(w) I Id(w) I
y~(t) = Yzo sm(wt + fi2) (3 14) The gain depends on the frequency w but since the system is linear it is independent of the
input maanitude Id(w)I.
where the amplification (gain) and phnse shift may be obtained from the complex number Things are not quite as simple for MIMO systems where the input and output signals are
g~~Q,jw) as follows: both vectors, and we need to “sum up” the magnitudes of the elements in each vector by
= Ig~, (a’~) — cr~ = Zg~ (jw) (3 15) use of some norm as discussed in Appendix A 5 1 If we select the vector 2-norm the usual
measure of length, then at a given frequency w the magnitude of the vector input signal is
In phasor notation, see (2.5) and (2.10), we may compactly represent the sinusoidal time _____________
y1(w) = gj1 (jw)di (w) + g12(jw)d2(w) +~ = Z gt~(jw)d~(w) (3.18) Again the gain depends on the frequencyw, and again it is independentof the inputmagnitude
Md(w)112. However, for a MIMO system there are additional degrees of freedom and the gain
or in matrix form depends also on the direction of the input d.i The maximum gain as the direction of the input
y(w) = G(jw)d(w) (3.19) is varied is the maximum singular value of G,
where Gd
di(w) yi(w) max 2 = max IIGdII2 â(G) (3.27)
d2(w) iJ2(W) d$0 ~dj~2 lldlI2=i
d(w) = and y(w) = . (3.20)
dm(w) yj(w)
whereas the minimum gain is the minimum singular value of G,
represent the vectors of sinusoidal input and output signals. I mill II~II2 = mm IIGdII9 a(G) (3.28)
Example 3.2 Consider a 2 x 2 ,nultivariable system where we simultaneously apply sinusoidal signals
#0 11d112 1d112=1 — —
of the samnefrequencv w to the Iwo input channels: The first identities in (3.27) and (3.28) follow because the gain is independent of the input
d(fl = [di~) 1 — [d10 shi(wt + ai) 1 . d~ [dioeic~ 1 magnitude for a linear system.
[di(t)] Ld2osinQ~t + aa)j Oi \W) — Ldsoeia2 ~ (3.21)
The corresponding output signal is I Example 3.3 For a system with two Oipiits, d [~o], the gain is in general different for the
following five inputs:
i(t)
~ = I yi(t) j
[zn(t) = yiosin(wt+Pi)
[zoo sin(wt + ft2) j °‘. (
~~‘1 — [yioe’~
[y2o&02 1 (3.22) [ii [01 — 10.7071 d — F 0.707 1 ~ = 0.6
y(w) is obtained b3 nniltipl5 ing the comnple~. man Lx COw) b) the complex vectot d(w), as given
- ..- . , ) - .
in
. di = ~oj’ ~2 = ~ij’ ~ ~o.7O7j’ ~ H0707i’ [—0.8
(3.19). The tenu direciloii refers to a norrnaiized vector of unit iength.
75
74 MULTIVARIABLE FEEDBACK CONTROL JNTRODUCTION TO MULTIVARIABLE CONTROL
0
:4 ~4 0
4
u(C)
~(C) (see Appendix A.5 for more details). As we may expect, the magnitude of the largest
eigenvalue, p(G) ~ Ptrnax(0)l (the spectral radius), does not satisfy the properties of a
matrix norm; also see (A.116).
In Appendix A.5.2 we introduce several matrix norms, such as the Frobenius norm IIGIIF,
the sum norm IIGIIsum, the maximum column sum IlGIIti, the maximum row sum lIOll~~~~
C. C.
a,
0 and the maximum singular value llGII~2 = a(0) (the latter three norms are induced by a
—l vector norm, e.g. see (3.27); this is the reason for the subscript i). Actually, the choice of
—2 matrix norm among these is not critical because the various norms of an I x in matrix differ
—3 at most by a factor v”~J, see (A.l 19)—(A.124). In this book, we will use all of the above
-4 norms, each depending on the situation. However, in this chapter we will mainly use the
—s a induced 2-norm, U(G). Notice that U(G) = 100 for the matrix in (3.30).
1110
Exercise 3.5 Compute the spectral radius and the five matrix norms mentioned above for the
*
Figure 3.6: Outputs (right plot) resulting from use of 11d1l2 = 1 (unit circle in left plot) for system C matrices in (3.29) and (3.30).
in (3.29). The maximum (a(C)) and minimum &(G)) gains are obtained for d = (~) and d = (v)
respectively.
3.3.4 Singular value decomposition
An alternative plot, which shows the directions of the outputs more clearly, is shown in Figure 3.6.
From the shape of the output space (right plot), we see that it is easy to increase both Uio and The singular value decomposition (SVD) is defined in Appendix A.3. Here we are interested
~2o simultaneously (gain a(G) = 7.34), but difficult to increase one and decrease the other (gain in its physical interpretation when applied to the frequency response of a MIMO system 0(s)
£(G) = 0.27). with in inputs and 1 outputs.
4-I
V where k = min{I, in}. Thus, for any vector d, not in the null space of G, we have that
where the angles 0~ and 02 depend on the given matrix. From (3.35) we see that the matrices I 11Gd117
U and V involve rotations and that their columns are orthonormal. c(G) ≤ IIdIla <a(G)
— (3.42)
The singular values are sometimes called the principal values or principal gains, and the I
associated directions are called principal directions. In general, the singular values must be Defining u1 = ii, v1 = i~, ~k = ~ and Va iz, then it follows that
computed numerically. For 2 x 2 matrices, however, analytic expressions for the singular I
values ait given in (A.37). Cv 0u~ Gv_ = au (3.43)
Caution. It is standard notation to use the symbol U to denote the matrix of output singular vectors. I The vector ti corresponds to the input direction with largest amplification, and ii is the
This is unfortunate as it is also standard notation to use is (lower case) to represent the input signal. The
reader should be careful not to confuse these two. corresponding output direction in which the inputs are most effective. The directions
involving v and ii are sometimes referred to as the “strongest”, “high-gain” or “most
Input and output directions. The column vectors of U, denoted is4, represent the output important” directions. The next most important directions are associated with v2 and is2,
directions of the plant. They are orthogonal and of unit length (orthonormal), i.e. and so on (see Appendix A.3.5) until the “least important”, “weak” or “low-gain” directions
which are associated with v and is.
II~iII2 = J1uu12 + 1ui212 + . .. + IunI2 = 1 (3.36)
Example 3.3 continued. Consider again the system (3.29) with
u~u4=1, u~u~=0, i~j (3.37) L [5 41
G=~ 2]
(3M)
Likewise, the column vectors of 1~, denoted v~, are orthogonal and of unit length, and
represent the input directions. These input and output directions are related through the
TheSVDOfGi is
singular values. To see this, note that since V is unitary we have VHV = 1, s~ (3.33) may be
written as CV = US, which for column i becomes 0.872 0.490 1 [7.343 0 10.794 —0.6081 H
= [0.490 —0.872] L 0 0.272~ 10.608 0.794
Gv4=a4u4 (338)
where v2 and u~ are vectors, whereas o~ is a scalar. That is, if we consider an input in 2 For a ‘fat” matrix C with more inputs thaa outputs (m > I), we can always choose a non-zero input din the null
the direction v4, then the output is in the direction n~. Furthermore, since 11v4 112 = 1 and space of C such that Gd = 0.
79
78 MULTIVARIABLE FEEDBACK CONTROL INTRODUCTION TO MULTIVARIABLE CONTROL
The largest gain of 7343 is for an input in the direction V = The smallest gain of 0.2 72 is for counteract each othe,: Thus, the distillation process is ill-conditioned, at least at steady-state, and the
condition nu,nber is 197.2/1.39 = 141.7. The physics of this example is discussed in more detail below,
an input in the direction v = [~0~~8]. This confirms the findings on page 73 (see Figure 3.6). and later in this chapter we will consider a simple controller design (see Motivating robustness example
110. 2 in Section 3.7.2).
Note that the directions in terms of the singular vectors are not unique, in the sense that
the elements in each pair of vectors (u1, v~) may be multiplied by a complex scalar c of Example 3.6 Physics of the distillation process. The model in (3.45) represents two-point (dual)
magnitude 1 ([ci = 1). This is easily seen from (3.38). For example, we may change the composition control ofa distillation column, where the top composition is to be controlled at YD 0.99
(output yi) and the bottom composition at XB = 0.01 (output 112), using reflux L (itiput iii) and boilup
sign of the vector V (multiply by c = —1) provided we also change the sign of the vector ii.
V (input u2) as manipulated inputs (see Figure 10,6 on page 408). Note that we have here returned to
Also, if you use Matlab to compute the SVD of the matrix in (3.44) (g= [5 4; 3 2 1
the convention of using it1 and it2 to denote the manipulated inputs; the output singular vectors will be
[u, s, vi =svd (g) ), then you will probably find that the signs of the elements in U and V denoted by ü and it.
are different from those given above. The 1, 1-element of the gain matrix C is 87.8. Thus an iticrease in Ui by 1 (with u2 constant) yields a
Since in (3.44) both inputs affect both outputs, we say that the system is interactive. large steady-state change in yi of 87.8; that is, the outputs are very setlsitive to changes in itj. Similarly,
This follows from the relatively large off-diagonal elements in C in (3.44). Furthermore, all itlcrease in ~2 by 1 (with ui constant) yields yi = —86.4. Again, this is a very large change, but
the system is ill-conditioned: that is, some combinations of the inputs have a strong effect in the opposite direction of that for the increase in Ui. We therefore see that changes in ui and u2
on the outputs, whereas other combinations have a weak effect on the outputs. This may counteract each othem; and if we increase ‘uj and ~2 simultaneously by 1, then the overall steady-state
be quantified by the condition itumber: the ratio between the gains in the strong and weak change in 111 is only 87.8 — 86.4 = 1.4.
directions, which for the system in (3.44) is 7 = 0/a = 7.343/0.272 = 27.0. Physically, the reason for this small change is that the compositions in the distillation column are
only weakly dependent on changes in the internal flows (i.e. simultaneous changes in the internal flows
Example 3.4 Shopping cart. Consider a shopping cart (supermarket trolley) with fixed wheels which L and V). This can also be seen from the smallest singular value, c(G) = 1.39, which is obtained for
we may want to move in th,-ee directions: forwards, sideways and upwatds. This is a simple illustrative
example where we can easily figure out the pi-incipal directionsfrom experience. The strongest direction,
inputs in the direction ~ ~ From the output singular vector
= ~ = [~0~~1] we see that
the effect is to move the outputs iii d~ffere,it directions; that is, to change Vi — 112. Therefore, it takes
corresponding to the largest singular value, will clearly be in the forwards direction. The next di,ection,
a large control action to Inova the compositions iii different directions; that is, to make both products
corresponding to the second singular value, will be sideways. Finally, the most difficult” or “weak”
purer simultaneously. This makes sense from a physical poitit of view.
direction, corresponding to the smallest singular value, will be upwards (lifting up the cart).
On the other hand, the distillation column is very sensitive to changes in external flows (Le. increase
For the shopping cart the gain depends strongly on the input direction, i.e. the plant is ill-conditioned.
Control of ill-conditioned plants is sometimes difficult, and the control problem associated with the U’ — Ui = L — V). This can be seen from the input singular vector V = [067?~8] associated with the
shopping cart can be desc,-ibed as follows. Assume we want to push the shopping cart sideways (maybe largest singular value, and is a general property of distillation columns where both products are of high
we are blocking someone). This is rather difficult (the plant has low gain in this direction) so a strong purity. The ,‘eason for this is that the external distillate flow (which varies as V — L) has to be about
force is needed. Howeve,; if there is any uncertainty in our knowledge about the direction the cart is equal to the amount of light compotlent in the feed, and even a small imbalance leads to large changes
pointing, then sonic of our applied force will be directed fonvards (where the plant gain is large) and ill the p,vduct compositions.
the cart will suddenly move forward with an undesired large speed. We thus see that the cont,’ol of an
il/—conditioned plant may be especially dij3lcult if there is input uncertainty which call cause the input For dynamic systems the singular values and their associated directions vary with frequency,
signal to “spread” from one input direction to another We will discuss this in more detail later and for control purposes it is usually the frequency range corresponding to the closed-loop
bandwidth which is of main interest. The singular values are usually plotted as a function of
Example 3,5 Distillation process. Consider the following steady-state model of a distillation
frequency in a Bode magnitude plot with a log-scale for frequency and magnitude. Typical
column:
F[108.2
87.8 —86.4 plots are shown in Figure 3.7.
G —
— —109.6 (3.45)
The variables have been sca/ed as discussed in Section 1.4. Thus, since the elements are much larger
Non-square plant
than 1 ill magnitude this suggests that there will be no problems with input constraints. However~ this
is somewhat misleading as the gain in the low—gain direction (corresponditig to the s,nallest singular The SVD is also useful for non-square plants. For example, consider a plant with two inputs
value) is actually only just above 1. To see this consider the SYD of C: and three outputs. In this case the third output singular vector, it3, tells us in which output
direction the plant cannot be controlled. Similarly, for a plant with more inputs than outputs,
C —
—
Fo.625
[0.781 —0.7811
0.625 j F197.2
[ 0 0 1 F 0.707
1.39] [—0.708
—0.708
—0.707
(3.46) the additional input singular vectors tell us in which directions the input will have no effect.
U vii Example 3.7 Consider a non-square systent with three inputs and two outputs,
From the first input singular vector~ V = [0.707 _o,7orT, we see that the gain is 197.2 when we 1
C2 [a 42 —1
[5
increase one input and decrease the other input by a similar amount. On the other hand, from the
second input singular vecto,; v = [—0.708 —0.707J, we see that if we change both inputs by the
sa,ne amount then the gain is only 1.39. The reason for this is that the plant is such that the two inputs
I
81
80 MULTIVARIABLE FEEDBACK CONTROL INTRODUCTION TO MULTIVARIABLE CONTROL
U(S(jw)). Let 1/~wp(jw)~ (the inverse of the performance weight) represent the maximum
in2 allowed magnitude of 11e112/II7’112 at each frequency. This results in the following performance
uG
requirement:
V lot
t
a a(8(jw)) <1/~wp(jw)I, k/ui 4~ a(wpS) <1, k/ui
a C
to
1100
~ in
~~wpSIIoo < 1 (3.48)
where the Nm norm (see also page 60) is defined as the peak of the maximum singular value
100 in 100 101 of the frequency response
Frequency [rad/sJ Frequency [rad/sJ IIM(s)lIc.o ~ maxd(M(jw)) (3.49)
(a) Distillation process in (3.93) (b) Spinning satellite in (3.88)
Typical performance weights ump(s) are given in Section 2.8.2, which should be studied
carefully.
Figure 3.7: Typical plots of singular values
The singular values of SOw) may be plotted as functions of frequency, as illustrated later
in Figure 3.12(a). Typically, they are small at low frequencies where feedback is effective,
with SVD and they approach 1 at high frequencies because any real system is strictly proper:
= [0.877
0.481
0.481
—0.877
] [7.354
0
0
1.387
0 [0.792
L
—0.161
0.124
0.588
—0.785]
w—*oo: L(jw)-40 ≠. S(jw)—*I (3.50)
U S if”
— —
The maximum singular value, o(SUw)), usually has a peak larger than 1 around the crossover
frequencies. This peak is undesirable, but it is unavoidable for real systems.
From our definition, the nununumn singular value is a(Ga) = 1.387, but note that an input d in the As for SISO systems we define the bandwidth as the frequency up to which feedback
0.588 is effective. For MIMO systems the bandwidth will depend on directions, and we have a
direction Va = —0.785 is in the itull space of C and yields a zero output, y = Gd = 0. bandwidth region between a lower frequency where the maximum singular value, a(S),
0.196
reaches 0.7 (the “low-gain” or “worst-case” direction), and a higher frequency where the
Exercise 3.6 For a system with m inputs and one output, what is the interpretation of the singular minimum singular value, a(S), reaches 0.7 (the “high-gain” or “best-case”)4. If we want
values and the associated input directions (V)? What is U in this case? to associate a single bandwidth frequency for a multivariable system, then we consider the
worst-case (low-gain) direction, and define
3.3.5 Singular values for performance • Bandwidth, w~: Frequency where a(S) crosses = 0.7 from below.
So far we have used the SVD primarily to gain insight into the directionality of MIMO It is then understood that the bandwidth is at least WB for any direction of the input (reference
systems. But the maximum singular value is also very useful in terms of frequency domain or disturbance) signal. Since S = (I + L)’, (A.54) yields
performance and robustness. We consider performance here.
For 5150 systems we earlier found that IS(iw)I evaluated as a function of frequency 1 (3.51)
a(L) — 1 < — <c(L) +1
gives useful information about the effectiveness of feedback control. For example, it is the — a(S)
gain from a sinusoidal reference input (or output disturbance) r(w)3 to the control error,
Ie(w)I = JS(jw)I Ir(w)I.
. ..
Thus at frequencies where feedback is effective (namely where ~(L) >> 1) we have
For MIMO systems a useful generalization results if we consider the ratio IIe(w) 112/IIr(w) 112, a(S) 1/a(L), and at the bandwidth frequency (where 1/U(S(jwB)) = = 1.41)
where r is the vector of reference inputs, e is the vector of control errors, and 112 is the we have that a(L(jwB)) is between 0.41 and 2.41. Thus, the bandwidth is approximately
vector 2-norm. As explained above, this gain depends on the direction of r(w) and we have where o-(L) crosses 1. Finally, at higher frequencies, where for any real system 2(L) (and
from (3.42) that it is bounded by the maximum and minimum singular value of S, a(L)) is small, we have that a(S) 1.
quantify the degree of directionality and the level of (two-way) interactions in MIMO systems where x denotes element-by-element multiplication (the Hadamard or Schur product). With
are the condition number and the relative gain array (RGA), respectively. We first consider Matlab, we write5
the condition number of a matrix which is defined as the ratio between the maximum and RCA = G*pinv(G).~
minimum singular values, The RGA of a transfer matrix is generally computed as a function of frequency (see Matlab
7(0) ~ ~(G)/a(G) (3.52) program in Table 3.1). For a 2 x 2 matrix with elements Yji the RGA is
A matrix with a large condition number is said to be ill-conditioned. For a non-singular
A(G)— [A21
— [All A12] — F[1—A11
A11 1—A11 1
(square) matrix £(G) = 1/U(0’), so 7(G) = U(G)otC’). It then follows from (A.120) A22j — A11 (3.55)
Ajj
that the condition number is large if both C and G’ have large elements. 911922
The condition number depends strongly on the scaling of the inputs and outputs. To be The RGA is a very useful tool in practical applications. The RGA is treated in detail at
more specific, if D1 and D2 are diagonal scaling matrices, then the condition numbers of three places;in this book. First, we give a general introduction in this section (pages 82—90).
the matrices C and D1CD2 may be arbitrarily far apart. In general, the matrix C should be The use of the RGA for decentralized control is discussed in more detail in Section 10.6
scaled on physical grounds, e.g. by dividing each input and output by its largest expected or (pages 441—453). Finally, its algebraic properties and extension to non-square matrices are
desired value as discussed in Section 1.4. considered in Appendix A.4 (pages 526—529).
One might also consider minimizing the condition number over all possible scalings. This
results in the minimized or optimal condition number which is defined by
3.4.1 Original interpretation: RGA as an interaction measure
- 7*(G) = mm 7(D1GD2) (3.53)
D1 ,D2
We follow Bristol (1966) here, and show that the RGA provides a measure of interactions. Let
u~ and y~ denote a particular input—output pair for the multivariable plant C(s), and assume
and can be computed using (A.74).
that our task is to use u~ to control y~. Bristol argued that there will be two extreme cases:
The condition number has been used as an input—output controllability measure, and
in particular it has been postulated that a large condition number indicates sensitivity to o All other loops open: Uk = 0, V/c ≠ ~.
uncertainty. This is not true in general, but the reverse holds: if the condition number is small, o All other loops closed with perfect control: yk = 0, V/c ~ 1.
then the multivariable effects of uncertainty are not likely to be serious (see (6.89)).
If the condition number is large (say, larger than 10), then this may indicate control Perfect control is only possible at steady-state, but it is a good approximation at frequencies
problems: within the bandwidth of each loop. We now evaluate “our” gain Oy~/8u~ for the two extreme
cases:
I. A large condition number 7(0) = a(G)/a(G) may be caused by a small value of
u(G), which is generally undesirable (on the other hand, a large value of a~(G) need not Other 1oops open: Yij (3.56)
necessarily be a problem).
2. A large condition number may mean that the plant has a large minimized condition
number, or equivalently, it has large RGA elements which indicate fundamental control Other loops closed:
\
a-— J
/8y~N
U3 ,i lfl.=O,k~
= gjj (3.57)
problems; see below.
3. A large condition number does imply that the system is sensitive to “unstructured” (full- Here Yji = [G]~~ is the ij’th element of C, whereas ~jj is the inverse of the ji’th element of
block) input uncertainty (e.g. with an inverse-based controller, see (8.136)), but this kind
of uncertainty often does not occur in practice. We therefore cannot generally conclude
g~jj = 1/[C~]~~ (3.58)
that a plant with a large condition number is sensitive to uncertainty, e.g. see the diagonal
plant in Example 3.12 (page 89). To derive (3.58) we note that
The symbol ‘ in Mat]ab gives the conjugate transpose (A”), and we must use . ‘ to get the ‘regular” transpose
(AT).
84 MULTIVARIABLE FEEDBACK CONTROL INTRODUCTION TO MULTIVARIABLE CONTROL 85
and (3.58) follows. Bristol argued that the ratio between the gains in (3.56) and (3.57) is a However, one should avoid pairings where the sign of the steady-state gain from u~i to y~
useful measure of interactions, and defined the ij’th “relative gain” as may change depending on the control of the other outputs, because this will yield instability
with integral action in the loop. Thus, g~~(O) and ~,,(O) should have the same sign, and we
A —tGi [G’]1~ (3.61) have:
—t jij
gij
Pairing rule 2 (page 449): Avoid (if possible) pairing on negative steady-state
The RGA is the corresponding matrix of relative gains. From (3.61) we see that A(G) =
RGA elements.
C x (C’)~’ where x denotes element-by-element multiplication (the Schur product). This
is identical to our definition of the RGA matrix in (3.54). The reader is referred to Section 10.6.4 (page 438) for derivation and further discussion of
these pairing rules.
Remark. The assumption of Yk = 0 (“perfect control of yk”) in (3.57) is satisfied at steady-state
(w = 0) provided we have integral action in the loop, but it will generally not hold exactly at other
frequencies. Unfortunately, this has led many authors to dismiss the RCA as being “only useful at 3.4.2 Examples: RGA
steady-state” or “only useful if we use integral action”. On the contrary, in most cases it is the value
of the RCA at frequencies close to crossover which is most important, and both the gain and the phase Example 19 Blending process. Consider a blending process where we ‘nix sugar (it,) and water
of the RCA elements are important. The derivation of the RCA in (3.56) to (3.61) was included to (it2) to make a given amount (yi = F) of a soft drink with a given sugar fraction (y2 x). The
illustrate one useful interpretation of the RCA, but note that our definition of the RCA in (3.54) is balances “mass in = mass out” for total mass and sugar mass are
purely algebraic and makes no assumption about “perfect control”. The general usefulness of the RCA F1 + F2 = F
is further demonstrated by the additional general algebraic and control properties of the RCA listed on
page 88. F, = xF
Note that the process itself has no dynamics. Linearization yields
Example 3.8 RGA for 2 x 2 system. Consider a 2 x 2 system with the plant model
dl?, + dF2 = dF
in = g,,(s)u, + g,2(s)u2 (3.62)
92 = 921(S)U1 + g22(s)u2 (3.63) dF, = x’dF + F5dx
Wit/i u~ = dE’,, = dF2, y, = dF and 92 = dx we then get the model
Assume that “our” task is to use it, to control yj. First consider the case when the other loop is open,
i.e. U2 is constant. We then have it1 + U~
252=0: ij,=gii(s)ui
1—x’ x
Next consider the case when the other loop is closed with pemfect control, i.e. 92 = 0. In this case, it2 F’ Ui — ~‘tt2
92
will also change when we change it1, due to interactions. More precisely, setting 92 = 0 in (3.63) gives where x = 0.2 is the nominal steady-state sugarfraction and F’ = 2kg/s is the nominal amount. The
921 (s) transfer matrix then becomes
it2 151
A’1 (s) — “open-loop gain (with ~2 = 0)” — mi(s) — 1 reasonable from a physical point of view. Pairing rule 2 is a/so satisfied for this choice.
— “closed-loop gain (withy2 = 0)” — ~ii (s) 1— 912(5)921 Cs)
Example 3.10 Steady-state RGA. Consider a 3 x 3 plant for which we have at steady-state
Intuitively, for decentralized control, we prefer to pair variables u~ and lii so that ~ is close
to 1 at all frequencies, because this means that the gain from u~ to y~ is unaffected by closing E_167
16.8 30.5
31.0
4.30 1
—1.41 A(G)
r 1.50
I—0.41
0.99
0.97
—1.481
0.45 (3.64)
[ 1.27
= , =
the other loops. More precisely, we have: 54.1 5.40 J [—o.os —0.95 2.03 J
Pairing rule 1 (page 449): Prefer pairings such that the rearranged system, with For decentralized control, we need to pair on one element in each column or row. It is then clear that
the selected pairings along the diagonal, has an RGA matrix close to identity at the only choice that satisfies pairing rule 2 (“avoid pairing on negative RGA elements”) is to pair on
frequencies around the closed-loop bandwidth. the diagonal elements; that is, use Ui to control y,, ~ to control 92 and 253 to control ya.
86 MULTIVARIABLE FEEDBACK CONTROL INTRODUCTION TO MULTIVARIABLE CONTROL 87
Remark. The plant in (3.64) represents the steady-state model of a fluid catalytic cracking (FCC)
process. A dynamic model of the FCC process in (3.64) is given in Exe,vise 6.17 (page 257). Table 3.1: Matlab program to calculate frequency-dependent RGA
% Plant model (3.65)
a = tfl’s’);
Some additional examples and exercises, that further illustrate the effectiveness of the steady- C (0.01/(s+l.72e—4)/(4fl2°s + l))*(_34 54*13+0 0572)
state RCA for selecting pairings, are given on page 442. omega = logspace(—5,2,61);
1* RCA
for ± = l:length(omega)
Example 3.11 Frequency-dependent RGA. The following model describes a a large pressurized Cf = freqresp(C,on~ega(±)); % C(jw)
vessel (Skogestad and Wolff 1991), for example, of the kind found in offshore oil-gas separations. The RCAw(:,;,i) = Cf.*±nv(Cf).; % RCA at frequency omega
RCAn0(±) = sum(sum(abs(RCAw(:, i) - eye(2flfl; % RCA number
inputs are the valve positions for liquid (it,) and vapour (ua)flow, and the outputs are the liquid volume end
(yj) and pressure (y2). RCA = frd(RGAW,omega);
Q(s)—,
F —o.O823~—~-
s
001913e_(s+2b0)3
S
1
I
L_o.8022 4.32s+1
~ —0.09188 4.323+1
~
For the diagonal pairings this gives the P1 settings
= —12.1/(rci + 0), ‘ii, = 4(r0, + 0); 1C~2 = ~47.0/@c2 + 9), Tsg 4.32
andfor the off-diagonal pairings (the index refers to the output)
= 52.3/Qrci + 0 + 2.16), TI’ 4(Tci + 0 + 2.16); Kc2 14.3/(T02 + 0), T12 4.32
io_2 rn° io2 in0
Frequency [md/si Frequency [rad/s]
For improved robustness, the level cont,-oller (yr) is tuned about 3 times slower than the pressure
(a) Magnitude of RGA elements (b) RGA number controller (y2), i.e. use mi = 30 and T02 = 0. This gives a crossover frequency of about 0.5/6 in
the fastest loop. With a delay of about 5 s or larger you should find, as expected fmvmn the RCA at
Figure 3.8: Frequency-dependent RGA for C(s) in (3.65) crossover frequencies (pairing rule 1), that the off-diagonal pairing is best. Howeve~ if the delay is
decreased from 5 s to 1 s, them~ the diagonal pairing is best, as expected since the RGA for the diagonal
The RGA matrix A(s) depends on frequency. At steady-state (s = 0) the 2,1 element of C(s) is zero,
pairing approaches I at frequencies above] radis.
so A(0) = I. Similarly at high frequencies the 1,2 element is s,nall relative to the other elements, so
A(j~) = I. This seems to suggest that the diagonal pairing should be used. However~ at intermediate
frequencies, the off-diagonal RGA elements are closest to 1, see Figure 3.8(a). For example, atfrequency 3.4.3 RGA number and iterative RGA
w = 0.01 radjs the RGA matrix becomes (see Table 3.1)
Note that in Figure 3.8(a) we plot only the magnitudes of A51, but this may be misleading
A 0.2469 + 0.0193i 0.7531 0.0193i
—
when selecting pairings. For example, a magnitude of 1 (seemingly a desirable pairing)
—
— 0.7531 0.0193i
— 0.2469 + 0.0193i (3.66)
may correspond to an RCA element of —1 (an undesirable pairing). The phase of the RGA
Thus, from pairing rule 1, the reverse pairings is probably best if we use decentralized control and elements should therefore also be considered. An alternative is to compute the RCA number,
the closed-loop bandwidth is around 0.01 red/s. From a physical point of view the use of the reverse as defined next.
pairings is quite surprising, because it involves using the vapour flow (u2) to control liquid level (yi). RGA number. A simple measure for selecting pairings according to rule 1 is to prefer
and the liquidflow (itj) to control pressure (y2). pairings with a small RGA number. For a diagonal pairing,
Remark. Although it is possible to use decentralized control for this interactive process, see the
following exercise, one m;iay achieve much better pemformance with ,nultivariable controL If one insists RCA number 4 IIA(C) — Ilisum (3.67)
on using decentralized control, then it is recommended to add a liquid flow measurement and use an
‘inner” (lower layer) flow controller Tile resulting u~ is then the liquidflow rate rather than tile valve where we have (somewhat arbitrarily) chosen the sum norm, lAlisum = Z~ asa. The RCA
position. The,, U2 (vapour flow) has no effect on y~ (liquid volume), and the plant is triangular with number for other pairings is obtained by subtracting 1 for the selected pairings; for example,
912 = 0. In this case the diagonal pairing is clearly best.
Exercise 3.7 * Design decentralized single-loop controllers for the plant (3,65) using (a) the diagonal
A(G) — [~ ~]
for the off-diagonal pairing for a 2 x 2 plant. The disadvantage with the
RCA number, at least for larger systems, is that it needs to be recomputed for each alternative
pairings and (b) the off-diagonal pairings. Use the delay 6 (which is nominally 5 seconds) as a pairing. On the other hand, the RGA elements need to be computed only once.
r
Example 3.11 continued. The RCA nunther for the plant C(s) in (3.65) is plotted for the two Example 3.12 Consider a diagonal plant for which we have
alternative pairings in Figure 3.8(b). As expected, we see that the off-diagonal pairing is preferred at
interinediatefrequencies. C = [ 100
0
01 A(C)
lj
, = I, 7(C) =
a
100, 7*(C) = 1 (3.69)
Exercise 3.8 Compute the RCA number for the six alternate pairings for the plant in (3.64). Which Here the condition number is 100 which means that the plant gain depends stmvngly on the input
direction. However; since the plant is diagonal there are no interactions so A(C) = I and the minimized
pairing would you prefer? dition number y (C) = 1.
con
Remark. Diagonal dominance. A more precise statement of pairing rule 1 (page 84) would be to prefer Example 3.13 Consider a triangular plant Cfor which we get
pairings that have “diagonal dominance” (see definition on page 10.6.4). There is a close relationship
between a small RCA number and diagonal dominance, but unfortunately there are exceptions for plants
of size 4 x 4 or larger, so a small RGA number dbes not always guarantee diagonal dominance; see
C = [ ~ , C~ =
1
r1
[0
—21
]
, A(C) = I, ‘-y(C)
2.41
= 5.83, 7’(C) = 1 (3.70)
Example 10.18 on page 440. Note thatfor a triangular matrix, there is one-way interaction, but no two-way interaction, and the RCA
is always the identity matrix.
Iterative RGA. An iterative evaluation of the RCA, A2(G) = A(A(C)) etc., is very
useful for choosing pairings with diagonal dominance for large systems. Wolff (1994) found Example 3.14 Consider again the distillation process in (3.45) for which we have at steady-state
numerically that
ACO 4 urn th(C)
k—*ct
(3.68) C =
r 87.8
[108.2
—86.4 1
—109.6]
, C~’ ro.399
[0.394
—0.3151
—0.320] ‘
A(C)
=
[ 35.1
—34.1
—34.11
35.1 ] (3.71)
is a permuted identity matrix (except for “borderline” cases). More importantly, Johnson and ~ this case 7(C) = 197.2/1.391 = 141.7 is only slightly larger than ~‘ (C) 138.268. The
Shapiro (1986, Theorem 2) have proven that A°3 always converges to the identity matrix if C magnitude sum of the elements in the RCA matrix is I}AII,~~ = 138.275. This confirms property
is a generalized diagonally dominant matrix (see definition in Remark 10.6.4 on page 439) . AS which states that, for 2 x 2 systems, IIA(øIIsum 7~ (C) when 7* (C) is large. The condition
Since permuting the matrix C causes similar permutations of A(C), A°° may then be used as number is large, but since the minimum singular value a(C) = 1.391 is larger than 1 this does not by
a candidate pairing choice. Typically, JY~ approaches A’~ fork between 4 and 8. For example, itself imply a control problem. However; the large RGA elements indicate problems, as discussed below
for C [0.33 0.671 A2 [—0.33 1.33 1 A3 [—0.07
—
—
1 1 21
[—i ~j we get A
—
— 0.67 o.33j’
—
L
1.33 —O.33j
— ‘
—
[
1.07
1.07 —O.OYj
—
(control property Cl).
indicate that the plant is fundamentally dtfficult to control due to strong interactions and
Al. It is independent of input and output scaling, sensitivity to uncertainty.
A2. Its rows and columns sum to 1.
(a) Uncertainty in the input channels (diagonal input uncertainty). Plants with large RGA
A3. The RCA is the identity matrix if C is upper or lower triangular. elements (at crossover frequency) are fundamentally difficult to control because of
A4. A relative change in an element of C equal to the negative inverse of its corresponding sensitivity to input uncertainty, e.g. caused by uncertain or neglected actuator dynamics.
RCA element, g~ = gt~(1 1/A~~), yields singularity.
— In particular, decouplers or other inverse-based controllers should not be used for plants
AS. From (A.80), plants with large RCA elements are always ill-conditioned (with a large with large RCA elements (see page 251).
value of 7(C)), but the reverse may not hold (i.e. a plant with a large 7(C) may have (b) Element uncertainty. As implied by algebraic property A4 above, large RCA elements
small RCA elements).
imply sensitivity to element-by-element uncertainty. However, this kind of uncertainty
From property A3, it follows that the RCA (or more precisely A — I) provides a measure may not occur in practice due to physical couplings between the transfer function
of two-way interaction, elements. Therefore, diagonal input uncertainty (which is always present) is usually
of more concern for plants with large RGA elements.
90 MULTIVARIABLE FEEDBACK CONTROL 91
INTRODUCTION TO MULTI VARIABLE CONTROL
C2. RCA and RI-IP -zeros. If the sign of an RCA element changes as we go from $ = 0 to
$ = cc, then there is a RHP-zero in 0 or in some subsystem of C (see Theorem 10.7,
3.5 Control of multivariable plants
page 445).
3.5.1 Diagonal controller (decentralized control)
C3. Non-square plants. The definition of the RCA may be generalized to non-square matrices
by using the pseudo-inverse; see Appendix A.4.2. Extra inputs: If the sum of the elements The simplest approach to multivariable controller design is to use a diagonal or block-
in a column of RCA is small (<< 1), then one may consider deleting the corresponding diagonal controller IC(s). This is often referred to as decentralized control. Decentralized
input. Extra outputs: If all elements in a row of RCA are small (<< 1), then the control works well if G(s) is close to diagonal, because then the plant to be controlled is
corresponding output cannot be controlled. essentially a collection of independent sub-plants. However, if the off-diagonal elements
in 0(s) are large, then the performance with decentralized diagonal control may be poor
C4. RGA and decentralized controL The usefulness of the RCA is summarized by the two
because no attempt is made to counteract the interactions. There are three basic approaches
pairing rules on page 84.
to the design of decentralized controllers:
Example 3.14 continued. For the steady-state distillation model in (3.71), the large RGA element of o Fully coordinated design
35.1 indicates a control problem. More preczsel~; fitndamental control problems are expected ifanalysis o Independent design
shows that COw) has large RCA elements also in the crossover frequency range. Indeed, with the o Sequential design
idealized dynamic model (3.93) used below, the RCA elements are large at allfrequencies. and we will Decentralized control is discussed in more detail in Chapter 10 on page 428.
confirm in sunulations that there is a strong sensitivity to input channel uncertainty with an inverse—
based controller~ see page 100. For decentralized control, we should, according to rule 2, avoid pairing
on the negative RCA elements. Thus, the diagonal pairing is preferred. 3.5.2 Two-step compensator design approach
Example 3.16 Consider the plant
d
1 /s+I s+4
2 (3.72)
Ss+1 \~ 1
We find that Asj(cc) = 2 and A11(O) = —1 have different signs. Since none of the diagonal elements
have RI-IF-zeros we conclude from property C2 that C(s) must have a RI-IF-zero. This is indeed true
and C(s) has a zero at $ = 2.
Let us elaborate a bit more on the use of RCA for decentralized control (control property
+ 1)
C4). Assume we use decentralized control with integral action in each loop, and want to
pair on one or more negative steady-state RCA elements. This may happen because this
pairing is preferred for dynamic reasons or because there exists no pairing choice with only
positive RCA elements, e.g. see the system in (10.80) on page 443. What will happen? Will yin
the system be unstable? No, not necessarily. We may, for example, tune one loop at a time
in a sequential manner (usually starting with the fastest loops), and we will end up with a
stable overall system. However, due to the negative RCA element there will be some hidden
Ii
problem, because the system is not decentralized integral controllable (DIC); see page 442.
The stability of the overall system then depends on the individual loops being in service.
Figure 3.9: One degree-of-freedom feedback control configuration
This means that detuning one or more of the individual loops may result in instability for the
overall system. Instability may also occur if an input saturates, because the corresponding Consider the simple feedback system in Figure 3.9. A conceptually simple approach
ioop is then effectively out of service. In summary, pairing on negative steady-state R~GA to multivariable control is given by a two-step procedure in which we first design a
elements should be avoided, and if it cannot be avoided then one should make sure that the “compensator” to deal with the interactions in G, and then design a diagonal controller
loops remain in service. using methods similar to those for 5150 systems in Chapter 2. Several such approaches are
For a detailed analysis of achievable performance of the plant (input—output controllability discussed below.
analysis), one must consider the singular values, as well as the RCA and condition number as The most common approach is to use a pre-compensator, Wi(s), which counteracts the
functions of frequency. In particular, the crossover frequency range is important. In addition, interactions in the plant and results in a “new” shaped plant:
disturbances and the presence of unstable (REP) plant poles and zeros must be considered.
All these issues are discussed in much more detail in Chapters 5 and 6 where we address Ge(s) = C(s) Wi(s) (3.73)
achievable performance and input—output controllability analysis for SISO and MIMO plants, which is more diagonal and easier to control than the original plant 0(s). After finding a
respectively. suitable 14’~(s) we can design a diagonal controller 1C5(s) for the shaped plant 0~(s). The
92 MULTIVAR.JABLE FEEDBACK CONTROL INTRODUCTION TO MULTIVARIABLE CONTROL 93
overall controller is then Even though decoupling controllers may not always be desirable in practice, they are of
K(s) = Wi(s)K3(s) (3.74) interest from a theoretical point of view. They also yield insights into the limitations imposed
In many cases effective compensators may be derived on physical grounds and may include by the multivariable interactions on achievable performance. One popular design method,
nonlinear elements such as ratios. which essentially yields a decoupling controller, is the intemal model control (IMC) approach
(Moran and Zaflriou, 1989).
Remark 1 Some design approaches in this spirit are the Nyquist array technique of Rosenbrock (1974) Another common strategy, which avoids most of the problems just mentioned, is to use
and the characteristic loci technique of Macparlane and Kouvaritakis (1977). partial (one-way) decoupling where 0~(s) in (3.73) is upper or lower triangular.
Remark 2 The ~ loop-shaping design procedure, described in detail in Section 9.4, is similar in that
a pre-compensator is first chosen to yield a shaped plant, G. = CP172, with desirable properties, and 3.5.4 Pre- and post-compensators and the SVD controller
then a controller IC~(s) is designed. The main difference is that in ?&c loop shaping, ICE(s) is a full
multivariable controller, designed and based on optimization (to optimize ?-E robust stability). The above pre-compensator approach may be extended by introducing a post-compensator
as shown in Figure 3.10. One then designs a diagonal controller K3 for the shaped
3.5.3 Decoupling K
Decoupling control results when the compensator 14’i is chosen such that G~ = GW1 in
(3.73) is diagonal at a selected frequency. The following different cases are possible:
1. Dynamic decoupling: Ga(s) is diagonal at all frequencies. For example, with G3 (s) = I
and a square plant, we get T4’~ = 0’ Cs) (disregarding the possible problems involved
in realizing Gt(s)). If we then select ICR(s) = i(s)I (e.g. with i(s) = k/s), the overall Figure 3.10: Pre- and post-compensators, 1472 and T4’2. K3 is diagonal.
controller is
K(s) = i~jnv(s) 4 i(s)G1(s) (3.75) plant W2GW1. The overall controller is then
We will later refer to (3.75) as an inverse-based controller. It results in a decoupled nominal
K(s) = 14’11C3W2 (3.76)
system with identical loops, i.e. L(s) = l(s)I, 8(s) = and T(s) =
Remark. In some cases we may want to keep the diagonal elements in the shaped plant unchanged The SVD controller is a special case of a pre- and post-compensator design. Here
by selecting W~ = G’Gdfag. In other cases we may want the diagonal elements in P!’1 to be I.
14’, = V~ and T’V2 = u0T (3.77)
This maybe obtained by selecting W1 = G~ ((G1)djag)’. and the off-diagonal elements of P!’1
are then called “decoupling elements”
where V0 and Cf0 are obtained from the SVD of G0 = UOEQVQT, where G~ is a real
2. Steady-state decoupling: G, (0) is diagonal. This may be obtained by selecting a constant approximation of G(jw0) at a given frequency w0 (often around the bandwidth). SVD
pre-compensator W1 = G’(O) (and for a non-square plant we may use the pseudo- controllers are studied by Hung and MacFarlane (1982), and by Hovd et al. (1997) who
inverse provided G(0) has full row (output) rank). found that the SVD-controller structure is optimal in some cases, e.g. for plants consisting of
3. Approximate decoupling at frequency w0: 03(jw0) is as diagonal as possible. This is symmetrically interconnected subsystems.
usually obtained by choosing a constant pre-compensator ~ = 0;’ where G~ is a real In summary, the SVD controller provides a useful class of controllers. By selecting
approximation of G(jw0). G~ may be obtained, for example, using the align algorithm of K3 = i(s)E;’ a decoupling design is achieved, and selecting a diagonal K3 with a low
Kouvaritakis (1974) (see file align. m available at the book’s home page). The bandwidth condition number (7(IC~) small) generally results in a robust controller (see Section 6.10).
frequency is a good selection for w0 because the effect on performance of reducing
interaction is normally greatest at this frequency.
3.5.5 What is the shape of the “best” feedback controller?
The idea of decoupling control is appealing, but there are several difficulties:
Consider the problem of disturbance rejection. The closed-loop disturbance response is
As one might expect, decoupling may be very sensitive to modelling errors and y = SGdd. Suppose we have scaled the system (see Section 1.4) such that at each frequency
uncertainties. This is illustrated below in Section 3.7.2 (page 100). the disturbances are of maximum magnitude 1, 11d112 ~ 1, and our performance requirement
2. The requirement of decoupling and the use of an inverse-based controller may not be is that 1y112 ~ 1. This is equivalent to requiring U(SGd) ~ 1. In many cases there is a trade
desirable for disturbance rejection. The reasons are similar to those given for 5150 systems off between input usage and performance, such that the controller that minimizes the input
in Section 2.6.4, and are discussed further below; see (3.79). magnitude is one that yields all singular values of SOd equal to 1, i.e. oi(SG~i) = l,Vw.
3. If the plant has RHP-zeros then the requirement of decoupling generally introduces extra This corresponds to
RHP-zeros into the closed-loop system (see Section 6.6.1, page 236). SrnjnGd Ui (3.78)
where U1(s) is some all-pass transfer function (which at each frequency has all its singular values equal to 1). The subscript min refers to the use of the smallest loop gain that satisfies the performance objective. For simplicity, we assume that Gd is square so U1(jω) is a unitary matrix. At frequencies where feedback is effective we have S = (I + L)^{-1} ≈ L^{-1}, and (3.78) yields L_min = G K_min ≈ Gd U1^{-1}. In conclusion, the controller and loop shape with the minimum gain will often look like

    K_min ≈ G^{-1} Gd U1^{-1},    L_min ≈ Gd U1^{-1}    (3.79)

In the S/KS mixed-sensitivity design considered below, the controller is found by minimizing the H∞ norm of the stacked matrix

    N = [ Wp S ;  Wu K S ]    (3.80)

Numerically, the problem min_K ||N||∞ is often solved by γ-iteration, where one solves for the controllers that achieve ||N||∞ < γ, and then reduces γ iteratively to obtain the smallest value γ_min for which a solution exists. More details about H∞ design are given in Chapter 9. This problem was discussed earlier for SISO systems, and another look at Section 2.8.3 would be useful now. A sample Matlab file is provided in Example 2.17, page 64.

The following issues and guidelines are relevant when selecting the weights Wp and Wu:

1. S is the transfer function from r to −e = r − y. A common choice for the performance weight is Wp = diag{wPi} with

    wPi = (s/Mi + ωBi*) / (s + ωBi* Ai),    Ai << 1    (3.81)

(see also Figure 2.29 on page 62). Selecting Ai << 1 ensures approximate integral action with S(0) ≈ 0. Often we select Mi about 2 for all outputs, whereas the desired closed-loop bandwidth ωBi* may be different for each output. A large value of ωBi* yields a faster response for output i.

2. KS is the transfer function from references r to inputs u in Figure 3.9, so for a system which has been scaled as in Section 1.4, a reasonable initial choice for the input weight is Wu = I.

3.6 Introduction to multivariable RHP-zeros

By means of an example, we now give the reader an appreciation of the fact that MIMO systems have zeros even though their presence may not be obvious from the elements of G(s). As for SISO systems, we find that RHP-zeros impose fundamental limitations on control.
The zeros z of MIMO systems are defined as the values s = z where G(s) loses rank, and we can find the direction of a zero by looking at the direction in which the matrix G(z) has zero gain. For square systems we essentially have that the poles and zeros of G(s) are the poles and zeros of det G(s). However, this crude method may fail in some cases, as it may incorrectly cancel poles and zeros with the same location but different directions (see Sections 4.5 and 4.5.3 for more details).

Example 3.17 Consider the following plant:

    G(s) = 1/((0.2s + 1)(s + 1)) [ 1   1 ;  1 + 2s   2 ]    (3.84)

The responses to a step in each individual input are shown in Figure 3.11(a) and (b). We see that the plant is interactive, but for these two inputs there is no inverse response to indicate the presence of a RHP-zero. Nevertheless, the plant does have a multivariable RHP-zero at z = 0.5; that is, G(s) loses rank at s = 0.5, and det G(0.5) = 0. The SVD of G(0.5) is

    G(0.5) = (1/1.65) [ 1  1 ;  2  2 ] = [ 0.45  0.89 ;  0.89  −0.45 ] [ 1.92  0 ;  0  0 ] [ 0.71  0.71 ;  0.71  −0.71 ]^H    (3.85)

where the three factors are U, Σ and V^H, and we have as expected σ̲(G(0.5)) = 0. The directions corresponding to the RHP-zero are v = [0.71; −0.71] (input direction) and u = [0.89; −0.45] (output direction). Thus, the RHP-zero is associated with both inputs and with both outputs. The presence of the multivariable RHP-zero is indeed observed from the time response in Figure 3.11(c), which is for a simultaneous input change in opposite directions, u = [1; −1]. We see that y2 displays an inverse response whereas y1 happens to remain at zero for this particular input change.

Figure 3.11: Open-loop response for G(s) in (3.84)

To see how the RHP-zero affects the closed-loop response, we design a controller which minimizes the H∞ norm of the weighted S/KS matrix

    N = [ Wp S ;  Wu K S ]    (3.86)

with weights

    Wu = I,   Wp = [ wP1  0 ;  0  wP2 ],   wPi = (s/Mi + ωBi*) / (s + ωBi* Ai),   Ai = 10^-4    (3.87)

The Matlab file for the design is the same as in Table 2.4 on page 64, except that we now have a 2 × 2 system. Since there is a RHP-zero at z = 0.5 we expect that this will somehow limit the bandwidth of the closed-loop system.

Design 1. We weight the two outputs equally and select

    Design 1 :  M1 = M2 = 1.5;   ωB1* = ωB2* = z/2 = 0.25

This yields an H∞ norm for N of 2.80 and the resulting singular values of S are shown by the solid lines in Figure 3.12(a). The closed-loop response to a reference change r = [1  −1]^T is shown by the solid lines in Figure 3.12(b). We note that both outputs behave rather poorly and both display an inverse response.

Design 2. For MIMO plants, one can often move most of the deteriorating effect (e.g. inverse response) of a RHP-zero to a particular output channel. To illustrate this, we change the weight wP2 so that more emphasis is placed on output 2. We do this by increasing the bandwidth requirement in output channel 2 by a factor of 100:

    Design 2 :  M1 = M2 = 1.5;   ωB1* = 0.25,   ωB2* = 25

This yields an H∞ norm for N of 2.92. In this case we see from the dashed line in Figure 3.12(b) that the response for output 2 (y2) is excellent with no inverse response. However, this comes at the expense of output 1 (y1) where the response is poorer than for Design 1.

Figure 3.12: Alternative designs for 2 × 2 plant (3.84) with RHP-zero

Design 3. We can also interchange the weights wP1 and wP2 to stress output 1 rather than output 2. In this case (not shown) we get an excellent response in output 1 with no inverse response, but output 2 responds very poorly (much poorer than output 1 for Design 2). Furthermore, the H∞ norm for N is 6.73, whereas it was only 2.92 for Design 2.

Thus, we see that it is easier, for this example, to get tight control of output 2 than of output 1. This may be expected from the output direction of the RHP-zero, u = [0.89; −0.45], which is mostly in the direction of output 1. We will discuss this in more detail in Section 6.6.1.

Remark 1 We find from this example that we can direct the effect of the RHP-zero to either of the two outputs. This is typical of multivariable RHP-zeros, but in other cases the RHP-zero is associated with a particular output channel and it is not possible to move its effect to another channel. The zero is then called a "pinned zero" (see Section 4.6).

Remark 2 It is observed from the plot of the singular values in Figure 3.12(a) that we were able to obtain by Design 2 a very large improvement in the "good" direction (corresponding to σ̲(S)) at the expense of only a minor deterioration in the "bad" direction (corresponding to σ̄(S)). Thus Design 1 demonstrates a shortcoming of the H∞ norm: only the worst direction (maximum singular value) contributes to the H∞ norm and it may not always be easy to get a good trade-off between the various directions.
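The actual design files are not reproduced here, but a rough Matlab sketch of Designs 1 and 2 is given below. It assumes the Robust Control Toolbox command mixsyn (rather than the book's own script in Table 2.4) for the S/KS problem (3.86)-(3.87); the H∞ norms quoted above (about 2.8 and 2.9) should only be reproduced approximately.

% S/KS mixed-sensitivity designs for the 2x2 plant (3.84) -- sketch only
s  = tf('s');
G  = [1 1; 1+2*s 2] / ((0.2*s+1)*(s+1));     % plant (3.84)
tzero(minreal(ss(G)))                        % multivariable RHP-zero, expect z = 0.5

wP = @(M,wB) (s/M + wB)/(s + wB*1e-4);       % performance weight (3.81) with A = 1e-4
Wu = eye(2);                                 % input weight, Wu = I

% Design 1: equal bandwidth requirements on both outputs
Wp1 = append(wP(1.5,0.25), wP(1.5,0.25));
[K1,CL1,gam1] = mixsyn(G, Wp1, Wu, []);      % minimizes ||[Wp*S; Wu*K*S]||_inf
% Design 2: push the effect of the RHP-zero to output 1
Wp2 = append(wP(1.5,0.25), wP(1.5,25));
[K2,CL2,gam2] = mixsyn(G, Wp2, Wu, []);

% closed-loop responses to the reference change r = [1; -1]
T1 = feedback(G*K1, eye(2));  T2 = feedback(G*K2, eye(2));
step(T1*[1;-1], T2*[1;-1], 5); legend('Design 1','Design 2')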
3.7 Introduction to MIMO robustness

To motivate the need for a deeper understanding of robustness, we present two examples which illustrate that MIMO systems can display a sensitivity to uncertainty not found in SISO systems. We focus our attention on diagonal input uncertainty, which is present in any real system and often limits achievable performance because it enters between the controller and the plant.

3.7.1 Motivating robustness example no. 1: spinning satellite

Consider the following plant (Doyle, 1986; Packard et al., 1993) which can itself be motivated by considering the angular velocity control of a satellite spinning about one of its principal axes:

    G(s) = 1/(s^2 + a^2) [ s − a^2   a(s+1) ;  −a(s+1)   s − a^2 ],    a = 10    (3.88)

A minimal state-space realization, G = C(sI − A)^{-1}B + D, is

    [ A  B ;  C  D ] = [ 0  a  1  0 ;  −a  0  0  1 ;  1  a  0  0 ;  −a  1  0  0 ]    (3.89)

The plant has a pair of jω-axis poles at s = ±ja so it needs to be stabilized. Let us apply negative feedback and try the simple diagonal constant controller

    K = I

The complementary sensitivity function is

    T(s) = GK(I + GK)^{-1} = 1/(s + 1) [ 1  a ;  −a  1 ]    (3.90)

Nominal stability (NS). The closed-loop system has two poles at s = −1 and so it is stable. This can be verified by evaluating the closed-loop state matrix

    A_cl = A − BKC = [ 0  a ;  −a  0 ] − [ 1  a ;  −a  1 ] = [ −1  0 ;  0  −1 ]

(To derive A_cl, use ẋ = Ax + Bu, y = Cx and u = −Ky.)

Nominal performance (NP). The singular values of L = GK = G are shown in Figure 3.7(a), page 80. We see that σ̲(L) = 1 at low frequencies and starts dropping off at about ω = 10. Since σ̲(L) never exceeds 1, we do not have tight control in the low-gain direction for this plant (recall the discussion following (3.51)), so we expect poor closed-loop performance. This is confirmed by considering S and T. For example, at steady-state σ̄(T) = 10.05 and σ̄(S) = 10. Furthermore, the large off-diagonal elements in T(s) in (3.90) show that we have strong interactions in the closed-loop system. (For reference tracking, however, this may be counteracted by use of a two degrees-of-freedom controller.)

Figure 3.13: Checking stability margins "one-loop-at-a-time"

Robust stability (RS). Now let us consider stability robustness. In order to determine stability margins with respect to perturbations in each input channel, one may consider Figure 3.13 where we have broken the loop at the first input. The loop transfer function at this point (the transfer function from u1 to z1) is L1(s) = 1/s (which can be derived from t11(s) = 1/(s + 1) = L1(s)/(1 + L1(s))). This corresponds to an infinite gain margin and a phase margin of 90°. On breaking the loop at the second input we get the same result. This suggests good robustness properties irrespective of the value of a. However, the design is far from robust as a further analysis shows. Consider input gain uncertainty, and let ε1 and ε2 denote the relative error in the gain in each input channel. Then

    u1' = (1 + ε1) u1,   u2' = (1 + ε2) u2    (3.91)

where u1' and u2' are the actual changes in the manipulated inputs, while u1 and u2 are the desired changes as computed by the controller. It is important to stress that this diagonal input uncertainty, which stems from our inability to know the exact values of the manipulated inputs, is always present. In terms of a state-space description, (3.91) may be represented by replacing B by

    B' = [ 1 + ε1   0 ;  0   1 + ε2 ]

The perturbed closed-loop state matrix is then

    A' = A − B'KC = [ 0  a ;  −a  0 ] − [ 1 + ε1   0 ;  0   1 + ε2 ] [ 1  a ;  −a  1 ]

which has a characteristic polynomial given by

    det(sI − A') = s^2 + (2 + ε1 + ε2) s + 1 + ε1 + ε2 + (a^2 + 1) ε1 ε2    (3.92)

(here a1 = 2 + ε1 + ε2 and a0 = 1 + ε1 + ε2 + (a^2 + 1)ε1ε2 denote the two coefficients).
The perturbed system is stable if and only if both the coefficients a0 and a1 are positive. We therefore see that the system is always stable if we consider uncertainty in only one channel at a time (at least as long as the channel gain is positive). More precisely, we have stability for (−1 < ε1 < ∞, ε2 = 0) and (ε1 = 0, −1 < ε2 < ∞). This confirms the infinite gain margin seen earlier. However, the system can only tolerate small simultaneous changes in the two channels. For example, let ε1 = −ε2; then the system is unstable (a0 < 0) for

    |ε1| > 1/√(a^2 + 1)

In summary, we have found that checking single-loop margins is inadequate for MIMO problems. We have also observed that large values of σ̄(T) or σ̄(S) indicate robustness problems. We will return to this in Chapter 8, where we show that with input uncertainty of magnitude |εi| < 1/σ̄(T), we are guaranteed robust stability (even for "full-block complex perturbations").
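As a rough numerical check (not one of the book's files), the Matlab fragment below rebuilds the satellite model (3.88)-(3.89) with K = I, confirms the nominal properties quoted above, and shows that simultaneous gain errors ε1 = −ε2 just above 1/√(a² + 1) ≈ 0.1 destabilize the loop. Only standard Control System Toolbox commands are assumed.

% Spinning satellite (3.88)-(3.89) with K = I -- robustness check (sketch)
a = 10;
A = [0 a; -a 0];  B = eye(2);  C = [1 a; -a 1];  D = zeros(2);
K = eye(2);

eig(A - B*K*C)                       % nominal closed-loop poles, expect -1, -1
T = feedback(ss(A,B,C,D)*K, eye(2));
norm(T, inf)                         % peak of sigma_max(T), about 10.05

% one-loop-at-a-time margins look perfect (L1 = 1/s), but simultaneous
% input gain errors e1 = -e2 destabilize once |e| > 1/sqrt(a^2+1)
e  = 0.11;                           % just above the bound of about 0.0995
Bp = diag([1+e, 1-e]);               % perturbed input gains, see (3.91)
max(real(eig(A - Bp*K*C)))           % positive => perturbed loop is unstable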
In the next example we find that there can be sensitivity to diagonal input uncertainty even in cases where σ̄(T) and σ̄(S) have no large peaks. This cannot happen for a diagonal controller, see (6.92), but it will happen if we use an inverse-based controller for a plant with large RGA elements, see (6.93).

3.7.2 Motivating robustness example no. 2: distillation process

The following is an idealized dynamic model of a distillation column:

    G(s) = 1/(75s + 1) [ 87.8   −86.4 ;  108.2   −109.6 ]    (3.93)

(time is in minutes). The physics of this example was discussed in Example 3.6. The plant is ill-conditioned with condition number γ(G) = 141.7 at all frequencies. The plant is also strongly two-way interactive and the RGA matrix at all frequencies is

    Λ(G) = [ 35.1   −34.1 ;  −34.1   35.1 ]    (3.94)

The large elements in this matrix indicate that this process is fundamentally difficult to control.

Remark. Equation (3.93) is admittedly a very crude model of a real distillation column; there should be a high-order lag in the transfer function from input 1 to output 2 to represent the liquid flow down the column, and higher-order composition dynamics should also be included. Nevertheless, the model is simple and displays important features of distillation column behaviour. It should be noted that with a more detailed model, the RGA elements would approach 1 at frequencies around 1 rad/min, indicating less of a control problem.

We consider the following inverse-based controller, which may also be looked upon as a steady-state decoupler with a PI controller:

    K_inv(s) = (k1/s) G^{-1}(s),   k1 = 0.7    (3.95)

Nominal performance (NP). With no uncertainty this controller gives decoupled first-order responses, each with a time constant of 1/0.7 = 1.43 min. This is confirmed by the solid line in Figure 3.14 which shows the simulated response to a reference change in y1. The responses are clearly acceptable, and we conclude that nominal performance (NP) is achieved with the decoupling controller.

Figure 3.14: Response with decoupling controller to filtered reference input r1 = 1/(5s + 1) (solid: nominal plant; dashed: perturbed plant). The perturbed plant has 20% gain uncertainty as given by (3.97).

Robust stability (RS). The resulting sensitivity and complementary sensitivity functions with this controller are

    S = SI = s/(s + 0.7) I ;   T = TI = 1/(1.43s + 1) I    (3.96)

Thus, σ̄(S) and σ̄(T) are both less than 1 at all frequencies, so there are no peaks which would indicate robustness problems. We also find that this controller gives an infinite gain margin (GM) and a phase margin (PM) of 90° in each channel. Thus, use of the traditional margins and the peak values of S and T indicate no robustness problems. However, from the large RGA elements there is cause for concern, and this is confirmed in the following.

We consider again the input gain uncertainty (3.91) as in the previous example, and we select ε1 = 0.2 and ε2 = −0.2. We then have

    u1' = 1.2 u1,   u2' = 0.8 u2    (3.97)

Note that the uncertainty is on the change in the inputs (flow rates), and not on their absolute values. A 20% error is typical for process control applications (see Remark 2 on page 297). The uncertainty in (3.97) does not by itself yield instability. This is verified by computing the closed-loop poles, which, assuming no cancellations, are solutions to det(I + L(s)) = det(I + LI(s)) = 0 (see (4.105) and (A.12)). In our case

    LI'(s) = K_inv G' = K_inv G [ 1 + ε1   0 ;  0   1 + ε2 ] = (0.7/s) [ 1 + ε1   0 ;  0   1 + ε2 ]

so the perturbed closed-loop poles are s1 = −0.7(1 + ε1) and s2 = −0.7(1 + ε2), which remain in the left half-plane.
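The RGA, the condition number and the effect of the 20% input gain errors in (3.97) are easy to reproduce numerically. The sketch below is not the book's file; it assumes the inverse-based controller in the form (k1/s)G^{-1} given in (3.95) and standard Control System Toolbox commands, and should give λ11 ≈ 35.1, γ(G) ≈ 141.7 and a perturbed response resembling Figure 3.14.

% Distillation example (3.93): RGA, condition number and input gain uncertainty
s  = tf('s');
G0 = [87.8 -86.4; 108.2 -109.6];           % steady-state gain matrix
G  = G0/(75*s+1);                          % plant (3.93)

RGA   = G0 .* inv(G0).'                    % relative gain array, lambda11 about 35.1
gamma = cond(G0)                           % condition number, about 141.7

Kinv = (0.7/s) * (75*s+1) * inv(G0);       % inverse-based controller (3.95), k1 = 0.7
Gp   = G * diag([1.2 0.8]);                % 20% input gain errors, see (3.97)

T  = feedback(G*Kinv,  eye(2));            % nominal: decoupled, time constant 1.43 min
Tp = feedback(Gp*Kinv, eye(2));            % perturbed: still stable, but poorly behaved
r1 = [1/(5*s+1); 0];                       % filtered reference change in y1
step(T*r1, Tp*r1, 60)                      % perturbed y1, y2 peak near 2.5 (cf. Figure 3.14)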
Robust performance (RP). The perturbed response (shown in Figure 3.14) is stable, but it is clearly not acceptable; it is no longer decoupled, and y1(t) and y2(t) reach a value of about 2.5 before settling at their desired values of 1 and 0. Thus RP is not achieved by the decoupling controller.

Remark 1 There is a simple reason for the observed poor response to the reference change in y1. To accomplish this change, which occurs mostly in the direction corresponding to the low plant gain, the inverse-based controller generates relatively large inputs u1 and u2, while trying to keep u1 − u2 very small. However, the input uncertainty makes this impossible — the result is an undesired large change in the actual value of u1' − u2', which subsequently results in large changes in y1 and y2 because of the large plant gain (σ̄(G) = 197.2) in this direction, as seen from (3.46).

Remark 2 The system remains stable for gain uncertainty up to 100% because the uncertainty occurs only at one side of the plant (at the input). If we also consider uncertainty at the output then we find that the decoupling controller yields instability for relatively small errors in the input and output gains. This is illustrated in Exercise 3.11 below.

Remark 3 It is also difficult to get a robust controller with other standard design techniques for this model. For example, an S/KS design as in (3.80) with Wp = wP I (using M = 2 and ωB* = 0.05 in the performance weight (3.81)) and Wu = I yields a good nominal response (although not decoupled), but the system is very sensitive to input uncertainty, and the outputs go up to about 3.4 and settle very slowly when there is 20% input gain error.

Remark 4 Attempts to make the inverse-based controller robust using the second step of the Glover–McFarlane H∞ loop-shaping procedure are also unhelpful; see Exercise 3.12. This shows that robustness with respect to general coprime factor uncertainty does not necessarily imply robustness with respect to input uncertainty. In any case, the solution is to avoid inverse-based controllers for a plant with large RGA elements.

Exercise 3.10 * Design an SVD controller K = W1 Ks W2 for the distillation process in (3.93), i.e. select W1 = V and W2 = U^T where U and V are given in (3.46). Select Ks in the form

    Ks = [ c1 (75s + 1)/s    0 ;  0    c2 (75s + 1)/s ]

and try the following values:
(a) c1 = c2 = 0.005;
(b) c1 = 0.005, c2 = 0.05;
(c) c1 = 0.7/197 = 0.0036, c2 = 0.7/1.39 = 0.504.
Simulate the closed-loop reference response with and without uncertainty. Designs (a) and (b) should be robust. Which has the best performance? Design (c) should give the response in Figure 3.14. In the simulations, include high-order plant dynamics by cascading G(s) with additional high-order lag dynamics. What is the condition number of the controller in the three cases? Discuss the results. (See also the conclusion on page 251.)

Exercise 3.11 * Consider again the distillation process (3.93) with the inverse-based controller (3.95), but now let both the input gains and the output gains be perturbed, by relative errors εi and ε̂i respectively. The perturbed loop transfer function may then be written L'(s) = (k1/s) L0, where L0 is a constant matrix for the distillation model (3.93), since all elements in G share the same dynamics, G(s) = g(s)G0. The closed-loop poles of the perturbed system are solutions to det(I + L'(s)) = det(I + (k1/s)L0) = 0, or equivalently

    det((s/k1) I + L0) = (s/k1)^2 + tr(L0)(s/k1) + det(L0) = 0    (3.100)

For k1 > 0 we have from the Routh–Hurwitz stability condition that instability occurs if and only if the trace and/or the determinant of L0 are negative. Since det(L0) > 0 for any gain error less than 100%, instability can only occur if tr(L0) < 0. Evaluate tr(L0) and show that with gain errors of equal magnitude the combination of errors which most easily yields instability is with ε̂1 = −ε̂2 = −ε1 = ε2 = ε. Use this to show that the perturbed system is unstable if

    |ε| > 1/√(2 λ11 − 1)    (3.101)

where λ11 = g11 g22 / det G0 is the 1,1 element of the RGA of G. In our case λ11 = 35.1 and we get instability for |ε| > 0.120. Check this numerically, e.g. using Matlab.

Remark. The instability condition in (3.101) for simultaneous input and output gain uncertainty applies to the very special case of a 2 × 2 plant, in which all elements share the same dynamics, G(s) = g(s)G0, and an inverse-based controller, K(s) = (k1/s)G^{-1}(s).

Exercise 3.12 * Consider again the distillation process G(s) in (3.93). The response using the inverse-based controller K_inv in (3.95) was found to be sensitive to input gain errors. We want to see if the controller can be modified to yield a more robust system by using the Glover–McFarlane H∞ loop-shaping procedure. To this effect, let the shaped plant be Gs = G K_inv, i.e. W1 = K_inv, and design an H∞ controller Ks for the shaped plant (see page 370 and Chapter 9), such that the overall controller becomes K = K_inv Ks. (You will find that γ_min = 1.414 which indicates good robustness with respect to coprime factor uncertainty, but the loop shape is almost unchanged and the system remains sensitive to input uncertainty.)
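The numerical check requested at the end of Exercise 3.11 might look as follows. This is only a sketch under the error structure described in the exercise (output errors ε̂1 = ε, ε̂2 = −ε and input errors ε1 = −ε, ε2 = ε); it should report instability just above the predicted bound of about 0.120.

% Numerical check of the instability condition (3.101) -- sketch, not the book's file
G0  = [87.8 -86.4; 108.2 -109.6];
lam = G0 .* inv(G0).';                    % RGA; lam(1,1) is about 35.1
ebound = 1/sqrt(2*lam(1,1) - 1)           % predicted bound, about 0.120

for e = 0.05:0.005:0.20                   % sweep the common error magnitude
    Eo = diag([1+e, 1-e]);                % output gain errors
    Ei = diag([1-e, 1+e]);                % input gain errors
    L0 = Eo*G0*Ei/G0;                     % constant part of the perturbed loop, L' = (k1/s)*L0
    if trace(L0) < 0                      % Routh-Hurwitz: instability once tr(L0) < 0
        fprintf('unstable for e = %.3f (bound %.3f)\n', e, ebound);  break
    end
end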
3.7.3 Robustness conclusions

From the two motivating examples above we found that multivariable plants can display a sensitivity to uncertainty (in this case input uncertainty) which is fundamentally different from what is possible in SISO systems.

In the first example (spinning satellite), we had excellent stability margins (PM and GM) when considering one loop at a time, but small simultaneous input gain errors gave instability. This might have been expected from the peak values (H∞ norms) of S and T, defined as

    ||T||∞ = max_ω σ̄(T(jω)),   ||S||∞ = max_ω σ̄(S(jω))    (3.102)
Figure 3.15: General control configuration for the case with no model uncertainty. The generalized plant P has (weighted) exogenous inputs w and (weighted) exogenous outputs z, and exchanges control signals u and sensed outputs v with the controller K.

The most important point of this section is to appreciate that almost any linear control problem can be formulated using the block diagram in Figure 3.15 (for the nominal case) or in Figure 3.23 (with model uncertainty).

Example 3.18 One degree-of-freedom feedback control configuration. We want to find P for the conventional one degree-of-freedom control configuration in Figure 3.16. The first step is to identify the signals for the generalized plant:
    w = [ w1 ;  w2 ;  w3 ] = [ d ;  r ;  n ],   z = e = y − r,   v = r − ym = r − y − n    (3.103)

With this choice of v, the controller only has information about the deviation r − ym. Also note that z = y − r, which means that performance is specified in terms of the actual output y and not in terms of the measured output ym. The block diagram in Figure 3.16 then yields

    z = y − r = Gu + d − r = I w1 − I w2 + 0 w3 + G u
    v = r − ym = r − Gu − d − n = −I w1 + I w2 − I w3 − G u

so the generalized plant P from [w  u]^T to [z  v]^T is

    P = [ I   −I   0   G ;  −I   I   −I   −G ]    (3.104)

For the stacked S/T/KS configuration of Example 3.19, with N as in (3.105), the corresponding generalized plant is

    P = [ 0     Wu I ;  0    WT G ;  Wp I   Wp G ;  −I    −G ]    (3.106)

3.8.3 Partitioning the generalized plant P

We often partition P as

    P = [ P11   P12 ;  P21   P22 ]    (3.107)

such that

    z = P11 w + P12 u    (3.108)
    v = P21 w + P22 u    (3.109)

The reader should become familiar with this notation. In Example 3.19 we get

    P11 = [ 0 ;  0 ;  Wp I ],   P12 = [ Wu I ;  WT G ;  Wp G ]    (3.110)

    P21 = −I,   P22 = −G    (3.111)

Note that P22 has dimensions compatible with the controller, i.e. if K is an nu × nv matrix, then P22 is an nv × nu matrix. For cases with one degree-of-freedom negative feedback control we have P22 = −G.

3.8.4 Analysis: closing the loop to get N

Figure 3.20: General block diagram for analysis with no uncertainty

The general feedback configurations in Figures 3.15 and 3.18 have the controller K as a separate block. This is useful when synthesizing the controller. However, for analysis of closed-loop performance the controller is given, and we may absorb K into the interconnection structure and obtain the system N as shown in Figure 3.20 where

    z = N w    (3.112)

where N is a function of K. To find N, we first partition the generalized plant P as given in (3.107)–(3.109), combine this with the controller equation

    u = K v    (3.113)

and eliminate u and v from (3.108), (3.109) and (3.113) to yield z = Nw where N is given by

    N = P11 + P12 K (I − P22 K)^{-1} P21 ≜ Fl(P, K)    (3.114)

Here Fl(P, K) denotes a lower linear fractional transformation (LFT) of P with K as the parameter. Some properties of LFTs are given in Appendix A.8. In words, N is obtained from Figure 3.15 by using K to close a lower feedback loop around P. Since positive feedback is used in the general configuration in Figure 3.15 the term (I − P22K)^{-1} has a negative sign.

Remark. To assist in remembering the sequence of P12 and P21 in (3.114), notice that the first (last) index in P11 is the same as the first (last) index in P12 K(I − P22K)^{-1} P21.

Example 3.20 We want to derive N for the partitioned P in (3.110) and (3.111) using the LFT formula in (3.114). We get

    N = [ 0 ;  0 ;  Wp I ] + [ Wu I ;  WT G ;  Wp G ] K (I + GK)^{-1} (−I) = [ −Wu KS ;  −WT T ;  Wp S ]

where we have made use of the identities S = (I + GK)^{-1}, T = GKS and I − T = S. With the exception of the two negative signs, this is identical to N given in (3.105). Of course, the negative signs have no effect on the norm of N.

Again, it should be noted that deriving N from P is much simpler using available software. For example, in the Matlab Robust Control toolbox we can evaluate N = Fl(P, K) using the command lft(P, K).
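As a quick illustration of the lft command (a sketch, not taken from the book), the fragment below builds the one degree-of-freedom generalized plant (3.104) for a simple first-order plant and a constant controller, closes the lower loop with lft, and checks the result against the transfer functions [S  −S  −T] expected from z = y − r = Sd − Sr − Tn.

% Closing the lower loop with lft: one degree-of-freedom configuration (3.104)
s = tf('s');
G = 1/(s+1);                  % simple example plant
K = 2;                        % constant example controller

% generalized plant (3.104): inputs [d r n u], outputs [z v]
P = [1 -1  0  G;
    -1  1 -1 -G];
N = lft(P, K);                % N = Fl(P,K); K closes the last input/output pair

T = feedback(G*K, 1);  S = 1 - T;
norm(minreal(N - [S -S -T]), inf)   % should be (numerically) zero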
Exercise 3.13 Consider the two degrees-of-freedom feedback configuration in Figure 1.3(b). (i) Find P when

    w = [ d ;  r ;  n ],   v = [ r ;  ym ]    (3.115)

(ii) Let z = Nw and derive N in two different ways: directly from the block diagram and using N = Fl(P, K).

3.8.5 Generalized plant P: further examples

To illustrate the generality of the configuration in Figure 3.15, we now present two further examples: one in which we derive P for a problem involving feedforward control, and one for a problem involving estimation.

Example 3.21 Consider the control system in Figure 3.21, where y1 is the output we want to control, y2 is a secondary output (extra measurement), and we also measure the disturbance d. By secondary we mean that y2 is of secondary importance for control; that is, there is no control objective associated with it. The control configuration includes a two degrees-of-freedom controller, a feedforward controller
41
d I to K to make the controllers proper). However, if we impose restrictions on the design such that, for
example, K2 or K1 are designed “locally” (without considering the whole problem), then this will limit
/
the achievable performance. For example, for a two degrees-of-freedom controller a common approach
is first to design the feedback controller I(~, for disturbance rejection (without considering reference
/
4
tracking) and then design K,- for reference tracking. This will generally give some performance loss
compared to a simultaneous design of K~ and Kr.
Example 3.fl Output estimator. Consider a situation where we have no measurement of the output
y which we want to control. However~ we do have a measurement of another output variable 1/2. Let d
n denote the unknown external inputs (including noise and disturbances) and no the known plant inputs
(a subscript o is used because in this case the output ufroin K is not the plant input). Let the model be
y=Cuo+Gdd; y2=Fuc+F.td
The objective is to design an estinzator~ Kest, such that the estimnated output ~ = K0~~ [~2] is as close
as possible in some sense to the true output y; see Figure 3.22. This problem may be written in the
general framework of Figure 3.15 with
= [ d 1, = ~, ~ = u — F Y2 1
Figure 3.21: System with feedforward, local feedback and two degrees-of-freedom control LUG] Luci
Note that n = ~; that is, the output ufroni the generalized controller is the estimate of the plant output.
Furthermore. K = Kest and
and a local feedback controller based on the extra ineasurenient ~2. To recast this into our standard Gd G -1
configuration of Figure 3.15 we define P= F~ F 0 (3.119)
010
rn
w=I
rdl
LnJ
z=yj—n; v=i
lullI
11/21
(3.116) We see that P22 = [~ j since the estimator probletn does not involve feedback.
[dJ
Note that d and n are both inputs and outputs to P and we have assumed a peifect measurement of the Exercise 3.15 State estimator (observer). In the Kahnan filter problem studied in Section 9.2 the
disturbance d. Since the controller has explicit information about r we have a two degrees-of-freedom objective is to minimize x — £ (whereas in Example 3.22 the objective was to mninimnize y — ç,. Show
controller The generalized controller K tnay be written in terms of the individual controller blocks in how the Kalnma’z filter problem can be represented in the general configuratioti of Figure 3.15 and find
Figure 3.21 as follows: P.
K=[K1K, —1(~ —1(2 Kd] (3.117)
By writing down the equations or by inspection from Figure 3.21 we get
3.8.6 Deriving P from N
Cl —I GiG2 For cases where N is given and we wish to find a P such that
01 0
P= C1 0 C1C2 (3.118) N=F)(P,K) =P,1 +P12K(1—P221()’P21
o 0 C2
1 0 0 it is usually best to work from a block diagram representation. This was illustrated above for
Then partitioning P as in (3.108) and (3.109) yields P22 = [QT (C1 G2)T ~ 0T 1T the stacked N in (3.105). Alternatively, the following procedure may be useful:
Exercise 3.14 * Cascade implementation. Consider Example 3.21 further The localfeedback based 1. SetK=OinNtoobtainP11.
on 1/2 is often implemented in a cascade ‘miannem; see also Figure 10.11. In tIns case the output from I(~ 2. Define Q = N — P11 and rewrite Q such that each term has a common factor 1?
enters into 1(2 and it tnay be viewed as a refem-ence signalfor 1/2. Derive the generalized controller K KU — P22 K)—’ (this gives F22).
and the generalized plant P in this case. 3. Since Q = F12RP21, we can now usually obtain P12 and F2, by inspection.
Remark. From Example 3.21 and Exercise 3.14, we see that a cascade implementation does not usually Example 3.23 Weighted sensitivity. We will use the above procedure to derive P when N = nipS
limit the achievable performance since, unless the optimal 1(2 or 1(~ have RHP-zeros, we can obtain wp(1 + GK~’, where nip is a scatar weight.
from the optimal overall K the subcontrollers 1(2 and 1(1 (although we may have to add a small D-term 1. Pu = N(K = 0) = nipI.
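As an illustration (a sketch following the procedure above; the final form of P is only one of several equivalent choices), steps 2 and 3 may be completed as follows:

2. Q = N − P11 = wP S − wP I = −wP T = (−wP G) K (I − (−G)K)^{-1} I, which identifies P22 = −G.
3. Since Q = P12 K (I − P22 K)^{-1} P21, inspection gives P12 = −wP G and P21 = I, so that

    P = [ wP I   −wP G ;  I   −G ]

(the sign may equally well be moved from P12 to P21, giving P12 = wP G and P21 = −I).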
d + z The tiansfer function (I + GK)’ may be represented on a block diagram with the input
and output signals aflet the plant, whereas (I + ICG)~ may be represented by another block
diagram with input and output signals before the plant However, in N there are no cross
y
coupling terms between an input before the plant and an output after the plant (corresponding
to 0(1 + KO)~), 01 between an input after the plant and an output before the plant
(corresponding to —K(I + GK)’) so N cannot be represented in block diagram form
Equivalently, if we apply the procedure in Section 3 8 6 to N in (3 121), we aie not able to
find solutions to P12 and F21 in step 3
Another stacked transfer function which cannot in general be represented in block diagram
form is
    N = [ Wp S ;  S Gd ]    (3.122)
Remark. The case where N cannot be written as an LFT of K is a special case of the Hadamard
weighted fl~ problem studied by van Diggelen and Glover (1994a). Although the solution to this 7L
problem remains intractable, van Diggelen and Glover (1994b) present a solution for a similar problem
where the Frobenius norm is used instead of the singular value to “sum up the channels”.
Exercise 3.17 Show that N in (3.122) can be represented in block diagram form if Wp = wp I, where wp is a scalar.
Remark. When obtaining P from a given N, we have that P11 and 1~22 are unique, whereas from step
3 in the above procedure we see that Pi2 and .P2~ are not unique. For instance, let a be a real scalar,
then we may instead choose P12 = aP12 and P21 = (1/a)1’21. For Pin (3.120) this means that we
may move the negative sign of the scalar wp from P12 to P21.
Exercise 3.16 * Mixed sensitivity. Use the above procedure to derive the generalized plant Pfor the
stacked N in (3.105).
parametric uncertainty, neglected dynamics, etc, as will be discussed in more detail in Chapters 7 and
8 Then “pull out” each of these blocks from the system so that an input and an output can be associated
with each A, as shown in Figure 3 25(a) Finally, collect these perturbation blocks into a large block
diagonal matrix having perturbation inputs and outputs as shown in Figure 3 25(b) In Chapter 8 we
discuss in detail how to obtain N and A Generally, it is difficult to perform these tasks manually, but
this can be easily done using software, see examples in Chapters 7
/0
0;
Figure 3.24: General block diagram for analysis with uncertainty included 3.9 Additional exercises
Most of these exercises are based on material presented in Appendix A. The exercises illustrate material which the reader should know before reading the subsequent chapters.
Inputs
Exercise 3.18 * Consider the performance specification ||wP S||∞ < 1. Suggest a rational transfer function weight wP(s) and sketch it as a function of frequency for the following two cases:
1. We desire no steady-state offset, a bandwidth better than 1 rad/s and a resonance peak (worst amplification caused by feedback) lower than 1.5.
2. We desire less than 1% steady-state offset, less than 10% error up to frequency 3 rad/s, a bandwidth better than 10 rad/s, and a resonance peak lower than 2. (Hint: See (2.105) and (2.106).)
Exercise 3.19 By hIMhI00 one can mean eithe; a spatial ot temporal norm Explain the difference
between the two and illustrate by computing the appropiiate infinity norinfom
Figure 3.25: Rearranging a system with multiple perturbations into the NA-structure 13 4 s—i 3
6 ‘ Mi(s)=—y—-—~
To evaluate the perturbed (uncertain) transfer function from external inputs w to external Exercise 3.20 What is the relationship between the RGA matrix and 1111cc? tainty in the individual
*
outputs z, we use A to close the upper loop around N (see Figure 3.24), resulting in an upper elements 2 Illust, ate this for perturbations in the 1, 1 element of the matrix
LFT (see Appendix A.8):
Exercise 3.21 Assume that A is non—singular (i) Formulate a condition in terms of the inaxi,num
Remark 1 Controller synthesis based on Figure 3.23 is still an unsolved problem, although good singular value of B for the matrix A + B to remain non-singular Apply this to A in (3.125) and (ii)
practical approaches like DK-iteration tn find the “p-optimal” controller are in use (see Section 8.12). find an B of minimum magnitude which makes A + B singular
For analysis (with a given controller), the situation is better and with the ?j~ norm an assessment of
robust performance involves computing the structured singular value, p. This is discussed in more detail Exercise 3.22 * Compute hAil11, &(A) = hAil12, hIAik~, hlAhiF. hhAhhinax and hiMhsum for the
in ChapLer 8. following matrices and tabulate your results:
Remark 2 In (3.124) N has been partitioned to be compatible with A; that is, N~1 has dimensions r, ~1 U ii U A5 [1 01
compatible with A. Usually, A is square, in which case N11 is a square matrix of the same dimension
A1=I; A2=
[o o] ;A3
Li ii Lo oj’ i oj
as A. For the nominal case with no uncertainty we have FU(N, A) = FU(N, 0) = N22, so N22 is the
Show using the above matrices that the following bounds are tight (i.e. we may have equality) for 2 x 2
nominal transfer function from w to z.
matrices (m = 2):
Remark 3 Note that P and N here also include information about how the uncertainty affects the a(A) ≤ hlAhiF ≤
system, so they are not the same P and N as used earlier, e.g. in (3.114). Actually, the parts P22 and hi-4hlrnax ≤ â(A) < mibAhitnax
N22 ofF and N in (3.123) (with uncertainty) are equal to the P and N in (3.114) (without uncertainty).
Strictly speaking, we should have used another symbol for N and P in (3.123), but for notational
IbAhhii/~/~ ≤ o(A) ≤ ~/~hhAhh~
simplicity we did not. IlAhhioc/v’~ < a(A) ≤ v’~IhAIht~
hbAhbp ≤ hiAhhsum
Remark 4 The fact that almost any control problem with uncertainty can be represented by Figure 3.23
may seem surprising, so some explanation is in order. First, represent each source of uncertainty by a Exercise 3.23 Find example mati-ices to illustrate that the above bounds are also tight when A is a
perturbation hlock, A1, which is normalized such that lAth ≤ 1. These perturbations may result from square m X m matrix with in > 2.
Exercise 3.24 Do the extreme singular values bound the magnitudes of the elements of a matrix?
*
That is, is U(A) greater than the largest element (in magnitude), and is c(A) smaller than the smallest
3.10 Conclusion
element? For a non-singular matrix, how is c(A) related to the largest element in A1?
The main purpose of this chapter has been to give an overview of methods for analysis and
Exercise 3.25 c’onsider a lower triangular m x sri matrix A with au = —1, at~ = 1 for all i > j, design of multivariable control systems.
and aij = Ofor all i <j. jn terms of analysis, we have shown how to evaluate MIMO transfer functions and
(a) What is detA? how to use the singular value decomposition of the frequency-dependent plant transfer
function matrix to provide insight into multivariable directionality. Other useful tools for
(b) What a,-e the eigenvalues of A? analyzing directionality and interactions are the condition number and the RGA. Closed-loop
(c) What is the RGA of A? performance may be analyzed in the frequency domain by evaluating the maximum singular
(d) Let m = 4 andfind an B with the smallest value of &(E) such that A + B is singula~: value of the sensitivity function as a function of frequency. Multivariable RHP-zeros impose
fundamental limitations on closed-loop performance, but for MIMO systems we can often
Exercise 3.26 Find two matrices A and B such that p(A + B) > p(A) + p(B) which proves that
*
direct the undesired effect of a RHP-zero to a subset of the outputs. MIMO systems are often
the spectral radius does not satisfy the triangle inequality and is thus not a norm. more sensitive to uncertainty than SISO systems, and we demonstrated in two examples the
possible sensitivity to input gain uncertainty.
Exercise 3.27 Write T = CK(I + GK)’ as all LFT of K, Le.find P such that T = F1 (P, K).
In terms of controller design, we discussed some simple approaches such as decoupling
and decentralized control. We also introduced a general control configuration in terms of the
Exercise 3.28 * Write K as an LFT ofT = GK(I + CK)’, i.e. find J such that K = F1(J, T). generalized plant F, which can be used as a basis for synthesizing multivariable controllers
using a number of methods, including LQG, fl2, 9t~ and p-optimal control. These methods
Exercise 3.29 State-space descriptions may be represented as LETs. To de,nonstrate this find Hfor are discussed in much more detail in Chapters 8 and 9. In this chapter we have only discussed
the R~ weighted sensitivity method.
F1(H, 1/s) = C(sI — A)1B + D
Exercise 3.31 In (3.11) we stated that the sensitivity of a perturbed plant, 5’ = (I + c’iq’, is
related to that of the no,nitial plant, S = (I + do_i, by
5’ = S(I + B0T)1
where B0 = (C’ —C) C ~. This exercise deals with how the above result may be derived in a systematic
(though cumbersome) manner using LFTs (see also Skogestad and Moran, 1988a).
(a) First find F such that 5’ = (I + Q’K)’ = F~ (F, K), andfind J such that K = F) (J, T) (see
Exercise 3.28).
(b) Combine these LFTs to findS’ = F) (N, T). What is N in terms of C and C’? Note that since
= 0 we have from (A.164)
N — F11
J21F~ J22+J21F22J12
5’ = I — C’W’T(I — (I —
4
ELEMENTS OF LINEAR SYSTEM THEORY
The main objective of this chapter is to summarize important results from linear system theory. The treatment is thorough, but readers are encouraged to consult other books, such as Kailath (1980) or Zhou et al. (1996), for more details and background information if these results are new to them.

We use in this book various representations of time-invariant linear systems, all of which are equivalent for systems that can be described by linear ordinary differential equations with constant coefficients and which do not require differentiation of the inputs (independent variables). The most important of these representations are discussed in this section.
where x 2 dz/dt and f and g are nonlinear functions Linear state-space models may then
be derived from the linearization of such models In terms of deviation variables (where x
represents a deviation from some nominal value or trajectory, etc ) we have
x(t) Az(t) + Bu(t) (43)
the state matrix. These equations provide a convenient means of describing the dynamic Bx=Ax+Bu (49)
behaviour of proper, rational, linear systems. They may be rewritten as
I If E is non-singular, (4 9) is a special case of (4 3), because (4 9) may then be wntten as
thAB x
yCD u $2 x=Ax+Bu
where A = E ‘A and B = E’B However, if the matnx E is singular then (49) allows for implicit
which gives rise to the shorthand notation algebraic relations between the states x For example, if E = [10] then (49) is equivalent to the
following set of differential and algebraic equations
sAB
(45)
CD Aiix, + Aiix, + Bin
which is frequently used to describe a state-space model of a system C. Note that the 0 = A2ixi+Aiaxa+Ban
4
representation in (4.3)—(4.4) is not a unique description of the input—output behaviour of a It would be possible to eliminate the algebraic variables x2 (by solving the algebraic equations) to get
linear system. First, there exist realizations with the same input—output behaviour, but with 322 = —A~ (A,,x, + En) and thus derive a set of differential equations (4 3)in xi only However,
additional unobservable and/or uncontrollable states (modes). Second, even for a minimal it is often more convenient to keep the system on the onginal descriptor form in (4 9)
realization (a realization with the fewest number of states and consequently no unobservable
or uncontrollable modes) there are an infinite number of possibilities. To see this, let S be
an invertible constant matrix, and introduce the new states ~ = Sx, i.e. x = 5’xi5. Then an 4.1.2 Impulse response representation
equivalent state-space realization (i.e. one with the same input—output behaviour) in terms of The impulse response matrix is
these new states is
10 t<0
A=SAS_1, B=SB, C=CS-’. D=D g(t) ~ CeAtB+Dó~t) t≥0 (410)
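A quick way to see this non-uniqueness numerically (a sketch, not from the book) is to apply a random state transformation with ss2ss and confirm that the input-output behaviour is unchanged:

% Similarity transformation: same input-output behaviour, different realization
sys1 = rss(3,1,1);              % a random stable 3-state example system
S    = randn(3);                % any invertible matrix defines new states x2 = S*x
sys2 = ss2ss(sys1, S);          % realization (S*A*inv(S), S*B, C*inv(S), D)
norm(sys1 - sys2, inf)          % should be (numerically) zero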
The most common realizations are given by a few canonical forms, such as the Jordan
(diagonalized) canonical form, the observability canonical form, etc.; see page 126. $2 f~
where 6(t) is the unit impulse (delta) function which satisfies limE_.+o 6(t)dt = 1 The tj ‘th
Given the linear dynamical system in (4.3) with an initial state condition x(to) and an input element of the impulse response matrix, g,,(t), represents the response y3(t) to an impulse
it, (t) = 6(t) for a system with a zero initial state
u(t), the dynamical system response z(t) fort ≥ to can be determined from
With initial state x(0) = 0, the dynamic response to an arbitrary input u(t) (which is zero
x(t) = CAQ_to)x(t0) + I
it0
eMi_T)Bn(r)dr (46)
fort <0) may from (4 6) be written as
The latter dyadic expansion, involving the right (t1) and left (q~) eigenvectors of A, applies 4.1.3 Transfer function representation — Laplace transforms
for cases with distinct eigenvalues A~ of A, see (A.23). We will refer to the term eAit as the The transfer function representation is unique and is very useful for directly obtaining insight
mode associated with the eigenvalue .A~(A). For a diagonalized realization (where we select into the properties of a system It is defined as the Laplace transform of the impulse response
S such that A = 5A5’ = A is a diagonal matrix) we have that ~ = diag{eAi(Mt}; see
(A.22).
Remark 1 In the state-space model (4.3)—(4.4) u represents all independent variables. Usually, we
matrix
C(s) = f
g(t)e_Stdt (4 12)
consider three kinds of independent variables, namely the manipulated inputs (it), the disturbances (d) Alternatively, we may start from the state-space description With the assumption of a zero
and the measurement noise n. The state-space model is then written as initial state, z(t = 0) = 0, the Laplace transforms of (4 3) and (4 4) becomei
± = Ax + Bit + Bdd sx(s) = Ax(s) + Bit(s) ~ x(s) = (sI — M’Bu(s) (4.13)
y=Cx+Du+Ddd+n
(48)
We make the usual abuse of notation and let f(s) denote the Laplace transform of f(t)
Note that the symbol n is used to represent both the noise signal and the number of states.
y(s) = Cz(s) + Du(s) ~ y(s) = — + D)u(s) (4.14) Simil~i~ly a left coprimefactorization of G is
C(s)
0(s) = Mf’(s)N,(s) (4.20)
where G(s) is the transfer function matrix. Equivalently, from (A.1),
Here N, and Ad’, axe stable and coprime; that is, there exist stable U,(s) and Vt(s) such that
G(s) = [Cadj(sI — A)B + Ddet(sI — A)] (4.15) the following Bezout identity is satisfied:
det(sI — A)
PJ1U~ + M,14 = I (4.21)
where det(sI A) = fl~1(s p~) is the pole polynomial. The poles are equal to the
— —
eigenvalues of A, i.e. p~ = A~ (A). For cases where the eigenvalues of A are distinct, we may For a scalar system, the left and right coprime factorizations are identical, 0 = NAC’
use the dyadic expansion of A given in (A.23), and derive Af—’N.
Remark. Two stable scalar transfer functions, N(s) and M(s), are coprime if and only if they have
G(s)=Z~’’3+D (4.16) no common RHP-zeros including the point at s cc. In this case, we can always lind stable U and V
such that NU + MV = 1.
where qi and t~ are the left and right eigenvectors of the state matrix A respectively. When Example 4.1 Consider the scalar system
disturbances are treated separately, see (4.8), the corresponding disturbance transfer function
is 0(s) (s 1)(s + 2)
— (4.22)
(s—3)(s+4)
Gd(s) = C(sI A)’B~ + Dd — (4.17)
To obtain a copranefactorzzatzon, we first make all the RHP-po/es of G zeros of lvi, and all the RHP
Note that any system written in the state-space form of (4.3) and (4.4) has a transfer zeros of C zeros of N. We then allocate the poles of N and lvi so that N and M are both pi-oper and
function, but the opposite is not true. For example, time delays and improper systems can the identity C = NM’ holds. Thus
be represented by Laplace transforms, but do not have a state-space representation. On the
other hand, the state-space representation yields an internal description of the system which N(s)=~—~ M(s)=~—~
s+4’ s+2
may be useful if the model is derived from physical principles. It is also more suitable for
is a copritne factorization. Usually, we select N and lvi to have the same poles as each other and
numerical calculations.
the same order as C(s). This gives the most degrees of freedom subject to having a realization of
[Al(s) N(s) ]T with the lowest order; We then have that
4.1.4 Frequency response N(S)=k(5~H5+2) lvi(5)~_k(53)(5+4) (4.23)
s2+his+ka’ s2+kjs+k,
An important advantage of transfer functions is that the frequency response (Fourier
transform) is directly obtained from the Laplace transform by setting s = jw in 0(s). For is a cop rime factorizatiop of (4.22) for any k and for any k1, k2 > 0.
more details on the frequency response, the reader is referred to Sections 2.1 and 3.3.
From the above example, we see that the coprime factorization is not unique. Now we
introduce the operator M* defined as M*(s)= MT(~s) (which for s = jw is the same
4.1.5 Coprime factorization as the complex conjugate transpose M” = MT). Then 0(s) = N,.(s)M,1(s) is called a
normalized right coprime factorization if
Another useful way of representing systems is the coprime factorization which may be used
both in state-space and transfer function form. In the latter case a tight cop ritnefactorization M,~’M,. + N,~N,. = I (4.24)
of G is
G(s) = Nr(s)M,E’(s) (4.18) In this case Xr (s) = [Mr] satisfies XXr = I and is called an inner transfer function. The
where Nt(s) and Mr(s) are stable coprime transfer functions. The stability implies that
normalized left coprime factorization 0(s) = Mj’ (s)N, (s) is defined similarly, requiring
Nr(S) should contain all the RHP-zeros of G(s), and Air(s) should contain as RHP-zeros all that
the RHP-poles of 0(s). The coprimeness implies that there should be no common RHP-zeros
M,Mr + N,A7 = I (4.25)
(including the point at infinity) in Nr and Mr. which result in pole—zero cancellations when
forming NrM,’. Mathematically, coprimeness means that there exist stable U,. (s) and 14(s) In this case A’, (s) = [A’!, N,] is co-inner which means that X,X7 = I. The normalized
such that the following Bezout identity is satisfied: coprime factorizations are unique to within a right (left) multiplication by a unitary matrix.
To derive normalized coprime factorizations by hand, as in the above exercise, is in general 4.1.6 More on state-space realizations
difficult. Numerically, however, one can easily find a state-space realization. If G has a
Inverse system. In some cases we may want to find a state~space descnption of the inverse
minimal state-space realization
of a system For a square G(s) we have
~ AB
C CD
a-’’ ABD’c BD~ 427
then a minimal state-space realization of a normalized left coprime factorization is given — —D’C D~ (
(Vidyasagar, 1988) by where D is assumed to be non-singular For a non-square G(s) in which D has full row (or
column) rank, a right (or left) inverse of C(s) can be found by replacing D—1 by Dt, the
s A+HC B+HD H
[Nj(s) .fl/1j(s)] = R~12C R—’/2D R’12 (4 26) pseudo-inverse of D
For a stiictly proper system with D = 0, one may obtain an approximate inverse by
including a small additional feed-through term D, preferably chosen on physical grounds
where
One should be careful, however, to select the signs of the terms in 13 such that one does not
H 4 _(BDT + ZCT)R~l, B 41 +
introduce RHP-zeros in G(s) because this will make G(s)’ unstable
and the matrix Z is the unique positive definite solution to the algebraic Riccati equation Improper systems. Impioper transfer functions, where the order of the s-polynomial in
the numerator exceeds that of the denominator, cannot be represented in standard state-
(A— B8’ DTC)Z + Z(A — B8’ DTC)T — zCTR_Fpz + BS_1BT = o space form To approximate improper systems by state-space models, we can include some
high-frequency dynamics which we know from physical considerations will have little
where significance
S 4.r+ DTD Realization of SISO transfer functions. Transfer functions are a good way of
Notice that the formulae simplify considerably for a strictly proper plant, i.e. when D = 0. representing systems because they give more immediate insight into a system’s behaviour
The Matlab commands in Table 4.1 can be used to find the normalized coprime factorization However, for numetical calculations a state-space iealization is usually desired One way of
for G(s) using (4.26). obtaining a state-space iealization from a SISO transfer function is given next Consider a
strictly proper transfei function (13 = 0) of the form
Table 4.1: Matlab commands to generate a normalized coprime factorization C G(s) = fim_,s’~’ + + fiis + fin (428)
% uses the Robust control toolbox s’~ + ~ + + a,s + a0
% Find Normalized coprime factors of system ia,b,c,dl using (4.26) Then, since multiplication by s corresponds to differentiation in the time domain, (4.28) and
S=eye(size(d*dfl+d*d; the relationship y(s) = C(s)u(s) correspond to the following differential equation:
R=eye(size(d*dfl÷d*d;
Al = a_b*inv(S)*d*c; ym(t) + a~_jy’’(t) + + aiy’(t) + aüy(t) = fi~_iu’~’(t) + . . . + fi1u’(t) + finu(t)
Rl = c*inv(Ri*c;
(Rls,Rlerr) = sqrtm(Rl); where y71_i(t) and u”’(t) represent n l’th order derivatives, etc. We can further write
—
Ql = bhinv{S)*b;
iZ,L,Gl=care(Al,Rls,Ql); isolve Riccati equation this as
H =_(b*d + Z*c)*invin);
C
yin = (—an_,y”’ + fi~_,u”~) + + (—a1y’ + fiiu’) +(—auy± fiuu)
. . .
A =a + H*c;
En = b + H=d~ Em ii1
c inv(sqrtm(R))*c;
On = inv(sqrtm(Rfl*d; —
Din = inv(sqrtm(Rfl;
N = ss(A,En,c,00);
N = ss(A,Em,C.Om);
where we have introduced new variables z,, x~ and we have y = x1. Note that x’
is the n’th derivative of x1(t). With the notation ± x’(t) = dz/dt, we have the following
Exercise 4.2 Verify numerically (e.g. using the Matlab file iii Table 4.] or the Robust Control toolbox state-space equations:
command ncfrnr) that the normalized cop rime factors of C(s) in (4.22) are as given in Exercise 4.].
= —anx1 + fiou
th,~_~ = —a1x, + x,~ + thu
—a~_jx1+x2+fi~_iu
— an —1
—an_ 2
10
01 00
fin—i
fin— 2
A= [~ CT0
B= [~], C=[7i ~j (4.35)
A= where yi and 72 are as given above.
—a2 0
(4.29) 4. Observer canonical form in (4.29)
0 10’ fi2
—aj 0 0 .“01 / A= [~ 1 B= Ri C=[1 0] (436)
—ao 0 0 fib
[0 0’ LfiO
c_ri—
0 0 ... 0 0~ where fib
K~
CTT’
C TDTJ
c2r~r0
This is called the observer canonical form. Two advantages of this realization are that one
can obtain the elements of the matrices directly from the transfer function, and that the output On comparing these four realizations with the transfer function model in (4.31), it is clear
y is simply equal to the first state. Notice that if the transfer function is not strictly proper, that the transfer function offers more immediate insight. One can at least see that it is a PID
then we must first bring out the constant term, i.e. write G(s) = G~(s) + D, and then find controller.
the realization of G1 (s) using (4.29). Time delay. A time delay (or dead time) is an infinite-dimensional system and not
Example 4.2 To obtain the state-space realization, in observer canonical fm-tn, of the 5150 transfer representable as a rational transfer function. For a state-space realization it must therefore
function G(s) = ~ we first bring out a constant tei-ni by division to get be approximated. An n’th-order approximation of a time delay 6 may be obtained by putting
n first-order Padd approximations in series
a—a —2a ~
eOs 11 9 ~n (4.37)
C(s)=—-__+i (i+4s)”
Thus D = 1. For the term ~ we get from (4.28) that fib = —2a and a~ = a, and thei-efo,-e (4.29)
yields A = —a, B = —2a and C = 1. Alternative (and possibly better) approximations are in use, but the above approximation is
often preferred because of its simplicity.
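As a small illustration (not from the book's files), the fragment below compares the series-of-first-order-terms approximation in (4.37) with Matlab's built-in Padé approximation of the same delay; both are approximations of e^{-θs}, and the agreement improves with n:

% n'th-order delay approximation (4.37) vs. Matlab's pade command -- sketch
s     = tf('s');
theta = 1;  n = 4;
Gapp  = ((1 - theta/(2*n)*s) / (1 + theta/(2*n)*s))^n;   % n first-order Pade terms in series
[num,den] = pade(theta, n);                              % built-in n'th-order Pade of exp(-theta*s)
bode(Gapp, tf(num,den), {1e-1, 1e2})                     % compare gain and phase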
Example 4.3 Consider an ideal P1D controller
K(s) = 1(~ (i + + TD5) = K~T1TDs +775±1 (4.30) 4.2 State controllability and state observability
\ 778 775
Since this involves differentiation of the input, it is an improper transferfunction and cannot be written It is useful to introduce the concept of pole vectors. We define the z th input pole vecto;
in state-spacefor,,,. A proper PID controller may be obtained by letting the derivative action be effective 4 B11q~ (4.38)
over a lunitedfrequency range. For example,
K(s) = K~ +
T55
+ TDS
i+eTDs
) (4.31)
and the i’th output pole vector
4 ct~ (4.39)
whe,-e c is typically about 0.1 (see also page 56). This call now be realized in state-space form in (see Matlab commands in Table 4.2). For the case when A has distinct eigenvalues, we have
an infinite number of ways. Four common forms are given below. In all cases, the D—,natrix, which from (4.16) the following dyadic expansion of the transfer function matrix from inputs to
leprese, Its the controller gain at high frequencies (s —Icc), is a scalar given by outputs: C(s) = Z
Ct1q~B ~ ~ = ~ 1Jp;U~ + D (4.40)
1. Diagonalizedfo,-,,, (Jordan canonicalform) where we have scaled the eigenvectors such that qjtt1 = 1. From (4.40), u,,~ is an indication
of how much the i’th mode is excited (and thus may be “controlled”) by the inputs, whereas
] A= [~
2. Observability canonical form
B= [Kc/~o)]’ C=[i —ij (433) Yp,i indicates how much the i’th mode is observed in the outputs. Thus, the pole vectors may
be used for checking the state controllability and observability of a system. This is explained
F A= [~ ~j.
cro
B= [7i]
72
C=[i 0] (4.34)
in more detail below, but let us start by defining state controllability.
Definition 4.1 State controllability. The dynamical system. ± = Ax + Bit, or equivalently
the pair (A, B), is said to be state controllable if for any initial state x(0) = zo, any time
where 7i=Kc{~J,
11 1’\ 72—~—~- tj > 0 and any final state x1, there exists an input u(t) such that x(ti) xi. Otherwise the
\Tt CTDJ C
system is said to be state uncontrollable.
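A hypothetical two-state realization, A = [-2 -2; 0 -4], B = [1; 1] (an assumed example, chosen to be consistent with the eigenvalue and pole-vector data quoted in Example 4.4 below), shows how the definition and the tests that follow can be checked numerically with standard Control System Toolbox commands:

% State controllability checks for a two-state example with an uncontrollable mode
A = [-2 -2; 0 -4];  B = [1; 1];

% input pole vectors u_p,i = B'*q_i, with q_i the left eigenvectors of A
[Q,~] = eig(A');                 % columns of Q are left eigenvectors of A
up = B' * Q                      % one entry is zero => that pole is not controllable

rank(ctrb(A, B))                 % controllability matrix has rank 1 < 2
P = lyap(A, B*B');               % controllability Gramian (A stable), singular here
rank(P)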
Table 4.2: Matlab commands to find pole vectors Example 4.4 Consider a scalar system with ~ states and the following state-space realization:
% Find pole vectors of system [A,s,c,D]
To test for state controllability it is instructive to consider the individual poles ~i and the
which has only one state. In fact, the first state corresponding to the eigenvalue at —2 is not controllable.
associated input pole vectors u~. Based on (4.40) we have (Zhou et a]., 1996, p. 52): This is verified by considering state controllability.
Theorem 4.1 Let m be an eigenvalue of A or~ equivalently, a pole of the system. 1. The eigenvalues of A, and thus the system poles, are p~ = —2 and p2 = —4. The corresponding left
eigenvectors are qi = [0.707 —0.707 :T and q~ = [0 1 :T. The two input pole vectors are
o The pole ~j is state controllable if and only if
up, = B’1q1 = 0, ui,, = B”q2 = 1
= B~2’q1 #0 (4.41)
and since u~1 is zero we have that the first pole (aigenvalue) is not state controllable.
for all left eigenvectors qj (including linear combinations) associated with p~. Otherwise,
2. The controllability ;nat,’ix has rank 1 since it has two linearly dependent tows:
the pole is uncontrollable.
I.. o A system is state controllable if and only if every pole ~i is controllable. C=[B AB=[I :~]
Remark. The need to consider linear combinations of eigenvectors only applies when p~ is a repeated
pole (with multiplicity greater than I). In this case, we may collect the left eigenvectors associated with 3. The controllability Gramian is also singular
p~ in the matrix Q~ and collect the corresponding input pole vectors in the matrix U~,1 = B5’ Q1. The — 0.125 0.125
number of uncontrollable states corresponding to the pole p~ is then rank(Qt) rank(U~,1).
—
— 0.125 0.125
In summary, a system is state controllable if and only if all its input pole vectors are non
Example 4.5 Consider a scalar system G(s) = 1/(rs + i)~ with the following realization:
zero.
There exist many other tests for state controllability. Two of these are: —lft 0 0 0 hr
1. The system (A, B) is state controllable if and only if the controllability matrix A= 1~r jyT ,B= ~ ,C=[o 0 0 f (4.46)
0 0 1k —lfr 0
C4[B AB A2B ... AmIB] (4.42)
The svstenz has four repeated eigenvalues at —1/r (tnultiplicitv 4), and the corresponding left
has rank n (full row rank). Here n is the number of states. eigenvectors of A are the columns of
2, From (4.6) one can verify that a particular input which achieves ir(ti) = x1 is
1 —l 1 —1
u(t) = _BTeAT(tl_OPVC(tI)_1(eAu1zo — x1) (4.43) ,~_ 0 0 0 0
~ 0 0 0 0
where J4~c(t) is the Gramian matrix at time t, 0000
Since the four eigenvecrors qi are linearly dependent, there is no need to consider linear combinations,
and since all input pole vectors ate non-zero (u~,€ = BMqt = ±1/r, i = 1 4), we conclude that
Wc(t) ~ eATBBTeATrd7
the system is state controllable. The is confirmed by computing the controllability matrix C in (4.42)
Therefore, the system (A, B) is state controllable if and only if the Gramian matrix T’Vc(t) which has fidl rank.
has full rank (and thus is positive definite) for any t > 0. For a stable system (A is stable)
we only need to consider P 4 W~(oo); that is, the pair (A, B) is state controllable if and In words, if a system is state controllable we can by use of its inputs u bring it from any
only if the controllability Gramian initial state to any final state within any given finite time. State controllability would therefore
seem to be an important property for practical control, but it rarely is for the following four
P~ CATBBTeA?rdr (4.44) reasons:
is positive definite (P > 0) and thus has full rank n. P may also be obtained as the solution 1. It says nothing about how the states behave at earlier and later times, e.g. it does not imply
to the Lyapunov equation that one can hold (as 1 —÷ oc) the states at a given value.
2. The required inputs may be very large with sudden changes.
AP + PAT = _BBT (4.45)
3. Some of the states may be of no practical importance.
130 MULTI VARIABLE FEEDBACK CONTROL ELEMENTS OF LINEAR SYSTEM THEORY 131
150 2. Although the states (tank temperatures T~) are indeed at their desired values of *1 at t = 400 s, it
is not possible to hold them at these values, since at steady-state all the states must be equal (in our
100 case, all states app?vach U as time goes to infinity, since ti = T0 is reset to Oat t 400 s).
0 50 It is quite easy to explain the shape of the input T0 (t). The fourth tank is furthest away and we want its
temperature to decrease (2’4 (400) —1) and therefore the inlet temperature To is initially decreased
0
C-) 0 to about —40. Then, since T5(400) = 1 is positive, T0 is mci-eased to about 30 at t = 220 5; it is
subsequently decreased to about —40, since T2(400) = —1, and finally increased to “ore than 100 to
100 200 300 400 500 achieve T1(400) = 1.
Time [sec]
From the above example, we see clearly that the property of state controllability may
(a) Input trajectory to give desired state at t = 400
not imply that the system is “controllable” in a practical sense2. This is because state
controllability is concerned only with the value of the states at discrete values of time (target
10 hitting), while in most cases we want the outputs to remain close to some desired value (or
trajectory) for all values of time, and without using inappropriate control signals.
I -: So now we know that state controllability does not imply that the system is controllable
from a practical point of view. But what about the reverse: if we do not have state
-10
controllability, is this an indication that the system is not controllable in a practical sense?
-‘5
a,
In other words, should we be concerned if a system is not state controllable? In many cases
50 300 350 200 . 250 300 350 400 the answer is “no”, since we may not be concerned with the behaviour of the uncontrollable
Ttme [sec]
states which may be outside our system boundary or of no practical importance. If we are
(b) Response of states (tank temperatures) indeed concerned about these states then they should be included in the output vector y. State
uncontrollability will then appear as a rank deficiency in the transfer function matrix C(s)
Figure 4.1: State controllability of four first-order systems in series (see functional controllability).
In conclusion, state controllability is neither a necessaty nor sufficient condition for a
system to be controllable in a practical sense (input—output controllability). So is the issue
4. The definition is an existence result which provides no degree of controllability (see
Hankel singular values for this). of state controllability of any value at all? Yes, because it tells us whether we have included
some states in our model that we have no means of affecting. This is certainly a practical (and
The first two objections are illustrated by the following example. numerical) concern if the associated mode is unstable. It also tells us when we can save on
computer time by deleting uncontrollable states which have no effect on the output for a zero
Example 4.5 continued. State controllability of tanks in series. Consider a system with one
input and four states arisingfron3fourflrst-order svste.’ns in series:
initial state.
In summary, state controllability is a system theoretical concept which is important when
C(s) = 1/(rs + it comes to computations and realizations. However, its name is somewhat misleading, and
most of the above discussion might have been avoided if only Kalman, who originally defined
A state-space realization is given by (4.46). A physical example could be four identical tanks (e.g. (state) controllability, had used a different terminology. For example, better terms might
bath tubs) in series where water flows from one tank to the next. Energy balances, assuming no
have been “point-wise controllability” or “state affectability” from which it would have been
heat loss, yield P4 = ~~yTa, T3 = ~-1T2, T~ = ~1T1, P = ~~To where the states
understood that although all the states could be individually affected, we might not be able to
x = [T1 P2 P3 P4 ~“ are the four tank temperatures, the input u = To is the inlet temperature,
control them independently over a period of time.
and r = 100 s is the residence tune in each tank. In practice, we know that it is veiy difficult to conti-ol
the four temperatures independently, since at steady-state all tempe;-atures must be equaL However; we
Definition 4.2 State observability. The dynamical system ± = Ax + Bu, y = Cx + Du
found above that the system is state controllable, so it must be possible to achieve at any given time
any desired temperature in each of the four tanks simply by adjusting the inlet temperature. This sounds
(or the pair (A, C)) is said to be state observable if for any time t1 > 0, the initial state
ahnost too good to be true, so let us consider a specific case. x(0) = r0 can be determined froni the time histoty of the input u(t) and the output y(t) in
Assume that the system is initially at steady-state (all temperatures are zero), and that we want to the interval [0. t1]. Otherwise the system, or (A, C), is said to be state unobservable.
achieve at t = 400 s the following temperatures: Ti(400) = 1, T2(400) = —1, Ta(400) = 1 and
T4 (400) = —1. The change in inlet temperature, T0 (t), to achieve this was computedfrom (4.43) and To test for state observability it is instructive to consider the individual modes p~ and the
is shown as a function of time in Figure 4.1(a). The corresponding tank temperatures are shown in associated output pole vectors Yp,i Based on (4.40) we have (Zhou et al., 1996, p. 52):
Figure 4.1(b). Tsvo things ale worth noting:
Theorem 4.2 Let pj be an eigenvalue of A 0,; equivalently, a mode of the system.
1. The required change in inlet temperature T0 is more than 100 times larger than the desim-ed
tempem-ature changes in the tanks and it also varies widely with time. 2 In ChapterS, we introduce a more practical concept of controllability which we call “input—output controllability”.
132 MULTIVARIABLE FEEDBACK CONTROL ELEMENTS OF LINEAR SYSTEM THEORY 133
The mnodep1 is observable ~f and only tf
~]
o
Example 4.7 Consider the scalar system
for all right eigenvectors t~ (including linear combinations) associated with p~. Otherwise,
the mode is unobservable. There are two eigenvalues (poles) at p and the corresponding right and left eigenvector matrices are
A syste,n is observable if and only if every mode p~ is observable. 1 —11 — 0 0
— 0 0]’ ~ ~ —~
Remark. The need to consider linear combinations of eigenvectors only applies when pt is a repeated
pole (‘vith multiplicity greater than I). In this case, we may collect the right eigenvectors associated Note that the two right (left) eigenvectors are linearly dependent. The associated output and input pole
with p~ in the matrix T1 and collect the corresponding input pole vectors in the matrix ~ = CT1. The vectors are collected in matrices,
number of unobservable states corresponding to the mode p~ is then rank(T1) — rank(Y~,t).
Y~=CT=[1 —1], U~=B”Q=[1 —1]
In suinmaty a system is observable if and only if all its output pole vectors are non-zero. Both states are observable since rank(T) — rank(Y~) = 1 — 1 = 0, and both states are state
The following example illustrates this, and what may happen if we have repeated poles. controllable since rallk(Q) — rank(U~) = I — 1 = 0. This agi-ees with the transfer function
representation.
Example 4.6 Consider a system with two states, two inputs, one output and the following state-space
,-ealization: Two other tests for state observability are:
A= [~ 01, B= Ii 41 C=[o.s
P2] L2 0]’
0.25, D=[0 0] 1. The system (A, C) is state observable if and only if we have full column rank (rank n) of
The corresponding transferfunction is
0 [~
2. For a stable system we may consider the obsen’ability Gramian
(4.48)
T—
—
[‘0 01
ij’ ~Lo11
— 01
ij
~ fo~ eATTCTCeATdT (4.49)
(where the first column is associated with P1 and the second wit?, p2). The associated output and input
which must have full rank n (and thus be positive definite) for the system to be state
pole vectors may be collected in matrices,
observable. Q can also be found as the solution to the following Lyapunov equation:
Y~ = CT = [y~,t ?Jp,2 = [0.5 0.25, U~ = BHQ = [up,i Up,2 = {~ ~] ATQ + QA = _CTC (4.50)
Let its first consider the case with distinct poles, i.e. P1 ≠ P2. We see that the two output pole A system is state observable if we can obtain the value of all individual states by measuring
“vecto,-s” (columns in 1’~) are both non-zero, so both modes are observable. The two input pole vectors the output y(t) over some time period. However, even if a system is state observable it may not
(columns in U~) ale also both lion-zero, so both modes are state controllable. Howeve,; since the second be observable in a practical sense. For example, obtaining x(0) may require taking high-order
ele,nent in up,2 is 0 it follows that mode P2 is not state controllable from input 2 (which is also easily derivatives of y(t) which may be numerically poor and sensitive to noise. This is illustrated
seen from the transfer_function representation). in the following example.
Next consider the case with two repeated poles, pi = P2. In this case, both the cohnnns of T ond
Example 4.5 (tanks in series) continued. We have y = T4 (the temperature of the last tank), and,
their linear combinations are the right eigenvectors of A. Since rank(T) — rank(Y~) = 2— 1 = 1, one
similar to Example 4.7, all states arv observable front p. Howeve,; consider a case where the initial
of the two states is not observable (which is also easily seen from the transfer Jitnction representation
temperatures in the tanks, T~ (0), i = 1 4, are non—zero (and unknown), and the inlet temperature
as there is a pole—zero cancellation in the first element in C(s)). Howeve,; both states remain state
To (t) = u(t) is zero for t ≥ 0. Then, from a practical point of view, it is clear that it is numerically
controllable since rank(Q) — rank(U~) = 2 — 2 = 0.
vemy difficult to back-calculate, for example, 7’1 (0) based on nieasure,nents b_f yQ) = T4(t) over some
In the above example the poles are “in parallel” (as can been seen since the first element interval [0, t1], although in theory all states are observable from the output.
in C(s) can be written + and this may give problems with observability and
—),
Definition 4.3 Minimal realization, McMillan degree and hidden mode. A state-space
controllability for repeated poles. However, if the repeated poles are “in series” there is no realization (A, B, C, D) of C(s) is said to be a minimal realization of C(s) ~ A has the
such problem, as illustrated in Example 4.5 and further in the following example. smallest possible dimension (i.e. the fewest nionber of states). The smallest dimension is
called the McMillan degree of C(s). A mode is hidden if it is not state controllable or
observable and thus does not appear ill the ,ninimnal realization.
134 MULTIVARIABLE FEEDBACK CONTROL
Since only controllable and observable states contribute to the input—output behaviour from
I ELEMENTS OF LINEAR SYSTEM THEORY
4.4 Poles
135
chamactenstic equation
0(s) 4 det(sI A) = 0 — (4.51)
A
4.3 Stability To see that this definition is reasonable, recall (4 15) and see Appendix A 2 1 Note that if A
does not correspond to a minimal realization then the poles by this definition will include the
There are a number of ways in which stability may be defined, e.g. see Willems (1970). poles (eigenvalues) corresponding to uncontrollable and/or unobservable states
Fortunately, for linear time-invariant systems these differences have no practical significance,
and we use the following definition: S
4.4.1 Poles and stability
Definition 4.4 A system is (internally) stable if none of its components contain hidden
unstable modes and the injection of bounded external signals at any place in the system For linear systems, the poles determine stability
results in bounded output signals measured anywhere in the system. Theorem 4.3 A lmeam dynamic system z = Ax + Bit is stable if and only if all the poles
Here we define a signal u(t) to be “bounded” if there exists a constant c such that u(t) I < c are in the open left-halfplane (LHP), that is, Re(p5) = Re{A~(A)} < 0,Vz A matux A with
such a piopemly is said to be “stable” ot Hurwitz
for all t. The word internally is included in the definition to stress that we do not only require
the response from one particular input to another particular output to be stable, but require Proof From (4 7) we see that the time response (4 6) can be wntten as a sum of teniis each containing
stability for signals injected or measured at any point of the system. This is discussed in a mode eP,t Poles in the RHP with Re{p,} > 0 give rise to unstable modes since in this case
more detail for feedback systems in Section 4.7. Similarly, the components must contain no is unbounded as t —*cc Poles in the open LHP give use to stable modes where e~ —* 0 as
hidden unstable modes; that is, any instability in the components must be contained in their I —* cc Systems with poles on the jo-axis, including integrators, are unstable from our Definition 44
input—output behaviour. of stability For example, consider y = Cu and assume C(s) has imaginary poles s = ±jWo Then
with a bounded sinusoidal input, u(t) = sinw0t, the output y(t) grows unbounded as I —> cc C
Definition 4.5 Stabilizable, detectable and hidden unstable modes. A system is
stabilizable if all unstable modes are state controllable. A system is detectable if all unstable
4.4.2 Poles from state-space realizations
modes are observable. A system with unstabilizable or undetectable modes is said to contain
hidden unstable modes. Poles are usually obtained numerically by computing the eigenvalues of the A-matrix To get
the fewest number of poles, without unstabilizable or uncontrollable modes, we should use a
A linear system is stabilizable (detectable) if and only if all input (output) pole vectors minimal realization of the system
associated with the unstable modes are non-zero; see (4.41) and (4.47) for details. If a system
is not detectable, then there is a state within the system which will eventually grow out of
bounds, but we have no way of observing this from the outputs y(t). 1 4.4.3 Poles from transfer functions
Remark 1 Any unstable linear system can be stabilized by feedback control (at least in theory) The following theorem from MacFarlane and Karcanias (1976) allows us to obtain the poles
provided the system contains no hidden unstable mode(s). However, this may require an unstable directly fiom the transfer function matrix C(s) and is useful for hand calculations It also has
controller, see also page 150. the advantage of yielding only the poles corresponding to a minimal realization of the system
Remark 2 Systems with hidden unstable modes must be avoided both in practice and in computations Theorem 4.4 The pole polynomial 0(e) conespondmg to a mmunal mealization of a system
(since variables will eventually blow up on our computer if not on the factory floor). In the book we with tiansfem function C(s) is the least coninion denominatom of all non—identically zeto
always assume, unless otherwise stated, that our systems contain no hidden unstable modes. minors of all omdems of C(s)
A nunot of a matrix is the determinant of the matrix obtained by deleting certain rows and/or
columns of the matrix We will use the notation M~ to denote the minor corresponding to the
A
M,dtirariahle Feedback Conipol: Analysis and Design Second Edition
S. Skogesiad and I. Postleihwaiie © 2005 John Wiley & Sons, Lid I
A
I
136 MULTIVARIABLE FEEDBACK CONTROL
I
it
ELEMENTS OF LINEAR SYSTEM THEORY 137
deletion of rows r and columns c in C(s). In the procedure defined by the theorem we cancel I
How many poles at s = —a does a minimal realization of C1 (s) have9 From (A 10),
common factors in the numerator and denominator of each minor. It then follows that only
observable and controllable poles will appear in the pole polynomial.
det (G1(s)) = det (~Go(s)) (s +a)m det (Go(s)) (461)
Example 4.8 Consider the plant C(s) = e9’ which has no state-space realization as it
contains a delay and is also impi-oper Thus we cannot compute the poles from (4.51). Howeveç front so if Go has no zeros at s = —a, then C1 (s) has in poles at s = —a However, Go may have
Theorem 4.4 we have that the denominator is (s + 1) and as expected C(s) has a pole at s = —1. j zeros at s = —a As an example, consider a 2 x 2 plant in the form given by (4 60) It may
have two poles at s = —a (as in (3 93)), one pole at s = —a (as in (452) where dot Go(s)
Example 4.9 Consider the square transferfunction matrix has a zero at s = —a) or no pole at s = —a (if all the elements of Go(s) have a zero at
1 [s—i $ = —a)
1.25(s+i)(s±2)[ —6 s—2 (452) As noted above, the poles are obtained numerically by computing the eigenvalues of the
A-matnx Thus, to compute the poles of a transfer function C(s), we must first obtain a state-
The minors of order 1 are the four elements that all have (s + 1) (s + 2) in the denominator The minor space realization of the system Preferably this should be a minimal realization For example,
of order 2 is the determinant
if we make individual realizations of the five non-zero elements in Example 4 10 and then
simply combine them to get an overall state-space realization, we will get a system with 15
detCts~— (s—i)(s—2)+6s — 1 (453) states, where each of the three poles (in the common denominator) are repeated five times A
— 1.252(8 + i)2(s + 2)2 — l.252(s + i)(s + 2)
model reduction to obtain a minimal realization will subsequently yield a system with four
Note the pole—zero cancellation when evaluating the determinant. The least connnon deno,ninator of
all the minors is then
poles as given in (4 59)
(454)
so a minimal realization of the system has two poles: one at s = —1 and one at s = —2. 4.4.4 Pole vectors and directions
Example 4.10 Consider the 2 x 3 system, with three inputs and two outputs, In multivanable systems poles have directions associated with them To quantify them we use
the input and output pole vectoms defined in (438) and (439)
F (s—i)(s+2) 0 (s—fl2
(4 55) Vp. = Gt~ ni,, = BHq~ (4 62)
‘‘(s+i)(s+2)(s—i)F—(s+i)(s+2) (s-i)(s+i) (s—i)(s+i)
The minors of order 1 are the five non-zero elements (e.g.M~3 = gii (s)): These give an indication of how much the z’th mode is excited in each output and input Pole
d,mectmons are defined as pole vectors normalized to have unit length, i e
1 s-i —1 1 1
s+1’ (s+1)(s±2)’ s—i’ s+2’ s+2 (456) i i
‘= l II ~ = itt,,. W2t1~’ (463)
The minor of order 2 corm-esponding to the deletion of column 2 is
The pole directions may alternatively be obtained directly from the transfer function matrix by
11’! (si)(s+2)(s~1)(s+1)+(8+i)(s+2)(s_i)2 2
—
((s+1)(s+2)(s_i))2
_________
(457) evaluating G(s) at the pole p~ and considering the directions of the resulting complex matrix
— (s+i)(s+2) G(p~) The matrix is infinite in the direction of the pole, and we may somewhat crudely write
The other two minors of om-der 2 am-c
=00 y~, (464)
M — —(s—i) ________
(458) ‘ where z4, is the input pole direction, and is the output pole direction The pole directions
— (s+i)(s+2)2’ (s + i)(s + 2)
may then in principle be obtained from an SVD of G(p~) = UEVH Then u is the first
By considering all minors we find their least common denominator to be
column in V (corresponding to the infinite singular value), and y~, the first column in U For
~(s)=(s+i)(s+2)2(s_i) (4 59) numerical calculations we may evaluate C(s) at s = p~ + e where c is a small numbei
The system them-efom-e has fonr poles: one at 8 = —1, one at s = 1 and two at s = —2. Remark 1 As already mentioned, if is,, = BMq = 0 then the corresponding pole is not state
controllable, and if y,, = Ct = 0 the corresponding pole is not state observable (see also Zhou
From the above examples we see that the MIMO poles are essentially the poles of the et al , 1996, p 52)
elements. However, by looking at only the elements it is not possible to determine the Remark 2 For a multivaiiable plant the pole vectors defined in (462) provide a very useful tool for
multiplicity of the poles. For instance, let Go(s) be a square in x in transfer function matrix selecting inputs and outputs for stabilization, see Section 1043 for details For a single unstable
with no pole at s = —a, and consider mode, selecting the input corresponding to the largest element in n~ and the output corresponding
to the largest element in minimizes the input usage required for stabilization More precisely, this
Gi(s) = —~—--Go(s) (4 60) choice minimizes the lower hound on both the 7-t2 and 1L~ norms of the transfer function KS from
measurement (output) noise to input (Havre and Skogestad, 2003)
7
133 MULTI VARIABLE FEEDBACK CONTROL - EMENTS OF LINEAR SYSTEM THEORY 139
Remark 3 Notice that there is difference between the non-normalized pole vector and the normalized 4.5.2 Zeros from transfer functions
pole direction (vector). Above we used a to show explicitly that the direction vector is normalized,
but later in the book this is omitted. For zeros (see below) such problems do not arise because we are The following theorem from MacFarlane and Karcanias (1976) is useful for hand calculating
only interested in the normalized zero direction (vector). the zeros of a transfer function matrix C(s).
Theorem 4.5 The zero polynomial z(s), corresponding to a minimal realization of the
systems is the greatest common divisor of all the numerators of all order-r minors of 0(s),
where r is the normal rank of 0(s), provided that these minors have been adjusted in such a
4.5 Zeros way as to have the pole polynomial i(s) as their denominator
Zeros of a system arise when competing effects, internal to the system, are such that the Example 4.11 Consider the 2 x 2 transfer function matrix
output is zero even when the inputs (and the states) are not themselves identically zero. For
a SISO system the zeros z1 are the solutions to 0(z1) = 0. In general, it can be argued that C(s) = ~ [s_i 2(s- 1)] (4.68)
zeros are values of s at which C(s) loses rank (from rank I to rank 0 for a SISO system).
This is the basis for the following definition of zeros for a multivariable system (MacFarlane The normal rank of 0(s) is 2, and the minor of order 2 is the determinant, det C(s) = 2(s_i)2_i8
and Karcanias, 1976): 2~. From Theorem 4.4, the pole polynomial is ~(s) = s + 2 and therefore the zero polynomial is
z(s) = $ 4. Thus, C(s) has a single RHP-zem at s = 4.
—
Definition 4.7 Zeros. z1 is a zero of G(s) if the rank of 0(z1) is less than the nortnal rank
of C(s). The zero polynomial is defined as z(s) = fl~1 (s z1) where n~ is the number of
—
This illustrates that in general multivariable zeros have no relationship with the zeros of the
finite zeros of C(s). transfer function elements. This is also shown by the following example where the system
has no zeros.
In this book, we do not consider zeros at infinity; we require that z1 is finite. The normal
rank of C(s) is defined as the rank of G(s) at all values of s except at a finite number of Example 4.9 continued. Consider again the 2 x 2 system in (4.52) where det C(s) in (4.53) already
singularities (which are the zeros). has ~(s) as its denominator Thus the zero polynomial is given by the numerator of (4.53), which is 1,
This definition of zeros is based on the transfer function matrix, corresponding to a minimal and we find that the system has no multivariable zeros.
realization of a system. These zeros are sometimes called “transmission zeros”, but we The next two examples consider non-square systems.
will simply call them “zeros”. We may sometimes use the term “multivariable zeros” to
distinguish them from the zeros of the elements of the transfer function matrix. Example 4.12 Consider the 1 x 2 system
The normal rank of C(s) is 1, and since there is no value ofs for which both elements become zero,
Zeros are usually computed from a state-space description of the system. First note that the C(s) has no zeros.
state-space equations of a system may be written as
In general, non-square systems are less likely to have zeros than square systems. For instance,
F(s)[z] = [~}, F(s) = [si-A -B] (4.65) for a square 2 x 2 system to have a zero, there must be a value of s for which the two columns
in C(s) are linearly dependent. On the other hand, for a 2 x 3 system to have a zero, we need
The zeros are then the values s = z for which the polynomial system matrix, F(s), loses all three columns in 0(s) to be linearly dependent.
rank, resulting in zero output for some non-zero input. Numerically, the zeros are found as The following is an example of a non-square system which does have a zero.
non-trivial solutions (with U: ~ 0 and x~ ≠ 0) to the following problem: Example 4.10 continued. Consider again the 2 x 3 system in (4.55), and adjust the minors of order
2 in (4.57,1 and (4.58) so that their denominators are ~(s) (s + 1)(s + 2)~ (s 1). We get
—
(z4_M)[~] =0 (4.66)
Mi(s) = (si) Mo(s) 2(s-1)(s+2) Ma(s) = (s-i)(s+2) (4.70)
M_FA B]. ~ (4.67)
[c 13]’ 9[O a The common factor for these minors is the zero polynomial z(s) (s — 1). Thus, the system has a
This is solved as a generalized eigenvalue problem in the conventional eigenvalue problem
— single RH?-zero located at $ = 1.
we have 19 = I. Note that we usually get additional zeros if the realization is not minimal. We also see from the last example that a minimal realization of a MIMO system can have
poles and zeros at the same value of s, provided their directions are different.
140 MULTIVAR.IA]JLE FEEDBACK CONTROL 141
ELEMENTS OF LINEAR SYSTEM THEORY
I
example, given a state-space realization, we can evaluate C(s) = C(sI A)’B + D. Let —
a = tN’s’); G = Its—i) 4; 4.5 2*(s_l)1J(s+2);p..2;z4;
C crude method for computing pole and zero directions
C(s) have a zero at s = z. Then C(s) loses rank at $ = z, and there will exist non-zero Gz evaifrtG, z) ; n min(size(Gz))
vectors zt~ and Yr such that Iu,s,v) = svd(Gz); yz UI:,n), uz V(:,n)
Gp evaifr(G,p+l.e-5);
G(z)zz: = Yr (471) tU.S.Vl = svdtcp}; yp 01:1), up VI:,l)
Here Ur is defined as the input zero direction, and Yr is defined as the output zero direction.
We usually normalize the direction vectors to have unit length,
4.6 Some important remarks on poles and zeros
= 1 Y~~Yz = 1
The zeros resulting from a minimal realization are sometimes called the transmission
From a practical point of view, the output zero direction, liz, is usually of more interest than zeros. If one does not have a minimal realization, then numerical computations (e.g. using
it:, because y~ gives information about which output (or combination of outputs) may be Matlab) may yield additional invariant zeros. These invariant zeros plus the transmission
difficult to control. zeros are sometimes called the system zeros. The invariant zeros can be further subdivided
Remark 1 Taking the Herinitian (conjugate transpose) of (4.71) yields i4’O”(z) = 0 .
into input and output decoupling zeros. These cancel poles associated with uncontrollable
Premultiplying by u~ and postmultiplying by Yr noting that uYu~ = 1 and Yz~~Yz = 1 yields or unobservable states and hence have limited practical significance. To avoid all these
0(z)’~’y2 = 0~ u~, or complications, we recommend that a minimal realization is found before computing the
y~’0(z) =0u~’ (472) zeros.
Remark 2 In principle, we may obtain u~ and Yr from an SVD of 0(z) = UEV”, and we have 2 Rosenbrock (1966; 1970) first defined multivariable zeros using something similar to the
that u~ is the last column in V (corresponding to the zero singular value of 0(z)) and Yr is the last Smith—McMillan form. Poles and zeros are defined in terms of the McMillan form in Zhou
column of U. An example was given earlier in (3.85). A better approach numerically is to obtain u~ etal. (1996).
from a state-space description using the generalized eigenvalue problem in (4.66). Similarly, Yr may be 3 In the time domain, the presence of zeros implies blocking of certain input signals
a
obtained from the transposed state-space description, see (4.72), using MT in (4.66). (MacFarlane and Karcanias, 1976). If z is a zero of C(s), then there exists an input signal
of the form UzeZtl+(t), where it2 is a (complex) vector and i+(t) is a unit step, and a set
Example 4.13 Zero and pole directions. Consider the 2 x 2 plant in (4.68), which has a RHP-zero
I of initial conditions (states) z~, such that y(t) = 0 fort > 0.
at z = 4 and a LHP-pole at p = —2. The pole and zero directions are usuallyfoundfrom a state-space
realization using (4.38)—Ø.39) and (4.65)—(4.67,l, respectively. However; we will here use an SVD of 1 4 For square systems we essentially have that the poles and zeros of 0(s) are the poles and
4a zeros of dot C(s). However, this crude definition may fail in a few cases, for instance
0(z) and G(p) to determine the zero and pole directions using the Matlab commands in Table 4.3,
although we stress that this is generally not a reliable method numerically. An SVD of 0(z) gives when there is a zero and pole in different parts of the system which happen to cancel when
forming det 0(s). For example, the system
0(z) = 0(4)
—
- ~11[~~ 41~j —
—
1 ro.55
~ ~o.83 —0.831 f9.01
0.55 ][ 0
01 ~O.6
0] Los —0.81”
0.6 j a
4 (4.74)
G(s)= [(s+2)/(s+1)
0 0 1
(s+1)/(s+2)j
The input and output zero directions are associated with the zero singular value of 0(z), see (4.71),
I
and we get u~ = [—0.80] a,zd y~ = [~0~3]. We see from Yr that the zero has a slightly larger a has det C(s) = 1, although the system obviously has poles at —1 and —2 and
component in the first output. Next, to determine the pole directions consider (multivariable) zeros at —1 and —2.
5 C(s) in (4.74) provides a good example for illustrating the importance of directions when
— 1 L_3+e 4
0(p+e) = 0(—2+e) (473) discussing poles and zeros of multivariable systems. We note that although the system has
4.5 2(—3 + e)
a poles and zeros at the same locations (at —1 and —2), their directions are different and
The SVD as e —* 0 becomes
so they do not cancel or otherwise interact with each other. In (4.74) the pole at —i has
1 [—o.ss —0.831 Ig.oi 4 directions u~, = = [1 0T, whereas the zero at—i has directions u. = liz = [0 l:T.
0(—2 + e) = c2 i 0.83 —0.55] L o 011 0.6
o] L —0.8 —0.81”
—o.6j
6 For square systems with a non-singular fl-matrix, the number of poles is the same as the
The pole input and output directions are associated with the largest singular value, o~ = 9.01/c2, number of zeros, and the zeros of 0(s) are equal to the poles 0’(s), and vice versa.
Furthermore, if the inverse of 0Q) exists then it follows from the SVD that
and we get u~ = [0~~] and y~ = [j00~5]. We note from y~ that the pole has a slightly larger
component in the second output. = 0 (4.75)
-a
It is important to note that although the locations of the poles and zeros are independent
of input and output scalings, their directions are not. Thus, the inputs and outputs need to be
1 7 There are no zeros if the outputs y contain direct information about all the states; that is, if
from y we can directly obtains. For example, we have no zeros if y = z or more generally
scaled properly before making any interpretations based on pole and zero directions. if rank C = it and 13 = 0 (see a proof in Example 4.15). This probably explains why zeros
4
a
a
I
1~~
142 MULTIVARiABLE FEEDBACK CONTROL ELEMENTS OF LINEAR SYSTEM THEORY 143
were given very little attention in the optimal control theory of the 1960’s which was based the rank of G1 (z) is 1, which is less than the normal rank of 0, (s), which is 2. On the
on state feedback, other hand, G2 (a) = L h2, (s z) h22 h23
—
[hii(s z) h,2 hi3] does not have a zero at a = z since G2(z)
—
8 Zeros usually appear when there are fewer inputs or outputs than states, or when 13 ~ 0.
Consider a square m x m plant C(s) = C(sI A)—’ B + 13 with n states. We then have
— has rank 2 which is equal to the normal rank of 02(5) (assuming that the last two columns
for the number of (finite) zeros of C(s) (Maciejowski, 1989, p. 55) of 02(5) have rank 2).
14. The concept of functional controllability, see page 233, is related to zeros. Loosely
13 ~ 0 : At most ii — vi+ ranlc(D) zeros speaking, one can say that a system which is functionally uncontrollable has in a certain
D = 0: At most ii — 2m + rank(CB) zeros (4.76) output direction “a zero for all values of a”.
13 = 0 and rank(CB) = m: Exactly n — m zeros
The control implications of RHP-zeros and RHP-poles are discussed for 5150 systems on
9 Moving poles. How are the poles affected by (a) feedback (0(1 + KG)’), (b) series pages 183—197 and for MIMO systems on pages 235—237.
compensation (OK, feedforward control) and (c) parallel compensation (C + K)? The Example 4.14 Effect of feedback on poles and zeros. Consider a SISO negative feedback system
answer is that (a) feedback control moves the poles (e.g. 0 = —L~~ K = —2a moves the
pole from —a to +a), (b) series compensation cannot move thes+a’ poles, but we may cancel with plant
fivin 0(s)r to= output
reference z(s)/~(s)
y is and a constant gain controller K(s) = k. The closed-loop response
poles inC by placing zeros in K (e.g. C = ~ K = t~fl and (c) parallel compensation
s-fe’
cannot move the poles, but we may cancel their effect s+k1’
by subtracting identical poles in K T(s) =
L(s) — kG(s) hz(s) z~j(s) (4.77)
(e g 0 = ‘ K = —~). 1 + L(s) — 1 + kG(s) ~(s) + hz(s) k~~~(s)
S+a’ s+a
10 For a strictly proper plant C(s) = C(sI A)’B, the open-loop poles are determined
— Note the following:
by the characteristic polynomial ~ot(s) = det(sI A). If we apply constant gain
— 1 me zero polynomial is z~i(s) = z(s), so the zero locations are unchanged byfeedback.
negative feedback u = —Koy, the poles are determined by the corresponding closed- 2. vie pole locations are changed by feedback. For example,
loop characteristic polynomial ~j (a) = det(sI — A + BK0C). Thus, unstable plants may
be stabilized by use of feedback control. See also Example 4.14. A~ —*0 ≠- ~~~(s) —+ ~(s) (4.78)
11 Moving zeros. Consider next the effect of feedback, series and parallel compensation on k —* cc =~- 4a (s) —* hz(s) (4.79)
the zeros.
(a) With feedback, the zeros of G(I + KG) are the zeros of C plus the poles of K.
—t That is, as we increase the feedback gain, the closed-loop poles move from open-loop poles to the
This means that the zeros in 0, including their output directions U:, are unaffected by open-loop zeros. RI-Il’-zeros therefore imply high-gain instability. These results are well known from
feedback. However, even though U: is fixed it is still possible with feedback control to a classical root locus analysis.
move the deteriorating effect of a RHP-zero to a given output channel, provided U: has a Example 4.15 We want to prove that G(s) = C(sI — AY’B + 13 has no zeros if 13 = 0 and
non-zero element for this output. This was illustrated by the example in Section 3.6, and rank (C) = v where n is the number of states. Solution: Consider the polynomial system matrix F(s)
is discussed in more detail in Section 6.6.1. in (4.65). The first ii columns of P are independent because C has rank n. The last m columns are
(b) Series compensation can counter the effect of zeros in G by placing poles in K to independent of 5. Furthermore, the first n and last in columns are independent of each other~ since
cancel them, but cancellations are not possible for RHP-zeros due to internal stability (see 13 0 and C has full colunm rank and thus cannot have any columns equal to zero. In conclusion,
Section 4.7). F(s) always has rank n + in and there are no zeros. (We need D 0 because if D is non-zero then
(c) The only way to move zeros is by parallel compensation, y = (G + K)u, which, if y the first ii columns of P may depend on the last in columns for some value of s.)
is a physical output, can only be accomplished by adding an extra input (actuator). Exercise 4,3* (a) Consider a SISO system 0(s) = C(sI — A)’B + 13 with Just one state, i.e. A
12 Pinned zeros. A zero is pinned to a subset of the outputs if U: has one or more elements is a scalar~ Find the zeros. Does 0(s) have any zeros for D = 0? (b) Do OK and KG have the same
equal to zero. In most cases, pinned zeros have a scalar origin. Pinned zeros are quite poles and zeros for a 5150 system? Ditto, for a MIMO system?
common in practice, and their effect cannot be moved freely to any output. For example,
the effect of a measurement delay for output y, cannot be moved to output 112. Similarly, a Exercise 4.4 Determine the poles and zeros of
zero is pinned to certain inputs if has one or more elements equal to zero. An example
U:
is C(s) in (4.74), where the zero at —2 is pinned to input v, and to output y~.
13 Zeros of non-square systems. The existence of zeros for non-square systems is common
r~
C(s) [ 5(s+2)
s(s+1O)(s+1)(s—5)
(s+I)(s—5)
(s+2)
5(s+2)
(a+i)ts—5)
(s+i)Cs—5)
1
h21(s—z)
h,2
h22(s—z)
h13 }
channels. As an example consider a plant with three inputs and two outputs Cr(s) =
which has a zero at a = z which is pinned to output
h23(s—z)
detG(s) = 50(s4 — s~ _15s2
s(s+1)2(s+10)(s—5)2
— 23s —10) 50(s + 1)2(s + 2)(s
s(s+1)2(s+10)(s—5)2
—5)
112, i.e. y~ = [0 ~T, This follows because the second row of C, (z) is equal to zero, so How many poles does C(s) have?
-
e
Exercise 4.5 * Given y(s) = C(s)u(s), wit/i C(s) = ~. Determine a state-space realization of
Remark 1 In practice, it is not possible to cancel exactly a plant zero or pole because of modelling
C(s) and then find the zeros of C(s) using the generalized eigenvalue problem. What is the transfer errors. In the above example, therefore, L and S will in practice also be unstable. However, it is
function from u(s) to x(s), the single state of C(s), and what are the zeros of this transferfunction? important to stress that even in the ideal case with a perfect RHP pole—zero cancellation, as in the
above example, we would still get an internally unstable system. This is a subtle but important point. In
Exercise 4.6 Find the zeros for a 2 x 2 plant with
this ideal case the state-space descriptions of L and S contain an unstable hidden mode corresponding
to an unstabilizable or undetectable state.
A=Idll a12] B=F ~ i] 0=1, D=o
1a21 C22J 1b21 b22j
Remark 2 By the same reasoning as in Example 4.16, we get an internally unstable system if we
Exercise 4.7 * For what values of c1 does the following plant have RHP-zeros? use feedforward control to cancel a RHP-zero or to stabilize an unstable plant. For example, consider
Figure 4.2 with the feedback loop removed and K as the feedforward controller. For an unstable plant
A_hO o] B—I 0_frn cii D_10 0
~O —ii, — ‘ ~i0 0]’ ~O 1 (4 80 C(s) = we may use a feedforward controller K(s) ~ and get an (apparently) stable response
y = CKr = r. First, this requires a perfect model with perfect cancellation of the unstable pole at
1. Second, even with a perfect model, we have y = Cd~ where C is unstable, so any signal d~
Exercise 4.8 C’onsider the plant in (4.80), but assume that both states are measum-ed and used for
feedback control, i.e. i/rn = it (but the controlled output is still y = Cx + Du). Can a RHP-zero in entering between the controller and the plant will eventually drive the system out of bounds. Thus, the
C(s) give problems with stability in the feedback system? Can we achieve “peifect” control of v in this only way to stabilize an unstable plant is to move the unstable poles from the RHP to the LHP and this
case? (Answers: No and no). can only be accomplished by feedback control.
Example 4.16 Consider the feedback system shown in Figure 4.2 where C(s) = ~4- and K(s) =
Figure 4.3: Block diagram used to check internal stability of feedback system
From the above example, it is clear that to be rigorous we must consider internal stability
of the feedback system, see Definition 4.4. To this effect consider the system in Figure 4.3
where we inject and measure signals at both locations between the two components, C and
K.Weget
is = (I + KG)—’d~ — K(I + CK~’d~ (4.83)
y = G(I + I(G)1d~ + (1 + GK)—’d0 (4.84)
Figure 4.2: Internally unstable system The theorem below follows immediately:
k ~± Infor,ning the loop transferfunction L = CIC we then cancel the term (s—i), a RH? pole—zero Theorem 4.6 Assume that the components C and K contain no unstable hidden modes.
cancellation, to obtain Then the feedback system in Figure 4.3 is internally stable if and only if allfour closed-loop
L = CK = ~, and S = (I+Ly1 = (481) transfer matrices in (4.83) and (4.84) are stable.
5 s+k
5(s) is stable; that is, the transferfimnctionfro,n d~ toy is stable. However; the transfer function front The following can be proved using the above theorem (recall Example 4.16). If there are
d,, to u is unstable: RHPpole—zemv cancellations between C(s) and K(s), Le. if CK and KG do not bat/i contain
u = —K(1 + CK)’d~ = — (s —~5± k) d9 ~482’ all the RHF-pales in G and K, then the system in Figure 4.3 is internally unstable.
If we disallow RI-IP pole—zero cancellations between system components, such as C and
Consequent/v although the system appeal-s to be stable when considering the output signal y, it is K, then stability of one closed-loop transfer function implies stability of the others. This is
unstable when considering the “internal” signal u, so the syste,n is (ititernally) unstable. stated in the following theorem.
146 MULTIVARIABLE FEEDBACK CONTROL 147
ELEMENTS OF LINEAR SYSTEM THEORY
Theorem 4.7 Assume there are no RHP pole—zero cancellations between C(s) and IC(s);
Exercise 4.11 Give,, the complemnentamy sensitivity functions
*
that is, all RHP -poles in C(s) and IC(s) are contained in the minimal realizations of CrC
and KG. Then the feedback system in Figure 4.3 is internally stable jf and only ~f one of the 2s + 1 —2s + 1
Ti(s) = +0.8s+1 T2(s) = ~2 ~ O.Ss + 1
four closed-loop transfer function matrices in (4.83) and (4.84) is stable.
Proof- A proof is given by Zhou et al. (1996, p. 125). hat can you say about possible RHP-poles or REP-zeros in the corresponding loop transferfunctions.
Ll(s)andL2(s)~’
Note how we define pole—zero cancellations in the above theorem. In this way, RHP pole—
zero cancellations resulting from C or K not having full normal rank are also disallowed. For The following exercise demonstrates another application of the internal stability
example, with C(s) = 1/(s a) and K = 0 we get CIC = 0 so the RHP-pole at .s = a
— requirement
has disappeared and there is effectively a RHP pole—zero cancellation. In this case, we get
5(s) = 1 which is stable, but internal stability is clearly not possible.
Exercise 4.9 * Use iA. 7,) to show that the signal relationships (4.83) and (4.84) may also be written
as
r~i
[U] =M(s~~j; M(s)= [—C r i K1’ ij (4.85)
(a) (b)
From this we get that the system in Figure 4.3 is internally stable if and only if M(s) is stable.
a
-3-
148 MULTIVARJABLE FEEDBACK CONTROL
~MENT5 OF LINEAR SYSTEM THEORY 149
(c) Explain why the conflgutation in Figure 44(c) should not be used if K, contains RHP Poles
This imphes that this configuration should not be used if we want integral action in K, mark 1 If only proper controllers are allowed then Q must be propei since the term (1 — QQ)—i is
9i_proper
(d) Show that the configuration In Figure 44(d) ‘nay be used provided the RH)’ poles (mcludii,g
integrators) of K, are contaiucd in K, and the RH)’ zetvs in K2 Discuss why one may often set ~eniark 2 We have shown that by varying Q freely (but stably) we will always have internal stability,
Kr = 1 tn this case (to give a fourth possibility)
,nd thus avoid internal RHP pole—zero cancellations between K and C This means that although Q
(e) .4flfthfo,m whe,e r goes to both K~ and IC, is show,, in Figure 25 When is this form suitable’
nay generate unstable controllers K, there is no danger of getting a RFIP-pole in K that cancels a
p.FlY-zero in C
The requirement of internal stability also dictates that we must exercise care when we use
a separate unstable disturbance model Gd(s) To avoid this problem one should for state The parameteflzatlon in (4.92) is identical to the internal model control (IMC)
space computations use a combined model for inputs and disturbances i e write the model parameteflzatbon (Moran and Zafiriou, 1989) of stabilizing controllers It may be denved
y = On + Odd in the form directly from the IMC structure given in Figure 4 5 The idea behind the IMC structure is that
y=[G Ne “controller” Q can be designed in an open-loop fashion since the feedback signal only
contains information about the difference between the actual output and the output predicted
where 0 and Gd share the same states see (4 14) and (4 17) from the model
48 Stabilizing controllers
In this section we introduce a parametenzation known as the Q-parameterization or
Youla-parameterization (Youla et al., 1976), of all stabilizing controllers for a plant. By all
stabilizing controllers we mean all controllers that yield internal stability of the closed-loop
system We first consider stable plants for which the parametenzation is easily denved and
then unstable plants where we make use of the coprime factorization
The following lemma forms the basis for parameterizing all stabilizing controllers for stable
plants: Figure 4.5: The internal model control (IMC) structure
Lemma 4.8 For a stable plant 0(s) the negative feedback system in Figure 4.3 is internally
stable if and only ~fQ = K(I + OK) 1is stable. Exercise 4.13 * Show that the IMC structure in Figure 45 is nitemnally unstable if either Q or C is
unstable
Prooft The four transfer functions in (4.83) and (4.84) are easily shown to be
Exercise 4.14 Show that testing internal stability of the IMC structure is equivalent to testing for
K(I + GIfl—~ = (4.88) stability of the four closed-loop t,anyferfitnctions in (4 88)—(4 91)
(I+GKyi 10Q (4 89) Exercise 4.15 * Give,, a stable con tiolle, K What set of plants can be stabilized by this controller’
(I+IC&y’ =1—QG (4 90) (Hint Interchange the roles ofplant and cont, oiler)
0(1 + KGf” = 0(1— QG) (491) 1
which are clearly all stable if & and Q are stable. Thus, with & stable the system is internally stable if 4.8.2 Unstable plants
and only if Q is stable. C
For an unstable plant 0(s), consider its left coprime factorization
As proposed by Zames (1981), by solving (4.88) with respect to the controller K, we find that
a parameterization of all stabilizing negative feedback controllers for the stable plant C(s) is 1 0(s) = M,’Ni (4 93)
given by
K = (I - QG)~Q = Q(I -
(4 92) A parameterization of all stabilizing negattve feedback controllers fom the plant 0(s) is then
(Vidyasagar, 1985)
where the “parameter” Q is any stable transfer function matrix. 1 K(s) (14 QN,)’ (Ur + QM,)
—
(494)
C
150 MULTIVARIABLE FEEDBACK CONTROL 5IEMENTS OF LINEAR SYSTEM THEORY 151
where V,. and U,. satisfy the Bezout identity (4.19) for the right coprime factorization, 4.9.1 Open- and closed-loop characteristic polynomials
and Q(s) is any stable transfer function satisfying the technical condition det(V,.(co) —
Q(oc)N,(oo)) ~ 0. Similar to (4.94), the stabilizing negative feedback controllers can also
be parameterized based on the right coprime factors, M,.. N,. (Vidyasagar, 1985).
Remark 1 With Q = 0 we have ICo = i4~~ U,., so 11,. and U,. can alternatively be obtained from a left
coprime factorization of some initial stabilizing controller (Co.
Remark 2 For a stable plant, we may write G(s) = N1(s) corresponding to M, = I. In this case
K0 = 0 is a stabilizing controller, so we may from (4.19) select U,. = 0 and V,. = I, and (4.94) yields Figure 4.6 Negative feedback system
K = (I QG)’Q as found before in (4.92).
—
Remark 3 We can also formulate the parameterization of all stabilizing controllers in state-space form, We first derive some preliminary results involving the determinant of the return difference
e.g. see page 312 in Zhou et al. (1996) for details. operator I + L Consider the feedback system shown in Figure 4 6, where L(s) is the loop
transfer function matnx Stability of the open-loop system is determined by the poles of L(s)
The Q-parameterization may be very useful for controller synthesis. First, the search over
all stabilizing K’s (e.g. S = (I + GIC)’ must be stable) is replaced by a search over stable
If L(s) has a state-space realization [ Aj~ ~ i e
Q’s. Second, all closed-loop transfer functions (5, T, etc.) will be in the form H1 + H2QH3,
so they are affine3 in Q. This further simplifies the optimization problem. L(s) = C01(sI — A01)’’B0, + D01 (495)
Strongly stabilizable. In theory, any linear plant may be stabilized irrespective of the then the poles of L(s) are the roots of the open-loop characteristic polynomial
location of its RHP-poles and RHP-zeros, provided the plant does not contain unstable hidden
modes. However, this may require an unstable controller, and for practical purposes it is = det(sI — A01) (496)
sometimes desirable that the controller is stable. If such a stable controller exists the plant is
said to be strongly stabilizable. Youla et al. (1974) proved that a strictly proper rational Sf50 Assume there are no RHP pole—zero cancellations between C(s) and IC(s) Then from
plant is strongly stabilizable by a proper controller if and only if every real RHP-zero in C(s) Theorem 4 7 internal stability of the closed-loop system is equivalent to the stability of
lies to the left of an even number (including zero) of real RIIP-poles in C(s). Note that the 5(s) = (I + L(s))’ The state matnx of 5(s) is given (assuming L(s) is well-posed,
i e D0, + I is invertible) by
presence of any complex RHP-poles or complex RHP-zeros does not affect this result, We
then have: A0, = A0, — B01(I + D01)”’C0, (497)
o A strictly proper rational plant with a single real RHP-zero z and a single real RHP-pole This equation may be derived by writing down the state-space equations for the transfer
p. e.g. G(s) = (S~TI~S+1)’ can be stabilized by a stable proper controller if and only if function from r to y in Figure 4 6
z > p.
x=A01x+B01(r—y) (498)
Notice the requirement that C(s) is strictly proper. For example, the plant G(s) =
with z = 1 < p = 2 is stabilized with a stable constant gain controller IC(s) = IC~ with y= C01x+D01(r—y) (499)
—2 <K0 < —1. However, this plant is not strictly proper so the result by Youla et al. (1974) and using (4 99) to eliminate y from (4 98) The closed-loop characteristic polynomial is thus
does not apply. given by
4 det(sI — A01) = det(sI — A0, + B01(I + D01)’c01) (4 100)
4.9 Stability analysis in the frequency domain
Relationship between characteristic polynomials
As noted above, the stability of a linear system is equivalent to the system having no poles
The above identities may be used to express the determinant of the return difference operatoi,
in the closed RHP. This test may be used for any system, be it open-loop or closed-loop.
I + L, in terms of ~, (s) and ~ (s) From (4 95) we get
In this section we will study the use of frequency domain techniques to derive information
about closed-loop stability from the open-loop transfer matrix L(jw). This provides a direct det(I + L(s)) = det(I + C01(sI — A01)’B0, + D01) (4101)
generalization of Nyquist’s stability test for 5150 systems.
Note that when we talk about eigenvalues in this section, we refer to the eigenvalues of a Schur’s formula (A 14) then yields (with A11 = I + D0, A12 = —Ce, A22 = sI —
complex matrix, usually of L(jw) = CK(jw), and not those of the state matrix A. A0, A21 = B01)
~ A function f(s) is affine inn if f(r) = as + b, and is linear ins if f(s) = as. det(I + L(s)) = ~01(s) c (4 102)
152 MULTIVARIABLE FEEDBACK CONTROL LEMENTS OF LINEAR SYSTEM THEORY 153
where c = det(I + D01) is a constant which is of no significance when evaluating the Aim
poles. Note that ~~i(s) and ~~~(s) are polynomials in s which have zeros only, whereas
det(I + L(s)) is a transfer function with both poles and zeros.
Example 4.17 We will rederive expression (4.102) for 8180 systems. Let L(s) —
—
k~~!)_ The
sensitivity function is given by
S(s) =
1 + L(s)
= ~ol(s)
hz(s) + ~ol(s)
(4A03)
DRC
and the denominator is
which is the same as ~cl (s) in (4.102) (except for the constant a which is necessary to make the leading
coefficient of c6ct (s) equal to 1, as required by its definition). Figure 4.7: Nyquist D-contour for system with no open-loop jw-axis poles
Remark 1 One may be surprised to see from (4.103) that the zero polynomial of S(s) is equal to the
lm
open-loop pole polynomial, ~ but this is indeed correct. On the other hand, note from (4.77) that
the zero polynomial of T(s) = L(s)/(1 + L(s)) is equal to z(s), the open-loop zero polynomial.
4
Remark 2 From (4.102), for the case when there are no cancellations between ~ (s) and ~cl (s), we
have that the closed-loop poles are solutions to ‘1
Theorem 4.9 Generalized (MIMO) Nyquist theorem. Let P0, denote the number of open
loop unstable poles in L(s). The closed-loop system with loop transfer function L(s) and
negative feedback is stable if and only if the Nyquist plot of det(I + L(s)) Figure 4.8: Typical Nyquist plot of det(I + LCIw))
(i) makes P01 anti-clockwise encirclemnents of the origin, and
(ii) does not pass through the origin.
Remark 4 We see that for stability det(I + LOw)) should make no encirclements of the origin
The theorem is proved below, but let us first make some important remarks. if L(s) is open-loop stable, and should make P01 anti-clockwise encirclements if L(s) is unstable.
If this condition is not satisfied then the number of closed-loop unstable poles of (I + L(s~’ is
Remark 1 By “Nyquist plot of det(I + L(s))” we mean “the image of det(I + L(s)) as s goes = N + a1, where .Af is the number of clockwise encirclements of the origin by the Nyquist plot
clockwise around the Nyquist D-contour”. The Nyquist D-contour includes the entire jw-axis (s = jw) of det(I + LOw)).
and an infinite semi-circle into the RHP as illustrated in Figure 4.7. The D-contour must also avoid
locations where L(s) has jw-Ixis poles by making small indentations (semi-circles) around these RemarkS For any real system, L(s) is proper and so to plot det(I+L(s)) ass traverses the D-contour
points, we need to considers = jw only along the imaginary axis. This follows since lim,~ L(s) = D0 is
finite, and therefore for s = co [he Nyquist plot of det(I + L(s)) converges to det(I + D0t) which is
Remark 2 In the following discussion, for practical reasons, we define unstable poles or RHP-poles as on the real axis.
poles in the open RI-IP, excluding the jw-axis. In [his case the Nyquist D-contour should make a small
semi-circular indentation into the RHP at locations where L(s) has jw-axis poles, thereby avoiding the Remark 6 In many cases L(s) contains integrators so for w = 0 the plot of det(I ± LOw)) may
extra count of encirclements due to jw-axis poles. “start” from ±joo. A typical plot for positive frequencies is shown in Figure 4.8 for the system
Remark 3 Another practical way of avoiding the indentation is to shift all jw-axis poles into the LHP, 3(—2s + 1) _________
e.g. by replacing the integrator 1/s by 1/(s + e) where eisa small positive number. L = OK, C (lOs + 1)(5s + ~ K = 1,1412,7s
12.7s+ 1 (4.106)
154 MULTIVARIABLE FEEDBACK CONTROL ELEMENTS OF LINEAR SYSTEM THEORY 155
Note that the solid and dashed curves (positive and negative frequencies) need to he connected as to
approaches 0,so there is also a large (infinite) semi-circle (not shown) corresponding to the indentation
1.9.4 Small-gain theorem
of the fl-contour into the RHP at s = 0 (the indentation is to avoid the integrator in L(s)). To find The small-gain theorem is a very general result which we will find useful in the book We
which way the large semi-circle goes, one can use the rule (based on conformal mapping arguments) present first a generalized version of it in terms of the spectral radius, p(L(~w)), which at
that a right-angled turn in the fl-contour will result in a right-angled turn in the Nyquist plot. It then ‘ach frequency is defined as the maximum eigenvalue magnitude
follows for the example in (4.106) that there will be an infinite semi-circle into the RHP. There are
therefore no encirclements of the origin. Since there are no open-loop unstable poles (jw-axis poles are 4 ma~c~A,(L(jw))~ (4109)
excluded in the counting), P01 = 0, and we conclude that the closed-loop system is stable.
Proof of Theorem 4.9: The proof makes use of the following result from complex variable theory Theorem 4.11 Spectral radius stability condition. Consider a system with a stable loop
(Churchill et al., 1974): transfer fttnction L(s) Then the closed-loop system ts stable ~f
Lemma 4.10 Argument principle. Consider a (transfer) function f(s) and let C denote a closed
p(L(jw))<1 Vw (4110)
contour in the complex plane. Assume that:
I. f(s) is “analytic” along C; that is, f(s) has no poles on C.
2. f(s) has Z zeros inside C.
3. f(s) has P poles inside C. Proof The generalized Nyquist theorem (Theorem 49) says that if L(s) is stable, then the closed-
Then the image f(s) as the complex argument s traverses the contour C once in a clockwise direction loop system is stable if and only if the Nyquist plot of det(I + L(s)) does not encircle the origin
will make Z — P clockwise encirclements of the origin. To prove condition (4 110) we will prove the “reverse”, that is, if the system is unstable and therefore
det(I + L(s)) does encircle the origin, then there is an eigenvnlue, A,(LUw)), which is larger than 1
Let .i~f (A, f(s), C) denote the number of clockwise encirclements of the point A by the image f(s) as
at some frequency If det(I + L(s)) does encircle the origin, then there must exist a gain e C (0, 1] and
s traverses the contour C clockwise. Then a restatement of Lemma 4.10 is
a frequency to’ such that
~(0, f(s), C) = Z — P (4107) det(I+ eLOw’)) = 0 (4111)
We now recall (4.102) and apply Lemma 4.10 to the function f(s) = det(I+L(s)) = ~~f~}c selecting This is easily seen by geometric arguments since det(I + eLQjw’)) = 1 fore = 0 Expression (4 Ill)
C to be the Nyquist fl-contour. We assume c = det(I + D01) ≠ 0 since otherwise the feedback system is equivalent to (see eigenvalue properties in Appendix A 2 1
would be ill-posed. The contour fi goes along the jw-axis and around the entire RHP, but avoids open-
loop poles of L(s) on the jw-axis (where çboj~w) = 0) by making small semi-circles into the RHP. fJ .X,(I+eLQjw’)) = 0 (4112)
This is needed to make f(s) analytic along fi. We then have that f(s) hasP = P01 poles and Z =
zeros inside D. Here Pa denotes the number of unstable closed-loop poles (in the open RHP). Equation ~ 1 + eA,(L(jw’)) = 0 for some z (4 113)
(4.107) then gives
~ A,(L(jw’)) = _~ for some i (4 114)
K(0, det(I + L(s)), fi) = Pci — P01 (4108) ~
Since [he system is stable if and only if P~ = 0, condition (i) of Theorem 4.9 follows. However, we ~. .\,(L(jw’))~ ≥ 1 forsomei (4115)
have not yet considered the possibility that f(s) = det(I+L(s)), and hence ~a (s) has zeros on the fi ~‘ p(L(jw’)) ≥ 1 (4116)
contour itself, which will also correspond to a closed-loop unstable pole. To avoid this, det(I+ L(jw))
must not be zero for any value of to and condition (ii) in Theorem 4.9 follows. C C
Example 4.18 SISO stability conditions. Consider an open-loop stable 5150 system. In this case, Theorem 4 11 is quite intuitive, as it simply says that if the system gain is less than 1 in
the Nyquist stability condition states that for closed-loop stability the Nyquist plot of 1 + L(s) should
all directions (all eigenvalues) and for all frequencies (Yw), then all signal deviations will
not encurle the origin. This is equivalent to the Nyquist plot of L(jw) not encircling the point —1 in
eventually die out, and the system is stable
the complex plane
In general, the spectral radius theorem is conservative because phase information is not
I considered For SISO systems p(L(jw)) = JL(~w)~, and consequently the above stability
4.9.3 Eigenvalue loci j condition requires that JL(jw)~ < 1 for all frequencies This is clearly conservative, since
The eigenvalue loci (sometimes called characteristic loci) are defined as the eigenvalues of
j from the Nyquist stability condition for a stable L(s), we only require ILUw)I < 1 at
frequencies where the phase of L(yw) is —180° ± n 360° As an example, let L = k/(s + e)
the frequency response of the open-loop transfer function, .X~(L(jw)). They partly provide a Since the phase never reaches —180° the system is closed-loop stable for any value of k > 0
generalization of the Nyquist plot of LOw) from 5150 to MIMO systems, and with them gain However, to satisfy (4.110) we need k < c, which for a small value of e is very conservative
and phase margins can be defined as in the classical sense. However, these margins are not too 4
indeed
useful as they only indicate stability with respect to a simultaneous parameter change in all ii
of the loops. Therefore, although characteristic loci were well researched in the 1970’s and
greatly influenced the British developments in multivariable control, e.g. see Postlethwaite
I Remark. Later we will consider cases where the phase of L is allowed to vary freely, and in which case
Theorem 4.11 is not conservative. Actually, a clever use of the above theorem is the main idea behind
and MacFarlane (1979), they will not be considered further in this book. 4 most of the conditions for robust stability and robust performance presented later in this book.
I
4
I
I
L
156 MULTIVARIABLE FEEDBACK CONTROL ELEMENTS OF LINEAR SYSTEM THEORY 157
The small-gain theorem below follows directly from Theorem 4.11 if we consider a matrix 3. w(t) is any signal satisfying ilw(t)112 1, but w(t) = 0 fort > 0, and we only measure
IABU ≤ JAIl II-BIl~ Then, at any frequency, we have
norm, which by definition satisfies z(t) fort ≥ 0.
p(L) < JL~ (see (A.117)).
1’he relevant system norms in the three cases are the fl2 , ?-t~ and Hankel norms, respectively.
Theorem 4.12 Small-gain theorem. Consider a system with a stable loop transferfunction The 1-t2 and 71cc norms also have other interpretations as are discussed below. We introduced
L(s). Then the closed-loop system is stable if the ?~t2 and ?-t~ norms in Section 2.8, where we also discussed the terminology. In Appendix
AS.7 we present a more detailed interpretation and comparison of these and other norms.
IIL(iw)Il < 1 Vw (4117) -
where IILII denotes any matrix norm satisfying IIABII ≤ hAil hiBhi. 4.10.1 7~t2 norm
Consider a strictly proper system G(s), i.e. D 0 in a state-space realization. For the 9-L2
Remark 1 This result is only a special case of a more general small-gain theorem which also applies norm we use the Frobenius norm spatially (for the matrix) and integrate over frequency, i.e.
to many nonlinear systems (Desoer and Vidyasagar, 1975).
Remark 2 The small-gain theorem does not consider phase information, and is therefore independent
of the sign of the feedback.
IIG(s)112 4
Al
1f
~
tr(GUw)HGOW)) dw (4.119)
Remark 3 Any induced norm can be used, e.g. the singular value, a(L).
N IG(iw) ~ Gjj (iw)12
We see that G(s) must he strictly proper, otherwise the 9-12 norm is infinite. The 9-la norm can
Remark 4 The small-gain theorem can be extended to include more than one block in the loop. e.g. nlso be given another interpretation. By Parseval’s theorem, (4.119) is equal to the 9-12 norm
L = L1L2. In this case we get from (A.98) that the system is stable if IIL1II . 11L211 < 1, Vw. of the impulse response
Remark 5 The small-gain theorem is generally more conservative than the spectral radius condition in
Theorem 4.11. Therefore, the arguments on conservatism made following Theorem 4.11 also apply to IIG(s)i12 = 4 tr(gT(r)g(r)) dr (4.120)
Theorem 4.12.
N [oCr) ii-=E~5 Igu Cr) 2
Remark 1 Note that G(s) and g(t) are dynamic systems while G(jw) and g(r) are constant matrices
(for a given value of w or r).
4.10 System norms
Remark 2 We can change the order of integration and summation in (4.120) to get
to &
IIG(s)112 = JI9(t)I{2 = ~ f g~(r)I2dr (4.121)
where g~~(t) is the ij’th element of the impulse response matrix, g(t). From this we see that the U2
norm can be interpreted as the 2-norm output resulting from applying unit impulses S~ (t) to each input,
one after another (allowing the output to settle to zero before_applying an impulse to the next input).
Figure 4.9: System C This is more clearly seen by writing iIG(s)112 v’E?.~ IiztQ)Il~ where z~(t) is the output vector
resulting from applying a unit impulse o~(t) to the i’th input.
Consider the system in Figure 4.9, with a stable transfer function matrix U(s) and impulse
In summary, we have the following deterministic performance interpretation of the 9-12 norm:
response matrix g(t). To evaluate the performance we ask the question: given information
about the allowed input signals w(t), how large can the outputs z(t) become? To answer this, IIG(s)112 ma
w(t)= unit impulses
Wz(t)112 (4.122)
we must evaluate the relevant system norm.
We will here evaluate the output signal in terms of the usual 2-norm, The 9-12 norm can also be given a stochastic interpretation (see page 355) in terms of the
quadratic criterion in optimal control (LQG) where we measure the expected root mean
4.10.2 7~1cc norm 4.10.3 Difference between the 7-12 and 7-1~, norms
Consider a proper linear stable system C(s) (i.e. D ~ 0 is allowed). For the 7-1~ norm we To understand the difference between the 72 and 7-1~ norms, note that from (A.127) we can
use the singular value (induced 2-norm) spatially (for the matrix) and pick out the peak value write the Frobenius norm in terms of singular values. We then have
as a function of frequency
IIG(s)IIco ~ maxo(G(jw)) (4 124) IIG(s)112 = ~ f~~(C(iw))~ (4.128)
In terms of petfonnance we see from (4.124) that the 7-1~ norm is the peak of the transfer
function “magnitude”, and by introducing weights, the R~ norm can be interpreted as the From this we see that minimizing the 7-i~ norm corresponds to minimizing the peak of
magnitude of some closed-loop transfer function relative to a specified upper bound. This the largest singular value (“worst direction, worst frequency”), whereas minimizing the 7-12
leads to specifying performance in terms of weighted sensitivity, mixed sensitivity, and so norm corresponds to minimizing the sum of the squares of all the singular values over all
on. frequencies (“average direction, average frequency”). In summary, we have
However, the 7-1~., norm also has several time domain performance interpretations. First, as
discussed in Section 3.3.5, it is the worst-case gain for sinusoidal inputs at any frequency. 7-1,,: “push down peak of largest singular value”.
a 7-12: “push down whole thing” (all singular values over all frequencies).
As t —* cc, let z(w) denote the response of the system to a persistent sinusoidal input
w(w) (phasor notation). Then we have z(w) = C(jw)w(w). At a given frequency w, the Example 4.19 We will compute the W~ and 112 norms for the following SISO plant:
amplification (gain) JIz(w)112/IIw(w) 112 depends on the direction of w(w), and the gain in the
worst-case direction is given by the maximum singular value: C(s) = —i----
s+a
(4.129)
= max
IIz(w)112 The fl2 norm is
w(w)≠O IIw(w)112
The gain also depends on frequency, and the gain at the worst-case frequency is given by the
74,, norm:
IIz(w)112
IIC(s)112 = (~r: / 1 [tan’ (~fl~]o0
~2~a ~a1i-~)
N (4.130)
axis, where
A+BPc’DTC BR1BT
4 127 1,~ ~ it can be shown that IC(s) I ~ ≤ Ig(t) Il’, and this example illustrates that we may have
equality
_CT(I+DR~DT)C —(A+BR-1DTC)T
and R = 721 — DTD, see Zhou et al. (1996, p. 115). This is an iterative procedure, where Example 4.20 There exists no general relationship between the 112 and 71,, norms. As an example
one may start with a large value of ~ and reduce it until imaginary eigenvalues for H appear. consider the two systems
1 Cs
f2(s)= 52+es+l (4.135)
and let e —1 0. Then we have for f, that the 71,, norm is 1 and the 7(2 ~?on,, is infinite. For f2 the 7-1,,
norm is again 1 (at w = 1), but now the 7(2 norl,l is zero.
160 MULTIVARIABLE FEEDBACK CONTROL OF LINEAR SYSTEM THEORY 161
Why is the 1-1~~ norm so popular? In robust control we use the 9~l,3 norm mainly because
it is convenient for representing unstructured model uncertainty, and because it satisfies the
multiplicative property (A.98):
v1i2~~ Ihw(~)hI~dr where Ca(s) denotes a truncated or residualized balanced realization with k states; see
The Hankel norm is a kind of induced norm from past inputs to future outputs. Its definition Chapter 11. The method of Hankel norm minimization gives a somewhat improved error
is analogous to trying to pump a swing with limited input energy such that the subsequent I
bound, where we are guaranteed that hG(s) Ga(s)hhoo is less than the sum of the discarded
—
length of jump is maximized as illustrated (by the mythical creature) in Figure 4.10. S Hankel singular values. This and other methods for model reduction are discussed in detail
It may be shown that the [Jankel norm is equal to S in Chapter 11 where a number of examples can be found.
IIG(s)IhH = \/p(PQ) (4 139) 1 Example 4.22 We want to compute anaLytically the various system norms for C(s) = 1/(s + a)
using state-space methods. A state-space realization is A = —a, B = 1, C 1 and V 0.
where p is the spectral radius (absolute value of maximum eigenvalue), P is the controllability The controllability Gramian P is obtained from the Lyapunov equation AP + PAT = _BBT ~
Gramian defined in (4.44) and Q the observability Gramian defined in (4.49). The name —-aP — aP = —iso P = 1/2a. Similarly~ the observability Gramian isQ = 1/2a. From (4.123) the
S
“Hankel” is used because the matrix PQ has the special structure of a Hankel matrix 712 norm is then _______________
(which has identical elements along the “wrong-way” diagonals). The corresponding Hankel S
hbG(s)hh2 = ;/tr(BTQB)
singular values are the positive square roots of the eigenvalues of PQ, S The eigenvalues of the Hamiltonian matrix H in (4.127) al-c
= (4 140) A(H) =
—
1/721
a] = ±~a2 — 1/72
S
S
‘I
162 MULTIVARIABLE FEEDBACK CONTROL
IIG(s)II= = 1/a
The Hankel matrix is PQ = 1/4a2 andfrom (4.139) the Hankel nor/n is 5
IIC(s)iIu = ,/p(PQ) = 1/2a
These results agree with the frequency domain calculations in Example 4.19.
Exercise 4.16 Let a = 0.5 and = 0.0001 and check numerically the results in Examples 4.19,
LIMITATIONS ON
4.20, 4.21 and 4.22 using, for example, the Matlab Robust Control toolbox commands norm ( eye, 2),
norm (sys, ±nf), and for the Hankel norm, max (hankelsv ( sys) ).
PERFORMANCE IN 5150
SYSTEMS
4.11 Conclusion
This chapter has covered the following important elements of linear system theory: In this chapter, we discuss the fundamental limitations on performance in 5150 systems. We summarize
system descriptions, state controllability and observability, poles and zeros, stability and these limitations in the form of a procedure for input—output controllability analysis, which is then
stabilization, and system norms. The topics are standard and the treatment is complete for applied to a series of examples. Input—output controllability of a plant is the ability to achieve acceptable
the purposes of this book. control performance. Proper scaling of the input, output and disturbance variables prior to this analysis
is critical.
These rules are reasonable, but what are “self-regulating”, “large”, “rapid” and “direct”? A
major objective of this chapter is to quantify these terms.
III. How might the process be changed to improve control? For example, to reduce the
effects of a disturbance one might in process control consider changing the size of a buffer
tank, or in automotive control one might decide to change the properties of a spring. In other
Whether or not the last two actions are design modifications is arguable, but at least they
address important issues which are relevant before the controller is designed. 5.1.2 Scaling and performance
Input—output controllability analysis is applied to a plant to find out what control The above definition of controllability does not specify the allowed bounds for the
performance can be expected. Another term for input—output controllability analysis is displacements or the expected variations in the disturbance; that is, no definition of the
peiforinance targeting. Early work on input—output controllability analysis includes that of desired performance is included. Throughout this chapter and the next, when we discuss
Ziegler and Nichols (1943) and Rosenbrock (1970). Moran (1983) talked about “dynamic controllability, we will assume that the variables and models have been scaled as outlined in
resilience” and made use of the concept of “perfect control”. Important ideas on performance Section 1.4, so that the requirement for acceptable performance is:
limitations are also found in Bode (1945), Horowitz (1963), Frank (1968a; 1968b),
Kwakernaak and Sivan (1972), Horowitz and Shaked (1975), Zames (1981), Doyle and a For any reference r(t) between —R and R and any disturbance d(t) between —i and 1,
Stein (1981), Francis and Zames (1984), Boyd and Desoer (1985), Kwakernaak (1985), keep the output y(t) within the range r(t) ito r(t) + 1 (at least most of the time), using
—
Freudenberg and Looze (1985; 1988), Engell (1988), Moran and Zafiriou (1989), Middleton an inputu(t) within the range—ito 1.
(1991), Boyd and Barratt (1991), Chen (1995), Seron et al. (1997), Chen (2000) and Havre
We will interpret this definition from a frequency-by-frequency sinusoidal point of view, i.e.
and Skogestad (2001). We also refer the reader to two IFAC workshops on Interactions
d(t) = sin wt, and so on. With a = y — r we then have:
between process design and process control (Perkins, 1992; Zafiriou, 1994) and the special
issue of IEEE Transactions on Automatic Control on Petformance limitations (Chen and For any disturbance d(w)j < 1 and any reference Ir(w)I < R(w), the
Middleton, 2003). performance requirement is to keep at each frequency w the control error
e(w) I < 1, using an input u(w) I ≤ i.
5. Li Inpnt—output controllability analysis It is impossible to track very fast reference changes, so we will assume that R(w) is
frequency dependent; for simplicity, we assume that R(w) is R (a constant) up to the
Surprisingly, given the plethora of mathematical methods available for control system design,
frequency W,- and is zero above that frequency.
the methods available for controllability analysis are largely qualitative. In most cases,
It could also be argued that the magnitude of the sinusoidal disturbances should approach
the “simulation approach” is used, i.e. performance is assessed by exhaustive simulations.
zero at high frequencies. While this may be true, we really only care about frequencies within
1~
166 MULTIVARIABLE FEEDBACK CONTROL LIMITATJONS IN SISO SYSTEMS 167
the bandwidth of the system, and in most cases itis reasonable to assume that the plant 5.2 Fundamental limitations on sensitivity
expenences sinusoidal disturbances of constant magnitude up to this frequency Similarly
it might also be argued that the allowed control error should be frequency dependent For In this section, we present some fundamental algebraic and analytic constraints on the
example, we may require no steady-state offset, i.e. e should be zero at low frequencies. sensitivities S and T, including the waterbed effects Bounds on the peak of 181 and other
However, including frequency variations is not recommended when doing a preliminary
closed-loop transfer functions are presented in Section 5 3
analysis (however, one may take such considerations into account when interpreting the
results).
Recall that with r = RF (see Section 1.4) the control error may be written as 5.2.1 S plus T is one
e=y—r=Gu+Gdd-RF (5 1) From the definitions S = (I + L)’ and T
= L(I + L)3 we derive
S+T=I (52)
where fl is the magnitude of the reference and IiXw)I ~ 1 and Id(w)I ~ 1 are unknown
signals. We will use (5.1) to unify our treatment of disturbances and references. Specifically, (or S + T = 1 for a SISO system) Ideally, we want S small to obtain the benefits of feedback
we will derive results for disturbances, which can then be applied directly to the references (small control erroi for commands and disturbances), and T small to avoid sensitivity to
by replacing G~ by —fl; see (5.1). noise which is one of the disadvantages of feedback Unfortunately, these requirements are
not simultaneously possible at any frequency as is clear from (5 2) Specifically, (5 2) implies
that at any frequency eithei ISUw)I or ITUW)l must be larger than or equal to 05, and also
5.1.3 Remarks on the term controllability that IS(aw)I and IT(aw)I at any frequency can diffet by at most I
The definition of (input—output) controllability on page 164 is in tune with most engineers’
intuitive feeling about what the term means, and was also how the term was used historically 5.2.2 Interpolation constraints
in the control literature. For example, Ziegler and Nichols (1943) defined controllability
as “the ability of the process to achieve and maintain the desired equilibrium value”. Ifp is a RHP-pole of the plant G(s) then
Unfortunately, in the 1960’s “controllability” became synonymous with the rather narrow
concept of “state controllability” introduced by Kalman, and the term is still used in this ~N~5=1, 8(p)=0~ (53)
restrictive manner by the systems theory community. State controllability is the ability to Similarly, if z is a RHP-zero of G(s) then
bring a system from a given initial state to any final state within a finite time. Howevei~
T(z)=0, S(z)=1~ (5.4)
as shown in Example 4.5 this gives no regard to the quality of the response between
these two states and later, and the required inputs may be excessive. The concept of state These interpolation constraints follow from the requirement of internal stability as shown in
controllability is important for realizations and numerical calculations, but as long as we (4.86) and (4.87). The conditions clearly restrict the allowable 8 and T and prove very useful
know that all the unstable modes are both controllable and observable, it usually has little in Section 5.3.
practical significance. For example, Rosenbrock (1970, p. 177) notes that “most industrial We can also formulate interpolation constraints resulting from the loop transfer function
plants are controlled quite satisfactorily though they are not [state] controllable”. And L(s) = G(s)K(s). The fundamental constraints imposed by the RI-IP-poles and zeros of
conversely, there are many systems, like the tanks in series (Example 4.5), which are state G(s) will still be present, whereas the new constraints, identical to (5.3) and (5.4), arising
controllable, but which are not input—output controllable. To avoid any confusion between from the RHP-poles and zeros of K(s), are to some extent under our control and therefore
practical controllability and Kalman’s state controllability, Moran (1983) introduced the term
dynamic resilience. However, this term does not capture the fact that it is related to control,
I not fundamental.
so instead we prefer the term input—output controllability, or simply controllability when it is
clear that we are not referring to state controllability. 5.2,3 The waterbed effects (sensitivity integrals)
Where are we heading? In this chapter we will discuss a number of results related to
A typical sensitivity function is shown by the solid line in Figure 5.1. We note that 181
achievable performance. In Sections 5.2 and 5.3, we present some fundamental limitations
has a peak value greater than 1; we will show that this peak is unavoidable in practice.
imposed by RHP-poles and RHP-zeros. Readers who are more interested in the engineering
Two formulae are given, in the form of theorems, which essentially say that if we push the
implications of controllability may want to skip to Section 5.4. Many of the results can be
sensitivity down at some frequencies then it will have to increase at others. The effect is
formulated as upper and lower bounds on the bandwidth of the system. As noted in Section
similar to sitting on a waterbed: pushing it down at one point, which reduces the water level
2.4.5, there are several definitions of bandwidth (wB, w~ and WBT) in terms of the transfer
locally, will result in an increased level somewhere else on the bed. In general, a trade-off
functions 8, L and T, but since we are looking for approximate bounds we will not be too
between sensitivity reduction and sensitivity increase must be performed whenever:
concerned with these differences. The main results are summarized at end of the chapter in
terms of eight controllability rules. 1. L(s) has at least two more poles than zeros (first waterbed formula), or
2. L(s) has a RHP-zero (second waterbed formula).
I
L
168 MULTIVARIABLE FEEDBACK CONTROL ~jMITATIONS IN SISO SYSTEMS 169
o
10 f In ISOw)ldw Z Re~) (5 5)
I in —I
proof See Doyle et al (1992, p 100) or Zhou et al (1996) The generalization of Bode’s cnteiion to
unstable plants is due to Freudenberg and Looze (1985, 1988)
Frequency [rad/s]
For a graphical interpretation of (5 5) note that the magnitude scale is logarithmic whereas
the frequency scale is linear
Figure 5.1: Plot of typical sensitivity, SI, with upper bound 1/~wp~ Stable plant. For a stable plant (5 5) gives
P00
Pole excess of two: first waterbed formula /
Jo
ln ISOw)Idw = 0 (56)
To motivate the first waterbed formula consider the open-loop transfer function L(s) =
and the area of sensitivity reduction (ln I~I negative) must equal the area of sensitivity
s(s+i) As shown in Figure 5.2, there exists a frequency range over which the Nyquist plot of increase (ln 1~1 positive) In this respect, the benefits and costs of feedback are balanced
LOw) is inside the unit circle centred on the point —1, such that 1 +LI, which is the distance
exactly, as in the waterbed analogy From this we expect that an increase in the bandwidth (S
between Land —1, is less than 1, and thus 151 = 1 +L1’ is greaterthan 1. In practice, L(s)
smaller than 1 over a larger frequency range) must come at the expense of a larger peak in
will have at least two more poles than zeros (at least at sufficiently high frequency, e.g. due
I to actuator and measurement dynamics), so there will always exist a frequency range over ‘SI
which I~I is greater than 1. This behaviour may be quantified by the following theorem, of Remark. Although this is true in most practical cases, the effect may not be so sinking in some cases,
which the stable case is a classical result due to Bode. and it is not strictly implied by (5 5) anyway This is because the increase in area may come over a large
frequency range, imagine a very large waterbed Consider S(jw)I = 1 + 6 for w C [wi,w2], where
m 6 is arbitrarily small (small peak), then we can choose wi arbitrary large (high bandwidth) simply by
selecting the inierval [Wi, w2]to be sufficiently large However, in practice the frequency response of L
L(s) = s(s+i) has to roll off at frequencies above the bandwidth frequency w~ and it is required that (Stein, 2003)
f 0 ln~S(jw)jdw 0
(5.7)
/ JS(jw)~>1
flnIs(iw)I .w(z,w)dw ~‘ lnfl ~ (5.9)
w(z,w)= Zr 1 (5.10)
z2+w2 zl+(w/z)2
—I LOw)
and if the zero pair is complex (z = z ± jy)
Lm(8)~ a:
w(z,w) =
x2+(y~~w)2
~— + -~
x2+(y+w)2
(5.11)
L(s) 1
—m
— —2.0
Figure 5.3: Additional phase lag contributed by REP-zero causes ~ > I Proof: See Freudenberg and Looze (1985; 1988). 0
Note that when there is a RFIP-pole close to the RHP-zero (pi z) then oo. This
—* —+
As a further example, consider Figure 5.4 which shows the magnitude of the sensitivity is not surprising as such plants are in practice impossible to stabilize.
function for the following loop transfer function: The weight w(z,w) effectively “cuts off” the contribution from In 8~ to the sensitivity
integral at frequencies w > z. Thus, for a stable plant where 8~ is reasonably close to 1 at
L(s) = T k = 0.1,0.5,1.0.2.0 (5.8) high frequencies we have approximately
The plant has a REP-zero at z = 2, and we see that an increase in the controller gain k, lnlS&w)Idw~0 (5.12)
corresponding to a higher bandwidth, results in a larger peak for 8. For k = 2 the closed-
loop system becomes unstable with a pair of complex conjugate poles on the imaginary axis, This is similar to Bode’s sensitivity integral relationship in (5.6), except that the trade-off
and the peak of 8 is infinite.
between S less than 1 and S larger than 1 is done over a limited frequency range. Thus,
in this case the waterbed is finite, and a large peak for 181 is unavoidable if we try to push
down 181 at low frequencies. This is illustrated by the example in Figure 5.4 and further by
the example in Figure 5.5. In Figure 5.5 we plot In 181 as a function of w (note the linear
frequency scale) for two cases. In both cases, the areas of ln S below and above 100 = 1
C/)
a (dotted line) are equal, see (5.6), but for case 2 this must happen at frequencies below the
‘C
z RHP-zero at z = 5, see (5.12), and to achieve this the peak of 1821 must be higher.
100
a
‘~~iU 10’
0,
U
F
.E 100
Frequency 00
Figure 5.4: Effect of increased controller gain on ISI for system with RHP-zero at z = 2, L(s) = :110_Il
11.
0 2 3
Theorem 5.2 Weighted sensitivity integral (second waterbed formula). Suppose that Frequency (linear scale)
L(s) has a single real RFIP-zero z or a complex conjugate pair of zeros z = a: ± jy, and has
N~ RHP -poles, ~j. Let p~ denote the complex conjugate ofp~. Then for closed-loop stability Figure 5.5: Sensitivity S = jf~
corresponding to L1
—
—
2
~-~pry (dashed line) and L2 = L1 Z~t~
0+0
IIf(s)IIcc maxlf(jw)I
=
~2
We first consider bounds on the weighted sensitivity (wpS) and the weighted
IITIIcc M~ > IJ Iz,+pl
N,
Ie~°I (5.18)
~ I~ —p1
complementary sensitivity (wrT). The weights nip and wr are useful if we want to specify
that I~I and ITI should be small in some selected frequency region.
The bounds (5,16), (5.17) and (5.18) are tightfor the case with a single real RHP-pole p. For
5.3,1 Minimum peaks for S and T etatnple, with a single RHP-pole, minjc IITIlcc Mr,min = MPZJ . Ie~°I.
Theorem 5.3 Sensitivity peak. For closed-loop stability the sensitivity function must satisfj’ Note that (5.18) also imposes a bound on the peak of S for plants with a time delay. From
for each RHP-zero z of 0(s) (5.2), I~l and ITI differ by at most 1, so
N,,
11511cc ≥ IITIIoc
lJwpSII~ ≥ wp(z)~ ~ 13±1u1 (5.13)
1 (5.19)
i=1 Iz—p~l and a peak in ITI also implies a peak in I~l~ Example 5.1 on page 175 further illustrates this
M,,,. point.
where p~ denote the N~ RHP -poles of 0(s). If C(s) has no RHP-poles the bound simpljfies
Proof of (5.13): The bounds for S were originally derived by Zames (1981). The results can be derived
to
using the interpolation constraints S(z) = 1 and T(p) = 1 given above. In addition, we make use of the
IIwpS~~00 ≥ Iwp(z)J (5.14) maximum modulus principle for complex analytic functions (e.g. see maximum principle in Churchill
Without a weight the bound (5.13) siinplifles to et al., 1974), which for our purposes can be stated as follows:
N,, Maximum modulus principle. Suppose f(s) is stable (i.e. f(s) is analytic in the complex RHP’).
Then the maximum value of If(s) I for s in the RHP is attained on the region’s boundary, i.e. somewhere
11511cc = Ms ≥ fJ Iz—piI
lz~Pil (5.15) along the jw-axis. Hence, we have fora stable f(s)
IIf(jw)IIoc maxlf(jw)I
w
> f(sn)I Vs0 ERHP (5.20)
The bounds (5.13), (5.14) and (5.15) are tight for the ease with a single real RHP-zero z
Remark. Expression (5.20) can be understond by imagining a 3-D plot of If(s)I as a function of the
and no time delay. Here “tight” means that there exists a controller (possibly improper) that
complex variables. In such a plot If(s)I has “peaks” at its poles and “valleys” at its zeros. Thus, if f(s)
achieves the bound (with equality). For example, with a single RHP-zero and no time delay, has no poles (peaks) in the RHP, we find that If(s)I slopes downwards from the LFIP and into the RHP.
minK IISlI~ = Ms,min =
A function f(s) of the complex variable s is analytic at a point so if its derivative exists not only at so but at each
We note that the bound (5.15) approaches infinity, as the distance lz i’d approaches
— points in some neighbourhood around so~ If the derivative does not exist at 5o but does so in some neighbourhood
zero. A time delay imposes additional problems for stabilization, but there does not exist a of so. then so is called a singular point. We are considering a rational transfer function f(s). which is analytic
except at its poles (so = p). The poles are singular points.
tight lower bound for S in terms of the time delay. However, similar bounds apply for the
complementary sensitivity T and here the time delay also enters the tight bound.
/
5 = 5aSm, Sa(s) =
$ — p.
T~i’~ (5.21) Remark
taken into3 account,
These bounds may be
see Chapter 6. generalized to MIMO systems if the directions of poles and zeros are
Here Sm is the “minimum-phase version” of S with all RHP-zeros mirrored into the LHP. 5a (s) is Example 5.1 Unstable plant with time delay. The plant
all-pass with IS~Ow)I = 1 at all frequencies. (Remark: There is a technical problem here with jw-axis —0.5s
poles: these must first be moved slightly into the RHP.) The weight top(s) is as usual assumed to be C(s) e
stable and minimum-phase. Consider a RHP-zero located at z, for which we get from the maximum 5—3
modulus principle
has p = 3 and 6 = 0.5. Since pO = 1.5 is larger than 1, the peak of 171 will be large, and we will have
IIwpS~Joc = max wpS(jw) = max J toES,,, ~i~j ~ ~ difficulty in stabilizing the plant. Specifically, from (5.18) itfollows thatfor any cont roller we must have
where Sm(z) = S(z)S~(z)~’ = 1 S~(z)’. This proves (5.13). Chen (1995) and Chen (2000, p. 11211= ≥ MT,min = e~’5’3 = e~’5 = 4.48
1107) provide an alternative proof of the bound, based on the integral relationship (5.9). The tightness
This bound is tight in the sense that there exists a controller that achieves it. The peak of the sensitivity
of the bound was first proved by Havre and Skogestad (1998). An alternative proof is given by Chen
S must also be large since
(2000, p. 1109).
0
Proof of (5.16): The proof of (5.16) is similar to the proof of (5.13). We write T = TaTm, where Ta 11511cc > Ms,min ≥ Mr,n,jn 1 4.48— 1 3.38
contains the RHP-zeros z~ and the time delay; see also Theorem 5,5. 0 This bound is not tight, so the actual value of kIS,m~n may be higher than 3.38, but not higher than
From the bounds in Theorem 5.3 and 5.4, we note that 538, since the peaks of ISI and ITI differ by at most 1. The unavoidable large values for IISIIcc and
IITIlcc for this process imply poor performance and robustness problems.
oS is primarily limited by RHP-zeros. The bound IwpS~ ~ Iwp(z)~ shows that we cannot
freely specify the shape of 181
for a plant with a RHP-zero z. Example 5.2 Plant with complex RHP poles. The plant
• T is primarily limited by RHP-poles. The bound IwrTI ≥ IWT(P)I shows that we cannot — 2
freely specify the shape of ITI for a plant with a RHP-pole p. C(s)=10. s2_2s±5 (5.24)
• The terms ~ and Mp:5 show that the limitations are more serious if we have both RHP
poles and RHP-zeros. Large peaks for S and T are unavoidable if we have a RHP-zero and has a RHP-zero at z = 2 and RHP-poles at p = 1 ± j2. From (5.22), a tight lower bound on 11511=
and lIT 11cc is
RHP-pole located close to each other. (2 ± 1)2 + 2~
2.6
Remark 1 Let ~ and MT,~;~ denote the lowest achievable values for 1511cc and IITII= (2— 1)2 + 2~
respectively; that is, minj~ IJSII ~ Ms,m;,, and min5< IITIIcc ~ MT,m~n. Chen (2000) shows that We can also use (5.23), where M,,~1 = ~ = 1.61, but this does not give a tight bound since we
the bound (5.15) is also tight for 11711= and the bound (5.18) (for the case with no Lime delay) is also
have two RI!!’ poles.
tight for 11511=. Then, for a plant with a single REP-zero z (and no time delay) we have the following
tight lower bound on IISII= and IITII=:
The effect of combined RHP-poles and RHP-zeros is further illustrated by examples on
N,, page 179.
kls,m;n = MTm;n l~l’ Iz ±p~l
j1 Iz — I (5.22) Stabilization. The results, e.g. (5.23), show that large peaks on 8 and T are unavoidable if
M~,,1 such a plant is impossible to stabilize. However, in theory, any linear plant may be stabilized
and for the case with a single RHP-pole p (and no time delay) we have the following tight lower buund irrespective of the location of its RHP-poles and RHP-zeros, provided the plant does not
on IISII= and IITII=: contain unstable hidden modes (e.g. corresponding to the situation p = z); see also page 150.
N,
Ms,min = MT,m~n = zj +pI
.~ Jzj ~l
— (5.23) 5.3.2 Minimum peaks for other closed-loop transfer functions
In this section, we provide bounds on peaks for some other closed-loop transfer functions. To
These Light bounds are further generalized in (6.8) (page 224) to any number of RHP-poles and RHP- motivate, recall from (2.19) and (2.20) that the closed-loop control error e y r and the —
C) H~E~ ~
F o~°°~, ~ ~ ~ 0
~~s- -‘~—a——-n
taa~n ~o a
a 0
pa C, ;9*°~ oS0~~Cn C) ~‘a,S HC-°°C~
C,, Iv 0~- 0~~o
0 — ~ xF~~~ç)°-~’ -. ~0_ C-’’-HE Cl) H on
~
II> a
C) ~~ga ‘~ ~$1~ 9 ~ ~1
0 00 I CJ~) 0
-t 8g• ~a~3 0 II .< ZCo
C-, Na
Co~
0 ~-°~t—~ ~°~C~ a~0 u’C0 0
Ct l~ ~raoa ~ ~ w0~
a -i._ ~0 IC Cd) I
0 ~gCJ) o’a ~ ~j~-~’o a. t
0 aNa’~C) ~ ~ ~o -aa~ ~ ~ 9 a,
~ ‘-DC,,
+ ~ en
~ 0
~ 1! ~ ~ ~0 ~ Na
C-,
~irir ~a to ~-‘ a— ~.2, ~ ~
0 1+ lIC o~-~ ao~ ~ ~ 0 ~
0 Cl) ~ Ca~ Cd)
~1 C) °~
=a0~_ ~
0~ ~9
0~ ~., a-5C -~ C)
a.
C)
a~
Na C
—a 8 ~V’- ~‘a
~CoCfl 0,
~ 0- ~ -. 0- 0-
-C.- ‘C.
~ 0 Na = — a- a- p,
H --~a.
=Z~.~_0 0,L0n 0-ZN + 00 C)Q
a. ~
C0 a,Co o~ coCI) ~._ tn
C 4:1
0 i~flL ~ ~ s ~ lii
a
a a_wa:’— ~ I 0-
0~ ~-ao oa
F~~5 ~ -~ ~- ~-ga
Co -a~, ~ 0 —no C
Pinto
~pa
~gga ~ ~s Z~0 0~-,Ø. n C)
en ~ ~ ~9 ~ 0’ 0
a
‘-4 t0~_ 0
0~00 ~ C
a:’C,’ q~-~ S- ~ °~ H 0
9g~~o a~go C-’d~~ ~ z
t ~ r Gn 0
pa Pt S
CM
a Na
—a an—
~ ~ ~-
Pt
Na Na
C-, 0~ C~’i
I —. ‘o~Z ‘<0-~ —
494
H
-4
0
Table 5.1: Bounds on peak of important closed-loop transfer functions CO
-4
Want IMII,~ smailfor Bound on 11M1100 z
Co
-4
It’d’ Signals Stability ?vhustness Special case General case C)’
(see page 22) (see page 303) (tight only for N~ = 1 and/or N0 = 1) (including MIMO) 0
Co
Performance Relative inverse Co
1. S tracking (pole) uncertainty ~ (5.15) or ~ (5.23) (6.8) S
(e_=_—Si’) Co
Performance Relative additive
2. T noise (zero) uncertainty M~~4 (5.22) or Iv1~ . Ie~°I (5.18) (6.8)
(e = —Tn) (~) and (6.16) for delay system
Input usage Additive (zero)
3. KS (it = KS(r — it)) uncertainty G’(p)I = IG;1(p)I Ie~I 1/g51(U(G)) (5.30)
— (&~) (5.31)
Performance Gd = C: IGd,rns(z)I - N1~P1 (5.29) (6.12) with W1 = I and
4.’ SG~ disturbance Inverse (pole) = Gm(z)I for Gd = G (5.28) Wa =
(e = SGdd) uncertainty (~iA)
Input usage Gd = C: IG,(1)~Cd,ms(P)I
s.’ It’SGd disturbance Relative additive = IG;~O)Gd,msO’)I 1/~H(U(Gd,rnsG))
(it = KSGdd) (zero) uncertainty (5.34) (5.33)
— (&)
= fl~49~l f~’J~,MJZ~ =
* Special case: Input disturbance (Gd = C)
-4
-4
—4
178 MULTIVARIABLE FEEDBACK CONTROL LIMITATIONS IN SISO SYSTEMS 179
Bounds on SGd. In the general disturbance case, Gd ≠ G and we want to keep IISGdIIm
winch proves (531)
small to reduce the effect of disturbances on the outputs. This case can be handled similar to
SC by replacing Cms by Gd,ms in (5.28) to get Example 5.3 For the unstable plant C(s) = we have C~ (s) = ~-4~
andfrom (531) IIKSII0 ≥
IG~ (p) ~ = 6 That is, irrespective of the controlle,, the closed-loop transfer function KS, from plant
output (e g measurement noise) to plant input, must exceed 6 in magnitude at some frequency
IISGaII~ ≥ IGd,ms(Z)I ~ Iz +p~J (5.29)
C— Exercise 5.2 Fom a system with a single unstable polep, show that the two bounds on IIKSII=, (530)
and (531), are equivalent (Hint Use (4 140) to find the minimum (and only) Hankel singular value of
= (C(s) (s — ~))I5~ /(s —
Rounds on KS. The peak on the transfer function KS is required to be small to avoid
large input signals in response to noise and disturbances; see (5;26). In particular, this is Bounds on KSGd. For arbitrary disturbances, the bound (5 30) can be generalized
important for an unstable plant, where a large value of IIICSII~ is likely to cause saturation as (Kariwala, 2004)
in is resulting in difficulties in stabilization. II1<8CdIIoo ≥ 1IUH(U(GdmsG)) (5 33)
Let H denote the smallest Hankel singular value and U(G)* be the mirror image of the
anti-stable part of C. Glover (1986), who considered robustness against additive uncertainty, where U(Gj~,15G)* is the mirror image of the anti-stable part of GJ’,~8G Note that any
proved that unstable modes in Gd must be contained in C such that they are stabilizable with feedback
JIKSII~≥1/cLH(U(G)*) ~ 30 control Under the same condition, the bound (5 31) may be generalized using (5 32) to get
(Havre and Skogestad, 2001)
The bound (130) is tight, in the sense that there always exists a controller (possibly
improper) that achieves the bound. For a stable plant there is no lower bound, as in this II1<SCdIIcc > ICms(P)’Cd,,ns(P)I Mp:, I&’°l IG5(P1’Gd,ms(P)I (534)
case, minj~ IIICSII~ = 0, which can be achieved by K = 0.
A simpler bound is also available, since for any RHP-polep, o~jj(U(G)*) ≤ JG8 (p) where , Here Gd,,,,s denotes the “stable and minimum-phase version” of Cd with both the RHP
G3(s) is the “stable version” of C with its RHP-poles mirrored into the LHP; see (5.27). poles and RHP-zeros mirrored into the LHP The bound is tight for a single RHP-pole p The
Equality applies for a plant with a single real RHP-pole p. This gives the bound (Havre and bounds (5 30) and (5 33) can also be used for delay systems, since although the delay system
Skogestad, 2001) itself is irrational, its anti-stable part is rational (Kanwala, 2004)
IIKSII~ ≥ IG;’(pN (5 31) Example 5.4 Considem a plant and disturbance model
which is tight for plants with a single real RHP-pole p. This bound also applies to plants with
time delay. Gd 05
(s—3)(lOs+1) (s—3)(02s+1)
Proof of (5.31): We first prove the following generalized bound (Havre and Skogestad, 2000:
We have Cs(s) = (s+3)~os+1) and Cd,,,,~ (s-i-3)(02a+1) Notice that the time delay in Cd drops
Theorem 5.5 Let VT be a (weighted) closed-loop transfer function, where T is the conzplementa;y out in Cd,ms With p = 3, (534) gives the following lower bound oil the peak of the tmansfer function
sensitivity function. Then for closed-loop stability we must require for each RHP-pole p in C, from a distum bance to plant input
where 14,3 is the “minimum-phase and stable version’ of V (with its RHP-poles and RHP-zeros
Example 5.5 Considem an unstable plant (p ≥ 0) with a RHP-zemo (z ≥ 0) and a tune delay (0 ≥ 0),
mirrored into the LHP), and zj denote the N: RHP-zeros of C. If C has no RHP-ze,vs the bound
given by
is simply IIVTII~ ≥ V,,,~ (p). The bound (5.32) is tight (equality) for the case when C has only one
RHP-pole. C(s) = _~_ (s —z) e6~ (535)
s—p (s+z)
Proof. C has RHP-zeros at zj, and therefore T must have RHP-zeros at z~, so we write T = We have ICj~p)l = I4~f~}e°~lm=v = ~ and flomn (531) we must have fom any
TaTm with Ta(s) = 11~ ~ Next, note that IIVTII0 = IIVmsTmsIIco = IIVm5T,nIIoc. Now, stabilizing contivllem
consider a RHP-pole located at p. and use the maximum modulus principle to show that IIVTH~ ≥
IIKSII= ≥ 1C8(pYt = ~ . e~ (5.36)
IVms(p)Tm(p)I = IVms~1p)T(p)Ta(pY~I = IVmsIp) . l-11~ ~ which proves (5.32). To prove
(5.31) we make use of the identity KS = C’CKS = C’T. Use of (5.32) with V = C’ then Since is = —KS(Cdd + is), we see fm-ammi the first term that the m-equii-ed input is is large if I~I is lam-ge;
gives that is, if the unstable mnode is “fast “. In addition, we note that the exponential term e°~ grows sharply
forO> i/p.
lIKSII~ ? lCmsM~I II Izixi +pI
—p1
= IC5~’I
I For example, consider the following plant, which we will show is impossible to control in practice:
a
C(s) = —~— ~ e~5~
s—3 s+6
180 MTJLTIVARIABLE FEEDBACK CONTROL LIMITATIONS IN 5150 SYSTEMS 181
We see that at frequencies where feedback is effective and T I (these arguments also
apply to MIMO systems and this is the reason why we here choose to use matrix notation), This controller is “ideal” in the sense that it may not be realizable in practice because the cost
the input generated by feedback in (5.39) is the same as the perfect control input in (5.38). function includes no penalty on the input u(t). This particular problem is considered in detail
182 MULTIVARTABLE FEEDBACK CONTROL LIMITATIONS IN 5150 SYSTEMS 183
by Frank (1968a, 19681,) and Moran and Zaflriou (1989), and also Qiu and Davison (1993)
who study “cheap” linear quadratic regulator (LQR) control Moran and Zafiriou show that
for stable plants with RHP-zeros at z3 (real and/or complex) and a time delay 8, the “ideal”
response y = Tv when r(t) is a unit step is given by
—in
1. with a delay 8: ISEmin = 8 The magnitude 181 is plotted in Figure 5 6 At low frequencies, wO < 1, we have 1 —
2. with a RHP-zero z: ISEmin = 2/z Os (by a Taylor series expansion of the exponential) and the low-frequency asymptote of
3. with complex RHP-zeros z = x ± jy: ISEmin = 4ir/(x2 + y2) I8Cic~.i)I crosses 1 at a frequency of about 1/8 (the exact frequency where ISCiw)l crosses
1 in Figure 56 is = 105/9) Since for S = 1 e0~, we have ~ = 1/ILl, we
—
We see that the worst case is to have a RHP-zero at the origin (z~ = 0). This is reasonable
also have that 1/8 is equal to the gain crossover frequency for L The “ideal” ISE optimal
because the steady-state gain is then zero, so it will not be possible to keep y(t) at a steady-
controller bounds the practically realizable controllers, so we expect this value to provide an
state value of 1 as t—* oc and ISE = cc.
approximate upper bound on wi,, namely (for a process with a time delay and performance
However, note that these ISE values are for step changes in the reference which emphasize
requirements at low frequency)
the low-frequency behaviour. Alternatively, consider the tracking of a sinusoidal reference, w~, < 1/9 (5 45)
r(t) = sin(wt). In this case, we get for a plant with RFIP-zeros at z~ (Qiu and Davison, 1993)
This approximate bound is the same as derived in Section 2.6.2 by considering the limitations
/ 1 1
ISEnfln=2Zt .
\~z~—3W
j
+
Zj+JW ) (5.43)
imposed on a loop-shaping design by a time delay 9. In addition to this bandwidth limitation,
we also have the limitations on the peak of the closed-loop transfer functions given in
Table 5.3.2.
As expected, TSEmjn = cc for a purely complex zero located at the frequency w, zj = ±jw,
because then COw) = 0. For a real RHP-zero z~, the maximum (worst) value of ISEmin is
achieved when w = z~j, and ISEmin = 0 when z~ = 0 (zero located at the origin) or z~ = cc
(zero located far out in the RHP). In summary, we find that a RHP-zero zj mainly limits
5.7 Limitations imposed by RHP-zeros
the performance around the frequency I z~, This interpretation is confirmed below when we
We will here consider plants with a zero z in the closed RFIP (and no pure time delay).
consider the achievable bandwidth.
RHP-zeros typically appear when we have competing effects of slow and fast dynamics. For
performance. In the following we attempt to build up insight into the performance limitations
imposed by RHP-zeros using a number of different results in both the time and frequency 2
domains.
Example 5.6 ‘fade-off between undershoot and settling time. Consider the plant ISUw)l < 1/~wp(jw)~ Vw * llwpSlIeo < 1 (547)
—s + z
s+z
z=1
ij
However, from the interpolation constraints 8(z) = and we have, as shown in (5 14), that
which is controlled by
s+l 1
,
IwpSIIcc ≥ Iwp(z)8(z)I = Iwp(z) so to be able to satisfy (547) we must at least require
Kr(s) Kc0051 that the weight satisfies _____________
IwP(z)I<1~ (548)
The sensitivity function and the step response of the closed-loop system for IC = 0.2, 0.5, 0.8 are
shown in Figure 5.7. We note that as the controller beco,nes more aggressive (IC increased), the (We say “at least” because condition (5 14) is not an equality) We will now use (5 48) to gain
settling time decreases, but this peiformance improvement comes at the cost of higher undershoot. This insight into the limitations imposed by RHP-zeros (A) by considering a weight that requires
is expected fro,n (5.46) and also from the fact that a higher value of ~ results in a higher bandwidth, good performance at low frequencies, and (B) by considering a weight that requires good
but increased sensitivity peak; see Figure 5.7(a). However (5.46) is conservative. With e = 0.05 and performance at high frequencies
= 0.8, the undershoot is approximately 1.8, whereas (5.46) gives a lower bound of only 0.106. The
bound (5.46) is not tight, nevertheless it clearly illustrates the trade-off between undershoot and settling
time for systems with teal RH?-zeros. A. RHP-zero and performance at low frequencies
Consider the following performance weight
5.7.2 High-gain instability
s/M + w~8
wp(s)
It is well known from classical root—locus analysis that as the feedback gain increases towards s+w~A (549)
infinity, the closed-loop poles migrate to the positions of the open-loop zeros; also see (4.79).
Thus, the presence of RHP-zeros implies high-gain instability. For example, the system in I This weight emphasizes low-frequency performance. From (5.47) it specifies a minimum
Example 5.6 is unstable for ~ ~ 1. Since high gain is required for performance, RHP-zeros I bandwidth w~, a maximum peak of 181 less than M, a steady-state offset less than A < 1,
limit the performance of a closed-loop system. and at frequencies lower than the bandwidth the sensitivity is required to improve by at least
186 MULTIVARIABLE FEEDBACK CONTROL LIMITATIONS IN 5150 SYSTEMS 187
20 dB/decade (i.e. 181 has slope 1 or larger on a log—log plot); see Section 2.8.2 for further
a. RI{P-zero and performance at high frequencies
details. If the plant has a RHP-zero at s = z, thea from (5.48) we must require
We now consider the case where we want tight control at high frequencies, by use of the
z/M + w~
(5 50) performance weight
z+w~A <1
Real zero. Consider the case when z is real. Then all variables are real and positive and
zap(s) = + 4- (5.56)
from (5.50) we derive the following bound on the achievable bandwidth: This requires tight control (IS&w)I < 1) at frequencies higher than w~, whereas the only
requirement at low frequencies is that the peak of ~SJ is less than M. Admittedly, the weight
1— 1/M
(551) in (5.56) is unrealistic in that it requires 8 —ì 0 at high frequencies, but this does not affect
1—A
the result as is confirmed in Exercise 5.9 where a more realistic weight is studied. In any case,
For example, with A = 0 (no steady-state offset) and M = 2 (II8II~ < 2) we must at least to satisfy llwpSllco < 1 we must at least require that the weight satisfies Iwp(z)I < 1, and
require with a real RHP-zero we derive for the weight in (5.56)
4
<0.5z (5 52)
Complex zeros. When the system has a pair of complex conjugate RHP-zeros z = z ± jy, (5.57)
x ≥ 0, a similar derivation with A = 0 yields
For example, with M = 2 the requirement is w~ > 2z, so we can only achieve tight control
4 < -~ + + ~2 (1 - (5.53) at frequencies beyond the frequency of the RHP-zero.
time response is quite similar to that in Figure 5.8 with IC = 0.5. Try to improve the response, e.g. by
to’ ~~ttuig the weight have a steeper slope at the crossovel neal the RHP zero
Sctpoini
Cl)
Exercise 5 9 * Consider the case of a plant with a RHP zero a where we want to hunt the sensitivit)
U
00
fun ction ove, sonic f?eque,ic3 range To this effect let
~ (10005 + )( +1)
C
‘5 ivp(s) ~ + 1)(~3~± 1) (559)
C
1 0~~’
This weight is equal to 1/LI-I at low and high frequencies has a maamu,u value of about 10/Li! at
10_i ~~U to2 ennechatefrequencies and the as~mptotc ciosses 1 atfiequencies ‘~~B /1000 andw2 Thus we iequire
Frequency Irad/sI Time tight control ISI < 1 in thcfrequen~ range between 0~BL = w8/1000 alzdwBH
(a) Make a sketch of 1/imp I (which piovides all uppet bound on JSJ)
(a) Sensitivity function (b) Response to step in reference
(b) Show that the RH? zeto z cannot be in the flequenc5 range wheie we iequiie tight contrnl and
that we can achieve tight con ttvl at frequencies either below about z/2 (the usual case) o, above
Figure 5.8: Control of plant with RHP~zero at z = 1 using positive feedback: C(s) =
s-I-i about 2z To see this select lI-I = 2 and evaluate tvp(z) foi va,ious values of WE kz c g
I(i(s) = ‘~c (0Ois+iflO.02s+i) K = 01,00,1,10,100 1000, 2000,10000 (You will find that wp(z) = 095 (~ 1)foi K 05
(coirespOliding to the requuement WBH < z/2) andfo’ A. = 2000 (coiresponding to the rcquiiement
WEL > 2z))
Remark 1 The reversal of the sign in the controller is probably best understood by considering the
inverse response behaviour of a plant with a RHP-zero. Normally, we want tight control at low
frequencies, and the sign of the controller is based on the steady-state gain of the plant. However, if 57 4 RHP-zeros and non-causal controllers
we instead want tight control at high frequencies (and have no requirements at low frequencies) then
we base the controller design on the plant’s initial response where the gain is reversed because of the Perfect control can actually be achieved for a plant with a time delay oi RHP zero if we use
inverse response. a non causal cont,olle,2 t e a controller which uses information about the future This is
sometimes called Preview Control and may be relevant for certain servo problems e g in
Remark 2 An important case, where we can only achieve tight control at high frequencies, is robotics and for product changeovers in chemical plants A brief discussion is given here
characterized by plants with a zero at the origin, e.g. C(s) = s/(Ss + 1). In this case, good transient but non-causal controllers are not considered in the rest of the book since our focus is on
control is possible, but the control has no effect at steady~state. The only way to achieve tight control at feedback control
low frequencies is to use an additional actuator (input) as is often done in practice.
Time delay Foi a delay c°~ we may achieve perfect control wtth a non causal
Remark 3 Short~term control. In this hook, we generally assume that the system behaviour as t —* cc feedfoi ward controller K~ = e°~ (a prediction) Such a controller may be used if we have
is important. However, this is not true in some cases because the system may only be under closed-loop knowledge about future changes in r(t) or d(t)
control for a finite time t1. In this case, the presence of a “slow” RHP-zero (with I-al small) may not be For example if we know that we should be at work at 08 00 and we know that it takes
significant provided t1 << 1/~z~. For example, in Figure 5.8W) if the total control time is t~ = 0.01 [5], 30 mm to get to work then we make a prediction and leave home at 07 30 We don t wait
then the RHP-zero at a = 1 [rad/s] is insignificant. until 08 00 when we are suddenly told by the appearance of a step change in our reference
As an example of short-term control, consider treating a patient with some medication. Let it be position, that we should be at work.
the dosage of medication and y the condition of the patient. With most medications \ve find that RHP-zero Future knowledge can also be used to give perfect control in the presence of a
in the short term the treatment has a positive effect, whereas in the long term the treatment has a RHP-zero As an example consider a plant with a real RHP-zero given by
negative effect (due to side effects which may eventually lead to death). However, this inverse response
behaviour (characteristic of a plant with a RHP-zero) may be largely neglected during limited treatment, G(s)= S+Z z>0 (560)
although one may find that the dosage has to be increased during the treatment to have the desired effect. 8+-a
Interestingly, the last point is illustrated by the upper left curve in Figure 5.9, which shows the input and a desired reference change
u(t) using an internally unstable controller which over some finite time may eliminate the effect of
the RHP-zero. In process control, similar conclusions are also applicable to the control of batch or 10 t<0
semi-batch processes. 1 t>o
Exercise 5.8 (a) Plot the plant input u(t) corresponding to Figure 5.8 and discuss in the light of the With a feedforward controller K,. the response from r to y is y = G(s)Kr (s)r. In theory, we
above remark. may achieve perfect control (y(t) = r(t)) with the following two controllers (e.g. Eaton and
(b) In the simulations in Figures 5.7 and 5.8, we use simple P1 and derivative cont,vllers. As an
Rawlings, 1992):
altem-native, use the S/KS method in (3.80) to synthesize 7-1~ controllers for both the negative and
positive feedback cases. Use peifom-niance weights in the forn givemi by (5.49) and (5.56), respectivel)t 1 A system is causal if its out~tiis depend only on past inputs, and non-causal if its outputs also depend on future
With 4 = 1000 and M = 2 in (5.56) and w~ = 1 (for the weight on KS) you will find that the inpuls.
190 MULTIVARIABLE FEEDBACK CONTROL ~~j1MITATIONS IN SISO SYSTEMS 191
Input: unstable controller Output unstable controller plants with RHP-zeros. For example, for a system with single RHP-zero z (Middleton
et al., 2004),
~ ≥ ~p(z)~~ti’ (5.61)
where t~, is the preview time (reference change is known at time 1,, before it occurs). Then,
5 _—5 0 5 similar to (5.52),
Input: non-causal controller Output non causal controllcr w~ <0.5ze2t1? (5.62)
H
which shows that the non-causal controller can overcome the bandwidth limitation imposed
by the RHP-zero (by having a large preview time).
3. In most cases we have to accept the poor performance resulting from the RHP-zero and use
5 —5 0 5
a stable causal controller. The ideal causal feedforward controller in terms of minimizing
Input: stable causal controller Output stable causal controller
the ISE (7-12 norm) of y(t) for the plant in (5.60) is to use Kr = 1, and the corresponding
2
plant input and output responses are shown in the lower plots in Figure 5.9.
250 5~5
5.7.5 LHP-zeros
Time (secl Ttme [sec)
Zeros in the LHP, usually corresponding to “overshoots” in the time response, do not present a
fundamental limitation on control, but in practice a LHP-zero located close to the origin may
Figure 5.9: Control of plant with RHP-zero at z = 1 cause problems. First, one may encounter problems with input constraints at low frequencies
(because the steady-state gain is small). Second, a simple controller can probably not then be
1. A causal unstable feedback controller used. For example, a simple PID controller as in (2.93) contains no adjustable poles that can
be used to counteract the effect of a LHP-zero.
s+z For uncertain plants, zeros can cross from the LHP into the RHP, either through zero (which
Kr(s) =
—s + z is worse if we want tight control at low frequencies) or through infinity. We discuss this in
Section 7.4 (page 264).
For a step in r from 0 to 1 at I = 0, this controller generates the following input signal:
u(t) = { 26t
t<0
t>0 G(s) =
k
(1+ris)(1+r2s)(1+rgs)-~-
—
—
k
fl~j(1+rjs) (5.63)
These input signals u(t) and the corresponding outputs y(t) are shown in Figure 5.9 for where n is 3 or larger. At high frequencies the gain drops sharply with frequency, G(jw)j
a plant with z = 1. Note that for perfect control the non-causal controller needs to start (k/ fJ r~)~”. From condition (5.82) derived below, it is therefore likely (at least if k is
changing the input at I = —cc, but for practical reasons we started the simulation at I = —5 small) that we encounter problems with input saturation. Otherwise, the presence of high-
where n(t) = 2e5 = 0.013. order lags does not present any fundamental limitations.
The first option, the unstable controller, is not acceptable as it yields an internally unstable However, in practice a large phase lag at high frequencies, e.g. ZG(,jw) —+ —n 90° for
-
system in which u (t) goes to infinity as I increases (an exception may be if we want to control the plant in (5.63), poses a problem (independent of K) even when input saturation is not an
the system only over a limited time tj; see page 188). issue. This is because for stability we need a positive phase margin, i.e. the phase of L = GK
The second option, the non-causal controller, is usually not possible because future setpoint must be larger than —180° at the gain crossover frequency w~. That is, for stability we need
changes are unknown. However, if we have such information, it is certainly beneficial for Wc < w180; see (2.32).
192 MULTIVARIABLE FEEDBACK CONTROL LIMITATIONS IN 5150 SYSTEMS 193
In principle, w150 (the frequency at which the phase lag around the feedback loop is 180°) — the bounds on Ms and M~, (5.15) and (5.18), that RHP-poles combined with RHP-zeros
is not directly related to phase lag in the plant, but in most practical cases there is a close or a time delay make control difficult. The question here is: does a RHP-pole by itself pose
relationship. Define w~ as the frequency where the phase lag in the plant C is —180°, i.e. problems in terms of control performance?
LG(jw11) 4 —180° First, feedback control is required, so we need some measurement of the plant output. The
reason for this is that it is impossible to stabilize a system with feedforward control even —
Note that w,~ depends only on the plant model, Then, with a proportional controller we have with a perfect model that allows us to cancel perfectly the unstable pole. As discussed on
that w150 = w~, and with a P1 controller w18o < w~. Thus with these two simple controllers page 145, we would get an internally unstable system, which eventually grows out of bounds.
a phase lag in the plant does pose a fundamental limitation: Next, what problems does a RHP-pole p pose for feedback control? A good starting
Stability bound for P or P1 control: w~ < w~ (5 64 point for such a discussion is the fundamental constraint on the sensitivity function for
internal stability, 8(p) = 0 Recall that the corresponding constraint with a RHP-zero z
.
Note that this is a strict bound to get stability, and for performance (phase and gain margin) was 5(z) = 1, which was a problem because it is not compatible with the desire to have
we typically need w~ less than about 0.5w~.
If we want to extend the gain crossover frequency w~ beyond w,,~, we must place zeros in
151 small (compared to 1) in order to have tight control (good output performance). At first,
it may therefore seem that the requirement 5(p) = 0 does not pose a problem, because it is
the controller (e.g. “derivative action”) to provide phase lead which counteracts the negative compatible with tight control (good output performance). Actually, the main problem is at the
phase in the plant. A commonly used controller is the PID controller which has a maximum plant input, because stabilization of an unstable plant requires feedback control with the active
phase lead of 90° at high frequencies. In practice, the maximum phase lead is smaller than use of plant inputs. With feedback control, u = KS(r n do), where S = (1 + GK)’.
— —
90°. For example, an industrial cascade PID controller (2.87) typically has derivative action Note that changes in n and d~ are outside our control and therefore “unavoidable”, and for
over only one decade, and the maximum phase lead is 55° (which is the maximum phase lead an unstable plant a minimum value on IKSI is also unavoidable, as derived in Section 5,3.2.
of the term ~ ). This is also a reasonable value for the phase margin, so for performance This leads to the conclusion that for an unstable plant a minimum input usage u is required.
we approximately require In addition, the presence of a RI-IP-pole imposes a lower bound on the required bandwidth
Practical performance bound (PID control): w~ < w~ (565) and also causes an overshoot in the output signal, as summarized below
We stress again that plant phase lag does not pose afundamentai limitation if a more complex 1 RHP-pole limitation on input usage. For an unstable plant, the transfer function KS
controller is used. Specifically, if the model is known exactly and there are no RHP-zeros or (from measurement noise n or output disturbances d~ to plant input it) must satisfy, see
time delays, then one may in theory extend w~ to infinite frequency. For example, one may (5.31),
simply invert the plant model by placing zeros in the controller at the plant poles, and then let iIKSII,0 ≥ IG;’ci)l (5.66)
the controller roll off at high frequencies beyond the dynamics of the plant. However, in many
which is tight for the case of a single real RHP-pole p. A tight lower bound for a plant
practical cases the bound in (5.65) applies because we may want to use a simple controller,
with multiple unstable poles is given by (5.30).
and also because uncertainty about the plant model often makes it difficult to place controller
2. RBP-pole limitation on lower bandwidth. To stabilize a plant, we need to react
zeros which counteract the plant poles at high frequencies.
sufficiently fast, and we must require that the closed—loop bandwidth is larger than
Remark. The relative order (relative degree) of the plant is sometimes used as an input—output (approximately, see proof below)
controllability measure (e.g. Daoutidis and Kravaris, 1992). The relative order may also be defined for
nonlinear plants, and it corresponds for linear plants to the pole excess of C(s). For a minimum-phase
• 2p, for a real RUP-pole p.
• 0.67(x + ~4z2 + 3y2). for a pair of complex RI-IF-poles p = z ± iv.
plant the phase lag at infinite frequency is the relative order times —90°. Of course, we want the inputs
directly to affect the outputs, so we want the relative order to be small. However, the practical usefulness o l~l5IpI~ for a pair ofpurely imaginaiy poles p = i p1
of the relative order is rather limited since it only gives information at infinite frequency. The phase lag 3. RHP-pole limitation on overshoot. A stable feedback system with a real RI-IF-pole
of C(s) as a function of frequency, including the value of w~, provides much more information, must have an overshoot in its closed-loop response y(t) to a step in the reference; see
Figure 5.12(b). To quantify this overshoot Yos, we require a slightly different version of
Another approach for quantifying the limitations of phase lags is to approximate the higher-
rise time t~an that defined on page 30. In accordance with Middleton (1991), we define
order lags as an “effective delay” as discussed in Chapter 2; see (2.99) (P1 control) and (2.100)
rise time t,. as the maximum t,~ for which the output signal y(t) to a step r satisfies
(PID control).
y(t)/r < t/t,. Vt; see Figure s.io.3 Then, the step response of a system with a real
RHP -pole p (p > 0) must satisfy (Middleton, 1991; Se,vn et al., 1997)
where Yj is the final value qf the output signal y. With integral action y~ = r and a large
overshoot (y~) is unavoidable if the response is slow with large rise time (tr).
1 100
a00
a
Frequency [radls]
Figure 5.11: Typical complementary sensitivity, III, with upper bound 1/1mm
Tine, Since u = —KS(Gdd + n), the exact bounds (5.66) and (5.34) imply that stabilization
may be impossible in the presence of measurement noise it or a disturbance d, since the
Figure 5.10: Rise time tr according to definition y(t)/r ≤ t/G Vt for plant in (5.71) with IC = ‘ri = required inputs it may be outside the saturation limit. When the input saturates, the system is
1.25. The straight line with slope 1/tr just touches y(t). practically open-loop and stabilization is impossible (see also Section 5.11.3 on page 201).
The limitations on the bandwidth and overshoot are related: to stabilize an unstable plant,
Stabilization becomes more difficult and the above bounds become worse, if the plant has we need a minimum bandwidth, which corresponds to a maximum rise time. If the rise time
a time delay or RHP-zeros located close to the RHP-poles. In essence, “the system may go is too large, then control is bound to be poor. This is clearly seen from (5.67). With integral
unstable before we have time to react”; see also Example 5.5. action, y~ = r, and the “excess” overshoot mj05 r must exceed ~ For example, with
—
0.5
WT(s) = .4’—
Wsr
+
MT
(5.68)
—2 a 2 0
which requires that (i) P (like ILl) has a roll-off rate of at least 1 at high frequencies (which must be 10 10 10 0 1 2 3 4
satisfied for any real system), (ii) ITI is less than MT at low frequencies, and (iii) ITI drops below I at Frequency [radlsj Time [sec]
frequency WET. The requirements on l~I are shown graphically in Figure 5.11. For a real RHP-pole at (a) Complementary sensitivity function (b) Response to step in reference
s = p. the condition WTQJ) < 1 yields
Figure 5.12(a). For IC = 2, the rise ti,ne isO.2s. The resulting overshoot is 1.22, which is reasonably The co,vespoiiditig sensitivity and complementaiy sensitivity functions. and the tune response to a unit
close to the lower boundfrom (5.67) step reference change, are shown in Figure 5.13. The time response is good, taking into account the
closeness of the RHP-pole and zero.
~. ≥ VI (pt,. — l)ei~t~ + 1 +r=1 (0.2 — 1)e°2 + 1 +1=1.11 From (5.22), we have for a plant with a single ,-eal RHP-pole p and a single real RHP-zero z:
Ptr 0.2 — Iz±pI
It may see,,, that we can improve the peiformance by ilicreasing IC furthe,: This is piobably not Ms,n,in — MTrnin = (5.73)
z—pi
possible, as the actual limitation due to the RHP-pole occurs at the plant input. The peak in KS
increases wit/i K. (not shown here), so a larger value of IC~ can cause saturation problems. The plant in (5.72) has z = 4 and p = 1, so = 4
= 1.67 amid therefore it follows that for any
cotitroller we must at least have IISIk~ > 1.67 and IITII~ > 1.67. The actual peak values for the
Combined RHP-pole and RHP-zeros. In Section 5.3 (e.g. Table 5.3.2 on page 177), we above S/KS-design are 2.40 and 2.43, respectively.
derived lower bounds on the peaks of important closed-loop transfer functions, and found that
Example 5.10 Balancing a rod. This exatnple is taken fro’n Doyle et al. (1992) (also see Stein, 2003).
the combined effect of a RHP-zero z and RHP-pole p is to increase the minimum peak by
Consider the problem of balancing a rod in the pa/ui of one’s hand. The objective is to keep the rod
a factor f~±R~. Here, we consider in some more detail the possibly conflicting bandwidth upright, by sunall hand move,nents, based on observing the ,-od either at its far end (output gi) or the
limitation imposed by having RHP-poles combined with RHP-zeros or a time delay. In end in one’s hand (output y2). The linearized transfer functions for the two cases are
order to get acceptable low-frequency performance while maintaining robustness, we have
from (5.45) and (5.52) the approximate bounds WB W~, < 0.SJzf for a RHP-zero and —g C2(s) ____ ~2—u
i~$i — ~2 (Mls2 — (M + m)g)’ $2 (MIs2 — (M + m)g)
WB Wc < 1/8 for a time delay. On the other hand, for a RHP-pole we have approximately
10B > ~ Put together we get the approximate requirements I~l < 0.251z1 and p~0 < 0.5 Heje I [ml is the length of the rod and in [kg] its mass. M [kg] is the ,,iass of your hand and g [~ 10
in order to stabilize a plant while achieving acceptable low-frequency performance and mm’s2] is the acceleration due to gravity. In both cases, the plant has three unstable poles: two at the
robustness. The following example confirms that these requirements are reasonable. o, igin and one at p = i~/ ~‘~T ~ .4 short ,vd with a large “ass gives a large value of p, and this
ill turn “cans that the system is ,no,e difficult to stabilize. For example, wit/i Al = in and 1 = 1 [,,,]
Example 5.9 7i~ design for plant with RHP-pole nnd mw-zero. We want to design an Ire get p 4.5 [radls] and from (5.70) we desim-e a bandwidth of about 9 [mad/si (corresponding to a
controllerfor the following plant with z = 4 and p = 1: response time of about 0.1 [s]).
5—4 If one is measuring ~ji (looking at the far end of the rod) then aclueving this bandwidth is the “lain
C(s) = (5.72) requirenient. Howeve;; if one tries to balance the rod by looking at one’s hand (y2) there is also a RHP
(s 1)(0.ls + 1)
—
—clv at z = ~/f. If the ,nass of the rod is small (rn/fYI is s,nall), then p is close to z and stabilization is
in practice impossible with any controlle,: Eve,, with a large mass, stabilization is vemy difficult because
p > z whereas we would normally prefer to have the RHP-zemvfarfromn the origin and the RHP—pole
close to the origin (z > p). So although in theoty the rod may be stabilized by looking at one’s ha,,d
(C2), it see,ns doubtful that this is possible for a hu,nan. To quantify these proble~ns we can use (5.73)
to get ________
C)
t
Ms,min = M’r,,nin
Iz+pI 11+71 , 7 V IM+rn
z
Iz—pI 11—71 M
Co
~I0 G’onsider a light-weight rod with rn/M = 0.1, for which Ice expect stabilization to be difficult. We
obtahi Ms,rni, = ~ = 42, amid we must have IISII= ≥ 42 and IITII~ ≥ 42, so poor cont,vl
peifor,nance Lc inevitable if we tiy to hala,,ce the rod by lookitig at our hand (y2)
The difference between the two cases, measuring Vt a,,d tneasuring Y2, highlights the importance of
io2 100 1o2 0 1 2 3 4 sensor location on the achievable peiformnance of control.
Frequency [radls] Time [seci
(a) ISI and TI (b) Response to step in reference Exercise 5.11 * For a system ii’ith a single ,eal RHP-zemv z and N~ RHP-poles p~ and tight control
at low frequ~icies (A = 0 in (5.50)) derive the following generalization of (5.52):
Figure 5.13: ?I~ design for a plant with RHP-zero at z = 4 and RHP-pole at p = 1 / N,
IT.TIZ ~ 1 4
(5.7)
Note that z > p, so f,vtn the condition on page 150 it is possible to stabilize this plant with a
stable controlle,: Furthermore, fpJ = 0.25IzJ sofro,n the condition just derved it should be possible to
(Hint: Use (5.13).) Note that for a plant with a single RHP-pole a,,d REP-zero the bound (5.74) with
achieve acceptable low-frequency peiformance and robustness. We use the S/KS desig” method as in
lvi = 2 is feasible (upper hound on w~ is positive) for p < 0.33z. This confir,ns the approxhnate
Example 2.17 with input weight w,, = 1 and peiformnance weight w~ in (5.49) with A = 0, M = 2,
bound p < 0.25z derived for stability with acceptable low-frequency pe,formance and robustness on
= 1. The software gives a stable and minimu~n-phase controller with I ~
LWU J
I = 1.89. page 196.
198 MULTIVARIABLE FEEDBACK CONTROL LIMITATIONS IN SISO SYSTEMS 199
From (5.77) we also get that the frequency wd where I Gd I crosses I from above yields a lower where Wr is the frequency up to which performance tracking is required.
bound on the bandwidth:
Remark. The bandwidth requirement imposed by (5.80) depends on how sharply ISCIw)I increases in
wB > wd wherewd is defined by IGd(iwd)I = 1 the frequency range from w~ (where 151 < 1/R) to We (where 151 1). If ISI increases with a slope of
1 then the approximate bandwidth requirement becomes we > Rw~, and if ~ increases with a slope
A plant with a small IGd or a small wd is preferable since the need for feedback control is then of 2 it bec~ies biB >
less, or alternatively, given a feedback controller (which fixes 8) the effect of disturbances on
the output is less.
Example 5.11 Assume that the disturbance model is Gd(s) = kd/(1 + rd$) where kd = 10 and 5.11 Limitations imposed by input constraints
= 100 [seconds I. Scaling has been applied to Gd so this means that without feedback, the effect of
dist,u’bances on the outputs at low frequencies is lcd = 10 times larger than we desire. Thus feedback In all physical systems there are limits to the changes that can be made to the manipulated
I
is required, and since Gd crosses 1 at afrequency bid lcd/rd = 0.1 rod’s, tile minimum bandwidth
variables. In this section, we assume that the model has been scaled as outlined in Section 1.4,
requirement for disturbance rejection is We > 0.1 [radls].
so that at any time we must have u(t)I < 1. The question we want to answer is: can the
200 MULTIVARIABLE FEEDBACK CONTROL LIMITATIONS IN 5150 SYSTEMS 201
expected disturbances be rejected and can we track the reference changes while maintaining
Iu(t)l < 1? We will consider separately the two cases of perfect control (e = 0) and l0~
acceptable control (Id < 1). These results apply to both feedback and feedforward control.
At the end of the section we consider the additional problems encountered for unstable ,~ it?
plants (where feedback control is required). a
C
Remark 1 We use a frequency-by-frequency analysis and assume that at each frequency ld(wN ~ 1 (or ~ IOU
IF(w)l < 1). The worst-case disturbance at each frequency is jd(w){ = 1 and the worst-case reference
isr = RFwith V(~)I = 1.
Remark 2 Note that rate limitations, Idu/dtl < 1, may also be handled by our analysis. This is done 10_I 100 it? 102
by considering du/dt as the plant input by including a term 1/s in the plant model C(s). Alternatively Frequency [rad/s]
we multiply the derived lower bounds on 101, e.g. in (5.84), by the frequency w. For the more general
case with limitations on both magnitude (juj < 1) and rate (jdu/dtj ≤ ü.,,~), the derived lower bounds Figure 5.15: Input saturation is expected for disturbances at intermediate frequencies from w~ to Wa
on JCj should be multiplied by max(1, w/ñmax).
Remark 3 Below we require jul < 1 rather than nj ≤ 1. This has no practical effect, and is used to 5.11.2 Inputs for acceptable control
simplify the presentation.
For simplicity above, we assumed perfect control. However, perfect control is never really
required, especially not at high frequencies, and the input magnitude required for acceptable
5.11.1 Inputs for perfect control control (namely e(jw)j < 1) is somewhat smaller. For disturbance rejection we must then
i equire _________________
From (5.38) the input required to achieve perfect control (c = 0) is
1101 > Gal —jj at frequencies where IGal > 1 (5.84)
is = G’r — G’Gdd (581)
Ptooft Consider a “worst-case” disturbance with ld(w)I = 1. The control error is e = y = Cu + Cad.
Disturbance rejection. With r = 0 and ld(w)I = 1 the requirement lu(w)I < 1 gives Thus at frequencies where lCaUw)I > 1 the smallest input needed to reduce the error to Ie(w)l = 1 is
found when u(w) is chosen such that the complex vectors Cu and Cad have opposite directions. That
is, lel = 1 = jCadl — Gui, and with Idi 1 we get lul I~’ GOal—i), and the result follows by
1G’(jw)Ge(jw)l <1 Vw (5 82)
requirIng jul < 1.
In other words, to achieve perfect control and avoid input saturation we need 101 > IGdI at Similarly, to achieve acceptable control for command tracking we must require
all frequencies. (However, as is discussed below, we do not really need control at frequencies
where Gal < 1.)
Command tracking. Next let d = 0 and consider the worst-case reference command
I IGI>IRI1j VoJ<Wr (5.85)
which is lr(w)l = Rat all frequencies up to LU,.. To keep the inputs within their constraints In summary, if we want “acceptable control” (Id < 1) rather than “perfect control” (e = 0),
we must then require from (5.81) that then lGaI in (5.82) should be replaced by Gal 1, and similarly, R in (5.83) should be
—
replaced by R 1. The differences are clearly small at frequencies where IGal and IRI are
—
(this is for stabilization of a plant with a real RHP-pole at p). Otherwise, the input it wm which without constraints yields a stable closed-loop system with a gain crossover frequency w~, of
exceed 1 (and thus saturate) when there is a sinusoidal disturbance d(t) = sinwt, and we about L7. The closed-loop response to a unit step disturbance occurring after 1 second is shown in
may not be able to stabilize the plant. Figure 5.16(b). The stable closed-loop response when there is no input constraint is shown by the dashed
line. However; we note that the input signal exceeds I for a short ti,ne, and when it is constrained to be
Remark. The result in (5.86) was not available on publication of the firstedition of this book (Skoaestad within the interval [—1, 1] we find indeed that the system is unstable (solid lines).
and Postlethwaite, 1996) where we instead used the approximate, but nevertheless useful, bound Remark. For this example, a small reduction in the disturbance magnitude front k~ 0.5 to
- - . he 0.48 results in a stable closed-loop response in the presence of input constraints (not shown).
JGQw) I > IGcUw)I VU) <p (5.87) Since he = 0.54 is the limiting value obtained front (5.86), this seems to indicate that (5.86) is a
very tight condition in terms of predicting stability but one should be carefid about making such a
This approximate bound is based on (5.69) where we found that we need IT(iw)l ≥ I up to the conclusiott. First, (5.86) is actually only tight for sinusoids and the simnulations in this example are for a
frequency p. approximately. Since u = KSGcd = TG’G~d this implies that we need Jul ≥ step disturbance. Second, in the example we use a particular controller; whereas (5.86) is for the “best”
1G’Gcl . ldI tip to the frequency p. and to have Jul < 1 for Idi = 1 (the worst-case disturbance) stabilizing controller in terms of minimizing input usage.
we must require lG’GcI ≤ 1.
Example 5.13 Consider For unstable plants, reference changes can also drive the system into input saturation
and instability. However, this is not really a fundamental problem, because, in contrast to
C( ~ ) = (lOs + 1)(s — 1)’ c c(s) — (s ± 1)(0.2s
he + 1)’ h~ < 1 (5.88) disturbance controller
of-freedom changes andto filter
measurement
the reference
noise,signal
one has
andthe
thus
option
reduce
of the
using
magnitude
a two degrees
of the
Since k~ < 1 and the peifonnance objective is IeI < 1, we do not really need control for disturbance
manipulated input.
rejection, but feedback control is requited for stabilization, since the plant has a RHP-pole at p = 1.
We have IGI > IGeI (i.e. JG’GcI < 1)forf,-equencies lowerthan 05/ks, see Figure 5.16(a), sofrom
the approximate bound (5.87) we do not expect p’vblems with input const,-ai,zts at low frequencies. 5.12 Limitations imposed by uncertainty
However; at high frequencies we have GJ < Gel, and from (5.87) we must approximately require
0.5/ks > p. i.e. he < 0.5 to avoid problems with input saturation. This is confirmed by the exact
The presence of uncertainty requires us to use feedback control rather than just feedforward
bound in (5.86). We get
control. The main objective of this section is to gain more insight into this statement. A further
5 I =
(lOs + 1)(s + 1)13 0.227, G~,~13(1) = (s + 1)(0.2s + 1) ,=i = 0.417lc~
and from (5.86) we must require k~ < 0.54 in order to avoid input saturation (I~I < 1) when we have 5.12.1 Feedforward control and uncertainty
sinusoidal disturba,zces of unit magnitude.
Consider feedforward control from the reference and measured disturbance (see Figure 2.5),
10’
u(t)
When applied to the nominal plant p = Gu + Ged the resulting control error is e p r —
V
z 0.5 “~“z:t ii —(1 GKr)r + (Cc GK4)d. Correspondingly, for the actual plant (with model error)
— —
C
0 -
y’=G’u+G~d (5.91)
~ l0~ —0.5
—1 vU) the control error is
—1.5 t e’ = p’ — r = —(1 — G’Kr)r + (G~ — G’Ke)d = —S~r + S~G’~d (5.92)
0 5 10
Frequency [nd/si Time [sec]
(a) C and Ge with lc,j = 0.5 (b) Response to step in disturbance (he = where S,~ 4 1 G’IC,- and S~ 4 1 G’IQG~’ are the feedforward sensitivity functions.
— —
0.5) These are 1 for the case with no feedforward control, and should be less than 1 in magnitude
for feedforward control to be beneficial. However, this may not be the case since any change
Figure 5.16: Instability caused by input saturation for unstable plant
in the process (C’ and G~) directly propagates to a corresponding change in S~ and S~ and
To check this for a particular case we select he = 0.5 and use the controller
thus in the control error. This is the main problem with feedforward control.
To see this more clearly, consider the “perfect” feedforward controller K,. = C(s)_i and
= G(s)’Ge. which gives perfect nominal control (with e = 0, 8r = 0 and S~ = 0).
K( 5 ) — 0.04 (lOs + 1)2
$ (0.15+1)2 (5.89) (We must here assume that G(s) is minimum-phase and stable and assume that there are no
204 MIJLTIVARIABLE FEEDBACK CONTROL LIMITATIONS IN 5150 SYSTEMS 205
problems with input saturation.) Applying the perfect feedforward controller to the actual where 8’ = (I + G’IC)1 can he written (see (A 147)) as
plant gives
81=8 1 (595)
IC’
C y —‘r= r (G’/G~ ~ (593 1+ET
~\G/Gd
Here E (C’ — is the complementaiy sensitivity
G)/G is the relative error for C, and T
= rei. error in G rei. error in G/C~ function
From (5 94) we see that the control error is only weakly affected by model error at
Thus, we find that S~ and 8, are equal to the (negative) relative errors in G and 0/0d.
frequencies where feedback is effective (where 181 << 1 and T 1) For example, if we
respectively. If the model error (uncertainty) is sufficiently large, such that the relative error in
have integral action in the feedback loop and if the feedback system with model error is
C/Gd is larger than 1, then ~8~J is larger than 1 and feedforward control makes this situation
stable, then 8(0) = 8’(O) = 0 and the steady-state control error is zeio even with model
worse. This may quite easily happen in practice. For example, if the gain in C is increased
error
by 33% and the gain in Gd is reduced by 33%, such that 8~ = +1 = + 1 = —Hg Uncertainty at crossover. Although feedback control counteracts the effect of uncertainty
—2 + 1 = —1. In words, the feedforward controller overcompensates for the disturbance, at frequencies where the loop gain is large, uncertainty in the crossover frequency region
such that its negative counteracting effect is twice that of the original effect. can result in poor performance and even instability This may be analyzed, for example, by
Another important insight from (193) is the following: to achieve le’I < 1 for Jdl = 1 we considering the effect of the uncertainty on the gain margin, GM = 1/~L(jwigo)i, where
must require that the relative model error in C/Gd is less than 1/JG~l. This requirement is C~)iso is the frequency where ZL is —180°, see (240) Most practical controllers behave as a
unlikely to be satisfied at frequencies where G~ is much larger than 1 (see the following constant gain K,, in the crossover region, so IL(awiso)i K,,IG(ywjso)l where w150
example) and this clearly motivates the need for feedback control for “sensitive” plants
,
(since the phase lag of the controller is approximately zero at this frequency, see also
where the disturbances have a large effect on the output. Section 5 8) This observation yields the following approximate rule
Example 5.14 Consider disturbance rejection for a plant with Define w,~ as the frequency whete ZG(yw~) = —180° Uncettainty which keeps G(yw~)i
approxunately constant will not change the gum margin Uncertainty which mci eases
300 100
G= lOs+1 Gd= lOs+1 IGOw~) will dcci ease the gum maigm and may yield mstabthty
The objective is to keep lvi < 1 ford = 1, but notice that the disturbance gain at steady-state is 100. This rule is useful, for example, when evaluating the effect of parametric uncertainty This is
Nominally, the feedforward controller I~d = G1 Gd gives peifect control, it = 0. Now apply this illustrated in the following example
controller to the actual process where the gains have changed by 10%
Example 5.15 Consider a stable first-order delay process, G(s) ke~S/(1 + rs), where the
330 90 paranieters Ii, r and 8 are uncertain in the sense that they niay vary with operating conditions. If
Gd = lOs + 1
— lOs + 1’ we assumer > 0 then w,. (ir/2)/0 and we derive
From (5.93), the disturbance response in this case is
IG(iw~)I ~ (5.96)
= —
G’G’
____ — i) G~d = —0.22 G~d = io 20
+
We see that to keep I GUw~ ) I constant we want k ~ constant. Ifonly the delay 8 increases, then IG(jw0 ) I
increases and we may get instability (as we expect). However~ the uncertainty in the parameters is often
Thus, for a step disturbance d of niagnitude 1, the output y will approach —20, which is much larger
than the bound lvi < 1. This “cans that we need to use feedback control, which, as discussed in the coupled. For example, if 8 and r increase proportionally (which is quite common in practice) such that
next section, is hardly affected by the above model error Although feedforward control by itself is not the ratio r/& remains constant, then stability is not affected. In another case the steady-state gain k
sufficient for this example, it has some benefits. This is because the feedforwai-d controller reduces the may change with operating point, but this may not affect stability if the ratio k/r, which determines the
effect of the disturbance, and the minimum bandwidth requirement forfeedback control is reducedfrom high-frequency gain, is unchanged.
Ikil/r~ = 100/10 = 10 ,ad/s (nofeedfonvard) to about 20/10 = 2 radls (with feedfonvard).
The above example illustrates the importance of taking into account the structure of the
uncertainty, e.g. the coupling between the uncertain parameters. A robustness analysis which
5.12.2 Feedback control and uncertainty assumes the uncertain parameters to be uncorrelated is generally conservative. This is further
discussed in Chapters 7 and 8.
With feedback control the closed-loop response with no model error is y r — = 8(Gdd — r)
where 8 = (I + GK)’ is the sensitivity function. With model error we get
~ = 8’(G’~d — r) (5.94)
206 MULTIVARIABLE FEEDBACK CONTROL LIMITATIONS IN 5150 SYSTEMS 207
d
gule 3. Input constraints arising from disturbances For acceptable contiol (lel < 1)
we tequire G(jw)~ > GdOw)I 1 atfiequencies where Gd&w)I > 1 Fo, petfect
—
contiol (e = 0) the requnement is G(jw)~ > IGd(aw)I (See (5 82) and (5 84))
Rule 4. Input constraints arising from setpoints We tequire IG(aw)I > H — 1 up to the
r frequency W,. where tracking is requited (See (5 85).)
1)
Rule 5. Time delay 6 in G(s)Gm(s) We approximately tequtre w~ < 1/6 (See (545))
Rule 6. Tight control at low frequencies with a Rf{P-zero a in G(s)Gm(s) Fo; a teal
RI-IF-zero we requite w~ < z/2 and for an imaginary RHP-zero we approximately
require w~ < 0.861z1. (See (5.52) and (5.53).)
Figure 5.17: Feedback control system Remark. Strictly speaking, a RHP-zero only makes it impossible to have tight control in the
frequency range close to the location of the RHP-zero. If we do not need tight control at low
frequencies, then we may reverse the sign of the controller gain, and instead achieve tight control
5.13 Summary: controllability analysis with feedback
control I at higher frequencies. In this case we must for a RHP-zero z approximately require w0 > 2z.
A special case is for plants with a zero at the origin; here we can achieve good transient control
even though the control has no effect at steady-state.
We will now summarize the results of this chapter by a set of “controllability rules”. We Rule 7. Phase lag constraint. We require in most practical cases (e.g. with PID control):
use the term “(input—output) controllability” since the bounds depend on the plant only; w~ < w~. Here the ultimate frequency w,~ is where ZGG~(jciJ~) = —180°. (See
that is, are independent of the specific controller. Except for Rule 7, all requirements are (5 65))
fundamental, although some of the expressions, as seen from the derivations, are approximate Since time delays (Rule 5) and RHP-zeros (Rule 6) also contribute to the phase lag,
(i.e. they may be off by a factor of 2 or so). However, for practical designs the bounds will one may in most practical cases combine Rules 5, 6 and 7 into the single rule: w~ <w,~
need to be satisfied to get acceptable performance. (Rule 7).
Consider the control system in Figure 5.17, where all the blocks are scalar. The model is
Rule S. Real open-loop unstable pole in G(s) at s = p. We need high feedback gains to
y = G(s)u + Gd(s)d; y~ = G,~(s)y (597) stabilize the system and we approximately require w~ > 2p. (See (5.70).)
Here Gm(s) denotes the measurement transfer function and we assume Gm(0) = 1 (perfect In addition,for unstable plants we need G3~p)I > IGd,ms(P)I. Otherwise, the input
steady-state measurement). The variables d, z~, y and r are assumed to have been scaled as may saturate when there are disturbances, and the plant cannot be stabilized; see
outlined in Section 1.4, and therefore G(s) and Gd(s) are the scaled transfer functions. Let (5.86).
w~ denote the gain crossover frequency, defined as the frequency where IL(iw)f crosses 1 Most of the rules are illustrated graphically in Figure 5.18.
from above. Let wd denote the frequency at which IGd&wd)I first crosses I from above.
We have not formulated a rule to guard against model uncertainty. This is because, as
The first step for controllability analysis with feedback control is to evaluate the bounds on
given in (5.94) and (5.95), uncertainty has only a minor effect on feedback performance for
the peaks of the different closed-loop transfer functions, i.e. 5, T, KS, SG and SGd, using
SISO systems, except at frequencies where the relative uncertainty E approaches 100%, and
formulae summarized in Table 5.3.2. We require that the peaks of all of these closed-loop
we obviously have to detune the system. Also, since 100% uncertainty at a given frequency
transfer functions be small. For example, the performance requirement of keeping control
allows for4the presence of a RHP-zero on the imaginary axis at this frequency (G(jw) = 0),
error signal e small is satisfied, only if f ~ and IITIk~ are small. Similarly, it is necessary
it is already covered by Rule 6.
to ensure that IIICSII~ is small to avoid actuator saturation, which may destabilize the system.
The rules are necessary conditions (“minimum requirements”) to achieve acceptable
In addition, the following rules apply (Skogestad, 1996):
control performance. They are not sufficient since among other things we have only
Rule 1. Speed of response to reject disturbances. We approximately require w~ > Wd. considered one effect at a time.
More specifically, with feedback control we require IS(iw)J ≤ IlJGd(iw)I Vw. (See The rules quantify the qualitative rules given in the introduction. For example, the rule
“Control outputs that are not self-regulating” may be quantified as “Control outputs y for
(5.76) and (5.79).)
which Gd(jw)I > 1 at some frequency” (Rule 1). Another important insight from Rule
Rule 2. Speed of response to track reference changes. We require IS(iw)I < 1/H up to 1 is that a larger disturbance or a smaller specification on the control error requires faster
the frequency cü,. where tracking is required. (See (5.80).)
208 MULTIVARIABLE FEEDBACK CONTROL LIMITATIONS IN 5150 SYSTEMS 209
where as found above we approximately require We < 1/6 (Rule 5), Wc < z/2 (Rule 6) and
We <w~ (Rule 7). Condition (5.98) may be used, as in the example of Section 5.15.3 below,
to determine the size of equipment
then the error in G/Gd must not exceed 10% at this frequency. In practice, this means that Now consider performance where the results for feedback and feedforward control differ
feedforward control has to be combined with feedback control if the output is sensitive to the (i)First considerfeedbock control From Rule 1 we need for acceptable performance (Id < 1)
disturbance (i.e. if JGdl is much larger than 1 at some frequency). with disturbances
Combined feedback and feedforward control. To analyze controllability in this case we Wd kd/rd <w~ (5 104)
may assume that the feedforward controller “d has already been designed. Then from (5.99)
the controllability of the remaining feedback problem can be analyzed using the rules in On the other hand from Rule 5 we require for stability and performance
Section 5.13 if Gd(s) is replaced by
w~ < 1/U~ (5 105)
Gd(s) GKdGmd + Gd (5.101)
where °tot = U + 0m is the total delay around the loop The combination of (5 104) and
However, one must be aware that the feedforward control may be very sensitive to model (5 105) yields the following requirement for controllability
error, so the benefits of feedforward may be less in practice.
Conclusion. From (5.101) we see that the primary potential benefit of feedforward control Feedback 8+ Urn < Td/kd (5 106)
is to reduce the effect of the disturbance and make Gd less than 1 at frequencies where
feedback control is not effective due to, for example, a delay or a large phase lag in GGm (s). (a) For feedforwaid control any delay for the disturbance itself yields a smaller net
delay and to have id < 1 we need only require
Pivof of (5.107): Introduce U = 0 + Omd — 0d, and consider first the case with 0 < 0 (so (5.107) is
5.15.1 First-order delay process clearly satisfied). In this case perfect control is possible using the controller (5.100),
Problem statement. Consider disturbance rejection for the following process:
1çperfect —G~GdGjJ~ — kd 1 + i-s (5.108)
c_Os — T1±rds
G(s) = k Gd(s) = (5.102)
1+-i-s l+rds so we can even achieve e = 0. Next, consider 0> 0. Perfect control is not possible, so instead we use
In addition there are measurement delays G,,~ for the output and °md for the disturbance. All the “ideal” controller obtained by deleting the prediction e9°,
parameters have been appropriately scaled such that at each frequency ui < 1, ldl < 1 and
—~ 1 + i-s (5.109)
we want id < 1. Assume lkdl > 1. Treat the two cases of (i) feedback control only, and (ii)
k l+TdS
feedforward control only, and carry out the following:
(a) For each of the eight parameters in this model explain qualitatively what value you From (5.99) the response with this controller is
would choose from a controllability point of view (with descriptions such as large, small,
1~ ~Od~ -.
value has no effect). e= (GK~Gm~ + G~)d = ~
1 + Td5
(1— e°~)d (5.110)
(b) Give quantitative relationships between the parameters which should be satisfied to
achieve controllability. Assume that appropriate scaling has been applied in such a way that and to achieve IeI/IdI < 1 we must require ~0 < 1 (using asymptotic values and 1 — e~ Ic for
the disturbance is less than I in magnitude, and that the input and the output are required to small z) which is equivalent to (5.107). 0
be less than 1 in magnitude.
Solution. (a) Qualitative. We want the input to have a “large, direct and fast effect” on the
output, while we want the disturbance to have a “small, indirect and slow effect”. By “direct” 5.15.2 Application: room heating
we mean without any delay or inverse response. This leads to the following conclusion. For Consid~r the problem of maintaining a room at constant temperature, as discussed in
both feedback and feedforward control we want k and rd large, and r, 8 and kd small. For Section 1.5, see Figure 1.2. Let y be the room temperature, u the heat input and d the outdoor
feedforward control we also want 9d large (we then have more time to react), but for feedback temperature. Feedback control should be used. Let the measurement delay for temperature
the value of °d does not matter; it translates time, but otherwise has no effect. Clearly, we want (y) be Urn = 100 s.
°rn small for feedback control (it is not used for feedforward), and we want 8rnd small for
feedforward control (it is not used for feedback). 1. Is the plant controllable with respect to disturbances?
(b) Quantitative. To stay within the input constraints (liii < 1) we must require from Rule 2. Is the plant controllable with respect to setpoint changes of magnitude B = 3 (±3 K)
3 that G(jw)~ > lGd(iw)l for frequencies w < wd. Specifically, for both feedback and when the desired response time for setpoint changes is -r,. = 1000 s (17 mm)?
feedforward control ________________________
10’
5)
~0
z
C
10
Solution. A critical part of controllability analysis is scaling. A model in terms of scaled controller for the roo~n heating process. Also compute the robustness parameters (CM, PM, Ms and
variables was derived in (1.32) It/ir)for the two designs.
20 10
C(s) = 1000s+1~ Cd(s) = ______ (5.111) 5.15.3 Application: neutralization process
l000s + 1
The frequency responses of Cl and Gdl are shown in Figure 5.19. ACID BASE
I. Disturbances. From Rule 1 feedback control is necessary up to the frequency Wd
Vt
10/1000 = 0.01 rad/s, where lGdl crosses 1 in magnitude (wc > wd). This is exactly the cA
same frequency as the upper bound given by the delay, 1/9 = 0.01 radls (w~ < 1/9). We
therefore conclude that the system is barely controllable for this disturbance. From Rule 3 no
problems with input constraints are expected since IGI > lGdl at all frequencies. To support
these conclusions, we design a series PID controller of the form IC(s) = IC~ ‘*f5 0~~t11~ V
With C(s) = ~ the SIMC PT tunings (page 57) for this process are IC~ = 0.25 (scaled
C
variables) and ‘i-i = 800 s. This yields smooth responses, but the output peak exceeds 1.7
in response to the disturbance and the settling to the new steady-state is slow. To reduce the
output peak below 1, it is necessary to increase K~ to about 0.4. Reducing rj from 800 5
to 200 s reduces the settling time. The introduction of derivative action with TD = 60 s Figure 5.21: Neutralization process with one mixing tank
gives better robustness and fewer oscillations. The final controller settings are I(~ = 0.4
(scaled variables), -r1 = 200 s and TD = 60 s. The closed-loop simulation for a unit step The following application is interesting in that it shows how the controllability analysis tools
disturbance (corresponding to a sudden 10 K increase in the outdoor temperature) is shown may assist the engineer in redesigning the process to make it controllable.
in Figure 5.20(a). The output error exceeds its allowed value of 1 for a very short time after Problem statement. Consider the process in Figure 5.21, where a strong acid with pH
about 100 s, but then returns quite quickly to zero. The input goes down to about —0.8 and = —1 (yet, a negative pH is possible it corresponds to cH+ = 10 mol/l) is neutralized by
—
thus remains within its allowed bound of ±1. a strong base (pH = 15) in a mixing tank with volume V= 10 m3. We want to use feedback
2. Serpoints. The plant is controllable with respect to the desired setpoint changes. First, control to keep the pH in the product stream (output y) in the range 7 ± 1 (“salt water”) by
the delay is 100 s which is much smaller than the desired response time of 1000 s, and thus manipulating the amount of base, qB (input u), in spite of variations in the flow of acid, ~
poses no problem. Second, IG(iw)l ≥ 1? = 3 up to aboutw1 = 0.007 [rad/s] which is seven (disturbance d). The delay in the pH measurement is °m 10 5.
times higher than the required w~ = 1/~rr = 0.001 [rad/sj. This means that input constraints To achieve the desired product with pH = 7 one must exactly balance the inflow of acid (the
pose no problem. In fact, we should be able to achieve response times of about 1/Wi = 150s disturbance) by the addition of base (the manipulated input). Intuitively, one might expect that
without reaching the input constraints. This is confirmed by the simulation in Figure 5.20(b) the main control problem is to adjust the base accurately by means of a very accurate valve.
for a desired setpoint change 3/(150s + 1) using the same PID controller as above. However, as we will see, this “feedforward” way of thinking is misleading, and the main
hurdle to good control is the need for very fast response times.
Exercise 5.12 * Peiform closed-loop simulations wit/i the SIMC P1 controller and the proposed PID
We take the controlled output to be the excess of acid, c [mol/l], defined as c = CH-F —
I
214 MULTIVARIABLE FEEDBACK CONTROL LIMJTATIONS IN 5150 SYSTEMS 215
COH—, which avoids the need to include a chemical reaction term in the model. In terms of
this variable c, the control objective is to keep id S = 10_6 moL~l, and the plant is a
simple mixing process modelled by
The nominal values for the acid and base flows are q~ = q~ = 0.005 [m3Is] resulting in a
product flow qt = 0.01 [m3/sJ = 10 [i/sI. Here superscript denotes the steady-state value.
*
We divide each variable by its maximum deviation to get the following scaled variables:
c qA (1113)
0.5q~ Figure 5.23: Neutralization process with two tanks and one controller
Then the appropriately scaled linear model for one tank becomes
illustrated in Figure 5.23 for the case of two tanks. This is similar to playing golf where it is
Gd(s) = 1+Ths’ 0(s) = l+TftS = 2.5~ 106 (5.114) the transfer function for the effect of the disturbance becomes
1
where Tn = V/q = 1000 s is the residence time for the liquid in the tank. Note that the Gd($) kdh71(s); 1r~(s) (~-s + 1)” (5.116)
steady-state gain in terms of scaled variables is more than a million, so the output is extremely
sensitive to both the input and the disturbance. The reason for this high gain is the much higher where kd = 2.5• 106 is the gain for the mixing process, h71(s) is the transfer function of the
concentration in the two feed streams, compared to that desired in the product stream. The mixing tanks, and T~ is the total residence time, V~0~/q. The magnitude of h71(s) as a function
question is: can acceptable control be achieved? of frequency is shown in Figure 5.24 for one to four equal tanks in series.
100
too
o
‘0
S a
00 us
Ce Ce
100
10° to
Frequency [rad/s] 10’ 10° 10’ to2
Frequency)< Tn
Figure 5.22: Frequency responses for the neutralization process with one mixing tank
Figure 5.24: Frequency responses for it tanks in series with the same total residence time m; 1z71(s)
n= 1,2,3,4
Controllability analysis. The frequency responses of Gd(s) and G(s) are shown £
graphically in Figure 5.22. From Rule 2, input constraints do not pose a problem since From controllability Rules 1 and 5, we must at least require for acceptable disturbance
101 = 2lGaI at all frequencies. The main control problem is the high disturbance sensitivity, rejection that _______________
and from (5.104) (Rule 1) we find the frequency up to which feedback is needed
IG~(iwo)i < 1 w0 4 1/0 (5.117)
= 2500rad/s (5.115) where 0 is the delay in the feedback loop. Thus, one purpose of the mixing tanks h71(s) is to
reduce the effect of the disturbance by a factor kd (= 2.5 106) at the frequency w0 (= 0.1
This requires a response time of 1/2500 = 0.4 milliseconds which is clearly impossible in a [rad/s]), i.e. 1h71(iwo)i < 1/k~j. With rj, = V~0~/q we obtain the following minimum value
process control application, and is in any case much less than the measurement delay of 10 s. for the total volume for n equal tanks in series:
Design change: multiple tanks. The only way to improve controllability is to modify
the process. This is done in practice by performing the neutralization in several steps as = q9m~(k~/’~ —1 (5.118)
216 MULTIVARIABLE FEEDBACK CONTROL LIMITATIONS TN 5150 SYSTEMS 217
where q = 0 01 m3/s With 0 = 10 s we then find that the following designs have the same n tanks in series. With is controllers the overall closed-loop response from a disturbance into
controllability with respect to disturbance rejection the first tank to the pH in the last tank becomes
n 71
No. of Total Volume
tanks volume each tank iJGdJJ(l+L)d~Tdl~ L4fJL~ (5.119)
n V~0~ [m31 [sn~j
250000 250000 where Gd = fJ~L1 G1 and L~ G1K~, and the approximation applies at low frequencies
2 316 158 where feedback is effective.
3 40.7 13.6 In this case, we can design each loop L5(s) with a slope of —1 and bandwidth w~ (‘jo,
4 15.9 398 such that the overall loop transfer function L has slope —n and achieves ILl > IGdI at all
5 9.51 1.90 frequencies lower than wd (the size of the tanks is selected as before such that wd wo).
6 6.96 1.16 Thus, our analysis confirms the usual recommendation of adding base gradually and having
7 5.70 0.81 one pH controller for each tank (McMillan, 1984, p. 208). Tt seems unlikely that any other
control strategy can achieve a sufficiently high roll-off for ILl.
In summary, this application has shown how a simple controllability analysis may be
With one tank we need a volume corresponding to that of a supertanker to get acceptable used to make decisions on both the appropriate size of the equipment, and the selection
controllability. The minimum total volume is obtained with 18 tanks of about 203 litres each of actuators and measurements for control. Our conclusions are in agreement with what is
— giving a total volume of 3.662 m3. However, taking into account the additional cost for used in industry. Importantly, we arrived at these conclusions without having to design any
extra equipment such as piping, mixing, measurements and control, we would probably select controllers or perform any simulations. Of course, as a final test, the conclusions from the
a design with 3 or 4 tanks for this example. controllability analysis should be verified by simulations using a nonlinear model.
Control system design. We are not quite finished yet. The condition IGa(iwo)I S 1 in
Exercise 5.13 Comparison of local feedback and cascade control. Explain why a cascade control
(5.117), which formed the basis for redesigning the process, may be optimistic because it
system with two measurements (pH in each tank) and only one manipulated input (the base flow into
only ensures that we have 1~I < l/IGdI at the crossover frequency wB wo. However,
the first tank) will not achieve as good a peiformance as the cont,vl system in Figure 5.25 where we use
from Rule 1 we also require that 181 < 1/IGdI, or approximately ILl > IGdI, at frequencies localfeedback with two manipulated inputs (one for each tank).
lower than we,, and this may be difficult to achieve since Gd(s) = kdh(S) is of order is, where
is is the number of tanks. The problem is that this requires ILl to drop steeply with frequency, The following exercise further considers the use of buffer tanks for reducing quality
which results in a large negative phase for L, whereas for stability and performance the slope (concentration, temperature) disturbances in chemical processes.
of ILl at crossover should not be steeper than —1, approximately (see Section 2.6.2).
Thus, the control system in Figure 5.23 with a single feedback controller will not achieve Exercise 5.14 * (a) The effect of a concentration dirtu rbance must be reduced by afactor of 100 at the
the desired performance. The solution is to install a local feedback control system on each frequency 0.5 rod/mm. The disturbances should be dampened by use of buffer tanks and the objective
is to minimize the total volume. How ~nany tanks in series should one have? What is the total residence
tank and to add base in each tank as shown in Figure 5.25. This is anotherplant design change
ti,ne?
BASE (b) The fred too distillation column has large variations in concentration and the use of one buffer
ACID tank is suggested to dampen these. The effect of the feed concentration don the product composition y
is given by (scaled variables, time in minutes)
0 BASE
Gd(s) = e8/3s
That is, after a step in d the output y will, after an initial delay ofT mm, increase in a ramp-like fashion
and reach#ts mnaximum allowed value (which is 1) after another 3 minutes. Feedback contivl should be
used and there is an additional measurement delay of 5 minutes. What should be the residence time in
the tank?
(c) Show that in terms of ,ninimizing the total volume for buffer tanks in series, it is optimal to have
buffer tanks of equal size.
(d) Is there any reason to have buffer tanks in parallel (they must not be of equal size because then
one may simply combine them)?
(e) What about parallel pipes in series (pure delay). Is this a good idea?
Figure 5.25: Neutralization process with two tanks and two controllers
Buffer tanks are also used in chemical processes to dampen liquid flow rate disturbances (or
since it requires an additional measurement and actuator for each tank. Consider the case of gas pressure disturbances). This is the topic of the following exercise.
218 MULTI VARIABLE FEEDBACK CONTROL LIMITATIONS IN 5150 SYSTEMS 219
Exercise 5.15 Let d1 = q;~ [m3/s] denote aflow rate which acts as a disturbance to the process. We temperature T (which should be 60 ± 10°C). The measurement delayfor T is 3 s. The main disturbance
add a buffer tank (with liquid volume V [in3]), and use a “slow” level controller K such that the outflow is on P0. The following model in terms of deviation variables is derived from heat balances:
d2 = Qout (the “new” disturbance) is smoother than the inflow qin (the “original” disturbance). The
idea is to increase or decrease temporarily the liquid volume in the tank to avoid sudden changes in
T(s) = q(s) + 0.6(20s + 1) (5.121)
qout. Note that the steady-state value of qout must equal that of qin. (60s + 1)(12s + 1) (60s + 1)(12s + 1)TO(5)
A material balance yields V(s) = (qin(s) — qout(s~/s and with a level controller qout(s)
K(s)V(s) we find that where T and P0 a,e in ° C’, q is in kg/s, and the unit for time is seconds. Derive the scaled model. Is
K(s) the plant controllable with feedback control? (Solution: The delay poses 110 problem ~‘pemformance), but
d2(s)= s+K(s)d1(5) (5.120) the effect of the disturbance is a bit too large at high frequencies (input saturation), so the plant is not
controllable.)
h(s)
The design of a buffer tank for aflow rate disturbance then consists of two steps:
I. Design the level controller ic(s) such that h(s) has the desired shape (e.g. determined by a
controllability analysis of how d2 affects the remaining process; note that we must always have 5.16 Conclusion
h(0) = 1).
2. Design the size of the Lank (determine its volume l7max) such that the tank does not oveiflow or go The chapter has presented a frequency domain controllability analysis for scalar systems
empty for the expected disturbances in d1 = ~ applicable to both feedback and feedforward control. We summarized our findings in terms
Problem statement. (a) Assume the inflow varies in the range qr~ * 100% where q~ is the nominal of eight controllability rules; see page 206. These rules are necessary conditions (“minimum
value, and apply this stepwise procedure to two cases: requirements”) to achieve acceptable control performance. They are not sufficient since
(i) The desired transferfunction is h(s) = 1/(rs + 1). among other things they only consider one effect at a time. The rules may be used to
(ii) The desired transfer function is h(s) = 1/(r2s ± 1)2.
(b) Explain why it is usually not recommended to have integral action in K(s). determine whether or not a given plant is controllable. The method has been applied to a
(c) In case (ii) one could alternatively use two tanks in series with controllers designed as in (i). pH neutralization process, and it is found that the heuristic design rules given in the literature
Explain why this is most likely not a good solution. (Solution: The required total volume is the same, follow directly. The key steps in the analysis are to consider disturbances and to scale the
but the cost of two smaller tanks is larger than one large tank.) variables properly.
The tools presented in this chapter may also be used to study the effectiveness of
adding extra manipulated inputs or extra measurements (cascade control). They may
5.15.4 Additional exercises also be generalized to multivariable plants where directionality becomes a further crucial
Exercise 5.16 * What infom-mation about a plant is important for controller design, and in particulam; consideration. Interestingly, a direct generalization to decentralized control of multivariable
in which frequency range is it important to know the model well? To answer this problem you may think plants is rather straightforward and involves the CLDG and the PRGA; see page 448 in
about the following sub-problems: Chapter 10.
(a) Explain what infom-ination about the plant is used for Ziegler—Nichols tuning of a SISO PID
con trolle,:
(b) Is the steady-state plant gain C(0) important for controller design? (As an example consider the
plant C(s) = with al < I and design a P controller K(s) = K~ such that w~ = 100. How does
the cont,-oller design and the closed-loop response depend on the steady-state gain 0(0) = 1/a?)
K1 e°15. The measurement device for the output has transfer function Gm(s) = ~ The unit
for time is seconds. The nominal parameter values are: K1 = 0.24, Oi = us], K2 = 38, 02 = 5 [5]
and T = 2 [s].
(a) Assume all variables have been app ropm-iately scaled. Is the plant input—output controllable?
(b) What is the effect on controllability of changing one ,,zodel parameter at a time in the following S
ways?
1. O~ is reduced to 0.1 [sj
2. 02 is reduced to 2 Is].
3. K~ is reduced to 0.024.
4. K2 is reduced to 8.
5. T is increased to 30 Is].
Exercise 5.18 * A heat exchanger is used to exchange heat between two streams: a coolant with flow
tate q (1 ± I kg/s) is used to cool a hot stream with inlet tempem-ature T0 (100 * 10° C) to the outlet
220 MULTIVARIABLE FEEDBACK CONTROL
6
LIMITATIONS ON
PERFORMANCE IN MIMO
SYSTEMS
In this chaptei, we generalize the results of Chapter 5 to MIMO systems Most of the results on
fundamental limitations and controllability analysis for SISO systems also hold for MIMO systems with
the additional consideration of directions Thus, we focus on results that hold exclusively for MIMO
systems oi are non-trivial extensions of similar results for 5150 systems We first discuss fundamental
limitations on the sensitivity and complementary sensitivity functions imposed by the presence of RHP
zeros We then consider separately the issues of functional controllability, RHP-zeros, RHP-poles,
distuibances, input constraints and uncertainty Finally, we summaitze the main steps in a procedure
foi analyzing the input-output controllability of MIMO plants
6.1 Introduction
In a MIMO system, the plant gain, RHP-zeros, delays, RHP-poles and disturbances each
have directions associated with them. This makes it more difficult to consider their effects
separately, as we did in the SISO case, but we will nevertheless see that most of the SISO
results can be generalized.
We will quantify the directionality of the various effects in G and Gd by their output
directions:
All these are I x 1 vectors where I is the number of outputs. liz and y~, are fixed complex
vectors, while lid(s) and u1(s) are frequency dependent (s may here be viewed as a
generalized complex frequency; in most cases .s = jw). The vectors are normalized such
that they have Euclidean length I,
We may also consider the associated input directions of 0. However, these directions are 6.2.2 Interpolation constraints
usually of less interest since we are primarily concerned with the performance at the output
RI-IP-zero. If C(s) has a RHP-zero at z with output direction liz, then for internal stability of
of the plant.
the feedback system the following interpolation constraints must apply:
The angles between the various output directions can be quantified using their inner
products: bi~iypI, Iy~’UdI. etc. The inner product gives a number between 0 and 1, and from y~T(z) = 0; y~1S(z) (6.4)
this we can define the angle in the first quadrant, see (A.1 14). For example, the output angle
between a pole and a zero is In words, (6.4) says that T must have a RHP-zero in the same direction as 0, and that 5(z)
= cos’ Iv~’v~I has an eigenvalue of 1 corresponding to the left eigenvector liz
where cor’ denotes arccos. Proof of (6.4): From (4.71) there exists an output direction U: such that y~’G(z) = 0. For internal
We assume throughout this chapter that the models have been scaled as outlined in stability, the controller cannot cancel the RHP-zero and it follows that L = OK has a RHP-zero in the
Section 1.4. The scaling procedure is the same as that for 5150 systems, except that the same direction, i.e. y~’L(z) = 0. Now S = (I + L)1 is stable and has no RHP-pole at s = z. It then
scaling factors D~, Dd, Dr and D~ are diagonal matrices with elements equal to the follows from T = LS that y~’T(z) = 0 and yJf(l —8) = 0. 0
maximum change in each variable n~, d1, ~ and e~. The control error in terms of scaled
BlIP-pole. If C(s) has a RHP-pole at p with output direction y,~, then for internal stability
variables is then
the following interpolation constraints apply:
e = y r = Cu + Gdd
— —
where at each frequency we have IIu(w)Ilmax ≤ l, IId(w)Ilmax ≤ land IV(w)Ilmax ≤ 1, and S(p)y~ = 0; T(p)y~ = (6.5)
the control objective is to achieve Ie(w)Ilmax < 1
Proof of (6.5): The square matrix L(p) has a RHP-pole at s = p. and if we assume that L(s) has no
Remark I Here ~ is the vector infinity-norm: that is, the absolute value of the largest element in RHP-zeros at a = p then L’(p) exists and from (4.75) there exists an output pole direction y,, such
the vector. This norm is sometimes denoted ~, but this is not used here to avoid confusing it with that
0 (6.6)
the 31cc norm of the transfer function (where the cc denotes the maximum over frequency rather than
the maximum over the elements of the vector). Since T is stable, it has no RHP-pole at s = p, so T(p) is finite. It then follows, from S = TL’, that
Remark 2 As for SISO systems, ‘ye see that reference changes may be analyzed as a special case of S(p)yp = T(p)L~’(p)yp = 0 and T(p) = (I — S(p~yp Up 0
disturbances by replacing Gd by —R. Similar constraints apply to L1, Si and T,, but these are in terms of the input zero and pole
Remark 3 Whether various disturbances and reference changes should he considered separately or directions, u~ and ui,.
simultaneously is a matter of design philosophy. In this chapter, we mainly consider their effects
separately, on the grounds that it is unlikely for several disturbances to attain their worst values
simultaneously. This leads to necessary conditions for acceptable performance, which involve the
6.2.3 Sensitivity integrals
elements of different matrices rather than matrix norms. For 5150 systems we presented several integral constraints on sensitivity (the waterbed
effects). These may be generalized to MIMO systems by using the determinant or the singular
values of 8, see Boyd and Barratt (1991) and Freudenberg and Looze (1988). For example,
6.2 Fundamental limitations on sensitivity the generalization of the Bode sensitivity integral in (5.5) may be written
cc
6.2.1 8 pIus T is the identity matrix
0
ln det S(jw)Idw = ~ ~° in uj(8(jw))dw . Z Re(p~) (6.7)
From the identity S + T = land (A.51), we get
For a stab~ L(s), the integral is zero. Other generalizations are also available, see Chen
Ia(S) —11 <a(T) ≤ a(S) + 1 (6.1) (1995), Zhou et al. (1996) and Chen (2000). However, although these integral relationships
are interesting, it seems difficult to derive concrete bounds on achievable performance from
0(T) — lJ <a(S) <0(T) + 1 (6.2) them.
These can be combined to get
Ia(S) — a(T)I < 1 (6.3)
Thus, the magnitudes of 0(5) and a(T) differ by at most I at a given frequency, so 0(5) is 6.3 Fundamental limitations: bounds on peaks
large if and only if a(T) is large. For example, if 0(T) is 5 at a given frequency, then a(S)
must be between 4 and 6 at this frequency. The bounds (6.1) and (6.2) also show that we Based on the interpolation constraints presented in Section 6.2.2, one may derive lower
cannot have both S and T small (close to 0) simultaneously. bounds on various closed-loop transfer functions. The bounds are direct generalizations of
224 MULTIVARIABLE FEEDBACK CONTROL LIMITATIONS IN MIMO SYSTEMS 225
those found for 5150 systems, see page 172, and the comments and interpietations made for winch is studied In more detail in Example 63 (page 227) The output direction vectois conesponding
5150 systems carry over directly if we take the directions into account The results presented to the RHP-ze,o at z = 2 and RHP-pole at p 3 are, respectively.
in this section are from Section V in the paper by Chen (2000), unless otherwise stated The
ro.3271 [11
derivations of bounds of this kind go back to the work of Zames (1981) iJz Los~i’ YP= Lo]
There is sonic alignment In output 1, since the RUP-ze,o has some effect in output! and the RHP-pole
6.3.1 Minimum peaks for S and T has all its effect in output I This translates into unavoidable peaks for a~(S) and u(T) From (68) we
In the following, 1l’I~ mm and MTmin denote the lowest achievable values for 1181100 and get Ms mm = Mp,rn,n = 1 89, see Matlab code in Table 6 1
1121100, iespectively, using any stabilizing controller K That is, we define
Table 6.1: Matlab program for calculating sensitivity peak using (6.8)
4 fl)n ISIIcc, MT,111~fl = K
mm IITIIa, V47 Has distinct and at least one REP-zero and one REP-pole
[ptot,ztotl = pzmap(G); % poles and zeros
p = ptot (find(ptot>Oi I; z = ztot ifind(ztotOi) ; % REP poles and zeros
np = length(p); nz = lengthtz);
Theorem 6.1 Sensitivity and complementary sensitivity peaks. Conside, a ,ational plant G = ss(G); [V,El = eigtc.Ai; C = GC~V; % output pole vectors
0(s) (with no tune delay) Let z~ be the N2 RI-IP~zeros of 0(s) with (antt) oatput zero for i = l:np
Ypt:,ii =C(:,ii/noflft(C(:iH; %poledirectiofls
duection vectois V:,t Let p1 be the N~ RHP-poles of 0(s) with (untt) output pole duection end
vectots p~,t Fuitheunore, assume that z~ and p, a,e all dtsttnct Then we have the following for i = l:nz
1u,S,V3 = svdtevalfr(G,ztiii); Yzt~,ii = U(:,endl; % zero directions
fight lowet bound on IISII~ and 11Th00 end
Qp = ~yp.*yp) *(l./tdiagtPi*ones(np) ÷ ones(np)*diagW)i);
Qz = (y~.*y~) *il./(diag(z)*onestnz) + onesinz)*diag(z~)i);
Ms,mjn = MT,inin = + ~2 (Q:1/2Q~Q;l/2
) (68)
Qzp = (Yz*yp).*{l./(diag{zi*onestnz,np) —
ones (nz,npi *diag (pill
Mstsin = sqrtUinorimtsqrt5ltinv(Qzll*QZp*5qrtm(1J~~(QPili2)
whe,e the elements of the N2 x N. inati ix Q2, N~ x N~ matt ix Q~ and N2 x man ix
ame gtven by Chen (2000) ac One RI-IP-pole and one RUP-zero. For a plant with one RHP-zero z and one RHP-pole
H H H
ifl i in i._ Y~,ZYP,J in i.._
t’~ zjij t’~zpjiy
______ ______
— — 1’~pjzj — —. - — (69)
Z1+Z3 p1+p3
/7 z+p(2 (6.11)
Note that (6.8) gives a tight bound for any number of RHP-poles and RHP-zeros. Ms,min MT,min ~ sin ~ + cos2 ~
v
.,
Iz—pH
Example 6.1 C’onsider the SISO plant
where ~ = cos~ Iv~’vph is the angle between the output directions of the pole and zero. If
Q(\ (s—1)(s—3) the pole and zero are aligned such that Y: = y~, and 4’ = 0, then (6.] 1) simplifies to give the
(s — 2)(s + 1)2 5150 conditions in (5.23). Conversely, if the pole and zero are orthogonal to each other, then
= 9Q0 and Ms,mmn = MT,min = 1, and there is no additional penalty for having both a
For this plant we have zi = 1, z2 = 3, p~ = 2, and since this is a 5150
plant, all direction vectors ~z and y~ are 1. Since we have RI-IP-zews close to the RHP RHP-pole and a RHP-zero.
pole we expect that control is fundamentally difficult. This is verified from (6.8). In Mat-
Example 6.2 continued. For the plant in (6.10) we have 1J~1IJp = 0.327 which gives ~
lab, we write Qz = [1/2 1/4; 1/4 1/6]; Qp = [1/4]; Qpz = [—1 1]; msrnin =
cost 0.327 = 70.9°. Equation (6.11) then gives Ms,m;n Mp,min = 1.89, which agrees with
sqrt(1+svd(sqrtrn(jnv(Qp) ) *Qpz*sqrtffl(jflv(Qz) ) )A2) andf,nd Ms,min = MT,min =
15. This also agrees with the bound (5.23)for a SISO plant with a single RHP-pole: the value obtained from (6.8).
N- The bounc~(6.8) can be extended to include weights. With no loss of generality we assume
1~S,min
—
— JVITmin =
- hzj+pb —
—
11+21 13+21
.
—
—00— tO that the weights W1 (s) and W2 (s) contain no RHP-poles or RHP-zeros and consider the
j=i Iz~—p~ 11—21 13—21 weighted functions T4~1SW2 and W1TW2.
We see from the factor i1::~; in Q~2 that the bound will be large if we have a RHP-pole Theorem 6.2 Weighted sensitivity and complementary sensitivity peaks. Consider a
rational plant C(s) with no time delay and no poles or zeros on the imaginary axis. Let
p~ close to RHP-zero z~j and with directions aligned such that Y~j?Jp,i is not small.
z1 be the RUP-zeros of C(s) with (unit) output zero direction vectors Yz,i• Let p~ be the RHP
Example 6.2 ~‘onsider the MIMO plant poles of 0(s) with (unit) output pole direction vectors Yp,i Furthennore, assume that z~ and
are all distinct. Define
0 ~1 Icos(30°) _sin(300)] 1 32
ol
Ge(s) =
(6 10)
L o 3+3 JI Lsinaoo
~ cos(30°) j [ 0 ~
o.ms+m J
z=2,p=3 75mm inf hJW’i~2hI00~ 7T,min 4 inf 11W1TW21100
r
Then For a time delay plant with one RHP-zero z and one RHP-pole p, similar to (6.11), we have
75mm = ~ (Q-1/2(Q + Q~2Q~QZp2)Q~h/2) (6.12)
[Q~i~,
-, -
—
y~1Wf’(p1)Wi(p~)y~~
—
p1-I- p,j
, [Qp2_z,
-. —
— - ________________________
____________________
Pi+pj 1
[~ ~ ~:~] ~ ~
. ~
For the case with a scalar weight, we have in particular the following direct generalizations which has for all values of a a RHP-zero at z = 2 and a RHP-pole at p = 3. For a = 0° the rotation
of the SISO results: matrix U~ = I, and the plant consists of two decoupled subsystems
~ ≥ wp(z)~ (6.14)
C ( ~ — (O.ls+1)(s—p)
IIWTTH~ ≥ IWT(P)I (6.15) ° (O.ls+tfls+3)
This shows that i(S) cannot be shaped freely for a plant with a RHP-zero, and U(T) cannot Here the subsystem 911 has both a RHP-pole and a RHP-zero, and closed-loop peiforinance is expected
be shaped freely for a plant with a RHP-pole. to be poot: On the other hand, there are no particular control problems related to the subsystem 922.
The bound forT in (6.8) can also be extended to include time delays at the plant output: Nra, consider a = 90° for which we have
Theorem 6.3 Complementary sensitivity peak for plant with time delay. Consider a plant
with time delays in the output channels
Ua [~ ~], and Cao(s)= [ ,fl~
(0. 1~+ 1) (5 +3)
and we again have two decoupled subsystems, but this time in the off-diagonal elements. The main
Go(s) = 0(s)G(s), 0(s) = diag (e°i5,. . . , difference, howeve,; is that there is no interaction between the RHP-pole and RHP-zero in this case,
so we expect this plant to be easier to control. For intermediate values of a we do not have decoupled
where C(s) is a rational transferfunction matrix. Let z~ be the RHP-ze,vs of C(s) with (unit) subsystems, and the i-c will be some interaction between the RHP-pole and RHP-zero.
output zero direction vectors y~,j. Let p~ be the RHP-poles of 0(s) with (unit) output pole Since in (6.20) the RHP-pole is located at the output of the plant, its output di,-ection is fixed and we
direction vectors Yp,i’ Note that the directions are evaluated for the plant without the time find y~ = [1 0 Tfor all values of a. On the other hand, the RHP-ze,v output direction changes from
delay. Furthermore, assume that z~ and p~ aie all distinct. Then we have the following tight [1 0T for a = 0° to [0 1 -T for a = 90°. Thus, the angle ~ between the pole and zero direction
lower bound on IITIJ~: also varies between 00 and 90°, but ~ and a are not equaL This is seen from the table beloig where we
also give Ms,mmn = MT,min. see (6.8) or (6.11), forfour rotation angtes, a = 00,300,600 and 90°.
4’ — cost iY~
I upl 00 70.9° 83.4° 90°
I Ms,mmn = MT,mmn 5.0 1.89 1.15 1.0
[Qo]ij = ___________
‘Slice 7.00 2.60 1.59 1.98
Pi+Pj (6.17)
liTiice 7.40 2.76 1.60 1.31
There is no tight bound available for lI~II~ for plants with time delays. However, a(S) 9.55 3.53 2.01 1.59
7mmn(S/K5)
and a-(T) differ by at most 1, see (6.1), and we have
MT,mmn + 1 ~ Ms,mmn ≥ MTmmn — 1 (6.18) The table also shows the values of[ISII~ and 11211= obtained by an ?Loc optimal S/KS design (see
page 94) using the following weights:
where MT,min is given by (6,16). An application of the bound (6.16) for a 5150 plant is given
in Example 5.1 (page 175). = I; Wp (s/M±wB) I; M = 2,w~ = 0.5 (6.21)
223 MULTIVARIABLE FEEDBACK CONTROL LIMITATIONS IN MIMO SYSTEMS 229
a = 0°
6 3 2 Minimum peaks for othei closed-loop transfer functions
::L7~ Time
:~ Time
In this section we provide bounds on peaks for some other closed-loop transfer functions
For motivation we refer the reader to the discussion for SISO systems in Section 5 3 2 on
page 175 The results for MIMO systems are summarized in Table 63 2 where we also show
the performance and robustness reasons behind minimizing the peaks of different closed-loop
transfer functions We frequently make use of minimum-phase and stable versions of the phnt
and the disturbance models and the details for their calculation can be found in Section A 6
Bounds on SG Theorem 6 2 can be used to calculate the peak value for SO with W1 = I
and M’~ = G~5(s) where G,,,5(s) denotes the minimum-phase stable version of 0(s) In
:H~1
particular when the system has one RHP zero rand one RHP pole p IISGII= must satisfy
::Hz
lISGJI~ > IyHGms(s)ll~~+ ~j: cos2ø (622)
2 3 4 5
Time Time where
Figure 6.1: MIMO plant (6.20) with angle ~ between RFIP-pnle and RHP-zero. Response to step in cos~ — _____________________
IIY~’G,,,S(Z)II2IIG?fl~(p)yPII2
reference r = [1 — lj~ with W controller for four different values of ~. Solid line: 1/i; dashed line:
When 0(s) is non square (mole inputs than outputs) the pseudo inverse Of Gmg(s) can be
used to find bounds on IISGIIo0
The weight 14/p indicates that we require ISIf~ less than 2, and require tight control up to afrequency Bounds on SGd In the geneial case 0d ≠ 0 and we also want to keep llSGdIIoo small
of about w~ = 0.5 rad/s. The minimum 1]~ norm for the overall S/KS problem is given by the value This case can he handled as for SO by ieplacing Gms by Gd ~ in (6 22) where Gd ,,,~(s)
of 7 in Table 63. The corresponding responses to a step change in the reference, r = [1 —1, are denotes the minimum-phase stable veision of Gd(s)
shown in Figmo-e 6. J.
Several things about the example are worth noting: Bounds on KS Glover (1986) derived the tight bound on the transfer function KS
1. We see from the simulation for ~ = a = 00 in Figure 6.] that the response for an is vemy poor This is
as expected because of the closeness of the RHP-pole and zero (z = 2, p = 3). The response for y2
11K51100 ≥ 1/ups(U(G)5) (623)
is also relatively sluggish, because the fl design is only concerned with the worst-case response
where ~H is the smallest Hankel singular value and U(G)* is the minor image of the anti
in an- The response for 1/2 mizay therefore be made fastem; if desired.
2. For ~ = a = 90° the RHP-pole and RHP-zero do not interact. From the si,nulation we see that Vi stable part of C (for a stable plant there is no lower bound).
(solid line) has on overshoot due to the RI/P-pole, whereas 1/2 (dashed line) has an inverse response A simplerbound is also available since for any RHP-polep UH(U(G)*) ≤ IIu~G5(p)lI2
due to the RI/P-zero. where equality applies for a plant with ‘i single real RHP pole p Here u~ is the input pole
3. The lower bound Ms,m;,, = MT,m;n on IISiIcc and 1T1f, see (6.8), is tight in the sense direction, and G5 is the “stable version” of G with its RHP-poles mirrored into the LHP, see
that there exists a controller that achieves it. This can be confirmed numerically by selecting (5 27) This gives the bound (Havre and Skogestad 2001)
TV,, = 0.011, w~ = 001 and Al = 1. W,, and WE are small so the main objective is to
minimize the peak of S. We find with these weights that the ?L designs for the four angles yield lIKSII~ > 114G5(p)’112 (624)
11511= = 5.04,l.905,1.155,I.0o5, which are vety close to Ms,m;,,.
4. The angle ~ between the pole and zero is quite different fromn the rotation angle a at intermediate which is tight for the case with a single RHP-pole
values between 00 and 900. This is because of the influence of the RI/P-pole in output 1, which yields
a strong gain in this direction, and thus tends to push the zero direction towards output 2. Example ~.4 Consider the following multivariable plant.
5. For a = 0° we have MS,min = MTm;n = 5 so it is clearly impossible to get IS~J less than 2,
as required by the pemformance weight Wp. This is one reason why 7mm = 9.55 is so large in this G(s)= s—p I with z = —2.5 and p = 2 (6.25)
case. O.is+t
1
6. The ?L optimal controller is unstable for a = flo and 30°. This is not altogether surprising,
The plant 0 has a RI/P-pole p = 2 ~plus a LI/P-zero at z = —2.5 which poses no limitation). The
because for a = 0° the plant becomnes two 5150 systems one of which needs an unstable controller
corresponding input and output pole dim-ections are
to stabilize it since p > z (see condition on page ]50).
Up
FL_o.258i’
0.966 1 ~‘
Ii
Lo
F,
CJ~~ CI, ~ a’ -t a,
~= aa~a’-~ [‘3
ci
II 0)0 CD ‘~‘ It a C
(I, :3- ~ ~t ‘a *1 N
a, a, a It “Ca
~0- ‘a ~0
op,C_0000 a a
Ot —. Ct a Ct Oi LI — H IDLI H a-
+0 CD Ga ~ C, Di Di B H a, IC H CI c a’,
a, p30 a C,’
a— CI IDID HI”” P~ II a ~ a a.-. N a
no itC5a—--rr a [‘3 it
a, op ‘1:
aH,’,,’,*
0* CD DiI~IOLIN, ci a- a a N
a, a,
a’t a OaH a- -‘N.--. *,., N a
H II “ ID H +
0 _C_~ ~ II H + N, C HHD ‘a, It
~j k.~ H Di 11 BHIC . a a
o 0 -~ a aC’ ci”. inlOin ‘0- C-i p’~ —‘.~
C— —Q o0) o via it LI H-H’-’’-’ C
a
0’ ~. ‘l*O’-’ (‘3 8
o ,— co N ~ C
eo B H”,’O BOaR--- ci
a Di — + Q.,C’~; On ~a ~ IV
C- in — a, Di * C a a C ci 0-CD HO I C, 0
0-9 a ‘1 ‘a-. 0’ Di H DI Di Di 0 a, a
~Q — - DiO O’-’O’ 11 H” Co
op C a— ,~, <Co -. Cra a, ~ ‘--it
~CD a H’Di’. C N aa
~ (0 OH H o 0’ Di ‘P 0 0~ ‘a a a C) N
a, ~-a.CD ‘a <CoCa,,’ CO “HDH H ‘C N — N Op
a’ a H.. CD ‘ C”'Di
p 8 itO Di CD ID—Il ci N
p3*0 a, C —.
d C Duo— 0 DO a ap
(I) p. II C — 0 a
p3 t rj~ a H C’ Oi BID C H a,
0 CD 0 a, 9 ‘-in err to C)
a’ C) *1” 0’ Di Ca- — 0 B
~4 ~ a
a. o*. -ro * EDo a Di
CD ~- — ‘~ 8 N
o ° 3 H” P10. (0 LI.
8 a a *11 — + ~
a a,09 8 a - foci H aaa a
0. 0 0 CI, ~0 +*1 (00* — It II
IV a, co a. a HO Hit a, =— a N
IV C 0, a p a
5’ ~ C) C’1 *0 a a, La Ot a,
CD’,. 0 C- C.-. C, pt’.
a, S. ° a’ Ot p In,— Hot Di p. C, -R~ +
0~ CD +,. [C. N a
C C) a, Di FIDi -.
C ci 0 “-C C-. C
Ca, z 0 8 N -‘‘O C-.-- Ia 0t
H’
—. CD On 0 a. a
C ‘a C’’-’ Di ‘C t2
CI,
4,- H H i-I ~ [0
‘— a, C IC
CD ~ Cl
ID — a, 00
a
J) C 0 C) aCD N Li In P1’ N ~“ N
a, a’ ~ a, H
C) ci II H + a
a. C Di a, ~ ~
a, 5— N CO. a a a
~il a, 0~ C’ C a + Ut
CD a, 4 ~ a
S C) C CD ci
IC o a, to,’, 0-i
CD * a a inn 0 H
‘-‘*1 C -ii
p3
CD “-C
—CD H HO
a
a It
9,’ a OCt 0~
it, 00) N) (0 LI, a
I,.) Ha + *0
C a *10 H B’ It
inn a a.
0 Ill
C 0 HO
a, ‘-‘p. In
N a’I a, a Di a, It
t~ ~, CD 0i — it Di
C 0€ B
C Di ‘C C 0-i C)
0) 0 a Di It a
N 1’O ‘it
C) 00 Ci
S~o a. H p-i a Co a “H
0 C)
a, N C
a CD Op
~1 z
C a, ~-.a H3
~‘ 0 PC’ P.’
~C
S. C
N) -C —. a,’ (‘3 N N
it, —.3 a CD
~ C
a a t~ a
p
Table 6.3: Bounds on peaks of impoitant closed-loop transfer functions
Want sinai/for Bound on peak
Performance Multiplicative
inverse output ~ ~ + 1—~1cos2.~
z+pl’ (611) (6.8)
I. S tracking ‘/
Ce = —Sr) uncertainty (sin)
Performance Multiplicative ~/sin2~+j4+$cos2~ (oil) (6.8)
2. T noise additive output and (619) for delay system and (6.16) for delay
(e = —Tn) uncertainty (z~) system
Input usage Additive uncertainty Ilu~’ G3 (pY’ 112 (6.24) 1/aH (11(G)0) (6.23)
3. KS (u = (t≥OA) (tight for any value of N,)
KSO_— n))
Performance Gd = G : IIy,”Gd,ms(8)II (6.12) with W, = I
4. SG~~ disturbance Inverse uncertainty ~sin2 ~ + cos2 ~ (6.22) and W = Gd,ms
(e = SGd.-) (~oA)
function from the input disturbances to the controller output (see page 69), the bound on the uncont,vllable if rank(B) < I (the system is input deficient), or if rank(C) < I (the system
peak value forT1 can be alternatively calculated as a special case of (6.26) and (6.27). Note is output deficient), or if rank(sI A) < l (fewer states than outputs). This follows since
—
that, for a minimum-phase system, 0m$ = G~ and it follows from (6.27) that in this case the rank of a product of matrices is less than or equal to the minimum rank of the individual
I14’GslPY’Gd,ms(p)I = Ju~’I = 1 and we have that IITjII~ ≥ 1. This bound is tight for matrices see (A 36)
any number of unstable poles for minimum-phase systems (Kariwala, 2004). In most cases functional uncontrollability is a structutal property of the plant that is it
For many practical systems, bounding one of S and Si (or one ofT and Ti) also bounds does not depend on specific parameter values, and it may often be evaluated from cause-and
the other, but this is not true in general, as shown by the next example. effect graphs A typical example of this is when none of the inputs is, affect a particular output
y3 which would be the case if one of the rows in C(s) was identically zero Another example
Example 6.5 Consider the following ‘nultivariable plant: is when there are fewer inputs than outputs
8~2 If the plant is not functionally controllable i e 1’ < 1 then there are I r output directions
—
1
Ct“ ‘I—
I —
~
0.01(s—z) denoted Yo~ which cannot be affected. These directions will vary with frequency, and we have
0.01
3+10 (analogous to the concept of a zero direction)
The plant Chas a RHP-pole at s = p and a RHP -zero at s = z. Since the pole appears in the (1, 1)
element and the zero only in the first column of C(s), we have y~’0w)00w) = 0 (630)
[ii Iii Iii Fo.oi From an SVD of 00w) = UEVH, the uncontrollable output directions yo(jw) are the
Un = Lo]’ !1u = [oj’ U: = [oj’ 1/: Los9 last 1 i’ columns of U0,w) By analyzing these directions an engineer can then decide on
—
This concludes this section on fundamental limitations. Later, in this chapter, we discuss C(s) = I ‘t~ ~
L s+2 3+2
the control implications of these results in more detail.
This is easily seen since column 2 of C(s) is two lines column 1. The uncontrollable output directions
at low and high frequencies are, respectively,
of 0(s). Then a lower bound on the time delay for output i is given by the smallest delay in ExelciSe 64 Repeat Ererc,se 62 wit/i e’6~ meplaced b3 099(1 — ~s)”/(1 + es)” (where is = 2
rowi of 0(s), i.e. is the orde, of the Pade appro.uination) Also plot the elements of S(jw) as functions offrequencyfo;
grnfl = minO~j K = 0 1/0 K = 1/0 and K = 8/0 Norrce that thete is no ringing here as C(s) is singulam only at
= 00.
This bound is obvious since 6~~” is the minimum time for any input to affect output i, and
can be regarded as a delay pinned to output i.
Holt and Moran (1985a) have derived additional bounds, but their usefulness is sometimes 66 Limitations imposed by RIIP-zeros
limited since they assume a decoupled closed-loop response (which is usually not desirable
in terms of overall performance) and also assume infinite power in the inputs. RHP-zeros are common in many practical multivanable problems The limitations they
Exceptions. For MIMO systems we have the surprising result that an increased time delay impose are similar to those for 5180 systems although often not quite so senous because
may sometimes improve the achievable performance. As a simple example, consider the plant they only apply in particular directions
Exercise 6.2 Simulate the closed-loop response of the plant (6.31,ifor the setpoint c/ianges rt = [a] tight control at low frequencies and a peak for a(S) less than 2 we derive from (5 51)
that the bandwidth (in the worst direction) must for a real RHP zero satisfy w~ < 42
and r2 = [4] using a simple diagonal co;itroller K = I with hO = 0.1, 1 and 10. Plot the responses
Alternatively if we require tight control at high frequencies then we must from (5 57) satisfy
4 > 2z The reader is also referred to Exercise 6 5 which gives the trade off between the
of both the inputs and outputs with 0 = 1. Why is contivi much better with r2 as compared to ri? performances of different output for plants with a RHP-zero.
Exercise 6.3 * To illustrate further the above arguments, compute the sensitivity function S for the Remark 1 The use of a scalar weight top(s) in (6.32) is somewhat restrictive. However, the assumption
plant (6.31) and K = ~I. Use the approximation e’0~ I — Os to show that at low frequencies the is less restrictive if one follows the scaling procedure in Section 1.4 and scales all outputs by their
elements of 5(s) are of magnitude 1/(kO + 2). How large must k be to have acceptable peiformance allowed variations such that their magnitudes are of approximately equal importance.
(less than 10% offset at low frequencies)? What is the corresponding bandwidth? (Answer: Need
k > 8/0. Bandwidth is equal to k.) Remark 2 Note that condition (6.32) involves the maximum singular value (which is associated
with the “worst” direction), and therefore the RHP-zero may not be a limitation in other directions.
Remark 1 The observant reader may have noticed that the smallest singular value of 0(s) in (6.31) Furthermore we may to some extent choose the worst direction This is discussed next
drops to zero periodically at high frequencies, as e”~”0 = 1 for wO = 2,rn, n = 0, 1,2 This will
cause “ringing” irrespective of the bandwidth, as seen from the simulations. Exercise 6 ~ Tom a plant with a single meal RHP zemo z with input duection it, and a diagonal
pemformance weight matrrc Wp show that the mequim ement Wp SI I~ < 1 implies
Remark 2 The reader may also have noticed that C(s) in (6.31) is singular at s = 0 (even with 0
non-zero) and thus has a zero at s = 0. Therefore, a controller with integral action which cancels this
zero yields an internally unstable system (e.g. the transfer function KS contains an integrator). This
Z ~ <1 (6.33)
internal instability will manifest itself as integrating input signals that will eventually go to infinity. To
“fix” these results, we may assume that the plant has an integrator in each element. Then, one of the If sop, is given by (550) and sop3 = 0 s 0 y (atbitmanl5 pool contiol of all outputs othem than y,)
integrators will cancel the zero at s = 0 and the resulting steady-state gain is finite in one direction and show that ogle cont, ol of y, at lowf, equencies unposes the follownig hnntation on WB
infinite in another. Alternatively, we may assume that e’0~ is replaced by 0.99e”°~ so that the plant is - /1 1
not singular at steady-state (but it is close to singular). W8,1 < Z { ‘— — (6.34)
\Uz,i 1W’
Remark 3 A physical example of a model in the form of (6.31) is a distillation column where S
represents the time for a change in liquid flow at the top to reach the bottom of the column,
236 MULTIVARIABLE FEEDBACK CONTROL LIMJTATIONS IN MIMO SYSTEMS 237
6.6.1 Moving the effect of a RHP-zero to a specific output RHP-zero in each of the diagonal elements of T(s), i.e. whereas C(s) has one RI-IP-zero at
In MIMO systems, one can often move the deteriorating effect of a RHP-zero to a less $ = z, To(s) has two. In other words, requiring a decoupled response generally leads to the
important output. This is possible because, although the interpolation constraint yYT(z) = 0 introduction of additional RHP-zeros in T(s) which are not present in the plant C(s).
imposes a certain relationship between the elements within each column of T(s), the columns We also see that we can move the effect of the RI-IP-zero to a particular output, but we then
of T(s) may still be selected independently. Let us first consider an example to motivate the have to accept some interaction. This is stated more exactly in the following theorem.
results that follow. Most of the results in this section are from Holt and Moran (1 985b) where
further extensions can also be found. Theorem 6.4 Assume that C(s) is square, functionally controllable and stable and has a
single RHP -zero at $ = z and no RHP-pole at $ = z. Then if the k’th element of the output
Example 3.17 continued. Consider the plant zero direction is non-zero, i.e. Yzk $ 0, it is possible to obtain “pemfect” control on all
1 r outputs j ~ k with the remaining output exhibiting no steady-state offset. Specifically, T can
1
(0.2s + 1)(s + 1)11 +2s 2 be chosen oftheformn
which has a RHP-zero at $ = z = 0.5. This is the same plant considered on page 96, where we 1 0 ... 0 0 0 0-
0 1 0 0 0 0
petformed seine 71 controller designs. The output zero direction satisfies ~J 0(z) = 0 and we find ...
Yz
~±F21ho.89
[—0.45 T(s) = 0.. a (6.37)
— ~ — Ai~ lint -3+2
3+2
3+2 3+2 3+2 3+2 3+2
Any allowable T(s) must satisfy the interpolation constraint y~’T(z) = 0 in (6.4). and tins imposes
thefollowing relationships between the column elements of T(s): .0 0 ... 0 0 0 ... 1.
For the two designs with one output pemfectly controlled we choose
0
—3+2
8+2 ] (6.36) than I, then the interactions will be significant, in terms of yielding some =
much larger than I in magnitude. In particular, we cannot move the effect of a RHP-zero to an
output corresponding to a zero element in 1):, which occurs frequently if we have a RI-IP-zero
r 1 0 lint pinned to a subset of the outputs.
8+2
z~±i 1
8+2 3+2
Exercise 6.6 Consider the plant
*
1
The basis for the last two selections is as follows. For the output which is not perfectly controlled, the C(s) = [I
3+1
a
(6.39)
diagonal element must have a RHP-zero to satisfy (6.35), and the off-diagonal element must have an
s-tenmz in the numerator to give T(0) = I. To satisfy (6.35), we must then require for the two designs (a) Find j4k zero and its output direction. (Answer: z = — 1 and U: = [—a fTC)
(b) Which values of a yield a RHP-zemv, and which of these values is best/worst in terms of achievable
/3i4, 132=1 pemformnance? (Answer: We have a RHP-zero for al < 1. Best for a = 0 with zero at infinity; if control
The RHP-zero has no effect on output 1 for design T1 (s), and no effect on output 2 for design T2(s). at steady-state is required then worst for a = 1 with zemv at a = 0.)
We therefore see that it is indeed possible to move the effect of the R1-IP-zem-o to a particular output. (c) Suppose a = 0.1. Which output is the most difficult to control? Illustrate your conclusion using
Howevem we must pay for this by having to accept some interaction. We note that the magnitude of the Theoremn 6,4. (Answer: Output 2 is the most difficult since the zero is mainly in that direction; we get
interaction, as expressed by /3k. is largest for the case where output 1 is perfectly controlled (Si = 4). strong interaction with fi = 20 if we want to control 1)2 petfectly.)
This is reasonable since the zero output direction 1): = [0.89 —0.45 1T is mainly in the direction
Exercise 6.7 Repeat the above exemrise for the plant
of output 1, so we have to “pay more” to push its effect to output 2. This was also obsen’ed in the
controller designs in Section 3.6; see Figum-e 3.12 on page 97.
C(s)
1 [ s—a 1 (6.40)
s+1 t@+2)2 s—a
We see from the above example that by requiring a decoupled response from r to y, as
in design To(s) in (6.36), we have to accept that the multivaniable RHP-zero appears as a
238 MULTIVARIABLE FEEDBACK CONTROL LIMITATIONS IN MIMO SYSTEMS 239
6.7 Limitations imposed by unstable (RHP) poles Here ü and ~ are the output directions in which the plant has its largest and smallest gains,
respectively; see Chapter 3.
For unstable plants we need feedback for stabilization and a non-zero minimum value of In the following, let r = 0 and assume that the disturbance has been scaled such that at
Plant with RNP-zero. If G(s) has a RI-IP-zero at s = z then the performance may be
poor when the disturbance is aligned with the output direction of this zero. To see this use
6.8 Performance requirements imposed by disturbances y~’S(z) = yY and apply the maximum modulus principle to f(s) = y~Spd to get
For 5150 systems we found that large and “fast” disturbances require tight control and a (6.47)
≥ Iy~!ga(z)I = IY~VdI IIgd(z)112
large bandwidth. The same results apply to MIMO systems, but again the issue of directions
is important. To satisfy IISPdIIoo < 1, we must then for a given disturbance d at least require
Definition 6.2 Disturbance direction. C’onsider a single (scalar) disturbance and let the
vector Pd represent its effect on the outputs (y = gdd). The disturbance direction is defined
Iv~fgd(z)I <jj (6.48)
as
1 where Yz is the direction of the RHP-zero. This provides a generalization of the 5150
Yd Pd (6.42) conditien IGd(z)I < 1 in (5.78). Forcombined disturbances, the condition is IIY~’Gd(z)II2 <
hod 112
1
The associated disturbance condition number is defined as
Remark. In the above development we consider at each frequency performance in terms of 11e112
7d(G) = a(G) U(Gtyd) (6.43) (the 2-norm). However, the scaling procedure presented in Section 1.4 leads naturally to the vector
max-norm as the way to measure signals and performance. Fortunately, this difference is not too
Here Gt is the pseudo-inverse, which is G’ for a non-singular C important, and we will neglect it in the following. The reason is that for an iii x 1 vector a we have
Ilalimax ( Ja~ ≤ ~./i lailmax (see (A.95)), so the values of max- and 2-norms are at most a factor
Remark. We use Pd (rather than Gd) to show that we consider a single disturbance, i.e. Pd is a vector. of .\/1i apart.
For a plant with many disturbances p~ is a column of the matrix Gd.
Example 6.7 Consider the following plant and disturbance models:
The disturbance condition number provides a measure of how a disturbance is aligned with
the plant. It may vary between 1 (for yd = u) if the disturbance is in the “good” direction, G(s)=__!~_[5_1 “
s+2 4~5 2(s—1)
] Pd(s)— 6 [k]
1’ IkI≤1
(6.49)
and the condition number 7(G) = o(G)o(G~) (for yd = it) if it is in the “bad” direction.
/
240 MULTIVAIUABLE FEEDBACK CONTROL LIMITATIONS IN MIMO SYSTEMS 241
It is assumed that the disturbance and outputs have been appropriately scaled, and the question is Two-norm. We measure both the disturbance 11d112 < 1 and the input in terms of the 2-
whether the plant is input—output controllable, i.e. wherlzer we can achieve IISoeII~ < 1,for any value norm. Assume that C has full row rank so that the outputs can be perfectly controlled. Then
of Ikl < 1. C(s) has a REP-zero z = 4 and in Example 4.13 on page 140 we have already computed the smallest inputs (huh2) needed for perfect disturbance rejection are
the zero direction. From this we get
u _GtGad (6.51)
Iygd(zN = [0.83 -0.55 [t] = 0.83k — 0.551
where G~ = G’~(CG”)1 is the Moore—Penrose pseudo-inverse from (A.65). Then with
and from (6.48) we conclude that the plant is not input—output controllable U 10.83k — 0.551 > t Le. a single disturbance we require ~Ctg,j~2 ~ 1. With combined disturbances we require
if k < —0.54. We cannot really conclude that the plant is controllablefork > —0.54 since (6.48) is o(CtCd) ≤ 1; that is, the induced 2-norm is less than 1, see (A.107).
only a necessary (and not sufficient) condition for acceptable pemformnance, and them may also be other For combined reference changes, JIF(w)112 < 1, the corresponding condition for perfect
factors that determine co,ztrollabilit~ such as input constraints which are discussed next. control with 11u112 < 1 becomes U(GTR) < 1, or equivalently (see (A.63))
6.9 Limitations imposed by input constraints where w,. is the frequency up to which reference tracking is required. Usually B is diagonal
with all elements larger than 1, and we must at least require
Constraints on the manipulated variables can limit the ability to reject disturbances and track
a(C(jw)) ~ 1,Vw <Wr (6.53)
references, and to stabilize the plant. As was done for SISO plants in Chapter 5, we will
consider the case of perfect control (e = 0) and then of acceptable control (hell ≤ 1). We
or, more generally, we want o~G&w)) large.
derive the results for disturbances, and the corresponding results for reference tracking are
obtained by replacing C~ by —R. The results in this section apply to both feedback and
I
feedforward control. 6.9.2 Inputs for acceptable control
~,
Remark. For MIMO systems the choice of vector norm, . to measure the vector signal magnitudes It is possible to generalize the results applicable for 5150 systems in Section 5.11.2 to MIMO
at each frequency makes some difference. The vector max-norm (largest element) is the most natural systems using the singular values. The main result is summarized below and the details of the
choice when considering input saturation and is also the most natural in terms of our scaling procedure. derivation can be found in the first edition of this book (Skogestad and Postlethwaite, 1996).
However, for mathematical convenience we will also consider the vector 2-norm (Euclidean norm). In Let r = 0 and consider the response e = Cu + Cdd to a disturbance d. We require Ilell < 1
most cases, the difference between these two norms is of little practical significance.
for any IldIl ~ 1 using inputs with huh < 1. We use here the max-norm, umax (the vector
infinity-norm), for the vector signals. To simplify the problem, we consider this problem
6.9.1 Inputs for perfect control frequency by frequency and one disturbance at a time, i.e. d is a scalar and g~j a vector. The
worst-case disturbance is then IdI = 1 and the problem at each frequency is to compute
We consider the question: can the disturbances IldIl ≤ 1 be rejected perfectly (e = 0) while
maintaining hull ≤ 1? To answer this, we must quantify the set of possible disturbances and Urn111 4: mmU IlUhIrnax such that IICu + Yddllmax ~ 1, k~I (6.54)
the set of allowed input signals. We will consider both the max-norm and 2-norm.
Max-norm and square plant. For a square plant the input needed for perfect disturbance At each frequency the SVD of the plant (which may be non-square) is C = UEV”. We
rejection is u = —C’Gdd (as for 5150 systems). Consider a single disturbance (Pd is a then have that each singular value of C, u~ (C), must approximately satisfy
vector). Then the worst-case disturbance is Jd(ca)l = 1, and we get that input saturation is
avoided (hluhIrnax ≤ 1) if all elements in the vector G’gd are less than 1 in magnitude; that c~(C) ≥ Iu~gdl — 1, at frequencies where u~gdb > (6.55)
is,
11C’gdllmax < 1,Vw where u~ is the i’th output singular vector of C, Note that (6.55) is approximate and is a
For simultaneous disturbances (Cd is a matrix), the corresponding requirement is necessary condition for achieving acceptable control.
T
In addition, we will for completeness consider additive uncertainty
As discussed for 5150 systems in Section 5.12, the presence of uncertainty requires us to G’=G+EA or EAG’-G (6.60)
use feedback control rather than just feedforward control. With MIMO systems there is an although this is generally not a good uncertainty description because it is difficult to quantify
additional problem in that there is also uncertainty associated with the plant directionality. the magnitude of EA. If all the elements in the matrices Ei, E0 or EA are non-zero, then
The main objective of this section is to introduce some simple tools, like the RCA and we have full-block (“unstructured”) uncertainty. However, unstructured uncertainty is often a
the condition number, which are useful in picking out plants for which one might expect poor (conservative) assumption for multivariable plants. We will therefore focus on diagonal
sensitivity to multivariable (directional) uncertainty. input and output uncertainty, where Ei or E0 are diagonal matrices. This uncertainty is
Consider the actual (uncertain) plant G’ and the two-degrees of freedom controller usually caused by uncertainty in the individual input or output channels. For example,
= K(r — y’)K7r. Here K is the feedback controller and Kr the feedforward controller
for references, see Figure 2.5. For simplicity, we only consider feedforward control for diag{c1,c2,...} (6.61)
references, but the analysis may easily be extended to distrubances. The resulting control where c~ is the relative uncertainty in input channel i. Typically, the magnitude of e1 is 0.1 or
error e’ in response to a reference changer is, see (2.28), larger. It is important to stress that diagonal input and output uncertainty is always present in
real systems. Of these, we will show that diagonal input uncertainty is usually the worst for
= y’ — r = —S’S~r (6 56) control, because performance is measured at the plant output.
where S = (I +‘ GK)’ is the (feedback) sensitivity function and S~ = I G’K,. is —
The minimized condition number ‘y (G) may be computed using (A.75). Similarly, we state
for both feedback and feedforward control, lower bounds in terms of the RCA matrix of the
6.10.2 Effect of uncertainty on feedforward control
plant. We consider here the effect of uncertainty when we use “perfect” (inverse based) feedforward
control. We use the feeforward controller u = Krr and assume that the plant 0 is invertible
Remark. In Chapter 8, we discuss more exact methods for analyzing performance with almost any so that we can select
kind of uncertainty and a given controller. This involves analyzing robust performance by use of the Kr
structured singular value. However, in this section the treatment is kept at a more elementary level as
we look for results that depend on the plant only. For the nominal case with no uncertainty we then achieve peifect control with S~ = 0; that is,
e = y r = (0K,. I)r = —S,.r = 0. However, for the actual plant 0’ (with uncertainty)
— —
In practice, the difference between the true perturbed plant 0’ and the plant model G is caused and we get for the three sources of uncertainty
by a number of different sources. In this section, we focus on input uncertainty and output Output uncertainty:
C” ~‘
LiQ (6.62)
uncertainty. In a multiplicative (relative) form, the output and input uncertainties (as in Figure
Input uncertainty: = GEiG’ (6.63)
6.2) are given by2
Additive uncertainty: —5. = (6.64)
Output uncertainty: 0’ = (I + Eo)G or P20 = (0’— (6.58) Forfeedforward control to be effective (at a given frequency) we must require U(S~) ~ 1. We
Input uncertainty: 0’ = 0(1+ E1) or P21 = G’(G’ —0) (6.59) derive the following upper bounds for the three sources of uncertainty:
2 Inthis book we use ~ to represent nod~lized uncertainty which is norm-bounded to be less than 1, whereas Output uncertainty: 0(5:.) = (6.65)
E = cIA is not normalized. We often use a weight 1w! = id = 0(E) to represent the magnitude of the
uncertainty. Input uncertainty: o(g) <a(E1) 7(0) (6.66)
Additive uncertainty: o(S.) <U(EA)/u(G) (6.67)
244 MULTIVARIABLE FEEDBACK CONTROL LIMITATIONS IN MIMO SYSTEMS 245
where we have used a(G~) = 1/c(G) and introduced the condition number 7(G) Example 6.8 Inverse-based control of distillation process. For the distillation process in (3.93) we
a(G)/a(G). The bounds are tight (i.e. equality can always be achieved) if we assume that have
any “full-block” uncertainty F0, Fj or EA of a given magnitude is allowed. For output 1 [87.8 86.41 A’G~— [35.1 —34.1 671
uncertainty, (6.62) is identical to the result that can be derived for 5150 systems (see — 75 + 1 [108.2 —109.6]’ 1~ [—34.1 35.1
page 204), and we must require for effective use of feeforward control that the relative output and 7(0) = -y (0) = 141.7. The RGA elements are large so we know that inverse-based feedfonvard
uncertainty is less than 1. For input uncertainty, the norm of the matrix GESG’ can be a control is sensitive to diagonal input uncertainly. With E, diag{ci, c2} we get,for all frequencies,
factor 7(G) larger than the norm of F5, and for a large 7(G) we must require that the relative
input uncertainty is much less than 1. However, inequalities (6.66) and (6.67) are generally CE o—’ —
—
[35.1e~
[43.2e~
—
—
34.162
43.262
—27Thj + 27.7e~
—34.lej + 35.1c~]
1 ‘672
conservative because it is not likely in practice than any full-block uncertainty of a given
magnitude is possible. The elements in the matrix GEsG’ are largest when 61 and 62 have opposite signs. With a 20% error
Diagonal input uncertainty. We will therefore focus on diagonal input uncertainty, which in each input channel, we may select ej = 0.2 and 62 = —0.2 and find
always occurs in practice, and which may severely limit multivariable performance with [13.8
GE1C —l ~ —ii.i]
~33] (6.73)
feedforward control. In particular, we will show that
o Feedforward control with diagonal input uncertainty is acceptable for plants with a small Thus with an “ideal” feedforward controller and 20% input uncertainty, we get from (6.63) that the
minimized input condition number 7(G), see (6.68), but should be avoidedfor plattts with relative tracking error at allfrequencies, including steady-state, may exceed 1000%. This de;nonst rates
large RGA elements, see (6.70). the needfor feedback control. Howeve;; applying feedback control is also difficult for this plant as seen
in Example 6.11.
With diagonal input uncertainty (6.61) we may write F1 = D1E5D7’ and —S~. =
(GDnF5(GD5)’ where the diagonal matrix D1 is free to be chosen. We may use this The following example demonstrates that a large plant condition number, 7(G), does not
degree of freedom to make the bound on o-(S~) less conservative. We have (for all diagonal necessarily imply sensitivity to uncertainty even with an inverse-based controller.
F1)
= a(GF50’) ≤ U(E~)y(G) (6.68) Example 6.9 Inverse-based control of distillation process, DV-model. in this example we
consider the following distillation model given by Skogestad et al. (1988) (it is the same system as
This shows that we have insensitivity to diagonal input uncertainty tf the minimized input studied above but with the DV- rather than the LV-configuration for the lower control levels, see
condition number is small. To be able to say “if and only if” we would need (6.68) to be tight Example 10.8):
(at least within some factor); that is, there should always exists a “worst-case” diagonal F1 o — 1 [ 87.8 1.4
75s + 1 H108.2 —1.4
VC~ [0.448 0.552
1~ [0.552 0.448
674
that makes U(S~) reasonably close to the upper bound. Although this seems likely in most — ‘
cases, it has not been proved to hold generally. Fortunately, we have an RCA condition that We have that lIA(C(iw~IIt~ = 1, 7(0) 70.76 and7(G) 1.11 atallfrequencies. The condition
works in the opposite direction. With diagonal input uncertainty, the diagonal elements of nwnber is large, but nevertheless there is no sensitivity to diagonal input uncertainty, because ~ (0) is
GEJG’ are from (A.81) directly given by the corresponding row elements of the RGA small. This applies to ideal inverse-based feedforward control, see (6.68). as well as to inverse-based
feedback control, see (6.92) below.
[GE1G’ ]~ = Z A1~(G)~ (6.69) Example 6.10 For a 2 x 2 plant with diagonal input uncertainty we generally have
0E10
—l Aiiej + A12e~
2lAn(ci_62) 922 — (2)1 (6.75)
The norm of a matrix is always lai-ger than its elements, and by allowing any diagonal input A21e1 +A22c2 ]
uncertainty satisfying l~4 < U(Fj) we may select the worst-case combination of e~ such that
For example, condsider a triangular plant with 912 = 0 and with a large lg2jj/Jgii I,
the row-sum is maximized (see remark on page 246). We then have (for some “worst-case”
diagonal Fi)
o-(S~) = a(0E1G’) ≥ a(Ei)IIAII~~ (6.70)
where IIAW.~ is the induced co-norm (maximum row sum) of the RCA, The RCA matrix is is inverse -based feedfonvard control sensitive to uncertainty for this plant? A = I, which is small, so
easy to compute and independent of both input and output scalings, which make the use of the lower bound (6.70) in terms of the RCA is inconclusive. The minimized input condition number for
condition (6.70) particularly attractive. Since diagonal input uncertainty is always present, we this triangular plant is 7 = 219211/19111 = 20. which is large, so the upper bound (6.68) in tam-ms
conclude from (6.63) and (6.70) that if the plant has large RCA elements then performance of 7 is also inconclusive. Howevem; the system is indeed sensitive to diagonal input uncem-tainly, since
with feedforward control will be poor. The reverse statement is not true; that is, if the RCA from (6.75) the 2,1 element of CESC’ is (g21/g11)(ei — e2). For example, with 20% diagonal input
has small elements we cannot conclude that the plant is insensitive to input uncertainty. uncertainty we may select e1 = 0.2 and 62 = —0.2 and the 2, 1 element becomes 10(0.2 + 0.2) = 4
This follows because we cannot from the RCA say anything about the magnitude of the which is much larger than 1, and feedfonvard control is expected to yield poor pemformance with
off-diagonal elements of GF5G’; see also Example 6.10. uncertainty This motivates the use offeedback control for this plant.
246 MULTIVARIABLE FEEDBACK CONTROL LIMITATIONS IN MIMO SYSTEMS 247
Remark. Worst-case uncertainty. It is useful to know which combinations of input errors give poor Remark 3 Another form of (6.77) is (Zames, 1981)
performance. For an inverse-based controller (feedforward or feedback), a good indicator results if we
consider CE,C’, where B, = diag{ea}. If all e~, have the same magnitude Iwil = a(Ej), then 2”— T = S’(L’ — L)S (6.82)
the largest possible magnitude of any diagonal element in CE,C’ is given by Iwil IIA(C)II1~. To.
obtain this value one may select the phase of each e~ such that Lea = —LA~a, where i denotes the Conclusion. Prom (6.77), (6.81) and (6.82) we see that with feedback control 2” —2’ is small
row of A(C) with the largest elements. Also, if A(C) is real (e.g. at steady-state), the signs of the q~’s at frequencies where feedback is effective (i.e. S and 5’ are small). This is usually at low
should be the opposite from those in the row of A(C) with the largest elements. frequencies. At higher frequencies we have for real systems that L is small, so 2’ is small,
and again 2” —2’ is small. Thus with feedback, uncertainty only has a significant effect in the
6.10.3 Uncertainty and the benefits of feedback crossover region where S and 2’ both have norms around 1.
where AT = T’ T and AC = C’
— — C. Equation (6.78) provides a generalization of Bode’s where WI(S) and wo(s) are scalar weights. Typically the uncertainty bound, IwiI or Iwo is ~,
differential relationship (2.24) for 8150 systems. To see this, consider a 5180 system and let AC 0. —>
0.2 at low frequencies and exceeds 1 at higher frequencies.
Then 8’ —~ S and we have from (6.78) We first state some upper bounds on U(S’). These are based on identities (6.84)—(6.86) and
dT dC
(6.79) singular value inequalities (see Appendix A.3.4) of the kind
2’
Remark 2 Alternative expressions showing the benefits of feedback control are derived by introducing
the inverse output multiplicative uncertainty C’ = (1 — E~o)~’C. We then get (Horowitz and = £(I+EIT,) 1—~(EjTj) i—~(E1)~(T,) 1—[wj~(T,)
Shaked, 1975)
Of course these inequalities only apply if we assume U(E1T1) < 1, U(Er)U(Tr) < 1 and
Feedfonvard control: — Tr = (6.80) Iwila(T,) < 1. For simplicity, we will not state these assumptions each time.
Feedback control: — T = SE10T’ (6.81)
(Simple proof for square plants: switch C and C’ in (6.76) and (6.77) and use E,o = (C’ —
248 MULTI VARIABLE FEEDBACK CONTROL LIMJTATIONS IN MIMO SYSTEMS 249
Upper bound on σ̄(S′) for output uncertainty

From (6.84), we derive

  σ̄(S′) ≤ σ̄(S) σ̄((I + E_O T)⁻¹) ≤ σ̄(S) / (1 − |w_O| σ̄(T))    (6.88)

From (6.88), we see that output uncertainty, be it diagonal or full block, poses no particular problem when performance is measured at the plant output. That is, if we have a reasonable stability margin (‖(I + E_O T)⁻¹‖∞ is not too much larger than 1), then the nominal and perturbed sensitivities do not differ very much.

Upper bounds on σ̄(S′) for input uncertainty

General case (full-block or diagonal input uncertainty and any controller). From (6.85) and (6.86), we derive

  σ̄(S′) ≤ γ(G) σ̄(S) σ̄((I + E_I T_I)⁻¹) ≤ γ(G) σ̄(S) / (1 − |w_I| σ̄(T_I))    (6.89)

  σ̄(S′) ≤ γ(K) σ̄(S) σ̄((I + T_I E_I)⁻¹) ≤ γ(K) σ̄(S) / (1 − |w_I| σ̄(T_I))    (6.90)

From (6.89), we see that for a plant with a small condition number, γ(G) ≈ 1, the system is insensitive to input uncertainty, irrespective of the controller. From (6.90), we have the important result that if we use a "round" controller, meaning that γ(K) is close to 1, then the sensitivity function is not sensitive to input uncertainty. In many cases, (6.89) and (6.90) are not very useful because they yield unnecessarily large upper bounds.

Diagonal input uncertainty (any controller). From the first identity in (6.85) we get S′ = S(I + (GD_I)E_I(GD_I)⁻¹T)⁻¹, and we derive, by singular value inequalities,

  σ̄(S′) ≤ σ̄(S) / (1 − γ*_I(G) |w_I| σ̄(T))      (6.91)

  σ̄(S′) ≤ σ̄(S) / (1 − γ*_O(K) |w_I| σ̄(T_I))    (6.92)

From (6.91), the system is insensitive to diagonal input uncertainty if γ*_I(G) is small, irrespective of the controller. Similarly, from (6.92) the system is insensitive to diagonal input uncertainty if γ*_O(K) is small, irrespective of the plant. Note that γ*_O(K) = 1 for a diagonal controller (decentralized control), so (6.92) shows that diagonal uncertainty poses no problem with decentralized control. On the other hand, with an inverse-based (decoupling) controller of the form K = DG⁻¹ where D is diagonal, we have γ*_O(K) = γ*_I(G), so decoupling control may be sensitive to diagonal input uncertainty for plants with a large γ*_I(G).

Lower bound on σ̄(S′) for input uncertainty (including diagonal input uncertainty)

Above we derived upper bounds on σ̄(S′); we will next derive a lower bound. A lower bound is useful because it allows us to make definite conclusions about when the plant is not input–output controllable. Importantly, the bound applies also to the special (and common) case of diagonal input uncertainty.

Theorem 6.5 Input uncertainty and inverse-based control. Consider a controller K(s) = (l(s)/s)G⁻¹(s) which results in a nominally decoupled response with sensitivity S = s·I and complementary sensitivity T = t·I, where t(s) = 1 − s(s). Suppose the plant has diagonal input uncertainty E_I of relative magnitude |w_I(jω)| in each input channel. Then there exists a combination of input uncertainties (i.e. there exists a diagonal Δ_I) such that at each frequency

  σ̄(S′) ≥ σ̄(S) ( 1 + |w_I t|/(1 + |w_I t|) · ‖Λ(G)‖_{i∞} )    (6.93)

where ‖Λ(G)‖_{i∞} is the maximum row sum of the RGA and σ̄(S) = |s|.

The proof is given below. From (6.93), we see that with an inverse-based controller the worst-case sensitivity will be much larger than the nominal at frequencies where the plant has large RGA elements. At frequencies where control is effective (σ̄(S) is small and t ≈ 1), this implies that control is not as good as expected, but it may still be acceptable. However, at crossover frequencies, where σ̄(S) and |t| = |1 − s| are both close to 1, we find that σ̄(S′) in (6.93) may become much larger than 1 if the plant has large RGA elements at these frequencies. The bound (6.93) applies to diagonal input uncertainty and therefore also to full-block input uncertainty (since it is a lower bound).

Proof of Theorem 6.5: (From Skogestad and Havre (1996) and Gjøsæter (1995).) Write the sensitivity function as

  S′ = (I + G′K)⁻¹ = SG(I + E_I T_I)⁻¹G⁻¹,   E_I = diag{e_k},   S = sI    (6.94)

Since E_I is a diagonal matrix, we have from (6.69) that the diagonal elements of S′ are given in terms of the RGA of the plant G as

  s′_ii = s · Σ_{k=1..n} λ_ik · 1/(1 + t e_k),   Λ = G × (G⁻¹)ᵀ    (6.95)

(Note that s here is a scalar sensitivity function and not the Laplace variable.) The singular value of a matrix is larger than any of its elements, so σ̄(S′) ≥ |s′_ii|, and the objective in the following is to choose a combination of input errors e_k such that the worst-case |s′_ii| is as large as possible. Consider a given output i and write each term in the sum in (6.95) as

  λ_ik/(1 + t e_k) = λ_ik − λ_ik t e_k/(1 + t e_k)    (6.96)

We choose all e_k to have the same magnitude |w_I(jω)|, so we have e_k(jω) = |w_I| e^{jφ_k}. We also assume that |t e_k| < 1 at all frequencies³, so that the phase of 1 + t e_k lies between −90° and 90°. It is then always possible to select ∠e_k (the phase of e_k) such that the last term in (6.96) is real and negative, and we have at each frequency, with these choices for e_k,

  |s′_ii/s| = |Σ_k λ_ik/(1 + t e_k)| = 1 + Σ_k |λ_ik| |t e_k| / |1 + t e_k| ≥ 1 + |w_I t|/(1 + |w_I t|) · Σ_k |λ_ik|    (6.97)

³ The assumption |t e_k| < 1 is not included in the theorem since it is actually needed for robust stability. If it does not hold we may have σ̄(S′) infinite for some allowed uncertainty, and (6.93) clearly holds.
where the first equality makes use of the fact that the row elements of the RGA sum to 1 (Σ_{k=1..n} λ_ik = 1). The inequality follows since |e_k| = |w_I| and |1 + t e_k| ≤ 1 + |t e_k| = 1 + |w_I t|. This derivation holds for any i (but only for one at a time), and (6.93) follows by selecting i to maximize Σ_k |λ_ik| (the maximum row sum of the RGA of G). □

We next consider three examples. In the first, we consider a plant where both the RGA and γ*_I(G) are large. In the second, they are both small. In the third, the RGA is small, but γ*_I(G) is large. The first and third are sensitive to diagonal input uncertainty, whereas the second (where γ*_I(G) is small) is insensitive.

Example 6.11 Feedback control of distillation process. Consider again the distillation process G(s) in (6.71) which we on page 245 found to be sensitive to diagonal input uncertainty with feedforward control. For this plant we have ‖Λ(G(jω))‖_{i∞} = 69.1 and γ(G) ≈ γ*_I(G) ≈ 141.7 at all frequencies.

1. Inverse-based feedback controller. Consider the controller K_inv(s) = (0.7/s)G⁻¹(s), corresponding to the nominal sensitivity function

  S = s/(s + 0.7) · I

The nominal response is excellent, but we found from simulations in Figure 3.14 that the closed-loop response with 20% input gain uncertainty was extremely poor (we used e1 = 0.2 and e2 = −0.2). The poor response is easily explained from the lower RGA bound on σ̄(S′) in (6.93). With the inverse-based controller we have l(s) = k/s, which has a nominal phase margin of PM = 90°, and from (2.50) we have, at the crossover frequency ω_c, that |s(jω_c)| = |t(jω_c)| = 1/√2 = 0.707. With |w_I| = 0.2, we then get from (6.93) that

  σ̄(S′(jω_c)) ≥ 0.707 · ( 1 + (0.707·0.2)/(1 + 0.707·0.2) · 69.1 ) = 0.707 · 9.56 = 6.76    (6.98)

(This is close to the peak value of the bound (6.93), which is 6.81 at frequency 0.79 rad/min.) Thus, with 20% input uncertainty we may have ‖S′‖∞ ≥ 6.81, and this explains the observed poor closed-loop performance. For comparison, the actual worst-case peak value of σ̄(S′), with the inverse-based controller, is 14.5 (computed numerically using skewed-μ as discussed below). This is close to the value obtained with the uncertainty E_I = diag{e1, e2} = diag{0.2, −0.2},

  ‖S′‖∞ = ‖(I + (0.7/s) G diag{1.2, 0.8} G⁻¹)⁻¹‖∞ = 14.5

for which the peak occurs at 0.69 rad/min. The difference between the values 6.81 and 14.5 illustrates that the bound in terms of the RGA is generally not tight, but it is nevertheless very useful.

2. Diagonal (decentralized) feedback controller. Consider the controller

  K_diag(s) = (k2 (τs + 1)/s) · [1 0; 0 −1],   k2 = 2.4·10⁻² [min⁻¹]

The peak value for the upper bound on σ̄(S′) in (6.92) is 1.26, so we are guaranteed ‖S′‖∞ ≤ 1.26, even with 20% gain uncertainty. For comparison, the actual peak in the perturbed sensitivity function with E_I = diag{0.2, −0.2} is ‖S′‖∞ = 1.05. Of course, the problem with the simple diagonal controller is that (although it is robust) even nominal performance is poor.

Remark. Relationship with the structured singular value: skewed-μ. To analyze exactly the worst-case sensitivity with a given uncertainty magnitude |w_I| we may compute skewed-μ (μˢ). With reference to Section 8.11, this involves computing μ_Δ̃(N) with Δ̃ = diag(Δ_I, Δ_P) and the N given there, and varying μˢ until μ(N) = 1. The worst-case performance at a given frequency is then σ̄(S′) = μˢ(N).

Example 6.12 Consider the plant

  G(s) = [1 100; 0 1]

for which at all frequencies Λ(G) = I, γ(G) = 10⁴ and γ*_I(G) ≈ 200. The RGA matrix is the identity, but since g12/g11 = 100 we expect from (6.75) that this plant will be sensitive to diagonal input uncertainty if we use inverse-based feedback control, K = (c/s)G⁻¹. This is confirmed if we compute the worst-case sensitivity function S′ for G′ = G(I + w_I Δ_I), where Δ_I is diagonal and |w_I| = 0.2. We find, by computing skewed-μ, μˢ(N), that the peak of σ̄(S′) is about 20.4. Note that the peak is independent of the controller gain c in this case since G(s) is a constant matrix. Also note that with full-block ("unstructured") input uncertainty (Δ_I is a full matrix) the worst-case sensitivity is much larger, ‖S′‖∞ = 1021.7.

Conclusions on input uncertainty and feedback control

Let us summarize the above findings. The following statements apply to the frequency range around crossover. By "small", we mean about 2 or smaller. By "large" we mean about 10 or larger.

1. Condition number γ(G) or γ(K) small: robust performance to both diagonal and full-block input uncertainty; see (6.89) and (6.90).
2. Minimized condition numbers γ*_I(G) or γ*_O(K) small: robust performance to diagonal input uncertainty; see (6.91) and (6.92). Note that a diagonal controller (decentralized control) always has γ*_O(K) = 1.
3. RGA(G) has large elements: an inverse-based controller is not robust to diagonal input uncertainty; see (6.93). Since diagonal input uncertainty is unavoidable in practice, the rule is never to use a decoupling controller for a plant with large RGA elements. Furthermore, a diagonal controller will most likely yield poor nominal performance for a plant with large RGA elements, so we conclude that plants with large RGA elements are fundamentally difficult to control.
4. γ*_I(G) is large while at the same time RGA(G) has small elements: we cannot make any definite conclusion about the sensitivity to input uncertainty based on the bounds in this section. However, as seen in Examples 6.10 and 6.12, we may expect sensitivity to diagonal input uncertainty with inverse-based feedforward or feedback control.

6.10.5 Element-by-element uncertainty

Element-by-element uncertainty assumes independent uncertainty in the individual elements of G. This kind of uncertainty description may be questionable from a physical point of view, but it is nevertheless popular. Interestingly, the RGA matrix is a direct measure of sensitivity to element-by-element uncertainty, as matrices with large RGA values become singular for small relative errors in the elements.

Theorem 6.6 Consider a complex matrix G and let λ_ij denote the ij'th element in the RGA matrix of G. The matrix G becomes singular if we make a relative change −1/λ_ij in its ij'th element; that is, if a single element in G is perturbed from g_ij to g_pij = g_ij(1 − 1/λ_ij).

The theorem is due to Yu and Luyben (1987). Our proof in Appendix A.4 is from Hovd and Skogestad (1992).
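A minimal numerical check of Theorem 6.6 (a sketch, using the steady-state gain matrix of the distillation model (6.71), whose RGA is discussed in Example 6.13 below):

  % Sketch: perturbing a single element by the relative change -1/lambda_ij
  % makes the matrix singular (Theorem 6.6).
  G   = [87.8 -86.4; 108.2 -109.6];
  RGA = G.*inv(G).';                        % Lambda = G x (G^-1)^T
  lambda12 = RGA(1,2);                      % approximately -34.1
  Gp = G;  Gp(1,2) = G(1,2)*(1 - 1/lambda12);
  disp([det(G) det(Gp)])                    % det(Gp) is zero (up to round-off)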
Example 6.13 The matrix G in (6.71) is non-singular. The 1,2 element of the RGA is λ12(G) = −34.1. Thus, the matrix G becomes singular if g12 is perturbed from −86.4 to

  g_p12 = −86.4(1 − 1/(−34.1)) = −88.9    (6.99)

The above theorem is an important algebraic property of the RGA, but it also has important implications for improved control:

1. Identification. Models of multivariable plants, G(s), are often obtained by identifying one element at a time, e.g. using step responses. From Theorem 6.6 it is clear that this simple identification procedure will most likely give meaningless results (e.g. the wrong sign of the steady-state RGA) if there are large RGA elements within the bandwidth where the model is intended to be used.
2. RHP-zeros. Consider a plant with transfer function matrix G(s). If the relative uncertainty in an element at a given frequency is larger than |1/λ_ij(jω)| then the plant may be singular at this frequency, implying that the uncertainty allows for a RHP-zero on the jω-axis. This is of course detrimental to performance in terms of both feedforward and feedback control.

Remark. Theorem 6.6 seems to "prove" that plants with large RGA elements are fundamentally difficult to control. However, although the statement may be true (see the conclusions on page 251 based on diagonal input uncertainty, which is always present), we cannot draw this conclusion from Theorem 6.6. This is because the assumption of element-by-element uncertainty is often unrealistic from a physical point of view, since the elements are usually coupled in some way. For example, this is the case for the distillation column process, where the elements are coupled due to an underlying physical constraint in such a way that the model (6.71) never becomes singular, even for large changes in the transfer function matrix elements.

6.10.6 Steady-state condition for integral control

Feedback control reduces the sensitivity to model uncertainty at frequencies where the loop gains are large. With integral action in the controller we can achieve zero steady-state control error, even with large model errors, provided the sign of the plant, as expressed by det G(0), does not change. The statement applies for stable plants, or more generally for cases where the number of unstable poles in the plant does not change. The conditions are stated more exactly in the following theorem by Hovd and Skogestad (1994).

Theorem 6.7 Let the number of open-loop unstable poles (excluding poles at s = 0) of G(s)K(s) and G′(s)K(s) be P and P′, respectively. Assume that the controller K is such that GK has integral action in all channels, and that the transfer functions GK and G′K are strictly proper. Then if

  det G′(0)/det G(0)  { < 0 for P − P′ even, including zero;  > 0 for P − P′ odd }    (6.100)

at least one of the following instabilities will occur: (a) The negative feedback closed-loop system with loop gain GK is unstable. (b) The negative feedback closed-loop system with loop gain G′K is unstable.

Proof. For stability of both (I + GK)⁻¹ and (I + G′K)⁻¹ we have from Lemma A.5 in Appendix A.7.3 that det(I + E_O T(s)) needs to encircle the origin P − P′ times as s traverses the Nyquist D-contour. Here T(0) = I because of the requirement for integral action in all channels of GK. Also, since GK and G′K are strictly proper, E_O T is strictly proper, and hence E_O(s)T(s) → 0 as s → ∞. Thus, the map of det(I + E_O T(s)) starts at det G′(0)/det G(0) (for s = 0) and ends at 1 (for s = ∞). A more careful analysis of the Nyquist plot of det(I + E_O T(s)) reveals that the number of encirclements of the origin will be even for det G′(0)/det G(0) > 0, and odd for det G′(0)/det G(0) < 0. Thus, if this parity (odd or even) does not match that of P − P′ we will get instability, and the theorem follows. □

Example 6.14 Suppose the true model of a plant is given by G(s), and that by careful identification we obtain a model G1(s),

  G(s) = 1/(75s+1) · [87.8 −86.4; 108.2 −109.6],   G1(s) = 1/(75s+1) · [87 −88; 109 −108]

At first glance, the identified model seems very good, but it is actually useless for control purposes since det G1(0) has the wrong sign; det G(0) = −274.4 and det G1(0) = 196 (also the RGA elements have the wrong sign; the 1,1 element in the RGA is −47.9 instead of +35.1). From Theorem 6.7 we then get that any controller with integral action designed based on the model G1 will yield instability when applied to the plant G.

6.11 MIMO input–output controllability

We now summarize the main findings of this chapter in an analysis procedure for input–output controllability of a MIMO plant. The presence of directions in MIMO systems makes it more difficult to give a precise description of the procedure in terms of a set of rules as was done in the SISO case.

6.11.1 Controllability analysis procedure

The following procedure assumes that we have made a decision on the plant inputs and plant outputs (manipulations and measurements), and we want to analyze the model G to find out what control performance can be expected. The procedure can also be used to assist in control structure design (the selection of inputs, outputs and control configuration), but it must then be repeated for each G corresponding to each candidate set of inputs and outputs. In some cases, the number of possibilities is so large that such an approach becomes prohibitive. Some pre-screening is then required, e.g. based on physical insight or by analyzing the "large" model, G_all, with all the candidate inputs and outputs included. This is briefly discussed in Section 10.4.

A typical MIMO controllability analysis may proceed as follows:

1. Scale all variables (inputs u, outputs y, disturbances d, references r) to obtain a scaled model, y = G(s)u + G_d(s)d; see Section 1.4.
2. Obtain a minimal realization.
3. Check functional controllability. To be able to control the outputs independently, we first need at least as many inputs u as outputs y. Second, we need the rank of G(s) to be equal to the number of outputs, l, i.e. the minimum singular value of G(jω), σ(G) = σ_l(G), should be non-zero (except at possible jω-axis zeros). If the plant is not functionally controllable then compute the output direction where the plant has no gain, see (6.30), to obtain insight into the source of the problem. (A Matlab sketch illustrating this and the related steps 7 and 9 is given after this list.)
4. Compute the poles. For RHP (unstable) poles obtain their locations and associated A controllability analysis may also be used to obtain initial performance weights for
directions; see (6.5). “Fast” RHP-poles far from the origin are bad. controller design. After a controller design one may analyze the controller by plotting, for
5. Compute the zeros. For RHP-zeros obtain their locations and associated directions. Look example, its elements, singular values, RCA and condition number as a function of frequency.
for zeros pinned into certain outputs. “Small” RHP-zeros (close to the origin) are bad if
tight performance at low frequencies is desired.
6.11.2 Plant design changes
6. Calculate the bounds on different closed-loop transfer functions using the formulae
summarized in Table 6.3.2. A large peak (>> 1) for any of 5, T, KS, SGd, KSGd, s~ If a plant is not input—output controllable, then it must somehow be modified. Some possible
and T1 (including Gd = G) indicates poor closed-loop performance or poor robustness modifications are listed below.
against uncertainty. Note that the peaks of KS, SGd, KSGd depend on the scaling of the Controlled outputs. Identify the output(s) which cannot be controlled satisfactorily.
plant and disturbance models. Should these outputs really be controlled? Can the specifications for these be relaxed?
7. Obtain the frequency response G(jw) and compute the RCA matrix, A = G x (Gt)T. Manipulated inputs. If undesirable input constraints are encountered then consider
Plants with large RGA elements at crossover frequencies are difficult to control and should replacing or moving actuators. For example, this could mean replacing a control valve with a
be avoided. For more details about the use of the RGA see Section 3.3.6, page 81. larger one, or moving it closer to the controlled output.
8. From now on scaling is critical. Compute the singular values of G(jw) and plot them as a If there are RHP-zeros which cause control problems then the zeros may often be
function of frequency. Also consider the associated input and output singular vectors. eliminated by adding another input (possibly resulting in a non-square plant). This may not
9 The minimum singular value, u(G(jw)), is a particularly useful controllability measure. be possible if the zero is pinned to a particular output.
It should generally be as large as possible at frequencies where control is needed. If Extra measurements. If there are RFIP-zeros that cause control problems, then these zeros
u(G(jw)) < 1 then we cannot (at frequency w) make independent output changes of may often be eliminated by adding extra measurements (i.e. add outputs with no associated
unit magnitude by using inputs of unit magnitude. control objective). If the effect of disturbances, or uncertainty, is large, and the dynamics of
10. For disturbances, consider the elements of the matrix °d~ At frequencies where one or the plant are such that acceptable control cannot be achieved, then consider adding “fast local
more elements is larger than I, we need control. We get more information by considering loops” based on extra measurements which are located close to the inputs and disturbances;
one disturbance at a time (the columns g~j of Gd). We must require for each disturbance see Section 10.6.4 and the example on page 216.
that S is less than 1/~~gd~~2 in the disturbance direction yd. i.e. IISydII2 ≤ 1/~~gd~I2; see Disturbances. If the effect of disturbances is too large, then see whether the disturbance
(6.45). Thus, we must at least require a(S) < 1/~~gd~~2 and we may have to require itself may be reduced. This may involve adding extra equipment to dampen the disturbances,
a(S) < 1/IIgdII2; see (6.46). such as a buffer tank in a chemical process or a spring in a mechanical system. In other cases,
this may involve improving or changing the control of another part of the system, e.g. we
Remark. If feedforward control is already used, then one may instead analyze ad(s) = GKdG,nd+
may have a disturbance which is actually the manipulated input in another part of the system.
Gd where ‘(~ denotes the feedforward controller, see (5.101).
Plant dynamics and time delays. In most cases, controllability is improved by making
11. Disturbances and input saturation: the plant dynamics faster and by reducing time delays. An exception to this is a strongly
First step. Consider the input magnitudes needed for perfect control by computing the interactive plant, where an increased dynamic lag or time delay may be helpful if it somehow
elements in the matrix GtGd. If all elements are less than I at all frequencies then “delays” the effect of the interactions; see (6.31). Another more obvious exception is for
input saturation is not expected to be a problem. If some elements of GtGd are feedforward control of a measured disturbance, where a delay for the disturbance’s effect on
larger than 1, then perfect control (a = 0) cannot be achieved at this frequency, the outputs is an advantage.
but “acceptable” control (hell2 < 1) may be possible, and this may be tested in the
second step Example 6.15 Removing zeros by adding inputs. Consider a stable 2 x 2 plant
Second step. Check condition (6.55): that is, consider the elements of U”Gd and make 1 Fs+1 s+3
0’k~)— (s+2)2L 1 2
sure that the elements in the i’th row are smaller than ct(G) + 1, at all frequencies.
12. Are the requirements compatible? Look at disturbances, RHP-poles and RHP-zeros which has a RHP-zero at $ = 1 which limits achievable pe;fornzance. The zero is not pinned to a
and their associated locations and directions. For example, we must require for each particular output, so it will most likely disappear if we add a third manipulated input. Suppose the new
disturbance and each RHP-zero that y~~gd(z)~ ≤ 1; see (6.47). For combined RHP-zeros plant is
1 f8+i s+3 s+6
and RHP-poles see (6.8). 1 2 3
13. Uncertainty. If the condition number 7(G) is small then we expect no particular problems
with uncertainty. If the RGA elements are large, we expect strong sensitivity to uncertainty. which indeed has no zeros. It is interesting to note that each of the three individual 2 x 2 sub—plants of
For a more detailed analysis see the conclusion on page 251 G2(s) has a RHP-zero (located at s = 1, s = 1.5 and s = 3, respectively).
14. If decentralized control (diagonal controller) is of interest see the summary on page 448.
15. The use of the condition number and RCA are summarized separately in Section 3.3.6,
page 81.
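Steps 3, 7 and 9 of the above procedure lend themselves to a quick numerical check. The following Matlab sketch is a minimal illustration only (it uses the distillation model (6.71) purely as an example and standard Control System Toolbox commands); it computes the minimum singular value, the maximum RGA row sum and the poles and transmission zeros over frequency:

  % Sketch: frequency-by-frequency controllability indicators.
  s = tf('s');
  G = [87.8 -86.4; 108.2 -109.6]/(75*s+1);   % example plant (6.71)
  w = logspace(-3,1,61);                     % frequency grid [rad/min]
  for k = 1:length(w)
      Gw          = freqresp(G, w(k));       % complex gain matrix G(jw)
      sv          = svd(Gw);
      sigmamin(k) = sv(end);                 % step 9: minimum singular value
      RGA         = Gw.*inv(Gw).';           % step 7: RGA at this frequency
      rgasum(k)   = max(sum(abs(RGA),2));    % maximum row sum of the RGA
  end
  p = pole(G), z = tzero(G)                  % steps 4-5: poles and zeros
  loglog(w, sigmamin, w, rgasum)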
riol r
The reader will be better prepared for some of these exercises following an initial reading of
C(s)
—
—
[ioU
[ioo 1021
iooj’ gdjçs)
—
—
I ITT I.
I 10 I’ 9d2 =
I ITT
I —1
Chapter 10 on decentralized control. In all cases the variables are assumed to be scaled as LmJ LmJ
outlined in Section 1.4. Which disturbance is the worst?
Exercise 6.8 Analyze input—output controllability for
*
Exercise 6.15 (a) Analyze input—output controllability for the following three plants each of which
has two inputs and one output: C(s) = (gi (s) 92(5))
1 r 1 1
C(s)= s2+i00 I s±~~1 (Vgi(s) = 92(5) =
L s+I 1
(ii) g~(s) — s—I 92(5) — s-2.1
Compute the zeros and poles, plot the RGA as afunction offrequency, etc. s—2 s—’O
(ut) 91(5) = ~ 92(5) =
Exercise 6.9 Analyze input—output controllability for (h) Design controllers and petforni closed-loop simulations of reference tracking to complement your
analysis. Consider also the input tnagnitudei
1 a
C(s)= [rs+1+a
i-s + 1 + a
(rs+1)(rs+1+2a)L ° Exercise 6.16 * Find the poles and zeros and analyze input—output controllability for
where r = 100; consider two cases: (a) a = 20, and (b) a = 2. C(s) Fc+(1/s) 1/s 1
i/s c+(1/s)j
Remark. This is a simple two-mixing-tank model of a heat exchanger where u = IT
L 2in
Here c is a constant, e.g. c = 1. M similar modelfor, z is encounteredfor distillation columns controlled
Tiout] and a is the number of heat transfer units.
11 with the DB-configuration. In this case the physical reason for the model being singular at steady-state
is that the sian of the two manipulated inputs is fixed at steady-state, D + B = F.)
Exercise 6.10* Let
Exercise 6.17 Controllability of an FCC process. Consider the following 3 x 3 model ofafluidized
A— [—10
0
0 1 B=I,C—
—ij’
—
Lw 1.’I,D—
rio
0 —
[O o1
o ij catalytic cracking (FCC) process:
—
[vi] Eu~]
= C(s) [u2j; f(s) (18.Ss + 1)(75.8s + 1)
(a) Petforni a controllability analysis of C(s). ~t3 53
(b) Let ± = Ax + Bit + d and consider a unit distu,-bance d = [Si ~2 1T Which direction (value 16.3(92052 + 32.4s + 1) 30.5(52.is + 1) 4.30(7.28s + 1)
of zi /52) gives a disturbance that is most difficult to ‘eject (consider both RHP-ze,vs and input C(s) = f(s) —16.7(75.Ss + 1) 31.0(75.8s + 1)(1.58s + 1) —1.41(74.6s + 1)
saturation)? 1.27(—939s + 1) 54.1(57.3s + 1) 5.40
(c) Discuss decentralized control of the plant. How would you pair the variables? Acceptable cont,vl of this 3 x 3 plant can be achieved with partial control of two outputs with ~nput 3 ill
manual (‘lot used). That is, we have a 2 x 2 control problem. Consider three options for the controlled
Exercise 6.11 Consider the following two plants. Do you expect any control problems? ~‘ould outputs:
decentralized or inverse-based control be used? What pairing would you use for decentralized control? y1=[Y1]. y2=[921. y3=[ in
192] L~~J LY293
1 it1 and ~2. Assume that the third input is a disturbance (d =
Ca(s)
1.25(s + 1)(s + 20)
r5—i s 1
L 42 s_20] In all th,-ee cases, the inputs are
(a) Based on the zeros of the three 2 x 2 plants, C1 (s), C2(s) and Ca(s). which choice of outputs
do you prefer? Which seems to be the worst?
1
Ca(s)= (s2 F
+ 0.1) [iO(s +i 0.1)/s + o.i)/sj1
(s0.1(s—1) It may be useful to know (lIar the zero polynomials
a JS.75 10’s~ + 3.92 10’s~ + 3.85- lOt?s2 + i.22 . 1O~s + i.03 io~
5 444 106s3 —1.05 i06s2 —8.61 lo4s —9.43 102
Exercise 6.12 Order the following three plants in terms of their expected ease of controllability:
*
c 5.75. io~s~ —8.75 i06s3 —5.66 i05s2 + 6.35 iO~s + 1.60 102
Ci(s)
—
—
rioo 951
Lioo ioojC2(5) —
—
rioo3—~
L iou 95e~~l
ioo j~crs(s),_,
=
rioo
[iou 95e~l
iou have the following roots:
a —0570 —0.0529 —0.0451 —0.0132
5 0.303 —0.0532 —00i32
Remember to consider also the sensitivity to input gain uncertainty. a 0.199 —0.0532 0.0200 —0.0132
(b) For the prefem-red choice of outputs in (a) do a ,m,ore detailed analysis of the expected control
Exercise 6.13 Analyze input—output controllability for petformnance (compute poles and zeros, sketch RGA i~, comment on possible problems with input
C(s) = [ S000s
(a000s+iH2s+1)
3
2(— 53+1)
lOOs+1
3
Ss+1
constraints (assume the inputs and outputs have bee,, pi-operly scaled), discuss the effect of the
disturbance, etc.). What type ofcont roller would you use? What pairing would you use for decentralized
control?
(c) Discuss why the 3 x 3 plant may be difficult to cot itrol.
Remark. This is a model of a fluidized catalytic cracking (FCC) reactor where it = (F3 F~ kc)T
represents the circulation, airflow and feed composition, and y = (Ti Tcy Try )T represents three
temperatures. Ca(s) is called the Hicks control structure and 03(5) the conventional structure. More
details are found in Hovd and Skogestad (1993).
6.12 Conclusion

We have found that most of the insights into the performance limitations of SISO systems developed in Chapter 5 carry over to MIMO systems. For RHP-zeros, RHP-poles and disturbances, the issue of directions usually makes the limitations less severe for MIMO than for SISO systems. However, the situation is usually the opposite with model uncertainty, because for MIMO systems there is also uncertainty associated with plant directionality. This is an issue which is unique to MIMO systems.

We summarized on page 253 the main steps involved in an analysis of input–output controllability of MIMO plants.

7 UNCERTAINTY AND ROBUSTNESS FOR SISO SYSTEMS

In this chapter, we show how to represent uncertainty by real or complex perturbations. We also analyze robust stability (RS) and robust performance (RP) for SISO systems using elementary methods. Chapter 8 is devoted to a more general analysis and controller design for uncertain systems using the structured singular value.

1. Determine the uncertainty set: find a mathematical representation of the model uncertainty ("clarify what we know about what we don't know").
2. Check robust stability (RS): determine whether the system remains stable for all plants in the uncertainty set.
3. Check robust performance (RP): if RS is satisfied, determine whether the performance specifications are met for all plants in the uncertainty set.

This approach may not always achieve optimal performance. In particular, if the worst-case plant rarely or never occurs, other approaches, such as optimizing some average performance or using adaptive control, may yield better performance. Nevertheless, the linear uncertainty descriptions presented in this book are very useful in many practical situations.

It should also be appreciated that model uncertainty is not the only concern when it comes to robustness. Other considerations include sensor and actuator failures, physical constraints, changes in control objectives, the opening and closing of loops, etc. Furthermore, if a control design is based on an optimization, then robustness problems may also be caused by the mathematical objective function not properly describing the real control problem. Also, the numerical design algorithms themselves may not be robust. However, when we refer to robustness in this book, we mean robustness with respect to model uncertainty, and assume that a fixed (linear) controller is used.
1. There are always parameters in the linear model which are only known approximately or
are simply in error.
To account for model uncertainty we will assume that the dynamic behaviour of a plant 2. The parameters in the linear model may vary due to nonlinearities or changes in the
is described not by a single linear time-invariant model but by a set H of possible linear operating conditions.
time-invariant models, sometimes denoted as the “uncertainty set”. We adopt the following 3. Measurement devices have imperfections. This may even give rise to uncertainty on the
notation: manipulated inputs, since the actual input is often measured and adjusted in a cascade
manner. For example, this is often the case with valves where a flow controller is often
H — a set of possible perturbed plant models. used. In other cases, limited valve resolution may cause input uncertainty.
U(s) C H — nominal plant model (with no uncertainty). 4. At high frequencies even the structure and the model order are unknown, and the
uncertainty will always exceed 100% at some frequency.
Un(s) C H and U’ (s) € H — particular perturbed plant models. 5. Even when a very detailed model is available we may choose to work with a simpler
(low-order) nominal model and represent the neglected dynamics as “uncertainty”.
Sometimes Ui,, is used rather than H to denote the uncertainty set, whereas U’ always refers to
6. Finally, the controller implemented may differ from the one obtained by solving the
a particular uncertain plant. The subscript p stands for perturbed or possible or H (take your
synthesis problem. In this case, one may include uncertainty to allow for controller order
pick). This should not be confused with the subscript capital P, e.g. in wp, which denotes
reduction and implementation inaccuracies.
peiformance.
We will use a “norm-bounded uncertainty description” where the set H is generated by The various sources of model uncertainty mentioned above may be grouped into two main
allowing 7~tc’~, norm-bounded stable perturbations to the nominal plant U(s). This corresponds classes:
to a continuous description of the model uncertainty, and there will be an infinite number of
possible plants U,, in the set H. We let E denote a perturbation which is not normalized, and 1. Parametric (real) uncertainty. Here the structure of the model (including the order) is
let ~ denote a normalized perturbation with 7~too norm less than 1. known, but some of the parameters are uncertain.
2. Dynamic (frequency-dependent) uncertainty. Here the model is in error because of
Remark. Another strategy for dealing with model uncertainty is to approximate its effect on the missing dynamics, usually at high frequencies, either through deliberate neglect or because
feedback system by adding fictitious disturbances or noise. For example, this is the only way of handling of a lack of understanding of the physical process. Any model of a real system will contain
model uncertainty within the so-called LQG approach to optimal control (see Chapter 9). Is this an
this source of uncertainty.
acceptable strategy? In general, the answer is no. This is easily illustrated for linear systems where
the addition of disturbances does nor affect system stability, whereas model uncertainty combined with Parametric uncertainty is quantified by assuming that each uncertain parameter is bounded
feedback may easily create instability. within some region [amj,~, amax]. That is, we have parameter sets of the form
For example, consider a plant with a nominal model y = Gu+Gdd, and let the perturbed plant model
be C,, = C + B where E represents additive model uncertainty. Then the output of the perturbed plant a,, = a(1 + raá)
ts
y = G,,u + Cdd = Cu + d1 + d2 (7.1) where a is the mean parameter value, ia = (amax arnjn)/(amax + a,,,n,) is the relative
—
where y is different from what we ideally expect (namely Cu) for two reasons: uncertainty in the parameter, and a is any real scalar satisfying iX~ < 1.
1. Uncertainty in the model (di = Eu) Dynamic uncertainty is somewhat less precise and thus more difficult to quantify, but it
2. Signal uncertainty (d2 = Cdd) appears that the frequency domain is particularly well suited for this class. This leads to
In LQO control we set tad = d1 + d2 where tEd ~5 assumed to be an independent variable such as white complex perturbations which we normalize such that ~Ajjoo ~ 1. In this chapter, we will
noise. Then in the design problem we may make tad large by selecting appropriate weighting functions, deal mainly with this class of perturbations.
but its presence will never cause instability. However, in reality tad = Eu + d2, 50 tad depends on the
signal u and this may cause instability in the presence of feedback when u depends on y. Specifically,
the closed-loop system (I ± (C + E)1c7’ may be unstable for some B 0 0. In conclusion, it may be
important to take explicitly into account model uncertainty when studying feedback control.
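A small sketch of the point made in this remark: an additive model error E can destabilize a nominally stable feedback loop, whereas an added output disturbance never can. The plant, controller and error below are assumed values chosen only for illustration:

  % Sketch: a modest additive model error destabilizes the loop,
  % even though the nominal closed loop is comfortably stable.
  s = tf('s');
  G = 1/(s+1);  K = 10;                 % nominal loop (assumed values)
  E = -0.12;                            % additive model error (assumed)
  pole(feedback(G*K,1))                 % nominal closed-loop pole at -11 (stable)
  pole(feedback((G+E)*K,1))             % perturbed closed-loop pole in the RHP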
We will next discuss some sources of model uncertainty and outline how to represent them
mathematically.
In many cases, we prefer to lump the various sources of dynamic uncertainty into a multiplicative uncertainty of the form

  G_p(s) = G(s)(1 + w_I(s)Δ_I(s));   |Δ_I(jω)| ≤ 1 ∀ω    (7.2)

which may be represented by the block diagram in Figure 7.1. In (7.2), Δ_I(s) is any stable transfer function which at each frequency is less than or equal to 1 in magnitude. Some examples of allowable Δ_I(s)'s with H∞ norm less than 1, ‖Δ_I‖∞ ≤ 1, are

  (s − z)/(s + z),   1/(τs + 1),   1/(5s + 1)³,   0.1/(s² + 0.1s + 1)

The subscript I denotes "input", but for SISO systems it does not matter whether we consider the perturbation at the input or the output of the plant, since

  G(1 + w_I Δ_I) = (1 + w_O Δ_O)G   with   Δ_I(s) = Δ_O(s) and w_I(s) = w_O(s)

Another uncertainty form, which is better suited for representing pole uncertainty, is the inverse multiplicative uncertainty

  Π_iI:  G_p(s) = G(s)(1 + w_iI(s)Δ_iI(s))⁻¹;   |Δ_iI(jω)| ≤ 1 ∀ω    (7.3)

Even with a stable Δ_iI(s) this form allows for uncertainty in the location of an unstable pole, and it also allows for poles crossing between the left- and right-half planes.

where k_p is an uncertain gain and G_0(s) is a transfer function with no uncertainty. By writing

  k_p = k̄(1 + r_k Δ),   k̄ = (k_min + k_max)/2,   r_k = (k_max − k_min)/(k_max + k_min)    (7.5)

where r_k is the relative magnitude of the gain uncertainty and k̄ is the average gain, (7.4) may be rewritten as multiplicative uncertainty

  G_p(s) = k̄G_0(s)(1 + r_k Δ),   |Δ| ≤ 1    (7.6)

where Δ is a real scalar and G(s) = k̄G_0(s) is the nominal plant. We see that the uncertainty in (7.6) is in the form of (7.2) with a constant multiplicative weight w_I(s) = r_k. The uncertainty description in (7.6) can also handle cases where the gain changes sign (k_min < 0 and k_max > 0), corresponding to r_k > 1. The usefulness of this approach is rather limited, however, since it is impossible to get any benefit from control for a plant where we can have G_p = 0, at least with a linear controller.

Example 7.2 Time constant uncertainty. Consider a set of plants, with an uncertain time constant, given by

  G_p(s) = 1/(τ_p s + 1) · G_0(s);   τ_min ≤ τ_p ≤ τ_max    (7.7)

By writing τ_p = τ̄(1 + r_τ Δ), similar to (7.5) with |Δ| ≤ 1, the model set (7.7) can be rewritten as

  G_p(s) = G_0/(1 + τ̄s + r_τ τ̄sΔ) = G_0/(1 + τ̄s) · 1/(1 + w_iI(s)Δ),   w_iI(s) = r_τ τ̄s/(1 + τ̄s)    (7.8)

which is in the inverse multiplicative form of (7.3). Note that it does not make physical sense for τ_p to change sign, because a value τ_p = 0 corresponds to a pole at infinity in the RHP, and the corresponding plant would be impossible to stabilize. To represent cases in which a pole may cross between the half planes, one should instead consider parametric uncertainty in the pole itself, 1/(s + p), as described in (7.9).

Parametric uncertainty is sometimes called structured uncertainty as it models the uncertainty in a structured manner. Analogously, lumped dynamics uncertainty is sometimes called unstructured uncertainty. However, one should be careful about using these terms because there can be several levels of structure, especially for MIMO systems.

Remark. Alternative approaches for describing uncertainty and the resulting performance may be
considered. One approach for parametric uncertainty is to assume a probabilistic (e.g. normal)
distribution of the parameters, and to consider the “average” response. This stochastic uncertainty is,
however, difficult to analyze exactly. Example 7.3 Pole uncertainty. Consider uncertain!)’ in the parameter a in a state-space model,
Another approach is the multi-model approach in which one considers a finite set of alternative = ay + bit, corresponding to the uncertain transfer function CV(s) = b/(s — ar). More generally
models. A problem with the multi-model approach is that it is not clear how to pick the set of models cotlsider the following set ofplants:
such that they represent the limiting (“worst-case”) plants. 1
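A minimal Matlab sketch of the gain-uncertainty representation in Example 7.1, using the Robust Control toolbox; the particular nominal dynamics G0 below are an assumed placeholder, not taken from the example:

  % Sketch (cf. Example 7.1): uncertain gain written as multiplicative uncertainty.
  kmin = 2;  kmax = 3;
  kbar = (kmin + kmax)/2;  rk = (kmax - kmin)/(kmax + kmin);   % as in (7.5)
  G0    = tf(1,[1 1]);                          % assumed nominal dynamics G0(s)
  Delta = ureal('Delta',0,'Range',[-1 1]);      % real perturbation, |Delta| <= 1
  Gp    = kbar*(1 + rk*Delta)*G0;               % uncertain plant in the form (7.6)
  % usample(Gp,5) draws random plants from the set for simulation or analysis.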
Go(s) = Go(s); ~ < a,, ~ (7.9)
S —
In this book, we will use a combination of parametric (real) uncertainty and dynamic
(frequency-dependent) uncertainty. These sources can be handled within the 7-t~ framework Ifa,,j~,nd amax have different signs then this means that tile plant call change flvnl stable to unstable
by allowing the perturbations to be real or complex, respectively. witil tile pole crossing tllrougll the origin (lvllich ilappens in sonIc applicatiolls). This set of plants can
be syrittell as
Go(s) (7.10)
7.3 Parametric uncertainty which can be exactly described by illverse nluitiphcative ltllcertaillt as in (7.59) with nonlillal model
C = Co(s)/(s — a) and
Parametric uncertainty may be represented in the ?-l~ framework, if we restrict the r~ a
wtj(s) (7.11)
perturbations A to be real. Here we provide a few simple examples to illustrate this approach. s—a
The Inagnitude of the weight w~i(s) is equal to r~ at low frequencies. If r~ is larger than I then tile
Example 7.1 Gain uncertainty. Let the set ofpossible plants be plant call be both stable and ullstable.
Example 7.4 Parametric zero uncertainty. Consider zero uncertainty in the "time constant" form,

  G_p(s) = (1 + τ_p s)G_0(s);   τ_min ≤ τ_p ≤ τ_max    (7.12)

where the remaining dynamics G_0(s) are as usual assumed to have no uncertainty. For example, let −1 ≤ τ_p ≤ 3. Then the possible zeros z_p = −1/τ_p cross from the LHP to the RHP through infinity: z_p ≤ −1/3 (in LHP) and z_p ≥ 1 (in RHP). The set of plants in (7.12) may be written as multiplicative (relative) uncertainty with

  w_I(s) = r_τ τ̄s/(1 + τ̄s)    (7.13)

The magnitude |w_I(jω)| is small at low frequencies, and approaches r_τ (the relative uncertainty in τ) at high frequencies. For cases with r_τ > 1 we allow the zero to cross from the LHP to the RHP (through infinity).

Exercise 7.1 Parametric zero uncertainty in zero form. Consider the following alternative form of parametric zero uncertainty:

  G_p(s) = (s + z_p)G_0(s);   z_min ≤ z_p ≤ z_max    (7.14)

which caters for zeros crossing from the LHP to the RHP through the origin (corresponding to a sign change in the steady-state gain). Show that the resulting multiplicative weight is w_I(s) = r_z z̄/(s + z̄) and explain why the set of plants given by (7.14) is entirely different from that with the zero uncertainty in "time constant" form in (7.12). Explain what the implications are for control if r_z > 1.

The above may seem complicated. In practice, it is not, as it can be done automatically with available software. For example, in Table 7.1, we show how to generate the LFT realization for the following uncertain plant:

  ẋ = [−(1 + k)  0;  1  −(1 + k)] x + [1/k − 1;  −1] u
  y = [0  α] x

where k = 0.5 + 0.1δ1, |δ1| ≤ 1, and α = 1 + 0.2δ2 with |δ2| ≤ 1.

Table 7.1: Matlab program for representing repeated parametric uncertainty
  % Uses the Robust Control toolbox
  k     = ureal('k',0.5,'Range',[0.4 0.6]);      % uncertain parameter k
  alpha = ureal('alpha',1,'Range',[0.8 1.2]);    % uncertain parameter alpha
  A  = [-(1+k) 0; 1 -(1+k)];
  B  = [1/k-1; -1];
  C  = [0 alpha];
  Gp = ss(A,B,C,0);
  % Use lftdata to obtain the interconnection matrix of Figure 3.23
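Continuing the comment at the end of Table 7.1, the interconnection matrix would typically be extracted with lftdata (a minimal sketch, assuming the Gp object created in the table):

  % Sketch: separate Gp into a fixed part M and a normalized uncertainty block,
  % so that Gp = lft(Delta, M).
  [M, Delta] = lftdata(Gp);
  size(M)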
The above parametric uncertainty descriptions are mainly included to gain insight.
A general procedure for handling parametric uncertainty, more suited for numerical 7.4 Representing uncertainty in the frequency domain
calculations, is given by Packard (1988). Consider an uncertain state-space model
In terms of quantifying uncertainty arising from unmodelled dynamics, the frequency domain
= A~z+B~u (7.15) approach (7-L,,~) does not seem to have much competition (when compared with other norms).
y = C~x+D~u (7.16) In fact, Owen and Zames (1992) make the following observation:
for if disturbances and plant models are clearly parameterized then 7t~ methods
Assume that the underlying cause for the uncertainty is uncertainty in some real parameters
seem to offer no clear advantages over more conventional state-space and
61,62,... (these could be temperature, mass, volume, etc.), and assume in the simplest case
parametric methods.
that the state-space matrices depend linearly on these parameters, i.e.
Parametric uncertainty is also often represented by complex perturbations. This has the
ApA+E6Ai,Bp=B+Z6jB~,CpC+Z6jCj,DpD+Z6jDj (7.18)
advantage of simplifying analysis and especially controller synthesis. For example, we may
where A, B, C and D model the nominal system. This description has multiple perturbations, simply replace the real perturbation, —1 ≤ A < 1, by a complex perturbation with A(jw)l ~
so it cannot be represented by a single perturbation, but it should be fairly clear that we can 1. This is of course conservative as it introduces possible plants that are not present in the
separate out the perturbations affecting A, B, C and D, and then collect them in a large original set. However, if there are several real perturbations, then the conservatism is often
diagonal matrix A with the real 6j’s along its diagonal. Some of the öj’s may have to be reduce by lumping these perturbations into a single complex perturbation. The reason for
repeated. Also, note that seemingly nonlinear parameter dependencies may be rewritten in this is hat with several uncertain parameters the true uncertainty region is often quite “disc
our standard linear block diagram form; for example, we can handle ~? (which would need shaped”, and may be more accurately represented by a single complex perturbation. This is
öi repeated), aj40i6462 etc. This is illustrated next by an example. illustrated below.
Example 7.5 Assume that the linearization of a nonlinear model ,‘esults in a model y = Cu,
where C = j2 and ~I ≤ 1 in sonic uncertain paramnete;: TImEs “may be written as an upper linear
fractional transformation, F,. (M, ~), as in (A.159). To see this, define thefollowing auxiliary variables,
7.4.1 Uncertainty regions
y = z1, z; = 6x;, x1 = z2, z2 = 6x2 and Z2 = u. The,,, arrange these variables such that To illustrate how parametric uncertainty translates into frequency domain uncertainty,
[ri r~ ~jT = lvi~ [zi Z2 ~1T and [z~ z2 jT = A [z~ ~ 1T to get the desired result, where consider in Figure 7.2 the Nyquist plots (or regions) generated by the following set of plants:
M=
0
0
1
10
01
00
and A=[~ ~] Gb(s) k__e~0~,
7-s + 1
2 <k,O,r <3 (7.19)
Step 1. At each frequency, a region of complex numbers G_p(jω) is generated by varying the three parameters in the ranges given by (7.19); see Figure 7.2. In general, these uncertainty regions have complicated shapes and complex mathematical descriptions, and are cumbersome to deal with in the context of control system design.

Step 2. We therefore approximate such complex regions as discs (circles) as shown in Figure 7.3, resulting in a (complex) additive uncertainty description as discussed next.

Figure 7.2: Uncertainty regions of the Nyquist plot at given frequencies. Data from (7.19).

… Nevertheless, we derive frequency-by-frequency necessary and sufficient conditions for robust stability based on uncertainty regions in this and the next chapter. Thus, the only conservatism is in the second step, where we approximate the original uncertainty region by a larger disc-shaped region as shown in Figure 7.3.

Remark 2 Exact methods do exist (using complex region mapping, e.g. see Laughlin et al. (1986)) which avoid the second conservative step. However, as already mentioned these methods are rather complex, and although they may be used in analysis, at least for simple systems, they are not really suitable for controller synthesis and will not be pursued further in this book.

Remark 3 From Figure 7.3 we see that the radius of the disc may be reduced by moving the centre (selecting another nominal model). This is discussed in Section 7.4.4.
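Step 1 can be carried out numerically by gridding the three parameters in (7.19) and evaluating G_p(jω) at a chosen frequency. The sketch below is for illustration only; the frequency and grid density are arbitrary choices:

  % Sketch: cloud of points approximating the uncertainty region of (7.19).
  w    = 0.5;                                 % frequency [rad/s], chosen arbitrarily
  vals = linspace(2,3,8);                     % grid over k, theta, tau in [2,3]
  pts  = [];
  for k = vals, for th = vals, for ta = vals
      pts(end+1) = k/(ta*1i*w + 1)*exp(-th*1i*w);   % Gp(jw)
  end, end, end
  plot(real(pts), imag(pts), '.')             % compare with the regions in Figure 7.2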
Usually WA(S) is of low order to simplify the controller design. Furthermore, an
objective of frequency domain uncertainty is usually to represent uncertainty in a simple
7/ C (centre) straightforward manner.
teAl 3. Multiplicative (relative) uncertainty. This is often the preferred uncertainty form, and we
A have
~Re lj(w) = max
I G~,(jw) C(jw) —
(7.25)
C(jw)
with a rational weight
IwiUw)l ≥ 11(w),Vw (7.26)
Example 7.6 Multiplicative weight for parnmetric uncertainty. Consider again the set of plants
Figure 7.5: The set of possible plants includes the origin at frequencies where IwA(i~)t ≥ IGUa’)l, with parametric uncertainty given in (7.19)
or equivalently w,(fw)I ≥ 1
H: Cp(s)= k1e0s, 2<k,8,r≤3 (7.27)
The disc-shaped regions may alternatively be represented by a multiplicative uncertainty We want to represent this set using multiplicative uncertainty with a rational weight wi(s). To simplify
description as in (7.2), subsequent controller design we select a delay-free nominal model
Hi : G~(s) = G(s)(1 + wi(s)l.S.i(s)); IA1(jw)I < 1,Vw (7.21) k 2.5
(7.28)
fs+1 2.5s+1
By comparing (7.20) and (7.21) we see that for 5150 systems the additive and multiplicative
uncertainty descriptions are equivalent if at each frequency To obtain lj(a’) in (7.25), one may use the Matlab Robust Control toolbox co,,unand usample, which
gives the specified number of random plants from the uncertain set ofplants. However this command
Iwr(iw)I = IWA(jw)iIIG(jw)I (7.22)
does not handle the uncertainty in ti,ne delay thus we consider three values (2, 2.5 and 3) for each
However, multiplicative (relative) weights are often preferred because their numerical value is of the three parameters (h, 8, r). (This is not, in general, guaranteed to yield the worst case as the
more informative. At frequencies where IwiUw)I > 1 the uncertainty exceeds 100% and the worst case immay be at the interior of the intervals.) The corresponding relative errors I (G~ — G)/Gl are
shown asfimnctions offrequency for the 33 = 27 resulting G~ ‘s in Figure 7.6. The curve forts (a’) must
Nyquist curve may pass through the origin. This follows since, as illustrated in Figure 7.5, the
radius of the discs in the Nyquist plot, WA (ja’) I = IG&w)wi (ja’) then exceeds the distance
from G(jw) to the origin. At these frequencies we do not know the phase of the plant, and
we allow for zeros crossing from the LHP to the RI-IP. To see this, consider a frequency a’0
where wi(ja’o){ ~ 1. Then there exists a Au ~ 1 such that G~(jw0) = 0 in (7.21); that
C)
is, there exists a possible plant with zeros at s = ±jwo. For this plant at frequency a’0 the -g 100
input has no effect on the output, so control has no effect. It then follows that tight control is C
en
not possible at frequencies where Iwi(iw)I ~ 1 (this condition is derived more rigorously in C,
(7.43)).
this so that Wi (jw) I ≥ ls(w) at all frequencies, we can multiply w,j by a correction factor to lift the Option 1 usually yields the largest uncertainty region, but the model is simple and this
gain slightly at w = 1. The following works well: facilitates controller design in later stages. Option 2 is probably the most straightforward
choice. Option 3 yields the smallest region, but in this case a significant effort maybe required
+ 1.6s ± 1
wi(s) = ~ii(s) ~2 + I.4s ± 1 (7.30) to obtain the nominal model, which is usually not a rational transfer function and a rational
approximation could be of very high order.
as is seen from the dashed line in Figure 7.6. The magnitude of the weight crosses 1 at about w = 0.26.
This seems reasonable since we have neglected the delay in our nominal model, which by itself yields Example 7.7 Consider again the uncertainty set (7.27) used in Example 7.6. The nominal models
100% uncertainty at afrequency of about 1/Umax = 0.33 (see Figure 78(a) below). selectedfor optiomzs] and 2 are
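The corner-gridding described in Example 7.6 can be reproduced with a short Matlab sketch (an illustration only; the frequency grid is an arbitrary choice, and the delay is handled directly through its frequency response since usample does not cover it):

  % Sketch (Example 7.6): relative-error curve l_I(w) from the 27 corner plants
  % of (7.27), with the delay-free nominal model (7.28).
  s  = tf('s');
  G  = 2.5/(2.5*s + 1);                        % nominal model (7.28)
  w  = logspace(-2,1,100);
  lI = zeros(size(w));
  for k = [2 2.5 3], for th = [2 2.5 3], for ta = [2 2.5 3]
      Gp  = k/(ta*s+1);                        % rational part of the corner plant
      rel = abs(squeeze(freqresp(Gp,w)).'.*exp(-1i*th*w)./squeeze(freqresp(G,w)).' - 1);
      lI  = max(lI, rel);                      % pointwise maximum over the corners
  end, end, end
  semilogx(w, lI)   % then fit a rational weight w_I(s) with |w_I(jw)| >= l_I(w)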
k
An uncertainty description for the same parametric uncertainty, but with a mean-value 02(5) = -~ 1e0~
nominal model (with delay), is given in Exercise 7.8. Parametric gain and delay uncertainty
(without time constant uncertainty) are discussed further on page 272. For option 3 the nominal model is not rationaL The Nyquist plot of the three resulting discs atfrequency
= 0.5 are shown in Figure 7.7
Remark. Pole uncertainty. In the example we represented pole (time constant) uncertainty by a
multiplicative perturbation, A~. We may even do this for unstabte plants, provided the poles do not Remark. A similar example was studied by Wang et al. (1994), who obtained the best controller designs
shift between the half planes and one allows As(s) to be unstable. However, if the pole uncertainty with option I, although the uncertainty region is clearly much larger in this case. The reason for this is
is large, and in particular if poles can cross from the LHP to the RHP, then one should use an inverse that the “worst-case region” in the Nyquist plot in Figure 7.7 corresponds quite closely to those plants
(“feedback”) uncertainty representation as in (7.3). with the most negative phase (at coordinates approximately equal to (—1.5, —1.5)). Thus, the additional
plants included in the largest region (option 1) are generally easier to control and do not really matter
when evaluating the worst-case plant with respect to stability or performance. In conclusion, at least for
7.4.4 Choice of nominal model 5150 plants, we find that for plants with an uncertain time delay, it is simplest and sometimes best (!)
to use a delay-free nominal model, and to represent the nominal delay as additional uncertainty.
With parametric uncertainty represented as complex perturbations there are three main
options for the choice of nominal model: The choice of nominal model is only an issue since we are lumping several sources of
parametric uncertainty into a single complex perturbation. Of course, if we use a parametric
1. A simplified model, e.g. a low-order, delay-free model.
uncertainty description, based on multiple real perturbations, then we should always use the
2. A model of mean parameter values, G(s) = C(s).
mean parameter values in the nominal model.
3. The central plant obtained from a Nyquist plot (yielding the smallest discs).
oscillates between 0 and 2 at higher frequencies (which corresponds to the Nyquist plot of Since only the magnitude matters this may be represented by the following first-order weight:
e”0~”~’ going around and around the unit circle). Similar curves are generated for smaller (1 + t)Omaxs + fl
values of the delay, and they also oscillate between 0 and 2 but at even higher frequencies. w5(s) = (7.36)
+1
It then follows that if we consider all 6 e [0, ~max] then the relative error bound is 2 at
frequencies above ~ and we have However, as seen from Figure 7.9, by comparing the dashed line (representing wj) with
i(w) —
—
f
~ 21 ~ W <lr/Omax
w ~ ~/0max (7.32) in’
‘~0
.~ to 3)
=
C
to C
Co
c2 10
to_I 1°-i 100 10’ tn_
Frequency
lo27~
100 mo 102 Figure 7.9: Multiplicative weight for gain and delay uncertainty in (7.33) (with 6,’lax 1, T~ 0.2)
Frequency Frequency
(a) Time delay (b) First-order lag the solid line (representing I,), this weight wj is somewhat optimistic (too small), especially
around frequencies 1/Bmax. To make sure that Iws(iw)I ≥ 11(w) at all frequencies we apply
Figure 7.8: Multiplicative uncertainty resulting from neglected dynamics a correction factor and get a third-order weight
2. Neglected lag. Let f(s) = 1/Q2-~s + 1), where 0 ~ < ~ In this case the resulting (1 + ~4’)Omaxs + rj, (~i~~)2 ~2 ~ 20.838- dOS5 + 1
wI(s) = (7.37)
lj (w), which is shown in Figure 7.8(b), can be represented by a rational transfer function with 1 +20.685 d~sS+ 1
Iwi(iw)I = 11(w) where The improved weight w1(s) in (737) is not shown in Figure 7.9, but it would be almost
1 TmaxS indistinguishable from the exact bound given by the solid curve. In practical applications, it
wj(s)1— =
TmaxS +1 TmaxS +1 is suggested that one starts with a simple weight as in (7.36), and if it later appears important
to eke out a little extra performance then one should try a higher-order weight as in (7.37).
This weight approaches I at high frequencies, and the low-frequency asymptote crosses I at
frequency 1IT,nax Example 7.8 Consider (lie set G~(s) = k~eoP~ Go(s) with 2 < k~ ≤ 3 and 2 < O~ ≤ 3.
3. Multiplicative weight for gain and delay uncertainty. Consider the following set of We approximate this with a no,ninal delay-free plant G = kG0 = 2.5G0 and relative uncertaint.
The simple first-order weight in (7.36), wi(s) = 333+02 is somewhat optimistic. To cover all the
plants: 1.Ss+1
Cr(s) = k~e°”~Go(s); k~ C [kmin,kmax], 9~e [Bmin,9max] (7.33) uncertainty we iizay use (7.37), wj (s) = 8.3s+0.2
1.Ss+1 . 1.61232+2.1283+1
1.61232+I.7393+1
S
which we want to represent by multiplicative uncertainty and a delay-free nominal model,
C(s) = kGo(s), where k = k~ip±km~ and r~ = (krnax_k,nin)/2 Lundstrdm (1994) derived 7.4.6 Unmodelled dynamics uncertainty
the following exact expression for the relative uncertainty weight: Although we have spent a considerable amount of time on modelling uncertainty and deriving
weights, we have not yet addressed the most important reason for using frequency domain
lj(w) =
1I ~/r~ + 2(1 + rk)(1
2+ra
— cos (Omaxw)) forw <
for w ~
iT/8max
~/~max
(7.34) (fl~) uncertainty descriptions and complex perturbations, namely the incorporation of
unmodelled dynamics. Of course, unmodelled dynamics is close to neglected dynamics which
where rk is the relative uncertainty in the gain. This bound is irrational. To derive a rational we have just discussed, but it is not quite the same. In unmodelled dynamics we also include
weight we first approximate the delay by a first-order Padd approximation to get unknown dynamics of unknown or even infinite order. To represent unmodelled dynamics we
usually use a simple multiplicative weight of the form
T5 + ~‘o
kmaxe~°”’~ —k~k(1+rk)1_— —k= k~1+ ~)9maxs+ra (7.35) wi(s) = (7.38)
(r/r~0)s + 1
274 MULTIVARIABLE FEEDBACK CONTROL 5150 UNCERTAINTY AND ROBUSTNESS 275
li(w) = IC’C’ — 1~ in (7.25) for each C’ (Ca, C&, etc.) and adjust the parameter in question until transfer function becomes
l~ just touches Iwi (jw) I.
(a) Neglected delay: Find the largest Ofor Ca = Ce°’ (Answer: 0.13). L~ = C~K = CK(1 + wjAj) = L + wjLA1, ¼’(iw)I <1,Vw (7.39)
(b) Neglected lag: Find the largest i-for Cb = C~17 (Answer: 0.15).
(c) Uncertain pole: Find the range of afor C~ = 4
(Answer: 0.8 to 1.33). As always, we assume (by design) stability of the nominal closed-loop system (i.e. with
ll~ = 0). For simplicity, we also assume that the loop transfer function L~ is stable. We now
(d) Uncertain pole (time constant fonn): Find the range ofTfor Cd = p~ (Answer: 0.7 to 1.5).
use the Nyquist stability condition to test for RS of the closed-loop system. We have
(e) Neglected resonance: Find the range of(for C€ = C (s/w)2+2((s/7o)+i (Answer: 0.02 to 0.8).
(fl Neglected dynamics: Find the lai-gest integer mfor Cf = C (oo~3+1) (Answer: 13). as W System stable VL~
(g) Neglected RJ-JP-ze,-o: Find the largest r~ for C9 = C ~ (Answer: 0.07). These results imply
that a control system which meets given stability and peiformance requirements for all plants in U is * L~, should not encircle the point — 1, VL~ (7.40)
also guaranteed to satisfy the same requirements for the above plants Ca, Cm, C9.
(h) Repeat all of the above with a new nominal plant C = 1/(s — 1) (and with everything else the
same except Cd = 1/(Ts — 1)) (Answers: same as above). Im
s + 0.3 Re
wi(s) =
(1/3)s + 1 Ii + L(jw)l
We end this section with a couple of remarks on uncertainty modelling:
1. We can usually get away with just one source of complex uncertainty for SISO systems. S
2. With an Rc,D uncertainty description, it is possible to represent time delays (corresponding
to an infinite-dimensional plant) and unmodelled dynamics of infinite order, using a
nominal model and associated weights of finite order.
Figure 7.11: Nyquist plot of L~ for RS
I wjL I
I I<1,Vw * ItviTI<1,Vw (7.42)
11+LI
Yz~
W IIwiTII~ <1 (7.43)
Note that for 5150 systems wj = w0 and T = = GK(1 + GK)’, so the condition
a
could equivalently be written in terms of wjTj or tv0T. Thus, the requirement of RS for the
case with multiplicative uncertainty gives an upper bound on the complementary sensitivity:
Figure 7.12: MA-structure
RS-’≠’JTI< l/IwiI, Vw~ (7.44)
We see that we have to detune the system (i.e. make T small) atfrequencies where the relative is the transfer function from the output of IXi to the input of IXi. We now apply the Nyquist
uncertainty IwjI exceeds 1 in magnitude. Condition (7.44) is exact (necessary and sufficient) stability condition to the system in Figure 7.12. We assume that IX and M = wjT are stable;
provided there exist uncertain plants such that at each frequency all perturbations satisfying the former implies that G and G~ must have the same unstable poles, the latter is equivalent
JIX(jw)I < 1 are possible. If this is not the case, then (7.44) is only sufficient for RS, e.g. this to assuming nominal stability of the closed-loop system. The Nyquist stability condition then
is the case if the perturbation is restricted to be real, as for the parametric gain uncertainty in determines RS if and only if the “loop transfer function” MIX does not encircle —1 for all IX.
(7.6). Thus,
RS ~ Ii + A/1IX~ >0, Vw,VIIXI < 1 (7.50)
Remark. Unstable plants. The stability condition (7.43) also applies to the case when L and L~ are
unstable as long as the number of RHP-poles remains the same for each plant in the uncertainty set. The last condition is most easily violated (the worst case) when IX is selected at each
This follows since the nominal closed-loop system is assumed stable, so we must make sure that the frequency such that JIXI = 1 and the terms MIX and 1 have opposite signs (point in the
perturbation does not change the number of encirclements, and (7.43) is the condition which guarantees opposite direction). We therefore get
this.
RS 4~ 1— IM(jw)I >0, Vw (7.51)
2. Algebraic derivation of RS condition. Since L~ is assumed stable, and the nominal
closed loop is stable, the nominal loop transfer function L(jw) does not encircle —1. ~ IM(iw)l < 1. Vw (7.52)
Therefore, since the set of plants is norm-bounded, it then follows that if some L~1 in the
which is the same as (7.43) and (7.48) since M = w,T. The MIX-structure provides a very
uncertainty set encircles —1, then there must be another L92 in the uncertainty set which
general way of handling robust stability, and we will discuss this at length in the next chapter
goes exactly through—i at some frequency. Thus,
where we will see that (7.52) is essentially a clever application of the small-gain theorem
RS ~ Il+L~I≠0, VL~,Vw (7.45) where we avoid the usual conservatism since any phase in A~IIX is allowed.
4~ 1+L,~>0, VL~,Vw (7 46) Example 7.9 Consider the following nominal plant and Fl controller:
Ji+L+W1LIX5I>0, VJIX1I<1,Vw (7.47) 3(—2s + 1) i2.7s + 1
G(s) = (5s + i)(ios + 1) K(s) I(~ 12.7s
At each frequency the last condition is most easily violated (the worst case) when the complex
number IXj(jw) is selected with IIX1(iw)I = 1 and with phase such that the terms (1 + L) Recall that this is the inverse response process from O~apter 2. Initially, we select K0 = K01 1.13
and w1LIX, have opposite signs (point in the opposite direction). Thus as suggested by the Ziegler—Nichols tuning rule. It results in a nominally stable closed-loop system.
Suppose that one “extreme” uncertain plant is
RS*I1+LI—IroiLI>O, Vw ~IwiTI<1, Vw (7.48)
G’(s) = 4(—3s + 1)/(4s + 1)2 (7.53)
and we have rederived (7.43).
For this plant the relative error I(G’ — G)/G} is 013 at low frequencies; it is 1 at about 0.1 radls, and
3. MIX-structure derivation of RS condition. This derivation is a preview of a general
it is 5.25 at high frequencies. Based on this and (7.38) we choose the following uncertainty weight:
analysis presented in the next chapter. The reader should not be too concerned if he or she
does not fully understand the details at this point. The derivation is based on applying the lOs + 0.33
wj(s) (iO/5.25)s + 1
Nyquist stability condition to an altemative “loop transfer function” )l~f IX rather than L~. The
argument goes as follows. Notice that the only source of instability in Figure 7.10 is the new which closely matches this relative error~ We now want to evaluate whether the system remains stable
feedback loop created by IX,. If the nominal (IX1 = 0) feedback system is stable then the for all possible plants as given by G~ = G(1 + unIXs), where A1 (s) is any perturbation satisfying
stability of the system in Figure 7.10 is equivalent to stability of the system in Figure 7.12, < 1. From (7.44), we have the following necessary and sufficient condition for robust stability:
where IX = IX, and I TI < 1/Iwj I V w. This condition is easy to check. Based oil the nominal platit (753) and the given
M = w,K(l + GK)’G = wjT (7 49) controller K1 (with gain ICi = 1.13). we compute P1 = OK1 / (1 + CI(i) as aflinction offrequency.
278 MULTI VARIABLE FEEDBACK CONTROL 5150 UNCERTAINTY AND ROBUSTNESS 279
From Figure 7.13, we see that IT I exceeds wi over a wide frequency range, so from (7.44), we k,,Lo = kL0(1 + r~A) (7.56)
conclude that the system is not robustly stable.
From Figure 713, we notice that the worst-case frequency is w = W26, where IT; I is a factor of where
— kmax + 1 kmax — 1
1/0.13 = 7.7 larger than ms (see also Matlab code in Table 72, where we get Smargl = 0.13). In rk (7.57)
other words, reducing the uncertainly weight toj by a factor 7.7 would give stability. 2 kmax + 1
With the given uncertain plant, we need to reduce the controller gain to achieve robust stability By Note that the nominal L = TtL0 is not fixed, but depends on kmax. The RS condition
trial and error; we find that reducing the gain to K~2 = 0.31 just achieves RS, as is seen from the curve IIwiTII~o < 1 (which is derived for complex A) with wj = rj then gives
forT2 = GK2/(l + OK2) in Figure 713.
II kL0 II
Ilrk - II <1 (7.58)
Table 7.2: Matlab program for describing plant II 1+kL0~
with complex uncertainty and analyzing RS
% Uses Robust control toolbox Here both rk and It depend on kmax, and (7.58) must be solved iteratively to find kmax,2.
G 3*tf( (—2 11 ,conv{ [5 1), (10 11)); Condition (7.58) would be exact if A were complex, but since it is not we expect kmax,2 to
Wi tf{[lO 0.331, [10/5.25 11); % Uncertainty weight
Delta = ultidyn(’Oelta’, (1 11); % Dynamic uncertainty be somewhat smaller than GM.
Op = 0 * (1 + Witoelta};
K = tf(El2.7 11. (12.7 01); Example 7.10 To check this numerically consider a system with L0 = ~ ~We find uiiso 2
Li = 0p1.l3*K; % ziegler—Nichols controller [radls] and ILoUwiso)I = 0.5, and the exact factor by which lye can increase the loop gain is, from
Ti = feedback(Ll,l);
[Smargl,Dstabl,Reportl) robuststab(Tl) % Stability margins (7.55), kinax,; = GM = 2. On the other hand, use of (758) yields km,,,c,2 = 1.78, which as expected
L2 Gp*l.l3*K; % Detuned controller is less than GM = 2. This illustrates the conservatis,n involved in replacing a real perturbation by a
‘22 = teedback(Gp*0.3l*K,l);
[Smarg2, Dstab2,Report2] = robuststab(T2) complex one.
Exercise 7.4 Represent the gain uncertainty in (7.54) as multiplicative complex uncertainty with
*
Now 1 + LI represents at each frequency the distance of L(jw) from the point—i in the
Nyquist plot, so L(jw) must be at least a distance of wp(jw)~ from —1. This is illustrated
Figure 7.14: Feedback system with inverse multiplicative uncertainty graphically in Figure 7.15, where we see that for NP, L(jw) must stay outside a disc of radius
IwpOw)I centred on —1.
Algebraic derivation. Assume for simplicity that the loop transfer function L~ is stable, and
In,
assume stability of the nominal closed-loop system. RS is then guaranteed if encirclements
by L~(jw) of the point —1 are avoided, and since L~ is in a norm-bounded set we have
- I1+L(iw)I
itS 4~ I1+L~I>0, VL~, Vu’ (7.60)
* 1 + L(1 + w~stXri)’I > 0, V~I1jJ ≤ 1,Vw (7.61) 0 Re
wp(jw)l
The last condition is most easily violated (the worst case) when iX~1 is selected at each
frequency such that IIjjJ = 1 and the terms 1 + Land w~jtI~j have opposite signs (point in
the opposite direction). Thus
We see that we need tight control and have to make S small at frequencies where the
uncertainty is large and w~1~ exceeds 1 in magnitude. This may be somewhat surprising
since we intuitively expect to have to detune the system (and make S 1) when we have Figure 7.16: Diagram for RP with multiplicative uncertainty
uncertainty, while this condition tells us to do the opposite. The reason is that this uncertainty
represents pole uncertainty, and at frequencies where IwuI exceeds I we allow for poles For robust performance we require the performance condition (7.66) to be satisfied for all
crossing from the LHP to the RHP (G~, becoming unstable), and we then know that we need possible plants, i.e. including the worst-case uncertainty:
feedback (I~I < 1) in order to stabilize the system.
However, 181 < 1 may not always be possible. In particular, assume that the plant has a RPW wpS~i <1 VSp,Vw (7.67)
RHP-zero at a = z. Then we have the interpolation constraint 8(z) = 1 and we must as a wp~<~1+Lp~ VL~,Vw (7.68)
prerequisite for RS, IIwiiSIIoo < 1, require that w~i(z) < 1 (recall the maximum modulus
theorem, see (5.20)). Thus, we cannot have large pole uncertainty with Iw~i(iw)I > 1 (and This conesponds to requiring I~/dI < 1 WI1 in Figure 7.16, where we consider
hence the possibility of instability) at frequencies where the plant has a RHP-zero. This is multiplicative uncertainty, and the set of possible loop transfer functions is
consistent with the results we obtained in Section 5.3.2 (page 179). = G~K = L(i + zvjáj) = L + wiLtIj (7.69)
282 MULTIVARIABLE FEEDBACK CONTROL 5180 UNCERTAINTY AND ROBUSTNESS 283
1. Graphical derivation of liP condition. Condition (7.68) is illustrated graphically by 2. The RP condition (7.72) can be used to derive bounds on the loop shape ILl. At a given frequency
the Nyquist plot in Figure 7.17. For RP we must require that all possible L~(jw) stay outside we have that Imp SJ + IwsTI < 1 (RP) is satisfied if (see Exercise 7.5)
a disc of radius ~wp(jw)~ centred on —1. Since L~ at each frequency stays within a disc of
1+IwpI
radius wjL centred on L, we see from Figure 7.17 that the condition for RP is that the two ILl > (at frequencies where 1w, I < 1) (7.76)
discs, with radii ImpI and IwiLl, do not overlap. Since their centres are located a distance 1-Iw,I
1 + LI apart, the RP condition becomes or if
RP fropl+Iw1LI<I1+LI, (7.70) ILl < 1 — ~mp I (at frequencies where Imp I <1) (7.77)
~ Vu~ 1 +Iw,I
Iwi’(l + L)’I + IwsL(1 + L)’I <1, (7.71) Conditions (7.76) and (7,77) may be combined over different frequency ranges. Condition (7.76)
is most useful at low frequencies where generally IwiI < 1 and IwpI > 1 (tight performance
or in other words requirement) and we need ILl large. Conversely, condition (7.77) is most useful at high frequencies
liP ~ max~ (IwpSI + IwsTI) <1 (7.72) where generally Iw,I > 1 (more than 100% uncertainty), IwpI < 1 and we need L small. The
loop-shaping conditions (7.76) and (7.77) may in the general case be obtained numerically from
Im p-conditions as outlined in Remark 13 on page 311. This is discussed by Braatz et al. (1996) who
derived bounds also in terms of S and T, and furthermore derived necessary bounds for RI’ in
addition to the sufficient bounds in (7.76) and (7.77); see also Exercise 7.6.
1+Lljwl 3. The term p(Nap) = mp SI + IwsTI in (7.72) is the structured singular value (ii) for RP for this
particulax problem; see (8.129). We will discuss p in much more detail in the next chapter.
Re 4. The structured singular value p is not equal to the worst-case weighted sensitivity, maxs,, ~mp S~1,
given in (7.74) (although many people seem to think it is). The worst-case weighted sensitivity is
equal to skewed-p (p3) with fixed uncertainty; see Section 8.10.3. Thus, in summary we have for
this particular RP problem:
— IwpSI
wL p = ~wpS~ + IwsTI, ~8 — 1— IwiTI (7.78)
Note that p and j? are closely related since p < 1 if and only if p8 ≤ 1
Figure 7.17: Nyquist plot of RP condition ~wp~ < 1 + Lp~
Exercise 7.5 Derive the loop-shaping bounds in (7.76) and (7.77) which are sufficient for ~mp S~ +
2. Algebraic derivation of RP condition. From the definition in (7.67) we have that RP is IwiTI < 1 (RP). (Hint: Start fivm the RP condition in the form ImpI + wiLl < 1 + LI and use the
satisfied if the worst-case (maximum) weighted sensitivity at each frequency is less than 1, facts that 1 + LI ≥ 1 — ILl and 1 + LI ≥ ILl —1.)
liP ~ nax~wpS~~<1, ‘t/w (7.73)
Exercise 7.6 Also derive, from Imp ~l + 1w, TI < 1, the following necessary bounds for RP (which
*
must be satisfied):
(strictly speaking, max should be replaced by sup, the supremum). The perturbed sensitivity IwpI—1
is 8,, = (I + L,,)’ = 1/(1 + L + wiLlis), and the worst-case (maximum) is obtained ILl > 1— I~~’I (for~wp~ >1 and Iwil <1)
at each frequency by selecting hI = I such that the terms (1 + L) and wjLlXj (which are
complex numbers) point in opposite directions. We get ILl < 1 IwpI
IwiI—1 ~or ~wp~ < land IwsI > 1)
wp~ IwpSI (Hint Use Ii + LI ≤ 1 + 11t)
max IwpS~I = (7.74)
I1+LI—IwiLI = 1—Iw1TI
and by substituting (7.74) into (7.73) we rederive the RP condition in (7.72). Example 7.11 HP problem. Consider RP of the 5150 system in Figure 718, for which we have
Remarks on HP condition (7.72). 0.1 5
I. The RP condition (7.72) is closely approximated by the following mixed sensitivity 7i~ condition: RP ~ <1, V~li~ <1, Vw; wp(s) = 0.25 + —; we(s) (7.79)
S s+1
limpS ii = max VIwpSI2 + IwsTI2 <1 (7.75) (a) Derive a condition for RP.
H wjT L (b) For what values of r~ is it impossible to satisfy the RP condition?
To be more precise, we find from (A.96) that condition (7.75) is within a factor of at most to (c) Let r~ = 0.5. Consider two cases for the nominal loop transfer function: (1) GK1 (s) = 0.5/s
condition (7.72). This means that for 5150 systems we can closely approximate the RP condition and (2) GK2 (s) = ~9~. For each system, sketch the magnitudes of S and its peiformance bound as
in terms of an W~ problem, so there is little need to make use of the structured singular value. afiaiction offrequency Does each system satisfy RP?
However, ~ve will see in the next chapter that the situation can be very different for MIMO systems.
284 MULTIVARIABLE FEEDBACK CONTROL 5150 UNCERTAINTY AND ROBUSTNESS 285
findings can also be confirmed using the Matlab commands shown in Table 7.3. We also note that
Prnarguncl . UpperBound = 1.335, which implies that the design Si will have RE even if we
increase the uncertainly (w,,) and pemfortnance requirements (sop) by a factor of 1.335. For design
52, the corresponding pemformance margin (Pmargunc2 . UpperBound) is 0.7, which occurs at the
frequency u; = 0.801. This implies that the uncertainty and pemformance requirements must be reduced
by the factor 0.7 at frequency 0.801 to achieve RP.
sopS
RP ~ <1 VA,Vu; (7 81)
7.6.3 The relationship between NP, RS and RP
A simple analysis shows that the worst case corresponds to selecting Au with magnitude 1 such that
Consider a SISO system with multiplicative uncertainty, and assume that the closed-loop is
the term wuAuS is purely real and negative, and hence we have
nominally stable (NS). The conditions for nominal performance (NP), robust stability (RS)
RP IwpS~ <1— ~iouS~, Vu; (7.82) and robust performance (RP) can then be summarized as follows:
wpS~ + IWuSI <1, Vu; (7.83) NP ~ wpSI < 1,Vw (7.85)
1 as ~ IwcTI<1,Vw (7.86)
ISUW)I< iwp~w)~ + Iw~(jw)I Vu; (7.84)
RP 4~ wp8~+~wjT~<1,Vw (7.87)
(b) Since any teal system is strictly proper we have SI = 1 at high frequencies and therefore we
i;iust require Iw~ (jw) I + ~op (ju;)J < 1 as u; -4 oc. With tile weights in (7.79) this is equivalent to From this we see that a prerequisite for RP is that we satisfy NP and RS. This applies in
r5 + 0.25 < 1. Therefore, we must at least require r5 < 0.75 for RI’, so RP cannot be satisfied if general, both for 5150 and MIMO systems and for any uncertainty. In addition, for 5150
≥ 0.75. systems, if we satisfy both RS and NP, then we have at each frequency
10’
wpSI + IwiTI <2max{IwpSI, Iw1TI} <2 (7.88)
It then follows that, within a factor of at most 2, we will automatically get RP when the
0) subobjectives of NP and RS are satisfied. Thus, RP is not a “big issue” for SISO systems, and
St 0
~ 10 this is probably the main reason why there is little discussion aboutRP in the classical control
to literature. On the other hand, as we will see in the next chapter, for MIMO systems we may
CO
get very poor RP even though the subobjectives of NP and RS are individually satisfied.
To satisfy RS we generally want T small, whereas to satisfy NP we generally want
S small. However, we cannot make both S and T small at the same frequency because
of the identity S + T = 1. This has implications for RP, since wp~~S~ + IwilITI ≥
102 102
min{IwpI, IwsI}GSI + TI), where I~I + TI ≥ IS+TI = 1, and we derive ateach frequency
10 10~ 10’
Frequency wpS~ + Iw1TI ≥ min{~wpI, IwjI} (7.89)
Figure 7.19: RP test We conclude that we cannot have both wpl > 1 (i.e. good peiformance) and soil > 1
(i.e. more than 100% uncertainly) at the samne frequency. One explanation for this is that at
(c) Design S~ yields RE while 52 does ‘lot. This is seen by checking the RP condition (7.84) frequencies where soil > 1 the uncertainty will allow for RHP-zeros, and we know that we
graphically as shown in Figure Z19: Si I has a peak of 1 while 1S21 has a peak of about 2.45. These cannot have tight performance in the presence of Ri-IP-zeros.
286
Here the worst case is obtained when we choose A1 and A2 with magnitudes 1 such that
the terms Lw2A2 and w1 A~ are in the opposite direction of the term 1 + L. We get
RS * Il+LI—ILw2I—IwiI>0, Vw
287
(7.94)
* IwiSI+Iw2TI<1, Vw (7.95)
(a)
7.7 Additional exercises
Exercise 7.7 * Consider a “true” plant
3e°”
C’(s)
(2s + 1)(0.ls ± 1)2
(a) Derive and sketch the additive uncertainty weight when the nominal model is C(s) = 3/ (2s + 1).
(b) Derive the corresponding robust stability condition.
(c) Apply this test for the controller IC(s) = k/s and find the values of k that yield stability. Is this
condition tight?
(b) Exercise 7.8 Uncertainty weight for a first-order model with delay. Laughlin et al. (1987)
considered the following parametric uncertainty description:
Figure 7.20: (a) RP with multiplicative uncertainty
(b) RS with combined multiplicative and inverse multiplicative uncertainty Cr(s) = k~ e8~’ k~ € [km~n,kmaxj, Tp C [Tm~n,Tmax], 9p C [9m~n,8max] (7.96)
71,8±1
where all parameters are assumed positive. They chose the mean parameter values as (Ic, 0, t) giving
RP may be viewed as a special case of RS (with multiple perturbations). To see this consider the nominal model
the following two cases as illustrated in Figure 7.6.4: C(s) O(s) ~ k (7.97)
ts + 1
1. RP with multiplicative uncertainty and suggested use of the following multiplicative uncertainty weight:
2. RS with combined multiplicative and inverse multiplicative uncertainty
kmax ts+l Ts+l ~rnax6rnin
WSL(8) ~ —Ts+I —1; 4 (7.98)
As usual the uncertain perturbations are normalized such that IJA1I~ ~ land IIA2II~ ~ 1.
Tmjn5+l
Since we use the 7-1~~ norm to define both uncertainty and performance and since the (a) Show that the resulting stable and ininimnum—phase weight corresponding to the uncertainty
weights in Figure 7.6.4(a) and (b) are the same, the tests for RP and RS in cases (a) and description in (7.27) is
(b), respectively, are identical. This may be argued from the block diagrams, or by simply
lolL(s) = (1.2552 + 1.55s + 0.2)/(2s ± 1)(0.25s + 1) (7.99)
evaluating the conditions for the two cases as shown below.
Note that this weight cannot be compared with (7.29) or (7.30) since the nominal plant is djfferent.
1. The condition for RP with multiplicative uncertainty was derived in (7.72), but with w1
(b) Plot the magnitude of Wsy_ as aflmnction offrequency. Find thefrequency where the weight crosses
replaced by tup and with w2 replaced by wj. We found that
1 in magnitude, and compare this with 1/Umax. G’om,nent on your answe,:
RP ~ IwiSI+Iw2TI<1, Vw (7 90) (c) Find is (jw) using (7.25) and compare with IWIL . Does the weight (7.99) and the uncertainty
model (7.2) include all possible plants? (Answer: No, not quite around frequency w = 5.)
2. We will now derive the RS condition for the case where L~ is stable (this assumption
may be relaxed if the more general MA-structure is used, see (8.128)). We want the Exercise 7.9 Consider again the system in Figure 7.18. What kind of uncertainty might to,, and A,,
system to be closed-loop stable for all possible A1 and A2. RS is equivalent to avoiding represent?
encirclements of —1 by the Nyquist plot of L~. That is, the distance between L~ and —l Exercise 7.10 Neglected dynamics. Assu,ne we have derived the following detailed model:
must be larger than zero, i.e. 1 + L~j > 0, and therefore
3(—0.5s ± 1)
(791) Ga~~,,~t(s) (7.100)
RS ~ I1+L~j>0 VL~,Vw (2s + 1)(0.ls + 1)2
~ 1 +L(1 +w2A2)(1 —wiAi)’I >0, VA1,VA2,Vw (7 92) and we want to use the simplified nominal model C(s) = 3/(2s + 1) with multiplicative uncertainty
4z~ 1+L+Lta2A2_W1A1I>0,VA1,VA2,Vw (793) Plot Ii (w) and approximate it by a rational transfer function Wi(s).
288 MULTIVARIABLE FEEDBACK CONTROL
Exercise 7.11 * Parametric gain uncertainty. We showed in Example 7.1 how to represent scalar
pammetric gain uncertainty Cr(s) = k~Co(s) where
IA~ 0 ft 0~o0
<a’
a’ -~
C,) (Ot-
90
0- 0
—. 0 0
S9 a a
I ft a’0 —
0 _ g 2S
K
0
hg; I 0 0
— —
0 — 0 0 0
A 0-
g S
0 =
0 tc~ 00- a
2
—. C 0 0
12* C> C
~ 0 —~ 0
—— C
z
(/)
H
1~
2
01
tfl C
0~
0
I _
0
t
a
0
0
C
a5-
0 0
0 0 —
0 5- ___
0’
0
C-)
S0
C N
S
to
292 MULTJVARIABLE FEEDBACK CONTROL MIMO ROBUST STABILITY AND PERFORMANCE 293
equal to the largest of the maximum singular values of the individual blocks, it then follows be more important for MIMO plants because it offers a simple method of representing the
for A = diag{A5} that coupling between uncertain transfer function elements. For example, the simple uncertainty
description used in (8.7) originated from a parametric uncertainty description of the
U(A~(jw))<1Vw,Vi 4t~ H~X1Ico≤1~ (8.5) distillation process.
Note that A has structure, and therefore in the robustness analysis we do not want to allow all
A such that (8.5) is satisfied. Only the subset which has the block-diagonal structure in (8.1) 8.2.3 Unstructured uncertainty
should be considered. In some cases the blocks in A may be repeated or may be real; that Unstructured perturbations are often used to get a simple uncertainty model. We define
is, we have additional structure. For example, as shown in Example 7.5, repetition is often unstructured uncertainty as the use of a “full” complex perturbation matrix A, usually with
needed to handle parametric uncertainty. dimensions compatible with those of the plant, where at each frequency any A(jw) satisfying
Remark. The assumption of a stable A may be relaxed, but then the resulting robust stability and 6(A(jw)) < 1 is allowed.
performance conditions will be harder to derive and more complex to state. Furthermore, if we use a
suitable form for the uncertainty and allow for multiple perturbations, then we can always generate the
W1A
desired class of plants with stable perturbations, so assuming A stable is not really a restriction.
The main difference between 5150 and MIMO systems is the concept of directions which (a) (d)
is only relevant in the latter. As a consequence MIMO systems may experience much larger
sensitivity to uncertainty than 5150 systems. The following example illustrates for MIMO
systems that it is sometimes critical to represent the coupling between uncertainty in different
transfer function elements.
where w in this case is a real constant, e.g. a, = 50. For the numerical data above, det Op = det C Six common forms of unstructured uncertainty are shown in Figure 8.5. In Figure 8.5(a),
irrespective of 6, so C~ is never singular for this uncertainty. (Note that det G~ = det C is not
(b) and (c) are shown three feedforward forms: additive uncertainty, multiplicative input
generally trite for the uncertainty description given in (8.7).)
uncertainty and multiplicative output uncertainty given by
Exercise 8.1 * The uncertain plant in (8.7) may be represented in the additive uncertainty formit
11A~ (8.8)
Op = C + PT’2 AA W1 where AA = 6 is a single scalar perturbation. Find W1 and TV2.
Hj: G~=C(I+Es); = wjA1 (8.9)
H~: G~=(I+Eo)G; (8.10)
8.2.2 Parametric uncertainty
The representation of parametric uncertainty, as discussed in Chapter 7 for SISO systems, In Figure 8.5(d), (e) and (f) are shown three feedback or inverse forms: inverse additive
carries straight over to MIMO systems. However, the inclusion of parametric uncertainty may uncertainty, inverse multiplicative input uncertainty and inverse multiplicative output
294
uncertainty given by
MULTI VARIABLE FEEDBACK CONTROL
I MIMO ROBUST STABILITY AND PERFORMANCE
which is much larger than lj(w) if the condition number of the plant is large. To see this,
Lumping uncertainty into a single perturbation
write E1 = wjAj where we allow any A1(jw) satisfying a(Ar(jw)) < 1,Vw. Then at a
For 5150 systems, we usually lump multiple sources of uncertainty into a single complex given frequency
perturbation, often in multiplicative form. This may also be done for MIMO systems, but
then it makes a difference whether the perturbation is at the input or the output. lo(w) = lwilrnaxa(CAiG’) = wj(jw)~’y(C(jw)) (8.21)
Since output uncertainty is frequently less restrictive than input uncertainty in terms of
control performance (see Section 6.10.4), we first attempt to lump the uncertainty at the Proof of (&21): Write at each frequency C = UEVH and C’ = UEVH. Select A, = VUH (which
output. For example, a set of plants H may be represented by multiplicative output uncertainty is a unitary matrix with all singular values equal to 1). Then a(GA5G’) = a(UEEV”) = a(EE)
with a scalar weight wc,(s) using a(G)a(0’) = 7(0). ci
Op = (I+woAo)G, IIAoII~ ~ 1 (814) Example 8.2 Assume the relative input uncertainty is 10%, Le. w~ = 0.1, and the condition number
of the plant is 141.7. Then we itiust select Ic, = wo = 0.1 141.7 = 14.2 in order to m-epi-esent this as
.
where, similar to (7.25),
multiplicative output uncertainty (this is larger than 1 and therefore not usefulfor controller design).
lo(w) = max a ((0,,— G)0’Ow)); Iwo(iw)I ≥ lo(w) Vw (8 15)
open Also for diagonal uncertainty (Bj diagonal) we may have a similar situation. For example,
(and we can use the pseudo-inverse if C is singular). If the resulting uncertainty weight is if the plant has large RGA elements then the elements in 0E1G’ will be much larger than
reasonable (i.e. it must at least be less than 1 in the frequency range where we want control), those of Li, see (A.81), making it impractical to move the uncertainty from the input to the
and the subsequent analysis shows that robust stability and performance may be achieved, output.
then this lumping of uncertainty at the output is fine. If this is not the case, then one may
Example 8.3 Let H be the set of plants generated by the additive uncem-tainty in (8.7,1 with to = 10
try to lump the uncertainty at the input instead, using multiplicative input uncertainty with a (corm-esponding to about 10% uncertainty in each element). Then from (8.7) one plant C’ in tl,is set
scalar weight, (corresponding to 5 = 1) has
C,, = 0(1 + wjAj), hAul00 < 1 (8 16)
o’=o+[’~ -i~] (8.22)
where, similar to (7.25),
for which we have I, = a(0’ (C’ — C)) = 14.3. Them-efoi-e, to represent C’ in terms of input
li(w) = max a (0’(G,,
open
— G)(jw)); Iwu(iw)l > li(w) ‘1w (8 17) uncem-tainty we would need a i-dative nnce~-tainty of more thai, 1400%. This would imply that the plant
could become singular at steady—state and thus impossible to contivl, which, we know is incorrect.
However, in many cases this approach of lumping uncertainty either at the output or the Fortunately we can instead repi-esent this additive uncem-tainty as multiplicative output uncertainty
input does not work well. This is because one cannot in general shift a perturbation from one (which is also generally preferable for a subsequent controller design) with ho = U~C’ — C)C’)
location in the plant (say at the input) to another location (say the output) without introducing 0.10. Therefore output unce,-taintv wo,-ks wellfor this pam-ticular example.
candidate plants which were not present in the original set. In particular, one should be careful
when the plant is ill-conditioned. This is discussed next.
296 MULTIVARIABLE FEEDBACK CONTROL MIMO ROBUST STABILITY AND PERFORMANCE 297
Conclusion. Ideally, we would like to lump several sources of uncertainty into a single The scalar transfer function hi(s) is often absorbed into the plant model C(s), but for
perturbation to get a simple uncertainty description. Often an unstructured multiplicative representing the uncertainty it is important to notice that it originates at the input. We can
output perturbation is used. However, from the above discussion we have learnt that we should represent this actuator uncertainty as multiplicative (relative) uncertainty given by
be careful about doing this, at least for plants with a large condition number. In such cases
we may have to represent the uncertainty as it occurs physically (at the input, in the elements, h~~(s) = h~(s)(1 +wj~(s)öt(s)); Ió~Ciw)I < 1,Vw (8.26)
etc.) thereby generating several perturbations. For uncertainty associated with unstable plant
poles, we should use one of the inverse forms in Figure 8.5. which after combining all input channels results in diagonal input ancertainty for the plant
Normally we would represent the uncertainty in each input or output channel using a simple
weight in the form given in (7.38), namely
7~s + r0
w(s) (r/r~)s + 1 (8.28)
where C = 1122 is the nominal plant model. Obtain H for each of the six uncertainty forms in Remark 2 The claim is often made that one can easily reduce the static input gain uncertainty to
(8.8)—(8.13) using E = W2AW1. (Hint for the inverse forms: (I — W1ATV2)1 = I + W1A(I — significantly less than 10%, but this is not true in most cases. Consider again (8.25). A commonly
1472 T’T’~ A) — ‘W2, see (3. 7)—(3.9).) suggested method to reduce the uncertainty is to measure the actual input (mi) and employ local
feedback (cascade control) to readjust u~. As a simple example, consider a bathroom shower, in which
Exercise 8.3 * Obtain H in Figure 8.6 for the uncertain plant in Figure 7.6.4(b). the input varables are the flows of hot and cold water. One can then imagine measuring these flows
and using cascade control so that each flow can be adjusted more accurately. However, even in this case
there will be uncertainty related to the accuracy of each measurement. Note that it is not the absolute
8.2.4 Diagonal uncertainty measurement error that yields problems, but rather the error in the sensitivity of the measurement with
respect to changes (i.e. the “gain” of the sensor). For example, assume that the nominal flow in our
By “diagonal uncertainty” we mean that the perturbation is a complex diagonal matrix
shower is I I/mm and we want to increase it to 1.1 1/mm; that is, in terms of deviation variables we want
A(s) = diag{ó~(s)}; Iöt(iw)I ≤ 1,Vw,V/ (8.24) u = 0.1 [I/minI. Suppose the vendor guarantees that the measurement error is less than 1%. But, even
with this small absolute error, the actual flow rate may have increased from 0.99 1/mm (measured value
(usually of the same size as the plant). For example, this is the case if A is diagonal of I 1/mm is 1% higher) to 1.11 1/mm (measured value of 1.1 1/mm is 1% lower), corresponding to a
in any of the six uncertainty forms in Figure 8.5. Diagonal uncertainty usually arises change u’ = 0.12 [1/mm], and an input gain uncertainty of 20%.
from a consideration of uncertainty or neglected dynamics in the individual input channels
In conclusion, diagonal input uncertainty, as given in (8.27), should always be considered
(actuators) or in the individual output channels (sensors). This type of diagonal uncertainty
because:
is always present, and since it has a scalar origin it may be represented using the methods
presented in Chapter 7. 1. It is always present and a system which is sensitive to this uncertainty will not work in
To make this clearer, let us consider uncertainty in the input channels. With each input practice.
u~ there is associated a separate physical system (amplifiet; signal converter, actuator, valve, 2. It often restricts achievable performance with multivariable control.
etc.) which based on the controller output signal, u~, generates a physical plom~input m~
= lz~(s)u~ (8.25)
298 MULTI VARIABLE FEEDBACK CONTROL MIMO ROBUST STABILITY AND PERFORMANCE 299
8.3 Obtaining P, N and M Exercise 8.4 * Show in detail how P in (8.29) is derived.
Exercise 8.5 For the system in Figure 8.7 we see easily from the block diagram that the uncertain
We will now illustrate, by way of an example, how to obtain the interconnection matrices P,
transfer function from in to z is F = Wp (I -f 0(1 + W,A, )K) ‘. Show that this is identical to
N and M in a given situation.
F~ (N, A) evaluated using (8.35), where from (832), we have N,, —W,T,, N,2 = —W,KS,
N2, =WpSG and N22 WpS.
Exercise 8.6 * Derive N in (8.32)from P in (8.29) using the lower LET in (8.2). You will note that
the algebra is quite tedious, and that it is much simpler to derive N directly from the block diagram as
described above.
Exercise 8.7 Derive P and Nfor the case when the multiplicative uncertainly is at the output rather
than tIme input.
Exercise 8.9 Find Pfor the uncertain plant G~ in (8.23) when to = rand z = y — r.
Figure 8.7: System with multiplicative input uncertainty and performance measured at the output
Exercise 8.10 * Find the interconnection matrix Nfor the uncertain system in Figure 7.18. What is
Example 8.4 System with input uncertainty. Consider a feedback system with multiplicative input
uncertainty A’ as shown in Figure 8.7. Here Wj is a normalization weight for the uncertainty and Exercise 8.11 Find the transfer function IV = N,, for studying robust stability for the uncertain
T’Vp is a petforinance weight. We want to derive the generalized plant P in Figure 8.1 which has inputs plant G~ in (8.23).
[ua in tt7T and outputs [VA Z ~ 1~’ By writing down the equations (e.g. see Example 3.18) or
simply by inspecting Figure 8.7 (remember to break the loop before and after K) we get
0 0
P= WpG Wp WpG (8.29)
—G —I -G
it is recommended that the reader carefully derives P (as instructed in Exercise 8.4). Note that the
transfer function from UA to y~ (upper left element in P) is 0 because ua has no direct effect on
(except through K). Next, we want to derive the matrix N corresponding to Figure 8.2. First, partition
P to be compatible with K, i.e.
Ph = ~ :~]~ ~,2 —
—
I w, 1
~WpGj (8.30) Figure 8.8: System with input and output multiplicative uncertainty
The upper left block, N,,, in (8.32) is the transfer function from ~A to VA. This is the transfer
function M needed in Figure 8.3 for evaluating robust stability. Thus, we have M = —W,KG(I +
KG)-’ = -W,T,. 8.4 Definitions of robust stability and robust performance
Remark. Of course, deriving N from P is straightforward using available software. For example, in the
Matlab Robust Control toolbox we can evaluate N = F1 (F, K) using the command N= if t (P. K) ,and We have discussed how to represent an uncertain set of plants in terms of the NA-structure
with a specific A the perturbed transfer function FU(N, A) from in to z is obtained with the command in Figure 8.2. The next step is to check whether we have stability and acceptable performance
F=lft(delta,N). for all plants in the set:
300 MULTIVARIABLE FEEDBACK CONTROL MIMO ROBUST STABILITY AND PERFORMANCE 301
1. Robust stability (RS) analysis: with a given controller K we determine whether the system where in this case S denotes the number of real scalars (some of which may be repeated), and F the
remains stable for all plants in the uncertainty set. number of complex blocks. This gets rather involved. Fortunately, this amount of detail is rarely required
2. Robust petformance (RP) analysis: if RS is satisfied, we determine how “large” the transfer as it is usually clear what we mean by “for all allowed perturbations” or “VA”.
function from exogenous inputs w to outputs z may be for all plants in the uncertainty set.
Before proceeding, we need to define performance more precisely. In Figure 8.2, w represents
the exogenous inputs (normalized disturbances and references), and z the exogenous outputs 8.5 Robust stability of the MA-structure
(normalized errors). We have z = F(A)w, where from (8.3)
Consider the uncertain NA-system in Figure 8.2 for which the transfer function from w to z
F = FU(N, A) 4 N22 + N21A(I — N11A)’N12 (8.35) is, as in (8.35), given by
We will use the ltoo norm to define performance and require for RP that IIF(A)1100 ≤ 1 for FU(N, A) = N22 + N21A(I — (8.41)
all allowed A’s. A typical choice is F = wpSp (the weighted sensitivity function), where Wp
is the performance weight (capital P for performance) and S,, represents the set of perturbed Suppose that the system is nominally stable (with A = 0); that is, N is stable (which means
sensitivity functions (lower-case p for perturbed). that the whole of N, and not only N22, must be stable). We also assume that A is stable. We
In terms of the NA-structure in Figure 8.2, our requirements for stability and performance then see directly from (8.41) that the only possible source of instability is the feedback term
can then be summarized as follows: (I N11A)’. Thus, when we have nominal stability (NS), the stability of the system in
—
Figure 8.2 is equivalent to the stability of the MA-structure in Figure 8.3 where M = N11.
NS W N is internally stable (8.36)
We thus need to derive conditions for checking the stability of the MA-structure. The next
NP W I}N2211oo <1; andNS (8.37) theorem follows from the generalized Nyquist Theorem 4.9. It applies to 71oo norm-bounded
A-perturbations, but as can be seen from the statement it also applies to any other convex set
RS F = FU(N, A) is stable VA, IIAIIoo < 1; and NS (8.38)
of perturbations (e.g. sets with other structures or sets bounded by different norms).
RP t~ IFIboo <1, VA, IAIboo ~ 1; and NS (8.39)
Theorem 8.1 Determinant stability condition (real or complex perturbations). Assume
These definitions of RS and RP are useful only if we can test them in an efficient manner; that the nominal system M(s) and the perturbations A(s) are stable. Consider the convex
that is, without having to search through the infinite set of allowable perturbations A. We will set ofperturbations A, such that if A’ is an allowed perturbation then so is cA’ where c is
show how this can be done by introducing the structured singular value, p, as our analysis any real scalar such that Id < 1. Then the MA-system in Figure 8.3 is stable for all allowed
tool. At the end of the chapter we also discuss how to synthesize controllers such that we perturbations (we have RS) if and only if
have “optimal robust performance” by minimizing p over the set of stabilizing controllers.
Nyquist plot of det (I — MA(s)) does ‘lot encircle the origin, VA (8.42)
Remark 1 Important. As a prerequisite for nominal performance (NP), robust stability CR8) and
robust performance (RP), we must first satisfy nominal stability (NS). This is because the frequency- 4~ det(I—MA(jw))≠0, Vw,VA~ (8.43)
by-frequency conditions can also be satisfied for unstable systems. A~(MA)j1, Vi,Vw,VA (8.44)
Remark 2 Convention for inequalities. In this book, we use the convention that the perturbations are
bounded such that they are less than or equal tO I. This results in a stability condition with a strict
Proof: Condition (8.42) is simply the generalized Nyquist theorem (page 152) applied to a positive
feedback system with a stable loop transfer function IvIA.
inequality: for example, RS VWAII0.3 ≤ 1 if I[~~IJ~~ < 1. (We could alternatively have bounded the
(8.42) ~. (8.43): This is obvious since by “encirclement of the origin” we also include the origin
uncertainty with a strict inequality, yielding the equivalent condition RS V{IAII~ < 1 if IMl~ ~ 1.)
itselE
Remark 3 Allowed perturbations. For simplicity below, we will use the shorthand notation (8.42) ~ (8.43) is proved by proving that not(8.42) ~- not(8.43). First note that with A = 0,
det(I — MA) = 1 at all frequencies. Assume there exists a perturbation A’ such that the image
VA and max (8.40)
A of det(I — A’IA’(s)) encircles the origin as s traverses the Nyquist D-contour. Because the Nyquist
contour and its map are closed, there then exists another perturbation in the set, A” = eA’, with
to mean “for all A’s in the set of allowed perturbations”, and “maximizing ovet all A’s in the set of
c e [0,1], and an a,’ such that det(I — MA”(jw’)) = 0.
allowed perturbations”. By allowed perturbations we mean that the ?t~ norm of A is less than or
(8.44) is equivalent to (8.43) since det(I — A) = fl~ A~(I — A) and A1(I — A) = 1 — .X1(A) (see
equal to 1, IIAIt~ ≤ 1, and that A has a specified block-diagonal structure where certain blocks may
Appendix A.2.1). C
be restricted to be real. To be mathematically exact, we should replace A in (8.40) by A e Ba, where
BA = {A C A: 11AM00 ≤ 1} The following is a special case of Theorem 8.1 which applies to complex perturbations.
is the set of unity norm-bounded perturbations with a given structure A. The allowed structure should Theorem 8.2 5pectral radius condition for complex perturbations. Assume that the
also be defined, for example, by nominal system l1’I(s) and the perturbations A(s) are stable. Consider the class of
A = {diag[S1L.i 5sI~5,Ai,...,Ap~ :J~ C fl,A~ C perturbations, A, such that if A’ is an allowed perturbation then so is cA’ where c is any
302 MULTIVARIABLE FEEDBACK CONTROL MIMO ROBUST STABILITY AND PERFORMANCE 303
complex scalar such that cl ≤ 1. Then the MA-system in Figure 8.3 is stable for all allowed Lemma 8.3 together with Theorem 8.2 directly yield the following theorem:
perturbations (we have RS) if and only if
Theorem 8.4 RS for unstructured (“full”) perturbations. Assume that the nominal system
p(MA(jw)) < 1, Vw,VA (8.45) M(s) is stable (NS) and that the perturbations A(s) are stable. Then the Il/IA-system in
Figure 8.3 is stable for all perturbations A satisfying ~ < 1 (i.e. we have RS) if and
or equivalently only ~f
RS ~ maxp(MA(jw)) < 1, Vw (8.46)
llMlI~<l~
U(M(jwfl<1 Vw~ (8.49)
Proof (8.45) ≠. (8.43) (.@~. RS) is “obvious”: it follows from the definition of the spectral radius p. and Remark 1 Condition (8.49) may be rewritten as
applies also to real A’s.
(8.43) ~. (8.45) is proved by proving that not(8.45) ~ not(8.43). Assume there exists a perturbation RS ~ a(M(jw)) a~(A(jw)) < 1, Vw,VA, (8.50)
A’ such that p(MA’) = 1 at some frequency. Then A~(k1A’)I = 1 for some eigenvalue i, and there
always exists another perturbation in [he set, A” = cu’, where a is a complex scalar with cl = 1, The sufficiency of (8.50) (@) also follows directly from the small-gain theorem by choosing L = MA.
such that A~(MA”) = +1 (real and positive) and therefore det(1 — MA”) = fl~ A~(I — MA”) = The small-gain theorem applies to any operator norm satisfying IIABII ≤ hAll . hIBhl.
fl~(1 — A€(Js’IA”)) = 0. Finally, the equivalence between (8.45) and (8.46) is simply the definition of Remark 2 An important reason for using the W~ norm to analyze robust stability is that the stability
max,x. C
condition in (8.50) is both necessary and sufficient. In contrast, use of the IL2 norm, e.g. a condition like
Remark I The proof of (8.45) relies on adjusting the phase of A~ (MeN) using the complex scalar c hIMhI2 < 1, yields neither necessary nor sufficient conditions for stability. We do not get sufficiency
since the IL2 norm does not in general satisfy IIABII ≤ 11-411 ‘ IIBII; see e.g. Example 4.2L
and thus requires the perturbation to be complex.
Remark 2 In words, Theorem 8.2 tells us that we have stability if and only if the spectral radius of MA
is less than 1 at all frequencies and for all allowed perturbations, A. The main problem here is of course 8.6.1 Application of the unstructured RS condition
that we have to test the condition for an infinite set of A’s, and this is difficult to check numerically. We will now present necessary and sufficient conditions for RS for each of the six single
Remark 3 Theorem 8.1, which applies to both real and complex perturbations, forms the basis for the unstructured perturbations in Figure 8.5. with
general definition of the structured singular value in (8.76).
E=W2AW1, hlAhloc~1 (8.51)
To derive the matrix M, we simply “isolate” the perturbation, and determine the transfer
8.6 Robust stability for complex unstructured uncertainty function matrix
M = W1M0W2 (8.52)
In this section, we consider the special case where A(s) is allowed to be any (full) complex
from the output to the input of the perturbation, where M0 for each of the six cases
transfer function matrix satisfying ~ < 1. This is often referred to as unstructured
(disregarding some negative signs which do not affect the subsequent robustness condition)
uncertainty or as full-block co~nplex perturbation uncertainly.
is given by
Lemma 8.3 Let A be the set of all complex matrices such that a(A) < 1. Then the following
0P = ~ + EA: K(I + GIC)’ = KS (8.53)
holds:
maxp(MA) = maxU(MA) = maxU(A)diM) = U(M) (8.47) = 0(1+ Ei): M0 = K(1 + GK)’G = (8.54)
= (I + Eo)G: M0 = GK(I + GK)’ = T (8.55)
= 0(1 — M0 =(I+GK)’G=SG (8.56)
Proof In general, the spectral radius (p) provides a lower bound on the spectral norm (~) (see 5A. 117)), = 0(1 — M0 = (I + 1CG)’ = Si (8.57)
and we have
maxp(MA) ≤ maxa(MA) ≤ maxa(A)&(M) = ã(M) (8.48)
= (I — Mo = (I + G1C)’ = S (8.58)
where the second inequality in (8.48) follows since ã(AB) ≤ &(A)a(B). Now, we need to show
that we actually have equality. This will be the case if for any A’I [here exists an allowed A’ such that For example, (8.54) and (8.55) follow from the diagonal elements in the M-matrix in (8.34),
p(MA’) = a(IvI). Such a A’ does indeed exist if we allow A’ to be a full matrix such that all directiOns and the others are derived in a similar fashion. Note that the sign of A’Io does not matter as it
in A’ are allowed. Select A’ = VU11 where U and V are matrices of the left and right singular vectors may be absorbed into A. Theorem 8.4 then yields
of M = UE V” . Then &(A’) = 1 and p(MA’) = p(UEV”VU11) = p(UEU11) = p(E) = U(M).
The second to last equality follows since U11 = U’ and the eigenvalues are invariant under similarity RS * IIW1M0W2CIw)IIco < 1 (8.59)
transformations.
304 MULTIVARIABLE FEEDBACK CONTROL MIMO ROBUST STABILITY AND PERFORMANCE 305
For instance, from (8.54) and (8.59) we get for multiplicative input uncertainty with a scalar We then get from Theorem 8.4
weight:
RSVG~, = G(I+’wjllj’), 111111100< 1 * IIwiTjII00 <1 (8.60)
as vii 11N 11A1 1100 ≤ C ~ 11M1100 < 1/c (8.64)
Note that the 5150 condition (7.43) follows as a special case of (8.60). Similarly (7.64) The above RS result is central to the 7-i00 loop-shaping design procedure discussed in
follows as a special case of the inverse multiplicative output uncertainty in (8.58): Chapter 9.
The coprime uncertainty description provides a good “generic” uncertainty description
aS V0, = (I — w~0A~0)’G, II11ioIlco ~ 1 ~ IIt0ioSIIco < 1 (861) for cases where we do not use any specific a priori uncertainty information. Note that the
In general, the unstructured uncertainty descriptions in terms of a single perturbation are not uncertainty magnitude is e, so it is not normalized to be less than 1 in this case. This is
“tight” (in the sense that at each frequency all complex perturbations satisfying o~11(jw)) < because this uncertainty description is most often used in a controller design procedure where
1 may not occur in practice). Thus, the above RS conditions are often conservative. In order the objective is to maximize the magnitude of the uncertainty (e) such that RS is maintained.
to get tighter conditions we must use a tighter uncertainty description in terms of a block- Remark. In (8.62) we bound the combined (stacked) uncertainty, II[zXn ZXM < e, which is not
diagonal 11. quite the same as bounding the individual blocks, IIaNIIco < e and 1¼A11100 ~ e. However, from
(A.46) we see that these two approaches differ at most by a factor of v’~, so it is not an important issue
from a practical point of view.
8.6.2 RS for coprime factor uncertainty
Exercise 8.13 * Consider combined multiplicative and inverse multiplicative uncertainty at the output,
= (I a~oW~o)’(I + ~oWa)G, whe,-e we choose to norm-bound the combined uncertainty
—
III A~0 ~o 1II~ ≤ 1. Draw a block diagram of the uncertain plant, and derive a necessamy and
sufficient condition for RS of the closed-loop system.
Robust stability bounds in terms of the ~ norm (RS .t~ IIMI~ < 1) are in general
only tight when there is a single full perturbation block. An “exception” to this is when the
uncertainty blocks enter or exit from the same location in the block diagram, because they
can then be stacked on top of each other or side by side, in an overall zI which is then a full
matrix. If we norm-bound the combined (stacked) uncertainty, we then get a tight condition
forRS in terms of WM1100.
One important uncertainty description that falls into this category is the coprime HnHHM~ ~DH
uncertainty description shown in Figure 8.9, for which the set of plants is
= (M, +11M)’ (J’4 +AN), Il[11N 11M]JIoo ~ C (8.62) NEW M: DMD1
where C = Afr’A’~ is a left coprime factorization of the nominal plant, see (4.20). This Figure 8.10: Use of block-diagonal scalings, 11D = Di.I
uncertainty description is surprisingly general: it allows both zeros and poles to cross into
the RHP, and has proved to be very useful in applications (McFarlane and Glover, 1990). Consider now the presence of structured uncertainty, where 11 = diag{/X~ } is block diagonal.
Since we have no weights on the perturbations, it is reasonable to use a normalized coprime To test for RS we rearrange the system into the M11-structure and we have from (8.49)
factorization of the nominal plant; see (4.25). In any case, to test for RS we can rearrange the
R5 if ~(M(jw)) <1,Vw (8.65)
block diagram to match the Ma-structure in Figure 8.3 with
We have written “if” here rather than “if and only if” since this condition is only sufficient
11=[11N 11M1; M=_[~](I+GK)_1Mi1 (8.63) for RS when 11 has “no structure” (full-block uncertainty). The question is whether we can
306 MULTIVARIABLE FEEDBACK CONTROL MIMO ROBUST STABILITY AND PERFORMANCE 307
take advantage of the fact that A = diag{A2} is structured to obtain an RS condition which Find the smallest structured A (measured in terms of 0(A)) which makes the matrix I— IvIA
is tighter than (8 65) One idea is to make use of the fact that stability must be independent of singular, then p(M) = l/o(A).
scaling To this effect, we introduce the block-diagonal scaling matrix
Mathematically,
D = diag{d1I~} (8 66) 1
4 min{0(A)~ det(l — MA) = 0 for structured A} (8.70)
where d~ is a scalar and I~ is an identity matrix of the same dimension as the i’th perturbation
block, A~. Now we rescale the inputs and outputs to M and A by inserting the matrices Clearly, p(M) depends not only on M but also on the allowed structure for A. This is
D and D’ on both sides as shown in Figure 8.10. This clearly has no effect on stability. sometimes shown explicitly by using the notation pA(M).
Next, note that with the chosen form for the scalings, we have for each perturbation block I
= d~J.1 ~d7’ ; that is, we have A = DAD—’. This means that (8.65) must also apply if we Remark. For the case where A is “unstructured” (a full matrix), the smallest A which yields singularity
replace M by DMD’ (see Figure 8.10), and we have has 0(A) = 1/0(iVI), and we have p(IvI) = 0(M). A particular smallest A which achieves this is
A = ~-viuf’.
RS if 0(DMD—’) < 1,Vw (867)
Example 8.5 Full perturbation (A is unstructured). Consider
This applies for any D in (8.66), and therefore the “most improved” (least conservative) RS
condition is obtained by minimizing at each frequency the scaled singular value, and we have M —
—
[[—12 2 ]
—1j —
[L—o.447
0.394 0.4471
o.894J [3.162
~ 0~ [0.707
L°~°~ —0.707
0.707
(871)
where V is the set of block-diagonal matrices whose structure is compatible to that of A, i.e. A = iiit = 3.162 ~1 [0394 —0.447] = ~ ~ (8.72)
AD = DA. We will return with more examples of this compatibility later. Note that when
A is a full matrix, we must select D = dl and we have U(DMD’) = 0(M), and so as with 0(A) = 1/0(M) = 1/3.162 = 0.316 makes det(I — MA) = 0. Thus p(M) = 3.162 when A
expected (8.68) is identical to (8.65). However, when A has structure, we get more degrees is a full ,natrix.
of freedom in D, and 0(DMD’) may be significantly smaller than o(M).
Note that the perturbation A in (8.72) is a full matrix. If we restrict A to be diagonal then we
Remark I Historically, the RS condition in (8.68) directly motivated the introduction of the structured need a larger perturbation to make det(I MA) = 0. This is illustrated next.
—
singular value, p(M), discussed in detail in the next section. As one might guess, we have that
p(M) ≤ min~ U(DMD’). In fact, for block-diagonal complex perturbations we generally have Example 8.5 continued. Diagonal perturbation (A is structured). For the ,natrix M in (& 71), the
that p(M) is very close to mm0 U(DMD’). smallest diagonal A which ,nakes det(I MA) = 0 is —
Remark 2 Other norms. Condition (8.68) is essentially a scaled version of the small-gain theorem.
Thus, a similar condition applies when we use other matrix norms. The A’IA-structure in Figure 8.3 is
A= ~ ~ (8.73)
stable for all block~diagonal A’s which satisfy [A(jw)[J < 1, Yin if with 6(A) = 0.333. Thus p(M) = 3 when A is a diagonal matrix.
mm lID(w)M(iw)D(wY’Il <l,Vw (8 69) The above example shows that is depends on the structure of A. The following example
D(w)CV
demonstrates that p also depends on whether the perturbation is real or complex.
where D as before is compatible with the block structure of A. Any matrix norm may be used; for
example, the Frobenius norm, JMIIF, or any induced matrix norm such as [M[[11 (maximum column Example 8.6 p of a scalar. if Ii~I is a scalar then in most cases p(M) = IMI. This follows from
sum), lMllj (maximum row sum), or [M[[€2 = U(11’I), which is the one we will use. Although in F (8.70) by selecting A[ = 1/IMI such that (1 MA) = 0. Howeven this requires that we can select
—
some cases it may be convenient to use other norms, we usually prefer 0 because for this norm we get the phase of A such that .A’IA is real, which is impossible when A is real and M has an imagina~y
a necessary and sufficient RS condition. component, so in this case p(M) = 0. in suinmaly. we have for a scalar Il/I
I
—
statement is: km, i.e. p = 1/km. This results in the following alternative definition of p.
6
308 MULTIVARIABLE FEEDBACK CONTROL MIMO ROBUST STABILITY AND PERFORMANCE 309
Definition 8 1 Structured singular value Let Iii be a given complex matrix and let A = Proof Consider det(I — ~MA) where p = pA(M) and use Schur s formula in (A 14) with
diag{A~} denote a set of complex matuces with a(A) ≤ 1 and with a given block diagonal A1~ = 1— ‘M~~A1 and A~ I — 1M22A2 C
structure (in which yome oft/ic blocks may be zepeated and some may be zestiicted to be “
teal) The teal zion negative function p(M) called the structured smgula’ value is defined In words (8 78) simply says that robustness with respect to two perturbations taken
by together is at least as bad as for the worst perturbation considered alone This agrees with
our intuition that we cannot improve RS by including another uncertain perturbation
min{k~ det(I — km MA) = 0 for structured A, ~(A) <1} (8 76) In addition the tippet bounds given below for complex perturbations e g p~ (M) ~
minDED a(DMD’) in (8 87) also hold for real or mixed real/complex perturbations A
If no such structured A exists then p(M) = 0
This follows because complex perturbations include real perturbations as a special case
A value of p = 1 means that there exists a perturbation with U(A) = 1 which is just large However the lower bounds e g p(M) ≥ p(M) in (8 82) generally held only for complex
enough to make I — MA singular A larger value of p is bad as it means that a smaller perturbations
pertuibation makes I — MA singular whereas a smaller value of p is good
Exeicise 8 14 Find pfoi the uncertain swem in Figure 764(b) 8 83 p for complex A
When all the blocks in A are complex p may be computed relatively easily This is discussed
881 Remarks on the definition of p below and in more detail in the survey paper by Packard and Doyle (1993) The results are
mainly based on the following result which may be viewed as another definition of p that
The structured singular value was introduced by Doyle (1982) At the same time (in fact in the same
issue of the same jnurnal) Safonov (1982) introduced the Multiva, table Stability Margin km foi ~ applies for complex A only
diagonally perturbed system as the inverse of p that is / ,~(M) = p(M)~ In many iespects Lemma 85 For complex pertuibations A with o(A) < 1
this is a more natural definition of a robustness margin However p(M) hIs a number of other
advantages such as pioviding a generalization of the spectral radius p(M) and the spectral norm
a(AI) I p(MF) = maxA ~ p(MA) (879)
2 The A corresponding to the smallest / in (8 76) will always have ~(A) = 1 since if det(I —
I ~MA’) = 0 for some A’ with o(A’) = c < 1 then 1/A4~ cannot be the structured singular Proof The lemma follows directly from the definition of p and the equivalence between (843) and
value of M since there exists a smaller scalar km = l~c such that det(I — kmii’IA) = 0 where (846) C
A = 1A’ andU(A) = 1
3 Note that with km = 0 we obtain I — / mMA = I which is clearly non singular Thus one possible
way to obtain p numerically is to start with km = 0 and gradually increase / m until we first find Properties of p for complex perturbations
an allowed A with U(A) = 1 such tWit (I — km MA) is singular (this value of km is then 1/jz) Most of the properties below follow easily from (8 79)
By allowed we mean that A must have the specified block diagonal structure and that some of the
blocks may have to be real 1 p(aM) = alp(kI)
for any (complex) scalar a
4 The sequence of M and A in the definition of p does not matter This follows from the identity 2 For a repeated scalar complex perturbation we have
(A 12) which yields
det(I kmMA) = det(I — kmAM) (877) 1 A = cSI (5 is a complex scalar) p(M) = p(M) (8 80)
5 In most cases Al and A are square but this need not be the case If they are non square then we
make use of (877) and work with either MA or AM (whichever has the lowest dimension) Proof Follows directly from (8 79) since there are no degrees of freedom for the maximization C
The remainder of this section deals with the properties and computation of p Readers who 3 Foi a full block complex perturbation we have from (8 79) and (8 47)
are primarily interested in the practical use of p may skip most of this material
A full matrix p(M) = o(M) (881)
882 Properties of p for real and complex A 4 p for complex perturbations is bounded by the spectral radius and the singular value
Two properties of p which hold for both real and complex perturbations A are I (spectral norm): _______________________
p(M) ≤ p(M) ≤ o~M) (8 82)
1. p(aM) = IaIp(M) for any real scalar a.
2 Let A = diag{A1, A2} be a block diagonal perturbation (in which A1 and A2 may have This follows from (8 80) and (8 81) since selecting A = SI gives the fewest degrees of
additional structure) and let A’! be partitioned accordingly. Then freedom for the optimization in (8.79), whereas selecting A full gives the most degrees of
freedom.
p~(iVI) > max{p~1(Mn),pa2(AI22)} (878)
-I
310 MULTIVARIABLE FEEDBACK CONTROL MIMO ROBUST STABILITY AND PERFORMANCE 311
5. Consider any unitary matrix U with the same structure as A. Then 9. Without affecting the optimization in (8.87), we may assume the blocks in D to be
Hermitian positive definite, i.e. D~ = Di’ > 0, and for scalars d~ > 0 (Packard and
p(MU) = p(M) = p(UM) (8.83) Doyle, 1993).
Proof Follows from (8.79) by writing MUA = MA’ where U(A’) = a-WA) = &(A), and so U 10. One can always simplify the optimization in (8.87) by fixing one of the scalar blocks in
may always be absorbed into A. C B equal to 1. For example, let B = diag{dj, d2,... d~}, then one may without loss of
generality set 4 = 1.
6. Consider any matrix D which commutes with A: that is, AD = DA. Then
Proof Let D’ = and note that U(DMD’) = a(D’MD’’). C
p(DM) = ,u(MD) and p(DMD’) = p(M) (8.84)
Similarly, for cases where A has one or more scalar blocks, one may simplify the
Proof p(DM) = p(MD) follows from optimization in (8.86) by fixing one of the corresponding unitary scalars in U equal to
p~(DM) = maxp(DMA) = maxp(MAD) = maxp(MDA) = pa(MD) (8.85) 1. This follows from Property 1 with I~I = 1.
11. The following property is useful for finding p(AB) when A has a structure similar to that
The first equality is (8.79). The second equality applies since p(AB) = p(BA) (by the eigenvalue of A or B:
properties in the Appendix). The key step is the third equality which applies only when DA = AD. p~(AB) <a-(A)ft.~A(B) (8.92)
The fourth equality again follows from (8.79). C
i~ta(AB) ≤ a(B)p8~(A) (8.93)
7. Improved lower bound. Define U as the set of all unitary matrices U with the same
block-diagonal structure as A. Then for complex A Here the subscript “AA” denotes the structure of the matrix AA, and “BA” denotes the
structure of BA.
I p(M) = max~j~ p(MU)
(8.86)
Proof- The proof is from Skogestad and Moran (1988a). We use the fact that p(AB)
Proof. The proof of this important result is given by Doyle (1982) and Packard and Doyle (1993). It maxA p(AAB) = maxa p(I’B)a(A) where V = AA/U(A). When we maximize over A, V
follows from a generalization of the maximum modulus theorem for rational functions. C generates a certain set of matrices with a(V) < 1. Let us extend this set by maximizing over
all matrices V with a-(V) < 1 and with the same structure as AA. We then get p(AB) S
The result (8.86) is motivated by combining (8.83) and (8.82) to yield maxvp(VB)U(A) =pv(B)a(A).
A (8.90)
[0 Aa(full)j 0 — d2Ij where A = diag{A, R}. The result is proved by Skogestad and Moran (1988a).
A = diag{A1(full),521,3a,54} D = diag{djI,Da(full),ds,d4} (8.91) 13. The following is a further generalization of these bounds. Assume that M is an LFT of
R: M = N11 + N12R1j N22R)’N21. The problem is to find an upper bound on B,
—
PA(M) < 1 (for RS or RP), (8.97) may be used to derive a sufficient loop-shaping bound
on a transfer function of interest, e.g. 1? may be S, T, L, L’’ or K.
p [~ ~]
~..-
= ~
~C&1)_=_~Jp(AB)
i.Ja(A)a(B)
1, a(M) = max{a(A),a(B)}
for
for
A = 61
A = diag{Ai, A2}, A1 full
for A afull matrix
(8.102)
i’ll
Remark. In the above we have used miuj~. To be mathematically correct, we should have used iuf,,
because the set of allowed D’s is not bounded and therefore the exact minimum may not be achieved Proof.’ From the definition of eigenvalues and Schur’s formula (A.14) we get A1 (M) = VAi (AB) and
(although we may get arbitrarily close). The use of max~ (rather than SUPA) is mathematically correct p(M) = ~p(AB) follows. For block-diagonal A, p(M) = VaG4)U(B) follows in a similar way
since the set A is closed (with o’(A) ≤ 1). using p(M) = max~p(MA) = max~1,~2 p(AA2BA1), and then realizing that we can always
select A1 and A2 such that_p(AA2BA1) = ~(A)U(B) (recall (8.47)). U(M) = max{a(A), a(B)}
Example 8.7 Let
M=[~ ~] (8.98)
follows since a(M) = ,Jp(MHM) where MHM diag{B”B, AHA}.
det(1 — MA) = det(1 — = 1— [6i 62[~] = 1—aS1 — hO2 8.9 Robust stability with structured uncertainty
The smallest 6~ and 62 which ,,,ake this matrix singular i.e. 1 — aS1 — hO2 = 0, are obtained when
Consider stability of the MA-structure in Figure 8.3 for the case where A is a set of nonn
loll = 621 = 6~ and the phases of 6~ and 62 are adjusted such that 1 — al . ISI — IN 161 = 0. We get bounded block-diagonal perturbations. From the determinant stability condition in (8.43)
6J = 1/Gal + hi), andfrom (8.70) we have that p = 1/161 = al + IN C
which applies to both complex and real perturbations we get
Exercise 8.15 * (continued from Example 8.7~. (b) For M in (& 98) and a diagonal A show that
jz(M) = lal + Ibi using the lower “bound” p(M) = maxu p(MU) (which is always exact). (Hint: RS * det(I—MA(jw)) ≠0, Vw,VA,U(A(jw)) <1 Vw (8.104)
Use U = diag{e~, 1} (the blocks in U are unitary scalars, and we may fix one of them equal to 1).)
(c) For M in (8.98) and a diagonal A show that p(M) = lal + IbI using the upper bound A problem with (8.104) is that it is only a “yes/no” condition. To find the factor km by which
p(M) <minD a~DIY1D~) (which is exact in this case since D has two “blocks”). the system is robustly stable, we scale the uncertainty A by km, and look for the smallest km
Solution: Use D = diag{d, 1}. Since D1l’1D~ is a singular matrix we have fro,n (A.37,) that which yields “borderline instability”, namely
a(DMDj = ~[ ~ da] = Vlal2 + ldaI2 + lb/dl2 ± 1bI2 (8.100) det(I — kmMA) = 0 (8.105)
which we want to minimize with respect to d. The solution is d = QT~i7iaI which gives p(fivl) = From the definition of pin (8.76) this value is km = 1/p(M), and we obtain the following
VIal2 + 2labl + 1612 = lal + lbl. necessary and sufficient condition for robust stability.
314 MULTIVARIABLE FEEDBACK CONTROL I
I
MIMO ROBUST STABILITY AND PERFORMANCE 315
Theorem 8.6 RS for block-diagonal perturbations (real or complex). Assume that the C This implies a relative uncertainty of up to 20% in the low-frequency range, which increases at high
nominal system Il/I and the perturbations A are stable. Then the li/IA-system in Figure 8.3 is I frequencies, reaching a value of 1 (100% uncertainty) at about 1 rad./,nin. The increase with frequency
stable for all allowed perturbations with a(A) < 1, Vw, ~f and only ~f I allows for various neglected dynamics associated with the actuator and valve. The uncertainty may be
I represented as tnultiplicative input uncertainty as shown in Figure 8.7 where A1 is a diagonal complex
p(M(jw)) <1, Vw (8.106) I matrix and the weight is Wi = viii where wi(s) is a scalar On rearranging the block diagram to
Proof. p(M) < 1 ~‘ km > 1, so if p(M) < 1 at all frequencies the required perturbation A to make
I match the MA-structure in Figure 8.3 we get li/I = w,KG(I ± ICC)1 = wsTi (recall (8.32)), and
the R5 condition p(lvI) < 1 in Theorem,, 8.6 yields
det(I — MA) = 0 is larger than 1, and the system is stable. On the other hand, p(M) = 1 ~ kin = 1, 1
so if p(M) = 1 at some frequency there does exist a perturbation with U(A) = 1 such that
det(I — lvIA) = 0 at this frequency, and the system is unstable. C
I RS~pa,(Ti)<
IwiUw)I
Vw, A5
Jo]
(8.110)
This condition is shown graphically in Figure 8.11 and is see,, to he satisfied at al/frequencies, so the
Condition (8.106) for RS maybe rewritten as system is robustly stable. Also in Figure 8.11, 0(T5) can be seen to be larger than 1/Ian (jw) I over a
wide frequency range. This shows that the system would be unstable forfidl-block input uncertainty (Ai
RS ~ ~i(M(jw)) U(A(jw)) <1, Vw (8.107) full). However; full-block uncertainty is not reasonable for this plant, and therefore we conclude that
the use of the singular value is conservative in this case. This demonstrates the need for the structured
which may be interpreted as a “generalized small-gain theorem” that also takes into account
singular value.
the structure of A.
One may argue whether Theorem 8.6 is really a theorem, or a restatement of the definition Exercise 8.21 ~‘onsider the same example and check for RS with full-block multiplicative output
of ~z. In either case, we see from (8.106) that it is trivial to check for RS provided we can uncertainty of the same mnagnitude. (Solution: RS is satisfied.)
compute p.
Example 8.10 118 of spinning satellite. Recall Motivating example no. I fmvmn Section 3.7.1 with the
Let us consider two examples that illustrate how we use p to check for RS with structured
plant C(s) given in (3.88) and the controller K = I. We want to study how sensitive this design is to
uncertainty. In the first example, the structure of the uncertainty is important, and an analysis multiplicative input uncertainty.
based on the ?~LD,, norm leads to the incorrect conclusion that the system is not robustly stable. In this case T, = T, so for RS there is no difference between multiplicative input and multiplicative
In the second example the structure makes no difference. output uncertainrs In Figure 8.12, we plot p(T) as a function of frequency. We find for this case
Example 8.9 118 with diagonal input uncertainty.
102
Consider RS of the feedback system in
a that p(T) = 0(T) irrespective of the structure of the complex ,nultiplicative perturbation (full-block,
diagonal or repeated complex scala,-). Since p(T) crosses 1 at about 10 rad/s, we can tolerate more than
100% uncertainty at frequencies above 10 tad/s. At low frequencies p(T) is about 10, so to guarantee
RS we call at most tolem-ate 10% (complex) uncertainty at low frequencies. This confinns the results
10’
- llIwdi -
0.2
100
10’
ea
to 10-i
0)
~0
~4 100
.4
1o2 —
10_i 10_i 100 10’ 102
ID-i
Frequency
Figure 8.11: RS for diagonal input uncertainty is guaranteed since pix, (Ti) < 1/Iw,l, Vw. The use of 10_i
unstructured uncertainty and U(T;) is conservative. io_2 10° 10’ 102
Frequency
Figure 8. 7for the case when the multiplicative input uncertainly is diagonal. A no,ninal 2 x 2 plant and
the controller (which represents P1 control of a distillation process using the Dy-configuration) is given
by Figure 8.12: p-plot for spinning satellite
C(s) 1= [ —87.8 1.4 ~ K(s) = 1 +rs [—0.0015 0 (8.108)
rs + 1 L—lna.2 —1.4j’ $ 0 —0.075] fromn Section 3.7.1, where we found that real perturbations 6~ = 0,1 and 62 = —0.1 yield instability
(tune in minutes). The controller results in a nominally stable system with acceptable pemforinance. Thus, the use of complex rather than real perturbatioils is not conservative in this case, at least for A1
Assume there is complex multiplicative uncertainty in each mnanipulated input ofinagnitude diagonal.
$ + 0.2 However; with repented scalar perturbations (Le. the uncertainty in each channel is identical) there
w,(s) = (8.109) is a difference between meal and complex perturbations. With repeated real perturbations, available
0.os + 1
I
316 MULTIVARIABLE FEEDBACK CONTROL MIMO ROBUST STABILITY AND PERFORMANCE 317
software (e.g. using the command mussy wit/i bik = [—2 01 in the Robust Control too/box in perturbations. This may be tested exactly by computing p(N) as stated in the following
Matlab) yields a peak p-value of 1. so we can tolerate a perturbation 6~ = 62 of magnitude 1 theorem.
before getting instability (Tins is confirmed by considering the characteristic polynomial in (3.92),
from which we see that 6~ = 6~ = —1 yields instability.) On the other hand, with complex repeated Theorem 8.7 Robust performance. Rearrange the uncertain systetn into the NA-structure
perturbations, we have that p(T) = p(T) is 10 at low frequencies, so instability may occur wit/i a of Figure 8.13. Assume NS such that N is (internally) stable. Then
(non-physical) complex 6, = 62 of magnitude 0.1. (Indeed, from (3.92) we see that the non-physical
constant perturbation 6, = 62 = j0.1 yields instability.) RP t! 1IF1100 IIF~(N,A)II00 <1, VI~AI~ ~; 1 (8.112)
I [~JNUw)) < 1, Vto (8.113)
8.9.1 What do p ≠ 1 and skewed-p mean? I
A value of p 1.1 for RS means that all the uncertainty blocks must be decreased in where p is computed with respect to the structure
=
ii
magnitude by a factor 1.1 in order to guarantee stability. I11
But if we want to keep some of the uncertainty blocks fixed, how large can one particular (8.114)
Apj
source of uncertainty be before we get instability? We define this value as l/p8, where p8 is
called shewed-p. We may view p8(M) as a generalization of p(M). and A~ is afull complex perturbation with the same dimensions as FT.
For example, let A = diag{A,, A2} and assume we have fixed IA1 < land we want to
find how large A2 can be before we get instability. The solution is to select Below we prove the theorem in two alternative ways, but first a few remarks:
I. Condition (8.113) allows us to test if IIFII= < 1 for all possible A’s without having to test each A
Km ~~ kml (8.111)
I individually. Essentially, p is defined such that it directly addresses the worst case.
2. The p-condition for RP involves the enlarged perturbation A = diag{A, Ap }. Here A, which itself
and look at each frequency for the smallest value of km which makes det(I — KJYIA) = 0, I maybe a block-diagonal matrix, represents the true uncertainty, whereas A~ is afuli cotnpiex matrix
and we have that skewed-p is stemming from the 7i~ norm performance specification. For example, for the nominal system (with
4 1/km A = 0)we get from (8.81) that a(N22) = pa. (N22), and we see that Ap must be a full matrix.
I 3. Since A always has structure, the use of the N norm, uNit00 < 1, is generally conservative for
Note that to compute skewed-p we must first define which part of the perturbations is to be RI’.
constant. p~(1l’1) is always further from I than p(M) is, i.e. p8 ≥ p for p > 1, p8 = p for I 4. From (8.78) we have that
= 1, and p8 < p for p < 1. In practice, with available software to compute p. we obtain
p8 by iterating on km until p(KmM) = 1 where Km may be as in (8.111). This iteration is p~(N) ≥ max{pA(Nii),p~s~(N22)} (8.115)
straightforward since p increases uniformly with k~1. RP 115 NP
where as just noted pa~(N22) = U(Nn). Condition (8.115) implies that RS (pa(Nn) < 1) and
NP (ã(N22) < 1) are automatically satisfied when RP (p(N) < 1) is satisfied. However, note
8.10 Robust performance that NS (stability of N) is not guaranteed by (8.113) and must be tested separately. (Beware! It is
a common mistake to get a design with apparently great RP, but which is not nominally stable and
Robust performance (RP) means that the performance objective is satisfied for all possible thus is actually robustly unstable.)
5. For a generalization of Theorem 8.7 see the main loop theorem of Packard and Doyle (1993); see
plants in the uncertainty set, even the worst-case plant. We showed in Chapter 7 that for
also Zhou et al. (1996).
a 5150 system with an 7-L~ performance objective, the RP condition is identical to an RS
condition with an additional perturbation block (I).
This also holds for MIMO systems, as illustrated by the stepwise derivation in Figure 8.13. Block diagram proof of Theorem 8.7
Step B is the key step and the reader is advised to study this carefully in the treatment below. In the following, let F = F,. (N, A) denote the perturbed closed-loop system for which we want to test
Note that the block A~ (where capital P denotes Performance) is always a full matrix. It is a RP. The theorem is proved by the equivalence between the various block diagrams in Figure 8.13.
fictitious uncertainty block representing the R~ performance specification. I Step A. This is simply the definition of RP: 11F1100 < 1.
Step 13 (the key step). Recall first from Theorem 8.4 that stability of the MA-structure in Figure 8.3,
where A is afldl complex matrix, is equivalent to IliuM00 < 1. From this theorem, we get that the RP
8.10.1 Testing RP using p condition u1Fu100 < 1 is equivalent to RS of the PAp-structure, where Ap is aft/I complex matrix.
To test for RI’, we first “pull out” the uncertain perturbations and rearrange the uncertain Step C. Introduce F = F(N, A) from Figure 8.2.
system into the NA-form of Figure 8.2. Our RP requirement, as given in (8.39), is that Step D. Collect A andAp into the block-diagonal matrix A. Then the original RP problem is
the ?Lc.o norm of the transfer function F = FU(N, A) remains less than I for all allowed equivalent to RS of the NA-structure which from Theorem 8.6 is equivalent to pa(N) < 1. 0
-y
I
I
318 MULTI VARIABLE FEEDBACK CONTROL I MIMO ROBUST STABILITY AND PERFORMANCE 319
RP
pa(N(jw)) <1 ~ det(I — N(jw)A(jw)) ~ 0, vA, a(A(jw)) ≤ 1
STEP A By Schur’s formula in (A.14) we have
S
VII ‘~p IIoo≤ 1 Since this expression should not be zero, both terms must be non-zero at each frequency, i.e.
is RS, det(I — N11A) ~4 OVA ~ p,~~Nii) <1, Vw (RS)
VIIzSjI~<l and for all A
det(I—FAp) 0 OVAp ~pjx,(F) <1 ~.U(F) <1, Vw (RPdeflnition)
STEP C
Theorem 8.7 is proved by reading the above lines in the opposite direction. Note that it is not necessary
to test for RS separately as it follows as a special case of the RP requirement. 0
~1
I
320 MULTIVARIABLE FEEDBACK CONTROL MIMO ROBUST STABILITY AND PERFORMANCE 321
To find p5 numerically, we scale the performance part of N by a factor km = i/ps and iterate
on knt until p = 1. That is, at each frequency skewed-p is the value p8(N) which solves
[I 0
P(KmN) = 1, K,,~ = (8.12 1)
i/ps Figure 8.14: RP of system with input uncertainty
Note that p underestimates how bad or good the actual worst-case performance is. This
follows because p5 (N) is always further from I than p(N). I. Find the interconnection matrix N for this problem.
2. Consider the SISO case, so that useful connections can be made with results from the
Remark. The corresponding worst-case perturbation may be obtained as follows. First compute
previous chapter.
the worst-case performance at each frequency using skewed-p. At the frequency where p3 (N)
has its peak, we may extract the corresponding worst-case perturbation generated by the software, 3. Consider a multivariable distillation process for which we have already seen from
and then find a stable, all-pass transfer function that matches this. In the Matlab Robust Control simulations in Chapter 3 that a decoupling controller is sensitive to small errors in the
toolbnx, the single command robustperf combines these steps: (perfmarg,perfmargunc] input gains. We will find that p for RP is indeed much larger than I for this decoupling
robustperf(lft(Delta,N) controller.
4. Find some simple bounds on p for this problem and discuss the role of the condition
number.
5. Make comparisons with the case where the uncertainty is located at the output.
8.11 Application: robust performance with input
uncertainty 8.11.1 Interconnection matrix
We will now consider in some detail the case of multiplicative input uncertainty with On rearranging the system into the NiX-structure, as shown in Figure 8.14, we get, as in
performance defined in terms of weighted sensitivity, as illustrated in Figure 8.14. The (8.32),
performance requirement is then w1KS (8.124)
N— torTi
—WpSG WpS
RP ~! Jwp(I+GpK)’I~~ <1, VGp (8.122)
where T1 = KG(I+ KG)’, S = (1+ GK)’. For simplicity we have omitted the negative
where the set of plants is given by signs in the 1,1 and 1,2 blocks of N, since p(N) = p(UN) with unitary U = [~‘ ~]; see
G~ = G(I + wjzIj), I¼iIl~~ < 1 (8.123) (8.83).
For a given controller K we can now test for NS, NP, RS and RP using (8.1 16)—(8.l 19)
Here top(s) and WI(S) are scalar weights, so the performance objective is the same for all with
the outputs, and the uncertainty is the same for all inputs. We will mostly assume that t\, 0
is diagonal, but we will also consider the case when &r is a full matrix. This problem 0
- iSp
is excellent for illustrating the robustness analysis of uncertain multivariable systems. It
Here iS = zX~ may be a full or diagonal matrix (depending on the physical situation), whereas
should be noted, however, that although the problem setup in (8.122) and (8.123) is fine
the fictitious perturbation matrix Lip, representing the 7-t~ performance specification, is
for analyzing a given controller, it is less suitable for controller synthesis. For example, the
always a full matrix.
problem formulation does not penalize directly the outputs from the controller.
In this section, we will:
IF
I
322 MULTIVARIABLE FEEDBACK CONTROL MIMO ROBUST STABILITY AND PERFORMANCE 323
1
8.11.2 RP with input uncertainty for SISO system will now confirm these findings by a p-analysis. To this effect we use the following weights
for uncertainty and performance:
For a 5150 system N in (8.124) is a 2 x 2 matrix and tX~ and IXp are scalars. In this case
conditions (8.1 16)—(8.119) become s + 0.2 s/2 + 0.05
wi(s) aSs + 1’ top(s) = - (8.133)
NS ~ N internally stable ~4’ 8, 8G. KS and T1 are stable (8.125)
With reference to (7.36) we see that the weight wi(s) may approximately represent a 20%
NP ~ U(N22) = tupS~ <1, Vw (8.126) gain error and a neglected time delay of 0.9 mm. wi (jw) I levels off at 2 (200% uncertainty)
RS ~ p~(Njj)=IwiTjI<1,Vw (8.127) at high frequencies. With reference to (2.105) we see that the performance weight top(s)
RP ~ p7j(N)=frupSl+IwiTiI<1.Vw (8.128) specifies integral action, a closed-loop bandwidth of about 0.05 [rad/min] (which is relatively
slow in the presence of an allowed time delay of 0.9 mm) and a maximum peak for a(S) of
where the RP condition (8.128) follows from (8.103); that is, =2.
We now test for NS, NP, RS and RP. Note that tX1 is a diagonal matrix in this example.
w1KS wjTj
wpS ~ topS = IwiTiI + jwpSI (8.129)
where we have used T1 = 1(5G. For 5150 systems, T1 = T and we see that (8.128) is
identical to (7.72), which was derived in Chapter 7 using a simple graphical argument based
on the Nyquist plot of L = OK.
RP optimization, in terms of weighted sensitivity with multiplicative uncertainty for a SISO
system, thus involves minimizing the peak value of p(N) = IwiTi + JwpSI. This may be
solved using DK-iteration as outlined later in Section 8.12, A closely related problem, which
is easier to solve both mathematically and numerically, is to minimize the peak value (R~
norm) of the mixed sensitivity matrix
10’ 10_i 100 101 10’
Nmix — topS
wjT (8.130) Frequency
Consider again the distillation process example from Chapter 3 (Motivating example no. 2) Is/2+0.051
a(N22) U(wpS)
and the corresponding inverse-based controller: s+0.7
0(s) =
1
____
75s + 1
[ 87.8
108.2
—86.4
—109.6 K(s) = $
(8.13 1) I
and we see from the dashed-dot line in Figure 8.15 that the NP condition is easily
satisfied: U(wpS) is small at low frequencies (0.05/0.7 = 0.07 at w = 0) and
approaches 1/2 = 0.5 at high frequencies.
The controller provides a nominally decoupled system with
RS Since in this case w1Tj = w1T is a scalar times the identity matrix, we have, independent
L11. 5=cIandT=tI (8.132) of the structure of eli, that -
5s+1 I
where Iwitl = 0.2
(0.5s + 1)(1.43s + 1)1I
1 —s —
t 1 0.7 1 and we see from the dashed line in Figure 8.15 that RS is easily satisfied. The peak
T’ ~1+1+07’ — ~s+0.71.43s+1
value of p,~1 (M) over frequency is IIMIIA, = 0.53. This means that we may increase
We have used € for the nominal sensitivity in each loop to distinguish it from the Laplace the uncertainty by a factor of 1/0.53 = 1.89 before the worst-case uncertainty yields
variables. Recall from Figure 3.14 that this controller gave an excellent nominal response, instability. That is, we can, tolerate about 38% gain uncertainty and a time delay of
but that the response with 20% gain uncertainty in each input channel was extremely poor. We about 1.7 mm before we get instability.
I
I
324 MULTIVARIABLE FEEDBACK CONTROL MIMO ROBUST STABILITY AND PERFORMANCE 325
I
RP Although our system has good robustness margins (RS easily satisfied) and excellent NP
we know from the simulations in Figure 3.14 that RP is poor. This is confirmed by the
p-curve for RP in Figure 8.15 which was computed numerically using p~ (N) with N
as in (8.124), ii = diag-(z~q, Ap} and ~ = diag{ó1, 82}. The peak value is close to
6, meaning that even with six times less uncertainty, the weighted sensitivity will he
about six times larger than we require. The peak of the actual worst-case weighted
Table 8.1: Matlab program for p-analysis (generates Figure 8.15)
sensitivity with uncertainty blocks of magnitude I, which may be computed using % Uses the Robust control toolbox
skewed-p. is for comparison 44.93. G0[87.8 —86.4; 108.2 —109.6];
G=tf ([1], [75 11) *G0;
The Matlab Robust Control toolbox commands to generate Figure 8.15 are given in Table 8.1. G=minreal (as (GIl;
% Inverse-based controller
In general, p with unstructured uncertainty (~1i full) is larger than p with structured
uncertainty (J.Ij diagonal). However, for our particular plant and controller in (8.131) it
appears from numerical calculations, and by use of (8.136) below, that they are the same.
I Rinv=O.7*tf)[7S 11(1 le_5])*inv(G0);
% Weights
Of course, this is not generally true, as is confirmed in the following exercise. Wp0.5*tf)[l0 l],[l0 le~5])*eye[2);
Witf([l 0.2],[O.5 l~eym)2);
Exercise 8.22* C’onsider the plant G(s) in (8.108) which is ill-conditioned with 7(G) = 70.8 at all
frequencies (but note that the RGA elements of C are all about 0.5). With an inverse-based controller Generalized plant P
IC(s) = ~‘1C(s)1, compute pfor RP with both diagonal and full-block input uncertainty using the systemnames = c up Wi
weights in (8.133). The value of p is much smaller in the former case. inputvar lydel(2); w(2) ;
outputvar = [WI ; Wp ; —G—wl
input$o-G [u+ydel3
inputtoWp [G+w]
8.11.4 RP and the condition number input_to_Wi = ‘ Eu]
sysoutname =
In this subsection, we consider the relationship between p for RP and the condition number cleanupsysic= ‘yes’; sysic;
of the plant or of the controller. We consider unstructured multiplicative input uncertainty N=lft(P,Kinv)
(i.e. i2s.~ is a full matrix) and performance measured in terms of weighted sensitivity. omega = logapace{-3,3,61); Nf=frd(N,omega);
where k is the condition number of either the plant or the controller (the smallest one should % Worst case weighted sensitivity
be used): delta = [ultidyn(’dell’, [1 1]) 0;0 ultidyn(’del2’, [1 1]));
k=7(G) or k=7(K) (8.135) Np = lft(delta,N); %Perturbed model
opt = wcgopt(’Asadmreshold’,lOO);
Proof of (8.134): Since zXj is a full matrix, (8.87) yields ups = wcgain(Np,opt); I (ens 44.98 for
I delta = 1)
p(N) = minU
r N11 dN121 I mu for as
d Ld_1N21 N22 j I
Nrs-Nf(l:2,l:2); I Picking out WiTi
[mubnds,muinfo)=mussv(Nrs,[l 1; 1 Un’);
where from (A.47) muRS=mubnds (: • U; [musSinf,rsuaSw)=norm(muRS, intl I lana 0.5242)
dw,KS1 I
wiT,
~ d~tvpSG
wpS j ≤ a(w,Tj [I dG’]) ±a(wpS [d’G I]) I mu for NS (=max. singular value of Nnp)
I
Nnp=Nf(3:4,3:4); I Picking out wP*Si
≤ a(w,Ti)U(I dGj+a(wpS)a(d~G I) [mubnds,nuinfo]=mussv(Nnp, (1 l;l 1], c’);
muNS=muhnds( :4); [muusinf,muNSw]=norn(muNS, inf) I (ans = 0.500)
≤1+id)a(G’) ≤1+!d~IU(G) bodemag(muRP, ‘,muRS, ——‘ ,muNS, _. ‘ omega)
From (8 134) we see that with a “round” controller, i e one with 7(K) = 1, there is less A similar bound involving 7(K) applies We then have
sensitivity to uncertainty (but it may be difficult to achieve NP in this case) On the other —
hand, we would expect p for RP to be large if we used an inverse-based controller for a plant p8(N) = U(wpS)
a(zopS’) < Ic (8 138)
with a large condition number, since then -y(K) = 7(G) is large This is confirmed by (8 136) 1 a(wiTj) — —
below where Ic as befoie denotes the condition number of either the plant or the controller
Example 8.11 For the distillation process studied above, we have 7(G) = 7(I() = 141 7 (preferably the smallest) Equation (8 138) holds for any controller and for any structure of
at all frequencies, and at frequency to = 1 rad/min the uppe; bound given by (8 134) becomes the uncertainty (including ~i unstructured)
(052 + 0 41)(1 + y’1411) = 131 This is highet than tile actual value of p(N) which is 556.
winch illustiates that tile bound in (8 134) is genetally not tight Remark 1 In Section 6 104, we derived tighter upper bounds for cases when ~j is restncted to be
diagonal and when we have a decoupling controllei Tn (6 93), we also denved a lower bound in terms
Inverse-based controller. With an inverse-based controller (iesulting in the nominal oftheRGA
decoupled system (8 132)) and unstructured input uncertainty, it is possible to derive an
analytic expression for p for RP with N as in (8 124) Remark 2 Since p8 = p when p = 1, we may, from (8 134), (8 138) and expressions similar to (6 91)
p(N) = ~f0 17 ± 027 + 3051 = 556 which agtees with the plot in Figure 815 R5
RP
a(st) ≤ ‘y~0~ i —
a(s)
a~wi~ti~ (8 137)
It also implies that for practical purposes we may optimize RP with output uncertainty by
minimizing the 7L00 norm of the stacked matrix
328 MULTIVARIABLE FEEDBACK CONTROL MIMO ROBUST STABILITY AND PERFORMANCE 329
Exercise 8.23 Consider the RP problem with weighted sensitivity and multiplicative output value independent of the controller (because L(joo) 0 for real systems). However, with a
uncertainty. Derive the inteivonnection ~natnx N for (1) the conventional case with ~ = finite-order controller we will generally not be able (and it may not be desirable) to extend
diag{~, A~ }, and (2) the stacked case when ~ = [~ ~p]. Use this to prove (8.140). the flatness to infinite frequencies.
The DIcE-iteration depends heavily on optimal solutions for steps I and 2, and also on
good fits in step 3, preferably by a transfer function of low order. One reason for preferring
a low-order fit is that this reduces the order of the 9-t~ problem, which usually improves the
8.12 j.i-synthesis and DK-iteration numerical properties of the 7t~ optimization (step 1) and also yields a controller of lower
order. In some cases the iterations converge slowly, and it may be difficult to judge whether
The structured singular value p is a very powerful tool for the analysis of RP with a given the iterations are converging or not. One may even experience the p-value increasing. This
controller. However, one may also seek to find the controller that minimizes a given p may be caused by numerical problems or inaccuracies (e.g. the upper bound p-value in step
condition: this is the p-synthesis problem. 2 being higher than the ?-(~ norm obtained in step 1), or by a poor fit of the V-scales. In
any case, if the iterations converge slowly, then one may consider going back to the initial
problem and rescaling the inputs and outputs.
8.12.1 DK-iteration In the IC-step (step 1) where the ?~t~ controller is synthesized, it is often desirable to use a
At present there is no direct method to synthesize a p-optimal controller. However, for slightly suboptimal controller (e.g. with an 9-t~ norm, 7, which is 5% higher than the optimal
complex perturbations a method known as DK-iteration is available. It combines ?-t~ value, ~ This yields a blend of 7-1~ and ?L2 optimality with a controller which usually
synthesis and p-analysis, and often yields good results. The starting point is the upper bound has a steeper high-frequency roll-off than the 9L~ optimal controller.
(8.87) on p in terms of the scaled singular value
8.12.2 Adjusting the performance weight
p(N) < mill U(DND’)
— DEV
Recall that if pat a given frequency is different from 1, then the interpretation is that at this
I~. frequency we can tolerate 1/p-times more uncertainty and satisfy our performance objective
The idea is to find the controller that minimizes the peak value over frequency of this upper p
bound, namely with a margin of i/p. In p-synthesis, the designer will usually adjust some parameter(s)
rnin(minlIDN(K)D’IJ~) (8.141) in the performance or uncertainty weights until the peak p-value is close to 1. Sometimes
the uncertainty is fixed, and we effectively optimize worst-case performance by adjusting a
by alternating between minimizing IIDAT(K)D_hII~ with respect to either K or D (while parameter in the performance weight. For example, consider the performance weight
holding the other fixed). To start the iterations, one selects an initial stable rational transfer
matrix D(s) with appropriate structure. The identity matrix is often a good initial choice for s/M +w~
wp(s) (8.142)
V provided the system has been reasonably scaled for performance. The DEC-iteration then s+w~A
proceeds as follows:
where we want to keep M constant and find the highest achievable bandwidth frequency w~.
I. IC-step. Synthesize an 7~1c.D controller for the scaled problem, minjç IIDN(IflD’ ~ The optimization problem becomes
with fixed D(s).
2. V-step. Find D(jw) to minimize at each frequency U(DND’ (jw)) with fixed N. maxI~4j such that p(N) <1,Vw (8.143)
3. Fit the magnitude of each element of D(jw) to a stable and minimum-phase transfer
where N, the interconnection matrix for the RP problem, depends on w7~. This may be
function D(s) and go to step 1.
implemented as an outer loop around the DIcE-iteration.
The iteration may continue until satisfactory performance is achieved, IIDND’ ~ < 1, or
until the Rc,~ norm no longer decreases. One fundamental problem with this approach is that
8.12.3 Fixed structure controller
although each of the minimization steps (IC-step and V-step) are convex, joint convexity is
not guaranteed. Therefore, the iterations may converge to a local optimum. However, practical Sometimes it is desirable to find a low-order controller with a given structure, e.g. a
experience suggests that the method works well in most cases. decentralized PID controller. This may be achieved by numerical optimization where p is
The order of the controller resulting from each iteration is equal to the number of states minimized with respect to the controller parameters. The problem here is that the optimization
in the plant G(s) plus the number of states in the weights plus twice the number of states in is not generally convex in the parameters. Sometimes it helps to switch the optimization
D(s). For most cases, the true p-optimal controller is not rational, and will thus be of infinite between minimizing the peak of p (i.e. IpII~) and minimizing the integral square deviation
order, but because we use a finite-order D(s) to approximate the V-scales, we get a controller of p away from k (i.e. IIpCjw) kI~2) where k usually is close to I. The latter is an attempt
—
of finite (but often high) order. The true p-optimal controller would have a flat p-curve (as a to “flatten out” p.
function of frequency), except at infinite frequency where p generally has to approach a fixed
330 MULTIVARIABLE FEEDBACK CONTROL MIMO ROBUST STABILITY AND PERFORMANCE 331
with respect to uncertain high-frequency dynamics), or one may consider a more complicated % Weights.
problem setup (see Section 13.4). up = O.5*tf([lO lL[lO l.e_5])*eye(2); C Approximated
With this caution in mind, we proceed with the problem description. Again, we use the Wi = tf([l 0.23,10.5 l3)*eye(2); C integrator.
model of the simplified distillation process % Generalized plant P. %
systemnanes = G Wp Wi’;
C(s) = 758±1 [~1 z~] (8.144)
inputvar = ‘(udel[2); w(2) ; u(2)3’;
outputvar [Wi; up; —G-w]
input.tofl = ‘[u÷udell’;
input.to_Wp = ‘ IG+w) ; input_to-wi = [uj
The uncertainty weight wjl and performance weight wpI are given in (8.133), and are shown sysoutname = P’; cleanupsysic = yes’;
graphically in Figure 8.16. The objective is to minimize the peak value of p~ (N), where N sysic;
P ninreal(ss(Pfl;
is given in (8.124) and zi = diag{Aj,~p}. We will consider diagonal input uncertainty
(which is always present in any real problem), so IX~ is a 2 x 2 diagonal matrix. Ap is a C Initialize.
full 2 x 2 matrix representing the performance specification. Note that we have only three omega = logspace(-3,3,61);
complex uncertainty blocks, so p(N) is equal to the upper bound minD U(DND’) in this blk = [11; 1 1; 2 2);
nneas = 2; flu = 2; dO = 1;
case. 0= append(dO,dO,tf(eye(2)),tf(eye(2)H; C Initial scaling.
We will now use DK-iteration in an attempt to obtain the p-optimal controller for this
% START ITERATION -
example. The appropriate commands for the Matlab Robust Control toolbox are listed in
Table 8.2. The Matlab Robust Control toolbox contains commands that “automate” the 13K- C STEP 1; Find s-infinity optimal controller
% with given scalings:
iteration (listed at the bottom of Table 8.2), but we use a “manual” approach here, as this
yields more insight. IK,Nsc,ganvsa, info) hinfsyn[o*P*inv(O),nmeas,nu,....
‘aethod’, ‘lni’, ‘Tolgam’,le-3);
Nf frd(lft(P,K),omega);
to-
C STEP 2; compute mu using upper bound:
First the generalized plant P as given in (8.29) is constructed. It includes the plant model,
the uncertainty weight and the performance weight, but not the controller which is to be
designed (note that N = F,(P, K)). Then the block structure is defined; it consists of two
1 x 1 blocks to represent Zx~ and a 2 x 2 block to represent I≥,~p. The scaling matrix 13 for
332 MULTIVARIABLE FEEDBACK CONTROL MIMO ROBUST STABILITY AND PERFORMANCE 333
DND’ then has the structureD = diag{d,, d2, d31,_} where 12 is a 2 x 2 identity matrix,
and we may set d3 = 1. As initial scalings we select d? = 4 = 1. P is then scaled with
Initial
the matrix diag{D, 12} where ‘2 is associated with the inputs and outputs from the controller 10°
—~
(dotted line) are shown in Figure 8.18 and labelled “Iter. 1”. The fit is very good, except l0~~ -~ 10_I 10° 10’ 102 10~
at higher frequencies. At low frequencies, it is hard to distinguish the two curves. d2 is not Frequency [md/mm]
shown because it was found that d, d2 (indicating that the worst-case full-block A1 is in
fact diagonal). Figure 8.18: Change in D-scale d1 during DK-iteration
Iteration no. 2. Step 1: With the 8-state scaling .D’(s) the ?-t~ software gave a 21-state
controller and IID’N(D1y’II~ = 1.0274. Step 2: This controller gave a peak value of p
of 1.0272. Step 3: The resulting scalings D2 were only slightly changed from the previous
iteration as can be seen from d? (w) labelled “Iter. 2” in Figure 8.18.
Iteration no. 3. Step 1: With the scalings D2(s) the 7-1~ norm was only slightly reduced
from 1.0274 to 1.0208. Since the improvement was small and since the value was very close 0.6
to the desired value of 1, it was decided to stop the iterations. The resulting controller with 0.4
21 states (denoted K3 in the following) gives a peak p-value of 1.0205.
0.2
Figure 8.17: Change in p during DK-iteration which is 0.2 in magnitude at low frequencies. Thus, the following input gain perturbations
are allowable:
0
EI1—EL2 1.2
0~ E12=
[0.8
0
1
L2J1E13_LO
[12 n
0.8’
] [0.8 0]
‘~~o o.s
Analysis of p-”optimal” controller K3
The final p-curves for NP, RS and RP with controller 1(3 are shown in Figure 8.19. The These perturbations do not make use of the fact that wi(s) increases with frequency. Two
objectives of RS and NP are easily satisfied. Furthermore, the peak p-value of 1.0205 with allowed dynamic perturbations for the diagonal elements in w~A, are
controller K3 is only slightly above 1, so the performance specification U(wpsp) < 1 is —s + 0.2 s + 0.2
almost satisfied for all possible plants. To confirm this we considered the nominal plant and E,(s) = 0.5s + 1’
0.ös+1
six perturbed plants
Z(s) = G(s)Ei~(s) corresponding to elements in E11 of
—0.417s + 1 —0.633s + 1
f,(s)=1+c,(s)=1.2 0.5s+1 f2(s) = 1 + c2(s) = 0.8 0.5s + 1
334 MULTJVARIABLE FEEDBACK CONTROL MIMO ROBUST STABILITY AND PERFORMANCE 335
Remark. The “worst-case” plant is not unique, and there are many plants which yield a worst-case
performance of maxs~ i~’~s~ii~ = 1.037. For example, it is likely that we could find plants which
were more consistently “worse” at all frequencies than the one shown by the dotted lines in Figure 8.20.
io°
The time responses of y~ and y2 to a filtered setpoint change in y~, r1 = 1/(5s + 1), are
shown in Figure 8.21 both for the nominal case (solid line) and for 20% input gain uncertainty
(dashed line) using the plant G’3 = GE3 (which we know is one of the worst plants). The
responses are interactive, but show no strong sensitivity to the uncertainty. The responses with
uncertainty are seen to be much better than those with the inverse-based controller studied
10~’ earlier and shown in Figure 3.14.
1o2 10_i 100 10’ 102
Frequency [rad/min] Remarks on the p-synthesis example.
Figure 8.20: Perturbed sensitivity functions ã(S’) using p-”optimal” controller 1(3. Dotted lines: plants 1. By trial and error, and many long nights, Petter Lundström was able to reduce the peak p-value for
C~, i = 1,6. Solid line: nominal plant C. Dashed line: inverse of performance weight. RP for this problem down to about popt = 0.974 (Lundstrom, 1994). The resulting design produces
the curves labelled optimal in Figures 8.17 and 8.18. The corresponding controller, ‘(opt, may be
synthesized using ?1~ synthesis with the following third-order fl-scales:
0.6
2. Note that the optimal controller K0~~ for this problem has an SVD form. That is, let C = UEVH,
0.4
then ‘(opt = VK3U~ where IC is a diagonal matrix. This arises because in this example U and V
0.2 are constant matrices. For more details see lloyd (1992) and Hovd et al. (1997).
0 3. For this particular plant it appears that the worst-case full-block input uncertainty is a diagonal
perturbation, so we might as well have used a full matrix for /~j. But this does not hold in general.
0 20 40 60 80 100 4. The ?1 software may encounter numerical problems if F(s) has poles on the jw-axis. This is the
Time [mm]
reason why in the Matlab code we have moved the integrators (in the performance weights) slightly
into the LHP
Figure 8.21: Setpoint response for p-”optimal” controller K3. Solid line: nominal plant. Dashed line:
5. The initial choice of scaling D = I gave a good design for this plant with an ?~Lcc norm of about 1.18.
uncertain plant G~.
This scaling worked well because the inputs and outputs had been scaled to be of unit magnitude.
For a comparison, consider the original model in Skogestad et al. (1988) which was in terms of
so let us also consider unscaled physical variables:
1 [0.878 —0.864 (8.146)
= [fl(s) 0 1 E16 = [h(s) a 1 Cunscaiea(s) = —1.096
fj(s)j’ L 0 fi(s)] 75s+1 L1.o82
Equation (8.146) has all its elements 100 times smaller than in the scaled model (8.144). Therefore,
The maximum singular value of the sensitivity, U(Sfl, is shown in Figure 8.20 for the nominal
using this model should give the same optimal p-value but with controller gains 100 times larger.
and six perturbed plants, and is seen to be almost below the bound 1/lwj(jw)l for all seven However, starting the DK-iteration with D = I works vety poorly in this case. The first iteration
cases (1 = 0,6) illustrating that RP is almost satisfied. The sensitivity for the nominal plant yields an 7(00 norm of 14.9 (step 1) resulting in a peak p-value of 5.2 (step 2). Subsequent iterations
is shown by the solid line, and the others with dotted lines. At low frequencies the worst-case yield with third- and fourth-order fits of the D-scales the following peak p-values: 2.92, 2.22, 1.87,
corresponds closely to a plant with gains 1.2 and 0.8, such as G~, G’3 or G’0. Overall, the 1.67, 1.58, 1.53, 1.49, 1.46, 1.44, 1.42. At this point (after 11 iterations) the p-plot is fairly flat up
worst case of these six plants seems to be G’6 = 0E16, which has a(S’) close to the bound at to 10 [rad/min] and one may be tempted to stop the iterations. However, we are still far away from
low frequencies, and has a peak of about 2.003 (above the allowed bound of 2) at 3.5 rad/min. the optimal value which we know is less than 1. This demonstrates the importance of good initial
To find the “true” worst-case performance and plant we used the Matlab Robust Control D-scales, which is related to scaling the plant model properly.
toolbox command robustperf as explained in Section 8.10.3 on page 320. This gives a 6. We used the stepwise procedure for DK-iteration primarily for insight. Matlab Robust Control
worst-case performance of maxs~ ~ws~SpIIoc = 1.0205, and the sensitivity function for the toolbox command dksyn provides an automated version of this procedure, which is sho’vn at the
bottom of Table 8.2. For the distillation process example, dksyn yields a 26-state controller with a
corresponding worst-case plant G~,,c(s) = G(s)(I + wi(s)Iiwc(s)) found with the software
p value of 1.094 in four iterations, which is inferior compared to the “manual” stepwise procedure.
has a peak value of a(S~) of about 1.0979 at 0.02 rad/min. It may seem surprising that lisp 1100
is much smaller than the sensitivity peak for the perturbed plants considered earlier; however, Exercise 8.24 * Explain why the optimal p-value would not change if in the model (8.144) we changed
note that 0~c(~) is the worst-case plant with respect to the peak value of J~wpS,,~,3 and not the time constant of 75 [mini to another va/tie. Note that the p-iteration itself would be affected.
Ii’~’p I cc-
336 MULTIVARIABLE FEEDBACK CONTROL MIMO ROBUST STABILITY AND PERFORMANCE 337
8.13 Further remarks on ,u et al., 1994), even for purely complex perturbations (Toker and Ozbay, 1998).
This does not mean, however, that practical algorithms are not possible, and we have
8.13.1 Further justification for the upper bound on p described practical algorithms for computing upper bounds of p for cases with complex,
real or mixed real/complex perturbations.
For complex perturbations, the scaled singular value U(VND’) is a tight upper bound As mentioned on page 310, the upper bound a(VMD’”) for complex perturbations is
on p(N) in most cases, and minimizing the upper bound IIDND’II~ forms the basis for generally tight, whereas the present upper bounds for mixed perturbations (see (8.147)) may
the DIcE-iteration. However, IIDND’ ~ is also of interest in its own right. The reason be arbitrarily conservative.
for this is that when all uncertainty blocks are full and complex, the upper bound provides There also exist a number of lower bounds for computing p. Most of these involve
a necessary and sufficient condition for robustness to arbitrary-slow time-varying linear generating a perturbation which makes I MiX singular, see e.g. Young and Doyle (1997).
—
uncertainty (Poolla and Tikku, 1995). On the other hand, the use of p assumes the uncertain
perturbations to be time invariant, In some cases, it can be argued that slowly time-varying
uncertainty is more useful than constant perturbations, and therefore that it is better to 8.13.4 Discrete case
minimize IIDND’’lIc.o instead of p(N). In addition, by considering how D(w) varies with
It is also possible to use p for analyzing RP of discrete time systems (Packard and
frequency, one can find bounds on the allowed time variations in the perturbations.
Doyle, 1993). Consider a discrete time system
Another interesting fact is that the use of constant V-scales (V is not allowed to vary
with frequency) provides a necessary and sufficient condition for robustness to arbitrary-fast Zk+1 = Azk + Buk, Yk = Cx, + Vu,
time-varying linear uncertainty (Shamma, 1994). It may be argued that such perturbations
are unlikely in a practical situation. Nevertheless, we see that if we can get an acceptable The corresponding discrete transfer function matrix from u toy is N(z) = C(zI — A) —‘B +
controller design using constant V-scales, then we know that this controller will work very V. First, note that the ?‘L~.~ norm of a discrete transfer function is
well even for rapid changes in the plant model. Another advantage of constant V-scales
is that the computation of p is then straightforward and may be solved using LMIs, see IINIicc 4 max a(C(zI — A)’B + 13)
Example 12.4.
This follows since evaluation on the jw-axis in the continuous case is equivalent to the unit
circle (Izi = 1) in the discrete case. Second, note that N(z) may be written as an LFT in
8.13.2 Real perturbations and the mixed p-problem terms of 1/z,
We have not discussed in any detail the analysis and design problems which arise with real
or, more importantly, mixed real and complex perturbations.
The current algorithms, implemented in the Matlab p-toolbox, employ a generalization of
N(z)=C(zI_A)’B+D=F~(H~~I) H= [~ ~] (8.148)
the upper bound U(DkID1), where in addition to V-matrices, which exploit the block- Thus, by introducing ö~ = 1/z and iX~ = ~-I we have from the main loop theorem of
diagonal structure of the perturbations, there are C-matrices, which exploit the structure of Packard and Doyle (1993) (which generalizes Theorem 8.7) that uNit00 < 1 (NP) if and only
the real perturbations. The C-matrices (which should not be confused with the plant transfer if
function C(s)) have real diagonal elements at locations where zi is real and have zeros pa(H) < 1, ~ diag{iX~, i≥ip} (8.149)
elsewhere. The algorithm in the it-toolbox makes use of the following result from Young et al.
(1992): if there exist a fi > 0, a V and a C with the appropriate block-diagonal structure such where tI~ is a matrix of repeated complex scalars, representing the discrete “frequencies”,
that and Lip is a full complex matrix, representing the singular value performance specification.
Thus, we see that the search over frequencies in the frequency domain is avoided, but at the
a ((I + G2)_* 3DkID-’ — JG) (I + G2r*) <1 (8.147) expense of a complicated p-calculation. The condition in (8.149) is also referred to as the
state-space p-test.
then p(M) ≤ j3. For more details, the reader is referred to Young (1993). Condition (8.149) only considers nominal performance (NP). However, note that in this
There is also a corresponding 130K-iteration procedure for synthesis (Young, 1994). The
case nominal stability (NS) follows as a special case (and thus does not need to be tested
practical implementation of this algorithm is, however, difficult, and a very high-order fit
separately), since when pa(H) < 1 (NP) we have from (8.78) that p~, (A) = p(A) < 1,
may be required for the C-scales. An alternative approach which involves solving a series of which is the well-known stability condition for discrete systems.
scaled DEC-iterations is given by Tøffner-Clausen et al. (1995).
We can also generalize the treatment to consider RS and RP. In particular, since the
state-space matrices are contained explicitly in H in (8.148), it follows that the discrete
8.13.3 Computational complexity time formulation is convenient if we want to consider parametric uncertainty in the state-
space matrices. This is discussed by Packard and Doyle (1993). However, this results in real
It has been established that the computational complexity of computing p has a combinatoric perturbations, and the resulting p-problem which involves repeated complex perturbations
(non-polynomial or “NP-hard”) growth with the number of parameters involved (Braatz (from the evaluation of z on the unit circle), a full-block complex perturbation (from the
338 MULTIVARIABLE FEEDBACK CONTROL MIMO ROBUST STABILITY AND PERFORMANCE 339
performance specification), and real perturbations (from the uncertainty), is difficult to solve structured singular value; for example, by considering first simple sources of uncertainty
numerically both for analysis and in particular for synthesis. For this reason the discrete time such as multiplicative input uncertainty. One then iterates between design and analysis until
formulation is little used in practical applications. a satisfactory solution is obtained.
Practical p-analysis
8.14 Conclusion
We end the chapter by providing a few recommendations on how to use the structured singular
In this chapter and the last we have discussed how to represent uncertainty and how to analyze value p in practice.
its effect on stability (RS) and performance (RP) using the structured singular value p as our 1. Because of the effort involved in deriving detailed uncertainty descriptions, and the
main tool. subsequent complexity in synthesizing controllers, the rule is to “start simple” with a
To analyze robust stability (RS) of an uncertain system we make use of the MA-structure crude uncertainty description, and then to see whether the performance specifications can
(Figure 8.3) where M represents the transfer function for the “new” feedback part generated be met. Only if they can’t, should one consider more detailed uncertainty descriptions such
by the uncertainty. From the small-gain theorem, as parametric uncertainty (with real perturbations).
2. The use of p implies a worst-case analysis, so one should be careful about including too
RS ‘~= U(M) < 1 Vw (8.150) many sources of uncertainty, noise and disturbances— otherwise it becomes very unlikely
which is tight (necessary and sufficient) for the special case where at each frequency any for the worst case to occur, and the resulting analysis and design may be unnecessarily
complex A satisfying a(A) < 1 is allowed. More generally, the tight condition is conservative.
3. There is always uncertainty with respect to the inputs and outputs, so it is generally “safe”
RS * p(M) <1 Vw (8.151) to include diagonal input and output uncertainty. The relative (multiplicative) form is very
convenient in this case.
where 1z(M) is the structured singular value p(M). The calculation of p makes use of the 4. p is most commonly used for analysis. If p is used for synthesis, then we recommend that
fact that A has a given block-diagonal structure, where certain blocks may also be real (e.g. you keep the uncertainty fixed and adjust the parameters in the performance weight until
to handle parametric uncertainty). p is close to I.
We defined robust performance (RP) as JIF~(N, A)IJ~, < 1 for all allowed A’s. Since we
used the R~ norm in both the representation of uncertainty and the definition of performance,
we found that RP could be viewed as a special case of RS, and we derived
where p is computed with respect to the block-diagonal structure diag{A, Ap}. Here A
represents the uncertainty and Ap is a fictitious full uncertainty block representing the 7t~
performance bound.
It should be noted that there are two main approaches to getting a robust design:
1. We aim to make the system robust to some “general” class of uncertainty which we do
not explicitly model. For 5180 systems the classical gain and phase margins and the peaks
of S and T provide useful general robustness measures. For MIMO systems, normalized
coprime factor uncertainty provides a good general class of uncertainty, and the associated
Glover—McFarlane ?-L~ loop-shaping design procedure, see Chapter 9, has proved itself
very useful in applications.
2. We explicitly model and quantify the uncertainty in the plant and aim to make the
system robust to this specific uncertainty. This second approach has been the focus of the
preceding two chapters. Potentially, it yields better designs, but it may require a much
larger effort in terms of uncertainty modelling, especially if parametric uncertainty is
considered. Analysis and, in particular, synthesis using p can be very involved.
In applications, it is therefore recommended to start with the first approach, at least for
design. The robust stability and performance are then analyzed in simulations and using the
340 MIJLTIVARIABLE FEEDBACK CONTROL
CONTROLLER DESIGN
In this chapter, we present practical procedures for multivariable controller design which are relatively
straightforward to apply and which, in our opinion, have an important role to play in industrial control.
For industrial systems which are either SISO or loosely coupled, the classical loop-shaping approach
to control system design as described in Section 2.6 has been successfully applied. But for truly
multivariable systems it has only been in the last two decades, or so, that reliable generalizations of
this classical approach have emerged.
relationships:
a(GK)
n
Figure 9.1: One degree-of-freedom feedback configuration Figure 9.2: Design trade-offs for the multivariable loop transfer function GK
2. For noise attenuation make 0(0K) small; valid for frequencies at which 0(0K) ~< 1.
1. For disturbance rejection make a(s) small.
3. For reference tracking make c(GK) large; valid for frequencies at which £(GK) >> 1.
2. For noise attenuation make 0(T) small.
4. For input usage (control energy) reduction make o(K) small; valid for frequencies at
3. For reference tracking make a(T) ~(T) 1.
which a(OK) <<1.
4. For input usage (control energy) reduction make U(KS) small.
5. For robust stability to an additive perturbation make a(K) small; valid for frequencies at
If the unstructured uncertainty in the plant model 0 is represented by an additive perturbation, which 0(0K) ~< 1.
i.e. Q~, = 0 + ~, then from (8.53), a further closed-loop objective is: 6. For robust stability to a multiplicative output perturbation make o(GK) small; valid for
frequencies at which 0(0K) ~< 1.
5. For robust stability in the presence of an additive perturbation make aU(S) small.
Typically, the open-loop requirements 1 and 3 are valid and important at low frequencies,
Alternatively, if the uncertainty is modelled by a multiplicative output perturbation such that 0 < w < w1 < WB, while 2, 4, 5 and 6 are conditions which are valid and important at
= (I + zIJO, then from (8.55), we have: high frequencies, WB ≤ 0h ≤ w ~ oo, as illustrated in Figure 9.2. From this we see that
at frequencies where we want high gains (at low frequencies) the “worst-case” direction is
6. For robust stability in the presence of a multiplicative output perturbation make a(T)
related to a(GK), whereas at frequencies where we want low gains (at high frequencies) the
small.
“worst-case” direction is related to o(GK).
The closed-loop requirements 1 to 6 cannot all be satisfied simultaneously. Feedback design
Exercise 9.1 Show that the closed-loop objectives I to 6
* call be approximated by the open-loop
is therefore a trade-off over frequency of conflicting objectives. This is not always as difficult objectives 3 to 6 at the specified frequency ranges.
as it sounds because the frequency ranges over which the objectives are important can be
quite different. For example, disturbance rejection is typically a low-frequency requirement, From Figure 9.2, it follows that the control engineer must design K so that o(GK) and
while noise mitigation is often only relevant at higher frequencies. a(GK) avoid the shaded regions. That is, for good performance, c(GK) must be made to
In classical loop shaping, it is the magnitude of the open-loop transfer function L = OK lie above a performance boundary for all W up to wj, and for robust stability a-(GK) must
which is shaped, whereas the above design requirements are all in terms of closed-loop be forced below a robustness boundary for all LU above Wh. To shape the singular values of
transfer functions. However, recall from (3.51) that OK by selecting K is a relatively easy task, but to do this in a way which also guarantees
closed-loop stability is in general difficult. Closed-loop stability cannot be determined from
1
a(L) —1 ≤ ~ a(L) +1 (9.3) open-loop singular values.
For 5150 systems, it is clear from Bode’s work (1945) that closed-loop stability is closely
from which we see that a(S) 1/u(L) at frequencies where g(L) is much larger than 1. related to open-loop gain and phase near the crossover frequency Wc, where IGK(iw~) 1.
It also follows that at the bandwidth frequency (where 1/U(S(jw~)) = ~ = 1.41), we In particular, the roll-off rate from high to low gain at crossover is limited by phase
have u(L(jwB)) between 0.41 and 2.41. Furthermore, from T = L(I + L)—1 it follows that requirements for stability, and in practice this corresponds to a roll-off rate less than
0(T) a(L) at frequencies where a(L) is small. Thus, over specified frequency ranges, it 40 dB/decade (slope —2 on log—log plot); see Section 2.6.2. An immediate consequence
is relatively easy to approximate the closed-loop requirements by the following open-loop of this is that there is a lower limit to the difference between LU~ and Wj in Figure 9.2.
objectives: For MIMO systems a similar gain—phase relationship holds in the crossover frequency
region, but this is in terms of the eigenvalues of OK and results in a limit on the roll-off rate
1. For disturbance rejection make a(OK) large; valid for frequencies at which c(GK) >~ 1. of the magnitude of the eigenvalues of OK, not the singular values (Doyle and Stein, 1981).
344 MULTJVARIABLE FEEDBACK CONTROL CONTROLLER DESIGN 345
The stability constraint is therefore even more difficult to handle in multivariable loop shaping
and
than it is in classical loop shaping. To overcome this difficulty Doyle and Stein (1981)
proposed that the loop shaping should be done with a controller that was already known E {wd(t)w~QT)T} = 0, E {w,,(t)wdQr)T} = 0 (9.8)
to guarantee stability. They suggested that an LQG controller could be used in which the
regulator part is designed using a “sensitivity recovery” procedure of Kwakernaak (1969) to where .E is the expectation operator and ÔQ r) is a delta function.
—
give desirable properties (gain and phase margins) in OK. They also gave a dual “robustness The LQG control problem is to find the optimal control u(t) which minimizes
recovery” procedure for designing the filter in an LQG controller to give desirable properties
in KG. Recall that KG is not in general equal to OK, which implies that stability margins I ‘I
vary from one break point to another in a multivariable system. Both of these loop transfer J= E~ lim —/
{~T-*coTjo
[xTQx+uTRu]dt~ (9.9)
recovery (LTR) procedures are discussed below after first describing traditional LQG control.
where Q and R are appropriately chosen constant weighting matrices (design parameters)
such that Q = QT ≥ 0 and R = > 0. The name LQG arises from the use of a Linear
model, an integral Quadratic cost function, and Gaussian white noise processes to model
9.2 LQG control disturbance signals and noise.
The solution to the LQG problem, known as the separation theorem or certainty
Optimal control, building on the optimal filtering work of Wiener in the 1940’s, reached equivalence principle, is surprisingly simple and elegant. It consists of first determining the
maturity in the 1960’s with what we now call linear quadratic Gaussian or LQG control. optimal controller for a deterministic linear quadratic regulator (LQR) problem: namely, the
Its development coincided with large research programmes and considerable funding in above LQG problem without wd and w~. It happens that the solution to this problem can be
the United States and the former Soviet Union on space-related problems. These were written in terms of the simple state feedback law
problems, such as rocket manoeuvring with minimum fuel consumption, which could be
u(t) = —K,.z(t) (9.10)
well defined and easily formulated as optimization problems. Aerospace engineers were
particularly successful at applying LQG, but when other control engineers attempted to use where K,. is a constant matrix which is easy to compute and is clearly independent of W and
LQG on everyday industrial problems a different story emerged. Accurate plant models
V. the statistical properties of the plant noise. Note that (9.10) requires that a; is measured
were frequently not available and the assumption of white noise disturbances was not and available for feedback, which is not generally the case. This difficulty is overcome by the
always relevant or meaningful to practising control engineers. As a result LQG designs were
sometimes not robust enough to be used in practice. In this section, we will describe the {
next step, where we find an optimal estimate i~ of the state a;, so that E [a; 1jjT [a; i~j
— — }
LQG problem and its solution, we will discuss its robustness properties, and we will describe is minimized. The optimal state estimate is given by a Kalman filter and is independent of Q
procedures for improving robustness. Many textbooks consider this topic in far greater detail; and B. The required solution to the LQG problem is then found by replacing a; by £, to give
we recommend Anderson and Moore (1989) and Kwakernaak and Sivan (1972). u(t) = K4(t). We therefore see that the LQG problem and its solution can be separated
into two distinct parts, as illustrated in Figure 9.3.
th = Az+Bu+wd (9.4)
y = Gx+Du+wn (9.5)
where for simplicity we set Li = 0 (see Remark 2 on page 347). wd and w,~ are the
disturbance (process noise) and measurement noise respectively, which are usually assumed
~1~
Deterministic
to be uncorrelated zero-mean Gaussian stochastic processes with constant power spectral linear quadratic a; Kalman filter
density matrices I’V and V respectively. That is, wd and w,~ are white noise processes with regulator
covariances dynamic system
constant, —K,.
E {wd(t)wdQr)T} = Wö(t r)— (9.6)
Figure 9.3: The separation theorem
E {wn(t)wn(r)T} = Vö(t r)
—
(9.7)
346 MULTIVARIABLE FEEDBACK CONTROL CONTROLLER DESIGN 347
WY,
LQG: combined optimal state estimation and optimal state feedback. The LQG control
problem is to minimize J in (9.9). The structure of the LQG controller is illustrated in
Figure 9.4; its transfer function, from y ton (i.e. assuming positive feedback), is easily shown
to be given by
s A—BK,.—KfC Kf
= K,. 0
—
—
A BR_1BTX YCTv_1C
—
-Th’B~X
Remark I The optimal gain matrices Icr and Kr exist, and the LQG-controlled system is internally
stable, provided the systems with state-space realizations (A, B, Q?i) and (A, W~, C) are stabilizable
and detectable.
Remark 2 If the plant model is bi-proper. with a non-zero D-term in (9.5), then the Kalman filter
equation (9.14) has the extra term —K1Du on the right hand side, and the A-matrix of the LQG
Figure 9.4: The LQO controller and noisy plant controller in (9.17) has the extra term +KfDKr.
Exercise 9.2 For the plant and LQG coat roller arrangement of Figure 9.4, show that the closed-loop
dynamics are described by
We will now give the equations necessary to find the optimal state feedback matrix K,. and
the Kalman filter. d z 1 — .4—BK,. BK,. 1 1+11 0
Optimal state feedback. The LQR problem, where all the states are known, is the dl x—Lj — 0 A—K1Cj a—U [I —I~ w,~
deterministic initial value problem: given the system ± = Ax + Bu with a non-zero initial This shows that the closed-loop poles are simply the union of the poles of the deterministic LQR system
state x(O), find the input signal u(t) which takes the system to the zero state (a = 0) in an (eigenvalues of A — BK,.) and the poles of the Kaln,an filter (eigenvalues of.4 — K1C). It is exactly
optimal manner, i.e. by minimizing the deterministic cost as we would have expectedfrom the separation theore,n.
The optimal solution (for any initial state) is u(t) = ~Krx(t), where
p
Kr = R_1BTX (9.12)
and X = XT ≥ 0 is the unique positive semi-definite solution of the algebraic Riccati
equation
ATX+XA_XBJr1BTX+Q=0 (9.13)
Kalman filter. The Kalman filter has the structure of an ordinary state estimator or
observer, as shown in Figure 9.4, with
Figure 9.5: LQG controller with integral action and reference input
x = AL + Bu + Kf (y — CL) (9.14)
The optimal choice of Kf, which minimizes E {[x — LIT [a — L]}. is given by For the LQG controller, as shown in Figure 9.4, it is not easy to see where to position
the reference input r, and how integral action may be included, if desired. One strategy is
Kf = YCT (9.15) illustrated in Figure 9.5. Here the control error r y is integrated and the regulator K,. is
—
where Y = YT ≥ 0 is the unique positive semi-definite solution of the algebraic Riccati designed for the plant augmented with the integrator states.
equation Example 9.1 LQG design with integral action for inverse response process. The standa,-d LQG
YAT + AY - YCTV~~CY + W = 0 (9 16) design p;vcedure does not give a controller with integral action, so we will use the setup in Figure 9.5
348 MULTIVARIABLE FEEDBACK CONTROL CONTROLLER DESIGN 349
and augment the plant C(s) with an integrator before designing the state feedback regulator The plant (loop shaping). However note that the loop-shaping controller is a one degree-of-freedom contivller
is the SISO inverse response process C(s) = (tl~1~l) in (2.31), which was studied extensively whereas the LQG controller is actually a two degrees-of-freedom controller (Klqg2 in Table 9.1). As
seen from Figure 9.5, the reference change is not sent directly to the Kalman filter and this avoids the
in Chapter 2. For the objective function J = f(xTQX + UTRU)dt, we choose Q such that only the
derivative or proportional kick “. For our specific example, the step response (not shown) of the one
integrated state u — r is weighted, and we choose the input weight R = 1. (Only the ratio between
degree-of-freedom LQG controller a = KLQG (r — y) (Klqg in Table 9.1) is significantly worse with
Q and R matters and i-educing R yields a faster response.) The Kalman filter is set up so that we do an overshoot in y of about 40%. This large ove,-shoot translates into much poorer robustness margins
not estimate the integrated states. For the noise weights we select W = wi (process noise directly on
the states) with w = 1, and we choose V = 1 (measurement noise). (Only the ratio between w and V for LQG than the loop-shaping design; see Exercise 9.4. Also, the disturbance rejection is poorer for
matters and reducing V yields a faster response.) The Matlab file in Table 9.1 was used to design the the LQG design.
LQG controller; The resulting closed-loop response is shown in Figure 9.6. The response is good and Exercise 9.3 Derive the equations used in the Matlab file in Table 9.1.
very similar to the loop-shaping design in Figure 2.20 (page 45).
Exercise 9.4 Compare the robustness of the loop-shaping and LQG designs in Examples 2.6.2
and 9.1, respectively by computing the gain and phase margins (GM and PM) and the sensitivity peaks
Table 9.1: Matlab commands to generate LQG controller in Example 9.1 (fyi3 and MT). (Note that robustness is given by the feedback loop, and is thus the same for the LQG
% Uses the control toolbox
0 = tf t3 [—2 11, conv( (5 13,110 11) % inverse response process controllers Klqg and Klqg2 defined in Table 9.1.)
(a,b,c,d) — ssdata(G);
% Model dimensions: Solution: GM PM Ms Mw
p = size(c,1); % no. of outputs (y) Loop-shaping 2.92 5j90 111 1.75
[nm] = size(b); no. of states and inputs (u)
Znm=zeros(n,n); z,mn=zeros(m,m);
LQG 1.83 374° 1.63 2.39
Znn=zeros(n,n); Zmn=zeros(m,n);
% 1) Design state feedback regulator
A = (a Znm;-c Zion]; B = (b;-d]; % augment plant with integrators
weight on integrated error
9.2.2 Robustness properties
o=tZnn Znrn;Zmn eye(m,m)]; %
R=eye In); % input weight
Kr=lqr (A, 3, Q, a) % optimal state-feedback regulator
For an LQG-controlled system with a combined Kalman filter and LQR control law there
Krp=Kr(l:n,1:n);Kri=Krtl:m,n+l:n+m); % extract integrator and state feedbacks are no guaranteed stability margins. This was brought starkly to the attention of the control
% 2) Design Kalman filter % dont estimate integrator states
Enoise = eye(n) % process noise model (Gd)
community by Doyle (1978) (in a paper entitled “Guaranteed Margins for LQG Regulators”
w = eye(n); V — 1*eye(n); % process and measurement noise weight with a very compact abstract which simply states “There are none”). He showed, by example,
Estss = ss(a,[b mnoise],c,(0 0 OH;
[Mess. Me) = kalmen(Estss,W,V) % Salman filter gain
that there exist LQG combinations with arbitrarily small-gain margins.
% 3) Form overall controller However, for an LQR-controlled system (i.e. assuming all the states are available and no
Ac=(Zsin Znn;_b*Kri a_b*Krp_Ke*cl; % integrators included
ncr [eyetn); son); scy [-eye(m); Me];
stochastic inputs) it is well known (Kalman, 1964; Safonov and Athans, 1977) that, if the
cc = [-Mn -Krp]; ocr = Zn’sn; Ocy = Zion; weight fl is chosen to be diagonal, the sensitivity function 8 = (I + K,. (sI Af’ B)—’ —
Klgg2 = sstAc, 13cr Bcy],cc,(ocr Dcyfl; % Final 2-nor controller from In y]’ to u
Klgg = sstAc,-Bcy,cc,-Dcy); % Feedback part of controller from -y to u
satisfies the Kalman inequality
% Simulation U(8(jw)) ≤ 1, Via (9.18)
sysi = feedbackto’Klqg,l); step(sysl,50); % 1-DOF simulation
sys = feedback(G*Klgg2,l,2,l,+l); % 2-DOF simulation From this it can be shown that the system will have a gain margin equal to infinity, a gain
sys2 sys*[l; 01; hold; step(sys2,5O);
reduction margin (lower gain margin) equal to 0.5, and a (minimum) phase margin of 60° in
each plant input control channel. This means that in the LQR-controlled system it =
{ }
a complex perturbation diag k1e~0~ can be introduced at the plant inputs without causing
instability provided
2
1,5 -
(i)O~=0and0.5<k~<oo,i=1,2 in
y (t)
or
0.5 u(t) (ii) it1 = land OH ≤ 60°,i = 1,2 in
0
where in is the number of plant inputs. For a single-input plant, the above shows that the
—0,5
0
-
5 10 15 20 25 30 35 40 45 50 Nyquist diagram of the open-loop regulator transfer function Kr(sI A)’B will always —
Time [sec] lie outside the unit circle with centre —1. This was first shown by Kalman (1964), and is
illustrated in Example 9.2 below.
Figure 9.6: LQO design (Klqg2 in Table 9.1) for inverse response process. Closed-loop response to
unit step in reference r. Example 9.2 LQR design of a first-order process. Consider afirst-order process C(s) = I/(s — a)
with the state-space realization
Remark. We just noted the similarity between the responses in Figure 9.6 (LQG) and Figure 2.20 th(t) = ax(t) + u(t), y(t) = x(t)
CONTROLLER DESIGN 351
350 MULTIVARIABLE FEEDBACK CONTROL
so that the state is dii-ectly measured. For a non-zero initial state the cost function to be ,ninimized is
Jr = + Ru2)dt
aX+Xa-X1C’X+lO ~ X2—2aRX-R0 3
which, since X ≥ 0, gives X = aR + ~/~R)2 + B. The optimal control is given by it = KrX
wherefrom (9.12)
Kr = X/R = a + ~Ja2 + hR
and we get the closed-loop system
I = ax + u = ~ + 1/Bc
The closed-loop pole is located at $ = — ,,/~i/R < 0. Thus, the root locus for the optimal closed-
loop pole with respect to B starts at a = —lal for B = ~ (infinite weight on the input) and moves to
—~ along the teal axis as R appi-oaches zero. Note that the toot locus is identical for stable (a < 0)
and unstable (a > 0) plants G(s) with the same value of al. In particular for a > 0 we see that the
minimum input energy needed to stabilize the plant (corresponding to B = ~) is obtained with the Controller, ICLQQ(s)
input it = —2 aIx, which i,,oves the pole from a = a to its mirror image at a = —a.
For B small (“cheap control”) the gain crossover frequency of the loop transfer function L = Figure 9.7: LQG-controlled plant
GKr = Kr/($ — a) is given approximately by w~ ~/i7~. Note also that L(jw) has a
roll-off of —1 (—20 dB/decade) at high frequencies, which is a general property of LQR designs.
Furthermore, the Nyquist plot of LOw) avoids the unit disc centred on the critical point —1. Le.
IS(jw)l = 1/li + L(jw)J ≤ hat all frequencies. This is obvious for the stable plant with a < 0 Remark. La(s) and L4(s) are surprisingly simple. For La(s) the reason is that after opening the loop
since Kr > 0 and the,, the phase of LOw) varies from 0° (at zero frequency) to —90° (at infinite at point 3 the error dynamics (point 4) of the Kalman filter are not excited by the plant inputs; in fact
frequency). The surprise is that it is also true for the unstable plant with a > 0 even though the phase they are uncontrollable from it.
of L(jw) varies from —180° to —90°.
Exercise 9.5 Derive the expressions for L1 (a), L2 (a), L3 (s) and L4 (a), and explain why L4 (a) (like
Consider now the Kalman filter shown earlier in Figure 9.4. Notice that it is itself a feedback L, (a)) has such a simple form.
system. Arguments dual to those employed for the LQR-controlled system can then be used
to show that, if the power spectral density matrix V is chosen to be diagonal, then at the At points 3 and 4 we have the guaranteed robustness properties of the LQR system and
input to the Kalman gain matrix Kf there will be an infinite gain margin, a gain reduction the Kalman filter respectively. But at the actual input and output of the plant (points I
margin of 0.5 and a minimum-phase margin of 60°. Consequently, for a single-output plant, and 2) where we are most interested in achieving good stability margins, we have complex
the Nyquist diagram of the open-loop filter transfer function C(sI A)’ Kf will lie outside — transfer functions which in general give no guarantees of satisfactory robustness properties.
the unit circle with centre at —1. Notice also that points 3 and 4 are effectively inside the LQG controller which has to be
An LQR-controlled system has good stability margins at the plant inputs, and a Kalman implemented, most likely as software, and so we have good stability margins where they are
filter has good stability margins at the inputs to Kf, so why are there no guarantees for LQG not really needed and no guarantees where they are.
control? To answer this, consider the LQG controller arrangement shown in Figure 9.7. The Fortunately, for a minimum-phase plant procedures developed by Kwakemaak (1969) and
loop transfer functions associated with the labelled points I to 4 are respectively Doyle and Stein (1979; 1981) show how, by a suitable choice of parameters, either L1 (a) can
be made to tend asymptotically to La(s) or L2(s) can be made to approach L4(s). These
L, (a) = K,.[~(s)-’ + BIG + KfC]~’ ICjC~(s)B procedures are considered next.
= —ICLQG(s)G(s) (9.19)
L2(s) = —G(s)KLQG(s) (9.20)
9.2.3 Loop transfer recovery (LTR) procedures
La(s) = 1Cr4(s)B (regulator transfer function) (9.21)
(9.22) For full details of the recovery procedures, we refer the reader to the or ginal communications
L4(s) = C4’(s)ICj (filter transfer function) (Kwakernaak, 1969; Doyle and Stein, 1979; Doyle and Stein, 1981) or to the tutorial paper
where by Stein and Athans (1987). We will only give an outline of the major steps here, since we
~(s) ~ (si -
(9.23) will argue later that the procedures are somewhat limited for practical control system design.
For a more recent appraisal of LTR, we recommend the Special Issue of the International
KLQG (a) is as in (9.17) and C(s) = C4(s)B is the plant model.
352 MULTI VARIABLE FEEDBACK CONTROL CONTROLLER DESIGN 353
Journal of Robust and Nonlinear control, edited by Niemann and Stoustrup (1995). As the R~ theory developed, however, the two approaches of 912 and 9-1~ control were
The LQG loop transfer function L2(s) can be made to approach C~(s)Kf, with its seen to be more closely related than originally thought, particularly in the solution process;
guaranteed stability margins, if K~ in the LQR problem is designed to be large using the see for example Glover and Doyle (1988) and Doyle et al. (1989). In this section, we will
sensitivity recovery procedure of Kwakernaak (1969). It is necessary to assume that the plant begin with a general control problem formulation into which we can cast all 9-12 and 9-1~
model 0(s) is minimum-phase and that it has at least as many inputs as outputs. optimizations of practical interest. The general 9-12 and ~ problems will be described
Alternatively, the LQG loop transfer function L1(s) can be made to approach K~4’(s)B along with some specific and typical control problems. It is not our intention to describe
by designing Kf in the Kalman filter to be large using the robustness recovery procedure in detail the mathematical solutions, since efficient, commercial software for solving such
of Doyle and Stein (1979). Again, it is necessary to assume that the plant model 0(s) is problems is readily available. Rather we seek to provide an understanding of some useful
minimum-phase, but this time it must have at least as many outputs as inputs. problem formulations, which might then be used by the reader, or modified to suit his or her
The procedures are dual and therefore we will only consider recovering robustness at the application.
plant output. That is, we aim to make L2(s) = G(s)ICLQG(s) approximately equal to the
Kalman filter transfer function C1’(s)Kj.
First, we design a Kalman filter whose transfer function C~(s)Kj is desirable. This is
9.3.1 General control problem formulation
done, in an iterative fashion, by choosing the power spectral density matrices H~ and V so There are many ways in which feedback design problems can be cast as 9-12 and 9t~
that the minimum singular value of C4(s)Kf is large enough at low frequencies for good optimization problems. It is very useful therefore to have a standard problem formulation into
performance and its maximum singular value is small enough at high frequencies for robust which any particular problem may be manipulated. Such a general formulation is afforded by
stability, as discussed in Section 9.1. Notice that 14~ and V are being used here as design the general configuration shown in Figure 9.8 and discussed earlier in Chapter 3. The system
parameters and their associated stochastic processes are considered to be fictitious. In tuning
J47 and V we should be careful to choose V as diagonal and 14’ = (BS)(BS)T, where S is
a scaling matrix which can be used to balance, raise, or lower the singular values. When the
singular values of C~ (s)Kf are thought to be satisfactory, loop transfer recovery is achieved
by designing K~ in an LQR problem with Q = 0T0 and R = p1, where p is a scalar. As p
tends to zero G(8)KLQG tends to the desired loop transfer function C4(s)Kf.
Much has been written on the use of LTR procedures in multivariable control system
design. But as methods for multivariable loop shaping they are limited in their applicability
and sometimes difficult to use. Their main limitation is to minimum-phase plants. This is Figure 9.8: General control configuration
because the recovery procedures work by cancelling the plant zeros, and a cancelled non-
minimum-phase zero would lead to instability. The cancellation of lightly damped zeros is
also of concern because of undesirable oscillations at these modes during transients. A further of Figure 9.8 is described by
disadvantage is that the limiting process (p -~ 0) which brings about full recovery also
introduces high gains which may cause problems with unmodelled dynamics. Because of the a Wi FP11(s) P12(s) to
= F(s)[ 1= (9.24)
above disadvantages, the recovery procedures are not usually taken to their limits (p -4 0) to V U j [i’2i~s) P22(s) it
achieve full recovery, but rather a set of designs is obtained (for small p) and an acceptable it = K(s)v (9.25)
design is selected. The result is a somewhat ad-hoc design procedure in which the singular
values of a loop transfer function, G(8)KLQO (s) or KLQCJ (s)G(s), are indirectly shaped. A with a state-space realization of the generalized plant P given by
more direct and intuitively appealing method for multivariable loop shaping will be given in A B1 B2
Section 9.4. F~ Ci Dii (9.26)
02 D21 D22
The signals are: u the control variables, v the measured variables, to the exogenous signals
9.3 ?-t2 and ?-t~ control
such as disturbances zvd and commands r, and z the so-called “error” signals which are to be
minimized in some sense to meet the control objectives. As shown in (3.114) the closed-loop
Motivated by the shortcomings of LQG control, there was a significant shift in the 1980’s
transfer function from to to z is given by the linear fractional transformation
towards 9~ optimization for robust control. This development originated from the influential
work of Zanies (1981), although an earlier use of 9~tc~, optimization in an engineering context a = F,(P.K)w (9.27)
can be found in Helton (1976). Zames argued that the poor robustness properties of LQG
could be attributed to the integral criterion in terms of the 912 norm, and he also criticized where
the representation of uncertain disturbances by white noise processes as often unrealistic. F,(P, K) = P~ + P12K(I — P22K)’P21 (9.28)
CONTROLLER DESIGN 355
354 MULTIVARIABLE FEEDBACK CONTROL
Whilst the above assumptions may appear daunting, most sensibly posed control problems
7-12and 7i~ control involve the minimization of the 9-12 and R~ norms of F) (F, K)
will meet them. Therefore, if the software (e.g. the Robust Control toolbox of Matlab)
respectively. We will consider each of them in turn.
complains, then it probably means that your control problem is not well formulated and you
First some remarks about the algorithms used to solve such problems. The most general,
should think again.
widely available and widely used algorithms for 7-12 and 7-t~~ control problems are based
Lastly, it should be said that 7-1~~ algorithms, in general, find a suboptimal controller. That
on the state-space solutions in Glover and Doyle (1988) and Doyle et al. (1989). It is worth
is, for a specified 7 a stabilizing controller is found for which I~} (F, K) < ~ If an
~.
mentioning again that the similarities between 9-12 and 7-L~ theory are most clearly evident in
optimal controller is required then the algorithm can be used iteratively, reducing 7 until the
the aforementioned algorithms. For example, both 9-12 and 7-1~ require the solutions to two
minimum is reached within a given tolerance. In general, to find an optimal 7i~ controller
Riccati equations, they both give controllers of state dimension equal to that of the generalized
is numerically and theoretically complicated. This contrasts significantly with 7-12 theory, in
plant F, and they both exhibit a separation structure in the controller already seen in LQG
which the optimal controller is unique and can be found from the solution of just two Riccati
control. An algorithm for 7-1~ control problems is summarized in Section 9.3.4.
equations.
The following assumptions are typically made in 7-12 and 7-L~ problems:
(A4) [A —jwI
Dnj
B11
I has full row rank for all w.
IIF(s)112 =
V
~ f tr [F(jw)F(jw)H] &~; F ~ F,(F, K) (9.29)
D21j
For a particular problem the generalized plant F will include the plant model, the
(A5) D11 = 0 and fl22 = 0. interconnection structure, and the designer-specified weighting functions. This is illustrated
Assumption (Al) is required for the existence of stabilizing controllers K, and assumption for the LQG problem in the next subsection.
(A2) is sufficient to ensure the controllers are proper and hence realizable. Assumptions As discussed in Section 4.10.1 and noted in Tables A.5.7 and A.5.7 on page 540, the 7-12
norm can be given different deterministic interpretations. It also has the following stochastic
(A3) and (A4) ensure that the optimal controller does not try to cancel poles or zeros
interpretation. Suppose in the general control configuration that the exogenous input w is
on the imaginary axis which would result in closed-loop instability. Assumption (AS) is
conventional in 7-12 control. D11 = 0 makes P11 strictly proper. Recall that 7-12 is the white noise of unit intensity. That is,
set of strictly proper stable transfer functions. The assumption D22 = 0 simplifies the E {w(t)wQr)T} Iö(t — r) (9.30)
formulae in the 7-12 algorithms and is made without loss of generality, since a substitution
I~D = K(I + D22K)’ gives the controller, when D22 ~ 0 (Zhou et al., 1996, p. 317). The expected power in the error signal z is then given by
In 7j~, neither D11 = 0, nor = 0, are required but they do significantly simplify the
algorithm formulae. If they are not zero, an equivalent 7~ problem can be constructed in
which they are; see Safonov et al. (1989) and Green and Limebeer (1995). For simplicity, it
is also sometimes assumed that D12 and D21 are given by
E{7lirnO~
f z(t)Tz(t)dt}
(9.31)
—
tr F {z(t)z(t)T}
— I
1.~ tr [F(jw)F(jw)H] dw
This can be achieved, without loss of generality, by a scaling of u and v and a unitary (by Parseval’s theorem)
transformation of to and z; see for example Maciejowski (1989). In addition, for simplicity
of exposition, the following additional assumptions are sometimes made:
= IFIl~ = IlF)(F,K)II~ (9.32)
(A7) D~C1 = 0 and B1D~j = 0. Thus, by minimizing the 9-12 norm, the output (or error) power of the generalized system, due
to a unit intensity white noise input, is minimized; we are minimizing the root-mean-square
(AS) (A, B1) is stabilizable and (A, C1) is detectable. (rms) value of z.
Assumption (A7) is common in 7-12 control, e.g. in LQG where there are no cross-terms in the
cost function (Df~C1 = 0), and the process noise and measurement noise are uncorrelated
(B1D~ = 0). Notice that if (A7) holds then (A3) and (A4) may be replaced by (A8).
4-
± = Ax+Bu+wd (9.33)
(9.34)
1~ =
where
Elj [w4t) 1
[w,~(t)j
[wd(r)T w~(r)T ~ FW
lh[o °] ó(t
V
- r) (9.35)
it 11
(9.36) Figure 9.9: The LQC problem formulated in the general control configuration
J=E1 urn ~ IT}
T-4oo TJ0
As discussed in Section 4.10.2 the 7-l~~ norm has several interpretations in terms of
and represent the stochastic inputs Wd, io,1 as
performance. One is that it minimizes the peak of the maximum singular value of
F) (POw), IC(jw)). It also has a time domain interpretation as the induced (worst-case) 2-
Fw~ 01 (9.38)
Iwdi
[tonj [a V~j
~,,
norm. Letz = F,(P,K)w, then
where w is a white noise process of unit intensity. Then the LQG cost function is II14ll1’,1’~DWcc =
Iz(t)K2
ma~ IIw(t)I~2 (9.43)
w(t)~O
T
lim I
J=EI~T-+coTJ0 z(t)Tz(t)dt~
j
= IIF,(P,K)II~ (9.39) where IIz(t)W2 = V1n10 Z~ Iz~(t)I2dt is the 2-norm of the vector signal.
In practice, it is usually not necessary to obtain an optimal controller for the 7’lc.~ problem,
where and it is often computationally (and theoretically) simpler to design a suboptimal one (i.e.
z(s) = F, (P, K)w(s) (9.40) one close to the optimal ones in the sense of the 7-t~ norm). Let 7m1,, be the minimum value
of UP) (F, K) j~ over all stabilizing controllers K. Then the 9L~ suboptimal control problem
and the generalized plant P is given by is: given a 7 > 7mm, find all stabilizing controllers K such that
A W~ 0~B 11F1(P,I()1100 <7
P11 P12]s — 0 0 0 (9.41) This can be solved efficiently using the algorithm of Doyle et al. (1989), and by reducing 7
F21 P22] 0 0 0 iteratively, an optimal solution is approached. The algorithm is summarized below with all
C 0 v~j 0 the simplifying assumptions.
General R~ algorithm. For the general control configuration of Figure 9.8 described
The above formulation of the LQG problem is illustrated in the general setting in Figure 9.9. by (9.24)’—(9.26), with assumptions (Al) to (AS) in Section 9.3.1, there exLcts a stabilizing
With the standard assumptions for the LQG problem, application of the general ‘N2 controller K(s) such that hF, (F, K)hh~ < ~ if and only if
formulae (Doyle et al., 1989) to this formulation gives the familiar LQG optimal controller
(i) X~ > 0 is a solution to the algebraic Riccati equation
as in (9.17).
ATX00 + XocA + C[C1 + Xoo(7’2B1Br - B2BflX03 =0 (9.44)
358 MULTIVARIABLE FEEDBACK CONTROL CONTROLLER DESIGN 359
such that Re .A~ [A + (f2B1B[ — B2BT)X~] <0, Vi; and of exogenous input signals. The latter might include the outputs of perturbations representing
uncertainty, as well as the usual disturbances, noise and command signals. Both of these two
(ii) Y,~ > 0 is a solution to the algebraic Riccati equation
approaches will be considered again in the remainder of this section. In each case we will
examine a particular problem and formulate it in the general control configuration.
AY~ + Y~OAT + J31B[ + Y~(72G”C1 - CTC2)Y~ =0 (9.45)
A difficulty that sometimes arises with 7-1~ control is the selection of weights such that the
such that Re A~ [A + Y~(72CTC1 — GIG2)] < 0, Vi; and 7-L~ optimal controller provides a good trade-off between conflicting objectives in various
frequency ranges. Thus, for practical designs it is sometimes recommended to perform only
(iii) p(X~~Y~.3) <72 a few iterations of the 7-L~ algorithm. The justification for this is that the initial design, after
one iteration, is similar to an 9-12 design which does trade off over various frequency ranges.
All such controllers are then given by K = F) (K0, Q) where Therefore stopping the iterations before the optimal value is achieved gives the design an 7-12
flavour which may be desirable.
A0~ ~Z~cLco Z00B2
IC~(s)~ F~ 0 1 (9.46)
—C2 I 0 9.3.5 Mixed-sensitivity 7i~ control
= -BTX,~, L~ = -Y~cI, Z~ = (I - 72Y,~X~’ (9.47) Mixed-sensitivity is the name given to transfer function shaping problems in which the
sensitivity function S = (1 + GIC)~1 is shaped along with one or more other closed-loop
A0.3 = A + y2B1BTX~ + B2Fcx, + ZcoL00C2 (9.48) transfer functions such as KS or the complementary sensitivity function T I S. Earlier —
and Q(s) is any stable proper transfer function such that IQII~ < ~. For Q(s) = 0, we get in this chapter, by examining a typical one degree-of-freedom configuration, Figure 9.1, we
saw quite clearly the importance of 5, KS and T.
IC(s) = K0~,(s) = —F~,jsI — AOC)’ZCOLQO (9.49) Suppose, therefore, that we have a regulation problem in which we want to reject a
disturbance d entering at the plant output and it is assumed that the measurement noise is
This is called the “central” controller and has the same number of states as the generalized relatively insignificant. Tracking is not an issue and therefore for this problem it makes sense
plant F(s). The central controller can be separated into a state estimator (observer) of the to shape the closed-loop transfer functions S and KS in a one degree-of-freedom setting.
form Recall that S is the transfer function between d and the output, and KS the transfer function
= Ax~ + B~ 7-2BrX~+B2u + Z~L~(C2?I y) (9.50)
— between d and the control signals. It is important to include KS as a mechanism for limiting
tOwar,t
the size and bandwidth of the controller, and hence the control energy used. The size of KS
is also important for robust stability with respect to uncertainty modelled as additive plant
and a state feedback perturbations; see (8.53) on page 303.
(9.51)
The disturbance d is typically a low-frequency signal, and therefore it will be successfully
Upon comparing the observer in (9.50) with the Kalman filter in (9.14) we see that it contains rejected if the maximum singular value of S is made small over the same low frequencies.
an additional term BifiJ~vorst, where iDworst can be interpreted as an estimate of the worst-case To do this we could select a scalar low-pass filter to1 (s) with a bandwidth equal to that of
disturbance (exogenous input). Note that for the special case of 7-t~ loop shaping this extra the disturbance, and then find a stabilizing controller that minimizes IwiSjI~. This cost
term is not present. This is discussed in Section 9.4.4. function alone is not very practical. It focuses on just one closed-loop transfer function and
7-iteration. If we desire a controller that achieves 7nth,, to within a specified tolerance, for plants without RHP-zeros the optimal controller has infinite gains. In the presence of a
then we can perform a bisection on 7 until its value is sufficiently accurate. The above result non-minimum-phase zero, the stability requirement will indirectly limit the controller gains,
provides a test for each value of 7 to determine whether it is less than ~ or greater than but it is far more useful in practice to minimize
7mm
Given all the assumptions (Al) to (A8), the above is the most simple form of the general
R~ algorithm. For the more general situation, where some of the assumptions are relaxed,
the reader is referred to the original source (Glover and Doyle, 1988). In practice, we would
[:~~] ~ (9.52)
where w~(s) is a scalar high-pass filter with a crossover frequency approximately equal to
expect a user to have access to commercial software such as Matlab and its toolboxes.
that of the desired closed-loop bandwidth.
In Section 2.8, we distinguished between two methodologies for 9-t~.~ controller design:
In general, the scalar weighting functions wi(s) and w2(s) can be replaced by matrices
the transfer function shaping approach and the signal-based approach. In the former, 7-(~
optimization is used to shape the singular values of specified transfer functions over Wt (s) and ~~‘2 (s). This can be useful for systems with channels of quite different bandwidths
when diagonal weights are recommended, but anything more complicated is usually not worth
frequency. The maximum singular values are relatively easy to shape by forcing them to
the effort.
lie below user-defined bounds, thereby ensuring desirable bandwidths and roll-off rates. In
the signal-based approach, we seek to minimize the energy in certain error signals given a set Remark. Note that we have outlined here an alternative way of selecting the weights from that in
-~
Example 2.17 and Section 3.5.7. There TV1 = TVp was selected with a crossover frequency equal to P
that of the desired closed-loop bandwidth and W2 = PT7,~ was selected as a constant, usually W0 = I. z’
To see how this mixed-sensitivity problem can be formulated in the general setting, we
can imagine the disturbance d as a single exogenous input and define an error signal
Z2 J
z = [4’ zT]T, where z1 = W~y and z2 = —W,u, as illustrated in Figure 9.10. It is
P
iv =
not difficult from Figure 9.10 to show that z1 = ~ Sw and z2 = T472KSw as required, and The ability to shape T is desirable for tracking problems and noise attenuation. It is
to determine the elements of the generalized plant P as also important for robust stability with respect to multiplicative perturbations at the plant
P__wi
— 0
-— W1G
(9.53)
output. The 5/2’ mixed-sensitivity minimization problem can be put into the standard control
configuration as shown in Figure 9.12. The elements of the corresponding generalized plant
P21=—I P
where the partitioning is such that iv =
zi ~Z2J
P’1 P12 iv
ur
= (9.54)
P21 P22 u
V
and
F,(P,K) = L TY~KS
w1s 1
j (9.55)
I
r
362 MULTIVARIABLE FEEDBACK CONTROL CONTROLLER DESIGN 363
The shaping of closed-loop transfer functions as described above with the “stacked” cost
functions becomes difficult with more than two functions. With two, the process is relatively
easy. The bandwidth requirements on each are usually complementary and simple, stable, frequency, or when we consider the induced 2-norm between the exogenous input signals
low-pass and high-pass filters are sufficient to carry out the required shaping and trade-offs. and the error signals, we are required to minimize the 9~l,, norm. In the absence of model
We stress that the weights i’V~ in mixed-sensitivity W00 optimal control must all be stable. If uncertainty, there does not appear to be an overwhelming case for using the 7t~ norm rather
they are not, assumption (Al) in Section 9.3,1 is not satisfied, and the general 9-L~ algorithm than the more traditional 9i2 norm. However, when uncertainty is addressed, as it always
is not applicable. Therefore if we wish, for example, to emphasize the minimization of S should be, ‘N~ is clearly the more natural approach using component uncertainty models as
at low frequencies by weighting with a term including integral action, we would have to in Figure 9.13.
approximate ~ by 4~, where e << 1. This is exactly what was done in Example 2,17.
Similarly one might be interested in weighting KS with a non-proper weight to ensure that K
is small outside the system bandwidth. But the standard assumptions preclude such a weight.
The trick here is to replace a non-proper term such as (1 + rj s) by (1 + ‘r1 s)/(1 + ma) where
r2 << r1. A useful discussion of the tricks involved in using “unstable” and “non-proper”
weights in 9’t~ control can be found in Meinsma (1995).
For more complex problems, information might be given about several exogenous signals
in addition to a variety of signals to be minimized and classes of plant perturbations to be z2
robust against. In this case, the mixed-sensitivity approach is not general enough and we are
forced to look at more advanced techniques such as the signal-based approach considered
next.
controller K such that the 9-La, norm of the transfer function between w and z is less than (1990). It is essentially a two-stage design process. First, the open-loop plant is augmented
1 for all A, where IAJ~ < 1. We have assumed in this statement that the signal weights by pre- and post-compensators to give a desired shape to the singular values of the open-loop
have normalized the 2-norm of the exogenous input signals to unity. This problem is a non frequency response. This could be based on an initial controller design. Then the resulting
standard 7~t~~0 optimization. It is a robust performance problem for which the p-synthesis shaped plant (initial loop shape) is robustly stabilized (“robustified”) with respect to the quite
procedure, outlined in Chapter 8, can be applied. Mathematically, we require the structured general class of coprime factor uncertainty using 9L~ optimization.
An important advantage is that no problem-dependent uncertainty modelling, or weight
selection, is required in this second step.
We will begin the section with a description of the 7-t~ robust stabilization problem (Clover
and McFarlane, 1989). This is a particularly nice problem because it does not require 7-
iteration for its solution, and explicit formulae for the corresponding controllers are available.
The formulae are relatively simple and so will be presented in full.
Following this, a step-by-step procedure for R~ loop-shaping design is presented. This
systematic procedure has its origin in the PhD thesis of Hyde (1991) and has since been
successfully applied to several industrial problems. The procedure synthesizes what is in
effect a single degree-of-freedom controller. This can be a limitation if there are stringent
requirements on command following. However, as shown by Limebeer et al. (1993), the
procedure can be extended by introducing a second degree of freedom in the controller
and formulating a standard 9-t~ optimization problem which allows one to trade off robust
Figure 9.15: An 7icc robust performance problem
stabilization against closed-loop model matching. We will describe this two degrees-of-
freedom extension and further show that such controllers have a special observer-based
singular value structure which can be taken advantage of in controller implementation.
p(IvI(jw)) <l,Vw (9.60)
G = M’N (9.62)
9.4 ?-t~ loop-shaping design where we have dropped the subscript from M and N for simplicity. A perturbed plant model
can then be written as
The loop-shaping design procedure described in this section is based on 7-L, robust
stabilization combined with classical loop shaping, as proposed by McFarlane and Clover (M + Akf)1(N + AN) (9.63)
MULTIVARIABLE FEEDBACK CONTROL CONTROLLER DESIGN 367
366
A controller (the “central” controller in McFarlane and Glover (1990)) which guarantees
that
The Matlab function coprimeunc, listed in Table 9.2, can be used to generate the controller
where &~i, /.~N are stable unknown transfer functions which represent the uncertainty in in (9.70). It is important to emphasize that since we can compute 7m1,, from (9.66) we get
the nominal plant model G. The objective of robust stabilization it to stabilize not only the an explicit solution by solving just two Riccati equations (care) and avoid the 7-iteration
nominal model G, but a family of perturbed plants defined by needed to solve the general ?LCo problem.
Notice that 7K is the ?-t~ norm from 4’ to [~j and (I — Gid)~ is the sensitivity function
S
R
=
=
eye(sizetd*dH+d*d;
eye(siZe(d*dfl+d*d;
amy = inv(R);Sinv*inv(S);
for this positive feedback arrangement. Al = (a_b*Smnv*d*c); ml = 8; Bl = b; Ql = c*Rinv*c;
[X,XAI1P,G) = care(Al,Bl,Ql,Rl);
The lowest achievable value of 71c and the corresponding maximum stability margin e are A2 Al’; Q2 b*Sinv*b*; 52 c’; R2
given by Glover and McFarlane (1989) as [Z,ZANP,G1 = care(A2,52,O2,R2);
% optimal ganmia
xx = X*Z; gar,m,in = sgrttl+max(eig(XZ)))
7mm
-‘
= 6max
= Ii II[N M]II~}
- T = (1 + p(XZ))~ (9.66) % Use higher gamma
pan = gamrel*gammin, gam2 = galn*gam; gamconst = (l-gam2) *eye(size(XZfl;
Lc = gemconst + XZ; Li inv(Lc’); yc ~Sinv*(d*c+b*2th
where I liii denotes Hankel norm, p denotes the spectral radius (maximum eigenvalue), Ac * a + b*Fc + garn2*Li
Sc * gam2*Li*Z*c;
(c+d*Fc);
and for a minimal state-space realization (A, B, C, D) of C, Z is the unique positive definite cc =
Dc = _______________________________________
solution to the algebraic Riccati equation
1
368 MIJLTIVARIABLE FEEDBACK CONTROL CONTROLLER DESIGN 369
U, V satisfy
IF[
Iii
-N~ 1+F
i L
~ ] = lIEN MIIIH (9.73)
The determination of U and V is a Nehari extension problem: that is, a problem in which an unstable
transfer function B(s) is approximated by a stable transfer function Q(s), such that Il-fl + QIIce is
minimized, the minimum being IIRiIss. A solution to this problem is given in Clover (1984).
Exercise 9.7 Formulate the ‘H00 ,-obust stabilization problem in the general control configuration
of Figu;-e 9.8, and determine a transfer function expression and a state-space realization for the 100
generalized plant P. Frequency [rod/si Time [secl
W1 C TV2 This yields a shaped plant C, = GB’, with a gain c,vssoverfrequency of 13.7 radJs, and the magnitude
of C, (flu) is shown by the dashed line in Figure 9.18(a). The response to a unit step in the disturbance
Exercise 9.8 Design an E~ loop-shaping controllerfor the disturbance process in (9.75) using the
*
frequencies, roll-off rates of approximately 20 dB/decade (a slope of about —1) at the
weight W1 in (9.76), i.e. generate plots corresponding to those in Figure 9.18. Next, repeat the design
desired bandwidth(s), with higher rates at high frequencies. Some trial and error is
with W1 = 2(s + 3)/s (which ,rsults in an initial G~ which would yield closed-loop instability with
involved here. W2 is usually chosen as a constant, reflecting the relative importance of
= 1). Gompute the gain and phase margins and compare the disturbance and reference responses.
hi both cases find w~ and use (2.45) to compute the maximum delay that can be tolerated in the plant the outputs to be controlled and the other measurements being fed back to the controller.
before instability arises. For example, if there are feedback measurements of two outputs to be controlled and a
velocity signal, then W2 might be chosen to be diag[1, 1,0.1], where 0.1 is in the velocity
Skill is required in the selection of the weights (pre- and post-compensators T4~~ and W2), signal channel. W~, contains the dynamic shaping. Integral action, for low-frequency
but experience on real applications has shown that robust controllers can be designed with performance; phase advance for reducing the roll-off rates at crossover; and phase lag
relatively little effort by following a few simple rules. An excellent illustration of this to increase the roll-off rates at high frequencies should all be placed in W~, if desired. The
is given in the thesis of Hyde (1991) who worked with Glover on the robust control of weights should be chosen so that no unstable hidden modes are created in G3.
VSTOL (Vertical and/or Short Take-Off and Landing) aircraft. Their work culminated in 4 Optional: Align the singular values at a desired bandwidth using a further constant weight
a successful flight test of 1-t~ loop-shaping control laws implemented on a Harrier jump- We cascaded with Wi,. This is effectively a constant decoupler and should not be used
jet research vehicle at the former UK Deknce Research Agency (now QinetiQ), Bedford, if the plant is ill-conditioned in terms of large RGA elements (see Section 6.10.4). The
in 1993. The ?-l~ loop-shaping procedure has also been extensively studied and worked on align algorithm of Kouvaritakis (1974) is recommended (see file align.m available at
by Postlethwaite and Walker (1992) in their work on advanced control of high-performance the book’s home page).
helicopters, also for the UK DRA (now QinetiQ) at Bedford. This application is discussed in 5 Optional: Introduce an additional gain matrix W9 cascaded with W~ to provide control
detail in the helicopter case study in Section 13.2. More recently, 7t~ loop shaping has been over actuator usage. W0 is diagonal and is adjusted so that actuator rate limits are not
tested in flight on a Bell 205 fly-by-wire helicopter; see Postlethwaite et al. (1999), Smerlas exceeded for reference demands and typical disturbances on the scaled plant outputs. This
et al. (2001), Prempain and Postlethwaite (2004), Postlethwaite et al. (2005). requires some trial and error.
Based on these, and other, studies, it is recommended that the following systematic 6 Robustly stabilize the shaped plant G3 = T’V2GW1, where TV1 = 147p1’VaWg, using the
procedure is followed when using ?-L~ loop shaping design: formulae of the previous section. First, calculate the maximum stability margin Cmax
1/7,ni,, If the margin is too small, C,nax < 0.25, then go back to step 4 and modify
1. Scale the plant outputs and inputs. This is very important for most design procedures and is the weights. Otherwise, select ~ > ~ by about 10%, and synthesize a suboptimal
sometimes forgotten. In general, scaling improves the conditioning of the design problem, controller using (9.70). There is usually no advantage to be gained by using the optimal
it enables meaningful analysis to be made of the robustness properties of the feedback controller. When Em~ > 0.25 (respectively 7mm < 4) the design is usually successful. In
system in the frequency domain, and for loop shaping it can simplify the selection of this case, at least 25% coprime factor uncertainty is allowed, and we also find that the shape
weights. There are a variety of methods available including normalization with respect to of the open-loop singular values will not have changed much after robust stabilization. A
the magnitude of the maximum or average value of the signal in question. Scaling with small value of Ciflax indicates that the chosen singular value 1oop shapes are incompatible
respect to maximum values is important if the controllability analysis of earlier chapters is with robust stability requirements. That the loop shapes do not change much following
to be used. However, if one is to go straight to a design the following variation has proved robust stabilization if ‘y is small (e large) is justified theoretically in McFarlane and Clover
useful in practice: (1990).
(a) The outputs are scaled such that equal magnitudes of cross-coupling into each of the 7. Analyze the design and if all the specifications are not met make further modifications to
outputs is equally undesirable. the weights.
(b) Each input is scaled by a given percentage (say 10%) of its expected range of operation. 8 Implement the controller. The configuration shown in Figure 9.19 has been found useful
That is, the inputs are scaled to reflect the relative actuator capabilities. An example of when compared with the conventional setup in Figure 9.1. This is because the references
this type of scaling is given in the aero-engine case study of Chapter 13.
2. Order the inputs and outputs so that the plant is as diagonal as possible. The relative gain
array can be useful here. The purpose of this pseudo-diagonalization is to ease the design
of the pre- and post-compensators which, for simplicity, will be chosen to be diagonal.
Next, we discuss the selection of weights to obtain the shaped plant G3 = W2GW1 where
3. Select the elements of diagonal pre- and post-compensators W~, and 14~2 so that the do not directly excite the dynamics of K3, which can result in large amounts of overshoot
singular values of W2GT47~ are desirable. This would normally mean high gain at low (classical derivative kick). The constant prefilter ensures a steady-state gain of 1 between
372 MULTIVARIABLE FEEDBACK CONTROL CONTROLLER DESIGN
r andy, assuming integral action in W1 or C. structure may not be sufficient. A general two degrees-of-freedom feedback control scheme
It has recently been shown (Glover et al., 2000) that the stability margin Cmax = 1/7~njn, is depicted in Figure 9.20. The commands and feedbacks enter the controller separately and
here defined in terms of coprime factor perturbations, can be interpreted in terms of are independently processed.
simultaneous gain and phase margins in all the plant’s inputs and outputs, when the R~ loop-
shaping weights J47~ and W2 are diagonal. The derivation of these margins is based on the
gap metric (Georgiou and Smith, 1990) and the v-gap metric (Vinnicombe, 1993) measures
for uncertainty. A discussion of these measures lies outside the scope of this book, but the
interested reader is referred to the excellent book on the subject by Vinnicombe (2001) and
the paper by Glover et al. (2000).
We will conclude this subsection with a summary of the advantages offered by the above Figure 9.20: General two degrees-of-freedom feedback control scheme
7-1~ loop-shaping design procedure:
o It is relatively easy to use, being based on classical loop-shaping ideas. The ?-L~ loop-shaping design procedure of McFarlane and Glover is a one degree-of-
o There exists a closed formula for the 7t~ optimal cost ~ which in turn corresponds to freedom design, although as we showed in Figure 9.19 a simple constant prefilter can easily
a maximum stability margin Cmax = 1/7mm. be implemented for steady-state accuracy. For many tracking problems, however, this will not
o No 7-iteration 15 required in the solution. be sufficient and a dynamic two degrees-of-freedom design is required. In Hoyle et al (1991)
o Except for special systems, ohes with all-pass factors, there are no pole—zero cancellations and Limebeer et al. (1993) a two degrees-of-freedom extension of the Glover—McFarlane
between the plant and controller (Sefton and Glover, 1990; Tsai et al., 1992). Pole—zero procedure was proposed to enhance the model-matching properties of the closed loop. With
cancellations are common in some other 7-l~ control problems, like the S/T-problem in this the feedback part of the controller is designed to meet robust stability and disturbance
(9.56), and are a problem when the plant has lightly damped modes. rejection requirements in a manner similar to the one degree-of-freedom loop-shaping design
procedure except that only a pre-compensator weight 1W is used. It is assumed that the
Exercise 9.9 First a definition and some useful properties. measured outputs and the outputs to be controlled are the same, although this assumption can
Definition: A stable transferfunction matrix H(s) is inner if H*H = I, and co-inner if HH3 = 1. be removed as shown later. An additional prefilter part of the controller is then introduced to
The operator H’ is defined asH’ (a) = HT(_s). force the response of the closed-loop system to follow that of a specified model, Tref, often
Pioperties: The W~ norm is invariant under right multiplication by a co—inner fun ction a,zd under
called the reference model. Both parts of the controller are synthesized by solving the design
left multiplication by an innerfunction.
Equipped with the above definition and properties, show for the shaped C~ = M1 N3 that the
problem illustrated in Figure 9.21.
matrix [It’13 N3 ] is co-inner and hence that the Nm loop-shaping cost function
is equivalent to
K3S3 KSSSCS
S S S
where Ss = (I — C3 IC)’. This shows that the problem offinding a stabilizing controller to minimize
the four-block cost function (9.79) has an exact solution.
Whilst it is highly desirable, from a computational point of view, to have exact solutions for Figure 9.21: Two degrees-of-freedom 71 loop-shaping design problem
9-L~ optimization problems, such problems are rare. We are fortunate that the above robust
stabilization problem is also one of great practical significance.
The design problem is to find the stabilizing controller K = [1(1 1(2] for the shaped
plant C,, = OW1, with a normalized coprime factorization C,, = M;’N,,, which minimizes
9.4.3 Two degrees-of-freedom controllers theRm norm of the transfer function between the signals [rT ~j,T fP and [u~’ ~T eT 1T
as defined in Figure 9.21. The problem is easily cast into the general control configuration
Many control design problems possess two degrees of freedom: on the one hand,
and solved suboptimally using standard methods and 7-iteration. We will show this later.
measurement or feedback signals; and on the other, commands or references. Sometimes,
The control signal to the shaped plant u,, is given by
one degree of freedom is left out of the design, and the controller is driven (for example) by
an error signal, i.e. the difference between a command and the output. But in cases where
stringent time domain specifications are set on the output response, a one degree-of-freedom = [K1 1(2] [~] (9.80)
374 MULTIVARIABLE FEEDBACK CONTROL CONTROLLER DESIGN 375
where K1 is the prefilter, K2 is the feedback controller, fi is the scaled reference, and y is the then P may be realized by
measured output. The purpose of the prefilter is to ensure that
0 0 (B8D~’ + ZSC[)R;112 B5
1(1 — G5K0)’G5K1 — TrefII~ ~ 7Th2 (9.81) ______ Ar Br 0 0
0 0 0 o I
Trer is the desired closed-loop transfer function selected by the designer to introduce time C. 0 0
1/2 p
(9.87)
domain specifications (desired response characteristics) into the design process; and p is a c —
P2Cr RV2 p138
scalar parameter that the designer can increase to place more emphasis on model matching in -~ -~- - -
and used in standard 7~t~ algorithms (Doyle et al., 1989) to synthesize the controller K. Note
U8
v
e
=
p(I
p2 [(I
—
K2CS)’I(1
G5K2)~’C81(1
— C8I(2)’G81(1 — Tref]
1(2(1
—
—
(I G8K2)-1M;’
p(I G5K2)’M;’
—
[~]
1’
that R5 = I+D5D~”, and Z8 is the unique positive definite solution to the generalized Riccati
equation (9.67) for G5. Matlab commands to synthesize the controller are given in Table 9.3.
(9.82)
In the optimization, the 7j~ norm of this block matrix transfer function is minimized.
Notice that the (1,2) and (2,2) blocks taken together are associated with robust stabilization Table 9.3: Matlab commands to synthesize the 7~ two degrees-of-freedom controller in
and the (3,1) block corresponds to model matching. In addition, the (1,1) and (2,1) blocks (9.80)
% Uses Robust control toolbox
help to limit actuator usage and the (3,2) block is linked to the performance of the loop. For
% INPUTS: Shaped plant Os
p = 0, the problem reverts to minimizing the 7-t~ norm of the transfer function between ~ Reference model Tref
and [uT ~TjT, namely, the robust stabilization problem, and the two degrees-of-freedom
controller reduces to an ordinary 7L~ loop-shaping controller. % OUTPUT: Two degrees-of-freedom controller K
To put the two degrees-of-freedom design problem into the standard control configuration, % coprime factorization of Os
we can define a generalized plant P by
[As,Bs,cs,Ds] = ssdata(balreal(Gsfl;
(Ar,Br,Cr,Dr) = ssdata(Tref);
(nr,nr] = size(Ar); (lr,mr] size(Or);
(ns,ns] = size(As); (ls,ms] size(Ds);
[~
Rs = eye(ls)+Ds*Ds*; SB eye(ms).Ds~*os;
(9.83) A = (As — ns*inv(ss)*Ds~*cs);
= ~21[~;] B=sqrtm(cs’ =inv(Rs) *gp);
Q=Bs*inv(Ss)*Bs
Es , ZANP, 0, REP] =care (A, B, Q)
Remark 1 We stress that we aim here to minimize the ~ norm of the entire transfer function in
(9.82). An alternative problem would be to minimize the 7-L~ norm form r to e subject to an upper
bound on II [AN, IXjj~] J~. This problem would involve the structured singular value, and the
optimal controller could be obtained from solving a series of 7L~ optimization problems using DIC
iteration; see Section 8.12.
Remark 2 Extra measurements. In some cases, a designer has more plant outputs available as
measurements than can (or even need) to be controlled. These extra measurements can often make
the design problem easier (e.g. velocity feedback) and therefore when beneficial should be used by the Figure 9.22: Two degrees-of-freedom fl~, loop-shaping controller
feedback controller 1(2. This can be accommodated in the two degrees-of-freedom design procedure by
introducing an output selection matrix W0. This matrix selects from the output measurements y only
those which are to be controlled and hence included in the model-matching part of the optimization. disturbance term (a “worst” disturbance) entering the observer state equations For 7-t~ loop-
In Figure 9.21, I’V0 is introduced between y and the summing junction. In the optimization problem,
shaping controllers, whether of the one or two degtees-of-freedom variety, this extra term is
only the equation for the error e is affected, and in the realization (9.87) for P one simply replaces pC,
not present The clear structure of 7t~ loop-shaping controllers has several advantages
by p14/00,, pRY2 by pW0RY2 and pD, by pW0D, in the fifth row. For example, if there are four
feedback measurements and only the first three are to be controlled, then o It is helpful in describing a controller’s function, especially to one’s managers or clients
1000
who may not be familiar with advanced control
l4~~= 0 1 0 0 (9 88) a It lends itself to implementation in a gain-scheduled scheme, as shown by Hyde and Clover
0010 (1993)
o It offers computational savings in digital implementations and some multi-mode switching
Remark 3 Steady-state gain matching. The command signals r can be scaled by a constant matrix schemes, as shown in Samar (1995)
W~ to make the closed-loop transfer function from r to the controlled outputs W0y match the desired
model Tref exactly at steady-state. This is not guaranteed by the optimization which aims to minimize We will present the controller equations, for both one and two degrees-of-freedom 7-L,~ loop-
the oc-norm of the error. The required scaling is given by shaping designs For simplicity we will assume the shaped plant is stnctly propet, with a
stabilizable and detectable state-space realization
~ [W0(1 — Gs(s)K2(s)Y’C,(s)Ki(s)]~2’rer(s)Io (9 89)
A, B, 990
Recall that FT~Q = 1 if there are no extra feedback measurements beyond those that are to be controlled.
The resulting controller is K = [K1W~ ‘(2].
C, 0
In this case, as shown in Sefton and Clover (1990), the single degree-of-freedom 7-t~~ loop-
We will conclude this subsection with a summary of the main steps required to synthesize a
shaping controller can be realized as an observer for the shaped plant plus a state feedback
two degrees-of-freedom 7i~ loop-shaping controller.
control law The equations are
1. Design a one degree-of-freedom 9-t~ loop-shaping controller using the procedure of
Section 9.4.2, but without a post-compensator weight T’V2. Hence J47~. £8 = A,?ii,+H,(C,&,—y,)+B,u, (991)
2. Select a desired closed-loop transfer function Trer between the commands and controlled (9 92)
outputs.
3. Set the scalar parameter p to a small value greater than 1; something in the range 1 to 3 wheie £, is the observer state, it, and Ys are respectively the input and output of the shaped
will usually suffice. plant, and
4. For the shaped plant G, = OW1, the desired response Trer, and the scalar parameter H, = —z,c?’ (9.93)
p, solve the standard ?t~ optimization problem defined by P in (9.87) to a specified
tolerance to get K = [K1 “2]. Remember to include W0 in the problem formulation if K, = —B~’ [I — 7_21 — -y~2X,Z,j X, (9.94)
extra feedback measurements are to be used. where Z, and X, are the appropriate solutions to the generalized algebraic Riccati equations
5. Replace the prefilter K1 by K1 W~ to give exact model matching at steady-state. for C, given in (9.67) and (9.68).
6. Analyze and, if required, redesign making adjustments top and possibly W1 and Trer. In Figure 9.23, an implementation of an observer-based 7-L~ loop-shaping controller is
The final two degrees-of-freedom 7L~© loop-shaping controller is illustrated in Figure 9.22 shown in block diagram form. The same structure was used by Hyde and Clover (1993) in
their VSTOL design which was scheduled as a function of aircraft forward speed.
Walker (1996) has shown that the two degrees-of-freedom 9-L~© loop-shaping controller
9.4.4 Observer-based structure for 7-tm loop-shaping controllers also has an observer-based structure. He considers a stabilizable and detectable plant
7-t,~ designs exhibit a separation structure in the controller. As seen from (9.50) and (9.51) the
controller has an observer/state feedback structure, but the observer is non-standard, having a (9S5)
378 MULTI VARIABLE FEEDBACK CONTROL CONTROLLER DESIGN 379
+ U, LI
where I~ and I,,, are unit matrices of dimensions equal to those of the error signal z,
and exogenous input w, respectively, in the standard configuration.
Notice that this 9t00 controller depends on the solution to just one algebraic Riccati equation,
not two. This is a characteristic of the two degrees-of-freedom 7-t~~ loop-shaping controller
(Hoyle et al., 1991).
Walker (1996) further shows that if (i) and (ii) are satisfied, then a stabilizing controller
IC(s) satisfying IJF~(P, IflhI~ <7 has the following equations:
p = (DTJD)-’(DTJC+BTX00) (9.99)
commands
V12
(9.100)
— I~ 0
- I~ 0 Figure 9.24: Structure of the two degrees-of-freedom ?-L0~ loop-shaping controller
(9.10 1)
= 0 _721w
380 MULTIVARIABLE FEEDBACK CONTROL CONTROLLER DESIGN 381
actuator
The controller consists of a state observer for the shaped plant C8, a model of the desired saturation
closed-loop transfer function Trer (without Cr) and a state feedback control law that uses
both the observer and reference-model states.
As in the one degree-of-freedom case, this observer-based structure is useful in gain
scheduling. The reference-model part of the controller is also nice because it is often the
same at different design operating points and so may not need to be changed at all during
a scheduled operation of the controllet: Likewise, parts of the observer may not change; for
example, if the weight Wi(s) is the same at all the design operating points. Therefore whilst
the structure of the controller is comforting in the familiarity of its parts, it also has some Figure 9.25: Self-conditioned weight W1
significant advantages when it comes to implementation.
Exercise 9.10 * Show that the Hanus form oft/ic weight W1 in (9.109) simplifies to (9.108) when there
9.4.5 Implementation issues ~ ,~, ~ i.e. wizen 1~a =
Discrete time controllers. For implementation purposes, discrete time controllers are
usually required. These can be obtained from a continuous time design using a bilinear Bumpless transfen In the aero-engine application, a multi-mode switched controller was
transformation from the s-domain to the z-domain, but there can be advantages in being designed. This consisted of three controllers, each designed for a different set of engine output
able to design directly in discrete time. In Samar (1995) and Postlethwaite et al. (1995), varinbles, which were switched between depending on the most significant outputs at nny
observer-based state-space equations are derived directly in discrete time for the two degrees- given time. To ensure smooth transition from one controller to another bumpless transfer
—
of-freedom H~ loop-shaping controller and successfully applied to an aero-engine. This — it was found
controllers. Thususeful
whentoon-line,
condition
the the reference
observer statemodels
evolvesand the observers
according in each of
to an equation of the
the
application was on a real engine, a Spey engine, which is a Rolls-Royce two-spool reheated
form (9.102) but when off-line the state equation becomes
turbofan that was housed at the UK Defence Research Agency (now QinetiQ), Pyestock.
As this was a real application, a number of important implementation issues needed to be = A3553 + H3(C3?~i. — y3) + Bgita. (9.110)
addressed. Although these are outside the general scope of this book, they will be briefly
mentioned now. where n~8 is the actual input to the shaped plant governed by the on-line controller.
Anti-windup. In 74~ loop shaping the pre-compensator weight W1 would normally The reference model with state feedback given by (9.103) and (9.104) is not invertible
include integral action in order to reject low-frequency disturbances acting on the system. and therefore cannot be self-conditioned. However, in discrete time the optimal control
However, in the case of actuator saturation the integrators continue to integrate their input and also has a feed-through term from r which gives a reference model that can be inverted.
hence cause windup problems. An anti-windup scheme is therefore required on the weighting Consequently, in the nero-engine example the reference models for the three controllers were
function ~ One approach is to implement the weight W1 in its self-conditioned or Hanus each conditioned so that the inputs to the shaped plant from the off-line controller followed
form. Let the weight W1 have a realization the actual shaped plant input it~3 given by the on-line controller. For a more recent treatment
of bumpless transfer see Turner and Walker (2000).
~[ A~ Dw
B~ j1 (9.108)
and let it be the input to the plant actuators and it3 the input to the shaped plant. Then
if Satisfactory
advanced control methods
solutions are to gain wider
to implementation issues acceptance in discussed
such as those industry. We have
above are tried
demonstrate here that the observer-based structure of the R~ loop-shaping controller is
to
crucial
it = W1u8. When implemented in Hanus form, the expression for it becomes (Hanus helpful in this regard.
et al., 1987)
0 B~D;’
~— A~—B~D;1C~
C~ ~ 0 Ha.] 9.5 Conclusion
where it~ is the actual plant input; that is, the measurement at the output of the actuators which
therefore contains information about possible actuator saturation. The situation is illustrated We have described several methods and techniques for controller design, but our emphasis
in Figure 9.25, where the actuators are each modelled by a unit gain and a saturation. The has been on 7i~ loop shaping which is easy to apply and in our experience works very well
Hanus form prevents windup by keeping the states of T4~~ consistent with the actual plant in practice. It combines classical loop-shaping ideas (familiar to most practising engineers)
input at all times. When there is no saturation u~ = it, the dynamics of W1 remain unaffected with an effective method for robustly stabilizing the feedback loop. For complex problems,
and (9.109) simplifies to (9.108). But when tia ~ it the dynamics are inverted and driven such as unstable plants with multiple gain crossover frequencies, it may not be easy to decide
by u~ so that the states remain consistent with the actual plant input u~. Notice that such an on a desired loop shape. In this case, we would suggest doing an initial LQG design (with
implementation requires H71 to be invertible and minimum-phase. A more general approach simple weights) and using the resulting loop shape as a reasonable one to aim for in Rc,o loop
to anti-windup is given in Section 12.4. shaping.
382 MULTIVARIABLE FEEDBACK CONTROL
An alternative to 7-/~ loop shaping is a standard R~ design with a “stacked” cost function
such as in S/KS mixed-sensitivity optimization. In this approach, 7-L~ optimization is used to
shape two or sometimes three closed-loop transfer functions. However, with more functions
the shaping becomes increasingly difficult for the designer.
In other design situations where there are several performance objectives (e.g. on signals,
0
model following and model uncertainty), it may be more appropriate to follow a signal-
based 7-i2 or ?lm approach. But again the problem formulations become so complex that
the designer has little direct influence on the design.
After a design, the resulting controller should be analyzed with respect to robustness and
ONTROL STRUCTURE
tested by nonlinear simulation. For the former, we recommend p-analysis as discussed in
Chapter 8, and if the design is not robust, then the weights will need modifying in a redesign.
ESIGN
Sometimes one might consider synthesizing a p-optimal controller, but this complexity is
rarely necessary in practice. Moreover, one should be careful about combining controller
synthesis and analysis into a single step. The following quote from Rosenbrock (1974) Most (if not all) available control theories assume that a control structure is given at the outset. They
illustrates the dilemma: therefore fail to answer some basic questions, which a control engineer regularly meets in practice.
Which variables should be coatrolled, which variables should be measured, which inputs should be
In synthesis the designer specifies in detail the properties which his system
manipulated, and which links should be made between them? The objective of this chapter is to describe
must have, to the point where there is only one possible solution. The act
. . .
the main issues involved in control structure design and to present some of the quantitative methods
of specifying the requirements in detail implies the final solution, yet has to be
available, for example, for selection of controlled variables and for decentralized control.
done in ignorance of this solution, which can then turn out to be unsuitable in
ways that were not foreseen.
Therefore, control system design usually proceeds iteratively through the steps of modelling,
control structure design, controllability analysis, performance and robustness weights
10.1 Introduction
selection, controller synthesis, control system analysis and nonlinear simulation. Rosenbrock
(1974) makes the following observation: (weighted) (weighted)
exogenous inputs ~ exogenous outputs
Solutions are constrained by so many requirements that it is virtually impossible
to list them all. The designer finds himself threading a maze of such
requirements, attempting to reconcile conflicting demands of cost, performance,
easy maintenance, and so on. A good design usually has strong aesthetic appeal
to those who are competent in the subject. manipulated inputs sensed outputs
(control signals)
In much of this book, we consider the general control problem formulation shown in
Figure 10.1, where the cont;vller design problem is to
a Find a stabilizing controller K, which, based on the information in y, generates a control
signal u, which counteracts the influence of w on z, thereby minimizing the closed-loop
norm from w to z.
We presented different techniques for controller design in Chapters 2, 8 and 9. However, if
we go back to Chapter 1 (page 1), then we see that controller design is only one step, step 9,
in the overall process of designing a control system. In this chapter, we are concerned with
the structural decisions of control structure design, which are the steps necessary to get to
Figure 10.1:
Step 6 on page 1: The selection of a conuvl configuration (a structure of interconnecting (vi) “Advanced”/supervzsoiy contiol layei configuration Should it be decentralized or
measurements/commands and manipulated variables). multivariable9 (Sections 105 1 and 106)
See Sections 10.5 and 10.6: What is the structure of K in Figure 10.1; that is, how should we (vii) On-line optimization layei Is this needed or is a constant setpoint policy sufficient (“self-
“pair” the variable sets it and y? optimizing contiol”)9 (Section 10 3)
The distinction between the words control structure and control configuration may seem Except for decision (iv), which is specific to process control, this procedure may be applied
minor, but note that it is significant within the context of this book. The control structure (or to any control problem
control strategy) refers to all structural decisions included in the design of a control system Control structure design was considered by Foss (1973) in his paper entitled “Critique of
(steps 4, 5 and 6). On the other hand, the control configuration refers only to the structuring chemical piocess control theory” where he concluded by challenging the control theoreticians
(decomposition) of the controller K itself (step 6) (also called the measurementlmanipulation of the day to close the gap between theory and applications in this important area Control
partitioning or input/output pairing). Control configuration issues are discussed in more detail structure design is clearly important in the chemical process industry because of the
in Section 10.5. The selection of controlled outputs, manipulations and measurements (steps complexity of these plants, but the same issues are relevant in most other areas of control
4 and 5 combined) is sometimes called input/output selection. where we have large-scale systems In the late 1980’s Carl Nett (Nett, 1989, Nett and
One important reason for decomposing the control system into a specific control Minto, 1989) gave a number of lectures based on his experience of aero-engine control at
configuration is that it may allow for simple tuning of the subcontrollers without the need for Geneial Electric, under the title “A quaatitative approach to the selection and partitioning
a detailed plant model describing the dynamics and interactions in the process. Multivariable of measurements and manipulations for the control of complex systems” He noted that
centralized controllers can always outperform decomposed (decentralized) controllers, but increases in controller complexity unnecessarily outpace increases in plant complexity, and
this performance gain must be traded off against the cost of obtaining and maintaining a that the objective should be to
sufficiently detailed plant model and the additional hardware.
minimize control system complexity subject to the achievement of accuracy
The number of possible control structures shows a combinatorial growth, so for most
specifications in the face of uncertainty
systems a careful evaluation of all alternative control structures is impractical. Fortunately, we
can often obtain a reasonable choice of controlled outputs, measurements and manipulated Balas (2003) recently surveyed the status of flight control He states, with ieference to the
inputs from physical insight. In other cases, simple controllability measures as presented Boeing company, that “the key to the control design is selecting the variables to be regulated
in Chapters 5 and 6 may be used for quickly evaluating or screeniag alternative control and the controls to pei form regulation” (steps 4 and 5) Similarly, the first step in Honeywell’s
structures. Additional tools are presented in this chapter. procedure for controller design is “the selection of controlled vanables (CVs) for performance
From an engineering point of view, the decisions involved in designing a complete and robustness” (step 4)
control system are taken sequentially: first, a “top-down” selection of controlled outputs, Surveys on control structure design and input—output selection are given by Van de Wal
measurements and inputs (steps 4 and 5) and then a “bottom-up” design of the control (1994) and Van de Wal and de Jager (2001), respectively A review of control structure design
system (in which step 6, the selection of the control configuration, is the most important in the chemical process industry (plantwide control) is given by Larsson and Skogestad
decision). However, the decisions are closely related in the sense that one decision directly (2000) The reader is referred to ChapterS (page 164) for an overview of the literature on
influences the others, so the procedure may involve iteration. Skogestad (2004a) has proposed input—output controllability analysis
a procedure for control structure design for complete chemical plants, consistiag of the
following structural decisions:
S
“Top-down” (mainly step 4) 10.2 Optimal operation and control
(i) Identify operational constraints and identify a scalar cost function J that characterizes
The overall control objective is to maintain acceptable opeiation (in terms of safety.
optimal operation.
environmental impact, load on operatois, and so on) while keeping the operating conditions
(ii) Identify degrees of freedom (manipulated inputs it) and in particular identify the ones that
close to economically optimal In Figure 10 2, we show three different implementations for
affect the cost J (in process control, the cost J is usually determined by the steady-state).
optimization and control
(iii) Analyze the solution of optimal operation for various disturbances, with the aim of finding
primary controlled variables (yi = z) which, when kept constant, indirectly minimize the (a) Open-loop optimization
386 MULTIVARiABLE FEEDBACK CONTROL STRUCTURE DESIGN 387
Objective Objective Objective
‘Jr
Optimizer
y
1. The stability and performance of a lower (faster) layer is not much influenced by the this set of controlled variables, howevei Hence experience and intuition still plays a major
presence of upper (slow) layers because the frequency of the “disturbance” from the upper role in the design of control systems”
layer is well inside the bandwidth of the lower layer. The important variables in this section are
2. With the lower (faster) layers in place, the stability and performance of the upper (slower)
layers do not depend much on the specific controller settings used in the lower layers • xi — degrees of freedom (inputs)
because they only effect high frequencies outside the bandwidth of the upper layers. o z — primary (“economic”) controlled variables
o r — reference value (setpoint) for z
More generally, there are two ways of partitioning the control system: o y — measurements, process information (often including xi)
Vertical (blearchical) decomposition. This is the decomposition just discussed which In the general case, the controlled variables aie selected as functtons of the measurements,
usually results from a time scale difference between the various control objectives z =HO) For example, z can be a linear combination of measurements, i e z = Hg In
(“decoupling in time”). The controllers are normally designed sequentially, starting many cases, we select individual measurements as controlled vanables and H is a “selection
with the fast layers, and then cascaded (series interconnected) in a hierarchical manner. matrix” consisting of ones and zetos Normally, we select as many controlled variables as the
number of available degrees of freedom, i e n~ =
Horizontal decomposition. This is used when the plant is “decoupled in space”, and The controlled vanables z me often not important vaitables in themselves, but are
normally involves a set of independent decentralized controllers. Decentralized control controlled in order to achieve some ovetall operational objective A reasonable question is
is discussed in more detail in Section 10.6 (page 428). then why not forget the whole thing about selecting controlled variables, and instead directly
adjust the manipulated variables u9 The reason is that an open-loop implementation usually
Remark 1 In accordance with Lunze (1992) we have purposely used the word layer rather than level fails because we are not able to adjust to changes (disturbances ci) and eriors (in the model)
for the hierarchical decomposition of the control system. The somewhat subtle difference is that in
The following example illustrates the issues
a multilevel system all units contribute to satisfying the same goal, whereas in a multilayer system
the different units have different local objectives (which preferably contribute to the overall goal). Example 10.1 Cake baking. The ovet all goal is to make a cake winch is spell baked inside and has
Multilevel systems have been studied in connection with the solution of optimization problems. a nice eliei mm The tnampnlated input fom aclneving tIns is the heat input, xi = Q (and ne ii ill assume
that the dnmation of the baking is fired, e g at 15 nunutes)
Remark 2 The tasks within any layer can be performed by humans (e.g. manual control), and the
(a) If we had nevem baked a cake befome, and if ne weme to cons!, nd the oven oumselves, we might
interaction and task sharing between the automatic control system and the human operators are very
considem directly manipulating the heat input to the oven, possthlv nith a natt—metem measu? emneni
important in most cases, e.g. an aircraft pilot. However, these issues are outside the scope of this book.
1-lonevem, this open—loop mmplemnenration is ould not n omk well, as the optinial heat input depends
Remark 3 As noted above, we may also decompose [he control layer, and from now on when we talk stmonglv on the particulat oven we use, and the opem ation is also sensitive to distum bances, fat example,
about control configurations, hierarchical decomposition and decentralization, we generally refer to the opening the oven (loot or whateve, else might be in the oven In shoi t, the open—loop implementation is
control layer. sensitive to umicei taints
(b) An effective is a~’ of meducing the u,mcei taints is to use feedback Thet efom e, in pm actice we use a
Remark 4 A fourth possible strategy for optimization and control, not shown in Figure 10.2, is closed—loop iniplementation whet e tie contiol the oven tenipei atume (z = 2’) using a thei inostat The
(d) ext,-e,nnin-seeking control. Here the model-based block in Figure 10.2(c) is replaced by an tenipeiatnte setpoint r = 2’. is fotind fmoni a cook book (winch play tile iole of the “optimnice’ “)
“experimenting” controller, which, based on measurements of the cost J, perturbs the input in order The (a) open-loop and (h) closed-loop implementations of the cake baking pmocess ale illustiated in
to seek the extremum (minimum) of J; see e.g. Ariyur and Krstic (2003) for details. The main rignie 102
disadvantage with this strategy is that a fast and accurate on-line measurement of J is rarely available.
The key question is what variables z should we control9 In many cases, it is clear from
a physical understanding of the process what these are For example, if we aie considering
heating or cooling a room, then we should select the room temperature as the controlled
10.3 Selection of primary controlled outputs variable z Furtheimore, we generally control vai tables that are optimally at their constiaints
(limits) For example, we make sure that the air conditioning is on maximum if we want to
We are concerned here with the selection of controlled outputs (controlled variables, CV5). cool down oui house quickly In othei cases, it is less obvious what to contiol, because the
This involves selecting the variables z to be controlled at given reference values, z r, where ovetall control objective may not be directly associated with keeping some variable constant
r is set by some higher layer in the control hierarchy. Thus, the selection of controlled outputs To get an idea of the issues involved, we will consider some simple examples Let us first
(for the control layer) is usually intimately related to the hierarchical structuring of the control consider two cases where implementation is obvious because the optimal strategy is to keep
system shown in Figure 10.2(b). The aim of this section is to provide systematic methods for variables at then constraints
selecting controlled variables. Until recently, this has remained an unsolved problem. For
example, Fisher et al. (1985) state that “Our current approach to control of a complete plant Example 10.2 Short-distance (100 m) running. The objective is to nnninnze the tune 2’ of the ‘ace
is to solve the optimal steady-state problem on-line, and then use the results of this analysis to (J = 2’) The manipulated uipnt (xi) is the muscle pOss em Foi a it elI—ti anied tunnel, the optinial
fix the setpoints of selected controlled vai-iables. There is no available procedure for selecting solution lies at the const, aint xi = it,~ Inzplenientation is tliemi eon select z = xi and r = Uma,. 0?
alwi natively “m Un a sfast as possible’
a—
390 MULTIVARIABLE FEEDBACK CONTROL CONTROL STRUCTURE DESIGN 391
Example 10.3 Driving from A to B. Let y denote the speed of the car The objective is to minimize is
the time P of driving from A to B ot equivalently to maxi,nize the speed (y), i.e. J = —y. If we are J0~~(d) 4 J(u0~~(d),d) = minJ(u,d) (10.1)
driving on a straight and clear toad, then the optimal solution is always to stay on the speed limit
constraint (ymax). Implementation is then easy: use a feedback scheme (cruise control) to adjust the Ideally, we want as = u0P~(d). However, this will not be achieved in practice and we have a
engine power (as) such that we are at the speed limit; that is, select z = y and r = Umax. loss L = J(as,d) J0~~(d) >0.
—
We consider the simple feedback policy in Figure 10.2(b), where we attempt to keep z
In the next example, the optimal solution does not lie at a constraint and the selection of
constant. Note that the open-loop implementation is included as a special case by selecting
the controlled variable is not obvious.
z = as. The aim is to adjust as automatically, if necessary, when there is a disturbance d such
Example 10.4 Long-distance running. The objective is to minimize the tune P of the race (J = 7’), that as as0~t(d). This effectively turns the complex optimization problem into a simple
which is achieved by maximizing the average speed. It is clear that running at mnaxitnutn input power is feedback problem. The goal is to achieve “self-optimizing control” (Skogestad, 2000):
not a good strategy. This would give a high speed at the beginning, but a slower speed towards the end,
Self-optimizing control is when we can achieve att acceptable loss with constant
and the average speed will be lower A better policy would be to keep constant speed (z = = speed).
setpoint values for the controlled variables without the need to reoptimize when
The optimization layer (e.g. the trainer) will then choose an optimal setpoint rfor the speed, and tIns is
disturbances occur
implemented by the control layer (the runner). Alternative strategies, which may work better iii a hilly
terrain, are to keep a constant heart rate (z = = heart rate) or a constant lactate level (z = ya = Remnrk. In Chapter 5, we introduced the term self-regulation, which is when acceptable dynamic
lactate level). control performance can be obtained with constant manipulated variables (as). Self-optimizing control
is a direct generalization to the layer above where we can achieve acceptable (economic) performance
with constant controlled variables (z).
10.3.1 Self-optimizing control
The concept of self-optimizing control is inherent in many real-life scenarios
Recall that the title of this section is selection of primary controlled outputs. In the cake
including (Skogestad, 2004b):
baking process, we select the oven temperature as the controlled output z in the control layer.
It is interesting to note that controlling the oven temperature in itself has no direct relation to o The central bank attempts to optimize the welfare of the country (J) by keeping a constant
the overall goal of making a well-baked cake. So why do we select the oven temperature as a inflation rate (z) by varying the interest rate (as).
controlled output? We now want to outline an approach for answering questions of this kind. o The long-distance runner may attempt to minimize the total running time (J = 7’) by
Two distinct questions arise: keeping a constant heart rate (z = Yl) or constant lactate level (z = y2) by varying the
1. What variables z should be selected as the controlled variables? muscle power (as).
2. What is the optimal reference value (z0~~) for these variables? o A driver attempts to minimize the fuel consumption and engine wear (J) by keeping a
constant engine rotation speed (z) by varying the gear position (as).
The second problem is one of optimization and is extensively studied (but not in this book).
Here we want to gain some insight into the first problem which has been much less studied. The presence of self-optimizing control is also evident in biological systems, which have
We make the following assumptions: no capacity for solving complex on-line optimization problems. Here, self-optimizing control
policies are the only viable solution and have developed by evolution. In business systems,
1. The overall goal can be quantified in terms of a scalar cost function J the primary (“economic”) controlled variables are called key performance indicators (KPI5)
2. For a given disturbance d, there exists an optimal value u0~~(d) (and corresponding value and their optimal values are obtained by analyzing successful businesses (“benchmarking”).
z0~t(d)), which minimizes the cost function J The idea of self-optimizing control is further illustrated in Figure 10.4, where we see
3. The reference values r for the controlled outputs z are kept constant, i.e. a’ is independent
that there is a loss if we keep a constant value for the controlled variable z, rather than
of the disturbances d. Typically, some average value is selected, e.g. r = reoptimizing when a disturbance moves the process away from its nominal optimal operating
In the following, we assume that the optimally constrained variables are already controlled point (denoted d).
at their constraints (“active constraint control”) and consider the “remaining” unconstrained An ideal self-optimizing variable would be the gradient of the Lagrange function for
problem with controlled variables z and remaining unconstrained degrees of freedom as. the optimization problem, which should be zero. However, a direct measurement of the
The system behaviour is a function of the independent variables as and d, so we may gradient (or a closely related variable) is rarely available, and computing the gradient
formally write J = J(u, d).1 For a given disturhance d the optimal value of the cost function generally requires knowing the value of unmeasured disturbances. We will now outline some
approaches for selecting the controlled variables z. Although a model is used to find z, note
Note that the cost J is usually not a simple function of as and d, but is rallier given by some implied relationship
such as that the goal of self-optimizing control is to eliminate the need for on-line model-based
minJ=Jo(as,x,d) st. f(x,u,d)=0 optimization.
zs,x
where dim f = dimx and f(x, as, d) = 0 represents the model equations. Formally eliminating the internal state
variables z gives the problem mm. J(u, d)
I
392 MULTIVARIABLE FEEDBACK CONTROL CONTROL STRUCTURE DESIGN 393
Cost J where 0 and O~ are the steady-state gain matrix and distuibance model respectively For a
fixed d, we have z — = G(u u0~~) If 0 is invertible we then get
—
= constant
u—u0~ =0’(z—z0~t) (104)
Rcopiiniizcd .J_,t (d)
Note that 0 isa square matrix, since we have assumed that n~ = n~ From (102) and (104)
we get
10.3.2 Selecting controlled outputs: local analysis where ~ = J~i420_1 (z ~ These expressions for the loss L yield considerable insight
—
Obviously, we would like to select the controlled outputs z such that z—z0~,~ is zero However,
We use here a local second-order accurate analysis of the loss function. From this, we derive
this is not possible in practice because of (1) varying distuibances d and (2) implementation
the useful minimum singular value rule, and an exact local method; see Halvorsen et al.
eiior e associated with control of z To see this more clearly, we write
(2003) for further details. Note that this is a local analysis, which may be misleading; for
example, if the optimum point of operation is close to infeasibility. z — = z — r+r — = e0~(d) + a (107)
Consider the loss L = J(u, d) J0~~(d), where d is a fixed (generally non-zero)
—
disturbance. We here make the following additional assumptions: First, we have an optimization error
I. The cost function J is smooth, or more precisely twice differentiable.
e0~~(d) ~ — z02~(d) (108)
2. As before, we assume that the optimization problem is unconstrained. If it is optimal
to keep some variable at a constraint, then we assume that this is implemented (“active because the algoiithm (e g the cook book for cake baking) gives a desired r which is different
constraint control”) and consider the remaining unconstrained problem. from the optimal z0~t(d) Second, we have a control or implementation error
3. The dynamics of the problem can be neglected when evaluating the cost; that is, we
consider steady-state control and optimization. A
(109)
4. We control as many variables z as there are available degrees of freedom, i.e. n~ =
because control is not perfect, either because of poor control performance or because of an
For a fixed d we may then express J(u, d) in terms of a Taylor series expansion in is around incorrect measurement (steady-state bias) nZ
the optimal point. We get
If we have integial action in the contioller, then the steady-state control error is zero,
and we have a = flZ Jf z is directly measured then flZ is its measurement eiror If z is a
~8J~ T
J(u,d) = J0~~(d) + I —) (u — combination of several measurements y, z = Hp, see Figure 10 2(b), then flZ = Hit9, where
\bUJ opt
it9 is the vector of measurement errors for the measurements p
=0 Inmost cases, the errors a and e0~t(d) can be assumed independent
1 Example 10.1 Cake baking continued. Let us ietuin to the question why select the oven tenipeiature
+ ~(u — uopt(dflT (ji~~) (u uO~~(d)) + (10.2)
as a connolled output2 lYe have tno altei natives a closed—loop nnplementation with z T (the oven
=
te,npeiau,e) and an open—loop inipleinentation with z = a = Q (the heat input) Fioin ewei lence, we
know that the optunal oven tenipeiatuie T00~ is largely independent of disturbances and is almost
We will neglect terms of third order and higher (which assumes that we are reasonably close the same Jo; au’. oven This “leans that we ‘nay always specify the same oven tempeiatuie, say
to the optimum). The second term on the right hand side in (10.2) is zero at the optimal point 0 r = = 1900 C, as obtained floin the cook book On the other hand, the optnnal heat input Q0~~
394 MULTIVARIABLE FEEDBACK CONTROL CONTROL STRUCTURE DESIGN 395
depends strongly on the heat loss, the size of the oven, etc., and may vary between, say, 100 W and 5000 (A2) The inputs are scaled such that the effect of a given deviation it, —n,0~ on the cost function
W .4 cook book would then need to list a different value of r = Q~ for each kind of oven and would in J is similar for each input such that J,~ = (82J/oui02~ is close to a constant times a
addition ,ieed some correction factor depending on the room temperature, how often the oven door is unitary matnx, 1 e = a U, where a =
opened, etc. Therefore, we find that it is much easier to get e0~~ = — T0~~ 1°C] small thai, to get
= — Q0~ 1W] small. Thus, the main ,-eason for controlling the oven temperature is to minimize
From (10 6), we have L = ~ii~]i~, where ~ = J~42&’(z ~ and from (340), the —
the optimization errol: In addition, the control error e is expected to be much smaller when controlling worst-case W~1l2 for liz Zoptil2 = 1 is Hzii2 = U(J~2&1) Then, the resulting worst-case
—
temperature. loss is 2
From (10.5) and (10.7), we conclude that we should select the controlled outputs z such that: max L = ~a2(ah/2G_1) ci 91 (10.11)
I~.PtI[a≤1 2 2 o—(G)
I. G~’ is small (i.e. & is large); the choice of z should be such that the inputs have a large Since the constant a is independent of the choice of z, to minimize the loss L, we should
effect on z. select controlled variables that maximize o-(G).
2. e0~~(d) = r— z0~t(d) is small; the choice of z should be such that its optimal value zupt(d)
depends only weakly on the disturbances (and other changes). Maximum scaled gain (minimum singular value) rule. Assume that the
3. e = z r is small; the choice of z should be such that it is easy to keep the control or
—
unconstrained degrees offreedom are scaled such that they have similar effects
implementation error e small. on the cost (or more precisely, such that J,~ is constant times unitary), and
4. &‘ is small, which implies that & should not be close to singular. For cases with two or assume that the candidate controlled variables z have been scaled such that the
more controlled variables, the variables should be selected such that they are independent expected variation in z 2opt (“span”) is each of magnitude 1. Then for self-
—
of each other. optimizing control (mininzu;n steady-state loss), select controlled variables z that
maximize the minimum singular value, u(G), of the scaled gain matrix C from
By proper scaling of the variables, these four requirements can be combined into the it to z.
“maximize minimum singular value rule” as discussed next.
This important result was first presented in the first edition of this book (Skogestad and
Postlethwaite, 1996) and proven in more detail by Halvorsen et al. (2003). Altematively, if
10.3.3 Selecting controlled outputs: maximum scaled gain method we do not scale the inputs to make J,~ unitary, then we should prefer sets of controlled
Scalar case. In many cases we only have one unconstrained degree of freedom (z is a scalar). variables with a large value of u(JWG).
Define the “span” or range of z as the expected value of z — z011~, and introduce the scaled Example 10.5 The aero-engine application in Chapter 13 (page 500) provides a nice illustration of
gain from it to z: output selection. There the overall goal is to operate the engine optimally in tertns offimel consumption,
C = G/span(z) while at the same time staying softly away from instability. The optimization layer is a look-up table,
which gives the optimal parameters for the engine at various operating points. Since the engine
Note that span(z) = z 2npt includes both the optimization (setpoint) error and the
—
at steady-state has three degrees of freedomn we need to specify three variables to keep the engine
implementation error. Then, from (10.5), the loss imposed by keeping z constant is
app roximately at the optimal point, and six alternative sets of three outputs are given in Table 13.3.2
(page 503). For the scaled varables, the value of U(G(0)) is 0.060, 0-049, 0.056, 0.366, 0.409 and
a (z—z0~t~2 cii 0.342 for the six alternative sets. Based on this, the first three sets are eliminated. The final choice is
(10.10)
2\~ G ) — 2 G’~ then based on other considerations including contmvllabiliiy
where a = {Jrn~, the Hessian of the cost function, is independent of the choice for z. From Procedure. The use of the minimum singular value to select controlled outputs may be
(10.10), we see that the “scaled gain” C’ = C/span should be maximized to minimize the summarized in the following procedure:
loss. Note that the loss decreases with the square of the scaled gain. For an application, see
Example 10.6 on page 398. From a (nonlinear) model compute the optimal values for alternative candidates for z.
Multivariable case. We consider here the general case where it and z are vectors. Let each (This yields a “look-up” table of 2opt for various expected disturbance combinations.)
output z~ be scaled such that the expected magnitude of z~ z10~ (“span”) is of order 1, or
—
From this data obtain for each candidate output, the expected variation in its optimal value,
more precisely, mainly for mathematical convenience, such that the combined error measured = (Ziopt,m,x — zioptmjj/2.
by the 2-norm is less than 1, i.e. liz ZoptIi2 ≤ 1. Note from (10.7) that the “span” includes
—
2 For each candidate output z~, obtain the expected implementation error e1.
the sum of the optimal variation (e0~~ = r z~~t) and the implementation error (e = z v)
— —
3 Scale the candidate outputs z1 by dividing by the “span” ~ + e1
We assume that: 4 Scale the inputs it such that a unit deviation in each input from its optimal value has the
same effect on the cost function J (i.e. such that J,.,,~ is close to a constant times a unitary
(Al) The variations in 21 2~0~ are uncorrelated, or more precisely, the “worst-case”
— matrix).
combination of output deviations z~ z~°~ with liz Zoptil2 = 1, can occur in practice.
— —
1 Note that G is the scaled gain matrix, i.e. G = G’, but ‘ye will in the foHowing omit the prime to simpiify noiation.
C
4
396 MULTIVARIABLE FEEDBACK CONTROL
CONTROL STRUCTURE DESIGN 397
5. Prefer sets of controlled variables with a large value of~(G). G is the transfer function
be able to detect feasibility problems For example, in marathon running, selecting a control
describing the effect of the scaled inputs is on the scaled outputs z.
strategy based on constant speed may be good locally (for small disturbances) However,
Remark 1 The disturbances and measurement noise enter indirectly through the scaling of the outputs S if we encounter a steep hill (a large disturbance), then operation may not be feasible,
z. because the selected reference value may be too high In such cases, we may need to use
a “brute force” direct evaluation of the loss and feasibility for altemative sets of controlled
Remark 2 Our desire to have o-(G) large for output selection is not related to the desire to have c(G) variables This is done by solving the nonlinear equations, and evaluating the cost function
large to avoid input constraints as discussed in Section 6.9. In particular, the scalings, and thus the
J for various selected disturbances ci and control errors e, assuming z = r + e where r
matrix G, are different for the two cases.
is kept constant (Skogestad, 2000) Here r is usually selected as the optimal value for the
Remark 3 We have in our derivation assumed that the nominal operating point is optimal. However, nominal disturbance, but this may not be the best choice and its value may also be found
it can be shown that the results are independent of the operating point, provided we are in the region by optimization (“optimal back-off”) (Govatsmark, 2003) The set of controlled outputs
where the cost can be approximated by a quadratic function as in (10.2) (Alstad, 2005). Thus, it is with smallest worst-case or average value of J is then preferred This approach may be
equally important to select the right controlled variables when we are nominally non-optimal. time consuming because the solution of the nonlinear equations must be repeated for each
candidate set of controlled outputs
Exercise 10.1 Recall that the singular value method requires that the minimum singular value of the
(scaled) gain matrix be maximized, It is proposed that the loss can simply be minimized by selecting the
controlled variables as z = fly, where fi is a large number Show that such a scalitig does not affect the 10.3.6 Selecting controlled outputs: measurement combinations
selection of controlled variables using the singular value method.
We have so far selected z as a subset of the available measurements y More generally, we may
consider combinations of the measurements We will restnct ourselves to hneat combinations
10.3.4 Selecting controlled outputs: exact local method
4 z=Hy (1015)
The minimum singular value rule is based on two simplifying assumptions (Al) and (A2)
on page 394, which may not hold for some cases with more than one controlled variable where y now denotes all the available measurements, including the inputs is used by the
(n~ = flu > 1). The violation of assumption (A2) can easily be compensated for by control system The objective is to find the measurement combination matrix H
minimizing u(J,~42G) instead of u(G), but assumption (Al) is more limiting. This is pointed 4 Optimal combination. Wnte the linear model in terms of the measurements y as
out by Halvorsen et al. (2003), who derived the following exact local method. y = G~u + G’~d Locally, the optimal linear combination is obtained by minimizing
Let the diagonal matrix Wd contain the magnitudes of expected disturbances and the o(,[ AId Mel) in (10 12) with We = ~ where ~ contains the expected
diagonal matrix W~ contain the expected implementation errors associated with the individual measurement errors associated with the individual measured variables, see Halvorsen et al
(2003) Note that H enters (10 12) indirectly, since C = Hot’ and 0d = HG’~ depend on
[~,J
controlled variables. We assume that the combined disturbance and implementation error
H However, (10 12) is a nonlinear function of H and numerical search-based methods need
vector has norm 1, 112 = 1. Then, it may be shown that the worst-case loss
to be used
is (Halvorsen et al., 2003) Null space method A simpler method for finding H is the null space method proposed
by Alstad and Skogestad (2004), wheie we neglect the implementation error, i e M~ = 0 in
,
max L = ~U([M~ Me])2 (10.12) (10 14) Then, a constant setpoint policy (z = r) is optimal if z0~~(d) is independent of ci,
that is. when Zupt = 0 ci in terms of deviation variables Note that the optimal values of the
[e’J [12≤’ individual measurements Yopt still depend on ci and we may write
where
i/opt Fd (10 16)
M~ = ,j112
flu ~uu’~4za
‘,
— G”Gd) 147d (10.13) where F denotes the optimal sensitivity of y with respect to ci We would like to find z = Hy
e = Jr!2
flu G’”We (10.14) such that z01,~ = Hy0~~ = HFd = 0 ci for all ci To satisfy this, we must require
Here J~. = (82J/Ou2)0~~. Jud = (82J/Ou8d)0~~ and the scaling enters through the HF=0 (1017)
weights Wd and We.
or that H lies in the left null space of F. This is always possible, provided n~, ~ n~ +st~t. This
is because the null space of F has dimension n~ rid and to make HF = 0, we must require
—
10.3.5 Selecting controlled outputs: direct evaluation of cost that n2 = n~ <flu — ~d• It can be shown that when (10.17) holds, Md = 0. If there are too
The local methods presented in Sections 10.3.2-10.3.4 are very useful. However, in many many disturbances, i.e. n~, <fl,~ + 7i~j, then one should select only the important disturbances
practical examples nonlinear effects are important. In particular, the local methods may not (in terms of economics) or combine disturbances with a similar effect on y (Alstad, 2005).
4
S
I
398 MULTIVARIABLE FEEDBACK CONTROL
r CONTROL STRUCTURE DESIGN
In the presence of implementation errors, even when (10.17) holds such that Md = 0, the
loss can be large due to non-zero J1’I~. Therefore, the null space method does not guarantee Table 10.1: Local “maximum gain” analysis for selecting controlled variable for cooling cycle
— ‘~ ‘ IG~
that the loss L using a combination of measurements will be less than using the individual Variable (v) eXz0~t(di) 0 ~ 16! I
measurements. One practical approach is to select first the candidate measurements y, whose ) ~~j~nser pressure, Ph [Pa] 3689 ---464566 126
sensitivity to the implementation error is small (Alstad, 2005). Evaporator pressure, pz [Pa] —167 0 0
Temperature at condenser exit, Ta [K] 0.1027 316 3074
Degree of sub-cooling, Th — p.atQ~) [K] —0.0165 331 20017
10.3.7 Selecting controlled outputs: examples Choke valve opening, it 8.0 x 1 1250
Liquid level in condenser, Mh [m3] 6.7 x ir6 —1.06 157583
The following example illustrates the simple “maximize scaled gain rule” (mimimum singular Liquid level in evaporator, M? [m3] —1.0 X io~ 1.05 105087
value method). SN
Example 10.6 Cooling cycle. A simple cooling cycle or heat pump consists of a compressor (where Ti at the evaporator exit is directly related tom (because of saturation) and also has a zero gain. The
work W3 is supplied and the pressure is increased to ph), a high-pressure condenser (where heat is open-loop policy with a constant valve pOsitiot? it has a scaled gait? of 1250, and the temperature at
supplied to the surroundings at high temperature), an expansion valve (where the fluid is expanded to the condenser exit (Ta) has a scaled gab? of 3074. Eve!? more pron?ising is the degree of subcooling
at the condenser exit with a scaled gain of 20017. Note that tl?e loss dereases it? proportion to Ic’ 2,
so the increase iti the gait? by a factor 20017/1250 16.0 whet? we change from constant choke
valve ope??it?g ( “open-loop”) to constant degree of subcooling, corresponds to a decrease in the loss
( at least for small perturbations)by a factor 16.02 = 256. Finally, the best single measuretnents see!??
to be the amount of liquid in the co,?denser and evaporator~ M1. and Mt. with scaled gains of 157583
and 105087, respectively Both these strategies are used in actual heat pump systems. A “brute force”
evaluation of the cost for a (large) disturbance it? the surrou!?ding temperature (di = Tn-) of about 10
U
K, confirms the linear analysis, except that the choice z = Ta t?trt?s out to be infeasible. The open-loop
policy with constant valve positiot? (z = it) u?creases the cott?pressor work by about 10%, whereas the
policy with a constat?t conde!?ser level (z = lYle) has an it?crease of less thaI? 0.003%. Similar results
holdfor a disturbance in the cold surrou;?dings (d2 = Ta). Note that the implen?entation error was not
co!?sidered, so the actual losses will be large;:
The next simple example illustrates the use of different methods for selection of controlled
variables.
Figure 10.5: Cooling cycle
Example 10.7 Selection of controlled variables. As a simple exatnple, consider a scalar
ut?constra;ned problem, with the costfunction J = (it — d)2, where nomu?ally d* = 0. For this problemn
we have three ca!?didate tneasurements,
a lower pressure p~ such that the temperature drops) and a low-pressure evaporator (where heat is
Zn = 0.1(u—d); 1J2 = 20u; Zn lOu— Sd
removedfrom the surroundings at low temperature); see Figure 10.5. The compressor work is indirectly
set by the amount of heating or cooling, which is assumed give!?. We consider a design with a flooded We assume the disturbam?ce and measuremnet?t noises are of unit n?agnit?tde, i.e. IdI ≤ 1 a;?d ~ < 1.
evaporator where there is no super-heating. In this case, the expansion valve position (it) remains as For this problem, we always have Jopt (d) 0 correspot?ding to
an unconstrained degree offreedotn, and should be adjusted to minimize the work supplied, J = TVS.
u0~~(d) = d, yi,0~;(d) = 0, y2,OP~(d) = 20d and ya,opt(d) Sd
The question is: what variable should we control?
Seven alternative controlled variables are considered in Table 10.1. The data is for an ammonia For the nomit?al case with d = 0, we thus have uopt(d) 0 and y~p~~*) = Ofor all candidate
L
cooling cycle, and we consider Ay0~~ for a small disturbance of 0.1 K in the hot surroundings controlled variables and at the nominal operating point we have J,~ = 2, J~d = —2. The lit?earized
(di = Tn-). We do not cot?sider implementation errors. Details are give!? in Jensen at?d Skogestad
u?odels for the three measured va,-iables are
(2005). From (10.10), itfollows that it may be useful to compute the scaled gail? G’ = G/span(z(dt)) yi: c~ = 0.L c~, = —0.1
for the various disturbances d1 and look for cot?trolled va,-iables z with a large value of G’f. Two Zn: c~=2o, 6!~2=0
obvious candidate cot?trolled variables are the high al?d low pressures (ph and p;). However~ these pa: cg = io, c~ = —s
appear to be poor choices with scaled gains JO’ I of 126 and 0, respectively. The zero gait? is because Let us first consider selecting one of the individual mneas?Lrements as a cot?t rolled variable. We l?ave
we assu,ne a give!? cooling duty Qa = UA(T — Ta) and further assume saturation T1 = T8at(~1).
Case 1: z=yi,
Keeping pj constant is the?? infeasible when, for example, there are disturbances in Ta. Other obvious
Case 2: z=y2,
candidates a,-e the tetnperatum-es at the exit of the heat exchangers, T~ and T1. However the temperature
Case 3: z=ya,
I
400 MULTIVARIABLE FEEDBACK CONTROL CONTROL STRUCTURE DESIGN 401
The losses for this example can be evaluated analytically and we find for the three cases We ascumne that the implementation errom for it is 1, i e n” = 1 We then have WX = I, where W~[
isa 4 x 4 matnx Fuithermome, we have
= (be1)2; L2 = (005e2 — d)2; L3 = (0.les — 0.5d)2
G9=[01 20 10 I]~ G~=[—01 0 —5 oV
(For example, with z = y~, we have it = ~ + Sd)/10 and with z = ng, we get L3 = (it — d)2 =
(0.1n~ + 0.5d — d)2.) With dl < 1 and nfl ≤ 1, the worst-case losses (with dl = 1 and Optnnal combination We wish to find H such that a([ Md M~ }) in (10 12) is mninnnized, wheme
nfl = 1) are = 100, L2 = 1.052 = 1.1025 and L3 = 0.62 = 0.36, and we find that HG’~’, Gd = HG~, W~ HWS, ‘Jun 2, Jud = —2 and Wd = 1 Numencal optimnization
z = y~ is the best overall choice for self-optimizing control and z = 1/i is the worst. We note that ‘,ields H0~~ = [00209 —02330 09780 —00116]. that is, the optimal combination of the
z = 1/i is peifectly self-optimizing with respect to disturbances, but has the highest loss. This highlights thmee ,neasumements and the manipulated input it is
the importance of considering the implementation error when selecting controlled variables. Next, we
compare the three different methods discussed earlier in this section. z = 0 O2O9yi — 0 233061/2 ± 0 97SOy~ — 0 0116u
A. Marimum scaled gain (singular value rule): For the three choices of controlled variables we have We note, as expected, that the most nnpom tant cont, ibution to z comes frotn the vat table y~ The lovs
without scaling Gil = c(Gi) = 0.1, a(G2) = 20 and c(Gi) = 10. This indicates that Z2 is is L = 00405, so it is meduced by afactom 6 compared to the pmevious best case (L = 026) with
the best choice, hut this is only correct with no disturbances. Let us now follow the singular value Z?J3
procedure. Null space method in the null space method wefind the optunal comnbination without unplemnentation
1. The input is scaled by the factor 1/V(82J/Uu2)01,~ = 1/~/~ such that a unit deviation in each cr1 or This first vtep is to find the optunal vensitivity with mespect to the disturbances Since itopt = d,
input from its optimal value has the same effect on the cost function J. we have
2. The maximum setpoint error due to variations in distum-bances is given as e0~~,t = cyj; “,id — A1/opt = Fi~d G~iàu0~~ + G~Ad = (G1 + G~) Ad
G~. Then, forz = yi, eopt,i = 0.1 . (—2) — (—0.1) = 0 and similarly, e0~g,~ = —20 and F
= 5. and thus the optunal sensitivity i
3. For each candidate controlled variable the implementation error is ~Z =
4. The expected variation (“span”)forz = 1/i is le0~t,~I + nfl = 0 + 1 = 1. Similarly forz = 1/2 F=[0 20 5 i]•~
and z = y~, the spans at-c 20 + 1 = 21 and 5 + 1 = 6, respectively To have zeto loss with mespect to distumbances we need to combine at least n,, + lId = 1 + 1 = 2
5. The scaled gain matrices and the worst-case losses are measiuements Since we have foum candidate measuiemnents, theme are an infinite nu,nbem ofpossible
z IGi~ = f ~0.1/V’~= 0.071; = = 100 combmations, butfor sunpliciiy of the contmol system, wepmefer to combme only two measurements
= ~ GJ = ~ .20/~,/~= 0.67; To meduce the effect of unplementation em roes, it is best to combine measurenients y with a lamge
1.1025 = ~ =
gain, provided they contain diffement infonnation about it and d Mote precisely we should ,naxinuze
Z=ya: IG~Ik10/~/~=1.18;
= ~i+I = 0.360 u([ G~ C2]) From this we find that measurements 2 and 3 are the best, with ~U G~’ G~])
We note from the computed losses that the singular value rule (= maximize scaled gain rule) suggests
that we should control z = yy, which is the same as found with the “exact” procedui-e. The losses
~ ~ = 445 To find the optimal co,nbmation we use HF = 0 or
are also identical.
20h2 + 5lz~ 0
B. Exact local method: in this case, we have H’d = 1 and W~1 = I and for 1/i
Setting 112 = 1 gives h3 = —4, and the optimal combmation is z = 1/2 — 41/3 0? (normalizmg the
and M~=~•0.1’-b=b0~ 2-norm of H to 1)
z = —0 24251/2 + 0 97Olya
which give The resultuig loss wIzen including the nnplemnentation ermor is L = 0 0425 We mecommnemid the use
= U([Md Me D2 = ~(~(0 b0~)) = 100 of this solution, because the loss is only margimially highem (0 0425 instead of 0 0405) than that
obtained using the optimnal combuiation of all foui measuremnem its
Similarly we find with Z2 and z3
Max imnizing scaled gain for combined measurements Fot the scalai case, the “maximize scaled gain
L2 = ~(~(—~‘~ ~&/20)) = 1.0025 asid L3 = ~(a(—v~/2 v~/10)) = 0.26 rule” call also be used to find the best combmation Considem a linear combination ofmneesumemetits
2 and 3, z = h2112 + h3y3 The gani fmom it to z is C = h2G~ + h3G~ The span fom z,
Thus, the exact local method also suggests selectimig z = as the comm-oIled variable. The reason span(z) = leopt,zl + Ic1 l~ is obtained by combuumig the individual spans
for the slight difference from the “exact” nonlinear losses is that we assunied d amid it2’ individually eopt,z = h2e0~t,2 + h3~opt 3 h2f2 + h3fs 20112 + Shy
to be less than 1 in the exact nomilinear method, whereas in the exact linear method lye assumed that
and Icz I 1e2 I + h3 lea I we assume that the combined implementation en ors ale 2—no, in
the combined 2-norm of d and Jj2’ was less thami 1.
[~] [~]
= 112 if
C. Comnbimiatiomis of measume,nents: We miow want to find the best comnbination z = Hy. in additiomi to bounded, II 112 ~ 1, then the womst-case implementation em tol fom z is Ic1 I = II 112 The
1/i, 1/2 and y~, we also include the input it in the set y, i.e. iesultnig scaled gain that should be maxumzed in magnitude is
L
402 MULTIVARIABLE FEEDBACK CONTROL
The expression (10.18) gives considerable insight into the selection of a good measurement
combination. We should select H (Le. h2 and ha) in order to ,naxinzize 0’. The null space method
I CONTROL STRUCTURE DESIGN
a implementation error = sum of control error and measurement error (at steady
403
corresponds to selecting H such that 0opt = h2e0~~,2 + hie0~~,s = 0. This gives h2 = —0.2425
3 state)
and h3 = 0.9701, and Ic2 I = I [~] 112 = 1. The corresponding scaled gain is I!
3
For cases with more than one unconstrained degree of freedom, we use the gain in the most
difficult direction as expressed by the minimum singular value
0’ = —20 0.2425 + 10 0.9701 - = —4.851 I General rule: “maxunuze the (scaled) minimum smgulam value ~(C’) (at steady
0±1 I state)”
with a loss L = a/(21G’12) = 0.0425 (as found above). (The factor a = Juu = 2 is included
because we did not scale the inputs when obtaining C’.) I We have written “at steady-state” because the cost usually depends on the steady-state, but
more generally it could be replaced by “at the bandwidth frequency of the layer above (which
Some additional examples can be found in Skogestad (2000), Halvorsen et al. (2003), adjusts the setpoints for z)”
Skogestad (2004b) and Govatsmark (2003).
Exercise 10.2 Suppose that we want to minimize the LQG-lype objective function, j
* = x~ + ru2, / 10.4 Regulatory control layer
r > 0, where the steady-state model of the system is
x ± 2u — 3d = 0 In this section, we are concerned with the regulatory control layer This is at the bottom of
the control hierarchy and the objective of this layer is generally to “stabilize” the process and
= 2x, 112 =6x—5d, ya =3x—2d
facilitate smooth operation It is not to optimize objectives related to piofit, which is done
Which ,neasure,nent would you select as a controlled variable for r = 1? How does your conclusion at higher layers Usually, this is a decentralized control system of “low complexity” which
change with variation in r? Assume unit implementation errorfor all measurements. keeps a set of measurements at given setpoints The regulatory control layer is usually itself
hieiarchical, consisting of cascaded loops If there are “truly” unstable modes (RHP-poles)
Exercise 10.3 In Exe,t-ise 10.2, how would your conclusions change when is (open-loop
then these are usually stabilized first Then, we close loops to “stabilize” the system in the
implementation policy) is also included as a candidate controlled variable? First, assume the
more general sense of keeping the states within acceptable bounds (avoiding drift), for which
implementation error for u is unity Repeat the analysis, whemi the implementation error for is and
the key issue is local disturbance rejection
each of the ,neasurements is 10.
The most important issues for regulatory control are what to measure and what to
manipulate Some simple rules for these are given on page 405 A fundamental issue
10.3.8 Selection of controlled variables: summary is whether the introduction of a separate regulatory control layer imposes an inherent
performance loss in terms of control of the primary variables z Interestingly, the answer is
When the optimum coincides with constraints, optimal operation is achieved by controlling “no” provided the regulatory controller does not contain RHP-zeros, and provided the layer
the active constraints. It is for the remaining unconstrained degrees of freedom that the above has full access to changing the reference values in the regulatory control layer (see
selection of controlled variables is a difficult issue. Theorem 102 on page 415)
The most common “unconstrained case” is when there is only a single unconstrained
degree of freedom. The rule is then to select a controlled variable such that the (scaled) gain
is maximized. 10.4.1 Objectives of regulatory control
Scalar rule: “maximize scaled gain 09” Some more specific objectives of the regulatory control layer may be:
• 0 = unscaled gain from is to z 01. Provide sufficient quality of control to enable a trained operator to keep the plant running
• Scaled gain 0’ = 0/span safely without use of the higher layers in the control system.
• span = optimal range (Ie0~tI) + implementation error (Id) This sharply reduces the need for providing costly backup systems for the higher layers of
In words, this “maximize scaled gain rule” may be expressed as follows: the control hierarchy in case of failures.
o optimal variation: due to disturbance (at steady-state) 04. Track references (setpoints) set by the higher layers in the control hierarchy.
C
404 MULTIVARIABLE FEEDBACK CONTROL
DONTROL STRUCTURE DESIGN 405
The setpoints of the lower layers are often the manipulated variables for the higher levels in
The regulatory control layer should assist in achieving the overall operational goals, so if
the control hierarchy, and we want to be able to change these variables as directly and with as “economic” controlled vanables z are known, then we should include them in y~ In other
little interaction as possible. Otherwise, the higher layer will need a model of the dynamics
cases, if the objective is to stop the system from “drifting” away from its steady-state, then the
and interactions of the outputs from the lower layer.
variables 111 could be a weighted subset of the system states, see the discussion on page 418
The most important issues for regulatory control are
05. Provide for local disturbance rejection.
i. What should we control (what is the variable set y2)~
This follows from 04, since we want to be able to keep the controlled variables in the 2 What should we select as manipulated variables (what is the variable set u2) and how
regulatory control system at their setpoints. should it be paired with y2~
06. Stabilize the plant (in the mathematical sense of shifting RHP-poles to the LHP). The pairing issue arises because we aim at using decentralized SISO control, if at all possible
In many cases, it is “clear” from physical considerations and experience what the variables
07. Avoid “drift” so that the system stays within its “linear region” which allows the use of
112 are (see the distillation example below for a typical case) However, we have put the word
linear controllers. “clear” in quotes, because it may sometimes be useful to question the conventional control
wisdom
08. Make it possible to use simple (at least in terms of dynamics) models in the higher
We will below, see (10 28), derive transfer functions for “partial control”, which are useful
layers.
for a more exact analysis of the effects of various choices for 112 and u2 However, we will
We want to use relatively simple models because of reliability and the costs involved in first present some simple rules that may be useful for reducing the number of alternatives that
obtaining and maintaining a detailed dynamic model of the plant, and because complex could be studied This is important in order to avoid a combinatoiial growth in possibilities
dynamics will add to the computational burden on the higher-layer control system. For a plant where we want to select vi from Al candidate inputs it, and I from L candidate
measurements y, the number of possibilities is
09. Do not introduce unnecessary performance limitations for the remaining control
problem.
(LN (MN — Li M’
~ I) ~m) — l’(L —1)’ m~(M — vi)’ (10 19)
The “remaining control problem” is the control problem as seen from the higher layer A few examples for in = I = 1 and M = L = 2 the number of possibilities is 4, foi
which has as manipulated inputs the setpoints to the lower-level control system and the in = I = 2 and Al = L = 4 it is 36, and for m = Al, I = 5 and L = 100 (selecting 5
possible “unused” manipulated inputs. By “unnecessary” we mean limitations (e.g. R}IP measurements out of 100 possible) there are 75287520 possible combinations
zeros, large RGA elements, strong sensitivity to disturbances) that do not exist in the original It is useful to distinguish between two main cases
problem formulation.
Cascade and indirect control. The variables 112 are controlled solely to assist in achieving
good control of the “primary” outputs yi~ In this case r2 (sometimes denoted r2,u) is
10.4.2 Selection of variables for regulatory control usually “free” for use as manipulated inputs (MV5) in the layer above for the control of
For the following discussion, it is useful to divide the outputs y into two classes: Yi~
2 Decentralized control (using sequential design). The variables 112 are important in
o yi — (locally) uncontrolled outputs (for which there is an associated control objective) themselves. In this case, their reference values r2 (sometimes denoted r24) are usually
4
o y2 — (locally) measured and controlled outputs (with reference value r2) not available for the control of in~ but rather act as disturbances to the control of y~.
By “locally” we mean here “in the regulatory control layer”. Thus, the variables y2 are the 4 Rules for selecting 112• Especially for the first case (cascade and indirect control), the
K
selected controlled variables in the regulatory control layer. We also subdivide the available following rules may be useful for identifying candidate controlled variables 112 in the
manipulated inputs u in a similar manner: regulatory control layer:
4
• — (locally) unused inputs (this set may be empty) 112 should be easy to measure.
o — (locally) used inputs for control of y~ (usually n~2 = n~2) 2 Control of 112 should “stabilize” the plant.
3 112 should have good controllability; that is, it has favourable dynamics for control.
We will study the regulatory control layer, but a similar subdivision and analysis could be 4 112 should be located “close” to the manipulated variable it2 (as a consequence of rule 3,
performed for any control layer. The variables y~ are sometimes called “primary” outputs, because for good controllability we want a small effective delay; see page 57).
and the variables 112 “secondary” outputs. Note that 112 is the controlled variable (CV) in the 5 The (scaled) gain from it2 to 112 should be large.
control layer presently considered. Typically, you can think of y~ as the variables we would
really like to control and 112 as the variables we control locally to make control of yr easier. In words, the last rule says that the controllable range for 112 (which may be reached
by varying the inputs it2) should be large compared to its expected variation (span). It
4
406 MULTIVARIABLE FEEDBACK CONTROL
I?
CONTROL STRUCTURE DESIGN 407
is a restatement of the maximum gain rule presented on page 395 for selecting primary Note that these th,ee variables are unpo~tant to control in themselves
(“economic”) controlled variables z. The rule follows because we would like to control
variables y2 that contribute to achieving optimal operation. For the scalar case, we should Oterallcontrol problem In sununaly, we have now identified five variables that we want to contmol
maximize the gain IG&21 = ~G22~/span(y2), where C22 is the unscaled transfer function
from it2 to Y2, and span(y2) is the sum of the optimal variation and the implementation error y=~Mnp]T
312
for Y2• For cases with more than one output, the “gain” is given by the minimum singular
value, ~(G~2). The scaled gain (including the optimal variation and implementation error) The mesulting overall 5 x 5 control problem from it to v can be app;ou;nated as (Skogestad and
should be evaluated for constant it1 and approximately at the bandwidth frequency of the Mo,ari, 1987cr)
control layer immediately above (which adjust the references r2 for y2). mi(s) 912(5) 0 0 0 1.
Rules for selecting it2. To control 112, we select a subset it2 of the available manipulated Zn 921(8) 922(5) 0 0 0 v
inputs it. Similar considerations as for 112 apply to the choice of candidate manipulated Mn = —1/s 0 —1/s 0 0 D (1020)
variables u2: Mn gb(s)/s —1/s 0 —1/s 0 B
Mv(p) 0 1/(s+kp) 0 0 —1/(s+kp) Vp
1. Select it2 so that controllability for 112 is good; that is, it2 has a “large” and “direct” effect
In addition, there are high-frequency dynamics (delays) associated with the inputs (valves) and outputs
on 112 Here “large” means that the gain is large, and “direct” means good dynamics with (measurements) For control pum poses itis veiy impom tant to include the tra;isferfimnction gs. (s), winch
no inverse response and a small effective delay. replesents the liquid flow dynanucs fmom the top to the bottom of the colu,nn, AL~ = gL(5)AL
2. Select ~ to maximize the magnitude of the (scaled) gain from it2 to 112 For control pulposes, it may be approximated by a delay, gb(s) = e9’~’ Yb(s) also enters into the
3. Avoid using variables it2 that may saturate. t,ansfer function 921(5) from L to Zn, and by this decouples the distillation co/un,,, dynanucs at high
frequencies The ovem all plant model in (10 20) usually has no i,iherent control lunitations caused by
The last item is the only “new” requirement compared to what we stated for selecting y2•
RHP-zeros, but the plant has two poles at the origin (f,o,n the nitegrating liquid levels, IL/ID and kin),
By “saturate” we mean that the desired value of the input it2 exceeds a physical constraint;
for example, on its magnitude or rate. The last mle applies because, when an input saturates, and also one pole close to the ongin (“almost integratmg”) in GLV 911 912 originating from
921 922
we have effectively lost control, and reconfiguration may be required. Preferably, we would the internal recycle in the colunin These three modes need to be “stabilized” In addition, for high
like to minimize the need for reconfiguration and its associated logic in the regulatory control punts’ sepamations, there is a potential control problem in that the GLV-subsystemn is strongly coupled
layer, and rather leave such tasks for the upper layers in the control hierarchy. at steady-state, e g resulting in large elements in the RCA matm Ices foi GLV and also for the ovem all
5 x S plant, butfortunately the system is decoupled at high freque,wy because ofthe liquidflow dynamics
Example 10.8 Regulatory control for distillation column: basic layer. The ovem-all control represented by Yb(s) Anothe; complication is that composition mneasurements (111) are often expensive
problem for tile distillation column in Figure 10.6 has five manipulated inputs and 103 reliable
u=[L V D B VTT Regulatory contivl selection of a2 As already mentioned, the distillation column is first stabilized
These al-c all flows [mo//si: reflux L, boilup V, distillate D, bottom flow B, and overhead vapour by closing three decentralized 5150 loops for level and pressure, 112 = [Mn M8 p T These
(cooling) VT. What to control (y) is yet to be decided. loops usually interact weakly with each othei and may be tuned independently Howeve,, there eiist
Overall objective. FIVm a steady-state (and economic) point of vielg the co/u,,,,, has only three many possible choices for 1t2 (atid thits fo, iti) For examnple, the condensei holdup tatik (Mn) has
degrees of freedom3 With pressure also controlled, there a,e two i-emnaining steady-state degrees of one inlet flow (VT) and two outlet flows (L and D), and any one of these floivs, or a comnbination,
freedom, and we want to identify the economic controlled variables Vi = z associated with these. To do may be used effectively to control MD By convention, each choice (“configumation”) of it2 used for
this, we define the cost function J and tninirnize it for various distu,-bances, subject to the constraints, conttolhng level and plessuie is named by the inputs Ui left for composition contmol Fom aample, the
which include specifications on top composition (zp) and bottom composition (Zn), together with “LV-configu;ation” used in many examples in this book refers to a partially cont,olled systeni wheie
ice = [D B VT T is used to contmol levels and piessure (92) in the regulatory layer, and we ale left
upper and lower bounds on the flows, In ~nost cases, the optimnal solution lies at the constraints. A very
co,nmon situation is that both top and bottom composition optimally lie at their specifications (Vn,mi,i with
and Xn,max). We generally choose to control active co,,straints and then have ui=[L VT
to cont,ol composition (111) The LV-configuration is known to be strongly interactive at steady-state,
IJj=Z=[ZD Zn
a. as can been seen f~omn the large steady-state RCA elements, see (3 94) on page 100 On the othet hand,
Regulato,y control: selection of in. We need to stabilize the two integrating modes associated with the LV-conflgu,ation is good f,om the point of view that it is the only configutation wheie control of
the liquid holdups (levels) in the condenser atid reboiler of the colu,,in (Mp and M~ [mol]). In Vi (using Ui) is nearly independent of the tuning oft/ic level controllems (1(2) This is quite important,
addition, we normally have tight control of pressure (p), because othenvise the (later) contmvl of because we no? i~zally want “slow” (s,nooth contmol) rathe; than tight control oft/ic levels (Mn and
temperature and composition becomes miiore difficult. In summam~~ we decide to control the following a. Mn) This may give undesirable interactions fm on, the megulatory contmol layem (92) into the pm immiamy
three va,-iables in the regulatory control layer cont,ol layem (Vi) However, this is avoided with the LV-conflgu,ation
-T Anothem configuiation is the DV-configuration where ac = [L B VT T is used to contmol levels
V2[MD Mn p amid pressum e, and we are left ivith
ó A distillation column has two fewer steady-state than dynamic degrees ot freedom, because the integrating
ui=[D 17T
condenser and rebniler levels, which need to be controlled to stabilize the process, have no steady-stale effect.
0
408 MULTIVARIABLE FEEDBACK CONTROL CONTROL STRUCTURE DESIGN 409
J-’mduct compositions) A contiollabilny analysis of the model CLv(s) from it to Vi shows that there
, 0) an almost integrating mode, and (2) strong interactions The integiating mode results in high
sensitivity to disturbances at lower ft equencies The control implication is that we need to close a
“5~abilizuig” loop A closer analysis of the interactions (e g a plot of the RGA elements as afunction of
freqiien ~J shows that they are mnuch smnallet at high ft equencies The physical reason fot this is that L
and SD are at the top of the column, and V and 5B at the bottom, and since it takes some tune (0L)for
a change in L to teach the bottoni, the high-frequency response is decoupled The control unplication is
V that the mteractions may be avoided by closing a loop with a closed-loop response tune less than about
-ç
DYD
to cont,-ol compositions. If we were only concerned with cotitrolli ig the condenser level (ill D) then this
choice would be better for cases with difficult separations where L/D >~ 1. This is because to avoid
saturation in ~2 we would like to use the largest flow (in tIns case it2 = L) to control condenser level
(MD). In addition for this case, the steady-state interactions fivin iti to y~, as expressed by the RGA,
are generally much less; see (6.74) on page 245. Rosvever~ a disadvantage with the DV-conflguration
is that the effect of it1 on y~ depends strongly on the tuning of K2. This is not surprising, since using V
to cotitrol 5D corresponds to pairing on 931 = 0 in (10.20). and V (iti) therefore only has an effect on
XD (Ui) when the level loop (from u2 = L to ~2 = MD) has beet: closed. Figure 10.7: Distillation column with LV-conflguration and regulatory temperature loop
There are also many other possible configurations (choices for the two inputs in lit); with five inputs
there are ten alternative configurations. Furthermore, one often allowsfor the possibilire of using ratios
It tumns out that closing one fast loop may take care of both stabilization and reducing interactions
between flows, e.g. L/D, as possible degrees offreedoin in u~, and tlus sharply increases tha number
The issue is then which loop to close The most obvious choice is to close one of the composition
of alternatives. 1-lowevet; for all these configurations, the effect of it1 on yi depends on the tuning of
1(2, which is undesirable. This is one reason why the LV-conflguration is used most in practice. In the
lOO~5 (yj) Howevem, there is usually a tune delay involved in measuring composition (SD and XE),
and the measurement may be unreliable On the othem hand, the tempemature 2’ is a good indicatom of
next section, we discuss how closing a ‘fast” temperature loop may improve the controllability of the composition and is easy to measure The prefem red solution is therefore to close afast temperature loop
LV-conflguration.
somewhere along the column This loop will be unpleniented as pamt of the megulatomy control system
We have two available manipulated yam iables it, so teniperatume may be contm oIled using reflux L or
In the above example, the variables Y2 were important variables in themselves. In the boilup V We choose meflu-c L here (see Figure 10 7) because it is mole likely that boilup V will teach
following example, the variable Y2 is controlled to assist in the control of the primary its niaximnun: value, and input saturation is not desired in the megulatomy contiol layem In terms of the
variables y~. notation presemited above, we then have a Sf80 regulatory loop with
Example 10.9 Regulatory control for distillation column: temperature control. We will assu,ne y2=T, u2=L
that we have closed the three basic control loops for liquid holdup (MD, MB) and pressure (p) using
and it1 = V The “p1 unaty” composition contmol layem adjusts the tempem ature setpoint r2 T~ fom
the LV-configuration, see Example 10.8, and we are left with a 2 x 2 control problem with
the regulatory layem Thus, fom the pm imamy la~em we have
u=[L V~’ y1=[XD XE]
T
u[ui r2]T[V 2’3jT
(reflux and boilup) and The issue is to find which temperature 2’ in the column,: to control, andfor tIns we may use the “maximum
yl=[XD 5)3 gain rule”. The objective is to mnaxinnze the scaled gait: 02 (jw) I from it2 = L to ~2 = 2’.
I
410 MULTJVARIABLE FEEDBACK CONTROL
)NTROL STRUCTURE DESIGN 411
Heie lG~2l = 1022J/span where 022 is the unsealed gain and span = optimal iange (leoptl) +
unplementation error (IeI)for the selected temperature The gain should be evaluated at approximately
the bandwidthfrequency oft/ic composition layer that adjusts the setpoint r~ = T5 For this application
we assume that the pi imao layer is relatively slow such that we can evaluate the gain at steady state
Le. w = 0.
In Table 10 2 we show the tiormahzed temperatures y2 = X unsealed gain optimal vanatioti foi a
the two disturbances iniplementatioti e, for and the resulting span and scaled gain fo; Ineasulements C)
located at stages I (reboiler~ 5 10 15 21 (feed stage) 26 31 36 and 41 (condetise,) The gains
arc also plotted as a function of stage number in Figioe 10 8 The laigest scaled gain of about 88 is
achieved when the temperatute measuremetit is located at stage 15 from the bottom Howeve, this is
below the feed stage and it takes some time for the change in reflu.x (ua = L) which enteis at the top
to reach this stage Thus foi dynamic reasons it is better to place the measurement in the top part of
s to is 20 25 30 35 40
the column for example at stage 27 where the gain has a local peak of about 74 Stage Number
Figure 10.8: Scaled (I0~2I) and unsealed (10221) gains for alternative temperature locations for the
Table 10.2: Evaluation of scaled gain IG~2I for alternative temperature locations (y2) for distillation distillation example
example. Span = IL~y2,0~~(di)I + JAy2,0~t(d2)I + e92. Scaled gain G~~I = 10221/span.
Nominal Unsealed Scaled
Stage valuey2 022 ~412 OP~(d1) ~ OP~(d2) e22 span(y2) I021 3. As seemifroin the solid and dashed lines in Figure 10.8, the local peaks of the unsealed and scaled
1 0.0100 1.0846 0.0077 0.0011 0.05 0.0588 18.448 gains occur at stages 26 and 27. respectively Thus, scaling does not affect the final conclusion much
iti this case. However; if we were to set the implementation error e to zero, then the Inaximuni scaled
5 0.0355 3.7148 0.0247 0.0056 0.05 0.0803 46.247
10 0.1229 10.9600 0.0615 0.0294 0.05 0.1408 77.807 gain would be at the bottom of the colunui (stage 1).
15 0.2986 17,0030 0.0675 0.0769 0.05 0.1944 87.480 4. We ,iiade the choice u~ = L to avoid saturation ill the boilup V in the regulatory control layer
21 0.4987 9.6947 -0.0076 0.0955 0.05 0.1532 63.300 However; if saturation is not a problem, then the other alternative ~2 = V may be better. A similar
26 0.6675 14.4540 -0.0853 0.0597 0.05 0.1950 74.112 analysis with 112 = V gives a mnaxinuim scaled gain of about 100 is obtaitied with the temperature
31 08469 105250 00893 00130 005 01524 69074 measured at stage 14.
36 09501 41345 00420 00027 005 00947 43646 In suninialy, the overall 5 x S distillation control problem may be solved by first design big a 4 x 4
41 0.9900 0.8754 -0.0096 -0.0013 0.05 0.0609 14.376 ‘stabilizing” (regulatory) controller 1(2 for levels, pressure and temperature
?j2[Mp MB p T~’, u2=[D B VT
Remarks to example
1. We use data for “column A” (see Section 13.4) which has 40 stages. This column separates a binary and then designing a 2 x 2 “primary” cotitroller K1 for composition cotitrol
mixture, andfor simplicity we assume that the temperature T on stage i is directly given by the mole y1[zv XE, ‘ui=[V T3
fractioti of the light component, T~ = x~. This can be regarded as a “normalized” temperature which
ranges from 0 in the bottom to 1 iti the top of the column. The implementation error is assumed to Alternatively, we niay ititerchange Land V in 111 and 112. The temperature sensor (T) should be located
be the sa,ne on all stages, namely e2, = 0.05 (amid with a temperature difference betweeti the two at a point with a large scaled gain.
components of 13.5 K, this corresponds to all implementation error of *0.68 K). The disturbances
are a 20% increase in feed rate F (d1 = 0.2) and a change from 0.5 to 0.6 in feed mole fraction ZF We have discussed some simple rules and tools (“maximum gain rule”) for selecting
(d2 = 0.1). the variables in the regulatory control layer. The regulatory control layer is usually itself
2. The optimal variation (~y2,0~~(d)) is often obtained from a detailed steady-state model, but it was hierarchical, consisting of a layerfor stabilization of unstable modes (RHP-poles) and a layer
generated here from the linear niodeL For any disturbance d we have in terms of deviation variables for “stabilization” in terms of disturbance rejection. Next, we introduce pole vectors and
(we onut the zX ‘s) partial control, which are more specific tools for addressing the issues of stabilization and
= Gnu2 + Odld disturbance rejection.
112 = 022112 + Gd2d
The optimal strategy is to have the product compositions comistant; that is, yt = [XD XE JT = 0. 10.4.3 Stabilization: pole vectors
Howevet; since ~2 = L is a scala,; this is not possible. The best solution in a least squares sense
(minimize 11w 112) is found by using the pseudo—inverse, 14vt = ‘~012Gd1 d. Tile resulting optimtial Pole vectors are useful for selecting inputs and outputs for stabilization of unstable modes
change in the temperature 112 = P is then / (RHP-poles) when input usage is an issue. An important advantage is that the selection of
inputs is treated separately from the selection of outputs and hence we avoid the combinatorial
= (—G220120d1 + 0d2)d (10.21) issue. The main disadvantage is that the theoretical results only hold for cases with a sitzgle
RHP-pole, but applications show that the tool is more generally useful.
412 MULTIVARIAI3LE FEEDBACK CONTROL CONTROL STRUCTURE DESIGN 413
The issue is: which outputs (measurements) and inputs (manipulations) should be used for Example 10.10 Stabilization of Tennessee Eastman process. The Tennessee Eastman chenncal
stabilization? We should clearly avoid saturation of the inputs, because this makes the system process (Downs and Vogel, 1993) was intioduced as a challenge problem to test methods foi
effectively open-loop and stabilization is then impossible. A reasonable objective is therefore contiol stiucture design ~ The piocess has 12 manipulated inputs and 41 candidate measurements,
to minimize the input usage required for stabilization. In addition, this choice also minimizes of which is e considem 11 here, see Haure (1998) fo’ details on the selection of these Va, iables
the “disturbing” effect that the stabilization layer has on the remaining control problem. and scaling The model has sn unstable poles at the operating point considered, p
Recall that u = —KS(r + ii d), so input usage is minimized when the norm of KS is
—
[0 o oni 0 023 ± Jo 156 3 066 ± j5 079] The absolute values of the output and input pole vectois
minimal. We will consider both the 9-12 and 9-L~ norms. are
6815 6909 2573 0964
Theorem 10.1 (Input usage for stabilization) For a rational plant with a single unstable 6906 7197 2636 0246
mode p. the minimal 9-t2 and 9i~ norms of the transferfunction KS are given as (Havre and 0000 0000 0013 0366 0148 1485 0768 0044
3973 11550 o096 0470
Skogestad, 2003; Kariwala, 2004)
0009 058 1 0 4 88 0316 T 0012
0597 0369
0077 0519
0066 0356
0033
(2p)3/2 . qT~ l~l = 0000 0001 0041 011~ IUPI 0135 1850 1682 0110
mm
K
111(8112 =
IIUpII2 IIYpII2 (10.22) 160o 1192 0754 0131 22 4 00 onon
.
nono 0001 0039 0108
0000 0001 0038 0217 -.
usuig the pole vector appioach by stabilizing one ieal pole oi a paim of complex poles at a time Usually,
Exercise 10.4 Show that for a system with a single unstable pole, (10.23) represents the least
*
the selected vai iable does not depend on the contmvllems designed in the pievious vteps Venfy tins foi
achievable value of llKSlk~. (Hint: Rearrange (5.3!) on page 178 using the definition ofpole vectors.) each ofthefollowimig two svsteniv
When the plant has multiple unstable poles, the pole vectors associated with a specific [102 ii [102 i
RHP-pole give a measure of input usage required to move this RHP-pole assuming that the Gi(s) = Q(s) [12 15 soij Ga(s) = Q(s) [12 1 161
other RI-IP-poles are unchanged. This is of course unrealistic; nevertheless, the pole vector
approach can he used by stabilizing one source of instability at a time. That is, first an input — [1/(s-1) 0
and an output are selected considering one real RHP-pole or a pair of complex RHP-poles
A Q(s)— [ o 1/(s—05)
and a stabilizing controller is designed. Then, the pole vectors are recomputed for the partially (Hint Use simple piopo~ tional contiolleisfom stabilization of p = 1 and evaluate the effect of change
controlled system and another set of variables is selected. This process is repeated until all the of cont,olle, gaui on pole vectoms in the second item ation
modes are stabilized. This process results in a sequentially designed decentralized controller Simuhnk and Matlab models for the Tennessee Eastman process are ,o,amiabie from Professor Larry Ricker at the
and has been useful in several practical applications, as demonstrated by the next example. University of Washington (easily found using a search engine)
414 MULTIVARIABLE FEEDBACK CONTROL CONTROL STRUCTURE DESIGN 415
10.4.4 Local disturbance rejection: partial control Note that ~d, the partial disturbance gain, is the disturbance gain for a system under partial
Let y~ denote the primary variables, and y2 the locally controlled variables. We start by control. Pu is the effect of it1 on Yl with 112 controlled. In many cases, the set it1 is empty
deriving the transfer functions for y~ for the partially controlled system when y2 is controlled. because there are no extra inputs. In such cases, r2 is probably available for control of 111,
We also partition the inputs it into the sets it1 and it2, where the set it2 is used to control 112. and F,. gives the effect of r2 on Yi• In other cases, r2 may be viewed as a disturbance for the
The model y = Gu may then be written5 control of y~.
In the following discussion, we assume that the control of y, is fast compared to the control
= Gnitj + Gnu2 + Gdjd (10.24) of yj. This results in a time scale separation between these layers, which simplifies controller
design. To obtain the resulting model we may let I’(2 -4 oo in (10.26). Alternatively, we may
= G21u1 + G22u2 + Gd2d (10.25) solve fort2 in (10.25) to get
it2 = —G~’G~~d — G,’G21u1 + G~y2 (10.27)
d We have assumed that G22 is square and invertible, otherwise we can use a least squares
solution by replacing G~’ by the pseudo-inverse, G~2. On substituting (10.27) into (10.24)
and assuming 112 n~ (“perfect” control), we get
—
~ ~ (10.28)
Pa Pd F,. 1(2
The advantage of the approximation (10.28) over (10.26) is that it is independent of 1(2, but
it’ we stress that it is useful only at frequencies where Y2 is tightly controlled.
Remark 1 Relationships similar to those given in (10.28) have been derived by many authors, e.g. see
the work of Manousiouthakis et al. (1986) on block relative gains and the work of Haggblom and Wallet
(1988) on distillation control configurations.
Remark 2 Equation (10.26) may be rewritten in terms of linear fractional transformations (page 543).
For example, the transfer function from ti to yi is
Figure 10.9: Partial control Exercise 10.6 The block diagram in Figure 10.11 below shows a cascade control system where the
primaly output yi depends directly on the extra measurement y~, so 012 0102, 022 02,
Now assume that feedback control Get = [1 G, ] and 0d2 = [0 1]. Assume tight control of y2. Show that Pd = [1 0) and P..
and discuss the result. Note that P.. is the “new” plant as it appears with the inner loop closed.
= I(2(~2 — 7J2,m)
The selection of secondary variables 112 depends on whether it, or r2 (or any) are available
is used for the secondary subsystem involving it2 and y~, see Figure 10.9, where Y2,m = for control of y,. Next, we consider in turn each of the three cases that may arise.
112 + it2 is the measured value of Y2~ By eliminating it2 and y2~ we then get the following
model for the resulting partially controlled system from it1, r2, d and it2 to 111:
1. Cascade control system
= (G1~ — G121(2(I± G22K2)’ 021) it1 Cascade control is a special case of partial control, where we use it2 to control (tightly) the
P,.
secondary outputs 112, and r2 replaces it2 as a degree of freedom for controlling y,. We would
like to avoid the introduction of additional (new) RHP-zeros, when closing the secondary
+ (Gdl —G12K2(I+G22K2y’0d2)d
loops. The next theorem shows that this is not a problem.
Pd
Theorem 10.2 (RHP-zeros due to closing of secondary loop) Assume that it1(, =
+ G12K2(I ±G221cE2Y~(r2 — it2)
P.
+n~,, and it1(, = it,., = n~, (see Figure 10.9). Let the plant C = [~“ ~12] and the
secondary loop (.92 = (I + G22K2) 1) be stable. Then the partially controlled plant
We may assume that any stabilizing loops have atready been closed, so for the model y = On, 0 includes the
stabilizing controller and u includes any “free’ setpoints to the stabilizing layer below (10.30)
FCL = [G,~ — G,21cE2S2G21 0121(282]
416 MULTIVARIABLE FEEDBACK CONTROL CONTROL STRUCTURE DESIGN 417
from ~u1 ro] to ~jj in (10.26) has no additional RHP—zeros (that are not present in the open-
2. U([Pd Fr]) is small and at least smaller than U(Odl). In particular, this argument
loop plant [G~~ 012 ]fronz [iii a2] to y~) ~f applies at higher frequencies. Note that F,. measures the effect of measurement noise ~2
1. r2 is available for control of iii. and on lit
2. 1(2 is nimunum-phase. 3. To ensure that it2 has enough power to reject the local disturbances d and track 1’2, based
on (10.27), we require that U(G~Gd2) < 1 and U(G~) < 1. Here, we have assumed
Proof: Under the dimensional and stability assumptions, .PCL is a stable and square transfer function that the inputs have been scaled as outlined in Section 1.4.
matrix. Thus, the RHP-zeros of POt are the points in RHP where det(Po~(s)) = 0 (also see Remark 4
on page 141). Using Schur’s formula in (Al 4), Remark 1 The above recommendations for selection of secondary variables are stated in terms of
ingular values, but the choice of norm is usually of secondary importance. The minimization of
det(FOL) = det(M) . det(S2) e([Pd Pr]) arises if [,~]II2 ≤ land we want to minimize IIy1[12.
where
Remark 2 By considering the cost function J = mind,~, YTVI. the selection of secondary variables
lvi — [ G~
021
0
—I
012 1(2
1+0221(2
for disturbance rejection using the objectives outlined above is closely related to the concept of self-
optimizing control discussed in Section 10.3.
with the partitioning as shown above. By exchanging the columns of A’I, we have
2. sequentially designed decentralized control system
det(M) = (—1)~’det 11 On 0121(2 0
1+0221(2 —I When r2 is not available for control of yi~ we have a sequentially designed decentralized
= det([ On 0121(2 ]) controller. Here the variables 112 are important in themselves and we first design a controller
1(2 to control the subset y2~ With this controller 1(2 in place (a partially controlled system),
det([ 011 ])det({ 1 ~ we may then design a controller 1(~ for the remaining outputs.
= 012
0 1(2 j) In this case, secondary loops can introduce “new” RHP-zeros in the partially controlled
= det([ 011 012 ]) det(K2) system 1%. For example, this is likely to happen if we pair on negative RGA elements
The second equality follows since the rearranged matrix is block triangular and det(—1) = (—1)”. (Shinskey, 1967; 1996); see Example 10.22 (page 446). Such zeros, however, can be moved
Then, putting everything together, we have that to high frequencies (beyond the bandwidth), if it is possible to tune the inner (secondary) loop
sufficiently fast (Cui and Jacobsen, 2002).
det(Pct) = deE ([ ~ 012 ]) det(K2) . dot(S2) In addition, based on the general objectives for variable selection, we require that u(F~)
Although the RUP-poles of 1(2 appear as RFIP-zeros of 82 due to the interpolation constraints, these
instead of u([ P,~ F,.]) be large. The other objectives for secondary variable selection are
zeros are cancelled by 1(2 and thus det(K2) . dot(S2) evaluated at RHP-poles of 1(2 is non-zero. the same as for cascade control and are therefore not repeated here.
Therefore, when r~ is available for control of yi and 1(2 is minimum-phase, the RHP-zeros of Pc~ are
the same as the RHP-zeros of [On 012] and the result follows. A somewhat more restrictive version 3. Indirect control
of this theorem was proven by Larsson (2000). The proof here is due to V. Kariwala. Note that the
assumptions on the dimensions of yi and ~2 are made for simplicity of the proof and the conclusions Indirect control is when neither r2 nor it1 are available for control of yi~ The objective is to
of Theorem 10.2 still hold when these assumptions are relaxed. C minimize J = Wi nil~ but we assume that we cannot measure yi~ Instead we hope that Vt
—
controlled outputs 112 such that IIFddIl and IIF,.n21l are small or, in terms of singular values,
1. cUP1, ~r 1) (or u(Fr), if it1 is empty) is large at low frequencies. a([ P4 —F,. 1) is small. The problem of indirect control is closely related to that of cascade
418 MULTIVARIABLE FEEDBACK CONTROL ONTROL STRUCTURE DESIGN 419
control. The main difference is that in cascade control we also measure and control Yi in an
outer loop; so in cascade control we need II [Pd F,.] I small only at frequencies outside the
bandwidth of the outer control loop (involving yx).
Then.
dJ/du = 2(Gl~ )T QW~ ~ 2(Gw)TG~d = 2(G’”)~G’” {~]
An ideal “self-optimizing” variable is dJ/du, as then a is always optimal with zero loss (in
Remark I In some cases, this measurement selection problem involves a trade-off between wanting
~
C = 0
IIPdII small (wanting a strong correlation between measured outputs yi and “primary” outputs ui) the absence of implementation effor). Now, c H@ to get c dJ/du, we would
and wanting IPril small (wanting the effect of control errors (measurement noise) to be small). For
example, this is the case in a distillation column when we use temperatures inside the column On) for like
HG2 = (Gw)TG1V (10.35)
indirect control of the product compositions (yx). For a high-purity separation, we cannot place the
measurement close to the column end due to sensitivity to measurement error (liPrIl becomes large), (the factor 2 does not matter). Sincen~ ≥ ri~, + 71d, (10.35) has an infinite number of solutions, and
and we cannot place it far from the column end due to sensitivity to disturbances (IIPdII becomes large); the one using the right inverse of Gi/ is given by (10.34). It can be shown that the use of the right
see also Example 10.9 (page 408). inverse is optimal in terms of minimizing the effect of the (until now neglected) implementation error
on w, provided the measurements (y) have been normalized (scaled) with respect to their expected
Remark 2 Indirect control is related to the idea of inferential co,,rrol which is commonly used in measurement error (~2) (Alstad, 2005, p. 52). The result (10.34) was originally proved by Hon et al.
the process industry. However, with inferential control the idea is usually to use the measurement of (2005). but this proof is due to V. Kariwala.
y2 to estimate (infer) y~ and then to control this estimate rather than controlling 1)2 directly, e.g. see C
Stephanopoulos (1984). However, there is no universal agreement on these terms, and Marlin (1995)
uses the term inferential control to mean indirect control as discussed above. H computed from (10.34) will be dynamic (frequency-dependent), but for practical
purposes, we recommend that it is evaluated at the closed-loop bandwidth frequency of the
outer loop that adjusts the setpoints for r. Inmost cases. it is acceptable to use the steady-state
Optimal “stabilizing” control in terms of minimizing drift matrices.
A primary objective of the regulatory control system is to “stabilize” the plant in terms of Example 10.11 Combination of measurements for minimizing drift of distillation column. We
minimizing its steady-state drift from a nominal operating point. To quantify this, let w consider the distillation column (column “A “) with the LV—conflguration and use the same data as in
represent the variables in which we would like to avoid drift; for example, w could be the Example 10.9 (page 408). The objective is to minimize the steady-state drift of the 41 composition
weighted states of the plant. For now let y denote the available measurements and u the variables (in = states) due to variations in the feed tate and feed composition by controlling a
manipulated variables to be used for stabilizing control. The problem is: to minimize the combination of the available temperature measurements. We have u = L, flu = 1 and iid = 2 and we
drift, which variables a should be controlled (at constant setpoints) by u? We assume linear need at least n~ +n~ = 1+2 = 3 measurements to achieve zero loss (see null space method, page 397L
measurement combinations, We select three temperature measurements (y) at stages 15, 20 and 26. One reason for not selecting the
c = Hy t10 33~ measurements located at the cohtmn ends is their sensitivity to implementation error see Example 10.9.
By ignoring the imnplemnentation errol; the optimal combination of variables that nlinimnizes I I~P~° (0)112
and that we control as many variables as the number of degrees of freedom, n~, = n~. The is,fronz (10.34),
linear model is c = 0.719T15 — 0.01ST20 + 0.6947’26
in =GWn+G~d=Gul[~] When c is controlled pemfectly at c~ = 0, this gives a(Pj”(0)) = 0.363. This is significantly
smaller than o’(G~’ (0)) = 9.95, which is the “open-loop” deviation of the state variables due to the
y=G11u+G~d=G2[~] disturbances. We have not considered the effect of implementation envr so fat: Similar to (10.28), it can
be shown that the effect of implementation error on in is given by a(G~ (G7 )t). With an implementation
With perfect regulatory control (c = 0), the closed-loop response from d to in is
error of 0.05 in the individual temperature measurements, we get &(G~(G2)t) = 0.135, which is
small.
in = Pj°d; PJ° = G~7 — GWWGY)1HGY
Since generally n~ > n~, we do not have enough degrees of freedom to make in = 0 (“zero
drift”). Instead, we seek the least squares solution that minimizes IIwII2. In the absence of 10.5 Control configuration elements
implementation error, an explicit solution, which also minimizes IIPj°II2~ is
In this section, we discuss in more detail some of the control configuration elements
H = (G2m)TGUt(G91 (10.34) mentioned above. We assume that the measurements y, manipulations u and controlled
where we have assumed that we have enough measurements, fl2 ≥ n~ + ri~j. outputs z are fixed. The available synthesis theories presented in this book result in a
multivariable controller K which connects all available measurements/commands (y) with
Proof of (10.34): We want to minimize all available manipulations (a),
u=Ky (10.36)
J = IIwII~ = ~T(0b0 )T0~v~ + dT(GdjTGf d + 2uT(Gt~)TG~~d
420 MULTIVARIABLE FEEDBACK CONTROL IONTROL STRUCTURE DESIGN 421
However, such a “big” (full) controller may not be desirable. By control configuration Selectors are used to select for control, depending on the conditions of the
selection we mean the partitioning of measurements/commands and manipulations within system, a subset of the manipulated inputs or a subset of the outputs
the control layer. More specifically, we define
Control configuration. The restrictions imposed on the overall cot it roller K by In addition to restrictions on the structure of K, we may impose restrictions on the way,
decomposing it into a set of local controllers (subcontrollers, units, elements, or rather in which sequence, the subcontrollers are designed. For most decomposed control
blocks) with predetermined links and with a possibly predetermined design systems we design the controllers sequentially, starting with the “fast” or “inner” or “lower
sequence where subcontrollers are designed locally. layer” control loops in the control hierarchy. Since cascade and decentralized control systems
depend more strongly on feedback rather than models as their source of information, it is
In a conventional feedback system, a typical restriction on K is to use a one degree-of- usually more important (relative to centralized multivariable control) that the fast control
freedom controller (so that we have the same controller for r and —y). Obviously, this loops are tuned to respond quickly.
limits the achievable performance compared to that of a two degrees-of-freedom controller. In this section, we discuss cascade controllers and selectors, and in the following section,
In other cases, we may use a two degrees-of-freedom controller, but we may impose the we consider decentralized diagonal control. Let us first give somejustification for using such
restriction that the feedback part of the controller (K8) is first designed locally for disturbance “suboptimal” configurations rather than directly designing the overall controller K.
rejection, and then the prefilter (Kr) is designed for command tracking. In general, this will
limit the achievable performance compared to a simultaneous design (see also the remark on
page 110). Similar arguments apply to other cascade schemes. 10.5.1 Why use simplified control configurations?
Some elements used to build up a specific control configuration are: Decomposed control configurations can be quite complex, see for example Figure 10.13
a Cascade controllers (page 427), and it may therefore be both simpler and better in terms of control performance to
o Decentralized controllers et up the controller design problem as an optimization problem and let the computer do the
o Feedforward elements job resulting in a centralized multivariable controller as used in other chapters of this book.
o Decoupling elements If this is the case, why are simplified parameterizations (e.g. PID) and control
o Selectors configurations (e.g. cascade and decentralized control) used in practice? There are a number
of reasons, but the most important one is probably the cost associated with obtaining good
These are discussed in more detail below, and in the context of the process industry in plant models, which are a prerequisite for applying multivariable control. On the other hand,
Shinskey (1967, 1996) and Balchen and Mumme (1988). First, some definitions: with cascade and decentralized control the controllers are usually tuned one at a time with
Decentralized control is when the control system consists of independent a minimum of modelling effort, sometimes even on-line by selecting only a few parameters
feedback controllers which interconnect a subset of the output measure— (e.g., the gain and integral time constant of a P1 controller). Thus:
inents/commands with a subset of the manipulated inputs. These subsets should o A fundamental i-eason for applying cascade and decentralized control is to save on
not be used by any other controlle,:
modelling effort.
This definition of decentralized control is consistent with its use by the control community.
Other benefits of cascade and decentralized control may include the following:
In decentralized control, we may rearrange the ordering of measurements/commands and
manipulated inputs such that the feedback part of the overall controller K in (10.36) has a 0 easy for operators to understand
fixed block-diagonal structure. 0 ease of tuning because the tuning parameters have a direct and “localized” effect
S insensitive to uncertainty, e.g. in the input channels
Cascade control arises when the output from one controller is the input to
a failure tolerance and the possibility of taking individual control elements into or out of
another This is broader than the conventional definition of cascade control which
is that the output from one controller is the reference command (setpoint) to service
another. In addition, in cascade control, it is usually assumed that the inner loop
a few control links and the possibility for simplified (decentralized) implementation
0 reduced computation load
(1(2) is much faster than the outer loop (I(~)
Feedforward elements link measured disturbances to manipulated inputs. The latter two benefits are becoming less relevant as the cost of computing power is
reduced. Based on the above discussion, the main challenge is to find a control configuration
Decoupling elements link one set of manipulated inputs (“measurements”) with which allows the (sub)controllers to be tuned independently based on a minimum of model
another set of manipulated inputs. They are used to improve the peiformance information (the pairing problem). For industrial problems, the number of possible pairings
of decentralized control systems, and are often viewed asfeedforward elements is usually very high, but in most cases physical insight and simple tools, such as the RGA,
(although this is not correct when we view the control system as a whole) where can be helpful in reducing the number of options to a manageable number. To be able to tune
the “,neasured disturbance” is the tnanipulated input computed by another the controllers independently, we must require that the loops interact only to a limited extent.
decentralized controller. For example, one desirable property is that the steady-state gain from u~ to yj in an “inner”
422 MULTI VARIABLE FEEDBACK CONTROL CONTROL STRUCTURE DESIGN 423
loop (which has already been tuned) does not change too much as outer loops are closed. For
decentralized diagonal control the RGA is a useful tool for addressing this pairing problem
jo.5.3 Extra measurements: cascade control
(see page 449). In many cases, we make use of extra measurements Y2 (secondaty outputs) to provide local
disturbance iejection and linearization, or to reduce the effects of measurement noise For
Remark. We just argued that the main advantage of applying cascade and decentralized control is that example, velocity feedback is frequently used in mechanical systems, and local flow cascades
the controllers can be tuned on-line and this saves on the modelling effort. However, in our theoretical are used in process systems For distillation columns, it is usually recommended to close an
treatment we need a model, for example, to decide on a control configuration. This seems to be a
inner temperature loop (y2 = T), see Example 10 9
contradiction, but note that the model required for selecting a configuration may be more “generic” and
A typical implementation with two cascaded SISO controllers is shown in Figure 10 10(a)
does not need to be modified for each particular application. Thus, if we have found a good control
configuration for one particular applications, then it is likely that it will work well also for similar where
applications. r2 =Ki(s)(ri —yi) (1038)
U 1(2(s)(i’2 — uJ2) (10 39)
10.5.2 Cascade control systems u is the manipulated input, yr the controlled output (with an associated control objective ri)
nnd y2 the extra measurement Note that the output r2 from the slower pi imaiy controller
We want to illustrate how a control system which is decomposed into subcontrollers can be
K1 is not a manipulated plant input, but rather the reference input to the faster secondaty
used to solve multivariable control problems. For simplicity, we use SISO controllers here of
(or slave) controllei 1(2 For example, cascades based on measunng the actual manipulated
the form
variable (in which case y2 = urn) are commonly used to reduce unceitainty and nonlinearity
= I(~(s)O~ y~) — (10 37) at the plant input
where K~(s) is a scalar. Note that whenever we close a SISO control loop we lose the
corresponding input, ~ as a degree of freedom, but at the same time the reference, r~,
becomes a new degree of freedom.
r1 +!...._HI( [-4~__.1K2I U1G2_H~~ ~joi +~
It may look like it is not possible to handle non-square systems with 5150 controllers.
However, since the input to the controller in (10.37) is a reference minus a measurement, we
can cascade controllers to make use of extra measurements or extra inputs. A cascade control
Figure 10.11 Common case of cascade control wheie the primary output yi depends directly on the
structure results when either of the following two situations arise:
extra measurement 1/2
• The reference e’~ is an output from another controller (typically used for the case of an extra
measurementy~), see Figure 10.10(a). This is conventional cascade control. In the general case, Yl and Y2 in Figure 10 10(a) are not directly related to each other,
o The “measurement” y~ is an output from another controller (typically used for the case of and this is sometimes referred to as pat allel cascade contiol However, it is common to
an extra manipulated input u~, e.g. in Figure 10.10(b) where it2 is the “measurement” for encounter the situation in Figure 10 11 wheie yr depends directly on 112 This is a special case
controller K1). This cascade scheme is referred to as input resetting. of Figure 10 10(a) with “Plant” = [G,~G2], and it is considered further in Example 10 12
and Exercise 10 7
~~iT_H 11 ~ K2 Plant
Remark. Centralized (parallel) implementation. Alternatively, we may use a centralized
implementation it = K(r y) where K is a 2-input I-output controller This gives
—
1. Disturbances arising within the secondary ioop (before y2 in Figure 10.11) are corrected mny be unnecessarily fast and to improve robustness we may want to select a larger ‘r~2. Its
by the secondary controller before they can influence the primary variable yi value will not affect the outer loop, provided ‘rc2 < r01 /5 approximately, where ‘r~ is the
2. Phase lag existing in the secondary part of the process (02 in Figure 10.11) is reduced response time in the outer loop.
measurably by the secondary loop. This improves the speed of response of the primary Example 10.12 Consider the closed-loop system in Figure 10.11, where
loop.
3. Gain variations in the secondary part of the process are overcome within its own loop. 01 = (—06s+1) e — and 02 10.4s + 1)
(6s+1) (6s+1)(
Moran and Zafiriou (1989) conclude, again with reference to Figure 10.11, that the use of an We first consider the case where we only use the primaly measurement (gi), i.e. design the
extra measurement y2 is useful under the following circumstances: cont,-oller bared on C = 0102. Using the half rule on page 57, we find that the effective delay is
(a) The disturbance d2 (entering before the measurement y2) is significant and 01 is non- Ci 6/2=+ 0.9
wit/i= K. 0.4 and
± 0.6,-~+=1 =5, and
9. The using the SIMC
closed-loop tuning
response rules
oft/ic on page
system 57, changes
to step a P1 controller is designed
of magnitude 1 in
minimum-phase e.g. 0~ contains an effective time delay [see Example 10.12].
— the setpoint (at I = 0) and of magnitude 6 in disturbance d2 (at I = 50) is shown in Figure 10.12. From
(b) The plant 02 has considerable uncertainty associated with it e.g. 02 has a poorly known
— the dashed line, we see that the closed-loop disturbance rejection is poor~
nonlinear behaviour and the inner loop serves to remove the uncertainty.
—
In terms of design, they recommended that 1(2 is first designed to minimize the effect of d2 5 ,-,.% I
on /ll (with 1(1 = 0) and then K1 is designed to minimize the effect of d1 on yi~ 4
An example where local feedback control is required to counteract the effect of high-order 3 Without Cascade
lags is given for a neutralization process in Figure 5.25 on page 216. The benefits of local S
feedback are also discussed by Horowitz (1991). ~2
Exercise 10.7 We want to derive the above conclusions (a) and (b) from an input—output ~ ~/J~poin~hange(rJ1 Disturbance Change (d2)
controllability analysis, and also explain (c) why we may choose to use cascade control if we want
to use simple controllers (even with d2 = 0). 0 20 40 60 80 100
Outline of solution: (a) Note that if 01 is minimum-phase, then the input—output controllability of 02 Time [sec]
and 0102 are in theo,y the sonic, and for rejecting d2 there is noflindamental advantage in measuring
vi rather than y2. (b) The inner loop L2 = 021(2 removes the uncertainty if it is sufficiently fast (high- Figure 10.12: Improved control performance with cascade control (solid) as compared to single-loop
gain feedback). It yields a transferfunction (1 + L2) ‘L2 which is close to 1 at frequencies where K~ control (dashed)
is active. (c) In most cases, such as when PID controllers are used, the practical closed-loop bandwidth
is limited app roximatelv by the frequency w,~, where the phase of the plant is 180° (see Section 5.8 Next, to improve disturbance rejection, we make use of the measurement 212 in a cascade
on page 191), sow? inner cascade loop may )‘ieldfaster control (for rejecting d1 and tracking r1) if the implementation as shown in Figure 10.11. First, the P1 controller for the inner loop is designed based
phase of 02 is less than that of 0102. on 02. The effective delay is 82 = 0.2. For ‘fast control” the SIMC rule (page 57,) is to use Tc2 82.
Howevem; since this is an inner 1001), where tight control is not critical, we choose Tc2 = 282 = 0.4,
Thning of cascaded Pifi controllers using the SIMC rules. Recall the SIMC PID which gives somewhat less aggressive settings with K~2 = 10.33 and ?~I2 = 2.4. The P1 controllerfor
procedure presented on page 57, where the idea is to tune the controllers such that the the outer loop is next designed with the inner loop closed. From (10.41), the transfer function for the
resulting transfer function from r to y is T ~
‘-Cs
Here, 6 is the effective delay in 0 bitier loop is approximated as a delay of Tc2 + 0~ = 0.6 giving a1
Oie 0 ‘ Os — (—O.6s+l)
(Gs+1)
(from it to y) and ‘1~c is a tuning parameter with ‘rc = C being selected for fast (and still Thus, for the outer loop, the effective delay is 01 = 0.6 + 1.6 2.2 and with Tel 2.2 (“fast
robust) control. Let us apply this approach to the cascaded system in Figure 10.11. The inner control”), the resulting SlMCPltunings are K~1 = 1.36 and Tn = 6. From Figure 10.12, we note that
loop (1(2) is tuned based on 02. We then get 212 = T2i’2, where ~‘2 ~~‘l and 02 is the the cascade controller greatly improves the rejection of d2. The speed of the setpoint tracking is also
effective delay in 02. Since the inner loop is fast (02 and Tc2 are small), its response may be improved, because the local control (1(2) reduces the effective delay for control of yi.
approximated as a pure time delay for the tuning of the slower outer loop (I(~), Exercise 10.8 To illustrate the benefit of using inner cascades for high-omrier plants, case (c) in
Exercise 10.7, cotisider Figure 10.11 and a plant 0 0~ 02030405 with
1 e~0’~’-’~ (10.41)
01 = 02 Os 04 C
s+1
I
The resulting model for tuning of the outer loop (1(i) is then
C’onskler the following two cases:
= G1T, 0ie~1+Tc2)5 (10.42) (a) Measuren,ent of yi only, Le. 0= 1
(b) Four additional mneasum-ements available (y2, ~5, y4, y~) on outputs of 01,02, 03 and 04.
and the PID tuning parameters for K1 are easily obtained using the SJMC rules. For a “fast For case (a) design a PID controller andfor case (b) use five simple proportional controllers with gains
response” from ~‘2 to 212 in the inner loop, the SIMC-rule is to select ic2 = 02. However, this K = 10. C’ompare the responses.
426 MULTIVARIABLE FEEDBACK CONTROL CONTROL STRUCTURE DESIGN 427
= I(2(s)Qr — y) (10.43)
The objective of the other slower controller is then to use input it1 to reset input it2 to its
desired value l~U2:
= 1(1(s)(r,~, — ui)1 Vi = U2 (10.44)
and we see that the output it2 from the fast controller 1(2 is the “measurement” Vi for the
slow controller 1(~. ill
In process control, the cascade implementation with input resetting often involves valve
position control, because the extra input it2, usually a valve, is reset to a desired position by
the outer cascade.
Centralized (parallel) implementation. Alternatively, we may use a centralized
implementation it = 1((r y) where K is a 1-input 2-output controller. This gives
—
it1 =Kii(s)(r—y), it2 =K21(s)(r—y) (10.45) Figure 10.13: Control configuration with two layers of cascade control
Here two inputs are used to control one output, so to get a unique steady-state for the inputs it1 In Figure 10.13, controllers K1 and K2 are cascaded in a conventional ma~Iner~ whereas controllers
and it2 we usually let K11 have integral control, whereas ~21 does not. Then it2(t) will only 1(2 and 1(3 are cascaded to achieve input resetting. The “input” Ui is not a (physical) plant input, but it
be used for transient (fast) control and will return to zero (or more precisely to its desired does play the role ofan input (manipulated variable) as seen from the controller K1. The corresponding
value r,~2) as I —, oc. With r~2 = 0 the relationship between the centralized and cascade equations are
implementation is K~ = —I(1K2 and K21 = K2. ui = Ki(s)(ri —vi) (10.46)
Comparison of cascade and centralized implementations. The cascade implementation K~(s)(ri — y2), r2 = Ui (10.47)
in Figure 10.10(b) has the advantage, compared to the centralized (parallel) implementation, U3 Ka(s)(ra—ya), y8it2 (10.48)
of decoupling the design of the two controllers. It also shows more clearly that VU2’ the
reference for it2, may be used as a degree of freedom at higher layers in the control system. ~‘ontroller I(~ controls the prilnaly output ~ at its reference ri by adjusting the ‘input” itj, which
Finally, we can have integral action in both K1 and 1(2, but note that the gain of K1 should is the reference value for Y2. Controller 1(2 controls the secondary output Y2 using input ~ Finally,
controller 1(3 manipulates it~ slowly in order to reset input ~ to its desired value n.
be negative (if effects of it1 and it2 on y are both positive).
Typically, the controllers in a cascade system are tuned one at a time starting with the
Exercise 10.9 * Draw the block diagi-ams for the two centralized (parallel) implementations fastest ioop. For example, for the control system in Figure 10.13 we would probably tune the
corresponding to Figure 10.10. three controllers in the order 1(2 (inner cascade using fast input), 1(3 (input resetting using
slower input), and 1(~ (final adjustment of yr).
Exercise 10.10 Derive the closed-loop transfer functions for the effect of r on
v. itj and it2 in the
cascade input resetting scheme of Figure 10.10(b). As an example use C = [C11 G12 = [1 1 and Exercise 10.11 Process control application. A practical case of a control system like the one in
use integral action in both controllers, 1(1 = —1/s and 1(2 = 10/s. Show that input U2 is reset at Figure 10.13 is in the use of a pre -heater to keep a reactor temperature yi at a given value rj. In this
steady-state. case, !t2 may be the outlet temperaturefrom the pre-heatei; u2 the bypass flow (which should be reset to
V3. say 10% of the totalfiow), and u~ the flow of heating ,nedium (steam). Process engineering students:
Make a process flowsheet with instrunzentation lines (not a block diagram) for this heater/reactor
process.
428 MULTIVARIABLE FEEDBACK CONTROL CONTROL STRUCTURE DESIGN 429
10.5.6 Selectors A high-gain feedback Foi example, it can be proved theoretically (Zames and Bensoussan,
Split-range control for extra inputs. We assumed above that the extra input is used to 1983) that with decentralized control one may achieve perfect control of all outputs, provided
improve dynamic performance. Another situation is when input constraints make it necessary the plant has no RHP-zeros that limit the use of high feedback gains Furthermore, for
to add a manipulated input. In this case, the control range is often split such that, for example, a stable plant G(s) (also with RHP-zeros), it is possible to use integral control in all
u1 is used for control when yE [ymjn,yl], and it2 is used when y € [yi,ymax}. channels (to achieve perfect steady-state control) if and only if G(O) is non-singular (Campo
Selectors for too few inputs. A completely different situation occurs if there are too few and Moran, 1994) Both these conditions are also required with full multivariable control
inputs. Consider the case with one input (it) and several outputs (yl,y2,. .). In this case,
.
Nevertheless, for “interactive” plants and finite bandwidth controllers, there is a performance
we cannot control all the outputs independently, so we either need to control all the outputs loss with decentralized control because of the interactions caused by non-zero off-diagonal
in some average manner, or we need to make a choice about which outputs are the most elements in C The interactions may also cause stability problems A key element in
important to control. Selectors or logic switches are often used for the latter. Auctioneering decentralized control is therefore to select good “pairings” of inputs and outputs, such that
selectors are used to decide to control one of several similar outputs. For example, such a the effect of the interactions is minimized
selector may be used to adjust the heat input (it) to keep the maximum temperature (max1 y~) The design of decentralized control systems typically involves two steps
in a fired heater below some value. Override selectors are used when several controllers
1 The choice of pairings (control configuration selection)
compute the input value, and we select the smallest (or largest) as the input. For example, this
2. The design (tuning) of each controller, k~(s)
is used in a heater where the heat input (it) normally controls temperature (yij, except when
the pressure (y2) is too large and pressure control takes over. The optimal solution to this problem is very difficult mathematically First, the number of
painng options in step I is in1 for an m x in plant and thus increases c~ponentza1ly with the
size of the plant Second, the optimal controller in step 2 is in general of infinite order and
10.6 Decentralized feedback control may be non-unique In step 2, there are three main approaches
Fully coordinated design. All the diagonal controller elements kg(s) are designed
10.6.1 Introduction simultaneously based on the complete model C(s) This is the theoretically optimal
approach for decentralized control, but it is not commonly used in practice First,
as just mentioned, the design problem is very difficult Second, it offers few of the
“normal” benefits of decentralized control (see page 421), such as ease of tuning,
reduced modelling effort, and good failure tolerance In fact, since a detailed dynamic
model is required for the design, an optimal coordinated decentralized design offers
few benefits compared to using a “full” multivariable controller which is easier to
design and has better performance The exception is situations where multivariable
control cannot be used, for example, when centralized cooordination is difficult
for geographical reasons We do not address the optimal coordinated design of
decentralized controllers in this book, and the reader is referred to the literature (e g
Sourlas and Manousiouthakis, 1995) for more details
Figure 10.14: Decentralized diagonal control of a 2 x 2 plant Independent design. Each controller element k, (s) is designed based on the corresponding
diagonal element of C(s), such that each individual loop is stable Possibly, there
We have already discussed, in the previous sections on control configurations, the use of is some consideration of the off-diagonal interactions when tuning each loop This
decentralized control, but here we consider it in more detail. To this end, we assume in this approach is the main focus in the remaining part of this chapter It is used when it is
section that C(s) is a square plant which is to be controlled using a diagonal controller (see desirable that we have mtegrzty where the individual parts of the system (including each
Figure 10.14) loop) can operate independently The pairing rules on page 449 can be used to obtain
ki(s) pairings for independent design In short the rules are to (1) pair on RGA elements
k2(s) close to I at crossover frequencies, (2) pair on positive steady-state RGA elements,
K(s) = diag{k1(s)} = (10.49)
and (3) pair on elements that impose minimal bandwidth limitations (e g small delay)
,
km(s)] The first and second rules are to avoid that the interactions cause instability The third
This is the problem of decentralized (or diagonal) feedback control. rule follows because we for good performance want to use high-gain feedback, but we
It may seem like the use of decentralized control seriously limits the achievable control require stable individual loops For many interactive plants, it is not possible to find a
performance. However, often the performance loss is small, partly because of the benefits set of pairing satisfying all the three rules
430 MULTIVARIABLE FEEDBACK CONTROL CONTROL STRUCTURE DESIGN 431
Sequential design. The controllers are designed sequentially, one at a time, with the
betWeen 1 and —1 We design the decentralized controller to give first-order responses with
previously designed (“inner”) controllers implemented. This has the important
time constant r~ in each of the individual loops, that is, y~ ~Tlr: For simplicity, the
advantage of reducing each design to a scalar (5150) problem, and is well suited for plants have no dynamics, and the individual controllers are then simple integral controllers
on-line tuning. The sequential design approach can be used for interactive problems
where the independent design approach does not work, provided it is acceptable to have
= f ~, see the IMC design procedure on page 54 To make sure that we do not use
aggressive control, we use (in all simulations) a “real” plant, where we add a delay of 0 5
“slow” control of some output so that we get a difference in the closed-loop response time units in each output, i e 0s,m CC° ~ This delay is not included in the analytic
times of the outputs. One then starts by closing the fast “inne?’ loops (involving the expressions. e g (10 52), in order to simplify our discussion, but it is included for simulation
outputs with the fastest desired response times), and continues by closing the slower and tuning With a delay of 0 5 we should, for stability and acceptable robustness, select
“outer” loops. The main disadvnntage with this approach is that failure tolerance is not
~ ~ 1, see the SIMC rule for “fast but robust” control on page 57 In all simulations we drive
guaranteed when the inner loops fail (integrity). In particular, the individual loops are the system with reference changes of r1 = 1 at I = 0 and r2 = 1 at I = 20
not guaranteed to be stable. Furthermore, one has to decide on the order in which to
close the loops.
I setpoint
The effective use of a decentralized controller requires some element of decoupling.
Loosely speaking, independent design is used when the system is decoupled in space (G(s)
is close to diagonal), whereas sequential design is used when the system outputs can be Sos yi an
decoupled in time.
The analysis of sequentially designed decentralized control systems may be performed
using the results on partial control presented earlier in this chapter. For example, after closing 0 I I
0 5 10 15 20 25 30 35 40
the inner loops (from it2 to y2), the transfer function for the remaining outer system (from it1 Time [sec]
to y~) is ft = — G12K2(I + G221(i)’Gii); see (10.26). Notice that in the general
case we need to take into account the details of the controller ~ However, when there is (a) Diagonal pairing, controller (tO Si) with ri = ri 1
a time scale separation between the layers with the fast loops (1(2) being closed first, then
we may for the design of K1 assume 1(2 —÷ cc (“perfect control of y2”), and the transfer
function for the remaining “slow” outer system becomes P,1 = —Gj2G~’ 021; see
(10.28), The advantages of the time scale separation for sequential design of decentralized
controllers (with fast “inner” and slow “outer” loops), are the same as those for hierarchical
cascade control (with fast “lowe?’ and slow “uppe?’ layers) as listed on page 387. Examples
of sequential design are given in Example 10.15 (page 432) and in Section 10.6.6 (page 445).
I
The relative gain array (RGA) is a very useful tool for decentralized control. It is defined as
I
0 5 10 15 20 25 30 35 40
A = C x (01)T, where x denotes element-by-element multiplication. It is recommended Time [secl
to carefully read the discussion about the “original interpretation” of the RGA on page 83, (b) Off-diagonal pairing, plant (10 53) and controller (10 54)
before continuing. Note in particular from (3.56) that each RGA element represents the ratio
between the open-loop (gjj) and “closed-loop” (~jj) gains for the corresponding input-output
Figure 10.15 Decentralized control of diagonal plant (10 50)
pair, A~ = gij/g~jj. By “closed-loop” here we mean partial control with the other outputs
perfectly controlled. Intuitively, we would like to pair on elements with A~~(s) close to 1,
because this means that the transfer function from u~ to yj is unaffected by closing the other Example 10.14 Diagonal plant. Consider the simplest case of a diagonal plant
loops.
Remark. We assume in this section that the decentralized controllers k~(s) are scalar. The treatment
may be generalized to block-diagonal controllers by, for example, introducing tools such as the block
0= [~ ~]
web RCA = I The off-diagonal elements ale zeiv, so theze are no inte; actions and decentralized
(1050)
relative gain; e.g., see Manousiouthakis et al. (1986) and Kariwala et al. (2003). control with diagonal paznngs is obviously optunal
Diagonal pan zngs The emit, olle,
10.6.2 Introductory examples ~ [f i] (1051)
To provide some insight into decentralized control and to motivate the material that follows gives 111cc decoupled fi;st-orde; responses
we start with some simple 2 x 2 examples. We assume that the outputs y~ and 112 have
been scaled so that the allowable control errors (e~ = = 1,2 are approximately
1 1
—
yi = Ti and Ui = Ti (1052)
TiS+l
S
432 MULTIVARIABLE FEEDBACK CONTROL CONTROL STRUCTURE DESIGN
G~=GI
r0 11T
I =1
ro 1
[1 0] 1.1 (1053) ~ 0 5 10 15 20 25 30 35 40
0
This corresponds to pairing on two zero ele,nents, oN = 0 and 92 = 0, and we cannot use independent I Time [sec]
or sequential controller design. A coordinated (simultaneous) conti-oller design is required and after (a) Diagonal pairing, controller (10 56) with ri = 5 and Ta = 1
some trial and error we arrived at the following design
K’(s)= [ S
0
—(Ois+O.i)
0
(0-is +2 ) (1054)
I
Peiformnance is of course quite poor as is illustrated in Figure 10.15(h), but it is nevertheless workable so
(surprisingly!).
Sc
Exercise 10.12 consider in more detail the off-diagonal pairings for the diagonal plant in the
example above. (i) Explain why it is necessary to use a negative sign in (10.54). (ii) Show that the
1 —l
1 0 5 10 15 20 25 30 35 40
plant (10.53) cannot be stabilized by a pure integral action controller of the form IC(s) = diag(~1~). I Ttme [icc]
Example 10.1$ One-way interactive (triangular) plant. consider I (b) Off-diagonal painng, plant (10 59) and controller (10 60) with r~ = Sand i~2 = 1
c=[~ ?j (10 55) Figure 10.16 Decentralized control of triangular plant (10 55)
for which
L—~
01
‘~J and RGA=[~ ~] Remark. The peiforniance problem was not detected fiom the RGA matrix, because it only mneasuies
two-way inteiactionv Howevem, it may be detectedfrom the “Pemformance RGA “matrix (PRGA), which
foi our plant with unity diagonal elements is equal to G~1 As discussed oil page 437, a laige clement
The RGA matrix is ideiititi; which suggests that the diagonal pairings are best for this plant. However;
in a row of PRGA indicates that fast control is needed to get acceptable ieference tracking Thus, the
we see that there is a large interaction (921 = 5) fivin u1 to i,j~, which, as one might expect, implies
2, 1 element in G~ of magnitude 5, confim ins that control of 92 must be about 5 tunes faste, thai, that
poor pemformance with decentralized controL Note that this is not a fundamental control limitation as
the decoupling controller K(s) = I [1~ ~] gives ‘uce decoupled responses, identical to those shown
of
“= [t ]
with ~ = ~ = 1 (assuming a 0.5 time delay). Howevem; a closer analysis shows that the closed-loop
(10.56)
This co, respondv to paiming on a zero element g~i = 0 This pairing it not acceptable if we use the
independent design approach, because i4 has no effect oil yi so “loop 1” does not work by itself
However; with the sequetitial design app roach, we may first close the loop around 92 (on the eleineni
response with the cont,-oller (1056) becomes
oN = 5) With the IMC design approach, the cont,olle, becomes hE(s) = 1/(g~oT2s) = i/(5i-~s)
1
= and with this
10 57
forloop closed,pamtral
u does have inan(10
effect
28))on Ui Assuming tight contiol of 92 gives (using the
Ti
Tj 5 +I erpression “pemfect” cont,ol
Si-a S 1
92 = Ti + (10.58)
(ris+1)(i-as-l-1) T25+1 Ui = (~;1 — 9~i) ~ =
If we plot the intel-action term from i’i to 92 as afimction offrequemicj~ then we find thatfor i-i = ~ it
has a peak value of about 2.5. Therefore, with this contmoller the response for 92 15 not acceptable when The controllem fom the pairing UE-yi becomes hI(s) = i/(gN-ris) = —5/Qi-is) and thus
we make a change in ri. To keep this peak below 1, we need to select i-1 ≥ 5T2, approximately This is
illust,-ated in FigurelOi6(a) where we have selected i-i = 5 and ~ = 1. Thus, to keep 1e21 ≤ 1, we
must accept slow control of Vi. [~ ?] 5ra S (1060)
434 MULTIVARIABLE FEEDBACK CONTROL STRUCTURE DESIGN 435
The response with i-i = 5 and ~ = 1 is shown in Figure 10.16(b). We see that peifor,nance is only
slightly worse titan with the diagonal pairings. Howeve~ more seriously, we have the problem that ~f
control of 92 fails, e.g. because i4 = ui saturates, then we also lose control of yi (in addition, we get
2
instability with 112 drifting away, because of the integral action for yi). The situation is particularly bad
in this case because of the pairing on a zero element, but the dependence on faster (inner) loops being
in service is a general problem with sequential design.
Exercise 10.13 . Redo the simulations in Example 10.15 with 20% diagonal i~zput uncertainty. 0
Specifically, add a block [162 ~ between the plant and the controller Also si,nulate with the
—J
0 5 10 15 20 25 30 35
decoupler f<(s) F 1
i3 [—5
= which is expected to be particularly senstive to uncertainty (why? — Time [see]
40
see conclusions on page 251 and note that 7 (C) = 10 for this plant).
(a) 912 = 0.17; controller (10.62) with r1 = 5 and fl) = 1
Example 10.16 Two-way interactive plant. Consider the plant
2
~= [~ 912
1 (1061)
for which
0
~i~<eE_ setpoint
=
1—5912
1 1
[—s
—9121
1
and RCA = 1 — 1Sgi. L
~ i
—5g12
—5912
1 —1
0 5 10 15 20 25 30 35 40
The control properties of this plant depend on the parameter gig. The plant is singular (det(G) = Time [see]
1 — 5912 = 0) for 912 = 0.2, and in this case independent control of both outputs is impossible,
whatever the controller. We will examine the diagonal pairings using the independent design controller (b) 912 = —0.2; controller (10.62) with -rj = 5 and r2 = 1
_L 01 2
K— [ris I (10.62)
0j-~
~2 3
The individual loops are stable with responses Ui = (ri+i) i-1 and 92 = (72s+i) ~ respectively. With
both loops closed, the response is ~‘ = CK(I + GIC) ~ r = Pr, where
0
1 912 Ti .9
(r1s + 1)(T28 ± 1)— 5912 5T2 S TiS +1 — 5912 —1
0 5 10 15 20 25 30 35 40
We see that T(0) = I, so we have peifect steady-state control, as is expected with integral action. Time [sec]
1-loweve;; the interactions as expressed by the term 5912 may yield instability and we find that the
system is closed-loop unstable for 912 > 0.2. This is also expected because the diagonal RGA elements (c) 912 = —1; controller (10.62) with ri = 5 and ~1a = 1
are negative for 912 > 0.2, indicating a gain change between the open-loop (g~~) and closed-loop (flit)
9
transfer functions, which is incompatible with integral action. Thus, for 912 > 0.2, the off-diagonal
pairings must be used if we want to use an independent design (with stable individual loops). 92
We will now consider three cases, (a) 912 = 0117, (b) 912 = —0.2 and (c) 912 = —1, each with the
same controller (10.62) with ~ = 5 and 2~2 = 1. Because of the large interactions given by 921 = 5,
we need to control 92 faster than yi 0
~ seipoint
—— &7
[—33.3
—1.1
6.7
and RCA= [-~~ —5.7 0 5 10 15 20
Time [see]
25 30 35 40
6.7
The large RGA elements indicate strong interactions. Furthem-more, recall front (3.56) that the (d) 912 = —1; controller (10.62) with Ti = 21.95 and fl~ = 1
RGA gives the ratio of the open-loop and (partially) closed-loop gains, gjj ~ Thus, in terms of
decentralized control, the large positive RGA elements indicate that ~jj is small and the ioops will Figure 10.17: Decentralized control of plant (10.61) with diagonal pairings
tend to counteract each other by reducing the effective 1001) gain. This is confirmed by simulations
in Figure lO.17(a).
436 MULTIVARIABLE FEEDBACK CONTROL ONTROL STRUCTURE DESIGN 437
~
-i 1 0.17
=[~~
0.17
0.17 10.17 0.83
and RGA=1033 0.17 denoted L~ = g~k5, which is also equal to the i’th diagonal element of L —
=
—
GIc[.
The RGA indicates clearly that the off-diagonal pairings are preferable. Neve,-theless, we will
54(1+GKY-’=diagj1
+g~
kJ and T=I—S (10.64)
consider the diagonal pairings with rj = 5 and T2 = 1 (as befo;-e). The response is poor as seen in
Figure 10. I7(c). The closed-loop system is stable, but very oscillatory. This is not surprising as the contain the sensitivity and complementary sensitivity functions for the individual loops. Note
diagonal RGA elements of 0.17 indicate that the interactions increase the effective loop gains by a that S is not equal to the matrix of dingonal elements of S = (I + GK)’. —
factor 6 (= 1/0.17). To study this in more detail, we write the closed-loop polynomial in standard With decentralized control, the interactions are given by the off-diagonal elements G C. —
form The interactions can be normalized with respect to the diagonal elements and we define
(Tis+1)(T2s+1)—Sgi2=T2s2+2T(s+1
with E 4 (G G)G’ — (10.65)
T /
/ T5T2
and C = ~ y~p~z~ — 5gi2 The “magnitude” of the matrix E is commonly used as an “interaction measure”. We will
V1 — 59i2
show that p(E) (where p is the structured singular value) is the best (least conservative)
We note that we get oscillations (0 < ( < 1), when 912 is negative and large. For example,
measure, and will define “generalized diagonal dominance” to mean p(E) < 1. To derive
912 = —1, Ti = 5 and T2 = 1 gives C = 0.55. Interestingly, we see from the expression for ~
that the oscillations ‘nay be reduced by selecting Tj and T2 to be “lore different. This follows because these results we make use of the following important factorization of the “overall” sensitivity
is the ratio between the arithmetic and geometric ,,ieans, which is larger the more different function S = (I + GK)’ with all loops closed,
Ti and ~ are. Indeed, with 912 = —1 we find that oscillations can be eliminated (( = 1) by selecting
= 21.95-7-2. This is confirmed by the simulations in Figurelo.17(d). The response is surprisingly S = (I+ET)’ (10.66)
~ ~
good taking into accou,zt that we are using the wrong pairings. overall individual loops interactions
Exercise 10.14 Design decentralized controllers for the 3 x 3 plant C(s) = G(0)e°5s where C(0)
Equation (10.66) follows from (A.147) with C = C and C’ = G. The reader is encouraged
is given by (10.79). Try both the diagonal pairings and the pairings corresponding to positive steady-
01 o~
to confirm that (10.66) is correct, because most of the important results for stability and
state RGA elements, i.e. C’ = C 1 0 0 performance using independent design may be derived from this expression.
001 A related factorization which follows from (A.148) is
The above examples show that in many cases we can achieve quite good performance S 5(1 — E5S)’ (I — Es) (10.67)
with decentralized control, even for interactive plants. However, decentralized controller
design is more difficult for such plants, and this, in addition to the possibility for improved where
performance, favours the use of multivariable control for interactive plants. (C — G)G~ (10.68)
With the exception of Section 10.6.6, the focus in the rest of this chapter is on
(10.67) may be rewritten as
independently designed decentralized control systems, which cannot be analyzed using the
S = (I + S(F — fl)’SF (10.69)
expressions for partial control presented earlier in (10.28). We present tools for pairing
selections (step 1) and for analyzing the stability and performance of decentralized control where F is the performance relative gain array (PRGA),
systems based on independent design. Readers who are primarily interested in applications of
decentralized control may want to go directly to the summary in Section 10.6.8 (page 448). F(s) 4 C(s)C_i(s) (10.70)
10.6.3 Notation and factorization of sensitivity function 10.6.7 we discuss in more detail the use of the PRGA.
These factorizations are particularly useful for analyzing decentralized control systems
G(s) denotes a square in x in plant with elements gjj• With a particular choice of pairings based on independent design, because the basis is then the individual loops with transfer
we can rearrange the columns or rows of G(s) such that the paired elements are along the function S.
438 MULTIVARIABLE FEEDBACK CONTROL ONTROL STRUCTURE DESIGN 439
10 6 4 Stability of decenti alized control systems In both the above Theorems, (i) and (ii) are necessary and sufficient conditions for stability,
We consider the independent design procedure and assume that (a) the plant G is stable and ,hereas the spectral radius condition (iii) is weaker (only sufficient) and the p-condition
(b) each individual loop is stable by itself (S and T are stable) Assumption (b) is the basis ondition (iv) is even weaker. Nevertheless, the use of p is the least conservative way of
for independent design Assumption (a) is also required for independent design because we ~splitting up” the spectral radius p in condition (iii).
want to be able to take any loop(s) out of service and remain stable and this is not possible if Equation (10.72) is easy to satisfy at high frequencies, where generally U(T) —* 0.
the plant is unstable ~~milnrly, (10.74) is usually easy to satisfy at low frequencies since U(S(0)) 0 for systems
To achieve stability of the overall system with all loops closed, we must require that the with integral control (no steady-state offset). Unfortunately, the two conditions cannot be
interactions do not cause instability We use the expressions for Sin (1066) and (1069) to combined over different frequency ranges (Skogestad and Moran, 1989). Thus, to guarantee
derive conditions for this stability we need to satisfy one of the conditions over the whole frequency range.
Since (10.72) is generally most difficult to satisfy at low frequencies, where usually
Theorem 10 3 With assumptions (a) and (b) the overall system is stable (S is stable) cr(T) 1, this gives rise to the following pairing rule:
(0 if and only zf(I + ET)’ is stable whete E = (0— • Prefer pairings with p(E) < 1 (“diagonal dominance”) at frequencies within the closed-
(a) if and only if det(I + ET(s)) does not encucle the oiigin as $ traveises the Nyquist
loop bandwidth.
D contoui
(iU) if Let j\denote the RGA of 0. For ann x n plant .A~~(0) > 0.5 Vi is a necessary condition
p(ET(jw)) < 1 Vw (10 71) for p(E(0)) < 1 (diagonal dominance at steady state) (Kaniwala et al., 2003). This gives the
(iv) (and (JO 7J)is satisfied) if following pairing rule: Prefer pairing on steady-state RQA elements larger than 0.5 (because
otherwise we can never have p(E(0)) < 1).
~(T)=maxI~I<1/p(E) Vw (1072) Since (10.74) is generally most difficult to satisfy at high frequencies where U(S) 1, and
since encirclement of the origin of det(I — EsS(s)) is most likely to occur at frequencies up
The stiuctuted singulat value ,u(E) is computed with iespect to a diagonal structute (of T) to crossover, this gives rise to the following pairing rule:
• Prefer pairings with p(E5) < 1 (“diagonal dominance”) at crossover frequencies.
Proof (Grosdidier and Moran 1986) (ii) follows from the factorization S = S(I + ET)_i in (10 66)
and the generalized Nyquist theorem in Lemma AS (page 543) (in) Condition (10 71) follows from Gershgorin bounds. An alternative to splitting up p(ET) using p, is to use Gershgonin’s
the spectral radius stability condition in (4 110) (iv) The least conservative way to split up p(ET) is to
theorem, see page 519. From (10.71) we may then derive (Rosenbrock, 1974) sufficient
use the structured singular value From (8 92) we have p(ET) < p(E)U(T) and (10 72) follows C
conditions for overall stability, either in terms of the rows of 0,
Theorem 10 4 With assumptions (a) and (b) and also assuming that that G and 0 have no
RHP-zeros, the overall system is stable (S is stable): —
IZII < Igiil/ZIgiiI Vi,Vw (10.75)
j≠i
(i) if and only ~f(I EsS(s))’ is stable wheme Es = (0—
—
(a) if and only if det(I E5S) does not encode the ongin ass tiaverses the Nyquist D
— or, alternatively, in terms of the columns,
contoum
(ui)if 1t51 < Ig~~I/ZIgj~I Vi,Vw (10.76)
p(E5S(jw)) < 1 Vw (10 73) is’”
(iv) (and (JO 73)is satisfied) if This gives the important insight that it is preferable to pair on large elements in 0,
because then the sum of the off-diagonal elements, Z1~ gij I and Z.1≠~ ig~~ is small. The
U(S)=maxI~,I < 1/p(Es) Vw (1074) “Gershgorin bounds”, which should be small, are the inverse of the right hand sides in (10.75)
and (10.76),
The structumed singulam value p(Es) is computed with iespect to a diagonal sti uctuje (of 5) The Gershgorin conditions (10.75) and (10.76), are complementary to the p-condition in
(10.72). Thus, the use of (10.72) is not always better (less conservative) than (10.75) and
Proof The proof is similai to that of Theorem 103 We need to assume no RHP zeros in order to get (10.76). It is true that the smallest of the i = 1,... n-i upper bounds in (10.75) or (10.76)
(i) C is always smaller (more restrictive) than 1/p(E) in (10.72). However, (10.72) imposes the
Remark The p conditions (10 72) and (1074) for (nominal) stability of the decentralized control same bound on ~ for each loop, whereas (10.75) and (10.76) give individual bounds, some
system c’~n be geneialized to include robust stability and iobust performance see equations (3la b) of which may be less restrictive than 1/p(E).
in Skogesiad and Moran (1989) Diagonal dominance. Although “diagonal dominance” is a matrix property, its definition
has been motivated by control, where, loosely speaking, diagonal dominance means that the
440 MULTIVARIABLE FEEDBACK CONTROL CONTROL STRUCTURE DESIGN 441
interactions will not introduce instability. Originally, for example in the Inverse Nyquist Array iI’xangular plants. Overall stability is trivially satisfied for a triangular plant as described
method of Rosenbrock (1974), diagonal dominance was defined in terms of the Gershgorin in the theorem below.
bounds, resulting in the conditions IEII~i < 1 (“column dominance”) and IJEIi~~ < 1
(“row dominance”), where E = (0 0)0_i, However, stability is scaling independent,
— Theorem 10.5 Suppose the plant 0(s) is stable and upper or lower triangular (at all
and by “optimally” scaling the plant using DGD’, where the scaling matrix D is diagonal, frequencies). and is controlled by a diagonal controller~ Then the overall system
is stable
one obtains from these conditions that the matrix 0 is (generalized) diagonally dominant if and only if the individual loops are stable.
if p(JEI) < 1; see (A.128). Here pGEI) is the Perron root of E. An even less restrictive
definition of diagonal dominance is obtained by starting from the stability condition in terms proof. For a triangular plant 0, £2 = (0 — C)G’ is triangular with all diagonal elements zero, so it
of p(E) in (10.72). This leads us to propose the improved definition below. follows that all eigenvalues of ET are zero. Thus det(I + ET(s)) = 1 and from (ii) in Theorem 10.3
the interactions can not cause instability. C
Definition 10.1 A matrix C is generalized diagonally dominant if and only if p(E) < 1.
Because of interactions, there may not exists pairings such that the plant is triangular at
Here the term “generalized diagonally dominant” means “can be scaled to be diagonally low frequencies. Fortunately, in practice it is sufficient for stability that the plant is triangular
dominant”. Note that we always have p(E) < p(JEI), so the use of p is less restrictive thnn at crossover frequencies, and we have:
the Perron root. Also note that p(E) = 0 for a triangular plant.6 It is also possible to use Triangular pairing rule. To achieve stability with decentralized control,
p(E5) as measure of diagonal dominance, and we then have that a matrix is generalized prefer pairings such that at frequencies w around crossover; the rearranged
diagonally dominant if p(E) < 1 or if p(Eg) < 1. plant matrix COw) (with the paired elements along the diagonal) is close to
triangulam:
Example 10.17 Consider the following plant where we pair on its diagonal ele,nents:
—5 1 2 _ —5 0 0 _ 0 0.5 0.33 Derivation of triangular pairing rule. The derivation is based on Theorem 10.4, From the spectral
0= 4 2 —1; 0= 0 2 0; E=(C—G)0’= —0.8 0 —0J67 radius stability condition in (10.74) the overall system is stable if p(SE5(jw)) < 1, Vw. At
—3 —2 6 0 0 6 0.6 —1 0 low frequencies, this condition is usually satisfied because S is small. At higher frequencies, where
The p-interaction measu;-e is p(E) = 0.9189, so the plant is diagonally dominant. From (10.72), S = diag{~} 1, (10.74) may be satisfied if G(jw) is close to triangular. This is because Es
stability of the individual loops guarantees stability of the overall closed-loop system, provided and thus SE5 are then close to triangular, with diagonal elements close to zero, so the eigenvalues of
we keep the individual peaks of itt I less than 1/p(E) = 1.08. This allows for integral control with SE5 Ow) are close to zero. Thus (10.74) is satisfied and we have stability of S. The use of Theorem 10.4
1(0) = 1. Note that it is not possible in this case to conclude from the Cershgorin bounds in (10.75) assumes that 0 and C have no RHP-zeros, but in practice the result also holds for plants with RHP-zeros
and (10.76) that the plant is diagonally dominant, because the 2, 2 element of 0 ( 2) is smaller than provided they are located beyond the crossover frequency range. C
both the sit,,, of the off—diagonal elements in row 2 (= 5) and in column 2 (= 3).
Remark. Triangular plant and RGA. An important RGA-property is that the RCA of a triangular
plant is always the identity matrix (A = I) or equivalently the RCA number is zero; see property 4 on
Iterative RGA. An iterative computation of the RGA, AP(C), gives a permuted identity
page 527. In the first edition of this book (Skogestad and Postlethwaite, 1996), we inconecily claimed
matrix that corresponds to the (permuted) generalized diagonal dominant pairing, if it exists
that the reverse is also true; that is, an identity RCA matrix (A(G) = I) implies that C is triangular.
(Johnson and Shapiro, 1986, Theorem 2) (see also page 88). Note that the iterative RGA However, this holds only for 3 x 3 systems or smaller. For a 4 x 4 counterexample consider the following
avoids the combinatorial problem of testing all pairings, as is required when computing p(E) matrix
or the RGA number. Thus, we may use the iterative RGA to find a promising pairing, and 1100
check for diagonal dominance using p(E). 0 a (10.77)
Exercise 10.15 For the plant in Example 10.17 check that the iterative RCA converges to the 0011
diagonally don,ina,,t pairings. which has RGA= I for any nonzero value of a and $, but C cannot be made triangular by rearranging
the order of inputs and outputs. Also, for this plant stability of the individual loops does not necessarily
Example 10.18 RGA number. The RCA numbe;; hA IJISum, is comnmnonly used as a ,neasura
—
give overall stability. For example, T = I (stable individual loops) gives instability (T unstable)
of diagonal do,nina,,ce, but unfortunately for 4 x 4 plants or largen a small RCA number does not with a = /3 when hal = 1i31 < 0.4. Therefore, ROA= I and stable individual loops do not generally
guarantee diagonal do,ninance. To illustrate this, consider the ,natrix 0 [1 1 0 0; 0 0 1 1 . guarantee overall stability (it is not a sufficient stability condition). Nevertheless, it is clear that we
1; 1 1 0.1 0; 0 0 1 11. It has has RGA= I, but p(E) = p(Es) = l0.9so it isfarfronz would prefer to have RGA= I, because otherwise the plant cannot be triangular. Thus, from the
diagonally dominant. triangular pairing rule ‘ye have that it is desirable to select pairings such that the RCA is close to
the identity matrix in the crossover region.
A triangular plant may have large off-diagonal elements, but it can be scaled to be diagonal. For example
~d1 0 1 Fgii 0 1 [lId1 0 ] — 1 gii 0 1 (10.77) is a generalization of a counterexample given by Johnson and Shapiro (1986). On our book’s home page a
da] Lon 922] L 0 1/daj — L~912 922] whichapproaches 0 j forldil >> 1d2I.
922 physical mixing process is given with a transrer function of this form.
442 MULTIVARIABLE FEEDBACK CONTROL
JNTROL STRUCTURE DESIGN
10.6.5 Integrity and negative RGA elements
~ample 10.19 Consider a 3 x 3 plant with
A desirable property of a decentralized control system is that it has integrity, that is, the
10.2 5.6 1.4 0,96 1.45 —1.41
closed-loop system should remain stable ns subsystem controllers are brought in and out of 0(0) = 15.5 —8.4 —0.7 and A(0) = 0.94 —0.37 0.43 (10.79)
service or when inputs saturate. Mathematically, the system possesses integrity if it remains 18.1 0.4 1.8 —0.90 —0.07 1.98
stable when the controller K is replaced by IFJ( where E = diag{c1} and e~ may take on the
values oft1 = 0 or e~ = 1. Fore 3 x 3 plant there al-c six possible pairings, but from the steady-state RGA we see that the,’e is only
one positive element in column 2 (A12 = 1.45), and only one positive element in row 3 (A~ 1.98),
An even stronger requirement (“complete detunability”) is when it is required that the
and therefore there is only one possible pairing with all RGA elements positive (Ui ~ Y2, u2 ~ P1,
system remains stable as the gain in various loops is reduced (detuned) by an arbitrary factor
~ ~ p ). Thus, if we require to pair on the positive RGA elements, we can from a quick glance at the
i.e. q may take any value between 0 and 1, 0 ~ t~ < 1, Decentralized integral controllability ‘adv-state RGA eliminate five of the six pairings.
(DIC) is concerned with whether complete detunability is possible with integral control.
:xarnple 1020 Consider the following plant and RGA:
Definition 10.2 Decentralized integral controllability (ftc). The plant C(s) (cone
0.5 0.5 —0.004 —1.56 —2.19 4.75
sponding to a given pairing with the paired elements along its diagonal) is DIC if there 0(0) = 1 2 —0.01 and A(0) = 3.12 4.75 —6.88 (10.80)
exists a stabilizing decentralized controller with integral action iii each loop such that each —30 —250 1 —0.56 —1.56 3.12
individual loop may be detuned independently by afactor e~ (0 ≤ ~ < 1) without introducing
instability. From the RGA, we see that it is impossible to rearrange the plant such that all diagonal RGA elements
are positive. Consequent~; this plant is not DICfor any choice of pairings.
Note that DIC considers the cxistence of a controller, so it depends only on the plant C and
the chosen pairings. The steady-state RCA provides a very useful tool to test for DIC, as is
xample 10.21 Consider the following plant and RGA.
clear from the following result which was first proved by Grosdidier et al. (1985). ‘—s + 1’~ 1 —4.19 —25.96 1 5 —5
C(s) = ~ 6.19 1 —25.96 and A(C) = —5 1 5
(as+1)— 1 1 1 5 —5 1
Theorem 10.6 Steady-state RGA and fTC. C’onsider a stable square plant C and a
diagonal controller K with integral action in all elements, and assume that the loop transfer Note that the RGA Lc constant, independent offrequency. Out)’ two of the six possible pairings give
function OK is strictly propet: If a pairing of outputs and manipulated inputs corresponds positive steady-state RCA elements (see pairing rule 2 on page 449): (a) the (diagonal) pairing on all
to a negative steady-state relative gain, then the closed-loop system has at least one of the A, — 1 and (b) the pairing on all A~ = 5. 1ntuitiveh~ one may expect pairing (a) to be the best since
following properties: it cormesponds to pairing on RCA elements equal to 1. Howeve~ the RGA matrix isfarfrom identin~
(a) The overall closed-loop system is unstable. and the RCA numbe,; IA — IIISurn, is 30 for both pairings. Also, hone of the pairings are diagonally
(b) The loop with the negative relative gain is unstable by itself dominant as t4E) = 8.84 for pairing (a) andp(E) = 1.25for the pairing (b). These are larger than 1,
(c) The closed-loop system is unstable if the loop with the negative relative gain is opened so none of the two alternatives satisfy pairing nile I discussed on page 449, and we al-c led to conclude
(broken). that decentralized control should not be usedfor this plant.
lloyd and Skogestad (1992) confirm this conclusion by designing P1 controllers for the two cases.
This can be summarized as follows:
They found pairing (a) corresponding to Au = 1 to be significantly worse than (b) with Au = 5, in
A stable (reordered) plant C(s) is DIC only ~fA11(O) ≥ Ofor all i. agreement with the values for p(E). They also found the achievable closed-loop time constants to be
(10.78)
1160 and 220, respectively; which in both cases is vety slow comnpared to the RHP-zero which has a
Proof. Use Theorem 6.7 on, page 252 and select C’ = diag{g~t, Q~} Since det C’ = gjj det C” and time constant of].
from (A.78) A11 = we have det C’/ det C = A~ and Theorem 10.6 follo’.vs. C
Exercise 10.16 Use the method of “iterative RGA” (page 88) on the model iii Example ]O.21, and
Each of the three possible instabilities in Theorem 10.6 resulting from pairing on a negative confirm that it results in “recommending” the pairing on Au = 5, which indeed was found to be the
value of A1~(O) is undesirable. The worst case is (a) when the overall system is unstable, best choice based on js(E) and the simulatio,zs. (This is partly good luck, because the proven theoretical
but situation (c) is also highly undesirable as it will imply instability if the loop with the result for iterative RGA only holds for a generalized diagonally dominant matrix.)
negative relative gain somehow becomes inactive, e.g. due to input saturation. Situation (b)
is unacceptable if the loop in question is intended to be operated by itself, or if all the other Exercise 10.17 * (a) Assume that the 4 x 4 matrix in (A.83) represents the steady-state model of a
plant. Show that 20 of the 24 possible pairings call be eliminated by requiring DIC (b) Consider the
loops may become inactive, e.g. due to input saturation.
3 x 3 FCC process in Exercise 6.] 7 on page 257. Show that five of the six possible pair ngs can be
The RGA is a very efficient tool because it does not have to be recomputed for each eliminated by requiring DIC.
possible choice of pairing. This follows since any permutation of the rows and columns of
C results in the same permutation in the RGA of C. To achieve DIC one has to pair on a Remarks on fTC and RGA
positive RGA(0) element in each row and column, and therefore one can often eliminate
many candidate pairings by a simple glance at the RGA matrix. This is illustrated by the 1. DIC was introduced by Skogestad and Moran (1988b) who also give necessary and sufficient
following examples: conditions for testing DIC. A detailed survey of conditions for DIC and other related properties
ts given by Campo and Moran (1994).
444 MULTIVARIABLE FEEDBACK CONTROL CONTROL STRUCTURE DESIGN 445
2. DIC is also closely related to 13 stability see papers by Yu and Fan (1990) and Campo and Moran is inconclusive However the Nz of the three principal submatrices [~02 ~], []~ ~] and
(1994) The theory of 13 stability provides necessary and sufficient conditions (except in a few
special cases such as when the determinant of one or more of the submatrices is zero)
3. Unstable plants ase not INC. The reason for this is that with all e~ = 0 we are left with the
[~ 1
j
j~ ale I —1 2 and 22 and since one of these is negative the determinant condition correctly
tells us that ‘ye do not have DIC.
uncontrolled plant C, and the system will be (internally) unstable if C(s) is unstable, For this 4 x 4 example the RCA is inconclusive:
4. For c~ = 0 we assume that the integrator of the corresponding SISO controller has been removed,
otherwise the integrator would yield internal instability 8 72 2 81 2 98 —15 80 0 41 0 47 —o 06 0 17
5. For 2 x 2 and 3 x 3 plants we have even tighter RCA conditions for DIC than (10.78). For 2 x 2 02(0) = 6.54 —2.92 2.50 —20.79 and A(C2(0)) —0.20 0.45 0.32 0.44
plants (Skooestad and Moran 1988b) —5.82 0.99 —1.48 —7.51 0.40 0.08 0.17 0.35
—723 292 311 786 039 0001 057 004
DIC ‘~ Aii(O)>0 (10.81) All the diagonal RCA values are positive, so it is inconclusive when it comes to DIC. However, the
Niederlinski index of the gain matrix is negative Ns(C2(0)) = —1865 and we conclude thu this
For 3 x 3 plants with positive diagonal RCA elements of 0(0) and of Gtt(0), i = 1,2,3 (its three
pairing is not DIC (further evaluation of the 3 x 3 and 2 x 2 submatrices is not necessary in this
principal submatrices), we have (Yu and Fan. 1990)
case).
DIC ~. ~Aii(0) ± ~A22(o) + ~A33(O) ≥1 (10.82) 9. The ahove results, including the requirement that we should pair on positive RCA elements, give
necessary conditions for DIC If we assume that the controllers have integral action then T(0) = I
(Strictly speaking, as pointed out by Campo_and Moran (1994), we do not have equivalence for the and we can derive from (10 72) that a sufficient condition fo, DIC is that C is genernlized diagonally
case when ~/Xh’~b) + ,/A22(O) ± \/.Aaa(0) is identically equal to 1, but this has little practical dominant at steady-state, i.e.
significance.) p(E(0)) < 1
6. One cannot in general expect tight conditions for DIC in terms of the RGA (i.e. for 4 x 4 systems This is pioved by Braatz (1993 p 154) Since the requirement is only sufficient for DIC it cannot
or higher). The reason for this is that the RCA essentially only considers “corner values”, e~ = 0 be used to eliminate designs.
or e~ = 1, for the detuning factor, that is, it tests for integrity. This is clear from the fact that :o. If the plant has jw-axis poles, e.g. integrators, it is recommended that, prior to the RCA analysis,
= Lfl. deiG” , where C corresponds to e~ = 1 for all i, g~ corresponds to e~ = 1 with the
these are moved slightly into the LHP (e.g. by using very low-gain feedback). This will have no
other cj, = 0, and C” corresponds to e1 = 0 with the other ek = 1. A more complete integrity practical significance for the subsequent analysis.
(“corner-value”) result is given next. 11. Since Theorem 6.7 applies to unstable plants, we may also easily extend Theorem 10.6 to unstable
7. Determinant condition for integrity (NC). The following condition is concerned with whether it plants (and in this case one may actually desire to pair on a negative RCA element). This is shown
is possible to design a decentralized controller for the plant such that the system possesses integrity, in Hovd and Skogestad (1994) Alternatively one may fist implement a stabilizing controller and
which is a prerequisite for having DIC. Assume without loss of generality that the signs of the then analyze the pnrtially controlled system as if it were the plant C(s)
rows or colu,n,zs of C have beeti adjusted such that all diagonal elements of 0(0) are positive,
i.e. gu (0) ≥ 0. Then one ‘no compute the determinant of 0(0) and all its principal submatrices
(obtained by deleting rows and corresponding columns in 0(0)), which should all have the same 10 6 6 RHP-zeros and RGA reasons for avoiding negative RGA
sign for integrity. This determinant condition follows by applying Theorem 6.7 to all possible elements with sequential design
combinations of e~ = 0 or I as illustrated in the proof of Theorem 10.6.
8. The Niederlinski index of a matrix C is defined as So far we have considered decentralized control based on independent design, where we
require that the individual loops are stable and that we do not get instability as loops are
N1 (C) = det C/rLgu (10.83) closed or taken out of service. This led to the integrity (DIC) result of avoiding pairing on
A simple way to test the determinant condition for integrity, which is a necessary condition for DIC, negative RCA elements at steady state. However, if we use sequential design, then the “inner”
is to require that the Niederlinski index of 0(0) and the Niederlinski indices of all the principal loops should not be taken out of service, and one may even end up with loops that are unstable
submatrices C”(O) of 0(0) are positive. by themselves (if the innei loops were to be removed) Nevertheless foi sequential design we
The original result of Niederlinski, which involved only testing N1 of C(0), obviously yields find that it is also generally undesiiable to pair on negative RCA elements and the purpose
less information than the determinant condition as does the use of the sign of the RCA elements. of this section is primarily to illustrate this, by using some results that link the RCA and
This is because the RCA element is A~ = , so we may have cases where t’vo negative RHP-zeros.
determinants result in a positive RCA element. Nevertheless, the RCA is usually the preferred tool Bristol (1966) claimed that negative values of A~1(0) imply the presence of RHP-zeros,
because it does not have to be recomputed for each pairing. Let us first consider an example where but did not provide any proof Howevei it is indeed true as illustrated by the following two
the Niederlinski index is inconclusive: theorems.
10 0 20 4.58 0 —3.58
Ci(0) = 0.2 I —1 and A(C1(0)) = 1 —2.5 2.5 Theorem 10.7 (Hovd and Skogestad, 1992) Consider a transfer function matrix C(s) with
11 12 10 —4.58 3.5 2.08 no zeros or poles at s = 0. Assume that lim5,~ ~ (s) is finite and d~fferentfroin zero. If
Since one of the diagonal RCA elements is negative, we conclude that this pairing is not DIC. A,~ (j~) and ~ (0) have different signs the,, at least one of the following must be trite:
On the other hand, Nj(C~(0)) = 0.48 (which is positive), so Niederlinski’s original condition (a) The element gij (s) has a RHP-zetv.
446 MULTIVARIABLE FEEDBACK CONTROL CONTROL STRUCTURE DESIGN 447
(b) The overall plant G(s) has a RFIP -zero. 10.6.7 Performance of decentralized control systems
(c) The subsystem with input j and output i removed, G~ (s), has a RHP -zero. Consider again the factorization
Theorem 10.8 (Grosdidier et aL, 1985) Consider a stable transfer function nzatrLt C(s) S = (I + S(F — I))’SF
with elements Yij (s). Let ~jj (s) denote the closed-loop transfer function between input u~
and output y~ with all the other outputs under integral control. Assume that: (i) gjj(s) has in (10.69) where F = GG’ is the performance relative gain array (PRGA), The diagonal
no RHP-zeros, (ii) the ioop transfer function GK is strictly proper (iii) all other elements of elements of the PRGA matrix are equal to the diagonal elements of the RGA, 7H = ~ and
G(s) have equal or higher pole excess than gjj (s). We then have: this is the reason for its name, Note that the off-diagonal elements of the PRGA depend on the
If Ajj(0) < 0, then for ~jj (s) the number of RHP -poles plus RHP -zeros is odd. relative scaling on the outputs, whereas the RGA is scaling independent. On the other hand,
the PRGA also measures one-way interaction, whereas the RGA only measures two-way
Note that ~ij(s) in Theorem 10.8 is the same as the transferfunction P~ from u1 toy’ for interaction. Atfrequencies where feedbackis effective (S 0), (10.69)yields S SF Thus,
the partially controlled system in (10.26). large elements in the PRGA (F) (compared to 1 in magnitude) mean that the interactions
Sequential design and RUP-zeros. We design and implement the diagonal controller by “slow down” the overall response and cause performance to be worse than for the individual
tuning and closing one loop at a time in a sequential manner. Assume that we end by pairing loops. On the other hand, small PRGA elements (compared to 1 in magnitude) mean that the
on a negative steady-state RGA element, .A~~(0) < 0, and that the corresponding element interactions actually improve performance at this frequency.
Yjj (s) has no RHP-zero. Then we have the following implications: We will also make use of the related closed-loop disturbance gain (CLDG) matrix, defined
(a) If we have integral action (as we normally have), then we will get a RHP-zero in as
~ij(s) which will limitthe performancein the”flnal” outputy~ (followsfromTheorem 10.8). Gd(s) 4 F(s)Gd(s) = G(s)G’(s)Gd(s) (10.84)
However, the performance limitation is less if the inner loop is tuned sufficiently fast (Cui
and Jacobsen, 2002), see also Example 10.22. The CLDG depends on both output and disturbance scaling.
(b) If ~ (co) is positive (it is usually close to 1, see pairing rule 1), then irrespective of In the following, we consider performance in terms of the control error
integral action, we have a RHP-zero in Ge(s), which will also limit the performance in the ci = y = Cu + Gdd
— — (10.85)
other outputs (follows from Theorem 10.7).
In conclusion, for performance we should avoid ending up by pairing on a negative RGA Suppose the system has been scaled as outlined in Section 1.4, such that at each frequency:
element. 1. Each disturbance is less than 1 in magnitude, IdkI < 1.
Example 10.22 Negntive RGA element and RHP-zeros. Consider a plant wit/i 2. Each reference change is less than the corresponding diagonal element in R, Ir~ < R~.
C(s)
—
— s +
1
10
~
[2
4I
1
•/~~() [—1
2 —ij
21
3. For each output the acceptable control error is less than I, 1e11 < 1.
Single disturbance. Consider a single disturbance, in which case Gd is a vector, and let
NotethattheRGA is independent offrequencyforthis plant, so Aii(0) = Ac.D = 1. We want to illustrate g~, denotefrequencies
Consider the i’th element
whereoffeedback
Gd. Let isL5effective
= gnk1 sodenote
SF isthe loop(and
small transfer function
(10.88) in 1oop
is valid). Theni.
that pairing on negative RGA elements gives peiformance problems. We start by closing the loop from
u1 toys with a controller a, = kn(s)(ri — yr). For the partially controlled system, the resulting foi acceptable disturbance rejection (IetI < 1) with decentralized control, we must require
transferfunctionfroin u2 to y2 (“outer loop”) is for each loop i,
I1+L~I > kTh~[ (10.86)
g22(S) = g22(s) — kij(s)g21(s)g,2(s)
1 + gn (s)k,,(s) which is the same as the SISO condition (5.77) except that Gd is replaced by the CLDG, 9dj~
In words, ~dj gives the “apparent” disturbance gain as seen from loop i when the system is
With an integral controller k5, (a) = Ks/a, we find, as expectedfro,n Theorem 10.8, that controlled using decentralized control.
32 + lOs 41j — Single reference change. We can similarly address a change in reference for output j of
g22(s) = (a + 10)(s2 + lOs + 4I(s) magnitude R~ and consider frequencies where feedback is effective (and (10.88) is valid).
always has a RHP-zero. For large values of K7, the RHP-zero ;novesfiirther awa~~ and is less limiting
Then for acceptable reference tracking (JetI < 1) we must require for each 1oop I
in terms ofpemformance for the outer loop. With a proportional controllem; ku(s) = IC, we find that 1 + L1~ > I71j1 IR1I
. (10.87)
g’2(s) — a + 10 4./C which is the same as the SISO condition (5.80) except for the PRGA factor, I7ijI~ In other
- (a + 10)(s ± 10 + 4./C) words, when the other loops are closed the response in loop i gets slower by a factor
has a zero at 4I~~ — 10. For IC < 2.5, the zero is in the LHP, but it crosses into the RH~ when 17u1. Consequently, for petformance it is desirable to have small elements in F, at least at
i~ exceeds 2.5. For large values of IC, the RHP-zero moves flirt/icr away~ and does not limit the frequencies where feedback is effective. However, at frequencies close to crossover, stability
pemforniance in the outer loop in practice. The worst value is I~ = 2.5, where we have a zero at the is the main issue, and since the diagonal elements of the PRGA and RGA are equal, we
origin and the steady-state gain ?22 (0) changes sign. usually prefer to have 7u = A~ close to I (see pairing rule I on page 449).
448 MULTI VARIABLE FEEDB4CK CONTROL CONTROL STRUCTURE DESIGN 449
Proofs of (10.86) and (JO.87~: At frequencies where feedback is effective, S is small, so 10.6.9 Independent design
I+ S(F I) — I (10.88) We first consider the case of independent design, where the controller elements are designed
based on the diagonal (paired) elements of the plant such that individual 1oops are stable.
and from (10.69) we have
The first step is to determine if one can find a good set of input—output pairs bearing in
S (10.89)
mind the following three pairing rules:
The closed-loop response then becomes
Pairing rule 1. RGA at crossover frequencies. Prefer pairings such that the
C = SG~,d — Sr SGcjd — SI’r (10.90) rearranged system, with the selected pairings along the diagonal, has an RGA
and the response in output ito a single disturbance da and a single reference change rj is matrix close to identity atfi-equencies around the closed-loop bandwidth.
s~gdu~.dk — Si7ikrk (10.91) To help in identifying the pairing with RCA closest to identity, one may, at the bandwidth
frequency, compute the iterative RCA, Ak(G); see Exercise 10.6.4 on page 440.
where ~ = l/(l + g1~k~) is the sensitivity function for loop i by itself. Thus, to achieve Ie~I < 1 Pairing rule I is to ensure that we have diagonal dominance where interactions from other
for Idkf = 1 we must require sigdjk~ < 1 and (10.86) follows. Similarly, to achieve Ie~I < 1 for loops do not cause instability. Actually, pairing rule I does not ensure this, see the Remark on
Ir~~ = IR~I we must require 8~7~kRJI <land (10.87) follows. Also note that Si7ikI <1 will imply page 441, and to ensure stability we may instead require that the rearranged plant is triangular
that assumption (10.88) is valid. Since P usually has all of its elements larger than 1, in most cases
at crossover frequencies. However, the RCA is simple and only requires one computation,
(10.88) will be automatically satisfied if (10.87) is satisfied, so we normally need not check assumption
and since (a) all triangular plants have RCA = I and (b) there is at most one choice of
(10.88). C
pairings with RCA = I at crossover frequencies, we do nothing wrong in terms of missing
Remark 1 Relation (10.89) may also be derived from (10.66) by assuming T I which yields good pairing alternatives by following pairing rule I. To check for diagonal dominance of a
(I + ET)’ (I + E)’ = P promising pairing (with RCA = I) one may subsequently compute it(Es) = p(PRGA I)) —
When considering decentralized diagonal control of a plant, one should first check that the Pairing rule 3. Pi-efer a pairing ij where Pij puts minimal restrictions on the
plant is controllable with any controller, see Section 6.11 achievable bandwidth. Specifically, the effective delay 9jj in gjj (s) should be
If the plant is unstable, then it recommended that a lower-layer stabilizing controller is first small.
implemented, at least for the “fast” unstable modes. The pole vectors (page 411) are useful
in selecting which inputs and outputs to use for stabilizing control. Note that some unstable This rule favours pairing on variables physically “close to each other”, which makes it
plants are not stabilizable with a diagonal controller. This happens if the unstable modes easier to use high-gain feedback and satisfy (10.86) and (10.87), while at the same time
belong to the “decentralized fixed modes”, which are the modes unaffected by diagonal achieving stability in each loop. It is also consistent with the desire that A(jw) is close to I at
feedback control (e.g. Lunze (1992)). A simple example is a triangular plant where the crossover frequencies. Pairing rule 3 implies that we should avoid pairing on elements with
unstable mode appears only in the off-diagonal elements, but here the plant can be stabilized high order, a time delay or a RHP-zero, because these result in an increased effective delay;
by changing the pairings. see page 58. Goodwin et al. (2005) discuss performance limitations of independent design,
in particular when pairing rule 3 is violated.
When a reasonable choice of pairings has been found (if possible), one should rearrange
C to have the paired elements along the diagonal and perform a controllability analysis as
follows.
450 MULTIVARIABLE FEEDBACK CONTROL CONTROL STRUCTURE DESIGN 451
1. Compute the PRGA (r = G0’) and CLDG (Gd = ~Gd), and plot these as functions starting with a pairing where gjj has good controllability, including a large gain and a small
of frequency. For systems with many ioops, it is best to perform the analysis one ioop at effective delay. One may also consider the disturbance gain to find which outputs need to be
a time. That is, for each loop i, plot IThikI for each disturbance It and plot I7ijI for each tightly controlled. After closing one loop, one needs to obtain the transfer function for the
reference j (assuming here for simplicity that each reference is of unit magnitude). For resulting partially controlled system, see (10.28), and then redo the analysis in order to select
performance, see (10.87) and( 10.86), we need ~ + LtI to be larger than each of these the next pairing, and so on.
Performance: 1 +L~I > n~ax{I~dikI, 17j11} (10.93) E~tampIe 10.23 Application to distillation process. in order to demonstrate the use of the
frequency-depetident RGA and C’LDG for evaluation of expected diagonal control peiformance, we
To achieve stability of the individual loops one must analyze gij(s) to ensure that the again consider the distillation process used in Example 10.8. The LV-configuraiion is used; that is, the
manipulated inputs are reflux L (in) and boilup V (u2). The outputs are the product compositions YD
bandwidth required by (10.93) is achievable. Note that RHP-zeros in the diagonal elements
(111) and x B (in). Disturbances in feed flow rate F (di) and feed composition Zr (d2) are included in
may limit achievable decentralized control, whereas they may not pose any problems for
the model. The disturbances and outputs have been scaled such that a magnitude of 1 corresponds to a
a multivariable controller. Since with decentralized control we usually want to use simple change in F of 20%, a change in Zr of 20%, and a change in XB and iJD of 0.01 mole fraction units.
controllers, the achievable bandwidth in each loop will be limited by the effective delay The five state dynamic model is given in Section 13.4.
O~.j ing~j(s). Initial controllability analysis. C(s) is stable and has no RHP-zeros. The plant and RCA matrix at
7 In general, see rule 5.13 on page 207, one may check for constraints by considering the steach-state are
elements of G’G~ and making sure that they do not exceed 1 in magnitude within the
frequency range where control is needed. Equivalently, one may plot Ig~~I for each loop 1,
G(0) = [~j~8~ ~] A(O) = ~ j’] (10.95)
and the requirement is then The RCA elements are much larger than 1 and indicate a plant that is fundamentally difficult to control
(recall property Cl, page 89). Fortynate~; the flow dynamics partially decouple the response at higher
To avoid input constraints : Ig~tI > I~dikI ‘v’k (10.94) frequencies, and we find that A(jw) I at frequencies above about 0.5 racUinin. Therefore if we can
athieve sufficiently fast control, the large steady-state RGA elements may be less of a probl cal.
at frequencies where Igd~hI is larger than 1 (this follows since 0d = 0G’Cd). This
provides a direct generalization of the requirement IGI > GdI for SISO systems.
The advantage of (10.94) compared to using G’Gd is that we can limit ourselves to 101
frequencies where control is needed to reject the disturbance (where ?Jd~k~ > 1).
0)
.0)
z
If the plant is not controllable with any choice of pairings, then one may consider another
00
pairing choice and go back to step 1. Most likely this will not help, and one would need to
consider decentralized sequential design, or multivariable control.
If the chosen pairing is controllable then the analysis based on (10.93) tells us directly how
large the loop gain L~} = g~~k~J must be, and this can be used as a basis for designing the l0~
l0 l0~ IC0 I0~
controller k~(s) for loop i. Frequency [rad/min]
Figure 10.18: Disturbance gains I9dik for assessing the effect of disturbance It on output i
10.6.10 Sequential design
Sequential design may be applied when it is not possible to find a suitable set of pairings for
lO
independent design using the above three pairing rules. For example, with sequential design
one may choose to pair on an element with gjj = 0 (and ~ = 0), which violates both
0)
pairing rules 1 and 3. One then relies on the interactions to achieve the desired performance, ~0
as loop i by itself has no effect. This was illustrated for the case with off-diagonal pairings
Co
in Example 10.15 on page 433. Another case with pairing on a zero element is in distillation a
10
0
control when the L17-conflguration is not used, see Example 10.8. One may also in some
cases pair on negative steady-state RGA elements, although we have established that to avoid
introducing RHP-zeros one should avoid closing a loop on a negative steady-state RCA (see
page 446). Frequency [rad/min]
The procednre and rules for independent design can be used as a starting point for finding
Figure 10.19: Closed-loop disturbance gains Ydik for assessing the effect of disturbance hon output i
good pairings for sequential design. With sequential design, one also has to decide the order
in which the loops are closed, and one generally starts by closing the fast loops. This favours
452 MULTIVARIABLE FEEDBACK CONTROL cONTROT.J STRUCTURE DESJCN
and the magnitudes of the elements in Gd Ow) ale plotted as functions offrequency in Figure 1018
From this plot the two distut bances seem to be equally difficult to ieject with magnitudes laigem thati
1 up to afiequency of about 0 1 ,ad/mm We conclude that contiol is needed up toOl tad/nun The
magnitude oft/ic elements in G ‘Gd (jw) (not shown) ale all less than 1 at allfiequencies (at least up
Time [mini
to 10 rad/oun), and so it n ill be assumed that input constiaints pose no pioblemn
Choice of pairings. The selection of u1 to contiol y, and u2 to contiol 1/2 cot responds to pairing on
Figure 10.21: Decentralized P1 control. Responses to a unit step in d, at I = 0 and a unit step in d2 at
positive elements of A(0) and AOw) I at high frequencies This seems sensible, and is used in the = 50 mm.
following
Analysis of decentralized control. The elements in the CLDG and PRGA matrices are shown as
functions offiequencv in Figuies 1019 and 1020 At stead~ -state we have In summary, there is an excellent agreement between the controllability analysis and the
F(0) =
E 35A
—43.2 —27.61
35.1 j’ Gd(O) = F(0)Gd(0) = L
1—47.7
70.5 —0.401
11.7 (10 97)
simulations, as has also been confirmed by a number of other examples.
In this particular case, the off-diagonal elements of RGA (A) and PRGA (1’) are quite sbnilat: We note 10.6.11 Conclusions on decentralized control
that Gd (0) is vet)’ different froni Gd (0), and this also holds at higher frequencies. For disturbance 1
(first column in Gd) we_find that the interactions increase the apparent effect oft/ic disturbance, whereas In this section, we have derived a number of conditions for the stability, e.g. (10.72) and
they reduce the effect of disturbance 2, at least on output 1. (10 78), and performance, e.g. (10.86) and (10.87), of decentralized control systems. The
condittons may be useful in determining appropriate pairings of inputs and outputs and
l0~ the sequence in which the decentralized controllers should be designed. Recall, however,
that in many practical cases decentralized controllers are tuned off-line, and sometimes
on-line, using local models. In such cases, the conditions may be used in an input—output
~oo lO
controllability analysis to determine the viability of decentralized control.
C’s 711= 722 Some exercises which include a controllability analysis of decentralized control are given
= 0
IC at the end of Chapter 6.
11
MODEL REDUCTION
This chapter describes methods for reducing the order of a plant or controller model. We place
considerable emphasis on reduced order models obtained by residualizing the less controllable and
observable states of a balanced realization. We also present the more familiar methods of balanced
truncation and optimal Hankel norm approximation.
11.1 Introduction
Modern controller design methods such as 7t0~, and LQG produce controllers of order at least
equal to that of the plant, and usually higher because of the inclusion of weights. These control
laws may be too complex with regards to practical implementation and simpler designs are
then sought. For this purpose, one can either reduce the order of the plant model prior to
controller design, or reduce the controller in the final stage, or both.
The central problem we address is: given a high-order linear time-invariant stable model C,
find a low-order approximation C0 such that the infinity (R0.3 or £00) norm of the difference,
I 1G Ga J~, is small. By model order, we mean the dimension of the state vector in a minimal
realization. This is sometimes called the McMillan degree.
So far in this book we have only been interested in the infinity (R00) norm of stable
systems. But the error G C0 may be unstable and the definition of the infinity norm
—
needs to be extended to unstable systems. £~ defines the set of rational functions which
have no poles on the imaginary axis, it includes 7-t00, and its norm (like H00) is given by
uGh00 = sup~U(G(jw)).
We will describe three main methods for tackling this problem: balanced truncation,
balanced residualization and optimal Hankel norm approximation. Each method gives a stable
approximation and a guaranteed bound on the error in the approximation. We will further
show how the methods can be employed to reduce the order of an unstable model C. All
these methods start from a special state-space realization of G referred to as balanced. We
will describe this realization, but first we will show how the techniques of truncation and
residualization can be used to remove the high-frequency or fast modes of a state-space
realization.
Let (A, B, C, D) be a minimal realization of a stable system C(s), and partition the state Let us assume A22 is invertible and define
vector x, of dimension n, into [mi] where z2 is the vector of n — k states which we wish to
Ar 4
— A,, -i
A12A,, A21 (11.7)
remove. With appropriate partitioning of A, B and C, the state-space equations become
A -1
Br Bi — A12A,2 B9 (11.8)
±1 = A11x1 + A12z2 + Bjtt
C’r 4 C1—C2A29A,, (11.9)
±2 = A21z1 + A,,x2 + B2u (11 1) —
The reduced order model Ga(s) 4 (A,., Br, Cr, Dr) is called a residualization of G(s) 4
11.2.1 Truncation
(A. B, C, D). Usually (A, B, C, .D) will have been put into Jordan form, with the eigenvalues
A k’th-order truncation of the realization G 4 (A, B, C, D) is given by Ga 4 ordered so that z2 contains the fast modes. Model reduction by residualization is then
(A11, B1, C1, D). The truncated model Ga is equal to G at infinite frequency, G(oo) = equivalent to singular perturbational approximation, where the derivatives of the fastest
Ga(oo) = D, but apart from this there is little that can be said in the general case about the states are allowed to approach zero with some parameter e. An important property of
relationship between G and Ga. If, however, A is in Jordan form then it is easy to order the residualization is that it preserves the steady-state gain of the system, Ca(0) G(0).
states so that x2 corresponds to high-frequency or fast modes. This is discussed next. This should be no surprise since the residualization process sets derivatives to zero, which
Modal truncation. For simplicity, assume that A has been diagonalized so that are zero anyway at steady-state. But it is in stark contrast to truncation which retains the
system behaviour at infinite frequency. This contrast between truncation and residualization
A1 0 0 follows from the simple bilinear relationship s -4 which relates the two (e.g. Liu and
0 A2 0 Anderson, 1989).
B= C=[ci c2 (II 2)
It is clear from the discussion above that truncation is to be preferred when accuracy is
0 6 A required at high frequencies, whereas residualization is better for low-frequency modelling.
Both methods depend to a large extent on the original realization and we have suggested
Then, if the A1 ‘s are ordered so that IA’ I < I.~’2 the fastest modes are removed from
...,
the use of the Jordan form. A better realization, with many useful properties, is the balanced
the model after truncation. The difference between G and °a following a k’th-order model realization which will be considered next.
truncation is given by
GGa ~ c (113)
i~=k+1 2 11.3 Balanced realizations
and therefore
hG GahIco ≤ Z IRe(A1)I (114)
In words only: a balanced realization is an asymptotically stable minimal realization in which
the controllability and observability Gramians are equal and diagonal.
i=k+1
More formally: let (A, B, C. D) be a minimal realization of a stable, rational transfer
It is interesting to note that the error depends on the residues c~bf as well as the A1’s. The function C(s), then (A, B, C, D) is called balanced if the solutions to the following
distance of A1 from the imaginary axis is therefore by itself not a reliable indicator of whether Lyapunov equations
the associated mode should be included in the reduced order model or not.
An advantage of modal truncation is that the poles of the truncated model are a subset of AP+PAT+BBT 0 (11.11)
the poles of the original model and therefore retain any physical interpretation they might ATQ+QA+CTC = 0 (11.12)
have, e.g. the phugoid mode in aircraft dynamics.
areP=Q=diag(a1,u2,...,u~)~E,whereai >a2 > ..>u~ >0.PandQarethe
11.2.2 Residualization controllability and observability Gramians, also defined by
In truncation, we discard all the states and dynamics associated with ~2. Suppose that instead
P ~ eAtBBT di (11.13)
of this we simply set ±2 = 0, i.e. we residualize x2, in the state-space equatI~ns. One can
then solve for x2 in terms of x1 and it, and back substitution of x2 then gives
Q ~ eAT~C~Ce~di (11.14)
= (A11 — A,2A~,’A2i)x1 + (B1 — A12AJB0)u (115)
458 MULTIVARIABLE FEEDBACK CONTROL MODEL REDUCTION 459
E is therefore simply referred to as the Gramian of C(s). The ci’s are the ordered Flankel Balanced residualization. In balanced truncation above, we discarded the least
singular values of C(s), more generally defined as u~ ~ A7 (FQ), i = 1..... n. Notice that controllable and observable states corresponding to ~ In balanced residualization, we
aj = lIGHt’, the Hankel norm of 0(s). simply set to zero the derivatives of all these states. The method was introduced by Fernando
Any minimal realization of a stable transfer function can be balanced by a simple state and Nicholson (1982) who called it a singular perturbational approximation of a balanced
similarity transformation, and routines for doing this are available in Matlab. For further system The resulting balanced residualization of C(s) is (Ar, Br, C~, D,.) as given by the
details on computing balanced realizations, see Laub et al. (1987). Note that balancing does formulae (I 1.7)—(11.lO).
not depend on D. Liu and Anderson (1989) have shown that balanced residualization enjoys the same error
So what is so special about a balanced realization? In a balanced realization the value of bound as balanced truncation. An alternative derivation of the error bound, more in the style
each oj is associated with a stnte x~ of the balanced system. And the size of o’~ is a relative of Glover (1984), is given by Samar et al. (1995). A precise statement of the error bound is
measure of the contribution that x1 makes to the input—output behaviour of the system; also given in the following theorem.
see the discussion on page 161. Therefore if ~i >> O~2, then the state x1 affects the input—
Theorem 11.1 Let C(s) be a stable rational transfer fitnction with Hankel singular values
output behaviour much more than x2, or indeed any other state because of the ordering of
o~i > o~ > > ojv where each o~ has multiplicity r~ and let 0~(s) be obtained by
the c~. After balancing a system, each state is just as controllable as it is observable, and a
truncating or residualizing the balanced realization of 0(s) to the first (r, + i-2 + + ri.)
measure of a state’s joint observability and controllability is given by its associated Hankel
states. Then
singular value. This property is fundamental to the model reduction methods in the remainder
hG(s) — G~(s)hI~ ≤ 2(c~÷i + ~ + + UN) (11.17)
of this chapter which work by removing states having little effect on the system’s input—output
behaviour. The following two exercises are to emphasize that (i) balanced residualization preserves
the steady-state gain of the system and (ii) balanced residualization is related to balanced
truncation by the bilinear transformations —+
11.4 Balanced truncation and balanced residualization Exercise 11.1 * The steady-state gain ofafidi-order balanced system (A, B, C, D) is D — CA’B.
Show, by algebraic manipulation, that this is also equal to Dr CrAr’Br, the steady-state gain of
—
Let the balanced realization (A. B, C, D) of C(s) and the corresponding S be partitioned the balanced residualization given by (ii. 7)—U]. JO).
compatibly as
some sense closest to being completely unobservable and completely uncontrollable, which Remark I The Ii + l’th Hankel singular value is generally not repeated, but the possibility is included
seems sensible A more detailed discussion of the Hankel norm was given in Section 4 10 4 in the theory for completeness.
(page 160)
The Hankel norm approximation problem has been considered by many but especially Remark 2 The order Ii of the approximation can be selected either directly, or indirectly by chnosing
Clover (1984). In Clover (1984) a complete treatment of the problem is given, including a the “cut-off” value a-a for the included Hankel singular values. In the latter case, one often looks for
large “gaps” in the relative magnitude, o-a-/a-a-÷i.
closed-form optimal solution and a bound on the infinity norm of the approximation error.
The infinity norm bound is of particular interest because it is better than that for balanced Remark 3 There is an infinite number of unitary matrices U satisfying (11.25); one choice is U
truncation and residualization _C2(BT)t.
The theorem below gives a particular construction for optimal Hankel norm approxima
tions of square stable transfer functions Remark4lfaa+i = o-,~, i.e. only the smallest Flankel singular value is deleted, then F = 0, otherwise
(A, B, C, D) has a non-zero anti-stable part and C~ has to be separated from F.
Theorem 11 2 Let G(s) be a stable squate tiansfet function G(s) wit/i Hankel smgulai
Remark S When the order Ii is chosen to be zero, C~ is a constant matrix and (A, B, ö, .0 — G~)
F(s), which is entirely anti-stable. In this case, IIC(s) — C~(s)jIH = hG(s) — F(s)hIc~
an optimal Hankel norm app roximation of mdci k C~ (s) can be consti ucted as follows
F~”(—s)I~c00. The last inequality follows since the £~ norm of a system is equal to
Let (A, B, C, D) be a balanced tealization of C(s) wit/i the Hankel singular values the £~ norm of its mirror image across the imaginary axis. This special case can he interpreted as
reordered so that the Giamian matrix is approximating a stable system by an unstable system or an unstable system by a stable one. The latter
problem is alternatively known as the Neha,-i extension problem, which was used extensively in the early
E = diag(uj,a2, ,Uk,o&+t+j, ,Un,U&+i, ,u&+i) (II 19)
solutions of 7-t~ optimal controller design problems (Francis, 1987); also see the robust stabilization
= diag(El,ak+lI) problem on page 368.
Pat tition (A, B, C, D) to confoun with E Remark 6 For non-square systems, an optimal Hankel norm approximation can be obtained by
Remark 7 The Hankel norm of a system does not depend on the D-matrix in the system’s state-space
~ C1E1+aa+1UBT (1123)
realization. The choice of the D-matrix in C~ is therefore arbitrary except when F = 0, in which case
B ~ D—aa+1U (1124) it is equal to D.
whete U isa unitary matuxsatisfymg RemarkS The infinity norm does depend on the D-matrix, and therefore the D-matrix of C~ can be
chosen to reduce the infinity norm of the approximation error (without changing the Hankel norm).
B2 = —C~U (11 25) Clover (1984) showed that through a particular choice of D, called D0, the following bound could be
and obtained:
(1126)
hIC-C~—DoIhcc ≤Uk+i+ö (1129)
where
A
The matux has I stable eigenvalues (in the open LEIP) the temaining onec ate in the n—k--I n—k—I
5 ~ IF— D0II~ ≤ Z a-~ (F(—s)) ≤ a-i+a~-c (C(s)) (11.30)
open RH? Then
G~(s)+F(s)~ [~ (1127) This results in an infinity norm bound on the approximation error, S ≤ a-~+~+~ . ~g,,, which is equal
to the “sum of the tail” or less since the Hankel singular value ~k+i, which may be repeated, is only
included once. Recall that the bound for the error in balanced truncation and balanced residualization is
whete C~j(s) isa stable optimal Hankel noun approximation of ordet k and F(s) is an anti
twice the “sum of the tail”.
stable (al/poles in the open RHP) transfer function of order n — k — 1. The 1-lankel norm
of the error between C and the optimal appi-oxitnation C~ is equal to the (k + 1) ‘Ui Hankel
singular value of G:
k
IC — C1j~11 —
— ~k+1 (C) (11.28)
462 MULTIVARIABLE FEEDBACK CONTROL
100EL REDUCTION 463
11 6 Reduction of unstable models
Matlab automatically separate out the unstable part and then add it to the stable part after its
reduction.
Balanced tiuncation balanced residualization and optimal Hankel norm approximation only
apply to stable models In this section we will briefly present two approaches for reducing the
order of an unstable model. Table 11.1: Matlab commands for model reduction
j~~obust control toolbox
Remove fast stable modes
ppole(SY51
sysd_canOn(sys); % Diagonalize the system
11 6 1 Stable part model ieduction alimtabs(P)>tol) & (real(p)<O); % and identify fast stable modes
systmodred(sysd,alim, ‘ti % then: Truncate fast modes
Enns (1984) and Glover (1984) proposed that the unstable model could first be decomposed sysrumodrad(sysd, elim), % or: Residualize fast modes.
into its stable and anti-stable parts. Namely % Balanced model reduction
% uorks for stable modes, so usa k > number of unstable modes
~=sizaisys.A,l);
C(s) = G5(s) +G5(s) (11.31) sysbtbalaflcmr (53’s, k); % kth order bslanced truncatiOn.
sysbrmodred{balrealisys),k+l:n); % or: kth order balanced residualization.
where G5(s) has all its poles in the closed RHP and G8(s) has all its poles in the open LHP. sysbhhankelnlr (sys,k); % or: kth order optimal Henkel norm approx.
Balanced truncation, balanced residualization or optimal Hankel norm approximation can % using coprime factors (works also for unstable modes)
nusizeisys,2)
then be applied to the stable part G8 (s) to find a reduced order approximation C~ (s). This sysct”ncfmr(sys, k) % balanced truncetion of coprime factors.
is then added to the anti-stable part to give fsysc,cinfo)ncfmr(sys.n); or: obtain coprime factors of system
syscr=modred(cinfo.GL,k+l:fl); % residualize.
syscrm=minreal {inv(syscr (: ,nu+l. ‘~ and obtain kth order model.
G5(s) = G5(s) + Gsa(s) (11.32) :endfl’syscr(:,l:nufl;
syschhankelmricinfO.GLk); % or: optimal Hankel norm approximation.
as an approximation to the full-order model C(s). syschm=minresl tiny (sysch I:, nu+l - - . - % and obtain kth order model.
:endfl*sysch(:,l:nui);
Table 11.2: Hankel singular values of the gas turbine aero-engine model ~.ierefore, should be as small as possible for good controller design. Figure 11.2(a) shows that
I) 2.0005e+01 6) 6.2964e-Ol 11) 1.3621e-02 the error for balanced residualization is the smallest in this frequency range.
Steady-state gain preservation. It is sometimes desirable to have the steady-state gain of
2) 4.0464e+00 7) 1.6689e-0l 12) 3.9967e-03
3) 2.7546e+00 8) 9.3407e-02 13) 1.1789e-03 the reduced plant model the same as the full-order model. For example, this is the case if
4) 1.7635e+00 9) 2.2193e-02 14) 3.2410e-04 we want to use the model for feedforward control. The truncated and optimal Hankel norm
5) 1.2965e+00 10) 1.5669e-02 15) 3.3073e-05 approximated systems do not preserve the steady-state gain and have to be scaled, i.e. the
nodel approximation G~ is replaced by G014’S, where T’1’5 = G0 (0) ‘G(O), G being the full-
order model. The scaled system no longer enjoys the bounds guaranteed by these methods
and 11GG01’176 leD can be quite large as is shown in Figure 11.2(b). Note that the residualized
ystem does not need scaling, and the error system for this case has been shown again only
100 for ease of comparison. The 9-L42~ norms of these errors are computed and are found to degrade
to 5.71 (at 151 rad/s) for the scaled truncated system and 2.61 (at 168.5 rad/s) for the scaled
optimal Hankel norm approximated system. The truncated and Hankel norm approximated
~~~‘
systems are clearly worse after scaling since the errors in the critical frequency range around
crossover become large despite the improvement at steady-state. Hence residualization is to
lo~2 10° ‘02 100 IF
Frequency [mdlsl
10 ID
be preferred over these other techniques whenever good low-frequency matching is desired.
Frequency brad/si
(a) Balanced residualiza
tion (b) Balanced truncation (c) Optimal Hankel norm
approximation
50
Figure 11.1: Singular values for model reductions of the aero-engine from 15 to 6 states 0
~ ~‘::~:--::~-~. —50
the residualized system matches perfectly at steady-state. The singular values of the error
‘00 - Reduced: -- ,‘‘ Reduced: —- -00
—‘50
Reduced: z
150 Full order ‘5° Full order Full order
system (G Ge), for each of the three approximations, are shown in Figure 11.2(a). The
— o 001 002 003 0 0.0’ 002 003 0 0.01
lime tsccl
002 003
Tone bed T,me [see)
(a) Balanced residuatiza- (b) Scaled balanced trunca (c) Scaled optimal Hankel
tion tion norm approximation
100 100
Figure 11.3: Aero-engine: impulse responses (second input)
5
10
Balanced truncation: Balanced truncation: —5
Balanced residualization: Balanced residualization:
Optimal Hankel-norm approx.: — — — Optimal Hankel-norm approx.: — — —
—10
1010 l0~0
Reduced: — — - - — - - —
l0 10° io2 10° 1o2 —[5 Full order —
11.8.2 Reduction of an aero-engine controller however, lose the steady-state gain. The prefilters of these reduced controllers would
We now consider reduction of a stable two degrees-of-freedom H~ loop-shaping controller, therefore need to be rescaled to match Trer(0).
The plant for which the controller is designed is the full-order gas turbine engine model z. The full-order controller [K1 1(2] is directly reduced without first scaling the prefilter.
described in Section 11.8.1 above. In this case, scaling is done after reduction.
A robust controller was designed using the procedure outlined in Section 9.4.3; see We now consider the first approach. A balanced residualization of [1(1T’V~ 1(21 is obtained.
Figure 9.21 which describes the design problem. Elmer(s) is the desired closed-loop transfer Tl~e theoretical upper bound on the ~ norm of the error (twice the sum of the tail) is 0.698,
function, p is a design parameter, G~ = M~’N8 is the shaped plant and (tiN,, AAr~) are
i.e.
perturbations on the normalized coprime factors representing uncertainty. We denote the IIKiw~ — (1(i1’V1)0 1(2 — I(2a Woo ≤ 0.698 (11.33)
actual closed-loop transfer function (from /9 to y) by T2~.
The controller K = [I<’~ 1(21, which excludes the loop-shaping weight W1 (which where the subscript a refers to the low-order approximation. The actual error norm is
includes 3 integral action states), has 6 inputs (because of the two degrees-of-freedom computed to be 0.365. ~ for this residualization is computed and its magnitude plotted
structure), 3 outputs and 24 states. It has not been scaled (i.e. the steady-state value of in Figure 11.6(a). The 9’too norm of the difference (Ta,, Elmer) is computed to be 1.44 (at 43 —
has not been matched to that of Elmer by scaling the prefilter). It is reduced to 7 states in each
of the cases that follow.
Let us first compare the magnitude of ~ with that of the specified model Elmer. By
magnitude, we mean singular values. These are shown in Figure 11.5(a). The infinity norm of
IC’
~ ~‘~‘
In’
101 ~:,- 100
101
10° 100 IC
T,,,-- IC”
T~-- ,~
101
IC io° IC’ io~’ io~ 102 IC’ ‘C° 102
Frequency [md/sI Frequency Imad/al Frequency laid/al
Trtr —
‘
‘‘o
‘‘ Trer~~~ ‘
oo‘0 Figure 11.6: Singular values of Trer and El10 for reduced [KiWi Kn]
—— ‘ o’ T~p ——
10” ‘ ‘‘ io” rad/s). This value is very close to that obtained with the full-order controller [1<’1W~ 1(2],
1o~ 10° 10’ 10’ 10’ 10 and so the closed-loop response of the system with this reduced controller is expected to be
Frequency lrad/s] Frequency [md/sI
very close to that with the full-order controller. Next [1(1 W~ 1(2] is reduced via balanced
(a) Unsealed prefilter [1(1 1(2] (b) Sealed prefilter [Ni W1 1(2]
truncation. The bound given by (11.33) still holds. The steady-state gain, however, falls below
the adjusted level, and the prefilter of the truncated controller is thus scaled. The bound given
Figure 11.5: Singular values of Tm1 (solid) and T1$ (dashed)
by (11.33) can no longer be guaranteed for the prefilter (it is in fact found to degrade to 3.66),
the difference T~13 Elmer is computed to be 0.974 and occurs at 8.5 rad/s. Note that we have
—
but it holds for 1(2 I(2a. Singular values of Elmer and El20 for the scaled truncated controller
—
ate shown in Figure 11.6(b). The infinity norm of the difference is computed to be 1.44 and
p = 1 and the ~ achieved in the 7-t~ optimization is 2.32, so that IITvs TrerII~ <7p’~2 as —
this maximum occurs at 46 rad/s. Finally [1(~ W~ 1(2] is reduced by optimal Hankel norm
required; see (9.81). The prefilter is now scaled so that ~ matches Elmer exactly at steady-
approximation. The following error bound is theoretically guaranteed:
state, i.e. we replace K1 by K1W~ where W~ = Ty,~(0)’Trer(0). It is argued by Hoyle et al.
(1991) that this scaling produces better model matching at all frequencies, because the 7Lm
optimization process has already given ~ the same magnitude frequency response shape
— (1<’iWi)a 1(2 “1<’2a leo <0.189 (11.34)
as the model Elmer. The scaled transfer function is shown in Figure 11.5(b), and the infinity Again the reduced prefilter needs to be scaled and the above bound can no longer be
norm of the difference (T9~ Elmer) computed to be 1.44 (at 46 rad/s). It can be seen that this
—
guaranteed; it actually degrades to 1.87. Magnitude plots of El20 and Elmer are shown in
scaling has not degraded the infinity norm of the error significantly as was claimed by Hoyle Figure 11.6(c), and the infinity norm of the difference is computed to be 1.43 and occurs
et al. (1991). To ensure perfect steady-state tracking the controller is always scaled in this at 43 rad/s.
way. We are now in a position to discuss ways of reducing the controller. We will look at the It has been observed that both balanced truncation and optimal Hankel norm approximation
following two approaches: cause a lowering of the system steady-state gain. In the process of adjustment of these steady-
state gains, the infinity norm error bounds are destroyed. In the case of our two degrees-
1. The scaled controller [1(1 T’V1 1(21 is reduced. A balanced residualization of this
of-freedom controller, where the prefilter has been optimized to give closed-loop responses
controller preserves the controller’s steady-state gain and would not need to be scaled
within a tolerance of a chosen ideal model, large deviations may be incurred. Closed-loop
again. Reductions via truncation and optimal Hankel norm approximation techniques,
468 MULTIVARIABLE FEEDBACK CONTROL MODEL REDUCTION 469
responses for the three reduced controllers discussed above are shown in Figures 11.7, 11.8
and 11.9.
It is seen that the residualized controller performs much closer to the full-order controller
and exhibits better performance in terms of interactions and overshoots. It may not be
IS 5-5 posSIble to use the other two reduced controllers if the deviation from the specified model
becomes larger than the allowable tolerance, in which case the number of states by which
the controller is reduced would probably have to be reduced. It should also be noted from
Reduced: — - Reduced: Reduced: “ —
0.5
Full order:
(11.33) and (11.34) that the guaranteed bound for 1(2 K2a is lowest for optimal Hankel
—
Full order — Full order: —
norm approximation.
Let us now consider the second approach. The controller [K1 1(21 obtained from the
0.5 I 1,5 0 0,5 I 1.5 ~ Timd [see] ~ H~ optimization algorithm is reduced directly. The theoretical upper bound on the error for
Time [see] Time [see)
balanced residualization and truncation is
(a) Step in r~ (b) Step in T~ (c) Step in Ta
111(1 .li~a 1(2 1(2a j~ ≤ 0.165 (11.35)
Figure 11.7: Closed-loop step responses: [Ki 14’j K2] balanced residualized
The residualized controller retains the steady-state gain of [K1 1(21. It is therefore scaled
with the same W1 as was required for scaling the prefilter of the full-order controller. Singular
values of Trer and T25 for this reduced controller are shown in Figure 11.10(a), and the
1-5
infinity norm of the difference was computed to be 1.50 at 44 rad/s. [Ki 1(21 is next
~ueed: Reduced: — —
Figure 11.10: Singular values of Trer and T11~ for reduced [Ki I<’Q
truncated. The steady-state gain of the truncated controller is lower than that of [K1 1(21,
and it turns out that this has the effect of reducing the steady-state gain of T~13. Note that the
I j~edueed:
steady-state gain of ~ is already less than that of Trer (Figure 11.5). Thus in scaling the
Reduced: —‘
0~ prefilter of the truncated controller, the steady-state gain has to be pulled up from a lower
Full order:
level as compared with the previous (residualized) case. This causes greater degradation at
other frequencies. The infinity norm of (T25 Trer) in this case is computed to be 25.3 and
—
0.5 l’s 2
—0,io
D’s I 1,5 0 0.5 1 1.5 occurs at 3.4 rad/s (see Figure 11.10(b)). Finally [K1 1(21 is reduced by optimal Hankel
Tiine1[secl Time [see] Time [see]
norm approximation. The theoretical bound given in (11.29) is computed and found to be
(a) Step in Ti (b) Snep in r2 (c) Step in ra 0.037, i.e. we have
1(i ~~K10 1(2 “1<’2aIIoo <0.037 (11.36)
Figure 11.9: Closed-loop step responses: [1<’itI’j 1(2] optimal Hankel norm approximated and
resealed The steudy-state gain falls once more in the reduction process, and again a larger scaling is
required. Singular value plots for ~ and Trer are shown in Figure 11.10(c). IIT~s Tref lice —
to an unacceptable level. Only the resldualized system maintains an acceptable level of gives poorer model matching at other frequencies, and only the residualized controller’s
performance. performance is deemed acceptable.
~.: / r~ Reduced:
Full order —
Reduced:
Full order:
— -
—
I ——
Reduced
11.9 Conclusion
We have presented and compared three main methods for model reduction based on
alanced realizations: balanced truncation, balanced residualization and optimal Hankel norm
~roximation.
0.5 15 2 05 I IS 2 ~0 OS I IS 2
Time [see] Time [see] Time [see) Residualization, unlike truncation and optimal Hankel norm approximation, preserves
(a) Step in~ (b) Step in r2 (e) Step in r3
the steady-state gain of the system, and, like truncation, it is simple and computationally
inexpensive. It is observed that truncation and optimal Hankel norm approximation perform
k..+fer at high frequencies, whereas residualization performs better at low and medium
Figure 11.11: Closed-loop step responses: [Ki K2 balanced residualized and scaled
frequencies, i.e. up to the critical frequencies. Thus for plant model reduction, where models
I.e IS
are not accurate at high frequencies to start with, residualization would seem to be a better
option. Further, if the steady-state gains are to be kept unchanged, truncated and optimal
0.5 7 \‘,‘ 7
7 Reduced:
Full order
-
0.5
aedueed:
FuA’order:
- -
— 0: ~‘,__-
R’edueed
-__~Full order —
1-lankel norm approximated systems require scaling, which may result in large errors. In such
a case, too, residualization would be a preferred choice.
Frequency-weighted model reduction has been the subject of numerous papers over the
-or ‘0.5 past few years. The idea is to emphasize frequency ranges where better matching is required.
05 I 1.5 3 I 15 2 ~O 05 1 15 2
Time [see] ‘lime [see) Time [see] This, however, has been observed to have the effect of producing larger errors (greater
(a) Step in r1 (b) Step in r2 (c) Step in r3 mismatching) at other frequencies (Anderson, 1986; Enns, 1984). In order to get good steady-
state matching, a relatively large weight would have to be used at steady-state, which would
Figure 11.12: Closed-loop step responses: [Ks I(2 balanced truncated and scaled cause poorer matching elsewhere. The choice of weights is not straightforward, and an error
hound is available only for weighted Hankel norm approximation. The computation of the
bound is also not as easy as in the unweighted case (Anderson and Liu, 1989). Balanced
I , ;:~ \Redueed:- residualization can, in this context, be seen as a reduction scheme with implicit low- and
0: ‘
I Reduced
Fullorder medium-frequency weighting.
- “,“ Pull order: —
For controller reduction, we have shown in a two degrees-of-freedom example the
importance of scaling and steady-state gain matching.
~•~o 0.5 I IS 2 0% °~ I 15 2
In general, steady-state gain matching may not be crucial, but the matching should usually
Time [see] Time [see] Time [see] be good near the desired closed-loop bandwidth. Balanced residualization has been seen
(a) Step in ri (b) Step in r2 (e) Step in o 3 to perform close to the full-order system in this frequency range. Good approximation at
high frequencies may also sometimes be desired. In such a case, using truncation or optimal
Figure 11.13: Closed-loop step responses: [I(~ I(~ optimal Hankel norm approximated and scaled Hankel norm approximation with appropriate frequency weightings may yield better results.
Finally, for controller reduction it is important that any subsequent loss in closed-loop
We have seen that the first approach yields better model matching, though at the expense performance is minimized. This problem has been addressed by Goddard (1995).
of a larger infinity norm bound on 1(2 1(2~ (compare (11.33) and (11.35), or (11.34) and
—
(11.36)). We have also seen how the scaling of the prefilter in the first approach gives poorer
performance for the truncated and optimal Hankel norm approximated controllers, relative to
the residualized one.
In the second case, all the reduced controllers need to be scaled, but a “large?’ scaling is
required for the truncated and optimal Hankel norm approximated controllers. There appears
to be no formal proof of this observation. It is, however, intuitive in the sense that controllers
reduced by these two methods yield poorer model matching at steady-state as compared with
that achieved by the full-order controller. A larger scaling is therefore required for them
than is required by the full-order or residualized controllers. In any case, this larger scaling
472 MULTI VARIABLE FEEDBACK CONTROL
12
LINEAR MATRIX
INEQUALITIES
This chapter gives an introduction to the use of linear matrix inequalities (LMTs) in the numerical
solution of some important control problems. LMI problems are defined and tools described for
transforming such problems into suitable formats for solution. The chapter ends with a case study on
anti-windup compensator synthesis.
where X1 C 1R~~ XPI is a matrix, Z~1 qj X~j = rn, and the columns of all the matrix variables
12 1 1 Fundamental LMI properties are stacked up to form a single vector variable.
A notion central to the understanding of matrix inequalities is definiteness In particular a Hence, from now on, we will consider functions of the form
real square matrix Q is defined to be positive definite if
F(X1,X2,...,X~) = F0+G1X1H1+02X2H2+... (12.7)
a.TQZ>0 Vx≠0 (121)
Fn+ZGiXiHi>0 (12.8)
and Q is said to be positive semi definite if
ZTQZ≥0 Vir (122) where F0, G1, H~ are given matrices and the X~ are the matrix variables which we seek.
It is common practice to write Q > 0 (Q ≥ 0) to indicate that Q is positive (semi-)definite Exercise 12.1 * Let Q be a Hermitian matrix (Q = Qh’) having the form Q Qn ±jQi. Show that
Likewise a matrix P = —Q is said to be negative (semi )definite if Q is positive (semi Q 0 ~fa;id only q
)definite and to indicate negative (semi-)definiteness we write P < 0 (P ≤ 0)
QR QI
Notice that any real square matrix Q can be written as
— I R
(123)
12.1.2 Systems of LMIs
where the first tei mon the right hand side of (12 3) is symmetric and the second term is skew In general, we are frequently faced with LMI constraints of the form
symmetric. A property of a skew-symmetric matrix is that its associated quadratic function is
always zero and therefore
F1(X1,. . . ,X,,) > 0 (1110)
IQ +2 QTN
xTQx=xT~ (124)
,X,,) > 0 (12.11)
It then follows that Q is positive definite if the symmetric matrix (Q+QT) is positive definite
A consequence of this is that Q is positive definite if all the eigenvalues of (Q + QT) are where
positive
If Q is a complex matrix it is said to be positive definite if x”Qx > 0 for any non-zero ~ = F0~ +ZGijXiHij (12.12)
x and Q will then be Hermitian In this chapter however we are largely interested in real
matrices and real valued LMIs as discussed below However, it is easily seen that, by defining F0, G~, H~, X~ as
The basic structure of an LMI is
Fo = diag(Foi,.. ,Fo~,) . (12.13)
F(x)Fo+ZwtF~>0 (125) diag(Gn,. .. ,G~p) (12.14)
= diag(H1~,. .,H~,) . (12.15)
where a C R’~ is a variable and F0, F, are given constant symmetric real matrices
X1 = diag(X1,. .X~) . (12.16)
The representation (12.5) may seem restrictive, as we have not allowed for cases where
some of the matrices F, are complex Heimitian or the LMI is non-strict having the form we actually have the inequality
F(z) ≥ 0 However complex-valued LMIs can be easily turned into real-valued LMIs see
Exercise 12 1 Similarly it is also possible to convert any feasible non-strict LMI to the
strict LMI form in (125) see Boyd et al (1994) FbI5(Xl,...,X~) 4Pn+Zd~2tftt >0 (12.17)
The basic LMI problem the feasibility problem is to find a such that inequality (125)
— —
holds Note that F(x) > 0 in (12 5) descnbes an affine relationship in terms of the variable
476 MULTIVARIABLE FEEDBACK CONTROL I ‘EAR MATRIX INEQUALITIES 477
That is, we can represent a (big) system of LMIs as a single LMI. Therefore, we do not fatlab code for solving this problem is given in Table 12.1.
distinguish a single LMI from a system of LMIs; they are the same mathematical entity. We
may also encounter systems of LMIs of the form
Table 12.1: MATLAB program for determining stability in Example 12.1
F1(X1,.. .,X~) > 0 (12.1 ~ Uses Z4ATLAB Robust control toolbox
A: n )< n state matrix
F2(X1.. . . ,X,1) > F3(Xj.....X~) (12.19) setlmis C 11)
p = lmivar(l, (size(A,l) 11); % Specify structure and size of P
Again, it is easy to see that this can be written in the same form as inequality (12.17) above. Lyap = newlmi
only the terms above the diagonal need to be specified:
For the remainder of the chapter we do not distinguish between LMIs which can be written lmitern([Lyap 1 1 P1,l.A,’s’) % AP + P’A < 0
as above, or those which are in the more generic form of inequality (12.17). lmiterm(ELyap 1 2 01,1) % 0
lmiterm{[Lyap 2 2 P1-1,1) % P > 0
Notation. It is standard to let X~ denote the generic LMI variables. In the examples that LuIsys = getlmis; % Obtain the system of LMI5
follow in this chapter, we will use the notation more commonly associated with the specific [tmin,xfeasl = feasp(LMIsys); Solve the feasibility problem
problem. For instance, in Example 12.1, P = X1, and in Example 12.6, P and Q are the LMI a Feasible CA is stable) iff tmin < r
variables X1 and X2, respectively.
,~‘.ere V is the set of matrices D which commute with the uncertainly block A (i.e. satisfy DA AD).
Table 12.2: MATLAB program for calculating 71~.,norm in Example 12.2 This hound is tight for complex A with three or fewer blocks. Due to the presence of the inverse tern,,
I Uses MATLAB Robust Control toolbox
I [A,S,C,D) : State—space realization “e opti nizatioll problem is difficult to solve in its originalform; however; it can be transformed into an
n = size(A.l) equivalent LMI problem. To see this note that
setlmis (1
P = lnivar(1,(size(A,l) IN; %Specify structure and size of p a(DMD’) <7 ~ p(D M’5D~DMD~) <72 (12.41)
ganna = lmivar[l,1l 11);
MinfLMI = newlat % LMI 4 1 ~D M1 DtI DMD _721 <0 ~ MHPM — -72P <0 (12.42)
lmitern([HinfLHI 1 1 PL1,A, ‘s’) I AP + PA
lmiterm([HinELHI 1 2 P1,1,6) 1 PB
lmiterm([HinELMI 1 3 O1,C’) I C’ where we have introduced P = D~ D. Note that P > 0 and in addition has the structure of D, i.e.
lmiterm C tHinfLr4I 2 2 galmna) , —l • 1) 1 ~ganimaI p e V. Now, ~up can be found by solving the following optimization problem:
lmitern((IiinfLMI 2 3 OLD) 1 0’
lmiternUHinfU4I 3 3 gsmmaL—l,l) I ~gen’sea’I mm 72 (12.43)
Ppos = newlmi I LIII 4 2
lmiterm([Ppos 1 1 P1,—l,l) I P > 0 st. MHPM — 72P <0 (12.44)
LHIsys = getlmis; I Obtain the system of LNI5
C = mat2dec (LMIsys, zeros In) • 1); 1 Vector c in c’x
options = [le-S,O,O, 0,01; 1 Relative accuracy of solution which is a GEVP with the functions
[norrnhinf,xoptl = mincx(LMIsysc, options); I Solve minimization problem
F1(P) = M~PM (12.45)
.F2(P) = —P (12.46)
A Matlab program for solving the GEVP (12.43)—(12.44) with structured A is shown in Table 12.3.
minA (12.31)
s.t. F1(X1,...,X~)—AF2(X1 X~) <0 (12.32)
Xn) <0 (12.33) Table 12.3: MATLAB program for calculnting upper bound on p in Example 12.4
I uses MATLAB Robust control toolbox
F3(X1,. ,X,~) <0
. . (12.34) I Here: H is 4 x 4 real matrix
I Here: Structured Delta with a full 2 x 2 block and a scalar 2 x 2 block
setimis (1
The first two lines are equivalent to minimizing the largest “generalized” eigenvalue of P = lmivar(l, [2 O;2 1)); % Specify P to connute with Delta
the matrix pencil F1 (X1 X~) AF2 (X1,— X~). In some cases, a GEVP problem
. . .
garmea = lmivar(1,[1 1])
I LMI 1 2
Ppos newlmi
can be reduced to a linear objective minimization problem, through an appropriate change of lmitern((—Ppos 1 1 P1,1,1) %P>O
MuupLMI newlmi; I LMI II 1
variables.
lniterm( [MuupLMI 1 1 P1 ,M’ ,m) 1 F; (P1 14’PM
lmiterm([—MuupLMI 1 1 P1,1,1) I F;(P) —P
Example 12.3 Bounding the decay rate of n linear system. A good example of a GEVP is given by LMIsys = getlmis; I Obtain the system of 12415
[gminxoptl = gevp(LMIsys,l); I Solve the GEVP problem
Boyd et al. (1994). Given a stable linear system ± = Ax, the decay rate is the largest a such that
muup sgrt(gmin) I Upper bound on p
where fi is a constant. If we choose V(x) = xTPx > 0 as a Lyapunov fimction for the system and
ensure that V(x) ≤ —2aV(x) it is easily shown that the system will have a decay rate of at least a.
Exercise 12.2 Let lvi = [~1 ~ and compute p(M) for (i) A = 6 1 (scalar 2 x 2 block), (ii)
Hence, the p~-oblem offinding the decay rate could be posed as the optimization problem in P > 0 A — diag(6j, 62) (two 1 x 1 blocks) and (iii) A = full 2 x 2 block using the Matlab program in
Table 12.3. Verify with (8.99). (Solution: p(M) = 2,4 and ~/~U 4.47.)
mm —a (12.36)
s.t. ATP + PA + 2aP <0 (12.37)
This problem is a GEVP with the functions 12.3 Tricks in LMI problems
Fi(P) = ATP+PA (12.38)
Although many control problems can be cast as LMI problems, a substantial number of these
F2(P) = —2P (12.39) need to be manipulated before they are in a suitable LMI problem format. Fortunately, there
are a number of common tools or “tricks” which can be used to transform problems into
Example 12.4 Calculating upper bound on p. Consider the problem of calculating the upper botoid
on the structured singular value, p in (8.87), given as
suitable LMI forms. Some of the more useful ones are described below.
The key fact to consider when making a change of variables is the assurance that the Notice that the original variables can be recovered by inverting X and U
original variables can be recovered and that they are not over-determined. Notice also that
multiplication by Q above is an example of a congruence transformation as considered in the 12.3.3 Schur complement
next section.
The main use of the Schur complement is to transform quadratic matrix inequalities into
Exercise 12.3 With
* reference to Example 12.2, formulate the problem of finding the worst-case LMIs, or at least as a step in this direction. Schur’s complement formula says that the
(maximum) gaul of each of the uncem-tain systems following statements are equivalent:
k k
(12.51) ‘N~
rs+1 ~J’12
as LA’!! problems. Verlfy your m-esults with the Robust Cont,-ol toolbox com,nand wcgain using
(i)
~=L~ ‘N2]
numnerical values 2 < k, r ~ 3.
(ii) ~22 < 0
‘Na — ~F12’~22~ <0
A non-strict form involving a Moore—Penrose pseudo-inverse also exists if ~ is only
negative semi-definite; see Boyd et al. (1994).
482 MULTIVARIABLE FEEDBACK CONTROL
LINEAR MATRIX INEQUALITIES 483
Example 12.7 Making a quadratic inequality linear. Consider the LQR-type matrix inequality
Example 12.8 Combining quadratic constraints to yield an LMI. An instructive example, from
(Riccati inequality)
ATP + PA + PB WiBTP + Q <0 (12.57) Boyd et aL (1994), involves finding a matrix variable P > 0 such that
where P > 0 is the matrix variable and the other matrices are constant with Q, B > 0. This inequality x T ATP+PA ~J~1 ~1 68
can he used to minimize the cost function (seen in G’hapter 9) z BTP 0 j zj<O (12.)
and use the Schur complement identities we can transfoi-m our Riccati inequality into
(12.61)
or [:]T[C~ ~j[:] ~ 0 (12.71)
The S piocedure is essentially a method which enables one to combine several quadratic
inequalities into one single inequality (generally with some conservatism) There are many 12.3.5 The projection lemma and Finsler’s lemma
instances in control engineering when we would like to ensure that a single quadratic function In some types of control problems, particularly those seeking dynamic controllers, we
of s € W~ is such that encounter inequalities of the form
Po(x) <0, Fo(s) 4 xTA0t + 2b0s + c0 (12 63)
‘P(X) + G(X)AHT(X) + H(X)ATCT(X) <0 (12.73)
whenever certain other quadratic functions are positive semi-definite i e when
where X and A are the matrix variables and ‘I’(.), GC), H(.) are (normally affine) functions
F2(s)≥0 F~(s)4sTA,z+2bos+co ze{1 2 q) (1264) ofX but not of A.
To illustrate the S-procedure consider z = 1 for simplicity That is we would like to ensure In Gahinet and Apkarian (1994), it is proved that inequality (12.73) is satisfied, for some
En(s) < 0 for all s such that F1 (s) ≥ 0 Now if there exists a positive (or zero) scalar ‘r X, if and only if
such that f H’Q(X)1’(X)WG(X) < 0 1274
Faug(s)4F0(s)+TF1(s)<0 Vs st Fi(s)≥0 (1265)
it follows that our goal is achieved To see this note that Fang(s) ≤ 0 implies that F0 (s) ≤ 0
1. T47~(x)’I’(X)WH(x) <
where 1470(x) and 147H(x) are matrices with columns which form bases for the null spaces
if rF1 (s) ≥ 0 because F0 (s) < Fang(s) if F1 (a) ≥ 0 Thus extending this idea to q
of G(X) and H(X) respectively. Alternatively, WG(x) and WH(x) are sometimes called
inequality constraints we have that
orthogonal complements of G(X) and H(X) respectively. Note that
Fo(s) <0 whenever Ft(s) > 0 (12 66)
Wa(x)G(X) = 0, WH(x)H(X) = 0 (12.75)
holds if
The main point of this result (referred to as Gahinet and Apkarian’s projection lemma) is
Fo(s)+ZrjFj(s)<0 r,>0 (1267) that it enables one to transform a matrix inequality, which is a, not necessarily linear, function
of two variables, into two inequalities which are functions of just one variable. This has two
In general the S procedure is conservative inequality (12 67) implies inequality (12 66)
but not vice versa When q = 1 however, the S procedure is non-conservative The usefulness useful consequences:
of the S procedure is in the possibility of including the ‘r~ s as vanables in an LMI problem (i) It can facilitate the derivation of an LMI.
484 MULTIVARIABLE FEEDBACK CONTROL 2INEAR MATRIX INEQUALITIES 485
(ii) There are fewer variables for computation. compenstltols are added which take action when the control signal saturates As the anti-
Finsler (1937) also proved that inequality (12.73) is equivalent to two inequalities windup compensator is inactive for large periods of time, conventional linear methods are
not always useful for designing such a compensator However, as we will discover, LMIs can
f ‘I’(X) aG(X)G(X)T
— <0 play an important part in this design
Anti-windup was also discussed in Section 9 4 5, where the Hanus scheme was briefly
~ ~(X) — aH(X)H(X)T <0 (12.76)
introduced The approach below is more general and rigorous
for some real a. In other words, inequalities(12.74) and (12.76) are equivalent. This result is
often referred to as Finsler’s lemma.
12.4.1 Representing anti-windup compensators
Example 12.5 (State feedback) continued. C~onsider again the state feedback synthesis proble,;,
offinding F> 0 and F such that d
(A ± BF)TP + P(A + BF) <0 (12.77)
Using the change of variables described earlier in Example 12.5, we can change this problem into that
1/
offinding Q > 0 and L such that
However; as T4’~ is a matrix whose columns span the null space of the identity matrix which, is
J’/(I)
= {0}, the above equation simply reduces to
Notice that the use of both the projection lemma and Finsler’s lemma effectively reduces
our original LMI problem into two separate ones: the first LMI problem involves the iJi,n
calculation of Q > 0; the second involves the back substitution of Q into the original
problem in order for us to find L (and then F). The reader is, however, cautioned against
the possibility of ill-conditioning in this two-step approach. For some problems, normally
those with large numbers of variables, X can be poorly conditioned, which can hinder the
numerical determination of ‘~ from (12.73).
12.4 Case study: anti-windup compensator synthesis Figure 12.2 Conditioning with M(s)
Linear controllers can be very effective at controlling real plants until they encounter actuator A generic anti-windup compensator is depicted in Figure 12 1 The plant C(s)
saturation, which can cause the behaviour of the system to deteriorate dramatically, or even [C1(s) C2 (s)] is assumed to be stable (to enable global results to be obtained — see
become unstable. To limit this loss of performance special compensators called anti-windup Turner and Postlethwaite (2004) for more detail about this) G1 (s) represents the distuibance
486 MULTI VARIABLE FEEDBACK CONTROL LINEAR MATRIX INEQUALITIES 487
it can be seen that, providing the nominal linear closed loop is stable, overall stability is
governed by the stability of the nonlinear loop. Moreover, the performance of the system can
be measured by the “size” of the map from u~ to yd• This map, call it 7;, governs how much
the linear output is perturbed by the saturation of the control signal. Hence, it would be useful
to minimize the size of the norm of this nonlinear operator. For more information on the
— —
motivation behind this see, for example, Turner and Postlethwaite (2004) and Turner et al.
(2003).
Theorem 12.1 Lyapunov’s theorem Given a positive definite function V(x) > 0 Vx ~ 0
and an autonomous system ± = f(x), then the system ± = f(x) is stable if
Nominal Linear Transfer Function ÔV (x)<0 Vz≠0 (12.84)
VQr) = —f
Dx
Figure 12.3: Equivalent representation of conditioning with ki(s) As our anti-windup system is nonlinear due to the presence of the saturation function, we
will use Lyapunov’s theorem to establish stability
feedforward part of the plant and therefore is the transfer function from the disturbance d(s)
12.4.3 £2 gain
to the output y(s). Similarly G2(s) represents the feedback part of the plant and therefore is
the transfer function from the actual control input zi~ (s) to the output y(s). Only 02(5) plays In linear systems, the 9Lc.~ norm is equivalent to the maximum root mean square or rms energy
a part in anti-windup synthesis and its state-space realization is given by gain of the system. The equivalent measure for nonlinear systems is the so-called £2 gain,
which is a bound on the rms energy gain. Specifically a nonlinear system with input u(t) and
] (12.82)
IC(s) is the linear controller, which we assume has been designed such that its closed ioop
11yU2 < 7I~UII2 + /3 (12.85)
interconnection with C(s) is stable, in the absence of saturation, and such that some linear where /3 is a positive constant and II(.)112 denotes the standard 2-norm-in-time (C2 norm) of
performance specifications have been satisfied. a vector. Thus the £2 gain of a system can be taken as a measure of the size of the output a
The anti-windup compensator, 0(s), adds extra signals to the controller input and output system exhibits relative to the size of its input.
when control signal saturation occurs. By choosing 0(s) in different ways, the closed-loop
properties during and following saturation are influenced. Figure 12.2 shows the closed-loop
system, when 0(s) is parameterized in terms of the transfer function M(s). An interesting
12.4.4 Sector boundedness
choice of M(s) is kr(s) = I. In this case, the anti-windup solution is similar to the The saturation function is defined as
internal model control scheme discussed by Campo and Moran (1990). However, this is
not always a good solution, especially when 02(5) has lightly damped modes (Weston and sat(u) = [sati(ui) satm(um)]T (12.86)
Postlethwaite, 2000). As shown later, better solutions can be obtained by choosing kr(s) as
and sat~(u~) = sign(u1) x niin{Iu~l,ut}, u~ > 0 Vi e {1,...,rn}, where ü~ is the i’th
a coprime fnctor of 02(5).
saturation limit. From this, the deadzone function can be defined as
From the identity
Dz(u) = it sat(u)
— (12.83) Dz(u) = it — sat(u) (12.87)
where DzC) and satC) represent the deadzone and saturation functions respectively, it can
It is easy to verify that the saturation function, sat~(ut), satisfies the following inequality:
be proven that Figures 12.2 and 12.3 are equivalent (Weston and Postlethwaite, 2000). Figure
12.3 is convenient to analyze the stability and performance of the system and, in particular, u1sat~(u~) ≥ sat~(ut) (12.88)
488 MULTIVARIABLE FEEDBACK CONTROL LINEAR MATRIX INEQUALITIES 489
or where
sat~Qn~)[u~ —sat~(u~)]w~ ≥ 0 (1289) —
A = A~+B~F (1295)
for some w~ > 0. Collecting this inequality for all i we can write
C = C~+D~F (1296)
sat(u)TW[u — sat(u)] ≥0 (1290) = [4 -T T (1297)
for some diagonal W > 0. Similarly it follows that However, from the sector boundedness of the deadzone we also have that
Dz(u)TW[u — Dz(u)j > 0 (12 91) 2iITTV[uj1~ — Fx~ — i~j ~ 0 (12 98)
for some diagonal W > 0. We will make use of this inequality in the derivation of our We will use the S-procedure to combine inequalities (12 94) and (12 98) First note that
anti-windup compensator synthesis equations. inequality (12 98) maybe written as
0 _FT147 0
12.4.5 Full-order anti-windup compensators * —2W TV z>0 (1299)
* * 0
The term “full-order” anti-windup compensators has a similar meaning to the term “full
order” ?4,~ controller; that is, the compensator is of order equal to the plant. We will confine We have added the factor of 2 into inequality (12.98) so that inequality (12.99) can be
our attention to full-order anti-windup compensator synthesis. For a treatment of low-order wntten in a tidier fashion, without this factoi of 2, there would be several factors of 1/2
and static anti-windup synthesis, see Turner and Postlethwaite (2004). present Using the S-procedure descnbed earlier, we can combine inequality (12.94) with
Assume that we factorize 02(5) = N(s)M(s)’, i.e. the anti-windup parameter Al(s) is (12 99) to obtain
chosen as part of a coprime factorization of 02(5); for example, see Section 4.1.5 or Zhou ATP + PA + OTO FB~ + UTD - FTT4/r 0
et al. (1996). In this case, the operator ‘T~ ujj,, —ì i’d is given by * —2Wr + D~7DP TV-r <0 (12 100)
* * _721
I~ (A~ + B~F)z~ + B~ü
Fx~ Notice that -r only appears adjacent to l4~, so we can define a new variable V = ¾7r and
(12.92)
(Op + D~F)x~ + JJJJiI use this from now on. Applying the Schur complement we obtain
I’” Dz(u,~~ Ud)
ATP+PA PB~_FTV 0 CT
The matrix F determines the coprime factorization of 02(5), which in turn influences * —21~ V DT
“ <0 (12101)
the performance of the anti-windup compensator. Hence, our goal in full-order anti-windup * * —71 0
synthesis is to find an appropriate matrix F such that the closed-loop performance in the * * 71
presence of saturation is good.
Next, using the congruence transformation diag(P’, V’,I,I) we obtain
* —217—’
We would like to choose F (and therefore Al(s)) such that 7 is internally stable with * *
sufficiently small £2 gain. It can be verified see (see Turner and Postlethwaite, 2004) that * *
if we choose a Lyapunov function V(x) = > 0 and ensure that
0 p~~tQT+p_lFTDT
I V_U3T
V(x) + 1J3’?Jd — 72U~~Ujj~ <0 (1293) —71 0 (12102)
then the operator ‘T~ is indeed internally stable with an £2 gain of ~. Therefore, using the * —71
expression for 7 we can write inequality (12.93) as
Finally, defining new variables Q = F’, U = V’, L = QF we get
ATP + PA+OTC
]
PB~+C-r 0 QAT+ApQ+LTB~+BpL B~U-QF 0 QC~T+LTD~T
ZT *
*
0
_721
(12 94) * -2U I UDT p <a
* * * —71 0
* * * —71
490 MULTI VARIABLE FEEDBACK CONTROL
which is now an LMI in Q > 0, U > 0 and diagonal, 7> 0 and L. To obtain F we can thus
compute F = Q’L, which allows us to construct our anti-windup compensator.
For applications of these and similar formulae see Turner and Postlethwaite (2004) and
Herrmann et al. (2003a; 2003b). 13
12.5 Conclusion
CASE STUDIES
In recent years, efficient interior-point algorithms have been developed to solve convex LMI
optimization problems of the type presented in this chapter. We have described the main
(generic) LMI problems in control and the tools and tricks required to transform them into
formats that can readily take advantage of the algorithms now available, especially in Matlab. In this chapter, we present three case studies which illustrate a number of important practical issues,
In the examples, we have only used Matlab code, as we have throughout the book. Alternative namely weights selection in ?L~ mixed-sensitivity design, disturbance rejection, output selection, two
LMI software is available and in this context we would like to mention YALMIP (http: degrees-of-freedom 7-t~ loop-shaping design, ill-conditioned plants, p-analysis and p-synthesis
I/control. ee ethz ch/ ~j oloef /yalmip .php), which is particularly useful for
. .
interfacing with the free solvers available. By including this chapter, we have attempted to
give the essential ingredients for developing an understanding of the power and usefulness 13.1 Introduction
of LMIs. More details can be found in Boyd et al. (1994). A cautionary note is that the
complexity of LMI computations is high, and certainly higher, for example, than solving a The complete design process for an industrial control system will normally include the
Riccati equation in a conventional approach. Nevertheless, the LMI approach opens the way following steps:
to solving problems that conventional methods cannot.
I. Plant modelling: to determine a mathematical model of the plant either from experimental
data using identification techniques, or from physical equations describing the plant
dynamics, or a combination of these.
2 Plant input—output controllability analysis: to discover what closed-loop performance can
be expected and what inherent limitations there are to “good” control, and to assist in
deciding upon an initial control structure and maybe an initial selection of performance
weights.
3 Control structure design: to decide on which variables to be manipulated and measured
and which links should be made between them.
4 Controller design: to formulate a mathematical design problem which captures the
engineering design problem and to synthesize a corresponding controller.
5 Control system analysis: to assess the control system by analysis and simulation against
the performance specifications or the designer’s expectations.
6 Controller implementation: to implement the controller, almost certainly in software
for computer control, taking care to address important issues such as anti-windup and
bumpless transfer.
7 Control system commissioning: to bring the controller on-line, to carry out on-site testing
and to implement any required modifications before certifying that the controlled plant is
fully operational.
In this book, we have focused on steps 2, 3, 4 and 5, and in this chapter we will present
three case studies which demonstrate many of the ideas and practical techniques which can
be used in these steps. The case studies are not meant to produce the “best” controller for the
application considered but rather are used here to illustrate a particular technique from the
book.
In case study I, a helicopter control law is designed for the rejection of atmospheric
turbulence. The gust disturbance is modelled as an extra input to an 8/1(8 7-L~
mixed-sensitivity design problem. Results from nonlinear simulations indicate significant The nonlinear helicopter model we will use for simulation purposes was developed at the
improvement over a standard S/KS design. For more information on the applicability of 7Loo former Defence Research Agency (now QinetiQ), Bedford (Padfield, 1981) and is known
control to advanced helicopter flight, the reader is referred to Walker and Postlethwaite (1996) as the Rationalized Helicopter Model (RHM) A turbulence generator module has recently
who describe the design and ground-based piloted simulation testing of a high-performance been included in the RHM and this enables controller designs to be tested on-line for their
helicopter flight control system. The first flight test results are given in Postlethwaite et al. disturbance rejection properties It should be noted that the model of the gusts affects the
(1999). helicopter equations in a complicated fashion and is self-contained in the code of the RHM
Case study 2 illustrates the application and usefulness of the two degrees-of-freedom For design purposes we will imagine that the gusts affect the model in a much simpler manner
7-ta, loop-shaping approach by applying it to the design of a robust controller for a high- We will begin by repeating the design of Yue and Postlethwaite (1990) which used
performance aero-engine. Nonlinear simulation results are shown. Efficient and effective an S/KS floo mixed-sensitivity problem formulation without explicitly considering
tools for control structure design (input—output selection) are also described and applied to atmosphenc turbulence We will then, for the purposes of design, represent gusts as a
this problem. This design work on the aero-engine has been further developed and forms perturbation in the velocity states of the helicopter model and include this disturbance as an
the basis of a multi-mode controller which has been implemented and successfully tested on extra input to the S/KS design problem The resulting controller is seen to be substantially
a Rolls-Royce Spey engine test facility at the former UK Defence Research Agency (now better at rejecting atmospheric turbulence than the earlier standard S/KS design More recent
QinetiQ), Pyestock (Samar, 1995). references on the application of Hoo optimization to helicopter flight control, including flight
The final case study is concerned with the control of an idealized distillation column. A tests, are given in the conclusions, Section 1326
very simple plant model is used, but it is sufficient to illustrate the difficulties of controlling
ill-conditioned plants and the adverse effects of model uncertainty. The structured singular
value p is seen to be a powerful tool for robustness analysis.
13.2.2 The helicopter model
Case studies 1, 2 and 3 are based on papers by Postlethwaite et al. (1994), Samar and The aircraft model used in our work is representative of the Westland Lynx, a twin-engined
Postlethwaite (1994) and Skogestad et al. (1988), respectively. multi-purpose military helicopter, approximately 9000 lbs (4000 kg) gross weight, with a
four-blade semi-rigid main rotor The unaugmented aircraft is unstable, and exhibits many
of the cross-couplings characteristic of a single main-rotor helicopter In addition to the
basic iigid body, engine and actuator components, the model also includes second-orderrotor
13.2 Helicopter control flapping and coning modes for off-line use The model has the advantage that essentially the
same code can be used for a real-time piloted simulation as for a workstation-based off-line
This case study is used to illustrate how weights can be selected in ?ioo mixed-sensitivity
handling qualities assessment
design, and how this design problem can be modified to improve disturbance rejection
The equations governing the motion of the helicopter are complex and difficult to formulate
properties.
with high levels of precision For example, the rotor dynamics are particularly difficult to
model A robust design methodology is therefore essential for high-performance helicopter
13.2.1 Problem description control The starting point for this study was to obtain an eighth-order differential equation
In this case study, we consider the design of a controller to reduce the effects of atmospheric
turbulence on helicopters. The reduction of the effects of gusts is very important in reducing Table 13.1 Helicopter state vector
a pilot’s workload, and enables aggressive manoeuvres to be carried out in poor weather State Description
conditions. Also, as a consequence of decreased buffeting, the airframe and component lives 9 Pitch attitude
are lengthened and passenger comfort is increased. ~ Roll attitude
The design of rotorcraft flight control systems, for robust stability and performance, p Roll rate (body-axis)
has been studied over a number of years using a variety of methods including: Hoo £7 Pitch rate (body-axis)
optimization (Yue and Postlethwaite, 1990; Postlethwaite and Walker, 1992); eigenstructure ~ Yaw rate
assignment (Manness and Murray-Smith, 1992; Samblancatt et al., 1990); sliding mode Va, Forward velocity
control (Foster et al., 1993); and H2 design (Takahashi, 1993). These early Hoo controller Vy Lateral velocity
designs were particularly successful (Walker et al., 1993), and have proved themselves in Vz Vertical velocity
piloted simulations. These designs have used frequency information about the disturbances
to limit the system sensitivity but in general there has been no explicit consideration of
the effects of atmospheric turbulence. Therefore by incorporating practical knowledge about modelling the small-perturbation rigid motion of the aircraft about hover The corresponding
the disturbance characteristics, and how they affect the real helicopter, improvements to the state-space model is
overall performance should be possible. We will demonstrate this below.
x = Az+Bu (131)
I
494 MULTIVARIABLE FEEDBACK CONTROL CASE STUDIES 495
}
o Heave velocity H
o Pitch attitude 0 (a)
a Roll attitude
a Heading rate ~‘ in
together with two additional (body-axis) measurements
[~]jH_KH
a
a
Roll rate p
Pitch rate q }
The controller (or pilot in manual control) generates four blade angle demands which are
(b)
effectively the helicopter inputs, since the actuators (which are typically modelled as first- Figure 13.1: Helicopter control structure (a) as implemented, (b) in the standard one
order lags) are modelled as unity gains in this study. The blade angles are degree-of-freedom configuration
The reasoning behind these selections of Yue and Postlethwaite (1990) is summarized below
Selection of W~ (s) (peifo, mance wezght~ For good tracking accuracy in each of the
controlled outputs the sensitivity function is required to be small This suggests forcing
integral action into the controller by selecting an s~ shape in the weights associated with the
controlled outputs It was not thought necessary to have exactly zero steady-state errors and
therefore these weights were given a finite gain of 500 at low frequencies (Notice that a pure
integrator cannot be included in W1 anyway since the standard ?-t~ optimal control problem
would not then be well posed in the sense that the corresponding generalized plant P could
Frequency [nulls]
not then be stabilized by the feedback controller K ) In tuning W1 it was found that a finite Frequency [rai/s)
attenuation at high frequencies wns useful in reducing overshoot Therefore high gain low (a) S (b) KS
pass filters were used in the primary channels to give accurate tracking up to about 6 rad/s
The presence of unmodelled rotor dynamics around 10 rad/s limits the bandwidth of Wi Figure 13.3: Singular values of Sand KS (s/KS design)
With four inputs to the helicopter we can only expect to control four outputs independently
Because of the rate feedback measurements the sensitivity function S is a 6 x 6 matrix and
therefore two of its singular values (corresponding top and q) are always close to 1 across
all frequencies All that can be done in these channels is to improve the disturbance rejection
properties around crossover 4 to 7 rad/s and this was achieved using second-order band-pass
filters in the last two elements of W1
Selection of 14~2(s) (input weight) The same first-order high pass filter is used in each
7. p
channel with a corner frequency of 10 rad/s to limit input magnitudes at high-frequencies and
thereby limit the closed loop bandwidth The high-frequency gain of W2 can be increased
to limit fast actuator movement The low frequency gain of W2 was set to approximately
—100 dE to ensure that the cost function is dominated by W1 at low frequencies
Selection of W3(s) (setpomt filter) J47~ is a weighting on the reference input r It is
chosen to be a constant matrix with unity weighting on each of the output commands and
a weighting of 0 1 on the fictitious rate demands The reduced weighting on the rates (which
are not directly controlled) enables some disturbance rejection on these outputs without them
significantly affecting the cost function The main aim of T4~~ is to force equally good tracking
of each of the primary signals Figure 13.4; Disturbance rejection design
For the controller designed using the above weights the singular value plots of S and
1(8 are shown in Figure 13 3(a) and (b) These have the general shapes and bandwidths
designed for and as already mentioned the controlled system performed well in piloted
simulation The effects of atmospheric tuibulence will be illustrated later after designing a We define Bd = columns 6, 7 and 8 of A. Then we have
second controller in which disturbance rejection is explicitly included in the design problem Ax+Bu+Bdd (13.9)
p = Cx (13.10)
13 2 4 Distut bance rejection design
which in transfer function terms can be expressed as
In the design below we will assume that the atmospheric turbulence can be modelled as
gust velocity components that perturb the helicopter’s velocity states v,,, v, and v~ by p = G(s)u + Gd(s)d (13,11)
d = [di d2 di -T as in the following equations. The disturbed system is therefore expressed
as where C(s) = C(sI A)’B, and Gd(s) = C(sI A)”~Bd. The design problem we will
— —
solve is illustrated in Figure 13.4. The optimization problem is to find a stabilizing controller
± = Ax+A[~]+Bu (133) K that minimizes the cost function
100
10—a
to_I
(a) S (b) KS
which is the 7-t~ norm of the transfer function from [~] to z This is easily cast into the
0 5 10 15 °iswis
general control configuration and solved using standard software Notice that if we set T4’~ to
zero the problem reverts to the S/KS mixed-sensitivity design of the previous subsection To Figure 13.7 Response to turbulence of the S/KS design (time in seconds)
synthesize the controller we used the same weights W1, W2 and W3 as in the S/KS design,
and selected W4 = al, with a a scalar parameter used to emphasize disturbance rejection 13.2.5 Comparison of disturbance rejection properties of the two
After a few iterations we finalized on a = 30 For this value of a, the singular value plots of
S and KS, see Figure 13 5(a) and (b), are quite similar to those of the S/KS design, but as esigns
we will see in the next subsection there is a significant improvement in the rejection of gusts To compare the disturbance rejection properties of the two designs we simulated both
Also, since Gd shares the same dynamics as G, and W4 is a constant matrix, the degree of controllers on the RHM nonlinear helicopter model equipped with a statistical discrete gust
the disturbance rejection controller is the same as that for the S/KS design model for atmospheric turbulence (DahI and Faulkner, 1979) With this simulation facility,
gusts cannot be generated at hover and so the nonlinear model was trimmed at a forward flight
Vz Turb vy Turb speed of 20 knots (at an altitude of 100 ft (30 m)), and the effect of turbulence on the four
4o~ controlled outputs observed Recall that both designs were based on a linearized model about
20 hover and therefore these tests at 20 knots also demonstrate the robustness of the controllers
0 Tests were camed out for a vanety of gusts, and in all cases the disturbance rejection design
was significantly better than the S/KS design
20 In Figure 13 6, we show a typical gust generated by the RHM The effects of this on
5 10 15 0 5 10 15
the controlled
disturbance outputsdesign,
rejection are shown in Figures
respectively 137 and
Compared with13the
8 for the design,
S/KS S/KS design and the
the disturbance
V2 Turb Wind Direction rejection controller practically halves the turbulence effect on heave velocity, pitch attitude
and roll attitude The change in the effect on heading rate is small
13.2.6 Conclusions
The two controllers designed were of the same degree and had similar frequency domain
44
0 5 10 15 properties. But by incorporating knowledge about turbulence activity into the second design,
substantial improvements in disturbance rejection were achieved. The reduction of the
turbulence effects by a half in heave velocity, pitch attitude and roll attitude indicates
Figure 13.6: Velocity components of turbulence (time in seconds) the possibility of a significant reduction in a pilot’s workload, allowing more aggressive
manoeuvres to be carried out with greater precision. Passenger comfort and safety would
also be increased.
500 MULTIVARIABLE FEEDBACK CONTROL CASE STUDIES 501
10
—1
-0.1
5 ~i0 5 10 15
depends on the pressure ratios generated by the two compressors. If the pressure ratio across
Figure 13.8: Response to turbulence of the disturbance rejection design (time in seconds) a compressor exceeds a certain maximum, it may no longer be able to hold the pressure head
generated and the flow will tend to reverse its direction. This happens in practice, with the
flow actually going negative, but it is only a momentary effect. When the back pressure has
The study was primarily meant to illustrate the ease with which information about cleared itself, positive flow is re-established but, if flow conditions do not change, the pressure
disturbances can be beneficially included in controller design. The case study also builds up causing flow reversal again. Thus the flow surges back and forth at high frequency,
demonstrated the selection of weights in 7-t~ mixed-sensitivity design. To read how the 7-L~ the phenomenon being referred to as surge. Surging causes excessive aerodynamic pulsations
methods have been successfully used and tested in flight on a Bell 205 fly-by-wire helicopter, which are transmitted through the whole machine and must be avoided at all costs. However,
see Postlethwaite et al. (1999), Smerlas et al. (2001), Prempain and Postlethwaite (2004) and for higher performance and greater efficiency the compressors must also be operated close to
Postlethwaite et al. (2005). A series of flight tests carried out in 2004 resulted in level 1 (the their surge lines. The primary aim of the control system is thus to control engine thrust whilst
highest) handling qualities ratings for all manoeuvres tested. There results were still to be regulating compressor surge margins. But these engine parameters, namely thrust and the
written up, when this book went to press. For more flight control examples and illustrations two compressor surge margins, are not directly measurable. There are, however, a number of
of the usefulness of robust multivariable control, see Bates and Postlethwaite (2002). measurements available which represent these quantities, and our first task is to choose from
the available measurements, the ones that are in some sense better for control purposes. This
is the problem of output selection as discussed in Chapter 10.
13.3 Aero.-engine control The next step is the design of a robust multivariable controller which provides satisfactory
performance over the entire operating range of the engine. Since the aero-engine is a highly
In this case study, we apply a variety of tools to the problem of output selection, and illustrate nonlinear system, it is normal for several controllers to be designed at different operating
the application of the two degrees-of-freedom 7’t~ loop-shaping design procedure. points and then to be scheduled across the flight envelope. Also in an aero-engine there are
a number of parameters, apart from the ones being primarily controlled, that are to be kept
within specified safety limits, e.g. the turbine blade temperature. The number of parameters
13.3.1 Problem description to be controlled and/or limited exceeds the number of available inputs, and hence all these
parameters cannot be controlled independently at the same time. The problem can be tackled
This case study explores the application of advanced control techniques to the problem of
by designing a number of scheduled controllers, each for a different set of output variables,
control structure design and robust multivariable controller design for a high-performance
which are then switched between, depending on the most significant limit at any given time.
gas turbine engine. The engine under consideration is the Spey engine which is a Rolls-
The switching is usually done by means of lowest-wins or highest-wins gates, which serve
Royce two-spool reheated turbofan, used to power modern military aircraft. The engine has
to propagate the output of the most suitable controller to the plant input. Thus, a switched
two compressors: a low-pressure (LP) compressor or fan, and a high-pressure (HP) or core
gain-scheduled controller can be designed to cover the full operating range and all possible
compressor as shown in Figure 13.9. The high-pressure flow at the exit of the core compressor
configurations. In Postlethwaite et al. (1995) a digital multi-mode scheduled controller is
is combusted and allowed to expand partially through the HP and LP turbines which drive the
designed for the Spey engine under consideration here. In their study gain scheduling was
two compressors. The flow finally expands to atmospheric pressure at the nozzle exit, thus
not required to meet the design specifications. Below we will describe the design of a robust
producing thrust for aircraft propulsion. The efficiency of the engine and the thrust produced
502 MULTIVARIABLE FEEDBACK CONTROL CASE STUDIES 503
controller for the primary engine outputs using the two degrees-of-freedom 7~tcc loop-shaping dynamics which result in a plant model of 18 states for controller design. The nonlinear model
approach. The same methodology was used in the design of Postlethwaite et al. (1995) which used in this case study was provided by the UK Defence Research Agency (now QinetiQ) at
was successfully implemented and tested on the Spey engine. pyestock with the permission of Rolls-Royce Military Aero Engines Ltd.
Scaling. Some of the tools we will use for control structure selection are dependent on
the scalings employed. Scaling the inputs and the candidate measurements, therefore, is
13.3.2 Control structure design: output selection vital before comparisons are made and can also improve the conditioning of the problem
The Spey engine has three inputs, namely fuel flow (WFE), a nozzle with a variable area for design purposes. We use the method of scaling described in Section 9.4.2. The outputs
(AJ), and inlet guide vanes with a variable angle setting (IGV):

u = [WFE  AJ  IGV]^T

In this study, there are six output measurements available,

yall = [NL  OPR1  OPR2  LPPR  LPEMN  NH]^T

as described below. For each one of the six output measurements, a look-up table provides its desired optimal value (setpoint) as a function of the operating point. However, with three inputs we can only control three outputs independently, so the first question we face is: which three?

Engine thrust (one of the parameters to be controlled) can be defined in terms of the LP compressor's spool speed (NL), the ratio of the HP compressor's outlet pressure to engine inlet pressure (OPR1), or the engine overall pressure ratio (OPR2). We will choose from these three measurements the one that is best for control:

• Engine thrust: Select one of NL, OPR1 and OPR2 (outputs 1, 2 and 3).

Similarly, the surge margin of the LP compressor can be represented by either the LP compressor's pressure ratio (LPPR) or the LP compressor's exit Mach number measurement (LPEMN), and a selection between the two has to be made:

• Surge margin: Select one of LPPR and LPEMN (outputs 4 and 5).

In this study we will not consider control of the HP compressor's surge margin, or other configurations concerned with the limiting of engine temperatures. Our third output will be the HP compressor's spool speed (NH), which it is also important to maintain within safe limits. (NH is actually the HP spool speed made dimensionless by dividing by the square root of the total inlet temperature and scaled so that it is a percentage of the maximum spool speed at a standard temperature of 288.15 K.)

• Spool speed: Select NH (output 6).

We have now subdivided the available outputs into three subsets, and decided to select one output from each subset. This gives rise to the six candidate output sets listed in Table 13.2.

We now apply some of the tools given in Chapter 10 for tackling the output selection problem. It is emphasized at this point that a good physical understanding of the plant is very important in the context of this problem, and some measurements may have to be screened beforehand on practical grounds. A 15-state linear model of the engine (derived from a nonlinear simulation at 87% of maximum thrust) will be used in the analysis that follows. The model is available over the Internet (as described in the Preface), along with actuator models.

The outputs are scaled such that equal magnitudes of cross-coupling into each of the outputs are equally undesirable. We have chosen to scale the thrust-related outputs such that one unit of each scaled measurement represents 7.5% of maximum thrust. A unit step demand on each of these scaled outputs would thus correspond to a demand of 7.5% (of maximum) in thrust. The surge-margin-related outputs are scaled so that one unit corresponds to 5% surge margin. If the controller designed provides an interaction of less than 10% between the scaled outputs (for unit reference steps), then we would have 0.75% or less change in thrust for a step demand of 5% in surge margin, and a 0.5% or less change in surge margin for a 7.5% step demand in thrust. The final output NH (which is already a scaled variable) was further scaled (divided by 2.2) so that a unit change in NH corresponds to a 2.2% change in NH. The inputs are scaled by 10% of their expected ranges of operation.

Table 13.2: RHP zeros and minimum singular value for the six candidate output sets

Set no.   Candidate controlled outputs     RHP zeros < 100 rad/s   sigma_min(G(0))
1         NL, LPPR, NH      (1,4,6)        none                    0.060
2         OPR1, LPPR, NH    (2,4,6)        none                    0.049
3         OPR2, LPPR, NH    (3,4,6)        30.9                    0.056
4         NL, LPEMN, NH     (1,5,6)        none                    0.366
5         OPR1, LPEMN, NH   (2,5,6)        none                    0.409
6         OPR2, LPEMN, NH   (3,5,6)        27.7                    0.392

Steady-state model. With these scalings the steady-state model yall = Gall u (with all the candidate outputs included) and the corresponding non-square RGA matrix, Λ = Gall × (Gall†)^T, are given by

Gall(0) = [  0.696  -0.046  -0.001
             1.076  -0.027   0.004
             1.385   0.087  -0.002
            11.036   0.238  -0.017
            -0.064  -0.412   0.000
             1.474  -0.093   0.983 ]

Λ(Gall) = [  0.009   0.016   0.000
             0.016   0.008  -0.000
             0.006   0.028  -0.000
             0.971  -0.001   0.002
            -0.003   0.950   0.000
             0.002  -0.000   0.998 ]        (13.13)

and the singular value decomposition of Gall(0) = U0 Σ0 V0^H is

U0 = [  0.062   0.001  -0.144  -0.544  -0.117  -0.266
        0.095   0.001  -0.118  -0.070  -0.734   0.659
        0.123  -0.025   0.133  -0.286   0.640   0.689
        0.977  -0.129  -0.011   0.103  -0.001  -0.133
       -0.006   0.065  -0.971   0.108   0.195   0.055
        0.131   0.989   0.066  -0.000   0.004  -0.004 ]
Σ0 = [ 11.296   0       0
        0       0.986   0
        0       0       0.417
        0       0       0
        0       0       0
        0       0       0 ]

V0 = [  1.000  -0.007  -0.021
        0.020  -0.154   0.988
        0.010   0.988   0.154 ]

The six row sums of the RGA matrix are

[0.025  0.023  0.034  0.972  0.947  1.000]^T

and from (A.85) this indicates that we should select outputs 4, 5 and 6 (corresponding to the three largest elements) in order to maximize the projection of the selected outputs onto the space corresponding to the three non-zero singular values. However, this selection is not one of our six candidate output sets because there is no output directly related to engine thrust (outputs 1, 2 and 3).

We now proceed with a more detailed input-output controllability analysis of the six candidate output sets. In the following, G(s) refers to the transfer function matrix for the effect of the three inputs on the selected three outputs.

Minimum singular value. In Chapter 10, we showed that a reasonable criterion for selecting controlled outputs y is to make ||G^-1 (y - y_opt)|| small (page 395), in particular at steady-state. Here y - y_opt is the deviation in y from its optimal value. At steady-state this deviation arises mainly from errors in the (look-up table) setpoint due to disturbances and unknown variations in the operating point. If we assume that, with the scalings given above, the magnitude |(y - y_opt)_i| is similar (close to 1) for each of the six outputs, then we should select a set of outputs such that the elements in G^-1(0) are small, or alternatively, such that the minimum singular value sigma_min(G(0)) is as large as possible (minimum singular value rule; see page 395). In Table 13.2 we have listed sigma_min(G(0)) for the six candidate output sets. We conclude that we can eliminate sets 1, 2 and 3, and consider only sets 4, 5 and 6. For these three sets we find that the value of sigma_min(G(0)) is between 0.366 and 0.409, which is only slightly smaller than sigma_min(Gall(0)) = 0.417.

Remark. The three eliminated sets all include output 4, LPPR. Interestingly, this output is associated with the largest element in the gain matrix Gall(0) of 11.0, and is thus also associated with the largest singular value (as seen from the first column of U0). This illustrates that the preferred choice is often not associated with the largest singular value of G.

Right-half plane zeros. RHP-zeros limit the achievable performance of a feedback loop by limiting the open-loop gain-bandwidth product. They can be a cause of concern, particularly if they lie within the desired closed-loop bandwidth. Also, choosing different outputs for feedback control can give rise to different numbers of RHP-zeros at different locations. The choice of outputs should be such that a minimum number of RHP-zeros are encountered, and they should be as far removed from the imaginary axis as possible.

Table 13.2 shows the RHP-zeros slower than 100 rad/s for all combinations of prospective output variables. The closed-loop bandwidth requirement for the aero-engine is approximately 10 rad/s. RHP-zeros close to this value or smaller (closer to the origin) will, therefore, cause problems and should be avoided. It can be seen that the variable OPR2 introduces (relatively) slow RHP-zeros. It was observed that these zeros move closer to the origin at higher thrust levels. Thus sets 3 and 6 are unfavourable for closed-loop control. This, along with the minimum singular value analysis, leaves us with sets 4 and 5 for further consideration.

Relative gain array (RGA). We here consider the RGAs of the candidate square transfer function matrices G(s) with three outputs,

Λ(G(s)) = G(s) × (G^-1(s))^T        (13.14)

In Section 3.4, it is argued that the RGA provides useful information for the analysis of input-output controllability and for the pairing of inputs and outputs. Specifically, input and output variables should be paired so that the diagonal elements of the RGA are as close as possible to unity. Furthermore, if the plant has large RGA elements and an inverting controller is used, the closed-loop system will have little robustness in the face of diagonal input uncertainty. Such a perturbation is quite common due to uncertainty in the actuators. Thus we want Λ to have small elements, and for diagonal dominance we want ||Λ - I|| to be small. These two objectives can be combined in the single objective of a small RGA number, defined as

RGA number ≜ ||Λ - I||_sum = Σ_{i≠j} |λ_ij| + Σ_{i=j} |λ_ij - 1|        (13.15)

The lower the RGA number, the more preferred is the control structure. Before calculating the RGA number over frequency we rearranged the output variables so that the steady-state RGA matrix was as close as possible to the identity matrix.

Figure 13.10: RGA numbers for the six candidate output sets as functions of frequency [rad/s]

The RGA numbers for the six candidate output sets are shown in Figure 13.10. As in the minimum singular value analysis above, we again see that sets 1, 2 and 3 are less favourable. Once more, sets 4 and 5 are the best but too similar to allow a decisive selection.

Hankel singular values. Notice that sets 4 and 5 differ only in one output variable, NL in set 4 and OPR1 in set 5. Therefore, to select between them we next consider the Hankel singular values of the two transfer functions between the three inputs and output NL and output OPR1, respectively. Hankel singular values reflect the joint controllability and observability of the states of a balanced realization (as described in Section 11.3). Recall that the Hankel singular values are invariant under state transformations but they do depend on scaling.

Figure 13.11 shows the Hankel singular values of the two transfer functions for outputs NL and OPR1, respectively. The Hankel singular values for OPR1 are larger, which indicates that OPR1 has better state controllability and observability properties than NL. In other words, output OPR1 contains more information about the system internal states than output NL. It therefore seems preferable to use OPR1 for control purposes rather than NL, and hence (in the absence of other information) set 5 is our final choice.
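The steady-state part of this screening is easy to reproduce. The following Matlab sketch is not part of the book's files; it uses only the rounded gain matrix in (13.13), so the values it prints should agree with Table 13.2, (13.16) and Figure 13.10 only up to rounding.

% Steady-state gain matrix Gall(0) from (13.13); rows: NL OPR1 OPR2 LPPR LPEMN NH
Gall = [ 0.696  -0.046  -0.001
         1.076  -0.027   0.004
         1.385   0.087  -0.002
        11.036   0.238  -0.017
        -0.064  -0.412   0.000
         1.474  -0.093   0.983 ];
Lam = Gall.*pinv(Gall).';             % non-square RGA, Lambda = Gall x (Gall^+)^T
rowsums = sum(Lam,2).'                % compare with [0.025 0.023 0.034 0.972 0.947 1.000]
sets = {[1 4 6],[2 4 6],[3 4 6],[1 5 6],[2 5 6],[3 5 6]};
for k = 1:6
    sv = svd(Gall(sets{k},:));
    fprintf('set %d: sigma_min = %5.3f\n', k, sv(end));   % compare with Table 13.2
end
G5 = Gall([2 5 6],:);                 % set 5: OPR1, LPEMN, NH
L5 = G5.*inv(G5).';                   % steady-state RGA of set 5, compare with (13.16)
rga_number = sum(sum(abs(L5 - eye(3))))   % RGA number (13.15) at steady-state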
Figure 13.11: Hankel singular values of the transfer functions from the three inputs to output NL (+) and to output OPR1 (x), plotted against state number

For the selected output set 5, the scaled steady-state gain matrix and its RGA are

G(0) = [  1.076  -0.027   0.004            Λ(G) = [  1.002   0.004  -0.006
         -0.064  -0.412   0.000                      0.004   0.996  -0.000
          1.474  -0.093   0.983 ]                   -0.006  -0.000   1.006 ]        (13.16)

Pairing of inputs and outputs. The pairing of inputs and outputs is important because it makes the design of the prefilter easier in a two degrees-of-freedom control configuration and simplifies the selection of weights. It is of even greater importance if a decentralized control scheme is to be used, and gives insight into the working of the plant. In Chapter 10, it is argued that negative entries on the principal diagonal of the steady-state RGA should be avoided and that the outputs in G should be (re)arranged such that the RGA is close to the identity matrix. For the selected output set, we see from (13.16) that no rearranging of the outputs is needed. That is, we should pair OPR1, LPEMN and NH with WFE, AJ and IGV, respectively.

H∞ loop-shaping design. We follow the design procedure given in Section 9.4.3. In steps 1 to 3 we discuss how pre- and post-compensators are selected to obtain the desired shaped plant (loop shape) Gs = W2 G W1, where W1 and W2 are the pre- and post-compensator weights. In steps 4 to 6 we present the subsequent H∞ design.

1. The singular values of the plant are shown in Figure 13.12(a) and indicate a need for extra low-frequency gain to give good steady-state tracking and disturbance rejection.

4. γ_min in (9.66) for this shaped plant is found to be 2.3, which indicates that the shaped plant is compatible with robust stability.

5. ρ is set to 1 and the reference model Tref is chosen as

Tref = diag{ 1/(0.018s+1),  1/(0.008s+1),  1/(0.2s+1) }

The third output NH is thus made slower than the other two in following reference inputs.

6. The standard H∞ optimization defined by P in (9.87) is solved. γ-iterations are performed and a slightly suboptimal controller achieving γ = 2.9 is obtained. Moving closer to optimality introduces very fast poles in the controller which, if the controller is to be discretized, would ask for a very high sample rate. Choosing a slightly suboptimal controller alleviates this problem and also improves on the H2 performance. The prefilter is finally scaled to achieve perfect steady-state model matching. The controller (with the weights W1 and W2) has 27 states.
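A minimal Matlab sketch of steps 4 to 6 is given below. It is not the book's design file: the aero-engine model is not reproduced here, so a random 15-state 3x3 system stands in for the shaped plant Gs = W2 G W1, and the reference-model time constants are the ones reconstructed above.

% Sketch only: Gs below is a placeholder, not the actual shaped aero-engine plant.
Gs = rss(15,3,3);                          % stand-in for the shaped plant W2*G*W1
[K0,~,gammamin] = ncfsyn(Gs);              % step 4: gamma_min for the shaped plant
% Step 5: reference model, with the NH channel made slower than the other two
Tref = append(tf(1,[0.018 1]), tf(1,[0.008 1]), tf(1,[0.2 1]));
step(Tref)                                 % desired closed-loop reference responses
% Step 6: in the book a slightly suboptimal controller (gamma = 2.9) is accepted,
% since pushing to optimality introduces very fast controller poles.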
Step responses of the linear controlled plant model are shown in Figure 13.13. The decoupling between the three controlled outputs is good.

Figure 13.13: Step responses of the linear controlled plant model for reference steps in OPR1, LPEMN and NH (solid: reference inputs; dash-dot: outputs)
13.4 Distillation process

The full model is an 82-state state-space model G4 with four inputs (L, V, D and B), two disturbances and four outputs (the two product compositions and the two levels). By closing the two level loops (MD and MB) this model may be used to generate the model for any configuration (LV, DV, etc.). The Matlab commands for generating the LV-, DV- and DB-configurations are given in Table 13.3.

A 5-state LV-model, obtained by model reducing the above 82-state model, is given on page 513. This model is also available over the Internet.

Table 13.3: Matlab program for generating model of various distillation configurations

% Uses the Matlab Robust Control toolbox
% G4: state-space model (4 inputs, 2 disturbances, 4 outputs, 82 states)
% Level controllers using D and B (P-controllers; bandwidth = 10 rad/min)
Kd = 10; Kb = 10;
% Now generate the LV-configuration from G4 using sysic:
systemnames  = 'G4 Kd Kb';
inputvar     = '[L(1); V(1); d(2)]';
outputvar    = '[G4(1); G4(2)]';
input_to_G4  = '[L; V; Kd; Kb; d]';
input_to_Kd  = '[G4(3)]';
input_to_Kb  = '[G4(4)]';
sysoutname   = 'Glv';
cleanupsysic = 'yes'; sysic;
%
% Modifications needed to generate DV-configuration:
Kl = 10; Kb = 10;
systemnames  = 'G4 Kl Kb';
inputvar     = '[D(1); V(1); d(2)]';
input_to_G4  = '[Kl; V; D; Kb; d]';
input_to_Kl  = '[G4(3)]';
input_to_Kb  = '[G4(4)]';
sysoutname   = 'Gdv';
%
% Modifications needed to generate DB-configuration:
Kl = 10; Kv = 10;
systemnames  = 'G4 Kl Kv';
inputvar     = '[D(1); B(1); d(2)]';
input_to_G4  = '[Kl; Kv; D; B; d]';
input_to_Kl  = '[G4(3)]';
input_to_Kv  = '[G4(4)]';
sysoutname   = 'Gdb';

This distillation process has been used as an illustrative example throughout the book, and so to avoid unnecessary repetition we will simply summarize what has been done and refer to the many exercises and examples for more details. The steady-state properties of the model, including the choice of temperature measurement, are discussed in Examples 10.8 and 10.9.

13.4.1 Idealized LV-model

The following idealized model of the distillation process, originally from Skogestad et al. (1988), has been used in examples throughout the book:

G(s) = 1/(75s + 1) [  87.8   -86.4
                     108.2  -109.6 ]        (13.17)

The inputs are the reflux (L) and boilup (V), and the controlled outputs are the top and bottom product compositions (yD and xB). This is a very crude model of the distillation process, but it provides an excellent example of an ill-conditioned process where control is difficult, primarily due to the presence of input uncertainty.

We refer the reader to the following places in the book where the model (13.17) is used:

Example 3.5 (page 78): SVD analysis. The singular values are plotted as a function of frequency in Figure 3.7(b) on page 80.

Example 3.6 (page 79): Discussion of the physics of the process and the interpretation of directions.

Example 3.14 (page 89): The condition number, γ(G), is 141.7, and the 1,1 element of the RGA, λ11(G), is 35.1 (at all frequencies).

Motivating example no. 2 (page 100): Introduction to robustness problems with inverse-based controller using simulation with 20% input uncertainty.

Exercise 3.10 (page 102): Design of robust SVD controller.

Exercise 3.11 (page 102): Combined input and output uncertainty for inverse-based controller.

Exercise 3.12 (page 103): Attempt to "robustify" an inverse-based design using the McFarlane-Glover H∞ loop-shaping procedure.

Example 6.8 (page 245): Sensitivity to input uncertainty with feedforward control (RGA).

Example 6.11 (page 250): Sensitivity to input uncertainty with inverse-based controller, sensitivity peak (RGA).

Example 6.14 (page 253): Sensitivity to element-by-element uncertainty (relevant for identification).

Example 8.1 (page 292): Coupling between uncertainty in transfer function elements.

Example in Section 8.11.3 (page 322): μ for robust performance which explains poor performance in Motivating example no. 2.

Example in Section 8.12.4 (page 330): Design of μ-optimal controller using DK-iteration.

In addition, the reader is referred to the first edition of this book (Skogestad and Postlethwaite, 1996) for an example on the magnitude of inputs for rejecting disturbances (in feed rate and feed composition) at steady state.
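The steady-state numbers quoted above for (13.17) are easily checked; a short Matlab sketch (not from the book):

G0  = [87.8 -86.4; 108.2 -109.6];     % steady-state gain of (13.17)
cond(G0)                              % condition number, approximately 141.7
RGA = G0.*inv(G0).'                   % the 1,1 element is approximately 35.1
G   = tf(1,[75 1])*G0;                % the full model G(s) in (13.17)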
The model in (13.17) has also been the basis for two benchmark problems.

Original benchmark problem. The original control problem was formulated by Skogestad et al. (1988) as a bound on the weighted sensitivity with frequency-bounded input uncertainty. The optimal solution to this problem is provided by the one degree-of-freedom μ-optimal controller given in the example in Section 8.12.4, where a peak μ-value of 0.974 (Remark 1 on page 335) was obtained.

Revised CDC benchmark problem. The original problem formulation is unrealistic in that there is no bound on the input magnitudes. Furthermore, the bounds on performance and uncertainty are given in the frequency domain (in terms of the weighted H∞ norm), whereas many engineers feel that time domain specifications are more realistic. Limebeer (1991) therefore suggested the following CDC specifications. The set of plants Π is defined by

Gp(s) = 1/(75s + 1) [ 0.878  -0.864 ] [ k1 e^(-θ1 s)       0       ]
                    [ 1.082  -1.096 ] [      0        k2 e^(-θ2 s) ]

        k1, k2 ∈ [0.8, 1.2],   θ1, θ2 ∈ [0, 1.0]        (13.18)

In physical terms this means 20% gain uncertainty and up to 1 minute delay in each input channel. The specification is to achieve for every plant G ∈ Π:

S1: Closed-loop stability.

S2: For a unit step demand in channel 1 at t = 0 the plant output y1 (tracking) and y2 (interaction) should satisfy:
• y1(t) ≥ 0.9 for all t ≥ 30 min
• y1(t) ≤ 1.1 for all t
• 0.99 ≤ y1(∞) ≤ 1.01
• y2(t) ≤ 0.5 for all t
• -0.01 ≤ y2(∞) ≤ 0.01
The same corresponding requirements hold for a unit step demand in channel 2.

S3: σ̄(Ky S) < 0.316, for all ω.

S4: σ̄(Ky(jω)) < 1 for ω ≥ 150.

Note that a two degrees-of-freedom controller may be used and Ky then refers to the feedback part of the controller. In practice, specification S4 is indirectly satisfied by S3. Note that the uncertainty description Gp = G(I + wI ΔI) with wI(s) = (s + 0.2)/(0.5s + 1) (as used in the examples in the book) only allows for about 0.9 minute time delay error. To get a weight wI(s) which includes the uncertainty in (13.18) we may use the procedure described on page 272, i.e. (7.36) or (7.37) with rk = 0.2 and θmax = 1.

Several designs have been presented which satisfy the specifications for the CDC problem in (13.18). For example, a two degrees-of-freedom H∞ loop-shaping design is given by Limebeer et al. (1993), and an extension of this by Whidborne et al. (1994). A two degrees-of-freedom μ-optimal design is presented by Lundstrom et al. (1999).

13.4.2 Detailed LV-model

In the book we have also used a 5-state dynamic model of the distillation process which includes liquid flow dynamics (in addition to the composition dynamics) as well as disturbances. This 5-state model was obtained from model reduction of the detailed model with 82 states. The steady-state gains for the two disturbances are given in (10.96).

The 5-state model is similar to (13.17) at low frequencies, but the model is much less interactive at higher frequencies. The physical reason for this is that the liquid flow dynamics decouple the response and make G(jω) upper triangular at higher frequencies. The effect is illustrated in Figure 13.16, where we show the singular values and the magnitudes of the RGA elements as functions of frequency. As a comparison, the RGA element λ11(G) = 35.1 at all frequencies (and not just at steady-state) for the simplified model in (13.17). The implication is that control at crossover frequencies is easier than expected from the simplified model (13.17).

Figure 13.16: Detailed 5-state model of distillation column. (a) Singular values. (b) Magnitudes of the RGA elements. (Frequency in rad/min.)

Applications based on the 5-state model are found in:

Example 10.9 (page 408): Selection of secondary (temperature) measurement for improving controllability of primary (composition) variables.

Example in Section 10.23 (page 451): Controllability analysis of decentralized control.

Details on the 5-state model. A state-space realization is

[ G(s)  Gd(s) ] = [ A  B  Bd ]
                  [ C  0  0  ]        (13.19)

where

A = [ -0.005131     0         0         0         0
           0     -0.07366     0         0         0
           0        0      -0.1829      0         0
           0        0         0      -0.4620    0.9895
           0        0         0      -0.9895   -0.4620 ]

B = [ -0.629   0.624          Bd = [ -0.062  -0.067
       0.055  -0.172                  0.131   0.040
       0.030  -0.108                  0.022  -0.106
      -0.186  -0.139                 -0.188   0.027
      -1.230  -0.056 ]               -0.045   0.014 ]

C = [ -0.7223  -0.5170   0.3386  -0.1633   0.1121
      -0.8913   0.4728   0.9876   0.8425   0.2186 ]

Scaling. The model is scaled such that a magnitude of 1 corresponds to the following: 0.01 mole fraction units for each output (yD and xB), the nominal feed flow rate for the two inputs (L and V), and a 20% change for each disturbance (feed rate F and feed composition zF). Notice that the steady-state gains computed with this model are slightly different from the ones used in the examples.

Remark. A similar dynamic LV-model, but with 8 states, is given by Green and Limebeer (1995), who also design an H∞ loop-shaping controller.

Exercise 13.1 * Repeat the μ-optimal design based on DK-iteration in Section 8.12.4 using the model (13.19).

13.4.3 Idealized DV-model

Finally, we have also made use of an idealized model for the DV-configuration:

G(s) = 1/(75s + 1) [  -87.8    1.4
                     -108.2   -1.4 ]        (13.20)

In this case the condition number γ(G) = 70.8 is still large, but the RGA elements are small (about 0.5).
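As for the LV-model, the quoted numbers can be checked directly in Matlab (a sketch, not from the book):

Gdv = [-87.8 1.4; -108.2 -1.4];       % steady-state gain of (13.20)
cond(Gdv)                             % approximately 70.8
Gdv.*inv(Gdv).'                       % all RGA elements close to 0.5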
The model (13.20) is used in the following places in the book:

Example 6.9 (page 245): Bounds on the sensitivity peak show that an inverse-based controller is robust with respect to diagonal input uncertainty.

Example 8.9 (page 314): μ for robust stability with a diagonal controller is computed. The difference between diagonal and full-block input uncertainty is significant.

Remark. In practice, the DV-configuration may not be as favourable as indicated by these examples, because the level controller is not perfect as was assumed when deriving (13.20).

The distillation case study has played an important role in the robustness analysis presented in this book. You should now be in a position to move straight to Appendix B, to complete a major project on your own and to sit the sample exam.

Good luck!

APPENDIX A

MATRIX THEORY AND NORMS

To compute the magnitude |c| of a complex scalar c = α + jβ, we multiply c by its conjugate c̄ ≜ α - jβ and take the square root, i.e. |c| = sqrt(c c̄) = sqrt(α^2 + β^2).

The transpose of a matrix A is A^T (with elements a_ji), the conjugate is Ā (with elements Re a_ij - j Im a_ij), the conjugate transpose (or Hermitian adjoint) matrix is A^H ≜ Ā^T (with elements Re a_ji - j Im a_ji), the trace is tr A (the sum of the diagonal elements), and the determinant is det A. By definition, the inverse of a non-singular matrix A, denoted A^-1, is
A’ = ~ (Al) The determinant is defined only for square matrices, so let A be an n x n matrix. The
matrix is non-singular if det A is non-zero. The determinant may be defined inductively
where adj A is the adjugate (or “classical adjoint”) of A which is the transposed matrix of as det A = a,~c~1 (expansion along column j) or det A = ~ ajje~j (expansion
cofactorsc~ of A, along row i), where c~j is the ij’th cofactor given in (A.2). This inductive definition begins
= [adj A]~~ 4 (~1)~~ dot A’3 (A.2) by defining the determinant of an 1 x 1 matrix (a scalar) to be the value of the scalar, i.e.
Here Au is a submatrix formed by deleting row i and column j of A. As an example, for a deta = a. We then get for a 2 x 2 matrix detA = a,,a22 a,2a21 and soon. From the —
2 x 2 matrix we have definition we directly get that det A = det AT. Some other determinant identities are given
below:
A = [a,, a,2j detA = a1~a22 — a12a2, 1. Let A, and A2 be square matrices of the same dimension. Then
[a2, an]
det(A,A2) = det(A2A,) = detA, dot A2 (A.9)
—a,2 .
compatible dimensions such that the matrices A2A3A4 and (A, + A2A3A4) are defined.
6. Schur’s formula for the determinant of a partitioned matrix:
Also assume that the inverses given below exist. Then
(A, + A2A3A4)’ = AT’ — AT’A2(A4AT’A2 + (A.6) dot [A,,
[A2,
A12
A22]
1 det(A,,) det(A22 . — A21A~’A,2)
Proof: Postmultiply (or premultiply) the right hand side in (A.6) by A1 + A2A3A4. This gives the (A.14)
det(A22) det(A,, . — A,2Aj~421)
identity matrix. 1]
Lemma A.2 Inverse of a partitioned matrix. If Aj~,’ and X’ exist then where it is assumed that A,, and/or A22 are non-singular.
Pmof Note that A has the following decomposition if A,, is non-singular:
EA1,
LA2,
A121’
A22]
—
—
1A’
11 +A~A,2X’A21A~j’
—1
—AE’ A,2X’
(A.7) [AH
[A21
A,2]
A22
—
—
[ A21A~
I
11
0] [An
Ij 0
0] [I
X 0
Aj~’At2] (A.15)
where X 4 A22 A21Aj~j’A12 is the Schur complement of A,, in A; also see (A.15).
—
Similarly ~fA~’ and Y’ exist then where X = A22 — A21 Aj,’Ai,. The first part of (A. 14) is proved by evaluating the determinant using
(A.9) and (A. 13). Similarly, if A22 is non-singular,
A,, A12 —‘ — Y’ A11 A12] I A12A~] y o~ 1 0
(A.S) (A.16)
A2, A22 — -A~A21Y-’ A;2’ + A;’A21Y-’A,2A~ A21 A22j 0 1 j 0 A22] A~A21 I
where Y 4 A,, — A12A;,’A2, is the Sc/mr complement of A22 in A; also see (A. 16). where Y = A11 — Ai2A~’A2t, and the last part of (A.14) follows. C
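Both results are easy to verify numerically. The following Matlab sketch (not from the book) checks the matrix inversion lemma (A.6) and Schur's formula (A.14) on randomly generated matrices:

rng(0);                              % reproducible example
A1 = rand(3); A2 = rand(3,2); A3 = rand(2); A4 = rand(2,3);
lhs = inv(A1 + A2*A3*A4);
rhs = inv(A1) - inv(A1)*A2*inv(A4*inv(A1)*A2 + inv(A3))*A4*inv(A1);
norm(lhs - rhs)                      % of the order of machine precision, cf. (A.6)

A11 = rand(3); A12 = rand(3,2); A21 = rand(2,3); A22 = rand(2);
A  = [A11 A12; A21 A22];
d1 = det(A11)*det(A22 - A21*inv(A11)*A12);
d2 = det(A22)*det(A11 - A12*inv(A22)*A21);
[det(A) d1 d2]                       % the three values agree, cf. (A.14)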
A.2 Eigenvalues and eigenvectors

Definition A.1 Eigenvalues and eigenvectors. Let A be a square n x n matrix. The eigenvalues λ_i, i = 1, ..., n, are the n solutions to the n'th-order characteristic equation

det(A - λI) = 0

Note that the ordering of the complex eigenvalues may be different for A and A^H.

Remark 3 The eigenvalues are sometimes called characteristic gains. The set of eigenvalues of A is called the spectrum of A. The largest of the absolute values of the eigenvalues of A is the spectral radius of A.

When A has a full set of linearly independent eigenvectors, collected as the columns of T, it may be diagonalized:

Λ = T^-1 A T        (A.22)

This always happens if the eigenvalues are distinct, and may also happen in other cases, e.g. for A = I. For distinct eigenvalues, we also have that the right and left eigenvectors are mutually orthogonal, and we may scale the columns in Q such that they are also mutually orthonormal,

q_i^H t_j = 1 if i = j,    q_i^H t_j = 0 if i ≠ j

Then we have the following dyadic expansion or spectral decomposition of the matrix A in terms of its right and left eigenvectors:

A = Σ_i λ_i t_i q_i^H        (A.23)

Remark. The case where the eigenvalues are not distinct (i.e. repeated) is much more complicated, both theoretically and computationally. Fortunately, from a practical point of view it is sufficient to understand the case where the eigenvalues are distinct.

8. The matrix cA^k, where k is an integer, has eigenvalues cλ_i^k.

9. Consider the l x m matrix A and the m x l matrix B. Then the l x l matrix AB and the m x m matrix BA have the same non-zero eigenvalues. To be more specific, assume l ≥ m. Then the matrix AB has the same m eigenvalues as BA plus l - m eigenvalues equal to zero.

From the above properties we have, for example, that

λ_i(S) = λ_i((I + L)^-1) = 1/(1 + λ_i(L))        (A.24)

In this book, we are sometimes interested in the eigenvalues of a real (state) matrix A, and in other cases in the eigenvalues of a complex transfer function matrix evaluated at a given frequency, e.g. L(jω), as in (A.24). It is important to appreciate this difference.

A.2.2 Eigenvalues of the state matrix

Consider a system described by the linear differential equations

ẋ = A x + B u        (A.25)
Unless A is diagonal this is a set of coupled differential equations. For simplicity, we assume
The unitary matrices U and V form orthonormal bases for the column (output) space and
that the eigenvectors t~ of A are linearly independent and introduce the new state vector
the row (input) space of A. The column vectors of V, denoted v~, are called right or input
z = T”z, i.e. x = Tz. We then get
singular vectors and the column vectors of U, denoted u~, are called left or output singular
vectors. We deflnefl EU1, ~ vi,UE Uk andvE Va.
E
Ti=ATz+33u ~ i=Az+T’Bu (A.26)
Note that the decomposition in (A.28) is not unique. For example, for a square matrix, an
which is a set of uncoupled differential equations in terms of the new states z. The unforced alternative SVD is A = UIEVIH, where U’ = US, V’ = VS, S = diag{e~°’ } and O~ is any
solution (i.e. with it = 0) for each state z~ is z~ = ~ where zo~ is the value of the state at real number. However, the singular values, o’~, are unique.
= 0. If .X1 is real, then we see that this mode is stable (z~ 0 as t
—, no) if and only if A~
—* The singular values are the positive square roots of the k = min(l, in) largest eigenvalues
0. If A~ = ReA~ + jImA1 is complex, then we get eAit = e~Ait(cos(JmAjt) + j sin(ImA~t)) of both AAH and AHA. We have
and the mode is stable (z1 —, 0 as t -4 on) if and only if ReA1 < 0. The fact that the new
state z~ is complex is of no concern since the real physical states x = Tz are of course real. u~(A) = ~J~J~T~HA) = /~(AAH) (A.33)
Consequently, a linear system is stable if and only if all the eigenvalues of the state matrix A
have real parts less than zero; that is, lie in the open left-half plane. Also, the columns of U and V are unit eigenvectors of AA” and AHA, respectively. To
derive (A.33) we write
A.2.3 Eigenvalues of transfer functions AAH = (UEVH)(UEVH)H = (UEVH)(VEHUH) = UEE~U~ (A.34)
The eigenvalues of the loop transfer function matrix, A1(L(jw)), evaluated as a function of
or equivalently since U is unitary and satisfies UH = U’ we get
frequency, are sometimes called the characteristic loci, and to some extent they generalize
L(jw) for a scalar system. In Chapter 8, we make use of .A1(L) to study the stability of the (AAH)U = UEE’~ (A.35)
M&structure where L = Mu. Even more important in this context is the spectral radius,
p(L(jw)) = max~ IAt(L(iw))~. We then see that U is the matrix of eigenvectors of AA~’ and {o’fl are its eigenvalues.
Similarly, we have that V is the matrix of eigenvectors of AHA.
singular 2 x 2 matrices (with det A = 0 and £(A) = 0) we get a(A) = ~ IIAJIr’ Another very useful result is Fan’s theorem (Horn and Johnson, 1991, p. 140 and p. 178):
(the Frobenius norm), which is actually a special case of (A. 127).
σ_i(A) - σ̄(B) ≤ σ_i(A + B) ≤ σ_i(A) + σ̄(B)        (A.50)
aj(A’) = 1/u~M). u~(A’) = v~(A), v1(A’) = u~(A) (A.39) On combining (A.40) and (A.53) we get a relationship that is useful when evaluating the
amplification of closed-loop systems:
and in particular
â(A’) = l/u(A) (A.40) ~(A) — 1 ≤ a(I + A)-’ ≤ u(A) + 1 (A.54)
For a partitioned matrix, M = [~] or M = [A B], the following inequalities are useful:
direction, and so on.
A.3.6 Singularity of matrix A + E i i = I = in i e A is non-singular In this case At = A—’ is the inverse of the matnx
From the left inequality in (A.52) we find that 2 r = in < I i e A has full column rank This is the conventional least squares problem
where we want to minimize lu Ax112 and the solution is
—
for which ~y Ax112 is minimized. The solution is given in terms of the pseudo-inverse
—
The condition number of a matrix is defined in this book as the ratio
(Moore—Penrose generalized inverse) of A:
x=Aty (A.61) 7(A) =a1(A)/a&(A) =O~(A)JaM)
where k = mm(I in) A matrix with a large condition number is said to be ill conditioned
The pseudo-inverse may be obtained from an SVD of A = UEVH by This definition yields an infinite condition number for rank-deficient matrices For a non
singular matrix we get from (A 40)
At = V~E;’U7 = (A.62)
7(4) =a~(A) a(A’) (A68)
where r is the number of non-zero singular values of A. We have that
Other definitions for the condition number of a non-singular matrix are also in use e g
u(A) = 1/U(At) (A.63)
Note that At exists for any matrix A, even for a singular square matrix and a non-square 7~(A) = IIAll~ llA’ll~ (A 69)
matrix. The pseudo-inverse also satisfies where llAM~ denotes any matrix norm If we use the induced 2 norm (maximum singular
value) then this yields (A 68) From (A 68) and (A 43) we get for non singular matrices
AAtA=A and AtAAt=At
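These properties, together with the SVD formula (A.62), can be illustrated numerically; a small Matlab sketch (not from the book, with an arbitrarily chosen matrix):

A  = [1 2; 2 4; 0 1];                 % a 3 x 2 matrix of full column rank
Ad = pinv(A);                         % Moore-Penrose pseudo-inverse
norm(A*Ad*A - A), norm(Ad*A*Ad - Ad)  % both of the order of machine precision
[U,S,V] = svd(A,'econ');
norm(Ad - V*inv(S)*U')                % agrees with the SVD expression (A.62)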
Note the following cases (where r is the rank of A): 7(AB)<7(A)7(B) (A70)
i
The minimized co,zdjtjon number is obtained by minimizing the condition number over all A.4.1 Algebraic properties of the RGA
possible scalings. We have
* Most of the properties below follow directly if we write the RGA elements in the form
~ (A) = mill 7(D0AD1) (A71)
— -. .—.. — — —1 ~÷~a~jdetAt’
where D1 and D0 are (complex) diagonal scaling matrices. If we allow scaling only on one u —a~ afl_az3dO~A —( ~ detA ( . )
side then we get the input and output minimized condition numbers:
where ajj denotes the ji’th element of the matrix A 4 A’, A~ denotes the matrix A with
* . *
71(A) = mln7(ADI). 70(A) = min7(DQA) (A72) row i and column j deleted, and c~ = (_l)ifi det ~ is the ij’th cofactor of the matrix A.
Di Do
For any non-singular in x m matrix A, the following properties hold:
As shown in (A.79) and (A.80), the minimized condition number is closely related to the I. A(A’) = A(AT) = A(A)T.
norm of the RGA-matrix. 2. Any permutation of the rows and columns of A results in the same permutation in the
Remark. To compute these minimized condition numbers we define RGA. That is, A(P1AP2) = P1A(A)P2 where P1 and P2 are permutation matrices. (A
permutation matrix has a single 1 in every row and column and all other elements equal to
H— 0 A’ 0.) A(P) = P for any permutation matrix.
AU (A 73)
3. The sum of the elements in each row (and each column) of the RGA is 1. That is,
Then we have, as proven by Braatz and Moran (1994), Z~Z ~ = land ~?L1 ~ = 1.
4. A(A) = I if A is a lower or upper triangular matrix; and in particular the RGA of a
= mm &(DHD~), D = diag{D7’,Do} (A 74) diagonal matrix is the identity matrix.
D; ,D0
5. The RGA is scaling invariant. Therefore, A(D1AD2) = A(A) where D1 and D2 are
diagonal matrices.
i/iJ~=rnin~DHui, D = diag{D7’,I} (A 75)
6. The RGA is a measure of sensitivity to relative element-by-element uncertainty in the
matrix. More precisely, the matrix A becomes singular if a single element in A is perturbed
= rninU(DHD’), D = diag{I, Do} (A 76)
from a11 to a’11 = a~1(l ~-); see Theorem 6.6 on page 251.
—
These convex optimization problems may be solved using available software for the upper bound on 7. The norm of the RGA is closely related to the minimized condition number 7* defined in
the structured singular value pup(H); see (8.87) and Example 12.4. In calculating p~~(H), we use for (A.71). For a 2 x 2 matrix (Grosdidier et al., 1985) and a real 3 x 3 matrix (Liang, 1992):
~ (A) the structure A = diag{Adjag, Adiag}, for 7(A) the structure A = diag{Adjag, AfuII}, and 7* ~ 1/7* IIAII0~ (A.79)
for ~5 (A) the structure A = diag{Ac~jj, Adiag}.
In general, for a (complex) matrix of any size (Nett and Manousiouthakis, 1987):
7* ~ 1/7* ≥ II’~IIm (A.80)
A.4 Relative gain array
Here IIAWm 4 2 max{11A1111, IIAII1~} is two times the maximum row or column sum of
The relative gain array (RGA), see section 3.4 (page 82), was introduced by Bristol (1966). the RGA (the matrix norms are defined in Section A.5.2). (A.80) shows that a matrix with
Many of its properties were stated by Bristol, but they were not proven rigorously until the
work by Grosdidier et al. (1985). The RGA of a complex non-singular m x in matrix A,
I large RGA elements always has a large minimized condition number. The reverse has also
been conjectured (Nett and Manousiouthakis, 1987), but it does not hold for 4 x 4 matrices
denoted RGA(A) or A(A), is a complex in x in matrix defined by or larger as shown by the following counterexample motivated by Liang (1992):
1—1—li k000 1 1 i—i k 1 —i—L
RGA(A) ≡ Λ(A) ≜ A × (A^-1)^T        (A.77)
A—1i 11 —1—1
1 1 okoo
ooin —ii
1—111 11 —
k
—1k
k 1
k—i
where the operation x denotes element-by-element multiplication (Hadamard or Schur 1—11—1 0001 1 1—11 k—li—k
product). If A is real then Λ(A) is also real. For calculating the RGA using Matlab, use
has 7* (A) = 7(A) = k (which can be arbitrary large), but for any k all RGA elements are
rga = a.*pinv(a)'; see also Table 3.1 on page 87.
.
0.25 so IIA(A)II,~ = 2.
Example:
8. The diagonal elements of the matrix ADA—’ are given in terms of the corresponding row
elements of the RGA (Skogestad and Moran, 1987c; Nett and Manousiouthakis, 1987).
A 1—— ~1 —2
~ ,A1~ — 0.4 0.2
0301’ AA
(i)— — 0.4
0.6 0.6
0.4 For any diagonal matrix D = diag{d1} we have
[A’DAJ~~ = ~ .A~(A)d1 (A.82) (c) The elements of column j of the RCA sum to the square of the 2-norm of the j’th row
i=1 in V
9. It follows from Property 3 that A always has at least one eigenvalue and one singular value > A~ = IIeT~4I212 <
—
1 (A.85)
equal to 1. i=1
Proofs of some of the properties: Property 3: Since AA’ = I it follows that ~ ~ = Here 1’,. contains the firstr input singular vectors forG, and e~ is an in x 1 basis vector
From the definition of the RCA we then have that ~ A1~ = 1. Property 4: If the matrix is upper for input u~; e~ = [0 0 1 0 01T where 1 appears in positionj.
triangular then ~ = 0 for i > j. It then follows that CU = 0 forj > i and all the off-diagonal RCA (d) The diagonal elements of B ADAt are b~1 Z’?’1 ~ = EyL1 d~Au~ where
elements are zero. Property 5: Let A’ = D1AD,. Then a~j = djtd2~at~ and = and ~ ?Jfi denotes the ji’ th element of At and D is any diagonal matrix.
the result follows. Property 6: The determinant can be evaluated by expanding it in terms of any row or 2. A has fill! column rank, r = rank(A) in (i.e. A has no more inputs than outputs, and
column, e.g. by row i, detA = ~ detA~~. Let A’ denote A with ~ substituted fore,, the inputs are linearly independent). In this case AtA I, and the following properties
By expanding the determinant of A’ by row i and then using (A.78) we get
hold:
detA’ = detA — (_i)~~5~!i- detAt~ = 0 (a) The RCA is independentof input scaling, i.e. A(AD) A(A).
(b) The elements in each column of the RCA sum to 1, E~ A~j = 1.
det .4
(c) The elements of row i of the RCA sum to the square of the 2-norm of the i’th row in
Property 8: The ii’th element of the matrix B = ADA—’ is ha = ~ d~a~~a~t = ~ d~A1~. Ur,
U
Example A.1 i=1 AU = IeTUrlI~ ~ i (A.86)
56 66 75 r 6.16 —0.69 —7.94 3.48 Here U,. contains the first i- output singular vectors for G, and e5 is an I x 1 basis vector
A, = [75
18
54
66
82
25
28
38 AM,) =
—1.77 0.10
—6.60 1.73
3.16
8.55
_0.491
—2.69 (A.83)
for output ys; ~ = [0 0 1 0 01T where 1 appears in position i.
d The diagonal elements of B = AtDA we equal to E~1
9 51 8 1iJ [ 3.21 —0.14 —2.77 0.70 ~ ~ d1A~, where ~afi denotes the ji’th element of At and D is any diagonal matrix.
In this case, 7(A2) = O(A2)/c(A9) = 207.68/L367 = 151.9, ~(A2) = 51.73 (obtained 3. General case. For a general square or non-square matrix which has neither full row nor
numerically using (A.74)), 77M2) = 118.70 and 75(A2) = 92.57. Furthermore, IlAlim = full column rank, identities (A.85) and (A.86) still apply.
2 max{22.42, 19.58} = 44.84, so (A.80) is satisfieth The matrix A2 is non-singular and the 1,3 From this it also follows that the rank of any matrix is equal to the sum of its RCA
ele,nent of the RCA is A13 (A2) = —7.94. Thus from Property 6 the matrix A2 becomes singular if elements. Let the I x in matrix G have rank r, then
the 1,3 element is perturbed from 75 to 75(1 — ~ = 84.45.
Additional examples on the properties of RGA are given in Section 3.4.
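Properties 3 and 6 are also easy to illustrate numerically. The following Matlab sketch (not from the book, using an arbitrarily chosen non-singular matrix) checks that the row and column sums of the RGA equal 1 and that the perturbation in Property 6 makes the matrix singular:

A = [10 2 1; 3 8 2; 1 1 5];           % any non-singular matrix will do
L = A.*inv(A).';                      % RGA, Lambda = A x (A^-1)^T
sum(L,2).', sum(L,1)                  % row and column sums are all 1 (Property 3)
Ap = A; Ap(1,1) = A(1,1)*(1 - 1/L(1,1));
det(Ap)                               % essentially zero: A becomes singular (Property 6)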
Z A~j (G)
i,j
= rank(G) = r
Proofs of (A.85) and (A.86): We will prove these identities for the general case. Write the SVD of C
A.4.2 RGA of a non-square matrix as C = UrErVrH (this is the economy-size SVD from (A.59)) where Er is invertible. We have that
= eVUrErIf,!’ej, [Ct].. = e~VrE’Uj’ei, UI’Ur = I~ and ~,rHv Ir, where 1,. denotes
The RCA may be generalized to a non-square I x sri matrix A by use of the pseudo-inverse the identity matrix of dim r x r. For the row sum (A.86) we then get
At defined in (A.62). We have m In
matrices, but the remaining properties do not apply in the general case. However, they partly = erUrsrV,.’1 Zeaef I4E~UJ’e~ = e~U,.U~et IefL4II~
apply if A is either of full row rank or full column rank.
‘In
1. A has full row rank, r = rank(A) = I (i.e. A has at least as many inputs as outputs, and
C
the outputs are linearly independent). In this case AA1 = I, and the following properties The result for the column sum (A.85) is proved in a similar fashion.
hold:
Remark. The extension of the RCA to non-square matrices was suggested by Chang and Yu (1990)
(a) The RCA is independent of output scaling, i.e. A(DA) = AM) who also stated most of its properties, although in a somewhat incomplete form. More general and
(b) The elements in each row of the RCA sum to 1, Z$ ~ = 1. precise statements are found, for example, in Cao (1995).
It is useful to have a single number which gives an overall measure of the size of a We will consider a vector a with m elements; that is, the vector space is V = C~Z. To
vector, a matrix, a signal, or a system For this purpose we use functions which are illustrate the different norms we will calculate each of them for the vector
called norms The most commonly used norm is the Euclidean vector norm, hell2
1
Viei 2 + le2l2 + + leml2 This is simply the distance between two points y and z, where b= b2 = 3 (A.89)
= ~, z~ is the difference in their z’th coordinates
—
—5
Definition AS A norm of e (which may be a vectoi, matrix, signal or system) is a real
numbei, denoted Hell, that satisfies the following pmpeities We will consider three norms which are special cases of the vector p-norm
(z lailP)
1/p
I Non-negative lIeu ≥ 0
2 Positive hell = 0 ~ e = 0 (for semi-norms we have hell = 0 ~ e = 0) llall~ = (A.90)
3 Homogeneous Ila elI = hal hell for all complex scalars a
4 Triangle inequality where we must have p ~ 1 to satisfy the triangle inequality (property 4 of a norm). Here a is
lie1 + e2ll < Ileill + ble2H (A 88) a column vector with elements a1 and la~l is the absolute value of the complex scalar a1.
Mo,e precisely, e is an element in a vector space V ove, the field C of complex numbers, and Vector 1-norm (or sum norm). This is sometimes referred to as the “taxi-cab norm”, as
the piopet ties above must be satisfied Ye, ei, e2 C V and Va C C in two dimensions it corresponds to the distance between two places when following the
“streets” (New York style). We have
In this book, we consider the noims of four different objects (norms on four different vector
Ilalli ~ Zlail (11b111 = 1+3 + 5 = 9) (A.91)
spaces)
e is a constant vector
e is a constant matrix Vector 2-norm (Euclidean norm). This is the most common vector norm, and
e is a time-dependent signal, e(t), which at each fixed I is a constant scalar or vector corresponds to the shortest distance between two points
e is a “system”, a transfer function C(s) or impulse response g(t), which at each fixed s
or I is a constant scalar or matrix 11a112 ~ lail2 (lIMb = a +9+25=5.916) (A.92)
Cases 1 and 2 involve spatial norms and the question that arises is how do we average or
sum up the channels? Cases 3 and 4 involve function norms or tempoial norms where we The Euclidean vector norm satisfies the property
want to “average” or “sum up” as a function of time or frequency Note that the first two are
finite-dimensional norms, while the latter two are infinite-dimensional aHa = llall~ (A.93)
Remark. Notation for norms. The reader should be aware that the notation on norms in the literature is where a~ denotes the complex conjugate transpose of the vector a.
not consistent, and one must be careful to avoid confusion First, in spite of the fundamental difference Vector co-norm (or max norm). This is the largest-element magnitude in the vector. We
between spatial and temporal norms, the same notation, II II. is generally used for both of them, and we use the notation llallmax so that
adopt this here Second, the same notation is often used to denote entirely different norms For example,
consider the infinity norm, llelk~ If e is a constant vector, then llell~ is the largest element in the vector llallmax E llall~~ 4 max la~l (IlMIrnax I —51 5) (A.94)
(we often use IlelImax for this) If e(t) is a scalar time signal, then lle(t)ll~ is the peak value of le(t)l as
a function of time If E is a constant matrix then llElk~ may denote the largest matrix element (we use
Since the various vector norms only differ by constant factors, they are often said to be
llAllrna~c for this), while other authors use llEll~ to denote the largest matrix row sum (we use llEll,~ equivalent. For example, for a vector with m elements
for this) Finally, if E(s) is a stable proper system (transfer function), then llElk~ is the 7j~ noim
which is the peak value of the maximum singular value of B, llE(s)ll~ = max~ ã(E(yw)) (which is
how we mostly use the cc-norm in this book) IlalIrnax ≤ 11a112 ≤ ~ llallrnax (A.95)
In Figure A. 1 the differences between the vector norms are illustrated by plotting the contours
for llall~ = 1 for the case with in = 2.
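The three norms of the example vector b in (A.89) can be checked directly in Matlab:

b = [1; 3; -5];
[norm(b,1)  norm(b,2)  norm(b,inf)]   % gives 9, 5.916 and 5, cf. (A.91)-(A.94)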
a2 The trace tr is the sum of the diagonal elements, and A” is the complex conjugate transpose
p =co
1 of A. The Frobenius norm is important in control because it is used for summing up the
channels, e.g. when using LQG optimal control.
Max element norm. This is the largest-element magnitude,
1 This norm is not a matrix norm as it does not satisfy (A.98). However, note that V’Th~ IL4IImax
is a matrix norm.
The above three norms are sometimes called the 1-, 2- and co-norm, respectively, but this
notation is not used in this book to avoid confusion with the more important induced p-norms
introduced next.
1
j
where the spectral radius p(A) = max~ lA~(A)l is the largest eigenvalue of the matrix A.
Note that the induced 2-norm of a matrix is equal to the (largest) singular value, and is often
called the spectral norm. For the example matrix in (A.97) we get
3. Choose both A = z’~ and B = w as vectors. Then using the Frobenius norm or induced
2-norm (singular value) in (A.98) we derive the Cauchy—Schwarz inequality
we have p(Aj) = 1 and p(A2) = 1. However, p(Ai + A2) = 12 and p(A1A2) = 101.99,
llABlI~~ ~ max IA Bw ~J,
0)00 llwllp = max
wØO
IlAvII~
J~v~~p
IlBwIl~
.
IIWIlp ≤ max
vØO
IIAvll~
~
max llBwlI~
wØO ~
which satisfy neither the triangle inequality (property 4 of a norm) nor the multiplicative
property in (A.98).
and (A.109) follows from the definition of an induced norm. C Although the spectral radius is not a norm, it provides a lower bound on any matrix norm,
which can be very useful.
Implications of the multiplicative property
Theorem A.4 For any matrix norm (and in particularfor any induced norm)
For matrix norms the multiplicative property IIABII ≤ 11AM IIBII holds for matrices A and .
B of any dimension as long as the product AB exists. In particular, it holds if we choose A p(A) ≤ hAil (A.117)
and B as vectors. From this observation we get:
Pivqf: Since A~(A) is an eigenvalue of A, we have that At~ = Ajt1 where t~ denotes the eigenvector.
1. Choose B to be a vector, i.e. B = w. Then for any matrix norm we have from (A.98) that We get
11t111 = II it~II = IIAt~II < ~ ~ (0.118)
IlAwhi ≤ hAil hiwli . (A.111)
(the last inequality follows from (A.1l1)). Thus for any matrix norm 1A1G4)I ~ 11.411 and since this
We say that the “matrix norm hAil is compatible with its corresponding vector norm IIwII”. holds for all eigenvalues the result follows.
Clearly, from (A.103) any induced matrix p-norm is compatible with its corresponding
vector p-norm. Similarly, the Frobenius norm is compatible with the vector 2-norm (since For our example matrix in (A.97) we get p(A0) = 3.162 which is less than all the
when w is a vector IIwIIF = 11w112). induced norms (lIAoIIn = 6, IIAnIl~~ = 7, a(Ao) = 5.117) and also less than the Frobenius
2. From (A. 111) we also get for any matrix norm that norm (IIAIIF = 5.477) and the sum norm (IIAIIsum 10).
A simple physical interpretation of (A.117) is that the eigenvalue measures the gain of
hAIl ≥ max II Awl I (A.l 12)
the matrix only in certain directions (given by the eigenvectors), and must therefore be less
w#o IIwIl than that for a matrix norm which allows any direction and yields the maximum gain, recall
(A.112).
Note that the induced norms are defined such that we have equality in (A.l 12). The
property IIAIIF > U(A) then follows since JIwIIF = 11w112
A.5.4 Some matrix norm relationships A.5.5 Matrix and vector norms in Matlab
The various norms of the matrix A are closely related as can be seen from the following The following Matlab commands are used for matrices:
inequalities from Golub and van Loan (1989, p. 15) and Horn and Johnson (1985, p. 314). o(A) = hlAhb~2 norm(A,2) ormax(svd(A))
Let A be an I x m matrix, then lAtIn norm (A, 1)
hIAhI~00 norm(A, ‘±nf’)
U(A) < IIAIIF < ~min(I.m) o~(A) (A.1 19) IIAIIF norm(A, ‘fro’)
IlAhIsum sum (surn(abs(A)
IlAlimax ç a(A) ≤ \/7,~ IlAlImax (A.120) hlAlImax max(max(abs(Afl) (which is not a matrix norm)
a(A) ≤ ~/lIAIl~ilIAlI~00 (A. 121) p(A) max(abs(eig(Afl)
p~A~) max(eig(abs(A)))
1 7(A) a(A)/u(A) cond(A)
All~ ≤ a(A) ~ Vi llAll~~ (A.122) =
~ll
1 For vectors:
~llAlIn ≤ d(A) ç Vii lIAIl~1 (A.123) Hall’ norm(a, 1)
llall2 norm (a, 2)
max{a(A), liAlIp, hAhn hIAlkco} ≤ hlAhhsum
, (A.124)
Ilahlmax norm(a, ‘inf’)
All these norms, except IhAhimax, are matrix norms and satisfy (A.98). The inequalities are
tight; that is, there exist matrices of any size for which the equality holds. Note from (A. 120)
A.5.6 Signal norms
that the maximum singular value is closely related to the largest element of the matrix.
Therefore, IlAhimax can be used as a simple and readily available estimate of ~(A). We will consider the temporal norm of a time-varying (or frequency-varying) signal, e(t). In
An important property of the Frobenius norm and the maximum singular value (induced contrast with spatial norms (vector and matrix norms), we find that the choice of temporal
2-norm) is that they are invariant with respect to unitary transformations, i.e. for unitary norm makes a big difference. As an example, consider Figure A.4 which shows two signals,
matrices U~, satisfying U1Uf1 = I, we have ej(t) and e2(t). For e,(t) the cc-norm (peak) is 1, lhei(t)llcc = 1, whereas since the signal
does not “die out” the 2-norm is infinite, lIe, (t)Il2 = cc. For e2(t) the opposite is true.
IIU1AU2IIF = I1AIIF (A.125)
minhIDAD’Il~i
D
= minhIDAD’PI icc
D
= p~A~) (A.128) Figure A.4: Signals with entirely different 2-norms and cc-norms
where D is a diagonal “scaling” matrix, Al denotes the matrix A with all its elements For signals we may compute the norm in two steps:
replaced by their magnitudes, and pQA~) = max~ lA~GADl is the Perron root (Perron—
Frobenius eigenvalue). The Perron root is greater than or equal to the spectral radius, 1. “Sum up” the channels at a given time or frequency using a vector norm (for a scalar signal
p(A) < pQA~). we simply take the absolute value).
2. “Sum up” in time or frequency using a temporal norm.
Recall from above that the vector norms are “equivalent” in the sense that their values differ
only by a constant factor. Therefore, it does not really make too much difference which norm
we use in step 1. We normally use the same p-norm for both the vector and the signal, and
e
A.5.7 Signal interpretation of various system norms
Two system norms are considered in Section 4.10. These are the 7i2 norm, IIG(s)1I2
~g(t)I~2, and the 7~L03 norm, IIG(s)II~~. The main reason for including this subsection is to
11e1I03 show that there are many ways of evaluating performance in terms of signals, and to show
e(t) that the ‘H2 and 7-(~, norms are useful measures in this context. This in turn will be useful in
helping us to understand how to select performance weights in controller design problems.
The proofs of the results in this subsection require a good background in functional analysis
and can be found in Doyle et al. (1992), Dahleh and Diaz-Bobillo (1995) and Zhou et al.
t (1996),
Consider a system G with input d and output e, such that
Figure A.5: Signal 1-norm and cc-norm
e Gd (A.134)
For performance we may want the output signal e to be “small” for any allowed input signals
thus define the temporal p-norm, IIe(t)II~, of a time-varying vector as d. We therefore need to specify:
1. What d’s are allowed. (Which set does d belong to?)
rZ
usual way of introducing the LQ objective and gives rise to the U~ norm.
2. d(t) is a white noise process with zero mean.
IIe(t)IIi = 1e1(r)ldr (A.130) 3. d(t) = sin(wt) with fixed frequency, applied from t = —cc (which corresponds to the
—cc steady-state sinusoidal response).
4. d(t) is a set of sinusoids with all frequencies allowed.
2-norm in time (quadratic norm, integral square error (ISE), “energy” of signal): 5. d(t) is bounded in energy, IId(t)l12 ~ 1.
6. d(t) is bounded in power, Id(t)iipow ~ 1.
Ie(t)112 = Z Ie~(r)I2dr (A. 131)
7. d(t) is bounded in magnitude, Ild(t) ~ ~ 1.
The first three sets of inputs are specific signals, whereas the latter three are classes of inputs
with bounded norm. The physical problem at hand determines which of these input classes is
co-norm in time (peak value in time, see Figure A.5):
the most reasonable.
IIe(t)II~~ = max (m~xIeier)I) (A. 132) To measure the output signal one may consider the following norms:
In addition, we will consider the power norm or ms norm (which is actually only a semi- 1. 1-norm, IIe(t)IIi
norm since it does not satisfy norm property 2) 2. 2-norm (energy), IIe(t)112
3. co-norm (peak magnitude), IIe(t)1103
7’
4. power, IIe(t)IIpow
Ile(t)IIpo4v = lim (A.133) Other norms are possible, but, again, it is engineering issues that determine which norm is the
2T
most appropriate. We will now consider which system norms result from the definitions of
input classes, and output norms, respectively. That is, we want to find the appropriate system
Remark I In most cases we assume c(t) = 0 fort < 0 so the lower value for the integration may be gain to test for performance. The results for SISO systems in which d(t) and e(t) are scalar
changed to r = 0. signals are summarized in Tables A.5.7 and A.5.7. In these tables G(s) is the transfer function
and g(t) is its corresponding impulse response. Note in particular that
Remark 2 To be mathematically correct we should have used SI1Pr (least upper bound) rather than
max- in (A. 132), since the maximum value may not actually be achieved (e.g. if it occurs fort = cc). IIe(t)]Is (A. 135)
7-L~ norm: IIG(s)1103 ~ maac, U(G(jw)) = max~cfl ]dcm)IIs
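In Matlab, both system norms are available through norm; as an illustration (not from the book) applied to the distillation model (13.17):

G = tf(1,[75 1])*[87.8 -86.4; 108.2 -109.6];   % the LV-model (13.17)
norm(G,2)                                      % H2 norm
norm(G,inf)                                    % Hinf norm, the peak of sigma_max(G(jw))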
norm results if we consider d(t) to be the set of sinusoids with all frequencies allowed, and
measure the output using the 2-norm (not shown in Tables A.5.7 and A.5.7, but discussed where uI~, is the input zero direction of zj. With this factorization, z1 is not a zero of 01. By
in Section 3.3.5). Also, the 7-12 norm results if the input is white noise and we measure the repeated application of (A. 139) on G1, i = 1 1, 0 can be factored into a minimum-
—
0 = toO mc (A.141)
Table A.2: System norms for three sets of norm-bounded input signals and three different output norms. it~: (‘~
The entries along the diagonal are induced norms.
When 0 has N1) RHP-poles at p, these poles can also be factored into a stable part and an
~ U II~1lI2 I Ildiloc I iid~ipo~v
We1I2
Ilelico
lIeII~~~~
IIG(s)II~~
IlG(s)Ib
0
00
~g(t)~~i
~ 110(8)11cc
cc (usually)
cc (usually)
II0(s)II~~
I all-pass filter on the input and output side as follows:
0 = G81B~’ 8j1 = ~ (I
i=N~
— 2eO~) (A.142)
The results in Tables A.5.7 and A.5.7 may be generalized to MIMO systems by use of the
appropriate matrix and vector norms. In particular, the induced norms along the diagonal in
Table A.5.7 generalize if we use for the 7i~ norm II0(s)Ilc-~ = max~ U(G(jw)), and for the
G = ~;: 11Q- ~°~s~) (A.143)
norm we use Jg(t) JR = max1 JJg~(t) Iii, where g1(t) denotes row i of the impulse response For 5150 systems, (A.140)—(A.143) simplify as
matrix. The fact that the 7tcc norm and L1 norm are induced norms makes them well suited
for robustness analysis; for example, using the small-gain theorem. The two norms are also
closely related as can be seen from the following bounds for a proper scalar system:
I
1 —
= = ii: ~
i=18+zi
—
(A.144)
Remark. In the first edition of this book (Skogestad and Postlethwaite, 1996), the Blaschke products are
defined as the inverse of the more conventional definitions used here. However, note that the alternative
definitions used in the first edition have no effect on any ensuing analysis.
I
S (I + GK)’, 5’ (1 + G’K)~’ Lemma AS Assume that the negative feedback closed-loop system with loop transfer
= = (A.146)
function &(s)K(s) is stable. Suppose C’ = (I + Eo)&, and let the number of open-loop
unstable poles of &(s)K(s) and G’(s)K(s) be P and F’, respectively. Then the negative
A.7.1 Output perturbations feedback closed-loop system with loop transferfunction G’(s)K(s) is stable if and only if
Assume that 0’ is related to C by either an output multiplicative perturbation Eo, or an A((det(I + SoT)) = P — F’ (A.155)
inverse output multiplicative perturbation E~0. Then 5’ can be factorized in terms of S as
follows: where # denotes the number of clockwise encirclements of the origin as s traverses the
5’ = 5(1 + E0T)1; G’ = (I + E0)& (A.147) Nyquist D-contour in a clockwise direction.
5’ = S(I—EjoS)’(I—E~o); 0’ = (I—E10)’G (A. 148) Proof. Let N(f) denote the number of clockwise encirclements of the origin by f(s) as s traverses the
For a square plant, E0 and E~0 can be obtained from a given C and C’ by Nyquist fl-contour in a clockwise direction. For the encirclements of the product of two functions we
have N(f,f2) = iV&1) + .M(f2). This together with (AlSO) and the fact det(AB) = det A~ det B
E0 = (G’ — G)0’; E~0 = (G’ — (A. 149) yields
Proof of (A. 147): .Nl,det(I + G’K)) .Af(det(I + B0T)) + .Af(det(1 + OK)) (A.156)
I + O’K = I + (I + Eo)GK = (1 + E0GK(I + CK~1)(I + OK) For stability we need from Theorem 4.9 that .M(det(I + C’K)) = —F’, but we know that
.Af(det(1 + OK)) = —P and hence Lemma A.5 follows. The lemma is from Hovd and Skogestad
T (1994); similar results, at least for stable plants, have been presented by, for example, Orosdidier and
0 Moran (1986) and Nwokah and Perez (1991). 0
Pivofof(A.148):
In other words, (A.155) tells us that for stability det(I + E0T) must provide the required
I+C’K = I+(1—B~o)’GK=(f—E1o)’~I—E10)+GK) additional number of clockwise encirclements. If (A.155) is not satisfied then the negative
= (1— E~o)’(1 B~o (1 + 0K~’)(I + OK)
— feedback system with &‘K must be unstable. We show in Theorem 6.7 how the information
about what happens at s = 0 can be used to determine stability.
0
H = B) (F, K) (A.163)
yI~ We find
F11 F12
2 P21 F22
— Qii + Q12M11(I — Q12(I —
J t J
— M21(I— M22 + M21Q22(I —
F,(M,K) (A.l68)
where M is given by
(a) (b) (c)
F iVljj’
(A.169)
Figure A.7: An interconnection of LETs yields an LFT —
—
This expression follows easily from the matrix inversion lemma in (A.6).
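A small numerical check (not from the book) that the Matlab function lft implements this lower LFT, F_l(P,K) = P11 + P12 K (I - P22 K)^-1 P21:

P = rss(4,3,3);  K = rss(2,1,1);      % random systems; the last channel of P is fed back
CL = lft(P,K);                        % lower LFT interconnection
w  = 0.7;                             % compare frequency responses at one frequency
Pf = freqresp(P,w);  Kf = freqresp(K,w);
F  = Pf(1:2,1:2) + Pf(1:2,3)*Kf*inv(1 - Pf(3,3)*Kf)*Pf(3,1:2);
norm(F - freqresp(CL,w))              % of the order of machine precision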
B.1 Project work
Students are encouraged to formulate their own project based on an application they are
working on. Otherwise, the project is given by the instructor. In either case, a preliminary
statement of the problem must be approved before starting the project; see the first item
below.
A useful collection of benchmark problems for control system design is provided in
Davison (1990). The helicopter, aero-engine and distillation case studies in Chapter 13, and
the chemical reactor in Example 6.17, also provide the basis for several projects. These
models are available over the Internet.
(ii) Centralized control (LQG, LTR, $\mathcal{H}_2$ (in principle the same as LQG, but with a different way of choosing weights), $\mathcal{H}_\infty$ loop shaping, $\mathcal{H}_\infty$ mixed sensitivity, etc.).
(iii) A decoupler combined with PI control.
5. Simulations. Perform simulations in the time domain for the closed-loop system.
6. Robustness analysis using $\mu$ (a Matlab sketch of this step and the next is given after this outline).
(a) Choose suitable performance and uncertainty weights. Plot the weights as functions of frequency.
(b) State clearly how RP is defined for your problem (using block diagrams).
(c) Compute $\mu$ for NP, RS and RP.
(d) Perform a sensitivity analysis. For example, change the weights (e.g. to make one output channel faster and another slower), move uncertainties around (e.g. from input to output), change the $\Delta$'s from a diagonal to a full matrix, etc.
Comment: You may need to move back to step (a) and redefine your weights if you find out from step (c) that your original weights are unreasonable.
7. Optional: $\mathcal{H}_\infty$ or $\mu$-optimal controller design. Design an $\mathcal{H}_\infty$ or $\mu$-optimal controller and see if you can improve the response and satisfy RP. Compare simulations with previous designs.
8. Discussion. Discuss the main results. You should also comment on the usefulness of the project as an aid to learning and give suggestions on how the project activity might be improved.
9. Conclusion.
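As an illustration of steps 6 and 7, the following is a minimal Matlab sketch assuming the Robust Control Toolbox; it is not one of the book's sample files. The plant G, controller K and weights Wi and Wp are placeholders that must be replaced by the models and weights of the actual project; the setup shown is the standard SISO case with multiplicative input uncertainty and weighted sensitivity as the performance specification.

G  = tf(200,[10 1]);                % placeholder plant
K  = tf([10 1],[10 0]);             % placeholder PI-type controller
Wi = tf([1 0.2],[0.5 1]);           % relative (multiplicative) input uncertainty weight
Wp = tf([0.5 1],[2 0.02]);          % performance weight on the sensitivity S
L  = G*K;  S = feedback(1,L);  T = feedback(L,1);
N  = [-Wi*T  -Wi*K*S; Wp*G*S  Wp*S];  % N for the N-Delta structure (uncertainty channel first)
w  = logspace(-3,3,300);
blk    = [1 1; 1 1];                % Delta = diag(delta_I, delta_P), two complex scalar blocks
mubnds = mussv(frd(N,w), blk);      % mu upper/lower bounds over the frequency grid
NP = norm(Wp*S, inf)                % nominal performance if < 1
RS = norm(Wi*T, inf)                % robust stability if < 1
RP = norm(mubnds(1,1), inf)         % robust performance if the peak mu upper bound < 1
% For step 7, form a generalized plant (e.g. with sysic or augw) and use
% hinfsyn(P,nmeas,ncon), or dksyn/musyn for mu-synthesis.

The same pattern carries over to MIMO designs, with the block structure blk adjusted to match the chosen uncertainty and performance description.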
3. Plant with two inputs and one output:

$$ y(s) = \frac{5}{0.2s + 1}\, u_1 + \frac{4}{0.2s + 1}\, u_2 + \frac{3}{0.02s + 1}\, d \qquad (B.3) $$

4. Consider the following 2 x 2 plant with one disturbance given in state-space form:

$$ \dot{x}_1 = -0.1\, x_1 + 0.01\, u_1 $$
$$ \dot{x}_2 = -0.5\, x_2 + 10\, u_2 $$
$$ \dot{x}_3 = 0.25\, x_1 + 0.25\, x_2 - 0.25\, x_3 + 1.25\, d $$
$$ y_1 = 0.8\, x_3; \qquad y_2 = x_3 $$

(a) Construct a block diagram representation of the system with each block in the form $k/(1 + \tau s)$.
(b) Perform a controllability analysis.

Problem 2 (25%). General control problem formulation.

Figure B.2: Block diagram of neutralization process

A block diagram of the process is shown in Figure B.2. It includes one disturbance, two inputs and two measurements ($y_1$ and $y_2$). The main control objective is to keep $y_2 \approx r_2$. In addition, we would like to reset input 2 to its nominal value at low frequencies. Note that there is no particular control objective for $y_1$.
(a) Define the general control problem: that is, find $z$, $w$, $u$, $v$ and $P$ (see Figure B.3).
(b) Define an $\mathcal{H}_\infty$ control problem based on $P$. Discuss briefly what you want the unweighted transfer functions from $d$ to $z$ to look like, and use this to say a little about how the performance weights should be selected.

... where $|\delta_1| < 1$ and $|\delta_2| \le 1$. For a feedback controller $K(s)$ derive the interconnection matrix $M$ for robust stability.
(b) For the above case consider using the condition $\min_D \bar{\sigma}(D M D^{-1}) < 1$ to check for robust stability (RS). What is $D$ (give as few parameters as possible)? Is the RS condition tight in this case?
(c) When is the condition $\rho(M\Delta) < 1$ necessary and sufficient for robust stability? Based on $\rho(M\Delta) < 1$, derive the RS condition $\mu_\Delta(M) < 1$. When is this last condition necessary and sufficient?
(d) Let

$$ G_p(s) = \begin{bmatrix} g_{11} + w_1 \Delta_1 & g_{12} + w_2 \Delta_2 \\ g_{21} + w_3 \Delta_1 & g_{22} \end{bmatrix}, \qquad |\Delta_1| \le 1, \; |\Delta_2| \le 1 $$

and consider the controller $K(s) = c/s$. Put this into the $M\Delta$-structure and find the RS condition.
(f) Show by a counterexample that in general $\bar{\sigma}(AB)$ is not equal to $\bar{\sigma}(BA)$. Under what conditions is $\rho(AB) = \rho(BA)$?
(g) The PRGA matrix is defined as $\Gamma = G_{\mathrm{diag}} G^{-1}$. What is its relationship to the RGA?
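For the computational parts of the sample exam above (question 4(b) and questions (f) and (g)), the following minimal Matlab sketches may be useful. All numerical values are illustrative, and the output matrix C for question 4 is only a placeholder consistent with the model as written above.

% Question 4(b): state-space model and a first controllability check
A  = [-0.1 0 0; 0 -0.5 0; 0.25 0.25 -0.25];
B  = [0.01 0; 0 10; 0 0];
Bd = [0; 0; 1.25];
C  = [0 0 0.8; 0 0 1];           % placeholder output matrix
G  = ss(A,B,C,0);                % plant from (u1,u2) to (y1,y2)
Gd = ss(A,Bd,C,0);               % disturbance model
pole(G), tzero(G)                % poles and multivariable zeros
G0 = dcgain(G);
RGA0 = G0.*pinv(G0).'            % steady-state RGA (pinv is used since G0 may be singular)
sigma(G), hold on, sigma(Gd)     % compare plant and disturbance gains with frequency

% Questions (f) and (g): singular values versus spectral radius, and RGA versus PRGA
A1 = [1 1; 0 0];  B1 = [1 0; 0 2];
[max(svd(A1*B1))  max(svd(B1*A1))]            % sigma_bar(AB) ~= sigma_bar(BA) in general
[max(abs(eig(A1*B1)))  max(abs(eig(B1*A1)))]  % nonzero eigenvalues coincide, so rho(AB) = rho(BA)
G2   = [1 0.8; 0.9 1];
RGA  = G2.*inv(G2).'             % relative gain array
PRGA = diag(diag(G2))/G2         % performance RGA, Gamma = Gdiag*inv(G)
% Note that the diagonal elements of the PRGA equal those of the RGA.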
BIBLIOGRAPHY
Alstad, V. (2005). Studies on Selection of Controlled Variables, PhD thesis, Norwegian University of Science and Technology, Trondheim.
Alstad, V. and Skogestad, S. (2004). Combinations of measurements as controlled variables: Application to a Petlyuk distillation column, Proceedings of the 7th International Symposium on ADCHEM, Hong Kong, P.R. China, pp. 249–254.
Anderson, B. D. O. (1986). Weighted Hankel-norm approximation: Calculation of bounds, Systems & Control Letters 7(4): 247–255.
Anderson, B. D. O. and Liu, Y. (1989). Controller reduction: Concepts and approaches, IEEE Transactions on Automatic Control AC-34(8): 802–812.
Anderson, B. D. O. and Moore, J. B. (1989). Optimal Control: Linear Quadratic Methods, Prentice Hall, Upper Saddle River, NJ.
Ariyur, K. B. and Krstic, M. (2003). Real-Time Optimization by Extremum-Seeking Control, John Wiley & Sons, Hoboken, NJ.
Balas, G., Chiang, R., Packard, A. and Safonov, M. (2005). Robust Control Toolbox User's Guide, 3.0.1 edn, MathWorks, South Natick, MA.
Balas, G. J. (2003). Flight control law design: An industry perspective, European Journal of Control 9(2–3): 207–226.
Balas, G. J., Doyle, J. C., Glover, K., Packard, A. and Smith, R. (1993). μ-Analysis and Synthesis Toolbox User's Guide, MathWorks, South Natick, MA.
Balchen, J. G. and Mumme, K. (1988). Process Control: Structures and Applications, Van Nostrand Reinhold, New York.
Bates, D. and Postlethwaite, I. (2002). Robust Multivariable Control of Aerospace Systems, Delft University Press, The Netherlands.
Bode, H. W. (1945). Network Analysis and Feedback Amplifier Design, Van Nostrand, New York.
Boyd, S. and Barratt, C. (1991). Linear Controller Design: Limits of Performance, Prentice Hall, Upper Saddle River, NJ.
Boyd, S. and Desoer, C. A. (1985). Subharmonic functions and performance bounds in linear time-invariant feedback systems, IMA Journal of Mathematical Control and Information 2: 153–170.
Boyd, S., El Ghaoui, L., Feron, E. and Balakrishnan, V. (1994). Linear Matrix Inequalities in System and Control Theory, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA.
Braatz, R. D. (1993). Robust Loopshaping for Process Control, PhD thesis, California Institute of Technology, Pasadena, CA.
Braatz, R. D. and Morari, M. (1994). Minimizing the Euclidean condition number, SIAM Journal on Control and Optimization 32(6): 1763–1768.
Braatz, R. D., Morari, M. and Skogestad, S. (1996). Loopshaping for robust performance, International Journal of Robust and Nonlinear Control 6(8): 805–823.
Braatz, R. D., Young, P. M., Doyle, J. C. and Morari, M. (1994). Computational complexity of μ calculation, IEEE Transactions on Automatic Control AC-39(5): 1000–1002.
Bristol, E. H. (1966). On a new measure of interaction for multivariable process control, IEEE Transactions on Automatic Control AC-11(1): 133–134.
Campo, P. J. and Morari, M. (1990). Robust control of processes subject to saturation nonlinearities, Computers and Chemical Engineering 14(4–5): 343–358.
Campo, P. J. and Morari, M. (1994). Achievable closed-loop properties of systems under decentralized
control: Conditions involving the steady-state gain, IEEE Transactions oil Automatic Control AC- Fernando, K. V. and Nicholson, H. (1982). Singular perturbational model reduction of balanced systems,
39(5): 932—942.
Cao, Y. (1995). C’ontrol Structure Selection for Chemical Processes Using Input—output Controllability
I IEEE Transactions on Automatic Control AC-27(2): 466—468.
Analysis, PhD thesis, University of Exeter.
Chang, J. W. and Yu, C. C. (1990). The relative gain for non-square multivariable systems, Chemical
II Finsler, P. (1937). Uber das Vorkommen definiter und semi-definiter Formen in Scharen quadratischer
Formen, Comentarii Mathematica Helvetici 9: 192—199.
Fisher, W. R., Doherty, M. F. and Douglas, 3. M. (1985). Steady-state control as a prelude to dynamic
Engineering Science 45(5): 1309—1323.
Chen, C. T. (1984). Linear System Theory and Design, Holt, Rinehart and Winston, New York.
Chen, J. (1995). Sensitivity integral relations and design trade-offs in linear multivariable feedback
I control, Chemical Engineering Research & Design 63: 353—357.
Foss, A. 5. (1973). Critique of chemical process control theory, AIChE Journal 19(2): 209—214.
Foster, N. P., Spurgeon, S. K. and Postlethwaite, I. (1993). Robust model-reference tracking control with
systems, IEEE Transactions On Automatic Control AC-40(10): 1700—1716.
Chen, J. (2000). Logarithmic integrals, interpolation bounds, and performance limitations in MEMO
I a sliding mode applied to an ACT rotorcraft, 19th European RotorcraJt Forum, Italy.
feedback systems, IEEE Transactions on Automatic ~‘ontrol AC-45(6): 1098—1115.
Chen, J. and Middleton, R. H. (2003). New developments and applications in performance limitation of
I3 Francis, B. (1987). A course in 7-l~.~ control theory, Vol.88 of Lecture Notes in Control and Information
Sciences, Springer-Verlag, Berlin.
Francis, B. A. and Zames, G. (1984). On 7t~ optimal sensitivity theory for 5150 feedback systems,
feedback control, IEEE Transactions on Automatic Control AC-48(8): 1297. 4 IEEE Transactions on Automatic Comztrol AC-29(l): 9—16.
Chiang, R. Y. and Safonov, M. C. (1992). Robust Control Toolbox User’s Guide, MathWorks, South Frank, P. M. (1968a). Vollstandige Vorhersage im stetigen Regelkreis mit Totzeit, Teil 1,
Natick, MA. 1 Regelungstechnik 16(3): 111—116.
Chilali, M. and Gahinet, P. (1996). 71c.3 design with pole placement constraints: An LMI approach, IEEE 3
II Frank, P. M. (l968b). Vollstandige Vorhersage im stetigen Regelkreis mit Totzeit, Teil II,
Transactions on Automatic Control AC-41(3): 358—367. Regelungstechnik 16(5): 214-218.
Churchill, R. V., Brown, 3. W. and Verhey, R. F. (1974). Complex Variables and Applications, McGraw- Freudenberg, 3. 5. and Looze, D. P. (1985). Right half planes poles and zeros and design tradeoffs in
Hill, New York. feedback systems, IEEE Transactions on Automatic Control AC-30(6): 555—565.
Cui, H. and Jacobsen, E. W. (2002). Performance limitations in decentralized control, Journal of Process Freudenberg, 3. S. and Looze, D. P. (1988). Frequency Domain Properties of Scalar and Multivariable
Control 12(4): 485-494.
Dahi, H. J. and Faulkner, A. J. (1979). Helicopter simulation in atmospheric turbulence, Vertica pp. 65—
I Feedback Svstenls, Vol. 104 of Lecture Notes in Control and Information Sciences, Springer-
78.
I Verlag, Berlin.
4 Cabinet, P. and Apkarian, P. (1994). A linear matrx inequality approach to 7i~ control, international
Dahleh, M. and Diaz-Bobillo, I. (1995). Control of Uncertain Systems. A Linear Programming Approach, Journal of Robust and Nonlinear Contmvl 4: 421—448.
Prentice Hall, Englewood Cliffs, NJ. Gahinet, P., Nemirovski, A., Laub, A. and Chilali, M. (1995). LMI Control Toolbox, MathWorks, South
Daoutidis, P. and Kravaris, C. (1992). Structural evaluation of control configurations for multivariable Natick, MA.
nonlinear processes, Chemical Engineering Science 47(6): 1091—1107. Georgiou, T. T. and Smith, M. C. (1990). Optimal robustness in the gap metric, IEEE Transactions on
Davison, E. 3. (ed.) (1990). Be,ichnzark Problems for C’ontrol System Design, Report of the IFAC Theory Automatic Control AC-35(6): 673—686.
Committee, International Federation of Automatic Control, Laxenberg, Austria. Gjøsmter, 0. B. (1995). Structures for Multivariable Robust Process Control, PhD thesis, Norwegian
Desoer, C. A. and Vidyasagar, M. (1975). Feedback Systems: input—Output Properties, Academic Press, University of Science and Technology, Trondheim.
New York. Glover, K. (1984). All optimal Hankel-norm approximations of linear multivariable systems and their
Downs, 3.3. and Vogel, H. F. (1993). A plant-wide industrial process control problem, Computers Che,n. LOC_error bounds, International Journal of C’ontrol 39(6): 1115—1193.
Engng. 17: 245—255. Clover, K. (1986). Robust stabilization of linear multivariable systems: Relations to approximations,
Doyle, J. C. (1978). Guaranteed margins for LQG regulators, IEEE Transactions on Automatic Control International Journal of ~‘ontrol 43(3): 741—766.
AC-23(4): 756—757. Clover, K. and Doyle, 3. C. (1988). State-space formulae for all stabilizing controller that satisfy an
Doyle, 3. C. (1982). Analysis of feedback systems with structured uncertainties, lEE Proceedings, Part norm bound and relations to risk sensitivity, Systems and Control Letters 11(3): 167—172.
D Control Theory and Applications 129(6): 242—250. Glover, K. and McFarlane, D. (1989). Robust stabilization of normalized coprime factor plant
Doyle, 3. C. (1983). Synthesis of robust controllers and filters, Proceedings of the IEEE Conference on descriptions with 7I~ bounded uncertainty, IEEE Tramlsactions on Automatic Control AC-
Decision and Control, San Antonio, TA, USA, pp. 109—114. 34(8): 821—830.
Doyle, 3. C. (1984). Lecture Notes on Advances in Multivariable Control, ONRIHoneywell Workshop, Clover, K., Vinnicombe, G. and Papageorgiou, 0. (2000). Guaranteed multi-loop stability margins
Minneapolis, USA. and the gap metric, Proceedings of the 39th IEEE Conference on Decision and ~‘ontrol, Sydney,
Doyle, 3. C. (1986). Redondo Beach lecture notes, Internal Report, Caltech, Pasadena, CA. Australia, pp. 4084—4085.
Doyle, 3. C., Francis, B. and Tannenbaum, A. (1992). Feedback Control Theory, Macmillan, New York. Goddard, P. (1995). Pemformance Preserving Controller Approximnation, PhD thesis, Trinity College,
Doyle, 3. C., Glover, K., Khargonekar, P. P. and Francis, B. A. (1989). State-space solutions to standard Cambridge.
7i2 and fl~ control problems, IEEE Transactions on Automatic Control AC-34(8): 831—847.
Doyle, J. C. and Stein, 0. (1979). Robustness with observers, IEEE Transactions on Automatic Control
I Golub, G. H. and van Loan, C. F. (1989). Matrix Computations, Johns Hopkins University Press,
AC-24(4): 607—611. I Baltimore, MD.
Goodwin, G. C., Salgado, M. B. and Silva, E. I. (2005). Time-domain performance limitations arising
Doyle, 3. C. and Stein, G. (1981). Multivariable feedback design: Concepts for a classical/modem from decentralized architectures and their relationship to the rga, tnt. I Control 78: 1045—1062.
synthesis, IEEE Transactions on Automatic Control AC-26(.l): 4—16. Goodwin, G. C., Salgado, M. H. and Yuz, J. I. (2003). Performance limitations for linear feedback systems
Eaton, 3. W. and Rawlings, 3. B. (1992). Model-predictive control of chemical processes, Chemnical in the presence of plant uncertainty, IEEE Transactions on Automatic Control AC-48(8): 1312—
Engineering Science 47(4): 705—720. 1319.
Engell, 5. (1988). Optimale Lineare Regelung, Vol. 18 of Fachberichte Messen, Steuern, Regehi, Govatsmark, M. 5. (2003). Integrated Optimization and Control, PhD thesis, Norwegian University of
Springer-Verlag, Berlin. Science and Technology, Trondheim.
Enns, D. (1984). Model reduction with balanced realizations: An error bound and a frequency weighted Grace, A., Laub, A. 3., Little, J. N. and Thompson, C. M. (1992). Control System Toolbox, Math Works,
generalization, Proceedings of the 23rd IEEE C’onference Cml Decision and Control, Las Vegas, South Natick, MA.
NV, USA, pp. 127—132. Green, M. and Limebeer, D. 3. N. (1995). Linear Robust Control, Prentice Hall, Upper Saddle River, NJ.
Grosdidier, P. and Morari, M. (1986). Interaction measures for systems under decentralized control,
Cambridge.
CO. USA. 13(4): 82—85.
Holt, B. R. and Moran, M. (1985a). Design of resilient processing plants V — The effect of deadtime on Kothare, M. V., Balakrishnan, V. and Moran, M. (1996). Robust constrained model predictive control
dynamic resilience, Chemical Engineering Science 40(7): 1229—1237. using linear matrix inequalities, Automatica 3200): 1361—1379.
Holt, B. R. and Moran, M. (1985b). Design of resilient processing plants VI— The effect of right plane Kouvaritakis, B. (1974). Characteristic Locus Methods for Multivariable Feedback Systems Design, PhD
zeros on dynamic resilience, chemical Engineering Science 40W: 59—74. thesis, University of Manchester Institute of Science and Technology, Manchester.
Hori, E. 5., Skogestad, S. and Kwong, W. H. (2005). Use of perfect indirect control to minimize the state Kwakennaak, Fl. (1969). Optimal low-sensitivity linear feedback systems. Automatica 5(3): 279—285.
deviations, in L. Puigjaner and A. Espuna (eds), European Symposium on computer-aided process Kwakernaak, H. (1985). Minimax frequency domain performance and robustness optimization of linear
engineering (ESCAPE) 15. Barcelona, Spain, Elsevier. feedback systems, IEEE Transactions on Automatic ~‘ontrol AC-30( 10): 994—1004.
Horn, R. A. and Johnson, C. R. (1985). Matrix Anali’sis, Cambridge University Press, Cambridge. Kwakernaak, H. (1993). Robust control and 7-100-optimization—Tutorial paper, Automnatica 29(2): 255—
Horn, R. A. and Johnson, C. R. (1991). Topics in Matrix Analysis, Cambridge University Press,
Cambridge. I 273.
Kwakernaak, H. and Sivan, R. (1972). Linear Optimal Control Systems, Wiley Interscience, New York.
Horowitz, I. M. (1963). Synthesis of Feedback Systems, Academic Press, London.
Horowitz, I. M. (1991). Survey of quantitative feedback theory (QET), International Journal of Control I Larsson, T. (2000). Studies on Plantwide Control, PhD thesis, Norwegian University of Science and
Technology, Trondheim.
53(2): 255—291. Larsson, T. and Skogestad, 5. (2000). Plantwide control: A review and a new design procedure, Modeling,
Horowitz, I. M. and Shaked, U. (1975). Superiority of transfer function over state-variable methods Identification and G’ontrol 21: 209—240.
in linear time-invariant feedback system design, IEEE Transactions on Automatic Control AC- Laub, A. J., Heath, M. T., Page, C. C. and Ward, R. C. (1987). Computation of system
20W: 84—97. balancing transformations and other applications of simultaneous diagonalization algorithms,
Hovd, M. (1992). Studies on Control Structure Selection and Design of Robust Decentralized and SYD IEEE Transactions on Automatic Control AC-32(2): 115—122.
Controllers, PhD thesis, Norwegian University of Science and Technology, Trondheim. Laughlin, D. L., Jordan, K. 0. and Morari, M. (1986). Internal model control and process uncertainty
Hovd, M., Bnaatz, R. D. and Skogestad, 5. (1997). SVD controllers fonlL2-, 1I~-, and p-optimal control, — mapping uncertainty regions for SISO controller-design, International Journal of Control
Liu, Y. and Anderson, B, D. 0. (1989). Singular perturbation approximation of balanced systems, 115(2B): 426—438.
International Journal of Control 50(4): 1379—1405. Padfleld, G. D. (1981). Theoretical model of helicopter flight mechanics for application to piloted
Lundstrom, P. (1994). Studies on Robust Multivariable Distillation Control, PhD thesis, Norwegian simulation, Technical Report 81048, Defence Research Agency (now QinetiQ), UK.
University of Science and Technology, Trondheim. Perkins, J. D. (ed.) (1992). IFAC Workshop on Interactions Between Pmvcess Design and Process Control,
LundstrOm, P., Skogestad, S. and Doyle, I. C. (1999). Two degrees of freedom controller design for an ill- (London, September), Pergamon Press, Oxford.
conditioned plant using p-synthesis, IEEE Transactions on Control System Technology 7W: 12— Pemebo, L. and Silverman, L. M. (1982). Model reduction by balanced state space representations, IEEE
21. Transactions on Automatic Control AC-27(2): 382—387.
Lunze, J. (1992). Feedback C’ontrol of Large-Scale Systems, Prentice-Hall, New York, NY. Poolla, K. and Tikku, A. (1995). Robust performance against time-varying structured perturbations,
MacFarlane, A. C. J. and Karcanias, N. (1976). Poles and zeros of linear multivariable systems: A survey IEEE Transactions on Automatic Control AC.40(9): 1589—1602.
of algebraic, geometric and complex variable theory, International Journal of Control 24: 33—74. Postlethwaite, I., Foster, N. P. and Walker, D. J. (1994). Rotorcraft control law design for rejection of
MacFarlane, A. C. J. and Kouvaritakis, B. (1977). A design technique for linear multivariable feedback atmospheric turbulence, Proceedings of lEE Conference, G’ontrol 94, Warwick, UK, pp. 1284—
systems, International Journal of Control 25: 837—874 1289.
Maciejowski, J. lvi. (1989). Multivariable Feedback Design, Addison-Wesley, Wokingham. Postlethwaite, I. and MacFarlane, A. C. J. (1979). A Complex Variable Approach to the Analysis ofLinear
Manness, M. A. and Murray-Smith, D. J. (1992). Aspects of multivariable flight control law design for Mu/tivariable Feedback Systems, Vol. 12 of Lecture Notes in Control and Information Sciences,
helicopters using eigenstructure assignment, Journal ofAmerican Helicopter Society 37(3): 18—32. Springer-Verlag, Berlin.
Manousiouthakis, V., Savage, R. and Arkun, Y. (1986). Synthesis of decentralized process control Postlethwaite, I., Prempain, E., Turkoglu, E., Turner, M. C., Ellis, K. and Gubbles, A. W. (2005). Design
structures using the concept of block relative gain, AIChE Journal 32: 991—1003. and flight testing of various ?1~ controllers for the Bell 205 helicopter, C’ontrol Engineering
Marlin, T. (1995). Process Control, McGraw Hill, New York. Practice 13: 383—398.
McFarlane, D. and Clover, K. (1990). Robust ~‘ontroller Design Using Normalized C’op rime Factor Plant Postlethwaite, I., Samar, R., Choi, B.-W. and Cu, D.-W. (1995). A digital multi-mode ‘N~ controller for
Descriptions, Vol. 138 of Lecture Notes in Control and Information Sciences, Springer-Verlag, the Spey turbofan engine, 3rd European Control (‘onference, Rome, Italy, pp. 3881—3886.
Berlin. Postlethwaite, I., Smerlas, A., Walker, D. J., Gubbels, A. W., Baillie, S. W., Strange, M. B. and Howitt, 1.
McMillan, C. K. (1984). pH Control, Instrument Society of America, Research Triangle Park, NC. (1999). W~control of the NRC Bell 205 fly-by-wire helicopter, Journal of American Helicopter
Meinsma, 0. (1995). Unstable and nonproper weights in ?t~ control, Automatica 31(11): 1655—1658. Society 44(4): 276—284.
Meyer, D. 0. (1987). Model Reduction via Factorial Representation, PhD thesis, Stanford University, Postlethwaite, I. and Walker, D. J. (1992). Advanced control of high performance rotorcraft, Institute
Stanford, CA. of Mathematics and Its Applications Conference on Aerospace Vehicle Dynamics and Contm’ol,
Middleton, R. H. (1991). Trade-offs in linear control system design, Automatica 27(2): 281—292. Cranfield Institute of Technology, UK, pp. 615-619.
Middleton, R. H. and Braslavsky, J. H. (2002). Towards quantitative time domain design tradeoffs Prempain, B. and Postlethwaite, I. (2004). Static 7t~ loop shaping of a fly-by-wire helicopter,
in nonlinear control, Proceedings of the American Control Conference, Anchorage, AK, USA, Proceedings of the 43rd IEEE Conference on Decision and G’ontrol, Bahamas.
pp. 4896—4901. Qiu, L. and Davison, E. J. (1993). Performance limitations of non-minimum phase systems in the
Middleton, R. H., Chen, J. and Freudenberg, J. S. (2004). Tracking sensitivity and achievable ?i~~ servomechanism problem, Automatica 29(2): 337—349.
performance in preview control, Automatica 40(8): 1297—1306. Rosenbrock, H. H. (1966). On the design of linear multivariable systems, Third IFAC World Con gress,
Moore, B, C. (1981). Principal component analysis in linear systems: Controllability, observability and London, UK. Paper Ia.
model reduction, IEEE Transactions on Automatic Control AC.26( 1): 17—32. Rosenbrock, H. H. (1970). State-space and Multivariable Theory, Nelson, London.
Moran, M. (1983). Design of resilient processing plants III — A general framework for the assessment of Rosenbrock, H. H. (1974). Computer-Aided C’ontrol Systeni Design, Academic Press, New York.
dynamic resilience, C’/memical Engineering Science 38(11): 1881—1891 Safonov, M. 0. (1982). Stability margins of diagonally perturbed multivariable feedback systems, lEE
Moran, M. and Zaflriou, E. (1989). Robust Process Control, Prentice Hall, Englewood Cliffs, NJ. Proceedings, Part D 129(6): 251—256.
Nett, C. N. (1986). Algebraic aspects of linear control system stability, IEEE Transactions on Automatic Safonov, M. 0. and Athans, M. (1977). Gain and phase margin for multiloop LQG regulators, IEEE
Control AC-31(10): 941—949. Transactions oil Automatic Control AC-22(2): 173—179.
Nett, C. N. (1989). A quantitative approach to the selection and partitioning of measurements and Safonov, M. C. and Chiang, R. Y. (1989). A Schur method for balanced-truncation model reduction,
manipulations for the control of complex systems, Presentation at Caltech Control Workshop, IEEE Transactions on Automatic Control AC-34(7): 729—733.
Pasadena, USA, January. Safonov, M. 0., Limebeer, D. J. N. and Chiang, R. Y. (1989). Simplifying the R~ theory via loop-
Nett, C. N. and Manousiouthakis, V. (1987). Euclidean condition and block relative gain: Connections, shifting, matrix-pencil and descriptor concepts, International Journal of Control 50(6): 2467—
conjectures, and clarifications, IEEE Transactions on Automatic Control AC.32(5): 405—407. 2488.
Nett, C. N. and Minto, K. D. (1989). A quantitative approach to the selection and partitioning of Samar, R. (1995). Robust Multi-Mode Control ofHigh Pe,formanceAero-Engines, PhD thesis, University
measurements and manipulations for the control of complex systems, Copy of transparencies from of Leicester.
talk at American Control Conference, Pittsburgh, PA, USA, June. Samar, R. and Postleth’vaite, 1. (1994). Multivariable controller design for a high performance aero
Niemann, H. and Stoustrup, J. (1995). Special Issue on Loop Transfer Recovery, Inter,zational Journal engine, Proceedings of lEE Conference, Control 94, War’vick, UK, pp. 1312—1317.
of Robust and Nonlinear Cont,-ol 7(7): November. Samar, R., Postlethwaite, I. and Cu, D.-W. (1995). Model reduction with balanced realizations,
Nwokah, 0. D. I. and Perez, R. (1991). On multivariable stability in the gain space, Automatica International Journal of Control 62(1): 33—64.
27(6): 975—983. Samblancatt, C., Apkanian, P. and Patton, R. 1. (1990). Improvement of helicopter robustness and
Owen, J. C. and Zames, 0. (1992). Robust 7-1~ disturbance minimization by duality, Systems & C’ontrol performance control law using eigenstructure techniques and 7L~ synthesis, 16th European
Letters 19(4): 255—263. Rotom~raft Forum, Scotland. Paper No. 2.3.1.
Packard, A. (1988). What’s New with p. PhD thesis, University of California, Berkeley, CA. Sato, T. and Liu, K.-Z. (1999). LMI solution to general 712 suboptimal control problems, Systems amid
Packard, A. and Doyle, J. C. (1993). The complex structured singular value, Automnatica 29(1): 71—109. Control Letters 36(4): 295—305.
Packard, A., Doyle, J. C. and Balas, 0. (1993). Linear, multivariable robust-control with a p. Seborg, D. B., Edgar, T. R and Mellichamp, D. A. (1989). Process Dynamics amid Control, John Wiley
perspective, Journal of Dynamic Systems Measuremnent and G’ontrol — Transactions oft/ic ASME & Sons, New York.
Sefton, J. and Clover, K. (1990). Pole—zero cancellations in the general R~ problem with reference to a
IEEE Transactions on Automatic Control AC-32(2): 105—114.
two block design, Systems & Control Lettenc 14(4): 295—306. Stein, 0. and Doyle, J. C. (1991). Beyond singular values and loopshapes, AIAA Journal of Guidance
Seron, M., Braslavsky, J. and Goodwin, 0. (1997). Fundamental limitations in filtering and control, and Control 14: 5—16.
Springer-Verlag, Berlin. Stephanopoulos, 0. (1984). Chemical Process Control, Prentice Hall, Englewood Cliffs, NJ.
Shamma, J. S. (1994). Robust stability with time-varying structured uncertainty, IEEE Transactions on Strang, 0. (1976). Linear Algebra and Its Applications, Academic Press, New York.
Autonmtic Control AC-39(4): 714—724. Takahashi, M. D. (1993). Synthesis and evaluation of an 112 control law for a hovering helicopter,
Shinskey, F. G. (1967). Process Control Systems, 1st edn, McGraw-Hill, New York. Journal of Guidance, Control and Dynamics 16: 579—584.
Shinskey, F. 0. (1984). Distillation C’ontrol, 2nd edn, McGraw-Hill, New York. TØffner-Clausen, S., Andersen, P., Stoustrup, J. and Niemann, H. H. (1995). A new approach to p
Shinskey, F. G. (1996). Process ControlSvste,ns: Application, Design and Tuning. 4th edn, McGraw-Hill, synthesis for mixed perturbation sets, Proceedings of 3rd European Control Conference, Rome,
New York. Italy, pp. 147—152.
Skogestad, 5. (1996). A procedure for 5150 controllability analysis — with application to design of pH Toker, 0. and Ozbay, H. (1998). On the complexity of purely complex p computation and related
neutralization process, Coniputers & Chemical Engineering 20(4): 373—386. I problems in multidimensional systems, IEEE Transactions on Automatic Control AC-43(3): 409—
Skogestad, 5. (1997). Dynamics and control of distillation columns - a tutorial introduction, Transactions 414.
of JChe,nE (UK) 75(A). Plenary paper from symposium Distillation and Absorption 97, Tombs, M. S. and Postlethwaite, 1. (1987). Truncated balanced realization of a stable non-minimal state-
Maastricht, Netherlands, September. space system, International Journal of C’ontrol 46: 13 19—1330.
Skogestad, S. (2000). Plantwide control: The search for the self-optimizing control structure, Journal of Tsai, M. C., Geddes, F. J. M. and Postlethwaite, 1. (1992). Pole—zero cancellations and closed-loop
Process Control 10: 487—507. properties of an ?i~ mixed sensitivity design problem, Automatica 28(3): 519—530.
Skogestad, S. (2003). Simple analytic rules for model reduction and PID controller tuning, Journal of Turner, M. C., Herrmann, G. and Postlethwaite, I. (2003). Discrete time anti-windup - part I: stability
Process Control 13: 29 1—309. Also see corrections in 14, 465 (2004). and performance, Proceedings of the European Control Conference, ~‘ambridge, UK.
Skogestad, S. (2004a). Control structure design for complete chemical plants, C’omnputers & C’hemical Turner, M. C., Herrmann, G. and Postlethwaite, 1, (2004). An introduction to linear matrix inequalities
Engineering 28: 2 19—234. in control, University of Leicester Department of Engineering Technical Report no 02-04
Skogestad, S. (2004b). Near-optimal operation by self-optimizing control: From process control to Turner, M. C. and Postlethwaite, I. (2004). A new perspective on static and low order anti-windup
marathon running and business systems, Comnputers & Chemical Engineering 290): 127—137. compensator synthesis, International Journal of Control 77W: 27—44.
Skogestad, S. and Havre, K. (1996). The use of the RGA and condition number as robustness measures, Turner, M. C. and Walker, D. J. (2000). Linear quadratic bumpless transfer, Automatica 36(8): 1089—
Computers & Chemical Engineering 20(S): S 1005—S 1010. 1101.
Skogestad, S., Lundstrom, P. and Jacobsen, F. (1990). Selecting the best distillation control configuration, Van de Wal. M. (1994). Control structure design for dynamic systems: A review, Technical Report WFW
AIChE Journal 36(5): 753—764. 94-084, Eindhoven University of Technology, Eindhoven, The Netherlands.
Skogestad, S. and Moran, M. (l987a). Control configuration selection for distillation columns, AIChE Van de Wal, M. and de Jager, B. (2001). A review of methods for inputloutput selection, Automatica
Journal 33(10): 1620—1635. 37(4): 487—5 10.
Skogestad, S. and Moran, M. (l987b). Effect of disturbance directions on closed-loop performance, van Diggelen, F. and Clover, K. (l994a). A Hadamard weighted loop shaping design procedure for robust
Industrial & Engineering Chemistmy Reseamvh 26(10): 2029—2035. decoupling, Automatica 30(5): 83 1—845.
Skogestad, S. and Moran, M. (1987c). Implications of large RCA elements on control performance,
industrial & Engineering Chenzistmy Research 26(11): 2323—2330.
I van Diggelen, F. and Clover, K. (1994b). State-space solutions to Hadamard weighted W~ and 712
control-problems, International Journal of Control 59(2): 357—394.
Skogestad, S. and Moran, M. (1988a). Some new properties of the structured singular value, IEEE Vidyasagar, M. (1985). Control System Synthesis: A Factorization Approach, MIT Press, Cambridge,
Transactions on Automatic Control AC—33(12): 1151—1154. MA.
Skogestad, S. and Morani, M. (1988b). Variable selection for decentralized control, AIChE Annual Vidyasagar, M. (1988). Normalized coprime factorizations for non-strictly proper systems, IEEE
Meeting, Washington, DC. Paper l26f. Reprinted in Modeling, Identification and £‘ontrol, 1992, I Transactions on Automnatic Control AC-33(3): 300—301.
Vol. 13,No. 2,113—125. Vinnicombe, 0. (1993). Frequency domain uncertainty and the graph topology, IEEE Transactions on
Skogestad, S. and Morani, M. (1989). Robust performance of decentralized control systems by Automatic Contmvl AC-38(9): 1371—1383.
independent designs, Automatica 25W: 119—125. Vinnicombe, 0. (2001). Uncertainty and feedback: 7j~ loop-shaping and the v-gap metric, Imperial
Skogestad, S., Moran, M. and Doyle, J. C. (1988). Robust control of ill-conditioned plants: High-purity College Press, London.
distillation, IEEE Transactions on Automatic ~‘ontrol AC-33( 12): 1092—1105. Walker, D. J. (1996). On the structure of a two degrees-of-freedom 7-10~loop-shaping controller,
Skogestad, S. and Postlethwaite, I. (1996). Multivariable Feedback Control: Analysis and Design, 1st edn, Wiley, Chichester.
International Journal of c’ontrol 63(6): 1105—1127.
Walker, D. J. and Postlethwaite, I. (1996). Advanced helicopter flight control using two degrees-of-
Skogestad, S. and Wolff, B. A. (1991). TANKSPILL - A process control game, C’AC’HE News 32: 1—4. freedom ?L~ optimization, Journal of Guidance, G’ontrol and Dynamics 19(2): March—April.
Published by CACHE Corpnration, Austin, TX, USA. Walker, D. J., Postlethwaite, I., Howitt, 1. and Foster, N. P. (1993). Rotorcraft flying qualities
Smerlas, A., Walker, D. J., Postlethwaite, I., Strange, M. E., Flowitt, J. and Gubbels, A. W. (2001). improvement using advanced control, American Helicopter Society/NASA Conference, San
Evaluating lt00controllers on the NRC Bell 205 fly-by-wire helicopter, Control Engineering Francisco, USA.
Practice 9W: 1—10. Wang, Z. Q., LundstrOm, P. and Skogestad, S. (1994). Representation of uncertain time delays in the lL~~
Sounlas, D. D. and Manousiouthakis, V. (1995). Best achievable decentralized performance, IEEE framework, International Journal of Contmol 59(3): 627—638.
Transactions on Automatic Control AC-40(ll): 1858—1871. 1 Weston, P. and Postlethwaite, 1. (2000). Linear conditioning for systems containing saturating actuators,
Stanley, G., Marino-Galanraga. M. and McAvoy, T. J. (1985). Short cut operability analysis. 1. The Automatica 36(9): 1347—1354.
relative disturbance gain, Industrial and Engineering Ome,nistmy Process Design and Development Whidborne, J. F., Postlethwaite, I. and Cu, 0. W. (1994). Robust controller design using 1l~ loop
24(4): 1181—1 188. shaping and the method of inequalities, IEEE Tm-ansactions on C’ontrol Systems Technology
Stein, G. (2003). Respect the unstable, IEEE Cont,vl Systems Magazine 23(4): 12—25. I
2(4): 455—461.
Stein, 0. and Athans, M. (1987). The LQG/LTR procedure for multivariable feedback control design, Willems, J. (1970). Stability Theomy of Dynamical Systemns, Nelson, London.
Wolff, F A (1994) Studies on C’ont,oI of integrated Plants, PhD thesis, Norwegian University of
Science and Technology, Trondheim
Wonham, M (1974) Lineat Multivar,able Systems, Spnnger-Verlag, Berlin
Youla, D C, Bongiorno, J J and Lu, C N (1974) Single-loop feedback stabilization of linear
multivariable dynamical plants, Automatica 10(2) 159—173
Youla, D C, Jabr, H A and Bongiorno, J J (1976) Modern Wiener-Hopf design of optimal controllers,
part II The multivariable case, IEEE Transactions on Automatic Contiol AC-21(3) 319—338
Young, P M (1993) Robustness wit/i Pa,anietric amid Dynamic Uncertauines, PhD thesis, California
Institute of Technology, Pasadena, CA
Young, P M (1994) Controller design with mixed uncertainties, Proceedings of the American Contiol
Conference, Baltimore, MD, USA, pp 2333—2337
Young, P M and Doyle, I C (1997) A lower bound for the mixed p problem, IEEE Transactions on
Autoniatic Contiol 42(1) 123—128 Acceptable contiol, 201 Bode’s stability condition, 27
Young, P M, Newlin, M and Doyle, J C (1992) Practical computation of the mixed p problem, Active constraint control, 392 Break frequency. 19
Proceedings oft/ic Anierican Control Conference, Chicago, USA, pp 2190—2194 Actuator saturation, see Input constraint Buffer tank
Yu, C C and Fan, M K H (1990) Decentralized integral controllability and D-stability, Chemical Adjoint concentration disturbance, 217
Engineem org Science 45(11) 3299—3309 classical, see Adjugate flow rate disturbance, 218
Yu, C C and Luyben, W L (1987) Robustness with respect to integral controllability, Imidustrial & Hermitian, see Conjugate transpose Bumpless transfer, 381
Engineering OiemnistrvResea,ch 26(5) 1043—1045 Adjugate (classical adjoint), 516
Yue, A and Postlethwaite, I (1990) Improvement of helicopter handling qualities using ‘Jt~ Aero-engine case study, 463, 500—509 Cake baking process, 389, 393
optimization, lEE P’oceethngs - D Control Theomy amid Applications 137 115—129 model reduction, 463 Canonical form, 120, 126
Zafiriou, B (ed ) (1994) IFAC Worhshop on Integmation of Pmvcess Design and Contmol, (Baltimore, MD, controller, 466-471 controllability, 127
June), Peigamon Press, Oxford See also special issue of Coinputers& ChemmcalEngmneeming, Vol plant, 463—465 diagonalized (Jordan), 126
20, No 4, 1996 Align algorithm, 371 observability, 126
Zames, 0 (1981) Feedback and optimal sensitivity Model reference transformations, multiplicative All-pass, 46, 93, 174 obseiver, 126, 127
seminorms, and approximate inverse, IEEE Transactions oti Autoimiatic Contiol AC-26(2) 301— All-pass factorization, 541 Cascade control, 217, 420, 422—426
320 • Analytic function, 173 conventional, 415, 420, 422, 423
Zames, G and Bensoussan, D (1983) Multivariable feedback, sensitivity and decentralized control, Angle between vectors, 535 geneialized controller, 110
IEEE Tramisactions on Aiitomnaiic Contmol AC-28(l 1) 1030—1035 Anti-stable, 462 input resetting, 422, 426
Zhou, K, Doyle, J and Clover, K (1996) Robust and Optunal Cont,ol, Prentice Hall, Upper Saddle Anti-windup, 380, 484 parallel cascade, 423
River, NJ deadzone, 486 why use, 421
Ziegler, I 0 and Nichols, N B (1942) Optimum settings for automatic controllers, Tmansactions of the saturation, 486 Case studies
A SME 64 759—768 synthesis, 488 aero-engine, 500—509
Ziegler, J 0 and Nichols, N B (1943) Process lags in automatic-control circuits Tmonsactions of the Augmented plant model, 347 distillation piocess, 509—514
A SM E 65 433—444 ‘ helicopter, 492—500
Back-off, 397 Cauchy—Schwarz inequality, 535
Balanced model reduction, 458 Causal, 189, 209
residualization, 459 Cause-and-effect graph, 233
truncation, 458 Centralized controller, 386
Balanced realization, 161, 457 Charactenstic gain, see Eigenvalue
Bandwidth, 38 Charactenstic loci, 92, 154
complementary sensitivity (WET), 39~ Characteristic polynomial, 151
gain crossover (we), 33 closed-loop, 151
sensitivity function (wE), 38,81 open-loop, 151
Bezout identity, 122 Classical control, 15—65
4 Bi-proper, see Semi-proper Closed-loop disturbance gain (CLDG), 447, 451
Bilinear matnx inequality, 481 Combinatorial growth, 405
Blaschke product, 541 Command, see Reference (r)
Block relative gain, 415, 430 Compatible norm, 534
Bode gain—phase relationship, 18 Compensator, 91
Bode plots, 17, 32 Complementary sensitivity function (T), 22, 70
Bode sensitivity integral, 168 bandwidth (WET), 39
MIMO, 223 maximum peak (Mp), 35
SISO, 168 output, 70
Bode’s differential relationship, 22, 246 peak 5150, 173
4 __________________________________ RHP-pole, 172, 194
Page numbers in italic refer to definitions Complex number, 515
Condition number (-i), 82, 525 right, 122 Distillation process, 100, 234, 509—5 14 right, 518
computation, 526 stabilizing controllers, 149 DV-model, 513 Element uncertainty, 251, 527
disturbance (7d). 238 state-space realization, 124 diagonal controller, 314 RCA. 251
input uncertainty, 251 uncertainty, 365 inverse-based controller, 245 Estimator
minimized, 82, 526 Crossover frequency, 38 robust stability, 314 general control configuration, 111
robust performance, 324, 327 gain (we), 33, 39 sensitivity peak, 245 see also Observer
Congruence transformation, 481 phase (wigo), 32 LV-model, 510—513 Euclidean norm, 532
Conjugate (A), 515 CDC benchmark problem, 511 Exogenous input (w), 13
Conjugate transpose (A”), 515 D-stability, 444 coupling between elements, 292 Exogenous output (z). 13
Control configuration, 11,384,420 Dead time, see Time delay decentralized control, 451 Extra input, 426
general, 11 Deadzone, 486 detailed model, 512 Extra measurement, 423
one degree-of-freedom, 11 Decay rate, 478 DK-iteration, 330 Extremum seeking control, 388
two degrees-of-freedom, II Decay ratio, 30 element-by-element uncertainty, 253
Control error Ce), 2 Decentralized control, 91, 248, 420, 428—453 feedforward control, 245 Fan’s theorem, 523
scaling, 5 application: distillation process, 45 9I~ loop shaping, 103 FCC process, 257
Control layer, 386 CLDC, 447 inverse-based controller, 100, 102, 250, 322 controllability analysis, 257
Control signal (it), 13 controllability analysis, 448 p-optimal controller, 330 pairings, 443
Control structure design, 2, 383, 502 13-stability, 444 physics and direction, 79 RCA matrix, 85
aero-engine case study, 502 independent design, 429 robust performance, 322 RHP-zeros, 257
Control system decomposition input uncertainty (RCA), 248 robustness problem, 100, 245 Feedback
horizontal, 388 interaction, 437 sensitivity peak (RCA), 250 negative, 20, 69
vertical, 388 pairing, 90, 429,441, 449 SVD analysis, 78 positive, 69
Control system design, 1,491 performance, 447 SVD controller, 102 why use, 24
Control system hierarchy, 387 PRGA, 437,447 Measurement selection, 418 Feedback amplifier, 25
Controllability RDC, 448 regulatory control, 406 Feedback rule, 68
,see Input—output controllability RCA, 83-449 Disturbance (d). 13 Feedforward control, 23. 109
,see State controllability sequential design, 417, 429, 446 limitation MIMO, 238—240 controllability SISO, 209
Controllability Cramian, 128, 457 stability, 438 limitation SISO, 198—199 perfect, 24
Controllability matrix, 128 triangular plant, 441 scaling, S uncertainty MIMO. 243
Controlled output, 384, 388 why use, 421 Disturbance condition number (7d), 238 distillation, 245
aero-engine. 395, 502 Decentralized integral controllability (DIC), 442 Disturbance model (Cd), 122, 148 uncertainty 5150, 203
indirect control, 417 determinant condition, 444 internal stability, 148 unstable plant, 145
selection, 388—403 RCA, 442, 443 Disturbance process example, 47 Feedforward element, 420
self-optimizing control, 391 Decibel (dB), 17 IL,, loop shaping, 368 Feedforward sensitivity, 23, 203, 242
Controlled variable (CV), 388 Decoupling, 92—93 inverse-based controller, 47 Fictitious disturbance, 260
Controller (K), 13 dynamic, 92 loop-shaping design. 49 Final value theorem, 44
Controller design, 40,341.381 partial, 93 mixed sensitivity. 64 Finsler’s lemma, 483
numerical optimization, 41 steady-state, 92 two degrees-of-freedom design. 52 F1 (lower LFT), 543
shaping of transfer functions, 41 Decoupling element, 92, 420 Disturbance rejection, 48 Flexible structure, 53
signal-based, 41 Delay, see Time delay MIMO system, 93 Fourier transform, 122
trade-offs, 341—344 Delta function, see Impulse function (6) mixed sensitivity, 496 Frequency response, 15—20, 122
see also U2 optimal control Derivative action, 126 13K-iteration, 328 bandwidth, see Bandwidth
see also 7-t~ optimal control Derivative kick, 56 Matlab, 330 break frequency, 18
see also LQC control Descriptor system, 121, 367 Dyadic expansion, 120, 518 gain crossover frequency (we), 33, 39
see also p-synthesis Detectable, 134 Dynamic resilience, 166 magnitude, 16. 17
Controller parameterization, 148 Determinant, 517 MIMO system, 71
Convex optimization. 310 Deviation variable, 5, 8 Effective delay (8). 57 minimum-phase, 18
Convex set, 30 Diagonal controller, see Decentralized control Eigenvalue (A), 75, 518 phase, 16
Convolution, 121 Diagonal dominance generalized. 138 phase crossover frequency (wiso), 32
Coprime factor uncertainty, 365 df, 440 measure of gain, 75 phase shift, 16
robust stability, 304 iterative RCA, 88 pole, 135 physical interpretation, IS
Coprime factorization, 122—124 pairing rule, 439 properties of, 519 straight-line approximation, 19
left, 123 Direction. 73 spectral radius, see Spectral radius Frobenius norm, 532
Matlab, 124 Direction of plant, 73, see also Output direction state matrix (A), 519 F~ (upper LFT), 543
model reduction, 462 Directionality, 67, 77, 81 transfer function, 520 Full-authority controller, 494
normalized, 123 Discrete time control Eigenvector, 518 Functional controllability. 233
W~ loop shaping, 380 left, 518 and zeros, 143
566 MULTIVARIABLE FEEDBACK CONTROL
I
I
INDEX 567
uncontrollable output direction, 233 weight selection, 506 Input resetting, 422 disturbance model, 148
IL00 norm, 60, 158, 539 I Input selection, 403 feedback system, 145
Gain, 17,73 calculation using LMI, 477 Input uncertainty, 99, 242, 251 interpolation constraint, 146
Gain margin (GM), 32, 36, 279 induced 2-norm, 158 condition number, 251 two degrees-of-freedom controller, 147
lower, 33 MIMO system, 81 diagonal, 99, 101 Interpolation constraint, 146, 223
LQG, 349 multiplicative property, 160 generalized plant, 298 MIMO, 223
Gain scheduling relationship toW2 norm, 159 a maonitude of, 297 RHP-pole, 223
7-100 loop shaping, 378 7&~ optimal control, 354, 357—364 ~see also Uncertainty RHP-zero, 223
Gain—phase relationship, 18 assumptions, 354 minimized condition number, 251 8180, 167
Gap metnc, 372 7-iteration, 358 1 RGA, 251 Inverse matrix, 515, 524
General control configuration, 104, 353, 383 mixed sensitivity, 359, 494 Input, manipulated, 13 Inverse Nyquist Array method, 440
including weights, 106 robust performance, 364 1 scaling, 5 Inverse response, 184
Generalized controller, 104 signal-based, 362 Input—output controllability, 164 Inverse response process, 26, 44
Generalized eigenvalue problem, 138, 477 Hadamard-weighted N00 problem, 113 analysis of, 164 loop-shaping design, 44
Generalized inverse, 524 Half rule, 58, 87 application LQG design, 347
Geneialized plant, 13, 104, 109, 353 Hamiltonian matrix, 158 aero-engine, 500—509 P control, 27
estimator, 111 Hankel norm, 160—162, 366, 458, 459 Fcc process, 85, 89, 257 P1 control, 29
feedforward control, 109 model reduction, 161, 459—461 first-order delay process, 210 Inverse system, 125
IL0.. loop shaping. 374, 378 Hankel singular value, 160, 178, 229, 458, 463 neutralization process, 213 Inverse-based controller, 46, 47, 92, 100
input uncertainty, 298 aero-engine, 505 room heating, 211 input uncertainty and RGA, 249
limitation, 112 Hanus form, 380 condition number, 82 robust performance, 326
Matlab, 106 Haidy space, 60 controllability rule, 206 structured singular value (p), 326
mixed sensitivity (S/KS), 360 Helicopter case study, 492—500 decentralized control, 448 worst-case uncertainty, 246
mixed sensitivity (S/T), 361 Hermitian matrix, 516 exeicises, 256 Irrational transfer function, 127
one degree-of-freedom controller, 105 Hidden mode, 133 feedforward control, 209 ISE optimal control, 181
two degrees-of-freedom controller, 109 Hierarchical control, 418 • plant design change, 164, 255
uncertainty, 289 distillation, 406, 408 plant inversion, 180 Jordan form, 126, 456, 457
Geishgorin bound, 439 Huiwitz, 135 remarks definition, 166
Gershgonn’s theorem, 439, 519 RGA analysis, 82 Kalman filter, 112, 346
Glover—McFarlane loop shaping, see 7-L0.. loop Ideal resting value, 426 scaling MIMO, 222 generalized plant, Ill
shaping Identification, 252 > scaling 5150, 165 iobustness, 350
Gramian sensitivity to uncertainty, 253 summary MIMO, 253—255 Kalman inequality, 172, 349
controllability, 128 Ill-conditioned, 82 summary 5150, 206—209 Key performance indicatois (KPls), 391
observability, 133 Impropet, ‘~ Input-output pairing, 90, 428—449, 506
Gramian matrix, 128, 458, 460 Impulse function (ó), 121, 345 Input—output selection, 384 £~ norm, 539
Impulse iesponse, 31 Integral absolute error (IAE), 538 £2 gain, 487
IL2 norm, 60, 157, 539 Impulse response matrix, 121 a Integral action, 29 £00 norm, 455
computation of, 157 Indirect control, 417 in LQG controller, 347 Lag, 52, 58
stochastic interpretation, 355 Induced norm, 533 a Integral contiol Laplace transform, 121
IL2 optimal control, 354—356 maximum column sum, 533 uncertainty, 252 final value theorem, 44
assumptions, 354 maximum row sum, 533 , see also Decentralized integral controllabil- Lead—lag, 52
LQG control, 356 multiplicative pioperty, 534 a ity Least squares solution, 524
IL00 loop shaping, 54, 364—381 singular value, 533 Inteoial squaie error (1sF), 31 Left-half plane (LHP) rero, 191
aero-engine, 506 spectral norm, 533 optimal control, 235 Linear fractional transfoimation (LET), 109, 113,
anti-windup, 380 Inferential control, 418 Integratoi, 152 114, 116, 543—546
bumpless transfer, 381 Inner product, 535 a Integrity, 442 factorization of S. 116
controllei implementation, 371 Inner transfer function, 123 i determinant condition, 444 interconnection, 544
controller order, 466 Input constraint, 199, 380 a see also Decentralized integial controllabil- inverse, 545
design procedure, 368 acceptable contiol, 201, 241 stabilizing controller, 116
discrete time contiol, 380 anti-windup, 380, 484 Interaction, 67, 78 Linear matrix inequalities, 473—490
gain scheduling, 378 limitation MIMO, 240—241 a two-way, 88 bilinear matrix inequality, 481
generalized plant, 374, 378 limitation 8180,199—203 a Internal model contiol (1MG), 46,49,54,93 change of variables, 480
implementation, 380 max-norm, 240 block diagram, 149 congruence transformation, 481
Matlab, 369 perfect control, 200, 240 51MG PID tuning rule, 57 feasibility problems, 476
observei, 376 two-nonii, 241 Internal model principle, 49 Finsler’s lemma, 483
servo problem, 372, 376 unstable plant, 201 Internal stability, 134, 144—148 generalized eigenvalue pioblems, 477
two degrees-of-freedom contiollei, 372—376 Input direction, 76 linear objective minimization problems, 477
a-
568 MULTIVARIABLE FEEDBACK CONTROL INDEX 569
Matlab, 477, 479 matrix norm, 537 4 MIMO plant with RHP-zeio, 96 Notation, 10
projection lemma, 483 mixed sensitivity, 64 1 MIMO weight selection. 94 Nyquist array, 92
properties, 474 model ieduction, 463 Mixed sensitivity (S/I’) Nyquist D-contour, 153
S-proceduie, 482 p-analysis, 324 4 generalized plant, 361 Nyquist plot, 17, 32
Schur complement, 481 normalized copiime factonzation, 124 4 Modal truncation, 456 Nyquist stability theorem, 152
structured singular value, 478 pole and zero directions, 140 4 Mode, 120 argument principle, 154
systems of LMIs, 475 pole vectors, 127 4 Model, 13 generalized, MIMO, 152
tricks, 479 repeated parametiic uncertainty, 265 1 derivation of, 7 5150,26
Lineai model, 7 robust performance, 285 4 scaling, 6
Linear objective rninimi7ation problems, 477 robust stability, 278 4 Model matching, 376, 466 Observability, 131
Linear quadratic Gaussian, see LQG step response, 37 Model predictive control, 42 Observability Gramian, 133, 457
Linear quadratic regulator (LQR), 345 vector norm, 537 4 Model reduction. 455—471 Observability matrix, 133
cheap control, 235 Matrix, 120, 5 15—529 4 aero-engine model, 463 Observer, 376
robustness, 349 exponential function, 120 analytic (half rule). 57 7I~~ loop shaping, 376
Linear system, 119 generalized inveise, 524 balanced residualization, 459 Offset, see Control error (e)
Linear system theory, 119—162 inveise, 515 4 balanced truncation, 458 One degree-of-freedom contioller, 11,20
Linearization, 8 norm, 532—537 coprime, 462 Optiniization, 386
Linearizing effect of feedback, 25 Matnx inversion lemma, 516 4 error bound, 460, 462 closed-loop implementation, 389
LMI, see Linear matrix inequalities Matnx norm, 75, 532 4 frequency weight, 471 open-loop implementation, 389
LMI feasibility problems, 476 Frobenius norm, 532 4 Hankel norm approximation, 161, 459461 Optimization layer, 386
Local feedback, 199, 216, 217 induced norm, 533 Matlab, 463 look-up table, 395
Loop shaping, 41,43, 341—344 inequality, 536 modal truncation, 456 Orthogonal, 76
desired loop shape, 43, 49, 94 Matlab, 537 residualization, 456 Orthonormal, 76
disturbance rejection, 48 max element norm, 533 steady-state gain pieservation. 465 Output (y), 13
flexible structure, 53 relationship between norms, 536 truncation, 456 primaiy, 13, 427
robust perfoimance, 283 Matiix square root (4i/2), 516 unstable plant, 462 secondary, 13, 427
slope, 43 Maximum modulus principle, 173 Model uncertainty, see Uncertainty Output direction. 76, 221, 222
trade-off, 42 Maximum singular value, 77 Moore—Penrose inverse, 524 disturbance. 221, 238
see alto WcO loop shaping McMillan degree, 133, 455 p. tee Structuied singular value plant, 76, 221
Loop transfer function (L), 22, 69 McMillan form, 141 p-synthesis, 328—335 pole, 137, 221
Loop tiansfer iecoveiy (LTR), 344, 351—352 Measurement, 13 4 Multilayer, 388 zero, 140,221
LQG control, 41,260, 344—351 cascade control, 415 4 Multilevel, 388 Output scaling, 5
controller, 347 Measurement noise (ii), 13 4 Multiplicative property, 75, 160, 534 Output uncertainty, tee Uncertainty
IL2 optimal control, 356 Measurement selection, 417 Multiplicative uncertainty, see Uncertainty Overshoot. 30, 193
inverse response process, 347 distillation column, 418 Multivaitable stability margin, 308
Matlab, 348 MIMO system, 67 Multivariable zero, see Zero Pade appioximation, 127
problem definition, 345 Minimal realization, 133 Pairing, 90, 429, 441, 449
robustness, 349, 350 Minimized condition number, 526, 527 Neglected dynamics, see Uncertainty aero-engine, 506
Lyapunov equation, 128, 133, 457 input unceitainty, 251 Neutralization process, 213—217, 549 ,see also Decentralized control
Lyapunov stability, 487 Minimum singular value, 77, 254 control system design, 216 Parseval’s theoiem, 355
Lyapunov theorem, 487 ado-engine, 504 mixing tank, 213 Partial control
output selection, 395 plant design change FCC process, 257
Main loop theoiem, 317 plant, 233, 241 multiple pH adjustments, 216 Partitioned matnx, 516, 517
Manipulated input, see Input Minimum-phase, 19 multiple taaks, 214 Perfect control, 180
Manual control, 388 Minor of a matnx, 135 Niederlinski index, 444 non-causal controller, 189, 190
Matlab files Mixed sensitivity, 62. 282 Noise (n), 13 unstable controller, 190
acheivable sensitivity peak, 225 disturbance rejection, 496 4 Nominal performance (NP), 3, 281. 300 Performance, 30
coptime uncertainty, 367, 369 general control conflguiation 106 Nyquist plot, 281 frequency domain, 32
distillation configurations, 510 generalized plant, 108 Nominal stability (NS), 3,300 ?Lcc norm, 81
DK-iteration, 330 7i~ optimal control, 359 494 Non-causal controllei, 189 limitations MJMO, 221—258
frequency dependent RGA, 86 RP, 282 Non-minunum-phase, 19 lunitations 8180. 163—219
generalized eigenvalue problems, 479 weight selection, 496 Norm, 530-540 time domain. 30
generalized plant, 106 Mixed sensitivity (S/KS), 64 , see also Matrix norm weight selection, 62
input performance, 230 disturbance piocess, 64 ,see also Signal norm weighted sensitivity, 61,81
lioeai objective niininnzatioo problems, 477 geneialized plant, 360 ,see also System noim worst-case, 320, 334
LMI feasibility problems, 477 Matlab, 64 1 ,see also Vector norm , see also Robust performance
4
4
I
570 MULTIVARIABLE FEEDBACK CONTROL INDEX 571
I
572 MULTIVARIABLE FEEDBACK CONTROL
I INDEX 573
Setpoint, see Reference (r)
Settling time, 30
Shaped plant (Gs), 91, 368
Shaping of closed-loop transfer function, 41, see also Loop shaping
Sign of plant MIMO, 252
Signal, 3
Signal norm, 537
    ∞-norm, 538
    lp-norm, 538
    1-norm, 538
    2-norm, 538
    ISE, 538
    power-norm, 538
Signal uncertainty, 24
    see also Disturbance (d), see also Noise (n)
Signal-based controller design, 362
SIMC PID tuning rule, see PID controller
Similarity transformation, 519
Singular matrix, 521, 524
Singular perturbational approximation, 457, 459
Singular value, 76, 77
    2 x 2 matrix, 521
    frequency plot, 80
    H∞ norm, 81
    inequalities, 522
Singular value decomposition (SVD), 75, 520
    2 x 2 matrix, 76
    economy-size, 524
    non-square plant, 79
    of inverse, 522
    pseudo-inverse, 524
    SVD controller, 93
Singular vector, 76, 521
Sinusoid, 16
Skewed-μ, 316, 320, 326
Small-gain theorem, 156
    robust stability, 306
@, 57
Spatial norm, 530
    see also Matrix norm
    see also Vector norm
Spectral decomposition, 518
Spectral radius (ρ), 518, 535
    Perron root (ρ(|A|)), 536
Spectral radius stability condition, 155
Spinning satellite, 98
    robust stability, 315
Split-range control, 428
Stability, 26, 134, 135
    closed-loop, 26
    frequency domain, 150
    internal, 134
    Lyapunov, 487
    see also Robust stability
Stability margin, 35
    coprime uncertainty, 366
    multivariable, 308
Stabilizable, 134, 150
    strongly stabilizable, 150
Stabilization, 150
    input usage, 201
    pole vector, 137, 411
    unstable controller, 228
Stabilizing controller, 116, 148-150
State controllability, 127, 137, 166
    example tanks in series, 130
State estimator, see Observer
State feedback, 345, 346, 480, 484
State matrix (A), 120
State observability, 131, 137
    example tanks in series, 133
State-space realization, 119, 125
    hidden mode, 133
    inversion of, 125
    minimal (McMillan degree), 133
    unstable hidden mode, 134
    see also Canonical form
Steady-state gain, 17
Steady-state offset, 29, 30
Step response, 31
Stochastic, 344, 355, 356
Strictly proper, 4
Strokes, The, 575
Structural property, 233
Structured singular value (μ, SSV), 283, 306, 307
    complex perturbations, 309
    computational complexity, 336
    definition, 308
    discrete case, 337
    DK-iteration, 328
    distillation process, 330
    LMI, 478
    Matlab, 324, 330
    μ-synthesis, 328-335
    nominal performance, 319
    practical use, 339
    properties of, 308
        complex perturbation, 309-313
        real perturbation, 308
    real perturbation, 336
    relation to condition number, 324
    robust performance, 316, 319, 364
    robust stability, 319
    RP, 283
    scalar, 307
    skewed-μ, 283, 316, 320
    state-space test, 337
    upper bound, 336
    worst-case performance, 320
Submatrix (A^ij), 516
Sum norm (||A||sum), 532
Superposition principle, 4, 119
Supervisory control, 386
Supremum (sup), 60
System norm, 156-162, 539
System type, 44
Systems biology, xi

Temporal norm, 530
    see also Signal norm
    see also System norm
Time delay, 45, 127, 182, 233
    effective, 57
    increased delay, 234
    limitation MIMO, 233
    limitation SISO, 45, 182
    Padé approximation, 127
    perfect control, 189
    phase lag, 19
Time delay uncertainty, 34
Time response
    decay ratio, 30
    overshoot, 30
    quality, 31
    rise time, 30
    settling time, 30
    speed, 31
    steady-state offset, 30
    total variation, 31
Time scale separation, 387
Total variation, 31
Transfer function, 3, 21, 121
    closed-loop, 21
    evaluation MIMO, 68
    evaluation SISO, 22
    rational, 4
    state-space realization, 125
Transmission zero, see Zero, 141
Transpose (A^T), 515
Triangle inequality, 75, 530
Truncation, 456
Two degrees-of-freedom controller, 11, 23, 147
    H∞ loop shaping, 372-376
    design, 51-52
    internal stability, 147
    local design, 111, 420

Ultimate gain, 26
Uncertainty, 3, 24, 203, 259, 289, 290
    additive, 267, 268, 293
    and feedback - benefits, 246
    and feedback - problems, 247
    at crossover, 205
    complex SISO, 266-270
    convex set, 301
    coprime factor, 304, 365
    diagonal, 296
    element-by-element, 292, 295
    feedforward control, 203, 243
        distillation process, 245
        RGA, 244
    frequency domain, 265
    generalized plant, 289
    infinite order, 274
    input, 293, 294, 298, see also Input uncertainty
    input and output, 299
    integral control, 252
    inverse additive, 294
    inverse multiplicative, 262, 294
    LFT, 289
    limitation MIMO, 242-253
    limitation SISO, 203-205
    lumped, 294
    Matlab, 278
    modelling SISO, 259
    multiplicative, 262, 268, 269
    N∆-structure, 291
    neglected dynamics, 261, 271
    nominal model, 270
    Nyquist plot, 266, 270
    output, 242, 293, 294
    parametric, 261, 262, 269, 292
        gain, 262, 288
        gain and delay, 272
        pole, 263
        time constant, 263
        zero, 264
    physical origin, 260
    pole, 270
    RHP-pole, 263
    RHP-zero, 264
    signal, 24
    state space, 264
    structured, 262
    time-varying, 336
    unmodelled, 261, 273
    unstable plant, 263
    unstructured, 262, 293
    weight, 268, 269
Undershoot, 184
Unitary matrix, 520
Unstable hidden mode, 134
Unstable mode, 135
Unstable plant, 192
    frequency response, 18
    PI control, 30, 34
    see also RHP-pole, see also Stabilizable, see also Stabilizing controller

Valve position control, 426
Vector norm, 531
    Euclidean norm, 531
    Matlab, 537
    max norm, 531
    p-norm, 531
Waterbed effect, 167
Weight selection, 62, 329
    H∞ loop shaping, 370, 506
    mixed sensitivity, 496
    mixed sensitivity (S/KS), 94
    performance, 62, 329
Weighted sensitivity, 60
    generalized plant, 111
    MIMO system, 81
    RHP-zero, 172, 185, 223
    typical specification, 60
Weighted sensitivity integral, 170
White noise, 344
Wiener-Hopf design, 362

YALMIP, 490
Youla parameterization, 148
The End Has No End
The Strokes
From the album "Room on Fire"
October 2003