
Time Delays in Control Systems

course notes (winter 2019/2020)

Leonid Mirkin
Faculty of Mechanical Engineering
Technion—IIT

draft, April 14, 2020


Contents

Preface vii

Nomenclature ix

1 Systems with Time Delays 1


1.1 Delay elements and their dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Delay in discrete time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 Delay in continuous time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Interactions of delays with other dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1 Input and output delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.2 Internal delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.3 General interconnections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3 Finite-dimensional approximations of the delay element . . . . . . . . . . . . . . . . . . . 12
1.3.1 Padé approximant of e^{-τs} . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

2 Stability Analysis 19
2.1 Modal methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.1.1 Characteristic function of delay-differential equations . . . . . . . . . . . . . . . . 19
2.1.2 Asymptotic root properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.1.3 Stability and roots of characteristic function . . . . . . . . . . . . . . . . . . . . . 24
2.1.4 Nyquist stability criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.1.5 Delay sweeping (direct method of Walton–Marshall) . . . . . . . . . . . . . . . . 27
2.1.6 Bilinear (Rekašius) transformation . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.2 Lyapunov’s direct method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.2.1 Ordinary differential equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.2.2 Delay-differential equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

3 Stabilization of Time-Delay Systems 39


3.1 Stabilization of FOPTD systems by fixed-structure controllers . . . . . . . . . . . . . . . 39
3.1.1 Stabilizing PI controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.1.2 Stabilizing PD controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.2 Problem-oriented controller architectures: historical developments . . . . . . . . . . . . . 43
3.2.1 Dead-time compensation: Smith predictor and its modifications . . . . . . . . . . 43
3.2.2 Finite spectrum assignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.2.3 Kwon–Pearson–Artstein reduction . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.2.4 Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.3 Problem-oriented controller architectures: control-theoretic insight . . . . . . . . . . . . . 49
3.3.1 Gaining insight via discrete-time systems: state feedback . . . . . . . . . . . . . . 49


3.3.2 Gaining insight via discrete-time systems: output feedback . . . . . . . . . . . . . 52


3.3.3 Intermezzo: Fiagbedzi–Pearson reduction for systems with internal delays . . . . 53
3.4 Loop shifting and all stabilizing controllers . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.4.1 Internal stability and loop shifting . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.4.2 Preliminary: truncation and completion operators . . . . . . . . . . . . . . . . . . 60
3.4.3 Loop shifting for dead-time systems . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.4.4 Potential extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.5 Delay as a constraint: extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

4 Performance of Time-Delay Systems 67


4.1 Standard H2 and H∞ problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.1.1 State-space formulae . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.1.2 Design case study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.2 H2 design for dead-time systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.2.1 Extraction of optimal dead-time controllers . . . . . . . . . . . . . . . . . . . . . 74
4.2.2 Loop shifting solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.2.3 Design case study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.2.4 Extensions to systems with multiple loop delays . . . . . . . . . . . . . . . . . . . 79
4.3 H∞ design for dead-time systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.3.1 Extraction of γ-suboptimal dead-time controllers . . . . . . . . . . . . . . . . . . 82
4.3.2 Loop shifting approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.4 Tuning industrial controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

5 Implementation of DTC-based Controllers 89


5.1 General observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5.2 Implementation via reset mechanism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.3 Rational approximations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5.3.1 Naïve Padé . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5.3.2 Padé with interpolation constraints . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.3.3 Direct Padé . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.3.4 Approach of Partington–Mäkilä . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.4 Lumped-delay approximations (LDA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.4.1 Naïve use of Newton–Cotes formulae . . . . . . . . . . . . . . . . . . . . . . . . 96
5.4.2 Proper use of Newton–Cotes formulae . . . . . . . . . . . . . . . . . . . . . . . . 97
5.4.3 Beyond Newton–Cotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5.5 Coda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101

6 Robustness to Delay Uncertainty 103


6.1 Delay margin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
6.1.1 Bounds on the achievable delay margin . . . . . . . . . . . . . . . . . . . . . . . . 104
6.1.2 Delay margins of DTC-based loops: case study and general considerations . . . . 106
6.2 Embedding uncertain delays into less structured uncertainty classes . . . . . . . . . . . . . 109
6.2.1 Underlying idea . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.2.2 Preliminary: robust stability with respect to norm-bounded uncertainty . . . . . . 110
6.2.3 Covering models for the uncertain delay element . . . . . . . . . . . . . . . . . . 113
6.2.4 Case study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
6.2.5 Time-varying delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
6.2.6 Beyond simple coverings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

6.3 Analysis based on Lyapunov–Krasovskii methods . . . . . . . . . . . . . . . . . . . . . . 123

7 Exploiting Delays 127


7.1 Dead-beat open-loop control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.1.1 Posicast control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.1.2 Generating continuous-time FIR responses by a chain of delays . . . . . . . . . . 129
7.1.3 Input shaping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
7.1.4 Time-optimal control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
7.1.5 Generating continuous-time FIR responses by general FIR systems . . . . . . . . 134
7.2 Preview control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
7.3 Stabilizing delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.4 Delays in the regulator problem: repetitive control . . . . . . . . . . . . . . . . . . . . . . 142

A Background on Linear Algebra 147


A.1 Schur complement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
A.2 Sign-definite matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
A.3 Linear matrix equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

B Background on Linear Systems 151


B.1 Signals and systems in time domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
B.1.1 Continuous-time signals and systems . . . . . . . . . . . . . . . . . . . . . . . . . 151
B.1.2 Discrete-time signals and systems . . . . . . . . . . . . . . . . . . . . . . . . . . 153
B.2 Signals and systems in transformed domains . . . . . . . . . . . . . . . . . . . . . . . . . 154
B.3 State-space techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155

Bibliography 155

Index 161
Preface

TIME DELAYS are ubiquitous in control applications. They represent mass and heat transport phenomena, computation and communication time lags, many effects of unmodeled high-frequency dynamics, et cetera. Dynamics of continuous-time systems involving delays are intrinsically infinite dimensional,
which complicates their analysis and associated control design methods. In many situations, delays have
negative effects on the stability of control systems and impose severe limitations on their attainable perfor-
mance. These factors suggest that understanding time-delay systems and corresponding control analysis
and design methods is of vital importance.
These notes are intended to be an introduction to the realm of time-delay control systems. Their main
emphasis is laid on the linear time-invariant (LTI) setting and input and / or output delays. The reason
is twofold. First, this class, dubbed dead-time systems, is of great importance in applications, where a
major harm is caused by loop delays. Second, these systems constitute the best understood class of time-
delay systems, with plenty of rigorous, yet still transparent and intuitive, analysis and design methods
available. Therefore, dead-time systems are a convenient class of time-delay systems, on which concepts
can be explained without the need to dig into overly convoluted technicalities. Still, many ideas behind
the studied systems are generic and extendible to more general settings.
Two aspects of the control of time-delay systems are highlighted throughout the text. The first one is
prominence given to dead-time compensation (DTC) methods as the control architecture in the context of
time-delay systems. I am convinced—and hope that the text conveys this opinion—that DTC is intrinsic
to delayed dynamics and is a natural extension of classical concepts of state feedback and state observa-
tion. As such, quite a lot of space is devoted to motivating the DTC structure, its use in various control
and estimation problems, as well as to related implementation issues. The second peculiarity is that the
presentation is not dominated by stability analyses. Stability requirements are, of course, a compulsory part of the specifications imposed on control systems. But stabilization is hardly ever the ultimate goal of control design. Control is about imposing desired behaviors on controlled systems, reducing their sensitivity to disturbances, and so on. These aspects are extensively discussed in the notes.
This is an engineering text, so first and foremost it aims at developing an engineering insight into the
impact of delays on control systems and at exploiting the structure of the delay element in various analysis
and design situations. As a result—whether this is a welcome outcome or not depends on viewpoint—the
notes are less concerned with such apparently fascinating issues as associated initial value problems, the
smoothness of solutions, clustering closed-loop poles for systems with multiple incommensurate delays,
and so on. Also, the math is not always self-contained; some technical results are presented without proofs.
Nonetheless, reasonable levels of rigorousness and self-containment are endeavored (although with only
a partial success).

Haifa (32.7746, 35.0230)                                                  LEONID MIRKIN


October, 2019

Nomenclature

N             set of positive integers (natural numbers)
Z             set of integers
Z_+           set of nonnegative integers
Z_-           set of non-positive integers, Z_- = Z \ N
Z_{i1..i2}    integer interval from i1 to and including i2, i.e. Z_{i1..i2} := { i ∈ Z | i1 ≤ i ≤ i2 }
R             set of real numbers, R = (-∞, ∞)
R_+           set of nonnegative real numbers, R_+ = [0, ∞)
R_-           set of non-positive real numbers, R_- = (-∞, 0]
jR            set of pure imaginary numbers
C             set of complex numbers
Re z          the real part of z ∈ C
Im z          the imaginary part of z ∈ C
C_α           open right half-plane to the right of α ∈ R, i.e. C_α := { s ∈ C | Re s > α }
C̄_α           closed right half-plane to the right of α ∈ R, i.e. C̄_α := { s ∈ C | Re s ≥ α }
T             unit circle, T := { z ∈ C | |z| = 1 }
D             interior of T (open unit disk), D := { z ∈ C | |z| < 1 }
D̄             closed unit disk, D̄ := { z ∈ C | |z| ≤ 1 } = D ∪ T
F             generic field, frequently used as an alias of either R or C
C^{p×m}(I)    class of continuous functions I → F^{p×m} for I ⊂ R (denoted C^p(I) if m = 1 and C(I) if the dimensions are irrelevant or clear from the context)
L_2^{p×m}(I)  Lebesgue space of square-integrable functions I → F^{p×m} (or L_2(I))
L_{2+}^{p×m}(R)  space of square-integrable functions R → F^{p×m} vanishing on R \ R_+ (or L_{2+}(R))
L_{2-}^{p×m}(R)  space of square-integrable functions R → F^{p×m} vanishing on R \ R_- (or L_{2-}(R))
ℓ_2^{p×m}(I)  space of square-summable functions I → F^{p×m} for I ⊂ Z (or ℓ_2(I))
ℓ_{2+}^{p×m}(Z)  space of square-summable functions Z → F^{p×m} vanishing on Z \ Z_+ (or ℓ_{2+}(Z))
ℓ_{2-}^{p×m}(Z)  space of square-summable functions Z → F^{p×m} vanishing on Z \ Z_- (or ℓ_{2-}(Z))
L_1^{p×m}(I)  space of absolutely integrable functions I → F^{p×m} (or L_1(I))
L_∞^{p×m}(I)  space of essentially bounded functions I → F^{p×m} (or L_∞(I))
H_∞^{p×m}(A)  Hardy space of holomorphic and bounded functions A → F^{p×m} for some A ⊂ C (or H_∞)
x̄_t(s)        finite-window history of x at time t, x̄_t(s) := x(t + s) for all s ∈ [-τ, 0] and some τ > 0
1_I(t)        (continuous-time) indicator of a set I ⊂ R, 1_I(t) = 1 if t ∈ I and 0 otherwise
1(t)          unit step, 1(t) := 1_{R_+}(t)
δ(t)          Dirac delta function
1_I[i]        (discrete-time) indicator of a set I ⊂ Z, 1_I[i] = 1 if i ∈ I and 0 otherwise
1[i]          unit step, 1[i] := 1_{Z_+}[i]
δ[i]          unit pulse at i = 0, δ[i] = 1 if i = 0 and 0 otherwise
e_i           the ith standard basis vector in F^n; e_1 := [1 0 0 ⋯ 0]', e_2 := [0 1 0 ⋯ 0]', et cetera
I_n           n × n identity matrix (just I if the dimension is irrelevant)
M'            transpose of a matrix M ∈ R^{n×m} / complex-conjugate transpose of a matrix M ∈ C^{n×m}
λ_i(M)        ith eigenvalue of a matrix M ∈ F^{n×n}
spec(M)       spectrum of a matrix M ∈ F^{n×n}, i.e. the set of all its eigenvalues
ρ(M)          spectral radius of a matrix M ∈ F^{n×n}, ρ(M) = max{ |λ_1(M)|, …, |λ_n(M)| }
σ̄(M)          the maximum singular value of a matrix M ∈ F^{p×m}
σ_(M)         the minimum singular value of a matrix M ∈ F^{p×m}
tr(M)         trace of a matrix M ∈ F^{n×n}, tr(M) = Σ_{i=1}^n m_ii = Σ_{i=1}^n λ_i(M)
‖M‖           spectral norm of M ∈ F^{n×m}, ‖M‖² := ρ(M'M) = ρ(MM')
‖M‖_F         Frobenius norm of M ∈ F^{n×m}, ‖M‖²_F := tr(M'M) = tr(MM') = Σ_{i=1}^n Σ_{j=1}^m |m_ij|²
diag{M_i}     block-diagonal matrix with M_1, …, M_k on its diagonal
sign a        sign of a ∈ R, i.e. sign a = 1 if a > 0, sign a = -1 if a < 0, and sign a = 0 if a = 0
deg P(s)      degree of a polynomial P(s)
lcf           left coprime factorization over H_∞, like G = M̃^{-1} Ñ
rcf           right coprime factorization over H_∞, like G = N M^{-1}
F_l(G, K)     lower linear-fractional transformation, F_l(G, K) = G_11 + G_12 K (I - G_22 K)^{-1} G_21
F_u(G, K)     upper linear-fractional transformation, F_u(G, K) = G_22 + G_21 K (I - G_11 K)^{-1} G_12
G ⋆ G̃         Redheffer star product,
              G ⋆ G̃ = [ F_l(G, G̃_11)                     G_12 (I - G̃_11 G_22)^{-1} G̃_12
                        G̃_21 (I - G_22 G̃_11)^{-1} G_21   F_u(G̃, G_22) ]
Chapter 1

Systems with Time Delays

LATENCY is an intrinsic part of mass and information transfer. After all, no mass / information can travel faster than the speed of light. Information processing takes time as well. Hence, every control system should take potential latencies into account. This chapter introduces the delay element, which is the basic module describing latencies, and discusses its fundamental properties, in both continuous and discrete time, and its effects on finite-dimensional dynamics.

1.1 Delay elements and their dynamics


1.1.1 Delay in discrete time
Although we are mostly concerned with continuous-time systems, throughout this text we occasionally
use their discrete-time counterparts to gain insight into underlying ideas. This is because the dynamics
of the delay element are more conventional in discrete time, which facilitates grasping ideas without the
need to dig into advanced mathematical notions.
With this logic in mind, we start with defining the discrete-time delay element as a system D̄_τ : u ↦ y, acting as

    y[t] = u[t − τ]    (an input sample entering at time t_0 leaves the element at time t_0 + τ)        (1.1)

for some τ ∈ N, called the delay, and all u : Z_+ → R^m. This is an ordinary linear shift-invariant (LSI) causal system, whose impulse response is d[t] = δ[t − τ] I_m and whose (τm)-order transfer function, which is the z-transform of d, is

    D̄_τ(z) = (1/z^τ) I_m = [ 0    I_m  ⋯  0    | 0
                              ⋮    ⋮    ⋱   ⋮    | ⋮
                              0    0    ⋯  I_m  | 0
                              0    0    ⋯  0    | I_m
                              ----------------------
                              I_m  0    ⋯  0    | 0   ].                  (1.2)
The chosen state-space realization above is in the canonical companion form and is one of many possibilities, of course. Another route to end up with this realization is to construct the state vector of D̄_τ first. This can be done via the interpretation of the state vector as a memory accumulator. It is readily seen that, given an arbitrary time instance t ≥ 0, the knowledge of u[t + s] for all s ∈ Z_{−τ..−1} is what we need to determine the present and future values of y given the inputs from t on. The (τm)-dimensional vector

    x[t] := [ u[t − τ] ; … ; u[t − 1] ]                                   (1.3)

is thus a logical candidate for the state vector of D̄_τ. Under this choice, the state propagation equation becomes

    D̄_τ :  x[t + 1] = [ 0  I_m  ⋯  0    0
                         ⋮   ⋮   ⋱   ⋮    ⋮
                         0  0    ⋯  0    I_m
                         0  0    ⋯  0    0   ] x[t] + [ 0 ; ⋮ ; 0 ; I_m ] u[t]      (1.4)
            y[t]     = [ I_m  0  ⋯  0 ] x[t]

and agrees with (1.2). As a matter of fact, the observability matrix of this realization equals I_{τm} and its controllability matrix is the block-exchange matrix with m-dimensional blocks. Hence, realization (1.4) is minimal. If the delay element is not assumed to be in its zero equilibrium at t = 0, a nonzero initial condition x[0], which is the history of its inputs in Z_{−τ..−1}, can be introduced.
The delay element D̄_τ is ℓ_2(Z_+)-stable. This follows from the fact that all poles of its transfer function in (1.2) are at the origin, i.e. in the open unit disk D. Another way to see that is via the readily verifiable relation ‖D̄_τ u‖_2 = ‖u‖_2, which holds for all u ∈ ℓ_2(Z_+).
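To make the shift-register interpretation concrete, here is a minimal numerical sketch (Python/NumPy, not part of the original notes) that builds the realization (1.4) for m = 1, verifies that it indeed delays its input by τ samples, and checks the norm-preservation relation ‖D̄_τ u‖_2 = ‖u‖_2 on a finite-energy input.

```python
import numpy as np

def delay_realization(tau, m=1):
    """State-space matrices (A, B, C) of the discrete delay element (1.4); D = 0."""
    A = np.kron(np.eye(tau, k=1), np.eye(m))            # block "shift-up" matrix
    B = np.kron(np.eye(1, tau, tau - 1).T, np.eye(m))   # new input enters the last block
    C = np.kron(np.eye(1, tau, 0), np.eye(m))           # output is the oldest stored block
    return A, B, C

tau, N = 3, 50
A, B, C = delay_realization(tau)

rng = np.random.default_rng(0)
u = np.concatenate([rng.standard_normal(N), np.zeros(tau)])  # finite-energy input

x = np.zeros((tau, 1))
y = np.zeros(len(u))
for t, ut in enumerate(u):
    y[t] = (C @ x).item()
    x = A @ x + B * ut

assert np.allclose(y[tau:], u[:-tau])                    # y[t] = u[t - tau]
assert np.isclose(np.linalg.norm(y), np.linalg.norm(u))  # ||D_tau u||_2 = ||u||_2
```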
Remark 1.1 (varying delays). A natural generalization of D̄_τ is the varying delay element D̄_{τ[t]}, which acts as y[t] = u[t − τ[t]] for a function τ[t] ≥ 0. This system is substantially knottier than the constant delay element; even its dimension varies from step to step. Another, somewhat surprising, fact is that D̄_{τ[t]} might be ℓ_2(Z_+)-unstable for some τ[t]. For example, if τ[t] = t, then we have that y[t] = u[0] for all t ∈ Z_+. So the choice of u[t] = δ[t], which is an ℓ_2(Z_+)-signal, results in y[t] = 1[t], which is not. Yet it can be shown that D̄_{τ[t]} is ℓ_2(Z_+)-stable if τ[t] is uniformly bounded, say by N̄ ∈ N, with its induced norm upper bounded by √(1 + N̄) > 1 then. O
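A few lines suffice to reproduce the destabilizing example from Remark 1.1 (a sketch under the same assumptions as above: Python/NumPy, scalar signals): with τ[t] = t the unit pulse is mapped to the unit step, which is not square summable.

```python
import numpy as np

N = 10
u = np.zeros(N); u[0] = 1.0              # unit pulse, an l2(Z+) signal
tau = np.arange(N)                       # varying delay tau[t] = t
y = np.array([u[t - tau[t]] if t - tau[t] >= 0 else 0.0 for t in range(N)])
print(y)                                 # all ones: the unit step, which is not in l2(Z+)
```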

1.1.2 Delay in continuous time


The continuous-time delay element D̄_τ : u ↦ y is similar to its discrete-time counterpart from (1.1). It is defined as

    y(t) = u(t − τ)    (an input entering at time t_0 leaves the element at time t_0 + τ)        (1.5)

for some constant delay τ > 0 and all u : R_+ → R^m. The similarity is not complete though. System (1.5) is more complex than that in (1.1), chiefly because the former is infinite dimensional.
Remember that the dimension of a dynamic system is the size of its minimal possible state vector, its smallest history accumulator. For the system in (1.5) the history, required to continue from a given time point t_c ≥ 0, is clearly the whole trajectory of u in the time interval [t_c − τ, t_c]. Indeed, this is exactly what we need to calculate y(t) for t ∈ [t_c, t_c + τ]. In other words, given an arbitrary time instance t ≥ 0, we have to know the function ū_t : [−τ, 0] → R^m such that

    ū_t(s) = u(t + s),    ∀ s ∈ [−τ, 0],                                  (1.6)

to determine the present and future values of y given the inputs from t on. This is a perfect analogy with the discrete delay. Yet there is a qualitative difference between the discrete- and continuous-time cases. The set of all discrete m-dimensional functions over the finite interval Z_{−τ..−1} is a finite-dimensional linear space; after all, it is equivalent to the set of all τm-dimensional vectors, cf. (1.3). Unlike this, the set of all continuous-time functions over the finite interval [−τ, 0] is an infinite-dimensional linear space, because the number of linearly independent functions is unbounded there¹. Hence, the continuous-time D̄_τ in (1.5) is an infinite-dimensional system. Its state is ū_t defined by (1.6). But writing down corresponding state equations would require advanced technical tools, which goes beyond the scope of these notes.
¹ For example, the functions f_i(t) := e^{j2πit/τ} are linearly independent for all i ∈ Z_+.
[Fig. 1.1 here: (a) Bode plot, (b) Nyquist plot, (c) Nichols chart]
Fig. 1.1: Frequency response plots of the continuous-time delay element

If not stated otherwise, we assume that the initial conditions for (1.5) are zero, i.e. ū_0 = 0. In this case D̄_τ actually equals the shift operator S_τ defined by (B.2). The delay element is LTI and stable. Indeed,

    [D̄_τ(αu + βv)](t) = (αu + βv)(t − τ) = αu(t − τ) + βv(t − τ) = α(D̄_τ u)(t) + β(D̄_τ v)(t)

for all constants α and β and inputs u and v, which implies linearity. Because D̄_τ S_σ = S_{σ+τ} = S_σ D̄_τ, we have time invariance. And L_2(R_+)-stability follows, similarly to the discrete-time case, from the fact that ‖D̄_τ u‖_2 = ‖u‖_2 for all u ∈ L_2(R_+). The delay element is obviously causal and its impulse response is

    d_τ(t) = δ(t − τ) I_m,

where m is the dimension of u(t) and y(t). Thus, D̄_τ can be analyzed in transformed domains, like Fourier and Laplace.
The transfer function of D̄_τ is

    D̄_τ(s) := L{d_τ} = e^{−τs} I_m.                                       (1.7)

It is an irrational function of s, which is yet another indication that the continuous-time delay element is an infinite-dimensional system. The function e^{−τs} is an entire function of s (i.e. holomorphic in the whole C). It is also bounded in every right half-plane C_α := {s ∈ C | Re s > α}, as |e^{−τs}| < e^{−τα} for all s ∈ C_α and τ > 0. Hence, the transfer function D̄_τ ∈ H_∞, which is the set of all holomorphic and bounded functions in C_0. This is yet another proof that the continuous-time delay element is L_2-stable and causal.
The frequency response of the delay element is

    D̄_τ(jω) := F{d_τ} = e^{−jωτ} I_m.                                     (1.8)

In the scalar case, m = 1, its magnitude and phase are quite simple:

    |D̄_τ(jω)| = 1    and    arg D̄_τ(jω) = −τω.                            (1.9)

Thus, the frequency response magnitude of the delay element is unity at all frequencies and its phase is a linearly decreasing function of ω, i.e. the delay element adds a phase lag growing linearly with the frequency. The Bode, Nyquist, and Nichols plots of D̄_τ are presented in Fig. 1.1. Expressions (1.9) facilitate the analysis of time-delay systems in the frequency domain, making it in some cases rather intuitive and substantially simpler than the analysis in the time domain.
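As a quick numerical illustration of (1.8)–(1.9) (a hedged sketch, not from the original text), one can evaluate the frequency response at a few frequencies and observe the unit magnitude together with the linearly growing, hence unbounded, phase lag.

```python
import numpy as np

tau = 1.0
w = np.array([0.1, 1.0, np.pi, 10.0])        # rad/s
D = np.exp(-1j * w * tau)                    # frequency response (1.8)
print(np.abs(D))                             # all ones: unit magnitude, cf. (1.9)
print(np.degrees(-w * tau))                  # phase -tau*omega in degrees, keeps decreasing
```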
Remark 1.2 (varying delays). Like in the discrete-time case, we can generalize D̄_τ as the varying delay element D̄_{τ(t)}, which acts as y(t) = u(t − τ(t)) for some function τ(t) ≥ 0. Curiously, this system might be L_2(R_+)-unstable even if |τ(t)| ≤ N̄ for all t ∈ R_+ and an arbitrary upper bound N̄ > 0 (consider finding an example of such a delay function as a homework assignment). A yet more general situation is if the delay depends not only on time, but also on its input, so that y(t) = u(t − τ(t, u(t))). Such a delay element, D̄_{τ(t),u}, is nonlinear and its properties are yet more involved. O

(a) Sensing in rolling mill (b) Actuating via conveyor belt

Fig. 1.2: Examples of systems with sensing and actuating delays

1.2 Interactions of delays with other dynamics


One seldom faces processes containing the delay element alone. In most situations delays interact with
other dynamic processes. In this section some such interactions are studied. For the sake of simplicity,
we mostly consider systems with a single and constant delay, which simplifies their analysis. Remarks on
multiple-delay systems and time-varying delays will be provided mostly to highlight potential differences.

1.2.1 Input and output delays


Arguably, the principal sources of delays in feedback control applications are delays arising in the "control path," i.e. in transferring information between the sensor and the actuator ends of the controller. These are
sensing delays, like that in measuring the thickness of a metal strip in rolling mills, see Fig. 1.2(a), where X-ray gauge measurements are taken at a distance, say d, from the roll gap and have access to measurements only τ = d/v time units after the rolls, where v is the velocity of the exit strip;
actuation delays, like that in the conveyor belt in Fig. 1.2(b), where a material through which some process is affected can reach the process only τ = l/v time units after being injected into the system, where l is the length of the conveyor pass and v is its velocity;
communication delays, which are more and more common in light of the trend to distribute information
acquisition and processing, with the use of communication networks to exchange local information
between various components of control systems;
computational delays, which are inevitable if controllers are implemented on digital computers;
et cetera. From the plant modeling viewpoint, such delays can be viewed as input and / or output delays,
that is delays connected in series with a controlled plant. A good collection of examples of systems with
input / output delays arising in various, mostly process control, applications can be found in [51].
Let a plant P be LTI and input and output delays be uniform, say all input channels are delayed by the same τ_u ≥ 0 and all output channels are delayed by the same τ_y ≥ 0. This yields D̄_{τ_y} P D̄_{τ_u} as the new plant. By the very time invariance, D̄_{τ_y} P D̄_{τ_u} = P D̄_{τ_y+τ_u}, meaning that without loss of generality we may regard such systems as input-delay systems P D̄_τ with the transfer function

    P_τ(s) = P(s) e^{−τs},                                                (1.10)

where τ = τ_y + τ_u. Systems of form (1.10) are known as dead-time systems.


Although P_τ is infinite dimensional, its properties are relatively intuitive. The impulse response of P_τ is p_τ(t) = p(t − τ), by definition, with support in [τ, ∞). The addition of the delay element does not alter (in)stability properties of the delay-free P. As e^{−τs} is an H_∞ function, it does not add any instability. In fact, it does not introduce additional poles, because it is entire. Moreover, e^{−τs} ≠ 0 for all s ∈ C, so it cannot cancel poles of P(s). To gain insight into the structure of the state space of P_τ, consider
[Fig. 1.3 here: (a) Bode plot, (b) Nyquist plot, (c) Nichols chart, comparing P(jω) and P(jω)e^{−jωτ}]
Fig. 1.3: Frequency response plots of P(s) = √2/(2s + 1) (dashed) and P_τ(s) = P(s)e^{−τs} (solid)

its discrete-time counterpart, whose transfer function is P(z) z^{−τ}. This is a finite-dimensional system, provided of course P is finite dimensional itself, and its state-space realization can be derived from those of P(z) = D + C(zI − A)^{−1}B and the discrete delay element in (1.2) using (B.22) on p. 155:

    P(z) z^{−τ} = [ A | B ; C | D ] × (realization (1.2)) = [ A    B    0    ⋯   0    | 0
                                                              0    0    I_m  ⋯   0    | 0
                                                              ⋮    ⋮    ⋮    ⋱   ⋮    | ⋮
                                                              0    0    0    ⋯   I_m  | 0
                                                              0    0    0    ⋯   0    | I_m
                                                              --------------------------
                                                              C    D    0    ⋯   0    | 0 ].      (1.11)

The state vector of this system is the union of the states of its components and thus includes both the state of P and the history of the input signal over the last τ steps, cf. (1.3). Following this logic, the state of the continuous-time input-delay system P_τ, with the transfer function as in (1.10), at a time instance t is (x_P(t), ū_t) ∈ R^n × {[−τ, 0] → R^m}, i.e. it comprises both the state x_P of the delay-free P and the history of the input signal over the time interval [t − τ, t]. The space of all such states is infinite dimensional.
In the SISO case the frequency response of P_τ can be easily derived from that of P. It follows from (1.9) that

    |P_τ(jω)| = |P(jω)|    and    arg P_τ(jω) = arg P(jω) − τω.            (1.12)

In other words, the addition of the delay element does not alter the magnitude and adds extra phase lag, proportional to the frequency. These properties facilitate the construction of frequency-response plots of input-delay systems from those of their delay-free versions, see Fig. 1.3. Specifically, the Bode magnitude plot remains unchanged and its phase part shifts downward by τω as shown in Fig. 1.3(a). Each point of the Nyquist plot of P(jω) rotates clockwise, with the rotation angle increasing with the frequency. This normally results in spiral curves, like that in Fig. 1.3(b). Each point of the Nichols plot shifts leftward, see Fig. 1.3(c), by a distance increasing with the frequency.
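The construction of frequency-response plots described above is easy to mechanize. The sketch below (illustrative only; the first-order plant and τ = 1 are assumptions matching the reading of Fig. 1.3) computes the Bode data of P_τ from those of the delay-free P via (1.12), together with the rotated Nyquist points.

```python
import numpy as np

tau = 1.0
w = np.logspace(-1, 1, 400)
P = np.sqrt(2) / (2j * w + 1)                  # delay-free first-order plant (assumed)

mag = 20 * np.log10(np.abs(P))                 # unchanged by the delay element
phase_dt = np.unwrap(np.angle(P)) - tau * w    # extra lag -tau*omega, cf. (1.12)
nyquist_dt = P * np.exp(-1j * tau * w)         # Nyquist points rotated clockwise by tau*omega

print(mag[:3], np.degrees(phase_dt[:3]), nyquist_dt[0])
```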
Remark 1.3 (multiple delays). It may happen that different input and / or output channels of P have different delays. For example, if a thickness profile of the strip in the rolling mill system in Fig. 1.2(a) is controlled, then several points at different distances from the edge of the strip should be measured. Such sensors are normally located at different distances from the roll gap as well, causing different measurement delays. General multiple input and output delays can be described in the block-diagonal form diag{D̄_{τ_{u,i}}} and diag{D̄_{τ_{y,i}}}, respectively, for some τ_{u,i} ≥ 0 and τ_{y,i} ≥ 0 and all relevant channel indices i. Such delay elements are still stable and do not alter stability properties and pole locations of the plant. At the same time, diagonal delays no longer commute with the plant, unless it is block-diagonal itself. This nontrivially complicates the analysis of such systems. O

Remark 1.4 (distributed delays). A yet more general description of the interconnection of delays and other dynamics is the distributed delay. An example of such a delay element is the system u ↦ y acting as

    y(t) = ∫_0^τ α(s) u(t − s) ds = ∫_{t−τ}^{t} α(t − s) u(s) ds           (1.13)

for some (generalized) function α : [0, τ] → R^{q×m}. This definition appears natural, as the integral can be seen as a weighted sum of delays in the range [0, τ] and standard lumped delays can be produced by Dirac delta components of α. However, this definition is also somewhat confusing: by this logic any causal convolution as in (B.3) is a distributed-delay system. Think of the case where τ → ∞ and α(t) = e^{−at} 1(t) for some a > 0. This yields an ordinary first-order lag, whose treatment as a distributed-delay system would only complicate matters. Therefore, the term "distributed delay" is barely used throughout this text. When a system of form (1.13) with a finite τ arises, it is referred to as an FIR (finite impulse response) system. This is because (1.13) is a convolution representation of an LTI system whose impulse response has support over a finite time interval [0, τ]. O

Remark 1.5 (varying delays). It may be safe to claim that constant lags are not widespread in applications. For example, if the strip velocity in the rolling mill system in Fig. 1.2(a) / the belt velocity in the conveyor actuator in Fig. 1.2(b) varies, then the corresponding measurement / actuation delay varies with time. In many cases such variations are small, so a constant-delay assumption is adequate. Still, there are applications, like networked control, where delay variations cannot be neglected. Properties of systems with time-varying delays might be less intuitive than those with constant delays. As an illustration, note that

    ∫_{R_+} p(t − s) u(s − τ(s)) ds ≠ ∫_{R_+} p(t − τ(t) − s) u(s) ds

in general. Hence, the effect of an input delay τ(t) is not equivalent to that of the same output delay and vice versa. In fact, it might even happen that there is no equivalent output delay for a given non-constant input delay. Such issues render the analysis of systems with varying delays substantially more involved than that of systems with constant delays. O

Delays as a compact modeling tool


In some situations input delays are used as a convenient modeling tool to represent high-order dynam-
ics in a concise manner, with fewer parameters. Typical examples, omnipresent in process control, are
first-order-plus-time-delay (FOPTD) and second-order-plus-time-delay (SOPTD) models, which com-
prise first- or second-order dynamics connected in series with a delay element. Such models are suffi-
ciently rich to reflect complex dynamical phenomena with monotonic responses, while having only three or
four parameters (the static gain, time constants, and the delay) to identify.
To provide a flavor of this approach, consider a plant P_n with the transfer function of the form

    P_n(s) = 1/(s + 1)^n

for a large enough n ∈ N. This kind of model can describe n identical tanks, modeled as "flow ↦ level" systems, connected in series; a queue of n vehicles, modeled as integrators ("velocity ↦ position") and whose control signals are proportional to the position mismatch between the current and the next vehicle; et cetera. The frequency response of such a system has monotonically decreasing gain and phase, with a large phase lag at high frequencies. It can then be beneficial to approximate these dynamics by lower-order ones connected in series with a delay element to account for the high-frequency phase lag. For example, for n = 5 and n = 8 the SOPTD approximations

    P_{5,τ}(s) = e^{−1.7236s}/(1.6875s + 1)²    and    P_{8,τ}(s) = e^{−3.8451s}/(2.1626s + 1)²
[Fig. 1.4 here: (a) n = 5, (b) n = 8; step responses y(t) of P_n(s) and of its SOPTD approximation P_{n,τ}(s)]
Fig. 1.4: Step responses of P_n and its second-order-plus-time-delay (SOPTD) approximation P_{n,τ}

fit the step responses of the plant reasonably well, see Fig. 1.4.
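Because both step responses are available in closed form, the quality of such SOPTD fits can be checked directly. The following sketch (illustrative code, not part of the original notes; the step response of 1/(s+1)^n is the Gamma(n, 1) distribution function) compares P_5 and P_8 with the approximations quoted above.

```python
import numpy as np
from math import factorial

def step_Pn(t, n):
    """Step response of P_n(s) = 1/(s+1)^n."""
    return 1 - np.exp(-t) * sum(t**k / factorial(k) for k in range(n))

def step_soptd(t, T, theta):
    """Step response of e^{-theta s} / (T s + 1)^2."""
    td = np.clip(t - theta, 0, None)
    return np.where(t >= theta, 1 - np.exp(-td / T) * (1 + td / T), 0.0)

t = np.linspace(0, 20, 500)
err5 = np.max(np.abs(step_Pn(t, 5) - step_soptd(t, 1.6875, 1.7236)))
err8 = np.max(np.abs(step_Pn(t, 8) - step_soptd(t, 2.1626, 3.8451)))
print(err5, err8)   # maximum deviations stay small, consistent with Fig. 1.4
```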

1.2.2 Internal delays


In some systems delays arise not as a result of control path latencies, but rather in connection with internal
interactions. To illustrate this phenomenon, consider a process described by the one-dimensional wave
equation
    ∂²w(x, t)/∂t² = c² ∂²w(x, t)/∂x²    for 0 < x < L and t ≥ 0,          (1.14)
where w is a physical variable of interest evolving both in time t and in space x , c > 0 is the speed of
wave propagation in the medium, and L > 0 is the medium length. This kind of equations can describe a
number of processes propagating in one-dimensional media, like acoustic waves in a duct, vibrations of a
string, torsion of a rod, electrical transmission lines, et cetera. They vary in the nature of their excitation
and interaction with the surrounding environment, which may be lumped (i.e. via boundary conditions) or
distributed. Throughout this section we assume the former kind, which is simpler, of a type motivated by
acoustic waves in a cylindrical duct, see [8] and the references therein for details. In this case ∂w/∂t and ∂w/∂x can be thought of as the velocity and (scaled) pressure of the air in a duct.
Assume that the system is excited only at x = 0 via the boundary condition

    ∂w(0, t)/∂t = u(t)                                                     (1.15)

for some exogenous signal u(t). Assume also that the other end, that at x = L, is passively connected to the environment and the interaction of waves in the medium with its surrounding at x = L is characterized as

    −c Z_m ∂w(x, t)/∂x |_{x=L} = Z_L ∂w(L, t)/∂t                            (1.16)

for some end impedance operator Z_L, where Z_m > 0 is the impedance of the free propagation in the medium. The zero boundary impedance case Z_L = 0 corresponds to the reflected and inverted wave. The infinite impedance Z_L = ∞ implies that the end is sealed and waves reflect without inversion. If Z_L = Z_m, then we effectively have a semi-infinite duct, with waves totally transmitted. The end impedance need not be constant. In more realistic models Z_L is a dynamic system, frequently LTI, whose transfer function is positive real, i.e. such that Re Z_L(s) ≥ 0 for all s ∈ C̄_0. It is not uncommon to have Z_L(0) = 0 and Z_L(∞) = Z_m.
The relation between u(t) and w(x, t) can be derived by taking the Laplace transform of (1.14) with respect to t, which results in the ordinary differential equation c² ∂²_x W(x, s) = s² W(x, s) with the boundary conditions s W(0, s) = U(s) and s Z_L(s) W(L, s) + c Z_m ∂_x W(L, s) = 0, where ∂_x f := ∂f/∂x and ∂²_x f := ∂²f/∂x². The solution to this ODE is

    [ W(x, s) ; ∂_x W(x, s) ] = exp( [ 0  1 ; s²/c²  0 ] x ) [ W(0, s) ; ∂_x W(0, s) ]
                              = [ cosh(sx/c)/s   c sinh(sx/c)/s ; sinh(sx/c)/c   cosh(sx/c) ] [ U(s) ; ∂_x W(0, s) ],
[Fig. 1.5 here: Bode magnitude and phase plots exhibiting numerous resonance peaks]
(a) Z_L(s) = s/(s + 1) and R(s) = −1/(2s + 1)        (b) Z_L(s) = 0 and R(s) = −1
Fig. 1.5: Bode plots of G_0(s) from (1.19) for Z_m = 1 and τ = 5

where ∂_x f(0, t) is meant for ∂_x f(x, t)|_{x=0}. The second boundary condition reads then

    [ s Z_L(s)   c Z_m ] [ cosh(sL/c)/s   c sinh(sL/c)/s ; sinh(sL/c)/c   cosh(sL/c) ] [ U(s) ; ∂_x W(0, s) ] = 0,

which is solved by c ∂_x W(0, s) = V_0(s) U(s), where

    V_0(s) := − (1 + R(s) e^{−2(L/c)s}) / (1 − R(s) e^{−2(L/c)s})    for    R(s) := (Z_L(s) − Z_m)/(Z_L(s) + Z_m).      (1.17)

The parameter R(s) is known as the reflectance of the system at its end. If Z_L(s) is positive real, then we have that |R(s)| ≤ 1 for all s ∈ C̄_0 and the equality holds iff Re Z_L(s) = 0 (think of it as Tustin's transform of Z_L/Z_m). This is an important property of the considered system. Thus, we have that

    [ s W(x, s) ; c ∂_x W(x, s) ] = [ cosh(sx/c)   sinh(sx/c) ; sinh(sx/c)   cosh(sx/c) ] [ 1 ; V_0(s) ] U(s).        (1.18)

If we are interested in the effect of u(t) on p_x(t) = −c Z_m ∂w(x, t)/∂x (think of it as the pressure) at the very point x = 0 of its application, then the transfer function of the system G_0 : u ↦ p_0 is

    G_0(s) = −V_0(s) Z_m = (1 + R(s) e^{−τs}) / (1 − R(s) e^{−τs}) · Z_m,        (1.19)

where τ := 2L/c > 0 is the time that it takes a wave to travel to the end of the medium and back. This transfer function includes a delay element as its internal part, which renders G_0(s) nontrivially more tangled than its delay-free part R. It might not be easy even to derive a closed-form impulse response of this system. This is possible in some special cases; for example, for R(s) = −1 the impulse response is the impulse train g_0(t) = δ(t) − 2δ(t − τ) + 2δ(t − 2τ) − 2δ(t − 3τ) + ⋯, cf. (B.4). Also, the Bode plots of the frequency response G_0(jω), shown in Fig. 1.5 for two different simple R(s), exhibit a rich behavior, with numerous resonances, even though only a few parameters are required to model this system. Moreover, not for every stable R is this G_0 stable. For example, the system with R(s) = −1, whose frequency response is depicted in Fig. 1.5(b), is unstable (stability issues in delay systems are discussed in Chapter 2).
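The resonant behavior seen in Fig. 1.5 is easy to reproduce numerically from (1.19); the short sketch below (illustrative, with Z_m = 1, τ = 5, and the reflectance of Fig. 1.5(a) taken as assumptions) evaluates G_0(jω) on a frequency grid.

```python
import numpy as np

Zm, tau = 1.0, 5.0
w = np.logspace(-1, 1, 2000)
s = 1j * w
R = -1 / (2 * s + 1)                                    # reflectance of Fig. 1.5(a)
G0 = Zm * (1 + R * np.exp(-tau * s)) / (1 - R * np.exp(-tau * s))   # (1.19)
print(20 * np.log10(np.abs(G0)).max())                  # resonance peaks well above 0 dB
```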
Another complication is the location of the poles of G_0(s). They are among the roots of the function

    χ(s) = M_R(s) − N_R(s) e^{−τs},                                        (1.20)

where N_R(s) and M_R(s) are the numerator and denominator polynomials of R(s), assuming the latter is rational. This χ(s) is not a polynomial in s, as it contains the transcendental term e^{−τs}. Such functions
[Fig. 1.6 here: two equivalent block diagrams of G_x built from R, Z_m, and the delay elements D̄_{τ_1}, D̄_{τ_2}; panels (a) and (b)]
Fig. 1.6: Block diagrams of G_x, whose transfer function is given by (1.21)

are known as quasi-polynomials. In order to provide a flavor of the difficulties associated with analyzing its roots, consider two simple cases with reflectances from Fig. 1.5. For R(s) = −1 we have χ(s) = 1 + e^{−τs} and every s_i = jπ(2i + 1)/τ, i ∈ Z, as its root. In other words, this χ(s) has infinitely many roots. This is typical of quasi-polynomials. Not quite typical is the possibility to have these roots analytically. In fact, this possibility is essentially limited to the case when both N_R(s) and M_R(s) are constants. For example, if R(s) = −1/(2s + 1), as in Fig. 1.5(a), then (1.20) reads χ(s) = 2s + 1 + e^{−τs} and its roots cannot be expressed analytically. Nevertheless, some of its modal properties can be analyzed. For example, because the equality |2s + 1| = |e^{−τs}| must hold true at every root of this χ(s), we can conclude that there are no roots in the closed right-half plane C̄_0. This is because |2s + 1| > 1 at all points there except the origin, whereas |e^{−τs}| ≤ 1 for s ∈ C̄_0. A simple check yields then that χ(0) = 2 ≠ 0 in this case. More details about properties of quasi-polynomials are presented in Section 2.1.
Yet more complex dynamics connect the exogenous input u with the pressure p_x at internal points x ∈ (0, L). The transfer function of the system G_x : u ↦ p_x, which is derived from (1.18), is

    G_x(s) = (e^{−τ_1 s} + R(s) e^{−τ_2 s}) / (1 − R(s) e^{−(τ_1+τ_2)s}) · Z_m,        (1.21)

where τ_1 := x/c and τ_2 := (2L − x)/c are the times that it takes the original and the reflected wave to reach the given x. Two alternative block-diagram representations of this system are presented in Fig. 1.6. Note that if the reflectance is zero, R(s) = 0, implying that the end impedance matches that of the medium, then this system reduces to the scaled delay element D̄_{τ_1} Z_m.

1.2.3 General interconnections


In light of the proliferation of interconnection configurations of the delay element(s) with finite-dimensional dynamics, it may be convenient to have a unified representation of systems involving delays. A possible choice for such a representation is presented in Fig. 1.7, where D̄_τ is the m × m delay element and

    G = [ G_zw   G_zu ; G_yw   G_yu ]

is a finite-dimensional, i.e. delay-free, system. This is an upper linear-fractional transformation, denoted as F_u(G, D̄_τ). It defines the system (plant)

    P = F_u(G, D̄_τ) = G_yu + G_yw D̄_τ (I − G_zw D̄_τ)^{−1} G_zu.

All single-delay systems studied so far can be viewed as particular cases of this setup for an appropriate choice of G. The dead-time system as in (1.10) corresponds to

    G = G_inpd := [ 0   I ; P   0 ]                                        (1.22)
[Fig. 1.7 here: the m × m delay element D̄_τ closed around the partitioned delay-free system G, with channels z, w (through the delay) and y, u]
Fig. 1.7: General single-delay interconnection for P : u ↦ y

and the wave equation with the transfer function (1.19) to

    G = G_wave,0 := [ R    Z_m ; 2R   Z_m ].                               (1.23)

Note that having a nilpotent "G_zw" part implies that the dependence of P on the delay is affine (or linear, if G_yu = 0 as well). The choice of G producing a given P is actually non-unique. Because D̄_τ M = M D̄_τ for every time-invariant M, systems can be moved from the "z" to the "w" channel without affecting the operator P. In other words, if there is a multiplier M such that G_zw = G̃_zw M and G_yw = G̃_yw M, then

    G = [ G̃_zw   G_zu ; G̃_yw   G_yu ] [ M  0 ; 0  I ]    ↔    [ M  0 ; 0  I ] [ G̃_zw   G_zu ; G̃_yw   G_yu ] =: G̃

and F_u(G, D̄_τ) = F_u(G̃, D̄_τ) regardless of M. This transformation can always be carried out for a square and nonsingular M. In some situations it is also possible to use non-invertible and even non-square multipliers, which might help in reducing problem dimensions, as is seen in the example below.
Example 1.1. Let 2 3 3 2
0 1
0 1 1 0 0 1 1
60 17
0 1 1 17 60 0 1
6 7  7 6
6 7 1 1 0 7 6
G.s/ D 6 1 0
07 0 0 07D61
; 0 0
6 7 0 0 1 7 6
40 1
05 0 0 05 40 1 0
1 0
0 0 0 0 1 0 0
 
which interacts with a 2 × 2 delay element D̄_τ. Taking M = [1 1], the system can be equivalently generated by
0 1 1 0 2 3
6 7 0 1 1 0
 6 0 0 1 1 7
Q 1 1 0 6 7 60 0 1 17
G.s/ D 61 0 0 07D6 7
0 0 1 6 7 41 1 0 05
40 1 0 05
1 0 0 0
1 0 0 0
interacting with a scalar delay, which is a more economical description. O
Remark 1.6 (multiple delays). Systems with multiple delays can also be presented in the form of Fig. 1.7. It is only required to replace the single-delay operator D̄_τ with its block-diagonal counterpart diag{D̄_{τ_i}}. For instance, the wave equation with the transfer function (1.21) can be expressed in this form by removing all three delay blocks in the block diagram of Fig. 1.6(b) and connecting the inputs (w_1, w_2, w_3, u) with the outputs (z_1, z_2, z_3, y) for y = p_x,

    G = G_wave,x := [ 0   1   0   Z_m
                      R   0   0   0
                      0   1   0   Z_m
                      1   0   R   0   ]                                    (1.24)

and taking diag{D̄_{τ_1}, D̄_{τ_2}} as its delay element, where D̄_{τ_1} is scalar and D̄_{τ_2} is 2 × 2. O
The separation of delay-free and delayed parts is conceptually appealing. It facilitates manipulating time-delay systems. For example, closing a feedback loop between y and u of the form

    u = K(y + v)

for some delay-free "controller" K and a "reference signal" v results in the same configuration, just now with a different, yet still delay-free, "G" part and v instead of u. For example, for a dead-time system with G as in (1.22) we end up with

    G = G_PK := [ KP   K ; P   0 ],                                        (1.25)

which can be verified by simple signal tracing (z = K(y + v) = KP w + Kv). This G_PK has a nonzero "G_zw" part, meaning that it is no longer a system with input delay. The separation in Fig. 1.7 is also convenient in numerical simulations (this is how delay systems are implemented in MATLAB, as a matter of fact), since standard tools can be used for the finite-dimensional part and all infinite-dimensionality is concentrated in the relatively simple pure delay element. For example, simulating F_u(G_wave,0, D̄_τ) may be easier than simulating the corresponding wave PDE.
The separation of the delay element from the delay-free dynamics also facilitates the use of convenient state-space machinery in analyzing time-delay systems. Bring in a minimal state-space realization of the transfer function

    G(s) = [ G_zw(s)   G_zu(s) ; G_yw(s)   G_yu(s) ] = [ A | B_w  B_u ; C_z | D_zw  D_zu ; C_y | D_yw  D_yu ].        (1.26)

This transfer function defines the following time-domain relation:

    G :  ẋ(t) = A x(t) + B_w w(t) + B_u u(t)
         z(t) = C_z x(t) + D_zw w(t) + D_zu u(t)
         y(t) = C_y x(t) + D_yw w(t) + D_yu u(t).

These equations should be complemented by the delayed relation w(t) = z(t − τ), resulting in

    P :  [ ẋ(t) ; z(t) ] = [ A   B_w ; C_z   D_zw ] [ x(t) ; z(t − τ) ] + [ B_u ; D_zu ] u(t)
         y(t) = [ C_y   D_yw ] [ x(t) ; z(t − τ) ] + D_yu u(t)                                    (1.27)

describing the mapping P : u ↦ y. Equations of this kind are known as delay-differential equations (DDEs), aka differential-difference equations, which are a subclass of functional-differential equations. The "x" part of its propagation is governed by a differential equation and the "z" part by a delay (difference) equation. Equation (1.27) is a general single-delay DDE. Its state at every time instance t ≥ 0 comprises the state x(t) of G and the whole time history of z over the interval [t − τ, t] (cf. the discussion in §1.1.2), i.e. it is (x(t), z̄_t) ∈ R^n × {[−τ, 0] → R^m}. The transfer function of P is readily derived using standard properties of the Laplace transform and assuming zero history in t ≤ 0:

    P(s) = D_yu + [ C_y   D_yw e^{−τs} ] [ sI − A   −B_w e^{−τs} ; −C_z   I − D_zw e^{−τs} ]^{−1} [ B_u ; D_zu ].     (1.28)

This is normally an irrational function of s. Namely, each of its elements is a quotient of quasi-polynomials of a more general form than (1.20) (see §2.1.1 for details).
Form (1.27) of delay-differential equations is not quite orthodox. Two of its special cases, which are normally studied in the literature, are presented below.
1. If D_zw = 0, then the second row of (1.27) reads z(t) = C_z x(t) + D_zu u(t). Substituting this expression into the first row, we end up with the following dynamics:

       ẋ(t) = A x(t) + A_τ x(t − τ) + B_u u(t) + B_τ u(t − τ)              (1.29)

   for A_τ := B_w C_z and B_τ := B_w D_zu. This is a conventional form of so-called (single-delay) retarded DDEs, in which the derivative term does not include delays.

2. If D_zw ≠ 0, but there exists a square matrix E such that B_w D_zw = −E B_w (take the lowest-rank E satisfying it), then pre-multiplying the second row of (1.27) by B_w and using the first row we have:

       B_w z(t) = B_w C_z x(t) − E B_w z(t − τ) + B_w D_zu u(t)
                = B_w C_z x(t) − E (ẋ(t) − A x(t) − B_u u(t)) + B_w D_zu u(t).

   Hence, we end up with the dynamical equation

       ẋ(t) + E ẋ(t − τ) = A x(t) + A_τ x(t − τ) + B_u u(t) + B_τ u(t − τ)            (1.30)

   for A_τ := B_w C_z + E A and B_τ := B_w D_zu + E B_u. This is a conventional form of so-called neutral DDEs (single-delay, again), in which the derivative term is delayed as well.
Throughout this text the general DDE (1.27) corresponding to the setup in Fig. 1.7 is preferred, partially
by pure aesthetic considerations.
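The structure of (1.27) also suggests a straightforward way to simulate single-delay DDEs: integrate the finite-dimensional part by any ODE scheme and feed z(t − τ) from a buffer storing the recent history of z. The sketch below (a minimal forward-Euler implementation; the scalar example and all numbers are illustrative assumptions, not from the original text) does exactly that.

```python
import numpy as np

def simulate_dde(A, Bw, Bu, Cz, Dzw, Dzu, Cy, Dyw, Dyu, tau, u, h, T):
    """Forward-Euler simulation of the single-delay DDE (1.27) with zero initial history."""
    steps = int(T / h)
    d = max(int(round(tau / h)), 1)           # delay in samples
    zbuf = np.zeros((d, Cz.shape[0]))         # circular buffer storing z over the last tau
    x = np.zeros(A.shape[0])
    y = np.zeros((steps, Cy.shape[0]))
    for k in range(steps):
        w = zbuf[k % d].copy()                # w(t) = z(t - tau)
        t = k * h
        z = Cz @ x + Dzw @ w + Dzu @ u(t)
        y[k] = Cy @ x + Dyw @ w + Dyu @ u(t)
        x = x + h * (A @ x + Bw @ w + Bu @ u(t))
        zbuf[k % d] = z                       # this slot is read again d steps from now
    return y

# Illustrative retarded example: xdot(t) = -x(t) - 0.5 x(t - 1) + u(t), y = x,
# i.e. (1.27) with A = -1, Bw = -0.5, Cz = 1, Dzw = Dzu = 0 (so z = x), Bu = Cy = 1.
I = np.eye(1)
y = simulate_dde(-I, -0.5 * I, I, I, 0 * I, 0 * I, I, 0 * I, 0 * I,
                 tau=1.0, u=lambda t: np.ones(1), h=0.001, T=10.0)
print(y[-1])      # approaches the static gain 1/(1 + 0.5) ~ 0.667
```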

1.3 Finite-dimensional approximations of the delay element


The continuous-time delay element (1.5) is infinite dimensional and so are its interconnections. To avoid
associated complications, one may consider approximating the delay element by a finite-dimensional
system, so that standard methods can be used. Such approximations are addressed in this section.
Remark 1.7 (to approximate, or not to approximate). Before discussing approximation techniques, a brief disclaimer is in order. One of the central themes of these notes is the exploitation of the structure of the delay element, especially in various control design problems. Approximating delays by finite-dimensional systems might damage this structure, rendering the problem at hand less transparent. Approximations are thus suggested to be used as merely a convenience, e.g. a tool for performing quick initial screening or simulations. As such, their simplicity and transparency are preferable to accuracy to some extent. O
First, consider the prospects of approximating the pure delay element D̄_τ by a stable finite-dimensional LTI system, say R. We say that R approximates D̄_τ if the approximation error ε_R := ‖D̄_τ − R‖ is "small," where ‖·‖ stands for a system norm of choice. Consider the H_∞ norm as the measure of the approximation accuracy, in which case

    ε_R = ‖D̄_τ − R‖_∞ = sup_{ω∈R} |D̄_τ(jω) − R(jω)| = sup_{ω∈R} |e^{−jωτ} − R(jω)|.

Because R(s) is rational, the argument of its frequency response is bounded; it approaches some finite value as ω grows. At the same time, it follows from (1.9) that the phase lag of D̄_τ(jω) is unbounded. Hence, there is an infinite increasing sequence of frequencies, say {ω_i}_{i∈N}, such that

    arg R(jω_i) − arg D̄_τ(jω_i) = π + 2πk_i    ⟺    e^{j(arg R(jω_i) − arg D̄_τ(jω_i))} = −1

for some k_i ∈ Z. At those frequencies we have that

    |D̄_τ(jω_i) − R(jω_i)| = |D̄_τ(jω_i)| + |R(jω_i)| = 1 + |R(jω_i)|.
Therefore,

    ε_R ≥ 1 + sup_{i∈N} |R(jω_i)| ≥ 1 = ‖D̄_τ − 0‖_∞.

The lower bound ε_R = 1 above is attained only if |R(jω_i)| = 0 for all i ∈ N. Because R(s) is rational, the only possible choice here is R = 0. In other words, the best finite-dimensional approximation of D̄_τ is the zero system. This optimal approximation is useless and effectively implies that any attempt to approximate the pure delay element in the H_∞ metric² is futile.
However, we might never need to approximate the delay element over the whole frequency range. In most engineeringly motivated control problems only a finite frequency band is of interest, just because realistic processes have finite bandwidths. As the phase lag of D̄_τ over any finite frequency range is finite, the approximation problem in this setting does make sense. A possible approach to that end is to approximate F D̄_τ for a stable low-pass filter F, whose bandwidth defines the frequency range of interest.
There are various approaches to approximate F D̄_τ by finite-dimensional systems, see [56, Sec. 6.3] and the references therein for an overview. Similarly to model order reduction methods for finite-dimensional systems, delay approximation methods can be roughly divided into those oriented on singular value decompositions and those based on power series expansions of the delay element in the Laplace domain. The first group normally has built-in stability and performance guarantees and is thus more accurate. An example is the balanced truncation procedure, popular for finite-dimensional systems and based on calculating Hankel singular values of the original system and corresponding Schmidt pairs (singular vectors). This approach does extend to systems of the form F D̄_τ, with calculable Hankel singular values and Schmidt pairs. Yet the involved calculations entail transcendental equations and are not quite easy to use, even in the simplest case of a first-order F(s). For that reason, results based on singular values are far less popular than those based on power series expansions. The idea there is to truncate a power expansion of e^{−τs}, or a function involving it, up to some term and under certain additional (interpolation) constraints. Such methods are frequently rather easy to implement. On the downside, accuracy and even stability are often not their natural by-product. For instance, it might appear natural to approximate the transfer function of D̄_τ via the relation

    e^{−τs} = e^{−τs/2}/e^{τs/2} ≈ Q_n(−τs/2)/Q_n(τs/2),

where the polynomial Q_n(s) := 1 + s + s²/2! + ⋯ + sⁿ/n! is the nth-order partial sum of the Maclaurin series of e^s. However, this Q_n(s) is Hurwitz only for n ≤ 4, which renders this approach quite limited.
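The failure of the naive Maclaurin truncation can be checked directly by computing the roots of Q_n; the following few lines (an illustrative sketch) test the Hurwitz property for small n.

```python
import numpy as np
from math import factorial

# Roots of the Maclaurin partial sums Q_n(s) = 1 + s + ... + s^n/n!
for n in range(1, 8):
    coeffs = [1 / factorial(k) for k in range(n, -1, -1)]   # descending powers for np.roots
    hurwitz = np.all(np.roots(coeffs).real < 0)
    print(n, hurwitz)    # True for n = 1..4 only, in line with the statement above
```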

1.3.1 Padé approximant of e^{−τs}

Perhaps the best known truncation-based approach is to approximate the delay element by its Padé approximant of given degrees. Consider first its general logic. Let φ(s) be a complex function, analytic in some neighborhood of the origin. A rational function

    R_{m,n}(s) = P_m(s)/Q_n(s) = (p_m s^m + ⋯ + p_1 s + p_0)/(q_n s^n + ⋯ + q_1 s + 1)

is said to be the [m, n]-Padé approximant of φ(s) if the first n + m + 1 terms of the Maclaurin series of φ(s) and R_{m,n}(s) coincide. In other words, the [m, n]-Padé approximant matches φ(s) and its n + m derivatives at s = 0, i.e. φ^{(i)}(0) = R_{m,n}^{(i)}(0) for all i ∈ Z_{0..n+m}. An alternative condition for the Padé approximant, which is useful for computing the required coefficients q_i and p_i, is that φ(s) − R_{m,n}(s) = O(s^{m+n+1}) or, equivalently,

    φ(s) Q_n(s) − P_m(s) = O(s^{m+n+1})    as s → 0.                        (1.31)

² The same conclusion is immediate for approximating D̄_τ in the A-norm, which is the L_1(R_+)-induced system norm.
14 C HAPTER 1. S YSTEMS WITH T IME D ELAYS

This Bézout-like identity is solvable iff .s/ and 1 are coprime, i.e. have no common roots, which is
obviously always true. Hence, the sought polynomials always exist and are unique. The m C n C 1 free
coefficients of Pm .s/ and Qn .s/ can be selected from (1.31) by the use of the Maclaurin series expansion
of .s/ and zeroing the coefficients of all powers of s from 0 to m C n. The latter procedure can, in turn,
be expressed as the following set of n C m C 1 linear equations (here we assume that n  m, i.e. that
Rm;n .s/ is proper):
2 3
1
2 36
  0 1 0  0 6 q1 7
0 0 0 0 7
6 ::: 7
6 7
6 1
6 : 0  0 0  0 0 1  0 7 7
6 : :: :: :: :: :: :: :: : : : 7 6 7
6 : : : : : : : : : :: 7 6
6 q m
7
7
6 76
6 m m 1    0 0    0 0 0    1 7 6 qmC1 7 7
6 76
6 mC1          0 0 0    0 7 6 :: 7
m 1 0
: 7 D 0; (1.32)
6 7 7
6 :: :: :: :: : : :: :: :: :: 7 6
6 7
6 : : : : : : : : : 7 q 7
6 766 n 7
6 n
6 n 1    n m n m 1    0 0 0    0 7 76 p 0 7
6 : : : : : : : : 76 7
4 : : :
: :
: :
: :
: :
: :
: : 6 p
: 56 1 7
7
6 :: 7
nCm nCm 1    n n 1    m 0 0    0 4 : 5
„ ƒ‚ …
permutated Sylvester matrix pm

where i D  .i/ .0/= i Š for all i 2 ZC are the coefficients of the Maclaurin expansion of .s/. Thus, all we
need is to solve n linear equations (the last n rows above) in qi and then calculate the coefficients of Pm .s/
from the first m C 1 rows of (1.32). As a matter of fact, those n equations for the coefficients of Qn .s/ are
of the form M q D b for an n  n Toeplitz matrix M . This structure can be exploited to solve the equation
more efficiently.
Our interest is the Œn; n-Padé approximant of e s . So consider first
1 1
X 1 i X
.s/ D es D s µ i s i :

iD0 iD0

The Œn; n-Padé approximant of the transfer function e s of the delay element D x  is then obtained by the
substitution s !  s . Also, although any m  n can be considered, the choice m D n appears natural
and is by far most common. Still, all arguments below extend to m < n seamlessly. For m D n equality
(1.32) reads 2 3
2 3 1
0 0  0 1 0  0 6q 7
6 1 7
6 1
6 0    0 0 1  0 7 76 :: 7
6 :: :: :: :: :: :: : : :: 7 6 : 7
6 : : : : : : : : 76 6 7
6 7 6 qn 77
6 n n 1    0 0 0    1 7 6 D 0: (1.33)
6 7 6 p0 77
6 nC1 n    1 0 0    0 7 6
6 7 7
6 :: :: :: :: :: :: :: 7 6 p1 7
4 : : : : : : 6
: 5 6 :: 7 7
4 : 5
2n 2n 1    n 0 0    0
pn
Define
2 3
2 3 2 3 1
1 .t / nC1 .t / n .t /  1 .t /
:: 7 6 q1 ti
6 7
.t / D 4 ::: 5 ´ 4 :: :: ::
6 7 6 7
: : : : 56 : 7; where i .t / ´
4 :: 5 iŠ
n .t / 2n .t / 2n 1 .t /    n .t /
qn
1.3. F INITE - DIMENSIONAL APPROXIMATIONS OF THE DELAY ELEMENT 15

(so that i D i .1/). Clearly, the last n rows in the left-hand side of (1.33) are exactly .1/. Hence,
the choice of Qn .s/ in the Œn; n-Padé approximant of es is equivalent to the choice of n scalars qi such
that .1/ D 0. Two more facts, which follow directly from the definition of i .t /, are important. First,
.0/ D 0 for all qi . Second, i .t / D P iC1 .t / for all i 2 N, so that i .t / D n.n i/ .t /. Thus, we should look
for a
 polynomial function n .t / of order 2n such that n.i/ .t / D 0 at t D 0 and t D 1 for all i 2 Z0::n 1 .
All such polynomials can be described as ˛t n .t 1/n for some constant ˛ . Hence, we have the equality
n
t 2n X t 2n i
n .t / D C qi D ˛t n .t 1/n ; 8t 2 Œ0; 1: (1.34)
.2n/Š .2n i /Š
iD1

It follows then by the binomial theorem that ˛ D 1=.2n/Š and


 
n .2n i /Š .2n i /Š nŠ
qi D . 1/i D . 1/i
i .2n/Š .2n/Š .n i /Š i Š

for all i 2 Z1::n . Note that an alternative expression for qi can be obtained by differentiating (1.34) 2n i
times at t D 0, to have
qi D n.2n i/ .0/

It is only left to obtain the coefficients pi from the first nC1 rows of (1.33). The first row yields p0 D 1.
By repeating the arguments about the last n rows and taking into account that n.i/ .1/ D . 1/i n.i/ .0/, we
have for all i 2 Z1::n :
pi D n.2n i/ .1/ D . 1/2n i n.2n i/ .0/ D . 1/i qi :

This implies that Pn .s/ D Qn . s/ and the resulting Rn;n .s/ is all-pass, in the sense that jRn;n . j!/j D 1
for all ! 2 R.
Summarizing, the Œn; n-Padé approximant of the delay element D x  has the transfer function

n  
s Qn .  s/ X n .2n i /Š i
e  Rn;n . s/ D ; where Qn .s/ D s (1.35)
Qn . s/ i .2n/Š
iD0

s
(mind substituting s !  s to derive the approximant of e from that of es ). The five lowest-order
polynomials Qn .s/ are

1
Q1 .s/ D 1 C s;
2
1 1 2
Q2 .s/ D 1 C sC s ;
2 12
1 1 2 1 3
Q3 .s/ D 1 C sC s C s ;
2 10 120
1 3 2 1 3 1 4
Q4 .s/ D 1 C sC s C s C s ;
2 28 84 1680
1 1 2 1 3 1 4 1
Q5 .s/ D 1 C sC s C s C s C s5 :
2 9 72 1008 30240
Two properties of the Padé approximation method are given below without proofs. The first one is
x :
about the stability of Rn;n , which should be an essential property of any approximation of D

Proposition 1.1. The polynomials Qn .s/ defined in (1.35) are Hurwitz for all n 2 N.
16 C HAPTER 1. S YSTEMS WITH T IME D ELAYS

40
10 20
0
0
-20
-10 -40
90 90

0 0

-90 -90
-1.4 80

0
-80
-80

-160 -160
-1 0 1 -1 0 1
10 10 10 10 10 10

(a) R.s/ D 1=.2s C 1/ (b) R.s/ D 1

Fig. 1.8: G0 .s/ from (1.19) for Zm D 1 and  D 5 with Œ4; 4- and Œ12; 12-Padé approximants of the delay

The second result provides a simple, yet tight at very low frequencies, upper bound on the approxi-
mation error in the frequency domain. To formulate it, we need the normalized Butterworth polynomials,
which are k -order Hurwitz polynomials Bk .s/ satisfying jBk . j!/j2 D 1 C ! 2k . For k D 2n C 1
Yn  n C 1 i  
2
B2nC1 .s/ D .s C 1/ s C 2 cos  sC1
iD1
2n C 1

and its roots are equidistant on the left unit semi-circle.

Proposition 1.2. The Padé approximant (1.35) satisfies je j! Rn;n . j!/j  2:02jH2nC1 . j!/j for all
! 2 R, where
.s=!n /2nC1
H2nC1 .s/ D
B2nC1 .s=!n /
is the high-pass Butterworth filter, whose cutoff frequency
 
.2n/Š.2n C 1/Š 1=.2nC1/
!n ´ 2
.nŠ/2

is such that f!n g D f2:8845; 4:2823; 5:7251; 7:1814; 8:6435; : : :g and limn!1 !n =n D 4=e  1:4715.

Although the bound of Proposition 1.2 is conservative, it does show that the Padé approximant is accurate
at low frequencies and the approximations accuracy increases with n. These conclusions are intuitive.
Indeed, the equality e s Rn;n .s/ D O.s 2nC1 /, which defines the method, effectively says that Rn;n .s/
interpolates e s and its 2n derivatives at the origin, i.e. at ! D 0. An immediate consequence of the result
of Proposition 1.2 is that k.Dx 1 Rn;n /F k1  2:02kH2nC1 F k1 for all F 2 H1 . If F is a low-pass filter,
then the lower its bandwidth is, the smaller is kH2nC1 F k1 . This is also intuitive.
To illustrate traits of Padé approximants, consider the system G0 driven by the wave equation (1.14),
whose transfer function is given by (1.19) on p. 8 and the frequency responses are presented in Fig. 1.5. If
the reflectance R has a finite bandwidth, i.e. if its transfer function R.s/ is strictly proper, then the delay
has a limited effect on the high-frequency dynamics of G0 . We can then expect that the use of a Padé
approximant of a sufficiently high degree will result in a good approximation of G0 . This is indeed the
case, as can be seen from the plots in Fig. 1.8(a) for the strictly proper R.s/ D 1=.2s C 1/, representing
the original system (thin pale line) and its Œ4; 4- and Œ12; 12-Padé approximants (thick lines). It is also
clear that the higher-order Padé results in a higher approximation accuracy. The situation is different if
the bandwidth of R.s/ is infinite. As no finite-dimensional approximation of the delay element succeeds
1.3. F INITE - DIMENSIONAL APPROXIMATIONS OF THE DELAY ELEMENT 17

over all frequencies, the high-frequency dynamics of G0 can no longer be captured by its approximation.
This is seen from the plots in Fig. 1.8(b), where the approximation error for the bi-proper R.s/ D 1 is
unbounded, no matter what degree of the Padé approximant is chosen.
18 C HAPTER 1. S YSTEMS WITH T IME D ELAYS
Chapter 2

Stability Analysis

is a vital property of every control system. So the first analytic chapter of these notes ad-
S TABILITY
dresses the stability analysis of the single-delay system P W u 7! y in Fig. 2.1 for a finite-dimensional
G , given in terms of its state-space realization (1.26), and the m  m delay element D x  defined by (1.5).
We assume throughout the chapter that there is no redundancy in the system, in the sense that (1.26) is
minimal and the i/o dimensions of D x  are irreducible (cf. Example 1.1 on p. 10). Both i/o stability (in
the L2 sense) and Lyapunov stability are studied below. In both these cases the analysis reduces to that
of autonomous versions of the system, much in parallel to what happens in the finite-dimensional case.
Details are more delicate and involved though.

2.1 Modal methods


We start with studying the i/o stability of the system in Fig. 2.1. We say that this system is stable if the
transfer function P .s/ of P W u 7! y , given by (1.28), belongs to H1 , which is the space of holomorphic
and bounded functions in C0 , see (B.17). It is well known that in the finite-dimensional case P 2 H1 iff
P .s/ is proper and its poles are in the open left-half plane C n Cx 0 . An appropriately defined properness
is also required in delay systems and it always holds for the system in Fig. 2.1 with a proper G.s/. In this
section we thus concentrate on “poles” of P .s/ or, more accurately, on its characteristic function.

2.1.1 Characteristic function of delay-differential equations


The notion of the characteristic function plays an important role in the analysis of dynamic systems. It
reflects the so-called free motion of the state of a system, i.e. the possible behavior of the state in the
absence of exogenous inputs. In the finite-dimensional case the situation is straightforward. If its free
motion is described by x.tP / D Ax.t / or, equivalently, .sI A/X.s/ D 0, then nontrivial solutions can
P
only exist at the roots s D i of det.sI A/ D 0. These solutions are then always of the form i ei t xi ,
where xi solve .i I A/xi D 0, i.e. either zero vectors or eigenvectors.

x
D
´ w
 
G´w G´u
Gyw Gyu
y u

Fig. 2.1: General single-delay interconnection

19
20 C HAPTER 2. S TABILITY A NALYSIS

Conceptually, the characteristic function for delay systems, like that in Fig. 2.1, is still defined via the
existence of a nontrivial solution to the free motion of its state. But now the state, that of of DDE (1.27),
is .x.t /; Ḿ t / 2 Rn  fŒ ; 0 ! Rm g, i.e. it is infinite dimensional. The derivation of the exact form of the
corresponding characteristic equation requires the use of more advanced techniques, which goes beyond
the scope of these notes (see e.g. [9, Thm. 2.4.6]). So we again make use of the discrete case to gain
insight into the issue, circumventing technical complications.
Consider the discrete-time version of the interconnection in Fig. 2.1 with the state equation of G of
the form xŒt C 1 D AxŒt  C Bw wŒt  C Bu uŒt  and with ´Œt  D C´ xŒt  C D´w wŒt  C D´u uŒt . Taking into
account the relation wŒt  D ´Œt  , the free motion equation for this system satisfies
    
xŒt C 1 A Bw xŒt 
D : (2.1)
´Œt  C´ D´w ´Œt  

This is not yet a standard (first-order) state equation. But following the logic of the developments in ÷1.1.1,
we can easily derive it as
2 3 2 3
A Bw 0    0 xŒt 
6 0
6 0 Im    0 7
7
6 ´Œt   7
6 7
6 :: :: :: : : : ::
: ::
7 6 7
x Œt C 1 D 6 : : : 7 x Œt ; where x Œt  ´ 6 : 7: (2.2)
6 7 6 7
4 0 0 0    Im 5 4 ´Œt 2 5
C´ D´w 0    0 ´Œt 1

The corresponding characteristic polynomial is obviously


2 3
´In A Bw 0 0  0 0
6
6 0 ´I m  I m 0  0 0 7
7
6
6 0 0 ´Im Im    0 0 7
7
 .´/ D det 6 :: :: :: :: :: :: 7:
6
6 : : : : : : 7
7
4 0 0 0 0    ´Im Im 5
C´ D´w 0 0  0 ´Im

This expression can be simplified via applying (A.5b) to the partition above with . 1/m  . 1/m
lower-right sub-block and using the equality
2 3 1 2 1 3
´I I    0 0 ´ I ´ 2 I    ´1  I ´  I
6 0 ´I    0 0 7 6 0 ´ 1 I    ´2  I ´1  I 7
6 7 6 7
6 :: :: :: :: 7 6 : :: :: :: 7
6 : : : : 7 D 6 :: : : : 7
6 7 6 7
4 0 0    ´I I 5 4 0 0    ´ 1I ´ 2I 5
0 0    0 ´I 0 0  0 ´ 1I

(assuming  blocks). Standard manipulations with determinants yield then the characteristic polynomial
   
´I A Bw  m ´I A Bw ´ 
 .´/ D det D´ det :
C´ ´ I D´w C´ I D´w ´ 

The second factor in the last equality above can be viewed as resulting from the ´-transform of equation
x .
(2.1) and the factor ´ m is merely the denominator of the transfer function of D
Returning to the continuous-time system, the free motion counterpart of (2.1) for it is
    
P /
x.t A Bw x.t /
D ;
´.t / C´ D´w ´.t  /
2.1. M ODAL METHODS 21

which is readily obtained from DDE (1.27). It is then not surprising that the characteristic function of that
DDE is  
sI A Bw e s
 .s/ ´ det (2.3)
C´ I D´w e s
(the “missing” term e m s has no zeros and can therefore be omitted). The matrix in the right-hand side
of (2.3) can be seen as a part of P .s/ in (1.28). It can be shown that this a function of the form
k
X
is
 .s/ D Qi .s/e (2.4)
iD0

for some k 2 N and finite polynomials Qi .s/ such that deg Q0 .s/  deg Qi .s/ for all i 2 Z1::k . As we
already know from the discussion on p. 8, this kind of functions is known as quasi-polynomials. Because
Q0 .s/ D det.sI A/, its roots are the eigenvalues of A. If the condition deg Q0 .s/ > deg Qi .s/ holds
for all i 2 Z1::k , then the quasi-polynomial (2.4) is termed retarded. Otherwise, i.e. if there is at least one
j 2 Z1::k such that deg Q0 .s/ D deg Qj .s/, it is called neutral. The same terminology is used with respect
to DDE (1.27)  .s/ itself. There are classes of delay systems, whose characteristic quasi-polynomials
have deg Q0 .s/ < deg Qj .s/ for at least one j . They are known as advanced and not studied in this text.
They never result from the system in Fig. 2.1 with a proper G.s/ and should not be expected in realistic
applications. If k D 1, then  .s/ in (2.4) is said to be a quasi-polynomial with a single delay, like that in
(1.20). If k > 1, then (2.4) is a quasi-polynomial with multiple commensurate delays, as all its delays are
integer multiples of one  > 0. It should be emphasized that a single-delay system, like that in Fig. 2.1, can
potentially have a multiple-delay characteristic function, see Example 2.3 below. Yet such a function is
always of the commensurate type. In multiple-delay systems, like that in Remark 1.6, characteristic quasi-
P
polynomials can be of the form Q0 .s/ C i Qi .s/e i s with at least one irrational i1 =i2 . Such multiple-
delay quasi-polynomials are said to have incommensurate delays and their properties are normally way
messier than those of their commensurate counterparts.
Example 2.1. Consider an input-delay system as in (1.10) for the plant P .s/ D D C .sI A/ 1 B . From
(1.22), the state-space realization 2 3
A B 0
Ginpd .s/ D 4 0 0 I 5 ;
C D 0
for which  
s
sI A Be
 .s/ D det D det.sI A/:
0 I
This is a standard polynomial, agreeing with the discussion in ÷1.2.1. O
Example 2.2. If the reflectance in (1.19) is R.s/ D D C C.sI A/ 1 B , then the state-space realization
of Gwave,0 in (1.23) is
2 3
A B 0
Gwave,0 .s/ D 4 C D Zm 5 :
2C 2D 1
Its characteristic function  
sI A B e s
 .s/ D det
C 1 D e s
The static R.s/ D 1 as in Fig. 1.5(b) results in the already familiar, from a discussion on p. 9,  .s/ D
1 C e s , which is a single-delay neutral quasi-polynomial. If R.s/ D 1=.2s C 1/, as in Fig. 1.5(a), then
  
s C 1=2 e s 1  1 s
 .s/ D det D sC C e ;
1=2 1 2 2
22 C HAPTER 2. S TABILITY A NALYSIS

which is a single-delay retarded quasi-polynomial. O


Example 2.3. Let 2 3
0 1 0 1 0
6 q00 q01 q10 1 17
6 7
6 7
G.s/ D 6 1 0 0 0 07
6 7
4 0 ˛ 0 0 05
1 0 0 0 0
for some ˛ 2 R. The corresponding characteristic function is
 .s/ D s 2 C q01 s C q00 C . ˛s C q10 C ˛q00 /e s
C ˛q10 e 2s
:
If ˛ ¤ 0, this is a multiple-delay commensurate quasi-polynomial. If ˛ D 0,  .s/ has only one delay. In
fact, in the latter case 2 3
2 3 0 1 0 1 0
1 0 6
q00 q01 q10 1 1 7
G.s/ D 4 0 0 5 6
4 1
7
0 0 0 05
0 1
1 0 0 0 0
1
and M D 0 can be moved through the 2  2 delay element and then Fu .G; D x  / D Fu .G;
Q Dx  / for
2 3 2 3
0 1 0 1 0 2 3 0 1 0 0
6 q00 q01 q10 1 1 7 1 0
7 4 0 0 5 D 6 q00 q01 q10 1 7
6 7
Q
G.s/ D64 1 0 0 0 0 5 4 1 0 0 05
0 1
1 0 0 0 0 1 0 0 0
x  , which is a simpler setup (cf. Example 1.1).
and the 1  1 delay element D O
Example 2.4. If D´w D 0, then the corresponding characteristic function is always retarded, cf. (1.29).
Yet D´w ¤ 0 does not necessarily lead to a neutral  .s/. To see this, consider
2 3
0 1 1 1
61 0 1 07
G.s/ D 64 1 0 0 0 5;
7

1 0 0 0
0 1
whose D´w D 0 0 ¤ 0 (although it is nilpotent), but the characteristic function  .s/ D s C e 2s is
still retarded. O
It happens that relations between roots of (2.3), which are the solutions to  .s/ D 0, and the stability
of the system in Fig. 2.1 are more delicate than those in the finite-dimensional case. Apart from difficulties
with finding them, it might happen that P 62 H1 even if all roots of (2.3) do lie in the open left-half plane.
Still, there is a strong link between roots of (2.3) and stability in all practically important situations. To
understand that, some mathematical preliminaries on asymptotic properties of roots of (2.3) are required.

2.1.2 Asymptotic root properties


The quasi-polynomial  .s/ in (2.4) is clearly an entire function of s , i.e. it is holomorphic in the whole
C. Hence, the set of roots of  .s/ is countable and there are no accumulation points [63, Thm. 10.18].
This, in turn, implies that the number of roots of (2.4) in every bounded region of C if finite. Thus, there
are sequences of roots with unbounded magnitudes.
The asymptotic behavior of those sequences is understood rather well. For commensurate quasi-
polynomials of the form (2.4), which are either neutral or retarded, roots for large jsj are always organized
in a finite number of chains of two possible kinds (follow by Rouché’s arguments).
2.1. M ODAL METHODS 23

Im s Im s

lnj˛1 j lnj˛2 j lnj˛3 j Re s Re s


  

(a) Neutral chains, ˛i 2 f1=4; 1; 2g (b) Asymptotic retarded chain, ˛1 D 13

Fig. 2.2: Asymptotic chains of roots of  .s/

1. Neutral chains are asymptotic, under sufficiently large jsj, to the solutions of the equations es D ˛i
for some ˛i 2 C n f0g. Those solutions are at the sequence fsik gk2Z with
lnj˛i j 2k C arg ˛i
Cjsik D :
 
All these sik are on a vertical line, either in the left-half plane (if j˛i j < 1), or in the right-half plane
(if j˛i j > 1), or on the imaginary axis (if j˛i j D 1), see Fig. 2.2(a).
2. Retarded chains are asymptotic, under sufficiently large jsj, to the solutions of the equations s es D ˛i
for some ˛i 2 C n f0g. Those solutions asymptotically approach [56, Lem. 6.1.2] the sequence
fsik gk2N with
ln.2k/ lnj ˛i j 2k C arg ˛i =2
sik D Cj
 
for sufficiently large k , as well as their complex conjugates. The real parts of these sik approach 1
irrespective of ˛i , see Fig. 2.2(b).
If (2.4) is retarded, i.e. if deg Q0 .s/ > deg Qi .s/ for all i 2 Z1::k , there there are only retarded chain(s).
Otherwise, there is at least one neutral chain and, possibly, also retarder chain(s). More details can be
found in [4, Ch. 12] or [56, Sec. 6.1], but digging into this issue is beyond the scope of these notes.
Remark 2.1 (incommensurate delays). The asymptotic roots behavior for quasi-polynomials with incom-
P
mensurate delays, i.e. of the form P .s/ C i Qi .s/e i s with at least one irrational ratio i1 =i2 , is slightly
different. While retarded chains, if exist, are the same as in the commensurate case, neutral modes do
not approach infinity in a regular way. Nonetheless, they are asymptotically confined to the vertical strip
jRe sj  ˇ for some finite ˇ > 0. O
Once the qualitative picture of roots tendencies of  .s/ for large jsj is clear, neutral chains can be
quantified. To this end, note that sI A is nonsingular whenever jsj is large enough (whenever jsj > .A/,
as a matter of fact). Hence, (2.3) reads, using (A.5a),
s
 .s/ D det.sI A/ det.I D´w e C´ .sI A/ 1 Bw e s
/
and only roots of the second factor are of interest. At every neutral chain we have that jsj ! 1, whereas
je s j is bounded. Hence, the term C´ .sI A/ 1 Bw e s vanishes and we end up with
s
det.I D´w e /D0 ” det.es I D´w / D 0:
Thus, all neutral chains are along the vertical lines Im s D lnji j= for distinct nonzero eigenvalues i of
D´w . There are no neutral chains iff D´w is nilpotent, which agrees with Example 2.4.
24 C HAPTER 2. S TABILITY A NALYSIS

2.1.3 Stability and roots of characteristic function


An important outcome of the asymptotic analysis above for retarded quasi-polynomials of the form (2.4)
is that only a finite number of their roots can be located in any right-half plane Cˇ . This suggests that the
effect of those roots on properties of the transfer function P .s/ in the right-hand plane C0 is essentially
the same as in the case of rational transfer functions.
The neutral case under .D´w / < 1 is similar from the stability point of view. Namely, in this situation
neutral chains converge only along vertical lines lying in the open left-half plane C n C x 0 and there could
be at most a finite number of roots of in Cln..D´w // , i.e. neutral asymptotic chains are separated from the
imaginary axis (because ln..D´w // < 0). This leads to the following important result:

Proposition 2.1. If .D´w / < 1, then P .s/ given by (1.28) belongs to H1 iff the characteristic function
x 0.
(2.3) has no roots in C

Proof (outline). Sufficiency follows by the observation, already made at the end of ÷2.1.3, that
  1
sI A Bw e s
˚.s/ ´
C´ I D´w e s

x 0 and jsj ! 1g whenever .D´w / < 1, see proofs in [5, Sec. 3]. Hence, ˚ 2 H1
is bounded in fs j s 2 C
x  in Fig. 2.1.
and so does P . Necessity assumes implicitly that there is no redundancy in D

The case of .D´w / > 1 is technically simple and the conclusion is also quite in line with our finite-
x 0 . Hence, P .s/
dimensional intuitions. Namely, there is clearly an infinite number of roots of  .s/ in C
is not holomorphic in C0 and P 62 H1 , so the system is unstable. We thus have the following result:

Proposition 2.2. If .D´w / > 1, then P 62 H1 .

The most delicate situation takes place if .D´w / D 1. This is the case where our finite-dimensional
insight might betray us, as demonstrated by the example below.

Example 2.5. Consider the system in Fig. 2.1 for  D 1 and


2 3
  1 1 1
s=.s C 1/ s=.s C 1/ 1
G.s/ D D4 1 1 15 H) P .s/ D s
1=.s C 1/ 1=.s C 1/ s C 1 C se
1 0 0

(this P .s/ is proper, i.e. bounded on C˛ for a sufficiently large ˛ ). Its characteristic function
 
sC1 e s
 .s/ D det D s C 1 C se s
1 1Ce s

is a single-delay neutral quasi-polynomial. Clearly,  .s/ D 0 iff s satisfies e s D 1 C 1=s . Assuming


that s D  C j! is a root of this  .s/, the magnitude equality je s j D j1 C 1=sj can be expressed as
ˇ ˇ ˇ  ˇ ˇ ˇ
ˇ 1 ˇˇ ˇˇ  ! ˇ ˇ  ˇ
e  D ˇˇ1 C D 1 C 2 2
j 2 2
ˇ  ˇ1 C
2 2
ˇ:
 C j! ˇ ˇ  C!  C! ˇ ˇ  C! ˇ

First, let  D 0. The first equality above reads then 1 D 1 C 1=j!j, which holds for none ! 2 R. Now,
let  > 0. In this case the inequality above leads to e  > 1, which is again impossible. Thus, P .s/ has
x 0 be a sequence of
no poles in the closed RHP. Nevertheless, this P 62 H1 . To see this, let fsi g 2 C n C
roots of  .s/ such that
si C 1 C si e si D 0; with lim jsi j D 1;
i!1
2.1. M ODAL METHODS 25

which always exists as we already know. Define then another sequence, fQsi g 2 C0 , where sQi D si and
thus limi!1 jQsi j D 1 as well. The values of P .s/ at each sQi are
1 1
P .Qsi / D D D 1 C si :
1 si si esi 1 si C si2 =.1 C si /

Thus, we have a sequence in C0 at which limi!1 jP .Qsi /j D 1. Thus, P .s/ is not bounded in C0 . O
x 0 is nevertheless unstable,
Thus, if .D´w / D 1, it might happen that a system having no poles in C
which has no counterparts for finite-dimensional systems. To muddle even more, the transfer function

1
;
.s C 1/.s C 1 C s e s /

which has the same poles as the transfer function in Example 2.5, plus an additional pole at s D 1,
does belong to H1 , see [57], and is thus stable. Moreover, there are systems with .D´w / D 1 that have
infinitely many roots in C0 .
It is possible to characterize all these situations, which is intricate and might not be quite important
from the engineering point of view. This is because of a simple observation that even if a system with
.D´w / D 1 is stable, this property is extremely fragile and thus impractical. Indeed, a very small pertur-
bation to D´w can render .D´w / > 1 and the system unstable, with an infinite number of poles in C0 .
For that reason, we regard delay systems with .D´w / D 1 as practically unstable, which simplifies the
analysis and does not entail any loss of engineering generality.
Remark 2.2 (regaining the link). The discussion above effectively says that the connection between stabil-
ity and roots of the characteristic function is conventional for the class of systems of interest, those with
.D´w / < 1. Still, we do not have this connection for a general D´w anymore. But it can be regained with
a slight modification of the stability area in the complex plane. Namely, the system is stable whenever
there is  > 0 such that all roots of  .s/ lie in C n C  . The price to pay for this modification is the loss of
a clear intuition for transfer functions along this new stability boundary,  C jR. Because the problematic
case of .D´w / D 1 is ruled out anyway, we proceed with the standard definition. O

2.1.4 Nyquist stability criterion


The Nyquist criterion is an elegant way to count the number of unstable closed-loop poles using the
knowledge of the number of unstable open-loop poles and the behavior of the polar plot of the loop
frequency response. Mathematically, the criterion is an application of Cauchy’s argument principle to the
open-loop transfer function with respect to the Nyquist contour. The latter comprises a semicircular arc
connecting C jr and jr in the clockwise direction, i.e. r cos  C jr sin  for  2 Œ ; , for r ! 1, which
is closed along the imaginary axis (encircling pure imaginary poles of the loop transfer function from the
right, if present). A key factor enabling stability analysis with this approach is that all unstable poles of
both open- and closed-loop systems lie inside the Nyquist contour. The criterion also complements well
the loop shaping design method, because the loop transfer function on the Nyquist contour is merely its
frequency response, with values at the semicircular arc collapsing to a point in the finite-dimensional case.
Projecting these ideas to the system in Fig. 2.1 in the case when G´w is SISO, we may consider
analyzing its unstable poles on the basis of the polar plot of the “loop gain”
s
L .s/ ´ G´w .s/e : (2.5)

A clear advantage here is that G´w is finite dimensional, i.e. standard, and effects of the delay on the
frequency response of L are simple and intuitive, as discussed in ÷1.2.1 on p. 5. Certain care should be
26 C HAPTER 2. S TABILITY A NALYSIS

12.1

-10

-900 -720 -540 -360 -180 0

Fig. 2.3: Nichols plots of the loop frequency response of (2.6)

taken to analyze poles of P .s/ properly. This is because the Nyquist analysis of L is only concerned with
the poles of .I L / 1 D .I G´w D x  / 1 , whereas P D Gyu CGyw D x  .I G´w D x  / 1 G´u might contain
additional modes, those of Gyu , Gyw , and G´u , or might have some modes canceled. Still, accounting for
those potential additions / cancellations is not a challenge because G is finite dimensional. A simplest
way to do that is to use a potentially nonminimal form of G´w .s/, with all poles of G.s/ included into it.
In fact, if such canceled poles are unstable, the system is unstable for all  . If these poles are stable, they
have no effect on the stability of P .
So, let G´w be SISO and assume that jG´w .1/j < 1 (otherwise the system is unstable anyway). With
the assumption above, we know that there may be only a finite number of “closed-loop” poles in C x 0.
Hence, the Nyquist contour still encircles all possible unstable poles and can be used as is. The only
subtle point here is the values of G´w .s/ at the semicircular arc of the contour, whose radius approaches
infinity. If G´w .s/ is strictly proper, then G´w .s/ is zero at all its parts, exactly as in the finite-dimensional
case. But if G´w .1/ D ¤ 0 for some j j < 1, then
r!1  r cos  j r sin 
L .r cos  C jr sin / ! e e

is no longer a point. Rather, it is a curve connecting e jr and e jr through the origin and remaining
x  D. Although this behavior deviates from that in the finite-dimensional case,
within the closed disk j j D
it does not affect the stability analysis, as the connecting curve remains within the open unit disk D. Thus,
the Nyquist stability criterion extends to delay systems in Fig. 2.1 with jG´w .1/j < 1 verbatim.
Example 2.6. To illustrate the use of the Nyquist criterion in the context of the stability analysis of the
delay system in Fig. 2.1 and insights one can gain from it, consider the system with
2 3
0 1 0 0
 
6 q00 q01 q10 1 7 1 q10 1
G.s/ D 6 7D ; (2.6)
4 1 0 0 05 s 2 C q01 s C q00 q10 1
1 0 0 0
which is the system studied in Example 2.3 on p. 22 (under ˛ D 0). It is readily seen that poles of
P .s/ coincide with those of 1=.1 L / in this case. Let q00 D 1, q01 D 0:1, and q10 D 0:4. For this
choice of the parameters the system G´w is itself stable, so the system is stable iff the Nyquist plot of L
does not encircle the critical point. It might be easier to analyze the behavior of L . j!/ on the Nichols
chart, because it is cleaner than the Nyquist plot for systems with large phase lags and several crossover
frequencies. The Nichols plots of the studied system for several values of  are presented in Fig. 2.3. We
can readily conclude that P is stable for  D 0 and  D 4 (solid lines) and unstable for  D 2,  D 8, and
 D 11 (dashed lines). As a matter of fact, if  D 2 or  D 8, there are 2 unstable poles of P .s/, while if
 D 11, there are 4 of them.
2.1. M ODAL METHODS 27

Because the impact of  on L . j!/ is uncomplicated—only the phase is affected by  , linearly—


Nyquist arguments offer a valuable insight into the effect of D x  on the stability of the system. Indeed, with
G from (2.6), L . j!/ has two crossover frequencies, i.e. the frequencies at which jL . j!/j D jG´w . j!/j D
0 [dB]. They are !c;1  0:7795 and !c;2  1:1757 (rad/s), denoted by round and square markers, respec-
tively, on Fig. 2.3. Poles of P .s/ cross the imaginary axis when the Nichols plot of L .s/ crosses any of
the critical points . 180 360k; 0/, k 2 Z. Because the delay does not affect the magnitude of L . j!/, a
critical point can only be crossed at either of the crossover frequencies. Being stable at  D 0, the system
clearly remains stable for sufficiently small  > 0. But as  becomes sufficiently large, the plot crosses the
critical point . 180; 0/ at ! D !c;2 and the system loses stability, see the dashed line for  D 2 in Fig. 2.3.
Then, as  grows further, the second point crosses that critical point, this time at ! D !c;1 . It happens
that at this very  the point L . j!c;2 / is still to the right of the next critical point, . 540; 0/. Hence, as
soon as L . j!c;1 / passes . 180; 0/, the system becomes stable again, cf. the solid line for  D 4. As 
keeps growing, L . j!c;2 / eventually crosses . 540; 0/ at which point stability is lost again, see the dashed
line for  D 8. And the system never becomes stable again after that. To understand the reason, note that
the angular distance between L . j!c;1 / and L . j!c;2 / grows with  , just because the phase lag due to the
delay element is proportional to the frequency and !c;2 > !c;1 . Hence, there is a delay  D ? such that

2 C arg G´w . j!c;2 / arg G´w . j!c;1 /


arg L? . j!c;2 / D arg L? . j!c;1 / 2 ” ? D
!c;2 !c;1

(in our case ?  9:1775) and the angular distance between L . j!c;1 / and L . j!c;2 / exceeds 360ı for all
 > ? . Thus, there is at least one critical point (and for larger delays even more, like for the pale dashed
curve in Fig. 2.3 corresponding to  D 11) between them for all such delays. O

The arguments above apply to a general situation. Although details might be knotty and stability
conditions are not readily formulated for every possible frequency-response curve, some qualitative con-
clusions can be drawn. In particular, it is not hard to convince oneself that the following properties hold:
 if there are no nonzero crossover frequencies, then stability / instability property of the zero-delay
system holds for all  > 0 (delay-independent condition);
 stability / instability of the zero-delay system is preserved for “sufficiently small”  > 0;
 unless there are no nonzero crossover frequencies, there is a finite ? such that the system is unstable
for all  > ? .
This said, the Nyquist criterion might still not the the most convenient tool for computing intervals of 
for which the system is stable. Direct analyses of characteristic quasi-polynomials studied in the next two
subsections, might be advantageous in this respect.

2.1.5 Delay sweeping (direct method of Walton–Marshall)


A fundamental property of quasi-polynomials  .s/ as in (2.3) under .D´w / < 1 is that the rightmost
frontier of their roots, i.e. sup .s/D0 Re s , is a continuous function of  for all   0, see [10, Thms. 2.2
and 2.3]. Consequently, as  changes in RC , the roots might migrate from the left-half plane C n C x 0 to
the right-half plane C0 or vice versa only through the imaginary axis. Thus, we can start with counting
unstable roots of the zero-delay version of it, 0 .s/, which is a standard polynomial analysis problem, and
then count crossings the imaginary axis by the roots of  .s/ as  sweeps the whole positive semi-axis.
Each time roots cross from left to right, which is referred to as a switch, we add to the count of unstable
poles. If roots cross from right to left, a reversal, we subtract that count. Walton and Marshall in [75]
showed that this count can be carried out efficiently, using only finite polynomial analysis and most of its
steps depend only on properties of Qi .s/ and are independent of  .
28 C HAPTER 2. S TABILITY A NALYSIS

Single-delay quasi-polynomials
Let us start with the single-delay version of (2.4), which is the quasi-polynomial
s
 .s/ D Q0 .s/ C Q1 .s/e ; (2.7)

where the real polynomials Q0 .s/ and Q1 .s/ satisfy the following assumptions:
A 1 : deg Q0 .s/  deg Q1 .s/,
A 2 : jlims!1 Q1 .s/=Q0 .s/j < 1,
x 0,
A 3 : Q0 .s/ and Q1 .s/ have no common roots in C
A 4 : Q0 .0/ C Q1 .0/ ¤ 0.
Assumption A 1 just says that  .s/ is either neutral or retarded, while A 2 guarantees that there are no
neutral root chains in Cx 0 . Common unstable roots of Q0 .s/ and Q1 .s/ are roots of  .s/ regardless of  .
If A 4 does not hold, then  .s/ has an (unstable) root at the origin for all  . Thus, A 1–4 do not impose
any loss of generality, if they do not hold, then the stability analysis is futile.
By the logic of the method, we now analyze pure imaginary roots of  .s/ for  > 0. In other words,
we seek for ! 2 R that solve
j! Q1 . j!/
Q0 . j!/ C Q1 . j!/e D0 ” e j! D :
Q0 . j!/
The latter equation, in turn, is equivalent to the following two equalities:

jQ0 . j!/j D jQ1 . j!/j (2.8a)


(Q0 . j!/ and Q1 . j!/ cannot vanish simultaneously by A 3 ), known as the magnitude relation, and

 ! D arg Q1 . j!/ arg Q0 . j!/ C .2k 1/ for some k 2 Z; (2.8b)

dubbed the phase relation. Three observations on potential solutions of (2.8) in ! and  are in order. First,
if ! D 0 solves the gain relation (2.8a), then the phase relation (2.8b) is not valid because of A 4 . This
agrees with the intuition that roots cannot cross the imaginary axis at the origin,  has no effect on  .s/
there. Second, if ! D !i ¤ 0 solves (2.8a), then so does ! D !i and every  solving (2.8b) for !i does
that for !i too. Therefore, roots always migrate in pairs. Third, if ! D !i > 0 solves (2.8a), then (2.8b)
is solved by all positive
2
ik D i0 C k; k 2 ZC ; (2.9)
!i
where i0 is the smallest nonnegative solution of the phase relation for !i . Thus, to find pure imaginary
roots of  .s/ we only need to find all positive solutions of the magnitude relation (2.8a), which are known
as crossing frequencies. It is readily seen that these frequencies are the positive roots of

.!/ D Q0 . j!/Q0 . j!/ Q1 . j!/Q1 . j!/; (2.10)

which is an even polynomial whose leading coefficient is positive, by A 1,2 , and whose effective degree,
i.e. the degree with respect to ! 2 , equals that of Q0 .s/. If .!/ has no positive real roots, no imaginary
axis crossings can take place and stability properties are delay independent. If (2.10) is solvable, then
there are an infinite number of crossings as  increases. But if roots of (2.7) do cross jR as  grows, it
happens only at a finite number of points, independent of  . This is an important property, which facilitates
the characterization of stability regions in terms of  .
Having found crossing frequencies, the next step is to understand directions of those crossings. The
following result clarifies this issue:
2.1. M ODAL METHODS 29

Lemma 2.3. Let !i > 0 be a solution of .!i / D 0. Each pure imaginary point j!i is
1. a switch iff .!/ changes its sign from minus to plus at ! D !i as ! increases,
2. a reversal iff .!/ changes its sign from plus to minus at ! D !i as ! increases,
3. a tangential point, at which no roots of  .s/ cross the imaginary axis, otherwise.
In particular, the point is a switch (reversal) if d.!/=d!j!D!i > 0 (d.!/=d!j!D!i < 0).

Proof. Crossing directions are determined by the function

ds ˇˇ
 .!/ D sign Re ˇ :
d sD j!

Namely, we have a switch at !i if  .!i / > 0, a reversal if  .!i / < 0, and an uncertain situation if
 .!i / D 0, where higher derivatives are required. Thus, our first goal is to calculate  .!/ at the imaginary
roots of  .s/. To this end, differentiate the characteristic function by  along any of its roots:
 
d .s/ dQ0 .s/ ds dQ1 .s/ ds s s ds
0D D C e Q1 .s/e  Cs :
d ds d ds d d

Hence,
s   1
ds sQ1 .s/e Q00 .s/ Q10 .s/
D 0 D s C ;
d Q0 .s/ C Q10 .s/e s Q1 .s/e s Q0 .s/ Q1 .s/
where equality Q1 .s/e s D Q0 .s/ was used and ./0 stands for the differentiation with respect to s .
Because sign Re ´ 1 D sign Re ´,
  
1 Q00 . j!/ Q10 . j!/
 .!/ D sign Re C
j! Q0 . j!/ Q1 . j!/
  0 
j Q0 . j!/ Q10 . j!/ 
D sign Re (because Re D 0)
! Q0 . j!/ Q1 . j!/ j!
  0 
Q0 . j!/ Q10 . j!/
D sign Re j (because ! > 0)
Q0 . j!/ Q1 . j!/

Multiplying expression under “sign” by Q0 . j!/Q0 . j!/ D Q1 . j!/Q1 . j!/ > 0 we get:
  
dQ0 . j!/ dQ1 . j!/
 .!/ D sign Re j Q0 . j!/ Q1 . j!/
d. j!/ d. j!/
 
dQ0 . j!/ dQ1 . j!/
D sign Re Q0 . j!/ Q1 . j!/ :
d! d!

Because sign Re ´ D sign.´ C ´/,


 
dQ0 . j!/ dQ1 . j!/ dQ0 . j!/ dQ1 . j!/
 .!/ D sign Q0 . j!/ Q1 . j!/ C Q0 . j!/ Q1 . j!/
d! d! d! d!
d.!/
D sign :
d!
This proves the last statement.
To complete the proof we need to address the singular case of  .!i / D 0. Its handling is quite technical
and can be found in [75, ÷6.2].
30 C HAPTER 2. S TABILITY A NALYSIS

.!/

!5 !4 !3 !2 !1 !

reversal switch tangential point reversal switch

Fig. 2.4: Qualitative sketch of .!/

Remarkably, not only crossing frequencies, but also crossing directions are independent of  . More-
over, because .!/ is even and its leading coefficient is positive (by A 2 ), lim!!1 .!/ D C1. This
means that the last, at the highest ! , cross of the zero level by it is always a point where the sign changes
from minus to plus. Hence, the highest non-tangential crossing frequency is always a switch. Also, switch
and reversal frequencies always alternate, see Fig. 2.4 for a qualitative picture.
Having understood crossing directions at each crossing frequency, it is time to return to the delay
chains in (2.9), which solve the phase relation (2.8b). Each crossing frequency determines such a chain
and each chain is stuck with the same crossing direction. All we need now is to order all crossing delays
as an ascendant sequence fj gj 2ZC with 0 D 0, still marked by their crossing direction, and start counting
the number of C0 roots of  .s/, say  2 ZC , as  grows from  D 0 up. At each j the counter  changes,
so we end up with a sequence fj gj 2ZC . Its starting point is the zero-delay 0 and j at each j 2 N are
updated as

<j 1 C 2 if j corresponds to a switch
j D j 1 2 if j corresponds to a reversal (2.11)

j 1 if j corresponds to a tangential point
(the counter changes by 2 as roots migrate at both j!j and j!j ). The system is then stable in each interval
 2 .j ; j C1 / with j D 0. There number of such intervals might be arbitrary and the first stable interval is
not necessarily Œ0; 1 /, meaning that the delay might have a stabilizing effect. Anyhow, unless all crossing
frequencies are tangential points, there is always a finite j? 2 ZC such that j > 0 for all j  j? , i.e. the
counting may stop at a finite j . This is because the highest non-tangential crossing frequency !l is always
a switch, so the distance between two successive switch delays in that chain, l;kC1 lk D 2=!l , is
always smaller than that between any possible reversal delays. Hence, switches accumulate faster and the
sequence fj g is unbounded.

Example 2.7. To illustrate this procedure, consider again the system from (2.6) for q00 D 1, q01 D 0:1,
and some q10 > 0. The zero-delay characteristic polynomial is 0 .s/ D s 2 C 0:1s C 1 C q10 in this case
and it is Hurwitz, i.e. 0 D 0. The polynomial

.!/ D . ! 2 C j0:1! C 1/. ! 2 j0:1! C 1/ 2


q10 D !4 2  0:995 ! 2 C 1 2
q10

P
and .!/ D 4!.! 2 0:995/ is zero only if ! 2 D 0:995. Four scenarios are possible, depending on q10 .
p
1. If 0 < q10 < 1 0:9952  0:0999, there are no real roots of .!/. Hence, taking into account that
0 D 0, the system is stable for all   0, i.e. it is delay-independent stable.
p
2. If q10 D 1 0:9952 , there is a double positive real roots of .!/,
2
!1;2 D 0:995:

Because .!/ D .! 2 0:995/2  0 for this q10 , .!/ does not change its sign at crossing frequencies
and we have a tangential point. The delays at which the roots of  .s/ are on the imaginary axis can
2.1. M ODAL METHODS 31

be determined from the phase relation (2.8b). To this end, note that
(
2 arctan !0:1!
2 1 if 0  !  1
arg Q1 . j!/ arg Q0 . j!/ D arg q10 arg.1 ! C j0:1!/ D 0:1!
arctan ! 2 1  if !  1

Because !1;2 < 1, (2.8b) reads


0:1!1;2
 !1;2 D arctan 2
C .2k 1/  1:5207 C .2k 1/
!1;2 1

and crossing delays are positive iff k 2 N. Thus, the system is stable iff

 ¤ 1:62495 C 6:31476k; k 2 ZC
p
3. If 1 0:9952 < q10 < 1, there are two positive real roots of .!/,
q q
!12 D 0:995 C 0:9952 1 C q10 2
and !22 D 0:995 0:9952 2
1 C q10 ;

the first of which is a switch and the second is a reversal. To illustrate properties of the system,
select q10 D 0:4, like in the Nyquist analysis of Example 2.6. With this choice !1  1:1757 and
!2  0:7795 and the phase relations read

1k  0:2537 C 5:3441k D f0:2537; 5:5978; 10:9419; 16:286; 21:6301; 26:9742; : : :g

for switches and

2k  3:7785 C 8:0602k D f3:7785; 11:8387; 19:8989; 27:9591; 36:0193; : : :g

for reversals. The sequence of all crossing delays is then (gray elements mark reversals)

fj g D f0; 0:2537; 3:7785; 5:5978; 10:9419; 11:8387; 16:286; 19:8989; 21:6301; 26:9742; 27:9591; : : :g:

Hence, the counter of unstable poles reads fj g D f0; 2; 0; 2; 4; 2; 4; 2; 4; 6; 4; : : :g in this case. At
j D 3 and j D 4 we have two successive switches (and, expectably, never two successive reversals).
Thus, we may stop counting after j D 4 and conclude the system is stable iff

 2 Œ0; 0:2537/ [ .3:7785; 5:5978/;

which agrees with what we have via the Nichols chart in Fig. 2.3.
4. If q10  1, there is one positive real root of .!/,
q
!12 D 0:995 C 0:9952 2
1 C q10  1:99

and it is a switch, of course. This implies that the system is stable only until the first crossing delay,
10 . The phase relation (2.8b) reads now (mind that !1 > 1)
0:1!1 0:1!1
 !1 D arctan  C .2k 1/ D arctan C 2.k 1/;
!12 1 !12 1
so its minimum positive solution corresponds to k D 1. Hence, the system is stable iff
q
1 0:1!1
0 < arctan 2 ; where !12 D 0:995 C q10 2
0:009975
!1 !1 1
which is a decreasing function of q10  1, as a matter of fact.
32 C HAPTER 2. S TABILITY A NALYSIS

Ugh, that was long, but nevertheless quite straightforward . . . O


Remark 2.3 (necessary condition). An interesting, and quite useful, observation from the developments
above is that roots can cross the imaginary axis only in pairs. Therefore, if 0 .s/ has an odd number of
unstable poles, the system is delay-independent unstable. Using Vieta’s formulae, this observation leads
x 0 . Namely,
to a simple necessary condition for the existence of  for which all roots of  .s/ are in C n C
n n 1
if 0 .s/ D s C 0;n 1 s C    C 01 s C 00 , then there is no   0 for which all roots of  .s/ are
stable whenever 00  0. The case of 00 < 0 is exactly the condition for 0 .s/ to have an odd number of
unstable roots. The case of 00 D 0 corresponds to the situation when  .0/ D 0 for all  . O
Remark 2.4 (Nyquist connections). Comparing the results of the third scenario of Example 2.7 with those
in ÷2.1.4, we can see that crossing frequencies of the former match the crossover frequencies of the latter.
This is not a coincidence and is a general property, just because Q1 .s/=Q0 .s/ is exactly the “open-loop”
transfer function G´w .s/. In the same vein, the quantities i0 in (2.9) can be interpreted as delay margins of
the loop L from (2.5). Still, there are qualitative differences. For instance, while the Nyquist arguments
are based on the number of unstable roots of Q0 .s/, the delay sweeping method uses unstable roots of
Q0 .s/ C Q1 .s/ as its starting point. It might be that this very difference renders the direct method easier
to “algoritmize” than the Nyquist criterion. At the same time, the Nyquist criterion arguably offers more
insight into properties of the system and is more visual. O

Multiple-delay quasi-polynomials
Arguments above extend to general commensurate quasi-polynomials of form (2.4), although technicali-
ties become rather cumbersome. To provide a flavor of the approach and associate complications, consider
the two-delay version of the problem for
s 2s
 .s/ D Q0 .s/ C Q1 .s/e C Q2 .s/e ;
x 0 , no common unstable roots of all Qi .s/,
such that counterparts of A 1–4 hold (i.e. no neutral chains in C
P
and i Qi .0/ ¤ 0). The underlying idea is still to count imaginary crossings of roots of this  .s/ as 
increases from  D 0. Because  .s/ is real, its pure imaginary roots coincide with those of
2s s 2s
e  . s/ D Q2 . s/ C Q1 . s/e C Q0 . s/e :

But then, every imaginary root of  .s/ is also a root of yet another quasi-polynomial
2s
Q  .s/ ´ Q0 . s/ .s/ Q2 .s/e  . s/
 s
D Q0 . s/Q0 .s/ Q2 . s/Q2 .s/ C Q0 . s/Q1 .s/ Q1 . s/Q2 .s/ e ;

which is a single-delay quasi-polynomial of the form (2.7) and whose imaginary roots we can analyze.
The catch is that Q  .s/ might have additional imaginary roots, which are an artifact of the procedure. To
understand these additional roots, consider the magnitude equality part of the equation Q  . j!/ D 0, which
is
.jQ0 . j!/j jQ2 . j!/j/j . j!/j D 0:
This equality holds if either j . j!/j D 0, which is what we are interested in, or jQ0 . j!/j D jQ2 . j!/j,
which is an artifact in potentia (it might not be, there is also a phase relation to satisfy). Thus, all positive
frequencies at which the magnitudes of Q0 and Q2 coincide should be checked separately. There might
be only a finite number of such frequencies (otherwise, we necessarily have Q0 .s/ D Q2 .s/ and a neutral
chain on the imaginary axis, which contradicts our assumptions). Assuming that jQ0 . j!i /j ¤ jQ2 . j!i /j
at all imaginary roots j!i of  .s/, it can be shown [75, Sec. 6.3] that the crossing directions of Q  .s/
and  .s/ coincide iff jQ0 . j!i /j > jQ2 . j!i /j and are opposite iff jQ0 . j!i /j < jQ2 . j!i /j at each crossing
frequency !i .
2.1. M ODAL METHODS 33

Example 2.8. Let


s 2s
 .s/ D s C e Ce ;
for which 0 .s/ D s C 2 is Hurwitz. The only positive frequency at which jQ0 . j!/j D jQ2 . j!/j is ! D 1
then. Now,
Q  .s/ D s 2 1 .s C 1/e s
p
Q
and the corresponding .!/ D .1 ! 2 /2 .1 C ! 2 / D ! 2 .! 2 3/ has one admissible solution, !1 D 3.
As this !1 ¤ 1, there are no artifact p crossing frequencies. All crossings of Q  .s/ at this frequency are
switches and, because jQ0 . j!1 /j D 3 > 1 D jQ2 . j!1 /j, so are the crossings of  .s/. To find the
smallest positive  at which this switch takes place, consider the phase relation for Q  .s/, which is

 !1 D arg. 1 j!1 / arg.!12 1/ C .2k 1/ D arctan !1 C 2k D C 2k:
3
p
Its smallest positive solution is  D =.3 3/  0:6046. Hence,

0 < p
3 3
is the range of delays for which this  .s/ has no unstable roots. O

The procedure outlined above applies to general quasi-polynomials of the form (2.3) recursively, each
step reducing one delay. It should be clear that even in a general commensurate case crossing frequencies
and root directions are independent of the delay, which is a useful insight. Computational details are rather
dull though. Also, dimensions of involved polynomials grow rapidly in those iterations, so the approach
might not be quite practical for quasi-polynomials of the form (2.4) with a large k .
Remark 2.5 (incommensurate delays). The situation is way more complicated for quasi-polynomials with
incommensurate delays. Roots crossings there no longer happen at a finite number of frequencies, so the
analysis is a nightmare even for quasi-polynomials with 2 delays and constant Q1 .s/ and Q2 .s/. O

2.1.6 Bilinear (Rekašius) transformation


An alternative take on the idea of checking crossings of the imaginary axis by roots of  .s/ as  sweeps
the whole positive semi-axis was proposed by Rekašius in [61]. A key observation is that at each frequency
! 2 R, the locus of e j! in the complex plane as  sweeps RC (which is the unit circle T , as a matter of
x˛ . j!/, where
fact) coincides with that of R

x˛ .s/ ´ sC˛
R ; (2.12)
sC˛
as ˛ sweeps the whole real axis R. Moreover, for every !i 2 R n f0g and i 2 RC there is ˛i 2 R, viz.
i !i
˛i D !i cot ; (2.13)
2
such that the equality e ji !i D Rx˛i . j!i / holds.
These observations suggest that frequency-response properties of the delay systems in Fig. 2.1 can be
studied in terms of those of the finite-dimensional system in Fig. 2.5, which is obtained by the substitution
Dx ! R x˛ . A state-space realization of this finite-dimensional system can be derived similarly to that of
the delay system (1.27). The only difference is the replacement of the relation w.t / D ´.t  / with
(
xP R .t / D ˛xR .t / C ´.t /
w.t / D 2˛xR .t / ´.t /
34 C HAPTER 2. S TABILITY A NALYSIS


R
´ w
 
G´w G´u
Gyw Gyu
y u

Fig. 2.5: General interconnection with Rekašius substitution

which represents the Laplace-domain relation W .s/ D . s C ˛/=.s C ˛/Z.s/. Eliminating ´ and w from
the model we then end up with the transfer function of P W u 7! y in terms of its state-space realization
2 3
A Bw R´w C´ 2˛Bw R´w Bu Bw R´w D´u
P .s/ D 4 R´w C´ ˛.D´w I /R´w R´w D´u 5; (2.14)
Cy Dyw R´w C´ 2˛Dyw R´w Dyu Dyw R´w D´u

where R´w ´ .I C D´w / 1 is well defined if we assume that .D´w / < 1. The corresponding character-
istic function is
  k
sI A C Bw R´w C´ 2˛Bw R´w X
˛ .s/ ´ det D N .s C ˛/k i .˛ s/i Qi .s/; (2.15)
R´w C´ sI C ˛.I D´w /R´w
iD0

where the last expression follows from (2.4) for N ¤ 0 such that ˛ .s/ is monic. Nonzero pure imaginary
roots of  .s/ in (2.4) coincide then with those of ˛ .s/ in (2.13). Moreover, crossing directions of such
roots of ˛ .s/ are closely related to those of  .s/, as established by the result below.
Lemma 2.4. Let s and s˛ be roots of  .s/ and ˛ .s/, respectively. If j!i ¤ 0 is a root of both these
functions, then
ds ˇˇ ds˛ ˇˇ
sign Re ˇ D sign Re ˇ :
d sD j!i d˛ sD j!i
Proof. Consider the equation  
sI A Bw 
det D0
C´ I D´w 
and let .s/ denote its solution. Clearly,
s s˛ C ˛
e D .s / and D .s˛ /:
s˛ C ˛
Thus,  
d s s ds ds
e D e s C  D  0 .s /
d d d
and  
d s˛ C ˛ 2 ds˛ ds˛
D s˛ ˛ D  0 .s˛ / ;
d˛ s˛ C ˛ .s˛ C ˛/2 d˛ d˛
where  0 .s/ ´ d.s/=ds . Hence,
   1     1 
ds 2 ds˛
e s s C D s˛ ˛
d .s˛ C ˛/2 d˛

whenever s D s˛ . In particular, at s D s˛ D j!i ¤ 0 we have that e j!i D . j!i C ˛/=. j!i C ˛/ and
ds 1 ds˛ 1
     
2
 C j!i D 2 ˛ j!i
d !i C ˛ 2 d˛
2.1. M ODAL METHODS 35

˛I
Q́ wQ
 
Gint Gint
2I I

 
G´w G´u
Gyw Gyu
y u

Fig. 2.6: Equivalent form of the system in Fig. 2.5 with isolated ˛ , here Gint .s/ D 1=s  I

or, equivalently,   1   1  
ds 2 ds˛ 1 2˛
D Cj  :
d !i2 C ˛ 2 d˛ !i !i2 C ˛ 2
The result then follows by the fact that sign Re ´ D sign Re ´ 1 .

Thus, we can analyze imaginary crossings of  .s/ via those of ˛ .s/. A good news is that this ˛ .s/
is a standard polynomial, rather than a quasi-polynomial from (2.4). A bad news is that the parameters of
˛ .s/ are functions of ˛ , so the analysis has to be carried out in parametric form. This can be done via
the Routh–Hurwitz stability criterion, although handling singular cases there might be a mess. Arguably,
the delay sweeping method is more streamlined and easier to use, especially in the analysis of various
low-order applications, see Section 3.1. Still, the bilinear transformation may also be helpful in analyzing
properties of delay systems, examples can be found in ÷6.1.1.

Example 2.9. Return to the system with G as in (2.6). Assume that q00 D 1, q01 D 0:1, and q10 D 0:4,
like in the third item of Example 2.7. The counterpart of  .s/ D s 2 C 0:1s C 1 C 0:4e s in this case is

˛ .s/ D s 3 C .˛ C 0:1/s 2 C 0:1.˛ C 6/s C 1:4˛:

We can construct its Routh array, of course, but for third-order polynomials the condition of having a pair
of nonzero pure imaginary roots is
p
7:9 ˙ 60:01
1:4˛ D 0:1.˛ C 6/.˛ C 0:1/ H) ˛1;2 D :
2
The corresponding imaginary roots (with positive frequencies) are then
p
s1;2 D j 0:1.˛ C 6/j˛D˛1;2 ;

which yields crossing frequencies !1  1:1757 and !2  0:7795, exactly as in Example 2.7. Crossing
directions can be found by constructing the corresponding root locus plot. But we already know, from the
analysis in ÷2.1.5, that the highest crossing frequency is a switch and the next one is a reversal. Inverting
(2.13), we then get  
2 !i
ik D arctan C k ; 8k 2 Z
!i ˛i
from which the results derived in Example 2.7 are recovered. O
 
Remark 2.6 (pulling ˛ out). It is readily seen that R x˛ .s/ D Fu . 1=s 1=s ; ˛/. This form facilitates ex-
2 1
tracting ˛ from the LFT in Fig. 2.5 in the equivalent form presented in Fig. 2.6. The latter system, in turn,
36 C HAPTER 2. S TABILITY A NALYSIS

Q ˛I /, where
is equivalent to Fu .G;
2 3
0 .I C D´w / 1 C´ .D´w I /.I C D´w / 1
.I C D´w / 1
6 0 A Bw .I C D´w / 1 C´ 2Bw .I C D´w / 1 Bu Bw .I C D´w / 1 D´u 7
Q
G.s/ D6 7
4I 0 0 0 5
1
0 Cy Dyw .I C D´w / C´ 2Dyw .I C D´w / 1 Dyu 1
Dyw .I C D´w / D´u
 
is the Redheffer start product GQ D Gint Gint
2I I
? G , where Gint is the integrator with the transfer function
Gint .s/ D 1=s  I . O

2.2 Lyapunov’s direct method


Another group of methods of analyzing the stability of time-delay systems is based on the idea of the
Lyapunov function. These methods are not in the center of these notes, so they are only outlined below.
There is a hefty literature on this subject, which can be consulted for more details and (sometimes) insight,
see e.g. [21, 27] and the references therein.

2.2.1 Ordinary differential equations


We start with a brief overview of Lyapunov’s direct (aka second) method for finite-dimensional systems
described by the equation
P / D f .x.t //; x.0/ D x0
x.t (2.16)
for some x0 2 Rn and a locally Lipschitz continuous function f W Rn ! Rn such that f .0/ D 0. A key
idea is that the Lyapunov stability of this system can be analyzed in terms of a continuously differentiable
function V .x/ W Rn ! R such that V .0/ D 0 and V .x/ > 0 for all x ¤ 0. Such a function is known as the
Lyapunov function and its derivative along trajectories of (2.16) is defined as
dV .x/ @V .x/ dx @V .x/
VP .x/ ´ D D f .x/:
dt @x dt @x
The following results hold:
1. If VP .x/  0 for all x , then (2.16) is Lyapunov stable.
2. If VP .x/ < 0 for all x ¤ 0, then (2.16) is asymptotically stable by Lyapunov.
3. If VP .x/  0 for all x and VP .x/  0 implies x  0, then (2.16) is asymptotically stable by Lyapunov.
The last condition follows by LaSalle’s invariance principle, which says that if VP  0, then all trajectories
accumulate in the set fx j VP .x/ D 0g. If the statements “for all x ” above are understood as “for all x
in some neighborhood of the origin,” then properties are local. For global stability, holding for all initial
conditions x0 2 Rn , the statements above should be valid for all x 2 Rn , and also limkxk!1 V .x/ D 1.
A Lyapunov function can be interpreted as an energy function of the system.
The catch in the use of this method is to find a Lyapunov function, which might be a highly nontrivial
task in its own. The problem is perhaps best understood for linear systems, where f .x/ D Ax for some
matrix A 2 Rnn . A handy choice in this case is a quadratic Lyapunov function of the form

V .x/ D x 0 P x (2.17)

for some P D P 0 > 0. With this choice VP .x/ D 2x 0 PAx D x 0 .PA C A0 P /x and the system is (globally)
stable if there is P > 0 such that PA C A0 P  0. This requirement can be formulated as the existence of
P > 0 satisfying the Lyapunov equation PACA0 P CC 0 C D 0 for some C . In this case VP D kC xk2  0.
2.2. LYAPUNOV ’ S DIRECT METHOD 37

If C has full row rank, then kC xk > 0 for all x ¤ 0 and we have asymptotic stability. In fact, it is sufficient
to have .C; A/ observable, because in that case C x  0 iff x  0 and we can apply LaSalle’s invariance
principle to conclude about asymptotic stability. Note that although asymptotic stability does not imply
that kx.t /k is a monotonically decreasing function of t , the existence of a quadratic Lyapunov function
implies that in some coordinate bases, namely P 1=2 x , the state of an asymptotically stable system does
decay monotonically. This property is useful in analyzing various classes of switched systems.

2.2.2 Delay-differential equations


Consider now the delay system in Fig. 2.1. Because the purpose of this section is to provide mainly a
flavor of the approach, in the analysis of its Lyapunov stability we consider only the (relatively simple)
situation of retarded DDEs, for D´w D 0. We can therefore concentrate on the retarded DDE

P / D Ax.t / C A x.t
x.t  /; xM 0 D  (2.18)

where A ´ Bw C´ and  2 C n .Œ ; 0/, i.e. is continuous. This is the autonomous version of (1.29)
with nonzero continuous initial condition. The solution of (2.18) exists on all RC and is continuous, i.e.
x 2 C n .R/. We may consider xM t W C n .Œ ; 0/ as the state vector of this system. If rank A < n, this is not
a minimal state, cf. (2.2). Still, this choice is sufficient for our purposes.
Once the definition of state is clear, the extension of Lyapunov’s direct method to it is conceptually
straightforward. All we need is a function V .x/ M possessing infinite-dimensional counterparts of properties
defined in ÷2.2.1. The analogy would be even stronger if DDE (2.18) was cast as an operator differential
equation in terms of .x.t /; xM t /, see [9, Sec. 2.4]. There are also other approaches, apparently developed via
attempts to extend Lyapunov’s method in terms of functions V .x/ to DDEs back in ’50s, which endeavor
to keep conditions expressed in terms of x.t / whenever possible. Two best known of them were put
forward by Razumikhin [60] and N. N. Krasovskii [30]. The latter is presented below.
For a continuous function V .x/ M W C n .Œ ; 0/ ! R, define its derivative along trajectories of (2.18) as

V .xM tC / V .xM t /


VP .xM t / ´ lim sup :
#0 

The following result, which is a linear version of [21, Cor. 5.3.1] can then be formulated:

M W C n .Œ ; 0/ ! R such that


Theorem 2.5 (Krasovskii). If there is a continuous V .x/

V .xM t /  ˛.kx.t /k/ and VP .xM t /  ˇ.kx.t /k/

for functions ˛; ˇ W RC ! RC such that lim !1 ˛. / D 1, then (2.18) is Lyapunov stable. If, in addition,
ˇ. / > 0 for all > 0, then (2.18) is asymptotically stable.

Remark 2.7 (neutral DDEs). In the neutral delay case, like in (1.30), the Lyapunov analysis is quite similar.
An important fact is that, much like in the i/o stability case, the system is Lyapunov unstable whenever
.E / > 1 and might be either stable or unstable if .E / D 1, see [59] for more details. If .E / < 1,
then essentially the only alteration to the conditions of Theorem 2.5 is the replacement of ˛.kx.t /k/ with
˛.kx.t / E x.t  /k/. O
The dependences of functions ˛ and ˇ only on kx.t /k is a relaxation of expected conditions in terms
of a norm of xM t (although the bound on the derivative can be interpreted in terms of LaSalle’s invariance
principle, as x  0 obviously implies xM  0). These conditions imply that, in principle, the choice of the
Lyapunov function like in (2.17) is still legitimate, which is not quite obvious. However, such a choice
would be futile in most situations, because it is not sufficiently rich for dynamics like (2.18). Hence,
38 C HAPTER 2. S TABILITY A NALYSIS

finding conditions on P guaranteeing stability is not normally possible for it. This is why more elaborate
functions are considered.
A natural general counterpart of the quadratic Lyapunov function (2.17) for DDE (2.18) would be
Z 0 Z 0Z 0
0 0
V .xM t / D x .t /P0 x.t / C 2x .t / P0 .s/x.t C s/ds C x 0 .t C r/P .r; s/x.t C s/ds dr (2.19)
  

for some matrix P0 D P00 > 0 and functions P0 W Œ ; 0 ! Rnn and P W Œ ; 0  Œ ; 0 ! Rnn such
0
that P .r; s/ D P .s; r/. Such functions are conventionally dubbed Lyapunov–Krasovskii functionals.
However, the form above might be “too rich.” Finding matrix functions for (2.19) might be a hard problem
to handle. For this reason, simpler particular cases of the quadratic function (2.19) are typically sought.
The example below illustrates the idea (other uses are discussed in Section 6.3).
Example 2.10. Consider DDE (2.18) and select
Z 0 Z t
0 0 0
V .xM t / D x .t /P0 x.t / C x .t C s/P x.t C s/ds D x .t /P0 x.t / C x 0 .s/P x.s/ds
 t 

for matrices P0 D P00 > 0 and P D P0 > 0, which corresponds to P0 .s/ D 0 and P .r; s/ D ı.r s/P
in (2.19). To take the derivative of this function along trajectories of (2.18), remember the Leibniz integral
rule, Z Z b.t/
d b.t/ @ db.t / da.t /
f .s; t /ds D f .s; t /ds C f .b.t /; t / f .a.t /; t /: (2.20)
dt a.t/ a.t/ @t dt dt
It is then readily seen that

VP .xM t / D 2x 0 .t /P0 x.t


P / C x 0 .t /P x.t / x 0 .t  /P x.t /
D 2x .t /P0 Ax.t / C 2x .t /P0 A x.t  / C x .t /P x.t / x 0 .t  /P x.t
0 0 0
/
  
 0 0
 A 0 P0 C P0 A C P P0 A  x.t /
D x .t / x .t  / :
A0 P0 P x.t  /

Thus, if we can find P0 and P such that


 0 
A P0 C P0 A C P P0 A 
< 0;
A0 P0 P

then the system is stable. In fact, it is then stable for every delay  , so we end up with delay-independent
stability. The condition above is linear in its two free parameters, P0 and P . Such matrix inequalities are
known as linear matrix inequalities (LMIs) and can be efficiently solved. On the downside, the condition
above is normally conservative, i.e. the failure to solve the LMI above does not necessarily implies that
the system is unstable for some  .
Sometimes the condition is non-conservative though. For example, in the scalar case with A D 1
and A D a the LMI above reads
 
P 2P0 P0 a a2 P 2
< 0 ” P > 0 ^ P 2P0 C  0 < 0 ” .P P0 /2 < .1 a2 /P02 ;
a P0 P P

where the first equivalence relation follows by (A.6). The last inequality is solvable in P0 > 0 and P > 0
iff ja j < 1, as in that case an appropriate P can be found for every given P0 . At the same time, the use
of any of the methods discussed in Section 2.1 yields the conclusion that the system is delay-independent
stable iff a 2 . 1; 1. This implies that the Lyapunov–Krasovskii methods misses only stability under
a D 1, i.e. it is practically non-conservative in this case. O
Chapter 3

Stabilization of Time-Delay Systems

Mille viæ ducunt homines per sæcula Romam


Alain de Lille, Liber Parabolarum

about the stability analysis of time-delay systems, we turn to stabi-


H AVING GRASPED BASIC IDEAS
lization problems in this chapter. We shall be mostly concerned with dead-time systems, i.e. systems
comprising a finite-dimensional system connected in series with a delay element, like (1.10) on p. 4. The
morale of this chapter is that a right choice of the controller architecture can substantially simplify the
solution, rendering it essentially finite dimensional.

3.1 Stabilization of FOPTD systems by fixed-structure controllers


We start with a brief flirt with stabilization ideas for fixed-structure (finite-dimensional fixed-structure, to
be precise) controllers. To simplify the exposition, we consider only unstable FOPTD (first-order-plus-
time-delay) plants of the form
e s
P .s/ D (3.1)
s 1
As discussed in ÷1.2.1, such models are important in applications, like process control. Moreover, in
some situations this model represents dynamics, left after other parts, stable and stably invertible, are
canceled by the controller. The choice of the plant pole at s D 1 can be done without loss of generality.
If P .s/ D a=.s a/ for some a > 0, the pole can always be normalized by the substitution s ! as and
scaling  ! a . In other words, the delay  in (3.1) should always be though of as the ratio between the
loop delay and the time constant 1=a of the finite-dimensional part of the plant dynamics.
The stabilization setup for this system is the standard unity-feedback architecture shown in Fig. 3.1.
As only the stability of this system is analyzed, the exogenous signals are not relevant. The characteristic
function of the closed-loop system in this case is
s
 .s/ D .s 1/MC .s/ C NC .s/e ; (3.2)

d
y 1 s
u e r
e C.s/
s 1
n

Fig. 3.1: Unity-feedback control setup for a FOPTD plant

39
40 C HAPTER 3. S TABILIZATION OF T IME -D ELAY S YSTEMS

where NC .s/ and MC .s/ are the numerator and denominator of the controller, C.s/ D NC .s/=MC .s/. This
is a single-delay quasi-polynomial of the form (2.4), which can be analyzed the delay sweeping method
of ÷2.1.5 for a given C.s/. In this section we consider a more challenging problem of analyzing  .s/ as
functions of controller parameters.

3.1.1 Stabilizing PI controllers


Assume that the class of controllers to be analyzed is of the form
 
1
C.s/ D CPI .s/ ´ kp 1 C (3.3)
Ti s
for some nonzero real kp (known as the proportional gain) and Ti (integral time or reset time). In this case
NC .s/ D kp .Ti s C 1/ and MC .s/ D Ti s and the characteristic function (3.2) reads
s
 .s/ D Ti s.s 1/ C kp .Ti s C 1/e :

Form the function

.!/ D Ti2 ! 2 .! 2 C 1/ kp2 .Ti2 ! 2 C 1/ D Ti2 ! 4 Ti2 .kp2 1/! 2 kp2 :

according to (2.10). The only positive-real solution of this equation, the crossing frequency, satisfies
1 2 q 
!c2 D kp 1 C .kp2 1/2 C 4kp2 =Ti2 ; (3.4)
2
which is a switch because d.!/=d! D 2Ti2 !.1 kp2 C 2! 2 / D 0 only at ! D 0 and ! 2 D .kp2 1/=2,
both smaller than !c2 . Hence, the closed-loop system can be stable for some delays only it is stable for
 D 0. The zero-delay characteristic polynomial,

0 .s/ D Ti s 2 C Ti .kp 1/s C kp ;

is Hurwitz iff all its coefficients are nonzero and have the same sign. If kp < 0, then we must have Ti < 0
and Ti .kp 1/ < 0, which is a contradiction. Hence, all coefficients must be positive, which is the case iff

kp > 1 and Ti > 0: (3.5)

As a matter of fact, this implies that Ti !c2 > kp > 1. The phase relation (2.8b) for this  .s/ reads
 jTi !c C 1  3
 !c D arg. jTi !c C 1/ arg. j!c 1/ C .2k 1/ D arg C 2k 
2 j!c 1 2
. jTi !c C 1/. j!c C 1/  1 2
 1
D arg C 2k  D arg .1 T i ! c C j.T i C 1/! c / C 2k :
!c2 C 1 2 2

Because Ti !c2 > 1, we have that arg.1 Ti !c2 C j.Ti C 1/!c / 2 .=2; / and the smallest crossing delay
corresponds to k D 1. Therefore, the closed-loop system is stable iff (3.5) holds and
1 Ti !c2 1
0 < arctan <1 (3.6)
!c .Ti C 1/!c
for !c given by (3.4). This is a simple condition on  for given kp and Ti , but is far from being simple in
terms of controller parameters. Even in the P controller case, which corresponds to Ti D 1, the condition
p
arctan kp2 1
0 < p (3.60 )
kp2 1
3.1. S TABILIZATION OF FOPTD SYSTEMS BY FIXED - STRUCTURE CONTROLLERS 41

 >0
 > 0:1  > 0:1
1:5 1:5

 > 0:2
 > 0:2

1 1

1=Ti 1=Ti
 > 0:3  > 0:3

0:5 0:5

 > 0:5  > 0:5

0 0
1 1:5 2 2:5 3 0 1:32 2:33
kp !c

(a) in the .kp ; 1=Ti /-plane (b) in the .!c ; 1=Ti /-plane

Fig. 3.2: Maximum stabilizing delay contours for the PI controller (3.3)

is not solvable in kp . All we can do is to plot stability contours of the delay in the .kp ; 1=Ti /-plane shown
in Fig. 3.2(a). This plot can be used to choose suitable PI controller (3.3) for a given delay  .
The crossing frequency !c in (3.4) is the crossover frequency of the loop P C . This is an important
characteristic of the control system in Fig. 3.1, it roughly indicates the frequency range in which feedback
is useful and the closed-loop bandwidth. It follows from (3.4) that any given crossover frequency can be
attained by the choice s
1 C !c2
kp D T i ! c ;
1 C Ti2 !c2
p
which is admissible, i.e. results in kp > 1, iff !c > 1= Ti . In other words, there is a one-to-one corre-
p
spondence between !c > 1= Ti and kp > 1, so we can analyze the stability of the system in terms of its
crossover frequency instead of the proportional gain. This analysis may actually be more informative and
it enables us to end up with the following analytic bound on the integral time under a given delay  > 0:

1 C !c tan. !c /
Ti > ; (3.7)
!c .!c tan. !c //

which is admissible iff 0 <  !c < tan. !c / < =2. The corresponding upper bounds on the reciprocal
integrator time 1=Ti as functions of !c are presented in Fig. 3.2(b) for several values of  .

3.1.2 Stabilizing PD controllers


Now consider the class of PD controllers of the form

C.s/ D CPD .s/ ´ kp .1 C Td s/ (3.8)

for some nonzero proportional gain kp and real Td (derivative time). The characteristic function (3.2) for
this choice is
 .s/ D s 1 C kp .1 C Td s/e s :

This is a neutral quasi-polynomial, so we first have to guarantee A 2 on p. 28. This requires jkp Td j < 1,
which is assumed hereafter. The corresponding .!/ is then

.!/ D .1 kp2 Td2 /! 2 .kp2 1/:


42 C HAPTER 3. S TABILIZATION OF T IME -D ELAY S YSTEMS

1 1

 > 1:5

0:5 0:5
 >1
 >1
 > 0:75  > 0:75
 > 0:5  > 0:5
Td 0 Td 0
 > 0:25
 > 0:25
 >0

0:5 0:5
 >0

1 1
1 1:5 2 2:5 3 0 1:13 2:33 3:43
kp !c

(a) in the .kp ; Td /-plane (b) in the .!c ; Td /-plane

Fig. 3.3: Maximum stabilizing delay contours for the PD controller (3.8)

Clearly, its solution must satisfy


kp2 1
!c2 D : (3.9)
1 kp2 Td2
Taking into account that jkp Td j < 1, this .!/ has positive roots iff jkp j > 1. If this is the case, then the
crossing frequency is is a switch, because d.!/=d! D 2.1 kp2 Td2 /! is positive there. Consider now
properties of the zero-delay version of the characteristic equation,

0 .s/ D .1 C kp Td /s C kp 1:

Because jkp Td j < 1, its leading coefficient is positive, so this polynomial is Hurwitz iff kp > 1. Thus, the
system is delay-independent unstable whenever kp  1 and is stable for all delays below the first crossing
delay if kp > 1. This crossing delay can be found from the phase relation (under kp > 0)

 !c D arg.1 C jTd !c / arg. 1 j!c / C .2k 1/ D arctan.Td !c / C arctan !c C 2k:

Because each one of the two first terms above is a function in .0; =2/, the minimum positive solution of
this equation corresponds to k D 0. Hence, the system is stable iff
1 1 
kp > 1; jTd j < < 1; and 0 < arctan.Td !c / C arctan !c  2 (3.10)
kp !c
for !c given by (3.9). The last relation is again not quite transparent in terms of the controller parameters
for a fixed  . The level curves in the .kp ; Td /-plane presented in Fig. 3.3(a) can be used to choose kp and
Td .
Like in the PI case, the analysis is simplified if we consider the crossover frequency !c as the parameter
of choice instead of kp . This is possible owing to the relation
s
1 C !c2
kp D ;
1 C Td2 !c2

which defines a one-to-one correspondence between !c > 0 and kp > 1 provided jTd j < 1. The bound on
 in (3.10) translated then to the following bound on the derivative time as a function of  and !c :
tan. !c arctan !c /
1< < Td < 1; (3.11)
!c
3.2. P ROBLEM - ORIENTED CONTROLLER ARCHITECTURES : HISTORICAL DEVELOPMENTS 43

d
y u eQ r
P .s/ e s
CQ .s/ -
s
P .s/.1 e /

n yQ

Fig. 3.4: Unity-feedback system with Smith controller

which is nonempty iff 0 <  !c < arctan !c C =2 <  . These bounds are presented in Fig. 3.3(b) and
may be useful to see limitations, imposed by the loop delay, on the attainable crossover frequency.
Although the examples above discuss simple controllers for a simple time-delay system, they can
be used to appreciate problems arising in more general situations. Namely, it is not hard to extrapo-
late that finding precise stabilizability conditions might be tremendously challenging and even if such
conditions can be found, they are seldom transparent and it might not be trivial to find a meaningful re-
parametrization, for which the result is intuitive. For these reasons, the design of fixed-structure finite-
dimensional stabilizing controllers is normally addressed via conservative methods, based on various kinds
of approximations and the use of robustness techniques (some of the are studied in Chapter 6).

3.2 Problem-oriented controller architectures: historical developments


The design of (low-order) fixed-structure controllers is a challenge even for finite-dimensional plants with
high-order dynamics. Conventional wisdom has it then that the complexity of the controller should be
comparable to that of the controlled process. In particular, there are systematic and intuitive methods to
design at least .n p/-dimensional stabilizing controllers for n-dimensional LTI plants with p measured
non-redundant outputs, whereas no such methods exist for lower-order controllers in general.
The observation above suggests that the stabilization of (infinite-dimensional) time-delay systems may
require the use of infinite-dimensional controllers. This section aims at presenting developments in the
control literature since the late ’50s on methods to design infinite-dimensional controllers for time-delay,
mainly dead-time, systems, which exploit the structure of the delay element. These developments bore
several fruitful concepts, resulting in intuitive design procedures and implementable controller architec-
tures. The presentation below is technical, its main goal is to present a historical overview. The underlying
insights and somewhat more streamlined derivations of these approaches are then revealed in the next sec-
tion.

3.2.1 Dead-time compensation: Smith predictor and its modifications


The first problem-oriented controller architecture for dead-time systems was proposed by Otto J. M. Smith
[67] back in 1957. The controller, presented in Fig. 3.4, comprises two parts, a primary controller CQ and a
Smith predictor P .1 D x  / in the internal feedback path of the controller. The overall controller C W y 7! u
in this case has the transfer function
CQ .s/
C.s/ D : (3.12)
1 C CQ .s/P .s/.1 e s /

Although the controller is quite complex, the setup makes closed-loop properties more transparent. The
underlying logic of this architecture can be seen from the behavior of the signal

yQ D y C P .1 x  /u D P .D
D x  u C d / C P .1 x  /u D P .u C d /;
D
44 C HAPTER 3. S TABILIZATION OF T IME -D ELAY S YSTEMS

d
y u eQ r
P .s/ e s
CQ .s/ -

PQ .s/ P .s/e s

n yQ

Fig. 3.5: Unity-feedback system with modified Smith predictor

which can thus be seen as a predicted version of y . The use of the predicted output in lieu of y leads to
simpler closed-loop dynamics, as can be seen from the resulting control sensitivity,

C.s/e s CQ .s/e s
CQ .s/
Tc .s/ D D D µ TQc .s/;
1 C P .s/C.s/e s
1 C CQ .s/P .s/.1 e s / C P .s/CQ .s/e s 1 C P .s/CQ .s/
and complementary sensitivity,

P .s/CQ .s/
T .s/ D P .s/e s
Tc .s/ D e s
µ TQ .s/e s
;
1 C P .s/CQ .s/
transfer functions. Thus, the control sensitivity function turns completely delay free and the complemen-
tary sensitivity function is left with only an input delay. From the stabilization point of view, a key fact is
that the delay is not present in the denominators of both these closed-loop transfer functions. As a result,
if the primary controller CQ stabilizes the delay-free systems TQ and TQc , then C defined by (3.12) renders
the actual T and Tc stable as well. This remarkable property facilitates the design of the primary controller
for the delay-free version of the plant, P , and then implementing it in the Smith form of Fig. 3.4. Also
note that although the overall controller in (3.12) is infinite dimensional, it is readily implementable. All
we need is to realize a delay line and a finite-dimensional plant model in the internal feedback of the
controller.
The situation is not that simple though. Consider the disturbance sensitivity transfer function
 
P .s/CQ .s/ s P .s/ ŒP .s/2 CQ .s/.1 e s /
Td .s/ D P .s/.1 T .s// D P .s/ 1 e D C
1 C P .s/CQ .s/ 1 C P .s/CQ .s/ 1 C P .s/CQ .s/
µ TQd .s/ C TQ .s/P .s/.1 e s /:

Unless plant poles are canceled in the predictor P .s/.1 e s /, they are the poles of the disturbance
sensitivity Td .s/ as well. Hence, if P .s/ has unstable poles other than single poles at j2k= for k 2 Z,
the closed-loop system is internally unstable as well, no matter what primary controller is chosen. This
implies that the Smith controller is not applicable to unstable plants in most cases.
However, instability under unstable plants is not an intrinsic property of predictor-based schemes. A
possible fix was proposed by Watanabe and Ito in [77]. The idea is paraphrased in Fig. 3.5, where the
prediction element PQ PD x  is based on a finite-dimensional system PQ , which is not necessarily the same
as the plant P . With this choice, the signal yQ D PQ u C P d is no longer a prediction of the zero-delay
output. Still, this is a delay-free response, so that the architecture presented in Fig. 3.5 is dubbed the
dead-time compensator. The four closed-loop transfer functions of interest are then
     
S.s/ Td .s/ 1 PQ .s/ P .s/e s Q
S.s/ TQd .s/ 1 PQ .s/ C P .s/e s
D ; (3.13)
Tc .s/ T .s/ 0 1 TQc .s/ TQ .s/ 0 1

where    
Q
S.s/ TQd .s/ 1 1  
´ 1 PQ .s/
TQc .s/ TQ .s/ 1 C PQ .s/CQ .s/ CQ .s/
3.2. P ROBLEM - ORIENTED CONTROLLER ARCHITECTURES : HISTORICAL DEVELOPMENTS 45

are the closed-loop transfer functions associated with the unity-feedback interconnection of delay-free PQ
and CQ . If the finite-dimensional system PQ is such that the prediction block PQ PDx  is itself stable, then
      1
1 0 1 0 1 0
and D Q
PQ .s/ P .s/e s 1 PQ .s/ C P .s/e s 1 P .s/ P .s/e s 1

are both bi-stable and    


S Tc SQ TQc
2 H1 ” 2 H1 :
Td T TQd TQ
x  reduces to that for the
This, in turn, implies that the stabilization problem for the input-delay plant PD
Q
delay-free P .
Thus, the stabilization problem boils down to finding a finite-dimensional PQ for which PQ PD x
is stable. This turns out to be always possible. The choice proposed in [77], based on the state-space
realization of P .s/ D C.sI A/ 1 B , is

PQ .s/ D C e A
.sI A/ 1 B (3.14)

and has the same order and the same poles as P .s/. In this case the transfer function of the predictor,
known as the modified Smith predictor (MSP), is
Z 
PQ .s/ P .s/e s
D C.e A s 1
e I /.sI A/ B D C e A
e .sI A/t dtB: (3.15)
0

This is an entire function, bounded in C0 . Hence, it belongs to H1 and the MSP is stable. There are
other choices, some of which are proposed in [77], all of them are based on canceling unstable poles of
P .s/ in the predictor PQ .s/ P .s/e s . Because of those cancellations, certain care should be taken in
implementing such predictors, this issue is discussed in Chapter 5. But, in any case, the use of the dead-
time compensation (DTC) architecture renders the stabilization problem essentially finite dimensional,
which is a clear advantage in comparison with the methods discussed in Section 3.1.

3.2.2 Finite spectrum assignment


Consider now the system
P / D Ax.t / C Bu.t
x.t  /; (3.16)
which is a dead-time system with the whole state of its delay-free part measurable. We assume that .A; B/
is stabilizable. Conceptually, the stabilization problem for it would be simple if we measured the future
vector x.t C  / at each t . The control law u.t / D F x.t C  / C Fv v.t /, where v is an exogenous signal and
Fv is some gain, is obviously stabilizing if A C BF is Hurwitz. Finding an appropriate F is a standard step
of designing a state feedback controller for the zero-delay version of (3.16). Of course, the control law
above, based on x.t C  /, is not implementable. But we may try to imitate it by replacing x.t C  / with
its prediction. This logic is reminiscent of that of observer-based feedback, where unmeasurable state is
replaced with its estimate.
A prediction of x can be obtained by solving (3.16),
Z tC Z t
A A.tC r/ A
x.t C  / D e x.t / C e Bu.r  /dr D e x.t / C eA.t r/ Bu.r/dr:
t t 

This is a linear function of the state of (3.16), which is .x.t /; uM t / as discussed in ÷1.2.1, and is causally
implementable. Thus, a candidate for the predictor-based feedback is
 Z t 
A A.t r/
u.t / D F e x.t / C e Bu.r/dr C Fv v.t /; (3.17)
t 
46 C HAPTER 3. S TABILIZATION OF T IME -D ELAY S YSTEMS

where F is such that A C BF is Hurwitz.


To analyze the stability of the closed-loop system with the control law (3.17), rewrite it in the Laplace
domain. To this end, note that the last term in it is the convolution of u.t / and eAt B1Œ0; .t /, where 1Œ0; .t /
is the indicator function of the interval Œ0;  . As such, the Laplace transform of it is the product of the
Laplace transforms of each one of these functions, i.e.
Z t  Z 1 Z 
A.t r/ At st
L e Bu.r/dr D e B1Œ0; .t /e dt U.s/ D e .sI A/t dtBU.s/: (3.18)
t  0 0

Thus, (3.17) can be written as


 Z  
.sI A/t
I F e dtB U.s/ D F eA X.s/ C Fv V .s/:
0

Combining this relation with the Laplace-domain version of (3.16), we end up with the following closed-
loop equations:     
sI A B e s X.s/ 0
R  D V .s/:
F eA I F 0 e .sI A/t dtB U.s/ Fv
The stability of these dynamics can be analyzed via its characteristic function, which determines the
existence of a nontrivial free motion under v D 0. By the logic of ÷2.1.1, this function is
 s

sI A B e
;cl .s/ D det R : (3.19)
F eA I F 0 e .sI A/t dtB

Although the task of finding its rots appears complicated, it is actually not. Just note that
Z 
e A .sI A/ e .sI A/t dtB D e A B B e s ;
0

cf. the second equality of (3.15). Therefore,


 s
  A
R  .sI A/t A

sI A B e sI A e .sI A/ 0
e dtB e B
R D R
F eA I F 0 e .sI A/t dtB F eA I F 0 e .sI A/t dtB
  R 
sI A e A B I e A 0 e .sI A/t dtB
D
F eA I 0 I
and
 A

sI A e B A
;cl .s/ D det D det.sI A e BF eA / D det.e A
.sI A BF /eA /
F eA I
D det.sI A BF /;

where the second equality follows by (A.5b). In other words, ;cl .s/ in (3.19) is actually a plain polyno-
mial of degree n, whose roots are the eigenvalues of A C BF , exactly as in the delay-free case. For this
reason, the strategy behind control law (3.17) is called the finite spectrum assignment (FSA). The term
was coined by Manitius and Olbrot in their 1979 paper [33], where the stabilization problem in a slightly
different, albeit essentially equivalent, form was studied. The ideas behind controller (3.17) can be traced
back to the late ’60s, see [16, 35, 29], where similar configurations were discussed in various contexts.
Remark 3.1 (multiple delays). Remarkably, the FSA approach extends to a rather general class of multiple
input delay systems. Although such kinds of extensions are not as intuitive as the predictor-based approach
described above, they can be deduced from the analysis above and still produce finite closed-loop spectra.
3.2. P ROBLEM - ORIENTED CONTROLLER ARCHITECTURES : HISTORICAL DEVELOPMENTS 47

P
P / D Ax.t / C i Bi u.t
For example, consider a plant described by the equation x.t i / for delays i  0.
The control law  
XZ t
u.t / D F x.t / C eA.t i r/ Bu.r/dr C Fv v.t /
i t i
P Ai
assigns the spectrum of the closed-loop system to the roots of det.sI A i e Bi F /, which is a
P Ai
polynomial of degree n, and the stabilization is possible iff .A; i e Bi / is stabilizable. O
If only a part of the vector x is measured, say if (3.16) is complemented by the measurement equation
y.t / D C x.t /, then (3.17) should be complemented by an observer of x , resulting in the control law

PO / D Ax.t
x.t O / C Bu.t  / L.y.t / C x.tO // (3.20a)
 Z t 
u.t / D F eA x.t
O /C eA.t r/ Bu.r/dr C Fv v.t /; (3.20b)
t 

for some L such that A C LC is Hurwitz, which exists iff .C; A/ is detectable. With the standard trick of
O / with the observer error e.t / ´ x.t / x.t
replacing x.t O / by a similarity transformation, the closed-loop
system reads in the Laplace domain as
2 32 3 2 3
sI A LC 0 0 E.s/ 0
s 5 4 X.s/ 5 D 4 0 5 V .s/:
4 0 sI A R  B e.sI A/t
A A
Fe Fe I F 0e dtB U.s/ Fv

Its characteristic function


2 3
sI A LC 0 0
s
;cl .s/ D det 4 0 sI A R  B e.sI A/t
5 D det.sI A LC / det.sI A BF /
F eA F eA I F 0e dtB

is again a polynomial, now having the degree 2n, coinciding with that in the delay-free observer-based
feedback. Controller (3.20), dubbed observer-predictor, was analyzed by Furukawa and Shimemura in
[17], although perhaps appeared for the first time in Mayne’s paper [35] of 1968 and also as a part of the
LQG optimal control law in Kleinman’s paper [29] of 1969.

3.2.3 Kwon–Pearson–Artstein reduction


A related, albeit a bit more general, approach was proposed by Kwon and Pearson in [31] and then ex-
tended by Artstein in [2]. Consider again (3.16) and introduce the variable
Z t
Q / ´ x.t / C
x.t eA.t  r/ Bu.s/dr;
t 

which is effectively a prediction of e A x.t C  /. Differentiating this variable results in the relation
Z t
PQ / D x.t
x.t P /CA eA.t  r/ Bu.r/dr C e A Bu.t / Bu.t  /
t 
Z t
D Ax.t / C A eA.t  r/ Bu.r/dr C e A Bu.t /;
t 

where the first equality is obtained via the Leibniz integral rule (2.20) on p. 38 and the second equality
follows by (3.16). Thus, the variable xQ satisfies the ODE
PQ / D Ax.t
x.t Q /Ce A
Bu.t /: (3.21)
48 C HAPTER 3. S TABILIZATION OF T IME -D ELAY S YSTEMS

Note that controllability-related characteristics of .A; e A B/ are equivalent to those of .A; B/. System
(3.21) is called the reduced system and it can be stabilized by standard methods. For example, if the
state-feedback u.t / D FQ x.t
Q / C Fv v.t / is used for it, then the reduced system is stable iff A C e A B FQ is
Hurwitz. But then Z t
x.t / D x.t
Q / eA.t  r/ B FQ x.r/
Q dr
t 

is bounded whenever so is xQ , merely by the triangle inequality and the boundedness of ke As B FQ k for
s 2 Œ0;  . Thus, the stabilization problem for (3.16) is yet again reduced to the stabilization of a delay-
free system, this time (3.21). The stabilizing controller in terms of the original vector x is given by
 Z t 
Q
u.t / D F x.t / C e A.t  r/
Bu.r/dr C Fv v.t /; (3.22)
t 

which actually equals the FSA control law (3.17) under FQ D F eA .
Remark 3.2 (time-varying delays). The main difference of the reduction approach from the FSA is in their
analyses of the closed-loop stability. While the FSA does that via the characteristic function, the reduction
approach uses time-domain arguments. As such, they can be applied to time-varying delays as well. As an
P / D Ax.t / C Bu..t // for some absolutely continuous function .t /  t
example, consider the system x.t
P
such that .t / > 0, which is a special case of the system analyzed in [2, Ex. 5.6]. This is an input-delay
system with the time-varying delay  .t / D t .t /  0. Denote by  1 .t / the inverse of .t /, i.e. the
P / > 0 and satisfies  1 .t /  t . The function
value of r for which .r/ D t . It is unique whenever .t
Z t
1
Q / D x.t / C
x.t eA.t  .r// B P 1 .r/u.r/dr
.t/

satisfies then the reduced equation


1 .t/
PQ / D Ax.t
x.t Q /Ce A. t/
B P 1
.t / u.t /;
1
N / ´ eA. .t/ t/ x.t
which is free of delays. Actually, the variable x.t Q / is the prediction of x. 1 .t //, so
1
the variable  .t / t  0 can be thought of as the prediction horizon required for the delay t .t /.
PN / D Ax.t
Moreover, it can be shown that x.t N / C Bu.t /, so its dynamics are time invariant. Yet, similarly
to the discussion in Remark 3.1, the prediction interpretation is not evident in extending the approach to
P
P / D Ax.t / C i Bi u.i .t //, in which case reduced dynamics are
multiple-delay systems of the form x.t
intrinsically time varying. O

3.2.4 Connections
Remarkably, all three approaches studied in Section 3.2 share essentially the same structure of their con-
trollers. Namely, they all result in controllers having internal feedback of the form presented in Fig. 3.6,
where the “primary controller” CQ is designed for a delay-free system and the “dead-time compensation”
block ˘ is determined by the plant (exogenous signals are taken zero for brevity). Indeed, the Smith
controller in Fig. 3.4 corresponds to this form under ˘.s/ D P .s/.1 e s / and the primary controller CQ
designed for P . The MSP in Fig. 3.5 chooses ˘.s/ in form (3.15). The FSA controller (3.17) can be cast
in the form presented in Fig. 3.6 under y D x , the static CQ .s/ D F eA , and
Z 
˚
˘.s/ D L eA.t / B1Œ0; D e A e .sI A/t dtB;
0

cf. (3.18). Likewise, the controller of (3.22) is the same modulo the choice of CQ .s/ D FQ . In fact, the
multiple-delay extension discussed in Remark 3.1 can also be cast in the form presented in Fig. 3.6 with a
static CQ and a more elaborate ˘ .
3.3. P ROBLEM - ORIENTED CONTROLLER ARCHITECTURES : CONTROL - THEORETIC INSIGHT 49

u y
CQ

Fig. 3.6: Common controller structure (DTC, FSA, reduction)

Moreover, the observer-predictor in (3.20) is also of the same form. This is not obvious, but can be
proved. To this end, introduce the variable
Z t
A
N / ´ e x.t
x.t O /C eA.t r/ Bu.r/dr;
t 

with which (3.20b) reads u.t / D F x.t


N / (remember, we assume here that v D 0). Using again the Leibniz
integral rule (2.20), we have:
Z t
A

Px.t
N / D e Ax.t O / C Bu.t  / L.y.t / C x.tO // C A eA.t r/ Bu.r/dr C Bu.t / eA Bu.t  /
t 
D Ax.t
N / C Bu.t / eA L.y.t / C x.t
O //:
Rt
O / D e A x.t
Substituting x.t N / e A t  eA.t r/ Bu.r/dr to this expression, we end up with the following
equivalent form of (3.20):
 Z t 
Px.t A A A A A.t r/
N / D .A C e LC e N / C Bu.t / e L y.t / C C e
/x.t e Bu.r/dr (3.20a0 )
t 
u.t / D F x.t
N /: (3.20b0 )

But this control law is exactly in the form presented in Fig. 3.6, with
 
A C BF C eA LC e A eA L
CQ .s/ D ;
F 0

which is the observer-based controller for PQ given by (3.14), and ˘.s/ in form (3.15). In other words,
the MSP of Watanabe–Ito with the observer-based primary controller is identical to the observer-predictor
controller of Furukawa–Shimemura. These two controllers were regarded different for some time, until
their equivalence was shown in [46].

3.3 Problem-oriented controller architectures: control-theoretic insight


The fact that independently developed control methods produce the same controller architecture is in-
triguing. A somewhat superficial explanation of that might be given via recalling that these methods have
prediction ideas in their origin. However, prediction is not a part of the classical control-theoretic toolkit
and is often viewed as an incautious business, especially from the robustness point of view. On top of this,
neither the MSP nor multiple-delay extensions of the FSA / reduction approaches is readily interpretable
as a predictor-based controller. Moreover, however natural and clever the guesses discussed in this sec-
tion might be, there is no indication whether they are really justified. We thus shall look for more solid
control-theoretic explanations, for which we yet again turn to the discrete-time version of the problem.

3.3.1 Gaining insight via discrete-time systems: state feedback


Consider the input-delay system
xŒt C 1 D AxŒt  C BuŒt  ; (3.23)
50 C HAPTER 3. S TABILIZATION OF T IME -D ELAY S YSTEMS

where xŒt  2 Rn , uŒt  2 Rm ,  2 N, and the whole x is measurable. To simplify the exposition, assume
that .A; B/ is controllable. We already know, cf. (1.11) on p. 5, that the state equation of this system is
2 3 2 32 3 2 3
xŒt C 1 A B 0  0 xŒt  0
6 uŒt  C 1 7 6 0 0 I    0 76 uŒt   7 6 0 7
6 7 6 76 7 6 7
6 :: 7 6 :: :: :: : : :: 76 :: 7 6 :: 7
6 : 7 D 6 : : : : : 76 : 7 C 6 : 7uŒt : (3.24)
6 7 6 76 7 6 7
4 uŒt 1 5 4 0 0 0    I 54 uŒt 2 5 4 0 5
uŒt  0 0 0  0 uŒt 1 I
„ ƒ‚ … „ ƒ‚ …„ ƒ‚ … „ƒ‚…
x ŒtC1 A x Œt B

The first important observation regarding the model above is that if x is assumed to be measurable,
then so should be the whole state x of (3.24). Indeed, the other components of x are the history of
the input signal u, which is generated by us and thus should be available in virtually every reasonable
scenario. Consequently, we can implement the standard state-feedback law for (3.24) and it is of the form

X
 
uŒt  D Fx F    F2 F1 x Œt  D Fx xŒt  C Fi uŒt i : (3.25)
iD1

This is a dynamic control law if considered as a mapping x 7! u and its structure is reminiscent of that
P
in Fig. 3.6, under y D Fx x , CQ D I , and ˘.´/ D i Fi ´ i . This suggests that the internal feedback in
Fig. 3.6 with an FIR ˘ , whose impulse response has support in Œ0;  , is merely a static state feedback
acting on the component uM i of the state of input-delay systems. This is a well-justified choice from the
control-theoretic perspective.
The second important observation about model (3.24) is that not all modes of A would make sense to
move by feedback. To see this, note that
2 3
A I B 0 0  0 0
6
6 0 I I 0    0 0 7 7
  6
6 0 0 I I    0 0 7 7  
rank A I B D rank 6 :: :: :: :: : : :: :: 7 D rank A I B C  m;
6 : : : : : : : 7
6 7
4 0 0 0 0  I 0 5
0 0 0 0    I I

so the only possibly uncontrollable modes of the realization in (3.24) are those of .A; B/. As the latter
pair is assumed to be controllable, so is .A ; B / and all n C m modes of (3.24) can be freely assigned by
static state feedback of the form (3.25). But m eigenvalues of A are already in a perfect location, at the
origin. Such kind of eigenvalues, known as deadbeat, are obviously stable and are normally considered
welcome, albeit expensive to attain. Yet now we have deadbeat modes for free, so it would make perfect
sense to keep them untouched, which would also keep the control energy low. With this logic in mind, the
choice of the feedback gain in (3.25) should aim only at assigning a subset of the modes of A , those of
A. To understand this kind of pole assignment problem, some preliminary results are required.

Preliminary: partial pole placement by state feedback


Consider the state-feedback problem for the n-order delay-free system

xŒt C 1 D AxŒt  C BuŒt 

and suppose that only nQ < n eigenvalues of A should be moved by feedback, while the other n nQ are to
remain untouched. Perhaps the easiest way to visualize this process, while avoiding the use of the invariant
3.3. P ROBLEM - ORIENTED CONTROLLER ARCHITECTURES : CONTROL - THEORETIC INSIGHT 51

subspace notion, is via applying a similarity transformation T , bringing the realization above to the form
   
1 AQ 0 BQ
T xŒt C 1 D TAT T xŒt  C TBuŒt  D T xŒt  C uŒt ; (3.26)
 AN BN

where AQ 2 Rn Q n
Q
contains all eigenvalues of A that have to be shifted, AN 2 R.n n/.n
Q n/
Q
contains those to
be kept untouched, and “” denotes an irrelevant block. It is readily seen that uncontrollable modes of
Q B/
.A; Q are also uncontrollable modes of .A; B/. An obvious choice of the required state-feedback gain for
 
this realization is F D FQ 0 , where FQ is such that the eigenvalues of AQ C BQ FQ are assigned to required
positions.
An important fact is that the resulted control law in the original coordinates,
   
uŒt  D FQ 0 T xŒt  D FQ I 0 T xŒt ;

does not require the whole similarity transformation matrix T , but only its nQ first rows. An exhaustive
characterization of this part of T is given by the following result:
Lemma 3.1. Let shift  spec.A/ contain all modes of A that we need to shift by feedback. A nQ  n matrix
TQ is the first nQ rows of a nonsingular T 2 Rnn such that
 
1 AQ 0
TAT D
 AN
for some AQ 2 Rn
Q nQ Q D shift iff
such that spec.A/
TQ A D AQTQ and rank TQ D n:
Q (3.27)
Proof. The “only if” statement follows from the relation
   
AQ 0     AQ 0  
TA D N T H) InQ 0 TA D InQ 0 N T D AQ InQ 0 T:
 A  A
Q
To show the “if” statement, assume
 0 that 0 there is a full-rank T satisfying 1(3.27).
 There
 is then a full-rank
TN 2 R.n n/n
Q
such that T ´ TQ TN 0
is nonsingular. In this case TQ T D InQ 0 and we have that
   
InQ 0 TAT 1 D TQ AT 1 D AQTQ T 1 D AQ 0 ;

as required.

Thus, all we need is to solve (3.27) in TQ and AQ, whose spectrum coincides with the part of spec.A/
that is planned to be moved, and implement the control law
uŒt  D FQ TQ xŒt 

for FQ assigning the nQ eigenvalues of AQ C TQ B FQ to desired positions.

Partial pole placement for (3.24)


Now return to the state equation (3.24). We know what part of its dynamics we need to move, so consider
the following version of (3.27) for it:
2 3
A B 0  0
6 0 0 I  0 7
 66 : : : : ::
7    
Tx T    T2 T1 6 :: :: :: : :
7
:7 D Tx A Tx B T    T2 D A Tx T T 1    T1 ;
„ ƒ‚ …6 4 0 0 0 
7 „ ƒ‚ … „ ƒ‚ …
TQ
I 5 TQ A TQ
0 0 0  0
52 C HAPTER 3. S TABILIZATION OF T IME -D ELAY S YSTEMS

where the choice of “AQ” is an educated guess (it has to have the same spectrum as A, but need not be equal
A in general). It is not hard to see that this equality is solved by
   
Tx T    T2 T1 D A A 1 B    AB B ; (3.28)

which has full rank iff .A; B/ has no uncontrollable modes at the origin. By the controllability assumption,
the full rank property is thus guaranteed. With this choice, TQ B D B and the controller gain FQ is thus
designed for the delay-free pair .A; B/. The choices of AQ and TQ above are not unique, but they do appear
natural and they lead to an interpretable controller.
Remark 3.3 (controllability of .A; B/). The assumption that .A; B/ simplifies the exposition above, in
particular, arguments about the assignment of modes of A and the proof that the choice of TQ as in (3.28)
has full rank. However, the result holds even if the milder condition of the stabilizability of .A; B/ is
assumed. In that case only controllable modes of .A; B/ can be assigned, which is obvious. Also, if
.A; B/ has uncontrollable modes at the origin, then the matrix in (3.28) has a reduced rank. But this not a
problem, as by the design logic those deadbeat modes are not intended to be shifted anyway. O
Thus, a version of (3.25) that does not touch the m deadbeat modes of (3.24) is
 
X   t 1
X 
uŒt  D F A xŒt  C Ai 1
BuŒt i  D F A xŒt  C At r 1
BuŒr ; (3.29)
iD1 rDt 

where F assigns the eigenvalues of A C BF to desired positions, which are the closed-loop eigenvalues of
the input-delay system (3.23). It is readily seen that this control law is based on the predicted xŒt C  . As
such, it is the perfect discrete-time counterpart of the prediction-based controller (3.17) under v D 0. Just
now it is derived by a conventional state-feedback rationale and thus has a solid justification. Namely, the
predictive feedback is merely a static state feedback that potentially alters only the poles of the delay-free
part of the open-loop system.
The same rationale applies to the continuous-time law (3.17). It is nothing but the static state feedback
law, shifting only the finite modes of the open-loop plant, which are the eigenvalues of A in (3.16). This
state feedback happens to be a predictor in the single-delay case, but might not be readily interpretable
this way for multiple-delay systems.

3.3.2 Gaining insight via discrete-time systems: output feedback


Now consider the system
xŒt C 1 D AxŒt  C BuŒt 
(3.30)
yŒt  D C xŒt  C DuŒt 
in which only a part of the delay-free plant state is measured. We assume that .A; B/ is stabilizable
and .C; A/ is detectable. The state-space realization of this system given by (1.11) on p. 5 assumes that
the system is an operator u 7! y . This assumption may be natural in analyzing the behavior of y , but
does not reflect actual measurement variables. This is because we have also measurements of the last m
components of the state variable x Œt  defined in (3.24). Thus, the accurate measurement equation now is
2 3 2 32 3 2 3
yŒt  C D  0 0 xŒt  0
6 uŒt   7 6 0 I    0 0 76 uŒt   7 6 0 7
6 7 6 76 7 6 7
6 :: 7 6 :: :: : : :: :: 76 :: 7 6 :: 7
6 : 7D6 : : : : : 76 : 7 C 6 : 7uŒt ; (3.31)
6 7 6 76 7 6 7
4 uŒt 2 5 4 0 0    I 0 54 uŒt 2 5 4 0 5
uŒt 1 0 0  0 I uŒt 1 0
„ ƒ‚ … „ ƒ‚ …„ ƒ‚ … „ƒ‚…
y Œt C x Œt D
3.3. P ROBLEM - ORIENTED CONTROLLER ARCHITECTURES : CONTROL - THEORETIC INSIGHT 53

which complements the state equation (3.24). It is straightforward to show that unobservable modes of
the pair .C ; A / are those of .C; A/. Hence, the realization (3.24), (3.31) is detectable.
Equation (3.31) still does not measure the whole state of the system. It is then natural to implement
the state-feedback controller (3.29) in combination with a state observer. Moreover, if u can be measured
without noise, then it is well justified to consider a reduced-order observer [24, Sec. 4.3], reconstructing
only the x component of x using n-dimensional dynamics. The structure of A and C facilitates the
construction of a reduced-order observer for x directly from (3.30). It is readily verified that

O C 1 D AxŒt
xŒt O  C BuŒt  L.yŒt  O 
C xŒt DuŒt  /

yields the following equation for the estimation error eŒt  ´ xŒt  O :
xŒt

eŒt C 1 D .A C LC /eŒt :

Thus, the stabilizing controller for (3.30) of the form

O C 1 D AxŒt
xŒt O  C BuŒt   L.yŒt  C xŒt O  DuŒt  / (3.32a)
 t 1
X 
uŒt  D F A xŒt
O C At r 1 BuŒr ; (3.32b)
rDt 

results in the closed-loop dynamics with the characteristic equation

;cl .´/ D ´m det.´I A BF / det.´I A LC /:

This implies that the observer-predictor control law is merely an observer-based feedback, featuring a
reduced-order observer (justified by perfect measurements of u) and state feedback keeping m open-loop
modes at the origin untouched. This appears to be a well-justified controller architecture.
Remark 3.4 (noisy measurements of u). If, for whatever reason, the past control signals used in (3.32b)
are corrupted by noise, then we shall build a full-order observer instead of (3.32a). But even then the
structure of the system dynamics in (3.24) and (3.31) can be exploited to end up with simpler solution
formulae. O

3.3.3 Intermezzo: Fiagbedzi–Pearson reduction for systems with internal delays


The logic above can be used to address the stabilization of a substantially wider and more challenging
class of systems. Below we consider the approach of Fiagbedzi and Pearson [12], applied to the general
single-delay interconnection of Fig. 1.7 on p. 10, whose dynamics are described by (1.27).
Assume that the whole state of the system, which is .x.t /; Ḿ t /, is measurable, so only the “state
dynamics” part,       
P /
x.t A Bw x.t / Bu
D C u.t /; (3.33)
´.t / C´ D´w ´.t  / D´u
is required. The characteristic function  .s/ associated with this equation is given by (2.3) on p. 21. This
is a quasi-polynomial with an infinite number of roots. As such, finite spectrum assignment should not
be expected, in general. Still, under the already familiar assumption that .D´w / < 1, there is only a
finite number roots of  .s/ in the closed right-hand plane Cx 0 . Hence, the ideas of FSA / reduction can be
applied only to unstable open-loop poles.
Introduce the nQ -dimensional signal
Z t
Q
Q / ´ Qx.t / C
x.t eA.t r/ R´.r/dr
t 
54 C HAPTER 3. S TABILIZATION OF T IME -D ELAY S YSTEMS

for some matrices AQ 2 RnQ n


Q
, Q 2 Rnn
Q
, and R 2 Rnm
Q 
to be determined. Differentiating this variable
and using the Leibniz integral rule (2.20) and the relations of (3.33) for xP and ´, we have that
Z t
Q Q
PQ / D Qx.t
x.t P / C R´.t / eA R´.t  / C AQ eA.t r/ R´.r/dr
t 
 Q
D Q Ax.t / C Bw ´.t  / C Bu u.t / C R.C´ x.t / C D´w ´.t  / C D´u u.t // eA R´.t  /
Z t
Q
C AQ eA.t r/ R´.r/dr
t 
Q
D AQx.t
Q / C .QBu C RD´u /u.t / C .QA C RC´ Q
AQ/x.t / C .QBw C RD´w eA R/´.t  /:

The dependence on x and ´ can be eliminated if AQ, Q, and R are chosen to satisfy the equation
 
  A Bw  Q 
Q R D AQ Q eA R : (3.34)
C´ D´w

If such matrices exist, then xQ verifies the finite-dimensional dynamics


PQ / D AQx.t
x.t Q /;
Q / C Bu.t

where BQ ´ QBu C RD´u .


To understand equation (3.34), pre-multiply both its sides by a left eigenvector i 2 CnQ of AQ, corre-
Q . Using the fact that 0 eAQ
sponding to some i 2 spec.A/ i D ei  0i , we then obtain the equality
 
0
  i I A Bw e i 
i Q R D 0:
C´ I D´w e i 
 
Hence, if 0i Q R ¤ 0, then every eigenvalue i of AQ necessarily satisfies  .i / D 0, i.e. it is a root
of the characteristic function of the open-loop system (3.33). For that reason equation (3.34) is referred
to as the left characteristic matrix equation. But the relation above implies that (3.34) can be viewed as a
counterpart of (3.27). Thus, the control law u.t / D FQ x.t Q / C Fv v.t / or, equivalently,
 Z t 
Q
Q
u.t / D F Qx.t / C eA.t r/
R´.r/dr C Fv v.t / (3.35)
t 

is a state-feedback law assigning only the part of the open-loop spectrum, that of the matrix AQ, to AQ C BQ FQ .
This conclusion can be supported by analyzing the closed-loop characteristic function, which can be
derived from the relation
2 32 3 2 3
sI A Bw e s Bu X.s/ 0
4 C´ I D´w e s D´u 5 4 Z.s/ 5 D 4 0 5 V .s/;
R  .sI A/t
Q
Q
FQ Q
F 0e dtR I U.s/ Fv

which represents the system dynamics (3.33) and the control law (3.34) in the Laplace transform domain.
The closed-loop characteristic function for this system is
2 3
sI A Bw e s Bu
;cl .s/ D det 4 C´ I D´w e s D´u 5 :
R  .sI A/t
Q
FQ Q FQ 0 e dtR I
Using equations (A.5) and the equality
 
 R Q
.sI A/t

Q 1
  sI A Bw e s
Q e dtR D .sI A/ Q R ;
0 C´ I D´w e s
3.3. P ROBLEM - ORIENTED CONTROLLER ARCHITECTURES : CONTROL - THEORETIC INSIGHT 55

which can be verified by straightforward algebra via (3.34), we have:


    
sI A Bw e s Bu  R
Q Q  e .sI A/tQ 
;cl .s/ D det F d tR
C´ I D´w e s D´u 0
     
Bu Q Q 1
  sI A Bw e s
D det I F .sI A/ Q R det
D´u C´ I D´w e s
2 3
I 0 Bu FQ  
Q 1 det 4 0 I D´u FQ 5 det sI A Bw e s
D det.sI A/
C´ I D´w e s
Q R sI AQ
 .s/
D det.sI AQ BQ FQ / Q :
det.sI A/
This proves that the closed-loop spectrum under (3.35) differs from the open-loop one only in moving nQ
characteristic roots belonging to spec.A/ Q to spec.AQ C BQ FQ /, as expected.
Thus, to stabilize (3.33) we need to solve the left characteristic matrix equation (3.34) for AQ containing
all roots of  .s/ that are required to be shifted. This set must clearly include all unstable characteristic
roots of (3.33), i.e. those in C x 0 , but might contain additional modes. There are only a finite number
of such roots and finding them is currently a well understood numerical problem. The left characteristic
matrix equation can also be solved numerically, see [12] and some later developments of the same authors.
Note that spectral properties of AQ effectively define that matrix unambiguously, because any similarity
transformation of AQ just leads to an appropriate scaling of Q and R. If D´w D 0, which corresponds to
Q
a retarded system, the second column of (3.34) yields the closed-form R D e A QBw and then a simpler
version of the first column,
Q
QA C e A QBw C´ D AQ; Q (3.340 )
Q
to be solved in AQ and Q, and then BQ D QBu C e A QBw D´u . We illustrate the procedure with a simple
example, which represents a rare case when analytic solution to the stabilization problem is possible.

Example 3.1. Consider the system


      
P /
x.t 1 1 x.t / 1
D C u.t /
´.t / 1 0 ´.t  / 0

P / D x.t / C x.t
(i.e. x.t  / C u.t / in form (1.29)), whose characteristic function
 
s C 1 e s
 .s/ D det D s C 1 e s :
1 1

First, prove that this quasi-polynomial has no roots in C0 . To this end, note that the magnitude relation
corresponding to the characteristic equation s C 1 D e s reads js C 1j D je s j. This equality is con-
tradictory in Re s > 0 because its left-hand side is larger than 1 and its right-hand side is smaller than 1
there. Regarding pure imaginary roots, the magnitude relation for s D j! reads j1 C j!j D 1, which is
only solvable for ! D 0. Thus,  .s/ can have only unstable roots at the origin. It is easy to verify that
 .0/ D 0 for all  , indeed. This is a simple root, which can be seen from
s
 .s/ 1 e  e s
lim D 1 C lim D 1 C lim D 1 C  ¤ 0:
s!0 s s!0 s s!0 1
Thus, to stabilize this system we only need to move its single pole at the origin. This suggests the choice
of nQ D 1 and AQ D 0. Because D´w D 0 in this case, we only solve (3.340 ) which reads Q C Q D 0 and
is thus solvable by any Q ¤ 0. This yields R D Q and BQ D Q, so that the reduced system is xPQ D Qu.t /,
56 C HAPTER 3. S TABILIZATION OF T IME -D ELAY S YSTEMS

which is a plain integrator. This system is stabilizable and the control law u.t / D Q / for k > 0
k=Qx.t
stabilizes it, assigning its pole to s D k . But this implies that the control law
 Z t 
u.t / D Fv v.t / k x.t / C x.r/dr
t 
assigns s
sC1 e
;cl D .s C k/ ;
s
which is an entire function, although not a finite quasi-polynomial. Note that the control law above is
independent of the choice of Q. O
In some special cases, when the open-loop characteristic function  .s/ is a plain polynomial, the
left characteristic matrix equation can be solved analytically. For instance, if applied to the input-delay
system, whose state-space realization was discussed in Example 2.1 on p. 21 and for which ´ D u, it reads
 
  A B  Q 
Q R D AQ Q eA R :
0 0

If we choose nQ D n, then this equation is obviously solved by AQ D A, Q D I , and R D e A B , so that


BQ D B and we end up with the control law (3.22), as expected.
Another special case with a relatively simple solution is that where D´w D 0 and the matrices A and
Bw C´ have the following structure:
2 3 2 3
     0    
6 0     7 6 0 0    7
6 7 6 7
6 : : : : 7 6: : : : 7
A D 6 :: :: : : : :: :: 7 and A ´ Bw C´ D 6 :: :: : : : :: :: 7 ;
6 7 6 7
4 0 0    5 4 0 0  0  5
0 0  0  0 0  0 0
where the asterisk stands for a potentially nonzero element. This class of DDEs is a particular case of re-
tarded systems described by (1.29), where the open-loop characteristic function is still a plain polynomial.
We can then always choose nQ D n and Q D I , so that (3.340 ) reads
Q
AQ ADe A
B w C´ : (3.3400 )
This equation can be solved iteratively in an upper-triangular AQ, column by column, starting from the first
column. As a matter of fact, this immediately yields that aQ i i D ai i for all i 2 Z1::n . We provide a flavor
of this procedure via an example.
Example 3.2. Consider the problem of stabilizing a chain of n integrators connected in series with equal
delays in between, see Fig. 3.7. This setup can be viewed as a baby-platooning problem, with no con-
straints in information exchange between involved vehicles. Assuming that all states of the integrators are
measurable, this system can be described by the interconnection in Fig. 1.7 with
2 3
  0 In 0
Jn =s en
GD D 4 Jn 0 en 5 ;
In =s 0
In 0 0
where ei stands for the i th standard basis of Rn ,
2 3
0 1  0
  6 :::
6 :: : : :: 7
: : :7
Jn ´ 0 e1    en 1 D6 7
40 0  1 5
0 0  0
3.3. P ROBLEM - ORIENTED CONTROLLER ARCHITECTURES : CONTROL - THEORETIC INSIGHT 57

x1 1 s
x2 xn 1 1 s
xn 1 s
u
e e e
s s s

Fig. 3.7: Chain of n integrators with communication delays

is the n-dimensional Jordan block corresponding to the zero eigenvalue, and state components
2 3 2 3
x1 .t / x2 .t /
6 :: 7 6 :: 7
: 6 : 7
7 2 Rn and ´.t / D 6 7 2 Rn :
6 7
x.t / D 6
4 xn 1 .t / 5 4 xn .t / 5
xn .t / u.t /
The characteristic function of this system,  .s/ D s n , is expectably a plain polynomial.
Q
Equation (3.3400 ) for this system reads AQ D e A Jn and can equivalently be presented as the recursion
Q
Q 1D0
Ae and Q i De
Ae A
ei 1; i 2 Z2::n :

This relation defines an upper-triangular nilpotent AQ, whose i th column coincides with the .i 1/th column
of its matrix exponential. At the time we evaluate the i th column of AQ all previous columns are already
determined. But the upper-triangular structure of AQ implies that the first i 1 columns of its exponential
depend only on its first i 1 columns of AQ. Therefore, the recursion above can be implemented.
To illustrate the procedure, consider it in details for n D 3. In this case the steps of the recursion above
are as follows: 02 3 1 2 3
0   1  
Q
1: Ae Q 1D0 H) e A e1 D exp @4 0   5  A e1 D 4 0   5 e1 D e1
0   0  
02 3 1 2 3
0 1  1  
Q Q
2: Ae Q 2 D e A e1 D e1 H) e A e2 D exp @4 0 0  5  A e2 D 4 0 1  5 e2 D e2  e1
0 0  0 0 
Q
3: Ae Q 3De A
e2 D e2  e1 ;

where “” stands for not yet evaluated elements. Having determined AQ, we can find its matrix exponential
02 3 1 2 3
0 1  1 t t .t 2 /=2
Q
eAt D exp @4 0 0 1 5 t A D 4 0 1 t 5;
0 0 0 0 0 1
required for 2 3 2 2 3
1  3 2 =2 3 =2
Q
BQ D e A
Bw D´u D 0 1
4  5 en D 4  5
0 0 1 1
and the matrix function of r
2 3
1 t r  .t r  /.t r 3 /=2
Q Q
eA.t r/
R D eA.t r /
Bw D 0
4 1 t r  5;
0 0 1

which appears in the control law (3.35). As .A;Q B/


Q is controllable, we can find FQ assigning the eigenvalues
of AQ C BQ FQ arbitrarily. Choosing the closed-loop characteristic function as ;cl .s/ D .s C ˛/3 for some
˛ > 0, the application of Ackermann’s formula yields
 3 2 
FQ D ˛ ˛ .2 ˛ C 3/ ˛.. ˛ C 3/2 3/=2 :
58 C HAPTER 3. S TABILIZATION OF T IME -D ELAY S YSTEMS

vo vo vo
P x
PD P

K K x
KD
y u vi y u vi y u vi

(a) General plant (b) Dead-time plant (c) Delayed controller

Fig. 3.8: Internal stability setups

The control law (3.35) is then of the form


 Z t 
3 2
u.t / D Fv v.t / ˛ x1 .t / ˛ .2 ˛ C 3/x2 .t / C ˛ x2 .r/dr
t 
 Z t 
. ˛ C 3/2 3 
˛ x3 .t / ˛ ˛r 3 ˛.t C  / x3 .r/dr
2 t 
Z t  2 
˛ 2 . ˛ C 3/2 3
˛ r .3 C ˛t /˛r C u.r/dr
t  2 2
and it assigns all three closed-loop poles to s D ˛ .
For a general n the formulae are more involved, but an analytic form of AQ and its exponential can still
derived. Namely, it can be shown that
i 1
Q 1D0 Q iD
X .j i /i j 2 i j 1
Ae and Ae  ej ; i 2 Z2::n
j D1
.i j 1/Š
and i
Q
X t .t .i j / /i j 1
eAt ei D ej ; i 2 Z1::n
j D1
.i j /Š
(mind that 0Š D 1, as customary), which are both Toeplitz. O

3.4 Loop shifting and all stabilizing controllers


In this section another technique is discussed, which shows that the dead-time compensation is actually an
intrinsic part of every stabilizing controller. This offers further insight into the stabilization of input-delay
systems and justification of the DTC architecture.

3.4.1 Internal stability and loop shifting


We start with describing some general ideas. Consider the feedback interconnection in Fig. 3.8(a), where
P and K are LTI plant and controller, respectively. It defines the relation
      
I P y I   vo
D I P : (3.36)
K I u 0 vi
We say that this interconnection is well posed if
there is ˛ > 0 such that k.I P .s/K.s// 1 k < 1 for all s 2 C˛ :
If both P and K are finite dimensional, then this condition reads as the non-singularity of I P .1/K.1/.
In any case, well posedness requires the properness of .I P .s/K.s// 1 and implies that the transfer
function   1    
I P .s/ 0 0 I  
D C .I P .s/K.s// 1 I P .s/ ;
K.s/ I 0 I K.s/
3.4. L OOP SHIFTING AND ALL STABILIZING CONTROLLERS 59

vo vo vQ o
P P P

˘ ˘ ˘

˘ vi
- ˘ - - ˘ ˘
-
y
K u vi y
K u vi yQ
K u vi

(a) Equivalent of Fig. 3.8(a) (b) Moving summing and pickoff (c) yQ D y C ˘ u and vQ o D vo ˘ vi

vQ o vQ o
P C˘ PQ

yQ
K.I C ˘K/ 1
uQ vQ i yQ
KQ uQ vQ i

(d) uQ D u and vQ i D vi (e) Equivalent final loop

Fig. 3.9: Loop shifting by ˘ , with PQ ´ P C ˘ and KQ ´ K.I C ˘K/ 1

which can be derived by (A.3), is proper.  vo 


 y The feedback interconnection in Fig. 3.8(a) is said to be internally stable if all four systems vi 7!
u are stable. The reason for considering all possible closed-loop systems, rather than only one of them,
lies in the need to rule out stabilization of certain closed-loop systems via unstable cancellations in the
loop. It is readily seen that the system is internally stable iff
  1     
I P I   I 1
  So Td
I P D .I PK/ I P µ 2 H1 ;
K I 0 K Tc Ti
where So is the output sensitivity, Ti is the input complementary sensitivity, Td is the disturbance sensitiv-
ity, and Tc is the control sensitivity systems, aka the Gang of Four. It is worth emphasizing that internal
stability requires the system to be well posed (otherwise, So cannot be in H1 ) and K.s/ to be proper
(otherwise, Tc cannot be in H1 ).
Consider now the sequence of block-diagram transformations presented in Fig. 3.9. They transform
the system in Fig. 3.8(a) to that in Fig. 3.9(e) by redistributing dynamics between its elements on the basis
of a chosen linear ˘ . The transformation has different effects on the parts of the loop, it adds ˘ in
parallel to the plant P and in feedback to the controller K . Because these manipulations do not alter the
loop itself, we can expect that properties of the original setup in Fig. 3.8(a) can be analyzed in terms of
the transformed (shifted) setup in Fig. 3.9(e). The hope is that the latter might be easier to analyze for an
appropriate choice of ˘ . This is an old idea, see [79, Sec. 6.2] or [11, Sec. III.6] and the references therein,
which has been extensively used in nonlinear control, e.g. to introduce a required degree of passivity to
the plant.
Two technical aspects of the loop-shifting technique have to be kept in mind. First, the elements of the
shifted loop should themselves be well posed. This is obviously always true for PQ D P C ˘ , but might
require extra attention for the feedback KQ D K.I C ˘K/ 1 . Second, the manipulations in Fig. 3.9 alter
both exogenous and internal signals of the loop, in the way that the relation between signals of interest in
Fig. 3.8(a) and their counterparts in Fig. 3.9(e) are
         
yQ I ˘ y vQ o I ˘ vo
D and D : (3.37)
uQ 0 I u vQ i 0 I vi
They follows by standard block-diagram manipulation rules in the transition from Fig. 3.9(b) to Fig. 3.9(c).
These relations suggest that ˘ itself should be kept stable in order to preserve internal stability under loop
60 C HAPTER 3. S TABILIZATION OF T IME -D ELAY S YSTEMS

shifting. But apart from the limitations above, the choice of ˘ is arbitrary, which renders the method quite
flexible in its abilities to affect properties of PQ .
Our interest is to exploit those abilities in the context of dead-time plants of the form PD x  . To be
Q x
specific, the goal is to construct ˘ with which P D PD C ˘ is finite dimensional. This idea can be
traced back to [7], where it was applied to a wider class of infinite-dimensional systems. Implicitly, it is
also related to the earlier dead-time compensation developments in [77] discussed in ÷3.2.1, cf. (3.15).

3.4.2 Preliminary: truncation and completion operators


Consider a finite-dimensional system G given by its state-space realization G.s/ D D C C.sI A/ 1 B
At
˚ minimal). Its impulse response is g.t / D Dı.t / C C e B1.t /. By the FIR truncation
(not necessarily
operator  G associated with this G and a constant  > 0 we understand the system, whose impulse
response is the truncation of g.t / to the interval Œ0;  . This operator can be visualized as the mapping
˚
 G W 7 ! : (3.38)
0  t 0  t

Formally, the impulse response of this system is



g.t / 1.t / 1.t  / D g.t / C eAt B1.t  / D g.t / C eA eA.t /
B1.t  /:

The last term above can be recognized as the impulse response of the dead-time system GO D
x  , where
   
O A B A eA B
G.s/ D D : (3.39)
C eA 0 C 0

This immediately yields one possible representation of the transfer function of the truncation operator,
     
A B O e s A B A B s
 D G.s/ G.s/ D e (3.40a)
C D C D C eA 0

It can be verified by direct substitution that all singularities of the function above, which are the eigen-
values of A, are removable. But the same function can be represented in a different form, where no
singularities are present˚ at all. This representation follows directly from the Laplace transform of the
impulse response of  G , i.e. LfDı.t / C C eAt B1Œ0; .t /g. Straightforward use of (B.16) yields then
  Z 
A B .sI A/t
 DDCC e dtB: (3.40b)
C D 0

.sI A/t
˚ is an entire function and, because ke
This k < keAt k for all Re s > 0, it is bounded in C0 . Hence,
 G 2 H1 regardless of the stability property of G .
˚
A dual, in a sense, operator to the truncation is the FIR completion operator  G D x  , associated with
the input-delay alteration of G . It is defined as the truncation of the delay-free system GQ , whose impulse
response matches that of G D x  in the whole interval .; 1/, and can be viewed as the mapping

˚
x W
 GD 7 ! : (3.41)
0  t 0  t

This GQ can be determined from the impulse response equality

Q / D g.t
g.t  / D Dı.t  / C C eA.t /
B1.t / D C e A At
e B; 8t > :
3.4. L OOP SHIFTING AND ALL STABILIZING CONTROLLERS 61

A finite-dimensional GQ satisfying this equality has the transfer function


   
Q A B A e A B
G.s/ D D : (3.42)
C e A 0 C 0

By definition,     
A B s A B s
 e D  .1 e /D;
C D C e A D
so the following two representations follow from (3.40) via the substitution C ! C e A :
      
A B s Q s A B A B
 e D G.s/ G.s/e D e s (3.43a)
C D C e A 0 C D
Z 
A
D Ce e .sI A/t dtB e s D (3.43b)
0
˚
and we again have that  GD x  2 H1 regardless of the stability of G . Comparing this expression with
˚
(3.15), we can conclude that the MSP of Watanabe–Ito is effectively  PD x  for P with a strictly proper
transfer function. ˚ ˚
As was already discussed in Remark 1.4 on p. 6, the FIR systems  G and  GD x  are sometimes
referred to as distributed-delay systems. Although this terminology is avoided throughout the notes, one
can encounter it in the literature.

3.4.3 Loop shifting for dead-time systems


Return now to the internal stability problem. Consider the input-delay version of the system presented in
x  , see Fig. 3.8(b). With the loop-shifting procedure in mind
Fig. 3.8(a), where the plant is of the form PD ˚
and a systematic way to produce stable systems ˘ D  PD x  rendering PD x  C ˘ finite dimensional,
the following result can be formulated:

Theorem 3.2. K internally stabilizes the dead-time system in Fig. 3.8(b) iff
˚ 
K D KQ I  PD x  KQ 1 ; (3.44)
˚
for some KQ internally stabilizing the finite-dimensional PQ D PD x  C  PD x  in the setup of Fig. 3.9(e).
˚
Proof. The result follows from the loop shifting procedure with ˘ D  PD x  . Because PQ .s/ constructed
according to (3.42) is strictly proper, we have that lim˛!1 sups2C˛ k˘.s/k D 0, meaning that the feedback
in the interconnection of KQ is well posed whenever K.s/ is proper.
Another, algebraic, way to see the equivalence is via rewriting (3.36) as
       
I 0 I PQ yQ I  Q
 vQ o
D I P ;
0 I C K˘ KQ I uQ 0 vQ i

where (3.37) and the formula KQ D .I C K˘ / 1 K are used. It follows from the fact that the chosen
˘ 2 H1 that the boundedness of y and u for all bounded vo and vi is equivalent to that of their “tilded”
versions in (3.37).

It is perhaps not surprising now that the controller defined by (3.44) is exactly of the˚DTC-based
form
presented in Fig. 3.6, with the primary controller CQ D KQ and the DTC element ˘ D  PD x  . However,
unlike the clever guesses in Section 3.2 or the use of state-feedback based architectures in Section 3.3, this
architecture shows up as an outcome of an abstract procedure, not connected with any particular choice of
62 C HAPTER 3. S TABILIZATION OF T IME -D ELAY S YSTEMS

˘
u y
JQ11 JQ12
 

JQ21 JQ22
Q Q

x
Fig. 3.10: All stabilizing controllers for PD

the controller structure. We thus may claim that every stabilizing controller for the dead-time plant PD x
can be cast as a dead-time compensator.
This claim is further strengthened by the fact that we can parametrize all stabilizing controllers KQ
for the finite-dimensional PQ . This parametrization, known as the Youla–Kučera parametrization, can be
presented in several different forms, see [44, Sec. 6.1] for details. We essentially need only one of them,
that given in the result below.
Theorem 3.3 (delay-free Youla–Kučera parametrization). Consider an LTI plant P , given in terms of its
stabilizable and detectable realization P .s/ D C.sI A/ 1 B . A controller K internally stabilizes P iff
K D Fl .J; Q/ for
2 3
  A C BF C LC L B
J11 .s/ J12 .s/
J.s/ D D4 F 0 I 5; (3.45)
J21 .s/ J22 .s/
C I 0
where F and L are any matrices such that A C BF and A C LC are Hurwitz, and some stable Q.
The application of this result to the dead-time plant PD x  is straightforward in the light of the result
of Theorem 3.2. Indeed, that result establishes that the stabilization of PD x  reduces to that of the finite-
dimensional PQ , whose state-space realization is PQ .s/ D C e .sI A/ 1 B , as in (3.42). If the original
A

realization of P is stabilizable and detectable, then so is the realization of PQ above. Hence, the set of all
stabilizing controllers for PQ is KQ D Fl .JQ ; Q/, where
2 3

Q Q
 A C BF C eA LC e A eA L B
J .s/ J12 .s/
JQ .s/ D Q11 D4 F 0 I 5: (3.46)
J21 .s/ JQ22 .s/ A
Ce I 0

Combining this result with that Theorem 3.2 we end up with a complete parametrization of all stabilizing
controllers in the form depicted in Fig. 3.10. This is again a DTC-based controller.

3.4.4 Potential extensions


The loop shifting idea is not limited to the transformations of Fig. 3.9. An obvious alteration is to change
the direction of the “˘ ” blocks in Fig. 3.9(a) or, equivalently, swap P with K and adjust involved signals
appropriately. With the now familiar constraints on the stability of ˘ and the well posedness of the
internal feedback loop, such a transformation would again convert the stabilization problem for the system
in Fig. 3.8(a) to that in Fig. 3.9(e), but now with

PQ D P .I C ˘P / 1
and KQ D K C ˘:

This kind of loop shifting can be advantageous for some classes of systems with internal delays. For
example, let P .s/ D 1=.1 ae s / and apply the transformation above with ˘.s/ D ae s , which is
3.5. D ELAY AS A CONSTRAINT: EXTRACTION 63

´1 w1 ´1 vi
u
P ˚ ˚ 1
K
y
vo w2 ´2 w2

Fig. 3.11: General loop skewing

obviously stable and for which the feedback loop around the plant is well posed. With this choice PQ .s/ D 1
is rational. Of course, the “forward” and “backward” shifts can be combined, by carrying out successively.
Another class of loop transformations are so-called multipliers, see [79, Sec. 6.1] or [11, Sec. VI.9].
Their basic idea is that the cascade MM 1 can be introduced at any point of the loop and then split without
affecting stability if the multiplier M is bi-stable.
A yet more general transformation is shown in Fig. 3.11. The diagram there is rotated 90ı counter-
clockwise to present the loop in the so-called chain-scattering (implicit) form, for which the formulae are
simpler. But now ˚ , as well as ˚ 1 , represents a relation between a mixture of inputs and outputs, rather
than the more conventional i / o relation. For instance, the relation
         
´1 ˚11 ˚12 w1 ´1 ˚11 ˚12 ˚221 ˚21 ˚12 ˚221 w1
D ” D ;
w2 ˚21 ˚22 ´2 ´2 ˚221 ˚21 ˚221 w2

provided ˚22 is square and invertible. The second equality above is the conventional
 i / o relation. The loop
shifting of Fig. 3.9 corresponds in this setting to the choice ˚ D I˘ I0 . An example of a nontrivially
more general loop skewing is the scattering transformation of [1], popular in bilateral teleoperation. It
involves two sets of transformations of the form presented in Fig. 3.11, at the master and slave sides, both
with  p p 
b=2
p b=2
p
˚D ;
1= 2b 1= 2b
where b > 0 is the characteristic impedance of the transmission line. This transformation renders delayed
transmission lines passive. General stability preservation conditions, as well as relations between original
and transformed signals of interest, for the transformation in Fig. 3.11 are more involved though.

3.5 Delay as a constraint: extraction


As if a plethora of ideas presented up to this point were not enough, the chapter is concluded with yet
another approach to characterize all stabilizing controllers for dead-time systems. The idea here is to treat
the loop delay as a causality constraint imposed upon the controller, rather than a part of the plant. The
applicability scope of this approach is thus essentially limited to dead-time systems, where the delay can
be freely moved between the plant and the controller. At the same time, the approach extends to various
optimal control problems in a relatively straightforward manner, which renders it a powerful design tool
in many applications.
Before starting to expose the approach, a technical result is required.

Lemma 3.4. If G.s/ is proper, i.e. uniformly bounded on C˛ for a sufficiently large ˛ > 0, then
x 2 H1 :
G 2 H1 ” G ´ GD

Proof. The implication G 2 H1 H) G 2 H1 is obvious by the stability of the delay element. To prove
the other direction, assume, on the contrary, that G 2 H1 , whereas G is not. It follows from the relation
G.s/ D G .s/es and the fact that es is entire that G.s/ is holomorphic in C0 [63, Rem. 10.3]. Because
64 C HAPTER 3. S TABILIZATION OF T IME -D ELAY S YSTEMS

G.s/ is proper, there is ˛ > 0 such that it is uniformly bounded on C˛ . Hence, for G 62 H1 , the function
G.s/ must be unbounded in the strip C0 n C˛ . But in that strip kG.s/k D jes jkG .s/k  e˛ kG .s/k,
which is a contradiction.

Consider the internal stability setup in Fig. 3.8(c). Strictly speaking, this system is not equivalent to
that in Fig. 3.8(b), because the delay element D x  , which is a multiplier moved from one part to another, is
not bi-stable. Nonetheless, with mild well-posedness assumptions their stability properties are equivalent.
Indeed, it is readily verified that
   
So,3.8(b) Td,3.8(b) x
So,3.8(c) Td,3.8(c) D
x  Ti,3.8(b) D Tc,3.8(c) Ti,3.8(c)
Tc,3.8(b) D
:

Thus, if P .s/, K.s/, and .I P .s/K.s/e s / 1 are proper, the stability of the system in Fig. 3.8(b) is
equivalent to that in Fig. 3.8(c) by Lemma 3.4.
x  is a subset
The idea of the extraction approach is that the set of all delayed stabilizing controllers KD
of the set of all causal stabilizing controllers, say K0 . Hence, we may start with characterizing the latter
set first and then extract from it all  -delayed controllers. Because P is finite dimensional, Theorem 3.3
yields a complete parametrization of causal stabilizing controllers for it, which is K0 D Fl .J; Q/ for J
given by (3.45) and an arbitrary Q 2 H1 . Thus, all we need is to characterize all stable Q such that
Fl .J; Q/ D KD x  for a causal K , i.e. is a dead-time system.
To this end, let J be an invertible finite-dimensional LTI system having a 2  2 partition with square
J12 and J21 sub-blocks. Define  
1 H11 H12
H ´J D ;
H21 H22
where the partition is compatible with that of J , and bring in the decompositions
˚ ˚
H22 D  H22 C HO 22 D x  and H11 D x  D HQ 11  H11 D x ;

where HO 22 and HQ 11 have strictly proper transfer functions and can be obtained by (3.39) and (3.42),
respectively. We have then the following key technical result:
x  under
Lemma 3.5. If J is such that J12 and J21 are invertible, then a causal Q renders Fl .J; Q/ D KD
a causal K iff ˚
Q D  H22 C QQ D x

for a causal QQ . Moreover, all attainable K ’s are then in the form shown in Fig. 3.10 with
  1
Q HQ 11 H12
J D
H21 HO 22

and an arbitrary causal QQ .

Proof. The invertibility of J and its .1; 2/ and .2; 1/ sub-blocks guarantees [44, Prop. 5.6] that the mapping
Q 7! Fl .J; Q/ is bijective whenever it is well posed, with K0 D Fl .J; Q/ ” Q D Fu .J 1 ; K0 /. In
other words, K0 D KD x  iff

Q D H22 C H21 .I x  H11 / 1 KD


KD x  H12 D H22 C H21 .I x  / 1 KH12 D
KH11 D x ;

where the latter equality follows by the time-invariance of H11 and H12 . It is readily seen that the response
of this Q to the impulse applied at every tc coincides with that of H22 in the whole Œtc ; tc C   for every
causal K . This suggests that the required Q can be connected with the truncation of H22 .
3.5. D ELAY AS A CONSTRAINT: EXTRACTION 65

To prove that formally, consider first the system


˚
.I KH11 D x  / 1 K D .I K H11 K HQ 11 / 1 K D .I KQ HQ 11 / 1 K;
Q

where ˚
KQ ´ .I K H11 / 1 K
By arguments used in ÷3.4.3, we know that for any causal K the mapping K 7! KQ is bijective and well-
posed for all causal K (thus resulting in a causal KQ ). Thus, we can equivalently rewrite the relation for Q
in terms of KQ as
˚ 
Q D H22 C H21 .I KQ HQ 11 / 1 KH Q 12 Dx  D  H22 C HO 22 C H21 .I KQ HQ 11 / 1 KH Q 12 D x :

The system
QQ ´ HO 22 C H21 .I KQ HQ 11 / 1 KH
Q 12 D Fu .JQ 1 Q
; K/
is well posed (because HQ 11 .s/ is strictly proper) for all causal KQ and is thus causal whenever so is KQ .
The next step is to show that the mapping KQ 7! Fu .JQ 1 ; K/ Q is bijective. To this end, consider the
equality     
J11 J12 H11 I
D :
J21 J22 H21 0
Its first row yields that H11 D J211 J22 H21 and then the second raw—that H211 D J12 J11 J211 J22 .
Because J12 J11 J211 J22 is well defined, by assumption, we conclude that H21 is invertible. The invert-
ibility of H12 can be shown by similar arguments. But this implies that JQ possesses the same properties
as J itself and the statement follows. ˚
Thus, we showed that every Q rendering K0 D KD x  is of the form  H22 CQQ D
x  and that any causal
˚
Q Q Q
Q can be attained by a causal K . The result then follows by the relation K D K.I C  H11 D Q 1,
x  K/
which represents the system in Fig. 3.10 for KQ D Fl .K;
Q Q/Q .

The result of Lemma 3.5 applies to general, possibly time-varying controllers. In the time-invariant
˚
case, we just need to consider time-invariant QQ . Then the result of Lemma 3.4 and the fact that  HO 22 2
H1 can be used to prove that QQ 2 H1 ” Q 2 H1 and thus that the system in Fig. 4.1(b) character-
x .
izes all stabilizing controllers for PD
To conclude, construct a state-space realization of JQ from that of J given by (3.45). It is readily seen
that 2 3
A B L
J 1 .s/ D 4 C 0 I 5 :
F I 0
Hence, by (3.39) we have that
   
A L A eA L
H22 .s/ D H) HO 22 .s/ D
F 0 F 0

and by (3.42)—that
   
A B A B
H11 .s/ D H) HQ 11 .s/ D A :
C 0 Ce 0

Hence,
   
  A B eA L   A B eA L
HQ 11 .s/ H12 .s/ D A and H21 .s/ HO 22 .s/ D
Ce 0 I F I 0
66 C HAPTER 3. S TABILIZATION OF T IME -D ELAY S YSTEMS

1
where the realization for H12 is obtained by a similarity transformation of that in J , from which
2 3
A B eA L
JQ 1 .s/ D 4 C e A 0 I 5:
F I 0

Applying (B.23) we then end up exactly with the realization (3.46), matching the results derived by the
loop-shifting approach in ÷3.4.3.
Chapter 4

Performance of Time-Delay Systems

is a crucial property, it is seldom the ultimate goal of control design. Rather,


A LTHOUGH STABILITY
control systems are supposed to impose a required behavior on controlled systems, like good com-
mand following, sufficient disturbance attenuation level, required transient characteristics, et cetera. These
properties are known collectively as control performance. There are zillions of ways to express perfor-
mance of control systems, from classical loop-shaping and pole-placement ideas to more modern quanti-
tative characteristics based on system norms or Lyapunov-based decay estimates.
This chapter studies optimization-based control performance in dead-time systems, mainly in H2 and
H1 formulations, which is mostly motivated by my own preferences. Some preliminary familiarity with
optimization-based control is expected, albeit the exposition attempts to avoid getting too deep into solu-
tion technicalities. Rather, the main emphasis is placed on effects of loop delays on the controller structure
and on attainable performance levels. The last section addresses certain aspects of the use of DTC-based
architectures for industrial controllers (PI), with mostly qualitative arguments.

4.1 Standard H2 and H1 problems


The standard problem corresponds to the lower LFT configuration depicted in Fig. 4.1(a). The system G
there is known as the generalized plant and K is referred to as the controller. The goal is then to design
K to reduce the effect of w on ´ in the system T´w ´ Fl .G; K/ W w 7! ´. The generalized plant has two
inputs, w and u, and two outputs, ´ and y , whose meaning is spelled out below.
w is dubbed the exogenous input and contains all exogenous signals, whose effect is important for the
problem at hand. These may be a reference signal, a load disturbance, measurement noise, et cetera.
In many cases these are fictitious (normalized) signals forming the actual signals of interest.
´ is dubbed the regulated output and contains signals that are required to be kept “small” (in whatever
sense). These may be deviations from a required behavior, like tracking or estimation errors, actuator
signals and suchlike, weighted to focus their relative importance and important aspects of them.
y is the measured output, through which the controller acquires the information about the effect of w
on the system behavior.
u is the control input, which is the signal generated by the controller and through which the effect of w
on ´ can be affected.
The generalized plant G contains then dynamics of a controlled plant itself, sensors, and actuators (all,
normally, in its u 7! y part), as well as weighing functions, and even some fixed parts of the controller
(e.g. the integral action). For more details and examples see [44, Ch. 7].

67
68 C HAPTER 4. P ERFORMANCE OF T IME -D ELAY S YSTEMS

´ w ´ w

G G
y u y u

K x
KD
(a) General form (b) Delayed controller

Fig. 4.1: The standard problem setup

The reduction of the effect of w on ´ can be quantified by a norm of T´w . Below we consider the H2
and H1 system norms for this purpose.
 The H2 -norm of a causal LTI system G W u 7! y , whose impulse response g.t / is square integrable,
equals  Z  Z 
1 2
1=2
2
1=2
kGk2 D kG. j!/kF d! D kg.t /kF dt ; (4.1)
2 R RC

where kkF denotes the Frobenius matrix norm. It has also a stochastic interpretation: if u.t / is a
unit-intensity white Gaussian process, then kGk22 equals the steady-state variance of y.t /. The space
of all p  m systems with a bounded H2 -norm is known as H2pm (the dimensions are frequently
skipped). It comprises all causal systems with square-summable impulse responses. This is a Hilbert
space, with the inner product
Z Z
1 0
 
hG1 ; G2 i2 ´ tr ŒG2 . j!/ G1 . j!/ d! D tr Œg2 .t /0 g1 .t / dt : (4.2)
2 R RC

The H2 -norm normally fits applications where properties of exogenous signals are known sufficiently
well. Examples of the standard H2 problem are the infinite-horizon LQG and steady-state Kalman–
Bucy filtering problems.
 The H1 -norm of a causal LTI system G W u 7! y , whose transfer function G 2 H1 as defined by
(B.17), equals
kGk1 D ess sup kG. j!/k;
!2R

which is the peak gain of the frequency response of G . Its time-domain interpretation is the induced
L2 .RC /-gain of G , i.e. kGk1 D sup0¤u2L2 .RC / kyk2 =kuk2 , where kk2 stands for the L2 .RC /-norm
(the square root of its energy) of a signal. This norm fits applications where properties of exogenous
signals are unknown, as well as robust stability problems. Examples of the standard H2 problem are
the weighted- and mixed-sensitivity problems and the gap optimization.
It should be emphasized that H1 is the stability space of LTI system, so the H1 formulation clicks
well with the stability requirement. At the same time, not all systems with a finite H2 norm are stable
(although this is the case for finite-dimensional systems). That implies that some care should be taken for
in the stability analysis of H2 problems for infinite-dimensional systems.
An important property of both H2 and H1 standard problems is that there are complete parametriza-
tions of all their suboptimal solutions. In both of these cases, the parametrizations can be expressed in the
form
K D Fl .J; Q/ (4.3)
for some LTI generator of all suboptimal controllers J , which depends on the problem data, and a free
parameter Q, which is only limited to be stable and norm bounded. The bound on kQk depends on the
performance that we expect to attain. Moreover, the generator J is square and invertible, with square and
4.1. S TANDARD H2 AND H1 PROBLEMS 69

invertible .1; 2/ and .2; 1/ sub-systems. As we already saw in Section 3.5, these properties assure that the
mapping Q 7! K is bijective. This is a key property for solving delayed versions of the problem, which
is presented in Fig. 4.1(b).

4.1.1 State-space formulae


If the generalized plant G in Fig. 4.1(a) is finite dimensional, solutions can be developed in terms of its
state-space realization. Below such solutions are presented.

Standard assumptions
We assume that G is be given in terms of its state-space realization representing the joint dynamics of its
components, i.e. as 2 3
  A Bw Bu
G´w .s/ G´u .s/
G.s/ D D 4 C´ D´w D´u 5 ; (4.4)
Gyw .s/ Gyu .s/
Cy Dyw Dyu
where the input and output partitions are compatible with those in Fig. 4.1(a). It is conventional then to
impose the following assumptions on its parameters:
A 5 : the pair .A; Bu / is stabilizable,
A 6 : the pair .Cy ; A/ is detectable,
0
A 7 : the realization .A; Bu ; C´ ; D´u / has no invariant zeros in jR and D´u D´u > 0,
0
A 8 : the realization .A; Bw ; Cy ; Dyw / has no invariant zeros in jR and Dyw Dyw > 0.
Assumptions A 5,6 are necessary and sufficient conditions for the existence of an internally stabilizing
controller, so they are naturally required. Assumptions A 7,8 are technical and are required to guaran-
tee the solvability of two involved algebraic Riccati equations. The constraints that they impose on the
corresponding feedthrough terms just say that all components of the control input u are penalized in the
cost function (the full column rank of D´u ) and that all measurement channels are corrupted by noise
(the full row rank of Dyw ). Normally, although still not always, assumptions A 7,8 are necessary for the
corresponding optimization problems to be well defined.

All H2 suboptimal solutions for finite-dimensional generalized plants


The standard H2 problem for the system in Fig. 4.1 can be posed as the design on an internally stabilizing
K , which minimizes kT´w k2 . Its solution is based on the following two algebraic Riccati equations:

A0 X C XA C C´0 C´ .XBu C C´0 D´u /.D´u


0
D´u / 1 .Bu0 X C D´u
0
C´ / D 0 (4.5a)
and
AY C YA0 C Bw Bw0 .Y Cy0 C Bw Dyw
0 0
/.Dyw Dyw / 1 .Cy Y C Dyw Bw0 / D 0; (4.5b)

whose solutions X and Y are said to be stabilizing if A C Bu Fu and A C Ly Cy are Hurwitz, respectively,
where
0
Fu ´ .D´u D´u / 1 .Bu0 X C D´u
0
C´ / and Ly ´ .Y Cy0 C Bw Dyw
0 0
/.Dyw Dyw / 1: (4.6)

If A 5–8 hold true, then stabilizing solutions always exist, are unique, and such that X D X 0  0 and
Y D Y 0  0. Furthermore, X > 0 iff .A; Bu ; C´ ; D´u / has no invariant zeros in C n C x 0 and Y > 0 iff
x
.A; Bw ; Cy ; Dyw / has no invariant zeros in C n C0 .
70 C HAPTER 4. P ERFORMANCE OF T IME -D ELAY S YSTEMS

Theorem 4.1. Let A 5–8 hold true and D´w D 0. If all stabilizing controllers are presented in the form
K D Fl .J2 ; Q/, where
2 3
A C Bu Fu C Ly Cy C Ly Dyu Fu Ly Bu C Ly Dyu
J2 .s/ D 4 Fu 0 I 5 (4.7)
Cy Dyu Fu I Dyu

with Fu and Ly as in (4.6), then kT´w k22 D kFl .G; K/k22 D 0 C kD´u QDyw k22 , where

0 ´ tr .Bw0 XBw / C tr .C´ Y C´0 / C tr .XAY C YA0 X/: (4.8)

The central controller, corresponding to Q D 0, is then the unique optimal controller, attaining 0 .

Some remarks are in order:


Remark 4.1 (solution properties). The optimal K above is an observer-based controller comprised of the
LQR state feedback with the gain Fu and the Kalman–Bucy filter with the gain Ly . This separation is
remarkable and not quite obvious. At the same time, the optimal cost is not just a sum of the LQR (the
first term in the expression for 0 ) and the Kalman–Bucy (the second term) costs. It also contains the
coupling term tr .XAY C YA0 X/, which might be both positive and negative, depending of properties of
A. To explain this fact, rewrite the optimal cost as

0 D tr .Bw0 XBw / C tr .D´u Fu YFu0 D´u


0
/:

The first term above is still the LQR cost. The second term is the cost of estimating the signal v.t / D
D´u Fu x.t / from the measured y.t /. The signal v is nothing but the LQR control law, normalized by the
its penalty in the cost function. O

All H1 suboptimal solutions for finite-dimensional generalized plants


The standard H1 problem for the system in Fig. 4.1 can be posed as the design on an internally stabilizing
K , which renders kT´w k1 < for a given > 0. Its solution is based on two algebraic Riccati equations
too, now of the form
   
A0 X C XA C C´0 C´ X Bw Bu C C´0 D´w D´u
 0  1     
D´w D´w 2 I D´w
0
D´u Bw0 0
D´w
 0 0 XC C´ D0 (4.9a)
D´u D´w D´u D´u Bu0 0
D´u
and
   0 
AY C YA0 C Bw Bw0 Y C´0 Cy0 C Bw D´w Dyw0

 0
 1     
D´w D´w 2 I D´w Dyw0
C´ D´w
 0 0 Y C Bw0 D 0; (4.9b)
Dyw D´w Dyw Dyw Cy Dyw

whose solutions are said to be stabilizing if A C Bw Fw C Bu Fu and A C L´ C´ C Ly Cy are Hurwitz, where


   0
 1     
Fw D´w D´w 2 I D´w
0
D´u Bw0 0
D´w
´ 0 0 XC C´ (4.10a)
Fu D´u D´w D´u D´u Bu0 0
D´u
and
 0
 1
      D´w D´w 2 I D´w Dyw
0
L´ Ly ´ Y C´0 Cy0 C Bw 0
D´w 0
Dyw 0 0 : (4.10b)
Dyw D´w Dyw Dyw
4.1. S TANDARD H2 AND H1 PROBLEMS 71

Assumptions A 5–8 are necessary for the solvability of these equations, but might not be sufficient if is
not sufficiently large. Moreover, even if AREs (4.9) do admit stabilizing solutions, these solutions might
not be positive semi-definite, also depending on . At the same time, null spaces of X and Y do not
depend on . We still have that det.X/ ¤ 0 iff .A; Bu ; C´ ; D´u / has no invariant zeros in C n C x 0 and
x 0 . Also, the AREs in (4.9) reduce to the
det.Y / ¤ 0 iff .A; Bw ; Cy ; Dyw / has no invariant zeros in C n C
corresponding H2 AREs in (4.5) as ! 1.

Theorem 4.2. If A 5–8 hold true and kD´w k < , then the standard H1 problem is solvable iff the
following conditions hold:
(a) there is a stabilizing solution X to ARE (4.9a) such that X D X 0  0,
(b) there is a stabilizing solution Y to ARE (4.9b) such that Y D Y 0  0,
(c) .XY / < 2 .
In such a case, all -suboptimal, possibly infinite-dimensional, controllers are given by K D Fl .J1 ; Q/,
where 2 3
A Z Ly Z .Bu C L´ D´u C Ly Dyu /
J1 .s/ D 4 Fu 0 I 5; (4.11)
Cy Dyw Fw Dyu Fu I Dyu D
2 0 1=2 2 0 1=2
for any Q 2 H1 such that k.I D´w D´w / D´u QDyw .I D´w D´w / k1 < , where

A ´ A C Bw Fw C Bu Fu C Z Ly .Cy C Dyw Fw C Dyu Fu /;

0
D ´ Dyw D´w . 2 I 0
D´w D´w / 1 D´u , and Z ´ .I 2
YX/ 1
.

Some remarks are in order:


Remark 4.2 (solution properties). Although this fact is less evident than in the H2 case, the central subop-
timal controller, that corresponding to Q D 0, is also observer based. The resulting control signal is also
the H1 suboptimal estimate of the H1 state feedback control signal u D Fu x (which would be admis-
sible under measuring the whole state of the plant iff condition (a) of Theorem 4.2 holds). In contrast to
the H2 (Kalman–Bucy) case, parameters of the H1 estimator do depend on the signal it estimates, so the
formulae are more involved and nontrivial transformations are required to decouple the estimator ARE,
which depends on the state-feedback gain, from Fu . The coupling condition (c) of Theorem 4.2 is actually
a remnant of this procedure. O
Remark 4.3 (simplifications). The formulae of Theorem 4.2 are greatly simplified in the case when D´w D
0. Indeed, in this case the AREs (4.9) read

A0 X C XA C C´0 C´ .XBu C C´0 D´u /.D´u


0
D´u / 1 .Bu0 X C D´u
0
C´ / C 2
XBw0 Bw X D 0 (4.9a0 )

AY C YA0 C Bw Bw0 .Y Cy0 C Bw Dyw


0 0
/.Dyw Dyw / 1 .Cy Y C Dyw Bw0 / C 2
Y C´ C´0 Y D 0; (4.9b0 )
control and estimation gains are
0
Fu D .D´u D´u / 1 .Bu0 X C D´u
0
C´ / and Fw D 2
Bw0 X; (4.10a0 )

Ly D .Y Cy0 C Bw Dyw
0 0
/.Dyw Dyw / 1
and L´ D 2
Y C´0 ; (4.10b0 )
and D D 0. Still, the assumption D´w D 0 does not fit such important cases as the mixed and balanced
sensitivity problems. O
72 C HAPTER 4. P ERFORMANCE OF T IME -D ELAY S YSTEMS

d
yc 1 u y r
s
K -
ym
n

Fig. 4.2: System for optimal control design examples (delay-free case)

4.1.2 Design case study


To illustrate the use of the formulae above and the implication of the choice of the generalized plant
on properties of the resulted closed-loop system, in this subsection we study a simple academic design
example and analyze the design of H2 and H1 controllers for it.
Consider the simple single-loop control system shown in Fig. 4.2. The plant there is a plain integrator,
subject to load disturbances d.t /. To compensate the effect of d on the controlled output yc , a feedback
loop is introduced on the basis of a measured version of the controlled output, ym D yc C n, where n.t / is
a measurement noise signal. Our goal is to design a stabilizing controller K , which attenuates the effect of
the load disturbances on the controlled output, while not amplifying the effect of the measurement noise
on it. Also, we would like to achieve both these goals by a “reasonable” control effort.
The first step in constructing the generalized plant for this problem is to form the exogenous input and
regulated output the signals. The former is naturally taken of the form
 
d.t /
w.t / D ;
n.t /=&
where the weight & reflects the relative size of the measurement noise and the load disturbance. It is tuned
so that d and n=& have comparable intensities. As a result, a decrease of & implies putting more weight on
the load disturbance, while an increase of & renders the measurement noise more dominant. The regulated
signal of choice is  
yc .t /
´.t / D ;
%u.t /
whose L2 -norm k´k22 D kyc k22 C %2 kuk22 . In other words, we penalize both the increase of yc and that
of u, where the weight % serves as a tuning parameter, trading off disturbance attenuation and control
effort. which justifies the introduction of the control signal u to regulated output. The measured output is
obviously y D ym D yc n and the control input is u. With these choices, the generalized plant
2 3
0 1 0 1
6 1 0 0 07
G.s/ D 6 4 0 0 0 % 5:
7

1 0 & 0
It is readily seen that it satisfies all assumptions A 5–8 whenever & > 0 and % > 0.
Another way to interpret the generalized plant is via the relation
    
yc Td T d
D ;
u T Tc n
from which
        
1 0 yc 1 0 Td T 1 0 Td &T
´D D wD w D T´w w:
0 % u 0 % T Tc 0 & %T %&Tc
Thus, the closed-loop system T´w includes the disturbance sensitivity, twice the complementary sensitivity
weighted by % and by & , and the control sensitivity weighted by %& .
4.1. S TANDARD H2 AND H1 PROBLEMS 73

H2 design
Consider first the H2 performance measure. To use Theorem 4.1, we need the stabilizing solutions to the
Riccati equations (4.5a),
2
0X CX 0C1 X 1%  1  X D 0 H) X D %

(the stabilizing solution must be non-negative), and (4.5b),


2
Y 0C0Y C1 Y  . 1/  &  . 1/  Y D 0 H) Y D &:

The resulting gains are then


1 1
Fu D and Ly D ;
% &
for which A C Bu Fu D 1=% and A C Ly Cy D 1=& are indeed Hurwitz. Expectably, the state-feedback
gain Fu increases as the control penalty decreases. Likewise, the Kalman filter gain Ly increases as the
measurement noise weight decreases.
The optimal controller
 
1=% 1=& 1=& 1
K.s/ D Fl .J2 ; 0/ D D
1=% 0 %&s C % C &

then. It is stable for all % > 0 and & > 0, its static gain is a decreasing function of both % and & , and its
time constant %&=.% C & / is an increasing function of both these parameters. In other words, the decrease
of either one of these tuning parameter increases both the static gain and the bandwidth of the optimal K .
Consider now the four closed-loop systems (GoF) resulted from this design. We have that
     
Td .s/ S.s/ 1 1   1 %&s C % C &  
D P .s/ 1 D 1 s :
T .s/ Tc .s/ 1 C P .s/K.s/ K.s/ .%s C 1/.&s C 1/ 1

The closed-loop poles are at s D 1=% and s D 1=& , which is consistent with the separation arguments
for observer-based controllers.
The disturbance sensitivity magnitude
%2 & 2 ! 2 C .% C & /2
jTd . j!/j2 D ;
.%2 ! 2 C 1/.& 2 ! 2 C 1/
which is a monotonically decreasing function of ! with its peak kTd k1 D % C & attained at ! D 0, see
Fig. 4.3(a) for three choices. Clearly, the increase of either % or & increases kTd k1 , which is expectable as
both these constants aim at trading off the disturbance response with other closed-loop properties. At the
same time, the high-frequency gain of Td .s/, which equals 1, is independent of tuning parameters.
The control sensitivity magnitude
!2
jTc . j!/j2 D :
.%2 ! 2 C 1/.& 2 ! 2 C 1/
p
This function vanishes at both very small and very high frequencies, peaking at ! D 1= %& with the value
kTc k1 D 1=.% C & / D 1=kTd k1 , see Fig. 4.3(b). Thus, the increase of either % or & clearly decreases
kTc k1 . But the decrease of either one of these parameters does not necessarily increases the peak of the
control sensitivity, to that end both % and & have to be decreased.
The magnitude of the complementary sensitivity, which is the last function that we penalize in T´w ,
1
jT . j!/j2 D :
.%2 ! 2 C 1/.& 2 ! 2 C 1/
74 C HAPTER 4. P ERFORMANCE OF T IME -D ELAY S YSTEMS

6 20
0
-3
0

-6
-20
-20

-20

-40 -40 -40

10-1 100 101 102 10-1 100 101 102 10-1 100 101 102

(a) Disturbance sensitivity (b) Control sensitivity (c) Complementary sensitivity

Fig. 4.3: Closed-loop frequency-response plots

This is a monotonically decreasing function of ! , attaining its maximum kT k1 D 1 at ! D 0, see


Fig. 4.3(c). This value does not depend on tuning parameters, which is because the gain of the plant at this
frequency is infinite. But tuning parameters affect the time constants of this system and thus its bandwidth,
which is s r
 
1 1 6 1 1 1
!b D C C :
2 %4 %2 & 2 & 4 %2 & 2
If both % and & decrease, the bandwidth of T . j!/ increases.
As a matter of fact, the sensitivity magnitude,

.%2 & 2 ! 2 C .% C & /2 /! 2


jS. j!/j2 D ;
.%2 ! 2 C 1/.& 2 ! 2 C 1/
p
is contractive, i.e. smaller than or equal to 1, iff !  1= 2%& . This implies that as either % or & decreases,
the frequency range in which jS. j!/j  1 increases. This range of frequencies, in which the feedback
is effective, is a way to define the closed-loop bandwidth. This bandwidth increases faster if % and &
decrease simultaneously.

4.2 H2 design for dead-time systems


This section studies the H2 (LQG) problem for the system in Fig. 4.1(b), for which T´w D Fl .G; KDx  /.
We impose the same assumptions on the generalized plant G as in the delay-free case, i.e. suppose that
A 5–8 hold.

4.2.1 Extraction of optimal dead-time controllers


The main idea discussed in this subsection is essentially the same as that in ÷3.5. Namely, the starting point
is the recognition of the fact that no delayed controller can attain a performance level below that attainable
by its delay-free counterpart, 0 , just because delay imposes additional constraints on the controller. We
thus start with the parametrization of all sub-optimal causal controllers in Theorem 4.1 and then seek for
conditions to be imposed on its free Q-parameter to render the overall controller of the form KD x .
Two facts are instrumental towards this end. First, the performance attainable by any controller of the
form K D Fl .J2 ; Q/ for J2 given by (4.7) is kT´w k22 D 02 C kD´u QDyw k22 . Because 0 given by (4.8)
is fixed, we shall be concerned only with the second term above. Second, the generator of all suboptimal
controllers satisfies the assumptions of Lemma 3.5, so all Q’s rendering Fl .J2 ; Q/ a dead-time system for
4.2. H2 DESIGN FOR DEAD - TIME SYSTEMS 75

˘
u y

J2;
Q Q

Fig. 4.4: All H2 suboptimal controllers for the standard problem in Fig. 4.1(b)

˚
a given  must be of the form Q D  H2;22 C QQ D x  , where
2 3
  A Bu Ly
H2;11 .s/ H2;12 .s/ 1
´ J2 .s/ 4 Cy Dyu I 5;
H2;21 .s/ H2;22 .s/
Fu I 0
for some stable and causal QQ . Thus, the H2 optimization problem reduces to the problem of finding a
stable and causal QQ minimizing kQ0 C Q D x  k2 , where
˚
Q0 ´ D´u  H2;22 Dyw and Q ´ D´u QD Q yw :

A key observation about the expression above is that the two terms (systems) in the decomposition
Q0 C Q D x  have non-overlapping impulse responses. This implies that they are orthogonal in H2 (pro-
Q
vided Q 2 H2 , of course). Indeed, let q0 .t / and q .t / be the impulse responses of Q0 and Q , respectively.
By definition, q0 has support in Œ0;  , while q —in RC . The impulse response of Q D x  is then q .t  /
and it has support in Œ; 1/, see the discussion on p. 4. It then follows from (4.2) that
Z Z  Z
0
 0
 
hQ0 ; Q i2 D tr Œq .t  / q0 .t / dt D tr Œq .t  / q0 .t / dt C tr Œq .t /0 q0 .t C  / dt D 0:
RC 0 „ ƒ‚ … RC „ ƒ‚ …
D0 D0

Hence, by Pythagoras and the fact that a shift does not affect the H2 -norm, we have that
x  k22 D kQ0 k22 C kQ D
kQ0 C Q D x  k22 D kQ0 k22 C kQ k22 ;

which clearly implies that the norm is minimal under Q D 0. By A 7,8 , the matrices D´u and Dyw are
left and right invertible, respectively, so that Q D 0 ” QQ D 0. This yields the optimal Q D Q0 for
the dead-time system and the following result:
Theorem 4.3. Let A 5–8 hold true and D´w D 0. If all stabilizing controllers are presented in the form
shown in Fig. 4.4, where (Fu and Ly below are as in (4.6), again)
2 3
A C Bu Fu C eA Ly Cy e A C eA Ly Dyu Fu eA Ly Bu C eA Ly Dyu
J2; .s/ D 4 Fu 0 I 5 (4.12)
Cy e A Dyu Fu I Dyu
and  
˚ A Bu
˘ D  Gyu D   ;
Cy Dyu
x  /k2 D 0 C DT . /CkD´uQDyw k2 , where the delay-free optimal performance
then kT´w k22 D kFl .G; KD 2 2
0 is given by (4.8) and Z 
DT . / ´ kD´u Fu eAt Ly Dyw k2F dt: (4.13)
0
The central controller, corresponding to Q D 0, is the unique optimal controller, attaining 0 C DT . /.
76 C HAPTER 4. P ERFORMANCE OF T IME -D ELAY S YSTEMS

Proof. As the structure in Fig. 4.4 is the same as that in Fig. 3.10, it is only remained to show that kQ0 k22 D
DT . /. To this end, note that the impulse response of Q0 is q0 .t / D D´u Fu eAt Ly Dyw 1Œ0; .t /. The result
then follows by the second equality in (4.1).

Some remarks are in order:


Remark 4.4 (H2 cost of delay). The quantity DT . / defined in (4.13) is the increment of 0 due to the
presence of the loop delay in the setup of Fig. 4.1(b). Thus, it can be viewed as the cost of delay from the
H2 perspective. This is obviously a nonnegative quantity. It is also a nondecreasing quantity, just because
Z Cı Z  Z ı
At 2 At 2
kD´u Fu e Ly Dyw kF dt D kD´u Fu e Ly Dyw kF dt C kD´u Fu eA.tC/ Ly Dyw k2F dt;
0 0 0

where the last term in the right-hand side is nonnegative for all ı > 0. There might be situations though, in
which DT . / D 0. As D´u and Dyw are left and right invertible, respectively, this happens iff Fu eAt Ly D
0 for all t 2 Œ0;  . This, in turn, happens iff Fu .sI A/ 1 Ly D 0, which follows directly from the
Gramian-based controllability and observability test [44, Thm. 4.1]. Now, it can be verified that
   
A Ly A C Bu Fu C Ly Cy C Ly Dyu Fu Ly
D0 ” D Fl .J2 .s/; 0/ D 0:
Fu 0 Fu 0

The last equality takes place iff the optimal controller for the delay-free H2 problem is zero. This can
happen, but it is clearly not quite a typical situation. Hence, we may conclude that the cost of delay is
normally nonzero and is a strictly increasing function of  . O
Remark 4.5 (sampled-data vs. delayed feedback). It can be shown [43, ÷V.A] that the optimal attainable
H2 performance under a sampled-data controller with the sampling period h > 0 is 0 C SD .h/, where
Z h Z h t
1
SD .h/ ´ kD´u Fu eAt Ly Dyw k2F dt ds
h 0 0

(assuming that its A/D and D/A parts are also design parameters). Because
Z h Z h t Z h Z h Z h Z h Z h
1 1 1
f .t /dt ds  f .t /dt ds D f .t /ds dt D f .t /dt
h 0 0 h 0 0 h 0 0 0

for any bounded f .t /  0 (moreover, the equality holds iff f .t / D 0 for all t 2 Œ0; h), we have that

SD .h/  DT .h/;

with the equality iff Fl .J2 ; 0/ D 0. In other words, the cost of delay normally exceeds the cost of sampling
in the H2 case if the sampling period h coincides with the loop delay  . O
Remark 4.6 (controller structure). The H2 (sub) optimal controllers in Fig. 4.4 are clearly dead-time com-
pensator, with the modifies Smith predictor (MSP) DTC block, exactly like that introduced by Watanabe–
Ito. This provides an additional justification of the DTC controller configuration. O

4.2.2 Loop shifting solution


Another way to treat the H2 optimization for time-delay systems is an extension of the loop-shifting idea
discussed in Section 3.4. Such an extension might not be simple for general delay systems, even though
it applies to a general stabilization problem. The reason is that loop transformations, like those discussed
in ÷3.2.3 and ÷3.3.3, do not necessarily preserve the structure of the cost function, which is the regulated
4.2. H2 DESIGN FOR DEAD - TIME SYSTEMS 77

´ w ´ w ´ w

G´w G´u
 
x
G´w G´u D
  x 
G´w G´u D
Gyw Gyu Gyw GyuDx Gyw GQ yu
y u y uQ yQ uQ
x K
D K KQ
(a) Original problem x  uQ D u
(b) Moving delay to G , D (c) Shifting the loop, yQ D y C ˘ u

 
´ x O w ´ x Q́ O w
x  G´u 
D G´w D D G´w G´u


Gyw GQ yu Gyw GQ yu
yQ uQ yQ uQ

KQ KQ
(d) Shifting the performance channel x  Q́ D ´
(e) Pulling delay out, D w

Fig. 4.5: Loop shifting in the general problem setup; here GQ yu D ˘ C Gyu D
x  , KQ D K.I C ˘K/ 1 , and
G´w D  C GO ´w D x

signal ´ in Fig. 4.1. Still, in the case of dead-time plants there is a workaround proposed in [46], which is
outlined below.
Consider the setup in Fig. 4.5(a), which is exactly that in Fig. 4.1(b) modulo moving the delay from
the input of the controller to its output (so the controller is supposed to be time invariant). The first step
is to move that delay from the controller to the generalized plant as shown in Fig. 4.5(b). This adds the
delay element D x  to two sub-blocks of G . The new “control input” uQ is then the  -previewed version of u.
Next, the loop-shifting procedure of Section 3.4 with a stable ˘ is applied, resulting in the block-diagram
Q x Q 1
˚ Gyu D ˘ C Gyu D and K D K.I C ˘K/ . And, as we already saw, with the
in Fig. 4.5(c) with
x Q
choice ˘ D  Gyu D the system Gyu is finite dimensional. The problem is that the generalized plant
in Fig. 4.5(c) still contains a delayed sub-block, that from uQ to ´. So some more shifts are required. To
this end, select any  2 H1 such that G´w D  C GO ´w D x  for a finite-dimensional GO ´w . In this case,
the system can be equivalently presented in the form depicted in Fig. 4.5(d), in which both systems in
the upper raw are delayed. As a result, the delay element can be pulled out of the system, as shown in
Fig. 4.5(e). The signal Q́ there is related to the original regulated signal via ´ D Dx  Q́ C w .
The transformations above decomposes the closed-loop system T´w D Fl .G; K D x  / as

x  T Q́w ;
T´w D  C D (4.14)
Q K/
where T Q́w D Fl .G; Q for
 
GO ´w G´u
GQ ´ :
Gyw GQ yu
By the stability of  and Lemma 3.4, T´w 2 H1 ” T Q́w 2 H1 , i.e. the transformation preserves
stability. In light of the discussion preceding Theorem 4.3, it would make sense to separate the impulse
responses˚ of the two terms in the right-hand side of (4.14). This is always the case with the choice
 D  G´w , for which the impulse response of  has support in Œ0;   and thus

x  k22 D kk22 C kT Q́w k22 :


kT´w k22 D kk22 C kT Q́w D

This implies that the problem reduces to the standard finite-dimensional H2 problem for the generalized
78 C HAPTER 4. P ERFORMANCE OF T IME -D ELAY S YSTEMS

plant GQ , which can be solved by Theorem 4.3. In fact, as the “´u” and “yw ” parts of G and GQ coincide,
assumptions A 5–8 hold for G iff they hold for GQ .
To see the structure of the equivalent problem, bring in the realizations
   
O A eA Bw Q A Bu
G´w .s/ D and Gyu .s/ D
C´ 0 Cy e A 0

(by (3.39) and (3.42), respectively), so that


2 3 2 3
A eA Bw Bu A Bw e A Bu
Q
G.s/ D 4 C´ 0 D´w 5 D 4 C´ eA 0 D´w 5 : (4.15)
A
Cy e Dyw 0 Cy Dyw 0

Because the realization of the “´u” part above is the same as that in G , the control ARE (4.5a) and the
corresponding LQR gain Fu for GQ are the same as those for G . The realization of the “yw ” part of GQ is
similar to that of G , with the similarity matrix eA . As a result, the filtering ARE (4.5b) for GQ is solved by
0
eA Y eA  , where Y is the corresponding solution for G , with the Kalman filter gain eA Ly for Ly defined
in (4.6) for G . Therefore, applying the formulae of Theorem 4.3 to GQ we end up with the controllers in
Theorem 4.3.
The optimal cost is for the finite-dimensional problem for GQ can be expressed as (cf. Remark 4.1)
0 0
Q0 D tr .Bw0 eA  X eA Bw / C tr .D´u Fu eA Y eA  Fu0 D´u
0
/
0  0 
D 0 C tr Bw0 .eA  X eA X/Bw C tr D´u Fu .eA Y eA  Y /Fu0 D´u
0
;

where 0 is the optimal cost for G . Now, it can be verified that if W satisfies the equation A0 W C WA D V
for some V , then Z 
0 0
eA  W eA W D eA t V eAt dt:
0
Applying this expression to the AREs in (4.5), we have:
 Z  
0 A0 t 0 0 0 At
Q0 D 0 C tr Bw e .Fu D´u D´u Fu C´ C´ /e dtBw
0
 Z  
0
C tr D´u Fu eAt .Ly Dyw Dyw
0
Ly Bw Bw0 /eA t dtFu0 D´u
0
0
 Z   Z  
0 A0 t 0 At At 0 A0 t 0 0
D 0 tr Bw e C´ C´ e dtBw C tr D´u Fu e Ly Dyw Dyw Ly e Fu D´u dt
0 0
D 0 kk22 C DT . /;

where DT . / is defined by (4.13) and the expression for kk22 follows by the second equality of (4.1).
Thus, we end up with kT´w k22 D kk22 C Q0 D 0 C DT . /, exactly as in Theorem 4.3.

4.2.3 Design case study


Return to the problem studied in ÷4.1.2. Because A D 0 in this case, we have that eA D 1 for all  and
the central controller in Theorem 4.3 is the same as that in Theorem 4.1 and does not depend on the loop
delay. Yet there is always a DTC block with the transfer function ˘.s/ D .1 e s /=s present, resulting
in the overall controller
s
K.s/ D ;
.%s C 1/.&s C 1/ e s
whose static gain, K.0/ D 1=.% C & C  /, decreases as  grows. The controller in this case is stable for
all  , which can be seen by the fact that the frequency-response gain of 1=..%s C 1/.&s C 1// is strictly
4.2. H2 DESIGN FOR DEAD - TIME SYSTEMS 79

12
9 10
6 6 6

0 0 1

-20 -20 -20

-40 -40 -40

10-1 100 101 102 10-1 100 101 102 10-1 100 101 102

(a) % D 1 and & D 1 (b) % D 1 and & D 0:05 (c) % D 0:05 and & D 0:05

Fig. 4.6: Closed-loop disturbance sensitivity frequency-response plots

contractive for all positive frequencies. We already saw, in ÷3.2.1, that the use of the MSP dead-time
compensator renders the control sensitivity the same as in the delay-free case and the complementary
sensitivity—a delayed version of its delay-free counterpart. Thus, the magnitude plots in Figs. 4.3(b) and
4.3(c) remain unchanged and we can see the effect of the loop delay only via the disturbance sensitivity.
The plots in Fig. 4.6 present jTd . j!/j for various combinations of the weights % and & and for three
loop delays, namely for  2 f0; 1; 2g. The case of % D 1 and & D 1, which resulted in a cautious design
with the narrowest bandwidth and smallest static gain of the controller, is presented in Fig. 4.6(a). We
can see that the increase of the loop delay increases jTd . j!/j for low frequencies, although this increase
is relatively small. The effect of the loop delay becomes more visible as we decrease the weights. This
is most visible in the plot in Fig. 4.6(c), which corresponds to the “aggressive” choice of % D 0:05 and
& D 0:05. The increase of the delay from zero to 1 increase the sensitivity to step load disturbances by a
factor of 11, i.e. by more than 20 dB. These observations agree with the understanding that the effect of
loop delays is more harmful for systems with higher loop gains and, consequently, higher crossovers.

4.2.4 Extensions to systems with multiple loop delays


It is not hard to imagine a situations where different input and output channels have different delays, cf.
Remark 1.3 on p. 5. We already saw that several stabilization methods studied in Chapter 3 extend to
systems with multiple delays seemlessly, see Remark 3.1 on p. 46 for example. It happens that this is not
quite the case for performance-oriented methods. In this subsection we touch upon involved difficulties
and possible workarounds.
Consider first the apparently simplest nontrivial extension of the single-delay setup in Fig. 4.1(b) to
the multiple delay case. Namely, assume that there are two, possibly vector, control input channels, one
of which is delay free and another one is delayed by  > 0. In other words, we assume that
     
u0 u0 I 0 x f0;g Ky:
uD D x  uN  D x Ky µ D
u D 0 D

Such a delay element is referred to1 as the input adobe delay element. This situation corresponds to the
setup in Fig. 4.7(a) or, equivalently, in Fig. 4.7(b). In principle, we can now shift the feedback loop in
this system, similarly to what was done in Fig. 4.5(c). This step requires only minor alterations to the
single-delay case: Q yu
G ˘
  ‚  …„ ƒ ‚ ˚ …„ ƒ
Gyu Dx f0;g D Gyu;0 Gyu; D x  D Gyu;0 GQ yu; C 0  Gyu; D x

1 The term was coined in [37], where it was used as a building block (an adobe brick) in solving more general cases.
80 C HAPTER 4. P ERFORMANCE OF T IME -D ELAY S YSTEMS

´ w ´ w ´ w
x f0;g G´w ´ GQ ´u ˘u 1
     
G´w G´u G´w G´u D
Gyw Gyu Gyw GyuDx f0;g Gyw Gyu D x f0;g

y u y uN y uN
x f0;g K
D K K
(a) Original problem x f0; g uN D u
(b) Moving delay to G , D (c) Swapping adobe delay with G´u


´ w ´ Q́ O w
 C ´ GO ´w ´ GQ ´u ´ G´w GQ ´u
  

Gyw x f0;g ˘u
Gyu D Gyw GQ yu
y uQ yQ uQ

K uN ˘u 1
KQ
(d) Pre-shifting the loop and decomposing G´w (e) Final setup, yQ D y C ˘y uQ and ´ Q́ D ´ w

Fig. 4.7: H2 loop shifting for input adobe delay, with KQ D ˘u 1 K.I C ˘y K/ 1

for the respective partitioning of Gyu . But this appears to be a dead end in general. The main obstacle is
that the adobe delay element does not commute with G´u , unless this system is block-diagonal. It is thus
not clear how to pool the delay out of the generalized plant via the “´” channel, as done in the single-delay
case in Fig. 4.5(e).
To gain insight into this issue, let us simplify the problem even more. Assume for the time being that
 
G´u;0 G´u;0
G´u D ; (4.16)
0 G´u;

for a square and invertible G´u;0 , where the input partition is compatible with that in Dx f0;g . Consider the
signal     
G´u;0 G´u;0 D x u0 1
G´u;0 .u0 C G´u;0 x  uN  /
G´u;0 D
´u ´ G´u u D x D x  G´u; uN 
0 G´u; D uN  D
and split ˚ 1
1
G´u;0 x  D GQ a
G´u;0 D  G´u;0 x  µ GQ a
G´u;0 D ˘u;0 (4.17)
as in (3.43a). Denoting GQ ´u;0 ´ G´u;0 GQ a , which is finite dimensional, we end up with the relation
    
I 0 G´u;0 GQ ´u;0 I ˘u;0 u0
´u D x µ ´ GQ ´u ˘u 1 u:
N (4.18)
0 D 0 G´u; 0 I uN 

The system ´ is also an adobe delay element in this case, just with potentially different dimensions of
the delayed block in it (G´u; is not necessarily square). This yields the system in Fig. 4.7(c).
The rest is quite technical, albeit straightforward. Because ˘u is bi-stable, we can use it as a multiplier
and shift to the controller side as shown in Fig. 4.7(d). Partitioning the output of G´w according the the
partition of ´ and using the decomposition in (3.40a), we can write
        
G´w;0 G ´w;0 0 I 0 G´w;0
D ˚
x  GO ´w; D  G´w;
˚ C x µ  C ´ GO ´w ;
G´w;  G´w; C D 0 D GO ´w;

see the generalized plant in Fig. 4.7(d). The impulse response of this  still does not overlap that of ´ T
for every causal T , so it can be handled by already familiar approaches.
4.2. H2 DESIGN FOR DEAD - TIME SYSTEMS 81

u0

yQ y
˘u;0 KQ
x
D
u uN 
˘y;

Fig. 4.8: Feedforward action Smith predictor (FASP)

It is thus only left to eliminate the infinite-dimensional part in the “Gyu ” part of the generalized plant.
To this end, note that
   
Gyu Dx f0;g ˘u 1 D Gyu;0 Gyu; D x  Gyu;0 ˘u;0 D Gyu;0 GN yu;0 D x  Gyu;0 GQ a ;

where (4.17) is used to decompose ˘u;0 and GN yu;0 ´ Gyu; C Gyu;0 G´u;0 1
G´u;0 is finite dimensional.
Decomposing ˚
GN yu;0 D x  µ GQ yu;0 ˘y; ;
x  D GQ yu;0  GN yu;0 D

we end up with
   
x f0;g ˘u 1 D Gyu;0 GQ yu;0
Gyu D Gyu;0 GQ a 0 ˘y; µ GQ yu ˘y (4.19)

for a finite-dimensional GQ yu , which is ready for applying the loop shifting from ÷3.4.1. Thus, the final
setup is as shown in Fig. 4.7(e) with

KQ ´ ˘u 1 K.I C ˘y ˘u 1 K/ 1
D ˘u 1 K.I C ˘y K/ 1
(4.20)

(because ˘y ˘u 1 D ˘y ). Taking into account the orthogonality of  and D x  T Q́w , the H2 problem again
reduces to a finite-dimensional H2 standard problem, which can be solved by Theorem 4.1.
Now return to the structure of G´u assumed in (4.16). The structure itself is quite special and should
not be expected to hold true in many applications. Nonetheless, the ideas presented above do extend to a
general G´u under the mild condition that the feedthrough term G´u .1/ has full column rank. The most
technically nontrivial part of the general solution is the factorization
x f0;g D ´ GQ ´u ˘u 1
G´u D (4.21)

for a finite-dimensional GQ ´u , inner (i.e. stable and energy preserving) ´ , and a bi-stable ˘u . In fact, this
factorization results in GQ ´u having the same “A” matrix as G´u and in block upper triangular ˘u with static
invertible diagonal elements. Moreover, it is then always possible to decompose G´w D  C ´ GO ´w with
orthogonal  and ´ T Q́w for all T Q́w 2 H2 and a finite-dimensional GO ´w having the same “A” matrix as
G´w . Details, which are rather technical, can be found in [45, ÷III-B]. The input adobe delay result extends
then to the output adobe delay case, where there are delay-free and delayed measurements channels, and
eventually to systems with general multiple input and output delays by running a recursion of adobe delay
problems. In all these cases delayed H2 problems reduce to equivalent delay-free H2 problems. Details
go beyond the scope of the notes though.
Remark 4.7 (FASP architecture). The controller structure induced by (4.20) is of a special interest. The
mapping K 7! KQ defined by that relation is bijective, with K D ˘u K.I
Q ˘y K/Q 1 for a finite-dimensional
“primary controller” KQ designed for a finite-dimensional H2 problem. Taking into account the structure
of the “˘ ” elements from (4.18) and (4.19), which is preserved in the case of a general G´u , the controller
above corresponds to the block-diagram in Fig. 4.8. It has a “conventional” DTC FIR element ˘y; as
its internal feedback, although only from the delayed control channel. On top of that, there is also the
82 C HAPTER 4. P ERFORMANCE OF T IME -D ELAY S YSTEMS

“interchannel” DTC FIR element ˘u;0 , which, in a sense, compensates for the delay in the performance
channel. This unorthodox structure is known as the FASP (feedforward-action Smith predictor).
Curiously, nothing similar to FASP has appeared in the classical literature, where DTC architectures
are typically ad hoc. In fact, the question of a proper extension of the DTC architecture to systems
with multiple delays was for long considered an open problem, because stabilization-induced structures
produced rather poor feedback performance, see the discussion in the beginning of [23]. It was even
suggested by Jerome and Ray in [23] that in many situations artificial delays should be added to channels
with smaller delays to equalize loop delays, which is conceptually flawed. Only the advent of transparent
solutions to optimal control problems revealed the FASP architecture. This was done for the first time in
[37], as a technical by-product of the H1 optimization procedure for systems with multiple loop delays.
The arguments above are from [45] and are somewhat more transparent. O
It should be emphasized that the discrepancy between stability- and performance-induced DTC ar-
chitectures discussed above shows up only in problems with multiple i/o delays. It is not unreasonable
to expect that a similar discrepancy exists in more general multiple delay systems, like that discussed in
Remark 3.1 on p. 46. But studies of such architectures are yet to be carried out.

4.3 H1 design for dead-time systems


Let us turn now to the H1 problem for the system in Fig. 4.1(b). We again impose the same delay-free
assumptions on the generalized plant G , i.e. suppose that A 5–8 hold. To simplify the formulae, we also
assume throughout this section that D´w D 0, so that the formulae of Remark 4.3 on p. 71 can be used.

4.3.1 Extraction of -suboptimal dead-time controllers


The logic here follows that in ÷4.2.1, just this time we use Theorem 4.2 as the starting point. The generator
of all suboptimal controllers in Theorem 4.2 satisfies the assumptions of Lemma˚ 3.5 too, so all Q’s ren-
dering Fl .J1 ; Q/ a dead-time system for a given  must be of the form Q D  H1;22 C QQ D x  , where
1
H1 ´ J1 with
2 3
  A C Bw Fw 2 Z Y.XBu C C´0 D´u /Fu Z .Bu C L´ D´u / Z Ly
H1;11 .s/ H1;12 .s/
D4 Cy C Dyw Fw Dyu I 5;
H1;21 .s/ H1;22 .s/
Fu I 0
for some stable and causal QQ , i.e. such that its transfer function QQ 2 H1 . Thus, the H1 optimization
problem reduces to the problem of finding whether a QQ 2 H1 exists such hat kQ0 C Q D x  k1 < , where
˚
Q0 ´ D´u  H1;22 Dyw and Q ´ D´u QD Q yw :

Although this problem looks similar to the corresponding problem in the H2 case, the elegant H2
projection reasonings do not apply here. H1 is not a Hilbert space, so there is no orthogonality notion
on it. As such, the separation of the norms of Q0 and Q exploited in the H2 case does no longer hold.
In fact, zeroing QQ , similarly to the optimal strategy for the H2 case, is not the right course of action in
general. This is illustrated by the following simple example:
Example 4.1. Let  
1 1 e s
Q0 .s/ D  D
s s
x
and consider the problem of minimizing kQ0 C Q D k1 by a causal Q . First, in an attempt to imitate
the H2 solution, choose Q D 0. With this choice the attained norm is
ˇ
j1 e j! j j1 e j! j ˇˇ
kQ0 k1 D sup D D :
!2R ! ! ˇ
!D0
4.3. H1 DESIGN FOR DEAD - TIME SYSTEMS 83

Choose now
1 .2 /2 s 2 C  2
Q .s/ D
s s.2 s C  e s /
(an educated guess). This is an H1 function. Indeed, it is clearly proper, so bounded on C˛ for a
sufficiently large ˛ . Its singularities are at the origin and roots of the quasi-polynomial p.s/ D 2 sC e s .
The singularity at s D 0 is removable, because
.2 s C  e s / .2 /2 s 2 C  2  2
lim Q .s/ D lim D :
s!0 s!0 s.2 s C  e s / 
s
Roots of p.s/ can be analyzed via the Nyquist criterion for  e =.2 s/. Its crossover frequency is
!c D =.2 / and its phase at this frequency is
 j!c 
arg e D  !c D :
j2 !c 2
x 0 except the pair at
This implies that the quasi-polynomial p.s/ has all its roots in the left-half plane C n C
˙ j!c . But at those roots we have that

.2 s C  e s / .2 /2 s 2 C  2 16 ˙ j 2. 2/2


lim Q .s/ D lim D ;
s!˙ j!c s!˙ j!c s.2 s C  e s / . 2 C 4/
i.e. these singularities are also removable.
Now,
s
s 2  2 s e 2 s p. s/
Q0 .s/ C Q .s/e D s
D e :
 2 s C  e  p.s/
We already know that this is an H1 function, so

x  k1 D sup 2 je j! jp. j!/j 2


kQ0 C Q D j D  <  D kQ0 k1 :
!2R  jp. j!/j 

Thus, it is possible to improve the kQ0 k1 bound by at least a factor of =2  1:57 in this case by an
x  do not overlap.
appropriate choice of Q , even though the impulse responses of Q0 and Q D O

The problem of finding functions Q 2 H1 such that kQ0 C Q D x  k1 < is known as the Nehari
extension problem, named after Zeev Nehari. It is known that this problem is solvable iff the Hankel
norm of Q00 Dx  is smaller than , where Q0 is the adjoint operator, on L2 .R/, of Q0 , i.e. the system whose
0
impulse response is Œq0 . t /0, and the Hankel norm is the induced norm as an operator L2 .R / ! L2 .RC /.
The following result from [41], presented here without proof, gives a verifiable solvability condition
for the required Hankel norm to be smaller than and parametrizes all solutions to the problem.
Lemma 4.4 (Nehari problem for delayed systems). Let
 
A 0 B0
G0 .s/ D :
C0 0
˚
There is Q 2 H1 such that k G0 C Q D x  k1 < iff det ˙22 .t / ¤ 0 for all t 2 Œ0;  , where
    
˙0;11 .t / ˙0;12 .t / A0 B0 B00
´ exp t :
˙0;21 .t / ˙0;22 .t / 2 C00 C0 A00

If this condition holds, then all solutions of the problem can be presented in the form
˚ 
 G0 C Q D x  D ˚12 C .˚11 D x  I /QNEP .I C ˚22 C ˚21 QNEP / 1 ;
84 C HAPTER 4. P ERFORMANCE OF T IME -D ELAY S YSTEMS

where
8̂2 39
ˆ A0 B0 B00 0 B0 >
>
  2 0
A00 2 1
. /C00 1
<6 =
˚11 .s/ ˚12 .s/ C0 C0 ˙0;22 ˙0;22 . /˙0;21 . /B0 7
D  6 7
˚21 .s/ ˚22 .s/ ˆ4 C0 0 0 0 5>
>
:̂ ;
0 B00 0 0

and QNEP 2 H1 is such that kQNEP k1 < but otherwise arbitrary.


Example 4.2. Return to the problem studies in Example 4.1, in which G0 .s/ D 1=s . For this system
      
˙0;11 .t / ˙0;12 .t / 0 1 cos.t = / sin.t = /
´ exp t D ;
˙0;21 .t / ˙0;22 .t / 2 0 1 sin.t = / cos.t = /

so that the problem is solvable iff > 2= . This is exactly what the choice of Q in Example 4.1 attains.
The “central” Q , the one obtained by setting QNEP D 0, attains

cos.= /s 1 sin.= / .cos.2= /s 1 sin.2= //e s !2= 2  2 s e s


Q.s/ D ! ;
.cos.= /s 1 sin.= //s C . 1 sin.2= /s C cos.2= //e s  2 s C  e s

which leads to
 s 
2  2 s e 1 e s s 1 .2 /2 s 2 C  2
Q .s/ D s
C e D
 2 s C  e s s s s.2 s C  e s /
exactly as that in Example 4.1. Thus, the “educated guess” there was actually the optimal solution. O

The result of Lemma 4.4 is instrumental in the solution to the standard H1 problem for the system in
Fig. 4.1(b). To formulate this solution, define the matrix functions
  
A Bw Bw0
˙.t / ´ exp t :
2 C´0 C´ A0

and the matrices


 2
  
  0 C´0 D´u   0 2
X
B ´ Y I ˙ . / and C ´ Dyw Bw0 Cy ˙. / :
Bu I

The result below if the counterpart of Theorem 4.2 for dead-time systems:
Theorem 4.5. If A 5–8 hold true and D´w D 0, then the standard H1 problem is solvable iff the following
conditions hold:
(a) there is a stabilizing solution X to ARE (4.9a) such that X D X 0  0,
(b) there is a stabilizing solution Y to ARE (4.9b) such that Y D Y 0  0,
(c) Z .t / is well defined for all t 2 Œ0;  , where
  2
 1
  X
Z .t / ´ Y I ˙ 0 .t /
I

In such a case, all -suboptimal controllers are given as the system in Fig. 4.9, where
2 3
A C Bw Fw C Bu Fu C Z . /Ly C Z . /Ly Z . /B
J1; .s/ D 4 Fu 0 I 5 (4.22)
C I 0
4.3. H1 DESIGN FOR DEAD - TIME SYSTEMS 85

˘1
u y

J1;
Q Q

Fig. 4.9: All H1 suboptimal controllers for the standard problem in Fig. 4.1(b)

and
82 3 9
< A Bw Bw0 Bu =
˚ 2

˘1 .s/ D  Fu .G.s/; ŒG´w . s/0 /e s 2 0
D   4 C´ C´ A0 2 C´0 D´u 5e s
(4.23)
: ;
Cy Dyw Bw0 Dyu

for any Q 2 H1 such that kD´u QDyw k1 < .

Proof. Follows by plugging the conditions and the parametrization of Lemma 4.4 to the parametrization
of Theorem 4.2. Technical details are quite involved and can be found in [41].

Some remarks are in order:


Remark 4.8 (H1 cost of delay). It is readily seen that Z .0/ in Theorem 4.5 equals Z D .I 2 YX/ 1
from Theorem 4.2. This implies that the solvability conditions for the dead-time H1 problem cover those
of its delay-free counterpart, which is expectable. The addition is expressed in Theorem 4.5 in terms of the
matrix function Z .t /. Equivalently, it can be shown that the delay problem is solvable iff the delay-free
problem is solvable and, in addition, the solution to the differential Riccati equation

PPX .t / D PX .t /A C A0 PX .t / C C´0 C´ C 2
PX .t /Bw Bw0 PX .t / D 0; PX .0/ D X

exists 8t 2 Œ0;   and is such that .YPX . // < 2 . Yet another version is that the solution to the differen-
tial Riccati equation

PPY .t / D PY .t /A C A0 PY .t / C Bw Bw0 C 2
PY .t /C´0 C´ PY .t / D 0; PY .0/ D Y

exists 8t 2 Œ0;   and is such that .PY .0/X/ < 2 . Moreover, both PX .t / and PY .t / are non-decreasing
functions of t , in the sense that PPX .t /  0 and PPY .t /  0 for all t  0. Normally, the performance level
attainable in the delay-free case is attainable for no  > 0. But there might be situations, and not only
those in which the delay-free suboptimal controller K D 0, in which the addition of a finite delay does not
harm the attainable performance. O
Remark 4.9 (sampled-data vs. delayed feedback). An interesting observation is that the solvability con-
ditions for the dead-time H1 problem discussed in Remark 4.8 coincide with the solvability conditions
for the sampled-data H1 problem for the same generalized plant, see [43, ÷V.B]. In other words, the cost
of delay equals the cost of sampling in the H1 case if the sampling period coincides with the loop de-
lay, which is different from the H2 case. This fact is somewhat surprising, because causality constraints
imposed on the controller by sampling are less restrictive than those imposed by the delay. Nonetheless,
the disturbance is capable to “outsmart” the controller, which acts in open loop during each intersample
interval, in this situation. O
86 C HAPTER 4. P ERFORMANCE OF T IME -D ELAY S YSTEMS

Remark 4.10 (controller structure). The H1 suboptimal controllers in Fig. 4.9 are dead-time compensator,
again. Yet the DTC block ˘1 given by (4.23) is quite different from the modifies Smith predictor (unless
G´w D 0 or D 1). It is intriguing to understand the rationale behind this structure. To this end, write
2 0
Fu .G; G´w / D Gyu C Gyw . 2 I 0
G´w G´w / 1 G´w
0
G´u ;
0 2 0
where G´w denotes the adjoint operator over L2 .R/. Thus, the relation y D Fu .G; G´w /u can be
presented in the form
y D Gyw w? C Gyu u;
where the exogenous input is generated as

w? D . 2 I 0
G´w G´w / 1 G´w
0
G´u u ” w? D 2 0
G´w .G´w w? C G´u u/ D 2 0
G´w ´:

It can be shown that this signal is the worst-case disturbance in the open-loop setting, viz. the signal that
maximizes k´k22 2 kwk22 for any given u. Thus, the H1 DTC element compensates the loop delay
under the worst-case open-loop scenario. The rationale behind this is well founded. Indeed, the whole
H1 design methodology is effectively a game where u plays against w , trying to minimize k´k22 2 kwk22 .
But the effect of the control signal on ´ delays with respect to that of w , so in the “prediction” part we can
only act in open loop. The DTC form of the H1 controller was first recognized by Meinsma and Zwart in
[38] for the mixed-sensitivity problem and then derived, and interpreted, for the general case in [41]. O

4.3.2 Loop shifting approach


The loop-shifting procedure of Fig. 4.5 is not quite helpful in the H1 case, just because systems with
non-overlapping impulse responses can still alter the effect of each other, cf. Example 4.1. But there is
one special case where the procedure can still be used in the H1 context. This is the case where G´w D 0,
for which we can take  D 0 and end up with the relation Fl .G; K Dx  / D Fl .G;
Q K/Q D
x  . Because the delay
does not affect the magnitude of the frequency response, we have that

x  /k1 D kFl .G;


kFl .G; K D Q K/k
Q 1;

which implies that the problem reduces to a finite-dimensional H1 problem for GQ given by (4.15) and the
DTC element is the standard MSP, like that in the H2 case.
H1 problems with G´w D 0 are actually important. Several robust stability problems fall into this
category. For example, consider a plant P , described as P D P0 .I C W2 IM W1 / for known nominal plant
P0 and weights W1 ; W2 2 RH1 and an uncertain element IM , of which we only know that IM 2 H1 and
kIM k1  ˛ for some ˛ > 0 known as the uncertainty radius. This is the so-called input multiplicative
modeling uncertainty model. It follows from small-gain arguments that a controller K robustly stabilizes
this plant, i.e. stabilizes all P of this form, iff kFl .G; K/k1 < 1=˛ , where the generalized plant
 
0 W1
GD
P 0 W2 P 0

has a zero “G´w ” part. Likewise, the generalized plants for the output multiplicative uncertainty model
P D .I C W2 OM W1 /P0 and for the additive uncertainty model P D P0 C W2 AD W1 are
   
0 W1 P 0 0 W1
GD and G D ;
W2 P 0 W2 P 0

respectively, and both have zero “G´w ” parts.


4.4. T UNING INDUSTRIAL CONTROLLERS 87

d
y u eQ r
P .s/ e s
CQ .s/ -
˘.s/

n yQ

Fig. 4.10: Unity-feedback setup with dead-time compensator

A consequence of this is that the MSP is actually a part of the optimal robust controller, i.e. the
controller stabilizing P under the maximum uncertainty radius. This might sound surprising, taking into
account that the dead-time compensation procedure is intrinsically open loop. Moreover, if P0 is stable,
then we can always choose ˘ D P0 .1 D x  / in the loop-shifting procedure in Fig. 4.5, resulting in the
classical Smith predictor as the optimal controller. With this choice, GQ yu D Gyu D P0 and we conclude
that the Smith controller has the very same robustness level for a dead-time plant as its primary controller
has for the delay-free version of the plant, with respect to either multiplicative or additive uncertainty (in
fact, even with respect to some classes of structural uncertainty). This conclusion is not that obvious.

4.4 Tuning industrial controllers


This chapter is wound up with a brief discussion of the use of the DTC controller architecture to control
simple industrial processes. A representative setup is presented in Fig. 4.10, where P is a low-dimensional
SISO plant, ˘ is a DTC element, and CQ is a primary controller designed in a delay-free setting.
A typical design procedure for such systems is to fix the DTC element (often, as the classical Smith
predictor) and then to design a fixed-structure (often, PI) primary controller for a delay-free equivalent
plant. This is fundamentally different from the optimization-based approaches discussed above, where all
parts of the controller are designed together. Arguably, the most important requirement to this separated
design procedure is its transparency, namely the ability of the resulted closed-loop system to inherit prop-
erties of the delay-free loop, for which CQ is designed. The choice of ˘ is a key for a successful design
towards this end. So in the discussion below the main emphasis is placed on the effect of the DTC element
on the design transparency. Also, because these kinds of methods are seldom analytical, the ideas are
introduces via simple examples.
One day it should be written . . .
88 C HAPTER 4. P ERFORMANCE OF T IME -D ELAY S YSTEMS
Chapter 5

Implementation of DTC-based Controllers

that dead-time compensation (DTC) architectures are


W E SAW IN THE PREVIOUS TWO CHAPTERS
intrinsic to stabilization and optimal control problems for time-delay systems. In the heart of these
architectures are DTC elements, which are normally infinite-dimensional FIR systems of the form
         
A B s A e A B A B s A e A B B I
˘.s/ D   e D e D (5.1a)
C D C 0 C D C 0 D e s I
Z 
DC e .sI A/t dt e A B D e s (5.1b)
0

(cf. (3.43) on p. 61). Such elements are only parts of overall controllers, like those presented in Fig. 3.10
on p. 62 or Fig. 4.8 on p. 81. However, their delayed parts do not blend easily with finite-dimensional
primary controllers and their finite-dimensional parts, like C.sI A/ 1 e A B above, cannot be separated
from ˘ for DTC elements must be stable themselves. For these reasons, DTC elements are implemented
separately from primary controllers, as FIR systems. This chapter discusses related implementation issues.

5.1 General observations


The last equality in (5.1a) suggests that ˘ W u 7! y˘ can be implemented as the following dynamics:
(
xP˘ .t / D Ax˘ .t / C e A Bu.t / Bu.t  /
(5.2)
y˘ .t / D C x˘ .t / C Du.t  /

The only infinite-dimensional element required for this implementation is a buffer to realize the delay line
for having access to the past u, which is a relatively simple element. However, this implementation does
not always work, as can be seen in the example below.

Example 5.1. Consider the system presented in Fig. 5.1, which is an implementation of controller (3.22)

d
y 1 s
u r
s 1
e -
e  e s

s 1

n 2e yQ

Fig. 5.1: Benchmark system for DTC implementation

89
90 C HAPTER 5. I MPLEMENTATION OF DTC- BASED C ONTROLLERS

0
0 1 5 10 15

Fig. 5.2: DTC element in Fig. 5.1 implemented via (5.2), solid line

on p. 48 for the plant with A D 1 and B D 1 and for F D 2. It should result in the closed-loop system
Tyr W r 7! y with the transfer function
e s
Tyr .s/ D ;
sC1
which is readily verifiable. The expected response of the closed-loop system to the unit-step r.t / D 1.t /
is y.t / D .1 e t /1.t  /, which is presented in Fig. 5.2 by the red line. Yet implementing the DTC
part of the controller according to (5.2) leads to an unstable closed-loop response, as shown by the cyan-
blue line in Fig. 5.2. The simulations producing this are carried out in Simulink with the variable-step
ode45 solver under a maximal step size of 0:0001, which should be small enough for this closed-loop
time constant (and the results do not improve under smaller steps). O

The results of Example 5.1 can be understood by noticing that the representation of ˘ via (5.1a)
actually cancels all modes of A. If A is not Hurwitz, as in Example 5.1, these cancellations are unstable.
As a result, inaccuracies in performing those cancellations, say due to roundoff errors, might be disastrous.
Indeed, the output of ˘ in this case is not y˘ generated by (5.2), but rather yQ˘ generated by
(
xP˘ .t / D Ax˘ .t / C e A Bu.t / Bu.t  / C roff .t /
(5.3)
yQ˘ .t / D C x˘ .t / C Du.t  /

where roff represents roundoff errors in solving the differential equation for x˘ (accounting for errors in
the second equation is not quite important here). Assuming zero initial conditions, .x˘ .0/; uM 0 / D 0,
Z t

yQ˘ .t / D C eA.t s/ e A Bu.s/ Bu.s  / C roff .s/ ds C Du.t  /
 0Z t  Z t
D C eA.t s / Bu.s/ds C Du.t  / C C eA.t s/ roff .s/ds µ y˘ .t / C ˘ .t / (5.4)
t  0

for all t  0. The error signal ˘ is the response of a system with the transfer function C.sI A/ 1 to
roff . If the matrix A is not Hurwitz, which is what we have in Example 5.1, this response diverges and we
end up with the unstable behavior like that in Fig. 5.2.

5.2 Implementation via reset mechanism


An elegant workaround for handling unstable cancellations in (5.2) was proposed by Tam and Moore in
[70] in the context of fixed-lag smoothing, which also involves an analog FIR part. The idea is to reset
dynamics (5.2) periodically to avoid the error signal ˘ in (5.4) to grow up unbounded. A mechanical
reset (5.2) would obviously result in a different response from that of ˘ . But because ˘ is FIR, i.e. has a
finite memory, such deviations would last only for  time units after the reset. This fact can be exploited
5.2. I MPLEMENTATION VIA RESET MECHANISM 91

by introducing two clones of ˘ , which run in parallel and reset at odd and even multiples of the delay  ,
respectively. In this scheme, while the first system generates the signal y˘ , the second one accumulates
right initial conditions. After  time units the second system is ready to generate the correct y˘ , so at this
point the first system is reset and the roles are interchanged.
To be specific, consider (5.3) and assume that its state .x˘ .t /; uM t / is reset to zero at t D tc . In this case
Z t

x˘ .t / D eA.t s/ e A Bu.s/ Bu.s  /1.s tc  / C roff .s/ ds
tc
Z t Z t Z t
D eA.t s / Bu.s/ds eA.t s/ Bu.s  /ds C eA.t s/ roff .s/ds
tc minftc C;tg tc
Z t Z t  Z t
D eA.t s / Bu.s/ds eA.t s / Bu.s/ds C eA.t s/ roff .s/ds
tc minftc ;t g tc
Z t Z tc Z t
A.t s / A.t s /
D e Bu.s/ds e Bu.s/ds C eA.t s/ roff .s/ds
t  minftc ;t g tc

for all t  tc . Thus, yQ˘ .t / D y˘ .t / C yı .t / C ˘ .t /, where the error caused by (potentially wrong) zero
initial conditions, Ztc
yı .t / ´ C eA.t s /
Bu.s/ds;
minftc ;t g
vanishes after t D tc C  , when the system forgets all wrong initial conditions. The term ˘ due to round-
off errors still tends to diverge under non-Hurwitz A, but if the system resets every T time units, then it
remains uniformly bounded. In fact, in this case [11, Sec. II.6]
Z T
k˘ k1  kC eAt k1 dt kroff k1 ; (5.5)
0
P
where the 1-norm of a matrix / vector M is kM k1 ´ maxi j jMij j and the L1 .RC /-norm of a signal
 is kk1 ´ sup t2RC k.t /k1 .
Thus, the implementation of ˘ W u 7! y˘ by the time-varying dynamics
8̂ A
< xP 1 .t / D Ax1 .t / C e Bu.t / Bu.t  /; x1 ..2k 1/ / D 0
A
xP 2 .t / D Ax2 .t / C e Bu.t / Bu.t  /; x2 .2k / D 0 (5.6a)

y˘ .t / D  t C x1 .t / C .1  t /C x2 .t / C Du.t  /

where for all k 2 ZC the switching function


(
1 if t 2 Œ2k; .2k C 1/ /
t D ; (5.6b)
0 if t 2 Œ.2k C 1/; .2k C 2/ /

results in a stable system, in the sense that the error ˘ caused by roundoff errors is uniformly bounded
by (5.5), with T D 2 (provided the roundoff errors are the same for each clone). The implementa-
tion of equations (5.6) is straightforward, with relatively low computational expenses and standard tools
applicable.
At the same time, there is not much we can do to improve the accuracy of the method. It is always
a matter of how large 2 vis-à-vis unstable modes of A. In principle, it is possible to implement more
than two clones of (5.2) in parallel, with a round-robin scheduling. But even in this case the run time of
each clone between resets is lower-bounded by  , with the time required to accumulate the right initial
condition. Thus, if there are  2 N n f1g clones, the horizon in the error bound in (5.5) decreases only to
T D  =. 1/ >  , which might still be insufficient if the delay is too large.
92 C HAPTER 5. I MPLEMENTATION OF DTC- BASED C ONTROLLERS

1 1 1

0 0 0

-1 -1 -1
0 1 3 5 7 9 0 2 4 6 8 10 0 1 2 3 4 5 6 7 8 9 10

(a) Response of the first clone (b) Response of the second clone (c) Combined response

Fig. 5.3: Response of ˘ implemented by (5.6) to a square wave of period 2 and amplitude e=.e 1/

Example 5.2. Return to the problem studied in Example 5.1 and consider its DTC element, whose transfer
function
e  e s
˘.s/ D
s 1
and the static gain ˘.0/ D 1 e  . Fig. 5.3 presents the response of this ˘ to a square wave with the
frequency 2 [sec] and the amplitude 1=˘.0/ D e =.e 1/ under  D 1. The responses of each one of two
clones to the same input are depicted in Figs. 5.3(a) and 5.3(b). The dashed lines there represent intervals,
where proper initial conditions are accumulated after each reset and solid lines represent intervals in which
the corresponding output is used as the actual y˘ . They then combine into the solid line in Fig. 5.3(c),
which is bounded and virtually coincides in this case with the expected response. The response of the
closed-loop system in Fig. 5.1 is then indistinguishable from the ideal response y.t / D .1 e t /1.t  /,
which is shown by the red line in Fig. 5.2. O

5.3 Rational approximations


Another direction in implementing ˘ given by (5.1) is to approximate it by a finite-dimensional system.
This would put the DTC element on an equal footing with the primary controller, so that the overall
controller can be implemented as one block, by standard tools. Several approaches to finite-dimensional
approximations are outlined below.

5.3.1 Naı̈ve Padé


The “laziest” approach is just to replace the delay element in (5.1a) with its Œn; n-Padé approximant
studied in ÷1.3.1. This results in
   
A e A B A B
˘naı̈veP;n .s/ D Rn;n .s/
C 0 C D
and requires only standard technical and numerical tools. But this approach does not work in the case
when A is not Hurwitz, at least it does not work off the shelf. Indeed, nothing guarantees that unstable
modes of A are canceled in the expression above. In fact, they generally are not. For example, for the


DTC element in Fig. 5.1 we have that
0:632121.s 1:00148/.s 11:9822/
.s 1/.s 2 C6sC12/
if n D 2
1
e Rn;n .s/ 1:36788.s 0:99998969/.s 2 4:54542sC55:4546/
˘naı̈veP;n .s/ D D .s 1/.sC4:64437/.s 2 C7:35563sC25:8377/
if n D 3 (5.7)
s 1
0:632121.s 1:00000004/.s 39:8821/.s 2
2:39698sC42:1242/
.s 1/.s 2 C8:41516sC45:9512/.s 2 C11:5848sC36:5605///
if n D 4
5.3. R ATIONAL APPROXIMATIONS 93

-8.2
-10.2 -10.3
-12.4 -12
-14.1

-20 -20 -20

-40 -40 -40

-60 -60 -60


0 1 0 1 0 1
10 10 10 10 10 10

(a) n D 2 (b) n D 3 (c) n D 4

1
Fig. 5.4: Normalized approximation errors for rational approximations of ˘.s/ D .e e s /=.s 1/

are all unstable, which is not what we need. Still, as can be seen from the formulae above, their unstable
pole at s D 1 is quite close to a zero and can thus be canceled without impairing the frequency response
of the resulted approximation much. Thus, the approach might be a handy tool for a quick approximation
if accompanied by a manual cancellation of all unstable poles of the result. This works well for the
approximation in (5.7), whose approximation errors are presented in Fig. 5.4 by cyan-blue lines for the
approximant degrees n D 2; 3; 4. In canceling close pole and zero there, the static gain of the approximant
was kept equal to ˘naı̈veP;n .0/ D ˘.0/. However, this approach is simple only in the SISO case and might
x 0.
fail if the resulting transfer functions have relatively distant poles and zeros in C

5.3.2 Padé with interpolation constraints


The problem above can be fixed by adding extra interpolation constraints to the Padé approximant of e s
so that
Rn;n .i / D e i (5.8)
x 0 n f0g (eigenvalues at the origin are already handled by the Padé approximant).
at all eigenvalues of A in C
If the algebraic multiplicity of an eigenvalue is larger than 1, then additional constraints on derivatives of
the approximant at that point must be added. This ensures that all such eigenvalues are canceled in the
approximant. Such approximants are special cases of so-called multipoint Padé approximants, where an
approximant is designed to match a function and its derivatives at several points.
Adding constraints like (5.8) to the procedure described in ÷1.3.1 is a matter of replacing the last rows
of (1.33) on p. 14 with appropriate linear constraints. Taking into account that (1.33) approximates es ,
each constraint (5.8) should be recast as Rn;n . i  / D e i = , which results in the linear constraints
2 3
1
6q 7
6 1 7
6 : 7
6 :: 7
6 7
 n n i  i  n i 
6
n 6 qn 7
7
1 i     . 1/ .i  / e e i     . 1/ e .i  / 6 7 D 0:
6 p0 7
6 7
6 p1 7
6 7
6 :: 7
4 : 5
pn

The rest is technical. If the number of interpolation points does not exceed the number of parameters,
which is 2n C 1, then the resulting problem can be solved by inverting the last 2n C 1 columns of the
counterpart of the matrix in the left-hand side of (1.33).
94 C HAPTER 5. I MPLEMENTATION OF DTC- BASED C ONTROLLERS

-23.3

-40

-60

-80

101 102

˚
Fig. 5.5: Rational approximations of ˘.s/ D  2e s =.2s 2 3s C 1/


For the DTC element in Fig. 5.1 this approach yields
0:451471.s 15:28403/
s 2 C5:6387sC10:9161
if n D 2
1:23504.s 2 4:29253sC57:3393/
˘intP;n .s/ D .sC4:51376/.s 2 C7:08771sC24:8196/
if n D 3 (5.9)
0:527108.s 45:0554/.s 2 2:36514sC42:3681/
.s 2 C8:20247sC45:0126/.s 2 C11:3775sC35:3632/
if n D 4
These transfer functions are stable, so no fine tuning at the end of the procedure is needed. However, the
derivations require extra effort and numerical calculations become problematic for large n (they fail for
n D 8 for the example above). The approximation errors are presented in Fig. 5.4 by red lines for the
approximant degrees n D 2; 3; 4. The H1 -norm of the errors under all three n are a bit below of what we
have with the naı̈ve approximation, although in this case the gain is not quite substantial.

5.3.3 Direct Padé


Another conceptually straightforward approach is to derive the Padé approximant directly to the function
˘.s/, without the use of the delay element as the approximation agent. If D D 0 in (5.1), then it would
make sense to use strictly proper Padé approximants, i.e. to select m < n in the procedure of ÷1.3.1.
As a matter of fact, applying to the DTC element in Fig. 5.1 under m D n 1, this produces the same
approximant as that given by (5.9).˚ However, in general
these two methods result in different approxi-
mants. For example, if ˘.s/ D  2e s =.2s 2 3s C 1/ , then the interpolating Padé and direct Padé result
in
0:4773.s 2 C 9:428s C 70:1/ 0:68479.s 2 C 5:264s C 87:82/
and ;
.s C 4:444/.s 2 C 6:946s C 24:32/ .s C 5:472/.s 2 C 8:994s C 35:49/
respectively. The corresponding error magnitudes are presented in Fig. 5.5. The interpolating Padé pro-
duces a smaller H1 -norm of the approximation error, 0:0685 vs. 0:0849 for the direct approximation. But
the latter method produces better approximations in low frequencies, namely for !  8:256.

5.3.4 Approach of Partington–Mäkilä


Yet another approach to construct finite-dimensional approximations of DTC elements was proposed by
Partington and Mäkilä in [58]. Its key observation is that the function ˘.s/ defined by (5.2) can be
equivalently expressed as  
1 e s 1
˘.s/ D C .ŒsI A/e A B; where .s/ ´ D  : (5.1c)
s s
Thus, we can approximate ˘.s/ via approximating the scalar function .s/ by its Œn 1; n-Padé approx-
imant Rn 1;n .s/, or whatever other method, and then using
A
˘PaMä;n .s/ D CRn 1;n .ŒsI A/e B (5.10)
5.4. L UMPED - DELAY APPROXIMATIONS (LDA) 95

as a rational approximation of ˘.s/. In fact, Rn 1;n .s/ can be calculated analytically via the Œn; n-Padé
approximant of e s as
Pn 1 . s/
Rn 1;n .s/ D ;
Qn . s/
where the degree-n polynomial Qn .s/ is given by (1.35) on p.15 and the degree-2b.n 1/=2c polynomial
b.n 1/=2c  
Q.s/ Q. s/ X n .2n 2i 1/Š 2i
Pn 1 .s/ D D 2 s ;
s 2i C 1 .2n/Š
iD0

where bc stands for the floor function. It should be emphasized that the dimension of the transfer function
˘PaMä;n .s/ in (5.10) is n dim.A/, rather than n, in general. Consequently, the order of this approximation
may be quite high even if n is chosen to be small.
For ˘.s/ as in Example 5.2 this approach yields (in this case A is scalar, so the order of this approxi-


mant is exactly n)
4:41455
s 2 C4sC7
if n D 2
0:735759.s 2 2sC61/
˘PaMä;n .s/ D .sC3:64437/.s 2 C5:35563sC19:4821/
if n D 3 (5.11)
14:7152.s 2 2sC43/
.s 2 C6:41516sC38:536/.s 2 C9:58484sC25:9757/
if n D 4

whose magnitude approximation errors are presented in Fig. 5.4 by yellow lines. In all three cases of n
these approximation produces the lowest H1 -norm of the approximation error. But it also produces larger
approximation errors in low frequencies. Thus, the comparison can be regarded as indecisive.

Remark 5.1 (stability of approximants). It should be emphasized that no general stability results for ap-
proximation methods presented in this whole section appears to be available. As such, the stability of the
resulting approximants is not guaranteed even when all unstable eigenvalues of A are canceled in the ap-
proximant, as additional poles in Cx 0 might appear. An exception is the analysis in [58] for the case where
the function .s/ in (5.1c) is approximated via some alternatives of the Œn 1; n-Padé. For example, if
 
s s C 2n n . s C 2n/n .  s C 2n/n
e  H) .s/  ;
s C 2n s. s C 2n/n

then the approximant in (5.11) is stable whenever n > maxs2spec.A/ Re s =2. O

5.4 Lumped-delay approximations (LDA)


The third direction that we study is motivated by the integral form of ˘ in (5.1b) and the Newton–Cotes
quadrature (numerical integration) rules. A general Newton–Cotes formula of degree  2 N, which
estimates a definite integral using a piecewise-polynomial approximation of the integrand, is of the form
Z b 
X
f .t /dt  i f .ti /; where ti ´ a C i.b a/= (5.12)
a iD0

for some sequence fi giD0 , which depends on the specific approximation of f . Examples of popular ap-
proximation schemes are the trapezoidal rule, in which f .t / is approximated by piecewise-linear functions
connecting the points f .ti / and f .tiC1 / and for which
(
b a 2 if i D 1; : : : ;  1
H) i D
ti tiC1 t iC2
2 1 if i D 0; 
96 C HAPTER 5. I MPLEMENTATION OF DTC- BASED C ONTROLLERS

0
0 1 10 20 30

(a) Closed-loop time response (b) Frequency responses of ˘ & ˘

Fig. 5.6: DTC element in Fig. 5.1 implemented via (5.13)

and Simpson’s rule ( should be even), in which f .t / is approximated by piecewise-quadratic functions


connecting the points f .t2i /, f .t2iC1 /, and f .t2iC2 / and for which
€1 if i D 0; 
b a
H) i D 2 if i even but neither 0 nor 
t2i t2iC1 t2iC2
6
4 if i odd

Applications of Newton–Cotes formulae to (5.1b) lead to easily implementable and stable approximants,
although certain care should be taken to keep system-theoretic common sense in place.

5.4.1 Naı̈ve use of Newton–Cotes formulae


Let us start with applying formula (5.12) to the integral term in (5.1b) mechanically. To this end, write the
integrand as f .t / D C e .sI A/t e A B D C eA.t / B e ts , for which

X 
˘.s/  ˘ .s/ ´ i C eA.i /
Be i s
De  s
; where i ´ i. (5.13)

iD0

This function is a train of (lumped) delays e i s , so ˘ is referred to as a lumped-delay approximation


(LDA) of ˘ . Obviously, if i are all bounded, then ˘ 2 H1 for all  , meaning that we have stability as
an intrinsic property of this approximation method.

Example 5.3. Apply the approach presented above to the system studied in Example 5.1 for  D 1 and
the trapezoidal integration rule. The LDA (5.13) yields then the approximant
 1
X  e .1 i=/
 . i=/s  s
˘ .s/ D C e C e ;
2  2
iD1

which is sufficiently simple. Yet simulating the system in Fig. 5.1 for this choice and  D 10 results in
a slowly diverging oscillatory response, shown in Fig. 5.6(a), rather than in the expected red line. As a
matter of fact, increasing the number of partitions of the integration interval,  , does not result in a stable
closed-loop response (oscillations just become faster). A change of the numerical integration rule also
does not change this behavior qualitatively [73]. O

Although the closed-loop behavior in the example above might appear counter-intuitive at first sight, it
can in fact be easily explained from system-theoretic perspectives. To this end, it is important to remember
that we approximate an operator, or a function of s in terms of its transfer function. Because ˘ .s/ always
5.4. L UMPED - DELAY APPROXIMATIONS (LDA) 97

remains an H1 -function, we may be only concerned with its behavior on the imaginary axis, i.e. with
the frequency response ˘ . j!/. And this behavior does not have bounded derivatives, e.g. the first time
derivative, which satisfies1

kfP.t /k2F D kC eA.t /


.A j!I /B e j!t 2
kF D kC eA.t /
ABk2F C ! 2 kC eA.t /
Bk2F ;

is unbounded as a function of ! unless C eA.t / B  0, which is only the case if ˘ D 0. As a result,


˘ . j!/ does not converge to ˘. j!/ as  ! 1. In more classical control-engineering terms, the LDA as in
(5.13) attempts to approximate a system with a vanishing high-frequency gain (the integral term in (5.1b))
by a system with a non-decaying high-frequency gain (the sum in (5.13)). For instance, the magnitude
frequency responses of ˘ (in red) and ˘ (in cyan-blue) for the system in Example 5.3, presented in
Fig. 5.6(b), show clearly that the LDA approximation there is not a good fit for ˘. j!/ at high frequencies.
It can actually be shown that

 .1 e  /.e= C 1/ !1 
lim sup j˘ . j!/j D ˘ .0/ D !1 e D ˘.0/
!!1 2.e= 1/
in this case. Because the high-frequency gain of ˘ vanishes, we have that k˘ ˘ k1  ˘ .0/ > 1 e 
in this example as well.
It should be emphasized that the lack of convergence of the approximation error to zero cannot ex-
plain the unstable response in Fig. 5.6(a) alone. Instability in this example is actually a result of a lethal
combination of non-zero high-frequency gain of the approximation error ˘ D ˘ ˘ and a poor robust-
ness of the system in Fig. 5.1 to high-frequency uncertainty [42]. Indeed, by presenting the implemented
˘ D ˘ ˘ , the loop in Fig. 5.1 can be rearranged as the feedback interconnection of ˘ and the
control sensitivity, whose transfer function Tc .s/ D 2e . s C 1/=.s C 1/ and the high-frequency gain is
lim!!1 jTc . j!/j D 2e . The high-frequency gain of this loop equals then 2.e 1/  3:4366. This gain
is not strictly contractive, implying that the loop is extremely sensitive to high-frequency modeling errors,
viz. infinitesimal high-frequency modeling uncertainties, like loop delays, cause its instability (cf. Propo-
sition 2.2 on p. 24). This is exactly what happens in Example 5.3. A more robust system, e.g. that with a
strictly proper control sensitivity, would be less sensitive to the high-frequency approximation inaccuracy.

5.4.2 Proper use of Newton–Cotes formulae


Having understood the reason for the failure of approximation (5.13), we can come up with a remedy,
which turns out to be quite simple. All we need is to find a way to keep the high-frequency gain of an
approximation zero. To this end, note that
Z  Z    ˇ Z   d 
.sI A/t At d st .sI A/t ˇ At
s e dt D e e dt D e ˇ C e e st dt
0 0 dt 0 0 dt
Z 
D I eA e s C e .sI A/t dtA;
0

where the second equality in the first line is obtained via integration by parts. Using the obvious equality
˘.s/ D .s˘.s/ C ˛˘.s//=.s C ˛/, representation (5.1b) can be equivalently rewritten as
Z   
1 A 1 A.t / ts 1
˘.s/ D Ce BC Ce .˛I C A/B e dt DC CB e s (5.1d)
sC˛ sC˛ 0 sC˛
for every ˛ 2 R. Moreover, if ˛ > 0, this representation does not involve unstable cancellations and can
thus be used casually.
1 The Frobenius norm is chosen for convenience. As all matrix norms are equivalent, this choice does not affect conclusions.
98 C HAPTER 5. I MPLEMENTATION OF DTC- BASED C ONTROLLERS

1 1
-14

-20

0 0
-40

-60
-1 -1
0 1 2
0 1 10 10 10 0 1

(a) Impulse response of ˘ (b) Frequency response of ˘ (c) Trapezoidal rule, ˛ D 7:8 &  D 64

Fig. 5.7: LDA for Example 5.4 (trapezoidal rule, F .s/ D 1=.s C 7:8/,  D 64)

The second term in the right-hand side of (5.1d) still contains an integral of the same form as that in
(5.1b). But now this term is always postfiltered by the low-pass filter 1=.s C ˛/. Hence, a hight-frequency
mismatch in its approximating is filtered out. A general LDA formula for (5.1d) reads then as follows:

1 A

˘;˛ .s/ D Ce I C .˛I C A/0 B
sC˛
 1
1 X
C i C eA.i /
.˛I C A/B e i s
s C ˛ iD1
 
1  s
DC C I .˛I C A/ B e (5.14)
sC˛

for a sequence fi giD0 determined by the concrete approximation scheme and with i ´ i = , like in
(5.13). The only term of (5.14) that does not vanish at high frequencies is D e s , which originates in a
non-vanishing term of (5.1b). But this term is not an approximation, so it does not affect the approximation
error.
Not surprisingly, the use of (5.14) instead of (5.13) for the system in Example 5.3 results in an accurate
approximation. For the very same  D 10 as in Example 5.3, the approximation error k˘ ˘;˛ k1 is
reduced by more than a factor of 30, to about 0:02 with ˛ D 1. Moreover, the approximation error
vanishes at high frequencies. The plots of ˘ and ˘;˛ , as well as the closed-loop step responses under
those choices, are then virtually indistinguishable. Moreover, the impulse responses of ˘ and ˘;˛ for
this system are
1 1

.t / D and ;˛ .t / D


0  0 

(for  D 10 and ˛ D 1). Note that although the impulse response of ˘;˛ appears to vanish for t   , it
actually does not. The system ˘;˛ is not FIR in general and its impulse response approaches zero only
asymptotically.

Example 5.4. The example considered throughout this chapter so far is quite simple, so any studied
method could be relatively easily applied. Consider now a more challenging problem of approximating
 
80s C 808 s
˘.s/ D   e : (5.15)
.s 1/.s 2 25s C 2500/

The impulse and magnitude frequency responses of this system are shown in Figs. 5.7(a) and 5.7(b),
respectively. Oscillatory behavior and three unstable poles make these responses more complex and harder
5.4. L UMPED - DELAY APPROXIMATIONS (LDA) 99

˚
to approximate than those of  e s =.s 1/ , indeed. In fact, rational approximations fail on this example,
in the sense that if accuracy requirements are relatively demanding, like those in (5.16) below, the order
of rational approximants is high and off-the-shelf numerics no longer work.
Our goal here is to approximate this ˘ by the LDA as in (5.14) with the trapezoidal rule so that

k˘ ˘;˛ k1  0:05 k˘ k1 (5.16)

under the minimal possible number of lumped delays  . The filter crossover frequency ˛ is then manually
tuned, by a trial-and-error search, to end up with the smallest  . The best results are obtained then for
˛ D 7:8, in which case the minimal  D 64. The impulse response of the approximation is presented in
Fig. 5.7(c) by the cyan-blue line. It is now visibly non-FIR. As a result, ˛ should be sufficiently large to
provide sufficiently fast decay in t >  . O

5.4.3 Beyond Newton–Cotes


Although numerical integration schemes appear natural in the context of approximating (5.1b) and (5.1d)
by lumped delays, we do not have to apply them literally. It might be conceptually cleaner to start with a
rather general LDA of the form

X
˘ .s/ D F .s/ ˘N i e i s
De s
(5.17)
iD0

for  2 N, a sequence fi g such that 0 D 0 < 1 <    <  1 <  D  , a strictly proper and stable
F .s/, and matrix-valued coefficients ˘N i , i 2 Z0; . In this formulation the low-pass filter is not constrained
to be of the form F .s/ D 1=.s C ˛/ as in (5.14), the delays i are not constrained to be equidistant, and
the parameters ˘N i are not constrained to be equal to i C eA.i / B for some scalar i . Ideally, we would
then like to determine these F , i , and ˘N i , for all i 2 Z0; , so that this approximation is close to ˘ given
by (5.1) in whatever meaningful sense. However, such a goal might be overly ambitious and not easily
attainable. Still, several directions, in which only a part of the parameter set is designed can be exploited
and they are briefly outlined below.
First, fix the number of delays in (5.17) and the filter F and assume that the delays i D i = , i.e.
equidistant. As the design goal consider then the choice of ˘N i minimizing the H2 -norm

k˘ ˘ k2 : (5.18)

This problem can be solved [71] and in some situations (if F has a square “B ” matrix) it can be solved
analytically. In particular, if F .s/ D 1=.s C ˛/ for some ˛ > 0, then the optimal choice is

˘N 0 D C.A ˛I / 1 .e.A ˛I /= I /e A B; (5.19a)
1 e 2˛=
 A= 
N 1 e C e A= 2e ˛= I
˘i D 2˛C.A ˛I / I eA.i /=
B; for all i 2 Z1:: 1 (5.19b)
e˛= e ˛=
and
˛=
2˛ e
˘N  D 2˛=
C.A ˛I / 1
.I e.A ˛I /=
/e A=
B; (5.19c)
1 e

This choice always results in an FIR approximant, which is attained by an appropriate choice of ˘N  .

Example 5.5. Return to the system studied in Example 5.4 and the approximation criterion (5.16). Now,
try to attain it with the H2 -optimal choice of ˘N i from (5.19). It turned out that the best results are obtained
100 C HAPTER 5. I MPLEMENTATION OF DTC- BASED C ONTROLLERS

1 1 1

0 0 0

-1 -1 -1
0 0.672 0.707 0.914 1 0 0.264 0.496 0.748 0.932 1 0 0.5 0.74 0.824 0.936 1

(a) Uniform fi g, ˛ D 3:5 and  D 58 (b) Varying fi g, ˛ D 0:2 and  D 14 (c) Varying fi g, ˛ D 3:5 and  D 8

Fig. 5.8: Impulse responses for the general LDA in (5.17)

for ˛ D 3:5, in which case the smallest  D 58. This is about 10% below of what we have with the
trapezoidal rule in Example 5.4 and the resulting ˘ , whose impulse response is shown in Fig. 5.8(a),
has the same structure from the implementation point of view. In other words, the gain purely due to
the use of problem-oriented tuning algorithm, which is better suited for the problem than general-purpose
Newton–Cotes schemes. O
The next step is to optimize the delays pattern fi g. Benefits of using a nonuniform split are visible
already in Fig. 5.8(a). Indeed, it can be seen that at three points, namely for i 2 f0:672; 0:707; 0:914g,
the jumps in the impulse response are virtually invisible. This indicates that the corresponding ˘N i ’s are
close to zero, so these delays, whose indices are 39, 41, and 53, can be eliminated without affecting the
approximation performance.
It turns out that optimizing the delay pattern can be done rather efficiently if those delays are restricted
to the finite set fi =~g~iD0 for a given ~ 2 N. This does not entail any practical loss of generality, because
the grid can be made arbitrarily dense and the implementation of (5.17) would be more efficient in this case
anyway. We may then pose the problem of minimizing cost (5.18) under the constraint that the number
of delays   ~ is fixed. A combinatorial  search over all possible sets of  1 “internal” delays (i.e.
excluding 0 and  ) would require ~ 11 combinations to be checked, which is not quite practical. Yet in
the case when the “B ” matrix of F .s/ is square, there is an approach that requires only O.~ 2 =2/ checks,
see [71] for details.
Example 5.6. Let us discuss the application of the approach outlined above to the problem studied in
Examples 5.4 and 5.5. It turns out that the choice ˛ D 0:2 is better in this case. Consider the case of
~ D 250, meaning that the minimal distance between two delays is set to 0:004. This results in  D 14,
still maintaining (5.16), see Fig. 5.8(b) for the corresponding impulse response. Comparing this result
with the uniform split results of Example 5.5, we can see a clear advantage of splitting the interval Œ0;  
non-uniformly. In our example this yields a more than fourfold improvement. This is easy to explain from
the form of the impulse response of ˘ in Fig. 5.7(a). During the first three quarters of the interval Œ0;  
the variations of .t / are relatively minor, so it is sufficient to have only three jumps in that interval. In
the final quarter of Œ0;  , the impulse response .t / is considerably more oscillatory, so a more dense grid
is required. O
Another possible direction is to minimize directly the H1 -norm of the approximation error, together
with the delay pattern fi g. Going this direction drifts us even further from the analytic solution. It
is possible to address the problem by numerical optimization methods. Although the problem is not
even convex, some trick, known as the `1 -norm heuristic [6], can be used and the result can be seen in
Fig. 5.8(c). Going this way we end up with  D 8, which is the eightfold improvement over the direct
Newton–Cotes approach accompanied by the trapezoidal rules, with which we started in Example 5.4.
5.5. C ODA 101

5.5 Coda
Out ultimate goal is to approximate the controller, so we should be less concerned with the accuracy of
approximating ˘ and more with the accuracy of approximating the whole controller. And even more so
with the ability to reproduce the closed-loop properties that were provided by the original DTC element.
One day it should be written . . .
102 C HAPTER 5. I MPLEMENTATION OF DTC- BASED C ONTROLLERS
Chapter 6

Robustness to Delay Uncertainty

P TO THIS POINT, delays in studied analysis and design problems were mostly assumed to be known
U and constant. These assumption might not be realistic in applications. In fact, in all examples con-
sidered in Section 1.2 delays vary in course of operation due to variability of various system parameters,
such as the speed of rolls in Fig. 1.2(a) or the conveyor belt in Fig. 1.2(b), unevenness of communication
delays in networked applications, jitter in computation time, et cetera. In same of such cases variations of
loop delays can be taken into account in the design process, see e.g. Remark 3.2 on p. 48. But in many
situations actual values of delays are not even known, or only known to lie in some interval. In such situa-
tions robustness analysis, i.e. the analysis of the sensitivity of a system to uncertainty in its delay element,
is the approach to explore. This is the direction studied in the current chapter.

6.1 Delay margin


Consider the general interconnection presented in Fig. 6.1, where ı  0 is a constant deviation from
a nominal delay, say 0  0. Unlike the analysis in Chapter 2, we do not limit G to be finite dimen-
sional, primarily because it can include this nominal delay. Assuming that the nominal system, i.e. that
corresponding to ı D 0, is stable, the delay margin d is defined as the the smallest deviation of the
loop delay from nominal value, under which the system becomes unstable. If the system is stable for all
ı 2 Œ ; 1/, then we say that d D 1 or that the system is delay-independent stable.
In the case when G´w is SISO, this is a classical control notion, frequently defined as
ph
d D ; (6.1)
!c

where ph (in radians) is the phase margin of the loop G´w and !c (in radians per time unit) is the
crossover frequency, i.e. the frequency ! at which jG´w . j!/j D 1. However, this definition is not always
accurate. First, as we already know from Proposition 2.2 on p. 24, if G´w is finite dimensional and its
high-frequency gain jG´w .1/j > 1, then the system is unstable for all ı > 0. Thus, the system with

x ı
D
´ w
 
G´w G´u
Gyw Gyu
y u

Fig. 6.1: General single-delay interconnection with uncertain delay

103
104 C HAPTER 6. ROBUSTNESS TO D ELAY U NCERTAINTY

6 6

0 0

-6 -6

-12 -12
-270 -180 -73.4 -50.4 0 33.8 -270 -200.5 -73.9 12.4

(a) Nominal loop (b) Loop with ı D 1=2

Fig. 6.2: Nichols plots of the loop frequency response for G´w .s/ D 6.s 2 C 0:2s C 0:01/=.s.s C 2/2 /

p
G´w .s/ D 2s=.s C 1/ has a zero delay margin, even though formula (6.1) in this case would yield
d D 1:25=1  3:927. Second, there are systems with several crossover frequencies, for which the
phase margin and the corresponding crossover frequency are not a good indicator of delay tolerance. For
example, consider a system with G´w .s/ D 6.s 2 C 0:2s C 0:01/=.s.s C 2/2 /, whose Nichols plot is shown
in Fig. 6.2(a). The phase margin in this case is ph  1:86 [rad], measured at the crossover frequency
!c;1  0:015 (marked by the downward triangle). Formula (6.1) yields then d  121:16. But this is
not even close to the smallest destabilizing delay in this case. For example, Fig. 6.2(b) shows the loop
gain under ı D 1=2, which clearly yields an unstable system. The reason is that there are two more
crossover frequencies, viz. !c;2  0:746 (the upward triangle) and !c;3  5:239 (the leftward triangle).
The distances from the critical point at those frequencies are larger than that at !c;1 , so they do not count
in the calculation of the phase margin. However, these crossover frequencies, especially !c;3 , are higher
than !c;1 . As such, phase lags due to the delay at those crossovers are larger. In particular, the delays
at which each one of these points crosses the critical point are 2 D ph;2 =!c;2  3:73=0:746  5 and
3 D ph;3 =!c;3  2:26=5:239  0:432. Thus the true delay margin for this system is d  0:432.
By this logic, the delay margin with respect to positive ı should be calculated as
˚
0 if lim sup!!1 jG´w . j!/j  1
d D ph;i (6.2)
mini otherwise
!c;i

where ph;i is the angular distance from the critical point in the negative angle direction at the crossover
frequency !c;i . The delay margin with respect to negative ı is calculated similarly, modulo the replace-
ment of the phase margins with angular distances from the critical point in the positive direction, denote
them Q ph;i , and the constraint that such delay margin must be lowerbounded by the nominal 0 .

6.1.1 Bounds on the achievable delay margin


Consider now the particular case of the system in Fig. 6.1 under G´w D PK , where P is a given system
(plant) and K is a controller, see (1.25) on p. 11 and explanation there. It is of interest to understand, what
is the maximum delay margin, attainable in this system by K . If P 2 H1 , then the answer is obviously
d D 1. This is what we get under the choice K D 0. The question becomes nontrivial for unstable P .
Bounds on the attainable delay margins for several classes of unstable plants were derived in [39]. Some
of these results are presented below.
We start with a result, establishing that unstable poles at the origin do not impose limitations on the
achievable delay margin.
6.1. D ELAY MARGIN 105

x 0 n f0g, then any d is attainable by a stabilizing K .


Proposition 6.1. If P .s/ has no poles in C

Proof (outline). It will be shown in ÷6.2.3 that the delay margin is lowerbounded by the reciprocal of the
H1 -norm of the transfer function sT .s/, where T is the complementary sensitivity function corresponding
the the zero-delay case. A controller attaining an arbitrarily small H1 -norm of sT .s/ was designed in [39,
App. C].

But any pole in C x 0 n f0g imposes hard constraints on the attainable delay margin. The result below
proves that for a real pole:

Proposition 6.2. If P .s/ has a pole at s D a > 0, then

2
d < :
a
x 0 and P .s/ is minimum-phase.
This bound is tight if this pole is the only pole of P .s/ in C

Proof. Consider first the stabilization problem for the system in Fig. 2.5 with Rx˛ .s/ ´ . s C ˛/=.s C ˛/,
x x
under the very same G´w . Because R˛ .s/j˛D1 D D .s/jD0 , any controller stabilizing this system under
˛ D 1, does that for the system in Fig. 6.1 under ı D 0. Then, as ˛ decreases, roots of the characteristic
polynomial move continuously along corresponding loci. What we know is that at ˛ D a the system is
unstable, just because s D a is then a root of its characteristic polynomial no matter what K is (unstable
cancellations). Therefore, roots must cross the imaginary axis, say at s D ˙ j!cross for some !cross > 0,
for at least one ˛ D ˛cross > a regardless what K stabilizing the system for ˛ D 1 is used. But the
arguments of Rekašius, discussed in ÷2.1.6, imply that there must then exist a minimal delay cross > 0,
for which there are pure imaginary roots of the characteristic quasi-polynomial associated with the system
in Fig. 6.1 at the very same s D ˙ j!cross . This corresponding destabilizing delay, cf. (2.13), satisfies

˛cross cross !cross 2 !cross 2 2


D cot H) cross D arctan < <
!cross 2 !cross ˛cross ˛cross a

(the first upper bound is approached as !cross # 0). This yields the bound of the proposition.
To show that this bound is tight whenever P .s/ has no other unstable poles and no nonminimum-phase
zeros, write
1
P .s/ D Pinv .s/
s a
for stable and minimum-phase Pinv .s/. In this case the controller Ka .s/ D Pinv1 .s/kp .Td sCa/ renders the
stabilization problem equal to that for the first-order Pa .s/ ´ 1=.s C a/ and the PD Ka D kp .Td s C a/.
But this is essentially the stabilization problem studied in ÷3.1.2. Specifically, it was shown there that a PD
controller approaches the upper-bound delay margin 2=a as kp # 1 and Td " 1. Although this controller is
not proper, its slight modification can attain the same bound as well, see [39, Rem. 8].

The result below extends this bound for the case of complex and pure imaginary poles.
p 
Proposition 6.3. If P .s/ has poles at s D  ˙ j 1  2 !n for some 0   < 1 and !n > 0, then
p   p 
1 2  1 2 2=!n
d <  C 2 max p ; arctan D 3:69=!n
!n 1 2  2=!n
0:652 1 

x 0 and P .s/ is minimum-phase.


This bound is tight if these poles are the only poles of P .s/ in C
106 C HAPTER 6. ROBUSTNESS TO D ELAY U NCERTAINTY

d
y 1 .0 Cı /s
u kp eQ r
s
e 0 -
0 s
1 0 s e
s

n yQ

Fig. 6.3: Smith controller with proportional primary controller

Proof (outline). The proof for the case of  > 0 follows


p the ideas of that of p Proposition 6.2, just with
x˛ .s/ D . j 1  2 s=˛/=. C j 1  2 C s=˛/, which has
the rational substitution of the form R
complex parameters, see [39, Thm. 9] for details. The case of  D 0 can be addressed by somewhat
simpler arguments. Namely, in this case the loop has an infinite magnitude at ! D !n and a strictly
contractive high-frequency gain under any stabilizing controller. This implies that there is a crossover
frequency !c > !n with some ph 2 .0; 2/. Hence, the delay margin d  ph =!c < 2=!n . Finally,
controllers approaching the upper bound of the proposition are presented in [39, Sec. IV].

It should be emphasized that controllers maximizing the delay margin tend to render the crossover
frequency !c # 0. In other words, maximizing d alone, without imposing additional constraints on the
crossover frequency or related feedback properties, does not make much engineering sense.
Remark 6.1 (time-varying controllers). Curiously, and perhaps somewhat counterintuitively, there are no
constraints on the attainable delay margins, even for an unstable LTI P , if time-varying linear controllers
are allowed [40]. Still, the control strategy achieving that, which hinges on the ability to estimate both the
state of P and its appropriate prediction in finite time, is not quite practical either. O

6.1.2 Delay margins of DTC-based loops: case study and general considerations
Consider now the system in Fig. 6.3, which is an integrator plant controlled by a DTC-based controller
with a proportional primary controller, whose gain is normalized by the nominal loop delay 0 . The
DTC element ˘ is effectively the Smith predictor for an integral plant and the nominal delay, shifted
by a constant to guarantee that the static gain of the DTC element ˘.0/ D 0, so that the static gain of
the overall controller equals kp =0 (this is an idea of Watanabe and Ito from [77]). The classical Smith
controller structure can be recovered by shifting the internal controller loop by the constant 0 , leading to
the primary controller Œkp =.1 kp /=0 . Also note that the internal controller loop is well posed iff kp ¤ 1.
The actual loop delay is assumed to be constant yet uncertain, of the form 0 C ı for some ı  0 .
The finite-dimensional system PQ for which the primary controller is designed has the transfer function
Q
P .s/ D .1 0 s/=s , so that its characteristic polynomial under the primary controller kp =0 is
Q
.s/ D 0 s C kp .1 0 s/ D 0 .1 kp /s C kp :
This polynomial is Hurwitz under a well-posed controller loop iff kp 2 .0; 1/, which is assumed hereafter.
Thus, the low-frequency gain of the controller is limited by 1=0 . The nominal closed-loop complementary
sensitivity function is then
kp
T .s/ D e 0 s ;
0 .1 kp /s C kp
whose bandwidth !b D Œkp =.1 kp /=0 is an increasing function of kp .
Consider now the stability margins attained in the system in Fig. 6.3 for the actual, not equivalent
delay-free, system with the loop transfer function
kp e 0 s
L.s/ D 0 s /
:
0 .1 kp /s C kp .1 e
6.1. D ELAY MARGIN 107

2.5
68

1.41

64 0.67

-0.56
2 60
-1
0.5 0.75 1 0.5 0.75 1 0.5 0.75 0.87 0.91 1

(a) Gain margin, g (b) Phase margin, ph (c) Normalized delay margin, d =0

Fig. 6.4: Stability margins as functions of the controller gain kp

The gain and phase margins of this loop, shown in Fig. 6.4(a) and 6.4(b), respectively, as functions of the
normalized controller gain kp (see [20, App. A] for details of their derivations), are expectably monotoni-
cally decreasing and continuous. As a matter of fact, as kp # 0, the gain margin grows unbounded and the
phase margin approaches 90ı , whereas as kp " 1, they reduce to respectable g D 2 and ph D 60ı . The
delay margin, i.e. the maximum deviation ı from the nominal loop delay 0 , is shown in Fig. 6.4(c). It is
also a monotonically decreasing function of kp , unbounded as kp # 0, but with discontinuities. For exam-
ple, at kp  0:749 the delay margin for positive ı drops from 1:4052 to 0:6683, i.e. by more than a factor
of two, and delay margin for negative ı jumps from its highest value of 1 (for which the delay-free
system is also stable with the DTC-based controller) to 0:5622. Moreover, the delay margin vanishes as
kp " 1, which is also qualitatively different from the behavior of g and ph .
The rationale behind this behavior of d becomes apparent if we take a look at the loop frequency
response around the borderline value kp  0:749. At a slightly lower gain of kp D 0:748, there is only one
crossover frequency at !c  0:775=0 , see Fig. 6.5(a). Hence, the delay margin is calculated by (6.1) as
d  1:085=.0:775=0 / D 1:40 . But at kp D 0:749 another (double) crossover frequency !c;2  5:106=0
pops up, see Fig. 6.5(b). Hence, the delay margin is calculated by (6.2) as
   
ph;1 ph;2 1:085 3:412
d D max ;  max ;  0:6680
!c;1 !c;2 0:775=0 5:106=0

(the phase margin and the first crossover frequency are virtually the same as those for kp D 0:748). As
!c;2 is larger than !c;1 by more than a factor of six, this delay margin drops dramatically, even though
ph;1 < ph;2 . and we have the jump at around kp D 0:75. Also, instability can now be caused by a

3 3

0 0

-3 -3

-6 -6

-9 -9

-12 -12

-700 -540 -360 -180 -117.8 -700 -540 -344.5 -180 -117.8

(a) kp D 0:748 and !c  0:775=0 (b) kp D 0:749, !c;1  0:775=0 , and !c;2  5:106=0

Fig. 6.5: Nichols plots of the loop frequency response for the system in Fig. 6.3
108 C HAPTER 6. ROBUSTNESS TO D ELAY U NCERTAINTY

12

-6

-12
-900 -720 -630 -540 -360 -270 -180 0

Fig. 6.6: Closed-loop bandwidth contour on the Nichols chart

decrease of the delay, as the plot can also cross the critical point if a phase lead of Q ph;2  2:871 is added
at the second crossover !c;2 . Note that as kp increases even further, more and more crossover frequencies
pop up, resulting in a further deterioration of the delay margin. In the limit, as kp " 1, the loop transfer
function approaches
e 0 s
L.s/ D ;
1 e 0 s
which yields the infinite-bandwidth T .s/ D e 0 s , but an infinite and unbounded sequence of crossover
frequencies f!c;i g satisfying cos.!c;i 0 / D 1=2. Consequently, the delay margin of this loop indeed
vanishes, even though the gain and phase margins are reasonably high.
A natural question in this respect is whether this kind of behavior, namely the proliferation of the
number of crossover frequencies, is generic in the context of dead-time compensation. The answer is yes,
if the primary controller attempts to increase the closed-loop bandwidth beyond
p certain level. Specifically,
define the closed-loop bandwidth as the largest !b for which jT . j!/j  1= 2 in the whole frequency range
! 2 Œ0; !b . The loop frequency response L. j!/ lies in the shaded area of the Nichols chart in Fig. p 6.6,
determined by the corresponding M -circle, for all ! 2 Œ0; !b . It is readily seen that if jT .0/j > 1= 2 and
the closed-loop system is stable, then there is more than one crossover frequency whenever one of the two
conditions below holds:
1. the phase lag of the open-loop system exceeds 270ı at ! D !b , i.e. arg L. j!b / < 3=2,
2. L.s/ has at least two unstable poles, so its frequency response must encircle the critical point.
Moreover, additional phase lag or / and unstable poles increase the number of crossover frequencies. Clas-
sical control has some accepted rules of thumb, saying that the presence of a loop dely imposes limitations
of the attainable bandwidth, see [18, ÷8.6.2]. However, such limitations are less visible in DTC-based set-
tings. Indeed, as follows from (3.13) on p. 44, the complementary sensitivity transfer function in a general
DTC-based setup, like that in Fig. 3.5, is

P .s/ 0 s 0 s
T .s/ D e µ T0 .s/ e ;
1 C PQ .s/K.s/
Q

where T0 is assigned by a delay-free design of the primary controller KQ . Thus, the closed-loop bandwidth
does not depend on the delay element, as jT . j!/j D jT0 . j!/j, so one might be tempted to increase it
without accounting for the phase lag of the plant at !b due to the delay. As a result, either the loop has a
large phase lag at !b or the controller introduces unstable modes to add enough phase lead and compensate
for the phase lag introduced by e j!0 . In either case, extra crossover frequencies arise and d deteriorates.
Remark 6.2 (closed-loop bandwidth). A moral of the discussion above is that it might be safer to define the
closed-loop bandwidth via the gain of the sensitivity function, say as the largest !b for which jS. j!/j  b
6.2. E MBEDDING UNCERTAIN DELAYS INTO LESS STRUCTURED UNCERTAINTY CLASSES 109

x ı
D 
´ w ´ w
   
G´w G´u G´w G´u
Gyw Gyu
 Gyw Gyu
y u y u
(a) Original system, ı 2 Œ0; N  x  
(b) Covering system, D ı

Fig. 6.7: Embedding into a wider class of uncertain systems

p
in the whole range ! 2 Œ0; !b  for some b 2 .0; 1/. If b is sufficiently small, say b  1 1= 2  0:293,
then this measure requires not only the gain of T to be close to one (because 1 b  jT . j!/j  1 C b ,
by the triangle inequality), but also its phase to be close to zero. Thinking this way may help to resist the
temptation of using an overly aggressive design of the primary controller. O

6.2 Embedding uncertain delays into less structured uncertainty classes


A precise analysis of the delay margin might not be an easy problem to address and even harder to incor-
porate into control design problems. A pragmatic alternative is to analyze sensitivity to uncertain delay via
general-purpose robustness tools, which are quite well understood and can be embedded into optimization
design frameworks. This is especially so for robust stability analises under unstructured, and sometimes
atructured, complex uncertainty. In this section methods of the embedding of uncertain delays into wider
uncertainty formalisms are studied.

6.2.1 Underlying idea


Let us return to the general case presented in Fig. 6.1 and consider the problem of finding conditions,
under which it is stable for all ı from a given interval, i.e. robustly. Without loss of generality, we may
assume that this interval is ı 2 Œ0; 
N for a given N > 0. Otherwise, the lower bound on ı can just be
absorbed into the nominal delay, 0 .
A rather obvious—but nevertheless important—observation is that the system in Fig. 6.7(a) is robustly
stable if yet another system, that in Fig. 6.7(b), is stable for every  2 N , where N is a class of systems
containing all admissible delay elements, i.e. such that D x ı 2 N for every ı 2 Œ0; 
N . It is supposed that
x
the set of all admissible delays Dı is a proper subset of N . The rationale behind such an expansion of the
family of considered systems is that the robust stability problem for the system in Fig. 6.7(b) for  2 N is
easier to analyze than that for  D D x ı . The price paid for this simplification is that any resulting stability
condition is only sufficient, i.e. potentially conservative. Namely, the robust instability of the system in
Fig. 6.7(b) does not imply that of the original system in Fig. 6.7(a).
Thus, the basic idea is to embed a system of interest into a wider class of systems, whose analysis
is simpler. A key to the success of this approach is the choice of the uncertainty set N . On the one
hand, it should cover the original uncertain element D x ı tightly to reduce conservatism. On the other
hand, it should result in a simple, and preferably design-friendly, analysis. The quest for an easily tunable
and transparent tradeoff here is perhaps still under way, with no clear winner and a plenty of barely
discriminable results [80]. It is probably safe to claim that the conservatism can be effectively eliminated
by increasing the computational complexity. However, such low-conservatism methods tend to be limited
to analysis problems, where the controller is given. Design-friendly approaches normally use simpler
uncertainty sets and are more conservative.
110 C HAPTER 6. ROBUSTNESS TO D ELAY U NCERTAINTY

 ıIm ıIm

1
T´w T´w M T´w M

(a) Unstructured uncertainty (b) Structured uncertainty (one block) (c) Scaled structured uncertainty

Fig. 6.8: Robust stability setups for m  m stable  and scalar stable ı

6.2.2 Preliminary: robust stability with respect to norm-bounded uncertainty


Arguably, the handiest class of uncertaities is the class of unstructured norm-bounded ones. The simplest
such class, which can be used as a building block for more sophisticated extensions, is the unit ball in H1 ,
mm
˚ mm

B1 ´  2 H1 j kk1  1

(if the dimension is irrelevant of clear from the context, we write B1 ). In the LTI setting, this set describes
stable systems , whose frequency response is contractive at every frequency, i.e. k. j!/k  1, 8! 2 R.
With some abuse of notation, we also use the nomenclature B1 for possibly time-varying or even nonlinear
systems, which are stable as operators L2 .RC / ! L2 .RC / and whose L2 .RC /-induced norm is contractive.
Consider now the internal stability interconnection in Fig. 6.8(a) for a known m  m LTI system T´w
mm
and an uncertainty element , which is only know to belong to B1 . The following result is fundamental:

Theorem 6.4 (Robust Stability Theorem). The feedback interconnection in Fig. 6.8(a) is internally stable
mm
for all  2 B1 iff T´w 2 H1 and kT´w k1 < 1.

Proof. Sufficiency follows by the celebrated Small Gain Theorem, see [79, ÷4.4.2], [11, Sec. III.2], or [9,
Thm. 9.1.7] for example. The stability of T´w is obviously necessary, because  D 0 is admissible. To
prove the necessity of the norm bound on T´w , assume, on the contrary, that kT´w k1 D  1. This
implies that at least one of the following two conditions holds: (i) lim sup!!1 kT´w . j!/k D or (ii) there
is !0 2 R such that kT´w . j!0 /k D . If the high-frequency gain of T´w is not strictly contractive, the
system is destabilized by the admissible  D D x  for some  > 0, see ÷2.1.3. If condition (ii) holds, then
there are unitary vectors u0 ; v0 2 C such that u00 T´w . j!0 /v0 D (obtained via SVD). If !0 D 0, then u0
m

and v0 are real and choose .s/ D v0 u00 = . If ! ¤ 0, consider an LTI  2 H1 with the transfer function
2 s.arg v
x01 /=!0 3
jv01 je
1 ::  s.arg u01 /=!0 s.arg u0m /=!0

.s/ D 4 : 5 ju01 je    ju0m je ;
s.arg v
x0m /=!0
jv0m je

where xx stands for the complex conjugate of x 2 C and the argument of x is assumed to be nonnegative.
In either case kk1 D 1=  1, so that  is admissible, and . j!0 / D v0 u00 = . Hence, we have that

u00 I T´w . j!0 /. j!0 / D u00 u00 T´w . j!0 /v0 u00 = D 0;

i.e. I T´w .s/.s/ has a singularity on the imaginary axis. But this implies that the closed-loop system
is unstable as it has a pure imaginary pole. Thus, there is a destabilizing admissible  and the system is
not robustly stable.

The result above implies that robust stability with respect to the set of contractive uncertainty elements
is equivalent to an H1 optimization problem. This renders it a convenient framework for analysis. Indeed,
the bound on the H1 norm can be verified by several different approaches, such as checking kT´w . j!/k
over a chosen frequency grid, solving an eigenvalue problem associated with a Hamiltonian matrix built
6.2. E MBEDDING UNCERTAIN DELAYS INTO LESS STRUCTURED UNCERTAINTY CLASSES 111

from a realization of T´w .s/ [44, Prop. 4.17] (if T´w is finite dimensional) or a Linear Matrix Inequality
resulting from the KYP lemma [44, Thm. 4.18] (also for a finite-dimensional T´w ), et cetera. Moreover,
casting the problem as a bound on the H1 -norm of a known (“certain”) facilitates casting robust design
as a standard H1 problem of the form presented in Fig. 4.1(a) on p. 68, just remove the “” block from
the system in Fig. 6.7(b) and connect u with y via a controller K .
We may add more structure into the uncertainty set by considering the setup in Fig. 6.8(b) for a scalar
11
ı 2 B1 . Such a configuration is a particular case of the general  (structured singular values) framework,
where structured uncertainty is described by block-diagonal elements [54]. The case of only one “repeated
scalar” block is sufficient for our purposes as it fits well the embedding procedure of ÷6.2.1 in the single-
delay case, when the signals ´ and w in Fig. 6.7(a) are not scalar.
mm 11
First, it should be clear that ıIm 2 B1 for all ı 2 B1 . Hence, Theorem 6.4 can be used to end up
with sufficient robust stability conditions. However, such conditions might be arbitrarily conservative, as
illustrated by the example below.

Example 6.1. Consider the system in Fig. 6.8(b) for the static
 
˛ ˇ
T´w .s/ D ; for ˛; ˇ 2 R
0 ˛
11
and an LTI ı 2 B1 . Standard imaginary-axis crossing arguments yield that the closed-loop system is
stable iff I2 ı. j!/T´w . j!/ is nonsingular for all ! 2 R. This, in turn, is equivalent to the condition that
j˛j < 1, which does not depend on ˇ . At the same time, Theorem 6.4 yields the robust stability condition
  p
˛ ˇ 4˛ 2 C ˇ 2 C jˇj p

0 ˛
<1 ” <1 ” j˛j < 1 jˇj:
2

This condition is non-conservative only if ˇ D 0. The degree of conservatism of this condition, i.e. the
gap between it and the actual condition j˛j < 1, increases with jˇj and the problem becomes unsolvable
at all if jˇj  1. O

Conservatism can be reduced by noticing that ıI D M 1 .ıI /M for all LTI and nonsingular M . If M
is bi-stable, then the internal stability of the system in Fig. 6.8(b) is equivalent to that in Fig. 6.8(c). The
freedom in the choice of the multiplier M (cf. the discussion in ÷3.4.4) can then be used to reduce the H1
norm of M T´w M 1 . The following result, which is a particular case of the general  theory [54], holds:

Proposition 6.5. The following statements are equivalent:


11
1. The feedback interconnection in Fig. 6.8(b) is internally stable for all ı 2 B1 .
mm 1 mm 1
2. T´w 2 H1 and there is M 2 H1 such that M 2 H1 and kM T´w M k 1 < 1.
3. T´w 2 H1 and .T´w . j!// < 1 for all ! 2 R.

In the SISO case, i.e. if m D 1, the conditions of Proposition 6.5 clearly reduce to that of Theorem 6.4.

Example 6.2. Return to Example 6.1. The third  condition


 of Proposition 6.5 obviously reads j˛j < 1. To
apply the second condition, let us select M D 0 10 , which is obviously bi-stable for all  2 R n f0g. The
stability condition reads then
     
 0 ˛ ˇ 1= 0 D ˛ ˇ < 1
p

0 1 ” j˛j < 1 jˇj:
0 ˛ 0 1 0 ˛

As  is almost arbitrary (nonzero), the quantity jˇj can be made arbitrarily small, so we can recover the
non-conservative condition j˛j < 1, again. O
112 C HAPTER 6. ROBUSTNESS TO D ELAY U NCERTAINTY

The scaling in Proposition 6.5 requires dynamic systems in general. Finding such a system is in
general a matter of using a finite number of matrix parameters over a chosen frequency grid. This is not
particularly elegant and leads normally to rather high-order conditions. However, if only static scaling
parameters M are used, the contractiveness of M T´w M 1 can be cast as a standard LMI (as a matter of
fact, the use of only static scaling is enough if ı is allowed to be time varying [65]).
Proposition 6.6. Let T´w .s/ D D C C.sI A/ 1 B for a Hurwitz A be an m  m transfer function. There
is M 2 Rmm such that kM T´w M 1 k1 < 1 iff there are X; Y > 0 such that
 0 
A X C XA C C 0 Y C XB C C 0 YD
< 0: (6.3)
B 0X C D0 Y C D 0 YD Y
Proof. Because  
1 A BM 1
M T´w .s/M D ;
M C MDM 1
the condition kM T´w M 1 k1 < 1 is equivalent [44, ÷4.3.4] to the existence of X D X 0 > 0 such that
 
A0 X C XA C C 0 M 0 M C XBM 1 C C 0 M 0 MDM 1
< 0:
M 0 B 0 X C M 0 D 0 M 0 M C M 0 D 0 M 0 MDM 1 I
 
This
 I 0 condition
 does no change it its left-hand side is pre-multiplied by I0 M0 0 and post-multiplied by
0
0 M (as M is nonsingular). This yields (6.3) by denoting Y D M M > 0.

Return now to the uncertainty set B1 . It might appear that in most situations this is a rather rough
description of uncertain elements. Indeed, it is hard to imagine a realistic uncertain element, whose
frequency response lies in the unit disk in the complex plane in all frequencies. A finer model should
accommodate different modeling mismatch sizes over different frequencies, like having smaller errors
over low frequencies and larger—over high. Moreover, in many situations uncertain elements are not
centered at the origin, but rather at some nonzero nominal value, which might also drift with the frequency.
It happens, however, that the set B1 can easily generate such models as well. In a sense, it serves a
base for many more sophisticated, and more realistic, uncertainty models, whose properties are shaped by
frequency-dependent weights. A general, albeit quite abstract, model for such uncertain sets is
mm
˚ mm

Fu .W; B1 / ´  j 90 2 H1 such that  D Fu .W; 0 / (6.4)
11
for a given weight W . Likewise, the set Fu .W; B1 Im / can be used for structured uncertainty elements.
Several concrete simple examples of weight, which are more tangible than the abstract model, are pre-
sented below:
Constant scaling: the choice
 
0 ˛I
W D H) Fu .W; B1 / D B1 ˛;
I 0
which is merely a uniform scaling of the uncertain element by ˛ > 0. In the SISO case, frequency
responses of elements from this set lie in a disk of the radius ˛ centered at the origin at all frequencies,
x.
i.e. in ˛ D
Frequency-dependent scaling: the choice
 
0 W˛
W D H) Fu .W; B1 / D B1 W˛ ;
I 0
which scales it by a dynamic, i.e. frequency dependent, weight W˛ . In the SISO case, frequency
responses of elements from this set at each frequency ! lie in a disk of the radius jW˛ . j!/j centered
x.
at the origin, i.e. in jW˛ . j!/j D
6.2. E MBEDDING UNCERTAIN DELAYS INTO LESS STRUCTURED UNCERTAINTY CLASSES 113

ıIm
´ w

GQ ´w GQ ´u
 

GQ yw GQ yu
y u

11
Fig. 6.9: Embedded robustness setup for Fig. 6.1 with ı 2 B1 and GQ D W ? G

Shifted frequency-dependent scaling: the choice


 
0 W˛
W D H) Fu .W; B1 / D W~ C B1 W˛ ;
I W~

which effectively assumes the “nominal model” of the uncertain element at W~ and scales the devia-
tion from this nominal model by a dynamic, i.e. frequency dependent, weight W˛ 2 H1 . In the SISO
case, frequency responses of elements from this set at each frequency ! lie in a disk of the radius
x.
jW˛ . j!/j centered at W~ . j!/, i.e. in W~ . j!/ C jW˛ . j!/j D
In general, the mapping  7! Fu .W; / can be viewed as a Möbius transformation, which maps the
x into yet another disk on the complex plane C, provided jW11 . j!/j < 1 for all ! . The latter
unit disk D
x . If it does not hold, then a disk may be transformed to a half plane
condition just says that 1 62 W11 . j!/D
or to the exterior of a disk. If W11 ¤ 0, the transformed disk may be rotated and warped, in the sense
that the distance between points is not preserved under the transformation (these properties can perhaps
be exploited to reduce the conservatism of the approach, although I am not aware of such results).
The analysis of systems with the uncertainty model (6.4) is not conceptually different from that of
systems with the plain unit disk model. Indeed, the feedback connection of T´w and Fu .W; / is equivalent
to the feedback connection of Fl .W; T´w / and . Thus, we return to the setup studied earlier and can use
11
Theorem 6.4, just for Fl .W; T´w / rather than T´w . Structured uncertainty from the set Fu .W; B1 Im / can
be treated in the same vein. Thus, taking into account that Fu .G1 ; Fu .G2 ; // D Fu .G2 ? G1 ; /, where
“?” stands for the Redheffer start product, the uncertain delay setup in Fig. 6.7(a) can be embedded into
the system in Fig. 6.9 under GQ D W ? G .
In what follows, we are interested in this setup for W as in the shifted frequency-dependent scaling
model above and  
0 Im
GD x 0 0
PD
which corresponds to the input delay system with a plant P and a nominal delay 0  0, cf. (1.22) on p. 9.
In this case      
0 W˛ 0 Im 0 W˛
GQ D ? x 0 0 D x 0 :
x 0 P W~ D (6.5)
Im W~ PD PD
Q K/ W w 7! ´ becomes
If a controller K W y 7! u is applied, the closed-loop system T´w D Fl .G;

T´w D W˛ .I x 0 / 1 KP D
KP W~ D x 0 : (6.6)

As discussed in ÷4.3.2, the H1 problem for 0 ¤ 0 is particularly simple for a generalized plant as in
(6.5).

6.2.3 Covering models for the uncertain delay element


Having acquired tools to handle problems with unstructured and structured norm-bounded uncertainty,
consider now their use in the robust stability analysis of systems with delay uncertainty in Fig. 6.1. Our
114 C HAPTER 6. ROBUSTNESS TO D ELAY U NCERTAINTY

goal is to embed the uncertain delay element D x ı into a family of stable systems, whose frequency re-
sponses lie in disks in the complex plane and include those of D x ı for all ı 2 Œ0; 
N . This, in turn, embed
the problem into the robust stability setup in Fig. 6.9.
To construct such families, consider the areas in the complex plane C, which are covered by the
uncertain frequency response e j!ı , again for ı 2 Œ0;  N . At every frequency ! this is a circular arc of
the unit radius, centered at the origin and subtending the angle N ! [rad] in the clockwise direction starting
from 1 (for ı D 0). Some of these arcs are presented below:

N ! D 0 N ! D 1 N ! D 2 N ! D  N ! D 4 N !  2

1 1 1 1 1
(6.7)

The arc reduces to a single point, 1, at ! D 0 and increases with ! , until it becomes the whole unit circle
T for all !  2=N . Our goal is to cover these arcs by tightest disks of the form ~.!/ C ˛.!/D x , where
~.!/ 2 C is a center and ˛.!/  0 is a radius, at each frequency. This task clearly depends on the choice
of the disk center ~.!/, so consider several possible choices for it below.

Covering with the center in ~.!/ D 0


Perhaps the simplest choice is to place the center to the origin for all frequencies, which yields the closed
x as the frequency-independent covering, see Fig. 6.10(a). This implies that the system in Fig. 6.1
unit disk D
for every ı is covered by the the system in Fig. 6.9 and GQ D G . By Proposition 6.5, the system in Fig. 6.9
is robustly stable iff

GQ ´w 2 H1 and 9M; M 1
2 H1 such that kM GQ ´w M 1 x  k1 D kM GQ ´w M
D 1
k1 < 1:

Hence, the conditions above also guarantee the stability of the system in Fig. 6.1 for all ı  0 , i.e. the
system is delay-independent stable. In the particular case of the standard feedback control system with a
plant P , a controller K , and  D 0, we have that GQ ´w D PK is the loop gain (cf. (1.25)) and the closed-
loop system is stable for every constant loop delay if loop gain is stable and its peak frequency-response
gain is strictly contractive. In the SISO case, i.e. for m D 1, this recovers Tsypkin’s criterion [72].

Covering with the center in ~.!/ D 1


Delay-dependent conditions can be obtained by choosing ~.!/ ¤ 0. The best studied choice is apparently
~.!/ D 1, which effectively assumes the nominal value ı D  . The tightest covering disk for this choice
does depend on N , see Figs. 6.10(b) and 6.10(c). The radii of such disks at each frequency are derived as
(
˚ j1 e jN ! j if N ! 2 Œ0; 
˛.!/ D max j1 e j!ı j D :
ı 2Œ0;N  j1 e j j otherwise

Taking into account that j1 e jN ! j2 D 2.1 cos.!//


N D 4 sin2 .N !=2/, we end up with ˛.!/ D ˛1 .!/
N ,
where (
6 dB
2 sin.!=2/ if ! 2 Œ0; 
˛1 .!/ ´ D  2 !
(6.8)
2 otherwise
(this formula was apparently first derived by Owens and Raya in [53]). The function ˛1 .!/ is mono-
tonically increasing, vanishing as ! # 0. This agrees with the intuition that the uncertainty level of this
6.2. E MBEDDING UNCERTAIN DELAYS INTO LESS STRUCTURED UNCERTAINTY CLASSES 115

~.!/ 1 ~.!/ ~.!/

/
˛ .!

˛.
!
/D
2
(a) ~.!/ D 0 (b) ~.!/ D 1 for !
N D1 (c) ~.!/ D 1 for !
N D4

!/
/

˛.
˛ .!

~1.!/

~.!/ ~.!/

~2 .!/

(d) ~.!/ D e jN !=2 for N ! D 2 (e) ~.!/ D e jN !=2 for N ! D 4 N D2
(f) disks comparison for !

~.!/
!/
˛.
~.!/

(g) ~.!/ D ~3 .!/


N from (6.12) for !
N D2 (h) ~.!/ D ~3 .!/
N from (6.12) for N ! D 4

Fig. 6.10: Covering disks for uncertain delays

element decreases at low frequencies. Moreover, the range of frequencies, in which the uncertainty radius
˛1 .N !/ is small, increases as N decreases, which is also intuitive.
The use of ˛1 as the uncertainty radius for the system in Fig. 6.9 is hampered by the fact that this is
not a rational function of ! , i.e. it cannot be the frequency response of a finite-dimensional system. So
we shall approximate ˛1 .!/ by the frequency response of a rational transfer function W˛1 .s/ such that
jW˛1 . j!/j  ˛1 .!/. Possible choices are
p
2 3s 2:007 s s 2 C 3:695s C 5:56
W˛1;0 .s/ ´ s; W˛1;1 .s/ ´ p ; and W˛1;3 .s/ ´ ; (6.9)
sC2 3 s C 2 s 2 C 3:026s C 5:56

see Fig. 6.11(a). We can see that the use of W˛;13 yields a close approximation of ˛1 , so further increase of
the approximation order is normally not required. It should be emphasized that jW˛11 . j!/j < jW˛10 . j!/j
for all ! ¤ 0, which means that the approximation with W˛11 always yields less conservative results than
that with W˛10 . At the same time, the gain of W˛13 is not always smaller than that of W˛11 , so there
might be situations when the use of this third-order covering leads to more conservative results than for
the first-order covering.
With finite-dimensional and stable W~ and W˛ selected according to

W~ .s/ D 1 and W˛ .s/ D W˛1 .N s/


116 C HAPTER 6. ROBUSTNESS TO D ELAY U NCERTAINTY

10:8 10:8

6 6

p
 2 ! 2 3 2 !
˛1 .!/ ˛1 .!/
jW˛1;0 . j!/j ˛3 .!/
jW˛1;1 . j!/j jW˛3;1 . j!/j
jW˛1;3 . j!/j jW˛3;3 . j!/j

(a) ˛1 .!/ from (6.8) for ~.!/ D 1 (b) ˛3 .!/ from (6.14) for ~.!/ D W~3 . jN !/ from (6.13)

Fig. 6.11: Rational approximations of radii of tightest covering disks

(mind scaling the Laplace variable in W˛ ), we can embed the robust stability  problem
 for the system in
Fig. 6.1 into the robust stability problem for the system in Fig. 6.9 with W D W0˛ II . For the problem with
an uncertain loop delay this yields GQ as in (6.5), which in the case of W~ D 1 is actually the generalized
plant for the robust stability problem with the input multiplicative uncertainty model discussed in ÷4.3.2.
The controlled system in (6.6) reads then

T´w D W˛ .I KP / 1 KP D W˛ T;

i.e. it is the weighted complementary sensitivity. Thus, Proposition 6.5 leads to the conclusion that
N if T 2 H1 and there is M; M 1 2 H1 such that
the system in Fig. 6.1 is stable for all ı 2 Œ0; 
1
kM TM W˛ k1 < 1. In the SISO case, this reduces to the condition kW˛ T k1 < 1.
With the choice W˛ .s/ D W˛1;0 .s/
N D N s , the condition above reads as the strict contractiveness of
the H1 -norm of N sT .s/. Thus, the delay margin is lowerbounded by the reciprocal of the H1 -norm of
sT .s/. Even though this condition is based on the loosest upper bound on ˛1 .!/, it is might be useful,
see the proof of Proposition 6.1. The reason is that it is a closed-form expression, independent of N itself,
unlike conditions produced from the other bounds in (6.9).

Example 6.3. Consider a SISO plant P , whose only unstable pole is located at s D a > 0. The problem
addressed in this example is to find conditions under which a stabilizing K exists such that kW˛ T k1 < 1.
This would yield a (conservative) guarantee on the attainable delay margin d , which is yet another take
on the problem studied in ÷6.1.1. It is not hard to see that T .a/ D 1 for all stabilizing controllers.
Therefore, we have that kW˛ T k1  jW˛ .a/j. In fact, in the case of a single unstable pole this bound
can be approached arbitrarily closely by stabilizing controllers, see [13, Sec. 6.1]. Thus, we have that the
system can be robustly stabilized iff
jW˛1 .a/j
N <1 (6.10)

If W˛ .s/ D W˛1;0 .N s/ D N s , then condition (6.10) results in the feasibilitypof the robust stability problem
iff N < 1=a. If W˛ .s/ D W˛1;1 .s/ N from (6.8), then we have N < 2.6 C 3/=.11a/  1:406=a, while if
W˛ .s/ D W˛1;3 .N s/, then all N < 1:702=a can be attained. All these bounds are more conservative than the
upper bound 2=a from Proposition 6.2, although the conservatism level decreases as the approximation
accuracy increases. O

Covering with the center in ~.!/ D e j!=2


N

Another seemingly natural choice is to place the center to the middle of the uncertain arc at each frequency,
i.e. at ~.!/ D e jN !=2 . This effectively assumes the nominal value of the delay at ı D N =2. The tightest
6.2. E MBEDDING UNCERTAIN DELAYS INTO LESS STRUCTURED UNCERTAINTY CLASSES 117

disks for this choice, see Figs. 6.10(d) and 6.10(e), have the radii
(
˚ ˚ j1 e jN !=2 j if N ! 2 Œ0; 2
jN !=2
˛.!/ D max je e j!ı j D max j1 e j!.ı =2/
N
j D j j
:
ı 2Œ0;N  ı 2Œ0;N  j1 e otherwise

Thus, ˛.!/ D ˛1 .N !=2/ for ˛1 .!/ defined by (6.8) and we can use all its finite-dimensional approxima-
tions from (6.9). It is readily seen that ˛1 .N !=2/ < ˛1 .N !/ for all ! 2 .0; 2/. Hence, covering disks in
this case have smaller radii than those for ~.!/ D 1 for all N ! 2 .0; 2/.
With the use of
W~ .s/ D e s=2
N
and W˛ .s/ D W˛1 .N s=2/
x 0 CN =2 . This
for the input-delay problem, we end up with (6.5), which includes the nonzero loop delay D
delay can be compensated by the standard MSP though, so its presence does not hamper the design of
controllers minimizing kT´w k1 .
Example 6.4. Consider the problem studied in Example 6.3, where a SISO plant has only one unstable
pole at s D a. The delay margin d D N is guaranteed if there is a stabilizing K such that kT´w k1 < 1,
where T´w D W˛ PK=.1 P W~ K/. Because T´w .a/ D jW˛ .a/j=jW~ .a/j for every stabilizing K , we have
that kT´w k1  jW˛ .a/j=jW~ .a/j D eN a=2 jW˛ .a/j and this bound can be approached. Hence, the system is
robustly stable iff
eN a=2 jW˛1 .N a=2/j < 1: (6.11)
For the rational approximations of ˛.!/ given by (6.9), inequality (6.11) results in the stability conditions
N < 1:134=a, N < 1:259=a, and N < 1:305=a. The first one of them, corresponding to the cover with
W˛1;0 .s/ D s , is less conservative than the corresponding condition in Example 6.3. The second and the
third conditions are actually more conservative, even though the uncertainty disks have smaller radii. O
The outcome of Example 6.4, in that a smaller covering disk might yield a more conservative bound,
might appear counterintuitive at first sight. Yet it is actually not, as explained in [76]. To understand why,
compare the disks for ~.!/ D 1 and ~.!/ D e j!=2 N
at N ! D 2, which are shown in Fig. 6.10(f). These
disks cover different areas and the smaller disk is not inscribed in the larger one. Consequently, it might
happen that the worst uncertainty ı belongs to the area in the smaller disk, but not in the larger one.

Covering with a center motivated by the smallest covering disks


Neither ~.!/ D 1 nor ~.!/ D e jN !=2 results in the smallest disks, i.e. those having the smallest radii,
covering the arcs in (6.7). The smallest disks are obtained by placing the center to the midpoint of the
chord connecting the end points of the arc for every frequency until the arc becomes a semi-circle, and at
the origin afterwards, see Figs. 6.10(g) and 6.10(h). Algebraically, such centers satisfy ~.!/ D ~3 .!/
N ,
where (
1 C e j! .1 C e j! /=2 if ! 2 Œ0; 
~3 .!/ ´ 1Œ0; .!/ D (6.12)
2 0 otherwise
The radii of such disks are
ˇ jN !
ˇ jN ! j
ˇ1 C e j!ı ˇ
ˇ j1 e N !
˛.!/ D max ˇˇ e ˇ D D sin
ı 2Œ0;N  2 2 2

N 2 Œ0;  and ˛.!/ D 1 if !


if ! N   . Thus, we end up with ˛.!/ D ˛1 .!/=2 N , where ˛1 .!/ is
given by (6.8). Moreover, the resulting disks are always inscribed in the disks corresponding to the choice
˛.!/ D 1. This implies that centering disks in (6.12) would never result in more conservative results than
for disks centered at ˛.!/ D 1.
118 C HAPTER 6. ROBUSTNESS TO D ELAY U NCERTAINTY

However, the use of (6.12) in robust control problems is not easy. What we need is a causal system
W~ , whose frequency response coincides with that of ~.!/. The finite support of ~.!/, which bears some
resemblance to the ideal low-pass filter, makes this problem vain. But motivated by the “optimal” choice
above, consider placing the center at ~.!/ D W~3 . jN !/, where
s 2 C 12
W~3 .s/ D ; (6.13)
s 2 C 6s C 12
which is the Œ2; 2-Padé approximant of .1 C e s /=2. The radii of covering disks for this choice are given
in the following result:
Lemma 6.7. The tightest covering disks for ~.!/ D W~3 . jN !/ has the radii ˛.!/ D ˛3 .!/
N , where
s
6 dB
18! 2 C .12 ! 2 /ˇ.!/
˛3 .!/ ´ 2 2 D p (6.14)
! 4 C 12! 2 C 144 2 3 2 !

where ( p
.12 ! 2 / cos ! C 6! sin ! if ! 2 Œ0; 2 3
ˇ.!/ D p :
! 4 C 12! 2 C 144 otherwise
Proof. Without loss of generality assume that N D 1. Denote

j! 18! 2 C .12 ! 2 /..12 ! 2 / cos.! / C 6! sin.! //


. / ´ je W~3 . j!/j2 D 2 2
! 4 C 12! 2 C 144
for  2 RC and analyze its maximum for  2 Œ0; 1. This is a periodic continuous function of  with
d 2!.! 2 12/.6! cos.! / C .! 2 12/ sin.! //
D
d ! 4 C 12! 2 C 144
and P . / D 0 at every
1 6! 
 D k ´ arctan 2
C k; k 2 Z:
! 12 ! !
2
Ifp! < 12, then 0pis a decreasing function of ! , startingpfrom 0:5 at ! D 0 and approaching
=.4 3/  0:45 as ! " 12. In this frequency range =!  = 12 > 0:9, we have that k > 1 if k is
positive and k < 0 if k is negative. Thus, so the only k 2 Œ0; 1 is 0 . It can be verified that  D 0 is a
local minimum point. Hence, the maximum, requiredpfor calculating ˛.!/, is achieved either at  Dp 0 or at
 D 1. Direct calculations show that for all ! 2 Œ0; 2 3 we should pick the latter, pso that ˛3 .!/ D .1/.
If ! 2 > 12, then 0 < 0 and 1 is a decreasing function of ! lying in .0; =.4 3//. Moreover, k is a
local maximum point for all odd k . Because for all odd k
p
.12 ! 2 / cos.!k / C 6! sin.!k / D ! 4 C 12! 2 C 144
p p
is independent of k and 1 2 Œ0; 1 for all ! > 2 3, we may always take ˛3 .!/ D .1 /.

Similarly to the procedure for ~.!/ D 1, it is possible to approximate the function ˛3 .!/ by the
magnitude frequency response of rational transfer functions. Two possible choices are
2s s 2 C 1:92s C 12:781
W˛3;1 .s/ ´ and W˛3;3 .s/ ´ W˛3;1 .s/ ; (6.15)
sC4 s 2 C 2:393s C 12:781
see Fig. 6.11(b) on p. 116. This first-order approximation, jW˛3;1 . j!/j, is tight at both low and high fre-
quencies. This is in contrast to jW˛1;1 . j!/j, which is loose at high frequencies. In fact, it can be verified,
numerically, that
x  1 C jW˛1;1 . j!/j D
W~3 . j!/ C jW˛3;1 . j!/j D x
6.2. E MBEDDING UNCERTAIN DELAYS INTO LESS STRUCTURED UNCERTAINTY CLASSES 119

for all ! . Hence, covering the uncertain delay D x ı with disks centered at W~3 and having the radius
jW˛3;1 . j!/j is less conservative than covering with those centered at 1 and having the radius jW˛1;1 . j!/j.
Also, it is readily seen that jW˛3;1 . j!/j > jW˛3;2 . j!/j for all ! > 0, so the third-order upper bound
is always less conservative than the first-order one in this case. This observation is confirmed by the
example below.
Example 6.5. Returning to the system in Examples 6.3 and 6.4. Now the problem is solved for
W~ .s/ D W~3 .N s/ and W˛ .s/ D W˛3 .s/:
N

By the arguments already used in Example 6.4, the problem is solvable if


jW˛3 . jN a/j < jW~3 . jN a/j:

This yields the conditions N < 1:691=a and N < 1:769=a under W˛3 D W˛3;1 and W˛3 D W˛3;1 , respec-
tively. These conditions are less conservative than those in both Example 6.3 and Example 6.4, under
compatible dimensions of W˛ . In fact, the condition N < 1:769=a is only about 88% of the upper bound
in Proposition 6.2, which is not bad for a conservative covering. O

6.2.4 Case study


Return to the system studied in ÷6.1.2, where a proportional primary controller was designed for the
plain integrator with a nominal delay 0 , see Fig. 6.3. Consider again its delay margin, i.e. the minimal
destabilizing ı D N > 0 (for the sake of simplicity, only the increase of the loop delay is addressed
below). But unlike the analysis in ÷6.1.2, where the actual delay margin was calculated, the problem is
now analyzed from the general uncertainty embedding point of view. To this end, consider the system T´w
from (6.6) for
1 kp s
P .s/ D and K.s/ D ;
s 0 .1 kp /s C kp .1 e 0 s /
where kp 2 .0; 1/. Given stable W~ .s/ and W˛ .s/, the resulting T´w has the transfer function
W˛ .s/e 0 s 0 s
T´w .s/ D ´ T0 .s/W˛ .s/e ;
0 s C 1 e 0 s C W~ .s/e 0 s

where 0 ´ 0 .1 kp /=kp > 0. The robust stability condition reads then kT0 W˛ k1 < 1. As a matter
of fact, the latter condition in this case is equivalent to T0 2 H1 and jT0 . j!/j < 1=jW˛ . j!/j for all
! . Because this is not a design problem, we can use the actual disk radius function ˛.!/ rather than its
rational approximation jW˛ . j!/j, so the condition to verify is
1
T0 2 H1 and jT0 . j!/j < ; 8! 2 R: (6.16)
˛.!/
A brute-force search over a chosen frequency grid can then be used to check the second condition above.
It can be shown, via the use of the Poisson integral formula, that an H1 function, actually outer, W˛ .s/
exists such that jW˛ . j!/j D ˛.!/, see [56, Eqn. (1.7)]. Hence, the replacement above is well justified.
First, consider the covering under W~ .s/ D 1. With this choice the delay is eliminated from T0 .s/,
yielding the rational
1
T0 .s/ D ;
0 s C 1
which is clearly stable for all kp 2 .0; 1/. With ˛.!/ D ˛1 .!/
N defined by (6.8), condition (6.16) reads
(
1 1 1= sin.N !=2/ if N ! 2 Œ0; 
p <
2 2
1 C 0 ! 2 1 otherwise
120 C HAPTER 6. ROBUSTNESS TO D ELAY U NCERTAINTY

2.5

1.41

0.67

0
0.5 0.67 0.75 0.87 0.91 1

Fig. 6.12: Normalized delay margins obtained via embedding

In fact, because the left-hand side above is a strictly decreasing function of ! , we may only check the
condition in the frequency range N ! 2 Œ0; . The stability condition is then
p
2 sin.N !=2/ < 1 C 02 ! 2 ; 8N ! 2 Œ0; 

or, equivalently,
p p
2 1 C 02 ! 2 20 1 C !Q 2
N < min
p arcsin D min p arcsin  1:4775 0 :
!2Œ0; 3=0  ! 2 !2Œ0;
Q Q
3 ! 2

Thus, we end up with the condition


N 1 kp
< 1:4775 ;
0 kp
which is easy to verify. This bound is presented in Fig. 6.12 by cyan-blue line (the actual margin from
Fig. 6.4(c) is shown by the dashed black line now). It is quite conservative for kp < 0:75, but becomes
very accurate after the first jump at kp  0:749. However, the approach does not capture the discontinuity
of the actual delay margin in this case.
Now, let us choose W~ .s/ D W~3 .N s/ from (6.13). In this case

1 1
T0 .s/ D 0 s
D 0 s
;
0 s C 1 .1 N e
W~ .s// 0 s C 1 V .N s/e

where V .s/ ´ 6s=.s 2 C 6s C 12/. This transfer function is actually stable for all admissible parameters.
Indeed, this transfer function can result from the feedback interconnection of 1=.0 s C 1/ and V .N s/e 0 s .
Both these transfer functions are stable and have contractive frequency responses. p Moreover, the first one
has the unit magnitude only at ! D 0, whereas the second—only at N ! D 2 3 > 0. Hence, the loop gain
is strictly contractive at all frequencies and T0 2 H1 for all 0  0 and N > 0 by the Small Gain Theorem.
The second condition of (6.16) is more involved that that for the choice ~.!/ D 1 and there is probably
no analytic solution to it. Still, it is readily verified numerically, resulting in the bound shown in Fig. 6.12
by red solid line. It is visibly less conservative than the previous bound for kp < 0:749 and is virtually
indistinguishable from the precise margin for kp > 0:749. It also does capture the discontinuity of the
delay margin.
As a matter of fact, the small-gain arguments used to prove the stability of T0 above can also be applied
in the case when the actual ~3 from (6.12) (or, more accurately, its outer extension) is used instead of W~3 .
The resulting bound is shown by the red dotted line in Fig. 6.12. This curve is almost the same as the solid
red line, except for the interval kp 2 .0:673; 0:749/.
6.2. E MBEDDING UNCERTAIN DELAYS INTO LESS STRUCTURED UNCERTAINTY CLASSES 121

6.2.5 Time-varying delays


Time-varying delays can also be embedded into the general robustness setup in Fig. 6.9, now with a po-
11
tentially time-varying uncertainty element ı 2 B1 . As already discussed at the beginning of ÷6.2.2, by
this unit ball we understand then the set of all stable systems, whose L2 .R/-induced norm is contractive.
Covering time-varying delays is a more delicate, and less visual, task. The fact that the time-varying
delay operator D x .t/ might be unstable, see Remark 1.2 on p. 3, adds even more complications. Nonethe-
less, covering norm-bounded uncertainty sets can be found and the following two results, which should
be apparently attributed to [19, ÷8.6.1], are instrumental toward this end.
Lemma 6.8. If  .t / 2 Œ0;  x .t/ is a bounded operator on L2 .RC /, with
N and jP .t /j   < 1, then D

kDx .t/ kL2 .RC /!L2 .RC / D p 1 > 1:


1 
Proof. Let .t / ´ t  .t /. Because .t P /  1  > 0, the function .t / ´  1 .t / is well defined with
P /  1=.1 /. Now, if w D D
t  .t /  t C N and 0 < .t x .t/ ´, then we have that
Z 1 ˇ Z 1 Z 1
2 2 ˇ 2P P ds
kwk2 D Œ´.t  .t // dt ˇ D Œ´.s/ .s/ds D Œ´.s/2 .s/
0 tD.s/ .0/ 0
Z 1
1 1
 Œ´.s/2 ds D k´k22
1  0 1 
(the facts that t D .s/ ” s D .t / and ´.t / D 0 whenever t < 0 were used). Now it only remains to
prove that the bound is tight. To this end, let (
 t if 0  t  =
N
´.t / D 1Œ0;.1= 1/
N .t / and  .t / D ;
N otherwise
for which w.t / D 1Œ0;N = .t / and
kwk22 N = 1
2
D D :
k´k2 .1= 1/N 1 
This is what we need.

The second result was also independently shown in [25]:


Lemma 6.9. If  .t / 2 Œ0; 
N , then the system N W ´ 7! w such that
Z t Z 0
w.t / D ´.s/ds D ´.t C s/ds
t .t/ .t/

is a bounded operator on L2 .RC / with kN kL2 .RC /!L2 .RC / D N .


Proof. By the Cauchy–Schwarz inequality,
Z 0 2 Z 0 Z 0  Z 0
2 2 2
w .t / D ´.t C s/ds  1 ds ´ .t C s/ds D  .t / ´2 .t C s/ds
.t/ .t/ .t/ .t/
Z 0
2
 N ´ .t C s/ds
N

for every t , so that (mind that it is assumed that ´.t / D 0 whenever t < 0)
Z 1 Z 0 Z 0Z 1 Z 0
2 2 2
kwk2  N ´ .t C s/ds dt D N ´ .t C s/dt ds D N k´k22 ds
0 N N 0 N

Thus, kwk22  N 2 k´k22 . This bound is tight because in the particular case of  .t /  N the system N is
actually the LTI system with the transfer function .1 e s N
/=s , whose H1 -norm equals N .
122 C HAPTER 6. ROBUSTNESS TO D ELAY U NCERTAINTY

Lemma 6.8 is readily applicable to the problem of finding stability conditions under uncertain varying
delays assumed to satisfy jPı .t /j   for a given 0   < 1. The procedure is almost identical then to
the use of disks with centers at the origin in the LTI pcase, like that in Fig. 6.10(a). The only deviation is
a different scaling, i.e. we need the static W˛ D 1= 1   1 instead of W˛pD 1 now. This would lead
to delay-independent results in terms of the size p of ı .t /, requiring GQ ´w D 1= 1 KP D x 0 to be stable
and strictly contractive, which reads kPKk1 < 1   1.
The use of Lemma 6.9 is also similar to the LTI case. To see this, observe that the operator N defined
in the lemma can be viewed as 1 D x .t/ acting on the integrated signal ´. In other words,
  
Gint 1 x
N D Fl ; D.t/ ;
Gint 0

where Gint is the integrator element having the transfer function Gint .s/ D 1=s . But this implies that
     
x 0 Gint1 0 W˛
D.t/ D Fu ; N D Fu ; N =N ;
1 1 1 W~

where W~ .s/ D 1 and W˛ .s/ D N s , exactly as the approximation of ˛1 for the very same W~ .s/ D 1, see
(6.8), by the rational bound W˛1;0 .s/ from (6.9). Because N =N 2 B1 by Lemma 6.9, we have that
  
x 0 W˛
D.t/ 2 Fu ; B1 ; 8 .t / 2 Œ0; 
N
1 W~

and the rest is by now standard. For example, this means that the result of Example 6.3 under this W˛ ,
which says that any system whose only unstable pole is at s D a is stable for all ı  N if N < 1=a, remains
valid for arbitrary time-varying ı .t /. In fact, it should be less conservative in the time-varying case.

6.2.6 Beyond simple coverings


So far, only approaches directly leading to (scaled) Small Gain Theorem applications have been discussed.
Advantage of this kind of results are their relative simplicity and suitability for off-the-shelf design meth-
ods, like the H1 optimization. However, they might result in rather conservative results. Conservatism
can be expected to be reduced if more sophisticated covering options are used.
One trick toward this end is the use of multiple covering areas. To grasp the idea, let us take a look
at Fig. 6.13. It illustrates covering uncertain delays by two convex regions, a disk and a half-plane. The
darkest region in each of these plots is the tightest convex region in C containing uncertain constant delays,
so we may expect it to be less conservative that each of the covering methods in Fig. 6.10. Both involved
areas, the unit disk and a half-plane, can be generated by quadratic forms in the frequency domain and

0 1 0 1 0 1 0 1

(a) N ! D 1 (b) N ! D 2 (c) N ! D 4 (d) N !  2

Fig. 6.13: Covering uncertain delays by multiple areas


6.3. A NALYSIS BASED ON LYAPUNOV–K RASOVSKII METHODS 123

thus falls into the general IQC (integral quadratic constraint) framework introduced in [36]. The use of
integral quadratic constraints in the context of the analysis of the robustness of delay systems can be found
in [26], where various bounds are found and the conservatism is shown to be reduced. However, IQCs,
and even more so those with irrational weights, are not quite easy to incorporate into design procedures.
Still, some approaches of doing that are available, see [74] and the references therein.

6.3 Analysis based on Lyapunov–Krasovskii methods


Lyapunov’s direct method studied in Section 2.2 can also be used to analyze the robustness of systems to
delay uncertainty. The literature on this subject is monstrous, with a great variety of results and not always
clear differentiation between them. Both Lyapunov–Krasovskii and Lyapunov–Razumikhin functions are
used. Below only a flavor of the use of the Lyapunov–Krasovskii approach is provided.
Consider the DDE
P / D Ax.t / C A x.t  /; xM 0 D 0
x.t (6.17)

for A 2 Rnn and A 2 Rnn . This is a version of (2.18) under zero initial conditions. Our goal is to
characterize conditions under which this DDE is stable for all  2 Œ0; N for some N  0. A typical modus
operandi is to transform the system equation into a Lyapunov–Krasovskii-friendly form, then choose a
structure of the Lyapunov–Krasovskii functional, and then analyze its derivative along the trajectories of
the system via upper-bounding cross terms. Each one of these stages has a plenty of options to play with.
The model transformation stage appears to be abandoned in more modern treatments, but exploring that
direction goes beyond the scope of this text. The choice of an appropriate Lyapunov–Krasovskii functional
is normally a tradeoff between treatability and conservatism. The general functional of the form (2.19)
might be too much to digest, it does not appear to lead to manageable robustness formulations. Bounding
cross-terms might not be quite intuitive, but this is where several clever and helpful tricks were introduces.
So we start with rewriting DDE (6.17) in the form
Z 0

P / D .A C A /x.t /
x.t A x.t / x.t  / D .A C A /x.t / A P C s/ds:
x.t (6.18)


It turns out to be advantageous (the idea is from [15]) to rewrite this equation in descriptor form
       Z 0
I 0 P /
x.t 0 I x.t / 0
D y.t C /d
0 0 P /
y.t A C A I y.t / A h
„ ƒ‚ …„ ƒ‚ … „ ƒ‚ …„ ƒ‚ … „ ƒ‚ …
EQ QP
x.t/ AQ x.t/
Q BQ

by adding the auxiliary variable


 
P / µ y.t / D 0 I x.t
x.t Q /:
„ ƒ‚ …
CQ

Choose now a Lyapunov–Krasovskii functional of the form


Z 0 Z t
V .t / D V1 .t / C V2 .t / ´ xQ .t /P EQ x.t
0 0
Q /C y 0 .r/Ry.r/dr ds
 tCs

for R > 0 and  


P1 0
P D ; P1 > 0: (6.19)
P2 P3
124 C HAPTER 6. ROBUSTNESS TO D ELAY U NCERTAINTY

Q x.t
Note that xQ 0 .t /EP Q / D x 0 .t /P1 x.t /, so that V is indeed positive function of the state .x; yM t /. The
derivatives are
 Z 0 
P 0 0 Q P 0 0 Q Q
V1 .t / D 2xQ .t /P E x.t
Q / D 2xQ .t /P Ax.t Q / B y.t C s/ds

Z 0
0 Q

0 Q0
D xQ .t / P A C A P x.tQ / 2 Q
xQ 0 .t /P 0 By.t C s/ds
h

and, using the Leibniz integral rule (2.20),


Z 0 Z 0

P
V2 .t / D y 0 .t /Ry.t / y 0 .t C s/Ry.t C s/ ds D  xQ 0 .t /CQ 0 RCQ x.t
Q / y 0 .t C s/Ry.t C s/ds:
 

Taking into account that   N , we have:


Z 0 Z 0

VP .t /  xQ 0 .t / P 0 AQ C AQ0 P C N CQ 0 RCQ x.t
Q / y 0 .t C s/Ry.t C s/ds 2 Q
xQ 0 .t /P 0By.t C s/ds:
 

The last term in the right-hand side above is the cross term to be bounded. To handle that, note that for all
Q > 0 and vectors v1 and v2 , we have that 0  .v1 CQ 1 v2 /0 Q.v1 CQ 1 v2 / D v10 Qv1 Cv20 Q 1 v2 C2v20 v1
or, equivalently, that
2v20 v1  v10 Qv1 C v20 Q 1 v2 : (6.20)
Thus
Z 0 Z 0 
2 Q
xQ 0 .t /P 0By.t C s/ds  Q
BQ 0 P xQ 0 .t / C y 0 .t C s/Qy.t C s/ ds
xQ 0 .t /P 0BQ 1
 
Z 0
0 0 Q 1 Q0 0
 N xQ .t /P BQ B P xQ .t / C y 0 .t C s/Qy.t C s/ds;


which is true for all Q > 0, in particular, for Q D R. Thus,



VP .t /  xQ 0 .t / P 0 AQ C AQ0 P C N CQ 0 RCQ C N P 0 BR
Q 1
BQ 0 P x.t
Q /

and VP .t / < 0 for all xQ ¤ 0 if

P 0 A C A0 P C N CQ 0 RCQ C N P 0 BR
Q 1
BQ 0 P < 0:

Equivalently, using (A.6) we can end up with the condition that VP .t / < 0 if the LMI
 0 
P AQ C AQ0 P C N CQ 0 RCQ N P 0 BQ
<0 (6.21)
N BQ 0 P N
R

is solvable in 0 < R 2 Rnn and P 2 R2n2n as in (6.19). This condition is readily verifiable for a fixed N
and can be used as a base for a parametric search over the scalar N to maximize the delay margin.
Remark 6.3 (connections with embedding). Equation (6.17) can be presented as the feedback intercon-
nection of the uncertain delay D x  with a system having the transfer function C´ .sI A/ 1 Bw , where C´
and Bw are any matrices in the rank decomposition A D Bw C´ . Choosing now covering with W~ .s/ D 1
and W˛ .s/ D N s , the robust stability problem converts to the form presented in Fig. 6.8(b) for
  
0 N sI 1
T´w .s/ D Fl ; C´ .sI A/ Bw D sC N ´ .sI A Bw C´ / 1 Bw
I I
D N C´ Bw C N C´ .sI A A / 1 .A C A /Bw ;
6.3. A NALYSIS BASED ON LYAPUNOV–K RASOVSKII METHODS 125

which is a standard system. We can then apply Propositions 6.5 and 6.6 to end up with the LMI condition
 
.A C A /0 X C X.A C A / C N 2 C´0 Y C´ X.A C A /Bw C N 2 C´0 Y C´ Bw
< 0; (6.22)
Bw0 .A C A /0 X C N 2 Bw0 C´0 Y C´ N 2 Bw0 C´0 Y C´ Bw Y

which is simpler, i.e. has less decision variables with X 2 Rnn and Y 2 Rrank A  rank A , than (6.21). But
now ignore the possibility to exploit a potential rank deficiency of A and choose C´ D I and Bw D A .
Also, consider an alternative, descriptor, representation of T´w above, namely
     1  
  I 0 0 I 0
T´w .s/ D N 0 I s D N CQ .s EQ Q 1 B;
A/ Q
0 0 A C A I A

which can be verified by straightforward algebra. It can then be shown, using the LMI characterization of
the H1 norm of descriptor systems from [34], that finding whether there is M such that kM T´w M 1 k1 <
1 is equivalent to solving the very LMI (6.21). In other words, (6.22) is merely a more economical version
of (6.21). It should also be emphasized that the covering disks chosen in the derivation of (6.22) are not
the tightest ones. O
Note that inequality (6.20) can be made less conservative, e.g. if replaced with Park’s inequality [55]
  
0
 0 0  Q QS v1
2v2 v1  v1 v2 ;
S 0 Q .I C S 0 Q/Q 1 .I C QS / v2

which follows from (6.20) via the substitution v1 ! v1 C S v2 . The additional variable S , whose choice
as S D 0 gives (6.20), can be used to tighten the bound. Then, LMI conditions resulting from such a
replacement, can again be shown to be equivalent to those resulting with the embedding approach [81].
There are then more directions for refining the analysis using Lyapunov–Krasovskii functionals, like
the use of functionals with more design parameters, exploiting Jensen’s, Wirtinger’s, Bessel’s inequalities,
et cetera, see surveys [64, 82] and the references therein. Still, the vast majority of these methods apply
only to the analysis problem, where a controller, actually a finite-dimensional one, is given and the stability
is verified. But in such situations, it might be easier to use graphical tests, like the Nyquist criterion,
to estimate the delay margin in a system. The real need in analytical methods is in design problems,
where a controller is to be constructed to satisfy certain performance requirements, including robustness.
It appears that Lyapunov-based methods are not there yet. They also appear to be less transparent in
revealing conservatism sources than methods based on embedding delay uncertainty into general-purpose
robustness problems.
126 C HAPTER 6. ROBUSTNESS TO D ELAY U NCERTAINTY
Chapter 7

Exploiting Delays

HE DELAY ELEMENT HAS SOME FAVORABLE PROPERTIES , which might be exploited in certain sit-
T uations. A handful of those ideas are outlined in this chapter. It should be emphasized that the delay
element has very rich dynamics. For that reason, possible side effects of its use, which might not be im-
mediately visible, should be taken into account. It is therefore well advised to be extra cautious in using
such methods, especially when delays are introduced into feedback loops.

7.1 Dead-beat open-loop control


Apparently the safest use of delays is in generating or processing reference signals, as these operations are
done outside of feedback loops and are therefore a relatively safe matter, in the worst case only accuracy is
lost, not stability. So we start with exposing several such methods, assuming scalar inputs for simplicity.

7.1.1 Posicast control


Consider a cart with a pendulum mounted on it, which is capable of moving in one dimension, say along
the x -axis. The control input is supposed to be the position of the cart x.t / itself. The problem is to move
the cart from its equilibrium at x D 0 to another equilibrium, say at x D xf ¤ 0, quickly, but without
exciting oscillations in the pendulum. This problem can be motivated by the task of moving a gantry crane
with a payload from a pick-up point to a drop-off point without causing oscillations of the payload.
The posicast control strategy1 for such a system, proposed by Otto J. M. Smith in [68], is illustrated
by the control sequence depicted in Fig. 7.1. It assumes that the cart movements do not apply longitudinal
forces to the pendulum end (i.e. a linearized behavior) and that there is no friction. The cart first leaps half
1 It abbreviates the “positive-cast,” because anglers drop their baits in the water at the maximum-position-zero-velocity instant.

0 xf 0 xf =2 xf 0 xf =2 xf 0 xf

(a) t < 0 (b) t D 0 (c) 0 < t < Tp =2 (d) t  Tp =2

Fig. 7.1: Posicast control of a cart with a pendulum with the period Tp

127
128 C HAPTER 7. E XPLOITING D ELAYS

a way, to the point xf =2, as shown in Fig. 7.1(b), and then waits there for a half of the pendulum period.
During this wait time the pendulum swings forth to the rightmost point, whose angle coincides with that
at the beggining of the swing and thus having its x -axis position exactly at xf with the zero velocity, see
Fig. 7.1(c). At this moment the cart leaps to the destination point xf , where the pendulum already stands
still, thus bringing the whole system to its required equilibrium. This sequence of actions corresponds to
the control signal
 x f

x.t / D 1.t / C 1.t Tp =2/ xf =2 D ;


0 Tp =2

where Tp denotes the pendulum period. In other words, the solution is the response of the open-loop
controller
1 C e Tp s=2
Kpcast .s/ D ; (7.1)
2
connected in series with the plant, to the scaled step xf 1.t /.
The intuition above can be expressed in control-theoretic terms as well. Assuming that the pendulum is
a point mass located at the distance l from its pivot point, the linearized model of the system in Fig. 7.1(a)
from the cart position x to the pendulum angle  is
s2
P .s/ D ;
ls 2 C g
p
where g is the standard gravity. Its poles at s D ˙ j g= l cause undamped oscillations if excited. Hence,
we should avoid exciting them by a required maneuver. The only way to prevent such oscillations in
the open-loop architecture is to cancel these poles by zeros of the controller (ignoring internal instability
problems produced by this for a while). This can be done in various ways, one of which is the posicast
controllerpin (7.1). Indeed, its zeros are at j.2i C 1/=Tp for all i 2 Z. The period of the pendulum is
Tp D 2 l=g , so these zeros include the poles of P .s/. In fact, the function P .s/Kpcast .s/ has no poles
and its step response
   
s.1 C e Tp s=2 / 1 2
.t / D L 1
T =2
D cos t 1Œ0;Tp =2 .t / D 0 p

2.ls 2 C g/ 2l Tp
has a finite duration, known as being dead-beat.
The approach is readily extendible to damped pendulums, with transfer functions of the form
s2
P .s/ D (7.2)
ls 2 C 2cs C g
for some c 2 < gl (i.e. the response if assumed to be underdamped). This transfer function has a pair of
stable poles at s D  ˙ j! , where
r
c g 2
´ and ! ´ 2 D ; (7.3)
l l Tp
where Tp is still the period of the pendulum and  can be interpreted as the decay ratio of its oscillations.
We consider a controller of the form
s
Kpcast .s/ D 0 C 1 e ; (7.4)

with the goal to design its three real parameters 0 , 1 , and  , guaranteeing that Kpcast .0/ D 1, to have the
required steady-state x , and Kpcast .  ˙ j!/ D 0, to have an FIR PKpcast , whose oscillations vanish in
finite time. The first requirement yields 0 C 1 D 1 and the second one reads

0 C 1 e e j! D 0 C 1 e cos. !/  j1 e sin. !/ D 0:


7.1. D EAD - BEAT OPEN - LOOP CONTROL 129

Obviously, 1 ¤ 0, so the condition above is solvable in real variables iff sin. !/ D 0 or, equivalently, iff
 D k=! for some k 2 Z. It is justified to choose the smallest  solving this equality. The choice  D 0
yields 0 C 1 D 0, which contradicts the requirement on the static gain above. The next smallest
 Tp
D D ;
! 2
which yields the following linear relation for 0 and 1 :
        T =2 
1 1 0 1 0 e p 1
 D H) D :
1 e 1 0 1 1 1 C eTp =2

With these choices, the controller has the transfer function

eTp =2 C e Tp s=2
Kpcast .s/ D (7.5)
1 C eTp =2
and results in the following step response of PKpcast :
 
1 ˇ t 2 1
.t / D T =2
e cos t C arccos 1Œ0;Tp =2 .t / D Tp =2 ;
1Ce p l Tp ˇ 0

p p
where ˇ ´ 1 C .Tp /2 =.2/2 D 1= 1 c 2 =.gl/  1. Note that 0 > 1 for all  > 0, i.e. the first
leap of the cart is always larger than the second one for a damped pendulum. The overall time required
for settling at the required equilibrium also increases if the damping is nonzero. This is intuitive, taking
into account that the period of the pendulum increases with damping.
Of course, in realistic situations we should not expect the match between zeros of the controller and
poles of the plant to be perfect. The response is no longer FIR then, there are residual oscillations of the
pendulum. For example, assume that the period of the pendulum Tp is known, but its decay ratio  used in
the controller design is different from the actual Q , which is realistic. In this case the residual oscillations,
which can be derived via the residues of the response P .s/Kpcast .s/=s at s D Q ˙ j2=Tp , are

1 e.Q /Tp =2
res .t / D Tp =2
step .t /;
1Ce
where step .s/ is the step response of the uncontrolled P . The maximum amplitude of this response decays
linearly with the mismatch jQ  j if the latter is sufficiently small.

7.1.2 Generating continuous-time FIR responses by a chain of delays


The problem of selecting parameters of the posicast controller in (7.4) can be seen as particular cases of a
more general problem of finding a chain of  delay elements,

X
KD x i ;
i D 0 D 0 < 1 <    <  ; (7.6)
iD0

for a given finite-dimensional P so that PK is an FIR system, whose impulse response has support in
Œ0;  . The problem may be considered for a given delay sequence fi g, when only parameters i 2 Rmq
are to be found, or in a completely general setting, when delays can also be chosen freely. Sometimes,
additional constraints on K , e.g. on its static gain K.0/, are incorporated. Such constraints normally also
rule out the trivial solution i D 0, for all i 2 Z0:: .
The following result characterizes all solutions to this problem.
130 C HAPTER 7. E XPLOITING D ELAYS

Lemma 7.1. If the realization P .s/ D D C C.sI A/ 1 B is minimal and K is as in (7.6), then PK is
FIR iff  A 
e  B    eA.  1 / B B ˚ D 0; (7.7)
 0 
where ˚ ´ 0    0 1 0 0 , and then the impulse response of PK is zero for all t >  .

Proof. Decompose
˚  ˚
x i D 
P i D i P C POi D
x  i
x i D 
i D i
x i C POi i D
P i D x  ;

where the truncation operator is defined in ÷3.4.2 on p. 60 and POi .s/ D C.sI A/ 1 eA. i /
B , cf. (3.39).
Hence,
X 
X
˚
PK D  i P i D x i C POi i D
x  :
iD0 iD0

The first term in the right-hand side above is an FIR system, whose impulse response has support in Œ0;  .
P
The second term is a  -delayed system. If its finite-dimensional part i POi i ¤ 0, the impulse response
of this system has support in . ; 1/ and PK cannot be FIR. Thus, the FIR property holds iff
  P 
X A eA. i /
Bi
POi .s/i D iD0
D 0:
C 0
iD0
P
Condition (7.7), which reads iD0 eA. i /
Bi D 0, is then clearly sufficient and its necessity follows
by the observability of .C; A/.

If P is n-dimensional, then there are nontrivial solutions to (7.7) iff the rank of
 
M ´ eA B    eA.  1 / B B 2 RnC1

is smaller than  C 1. In other words, there must exist a nontrivial kernel of M and then every ˚ 2 ker M
is admissible. This condition holds true whenever   n, which is a sufficient condition for the existence
of a required K . Nevertheless, there might also be situations when (7.7) is solvable by ˚ ¤ 0 even if
 < n.
To see that, return to the plant with the transfer function (7.2). Its minimal realization is
2 3
 ! 0
14
P .s/ D !  1 5:
l 2
 =! ! 2 1

Choosing  D n D 2 we obtain
 2 

e sin.2 !/ e .2 1 / sin..2 1 /!/ 0
M D 2  :
e cos.2 !/ e .2 1 / cos..2 1 /!/ 1

Thus, unless sin.2 !/ D sin..2 1 /!/ D 0, this is a rank-two matrix with


2 3 2 3 2 3
sin..2 1 /!/ 0 sin..2 1 /!/
ker M D Im 4 e 1  sin.2 !/ 5 H) 4 1 5 D 4 e 1  sin.2 !/ 5 ˛
2 
e sin.1 !/ 2 e 2  sin.1 !/

for an arbitrary ˛ ¤ 0. If we pick 1 D =! now, then sin.1 !/ D 0 and we have that 2 D 0, i.e. that
effectively  D 1. Because with this choice sin..2 1 /!/ D sin.2 !/, we then recover the posicast
controller (7.5) under ˛ D csc.2 !/=.1 C e =! /.
7.1. D EAD - BEAT OPEN - LOOP CONTROL 131

If in the example above 1 is such that sin.1 !/ ¤ 0, then two delays are used and the end point
p
can be reached faster. For example, let p1 D =.4!/ and 2 D =.2!/. In this case 0 D ˛= 2,
1 D e =.4!/ ˛ , and 2 D e =.2!/ ˛= 2. The condition K.0/ D 1 reads then
p p
2 2
˛D p D p :
1 2e =.4!/ Ce =.2!/ .1 e =.4!/ 2
/ C .2 2/e =.4!/
To simplify the expressions, assume the undamped case, i.e. that  D 0. The controller is then
 1  p 
K.s/ D 1 C p 1 2 e Tp s=8 C e Tp s=4
2
and its step response x.t /  1:707  1.t / 2:414  1.t Tp =8/ C 1:707  1.t Tp =4/. Thus, the cart position
first overshoots by about 71% at t D 0, then undershoots by about 71% at t D Tp =8, and finally settles at
the steady state value at t D Tp =4. This produces the following response of the pendulum:

p       1

2C 2 2 2 2l

Tp =4
.t / D cos t 1Œ0;Tp =8 .t / sin t 1ŒTp =8;Tp =4 .t / D 0
:
2l Tp Tp

which settles twice as fast as in the posicast controller case. The price is higher amplitudes of both the
cart and the pendulum.

7.1.3 Input shaping


The idea behind posicast control became popular in numerous input shaping schemes, where the control
input is the convolution of a reference signal, like a step, with shaping imulses, see [66] for an overview
of related techniques. Those shaping impulses can be seen as generated by an array of delays, exactly as
in (7.6). Thus, the posicast controller (7.5) can be seen as a special case of an input shaper.
Apart from canceling nominal lightly damped poles, input shapers are also endeavoured to be less
sensitive to the location of those poles, i.e. be robust. There are several approaches to improve robustness,
normally at the expense of a more involved controller and a longer settling to the final equilibrium. They
thus typically use an increased number of delay elements in the controller, like (7.6) for  D 2 for plant
(7.2) instead of  D 1 as in (7.4). The extra parameters are then used to bring in additional properties,
reducing the sensitivity of the scheme to uncertain resonance parameters.
As an example of robustifying ideas, consider the ZVD (zero vibration and derivative) approach.
Its key idea is to require from the canceling zeros of the transfer function of the controller to have a
higher multiplicity than that of the canceled plant poles. To illustrate this approach, return to the damped
pendulum (7.2) and consider the choice of parameters in controller (7.6) under  D 2 for it. Additional
zeros at s D  ˙ j! imply the requirement K 0 .  ˙ j!/ D 0, where K 0 .s/ ´ dK.s/=ds . This
requirement can be accounted for either explicitly, via adding the constraint
 
0 1 e1  e j1 ! 2 e2  e j2 ! ˚ D 0

to M ˚ D 0, or implicitly, via solving the problem for the fourth-order P .s/=.ls 2 C 2cs C g/. In either
case, the problem is solvable iff sin.1 !/ D sin.2 !/ D 0. For instance, the solution for 1 D =! and
1 D 2=! is
C e Tp s=2 2
 Tp =2 
e
KZVD .s/ D : (7.8)
1 C eTp =2
132 C HAPTER 7. E XPLOITING D ELAYS

The step response of PKZVD is then


 
1 ˇ t 2 1 
.t / D Tp =2 2 l
e cos t C arccos 1Œ0;Tp =2 .t / 1ŒTp =2;Tp  .t / D T
p ;
.1 C e / Tp ˇ 0

p
where ˇ ´ 1 C .Tp /2 =.2/2 . This algorithm takes twice the time required by the standard posicast to
move to the final point. What we gain is a lower sensitivity to modeling inaccuracies. To see that, assume
again that the decay ratio is uncertain, namely, its value  used to calculate (7.8) is different from its actual
value Q . In this case the poles at s D Q ˙ j! are not canceled and there are residual oscillations. For
controller (7.8) they are
1 e.Q /Tp =2 2
 
res .t / D step .t /;
1 C e Tp =2
where step .s/ is the step response of the uncontrolled P . Their maximum amplitude decays now quadrat-
ically with the mismatch for small deviations, i.e. as .Q  /2 . Such a function has a slower grow than a
linear function of the error and is thus less sensitive to uncertainty.
Another possible direction of improving the robustness of input shaping algorithms is to place zeros
of controller (7.6) not exactly at each lightly-damped pole of the nominal plant, but rather at two different
points nearby. For example, we may consider the problem for
s2
P .s/ D
.ls 2 C 2.c C cı /s C g/.ls 2 C 2.c cı /s C g/
for a small cı > 0. By doing this, we sacrifice the nominal response, which will exhibit small residual
oscillations, but can guarantee a low gain of the controller in a wider neighborhood of this lightly-damped
pole.
Input-shaping ideas extend to systems with multiple flexible modes. Multi-mode shapers can be con-
structed either via the series interconnection of input shapers for each of those modes or by choosing
parameters of (7.6) directly. The former approach is simpler from the design point of view, as each factor
can be chosen to be of standard forms, like (7.5) or (7.8). The latter approach has a potential advantage
in that it may produce more economic controllers, with faster responses. But parameters of (7.6) is typ-
ically harder to tune to satisfy (7.7) for general systems. State-of-the-art methods appear to use various
numerical optimization schemes.
Remark 7.1 (why dead-beat). An intriguing question is why input-shaping solutions endeavor to reach a
dead-beat response from a controlled signal? In principle, lightly-damped modes can also be canceled by
finite-dimensional controllers. For example, we can use
ls 2 C 2cs C g
K.s/ D
g.T s C 1/2
for some T > 0 or whatever other denominator polynomial, whose order is at least 2, instead of (7.5), to
control a system with the transfer function (7.2). This would yield P .s/K.s/ D s 2 =.g.T s C 1/2 /, whose
step response decays aperiodically, with a faster decay for a smaller T . Such a controller might be easier
to design, especially in the multi-mode case, and it might be less sensitive to modeling mismatches. But
a finite-dimensional K normally has a slower settling under comparable magnitudes of involved signals.
Still, it appears that a comprehensive comparison of these approaches is yet to be carried out. O

7.1.4 Time-optimal control


Another related approach to generate reference signals, in terms of the structure of resulted signals, is the
use of time-optimal strategy under constraints on the various process variables. Specifically, this is the
7.1. D EAD - BEAT OPEN - LOOP CONTROL 133

case for the problem of steering the state of the n-dimensional system

P / D Ax.t / C Bu.t /
x.t

from a given initial condition x.0/ D x0 to a given final state x.tf / D xf in minimum possible time tf
under the constraint that ju.t /j  1 for all t . We assume that both x0 and xf are admissible equilibria of
the system, i.e. are such that Ax0 C Bu0 D Axf C Buf D 0 for some ju0 j  1 and juf j  1. This problem
is well defined iff .A; B/ is controllable and the optimal solution is of the bang-bang form, i.e. the optimal
u.t / takes values only from the set f 1; 1g, with a finite number of switches between u.t / D 1 and
u.t / D 1, see [3, Sec. 6.5] for details. Moreover, if all eigenvalues of A are real, then the number of
switches is upperbounded by n 1.
It is readily seen that the bang-bang strategy for t  0 can be expressed as the step response of (7.6)
for j0 j D 1 and i D . 1/i 20 for all i 2 Z1:: 1 . We can thus still use (7.7), now for effectively a given
˚ and the delays i as the parameters to determine, together with the constraint

lim .sI A/ 1 .sx0 C BK.s// D xf ;


s!0

which guarantees that lim t!1 x.t / D xf .


To illustrate this procedure, consider the time-optimal control of a DC motor connected to a rigid
mechanical load, having the moment of inertia J and the viscous friction coefficient f . Its dynamics can
be described as       
Pm .t / 0 1 m .t / 0
D C u.t /;
!P m .t / 0 a !m .t / b
where m is the angle of the shaft, !m is its angular velocity, u is the normalized input voltage (so that
2
its maximal peak corresponds to ˙1), a D .Rf C Km /=.RJ / > 0, and b D Km =.RJ / > 0, where R is
the armature resistance and Km is the torque constant of the motor. Assume that the initial condition is
m .0/ D !m .0/ D 0 and the goal is to steer the shaft to the equilibrium m .tf / D f and !m .tf / D 0. Both
these equilibria correspond to u D 0, i.e. they are admissible. As both poles of the system are real, we
know that there might be at most one switch in .0; f/. Hence, consider (7.6) for  D 2. Because
    
0 1 1 .1 e at /=a
exp t D ;
0 a 0 e at

condition (7.7) reads


2 3 2 3
      0
.1 e atf /=a .1 e a.tf ts / /=a 0 4 0 5 1=a e atf
=a 1 1 1
b atf a.tf ts / 20 D b atf ats
4 20 5 D 0
e e 1 0 e 1 e eatf
2 2

for j0 j D 1 and the final value condition reads


  1    (
s 1 0 f K 0 .0/ D af =b
lim K.s/ D H)
s!1 0 s C a b 0 K.0/ D 0

Note that the first row of this version of (7.7) is equivalent to K.0/ D 0, so one of these conditions is
redundant. Thus, we end up with two conditions

1 2eats C eatf D 0 and .2ts tf /0 D af =b: (7.9)

The first of these conditions reads eats D .eatf C 1/=2. Because the function e t is strictly convex, the
latter equality implies that 2ts tf > 0. The second condition of (7.9) yields then that 0 D sign f and
134 C HAPTER 7. E XPLOITING D ELAYS

u.t /

u.t /
u.t /
u.t /

t t t t

!m .t /

!m .t /
!m .t /
!m .t /

t t t t
m .t /

m .t /
m .t /
m .t /

f

f
f
ts tf t ts tf t ts tf t ts tf t

(a) (b) (c) (d)

Fig. 7.2: Time-optimal trajectories for a DC motor

2ts tf D ajf j=b or, equivalently, tf D 2ts ajf j=b . Substituting this value to the first equality of (7.9)
we get the quadratic equation
p 
a2 jf j=b 2ats 2 j
e e 2eats C 1 D 0 H) eats D ea f j=b
1˙ 1 e a2 jf j=b :

We are interested in the “C” solution, as this is the only one for which ts < tf . Thus, we end up with
a 1 p
a2 jf j=b
 a 2 p
a2 jf j=b

ts D jf j C ln 1 C 1 e and tf D jf j C ln 1 C 1 e :
b a b a
Expectably, these are monotonically increasing functions of jf j. The resulted trajectories of u.t /, !m .t /,
and m .t / for various f are presented in Fig. 7.2.
Time-optimal trajectories may provide an implicit justification for the input-shaping strategy. In fact,
time-optimal trajectories can be used as possible input shapers. However, they are hard to derive in general
and are known to be quite sensitive to modeling uncertainty.

7.1.5 Generating continuous-time FIR responses by general FIR systems


The controller architecture (7.6) is not the only possible one to produce an FIR PK for a finite-dimensional
P . This goal can be attained by various other structures of controllers. In this subsection we study such
kind of problems in a general setting, namely, as a general tracking setup.
A general open-loop tracking problem can be formulated as the two-block model-matching problem,
where the error system to be reduced is of the form

Ge D Gw C Gu K; (7.10)

where Gw and Gu are finite-dimensional systems, given by their joint state-space realization
 
  A Bw Bu
Gw .s/ Gu .s/ D (7.11)
C Dw Du
7.1. D EAD - BEAT OPEN - LOOP CONTROL 135

and K is a controller to find. In the particular case of Gw D I and Gu D P this error system generates
the tracking error e D r y in the standard open-loop setting, where K W r 7! u and the goal is to
render the plant output y close to a measured reference signal r . We may add a weighing function We
to emphasize important frequency ranges for the reduction of the tracking error or to account for a priori
information about properties of r . In such a case Gw D We and Gu D We P . It is also possible to include
the control signal u, maybe shaped by yet another weight Wu , as a part of the error signal. This yields
   
We We P
Gw D and Gu D
0 Wu
and use weights to trade-off tracking performance and control efforts. In some situations, disturbance
attenuation problems under measured disturbances can also be cast in the form (7.10).
The general tracking setup is often used to pose tracking problems as optimization problems, with the
requirement to minimize a norm, e.g. H2 or H1 , of Ge . Any such problem should start with a suitable
definition of the class of admissible controllers K . Normally, admissability is understood as the stability
of the controller itself, viz. as K 2 H1 , and of the error system, viz. as Ge 2 H1 . But in some situations,
we might also required to strengthen requirements on the controller and the error system by requiring
them to be FIR, with the support of its impulse response in a given interval Œ0;  . The stability of both
these systems is then guaranteed if the impulse response of K is bounded, which is easy to see.
All such admissible solutions are characterized in the following result:
Lemma 7.2. If .C; A/ in realization (7.11) is observable, then there is an FIR K with the impulse response
k.t / supported in Œ0;   such that Ge is also FIR with the same support iff
Z 
Bw C e As Bu k.s/ds D 0: (7.12)
0

Proof. If K is FIR, then the impulse response of the error systems is


Z t
At
ge .t / D Dw ı.t / C C e Bw C Du k.t / C C eA.t s/ Bu k.s/ds (7.13)
0
Z minft;g
D Dw ı.t / C C eAt Bw C Du k.t /1Œ0; .t / C C eA.t s/
Bu k.s/ds:
0

We require ge .t / D 0 for all t >  , which reads then


 Z  
ge .t / D C eAt Bw C e As Bu k.s/ds D 0; 8t > :
0

Equality (7.12) is obviously sufficient for that and its necessity follows by the observability of .C; A/.

Condition (7.12) can be interpreted in terms of attaining the state x. / D eA Bw from the zero initial
conditions by the system xP D Ax CBu u, with the resulted u taken as the impulse response of the controller
K . This is essentially the controllability question. If it is solvable, which is always the case if .A; Bu / is
controllable, there are infinitely many solutions. This degree of freedom can be exploited in different ways.
For example, if the final state is supposed to be attained in minimum time under amplitude constraints on
u, we end up with the time-optimal solution discussed in ÷7.1.4. Another possibility is to endeavor to
attain the final point with the minimum-energy u:
Proposition 7.3. If .A; Bu / in (7.11) is controllable, then
Z   1
0 A0 s
k.t / D kME .t / ´ Bu0 e A t e As Bu Bu0 e ds Bw 1Œ0; .t /
0

has the minimum energy among all solutions to (7.12).


136 C HAPTER 7. E XPLOITING D ELAYS

Proof. If .A; Bu / is controllable, then kME .t / is well defined and satisfies (7.12), which can be shown by
direct substitution. Assume that k D kME C kı solves (7.12) as well. This necessarily implies that kı
satisfies the equation Z 
e As Bu kı .s/ds D 0:
0
Hence,
Z 
kkk22 D kkME k22 C kkı k22
C 2hkı ; kME i D kkME k22
C C 2 ŒkME .t /0 kı .t /dt
kkı k22
Z   1 Z 0
0
D kkME k22 C kkı k22 C 2Bw0 e As Bu Bu0 e A s ds e At Bu kı .t /dt D kkME k22 C kkı k22 :
0 0

Thus, kkME k2  kkk2 for every k satisfying (7.12) and the equality holds iff kı D 0.

To illustrate the minimum-energy algorithm of Proposition 7.3, return to the pendulum on the cart
problem presented in Fig. 7.1. To formulate it as a general tracking problem, consider a step reference
signal r for the cart position x and
 choose as regulated signals the tracking error r x and the pendulum
angle  . The systems w 7! r  x , where w.t / D r.tP /, has the transfer function
2 3
0 0 0 1 1
  60  ! 0 0 7
  1=s 1=s 6 7
Gw .s/ Gu .s/ D D60 6 !  0 1 7 7;
0 P .s/=s 41 0 0 0 0 5
0 =.l!/ 1= l 0 0
where P .s/ is given by (7.2) and  and ! are defined by (7.3). The pair .A; Bu / in the realization above
is controllable, so the formulae of Lemma 7.2 apply. To compare them with the response of the posicast
control, choose  D =! D Tp =2, so that the response settles in the very same Tp =2. This choice yields
  
1 eTp =2 2 t 2
kME .t / D C e sin t 1Œ0;Tp =2 .t /;
Tp  Tp
where ´ 2 . 2 Tp2 C 4 2 /=.8Tp .1 C eTp =2 / C . 2 Tp2 C 4 2 /.1 eTp =2 //, which corresponds to the
transfer function
 Tp =2   
e 1 1 e Tp s=2 4.1 C eTp =2 e Tp s=2 / #0 16 2 1 e Tp s=2 1 C e Tp s=2
KME .s/ D ! 2
 s .s  /2 C 4 2 =Tp2  8 8Tp s Tp2 s 2 C 4 2
rather than (7.5). To simplify the formulae, consider now the undamped pendulum, i.e. assume that  D 0.
The cart position evolves then as
    
4xf 2 2 2 x f

x.t / D xf C 2 1 1 t C cos t 1Œ0;Tp =2 .t / D


 8 4 Tp Tp 0 T =2 p

instead of two posicast steps and the step response of PKME is


   
 4 2 Tp =2
.t / D 2 1 t sin t 1Œ0;Tp =2 .t / D 0
:
. 8/l Tp Tp

Both x.t / and .t / above are smoother than the corresponding signals for the posicast (bang-bang) case.
This is especially important regarding x , as the movements of the card should be feasible under actuator
limitations.
Yet another direction to exploit the freedom of the choice of K satisfying (7.12) is consider the H2
(quadratic) optimization problem associated with the model-matching setup (7.10). The result below
presents the solution to this problem.
7.1. D EAD - BEAT OPEN - LOOP CONTROL 137

(a) Cart position, x.t/ (b) Pendulum angle, .t/

Fig. 7.3: H2 -optimal FIR control of the pendulum on cart system in Fig. 7.1 under % D 1=500

Proposition 7.4. If .A; Bu / in (7.11) is controllable, Dw D 0, and Du has full column rank, then
     1 
  I 0 0 0 H Bw
k.t / D .Du0 Du / 1 Du0 C Bu0 eH t C e 1Œ0; .t /;
0 0 I 0 0

where    
A 0 Bu  
H ´ .Du0 Du / 1
Du0 C Bu0 ;
C 0C A0 C 0 Du
is the impulse response of the unique FIR controller rendering Ge FIR and minimizing its H2 norm. The
optimal cost       1  
  I 0 0 0 H I
kGe k22 D tr Bw0 0 I C e Bw :
0 0 I 0 0
Moreover, the solution in the case of C D 0 and Du0 Du D I recovers that of Proposition 7.3.
Proof. Follows by applying the Projection Theorem [32, Sec. 3.3] under a barrier associated with (7.12).
Details, which are quite technical, can be found in [48] (it corresponds to the case of  D 0 there).

To illustrate this approach, consider again the problem studied throughout this section and define the
error system as the mapping
2p 3
1 p .r x/
w 7! 4  5;
p
% xP
where w.t / D r.tP /, the weight  2 Œ0; 1 serves to reach a required balance between settling the cart
position and oscillations of the pendulum and the weight % > 0 is used to regularize the problem (% D 0
would render Du D 0). An alternative for such a regularization might be the use of P instead of xP , as the
transfer function of the system w 7! P is also bi-proper. This yields the following
2 3
0 0 0 1 1
2p p 3 6 6 0  ! 0 0 7 7
1 =s p 1 =s 6
  0 !  0 1 7
P .s/=s 5 D 6 p
6 7
Gw .s/ Gu .s/ D 4 0 7 : (7.14)
p 6 1  0 0 0 0 7
0 % 6 p p 7
4 0  =.l!/ = l 0 0 5
p
0 0 0 0 %

The optimal solutions for several choices of  and % D 0:002 are presented in Fig. 7.3. The choice of
 D 1 penalizes only the pendulum angle. In this case the response, shown by cyan-blue lines, becomes
138 C HAPTER 7. E XPLOITING D ELAYS

close to that of the posicast control (dashed lines). The other extreme,  D 0, penalizes only the deviation
of the cart position from its destination. It yields a smaller error energy, but larger pendulum oscillations,
see the yellow lines in Fig. 7.3.
The solution of Proposition 7.4 might not be numerically stable, as the matrix H typically has eigen-
values in C0 . As a result, coefficients of its matrix exponential grow rapidly as  increases. For instance,
the computations in the example above fail for %  10 3 because one eigenvalue of H grows above 30 in
this case, leading to the presence of terms O.e30 / in eH . Nonetheless, this hurdle can be overcame by
exploiting the Hamiltonian structure of the matrix H . Namely, bring in the algebraic Riccati equation

A0 X C XA C C 0 C .XBu C C 0 Du /.Du0 Du / 1 .Bu0 X C Du0 C / D 0; (7.15)

which is the same as (4.5a) on p. 69, modulo notational differences. If the realization .A; Bu; C; Du / has no
pure imaginary invariant zeros, then [44, Sec. B.2] this ARE has a unique stabilizing solution X D X 0  0
such that ACBu F is Hurwitz, where F ´ .Du0 Du / 1 .Bu X CDu0 C /. We also need the matrix Lyapunov
equation
.A C Bu F /W C W .A C Bu F /0 C Bu .Du0 Du / 1 Bu0 D 0; (7.16)
whose solution W > 0 because .A C Bu F; Bu .Du0 Du / 1=2
/ is controllable, which follows from the con-
trollability of .A; Bu /.

Proposition 7.5. If .A; Bu / in (7.11) is controllable, Dw D 0, Du has full column rank, and .A; Bu ; C; Du /
has no pure imaginary invariant zeros, then
    
A C Bu F V Bw .A C Bu F /0 W 1 e.ACBu F / V Bw s
K.s/ D   e ; (7.17)
F 0 F W C .Du0 Du / 1 Bu0 0
0
where V ´ .I W e.ACBu F /  W 1 e.ACBu F / / 1 , is the unique FIR controller rendering Ge FIR and
minimizing its H2 norm. The optimal cost

kGe k22 D tr Bw0 .X C W 1 .V I //Bw :

Proof (outline). It can be verified that


   
I W A C Bu F 0 I WX W
H D ;
X I XW 0 A0 F 0 Bu0 X I

so that    
I W e.ACBu F /t 0 I WX W
eH t D .ACBu F /0 t :
X I XW 0 e X I
The formulae of the proposition follow then from those of Proposition 7.4 by a routine use of (A.3b).

Proposition 7.5 offers a numerically stable alternative to the formulae of Proposition 7.4. Indeed, the
impulse response of the controller in (7.17),
0
k.t / D F e.ACBu F /t V Bw 1Œ0; .t / .F W C .Du0 Du / 1 Bu0 / e.ACBu F / . t/
W 1 .ACBu F /
e V Bw 1Œ0; .t /;

has the matrix exponential e.ACBu F / and its transpose only for positive values of  . Because the matrix
A C Bu F is Hurwitz, these exponentials do not tend to have growing components. Moreover, as  grows
unbounded, the second term in the right-hand side of (7.17) vanishes and we end up with the IIR con-
troller K.s/ D F .sI A Bu F / 1 Bw . The optimal kGe k2 is a monotonically decreasing function of  ,
approaching tr .Bw0 XBw / as  ! 1.
7.2. P REVIEW CONTROL 139

7.2 Preview control


There are situations, where delays are actually our allies, rather than foes. This happens, for example, in
applications where previewed information about reference or disturvance signals is available. Just think
of a frequently encountered situation of handling road potholes or speed bumps while driving, where the
ability to see those disturbances ahead of their effect is of a great help. Questions associated with ex-
ploiting previewed information arise in many applications, such as signal processing, automotive control,
robotics, control of wind turbines, and so on and so forth. From the analysis viewpoint, it might be con-
venient to treat such problems via introducing time delays into regulated signals, which renders them a
natural subject of these notes.
We again consider the general tracking setting introduced in ÷7.1.5, but now with the error system
Ge W w 7! e of the form
Ge D Gw D x  C Gu K; (7.18)
where Gw and Gu are finite-dimensional systems given by (7.11) and  > 0 is the preview. The goal is to
design a causal and stable controller K to minimize a size of Ge . The term “preview tracking” becomes
apparent in the case when Gw D I and Gu D P . The error system in this case, Ge D D x  PK , defines
the error signal as e.t / D w.t  / y.t /. This implies that the plant output y is expected to track a past
reference, which was available at the controller side  time units ago. In other words, the controller has a
previewed version of the reference signal w .
A key fact on addressing energy-based preview problems is that the L2 -norm of the error signal gen-
erated by (7.18) is equivalent to the L2 -norm of the output of another system having no preview for every
exogenous input w . To formulate this result, we need to the matrix
     
H11 H12 A 0 Bu  
H D ´ 0 0 0 .Du0 Du / 1 Du0 C Bu0 (7.19)
H21 H22 CC A C Du

(we already saw it in Proposition 7.4) and its exponential


    
˙11 ˙12 H11 H12
˙D ´ exp  ;
˙21 ˙22 H21 H22

in which det ˙22 ¤ 0, see [47, Prop. 3.1]. We also need the matrices

BQ w ´ ˙22
0 0
Bw C ˙12 C 0 Dw and CQ ´ C C Du .Du0 Du / 1 .Du0 C.˙22
0
I/ Bu0 ˙21
0
/:

The following lemma, whose proof can be found in [45], is a key technical result towards handling preview
tracking problems:

Lemma 7.6. Given Gw and Gu as in (7.11) with a full column rank Du , consider yet another erorr system
GQ e W w 7! eQ such that GQ e D GQ w C GQ u .K ˘ /, where
 

Q Q
 A BQ w Bu
Gw .s/ Gu .s/ D Q (7.20a)
C ˙220 Dw Du

and
0 82 0 1 0
3 9 1
< H11 H12 Bw Bu .Du Du / Du Dw =
˘.s/ D .Du0 Du / 1 @  4 H21 H22 C 0 .I .Du0 Du / 1 Du0 /Dw 5 e s
Du0 Dw A : (7.20b)
: ;
Du0 C Bu0 Du0 Dw

If K is such that Ge 2 H1 , then so is GQ e and, moreover, kGe wk2 D kGQ e wk2 for all w 2 L2 .RC /.
140 C HAPTER 7. E XPLOITING D ELAYS

Lemma 7.6 fits both H2 and H1 optimization settings. Consider the H2 case first. As follows from
(4.1), the squared H2 -norm of the error system equals the sum of the squared L2 .RC /-norms of its re-
sponses to Dirac ı -impulses applied in each input direction. By Lemma 7.6, kGe ei ık2 D kGQ e ei ık2 for
every i and every controller, so that kGe k2 D kGQ e k2 for all K . Hence, a controller minimizing kGe k2 does
that for kGQ e k2 as well. In the H1 case, the cost equals the L2 .RC /-norm of the output of the error system
under a worst-case unit-energy exogenous signal w . Because the L2 .RC /-norms of Ge w and GQ e w coincide
for every w and every K , they also coincide for a worst-case w . This implies that kGe k1 D kGQ e k1 for
every K , so that the H1 problems for G and GQ are equivalent.
One day it should be written . . .

7.3 Stabilizing delays


We saw, see e.g. discussions in Section 2.1, that loop delays normally have destabilizing effects on feed-
back loops. Nevertheless, there is a number of examples in the literature, demonstrating that added delays
may have stabilizing effect on control loops. We consider one of them below.
Example 7.1. This example apparently originates in an SNL report of 1991, which was later on published
as [62]. Consider an undamped mass-spring system u 7! y having the transfer function
1
P .s/ D ;
ms 2 Ck
for some known mass m > 0 and spring coefficient k > 0. Obviously, this system cannot be stabilized by
any proportional controller. But it can be stabilized by a delayed feedback of the form u.t / D kp y.t  /
for kp ¤ 0. Indeed, in this case the characteristic quasi-polynomial is

 .s/ D ms 2 C k C kp e s
;

which is of the form (2.7). Let us use the delay sweeping method studied in ÷2.1.5 in its analysis. As-
sumptions A 1–4 on p. 28 hold iff kp ¤ k and the delay-free is always unstable. The magnitude relation
(2.8a) in this case has the form jm! 2 kj D jkp j, which is solved by

k ˙ jkp j
!2 D :
m
If jkp j  k , then there is one nonzero crossing frequency and it is always a switch. Hence, this kp is
stabilizing for none  . If 0 < jkp j < k , then there are two positive crossing frequencies,
r r
k C jkp j k jkp j
!1 D and !2 D ;
m m
so that m!12 > k and m!22 < k for all kp . The corresponding phase relations (2.8b) reads

 ! D arg kp arg.k m! 2 / C .2i 1/; i 2 Z:

This leads then to crossing delays


( (
2 1  if kp < 0 2 1 0 if kp < 0
1i D iC and 2i D iC
!1 !1 0 if kp > 0 !2 !2  if kp > 0

for all i 2 ZC , where 1i correspond to switches and 2i correspond to reversals. Because the zero-delay
system is unstable and the step in the switches is always smaller than that in reversals, there are stabilizing
7.3. S TABILIZING DELAYS 141

kp ’s only if 20 < 10 . This is always the case if kp < 0 and never the case if kp > 0. Thus, we must have
kp 2 . k; 0/, which implies positive feedback. And then we have the following chain of crossing delays
 p p p p p 
 m 2 m 3 m 4 m 5 m
j D 0; p ;p ;p ;p ;p ;:::
k kp k C kp k kp k C kp k kp
(gray elements mark reversals). As far as there is one reversal between two subsequent switches, the
system becomes stable at each reversal. This alternation of stabilizing and destabilizing delay intervals
ends at the point where 2i  1i or, equivalently, where
p p  p 
2i  m .2i C 1/ m 1 C kp =k
p  p ” i p p :
k C kp k kp 2. 1 kp =k 1 C kp =k/
Thus, the system can always be stabilized by a (positive) delayed feedback. O
There are other examples in the literature, where loop delays are introduced to feedback loops to
control finite-dimensional systems. It appears that there are essentially two classes of problems for which
such approaches are considered.
1. One use of delayed feedback is in problems of feedback dampening lightly damped systems, like that
considered in Example 7.1 above. It is known, at least since [78, 62], that it might be advantageous to
add a phase lag into the feedback loop in such problems, cf. the loops in Fig. 2.3 on p. 26. A skillful
use of this approach can yield superior closed-loop performance, as demonstrated by the solutions to
the ECC’95 benchmark problem [50, 28], where finite-dimensional lag elements are used. Delays can
also provide such lags. An example of that is the concept of delayed resonators, introduced in [52].
Still, it is not quite clear to me what advantage such delays can have over more orthodox rational lag
elements. The analysis of delayed systems is more complex and phase lag introduced by the delay
element grows unbounded, which might boomerang at high frequencies. Finite-dimensional lags are
less prone to such problems, as they can be analyzed by conventional tools and their lags are bounded.
2. Another direction may be associated with the use of delays in approximating derivative terms, essen-
tially as
1 e s
s   .s/ ´ (7.21)

for a sufficiently small  > 0. This is done either explicitly, like in the problem of stabilizing a chain
of of integrators in [49], or implicitly, e.g. in the proportional-minus-delay controller, proposed in [69]
as an alternative to PD. But it is not clear to me, again, what are advantages of this approximation
over more conventional solutions, like a rational approximation of the form
s
s  r;T .s/ ´ (7.22)
Ts C1
for a sufficiently small T > 0. For example, let us compare the accuracy of these approximations
over a given low-frequency range, say Œ0; ! Q . To render such a comparison fair, assume that both
approximations have equal high-frequency gain, i.e. that lim sup!!1 j . j!/j D lim!!1 jr;T . j!/j.
This requirement yields the relation 2= D 1=T or, equivalently, T D =2. As the measure of the
approximation accuracy consider then the quantifies
ˇ ˇ ˇ ˇ
ˇ  . j!/ ˇˇ ˇ r;=2 . j!/ ˇˇ
Q ´ max ˇˇ1
 .!/ and r;=2 . Q
!/ ´ max ˇ 1 ˇ:
!2Œ0;!
Q j! ˇ !2Œ0;!
Q
ˇ j!
The plots of these functions of the !Q are presented in Fig. 7.4. It is clear that from this point of view
the rational approximation (7.22) is a better choice than (7.21) for all frequency ranges and all  .
And, on top of this, adding delays into feedback loop renders their analysis more involved.
142 C HAPTER 7. E XPLOITING D ELAYS

1:26
1

Q
 . !/
Q
r;=2 . !/
0 2:33 4:09  !Q

Fig. 7.4: Errors of approximating the derivative by (7.21) and (7.22)

The situation might be different for infinite-dimensional plants. We already saw that DTC elements,
which contain delays, play an important role in the control of dead-time systems. As another example,
consider the LTI system Gx defined via its transfer function in (1.21) on p. 9, which describes the acoustic
pressure in a duct of length L, measured at the distance x from the actuation point. For x D L and the
static reflectance R.s/ D 1 it becomes
2Zm s
GL .s/ D e ;
1 e 2s
where  D L=c . It might not be easy to design a finite-dimensional stabilizing controller for it. However,
this system is clearly stabilized by the delayed control law u.t / D pL .t  /=.2Zm /, which actually
renders the closed-loop system FIR. Still, this is more a toy example, as the delayed control law above has
zero tolerance to uncertainty in  .

7.4 Delays in the regulator problem: repetitive control


The term “regulator problem” is referred to the problem of rejecting the effect of non-decaying distur-
bances, generated as the response of known autonomous models to nonzero initial conditions, on con-
trolled variables. Common models for such disturbances are the integrator 1=s , which generates constant
signals, the double integrator 1=s 2 , which generates ramp signals, and the harmonic oscillator 1=.s 2 C!02 /,
which generates sine waves with the frequency !0 . The internal model principle [14], by which models
of exogenous signals should be incorporated into the controller, is one of fundamental principles of the
control theory. For example, the need in the use of intergal actions to counteract the effect of constant
or slowly varying unmeasured disturbances has been known for aeons. Likewise, harmonic disturbances
of a known frequency can be asymptotically canceled if the feedback loop includes a harmonic oscil-
lator. Incorporating models of disturbances into feedback loops reduces the redulator problem to mere
stabilization problems.
The use of delays can generalize this idea to a substantially wider class of exogenous signals, viz. all
periodic signals of a known period. Indeed, consider the system described by the autonomous difference
equation
w.t / D w.t  /; wM 0 D  (7.23)
for some function .t / defined in Œ ; 0, where wM t is defined according to (1.6) on p. 2. It is readily seen
that this equation generates the  -periodic function w.t / D .t  dt = e/. Thus, any  -periodic signal can
be generated by (7.23) by an appropriate choice of the initial condition function  .
The application of the internal model principle to (7.23) is conceptually straightforward. Namely,
this system has the transfer function 1=.1 e s /, so it should be incorporated into the controller. The
application of this logic in the unity-feedback setting is shown in Fig. 7.5(a), where P is a SISO plant
and K is a free part of the controller to be designed. This is a paraphrase of the architecture proposed in
[22], dubbed repetitive control. As with any other controller constructed according to the internal model
principle, it guarantees asymptotic rejection of all  -periodic disturbances d and asymptotically perfect
7.4. D ELAYS IN THE REGULATOR PROBLEM : REPETITIVE CONTROL 143

d d
y u e r y u e r
P .s/ K.s/ - P .s/ K.s/ -
s s
e F .s/e

(a) Idealized setup (b) More practical setup

Fig. 7.5: Unity-feedback repetitive controller

tracking of all  -periodic reference signals r , provided the closed-loop system is stable and P .s/K.s/ has
no zeros at the poles of the repetitive element, i.e. at s D j2k= , k 2 Z. A way to see that is via the
closed-loop disturbance sensitivity transfer function
P .s/.1 e s /
Td .s/ D ;
1 e s C P .s/K.s/
which has zeros at s D j2k= for all k 2 Z then. Every piecewise smooth periodic disturbance can be
presented via its Fourier series as X
d.t / D dk e j.2k=/t
k2Z

for some Fourier coefficients dk 2 C. By the frequency response theorem, if the closed-loop system is
stable, then its steady-state response to this disturbance is
X
yss .t / D Td . j2k= /dk e j.2k=/t D 0: (7.24)
k2Z

The tracking error can be analyzed in a similar manner via the sensitivity transfer function, which also has
all pure imaginary poles of the controller as its zeros.
However, the stability of repetitive control system happens to be a challenge. To see why, assume that
P and K are finite
  dimensional and rearrange the setup in Fig. 7.5(a) in the form presented in Fig. 2.1 on
p. 19. Taking dr as the “u” input there and y as its namesake, this corresponds to
     
G´w G´u 0 0 0 1 1  
D C 1 1 P :
Gyw Gyu 1 1 0 1 1 C PK

If P .s/K.s/ is strictly proper, which is a reasonable assumption to make, then G´w .1/ D 1 and we have
the case of .D´w / D 1 in terms of the discussion in ÷2.1.3. Hence, no matter what K is selected, the
closed-loop system is practically unstable.
The situation is not hopeless though. There are workarounds, endeavoring to regain stability at the
expense of relaxing perfect regulation requirements. For example, one can use the architecture presented
in Fig. 7.5(b) for a stable filter F such that jF .1/j < 1. In this case the equivalent
     
G´w G´u 0 0 0 F 1  
D C 1 1 P :
Gyw Gyu 1 1 0 1 1 C PK

so that G´w .1/ D F .1/ satisfies .D´w / < 1 and the system can be stabilized. For example, we may
consider designing K so that kF=.1 C PK/k1 < 1. In this case the closed-loop system in Fig. 7.5(b) is
stable for all constant delays  , by the arguments of ÷6.2.3 corresponding to ~ D 0. Other, more refined,
approaches are also possible, see Example 7.2 below.
The price we pay is in the ability to reject all periodic disturbances. Indeed, the disturbance sensitivity
reads now
P .s/.1 F .s/e s /
Td .s/ D ;
1 F .s/e s C P .s/K.s/
144 C HAPTER 7. E XPLOITING D ELAYS

has a zero at j2k= only if F . j2k= / D 1, which can happen only at a finite number of k ’s. Still, we
can choose F so that j1 F . j2k= /j  1 at all k 2 Z of interest. A well-justified requirement is to
maintain this condition for a number of small indices k . This could effectively eliminate low-frequency
Fourier terms in (7.24), while high-frequency components are normally filtered out by the plant itself. If
F .0/ D 1, then the controller still has an integral action. Thus, it might make sense to choose a low-pass
F with the unit static gain. The bandwidth of F can then be a tuning parameter, with which the range of
rejected signals can be selected.

Example 7.2. Consider the problem of rejecting  -periodic load disturbance for the plain integrator plant
with the transfer function P .s/ D 1=s . Employing the repetitive controller in the form shown in Fig. 7.5(b)
for some low-pass F 2 H1 , we are left with one problem to solve, how to design a stabilizing K for the
infinite-dimensional system
P
Prep D :
1 FD x
The trick is to rewrite its transfer function in the form
P .s/
Prep .s/ D s
;
1 P .s/sF .s/e

which is the feedback interconnection of P and ˘rep 2 H1 , where ˘rep .s/ D sF .s/e s . This suggests
that we may use loop-shifting extensions discussed in ÷3.4.4. Specifically, loop-shifting arguments yield
that K stabilizes Prep iff KQ D K ˘rep stabilizes P . The latter problem is simple, e.g. it can be solved
Q
with any proportional controller K.s/ D kp . Thus, we end up with
s
K.s/ D kp C sF .s/e ;

which is easy to implement. The resulting disturbance sensitivity transfer function


P .s/ s
 1 s

Td .s/ D 1 F .s/e D 1 F .s/e
Q
1 C P .s/K.s/ s C kp

is reminiscent of the Smith controller in that the response is that of the system without the repetitive block,
multiplied by the repetitive action.
Consider now the choice
2 ˛
F .s/ D ;
 s C 2 ˛
which is a low-pass filter with the bandwidth 2 ˛= , i.e. ˛ times the fundamental frequency of d . Here ˛
is a tuning parameter, affecting the behavior of F . j!/ at frequencies ! D 2k= for k 2 Z. It is readily
seen that
1
j1 F . j2k= /j D p ;
1 C ˛ 2 =k 2
which is a strictly decreasing function of ˛ and a strictly increasing function of k , approaching 1 as k
grows.
Simulations of the closed-loop disturbance response for this filter under kp D 1 and  D  are
presented in Fig. 7.6. The first row in Fig. 7.6 shows responses to a square wave disturbance and the
second—to a sawtooth d , both of unit amplitude. The simulations agree with the analysis above that
higher-bandwidth F result in a better attenuation of periodic load disturbances. Note that the disturbance
response in the first interval t 2 Œ0;   does not depend on ˛ . This is because during this interval the initial
conditions of the controller are zero and the repetitive part is not efficient. The controller then “learns” the
actual waveform of the disturbance and compensates it. O
7.4. D ELAYS IN THE REGULATOR PROBLEM : REPETITIVE CONTROL 145

(a) square wave, ˛ D 10 (b) square wave, ˛ D 100 (c) square wave, ˛ D 1000

(d) sawtooth wave, ˛ D 10 (e) sawtooth wave, ˛ D 100 (f) sawtooth wave, ˛ D 1000

Fig. 7.6: Disturbance responses under kp D 1


146 C HAPTER 7. E XPLOITING D ELAYS
Appendix A

Background on Linear Algebra

collects some advanced material on matrices. These results are required throughout
T HIS APPENDIX
these notes, yet are not always a part of basic Linear Algebra courses.

A.1 Schur complement


Let M be a square matrix partitioned as  
M11 M12
M D ; (A.1)
M21 M22
with square M11 and M22 . If M11 is nonsingular, then a direct substitution yields the decomposition
     
M11 M12 I 0 M11 0 I M111 M12
D : (A.2a)
M21 M22 M21 M111 I 0 M22 M21 M111 M12 0 I
It follows from this equality that M is nonsingular iff so is the matrix
11 ´ M22 M21 M111 M12 ;
which is called the Schur complement of M11 in M . Similarly, if M22 is nonsingular, then
     
M11 M12 I M12 M221 M11 M12 M221 M21 0 I 0
D (A.2b)
M21 M22 0 I 0 M22 M221 M21 I
and the Schur complement of M22 is defined as
22 ´ M11 M12 M221 M21 :
Using equations (A.2) the following formulae for the inversion of block 2  2 matrices can be derived:
  1    
M11 M12 I M111 M12 M111 0 I 0
D (if det M11 ¤ 0)
M21 M22 0 I 0 111 M21 M111 I
" #
M111 C M111 M12 111 M21 M111 M111 M12 111
D (A.3a)
111 M21 M111 111
   
I 0 221 0 I M12 M221
D (if det M22 ¤ 0)
M221 M21 I 0 M221 0 I
" #
221 221 M12 M221
D : (A.3b)
M221 M21 221 M221 C M221 M21 221 M12 M221

147
148 A PPENDIX A. BACKGROUND ON L INEAR A LGEBRA

The formulae above are particularly simple in the case of block-triangular matrices:
  1  
M11 M12 M111 M111 M12 M221
D (A.4a)
0 M22 0 M221
and
  1  
M11 0 M111 0
D : (A.4b)
M21 M22 M22 M21 M11 M221
1 1

Another useful consequence of the decompositions in (A.2) is the following formulae for the determi-
nant of block 2  2 matrices:

det M11 ¤ 0 H) det M D det M11 det 11 (A.5a)


and
det M22 ¤ 0 H) det M D det M22 det 22 (A.5b)

(the determinants of the triangle factors in (A.2) obviously equal one).

A.2 Sign-definite matrices


An n  n matrix M D M 0 is said to be positive definite, denoted M > 0, if

x 0 M x > 0; 8x ¤ 0:

It is positive semi-definite, denoted M  0, if

x 0 M x  0; 8x:

The notions of negative definite (M < 0) and semi-definite (M  0) matrices can be defined in a similar
way or via the relations M < 0 ” M > 0 and M  0 ” M  0. Compatibly dimensioned
symmetric (or Hermitian) matrices can be compared. We say that M1 > M2 if M1 M2 > 0 and that
M1  M2 if M1 M2  0. A criterion of positive definiteness (positive semi-definiteness) is that all
eigenvalues of M D M 0 , which are real, are positive (nonnegative). Unlike scalars, matrices need not be 
wither positive or negative (semi)definite. There are matrices that are not sign definite, like M D 10 01 .
If a matrix T 2 Rnm , where m  n, has full rank, then M > 0 only if T 0 M T > 0. Indeed, if
T M T 6> 0, there there is y ¤ 0 such that y 0 T 0 M T y  0. But then x 0 M x  0 for x D T y ¤ 0. If m D n,
0

then the relation goes both ways, namely, M > 0 ” T 0 M T > 0. These properties help to simplify
the analysis of block 2  2 matrices of the form (A.1). For example, it is readily seen that
 
M11 M12
>0 H) M11 > 0 ^ M22 > 0:
M21 M22

Moreover, it follows from (A.2) that


 
M11 M12
>0 ” M11 > 0 ^ 11 > 0 ” M22 > 0 ^ 22 > 0 (A.6)
M21 M22

(these relations use the fact that the positive-definiteness of a block-diagonal matrix is equivalent to that
of each its diagonal block).
A.3. L INEAR MATRIX EQUATIONS 149

A.3 Linear matrix equations


Let A1 2 Rn1 n1 , A2 2 Rn2 n2 , and Q 2 Rn1 n2 . The following linear matrix equation:

A1 X XA2 C Q D 0 (A.7)

is called the Sylvester equation. It has a unique solution X 2 Rn1 n2 iff

spec.A1 / \ spec.A2 / D ;;

i.e. iff none of the eigenvalues of A1 is also an eigenvalue of A2 . If this condition fails to hold, then
the Sylvester equation might have either no solutions or an infinite number of solutions (depending on
Q). The following result establishes the connection between the solvability of (A.7) with the block-
diagonalizability of certain block-triangular matrices:
Proposition A.1 (Roth’s removal rule). Equation (A.7) is solvable iff the matrices
   
A1 Q A1 0
and
0 A2 0 A2

are similar.
An important particular case of the Sylvester equation is the so-called (continuous-time) Lyapunov
equation, which is defined as
AX C XA0 C Q D 0 (A.8)
for given A 2 Rnn and Q 2 Rnn . If Q is symmetric (i.e. Q D Q0 ), then the solution X is symmetric
x 0 , then
too. Furthermore, if A is Hurwitz, i.e. if A has all its eigenvalues in the open left half plane C n C
Z
0
XD eAt QeA t dt (A.9)
RC

exists and is the solution of (A.8). Indeed, it is readily seen that


0 0 d At A0 t 
A eAt QeA t C eAt QeA t A0 D e Qe :
dt
Thus, if the integral in (A.9) exists,
Z Z Z
At A0 t At A0 t 0 0 
A e Qe dt C e Qe dtA C Q D d eAt QeA t C Q D 0;
RC RC RC
0
where the fact that lim t!1 eAt QeA t D 0 was used.
The following result reveals an important connection between the existence of positive definite solu-
tion of a Lyapunov equation and the stability of matrices:
Proposition A.2. A matrix A 2 Rnn is Hurwitz iff the solution X 2 Rnn of the Lyapunov equation (A.8)
satisfies X D X 0 > 0 whenever Q D Q0 > 0.
Proof. First, assume that A is Hurwitz. Clearly, .A; Q/ is controllable (follows from the non-singularity
of Q), so that X > 0 by (A.9). Now, let X > 0 for some Q > 0. Assume that A is not Hurwitz, i.e. that
it has an eigenvalue  such that Re   0. Denote the corresponding eigenvector by  ¤ 0 and pre- and
post-multiply (A.8) by 0 and , respectively. We have:

0 Q D 0 .AX C XA0 / D 0 X C 0 X D 2 Re 0 X:

This, in turn, implies that Re 0 X < 0, which is a contradiction.


150 A PPENDIX A. BACKGROUND ON L INEAR A LGEBRA

Proposition A.2 actually says that the stability of A is equivalent to the existence of a matrix X D
X 0 > 0 such that
AX C XA0 < 0: (A.10)
Inequality (A.10) belongs to the so-called class of Linear Matrix Inequalities (LMI), for the verification
of which efficient numerical methods are available. For this reason (A.10) can be considered an alterna-
tive to the conventional verification of eigenvalues of A. More important is that the LMI (A.10) can be
incorporated into many other analysis and design problems that also reduce to LMIs.
Appendix B

Background on Linear Systems

The purpose of this appendix is to collect some basic facts about (mostly finite-dimensional and
time-invariant) linear systems. The material, which is required throughout the notes, is presented in
a condensed manner. More details can be found in [44], whose philosophy is followed below, as well as in
many other textbooks.

B.1 Signals and systems in time domain


Signals are functions of independent variables (mainly time), conveying information about changing phe-
nomena. Systems are constraints imposed on interdependent signals, like those between forces and posi-
tions in mechanical systems, between voltages and currents in electrical systems, or between pressures
and volume flow rates in hydraulic systems, et cetera. As customary, throughout these notes we treat one
group of signals as an action (inputs) and another as a reaction (outputs), which is sufficiently general
in many situations, especially for processes with delays. A system G with an input u and an output y
can then be viewed as a mapping G : u ↦ y.

B.1.1 Continuous-time signals and systems


Continuous-time signals are viewed as functions defined on some subset of the real axis R. For many good
reasons, control applications are mostly concerned with signals defined on the positive semi-axis R_+, in
which case an n-dimensional continuous-time signal f(t) is understood as
\[
f : \mathbb{R}_+ \to \mathbb{R}^n.
\]
The i-th component of f(t), denoted f_i(t), is a scalar function of time t. Signal spaces are used to constrain
the set of considered signals in some way, formalizing the notion of admissible signals. We mostly use the
space
\[
L_2^n(\mathbb{R}_+) := \Bigl\{ f : \mathbb{R}_+ \to \mathbb{R}^n \;\Big|\; \|f\|_2 := \Bigl( \int_{\mathbb{R}_+} \|f(t)\|^2\,\mathrm{d}t \Bigr)^{1/2} < \infty \Bigr\} \tag{B.1}
\]
for this purpose. The squared L_2-norm, ‖f‖_2^2, can be interpreted as the energy of f. Thus, L_2 can be seen
as the space of finite-energy signals.
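To make the energy interpretation concrete, the following sketch (added for illustration; the signal, truncation horizon, and grid are arbitrary choices) approximates ‖f‖_2^2 for f(t) = e^{-t}, whose exact energy is ∫_0^∞ e^{-2t} dt = 1/2.

```python
import numpy as np

t = np.linspace(0.0, 20.0, 200001)   # truncating at t = 20 is enough for e^{-t}
f = np.exp(-t)
energy = np.trapz(f**2, t)           # numerical approximation of ||f||_2^2
print(energy)                        # ~0.5
```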
Linear continuous-time systems with m-dimensional inputs and p-dimensional outputs are understood
as linear operators G : D_G ⊂ L_2^m(R_+) → L_2^p(R_+) for some domain D_G. A system G is said to be stable
(or i/o stable) if D_G = L_2^m(R_+) and ‖G‖_{L_2→L_2} := sup_{‖u‖_2=1} ‖Gu‖_2 < ∞. The norm of G defined by the
last expression is called its L_2-induced norm. We say that G : u ↦ y is causal if y(t) = 0 for all t ≤ t_c
whenever u(t) = 0 for all t ≤ t_c, for every t_c ∈ R_+. In other words, a system is causal if its output at
every time instance t_c can only depend on the past and present inputs. A linear system G is said to be time
invariant (abbreviated LTI) if G S_τ = S_τ G for all τ > 0, where S_τ is the τ-shift operator, defined via
\[
(S_\tau u)(t) = \begin{cases} 0 & \text{if } 0 \le t < \tau, \\ u(t-\tau) & \text{if } \tau \le t. \end{cases} \tag{B.2}
\]
This effectively says that a time-shifted input produces a time-shifted, but otherwise unchanged, output.
A fairly general class of p×m causal LTI systems G : u ↦ y can be described by the convolution
integral
\[
y(t) = \int_{\mathbb{R}_+} g(t-s)\, u(s)\,\mathrm{d}s
     = \int_{\mathbb{R}_+} \tilde g(t-s)\, u(s)\,\mathrm{d}s + \sum_{i\in\mathbb{Z}_+} g_i\, u(t - \varkappa_i), \tag{B.3}
\]

where
\[
g(t) = \tilde g(t) + \sum_{i\in\mathbb{Z}_+} g_i\, \delta(t - \varkappa_i) \tag{B.4}
\]
for a function g̃ : R → R^{p×m} that is bounded on any bounded subset of R and satisfies g̃(t) = 0 whenever
t < 0, bounded g_i ∈ R^{p×m}, and a strictly increasing sequence {ϰ_i}_{i∈Z_+} such that ϰ_0 = 0 and ϰ_i − ϰ_{i−1} ≥ ε
for all i ≥ 1 and some constant ε > 0, independent of i. The (generalized) function g is called the impulse
response of G, i.e. it is the response of G to the Dirac delta impulse¹ applied at the time instance t = 0.
¹More precisely, the j-th column of g(t) is the response of G to the input e_j δ(s), where e_j is the j-th standard basis vector of R^m.
An important class of LTI systems is that of finite-dimensional systems. These are systems for which there
are n ∈ Z_+ and matrices A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n}, and D ∈ R^{p×m} such that
\[
g(t) = D\delta(t) + C e^{At} B\, \mathbb{1}(t) \tag{B.5}
\]
(i.e. g̃(t) = C e^{At} B 1(t), g_0 = D, and g_i = 0 for all i > 0 in (B.4)). Given any t_c ≥ 0, the response of
such a system at all t ≥ t_c can be expressed as
\[
y(t) = Du(t) + \int_0^t C e^{A(t-s)} B u(s)\,\mathrm{d}s
     = Du(t) + \int_{t_c}^t C e^{A(t-s)} B u(s)\,\mathrm{d}s + C e^{A(t-t_c)} x(t_c), \tag{B.6}
\]

where the n-dimensional vector
\[
x(t) := \int_0^t e^{A(t-s)} B u(s)\,\mathrm{d}s \tag{B.7}
\]
is called the state vector of G. The meaning of the second relation in (B.6) is that the knowledge of x(t_c)
is sufficient to account for the effect of the past, t < t_c, on the behavior of the system afterwards. In
other words, the state vector is a history accumulator. By differentiating (B.7), we get the state-space
description,
\[
G : \begin{cases} \dot x(t) = A x(t) + B u(t), & x(0) = 0, \\ y(t) = C x(t) + D u(t), \end{cases} \tag{B.8}
\]
which is a differential, rather than integral, equation. The state-space representation (B.8) might be more
convenient to handle than the convolution integral (B.3). The quadruple (A, B, C, D) is referred to as a
state-space realization of the system G. In some situations it is more convenient to take the initial condition
x(0) equal to some x_0 ≠ 0; this aims at accounting for the behavior of the system in t < 0. In such a
case we can treat G as an operator D_G ⊂ R^n × L_2^m(R_+) → L_2^p(R_+).
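The state-space form (B.8) is also what standard simulation tools integrate directly. The sketch below (an added illustration; the realization and the input are arbitrary, and SciPy is assumed to be available) computes the step response of a second-order example with scipy.signal.lsim.

```python
import numpy as np
from scipy.signal import lsim

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

t = np.linspace(0.0, 10.0, 1001)
u = np.ones_like(t)                                   # unit-step input
_, y, _ = lsim((A, B, C, D), U=u, T=t, X0=np.zeros(2))
print(y[-1])   # approaches the static gain D + C(-A)^{-1}B = 0.5
```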
There is also an alternative notion of stability for systems given by their state equation (B.8). A
linear system is said to be Lyapunov stable if for every ε > 0 there is δ = δ(ε) such that ‖x(t)‖ < ε for
all t ∈ R_+ whenever ‖x(0)‖ < δ. Asymptotic stability requires, in addition, that lim_{t→∞} ‖x(t)‖ = 0. It
is known that G is Lyapunov stable iff A has no eigenvalues in the open right-half plane C_0 and its pure
imaginary eigenvalues are semisimple (i.e. their algebraic and geometric multiplicities coincide). G is
asymptotically stable iff A has no eigenvalues in the closed right-half plane C̄_0. Note that these notions do
not account for exogenous input signals explicitly², considering the system autonomous for t > 0. This is
unlike the i/o stability notion, which assumes an equilibrium at t = 0 and is driven by u. Nonetheless, the
i/o and Lyapunov stability properties are essentially equivalent for finite-dimensional systems. Namely,
under a mild minimality assumption, G is i/o stable iff its state equation is asymptotically stable. The
situation is less trivial for infinite-dimensional systems though.
²If we think of the variables in (B.8) as deviations from an equilibrium, then a possibly constant u is involved in the definition.

B.1.2 Discrete-time signals and systems


Discrete-time signals are functions defined on some subset of the set of integers Z. An n-dimensional
discrete-time signal f[t] on the nonnegative semi-axis Z_+ is understood as
\[
f : \mathbb{Z}_+ \to \mathbb{R}^n.
\]
The discrete counterpart of the L_2 space is
\[
\ell_2^n(\mathbb{Z}_+) := \Bigl\{ f : \mathbb{Z}_+ \to \mathbb{R}^n \;\Big|\; \|f\|_2 := \Bigl( \sum_{t\in\mathbb{Z}_+} \|f[t]\|^2 \Bigr)^{1/2} < \infty \Bigr\} \tag{B.9}
\]
and the squared ℓ_2-norm, ‖f‖_2^2, can also be interpreted as the energy of f, rendering ℓ_2 the space of
finite-energy signals.
Linear discrete-time systems with m-dimensional inputs and p-dimensional outputs are understood as
linear operators G : D_G ⊂ ℓ_2^m(Z_+) → ℓ_2^p(Z_+) for some domain D_G. A system G is said to be stable if
D_G = ℓ_2^m(Z_+) and ‖G‖_{ℓ_2→ℓ_2} := sup_{‖u‖_2=1} ‖Gu‖_2 < ∞. The norm of G defined by the last expression
is called its ℓ_2-induced norm. We say that G : u ↦ y is causal (strictly causal) if y[t] = 0 for all t ≤ t_c
whenever u[t] = 0 for all t ≤ t_c (all t < t_c), for every t_c ∈ Z_+. In other words, a system is causal if
its output at every time instance t_c can only depend on the past and present inputs, and strictly causal if
its output at every t_c can only depend on the past inputs. A linear system G is said to be shift invariant
(abbreviated LSI) if G S_1 = S_1 G, where the unit shift S_1 is defined similarly to the continuous-time shift
in (B.2) for τ = 1.
A general class of p×m causal LSI systems G : u ↦ y can be described by the convolution sum
\[
y[t] = \sum_{s\in\mathbb{Z}} g[t-s]\, u[s] \tag{B.10}
\]
for a function g : Z → R^{p×m} that is bounded on any bounded subset of Z and satisfies g[t] = 0 whenever
t < 0. The function g, which is the response of the system to the unit pulse applied at t = 0, is called the
impulse response of G. Finite-dimensional LSI systems are those with
response of G . Finite-dimensional LSI systems are those with
gŒt  D DıŒt  C CAt B1Œt  (B.11)
nn nm pn pm
for some n 2 ZC and matrices A 2 R , B 2 R , C 2 R , and D 2 R . For such systems there
is a (non-unique) n-dimensional state vector
t
X
xŒt  ´ At s BuŒs: (B.12)
sD0

It is a history accumulator, exactly like in the continuous-time case. Finite-dimensional systems can be
presented in their state-space description,
\[
G : \begin{cases} x[t+1] = A x[t] + B u[t], & x[0] = 0, \\ y[t] = C x[t] + D u[t], \end{cases} \tag{B.13}
\]
which is a difference equation, similarly to the differential equation in the continuous-time case.
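Unlike its continuous-time counterpart, the difference equation (B.13) can be iterated directly, which is one reason discrete-time models are convenient in computations. A minimal sketch (added illustration; the matrices and the input are arbitrary choices):

```python
import numpy as np

A = np.array([[0.5, 1.0], [0.0, 0.3]])   # Schur matrix: eigenvalues 0.5 and 0.3
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

N = 50
u = np.ones(N)                # unit-step input
x = np.zeros((2, 1))          # x[0] = 0
y = np.zeros(N)
for t in range(N):
    y[t] = (C @ x + D * u[t]).item()   # y[t] = C x[t] + D u[t]
    x = A @ x + B * u[t]               # x[t+1] = A x[t] + B u[t]
print(y[-1])   # approaches the static gain C(I - A)^{-1}B ≈ 2.857
```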

B.2 Signals and systems in transformed domains


The Fourier transform of a continuous-time signal f : R_+ → F^n is defined as
\[
\mathcal{F}\{f\} = F(j\omega) := \int_{\mathbb{R}_+} f(t)\, e^{-j\omega t}\,\mathrm{d}t, \tag{B.14}
\]
where ω ∈ R is called the (angular) frequency and is measured in radians per time unit (e.g. per second). The
signal F{f} is called the frequency-domain representation of f. The term "frequency" becomes apparent
in the inverse Fourier transform formula,
\[
\mathcal{F}^{-1}\{F\} = f(t) = \frac{1}{2\pi} \int_{\mathbb{R}} F(j\omega)\, e^{j\omega t}\,\mathrm{d}\omega, \tag{B.15}
\]
which effectively says that f(t) is a superposition of harmonic signals e^{jωt} with frequencies ω. The
Fourier transform F(jω), also called the spectrum of f, can then be viewed as the weight of the harmonic
e^{jωt} in f(t). The set of Fourier-transformable signals is quite limited: the integral in (B.14) converges, in
norm, essentially only for L_2(R_+)-signals. The one-sided (or unilateral) Laplace transform of f : R_+ → C^n,
defined as
\[
\mathcal{L}\{f\} = F(s) := \int_{\mathbb{R}_+} f(t)\, e^{-st}\,\mathrm{d}t \tag{B.16}
\]
for all s ∈ C for which the integral converges, can be viewed as a generalization of the Fourier transform,
rendering a wider class of signals transformable. The set of s ∈ C for which (B.16) exists is called the
region of convergence (or RoC) of F(s).
An important property of integral transforms is that they turn convolutions into products. Namely, if
G : u ↦ y is LTI, then
\[
\mathcal{F}\{y\} = \mathcal{F}\{g\}\, \mathcal{F}\{u\}
\quad\text{or}\quad
Y(j\omega) = G(j\omega)\, U(j\omega),
\]
provided F{g} and F{u} exist. The Fourier transform of the impulse response g of an LTI G is called its
frequency response and denoted G(jω). Likewise,
\[
\mathcal{L}\{y\} = \mathcal{L}\{g\}\, \mathcal{L}\{u\}
\quad\text{or}\quad
Y(s) = G(s)\, U(s)
\]
for all s in the RoCs of both G(s) and U(s). The Laplace transform of the impulse response g of an LTI
G is called its transfer function, denoted G(s).
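As a small symbolic illustration (added here; SymPy is an assumed tool choice, and the exponential is an arbitrary example), the Laplace transform (B.16) can be computed together with its region of convergence:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a = sp.symbols('a', positive=True)

F, abscissa, _ = sp.laplace_transform(sp.exp(-a*t), t, s)
print(F)          # 1/(a + s)
print(abscissa)   # convergence abscissa: the transform exists for Re(s) above this value
```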
Properties of LTI systems can be analyzed in terms of their transfer functions. For example, an LTI
continuous-time system is causal and stable iff its transfer function G(s) belongs to the space
\[
H_\infty^{p\times m} := \bigl\{ G : \mathbb{C}_0 \to \mathbb{C}^{p\times m} \mid G \text{ is holomorphic in } \mathbb{C}_0 \text{ and } \|G\|_\infty := \sup\nolimits_{s\in\mathbb{C}_0} \|G(s)\| < \infty \bigr\}, \tag{B.17}
\]
and ‖G(s)‖ stands for the matrix spectral norm on C^{p×m}. The H_∞ system norm defined in (B.17) actually
equals the L_2-induced norm of G, i.e. ‖G‖_∞ = ‖G‖_{L_2→L_2}. Also, every G ∈ H_∞ has a unique boundary
function G̃ ∈ L_∞(jR) such that G̃(jω) = lim_{σ↓0} G(σ + jω) for almost all ω and ‖G̃‖_∞ = ‖G‖_∞, where
\[
L_\infty^{p\times m}(j\mathbb{R}) := \bigl\{ \tilde G : j\mathbb{R} \to \mathbb{C}^{p\times m} \mid \|\tilde G\|_\infty := \operatorname*{ess\,sup}\nolimits_{\omega\in\mathbb{R}} \|\tilde G(j\omega)\| < \infty \bigr\}. \tag{B.18}
\]
It is customary to identify G with G̃ and regard H_∞ as a closed subspace of L_∞(jR), in which case
\[
\|G\|_\infty = \operatorname*{ess\,sup}_{\omega\in\mathbb{R}} \|G(j\omega)\|. \tag{B.19}
\]
It should be emphasized that this equality holds only for functions G ∈ H_∞ as defined by (B.17).
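A crude but common way to estimate (B.19) numerically is to grid the imaginary axis and take the largest singular value of the frequency response; for a stable G this yields a lower bound that is usually tight if the grid resolves the resonant peaks. A sketch (added illustration; the realization and the grid are arbitrary choices):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -0.2]])   # lightly damped: resonance near w = 1
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

w = np.logspace(-2, 2, 4000)
n = A.shape[0]
gains = [np.linalg.svd(D + C @ np.linalg.solve(1j * wi * np.eye(n) - A, B),
                       compute_uv=False)[0] for wi in w]
print(max(gains))   # ~5, the peak gain of G(s) = 1/(s^2 + 0.2 s + 1)
```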

The counterpart of the Laplace transform for discrete sequences is the z-transform, defined as
\[
\mathcal{Z}\{f\} = F(z) := \sum_{t\in\mathbb{Z}_+} f[t]\, z^{-t} \tag{B.20}
\]
for all z ∈ C for which the sum converges (the RoC). The z-transform of the impulse response g of a
discrete-time LSI system G is also called its transfer function, denoted G(z). In full analogy with the
continuous-time case, the relation between the z-transforms of the input and output of G reads
Y(z) = G(z)U(z) in the z-domain, and G is causal and stable iff its transfer function is holomorphic and
bounded in the exterior of the closed unit disk, i.e. in C \ D̄.
Transfer functions, in both the Laplace and z-domains, can be expressed in terms of corresponding
state-space realizations (A, B, C, D) as
\[
G(s) = D + C(sI - A)^{-1}B
\;=:\;
\left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right]
\;:=\;
D + C(zI - A)^{-1}B = G(z).
\]
Transfer functions of finite-dimensional causal systems are rational functions of s and z and are proper, in
the sense that their limits as Re s → ∞ and |z| → ∞, respectively, exist. Finite-dimensional systems are
stable iff they are proper and all poles of their transfer functions are in the open left-half plane C \ C̄_0 (or
in the open unit disk D in the discrete-time case). All these poles are among the eigenvalues of the "A"
matrix of a state-space realization of G, cf. the relation between the i/o and Lyapunov asymptotic stability
notions.

B.3 State-space techniques


The transfer function of the parallel interconnection G_1 + G_2 is
\[
\left[\begin{array}{c|c} A_1 & B_1 \\ \hline C_1 & D_1 \end{array}\right]
+ \left[\begin{array}{c|c} A_2 & B_2 \\ \hline C_2 & D_2 \end{array}\right]
= \left[\begin{array}{cc|c} A_1 & 0 & B_1 \\ 0 & A_2 & B_2 \\ \hline C_1 & C_2 & D_1 + D_2 \end{array}\right]. \tag{B.21}
\]

The transfer function of the cascade (series) interconnection G_2 G_1 is
\[
\left[\begin{array}{c|c} A_2 & B_2 \\ \hline C_2 & D_2 \end{array}\right]
\left[\begin{array}{c|c} A_1 & B_1 \\ \hline C_1 & D_1 \end{array}\right]
= \left[\begin{array}{cc|c} A_1 & 0 & B_1 \\ B_2 C_1 & A_2 & B_2 D_1 \\ \hline D_2 C_1 & C_2 & D_2 D_1 \end{array}\right]
= \left[\begin{array}{cc|c} A_2 & B_2 C_1 & B_2 D_1 \\ 0 & A_1 & B_1 \\ \hline C_2 & D_2 C_1 & D_2 D_1 \end{array}\right]. \tag{B.22}
\]

The transfer function of the inverse system is
\[
\left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right]^{-1}
= \left[\begin{array}{c|c} A - B D^{-1} C & B D^{-1} \\ \hline -D^{-1} C & D^{-1} \end{array}\right] \tag{B.23}
\]
and it exists, as a proper transfer function, iff det(D) ≠ 0.
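Formulas (B.21)–(B.23) are straightforward to implement as operations on realizations; dedicated packages provide them, but a bare NumPy sketch (added illustration, not part of the original text) makes the block structure explicit. A quick sanity check is to evaluate D + C(sI − A)^{-1}B of, say, the cascade of a system with its inverse at a few points s and verify that the identity is recovered.

```python
import numpy as np

def parallel(ss1, ss2):
    """Realization (B.21) of G1 + G2; ss = (A, B, C, D) with NumPy arrays."""
    A1, B1, C1, D1 = ss1
    A2, B2, C2, D2 = ss2
    n1, n2 = A1.shape[0], A2.shape[0]
    A = np.block([[A1, np.zeros((n1, n2))], [np.zeros((n2, n1)), A2]])
    return A, np.vstack([B1, B2]), np.hstack([C1, C2]), D1 + D2

def series(ss1, ss2):
    """Realization (B.22), first form, of G2 G1 (u -> G1 -> G2 -> y)."""
    A1, B1, C1, D1 = ss1
    A2, B2, C2, D2 = ss2
    n1, n2 = A1.shape[0], A2.shape[0]
    A = np.block([[A1, np.zeros((n1, n2))], [B2 @ C1, A2]])
    return A, np.vstack([B1, B2 @ D1]), np.hstack([D2 @ C1, C2]), D2 @ D1

def inverse(ss):
    """Realization (B.23) of the inverse system; requires det(D) != 0."""
    A, B, C, D = ss
    Di = np.linalg.inv(D)
    return A - B @ Di @ C, B @ Di, -Di @ C, Di
```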


Bibliography

[1] R. J. Anderson and M. W. Spong, “Bilateral control of teleoperators with time delay,” IEEE Trans.
Automat. Control, vol. 34, no. 5, pp. 494–501, 1989.
[2] Z. Artstein, “Linear systems with delayed control: A reduction,” IEEE Trans. Automat. Control,
vol. 27, no. 4, pp. 869–879, 1982.

[3] M. Athans and P. L. Falb, Optimal Control: An Introduction to the Theory and Its Applications. NY:
McGraw-Hill, 1966.

[4] R. Bellman and K. L. Cooke, Differential-Difference Equations. New York: Academic Press, 1963.
[5] C. Bonnet, A. R. Fioravanti, and J. R. Partington, “Stability of neutral systems with commensurate
delays and poles asymptotic to the imaginary axis,” SIAM J. Control Optim., vol. 49, no. 2, pp.
498–516, 2011.
[6] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, UK: Cambridge University Press,
2004.

[7] R. F. Curtain and K. Glover, “Robust stabilization of infinite dimensional systems by finite dimen-
sional controllers,” Syst. Control Lett., vol. 7, pp. 41–47, 1986.

[8] R. F. Curtain and K. Morris, “Transfer functions of distributed parameter systems: A tutorial,” Auto-
matica, vol. 45, no. 5, pp. 1101–1116, 2009.
[9] R. F. Curtain and H. Zwart, An Introduction to Infinite-Dimensional Linear Systems Theory. New
York: Springer-Verlag, 1995.
[10] R. Datko, “A procedure for determination of the exponential stability of certain differential-
difference equations,” Quart. Appl. Math., vol. 36, pp. 279–292, 1978.

[11] C. A. Desoer and M. Vidyasagar, Feedback Systems: Input-Output Properties. New York: Aca-
demic Press, 1975.
[12] Y. A. Fiagbedzi and A. E. Pearson, “Feedback stabilization of linear autonomous time lag systems,”
IEEE Trans. Automat. Control, vol. 31, no. 9, pp. 847–855, 1986.
[13] B. A. Francis, A Course in H∞ Control Theory, ser. Lecture Notes in Control and Inform. Sci. New York:
Springer-Verlag, 1987, vol. 88.
[14] B. A. Francis and W. M. Wonham, “The internal model principle for linear multivariable regulators,”
Appl. Math. Opt., vol. 2, no. 2, pp. 170–412, 1975.

[15] E. Fridman, “New Lyapunov–Krasovskii functionals for stability of linear retarded and neutral type
systems,” Syst. Control Lett., vol. 43, pp. 309–319, 2001.


[16] A. T. Fuller, “Optimal nonlinear control of systems with pure delay,” Int. J. Control, vol. 8, no. 2, pp.
145–168, 1968.

[17] T. Furukawa and E. Shimemura, “Predictive control for systems with time delay,” Int. J. Control,
vol. 37, no. 2, pp. 399–412, 1983.

[18] G. C. Goodwin, S. F. Graebe, and M. E. Salgado, Control System Design. Englewood Cliffs, NJ:
Prentice-Hall, 2000.

[19] K. Gu, V. L. Kharitonov, and J. Chen, Stability of Time-Delay Systems. Boston: Birkhäuser, 2003.

[20] R. Gudin and L. Mirkin, “On the delay margin of dead-time compensators,” Int. J. Control, vol. 80,
no. 8, pp. 1316–1332, 2007.

[21] J. K. Hale and S. M. Verduyn Lunel, Introduction to Functional Differential Equations. New York:
Springer-Verlag, 1993.

[22] T. Inoue, M. Nakano, and S. Iwai, “High accuracy control of servomechanism for repeated con-
touring,” in Proc. 10th Annual Symp. Incremental Motion Control Systems and Devices, Urbana–
Champaign, IL, 1981, pp. 285–292.

[23] N. F. Jerome and W. H. Ray, “High-performance multivariable control strategies for systems having
time delays,” AIChE Journal, vol. 32, no. 6, pp. 914–931, 1986.

[24] T. Kailath, Linear Systems. Englewood Cliffs, NJ: Prentice-Hall, 1980.

[25] C.-Y. Kao and B. Lincoln, “Simple stability criteria for systems with time-varying delays,” Automat-
ica, vol. 40, no. 8, pp. 1429–1434, 2004.

[26] C.-Y. Kao and A. Rantzer, “Stability analysis of systems with uncertain time-varying delays,” Auto-
matica, vol. 43, no. 6, pp. 959–970, 2007.

[27] V. L. Kharitonov, Time-Delay Systems: Lyapunov Functionals and Matrices. NY: Birkhäuser, 2013.

[28] O. Kidron and O. Yaniv, “Robust control of uncertain resonant systems,” European J. Control, vol. 1,
no. 2, pp. 104–112, 1995.

[29] D. L. Kleinman, “Optimal control of linear systems with time-delay and observation noise,” IEEE
Trans. Automat. Control, vol. 14, no. 5, pp. 524–527, 1969.

[30] N. N. Krasovskiı̆, Stability of Motion: Applications of Lyapunov’s Second Method to Differential
Systems and Equations With Delay. Stanford, CA: Stanford Univ. Press, 1963.

[31] W. H. Kwon and A. E. Pearson, “Feedback stabilization of linear systems with delayed control,”
IEEE Trans. Automat. Control, vol. 25, no. 2, pp. 266–269, 1980.

[32] D. G. Luenberger, Optimization by Vector Space Methods. NY: John Wiley & Sons, 1969.

[33] A. Z. Manitius and A. W. Olbrot, “Finite spectrum assignment problem for systems with delay,”
IEEE Trans. Automat. Control, vol. 24, pp. 541–553, 1979.

[34] I. Masubuchi, Y. Kamitane, A. Ohara, and N. Suda, “H∞ control for descriptor systems: A matrix
inequalities approach,” Automatica, vol. 33, no. 4, pp. 669–673, 1997.

[35] D. Q. Mayne, “Control of linear systems with time delay,” Electronics Letters, vol. 4, no. 20, pp.
439–440, 1968.
[36] A. Megretski and A. Rantzer, “System analysis via integral quadratic constraints,” IEEE Trans. Au-
tomat. Control, vol. 42, no. 6, pp. 819–830, 1997.
[37] G. Meinsma and L. Mirkin, “H∞ control of systems with multiple I/O delays via decomposition to
adobe problems,” IEEE Trans. Automat. Control, vol. 50, no. 2, pp. 199–211, 2005.

[38] G. Meinsma and H. Zwart, “On H∞ control for dead-time systems,” IEEE Trans. Automat. Control,
vol. 45, no. 2, pp. 272–285, 2000.
[39] R. H. Middleton and D. E. Miller, “On the achievable delay margin using LTI control for unstable
plants,” IEEE Trans. Automat. Control, vol. 52, no. 7, pp. 1194–1207, 2007.
[40] D. E. Miller and D. E. Davison, “Stabilization in the presence of an uncertain arbitrarily large delay,”
IEEE Trans. Automat. Control, vol. 50, no. 8, pp. 1074–1089, 2005.

[41] L. Mirkin, “On the extraction of dead-time controllers and estimators from delay-free parametriza-
tions,” IEEE Trans. Automat. Control, vol. 48, no. 4, pp. 543–553, 2003.
[42] ——, “On the approximation of distributed-delay control laws,” Syst. Control Lett., vol. 55, no. 5,
pp. 331–342, 2004.
[43] ——, “Intermittent redesign of analog controllers via the Youla parameter,” IEEE Trans. Automat.
Control, vol. 62, no. 4, pp. 1838–1851, 2017.
[44] ——, “Linear Control Systems,” course notes, Faculty of Mechanical Eng., Technion—IIT, 2019.
[Online]. Available: http://leo.technion.ac.il/Courses/LCS/LCSnotes19.pdf

[45] L. Mirkin, Z. J. Palmor, and D. Shneiderman, “Dead-time compensation for systems with multiple
I/O delays: A loop shifting approach,” IEEE Trans. Automat. Control, vol. 56, no. 11, pp. 2542–2554,
2011.
[46] L. Mirkin and N. Raskin, “Every stabilizing dead-time controller has an observer-predictor-based
structure,” Automatica, vol. 39, no. 10, pp. 1747–1754, 2003.
[47] L. Mirkin, H. Rotstein, and Z. J. Palmor, “H2 and H∞ design of sampled-data systems using lifting.
Part II: Properties of systems in the lifted domain,” SIAM J. Control Optim., vol. 38, no. 1, pp.
197–218, 1999.
[48] L. Mirkin and G. Tadmor, “Imposing FIR structure on H2 preview tracking and smoothing solu-
tions,” SIAM J. Control Optim., vol. 48, no. 4, pp. 2433–2460, 2009.

[49] S.-I. Niculescu and W. Michiels, “Stabilizing a chain of integrators using multiple delays,” IEEE
Trans. Automat. Control, vol. 49, no. 4, pp. 802–807, 2004.
[50] M. Nordin and P.-O. Gutman, “Digital QFT design for the benchmark problem,” European J. Con-
trol, vol. 1, no. 2, pp. 97–103, 1995.
[51] J. E. Normey-Rico and E. F. Camacho, Control of Dead-time Processes. London: Springer-Verlag,
2007.

[52] N. Olgac and B. T. Holm-Hansen, “A novel active vibration absorption technique: delayed res-
onator,” J. Sound Vibration, vol. 176, no. 1, pp. 93–104, 1994.

[53] D. H. Owens and A. Raya, “Robust stability of Smith predictor controllers for time-delay systems,”
IEE Proc. – Control Theory Appl., vol. 129, no. 6, pp. 298–304, 1982.

[54] A. Packard and J. Doyle, “The complex structured singular value,” Automatica, vol. 29, no. 1, pp.
71–109, 1993.

[55] P. Park, “A delay-dependent stability criterion for systems with uncertain time-invariant delays,”
IEEE Trans. Automat. Control, vol. 44, no. 4, pp. 876–877, 1999.

[56] J. R. Partington, Linear Operators and Linear Systems: An Analytical Approach to Control Theory.
Cambridge, UK: Cambridge University Press, 2004.

[57] J. R. Partington and C. Bonnet, “H∞ and BIBO stabilization of delay systems of neutral type,” Syst.
Control Lett., vol. 52, no. 8, pp. 283–288, 2004.

[58] J. R. Partington and P. M. Mäkilä, “Rational approximation of distributed-delay controllers,” Int. J.
Control, vol. 78, no. 16, pp. 1295–1301, 2005.

[59] R. Rabah, G. M. Sklyar, and A. V. Rezounenko, “Stability analysis of neutral type systems in Hilbert
space,” J. Differential Equations, vol. 214, no. 2, pp. 391–428, 2005.

[60] B. S. Razumikhin, “On the stability of delay systems,” Prikl. Mat. i Meh., vol. 20, no. 4, pp. 500–512,
1956, (in Russian).

[61] Z. V. Rekasius, “A stability test for systems with delays,” in Proc. 1980 Joint Automatic Control
Conf., vol. II, San Francisco, CA, 1980, p. TP9 A.

[62] R. D. Robinett, B. J. Petterson, and J. C. Fahrenholtz, “Time-domain validation for sample-data
uncertainty models,” J. Intell. Robot. Syst., vol. 21, pp. 277–285, 1998.

[63] W. Rudin, Real and Complex Analysis, 3rd ed. New York: McGraw-Hill, 1987.

[64] A. Seuret, F. Gouaisbaut, and L. Baudouin, “D1.1 - Overview of Lyapunov methods for
time-delay systems,” LAAS-CNRS, Research Report no. 16308, Sep. 2016. [Online]. Available:
https://hal.archives-ouvertes.fr/hal-01369516

[65] J. S. Shamma, “Robust stability with time-varying structured uncertainty,” IEEE Trans. Automat.
Control, vol. 39, no. 4, pp. 714–724, 1994.

[66] W. Singhose, “Command shaping for flexible systems: A review of the first 50 years,” Int. J. Precis.
Eng. Manuf., vol. 10, no. 4, pp. 153–168, 2009.

[67] O. J. M. Smith, “Closer control of loops with dead time,” Chem. Eng. Progress, vol. 53, no. 5, pp.
217–219, 1957.

[68] ——, “Posicast control of damped oscillatory systems,” Proc. IRE, vol. 45, no. 9, pp. 1249–1255,
1957.

[69] I. H. Suh and Z. Bien, “Proportional minus delay controller,” IEEE Trans. Automat. Control, vol. 24,
no. 2, pp. 370–372, 1979.

[70] P. K. S. Tam and J. B. Moore, “Stable realization of fixed-lag smoothing equations for continuous-
time signals,” IEEE Trans. Automat. Control, vol. 19, no. 1, pp. 84–87, 1974.

[71] O. Troeng and L. Mirkin, “Toward a more efficient implementation of distributed-delay elements,”
in Proc. 52nd IEEE Conf. Decision and Control, Florence, Italy, 2013, pp. 294–299.

[72] Y. Z. Tsypkin, “Stability of systems with delayed feedback,” Avtomatika i Telemekhanica, vol. 7, no.
2–3, pp. 107–129, 1946.

[73] V. Van Assche, M. Dambrine, J.-F. Lafay, and J.-P. Richard, “Some problems arising in the im-
plementation of distributed-delay control laws,” in Proc. 38th IEEE Conf. Decision and Control,
Phoenix, AZ, 1999, pp. 4668–4672.

[74] J. Veenman and C. W. Scherer, “IQC-synthesis with general dynamic multipliers,” Int. J. Robust and
Nonlinear Control, vol. 24, no. 17, pp. 3027–3056, 2014.

[75] K. Walton and J. E. Marshall, “Direct method for TDS stability analysis,” IEE Proc. – Control Theory
Appl., vol. 134, pp. 101–107, 1987.

[76] Z. Q. Wang, P. Lundström, and S. Skogestad, “Representation of uncertain time delays in the H∞
framework,” Int. J. Control, vol. 59, no. 3, pp. 627–638, 1994.

[77] K. Watanabe and M. Ito, “A process-model control for linear systems with delay,” IEEE Trans.
Automat. Control, vol. 26, no. 6, pp. 1261–1269, 1981.

[78] B. Wie and K.-W. Byun, “New generalized structural filtering concept for active vibration control
synthesis,” J. Guidance, Control, and Dynamics, vol. 12, no. 2, pp. 147–154, 1989.

[79] J. C. Willems, The Analysis of Feedback Systems. Cambridge, MA: The MIT Press, 1971.

[80] S. Xu and J. Lam, “On equivalence and efficiency of certain stability criteria for time-delay systems,”
IEEE Trans. Automat. Control, vol. 52, no. 1, pp. 95–101, 2007.

[81] J. Zhang, C. R. Knospe, and P. Tsiotras, “Stability of time-delay systems: Equivalence between
Lyapunov and scaled small-gain conditions,” IEEE Trans. Automat. Control, vol. 46, no. 3, pp. 482–
486, 2001.

[82] X.-M. Zhang, Q.-L. Han, A. Seuret, F. Gouaisbaut, and Y. He, “Overview of recent advances in
stability of linear systems with time-varying delays,” IET Control Theory Appl., vol. 13, no. 1, pp.
1–16, 2018.
Index

Characteristic function, 21
Crossover frequency, 27

Dead-time compensator, 44
  FASP, 82
  implementation
    lumped-delay approximation, 96
    reset mechanism, 91
  MSP, 45
  Smith predictor, 43
Dead-time systems, 4
Delay-differential equation, 11
  neutral, 21
  retarded, 21
Delay element
  continuous-time, 2
    frequency response, 3
    impulse response, 3
    state, 2
    transfer function, 3
  discrete-time, 1
    impulse response, 1
    state, 1
    state-space realization, 2
    transfer function, 1
  distributed, 6
  input adobe, 79
  multiple, 5
Delay margin, 103

Equation
  algebraic Riccati
    H2, 69
    H∞, 70
    stabilizing solution, 69, 70
  Lyapunov, 149
  Sylvester, 149

Feedforward-action Smith predictor (FASP), 82
Finite spectrum assignment, 46
FIR completion operator, 60
FIR truncation operator, 60

Left characteristic matrix equation, 54
Leibniz integral rule, 38
Loop shifting
  H2 performance, 77
  internal stability, 59
Lumped-delay approximation, 96
Lyapunov–Krasovskii functional, 38
Lyapunov function, 36
  quadratic, 36

Matrix
  Hurwitz, 149
Modified Smith predictor (MSP), 45

Nehari extension problem, 83
Norm
  H2 (system norm), 68
  H∞ (system norm), 154
  L2 (signal norm), 151
  ℓ2 (signal norm), 153

Observer-predictor, 47, 53

Padé approximant, 13
  multipoint, 93
Posicast control, 127
Problem
  standard, 67

Quasi-polynomial, 9, 21
  advanced, 21
  neutral, 21
  retarded, 21

Repetitive control, 142
Robust Stability Theorem, 110
Roots of quasi-polynomials
  asymptotic properties
    neutral chains, 23
    retarded chains, 23


  migration
    reversal, 27
    switch, 27
  stability analysis
    bilinear (Rekašius) transformation, 33
    delay sweeping (direct method), 27
Roth's removal rule, 149

Schur complement, 147
Smith controller
  predictor, 43
  primary controller, 43
Stability
  delay-independent, 103
  internal, 59
  L2, 151
  ℓ2, 153
  Lyapunov, 152
State vector, 152
  discrete, 153
System
  causal, 151, 153
  convolution representation, 152, 153
  i/o stable, 151, 153
  Lyapunov stable, 152
    asymptotically, 152
  shift invariant, 153
  state vector, 152
    discrete, 153
  time invariant, 152
  transfer function, 154

Transfer function, 154
Transform
  z (one-sided), 155
  continuous-time Fourier, 154
  Laplace (one-sided), 154

Wave equation, 7
Well posedness of feedback, 58
