provide an elegant algorithm for stability testing in general. The proof of the method is quite simple; however, we will defer it until we have some more mathematical background.
Let the system now be probed by an impulse. The response of a system that is initially at rest, to the impulsive input, is appropriately called the impulse response of the system. Since the Laplace transform of the impulse is $\mathcal{L}_-[\delta] = 1$, the impulse response has Laplace transform equal to $H(s)$. Consequently, we get:
$$[\text{transfer function}] = \mathcal{L}\,[\text{impulse response}].$$
Whereas the differential equation a(D)y = b(D)u gives a destructive or implicit description
of the system, its Laplace transform, Y (s) = H(s)U(s) is an explicit representation of the
system, but in the Laplace (or s-) domain.
Hence,
$$Y(s) = \frac{b(s)}{s\,a(s)} = \frac{H(s)}{s},$$
where $H(s)$ is known as the transfer function of the system. Consider the following example: $H(s) = \frac{s-a}{s^2+3s+2}$. Using the partial fraction expansion, we find
$$Y(s) = \frac{s-a}{s(s+1)(s+2)} = \frac{-a/2}{s} + \frac{a+1}{s+1} + \frac{-(a+2)/2}{s+2}.$$
The inverse transform yields
$$y(t) = \left[-\frac{a}{2} + (a+1)e^{-t} - \frac{a+2}{2}\,e^{-2t}\right]u_{-1}(t).$$
Note that the limit for $t \to \infty$ is $-a/2$. The final value theorem, which may be applied here (Why?), gives
$$y(\infty) = \lim_{s\to 0}\,[sY(s)] = \lim_{s\to 0}\left[-\frac{a}{2} + (a+1)\frac{s}{s+1} - \frac{a+2}{2}\,\frac{s}{s+2}\right] = -\frac{a}{2},$$
which is consistent.
If the same system is initially at rest but an impulse is applied at time $t = 0$ instead, then its response is
$$Y(s) = H(s) = \frac{s-a}{(s+1)(s+2)} = \frac{-(a+1)}{s+1} + \frac{a+2}{s+2}.$$
In the time domain, the transient response is
$$y(t) = \left[-(a+1)e^{-t} + (a+2)e^{-2t}\right]u_{-1}(t).$$
Note that if we take the derivative of the response to the step, then
$$y_{\text{step}}'(t) = \left[-\frac{a}{2} + (a+1)e^{-t} - \frac{a+2}{2}\,e^{-2t}\right]'u_{-1}(t) + \left[-\frac{a}{2} + (a+1)e^{-t} - \frac{a+2}{2}\,e^{-2t}\right]u_0(t).$$
By properties of generalized functions, this is
$$y_{\text{step}}'(t) = \left[-(a+1)e^{-t} + (a+2)e^{-2t}\right]u_{-1}(t) + \left[-\frac{a}{2} + (a+1) - \frac{a+2}{2}\right]u_0(t),$$
and since the coefficient of $u_0(t)$ vanishes, this is exactly the response to the impulse. Is this fact surprising?
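One can confirm this quickly with a computer algebra system. The following sympy snippet is an illustration (not part of the notes): it differentiates the step-response formula derived above for this example and compares it with the impulse response, for $t > 0$.

```python
# Minimal symbolic check (sympy) of the claim above, for the example
# H(s) = (s - a)/(s^2 + 3s + 2); the formulas are the ones derived in the text.
import sympy as sp

t = sp.symbols('t', positive=True)   # restrict to t > 0 so the u_{-1} factor drops out
a = sp.symbols('a')

y_step = -a/2 + (a + 1)*sp.exp(-t) - (a + 2)/2*sp.exp(-2*t)   # step response for t > 0
h      = -(a + 1)*sp.exp(-t) + (a + 2)*sp.exp(-2*t)           # impulse response for t > 0

print(sp.simplify(sp.diff(y_step, t) - h))      # 0: differentiating the step response gives h(t)
print(sp.simplify(y_step.subs(t, 0)))           # 0: so no impulse term appears at t = 0
```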
If p < 0, the pole lies on the positive real axis, and the impulse response grows without
bound. The border case p = 0 gives for the impulse response Ku−1 (t), i.e. the step. This is
not surprising as the system is now simply an integrator.
How does a first order system respond when a unit step is applied at the input? We obtain, for the step response in the Laplace domain,
$$Y(s) = \frac{K}{(s+p)s} = \frac{K/p}{s} - \frac{K/p}{s+p}.$$
The corresponding time domain function is
$$y(t) = \frac{K}{p}\left(1 - e^{-pt}\right)u_{-1}(t).$$
For positive $p$ this means that $y(t)$ approaches the value $K/p$ asymptotically. Indeed, the RoC here is $\mathrm{Re}\, s > -p$, so $s = 0$ belongs to the RoC, which implies that the final value theorem applies. The latter gives
$$y(\infty) = \lim_{s\to 0}\, s\,\frac{K}{(s+p)s} = \frac{K}{p}.$$
We say that K/p is the steady state gain of the system. If p < 0, the final value theorem
does not apply. In fact we see that the step response grows without bound.
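A quick numerical sanity check of the steady-state gain is easy to set up; the scipy sketch below is an illustration only, with arbitrary values of $K$ and $p$.

```python
# Numerical check (scipy) for the first-order system H(s) = K/(s + p); K and p are
# arbitrary illustrative values, not taken from the text.
import numpy as np
from scipy import signal

K, p = 3.0, 2.0
sys = signal.TransferFunction([K], [1.0, p])          # H(s) = K/(s + p)

t, y = signal.step(sys, T=np.linspace(0, 5.0/p, 500))  # simulate 5 time constants
print(y[-1], K/p)   # after a few time constants the step response sits near K/p
```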
If $r_1 = r_2 = r$, which happens when $a_1^2 = 4a_2$, then this impulse response is $h(t) = A\,t\,e^{-rt}$. If $r > 0$, the response decreases to zero for $t \to \infty$, but for small values of $t$ (how small?) it will rise approximately linearly with $t$. The response is extremal when $(te^{-rt})' = 0$, i.e., $e^{-rt} - rte^{-rt} = 0$, or $t = 1/r = \tau$, the time constant. Note that the extremal magnitude is $|h(\tau)| = |A|\tau/e$. If $r \le 0$, the impulse response will grow without bound.
Case 3: ζ = 1
This is the case of the double root, and corresponds to
$$H(s) = \frac{K\omega^2}{s^2 + 2\zeta\omega s + \omega^2}.$$
the response to a unit step in the underdamped, critically damped, and overdamped case.
Problem:
A system has impulse response $h(t) = Ke^{-\alpha t}\sin\omega t$.
Determine its transfer function and determine its (unit) step response.
Determine the steady state gain, peak time and maximal overshoot if $K = 1$, $\alpha = 1/2$, and $\omega = 2$.
Determine a formula for the envelope of the step response.
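As a numerical cross-check on your hand calculation, one can simulate the step response with scipy. The sketch below assumes the standard transform pair $\mathcal{L}\{e^{-\alpha t}\sin\omega t\} = \omega/((s+\alpha)^2 + \omega^2)$; the peak time and overshoot are read off the simulated curve rather than derived analytically.

```python
# Numerical exploration (scipy) of the problem's step response for K=1, alpha=1/2, w=2.
import numpy as np
from scipy import signal

K, alpha, w = 1.0, 0.5, 2.0
num = [K*w]                                      # K*w
den = [1.0, 2*alpha, alpha**2 + w**2]            # (s + alpha)^2 + w^2
sys = signal.TransferFunction(num, den)

t, y = signal.step(sys, T=np.linspace(0, 15, 2000))
y_ss = num[0]/den[-1]                            # steady-state gain H(0)
k = np.argmax(y)                                 # index of the peak
print("steady-state gain:", y_ss)
print("peak time:", t[k], "overshoot:", (y[k] - y_ss)/y_ss)
```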
4.5 Stability
In this section stability is discussed. There are essentially two types of stability: Stability
w.r.t. initial conditions, usually referred to as Lyapunov stability, and bounded input, bounded
output (BIBO) stability. The first is the ability of the system to return to a nominal behavior
(usually the equilibrium x = 0 with u = 0), once such behavior has been disturbed. The
second is the property that the response of the system remains bounded for all possible
inputs that are bounded. For linear systems, the two notions of stability are related. In
order to analyze Lyapunov stability, we first define the modes of a system, and characterize
their stability. Then we define the (Lyapunov) stability of an entire system. A useful
criterion is given (without proof) that allows one to decide whether a system, given by its transfer function, is stable or not. We discuss how such a criterion can be manipulated to obtain more quantitative information about the speed of the transients of a system. Finally, we treat BIBO stability.
Consider again a system described by the differential equation
$$a(D)y = b(D)u.$$
It follows that if the system has arbitrary initial conditions specified at $t = 0^-$, then the output function has the unilateral Laplace transform of the form
$$Y(s) = \frac{b(s)}{a(s)}\,U(s) + \frac{c(s)}{a(s)},$$
for some polynomial $c(\cdot)$ of degree strictly less than the degree of $a(\cdot)$. (Why?) Suppose that in addition the denominator polynomial $a(s)$ has all distinct roots, $p_i$, $i = 1, \ldots, n$. Note that, since $a(s)$ has real coefficients, these roots are either real, or form complex conjugate pairs. The zeros of the denominator polynomial of a rational function are called poles. (The name refers to the fact that the graph of the modulus, $|H(s)|$, looks like a pole erected above $p_i$ in the complex plane.) If $H(s)$ is strictly proper, then the partial fraction expansion of the transfer function is
$$H(s) = \sum_{i=1}^{n} \frac{A_i}{s - p_i},$$
where the coefficients may be computed by solving the $n$ equations in $n$ unknowns, obtained once one puts the above right hand side on a common denominator, or one can use the residue method. The latter method is a lot simpler if $n$ is large. The coefficients are directly obtained as
$$A_i = \left[H(s)(s - p_i)\right]_{s=p_i}.$$
Let’s try to understand this: H(s) is singular, i.e., becomes unbounded, at the pole pi .
Therefore by multiplying with the factor (s − pi ), one can ‘tame’ the singularity. Indeed, the
resulting function H(s)(s − pi ) is now well behaved in the neighborhood of pi . What remains
after this singularity is so removed, is appropriately called the ‘residue’, and is therefore
obtained as above. Now this only explains why Ai is called the residue, and not why it is
the correct coefficient in the partial fraction expansion. For this, substitute the exact partial
fraction expansion in the formula for the residue:
" n # n
X Aj X s − pi s − pi
[H(s)(s − pi )]s=pi = (s − pi ) = Aj + Ai = Ai .
j=1
s − p j
j6=i
s − p j s=p i
s − p i s=p i
s=pi
Now the reason why we want to take a partial fraction expansion is that it gives a simple additive decomposition for $H(s)$, and each term of the form $\frac{A_i}{s - p_i}$ is known to have $A_i e^{p_i t}$ as inverse Laplace transform. Additivity is preserved in the time domain, so that the corresponding impulse response is
$$h(t) = \sum_{i=1}^{n} A_i e^{p_i t}.$$
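The residues and poles are easy to obtain numerically as well. The following scipy sketch is an illustration for an arbitrarily chosen transfer function (not one from the text): it computes the $A_i$ and $p_i$ and rebuilds the impulse response as the sum of its modes.

```python
# Sketch (scipy): residues/poles of H(s) = (s+4)/((s+1)(s+2)(s+3)), an arbitrary example,
# and reconstruction of h(t) = sum_i A_i e^{p_i t}.
import numpy as np
from scipy import signal

b, a = [1, 4], [1, 6, 11, 6]             # numerator and denominator coefficients
A, p, _ = signal.residue(b, a)           # H(s) = sum_i A_i/(s - p_i)

t = np.linspace(0, 5, 400)
h_modes = sum(Ai*np.exp(pi*t) for Ai, pi in zip(A, p)).real   # sum of modes
_, h_ref = signal.impulse((b, a), T=t)                        # reference impulse response
print(np.max(np.abs(h_modes - h_ref)))   # agreement up to numerical error
```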
Likewise, the strictly proper rational part due to the initial conditions may also be expanded in partial fractions. This also contributes terms of the form
$$\sum_{i=1}^{n} \frac{C_i}{s - p_i},$$
and hence the same exponentials $e^{p_i t}$ appear in the response.
We refer to
$$a(s) = 0$$
as the characteristic equation. Each of the elementary terms is called a mode of the system. In summary then, the modes are determined by the poles and vice versa.
Our analysis is not quite complete, since we assumed that all roots of the characteristic equation were disjoint. If this is not the case, a strictly proper rational transfer function will have a partial fraction expansion involving powers of the polar factors, $1/(s - p_i)$. More precisely, if $p_i$ appears with multiplicity $m_i > 1$, then there will be terms of the form
$$\frac{1}{s - p_i},\quad \frac{1}{(s - p_i)^2},\quad \ldots,\quad \frac{1}{(s - p_i)^{m_i}}.$$
From the Laplace transform theory, we know that the corresponding time functions are
$$e^{p_i t},\quad \frac{t}{1!}\,e^{p_i t},\quad \ldots,\quad \frac{t^{m_i-1}}{(m_i - 1)!}\,e^{p_i t}.$$
Consider the system with characteristic polynomial a(s). Its roots are the poles of the
system, and determine for instance the partial fraction expansion of the transfer function.
Now a pole at p is associated with a mode ept of the system. Hence if the real part of p is
positive, this function will grow without bound, while if ℜp < 0, the function converges to
0. In the first case we have instability, in the second asymptotic stability. If this pole occurs
with higher multiplicity, m, we get to multiply the corresponding exponential by a suitable
polynomial of degree m − 1 in t to get the new modes. However, the dominant behavior
is still captured by the exponential factor. Instability or asymptotic stability are thus still
determined by the sign of the real part of the pole.
The situation is different for poles on the imaginary axis. Simple poles correspond with
pure oscillation (a pole at 0 corresponds with the limit case: frequency 0, i.e., “DC”). Mul-
tiple poles on the imaginary axis correspond to [polynomial × oscillation], which always
behaves in a divergent way. Consequently, simple poles on the imaginary axis correspond to
marginally stable modes, whereas poles of higher multiplicity on the imaginary axis corre-
spond to unstable modes.
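The classification of modes by pole location is easy to automate; the short numpy sketch below is an illustration only, for an arbitrarily chosen characteristic polynomial (not one from the text).

```python
# Illustration (numpy): classify the modes from the roots of a characteristic polynomial.
import numpy as np

a = [1, 2, 1, 2]                      # a(s) = s^3 + 2s^2 + s + 2 = (s+2)(s^2+1)
for p in np.roots(a):
    if p.real < -1e-9:
        verdict = "asymptotically stable mode"
    elif abs(p.real) <= 1e-9:
        verdict = "marginally stable mode (since this pole is simple)"
    else:
        verdict = "unstable mode"
    print(np.round(p, 4), verdict)
```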
Obviously, if one were to know the exact location of all the roots of the characteristic
polynomial of a system, stability or the lack of it could be decided. Now for polynomials of
second degree, the explicit formula for the roots is probably remembered by everyone. Similar formulas exist (Cardano) for polynomials of third and fourth degree: the roots are obtainable from the coefficients with the operations of addition, multiplication, and radicals (taking roots). Mathematicians sought for a long time to extend these formulas. Among them was Lagrange, who in 1770 unified the steps in solving equations for n ≤ 4, and showed that this unified method failed for the quintic. In 1824, Abel proved conclusively that the determination of the roots of a general quintic in terms of radicals of sums and products of the coefficients was impossible. It was finally shown by Évariste Galois that it is not possible to obtain such general formulas for the roots of an n-th order equation for n ≥ 5. He did this by making the crucial connection between group theory and polynomial equations. In fact
this work was the origin of what is now known as Galois theory, which has many applications
to other areas of mathematics.
All this does not mean that the roots do not exist! By the fundamental theorem of
algebra we know that every polynomial of degree n has exactly n roots (over the complex
field). Moreover, if the coefficients of the polynomial are all real, then these roots are either
real, or come in complex conjugate pairs. In addition, the roots are continuous functions of
the coefficients of the polynomial. This does not contradict Galois’s statement, which refers to a specific class of such functions (obtained by sums, products, and radicals only).
Fortunately, such detailed information as the exact root location is not necessary. In
order to investigate the stability, it suffices to know whether all roots of the characteristic
equation are inside the left half plane. Their precise location in there is immaterial.
The question is then, what information about the root location can one infer from knowl-
edge of the coefficients of a polynomial?
Although it is not possible to obtain general formulas for the roots of an n-th order equation for n ≥ 5 in terms of radicals, as shown by Galois, it is nevertheless possible to determine the number of roots in the right half plane. One forms a table, the Routh array, and checks the signs of certain numbers in the array. The number of sign changes corresponds to the number of unstable poles. This test was independently discovered by Routh and Hurwitz.
Hurwitz's criterion states that the polynomial $a(s) = a_0 s^n + a_1 s^{n-1} + \cdots + a_n$, with real coefficients and $a_0 > 0$, has all its roots in the open left half plane if
$$D_1 = a_1 > 0,\qquad
D_2 = \begin{vmatrix} a_1 & a_0 \\ a_3 & a_2 \end{vmatrix} > 0,\qquad
D_3 = \begin{vmatrix} a_1 & a_0 & 0 \\ a_3 & a_2 & a_1 \\ a_5 & a_4 & a_3 \end{vmatrix} > 0,\qquad \ldots$$
$$D_n = \begin{vmatrix}
a_1 & a_0 & & & & \\
a_3 & a_2 & a_1 & a_0 & & \\
a_5 & a_4 & a_3 & a_2 & a_1 & a_0 \\
\vdots & & & & & \vdots \\
a_{2n-1} & a_{2n-2} & \cdots & \cdots & \cdots & a_n
\end{vmatrix} > 0.$$
The coefficients ai = 0 for i > n. His proof uses a decomposition in continued fractions (See
E.A. Guillemin, The Mathematics of Circuit Analysis, Wiley 1949.)
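To make the determinant conditions concrete, here is a small illustrative routine (my own sketch, not from the notes) that builds the Hurwitz matrix with entries $a_{2i-j}$ and evaluates $D_1, \ldots, D_n$; the function name and the sample polynomial are arbitrary choices.

```python
# Sketch: Hurwitz determinants D_1, ..., D_n of a(s) = a_0 s^n + a_1 s^{n-1} + ... + a_n.
import numpy as np

def hurwitz_determinants(coeffs):
    """coeffs = [a_0, a_1, ..., a_n]; returns the list [D_1, ..., D_n]."""
    a = list(coeffs)
    n = len(a) - 1
    a += [0] * (2 * n)                       # convention: a_i = 0 for i > n
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = 2 * (i + 1) - (j + 1)        # 1-based rule H[i, j] = a_{2i - j}
            H[i, j] = a[k] if k >= 0 else 0.0
    return [np.linalg.det(H[:k, :k]) for k in range(1, n + 1)]

print(hurwitz_determinants([1, 6, 11, 6]))   # (s+1)(s+2)(s+3): all D_k > 0, hence stable
```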
Routh proved the criterion in a more practical form, using what is now known as the “Routh table”. The construction of the Routh table proceeds as follows: there are $n + 1$ rows, which we shall identify by $s^n, s^{n-1}, \ldots, s^1$ and finally $s^0$. Place the coefficients of the polynomial $a(s)$ alternatingly in the first two rows, as follows:
Routh Array
$$\begin{array}{c|cccc}
s^n & a_0 & a_2 & a_4 & \cdots \\
s^{n-1} & a_1 & a_3 & a_5 & \cdots
\end{array}$$
Examples:
$$\begin{array}{c|cc}
s^3 & a_0 & a_2 \\
s^2 & a_1 & a_3 \\
s^1 & \frac{a_1 a_2 - a_0 a_3}{a_1} & 0 \\
s^0 & a_3 &
\end{array}$$
Note that if $a_0$, $a_1$ and $a_3$ are positive, then one additional condition is required: $a_1 a_2 - a_0 a_3 > 0$.
As a numerical example, take $a(s) = 8s^4 + 2s^3 + 3s^2 + s + 5$:
$$\begin{array}{c|ccc}
s^4 & 8 & 3 & 5 \\
s^3 & 2 & 1 & \\
s^2 & \frac{6-8}{2} = -1 & \frac{10}{2} = 5 & \\
s^1 & \frac{-1-10}{-1} = 11 & 0 & \\
s^0 & \frac{55}{11} = 5 & &
\end{array}$$
The first column, $[8, 2, -1, 11, 5]'$, has two sign changes: two roots of this polynomial live in the right half plane.
It follows from the construction of the Routh array, that multiplication of any row by a
positive constant will not change the criterion. This can be useful to reduce large numbers.
Here is an example: let $a(s) = s^5 + 4s^4 + 11s^3 + 16s^2 + 5s + 8$. The table starts as
$$\begin{array}{c|ccc}
s^5 & 1 & 11 & 5 \\
s^4 & 4 & 16 & 8
\end{array}$$
At this point, reduce the second row to [1, 4, 2], and continue the array
$$\begin{array}{c|ccc}
s^4 & 1 & 4 & 2 \\
s^3 & 7 & 3 & \\
s^2 & \frac{25}{7} & 2 & \\
s^1 & -\frac{23}{25} & & \\
s^0 & 2 & &
\end{array}$$
Since there are two sign changes, this a(s) will have two roots in the right half plane.
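A reader who wants to automate the table can use a sketch along the following lines (the routine and its name are illustrative only, and it assumes neither of the special cases discussed next occurs); it reproduces the example just worked out.

```python
# Illustrative Routh-array routine (no zero pivots, no all-zero rows), using exact fractions.
from fractions import Fraction

def routh(coeffs):
    """coeffs = [a_0, a_1, ..., a_n] of a(s) = a_0 s^n + ... + a_n."""
    c = [Fraction(x) for x in coeffs]
    n = len(c) - 1
    rows = [c[0::2], c[1::2]]                 # the s^n and s^{n-1} rows
    for _ in range(n - 1):
        top, bot = rows[-2], rows[-1]
        width = max(len(top), len(bot) + 1)
        top = top + [Fraction(0)] * (width - len(top))
        bot = bot + [Fraction(0)] * (width - len(bot))
        rows.append([(bot[0]*top[i+1] - top[0]*bot[i+1]) / bot[0]
                     for i in range(width - 1)])
    return rows

table = routh([1, 4, 11, 16, 5, 8])           # a(s) = s^5 + 4s^4 + 11s^3 + 16s^2 + 5s + 8
for power, row in zip(range(5, -1, -1), table):
    print(f"s^{power}", [str(x) for x in row])

first = [row[0] for row in table]
print("sign changes:", sum((x > 0) != (y > 0) for x, y in zip(first, first[1:])))  # 2
```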
Two special cases may occur: a zero may appear in the first column. This is discussed in the next section. The other special case is the case when two consecutive rows are proportional. Take for instance $a(s) = s^7 + 4s^6 + 5s^5 + 5s^4 + 6s^3 + 9s^2 + 8s + 2$.
$$\begin{array}{c|cccc}
s^7 & 1 & 5 & 6 & 8 \\
s^6 & 4 & 5 & 9 & 2 \\
s^5 & \frac{15}{4} & \frac{15}{4} & \frac{30}{4} & \\
s^4 & 1 & 1 & 2 & \\
s^3 & 0 & 0 & &
\end{array}$$
We see that now we cannot generate the $s^2$ row since each term would be 0/0. This problem originates because the $s^5$ and $s^4$ rows are proportional. Now we need to say why we labeled the rows by the decreasing powers of $s$. These rows actually refer to polynomials. For instance, the $s^7$ row actually refers to the polynomial $s^7 + 5s^5 + 6s^3 + 8s$, and the $s^6$ row corresponds to the polynomial $4s^6 + 5s^4 + 9s^2 + 2$. These polynomials have alternating parity (odd or even). When it happens that two rows in the Routh array are proportional, the polynomial corresponding to the second proportional row (here $s^4 + s^2 + 2$) is a factor of the original polynomial. To proceed further, differentiate this fourth-degree polynomial to get the requisite third-degree polynomial. Thus we use, corresponding to $\frac{d}{ds}(s^4 + s^2 + 2) = 4s^3 + 2s$, the new $s^3$ row [4, 2]. Now continue the array to get
$$\begin{array}{c|cc}
s^3 & 4 & 2 \\
s^2 & \frac{1}{2} & 2 \\
s^1 & -14 & \\
s^0 & 2 &
\end{array}$$
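A quick symbolic check (sympy; an illustration, not part of the notes) confirms that the auxiliary polynomial built from the proportional rows is indeed an exact factor of $a(s)$ in this example.

```python
# Verify that s^4 + s^2 + 2 divides a(s) = s^7 + 4s^6 + 5s^5 + 5s^4 + 6s^3 + 9s^2 + 8s + 2.
import sympy as sp

s = sp.symbols('s')
a = s**7 + 4*s**6 + 5*s**5 + 5*s**4 + 6*s**3 + 9*s**2 + 8*s + 2
aux = s**4 + s**2 + 2

q, r = sp.div(a, aux, s)   # polynomial division: a = q*aux + r
print(q)                   # the cofactor
print(r)                   # 0: the auxiliary polynomial is an exact factor of a(s)
```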
4.6.3 Homotopy
The main (and quite intuitive) idea behind this is that the roots of a polynomial are contin-
uous functions of the coefficients of that polynomial. Hence small changes in the coefficients
will induce small changes in the roots. Consequently, if one of the entries in the first column
of the Routh array is zero, it means that by a small change in a coefficient, one can make
this entry nonzero. Typically by changing the coefficient in the other direction, the entry
will change sign. Suppose we had a sequence $(\cdots, +, 0, +, \cdots)$. If by a small increase (by $\varepsilon > 0$) in some coefficient, $c$, we can change this sign sequence to $(\cdots, +, +, +, \cdots)$, and by a small decrease in the same coefficient to $(\cdots, +, -, +, \cdots)$, then the number of sign changes in the two cases differs by two. Consequently, this means that two poles must cross from
the left half plane to the right half plane if the coefficient goes from $c + \varepsilon$ to $c - \varepsilon$ for some positive $\varepsilon$. Hence for $\varepsilon = 0$, the original polynomial, it must mean that two poles were on the imaginary axis. What conclusion should be drawn if we had originally a sign sequence $(\cdots, +, 0, -, \cdots)$? What if the zero occurred in the last entry?
Let’s use the homotopy: introduce a small perturbation in one of the coefficients, so that a zero no longer appears in the first column. We change the coefficient of $s^4$ to get $a_1(s) = s^6 + 3s^5 + 2.1s^4 + 6s^3 + 3s^2 + 6s + 3$.
Observe that we also ‘simplified’ the $s^5$, $s^4$ and $s^3$ rows.
$$\begin{array}{c|cccc}
s^6 & 1 & 2.1 & 3 & 3 \\
s^5 & 1 & 2 & 2 & \\
s^4 & 1 & 10 & 30 & \\
s^3 & -2 & -7 & & \\
s^2 & 13 & 60 & & \\
s^1 & \frac{29}{13} & & & \\
s^0 & 60 & & &
\end{array}$$
The perturbed polynomial a1 (s) has two roots in the right half plane.
4.7 Supplement
(This section may be skipped at a first reading). The utility of this Routh-Hurwitz criterion
goes much further than one may expect. For instance it is possible to adapt the criterion to
find the number of roots of a polynomial to the left or right of Re s = α, and hence also the
number of roots inside a vertical strip bounded by α < Re s < β. We will develop this in
several steps below
$$T_\alpha R_+ = \{\, s \mid \mathrm{Re}\, s > -\alpha \,\}.$$
Denote this half plane by $R_{(-\alpha)+}$.
Given a finite set $S$, the number of elements in it is called the cardinality of the set and this is denoted by card $S$.
Proof: $(T_\alpha S_1 \cap S_2) = T_\alpha (S_1 \cap T_{-\alpha} S_2)$, and the cardinality of a set does not change by translation of the set. ✷
Define $R_{\alpha+}$ as the domain in the complex plane to the right of $\alpha$: $R_{\alpha+} = \{\, s \mid \mathrm{Re}\, s > \alpha \,\}$. Let $R_{\alpha+}(p) = R(p) \cap R_{\alpha+}$ denote the set of roots of the polynomial $p$ in the open half plane to the right of $\alpha$, and for simplicity let us denote $R_{0+}(p) = R_+(p)$.
Note that the standard Routh-Hurwitz algorithm determines card $R_+(p)$.
Theorem: If $p$ is a polynomial and $\alpha \in \mathbb{R}$, then $R_{\alpha+}(p) = R_+(T_{-\alpha}\, p)$, and therefore its cardinality can be detected by the Routh-Hurwitz test on $T_{-\alpha}\, p$.
Proof:
$$\begin{aligned}
R_{\alpha+}(p) &= R(p) \cap R_{\alpha+} \\
&= R(p) \cap T_{-\alpha} R_+ \\
&= T_\alpha R(p) \cap R_+ \\
&= R(T_{-\alpha}\, p) \cap R_+ \\
&= R_+(T_{-\alpha}\, p). \quad\diamond
\end{aligned}$$
Examples:
1. Consider the polynomial $p(s) = s^2 - 3s + 2$. The number of roots with real part larger than $\alpha \in \mathbb{R}$ follows from the Routh-Hurwitz test on the polynomial $p(s+\alpha) = (s+\alpha)^2 - 3(s+\alpha) + 2 = s^2 + (2\alpha - 3)s + \alpha^2 - 3\alpha + 2$. We find that the number of sign changes in the sequence $(1,\ 2\alpha - 3,\ \alpha^2 - 3\alpha + 2)$ equals the requisite number $R_{\alpha+}(p)$, i.e.,
$$R_{\alpha+}(p) = \begin{cases} 2 & \text{if } \alpha < 1 \\ 1 & \text{if } \alpha \in (1, 2) \\ 0 & \text{if } \alpha > 2 \end{cases},$$
which is easily verified since $p$ has roots 1 and 2.
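As an independent numerical check of this example, one can simply compute the roots and count; the values of $\alpha$ below are arbitrary samples from each of the three regions.

```python
# Cross-check (numpy) of Example 1: count the roots of p(s) = s^2 - 3s + 2
# with real part larger than alpha.
import numpy as np

p = [1, -3, 2]                                 # roots at s = 1 and s = 2
for alpha in (0.5, 1.5, 2.5):
    count = sum(1 for r in np.roots(p) if r.real > alpha)
    print(alpha, count)                        # 2, 1, 0, matching the shifted Routh test
```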
2. For the polynomial $p(s) = s^3 + 2s^2 + as - 3$, determine $R_{1+}(p)$. Noting that $p(s+1) = (s+1)^3 + 2(s+1)^2 + a(s+1) - 3 = s^3 + 5s^2 + (a+7)s + a$, the Routh-Hurwitz test on the polynomial $q(s) = s^3 + 5s^2 + (a+7)s + a$ gives
$$\begin{array}{c|cc}
s^3 & 1 & a+7 \\
s^2 & 5 & a \\
s^1 & 4a + 35 & \\
s^0 & a &
\end{array}$$
$R_{1+}(p) = R_+(q)$ equals the number of sign changes of the sequence $(1, 5, 4a + 35, a)$, and thus
$$R_{1+}(p) = \begin{cases} 1 & \text{if } a < 0 \\ 0 & \text{if } a > 0 \end{cases}.$$
This must indicate that for $a = 0$, one root must cross the vertical through 1. Since complex roots must come as conjugate pairs, this indicates that for $a = 0$ the polynomial has a zero at $s = 1$. This is easily verified: $s^3 + 2s^2 - 3 = (s-1)(s^2 + 3s + 3)$.
If, for $\alpha \in \mathbb{R}$, we let $R_{\alpha-}$ denote the half plane to the left of $\alpha$, then
Theorem: $R_{\alpha-}(p) = R_{(-\alpha)+}(Rp)$, where $R$ is the parity or reversal operator.
Note that $R_{\alpha-}(p)$ is also $\deg p - R_{\alpha+}(p)$, provided that $p$ has no roots on the line $\mathrm{Re}\, s = \alpha$.
Corollary: The number of roots of $p$ in the strip $(\alpha, \beta)$ is found by computing $R_{\alpha+}(p) - R_{\beta+}(p)$, provided that $p$ has no roots on the lines $\mathrm{Re}\, s = \alpha$ and $\mathrm{Re}\, s = \beta$.
Example
Let’s revisit Example 2 above. How many roots does $p$ have in the vertical strip $0 < \mathrm{Re}\, s < 1$? The Routh-Hurwitz test for $p$ gives
$$\begin{array}{c|cc}
s^3 & 1 & a \\
s^2 & 2 & -3 \\
s^1 & 2a + 3 & \\
s^0 & -3 &
\end{array}$$
For all $a$ there is one sign change. Hence there is always a root in the RHP, $R_+(p) = 1$. From Example 2 above, $R_{1+}(p) = 1$ or 0, depending on the sign of $a$. Hence, for $a > 0$, there are no roots of $p$ in this strip, while for $a < 0$, there will be one root in the vertical strip. It must then be a real root.
Remark: The above criterion still holds if the system has arbitrary initial conditions, under the additional assumption that $b(\cdot)$ and $a(\cdot)$ have no common factors. For example, the system, given in destructive form by $D^2y - y = Du - u$, has a stable impulse response $h(t) = e^{-t}$, hence is BIBO stable. However, it is not stable with respect to initial conditions. The unstable mode happened to correspond to a common unstable factor $s - 1$ in numerator and denominator of $b(s)/a(s)$.
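The cancellation in this remark is easy to exhibit symbolically; the sympy sketch below is an illustration only.

```python
# Sketch (sympy): for D^2 y - y = Du - u the factor s - 1 is common to b(s) and a(s),
# so it is invisible in the transfer function even though the mode e^{t} remains.
import sympy as sp

s = sp.symbols('s')
b, a = s - 1, s**2 - 1
print(sp.cancel(b/a))             # 1/(s + 1): BIBO-stable impulse response e^{-t}
print(sp.roots(sp.Poly(a, s)))    # {1: 1, -1: 1}: the unstable root of a(s) is still a mode
```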
4.9 Sensitivity
Consider now a system with transfer function $H(s,\theta) = \frac{b(s,\theta)}{a(s,\theta)}$, where $\theta$ is some parameter of interest. The sensitivity function $S_\theta^H(s)$ is the relative sensitivity of $H(s)$ with respect to changes in $\theta$. First assume a nominal value for the parameter: $\theta = \theta_0$, resulting in $H_0(s) = H(s, \theta_0)$. Consider now the new parameter value $\theta = \theta_0 + \Delta\theta$, where $\Delta\theta$ is sufficiently small. The corresponding evaluation of the transfer function at $s$ is
$$H(s, \theta_0 + \Delta\theta) = H(s, \theta_0) + \frac{\partial H(s, \theta_0)}{\partial\theta}\,\Delta\theta + \text{h.o.t.}$$
Hence, taking limits,
$$\frac{\Delta H(s, \theta_0)}{\Delta\theta} \;\longrightarrow\; \frac{\partial H(s, \theta_0)}{\partial\theta}.$$
The relative change in the transfer function evaluated at $s$ is
$$S_\theta^H(s) = \frac{\dfrac{\Delta H(s,\theta_0)}{H(s,\theta_0)}}{\dfrac{\Delta\theta}{\theta_0}} = \frac{\partial H(s,\theta_0)}{\partial\theta}\,\frac{\theta_0}{H(s,\theta_0)}.$$
In general, this is complex valued. Oftentimes we are just interested in the magnitude of the above and define this as the sensitivity function.
Example: Consider a simple series RC circuit. If the applied voltage is the input, and the voltage across the capacitor the output, then the transfer function is
$$H(s) = \frac{1/Cs}{R + 1/Cs} = \frac{1}{1 + RCs}.$$
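As a sketch of how the definition above is applied, the following sympy computation (an illustration, not part of the notes) differentiates the RC transfer function with respect to R and forms the sensitivity $S_R^H(s)$.

```python
# Symbolic sketch (sympy): sensitivity of H(s) = 1/(1 + RCs) with respect to R,
# using S_theta^H = (dH/d theta) * (theta/H) from the definition above.
import sympy as sp

s, R, C = sp.symbols('s R C', positive=True)
H = 1 / (1 + R*C*s)

S_R = sp.simplify(sp.diff(H, R) * R / H)
print(S_R)   # -C*R*s/(C*R*s + 1): small at low frequencies, tends to -1 at high frequencies
```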