
Systems Dynamics

Ricard Villà

translated and maintained by


Manel Velasco
Index
1 Time Response 5
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Impulse Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.1 Impulse response of first-order systems . . . . . . . . . . . . . 7
1.2.2 Impulse response of second-order systems . . . . . . . . . . . . 8
1.4 Step Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.4.1 Step Response of First-Order Systems . . . . . . . . . . . . . 11
1.4.2 Step Response of Second-Order Systems . . . . . . . . . . . . 12
1.5 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.6 Routh’s Stability Criterion . . . . . . . . . . . . . . . . . . . . . . . . 21
1.7 Steady-State Error and System Type . . . . . . . . . . . . . . . . . . 31
1.8 Sensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.9 PID Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

We will now study the ways to control systems, that is, to make them behave
as we want. Generally, this will mean making them stable (we will see what this
means later), and ensuring they meet other performance criteria.
We will do this through three different methods; each has its advantages and
disadvantages. For each method, it will be necessary to know how to analyze the
system (see how it behaves), and to design (synthesize) the controller that makes it
behave as we wish.
The three methods are the time response method, the frequency response method, and the state space method.

1 Time Response
1.1 Introduction
Let’s see what the response y(t) of an arbitrary system G(s) is to an arbitrary input,
x(t):

Let X(s) be the transform of x(t). If G(s) has the poles p1, p2, ..., pn, and X(s) has the poles q1, q2, ..., ql, then it will be

$$Y(s) = G(s)X(s) = \underbrace{\frac{A_1}{s-p_1} + \cdots + \frac{A_n}{s-p_n}}_{\text{terms due to the poles of }G(s)} + \underbrace{\frac{B_1}{s-q_1} + \cdots + \frac{B_l}{s-q_l}}_{\text{terms due to the poles of }X(s)}$$

from which

$$y(t) = \underbrace{A_1e^{p_1 t} + \cdots + A_ne^{p_n t}}_{\text{terms due to the poles of }G(s)} + \underbrace{B_1e^{q_1 t} + \cdots + B_le^{q_l t}}_{\text{terms due to the poles of }X(s)}$$

Here, as is customary when there is no doubt, we have omitted indicating that the entire right-hand side is multiplied by u(t).
Thus, we see that the behavior of the system has two parts, one due to the poles
of the system and another due to the poles of the transform of the input. From the
part due to the system, the input only affects the coefficients Ai; the poles pi are not affected and, consequently, the exponentials will always be the same for any type of input; only the "quantity" (Ai) of each exponential will vary.
This makes it interesting to study the inherent response of the system, indepen-
dent of the input. To achieve this, we need an input with a transform having the
smallest possible number of poles. The obvious candidate is a Dirac impulse, δ(t),
since L[δ(t)] = 1, which has no poles. However, δ(t) is only useful for theoretical
studies as it does not physically exist. For practical studies, we need an input that
can at least be approximated experimentally. The unit step, u(t), is used, which has a transform, 1/s, with a single pole at s = 0. Therefore, we will study the response to these two types of signals.
Now let’s look at another important property of linear systems that will facilitate
their study.
In the inherent output of the system,

$$y(t) = A_1e^{p_1 t} + \cdots + A_ne^{p_n t}$$

we can number the poles pi as we like. Let’s number them so that p1 is the one
furthest to the right in the complex plane, the one with the greatest real part, and
so on correlatively to the left until pn , which will be the furthest to the left, the one
with the smallest (most negative) real part.
Let's see that, after enough time, the term corresponding to the rightmost pole (which we have called p1) predominates over all the others. Indeed, since Re(p1 − p2) > 0, it will be

$$\lim_{t\to\infty}\frac{A_1e^{p_1 t}}{A_2e^{p_2 t}} = \lim_{t\to\infty}\frac{A_1}{A_2}\,e^{(p_1-p_2)t} = \infty$$

The same happens with p1 and each of the other poles, since they are to its left, making the exponent positive. This means that, after a certain time, the output is essentially A1 e^{p1 t}, that is, the system essentially behaves like a first-order system, $\frac{A_1}{s-p_1}$. We will say that the pole p1 is the dominant pole of the system. Note that for negative poles (which we will see are the ones we will use), the dominant pole is the one with the slowest evolution; we say it is a slow pole. The poles further to the left evolve (go to zero) more quickly; we call them fast poles. The following figure indicates the directions in which speed (R) increases and in which dominance (D) increases.

We have implicitly assumed that the rightmost pole was real. Complex poles
always come in pairs of complex conjugates, which have the same real part. If at the
extreme right there is not a single pole, but a pair of complex conjugate poles (p1
and p∗1 ), the reasoning is similar, but the system will ultimately behave, essentially,
as a second-order system with that pair of poles. In this case, we will say that the
system has a pair of dominant poles.
It should be noted that although the rightmost pole always ends up dominating, it will take more or less time depending on how far it is from the next one. There are authors who, in stable systems (which, as we will see, have Re(pi) < 0), define p1 as dominant only if Re(p2)/Re(p1) > 5.

In summary: The response of linear systems has a part due to the input signal and
a part due to the system. While there are infinitely many different linear systems,
in all of them, the part of the response due to the system, after enough time has
passed, is essentially of first or second order. Therefore, it will be useful to study
the inherent response of first and second-order systems.
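The dominant-pole idea above can be checked numerically; a minimal sketch, assuming a stable third-order system with made-up pole locations:

```python
import numpy as np

# Assumed pole locations for a hypothetical stable third-order system
poles = np.array([-1.0, -5.0, -20.0])

# The dominant pole is the rightmost one: greatest (least negative) real part
dominant = poles[np.argmax(poles.real)]

# Rule of thumb quoted in the text: p1 is dominant only if Re(p2)/Re(p1) > 5
ordered = sorted(poles.real, reverse=True)
dominance_ratio = ordered[1] / ordered[0]

print(dominant, dominance_ratio)  # -1.0 5.0
```

Here the exponential e^{−t} outlives e^{−5t} and e^{−20t}, so after a few seconds the response is essentially first order.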

1.2 Impulse Response


The output of a system when the input is a Dirac impulse, x(t) = δ(t), is called
impulse response and is usually represented by h(t). As we will see, it completely
characterizes the system and, as we have just seen, is quite revealing of the system’s
behavior.

We have
x(t) = δ(t) ⇐⇒ X(s) = 1
from where
Y (s) = G(s) · X(s) = G(s)
and
h(t) = L−1 [Y (s)] = L−1 [G(s)]
That is, the impulse response of a transfer function coincides with the inverse transform of the same transfer function. Since the Laplace transform is bijective, it is clear that the impulse response h(t) perfectly determines the transfer function G(s) and, consequently, the system.
Let’s see what the impulse response of first and second order systems looks like.

1.2.1 Impulse response of first-order systems


Let

$$G(s) = \frac{1}{s-s_1}$$

where s1 is the only pole of G(s) and, consequently, real. It will be

$$h(t) = L^{-1}[G(s)] = e^{s_1 t}u(t)$$
which looks like

1.2.2 Impulse response of second-order systems
Let

$$G(s) = \frac{1}{(s-s_1)(s-s_2)} = \frac{A}{s-s_1} + \frac{B}{s-s_2}$$

Multiplying by (s − s1) and setting s = s1 gives A = 1/(s1 − s2); multiplying by (s − s2) and setting s = s2 gives B = 1/(s2 − s1), so we get

$$G(s) = \frac{1}{s_1-s_2}\left(\frac{1}{s-s_1} - \frac{1}{s-s_2}\right)$$
– First case: the poles s1 and s2 of G(s) are real and different. The system is equivalent to two first-order systems in series. Performing the inverse transform of G(s) results in

$$h(t) = \frac{1}{s_1-s_2}\left(e^{s_1 t} - e^{s_2 t}\right)u(t)$$

which, if we choose the subscripts such that s1 > s2, looks like

– Second case: the poles of G(s) are a real double pole, s1. The system is still equivalent to two first-order systems in series. It will be

$$G(s) = \frac{1}{(s-s_1)^2}$$

from which

$$h(t) = t\,e^{s_1 t}u(t)$$

which looks like

– Third case: the poles s1 and s2 of G(s) are complex conjugates. Let's call them

$$s_1 = \sigma + j\omega, \qquad s_2 = \sigma - j\omega$$

and from here

$$h(t) = \frac{1}{s_1-s_2}\left(e^{s_1 t} - e^{s_2 t}\right)u(t) = \frac{1}{2j\omega}\left(e^{(\sigma+j\omega)t} - e^{(\sigma-j\omega)t}\right)u(t) = \frac{1}{2j\omega}\,e^{\sigma t}\left(e^{j\omega t} - e^{-j\omega t}\right)u(t) = \frac{1}{\omega}\,e^{\sigma t}\sin(\omega t)\,u(t)$$

which graphically is

We see, then, that the impulse response is largely determined by the poles of the transfer function. Depending on where the poles are in the complex plane, the impulse response will be growing or decaying, oscillatory or not.
All of this can be summarized in the following graph, where the three first-order cases and the three second-order cases are shown.

Note that if Re(s) > 0, the response is increasing, and if Re(s) < 0, it is decreasing. If s is real, the response is an exponential; if s is complex, the response is oscillatory.
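The three second-order impulse responses derived above can be evaluated directly; a sketch with assumed pole values:

```python
import numpy as np

t = np.linspace(0, 10, 1001)

# First case: distinct real poles s1 > s2 (assumed values)
s1, s2 = -1.0, -3.0
h1 = (np.exp(s1 * t) - np.exp(s2 * t)) / (s1 - s2)

# Second case: double real pole s1
h2 = t * np.exp(s1 * t)

# Third case: complex conjugate poles sigma +/- j*omega (assumed values)
sigma, omega = -0.5, 2.0
h3 = np.exp(sigma * t) * np.sin(omega * t) / omega

# All three decay toward zero because every pole has a negative real part
print(abs(h1[-1]) < 1e-3, abs(h2[-1]) < 1e-3, abs(h3[-1]) < 1e-2)
# True True True
```

Flipping the sign of any real part makes the corresponding response grow instead of decay, as the summary graph indicates.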

1.4 Step Response
The impulse response is quite useful for studying systems from a theoretical perspective. However, it has the drawback that Dirac impulses do not exist physically
(they require infinite power), and their approximations (very short and very tall
impulses) are destructive. Therefore, to be able to study systems experimentally,
the step input is used. Even though it also cannot exist physically (we can never
have an instantaneous change in value), it can be satisfactorily approximated (very
short transition times).
The response to a step input provides significant insights into how a system
behaves. So much so that the output for a step input is called the step response.
It is worthwhile to pre-calculate, based on parameters, the response of first and
second-order systems to a step input.
Another advantage of having this parameterized response is that it allows us to
solve the specification problem. When designing control systems, we need to specify
the characteristics we want the controlled system to have. However, both the plant
to be controlled and the control system are represented by transfer functions, which
are essentially polynomials in s. The coefficients of these polynomials typically
do not have a direct and simple physical meaning and often refer to the physical constitution of the system rather than its behavior. In contrast, as we will see later
(page 16), specifying certain characteristics of the step response will have a clear
dynamic interpretation. The design problem will then reduce to determining the
coefficients of the transfer function polynomials that make the step response exhibit
the specified characteristics.

1.4.1 Step Response of First-Order Systems

We will consider systems of the form

$$G(s) = \frac{1}{\tau s + 1}$$
If the numerator is not unity, $G(s) = \frac{c}{\tau s + 1}$, the output, due to linearity, will be multiplied by c, but nothing else will change.
The constant τ is called the time constant; it has dimensions of time and is measured in seconds. As we will see, it plays a very important role in the system's behavior (note that it corresponds to a pole $p = -\frac{1}{\tau}$).
The input will be

$$x(t) = u(t) \;\Rightarrow\; X(s) = \frac{1}{s}$$
and, consequently,

$$Y(s) = X(s)\cdot G(s) = \frac{1}{s}\cdot\frac{1}{\tau s+1} = \frac{A}{s} + \frac{B}{\tau s+1}$$

Multiplying by s and setting s = 0 gives A = 1; multiplying by (τs + 1) and setting s = −1/τ gives B = −τ, so

$$Y(s) = \frac{1}{s} - \frac{\tau}{\tau s+1} = \frac{1}{s} - \frac{1}{s+\frac{1}{\tau}}$$

from which

$$y(t) = u(t) - e^{-t/\tau}u(t) = \left(1 - e^{-t/\tau}\right)u(t)$$

which looks like

If we look at the slope at t = 0, we have $y'\big|_{t=0} = \frac{1}{\tau}e^{-t/\tau}\big|_{t=0} = \frac{1}{\tau}$, and if we look at the values of y(t) for t = τ, t = 3τ, and t = 4τ, since $e^{-1} \simeq 0.37$, $e^{-3} \simeq 0.05$, and $e^{-4} \simeq 0.02$, we get the following graph:

That is, after a time equal to τ , the system has reached 63% of its final value,
after 3τ it’s 95%, and after 4τ it’s 98%.
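These landmark values follow directly from the formula; a quick numerical check, with an assumed time constant:

```python
import numpy as np

tau = 2.0  # assumed time constant, in seconds

def step_response(t):
    """Unit-step response of G(s) = 1/(tau*s + 1), for t >= 0."""
    return 1 - np.exp(-t / tau)

# Fraction of the final value reached after tau, 3*tau and 4*tau
print(round(step_response(tau), 2),
      round(step_response(3 * tau), 2),
      round(step_response(4 * tau), 2))
# 0.63 0.95 0.98
```

The percentages do not depend on the particular τ chosen; τ only sets the time scale.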

1.4.2 Step Response of Second-Order Systems


It will be convenient to reduce them to the form

$$G(s) = \frac{k\omega_n^2}{s^2 + 2\xi\omega_n s + \omega_n^2}$$

with ωn positive. Note that the canonical gain is k; in fact,

$$\frac{k\omega_n^2}{s^2 + 2\xi\omega_n s + \omega_n^2} = \frac{k}{\frac{s^2}{\omega_n^2} + \frac{2\xi s}{\omega_n} + 1}$$

In the following, we will assume k = 1:

$$G(s) = \frac{\omega_n^2}{s^2 + 2\xi\omega_n s + \omega_n^2}$$
If k is different, the output will be multiplied by k.
ωn is called the natural frequency of the system and is measured in rad/s, ξ is
dimensionless and is called the damping factor or damping ratio.
The input will be

$$x(t) = u(t) \;\Rightarrow\; X(s) = \frac{1}{s}$$
and, consequently,

$$Y(s) = \frac{\omega_n^2}{s\left(s^2 + 2\xi\omega_n s + \omega_n^2\right)}$$

To take the inverse transform, we need to find the roots of the denominator. These are s = 0 and

$$s = -\xi\omega_n \pm \sqrt{\xi^2\omega_n^2 - \omega_n^2} = \omega_n\left(-\xi \pm \sqrt{\xi^2 - 1}\right)$$

The nature of the roots will depend on ξ:

ξ > 1       2 negative real roots → overdamping
ξ = 1       1 negative double real root → critical damping
0 < ξ < 1   2 complex conjugate roots with negative real part → underdamping
ξ = 0       2 purely imaginary roots (zero real part)
ξ < 0       roots with positive real part, which means the system is unstable

We will study the different cases separately.


a) ξ > 1, 2 negative real roots, overdamping.

Since the roots are real, the system can be written as

$$G(s) = \frac{\omega_n^2}{(s-s_1)(s-s_2)} = \frac{\omega_n^2}{s-s_1}\cdot\frac{1}{s-s_2}$$

which is equivalent to two cascaded first-order systems. The step response will be

$$Y(s) = \frac{\omega_n^2}{s(s-s_1)(s-s_2)} = \frac{A}{s} + \frac{B}{s-s_1} + \frac{C}{s-s_2}$$

where it is easy to see that A = 1, and we obtain

$$y(t) = \left(1 + Be^{s_1 t} + Ce^{s_2 t}\right)u(t)$$
which looks like

b) ξ = 1, 1 negative double real root, critical damping.


It is also equivalent to two cascaded first-order systems. The step response will be

$$Y(s) = \frac{\omega_n^2}{s\left(s^2 + 2\omega_n s + \omega_n^2\right)} = \frac{\omega_n^2}{s(s+\omega_n)^2} = \frac{A}{s} + \frac{B}{(s+\omega_n)^2} + \frac{C}{s+\omega_n}$$

Multiplying by s and setting s = 0 gives A = 1; multiplying by (s + ωn)² and setting s = −ωn gives B = −ωn; differentiating with respect to s and then setting s = −ωn gives C = −1, so

$$Y(s) = \frac{1}{s} - \frac{\omega_n}{(s+\omega_n)^2} - \frac{1}{s+\omega_n}$$

from which

$$y(t) = u(t) - \omega_n t\,e^{-\omega_n t}u(t) - e^{-\omega_n t}u(t) = \left[1 - (\omega_n t + 1)\,e^{-\omega_n t}\right]u(t)$$

Because the factor (ωn t + 1) grows with t, the exponential term approaches zero more slowly than in a first-order system, and the response looks like

c) 0 < ξ < 1, 2 complex conjugate roots with negative real part, underdamping.
This is the paradigmatic case of a second-order system, as it cannot be constructed from two (real) first-order systems.

$$Y(s) = \frac{\omega_n^2}{s\left(s^2 + 2\xi\omega_n s + \omega_n^2\right)} = \frac{\omega_n^2}{s(s-s_1)(s-s_2)} = \frac{B}{s} + \frac{C}{s-s_1} + \frac{D}{s-s_2}$$

with

$$s_1 = \omega_n\left(-\xi + j\sqrt{1-\xi^2}\right), \qquad s_2 = \omega_n\left(-\xi - j\sqrt{1-\xi^2}\right)$$

which, when converted to polar form, become

$$s_1 = \omega_n e^{j\alpha}, \qquad s_2 = \omega_n e^{-j\alpha}$$

with

$$\sin\alpha = \sqrt{1-\xi^2}, \qquad \cos\alpha = -\xi$$

Finding B, C, and D and taking their inverse transform, one laboriously arrives at the expression for the system's output

$$y(t) = \left[1 + \frac{e^{-\xi\omega_n t}}{\sqrt{1-\xi^2}}\sin\!\left(\omega_n\sqrt{1-\xi^2}\,t - \alpha\right)\right]u(t)$$
If the transfer function had a canonical gain K1 instead of 1,

$$G(s) = \frac{K_1\omega_n^2}{s^2 + 2\xi\omega_n s + \omega_n^2}$$

and the input were x(t) = K2 u(t), then, calling A = K1 K2, the output would be multiplied by A:

$$y(t) = A\left[1 + \frac{e^{-\xi\omega_n t}}{\sqrt{1-\xi^2}}\sin\!\left(\omega_n\sqrt{1-\xi^2}\,t - \alpha\right)\right]u(t)$$

which looks like

All the values mentioned in the figure, which we will now discuss, are deduced
from the previous equation. Note that the final value of the output is A = K1 K2 .
An important parameter is the maximum overshoot (S), which is the amount by which the output exceeds its final value. In absolute terms, it is given by

$$S_a = A\,e^{\frac{-\xi\pi}{\sqrt{1-\xi^2}}}$$

(in units of the output). When referred to the final value, Sa/A, we have the relative overshoot

$$S_r = e^{\frac{-\xi\pi}{\sqrt{1-\xi^2}}}$$

which is the absolute value Sa measured as a fraction of the increment to the final value. Therefore, it is a dimensionless number. It has the advantage of not depending on A, meaning it does not depend on the canonical gain of the transfer function or the height of the step input. It depends solely on ξ.
It is common to express it as a percentage

$$S_P = e^{\frac{-\xi\pi}{\sqrt{1-\xi^2}}} \cdot 100$$

where SP stands for percent overshoot.
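Since the relative overshoot depends only on ξ, it is easy to tabulate; a small sketch (the ξ values are arbitrary):

```python
import numpy as np

def relative_overshoot(xi):
    """S_r = exp(-xi*pi / sqrt(1 - xi^2)), valid for 0 < xi < 1."""
    return np.exp(-xi * np.pi / np.sqrt(1 - xi**2))

# The overshoot depends only on xi, not on the gain or the step height
for xi in (0.2, 0.5, 0.7):
    print(xi, round(100 * relative_overshoot(xi), 1))  # percent overshoot SP
```

The more damping, the smaller the overshoot: it falls from roughly half the final increment at ξ = 0.2 to a few percent at ξ = 0.7.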


The time it takes to reach this first maximum is called the peak time, tp. The (constant!) frequency of oscillation is called the damped natural frequency or simply natural frequency, ωd (the 'd' stands for 'damped'). Note that $\omega_d = \omega_n\sqrt{1-\xi^2}$ is the imaginary part of the poles. The corresponding period is simply the oscillation period, T. The two enveloping exponentials

$$y = A\left(1 \pm \frac{e^{-\xi\omega_n t}}{\sqrt{1-\xi^2}}\right)$$

are used to estimate the time required to ensure that the output will no longer exit a certain band around the final value (e.g., ±2% of the final value). This time is called the settling time, ts ('s' stands for 'settling'). It is always necessary to specify the percentage with respect to the final value. Note that the time constant of these exponentials is $\frac{1}{\xi\omega_n}$, which plays the same role as τ in first-order systems, indicating the speed of response of the system.

Note that if the amplitude (height) of the input varies, since the system is linear,
the amplitude of the output varies proportionally, but not the times, frequencies, or
maximum relative overshoot.

These measures of the step response (peak time, maximum overshoot, settling time) are applicable to any system (second-order systems with a zero in the numerator, higher-order systems). However, the analytical expressions we have provided are only valid for systems in the canonical form we have studied. For more complicated systems, these measures will need to be calculated from the response obtained in each case using the Laplace transform.

Let’s take a look at the importance of these measures for specifying the behavior
of the system. Imagine a machine tool, such as a lathe. There is a mechanism,
assumed to be second-order, that positions the cutting tool that is going to machine
the workpiece. The tool must contact the workpiece smoothly. It is understood that
this positioning mechanism should not exhibit any overshoot, as it would damage
either the tool or the workpiece. On the other hand, productivity requires rapid
positioning (ts minimized) with good accuracy. One possible solution is to use a
ramp input instead of a step input. Since the ramp is the integral of the step, the
output will be the integral of the step response. Since the step response does not
change sign, its integral will be monotonically increasing, i.e., without any overshoot
of any kind.

However, the problem with this positioning is that it would be too slow. A
potentially better solution is to first input a smaller step than required, so that
the output, including the overshoot, does not exceed the desired final value. Then,
continue with a ramp input until reaching the final value.

d) ξ = 0, 2 imaginary roots.
Substituting ξ = 0 into the expression for y(t) from the previous case, we have
$$y(t) = \left[1 + \sin\!\left(\omega_n t - \frac{\pi}{2}\right)\right]u(t) = \left(1 - \cos\omega_n t\right)u(t)$$
which has the appearance

Unless we intend to build an oscillator, it is clear that this unbounded oscillation
is undesirable.
Example 1
Given the system

$$G(s) = \frac{3}{4s^2 + 2.4s + 1}$$
we input a signal x(t) = 5u(t). Find the maximum overshoot of the output, the
time it takes to reach it, and the time it takes to stay within ±2% of the final value
increment.
We need to find ξ and ωn. To identify them in the form

$$\frac{k\omega_n^2}{s^2 + 2\xi\omega_n s + \omega_n^2}$$

used in the derivations, we need to manipulate G(s) as follows:

$$G(s) = \frac{3}{4s^2 + 2.4s + 1} = \frac{3\cdot\frac{1}{4}}{s^2 + 0.6s + \frac{1}{4}}$$

$$s^2 + 0.6s + \frac{1}{4} \equiv s^2 + 2\xi\omega_n s + \omega_n^2$$

This gives us k = 3 and ωn² = 1/4, so

$$\omega_n = \sqrt{\tfrac{1}{4}} = 0.5\ \text{rad/s}$$

We also have 2ξωn = 0.6, which leads to

$$\xi = \frac{0.6}{2\omega_n} = \frac{0.6}{2\cdot 0.5} = 0.6$$
The maximum overshoot in relative terms, i.e., as a fraction between 0 and 1 of the final increment, is

$$S_r = e^{\frac{-\xi\pi}{\sqrt{1-\xi^2}}} = e^{\frac{-0.6\pi}{\sqrt{1-0.6^2}}} \simeq 0.095$$

which can also be expressed as a percentage: SP = 9.5% of the final increment. Since the canonical gain is 3, the final increment of the output will be 3 times the input increment, i.e., 3 · 5 = 15 units. This means that the maximum overshoot, in absolute terms, will be Sa = 0.095 · 15 = 1.42 units of the output. The maximum output value will therefore be 15 + 1.42 = 16.42 units. To reach this maximum, it will take

$$t_p = \frac{\pi}{\omega_n\sqrt{1-\xi^2}} = \frac{\pi}{0.5\sqrt{1-0.6^2}} \simeq 7.85\ \text{s}$$
The time it takes to stay within ±2% of the final increment is typically obtained from the envelopes

$$1 \pm \frac{e^{-\xi\omega_n t}}{\sqrt{1-\xi^2}}$$

i.e.,

$$1 - \frac{e^{-0.6\cdot 0.5\,t}}{\sqrt{1-0.6^2}} = 1 - 0.02$$

which is usually formulated directly as

$$\frac{e^{-0.6\cdot 0.5\,t}}{\sqrt{1-0.6^2}} = 0.02$$

resulting in

$$t_s = \frac{-\ln\!\left(0.02\sqrt{1-0.6^2}\right)}{0.6\cdot 0.5} \simeq 13.8\ \text{s}$$
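The numbers in this example can be reproduced mechanically from the formulas derived earlier:

```python
import numpy as np

# G(s) = 3 / (4 s^2 + 2.4 s + 1), input x(t) = 5 u(t), as in the example.
# Dividing through by 4: G(s) = 3*(1/4) / (s^2 + 0.6 s + 1/4)
wn = np.sqrt(1 / 4)       # natural frequency: 0.5 rad/s
xi = 0.6 / (2 * wn)       # damping ratio: 0.6
k, step = 3.0, 5.0        # canonical gain and step height

Sr = np.exp(-xi * np.pi / np.sqrt(1 - xi**2))        # relative overshoot
Sa = Sr * k * step                                   # absolute overshoot
tp = np.pi / (wn * np.sqrt(1 - xi**2))               # peak time
ts = -np.log(0.02 * np.sqrt(1 - xi**2)) / (xi * wn)  # +/-2% settling time

print(round(Sr, 3), round(Sa, 2), round(tp, 2), round(ts, 1))
# 0.095 1.42 7.85 13.8
```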
Example 2
The damping factor of a second-order system is ξ = 0.14. What is the maximum
overshoot in relative terms? If a step input is applied to the system, and the final
increment of the output is −3 units, what is the maximum overshoot in absolute
terms? What is the most negative value the output reaches when starting from zero?
And when starting from 2?
We have

$$S_r = e^{\frac{-\xi\pi}{\sqrt{1-\xi^2}}} = e^{\frac{-0.14\pi}{\sqrt{1-0.14^2}}} \simeq 0.64$$
meaning that the maximum overshoot is SP = 64% of the final increment.
In absolute terms, it will be

Sa = −3 · 0.64 = −1.92 units

i.e., the signal will go 1.92 units below the final value. Therefore, the most negative
value of the output will be −3 − 1.92 = −4.92 units.
If it starts from 2, the most negative value will be −4.92 + 2 = −2.92.
Example 3
In the system from the previous Example 2, the peak time is measured and found to be 4 s. If the input were a step such that the final increment of the output is 2 units, how long would it take for the output to reach the maximum overshoot? What is the value of ωn?
The system is the same, and $t_p = \frac{\pi}{\omega_n\sqrt{1-\xi^2}}$ will not change; it will still be 4 s. From this last expression, we can find

$$\omega_n = \frac{\pi}{t_p\sqrt{1-\xi^2}} = \frac{\pi}{4\sqrt{1-0.14^2}} \simeq 0.79\ \text{rad/s}$$
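Examples 2 and 3 can likewise be verified with a few lines:

```python
import numpy as np

xi = 0.14                                        # from Example 2
Sr = np.exp(-xi * np.pi / np.sqrt(1 - xi**2))    # relative overshoot, ~0.64

final_increment = -3.0
Sa = final_increment * Sr
print(round(Sa, 2))                        # -1.92 units
print(round(final_increment + Sa, 2))      # -4.92: most negative value from zero
print(round(final_increment + Sa + 2, 2))  # -2.92: starting from 2

tp = 4.0                                   # measured peak time (Example 3)
wn = np.pi / (tp * np.sqrt(1 - xi**2))
print(round(wn, 2))                        # 0.79 rad/s
```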

1.5 Stability
For a system to function correctly, it must be stable. In practical terms, this means
that the output does not spontaneously go to infinity (although it will certainly
saturate long before that) and that it does not start oscillating.
These intuitive concepts need to be formalized in order to design the control
system appropriately, which, besides achieving other objectives, prevents instability.

Remember that in general (whether linear or nonlinear), we represent the system
with a differential equation (if the system is linear, we can also represent it with
transfer functions).
The system’s output is a solution, y = y(t), of this differential equation, which
passes through the point defined by the initial conditions, that is, the starting point.
Mathematically, a solution to a differential equation corresponding to given initial conditions is said to be stable if the solutions corresponding to nearby initial conditions do not deviate significantly from the given solution. Obviously, in mathematical language, this is stated in a much more rigorous way¹.
A nonlinear system can have both stable and unstable solutions; in general, we
do not speak of system stability but rather the stability of each solution.
It is proven that for time-invariant linear systems, if one solution is stable, then
all solutions are stable. Therefore, in time-invariant linear systems (which are the
ones we are dealing with), we can indeed talk about system stability.
It is also proven that a time-invariant linear system is stable if, for bounded
inputs, the output remains bounded. This is referred to as BIBO stability (bounded
input bounded output) in English. This means, in particular, that a linear system
will be stable if its impulse response h(t) tends to zero as time goes to infinity.
So, focusing on time-invariant linear systems, let’s see what the transfer function
must be for the system to be stable.
We impose that the impulse response tends to zero. For an arbitrary system
(with no repeated poles), we have

$$G(s) = \frac{N(s)}{(s-s_1)(s-s_2)\cdots(s-s_n)} = \frac{A_1}{s-s_1} + \frac{A_2}{s-s_2} + \cdots + \frac{A_n}{s-s_n}$$

and

$$h(t) = L^{-1}[G(s)] = A_1e^{s_1 t} + A_2e^{s_2 t} + \cdots + A_ne^{s_n t}$$
which tends to zero only if all exponentials tend to zero. Let’s focus on one of them.
If we express the real and imaginary parts of the pole s as follows,

s = σ + jω

the exponentials take the form

$$e^{st} = e^{(\sigma+j\omega)t} = e^{\sigma t}\cdot e^{j\omega t}$$

which tends to zero if the modulus, eσt , tends to zero. This occurs when σ (the real
part of the pole s) is negative.
If G(s) has repeated poles, h(t) will contain terms of the form $t^p e^{st}$, which also tend to zero only if the real part of s is negative.
In other words, for the system to be stable, all poles of the transfer function
(roots of the denominator) must have a negative real part.
¹Formally: The solution y₀(t) of a differential equation is considered stable if, given ε > 0, there exists a δ(ε, t₀) > 0 such that all solutions y(t) with |y(t₀) − y₀(t₀)| < δ satisfy |y(t) − y₀(t)| < ε for all t ≥ t₀. The solution y₀(t) is asymptotically stable if it is stable and $\lim_{t\to\infty}|y(t) - y_0(t)| = 0$ for |y(t₀) − y₀(t₀)| < η, η > 0.

Marginal stability is discussed when the impulse response does not tend to zero
or infinity but remains bounded. This is the case for pure imaginary poles, which
generate oscillations that neither tend to zero nor infinity, remaining constant (see
figure on page 10). However, this is a case of mathematical interest only. In engineering, we must consider measurement errors in the model parameters and their drift, which can cause σ to deviate from the nominal value. (It is evident that σ = 0 is unacceptable for safe operation; it can become positive at any moment. Therefore, it is not enough for σ to be negative; it must be sufficiently negative so that such variations cannot make it positive.)
Regarding multiple pure imaginary poles (physically even more improbable), they generate terms of the form $t^p\sin(\omega t + \varphi)$ which, as they tend to infinity, are unstable.
A special case of marginal stability that does occur in practice is that of integra-
tors. Note that an integrator is marginally stable (its impulse response is a step),
and two integrators are unstable (their impulse response is a ramp). Integrators
appear in the model of a physical system when we consider a variable that is the
integral of another, as in the example on page ??, where the displacement of the
mass M will always be the integral of the velocity of that mass. These integrators
clearly have no drift and are perfect integrators: the displacement of a point is always the integral of the point's velocity.
Summary:
– If there is any pole with a positive real part or any multiple pole with a zero
real part, the system is unstable.
– If the above case doesn’t occur, but there are single poles with a zero real part,
the system is marginally stable.
– If all poles have a negative real part, the system is stable.
Stability of Systems Composed of Series Blocks:
If multiple transfer functions G1(s), G2(s), ..., Gn(s) are connected in series (cascaded), the whole system will be stable if each of the transfer functions is stable.
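The pole condition for stability translates directly into a numerical test; a minimal sketch (the polynomials are illustrative):

```python
import numpy as np

def is_stable(den_coeffs):
    """True if all poles (roots of the denominator polynomial, given with
    the highest power first) have a strictly negative real part."""
    poles = np.roots(den_coeffs)
    return bool(np.all(poles.real < 0))

print(is_stable([1, 3, 2]))    # poles -1, -2            -> True
print(is_stable([1, 0, 1]))    # poles +/-j (marginal)   -> False
print(is_stable([1, -1, -2]))  # poles 2 and -1          -> False
```

Note that the marginally stable case (simple poles on the imaginary axis) also fails this strict test, which matches the engineering point above: σ = 0 offers no safety margin.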

1.6 Routh’s Stability Criterion


Given a transfer function G(s), it’s essential to determine if it’s stable or not. As
we know, for stability, all poles must have a negative real part.
It’s easy to prove that if a polynomial has all roots with a negative real part,
then all its coefficients have the same sign, and none of them are zero. This fact
is useful for detecting potential instabilities at a glance. However, this condition is
necessary but not sufficient. Even if all coefficients have the same sign and none are
zero, there might still be roots with a positive real part. A more powerful method
is required.
Routh's criterion (actually an algorithm) allows us to find out if a polynomial
has all roots with a negative real part. By applying this criterion to the characteristic
equation (denominator of G(s)), we can determine if G(s) is stable.
Let's see how it works. Let D(s) be the denominator polynomial of G(s) whose stability we want to determine, expressed as

$$D(s) = as^n + bs^{n-1} + cs^{n-2} + ds^{n-3} + \ldots$$
We create the following table:

n      a   c   e   g   ···
n−1    b   d   f   h   ···
n−2    p   q   r   s   ···
n−3    u   v   w   ···
⋮
0

The left column serves as an index for some operations that may need to be
performed. It simply consists of decreasing integers from n (the degree of the poly-
nomial) down to zero.
On the right, the first two rows are the alternating coefficients of D(s). Please be
careful; if any coefficient is zero, you should explicitly enter it as zero in the table,
even if it’s not explicitly stated in the polynomial (if this happens, as mentioned
earlier, there is at least one root with a positive real part, but we may still want to
apply the criterion to gather more information).
The remaining elements are calculated as follows:

$$p = \frac{bc-ad}{b} \qquad q = \frac{be-af}{b} \qquad r = \frac{bg-ah}{b} \qquad \cdots$$

$$u = \frac{pd-bq}{p} \qquad v = \frac{pf-br}{p} \qquad w = \frac{ph-bs}{p} \qquad \cdots$$

This process continues until we have filled in the only element in the row with
index zero.
The result is obtained from the first column (a, b, p, u, . . .):
Routh’s Criterion: D(s) has all its roots with a negative real part if and only
if in the first column, all elements are non-zero and have the same sign.
With this criterion, we can ensure stability. However, we can gather more information using
Routh's Theorem: The number of roots of D(s) with a positive real part is equal to the number of sign changes in the first column.
Note that purely imaginary roots will be detected by the criterion (because they
are not in Re < 0), but they will not be counted by the theorem (because they are
not in Re > 0). As we will see (page 26), they result in a row of zeros.
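The table construction can be sketched in a few lines of code; a minimal implementation that does not handle the singular cases discussed later:

```python
def routh_first_column(coeffs):
    """First column of the Routh table for a polynomial given by its
    coefficients, highest power first. Minimal sketch: the singular
    cases (a zero in the first column) are not handled."""
    n = len(coeffs) - 1
    width = n // 2 + 1
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    for r in rows:                      # pad the two seed rows with zeros
        r += [0.0] * (width - len(r))
    for i in range(2, n + 1):
        prev, prev2 = rows[i - 1], rows[i - 2]
        # Each element: (prev[0]*prev2[j+1] - prev2[0]*prev[j+1]) / prev[0]
        row = [(prev[0] * prev2[j + 1] - prev2[0] * prev[j + 1]) / prev[0]
               for j in range(width - 1)] + [0.0]
        rows.append(row)
    return [r[0] for r in rows]

col = routh_first_column([1, 10, 35, 50, 24])
print(col)  # [1, 10, 30.0, 42.0, 24.0]
sign_changes = sum(1 for a, b in zip(col, col[1:]) if a * b < 0)
print(sign_changes)  # 0: all roots in the left half-plane
```

The example polynomial is the one from Example 1 below; the first column 1, 10, 30, 42, 24 has no sign changes, so all roots lie in the left half-plane.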
Example 1
Study the stability of the transfer function with the characteristic polynomial

$$D(s) = s^4 + 10s^3 + 35s^2 + 50s + 24$$

We perform

4    1                          35                       24
3    10                         50
2    (10·35 − 1·50)/10 = 30     (10·24 − 1·0)/10 = 24
1    (30·50 − 10·24)/30 = 42
0    (42·24 − 30·0)/42 = 24

and we see that the first column is 1, 10, 30, 42, 24, which has no zeros and no sign changes. This means that D(s) has all roots with negative real parts and, consequently, the corresponding transfer function is stable.
Example 2
Study the stability of the transfer function with the characteristic polynomial

$$D(s) = s^4 + 3s^3 + s^2 + 2s + 1$$

We have

4    1      1    1
3    3      2
2    1/3    1
1    −7
0    1

and we see that there are 2 sign changes in the first column (from 1/3 to −7 and from −7 to 1). D(s) has, therefore, two roots with positive real parts; the corresponding system is unstable.
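The count given by Routh's theorem can be cross-checked against the roots themselves; a quick check for the polynomial of this example:

```python
import numpy as np

# D(s) = s^4 + 3 s^3 + s^2 + 2 s + 1 from the example above
roots = np.roots([1, 3, 1, 2, 1])

# Routh's theorem predicted exactly two roots with positive real part
n_unstable = int(sum(r.real > 0 for r in roots))
print(n_unstable)  # 2
```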

There is a property that allows some simplification: any row can be multiplied or divided by any positive number. This can save us from performing divisions by the corresponding element of the first column (provided that it is positive), which is important when the division is not exact. We will apply this in the following example.
Example 3
Study the stability of the transfer function that has the characteristic polynomial

$$D(s) = s^4 + 7s^3 + 17s^2 + 17s + 6$$

We have

4    1       17    6
3    7       17
2    102     42         nd
1    1440               nd
0    42

We avoided divisions, except in the last row, where it is easier to divide than not to. This is indicated in the extra column on the right (nd for "no division"), where, to ease the study, we note the deviations we make from the standard process.
If we had not applied this property, it would have resulted in

4 1 17 6
3 7 17
2 14.57 6
1 14.12
0 6

where, for convenience, we have represented two elements approximately. This can
sometimes lead to numerical errors.
Regardless, there are no sign changes; the system is stable.
Example 3b
Study the stability of the transmittance that has the characteristic polynomial
D(s) = s⁵ + 7s⁴ + 17s³ + 17s² + 6s
It is obvious that it has the root s = 0, i.e., there is an integrator. When there are integrators, it is advisable to remove the zero roots and apply Routh to the resulting polynomial. Afterward, we add the removed zero roots back to the set of roots found and determine stability.
In this example, the polynomial that remains if we suppress the root s = 0
(dividing by s) is that of the previous example 3
s⁴ + 7s³ + 17s² + 17s + 6
which had all roots with negative real parts. However, taking into account the zero
root, the system is marginally stable. Note that if our polynomial had been

D(s) = s⁶ + 7s⁵ + 17s⁴ + 17s³ + 6s²

which has two integrators, the system would be unstable: all the other roots are those of example 3, with negative real parts, but the zero root is now multiple.

When a zero appears in the first column, the system is not stable. It can,
however, be marginally stable, if there are only simple pure imaginary roots. And it

24
may be that, even though the system is unstable, we want to know the number of roots with positive real parts. However, the division needed to continue the array then becomes indeterminate: we have a singular case.

Singular Cases
There are two:
a) The first element of a row is zero, but there is at least one other that is not
zero.
b) All elements of a row are zeros.
Let’s see how they are resolved.

a) Row with the first element zero, but with at least one other that is
not zero.
In this case, there are roots with positive real parts and the system is unstable.
To know how many there are, when you come across a row of this kind, it is replaced
with another that results from adding or subtracting a helper row. The helper row
consists of the same row shifted as many places to the left as needed so that the
first non-null element reaches the left end. If an even number of positions needs
to be shifted, it is added. If an odd number of positions needs to be shifted, it is
subtracted.
An example will explain it better.
Example 4
Study the stability of the transmittance that has the characteristic polynomial

D(s) = s⁸ + 2s⁷ + 2s⁶ + 4s⁵ + 3s⁴ + 6s³ + 5s² + 7s + 3

We proceed

8 1 2 3 5 3
7 2 4 6 7
6 0 0 3 6 nd

Since the row is not entirely zeros, we are in case (a). We can already say that the system is not stable. To continue, we do

6 old 0 0 3 6
aux. 3 6
6 new 3 6 3 6

since we needed to shift two positions, we added. We continue:

25
8 1 2 3 5 3
7 2 4 6 7
̸6 0 0 3 6 nd
3 6
6 3 6 3 6
5 0 12 9 nd

now we do

5 old 0 12 9 nd
aux. 12 9
5 new −12 3 9

since we needed to shift an odd number of positions, we subtracted. Continuing,


finally we get

8 1 2 3 5 3
7 2 4 6 7
̸6 0 0 3 6 nd
3 6
6 3 6 3 6
̸5 0 12 9 nd
12 9
5 −12 3 9
4 81 63 72 nd; ×(−1)
3 999 1593 nd
2 −66096 71928 nd
1 177147000 nd; ×(−1)
0 71928

When counting the sign changes, it must be considered that the replaced (struck-out) rows and the auxiliary rows are as if they were not there.
The four sign changes indicate that the number of roots with positive real parts
is four.
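The shift-and-add/subtract rule just applied can be sketched as follows (`fix_leading_zero` is an illustrative name of ours):

```python
def fix_leading_zero(row):
    """Singular case (a): the row starts with zeros but is not all zero.
    Replace it by row + aux (even shift) or row - aux (odd shift), where
    aux is the row shifted left until its first non-zero element leads."""
    shift = next(i for i, c in enumerate(row) if c != 0)
    aux = row[shift:] + [0] * shift     # helper row, padded with zeros
    sign = 1 if shift % 2 == 0 else -1  # even shift: add; odd: subtract
    return [c + sign * a for c, a in zip(row, aux)]
```

With the rows of example 4: `fix_leading_zero([0, 0, 3, 6])` returns [3, 6, 3, 6] (two shifts, added), and `fix_leading_zero([0, 12, 9])` returns [-12, 3, 9] (one shift, subtracted).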

b) All elements of a row are zero.


This indicates the presence of roots symmetric about the origin, which, in particular, can be pure imaginary. Pure imaginary roots always cause a singularity of this type; roots symmetric about the origin but not pure imaginary, not necessarily.
If there are non-pure imaginary symmetric roots, since half will have Re > 0,
the system is unstable. If there are only simple pure imaginary roots, the system is

26
marginally stable. If there are multiple pure imaginary roots, the system is unstable.
Note that in all cases, the system is not stable, at most it is marginally stable.
Once the singularity is resolved, the sign changes in the first column will tell us
how many roots with positive real parts there are. The presence and number of pure
imaginary roots must be deduced indirectly. As we will see, the polynomials that
we will now define will help us with this.
To resolve the singularity, it is necessary to introduce the concept of a polynomial
associated with a row. It is a polynomial of degree equal to the row’s index, with
exponents decreasing by twos and coefficients being the elements of the row. For
example, for the rows

4 1 17 6
3 7 9

we have the associated polynomials

D4(s) = s⁴ + 17s² + 6

D3(s) = 7s³ + 9s
respectively.
If there are no integrators (which we normally consider separately, as we did in
example 3b), the polynomial associated with the row above a zero row is always of
even degree, and its roots, which are symmetric about the origin, are also those of
the investigated polynomial.
So, when a row is all zeros, it is replaced by the row obtained by taking the associated polynomial of the row above and differentiating it. An example will clarify this.
Example 5
Study the stability of the transmittance that has the characteristic polynomial

D(s) = s⁶ + 16s⁵ + 103s⁴ + 48s³ + 302s² + 32s + 200

We proceed

6 1 103 302 200


5 16 48 32
4 1 3 2 /100
3 0 0

At this point, we can already say that the system is not stable. But it may be
marginally stable. The associated polynomial of the previous row (index 4) is

D4(s) = s⁴ + 3s² + 2

which, upon differentiation, gives us the new row of index 3:

27
D4′(s) = 4s³ + 6s
and the table becomes

6 1 103 302 200


5 16 48 32
4 1 3 2 /100
̸3 0 0
3 4 6
2 6 8 nd
1 4 nd
0 8

It must be considered that the entire row of zeros is as if it did not exist. There
are no sign changes; this means that there are no roots with positive real part. Since
the system is not stable (due to the row of zeros we found), this means that there
must be roots with zero real part. If we solve D4(s), we can find them:

s = ±√((−3 ± √(9 − 8))/2) = ±√((−3 ± 1)/2), that is, s = ±j and s = ±j√2

Since they are not multiple, the system is marginally stable.
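The derivative-row replacement used above can be sketched as follows (`derivative_row` is an illustrative name; `index` is the index of the row above the zero row):

```python
def derivative_row(index, row):
    """Row that replaces an all-zero Routh row: the coefficients of the
    derivative of the associated polynomial D_index(s), whose exponents
    decrease by twos starting at `index`. Constant terms vanish."""
    return [c * (index - 2 * j) for j, c in enumerate(row)
            if index - 2 * j > 0]
```

For example 5, `derivative_row(4, [1, 3, 2])` gives [4, 6], i.e., D4′(s) = 4s³ + 6s; for example 6 it gives [4, 28], and for example 7, `derivative_row(2, [3, 3])` gives [6], i.e., D2′(s) = 6s.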


Example 6
Study the stability of the transmittance that has the characteristic polynomial

D(s) = s⁶ + 16s⁵ + 114s⁴ + 224s³ + 2025s² + 10000s + 62500

We proceed

6 1 114 2025 62500


5 16 224 10000
4 100 1400 62500
3 0 0

Having a row of zeros, we can already say that the system is not stable.
We can divide the row of index 4 by 100, resulting in its associated polynomial

D4(s) = s⁴ + 14s² + 625

from which
D4′(s) = 4s³ + 28s
and the last rows of the table become

28
4 1 14 625 /100
3 4 28
2 7 625
1 −2304 nd
0 625

and since there are two sign changes, D(s) has two roots with positive real part, the
system is unstable.
Solving D4(s) we find four roots of D(s) symmetric about the origin:

s = ±√((−14 ± √(14² − 4·625))/2) = ±√(−7 ± 24j) = ±(3 ± 4j)

and, as we see, they include the two with positive real part, 3 ± 4j.
Example 7
Study the stability of the transmittance that has the characteristic polynomial

D(s) = s⁶ + s⁵ + 3s⁴ + 3s³ + 3s² + 2s + 1

6 1 3 3 1
5 1 3 2
4 old 0 1 1
aux. 1 1
4 −1 0 1
3 3 3
2 3 3 nd
̸1 0
1 6
0 3

The zero in the row of index 4 tells us that the system is not stable.
Since there are two sign changes, D(s) has two roots with positive real part.
Note that in the row of index 1, which was all zeros, we formed

D2(s) = 3s² + 3
D2′(s) = 6s

and that, if we wish, we can solve D2 (s), which gives us the pure imaginary solutions
s = ±j.

As we see, the Routh criterion tells us whether a system is stable or not, but
it does not tell us anything about how to stabilize it. There is a possibility, quite
restricted indeed, that allows us to know how to stabilize the system. This occurs

29
when the characteristic equation includes some parameter (usually the canonical
gain of the loop transmittance) and we apply the Routh criterion including the
parameter. By ensuring that there are no sign changes in the first column, we find
the restrictions on the parameter that make the system stable.
Example 8
Find for which values of the canonical gain, k, the system with the loop trans-
mittance
GH(s) = k / (s(s + 1)(s/3 + 1))

is stable.
The system’s transmittance will be
W(s) = G / (1 + GH)
For W (s) to be stable, the roots of its characteristic equation, 1 + GH = 0, must
have negative real parts. Thus, we have
1 + GH = 1 + k/(s(s + 1)(s/3 + 1)) = (s(s + 1)(s/3 + 1) + k) / (s(s + 1)(s/3 + 1))


The roots of this are the roots of

s(s + 1)(s/3 + 1) + k = 0

which, simplified, is

s³ + 4s² + 3s + 3k = 0
from which

3 1 3
2 4 3k
1 12 − 3k nd
0 3k

For there to be no sign changes, it is necessary that

12 − 3k > 0

and that

3k > 0

This gives us that it is necessary

k < 4

and that

k > 0

that is, the system will be stable if

0 < k < 4
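As a numerical cross-check of this example (a sketch, with a function name of our choosing), the first column of the Routh array of s³ + 4s² + 3s + 3k can be evaluated directly as a function of k:

```python
def is_stable(k):
    """First-column Routh test for s^3 + 4 s^2 + 3 s + 3k (example 8):
    column is [1, 4, (12 - 3k)/4, 3k]; stable iff all entries positive."""
    column = [1, 4, (12 - 3 * k) / 4, 3 * k]
    return all(c > 0 for c in column)
```

`is_stable(2)` holds, while `is_stable(5)` and `is_stable(-1)` do not, matching 0 < k < 4; the boundary values k = 0 and k = 4 are rejected, since they give a zero in the first column (marginal, not stable).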

30
1.7 Steady-State Error and System Type
A very common control structure is

Figure 1

Here G is the plant or system we want to control, and H is usually the measurement device that observes the value of the output Y of the plant. If the input X
represents the value we want the output Y to have, we see that ε is the difference
between the value we want the output to have and the value that the measurement
device tells us it actually has.
For example, if G represents the dynamic model of a ship, X can represent the
course we want the ship to follow, and Y is the course it actually follows. X will
be, for example, a voltage proportional to the desired course. This voltage cannot
be compared directly with the followed course Y , which is an abstract thing. It is
necessary, therefore, to measure the course and reduce it to a magnitude of the same
physical dimensions as X to be able to compare them (subtract them). This is what
the measurement device H does, which in this example can be a gyrocompass that gives an output voltage proportional to the course followed by the ship.
Note that the desired course, which is an entity in our brain, also needs to
be reduced to a physical magnitude. This can be achieved, for example, with a
potentiometer, which will translate the angle we turn its axis into an electrical
voltage. By putting a graduated dial from 0◦ to 360◦ , we can convert the desired
course into a proportional voltage by turning the potentiometer’s axis.
With these considerations in mind, note that, if the value of the measured course, i.e., HY, matches exactly the desired one, X, the signal ε is zero. This ε, which indicates the difference X − HY, we will call the system error, or also the error signal. Notice that it is not the difference between the desired Y and the real output Y (which in this case would be measured in degrees), but between the expression X of our desire and the measured Y (both of which are measured in volts).
— In engineering, we usually define the error as the actual quantity minus the
nominal; a positive error means that the actual magnitude is greater than what we
want. However, in control, as the error is defined (figure 1), it is the magnitude we
want (setpoint) minus the real one. A positive error means that the output is less
than what we want.
In general, we will be interested in the error ε being zero, even though we will not
always achieve it. Then it will be necessary to be able to give a limit. For example,
we can guarantee that ε < ±1%. It is seen, therefore, that it will be interesting to
know how the error behaves depending on what the inputs to the system are. For
this, it will be necessary to know its transmittance, that is, the transmittance with
input X and output ε. By rearranging figure 1 we have

31
Figure 2

ε(s)/X(s) = 1/(1 + GH(s))
This is called the error transmittance. We see that it only depends on the loop
transmittance GH. It is important to note that it is only valid for the structure
shown in figure 1. However, this structure is so common that it justifies delving a bit
deeper into the study of the error with this configuration. For a different structure,
if we were to talk about error, it would be necessary to clearly define at which point
of the block diagram we are referring to; presumably, at the output of a comparator,
but there could be more than one.
Let’s return to the structure of figures 1 and 2. We study what happens with the
error when signals of increasingly rapid variation enter the system: a step x(t) = u(t),
a ramp x(t) = tu(t), and a parabola x(t) = (1/2)t²u(t). It can be anticipated that the
error will tend to be larger the more rapidly the input signal evolves, since every
system always has limits on response speed. Let’s analyze it.
We will study the steady-state error, that is, once the initial transients have
passed. This error is also called error in steady state. It is clear that the system
must be stable; otherwise, we would not have a steady state. Therefore, we assume
that the system is stable. For unstable systems, the concept of steady-state error
does not make sense.
Let the loop transmittance already be in canonical form
GH(s) = kL(cm s^m + · · · + c1 s + 1) / (s^r (dq s^q + · · · + d1 s + 1))

We'll have

P(s) = ε(s)/X(s) = 1/(1 + GH(s)) = s^r (dq s^q + · · · + d1 s + 1) / (s^r (dq s^q + · · · + d1 s + 1) + kL(cm s^m + · · · + c1 s + 1))
a) Step input.
We have

x(t) = u(t)  ⇒  X(s) = 1/s

ε(s) = X(s) · 1/(1 + GH(s)) = (1/s) · P(s)

The permanent error for a step input will be (applying the final value theorem of the Laplace transform):

εps = lim(t→∞) ε(t) = lim(s→0) s·ε(s) = lim(s→0) P(s)

32
thus

εps = 1/(1 + kL)   if r = 0
εps = 0            if r > 0
Therefore, for the step input error to be null, there must be, at least, one integrator in the loop transmittance.
b) Ramp input.
It will be

x(t) = tu(t)  ⇒  X(s) = 1/s²

ε(s) = X(s) · 1/(1 + GH(s)) = P(s)/s²

The permanent error for a ramp input will be

εpr = lim(t→∞) ε(t) = lim(s→0) s·ε(s) = lim(s→0) P(s)/s

from which

εpr = ∞      if r < 1
εpr = 1/kL   if r = 1
εpr = 0      if r > 1
Therefore, for the ramp input error to be null, there must be, at least, two
integrators in the loop transmittance.
c) Parabolic input.
It will be

x(t) = (1/2)t²u(t)  ⇒  X(s) = 1/s³

ε(s) = X(s) · 1/(1 + GH(s)) = P(s)/s³

The permanent error for a parabolic input will be

εpp = lim(t→∞) ε(t) = lim(s→0) s·ε(s) = lim(s→0) P(s)/s²

from which

εpp = ∞      if r < 2
εpp = 1/kL   if r = 2
εpp = 0      if r > 2

Therefore, for the parabolic input error to be null, there must be, at least, three
integrators in the loop transmittance.
The importance of integrators is evident. This has led to the term type of a system, defined as the number r of integrators in its loop transfer function. It is essential not to confuse it with the degree or order of the system, which is the highest power of s in its characteristic equation (page ??). Let's emphasize this: the type is read from the loop transfer function (GH), while the order is read from the total system transfer function (W = G/(1 + GH)).
We can summarize all of the above in a table (where kL is the canonical gain of the loop transfer function GH, and ε is the system error for W = G/(1 + GH)):

33
input →      Au(t)         Atu(t)    (A/2)t²u(t)
type (r) ↓   εps           εpr       εpp
0            A/(1 + kL)    ∞         ∞
1            0             A/kL      ∞
2            0             0         A/kL

We must remember that these expressions are only valid if W (s) is stable, and they
refer to the steady-state error. If we want the error as a function of time, ε(t), we
obtain it from
ε(s) = X(s) · 1/(1 + GH(s))
always for the structure in Figure 2. For other structures, the corresponding transfer
function must be used.
Example
Calculate the canonical gain kL for the system with the loop transfer function
GH = kL / (s(s + 1)(s/3 + 1))


to have a steady-state error for a ramp input ε < 0.2.


Since it is of type 1, we have

εpr = 1/kL < 0.2  (the desired bound)

hence the required canonical gain is

kL > 5

However, we recall that on page ??, we saw that the system with this loop transfer function is unstable for kL > 4. By varying only kL, we cannot achieve an error smaller than 1/4 = 0.25. If we wanted zero error, we would need to add an integrator
to the system (and check if the resulting system is stable). Note that, on the other
hand, for a step input, the error is zero regardless of kL .
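The table above, together with this example, can be encoded in a small helper (a sketch for the structure of figure 2 only; the function name and argument convention are ours, and closed-loop stability is assumed):

```python
def steady_state_error(r, kL, kind, A=1.0):
    """Steady-state error of the loop of figure 2 for an input A*u(t),
    A*t*u(t) or (A/2)*t^2*u(t), given the type r and canonical gain kL
    of GH(s). Only valid if the closed loop is stable."""
    order = {"step": 0, "ramp": 1, "parabola": 2}[kind]
    if r < order:
        return float("inf")     # too few integrators: error diverges
    if r > order:
        return 0.0              # extra integrators: error vanishes
    return A / (1 + kL) if order == 0 else A / kL
```

For the example above, a type-1 loop with kL = 5 gives a ramp error of exactly 0.2, and a zero step error regardless of kL.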

Generalization of the Type Concept for Other Inputs.


For inputs other than the reference input, i.e., inputs at other points in the
loop, the effects on the error can also be partially determined using this method
by defining a type for each input. This type represents the number of integrators
between the error and the input in the direction of signal flow. It helps determine whether the error is zero or infinite, but is not valid for calculating the value in other cases (it is not A/(1 + kL) or A/kL).
A Note on Nomenclature
Historically, the study of this type of control (Figure 1) began during World
War II. Control was used to aim anti-aircraft defenses at aircraft. The three types
of inputs referred to stationary targets, targets moving at a constant velocity, and

34
accelerating targets, respectively. That’s why the error that the system has for
each of them was called position error, velocity error, and acceleration error. When
this control structure was generalized, these terms remained. This has unfortunate
consequences. For example, if a control system’s mission is to maintain a constant
velocity, the error for a step input, which results in a different velocity than desired,
is called a position error.
This nomenclature is still used in most English-language literature. The nomen-
clature we have used, which is more rational, is used in Catalan literature.
Another American habit that should be known in order to follow English-language
literature but which we will not use is to give different names to the canonical gain
(kL ) of the loop transfer function, depending on the system’s type. For type 0 sys-
tems, they call it position constant, Kp . For type 1 systems, velocity constant, Kv .
And for type 2 systems, acceleration constant, Ka .

1.8 Sensitivity
We will never know the parameters of the models with precision. It is useful to have
a measure of how this uncertainty can affect the results of our analyses.
Sensitivity to the parameter α of the transfer function W (s, α) is defined as the
quotient of the relative variations:
SαW = (dW/W) / (dα/α) = (α/W)·(dW/dα).
Note that it is dimensionless. Ideally, sensitivity should be zero; then, parameter
variations do not affect the transfer function. High sensitivity, for example, SαW = 1
or SαW = −1, indicates that parameter variations affect the transfer function in the
same proportion (in the same direction or opposite direction, depending on the sign).
Example.
Consider a system without feedback, a gain k in series with a block G (so that W = kG), and calculate its sensitivity to k and G.


SGW = (G/W)·(dW/dG) = (G/(kG))·k = 1
which tells us that variations in G affect the overall system in the same proportion.
Since k and G can be interchanged without altering the overall transfer function,
the sensitivity to k will be the same, SkW = 1.
Now consider the control system

35
where G is the controlled plant, H is the measuring device used to compare the
output of plant G with the desired setpoint, and k allows us to adjust the canonical
gain of the loop transfer function. The system’s transfer function is
W = kG/(1 + kGH)
Let’s examine the sensitivities with respect to the gain k and the transfer function
G:
SkW = (k/W)·(dW/dk) = (k(1 + kGH)/(kG)) · (G(1 + kGH) − kG·GH)/(1 + kGH)² = 1/(1 + kGH)

SGW = (G/W)·(dW/dG) = (G(1 + kGH)/(kG)) · (k(1 + kGH) − kG·kH)/(1 + kGH)² = 1/(1 + kGH)
which are equal, as expected, since k and G can also be interchanged without chan-
ging W . Note that if we want the sensitivity to be small, we need kGH to be
large.
Now let’s examine the sensitivity with respect to the measurement device, H:
SHW = (H/W)·(dW/dH) = (H(1 + kGH)/(kG)) · (−(kG)²/(1 + kGH)²) = −kGH/(1 + kGH)
This presents us with a problem: if we make kGH small so that SHW tends to zero, then SkW and SGW grow towards 1. If we make kGH large, SHW tends towards −1, indicating high sensitivity. Therefore, we need to choose between one problem
or the other. We will make kGH large to accommodate variations in the power
chain (variations that would be expensive or impossible to eliminate), and we will
accept that the measurement device H must be of high quality since its variations
will affect W practically in the same proportion.
Note that by providing feedback to the system, we have managed to make it
independent of variations in G and k, as long as k, even if it varies, is sufficiently
large.
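These closed-form sensitivities can be cross-checked with a finite-difference quotient (a sketch: s is frozen so that G and H become plain numbers, and the values G = 2, H = 1, k = 3 are arbitrary choices of ours):

```python
def sensitivity(W, alpha, h=1e-6):
    """S_alpha^W = (alpha/W(alpha)) * dW/dalpha, by central differences."""
    dW = (W(alpha + h) - W(alpha - h)) / (2 * h)
    return alpha * dW / W(alpha)

G, H = 2.0, 1.0
W_of_k = lambda k: k * G / (1 + k * G * H)   # W = kG/(1+kGH), k variable
S = sensitivity(W_of_k, 3.0)                 # expect 1/(1+kGH) = 1/7
```

The result is approximately 0.1429, i.e., 1/(1 + kGH) with kGH = 6, confirming the formula for SkW.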
We can derive some operational rules that will facilitate calculations.
– If W = N(α)/D(α) and we want SαW, note that

SαW = (dW/W)/(dα/α) = d ln W/d ln α = d(ln N − ln D)/d ln α = d ln N/d ln α − d ln D/d ln α = (dN/N)/(dα/α) − (dD/D)/(dα/α) = SαN − SαD

In the previous example, we have N = kG and D = 1 + kGH, so

SGN = (G/N)·(dN/dG) = (G/(kG))·k = 1

SGD = (G/D)·(dD/dG) = (G/(1 + kGH))·kH = kGH/(1 + kGH)

36
Therefore,

SGW = SGN − SGD = 1 − kGH/(1 + kGH) = 1/(1 + kGH)
– Similarly, we would have

SαAB = SαA + SαB

It tells us that the sensitivity to a parameter of two blocks in series is the sum
of the sensitivities of each block.
– We can also obtain that

SαW = SβW · Sαβ

1.9 PID Controllers


Physical Vision
We have seen that, to make the error of a system smaller than a certain value, it is necessary to increase the canonical gain of the loop transfer function. This can be done by placing an amplifier in front of the plant that multiplies by K:

This amplifier can be considered as a (simple) controller. Since what it does is

u(t) = Ke(t)

we say that it performs a proportional action; it is a P controller. The idea is that if, with a certain signal e(t) at the input of G(s), the output does not quite reach the desired value, making e(t) larger (multiplying it by K) decreases the error.
We also know that to eliminate the error of a system, sometimes it is necessary
to add an integrator in series:

37
Well, this integrator, which helps control the plant G, we can say it is a controller.
Since

u(t) = ∫₀ᵗ e(τ) dτ ,

we say that it performs an integral action; it is an I controller. Its action is based on the fact that, if a small error signal e(t) was not large enough to make G bring Y to its place, the signal u(t), which grows as long as e(t) does not vanish, will presumably eventually be sufficient for G to bring Y to the exact desired value.
However, whenever we add an integrator, it must be taken into account that integrators tend to destabilize the system (page ??).
This way of reasoning can be extended to other characteristics of the behavior
of G. For example, if the step response of G brings Y to the final value A, but not
quickly enough,

if we increase the signal applied to G, something like this will happen:

that is to say, we will reach the value A (point M ) more quickly, but we will overshoot
it and reach the final value B instead of A.
What we need is an increase in the input to G that is initially large and then
disappears. This can be achieved by using the derivative of the input to G, which if
it were truly a step function would be

38
but in reality, with a suitable time scale, it looks like

This is achieved by adding to the signal already present at the input of G the
output of a differentiator, that is, a block s.

However, differentiators are dangerous, as they amplify the noise that is always
present in systems (page 45). Therefore, they must be used with caution.
These three actions, proportional (P), integral (I), and derivative (D), are used in practice in various combinations: P, PI, PD, and PID.

The general structure of a PID controller will be

39
The constants Kp, Ki, and Kd allow adjusting the "amount" of each type of action we will use, and they make the outputs of the three sub-controllers (P, I, and D) have the same dimensions so that they can be summed. If, as is common, E and U are of the same nature (usually volts), we note that Kp has no dimensions, Ki has dimensions of T⁻¹, and Kd has dimensions of T. From the previous figure, we see that the controller has the transfer function

Gc(s) = U(s)/E(s) = Kp + Ki/s + Kd·s = (Kd s² + Kp s + Ki)/s

We will place it in series with the plant:
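In discrete time, the structure above can be sketched as follows (a minimal illustration with names and conventions of our choosing; practical issues such as output saturation and derivative filtering are ignored):

```python
def make_pid(Kp, Ki, Kd, dt):
    """Return a function e -> u computing u = Kp*e + Ki*integral(e) + Kd*de/dt,
    with a rectangular integral and a backward-difference derivative,
    sampled every dt seconds."""
    state = {"integral": 0.0, "prev_e": None}
    def controller(e):
        state["integral"] += e * dt
        de = 0.0 if state["prev_e"] is None else (e - state["prev_e"]) / dt
        state["prev_e"] = e
        return Kp * e + Ki * state["integral"] + Kd * de
    return controller
```

A pure P controller simply returns Kp·e; with only Ki, the output keeps accumulating as long as the error persists, and with only Kd, it reacts to changes in the error, which is exactly the qualitative behavior described above.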

Mathematical Perspective
We know that the behavior of the plant G(s) = N(s)/D(s) largely depends on its
poles, which means on the roots of its characteristic equation D(s) = 0. To alter
this behavior, we can use a controller Gc (s) connected in feedback as shown in the
previous figure. By doing this, the transfer function of the whole system becomes

W(s) = Gc(s)G(s) / (1 + Gc(s)G(s)H(s))
with a characteristic equation of 1 + Gc (s)G(s)H(s) = 0. This equation has
certain roots, which, being the poles of W (s), determine its behavior, which will no
longer be that of the isolated G(s). How should Gc (s) be in order to allow us to
vary the roots of the characteristic equation at will? Clearly, it will be convenient
for it to have parameters that we can change, which in turn affect the roots of
1 + Gc (s)G(s)H(s) = 0. To avoid redundancy, it will be appropriate for these
parameters of Gc (s), when multiplied by Gc (s), to affect different coefficients of the
product polynomial. Let’s assume we want to have three parameters, a, b, and c.
We can choose Gc (s) = as2 + bs + c. When multiplied by any polynomial, it is
certain that a, b, and c affect at least three different powers of s. However, this does
not allow us to incorporate integrators into the control loop, which we have seen
can be desirable to eliminate error. A wiser choice is Gc(s) = (as² + bs + c)/s, which will allow us to introduce an integrator. We can see that we have rediscovered the form (Kd s² + Kp s + Ki)/s
of the PID controller.
Design

40
How do we find the constants Kp , Ki , and Kd ? There are many experimental
methods. However, if we have the model of the system (as we will assume in the
following examples), they can be determined analytically. The method to be used
will depend in part on the type of specifications we are given.
Essentially, it will consist of imposing that the system with the controller meets
the specifications, which will involve solving a system of equations.
Examples of specifications that we may want to meet:
– Fixed steady-state error
– Fixed settling time
– Maximum overshoot
– Desired pole locations for the system
The process is basically the same for all of them: the specified magnitude is
calculated as a function of Kp , Ki , and Kd , and it is imposed that the specified
values are met. This gives us Kp , Ki , and Kd or perhaps some constraints on their
values, such that if they are satisfied, the specifications will also be met.
Before giving some general guidelines for design, let’s see some examples to clarify
ideas.
Example 1
Design a controller so that the system with the loop transfer function
GH(s) = 1/(s − 2)
is stable and has zero steady-state error for a step input.
– To achieve zero error, we need to change the system from the type 0 it currently
is to type 1. This means adding an integrator. As an exercise, let’s try the I
controller, although in practice, a standalone I controller is not typically used.

Gc(s) = Ki/s
which gives us the new loop transfer function

Gc GH(s) = Ki/(s(s − 2))

which indeed makes the system type 1 and thus has zero error for step inputs. Now,
we need to check if it’s stable. Recalling

W(s) = G/(1 + GH)
we see that the poles of W (s) will be the roots of 1 + GH = 0, which in our case is

1 + Gc GH = 1 + Ki/(s(s − 2)) = (s(s − 2) + Ki)/(s(s − 2)) = 0

which will vanish for


s(s − 2) + Ki = 0

41
meaning,

s² − 2s + Ki = 0
Since its coefficients do not all have the same sign, there will always be some roots with positive real parts, whatever the value of Ki.
With an I controller, we cannot stabilize the system.
– Let’s try a PI controller:

Gc(s) = Kp + Ki/s = (Kp s + Ki)/s
which gives us the loop transfer function
  
Gc GH(s) = ((Kp s + Ki)/s) · (1/(s − 2))

The characteristic equation, 1 + GH = 0, will be


  
1 + Gc GH = 1 + ((Kp s + Ki)/s)·(1/(s − 2)) = (s(s − 2) + Kp s + Ki)/(s(s − 2)) = 0

whose roots will be those of

s(s − 2) + Kp s + Ki = 0

that is,
s² + (Kp − 2)s + Ki = 0 .
Let’s see what’s needed for them to have no positive real part. By the Routh
criterion:

2 1 Ki
1 Kp − 2
0 Ki

We find that it is necessary that



Kp − 2 > 0 → Kp > 2
Ki > 0

Any pair of Kp and Ki that satisfies these conditions will work; the characteristic
equation will have roots with negative real parts, and the system will be stable.
– However, it is often wiser to choose the roots (which, remember, are the poles
of the system). For example, let’s make the poles of the feedback system be

s = −1 ± j

The characteristic equation must then be

(s + 1 − j)(s + 1 + j) = s² + 2s + 2 = 0

42
which, identifying it with

s² + (Kp − 2)s + Ki = 0

gives us 
Kp − 2 = 2 → Kp = 4
Ki = 2
The controller

Gc = 4 + 2/s = (4s + 2)/s

thus makes the system with the controller and feedback

W(s) = Gc G/(1 + Gc GH) = ((4s + 2)/s)·G / (1 + ((4s + 2)/s)·(1/(s − 2))) = (s − 2)(4s + 2)·G/(s² + 2s + 2)

have zero error for step inputs and have poles at s = −1 ± j. Note that this is true as long as GH(s) = 1/(s − 2), regardless of the specific form of G(s), which we actually do not know.
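The pole-placement step can be verified numerically (a sketch; the quadratic-formula helper and its name are ours):

```python
import cmath

def closed_loop_roots(Kp, Ki):
    """Roots of s^2 + (Kp - 2) s + Ki = 0, the characteristic equation
    of example 1 with a PI controller, via the quadratic formula."""
    a, b, c = 1.0, Kp - 2.0, Ki
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)
```

With Kp = 4, Ki = 2, the roots are −1 ± j, as designed; with Kp = Ki = 0 (no controller), one root sits at s = 2, confirming the open-loop instability.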
Example 2
A system has the loop transfer function
GH(s) = k / (s(s + 1)(s/3 + 1))

Design a controller that makes the system stable and for a ramp input x(t) = tu(t),
has a steady-state error less than 0.2.
We know (pages 30 and 34) that for stability, k < 4 is required, and for the
steady-state error to be ε < 0.2, k > 5 is necessary.
– If we use a proportional controller, Gc (s) = Kp , we will have Kp k instead of k,

Gc GH(s) = Kp k / (s(s + 1)(s/3 + 1))


and we won’t have achieved anything; we will still require Kp k < 4 and Kp k > 5.
The P controller does not work in this case.
If, once again as an exercise, we try an integral controller, Gc(s) = Ki/s,

Gc GH(s) = Ki k / (s²(s + 1)(s/3 + 1))

the additional integrator (which changes the system from type 1 to type 2) will make
the error due to the ramp be null regardless of the canonical gain, which now will be
Ki k. However, we must not fall into the trap of thinking that by making Ki k < 4,
for example Ki k = 2, we have assured stability. It does not necessarily have to be
so; we have changed the denominator. Let’s see the new characteristic equation:

1 + Gc GH = 1 + Ki k/(s²(s + 1)(s/3 + 1)) = (s²(s + 1)(s/3 + 1) + Ki k)/(s²(s + 1)(s/3 + 1))


43
which will be canceled when the numerator does

s²(s + 1)(s/3 + 1) + Ki k = 0

from which

s⁴ + 4s³ + 3s² + 3Ki k = 0
which, lacking the s term, we already know is unstable. The I controller also does not serve us.
– Let’s try, then, with a PI: Gc (s) = Kp + Ksi . We will have
  !
Kp s + Ki k k(Kp s + Ki )
Gc GH(s) = s
 =
s2 (s + 1) 3s + 1

s s(s + 1) 3 + 1

from which
s2 (s + 1) 3s + 1 + k(Kp s + Ki )

1 + Gc GH(s) =
s2 (s + 1) 3s + 1


will cancel when its numerator does

s²(s + 1)(s/3 + 1) + k(Kp s + Ki) = 0

which, arranged, is

s⁴ + 4s³ + 3s² + 3kKp s + 3kKi = 0
and now, by Routh:

4 1 3 3kKi
3 4 3kKp
2 12 − 3kKp 12kKi
1 3kKp (12 − 3kKp ) − 48kKi
0 12kKi

Therefore, it is necessary that

12 − 3kKp > 0
3kKp(12 − 3kKp) − 48kKi > 0
12kKi > 0

or, simplifying (assuming k > 0),

kKp < 4
Ki < (3/16)Kp(4 − kKp)
Ki > 0

Any set of k, Kp, Ki that meets these three inequalities will ensure stability. If, for example, we choose

k = 1
Kp = 1

44
which meet the first inequality, it remains that

0 < Ki < 9/16
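The three inequalities can be wrapped in a quick check (a sketch; the function name is ours):

```python
def pi_stabilizes(k, Kp, Ki):
    """Routh conditions for s^4 + 4 s^3 + 3 s^2 + 3 k Kp s + 3 k Ki = 0,
    the closed loop of example 2 with a PI controller."""
    return (12 - 3 * k * Kp > 0
            and 3 * k * Kp * (12 - 3 * k * Kp) - 48 * k * Ki > 0
            and 12 * k * Ki > 0)
```

With k = Kp = 1, the stable band is 0 < Ki < 9/16: `pi_stabilizes(1, 1, 0.5)` holds, but `pi_stabilizes(1, 1, 0.6)` does not; kKp ≥ 4 (e.g., Kp = 5) fails regardless of Ki.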
Exercise. Verify that a PD also works.
Thus, we see that we can choose between a PI and a PD. Which one will we
choose? Whenever possible, it is necessary to avoid differentiators. Let’s see why.
Assume, as will almost always be the case, that the implementation of the controller
is electronic. In any industrial process, there is always electrical noise. This means
that the electrical signals running through the wires, due to inductions and stray
capacitances, are contaminated with voltages of various frequencies that are added
to the main one. These stray voltages, if the design is correct, are of very low
amplitude and do not reach to impair the proper functioning of the system. But
let’s see what happens if we put in a differentiator. A parasitic signal that we will
always find is the 50 Hz frequency due to the electrical distribution network. This
parasitic signal can be represented by xp (t) = A sin 100πt volts, where the amplitude
A, as we said, will be small (on the order of millivolts). This parasitic signal is added
to the normal control signals. When the signal passes through the differentiator, its
output will have the derivative

    dxp/dt = 100πA cos 100πt
which is again a sinusoidal signal, but with an amplitude 100π times larger. If xp (t)
had an amplitude of 1 mV, its derivative would have an amplitude of about 0.3 V,
probably unacceptable. Therefore, we will choose the PI, not the PD.
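The amplification is easy to quantify with the figures used above (1 mV amplitude, 50 Hz mains); a minimal sketch:

```python
import math

A = 1e-3             # parasitic amplitude: 1 mV
f = 50.0             # mains frequency in Hz
w = 2 * math.pi * f  # angular frequency: 100*pi rad/s

# d/dt [A sin(w t)] = w*A*cos(w t): the differentiator multiplies the
# amplitude by w = 100*pi, roughly a factor of 314
derivative_amplitude = w * A
print(round(derivative_amplitude, 3))  # 0.314 V, as in the text
```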

Let's now look at some guidelines for design. They will make little sense without the previous examples, and they cannot be taken as fixed rules; real cases are too varied.
1— We will start by deciding what type of controller we need (P, PI, PD, PID).
Let’s see how to do it based on the design specifications of the controller.
The design specifications can be classified as follows:

    specifications:
        − on the error:
            − on the type
            − numerical
        − on the poles
First, let’s focus (if there are any) on the error specifications. If we need a zero
error for a certain input (step, ramp, parabola), we will see if it is necessary to
increase the system’s type to achieve it. If so, a controller containing an integrator,
the I term, will be needed. On the other hand, if what is needed is an error less
than a certain value, this is a numerical condition. Both conditions can go together;
it may be necessary to increase the type and then fix the error.
Now consider the conditions on the poles. Sometimes we will know directly which
poles we want the closed-loop system to have. But it will be much more common to

have a specification of another kind. Then it will be necessary to reduce it to which
poles we want. Typical examples are the maximum allowable overshoot, peak time,
the oscillation frequency of the output, settling time, etc. All of them will need to
be reduced to numerical conditions on the poles.
Once all this is done, the controller will need to have as many adjustable constants
(Kp , Ki , or Kd ) as numerical conditions imposed by the specifications (both on the
error and on the poles). Therefore, we see that at most we can fix three numerical
conditions.
For example, if we want the closed-loop system to have a certain pair of complex
conjugate poles (two numerical conditions), we will need two constants; thus, we
will choose either a PI or a PD. Which one of the two? If we want zero error and
need to increase the type, a PI; if we want a quick response, a PD. There may be
a solution both with PI and PD. Then, if the PI is stable, we will prefer it over the
PD to avoid noise problems.
2 — Once the type of controller Gc (s) has been decided, we will express it as a
function of the Kp , Ki , and Kd that are needed.
3 — If there is a numerical condition on the error, we can probably already
impose it. The numerical condition will determine the canonical gain kL of the loop
transmission. By ensuring that the loop transmission including the controller Gc(s) (generally Gc GH(s)) has the canonical gain kL, we will obtain either directly the
value of one of the constants (Kp or Ki ) or we will have an equation that we will
add to those obtained in step 4.
4 — Always with the expression of Gc (s) as a function of Kp , Ki , and Kd , we will
construct the characteristic equation that the closed-loop system with this controller
has. Generally, it will be 1 + Gc GH(s) = 0. It is convenient to reduce it to a
polynomial ordered in powers of s.
From here, we can follow two different paths.
Method 1. By polynomial identification.
5 — From the desired poles, we will now construct the characteristic equation
that we want the system to have. If the desired poles are p1 and p2 , it will be
(s − p1 )(s − p2 ) = 0. It will also be necessary to reduce it to a polynomial ordered
in powers of s.
6 — Now, it only remains to determine which values of Kp , Ki , Kd in the equation
from step 4 make it coincide with that of step 5.
We will do this by matching the polynomial obtained in step 4 with the one
obtained in step 5.
— If they are not of the same degree, we obviously cannot match them. If
this happens, it will be necessary to increase the degree of the equation from step 5
with as many new arbitrary poles as needed. For example, if a new pole is needed:

(s − p1 )(s − p2 )(s − a) = 0

— Beware, a polynomial of degree n has n + 1 coefficients, but only n roots.


This means that one of the degrees of freedom is fictitious: we can always multiply
the entire polynomial by any constant without changing its roots. Before matching,
it is necessary to first equalize one of the coefficients, for example, the one with the
highest degree. If the highest degree coefficients are already identical, there is no

problem. If they are not, considering that the two polynomials are set to zero, we
can multiply each of them by the highest degree coefficient of the other to equalize
them. Then we systematically identify the other coefficients.
This identification will give us a system of equations to which we will add, if it
exists, the one from step 3. From this system, we will find the unknowns Kp , Ki ,
Kd as well as a and the other poles if they exist.
If we have had to add extra poles (a in this explanation), it is necessary to check
that they come out with a negative real part. If any come out with a positive real
part, the controller makes the system unstable and is not valid.
Method 2. By substitution.
5 — It consists of substituting the desired poles into the characteristic equation
obtained in step 4. For each pole, we will obtain an equation. The two obtained
from a pair of complex conjugate poles are equivalent; however, since each one
equates to two (by separately equating to zero the real and imaginary parts), there
is no problem. If the poles are multiple, we will work with the equation and its
derivatives. From these equations, we can find Kp , Ki , and Kd . It is necessary to
find the additional poles separately and verify that they are stable.
Example 3

The loop transfer function of a plant is GH(s) = (s + 2)/(s + 3). Design a PID that makes the feedback system have the poles −2 ± 3j and a steady-state error for a unit ramp input εpr = 0.5.
First, consider the type of controller. The plant does not provide an integrator. For a constant steady-state error with a ramp input, the closed-loop system needs to be of type 1. The controller must provide an integrator, the I term. On the other hand, there are 3 numerical conditions, so a PID is required, Gc(s) = (Kp s + Ki + Kd s²)/s.
Let's start with the error that will be present when we implement this controller. The canonical gain of the loop transfer function must be

    kL = A/εpr = 1/0.5 = 2

The loop transfer function upon incorporating the controller is

    Gc(s) · GH(s) = [(Kp s + Ki + Kd s²)/s] · (s + 2)/(s + 3)

Its canonical gain is

    kL = Ki · (2/3)

From this, we deduce Ki:

    Ki = (3/2) · kL = (3/2) · 2 = 3
Now, construct the characteristic equation that the system will have with the controller:

    1 + Gc(s) · GH(s) = 1 + [(Kp s + 3 + Kd s²)/s] · (s + 2)/(s + 3) = 0

which arranged becomes

    Kd s³ + s²(1 + Kp + 2Kd) + s(6 + 2Kp) + 6 = 0        (1)

of third degree. Now it is necessary to impose that it has the poles −2 ± 3j that
we want.
— Method 1. By identification with the desired characteristic equation.
The desired characteristic equation is

    (s + 2 − 3j)(s + 2 + 3j) = 0

that is

    s² + 4s + 13 = 0

We will need to add a pole so that it matches the degree of (1). Let's do

    (s² + 4s + 13)(s − a) = 0

which arranged becomes

    s³ + s²(4 − a) + s(13 − 4a) − 13a = 0

To be able to identify it with (1) we need to multiply it by Kd:

    Kd s³ + s²(4 − a)Kd + s(13 − 4a)Kd − 13aKd = 0        (2)

Now we can identify (1) with (2) and obtain

    1 + Kp + 2Kd = 4Kd − aKd
    6 + 2Kp = 13Kd − 4aKd
    6 = −13aKd

and isolating aKd = −6/13 from the third and substituting it into the other two,

    1 + Kp + 2Kd = 4Kd + 6/13
    6 + 2Kp = 13Kd + 24/13

from which

    Kp = 17/117 ≈ 0.1453
    Kd = 40/117 ≈ 0.3419
    a = −27/20 = −1.35
Since a = −1.35 < 0, all three poles of the closed-loop system have a negative real
part, and the system is stable. Note that the dominant pole is not −2 ± 3j but
−1.35. If this is not an issue, perfect. But if we wanted the pair of complex poles
to dominate, with a PID we cannot achieve it.

The controller will be, therefore,

    Gc(s) = [(17/117)s + 3 + (40/117)s²]/s

We can verify the calculations: the characteristic equation of the system fed back with the controller is

    1 + Gc(s) · GH(s) = 1 + {[(17/117)s + 3 + (40/117)s²]/s} · (s + 2)/(s + 3) = 0

from which

    s(s + 3) + [(17/117)s + 3 + (40/117)s²](s + 2) = 0

which arranged becomes

    0.3419s³ + 1.829s² + 6.291s + 6 = 0

which indeed has the desired roots −2 ± 3j and the additional −1.35.
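This last verification can be automated; a sketch with the constants found above:

```python
import numpy as np

# PID constants from the example
Kp, Ki, Kd = 17.0 / 117.0, 3.0, 40.0 / 117.0

# Characteristic equation (1): Kd s^3 + (1 + Kp + 2 Kd) s^2 + (6 + 2 Kp) s + 6 = 0
coeffs = [Kd, 1.0 + Kp + 2.0 * Kd, 6.0 + 2.0 * Kp, 6.0]
roots = np.roots(coeffs)
for r in sorted(roots, key=lambda r: r.real):
    print(np.round(r, 3))  # the pair -2 ± 3j and the real pole -1.35
```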
— Method 2. By substituting the desired values into the equation.
If we substitute the desired poles into the characteristic equation (1)

    Kd s³ + s²(1 + Kp + 2Kd) + s(6 + 2Kp) + 6 = 0

we will obtain two equations with two unknowns. Since the poles are complex, it will suffice to substitute one, for example, −2 + 3j, and equate the real and imaginary parts of the resulting expression to zero. First, we calculate (−2 + 3j)² = −5 − 12j and (−2 + 3j)³ = 46 + 9j. Substituting, we get

    (46 + 9j)Kd − (5 + 12j)(1 + Kp + 2Kd) + (−2 + 3j)(6 + 2Kp) + 6 = 0

and separating the real and imaginary parts and setting them to zero, we have

    46Kd − 5(1 + Kp + 2Kd) − 2(6 + 2Kp) + 6 = 0
    9Kd − 12(1 + Kp + 2Kd) + 3(6 + 2Kp) = 0

The work is more straightforward if the substitution is done in parts, first the real part and then the imaginary. First, we prepare the powers of the pole

    −2 + 3j
    (−2 + 3j)² = −5 − 12j
    (−2 + 3j)³ = 46 + 9j

and then we substitute into (1) separately, first the real part of each power and then the imaginary part:
Real part:

    46Kd − 5(1 + Kp + 2Kd) − 2(6 + 2Kp) + 6 = 0

Imaginary part:

    9Kd − 12(1 + Kp + 2Kd) + 3(6 + 2Kp) = 0

which are the same two equations, and finally, once again,

    Kp = 17/117 ≈ 0.1453
    Kd = 40/117 ≈ 0.3419
Note that with this method we do not obtain the additional poles, which we need in order to know whether the system is stable. It is necessary to find the roots of the characteristic equation, as was done in the final verification of method 1.
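Collecting terms (a step not shown in the text), the real and imaginary parts reduce to a linear system that a solver handles directly; a sketch:

```python
import numpy as np

# Real part:      46 Kd - 5(1 + Kp + 2 Kd) - 2(6 + 2 Kp) + 6 = 0
# Imaginary part:  9 Kd - 12(1 + Kp + 2 Kd) + 3(6 + 2 Kp)    = 0
# Collected:      36 Kd - 9 Kp = 11   and   -15 Kd - 6 Kp = -6
A = np.array([[ 36.0, -9.0],
              [-15.0, -6.0]])
b = np.array([11.0, -6.0])
Kd, Kp = np.linalg.solve(A, b)
print(round(Kp, 4), round(Kd, 4))  # 0.1453 0.3419 -> 17/117 and 40/117
```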
Example 4

The transfer function relating the liquid level in an intermediate tank with the voltage applied to the feed pump is Gp(s) = 320/(s² + 9.6s + 64) m/V, and that of the feedback sensor is H(s) = 3/(s + 4) V/m. Design a PID controller that makes the closed-loop system have the double pole s = −3 and a steady-state error εpr = 2.5 V for an input 5t·u(t) V.
To have a constant steady-state error for a ramp input, a type 1 system is needed. The system as it is is of type 0. Therefore, an integrator (I) is required. Moreover, a double pole and an error of 2.5 are three numerical conditions. We need to be able to adjust 3 constants. Thus, we will make a PID.
Let's start with the error.

    Gc GH(s) = [(Kp s + Ki + Kd s²)/s] · 320/(s² + 9.6s + 64) · 3/(s + 4)

which has a canonical gain kL = Ki · 320 · 3/(64 · 4). We want that

    εpr = A/kL = 5/[Ki · 320 · 3/(64 · 4)] = 2.5

therefore, it is necessary that

    Ki = (5 · 64 · 4)/(320 · 3 · 2.5) = 8/15 ≈ 0.533

The characteristic equation of the feedback system will be 1 + Gc GH(s) = 0, which after operations and with the value of Ki found becomes

    s⁴ + 13.6s³ + (102.4 + 960Kd)s² + (256 + 960Kp)s + 512 = 0

Now we must impose that this equation has the poles we want: the double −3.
— Method 1. By identification with the desired characteristic equation.
Let's construct the desired equation:

    (s + 3)² = s² + 6s + 9

which needs two more poles to be of fourth degree, like the one the system with the PID has. So, we add two more poles:

    (s² + 6s + 9)(s − a)(s − b) = s⁴ + [6 − (a + b)]s³ + [9 − 6(a + b) + ab]s² + [6ab − 9(a + b)]s + 9ab = 0

By identification, we get a system of 4 equations with 4 unknowns

    13.6 = 6 − (a + b)
    102.4 + 960Kd = 9 − 6(a + b) + ab
    256 + 960Kp = 6ab − 9(a + b)
    512 = 9ab

and note that it is nonlinear. However, by considering c = a + b and d = ab, things simplify:

    13.6 = 6 − c
    102.4 + 960Kd = 9 − 6c + d
    256 + 960Kp = 6d − 9c
    512 = 9d

from which we successively find

    c = −7.6
    d = 512/9
    Kd = (−47.8 + 512/9)/960 ≈ 0.00947
    Kp = (−256 + 9 · 7.6 + 6 · 512/9)/960 ≈ 0.160

To find the two additional poles, we have

    a + b = −7.6
    ab = 512/9

from which

    9a² + 68.4a + 512 = 0

gives us

    a = −3.8 ± 6.515j

which are actually the two poles a and b (if we had isolated b instead, we would have obtained the same equation; notice the symmetry).
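The quadratic for the additional poles can also be handed to a numerical solver; a minimal sketch:

```python
import numpy as np

# Additional closed-loop poles: roots of 9 a^2 + 68.4 a + 512 = 0
a, b = np.roots([9.0, 68.4, 512.0])
print(np.round(a, 3), np.round(b, 3))  # the pair -3.8 ± 6.515j
```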
It is good to verify the results. We need to find the roots of the characteristic equation

    s⁴ + 13.6s³ + (102.4 + 960Kd)s² + (256 + 960Kp)s + 512 = 0

with the obtained constants (remembering that we have already incorporated Ki),

    s⁴ + 13.6s³ + 111.489s² + 409.733s + 512 = 0

and verify that they are −3, −3, and −3.8 ± 6.515j.
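As in Example 3, this verification can be automated; a sketch that rebuilds the quartic from the exact expressions for Kd and Kp found above:

```python
import numpy as np

# Constants from Method 1 (Ki = 8/15 is already folded into the polynomial)
Kd = (-47.8 + 512.0 / 9.0) / 960.0                      # ~ 0.00947
Kp = (-256.0 + 9.0 * 7.6 + 6.0 * 512.0 / 9.0) / 960.0   # ~ 0.160

coeffs = [1.0, 13.6, 102.4 + 960.0 * Kd, 256.0 + 960.0 * Kp, 512.0]
roots = np.roots(coeffs)
for r in sorted(roots, key=lambda r: r.real):
    print(np.round(r, 3))  # -3.8 ± 6.515j and the double pole -3
```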
— Method 2. By substituting the desired values into the equation.
If, in the equation

    s⁴ + 13.6s³ + (102.4 + 960Kd)s² + (256 + 960Kp)s + 512 = 0

we substitute s with the value of each desired pole, we will obtain as many equations
as unknowns, in this case two, which generally allow us to obtain Kp and Kd without
further problems. However, in this example, by wanting a double pole, we would
obtain the same equation both times. One possibility is to substitute the pole s = −3
into the equation and into the result of dividing the equation by s + 3.
We will do it in an equivalent but more convenient way, which is to work with
the equation and its derivative; we know that multiple roots cancel in successive
derivatives. Therefore, we first substitute −3 into the equation, and get

8640Kd − 2880Kp + 379.4 = 0

And now to the derivative

    4s³ + 40.8s² + 2(102.4 + 960Kd)s + (256 + 960Kp) = 0

if we substitute −3 we get

−5760Kd + 960Kp − 99.2 = 0

from which we deduce

    Kd = (2880 · 99.2 − 960 · 379.4)/(8640 · 960 − 5760 · 2880) ≈ 0.00947
    Kp = (8640 · 99.2 − 5760 · 379.4)/(8640 · 960 − 5760 · 2880) ≈ 0.160

We emphasize again that with this method we do not obtain the additional poles, which we must know to determine whether the system is stable. Therefore, it is necessary to find the roots of the characteristic equation, as was done in the final verification of method 1.

Since all roots (−3, −3, and −3.8 ± 6.515j) have a negative real part, the system is stable and the controller is acceptable. We also note that the dominant poles are those we imposed; as seen in the previous example, this will not always be the case.
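The two linear equations of Method 2 can be solved the same way as in Example 3; a sketch:

```python
import numpy as np

# s = -3 substituted into the characteristic equation and its derivative:
#    8640 Kd - 2880 Kp + 379.4 = 0
#   -5760 Kd +  960 Kp -  99.2 = 0
A = np.array([[ 8640.0, -2880.0],
              [-5760.0,   960.0]])
b = np.array([-379.4, 99.2])
Kd, Kp = np.linalg.solve(A, b)
print(round(Kd, 5), round(Kp, 3))  # 0.00947 0.16
```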

Comparison of Methods 1 and 2.


Method 1, by identification, generates as many equations as the degree of the
characteristic equation. These equations can be nonlinear, although usually manageable. It provides all the Ks and all the poles.
Method 2, by substitution, only generates as many equations as constants needed
to be found, and these equations are always linear. It is necessary to find the
additional poles separately.
