2P6 Linear Systems and Control
HANDOUT 1
Signals · Systems · Feedback
[Cover figure: a plant driven through a summing junction by an input, with a feedback controller from the output closing the loop]
1
The Aims of the course are to:
Introduce and motivate the use of feedback control systems.
Introduce analysis techniques for linear systems which are used in control, signal
processing, communications, and other branches of engineering.
Introduce the specification, analysis and design of feedback control systems.
Extend the ideas and techniques learnt in the IA Mechanical
Vibrations course.
By the end of the course students should:
Be able to develop and interpret block diagrams and transfer functions for
simple systems.
Be able to relate the time response of a system to its transfer function and/or
its poles.
Understand the term ‘stability’, its definition, and its relation to the poles of a
system.
Understand the term ‘frequency response’ (or ‘harmonic response’), and its
relation to the transfer function of a system.
Be able to interpret Bode and Nyquist diagrams, and to sketch them for simple
systems.
2
What the course is really about
3
SYLLABUS

Course material (section numbers: book 1 / book 2):

Examples of feedback control systems. Use of block diagrams. Differential equation models. Meaning of 'Linear System'. (1.1-1.13, 2.2-2.3 / 1.1-1.3, 2.1-2.5)

Review of Laplace transforms. Transfer functions. Poles (characteristic roots) and zeros. Impulse and step responses. Convolution integral. Block diagrams of complex systems. (2.4-2.6 / 3.1-3.2)

Definition of stability. Pole locations and stability. Pole locations and transient characteristics. (6.1, 5.6 / 3.3-3.6)

Frequency response (harmonic response). Nyquist (polar) and Bode diagrams. (8.1-8.3 / 6.1-6.3)
4
Contents
5
1.1 Examples of feedback systems

1.1.1 Ktesibios' Float Valve Regulator (Water-clock, Alexandria 250BC)

[Figure: a supply with inflow rate qi feeds a chamber containing a float; the water level is x, and the outflow rate at the orifice "E", which needs to be constant, is qo.]

... is a feedback control system.

Block Diagram:

[Block diagram: a summing junction combines the supply flow rate "qi" and the outflow rate "qo" to give the net inflow rate "q"; q enters the "float chamber" block, whose output is the water level "x"; x drives both the "orifice E" block, giving the outflow rate qo, and the "float & valve" block, giving the supply flow rate qi, with the supply pressure acting as a disturbance.]

Signals have units (usually), are functions of time, and are represented by the connections:
e.g. Net inflow "q(t)" is measured in m³/s
Water level "x(t)" is measured in m

The Ktesibios float valve regulator is the oldest control system that has been reported in the literature. It goes back to ancient Alexandria and it is a feedback system that regulates the outflow rate qo from a chamber to a constant value (it was used by the ancient Egyptians as a water clock!).

In particular, if there is a disturbance in the supply rate qi, e.g. decreasing the water level x, then the float in the chamber would move downwards, allowing more water to flow in and thus recover x to its desired level (similarly if the disturbance increases x).

A block diagram of the float valve regulator is illustrated above. Each block has inputs and outputs which are signals (i.e. functions of time). The block itself corresponds to a system, i.e. a differential equation that relates the inputs and outputs.

It should be noted that the block diagram is not unique, as each block could be decomposed further into other sub-blocks.

6
Systems have equations, and are represented by the blocks:

e.g. the Float chamber is described by

x(t) = \frac{1}{A}\int_0^t q(\tau)\, d\tau

where A is the cross-sectional area of the chamber.

The equation above is associated with the system represented by the "float chamber" block. In particular, it relates the output x(t) (water level) with the input q(t) (rate of water flow into the chamber). It should be noted that an equivalent way to write this equation is

\dot{x}(t) = \frac{1}{A} q(t)
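A minimal numerical sketch (not part of the handout) of this integrator relationship, simulating the water level by Euler integration of ẋ = q/A; the area A and the inflow profile are illustrative assumptions.

```python
# Simulate x_dot(t) = q(t)/A for the float chamber (assumed numbers).
import numpy as np

A = 0.5                      # cross-sectional area (m^2), assumed value
dt = 0.01                    # time step (s)
t = np.arange(0.0, 10.0, dt)
q = 0.02 * np.ones_like(t)   # net inflow rate (m^3/s), assumed constant
q[t > 5.0] = -0.01           # a disturbance: net outflow after t = 5 s

x = np.zeros_like(t)         # water level (m), starting from x(0) = 0
for k in range(1, len(t)):
    x[k] = x[k-1] + dt * q[k-1] / A   # x(t) = (1/A) * integral of q

print(f"water level at t = 10 s: {x[-1]:.3f} m")
```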
7
1.1.2 Watt's Governor

Watt's Governor is a control system that is a central part of Watt's steam engine, an invention that was at the core of the industrial revolution.

8

Watt's Governor

[Figure: the flyweight (centrifugal) governor mechanism connected to the steam valve.]

The operation of Watt's Governor is based on a flyweight mechanism that increases/decreases the opening of the steam valve as the speed of the engine decreases/increases. The latter occurs due to a change in the load applied.
9
1.1.3 A Helicopter Flight Control System

[Photograph: a helicopter]

... is a feedback control system.

Block Diagram:

[Block diagram: the pilot's commands enter the flight control computer; its digital outputs pass through a DAC to form the control inputs (main rotor collective, cyclic pitch, cyclic roll, tail rotor collective) to the "Helicopter dynamics" block, which is also subject to wind gusts (disturbances); the measured outputs (vertical acceleration, pitch rate, roll rate, yaw rate) are returned to the computer through an ADC.]

Designing the flight control system of a helicopter is in general a non-trivial problem as helicopters are, so called, open loop unstable systems, i.e. the helicopter will crash if its controllers are not in operation.

As illustrated in the block diagram the flight control policy will be implemented by a computer, hence an ADC is needed to convert the analogue measurements to digital signals to be processed by the flight control computer. A DAC will then convert the digital outputs of the computer to appropriate analogue signals for the actuators that determine the control inputs.

It should be noted that external disturbances can also affect the dynamic behaviour of the helicopter.

10
1.1.4 Internet congestion control (TCP)

[Figure: a source src 1 sends packets at rate x1 through link A to destination dst 1; acknowledgements (acks) return through link B.]

Internet congestion control is a protocol that determines the rates with which computers/devices send data, based on congestion signals they receive from the network.
Files to be transferred across the Internet using the Transmission Control Protocol (TCP) - e.g. a download from the web - are broken into packets of size typically around 1500 bytes, with headers specifying the destination and the number of the packet amongst other information. These packets are sent one by one into the network, with the recipient sending acknowledgements back to the source whenever one is received.
Routers in the network typically operate a drop tail queue. If a packet is received when the queue is full then it is simply discarded. Packet loss thus indicates congestion. If a packet is received out of order, it is assumed that intervening packets have been lost. The recipient sends a duplicate acknowledgement to signal this and the source lowers its rate (in response to the congestion) and resends the lost packet(s). Whilst a steady stream of successive acknowledgements is being received the source gradually increases its sending rate. In normal operation sources are thus constantly increasing and decreasing their rates in an attempt to make use of the available bandwidth.
Congestion (i.e. full queues and the resulting packet loss) can occur anywhere in the network - at the edges (e.g. your ADSL modem, or at the exchange), in the core (e.g. a big transatlantic link) or, very often, at peering points, which are the connections between the networks that make up the Internet.
11
11
1.1.5 The lac operon – E. coli (≈ 130 million years BC!)

[Diagram: genes and control regions of the lac operon along the DNA of E. coli.]

Feedback control is also ubiquitous in biological processes. A significant part of the DNA has a regulatory role, controlling which chemical reactions take place and at what rates. This is crucial for the various functionalities of the cells.
(Only read this if you're interested!) The diagram above illustrates 4 genes and some control regions along the DNA of E. coli. E. coli's favourite sugar is glucose, but it will quite happily "eat" lactose if there's no glucose around. If there is glucose around or if there is no lactose around then there is no need to produce β-galactosidase (the enzyme which breaks down lactose, first into allolactose and then glucose) or the permease (which transports lactose into the cell). In addition, when it is metabolising lactose, it wants to regulate the amount of enzyme production to match the available lactose. This is the control system which achieves this: The lacI gene codes for a protein (the repressor) which binds to the operator (O) and stops the lacZ, Y and A genes being transcribed (i.e. "read"). If there's lactose in the cell, and at least some β-galactosidase, then there will also be allolactose (the inducer). In this case the repressor binds with it instead, and falls off the DNA. In the absence of glucose, the cAMP/CRP complex binds at the promoter (P); this encourages RNA polymerase to bind and initiate transcription of lacZ, Y and A.
12
12
1.2 Block Diagrams

1.2.1 What goes in the blocks?

Some of them act like "amplifiers" or "attenuators", e.g.

Force F → [ 1/Mass ] → Acceleration a,   i.e.  a = \frac{1}{Mass}\times F

[Figure: an op-amp integrator circuit with input voltage Vi, resistance R, capacitance C and output voltage Vo, for which \dot{V}_o = \frac{1}{RC} V_i.]

The figures above illustrate various examples of systems and the corresponding equations that describe them. In the first block the output is equal to the input multiplied by a constant.

Note: By drawing this circuit as a block, we are implicitly assuming that any current it draws has negligible effect on the preceding block and that the following block draws insignificant current from it (i.e. that R is large and the op-amp is close to ideal).
13
13
1.2.2 Signals and systems
Block diagrams represent the flow of information, not the flow of "stuff".

Blocks represent "systems" (equations mapping inputs into outputs), whose inputs and outputs are "signals" (each taking a numeric value as a function of time).
This is NOT a block diagram (in our sense)
14
14
1.2.3 ODE models – A circuits example

[Figure: a circuit with input voltage difference x across a series inductor L carrying current i, feeding a parallel combination of a capacitor C and a resistor R, across which the output voltage difference is y.]

A more involved system described by an electrical circuit is illustrated above. The voltage difference x is the input of the system and the voltage difference y is the output. A differential equation relating x and y is derived using the standard differential equations satisfied by the voltage and current of an inductor and capacitor, respectively (Part IA):

x - y = L\frac{di}{dt}

i = C\dot{y} + \frac{y}{R}

\implies x - y = L\left(C\ddot{y} + \frac{\dot{y}}{R}\right)

\implies LC\ddot{y} + \frac{L}{R}\dot{y} + y = x

As a block diagram:   x → [ LC\ddot{y} + \frac{L}{R}\dot{y} + y = x ] → y
15
15
1.2.4 Block diagrams and the control engineer
16
16
1.3 Linear Systems

1.3.1 What is a "linear system"?

Consider a "system" f mapping dynamic inputs u into outputs y:

u(t) → [ f ] → y(t),   y = f(u)

The "system" f is linear if superposition holds, that is, if

f(u_1) + f(u_2) = f(u_1 + u_2)

for any u_1 and u_2.

[Block diagram: feeding u_1 and u_2 through f separately and summing the outputs y_1 + y_2 gives the same result as summing u_1 + u_2 first and feeding the sum through f.]

A system is linear if it satisfies the principle of superposition, i.e. if input u_1 gives output y_1, and input u_2 gives output y_2, then the input u_1 + u_2 will have as output y_1 + y_2.
17
17
In particular, f(2u) = 2 f(u), e.g.

[Figure: for the same input doubled in amplitude, a linear system produces an output doubled in amplitude, whereas a non-linear system does not.]

The principle of superposition is illustrated by the diagram above. Here the input is multiplied by 2, hence for a linear system the output is also multiplied by the same factor. If this is not the case then the system is non-linear.
18
Note that, if a system can be described by linear differential equations with
constant coefficients, and possibly delays, then it is necessarily linear (and
time invariant).
For example:
\frac{d^2x(t)}{dt^2} + x(t-T) = \frac{du(t)}{dt} + 2u(t)

describes a linear system, as if

\frac{d^2x_1(t)}{dt^2} + x_1(t-T) = \frac{du_1(t)}{dt} + 2u_1(t)

and

\frac{d^2x_2(t)}{dt^2} + x_2(t-T) = \frac{du_2(t)}{dt} + 2u_2(t)

then

\frac{d^2}{dt^2}\big(x_1(t) + x_2(t)\big) + \big(x_1(t-T) + x_2(t-T)\big) = \frac{d}{dt}\big(u_1(t) + u_2(t)\big) + 2\big(u_1(t) + u_2(t)\big)

(because \frac{d}{dt}\big(u_1(t) + u_2(t)\big) = \frac{du_1}{dt} + \frac{du_2}{dt})
which is just the superposition of solutions. If there are x2 terms or sin(x)
terms, for example, then this doesn’t work.
Almost all the linear systems considered in this course will be of this form,
although it is convenient to develop the theory in more generality.
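A quick numerical check (not in the handout) that superposition holds for an LTI system: the response to u1 + u2 equals the sum of the individual responses. The transfer function used, (s+2)/(s²+2s+2), is an arbitrary illustrative choice.

```python
# Numerical superposition check for an LTI system (illustrative example).
import numpy as np
from scipy import signal

sys = signal.lti([1, 2], [1, 2, 2])       # (s+2)/(s^2+2s+2), chosen arbitrarily
t = np.linspace(0, 20, 2001)
u1 = np.sin(t)
u2 = np.where(t > 5, 1.0, 0.0)            # a step starting at t = 5

_, y1, _ = signal.lsim(sys, u1, t)        # response to u1
_, y2, _ = signal.lsim(sys, u2, t)        # response to u2
_, y12, _ = signal.lsim(sys, u1 + u2, t)  # response to u1 + u2

print(np.max(np.abs(y12 - (y1 + y2))))    # ~0 up to rounding: superposition holds
```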
19
19
1.3.2 Linearization
All real systems are actually nonlinear, but many of these behave
approximately linearly for small perturbations from equilibrium.
e.g. Pendulum:
General case
Suppose a system is described by an ODE of the form
\dot{x} = f(x, u)

with an equilibrium point (x_0, u_0), i.e. f(x_0, u_0) = 0.
20
20
Let x = x_0 + \delta x, u = u_0 + \delta u, and use a Taylor series expansion to obtain:

\dot{x}_0 + \delta\dot{x} = f(x_0 + \delta x,\; u_0 + \delta u)
 = \underbrace{f(x_0, u_0)}_{\text{just a constant (zero)}} + \left.\frac{\partial f}{\partial x}\right|_{x_0,u_0}\delta x + \left.\frac{\partial f}{\partial u}\right|_{x_0,u_0}\delta u + \underbrace{\text{higher order terms}}_{\text{neglect}}

Since \dot{x}_0 = 0 and f(x_0,u_0) = 0, this results in the linear ODE

\delta\dot{x} = A\,\delta x + B\,\delta u,   where A = \left.\frac{\partial f}{\partial x}\right|_{x_0,u_0}, B = \left.\frac{\partial f}{\partial u}\right|_{x_0,u_0}
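A minimal sketch (not from the handout) of this linearization done numerically: finite-difference Jacobians A = ∂f/∂x and B = ∂f/∂u at an equilibrium. The pendulum model and its parameters are illustrative assumptions.

```python
# Numerical linearization of x_dot = f(x, u) about an equilibrium.
import numpy as np

g, l, c = 9.81, 1.0, 0.2      # gravity, length, damping (assumed values)

def f(x, u):
    """x = [angle, angular velocity]; u = normalised applied torque."""
    return np.array([x[1], -(g / l) * np.sin(x[0]) - c * x[1] + u])

x0 = np.array([0.0, 0.0])     # hanging-down equilibrium
u0 = 0.0                      # f(x0, u0) = 0
eps = 1e-6

A = np.column_stack([(f(x0 + eps * e, u0) - f(x0, u0)) / eps
                     for e in np.eye(2)])
B = ((f(x0, u0 + eps) - f(x0, u0)) / eps).reshape(2, 1)
print(A)   # approximately [[0, 1], [-g/l, -c]], as the Taylor expansion predicts
print(B)   # approximately [[0], [1]]
```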
21
21
1.3.3 When can we use linear systems theory?
22
22
1.4 Laplace Transforms
Laplace transforms are an extremely convenient tool for the analysis of linear,
time-invariant, causal systems. We shall now briefly review some pertinent facts that
you learnt at Part IA and introduce some new ideas.
DEFINITION:

\bar{y}(s) = \int_0^\infty y(t)\, e^{-st}\, dt

(provided the integral converges for sufficiently large and positive values of s)

\bar{y}(s) is NOT a function of t; it IS a function of s.

Various notations:

\mathcal{L}\{y(t)\} = \mathcal{L}y = \bar{y}(s) = \int_0^\infty y(t)e^{-st}\, dt,   y(t) = \mathcal{L}^{-1}\{\bar{y}(s)\}

EXAMPLES

For y(t) = e^{-at}:

\bar{y}(s) = \int_0^\infty e^{-at}e^{-st}\, dt = \left[\frac{-e^{-(s+a)t}}{s+a}\right]_0^\infty = \frac{1}{s+a}   (taking Real(s) > -a).
23
23
Addition or Superposition property
If
y(t) = Ay1 (t) + By2 (t)
then
ȳ(s) = Aȳ1 (s) + B ȳ2 (s)
(A, B constants)
Proof:

\bar{y} = \int_0^\infty (Ay_1 + By_2)e^{-st}\, dt = A\int_0^\infty y_1 e^{-st}\, dt + B\int_0^\infty y_2 e^{-st}\, dt = A\bar{y}_1 + B\bar{y}_2

Transforms of derivatives

\mathcal{L}\dot{y}(t) = \int_0^\infty \frac{dy}{dt}e^{-st}\, dt = \big[y(t)e^{-st}\big]_0^\infty + s\int_0^\infty y(t)e^{-st}\, dt = s\bar{y} - y(0)

\mathcal{L}\ddot{y} = \int_0^\infty \frac{d^2y}{dt^2}e^{-st}\, dt = \left[\frac{dy}{dt}e^{-st}\right]_0^\infty + s\int_0^\infty \frac{dy}{dt}e^{-st}\, dt = s^2\bar{y} - sy(0) - \dot{y}(0)
24
24
Obvious pattern:
\mathcal{L}y = \bar{y}

\mathcal{L}\frac{d^ny}{dt^n} = s^n\bar{y} - s^{n-1}y(0) - s^{n-2}\dot{y}(0) - \cdots - \frac{d^{n-1}y}{dt^{n-1}}(0)

and, when all the initial conditions are zero,

\mathcal{L}\dot{y} = s\bar{y},   \mathcal{L}\ddot{y} = s^2\bar{y},   \ldots,   \mathcal{L}\frac{d^ny}{dt^n} = s^n\bar{y}
25
25
Laplace Transform of t^n

Define \bar{y}_n(s) = \mathcal{L}\left\{\frac{t^n}{n!}\right\}.

\bar{y}_n = \int_0^\infty \frac{t^n}{n!}e^{-st}\, dt = \left[-\frac{1}{s}\frac{t^n}{n!}e^{-st}\right]_0^\infty + \frac{1}{s}\int_0^\infty \frac{nt^{n-1}}{n!}e^{-st}\, dt = \frac{1}{s}\int_0^\infty \frac{t^{n-1}}{(n-1)!}e^{-st}\, dt = \frac{1}{s}\bar{y}_{n-1}

Thus we have

\bar{y}_0 = \mathcal{L}\{1\} = \frac{1}{s},   \bar{y}_1 = \mathcal{L}\{t\} = \frac{1}{s^2},   \bar{y}_2 = \mathcal{L}\left\{\frac{t^2}{2}\right\} = \frac{1}{s^3},   \bar{y}_3 = \mathcal{L}\left\{\frac{t^3}{3\times 2}\right\} = \frac{1}{s^4}

Similarly \bar{y}_n = \mathcal{L}\left\{\frac{t^n}{n!}\right\} = \frac{1}{s^{n+1}}
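A quick symbolic check (not part of the handout) of this result using sympy's laplace_transform.

```python
# Verify L{t^n/n!} = 1/s^(n+1) symbolically for a few n.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
for n in range(4):
    F = sp.laplace_transform(t**n / sp.factorial(n), t, s, noconds=True)
    print(n, sp.simplify(F))    # 1/s, 1/s**2, 1/s**3, 1/s**4
```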
26
26
Poles and Zeros
G(s) = \frac{n(s)}{d(s)}

where n(s) and d(s) are polynomials in s. The roots of n(s) = 0 are the zeros of G(s); the roots of d(s) = 0 are its poles.

Example:

G(s) = \frac{4s^2 - 8s - 60}{s^3 + 2s^2 + 2s} = \frac{4(s+3)(s-5)}{s(s+1+j)(s+1-j)}

[Pole-zero map in the s-plane: poles (marked X) at s = 0 and s = -1 \pm j; zeros (marked O) at s = -3 and s = 5.]
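A minimal numerical check (not from the handout) of the pole and zero locations above, using numpy.roots on the numerator and denominator coefficients.

```python
# Poles and zeros of G(s) = (4s^2 - 8s - 60)/(s^3 + 2s^2 + 2s).
import numpy as np

num = [4, -8, -60]        # 4s^2 - 8s - 60
den = [1, 2, 2, 0]        # s^3 + 2s^2 + 2s

print("zeros:", np.roots(num))   # [ 5. -3.]
print("poles:", np.roots(den))   # [-1.+1.j -1.-1.j  0.+0.j]
```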
27
27
Time functions and s-plane pole positions

\bar{y} = \mathcal{L}\{e^{i\omega t}\} = \frac{1}{s - i\omega} = \frac{s + i\omega}{s^2 + \omega^2} = \mathcal{L}\cos\omega t + i\,\mathcal{L}\sin\omega t

Equating reals:   \mathcal{L}\cos\omega t = \frac{s}{s^2 + \omega^2}

and similarly:   \mathcal{L}\sin\omega t = \frac{\omega}{s^2 + \omega^2}

with poles at s = \pm\omega i in both cases.

NOTE: Results like this are tabulated in the Maths and Electrical Data Books.
28
28
Shift in s theorem
If \mathcal{L}y(t) = \bar{y}(s), then \mathcal{L}\{e^{at}y(t)\} = \bar{y}(s-a).

Proof:

\mathcal{L}\{e^{at}y(t)\} = \int_0^\infty e^{-(s-a)t}y(t)\, dt = \bar{y}(s-a)

Example:

\mathcal{L}\{2e^{-t}\sin 10t\} = \frac{20}{(s+1)^2 + 100},   because \mathcal{L}\sin 10t = \frac{10}{s^2 + 100}

[Figure: the time function y(t) = 2e^{-t}\sin 10t and its pole positions at s = -1 \pm 10i.]
29
29
Initial and Final Value Theorems

If \bar{y}(s) = \mathcal{L}\{y(t)\} then, whenever the indicated limits exist, we have

Final Value Theorem:   \lim_{t\to\infty} y(t) = \lim_{s\to 0} s\bar{y}(s)

Initial Value Theorem:   \lim_{t\to 0^+} y(t) = \lim_{s\to\infty} s\bar{y}(s)

The Final Value Theorem is often used in control system design and will be discussed extensively later within the course. In particular, it allows one to quantify the behaviour of a signal as time tends to infinity, from its Laplace transform. It should be noted that this theorem cannot be used for signals that do not tend to a constant value as t \to \infty (e.g. sinusoids, or signals that tend to infinity).

Proofs omitted (as it's a little tricky to prove these properly). However, for rational functions of s it is easy to demonstrate that these relationships hold:

Let a partial fraction expansion of \bar{y}(s) be given as:

\bar{y}(s) = \frac{b_0}{s} + \sum_{i=1}^{n}\frac{b_i}{s + a_i}   and so   y(t) = b_0 + \sum_{i=1}^{n} b_i e^{-a_i t}.

Hence

y(0) = b_0 + \sum_{i=1}^{n} b_i   and, provided a_i > 0,   y(\infty) = b_0.
30
30
1.5 Key points
We distinguish between causes (the input signals) and effects (the output
signals).
31
31
Part IB Paper 6: Information Engineering
LINEAR SYSTEMS AND CONTROL
Ioannis Lestas
HANDOUT 2
transfer function  ⇌  impulse response   (a Laplace transform pair)

u(t) → [ g(t) ] → y(t) = \int_0^t u(\tau)g(t-\tau)\, d\tau = u(t) * g(t) = g(t) * u(t)
1
Summary
The impulse response, step response and transfer function of a Linear, Time Invariant and causal (LTI) system each completely characterize the input-output properties of that system.
2
Contents
3
2.1 Preliminaries

2.1.1 Definition of the impulse "function"

The impulse can be defined in many different ways, for example by taking "limits", as \delta \to 0, of (a) a rectangular pulse of width \delta and height 1/\delta, or (b) a triangular pulse, each of unit area.

\delta(t - T) denotes an impulse at t = T (the impulse occurs when its argument = 0).

The impulse function is the limit of a square or triangular pulse that has area equal to 1 and is zero everywhere apart from within an interval of width \delta that tends to 0.
4
2.1.2 Properties of the impulse "function"

Consider a continuous function f(t), and let h_\delta(t) denote the pulse approximation to the impulse ((a) on the previous page):

h_\delta(t) = \begin{cases} 1/\delta & \text{if } -\delta/2 < t < \delta/2 \\ 0 & \text{otherwise} \end{cases}

Let

I = \int_{-\infty}^{\infty} f(t)\, h_\delta(t-T)\, dt = \int_{T-\delta/2}^{T+\delta/2} \frac{1}{\delta} f(t)\, dt

Now,

I \le \frac{1}{\delta}\times\delta\times \max_{T-\delta/2 \le t \le T+\delta/2} f(t)

and also

I \ge \frac{1}{\delta}\times\delta\times \min_{T-\delta/2 \le t \le T+\delta/2} f(t)

As \delta \to 0 both bounds tend to f(T), and hence I \to f(T).

The impulse function \delta(t) thus satisfies the property

\int_{-\infty}^{\infty} f(t)\,\delta(t-T)\, dt = f(T)

This is shown above by considering the integral when \delta(t-T) is replaced with a pulse h_\delta(t-T) of width \delta and area 1: upper and lower bounds to this integral are obtained, and when \delta \to 0 the integral tends to f(T).

A similar argument leads to the same result for the triangular approximation to the impulse (defined above as (b)).

Formally, the unit impulse is defined as any "function" \delta(t) which has the property

\int_{-\infty}^{\infty} f(t)\,\delta(t-T)\, dt = f(T)

Its Laplace transform is

\mathcal{L}\{\delta(t)\} = \int_0^\infty \delta(t)e^{-st}\, dt = e^{-s\times 0} = 1
0
If you know the impulse response of a system, then the response of that
system to any input can be determined using convolution, as we shall now
show:
6
2.2 The convolution integral

2.2.1 Direct derivation of the convolution integral

u(t) → [ LTI system (impulse response = g(t)) ] → y(t)

INPUT → OUTPUT (using the impulse response):   \delta(t) → g(t),   \delta(t-\tau) → g(t-\tau)

The output y(t) of a linear system for a given input u(t) is equal to the convolution of the impulse response g(t) of the system with u(t). This is a very important result as it shows that the impulse response of a linear system fully characterizes the system, in the sense that it allows one to deduce the output y(t) for any input u(t).
7
2.2.2 Alternative statements of the convolution integral

y(t) = \int u(\tau)g(t-\tau)\, d\tau   is abbreviated as   y(t) = u(t) * g(t)

Let T = t - \tau, so \tau = t - T and d\tau = -dT. It follows that

y(t) = \int_{-\infty}^{\infty} u(t-T)g(T)\, dT = g(t) * u(t)
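A numerical illustration (not from the handout) that the output of an LTI system equals the convolution of the input with the impulse response. The system G(s) = 1/(s+1)² is an arbitrary choice for the sketch.

```python
# Compare direct simulation with a discrete approximation of u * g.
import numpy as np
from scipy import signal

sys = signal.lti([1], [1, 2, 1])          # 1/(s+1)^2, chosen arbitrarily
dt = 0.001
t = np.arange(0, 10, dt)
u = np.cos(0.5 * t)

_, g = signal.impulse(sys, T=t)           # impulse response g(t)
y_conv = np.convolve(u, g)[:len(t)] * dt  # discrete approximation of u*g
_, y_sim, _ = signal.lsim(sys, u, t)      # direct simulation of the system

print(np.max(np.abs(y_conv - y_sim)))     # small (discretization error only)
```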
8
2.3 The Transfer Function (for ODE systems)
\frac{d^2y}{dt^2} + \alpha\frac{dy}{dt} + \beta y = a\frac{du}{dt} + bu

or, taking Laplace transforms with zero initial conditions,

(s^2 + \alpha s + \beta)\,\bar{y}(s) = (as + b)\,\bar{u}(s)

and so

\bar{y}(s) = \underbrace{\frac{as + b}{s^2 + \alpha s + \beta}}_{\text{Transfer function}}\,\bar{u}(s)

The function \frac{as+b}{s^2+\alpha s+\beta} is called the transfer function from \bar{u}(s) (the input) to \bar{y}(s) (the output).
Clearly the same technique will work for higher order linear ordinary
differential equations (with constant coefficients). For such systems, the
transfer function can be regarded as a placeholder for the coefficients of the
differential equation.
9
2.3.1 Laplace transform of the convolution integral
We have seen that both convolution with the impulse response (in the time domain) and multiplication by the transfer function (in the Laplace domain) can be used to determine the output of a linear system. What is the relationship between these techniques?

Assume that g(t) = u(t) = 0 for t < 0. Then

\mathcal{L}\big(g(t)*u(t)\big) = \mathcal{L}\left(\int_{-\infty}^{\infty} g(\tau)u(t-\tau)\, d\tau\right)
 = \int_0^\infty e^{-st}\int_{-\infty}^{\infty} g(\tau)u(t-\tau)\, d\tau\, dt
 = \int_{-\infty}^{\infty}\int_0^\infty e^{-st}g(\tau)u(t-\tau)\, dt\, d\tau
 = \int_0^\infty g(\tau)e^{-s\tau}\, d\tau \int_0^\infty u(t')e^{-st'}\, dt' = \bar{g}(s)\,\bar{u}(s)

(substituting t' = t - \tau and using g(t) = u(t) = 0 for t < 0).

In words, a Laplace transform (easy when you get used to them) turns a convolution (always hard!) into a multiplication (very easy).
10
10
2.4 The transfer function for any linear system

If

y(t) = g(t) * u(t)

is the response of an LTI system with impulse response g(t) to the input u(t), then we can also write

\bar{y}(s) = G(s)\,\bar{u}(s),   where G(s) = \mathcal{L}\{g(t)\}

transfer function  ⇌  impulse response   (a Laplace transform pair)
We shall use the notation x̄(s) to represent the Laplace transform of a signal
x(t), and uppercase characters to represent transfer functions (e.g. G(s)).
11
11
In general a system may have more than one input. In this case, the transfer
function from a particular input to a particular output is defined as the
Laplace transform of that output when an impulse is applied to the given
input, all other inputs are zero and all initial conditions are zero.
This is most easily seen in the Laplace domain: If an LTI system has an input u and an output y then we can always write

\bar{y}(s) = G(s)\,\bar{u}(s) + \text{other terms}

G(s) is then called the transfer function from \bar{u}(s) to \bar{y}(s). (Or, the transfer
function relating ȳ(s) and ū(s))
Here the “other terms” could be a result of non-zero initial conditions or of
other non-zero inputs (disturbances, for example).
NOTE: Remember that although the transfer function is
defined in terms of the impulse response, it is usually
most easily calculated directly from the system’s differential
equations.
12
12
2.5 Example: DC motor
The following example will illustrate the impulse and step responses and the
transfer function. We take the input to the motor to be the applied voltage
e(t), and the output to be the shaft angular velocity \omega(t) := \dot{\theta}(t).

[Figure: the DC motor. The applied voltage e(t) drives a current i through resistance R and inductance L against the back-emf K\omega(t); the motor torque \tau turns a shaft of inertia J with viscous friction B, at angle \theta.]

The governing equations are:

1)  \tau(t) = K i(t)   (motor torque)
2)  \tau(t) - B\omega(t) = J\dot{\omega}(t)   (mechanics)
3)  e(t) = Ri(t) + L\frac{di(t)}{dt} + K\omega(t)   (Kirchhoff)

Find the effect of e(t) on \omega(t):

Take Laplace Transforms, assuming that \omega(0) = i(0) = 0:

1)  \bar{\tau}(s) = K\bar{i}(s)
2)  \bar{\tau}(s) - B\bar{\omega}(s) = J\big(s\bar{\omega}(s) - \omega(0)\big)
3)  \bar{e}(s) = R\bar{i}(s) + L\big(s\bar{i}(s) - i(0)\big) + K\bar{\omega}(s)
13
13
We can now eliminate \bar{i}(s) and \bar{\tau}(s) to leave one equation relating \bar{e}(s) and \bar{\omega}(s):

First 1) and 2) give:

K\bar{i}(s) = (Js + B)\bar{\omega}(s)

and rearranging 3) gives the required relationship. That is,

output:  \bar{\omega}(s) = \underbrace{\frac{k}{(T_1 s + 1)(T_2 s + 1)}}_{\text{Transfer Function}}\,\bar{e}(s)  :input

We call G(s) = \frac{k}{(sT_1+1)(sT_2+1)} the transfer function from \bar{e}(s) to \bar{\omega}(s) (or, relating \bar{\omega}(s) and \bar{e}(s)).

The diagram

\bar{e}(s) → [ G(s) ] → \bar{\omega}(s)

represents this relationship between the input and the output:

By using Laplace transforms, all LTI blocks can be treated as multiplication by a transfer function.

As illustrated in the example, it is more convenient to find the output of a system when a particular input is applied by working in the Laplace domain. In particular we first find the transfer function relating the input and output of interest; the Laplace transform of the output can then be evaluated by multiplying the transfer function by the Laplace transform of the input.

For the motor used in the lego demonstrations we have k \approx 2.2, T_1 \approx 54\,\text{ms} and T_2 \approx 1\,\text{ms}.
14
14
2.5.1 Impulse Response of the DC motor

To find the impulse response, we let e(t) = \delta(t), which implies \bar{e}(s) = 1, and put \omega(0) = i(0) = 0. So,

\bar{\omega}(s) = \frac{k}{(T_2 s + 1)(T_1 s + 1)}\times 1

Split into partial fractions:

\bar{\omega}(s) = \frac{k}{T_1 - T_2}\left(\frac{1}{s + 1/T_1} - \frac{1}{s + 1/T_2}\right)

Hence,

\omega(t) = \frac{k}{T_1 - T_2}\Big(\underbrace{e^{-t/T_1}}_{\text{Slow Transient}} - \underbrace{e^{-t/T_2}}_{\text{Fast Transient}}\Big),   e.g. T_2 \approx 1\,\text{ms}, T_1 \approx 54\,\text{ms}

[Plot: the impulse response \omega(t) (rad/sec) against t (sec). The total impulse response is the sum of the slow transient \frac{k}{T_1-T_2}e^{-t/T_1} and the fast transient -\frac{k}{T_1-T_2}e^{-t/T_2}; the initial value is 0 (= \lim_{s\to\infty} s\bar{\omega}(s)).]

It should be noted that the response of the system is dominated by the slow transient, i.e. the term with the largest time constant T_i. Note also that the coefficients multiplying the time t in the exponential terms are equal to the poles of the transfer function.
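A sketch (not part of the handout) reproducing the motor's impulse and step responses numerically with scipy, using the quoted values k = 2.2, T1 = 54 ms, T2 = 1 ms.

```python
# DC motor G(s) = k/((T1 s + 1)(T2 s + 1)): impulse and step responses.
import numpy as np
from scipy import signal

k, T1, T2 = 2.2, 0.054, 0.001
motor = signal.lti([k], np.polymul([T1, 1], [T2, 1]))

t = np.linspace(0, 0.3, 3000)
_, w_imp = signal.impulse(motor, T=t)    # impulse response omega(t)
_, w_step = signal.step(motor, T=t)      # step response

# compare with the analytical impulse response derived above
w_exact = k / (T1 - T2) * (np.exp(-t / T1) - np.exp(-t / T2))
print(np.max(np.abs(w_imp - w_exact)))   # small numerical error
print(w_step[-1])                        # approaches the d.c. gain k = 2.2
```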
15
15
2.5.2 Step Response of the DC motor

To find the step response, we let e(t) = H(t), so \bar{e}(s) = 1/s, and

\bar{\omega}(s) = \frac{k}{(T_2 s + 1)(T_1 s + 1)}\times\frac{1}{s}

Split into partial fractions:

\bar{\omega}(s) = k\left[\frac{1}{s} + \frac{T_2}{T_1 - T_2}\cdot\frac{1}{s + 1/T_2} - \frac{T_1}{T_1 - T_2}\cdot\frac{1}{s + 1/T_1}\right]

so that

\omega(t) = k\left[1 + \frac{T_2}{T_1 - T_2}e^{-t/T_2} - \frac{T_1}{T_1 - T_2}e^{-t/T_1}\right]

[Plot: the step response components against t (sec): the fast transient \frac{kT_2}{T_1-T_2}e^{-t/T_2}, the slow transient -\frac{kT_1}{T_1-T_2}e^{-t/T_1}, and their sum; the initial slope of the response is 0.]
16
16
2.5.3 Deriving the step response from the
impulse response
For a system with impulse response g(t), the step response is given by
y(t) = \int_0^t g(\tau)H(t-\tau)\, d\tau = \int_0^t g(\tau)\, d\tau
i.e.
step response = integral of impulse response
Check this on the example. Make sure you understand where the term H(t) in the
step response comes from.
17
2.7 Interconnections of LTI systems

a) [Block diagram: \bar{x}(s) passes through F(s) and \bar{y}(s) through G(s); their outputs are summed to give \bar{v}(s), which passes through H(s) to give \bar{z}(s).]

represents the equations:

\bar{v}(s) = F(s)\bar{x}(s) + G(s)\bar{y}(s)   and   \bar{z}(s) = H(s)\bar{v}(s)

\implies \bar{z}(s) = H(s)\big(F(s)\bar{x}(s) + G(s)\bar{y}(s)\big) = \underbrace{H(s)F(s)}_{\text{transfer function from }\bar{x}(s)\text{ to }\bar{z}(s)}\bar{x}(s) + \underbrace{H(s)G(s)}_{\text{transfer function from }\bar{y}(s)\text{ to }\bar{z}(s)}\bar{y}(s)

One of the main advantages of the use of transfer functions is the fact that they simplify significantly the analysis when we have interconnections of systems. In particular, finding the transfer function of a system that is an interconnection of other systems becomes a relatively simple problem involving algebraic operations. Such examples are illustrated here and on the next page.
18
18
2.7.1 “Simplification” of block diagrams
Recognizing that blocks represent multiplications, and using the above formulae, it
is often easier to rearrange block diagrams to determine overall transfer functions,
e.g. from ū(s) to ȳ(s) below.
[Block diagram: \bar{u}(s) enters a summing junction and passes through G_1(s) and G_2(s) in series; the result enters a second summing junction driving G_3(s), whose output \bar{y}(s) is fed back through K_1(s) to that junction.

Step 1: the inner loop around G_3(s) with feedback K_1(s) is replaced by the single block \frac{G_3(s)}{1 + G_3(s)K_1(s)}, and the series blocks combine to G_1(s)G_2(s).

Step 2: the series combination gives the single forward-path block \frac{G_1(s)G_2(s)G_3(s)}{1 + G_3(s)K_1(s)} after the first summing junction.]
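A symbolic check (not in the handout) of the inner-loop simplification used above: negative feedback K1 around a forward path G3 gives G3/(1 + G3 K1). G3 and K1 are treated as abstract symbols.

```python
# Verify the feedback-loop formula symbolically with sympy.
import sympy as sp

G3, K1, v, y = sp.symbols('G3 K1 v y')

# loop equation: y = G3*(v - K1*y); solve for y and read off y/v
y_expr = sp.solve(sp.Eq(y, G3 * (v - K1 * y)), y)[0]
print(sp.simplify(y_expr / v))       # G3/(G3*K1 + 1)
```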
19
19
2.8 More transfer function examples

To obtain the transfer function, in each case, we take all initial conditions to be zero. The following three systems all have the same transfer function (a 1st order lag).

1) Spring/damper system

[Figure: a damper (rate \lambda) in series with a spring (stiffness k); x(t) and y(t) are the displacements at the two terminals of the spring.]

\lambda\dot{y} = k(x - y)
\implies \frac{\lambda}{k}\dot{y} + y = x
\implies \frac{\lambda}{k}s\bar{y}(s) + \bar{y}(s) = \bar{x}(s)
\implies \left(\frac{\lambda}{k}s + 1\right)\bar{y}(s) = \bar{x}(s)
\implies \bar{y}(s) = \frac{1}{Ts + 1}\bar{x}(s),   T = \lambda/k

The first example is a spring/damper system. The system has input x and output y, which are the displacements at the two terminals of the spring, respectively. The system equation relating x and y is derived by considering the differential equation satisfied by the damper, and Hooke's law applied to the spring.

Step Response:

\bar{x}(s) = \frac{1}{s} \implies \bar{y}(s) = \frac{1}{s(sT+1)} = \frac{1}{s} - \frac{T}{sT+1}

so y(t) = 1 - e^{-t/T}.
20
20
2) RC network

[Figure: a voltage source x(t) in series with a resistor R, driving a capacitor C; the output y(t) is the voltage across the capacitor.]

i = C\frac{dy}{dt} = \frac{x - y}{R}

\implies RC\dot{y} + y = x

\implies (RCs + 1)\bar{y}(s) = \bar{x}(s)

\implies \bar{y}(s) = \frac{1}{Ts + 1}\bar{x}(s),   T = RC

The second example is an RC electrical circuit where the input x is the supply voltage and the output y is the voltage across the capacitor, as shown in the diagram. The differential equation relating x and y follows from the differential equation relating the voltage across a capacitor with the current through it (Part IA).
3) Water Heater

[Figure: a tank containing M (kg) of water at temperature \theta_o, with water flowing through at rate \dot{m} (kg/s), entering at temperature \theta_i and leaving at \theta_o; heat is supplied at rate q(t) (J/s); c is the specific heat capacity.]

(assuming perfect mixing – i.e. all the water in the tank is at the same temperature, \theta_o)

q = Mc\dot{\theta}_o + \dot{m}c(\theta_o - \theta_i)

\implies (Mcs + \dot{m}c)\bar{\theta}_o(s) = \bar{q}(s) + \dot{m}c\,\bar{\theta}_i(s)

\implies \bar{\theta}_o(s) = \frac{1}{\dot{m}c}\cdot\frac{1}{Ts + 1}\underbrace{\bar{q}(s)}_{\text{input}} + \frac{1}{Ts + 1}\underbrace{\bar{\theta}_i(s)}_{\text{disturbance}},   T = M/\dot{m}

(the water flow rate \dot{m} is assumed to be constant)

The third example is a water heater system where the input q is the rate with which energy is supplied to the system and the output \theta_o is the temperature of the water in the tank. The differential equation relating these variables is a statement of conservation of energy. It quantifies the fact that the energy supplied to the system is used to raise the temperature of the water already within the tank, and of the water flowing into the tank at constant rate \dot{m}.
21
2.9 Key Points
The step response is the integral (w.r.t time) of the impulse response.
The transfer function is the Laplace transform of the impulse response.
22
22
Part IB Paper 6: Information Engineering
LINEAR SYSTEMS AND CONTROL
Ioannis Lestas
HANDOUT 3
[Cover figure: pole locations in the complex plane (Real(s) axis and imaginary axis shown). Poles in the left half plane give an asymptotically stable system; simple poles on the imaginary axis give a marginally stable system; poles in the right half plane, or repeated poles on the imaginary axis, give an unstable system.]
1
Summary
Contents
2
3.1 Asymptotic Stability

Definition:

An LTI system is asymptotically stable if its impulse response g(t) satisfies the condition

\int_0^\infty |g(t)|\, dt < \infty

It should be noted that this definition of asymptotic stability guarantees that g(t) \to 0 as t \to \infty, as otherwise the integral in the definition would tend to infinity.

Examples:

1. LCR circuit: g(t) = e^{-t}\sin(3t + 2)

Since |g(t)| \le e^{-t},

\int_0^\infty |g(t)|\, dt \le \int_0^\infty e^{-t}\, dt = 1 < \infty   \Rightarrow asymptotically stable

2. g(t) = \sum_{k=0}^{\infty}\frac{1}{2^k}\,\delta(t - kT)   (a train of impulses, of halving strength, spaced T apart)

\int_0^\infty |g(t)|\, dt = \int_0^\infty \sum_{k=0}^{\infty}\frac{1}{2^k}\delta(t - kT)\, dt = \sum_{k=0}^{\infty}\frac{1}{2^k} = 2 < \infty   \Rightarrow asymptotically stable
3
3.2 Poles and the Impulse Response
Example: Consider the system with input u and output y related by the ODE
\frac{d^2y}{dt^2} + \alpha\frac{dy}{dt} + \beta y = a\frac{du}{dt} + bu.

The Auxiliary Equation for this ODE is

\lambda^2 + \alpha\lambda + \beta = 0
4
4
Consider now a general LTI system described by an ODE, and consequently
having a rational transfer function G(s). That is, it can be written as the
ratio of two polynomials
n(s)
G(s) =
d(s)
(where the coefficients of d(s) come from the LHS of the underlying ODE and
the coefficients of n(s) come from the RHS).
We can factorize the denominator to give

G(s) = \frac{n(s)}{(s - p_1)(s - p_2)\cdots(s - p_n)}

so that, by partial fractions, the impulse response g(t) contains a term proportional to e^{p_k t} for each pole p_k (assuming for now that the poles are distinct, and that the degree of n(s) does not exceed that of d(s)).

(This condition will always be satisfied for physically realizable systems. Moreover, any system whose transfer function violates this condition is not asymptotically stable.)

Consider one of these terms, e^{pt} say. How it contributes to g(t) depends on whether p is real or complex:
5
• If p is real: then e^{pt} is a real exponential, with time constant |1/p| (decaying if p < 0, growing if p > 0).

• If p is complex†: then, together with its conjugate, it contributes an oscillation of period 2\pi/\omega with an exponential envelope.

† complex poles always appear in conjugate pairs since they are roots of a real polynomial

So each pair of complex poles contributes a term of the form

2Ae^{\sigma t}\cos(\omega t + \phi)

We have assumed that no poles are repeated for this discussion. Repeated poles give rise to terms of the form t^m e^{pt} (or t^m e^{\sigma t}\cos(\omega t + \phi)), which have the same general characteristics (as the exponential dominates the polynomial term).
[Figure: a complex pole pair p = \sigma + j\omega and p^* = \sigma - j\omega in the s-plane, at distance \omega_n from the origin, with \omega = \omega_n\sqrt{1-\zeta^2} and \sigma = -\omega_n\zeta; the pole makes an angle \cos^{-1}\zeta with the negative real axis (equivalently \sin^{-1}\zeta with the imaginary axis).]
7
This figure shows that, given the pole locations, in the complex plane, of a
second order system we can read off the natural frequency, the damping ratio
and also !n ⇣, the reciprocal of the time constant of the decay.
For a higher order system, we can read off the natural frequency and damping ratio of each "mode" of the system (each pair of complex poles). The poles closest to the imaginary axis are often called the dominant poles (their contribution dies away most slowly, and so tends to dominate the response).

[Figure: pole map of a higher order system with several complex pole pairs.]

If the damping ratio \zeta is constant, then the poles lie on radial lines through the origin.
[Figure: the s-plane with radial contours of constant damping ratio \zeta (from \zeta = 0 on the imaginary axis towards \zeta = 1 on the negative real axis), circles of constant natural frequency \omega_n (e.g. \omega_n = 0.5, 1, 1.5, 2), and a vertical line on which \omega_n\zeta = 1.25.]

This figure shows radial contours of constant damping ratio \zeta and circles of constant natural frequency \omega_n, as well as vertical lines on which \omega_n\zeta is constant.
8
3.3 Asymptotic Stability and Pole Locations

Theorem: an LTI system with rational transfer function G(s) is asymptotically stable if and only if all the poles of G(s) have negative real parts, i.e. lie in the left half plane \Re(s) < 0, to the left of the imaginary axis \Re(s) = 0.

The significance of the Theorem is that it allows one to check the asymptotic stability of a linear system without explicitly evaluating its impulse response, but by simply checking the position of the poles of its transfer function. More precisely, the system is asymptotically stable if and only if the real part of the poles is negative.

Proof:

i) First we show that if all poles have a negative real part then the system is asymptotically stable.

For now, assume that the poles of G(s) are distinct, i.e. that d(s) has no repeated roots (we shall remove this restriction later). Then we can write

G(s) = \frac{n(s)}{(s-p_1)(s-p_2)\cdots(s-p_n)} = \alpha_0 + \frac{\alpha_1}{s-p_1} + \frac{\alpha_2}{s-p_2} + \cdots + \frac{\alpha_n}{s-p_n}

by partial fraction expansion, and so

g(t) = \alpha_0\delta(t) + \alpha_1 e^{p_1 t} + \alpha_2 e^{p_2 t} + \cdots + \alpha_n e^{p_n t}

Now, let

\sigma_k = \Re(p_k)   and   \omega_k = \Im(p_k)

so p_k = \sigma_k + j\omega_k, for each k = 1\ldots n. Then

|e^{p_k t}| = |e^{(\sigma_k + j\omega_k)t}| = |e^{\sigma_k t}e^{j\omega_k t}| = |e^{\sigma_k t}|\underbrace{|e^{j\omega_k t}|}_{1} = e^{\sigma_k t}

and so

|g(t)| \le |\alpha_0|\delta(t) + |\alpha_1|e^{\sigma_1 t} + |\alpha_2|e^{\sigma_2 t} + \cdots + |\alpha_n|e^{\sigma_n t}.

Now,

\int_0^\infty e^{\sigma t}\, dt = \left[\frac{e^{\sigma t}}{\sigma}\right]_0^\infty = \begin{cases} -\dfrac{1}{\sigma}, & \text{if } \sigma < 0 \\ \infty, & \text{if } \sigma \ge 0 \end{cases}

and furthermore, since every pole has \sigma_k < 0, then

\int_0^\infty |g(t)|\, dt \le |\alpha_0| + \frac{|\alpha_1|}{|\sigma_1|} + \frac{|\alpha_2|}{|\sigma_2|} + \cdots + \frac{|\alpha_n|}{|\sigma_n|} < \infty

and consequently the system is asymptotically stable as required.

If d(s) has a repeated root p, of multiplicity l say, then the partial fraction expansion instead contains the terms

G(s) = \cdots + \frac{\beta_1}{(s-p)} + \frac{\beta_2}{(s-p)^2} + \cdots + \frac{\beta_l}{(s-p)^l} + \cdots.

Hence, the impulse response g(t) will be of the form

g(t) = \cdots + \beta_1 e^{pt} + \beta_2 t e^{pt} + \cdots + \frac{\beta_l}{(l-1)!}t^{l-1}e^{pt} + \cdots

However, if p = \sigma + j\omega and \sigma < 0 (i.e. \Re(p) < 0), then

\int_0^\infty |t^{k-1}e^{pt}|\, dt = \int_0^\infty t^{k-1}e^{\sigma t}\, dt < \infty

so the same conclusion holds.
10
10
ii) Now we show the converse, that if a system is asymptotically stable then all poles have a negative real part.

For all values of s for which \Re(s) \ge 0, we have

|G(s)| = \left|\int_0^\infty e^{-st}g(t)\, dt\right| \le \int_0^\infty |e^{-st}|\,|g(t)|\, dt \le \int_0^\infty |g(t)|\, dt < \infty

so G(s) is finite everywhere in the closed right half plane, i.e. it can have no poles with \Re(s) \ge 0.
So far, we have divided systems into two classes: those that are
asymptotically stable and those that are not. We shall now further classify the
systems that are not asymptotically stable into two classes: those that are
marginally (i.e. “almost”) stable and those that are unstable.
11
11
3.4 Marginal Stability

Examples:

1. Integrator: g(t) = H(t), so

\int_0^T |g(t)|\, dt = T \to \infty   as T \to \infty

so the integrator is not asymptotically stable, although its impulse response remains bounded.

G(s) = 1/s \implies a j\omega-axis pole at s = 0
12
12
3.5 Instability

Example: Double integrator:

g(t) = t   (which grows without bound)

G(s) = \frac{1}{s^2} \implies double pole at s = 0
Warning: Different people use different definitions of stability. In particular, systems which
we have defined to be marginally stable would be regarded as stable by some, and unstable
by others. For this reason we avoid using the term “stable” without qualification.
13
13
3.6 Stability Theorem
if any of the poles of G(s) have a positive real part then the impulse
response will have a term that blows up exponentially (consider the
partial fraction expansion of G(s)).
Also, if G(s) has a repeated imaginary axis pole then the impulse
response will have a term that still blows up, although more slowly.
In either of these cases, the system is unstable.
Isolated poles on the imaginary axis, on the other hand, give rise to terms
in the impulse response which remain bounded (e.g. steps or sinusoids).
1. A system is asymptotically stable if all its poles have negative real parts.

2. A system is unstable if any pole has a positive real part, or if there are any repeated poles on the imaginary axis.

3. A system is marginally stable if it is not asymptotically stable but has no poles with positive real parts and no repeated poles on the imaginary axis (i.e. its "worst" poles are simple poles on the imaginary axis).

Note: we proved part 1, and the converse statement that a system is not asymptotically stable if any of its poles have a zero or positive real part, on page 6. The refinement of "not asymptotically stable" into marginal stability and instability has only been illustrated by examples. The proof of parts 2 and 3 is not difficult, but is messy (and so is omitted).
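A minimal sketch (not from the handout) of the practical consequence of the theorem: asymptotic stability of a rational transfer function can be checked by examining the real parts of the roots of its denominator.

```python
# Check asymptotic stability from the pole real parts (not marginal vs unstable).
import numpy as np

def is_asymptotically_stable(den):
    """den: denominator coefficients of G(s), highest power first."""
    poles = np.roots(den)
    return bool(np.all(poles.real < 0))

print(is_asymptotically_stable([1, 2, 2]))    # s^2+2s+2: poles -1±j   -> True
print(is_asymptotically_stable([1, 0, 4]))    # s^2+4: poles ±2j       -> False
print(is_asymptotically_stable([1, -1, 2]))   # right-half-plane poles -> False
```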
14
14
[Figure: example pole configurations in the s-plane. Configurations with all poles in the left half plane are asymptotically stable; configurations with a pole in the right half plane, or with repeated j\omega-axis poles, are unstable; configurations whose "worst" poles are simple poles on the imaginary axis are marginally stable.]

The examples above illustrate the Stability Theorem stated on the previous page, i.e. how the location of the poles determines the stability properties of a system.

Note: it's the "worst" poles that determine the stability properties
15
15
[Figure: pole-zero map and a surface plot of |G(s)| over the s-plane for

G(s) = \frac{(s + 1.5)(s^2 - s + 1)}{(s + 2)(s^2 + 0.1s + 4)}

with a zero at s = -1.5 and zeros at s = 0.5 \pm 0.87j; |G(s)| peaks at the poles and falls to zero at the zeros.]
16
16
3.7 Poles and the Transient Response
The term Transient Response refers to the initial part of the (time domain)
response of a system to a general input (before the “transients” have died
out). To a very large extent, these transients are a characteristic of the
system itself rather than the input.
If, for example, a system with transfer function

G(s) = \frac{n(s)}{(s-p_1)(s-p_2)\cdots(s-p_n)}

is given an input u(t), with Laplace transform \bar{u}(s), then the response is given by

\bar{y}(s) = G(s)\bar{u}(s) = \frac{n(s)}{(s-p_1)(s-p_2)\cdots(s-p_n)}\bar{u}(s)
 = \frac{\beta_1}{s-p_1} + \frac{\beta_2}{s-p_2} + \cdots + \frac{\beta_n}{s-p_n} + \text{other stuff}

and so

y(t) = \beta_1 e^{p_1 t} + \beta_2 e^{p_2 t} + \cdots + \beta_n e^{p_n t} + \text{other stuff}
That is, the response y(t) contains the same terms as the impulse response
(although with different amplitudes) plus some extra terms due to particular
characteristics of the input.
17
17
3.8 Key Points
• The impulse response of an LTI system is a sum of terms due to each real
pole, or pair of complex poles.
• The system’s response to any input will also include these features.
The following figure shows a selection of pole locations, with their
corresponding contribution to the total response.
This again is an important figure.
Note:
The real part of the pole, \sigma, determines both stability and the time constant, |1/\sigma|.

The imaginary part of the pole, \omega, determines the damped natural frequency (the actual frequency of oscillation) in rad/sec.

The magnitude of the pole determines the natural frequency.

The argument of the pole determines the damping ratio.
[Figure: pole locations (marked X) across the s-plane, left and right of the imaginary axis, with the corresponding contributions to the transient response.]

Pole locations and corresponding transient responses

The figure above illustrates how the location of the poles of the transfer function of a system (denoted by X) affects its impulse response (and hence also its transient response when a general input is applied, as explained on the previous page). In particular, as the imaginary part of the poles increases the response becomes more oscillatory. Also, as the real part of the poles becomes more negative, the response decays more quickly to zero.
18
Part IB Paper 6: Information Engineering
LINEAR SYSTEMS AND CONTROL
Ioannis Lestas
HANDOUT 4
[Cover figure: a sinusoidal input x(t) applied to an asymptotically stable LTI system G(s) produces, in steady state, a sinusoidal output y(t) of the same frequency (period 2\pi/\omega), scaled in amplitude by |G(j\omega)| and shifted in phase by \arg G(j\omega).]
Summary
Contents
2
4.1 What is the Frequency Response?

[Figure: the input x(t) = \cos(\omega t) applied to an asymptotically stable LTI system; after a starting transient the output settles to y(t) = A\cos(\omega t + \phi), a sinusoid of the same frequency with amplitude A and phase shift \phi.]

x(t) = \cos(\omega t)  →  [ asymptotically stable LTI system ]  →  y(t) = A\cos(\omega t + \phi) + starting transient

If the input to an asymptotically stable LTI system is a pure sinusoid then the steady state output will also be a pure sinusoid, of the same frequency as the input but at a different amplitude and phase.

How do we find A, the gain, and \phi, the phase shift?

Our analysis in section 4.1 will be as follows:

(i) We will first use the complex number approaches studied in Part IA to illustrate that for a system with transfer function G(s) driven by a sinusoidal input, the output is also a sinusoid of the same frequency. Furthermore, the gain A and phase shift \phi are given by

A = |G(j\omega)|,   \phi = \angle G(j\omega)

(ii) We will then show a more general and practically relevant result: even when the system is initially at rest and a sinusoidal input is then applied, as t \to \infty the output will be a sinusoid, with gain and phase shift given by the same expressions.
3
How does this relate to Part IA? Consider again a system with input u and output y. If

\frac{d^2y}{dt^2} + \alpha\frac{dy}{dt} + \beta y = a\frac{du}{dt} + bu

then we can use the usual trick, letting

u = e^{j\omega t}   so that   \Re(u(t)) = \cos(\omega t).

We will find the response to the input \cos(\omega t) by taking the real part of y, the response to u = e^{j\omega t}.

To find the solution y, we assume it takes the form y = Ye^{j\omega t}, which gives

\big([j\omega]^2 + \alpha[j\omega] + \beta\big)Y = a[j\omega] + b   or   Y = \frac{a[j\omega] + b}{[j\omega]^2 + \alpha[j\omega] + \beta}

Note that the transfer function from \bar{u}(s) to \bar{y}(s) is given by

G(s) = \frac{as + b}{s^2 + \alpha s + \beta}

So, it would appear that Y = G(j\omega), suggesting:

Answer: If the system has the transfer function G(s), then the gain is A = |G(j\omega)| and the phase shift is \phi = \angle G(j\omega).
You’ve done this in Linear Circuits, Maths and Mechanical Vibrations in Part
IA, so I’m assuming that you are familiar with these arguments! The notation
above is as used in Maths and Mechanical Vibrations. In Linear Circuits ỹ was
used to represent a complex phasor, rather than the Y above.
Example: The Capacitor is described by the differential equation

i = C\frac{dv}{dt}

so \bar{i}(s) = Cs\bar{v}(s) in the absence of initial conditions, giving the transfer function

\frac{\bar{v}(s)}{\bar{i}(s)} = \frac{1}{sC}.

So, the frequency response of a capacitor (from current to voltage) is \frac{1}{j\omega C}, which equals its impedance, \tilde{v}/\tilde{i}. The advantage of transfer functions is that they can be used to deduce the response to all possible inputs, whereas the impedance can only be used for sinusoidal (a.c.) signals, providing the change in their amplitude and phase.
5
4.1.1 Derivation of gain and phase shift
The previous argument is not the whole story though - we have assumed that
the input u = cos(!t) has been present since the beginning of time, and have
only shown that y = A cos(!t + ) is a possible response. What if the system
is at rest until t = 0, and then a sinusoid is applied? Take an asymptotically
stable system with input \bar{u}(s), output \bar{y}(s) and rational transfer function G(s) = \frac{n(s)}{d(s)}, so

\bar{y}(s) = G(s)\bar{u}(s).

Let u(t) = e^{j\omega t}, so \bar{u}(s) = \frac{1}{s - j\omega}; then, since G(s) can't have a pole at s = j\omega,

\bar{y}(s) = G(s)\frac{1}{s - j\omega} = \frac{\beta_1}{s - p_1} + \frac{\beta_2}{s - p_2} + \cdots + \frac{\beta_n}{s - p_n} + \frac{\beta_0}{s - j\omega}

How can we find \beta_0 without calculating all the other terms?

Ans: Cover-up rule.
6
Example:

[Plot: the response to the input x(t) = \cos(0.5t). The output y(t) = g(t) * x(t) settles, after an initial transient, to a sinusoid of the same frequency with amplitude 0.571.]
7
4.1.2 Frequency response of an arbitrary linear system

It's not just systems described by ODEs which have a frequency response; any asymptotically stable system, with impulse response g(t) and transfer function G(s) = \mathcal{L}g(t), does. (See Q5 on Examples Paper 2 for an example.) Consider the response of such a system to a sinusoidal input beginning suddenly at t = 0.

The analysis in section 4.1.1 was for linear systems described by differential equations. This is reflected in the fact that G(s) was assumed to be rational in s (polynomial divided by polynomial).

y_{ss}(t) is called the steady-state part of the response, as it is the part that remains when the transients have decayed.
8
4.2 Plotting the frequency response
Example: G(s) = \frac{1}{s(s^2 + 0.2s + 2)}

[Figure: four representations of the same frequency response G(j\omega): the Bode gain plot (|G(j\omega)| in dB against \omega, rad/s, on a log scale), the Bode phase plot (\angle G(j\omega) in degrees against \omega), the Nyquist diagram (\Im G(j\omega) against \Re G(j\omega)), and the Nichols chart (|G(j\omega)| in dB against \angle G(j\omega)). The point \omega = 2 is marked with a cross on each plot.]
9
Each has its use:
The Bode diagram is relatively straightforward to sketch to a high degree of
accuracy, is compact and gives an indication of the frequency ranges in which
different levels of performance are achieved.
The Nyquist diagram provides a rigorous way of determining the stability of a
feedback system.
The Nichols diagram combines some of the advantages of both of these (although it is not quite as good in either specific application) and is widely used in industry. We shall not study the Nichols diagram in this course(!), but
the ideas behind its construction will be readily grasped once the two
fundamental diagrams that we do study are understood.
10
10
Consider G(j\omega) for \omega = 2:

G(j\omega)\big|_{\omega=2} = \frac{1}{2j\big((2j)^2 + 0.2\times(2j) + 2\big)} = \frac{1}{2j(-2 + 0.4j)}

Hence

|G(j2)| = \frac{1}{2\sqrt{2^2 + 0.4^2}} = 0.2451

and

\angle G(j2) = -\Big(\angle 2j + \angle(-2 + 0.4j)\Big) = -\Big(\frac{\pi}{2} + \pi - \tan^{-1}\frac{0.4}{2}\Big) = -4.5150\ \text{rad}

(the angle of (-2 + 0.4j) is \pi - \tan^{-1}(0.4/2) because its real part is negative).

So,

G(j2) = 0.2451\,e^{-4.5150j} = -0.0481 + 0.2404j

Also,

20\log_{10}|G(j2)| = -12.2\,\text{dB},   \angle G(j2) = -258.7°

The corresponding point has been marked with a cross on each of the previous plots.
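A quick numerical check (not in the handout) of the values computed above.

```python
# Evaluate G(s) = 1/(s(s^2 + 0.2s + 2)) at s = 2j.
import numpy as np

w = 2.0
s = 1j * w
G = 1 / (s * (s**2 + 0.2 * s + 2))

print(G)                              # (-0.0481+0.2404j)
print(abs(G))                         # 0.2451
print(20 * np.log10(abs(G)))          # -12.2 dB
print(np.degrees(np.angle(G)) - 360)  # -258.7 degrees (unwrapped)
```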
11
11
4.3 Sketching Bode Diagrams
it suffices to be able to sketch Bode diagrams for these terms (constants, powers of s, and first and second order terms, treated in the following subsections). The Bode plot of a more complex system is then obtained by adding the gains (in dB) and phases of the individual terms.
Note: Always rewrite the transfer function in terms of these building blocks before
starting to sketch a Bode diagram – you will find it much easier. For example, if the
transfer function has a term (s + a) first rewrite this as a ⇥ (1 + s/a) and then
collect together all the constants that have been pulled out. The transfer functions
in Question 6 on Examples Paper 2 are already given in the right form, but you will
need to rewrite the transfer function in Question 8.
Example: We would rewrite

G(s) = \frac{500(s^2 + 3s + 2)}{s(s^2 + 5s + 100)} = \frac{500(s+1)(s+2)}{s(s^2 + 5s + 100)}

as

G(s) = \frac{10}{s}\times\frac{(1 + s)(1 + s/2)}{(1 + 0.05s + s^2/100)}
and begin by considering each term individually.
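A sketch (not part of the handout): the exact Bode diagram of this example can be plotted with scipy, to compare against the hand sketch built from the individual terms.

```python
# Exact Bode diagram of G(s) = 500(s+1)(s+2)/(s(s^2+5s+100)).
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

G = signal.lti([500, 1500, 1000], [1, 5, 100, 0])
w = np.logspace(-2, 3, 500)
w, mag, phase = signal.bode(G, w=w)      # mag in dB, phase in degrees

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.semilogx(w, mag); ax1.set_ylabel('Gain (dB)')
ax2.semilogx(w, phase); ax2.set_ylabel('Phase (deg)')
ax2.set_xlabel('Frequency (rad/s)')
plt.show()
```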
12
12
4.3.1 Powers of s: (sT )k
The gain curve is thus a straight line with slope k decades/decade, or 20k dB/decade, intersecting the 0dB line at \omega T = 1. The phase curve is a constant at 90° \times k. For T = 1, the case when k = 1 corresponds to a differentiator, and has a slope +20dB/decade and phase +90°. The case when k = -1 corresponds to an integrator and has a slope -20dB/decade and phase -90°.

[Figure: gain and phase of (sT)^k against frequency (log scale from 0.01/T to 100/T) for various integer k; the phase curves are constant at multiples of 90°.]
13
13
4.3.2 First Order Terms: (1 + sT)

Bode plot of G(s) = (1 + sT) (for T > 0)

. . . replace s by j\omega to get G(j\omega) = (1 + j\omega T)

\implies |G(j\omega)| = |1 + j\omega T| = \sqrt{1 + \omega^2T^2},   \angle G(j\omega) = \angle(1 + j\omega T) = \tan^{-1}(\omega T)

Asymptotes:

\omega \to 0 (i.e. \omega \ll 1/T):
20\log_{10}|G(j\omega)| \to 20\log_{10}1 = 0,   \angle G(j\omega) \to \angle 1 = 0

\omega \to \infty (i.e. \omega \gg 1/T):
20\log_{10}|G(j\omega)| \to 20\log_{10}(\omega T),   \angle G(j\omega) \to \angle j\omega T = 90°

At \omega = 1/T, we get |G(j\omega)| = \sqrt{2} (i.e. 3dB above the intersection of the asymptotes) and \angle G(j\omega) = 45°.

For the Gain plot the main features illustrated are the two asymptotes when \omega \to 0 (i.e. \omega \ll 1/T) and \omega \to \infty (i.e. \omega \gg 1/T). These asymptotes intersect at \omega = 1/T. Note also that the low frequency asymptote is horizontal and the high frequency asymptote has slope 20dB/decade. The phase \angle G(j\omega) increases from 0 to 90° as \omega increases from 0 to \infty, with the phase being 45° at \omega = 1/T.

Bode diagram of G(s) = (1 + sT):

[Figure: gain 20\log_{10}|G(j\omega)| against frequency (rad/s, log scale from 0.01/T to 100/T): the low frequency asymptote is the 0dB line, the high frequency asymptote rises at 20dB/decade (20dB = a factor of 10), and the true curve is 3dB above their intersection at \omega = 1/T. Phase: rising from 0 to 90°, equal to 45° at \omega = 1/T.]

In the gain plot the asymptotes intersect at \omega = 1/T (this follows from the expressions derived for the asymptotes above). In the phase plot the phase is 45° at this frequency.
15
4.3.3 Second order terms: (1 + 2\zeta sT + s^2T^2)

Bode plot of G(s) = \frac{1}{1 + 2\zeta sT + s^2T^2}   (for T > 0, 0 \le \zeta \le 1)

. . . replace s by j\omega to get

G(j\omega) = \frac{1}{1 + 2\zeta j\omega T - \omega^2T^2}

\implies 20\log_{10}|G(j\omega)| = -20\log_{10}\big|1 - \omega^2T^2 + 2\zeta j\omega T\big|,   \angle G(j\omega) = -\angle\big(1 - \omega^2T^2 + 2\zeta j\omega T\big)

Asymptotes:

\omega \to 0 (i.e. \omega \ll 1/T):
20\log_{10}|G(j\omega)| \to -20\log_{10}1 = 0,   \angle G(j\omega) \to -\angle 1 = 0

\omega \to \infty (i.e. \omega \gg 1/T):
20\log_{10}|G(j\omega)| \to -40\log_{10}(\omega T) (slope -40dB/decade),   \angle G(j\omega) \to -180°

As in the case of first order terms, the main features we illustrate in the Bode plot of second order terms, in both the Gain plot and the Phase plot, are the asymptotes when \omega \to 0 and \omega \to \infty, respectively. The high frequency asymptote has slope -40dB/decade (in contrast to 20dB/decade for the first order term previously analysed).

[Figure: Bode gain and phase plots of G(s) = \frac{1}{1 + 2\zeta sT + s^2T^2} for \zeta = 0.2 and \zeta = 1, over 0.01/T to 100/T. For \zeta = 0.2 the gain has a resonant peak of about 1/(2\zeta) = 2.5 (8dB) near \omega = 1/T; the high-frequency gain asymptote falls at -40dB/decade; the phase falls from 0° to -180°, passing through -90° at \omega = 1/T.]
17
17
4.3.4 Examples
Example 1: G(s) = \frac{5}{1 + 10s}   (K = 5, 1/T = 0.1)
We now produce a sketch of the Bode diagram in two stages (this is for clarity
– all these constructions would normally appear on one pair of graphs ). First
we plot the asymptotes and approximations to the true curves for the
individual terms (The exact values are used for the plots here.)
[Figure: the individual terms 20\log_{10}|5| and 20\log_{10}|1 + 10j\omega| (gain), and \angle 5 = 0 and \angle(1 + 10j\omega) (phase), drawn with their asymptotes against frequency (rad/s).]
18
18
In the next pair of diagrams, the contributions from the individual terms (now
shown as dashed lines) have been added to give the Bode diagram of G(s)
(this has been done for both the asymptotes and the true gain and phase).
[Figure: Bode diagram of G(j\omega) = 5/(1 + 10j\omega), obtained by adding the contributions of the individual terms (shown dashed), for both the asymptotes and the true gain 20\log_{10}|G(j\omega)| and phase \angle G(j\omega).]
19
19
The next example has both a pole and a zero.
It is an example of what is known as a “phase-lead compensator” (will be
studied again later in the course).
Example 2: G(s) = 0.05\,\frac{1 + 10s}{1 + s}

So, G(j\omega) = 0.05\,\frac{1 + 10j\omega}{1 + j\omega},
20
20
First we draw the individual terms:
[Figure: the individual terms 20\log_{10}|1 + 10j\omega|, 20\log_{10}|1 + j\omega| and 20\log_{10}|0.05| (gain), and \angle(1 + 10j\omega), \angle(1 + j\omega) and \angle 0.05 = 0 (phase), drawn with their asymptotes.]
21
21
and then we sum them. (Note how the phase terms sum to produce a maximum phase advance of only about 55°.)

[Figure: Bode diagram of G(j\omega) = 0.05(1 + 10j\omega)/(1 + j\omega), obtained by summing the individual terms; the phase \angle G(j\omega) reaches its maximum of about 55° between \omega = 0.1 and \omega = 1.]
22
22
Example 3: G(s) = 0.05\,\frac{1 + 2s}{s} = \frac{0.05}{s}\times(1 + 2s)

[Figure: the individual terms. Gain: 20\log_{10}|1 + 2j\omega| and 20\log_{10}\left|\frac{0.05}{j\omega}\right| = 20\log_{10}0.05 - 20\log_{10}\omega. Phase: \angle(1 + 2j\omega) and \angle\frac{0.05}{j\omega} = \angle(-j) = -90°.]
23
23
[Figure: Bode diagram of G(j\omega) = 0.05(1 + 2j\omega)/(j\omega), obtained by summing the individual terms: the gain 20\log_{10}|G(j\omega)| falls at -20dB/decade at low frequencies and flattens above \omega = 0.5; the phase \angle G(j\omega) rises from -90° towards 0°.]
24
24
RHP poles and zeros

A right-half-plane factor (1 - sT) has the same gain as (1 + sT), since |1 - j\omega T| = |1 + j\omega T|, but the opposite phase:

\angle(1 - j\omega T) = -\angle(1 + j\omega T)
25
25
4.4 Key Points
The frequency response is obtained from the transfer function by
replacing s with j!.
26
26
Part IB Paper 6: Information Engineering
LINEAR SYSTEMS AND CONTROL
Ioannis Lestas
HANDOUT 5
2
Contents
5 An Introduction to Feedback Control Systems 1
5.1 Open-Loop Control . . . . . . . . . . . . . . . . . . . . . . . 4
5.2 Closed-Loop Control (Feedback Control) . . . . . . . . . . . . 5
5.2.1 Derivation of the closed-loop transfer functions: . . . . 5
5.2.2 The Closed-Loop Characteristic Equation . . . . . . . . . 6
5.2.3 What if there are more than two blocks? . . . . . . . . 7
5.2.4 A note on the Return Ratio . . . . . . . . . . . . . . . 8
5.2.5 Sensitivity and Complementary Sensitivity . . . . . . . . 9
5.3 Summary of notation . . . . . . . . . . . . . . . . . . . . . . 10
5.4 The Final Value Theorem (revisited) . . . . . . . . . . . . . . 11
5.4.1 The “steady state” response – summary . . . . . . . . . 12
5.5 Some simple controller structures . . . . . . . . . . . . . . . . 13
3
5.1 Open-Loop Control
[Block diagram: \bar{r}(s) → K(s) → G(s) → \bar{y}(s), with no feedback.]

In principle, we could choose a "desired" transfer function F(s) and use K(s) = F(s)/G(s) to obtain

\bar{y}(s) = G(s)\frac{F(s)}{G(s)}\bar{r}(s) = F(s)\bar{r}(s)

In practice, this will not work

– because it requires an exact model of the plant and that there be no disturbances (i.e. no uncertainty).

As explained above, this will not work in practice since even small disturbances and model uncertainty will cause the output to deviate from its desired value (like trying to drive a car with your eyes closed!).
4
5.2 Closed-Loop Control (Feedback Control)

For Example:

[Figure 5.1: a feedback loop. The demanded output (reference) \bar{r}(s) is compared with the controlled output \bar{y}(s) to form the error signal \bar{e}(s); the controller K(s) produces the control signal \bar{u}(s); the plant G(s) is driven by \bar{u}(s) plus an input disturbance \bar{d}_i(s); the plant output plus an output disturbance \bar{d}_o(s) gives the controlled output \bar{y}(s).]

The block diagram above illustrates a typical feedback control configuration. The aim is for the controlled output \bar{y}(s) to follow a reference signal \bar{r}(s). The plant G(s) is the system we would like to control, and K(s) is the controller that needs to be designed. Disturbances acting at the output and input of the plant are also considered, corresponding to measurement and actuation noise, respectively.

5.2.1 Derivation of the closed-loop transfer functions:
5
Also:

\bar{e}(s) = -\frac{1}{1 + G(s)K(s)}\bar{d}_o(s) - \frac{G(s)}{1 + G(s)K(s)}\bar{d}_i(s) + \underbrace{\left(1 - \frac{G(s)K(s)}{1 + G(s)K(s)}\right)}_{\textstyle \frac{1}{1 + G(s)K(s)}}\bar{r}(s)

5.2.2 The Closed-Loop Characteristic Equation

Note: All the Closed-Loop Transfer Functions of the previous section have the same denominator:

1 + G(s)K(s)

The Closed-Loop Poles (i.e. the poles of the closed-loop system, or feedback system) are the zeros of this denominator. For the feedback system of Figure 5.1, the Closed-Loop Poles are the roots of

1 + G(s)K(s) = 0

the Closed-Loop Characteristic Equation (for Fig 5.1).

The equation 1 + G(s)K(s) = 0 is referred to as the Closed-Loop Characteristic Equation. Its solutions give the location of the poles in the closed-loop transfer functions. As discussed in the previous handout, the locations of the poles are very important for the behaviour of a system, as they determine its stability properties and also its transient response.

The closed-loop poles determine:

The stability of the closed-loop system.
6
5.2.3 What if there are more than two blocks?

For Example:

[Figure 5.2: as Figure 5.1, but with a sensor H(s) in the feedback path, so that the measured output H(s)\bar{y}(s) is subtracted from the demanded output (reference) \bar{r}(s) to form \bar{e}(s); the input and output disturbances \bar{d}_i(s) and \bar{d}_o(s) act as before.]

The diagram above illustrates a feedback configuration where, in addition to K(s) and G(s), there is an additional transfer function H(s). The latter could be part of the implementation of the control policy or could include the sensor dynamics.

We now have

\bar{y}(s) = \frac{G(s)K(s)}{1 + H(s)G(s)K(s)}\bar{r}(s) + \frac{1}{1 + H(s)G(s)K(s)}\bar{d}_o(s) + \frac{G(s)}{1 + H(s)G(s)K(s)}\bar{d}_i(s)

This time 1 + H(s)G(s)K(s) appears as the denominator of all the closed-loop transfer functions.

Let

L(s) = H(s)G(s)K(s)

i.e. the product of all the terms around the loop, not including the -1 at the summing junction. L(s) is called the Return Ratio of the loop (and is also known as the Loop Transfer Function).

The Closed-Loop Characteristic Equation is then

1 + L(s) = 0

and the Closed-Loop Poles are the roots of this equation.

The closed-loop transfer functions derived above illustrate that the denominator has a form analogous to that when only K(s) and G(s) were present; i.e. the closed-loop poles are the solutions of the equation 1 + L(s) = 0, where L(s) is the product of all the transfer functions round the loop (not including the -1 at the summing junction).
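A minimal sketch (not from the handout) of computing closed-loop poles as the roots of 1 + L(s) = 0 for a rational return ratio; the example L(s) = 10/((s+1)(s+2)(s+3)) is an arbitrary illustration.

```python
# Closed-loop poles: roots of d_L(s) + n_L(s) = 0, since 1 + L = (d_L + n_L)/d_L.
import numpy as np

nL = np.array([10.0])                                 # numerator of L(s)
dL = np.polymul(np.polymul([1, 1], [1, 2]), [1, 3])   # (s+1)(s+2)(s+3)

char_poly = np.polyadd(dL, nL)
print(np.roots(char_poly))                            # the closed-loop poles
```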
7
5.2.4 A note on the Return Ratio

[Figure 5.3: the loop of Figure 5.2 with all external inputs set to zero and the loop broken by a switch in the feedback path; a signal \bar{b}(s) is injected on one side of the break and the signal \bar{a}(s) is measured on the other side, after passing through K(s), G(s) and H(s) and the -1 of the summing junction.]

With the switch in the position shown (i.e. open), the loop is open. We then have

\bar{a}(s) = (-1)\times H(s)G(s)K(s)\times\bar{b}(s) = -H(s)G(s)K(s)\bar{b}(s)

Formally, the Return Ratio of a loop is defined as -1 times the product of all the terms around the loop. In this case that gives H(s)G(s)K(s), i.e. our L(s).
Note: In general, the block denoted here as H(s) could include filters and
other elements of the controller in addition to the sensor dynamics.
Furthermore, the block labelled K(s) could include actuator dynamics in
addition to the remainder of the designed dynamics of the controller.
8
5.2.5 Sensitivity and Complementary Sensitivity
9
5.3 Summary of notation
The system being controlled is often called the “plant”.
The control law is often called the “controller”; sometimes it is called the
“compensator” or “phase compensator”.
The “demand” signal is often called the “reference” signal or “command”,
or (in the process industries) the “set-point”.
The "Return Ratio" and the "Loop Transfer Function" always refer to the transfer function of the opened loop, that is the product of all the transfer functions appearing in a standard negative feedback loop (our L(s)).
Figure 5.1 has L(s) = G(s)K(s), Figure 5.2 has L(s) = H(s)G(s)K(s).
The "Sensitivity function" is the transfer function S(s) = \frac{1}{1 + L(s)}. It characterizes the sensitivity of a control system to disturbances appearing at the output of the plant.

The transfer function T(s) = \frac{L(s)}{1 + L(s)} is called the "Complementary
Sensitivity”. The name comes from the fact that S(s) + T (s) = 1. When
this appears as the transfer function from the demand to the controlled
output, as in Fig 5.4 it is often called simply the “Closed-loop transfer
function” (though this is ambiguous, as there are many closed-loop
transfer functions).
10
10
5.4 The Final Value Theorem (revisited)
g(t)  ⇌  G(s)
(impulse response)   (transfer function, assumed asymptotically stable)

Let y(t) = \int_0^t g(\tau)\, d\tau denote the step response of this system and note that \bar{y}(s) = \frac{G(s)}{s}.

Then, by the Final Value Theorem,

\lim_{t\to\infty} y(t) = \lim_{s\to 0} s\bar{y}(s) = \lim_{s\to 0} G(s) = G(0)

i.e. the final value of the step response (the steady-state gain) is G(0).
11
11
5.4.1 The “steady state” response – summary
12
12
5.5 Some simple controller structures
[Block diagram: \bar{r} → (+) \Sigma → \bar{e} → K(s) → \bar{u} → G(s) → \bar{y}, with \bar{y} fed back negatively to the summing junction.]

CLTFs:   \bar{y}(s) = \frac{L(s)}{1 + L(s)}\bar{r}(s)   and   \bar{e}(s) = \frac{1}{1 + L(s)}\bar{r}(s),   with L(s) = G(s)K(s).

For a unit step demand, \bar{r}(s) = 1/s,

\lim_{t\to\infty} y(t) = \left.s\times\frac{L(s)}{1 + L(s)}\times\frac{1}{s}\right|_{s=0} = \frac{L(0)}{1 + L(0)}

and

\lim_{t\to\infty} e(t) = \left.s\times\frac{1}{1 + L(s)}\times\frac{1}{s}\right|_{s=0} = \underbrace{\frac{1}{1 + L(0)}}_{\text{Steady-state error}}

(using the final-value theorem.)
Note: These particular formulae only hold for this simple configuration –
where there is a unit step demand signal and no constant disturbances
(although the final value theorem can always be used).
13
13
5.5.2 Proportional Control

K(s) = k_p

[Block diagram: \bar{r} → (+) \Sigma → k_p → G(s) → \bar{y}, with \bar{y} fed back negatively.]

Proportional Control is the simplest control policy, where the controller is just a multiplication by a constant k_p.

Typical result of increasing the gain k_p (for control systems where G(s) is itself stable):

Increased accuracy of control ("good").
Increased control action.
More oscillatory response (reduced damping), possibly leading to instability ("bad").

There are tradeoffs associated with the choice of k_p: a large k_p leads to a smaller steady-state error, but this is at the expense of a more oscillatory response (or even possibly instability).
14
14
Example: G(s) = \frac{1}{(s+1)^2}, K(s) = k_p.

Steady-state errors using the final value theorem:

\bar{y}(s) = \frac{k_p}{s^2 + 2s + 1 + k_p}\bar{r}(s)

and

\bar{e}(s) = \frac{1}{1 + k_pG(s)}\bar{r}(s) = \frac{(s+1)^2}{s^2 + 2s + 1 + k_p}\bar{r}(s).

So, if r(t) = H(t), the final value of the step response is

\lim_{t\to\infty} y(t) = \left.\frac{k_p}{s^2 + 2s + 1 + k_p}\right|_{s=0} = \frac{k_p}{1 + k_p}

and

\lim_{t\to\infty} e(t) = \left.\frac{(s+1)^2}{s^2 + 2s + 1 + k_p}\right|_{s=0} = \underbrace{\frac{1}{1 + k_p}}_{\text{Steady-state error}}

(Note: L(s) = \frac{k_p}{(s+1)^2} \implies L(0) = k_p\times 1 = k_p)

To increase damping – can often use derivative action (or velocity feedback).
To remove steady-state errors – can often use integral action.
15
15
For reference, the step response (i.e. the response to \bar{r}(s) = \frac{1}{s}) is given by

\bar{y}(s) = \frac{-\frac{k_p}{1+k_p}(s + 2)}{s^2 + 2s + 1 + k_p} + \frac{\frac{k_p}{1+k_p}}{s}

so

y(t) = -\frac{k_p}{1+k_p}e^{-t}\left(\cos(\sqrt{k_p}\,t) + \frac{1}{\sqrt{k_p}}\sin(\sqrt{k_p}\,t)\right) + \frac{k_p}{1+k_p}

 = \underbrace{-\sqrt{\frac{k_p}{1+k_p}}\,e^{-t}\cos(\sqrt{k_p}\,t - \phi)}_{\text{Transient Response}} + \underbrace{\frac{k_p}{1+k_p}}_{\text{Steady-state response}}

where \phi = \arctan\frac{1}{\sqrt{k_p}}.

But you don't need to calculate this to draw the conclusions we have made.
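A sketch (not from the handout): simulating the closed-loop step responses \bar{y} = \frac{k_p}{s^2+2s+1+k_p}\bar{r} derived above for a few values of k_p, illustrating the trade-off between steady-state error and oscillation. The particular gain values are arbitrary.

```python
# Closed-loop step responses for proportional control of G(s) = 1/(s+1)^2.
import numpy as np
from scipy import signal

t = np.linspace(0, 10, 1000)
for kp in (1.0, 5.0, 25.0):
    cl = signal.lti([kp], [1, 2, 1 + kp])
    _, y = signal.step(cl, T=t)
    print(f"kp = {kp:5.1f}: final value = {y[-1]:.3f}, peak = {y.max():.3f}")
    # final value -> kp/(1+kp); the overshoot grows as kp increases
```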
16
16
5.5.3 Proportional + Derivative (PD) Control In this case, in addition
to the multiplication
K(s) = kp + kd s
of the input to the
controller with a con-
r̄ + + ū ȳ stant gain kp , we have
⌃ kp ⌃ G(s)
the addition of a term
+ where the input to the
controller is differenti-
ated and multiplied by
kd s a constant kd .
As with proportional
control there are trade-
Typical result of increasing the gain kd , (when G(s) is itself stable): offs associated with the
choice of the gain kd
Increased Damping. within the derivative
term. In particu-
Greater sensitivity to noise. lar, increasing kd makes
the response less oscil-
(It is usually better to measure the rate of change of the error directly if
latory, but this is at
possible – i.e. use velocity feedback) the expense of amplify-
ing noise.
Example:  G(s) = \frac{1}{(s+1)^2},   K(s) = kp + kd s

    ȳ(s) = \frac{K(s)G(s)}{1 + K(s)G(s)}\, r̄(s)
         = \frac{(kp + kd s)\frac{1}{(s+1)^2}}{1 + (kp + kd s)\frac{1}{(s+1)^2}}\, r̄(s)
         = \frac{kp + kd s}{s^2 + (2 + kd)s + 1 + kp}\, r̄(s)
17
5.5.4 Proportional + Integral (PI) Control

[Block diagram: r̄ → Σ (+/−) → ē, which feeds the two parallel branches kp and
ki/s; their outputs are summed to give ū, which drives G(s) to produce ȳ, with
unity negative feedback.]

Example:

    G(s) = \frac{1}{(s+1)^2},   K(s) = kp + ki/s
18
    ȳ(s) = \frac{K(s)G(s)}{1 + K(s)G(s)}\, r̄(s)
         = \frac{(kp + ki/s)\frac{1}{(s+1)^2}}{1 + (kp + ki/s)\frac{1}{(s+1)^2}}\, r̄(s)
         = \frac{kp s + ki}{s(s+1)^2 + kp s + ki}\, r̄(s)

    ē(s) = \frac{1}{1 + K(s)G(s)}\, r̄(s)
         = \frac{1}{1 + (kp + ki/s)\frac{1}{(s+1)^2}}\, r̄(s)
         = \frac{s(s+1)^2}{s(s+1)^2 + kp s + ki}\, r̄(s)

For a unit step demand (using the final value theorem, and assuming the gains
are such that the closed loop is asymptotically stable):

    \lim_{t→∞} y(t) = \left. \frac{kp s + ki}{s(s+1)^2 + kp s + ki} \right|_{s=0} = 1

and

    \lim_{t→∞} e(t) = \left. \frac{s(s+1)^2}{s(s+1)^2 + kp s + ki} \right|_{s=0} = 0

    ⟹  no steady-state error
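A short numerical check, with illustrative gains (the stability check matters
because the final value theorem only applies when the closed loop is
asymptotically stable):

# PI control of G(s) = 1/(s+1)^2: characteristic polynomial s(s+1)^2 + kp*s + ki.
import numpy as np

kp, ki = 2.0, 1.0                                     # illustrative gains
clcp = np.polyadd([1.0, 2.0, 1.0, 0.0],               # s(s+1)^2 = s^3 + 2s^2 + s
                  [0.0, 0.0, kp, ki])                 # ... + kp*s + ki
poles = np.roots(clcp)
print("closed-loop poles:", np.round(poles, 3))
print("asymptotically stable:", np.all(poles.real < 0))

# If stable, the final value theorem gives
#   e(inf) = s(s+1)^2 / [s(s+1)^2 + kp*s + ki] evaluated at s = 0 = 0/ki = 0,
# confirming the zero steady-state error derived above.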
19
PI control – General Case

In fact, integral action (if stabilizing) always results in zero steady-state
error in the presence of constant disturbances and demands, as we shall now
show. The argument is by contradiction.

Assume that the following system settles down to an equilibrium with

    \lim_{t→∞} e(t) = A ≠ 0.

[Block diagram: demand r(t) → Σ (+/−) → e(t), which feeds the parallel
branches kp and “ki/s”; their sum, together with an input disturbance di(t),
forms u(t), which drives “G(s)”; an output disturbance do(t) is added at the
plant output, and the result is fed back through “H(s)” to the summing
junction.]

If e(t) tends to the non-zero constant A, then the output of the integrator
“ki/s” grows without bound, which contradicts the assumption that the system
settles to an equilibrium (i.e. contradicts the asymptotic stability of the
closed-loop system).

    ⟹  Contradiction

Hence, provided the closed-loop system is asymptotically stable, the presence
of an integrator in the controller ensures that there is no steady-state error.
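This conclusion is easy to check numerically. The sketch below uses a simple
forward-Euler simulation with a hypothetical first-order plant G(s) = 1/(s+1),
a constant input disturbance and illustrative gains; none of these values come
from the notes.

# With a constant disturbance, P control leaves a constant error; P+I does not.
import numpy as np

def simulate(kp, ki, d=0.5, r=1.0, T=30.0, dt=1e-3):
    x, z = 0.0, 0.0                 # plant state and integrator state
    for _ in range(int(T / dt)):
        e = r - x                   # error signal
        u = kp * e + ki * z         # P (+ I) control action
        z += dt * e                 # integrator: z_dot = e
        x += dt * (-x + u + d)      # plant: x_dot = -x + u + d
    return r - x                    # error at the end of the run

print("P only : final error = %.4f" % simulate(kp=4.0, ki=0.0))
print("P + I  : final error = %.4f" % simulate(kp=4.0, ki=2.0))

# Expected: roughly (1 - d)/(1 + kp) = 0.1 with P only, and ~0 once integral
# action is added (the closed loop is asymptotically stable for these gains).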
20
5.5.5 Proportional + Integral + Derivative (PID) Control

    K(s) = kp + \frac{ki}{s} + kd s

[Block diagram: r̄ → Σ (+/−) → error, which feeds the three parallel branches
kp, ki/s and kd s; their sum ū drives G(s) to produce ȳ, with unity negative
feedback.]

A PID controller can combine the advantages of derivative and integral action.
Appropriate tuning of the controller parameters is important, though.
Systematic methods for designing such dynamic controllers will be discussed
towards the end of the course.

Characteristic equation:

    1 + G(s)\left(kp + kd s + \frac{ki}{s}\right) = 0

• Can potentially combine the advantages of both derivative and integral
  action, but can be difficult to “tune”.

There are many empirical rules for tuning PID controllers (Ziegler-Nichols for
example) but to get any further we really need some more theory . . .
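As a sketch of what “tuning” can look like in practice, the snippet below
examines the closed-loop pole locations for a few candidate gain sets, reusing
the example plant G(s) = 1/(s+1)² from the PD and PI sections; the gain values
themselves are purely illustrative.

# PID control of G(s) = 1/(s+1)^2: the characteristic equation
# 1 + G(s)(kp + kd*s + ki/s) = 0 becomes s^3 + (2+kd)s^2 + (1+kp)s + ki = 0.
import numpy as np

def closed_loop_poles(kp, ki, kd):
    return np.roots([1.0, 2.0 + kd, 1.0 + kp, ki])

for kp, ki, kd in [(4.0, 2.0, 0.0), (4.0, 2.0, 2.0), (1.0, 8.0, 0.0)]:
    poles = closed_loop_poles(kp, ki, kd)
    stable = np.all(poles.real < 0)
    print(f"kp={kp}, ki={ki}, kd={kd}: stable = {stable}, "
          f"poles = {np.round(poles, 2)}")

# Integral action removes the steady-state error and derivative action adds
# damping, but a poor combination (here, ki large relative to kp with no
# derivative term) places closed-loop poles in the RHP.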
21
Part IB Paper 6: Information Engineering
LINEAR SYSTEMS AND CONTROL
Ioannis Lestas
HANDOUT 6
That is, the closed-loop system is stable if the Nyquist diagram of the return
ratio doesn’t enclose the point “−1”.
1
Summary

The Nyquist diagram of a feedback system is a plot of the frequency response
of the return ratio, with the imaginary part ℑ(L(jω)) plotted against the real
part ℜ(L(jω)) on an Argand diagram (that is, like the Bode diagram, it is a
plot of an open-loop frequency response).

The real power of the Nyquist stability criterion is that it allows you to
determine the stability of the closed-loop system from the behaviour of the
open-loop Nyquist diagram. This is important from a design point of view, as
it is relatively easy to see how changing K(s) affects
L(s) = H(s)G(s)K(s), but difficult to see how changing K(s) affects
L(s)/(1 + L(s)) directly, for example.
2
Contents
3
6.1 The Nyquist Diagram
4
Time Delay:

    G(s) = e^{-sT}
    G(jω) = e^{-jωT}
    |G(jω)| = 1
    \arg G(jω) = -ωT   (radians)

The Nyquist diagram of a system with transfer function e^{-sT}, i.e. a delay
T, is a circle with centre the origin and radius 1. Note that the phase of
e^{-jωT} tends to −∞ as ω → ∞.

[Figures: Bode plot (gain fixed at 0 dB; phase decreasing without bound, shown
over ω = 0.1/T to 10/T) and Nyquist diagram (the unit circle, starting at
G = 1 when ω = 0 and passing the point marked ω = π/(2T), where the phase is
−90°).]
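A small numerical check of these properties (the value T = 1 is an arbitrary
choice):

# Frequency response of a pure delay: G(jw) = exp(-j*w*T).
import numpy as np

T = 1.0                                    # delay (illustrative value)
w = np.array([0.1, np.pi / (2 * T), 3.0])  # w*T <= pi, so np.angle gives -w*T
G = np.exp(-1j * w * T)

for wi, Gi in zip(w, G):
    print(f"w = {wi:6.3f}: |G| = {abs(Gi):.3f}, "
          f"arg G = {np.angle(Gi):+.3f} rad (expected {-wi * T:+.3f})")

# Every point lies on the unit circle; at w = pi/(2T) the phase is -90 deg.
# For larger w the phase -w*T keeps decreasing, so the locus winds repeatedly
# round the unit circle (np.angle would then return only the principal value).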
5
Time Delay with Lag and Integrator:

    G(s) = \frac{e^{-sT_1}}{s(1 + sT_2)}

    G(jω) = \frac{e^{-jωT_1}}{jω(1 + jωT_2)}

    |G(jω)| = \underbrace{|e^{-jωT_1}|}_{=\,1} × \frac{1}{|jω|} × \frac{1}{|1 + jωT_2|}
6
Second-order lag:

    G(s) = \frac{1}{(1 + sT_1)(1 + sT_2)}

    G(jω) = \frac{1}{(1 + jωT_1)(1 + jωT_2)}

    |G(jω)| = \frac{1}{\sqrt{1 + ω^2 T_1^2}\,\sqrt{1 + ω^2 T_2^2}}

    ω = 0:   |G(jω)| = 1,   \angle G(jω) = 0°
    ω → ∞:   |G(jω)| → 0,   \angle G(jω) → −180°

[Figure: Nyquist locus of G(jω), starting at 1 on the real axis at ω = 0 and
approaching the origin as ω → ∞ with phase tending to −180°.]
Unlike the Bode diagram, there are no detailed rules for sketching Nyquist
diagrams. It suffices to determine the asymptotic behaviour as ω → 0 and
ω → ∞ (using the techniques we have seen in the examples) and then calculate a
few points in between. Note that if G(0) is finite and non-zero, then the
Nyquist locus will always start off by leaving the real axis at right angles
to it.¹ If G(0) is infinite, due to the presence of integrators, then we must
explicitly find the first two terms of the Taylor series expansion of G(jω)
about ω = 0, as in the example with a time delay, a lag and an integrator.

¹ This is since G(jε) = G(0) + jεG′(0) − \frac{ε^2}{2}G″(0) − · · · ≈ G(0) + jεG′(0) for small ε.
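For concreteness, here is that low-frequency expansion worked through for the
delay, lag and integrator example above (my own working, keeping terms up to
order ω^0):

    G(jω) = \frac{e^{-jωT_1}}{jω(1 + jωT_2)}
          ≈ \frac{(1 - jωT_1)(1 - jωT_2)}{jω}
          ≈ \frac{1}{jω} - (T_1 + T_2),

so as ω → 0⁺ the Nyquist locus heads towards −j∞ along a vertical asymptote at
ℜ\{G(jω)\} = −(T_1 + T_2).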
7
6.2 Feedback stability

[Block diagram: the loop r̄(s) → Σ (+/−) → ē(s) → K(s) → G(s) → ȳ(s) is
equivalent (≡) to the single-block loop r̄(s) → Σ (+/−) → ē(s) → L(s) → ȳ(s),
where L(s) is the return ratio.]

The remainder of this handout will focus on the Nyquist Stability Theorem.
This is a very important result in control engineering that facilitates
control system design.
8
6.2.1 Significance of the point “−1”
6.2.2 Example:

Let

    G(s) = \frac{1}{s^3 + s^2 + 2s + 1},   K(s) = k,
    ⟹  L(s) = \frac{k}{s^3 + s^2 + 2s + 1}.

In this example we will investigate the position of the closed-loop poles, and
hence the stability properties of the closed-loop system, as the parameter k
varies. In each case we will also plot the Nyquist diagram of L(jω) and
consider its shape relative to the point −1.

The closed-loop poles are the roots of

    1 + \frac{k}{s^3 + s^2 + 2s + 1} = 0  ⟺  \underbrace{s^3 + s^2 + 2s + 1 + k = 0}_{\text{CLCE}}

and the frequency response of the loop is:

    L(jω) = \frac{k}{j(-ω^3 + 2ω) + (-ω^2 + 1)}

At ω = \sqrt{2}, L(jω) is purely real. That is,

    L(\sqrt{2}\,j) = \frac{k}{j(-2\sqrt{2} + 2\sqrt{2}) - 2 + 1} = -k
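A quick numerical cross-check of the closed-loop poles for the three gains
used on the following pages:

# Closed-loop poles: roots of the CLCE s^3 + s^2 + 2s + 1 + k.
import numpy as np

for k in [0.8, 1.0, 1.2]:
    poles = np.roots([1.0, 1.0, 2.0, 1.0 + k])
    print(f"k = {k}: poles = {np.round(poles, 4)}, "
          f"stable = {np.all(poles.real < 0)}")

# k = 0.8: all poles in the LHP (asymptotically stable); k = 1.0: a pole pair
# on the imaginary axis at +/- 1.4142j (marginally stable); k = 1.2: a pole
# pair in the RHP (unstable), matching where the Nyquist locus crosses the
# real axis (at -k) relative to the point -1.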
9
k = 1

[Figures: Nyquist diagram of L(jω) for k = 1, and the corresponding
closed-loop step response. The locus passes through the point −1 (since
L(\sqrt{2}\,j) = −1), and the step response, L^{-1}\{\frac{L(s)}{1+L(s)} \cdot \frac{1}{s}\},
oscillates without decaying.]

Because 1 + L(\sqrt{2}\,j) = 0, we have 1 + L(s) = 0 at s = ±\sqrt{2}\,j, and
so the closed-loop poles are

    s = 0.0000 + 1.4142j,   0.0000 − 1.4142j,   −1.0000
10
k = 0.8

[Figures: Nyquist diagram of L(jω) for k = 0.8, and the corresponding
closed-loop step response. The locus crosses the negative real axis at −0.8,
to the right of −1, and the step response settles.]

    ⟹  Closed-loop is asymptotically stable

k = 1.2

[Figures: Nyquist diagram of L(jω) for k = 1.2, and the corresponding
closed-loop step response. The locus crosses the negative real axis at −1.2,
to the left of −1, and the step response grows without bound.]

    ⟹  Closed-loop is unstable
11
6.3 Nyquist Stability Theorem (informal version)

We can now give an informal statement of Nyquist’s stability theorem:

“If a feedback system has an asymptotically stable return ratio L(s), then the
feedback system is also asymptotically stable if the Nyquist diagram of L(jω)
leaves the point −1 + j0 on its left.”

This is unambiguous in most cases, and usually still works if L(s) has poles
at the origin or is unstable.

For completeness, a full statement of this theorem will be given later.

Definition: We say that the feedback system (or closed-loop system) is
asymptotically stable if the closed-loop transfer function \frac{L(s)}{1 + L(s)}
is asymptotically stable, that is, if all the poles of \frac{L(s)}{1 + L(s)}
(i.e. the roots of 1 + L(s) = 0) lie in the LHP.
12
6.4 Gain and Phase Margins

L(jω) encircling or going through the −1 point is clearly bad, leading to the
closed-loop system not being asymptotically stable. However, L(jω) coming
close to −1 without encircling it is also undesirable, for two reasons:

    • It implies that a closed-loop pole will be close to the imaginary axis
      and that the closed-loop system will be oscillatory.

    • It implies that a small change in L(jω) (due to modelling errors, for
      example) could cause the locus to encircle −1, i.e. could destabilize
      the closed loop.

Gain and phase margins are widely used measures of how close the return ratio
L(jω) gets to −1.

The gain margin measures how much the gain of the return ratio can be
increased before the closed-loop system becomes unstable.

The phase margin measures how much phase lag can be added to the return ratio
before the closed-loop system becomes unstable.

[Figure: Nyquist diagram of L(jω) with the phase margin θ and the real-axis
crossing −α marked.]

    Gain Margin = \frac{1}{α},    Phase Margin = θ

In this example we have θ = 35° and α = 0.75. Hence
Phase Margin = 35° and Gain Margin = 1/0.75 = 4/3.
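As a sketch of how these margins can be computed numerically from samples of
L(jω), using a hypothetical return ratio L(s) = 4/(s(s+1)(s+2)) that is not
one of the examples in these notes:

# Gain and phase margins from the crossover frequencies of L(jw).
import numpy as np

w = np.logspace(-2, 2, 20000)
s = 1j * w
L = 4.0 / (s * (s + 1.0) * (s + 2.0))

mag = np.abs(L)
phase = np.degrees(np.unwrap(np.angle(L)))   # continuous phase in degrees

# Phase crossover: first frequency at which the phase reaches -180 deg.
i = np.argmax(phase <= -180.0)
gain_margin = 1.0 / mag[i]

# Gain crossover: first frequency at which |L| drops to 1 (0 dB).
j = np.argmax(mag <= 1.0)
phase_margin = 180.0 + phase[j]

print(f"phase crossover w ~ {w[i]:.3f} rad/s, gain margin  ~ {gain_margin:.2f}")
print(f"gain crossover  w ~ {w[j]:.3f} rad/s, phase margin ~ {phase_margin:.1f} deg")

# For this L(s) the phase crossover is at w = sqrt(2), where L = -2/3, so the
# gain margin is 1.5; the phase margin comes out at roughly 12 degrees.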
13
6.4.1 Gain and phase margins from the Bode plot

[Figure: Bode plot of the return ratio L(s), i.e. |L(jω)| in dB and ∠L(jω),
with the gain margin marked at the frequency where ∠L(jω) = −180° (the
distance of |L| below 0 dB) and the phase margin marked at the frequency where
|L(jω)| = 0 dB (the distance of ∠L above −180°).]

The diagram illustrates how the gain margin and phase margin of a feedback
system can also be deduced from the Bode plot of its return ratio L(s).
14
6.5 Performance of feedback systems

    “Small” sensitivity \left|\frac{1}{1 + L(jω)}\right|  ⟺  Good feedback properties

A small magnitude in the frequency response of the sensitivity function
S(s) = \frac{1}{1 + L(s)} leads to good closed-loop properties. In particular:

1) Rejection of disturbances.

[Block diagram: an output disturbance d̄(s) enters at the plant output; the
loop L(s) feeds the output ȳ(s) back (negatively), so S(s) is the closed-loop
transfer function from d̄ to ȳ.]

    Transfer function with f/b = \frac{1}{1 + L(s)} × Transfer function without f/b

2) Reducing the effects of uncertainty.

If L(s) depends on an uncertain parameter (e.g. ζ in L(s) = \frac{1}{s^2 + 2ζs + 1}), then

    \frac{d\left(\frac{L}{1+L}\right)}{\frac{L}{1+L}}
      = \frac{(1+L)\,dL - L\,dL}{(1+L)^2} \cdot \frac{1+L}{L}
      = \underbrace{\frac{1}{1+L}}_{S} \cdot \underbrace{\frac{dL}{L}}_{\text{relative change in open-loop}}

i.e. the relative change in the closed-loop transfer function is S times the
relative change in the open-loop transfer function.
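A quick numerical sanity check of this first-order relation (the particular
value of L used here is arbitrary):

# Relative change in L/(1+L) versus S times the relative change in L.
import numpy as np

L0 = 2.0 - 1.0j                  # L(jw) at some frequency (illustrative)
dL = 0.01 * L0                   # a 1% relative change in the open loop

T0 = L0 / (1.0 + L0)
T1 = (L0 + dL) / (1.0 + L0 + dL)

rel_closed = (T1 - T0) / T0      # relative change in the closed loop
rel_open = dL / L0               # relative change in the open loop (1%)
S0 = 1.0 / (1.0 + L0)            # sensitivity

print("relative change in closed loop  :", rel_closed)
print("S x relative change in open loop:", S0 * rel_open)

# The two agree to first order, so |S(jw)| << 1 means the closed loop is
# insensitive to small modelling errors in L at that frequency.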
15
6.5.1 The relationship between open and closed-loop
frequency responses
16
Nyquist diagram of the return ratio L(s) = \frac{2}{s(1+s)}

[Figure: Nyquist locus of L(jω), with the points at ω = ω1 and ω = ω2 marked,
and the vectors “L(jω1)” (from the origin) and “1 + L(jω1)” (from the point
−1) indicated.]

    ω        L(jω)           1/(1+L(jω))      L(jω)/(1+L(jω))
    1        −1 − j          j                1 − j
    1.732    −.5 − .289j     1.5 + .866j      −.5 − .866j

Closed-loop frequency responses:  S(jω) = \frac{1}{1+L(jω)},   T(jω) = \frac{L(jω)}{1+L(jω)}

[Figures: magnitude plots (in dB) of |S(jω)| and |T(jω)| against ω, with the
frequencies ω1 and ω2 marked, and the block diagram of the loop with an output
disturbance d̄o(s) added at the plant output.]
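The table entries can be reproduced with a few lines of Python/NumPy:

# S and T for L(s) = 2/(s(1+s)) at the two frequencies marked above.
import numpy as np

for w in [1.0, np.sqrt(3.0)]:
    s = 1j * w
    L = 2.0 / (s * (1.0 + s))
    S = 1.0 / (1.0 + L)          # sensitivity 1/(1+L)
    T = L / (1.0 + L)            # complementary sensitivity L/(1+L)
    print(f"w = {w:.3f}: L = {np.round(L, 3)}, 1/(1+L) = {np.round(S, 3)}, "
          f"L/(1+L) = {np.round(T, 3)}")

# At w = 1, |L/(1+L)| = sqrt(2) > 1, i.e. the closed-loop frequency response
# is amplified near this frequency (close to the resonant peak in the sketch).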
17
Small gain and/or phase margins correspond to there being frequencies at
which L(jω) comes close to the point −1. We now see that this also corresponds
to making |1 + L(jω)| small and hence there being resonant peaks in the
closed-loop transfer functions.
So,
Small gain and/or phase margins are
bad for robustness, and
bad for performance.
18
6.6 The Nyquist stability theorem (for asymptotically stable L(s))

On page 12 we gave an informal statement of the Nyquist stability criterion.
The formal statement of the Nyquist stability theorem requires counting
encirclements of the point −1:

As before, we take L(s) to be the return ratio, so that the closed-loop
characteristic equation is 1 + L(s) = 0.

Note that, since (−jω) = (jω)^*, it follows that L(−jω) = L(jω)^*. So the
section of the Nyquist locus for ω < 0 is the reflection in the real axis of
the section for ω > 0.
19
6.6.1 Notes on the Nyquist Stability Theorem:

1. Encirclements must be ‘added algebraically’. If there is 1 clockwise and 1
   anticlockwise encirclement then they ‘add up’ to 0 encirclements.

2. L(s) often has one or more poles at 0 (due to integrators in the plant or
   the controller). The theorem still works, but one has to worry about what
   happens to the graph of L(jω) at ω = 0 (as the locus is no longer a closed
   curve). It can be shown that, if L(s) has n poles at the origin, then the
   Nyquist locus should be completed by adding a large n × 180° arc, in a
   clockwise direction.

3. If L(s) is unstable, and has n_p unstable poles, then the theorem must be
   modified as follows: “The feedback system is stable if and only if the
   ‘full’ Nyquist diagram makes n_p anticlockwise encirclements of the point
   (−1 + 0j).”

These points will be illustrated via examples in the next two pages.
20
Examples: Formal application of the Nyquist stability theorem.

[Figures: Nyquist diagrams sketched on ℑ–ℜ axes, each with the point −1
marked. The solid black line shows the Nyquist diagram for positive
frequencies and the green dashed line shows the diagram for negative
frequencies. The arrow indicates the direction of the plot as the frequency
increases.]
21
Examples: Informal application of the Nyquist stability theorem (based on
note 4).

[Figures: the same Nyquist diagrams sketched on ℑ–ℜ axes, each with the point
−1 marked.]

The diagrams illustrate that the informal statement of the Nyquist stability
criterion is also valid for the examples that have been considered.

Hence: the informal application of the Nyquist stability criterion works for
all these cases.
22