ECE 380 Notes
© Shreyas Sundaram
Acknowledgments
Parts of these course notes are loosely based on lecture notes by Professors
Daniel Liberzon, Sean Meyn, and Mark Spong (University of Illinois), on notes
by Professors Daniel Davison and Daniel Miller (University of Waterloo), and
on parts of the textbook Feedback Control of Dynamic Systems (5th edition) by
Franklin, Powell and Emami-Naeini. I claim credit for all typos and mistakes
in the notes.
The LaTeX template for The Not So Short Introduction to LaTeX 2ε by T. Oetiker et al. was used to typeset portions of these notes.
Shreyas Sundaram
University of Waterloo
Contents
1 Introduction
1.1 Dynamical Systems
1.2 What is Control Theory?
1.3 Outline of the Course
5 Bode Plots
5.1 Rules for Drawing Bode Plots
5.1.1 Bode Plot for Ko
5.1.2 Bode Plot for s^q
5.1.3 Bode Plot for (s/p + 1)⁻¹ and (s/z + 1)
5.1.4 Bode Plot for ((s/ωn)² + 2ζ(s/ωn) + 1)^±1
5.1.5 Nonminimum Phase Systems
10 Properties of Feedback
10.1 Feedforward Control
10.2 Feedback Control
12 PID Control
12.1 Proportional (P) Control
12.2 Proportional-Integral (PI) Control
12.3 Proportional-Integral-Derivative (PID) Control
12.4 Implementation Issues
13 Root Locus
13.1 The Root Locus Equations
13.1.1 Phase Condition
13.2 Rules for Plotting the Positive Root Locus
13.2.1 Start Points and (Some) End Points of the Root Locus
13.2.2 Points on the Real Axis
13.2.3 Asymptotic Behavior of the Root Locus
13.2.4 Breakaway Points
13.2.5 Some Root Locus Plots
13.2.6 Choosing the Gain from the Root Locus
13.3 Rules for Plotting the Negative Root Locus
Chapter 1
Introduction
[Figure: a system block with an input and an output]
The term dynamical system loosely refers to any system that has an internal
state and some dynamics (i.e., a rule specifying how the state evolves in time).
This description applies to a very large class of systems, from automobiles and
aviation to industrial manufacturing plants and the electrical power grid. The
presence of dynamics implies that the behavior of the system cannot be en-
tirely arbitrary; the temporal behavior of the system’s state and outputs can be
predicted to some extent by an appropriate model of the system.
Example 1. Consider a simple model of a car in motion. Let the speed of
the car at any time t be given by v(t). One of the inputs to the system is the
acceleration a(t), applied by the throttle. From basic physics, the evolution of
the speed is given by
dv/dt = a(t).   (1.1)
The quantity v(t) is the state of the system, and equation (1.1) specifies the
dynamics. There is a speedometer on the car, which is a sensor that measures
the speed. The value provided by the sensor is denoted by s(t) = v(t), and this
is taken to be the output of the system.
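As a quick illustration (an addition to the notes, not part of the original development), the model (1.1) can be simulated numerically. The sketch below uses forward-Euler integration with an assumed acceleration profile chosen only for illustration.

    import numpy as np

    # Forward-Euler simulation of dv/dt = a(t) for the car model in Example 1.
    dt = 0.01                          # integration step (seconds)
    t = np.arange(0.0, 20.0, dt)       # time grid
    a = np.where(t < 10.0, 1.0, 0.0)   # assumed input: 1 m/s^2 for 10 s, then coast

    v = np.zeros_like(t)               # state: speed v(t), starting from rest
    for k in range(len(t) - 1):
        v[k + 1] = v[k] + dt * a[k]    # v_{k+1} = v_k + dt * a_k

    s = v.copy()                       # sensor output s(t) = v(t)
    print("final speed:", s[-1])       # roughly 10 m/s for this input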
As shown by the above example, the inputs to physical systems are applied via
actuators, and the outputs are measurements of the system state provided by
sensors.
Other examples of systems: Electronic circuits, DC Motor, Economic Sys-
tems, . . .
[Figure: a system with an input and an output, together with a mapping from the output back to the input (a feedback loop)]
Typically, the mapping from outputs to inputs in the feedback loop is performed
via a computational element known as a controller, which processes the sensor
measurements and converts them to an appropriate actuator signal. The basic
architecture is shown below. Note that the feedback loop typically contains
disturbances that we cannot control.
[Figure: basic feedback control architecture. The desired output is compared (with a minus sign) to the measured output; the controller acts on the difference and produces the control input to the system, which produces the output.]
Example 2 (Cruise Control). Consider again the simple model of a car from
Example 1. A cruise control system for the car would work as follows.
• The speedometer in the car measures the current speed and produces
s(t) = v(t).
• The controller in the car uses these measurements to produce control sig-
nals: if the current measurement s(t) is less than the desired cruising
speed, the controller sends a signal to the throttle to accelerate, and if
s(t) is greater than the desired speed, the throttle is asked to allow the
car to slow down.
The motion of the car might also be affected by disturbances such as wind
gusts, or slippery road conditions. A properly designed cruise control system
will maintain the speed at (or near) the desired value despite these external
conditions.
As we will see later, feedback control has many strengths, and is used to achieve
the following objectives.
• Robustness. Feedback control can work well even when the actual model
of the plant is not known precisely; sufficiently small errors in modeling
can be counteracted by the feedback input.
Example 4 (Teary Eyes on a Cold Day). Blinking and tears are a feedback
mechanism used by the body to warm the surface of the eyeball on cold days –
the insides of the lids warm the eyes. In very cold situations, tears come from
inside the body (where they are warmed), and contain some proteins and salts
that help to prevent the front of the eyes from freezing.
Chapter 2
Review of Complex
Numbers
Consider the polynomial f (x) = x2 + 1. The roots of the polynomial are the
values of x for which f (x) = 0, or equivalently x2 = −1. Clearly, there are no
real numbers that satisfy this equation. To address this problem, let us define
a new “number” j, such that j² = −1. Since this number does not belong to
the set of real numbers, we will call it an imaginary or complex number. With
this number in hand, we can actually generate an infinite set of other complex
numbers.1
• All complex numbers s satisfying Re(s) < 0 are said to lie in the Open
Left Half Plane (OLHP).
• All complex numbers s satisfying Re(s) ≤ 0 are said to lie in the Closed
Left Half Plane (CLHP).
• All complex numbers s satisfying Re(s) > 0 are said to lie in the Open
Right Half Plane (ORHP).
• All complex numbers s satisfying Re(s) ≥ 0 are said to lie in the Closed
Right Half Plane (CRHP).
Note from Euler’s equation that e^{jθ} = cos θ + j sin θ, and so the complex number s can be denoted by s = r e^{jθ}.
Given the complex number s = σ + jω, its complex conjugate is defined as the complex number s* = σ − jω. Note that
s s* = (σ + jω)(σ − jω) = σ² + ω² = |s|² = |s*|² .
In the geometric representation, the complex conjugate of a complex number is
obtained by reflecting the vector about the real axis.
As we will see throughout the course, many interesting properties of control
systems are related to the roots of certain polynomials. The following result
explains why complex numbers are so useful.
Chapter 3
Review of Laplace
Transforms
The Laplace Transform of a signal f(t) is defined as
L{f(t)} = ∫_0^∞ f(t) e^{−st} dt .
This is a function of the complex variable s, so we can write L{f(t)} = F(s).
Note: There are various conditions that f(t) must satisfy in order to have a Laplace Transform. For example, it must not grow faster than e^{st} for some s. In this course, we will only deal with functions that have (unique) Laplace Transforms.¹
Example. Find the Laplace Transform of f (t) = e−at , t ≥ 0, where a ∈ R.
Solution.
F(s) = L{f(t)} = ∫_0^∞ e^{−at} e^{−st} dt = ∫_0^∞ e^{−(s+a)t} dt
     = −(1/(s + a)) e^{−(s+a)t} |_{t=0}^{∞} = 1/(s + a)   (if Re(s + a) > 0) .
[Footnote 1] Uniqueness may be lost at points of discontinuity. More specifically, if f(t) and g(t) are piecewise continuous functions, then F(s) = G(s) for all s implies that f(t) = g(t) everywhere except at the points of discontinuity.
Example. Find the Laplace Transform of the unit step function 1(t), defined
as
1(t) = 1 for t ≥ 0, and 1(t) = 0 otherwise.
Solution.
Example: The Dirac delta function (or impulse function). Consider the
function δ_ε(t) defined as
δ_ε(t) = 1/ε for 0 ≤ t ≤ ε, and 0 otherwise,
where ε is a small positive number.
Figure 3.2: (a) The function δ_ε(t). (b) The impulse function δ(t).
Note that ∫_{−∞}^{∞} δ_ε(t) dt = 1. Now suppose that we let ε → 0; we still have
lim_{ε→0} ∫_{−∞}^{∞} δ_ε(t) dt = 1 .
Note that as ε gets smaller, the function gets narrower, but taller. Define the impulse function
δ(t) = lim_{ε→0} δ_ε(t) = ∞ if t = 0, and 0 otherwise.
Some properties of δ(t):
• ∫_{−∞}^{∞} δ(t) dt = 1.
• Let f(t) be any function. Then ∫_{−∞}^{∞} δ(t − τ) f(t) dt = f(τ). This is called the sifting property.
Note: The functions f (t) in this table are only defined for t ≥ 0 (i.e., we are
assuming that f (t) = 0 for t < 0).
L{f(t − λ)} = ∫_0^∞ f(t − λ) e^{−st} dt = ∫_λ^∞ f(t − λ) e^{−st} dt
            = ∫_0^∞ f(τ) e^{−s(τ+λ)} dτ   (letting τ = t − λ)
            = e^{−sλ} F(s) .
3. Differentiation. L{df/dt} = sF(s) − f(0). More generally,
L{d^n f/dt^n} = s^n F(s) − s^{n−1} f(0) − s^{n−2} f'(0) − ··· − f^{(n−1)}(0).
4. Integration. L{∫_0^t f(τ) dτ} = (1/s) F(s).
Example. What is L{∫_0^t cos τ dτ}?
Solution.
In the above example, we “broke up” the function 1/(s(s + 1)) into a sum of simpler
functions, and then applied the inverse Laplace Transform (by consulting the
Laplace Transform table) to each of them. This is a general technique for
inverting Laplace Transforms.
F(s) = K (s + z1)(s + z2) ··· (s + zm) / ( (s + p1)(s + p2) ··· (s + pn) ) ,
where the −zi are called the zeros of F(s) and the −pi are called the poles of F(s).
Note: Remember these terms, as the poles and zeros of a system will play a
very important role in our ability to control it.
First, suppose each of the poles are distinct and that F (s) is strictly proper.
We would like to write
F(s) = k1/(s + p1) + k2/(s + p2) + ··· + kn/(s + pn) ,
for some constants ki. Multiplying both sides by (s + pi), we obtain
(s + pi) F(s) = k1(s + pi)/(s + p1) + k2(s + pi)/(s + p2) + ··· + ki + ··· + kn(s + pi)/(s + pn) .
Now if we let s = −pi , then all terms on the right hand side will be equal to
zero, except for the term ki . Thus, we obtain
ki = (s + pi) F(s) |_{s = −pi} .
Example. What is the partial fraction expansion of F(s) = (s + 5)/(s³ + 3s² − 6s − 8)?
Solution.
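As a numerical cross-check on the hand calculation (an addition to the notes, assuming SciPy is available), scipy.signal.residue computes the residues ki and the poles directly from the coefficients of F(s).

    from scipy.signal import residue

    # F(s) = (s + 5) / (s^3 + 3s^2 - 6s - 8)
    num = [1, 5]
    den = [1, 3, -6, -8]

    # residue returns the residues, the poles, and any direct polynomial term
    r, p, k = residue(num, den)
    for ri, pi in zip(r, p):
        print(f"k = {ri.real:+.4f} at pole s = {pi.real:+.4f}")
    # Each printed pair corresponds to a term k_i / (s - pole) in the expansion.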
The partial fraction expansion when some of the poles are repeated is obtained
by following a similar procedure, but it is a little more complicated. We will
not worry too much about this scenario here. One can also do a partial fraction
expansion of nonstrictly proper functions by first dividing the denominator into
the numerator to obtain a constant and a strictly proper function, and then
applying the above partial fraction expansion. The details will be covered in a
homework problem.
Example. F(s) = 1/(s(s + 1)). What is lim_{t→∞} f(t)?
Solution.
Why do we need the poles of sF (s) to be in the OLHP in order to apply the
Final Value Theorem? First, note from the partial fraction expansion of F (s)
that
F(s) = k1/s + k2/(s + p2) + k3/(s + p3) + ··· + kn/(s + pn)
⇔ f(t) = k1·1(t) + k2 e^{−p2 t} + k3 e^{−p3 t} + ··· + kn e^{−pn t} .
Note that if F (s) does not have a pole at s = 0, then the constant k1 will simply
be zero in the above expansion. Based on the above expansion, if one of the
poles has a positive real part, the corresponding exponential term will explode,
and thus f (t) will have no final value! On the other hand, if all poles have
negative real parts, all of the exponential terms will go to zero, leaving only the
term k1·1(t) (corresponding to k1/s in the partial fraction expansion). Thus, the
asymptotic behavior of f (t) is simply k1 , and this is obtained by calculating
lims→0 sF (s).
One must be careful about applying the Final Value Theorem; if the signal f (t)
does not settle down to some constant steady state value, the theorem might
yield nonsensical results. For example, consider the function f (t) = sin t, with
Laplace transform
F(s) = 1/(s² + 1) .
The function sF (s) does not have all poles in the OLHP (it has two poles on the
imaginary axis). However, if we forgot to check this before applying the Final
Value Theorem, we would get
lim sF (s) = 0,
s→0
and would mistakenly assume that f (t) → 0. Clearly f (t) has no steady state
value (it constantly oscillates between −1 and 1).
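As a sanity check on the Final Value Theorem (an added sketch, assuming SciPy is available), note that F(s) = 1/(s(s+1)) is the Laplace Transform of the step response of H(s) = 1/(s+1), so f(t) can be simulated and compared with lim_{s→0} sF(s) = 1.

    import numpy as np
    from scipy.signal import TransferFunction, step

    # f(t) has Laplace Transform F(s) = 1/(s(s+1)), i.e. f is the step response of 1/(s+1).
    H = TransferFunction([1], [1, 1])
    t, f = step(H, T=np.linspace(0, 10, 500))

    print("simulated f(t) for large t:", f[-1])   # approaches 1
    print("lim_{s->0} s F(s)         :", 1.0)     # sF(s) = 1/(s+1) -> 1 as s -> 0
    # For F(s) = 1/(s^2 + 1) (f(t) = sin t), sF(s) -> 0 even though f(t) never settles,
    # which is why the pole-location check on sF(s) must be done first.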
Chapter 4
Linear Time-Invariant
Systems
[Figure: a system block with an input and an output]
In this class, we will primarily be dealing with the analysis of linear time-
invariant causal systems (if the system is nonlinear, we will linearize it). We
will be interested in how the system responds to certain types of inputs. The
impulse response of a system is the output of the system when the input to
the system is δ(t), and is denoted by the signal h(t). The step response is the
output of the system when the input to the system is 1(t).
Note: The impulse response for causal systems satisfies h(t) = 0 for t < 0.
the system says that the output of the system due to the input δ(t − τ ) will be
h(t − τ ). Using the Principle of Superposition, we see that the output of the
system when u(t) is the input will be the infinite sum (or integral) of weighted
and shifted impulse responses (when all initial conditions are equal to zero):
y(t) = ∫_0^∞ u(τ) h(t − τ) dτ .
Note that the output of the system is just the convolution of the signals u(t)
and h(t)! This is a well known relationship for linear time-invariant systems.
Applying Laplace Transforms to both sides of the above equation, we obtain
Y (s) = H(s)U (s), or equivalently,
H(s) = Y(s)/U(s) .
The function H(s) is the ratio of the Laplace Transform of the output to the
Laplace Transform of the input (when all initial conditions are zero), and it is
called the transfer function of the system. Note that this transfer function is
independent of the actual values of the inputs and outputs – it tells us how any
input gets transformed into the output. It is a property of the system itself. We
will frequently represent systems in block diagrams via their transfer functions:
[Block diagram: input U(s) → H(s) → output Y(s)]
Note: The transfer function is the Laplace Transform of the impulse response
of the system.
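This relationship can be verified numerically; the sketch below (an addition, assuming SciPy is available) compares the convolution of an input with the impulse response against the output of the transfer function simulated with lsim.

    import numpy as np
    from scipy.signal import TransferFunction, impulse, lsim

    H = TransferFunction([2], [1, 3])               # example system H(s) = 2/(s + 3)
    t = np.linspace(0, 5, 2001)
    dt = t[1] - t[0]

    _, h = impulse(H, T=t)                          # impulse response h(t)
    u = np.sin(2 * t)                               # an arbitrary test input

    y_conv = np.convolve(u, h)[: len(t)] * dt       # y(t) = (u * h)(t), discretized
    _, y_sim, _ = lsim(H, U=u, T=t)                 # direct simulation of Y(s) = H(s)U(s)
    print("max difference:", np.max(np.abs(y_conv - y_sim)))   # small (discretization error)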
Now consider a system described by a linear differential equation of the form
y^{(n)} + a_{n−1} y^{(n−1)} + ··· + a_1 ẏ + a_0 y = b_m u^{(m)} + ··· + b_1 u̇ + b_0 u .
Taking Laplace Transforms of both sides (assuming that all initial conditions are zero), we get
(s^n + a_{n−1}s^{n−1} + ··· + a_1 s + a_0) Y(s) = (b_m s^m + b_{m−1}s^{m−1} + ··· + b_1 s + b_0) U(s) ,
Suppose the input is u(t) = e^{s_0 t} for some complex number s_0 (with u(t) = 0 for t < 0). Since both u(t) and h(t) are zero for t < 0, the convolution expression becomes
y(t) = ∫_0^t u(t − τ) h(τ) dτ = ∫_0^t e^{s_0(t−τ)} h(τ) dτ = e^{s_0 t} ∫_0^t e^{−s_0 τ} h(τ) dτ .
The quantity ∫_0^t e^{−s_0 τ} h(τ) dτ looks a lot like the Laplace Transform of the signal
h(τ ) evaluated at s = s0 , except that the upper limit on the integration is t
instead of ∞. Suppose that we examine what happens when t becomes very
large (i.e., as t → ∞). In this case, if the integral exists, we can write y(t) ≈ e^{s_0 t} H(s_0) for large t.
Thus, the asymptotic response (if H(s0 ) exists) to a complex exponential input
is that same complex exponential, scaled by the transfer function evaluated at
s0 . This gives us one potential way to identify the transfer function of a given
“black box” system: apply the input es0 t for several different values of s0 , and
use that to infer H(s).
Problem: If Re(s_0) > 0, then e^{s_0 t} blows up very quickly, and if Re(s_0) < 0, it decays very quickly. What about s_0 = jω? This would solve the problem. How do we apply e^{jωt} in practice?
Solution: Sinusoids. Recall the identity cos ωt = (e^{jωt} + e^{−jωt})/2. Using the Principle of Superposition and the property derived above, the input (1/2)e^{jωt} asymptotically produces the output
y1(t) = (1/2) H(jω) e^{jωt} ,
and the input (1/2)e^{−jωt} asymptotically produces
y2(t) = (1/2) H(−jω) e^{−jωt} .
Note that H(jω) is just a complex number, and so we can write it in the polar form H(jω) = |H(jω)| e^{j∠H(jω)}, where |H(jω)| is the magnitude of H(jω) and ∠H(jω) is the phase of H(jω) (they will both depend on the choice of ω). Similarly, H(−jω) is just the complex conjugate of H(jω),¹ and so we can write H(−jω) = |H(jω)| e^{−j∠H(jω)}. Adding the two outputs above, the response to cos ωt is
y(t) = y1(t) + y2(t) = |H(jω)| cos(ωt + ∠H(jω)) .
[Footnote 1] You should be able to prove this by using the fact that the numerator and denominator of H(s) are polynomials with real coefficients.
In other words, the (steady-state) response to the sinusoid cos ωt is a scaled and
phase-shifted version of the sinusoid! This is called the frequency response
of the system, and will be a useful fact to identify and analyze linear systems.
Later, we will be plotting the magnitude and phase of the system as we sweep
ω from 0 to ∞; this is called the Bode plot of the system.
As an example, consider a linear system with transfer function
H(s) = ωn² / (s² + 2ζωn s + ωn²) ,
where ζ and ωn are some real numbers. We will study systems of this form in
more detail later in the course. Since the denominator of this transfer function
has degree 2, it is called a second order system. The magnitude of this function
at s = jω is given by
|H(jω)| = ωn² / | −ω² + 2ζωn ω j + ωn² |
        = ωn² / √( (ωn² − ω²)² + 4ζ²ωn²ω² )
        = 1 / √( (1 − (ω/ωn)²)² + 4ζ²(ω/ωn)² ) ,
and the phase is
∠H(jω) = ∠[ ωn² / ( −ω² + 2ζωn ω j + ωn² ) ]
       = ∠[ 1 / ( −(ω/ωn)² + 2ζ(ω/ωn) j + 1 ) ]
       = − tan⁻¹( 2ζ(ω/ωn) / ( 1 − (ω/ωn)² ) ) .
Since these quantities are a function of ω/ωn, we can plot them vs ω/ωn for vari-
ous values of ζ. Note that in the following plots, we used a logarithmic scale
for the frequency. This is commonly done in order to include a wider range
of frequencies in our plots. The intervals on logarithmic scales are known as
decades.
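The magnitude and phase expressions above can be evaluated directly; the sketch below (an added illustration, using NumPy) computes |H(jω)| and ∠H(jω) over a logarithmic frequency grid for a few assumed values of ζ.

    import numpy as np

    wn = 1.0                                   # normalize so the axis is w/wn
    w = np.logspace(-2, 2, 400)                # logarithmic frequency grid (decades)

    for zeta in [0.1, 0.5, 1.0]:
        H = wn**2 / ((1j * w)**2 + 2 * zeta * wn * (1j * w) + wn**2)
        mag = np.abs(H)                        # |H(jw)|
        phase = np.angle(H)                    # phase of H(jw), in radians
        print(f"zeta = {zeta}: DC gain = {mag[0]:.3f}, peak |H| = {mag.max():.3f}")
    # Plotting mag (on a dB scale) and phase against w reproduces the curves described above.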
• The magnitude of the transfer function for low frequencies (i.e., near ω =
0) is called the low frequency gain or the DC gain.
• The bandwidth of the system is the frequency at which the magnitude
drops to 1/√2 times the DC gain, and is denoted by ωBW. For the sec-
ond order system considered above, the plot shows that the bandwidth is
approximately equal to ωn .
• The resonant peak is the difference between maximum value of the fre-
quency response magnitude and the DC gain, and is denoted by Mr .
The concepts of bandwidth and DC gain will play an important role in this
course, and you should be comfortable manipulating and deriving these quan-
tities. While we identified these metrics using the magnitude plot of a second
order system, these quantities can be used to discuss the frequency response
of any transfer function. For simple first order systems, we can calculate these
quantities explicitly, as shown in the following example.
Example. Find the bandwidth and DC gain of the system with transfer function H(s) = b/(s + a).
Solution. The DC gain is the magnitude of the transfer function when s = jω,
with ω = 0. In this case, the DC gain is
|H(0)| = b/a .
The bandwidth is the frequency ωBW at which the magnitude |H(jωBW)| is equal to 1/√2 of the DC gain. In this case, we have
(1/√2)(b/a) = |H(jωBW)| = b / |jωBW + a| = b / √(a² + ωBW²) .
Squaring both sides and rearranging yields 2a² = a² + ωBW², and thus ωBW = a.
Chapter 5
Bode Plots
A Bode plot is a plot of the magnitude and phase of a linear system, where
the magnitude is plotted on a logarithmic scale, and the phase is plotted on
a linear scale. Specifically, consider the linear system with transfer function
H(s) = N(s)/D(s). For the moment, assume that all poles and zeros of the transfer function are real (to avoid cumbersome notation), and write
H(s) = K (s + z1)(s + z2) ··· (s + zm) / ( (s + p1)(s + p2) ··· (s + pn) ) .
When working with Bode plots, we will find it more convenient to write this
system as:
H(s) = Ko (s/z1 + 1)(s/z2 + 1) ··· (s/zm + 1) / ( (s/p1 + 1)(s/p2 + 1) ··· (s/pn + 1) ) ,
where Ko = K (z1 z2 ··· zm)/(p1 p2 ··· pn). This is called the Bode form, and the reason for doing
this is that the DC gain of the above transfer function can now immediately be
obtained as Ko , which will be useful when drawing Bode plots. We will handle
more general transfer functions after the following discussion.
The magnitude of H(jω) is given by
|H(jω)| = |Ko| · |jω/z1 + 1| |jω/z2 + 1| ··· |jω/zm + 1| / ( |jω/p1 + 1| |jω/p2 + 1| ··· |jω/pn + 1| ) .
Now if we take the logarithm of both sides (any base is acceptable, but base 10 is conventional), we get
log|H(jω)| = log|Ko| + Σ_{i=1}^{m} log|jω/zi + 1| − Σ_{i=1}^{n} log|jω/pi + 1| .
Thus, the log-magnitudes of the individual factors add together to give the log-magnitude of the overall transfer function. This is quite useful, and the reason
for introducing the logarithm. In keeping with convention, we will multiply both
sides of the above equation by 20, and work with the units in decibels; this only
scales the magnitudes, but does not change the additivity due to the logarithm.
Note that the phase of H(jω) already satisfies the additivity property:
∠H(jω) = ∠Ko + Σ_{i=1}^{m} ∠(jω/zi + 1) − Σ_{i=1}^{n} ∠(jω/pi + 1) ,
so the phase also decomposes into a sum over the individual factors. Each factor of H(s) is one of the following basic types:
• Ko (a constant)
• s^q (corresponding to zeros at the origin if q is a positive integer, or poles at the origin if q is a negative integer)
• (s/p + 1)⁻¹ and (s/z + 1) (corresponding to real poles and zeros)
• ((s/ωn)² + 2ζ(s/ωn) + 1)^±1 (corresponding to complex conjugate zeros if the exponent is 1, and complex conjugate poles if the exponent is −1).
Example. Consider H(s) = 3s³(s + 2)(s² + 2s + 4) / ( (s + 1)(s² + 3s + 4) ). Write this in Bode form, and
write the logarithm of the magnitude of H(jω) in terms of the logarithm of the
magnitudes of each of the factors. Also write the phase of H(jω) in terms of
the phases of each of the factors.
Since the log-magnitude and phase are obtained by simply adding together the
log-magnitudes and phases of the individual factors, we can draw the Bode plot
for the overall system by drawing the Bode plots for each of the individual
factors, and then adding the plots together.
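The additivity is easy to check numerically; in this added sketch (assuming NumPy is available), the log-magnitude of a product of two simple factors is compared with the sum of the individual log-magnitudes.

    import numpy as np

    w = np.logspace(-2, 3, 300)
    s = 1j * w

    # Two example factors in Bode form: a real zero at -2 and a real pole at -10.
    F1 = s / 2 + 1            # (s/2 + 1)
    F2 = 1 / (s / 10 + 1)     # (s/10 + 1)^{-1}

    def db(x):
        return 20 * np.log10(np.abs(x))

    combined = db(F1 * F2)           # 20 log|H(jw)| of the product
    summed = db(F1) + db(F2)         # sum of the individual 20 log-magnitudes
    print("max difference (dB):", np.max(np.abs(combined - summed)))   # ~0 up to rounding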
The log-magnitude of s^q at s = jω is 20 log|(jω)^q| = 20q log ω. On a log scale, this is simply a straight line with slope 20q (i.e., 20q dB per decade), going through the point 0 dB when ω = 1.
The phase of s^q at s = jω is
∠(jω)^q = q ∠(jω) = q π/2 ,
which is just a horizontal line at qπ/2.
Example. Draw the Bode plot of s².
Solution.
Example. Draw the Bode plot of 1/s³.
Solution.
For the factor (s/z + 1) (corresponding to a real zero at s = −z), the log-magnitude at s = jω is
20 log|jω/z + 1| = 20 log √( 1 + (ω/z)² ) .
The point ω = z is called the breakpoint. These straight lines were derived
based on values of ω that were much smaller or much larger than the breakpoint,
and thus they are called asymptotes. Note that they are only approximations
to the shape of the actual magnitude plot – for example, the actual value of 20 log|jω/z + 1| at ω = z is equal to 3 dB. However, the approximations will suffice for us to obtain some general intuition about Bode plots. The phase of this factor is
∠(jω/z + 1) = tan⁻¹(ω/z) .
The magnitude and phase plots for a factor corresponding to a pole follow the
same rules as the plot for the zero, except that everything is negated. Specifi-
c Shreyas Sundaram
5.1 Rules for Drawing Bode Plots 31
The Bode plot for the factor (j ωp + 1)−1 looks like this:
Example. Draw the Bode plot for H(s) = 10(s + 10) / ( (s + 1)(s + 100) ).
Solution.
5.1.4 Bode Plot for ((s/ωn)² + 2ζ(s/ωn) + 1)^±1
We have already seen what the magnitude and phase plots look like for a second
order system of this form. To derive general rules for drawing this, note that
the magnitude of this function at s = jω is given by
20 log|(jω/ωn)² + 2ζ(ω/ωn)j + 1| = 20 log √( (1 − (ω/ωn)²)² + 4ζ²(ω/ωn)² ) .
For ω ≪ ωn, we have ω/ωn ≈ 0, and so 20 log|(jω/ωn)² + 2ζ(ω/ωn)j + 1| ≈ 0. For ω ≫ ωn, we have
√( (1 − (ω/ωn)²)² + 4ζ²(ω/ωn)² ) ≈ √( (ω/ωn)⁴ ) = (ω/ωn)² ,
and so 20 log|(jω/ωn)² + 2ζ(ω/ωn)j + 1| ≈ 40 log ω − 40 log ωn. This is a line of slope 40 dB/decade passing through the point 0 dB when ω = ωn. The general magnitude curve for 20 log|(jω/ωn)² + 2ζ(ω/ωn)j + 1| thus looks like:
The phase of (jω/ωn)² + 2ζ(ω/ωn)j + 1 is given by
∠( (jω/ωn)² + 2ζ(ω/ωn)j + 1 ) = tan⁻¹( 2ζ(ω/ωn) / ( 1 − (ω/ωn)² ) ) .
For ω ≪ ωn, the argument of the arctan function is almost 0, and so the phase curve starts at 0 for small ω. For ω = ωn, the argument is ∞, and so the phase curve passes through π/2 when ω = ωn. For ω ≫ ωn, the argument of the arctan function approaches 0 from the negative side, and so the phase curve approaches π for large values of ω. Just as in the first order case, we will take the phase curve transitions to occur one decade before and after ωn. This produces a
The Bode plot for the factor ((jω/ωn)² + 2ζ(ω/ωn)j + 1)⁻¹ looks just like the Bode plot for the factor (jω/ωn)² + 2ζ(ω/ωn)j + 1, except that everything is flipped:
Example. Draw the Bode Plot of H(s) = (s + 1)(s² + 3s + 100) / ( s²(s + 10)(s + 100) ).
Solution.
So far we have been looking at the case where all zeros and poles are in the
CLHP. Bode plots can also be drawn for systems that have zeros or poles in
the RHP – however, note that for systems that have RHP poles, the steady
state response to a sinusoidal input will not be a sinusoid (there won’t even be a
steady state response, as the output will blow up). This does not change the fact
that the transfer function will have a magnitude and phase at every frequency
(since the transfer function is simply a complex number at every frequency ω).
Transfer functions with zeros in the right half plane are called nonminimum
phase systems, and those with all zeros and poles in the CLHP are called
minimum phase systems.
To gain intuition about how the Bode plot of a nonminimum phase system
compares to that of a minimum phase system, let us see how the Bode plots of
the terms H1 (s) = s + 1 and H2 (s) = s − 1 compare. First, note that
|H1(jω)| = |jω + 1| = √(1 + ω²) = |H2(jω)| ,
and thus the magnitude plots of the two terms are identical. To compare the
phase contribution, it is useful to examine the complex number representation
From this, we see that ∠H2 (jω) = ∠(jω − 1) = π − ∠H1 (jω). Thus, the phase
plots of the two terms look like this:
An alternative method to draw the phase plot of H2 (s) = s−1 is to first convert
it to Bode form to obtain H2 (s) = −1(−s + 1), where we now have a gain of −1
in front. This gain contributes nothing to the log-magnitude, but it contributes
a phase of π. The phase of −s+1 is the negative of the phase of s+1 (since they
are complex conjugates), and once again, we obtain that the phase of H2 (jω)
is π − ∠H1 (jω).
Example. Draw the Bode plots for the systems
H1(s) = 10 (s + 1)/(s + 10) ,   H2(s) = 10 (s − 1)/(s + 10) .
Solution.
As we can see from the above example, the magnitudes of the two transfer
functions do not depend on whether the zero is in RHP or the LHP. However,
the phase plots are quite different. Based on the above analysis, we see that the
phase contribution of a zero in the right half plane is always at least as large (in
absolute terms) as the phase contribution of a zero in the left half plane – this
is the reason for calling systems with such zeros (or poles) nonminimum phase.
Note that for minimum phase systems, the magnitude plot uniquely determines
the transfer function, but for nonminimum phase systems, we need both the
magnitude plot and the phase plot in order to determine the transfer function.
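A numerical comparison of H1(s) = 10(s+1)/(s+10) and H2(s) = 10(s−1)/(s+10) (an added sketch, assuming SciPy is available) confirms that the magnitude plots coincide while the phase plots differ.

    import numpy as np
    from scipy.signal import TransferFunction, bode

    H1 = TransferFunction([10, 10], [1, 10])    # 10(s + 1)/(s + 10): minimum phase
    H2 = TransferFunction([10, -10], [1, 10])   # 10(s - 1)/(s + 10): nonminimum phase

    w = np.logspace(-2, 3, 300)
    w, mag1, ph1 = bode(H1, w=w)    # magnitudes in dB, phases in degrees
    w, mag2, ph2 = bode(H2, w=w)

    print("max |mag1 - mag2| (dB):", np.max(np.abs(mag1 - mag2)))   # essentially zero
    print("phase of H1 at low freq (deg):", ph1[0])                 # near 0
    print("phase of H2 at low freq (deg):", ph2[0])                 # near 180: the RHP zero adds phase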
Chapter 6
Modeling and Block Diagram Manipulation
With the mathematical foundations from the previous chapters in hand, we are
now ready to move on to the modeling of control systems.
The key equation governing the model of many mechanical systems is Newton’s
Law: F = ma. In this equation, F represents the vector sum of all the forces
acting on a body, m represents the mass of the body, and a represents the
acceleration of the body. The forces acting on the body can be generated by an
outside entity (as an input to the system), or by springs and dampers attached
to the body.
Example: Mass-Spring-Damper System.
The main modeling technique for electrical systems is to use Kirchhoff’s Laws.
When the system involves rotation about a point, the system dynamics are
governed by a modified form of Newton’s Law: τ = Jθ̈. Here, τ is the sum
of all external torques about the center of mass, J is the moment of inertia of
the body, and θ is the angular position of the body (so that θ̈ is the angular
acceleration).
Example: A DC motor consists of an electrical component and a rotational
component. The input voltage to the electrical component induces a current,
which then provides a torque to the motor shaft via a magnetic field. This
torque causes the shaft to rotate. In turn, this torque also induces a voltage
drop (called the back emf) in the electrical circuit. Derive the overall system
equations, and find the transfer function from the input voltage to the angular
velocity (ω = θ̇) of the DC motor.
All of the examples considered above yielded models that involved linear differential equations of the form
y^{(n)} + a_{n−1} y^{(n−1)} + ··· + a_1 ẏ + a_0 y = b_m u^{(m)} + b_{m−1} u^{(m−1)} + ··· + b_1 u̇ + b_0 u .
In practice, many systems are actually nonlinear, and there is a whole set of
tools devoted to controlling such systems. One technique is to linearize the
system around an operating point, where the nonlinearities are approximated
by linear functions. In the rest of the course, we restrict our attention to lin-
ear systems, and assume that these linearization techniques have been applied
to any nonlinearities in the system. We will now study how to manipulate
interconnections of linear systems.
Series Connection. In this case, the output of one system (with transfer function H1(s)) is the input to a second system (with transfer function H2(s)). The overall transfer function from U(s) to Y(s) can be obtained as follows:
Y1(s) = H1(s)U(s),   Y(s) = H2(s)Y1(s)   ⇒   Y(s) = H2(s)H1(s)U(s) ,
and thus the overall transfer function is H(s) = Y(s)/U(s) = H2(s)H1(s).
Parallel Connection. In this case, two (or more) systems obtain the same
input U (s), and their outputs are summed together to produce the output of
the overall system.
The overall transfer function from U (s) to Y (s) can be obtained as follows:
Y1(s) = H1(s)U(s),   Y2(s) = H2(s)U(s),   Y(s) = Y1(s) + Y2(s)   ⇒   Y(s) = (H1(s) + H2(s)) U(s) ,
and thus the overall transfer function is H(s) = Y(s)/U(s) = H1(s) + H2(s).
Feedback Connection. In this case, the output of one system feeds into the
input of a second system, and the output of this second system feeds back into
the input of the first system (perhaps in conjunction with another signal).
The overall transfer function from R(s) to Y(s) can be obtained by noting that
Y(s) = H1(s)E(s),   Y2(s) = H2(s)Y(s),   E(s) = R(s) − Y2(s) ,
which yields
H(s) = Y(s)/R(s) = H1(s) / ( 1 + H1(s)H2(s) ) .
Note: The basic feedback control system shown at the beginning of this section is a special case of the negative feedback configuration shown above, with H1(s) = P(s)C(s) and H2(s) = 1. The basic feedback control system is thus said to be in “unity feedback” configuration.
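These composition rules are easy to mechanize with polynomial arithmetic; the sketch below (an addition, using NumPy only, with example transfer functions chosen for illustration) forms the series, parallel, and negative-feedback interconnections of two rational transfer functions represented by numerator/denominator coefficient arrays.

    import numpy as np

    # H1(s) = n1/d1, H2(s) = n2/d2, coefficients in descending powers of s.
    n1, d1 = np.array([1.0]), np.array([1.0, 1.0])   # H1 = 1/(s + 1)
    n2, d2 = np.array([2.0]), np.array([1.0, 3.0])   # H2 = 2/(s + 3)

    # Series: H = H2*H1
    series = (np.polymul(n2, n1), np.polymul(d2, d1))

    # Parallel: H = H1 + H2 = (n1*d2 + n2*d1)/(d1*d2)
    parallel = (np.polyadd(np.polymul(n1, d2), np.polymul(n2, d1)), np.polymul(d1, d2))

    # Negative feedback (H1 forward, H2 in feedback): H = H1/(1 + H1*H2) = n1*d2/(d1*d2 + n1*n2)
    feedback = (np.polymul(n1, d2), np.polyadd(np.polymul(d1, d2), np.polymul(n1, n2)))

    for name, (num, den) in [("series", series), ("parallel", parallel), ("feedback", feedback)]:
        print(name, "num:", num, "den:", den)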
Based on the above configurations, we can derive the following rules to modify
block diagrams.
Yi (s) = H1i (s)U1 (s) + H2i (s)U2 (s) + · · · + Hmi (s)Um (s), i ∈ {1, 2, . . . , p}.
Example. Write the output Y (s) of the following system in terms of the inputs
R(s) and D(s).
[Block diagram: the reference R(s) enters a summing junction with a minus sign on the feedback path; the error drives C(s); the output of C(s) is added to the disturbance D(s) and drives P(s), producing Y(s), which is fed back to the summing junction.]
Chapter 7
Step Responses of Linear Systems
Consider a first order system with transfer function
H(s) = b0 / (s + a0) .
This system has a single pole at s = −a0. The step response of this system is obtained by calculating
Y(s) = H(s)U(s) = ( b0/(s + a0) ) · (1/s) = (b0/a0) ( 1/s − 1/(s + a0) ) ,
which produces
y(t) = (b0/a0)·1(t) − (b0/a0)·e^{−a0 t} ,   t ≥ 0.   (7.1)
Consider two cases:
• If a0 > 0, then the pole of the transfer function is in the OLHP, and
e^{−a0 t} → 0 as t → ∞. The step response thus reaches a steady-state value of b0/a0 (which is the DC gain of the system). The response is said to be
stable.
• If a0 < 0, the pole of the transfer function is in the ORHP, and e^{−a0 t} → ∞
as t → ∞. The step response therefore goes to ∞, and there is no steady
state value. The response is said to be unstable.
Figure 7.1: Step Response of First Order System. (a) Pole in OLHP. (b) Pole
in ORHP.
We will now study two measures of performance that can be used to evaluate
the step-response.
The rise time of a first order system with transfer function H(s) = b0/(s + a0) is
tr = (ln 9)/a0
(you should be able to easily prove this). Note that the larger a0 is, the smaller
the rise time becomes. This can also be seen from the actual step-response
(7.1): a larger value of a0 means that the term e−a0 t dies out faster, leading the
response to get to its steady state quicker.
Note: The quantity τ = 1/a0 is called the time-constant of the system, and (7.1) is commonly written as
y(t) = (b0/a0)·1(t) − (b0/a0)·e^{−t/τ} .
A larger time-constant means that the system takes longer to settle to its steady
state. To find the time-constant in practice, note that at t = τ we have
y(τ) = (b0/a0)(1 − e^{−1}) ≈ 0.63 (b0/a0) .
Since the steady state value of y(t) is b0/a0 (the DC gain of the system), we see that
the time-constant is the point in time where the output of the system reaches
63% of its final value.
One can also readily verify that the bandwidth of the system H(s) is ωBW = a0 (i.e., |H(ja0)| = (1/√2)|H(0)|). Thus, we get the following rule of thumb: a larger bandwidth corresponds to a faster step response (smaller time-constant and rise time).
• 0 ≤ ζ < 1: The system has two complex poles in the CLHP (they will be
in the OLHP if ζ > 0). The system is said to be underdamped.
• ζ = 1: The system has two repeated poles at s = −ωn . The system is said
to be critically damped.
• ζ > 1: The system has two poles on the negative real axis. The system is
said to be overdamped.
Figure 7.2: Location of Poles in Complex Plane for Different Ranges of ζ. (a)
0 ≤ ζ < 1. (b) ζ = 1. (c) ζ > 1.
Define σ = ζωn and ωd = ωn√(1 − ζ²), in which case the poles are at s = −σ ± jωd. The transfer function can be
written as
H(s) = ωn² / ( (s + σ + jωd)(s + σ − jωd) ) = ωn² / ( (s + σ)² + ωd² ) .
The Laplace Transform of the step response is given by Y(s) = H(s)·(1/s), and using a Laplace Transform table, we see that
y(t) = 1 − e^{−σt} ( cos ωd t + (σ/ωd) sin ωd t ) .   (7.2)
• When ζ = 0, we have σ = 0 and ωd = ωn , and the response becomes
y(t) = 1 − cos ωn t, which oscillates between 0 and 2 for all t.
• When ζ = 1, we have σ = ωn and ωd = 0, and the response becomes y(t) = 1 − e^{−ωn t}(1 + ωn t); one can verify that this does not have any
oscillations at all, and asymptotically reaches the steady state value of 1
(this is the DC gain H(0)).
The response for intermediate values of ζ falls in between these two extremes.
The behavior of y(t) for different values of ζ (with a fixed value of ωn ) is shown
in Fig. 7.3.
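The curves in Fig. 7.3 can be regenerated from equation (7.2); the sketch below (an added illustration, assuming SciPy is available and an arbitrary fixed ωn) simulates the step response for several damping ratios.

    import numpy as np
    from scipy.signal import TransferFunction, step

    wn = 2.0                                  # fixed natural frequency (assumed value)
    T = np.linspace(0, 10, 1000)

    for zeta in [0.0, 0.2, 0.5, 0.7, 1.0]:
        H = TransferFunction([wn**2], [1, 2 * zeta * wn, wn**2])
        t, y = step(H, T=T)
        print(f"zeta = {zeta}: peak value = {y.max():.3f}")
    # Smaller zeta gives a larger peak (more overshoot and oscillation);
    # zeta = 1 gives no overshoot, consistent with the discussion above.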
As we can see from the responses for various values of ζ, a larger value of ζ
corresponds to fewer oscillations and less overshoot, and thus ζ is said to rep-
resent the amount of damping in the response (a higher value of ζ corresponds
to more damping). ζ is called the damping ratio. Examining Fig. 7.2, we see
that the angle between the imaginary axis and the vector connecting the pole to the origin is given by sin⁻¹(ζ). The magnitude of the vector is given by ωn.
Thus, as ζ increases with a fixed ωn , the poles move on the perimeter of a circle
toward the real axis.
The quantity ωn is called the undamped natural frequency, since it repre-
sents the location of the pole on the imaginary axis when there is no damping
(i.e., when ζ = 0). The quantity ωd is called the damped natural frequency,
since it represents the imaginary part of the pole when there is damping. When
ζ = 0, we have ωd = ωn . If ζ > 0 and we increase ωn , the poles move further to
the left in the OLHP. This causes the term σ = ζωn to increase, which causes
the term e−σt to die out faster in the step response (equation (7.2)). Thus,
increasing ωn has the effect of making the oscillations die out faster (and, in
general, making the system respond faster). Recall from the magnitude plot
of the second order frequency response from Section 4.3 that the bandwidth of
the system is approximately equal to ωn . Thus, we obtain the following rule of
thumb for the underdamped system.
7.2.3 Discussion
Based on the above analysis, we can come to the following general conclusions
about how the poles of second order systems affect their step responses.
• If the poles are complex, the step response will have oscillations and over-
shoot.
• As the poles move toward the real axis while maintaining a fixed distance
from the origin, the amount of oscillation decreases (leading to less over-
shoot) – this corresponds to the damping ratio ζ increasing (with a fixed
ωn ),
• If ωn increases, the poles move further left in the OLHP, and the oscilla-
tions die out faster.
• If all poles are on the negative real axis, there will be no oscillation and
no overshoot.
Note that if the transfer function has one or more poles in the open right half
plane, the step response will contain a term that goes to ∞ as t → ∞, and thus
there is no steady state response.
Based on these general rules, the step responses for various pole locations are
shown below:
Chapter 8
Performance of Second
Order Step Responses
We have seen that the step response of a second order system with transfer
function
H(s) = ωn² / (s² + 2ζωn s + ωn²)
will have different characteristics, depending on the location of the poles (which
are a function of the values ζ and ωn ). We will now introduce some measures
of performance for these step responses.
• Rise time (tr ): The time taken for the response to first get close to its
final value. This is typically measured as the time taken for the response
to go from 10% of its final value to 90% of its final value.
• Settling time (ts ): The time taken for the response to stay close to its
final value. We will take this to be the time after which the response stays
within 2% of its final value. Other measures of “closeness” can also be
used (e.g., 1% instead of 2%, etc.).
• Peak value (Mp) and overshoot OS: This is the largest value of the step
response. One can also calculate the overshoot as the maximum amount
that the response overshoots its final value, divided by the final value
(often expressed as a percentage).
• Peak time (tp ): This is the time at which the response hits its maximum
value (this is the peak of the overshoot).
Figure 8.1: Step response of second order system, with tr , Mp , tp and ts shown.
An explicit formula for the rise time is somewhat hard to calculate. However,
we notice that rise time increases with ζ and decreases with ωn . The best linear
fit to the curve gives us the approximation
tr ≈ (2.16ζ + 0.6)/ωn ,
which is reasonably accurate for 0.3 < ζ < 0.8. A cruder approximation is
obtained by finding a best fit curve with ζ = 0.5, yielding
tr ≈ 1.8/ωn .
Keep in mind that these are only approximations, and iteration may be required
if design specifications are not met.
Recall that the step response will have overshoot only if the poles are complex
– this corresponds to the case 0 ≤ ζ < 1, and the corresponding system is called
underdamped. The overshoot is defined as
OS = ( Mp − y(∞) ) / y(∞) ,
where Mp is the peak value of the step response, and y(∞) is the final (steady
state) value of the step response. Note that if we want to express the overshoot
as a percentage, we simply multiply OS by 100, and denote this by %OS.
We can explicitly calculate the peak value of the step response (and the corre-
sponding time) as follows. First, note that at the peak value, the derivative of
the step response is zero. Recalling that the step response is given by
y(t) = 1 − e^{−σt} ( cos ωd t + (σ/ωd) sin ωd t ) ,
calculate dy/dt and set it equal to zero to obtain
0 = dy/dt = σ e^{−σt} ( cos ωd t + (σ/ωd) sin ωd t ) − e^{−σt} ( −ωd sin ωd t + σ cos ωd t )
          = (σ²/ωd) e^{−σt} sin ωd t + ωd e^{−σt} sin ωd t
          = ( σ²/ωd + ωd ) e^{−σt} sin ωd t .
From this, we obtain sin ωd t = 0, which occurs for t = 0, π/ωd, 2π/ωd, · · · . The first peak therefore occurs at the peak time
tp = π/ωd .
Substituting this into the expression for y(t), we obtain the value of the step response at the peak to be
Mp = y(π/ωd) = 1 − e^{−σπ/ωd} ( cos π + (σ/ωd) sin π ) = 1 + e^{−σπ/ωd} .
Substituting y(∞) = 1, σ = ζωn, ωd = ωn√(1 − ζ²), and the above expression for Mp into the definition of overshoot, we obtain
OS = ( 1 + e^{−σπ/ωd} − 1 ) / 1 = e^{−ζωn π / (ωn√(1 − ζ²))} = e^{−πζ/√(1 − ζ²)} .
The peak time and (2%) settling time are given by
tp = π / (ωn√(1 − ζ²)) = π/ωd ,   ts ≈ 4/(ζωn) .
Note that the expressions for tr and ts are only approximations; more detailed
expressions for these quantities can be obtained by performing a more careful
analysis. For our purposes, the above relationships will be sufficient to obtain
intuition about the performance of second order systems. In particular, we note the following trends:
• The rise time tr and peak time tp decrease as ωn (and hence ωd) increases.
• The overshoot OS depends only on ζ, and decreases as ζ increases.
• The settling time ts decreases as σ = ζωn (the magnitude of the real part of the poles) increases.
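The approximations above can be compared against a simulated step response; the sketch below (an addition, assuming SciPy is available, with an arbitrary design point) measures tr, OS, tp, and ts directly from simulation and prints the corresponding formula values.

    import numpy as np
    from scipy.signal import TransferFunction, step

    zeta, wn = 0.5, 4.0                              # example design point (assumed)
    H = TransferFunction([wn**2], [1, 2 * zeta * wn, wn**2])
    t, y = step(H, T=np.linspace(0, 6, 20000))
    y_final = 1.0                                    # DC gain of H(s)

    t10 = t[np.argmax(y >= 0.1 * y_final)]
    t90 = t[np.argmax(y >= 0.9 * y_final)]
    tr_sim = t90 - t10
    os_sim = (y.max() - y_final) / y_final
    tp_sim = t[np.argmax(y)]
    outside = np.where(np.abs(y - y_final) > 0.02 * y_final)[0]   # 2% settling criterion
    ts_sim = t[outside[-1]] if outside.size else 0.0

    wd, sigma = wn * np.sqrt(1 - zeta**2), zeta * wn
    print("rise time :", tr_sim, "formula ~", 1.8 / wn)
    print("overshoot :", os_sim, "formula  ", np.exp(-np.pi * zeta / np.sqrt(1 - zeta**2)))
    print("peak time :", tp_sim, "formula  ", np.pi / wd)
    print("settling  :", ts_sim, "formula ~", 4 / sigma)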
For example, suppose that we want the step response to have overshoot less than some value ŌS, rise time less than some value t̄r, settling time less than some value t̄s, and peak time less than some t̄p. From the expressions given at the end of the previous section, these constraints on the step response can be translated into constraints on ζ and ωn:
e^{−πζ/√(1 − ζ²)} ≤ ŌS   ⇒ find a lower bound for ζ from this ,
1.8/ωn ≤ t̄r   ⇒ ωn ≥ 1.8/t̄r ,
4/σ ≤ t̄s   ⇒ σ ≥ 4/t̄s ,
π/ωd ≤ t̄p   ⇒ ωd ≥ π/t̄p .
Noting that σ is the magnitude of the real part of the poles of the transfer function, ωn is the distance of the complex poles from the origin, ωd is the imaginary part of the poles, and sin⁻¹(ζ) is the angle between the imaginary axis and the vector joining the origin to the pole, we can formulate appropriate regions in the complex plane
for the pole locations in order to satisfy the given specifications.
Figure 8.2: Pole Locations in the Complex Plane. (a) Overshoot. (b) Rise time.
(c) Settling time. (d) Peak time. (e) Combined specifications.
Example. We would like our second order system to have a step response with
overshoot OS ≤ 0.1 and settling time ts ≤ 2. What is the region in the complex
plane where the poles can be located?
Solution.
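As a numerical companion to the solution above (an addition, not the notes' worked solution), the overshoot specification can be inverted for ζ and the settling-time bound converted to a bound on σ.

    import numpy as np

    OS_max = 0.1          # overshoot specification
    ts_max = 2.0          # settling time specification (seconds)

    # OS = exp(-pi*zeta/sqrt(1 - zeta^2)) <= OS_max  =>  solve for the minimum zeta.
    L = -np.log(OS_max)                     # need pi*zeta/sqrt(1 - zeta^2) >= L
    zeta_min = L / np.sqrt(np.pi**2 + L**2)

    # ts ~ 4/(zeta*wn) = 4/sigma <= ts_max  =>  sigma >= 4/ts_max.
    sigma_min = 4.0 / ts_max

    print("need zeta  >=", zeta_min)        # ~0.59
    print("need sigma >=", sigma_min)       # real part of the poles must be <= -2
    print("angle from imaginary axis >= %.1f deg" % np.degrees(np.arcsin(zeta_min)))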
So far, we have been studying second order systems with transfer functions of
the form
H(s) = ωn² / (s² + 2ζωn s + ωn²) .
This transfer function produced a step response y(t), with Laplace Transform Y(s) = H(s)·(1/s).
Note: All of the analysis so far also holds if we consider the transfer function
H(s) = K ωn² / (s² + 2ζωn s + ωn²) ,
for some constant K. In this case, the step response is simply scaled by K,
but the time characteristics (such as rise time, settling time, peak time and
overshoot) of the response are not affected.
We will now consider what happens when we add zeros or additional poles to
the system.
Suppose we add a zero at s = −z to the second order system, yielding the transfer function
Hz(s) = ( (1/z)s + 1 ) ωn² / (s² + 2ζωn s + ωn²) .
Note that the reason for writing the zero term as (1/z)s + 1 instead of s + z is to maintain a DC gain of 1 for the transfer function (just so that we can compare it to the original transfer function). We can split the above transfer function into the sum of two terms:
Hz(s) = ωn²/(s² + 2ζωn s + ωn²) + (1/z) s ωn²/(s² + 2ζωn s + ωn²) = H(s) + (1/z) sH(s) ,
where H(s) is the transfer function of the original system (without the zero). Denote the Laplace Transform of the step response for this system by Yz(s) = Hz(s)·(1/s). Using the above decomposition of Hz(s), we obtain
Yz(s) = H(s)·(1/s) + (1/z) sH(s)·(1/s) = Y(s) + (1/z) sY(s) ,
and thus, in the time domain,
yz(t) = y(t) + (1/z) ẏ(t) .
Thus the step response of the second order system with a zero at s = −z is
given by the step response of the original system plus a scaled version of the
derivative of the step response of the original system. A sample plot for z > 0
(i.e., corresponding to the zero being in the OLHP) is shown in Fig. 8.3.
Figure 8.3: Step response of second order system with transfer function Hz(s) = ((1/z)s + 1)ωn²/(s² + 2ζωn s + ωn²), z > 0.
Note that as z increases (i.e., as the zero moves further into the left half plane), the term 1/z becomes smaller, and thus the contribution of the term ẏ(t) decreases
(i.e., the step response of this system starts to resemble the step response of the
original system). From the above figure, the effect of a LHP zero is to increase
the overshoot, decrease the peak time, and decrease the rise time; the settling
time is not affected too much. In other words, a LHP zero makes the step
response faster. One can also see this by thinking about the effect of the zero
on the bandwidth of the system. Since the presence of the term ((1/z)s + 1) in
the numerator of the transfer function will only increase the magnitude of the
Bode plot at frequencies above ω = z, we see that adding a zero will generally
increase the bandwidth of the system. This fits with our rule of thumb that a
larger bandwidth corresponds to a faster response.
Now consider what happens if z is negative (which corresponds to the zero being
in the ORHP). In this case, the derivative ẏ(t) is actually subtracted from y(t)
to produce yz (t). A sample plot is shown in Fig. 8.4. Note that the response
can actually go in the opposite direction before rising to its steady state value.
This phenomenon is called undershoot.
Figure 8.4: Step response of second order system with transfer function Hz(s) = ((1/z)s + 1)ωn²/(s² + 2ζωn s + ωn²), z < 0.
Recall from Section 5.1.5 that zeros in the right half plane are called nonmini-
mum phase – this is due to the fact that the phase of the system has a large swing between its maximum and minimum values (as compared to the phase plot of the
system with the same magnitude plot, but with all zeros in the OLHP). The
effect of a RHP zero is to slow down the system, and perhaps introduce under-
shoot. However, the magnitude plot of the system is affected in the same way
as with a LHP zero: the bandwidth increases. Thus, for a nonminimum phase
zero, our rule of thumb about a larger bandwidth implying a faster response no
longer holds.
It is also worth noting that if z = 0, the above analysis no longer holds directly.
Adding a zero at s = 0 produces the transfer function Hz (s) = sH(s), and the
step response of this system is purely the derivative of the step response of the
original system. However, the steady state value of this step response is zero,
not 1 (note that this agrees with the DC gain of the new system).
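The decomposition yz(t) = y(t) + (1/z)ẏ(t) can be checked by simulation; the sketch below (an added illustration, assuming SciPy is available, with arbitrary ζ, ωn, and z) compares the step response without a zero, with a LHP zero (z > 0), and with a RHP zero (z < 0).

    import numpy as np
    from scipy.signal import TransferFunction, step

    zeta, wn = 0.5, 2.0                     # assumed example values
    den = [1, 2 * zeta * wn, wn**2]
    T = np.linspace(0, 8, 4000)

    for label, z in [("no zero", None), ("LHP zero (z = 1)", 1.0), ("RHP zero (z = -1)", -1.0)]:
        if z is None:
            num = [wn**2]                   # H(s)
        else:
            num = [wn**2 / z, wn**2]        # Hz(s) = ((1/z)s + 1) * wn^2 / (...)
        t, y = step(TransferFunction(num, den), T=T)
        print(f"{label}: min = {y.min():+.3f}, max = {y.max():.3f}")
    # The LHP zero increases the peak (more overshoot); the RHP zero makes the
    # response dip negative first (undershoot), as described above.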
Figure 8.5: Sample distribution of poles in OLHP, with dominant poles encir-
cled.
From the partial fraction expansion, we know that a pole −p contributes a term
of the form e−pt to the step response. If p is large, this exponential term dies out
quickly. Suppose that the set of poles for a system can be divided into a cluster
of poles that are closest to the origin, and another cluster that are very far away
in comparison (e.g., 5 times further away). The poles that are closest to the
origin are called the dominant poles of the system. The exponential terms in
the step response corresponding to the far away poles will die out very quickly
in relation to the exponential terms corresponding to the dominant poles. Thus,
the system effectively behaves as a lower order system with only the dominant
poles. This is one way to approximate a high order system by a lower order
system (such as a first or second order system).
Since each additional pole contributes an additional exponential term that must
die out before the system reaches its final value, each additional pole increases
the rise time of the system. In other words, adding a pole to the system makes
the step response more sluggish. Again, one can see this by looking at the
bandwidth of the system: including an additional term of the form ((1/p)s + 1) in
the denominator of the transfer function causes the Bode plot to decrease faster
after ω = p, which generally decreases the bandwidth of the system.
Example. Plot the step response of the system with transfer function
H(s) = 4 / ( ((1/p)s + 1)(s² + 2s + 4) ) ,
for various values of p.
To summarize:
• Adding a LHP zero to the transfer function makes the step response faster
(decreases the rise time and the peak time) and increases the overshoot.
The bandwidth of the system is increased.
• Adding a RHP zero to the transfer function makes the step response
slower, and can make the response undershoot.1 The bandwidth of the
system is increased.
• Adding a LHP pole to the transfer function makes the step response slower.
The bandwidth of the system is decreased.
• If the system has a cluster of poles and zeros that are much closer (5 times
or more) to the origin than the other poles and zeros, the system can be
approximated by a lower order system with only those dominant poles and
zeros.
1 Actually, one can show that the step response for a given (stable) linear time-invariant
system will have undershoot if and only if the transfer function has an odd number of zeros in
the ORHP. See the paper “On Undershoot and Nonminimum Phase Zeros” by M. Vidyasagar
(IEEE Transactions on Automatic Control, vol. 31, no. 5, May 1986, p. 440) for the
derivation of this result.
Chapter 9
Stability of Linear
Time-Invariant Systems
Recall from our discussion on step responses that if the transfer function contains
poles in the open right half plane, the response will go to infinity. However, if
all poles of the transfer function are in the open left half plane, the response
will settle down to the DC gain of the transfer function. To describe these
characteristics of systems, we define the following terms:
Recall that this system is stable if all of the poles are in the OLHP, and these
poles are the roots of the polynomial D(s). It is important to note that one
should not cancel any common poles and zeros of the transfer func-
tion before checking the roots of D(s). Specifically, suppose that both of the
polynomials N (s) and D(s) have a root at s = a, for some complex (or real)
number a. One must not cancel out this common zero and pole in the transfer
function before testing for stability. The reason for this is that, even though the
pole will not show up in the response to the input, it will still appear as a re-
sult of any initial conditions in the system, or due to additional inputs entering
the system (such as disturbances). If the pole and zero are in the CRHP, the
system response might blow up due to these initial conditions or disturbances,
even though the input to the system is bounded, and this would violate BIBO
stability.
To see this a little more clearly, consider the following example. Suppose the
transfer function of a linear system is given by
H(s) = (s − 1)/(s² + 2s − 3) = N(s)/D(s) .
Noting that s2 + 2s − 3 = (s + 3)(s − 1), suppose we decided to cancel out the
common pole and zero at s = 1 to obtain
H(s) = 1/(s + 3) .
Based on this transfer function, we might (erroneously) conclude that the system
is stable, since it only has a pole in the OLHP. What we should actually do is
look at the original denominator D(s), and correctly conclude that the system
is unstable because one of the poles is in the CRHP. To see why the pole-
zero cancellation hides instability of the system, first write out the differential
equation corresponding to the transfer function to obtain
ÿ + 2ẏ − 3y = u̇ − u .
Take the Laplace Transform of both sides, taking initial conditions into account:
s²Y(s) − s y(0) − ẏ(0) + 2sY(s) − 2y(0) − 3Y(s) = sU(s) − u(0) − U(s) .
Rearrange this equation to obtain
Y(s) = [ (s − 1)/(s² + 2s − 3) ] U(s) + [ (s + 2)/(s² + 2s − 3) ] y(0) + [ 1/(s² + 2s − 3) ] ẏ(0) − [ 1/(s² + 2s − 3) ] u(0) ,
where the first coefficient, (s − 1)/(s² + 2s − 3), is H(s).
Note that the denominator polynomial in each of the terms on the right hand
sides is equal to D(s) (the denominator of the transfer function). For simplicity,
suppose that y(0) = y0 (for some real number y0 ), ẏ(0) = 0 and u(0) = 0. The
partial fraction expansion of the term [ (s + 2)/(s² + 2s − 3) ] y0 is given by
[ (s + 2)/(s² + 2s − 3) ] y0 = (y0/4) ( 1/(s + 3) + 3/(s − 1) ) ,
and this contributes the term (y0/4)(e^{−3t} + 3e^{t}), t ≥ 0, to the response of the system. Note that the e^{t} term blows up, and thus the output of the system blows up if y0 is not zero, even if the input to the system is bounded.
If all poles of the transfer function are in the OLHP (before any pole-zero
cancellations), all initial conditions will decay to zero, and not cause the output
of the system to go unbounded.
The above example demonstrates the following important fact: to determine stability, one must examine the roots of the original denominator polynomial D(s), before performing any pole-zero cancellations.
Suppose that we write P(s) = np(s)/dp(s) and C(s) = nc(s)/dc(s) for some polynomials np(s), dp(s), nc(s), dc(s). The transfer function from r to y is given by
H(s) = P(s)C(s) / ( 1 + P(s)C(s) ) = [ np(s)nc(s) / (dp(s)dc(s)) ] / [ 1 + np(s)nc(s) / (dp(s)dc(s)) ] = np(s)nc(s) / ( dp(s)dc(s) + np(s)nc(s) ) .
The denominator of the above transfer function is called the characteristic polynomial of the closed loop system, and we have the following result: the closed loop system is stable if and only if all roots of the characteristic polynomial dp(s)dc(s) + np(s)nc(s) are in the OLHP.
Note that the above test captures unstable pole/zero cancellations: if there is an unstable pole s = a in dp(s) or dc(s), and that same root appears in either np(s) or nc(s), then s = a would be a root of dp(s)dc(s) + np(s)nc(s) and would thus cause the characteristic polynomial to fail the test for stability.
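The characteristic polynomial test can be carried out numerically; the sketch below (an addition, using NumPy, with an example plant and controller assumed for illustration) forms dp(s)dc(s) + np(s)nc(s) and inspects its roots, without cancelling any common factors first.

    import numpy as np

    # Example (assumed): P(s) = (s - 1)/((s + 3)(s - 1)), C(s) = 5.
    num_p, den_p = np.array([1.0, -1.0]), np.polymul([1.0, 3.0], [1.0, -1.0])
    num_c, den_c = np.array([5.0]), np.array([1.0])

    # Closed-loop characteristic polynomial: dp*dc + np*nc
    char_poly = np.polyadd(np.polymul(den_p, den_c), np.polymul(num_p, num_c))
    roots = np.roots(char_poly)
    print("characteristic polynomial:", char_poly)
    print("closed-loop poles:", roots)
    print("stable:", np.all(roots.real < 0))
    # The unstable pole/zero pair at s = 1 is NOT cancelled before the test,
    # in keeping with the warning earlier in this chapter.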
We would like to determine whether all poles of this transfer function are in the
OLHP. One way to do this would be to actually find the poles of the system
(by finding the roots of D(s)). However, finding the roots of a high-degree
polynomial can be complicated, especially if some of the coefficients are symbols
rather than numbers (this will be the case when we are designing controllers, as
we will see later). Furthermore, note that we do not need the actual values of
the poles in order to determine stability – we only need to know if all poles are
in the OLHP. Can we determine this from the coefficients of the polynomial?
(s + p1)(s + p2) = s² + (p1 + p2)s + p1 p2
(s + p1)(s + p2)(s + p3) = s³ + (p1 + p2 + p3)s² + (p1 p2 + p1 p3 + p2 p3)s + p1 p2 p3
(s + p1)(s + p2)(s + p3)(s + p4) = s⁴ + (p1 + p2 + p3 + p4)s³ + (p1 p2 + p1 p3 + p1 p4 + p2 p3 + p2 p4 + p3 p4)s² + (p1 p2 p3 + p1 p2 p4 + p1 p3 p4 + p2 p3 p4)s + p1 p2 p3 p4
...
Now, suppose that all roots are in the OLHP (i.e., p1 > 0, p2 > 0, . . . , pn > 0).
This means that a0 > 0, a1 > 0, . . . , an−1 > 0 as well. This leads us to the following conclusion: if all roots of a monic polynomial D(s) are in the OLHP, then all of its coefficients are positive. Equivalently, if any coefficient of D(s) is zero or negative, then at least one root lies outside the OLHP.
Note that the above condition is necessary, but it is not sufficient in general.
We will see this in the following examples.
Examples.
• D(s) = s³ − 2s² + s + 1:
• D(s) = s⁴ + s² + s + 1:
• D(s) = s³ + 2s² + 2s + 1:
For D(s) = a_n s^n + a_{n−1} s^{n−1} + ··· + a_1 s + a_0, the Routh array is built as follows:
s^n :     a_n       a_{n−2}   a_{n−4}   ···
s^{n−1} : a_{n−1}   a_{n−3}   a_{n−5}   ···
s^{n−2} : b_1       b_2       b_3       ···
s^{n−3} : c_1       c_2       c_3       ···
  ⋮
s^0 :     (a single entry)
The first two rows of this array contain the coefficients of the polynomial. The numbers b1, b2, b3, . . . on the third row are defined as:
b1 = −(1/a_{n−1}) det[ a_n  a_{n−2} ; a_{n−1}  a_{n−3} ] = ( a_{n−1}a_{n−2} − a_n a_{n−3} ) / a_{n−1} ,
b2 = −(1/a_{n−1}) det[ a_n  a_{n−4} ; a_{n−1}  a_{n−5} ] = ( a_{n−1}a_{n−4} − a_n a_{n−5} ) / a_{n−1} ,
b3 = −(1/a_{n−1}) det[ a_n  a_{n−6} ; a_{n−1}  a_{n−7} ] = ( a_{n−1}a_{n−6} − a_n a_{n−7} ) / a_{n−1} ,
...
Notice the pattern: the i–th element on the third row is obtained by taking
the negative determinant of the matrix consisting of the first column and the
(i+1)–th column in the first two rows, divided by the first element in the second
row. The third row will have one less element than the first two rows.
Similarly, the numbers c1 , c2 , c3 , . . . on the fourth row are defined as:
\[
c_1 = -\frac{1}{b_1}\begin{vmatrix} a_{n-1} & a_{n-3}\\ b_1 & b_2\end{vmatrix} = \frac{b_1 a_{n-3}-a_{n-1}b_2}{b_1}, \qquad
c_2 = -\frac{1}{b_1}\begin{vmatrix} a_{n-1} & a_{n-5}\\ b_1 & b_3\end{vmatrix} = \frac{b_1 a_{n-5}-a_{n-1}b_3}{b_1},
\]
\[
c_3 = -\frac{1}{b_1}\begin{vmatrix} a_{n-1} & a_{n-7}\\ b_1 & b_4\end{vmatrix} = \frac{b_1 a_{n-7}-a_{n-1}b_4}{b_1}, \qquad \ldots
\]
Again, notice the pattern: the i–th element on the fourth row is obtained by
taking the negative determinant of the matrix consisting of the first column and
the (i + 1)–th column of the preceding two rows, divided by the first element
in the immediately preceding row.
We continue this process until the (n + 1)–th row (corresponding to s0 in the
Routh array), which will have only one entry. After this process is complete,
we can use the following result1 to check stability of D(s).
The number of sign changes in the first column of the Routh array
indicates the number of roots that are in the ORHP. All roots are in
the OLHP if and only if there are no sign changes in the first column
(i.e., either all entries are positive, or all are negative).
1 For a proof of this result, see "Elementary Proof of the Routh-Hurwitz Test" by G. Meinsma (Systems and Control Letters, vol. 25, no. 4, 1995, pp. 237-242).
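As a concrete companion to the definitions above, here is a small numerical sketch (in Python, not part of the original notes) that builds the Routh array and counts the sign changes in its first column. It assumes a nonzero leading coefficient and that no zeros appear in the first column; the special cases that require extra care are not handled.

```python
import numpy as np

def routh_array(coeffs):
    """Build the Routh array for a polynomial with coefficients
    [a_n, a_{n-1}, ..., a_0] (highest power first)."""
    n = len(coeffs) - 1                 # degree of the polynomial
    cols = (n // 2) + 1
    table = np.zeros((n + 1, cols))
    table[0, :len(coeffs[0::2])] = coeffs[0::2]   # a_n, a_{n-2}, ...
    table[1, :len(coeffs[1::2])] = coeffs[1::2]   # a_{n-1}, a_{n-3}, ...
    for i in range(2, n + 1):
        for j in range(cols - 1):
            # negative determinant of the 2x2 block formed by the first and
            # (j+2)-th columns of the two preceding rows, divided by the
            # first element of the immediately preceding row
            table[i, j] = (table[i-1, 0] * table[i-2, j+1]
                           - table[i-2, 0] * table[i-1, j+1]) / table[i-1, 0]
    return table

def num_rhp_roots(coeffs):
    """Number of sign changes in the first column = number of ORHP roots."""
    col = routh_array(coeffs)[:, 0]
    signs = np.sign(col)
    return int(np.sum(signs[:-1] * signs[1:] < 0))

print(num_rhp_roots([1, 2, 2, 1]))   # D(s) = s^3 + 2s^2 + 2s + 1 -> 0 (stable)
print(num_rhp_roots([1, -2, 1, 1]))  # D(s) = s^3 - 2s^2 + s + 1  -> 2 ORHP roots
```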
There may be times when we would like to test a polynomial to see if the
real parts of all of its roots are less than a certain value; so far, we have been
considering this value to be zero (i.e., we have been testing to see if all roots lie
in the OLHP). Suppose that we would like to see if all roots of a polynomial
D(s) have real part less than λ. Consider the polynomial D̄(s) = D(s + λ). It
is easy to see that the roots of D̄(s) are the roots of D(s) shifted by λ: if s = a
is a root of D(s), then s = a − λ is a root of D̄(s). Thus, all roots of D(s) have
real parts less than λ if and only if all roots of D̄(s) are in the OLHP, and we
can use the Routh-Hurwitz test on D̄(s) to see whether all roots of D(s) lie to
the left of λ.
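As a small illustrative check of this idea (not one of the worked examples from the lecture): to decide whether all roots of D(s) = s² + 4s + 5 have real part less than λ = −1, form
\[ \bar D(s) = D(s-1) = (s-1)^2+4(s-1)+5 = s^2+2s+2 . \]
Since this is a second order polynomial with all coefficients positive, all of its roots are in the OLHP, so all roots of D(s) have real part less than −1 (indeed, the roots of D(s) are −2 ± j).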
In control system design, we will frequently run across cases where some of the
coefficients of the polynomial are parameters for us to design or analyze (such
as control gains, or unknown values for system components). We can use the
Routh-Hurwitz test to determine ranges for these parameters so that the system
will be stable.
Example. In the feedback control loop shown below, determine the range of
values for K for which the closed loop transfer function (from R(s) to Y (s)) will
be stable.
[Block diagram: unity feedback loop in which the error R(s) − Y(s) drives the gain K, followed by the plant 1/((s+6)(s+3)(s−1)), producing Y(s).]
Solution.
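A sketch of one possible computation (assuming the block diagram is read as the unity feedback loop described above): the closed loop characteristic polynomial is
\[ (s+6)(s+3)(s-1) + K = s^3+8s^2+9s+(K-18) , \]
and the corresponding Routh array is
\[ \begin{array}{c|cc} s^3 & 1 & 9\\ s^2 & 8 & K-18\\ s^1 & \frac{72-(K-18)}{8}=\frac{90-K}{8} & \\ s^0 & K-18 & \end{array} \]
All first-column entries are positive exactly when K − 18 > 0 and 90 − K > 0, i.e., 18 < K < 90.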
Example. In the feedback loop shown below, determine the values of K and a
for which the closed loop system is stable.
[Block diagram: unity feedback loop in which the error R(s) − Y(s) drives the controller K(s+a)/(s+1), followed by the plant 1/(s(s+2)(s+3)), producing Y(s).]
Solution.
Chapter 10
Properties of Feedback
With the tools to analyze linear systems in hand, we now turn our attention
to feedback control. Recall the unity feedback control loop that we looked at
earlier:
[Block diagram: unity feedback loop. The error R(s) − Y(s) drives the controller C(s), which drives the plant P(s); the disturbance D(s) is added to the plant output to form Y(s).]
In the above figure, P (s) is the plant, C(s) is the controller, r is the reference
signal, y is the output of the system, and d is a disturbance affecting the
control loop (e.g., wind on an airplane, noise in a sensor, faults in an electrical
grid, etc.).
Recall from Chapter 1 that there are three main properties that a good control
system should have:
• Tracking. The output of the system should behave like the reference
signal. This property is studied by examining the transfer function from
the reference input to the output.
• Disturbance Rejection. The disturbances should affect the output as
little as possible. This property is studied by examining the transfer func-
tion from the disturbance to the output.
• Robustness. The output should track the reference signal even if the
plant model is not exactly known, or changes slightly. This property is
studied by examining the sensitivity of the transfer function to perturba-
tions in the plant, as we show below.
Sensitivity. Let Try (s) denote the transfer function of the control system from
the reference r to the output y. Now suppose that we allow the plant P (s) to
change by a small amount δP (s) to become the new plant P̄ (s) = P (s) + δP (s).
This will cause the transfer function Try (s) to also change by a small amount
δTry (s), to become T̄ry (s) = Try (s)+δTry (s). The question is: how does δTry (s)
compare to δP (s)? More specifically, the sensitivity is defined as the fractional
(or percentage) change in the transfer function as related to the fractional change
in the plant model:
\[ S(s) = \frac{\;\dfrac{\delta T_{ry}(s)}{T_{ry}(s)}\;}{\;\dfrac{\delta P(s)}{P(s)}\;} = \frac{\delta T_{ry}(s)}{\delta P(s)}\cdot\frac{P(s)}{T_{ry}(s)} . \]
Note that for small perturbations δP(s) and δTry(s), the expression δTry(s)/δP(s) is
the derivative of Try(s) with respect to P(s). In order to have good robustness,
we want the sensitivity to be as small as possible.
we want the sensitivity to be as small as possible.
We will analyze the tracking, disturbance rejection and robustness properties for
both a feedforward control configuration and a feedback control configuration,
and see why feedback is an important concept for control system design.
10.1 Feedforward Control

[Block diagram: feedforward configuration. R(s) drives the controller C(s), which drives the plant P(s); the disturbance D(s) is added to the plant output to form Y(s). There is no feedback path.]
We can examine how well feedforward control satisfies the properties listed
above.
• Tracking. The transfer function from the reference to the output is ob-
tained by assuming that the disturbance is not present (i.e., take d = 0),
and is given by
\[ T_{ry}(s) = \frac{Y(s)}{R(s)} = P(s)C(s) . \]
10.2 Feedback Control
• Tracking. The transfer function from the reference to the output is ob-
tained by assuming that the disturbance is not present (i.e., take d = 0),
and is given by
\[ T_{ry}(s) = \frac{Y(s)}{R(s)} = \frac{P(s)C(s)}{1+P(s)C(s)} . \]
Choosing C(s) = K, for a large constant K, makes Try(s) ≈ 1, so the output tracks the reference well.
• Disturbance Rejection. The transfer function from the disturbance to the output is
\[ T_{dy}(s) = \frac{Y(s)}{D(s)} = \frac{1}{1+P(s)C(s)} . \]
Choosing C(s) = K, for a large constant K, makes Tdy(s) very small, so the disturbance has little effect on the output.
• Robustness. Differentiating Try(s) with respect to P(s), the sensitivity of the feedback loop is
\[ S(s) = \frac{C(s)}{\big(1+P(s)C(s)\big)^2}\cdot\frac{P(s)}{\;\dfrac{P(s)C(s)}{1+P(s)C(s)}\;} = \frac{1}{1+P(s)C(s)} . \]
Once again, choosing C(s) = K, for a large constant K, makes S(s) very
small. Thus, feedback control is robust to variations in the plant.
In addition to the above benefits, we will see later that feedback control can
also stabilize an unstable plant (whereas feedforward control cannot). Also, it
is worth noting that although we used high gain control to show that feedback
provides good tracking, disturbance rejection and robustness, it is not the only
option (and sometimes not even the best option). In practice, high gains are often undesirable, since they can amplify measurement noise and lead to excessively large control signals.
Chapter 11
Tracking of Reference Signals

[Block diagram: unity feedback loop in which the error e = r − y drives the controller C(s), which drives the plant P(s), producing the output y.]

In the above figure, P (s) is the plant, C(s) is the controller, r is the reference
signal, y is the output of the system, and e is the error (i.e., e = r − y). We will
neglect disturbances for now. Note that the signal y is directly subtracted from
the reference signal in this feedback loop, so this configuration is called unity
feedback. The transfer function from r to y is
\[ T_{ry}(s) = \frac{Y(s)}{R(s)} = \frac{P(s)C(s)}{1+P(s)C(s)} . \]
Recall that the product P (s)C(s) is called the forward gain of the feedback
loop. We can always write P (s)C(s) as
\[ P(s)C(s) = \frac{a(s)}{s^q\,b(s)} , \]
for some polynomials a(s) and b(s), and some nonnegative integer q (this is the
number of poles at the origin in the product P (s)C(s)). The polynomial b(s)
has no roots at s = 0 (otherwise this root can be grouped into the term s^q);
this means that b(0) ≠ 0. The transfer function then becomes
\[ T_{ry}(s) = \frac{\;\dfrac{a(s)}{s^q b(s)}\;}{1+\dfrac{a(s)}{s^q b(s)}} = \frac{a(s)}{s^q\,b(s)+a(s)} . \]
Example. Suppose P(s) = (s+1)/s² and C(s) = 12(s+2)/(s(s+4)). What are a(s), b(s) and q? What is the transfer function?
Solution.
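A sketch of the computation: here
\[ P(s)C(s) = \frac{s+1}{s^2}\cdot\frac{12(s+2)}{s(s+4)} = \frac{12(s+1)(s+2)}{s^3(s+4)} , \]
so q = 3, a(s) = 12(s+1)(s+2) and b(s) = s + 4 (note that b(0) = 4 ≠ 0), and the closed loop transfer function is
\[ T_{ry}(s) = \frac{a(s)}{s^q b(s)+a(s)} = \frac{12(s+1)(s+2)}{s^3(s+4)+12(s+1)(s+2)} . \]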
We’ll assume that C(s) is chosen so that the closed loop system is stable (other-
wise, we do not have any hope of tracking any signal). This means that all roots
of sq b(s) + a(s) are in the OLHP. We will now examine how well this feedback
loop tracks certain types of reference signals.
11.1 Tracking and Steady State Error
To determine how well the system tracks (or follows) the reference inputs, we
will examine the error signal e = r − y. Specifically, the Laplace Transform of
e(t) is given by:
E(s) = R(s) − Y (s) = R(s) − P (s)C(s)E(s) ,
from which we obtain
\[ E(s) = \frac{1}{1+P(s)C(s)}\,R(s) = \frac{m!}{\big(1+P(s)C(s)\big)s^{m+1}} , \]
where the second equality holds for the reference signal r(t) = t^m·1(t), whose Laplace Transform is R(s) = m!/s^{m+1}.
Recall that the Final Value Theorem states that if all poles of sE(s) are in the
OLHP, then the signal e(t) settles down to some finite steady state value. From
the above expression for E(s), we have
\[ sE(s) = \frac{m!}{\big(1+P(s)C(s)\big)s^{m}} . \]
Note that, as written, sE(s) seems to have m poles at the origin. Note from our
earlier discussion, however, that P(s)C(s) has q poles at the origin, so that we
can write P(s)C(s) = a(s)/(s^q b(s)), for some polynomials a(s) and b(s). The expression for sE(s) then becomes
\[ sE(s) = \frac{m!}{\left(1+\dfrac{a(s)}{s^q b(s)}\right)s^{m}} = m!\,\frac{s^{q-m}\,b(s)}{s^{q}\,b(s)+a(s)} . \]
Recall that the polynomial s^q b(s) + a(s) has all roots in the OLHP (by our
assumption of stability). We thus only have to check for poles at the origin
(given by the quantity s^{m−q}). We consider three different cases:
• If q > m, the function sE(s) will have all poles in the OLHP, and q − m
zeros at the origin. The steady state error is obtained from the Final Value
Theorem as
\[ e_{ss} = \lim_{t\to\infty}e(t) = \lim_{s\to 0}sE(s) = m!\,\frac{0^{q-m}\,b(0)}{0^{q}\,b(0)+a(0)} = 0 . \]
Thus, if q > m, we have perfect steady state tracking (i.e., the steady
state error is zero).
• If q = m, the function sE(s) has all poles in the OLHP, and no zeros at
the origin. From the Final Value Theorem, we have
\[ e_{ss} = \lim_{t\to\infty}e(t) = \lim_{s\to 0}sE(s) = \lim_{s\to 0} m!\,\frac{b(s)}{s^{q}b(s)+a(s)} = \lim_{s\to 0}\frac{m!}{s^{q}+\dfrac{a(s)}{b(s)}} = \lim_{s\to 0}\frac{m!}{s^{q}+s^{q}P(s)C(s)} , \]
which is a finite constant. Thus, if q = m, the system tracks the reference to
within a constant (generally nonzero) steady state error.
• If q < m, the function sE(s) will have m − q poles at the origin. Thus
the signal e(t) blows up as t → ∞, and there is no steady state value. In
other words, the system output does not track the reference input
at all.
Note that if a linear system can track a signal tm , t ≥ 0, then it can also track
any polynomial of degree m or less (by linearity).
System type. The above results indicate that the number of poles at the origin
in P (s)C(s) determines the type of reference inputs that the closed loop system
can track. Thus, the integer q is called the system type. Specifically, a system
of type q can track reference signals that are polynomials of degree q or less to
within a constant finite steady state error.
Note: It does not matter whether the poles in P (s)C(s) come from the plant
P (s) or from the controller C(s). The only thing that matters is how many poles
their product has. We can therefore use this fact to construct controllers with a
certain number of poles in order to track certain types of reference signals, even
if the plant does not have the required number of poles. We will see this in the
next lecture.
1 Do not try to memorize this. Instead, always just derive the tracking error using first
principles, first calculating E(s) in terms of Try (s) and R(s), and then applying the final
value theorem.
Example. Consider the unity feedback loop with C(s) = 12(s+2)/(s(s+4)) and P(s) = (s+1)/s². What is the system type? What is the steady state tracking error for the signals r(t) = 1(t), r(t) = t·1(t), r(t) = t²·1(t), r(t) = t³·1(t), and r(t) = t⁴·1(t)?
Solution.
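A quick numerical cross-check of this example (a sketch in Python/SymPy, not part of the original notes), applying the final value theorem directly as suggested in the footnote above:

```python
import sympy as sp

s = sp.symbols('s')
P = (s + 1) / s**2
C = 12 * (s + 2) / (s * (s + 4))

def steady_state_error(P, C, m):
    """Steady state error for r(t) = t^m (so R(s) = m!/s^(m+1)),
    assuming the closed loop is stable:
    e_ss = lim_{s->0} s * R(s) / (1 + P(s)C(s))."""
    R = sp.factorial(m) / s**(m + 1)
    return sp.limit(s * R / (1 + P * C), s, 0)

for m in range(5):
    print(m, steady_state_error(P, C, m))
# Prints 0 for m = 0, 1, 2, a finite value (1) for m = 3, and oo for m = 4,
# consistent with a type-3 system.
```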
Example. Suppose that C(s) = K (for some positive constant K), and P(s) = 1/(s+1). What is the system type? What is the steady state tracking error for the signals r(t) = 1(t), r(t) = t·1(t) and r(t) = t²·1(t)?
Solution.
To see why having an integrator guarantees perfect tracking for a step input (if
the closed loop system is stable), rearrange the closed loop system as follows:
[Block diagram: unity feedback loop redrawn with the integrator pulled out. The error E(s) = R(s) − Y(s) enters the integrator 1/s, whose output is W(s); W(s) then drives the rest of P(s)C(s) to produce Y(s).]
In the above diagram, we have simply pulled the integrator out of the product
P (s)C(s), and denoted the signal at the output of the integrator by W (s) (or
w(t) in the time-domain). Now suppose that the closed loop system is stable
(note that this is a necessary assumption); in this case, all of the signals in the
system will settle down to some steady state values when r(t) is a unit step
input. This includes the signal w(t), which is related to e(t) as
\[ W(s) = \frac{1}{s}E(s) \quad\Longleftrightarrow\quad w(t) = \int_0^t e(\tau)\,d\tau . \]
If e(t) settles down to some nonzero value, the above integral will become un-
bounded, and thus w(t) will not settle down to some steady state value, con-
tradicting the fact that the system is stable. Thus the only way for all signals
to have settled to a steady state value (which is guaranteed by stability) is if
e(t) → 0. An alternative way to see this is to note that if w(t) settles down to
a steady state value, then ẇ(t) = e(t) = 0.
This is an example of what is known as the internal model principle: if we
wish to perfectly track a signal of a certain form, we should include a model of
that signal inside our feedback loop.
Chapter 12
PID Control
So far, we have examined the benefits of feedback control, and studied how
the poles of P (s)C(s) affect the ability of the control system to track reference
inputs. We will now study a type of controller C(s) that is commonly used in
practice, called a proportional-integral-derivative (PID) controller. To
develop this controller, we will assume that the plant is a second order system
of the form
\[ P(s) = \frac{b_0}{s^2+a_1 s+a_0} . \]
The simplest choice is the proportional controller C(s) = KP, where KP is a constant gain, placed in a unity feedback loop with the plant. Note that the transfer function from r to y for this feedback loop is given by
\[ T_{ry}(s) = \frac{P(s)C(s)}{1+P(s)C(s)} = \frac{K_P\,b_0}{s^2+a_1 s+(a_0+K_P b_0)} . \]
Recall that the poles of this transfer function dictate how the system behaves
to inputs. In particular, we would like to ensure that the system is stable (i.e.,
all poles are in the OLHP). Since the gain KP affects one of the coefficients in
the denominator polynomial, it can potentially be used to obtain stability.
Example. Suppose P(s) = 1/(s²+3s−1). Can we stabilize this plant with proportional control?
Solution.
Example. Suppose P(s) = 1/(s²−3s−1). Can we stabilize this plant with proportional control?
Solution.
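One way to reason about these two examples (a sketch, with C(s) = KP): the closed loop denominators are s² + 3s + (KP − 1) and s² − 3s + (KP − 1), respectively. For a second order polynomial, having all coefficients positive is both necessary and sufficient for all roots to be in the OLHP, so the first plant is stabilized by any KP > 1, while the second cannot be stabilized by proportional control, since the coefficient of s remains −3 for every choice of KP.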
The above examples demonstrate that simple proportional control can stabilize
some plants, but not others.
Another benefit of proportional control is that it can potentially be used to
speed up the response of the system. Recall the standard second order system
had a denominator of the form s2 + 2ζωn s + ωn2 , and the larger ωn is, the faster
the system responds. In the closed loop transfer function Try (s) above, the term
ωn2 is given by a0 + KP b0 , and thus we can potentially make ωn very large by
choosing KP to be very large, thereby speeding up the system.
Now let’s consider tracking. Recall from the previous lecture that in order to
track a step input perfectly (i.e., with zero steady state error), the system must
be of type 1 (a type 0 system would track a step within a finite steady state
error). If C(s) = KP and P(s) = b0/(s²+a1 s+a0), the system would only be of type
0 (if a0 is not zero), and thus we will not be able to track a step perfectly. To
rectify this, we will have to add an integrator to the controller in order to make
the system type 1.
This yields the proportional-integral (PI) controller
\[ C(s) = K_P + \frac{K_I}{s} . \]
In the time-domain, this corresponds to the input to the plant being chosen as
\[ u(t) = K_P\,e(t) + K_I\int_0^t e(\tau)\,d\tau , \]
where e(t) = r(t) − y(t) is the tracking error. The closed loop transfer function becomes
\[ T_{ry}(s) = \frac{P(s)C(s)}{1+P(s)C(s)} = \frac{b_0\,(K_P s+K_I)}{s^3+a_1 s^2+(a_0+K_P b_0)s+K_I b_0} . \]
Note that we now have a third order system. Two of the coefficients of the
denominator polynomial can be arbitrarily set by choosing KP and KI appro-
priately. Unfortunately, we still have no way to stabilize the system if a1 < 0
(recall that for stability, all coefficients must be positive). Even if the system
is stable with the given value of a1 , we might want to be able to choose better
pole locations for the transfer function in order to obtain better performance.
To do this, we add one final term to the controller.
The resulting proportional-integral-derivative (PID) controller is C(s) = KP + KI/s + KD s. In the time-domain, the input to the plant due to this controller is given by
\[ u(t) = K_P\,e(t) + K_I\int_0^t e(\tau)\,d\tau + K_D\,\dot e(t) , \]
and the closed loop transfer function becomes
\[ T_{ry}(s) = \frac{\dfrac{b_0}{s^2+a_1 s+a_0}\cdot\dfrac{K_D s^2+K_P s+K_I}{s}}{1+\dfrac{b_0}{s^2+a_1 s+a_0}\cdot\dfrac{K_D s^2+K_P s+K_I}{s}} = \frac{b_0\,(K_D s^2+K_P s+K_I)}{s^3+(a_1+K_D b_0)s^2+(a_0+K_P b_0)s+K_I b_0} . \]
Note that we are now able to arbitrarily set all coefficients of the denominator
polynomial, via appropriate choices of KP , KI and KD . Thus we can now guar-
antee stability (only for a second order plant, though), good transient behavior,
and perfect tracking!
Example. Consider the plant P(s) = 1/(s²−3s−1). Design a PID controller so that the closed loop system has perfect tracking for a step input, and has poles at s = −5, −6, −7.
Solution.
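A small numerical sketch of one way to carry out this design (Python/NumPy, not part of the original notes); it simply matches the closed loop denominator derived above to the desired polynomial (s+5)(s+6)(s+7):

```python
import numpy as np

# Plant P(s) = 1/(s^2 - 3s - 1), i.e., b0 = 1, a1 = -3, a0 = -1.
b0, a1, a0 = 1.0, -3.0, -1.0

# Desired closed loop denominator (s+5)(s+6)(s+7) = s^3 + 18s^2 + 107s + 210.
desired = np.poly([-5.0, -6.0, -7.0])

# Closed loop denominator with PID control:
# s^3 + (a1 + KD*b0)s^2 + (a0 + KP*b0)s + KI*b0.  Match coefficients:
KD = (desired[1] - a1) / b0     # 21
KP = (desired[2] - a0) / b0     # 108
KI = desired[3] / b0            # 210
print(KP, KI, KD)
```

Since the controller contains an integrator, the loop is type 1 and zero steady state error to a step follows automatically.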
Chapter 13
Root Locus
Based on our discussion so far, we know that the response of a linear system to
an input is dictated by the location of the poles of the transfer function. For
example, the response will be unstable if there are any poles in the CRHP, or
may contain oscillations if the poles appear in complex conjugates. We have also
seen that feedback control can be used to move the poles of a closed loop sys-
tem: by choosing the controller gain appropriately, one can potentially stabilize
unstable systems (and perhaps even destabilize stable systems). In this section
of the course, we will examine in more detail how the poles of a transfer function
vary in the complex plane in response to changes in a certain parameter. We
will begin with some examples.
Example. Consider the unity feedback loop with C(s) = K and P(s) = 1/(s−3). How does the pole of the closed loop system vary with K (for K ≥ 0)?
Solution.
Example. Consider the unity feedback loop with C(s) = K and P(s) = 1/(s²+2s). How do the poles of the closed loop system vary with K (for K ≥ 0)?
Solution.
Example. Consider the closed loop transfer function Try(s) = 1/(s²+bs+1). How do the poles of the system vary with b (for b ≥ 0)?
Solution.
While we are able to easily draw the locations of the poles for first and second
order systems, we cannot do the same for higher order systems (because it
becomes difficult to explicitly calculate the roots). Furthermore, we would like
to obtain some intuition about how the poles of the system will be affected by
our choice of controller. We would thus like to come up with a way to sketch
how the poles of a system behave in response to a change in a parameter. Since
the poles are given by the roots of the denominator polynomial, such a sketch is
called a root locus. The trajectory of each root in the plane is called a branch
of the root locus.
[Block diagram: unity feedback loop in which the error R(s) − Y(s) drives the gain K, followed by the transfer function L(s), producing Y(s).]
The transfer function L(s) could represent the plant, or it could represent some
composite system (such as the combination of a controller and plant). We will
write
\[ L(s) = \frac{N(s)}{D(s)} = \frac{s^m+b_{m-1}s^{m-1}+b_{m-2}s^{m-2}+\cdots+b_1 s+b_0}{s^n+a_{n-1}s^{n-1}+a_{n-2}s^{n-2}+\cdots+a_1 s+a_0} , \]
where N (s) and D(s) are polynomials in s. As usual, the degree of N (s) is m,
the degree of D(s) is n, and we assume that n ≥ m (i.e., L(s) is proper). The
transfer function from r to y is given by
\[ T_{ry}(s) = \frac{KL(s)}{1+KL(s)} = \frac{KN(s)}{D(s)+KN(s)} = \frac{KN(s)}{\Delta(s)} . \]
The polynomial ∆(s) = D(s)+KN (s) is called the characteristic polynomial
of the system. Note that the roots of D(s) + KN (s) are the closed loop poles,
the roots of D(s) are the open loop poles, and the roots of N (s) are the open
loop zeros. When we plot these elements graphically, we will use × to denote
poles, and ◦ to denote zeros.
The root locus is a graph of how the roots of D(s) + KN(s) vary with
K. Equivalently, the root locus is the set of all solutions s to the
equation L(s) = −1/K.
Note that the root locus can actually be used to find how the roots of any
polynomial vary with a single parameter, and thus it is a very general tool. For
example, aside from analyzing how the poles of a unity feedback loop vary with
a controller gain K, the root locus can also be used to analyze how the poles
vary in response to a change in one of the system parameters. We saw this in
the third example above, and we will see more of it later.
To start, we will only focus on the case where K varies from 0 to ∞ – this is
called the positive root locus. We will deal with the negative root locus later.
We will also assume that both N (s) and D(s) are monic (i.e., the coefficient
corresponding to the highest power in both polynomials is equal to 1). This
is not a strict assumption, because we can always divide the entire polynomial
D(s) + KN (s) by the leading coefficient of D(s), and then absorb the leading
coefficient of N (s) into K to define a new gain K̄. After plotting the root locus,
we can then map the gain K̄ back to K.
The positive root locus is the set of all points s in the complex plane for
which ∠L(s) = (2l + 1)π radians (where l is any integer).
To check this condition, write L(s) in factored form as
\[ L(s) = \frac{(s+z_1)(s+z_2)\cdots(s+z_m)}{(s+p_1)(s+p_2)\cdots(s+p_n)} , \]
where −z1, −z2, . . . , −zm are the open loop zeros, and −p1, −p2, . . . , −pn are
the open loop poles. The phase of L(s̄), for some point s̄ in the complex plane,
is given by
\[ \angle L(\bar s) = \sum_{i=1}^{m}\angle(\bar s+z_i) - \sum_{i=1}^{n}\angle(\bar s+p_i) . \]
Note that the phase of the point s̄+zi is given by the angle between the positive
real axis and the vector from −zi to s̄:
The same holds for the phase of the point s̄+pi . The phase of L(s̄) can therefore
be obtained by summing these angles, and this will allow us to determine if s̄
is on the root locus.
Example. Consider L(s) = (s+4)/(s((s+1)²+1)). Is the point s = −3 on the root locus?
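For instance (a sketch of the check): the open loop zero is at −4 and the open loop poles are at 0 and −1 ± j, so
\[ \angle L(-3) = \angle(1) - \big[\angle(-3) + \angle(-2-j) + \angle(-2+j)\big] = 0 - \big[\pi + (-\pi + \tan^{-1}\tfrac{1}{2}) + (\pi - \tan^{-1}\tfrac{1}{2})\big] = -\pi , \]
which is an odd multiple of π, so s = −3 satisfies the phase condition and lies on the positive root locus. (Equivalently, L(−3) = 1/((−3)·5) = −1/15 is a negative real number, so it equals −1/K with K = 15.)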
While we can use the above method to test if specific points are on the root locus,
it is quite cumbersome to determine all points in the complex plane that are on
the root locus in this way. What we need are some general rules for sketching
the root locus for a given L(s) (or equivalently, for the equation D(s) + KN (s)).
• When K = 0, the roots of this equation are simply the roots of D(s),
which are the open loop poles.
Rule 1. The n branches of the root locus begin at the open loop poles
(when K = 0). Of the n branches, m branches end at the open loop
zeros (when K = ∞).
Consider a point s = s̄ on the real axis. Each real pole or zero to the right
of s̄ contributes −π radians or π radians to the angle. Each pair of complex
conjugate poles or zeros contributes nothing to the angle at s̄ (since the angles
of the complex conjugate poles or zeros will sum to zero). Each pole or zero to
the left of s̄ will also contribute nothing (i.e., 0 radians) to the angle.
Thus, in order for s̄ to satisfy the above condition, there must be an odd number
of zeros or poles to the right of s̄.
Rule 2. The positive root locus contains all points on the real axis
that are to the left of an odd number of zeros or poles.
Example. Consider L(s) = (s+3)(s+7)/(s²((s+1)²+1)(s+5)). Determine the portions of the real axis that are on the positive root locus.
When K → ∞, the root locus equation L(s) = −1/K becomes L(s) = 0, and thus the root locus consists of the points that cause L(s) to be zero. This
will be true for the m open loop zeros – are there any other choices of s that
will make the left hand side equal to zero? To answer this, write L(s) as
\[ L(s) = \frac{s^m+b_{m-1}s^{m-1}+\cdots+b_1 s+b_0}{s^n+a_{n-1}s^{n-1}+\cdots+a_1 s+a_0} = \frac{\dfrac{1}{s^{n-m}}+b_{m-1}\dfrac{1}{s^{n-m+1}}+\cdots+b_1\dfrac{1}{s^{n-1}}+b_0\dfrac{1}{s^{n}}}{1+a_{n-1}\dfrac{1}{s}+\cdots+a_1\dfrac{1}{s^{n-1}}+a_0\dfrac{1}{s^{n}}} . \]
If n > m, we see that the numerator goes to zero if |s| → ∞. Thus, the system
L(s) = N (s)
D(s) is said to have n − m zeros at infinity, in addition to the m finite
zeros (i.e., the roots of N (s)). We can thus conclude that m of the n branches
go to the open loop zeros, and the remaining n − m branches go off to infinity
as K → ∞. The question is, how do these branches approach infinity? The
following rule characterizes the behavior of these branches.

Rule 3. Of the n branches in the root locus, n − m of the branches go to infinity, and asymptotically approach lines coming out of the point s = α with angles Φl, where
\[ \alpha = \frac{\sum\text{open loop poles} - \sum\text{open loop zeros}}{n-m}, \qquad \Phi_l = \frac{(2l+1)\pi}{n-m}, \qquad (13.1) \]
for l = 0, 1, 2, . . . , n − m − 1.

We will now go over a sketch of the proof of this result.1 First, note that the
1 This proof is borrowed from Prof. Daniel Liberzon at the University of Illinois.
transfer function L(s) has n poles and m zeros at certain locations. Now, if we
were to consider some point s with a very large magnitude (i.e., very far away
from the other poles and zeros), the poles and zeros would essentially look like
they were clustered at one point; let's call this point α. So, for large |s|, we
would like to find a good value of α so that we can approximate the transfer
function as
\[ L(s) = \frac{N(s)}{D(s)} \approx \frac{1}{(s-\alpha)^{n-m}} . \]
Note that we are taking the exponent to be n − m, because to the point s, the
m zeros look like they are directly on top of m of the poles, and thus they 'cancel' each
other out. To see what value of α to choose, note that the approximation requires (s − α)^{n−m} ≈ D(s)/N(s). Dividing N(s) into D(s) to obtain the first couple of terms of the quotient gives
\[ \frac{D(s)}{N(s)} = s^{n-m} + (a_{n-1}-b_{m-1})s^{n-m-1} + \cdots , \]
while (s − α)^{n−m} = s^{n−m} − (n − m)α s^{n−m−1} + · · · . Matching the coefficients of s^{n−m−1} yields
\[ \alpha = \frac{b_{m-1}-a_{n-1}}{n-m} . \qquad (13.2) \]
Next, suppose that
\[ N(s) = (s+z_1)(s+z_2)\cdots(s+z_m) = s^m + \Big(\sum_{i=1}^{m}z_i\Big)s^{m-1} + \cdots \]
\[ D(s) = (s+p_1)(s+p_2)\cdots(s+p_n) = s^n + \Big(\sum_{i=1}^{n}p_i\Big)s^{n-1} + \cdots . \]
Thus, we have b_{m-1} = \sum_{i=1}^{m} z_i and a_{n-1} = \sum_{i=1}^{n} p_i, which we substitute into
(13.2) to produce α in (13.1).
To derive the asymptote angles in (13.1), consider the root locus of (s−α)−(n−m) .
Recall that the root locus of a function is the set of all s such that
\[ \sum \angle(s+z_i) - \sum \angle(s+p_i) = (2l+1)\pi \qquad (13.3) \]
for some integer l, where −zi are the zeros and −pi are the poles. In this case, the
function (s−α)^{−(n−m)} has no zeros, and all n − m poles at α. Thus, we have
\(\sum\angle(s+p_i) = (n-m)\,\angle(s-\alpha)\), and the above expression becomes
\[ \angle(s-\alpha) = \frac{(2l+1)\pi}{n-m} . \]
Note that the negative sign in front of the pole angles in (13.3) contributes an
angle of −π, which can just be absorbed into the term (2l + 1)π. There are
n − m different possibilities for the angle on the right hand side of the above
equation, corresponding to l = 0, 1, . . . , n − m − 1 (after this, the angles start
repeating), and thus there are n − m different asymptotes leading out from the
point s = α with the angles specified by (13.1).
Example. Consider L(s) = 1/(s(s+2)). Draw the portions of the real axis that are on the positive root locus, and determine the asymptotes.
Solution.
Example. Consider L(s) = 1/(s((s+1)²+1)). Draw the portions of the real axis that are on the positive root locus, and determine the asymptotes.
Solution.
Example. Consider L(s) = (s+6)/(s((s+1)²+1)). Draw the portions of the real axis that are on the positive root locus, and determine the asymptotes.
Since we are only interested in polynomials D(s) + KN(s) that have purely real
coefficients, the roots of the polynomial will either be real or appear as complex
conjugate pairs. This produces the following important fact: the root locus is symmetric with respect to the real axis.
At a breakaway point (a point where two or more branches of the root locus meet and then split apart), the polynomial ∆(s) = D(s) + KN(s) will have multiple roots. Let the breakaway point be s = s̄. Then we can write
\[ \Delta(s) = (s-\bar s)^{q}\,\bar D(s) , \]
where q ≥ 2 is the multiplicity of the root s̄, and D̄(s) is some polynomial. This
means that ∆(s̄) = 0 and (d∆/ds)(s̄) = 0. Substituting ∆(s) = D(s) + KN(s), we
have
\[ D(\bar s) + K N(\bar s) = 0 , \qquad \frac{dD}{ds}(\bar s) + K\,\frac{dN}{ds}(\bar s) = 0 . \]
Solving the first equation, we get K = −D(s̄)/N(s̄), and substituting this into the
second equation, we come to the following rule.

Rule 4. The root locus will have multiple roots at the points s̄ for
which both of the following conditions are satisfied.
• \(N(\bar s)\,\frac{dD}{ds}(\bar s) - D(\bar s)\,\frac{dN}{ds}(\bar s) = 0\).
• \(-\frac{D(\bar s)}{N(\bar s)} = K\) is a positive real number.
Example. Draw the positive root locus for L(s) = (s+6)/(s(s+2)).
Solution.
Example. Verify that the branches in the positive root locus for L(s) = 1/(s((s+1)²+1)) never intersect.
Solution.
There are various other rules that we could derive to draw root locus plots, but
they tend to be cumbersome to apply by hand. The above rules will be sufficient
for us to get intuition about many systems.
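Beyond hand sketching, one can also compute the locus numerically by sweeping K and finding the roots of D(s) + KN(s) directly. Here is a small sketch (Python/NumPy, not part of the original notes), shown for the L(s) = (s+1)/(s²(s+4)) that appears in the first example below:

```python
import numpy as np
import matplotlib.pyplot as plt

N = np.array([1.0, 1.0])             # N(s) = s + 1
D = np.array([1.0, 4.0, 0.0, 0.0])   # D(s) = s^2 (s + 4) = s^3 + 4s^2

for K in np.logspace(-2, 3, 600):
    cl = D.copy()
    cl[-len(N):] += K * N            # coefficients of D(s) + K N(s)
    r = np.roots(cl)
    plt.plot(r.real, r.imag, 'b.', markersize=2)

plt.plot([0.0, 0.0, -4.0], [0.0, 0.0, 0.0], 'kx')   # open loop poles
plt.plot([-1.0], [0.0], 'ko')                        # open loop zero
plt.xlabel('Re(s)'); plt.ylabel('Im(s)')
plt.show()
```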
Example. Consider a control system in unity feedback with P(s) = 1/s² and C(s) = K(s+1)/(s+4). Draw the positive root locus.
Solution.
Example. Consider a control system in unity feedback with P(s) = 1/s² and C(s) = K(s+1)/(s+9). Draw the positive root locus.
Solution.
The above examples show that as the pole of the controller moves in closer to
the root locus, it tends to push the branches of the locus to the right. From our
discussion so far, we can state the following rules of thumb: poles repel, and
zeros attract.
For a point s on the positive root locus, the corresponding gain is obtained from the magnitude condition |KL(s)| = 1:
\[ K = \frac{1}{|L(s)|} = \frac{|s+p_1||s+p_2|\cdots|s+p_n|}{|s+z_1||s+z_2|\cdots|s+z_m|} . \]
Specifically, if we have a desired point s̄ on the root locus, we can find the gain
K that produces a pole at s̄ by multiplying and dividing the lengths of the
vectors from each of the poles and zeros to s̄, according to the above equation.
Example. Suppose L(s) = 1/(s²+2s). Find the gain K that results in the closed loop system having a peak time of at most 2π seconds.
Solution.
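A sketch of one way to use the magnitude condition here: the closed loop poles satisfy s² + 2s + K = 0, i.e., s = −1 ± j√(K−1) for K > 1, so the damped frequency is ωd = √(K−1) and the peak time is tp = π/ωd. Requiring tp ≤ 2π gives ωd ≥ 1/2. Picking the boundary point s̄ = −1 + j/2 on the locus, the magnitude rule gives
\[ K = |\bar s|\,|\bar s+2| = \left|-1+\tfrac{j}{2}\right|\cdot\left|1+\tfrac{j}{2}\right| = \tfrac{5}{4} , \]
so any K ≥ 5/4 meets the peak time specification.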
Note that MATLAB is an extremely useful tool for doing this in practical con-
troller design. Once one has plotted the root locus for a given system in MAT-
LAB (using the rlocus command), one can simply click on the root locus branch
at any desired location to find the value of the gain at that point.
The negative root locus is the set of all points s in the complex plane for
which ∠L(s) = 2lπ radians (where l is any integer).
All of the rules for plotting the positive root locus translate directly once we
consider this new phase condition:
• Rule 1. The n branches of the root locus begin at the open loop poles
(when K = 0). Of the n branches, m branches end at the open loop zeros
(when K = −∞).
• Rule 2. The negative root locus contains all points on the real axis that
are to the left of an even number of zeros or poles.
• Rule 3. Of the n branches in the root locus, n − m of the branches go to
infinity, and asymptotically approach lines coming out of the point s = α
with angles Φl , where
\[ \alpha = \frac{\sum\text{open loop poles} - \sum\text{open loop zeros}}{n-m} , \qquad \Phi_l = \frac{2l\pi}{n-m} , \]
for l = 0, 1, 2, . . . , n − m − 1.
• Rule 4. The root locus will have multiple roots at the points s̄ for which
both of the following conditions are satisfied.
– \(N(\bar s)\,\frac{dD}{ds}(\bar s) - D(\bar s)\,\frac{dN}{ds}(\bar s) = 0\).
– \(-\frac{D(\bar s)}{N(\bar s)} = K\) is a negative real number.
Note that the gain for a particular point s on the negative root locus is given
by
\[ K = -\frac{1}{|L(s)|} = -\frac{|s+p_1||s+p_2|\cdots|s+p_n|}{|s+z_1||s+z_2|\cdots|s+z_m|} . \]
The asymptotes for the negative root locus look like this:
Example. Determine the negative root locus for L(s) = 1/(s((s+1)²+1)), and then sketch the complete root locus (for −∞ < K < ∞).
Chapter 14

Stability Margins from Bode Plots
The last chapter showed how to analyze and understand the closed loop system
from a root locus perspective. We will now study the use of Bode plots to
analyze closed loop systems, complementing the root locus techniques. In the
next chapter, we will use these ideas to design controllers (building on our study
of PID controllers).
Suppose we’re given the Bode plot for the transfer function L(s), and we would
like to study properties of the following feedback loop:
[Block diagram: unity feedback loop in which the error R(s) − Y(s) drives the gain K, followed by L(s), producing Y(s).]
In other words, we would like to infer some things about the closed loop
system based on the open loop Bode plot. Remember that we also did this
when we studied root locus plots: we studied the locations of the closed loop
poles by starting with the open loop poles and zeros.
Recall the root locus equation 1 + KL(s) = 0 (the closed loop poles are the
values s that satisfy this equation). When K is a positive real number, this
means that |KL(s)| = 1 and ∠KL(s) ≡ π (modulo 2π). A point s = jω on the
imaginary axis (for some ω) will be on the positive root locus if |KL(jω)| = 1 and
∠KL(jω) ≡ π. Since we have access to |KL(jω)| and ∠KL(jω) from the Bode
plot, we should be able to determine the imaginary axis crossings by finding the
frequencies ω (if any) on the plot that satisfy the conditions |KL(jω)| = 1 (or
20 log |KL(jω)| = 0) and ∠KL(jω) ≡ π.
To develop this further, suppose that L(s) = 1/(s(s+1)(s/100+1)). The Bode plot of this transfer function is sketched below.

[Bode plot of KL(s) for K = 1: straight-line magnitude and phase approximations versus frequency.]
For the above example, we have ωcg ≈ 1 and ωcp ≈ 10, where the gain crossover frequency ωcg is the frequency at which |KL(jω)| = 1 (i.e., 20 log |KL(jω)| = 0), and the phase crossover frequency ωcp is the frequency at which ∠KL(jω) = −π. The phase at ωcg
is approximately −3π/4, and so the feedback configuration with K = 1 does
not have any closed loop poles on the imaginary axis. Is there another value
of K for which the closed loop system will have poles on the imaginary axis
from the Bode plot, note that 20 log |KL(jω)| = 20 log K + 20 log |L(jω)| and
∠KL(jω) = ∠L(jω) (for K > 0). Thus, K has no effect on the phase, and it
affects the magnitude plot by shifting it up or down by 20 log K. For example,
when K = 10, the entire magnitude gets shifted up by 20 log 10 = 20 dB (when
the vertical axis denotes 20 log |KL(s)|); this is shown on the above Bode plot
by the dashed lines. Based on this plot, we see that changing K has the effect of
changing the gain crossover frequency (but not the phase crossover frequency).
In order to find the value of K that causes some closed loop poles to lie on
the imaginary axis in the above example, we need to find out how to make the
gain crossover frequency and the phase crossover frequency coincide. Examining
the magnitude plot, we see that 20 log |L(j10)| ≈ −40, and thus the magnitude
curve needs to be shifted up by approximately 40 dB in order to set ωcg = ωcp ,
which can be accomplished by setting 20 log K ≈ 40, or K ≈ 100. Thus, we can
conclude that the closed loop system will have an imaginary axis crossing when
K ≈ 100. One can easily see from the Bode plot that this is the only positive
value of K for which this will happen.
We can verify this result by examining the positive root locus of L(s):
As expected, the branches cross the imaginary axis only once (other than the
trivial case where K = 0). To find the locations where the branches cross the
imaginary axis, we note that
\[ 1+KL(s) = 0 \;\;\Leftrightarrow\;\; 1+K\,\frac{1}{s(s+1)\left(\frac{s}{100}+1\right)} = 0 \;\;\Leftrightarrow\;\; s^3+101s^2+100s+100K = 0 . \]
We use the Routh-Hurwitz test to determine the region of stability as 0 < K <
101. Thus, we have a potential imaginary axis crossing at K = 101. To find the
points on the imaginary axis where this happens, we set s = jω and K = 101
and solve the equation
\[ (j\omega)^3+101(j\omega)^2+100(j\omega)+10100 = 0 \;\;\Leftrightarrow\;\; (10100-101\omega^2) + j(100\omega-\omega^3) = 0 . \]
Setting the imaginary and real parts to zero, we find that ω = 10. Thus, we
have an imaginary axis crossing at s = ±10j when K = 101. Note that this
agrees with the analysis from the Bode plot (the Bode plot actually told us
K ≈ 100, since we approximated the Bode plot with straight lines).
While we could determine imaginary axis crossings by looking at the Bode
plot, we didn’t necessarily know which direction the branches were going –
are we going from stability to instability, or instability to stability? We could
determine this information by looking at the root locus, but we will later develop
a completely frequency domain approach to characterizing the stability of the
closed loop system. For now, we will assume that the closed loop system is
stable with a given value of K, and investigate ways to design controllers using
a frequency domain analysis in order to improve the stability of the closed loop
system.
Stability Margins
We will define some terminology based on the discussion so far. Consider again
the Bode plot of KL(s) with K = 1:
Assuming that the closed loop system is stable, we can ask the question: How
far from instability is the system? There are two metrics to evaluate this:

• Gain Margin (GM): the factor by which the gain can be increased before the closed loop system reaches the verge of instability; from the Bode plot, GM = 1/|L(jωcp)|, where ωcp is the phase crossover frequency.

• Phase Margin (PM): the amount of additional phase lag at the gain crossover frequency ωcg required to bring the system to the verge of instability:
\[ PM = \angle L(j\omega_{cg}) + \pi . \]

In general, we would like to have large gain and phase margins in order to
improve the stability of the system. In the above example with K = 1, the gain
margin is approximately 100, and the phase margin is approximately π/4. Let us
consider some more examples, just to be clear on the concept of gain and phase
margins.
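These quantities can also be estimated numerically from the frequency response. A small sketch (Python/NumPy, not part of the original notes), for the L(s) used above; it assumes a single gain crossover and a single phase crossover on the chosen frequency grid:

```python
import numpy as np

L = lambda s: 1.0 / (s * (s + 1.0) * (s / 100.0 + 1.0))   # the L(s) used above
K = 1.0
w = np.logspace(-2, 3, 100000)
resp = K * L(1j * w)
mag = np.abs(resp)
phase = np.unwrap(np.angle(resp))

i_cg = np.argmin(np.abs(mag - 1.0))        # gain crossover: |KL(jw)| = 1
i_cp = np.argmin(np.abs(phase + np.pi))    # phase crossover: angle = -180 deg
print("PM (deg):", np.degrees(phase[i_cg] + np.pi))  # ~51; straight-line sketch gives ~45
print("GM      :", 1.0 / mag[i_cp])                   # ~101; straight-line sketch gives ~100
```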
Example. What is the gain margin and phase margin for KL(s) = 1/(s(s+1)²)?
Solution.
Example. What is the gain margin and phase margin for KL(s) = (s+1)/(s²(s/10+1))?
Solution.
In the above example, we noticed that the gain margin is ∞, since the phase only
hits −π at ω = ∞. However, note that as we increase K, the gain crossover
frequency starts moving to the right, and the phase margin decreases. If we
examine the root locus for the system L(s) = (s+1)/(s²(s/10+1)), we see that we have a
set of poles that move vertically in the plane as K → ∞, and thus the damping
ratio ζ for these poles decreases as K → ∞. This seems to indicate that there
might be some relationship between the phase margin and the damping ratio ζ.
We will now derive an explicit relationship between these two quantities.
Consider the system
\[ L(s) = \frac{\omega_n^2}{s(s+2\zeta\omega_n)} , \]
which is placed in the feedback configuration shown below.

[Block diagram: unity feedback loop with forward path L(s).]
The closed loop transfer function is
\[ T_{ry}(s) = \frac{\omega_n^2}{s^2+2\zeta\omega_n s+\omega_n^2} , \]
which is the standard second order system with damping ratio ζ. The phase of L(jω) starts near −90° at low frequencies and decreases toward −180°, reaching −180° only as ω → ∞.
This shows that the gain margin is ∞ (and this is easily verified by looking at
the root locus). Next, let’s examine the phase margin. By setting the magnitude
of L(jω) equal to 1, one can verify that the Bode plot of L(s) has gain crossover
frequency equal to
\[ \omega_{cg} = \omega_n\sqrt{\sqrt{1+4\zeta^4}-2\zeta^2} . \]
Using the fact that PM = ∠L(jωcg) + π, we obtain (after some algebra)
\[ PM = \tan^{-1}\frac{2\zeta}{\sqrt{\sqrt{1+4\zeta^4}-2\zeta^2}} . \]
Notice that the phase margin is a function of ζ and not ωn . Interestingly, this
seemingly complicated expression can be approximated fairly well by a straight
line for small values of ζ:
For 0 ≤ ζ ≤ 0.7, the phase margin (in degrees) and damping ratio are
related by
P M ≈ 100ζ .
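A quick numerical check of this rule of thumb against the exact expression derived above (a Python sketch, not part of the original notes):

```python
import numpy as np

for zeta in [0.1, 0.3, 0.5, 0.7]:
    x = np.sqrt(np.sqrt(1 + 4 * zeta**4) - 2 * zeta**2)   # w_cg / w_n
    pm_exact = np.degrees(np.arctan2(2 * zeta, x))
    print(zeta, round(pm_exact, 1), 100 * zeta)
# zeta = 0.1, 0.3, 0.5, 0.7 give roughly 11, 33, 52, 65 degrees,
# compared to 10, 30, 50, 70 from PM ~ 100*zeta.
```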
While we derived the above expression for the standard second order system, we
can also use it as a general rule of thumb for higher order systems. Specifically,
as the phase margin decreases, the system becomes less stable, and might exhibit
oscillatory behavior. We can use the above relationship to design control systems
in the frequency domain in order to obtain certain time-domain characteristics
(such as meeting overshoot specifications).
Another useful rule-of-thumb that is generally adopted for the closed loop band-
width ωBW is as follows:
ωBW ≈ ωcg ≈ ωn .
Note that our discussions here have assumed a typical Bode plot that has large
magnitude at low frequencies, and low magnitude at high frequencies, with a
single gain crossover frequency. We can deal with more complicated Bode plots
by generalizing our discussions, but we’ll focus on these typical Bode plots for
now.
In order to obtain fast transient behavior, we typically want a large gain crossover
frequency, but this would come at the cost of decreasing the phase margin. Fur-
thermore, in order to obtain better steady state tracking, we would typically
want to increase the gain K in order to boost the low frequency behavior, but
this would again move the gain crossover frequency to the right and decrease
the phase margin. Therefore, we must consider more complicated controllers
(other than just a simple proportional controller K) in order to obtain a good
phase margin, a good gain crossover frequency, and good steady state tracking.
This will be the focus of the next part of the course.
Chapter 15

Compensator Design Using Bode Plots
We now turn our attention to designing dynamic controllers (also called com-
pensators), building on our earlier study of PID controllers. We have seen so far
that the phase margin of a given system is representative of the system’s stabil-
ity, and is directly related to the damping of the system – a larger phase margin
makes the system more stable, and increases the damping. Given a system, we
thus want to design a controller that improves the phase margin. In certain sys-
tems, one way to do this would be to decrease the gain of the system, so that the
gain crossover frequency moves to the left (in the direction of increasing phase).
However, we have seen that the low frequency gain of the system is related to
how well the system tracks reference inputs – a larger low frequency gain corre-
sponds to better tracking. Another metric is the gain crossover frequency: since
the gain crossover frequency is approximately equal to the bandwidth and the
natural frequency of the system, a larger gain crossover frequency corresponds
to faster response, but also leads to smaller phase margins. Therefore, we would
like to design more sophisticated controllers in order to keep the low frequency
gain large (in order to meet tracking specifications), or to increase the gain
crossover frequency (in order to obtain faster transients), and also to increase
the phase at the gain crossover frequency (in order to boost the phase margin).
In this chapter, we will study the design of lead and lag compensators using
Bode plots. We will start by introducing the form of these controllers.
controller could not. Recall that this was the same conclusion that we reached
when we were studying PID control.
Example. For the unity feedback loop with P(s) = 1/s², draw the positive root locus when C(s) = K and C(s) = K(s + 1).
Solution.
Since a pure derivative term KD s cannot be implemented exactly, consider replacing it with the filtered derivative KD·ps/(s+p) (for some pole p); the resulting controller is
\[ C(s) = K_P + K_D\,\frac{ps}{s+p} = \frac{K_P(s+p)+K_D\,ps}{s+p} = \frac{(K_P+K_D p)s+K_P p}{s+p} = (K_P+K_D p)\,\frac{s+\dfrac{K_P p}{K_P+K_D p}}{s+p} . \]
If we let K = KP + KD p and z = KP p/(KP + KD p), we obtain the dynamic controller
\[ C(s) = K\,\frac{s+z}{s+p} . \]
This controller is called a lead controller (or lead compensator) if z < p and
a lag controller (or lag compensator) if z > p. To see where this terminology
comes from, recall that if we applied a sinusoidal input cos(ωt) to the controller
C(s), the output would be |C(jω)| cos(ωt + ∠C(jω)) in steady state. The phase
of C(jω) is given by ∠C(jω) = ∠(jω + z) − ∠(jω + p), and if z < p, we have
∠C(jω) > 0 (i.e., the output leads the input). On the other hand, if z > p, we
c Shreyas Sundaram
A lead controller will have the form C(s) = Kc(s+z)/(s+p), where p > z. Since p > z,
we can write z = αp for some 0 < α < 1. The Bode form of the above controller
is then given by
\[ C(s) = K_c\,\frac{s+\alpha p}{s+p} = K_c\,\frac{\alpha p\left(\frac{s}{\alpha p}+1\right)}{p\left(\frac{s}{p}+1\right)} = \underbrace{K_c\,\alpha}_{K}\;\underbrace{\frac{\frac{s}{\alpha p}+1}{\frac{s}{p}+1}}_{C_l(s)} . \]
The phase margin of the closed loop system can be obtained by examining the
Bode plot of C(s)P (s), which is obtained by simply adding together the Bode
plots of Cl (s) and KP (s) (since the magnitude is on a log scale, and the phases
inherently add). The gain K of the compensator can first be chosen to meet
steady state error specifications, or to obtain a certain crossover frequency. Once
that is done, let’s see what the lead compensator contributes to the system by
examining the Bode plot of Cl (s):
We see that the phase plot of Cl (s) has a bump, and we can use this positive
phase contribution to increase the phase of KP (s). Specifically, we would like
to choose α and p so that the bump occurs near the crossover frequency of
KCl (s)P (s), thereby increasing the phase margin of the system. To see how to
choose the pole and zero, note that the phase of Cl (jω) is given by
\[ \angle C_l(j\omega) = \tan^{-1}\!\Big(\frac{\omega}{\alpha p}\Big) - \tan^{-1}\!\Big(\frac{\omega}{p}\Big) . \]
From the phase plot, we note that the maximum phase occurs halfway between
the zero and the pole (on a logarithmic scale). If we denote the frequency where
the maximum phase occurs as ωmax , we have
\[ \log\omega_{max} = \tfrac{1}{2}\big(\log(\alpha p)+\log(p)\big) = \log\sqrt{\alpha p^2} , \]
from which we obtain ωmax = √α · p. If we denote φmax = ∠Cl(jωmax), we obtain
(after some algebra)
\[ \sin\phi_{max} = \frac{1-\alpha}{1+\alpha} . \]
These expressions are important, so let’s restate them:
The maximum phase of the lead compensator with zero at αp and pole at p is denoted by φmax and occurs at the frequency ωmax = √α · p. The maximum phase satisfies the equation
\[ \sin\phi_{max} = \frac{1-\alpha}{1+\alpha} , \quad\text{or equivalently,}\quad \alpha = \frac{1-\sin\phi_{max}}{1+\sin\phi_{max}} . \]
The idea will be to choose the pole and zero of the compensator such that ωmax
lies on the crossover frequency of KP (s), with the hope of contributing an extra
φmax degrees of phase margin. Let’s try an example to see how this works.
Example. Consider KP(s) = 1/(s(s+1)). Draw an approximate Bode plot for Cl(s)KP(s) when the pole and zero of the compensator are such that the maximum compensator phase occurs at the gain crossover frequency of KP(s).
Solution.
From the above example, we see that although the compensator does contribute
φmax to the phase at the gain crossover frequency of KP (s), the gain crossover
frequency of Cl (s)KP (s) actually shifts to the right due to the positive magni-
tude contribution of Cl (s). Thus, the phase margin of KCl (s)P (s) is actually
a little less than the phase margin of KP (s) plus φmax . In order to still get our
desired phase margin, we should therefore make φmax a little larger than we need
(usually about 10◦ extra is enough), so that the phase margin of KCl (s)P (s)
will meet the specification.
3. Find how much extra phase is required in order to meet the phase margin spec. Set φmax to be this extra phase plus 10°.
4. Find α = (1 − sin φmax)/(1 + sin φmax).
Note: The lead compensator also sometimes appears as Clead(s) = Gain1 · (1 + aTs)/(1 + Ts). Comparing this to the lead compensator given above, we have Gain1 = K, T = 1/p and a = 1/α.
Example. Consider P(s) = 1/(s(s+1)). Design a lead compensator so that the closed loop system has a steady state tracking error of 0.1 to a ramp input, and overshoot less than 25%.
Solution.
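A rough numerical sketch of the design steps (Python/NumPy, not part of the original notes). It uses the rule of thumb PM ≈ 100ζ (overshoot below 25% corresponds to roughly ζ ≈ 0.45, i.e., a phase margin of about 45°) and places the peak compensator phase at the uncompensated crossover as a first cut; a hand design may iterate on this placement:

```python
import numpy as np

def margins(L, ws):
    """Phase margin (degrees) and gain crossover frequency of L(jw) on the grid ws."""
    mag = np.abs(L(1j * ws))
    i = np.argmin(np.abs(mag - 1.0))            # |L(jw)| closest to 1
    wcg = ws[i]
    pm = np.degrees(np.angle(L(1j * wcg))) + 180.0
    return pm, wcg

ws = np.logspace(-2, 2, 20000)
P = lambda s: 1.0 / (s * (s + 1.0))
K = 10.0                                        # e_ss to a ramp = 1/Kv = 1/K = 0.1

pm0, wcg0 = margins(lambda s: K * P(s), ws)     # roughly 18 deg at w ~ 3.1
phi_max = np.radians(45.0 - pm0 + 10.0)         # extra phase needed, plus 10 deg slack
alpha = (1.0 - np.sin(phi_max)) / (1.0 + np.sin(phi_max))
p = wcg0 / np.sqrt(alpha)                       # put w_max = sqrt(alpha)*p at wcg0
z = alpha * p
print("alpha = %.3f, zero at -%.2f, pole at -%.2f" % (alpha, z, p))
```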
Note that we are interested in the gain-boosting properties of the lag controller,
and so we will group the DC gain β with the dynamics of the controller in the
term Cg (s) (this is in contrast to the lead controller, where we grouped the DC
gain α with the gain Kc ). In this case, we will be using the gain Kc to obtain a
desired phase margin, and the controller Cg (s) to boost the DC gain. Since the
Bode plot of C(s)P (s) is obtained simply by adding together the Bode plots of
Cg (s) and Kc P (s), let us examine the Bode plot of Cg (s):
Note from the magnitude plot that Cg (s) will add to the magnitude of Kc P (s)
at low frequencies, thereby reducing the steady state tracking error:
Furthermore, we see that the phase plot of Cg (s) has a dip between ω = p and
ω = z, which will reduce the phase of Kc P (s) in that frequency range. This
is generally bad, because a lower phase might lead to a reduced phase margin.
The idea will be to choose the pole and zero very small, so that the dip in phase
will occur at very low frequencies (far away from the gain crossover frequency).
Example. Consider P(s) = 1/(s(s+1)). Design a lag compensator so that the closed loop system has a steady state tracking error of 0.1 to a ramp input, and overshoot less than 10%.
Solution.
Note that we can use either a lead compensator or a lag compensator to satisfy
the specs here. The difference is that the lag compensator increases the phase
margin by reducing the gain crossover frequency, whereas the lead compensator
increases the phase margin by adding more phase to the system. Therefore, the
response of the system with the lead compensator will generally be faster than
that of the same system with a lag compensator. However, the lag compen-
sator is capable of boosting the DC gain of the system without substantially
moving the gain crossover frequency or reducing the phase margin. Thus, a lag
compensator is often used in order to improve the tracking characteristics of an
existing controller, without affecting the other performance metrics too much.
The choice of controller will generally depend on the application requirements
and constraints.
Chapter 16
Nyquist Plots
So far, we have studied root locus methods and Bode plot methods for analyzing
the behavior of closed loop systems from the open loop transfer functions. The
root locus allows us to see how the poles of the transfer function change when
we vary a certain parameter, and allows us to visualize the effect of adding
additional poles and zeros. However, the root locus is not capable of handling
delays in the feedback loop (because a delay of τ contributes a term e−sτ to
the transfer function, which does not have a nice zero/pole interpretation).
Furthermore, the root locus cannot handle general uncertainties in the model
(it can, however, tell us something about the locations of the poles when a single
parameter is allowed to change slightly).
Bode plots are able to capture uncertainties and delay, and we have seen how to
use them to design controllers and analyze properties of the closed loop system.
However, up to this point, we have been assuming that the closed loop system
is stable when we put P (s) in the unity feedback loop with a certain controller
gain K. Under this condition, we have seen how to use the Bode plot of the
open loop system KP (s) to determine how much we can boost K before the
closed loop poles cross the imaginary axis. We have also seen how to use Bode
plots to design lead and lag controllers in order to meet certain performance
specifications. We will now study Nyquist plots, which complement Bode
plots to provide us with frequency response techniques to determine the stability
of the closed loop system (i.e., we will not have to assume initial stability, as
we did in the Bode plot analysis). Furthermore, the Nyquist plots will provide
us with an alternative mechanism to evaluate the robustness of the system (via
the gain margin and phase margin). To develop these concepts, we will need
the notion of contours in the complex plane.
Suppose we are given a transfer function of the form
\[ H(s) = \frac{(s+z_1)(s+z_2)\cdots(s+z_m)}{(s+p_1)(s+p_2)\cdots(s+p_n)} , \]
together with a closed, clockwise contour C in the complex plane that does not pass through any of the poles or zeros of H(s).
Let’s focus on a particular point s̄ on the contour C. The complex number H(s̄)
has a magnitude and a phase; the latter is given by
\[ \angle H(\bar s) = \sum_{i=1}^{m}\angle(\bar s+z_i) - \sum_{i=1}^{n}\angle(\bar s+p_i) . \]
Note that H(s̄) can be represented as a vector from the origin with magnitude
|H(s̄)| and angle ∠H(s̄), as shown in the above figure. We will be interested in
seeing how the phase of H(s̄) changes as the point s̄ moves around the contour
C. To do this, we see from the above expression for ∠H(s̄) that we can examine
how each of the quantities ∠(s̄ + zi ) and ∠(s̄ + pi ) vary as s̄ moves around the
contour C. Suppose the contour C and the distribution of poles and zeros looks
like this:
The quantity ∠(s̄ + zi ) is given by the angle between the vector from −zi to
s̄ and the positive real axis, and the quantity ∠(s̄ + pi ) is given by the angle
between the vector from −pi to s̄ and the positive real axis. Now, consider a
zero −zi that is outside the contour C. As s̄ moves around the contour C and
comes back to its starting point, the vector s̄ + zi swings up and down, but it
does not swing all the way around. As a result, the net change in ∠(s̄ + zi ) is
0◦ . The same analysis holds for a pole outside the contour C.
Now consider a zero −zj inside the contour C. As s̄ moves around C, the vector
s̄ + zj turns all the way around, and the net change in ∠(s̄ + zj ) is therefore
−360◦ . Similarly, if we consider a pole −pj inside C, the net change in ∠(s̄ + pj )
is also −360◦ .
If we put this all together, we see that every zero and pole inside the contour
C induces a net phase change of −360◦ as s̄ moves around C, and every zero
and pole outside the contour C induces a net phase change of 0◦ . Let Z denote
the number of zeros of H(s) inside the contour C, and let P denote the number
of poles of H(s) inside the contour C. From the earlier expression for ∠H(s̄),
we see that ∠H(s̄) undergoes a net change of −(Z − P )360◦ as s̄ moves around
the contour C. Since each net change of −360◦ means that the vector from the
origin to H(s̄) swings clockwise around the origin for one full rotation, a net
change of −(Z − P )360◦ means that the contour H(C) must encircle the origin
in the clockwise direction Z − P times. This leads us to the following principle (the Principle of the Argument): if the clockwise contour C does not pass through any poles or zeros of H(s), then the plot H(C) encircles the origin in the clockwise direction N = Z − P times, where Z and P are the number of zeros and poles of H(s), respectively, inside C.
Note: The reason for calling this the Principle of the Argument is that the phase
of a complex number is also sometimes called the argument of the number, and
the above principle is derived by considering how the phase of H(s̄) changes as
s̄ moves around the contour C.
Also note that we assume that the contour C does not pass through any of the
poles or zeros of the transfer function (because the phase contribution of a zero
or pole is undefined if we evaluate the contour at that point). Similarly, the
above argument only applies if the contour H(C) does not pass through the
origin; the number of encirclements of the origin is undefined otherwise.
[Block diagram: unity feedback loop with forward path L(s).]

To determine the stability of this closed loop system, we apply the above principle to H(s) = 1 + L(s), whose zeros are the closed loop poles, using a contour C made up of three parts: C1 runs up the positive imaginary axis, C3 is a semicircular arc of infinite radius through the right half plane, and C2 returns along the negative imaginary axis. This contour encloses the entire right half plane. Part C1 contains points of the form s = jω, as ω ranges from 0 to ∞.
N = Z − P ,
where
• N is the number of clockwise encirclements of the origin by the contour H(C) (equivalently, the number of clockwise encirclements of the point −1 by the contour L(C)),
• Z is the number of zeros of H(s) = 1 + L(s) inside C, i.e., the number of closed loop poles in the right half plane, and
• P is the number of poles of H(s) inside C, i.e., the number of open loop poles of L(s) in the right half plane.
In order to apply the above technique, we needed to draw the Nyquist plot of
H (which is the contour H(C)). We can relate the Nyquist plot of H to the
Nyquist plot of the open loop system L(s) by noting that H(s) = 1 + L(s). The
contour H(C) is thus obtained by shifting the contour L(C) one unit to the right; equivalently, counting encirclements of the origin by H(C) is the same as counting encirclements of the point −1 by L(C).
The Nyquist plot of L(s) is obtained by combining the contours L(C1 ), L(C2 )
and L(C3 ), where C1 , C2 and C3 are the three portions of the contour C. We
will now examine how to draw each of these contours.
Contour C1
Note that the contour C1 is made up of points of the form s = jω, as ω ranges
from 0 to ∞. Each point on the contour L(C1 ) is then of the form L(jω), which
is just a complex number with magnitude |L(jω)| and phase ∠L(jω). We have
access to these quantities from the Bode plot of L(s), and so we can draw the
contour L(C1 ) by drawing the magnitude and phase plots from the Bode plot
together in the complex plane.
Example. Consider L(s) = 10/(s+1)². Draw the contour L(C1).
Solution.
Example. Consider L(s) = 10/(s+1)³. Draw the contour L(C1).
Solution.
Contour C2
Now that we have drawn the contour L(C1 ), let us turn our attention to the
contour L(C2 ). Note that the points on C2 are of the form s = −jω, as ω ranges
from ∞ to 0. The points on the contour L(C2 ) are thus of the form L(−jω),
which is the complex conjugate of L(jω). The magnitude of L(jω) and L(−jω)
are the same, but the phases are negatives of each other. This means that the
contour L(C2 ) is simply a mirrored version of L(C1 ) about the real axis. The
contour L(C2 ) can now be added to the plots in the examples above.
Contour C3
The contour C3 is described by points of the form s = Re^{jθ}, where R → ∞,
and θ ranges from 90° to −90°. The contour L(C3) is made up of points of the
form L(Re^{jθ}), and each of these points can be evaluated by substituting Re^{jθ} into
L(s). Specifically, note that since R is taken to be very large (infinite, in fact),
this term will dominate every factor that it appears in. Thus, if L(s) is strictly
proper, L(Re^{jθ}) will simply evaluate to zero (and thus L(C3) is a single point at
the origin). If L(s) is nonstrictly proper, then L(Re^{jθ}) will be some constant.
This will become clearer by evaluating L(C3 ) for the previous two examples,
and also from the following additional example.
Example. Draw the Nyquist plot of L(s) = (s+1)/(s+10).
Solution.
When L(s) has poles on the imaginary axis (for example, at the origin), the contour C must be modified to detour around them along semicircles of infinitesimally small radius ε, so that it still avoids passing through any poles. For example, the new portion C4 on the contour is described by points of the
form s = εe^{jθ}, where ε → 0, and θ ranges from −90° to 90°. We can evaluate
L(C4) by substituting s = εe^{jθ} into L(s), and examining what happens as ε → 0
(similarly to what was done for the portion C3). A few examples will make this
clear.
Example. Consider L(s) = 1/(s(s+1)). Draw the Nyquist plot of L(s).
Solution.
Example. Consider L(s) = 1/(s²(s+1)). Draw the Nyquist plot of L(s).
Solution.
Example. Consider L(s) = s(s+1)/(s+10)². Draw the Nyquist plot of L(s).
Solution.
[Figure: unity negative feedback loop with reference R(s), a gain K in series with L(s), and output Y(s).]
The gain K simply scales the Nyquist plot (the magnitude of every point on the
contour L(C) simply gets multiplied by K to produce KL(C)). In other words, increasing
K serves to push all of the points on the Nyquist plot further away from the
origin. We will see this from the following example.
Example. Draw the Nyquist plot of KL(s) for K = 1 and K = 200, where L(s) = 1/(s(s+1)(s+10)).
Solution.
Note that the system is stable for small K, but unstable for large K (this can
be confirmed from the root locus of L(s)).
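As a quick numerical cross-check (a sketch only, assuming Python with numpy, which the notes do not use), the closed-loop poles are the roots of the characteristic equation s(s+1)(s+10) + K = 0:

```python
# Closed-loop poles of 1 + K L(s) = 0 for L(s) = 1/(s(s+1)(s+10)),
# i.e. the roots of s^3 + 11 s^2 + 10 s + K = 0.
import numpy as np

for K in (1, 200):
    poles = np.roots([1, 11, 10, K])
    stable = np.all(poles.real < 0)
    print(f"K = {K:3d}: poles = {np.round(poles, 3)}, stable = {stable}")
```

For K = 1 all three poles lie in the left half plane, while for K = 200 a complex pair has crossed into the right half plane.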
Recall that the gain margin is defined as the factor by which K can be increased
before the closed loop system becomes unstable (we will define the gain margin
only for systems that are closed loop stable for an initial value of K). In the
Nyquist plot, this corresponds to the factor by which the plot can be scaled before the number of
encirclements of the −1 point changes. Note that this is in complete accordance
with the Bode plot analysis. Specifically, the point −1 in the complex plane
is a complex number with magnitude 1 and phase −180◦ . When we looked
at Bode plots, we showed that imaginary axis crossings occur when the gain
crossover frequency and phase crossover frequency coincide (which corresponds
to the case where KL(jω) = −1).
Similarly, the phase margin is defined as the amount by which the angle of
L(jωcg) exceeds −180°, where ωcg (the gain crossover frequency) is the frequency at which |L(jωcg)| = 1. In the
Nyquist plot, this can be obtained in the following way. First, draw a line
from the origin to the point where the Nyquist plot crosses a circle of radius
1 centered at the origin; this crossing point corresponds to |L(jω)| = 1. The
phase margin is then the angle between this line and the negative real axis.
Example. Identify the gain margin and phase margin for the example given
above (with L(s) = 1/(s(s+1)(s+10)) and K = 1).
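One possible numerical cross-check of this example (a sketch only, assuming Python with numpy; the notes obtain the same quantities graphically from the Nyquist plot):

```python
# Gain and phase margins of L(s) = 1/(s(s+1)(s+10)) with K = 1,
# computed from a dense frequency grid.
import numpy as np

w = np.logspace(-2, 2, 100000)
L = 1.0 / (1j * w * (1j * w + 1) * (1j * w + 10))
mag = np.abs(L)
phase = np.unwrap(np.angle(L))              # radians, decreasing from -pi/2

# Gain margin: 1/|L(jw)| at the phase-crossover frequency (phase = -180 deg).
i_pc = np.argmin(np.abs(phase + np.pi))
gain_margin = 1.0 / mag[i_pc]

# Phase margin: 180 deg + angle of L(jw) at the gain-crossover frequency (|L| = 1).
i_gc = np.argmin(np.abs(mag - 1.0))
phase_margin = 180.0 + np.degrees(phase[i_gc])

print(f"phase crossover ~ {w[i_pc]:.2f} rad/s, gain margin ~ {gain_margin:.0f}")
print(f"gain crossover  ~ {w[i_gc]:.3f} rad/s, phase margin ~ {phase_margin:.1f} deg")
```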
Example. Investigate the stability (and associated margins) for the system
KL(s) = 10(s+1)/(s(s−1)).
Solution.
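A quick closed-loop check for this example (a sketch, assuming numpy): with unity feedback, the characteristic equation is s(s−1) + 10(s+1) = s² + 9s + 10 = 0.

```python
# Closed-loop poles for KL(s) = 10(s+1)/(s(s-1)) in a unity feedback loop.
import numpy as np

poles = np.roots([1, 9, 10])
print(poles)                     # both roots are real and negative
print(np.all(poles.real < 0))    # True: the closed loop is stable
```

Note that the open loop here has a pole in the right half plane, so closed-loop stability requires the Nyquist plot of KL(s) to encircle the −1 point the appropriate number of times.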
The gain margin and phase margin are clearly useful concepts for dealing with
uncertainty in the system model. Typically, we would like to design the system to have
sufficiently large margins so that the closed loop system remains stable even under the
worst case uncertainty.
Another benefit of Nyquist and Bode plots is that they can readily handle delays
in the system. For example, consider the system ÿ(t) + 2ẏ(t) + y(t) = u(t − T ).
In this system, the output at time t is a function of the input at time t − T .
Since the Laplace transform of a delayed signal u(t − T) is e^{−sT}U(s), the transfer
function for this system is given by
L(s) = Y(s)/U(s) = e^{−sT}/(s+1)² .
When s = jω, the term e^{−jωT} is a complex number with magnitude 1 and phase
−ωT. This term has the effect of subtracting ωT radians from the phase at each
frequency ω on the Bode plot of 1/(s+1)².
Once again, the phase margin and gain margin come in handy, as they give us
an indication of the maximum delay that the system can tolerate before going
unstable. More specifically, note that a large delay can cause the Nyquist plot
to rotate enough that the number of encirclements of −1 changes, indicating
that the closed loop system becomes unstable. In other words, a large delay can
destabilize a feedback loop that is otherwise stable!
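A crude estimate of the largest tolerable delay follows directly from the phase margin: a delay of T seconds removes ωT radians of phase, so the phase margin is used up at roughly T ≈ (phase margin in radians)/ωcg. The sketch below (assuming numpy, and reusing L(s) = 1/(s(s+1)(s+10)) purely as an illustration, since the margins of that loop were examined above) computes this estimate.

```python
# Rough estimate of the maximum tolerable loop delay: PM (in radians) / w_cg.
import numpy as np

w = np.logspace(-2, 2, 100000)
L = 1.0 / (1j * w * (1j * w + 1) * (1j * w + 10))

i_gc = np.argmin(np.abs(np.abs(L) - 1.0))   # gain crossover: |L(jw)| = 1
w_gc = w[i_gc]
pm_rad = np.pi + np.angle(L[i_gc])          # phase margin in radians

print(f"gain crossover ~ {w_gc:.3f} rad/s, phase margin ~ {np.degrees(pm_rad):.1f} deg")
print(f"estimated maximum tolerable delay ~ {pm_rad / w_gc:.1f} s")
```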
Chapter 17

Modern Control Theory: State Space Models
Up to this point, we have been analyzing and designing control systems by us-
ing a transfer-function approach (which allowed us to conveniently model the
system, and use techniques such as root-locus, Bode plots and Nyquist plots).
These techniques were developed and studied during the first half of the twen-
tieth century in an effort to deal with issues such as noise and bandwidth
in communication systems. Transfer function methods have various drawbacks
however, since they cannot deal with nonlinear systems, are not very convenient
when considering systems with multiple inputs and outputs, and are difficult to
use for formulating ‘optimal’ control strategies. Starting in the 1950’s (around
the time of the space race), control engineers and scientists started turning to
state-space models of control systems in order to address some of these is-
sues. These are purely time-domain ordinary differential equation models of
systems, and are able to effectively represent concepts such as the internal state
of the system, and also present a method to introduce optimality conditions into
the controller design procedure. This chapter will provide an introduction to
the state-space approach to control design (sometimes referred to as “modern
control”).
17.1 State-Space Models

Consider, for example, a system described by the differential equation
ÿ + 3ẏ + 2y = 4u .
The transfer function for this system can be readily found to be H(s) = Y(s)/U(s) = 4/(s² + 3s + 2). To represent this model in state-space form, we first draw an all-
integrator block diagram for this system. Specifically, the all-integrator
block diagram is simply a set of integrator blocks that are chained together
according to the constraints imposed by the system. To obtain the diagram for
this system, we first solve for the highest derivative:
ÿ = −2y − 3ẏ + 4u .
Starting from the highest derivative (ÿ), we need to somehow obtain the lower
derivatives y and ẏ. This can be done by integrating ÿ twice, so we chain together
two integrator blocks: ÿ is fed into the first integrator, whose output is ẏ, and ẏ is fed
into the second integrator, whose output is y. Summing the signals −2y, −3ẏ and 4u to
form ÿ closes the loop and yields the all-integrator block diagram.
Each integrator block in this diagram can be viewed as representing one of the
internal states of the system. Let us assign a state variable to the output of
each integrator in order to represent the states. In this case, we will use the
state variables x1 and x2 defined as
x1 = y, x2 = ẏ .
Using the all integrator block diagram, we can differentiate each of the state
variables to obtain
ẋ1 = ẏ = x2
ẋ2 = ÿ = −2y − 3ẏ + 4u = −2x1 − 3x2 + 4u
y = x1 .
These equations can be written compactly in matrix-vector form as
ẋ = Ax + Bu
y = Cx .        (17.1)
For the above example,
\[
A = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}, \qquad
B = \begin{bmatrix} 0 \\ 4 \end{bmatrix}, \qquad
C = \begin{bmatrix} 1 & 0 \end{bmatrix}.
\]
• The vector x is called the state vector of the system. We will denote
the number of states in the system by n, so that x ∈ Rn . The quantity
n is often called the order of the system. In the above example, we have
n = 2.
• In general, we might have multiple inputs u1, u2, . . . , um to the system. In
this case, we can define an input vector u = [u1 u2 · · · um]′ (the
notation M′ indicates the transpose of the matrix M). In the above example,
m = 1.
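As a concrete illustration of these equations (a sketch only, assuming Python with numpy, which the notes do not use), the state-space model derived above can be simulated directly, for instance with simple forward-Euler integration:

```python
# Simulate xdot = A x + B u, y = C x for the example above, with a unit step input.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [4.0]])
C = np.array([[1.0, 0.0]])

dt, t_end = 1e-3, 8.0
x = np.zeros((2, 1))            # state vector: x1 = y, x2 = ydot
u = 1.0                         # unit step input

for _ in range(int(t_end / dt)):
    x = x + dt * (A @ x + B * u)     # forward-Euler step of xdot = A x + B u

print((C @ x).item())   # settles near 2, the DC gain of H(s) = 4/(s^2 + 3s + 2)
```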
The states in state-space models often represent physical quantities in the sys-
tem. For example, one common model¹ for an F-8 aircraft contains four states: the
velocity V, the flight path angle γ, the angle of attack α, and the pitch rate q.
Furthermore, the input to the system is applied via a deflection in the elevator
angle, and is denoted by δe. The linearized dynamics of the aircraft are given by
\[
\underbrace{\begin{bmatrix} \dot{V} \\ \dot{\gamma} \\ \dot{\alpha} \\ \dot{q} \end{bmatrix}}_{\dot{x}}
=
\underbrace{\begin{bmatrix}
-1.357\times 10^{-2} & -32.2 & -46.3 & 0 \\
1.2\times 10^{-4} & 0 & 1.214 & 0 \\
-1.212\times 10^{-4} & 0 & -1.214 & 1 \\
5.7\times 10^{-4} & 0 & -9.01 & -6.696\times 10^{-1}
\end{bmatrix}}_{A}
\underbrace{\begin{bmatrix} V \\ \gamma \\ \alpha \\ q \end{bmatrix}}_{x}
+
\underbrace{\begin{bmatrix} -0.433 \\ 0.1394 \\ -0.1394 \\ -0.1577 \end{bmatrix}}_{B} \delta_e
\]
\[
y = \underbrace{\begin{bmatrix} 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \end{bmatrix}}_{C} x .
\]
Note that real aircraft dynamics are more complicated than this, and are non-
linear; however they can be approximated by choosing the dominant states of
the system, and linearizing the dynamics. We will look at this in more detail
next.
¹ See the paper Linear Regulator Design for Stochastic Systems by a Multiple Time-Scales
Method by Teneketzis and Sandell, IEEE Transactions on Automatic Control, vol. 22, no. 4,
Aug. 1977, pp. 615-621, for more details.
17.2 Nonlinear State-Space Models and Linearization

As an example of a nonlinear state-space model, consider a pendulum of mass m and length l,
driven by an applied torque Te about its pivot. Taking x1 = θ (the angle measured from the
downward vertical) and x2 = θ̇ as state variables, the dynamics are ẋ1 = x2 and
ẋ2 = −(g/l) sin x1 + (1/(ml²))Te. Near the equilibrium x1 = 0 we can use the small-angle
approximation sin x1 ≈ x1. The second state equation then becomes ẋ2 = −(g/l)x1 + (1/(ml²))Te,
and thus the nonlinear pendulum model can be approximated by the linear model
\[
\dot{x} = \begin{bmatrix} 0 & 1 \\ -\frac{g}{l} & 0 \end{bmatrix} x
+ \begin{bmatrix} 0 \\ \frac{1}{ml^2} \end{bmatrix} T_e , \qquad
y = \begin{bmatrix} 1 & 0 \end{bmatrix} x .
\]
More generally, consider first a scalar system
ẋ = f(x),
where f is some (potentially nonlinear) function of the state x, and let x̄ be an equilibrium point of the system (i.e., f(x̄) = 0). The Taylor series expansion of f about the point x̄ is
\[
f(x) = f(\bar{x}) + \left.\frac{df}{dx}\right|_{x=\bar{x}}(x-\bar{x})
+ \frac{1}{2}\left.\frac{d^2 f}{dx^2}\right|_{x=\bar{x}}(x-\bar{x})^2
+ \frac{1}{6}\left.\frac{d^3 f}{dx^3}\right|_{x=\bar{x}}(x-\bar{x})^3 + \cdots .
\]
For x sufficiently close to x̄, these higher order terms will be very close to zero,
and so we can drop them to obtain the approximation
ẋ ≈ a(x − x̄) ,
where a denotes the derivative df/dx evaluated at x = x̄ (the constant term f(x̄) vanishes because x̄ is an equilibrium point).
The extension to functions of multiple states and inputs is very similar to the
above procedure. Suppose the evolution of state xi is given by
ẋi = fi (x1 , x2 , . . . , xn , u1 , u2 , . . . , um ) ,
for some general function fi . Suppose that the equilibrium points are given by
x̄1 , x̄2 , . . . , x̄n , ū1 , ū2 , . . . , ūm , so that
fi (x̄1 , x̄2 , . . . , x̄n , ū1 , ū2 , . . . , ūm ) = 0 ∀i ∈ {1, 2, . . . , n} .
Note that the equilibrium point should make all of the functions fi equal to
zero, so that all states in the system stop moving when they reach equilibrium.
The linearization of fi about the equilibrium point is then given by
\[
f_i(x_1,\ldots,x_n,u_1,\ldots,u_m) \approx
\sum_{j=1}^{n} \left.\frac{\partial f_i}{\partial x_j}\right|_{x_j=\bar{x}_j} (x_j - \bar{x}_j)
+ \sum_{j=1}^{m} \left.\frac{\partial f_i}{\partial u_j}\right|_{u_j=\bar{u}_j} (u_j - \bar{u}_j) .
\]
If we define the delta states and inputs δxj = xj − x̄j (for 1 ≤ j ≤ n) and
δuj = uj − ūj (for 1 ≤ j ≤ m), the linearized dynamics of state xi are given by
\[
\delta\dot{x}_i =
\sum_{j=1}^{n} \left.\frac{\partial f_i}{\partial x_j}\right|_{x_j=\bar{x}_j} \delta x_j
+ \sum_{j=1}^{m} \left.\frac{\partial f_i}{\partial u_j}\right|_{u_j=\bar{u}_j} \delta u_j .
\]
Note: Sometimes the “δ” notation is dropped in the linearized equation, with
the implicit understanding that we are working with a linearized system.
Example. Linearize the nonlinear state-space model
ẋ1 = x1² + sin x2 − 1
ẋ2 = −x2³ + u
y = x1 + x2
around the equilibrium point x̄1 = 1, x̄2 = 0, ū = 0.
Solution.
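One way to carry out this computation (a sketch only, assuming Python with sympy; the notes work the solution out by hand) is to evaluate the Jacobian matrices of the right-hand sides at the equilibrium point:

```python
# Linearize the example system about x1 = 1, x2 = 0, u = 0.
import sympy as sp

x1, x2, u = sp.symbols("x1 x2 u")
f = sp.Matrix([x1**2 + sp.sin(x2) - 1,     # f1
               -x2**3 + u])                # f2
h = sp.Matrix([x1 + x2])                   # output y

eq = {x1: 1, x2: 0, u: 0}                  # candidate equilibrium point
print(f.subs(eq))                          # Matrix([[0], [0]]): confirms it is an equilibrium

A = f.jacobian([x1, x2]).subs(eq)          # partial derivatives with respect to the states
B = f.jacobian([u]).subs(eq)               # partial derivatives with respect to the input
C = h.jacobian([x1, x2]).subs(eq)
print(A)                                   # Matrix([[2, 1], [0, 0]])
print(B)                                   # Matrix([[0], [1]])
print(C)                                   # Matrix([[1, 1]])
```

The linearized model is then δẋ = A δx + B δu, δy = C δx, with the matrices printed above.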
17.3 The Transfer Function of a Linear State-Space Model

Taking Laplace transforms of ẋ = Ax + Bu and y = Cx (with zero initial conditions) gives sX(s) = AX(s) + BU(s) and Y(s) = CX(s), from which the transfer function of the state-space model is
H(s) = C(sI − A)^{−1}B .
Example. Find the transfer function of the state-space model obtained for the system ÿ + 3ẏ + 2y = 4u at the start of this chapter.
Solution.
Note that the above solution agrees with the transfer function at the beginning
of the section.
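A small symbolic check of this computation (a sketch, assuming Python with sympy, which the notes do not use):

```python
# Verify that C (sI - A)^{-1} B recovers 4/(s^2 + 3s + 2) for the example system.
import sympy as sp

s = sp.symbols("s")
A = sp.Matrix([[0, 1], [-2, -3]])
B = sp.Matrix([[0], [4]])
C = sp.Matrix([[1, 0]])

H = sp.cancel((C * (s * sp.eye(2) - A).inv() * B)[0, 0])
print(H)   # -> 4/(s**2 + 3*s + 2)
```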
17.4 Obtaining the Poles from the State-Space Model
Recall that a nonzero vector v is an eigenvector of the matrix A, with corresponding eigenvalue λ, if
Av = λv.
The eigenvalues of A are the roots of the characteristic polynomial det(λI − A).
The poles of the transfer function H(s) = C(sI − A)^{−1}B are exactly
the eigenvalues of the matrix A (provided no pole-zero cancellations occur when forming C(sI − A)^{−1}B). In other words, the poles are the values s
that satisfy det(sI − A) = 0.
Example. Find the poles of the system with
\[
A = \begin{bmatrix} 0 & 1 \\ -20 & -9 \end{bmatrix}, \qquad
B = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad
C = \begin{bmatrix} 2 & 3 \end{bmatrix}.
\]
Solution.
Example. Find the poles of the system with
\[
A = \begin{bmatrix} 1 & 1 & -2 \\ 0 & -9 & 3 \\ 0 & 0 & 2 \end{bmatrix}, \qquad
B = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \qquad
C = \begin{bmatrix} 1 & 0 & 1 \end{bmatrix}.
\]
Solution.
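A numerical cross-check of the eigenvalue computations in both examples (a sketch, assuming numpy):

```python
# The eigenvalues of A for the two examples above.
import numpy as np

A1 = np.array([[0, 1],
               [-20, -9]])
print(np.linalg.eigvals(A1))   # -4 and -5

A2 = np.array([[1, 1, -2],
               [0, -9, 3],
               [0, 0, 2]])
print(np.linalg.eigvals(A2))   # 1, -9 and 2 (the diagonal entries, since A2 is upper triangular)
```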
Controllability is about making the system state (not just the output) behave
how we want it to by applying proper inputs. Observability is about whether we
can determine what the internal state of the system is doing by looking at the
outputs. Both of these concepts play central roles in modern control systems
design.
When it comes to stabilization of systems in the state-space domain, one com-
monly considers the use of linear state feedback. Specifically, suppose that
we have access to the entire state x(t) of the system for all time; this is unreal-
istic, since we only have access to y(t), which measures only a few of the states,
but let’s just assume it for now. Linear state feedback control applies an input
of the form
u(t) = −Kx(t),
for some matrix K that we will choose. The closed loop system is then
ẋ = Ax + Bu = Ax − BKx = (A − BK)x.
The dynamics of the closed loop system are given by the matrix A − BK;
specifically, as discussed in the previous section, the poles of this system are
given by the eigenvalues of this matrix. Thus, in order to obtain a stable system,
we have to choose K so that all eigenvalues of A − BK are stable (i.e., in the
OLHP). It turns out that it is possible to do this if the system is controllable.
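A brief sketch of how such a gain might be computed (assuming Python with numpy and scipy, which the notes do not use; the desired pole locations below are chosen arbitrarily for illustration, and the two-state example from earlier in the chapter is reused):

```python
# Choose K so that the eigenvalues of A - BK sit at desired locations.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [4.0]])

desired = [-4.0, -5.0]
K = place_poles(A, B, desired).gain_matrix

print(K)
print(np.linalg.eigvals(A - B @ K))   # the closed-loop eigenvalues are -4 and -5
```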
The above feedback mechanism assumed that we have access to the entire state.
Since we only have access to the measurements of a few state variables (provided
by the output y(t)), one strategy would be to try to reconstruct the entire state,
based on the measurements available. This is possible if the system is observable,
in which case one can construct a state-estimator that provides an estimate
of x(t) to be used with the linear state feedback input described above. The
architecture of state feedback control with a state estimator thus combines two pieces: an
estimator that produces x̂(t) from the measured output y(t), and the feedback law u(t) = −K x̂(t) applied to that estimate.
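The following is a very rough sketch of this architecture (assuming numpy; the estimator below uses the standard Luenberger form x̂̇ = Ax̂ + Bu + L(y − Cx̂), which is one common choice rather than the only one, and the gains K and L_obs are picked purely for illustration on the two-state example):

```python
# State feedback applied to an estimated state, simulated with forward Euler.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [4.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[4.5, 1.5]])          # state feedback gain (eigenvalues of A - BK at -4, -5)
L_obs = np.array([[7.0], [10.0]])   # estimator gain (a stabilizing choice for A - L_obs C)

dt = 1e-3
x = np.array([[1.0], [0.0]])        # true (unmeasured) initial state
xhat = np.zeros((2, 1))             # the estimator starts with no knowledge of x

for _ in range(int(10.0 / dt)):
    u = -K @ xhat                   # feedback uses the *estimate* of the state
    y = C @ x                       # only the output is measured
    x = x + dt * (A @ x + B @ u)
    xhat = xhat + dt * (A @ xhat + B @ u + L_obs @ (y - C @ xhat))

print(np.linalg.norm(x), np.linalg.norm(x - xhat))   # both are driven close to zero
```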
The details of these topics, along with issues such as choosing the inputs op-
timally, dealing with noise, etc., are treated in more advanced undergraduate
and graduate courses. Hopefully this course has piqued your interest in control
systems, and motivated you to learn more about this subject in future courses!
The End.