Feedback Lecture Notes
Lecture 1
Lecturer: Asst. Prof. M. Mert Ankarali
Dynamical Systems
A dynamical system is a collection of “elements” among which there are time-dependent cause-and-effect relationships between “variables”. The behavior of a dynamical system changes with time, usually in response to external inputs.
The elements can range from atoms and molecules to oceans and planets. Similarly, time scales can range from picoseconds to years and even decades.
Goal: Obtaining a mathematical description of the dynamic relationship between the variables of the given
dynamical system/behavior. Always, the mathematical description is a simplification of the real phenomena.
Always remember:
“All models are wrong but some are useful” George E. Box, 1978.
1. Differential equations
Differential equations can model both linear and non-linear systems, time-invariant and time-varying
ones.
y(t) = h(t) ∗ u(t) = ∫_{−∞}^{∞} h(t − τ) u(τ) dτ
Impulse-response representation can be used to model the dynamic relation between input, u(t), and
output y(t) signals. This representation is limited to Linear-Time-Invariant (LTI) systems.
Time varying impulse response representation can be used to model Linear-Time-Varying (LTV) sys-
tems.
Similar to differential equations, we can model both linear and non-linear systems, and time-invariant
and time-varying systems using state-space representations.
Feedback Systems
The term feedback refers to the case where dynamical systems are connected such that the output of each system influences its own driving input (directly or indirectly), and the dynamic relations between the different sub-systems are tightly coupled.
Simple causal reasoning about a feedback system is difficult (if not impossible). For example, if we observe
the feedback system illustrated in Fig. 1.1(B), we see that the first system influences the second, and the
second system influences the first, leading to a circular argument. We can conclude that formal methods are
necessary to understand the behavior of feedback systems.
Fig. 1.1 illustrates the idea of feedforward vs. feedback in block-diagram topology. The terms open-loop and closed-loop are also commonly used to refer to feedforward and feedback structures, respectively. When systems are connected to each other in a cyclic topology, we refer to the topology as a closed-loop system. If one cuts an interconnection such that the cyclic structure is broken, the system becomes an open-loop system.
Feedback-based control is not mandatory for every control application, and in some cases a feedforward-based control policy can even be more advantageous than a feedback-based one. The core benefit of feedback in a control system is that feedback reduces uncertainties in the system and improves robustness. Since uncertainties are unavoidable in real life, feedback control systems are ubiquitous in both synthetic and biological control systems.
Electrical Circuits
V_L + V_R + V_C = V_s(t)
L (dI/dt) + R I + V_C = V_s(t)
Using I = C (dV_C/dt),
LC (d²V_C/dt²) + RC (dV_C/dt) + V_C = V_s(t)
LC V̈_C + RC V̇_C + V_C = V_s(t)
With y = V_C and u = V_s,
ÿ + (R/L) ẏ + (1/(LC)) y = (1/(LC)) u
Find the transfer function representation of the system for the given input–output pair.
Taking the Laplace transform of both sides (zero initial conditions),
s² Y(s) + (R/L) s Y(s) + (1/(LC)) Y(s) = (1/(LC)) U(s)
G(s) = Y(s)/U(s) = (1/(LC)) / (s² + (R/L) s + 1/(LC))
Choosing the state variables as x1 = y = V_C and x2 = ẏ, the state equations become
ẋ1 = x2
ẋ2 = −(1/(LC)) x1 − (R/L) x2 + (1/(LC)) u
If we put the equations in state-space form, we obtain
ẋ = [0, 1; −1/(LC), −R/L] x + [0; 1/(LC)] u
y = [1, 0] x
where
A = [0, 1; −1/(LC), −R/L] , B = [0; 1/(LC)] , C = [1, 0] , D = 0
Now let z = [z1; z2] = [V_C; I]; then
ż1 = (1/C) z2
ż2 = −(1/L) z1 − (R/L) z2 + (1/L) u
If we put the equations in state-space form, we obtain
ż = [0, 1/C; −1/L, −R/L] z + [0; 1/L] u
y = [1, 0] z
It can be seen that the state-space representation of a dynamical system is not unique. Indeed, there exist infinitely many state-space representations of the same system.
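As a quick numerical check (a minimal sketch; the component values below are assumed), both realizations can be converted to a transfer function with MATLAB's ss2tf and compared:

% Minimal sketch (assumed values R = 1, L = 0.5, C = 1e-3): the two RLC
% realizations above yield the same transfer function.
R = 1; L = 0.5; C = 1e-3;
A1 = [0 1; -1/(L*C) -R/L];  B1 = [0; 1/(L*C)];  C1 = [1 0];  D1 = 0;   % x = [VC; dVC/dt]
A2 = [0 1/C; -1/L -R/L];    B2 = [0; 1/L];      C2 = [1 0];  D2 = 0;   % z = [VC; I]
[num1, den1] = ss2tf(A1, B1, C1, D1);
[num2, den2] = ss2tf(A2, B2, C2, D2);
disp([den1; den2])   % both denominators equal [1, R/L, 1/(L*C)]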
EE302 - Feedback Systems Spring 2019
Lecture 2
Lecturer: Asst. Prof. M. Mert Ankarali
In this lecture we will cover the conversion between different LTI representations.
2.2 State-Space to TF
Note that a SS representation of an nth order LTI system has the form below
ẋ(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)
In order to convert state-space to transfer function, we start by taking the Laplace transform of both sides of the state equation (with zero initial conditions):
s X(s) = A X(s) + B U(s)  ⇒  X(s) = (sI − A)⁻¹ B U(s)
Y(s) = C X(s) + D U(s)  ⇒  G(s) = Y(s)/U(s) = C (sI − A)⁻¹ B + D
Note that for a given LTI system, there exist infinitely many different SS representations. In this part, we
learn two different ways converting a TF/ODE into State-Space form. For the sake of clarity, we will derive
the realization for a general 3rd order LTI system.
In this method of realization, we will use the fact the system is LTI. Let’s consider the transfer function of
the system and let’s perform some LTI operations.
Y(s) = (b3 s³ + b2 s² + b1 s + b0)/(s³ + a2 s² + a1 s + a0) · U(s)
     = (b3 s³ + b2 s² + b1 s + b0) · [1/(s³ + a2 s² + a1 s + a0)] · U(s)
     = G2(s) G1(s) U(s) , where
G1(s) = H(s)/U(s) = 1/(s³ + a2 s² + a1 s + a0)
G2(s) = Y(s)/H(s) = b3 s³ + b2 s² + b1 s + b0
As you can see, we introduced an intermediate variable h(t), with Laplace transform H(s). The first transfer function, G1(s), is strictly proper (all-pole); it operates on u(t) and produces the intermediate signal h(t). The second transfer function, G2(s), is a polynomial (non-causal) operator; it operates on h(t) and produces the output y(t). If we write the ODEs of both systems we obtain
d³h/dt³ = −a2 ḧ − a1 ḣ − a0 h + u
y = b3 (d³h/dt³) + b2 ḧ + b1 ḣ + b0 h
Now let the state variables be x = [x1; x2; x3] = [h; ḣ; ḧ]. Then the individual state equations take the form
x˙1 = x2
x˙2 = x3
x˙3 = −a2 x3 − a1 x2 − a0 x1 + u
y = b3 (−a2 x3 − a1 x2 − a0 x1 + u) + b2 x3 + b1 x2 + b0 x1
= (b0 − b3 a0 )x1 + (b1 − b3 a1 )x2 + (b2 − b3 a2 )x3 + b3 u
If we obtain a state-space model from this approach, the form will be in controllable canonical form. We will
cover this later in the semester. Thus we can call this representation also as controllable canonical realization.
For a general nth order system controllable canonical form has the following A , B , C , & D matrices
A = [ 0 , 1 , 0 , ··· , 0 ;
      0 , 0 , 1 , ··· , 0 ;
      ⋮ , ⋮ , ⋮ , ⋱ , ⋮ ;
      0 , 0 , 0 , ··· , 1 ;
      −a0 , −a1 , −a2 , ··· , −a_{n−1} ] ,   B = [0; 0; ⋮; 0; 1]
C = [ (b0 − bn a0) , (b1 − bn a1) , ··· , (b_{n−1} − bn a_{n−1}) ] ,   D = bn
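As an illustration (a minimal sketch; the coefficient values a0..a2, b0..b3 below are assumed), the controllable canonical matrices for the 3rd order case can be constructed and checked against the original transfer function:

% Minimal sketch (assumed coefficients): controllable canonical realization of
% G(s) = (b3 s^3 + b2 s^2 + b1 s + b0)/(s^3 + a2 s^2 + a1 s + a0).
a = [2 3 4];        % [a0 a1 a2], assumed values
b = [1 5 6 7];      % [b0 b1 b2 b3], assumed values
A = [0 1 0; 0 0 1; -a(1) -a(2) -a(3)];
B = [0; 0; 1];
C = [b(1)-b(4)*a(1), b(2)-b(4)*a(2), b(3)-b(4)*a(3)];
D = b(4);
[num, den] = ss2tf(A, B, C, D)   % recovers b and a (descending powers of s)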
In this method, we will obtain a different minimal state-space realization. The process will be different, and the state-space structure will have a different topology. Let's start with the transfer function and perform some grouping based on powers of s.
Y(s) = (b3 s³ + b2 s² + b1 s + b0)/(s³ + a2 s² + a1 s + a0) · U(s)
Y(s) (s³ + a2 s² + a1 s + a0) = (b3 s³ + b2 s² + b1 s + b0) U(s)
s³ Y(s) = b3 s³ U(s) + s²(−a2 Y(s) + b2 U(s)) + s(−a1 Y(s) + b1 U(s)) + (−a0 Y(s) + b0 U(s))
Let's multiply both sides by 1/s³ and perform further grouping
Y(s) = b3 U(s) + (1/s)(−a2 Y(s) + b2 U(s)) + (1/s²)(−a1 Y(s) + b1 U(s)) + (1/s³)(−a0 Y(s) + b0 U(s))
Y(s) = b3 U(s) + (1/s)[ (−a2 Y(s) + b2 U(s)) + (1/s)[ (−a1 Y(s) + b1 U(s)) + (1/s)(−a0 Y(s) + b0 U(s)) ] ]
Let the Laplace-domain representations of the state variables, X(s) = [X1(s); X2(s); X3(s)], be defined as
X1(s) = (1/s)(−a0 Y(s) + b0 U(s))
X2(s) = (1/s)(−a1 Y(s) + b1 U(s)) + (1/s²)(−a0 Y(s) + b0 U(s))
X3(s) = (1/s)(−a2 Y(s) + b2 U(s)) + (1/s²)(−a1 Y(s) + b1 U(s)) + (1/s³)(−a0 Y(s) + b0 U(s))
In this context, the output equation in the s and time domains simply takes the form
Y(s) = X3(s) + b3 U(s)  →  y(t) = x3(t) + b3 u(t)
Accordingly, the state equations (in the s and time domains) take the form
sX1 (s) = −a0 X3 (s) + (b0 − a0 b3 )U (s) → ẋ1 = −a0 x3 + (b0 − a0 b3 )u
sX2 (s) = X1 (s) − a1 X3 (s) + (b1 − a1 b3 )U (s) → ẋ2 = x1 − a1 x3 + (b1 − a1 b3 )u
sX3 (s) = X2 (s) − a2 X3 (s) + (b2 − a2 b3 )U (s) → ẋ3 = x2 − a2 x3 + (b2 − a2 b3 )u
If we obtain a state-space model from this method, the form will be in observable canonical form. Thus we
can call this representation also as observable canonical realization. This form and representation is the dual
of the previous representation.
For a general nth order system observable canonical form has the following A , B , C , & D matrices
A = [ 0 , 0 , ··· , 0 , −a0 ;
      1 , 0 , ··· , 0 , −a1 ;
      0 , 1 , ··· , 0 , −a2 ;
      ⋮ , ⋮ , ⋱ , ⋮ , ⋮ ;
      0 , 0 , ··· , 1 , −a_{n−1} ] ,   B = [ (b0 − bn a0) ; (b1 − bn a1) ; ⋮ ; (b_{n−2} − bn a_{n−2}) ; (b_{n−1} − bn a_{n−1}) ]
C = [ 0 , 0 , ··· , 0 , 1 ] ,   D = bn
Example:
[Circuit figure: ideal OPAMP circuit with components 1 μF, 100 μF, 10 kΩ, 10 kΩ, 20 kΩ, 25 kΩ; input vi(t), intermediate node v1(t), output vo(t)]
1. Given that u(t) = vi (t) and y(t) = vo (t), compute the transfer function for the given (ideal) OPAMP
circuit
Solution:
V1(s)/(2.5·10⁴) = −Y(s) (s·10⁻⁶ + 10⁻⁴)
V1(s)/(2.5·10⁴) = −Y(s) (s + 100) · 10⁻⁶
Y(s)/V1(s) = −40/(s + 100)
1. First find state-space representations of the sub-system transfer functions, i.e. V1(s)/U(s) and Y(s)/V1(s), separately.
2. Then combine the state-space representations of the sub-systems to find a state-space representation
for the whole system.
3. Compute the TF from the computed state-space representation and compare it to the previous results.
EE302 - Feedback Systems Spring 2019
Lecture 3
Lecturer: Asst. Prof. M. Mert Ankarali
We use Kirchhoff’s Current and Voltage laws to derive the dynamical (and static) relationships in Electri-
cal Circuits. Similarly, we utilize Newton’s laws of motion to derive equations of motion in (rigid body)
mechanical dynamical systems.
There exist two different analogies that we can construct between electrical and mechanical systems. Math-
ematically, there is no difference between the two approaches. In this lecture, we will learn one of these
analogies.
In electrical circuits, the core variables are Voltage, V , and current, I, whereas in translational mechanical
systems, core variables are translational velocity, ν, and force, f . Similarly, in rotational mechanical systems,
the core variables are angular velocity, ωn, and torque τ .
Voltage, V ⇐⇒ Velocity, ν ⇐⇒ Angular Velocity, ω
In electrical systems, voltage, also called electric potential difference, accounts for the difference in electric
potential between two points. When we refer to the voltage of a node/point, we always measure it with
respect to a reference point, e.g. ground. In mechanical systems, we measure the velocity either between
two points in space, or (which is more general) with respect to an inertial reference frame, e.g. ground or
earth in general. The analogy is similar with angular velocity. For this reason, we say that voltage, linear
velocity, and angular velocity are the analog variables.
Current, I ⇐⇒ force, f ⇐⇒Torque, τ
In electrical systems, the current is the flow (or rate of change of) of electric charge and carried by electrons
in motion. Roughly speaking, based on Newton’s second law, the force acting upon a (rigid) body is equal
to the rate of change of momentum related to this specific force component. Momentum can be considered
as an analog of the electrical charge in this case. A similar analogy can also be constructed using Torque
and Angular Momentum. For this reason, we say that Current, Force, and Torque are the analog variables.
3.1.2 Capacitor C, 1-DOF Translating Body with Mass m, and 1-DOF Rotating
Body with Inertia J
If we follow the analogs between the variables, we can see that ideal capacitor for which one end is connected
to the ground, 1-DOF translating body with a mass of m, and 1-DOF rotating body with an inertia of J
analogs of each other. These are all ideal energy storage elements in their modeling domains and they are
illustrated in the figure below.
Ideal Capacitor Ideal 1-DOF Translating Body (m) Ideal 1-DOF Rotating Body (J)
The ODEs that govern the dynamics of these elements are provided below
Capacitor: C V̇(t) = I(t)
Translating body: m ν̇(t) = f(t)
Rotating body: J ω̇(t) = τ(t)
Based on these equations we can reach the following (system) parameter analogy:
C ≡ m ≡ J
If we follow the analogs between the variables we can see that Ideal inductor (L), linear translational spring
(k), and linear torsional spring (κ) are analogs of each other. These are also ideal energy storage elements
in their modeling domains and they are illustrated in the figure below.
The ODEs that govern the dynamics of these elements are provided below
Inductor: L İ(t) = V(t)
Translational spring: f(t) = k x(t) → (1/k) ḟ(t) = ν(t)
Torsional spring: τ(t) = κ θ(t) → (1/κ) τ̇(t) = ω(t)
Based on these equations we can reach the following (system) parameter analogy:
L ≡ 1/k ≡ 1/κ
If we follow the analogs between the variables, we can see that the ideal resistor (R), the linear translational damper (b), and the linear torsional damper (β) are analogs of each other. These elements are ideal, fully passive dissipative elements. Thus, they are memoryless (static) components as opposed to the previous elements. These elements are illustrated in the figure below.
[Figure: ideal resistor, ideal linear damper (viscous friction), and ideal torsional damper, with free-body diagrams]
The algebraic equations that govern the statics of these elements are provided below
Resistor: V(t) = R I(t)
Translational damper: ν(t) = (1/b) f(t)
Torsional damper: ω(t) = (1/β) τ(t)
Based on these equations we can reach the following (system) parameter analogy:
R ≡ 1/b ≡ 1/β
In both electrical and mechanical systems, we have transmission elements. In their ideal form, they conserve
the energy after the transformation. In electrical systems, transformer is the component that achieves the
transmission. In translational mechanical systems a linearized lever can achieve this under the assumption
of small movements, where as for rotational systems a gear pair is one of the many solutions for mechanical
transmission. These components are illustrated in the figure below.
Ideal Transformer Linearized Lever Gear Pair
The algebraic equations that govern the statics of these elements are provided below
Electrical transformer: V1/N1 = V2/N2 , I1 N1 = I2 N2
Lever: ν1/l1 = ν2/l2 , f1 l1 = f2 l2
Gear pair: ω1/r2 = ω2/r1 , τ1 r2 = τ2 r1
Accordingly, the transmission-ratio analogy is
N1/N2 ≡ l1/l2 ≡ r2/r1
3.2 Examples
Ex 3.1. Let’s consider the following translational mechanical system. The input of the system is vi (t) which
is the velocity of the one side of the first damper. The output of the system is vm (t), i.e. the velocity of the
mass.
1. Given that u(t) = vi(t) and y(t) = vm(t), find the transfer function G(s) = Y(s)/U(s)
Let's first draw the FBD and then derive the equations of motion
(vi − vm)/(1/b) = Im + vm/(1/b) , where Im = m v̇m
b vi = m v̇m + 2b vm  ⇒  b u = m ẏ + 2b y
G(s) = Y(s)/U(s) = (b/m)/(s + 2b/m)
Ex 3.2. Let’s consider the following translational mechanical system. It is given that when the lever is in
vertical position, [x0 x1 x2 ] = 0 and springs are at their rest length positions.
1. Given that u(t) = x0 (t) and y(t) = x2 (t), find the ODE of the system dynamics.
Vi ≡ ẋi
Ii ≡ fi
Let's also compute V2(s)/V0(s) using node-voltage analysis in the impedance domain. Since the ideal transformer has the relation V1(s) = (l2/l1) V0(s), we have the following transfer function between V0(s) and V2(s)
V2(s)/V0(s) = (k1/m)(l2/l1) / (s² + (b/m) s + (k1 + k2)/m)
Obviously, this transfer function is equal to the G(s) computed directly from the mechanical system using positional variables.
4. Convert the derived ODE into a state-space form
We will solve the problem using a different approach. First, let's integrate the ODE twice
y = ∫ [ −(b/m) y + ∫ ( −((k1 + k2)/m) y + (k1/m)(l2/l1) u ) dt ] dt
Ex 3.3. Let’s consider the following gear system. Unlike ideal gear pair case, now each gear has its own
inertia, J1 and J2 , as well as both gears are affected by viscous friction, β1 and β2 , due to mechanical
contact with the environment.
1. Given that the external torque, τi, acting on the first gear is the input of the system, and the rotational speed of the second gear, ω2, is the output of the system, find the ODE of the gear-box dynamics.
First let’s draw the free-body diagrams of both gears separately and then write the equations of motion
for each body.
It can be seen that the resultant ODE is a first-order ODE. We can also consider the whole system as a single rotating body with an effective total inertia of JT = J2 + J1 (N2/N1)² and an effective total viscous friction of βT = β2 + β1 (N2/N1)².
Take Home Problem: Now let's assume that the output is y(t) = ω1(t); derive the ODE and re-compute the new effective inertia and viscous friction.
2. Compute the transfer function
G(s) = Y(s)/U(s) = (N2/N1) / [ (J2 + J1 (N2/N1)²) s + (β2 + β1 (N2/N1)²) ]
     = (N2/N1) / (JT s + βT)
     = (N2/N1)(1/JT) / (s + βT/JT)
Take Home Problem: Solve the electrical circuit and compute the transfer function G(s)
Ex 3.4. The figure below illustrates the model of the target system. Input of the system is the external
torque, u(t) = τ (t), acting on the R axis and the output of the system is the angular velocity of the Load,
i.e. y(t) = ωL (t)
Let the state vector be x = [x1; x2; x3] = [ωL; ωR; θR − θL]. Writing the differential equation on the rotor axis and changing variables gives
JR ẋ2 = u − DR x2 − κ x3
ẋ2 = (1/JR) u − (DR/JR) x2 − (κ/JR) x3
Now let's write the differential equation on the load axis and perform the same change of variables operation
JL ω̇L = κ(θR − θL) − DL ωL
ẋ1 = (κ/JL) x3 − (DL/JL) x1
We need one more state equation, which can be derived as
ẋ3 = d/dt (θR − θL) = ωR − ωL = x2 − x1
Based on our choice of state-definition, full state-space representation takes the form
ẋ(t) = [ −DL/JL , 0 , κ/JL ;
         0 , −DR/JR , −κ/JR ;
         −1 , 1 , 0 ] x(t) + [ 0 ; 1/JR ; 0 ] u(t)
y(t) = [1 , 0 , 0] x(t) + [0] u(t)
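As a numerical illustration (a minimal sketch; the parameter values below are assumed, not from the problem), the state-space model can be entered directly and converted to the transfer function from u = τ to y = ωL:

% Minimal sketch (assumed parameters): motor/load with torsional coupling,
% state x = [wL; wR; thetaR - thetaL].
JL = 0.01; JR = 0.005; DL = 0.1; DR = 0.05; kappa = 2;   % assumed values
A = [-DL/JL    0       kappa/JL;
      0       -DR/JR  -kappa/JR;
     -1        1       0       ];
B = [0; 1/JR; 0];
C = [1 0 0];  D = 0;
[num, den] = ss2tf(A, B, C, D)   % transfer function Y(s)/U(s)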
Ex 3.5. The mechanism illustration given below is an ideal belt-pulley mechanism. Fundamentally, it has
the same kinematic relations with a gear pair system. The only difference is that, the direction of motion is
preserved in a pulley system. r1 and r2 correspond to the radii of the first and second pulleys respectively.
ω1/r2 = ω2/r1
τ1/r1 = τ2/r2
You will analyze the following belt-pulley system consisting of three pulleys and two belts. The first pulley has a radius of r1 and an inertia of J1. The third pulley has a radius of r3 and an inertia of J3. The second pulley (in the middle) is connected to the first pulley through its outer disk, which has the same radius as the third pulley, i.e., r2,o = r3. The second pulley is also connected to the third pulley via its inner disk, which has the same radius as the first pulley, i.e., r2,i = r1. The second pulley has an inertia of J2 (the outer and inner disks move together). A linear rotational viscous damping with damping constant β2 also affects the motion of the second pulley.
Given that the external torque acting on the first pulley is the input, u(t) = τex (t), and the angular velocity
of the third pulley is the output, y(t) = ω3 (t), compute the transfer function of the system.
Let’s solve this problem using the concept of reflected inertia, damping, and torque.
If we reflect the variables and parameters of the first pulley to the second pulley, we obtain
Now, if we reflect the variables and parameters of the modified second pulley to the third pulley, we obtain
Ex 3.6. In some belt-pulley applications, ignoring the elasticity of the belt can be very crude and can lead
to substantial modeling errors. In order to overcome this problem, a very common method is modeling
the belt with a linear (translational) spring-damper as shown in the belt-pulley mechanism below. In this
mechanism, first pulley has a radius of r1 and inertia of J1 , where as the second pulley has a radius of r2
and inertia of J2 . The spring-mass dampers (above and below) that model the elasticity of the belt have
spring stiffnesses of k and damping constants of b.
Given that the external torque acting on the first pulley is the input, u(t) = τex (t), and the angular
displacement of the second pulley is the output, y(t) = θ2 (t),
1. Find a state-space representation of the dynamics. (Hint: You can choose your state variables as x = [θ1, θ̇1, θ2, θ̇2]ᵀ.)
Simplify both the state-space and transfer function representations using these numerical values. Fi-
nally convert the state-space form to the transfer function form and verify that converted transfer
function is equal to the previously computed one. (Hint: You can use MATLAB’s ss2tf command for
conversion).
We assume that when [θ1 θ2 θ̇1 θ̇2 ] = [0 0 0 0] the mechanism is at rest condition. Then let’s draw the
free-body diagrams
Fu = k(Δx1 − Δx2) + b (d/dt)(Δx1 − Δx2)
   = k(r1 θ1 − r2 θ2) + b(r1 θ̇1 − r2 θ̇2)
   = k r1 θ1 − k r2 θ2 + b r1 θ̇1 − b r2 θ̇2
Fb = k(−Δx1 + Δx2) + b (d/dt)(−Δx1 + Δx2)
   = k(−r1 θ1 + r2 θ2) + b(−r1 θ̇1 + r2 θ̇2)
   = −k r1 θ1 + k r2 θ2 − b r1 θ̇1 + b r2 θ̇2
J1 θ̈1 = τex − Fu r1 + Fb r1
      = τex + (−k r1² θ1 + k r2 r1 θ2 − b r1² θ̇1 + b r2 r1 θ̇2) + (−k r1² θ1 + k r2 r1 θ2 − b r1² θ̇1 + b r2 r1 θ̇2)
      = τex − 2k r1² θ1 + 2k r2 r1 θ2 − 2b r1² θ̇1 + 2b r2 r1 θ̇2
J2 θ̈2 = Fu r2 − Fb r2
      = (k r1 r2 θ1 − k r2² θ2 + b r1 r2 θ̇1 − b r2² θ̇2) + (k r1 r2 θ1 − k r2² θ2 + b r1 r2 θ̇1 − b r2² θ̇2)
      = 2k r1 r2 θ1 − 2k r2² θ2 + 2b r1 r2 θ̇1 − 2b r2² θ̇2
1. Let x = [θ1, θ̇1, θ2, θ̇2]ᵀ; then we can find a state-space representation
ẋ = [ 0 , 1 , 0 , 0 ;
      −2k r1²/J1 , −2b r1²/J1 , 2k r1 r2/J1 , 2b r1 r2/J1 ;
      0 , 0 , 0 , 1 ;
      2k r1 r2/J2 , 2b r1 r2/J2 , −2k r2²/J2 , −2b r2²/J2 ] x + [ 0 ; 1/J1 ; 0 ; 0 ] u
y = [0 , 0 , 1 , 0] x
In the Laplace domain, the equations of motion become
(J1 s² + 2b r1² s + 2k r1²) Θ1(s) = U(s) + (2b r2 r1 s + 2k r2 r1) Y(s)
(J2 s² + 2b r2² s + 2k r2²) Y(s) = (2b r1 r2 s + 2k r1 r2) Θ1(s)
Defining B1 = 2b r1², K1 = 2k r1², B2 = 2b r2², K2 = 2k r2², B12 = 2b r1 r2, and K12 = 2k r1 r2,
(J1 s² + B1 s + K1) Θ1(s) = U(s) + (B12 s + K12) Y(s)
(J2 s² + B2 s + K2) Y(s) = (B12 s + K12) Θ1(s)  →  Θ1(s) = [(J2 s² + B2 s + K2)/(B12 s + K12)] Y(s)
Y(s) { (J1 s² + B1 s + K1)(J2 s² + B2 s + K2)/(B12 s + K12) − (B12 s + K12) } = U(s)
Let Y(s)/U(s) = N(s)/D(s); then
3. The state-space representation with the given coefficients takes the form
ẋ = [ 0 , 1 , 0 , 0 ;
      −50 , −5 , 100 , 10 ;
      0 , 0 , 0 , 1 ;
      10 , 1 , −20 , −2 ] x + [ 0 ; 1 ; 0 ; 0 ] u
y = [0 , 0 , 1 , 0] x
A sample MATLAB code piece which converts the state-space form to the transfer function form (in terms of numerator and denominator coefficients) is provided below. It is clear that the computed coefficients match the previous ones.
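A minimal sketch of this conversion, using ss2tf on the numeric matrices above:

A = [0 1 0 0; -50 -5 100 10; 0 0 0 1; 10 1 -20 -2];
B = [0; 1; 0; 0];
C = [0 0 1 0];
D = 0;
[num, den] = ss2tf(A, B, C, D)   % numerator and denominator coefficients of Y(s)/U(s)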
The input and output of the system are u(t) = vi(t) and y(t) = vm(t), respectively. If we draw the free-body diagram of the mass and the kinematic relations of the lever, we obtain the following illustration
First let’s concentrate on the lever side and try to eliminate intermediate variables
2F1 = F2
2(vi − v1 ) = (v2 − vm )
v2 + 2v1 = 2vi + vm
v2 = (2/5) vi + (1/5) vm
The input and output of the system are u(t) = vi(t) and y(t) = vm(t), respectively. If we draw the free-body diagrams of the lever and the mass, and derive the force relations, we obtain
If we write the equations of motion for the lever and the mass, we obtain the following differential equations
Now let’s find a state-space representation for the given system. The ODE representation of the transfer
function has the following form
ÿ + 7ẏ + 11y = 2u
x˙1 = ẏ = x2
x˙2 = ÿ = −7ẏ − 11y + 2u = −7x2 − 11x1 + 2u
y = x1
The state representation with the chosen state definition takes the form
ẋ = [0, 1; −11, −7] x + [0; 2] u
y = [1, 0] x + [0] u
EE302 - Feedback Systems Spring 2019
Lecture 4
Lecturer: Asst. Prof. M. Mert Ankarali
Y(s)/U(s) = Ḡ(s) = G1(s) G2(s)    (series/cascade connection)
Y(s)/U(s) = Ḡ(s) = G1(s) + G2(s)    (parallel connection)
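These interconnection rules can be checked numerically (a minimal sketch; the first-order blocks below are assumed examples):

G1 = tf(1, [1 2]);       % G1(s) = 1/(s+2), assumed
G2 = tf([1 0], [1 3]);   % G2(s) = s/(s+3), assumed
Gser = G1 * G2;          % series connection, equals series(G1, G2)
Gpar = G1 + G2;          % parallel connection, equals parallel(G1, G2)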
4.1.2 Examples
Solution:
EE302 - Feedback Systems Spring 2019
Lecture 5
Lecturer: Asst. Prof. M. Mert Ankarali
The dependent and “independent” variables associated with the idealized DC motor model and important
relations/equations regarding the electro-mechanical interactions are given below.
Va : armature voltage
ia : armature current
Vf : field voltage
if : field current
Vb : back emf
ω : rotor angular velocity
τ : generated torque
Φ : air-gap magnetic flux
The important electro-mechanical relations are
Φ(t) = Kf if(t)
τ(t) = Km Φ(t) ia(t)
eb(t) = Kb ω(t)
Note that if both if (t) and ia (t) are non-constant the electric-motor model won’t be LTI. In order to have
an LTI representation, there are two options
The majority of “DC” motors are controlled (and indeed manufactured) with this approach. Either there is a permanent magnet which provides the constant Φ, or a constant current is supplied through the coils that generate the magnetic field.
Let's model the following electro-mechanical system where the DC motor is armature controlled, and given that y(t) = ω(t) and u(t) = Va(t).
G(s) = Ω(s)/Va(s) = Ka / (La J s² + (La β + Ra J) s + Ra β + Ka Kb)
Example 1: Given that Va is the input and θ is the output, construct a block-diagram for the following
electro mechanical system and then compute the transfer function.
Solution: A block diagram topology can be constructed by modifying the previous block diagram (armature
controlled DC motor without torsional spring).
Then the transfer function can be derived using block-diagram simplification methods as given on the next page.
In the field controlled DC motors, magnetic flux is actively controlled by adjusting electrical current/voltage.
We assume that Ia is constant (LTI constraints). Since, there is no “feedback” in this field controlled DC
motor model, the electrical circuit is isolated from the mechanical one.
Let’s model the following electro-mechanical system where the DC motor is field controlled and given that
y(t) = ω(t) and u(t) = Vf (t).
Ω(s)/T(s) = 1/(J s + β)
If(s)/Vf(s) = 1/(Lf s + Rf)
T(s) = Kf If(s)
where Kf denotes the lumped torque constant of the field-controlled motor. Finally, the transfer function can be computed as
G(s) = Ω(s)/Vf(s) = Kf / ((J s + β)(Lf s + Rf)) = Kf / (J Lf s² + (J Rf + β Lf) s + β Rf)
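As a quick check (a minimal sketch; the parameter values are assumed), the cascade structure of the field-controlled motor can be built block by block:

J = 0.02; beta = 0.1; Lf = 0.5; Rf = 10; Kf = 1.5;   % assumed values
Gmech  = tf(1, [J beta]);     % Omega(s)/T(s)
Gfield = tf(1, [Lf Rf]);      % If(s)/Vf(s)
G = Kf * Gmech * Gfield;      % Omega(s)/Vf(s), matches the expression above
step(G); grid on              % open-loop speed response to a unit step in Vf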
Example 2: Consider the following closed-loop field controlled electro-mechanical circuit. It is given that
θ∗ (t), i.e. reference angle signal, is the input and θ2 , angular displacement of the second gear, is the output.
In the system, there is an encoder which reads the angular displacement and sends it to a controller box.
The other input of this box is the reference signal. The box produces an output voltage, Vc = γ (θ∗ − θ2 ),
and feeds it to the input terminal of the Vf . Compute the transfer function.
Solution: A block diagram topology can be constructed by modifying the previous block diagram (field-controlled DC motor).
Let’s first find a transfer function from τ to ω2 and θ2 . The easiest way of computing this is using the
concept of reflected inertia, damping, and torque.
T̄ (s) n
Ω2 (s) = = 2 T (s)
JT s + βT (n JR + n2 J1 + J2) s + n2 β
nT (s)
Θ2 (s) =
JT s2 + βT s
We know that Laplace domain equations for remaining parts take the form
T(s)/Vf(s) = Kf/(Lf s + Rf)
Vf(s) = γ (Θ*(s) − Θ2(s))
Lecture 6
Lecturer: Asst. Prof. M. Mert Ankarali
When modeling and analyzing a closed system in addition to the desired reference input signal, it is also
important to model/analyze the system with unwanted disturbances and noise input signals. Let’s consider
the following block-diagram topology. In this closed-loop system, there exist three exogenous input signals;
r(t) (reference input), d(t) (“disturbance” input), and n(t) (“noise” input).
When modeling the response or characteristic of the system with respect to different external inputs, we
assume that remaining ones are zero.
Response to r(t):
TR(s) = Y(s)/R(s) = C(s)G(s)/(1 + C(s)G(s)H(s))
Response to d(t):
TD(s) = Y(s)/D(s) = G(s)/(1 + C(s)G(s)H(s))
Response to n(t):
TN(s) = Y(s)/N(s) = C(s)G(s)H(s)/(1 + C(s)G(s)H(s))
Lets roughly analyze the desired responses under different type of inputs. Let’s assume that G(s) is the
plant transfer function and H(s) is the sensory dynamics transfer function. C(s) is the transfer function of
the controller.
In the ideal case, we want
• Perfect tracking of the reference signal, TR*(s) ≈ 1. Since it is not possible to achieve this perfectly under dynamic system constraints, we can design a “high gain” controller such that
TR(s) ≈ C(s)G(s)/(C(s)G(s)H(s)) ≈ 1/H(s)
If H(s) ≈ 1, then we can have high tracking performance from the system.
• Perfect rejection of the disturbance signal, TD*(s) ≈ 0. Similarly, we can design a “high gain” controller such that
TD(s) ≈ G(s)/(C(s)G(s)H(s)) ≈ 0
It seems that the requirement on C(s) is similar for good tracking and good disturbance rejection.
• Perfect rejection of noise signal, TN∗ (s) ≈ 0. In this case, we can design a “low gain” controller (or low
gain H(s)) such that
TN(s) ≈ C(s)G(s)H(s) ≈ 0
It seems that the requirements on C(s) and H(s) start conflicting when we consider tracking performance, disturbance rejection, and noise rejection together. This trade-off is the most well-known limitation of feedback control systems. The basic idea is that one cannot concentrate only on designing a controller C(s) that reaches excellent closed-loop tracking performance when the system suffers from uncertainties and noise. Somehow we need to design G(s), H(s), and even N(s) and D(s) together such that the whole system achieves a “good” closed-loop behavior.
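To see this trade-off numerically (a minimal sketch; C, G, and H below are assumed examples), the three closed-loop transfer functions can be computed for a high-gain controller:

G = tf(1, [1 3 2]);        % plant, assumed
H = tf(1, [0.01 1]);       % sensor dynamics, assumed
C = tf(50, 1);             % high-gain proportional controller, assumed
TR = feedback(C*G, H);     % Y/R = CG/(1+CGH)
TD = feedback(G, C*H);     % Y/D = G/(1+CGH)
TN = feedback(C*G*H, 1);   % Y/N = CGH/(1+CGH)
[dcgain(TR) dcgain(TD) dcgain(TN)]   % tracking ~1, disturbance ~0, but noise gain ~1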
EE302 - Feedback Systems Spring 2019
Lecture 7
Lecturer: Asst. Prof. M. Mert Ankarali
Objective:
Simplest first order system is an integrator, which is also the fundamental block for higher order systems.
For a unit-step input, u(t), with U(s) = 1/s, the integrator output is
Y(s) = G(s)U(s) = 1/s²
y(t) = L⁻¹{1/s²} = t , for t ≥ 0
With a proportional controller K around the integrator, the closed-loop transfer function becomes
G(s) = K/(s + K)
Y(s) = G(s)U(s) = K/(s(s + K)) = A/s + B/(s + K)
We can compute A and B as
A = lim_{s→0} [s Y(s)] = K/K = 1
B = lim_{s→−K} [(s + K) Y(s)] = K/(−K) = −1
y(t) = L⁻¹{ 1/s − 1/(s + K) } = 1 − e^{−Kt} , for t ≥ 0
lim_{t→∞} y(t) = 1
For a unit-ramp reference input r(t), the error of the proportional-control loop is
e(t) = r(t) − y(t) = (1/K)(1 − e^{−Kt})
lim_{t→∞} e(t) = 1/K
Non-zero steady-state error
The steady-state error decreases as K increases.
EE302 - Feedback Systems Spring 2019
Lecture 8
Lecturer: Asst. Prof. M. Mert Ankarali
Let’s derive the transfer functions for the following electrical and mechanical systems
[Figure: three example systems. (A) series RLC circuit driven by Vs(t) with output VC; (B) mass-spring-damper (M, b, k) with force input F(t) and displacement output y(t); (C) rotational inertia-spring-damper (J, β, κ) with torque input T(t) and angle output θ(t)]
GA(s) = VC(s)/Vs(s) = (1/(LC)) / (s² + (R/L) s + 1/(LC))
GB(s) = Y(s)/F(s) = (1/M) / (s² + (b/M) s + k/M)
GC(s) = Θ(s)/T(s) = (1/J) / (s² + (β/J) s + κ/J)
Most (passive) second-order systems can be put into the following standard form
G(s) = KDC ωn² / (s² + 2ζωn s + ωn²)
where
0 < ωn Undamped natural frequency
0<ζ Damping ratio
KDC DC Gain
Accordingly, for the systems that we analyzed previously, we have the following relations
A : ωn = 1/√(LC) , ζ = (R/2)√(C/L) , KDC = 1
B : ωn = √(k/M) , ζ = b/(2√(Mk)) , KDC = 1/k
C : ωn = √(κ/J) , ζ = β/(2√(Jκ)) , KDC = 1/κ
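As a quick sanity check (a minimal sketch; the R, L, C values below are assumed), the standard-form parameters of system A can be computed and compared with MATLAB's damp:

R = 100; L = 0.1; C = 1e-6;              % assumed values
wn   = 1/sqrt(L*C);                      % undamped natural frequency
zeta = (R/2)*sqrt(C/L);                  % damping ratio
GA = tf(1/(L*C), [1 R/L 1/(L*C)]);       % VC(s)/Vs(s)
damp(GA)                                 % reports wn and zeta for each pole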
8.1.1 Step Response Types for the Second Order System in Standard Form
Given that G(s) = ωn²/(s² + 2ζωn s + ωn²), the poles can be computed as
s1,2 = −ζωn ± ωn √(ζ² − 1)
Case 1: When ζ = 0, the system becomes undamped and G(s) takes the form
G(s) = ωn²/(s² + ωn²)
Y(s) = G(s)U(s) = (1/s) · ωn²/(s² + ωn²) = 1/s − s/(s² + ωn²)
y(t) = 1 − cos(ωn t) for t > 0
Pole locations and step response when ζ = 0 (undamped), is illustrated in the Figure below
Case 2: When ζ = 1, the system becomes “critically” damped and G(s) takes the form
G(s) = ωn²/(s² + 2ωn s + ωn²) = ωn²/(s + ωn)²
Y(s) = G(s)U(s) = (1/s) · ωn²/(s + ωn)² = 1/s − 1/(s + ωn) − ωn/(s + ωn)²
y(t) = 1 − e^{−ωn t} − ωn t e^{−ωn t}
Pole locations and step response when ζ = 1 (critically damped), is illustrated in the Figure below
Case 3: When ζ > 1, the system becomes over damped and there exist two real roots
p1 = −ζωn + ωn √(ζ² − 1) > −ωn
p2 = −ζωn − ωn √(ζ² − 1) < −ωn
where it is easy to see that p1 p2 = ωn². Finally, we can compute the step response as
Y(s) = G(s)U(s) = p1 p2 / (s(s − p1)(s − p2))
y(t) = 1 + [p2/(p1 − p2)] e^{p1 t} − [p1/(p1 − p2)] e^{p2 t}
Pole locations and step response when ζ > 1 (over damped), is illustrated in the Figure below
Case 4: When 0 < ζ < 1, the system becomes underdamped and there exist two complex conjugate roots.
p1,2 = −ζωn ± jωn √(1 − ζ²) ,   |p1,2| = ωn
Let σ = ζωn and ωd = ωn √(1 − ζ²) (which is called the damped natural frequency); then we know that the general solution of the ODE takes the form
y(t) = yp(t) + e^{−σt} [C1 cos(ωd t) + C2 sin(ωd t)]
Steady-state conditions lead to yp(t) = 1. Then we can compute the remaining coefficients from the zero initial condition constraints
y(0) = 0 → C1 = −1
ẏ(0) = 0 → d/dt { e^{−σt} [−cos(ωd t) + C2 sin(ωd t)] } |_{t=0} = 0
−σ e^{−σt} [−cos(ωd t) + C2 sin(ωd t)] + e^{−σt} [ωd sin(ωd t) + C2 ωd cos(ωd t)] |_{t=0} = 0
σ + C2 ωd = 0
C2 = −σ/ωd = −ζ/√(1 − ζ²)
y(t) = 1 − e^{−σt} [ cos(ωd t) + (ζ/√(1 − ζ²)) sin(ωd t) ]
If we combine the cos and sin terms into a single sin with a phase shift, we obtain
y(t) = 1 − (e^{−σt}/√(1 − ζ²)) [ √(1 − ζ²) cos(ωd t) + ζ sin(ωd t) ]
     = 1 − (e^{−ζωn t}/√(1 − ζ²)) sin(ωd t + φ) ,   t ≥ 0
where sin φ = √(1 − ζ²) and cos φ = ζ, i.e. tan φ = √(1 − ζ²)/ζ (φ = cos⁻¹ ζ).
Pole locations and step response when 0 < ζ < 1 (underdamped) are illustrated in the Figure below
Important transient characteristics and performance metrics for 2nd order underdamped systems are illus-
trated in the Figure below.
[Figure: step response annotated with maximum overshoot MP, rise time tr, peak time tp, and settling times ts,5 (5%) and ts (2%)]
Rise Time (tr): The first time instant the response intersects the y = 1 line.
y(tr) = 1
1 = 1 − (e^{−ζωn tr}/√(1 − ζ²)) sin(ωd tr + φ)
π = ωd tr + φ
tr = (π − φ)/ωd
Peak Time (tp ): The first time instant the response makes a peak
dy/dt |_{t=tp} = 0
d/dt { −(e^{−ζωn t}/√(1 − ζ²)) [ √(1 − ζ²) cos(ωd t) + ζ sin(ωd t) ] } |_{tp} = 0
[ ζωn e^{−ζωn t} ( √(1 − ζ²) cos(ωd t) + ζ sin(ωd t) ) − e^{−ζωn t} ( −ωd √(1 − ζ²) sin(ωd t) + ζ ωd cos(ωd t) ) ] |_{tp} = 0
[ ζ√(1 − ζ²) cos(ωd t) + ζ² sin(ωd t) ] − [ −(1 − ζ²) sin(ωd t) + ζ√(1 − ζ²) cos(ωd t) ] |_{tp} = 0
[ ζ² sin(ωd t) + (1 − ζ²) sin(ωd t) ] |_{tp} = 0
sin(ωd tp) = 0
tp = π/ωd
Maximum Overshoot (Mp ): The maximum amount by which the response exceeds the value 1.
Mp = y(tp) − 1
   = [ 1 − (e^{−ζωn t}/√(1 − ζ²)) ( √(1 − ζ²) cos(ωd t) + ζ sin(ωd t) ) ] |_{tp} − 1
   = −(e^{−ζωn π/ωd}/√(1 − ζ²)) · √(1 − ζ²) · (−1)
Mp = e^{−πζ/√(1 − ζ²)} = e^{−π/tan φ}
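These formulas can be evaluated numerically and compared against a simulated step response (a minimal sketch; wn and zeta are assumed values):

wn = 5; zeta = 0.4;                          % assumed values
wd  = wn*sqrt(1 - zeta^2);  phi = acos(zeta);
tr = (pi - phi)/wd;                          % rise time (first crossing of y = 1)
tp = pi/wd;                                  % peak time
Mp = exp(-pi*zeta/sqrt(1 - zeta^2));         % maximum overshoot
ts = 4/(zeta*wn);                            % 2% settling-time approximation
G  = tf(wn^2, [1 2*zeta*wn wn^2]);
stepinfo(G)                                  % numerical metrics for comparison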
Example 1:
In this problem, we perform four different pole re-location cases. During the re-locations we keep some parameters constant.
For each of the four cases, explain what happens to the rise time, peak time, maximum overshoot, and settling time.
[Block diagram: PD-type control of a double-integrator plant; forward path KP → 1/s → 1/s with inner rate feedback KD, reference r(t), output y(t)]
Design KP and KD gains such that, maximum percent overshoot is less than %4.32, and settling time (%2)
is less than 1 s.
Solution: Let's compute the closed-loop transfer function. In order to do that, first derive the transfer function from E(s) to Y(s), which is called the feed-forward transfer function.
Y(s)/E(s) = KP · [ (1/s) / (1 + KD/s) ] · (1/s) = KP / (s(s + KD))
G(s) = [ KP/(s(s + KD)) ] / [ 1 + KP/(s(s + KD)) ] = KP / (s² + KD s + KP)
We can see that with KP and KD gains we have total control on the characteristic equation. Now let’s
analyze the performance requirements.
The first requirement states that the maximum percent overshoot is less than %4.32, which means MP < 0.0432. Let's find a condition on ζ or φ:
MP = e^{−πζ/√(1 − ζ²)} = e^{−π/tan φ} < 0.0432 = e^{−π}
−π/tan φ < −π
tan φ < 1
[Figure: s-plane region satisfying the overshoot requirement, poles within ±45° of the negative real axis]
The second requirement states that the settling time (%2) is less than 1 s, which implies
ts,2 = 4/(ζωn) = 4/σ < 1
σ > 4
The region of the s-plane that satisfies the settling time requirement is Re{s} < −4. If we combine the requirements, we obtain the region of possible pole locations: Re{s} < −4 and within ±45° of the negative real axis.
Based on these requirements, let's choose p1,2 = −5 ± 4j as the desired pole locations. We can then compute the desired characteristic equation and find the associated controller gains as
(s + 5 − 4j)(s + 5 + 4j) = s² + 10s + 41 = s² + KD s + KP  ⇒  KD = 10 , KP = 41
If we plot the step-response, we can illustrate the performance and check if we can meet the requirements.
[Figure: step response of the designed closed-loop system; MP ≈ 0.02 and ts ≈ 0.56 s, which meet the requirements]
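The design can be verified directly (a minimal sketch using the gains computed above):

KP = 41; KD = 10;
T = tf(KP, [1 KD KP]);          % closed-loop transfer function KP/(s^2 + KD s + KP)
S = stepinfo(T);                % default settling-time threshold is 2%
[S.Overshoot, S.SettlingTime]   % overshoot (%) and settling time (s)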
• Since y(t) only crosses the y = 1 as t → ∞, tr definition is not applicable for over-damped case. Instead,
a different rise time definition can be used (which is applicable for both over-damped, under-damped,
critically-damped systems, as well as first order systems). t̄r is the time for y(t) to go from 0.1 to 0.9.
It is pretty hard to compute this time analytically, thus in general numerical and/or graphical methods
are used.
Rise-time concept is illustrated in the figure below, for an example system.
In general, the poles closer to the jω axis determine the behavior of the system. If the “distance” between
the poles that are close to the jω axis and other poles is high, then they dominate the behavior and we call
them dominant poles. For example, figure given below illustrates a third order system, where we have two
complex conjugate roots and one real root. In this case, the magnitude of the real pole is more than three
times of the magnitude of the real part of the complex conjugate poles, thus the systems acts like a second
order system.
EE302 - Feedback Systems Spring 2018
Lecture 9
Lecturer: Asst. Prof. M. Mert Ankarali
The fundamental concept that we need in order to perform steady-state response analysis of a control system is the final value theorem. Given a continuous-time signal x(t) and its Laplace transform X(s), if x(t) is a convergent signal, the final value theorem states that
lim_{t→∞} x(t) = lim_{s→0} s X(s)
The most important steady-state performance condition for a control system is the tracking performance
under steady-state conditions. Let’s consider the following fundamental feedback topology.
In order to achieve good tracking performance, the error signal e(t) obviously needs to be small. Accordingly, the steady-state tracking performance is determined by the steady-state error of the closed-loop system, which we can compute using the final value theorem as
ess = lim_{t→∞} e(t) = lim_{s→0} s E(s)
Let’s compute E(s)/R(s), i.e. transfer function from the reference input to the error signal,
Note that G(s)H(s) is the transfer function from the error signal E(s) to the signal which is fed to the negative terminal of the main difference operator, i.e. F(s). This transfer function is called the feed-forward or open-loop transfer function of the closed-loop control system. For this system,
F(s)/E(s) = GOL(s) = G(s)H(s)
Then E(s) can be written as
E(s) = [ 1/(1 + GOL(s)) ] R(s)
It is obvious that first requirement on steady-state error performance is that closed-loop system have to be
stable. Now let’s analyze specific but fundamental input scenarios.
Unit-Step Input
We know that r(t) = h(t) (unit step) and R(s) = 1/s; then we have
ess = lim_{s→0} s R(s) · 1/(1 + GOL(s))
    = lim_{s→0} s · (1/s) · 1/(1 + GOL(s))
ess = 1/(1 + lim_{s→0} GOL(s))
If the DC gain of the open-loop system (also called the static error constant) is finite, i.e. lim_{s→0} GOL(s) = KDC, then the steady-state error can be computed as
ess = 1/(1 + KDC)
It is obvious that
ess ≠ 0 if |KDC| < ∞
ess → 0 if KDC → ∞
At this point, it could be helpful to introduce the concept of system type, to generalize the steady-state error
analysis.
Definition: Let’s write the open-loop transfer function of a closed-loop system in the following standard
form
GOL(s) = (K/s^N) · (b0 s^m + ··· + b_{m−1} s + 1)/(a0 s^n + ··· + a_{n−1} s + 1)
The closed-loop system is called a Type N system, where N is the number of integrators in the open-loop transfer function (OLTF).
Based on these results, we can reach the following conclusions regarding the steady-state error for a unit-step input
• If GOL(0) = ∞, then ess = 0. In other words, for Type N > 0 systems, the steady-state error is perfectly zero.
• Type N ≤ 0: ess = 1/(1 + KP), where KP = lim_{s→0} GOL(s) is the static position error constant.
Unit-Ramp Input
We know that r(t) = t h(t) and R(s) = 1/s²; then we have
ess = lim_{s→0} s R(s) · 1/(1 + GOL(s))
    = lim_{s→0} s · (1/s²) · 1/(1 + GOL(s))
    = 1 / lim_{s→0} [ s GOL(s) ]
ess = 1 / lim_{s→0} [ (K/s^{N−1}) · (b0 s^m + ··· + b_{m−1} s + 1)/(a0 s^n + ··· + a_{n−1} s + 1) ]
Based on this result, we can have the following steady-state error conditions for the unit-ramp input based on the Type of the system:
• Type N ≤ 0: ess = ∞
• Type 1: ess = 1/Kv , where Kv = lim_{s→0} s GOL(s) is the static velocity error constant
• Type N ≥ 2: ess = 0
Example 1: Compute GOL(s) for the following closed-loop system and identify its Type. After that, compute the steady-state errors to unit-step, unit-ramp, and unit-quadratic inputs.
[Block diagram: the same double-integrator loop used in Lecture 8, with proportional gain KP and inner rate-feedback gain KD]
Solution:
GOL(s) = KP/(s(s + KD))
Type 1 , Kv = KP/KD
Then the steady-state errors are computed as
• Unit-step: ess = 0
• Unit-ramp: ess = KD/KP
• Unit-acceleration: ess = ∞
Example 2: Compute GOL(s) for the following closed-loop system and identify its Type. After that, compute the steady-state errors to unit-step, unit-ramp, and unit-quadratic inputs.
Solution:
GOL(s) = (KP + KD s)/s²
Type 2 , Ka = KP
• Unit-step: ess = 0
• Unit-ramp: ess = 0
• Unit-acceleration: ess = 1/Ka
Example 3: Compute GOL(s) for the following closed-loop system and identify its Type. After that, compute the steady-state errors to unit-step, unit-ramp, and unit-quadratic inputs.
Solution:
GOL(s) = s/((s + 1)(s + 10))
Type −1 , KP = 0
• Unit-step: ess = 1
• Unit-ramp: ess = ∞
• Unit-acceleration: ess = ∞
Example 4: Compute the steady-state error to unit-step input for the following system.
Bad Solution:
GOL(s) = KP/s²
Type 2
ess = 0 ??????
The error function takes the form e(t) = cos(√KP t), which does not have a limit; i.e., there is no ess. If the closed-loop transfer function has poles on the imaginary axis, then we cannot apply the final value theorem.
Example 5: Compute the steady-state error to unit-step input for the following system when KP = 1.
In conclusion, if the closed-loop transfer function has poles on the imaginary axis or in the open right half-plane, then we cannot apply the final value theorem.
When analyzing the steady-state response of a system in addition to the desired response to the reference
input, it is also important to analyze the response to unwanted disturbances and noises.
Let’s analyze the steady-state performance of the following topology which is perturbed by a disturbance
input, d(t).
In order to analyze the response to the disturbance d(t), we assume r(t) = 0 (which is fine due to linearity). Let's first find the transfer function from D(s) to Y(s).
TD(s) = Y(s)/D(s) = G(s)/(1 + C(s)G(s)H(s)) = G(s)/(1 + GOL(s))
Note that Y(s) depends on both GOL(s) (the OLTF) and G(s) (the plant TF). If one wants to generalize the steady-state disturbance rejection performance, he/she needs to analyze the conditions on both GOL(s) and G(s). Moreover, for a different topology and type of disturbance, we can have very different conditions. For this reason, in order to analyze steady-state disturbance/noise rejection performance, it is better to start from fundamentals and apply the final value theorem.
Example 6: The following closed-loop system is affected by a disturbance input d(t). Compute the steady-
state performance/response to a unit step disturbance input.
[Block diagram: the closed-loop system of Example 1 (gains KP and KD) with an additional disturbance input d(t)]
Lecture 10
Lecturer: Asst. Prof. M. Mert Ankarali
PID
A PID controller has the following forms in time and Laplace domain
u(t) = KP e(t) + KD (d e(t)/dt) + KI ∫₀ᵗ e(τ) dτ
U(s) = ( KP + KD s + KI/s ) E(s)
s
In this lecture we will analyze the effects of the PID coefficients on the transient and steady-state performance of a 2nd order plant.
Let's assume that H(s) = 1 and G(s) is a second order transfer function in the general form
G(s) = ωn²/(s² + 2ζωn s + ωn²)
GOL(s) = KP ωn²/(s² + 2ζωn s + ωn²)
Type 0 , Kp = KP
Thus the steady-state errors for unit-step and unit-ramp inputs can be found as
• Unit step: ess = 1/(1 + KP) , i.e. KP ↑ ⇒ ess ↓
• Unit ramp: ess = ∞
To sum up, higher KP provides better steady-state performance. Now let’s analyze transient performance.
T(s) = Y(s)/R(s) = KP ωn² / (s² + 2ζωn s + (1 + KP)ωn²)
ω̄n = √(1 + KP) ωn
ζ̄ = ζ/√(1 + KP)
We can see that
KP ↑ ⇒ ω̄n ↑
KP ↑ ⇒ ζ̄ ↓
If the plant is an over-damped system (i.e. ζ > 1), then increasing ωn and decreasing ζ should have a positive net effect on the closed-loop performance.
On the other hand, if the plant is an under-damped system, we can observe the following relations
ω̄d = ω̄n √(1 − ζ̄²) = ωn √(1 + KP) · √(1 − ζ²/(1 + KP)) = ωn √(1 + KP − ζ²)
ζ̄ ω̄n = (ζ/√(1 + KP)) · ωn √(1 + KP) = ζωn
We can see that
KP ↑ ⇒ ω̄d ↑
KP ↑ ⇒ ζ̄ω̄n = ζωn (unchanged)
In other words, the real part of the complex conjugate poles is unchanged, yet the imaginary part moves away from the real axis. From previous lectures we know that in this scenario
KP ↑ ⇒ Mp ↑
In conclusion, if the plant is an under-damped system (i.e. ζ < 1), then increasing the P gain has a negative effect on the (transient) closed-loop performance.
Example 1: Let G(s) = 1/((s + 0.5)(s + 5)) (an over-damped plant), then compute the unit-step steady-state error for KP = 2 and KP = 5.
e2 = 1/(1 + 2/2.5) ≈ 0.55 (%55)
e5 = 1/(1 + 5/2.5) ≈ 0.33 (%33)
Obviously, the steady-state performance is better with KP = 5 compared to KP = 2. Now compute the closed-loop poles for KP = 2 and KP = 5, and estimate the associated settling times (%2).
T2(s) = 2/(s² + 5.5s + 4.5)  →  p1 = −1 , p2 = −4.5 ,  ts ≈ 4 s
T5(s) = 5/(s² + 5.5s + 7.5)  →  p1 = −2.5 , p2 = −3 ,  ts ≈ 1.6 s
We can see that KP = 5 provides a better transient performance compared to KP = 2. Now let’s draw
step-responses and verify these observations
[Figure: closed-loop unit-step responses; KP = 2 settles in ts ≈ 4.1 s, KP = 5 settles in ts ≈ 2.1 s]
We verify that both the steady-state and transient performance improve with larger KP. However, we can also see that the settling time estimate for KP = 5 has a larger error, which is expected since the poles are close to each other and thus violate the dominant-pole assumption.
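The comparison in Example 1 can be reproduced numerically (a minimal sketch):

G  = tf(1, conv([1 0.5], [1 5]));
T2 = feedback(2*G, 1);  T5 = feedback(5*G, 1);
[1 - dcgain(T2), 1 - dcgain(T5)]        % unit-step steady-state errors
S2 = stepinfo(T2); S5 = stepinfo(T5);
[S2.SettlingTime, S5.SettlingTime]      % 2% settling times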
Example 2: Let G(s) = 1/(s² + 4s + 5) (an under-damped plant), then compute the unit-step steady-state error for KP = 3 and KP = 8.
e3 = 1/(1 + 3/5) = 0.625 (%62.5)
e8 = 1/(1 + 8/5) ≈ 0.385 (%38.5)
Obviously, the steady-state performance is better with KP = 8 compared to KP = 3. Now compute the closed-loop poles for KP = 3 and KP = 8, and estimate the associated maximum overshoots
T3(s) = 3/(s² + 4s + 8)  →  p1,2 = −2 ± 2j ,  MP = e^{−π/tan φ} = e^{−π} ≈ 0.04 (%4)
T8(s) = 8/(s² + 4s + 13)  →  p1,2 = −2 ± 3j ,  MP = e^{−π/tan φ} = e^{−2π/3} ≈ 0.12 (%12)
We can see that KP = 3 provides a better transient performance compared to KP = 8, since both the oscillation frequency and the overshoot increase with a higher P gain. Now let's draw the step responses and verify these observations.
[Figure: closed-loop unit-step responses; KP = 3 gives PMP = %4, KP = 8 gives PMP = %12]
We verify that the steady-state performance is better with a larger P gain. However, the transient performance is worse with a larger P gain. This implies that if the plant is under-damped, then a P controller has some serious limitations.
C(s) = KP + KD s
Let’s first analyze the a↵ect of KD term on steady-state performance on the same case (2nd order plant in
standard form)
KP !n2 + KD !n2 s
GOL (s) =
s2 + 2⇣!n s + !n2
T ype 0 Kp = KP
Thus steady-state error for unit-step and unit-ramp inputs can be find as
1
• Unit step: ess = 1+K P
, i.e. Kp % ) ess &
Unit ramp: ess = 1
Obviously, KD has no effect on the steady-state performance. Now let's analyze the closed-loop transfer function
T(s) = (KD ωn² s + KP ωn²)/(s² + (2ζωn + KD ωn²) s + (1 + KP)ωn²)
KP ↑ ⇒ ω̄n ↑ , ζ̄ ↓
KD ↑ ⇒ ω̄n unchanged , ζ̄ ↑
In other words, for a second-order system we have full control over the closed-loop pole locations with a PD control policy. Since a high KP is required/preferred for steady-state performance (which causes the system to have overshoot and oscillatory behavior), the KD term can be used to suppress oscillations and overshoot. Note that in the closed-loop transfer function the numerator has a zero due to the KD ωn² s term, which implies that the closed-loop transfer function is not in standard 2nd order form. One should note that the existence of this closed-loop zero can affect the accuracy of our closed-loop transient performance metric calculations (most probably the deviations will be minor).
Example 3: Let G(s) = 1/(s² + 4s + 5) (an under-damped plant). Design a PD controller such that the steady-state error to a unit-step input is around %20 and the maximum percentage overshoot is less than %4.
Solution: We first design KP based on the steady-state requirement, then choose KD based on the overshoot requirement.
ess = 1/(1 + KP/5) = 0.2
KP = 20
T(s) = (20 + KD s)/(s² + (4 + KD)s + 25)
Let KD = 4; then the closed-loop transfer function and the associated poles are computed as
T(s) = (20 + 4s)/(s² + 8s + 25)
p1,2 = −4 ± 3j
PMP = %100 · e^{−π/tan φ} = %100 · e^{−4π/3} ≈ %1.5 < %4
Let’s plot the step-response of the resultant system and check if we can meet the specifications.
[Figure: step response of the resultant closed-loop system; the observed overshoot is PMP ≈ %5]
We can observe that the computed overshoot is %5, which is indeed higher than the requirement. Moreover, the gap between the estimated and numerically computed overshoot is around %3.5. Obviously, the KD s = 4s term in the numerator affects the output behavior, and its effect should be largest when 0 < t < ts. In order to see the effect of the KD s = 4s term, let's compare the step responses of the following transfer functions
T(s) = (20 + 4s)/(s² + 8s + 25)
T̂(s) = 20/(s² + 8s + 25)
T (s) and T̂ (s) share the same poles and DC gain, but T̂ (s) is in standard form, thus has no zeros.
[Figure: step responses of T(s) and T̂(s)]
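A minimal sketch for reproducing this comparison:

T    = tf([4 20], [1 8 25]);   % with the closed-loop zero
That = tf(20,     [1 8 25]);   % same poles and DC gain, no zero
step(T, That); grid on
stepinfo(T), stepinfo(That)    % the zero increases the observed overshoot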
We can see that there exist non-negligible differences between the two transfer functions, which clearly shows that the numerator dynamics can substantially affect the response. In this context, we should regard the transient performance metrics and the associated approximate estimation formulas as heuristics that guide the design of controllers.
Practical use of an Integral controller (alone) is extremely rare; however, we will analyze this case to better understand the effect of integral action in the more useful PI and PID topologies.
In the pure integral controller, C(s) takes the form
C(s) = KI/s
Let’s first analyze the steady-state performance where the plant is a 2nd system in standard
KI !n2
GOL (s) =
s(s2 + 2⇣!n s + !n2 )
T ype 1 K v = KI
Thus steady-state error for unit-step and unit-ramp inputs can be find as
Basic idea is very clear, Integral action increases the type of the system by introducing an extra pole at
the origin (also increases the total system order). Thus for a Type 0 plant, it completely eliminates the
steady-state error for step-like inputs, and provides a constant steady-state error for ramp-like inputs.
Let’s compte the closed-loop transfer function
KI !n2
T (s) =
s3 + 2⇣!n s2 + !n2 s + KI !n2
The closed loop system is now a third order system and thus harder to analyze (we need new analysis tools).
Moroever, we have only one paremeter KI and the closed-loop system has three poles. One may easily guess
that it is very hard to obtain a good performance with a pure Integral controller. Sometimes it bay be even
quite hard to obtain a stable behavior.
Example 4: Let's analyze the influence of an I controller on a first-order plant in order to better understand the positive and negative effects of I action. Let G(s) = 1/(s + 2).
The steady-state value of y(t) (for a unit-step input) for the uncontrolled plant is yss = 1/2, thus we can say that the steady-state error under a unit-step input is ess = 0.5, whereas for a unit ramp it is easy to show that ess = ∞. On the other hand, we can estimate the settling time (%2) for the uncontrolled plant as ts ≈ 2.
Now let's first analyze the steady-state performance under C(s) = KI/s,
GOL(s) = KI/(s(s + 2))
Type 1 , Kv = KI/2
Obviously, the steady-state performance improvement is significant (structurally). Now, let's compute the closed-loop transfer function
T(s) = KI/(s² + 2s + KI)
Now let’s analyze the transient performance (settling time and maximum overshoot) for KI = 1 and KI = 2
and compre them w.r.t. uncontrolled plant
KI = 1 ! ⇣ = 1 , p1,2 = 1 ! MP = 0 , ts ⇡ 4s
p
KI = 2 ! ⇣ = 1/ 2 , p1,2 = 1 + ±j ! MP = 0.04 , ts ⇡ 4s
We can clearly see that integral action has a negative e↵ect on transient performance. KI = 1 case has
a worse settling time value than the original plant¡ Moroever we start to observe over-shoot at the output
when we increase integral gain to KI = 2 case.
In order to illustrate these analytic observations, we plotted the step responses of original plant, closed-loop
system with KI = 1 and closed-loop system with KI = 2.
[Figure: step responses of the original plant (ts ≈ 1.9 s), the closed loop with KI = 1 (ts ≈ 5.8 s), and the closed loop with KI = 2 (ts ≈ 4.2 s, PMP = %4)]
It is clear that the I controller eliminates the steady-state error, but we can also observe that the transient performance is substantially degraded. In both cases the settling time is worse than the original plant, and for the KI = 2 case, overshoot is clear from the figure. One interesting result is that the settling time for KI = 1 (critically damped) is worse than for KI = 2 (under-damped) even though the approximate formula provides the same estimate. The reason is that for the critically damped case, the approximation underestimates the settling time.
In this context, we can conclude that a little bit of overshoot could be good for the closed-loop system from the perspective of settling time.
In other words, PI and I controllers have the same steady-state performance characteristics. Now, let's compute the closed-loop transfer function
T(s) = (KP ωn² s + KI ωn²)/(s³ + 2ζωn s² + (1 + KP)ωn² s + KI ωn²)
In this case, the closed-loop transfer function has three poles, and we have two parameters for tuning. Even though it provides a much better framework than an I controller, we still need different tools to tune the KI and KP gains.
Example 5: Let's compare PI and I controllers using the same plant as in the previous example, G(s) = 1/(s + 2). We know that the PI controller has the transfer function form C(s) = KP + KI/s, and the steady-state error characteristics can be derived as
GOL(s) = (s KP + KI)/(s(s + 2))
Type 1 , Kv = KI/2
Unit step: ess = 0
Unit ramp: ess = 1/Kv = 2/KI
which are exactly the same as for the I controller. Now, let's compute the closed-loop transfer function
T(s) = (KP s + KI)/(s² + (2 + KP)s + KI)
Now let's choose KI = 8 and KP = 4, and estimate the maximum overshoot and settling time for the closed-loop plant.
p1 = −2 , p2 = −4 → MP = 0 , ts ≈ 2 s
These PI gains can match the settling time of the original plant without any overshoot (since the closed-loop TF is over-damped).
Let’s illustrate this analytic observations, by plotting the step responses of the closed-loop system with only
integral controller with KI = 2, the closed-loop system with only integral controller with KI = 8, and the
closed-loop system with PI controller with KP = 4 , KI = 8.
PMP =% 30
PMP =% 4 ts,I ≈ 4 s
1
KI = 4
KI = 8
KP = 4 , KI = 8
0 ts,PI = 1 s 2 4 6
Time (seconds)
In this illustration, we can see that the settling time for both I controllers is around 4 s; however, the settling time for the PI controller is 1 s, which is even better than our estimate. The gap between the settling time estimates comes from the effect of the zero introduced by the PI controller. If one carefully analyzes the closed-loop transfer function, he/she can see that a pole-zero cancellation occurs (which may not be a good feature for practical reasons). Technically, the closed-loop transfer function is reduced to a first-order system in this case. We also see that when the PI and I controllers have the same KI gain, the overshoot of the I controller is %30, which is very high.
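The cancellation can be made explicit with minreal (a minimal sketch using the gains above):

G = tf(1, [1 2]);
C = tf([4 8], [1 0]);      % C(s) = 4 + 8/s
T = feedback(C*G, 1);      % (4s + 8)/(s^2 + 6s + 8)
Tr = minreal(T)            % cancels (s + 2), leaving Tr(s) = 4/(s + 4)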
The PID controller technically combines the advantages of the PD and PI controllers, with the trade-off of increased parameter and implementation complexity. We know that the PID controller has the following transfer function form
C(s) = KP + KD s + KI/s
Let’s first analyze the steady-state performance again where the plant is a 2nd system in standard form
(KD s2 + KP s + KI )!n2
GOL (s) =
s(s2 + 2⇣!n s + !n2 )
T ype 1 Kv = KI
Thus steady-state error for unit-step and unit-ramp inputs can be find as
In other words, PID, PI, and I controllers share the same steady-state performance characteristics. Now, let's compute the closed-loop transfer function
T(s) = (KD s² + KP s + KI)ωn²/(s³ + (2ζωn + KD ωn²)s² + (1 + KP)ωn² s + KI ωn²)
We can see that the closed-loop transfer function has three poles, and we have three parameters to tune. If we have no limits on the gains, we can place the closed-loop poles at any desired locations. However, the numerator now has two zeros, thus it is harder to predict the effect of the closed-loop zeros on the output behavior.
Example 4: Let G(s) = 1/(s² + 4s + 5). Design a PID controller such that we observe zero unit-step steady-state error, the maximum percent overshoot is less than %5, and the settling time is around 1 s.
Solution: We already know that a PID controller for a Type 0 plant completely eliminates the unit-step steady-state error. Now let's compute the closed-loop transfer function
T(s) = (KD s² + KP s + KI)/(s³ + (4 + KD)s² + (5 + KP)s + KI)
One way of choosing appropriate pole locations for a third-order closed-loop system is placing one of the poles (a real one) far away from the other poles such that the closed-loop system shows second-order-like behavior. We require a maximum overshoot of %5 and a settling time of around 1 s. Let the dominant poles of the closed-loop system be
p1,2 = −4 ± 3j
The settling time and maximum overshoot associated with these closed-loop poles can be estimated as
ts ≈ 1 s
MP ≈ e^{−4π/3} ≈ 0.015
These estimates satisfy the requirements. Let p3 = 3·(−4) = −12; then the desired characteristic equation takes the form
(s + 12)(s² + 8s + 25) = s³ + 20s² + 121s + 300
Matching the coefficients with s³ + (4 + KD)s² + (5 + KP)s + KI gives
KP = 116
KI = 300
KD = 16
The first thing we can observe is that the quantitative values of the gains are much larger than the gains we used before. In general, if we want to improve both the steady-state and transient characteristics by implementing a PID topology instead of P, PI, or PD, the gain values go up, which may cause practical problems and can potentially be costly in terms of energetic performance.
Now let’s plot the step response of the closed-loop system and try to verify these analytic observations
PMP = %11
ts = 0.6 s
0
0 0.2 0.6 1
Time (seconds)
We can see that the settling time in numerical simulation is much better than our estimate 0.6s < 1s,
however numerical percentage over-shoot is higher than our estimate, %11 > %1.5, and does not meet our
specifications. The core reason behind this is that it is basically harder to tune a PID controller compared
to P, PD, and PI controllers due to increased parametric complexity and (may be more importantly) 2nd
numerator dynamics introduced with the PID control policy.
EE302 - Feedback Systems Spring 2019
Lecture 11
Lecturer: Asst. Prof. M. Mert Ankarali
11.1 Stability
A SISO system is called BIBO (bounded-input–bounded-output) stable, if the output will be bounded for
every input to the system that is bounded.
Rt
If we use the impulse response representation for an LTI SISO system, i.e.. y(t) = 0 g(t)u(t − τ )dτ (we
assume that system is causal), the system is BIBO stable if and only if its impulse response is absolutely
integrable.
Z ∞
BIBO Stable ⇔ |g(t)|dt < B < ∞
0
Y (s) N (s)
= G(s) =
U (s) D(s)
In this context, a rational transfer function representation is BIBO stable if and only if, the poles of G(s)
(or roots of D(s) ) are strictly located in the open left half s−plane.
1
Ex: Show that G(s) = s is not BIBO stable
Solution: Let’s first check the impulse response condition
g(t) = 1, t ≥ 0
Z t Z t
lim |g(τ )|dτ = lim 1dτ = ∞
t→∞ 0 t→∞ 0
Thus the system is BIBO unstable. The system has a single pole at the origin, so we already know that it
is BIBO unstable. Now, let’s find a specific bounded input such that output is unbounded. Let u(t) be the
unit-step input then
1
Y (s) = ⇒ y(t) = t , t ≥ 0
s2
lim |y(t)| = ∞
t→∞
11-1
11-2 Lecture 11
1
Ex: Let G(s) = s2 +1 . Find a bounded input, such that output is unbounded. Solution: Let u(t) =
cos t , t ≥ 0, then
s
Y (s) = G(s)U (s) =
(s2 + 1)2
y(t) = t sin t , t ≥> 0
In order to gain some intuition about how to check stability of general rational LTI systems, we will analyze
first and second order systems.
The transfer function of a first order system has the form
D(s) = a0 s + a1
a0 > 0, w.l.g
The single pole of the system and associated stability condition can be derived as
a1
p=−
a0
Stable ⇔ a1 > 0
Now lets analyze second order systems. Transfer function of a second order system has the following form
D(s) = a0 s2 + a1 s + a2
a0 > 0, w.l.g
Sign[D(0)] < 0, one pole is located in the open-left half plane, where as the other one is located in the
open- right half plane.
Sign[D(0)] = 0, there exist at least one pole at the origin.
In this context, we can derive the first condition (necessary but not sufficient)
Stable ⇒ a2 > 0
Under this condition, we can re-write the characteristic equation in a more standard form
2 a1 a2
D(s) = a0 s + s +
a0 a0
D(s) = a0 s + 2ζωn s + ωn2
2
where
p
ωn = a2 /a0 > 0
a1
ζ=
2a0 ωn
From the analysis of second order system in standard form, we know that when
Lecture 11 11-3
It is also easy to see that ζ > 0 ⇔ a1 > 0. As a result, we can derive a necessary and sufficient condition on
BIBO stability.
A second order system with D(s) = ao s2 + a1 s + a2 (with a0 > 0), is BIBO stable if and only if ai > 0, ∀i ∈
{1, 2}.
Routh’s Stability Criterion (a.k.a Routh–Hurwitz stability criterion) is a mathematical test that is a necessary
and sufficient condition for the stability of an LTI system. The Routh test is a very computationally efficient
algorithm for the test of absolute LTI system stability.
In this test, one first constructs the routh table for a given
sn a0 a2 a4 a6 ···
n−1
s a1 a3 a5 a7 ···
sn−2 b1 b2 b3 b4 ···
sn−3 c1 c2 c3 c4 ···
sn−4 d1 d2 d3 d4 ···
.. .. ..
. . .
s2 e1 e2 0 ···
s1 f1 0 0 ···
s0 g0 0 0 ···
a1 a2 − a0 a3
a0 a2
b1 = −det /a1 =
a1 a3 a1
a1 a4 − a0 a5
a0 a4
b2 = −det /a1 =
a1 a5 a1
a1 a6 − a0 a7
a0 a6
b3 = −det /a1 =
a1 a7 a1
..
.
11-4 Lecture 11
b1 a 3 − a 1 b2
a1 a3
c1 = −det /b1 =
b1 b2 b1
b1 a 5 − a 1 b3
a1 a5
c2 = −det /b1 =
b1 b3 b1
b1 a 7 − a 1 b4
a1 a7
c3 = −det /b1 =
b1 b4 b1
..
.
c1 b2 − b1 c2
b1 b2
d1 = −det /c1 =
c1 c2 c1
c1 b3 − b1 c3
b1 b3
d2 = −det /c1 =
c1 c3 c1
c1 b4 − b1 c4
b1 b4
d3 = −det /c1 =
c1 c4 c1
..
.
Other coefficients are computed with the same structure with the computation flow of di ’s.
Result 1: The system is NOT BIBO stable if ∃ i, s.t, ai ≤ 0. In other words, since we assumed a0 > 0, all
other coefficients have to be strictly positive. This is a necessary, but not sufficient condition.
Result 2: The system is BIBO stable, if and only if all the coefficients in the first row strictly positive.
Routh test provides a necessary and sufficient condition.
Result 3: The # of roots of D(s) with positive real parts is equal to the # of sign changes in the first
column (shown in table below) of the Routh array.
a0
a1
b1
c1
d1
..
.
e1
f1
g0
Ex: Let D(s) = s4 + 2s3 + 3s2 + 4s + 5, is this system is stable. If not what is the number of poles with
positive real parts.
Solution: Let’s build the Routh table
Lecture 11 11-5
s4 1 3 5 0
s3 2 4 0 0
2·3−1·4 2·5−1·0
s2 2 =1 2 =5 0 0
1·4−2·5
s1 1 = −6 0 0 0
s0 5 0 0 0
# sign changes is equal to 2, thus the system is unstable and 2 out of 4 poles are located in the open
right-half plane. If we compute the poles numerically using a programming environment, we can find that
p1,2 = −1.29 ± 0.86j and p3,4 = 0.29 ± 1.4j, which indeed verifies the Routh table result.
Ex: Consider the following closed-loop system, find the set of KP and KD gains such that closed-loop system
is stable
+
r(t) y ( t)
−
Solution: Denominator of the closed-loop transfer function of the system has the following form D(s) =
s2 + KD s + KP . Now, lets construct the Routh table for the D(s)
s2 1 KP
s1 KD 0
s0 KP 0
In order the closed-loop system to be stable, # sign changes in the first column must be equal to zero, thus
the closed-loop system is BIBO stable if and only if KP > 0 and KD > 0.
Ex: Consider the following closed-loop system, find the set of KI gains such that closed-loop system is stable
+
r(t) y ( t)
−
Solution: Denominator of the closed-loop transfer function of the system has the following form
D(s) = s3 + 3s2 + 2s + KI
Case I: Let’s analyze the stability of “D(s) = s3 − 3s + 2”. Since a1 = 0 and a2 = −3, we know that the
system is not BIBO stable. Let’s construct the Routh table to verify this result.
s3 1 −3
s2 0 2
s1 ?
s0 ?
We can see that one of the coefficients in the Routh array is zero, and we can not complete the Rout table.
If a coefficient in the Routh array is zero, we know from the Routh Hurwitz test that the system is BIBO
unstable. However, what can we do, if we want to compute the # poles with positive real parts.
Solution Type 1: Lets find a new D̄(s) = (s + α)D(s), α > 0. We know that D(s) and D̄(s) has the same
# poles with positive real parts. Then we can construct a Routh table for D̄(s) to seek an answer.
Let D̄(s) = (s + 2)(s3 − 3s + 2) = s4 + 2s3 − 3s2 − 4s + 4, the Routh table takes the form
s4 1 -3 4
s3 2 -4 0
s2 -1 4 0
s1 4 0
s0 4 0
# sign changes in the Routh array of D̄(s) is equal to 2, so we can conclude that D(s) has 2 poles with
positive real parts. If we compute the poles of D(s), we find that p1 = −2, p2,3 = 1, which verifies our
finding.
Solution Type 2: Replace 0 element with an infinitesimal but non-zero element ϵ > 0.
s3 1 −3
s2 ϵ 2
s1 − 3ϵ+2
ϵ 0
s0 2
# sign changes in the perturbed Routh array of D(s) is equal to 2, so we can conclude that D(s) has 2 poles
with positive real parts.
Lecture 11 11-7
D(s)|s=1/q = q −3 − 3q −1 + 2 = q −3 2q 2 − 3q 2 + 1
Now let’s define D̄(q) = q 3 D(s)|s=1/q = 2q 3 − 3q 2 + 1. Note that we simply flip the coefficients of D(s) to
find the coefficients of D̄(q). It is easy to see that if pi is a pole of D(s), then 1/pi is a pole of D̄(q). Let
pi = σ + jω, then
1 1 σ − jω
= = 2
pi σ + jω σ + ω2
σ ω
= 2 −j 2
σ + ω2 σ + ω2
1
Sign(Re{pi }) = Sign Re
pi
We can see that D(s) and D̄(q) have same number of stable and unstable poles. Thus, we can perform a
Routh Hurwitz test on D̄(q)
s3 2 0
s2 -3 1
s1 2/3 0
s0 1
# sign changes in the Routh array of D̄(q) is equal to 2, so we can conclude that D(s) has 2 poles with
positive real parts.
Case II:
Ex: Analyze the stability of D(s) = s4 + 2s3 + 2s2 + 2s + 1 using Routh table
s4 1 2 1
s3 2 2 0
s2 1 1 0
s1 0 0
s0 ? ?
We can see that one row in Routh table is completely zero. Thus we know that system is indeed BIBO
unstable. However, what we can do in order to find number of unstable poles. This happens when there
exists roots of equal magnitude located radially opposite in s−plane, i.e. symmetric w.r.t. origin. These
cases are illustrated in the figure below.
11-8 Lecture 11
Im Im Im
Re Re Re
This case happens always after an even row, in this example right after s2 row. In such cases, we compute
the Auxiliary polynomial, A(s) = s2 + 1 in this example. After that, we compute its derivative, A′ (s) = 2s
in this example, and use the coefficient of A′ (s) in replacement of the zero row and compute the Routh array
accordingly. This process is illustrated below
s4 1 2 1
s3 2 2 0
s2 1 1 0 → A(s) = s2 + 1
s1 2 0 ← A′ (s) = 2s
s0 1 0
# sign changes in the Routh array of D(s) with Auxiliary polynomial is equal to 0, so we can conclude that
D(s) has 0 poles with positive real parts. This means that all unstable poles are located on the imaginary
axis. If we compute the poles numerically we find that p1,2 = −1 and p3,4 = ±j which verifies our finding.
Indeed, we can see that problematic roots are the roots of A(s).
s3 1 -1
s2 2 -2 → A(s) = 2s2 − 2
s1 4 0 ← A′ (s) = 4s
s0 -2 0
# sign changes in the Routh array of D(s) with Auxiliary polynomial is equal to 1, so we can conclude that
D(s) has 1 pole with positive real part. If we compute the poles numerically we find that p1 = −1, p2 = −2
and p3 = 1 which verifies our finding. Indeed, we can see that problematic roots is one of the roots of A(s).
This case occurs when there are roots of equal magnitude lying radially opposite in the s-plane.
Lecture 11 11-9
Let’s assume that we not only interested in the absolute stability of a system, but also we would like to test
weather all poles are located in a region where the real part of the whole poles has an upper bound of −4.
Similarly we can think that, we define a performance region based on a settling time requirement.
s-plane
-4
We can still use Routh Hurwitz test to compute the # poles for which real parts is lower than −4. However,
we need to perform a change of variables.
Replace s with z − 4
D(s)|s=z−4 = D(z)
If we apply a Routh Hurwitz test on D(z), we compute the # poles of D(z) with positive real parts. Based
on change of variables, we defined
Ex: Let D(s) = s3 + 8s2 + 19s + 12, first test if the system is BIBO stable using Routh Hurwitz test. If the
system is BIBO stable, then check if the real parts of all the poles are located in the region σ ∈ (−∞, −2).
Solution: First test the absolute stability using Routh test on D(s)
s3 1 19
s2 8 12
s1 17.5 0
s0 12 0
Since all of the coefficients of the Routh table are positive, the system is BIBO stable. Now check the relative
stability by applying Routh test on “D(z) = D(s)|s=z−2 = z 3 + 2z 2 − z − 2”,
11-10 Lecture 11
s3 1 -1
s2 2 -2 → A(z) = 2z 2 − 2
s1 4 0 ← A′ (z) = 4z
s0 -2 0
Since in the Routh table (with Auxiliary polynomial), there exist one sign change, D(z) has one pole in
the open right half z-plane. This means that D(s) has a single pole where its real part is located in the
σ ∈ (−2, 0) region.
If we compute the roots of D(s) numerically, we find that p1 = −4, p2 = −3, and p3 = −1. This verifies
that the system is BIBO stable but, only two of the poles are located in the σ ∈ (−∞, −2) region.
Ex: Consider the closed-loop system that we analyzed previously in terms of absolute stability.
+
r(t) y ( t)
−
Now let’s fix KP = 20, and find the set of KD gains such that closed-loop poles are located in the desired
(gray) region illustrated below.
s-plane
-4
Solution:
We can use Routh Hurwitz test on
in order to solve this problem. In order to achieve the desired pole locations, D(z) can not have any poles
Lecture 11 11-11
s2 1 36 − 4KD
s1 KD − 8 0
s0 36 − 4KD 0
with positive real parts, thus the coefficients of the Routh array has to be positive.
KD > 8
KD < 9
Lecture 12
Lecturer: Asst. Prof. M. Mert Ankarali
..
In control theory, root locus analysis is a graphical analysis method for investigating the change of closed-
loop poles/roots of a system with respect to the changes of a system parameter, commonly a gain parameter
K > 0.
In order to better understand the root locus and derive fundamental rules, we start with the following basic
feedback topology where the controller is a P-controller with a gain K.
where the poles of the closed loop system are the roots of the characteristic equation
1 + KGOL (s) = 0
n(s)
1+K =0
d(s)
The goal is deriving the qualitative and quantitive behavior of closed-loop pole “paths” for positive gain K
that solves the equation 1 + KGOL (s) = 0 (or 1 + K n(s)
d(s) = 0).
n(s) (s − z1 ) · · · (s − zM )
KGOL (s) = −1 , or K = −1 , or K = −1
d(s) (s − p1 ) · · · (s − pN )
12-1
12-2 Lecture 12
n(s) |s − z1 | · · · |s − zM |
K|GOL (s)| = 1 , or K = 1 , or K =1
d(s) |s − p1 | · · · |s − pN |
For a given K, s values that satisfy both magnitude and angle conditions are located on the root loci. These
constitutes the most fundamental knowledge regarding the root locus analysis.
How we can check whether a candidate s∗ is in the root -locus or not. If we analyze the angle condition, we
can see that it is independent from the parameter K. However, If we focus on the magnitude condition, we
can see that
1 d(s∗ ) |s∗ − p1 | · · · |s∗ − pN |
K= ∗
= ∗
= ∗
|GOL (s )| d(s ) |s − z1 | · · · |s∗ − zM |
which implies that for every s∗ candidate (that is not a pole or zero), we can indeed compute a gain K value.
In conclusion, only angle condition is used for testing whether a point is in the root-locus or not. On the
other hand, will use the magnitude condition to compute the value of gain K, if we find that a candidate s∗
is in the root locus based on the angle condition.
Lecture 12 12-3
1
Ex: It is given that GOL (s) = s(s+4) . Determine if the following pole candidates are on the root-locus or
not
p∗1 = −2 , p∗2 = 2 p∗3 = −2 + 2j
Solution: We only test the angle condition. Solutions are illustrated on the s−planes provided below
Im Im
√
φ1=0 φ2=π
Re Re
-4 -2 -4 -2
Im Im
φ1=0 φ2=0
Re Re
-4 -2 2 -4 -2 2
Im Im
2 2
φ1=π/4
φ2=3π/4
√
Re Re
-4 -2 -4 -2
Im Im
12-4 Lecture 12
1 + KGOL (s) = 0
n(s)
1+K =0
d(s)
(zs − z1 ) · · · (zs − zM )
1+K =0
(s − p1 ) · · · (s − pN )
d(s) + Kn(s) = 0
K → 0 ⇒ [d(s) + Kn(s) = 0 → [d(s) = 0]
K → ∞ ⇒ [d(s) + Kn(s) = 0 → [n(s) = 0]
4. Root loci on the real axis determined by open-loop zeros and poles. s = σ ∈ R then, based on the
angle condition we have
Sign[GOL (σ)] = −1
Let’s first analyze the effect of complex conjugate pole/zero (and double pole/zero on real axis) pairs
on the equation above. Let σ ∗ ∈ R is the candidate location and complex conjugate poles has the
following form p1,2 = σ ± jω
We can see that complex conjugate zero/pole pairs have not effect on angle condition for the roots on
the real axis. Then for the remaining ones we can derive the following condition
M̄
Y N̄
Y
Sign[GOL (σ)] = Sign[σ − zi ] Sign[σ − pj ] = −1
i=1 j=1
which means that for ODD number of poles + zeros Sign[σ − pi ] and Sign[σ − zi ] must be negative for
satisfying this condition for that particular σ to be on the root-locus. We can summarize the rule as
If the test point σ on real axis has ODD numbers of poles and zeros in its right, then this
point is located on the root-locus.
Lecture 12 12-5
Ex: The figure below illustrates the root locus plots of three different transfer functions.
Imaginary Axis
-4 -3 -2 -1 0 1 -4 -3 -2 -1 0 1 -4 -3 -2 -1 0 1
5. Asymptotes
(s − z1 ) · · · (s − zM ) K
K ≈ N −M
(s − p1 ) · · · (s − pN ) s
K
∠ N −M = −(N − M )∠[s] = π(2k + 1), k ∈ Z
s
±π(2k + 1)
φa = , k ∈ {1, · · · , N − M }
N −M
Ex: The figure below illustrates the root locus plots of two different transfer functions.
10 20
10
Imaginary Axis
0 0
-10
-10 -20
-15 -10 -5 0 5 -6 -4 -2 0 2
Real Axis Real Axis
1 + KGOL (σ) = 0
Note that break-in and breakaway points corresponds to double roots. Thus, if σb is a break-away or
break-in point we have
1 + KGOL (σb ) = 0
d
K GOL (σ) =0
dσ σb
Thus, we conclude that break-in or break-away points satisfy the following conditions
dGOL (σ) −1
= 0 , K(σB ) = , K(σb ) > 0
dσ σ=σb G OL (σb )
1
Ex: Draw the root locus diagram for GOL (s) = s(s+4) , compute the real axis intercept σc and break
away point (with the associated gain value).
1
Imaginary Axis
0
σc = −2
σba = −2 , K(σba ) = 4
-1
-2
-4 -2 0
Real Axis
s+4
Ex: Draw the root locus diagram for GOL (s) = s(s+3) , compute the break-away and break-in points
(with the associated gain values).
2
Imaginary Axis
K=9
K=1
0 σb2 + 8σb + 12 = 0
σb−a = −2 , K(σb−a ) = 1
σb−in = −6 , K(σb−in ) = 9
-2
-6 -4 -2 0
Real Axis
1 + KGOL (jω) = 0
D(jω) + KN (jω) = 0
,
Re{D(jω) + KN (jω)} = 0
Im{D(jω) + KN (jω)} = 0
Note that depending on the order of the system, solving the above equation can be most computation-
ally very heavy.
12-8 Lecture 12
Second way of finding the imaginary axis crossing is to apply the Routh-Hurwitz criteria. Note that
at these crossings the system becomes unstable. Using this fact, we can first construct a Routh table
for the closed-loop characteristic equation and then derive the K values where a change of stability
occurs. After that, we can use the computed critical K values to derive the pole locations on the
imaginary axis..
1
Ex: Draw the root locus diagram for GOL (s) = s(s+2)(s+3) , compute the break-away point and
imaginary axis crossings (with the associated gain values).
Break-away point
2
3σb2 + 10σb + 6 = 0
σb,1 = −0.8 , K(σb,1 ) = 2.1 > 0 → OK
Imaginary Axis
D(jω) + K(jω) = 0
-2 (jω)3 + 5(jω)2 + 6(jω) + K = 0
(K − 5ω 2 ) + (6ω − ω 3 )j = 0
√
⇒ ω = 6 , K = 30
-4 -2 0 2
Real Axis
Now let’s find the imaginary axis crossings using the Routh table. The characteristic equation for this
system is s3 + 5s2 + 6s + K, then the Routh table takes the form
s3 1 6
s2 5 K
30−K
s1 5 0
s0 K 0
We know that in order for the system to stable K ∈ (0, 30), since we only consider positive K values,
when K = 30 system stability changes (from stable to unstable). Let K = 30 and re-form the Routh
table.
s3 1 6
s2 5 30 → A(s) = 5s2 + 30
s1 10 0 ← A0 (s) = 10s
s0 30 0
Based on the Routh table, we conclude that when K = 30, system becomes unstable, the unstable
poles are located on the imaginary axis, and their locations can be find using the Auxiliary polynomial
as
√
A(s) = 0 → p1,2 = ± 6j
Lecture 12 12-9
Let’s first compute the closed-loop TF and analyze the characteristic equation.
Y (s) KG(s)
=
R(s) 1 + KG(s)H(s)
s+A
=
s(s + 4)(s + A) + 4A
s+A
= 3
s + (A + 4)s2 + 4As + 4A
Now let’s organize the characteristic equation
Now if we consider ḠOL (s) as the open-loop transfer function and draw the root-locus, then we would
derive the dependence of the roots to the parameter A.
Root-locus of the system w.r.t parameter A > 0 is given below.
1
Imaginary Axis
-1
-4 -2 0
Real Axis
EE302 - Feedback Systems Spring 2019
Lecture 13
Lecturer: Asst. Prof. M. Mert Ankarali
Let’s assume that we would like you to design a controller for the following second order plant transfer
function
1
G(s) =
(s + 1)(s + 3)
Let’s first design a P controller, C(s) = K. Let’s start with steady-state error performance. Plant is a type-
one system and controller is a static gain thus unit step and unit ramp steady-state error can be computed
as
1
Step : ess =
1 + KP /3
Step : ess =∞
We can see that unit-ramp error is ∞ regardless of KP , where as we can reduce the steady-state error by
increasing proportional gain KP . Now let’s draw root locus and comment on settling time and over-shoot
performance.
13-1
13-2 Lecture 13
2 K=5
Imaginary Axis
K=1
0
-1
-2
-3 -2 -1 0
Real Axis
Based on root-locus plot we can see that best convergence rate achieved with σ = −2, for which the
approximate settling time is 2s. When K = 1, system becomes critically damped. When we further increase
the gain, real part of the poles do not change, however oscillations starts and grows with KP . Obviously,
we should choose a gain KP > 1, however after that point theres is a trade-off between over-shoot and
(1) (1)
settling time performance. Let’s choose two candidate locations, p1,2 = −2 and p1,2 = −2 ± 2j. Table below
details the gain values at these pole locations, unit-step steady-state errors, and estimated settling time and
maximum over-shoot values.
p1,2 KP ess ts MP
-2 1 0.75 2 0
−2 ± 2 5 0.375 2 0.04
We verify that both steady-state and transient performance increases with larger KP . However, we can also
see that settling time estimation for KP = 5 has a larger error, which expected since the poles are close to
each other thus violates the dominant pole assumption. If we consider the values in the table, it seems to be
reasonable to choose KP = 5 since it provides a much better steady-state performance, of course if existing
of an small overs-shoot is not a major problem for the design. Now let’s plot step-responses of the closed
loop system for both cases.
Lecture 13 13-3
0.8
0.6
0.4
0.2
0
0 1 2 3 4
Time (seconds)
We verify most of our theoretical findings with the only exception about the settling time for KP = 1, which
is larger then the estimation. In conclusion, KP = 5 is a good choice for overall requirements.
Now let’s design a PD controller. First make some change of variables and re-write the PD controller in a
different form.
KP
C(s) = KP + KD s = KD s + = KD (s + α)
KD
We can see that PD controller introduces an extra zero to the open loop transfer function. Let’s compute
the steady-state error performance.
1 1
• Unit step: ess = 1+KP /3 = 1+αKD /3 ,
Technically in classical PD form KD has no effect on steady-state performance. However if we adopt the
second form with α and fix α first, then as we increase KD steady-state error will decrease. Now let’s fix
KP
α=4= K D
, and draw the root-locus.
KD= 2.5
p = -3.3 + 1.6i
2
PMP = % 0.15
1
Imaginary Axis
KD= 7.5
p = -5.7
0
-1
-2
-6 -5 -4 -3 -2 -1 0
Real Axis
13-4 Lecture 13
We think that the improvement of added zero is very clear. If we carefully look at the root-locus, we can see
that at the pole location that gives the worst maximum over-shoot performance, maximum percent over-shoot
is only %0.15, moreover overs-shoot completely disappears when KD ≥ 7.5. The best settling/convergence
time performance occurs at the break-in point which has an approximate gain of KD = 7.5,
Table below details the P and D gain values at this pole locations, closed-pole locations, unit-step steady-state
error, and estimated settling time value.
KP KD p ess ts [s]
30 7.5 -5.7 0.09 0.7
Now let’s compare this PD controller (with (KP , KD ) = (30, 7.5)) and the previously designed P controller
(with (KP , KD ) = (5, 0)) by simulating the step responses.
0.8
0.6
0.4
0.2
0
0 1 2 3
Time (seconds)
We can see that PD control policy outperforms the P controller in every category.
On the other hand, interestingly we expected no over-shoot with thet PD controller, yet we still observe
some over-shoot at the output. In addition to this, actual settling time is approximatelly half of our previous
estimation. The discrepancies between the transient characteristics between simulation and estimated values
occurs due the effectx of extra zero in the closed-loop transfer function.
Now let’s analyze how a pure integral controller affects the proposed system using root-locus analysis.
KI 1
C(s) = , G(s) =
s (s + 1)(s + 3)
1
GOL (s) = , K = KI
s(s + 1)(s + 3)
We already know that steady-state performance is affected positively with an Integral controller
Now let’s analyze the affects on transient performance and stability using root-locus diagram
Break-away point
1 3σb2 + 8σb + 3 = 0
σb,1 = −0.45 → OK , K = 0.63
Imaginary Axis
σb,2 = −2.2 → NO
0
D(jω) + KI N (jω) = 0
-1
(jω)3 + 4(jω)2 + 3(jω) + KI = 0
(K − 4ω 2 ) + (3ω − ω 3 )j = 0
√
-2 ⇒ ω = 3 , K = 12
-4 -3 -2 -1 0 1
Real Axis
• Closed-loop system becomes unstable for KI > 6. Note that with a P controller closed loop system
was always stable for KP > 0
• Best settling time that can be achieved with an I controller is approximately, ts = 9s, which is quite
bad compared to the simple P-controller.
In conclusion, integral action improves steady-state performance, but it degrades the stability and transient
performance.
Now let’s design a PI controller. First make some change of variables and re-write the PI controller in a
different form.
1 s + KI /KP s+α
C(s) = KP + KI = KP = KP
s s s
We can see that PI controller introduces an extra zero to the open loop transfer function and a pole at the
origin. We know that the steady-state error performance characteristics. of a PI controller is same with an
I controller.
KI
Now let’s fix α = 2 = KP , and draw the root-locus w.r.t KP . Note that
s+2
GOL (s) =
s(s + 1)(s + 3)
13-6 Lecture 13
Break-away point
Imaginary Axis
0
2σb3 + 10σb2 + 16σb + 6 = 0
σb,1 = −0.53 → OK , K = 0.42
-2
σb,2 = −2.2 ± 0.8j → NO
-4
-3 -2 -1 0
Real Axis
• Best settling time that can be achieved with this PI controller is approximately, ts = 4s, but this is
achieved when K → ∞.
• Best settling time value with this PI controller is the approximately double of the best settling time
value of the P controller.
• System is approximately acts like a second-order over-damped system when KP ∈ (0, 0.42), where as
acts like a a second-order under-damped system when KP ∈ (0.42, ∞).
• Oscillations and over-shoot increases as we increase KP . This there is a trade-off between settling time
and over-shoot performance.
In conclusion, PI controller has superior steady-state performance compared to P and PD controllers, however
there exist substantial transent performance drop even compared to simple P controller.
Now let’s choose KP = 2, then we can find that KI = 4. Now let’s compre this PI controller (KP , KI ) = (2, 4)
with our previously chosen P controller (KP , KI ) = (5, 0) by simulating the step-responses.
0
0 2 4 6 8 10 12
Time (seconds)
Lecture 13 13-7
It is obvious that while PI controller has a superior steady-state performance, P controller provides much
better transient characteristics.
Practice Question: Choose α from this set {4, −1.2, 0.8}, and draw the root-locus diagrams for each α.
Comment on the results in terms of overall transient performance based on the root locus diagrams.
If one is not satisfied with the transient characteristics of a P-controller but also would like to eliminiate
steady-state error fot the unit step input, he/she can design a PID controller.
First make some change of variables and re-write the PID controller in a different form.
1 1
C(s) = KD s + KP + KI = KD s + (KP /KD ) + (KI /KD )
s s
1
= KD s2 + (KP /KD )s + (KI /KD )
s
We can see that a PID controller has a second order numerator dynamics, and it is possible to have two real
zeros, or two complex conjugate zeros. In general, PID gains are chosen such that numerator has two zeros
(s + α)(s + β)
C(s) = KD
s
We know that PID controller has same stead-state performance characteristics with PI and I controllers.
Now let’s choose α = 2 and β = 4 and draw the root locus w.r.t. KD .
2
KD=10
p=-6+1.6j
KD=11.1
Imaginary Axis
0
KD=10
p=-2.1
-2
-6 -4 -2 0
Real Axis
Unlike the root-locus plots of previous cases, it is harder to have an idea about what is good for transient
performance. It seems that as KD %, one pole on the real axis move towards imaginary axis and stops at
σ = −2 when K → ∞, where as other two poles deviates from the imaginary axis when KD %. Let’s choose
KD = 10 which marked on the root-locus plot. In this case poles of the closed-loop system takes the form
p1 = −2.1 and p2,3 = −6 ± 1.6j. We may conclude that the system is approximatelly first order since the
single pole is much closer to the real-axis. In this case, we expect settling time as t ≈ 1.9s. Note that this
settling time value is slightly better then the best settling time value that is satisfied with a P-controller.
Moreover we expect that over-shoot would be negligible (less than %0.001).
13-8 Lecture 13
Now let’s simulate this PID controller and previously designed PD controller and compre the transient and
steady state performances.
1.2
0.8
0.6
0.4
0.2
0
0 0.5 1 1.5 2
Time (seconds)
We can clearly see the qualitative differences between the designed PD and PID controllers. For this specific
controllers, we can see that PD has much better transient performance and PID is superior in steady-state
performance. Note that PID completely eliminates the steady-state error and it is certainly possible to
improve the steady-state performance of PD controller by increasing its gain.
On the other hand in PID controller, if we locate both zeros to the left of the pole at −3 location (left as
a practice). we would obtain better root-locus picture (in terms of pole locations with better transient
performance). However, the main trade-off will be the substantially increased gain values, which is already
much larger then the designed P and PD controllers.
Example: Design a controller√C(s) for the following feedback-system such that dominant closed-loop poles
has a damping ratio of ζ = 1/ 2 and damped natural frequency of ωd = 2rad/s.
+
r(t) y ( t)
−
Solution: Since there is no steady-state requirement and the goal is to locate the dominant poles to a
specific location, first natural choice of a controller is a PD controller. We know that PD controller can be
Lecture 13 13-9
⇒α=3
Note that direct complex algebra could provide a simpler computational process. Now given that α = 3,
let’s draw the root locus. We can see that the desire dominant pole locations are satisfied when KP = 48
and KD = 16.
10
KP=48
KD=16
KP=48
Imaginary Axis
KD=16
0
-10
-10 -9 -8 -7 -6 -5 -4 -3 -2 -1 0 1
Real Axis
13-10 Lecture 13
+
r(t) y ( t)
−
Solution: Since the requirement imposes zero steady-state error we have to implement an I action. Let’s
test a PI controller first.
1 s+α
C(s) = KP + KI = KI
s s
s+α
GOL (s) = KI
s(s + 1)(s + 3)
Now let’s try to compute α using the angle condition since we want the specified pole to be on the root-locus
This implies that a PI controller can not satisfy the requirements. Now let’s try to designa PID controller.
Now we have two parameters to satisfy angle condition. Let’s simplify the process and let α = β, thus the
PID controller has the following form
(s + α)2
C(s) = KD
s
Lecture 13 13-11
KI = 24.12
KP = 17.07
Let’s compute closed-loop transfer function and associated closed loop-poles
3.018s2 + 17.07s + 24.12
T (s) =
s3 + 7.018s2 + 20.07s + 24.12
p1,2 = −2 ± 2j
p3 = −3.0144
We can see that the poles that are closer to the origin are placed at desired locations, but we can also see
that third pole is not far away enough to conclude the dominance. Now let’s plot the step-response of the
closed-loop system,
0
0 1 2 3
Time (seconds)
Based on the dominant pole locations, we would expect a settling time value of ts ≈ 2s, while the actual
settling tim is approximatelly 1.8s. On the other hand the over-shoot expectation based on the pole location
is P MP = %4, where as in the simulation we observe %12 maximum overs-hoot. Note that, we observed
similar “errors” due to closed-loop zero dynamics even in the cases where system has only to complex-
conjugate poles. For this reason, PID controller fairly satisfies tha design requirements.
EE302 - Feedback Systems Spring 2019
Lecture 14
Lecturer: Asst. Prof. M. Mert Ankarali
Let’s assume u(t), y(t), and G(t) represents the input, output, and transfer function representation of an
input-output continuous time system.
In order to characterize frequency response of a dynamical system, the test signal is
u(t) = ejωt
which is an artificial complex periodic signal with a frequency of ω. The Laplace transform of u(t) takes the
form
1
U (s) = L{ejωt } =
s − jω
Response of the system in s-domain is given by
1
Y (s) = G(s)U (s) = G(s)
s − jω
Assuming that G(s) is a rational transfer function we can perform a partial fraction expansion
a
Y (s) = + [terms due to the poles of G(s)]
s − jω
a = lim [(s − jω)Y (s)] = G(jω)
s→jω
G(jω)
Y (s) = + [terms due to the poles of G(s)]
s − jω
Taking the inverse Laplace transform yields
y(t) = G(jω)ejωt + L−1 [terms due to the poles of G(s)]
If we assume that the system is “stable” or system is a part of closed loop system and closed loop behavior
is stable then at steady state we have
yss (t) = G(jω)ejωt
= |G(jω)|eiωt+∠[G(jω)]
= M eiωt+θ
In other words complex periodic signal is scaled and phase shifted based on the following operators
M = |G(jω)|
θ = ∠G(jω)
It is very easy to show that for a general real time domain signal u(t) = sin(ωt + φ), the output y(t) at
steady state is computed via
yss (t) = M sin(ωt + φ + θ)
14-1
14-2 Lecture 14
We can consider the frequency response function G(jω) as a mapping from positive jω axis to a curve in the
complex plane. In polar plot, we draw the frequency response function starting from ω = 0 (or ω → 0+ ) to
ω → ∞.
Let’s draw the polar plots of
1
G1 (s) = s , G2 (s) = , G3 (s) = s + 2
s
1 1
G4 (s) = 2 + , G5 (s) = 2 + s +
s s
1 1 − jω 1 ω
G1 (jω) = = 2 = 2 − j
jω + 1 ω +1 ω + 1 ω2 + 1
1
|G1 (jω)| = √
1 + ω2
∠[G1 (jω)] = arctan(−ω)
jω jω + ω 2 ω2 ω
G2 (jω) = = 2 = 2 + 2 j
jω + 1 ω +1 ω +1 ω +1
r
ω2
|G2 (jω)| =
1 + ω2
∠[G2 (jω)] = arctan(1/ω)
s−1
Now let’s draw the polar plot of G(s) = s+1 (note that there is a zero in open-right half plane). Note that
2
G(s) = 1 −
s+1
1 ω
G(jω) = 1 − 2 − j
ω2 + 1 ω2 + 1
s−1
Process of polar plot drawing of G(s) = s+1 is illustrated below
-2 -1 -1 1
1
Now let’s draw the polar plot of G(s) = (s+1)2
1 (−jω + 1)2
G(jω) = 2
=
(jω + 1) (ω 2 + 1)2
1
= 1 − ω 2 + j(−2ω)
(ω + 1)2
2
14-4 Lecture 14
Some important points and associated features on the polar plot can be computed as
ω → 0 ⇒ G(jω) = 1
ω → 1 ⇒ G(jω) = −0.5j
ω → ∞ ⇒ |G(jω)| → 0 & ∠[G(jω)] → −π
-0.5
1
Now let’s draw the polar plot of G(s) = (s+1)3
1 (−jω + 1)3
G(jω) = 3
=
(jω + 1) (ω 2 + 1)3
1
= 1 − 3ω 2 + j(ω 3 − 3ω)
(ω 2 + 1)3
Some important points and associated features on the polar plot can be computed as
ω → 0 ⇒ G(jω) = 1
p
ω → 1/3 ⇒ G(jω) = −0.65j
√
ω → 3 ⇒ G(jω) = −1/4
ω → ∞ ⇒ |G(jω)| → 0 & ∠[G(jω)] → π/2
-1/4
-0.65 j
EE302 - Feedback Systems Spring 2019
Lecture 15
Lecturer: Asst. Prof. M. Mert Ankarali
Nyquist stability criterion is another method to investigate the stability (absolute and relative) of a dynamical
system (feed-forward and feed-back). Its based on the frequency response characteristics of a system.
Definition: A contour Γs is a closed path with a direction in a complex plane.
Remark: A continuous function F (s) maps a contour Γs in s−plane to another contour ΓF (s) in F (s)
plane. The figure below illustrates a clock-wise contour Γs and its map ΓF (s) which is also clock-wise in this
example.
N =Z −P times
N : # CW encirclements of origin
Z : # zeros F (s)
P : # poles F (s)
Let’s assume that we want to analyze the stability of the following input-output system
15-1
15-2 Lecture 15
• P = Z − N , where
• N : # CW encirclements of origin by ΓG(s)
• Z: # zeros of G(s) with positive real parts
1 2
3
1
3
In this example illustration N = 2, which implies that total number of unstable poles of G(s) is given by
P = Z + 2, if Z, i.e. number of zeros with positive real parts, is known than we can compute number
of unstable poles P . Alternatively, since in a feedforward system only denominator part determines the
1
stability, instead of G(s), one can draw the Nyquist plot of Ḡ(s) = D(s) .
Lecture 15 15-3
s−1
Ex: Let’s analyze the feedforward stability of G(s) = s+1 using Nyquist plot.
Solution: Based on the Nyquist contour we have three major paths
1. This is the polar plot that we covered in the previous lecture, where we plot G(jω), where ω : 0 → ∞
2. This is the infinite radius circular path. In this case if we write s in polar form, we get s = Rejθ where
R → ∞, and θ : π/2 → −π/2. Then we can derive that
Rejθ − 1 Rejθ
G Rejθ = jθ
≈
Re + 1 Rejθ
⇒ |G Rejθ | ≈ 1 , ∠[G Rejθ ] ≈ 0
In other words, this whole path in the Nyquist plot is concentrated around 1 + 0j point in the complex
plane.
3. This path is the mapping of the negative imaginary axis, i.e G(−jω), where ω : ∞ → 0. Obviously
since G(−jω) is complex conjugate of G(jω) this path is symmetric to the polar plot with respect to
the real axis. Note that direction of this path is reverse of the direction of the polar plot.
1
1 2
-1 1
3
3
We can see from the derived Nyquist plot that N = 1, and we know that the system has 1 zero with positive
real part Z = 1. The total number of unstable poles is equal to P = Z − N = 1 − 1 = 0, thus the system is
obviously stable.
15-4 Lecture 15
1
Ex: Let’s analyze the feedforward stability of G(s) = s+1 using Nyquist plot.
Solution: Now let’s analyze the Nyquist paths
1. This is the polar plot that we covered in the previous lecture, where we plot G(jω), where ω : 0 → ∞.
In this case, the behavior when ω → ∞ is important. Now let’s assume that ω → R and R 1
1 1
G1 (jR) ≈ − j
R2 R
∠[G1 (jω)] ≈ −π/2
2. Now we should be careful with mapping the infinite radius circular path. Again let s = Rejθ and
θ : π/2 → −π/2. Then we can derive that
1 ej(−θ)
G Rejθ =≈
jθ
=
Re R
⇒ |G Rejθ | ≈ 1 , ∠[G Rejθ ] ≈ −θ
Note that when θ : π/2 → −π/, the infinite-small contour around origin rotates in CCW direction.
3. Last path (mapping of negative imaginaty axis) is again is the conjugate of polar plot with reverse
direction.
1 2
We can see from the derived Nyquist plot that N = 0, and we know that the system has no zero with positive
real part Z = 0. The total number of unstable poles is equal to P = Z − N = 0 − 0 = 0, thus the system is
obviously stable.
Lecture 15 15-5
1
Ex: Analyze the feedforward stability of G(s) = (s+1)2 using Nyquist plot.
1. This is the polar plot that we covered in the previous lecture, where we plot G(jω), where ω : 0 → ∞.
In this case, the behavior when ω → ∞ is important. Now let’s assume that ω → R and R 1
1 1
G1 (jR) ≈ − 2
− 3j , ∠[G1 (jω)] ≈ −π
R R
2. Again, we should be careful with mapping of the infinite radius circular path. s = Rejθ and θ : π/2 →
−π/2. Then we can derive that
1 ej(−2θ)
G Rejθ =≈ 2 j2θ =
R e2 R2
⇒ |G Re | ≈ 1 , ∠[G Rejθ ] ≈ −2θ
jθ
Note that when θ : π/2 → −π/, the infinite-small contour around origin rotates in CCW direction.
3. Last path (mapping of negative imaginaty axis) is again the conjugate of polar plot with reverse
direction.
1 2
0.5j
3 -0.5j
We can see from the derived Nyquist plot that N = 0, and we know that the system has no zero with positive
real part Z = 0. The total number of unstable poles is equal to P = Z − N = 0 − 0 = 0, thus the system is
obviously stable.
15-6 Lecture 15
1
Ex: Analyze the feedforward stability of G(s) = (s+1)3 using Nyquist plot.
1. This is the polar plot that we have covered in the previous lecture, where we plot G(jω), where
ω : 0 → ∞. In this case, the behavior when ω → ∞ is important. Now let’s assume that ω → R and
R1
3 1
G(jR) ≈ − 4 + 3 j ∠[G(jω)] ≈ π/2
R R
2. Again, we should be careful with mapping of the infinite radius circular path. s = Rejθ and θ : π/2 →
−π/2. Then we can derive that
1 ej(−3θ)
G Rejθ =≈ 3 j3θ =
R e3 R3
⇒ |G Re | ≈ 1 , ∠[G Rejθ ] ≈ −3θ
jθ
Note that when θ : π/2 → −π/, the infinite-small contour around origin rotates in CCW direction.
3. Last path (mapping of negative imaginaty axis) is again the conjugate of polar plot with reverse
direction.
1 2
If carefully analyze the encirclements around the origin, we can see that net encirclement around the origin
is N = 1 − 1 (or N = 2 − 2). Based on this, we verify that the total number of unstable poles is equal to
P = Z − N = 0 − 0 = 0, thus the system is stable.
Lecture 15 15-7
1
Ex: Analyze the feedforward stability of G(s) = (s−1)(s+2) using Nyquist plot.
Now let’s derive the Nyquist plot conditions for the infinite circle on the Nyquist contour. In this path
s = Rejθ where R → ∞ (or R 1) and θ : π/2 → −π/2 (in CW direction). Thus,
1 1
G(Rejθ ) ≈ = 2 ej(−2θ)
R2 ej2θ R
|G(Rejθ )| ≈ 1
∠[G(Rejθ )] = −2θ θ : π/2 → −π/2
Note that associated Nyquist plot around origin turns approximatelly 2π radians in CCW direction. Last
path is the mapping of negative imaginary axis which is simply conjugate of the polar plot with reverse
direction.
1 2
-0.5
If carefully analyze the encirclements around the origin, we can see that net CW encirclement around the
origin is N = −1. Based on this, we verify that the total number of unstable poles is equal to P = Z − N =
0 − (−1) = 1, thus the system is indeed unstable.
EE302 - Feedback Systems Spring 2019
Lecture 16
Lecturer: Asst. Prof. M. Mert Ankarali
Even though we illustrated how to apply Nyquist stability criterion for feedforward systems in the previous
lecture (for teaching the details of Nyquost plot and some basics), in control theory and applications Nyqist
plot and Nyquist stability test are majorly used for analyzing feedback topologies.
The figure below illustrates the fundamental feedback system topology for a SISO system
We know that the closed-loop transfer function, T (s), for this system has the following form
G(s) G(s)
T (s) = =
1 + G(s)H(s) 1 + GOL (s)
where GOL (s) is the open-loop transfer function for the given topology. We know that poles of T (s) are the
roots that satisfy 1 + GOL (s) = 0. Now let’s define an analytic function F (s) = 1 + GOL (s), analyze its
relation with the open-loop and closed-loop transfer functions.
where
Moreover
• Poles of F (s) are the roots of D(s), hence they constitutes the open-loop poles.
• Zeros of F (s) are the poles of T (s), hence they constitutes the closed-loop poles.
16-1
16-2 Lecture 16
Note that open-loop zeros and poles are “known”, and goal is to investigate/analyze the number of unstable
poles of T (s) which is equal to the number of zeros of F (s) with positive real parts
Now let’s assume that we derive Nyquist plots of F (s) = 1 + GOL (s) and GOL (s) and obtain ΓF (s) and
ΓGOL (s) . The figure below provides an illustrative example of a Nyquist contour, Nyquist plot of F (s), and
Nyquist plot of GOL (s).
2 2
1 2
3 3
-1
1 1
3
First simple observation that we have to pay attention is that in order to obtain ΓGOL (s) we shift ΓF (s) on
real axis to the left (by 1), similarly in order to obtain ΓF (s) we shift ΓGOL (s) on real axis to the right ((by
1)). From this fact we can derive the following equality which is critical for stability analysis
For example In this illustration above N = 0. If we apply Cauchy’s Principle argument for F (s), we can
derive that
• N = ZF − P F
• PF : # poles of F (s) with positive real parts, which is indeed equal to the # poles of GOL (s) with
positive real parts which is a “known” quantity.
• ZF : # zeros of F (s) with positive real parts, which is indeed equal to to the # unstable poles of T (s)
which is the desired output.
In conclusion, in order to analyze the closed-loop stability of a unity-feedback feed-back systems we apply
the following procedure
Ex: Analyze the stability of the following feedback system using Nyquist plot.
1
Solution: For this given system GOL (s) = s+1 . In the previous lecture we already derived the Nyquist plot
1
for s+1 which is illustrated below
1 2
N-1=0
-1 1
We can conclude from the derived Nyquist plot and open-loop transfer function
• N =0
Now let’s consider the following feedback system topology for a SISO system where DC gain of the open-loop
transfer function is adjusted with a gain parameter K (e.g. P controller).
We would like to test the stability of the closed-loop system for different values of K, moreover we would like
to derive the range of K values that makes the closed-loop system stable. Indeed, we don’t need to re-draw
the Nyquist plot for each K that we want to test.
The analytic function, F (s), that we adopt for analyzing stability can be written in the form
N (s)
F (s) = 1 + GOL (s) = 1 + K
D(s)
where
In order to analyze the stability of Nyquist plot, in the previous section we showed that we can draw the
Nyqist plot GOL (s) and analyze the # encirclements of (−1 + 0j).
N (s)
Now, in this case we will derive the Nyquist plot of D(s) , i.e. Γ N (s) . Note that ΓGOL (s) = KΓ N (s) , i.e. we
D(s) D(s)
1
N (s)
N = # of CW enrichments of (−1 + 0j) by ΓGOL (s) = # of CW enrichments of − K + 0j by D(s) .
To sum-up, in order to analyze the closed-loop stability of a feedback systems for which we have a variable
gain parameter K in the open-loop transfer function, we apply the following procedure
N (s)
• Draw the Nyquist plot of where GOL (s) = K N
D(s)
(s)
D(s)
1
• Compute N = # CW encirclements of − K + 0j by Γ N (s)
D(s)
• Compute POL = # open-loop poles with positive real parts, i.e. # roots of D(s).
• Finally compute PCL = N + POL = # unstable closed-loop poles
Lecture 16 16-5
Ex: Find the range of K values that makes the following closed-loop system stable.
0.5j
-0.5j
We can see from the derived Nyquist plot that the closed path divides the real axis in three different parts.
If we analyze these regions separately, we can then find a complete range of K values that makes the closed
loop-system stable
−1
∈ (−∞, 0) → K ∈ [0, ∞) ⇒ N = 0
K
−1
∈ (0, 1) → K ∈ (−∞, −1) ⇒ N = 1
K
−1
∈ (1, ∞) → K ∈ (−1, 0] ⇒ N = 0
K
Note that since number of open-loop unstable poles is equal to 0, we have POL = 0. Thus, system is BIBO
stable if and only if N = 0. Whereas, for the region when N = 1, there always exist one unstable pole.
Note that for K = −1, we don’t have a conclusion, indeed system is BIBO unstable since in this case there
exist a pole on the imaginary axis. As a result
Ex: Find the range of K values that makes the following closed-loop system stable. Also, for the K values
that makes the closed-loop system unstable find the number of unstable poles.
-0.25 3 1 4
2
1
1
As we can see from the Nyquist plot pf (s+1) 3 , it divides the region into four different parts. If we analyze
these regions separately, we can then find a complete range of K values that makes the closed loop-system
stable, as well as find the number of unstable poles for the unstable cases.
−1
∈ −∞, −1
1. K 4 → K ∈ [0, 4) ⇒ (N = 0, PCL = 0), closed-loop system is stable
−1
∈ −1
2. K 4 ,0 → K ∈ (4, ∞) ⇒ (N = 2, PCL = 2). Unstable CL system with 2 unstable poles
−1
3. K ∈ (0, 1) → K ∈ (−∞, −1) ⇒ (N = 1, PCL = 1). Unstable CL system with 1 unstable pole
−1
4. K ∈ (1, ∞) → K ∈ (−1, 0) ⇒ (N = 0, PCL = 0), closed-loop system is stable
As a result
Ex: Find the range of K values that makes the following closed-loop system stable. Also, for the K values
that makes the closed-loop system unstable find the number of unstable poles.
N (s) 1
Solution: For this given system D(s) = (s−1)(s+2) . Note that for this system POL = 1.
1
In the previous lecture we already derived the Nyquist plot for (s−1)(s+2) which is illustrated below
-0.5
We can see that the Nyquist plot divides the real axis in three different parts. If we analyze these regions
separately, we can then find a complete range of K values that makes the closed loop-system stable
−1
1. K ∈ (−∞, −1 2 ) → K ∈ (0, 2) ⇒ N = 0 → PCL = 1. Unstable CL-system with 1 unstable pole.
−1
∈ −1
2. K 2 , 0 → K ∈ (2, ∞) ⇒ N = −1 → P = 1 − 1 = 0. Stable CL-system
−1
3. K ∈ (0, ∞) → Kin(−∞, 0) ⇒ N = 0 → P = 1. Unstable CL-system with 1 unstable pole.
As a result
16.1.3 Nyquist Plot and Stability Test with Open-Loop Poles/Zeros on the
Imaginary Axis
If you remember, when we introduced Nyquist contour and Nyquist stability test we assumed that there is
no pole/zero on the imaginary axis. However, it is especially very common to have poles at the origin since
it corresponds to simple integrator. In this course, we will explicitly cover the case when there exist a pole
or zero at the origin.
Let’s assume that GOL (s) has a pole (or zero, or multiple poles) at the origin. We simply modify the Nyquist
contour by adding an infinitesimal notch at the origin to the original Nyquist contour.
The figure below illustrates this modified Nyquist contour and an illustrative Nyquist plot.
4
1 2
3
1
3
Lecture 16 16-9
Ex: Find the range of K values that makes the following closed-loop system stable.
1 −j − ω −1 −1/ω
G(jω) = = = 2 + j
jω(jω + 1) ω(ω 2 + 1) ω + 1 ω2 + 1
2. Note that since we are now only intersted in the circulation around −1/K. The origin in the Nyquist
plot corresponds to |K| → ∞, so that it is really not critical to correctly draw this detail for feedback
systems. However, let’s draw this detail for this example for practice. s = Rejθ and θ : π/2 → −π/2.
Then we can derive that
1 ej(−2θ)
G Rejθ ≈
2 j2θ
=
R e 2 R2
⇒ |G Re | ≈ 1 , ∠[G Rejθ ] ≈ −2θ
jθ
Note that when θ : π/2 → −π/, the infinite-small contour around origin rotates in CCW direction.
3. This part is simply the conjugate of polar plot with reverse direction.
4. Now this part is new, since we are dealing with this path since there is a pole at the origin. s = ejφ ,
where → 0 and φ : −π/2 → π/2 (CCW direction). Then we can derive that
1
G ejφ ≈ = Rej(−φ)
e jφ
N (s) 1
As a result, we obtain the following Nyquist plot for D(s) = s(s+1)
16-10 Lecture 16
3 4
1 2
-1
4
3
1
Note that the open-loop transfer function has no unstable poles, i.e. POL = 0. We can see from the Nyquist
plot divides the real axis in two different parts. If we analyze these regions separately, we can then find a
range of K values that makes the closed loop-system stable
−1
1. K ∈ (−∞, 0) → K ∈ (0, ∞) ⇒ N = 0 → PCL = 0. System is stable
−1
2. K ∈ (0, ∞) → K ∈ (−∞, 0) ⇒ N = 1 → PCL = 1. Unstable system with 1 unstable pole
Ex: Find the range of K values that makes the following closed-loop system stable.
2
Solution: For this given system, N (s)
D(s) =
(s+1)
s3 . Note, there exist three repeated pole at the origin, thus we
need to utilize the modified Nyquist Contour.
In the modified Nyquist contour there exist 4 major paths, and this we need to draw Nyquist plot based on
these 4 paths.
Re{G(jω)} < 0
Im{G(jω)} > 0 , ∀ω ∈ (0, 1)
Im{G(jω)} < 0 , ∀ω ∈ (1, ∞)
[G(jω)]ω=1 = −2 + 0j
lim |G(jω)| = ∞ & lim ∠[G(jω)] = π/2
ω→0 ω→0
lim |G(jω)| = 0 & lim ∠[G(jω)] = −π/2
ω→∞ ω→∞
2. Note that since we are intersted in the circulation around −1/K and origin in the Nyquist plot corre-
sponds to |K| → ∞, it is really not critical to correctly draw this detail. Thus we will omit it for this
problem.
3. This part is simply the conjugate of polar plot with reverse direction.
4. This part is very critical for stability analysis s = ejφ , where → 0 and φ : −π/2 → π/2 (CCW
direction). Then we can derive that
1
G ejφ ≈ = R3 ej(−3φ)
3 ej3φ
⇒ |G ejφ | ≈ R3 → ∞ , ∠[G ejφ ] ≈ −3φ
N (s) (s+1)2
As a result, we obtain the following Nyquist plot for D(s) = s3
16-12 Lecture 16
Nyquist Plot
-2
Note that the open-loop transfer function has no unstable poles, thus POL = 0. We can see that the derived
Nyquist plot divides the real axis in three different parts. If we analyze these regions separately, we can then
find a range of K values that makes the closed loop-system stable
−1
1. K ∈ (−∞, −2) → K ∈ (0, 2) ⇒ N = 2 → PCL = 2. Unstable system with 2 unstable poles.
−1
2. K ∈ (−2, 0) → K ∈ (2, ∞) ⇒ N = 0 → PCL = 0. System is stable
−1
3. K ∈ (0, ∞) → K ∈ (−∞, 0) ⇒ N = 1 → PCL = 1. Unstable system with 1 unstable pole.
Lecture 17
Lecturer: Asst. Prof. M. Mert Ankarali
We already know that a binary stability metric is not enough to characterize the system performance and that
we need metrics to evaluate how stable the system is and its robustness to perturbations. Using root-locus
techniques we talked about some “good” pole regions which provides some specifications about stability and
closed-loop performance.
Another common and powerful method is to use stability margins, specifically gain and phase margins, based
on the frequency domain analysis of a feedback-system.
Phase and gain margins are derived from the Nyquist stability criterion and it is relatively easy to compute
them only from the Polar Plot or Bode diagrams for a class of systems.
In this part of the course, we assume that
Gain Margin
For a stable-system the gain margin, gm , of a system is defined as the smallest amount that the open loop
gain can be increased before the closed loop system goes unstable.
In terms of Nyquist & polar plot, we simply choose point, σpc where the polar plot crosses the negative-real
axis and gain margin is simply equal to gm = σ1p .
Alternatively, the gain margin can be computed based on the frequency where the phase of the loop transfer
function GOL (jω) is −180o . Let ωp represent this frequency, called the phase crossover frequency. Then the
gain margin for the system is given by
1
∠[GOL (jωp )] = ± − 1800 ⇒ gm = or Gm = −20 log10 |GOL (jωp )|
|GOL (jωp )|
where Gm is the gain margin in dB scale. If the phase response never crosses the −180o , i.e. Re{G(jω)} ≥
0 ∀ ω ∈ [0, ∞] , gain margin is simply ∞. Higher the gain margin is more robust and stable closed-loop
system is.
17-1
17-2 Lecture 17
Phase Margin
The phase margin is the amount of “phase lag” required to reach the (Nyquist) stability limit.
In terms of Nyquist & polar plot, we simply choose point, where the polar plot crosses the unit-circle, and
phase margin is simply the “angular distance” between this point and the critical point −1 + 0j in CW
direction.
Alternatively, let ωgc be the gain crossover frequency, the frequency where the loop transfer function satisfies
|GOL (jωg )| = 1 (i.e. unit magnitude). The phase margin is given by
Higher the phase margin is more robust and stable closed-loop system is. Moreover, negative phase simply
shows that the closed-loop system is indeed unstable.
Note that if the G(jω) is strictly inside the unit-circle, then we can not compute the phase-crosover frequency
which simply implies that φm = ∞.
Ex: Compute the gain margin and phase margin for the following closed-loop system
-1 1
We can see that the Real part of the polar polat is always positive, thus gm = ∞. Where as the polar plot
crosses the unit circle only when ω = 0, thus φm = 1800 .
Lecture 17 17-3
Ex: Compute the gain margin and phase margin for the following closed-loop system for K = 2 and K = 4.
Nyquist plots for both gain cases is illustrated in the Figure below. We can see from the illustration that
K = 2 ⇒ φm = 90o & gm = ∞
K = 4 ⇒ φm = 60o & gm = ∞
2 K=4
1
K=2
Imaginary Axis
60o
-1
-2
-3
-1 0 1 2 3 4
Real Axis
Now let’s try to compute the phase margins analytically. Let’s start with K = 2
2
|G(jωg )| = 1 → = 1 → ωg = 1
ωg2 +1
∠[G(j)] = −2∠[j + 1] = −90o
φm = 90o
17-4 Lecture 17
Now let’s compute the closed-loop transfer function and compre the damping coefficients for both gain cases
2 1
T2 = → ζ2 = √
s2
+ 2s + 3 3
4 1
T4 = 2 → ζ2 = √
s + 2s + 5 5
We can see that as we decrease the phase margin from 90o to 60o , we also decrease the damping ration which
results in increased maximum-overshoot. In general good phase margin provides good transent performance
in time domain.
Ex: Compute the gain margin and phase margin for the following closed-loop system for K = 1 and K = 8
and comment on the stability of the system for both cases.
We already derived the Nyquist plot for the case K = 1, now let’s illustrate both Nyquist plots side-by-side.
K=1 K=8
1
2
Imaginary Axis
0 0
STABLE UNSTABLE
-2
-4
-1
-1 -0.5 0 0.5 1 -2 0 2 4 6 8
Real Axis Real Axis
Lecture 17 17-5
Ex: Compute the gain margin and phase margin for the following closed-loop system
We already derived the Nyquist plot for the case K = 1, now we have a different gain. Figure below illustrates
the zoomed Nyquist plot (which is the important part for gain and phase margin computations).
1
Imaginary Axis
→ ωg = 1
∠[G(j)] = −(90o + 450 )
-1 φm = 450
gm = ∞
-2
-3
-4
-2 -1 0 1 2
Real Axis
EE302 - Feedback Systems Spring 2019
Lecture 18
Lecturer: Asst. Prof. M. Mert Ankarali
Previously, we showed how to illustrate the frequency response function of an LTI system, G(jω), using
the polar plot and the Nyquist plot. In the Bode Plot gain, |G(jω)| and phase response, ∠[G(jω)], of the
system are illustrated separately as a function of frequency, ω. In both diagrams logarithmic scale is used
for frequency axis. On the other hand, in magnitude axis we use a special logarithmic scale in dB units,
where as for phase axis we use linear scale. Specifically Magnitude in dB scale, MdB and phase response, φ
of a transfer function, G(jω) are computed as
Now let’s write G(s) in pole-zero-gain form and analyze the magnitude (dB) and phase functions
G(s) = K (s − z1) ⋯ (s − zN) / ((s − p1) ⋯ (s − pM))

MdB{G(s)} = 20 log10 |G(jω)|
          = 20 log10 |K| + [20 log10 |jω − z1| + ⋯ + 20 log10 |jω − zN|] − [20 log10 |jω − p1| + ⋯ + 20 log10 |jω − pM|]
          = KdB + [MdB{s − z1} + ⋯ + MdB{s − zN}] − [MdB{s − p1} + ⋯ + MdB{s − pM}]

φ{G(s)} = ∠[G(jω)]
        = (∠[jω − z1] + ⋯ + ∠[jω − zN]) − (∠[jω − p1] + ⋯ + ∠[jω − pM])
        = (φ{s − z1} + ⋯ + φ{s − zN}) − (φ{s − p1} + ⋯ + φ{s − pM})
In conclusion, in order to obtain a bode diagram, we can first find phase and magnitude (dB) arguments
associated with each pole/zero/gain separately for a given frequency. After that, final magnitude (dB) and
phase arguments of G(s) are found by simply adding (and subtracting) the individual components.
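This additive decomposition is also how a Bode plot can be computed numerically. The sketch below (illustrative zero/pole/gain values, not taken from the notes) sums the dB-magnitude and phase contributions of the individual factors and checks the result against a direct evaluation of G(jω).

```python
import numpy as np

# Illustrative pole-zero-gain data (chosen only for demonstration)
K, zeros, poles = 5.0, [-2.0], [-1.0, -10.0]
w = np.logspace(-2, 3, 500)          # frequency grid (rad/s)
jw = 1j * w

# Sum of the individual magnitude (dB) and phase contributions
M_dB = 20 * np.log10(abs(K)) \
     + sum(20 * np.log10(np.abs(jw - z)) for z in zeros) \
     - sum(20 * np.log10(np.abs(jw - p)) for p in poles)
phase = sum(np.angle(jw - z, deg=True) for z in zeros) \
      - sum(np.angle(jw - p, deg=True) for p in poles)

# Direct evaluation of G(jw) for comparison
G = K * np.prod([jw - z for z in zeros], axis=0) / np.prod([jw - p for p in poles], axis=0)
assert np.allclose(M_dB, 20 * np.log10(np.abs(G)))
assert np.allclose(phase, np.degrees(np.angle(G)))
print("component-wise sums match the direct evaluation")
```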
18-1
18-2 Lecture 18
Bode Plots of 1/s and s: Let's write the magnitude and phase functions
MdB{s} = 20 log10 ω ,  φ{s} = +90°   and   MdB{1/s} = −20 log10 ω ,  φ{1/s} = −90°
(Figure: Bode plots of s and 1/s; the magnitude lines have slopes of +20 dB/decade and −20 dB/decade, and the phases are +90° and −90°, respectively.)
First let’s analyze the phase and magnitude (dB) response of G(s) = s + 1
MdB(ω) = 20 log10 |G(jω)| = 20 log10 (ω² + 1)^(1/2) = 10 log10 (ω² + 1)
φ(ω) = arctan ω
Now we will approximate the gain and phase curves using piece-wise continuous straight lines. First, approximate the magnitude response:
Low-Frequency ⇒ MdB ≈ 0 dB
High-Frequency ⇒ MdB ≈ 20 log10 ω
Note that high-frequency and low-frequency approximations intersect at ω = 1 rad/s and MdB = 0 dB
point. Now let’s approximate the phase response
Low-Frequency ⇒ φ ≈ 0°
High-Frequency ⇒ φ ≈ 90°
Medium-Frequency ⇒ φ ≈ 45° + 45° log10(ω)
Note that the low-frequency and mid-frequency approximations intersect at ω = 0.1 rad/s, whereas the high-frequency and mid-frequency approximations intersect at ω = 10 rad/s. The corner frequency of this "system" is ωc = 1 rad/s. Note that it is very easy to obtain the bode approximations of G(s) = 1/(s+1) if we know the bode approximations of G(s) = s + 1: we simply multiply both the magnitude (dB) and phase responses of s + 1 by (−1). The figure below illustrates the original bode plots (solid curves) of G1(s) = (s + 1) and G2(s) = 1/(s+1) as well as their approximations (dashed lines).
(Figure: Bode plots of G1(s) = s + 1 and G2(s) = 1/(s+1) with their ±20 dB/decade asymptotes; magnitude (dB) and phase (deg) vs. frequency (rad/s).)
For G(s) = T s + 1 = s/a + 1 (with a = 1/T), the phase is φ = arctan(Tω), and the same straight-line approximations apply with the corner moved to ω = a.
18-4 Lecture 18
Note that the low-frequency and mid-frequency approximations intersect at ω = 0.1a rad/s, whereas the high-frequency and mid-frequency approximations intersect at ω = 10a rad/s. The corner frequency of this system is ωc = a rad/s = 1/T rad/s. Note that in order to obtain the bode plot of T s + 1, we simply shift the bode plot of s + 1 (along the ω axis).
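As a small illustration of these straight-line rules, the sketch below evaluates both the exact and the approximate magnitude/phase of G(s) = Ts + 1 (with a = 1/T); the break points follow the approximations stated above.

```python
import numpy as np

def first_order_zero_bode(w, T):
    """Exact and straight-line (asymptotic) Bode data for G(s) = T*s + 1."""
    a = 1.0 / T                                  # corner frequency (rad/s)
    M_exact = 10 * np.log10((T * w) ** 2 + 1)    # 20*log10|G(jw)|
    ph_exact = np.degrees(np.arctan(T * w))
    # Magnitude asymptotes: 0 dB below the corner, +20 dB/decade above it
    M_approx = np.where(w < a, 0.0, 20 * np.log10(w / a))
    # Phase asymptotes: 0 deg below 0.1a, 90 deg above 10a, straight line in between
    ph_approx = np.clip(45 + 45 * np.log10(w / a), 0.0, 90.0)
    return M_exact, M_approx, ph_exact, ph_approx

w = np.logspace(-3, 3, 7)
M, Ma, p, pa = first_order_zero_bode(w, T=1.0)
print(np.round(np.c_[w, M, Ma, p, pa], 2))
```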
Ex: The figure below illustrates the original bode plots (solid curves) of G1(s) = (s + 10) and G2(s) = 1/(s+10) as well as their approximations (dashed lines).
(Figure: Bode plots of G1(s) = s + 10 and G2(s) = 1/(s+10) with their asymptotic approximations.)
A unity gain standard second order system can be written in the form
G(s) = ωn² / (s² + 2ζωn s + ωn²) = 1 / ((1/ωn²) s² + (2ζ/ωn) s + 1)
Case 1 : ζ = 1 First let’s analyze the bode plots for the critically damped case,
G(s) = 1 / ((1/ωn²) s² + (2ζ/ωn) s + 1) = 1 / (s/ωn + 1)²
We can easily observe that Magnitude and phase functions can be obtained as
MdB{G(s)} = 20 log10 |G(jω)| = 20 log10 |1/(jω/ωn + 1)|² = 2 MdB{1/(s/ωn + 1)}
φ{G(s)} = 2 φ{1/(s/ωn + 1)}
Ex: The figure below illustrates the original bode plots (solid curves) of G1(s) = 10/(s+10) and G2(s) = 100/(s+10)² as well as their approximations (dashed lines).
(Figure: Bode plots of G1(s) = 10/(s+10) and G2(s) = 100/(s+10)² with −20 dB/decade and −40 dB/decade high-frequency asymptotes.)
18-6 Lecture 18
Case 2: ζ > 1. The over-damped case is simply the cascade of two distinct first-order systems.
Ex: Let's analyze the bode plots for the following system
G(s) = 10 / ((s + 1)(s + 10)) = [1/(s + 1)] · [10/(s + 10)]
(Figure: Bode plots of G(s) = 10/((s+1)(s+10)) together with the asymptotic approximation.)
Lecture 18 18-7
Case 3: ζ < 1. For under-damped systems, the corner frequencies of the piece-wise linear approximations are unchanged; thus, we use the same approximation as in the critically damped case. However, as the damping ratio decreases we may observe larger differences between the actual bode plot and the approximations.
Ex: The figure below illustrates the actual bode plots (solid curves) of the unity-gain standard second-order system for several damping ratios, together with the piece-wise linear approximation (dashed lines).
(Figure: magnitude and phase of the standard second-order system for ζ = 1, 1/2, 1/4, 1/8, 1/16, together with the asymptotic approximation; magnitude (dB) and phase (deg) vs. frequency (rad/s).)
We can see that the best phase matching between the actual bode plot and the approximation is achieved when ζ = 1; however, surprisingly, the best magnitude matching among the plotted curves is achieved when ζ = 1/2. Indeed, the best match between the actual and approximate bode plots in magnitude is achieved when ζ = 1/√2.
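These observations can be checked numerically. At ω = ωn the exact gain is 1/(2ζ), so the asymptote error at the corner is −20 log10(2ζ) dB; the sketch below tabulates this corner error and the resonant peak for the damping ratios shown in the figure.

```python
import numpy as np

wn = 1.0
w = np.logspace(-2, 2, 4000)

for zeta in (1.0, 1/2, 1/np.sqrt(2), 1/4, 1/8, 1/16):
    G = wn**2 / ((1j * w)**2 + 2 * zeta * wn * (1j * w) + wn**2)
    mag_dB = 20 * np.log10(np.abs(G))
    corner_err = -20 * np.log10(2 * zeta)     # exact gain at w = wn is 1/(2*zeta)
    peak_dB = mag_dB.max()                    # resonant peak (0 dB if no peaking)
    print(f"zeta={zeta:.3f}: error at corner = {corner_err:+.2f} dB, peak = {peak_dB:.2f} dB")
```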
18-8 Lecture 18
We already know that for a feedback system, phase and gain margins can be computed based on the Frequency
Response characteristics of the open-loop transfer function, GOL (jω) (under some assumption regarding the
system properties).
Specifically, we can compute the phase-crossover frequency, ωpc, and the gain margin, gm (linear scale) and Gm (dB scale), as
∠[GOL(jωpc)] = ±180° ⇒ gm = 1/|GOL(jωpc)|  or  Gm = −20 log10 |GOL(jωpc)|
whereas the gain-crossover frequency, ωgc, and the phase margin can be computed as
|GOL(jωgc)| = 1 ⇒ φm = 180° + ∠[GOL(jωgc)]
Indeed it is generally easier to derive the phase and gain margin of a system from the bode plots compared
to the Nyquist & polar plots.
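A rough numerical implementation of these crossover-based definitions is sketched below (a simple grid search, not a robust routine); it is checked on GOL(s) = 4/(s+1)², the K = 4 case from Lecture 17.

```python
import numpy as np

def margins(G, w):
    """Rough gain/phase margin estimates from samples of G_OL(jw) on a dense grid."""
    mag = np.abs(G)
    phase = np.degrees(np.unwrap(np.angle(G)))
    # Phase margin: phase at the frequency where |G| is closest to 1
    gc = np.argmin(np.abs(mag - 1.0))
    pm = 180.0 + phase[gc] if np.any(mag >= 1.0) and np.any(mag <= 1.0) else np.inf
    # Gain margin: -|G| in dB at the frequency where the phase reaches -180 deg
    pc = np.argmin(np.abs(phase + 180.0))
    gm_dB = -20.0 * np.log10(mag[pc]) if phase.min() <= -180.0 else np.inf
    return pm, gm_dB, w[gc]

w = np.logspace(-3, 3, 200000)
G = 4.0 / (1j * w + 1.0) ** 2          # the K = 4 example from Lecture 17
pm, gm_dB, w_gc = margins(G, w)
print(f"w_gc ~ {w_gc:.3f} rad/s, PM ~ {pm:.1f} deg, GM = {gm_dB} dB")   # expect ~60 deg, inf
```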
Ex: Compute the phase margin for the following closed-loop system for K = 2 and K = 4, both from the
approximate and actual bode plots.
(Figure: Bode plots of the loop transfer function for K = 2 and K = 4; magnitude (dB) and phase (deg) vs. frequency (rad/s).)
If we label the gain-crossover frequencies and find the corresponding phase values, we can easily compute the phase margins as
K = 2 ⇒ φm = 90°
K = 4 ⇒ φm = 60°
These results verify the actual phase margin values that we previously computed using the Nyquist plot. Note that Gm is infinite for both cases.
Now let's draw the approximate bode plots (dashed lines) on top of the actual ones, as illustrated in the Figure below.
18-10 Lecture 18
(Figure: actual and approximate Bode plots for K = 2 and K = 4.)
If we approximately compute the phase margins based on the approximate bode plots we obtain that
K = 2 ⇒ φm ≈ 80°
K = 4 ⇒ φm ≈ 63°
Ex: Compute the gain margin and phase margin for the following closed-loop system for K = 1 and K = 8
both from the actual and approximate bode plots.
First let's analyze K = 1; the Figure below illustrates the actual and approximate bode plots of G(s) = 1/(s+1)³.
Lecture 18 18-11
(Figure: actual and approximate Bode plots of G(s) = 1/(s+1)³.)
We can see that the phase margin is 180° for the given system, since ωgc = 0 → φ(ωgc) = 0°. On the other hand, we can derive the following gain margin estimates from the actual and approximate bode plots
Actual : Gm ≈ 18 dB → gm ≈ 8
Approximate : Gm ≈ 20 dB → gm ≈ 10
Now let's analyze K = 8; the Figure below illustrates the actual Bode plots of G(s) = 8/(s+1)³.
18-12 Lecture 18
(Figure: Bode plots of G(s) = 8/(s+1)³.)
We can see from the actual bode-plot that Gm = 0 dB and φm = 0°. However, if we draw the approximate bode plots:
(Figure: approximate (asymptotic) Bode plots for G(s) = 8/(s+1)³ drawn on top of the actual ones.)
Lecture 18 18-13
The Figure below illustrates the actual and approximate bode plots.
(Figure: actual and approximate Bode plots used for the phase-margin comparison below.)
We can derive the following phase margin computations from the actual and approximate bode plots
Actual: φm ≈ 52.5° (ωgc ≈ 0.8 rad/s)
Approximate: φm ≈ 45° (ωgc = 1 rad/s)
EE302 - Feedback Systems Spring 2019
Lecture 19
Lecturer: Asst. Prof. M. Mert Ankarali
The lead-compensator is a controller which has the form of a first-order high-pass filter
Gc(s) = Kc (T s + 1)/(T α s + 1) ,  α ∈ (0, 1)
In general, we first design Kc based on the steady-state requirements of the system, and then design T and α based on the phase-margin requirement. First let's illustrate the bode-plots of a unity-gain lead-compensator to understand how we can utilize its properties for the design process.
(Figure: Bode plots of a unity-gain lead compensator; the magnitude rises from 0 dB to −20 log10 α dB, and the phase has a positive bump peaking between the two corner frequencies.)
19-1
19-2 Lecture 19
We can see (both from the actual and approximate bode-plots) that the phase-lead compensator is a type of high-pass filter for which the cut-off (mid) frequency is ωc = 1/(√α T). The low- and high-frequency gains are respectively 0 dB and −20 log10 α dB. The lead-compensator phase, φmax, peaks at its cut-off frequency, and we will basically try to use this positive maximum phase shift to improve the phase margin of the feedback system.
Let's derive a formula for φmax, which we will need during the design phase:
sin φmax = (1 − α)/(1 + α)  ⇒  φmax = arcsin[(1 − α)/(1 + α)]
As expected, when α = 1 we get φmax = 0, since the numerator and denominator time constants become equal in this case. Accordingly, we can see that
α ↓  ⇒  φmax ↑
The theoretical maximum value of φmax is 90°; however, practically φmax < 75° for analog lead-compensator circuits. Another important factor that we need to pay attention to is the gain shift of the lead-compensator at the cut-off frequency
MdB(jωc) = −10 log10 α  &  |Gc(jωc)| = 1/√α
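As a quick numerical illustration of these two properties, the sketch below (illustrative values T = 1, α = 1/4) evaluates a unity-gain lead compensator and confirms that the phase peaks at ωc = 1/(T√α) with value arcsin[(1 − α)/(1 + α)], and that the gain there is −10 log10 α dB.

```python
import numpy as np

T, alpha = 1.0, 0.25                       # illustrative values (alpha in (0,1))
w = np.logspace(-3, 3, 200001)
Gc = (T * 1j * w + 1) / (alpha * T * 1j * w + 1)

phase = np.degrees(np.angle(Gc))
k = np.argmax(phase)                       # frequency index of the phase peak
wc = 1.0 / (T * np.sqrt(alpha))            # predicted mid/cut-off frequency
phi_max = np.degrees(np.arcsin((1 - alpha) / (1 + alpha)))
gain_wc_dB = 20 * np.log10(np.abs((T * 1j * wc + 1) / (alpha * T * 1j * wc + 1)))

print(f"peak phase : {phase[k]:.2f} deg at w = {w[k]:.3f} rad/s")
print(f"predicted  : {phi_max:.2f} deg at w = {wc:.3f} rad/s")
print(f"gain at wc : {gain_wc_dB:.2f} dB (predicted {-10*np.log10(alpha):.2f} dB)")
```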
Ex: Consider the feedback system illustrated below. It is given that G(s) = 1/(s(s+1)) and we want to design a lead-compensator, Gc(s) = Kc (T s + 1)/(T α s + 1), such that the unit-ramp steady-state error satisfies ess = 0.1 and the phase margin of the compensated system satisfies φ*m ∈ [45°, 55°].
Solution:
Step 1: We design/compute Kc based on the steady-state requirement on the unit-ramp error.
ess = 1/Kv = 1/Kc = 0.1  →  Kc = 10     (19.1)
Lecture 19 19-3
Step 2: Define Ḡ(s) = Kc G(s), draw the bode-plot for Ḡ(s), and compute the gain-crossover frequency, ωgc, and the phase margin, φM, of the uncompensated Ḡ(s). The bode plot of Ḡ(s) and its approximation is illustrated in the Figure below.
(Figure: Bode plot of Ḡ(s) = 10/(s(s+1)) and its straight-line approximation.)
If we concentrate on the approximate bode-plots, we can estimate the gain-crossover frequency and the phase margin as
ωgc ≈ √10 rad/s ≈ 3.16 rad/s ,  φM ≈ 22.5°
If we compute the actual values from the actual Bode plot, we obtain ωgc ≈ 3.1 rad/s and φM ≈ 17.5°.
Step 3: Compute the required phase increment, Δφ, to be added by the compensator and compute α:
φmax ≈ Δφ = φ*M − φM + (5° ∼ 10°)
So for the given problem, we can compute φmax and α as
φmax ≈ 47.5° − 17.5° + 7° = 37°
α = (1 − sin φmax)/(1 + sin φmax) ≈ 1/4
Step 4: Estimate the "new" gain-crossover frequency, ω̂gc, and place the peak of the lead-compensator at this estimated ω̂gc.
We already know that the lead-compensator, at its center/cut-off frequency, shifts the bode magnitude by −10 log10 α dB (equivalently, it scales the gain by 1/√α), and this causes a shift in the gain-crossover frequency. For this reason, we can estimate the new gain-crossover frequency as the point where the bode magnitude of Ḡ(s) crosses 10 log10 α, i.e.
MdB(jω̂gc) = 10 log10 α   or   |Ḡ(jω̂gc)| = √α
In our problem,
MdB(jω̂gc) ≈ −6 dB   or   |Ḡ(jω̂gc)| = 1/2
We can indeed estimate the new gain-crossover frequency graphically from the bode-plot, as the figure below illustrates.
(Figure: Bode magnitude of Ḡ(s) with the −6 dB level marked; the crossing gives the new gain-crossover frequency.)
Lecture 19 19-5
where ω̂gc ≈ 4.5 rad/s. We can also estimate the new gain-crossover frequency numerically:
|Ḡ(jω̂gc)| = 1/2
100 / (ω̂gc² (ω̂gc² + 1)) = 1/4
ω̂gc⁴ + ω̂gc² − 400 = 0
ω̂gc ≈ 400^(1/4) ≈ 4.47 rad/s
ω̂gc = ωc = 1/(√α T)  ⇒  T = 1/(√α ω̂gc)
In our example,
T = 1/((1/2)(4.5)) ≈ 0.45  ⇒  Gc(s) = 10 (0.45 s + 1)/(0.1125 s + 1) = 40 (s + 2.25)/(s + 9)
The Figure below illustrates the bode plots of both the compensated and the uncompensated (Ḡ(s) = Kc G(s)) systems. The compensated system has a phase margin of φm = 49.7°, which meets the requirements.
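This design can also be verified numerically; a minimal sketch (using the plant G(s) = 1/(s(s+1)) and the compensator Gc(s) = 10(0.45s+1)/(0.1125s+1) derived above) recomputes the gain-crossover frequency and the phase margin of the compensated loop.

```python
import numpy as np

w = np.logspace(-2, 3, 500001)
s = 1j * w
G  = 1.0 / (s * (s + 1.0))                      # plant
Gc = 10.0 * (0.45 * s + 1) / (0.1125 * s + 1)   # designed lead compensator
L  = Gc * G                                     # compensated open loop

gc = np.argmin(np.abs(np.abs(L) - 1.0))         # gain-crossover index
pm = 180.0 + np.degrees(np.angle(L[gc]))
print(f"w_gc ~ {w[gc]:.2f} rad/s, PM ~ {pm:.1f} deg")   # expect roughly 4.4 rad/s and ~50 deg
```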
(Figure: Bode plots of the uncompensated Ḡ(s) and the lead-compensated loop.)
19-6 Lecture 19
Now let's compute the new phase margin directly using the new gain-crossover frequency:
φm = 180° + ∠[Gc(jω̂gc) Ḡ(jω̂gc)] ≈ 180° − 90° − arctan(4.47) + 37° ≈ 49.6°,
which is consistent with the phase margin on the bode-plot. In the Figure below, we compare the closed-loop step responses of the uncompensated and compensated closed-loop transfer functions. We can clearly see that the lead-compensator substantially improves both the settling-time and overshoot performance.
(Figure: closed-loop step responses. Uncompensated: overshoot 60%, settling time 7.3 s. Compensated: overshoot 22%, settling time 1.2 s.)
In summary, the lead-compensator design procedure is:
1. Design/compute Kc based on the steady-state requirements.
2. Draw the bode-plot of Ḡ(s) = Kc G(s) and compute the gain-crossover frequency, ωgc, and the phase margin, φM, of the uncompensated system.
3. Compute the required phase increment, Δφ, to be added by the compensator and design/compute α.
4. Estimate the "new" gain-crossover frequency, ω̂gc, and place the peak of the lead-compensator at this estimated ω̂gc.
5. Check the phase-margin; if it does not meet the requirements, increase Δφ and repeat the process.
Lecture 19 19-7
The lag-compensator is a controller which has the form of a first-order low-pass filter
Gc(s) = Kc (T s + 1)/(T α s + 1) ,  α ∈ (1, ∞)
In general, we first design Kc based on the steady-state requirements of the system, then design α based
on the phase-margin requirement, and finally choose a T such that phase-lag of the compensator does not
interfere with the gain-crossover frequency.
First let’s illustrate the bode-plots of a unity gain lag-compensator to understand how we can utilize its
properties for the design process.
(Figure: Bode plots of a unity-gain lag compensator; the magnitude drops by 20 log10 α dB at high frequencies, and the phase has a negative bump in the low/mid frequency region.)
In lag-compensator design, we basically use the negative gain shift of the compensator in the high-frequency region, and we try to push the low- and mid-frequency region to the left (along the frequency axis) so that they do not interfere with the gain-crossover frequency. For this reason, the design process is easier compared to the lead-compensator.
We will illustrate the lag-compensator design process on the same example.
Ex: Consider the feedback system that we analyzed previously in the lead-compensator case. The plant is the same, G(s) = 1/(s(s+1)). However, now we want to design a lag-compensator, Gc(s) = Kc (T s + 1)/(T α s + 1), α ∈ (1, ∞), such that the unit-ramp steady-state error satisfies ess = 0.1 and the phase margin of the compensated system satisfies φ*m > 40°.
19-8 Lecture 19
Solution:
Step 1: Same as the lead-design, we design/compute Kc based on the steady-state requirement on the
unit-ramp error.
ess = 1/Kv = 1/Kc = 0.1  →  Kc = 10     (19.2)
(Figure: Bode plot of Ḡ(s) = 10/(s(s+1)) and its approximation, used to pick the desired new gain-crossover frequency.)
Lecture 19 19-9
From the bode-plots we can observe that at the desired ω*gc ≈ 1 rad/s the bode-plot approximation has a magnitude of 20 dB, whereas the magnitude in the actual bode plot is approximately 17 dB. In this example, let's use the magnitude of the approximation in the next Step.
Step 4: Compute α to compensate the magnitude at the new gain-crossover frequency
20 log10 α = 20 log10 |Ḡ(jω*gc)|   or   α = |Ḡ(jω*gc)|
In our example, α can be computed as
20 log10 α ≈ 20 dB  ⇒  α ≈ 10
Step 5: Choose T such that 10/T ≤ ω*gc. Note that 10/T is (approximately) the frequency where the phase of the compensator re-approaches zero. This is required so that the negative phase bump of the compensator does not affect the phase margin.
1/T ≈ 0.1 ω*gc  ⇒  T ≈ 10
Gc(s) = 10 (10 s + 1)/(100 s + 1) = (s + 0.1)/(s + 0.01)
The Figure below illustrates the bode plots of the uncompensated (Ḡ(s) = Kc G(s)) system, the designed lag-compensator, and the compensated system. The compensated system has a phase margin of φm = 45°, which meets the requirements.
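The same kind of numerical check works for the lag design (plant G(s) = 1/(s(s+1)) and Gc(s) = 10(10s+1)/(100s+1) as designed above):

```python
import numpy as np

w = np.logspace(-4, 2, 600001)
s = 1j * w
G  = 1.0 / (s * (s + 1.0))                      # plant
Gc = 10.0 * (10.0 * s + 1) / (100.0 * s + 1)    # designed lag compensator
L  = Gc * G                                     # compensated open loop

gc = np.argmin(np.abs(np.abs(L) - 1.0))
pm = 180.0 + np.degrees(np.angle(L[gc]))
print(f"w_gc ~ {w[gc]:.3f} rad/s, PM ~ {pm:.1f} deg")   # expect ~0.8 rad/s and ~45 deg
```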
(Figure: Bode plots of the uncompensated system, the lag compensator, and the compensated system.)
19-10 Lecture 19
In the Figure below, we compare the closed-loop step responses of the uncompensated and compensated closed-loop transfer functions. We can see that, similar to the lead-compensator, the lag-compensator improves the overshoot performance. However, the settling time of the compensated system is approximately two times that of the original system. This is the major drawback of the lag-compensator.
(Figure: closed-loop step responses. Uncompensated: overshoot 60.5%, settling time 7.31 s. Compensated: overshoot 27.8%, settling time 14.8 s.)
The reason behind the reduced convergence-speed performance is that the new gain-crossover frequency is less than the original gain-crossover frequency. The phase margin is closely related to the overshoot performance; on the other hand, the gain-crossover frequency determines the bandwidth of the system, which is closely related to the rise and settling times.
5. Check the phase-margin; if it does not meet the requirements, change Δφ (i.e. the targeted crossover) and repeat the process.
EE302 - Feedback Systems Spring 2019
Lecture 20
Lecturer: Asst. Prof. M. Mert Ankarali
20.1.1 State-Space to TF
Let’s first re-visit the conversion from a state-space representation to the transfer function representations
for LTI systems.
Note that an SS representation of an nth order LTI system has the form below:
ẋ(t) = A x(t) + B u(t) ,  y(t) = C x(t) + D u(t)
In order to convert state-space to transfer function, we take the Laplace transform of both sides of the state equation (with zero initial conditions):
s X(s) = A X(s) + B U(s)  ⇒  X(s) = (sI − A)⁻¹ B U(s)
Y(s) = [C (sI − A)⁻¹ B + D] U(s)  ⇒  G(s) = Y(s)/U(s) = n(s)/d(s)
20-1
20-2 Lecture 20
If p is a pole of G(s), then d(s)|p = 0. Now let’s analyze the dependence of G(s) to the state-space form.
G(s) = C (sI − A)⁻¹ B + D
(sI − A)⁻¹ = Adj(sI − A) / det(sI − A)
G(s) = [C Adj(sI − A) B + D det(sI − A)] / det(sI − A)
Obviously p is an eigenvalue of A.
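The relation G(s) = C(sI − A)⁻¹B + D can also be evaluated directly in code; the small sketch below uses, purely as an illustration, the (A, B, C, D) of the stability example later in this lecture, for which it turns out that G(s) = −1/(s+1).

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, -1.0]])
D = np.array([[0.0]])

def G(s):
    """Evaluate G(s) = C (sI - A)^{-1} B + D at a (real) test point s."""
    return (C @ np.linalg.inv(s * np.eye(A.shape[0]) - A) @ B + D).item()

print("eigenvalues of A:", np.linalg.eigvals(A))   # {+1, -1}
for s0 in (2.0, 0.5, 10.0):
    print(f"G({s0}) = {G(s0):.4f}   vs   -1/(s+1) = {-1/(s0+1):.4f}")
```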
Let P ∈ R^(n×n) be any non-singular matrix and define a new state vector x̂(t) = P x(t), i.e. x(t) = P⁻¹ x̂(t). Then
P⁻¹ x̂˙(t) = A P⁻¹ x̂(t) + B u(t) ,  y(t) = C P⁻¹ x̂(t) + D u(t)
x̂˙(t) = P A P⁻¹ x̂(t) + P B u(t) ,  y(t) = C P⁻¹ x̂(t) + D u(t)
x̂˙(t) = Â x̂(t) + B̂ u(t)
y(t) = Ĉ x̂(t) + D̂ u(t)
Â = P A P⁻¹ ,  B̂ = P B ,  Ĉ = C P⁻¹ ,  D̂ = D
Since there exist infinitely many non-singular n × n matrices, for a given LTI system, there exist infinitely
many different but equivalent state-space representations.
Example: Show that A ∈ Rn×n and P −1 AP , where P ∈ Rn×n and det(P ) 6= 0, have the same characteristic
equation
Solution:
det(λI − P⁻¹AP) = det(λP⁻¹IP − P⁻¹AP)
               = det(P⁻¹ (λI − A) P)
               = det(P⁻¹) det(λI − A) det(P) = det(λI − A)
x̂(t) = P x(t) ,  Â = P A P⁻¹ ,  B̂ = P B ,  Ĉ = C P⁻¹ ,  D̂ = D
We know that for a given LTI system, there exist infinitely many different SS representations. We previously
learnt some methods to convert a TF/ODE into State-Space form. We will now re-visit them and talk about
the canonical state-space forms.
For the sake of clarity, derivations are given for a general 3rd order LTI system.
In this method of realization, we use the fact that the system is LTI. Let's consider the transfer function of the system and perform some LTI operations.
Y(s) = (b3 s³ + b2 s² + b1 s + b0) / (s³ + a2 s² + a1 s + a0) · U(s)
     = (b3 s³ + b2 s² + b1 s + b0) · [1 / (s³ + a2 s² + a1 s + a0)] · U(s)
     = G2(s) G1(s) U(s) ,  where
G1(s) = H(s)/U(s) = 1 / (s³ + a2 s² + a1 s + a0)
G2(s) = Y(s)/H(s) = b3 s³ + b2 s² + b1 s + b0
As you can see, we introduced an intermediate variable h(t), whose Laplace transform is H(s). The first transfer function contains all of the denominator (pole) dynamics; it operates on u(t) and produces the intermediate signal h(t). The second transfer function is a "non-causal" (purely polynomial) system that operates on h(t) and produces the output y(t). If we write the ODEs of both systems we obtain
d³h/dt³ = −a2 ḧ − a1 ḣ − a0 h + u
y = b3 (d³h/dt³) + b2 ḧ + b1 ḣ + b0 h
Now let the state variables be x = [x1; x2; x3] = [h; ḣ; ḧ]. Then the individual state equations take the form
x˙1 = x2
x˙2 = x3
x˙3 = −a2 x3 − a1 x2 − a0 x1 + u
y = b3 (−a2 x3 − a1 x2 − a0 x1 + u) + b2 x3 + b1 x2 + b0 x1
= (b0 − b3 a0 )x1 + (b1 − b3 a1 )x2 + (b2 − b3 a2 )x3 + b3 u
If we obtain a state-space model from this approach, the form will be in controllable canonical form.
For a general nth order transfer function controllable canonical form has the following A , B , C , & D matrices
A = [  0    1    0   ⋯   0
       0    0    1   ⋯   0
       ⋮    ⋮    ⋮        ⋮
       0    0    0   ⋯   1
      −a0  −a1  −a2  ⋯  −a(n−1) ] ,   B = [0; 0; ⋮; 0; 1]

C = [ (b0 − bn a0)  (b1 − bn a1)  ⋯  (b(n−1) − bn a(n−1)) ] ,   D = bn
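A small helper that assembles these matrices from the transfer-function coefficients might look like the sketch below (assuming a proper TF with a monic denominator and the numerator padded to the same length; the function name is only illustrative).

```python
import numpy as np

def controllable_canonical(num, den):
    """Controllable canonical (A, B, C, D) for G(s) = num(s)/den(s).

    num, den: coefficients from the highest power of s down to the constant term,
    with den monic and len(num) == len(den) (pad num with leading zeros if needed).
    """
    num = np.asarray(num, dtype=float)
    den = np.asarray(den, dtype=float)
    n = len(den) - 1
    a = den[1:]                       # a_{n-1}, ..., a_0 (after the leading 1)
    bn = num[0]
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)        # super-diagonal of ones
    A[-1, :] = -a[::-1]               # last row: -a_0, -a_1, ..., -a_{n-1}
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = (num[1:][::-1] - bn * a[::-1]).reshape(1, n)   # (b_i - b_n a_i), i = 0..n-1
    D = np.array([[bn]])
    return A, B, C, D

# Example: G(s) = (s^2 + 8s + 10) / (s^2 + 3s + 2), used later in this lecture
A, B, C, D = controllable_canonical([1, 8, 10], [1, 3, 2])
print(A, B, C, D, sep="\n")
```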
In this method we will obtain a different minimal state-space realization, whose form is called the observable canonical form. The process is different and the state-space structure will have a different topology. Let's start with a 3rd order transfer function and perform some grouping based on the powers of s:
Y(s) = (b3 s³ + b2 s² + b1 s + b0) / (s³ + a2 s² + a1 s + a0) · U(s)
Y(s) (s³ + a2 s² + a1 s + a0) = (b3 s³ + b2 s² + b1 s + b0) U(s)
s³ Y(s) = b3 s³ U(s) + s² (−a2 Y(s) + b2 U(s)) + s (−a1 Y(s) + b1 U(s)) + (−a0 Y(s) + b0 U(s))
Lecture 20 20-5
Let's multiply both sides by 1/s³ and perform further grouping:
Y(s) = b3 U(s) + (1/s)(−a2 Y(s) + b2 U(s)) + (1/s²)(−a1 Y(s) + b1 U(s)) + (1/s³)(−a0 Y(s) + b0 U(s))
Y(s) = b3 U(s) + (1/s)[ (−a2 Y(s) + b2 U(s)) + (1/s)[ (−a1 Y(s) + b1 U(s)) + (1/s)(−a0 Y(s) + b0 U(s)) ] ]
Let the Laplace domain representations of the state variables, X(s) = [X1(s); X2(s); X3(s)], be defined as
X1(s) = (1/s)(−a0 Y(s) + b0 U(s))
X2(s) = (1/s)(−a1 Y(s) + b1 U(s)) + (1/s²)(−a0 Y(s) + b0 U(s)) = (1/s)[ (−a1 Y(s) + b1 U(s)) + X1(s) ]
X3(s) = (1/s)(−a2 Y(s) + b2 U(s)) + (1/s²)(−a1 Y(s) + b1 U(s)) + (1/s³)(−a0 Y(s) + b0 U(s)) = (1/s)[ (−a2 Y(s) + b2 U(s)) + X2(s) ]
In this context, the output equation in the s and time domains simply takes the form
Y(s) = b3 U(s) + X3(s)  ⇔  y(t) = x3(t) + b3 u(t)
Accordingly, the state equations (in the s and time domains) take the form
s X1(s) = −a0 Y(s) + b0 U(s)  ⇔  ẋ1 = −a0 x3 + (b0 − b3 a0) u
s X2(s) = X1(s) − a1 Y(s) + b1 U(s)  ⇔  ẋ2 = x1 − a1 x3 + (b1 − b3 a1) u
s X3(s) = X2(s) − a2 Y(s) + b2 U(s)  ⇔  ẋ3 = x2 − a2 x3 + (b2 − b3 a2) u
If we obtain a state-space model from this method, the form will be in observable canonical form; thus we can also call this representation the observable canonical realization. This form/representation is the dual of the previous one.
For a general nth order system, the observable canonical form has the following A, B, C, & D matrices
A = [ 0  0  ⋯  0  −a0
      1  0  ⋯  0  −a1
      ⋮  ⋮      ⋮   ⋮
      0  0  ⋯  0  −a(n−2)
      0  0  ⋯  1  −a(n−1) ] ,   B = [ (b0 − bn a0); (b1 − bn a1); ⋮; (b(n−2) − bn a(n−2)); (b(n−1) − bn a(n−1)) ]
C = [ 0  0  ⋯  0  1 ] ,   D = bn
20-6 Lecture 20
If the transfer function of the LTI system has distinct poles, we can expand it using partial fraction expansion
Y(s) = [ b3 + c1/(s − p1) + c2/(s − p2) + c3/(s − p3) ] U(s)
Now let's concentrate on the candidate "state variables" and try to write the state evolution equations
X1(s) = 1/(s − p1) U(s)  →  ẋ1 = p1 x1 + u
X2(s) = 1/(s − p2) U(s)  →  ẋ2 = p2 x2 + u
X3(s) = 1/(s − p3) U(s)  →  ẋ3 = p3 x3 + u
whereas the output equation can be derived as
y(t) = c1 x1(t) + c2 x2(t) + c3 x3(t) + b3 u(t)
If we combine the state and output equations, we can obtain the state-space form as
ẋ(t) = [ p1 0 0; 0 p2 0; 0 0 p3 ] x(t) + [1; 1; 1] u(t)
y(t) = [ c1 c2 c3 ] x(t) + b3 u(t)
where
x = [x1(t); x2(t); x3(t)] ,  A = [ p1 0 0; 0 p2 0; 0 0 p3 ] ,  B = [1; 1; 1] ,  C = [ c1 c2 c3 ] ,  D = b3
The form obtained with this approach is called the diagonal canonical form. Obviously, this form is not applicable for systems that have repeated poles.
For a general nth order system with distinct poles, the diagonal canonical form has the following A, B, C, & D matrices
A = [ p1  0  ⋯  0       0
      0   p2 ⋯  0       0
      ⋮   ⋮      ⋮       ⋮
      0   0  ⋯  p(n−1)  0
      0   0  ⋯  0       pn ] ,   B = [1; 1; ⋮; 1; 1]
C = [ c1  c2  ⋯  c(n−1)  cn ] ,   D = bn
Lecture 20 20-7
Ex: Given the transfer function
G(s) = (s² + 8 s + 10) / (s² + 3 s + 2) ,
find a controllable, observable, and diagonal canonical state-space representation of the given TF.
Solution:
If we follow the derivation of the controllable canonical form for a second order system, we obtain the following structure
ẋ = [ 0 1; −a0 −a1 ] x + [0; 1] u
y = [ (b0 − b2 a0)  (b1 − b2 a1) ] x + [b2] u
where
a0 = 2 , a1 = 3 , b0 = 10 , b1 = 8 , & b2 = 1
The observable canonical form is the dual of the controllable canonical form; thus, for the given system, we know that
A_OCF = A_CCFᵀ = [ 0 −2; 1 −3 ]
B_OCF = C_CCFᵀ = [8; 5]
C_OCF = B_CCFᵀ = [ 0 1 ]
D_OCF = D_CCF = [1]
In order to find the diagonal canonical form, we need to perform partial fraction expansion
G(s) = (s² + 8 s + 10) / (s² + 3 s + 2) = 1 + 3/(s + 1) + 2/(s + 2)
then the SS matrices for the diagonal canonical form can simply be derived as
A_DCF = [ −1 0; 0 −2 ] ,  B_DCF = [1; 1] ,  C_DCF = [ 3 2 ] ,  D_DCF = [1]
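As a sanity check, all three realizations should produce the same transfer function; the sketch below evaluates C(sI − A)⁻¹B + D for each of them at a few test points and compares with the original G(s).

```python
import numpy as np

def tf_eval(A, B, C, D, s):
    """Evaluate C (sI - A)^{-1} B + D at the scalar point s."""
    return (C @ np.linalg.inv(s * np.eye(A.shape[0]) - A) @ B + D).item()

CCF = (np.array([[0., 1.], [-2., -3.]]), np.array([[0.], [1.]]),
       np.array([[8., 5.]]),             np.array([[1.]]))
OCF = (np.array([[0., -2.], [1., -3.]]), np.array([[8.], [5.]]),
       np.array([[0., 1.]]),             np.array([[1.]]))
DCF = (np.array([[-1., 0.], [0., -2.]]), np.array([[1.], [1.]]),
       np.array([[3., 2.]]),             np.array([[1.]]))

for s0 in (0.0, 1.0, 5.0):
    direct = (s0**2 + 8*s0 + 10) / (s0**2 + 3*s0 + 2)
    vals = [tf_eval(*R, s0) for R in (CCF, OCF, DCF)]
    print(s0, direct, vals)
```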
20-8 Lecture 20
Ex: Consider an LTI system with state-space representation (A, B, C, D) and its "dual" representation
x̄˙(t) = Aᵀ x̄(t) + Cᵀ u(t) ,  y(t) = Bᵀ x̄(t) + D u(t)
Show that these two state-space representations result in the same transfer function. (Indeed, since the transfer function is scalar, G(s) = G(s)ᵀ = [C(sI − A)⁻¹B + D]ᵀ = Bᵀ(sI − Aᵀ)⁻¹Cᵀ + D.)
This result also shows that the controllable and observable canonical representations are similar.
Lecture 20 20-9
A given LTI system is called asymptotically stable if, with u(t) = 0 and for all x(0) ∈ Rⁿ, we have
lim_{t→∞} ||x(t)|| = 0
Theorem: A state-space representation is asymptotically stable if and only if all of the eigenvalues of the system matrix, A, have negative real parts, i.e. Re{λi(A)} < 0 for all i.
Ex: Show that if a state-space representation is asymptotically stable then its transfer function representation
is BIBO stable.
Solution: Previously we showed that if p is a pole of G(s), then it is also an eigenvalue of A, since we can write G(s) as
G(s) = [C Adj(sI − A) B + D det(sI − A)] / det(sI − A) .
If the state-space representation is asymptotically stable, then each pole pi of G(s) satisfies Re{pi} < 0, which makes the input–output dynamics BIBO stable. In conclusion,
Asymptotically stable ⇒ BIBO stable
Example: Consider the following state-space form of a CT system
ẋ(t) = [ 0 1; 1 0 ] x(t) + [0; 1] u(t)
y(t) = [ 1 −1 ] x(t)
det(λI − A) = det [ λ −1; −1 λ ] = λ² − 1  ⇒  λ1,2 = ±1
Thus the system is NOT Asymptotically Stable. Now let’s check BIBO stability condition. First, compute
the G(s)
20-10 Lecture 20
G(s) = [ 1 −1 ] [ s −1; −1 s ]⁻¹ [0; 1]
     = [ 1 −1 ] · (1/(s² − 1)) · [ s 1; 1 s ] [0; 1]
     = (1/(s² − 1)) [ 1 −1 ] [1; s]
     = (1 − s)/(s² − 1) = −(s − 1)/(s² − 1)
     = −1/(s + 1)
The only pole of G(s) is at s = −1, so the transfer function is BIBO stable even though the state-space representation is not asymptotically stable; the unstable eigenvalue at λ = +1 is cancelled in the input–output map. Hence, BIBO stability does not imply asymptotic stability.
Lecture 21
Lecturer: Asst. Prof. M. Mert Ankarali
21.1 Reachability/Controllability
• A state xd is said to be reachable if there exists a finite time interval t ∈ [0, tf] and an input signal defined on this interval, u(t), that transfers the state vector x(t) from the origin (i.e. x(0) = 0) to the state xd within this time interval, i.e. x(tf) = xd.
• A state xd is said to be controllable if there exists a finite time interval t ∈ [0, tf] and an input signal defined on this interval, u(t), that transfers the state vector x(t) from the initial state xd (i.e. x(0) = xd) to the origin within this time interval, i.e. x(tf) = 0.
For CT systems, xd ∈ R (the reachable set) if and only if xd ∈ C (the controllable set); i.e. the reachability and controllability conditions are equivalent.
• If the reachable (or controllable) set is the entire state space, i.e., if R = Rn , then the system is called
fully reachable/controllable.
One way of testing reachability/controllability is checking the rank (or the range space) of the reachability/controllability matrix
Q = [ B  AB  ⋯  A^(n−1)B ]
rank(Q) = n
or equivalently
Ra(Q) = Rⁿ
21-1
21-2 Lecture 21
Ex: Consider the system ẋ = [ 0 1; 0 0 ] x + B u with the two candidate input matrices B1 = [1; 0] and B2 = [0; 1]. Determine whether the system is fully controllable/reachable for each case.
Solution:
Let's start with B1 and derive the controllability matrix
Q1 = [ B1  AB1 ] = [ 1 0; 0 0 ] ,  det(Q1) = 0
Thus the pair (A, B1) is not fully controllable. Indeed, if we write the state equations explicitly,
x˙1 = x2 + u
x˙2 = 0
Let's go with the reachability definition, which is based on starting from zero initial conditions and going to a desired state; we can derive the following relations
x1(T) = ∫₀ᵀ u(t) dt ,  x2(T) = 0
Neither the input nor the first state has an effect on the second state, so x2 = α for α ≠ 0 is not reachable (or controllable). However, it is also easy to see that we can always find a u(t) that will drive x1 to a desired value x1*(T).
Now let’s analyze the case with B2
Q2 = [ B2  AB2 ] = [ 0 1; 1 0 ] ,  det(Q2) = −1 ≠ 0
thus the pair (A, B2) is fully controllable/reachable.
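The determinant checks above generalize to a rank test; a short numpy sketch for this example (with the A, B1, B2 given above):

```python
import numpy as np

A  = np.array([[0.0, 1.0], [0.0, 0.0]])
B1 = np.array([[1.0], [0.0]])
B2 = np.array([[0.0], [1.0]])

def ctrb(A, B):
    """Controllability matrix Q = [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

for name, B in (("B1", B1), ("B2", B2)):
    Q = ctrb(A, B)
    print(name, "rank =", np.linalg.matrix_rank(Q), "of", A.shape[0])
```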
Remark: A state-space representation that is in controllable canonical form is always fully controllable.
Lecture 21 21-3
Consider the state-feedback control law u(t) = −K x(t); the system matrix of the resulting autonomous closed-loop system is Â = A − BK. The important question is how to choose K. Note that
K ∈ R^(1×n) for the single-input case.
As in all of the control design techniques, the most critical criterion is stability, thus we want all of the
eigenvalues to be in the open-left-half s-plane. However, we know that there could be different requirements
on the poles/eigenvalues of the system.
The fundamental principle of "pole-placement" design is that we first define a desired closed-loop eigenvalue set E* = {λ*1, ⋯, λ*n}, and then, if possible, we choose K* such that the closed-loop eigenvalues match the desired ones.
The necessary and sufficient condition on arbitrary pole-placement is that the system should be fully Con-
trollable/Reachable.
In Pole-Placement, first step is computing the desired characteristic polynomial.
E ∗ = {λ∗1 , · · · , λ∗n }
p∗ (s) = (s − λ∗1 ) · · · (s − λ∗n )
= sn + a∗1 sn−1 + · · · + a∗n−1 s + a∗n
Then we tune K such that
det (sI − (A − BK)) = p∗ (s)
Ex (direct method): Consider the system ẋ = [ 1 0; 0 2 ] x + [1; 1] u. Design a state-feedback gain K such that the closed-loop poles are located at λ1,2 = −1, i.e.
p*(s) = s² + 2s + 1
Let K = [ k1 k2 ]; then the characteristic equation of Â = A − BK can be computed as
det(sI − (A − BK)) = det [ s − 1 + k1   k2 ;  k1   s − 2 + k2 ]
                   = s² + s(k1 + k2 − 3) + (2 − 2k1 − k2)
s² + s(k1 + k2 − 3) + (2 − 2k1 − k2) = s² + 2s + 1
k1 + k2 = 5
2k1 + k2 = 1
k1 = −4 ,  k2 = 9
Thus K = [ −4 9 ].
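The result is easy to verify numerically; the sketch below (with the (A, B) pair assumed in this example) checks the closed-loop eigenvalues and the closed-loop characteristic polynomial.

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0], [1.0]])
K = np.array([[-4.0, 9.0]])

# Closed-loop system matrix and its eigenvalues (both should be at -1)
print(np.linalg.eigvals(A - B @ K))

# Cross-check against the desired characteristic polynomial s^2 + 2s + 1
print(np.poly(A - B @ K))   # -> [1, 2, 1]
```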
Let's assume that the state-space representation is in controllable canonical form and that we have access to all of the states of this form:
ẋ = [  0    1    0    ⋯   0
       0    0    1    ⋯   0
       ⋮    ⋮    ⋮         ⋮
       0    0    0    ⋯   1
      −an  −a(n−1)  −a(n−2)  ⋯  −a1 ] x + [0; 0; ⋮; 0; 1] u
Let K = [ kn ⋯ k1 ] and u = r − Kx; then the closed-loop system takes the form
ẋ = [  0    1    0    ⋯   0
       0    0    1    ⋯   0
       ⋮    ⋮    ⋮         ⋮
       0    0    0    ⋯   1
      −(an + kn)  −(a(n−1) + k(n−1))  −(a(n−2) + k(n−2))  ⋯  −(a1 + k1) ] x + [0; 0; ⋮; 0; 1] r
Matching the closed-loop characteristic polynomial with the desired one, p*(s) = sⁿ + a*1 s^(n−1) + ⋯ + a*n, the required gain is simply
K = [ (a*n − an)  ⋯  (a*1 − a1) ]
Lecture 21 21-5
However, what if the system is not in controllable canonical form? We can find a transformation that brings the system into the controllable canonical representation.
The controllability matrix of a state-space representation is given as
Q = [ B  AB  ⋯  A^(n−1)B ]
Define T = QW and x(t) = T x̂(t), so that
x̂˙ = T⁻¹ A T x̂ + T⁻¹ B u
where
W = [ a(n−1)  a(n−2)  ⋯  a1  1
      a(n−2)  a(n−3)  ⋯  1   0
      ⋮       ⋮           ⋮   ⋮
      a1      1       ⋯  0   0
      1       0       ⋯  0   0 ]
We know how to design a state-feedback gain K̂ for the controllable canonical form. Given K̂, the control input u(t) is given as
u(t) = r − K̂ x̂(t) = r − K̂ T⁻¹ x(t) ,  i.e.  K = K̂ T⁻¹
Ex: For the same system (A = [ 1 0; 0 2 ], B = [1; 1]), design a state-feedback law using the controllable canonical form approach such that the poles are located at λ1,2 = −1.
Solution: The characteristic equation of A can be derived as
det [ s − 1   0; 0   s − 2 ] = s² − 3s + 2 ,  i.e.  a1 = −3 , a2 = 2
21-6 Lecture 21
Given that the desired characteristic polynomial is p*(s) = s² + 2s + 1, the gain K̂ of the controllable canonical form can be computed as
K̂ = [ a*2 − a2   a*1 − a1 ] = [ 1 − 2   2 − (−3) ] = [ −1  5 ]
Transforming back to the original coordinates with T = QW, where Q = [ B AB ] = [ 1 1; 1 2 ] and W = [ a1 1; 1 0 ] = [ −3 1; 1 0 ], gives T = [ −2 1; −1 1 ] and
K = K̂ T⁻¹ = [ −1 5 ] [ −1 1; −1 2 ] = [ −4  9 ]
As expected, this is the same result as the one found with the direct method.
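The transformation-based procedure can also be scripted; the sketch below (same assumed (A, B)) builds Q and W, forms T = QW, computes K̂ in the canonical coordinates, and maps it back with K = K̂T⁻¹.

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0], [1.0]])

a = np.poly(A)                      # open-loop char. poly [1, a1, a2] = [1, -3, 2]
a_star = np.array([1.0, 2.0, 1.0])  # desired: s^2 + 2s + 1

Q = np.hstack([B, A @ B])           # controllability matrix
W = np.array([[a[1], 1.0], [1.0, 0.0]])
T = Q @ W                           # similarity transformation to CCF coordinates

K_hat = np.array([[a_star[2] - a[2], a_star[1] - a[1]]])   # [a2*-a2, a1*-a1]
K = K_hat @ np.linalg.inv(T)
print("K_hat =", K_hat, " K =", K)                          # expect K = [-4, 9]
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```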
Lecture 21 21-7
21.3 Observability
It turns out that it is more natural to think in terms of “un-observability” as reflected in the following
definitions.
• If any initial condition, x(0) (or x[0] in discrete time), can be uniquely determined from input-output measurements over a finite interval, then the system is called fully observable.
One way of testing the observability of CT systems is checking the rank (or the range space, or null space) of the observability matrix
O = [ C; CA; CA²; ⋮; CA^(n−1) ]
rank(O) = n
or equivalently
Ra(Oᵀ) = Rⁿ
or equivalently
dim(N(O)) = 0
Remark: A state-space representation that is in observable canonical form is always fully Observable
Remark: A state-space representation is called minimal if it is both fully Controllable and Observable.
21-8 Lecture 21
In general the state, x(t), of a system is not directly accessible, and observers (estimators, filters) have to be used to extract this information. The output, y(t), represents the measurements, which are a function of x(t) and u(t).
ẋ = Ax + Bu
y = Cx + Du
A Luenberger observer is built using a "simulated" model of the system, and the errors caused by mismatched initial conditions x(0) ≠ x̂(0) (or other types of perturbations) are reduced by introducing output-error feedback.
Let's assume that the state vector of the simulated system is x̂; then the state-space equation of this synthetic system takes the form
x̂˙ = A x̂ + B u
ŷ = C x̂ + D u
Note that since u is the input that is supplied by the controller, we assume that it is known a priori. If x(0) = x̂(0) and there is no model mismatch or uncertainty in the system, then we expect that x(t) = x̂(t) and y(t) = ŷ(t) for all t ∈ R⁺. When x(0) ≠ x̂(0), we should observe a difference between the measured and predicted outputs, y(t) ≠ ŷ(t) (if the initial-condition error is not in the unobservable sub-space).
The core idea in the Luenberger observer is feeding the error in the output prediction, y(t) − ŷ(t), back to the simulated system via a linear feedback gain:
x̂˙ = A x̂ + B u + L (y − ŷ)
In order to understand how a Luenberger observer works and to choose a proper observer gain L, we define
an error signal e = x − x̂. The dynamics w.r.t e can be derived as
ė = ẋ − x̂˙ = (A x + B u) − (A x̂ + B u + L (y − ŷ)) = A (x − x̂) − L C (x − x̂)
ė = (A − LC) e
where e(0) = x(0) − x̂(0) denotes the error in the initial condition.
If the matrix (A − LC) is stable then the errors in initial condition will diminish eventually. Moreover, in
order to have a good observer/estimator performance the observer convergence should be sufficiently fast.
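A minimal simulation of these error dynamics (forward-Euler integration with illustrative system matrices and an observer gain L chosen so that A − LC is stable) shows the estimate converging to the true state despite a wrong initial guess.

```python
import numpy as np

# Illustrative system and observer gain (chosen so that A - L C is stable)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[3.0], [2.0]])

dt, N = 1e-3, 10000
x     = np.array([[1.0], [0.0]])     # true state
x_hat = np.array([[0.0], [0.0]])     # observer state (wrong initial guess)

for k in range(N):
    u = 1.0                          # known input supplied by the controller
    y, y_hat = C @ x, C @ x_hat
    x     = x     + dt * (A @ x + B * u)
    x_hat = x_hat + dt * (A @ x_hat + B * u + L @ (y - y_hat))

print("estimation error after", N * dt, "s:", (x - x_hat).ravel())
```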
Lecture 21 21-9
Similar to the state-feedback gain design, the fundamental principle of “pole-placement” Observer design is
that we first define a desired closed-loop eigenvalue set and compute the associated desired characteristic
polynomial.
E* = {λ*1, ⋯, λ*n}
p*(s) = (s − λ*1) ⋯ (s − λ*n) = sⁿ + a*1 s^(n−1) + ⋯ + a*(n−1) s + a*n
The necessary and sufficient condition for arbitrary observer pole-placement is that the system should be fully observable. Then, we can tune L such that
det(sI − (A − LC)) = p*(s)
Ex: Consider the system ẋ = [ 1 0; 0 2 ] x + [1; 1] u, y = [ 1 −1 ] x (the same system as in the state-feedback example). Design an observer such that the estimator poles are located at λ1,2 = −5.
Solution: Desired characteristic equation can be computed as
p∗ (s) = s2 + 10s + 25
Let L = [l2; l1]; then the characteristic equation of (A − LC) can be computed as
det(sI − (A − LC)) = det [ s − 1 + l2   −l2 ;  l1   s − 2 − l1 ]
                   = s² + s(l2 − l1 − 3) + (l1 − 2l2 + 2)
Matching with p*(s) = s² + 10s + 25:
l2 − l1 = 13
l1 − 2l2 = 23
⇒ l1 = −49 ,  l2 = −36
Thus L = [l2; l1] = [ −36; −49 ].
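Finally, the placement can be verified numerically for the matrices assumed in this example:

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 2.0]])
C = np.array([[1.0, -1.0]])
L = np.array([[-36.0], [-49.0]])

print(np.linalg.eigvals(A - L @ C))   # expect both eigenvalues at -5
print(np.poly(A - L @ C))             # expect [1, 10, 25]
```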