System Control Notes
September 7, 2015
Delft
Delft University of Technology
Contents

Preface
1 Signals and Systems
2 Modeling of dynamical systems
3 Analysis of first-order and second-order systems
4 Analysis of higher-order systems
5 Nonlinear systems and linearization
6 An introduction to feedback control
    6.1 Block diagrams
    6.2 Control configurations
    6.3 Steady state tracking and system type
    6.4 PID control
    6.5 Exercises
Index
References
Preface
Engineers have always been interested in the dynamic phenomena they observe when studying physical systems. Mechanical engineers are interested in the interaction between forces
and motion, while electrical engineers want to know more about the relation between
current and voltage. To facilitate their study they utilize the concept of a system, a mathematical abstraction that is devised to serve as a model for a dynamic phenomenon. It
represents the dynamic phenomenon in terms of mathematical relations among the input
and the output of the system, usually called the signals of the system. A system is, of
course, not limited to modeling only physical dynamic phenomena; the concept is equally
applicable to abstract dynamic phenomena such as those encountered in economics or other
social sciences.
The motions of the planets, the weather system, the stock market, a simple chemical
reaction, and the oscillating air in a trumpet are all examples of dynamic systems, in
which some phenomena evolve in time.
A main strength of science is its ability to relate cause and effect. On the basis of the
laws of gravity, for example, astronomical events such as eclipses and the appearances
of comets can be predicted thousands of years in advance. Other natural phenomena,
however, are so complex that they appear to be much more difficult to predict. Although
the movements of the atmosphere (wind), for example, obey the laws of physics just as
much as the movements of the planets do, long term weather prediction is still rather
problematic.
The objective of this course is to present an introduction to the modeling, analysis, and
control of dynamical systems. In Chapter 1 we discuss some basic concepts of signals and
systems. In Chapter 2 we study the modeling of linear dynamic systems in the domains of
mechanical, electrical, electromechanical, and fluid/heat flow systems. Chapter 3 analyzes
first-order and second-order systems. Chapter 4 continues the analysis of Chapter 3, but
now for general (and higher-order) systems. In Chapter 5 we discuss the extension to
nonlinear dynamical systems and introduce the concept of linearization. In Chapter 6 an
introduction to feedback control is given.
Acknowledgements
The author would like to thank Peter Heuberger, Bart De Schutter, Nicolas Weiss, Martijn Leskens, and Rufus Fraanje for the fruitful discussions on the topic of modeling and control and for their comments on the draft version of these lecture notes.
Chapter 1
Signals and Systems
A signal is an indicator of a phenomenon, arising in some environment, that may be
described quantitatively. Examples of signals are mechanical signals such as forces, displacements, and velocities, or electrical signals such as voltages and currents in an electrical
circuit.
Engineers and physicists have utilized the concept of a system to facilitate the study of
the interaction between forces and matter for many years. A system is a mathematical
abstraction that is devised to describe a part of the environment the properties of which we
want to study. It describes the relation between certain phenomena that can occur in this
environment. A dynamical system describes such relations with respect to time. A mechanical setup is a typical example of a dynamical system, because the forces, accelerations,
velocities and positions that exist within the system are related in time.
We will now more formally define the terms signal and system.
1.1 Signals

(a) Mechanical signal. The velocity of a mass is a time signal with signal axis T = (−∞, ∞) and signal space W = R.

(b) Electrical signal. The voltage across a capacitor is a time signal with signal axis T = [t0, ∞) (measurement of the voltage starts at time t0) and signal space W = R.
(c) Hydraulic signal. The oil pressure difference across a fluid resistance in a hydraulic
system is a signal.
(d) Thermal signal. The heat flow through a wall in a thermal system is a signal.
What is not a signal? For example, the value of a resistor in an electric circuit is usually
constant, and therefore considered not to be a signal. The same holds for other constant
system parameters, such as the mass in a mechanical system or the thermal capacity of a
room in a heat-flow system.
Elementary signals
We now give some elementary signals.
The rectangular function: The rectangular function δT : R → R for a given scalar T > 0 is defined as:

δT(t) = 1/T for 0 ≤ t ≤ T,  δT(t) = 0 elsewhere    (1.1)

[Figure: Rectangular function δT(t), a pulse with amplitude 1/T]
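As a quick numerical illustration (a Python sketch with arbitrary grid values, not part of the original text), one can check that the area under δT equals 1 for every T > 0:

```python
import numpy as np

def rect(t, T):
    """Rectangular function delta_T: amplitude 1/T on [0, T], zero elsewhere."""
    return np.where((t >= 0) & (t <= T), 1.0 / T, 0.0)

# The area under delta_T stays equal to 1 as T shrinks.
t = np.linspace(-1.0, 2.0, 300001)
for T in (1.0, 0.1, 0.01):
    print(T, np.trapz(rect(t, T), t))
```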
The unit impulse function: The unit impulse function δ : R → R is defined as:

δ(t) = lim T→0 δT(t)

where δT(t) as defined above is a block function with constant amplitude 1/T and duration T. We find that for T → 0 the amplitude goes to infinity while the duration approaches zero. The area under the pulse remains equal to 1. This leads to

δ(t) = 0 for t ≠ 0,  with ∫ from −∞ to ∞ of δ(t) dt = 1    (1.2)

Figure 1.2: Unit impulse function

Note that the amplitude for t = 0 is not defined, but the integral (area under the pulse) is equal to 1.
The unit impulse function is often also referred to as the Dirac delta function. An important property of the unit impulse function is that it can filter out values of a function f through integration:

∫ from −∞ to ∞ of f(t) δ(t) dt = f(0)    (1.3)

and

∫ from −∞ to ∞ of f(t) δ(t − τ) dt = f(τ)    (1.4)
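The sifting property (1.3)-(1.4) can be illustrated numerically by replacing δ(t) with a narrow pulse δT(t) for small T (an illustrative Python sketch with arbitrary test values):

```python
import numpy as np

def sift(f, tau, T=1e-4):
    """Approximate the integral of f(t) * delta_T(t - tau) dt with a narrow pulse."""
    # delta_T(t - tau) equals 1/T on [tau, tau + T], so the integral reduces
    # to the average of f over that short interval.
    t = np.linspace(tau, tau + T, 1001)
    return np.trapz(f(t) / T, t)

# For small T the result approaches f(tau), as predicted by (1.4).
for tau in (0.0, 0.5, 1.0):
    print(sift(np.cos, tau), np.cos(tau))
```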
The unit step function: The unit step function us : R → R is defined as:

us(t) = 0 for t < 0,  us(t) = 1 for t ≥ 0    (1.5)

[Figure: Unit step function us(t)]
The unit ramp function: The unit ramp function ur : R → R is defined to be a linearly increasing function of time with a unit increment:

ur(t) = 0 for t ≤ 0,  ur(t) = t for t > 0    (1.6)

Figure 1.4: Unit ramp function

Note that the derivative of the unit ramp function is the unit step function, so us(t) = d ur(t)/dt.

The unit parabolic function: The unit parabolic function up : R → R is defined as up(t) = t²/2 for t ≥ 0 and up(t) = 0 elsewhere, so its derivative is the unit ramp function.

[Figure: Unit parabolic function up(t)]
Harmonic functions: The well-known sine function sin : R → R and cosine function cos : R → R can be written as exponentials

cos(ωt) = (e^(jωt) + e^(−jωt)) / 2
sin(ωt) = (e^(jωt) − e^(−jωt)) / (2j)    (1.8)

with j² = −1.

[Figure: plots of sin(ωt) and cos(ωt)]

Exponentially weighted harmonic functions can be written in a similar way:

e^(σt) cos(ωt) = (e^((σ+jω)t) + e^((σ−jω)t)) / 2
e^(σt) sin(ωt) = (e^((σ+jω)t) − e^((σ−jω)t)) / (2j)    (1.9)

[Figure: plot of e^(σt) cos(ωt)]
The impulse, step, ramp, and parabolic functions are often called singularity functions.

Finally we introduce a single dot to denote the time derivative of a signal, so

ẏ(t) = d y(t) / d t

and we use a double dot to denote the second time derivative of a signal, so

ÿ(t) = d² y(t) / d t²

1.2 Systems
(b) Steered ship. When the steersman of a ship changes the position of the wheel (input)
to a new position, the heading of the ship (output) changes because of hydrodynamic
side forces acting on the newly positioned rudder.
(c) Mass-damper-spring system. A mechanical system with a number of masses that
are connected by springs and dampers to each other and without any outside forces
acting on it is an autonomous system.
(d) Wafer stage for lithography. An example of a more complex system is a wafer
stepper, with a positioning mechanism that is used in chip manufacturing processes
for accurate positioning (outputs of the system) of the silicon wafer on which the chips
are to be produced. The wafer can be accurately moved (stepped) in three degrees of
freedom (3DOF) by manipulating the currents of linear motors (inputs of the system).
(e) A municipal solid waste combustion plant. An example of a large-scale system
is a municipal solid waste combustion plant in which household waste is incinerated
for the reduction of the amount of waste and for the production of energy. The input
of the system is the amount of waste that is put into the oven, the output of the system
is the amount of energy that is produced.
System properties
In the remainder of this section we will introduce some basic system properties.
Definition 1.8 (Dynamical vs memoryless) A system is said to be memoryless if its
output at a given time is dependent only on the input at that same time.
For example, the system

y(t) = 2u(t) − u²(t)
is memoryless, as the value y(t) at any particular time t only depends on the input u(t) at
that time. A physical example of a memoryless system is a resistor in an electric circuit.
Let i(t) be the input of the system and v(t) the output, then the input-output relationship
is given by
v(t) = R i(t)
where R is the resistance.
An example of a system with memory is a capacitor, where the voltage v(t) is the integral
of the current i(t), so
v(t) = (1/C) ∫ from −∞ to t of i(τ) dτ
Definition 1.9 (Causality) A system is causal if the output at any time depends only
on values of the input at present time and in the past.
All physical systems in the real world are causal because they cannot anticipate the future. Nonetheless, there are important applications where causality is not required. For example, if we have recorded an audio signal in a computer's memory, we can process it later off-line. We can then use non-causal filtering by allowing a delay in the input signal,
and as such implement a system that is theoretically non-causal.
Note that all memoryless systems are causal, since the output responds only to the current
value of the input.
Definition 1.10 (Linearity) Let y1 be the output of the system for the input u1, and let y2 be the output for the input u2. A system is said to be linear if and only if it satisfies the following properties:

1. the input u1(t) + u2(t) will give an output y1(t) + y2(t).

2. the input α u1(t) will give an output α y1(t) for any (complex) constant α.
An example of a linear system is an inductor, where the current i(t) is the integral of the
voltage v(t), so
i(t) = (1/L) ∫ from −∞ to t of v(τ) dτ
An example of a non-linear system is the following system with input u and output y:
y(t) = u2 (t)
Definition 1.11 (Time-invariance) A system is said to be time-invariant if the behavior
and characteristics are fixed over time.
An input-output system is time-invariant if and only if a time-shift in the input leads to the same time-shift in the output:

u(t) → y(t), ∀t ∈ R  implies  u(t − τ) → y(t − τ), ∀t ∈ R

In other words, if the input signal u(t) produces an output y(t), then any time-shifted input u(t − τ) results in a time-shifted output y(t − τ). An example of a time-invariant
system is a mass-spring system with a mass and a spring-constant that do not vary in time.
Definition 1.12 (Stability) A system is said to be stable if every bounded input (i.e. an input whose magnitude does not grow without bound) gives a bounded output, so that the output will not diverge.
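The linearity test of Definition 1.10 can be carried out numerically. The sketch below (Python, with arbitrary test inputs and an arbitrary resistance value) checks the additivity property for the linear resistor v(t) = R i(t) and for the nonlinear system y(t) = u²(t):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)
u1, u2 = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)

lin = lambda u: 3.0 * u     # resistor v = R i with (arbitrary) R = 3
nonlin = lambda u: u**2     # the nonlinear example y = u^2

# Additivity: the response to u1 + u2 must equal the sum of the responses.
print(np.allclose(lin(u1 + u2), lin(u1) + lin(u2)))           # holds for the resistor
print(np.allclose(nonlin(u1 + u2), nonlin(u1) + nonlin(u2)))  # fails for y = u^2
```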
1.3 Exercises
Exercise 1. Signals
a) Show that the unit rectangular function can be written as a sum of scaled unit step
functions.
Exercise 2. Plots of signals
Plot the signals a–c.

a) δT1(t) − 2 δT2(t − T1), for T1 = 1 and T2 = 2, t ∈ R.

b) ur(t) − ur(t − 1) − us(t − 4), for t ∈ R.

c) up(t) us(1 − t) + us(t − 1), for t ∈ R.
Exercise 3. Derivative of signals
Compute the derivative of the signals a–c of Exercise 2.
Exercise 4. System properties
Are the systems a–d memoryless, linear, time-invariant and/or causal?

a) 6 y(t) = 4 u(t) + 3 e^t.

b) ẏ(t) = 0.1 u(t).

c) y(t) = 6 y(t − 2) + 3 u(t) + 2.

d) y(t) = sin(u(t)).
Chapter 2
Modeling of dynamical systems
Real physical systems, which engineers must design, analyze, and understand, are usually very complex. We therefore formulate a conceptual model made up of basic building blocks. These blocks are idealizations of the essential physical phenomena occurring in real systems. An adequate model of a particular physical device or system will behave approximately like the real system, and the best system model is the simplest one that yields the information necessary for the engineering job.
In this chapter we study the modeling of linear dynamic systems in various domains:
1. Mechanical domain
2. Electrical domain
3. Electromechanical domain
4. Fluid flow domain
5. Heat flow domain
For these domains we aim at deriving linear differential equations for physical systems.
2.1 Basic signals and basic elements

In this section we study some basic tools for the modeling of dynamical systems in various domains. For each domain we define the so-called basic signals, which describe the phenomena in that particular domain. Furthermore, for each domain the dynamical relations between the basic signals can be described by a set of basic elements, which can be seen as the building blocks of the system.
1. Mechanical systems
We can distinguish between translational and rotational mechanical systems. In
translational mechanical systems we consider motions that are restricted to translation along a line. In rotational mechanical systems we consider motions that are
restricted to rotation around an axis.
mass: f = m ẍ
damper: f = b (ẋ1 − ẋ2)
spring: f = k (x1 − x2)

Figure 2.1: Translational mechanical systems
The basic signals in a rotational mechanical system are torque (τ) and angular velocity (ω = θ̇). There are three basic system elements: inertia (J), rotational damper (b), and rotational spring (k). The relations between the basic signals and the basic elements for linear systems are summarized in Figure 2.2.
inertia: τ = J ω̇
rotational damper: τ = b (ω1 − ω2)
rotational spring: τ = k (θ1 − θ2)

Figure 2.2: Rotational mechanical systems
resistor: v2 − v1 = R i
capacitor: i = C (v̇2 − v̇1)
inductor: v2 − v1 = L di/dt

[Figure: Electrical systems]
transducer: f = Kp i ,  e2 − e1 = Kp ẋ

Figure 2.4: Translational electromechanical system
For a translational electromechanical system we derive one differential equation for each inductor/capacitor in the electrical part of the system, and one differential equation for each mass in the mechanical part of the system.
(b) Rotational electromechanical systems
The basic signals in a rotational electromechanical system are torque (τ), angular velocity (ω = θ̇), voltage (e) and current (i). The basic system element is the transducer with transduction ratio or electromechanical coupling constant Kr (see Figure 2.5).
For a rotational electromechanical system we derive one differential equation for
each inductor/capacitor in the electrical part of the system, and one differential
equation for each inertia in the mechanical part of the system.
Remark 2.2 Note that in electromechanical systems we use e for voltage instead of
v. We do this to avoid confusion with the velocity v.
transducer: τ = Kr i ,  e2 − e1 = Kr ω

Figure 2.5: Rotational electromechanical system
thermal capacitor: Ṫ = (1/C) q
thermal resistor: q = (1/R) (T2 − T1)

[Figure: Heat flow systems]
fluid capacitor: ṗ = (1/C) w
fluid resistor: w = (1/R) (p1 − p2)

Figure 2.7: Fluid flow systems
The pressure p at the bottom of a vessel relative to the outside pressure p0 is related to the level h of the fluid in the vessel by the linear equation

p − p0 = ρ g h    (2.1)

where ρ is the mass density of the fluid and g is the gravitational acceleration.
Remark 2.3 Note that in these lecture notes we only consider transducers that transform
mechanical energy into electrical energy and vice versa. We could have discussed conversion
between any combination of physical domains, e.g. thermo-mechanical systems, thermoelectrical systems, but for the sake of simplicity we have limited the discussion to the
mechanical and electrical domains.
2.2 Examples of modeling
1. Translational mechanical systems

[Figure 2.8: two carts connected by spring k2 and damper b3, with wall spring k1, ground damping b1 and b2, and external force fe]

Given two carts in the configuration of Figure 2.8. Cart 1 with mass m1 is connected to a wall by a linear spring with spring constant k1, and to cart 2 by a spring with spring constant k2 and a damper with damping constant b3. Cart 2 with mass m2 is driven by an external force fe. Friction with the ground causes a damping force with damping constant b1 for cart 1 and damping constant b2 for cart 2. Our task is to derive the differential equations for this system.
First we note that we will have two differential equations, one for each mass. Let us first concentrate on the first mass m1. Newton's law tells us that

m1 ẍ1 = Σi f1,i

where f1,i are all forces acting on mass m1, see Figure 2.9.

[Figure 2.9: free body diagram of mass m1, with forces b1 ẋ1, b3 (ẋ2 − ẋ1), k1 x1, and k2 (x2 − x1)]

Similarly, for the second mass we have

m2 ẍ2 = Σi f2,i

where f2,i are all forces acting on mass m2, see Figure 2.10.
We can distinguish four forces:
[Figure 2.10: free body diagram of mass m2, with forces fe, k2 (x2 − x1), and b2 ẋ2]
(a) The force due to the spring between cart 2 and cart 1 is equal to

f2,k2 = k2 (x1 − x2)

(b) The force due to the damper between cart 2 and cart 1 is equal to

f2,b3 = b3 (ẋ1 − ẋ2)

(c) The force due to the friction is equal to

f2,b2 = b2 (0 − ẋ2) = −b2 ẋ2

(d) The external force fe

and so the second differential equation becomes

m2 ẍ2 = −b2 ẋ2 + k2 (x1 − x2) + fe + b3 (ẋ1 − ẋ2)

So summarizing, the two differential equations describing the motion of the two-cart system are as follows:

m1 ẍ1 = −b1 ẋ1 + b3 (ẋ2 − ẋ1) + k2 (x2 − x1) − k1 x1
m2 ẍ2 = −b2 ẋ2 + b3 (ẋ1 − ẋ2) + k2 (x1 − x2) + fe    (2.2)
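Equations (2.2) can be simulated directly. The following sketch (Python with scipy; the parameter values are illustrative and not from the text) applies a constant force fe = 1 and checks that the carts settle at the static equilibrium k1 x1 = k2 (x2 − x1) = fe:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (not from the text).
m1, m2 = 1.0, 2.0
k1, k2 = 4.0, 3.0
b1, b2, b3 = 0.5, 0.5, 0.2

def carts(t, z, fe):
    """Right-hand side of (2.2), with state z = (x1, x1dot, x2, x2dot)."""
    x1, v1, x2, v2 = z
    a1 = (-b1 * v1 + b3 * (v2 - v1) + k2 * (x2 - x1) - k1 * x1) / m1
    a2 = (-b2 * v2 + b3 * (v1 - v2) + k2 * (x1 - x2) + fe) / m2
    return [v1, a1, v2, a2]

# Constant force fe = 1 from rest; at equilibrium x1 = fe/k1, x2 = fe/k1 + fe/k2.
sol = solve_ivp(carts, (0.0, 50.0), [0, 0, 0, 0], args=(1.0,),
                max_step=0.01, rtol=1e-8)
print(sol.y[0, -1], sol.y[2, -1])
```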
2. Rotational mechanical systems
Given an inertia J in the configuration of Figure 2.11. The inertia is connected to a rotational damper with damping constant b1 and a friction with damping constant b2. An external angular velocity ω1 excites the system. Our task is to derive the differential equations for this system.
[Figure 2.11: inertia J with rotational damper b1 (driven by ω1) and friction b2]

The torque balance on the inertia gives J ω̇2 = b1 (ω1 − ω2) − b2 ω2, and so we obtain the differential equation describing the dynamics between ω1 and ω2 as follows:

J ω̇2 = b1 ω1 − (b1 + b2) ω2
3. Electrical system

The electrical circuit of Figure 2.12 consists of an inductor, a resistor and a capacitor in series, where L is the inductance, R the resistance and C the capacitance. Often we assume one of the voltages to be zero; in this example we choose v4 = 0.
First note that the current i is the same for all three elements (inductor, resistor and capacitor). For the inductor we find:

L di/dt = v1 − v2
26
v1
e
v2
v3
C
v4
r v4
v2 = v3 + R i
di
= v1 v3 R i
dt
and so we obtain the differential equations describing the dynamics of the electrical
network as follows:
di
= v1 v3 R i
L
dt
(2.3)
1
d
v
3
= i
dt
C
4. Translational electromechanical system

Consider the system of Figure 2.13. A current i through a coil causes a force ft on a permanent magnet, resulting in a displacement x. The voltages in the circuit are e1, e2, e3, and e4.
[Figure 2.13: translational electromechanical system with inductor L, resistor R, transducer constant Kp, mass m, and spring constant k]
We start with the electrical part. Note that the current i is the same through all
electrical elements. For the inductor we find:
e1 − e2 = L di/dt

For the resistor we find e2 − e3 = R i. For the mechanical part, Newton's law gives

m d²x/dt² = ft + fs = ft − k x

where fs = −k x is the spring force.
The last step is to connect the two systems by the transducer using the relations

ft = Kp i
e3 − e4 = Kp dx/dt

Substitution gives us:

e1 − e4 = L di/dt + R i + (e3 − e4)
        = L di/dt + R i + Kp dx/dt

m d²x/dt² = ft − k x
          = Kp i − k x

So summarizing, the two differential equations describing the dynamics of the translational electromechanical system are as follows:

L di/dt = e1 − e4 − R i − Kp dx/dt
m d²x/dt² = Kp i − k x    (2.4)
5. Rotational electromechanical system

[Figure 2.14: rotational electromechanical system with external torque τext, inertia J, transducer constant Kr, and resistance R between voltages e1 and e2]

An external torque τext acts on an inertia J that is coupled, via a transducer with constant Kr, to an electrical circuit with resistance R. Using the transducer relations τt = Kr i and e1 − e2 = −Kr dθ/dt, we find

J d²θ/dt² = τext + τt
          = τext + Kr i
          = τext + (Kr/R) (e1 − e2)
          = τext − (Kr²/R) dθ/dt
6. Fluid flow system

Figure 2.15: Example of a water flow system
Water runs into the left vessel from a source with mass flow win , from the left vessel
through a restriction with restriction constant R1 into the right vessel with mass flow
wmed , and through a restriction with restriction constant R2 out of the right vessel
with mass flow wout . The pressures at the bottom of the water vessels are denoted
as p1 and p2. The outside pressure is p0. The areas of the left and right vessels are
A1 and A2 , and the water levels are denoted by h1 and h2 . Our task is to derive the
differential equations for this system.
First we consider the left vessel. The net flow into the left vessel is w1 = win − wmed. The fluid capacitance of the left vessel is given by

C1 = A1 / g

and so

ṗ1 = (1/C1) w1 = (g/A1) (win − wmed)

The flow through the first restriction is

wmed = (1/R1) (p1 − p2)

Substitution gives

ṗ1 = (g/A1) win − (g/(A1 R1)) (p1 − p2)
Next we consider the right vessel. Its fluid capacitance is

C2 = A2 / g

and the net flow into it is w2 = wmed − wout, so

ṗ2 = (1/C2) w2 = (g/A2) (wmed − wout)

with outflow

wout = (1/R2) (p2 − p0)

Substitution gives

ṗ2 = (g/(A2 R1)) (p1 − p2) − (g/(A2 R2)) (p2 − p0)
   = (g/(A2 R1)) (p1 − p0) − (g (R1 + R2)/(A2 R1 R2)) (p2 − p0)
So summarizing, the two differential equations describing the dynamics of the two-vessel system are as follows:

ṗ1 = (g/A1) win − (g/(A1 R1)) (p1 − p0) + (g/(A1 R1)) (p2 − p0)
ṗ2 = (g/(A2 R1)) (p1 − p0) − (g (R1 + R2)/(A2 R1 R2)) (p2 − p0)    (2.5)
Now we can use the relation between the fluid levels h1, h2 and p1, p2:

p1 − p0 = ρ g h1 ,   p2 − p0 = ρ g h2 .

This gives us ṗ1 = ρ g ḣ1 and ṗ2 = ρ g ḣ2, and so

ḣ1 = (1/(ρ A1)) win − (g/(A1 R1)) h1 + (g/(A1 R1)) h2
ḣ2 = (g/(A2 R1)) h1 − (g (R1 + R2)/(A2 R1 R2)) h2    (2.6)
7. Heat flow system

[Figure 2.16: heat flow system with heat input qin, water temperature Tw and heat capacity cw, outside temperature T0 and heat capacity c0, and wall resistance Rw]

The heat flow through the wall is

q = (1/Rw) (Tw − T0)

So summarizing, the two differential equations describing the dynamics of the heat flow system are as follows:

Ṫw = (1/cw) qin + (1/(cw Rw)) (T0 − Tw)
Ṫ0 = (1/(c0 Rw)) (Tw − T0)    (2.7)

2.3 Input-output models
In the previous sections we showed that for a linear dynamical system we can derive a
set of differential equations, that describe the dynamics of the system. In many cases we
obtain more than one differential equation, and the relation between the input variable
and the output variable is not directly given; instead we need one or more auxiliary variables. For example, the two differential equations of (2.2), describing the dynamics of a
translational mechanical system, use three variables x1 , x2 , and fe . If we define the signal
fe as the input of the system, and we consider the signal x2 as the output of the system,
then x1 can be seen as an auxiliary variable. It would be nice if we could eliminate the
variable x1 from the equations and have a direct relation between fe and x2 , in a so-called
input-output differential equation. For linear systems, this elimination can be done easily
using the Laplace transformation.
Let x(t), t ≥ 0 be a signal, then the Laplace transform is defined as follows:

X(s) = L{x(t)} = ∫ from 0 to ∞ of x(τ) e^(−sτ) dτ    (2.8)
where X(s) is called the Laplace transform of x(t), or X(s) = L{x(t)}. The complex
variable s C is called the Laplace variable. Table 2.1 gives the Laplace transforms of
some common signals. A more extensive list of Laplace transforms is given in Appendix B.
The Laplace transformation is a linear operation and so it has the following property:

L{α f1(t) + β f2(t)} = α L{f1(t)} + β L{f2(t)}  for α, β ∈ C    (2.9)
Table 2.1: Laplace transforms of some common signals

Signal       | x(t)            | X(s)
Dirac pulse  | δ(t)            | 1
Unit step    | us(t)           | 1/s
Ramp         | ur(t)           | 1/s²
Parabolic    | up(t)           | 1/s³
Exponential  | e^(−at) us(t)   | 1/(s + a)
Sinusoid     | sin(ωt) us(t)   | ω/(s² + ω²)
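The table entries can be verified with a computer algebra system; the following sketch uses sympy (for t ≥ 0 the step, ramp, and parabolic signals are simply 1, t, and t²/2):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, w = sp.symbols('a omega', positive=True)

# (signal on t >= 0, claimed Laplace transform) pairs from Table 2.1
pairs = [
    (sp.S.One, 1 / s),                   # unit step
    (t, 1 / s**2),                       # ramp
    (t**2 / 2, 1 / s**3),                # parabolic
    (sp.exp(-a * t), 1 / (s + a)),       # exponential
    (sp.sin(w * t), w / (s**2 + w**2)),  # sinusoid
]
for f, F in pairs:
    assert sp.simplify(sp.laplace_transform(f, t, s, noconds=True) - F) == 0
print("all table entries verified")
```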
Consider the general linear input-output differential equation

dⁿy(t)/dtⁿ + a1 dⁿ⁻¹y(t)/dtⁿ⁻¹ + … + an−1 dy(t)/dt + an y(t)
  = b0 dᵐu(t)/dtᵐ + b1 dᵐ⁻¹u(t)/dtᵐ⁻¹ + … + bm−1 du(t)/dt + bm u(t)    (2.10)
Let U(s) = L{u(t)} and Y (s) = L{y(t)}. The Laplace transformation gives us
sⁿ Y(s) + a1 sⁿ⁻¹ Y(s) + … + an−1 s Y(s) + an Y(s)
  = b0 sᵐ U(s) + b1 sᵐ⁻¹ U(s) + … + bm−1 s U(s) + bm U(s)

and using the linearity property we obtain

(sⁿ + a1 sⁿ⁻¹ + … + an−1 s + an) Y(s) = (b0 sᵐ + b1 sᵐ⁻¹ + … + bm−1 s + bm) U(s)    (2.11)
In the following examples we will show how we can rewrite a system of differential equations
into a single input-output differential equation.
1. Translational mechanical system
In (2.2) two differential equations were given, describing the dynamics of a translational mechanical system:
m1 ẍ1(t) = −b1 ẋ1(t) + b3 (ẋ2(t) − ẋ1(t)) − k1 x1(t) + k2 (x2(t) − x1(t))
m2 ẍ2(t) = −b2 ẋ2(t) + b3 (ẋ1(t) − ẋ2(t)) + k2 (x1(t) − x2(t)) + fe(t)
which gives

(s² m1 + s (b1 + b3) + k1 + k2) X1(s) = (s b3 + k2) X2(s)    (2.14)

(s² m2 + s (b2 + b3) + k2) X2(s) = (s b3 + k2) X1(s) + Fe(s)    (2.15)

From (2.14) we find

X1(s) = (s b3 + k2) / (s² m1 + s (b1 + b3) + k1 + k2) · X2(s)

and substituting this into (2.15) leads to

( s⁴ m1 m2 + s³ (m1 b3 + m2 b3 + m1 b2 + m2 b1)
  + s² (m1 k2 + m2 k1 + m2 k2 + b1 b2 + b1 b3 + b2 b3)
  + s (k1 b3 + k1 b2 + k2 b2 + k2 b1) + k1 k2 ) X2(s)
  = ( s² m1 + s (b1 + b3) + k1 + k2 ) Fe(s)    (2.16)

Transforming back to the time domain gives the input-output differential equation

m1 m2 d⁴x2(t)/dt⁴ + (m1 b3 + m2 b3 + m1 b2 + m2 b1) d³x2(t)/dt³
  + (m1 k2 + m2 k1 + m2 k2 + b1 b2 + b1 b3 + b2 b3) d²x2(t)/dt²
  + (k1 b3 + k1 b2 + k2 b2 + k2 b1) dx2(t)/dt + k1 k2 x2(t)
  = m1 d²fe(t)/dt² + (b1 + b3) dfe(t)/dt + (k1 + k2) fe(t)    (2.20)
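The elimination of the auxiliary variable X1(s) can be reproduced symbolically; a sympy sketch:

```python
import sympy as sp

s = sp.symbols('s')
m1, m2, k1, k2, b1, b2, b3 = sp.symbols('m1 m2 k1 k2 b1 b2 b3', positive=True)
X1, X2, Fe = sp.symbols('X1 X2 Fe')

# Laplace-domain equations (2.14) and (2.15).
D1 = s**2 * m1 + s * (b1 + b3) + k1 + k2
D2 = s**2 * m2 + s * (b2 + b3) + k2
eq1 = sp.Eq(D1 * X1, (s * b3 + k2) * X2)
eq2 = sp.Eq(D2 * X2, (s * b3 + k2) * X1 + Fe)

# Eliminate X1(s) and form the transfer function X2(s)/Fe(s).
sol = sp.solve([eq1, eq2], [X1, X2], dict=True)[0]
H = sp.cancel(sol[X2] / Fe)
# The denominator is the fourth-order polynomial of (2.16).
print(sp.expand(sp.fraction(H)[1]))
```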
2. Electrical system
In (2.3) two differential equations were given, describing the dynamics of an electrical system:

L di(t)/dt = v1(t) − v3(t) − R i(t)
dv3(t)/dt = (1/C) i(t)

Taking the Laplace transform gives

s L I(s) = V1(s) − V3(s) − R I(s)
s V3(s) = (1/C) I(s)

Eliminating V3(s) (multiply the first equation by s and substitute the second) gives

s V1(s) = s² L I(s) + s R I(s) + (1/C) I(s)
3. Translational electromechanical system

In (2.4) two differential equations were given, describing the dynamics of the translational electromechanical system:

L di(t)/dt = e1(t) − e4(t) − R i(t) − Kp dx(t)/dt
m d²x(t)/dt² = Kp i(t) − k x(t)

From the second equation we find, after taking the Laplace transform,

I(s) = ((s² m + k)/Kp) X(s)

Substituting this into the Laplace transform of the first equation gives

s L ((s² m + k)/Kp) X(s) + R ((s² m + k)/Kp) X(s) + s Kp X(s) = E1(s) − E4(s)

or, after multiplying by Kp,

(s³ L m + s² R m + s (Kp² + L k) + R k) X(s) = Kp (E1(s) − E4(s))

We can rewrite this as the input-output differential equation:

L m d³x(t)/dt³ + R m d²x(t)/dt² + (Kp² + L k) dx(t)/dt + R k x(t) = Kp (e1(t) − e4(t))

2.4 State systems
In this section we introduce the notion of state systems. We will define the concept of a
state, and describe the behavior of a system using a state differential equation.
One of the triumphs of Newton's mechanics was the observation that the motion of the
planets could be predicted based on the current positions and velocities of all planets. It
was not necessary to know the past motion. In general, the state of a dynamical system
is a collection of variables that completely characterizes the evolution of a system for the
purpose of predicting the future evolution. For a system of planets the state simply consists
of the positions and the velocities of the planets.
Definition 2.1 The state of a system is a collection of variables that summarize the past
of a system for the purpose of predicting the future.
Example 2.1 Consider the translational mechanical system of Section 2.2. The differential equations were given by (2.2):
m1 p̈1(t) = −(b1 + b3) ṗ1(t) + b3 ṗ2(t) − (k1 + k2) p1(t) + k2 p2(t)
m2 p̈2(t) = b3 ṗ1(t) − (b2 + b3) ṗ2(t) + k2 p1(t) − k2 p2(t) + fe(t)
where we use the variable p1 and p2 (instead of x1 and x2 ) for the positions of mass 1 and
2. In the sequel we will use the variable x for the state vector. In mechanical systems the
state consists of the velocities and the positions of each mass in the system. In this case
we have two masses, and therefore two velocities and two positions. This gives us a state
vector
x(t) = [x1(t), x2(t), x3(t), x4(t)]ᵀ = [ṗ1(t), p1(t), ṗ2(t), p2(t)]ᵀ
With input u(t) = fe(t) the equations can be written as

ẋ(t) = [ −(b1+b3)/m1  −(k1+k2)/m1   b3/m1         k2/m1
         1             0             0             0
         b3/m2         k2/m2        −(b2+b3)/m2   −k2/m2
         0             0             1             0      ] x(t) + [ 0 ; 0 ; 1/m2 ; 0 ] u(t)

where the derivative ẋ(t) is taken elementwise:

ẋ(t) = [ dx1(t)/dt ; dx2(t)/dt ; dx3(t)/dt ; dx4(t)/dt ] = [ ẋ1(t) ; ẋ2(t) ; ẋ3(t) ; ẋ4(t) ]
If we define

A = [ −(b1+b3)/m1  −(k1+k2)/m1   b3/m1         k2/m1
      1             0             0             0
      b3/m2         k2/m2        −(b2+b3)/m2   −k2/m2
      0             0             1             0      ]

B = [ 0 ; 0 ; 1/m2 ; 0 ]

we can write the system as

ẋ(t) = A x(t) + B u(t)
If we choose the first position to be the output of the system, so y(t) = p1(t) = x2(t), we obtain the output equation

y(t) = [0 1 0 0] x(t) + 0 · u(t)
     = C x(t) + D u(t)

where the matrices C and D are defined by

C = [0 1 0 0] ,   D = 0
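As a numerical sketch (Python, with illustrative parameter values that are not from the text), the matrices A, B, C, D can be filled in directly; the eigenvalues of A then confirm that this damped system is stable, and the static gain −C A⁻¹ B equals 1/k1, as setting all derivatives in (2.2) to zero predicts:

```python
import numpy as np

# Illustrative parameter values (not from the text).
m1, m2 = 1.0, 2.0
k1, k2 = 4.0, 3.0
b1, b2, b3 = 0.5, 0.5, 0.2

# State x = (p1dot, p1, p2dot, p2), input u = fe, output y = p1.
A = np.array([
    [-(b1 + b3) / m1, -(k1 + k2) / m1,  b3 / m1,          k2 / m1],
    [1.0,              0.0,             0.0,              0.0],
    [b3 / m2,          k2 / m2,        -(b2 + b3) / m2,  -k2 / m2],
    [0.0,              0.0,             1.0,              0.0],
])
B = np.array([[0.0], [0.0], [1.0 / m2], [0.0]])
C = np.array([[0.0, 1.0, 0.0, 0.0]])
D = np.array([[0.0]])

# Stability and static gain checks.
print(np.linalg.eigvals(A))
print(float((-C @ np.linalg.inv(A) @ B)[0, 0]), 1.0 / k1)
```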
We have seen in this example that state variables were gathered in a vector x ∈ Rⁿ, which is
called the state vector. In general every linear time-invariant (LTI) state system (with one
input and one output) can be represented by the system of first-order differential equations
ẋ1(t) = a11 x1(t) + a12 x2(t) + … + a1n xn(t) + b1 u(t)
ẋ2(t) = a21 x1(t) + a22 x2(t) + … + a2n xn(t) + b2 u(t)
⋮
ẋn(t) = an1 x1(t) + an2 x2(t) + … + ann xn(t) + bn u(t)
y(t) = c1 x1(t) + c2 x2(t) + … + cn xn(t) + d u(t)
These equations can be written compactly in matrix form as

ẋ(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)    (2.21)

with

A = [ a11 a12 … a1n
      a21 a22 … a2n
      ⋮           ⋮
      an1 an2 … ann ] ,   B = [ b1 ; b2 ; … ; bn ] ,   C = [ c1 c2 … cn ] ,   D = d
where A ∈ R^(n×n), B ∈ R^(n×1), C ∈ R^(1×n), and D ∈ R are constant, and the derivative ẋ(t) is taken elementwise:

ẋ(t) = [ dx1(t)/dt ; dx2(t)/dt ; … ; dxn(t)/dt ] = [ ẋ1(t) ; ẋ2(t) ; … ; ẋn(t) ]
The input signal of the system is represented by u and the output signal by y.
Example 2.2 Consider the electrical system of Section 2.2 where we set v4 = 0. The
differential equations were given by (2.3):
di(t)/dt = (1/L) (v1(t) − v3(t)) − (R/L) i(t)
dv3(t)/dt = (1/C) i(t)
In general, in electrical systems the state vector will consist of the currents through the
inductors and the voltages over the capacitors. In this particular case we have one inductor
and one capacitor. This gives us a state-vector
x(t) = [x1(t), x2(t)]ᵀ = [i(t), v3(t)]ᵀ
If we define u(t) = v1(t) as the input of the system, we can write the equations as

ẋ1(t) = di(t)/dt = −(R/L) x1(t) − (1/L) x2(t) + (1/L) u(t)
ẋ2(t) = dv3(t)/dt = (1/C) x1(t)

or in matrix form

ẋ(t) = [ −R/L  −1/L
         1/C    0   ] x(t) + [ 1/L ; 0 ] u(t)
If we choose the current as the output of the system, so y(t) = i(t), we obtain the output
equation
y(t) = [1 0] x(t) + 0 · u(t)

This means that the matrices A, B, C and D for this example become

A = [ −R/L  −1/L
      1/C    0   ] ,   B = [ 1/L ; 0 ] ,   C = [1 0] ,   D = 0
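With numbers filled in (illustrative component values, not from the text), scipy can simulate this state system directly. For a constant input voltage the capacitor charges up and eventually blocks the current, so the step response of y = i decays to zero:

```python
import numpy as np
from scipy import signal

# Illustrative component values (not from the text).
R, L, C = 1.0, 0.5, 0.25

A = np.array([[-R / L, -1.0 / L],
              [1.0 / C, 0.0]])
B = np.array([[1.0 / L], [0.0]])
Cmat = np.array([[1.0, 0.0]])   # output y = i (the current)
D = np.array([[0.0]])

sys = signal.StateSpace(A, B, Cmat, D)
t, y = signal.step(sys, T=np.linspace(0.0, 20.0, 2001))
print(y[-1])   # current after the transient has died out
```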
Table 2.2 indicates how to choose the states during the modeling phase for various physical domains.

Table 2.2: Choice of states per physical domain

Physical domain                        | States
Translational mechanical system        | velocity and position of each mass
Rotational mechanical system           | angular velocity and angular position of each inertia
Electrical system                      | voltage over each capacitor, current through each inductor
Translational electromechanical system | velocity and position of each mass, voltage over each capacitor, current through each inductor
Rotational electromechanical system    | angular velocity and angular position of each inertia, voltage over each capacitor, current through each inductor
Fluid flow system                      | fluid pressure at the bottom of each vessel (or fluid level of each vessel)
Heat flow system                       | temperature in each room
2.5 Exercises
[Exercise figures: a translational mechanical system with masses m1 and m2, springs c1 and c2, positions x1 and x2, and external force ft; and an electrical circuit with currents i1, i2, i3 and node voltage v2]
Chapter 3
Analysis of first-order and second-order systems

3.1 First-order systems
First-order systems can be described by a state system with only one state or by a single first-order differential equation. For example, braking of a car, the discharge of an electronic camera flash, the flow in a fluid vessel, and the cooling of a cup of tea may all be approximated by a first-order differential equation, which may be written in a standard form as

ẏ(t) + α y(t) = f(t)    (3.1)

where the system is defined by the single parameter α = 1/τ, where τ is the system time constant, and f(t) = b0 u̇(t) + b1 u(t) is a forcing function, which depends on the input u and the parameters b0, b1 ∈ R.
Example 3.1 (First-order systems)
Consider the RC-circuit of Figure 3.1.a with a resistor with resistance R and a capacitor
with capacitance C. The relation between the voltage v(t) = v1(t) − v2(t) and the current
i(t) is given by the first-order differential equation
v̇(t) + (1/(RC)) v(t) = (1/C) i(t) .

So with i(t) the input and v(t) the output of the system, we find that the time constant for this system is τ = RC and the forcing function is f(t) = (1/C) i(t).
For the water vessel in Figure 3.1.b with area A, and fluid resistance R we find that
the relation between the inflow win (t) and level h(t) is given by the first-order differential
equation
ḣ(t) + (g/(A R)) h(t) = (1/A) win(t) .
Figure 3.1: (a) RC network, (b) water vessel, (c) rotational mechanical system
Let win(t) be the input and h(t) be the output of the system, then we find that the time constant for this system is τ = A R/g and the forcing function is f(t) = (1/A) win(t).
Finally, for the rotational mechanical system in Figure 3.1.c with inertia J and damping constant b, we find that the relation between the external torque T(t) and rotational velocity ω(t) is given by the first-order differential equation

ω̇(t) + (b/J) ω(t) = (1/J) T(t) .

So with T(t) the input and ω(t) the output of the system, we find that the time constant for this system is τ = J/b and the forcing function is f(t) = (1/J) T(t).
First-order state space systems can be described by the equations

    \dot{x}(t) = a\, x(t) + b\, u(t) ,
    y(t) = c\, x(t) + d\, u(t),    (3.2)

where the state x(t) \in \mathbb{R} is a scalar signal and the system matrices a, b, c and d are scalar constants. If we rewrite the second equation as

    x(t) = \frac{1}{c} y(t) - \frac{d}{c} u(t)

we can derive

    \dot{x}(t) = \frac{1}{c} \dot{y}(t) - \frac{d}{c} \dot{u}(t)

and we can substitute these expressions for x(t) and \dot{x}(t) into the first equation of (3.2):

    \frac{1}{c} \dot{y}(t) - \frac{d}{c} \dot{u}(t) = \frac{a}{c} y(t) - \frac{a d}{c} u(t) + b\, u(t) ,

and so we obtain the first-order equation:

    \dot{y}(t) - a\, y(t) = d\, \dot{u}(t) + (b c - a d)\, u(t) ,

and we see that the time constant for the first-order state system (3.2) is \tau = -1/a and the forcing function is f(t) = d\, \dot{u}(t) + (b c - a d)\, u(t).
The homogeneous equation of the first-order system (3.1) is

    \dot{y}_h(t) + a\, y_h(t) = 0 ,    (3.3)

with solution y_h(t) = e^{-a t}. Note that if y_h(t) is a solution, then also \tilde{y}_h(t) = \gamma\, e^{-a t} for any \gamma \in \mathbb{R} is a solution.
We now compute a particular solution, which means we find a solution for (3.1) without regarding the initial value y(0). Observing equation (3.1) for f(t) = 1 for t \ge 0, we find that y_p(t) = 1/a for t \ge 0 satisfies the equation.
For the final solution we combine the homogeneous solution and the particular solution, y(t) = y_p(t) + y_h(t) = 1/a + \gamma\, e^{-a t}, and compute \gamma by solving y(0) = 0, so \gamma = -1/a. The unit step response for a first-order system is then given by y_s(t) = (1/a)(1 - e^{-a t}) for t \ge 0. This can be rewritten as:

    y_s(t) = \frac{1}{a} \left( 1 - e^{-a t} \right) u_s(t)    (3.4)
The unit step response is given in Figure 3.2. We see that the response asymptotically approaches the final value 1/a.

[Figure 3.2: Unit step response for a first-order system]
Recall that the unit impulse is the derivative of the unit step, \delta(t) = \frac{d\, u_s(t)}{d t}, and thus the impulse response y_\delta(t), resulting from an input f(t) = \delta(t), is given by

    y_\delta(t) = \frac{d\, y_s(t)}{d t} = e^{-a t} , \quad t \ge 0    (3.5)

The impulse response is given in Figure 3.3. We see that the response starts in y_\delta(0) = 1, and then decays asymptotically towards zero.
[Figure 3.3: Impulse response for a first-order system]

[Figure: first-order responses for the three cases (a) a > 0, (b) a < 0, (c) a = 0]
and so the forcing function is f(t) = 4\, i(t) = 4\, u_s(t). From Equation (3.4) we know that for a first-order system with a = 2 the unit step response is given by

    y_s(t) = \frac{1}{a} \left( 1 - e^{-a t} \right) u_s(t) = 0.5 \left( 1 - e^{-2 t} \right) u_s(t)

In our case we have f(t) = 4\, u_s(t) and so the step response of this RC-circuit is given by

    v(t) = 4\, y_s(t) \; \mathrm{V} = 2 \left( 1 - e^{-2 t} \right) u_s(t) \; \mathrm{V}
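The step response of this example can be evaluated directly; a small sketch (parameter values as in the example above):

```python
# Step response of v' + a v = f0 u_s(t) with v(0) = 0, here a = 2, f0 = 4,
# so v(t) = (f0/a)(1 - e^(-a t)) = 2 (1 - e^(-2 t)) volts.
import math

def v_step(t, a=2.0, f0=4.0):
    """Response of v' + a v = f0 u_s(t) with v(0) = 0."""
    return (f0 / a) * (1.0 - math.exp(-a * t))

print(v_step(0.0))    # starts at 0 V
print(v_step(10.0))   # settles close to the final value f0/a = 2 V
```

After one time constant (t = \tau = 0.5 s) the response has reached about 63% of its final value, as expected for a first-order system.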
3.2 Second-order systems

Second-order systems can be described by a state system with two states or by a single second-order differential equation. Physical second-order systems contain two independent elements in which energy can be stored. For example, in mechanical systems the kinetic energy in a mass can be exchanged with the potential energy of a spring, and in electrical systems energy can be exchanged between capacitors and inductors. Second-order systems can be written in a standard form as

    \ddot{y}(t) + a_1 \dot{y}(t) + a_2 y(t) = f(t)    (3.6)

where the system is defined by the parameters a_1 and a_2 and f(t) is the forcing function. Often the forcing function has the form f(t) = b_0 \ddot{u}(t) + b_1 \dot{u}(t) + b_2 u(t) and so it depends on the input u(t) and the parameters b_0, b_1, b_2 \in \mathbb{R}.
Example 3.3 (Second-order systems)
Consider the RLC-circuit of Figure 3.5.a with inductance L, capacitance C and resistance R. The relation between the voltage v(t) = v_1(t) - v_2(t) and the current i(t) is given by the second-order differential equation

    \frac{d^2 v(t)}{d t^2} + \frac{1}{RC} \frac{d\, v(t)}{d t} + \frac{1}{LC} v(t) = \frac{1}{C} \frac{d\, i(t)}{d t}

[Figure 3.5: (a) RLC circuit, (b) simple pendulum, (c) translational mechanical system]

so with i(t) as the input and v(t) as the output of the system, we find that the parameters for this system are a_1 = 1/RC and a_2 = 1/LC, and the forcing function is f(t) = \frac{1}{C} \frac{d\, i(t)}{d t}.
For the simple pendulum in Figure 3.5.b with mass m and length \ell we find that the relation between the angle \theta and an external force F is given by the second-order differential equation

    \ddot{\theta}(t) + \frac{g}{\ell} \theta(t) = \frac{1}{m \ell} F(t)

where we used the approximations \sin(\theta) \approx \theta and \cos(\theta) \approx 1 for small \theta. With F(t) as the input and \theta(t) as the output of the system, we find that the parameters for this system are a_1 = 0 and a_2 = g/\ell, and the forcing function is f(t) = \frac{1}{m \ell} F(t).
Finally, for the translational mechanical system in Figure 3.5.c with mass m, spring constant k and damping b we find that the relation between the position y and an external force f_e is given by the second-order differential equation

    m \ddot{y}(t) + b \dot{y}(t) + k y(t) = f_e(t)

With f_e(t) as the input and y(t) as the output of the system, we find that the parameters for this system are a_1 = b/m and a_2 = k/m, and the forcing function is f(t) = f_e(t)/m.
Second-order single-input single-output state space systems can be described by the equations

    \begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} u(t) ,    (3.7)

    y(t) = \begin{bmatrix} c_1 & c_2 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + d\, u(t),    (3.8)

where the state x(t) \in \mathbb{R}^2 is a vector with two entries and the system matrices are A \in \mathbb{R}^{2 \times 2}, B \in \mathbb{R}^{2 \times 1}, C \in \mathbb{R}^{1 \times 2} and D \in \mathbb{R}.
We will use the Laplace transformation to find the input-output description of this state system. Define X_1(s) = \mathcal{L}\{x_1(t)\}, X_2(s) = \mathcal{L}\{x_2(t)\}, Y(s) = \mathcal{L}\{y(t)\} and U(s) = \mathcal{L}\{u(t)\}. Now (3.7)-(3.8) can be rewritten as

    \begin{bmatrix} X_1(s) \\ X_2(s) \end{bmatrix} = \left( s I - \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \right)^{-1} \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} U(s)

With

    \left( s I - \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \right)^{-1} = \begin{bmatrix} s - a_{11} & -a_{12} \\ -a_{21} & s - a_{22} \end{bmatrix}^{-1} = \frac{1}{(s - a_{11})(s - a_{22}) - a_{12} a_{21}} \begin{bmatrix} s - a_{22} & a_{12} \\ a_{21} & s - a_{11} \end{bmatrix}

we derive

    \begin{bmatrix} X_1(s) \\ X_2(s) \end{bmatrix} = \frac{1}{(s - a_{11})(s - a_{22}) - a_{12} a_{21}} \begin{bmatrix} s - a_{22} & a_{12} \\ a_{21} & s - a_{11} \end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} U(s)
    = \frac{1}{s^2 - s (a_{11} + a_{22}) + (a_{11} a_{22} - a_{12} a_{21})} \begin{bmatrix} s b_1 - a_{22} b_1 + a_{12} b_2 \\ s b_2 + a_{21} b_1 - a_{11} b_2 \end{bmatrix} U(s)
    = \frac{1}{s^2 + s\, a_1 + a_2} \begin{bmatrix} s b_1 - a_{22} b_1 + a_{12} b_2 \\ s b_2 + a_{21} b_1 - a_{11} b_2 \end{bmatrix} U(s)    (3.10)

with a_1 = -(a_{11} + a_{22}) and a_2 = a_{11} a_{22} - a_{12} a_{21}.
If we define

    a_2 = \omega_n^2 , \qquad a_1 = 2 \zeta \omega_n

where \zeta is called the damping ratio and \omega_n is the undamped natural frequency, we obtain a new description of the second-order system:

    \ddot{y}(t) + 2 \zeta \omega_n \dot{y}(t) + \omega_n^2 y(t) = f(t)    (3.11)
We will now study the unit step response for system (3.11) when the forcing function is a unit step function (f(t) = u_s(t)) and the system is initially at rest, so y(0) = 0 and \dot{y}(0) = 0.
We first solve the homogeneous equation

    \ddot{y}_h(t) + 2 \zeta \omega_n \dot{y}_h(t) + \omega_n^2 y_h(t) = 0

which has the characteristic roots

    \lambda_{1,2} = -\zeta \omega_n \pm \omega_n \sqrt{\zeta^2 - 1}    (3.12)

so that y_h(t) = \gamma_1 e^{\lambda_1 t} + \gamma_2 e^{\lambda_2 t}.
We now compute a particular solution, which means we find a solution for (3.11) without regarding the initial value y(0). Observing equation (3.11) for f(t) = 1 for t \ge 0, we find that y_p(t) = 1/\omega_n^2 for t \ge 0 satisfies the equation.
Finally we combine the homogeneous solution and the particular solution, y(t) = 1/\omega_n^2 + \gamma_1 e^{\lambda_1 t} + \gamma_2 e^{\lambda_2 t}, and we compute \gamma_1 and \gamma_2 by solving y(0) = 0 and \dot{y}(0) = 0. With

    y(0) = 1/\omega_n^2 + \gamma_1 + \gamma_2 = 0

and with \dot{y}(t) = \gamma_1 \lambda_1 e^{\lambda_1 t} + \gamma_2 \lambda_2 e^{\lambda_2 t}, resulting in

    \dot{y}(0) = \gamma_1 \lambda_1 + \gamma_2 \lambda_2 = 0

we find

    \gamma_1 = \frac{\lambda_2}{\omega_n^2 (\lambda_1 - \lambda_2)} , \qquad \gamma_2 = \frac{\lambda_1}{\omega_n^2 (\lambda_2 - \lambda_1)}    (3.13)
[Figure 3.6: Step response of a second-order system for \omega_n = 1 and different values of \zeta = 0, 0.1, 0.3, 0.707, 1.0, 2.0, 5.0]
Underdamped system (\zeta < 1):
If \zeta < 1 the roots are complex: \lambda_1 = -\sigma + j \omega_d and \lambda_2 = -\sigma - j \omega_d, with \sigma = \zeta \omega_n and damped natural frequency \omega_d = \omega_n \sqrt{1 - \zeta^2}. We find

    \gamma_1 = \frac{\lambda_2}{\omega_n^2 (\lambda_1 - \lambda_2)} = -\frac{1}{2 \omega_n^2} + j\, \frac{\zeta}{2 \omega_n^2 \sqrt{1 - \zeta^2}}

    \gamma_2 = \frac{\lambda_1}{\omega_n^2 (\lambda_2 - \lambda_1)} = -\frac{1}{2 \omega_n^2} - j\, \frac{\zeta}{2 \omega_n^2 \sqrt{1 - \zeta^2}}

and the unit step response becomes

    y_s(t) = \frac{1}{\omega_n^2} \left( 1 - e^{-\sigma t} \cos \omega_d t - \frac{\zeta}{\sqrt{1 - \zeta^2}}\, e^{-\sigma t} \sin \omega_d t \right) u_s(t) , \quad t \ge 0    (3.14)
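The underdamped step response (3.14) can be evaluated directly; a sketch (the values \zeta = 0.5 and \omega_n = 2 are illustrative, not from the notes):

```python
# Unit step response of y'' + 2 zeta wn y' + wn^2 y = u_s(t), zeta < 1,
# following (3.14): starts at 0 and settles at 1/wn^2.
import math

def step_response(t, zeta, wn):
    sigma = zeta * wn                       # decay rate
    wd = wn * math.sqrt(1.0 - zeta**2)      # damped natural frequency
    decay = math.exp(-sigma * t)
    return (1.0 / wn**2) * (1.0 - decay * (math.cos(wd * t)
            + zeta / math.sqrt(1.0 - zeta**2) * math.sin(wd * t)))

zeta, wn = 0.5, 2.0
print(step_response(0.0, zeta, wn))    # starts at 0
print(step_response(50.0, zeta, wn))   # settles at 1/wn^2 = 0.25
```

Evaluating at t = 0 gives exactly 0, confirming that the constants \gamma_1, \gamma_2 were chosen to match the initial-rest conditions.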
Critically damped system (\zeta = 1):
If \zeta = 1 the system is called critically damped. We find \lambda = \lambda_1 = \lambda_2 = -\omega_n. The homogeneous solutions y_{h1} = y_{h2} = e^{\lambda t} become equal and we need an additional homogeneous solution y_h = t\, e^{\lambda t} (see Chapter 4). The complete solution is now given by y(t) = 1/\omega_n^2 + \gamma_1 e^{\lambda t} + \gamma_2 t\, e^{\lambda t}. We compute \gamma_1 and \gamma_2 by solving y(0) = 0 and \dot{y}(0) = 0, and we obtain

    \gamma_1 = -\frac{1}{\omega_n^2} , \qquad \gamma_2 = \frac{\lambda_1}{\omega_n^2} = -\frac{1}{\omega_n}

This leads to the following unit step response for the case \zeta = 1:

    y_s(t) = \frac{1}{\omega_n^2} \left( 1 - e^{-\omega_n t} - \omega_n t\, e^{-\omega_n t} \right) u_s(t) , \quad t \ge 0    (3.15)
In Figure 3.6 the response for different values of \zeta is given. After a transient phase the output asymptotically approaches the final value (except for \zeta = 0). In the underdamped case the output oscillates, whereas the output in the overdamped case does not.
Recall that \delta(t) = \frac{d\, u_s(t)}{d t}, and thus the impulse response y_\delta(t), resulting from an input f(t) = \delta(t), is given by

    y_\delta(t) = \frac{d\, y_s(t)}{d t} , \quad t \ge 0    (3.16)

For the underdamped case (\zeta < 1) this yields

    y_\delta(t) = \frac{1}{\omega_d}\, e^{-\sigma t} \sin \omega_d t , \quad t \ge 0    (3.18)
[Figure 3.7: Impulse response of a second-order system for \omega_n = 1 and different values of \zeta = 0, 0.1, 0.3, 0.707, 1.0, 2.0, 5.0]
For the analysis of the transient behaviour we consider the step response, normalized to a steady-state value of 1:

    y_s(t) = 1 - e^{-\sigma t} \left( \cos \omega_d t + \frac{\zeta}{\sqrt{1 - \zeta^2}} \sin \omega_d t \right)    (3.19)
Interesting criteria in this response are the overshoot, the peak time, the rise time, and the settling time of the response.
- The settling time t_s is the time it takes for the output to enter and remain within a 1% band centered around its steady-state value.
- The rise time t_r is the time required for the output to rise from 10% to 90% of the steady-state value.
- The peak time t_p is the time it takes for the response to reach its maximum value.
- The overshoot M_p is the maximum value of the response minus the steady-state value of the response, divided by the steady-state value of the response.
[Figure: pole locations in the complex plane; the complex pole pair lies at distance \omega_n from the origin, at angle \theta = \arccos(\zeta) from the negative real axis, with imaginary part \omega_d]

1. Settling time:
The response enters the 1% band when the envelope e^{-\sigma t} has decayed to 0.01, so t_s satisfies e^{-\sigma t_s} = 0.01, which gives

    t_s = \frac{-\log(0.01)}{\sigma} \approx \frac{4.6}{\zeta \omega_n}

2. Rise time:
With the normalized time \tau = \omega_n t the normalized step response (3.19) can be written as

    y_s(\tau) = 1 - e^{-\zeta \tau} \left( \cos(\tau \sqrt{1 - \zeta^2}) + \frac{\zeta}{\sqrt{1 - \zeta^2}} \sin(\tau \sqrt{1 - \zeta^2}) \right)
We can now compute functions \tau_1(\zeta) and \tau_2(\zeta), such that for any choice of \zeta we find

    y_s(\tau_1(\zeta)) = 0.1 , \qquad y_s(\tau_2(\zeta)) = 0.9

[Figure: step response with the 10% and 90% levels, the \pm 1\% band, the rise time t_r and the settling time t_s indicated]

Define the function \tau_r(\zeta) = \tau_2(\zeta) - \tau_1(\zeta), then the rise time t_r is given by:

    t_r = \frac{\tau_r(\zeta)}{\omega_n}

[Figure: \tau_r(\zeta) as a function of \zeta for 0 \le \zeta \le 0.75; a frequently used approximation is t_r \approx 1.8 / \omega_n]

Note that t_r is inversely proportional to \omega_n. Increasing \omega_n will decrease the rise time t_r, and vice versa.
3. Peak time:
To compute the peak time we set the derivative of the normalized response (3.19) to zero:

    \dot{y}_s(t) = \frac{\omega_n}{\sqrt{1 - \zeta^2}}\, e^{-\sigma t} \sin \omega_d t = 0

The first maximum occurs when \omega_d t_p = \pi, so the peak time is

    t_p = \frac{\pi}{\omega_d}

4. Overshoot:
To compute the overshoot we substitute t_p = \pi / \omega_d into (3.19):

    y_s(t_p) = 1 + e^{-\sigma \pi / \omega_d}

and so the overshoot is given by

    M_p = \frac{y_s(t_p) - y_s(\infty)}{y_s(\infty)} = e^{-\sigma \pi / \omega_d} = e^{-\pi \zeta / \sqrt{1 - \zeta^2}}

[Figure: overshoot M_p as a function of \zeta]

[Figure 3.12: Response for various pole locations]
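The overshoot and peak-time formulas can be cross-checked against the normalized step response (3.19); a sketch (the value \zeta = 0.3 is illustrative, not from the notes):

```python
# Overshoot M_p = exp(-pi zeta / sqrt(1 - zeta^2)) and peak time
# t_p = pi / wd, checked against the normalized step response (3.19).
import math

def overshoot(zeta):
    return math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta**2))

def peak_time(zeta, wn):
    wd = wn * math.sqrt(1.0 - zeta**2)
    return math.pi / wd

def ys_norm(t, zeta, wn):
    """Normalized step response (3.19), steady-state value 1."""
    sigma, wd = zeta * wn, wn * math.sqrt(1.0 - zeta**2)
    return 1.0 - math.exp(-sigma * t) * (math.cos(wd * t)
           + zeta / math.sqrt(1.0 - zeta**2) * math.sin(wd * t))

zeta, wn = 0.3, 1.0
tp = peak_time(zeta, wn)
print(overshoot(zeta))                 # about 0.37 for zeta = 0.3
print(ys_norm(tp, zeta, wn) - 1.0)     # same value, read off the response
```

Evaluating (3.19) at t_p reproduces the overshoot formula exactly, which is a useful sanity check on the derivation.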
Note that y_h(t) decays to zero as t approaches infinity if e^{\lambda t} decays to zero, which is the case for \lambda < 0. This means that a critically damped second-order system is stable if \lambda < 0 and unstable if \lambda > 0.
Let \lambda_1, \lambda_2 be the roots of the characteristic equation of a second-order system. In general we can say that the second-order system will be stable if there holds

    \mathrm{Re}(\lambda_i) < 0 , \quad i = 1, 2

or, in other words, if the poles are in the left half of the complex plane.
The relation between various pole locations and the system response is sketched in Figure 3.12. For a pole with a negative real part we see a decaying response; for a pole with a positive real part we have an increasing response. If the pole has an imaginary part, there is an oscillation. The frequency of the oscillation grows with increasing imaginary part. If a single pole is in the origin, the response will contain a step function. If the pole is on the imaginary axis (but not in the origin), the response is an oscillation with constant amplitude. For multiple poles on the imaginary axis, the response will be unstable.
Example 3.4 Consider the translational mechanical system in Figure 3.5.c with mass m = 2 kg, spring constant k = 8 N/m, damping b = 4\sqrt{3} Ns/m, and let the external force be given by f_e(t) = 10\, \delta(t). The differential equation is given by

    \ddot{y}(t) + 2 \sqrt{3}\, \dot{y}(t) + 4 y(t) = 0.5 f_e(t)

and so the forcing function is f(t) = 0.5 f_e(t) = 5\, \delta(t). We compute the undamped natural frequency \omega_n = 2, \zeta = 0.5 \sqrt{3}, \sigma = \zeta \omega_n = \sqrt{3}, and \omega_d = \omega_n \sqrt{1 - \zeta^2} = 1. With \zeta < 1 we have an underdamped system and so for f(t) = \delta(t) we find

    y_\delta(t) = \frac{1}{\omega_d}\, e^{-\sigma t} \sin \omega_d t = e^{-\sqrt{3}\, t} \sin t , \quad t \ge 0

In our case we have f(t) = 5\, \delta(t) and so the impulse response of this mechanical system is given by

    y(t) = 5\, y_\delta(t) = 5\, e^{-\sqrt{3}\, t} \sin t , \quad t \ge 0

3.3 Exercises
(a) \dot{y}(t) + 3 y(t) = u(t)
(b) 4 \dot{y}(t) - 2 y(t) = 4 u(t)
(c) \ddot{y}(t) + 3 \dot{y}(t) - y(t) = u(t)
(d) 4 \ddot{y}(t) + 2 \dot{y}(t) + y(t) = u(t)
Chapter 4
General system analysis
In this chapter we will analyze single-input single-output linear time-invariant systems,
which are described by the linear differential equation:
    \frac{d^n y(t)}{d t^n} + a_1 \frac{d^{n-1} y(t)}{d t^{n-1}} + \ldots + a_{n-1} \frac{d\, y(t)}{d t} + a_n y(t)
    = b_0 \frac{d^m u(t)}{d t^m} + b_1 \frac{d^{m-1} u(t)}{d t^{m-1}} + \ldots + b_{m-1} \frac{d\, u(t)}{d t} + b_m u(t) .    (4.1)
where we assume in this chapter that m n. In this chapter we will generalize the
results of the first-order and second-order systems of Chapter 3 to higher order systems.
To facilitate the discussion we will introduce the new concepts of transfer function and
convolution. Further we will consider the computation of the time response and show the
relation between various equivalent system descriptions.
4.1 Transfer functions
In Section 2.3 we have introduced the Laplace transformation and claimed that if the system is initially at rest at time t = 0 (which means that the output y(0) = 0 and all its derivatives \frac{d^n y(t)}{d t^n} \big|_{t=0} = 0 as well), then

    \mathcal{L} \left\{ \frac{d^n y(t)}{d t^n} \right\} = s^n Y(s)    (4.2)
With U(s) = \mathcal{L}\{u(t)\} and Y(s) = \mathcal{L}\{y(t)\}, Equation (4.1) can be written as

    \left( s^n + a_1 s^{n-1} + \ldots + a_{n-1} s + a_n \right) Y(s) = \left( b_0 s^m + b_1 s^{m-1} + \ldots + b_{m-1} s + b_m \right) U(s)    (4.3)

and if we define a(s) = s^n + a_1 s^{n-1} + \ldots + a_{n-1} s + a_n and b(s) = b_0 s^m + b_1 s^{m-1} + \ldots + b_{m-1} s + b_m we obtain

    a(s) Y(s) = b(s) U(s)    (4.4)

If we now define

    H(s) = \frac{b(s)}{a(s)} ,    (4.5)

then H(s) is called the transfer function, which describes the relation between the Laplace transform of the input signal and the Laplace transform of the output signal. Note that

    H(s) = \frac{b(s)}{a(s)} = \frac{b_0 s^m + b_1 s^{m-1} + \ldots + b_{m-2} s^2 + b_{m-1} s + b_m}{s^n + a_1 s^{n-1} + \ldots + a_{n-2} s^2 + a_{n-1} s + a_n}    (4.6)

The transfer function provides a complete representation of a linear system in the Laplace domain. The order of the transfer function is defined as the order of the denominator polynomial and is therefore equal to n, the order of the differential equation of the system.
Type               | Differential equation                                    | Transfer function
Integrator         | \dot{y} = u                                              | 1/s
Differentiator     | y = \dot{u}                                              | s
First-order system | \dot{y} + a y = u                                        | 1/(s + a)
Double integrator  | \ddot{y} = u                                             | 1/s^2
Damped oscillator  | \ddot{y} + 2 \zeta \omega_n \dot{y} + \omega_n^2 y = u   | 1/(s^2 + 2 \zeta \omega_n s + \omega_n^2)
For a complex argument s = \sigma + j \omega we can write the transfer function in polar form as

    H(\sigma + j \omega) = M(\sigma + j \omega)\, e^{j \varphi(\sigma + j \omega)}    (4.7)

where

    M(\sigma + j \omega) = |H(\sigma + j \omega)|    (4.8)
    \varphi(\sigma + j \omega) = \angle H(\sigma + j \omega)    (4.9)

For an input u(t) = e^{\sigma t} \cos(\omega t) the output then satisfies

    y(t) = M(\sigma + j \omega)\, e^{\sigma t} \cos(\omega t + \varphi(\sigma + j \omega))    (4.10)

and for u(t) = e^{\sigma t} \sin(\omega t)

    y(t) = M(\sigma + j \omega)\, e^{\sigma t} \sin(\omega t + \varphi(\sigma + j \omega))    (4.11)

4.2 Time responses
In this section we will derive the time response of linear time-invariant systems of the form (4.1), which can be written as

    \frac{d^n y(t)}{d t^n} + a_1 \frac{d^{n-1} y(t)}{d t^{n-1}} + \ldots + a_{n-1} \frac{d\, y(t)}{d t} + a_n y(t) = f(t)    (4.12)

where the forcing function f(t) = b_0 \frac{d^m u(t)}{d t^m} + b_1 \frac{d^{m-1} u(t)}{d t^{m-1}} + \ldots + b_{m-1} \frac{d\, u(t)}{d t} + b_m u(t) is assumed to be known, and where we have initial conditions

    \frac{d^{n-1} y(t)}{d t^{n-1}} \bigg|_{t=0} = c_{n-1} , \; \ldots , \; \frac{d\, y(t)}{d t} \bigg|_{t=0} = c_1 , \; y(0) = c_0    (4.13)
The procedure to find the signal y(t) that is a solution of differential equation (4.12) and at the same time satisfies the initial conditions (4.13) consists of three parts:

1. Compute the homogeneous solution y_h(t), t \ge 0 that satisfies the so-called homogeneous equation

    \frac{d^n y(t)}{d t^n} + a_1 \frac{d^{n-1} y(t)}{d t^{n-1}} + \ldots + a_{n-1} \frac{d\, y(t)}{d t} + a_n y(t) = 0    (4.14)

2. Compute a particular solution y_p(t), t \ge 0 that satisfies the differential equation (4.12).

3. Combine the homogeneous and particular solutions and determine the free coefficients such that the initial conditions (4.13) are satisfied.
Homogeneous solution
In the previous section we already observed that solutions of the form e^{\lambda t}, for possibly complex values of \lambda, play an important role in solving first-order and second-order differential equations. The exponential function is one of the few functions that keeps its shape even after differentiation. In order for the sum of multiple derivatives of a function to sum up to zero, the derivatives must cancel each other out, and the only way for them to do so is for the derivatives to have the same form as the initial function. Thus, to solve

    \frac{d^n y(t)}{d t^n} + a_1 \frac{d^{n-1} y(t)}{d t^{n-1}} + \ldots + a_{n-1} \frac{d\, y(t)}{d t} + a_n y(t) = 0    (4.15)

we set y = e^{\lambda t}, leading to

    \lambda^n e^{\lambda t} + a_1 \lambda^{n-1} e^{\lambda t} + \cdots + a_n e^{\lambda t} = 0.

Division by e^{\lambda t} gives the nth-order polynomial equation

    a(\lambda) = \lambda^n + a_1 \lambda^{n-1} + \cdots + a_n = 0.

This equation a(\lambda) = 0 is the characteristic equation and a is defined to be the characteristic polynomial.
The polynomial a(\lambda) is of nth order and so there are n roots (\lambda_1, \ldots, \lambda_n) of the characteristic polynomial. If all roots \lambda_i are distinct, we obtain n possible solutions e^{\lambda_i t} (i = 1, \ldots, n). Note that because of linearity, any linear combination of possible solutions will satisfy the homogeneous equation (4.15). This gives us the most general form of the homogeneous solution:

    y_h(t) = C_1 e^{\lambda_1 t} + C_2 e^{\lambda_2 t} + \ldots + C_n e^{\lambda_n t} = \sum_{i=1}^{n} C_i e^{\lambda_i t}

with C_i \in \mathbb{C}.
Example 4.1 Consider the homogeneous equation

    \frac{d^3 y(t)}{d t^3} + 10 \frac{d^2 y(t)}{d t^2} + 31 \frac{d\, y(t)}{d t} + 30 y(t) = 0

The characteristic equation is:

    \lambda^3 + 10 \lambda^2 + 31 \lambda + 30 = (\lambda + 2)(\lambda + 3)(\lambda + 5) = 0

We find three roots: \lambda_1 = -2, \lambda_2 = -3, \lambda_3 = -5, and so the homogeneous solution becomes

    y_h(t) = C_1 e^{-2 t} + C_2 e^{-3 t} + C_3 e^{-5 t}

with C_1, C_2, C_3 \in \mathbb{R}.
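The factorization in Example 4.1 is easy to verify numerically — a quick sketch:

```python
# The characteristic polynomial of Example 4.1 must vanish at each claimed
# root lambda = -2, -3, -5.

def char_poly(lam):
    return lam**3 + 10 * lam**2 + 31 * lam + 30

for root in (-2, -3, -5):
    print(root, char_poly(root))   # each evaluates to 0
```

Since a(\lambda) vanishes at all three roots and is monic of degree 3, it must equal (\lambda + 2)(\lambda + 3)(\lambda + 5).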
Multiple roots:
If one of the roots \lambda_i has a multiplicity equal to m (with m > 1), then

    y = t^k e^{\lambda_i t} \quad \text{for } k = 0, 1, \ldots, m-1

is a solution of the homogeneous equation. Applying this to all roots gives a collection of n distinct and linearly independent solutions, where n is the order of the system.
Example 4.3 Consider the homogeneous equation

    \frac{d^4 y(t)}{d t^4} + 2 \frac{d^3 y(t)}{d t^3} + 2 \frac{d^2 y(t)}{d t^2} + 2 \frac{d\, y(t)}{d t} + y(t) = 0

with characteristic equation:

    \lambda^4 + 2 \lambda^3 + 2 \lambda^2 + 2 \lambda + 1 = (\lambda + 1)^2 (\lambda + j)(\lambda - j) = 0

We find three roots: \lambda_1 = j, \lambda_2 = -j, \lambda_3 = -1 (multiplicity 2), and so the homogeneous solution becomes

    y_h(t) = C_1 e^{j t} + C_2 e^{-j t} + C_3 e^{-t} + C_4 t\, e^{-t}

As in Example 4.2, the coefficients C_1 = \alpha + j \beta and C_2 = \alpha - j \beta will be complex conjugates of each other with \alpha, \beta \in \mathbb{R}. We derive

    y_h(t) = C_1 e^{j t} + C_2 e^{-j t} + C_3 e^{-t} + C_4 t\, e^{-t} = 2 \alpha \cos t - 2 \beta \sin t + C_3 e^{-t} + C_4 t\, e^{-t}

with \alpha, \beta, C_3, C_4 \in \mathbb{R}.
Particular solution
To obtain the solution to the non-homogeneous equation (sometimes called the inhomogeneous equation), we need to find a particular solution y_p(t), t \ge 0 that satisfies the differential equation (4.12) (but without considering the initial conditions).

Particular solution for an exponential function
We start by considering the particular solution of differential equation (4.12) for an exponential input u(t) = e^{s t} where s \in \mathbb{C}. First note that the derivatives of the input function are given by

    \frac{d\, u(t)}{d t} = s\, e^{s t} , \quad \frac{d^2 u(t)}{d t^2} = s^2 e^{s t} , \; \ldots , \; \frac{d^m u(t)}{d t^m} = s^m e^{s t} .

We observe that all the derivatives are scaled versions of the exponential input e^{s t}. Therefore assume that y(t) has the same shape as u(t), so let

    y(t) = C\, e^{s t}

for the same fixed complex value s. We can compute the derivatives of y:

    \frac{d\, y(t)}{d t} = s\, C\, e^{s t} , \quad \frac{d^2 y(t)}{d t^2} = s^2 C\, e^{s t} , \; \ldots , \; \frac{d^n y(t)}{d t^n} = s^n C\, e^{s t}

Substitution gives us:

    s^n C e^{s t} + a_1 s^{n-1} C e^{s t} + \ldots + a_{n-1} s\, C e^{s t} + a_n C e^{s t} = b_0 s^m e^{s t} + b_1 s^{m-1} e^{s t} + \ldots + b_{m-1} s\, e^{s t} + b_m e^{s t} ,    (4.16)

so a(s)\, C = b(s), or

    C = \frac{b(s)}{a(s)} = H(s) ,    (4.17)

and the particular solution for u(t) = e^{s t} is

    y(t) = H(s)\, e^{s t} , \quad t \ge 0    (4.18)
Using property (4.18) we can compute the particular solution for forcing functions that are singularity, harmonic and damped harmonic functions.
If we consider u(t) = e^{s t}, t \ge 0 for s = 0, we find u(t) = e^{s t}|_{s=0} = 1, t \ge 0, and so the particular solution for a step function is given by

    y(t) = H(s)\, e^{s t} \big|_{s=0} = \frac{b_m}{a_n} , \quad t \ge 0    (4.19)

From (4.19) we know that a particular solution for a step input is given by

    y_s(t) = H(0) = \frac{b_m}{a_n} \quad \text{for } t \ge 0

For a unit ramp input u_r(t) = t, t \ge 0, the particular solution has the form y_r(t) = c_0 t + c_1 with

    c_0 = \frac{b_m}{a_n} , \qquad c_1 = \frac{b_{m-1} - a_{n-1} c_0}{a_n} = \frac{a_n b_{m-1} - a_{n-1} b_m}{a_n^2} .
The parabolic response of this system for a parabolic input u_p(t) = t^2/2, t \ge 0 is given by

    y_p(t) = \int y_r(t)\, d t = c_0 \frac{t^2}{2} + c_1 t + c_2 , \quad \text{for } t \ge 0

The particular solution of this system for a unit parabolic input is given by

    y_p(t) = \int y_r(t)\, d t = \frac{1}{12} t^2 + \frac{1}{36} t - \frac{11}{216} \quad \text{for } t \ge 0

Higher order singularity function
For a function

    u(t) = \frac{1}{k!} t^k

for k > 2 we can also compute the particular solution

    y(t) = c_0 \frac{1}{k!} t^k + c_1 \frac{1}{(k-1)!} t^{k-1} + \ldots + c_{k-1} t + c_k
For a damped harmonic input we let s = \sigma + j \omega. If we write

    H(\sigma + j \omega) = M(\sigma + j \omega)\, e^{j \varphi(\sigma + j \omega)}    (4.20)

then, using the similar properties as (4.10) and (4.11), the response to u(t) = e^{\sigma t} \cos(\omega t) = \frac{1}{2} e^{(\sigma + j \omega) t} + \frac{1}{2} e^{(\sigma - j \omega) t} leads to

    y(t) = \frac{1}{2} H(\sigma + j \omega)\, e^{(\sigma + j \omega) t} + \frac{1}{2} H(\sigma - j \omega)\, e^{(\sigma - j \omega) t}

The general solution to the linear differential equation is the sum of the general solution of the related homogeneous equation and the particular solution. First note that the signal y(t) = y_p(t) + y_h(t) satisfies (4.12) for any C_1, \ldots, C_n \in \mathbb{C}. To compute the correct C_1, \ldots, C_n we have to make sure that the initial conditions are satisfied.
Input u(t), t \ge 0          | Output y_p(t), t \ge 0
u_s(t) = 1                   | H(0)
\frac{1}{n!} t^n             | c_0 \frac{1}{n!} t^n + c_1 \frac{1}{(n-1)!} t^{n-1} + \ldots + c_{n-1} t + c_n
e^{\lambda t}                | H(\lambda)\, e^{\lambda t}
e^{j \omega t}               | H(j \omega)\, e^{j \omega t}
\cos(\omega t)               | M(j \omega) \cos(\omega t + \varphi(j \omega))
\sin(\omega t)               | M(j \omega) \sin(\omega t + \varphi(j \omega))
e^{\sigma t} \cos(\omega t)  | M(\sigma + j \omega)\, e^{\sigma t} \cos(\omega t + \varphi(\sigma + j \omega))
e^{\sigma t} \sin(\omega t)  | M(\sigma + j \omega)\, e^{\sigma t} \sin(\omega t + \varphi(\sigma + j \omega))
Frequency response
We will now give special attention to the particular solution for a sinusoidal input. We already discussed that the sinusoid can be expressed as a sum of two harmonic functions

    u(t) = \cos \omega t = \frac{1}{2} e^{j \omega t} + \frac{1}{2} e^{-j \omega t}

If we let s = j \omega, then the response to u(t) = \cos \omega t is equal to

    y(t) = \frac{1}{2} H(j \omega)\, e^{j \omega t} + \frac{1}{2} H(-j \omega)\, e^{-j \omega t}

Note that similar to (4.20) we can define

    H(j \omega) = M(j \omega)\, e^{j \varphi(j \omega)}    (4.21)

with

    M(j \omega) = |H(j \omega)| \quad \text{and} \quad \varphi(j \omega) = \angle H(j \omega)    (4.22)

This means that if a system represented by the transfer function H(s) has a sinusoidal input, the output will be sinusoidal at the same frequency with magnitude M(j \omega) and will be shifted in phase by an angle \varphi(j \omega).
Example 4.6 (Frequency response of a second-order system) Consider a second-order system with input u(t) and output y(t), satisfying the differential equation

    \ddot{y}(t) + 0.1 \dot{y}(t) + y(t) = u(t)

The transfer function of this system is given by

    H(s) = \frac{1}{s^2 + 0.1 s + 1}

and so

    H(j \omega) = M(j \omega)\, e^{j \varphi(j \omega)} = \frac{1}{-\omega^2 + j\, 0.1 \omega + 1}

with

    M(j \omega) = \left| \frac{1}{-\omega^2 + j\, 0.1 \omega + 1} \right| = \frac{1}{\sqrt{(1 - \omega^2)^2 + (0.1 \omega)^2}} = \frac{1}{\sqrt{\omega^4 - 1.99\, \omega^2 + 1}}

    \varphi(j \omega) = \angle H(j \omega) = -\arctan \left( \frac{0.1 \omega}{1 - \omega^2} \right)

Figure 4.3 shows the plots of M(j \omega) and \varphi(j \omega) as a function of the frequency \omega. These plots are called the Bode plots of the system (named after H.W. Bode). The frequencies on the horizontal axis are customarily given on a logarithmic scale. The magnitude M(j \omega) is given in decibels (dB), which means that we plot 20 \log[M(j \omega)] on the vertical axis. For the phase \varphi(j \omega) we use a linear scale on the vertical axis.
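The magnitude and phase of Example 4.6 can also be obtained directly with complex arithmetic, without working out the closed-form expressions; a sketch:

```python
# Evaluate H(jw) = 1 / ((jw)^2 + 0.1 jw + 1) at a sample frequency and
# read off magnitude and phase numerically.
import cmath

def H(s):
    """Transfer function of Example 4.6: H(s) = 1 / (s^2 + 0.1 s + 1)."""
    return 1.0 / (s**2 + 0.1 * s + 1.0)

w = 1.0                      # evaluate at the undamped natural frequency
Hjw = H(1j * w)
M = abs(Hjw)                 # magnitude M(jw); at w = 1 this is 1/0.1 = 10
phi = cmath.phase(Hjw)       # phase in radians; at w = 1 this is -pi/2
print(M, phi)
```

At \omega = 1 (the resonance of this lightly damped system) the response is amplified by a factor 10 (20 dB) and lags 90 degrees, matching the Bode plots described above.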
72
20 log[M(j)]
20
40
101
100
1
10
101
100
1
10
(j)
90
180
4.3

In this section we will derive the time response of the linear time-invariant system (4.12), with initial conditions

    \frac{d^{n-1} y(t)}{d t^{n-1}} \bigg|_{t=0} = c_{n-1} , \; \ldots , \; \frac{d\, y(t)}{d t} \bigg|_{t=0} = c_1 , \; y(0) = c_0    (4.23)

using the Laplace transform. In Section 4.1 we have introduced the notion of transfer function and have given the property for an initially-at-rest system:

    \mathcal{L} \left\{ \frac{d^k y(t)}{d t^k} \right\} = s^k Y(s)    (4.24)
where Y(s) is the Laplace transform of y(t). For the computation of the time response for a system that is not initially at rest (so c_0, \ldots, c_{n-1} in (4.23) are not all equal to zero), property (4.24) can be extended into the following property

    \mathcal{L} \left\{ \frac{d^k y(t)}{d t^k} \right\} = s^k Y(s) - \sum_{i=0}^{k-1} s^{k-1-i} \frac{d^i y(t)}{d t^i} \bigg|_{t=0}    (4.25)
    = s^k Y(s) - \sum_{i=0}^{k-1} s^{k-1-i} c_i    (4.26)

For example, for k = 2 we obtain

    \mathcal{L} \left\{ \frac{d^2 y(t)}{d t^2} \right\} = s^2 Y(s) - s\, c_0 - c_1
Consider the differential equation

    \frac{d^n y(t)}{d t^n} + a_1 \frac{d^{n-1} y(t)}{d t^{n-1}} + \ldots + a_{n-1} \frac{d\, y(t)}{d t} + a_n y(t) = f(t) .    (4.27)

for a known forcing function f(t). Applying the Laplace transformation we obtain

    \mathcal{L} \left\{ \frac{d^n y(t)}{d t^n} \right\} + \mathcal{L} \left\{ a_1 \frac{d^{n-1} y(t)}{d t^{n-1}} \right\} + \ldots + \mathcal{L} \left\{ a_{n-1} \frac{d\, y(t)}{d t} \right\} + a_n \mathcal{L} \{ y(t) \} = \mathcal{L} \{ f(t) \} .    (4.28)

With property (4.26) this becomes

    s^n Y(s) - \sum_{i=0}^{n-1} s^{n-1-i} c_i + a_1 s^{n-1} Y(s) - a_1 \sum_{i=0}^{n-2} s^{n-2-i} c_i + \ldots    (4.29)
    \ldots + a_{n-1} \left( s\, Y(s) - c_0 \right) + a_n Y(s) = F(s)    (4.30)
where Y(s) and F(s) are the Laplace transforms of y(t) and f(t), respectively. From this equation we can find an expression for Y(s). We can now compute y(t) for t \ge 0 from Y(s), using the inverse Laplace transformation. Let Y(s), s \in \mathbb{C} be the Laplace transform of y(t), then the inverse Laplace transformation is defined as follows:

    y(t) = \mathcal{L}^{-1} \{ Y(s) \} = \frac{1}{2 \pi j} \int_{\sigma - j \infty}^{\sigma + j \infty} Y(s)\, e^{s t}\, d s , \quad t \ge 0    (4.31)

A problem is that the inverse Laplace integral is not always easy to compute. An alternative way to find \mathcal{L}^{-1}\{Y(s)\} is carrying out a partial fraction expansion, in which Y(s) is broken into components

    Y(s) = Y_1(s) + Y_2(s) + \ldots + Y_n(s)

for which the inverse Laplace transforms of Y_1(s), \ldots, Y_n(s) are available from Table 2.1 or the table in Appendix B. The method using partial fraction expansion is possible because the inverse Laplace transformation is a linear operation, so:

    \mathcal{L}^{-1} \{ F_1(s) + F_2(s) \} = \mathcal{L}^{-1} \{ F_1(s) \} + \mathcal{L}^{-1} \{ F_2(s) \}    (4.32)
The procedure to compute the output signal using the Laplace transformation now consists of three steps:

Step 1: We compute the Laplace transform F(s) of the forcing signal f(t). We can use Equation (2.8) or we can use Table 2.1, where the Laplace transforms of a number of known signals are given.

Step 2: In the second step we substitute (4.26) and obtain an expression for the Laplace transform Y(s).

Step 3: Finally, we split Y(s) up in pieces by the so-called partial fraction expansion. For every term we apply the inverse Laplace transform using Table 2.1 or the table in Appendix B to retrieve the output signal y(t) of the original problem.
Consider a Laplace transform of the form

    Y(s) = \frac{b_1 s^m + b_2 s^{m-1} + \ldots + b_m s + b_{m+1}}{s^n + a_1 s^{n-1} + \ldots + a_{n-1} s + a_n}

Let p_1, \ldots, p_n be the roots of the polynomial a(s) = s^n + a_1 s^{n-1} + \ldots + a_{n-1} s + a_n. Then a(s) can be written as

    a(s) = (s - p_1)(s - p_2) \cdots (s - p_{n-1})(s - p_n) = \prod_{i=1}^{n} (s - p_i)

If all poles p_i are distinct, Y(s) can be written as a sum of partial fractions

    Y(s) = \frac{C_1}{s - p_1} + \frac{C_2}{s - p_2} + \ldots + \frac{C_n}{s - p_n}

where the coefficients can be computed as

    C_i = \left[ (s - p_i)\, Y(s) \right]_{s = p_i}    (4.33)

If a pole \mu has multiplicity r, the expansion becomes

    Y(s) = \frac{\gamma_0}{(s - \mu)^r} + \frac{\gamma_1}{(s - \mu)^{r-1}} + \ldots + \frac{\gamma_{r-1}}{s - \mu} + \frac{C_{r+1}}{s - p_{r+1}} + \ldots + \frac{C_n}{s - p_n}

where the C_i's are determined using Equation (4.33), and the \gamma_k's can be computed using:

    \gamma_k = \frac{1}{k!} \frac{d^k}{d s^k} \left[ (s - \mu)^r\, Y(s) \right]_{s = \mu} , \quad \text{for } k = 0, \ldots, r-1
If we have a complex pole in p_1 = -\alpha + j \beta then we will also find the complex conjugate p_2 = -\alpha - j \beta as a pole. The corresponding coefficients C_1 and C_2 will be complex conjugates of each other, so if C_1 = \gamma + j \delta, then C_2 = \gamma - j \delta and so

    \frac{C_1}{s - p_1} + \frac{C_2}{s - p_2} = 2 \gamma\, \frac{s + \alpha}{(s + \alpha)^2 + \beta^2} - 2 \delta\, \frac{\beta}{(s + \alpha)^2 + \beta^2}

This leads to the solution

    y(t) = 2 \gamma\, \mathcal{L}^{-1} \left\{ \frac{s + \alpha}{(s + \alpha)^2 + \beta^2} \right\} - 2 \delta\, \mathcal{L}^{-1} \left\{ \frac{\beta}{(s + \alpha)^2 + \beta^2} \right\} = 2 \gamma\, e^{-\alpha t} \cos(\beta t) - 2 \delta\, e^{-\alpha t} \sin(\beta t)
Example 4.7 Consider the function

    Y(s) = 6\, \frac{(s + 2)(s + 4)}{s (s + 1)(s + 3)}

This function has distinct poles p_1 = 0, p_2 = -1, p_3 = -3, so we can write

    Y(s) = \frac{C_1}{s} + \frac{C_2}{s + 1} + \frac{C_3}{s + 3}

and using Equation (4.33) we compute the coefficients

    C_1 = 16 , \quad C_2 = -9 , \quad C_3 = -1

so that y(t) = 16 - 9\, e^{-t} - e^{-3 t} for t \ge 0.
Example 4.8 Consider the function

    Y(s) = \frac{s + 3}{(s + 1)(s + 2)^2}

This system has a multiple pole at s = -2 with multiplicity 2. We can rewrite Y(s) as a sum of partial fractions:

    Y(s) = \frac{C_1}{s + 1} + \frac{C_2}{s + 2} + \frac{C_3}{(s + 2)^2} = \frac{2}{s + 1} - \frac{2}{s + 2} - \frac{1}{(s + 2)^2}

and so

    y(t) = 2\, e^{-t} - 2\, e^{-2 t} - t\, e^{-2 t} , \quad t \ge 0
Example 4.9 Consider the function

    Y(s) = \frac{10}{s (s^2 + 2 s + 5)}

This system has a complex pole pair at s = -1 \pm j 2. We can rewrite it as a sum of partial fractions:

    Y(s) = \frac{C_1}{s} + \frac{C_2 (s + 1)}{(s + 1)^2 + 2^2} + \frac{C_3 \cdot 2}{(s + 1)^2 + 2^2}

and using Equation (4.33) we can compute the coefficients:

    Y(s) = \frac{2}{s} - \frac{2 s + 4}{s^2 + 2 s + 5} = \frac{2}{s} - \frac{2 (s + 1)}{(s + 1)^2 + 2^2} - \frac{1 \cdot 2}{(s + 1)^2 + 2^2}

and so

    y(t) = 2 - 2\, e^{-t} \cos(2 t) - e^{-t} \sin(2 t) , \quad t \ge 0
4.4

In this section we will show that the time response of a linear time-invariant system can be completely characterized in terms of the unit impulse response of the system. Recall that the impulse response of a system can be determined by exciting the system with a unit impulse \delta(t) and observing the corresponding output y(t). We will denote this impulse response as h(t).
To develop the theory of convolution we will start by considering the rectangular function \Delta_T(t), t \in \mathbb{R} for a small scalar value T > 0 as defined in Section 1.1. First note that

    \Delta_T(t) = \frac{1}{T} \left( u_s(t) - u_s(t - T) \right)
[Figure: a unit impulse input u(t) = \delta(t) produces the impulse response y(t) = h(t); a rectangular pulse u(t) = \Delta_T(t) produces the pulse response y(t) = h_T(t); a shifted pulse u(t) = \Delta_T(t - \tau) produces the shifted response y(t) = h_T(t - \tau)]

Now approximate the input signal u(t) by a staircase of rectangular pulses of width T:

    u(t) \approx u_T(t) = \sum_k u(k T)\, T\, \Delta_T(t - k T)    (4.35)

By linearity and time-invariance, each pulse u(k T)\, T\, \Delta_T(t - k T) contributes a response u(k T)\, T\, h_T(t - k T), so the output is approximated by

    y(t) \approx y_T(t) = \sum_k u(k T)\, T\, h_T(t - k T)
As we let T approach 0, the approximation will improve, and in the limit y_T will be equal to the true y(t):

    y(t) = \lim_{T \to 0} y_T(t) = \lim_{T \to 0} \sum_k u(k T)\, T\, h_T(t - k T) = \int_{-\infty}^{\infty} u(\tau)\, h(t - \tau)\, d \tau    (4.37)

This is called the convolution integral. The convolution of two signals u and h will be represented symbolically as

    y(t) = u(t) * h(t)    (4.38)

This means that the input-output behavior of a linear time-invariant system can be described by either one of the following convolution integrals:

    y(t) = \int_{-\infty}^{\infty} u(\tau)\, h(t - \tau)\, d \tau , \qquad y(t) = \int_{-\infty}^{\infty} u(t - \tau)\, h(\tau)\, d \tau
4.5

Consider the state system

    \dot{x}(t) = A x(t) + B u(t) ,
    y(t) = C x(t) + D u(t) .    (4.39)

Before we compute this time response we first introduce the state transformation, which gives us the A, B, C and D matrices for a different set of state variables. A special state transformation is the modal transformation, which gives us a state system description in which the A-matrix is diagonal. This will facilitate the computations dramatically.
State transformation
There is no unique set of state variables that describes a given system; many different sets of variables may be selected to yield a complete system description.

Example 4.10 Consider the fluid flow system of Section 2.2, in which the dynamics between the two levels in the vessels and the inflow are described as follows:

    \dot{h}_1(t) = -\frac{g}{A_1 R_1} h_1(t) + \frac{g}{A_1 R_1} h_2(t) + \frac{1}{A_1} w_{in}(t)
    \dot{h}_2(t) = \frac{g}{A_2 R_1} h_1(t) - \frac{g (R_1 + R_2)}{A_2 R_1 R_2} h_2(t) .

With state x(t) = [h_1(t) \;\; h_2(t)]^T and input u(t) = w_{in}(t) this gives

    \dot{x}(t) = \begin{bmatrix} -\frac{g}{A_1 R_1} & \frac{g}{A_1 R_1} \\ \frac{g}{A_2 R_1} & -\frac{g (R_1 + R_2)}{A_2 R_1 R_2} \end{bmatrix} x(t) + \begin{bmatrix} \frac{1}{A_1} \\ 0 \end{bmatrix} u(t)

Now suppose we are interested in the mean level y(t) = (h_1(t) + h_2(t))/2, then the output equation is given by

    y(t) = \begin{bmatrix} 0.5 & 0.5 \end{bmatrix} x(t) + 0 \cdot u(t)
Note that we could have chosen the states in a different way, for example

    \tilde{x}(t) = \begin{bmatrix} (h_1(t) + h_2(t))/2 \\ (h_1(t) - h_2(t))/2 \end{bmatrix} .

In other words, the first state gives the average water level and the second state gives the difference divided by two. Differentiating the new states and substituting the equations for \dot{h}_1(t) and \dot{h}_2(t) gives a new set of equations:

    \frac{\dot{h}_1(t) + \dot{h}_2(t)}{2} = -\frac{g}{2 A_2 R_2} \frac{h_1(t) + h_2(t)}{2} + \left( -\frac{g}{A_1 R_1} + \frac{g}{A_2 R_1} + \frac{g}{2 A_2 R_2} \right) \frac{h_1(t) - h_2(t)}{2} + \frac{1}{2 A_1} w_{in}(t)

    \frac{\dot{h}_1(t) - \dot{h}_2(t)}{2} = \frac{g}{2 A_2 R_2} \frac{h_1(t) + h_2(t)}{2} - \left( \frac{g}{A_1 R_1} + \frac{g}{A_2 R_1} + \frac{g}{2 A_2 R_2} \right) \frac{h_1(t) - h_2(t)}{2} + \frac{1}{2 A_1} w_{in}(t)

so that

    \dot{\tilde{x}}(t) = \begin{bmatrix} -\frac{g}{2 A_2 R_2} & -\frac{g}{A_1 R_1} + \frac{g}{A_2 R_1} + \frac{g}{2 A_2 R_2} \\ \frac{g}{2 A_2 R_2} & -\frac{g}{A_1 R_1} - \frac{g}{A_2 R_1} - \frac{g}{2 A_2 R_2} \end{bmatrix} \tilde{x}(t) + \begin{bmatrix} \frac{1}{2 A_1} \\ \frac{1}{2 A_1} \end{bmatrix} u(t)

and the output equation:

    y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} \tilde{x}(t) + 0 \cdot u(t)

The first system with state x(t) and the second system with state \tilde{x}(t) both describe the same physical system with the same input u(t) = w_{in}(t) and the same output y(t) = (h_1(t) + h_2(t))/2, but the state description is different because of a different choice of states.
In the previous example we have introduced a state transformation

    x(t) = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \tilde{x}(t)

or

    x(t) = T\, \tilde{x}(t)    (4.40)

We will now consider how the system matrices will change if we introduce a state transformation (4.40) where T is non-singular (invertible). First note that the time derivative of the state x(t) is given by

    \dot{x}(t) = T\, \dot{\tilde{x}}(t)

Consider the original system

    \dot{x}(t) = A x(t) + B u(t)    (4.41)
    y(t) = C x(t) + D u(t)    (4.42)

By defining

    \tilde{x}(t) = T^{-1} x(t) , \quad \tilde{A} = T^{-1} A T , \quad \tilde{B} = T^{-1} B , \quad \tilde{C} = C T , \quad \tilde{D} = D

we arrive at the state system

    \dot{\tilde{x}}(t) = \tilde{A}\, \tilde{x}(t) + \tilde{B}\, u(t)
    y(t) = \tilde{C}\, \tilde{x}(t) + \tilde{D}\, u(t)    (4.43)
Modal transformation
Consider system (4.39). We will now give a closer look at the matrix A, which is called the system matrix of the system. The values \lambda_i satisfying the equation

    \lambda_i m_i = A m_i \quad \text{for } m_i \ne 0    (4.44)

are known as the eigenvalues of A and the corresponding column vectors m_i are defined as the eigenvectors. Equation (4.44) can be rewritten as:

    (\lambda_i I - A)\, m_i = 0    (4.45)

The condition for a non-trivial solution of such a set of linear equations is that

    \det(\lambda I - A) = 0    (4.46)

which is defined as the characteristic polynomial of the A matrix. Eq. (4.46) may be written as

    \lambda^n + a_1 \lambda^{n-1} + \ldots + a_{n-1} \lambda + a_n = 0    (4.47)

or, in factored form,

    (\lambda - \lambda_1)(\lambda - \lambda_2) \cdots (\lambda - \lambda_n) = 0    (4.48)

Let us consider the case that all eigenvalues are distinct. Define

    M = \begin{bmatrix} m_1 & m_2 & \cdots & m_n \end{bmatrix}    (4.49)

and

    \Lambda = \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{bmatrix} ,    (4.50)

then A M = M \Lambda and so

    \Lambda = M^{-1} A M .    (4.51)
Example 4.11 Consider the system

    \dot{x}(t) = \begin{bmatrix} -6 & 6 \\ -2 & 1 \end{bmatrix} x(t) + \begin{bmatrix} 1 \\ 1 \end{bmatrix} u(t) , \qquad y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} x(t) + 4\, u(t) .

The eigenvalues of the system matrix are \lambda_1 = -2 and \lambda_2 = -3, with eigenvectors

    m_1 = \begin{bmatrix} 3 \\ 2 \end{bmatrix} , \quad m_2 = \begin{bmatrix} 2 \\ 1 \end{bmatrix} , \quad \text{so} \quad M = \begin{bmatrix} 3 & 2 \\ 2 & 1 \end{bmatrix} , \quad M^{-1} = \begin{bmatrix} -1 & 2 \\ 2 & -3 \end{bmatrix} .

The modal form is then given by

    \tilde{A} = M^{-1} A M = \Lambda = \begin{bmatrix} -2 & 0 \\ 0 & -3 \end{bmatrix} , \quad \tilde{B} = M^{-1} B = \begin{bmatrix} 1 \\ -1 \end{bmatrix} , \quad \tilde{C} = C M = \begin{bmatrix} 3 & 2 \end{bmatrix} , \quad \tilde{D} = D = 4 .
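The modal transformation of Example 4.11 can be verified by a direct matrix computation; a sketch using plain Python lists (the matrices below are those of the example as reconstructed here):

```python
# Verify that M^-1 A M is diagonal with the eigenvalues on the diagonal.

def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[-6, 6], [-2, 1]]
M = [[3, 2], [2, 1]]
Minv = [[-1, 2], [2, -3]]      # inverse of M (det M = -1)

Lam = matmul(Minv, matmul(A, M))
print(Lam)    # diagonal matrix of eigenvalues
```

The result is [[-2, 0], [0, -3]], i.e. the eigenvalues \lambda_1 = -2 and \lambda_2 = -3 appear on the diagonal as claimed.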
In the case of multiple eigenvalues, the modal transformation is not always possible. One
possible solution is to transform the A matrix into a modified-diagonal form or Jordan
form (see the book by Kailath [4]). The A matrix will then be almost diagonal in the sense
that its only non-zero entries lie on the diagonal and the superdiagonal (A superdiagonal
entry is one that is directly above and to the right of the main diagonal).
Homogeneous state response
The computation of the time response of a state system consists of two steps: First the state-variable response x(t) is determined by solving the first-order state equation (4.39.1), and then the state response is substituted into the algebraic output equation (4.39.2) in order to compute y(t). The state-variable response of a system described by (4.39) with zero input and an arbitrary set of initial conditions x(0) is the solution of the system of n homogeneous first-order differential equations

    \dot{x}(t) = A x(t) , \quad x(0) = x_0    (4.52)

The solution is given by

    x(t) = e^{A t} x_0 \quad \text{with} \quad e^{A t} = I + A t + \frac{(A t)^2}{2} + \ldots    (4.53)

where \Phi(t) = e^{A t} is defined as the state transition matrix. To compute this state transition matrix we need to compute the exponential of a matrix. This can be done by bringing the model into the modal form. We compute M and \Lambda according to (4.49) and (4.50). Then

    \Phi(t) = e^{A t} = I + A t + \frac{(A t)^2}{2} + \ldots
    = M M^{-1} + M \Lambda t\, M^{-1} + \frac{M \Lambda^2 t^2 M^{-1}}{2} + \ldots
    = M \left( I + \Lambda t + \frac{(\Lambda t)^2}{2} + \ldots \right) M^{-1}
    = M\, e^{\Lambda t} M^{-1}    (4.54)
For e^{\Lambda t} we derive:

    e^{\Lambda t} = I + \Lambda t + \frac{(\Lambda t)^2}{2} + \ldots = \begin{bmatrix} 1 + \lambda_1 t + \frac{(\lambda_1 t)^2}{2} + \ldots & & 0 \\ & \ddots & \\ 0 & & 1 + \lambda_n t + \frac{(\lambda_n t)^2}{2} + \ldots \end{bmatrix} = \begin{bmatrix} e^{\lambda_1 t} & & 0 \\ & \ddots & \\ 0 & & e^{\lambda_n t} \end{bmatrix}

Example 4.12 Consider the first-order system

    \dot{x}(t) = -3 x(t) + 2 u(t) , \qquad y(t) = 5 x(t) + 2 u(t).

The homogeneous state equation is \dot{x}(t) = -3 x(t). It follows that the 1 \times 1 state transition matrix is given by

    \Phi(t) = e^{-3 t} , \quad t \ge 0 .
Example 4.13 Consider the second-order system of Example 4.11. The homogeneous state differential equation is

    \dot{x}(t) = A x(t) = \begin{bmatrix} -6 & 6 \\ -2 & 1 \end{bmatrix} x(t)

The state transition matrix follows from the modal form:

    \Phi(t) = e^{A t} = M\, e^{\Lambda t} M^{-1} = \begin{bmatrix} 3 & 2 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} e^{-2 t} & 0 \\ 0 & e^{-3 t} \end{bmatrix} \begin{bmatrix} -1 & 2 \\ 2 & -3 \end{bmatrix}
    = \begin{bmatrix} -3 e^{-2 t} + 4 e^{-3 t} & 6 e^{-2 t} - 6 e^{-3 t} \\ -2 e^{-2 t} + 2 e^{-3 t} & 4 e^{-2 t} - 3 e^{-3 t} \end{bmatrix}
Note that for a diagonal matrix

    A = \begin{bmatrix} a_{11} & & 0 \\ & \ddots & \\ 0 & & a_{nn} \end{bmatrix}

the state transition matrix follows directly:

    e^{A t} = \begin{bmatrix} e^{a_{11} t} & & 0 \\ & \ddots & \\ 0 & & e^{a_{nn} t} \end{bmatrix}
The forced response of a state system
Let us now determine the solution of the inhomogeneous state equation

    \dot{x}(t) = A x(t) + B u(t)    (4.55)

We can rewrite this as

    \dot{x}(t) - A x(t) = B u(t)

Multiplying from the left by e^{-A t} gives \frac{d}{d t} \left( e^{-A t} x(t) \right) = e^{-A t} B u(t), and integrating from 0 to t:

    e^{-A t} x(t) - x(0) = \int_0^t e^{-A \tau} B u(\tau)\, d \tau    (4.56)

and so

    x(t) = e^{A t} x(0) + e^{A t} \int_0^t e^{-A \tau} B u(\tau)\, d \tau .

This results in

    x(t) = e^{A t} x(0) + \int_0^t e^{A (t - \tau)} B u(\tau)\, d \tau    (4.57)
Example 4.14 Consider the first-order system of Example 4.12. For this system the forced output response is

    y(t) = 5\, e^{-3 t} x(0) + \int_0^t 10\, e^{-3 (t - \tau)} u(\tau)\, d \tau + 2\, u(t)
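For a unit step input and x(0) = 0, the convolution term of Example 4.14 can be evaluated numerically and compared with its closed form (10/3)(1 - e^{-3t}) + 2; a sketch:

```python
# Forced output of Example 4.14 for u(t) = 1, x(0) = 0:
# y(t) = integral_0^t 10 e^(-3(t - tau)) d tau + 2.
import math

def forced_output(t, n=20000):
    dtau = t / n
    integral = sum(10.0 * math.exp(-3.0 * (t - (k + 0.5) * dtau)) * dtau
                   for k in range(n))     # midpoint rule
    return integral + 2.0                 # plus the feed-through term D u(t)

t = 1.0
print(forced_output(t))
print((10.0 / 3.0) * (1.0 - math.exp(-3.0 * t)) + 2.0)   # closed form
```

The two values agree, which confirms that working out the convolution integral analytically gives the expected step response.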
4.6

In this section we summarize the different types of system description for linear time-invariant systems. A linear time-invariant state system is described by

ẋ(t) = A x(t) + B u(t) ,
y(t) = C x(t) + D u(t) .
Note that x(t) is a vector. The Laplace transform of a vector x(t) is given by the Laplace
transform of its entries:
L{x(t)} = L{ [ x1(t), x2(t), ..., xn(t) ]ᵀ } = [ X1(s), X2(s), ..., Xn(s) ]ᵀ = X(s)

where L{xi(t)} = Xi(s) for i = 1, ..., n. For the derivative ẋ(t) it follows (assuming zero initial conditions):

L{ẋ(t)} = [ L{ẋ1(t)}, L{ẋ2(t)}, ..., L{ẋn(t)} ]ᵀ = [ s X1(s), s X2(s), ..., s Xn(s) ]ᵀ = diag(s, s, ..., s) X(s) = s I X(s)
Let L{u(t)} = U(s) and L{y(t)} = Y(s); then we can write the state equation in Laplace form as s I X(s) = A X(s) + B U(s), or (s I − A) X(s) = B U(s). If we assume that the matrix (s I − A) is invertible, we obtain

X(s) = (s I − A)⁻¹ B U(s) .   (4.58)

Substitution into the output equation Y(s) = C X(s) + D U(s) gives the transfer function

H(s) = C (s I − A)⁻¹ B + D .   (4.59)
Remark:
Note that the transfer function is an input-output description of the system. A state
transformation does not influence the input-output behavior of the system. We therefore
may use the modal form of the state space description of the system, and the transfer
function becomes:
H(s) = C (s I − Λ)⁻¹ B + D

which can easily be computed, because of the diagonal form of Λ:

(s I − Λ)⁻¹ = diag( s − λ1, s − λ2, ..., s − λn )⁻¹ = diag( (s − λ1)⁻¹, (s − λ2)⁻¹, ..., (s − λn)⁻¹ )
Example 4.15 Consider the system of Example 4.11 and Example 4.13. The transfer function is given by

H(s) = C (s I − Λ)⁻¹ B + D
     = [ 3 −2 ] ( s I − [ −2 0 ; 0 −3 ] )⁻¹ [ 1 ; 1 ] + 4
     = [ 3 −2 ] [ (s+2)⁻¹ 0 ; 0 (s+3)⁻¹ ] [ 1 ; 1 ] + 4
     = 3/(s+2) − 2/(s+3) + 4
     = (4s² + 21s + 29)/(s² + 5s + 6)   (4.60)
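The same transfer function can be obtained numerically from the state matrices; a sketch using `scipy.signal.ss2tf` with the modal-form matrices as reconstructed here:

```python
import numpy as np
from scipy.signal import ss2tf

# Modal-form matrices of Example 4.15 (values as reconstructed here)
A = np.diag([-2.0, -3.0])
B = np.array([[1.0], [1.0]])
C = np.array([[3.0, -2.0]])
D = np.array([[4.0]])

num, den = ss2tf(A, B, C, D)
print(np.round(num, 6), np.round(den, 6))  # [[ 4. 21. 29.]] [1. 5. 6.]
```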
Using

(s I − A)⁻¹ = adj(s I − A)/det(s I − A)   (4.61)

we find

Y(s) = [ C adj(s I − A) B / det(s I − A) + D ] U(s)   (4.62)
     = [ ( C adj(s I − A) B + D det(s I − A) ) / det(s I − A) ] U(s)   (4.63)
Example 4.16 Consider the state system

ẋ(t) = [ 2 3 ; −3 −4 ] x(t) + [ −1 ; 2 ] u(t) ,
y(t) = [ 1 0 ] x(t) + 2 u(t) .   (4.64)
Note that det(s I − A) and C adj(s I − A) B + D det(s I − A) are polynomials in the variable s and so by inverse Laplace transformation we find the input-output differential equation. From

s I − A = [ s 0 ; 0 s ] − [ 2 3 ; −3 −4 ] = [ s−2 −3 ; 3 s+4 ]

we compute det(s I − A) = s² + 2s + 1 and C adj(s I − A) B + D det(s I − A) = 2s² + 3s + 4, so the input-output differential equation of the system is

d²y(t)/dt² + 2 dy(t)/dt + y(t) = 2 d²u(t)/dt² + 3 du(t)/dt + 4 u(t) .
Conversely, the input-output differential equation

dⁿy/dtⁿ + a1 d^{n−1}y/dt^{n−1} + ... + a_{n−1} dy/dt + an y(t) = b0 dⁿu/dtⁿ + b1 d^{n−1}u/dt^{n−1} + ... + bn u(t)

can be represented by a state system

ẋ(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)

by choosing:

A = [ −a1 −a2 ... −a_{n−1} −an ; 1 0 ... 0 0 ; 0 1 ... 0 0 ; ... ; 0 0 ... 1 0 ] ,
B = [ 1 ; 0 ; ... ; 0 ] ,
C = [ b1 − b0 a1   b2 − b0 a2   ...   b_{n−1} − b0 a_{n−1}   bn − b0 an ] ,
D = b0   (4.65)
To prove that this is really a state system representing the input-output differential equation, we simply transform this state system into an input-output system using equation
(4.63). We have
det(s I − A) = det [ s+a1  a2  ...  a_{n−1}  an ; −1  s  ...  0  0 ; 0  −1  ...  0  0 ; ... ; 0  0  ...  −1  s ]
             = sⁿ + a1 s^{n−1} + ... + a_{n−1} s + an
Furthermore

adj(s I − A) = [ s^{n−1}  ∗  ...  ∗ ; s^{n−2}  ∗  ...  ∗ ; ... ; s  ∗  ...  ∗ ; 1  ∗  ...  ∗ ]

where the stars indicate that the values can be computed but are not relevant.
Since B = [ 1 ; 0 ; ... ; 0 ] selects the first column of adj(s I − A), we obtain

C adj(s I − A) B = [ b1 − b0 a1  ...  b_{n−1} − b0 a_{n−1}  bn − b0 an ] [ s^{n−1} ; s^{n−2} ; ... ; s ; 1 ]
                 = (b1 − b0 a1) s^{n−1} + ... + (b_{n−1} − b0 a_{n−1}) s + bn − b0 an
and so

C adj(s I − A) B + D det(s I − A)
 = (b1 − b0 a1) s^{n−1} + (b2 − b0 a2) s^{n−2} + ... + (b_{n−1} − b0 a_{n−1}) s + (bn − b0 an) + b0 ( sⁿ + a1 s^{n−1} + ... + a_{n−1} s + an )
 = b0 sⁿ + b1 s^{n−1} + b2 s^{n−2} + ... + b_{n−1} s + bn
We obtain

sⁿ Y(s) + a1 s^{n−1} Y(s) + ... + a_{n−1} s Y(s) + an Y(s) = b0 sⁿ U(s) + b1 s^{n−1} U(s) + ... + b_{n−1} s U(s) + bn U(s)

This means that after an inverse Laplace transformation we end up with the original differential equation.
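The realization (4.65) is easy to build programmatically. The helper below is a sketch (the function name is ours, not the book's); for a second-order example it reproduces the expected transfer function:

```python
import numpy as np
from scipy.signal import ss2tf

def companion_realization(a, b0, b):
    """Matrices (4.65) for y^(n) + a1 y^(n-1) + ... + an y = b0 u^(n) + ... + bn u,
    with a = [a1, ..., an] and b = [b1, ..., bn]."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n = a.size
    A = np.zeros((n, n))
    A[0, :] = -a                    # first row: -a1 ... -an
    A[1:, :-1] = np.eye(n - 1)      # shifted identity below the first row
    B = np.zeros((n, 1)); B[0, 0] = 1.0
    C = (b - b0 * a).reshape(1, n)
    D = np.array([[b0]])
    return A, B, C, D

# y'' + 2y' + y = 2u'' + 3u' + 4u  (illustrative second-order equation)
A, B, C, D = companion_realization([2.0, 1.0], 2.0, [3.0, 4.0])
num, den = ss2tf(A, B, C, D)
print(np.round(num, 6), np.round(den, 6))  # [[2. 3. 4.]] [1. 2. 1.]
```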
Example 4.17 Consider the input-output differential equation of Example 4.16. With n = 2, a1 = 2, a2 = 1, b0 = 2, b1 = 3 and b2 = 4 we obtain the realization

A = [ −a1 −a2 ; 1 0 ] = [ −2 −1 ; 1 0 ] ,
B = [ 1 ; 0 ] ,
C = [ b1 − b0 a1   b2 − b0 a2 ] = [ −1 2 ] ,
D = b0 = 2 .

Note that the system matrices of this realization are different from the system matrices in (4.64). The two realizations are related by the state transformation matrix

T = [ −1 2 ; 2 −1 ]

with inverse

T⁻¹ = (1/3) [ 1 2 ; 2 1 ]
We find that

x̃̇(t) = Ã x̃(t) + B̃ u(t)
y(t) = C̃ x̃(t) + D̃ u(t)   (4.66)

with

x̃(t) = T⁻¹ x(t) ,   Ã = T⁻¹ A T ,   B̃ = T⁻¹ B ,   C̃ = C T ,   D̃ = D   (4.67)
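That a state transformation leaves the input-output behavior unchanged can be checked numerically; the matrix values below follow the example as reconstructed here:

```python
import numpy as np
from scipy.signal import ss2tf

# Matrices of (4.64) and the transformation T (values as reconstructed here)
A = np.array([[2.0, 3.0], [-3.0, -4.0]])
B = np.array([[-1.0], [2.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[2.0]])
T = np.array([[-1.0, 2.0], [2.0, -1.0]])

Ti = np.linalg.inv(T)
n1, d1 = ss2tf(A, B, C, D)
n2, d2 = ss2tf(Ti @ A @ T, Ti @ B, C @ T, D)   # transformed realization
print(np.allclose(n1, n2) and np.allclose(d1, d2))  # True
```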
For an impulse input u(t) = δ(t) the output response following from (4.57) is

h(t) = C e^{At} x(0) + C e^{At} B us(t) + D δ(t)   (4.68)

Finally note that in the impulse response we usually assume the system initially at rest, so x(0) = 0. Equation (4.68) now becomes

h(t) = C e^{At} B us(t) + D δ(t)   (4.69)

which is the impulse response of the linear time-invariant state system (4.39).
Remark:
Note that a state transformation does not influence the input-output behavior of the system. We therefore may use the diagonal form of the state space description of the system. The impulse response becomes:

h(t) = C̃ e^{Λt} B̃ us(t) + D δ(t) = C M e^{Λt} M⁻¹ B us(t) + D δ(t)

which can easily be computed.
Example 4.18 Consider the second-order system of Example 4.11 and Example 4.13. Using the modal transformation we can derive

h(t) = C̃ e^{Λt} B̃ us(t) + D δ(t)
     = [ 3 −2 ] [ e^{−2t} 0 ; 0 e^{−3t} ] [ 1 ; 1 ] us(t) + 4 δ(t)
     = (3e^{−2t} − 2e^{−3t}) us(t) + 4 δ(t)   (4.70)
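The smooth part of (4.70) can be verified with `scipy.signal.impulse`; note that a δ(t) feedthrough term cannot be represented on a time grid, so D is set to zero here (a sketch, with the diagonal-form matrices as reconstructed above):

```python
import numpy as np
from scipy.signal import impulse

# Diagonal-form matrices of Example 4.18 (values as reconstructed here);
# D = 0 because the 4*delta(t) term cannot be sampled.
A = np.diag([-2.0, -3.0])
B = np.array([[1.0], [1.0]])
C = np.array([[3.0, -2.0]])
D = np.zeros((1, 1))

t = np.linspace(0.0, 4.0, 400)
_, h = impulse((A, B, C, D), T=t)

h_exact = 3 * np.exp(-2 * t) - 2 * np.exp(-3 * t)
print(np.max(np.abs(h - h_exact)) < 1e-4)  # True
```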
The relation between the transfer function and the impulse response
One of the properties of the Laplace transformation is that the Laplace transform of a
convolution of two functions f1 and f2 results in the product of the Laplace transforms of
f1 and f2 :
L{ f1(t) ∗ f2(t) } = L{ f1(t) } L{ f2(t) }   (4.71)

Applying this to the convolution y(t) = h(t) ∗ u(t) gives

Y(s) = L{ h(t) } U(s) = H(s) U(s)   (4.72)

where H(s) is the transfer function of the system. This means that the Laplace transform of the impulse response is equal to the transfer function.
[Figure 4.8: the relations between the various system descriptions — the state system (A, B, C, D), the impulse response h(t), the transfer function H(s), the frequency response H(jω) (obtained by setting s = jω), and the input-output differential equation — connected through Eqs. (4.59), (4.63), (4.65), (4.69), and (4.72).]
In Figure 4.8 the equations that give the relations between the various system descriptions are shown.
4.7
Stability
Σᵢ₌₁ⁿ Ci e^{pi t} ,   t ≥ 0   (4.73)

In (4.73) we assume that all poles have multiplicity equal to one. If a pole pi has multiplicity mi > 1, the terms for this pole have the form Cij tʲ e^{pi t}, j = 0, ..., mi − 1.
for any x(0), provided all eigenvalues have multiplicity equal to one. All the elements of x(t) are a linear combination of the modal components e^{λi t}, and therefore the stability of the system response depends on all components decaying to zero with time. If Re(λi) > 0 for some λi, the corresponding component will grow exponentially with time and the sum is by definition unbounded.
The requirements for system stability may therefore be summarized:
A linear time-invariant state system described by the state equation ẋ(t) = A x(t) + B u(t) is asymptotically stable if and only if all eigenvalues of A have real part smaller than zero.
Three other separate conditions should be considered:
1. If one or more eigenvalues, or pair of conjugate eigenvalues, has a real part larger than
zero, there is at least one corresponding modal component that increases exponentially without bound from any initial condition, violating the definition of stability.
2. Any pair of conjugate eigenvalues on the imaginary axis (real part equal to zero), λ_{i,i+1} = ±jω, with multiplicity equal to one, generates an undamped oscillatory component in the state response. The magnitude of the homogeneous state response neither decays nor grows but continues to oscillate for all time at a frequency ω. Such a system is defined to be marginally stable. For poles on the imaginary axis with multiplicity higher than 1, the homogeneous state response will grow unboundedly.
3. An eigenvalue λ = 0 with multiplicity one generates a modal exponent e^{λt} = e^{0t} = 1 that is a constant. The system response neither decays nor grows, and again the system is defined to be marginally stable. An eigenvalue λ = 0 with multiplicity mi > 1 gives additional components tʲ, j = 1, ..., mi − 1 and will lead to an unbounded homogeneous state response.
A system is called BIBO (bounded-input bounded-output) stable if every bounded input sup_t |u(t)| = M1 < ∞ results in a bounded output sup_t |y(t)| = M2 < ∞. A necessary and sufficient condition for such BIBO stability is

M3 = ∫₀^∞ |h(τ)| dτ < ∞   (4.74)

That (4.74) is sufficient follows from

|y(t)| = | ∫₀^∞ h(τ) u(t−τ) dτ | ≤ ∫₀^∞ |h(τ) u(t−τ)| dτ ≤ ∫₀^∞ |h(τ)| |u(t−τ)| dτ ≤ ∫₀^∞ |h(τ)| M1 dτ ≤ M3 M1

This means that M2 is finite if (4.74) holds. That (4.74) is necessary can be seen as follows. Assume we want to compute y(0) for an input u given by

u(t) := sgn[ h(t) ] = { 1 if h(t) > 0 ; 0 if h(t) = 0 ; −1 if h(t) < 0 } ,   for all t.

Then M1 = sup_t |u(t)| = 1 and

y(0) = ∫₀^∞ h(τ) u(τ) dτ = ∫₀^∞ |h(τ)| dτ = M3

This shows that if M3 is not bounded, then y(0) is not bounded and so the system is not BIBO stable. This shows that for BIBO stability (4.74) is necessary.
Example 4.19 Consider the system of Example 4.11 and Example 4.13. The eigenvalues of the matrix

A = [ −6 6 ; −2 1 ]

are λ1 = −2 and λ2 = −3. Both eigenvalues are negative real, which means that this system is asymptotically stable.
As follows from the transfer function (4.60), the input-output differential equation of this system is given by

d²y(t)/dt² + 5 dy(t)/dt + 6 y(t) = 4 d²u(t)/dt² + 21 du(t)/dt + 29 u(t)

The characteristic equation is equal to

λ² + 5λ + 6 = 0
and we find (not surprisingly) the poles λ1 = −2 and λ2 = −3, which are equal to the eigenvalues of the A-matrix of the corresponding state system. Both poles are negative real, which means that this system is stable.
Finally we can compute M3 for the impulse response h(t) given in Equation (4.70) (note that the integrand is nonnegative for t ≥ 0, so the absolute value can be dropped):

M3 = ∫₀^∞ |h(t)| dt
   = ∫₀^∞ | (3e^{−2t} − 2e^{−3t}) us(t) + 4δ(t) | dt
   = ∫₀^∞ (3e^{−2t} − 2e^{−3t}) us(t) + 4δ(t) dt
   = 3 ∫₀^∞ e^{−2t} dt − 2 ∫₀^∞ e^{−3t} dt + 4 ∫₀^∞ δ(t) dt
   = [ −(3/2) e^{−2t} ]₀^∞ + [ (2/3) e^{−3t} ]₀^∞ + 4
   = 3/2 − 2/3 + 4 < ∞

and so the system is also BIBO stable.
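The value of M3 can be confirmed by numerical integration; a sketch using `scipy.integrate.quad`, where the 4δ(t) term is added separately since it cannot be sampled:

```python
import numpy as np
from scipy.integrate import quad

# Smooth part of (4.70); the 4*delta(t) term contributes 4 to M3 and is added by hand
h = lambda t: 3.0 * np.exp(-2.0 * t) - 2.0 * np.exp(-3.0 * t)

M3_smooth, _ = quad(lambda t: abs(h(t)), 0.0, np.inf)
M3 = M3_smooth + 4.0
print(abs(M3 - 29.0 / 6.0) < 1e-8)  # True: 3/2 - 2/3 + 4 = 29/6
```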
Example 4.20 Consider the state system

ẋ(t) = [ 2 −9 12 ; 9 26 −36 ; 6 18 −25 ] x(t) + [ 1 ; 2 ; 4 ] u(t)

The eigenvalues of A are λ1 = −1 and λ2,3 = 2 ± j3. Note that two eigenvalues have a positive real part, which means that this system is unstable.
4.8
Exercises
a) d³y(t)/dt³ + 13 dy(t)/dt + 12 y(t) = 2 d³u(t)/dt³ + 12 d²u(t)/dt² + 24 du(t)/dt + 16 u(t)

c) d⁴y(t)/dt⁴ + 6 d³y(t)/dt³ + 22 d²y(t)/dt² + 30 dy(t)/dt + 13 y(t) = 3 d³u(t)/dt³ + 6 d²u(t)/dt² − 21 du(t)/dt + 12 u(t)
Exercise 5: Time response
Consider the system

ÿ(t) + 10 ẏ(t) + 25 y(t) = 40 u(t)

with input u(t) = e^{−3t} for t ≥ 0. Compute the output y(t) for the initial conditions ẏ(0) = 2 and y(0) = 5.

Exercise 6: Time response
Consider the same system with the pulse input

u(t) = { 0 for t < 0 ; 1 for 0 ≤ t < 1 ; 0 for t ≥ 1 }
Exercise 7: State systems
Consider the state system

ẋ(t) = [ 3 −2 ; 21 −10 ] x(t) + [ 1 ; 2 ] u(t)
y(t) = [ 4 1 ] x(t)

1. Is this state system stable?
Chapter 5
Nonlinear dynamical systems
In this chapter we will consider some examples of nonlinear (differential) systems. We will
also discuss the concept of linearization, which gives us the possibility to approximate the
behavior of a nonlinear system locally by a linear system description.
5.1
In Chapter 2 we have discussed the modeling of dynamical systems with only linear basic elements. In practice, however, we will often encounter phenomena that are nonlinear. We will present two examples with nonlinear elements and show that the differential equations can be derived in a similar way as in the linear case.
Let u(t) be the input signal and let y(t) be the output signal of a nonlinear dynamical
system. The relation between inputs and outputs of dynamical systems can be described
by a differential equation:

dⁿy(t)/dtⁿ = F( d^{n−1}y(t)/dt^{n−1}, ..., dy(t)/dt, y(t), dᵐu(t)/dtᵐ, d^{m−1}u(t)/dt^{m−1}, ..., du(t)/dt, u(t) )
Example 5.1 (Mechanical system)
A mass m is connected to the ceiling by a nonlinear spring and a linear damper in the configuration of Figure 5.1. The spring force is given by fs = k y² and the damping force is equal to fd = c ẏ, where k and c are constants. The gravity force is equal to fg = m g and finally there is an external force fext acting on the mass. Our task is to derive the differential equations for this system.
We use Newton's law for this system and we obtain

m ÿ = Σᵢ fᵢ = −k y² − c ẏ + m g + fext
[Figure 5.1: mass m hanging from the ceiling by a nonlinear spring (force k y²) and a damper (force c ẏ), subject to gravity m g and an external force fext.]
Example 5.2 (Water flow system)
Consider the two-vessel water flow system of Figure 5.2, where the flows through the restrictions are now modeled as

wmed = (1/R1) √(p1 − p0) ,   wout = (1/R2) √(p2 − p0)

By introducing the square root in the relation between the flow and the pressure, the equations can describe the physical behavior more realistically. Our task is to derive the differential equations for this system.
First we consider the upper vessel: the net flow into the upper vessel is w1 = win − wmed. The fluid capacitance of the upper vessel is given by

C1 = A1/(ρ g)
[Figure 5.2: two-vessel water flow system with inflow win, ambient pressure p0, vessel areas A1 and A2, levels h1 and h2, pressures p1 and p2, and restrictions R1 (flow wmed) and R2 (flow wout).]
so that

ṗ1 = (1/C1) w1 = (ρg/A1) (win − wmed)

Substitution of wmed = (1/R1) √(p1 − p0) gives us

ṗ1 = (ρg/A1) win − (ρg/(A1 R1)) √(p1 − p0)
For the lower vessel the fluid capacitance is C2 = A2/(ρg) and the net inflow is w2 = wmed − wout, so

ṗ2 = (1/C2) w2 = (ρg/A2) (wmed − wout)

Substitution of wmed = (1/R1) √(p1 − p0) and wout = (1/R2) √(p2 − p0) gives us

ṗ2 = (ρg/(A2 R1)) √(p1 − p0) − (ρg/(A2 R2)) √(p2 − p0)
So summarizing, the two differential equations describing the dynamics of the 2-vessel system are as follows:

ṗ1 = (ρg/A1) win − (ρg/(A1 R1)) √(p1 − p0)
ṗ2 = (ρg/(A2 R1)) √(p1 − p0) − (ρg/(A2 R2)) √(p2 − p0)
Now we can use the relation between the fluid levels hi and the pressures pi, i = 1, 2:

pi − p0 = ρ g hi ,   ṗi = ρ g ḣi

and we can rewrite the equations as

ρg ḣ1 = (ρg/A1) win − (ρg/(A1 R1)) √(ρg h1)
ρg ḣ2 = (ρg/(A2 R1)) √(ρg h1) − (ρg/(A2 R2)) √(ρg h2)
or, by introducing the parameters R̃1 = R1/√(ρg) and R̃2 = R2/√(ρg):

ḣ1 = (1/A1) win − (1/(A1 R̃1)) √h1
ḣ2 = (1/(A2 R̃1)) √h1 − (1/(A2 R̃2)) √h2
We now introduce the notion of nonlinear state systems. A nonlinear system can then be
represented by the state equations
ẋ(t) = f(x, u) ,
y(t) = h(x, u) ,   (5.1)
where f and h are nonlinear mappings. We call a model of this form a nonlinear state
space model. The dimension of the state vector is called the order of the system. The system (5.1) is called time-invariant because the functions f and h do not depend explicitly
on time t; there are more general time-varying systems where the functions do depend on
time. The model consists of two functions: the function f gives the rate of change of the
state vector as a function of state x and input u, and the function h gives the output signal
as functions of state x and control u. A system is called a linear state space system if the
functions f and h are linear in x and u.
Example 5.3 (Mechanical system)
Consider the system of Example 5.1. The nonlinear differential equation is given by

m ÿ(t) = −k y²(t) − c ẏ(t) + m g + fext(t)
If we choose the state x(t) = [ x1(t) ; x2(t) ] = [ ẏ(t) ; y(t) ], the input u(t) = fext(t) and the output y(t), we obtain the nonlinear state equations

ẋ1(t) = −(c/m) x1(t) − (k/m) x2²(t) + g + (1/m) u(t)
ẋ2(t) = x1(t)
y(t) = x2(t)
Example 5.4 (Water flow system)
Consider the system of Example 5.2 with the differential equations

ḣ1 = (1/A1) win − (1/(A1 R̃1)) √h1
ḣ2 = (1/(A2 R̃1)) √h1 − (1/(A2 R̃2)) √h2

If we choose the state x(t) = [ h1(t) ; h2(t) ], the input u(t) = win(t) and the output y(t) = h2(t) we obtain the nonlinear state equations

ẋ1(t) = (1/A1) u(t) − (1/(A1 R̃1)) √x1(t)
ẋ2(t) = (1/(A2 R̃1)) √x1(t) − (1/(A2 R̃2)) √x2(t)
y(t) = x2(t)
5.2 Equilibrium points
An equilibrium point (or steady state) is a point where the system comes to a rest. For
a system at rest all signals will be constant and so in an equilibrium point the derivative
of the state will be zero. We define an equilibrium point, or steady state, of a nonlinear
system as follows:
Definition 5.1 Consider a nonlinear state system, described by (5.1). For a steady state
or equilibrium point (x0 , u0, y0 ) there holds
f (x0 , u0 ) = 0
with a corresponding output y0 :
y0 = h(x0 , u0 )
Example 5.5 (Mechanical system)
Consider the nonlinear state system of Example 5.3. We aim at computing the equilibrium point (x0 , u0 , y0 ) for the system

ẋ1(t) = −(c/m) x1(t) − (k/m) x2²(t) + g + (1/m) u(t)
ẋ2(t) = x1(t)
y(t) = x2(t)

Setting ẋ2 = 0 gives x0,1 = 0, and setting ẋ1 = 0 gives (k/m) x0,2² = g + (1/m) u0. This means that if the external force fext(t) = u0 is constant and the system is at rest, then x0,1 = 0 and

y0 = x0,2 = √( (m g + u0)/k ) .
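Equilibrium points can also be found numerically by solving f(x0, u0) = 0. A sketch with `scipy.optimize.fsolve` and illustrative (assumed) parameter values:

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative (assumed) parameter values for the mechanical example
m, k, c, g = 1.0, 2.0, 0.5, 9.81
u0 = 3.0   # constant external force

def f(x):
    # State equations of Example 5.3 (as reconstructed here)
    return [-c / m * x[0] - k / m * x[1] ** 2 + g + u0 / m,
            x[0]]

x_eq = fsolve(f, [0.1, 1.0])
print(np.allclose(x_eq, [0.0, np.sqrt((m * g + u0) / k)]))  # True
```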
Example 5.6 (Water flow system)
Consider the nonlinear state system of Example 5.4. In an equilibrium point we have

0 = (1/A1) u0 − (1/(A1 R̃1)) √x0,1
0 = (1/(A2 R̃1)) √x0,1 − (1/(A2 R̃2)) √x0,2

From the first equation we find x0,1 = R̃1² u0², and substitution into the second equation gives

x0,2 = (R̃2²/R̃1²) x0,1 = R̃2² u0²

with corresponding output y0 = x0,2.
In many physical systems the relations used to define the model elements are inherently nonlinear. The analysis of systems containing such elements is a much more difficult task than that for a system containing only linear elements, and for many such systems of interconnected nonlinear elements there may be no exact analysis technique. In engineering practice it is often convenient to approximate the behavior of a nonlinear system by a linear one over a limited range of operation, usually in the neighborhood of an equilibrium point. To achieve this linear behavior we have to do a linearization step. We study small variations about the equilibrium (x0 , u0 , y0 ), where x0 , u0 and y0 satisfy f(x0 , u0 ) = 0 and h(x0 , u0 ) = y0. To derive the linear behavior we look at small variations δx, δu, and δy about the equilibrium (x0 , u0 , y0 ):

x(t) = x0 + δx(t)
u(t) = u0 + δu(t)
y(t) = y0 + δy(t)
Now we can, using a Taylor expansion, describe the nonlinear equations in terms of these small variations δx and δu, which yields

δẋ(t) = f(x0 , u0) + A δx(t) + B δu(t) = A δx(t) + B δu(t)
y0 + δy(t) = h(x0 , u0 ) + C δx(t) + D δu(t) ,   so δy(t) = C δx(t) + D δu(t)

where A, B, C, and D are computed as

A = ∂f/∂x |_{x=x0, u=u0} ,   B = ∂f/∂u |_{x=x0, u=u0} ,   C = ∂h/∂x |_{x=x0, u=u0} ,   D = ∂h/∂u |_{x=x0, u=u0}   (5.2)
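The Jacobians in (5.2) can be approximated by finite differences when analytic derivatives are inconvenient. The helper below is a sketch (the names and the forward-difference scheme are ours); the usage checks it against the analytic A-matrix entries of the mechanical example with assumed parameter values:

```python
import numpy as np

def numerical_jacobians(f, h, x0, u0, eps=1e-6):
    """Finite-difference approximation of the Jacobians A, B, C, D in (5.2)."""
    x0 = np.asarray(x0, dtype=float)
    u0 = np.asarray(u0, dtype=float)
    f0 = np.asarray(f(x0, u0), dtype=float)
    h0 = np.asarray(h(x0, u0), dtype=float)
    n, m = x0.size, u0.size
    A = np.zeros((f0.size, n)); B = np.zeros((f0.size, m))
    C = np.zeros((h0.size, n)); D = np.zeros((h0.size, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (np.asarray(f(x0 + dx, u0), dtype=float) - f0) / eps
        C[:, j] = (np.asarray(h(x0 + dx, u0), dtype=float) - h0) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (np.asarray(f(x0, u0 + du), dtype=float) - f0) / eps
        D[:, j] = (np.asarray(h(x0, u0 + du), dtype=float) - h0) / eps
    return A, B, C, D

# Mechanical example with illustrative (assumed) parameters m, k, c
m_, k_, c_, g_ = 1.0, 2.0, 0.5, 9.81
f = lambda x, u: [-c_ / m_ * x[0] - k_ / m_ * x[1] ** 2 + g_ + u[0] / m_, x[0]]
h = lambda x, u: [x[1]]

u_eq = [3.0]
x_eq = [0.0, np.sqrt((m_ * g_ + u_eq[0]) / k_)]
A, B, C, D = numerical_jacobians(f, h, x_eq, u_eq)
print(np.round(A, 4))  # A[0,1] is close to -2*(k/m)*x0,2
```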
Example 5.7 (Mechanical system)
Consider the nonlinear state system of Example 5.3

ẋ1(t) = −(c/m) x1(t) − (k/m) x2²(t) + g + (1/m) u(t)
ẋ2(t) = x1(t)
y(t) = x2(t)

In Example 5.5 we found the equilibrium point x0,1 = 0, x0,2 = √((m g + u0)/k), and y0 = x0,2 = √((m g + u0)/k). We compute

A = ∂f/∂x |_{x=x0, u=u0} = [ ∂f1/∂x1  ∂f1/∂x2 ; ∂f2/∂x1  ∂f2/∂x2 ] |_{x=x0, u=u0}
  = [ −c/m  −2(k/m) x2 ; 1  0 ] |_{x=x0, u=u0} = [ −c/m  −(2/m)√(k(m g + u0)) ; 1  0 ]

B = ∂f/∂u |_{x=x0, u=u0} = [ ∂f1/∂u ; ∂f2/∂u ] |_{x=x0, u=u0} = [ 1/m ; 0 ]

C = ∂h/∂x |_{x=x0, u=u0} = [ ∂h/∂x1  ∂h/∂x2 ] |_{x=x0, u=u0} = [ 0 1 ]

D = ∂h/∂u |_{x=x0, u=u0} = 0
So for small variations δx, δu and δy about the equilibrium (x0 , u0 , y0 ) the linear behavior is described by the linear time-invariant state system

δẋ(t) = A δx(t) + B δu(t)
δy(t) = C δx(t) + D δu(t)

These equations can describe the dynamic behavior of the mechanical system quite accurately as long as the signals δu(t) and δx(t) remain small.
Example 5.8 (Water flow system)
Consider the nonlinear flow system of Example 5.6 with state equations

ẋ1(t) = (1/A1) u(t) − (1/(A1 R̃1)) √x1(t)
ẋ2(t) = (1/(A2 R̃1)) √x1(t) − (1/(A2 R̃2)) √x2(t)
y(t) = x2(t)

and equilibrium point x0,1 = R̃1² u0², x0,2 = R̃2² u0² and y0 = x0,2. We compute

A = ∂f/∂x |_{x=x0, u=u0} = [ −1/(2 A1 R̃1 √x1)  0 ; 1/(2 A2 R̃1 √x1)  −1/(2 A2 R̃2 √x2) ] |_{x=x0, u=u0}
  = [ −1/(2 A1 R̃1² u0)  0 ; 1/(2 A2 R̃1² u0)  −1/(2 A2 R̃2² u0) ]

B = ∂f/∂u |_{x=x0, u=u0} = [ 1/A1 ; 0 ]

C = ∂h/∂x |_{x=x0, u=u0} = [ 0 1 ]

D = ∂h/∂u |_{x=x0, u=u0} = 0
So for small variations δx, δu, and δy about the equilibrium (x0 , u0 , y0 ) the linear behavior is described by the linear time-invariant state system

δẋ(t) = A δx(t) + B δu(t)
δy(t) = C δx(t) + D δu(t)

These equations can describe the dynamic behavior of the water flow system quite accurately as long as the signals δu(t) and δx(t) remain small.
5.3
Exercises
Chapter 6
An introduction to feedback control
Engineers often use control engineering methods to enhance the performance of systems in many fields of application, such as mechanical, electrical, electromechanical, and fluid/heat
flow systems (see Chapter 2). This chapter gives an introduction to the field of control
engineering. Some basic definitions and terminology will be introduced and the concept of
feedback will be presented.
Definition 6.1 Given a system with some inputs for which we can set the values, control
is a set of actions undertaken in order to obtain a desired behavior of the system, and it
can be applied in an open-loop or a closed-loop configuration by supplying the proper control
signals.
Controllers can be found in all kinds of technical systems, from cruise control to a central
heating system, from hard disks to washing machines, from GPS to oil refineries, from
watches to communication satellites. In many cases the impact of control is not recognized
from the outside. Control is therefore often called the hidden technology.
6.1
Block diagrams
In Section 4.1 we have seen that a linear time-invariant system can be represented by a
transfer function. Often real-life physical systems consist of many subsystems where each
subsystem can be described by a differential equation and therefore can be represented by
a transfer function. If we want to look at the overall system on a higher, less detailed level,
we can draw a block diagram of the system.
Definition 6.2 A block diagram is a diagram of a system, in which the principal subsystems are represented by blocks interconnected by arrows that show the interaction between
the blocks.
Example 6.1 Consider the three-vessel water flow system of Figure 6.1.
[Figure 6.1: three-vessel water flow system with inflow win, vessel levels h1, h2, h3, areas A, and restrictions R1, R2, R3.]
Using the modeling techniques of Chapter 2 we can describe the system with three differential equations

ḣ1 = (1/A) win − (ρg/(A R1)) h1 ,
ḣ2 = (ρg/(A R1)) h1 − (ρg/(A R2)) h2 ,
ḣ3 = (ρg/(A R2)) h2 − (ρg/(A R3)) h3 .
We consider each equation as a subsystem. In the first subsystem, the input is the inflow
win (t), and the output of the system is the water level h1 (t). In the second subsystem we
have h1 (t) as an input and h2 (t) as an output. In the third subsystem we have h2 (t) as an
input and h3 (t) as an output. In Figure 6.2 the three-vessel water flow system is represented
by a block diagram, in which each block represents one of the three vessels.
[Figure 6.2: block diagram of the three-vessel water flow system: win → Water vessel 1 → h1 → Water vessel 2 → h2 → Water vessel 3 → h3.]
For the three-vessel water flow system of Example 6.1 we see that the block diagram
consists of three subsystems that are in a series connection. Each of the subsystems can
be represented by a transfer function:

Subsystem 1:  H1(s) = (1/A) / ( s + ρg/(A R1) )
Subsystem 2:  H2(s) = ( ρg/(A R1) ) / ( s + ρg/(A R2) )
Subsystem 3:  H3(s) = ( ρg/(A R2) ) / ( s + ρg/(A R3) )
Interconnection of systems

Series interconnection:
In a series interconnection of two systems the output of the first system becomes the input of the second system (see Figure 6.3).

[Figure 6.3: series interconnection U(s) → H1(s) → H2(s) → Y(s).]

Let U1(s) and Y1(s) denote the Laplace transforms of the input and output of the first system; then Y1(s) = H1(s) U(s) and Y(s) = H2(s) Y1(s), so the total transfer function is Htot(s) = H2(s) H1(s).
For the three-vessel water flow system we obtain

Htot(s) = (1/A) ( ρg/(A R1) ) ( ρg/(A R2) ) / [ ( s + ρg/(A R1) ) ( s + ρg/(A R2) ) ( s + ρg/(A R3) ) ]
        = ( ρ²g²/(A³ R1 R2) ) / [ ( s + ρg/(A R1) ) ( s + ρg/(A R2) ) ( s + ρg/(A R3) ) ]

and thus

Y(s) = Htot(s) U(s)
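For rational transfer functions a series interconnection just multiplies numerator and denominator polynomials; a small sketch with illustrative (assumed) first-order factors:

```python
import numpy as np

def series(h1, h2):
    """Series interconnection: numerators and denominators multiply."""
    (n1, d1), (n2, d2) = h1, h2
    return np.polymul(n1, n2), np.polymul(d1, d2)

# Illustrative (assumed) first-order blocks: H1 = 2/(s+2), H2 = 3/(s+3)
H1 = ([2.0], [1.0, 2.0])
H2 = ([3.0], [1.0, 3.0])

num, den = series(H1, H2)
print(num, den)  # [6.] [1. 5. 6.]
```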
Parallel interconnection:
In a parallel interconnection two systems have the same input and the outputs of the systems are added (see Figure 6.4).

[Figure 6.4: parallel interconnection of H1(s) and H2(s) with common input U(s) and summed output Y(s).]

Let the Laplace transforms of the input and output of the interconnection be U(s) and Y(s); then Y(s) = H1(s) U(s) + H2(s) U(s), so the total transfer function is Htot(s) = H1(s) + H2(s).
Feedback interconnection:
In a feedback interconnection the output of the first system is fed back through a second system and subtracted from the reference signal (see Figure 6.5).

[Figure 6.5: negative feedback configuration with reference R(s), forward system H1(s) with input U1(s) and output Y1(s), and feedback system H2(s) with output Y2(s).]

Denote by U1(s) and Y1(s) the Laplace transforms of
the input u1 (t) and the output y1 (t) of the first system, respectively, and denote by U2 (s)
and Y2 (s) the Laplace transforms of the input u2 (t) and the output y2 (t) of the second
system. Furthermore, let R(s) denote the Laplace transform of the reference signal r(t).
The loop is created by setting u2(t) = y1(t) and u1(t) = r(t) − y2(t). Consequently we obtain U2(s) = Y1(s), U1(s) = R(s) − Y2(s). By substitution we find:

U1(s) = R(s) − Y2(s) = R(s) − H2(s) U2(s) = R(s) − H2(s) Y1(s) = R(s) − H2(s) H1(s) U1(s)

Hence, we obtain

( 1 + H2(s) H1(s) ) U1(s) = R(s)

and so

U1(s) = [ 1 / (1 + H2(s) H1(s)) ] R(s)
Y1(s) = [ H1(s) / (1 + H2(s) H1(s)) ] R(s)
When the feedback signal y2(t) is subtracted (see Figure 6.5) we call it a negative feedback. Negative feedback often appears in controller design and is required for system stability. For a negative feedback configuration as in Figure 6.5 we can express the solution by a simple rule:

The transfer function of a single-loop negative feedback system is given by the forward transfer function divided by one plus the loop gain function,

where the loop gain function is the product of the transfer functions making up the loop, that is, H2(s) H1(s).
[Figure 6.6: positive feedback configuration with reference R(s), forward system H1(s), and feedback system H2(s), yielding Y1(s) = H1(s)/(1 − H2(s) H1(s)) R(s).]

When the feedback signal is added (instead of subtracted) we call it a positive feedback (see Figure 6.6). For a positive feedback configuration the solution is given by the rule:

The transfer function of a single-loop positive feedback system is given by the forward transfer function divided by one minus the loop gain function.
Block diagram manipulations
Figure 6.7 shows some basic block diagram manipulations for nodes where signals split into two branches, or where signals are added. The basic manipulations can be used to convert block diagrams without affecting the mathematical relationships.

Example 6.3 (Simplifying block scheme)
In this example we will consider the block diagram of Figure 6.8 and find the transfer function from input r(t) to output y(t).
Using the manipulations defined in Figure 6.7 we first replace the closed loop of H1(s) and H3(s) by H1(s)/(1 − H1(s) H3(s)) (note that we have a positive feedback here). The next step is to shift the input of system H6(s) over the system H2(s) using the basic manipulation step given in Figure 6.7.b. Now we have a system consisting of two subsystems: the first subsystem is the feedback loop of H1(s)/(1 − H1(s) H3(s)), H2(s) and H4(s). For this subsystem we use the rule for a positive feedback configuration: the transfer function of a single-loop positive feedback system is given by the forward transfer function ( H1(s) H2(s)/(1 − H1(s) H3(s)) ) divided by one minus the loop gain function ( 1 − H1(s) H2(s) H4(s)/(1 − H1(s) H3(s)) ). This results in the following transfer function for the first subsystem:

Hsub,1(s) = H1(s) H2(s) / ( 1 − H1(s) H3(s) − H1(s) H2(s) H4(s) )
[Figure 6.7: basic block diagram manipulations (a)–(d) for moving pick-off points and summing junctions across a block H(s), using 1/H(s) compensation where needed.]
[Figure 6.8: block diagram with blocks H1–H6, internal signals X1(s)–X4(s), and its stepwise simplification, first replacing the H1, H3 loop by H1/(1 − H1 H3) and then forming the subsystems Hsub,1 and Hsub,2.]
A more algebraic approach starts with writing down the input-output equations of all subsystems, and proceeds with eliminating the internal signals, which are not relevant. We will show this approach by computing the overall transfer function for the system of Example 6.3.
Example 6.4 (Simplifying block scheme II)
In this example we will consider the block diagram of Figure 6.8 and find the transfer function from input r(t) to output y(t). We write down the equations:

X1(s) = R(s) + H4(s) X4(s)   (6.1)
X2(s) = X1(s) + H3(s) X3(s)   (6.2)
X3(s) = H1(s) X2(s)   (6.3)
X4(s) = H2(s) X3(s)   (6.4)
Y(s) = H6(s) X3(s) + H5(s) X4(s)   (6.5)

Eliminating X1(s), X2(s) and X4(s) gives

X3(s) = [ H1(s) / ( 1 − H1(s) H2(s) H4(s) − H1(s) H3(s) ) ] R(s)   (6.10)

so

Y(s) = ( H6(s) + H2(s) H5(s) ) [ H1(s) / ( 1 − H1(s) H2(s) H4(s) − H1(s) H3(s) ) ] R(s)

and so the overall transfer function is

H(s) = ( H6(s) + H2(s) H5(s) ) H1(s) / ( 1 − H1(s) H2(s) H4(s) − H1(s) H3(s) )   (6.11)
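The algebraic elimination above can be automated with a computer algebra system; a sketch using SymPy to solve equations (6.1)–(6.5) for Y(s)/R(s):

```python
import sympy as sp

H1, H2, H3, H4, H5, H6, R = sp.symbols('H1 H2 H3 H4 H5 H6 R')
X1, X2, X3, X4, Y = sp.symbols('X1 X2 X3 X4 Y')

# Equations (6.1)-(6.5) of the block diagram
eqs = [sp.Eq(X1, R + H4 * X4),
       sp.Eq(X2, X1 + H3 * X3),
       sp.Eq(X3, H1 * X2),
       sp.Eq(X4, H2 * X3),
       sp.Eq(Y, H6 * X3 + H5 * X4)]

sol = sp.solve(eqs, [X1, X2, X3, X4, Y], dict=True)[0]
H = sp.simplify(sol[Y] / R)   # overall transfer function Y(s)/R(s)
print(H)
```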
[Figure: general control configuration: reference → controller → control signal → process → output.]
6.2
Control configurations
[Figure 6.11: Closed-loop configuration: reference → controller → control signal → process → output, with the measured output fed back to the controller.]
In a closed-loop configuration, the measured output is compared with the desired reference
signal to provide an error signal that then initiates corrective action until the feedback
signal duplicates the reference signal. In this chapter we assume the sensor is ideal and errors in the measurement can be neglected. In that case the measured output is equal to the real output.
Open-loop systems are simpler than closed-loop systems and perform satisfactorily in applications involving highly repeatable processes that have well-established characteristics and are not exposed to disturbances. In the case of model uncertainty or disturbances acting on the system, closed-loop methods are preferred.
[Figure 6.12: open-loop configuration with controller Dol(s), control signal Uol(s), a disturbance entering at the process input, process H(s), and output Yol(s).]
[Figure 6.13: closed-loop configuration with reference R(s), controller D(s), control signal U(s), process H(s), input disturbance W(s), and measurement noise V(s).]

For this configuration we find

Y(s) = [ H(s) D(s) / (1 + H(s) D(s)) ] R(s) + [ H(s) / (1 + H(s) D(s)) ] W(s) − [ H(s) D(s) / (1 + H(s) D(s)) ] V(s)
U(s) = [ D(s) / (1 + H(s) D(s)) ] R(s) − [ H(s) D(s) / (1 + H(s) D(s)) ] W(s) − [ D(s) / (1 + H(s) D(s)) ] V(s)

With the loop gain function L(s) = H(s) D(s) we define the sensitivity function

S(s) = 1/(1 + L(s)) = 1/(1 + H(s) D(s))

and the complementary sensitivity function

T(s) = L(s)/(1 + L(s)) = H(s) D(s)/(1 + H(s) D(s))   (6.12)
The loop gain is an engineering term used to quantify the gain of a system controlled by
feedback loops. The loop gain function plays an important role in control engineering. A
high loop gain may improve the performance of the closed-loop system, but it may also
destabilize it.
The sensitivity function has an important role to play in judging the performance of the
controller, because it describes how much of the reference signal cannot be tracked and
will still be in the tracking error. The smaller S, the smaller the reference tracking error.
The complementary sensitivity function is the counterpart of the sensitivity function (Note
that S(s) + T (s) = 1). The closer T is to 1, the better the reference tracking.
6.3 Steady state tracking and system type

For most closed-loop control systems the primary goal is to produce an output signal that follows the reference signal as closely as possible. It is therefore important to know how the output signal behaves for t → ∞. We define the steady state value of a signal x(t) as

xss = lim_{t→∞} x(t) .   (6.13)

With the final value theorem the steady state value can be computed as xss = lim_{s→0} s X(s). For example, for X(s) = 3(s + 2)/( s (s² + 2s + 10) ) we find xss = lim_{s→0} s X(s) = 6/10 = 0.6.
Consider a system with input u(t), output y(t) and transfer function H(s). Often one is interested in the value y(∞) for a step input u(t) = us(t). From Table 2.1 we find that U(s) = L{us(t)} = 1/s. The output Y(s) is now given by Y(s) = H(s) U(s) = H(s) (1/s). With the final value theorem we derive:

y(∞) = lim_{s→0} s Y(s) = lim_{s→0} s H(s) (1/s) = H(0)

This means that H(0) is the value that remains if we put a constant signal on the input. The value H(0) is therefore often referred to as the DC gain of the system.
Example 6.6 (DC gain of the system)
Consider the system

H(s) = 3(s + 2)/(s² + 2s + 10)

The DC gain of this system is H(0) = 3 · 2 / 10 = 0.6.
The steady state performance of a control system is judged by the steady state difference between the reference and output signals. We consider the stable closed-loop configuration of Figure 6.14 with a loop gain function L(s) = H(s) D(s).

[Figure 6.14: closed-loop configuration with reference r, error e, controller D(s), control signal u, process H(s), and output y.]

We will study the steady state error ess for different reference signals r(t).
Steady state error for a step reference signal
For r(t) = us(t) we have R(s) = 1/s, and since E(s) = S(s) R(s) the steady state error becomes

ess = lim_{s→0} s S(s) R(s) = lim_{s→0} s S(s) (1/s) = S(0)

This means that the steady state error for a step reference signal is equal to the sensitivity function for s = 0.
Steady state error for a ramp and a parabolic reference signal
For r(t) = ur(t) = t us(t) we find the Laplace transform R(s) = 1/s² from Table 2.1. The steady state error now becomes

ess = lim_{s→0} s S(s) R(s) = lim_{s→0} s S(s) (1/s²) = lim_{s→0} S(s)/s

Similarly, for a parabolic signal r(t) = up(t) = (t²/2) us(t) we find the Laplace transform R(s) = 1/s³ from Table 2.1. The steady state error for a parabolic reference signal is

ess = lim_{s→0} S(s)/s²
In general, for a reference signal

r(t) = (tᵏ/k!) us(t) ,   t ≥ 0, k ∈ Z₊ ,

with Laplace transform R(s) = 1/s^{k+1}, the behavior of the function S(s)/sᵏ for s → 0 is important. To describe this behavior we introduce the notion of system type.
Definition 6.3 Consider a system in the closed-loop configuration as in Figure 6.14 with

S(s) = 1/(1 + L(s)) = 1/(1 + H(s) D(s)) .

Assume that for some nonnegative integer value n the sensitivity function S can be written as

S(s) = sⁿ S0(s)

such that S0(0) is neither zero nor infinite, i.e. 0 < |S0(0)| < ∞. Then the system type is equal to n.
For a reference signal r(t) = (tᵏ/k!) us(t) we now find

ess = lim_{s→0} s S(s) (1/s^{k+1}) = lim_{s→0} s^{n−k} S0(s)

and so

ess = 0 if n > k ;   ess = S0(0) if n = k ;   ess = ∞ if n < k .
If the system type is 0, a step input signal results in a constant tracking error. If the system type is 1, then for a ramp input signal the steady state tracking error is constant. If the system type is 2, a parabolic input signal results in a constant tracking error. Summarizing, if the system type is n then an input signal r(t) = (tⁿ/n!) us(t) results in a constant steady state tracking error.
The relation between system type and loop gain function L(s)
Consider a system in the closed-loop configuration as in Figure 6.14 with

S(s) = 1/(1 + L(s)) = 1/(1 + H(s) D(s)) .

Assume that for some nonnegative integer value n the loop gain function can be written as

L(s) = L0(s)/sⁿ

with L0(0) = Kn finite but not zero. Then

S(s) = 1/(1 + L(s)) = 1/(1 + L0(s)/sⁿ) = sⁿ/(sⁿ + L0(s))

and for a reference signal r(t) = (tᵏ/k!) us(t) the steady state error becomes

ess = lim_{s→0} sⁿ / ( (sⁿ + L0(s)) sᵏ ) = lim_{s→0} sⁿ / ( sⁿ + Kn sᵏ )

For k = 0, 1, 2 the constants Kn are known as the position, velocity, and acceleration error constants:

Kp = lim_{s→0} L(s) ,   Kv = lim_{s→0} s L(s) ,   Ka = lim_{s→0} s² L(s)
If a system is of type 0, then we find for a step a steady state tracking error ess = 1/(1+Kp ),
and for a ramp and parabola the error will diverge to infinity. For a type 1 system the
steady state tracking error for a step is zero, for a ramp we find ess = 1/Kv , and for a
parabola the error is infinite. Finally, for a type 2 system, the steady state tracking error
for a step and for a ramp is zero, and for a parabola we find ess = 1/Ka. This is summarized in Table 6.1.
           step          ramp       parabola
type 0   1/(1 + Kp)       ∞            ∞
type 1       0           1/Kv          ∞
type 2       0             0          1/Ka

Table 6.1: System type and steady state errors for various reference signals
Example 6.7 Consider the feedback configuration of Figure 6.15 for which we study the steady state error for different types of loop gain function L(s).

[Figure 6.15: unity feedback configuration with reference r, loop transfer function L(s), and output y.]

System 1 with L(s) = 10/((s+1)(s+2)) is of type 0, because L0(s) = L(s) = 10/((s+1)(s+2)) and Kp = L0(0) = 5.
System 2 with L(s) = 4/(s(s+2)) is of type 1, because L0(s) = s L(s) = 4/(s+2) and Kv = L0(0) = 2.
System 3 with L(s) = (4s+1)/(s²(s+4)) is of type 2, because L0(s) = s² L(s) = (4s+1)/(s+4) and Ka = L0(0) = 0.25.
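The system types and error constants of these three loop gains can be recomputed symbolically; a sketch using SymPy limits (the helper name is ours):

```python
import sympy as sp

s = sp.symbols('s')

def type_and_constant(L, max_type=5):
    """Smallest n with lim_{s->0} s^n L(s) finite and nonzero, plus that limit."""
    for n in range(max_type + 1):
        Kn = sp.limit(s**n * L, s, 0, '+')
        if Kn.is_finite and Kn != 0:
            return n, Kn
    raise ValueError("system type larger than max_type")

L1 = 10 / ((s + 1) * (s + 2))        # type 0, Kp = 5
L2 = 4 / (s * (s + 2))               # type 1, Kv = 2
L3 = (4 * s + 1) / (s**2 * (s + 4))  # type 2, Ka = 1/4

print([type_and_constant(L) for L in (L1, L2, L3)])
# [(0, 5), (1, 2), (2, 1/4)]
```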
[Figure 6.16: Step responses for different system types.]
First we plot the output y(t) for a step reference signal. As can be seen in Figure 6.16 systems 2 and 3 can follow the signal with zero steady state error. System 1 of type 0 gives a finite steady state tracking error ess = 1/(1 + Kp) = 1/6.
Figure 6.17 gives the output y(t) for a ramp reference signal. We see that the response of
system 1 diverges from the reference, the response of system 2 follows the reference with a
finite error, and the response of system 3 converges to the reference.
Finally Figure 6.18 shows the output y(t) for a parabolic reference signal. None of the
responses converge to the parabolic signal, but system 3 with system type 2 can follow the
parabola with a finite error. The response of system 1 and 2 diverge from the parabola.
Next consider the closed-loop configuration of Figure 6.19, with reference r(t) = 0 and a disturbance signal w(t) acting on the loop. The error then satisfies

   E(s) = (H(s)/(1 + H(s) D(s))) W(s) = Tw(s) W(s)

where Tw(s) = H(s)/(1 + H(s) D(s)) is the transfer function of the system with input w and output e. For a step disturbance w(t) = us(t) we find

   ess = lim t→∞ e(t) = lim s→0 s E(s) = lim s→0 s Tw(s) W(s) = lim s→0 s Tw(s) (1/s) = Tw(0)
Figure 6.17: Ramp responses for different system types
We can extend the analysis of the steady state error to the class of disturbance signals

   w(t) = (t^k/k!) us(t)

with Laplace transform

   W(s) = 1/s^{k+1}

Writing Tw(s) = s^n Tw,0(s) with 0 < |Tw,0(0)| < ∞, where n is called the disturbance system type, we find

   ess = lim s→0 s Tw(s) (1/s^{k+1}) = lim s→0 Tw(s)/s^k = lim s→0 s^{n−k} Tw,0(s)

and so

   ess = 0         if n > k
   ess = Tw,0(0)   if n = k
   ess = ∞         if n < k

In fact we have the same property as for the reference tracking error. For disturbance system type n we find that a disturbance signal w(t) = (t^n/n!) us(t) results in a constant steady state tracking error.
Figure 6.18: Parabola responses for different system types
Figure 6.19: Closed-loop configuration with reference r(t) and disturbance w(t)
Example 6.8 Consider the system
   H(s) = 1/(s(s + 1))

in the configuration of Figure 6.19, with controller D(s) = 2. Then

   Tw(s) = H(s)/(1 + H(s) D(s)) = 1/(s² + s + 2)

For n = 0 we find Tw,0(s) = Tw(s) with 0 < |Tw(0)| = 1/2 < ∞, and so the disturbance
system type is equal to 0. If we choose a different controller

   D(s) = 0.1/s + 3
we have

   Tw(s) = H(s)/(1 + H(s) D(s)) = 10 s/(10 s³ + 10 s² + 30 s + 1)

so that Tw(s) = s Tw,0(s) with Tw,0(0) = 10, and the disturbance system type is equal to 1.

6.4 PID control
In most industrial applications, PID controllers are used to enhance the system performance
and to meet the desired specifications. The terms P, I, and D stand for Proportional,
Integral, and Derivative. These terms describe three basic mathematical operations
applied to the error signal e(t) = r(t) − y(t). The proportional value determines the
reaction to the current error, the integral value determines the reaction based on the
integral of recent errors, and the derivative value determines the reaction based on the
rate at which the error has been changing. We will discuss the PID controllers as they
operate in the closed-loop configuration of Figure 6.19. We start with the P controller
with only a proportional action. Then we discuss the PI controller (proportional + integral
action) and PD controller (proportional + derivative action), and finally the PID controller
(proportional + integral + derivative action).
P control
The controller in the configuration of Figure 6.20 is called the P controller (proportional
controller). Typically the proportional action is the main drive in a control loop, as it
reduces a large part of the overall error. For a P controller we have D(s) = kp and so the
control signal u(t) is proportional to the error signal e(t):
u(t) = kp e(t)
Increasing the value kp may improve the steady state tracking error and the response speed.
Unfortunately, it may also lead to excessive values of the control signal u(t), which cannot
be realized in practice. Furthermore, high values of kp may lead to instability.
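The steady state offset of pure P control can be made concrete. A hedged sketch (assuming scipy; the plant H(s) = 1/((s + 1)(10 s + 1)) is the one used later in this section): the closed loop kp/((s + 1)(10 s + 1) + kp) settles at kp/(1 + kp), never exactly at 1.

```python
# Step responses of H(s) = 1/((s+1)(10s+1)) under P control: larger kp
# shrinks the offset kp/(1+kp) but never removes it.
import numpy as np
from scipy import signal

for kp in [5, 10, 25, 50]:
    # (s+1)(10s+1) + kp = 10 s^2 + 11 s + (1 + kp)
    T_cl = signal.TransferFunction([kp], [10, 11, 1 + kp])
    t, y = signal.step(T_cl, T=np.linspace(0, 40, 4001))
    print(kp, y[-1], kp / (1 + kp))   # simulated vs predicted final value
```

Each printed pair agrees: the simulated final value equals the predicted kp/(1 + kp).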
PI control
Another kind of controller is the PI controller (proportional-integral control), in which
a part of the control signal u(t) is proportional to the error signal e(t) and another part is
proportional to the integral of the error signal e(t):
   u(t) = kp ( e(t) + (1/Ti) ∫₀ᵗ e(τ) dτ )

where Ti is called the integral time.
[Figure 6.20: Block diagram of the P controller, with gain kp]

[Figure 6.21: Block diagram of the PI controller, with parallel branches kp and ki/s]
PD control
Instead of an integral action we can also introduce a derivative action, in which the proportional part of the control action is added to a multiple of the time derivative of the error signal e(t):

   u(t) = kp ( e(t) + Td (d e(t)/dt) )        (6.14)
where Td is called the derivative time. The derivative action is used to increase damping
and improve the system's stability. It counteracts the kp action (and the ki action, in
case of an integral action) when the output changes quickly. This helps reduce overshoot
and avoid unwanted oscillation of a signal. It has no effect on the final error. Note that the derivative action alone
never occurs, because if e(t) is constant and different from zero, the controller does not
react.
[Figure 6.22: Block diagram of the PD controller, with parallel branches kp and kd s]

   u(t) = kp ( e(t) + Td (d e(t)/dt) ) = kp e(t) + kd (d e(t)/dt)

with kd = kp Td. The transfer function of the PD controller is given by

   D(s) = kp (1 + Td s) = kp + kd s

Unfortunately it is impossible to realize a pure derivative action in practice. The implementation is usually done as

   τ (d u(t)/dt) + u(t) = kp ( e(t) + Td (d e(t)/dt) )

with a small time constant τ > 0.
PID control
The most general case is the PID controller in which we combine the proportional action
with an integral and derivative action:
   u(t) = kp ( e(t) + (1/Ti) ∫₀ᵗ e(τ) dτ + Td (d e(t)/dt) )        (6.15)
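The continuous-time law (6.15) is usually implemented digitally, with the derivative filtered as in the practical PD realization. A minimal discrete-time sketch (forward-Euler integral, first-order derivative filter; the class and parameter names are illustrative, not from the notes):

```python
class PID:
    """Discrete PID: u = kp (e + (1/Ti) * integral(e) + Td * de/dt),
    with the derivative low-pass filtered by time constant tau."""
    def __init__(self, kp, Ti, Td, dt, tau=0.01):
        self.kp, self.Ti, self.Td = kp, Ti, Td
        self.dt, self.tau = dt, tau
        self.integral = 0.0       # forward-Euler integral of e
        self.deriv = 0.0          # low-pass filtered derivative of e
        self.e_prev = 0.0

    def update(self, e):
        self.integral += e * self.dt
        raw = (e - self.e_prev) / self.dt
        a = self.tau / (self.tau + self.dt)     # filter coefficient
        self.deriv = a * self.deriv + (1 - a) * raw
        self.e_prev = e
        return self.kp * (e + self.integral / self.Ti + self.Td * self.deriv)

pid = PID(kp=2.0, Ti=1.0, Td=0.0, dt=0.1)
u = [pid.update(1.0) for _ in range(10)]
print(u[0], u[-1])   # approx 2.2 and 4.0: the integral term ramps up
```

With a constant error the proportional part stays fixed while the integral part grows linearly, which is exactly why integral action removes constant steady state errors.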
[Figure 6.23: Block diagram of the PID controller, with parallel branches kp, ki/s, and kd s]

Example 6.9 Consider the system

   H(s) = 1/((s + 1)(10 s + 1))

in the closed-loop configuration of Figure 6.19. First we choose a P controller D(s) = kp. The closed-loop transfer functions from reference to output and from disturbance to output become

   Y(s)/R(s) = kp/((s + 1)(10 s + 1) + kp),   Y(s)/W(s) = 1/((s + 1)(10 s + 1) + kp)

We can analyze the response of y(t) for different values of kp. The results are given in Figure 6.25.a for kp = 5, 10, 25, 50.
[Figure 6.24: Closed-loop configuration with controller D(s) and system H(s)]
Next we choose a PI controller

   D(s) = kp (1 + 1/(Ti s))

The closed-loop transfer functions become

   Y(s)/R(s) = kp (s + 1/Ti)/(s (s + 1)(10 s + 1) + kp (s + 1/Ti))

   Y(s)/W(s) = s/(s (s + 1)(10 s + 1) + kp (s + 1/Ti))
We can analyze the response of y(t) for different values of Ti . The results are given in Figure
6.25.b for kp = 25 and Ti = 5, 10, 50. The left plot is for a step reference (r(t) = us (t),
w(t) = 0), and the right plot is for a step disturbance (w(t) = us (t), r(t) = 0). In the plots
we can see that decreasing Ti leads to a faster decay of the steady state error with respect
to a step reference signal and a step disturbance signal. However, smaller values of Ti also
give an increase of the overshoot.
Next we choose a Proportional-Integral-Derivative controller
   D(s) = kp (1 + 1/(Ti s) + Td s)

The closed-loop transfer function from reference to output becomes

   Y(s)/R(s) = kp (Td s² + s + 1/Ti)/(s (s + 1)(10 s + 1) + kp (Td s² + s + 1/Ti))
[Figure: step responses, reference tracking (left) and disturbance rejection (right), for (a) kp = 5, 10, 25, 50, (b) kp = 25 and Ti = 5, 10, 50, (c) kp = 25, Ti = 10, and Td = 0.2, 1, 5]

Figure 6.25: Responses for (a) P control, (b) PI control, and (c) PID control

The closed-loop transfer function from disturbance to output becomes

   Y(s)/W(s) = s/(s (s + 1)(10 s + 1) + kp (Td s² + s + 1/Ti))
We can analyze the response of y(t) for different values of Td . The results are given in
Figure 6.25.c for kp = 25, Ti = 10 and Td = 0.2, 1, 5. The left plot is for a step reference
(r(t) = us (t), w(t) = 0), and the right plot is for a step disturbance (w(t) = us (t), r(t) = 0).
In the plots we can see that increasing Td introduces damping of the oscillatory behavior.
However, too much damping will result in a slower convergence towards the steady state.
Example 6.10 Consider the system
   H(s) = 1/((s + p1)(s + p2))
and the PID controller of (6.15). The closed-loop transfer function becomes
   1/(1 + H(s) D(s)) = (Ti s³ + Ti (p1 + p2) s² + Ti p1 p2 s)/(Ti s³ + (Ti p1 + Ti p2 + kp Ti Td) s² + (Ti p1 p2 + kp Ti) s + kp)

Note that the closed-loop system has 3 poles, determined by the 3 parameters of the controller. We can therefore place the poles anywhere we like.
Note that we can use the PID controller to place the poles at desired locations. Using the
properties of second-order systems we can tune the P, I, or D action in such a way that
system properties such as settling time, rise time, overshoot, and peak time satisfy certain
design criteria. We will illustrate this with some examples.
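The pole-placement idea of Example 6.10 can be carried out symbolically. A sketch (assuming sympy; the plant poles p1 = 1, p2 = 2 and target poles −2, −3, −4 are hypothetical choices for illustration):

```python
# Solve for kp, Ti, Td so that the closed-loop characteristic polynomial
# from Example 6.10 matches a desired pole pattern.
import sympy as sp

s, kp, Ti, Td = sp.symbols('s k_p T_i T_d', positive=True)
p1, p2 = 1, 2
# characteristic polynomial from the notes, divided by Ti
char = sp.expand(s**3 + (p1 + p2 + kp*Td)*s**2 + (p1*p2 + kp)*s + kp/Ti)
target = sp.expand((s + 2)*(s + 3)*(s + 4))   # desired poles -2, -3, -4
eqs = [sp.Eq(char.coeff(s, k), target.coeff(s, k)) for k in range(3)]
sol = sp.solve(eqs, [kp, Ti, Td], dict=True)[0]
print(sol)   # kp = 24, Ti = 1, Td = 1/4
```

Three coefficient equations in the three controller parameters always have a solution here, which is the algebraic content of "we can place the poles anywhere we like".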
Example 6.11 Given a system
   H(s) = 1/((5 s + 1)(2 s + 1))
and a PD controller
D(s) = kp (1 + Td s)
in a closed-loop configuration of Figure 6.14. The proportional gain kp has to be tuned
such that the closed-loop system has an undamped natural frequency ωn = 1 rad/s. The
derivative time constant Td has to be such that the closed-loop system has a relative damping
of ζ = 1/2. So the tasks are:
1. Compute kp .
2. Compute Td .
The closed-loop transfer function is

   kp (1 + Td s)/((5 s + 1)(2 s + 1) + kp (1 + Td s))

with characteristic polynomial

   10 s² + (7 + kp Td) s + (1 + kp)

Dividing by 10 and comparing with s² + 2 ζ ωn s + ωn² gives

   ωn² = (1 + kp)/10 = 1,   so   kp = 9

   2 ζ ωn = (7 + kp Td)/10 = 1,   so   Td = 1/3
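A quick numeric check of this tuning (illustrative, assuming numpy): the closed-loop poles for kp = 9, Td = 1/3 should have ωn = 1 and ζ = 1/2.

```python
# Roots of the closed-loop characteristic polynomial of Example 6.11.
import numpy as np

kp, Td = 9, 1/3
poly = [10, 7 + kp*Td, 1 + kp]   # 10 s^2 + (7 + kp Td) s + (1 + kp)
r = np.roots(poly)
wn = abs(r[0])                   # undamped natural frequency
zeta = -r[0].real / wn           # relative damping
print(wn, zeta)                  # 1.0 and 0.5 (up to rounding)
```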
Finally we can study the steady state error for a system in closed-loop with a PID controller.
Example 6.12 Given a system
   H(s) = 5/((10 s + 1)(s + 1))
and a PI controller
   D(s) = kp (1 + 1/(10 s))
in the closed-loop configuration of Figure 6.14. The reference input is a unit ramp signal
r(t) = ur (t). For which values of controller gain kp is the steady state error ess < 10% ?
To answer the question we use the fact that for a type 1 control system the steady state
error for a ramp reference signal is given by
   ess = 1/Kv   with   Kv = lim s→0 s D(s) H(s).

There holds:

   Kv = lim s→0 s kp ((10 s + 1)/(10 s)) · (5/((10 s + 1)(s + 1))) = kp/2.

So we need Kv > 10, and it follows that kp > 20.
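The Kv computation is easy to verify with computer algebra (an illustrative sketch, assuming sympy):

```python
# Velocity error constant of Example 6.12, symbolically in kp.
import sympy as sp

s, kp = sp.symbols('s k_p', positive=True)
D = kp * (1 + 1/(10*s))
H = 5 / ((10*s + 1)*(s + 1))
Kv = sp.limit(s * D * H, s, 0)
print(Kv)                                          # k_p/2
print(sp.solve(sp.Eq(1/Kv, sp.Rational(1, 10)), kp))  # boundary: [20]
```

Setting the ramp error 1/Kv equal to 10% gives the boundary kp = 20, so kp > 20 is required.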
6.5 Exercises

[Figure: block diagram for exercise 1, with R(s), blocks H1(s), H2(s), H3(s), H4(s), and output Y(s)]

   1/((s + 1)(s + 5))

   3/((s + 1)(s + 5))
The inverse of a square matrix M can be computed as

   M⁻¹ = (1/det M) adj M

with the adjugate matrix

   adj M = [ det(M̃11)              −det(M̃21)             ⋯
             −det(M̃12)             det(M̃22)              ⋯
             ⋮                      ⋮                      ⋱
             (−1)^{1+n} det(M̃1n)   (−1)^{2+n} det(M̃2n)   ⋯ ]

where M̃ij is equal to the matrix M after removing the ith row and jth column.

Example:

   M = [ 1 2 3
         4 4 4
         1 2 1 ]

We find, for example,

   det(M̃33) = det [ 1 2 ; 4 4 ] = −4,   det(M̃23) = det [ 1 2 ; 1 2 ] = 0

and so the adjugate matrix is given by

   adj M = [ −4   4  −4
              0  −2   8
              4   0  −4 ]

With det(M) = 8 the inverse of M is now computed as

   M⁻¹ = (1/det M) adj M = [ −0.5   0.5  −0.5
                              0    −0.25  1
                              0.5   0    −0.5 ]

For a 2 × 2 matrix this works out as

   [ m1 m2 ; m3 m4 ]⁻¹ = (1/(m1 m4 − m2 m3)) [ m4 −m2 ; −m3 m1 ]
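The adjugate construction above can be written directly in code (an illustrative sketch, assuming numpy) and checked against the worked example:

```python
# Inverse via the adjugate (transpose of the cofactor matrix).
import numpy as np

def adjugate(M):
    n = M.shape[0]
    C = np.zeros_like(M, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1)**(i + j) * np.linalg.det(minor)
    return C.T   # adjugate = transpose of the cofactor matrix

M = np.array([[1, 2, 3], [4, 4, 4], [1, 2, 1]], dtype=float)
adj = adjugate(M)
Minv = adj / np.linalg.det(M)
print(np.round(adj))   # equals [[-4, 4, -4], [0, -2, 8], [4, 0, -4]]
```

For large matrices this is far slower than LU-based inversion, but it mirrors the formula exactly.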
Appendices

Table of Laplace transform pairs, f(t) ↔ F(s):

   δ(t)                                 1
   us(t)                                1/s
   t                                    1/s²
   t²                                   2!/s³
   t³                                   3!/s⁴
   t^m                                  m!/s^{m+1}
   e^{−at}                              1/(s + a)
   t e^{−at}                            1/(s + a)²
   (t²/2!) e^{−at}                      1/(s + a)³
   (t^{m−1}/(m−1)!) e^{−at}             1/(s + a)^m
   1 − e^{−at}                          a/(s(s + a))
   (1/a)(at − 1 + e^{−at})              a/(s²(s + a))
   e^{−at} − e^{−bt}                    (b − a)/((s + a)(s + b))
   (1 − at) e^{−at}                     s/(s + a)²
   1 − e^{−at}(1 + a t)                 a²/(s(s + a)²)
   b e^{−bt} − a e^{−at}                (b − a)s/((s + a)(s + b))
   sin at                               a/(s² + a²)
   cos at                               s/(s² + a²)
   e^{−at} cos bt                       (s + a)/((s + a)² + b²)
   e^{−at} sin bt                       b/((s + a)² + b²)
   1 − e^{−at}(cos bt + (a/b) sin bt)   (a² + b²)/(s[(s + a)² + b²])
for 0 < t
for 0 ≤ t ≤ T
for t > T
b).
1
c).
1
1
b). d/dt [ur(t) − ur(t − 1) − us(t − 4)] = us(t) − us(t − 1) − δ(t − 4), for t ∈ R.
c). d/dt [up(t) us(1 − t) + us(t − 1)] = ur(t) us(1 − t) + 0.5 δ(t − 1), for t ∈ R.
Answer exercise 4. System properties
                 System a)   System b)   System c)   System d)
memoryless          Yes          No          No          Yes
linear              No           Yes         No          No
time-invariant      No           Yes         Yes         Yes
causal              Yes          Yes         Yes         Yes
Exercises chapter 2
Answer exercise 1.
b).
   (S² m1 + c1 S + k) x1(t) = k x2(t) + ft(t)
   (S² m2 + c2 S + k) x2(t) = k x1(t)

From the second equation, x1(t) = ((s² m2 + c2 s + k)/k) x2(t). Substituting into the first equation gives

   (s² m1 + c1 s + k) ((s² m2 + c2 s + k)/k) x2(t) = k x2(t) + ft(t)

or

   (s⁴ m1 m2 + s³ (c1 m2 + c2 m1) + s² (k m2 + k m1 + c1 c2) + s k (c2 + c1)) x2(t) = k ft(t)

With x(t) = x2(t) this gives the input-output differential equation

   m1 m2 (d⁴x(t)/dt⁴) + (c1 m2 + c2 m1)(d³x(t)/dt³) + (k m2 + k m1 + c1 c2)(d²x(t)/dt²) + k (c2 + c1)(dx(t)/dt) = k ft(t)
Choose the state vector

   x(t) = (x1(t), x2(t), x3(t), x4(t))

where x1 and x3 are the velocities and x2 and x4 the positions of the two masses. Then

   ẋ1(t) = −(c1/m1) x1(t) + (k/m1)(x4(t) − x2(t)) + (1/m1) u(t)
   ẋ2(t) = x1(t)
   ẋ3(t) = −(c2/m2) x3(t) + (k/m2)(x2(t) − x4(t))
   ẋ4(t) = x3(t)
and so

   ẋ(t) = [ −c1/m1   −k/m1    0        k/m1
              1        0      0         0
              0       k/m2   −c2/m2   −k/m2
              0        0      1         0    ] x(t) + [ 1/m1 ; 0 ; 0 ; 0 ] u(t)

   y(t) = [ 0 0 0 1 ] x(t) + 0 · u(t)
   L (d i3(t)/dt) = v1(t)
   C (d v1(t)/dt) = i1(t) − i3(t)

and so

   S L i1(t) = S² L C v1(t) + v1(t)

or

   S L i1(t) = (S² L C + 1) v1(t)

We can rewrite this as the input-output differential equation:

   L (d i1(t)/dt) = L C (d² v1(t)/dt²) + v1(t)
c). Choose

   x(t) = [ i3(t) ; v1(t) ],   u(t) = i1,   y(t) = v1

then

   d i3(t)/dt = (1/L) v1(t)
   d v1(t)/dt = (1/C)(i1(t) − i3(t))

or

   ẋ(t) = [ 0     1/L
            −1/C  0   ] x(t) + [ 0 ; 1/C ] u(t)

   y(t) = [ 0 1 ] x(t) + 0 · u(t)
Exercises chapter 3
Answer exercise 1. Driving car
so f(t) = 0.5 us(t). From Equation 3.4 in the lecture notes we know that for a forcing
function f(t) = us(t) we find:

   ys(t) = (1/a)(1 − e^{−at}) = (1/3)(1 − e^{−3t})

The actual forcing is 0.5 us(t), and so

   v(t) = (1/6)(1 − e^{−3t})   for t ≥ 0
Answer exercise 2. RLC-circuit
The differential equation of the circuit is given by

   d²v(t)/dt² + (1/(RC)) dv(t)/dt + (1/(LC)) v(t) = (1/C) di(t)/dt

In other words

   v̈ + (1/(RC)) v̇ + (1/(LC)) v(t) = v̈ + 2 ζ ωn v̇ + ωn² v(t) = v̈ + 2 v̇ + 4 v(t)

With L = 1 we find 1/C = ωn² = 4, so C = 0.25, and 1/(RC) = 2, so R = 2.
   λ1 = −1.5 + √(5/4), λ2 = −1.5 − √(5/4)   and   λ1,2 = (−2 ± j√3)/4

Stable:  System a) Yes,  System b) Yes,  System c) No,  System d) Yes

System a):  ζ = 0.5,  √(1 − ζ²) = ωd/ωn = 0.5√3,  ωn = 3,  ωd = 1.5√3,  σ = 1.5
System b):  ζ = 2,  ωn = 0.5,  σ = 1  (ωd = j 0.5√3)
System c):  ζ = 0.2,  ωn = 1
System d):  ζ = 1,  ωn = 2,  ωd = 0,  σ = 2

Note that for question b) the system is overdamped (ζ > 1) and there is no overshoot.
This means that ωd becomes complex and does not have a physical meaning.
Answer exercise 6. Response criteria
We use

   ts = 4.6/σ,   tp = π/ωd,   tr = 1.8/ωn,   Mp = exp(−π σ/ωd) = exp(−π ζ/√(1 − ζ²))

System a):  tr = 1.8/3 = 0.6,  tp = π/(1.5√3) ≈ 1.21,  ts = 4.6/1.5 ≈ 3.07,  Mp = exp(−π/√3) ≈ 0.16
System b):  tr = 1.8/0.5 = 3.6  (Mp = exp(−2π/(j√3)) ≈ −0.8842 − j 0.4671)
System c):  tr = 1.8/1 = 1.8,  tp = π/√0.96 ≈ 3.2064,  Mp = exp(−0.2π/√0.96) ≈ 0.5266
System d):  tr = 1.8/2 = 0.9,  ts = 4.6/2 = 2.3,  Mp = exp(−∞) = 0

Note that for question b) the system is overdamped (ζ > 1) and there is no overshoot.
This means that tp and Mp become complex and do not have a physical meaning.
Exercises chapter 4
Answer exercise 1. Transfer functions
a) H(s) = 4/(s² + 6 s + 5)

b) H(s) = (2 s³ + 12 s² + 24 s + 16)/(s³ − 13 s + 12)

c) H(s) = (3 s³ + 6 s² − 21 s + 12)/(s⁴ + 6 s³ + 22 s² + 30 s + 13)

Answer exercise 2. Poles, zeros and stability

b): poles: p1 = 1 and p2 = 3 and p3 = −4
    zeros: z1,2,3 = −2
    stable: NO (two poles have positive real part!)

c): poles: p1,2 = −2 ± j 3 and p3,4 = −1
    zeros: z1 = −4, z2,3 = 1
    stable: YES
Answer exercise 3: Frequency response
   H(s) = 1/(s² + 6 s + 5)

Magnitude:

   M(jω) = |H(jω)| = 1/|−ω² + j 6 ω + 5|
         = 1/√((5 − ω²)² + (6 ω)²)
         = 1/√(25 − 10 ω² + ω⁴ + 36 ω²)
         = 1/√(ω⁴ + 26 ω² + 25)

Phase:

   φ(jω) = ∠H(jω) = ∠ (1/(−ω² + j 6 ω + 5)) = −arctan(6 ω/(5 − ω²))

Response at ω = 3:

   M(j3) = 1/√(3⁴ + 26 · 3² + 25) = 1/√340 = 1/(2√85) ≈ 0.0542

   φ(j3) = −arctan(6 · 3/(5 − 3²)) = arctan(4.5) ≈ 1.3521 rad

Output: for a harmonic input sin(3t) the steady state output is

   y(t) = M(j3) sin(3t + φ(j3)) ≈ 0.0542 sin(3t + 1.3521)
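The magnitude at ω = 3 is easy to cross-check numerically (an illustrative sketch, not part of the original answer):

```python
# Evaluate H(j3) for H(s) = 1/(s^2 + 6s + 5) directly.
import numpy as np

w = 3.0
H = 1 / ((1j * w)**2 + 6 * (1j * w) + 5)   # H(j3) = 1/(-4 + 18j)
print(abs(H), 1 / np.sqrt(340))            # both approx 0.0542
```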
Answer exercise 4: Time response
   H(s) = 13/(s² + 6 s + 13)

Homogeneous solution. Characteristic equation:

   s² + 6 s + 13 = (s + 3)² + 2² = 0

so λ1 = −3 + 2j, λ2 = −3 − 2j. Homogeneous solution:

   yhom = C1 e^{−3t} cos 2t + C2 e^{−3t} sin 2t

Particular solution:

   ypart = H(0) = 1

Final solution with derivatives:

   y(t) = yhom + ypart = 1 + C1 e^{−3t} cos 2t + C2 e^{−3t} sin 2t
   ẏ(t) = (−3 C1 + 2 C2) e^{−3t} cos 2t + (−2 C1 − 3 C2) e^{−3t} sin 2t

Initial conditions:

   y(0) = 1 + C1 = −5        so C1 = −6
   ẏ(0) = −3 C1 + 2 C2 = 2   so C2 = −8

and so

   y(t) = 1 − 6 e^{−3t} cos 2t − 8 e^{−3t} sin 2t
Answer exercise 5: Time response
   H(s) = 40/(s² + 10 s + 25)

Homogeneous solution. Characteristic equation:

   s² + 10 s + 25 = (s + 5)² = 0

so λ1,2 = −5. Homogeneous solution:

   yhom = C1 e^{−5t} + C2 t e^{−5t}

Particular solution:

   ypart = H(−3) e^{−3t} = (40/(3² − 10 · 3 + 25)) e^{−3t} = (40/4) e^{−3t} = 10 e^{−3t}

Final solution with derivatives. Initial conditions:

   y(0) = 10 + C1 = 55           so C1 = 45
   ẏ(0) = −30 − 5 C1 + C2 = −65  so C2 = 190

and so

   y(t) = 10 e^{−3t} + 45 e^{−5t} + 190 t e^{−5t}
Answer exercise 6. Convolution

   y(t) = ∫₀^∞ h(t − τ) u(τ) dτ

For t ≥ 1 we find h(t − τ) = 1 for all 0 ≤ τ < 1, and so

   for t ≥ 1:  y(t) = ∫₀¹ h(t − τ) · 1 dτ = ∫₀¹ 1 · 1 dτ = 1

In total

   y(t) = 0 for t < 0,   y(t) = t for 0 ≤ t < 1,   y(t) = 1 for t ≥ 1

or

   y(t) = ur(t) − ur(t − 1)
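This convolution can be reproduced numerically (a sketch; h and u as in the answer, h a unit pulse of width 1 and u the unit step):

```python
# Numerical convolution of a unit pulse with a unit step.
import numpy as np

dt = 0.001
t = np.arange(0, 3, dt)
h = ((t >= 0) & (t < 1)).astype(float)   # h(t): unit pulse on [0, 1)
u = np.ones_like(t)                      # u(t) = us(t)
y = np.convolve(h, u)[:len(t)] * dt      # y(t) = (h * u)(t)
print(y[int(round(0.5/dt))], y[int(round(2.0/dt))])   # about 0.5 and 1.0
```

The output ramps up linearly on [0, 1] and then saturates at 1, i.e. ur(t) − ur(t − 1).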
Answer exercise 7: State systems
a. We find the eigenvalues λ1 = −3 and λ2 = −4 with the corresponding eigenvectors

   v1 = [ 1 ; 3 ]   and   v2 = [ 2 ; 7 ].

Both eigenvalues have a negative real part, and so the system is stable.

b. We find

   V = [ 1 2 ; 3 7 ],   V⁻¹ = [ 7 −2 ; −3 1 ]

   Ā = V⁻¹ A V = [ −3 0 ; 0 −4 ],   B̄ = V⁻¹ B = [ 3 ; −1 ],   C̄ = C V = [ 1 1 ],   D̄ = D = 0

c. Unforced response:

   x(t) = V e^{Λt} V⁻¹ x(0)
        = [ 1 2 ; 3 7 ] [ e^{−3t} 0 ; 0 e^{−4t} ] [ 5 ; −2 ]
        = [ 1 2 ; 3 7 ] [ 5 e^{−3t} ; −2 e^{−4t} ]
        = [ 5 e^{−3t} − 4 e^{−4t} ; 15 e^{−3t} − 14 e^{−4t} ]
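The modal solution in c can be cross-checked against the matrix exponential (an illustrative sketch, assuming scipy; x(0) is reconstructed from V⁻¹ x(0) = (5, −2)):

```python
# Verify x(t) = V exp(Lambda t) V^{-1} x(0) against expm(A t) x(0).
import numpy as np
from scipy.linalg import expm

V = np.array([[1, 2], [3, 7]])
Lam = np.diag([-3.0, -4.0])
A = V @ Lam @ np.linalg.inv(V)
x0 = V @ np.array([5, -2])     # so that V^{-1} x0 = (5, -2)
t = 1.0
x = expm(A * t) @ x0
ref = np.array([5*np.exp(-3*t) - 4*np.exp(-4*t),
                15*np.exp(-3*t) - 14*np.exp(-4*t)])
print(np.allclose(x, ref))     # the two expressions agree
```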
d. Forced response:
   y(t) = C e^{At} x(0) + ∫₀ᵗ C e^{A(t−τ)} B u(τ) dτ
        = C e^{At} · 0 + ∫₀ᵗ C V e^{Λ(t−τ)} V⁻¹ B dτ
        = ∫₀ᵗ C̄ e^{Λ(t−τ)} B̄ dτ
        = ∫₀ᵗ [ 1 1 ] [ e^{−3(t−τ)} 0 ; 0 e^{−4(t−τ)} ] [ 3 ; −1 ] dτ
        = ∫₀ᵗ (3 e^{−3(t−τ)} − e^{−4(t−τ)}) dτ
        = 3 e^{−3t} ∫₀ᵗ e^{3τ} dτ − e^{−4t} ∫₀ᵗ e^{4τ} dτ
        = 3 e^{−3t} (1/3)(e^{3t} − 1) − e^{−4t} (1/4)(e^{4t} − 1)
        = (1 − e^{−3t}) − 0.25 (1 − e^{−4t})
y(t) = C e
e. Impulse response:
s+3 s+4
3(s + 4)
(s + 3)
=
(s + 3)(s + 4) (s + 3)s + 4
2s + 9
= 2
s + 7 s + 12
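This state-space-to-transfer-function step can be checked with scipy (an illustrative sketch using the modal matrices Ā, B̄, C̄, D̄ from part b):

```python
# Transfer function of the modal realization: expect (2s + 9)/(s^2 + 7s + 12).
import numpy as np
from scipy import signal

A = np.diag([-3.0, -4.0])
B = np.array([[3.0], [-1.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])
num, den = signal.ss2tf(A, B, C, D)
print(num, den)   # numerator [[0, 2, 9]], denominator [1, 7, 12]
```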
Exercises chapter 5
Answer exercise 1. Pendulum system
a)

   m θ̈(t) = fθ,z + fe(t)   with   fθ,z = −fz sin θ = −m g sin θ

We find:

   m θ̈(t) = −m g sin θ(t) + fe(t)

Choose x(t) = [ θ̇(t) ; θ(t) ], y(t) = θ(t), u(t) = fe(t), then

   ẋ1(t) = −g sin x2(t) + (1/m) u(t)
   ẋ2(t) = x1(t)
   y(t) = x2(t)

b)

With u0 = fe,0 = 0.5 m g we find

   0 = −g sin x2,0 + 0.5 g
   0 = x1,0

and so x1,0 = 0 and sin x2,0 = 0.5, which gives x2,0 = y0 = π/6.

c)

   A = [ ∂f1/∂x1  ∂f1/∂x2 ; ∂f2/∂x1  ∂f2/∂x2 ] at (x0, u0) = [ 0  −g cos x2,0 ; 1  0 ] = [ 0  −(√3/2) g ; 1  0 ]

   B = [ ∂f1/∂u ; ∂f2/∂u ] at (x0, u0) = [ 1/m ; 0 ]

   C = [ ∂g/∂x1  ∂g/∂x2 ] at (x0, u0) = [ 0 1 ]

   D = ∂g/∂u at (x0, u0) = 0
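The Jacobian in c) can be cross-checked with finite differences (an illustrative sketch; the numeric values g = 9.81 and m = 1 are assumptions):

```python
# Finite-difference linearization of the pendulum at the equilibrium.
import numpy as np

g, m = 9.81, 1.0
f = lambda x, u: np.array([-g*np.sin(x[1]) + u/m, x[0]])
x0, u0 = np.array([0.0, np.pi/6]), 0.5*m*g   # equilibrium from part b)

eps = 1e-6
A = np.column_stack([(f(x0 + eps*e, u0) - f(x0 - eps*e, u0)) / (2*eps)
                     for e in np.eye(2)])    # column j = df/dx_j
print(np.round(A, 3))   # close to [[0, -g*sqrt(3)/2], [1, 0]]
```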
a)

   ẋ(t) = −(1/(RC)) x³(t) + (1/C) u(t)

   y(t) = C (d v1(t)/dt) = C ẋ(t) = −(1/R) x³(t) + u(t)

b)

With u0 = i1,0 = 4 we find

   0 = −(1/(RC)) x0³ + (1/C) u0

c)

   ∂f/∂x |(x0,u0) = −(3/(RC)) x²(t) |(x0,u0) = −(3/(RC)) x0² = −24

   ∂f/∂u |(x0,u0) = 1/C = 4

   ∂h/∂x |(x0,u0) = −(3/R) x²(t) |(x0,u0) = −(3/R) x0² = −6

   ∂h/∂u |(x0,u0) = 1
Exercises chapter 6
Answer exercise 1. Block scheme
The transfer function from x2 to y is given by:

   Y(s)/X2(s) = H2/(1 + H2 H3).

[Figure: block diagram with R, H1(s), H2(s), H3(s), H4(s), and Y]

In series with H1 this gives:

   Y(s)/X1(s) = H1 · H2/(1 + H2 H3) = H1 H2/(1 + H2 H3).

Finally the transfer function becomes:

   Y(s)/R(s) = (H1 H2/(1 + H2 H3)) / (1 + H4 · H1 H2/(1 + H2 H3)) = H1 H2/(1 + H2 H3 + H1 H2 H4)
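The reduction above can be verified symbolically (an illustrative sketch, assuming sympy):

```python
# Collapse the block diagram step by step and simplify.
import sympy as sp

H1, H2, H3, H4 = sp.symbols('H1 H2 H3 H4', positive=True)
inner = H2 / (1 + H2*H3)                        # inner loop: H2 with feedback H3
forward = H1 * inner                            # in series with H1
T = sp.simplify(forward / (1 + forward * H4))   # outer loop with feedback H4
print(T)   # a single rational expression in H1..H4
```

Simplification collapses the nested fractions to H1 H2/(1 + H2 H3 + H1 H2 H4), matching the answer.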
Answer exercise 2. Rise time and settling time
The closed-loop transfer function is given by
   kp/(s² + 6 s + 5 + kp) = kp/(s² + 2 ζ ωn s + ωn²) = kp/((s + σ)² + ωd²) = kp/((s + 3)² + kp − 4)

a). We find

   λ1,2 = −3 ± √(4 − kp)

b). To find

   tr = 1.8/ωn < 0.2

we need ωn = √(5 + kp) > 9, and so kp > 76.

c). To find

   ts = 4.6/σ < 2.3

we have to make σ > 2. However, we already found that σ = 3, which means that
ts < 2.3 for all kp > 0.
   E(s) = ((s + 1)(s + 5)/((s + 1)(s + 5) + 3 kp)) · (1/s²)
Index
angle, 18
angular acceleration, 18
angular velocity, 18, 20
autonomous system, 12
basic elements, 17
basic signals, 17
block diagrams, 113
Bounded-Input-Bounded-Output (BIBO) stability, 97
capacitor, 19
causality, 14
closed-loop control, 122
convolution, 76
critically damped system, 52
current, 19, 20
damped harmonic function, 11
damped natural frequency, 54
damper, 18
damping ratio, 50, 54
dynamical system, 12
dynamical systems, 17
electrical system, 19
electromechanical system, 20
elementary signals, 8
equilibrium point, 107
feedback control, 113
feedback controller, 123
feedforward controller, 123
final value theorem, 125
first-order system, 43
fluid capacitor, 21
fluid flow system, 21
harmonic function, 10
heat energy flow, 21
heat flow system, 21
homogeneous solution, 45, 51, 64, 84
    first-order system, 45
    LTI system, 64
    second-order system, 51
    state system, 84
impulse response, 45, 53
    first-order system, 45
    second-order system, 53
impulse response model, 76
impulse response of a state system, 93
inductor, 19
inertia, 18
inhomogeneous solution of a state system, 86
initial conditions, 63
input, 12
input-output differential equation, 32
input-output system, 12
Laplace transform, 32
linear time-invariant systems, 61
linearity, 14
linearization, 108
loop gain function, 117, 124
mass, 18
mechanical system, 17
memoryless, 13
modal transformation, 83
model, 17
Newton's law, 23
Newton's law for rotation, 25
nonlinear dynamical systems, 103
nonlinear state system, 106
Bibliography
[1] K.J. Åström and R.M. Murray. Feedback Systems: An Introduction for Scientists and Engineers. Princeton University Press, Princeton, New Jersey, USA, 2009.
[2] C.M. Close, D.K. Frederick, and J.C. Newell. Modeling and Analysis of Dynamic Systems. John Wiley & Sons, New York, USA, 2002.
[3] G.F. Franklin, J.D. Powell, and A. Emami-Naeini. Feedback Control of Dynamic Systems. Pearson/Prentice Hall, New Jersey, USA, 2006.
[4] T. Kailath. Linear Systems. Prentice Hall, New Jersey, USA, 1980.
[5] H. Kwakernaak and R. Sivan. Modern Signals and Systems. Prentice Hall, New Jersey, USA, 1991.
[6] A.V. Oppenheim and A.S. Willsky. Signals and Systems. Prentice Hall, New Jersey, USA, 1983.
[7] D. Rowell and D.N. Wormley. System Dynamics: An Introduction. Prentice Hall, New Jersey, USA, 1997.
[8] V. Verdult and T.J.J. van den Boom. Analysis of Continuous-time and Discrete-time Dynamical System. Lecture Notes for the course ET2-039, faculty EWI, TU Delft, 2002.