ECD3701 - Study Guide
SCHOOL OF ENGINEERING
Electrical and Mining Engineering
Dear Student
As part of this tutorial letter, we wish to inform you that Unisa has implemented a transformation charter based on five pillars and eight dimensions. In response to this charter, we have also placed curriculum transformation high on the agenda. For your information, curriculum transformation includes the following pillars: student-centred scholarship, the pedagogical renewal of teaching and assessment practices, the scholarship of teaching and learning, and the infusion of African epistemologies and philosophies. These pillars and their principles will be integrated at both the programme and module levels in a phased approach. You will notice the implementation thereof in your modules, and we encourage you to fully embrace these changes during your studies at Unisa.
I.1 Introduction
Welcome to the subject Computer Aided Design and Simulation 1A (ECD3701) at UNISA. This
tutorial letter serves as a guideline for this subject. It provides you with general administrative
information as well as specific information about the subject. Read it carefully and keep it safe
for future reference. We trust that you will enjoy this course.
I.2.1 Purpose
ECD3701 uses computer software to simulate electrical and control systems, which supports the design and development of the hardware systems at a lower cost.
I.2.2 Outcomes
I. Module Administrative Details
Your lecturer for Computer Aided Design and Simulation 1A is Mr. D. Nangana.
email: [email protected]
Contact Times: 08h00 to 12h00, Mondays to Fridays
Your lecturer for Computer Aided Design and Simulation 1B is Prof. Z. Wang.
I.4 Department
II : Module Preface
So what exactly is Computer Aided Design and Simulation 1A (ECD3701) and how
exactly does it relate to the topic of Control Systems, which is what the module is all about?
To understand this we can break up the titles and relate each concept through its definition.
System, (n.); An organised or connected group of things [1]. Control, (n.); The place from
which a system or activity is directed [2]. Thus, Control Systems as a topic is the study of
how a system can be controlled, usually by another system. This is a fundamental learning
objective of the module. But the module title holds more detail about what you are expected
to do, in addition to understanding Control Systems.
Computer-Aided, (adj.); Completed partially or entirely by using a computer [3]. Design, (v.);
Do or plan (something) with a specific purpose in mind [4]. Simulation, (n.); The production
of a computer model of something, especially for the purpose of study [5]. This means that the
general concept of this module is to design and simulate control systems using a computer.
This may seem arbitrary, especially analysing the dictionary definitions of the words describing the module; however, it can help you answer the questions in your mind: “What is it exactly that I am getting myself into, and why would this be necessary, if not useful?”
Well, the necessity stems from the usefulness. Imagine for a moment you are working at a high-tech solar power solutions company. You need to design and develop a solar solution for a client. Generally, the position and alignment of the panels significantly impact performance. This should be controlled using an electro-mechanical angle-positioning controller. However, the electrical phenomena of power transfer due to load impedance on the panel are also at play, and a Maximum Power Point Tracker (MPPT) is needed to ensure maximum efficiency. This controls the electrical impedance of the charging circuit, as the load to the panel, to draw the most power from the panel. Additionally, the power produced needs to be either used or stored. A charge control system is used to quickly and safely charge the battery storage with maximum efficiency. At night the stored power needs to be used; this requires an inverter, which is a power flow controller that inverts low-voltage DC power into rated AC power.
Say for a moment you are currently working on the inverter. This uses solid-state switching technology, usually power MOSFETs. These devices are designed to switch large current and voltage signals. However, tears would result from a straight-through first attempt at building an inverter: plugging it in and seeing smoke spewing out of the MOSFETs, which are generally very expensive.
As an alternative, you design the inverter in a circuit simulator, having considered the design
process, the control system and signals required to switch the MOSFETs. In this simulation,
you can break the MOSFETs as many times as you desire, and at a significantly lower cost of
“running a simulation”. In the case of unintentional failures due to design error, this means that
you can simply fix the design flaw through the iterative design process, and rerun the simulation,
with no need to replace expensive MOSFETs. The benefit of simulating far outweighs the cost
of buying hundreds of MOSFETs, each time just to break them to “see if it works”. The
usefulness of Computer-Aided Design and Simulation for Control Systems essentially becomes
a necessity.
References
[1] Oxford English Dictionary. (2021). “System, n.”. [Online]. Available: https://www.lexico.com/definition/system (visited on 05/05/2021).
[2] Oxford English Dictionary. (2021). “Control, n.”. [Online]. Available: https://www.lexico.com/definition/control (visited on 05/05/2021).
[3] Oxford English Dictionary. (2021). “Computer-aided, adj.”. [Online]. Available: https://www.lexico.com/definition/computer-aided (visited on 05/05/2021).
[4] Oxford English Dictionary. (2021). “Design, n.”. [Online]. Available: https://www.lexico.com/definition/design (visited on 05/05/2021).
[5] Oxford English Dictionary. (2021). “Simulation, n.”. [Online]. Available: https://www.lexico.com/definition/simulation (visited on 05/05/2021).
The module is divided into a theoretical aspect of Control Systems and analysis, and a practical
aspect of the simulation of electrical and control systems. Both play a role in the learning
outcomes of the module.
The following topics fall under Control Systems theory and are covered in the prescribed textbook, Control Systems Engineering, by Norman Nise [1]. They also form the learning units:
Basic control system modelling covers concepts of the design process itself. It describes the
fundamental knowledge of how to represent some physical system in a way that can be modelled
mathematically. Modelling in the frequency domain analyses these models to reduce them to
an overall system model represented by its transfer function. Modelling in the time domain
covers concepts of analysing a system’s state-space representation and related topics. Time response analyses system behaviour, which is characterised by the system’s order, described by its damping, and fundamentally related to the system’s transient behaviour. Stability covers the concepts needed to understand when a system is stable or unstable, and the various techniques used to determine this. Steady-state errors covers the reality that any system has inherent errors, and describes
the nature of these errors in a system.
Each of these topics will be discussed in detail and will incorporate the practical aspect of the
module within each topic.
III. Module Overview
The practical aspect of the module uses the EasyEDA simulation software. The overall outcomes of the module with regard to simulation are as follows:
Each of these practical outcomes will be realised throughout the theoretical topics mentioned
above. As the theoretical basis is developed, the practical outcomes will be achievable.
The module is divided into learning units that have their own learning outcomes. Collectively,
the module is essentially outcomes-based. This means that, instead of using a set of topics as a
starting point, the outcomes are used to guide the learning process. Additionally, this functions
as a set of “tick-boxes” used as a metric for completing the module.
The outcomes are focused on the tasks an engineer is expected to be able to do in a work
environment. The module is planned in a way that will assist you to acquire the necessary
competence to perform these tasks. These tasks are reflected in the learning outcomes that
appear at the beginning of every learning unit.
The teaching in this module is also based on the principle of “active learning”. It has been shown that the more actively one is involved in reading and learning, the more clearly one understands what one is learning. Consequently, this leads to more effective application of one’s knowledge and skills in real-life situations.
To help you work through this study guide actively – rather than just reading it passively – a large number of activities, examples and self-assessment questions have been included. These closely follow the learning outcomes of the particular learning units of the module. By completing these meaningfully, you will ensure that your learning is effective. In this process, you will start to develop the practical skills that will be required in your work situation. It is your responsibility to ensure that these are completed and understood, as a self-reflective indication of your competence.
Learning Outcomes
The learning outcomes contained in each learning unit can be regarded as a checklist of the
things you should be able to do once you have completed that particular learning unit. They
fundamentally form a metric for you to assess your own understanding. In other words, they
tell you what the purpose of your learning in that particular learning unit is. When you are
reviewing the module, you should look back at the learning outcomes and check whether you
have achieved them all. They will give you an overview of the knowledge and skills you should
have acquired in the module.
Worked Examples
Each study unit contains worked examples providing feedback on how a particular problem
should be attempted. These will give you an indication of how well you have grasped the study
material.
Activities/Problems
Completing the activities will help you to acquire the knowledge and skills that are taught in
every learning unit. This will enable you to achieve the learning outcomes. Feedback on the
activities is provided at the back of the study guide.
Feedback on Activities
Most activities are followed by some form of feedback (comments on, or suggested answers to, the questions in the activity). Sometimes this feedback appears as part of the learning unit. In many cases, however, feedback is included at the end of this study guide. Please note that you should try to complete the activities on your own before checking the feedback.
Videos
For additional explanations of the content of this module, you can watch the video clips that will occasionally be provided. Note that some video clip links may have become invalid; however, you can search for other relevant video clips to help you study this module. Web addresses/links to video clips and other web pages are provided in each of the learning units. The name of the video (as well as its running time) is referred to in the text and is
hyper-linked. Additionally, if the hyperlink is not working properly, the video is referenced in
the references, where the URL can be explicitly found.
myUnisa Discussions
You will find a myUnisa discussion for some of the learning units of the module. Log into
myUnisa and check the discussion related to the learning unit.
References
At the end of each learning unit, you will find references to sources you can consult to read more on that particular topic. These references exceed the prescribed contents of the course. It is at your discretion to explore the contents of the references further.
References
[1] N. Nise, Control Systems Engineering, 8th (EMEA) Edition. Hoboken, NJ: John Wiley &
Sons, Inc, 2019, isbn: 9781119590132.
Learning Unit 1:
Introduction to Control Systems
This learning unit introduces the concept of a system, what it is, and the various nomenclature
used in the field of control systems. The concept of defining and specifying a system, from
the perspective of analysis and control, is covered. A system and its signals are qualitatively
described and defined, which helps with conceptual understanding.
Study this learning unit in conjunction with Chapter 1 of the prescribed textbook, Control
Systems Engineering, by Norman Nise [1].
A system is an entity that processes a set of signals (inputs) to obtain another set of signals
(outputs) [2]. A system is characterised by its system boundary, which separates the system of
interest from its environment. Assumptions are used to characterise the system boundary and
impose restrictions on the system’s behaviour. This assists in simplifying the system and the resultant analysis. However, assumptions must always be justified, and must not restrict or affect the system model significantly beyond the desired error. The system error is usually specified as a design requirement, and is itself a necessary assumption that must be specified quantitatively.
In the context of Control Systems Engineering, a control system is exactly like any other system
as defined above. It is an entity that processes inputs to outputs. However, a control system is
primarily driven by intelligent design, with the primary purpose of controlling another generic
system. Thus, a control system consists of subsystems assembled to obtain the desired output,
with the desired performance, given a specified input [1].
Outcome 1 is complete
Control Systems are characterised by two main types: Open-loop and Closed-loop [1–3]. The difference between these two types is easily seen graphically from their block diagrams, Figures 1.1a and 1.1b respectively, below [1–3].
[Figure 1.1: Generic control system block diagrams. (a) Open-loop: R(s) feeds G(s), producing C(s). (b) Closed-loop: R(s) enters a summing point together with the negative feedback signal M(s) = H(s)C(s), producing the error E(s), which feeds G(s) to produce C(s)]
As can be seen, the defining difference is the feedback loop present in the closed-loop system
[1–3]. For the Open-loop system, the symbols and their meaning are as follows:
For the Closed-loop system, the symbols and their meaning are as follows:
The controlled signal C(s) is fed to the measurement transducer subsystem H(s), generating the measured signal M(s), which is fed back to the reference signal R(s), generating the error signal E(s). The error $E(s) = R(s) - M(s)$ is simply the difference between the input reference signal R(s) and the measured signal M(s). Figures 1.1a and 1.1b are generic forms of open-loop and closed-loop control systems [1–3].
• vegetable peeler
• heater
• air-conditioner
• toaster
• clothes dryer
• washing machine
• DC-AC inverter
Control systems fit a generic form as seen above in Figures 1.1a and 1.1b. However, some detail
is missing in these diagrams that will be useful to understand control theory. Firstly, since the
module is primarily concerned with control, there is a controller GC (s) and a plant GP (s) [1,
3]. However, with all real-life systems, there exists an external or environmental disturbance
D(s) [1]. This needs to be incorporated into the existing diagrams.
For an open-loop control system, the controller and plant are explicitly distinguished as $G_C(s)$ and $G_P(s)$ respectively [1]. There is also an input transducer, $H_I(s)$ [1]. This transduces the reference R(s) into an appropriate form for the controller to understand and use. The controller receives this modified reference and responds accordingly. Ideally, this response is fed directly to the plant; however, as mentioned, environmental disturbances exist. The disturbance $D_1(s)$ is added to the controller output signal, which then goes to the plant [1]. The output of the plant itself can also be affected by its own independent disturbance $D_2(s)$ [1]. This is added to the plant’s output, which is also the entire system’s output C(s) [1]. This additional detail is seen in Figure 1.2 below [1].
Similarly, for a closed-loop system, there is a distinction between the controller and the plant, $G_C(s)$ and $G_P(s)$ [1]. There is also an input transducer $H_I(s)$ that modifies the reference appropriately for the controller to understand [1]. This is distinguished from the output transducer, a sensor, $H_S(s)$ [1]. Additionally, there are external disturbances similar to those of the open-loop system. This additional detail is seen in Figure 1.3 below [1].
Outcome 4 is complete
[Figure 1.2: Open-loop control system with disturbances: R(s) feeds H_I(s), then G_C(s); the disturbance D_1(s) is added to the controller output before G_P(s), and the disturbance D_2(s) is added to the plant output to form C(s)]
[Figure 1.3: Closed-loop control system with disturbances: R(s) feeds H_I(s) into a summing point with the negative feedback M(s) from the sensor H_S(s), producing E(s); E(s) feeds G_C(s); D_1(s) is added before G_P(s) and D_2(s) is added to the plant output to form C(s)]
Hint: Identify where additional disturbances (could) exist. Then consider if these can be
collected and simplified in some way.
The basic concepts of system block diagram simplification are briefly covered. You have already been prematurely exposed to these concepts; however, they will now be formalised. This will be done in a more self-discovery fashion, through activities.
There are a handful of basic elements that make up a system block diagram. Looking back at
Figure 1.1a, the most obvious are signals and (system) blocks [1–3]. Signals are represented by
arrows and are denoted simply with their name (if needed) near the arrow’s line [1–3]. Blocks
represent systems or subsystems as explained above, and are denoted inside of the block [1–3].
Hint: There are two more basic elements. Incidentally, they both occur in Figure 1.1b. To clarify: a loop, which characterises a closed-loop system, is not a basic element but a system structure.
The first is the pickoff point, drawn as a single incoming signal R(s) branching into multiple outgoing copies of R(s). It simply “copies” the desired input signal, functioning as a single-input, multi-output element. The outputs (as copies of the input) are to be used as inputs to multiple subsystems as desired;
The second is the summing point (or summing junction), drawn as a node where several signals, each marked with a + or − sign, combine into one output. Summing points take multiple signals and add or subtract them as needed to form a new composite signal. In the example here, ±W(s) ± X(s) ± Y(s) = Z(s). Note that a summing point is characterised as multi-input, single-output.
[Diagram: three summing points. W(s) enters the topmost sum positively and X(s) negatively; X(s) enters the bottom-most sum positively and Y(s) negatively; their outputs feed a rightmost sum, the topmost positively and the bottom-most negatively, producing Z(s)]
However, to assist further, consider the diagram. There are three inputs, namely W(s), X(s) and Y(s), as well as one output, Z(s). The output must be represented as some mathematical relation of the three input signals.
Starting with the topmost sum, W(s) and X(s) feed into this sum. But looking carefully, W(s) feeds in positively, and X(s) feeds in negatively. Therefore this output is simply W(s) − X(s).
For the rightmost sum, the two previous summed signals are fed in, generating the output Z(s). Looking carefully, we see that the topmost sum output is fed in positively, while the bottom-most is fed in negatively. Therefore we have (W(s) − X(s)) − (X(s) − Y(s)). This resolves to
$$Z(s) = W(s) - 2X(s) + Y(s)$$
Activity 1.3
In Example 1.2 you were asked to find the equations from the block diagram. See if you can find another simplified block diagram for this example.
Use summing and pickoff points to represent the relationships of inputs to outputs.
[Diagram: a block diagram of summing and pickoff points relating the signals P(s), W(s), X(s) and Z(s)]
As mentioned above, blocks are elements that represent a (sub)system that manipulates its input to generate some output. But blocks can also represent entire mathematical relations themselves [1, 3]. These mathematical relations, however, are limited to (sub)systems and their inter-relations. Some of these relations are now explored.
Combination in Cascade
Signals and direct signal manipulation have been presented above with the use of pickoff points and summing junctions. However, a (sub)system also manipulates a signal [1, 3]. How this occurs is now explored.
What do you think is the mathematical operation of a signal going through a system
block? For completeness, relate mathematically the signal C(s) to R(s) and G(s) in
Figure 1.1a above.
Hint: There are basic and fundamental mathematical operations in a block diagram. You already know of two operations: signal duplication, and more importantly, signal addition (and subtraction). One more fundamental operation needs to be accommodated; incidentally, it involves signals and system blocks.
The mathematical relationship between signals and system blocks is simply multiplication. I.e. the mathematical relationship of Figure 1.1a is simply
$$C(s) = G(s)R(s)$$
Find its equivalent reduced (open-loop) system diagram, and the associated equation
relating the input R(s) to the output C(s).
Combination in Parallel
The concepts of parallel systems and how they combine and reduce are covered. Again this is
done through a self-discovery process.
Activity 1.5
Consider the system in Figure 1.5 below. Find its equivalent reduced (open-loop) system
diagram, and the associated equation relating the input R(s) to the output C(s).
[Figure 1.5: R(s) feeds a pickoff point into three parallel blocks G_1(s), G_2(s) and G_3(s); their outputs are summed, all with positive signs, to form C(s)]
This section is provided to expand your understanding and supplement the module content. It
is not a requirement for the module but is invaluable for your understanding. The concept of
reducing a closed-loop system into an open-loop form is briefly discussed.
Consider for a moment the closed-loop system block diagram in Figure 1.1b above. It would
be convenient to have the input reference signal going into a single block, which generates the
output controlled signal. There is a way the signals can be related to one another through their
mathematical relationship as seen above. With this in mind, let’s see if we can find the system
of equations and reduce them to a single equation, containing the control signal in terms of the
reference signal. Consider the following equations, which result directly from the diagram [1, 3]:
$$E(s) = R(s) - M(s), \qquad M(s) = H(s)C(s), \qquad C(s) = G(s)E(s)$$
Substituting the first two equations into the third and simplifying, we get:
$$C(s) = G(s)\left[R(s) - H(s)C(s)\right] \;\Rightarrow\; C(s) = \frac{G(s)}{1 + G(s)H(s)}\,R(s)$$
which is the control signal C(s) in terms of the reference signal R(s) multiplied by some general system expression [1, 3]. We will explore the meaning of this general function and its significance later in the module. Importantly, this means that the closed-loop system can be represented in the simple open-loop form seen in Figure 1.6 below [1, 3]. Control Systems Lectures - Closed Loop Control (9:12) [4] offers a summary of the importance of closed-loop systems and their open-loop form.
Outcome 5 is complete
[Figure 1.6: Equivalent open-loop form of the closed-loop system: R(s) feeds a single block $\frac{G(s)}{1+G(s)H(s)}$, producing C(s)]
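As a concrete sketch of this reduction (assuming Python with the third-party python-control package, which is not part of the module's prescribed tools), feedback() collapses the loop of Figure 1.1b into the single block of Figure 1.6:

import control

# Toy example (values assumed for illustration): G(s) = 1/(s + 1), H(s) = 2
G = control.tf([1], [1, 1])
H = control.tf([2], [1])

# feedback(G, H) computes G/(1 + G*H), the equivalent open-loop block
T = control.feedback(G, H)
print(T)  # expected: 1/(s + 3)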
Further concepts and methods of simplifying block diagrams are given in Appendix A. Although not explicitly necessary for the module, this content is invaluable, though often overlooked, in systems design and control theory.
Hint: You may need to look up the theory of superposition, which is covered in Appendix A.
Consider the disturbances as additional “inputs”.
Note: This activity involves additional theoretical content. You are not explicitly expected
to be able to do this considering the contents of the module.
The video Block algebra (18:41) [5] offers a summary of the block diagram reduction techniques covered in detail in Appendix A. It is offered as an alternative to working through the entire appendix and can help with a general understanding of the concepts.
Discussion
A very important question you may have asked yourself is: “In all of the learning content, examples and activities above, why were all the variables functions of s? Surely the most logical choice is t, given that the signals and systems we wish to control are time-varying?”
While this may seem logical, there is a method to the madness. “s” is a special variable that relates to the time variable t. However, since the idea is to represent systems with more complicated behaviour, such as integrodifferential relationships, the standard block diagram interpretation would be difficult at best in the time domain. It would be convenient if there were a way to convert calculus problems, as are common with most systems, into algebra problems. Then systems could be easily represented in this converted form with block diagrams. This is achieved by the Laplace transform, which converts time-domain signals into the complex-frequency domain, or s-domain.
The next learning unit, Chapter 2: Modelling in the Frequency Domain, explores this concept
and other principles necessary for control and system analysis.
Feedback
Open-loop systems:
• heater
• toaster
• washing machine
• ... ???
Closed-loop systems:
• air conditioner
• DC-AC inverter
• vegetable peeler
• ... ???
[Diagram: a simplified equivalent block diagram: X(s) passes through a gain block of 2 and enters a summing point negatively, while the remaining inputs enter positively, producing Z(s)]
$$C(s) = G_1(s)R(s) + G_2(s)R(s) + G_3(s)R(s) = \left(G_1(s) + G_2(s) + G_3(s)\right)R(s)$$
References
[1] N. Nise, Control Systems Engineering, 8th (EMEA) Edition. Hoboken, NJ: John Wiley &
Sons, Inc, 2019, isbn: 9781119590132.
[2] B. P. Lathi, Signal Processing and Linear Systems. New York: Oxford University Press,
2010, isbn: 9780195392579.
[3] R. Burns, Advanced Control Engineering. Oxford Boston: Butterworth-Heinemann, 2001,
isbn: 0750651008.
[4] Brian Douglas, Control Systems Lectures - Closed Loop Control (9:12), 2012. [Online]. Available: https://www.youtube.com/watch?v=O-OqgFE9SD4.
[5] controltheoryorg, Block algebra (18:41), 2012. [Online]. Available: https://www.youtube.com/watch?v=pV0wAM6Uldc.
Learning Unit 2:
Modelling in the Frequency Domain
This learning unit covers the learning content necessary for analysing a linear, time-invariant,
causal (LTIC) system in the so-called frequency domain.
Study this learning unit in conjunction with Chapter 2 of the prescribed textbook, Control
Systems Engineering, by Norman Nise [1].
3. Solve D.E.’s using the Transfer function and associated techniques thereof;
For most time-domain signals in real applications, we analyse the functions from time t = 0
[2]. We can simplify the integral limits to the following [2]:
$$\mathcal{L}\{f(t)u(t)\} = F(s) = \int_{-\infty}^{\infty} \left(f(t)u(t)\right)e^{-st}\,dt = \int_{0^-}^{\infty} f(t)e^{-st}\,dt \qquad (2.2)$$
which is known as the unilateral Laplace transform [2]. The function u(t) is the unit-step (or
Heaviside) function and is defined as [1–3]:
$$u(t) = \begin{cases} 1, & t \geq 0 \\ 0, & t < 0 \end{cases} \qquad (2.3)$$
However, this module will move away from the mechanics of actually solving the Laplace
Transform integral mathematically. Instead, the focus will be on the techniques to evaluate
integrodifferential equations using transforms symbolically. This is assisted with Transform
Tables C.1, C.2 and C.3 found in Appendix C (compiled with reference to various sources
cited therein). To start, an example is given that touches on the knowledge you are eventually
expected to know. Don’t worry if this all seems a bit out of place or if it doesn’t make sense
entirely. An example of a standard circuit equation with a resistor, capacitor and inductor is
given.
Example 2.1
[Circuit: a 1 V source switched into the loop at t = 0, in series with a 1 H inductor (voltage v_L), a 1 F capacitor (voltage v_C) and a 1 Ω resistor (voltage v_R), all carrying the loop current i]
Find the loop current of the circuit, assuming all initial conditions of the circuit are zero.
Solution:
The element equations in the time domain are
$$v_L(t) = L\frac{di(t)}{dt} = (1)\frac{di(t)}{dt} \qquad v_C(t) = \frac{1}{C}\int_0^t i(t)\,dt = (1)\int_0^t i(t)\,dt$$
$$v_R(t) = Ri(t) = (1)i(t) \qquad v_{in} = u(t)$$
Applying Kirchhoff's Voltage Law around the loop gives
$$\frac{di(t)}{dt} + \int_0^t i(t)\,dt + i(t) = u(t)$$
Next, we transform the equation from the time-domain into the s-domain using the
Laplace tables. Here the functions are standard base functions and are straightforward.
$$\left[sI(s) - i(0)\right] + \frac{1}{s}I(s) + I(s) = \frac{1}{s}$$
Solving for I(s) through algebraic manipulation, we get
$$\left[sI(s) - 0\right] + \frac{1}{s}I(s) + I(s) = \frac{1}{s}$$
$$s^2I(s) + sI(s) + I(s) = 1$$
$$(s^2 + s + 1)\,I(s) = 1$$
$$I(s) = \frac{1}{s^2 + s + 1}$$
This is the complex frequency (s-domain) characteristic of the loop current. But the
question implicitly asks for the time-domain solution. However, I(s) is not in standard
form to transform back. This is resolved by completing the square in the denominator
and introducing factors as necessary. Therefore,
$$I(s) = \frac{1}{s^2 + s + 1} = \frac{1}{s^2 + s + \left(\frac{1}{2}\right)^2 - \left(\frac{1}{2}\right)^2 + 1} = \frac{1}{\left(s + \frac{1}{2}\right)^2 - \frac{1}{4} + 1} = \frac{1}{\left(s + \frac{1}{2}\right)^2 + \frac{3}{4}}$$
$$I(s) = \sqrt{\frac{4}{3}}\cdot\frac{\sqrt{\frac{3}{4}}}{\left(s + \frac{1}{2}\right)^2 + \frac{3}{4}}$$
Using the Laplace Tables in Appendix C we get a final answer for the loop current as
$$i(t) = \sqrt{\frac{4}{3}}\; e^{-\frac{t}{2}} \sin\!\left(\sqrt{\frac{3}{4}}\,t\right) u(t) \qquad (2.5)$$
Conceptually, this is a simple exponentially decaying sinusoid.
Activity 2.1
Design the circuit from Example 2.1 above in EasyEDA. Use the following netlist command:
Confirm that the simulated waveform and the loop current equation are in agreement.
Note: This can also be cross-verified with MATLAB by calculating the current i(t) and
plotting the results.
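As a minimal sketch of such a cross-check, assuming Python with scipy in place of MATLAB: since the source is a unit step, the loop current of Example 2.1 is the step response of I(s)/V(s) = s/(s² + s + 1).

import numpy as np
from scipy import signal

# Loop-current transfer function I(s)/V(s) = s/(s^2 + s + 1) for the series
# RLC loop of Example 2.1 (R = 1 ohm, L = 1 H, C = 1 F)
sys = signal.TransferFunction([1, 0], [1, 1, 1])

# The source is a unit step, so i(t) is the step response of this system
t, i_sim = signal.step(sys, T=np.linspace(0, 12, 600))

# Analytic loop current from Equation 2.5
i_exact = np.sqrt(4/3) * np.exp(-t/2) * np.sin(np.sqrt(3/4) * t)
print(np.max(np.abs(i_sim - i_exact)))  # should be close to zero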
Example 2.1 above gives a glimpse into what will be expected in answering a question: utilising the knowledge of time-domain analysis (such as Kirchhoff’s voltage and current laws, and loop, mesh and nodal analysis of circuits) and transforming the resulting equations into the complex frequency domain to solve for the system behaviour. This enables a designer to understand not only the response over time, but the fundamental character of a circuit system’s response to any frequency or spectrum of frequencies.
If you are uncertain of the mechanics of the Laplace transform and the concepts of its use, watch
the following video Lesson 1 - Laplace Transform Definition (Engineering Math) (28:53) [4].
Laplace Transforms will play a central role in this learning unit. For this reason, this skill must be adequately developed before continuing. There are additional prerequisite skills required for performing Laplace Transforms as needed: Polynomial Long Division and Partial Fraction Decomposition, both covered in theory and examples in Appendix B. These are assumed known moving forward. As the main focus here, simple examples and exercises of Laplace transforms are provided for revision.
The first term in red is not a standard form and requires a bit more thought before transforming. The standard forms are sin(ωt) or t sin(ωt), but there is no t² sin(ωt). However, this can be approached by considering sin(ωt) as a “generic function” multiplied by t². If we look at Table C.3, the frequency-derivative property describes the transform of t multiplied with a generic function. We could use t²f(t) directly, but this requires two derivatives, and the quotient rule when differentiating the transform of sin(ωt). But the first derivative will produce the transform of t sin(ωt) anyway. So the smart approach is to do one derivative
The third term in blue is also in standard form, but don’t be fooled, it’s far simpler than it seems. The exponential is actually a constant (and don’t forget the negative in the final answer), i.e.
$$\mathcal{L}\{e^{-1}t^2\} = \frac{1}{e}\cdot\frac{2}{s^3}$$
The fourth term in cyan is also a standard form,
$$\mathcal{L}\{2e^{2t}\} = \frac{2}{s-2}$$
The fifth, in green (and remembering the negative sign in the final answer), is
$$\mathcal{L}\{e^{2t}\cos(3t)\} = \frac{s-2}{(s-2)^2 + 3^2}$$
Example 2.3
Find the Inverse Laplace Transform of the following. Perform the necessary algebraic manipulation and use the tables.
$$\frac{s^3 + 6s^2 + 11s + 5}{s^2 + 3s + 2}$$
Polynomial long division, followed by partial fraction decomposition of the remainder, gives
$$\frac{s^3 + 6s^2 + 11s + 5}{s^2 + 3s + 2} = s + 3 - \frac{1}{(s+1)(s+2)} = s + 3 + \frac{1}{s+2} - \frac{1}{s+1}$$
Finally, all the terms are in a standard form and the following results
$$\mathcal{L}^{-1}\left\{\frac{s^3 + 6s^2 + 11s + 5}{s^2 + 3s + 2}\right\} = \mathcal{L}^{-1}\left\{s + 3 + \frac{1}{s+2} - \frac{1}{s+1}\right\}$$
$$= \mathcal{L}^{-1}\{s\} + \mathcal{L}^{-1}\{3\} + \mathcal{L}^{-1}\left\{\frac{1}{s+2}\right\} - \mathcal{L}^{-1}\left\{\frac{1}{s+1}\right\}$$
$$= \frac{d}{dt}\delta(t) + 3\delta(t) + e^{-2t} - e^{-t}$$
Activity 2.2
Find the Laplace Transform of the following
a)
b)
c)
$$\frac{1}{\sqrt{t-1}}\;e^{3t-3}\,u(t-1)$$
Final answer:
a)
$$\frac{6}{(2+s)^4} + \frac{s}{s^2 - 9} - \frac{2s(s^2 - 12)}{(s^2 + 4)^3}$$
b)
$$\frac{s^2 + 2s - 1}{\sqrt{2}\,(s^2 + 1)^2} - \frac{\sqrt{3}\,s^2 + (4 + 6\sqrt{3})s + 5\sqrt{3} + 12}{2(s^2 + 6s + 13)^2}$$
c)
$$\frac{e^{-s}\sqrt{\pi}}{\sqrt{s - 3}}$$
Activity 2.3
Find the Inverse Laplace Transform of the following
a)
$$\frac{2}{s^2 + 2s + 5}$$
b)
$$\frac{4s + 3}{s^2 + 2s + 5}$$
c)
$$\frac{\Gamma(4/3)\,e^{-2s}}{(s + 1)^{4/3}}$$
d)
Final answer:
a)
$$e^{-t}\sin 2t$$
b)
$$\frac{e^{-t}}{2}\left(8\cos(2t) - \sin(2t)\right)$$
c)
$$e^{2-t}\,\sqrt[3]{t - 2}\;u(t - 2)$$
d)
Outcome 1 is complete.
A system can be typically characterised by how it transforms or changes its input r(t) to
the output c(t) [2]. However, the representation of this practically in the time domain is a
rather nasty, generalised integro-differential equation. This can usually be transformed into a
convenient pure differential equation [2]:
$$a_n\frac{d^n}{dt^n}c(t) + a_{n-1}\frac{d^{n-1}}{dt^{n-1}}c(t) + \cdots + a_0c(t) = b_m\frac{d^m}{dt^m}r(t) + b_{m-1}\frac{d^{m-1}}{dt^{m-1}}r(t) + \cdots + b_0r(t) \qquad (2.6)$$
However, this is not much of an improvement either. The previous section explained that the Laplace Transform can change rather challenging integrodifferential equations into algebra problems [1–3]. Therefore a system is equally well described by the Laplace transform of Equation 2.6. If the initial conditions are zero, then the expression is purely algebraic in s, which is the entire purpose of Laplace Transforms [1–3]. Additionally, we can then factor out the inputs and outputs, giving [1–3]:
$$\left(a_ns^n + a_{n-1}s^{n-1} + \cdots + a_0\right)C(s) = \left(b_ms^m + b_{m-1}s^{m-1} + \cdots + b_0\right)R(s)$$
With the knowledge of Laplace transforms and its associated techniques, this equation is simply
$$G(s) = \frac{C(s)}{R(s)} = \frac{P(s)}{Q(s)}$$
This is the fundamental principle of a transfer function [1–3]. A system is described in the s-domain as the output function C(s) divided by the input function R(s). Fundamentally, this is given by the input polynomial function P(s) divided by the output polynomial function Q(s).
Importantly, this characteristic transfer function of a system must be evaluated with all its
initial conditions at zero. This is known as the zero-state response [2]. If there are any initial
conditions, then these initial condition components form what is called the zero-input response
[2].
Additionally, once a transfer function has been found, the same definition can be used to
determine the output of the system to a given input [1–3]. Conversely, though not commonly
done, the input can be evaluated for the desired output. I.e.
$$C(s) = R(s)G(s) = R(s)\,\frac{P(s)}{Q(s)} \qquad (2.10)$$
Find the transfer function of the system described by
$$\frac{d^2}{dt^2}c(t) + 5\frac{d}{dt}c(t) + 6c(t) = \frac{d}{dt}r(t) + r(t)$$
Assume all initial conditions are zero. Transforming and factoring gives
$$G(s) = \frac{C(s)}{R(s)} = \frac{s+1}{s^2 + 5s + 6} = \frac{s+1}{(s+2)(s+3)}$$
whose impulse response, found by partial fractions and the tables, is
$$2e^{-3t} - e^{-2t}$$
Generalise the transfer function. What is special in terms of the impulse response of a system?
The contents of this section covered the mechanics of finding transfer functions. If you are struggling to understand the “bigger picture” and purpose of transfer functions, watch the following video, Control System Lectures - Transfer Functions (11:26) [5]. Applications of the transfer function and its concepts are covered further below.
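To connect the mechanics to the bigger picture numerically, here is a hedged sketch (Python with scipy, an assumption, since the module's practical tool is EasyEDA) checking the impulse response found above:

import numpy as np
from scipy import signal

# G(s) = (s + 1)/(s^2 + 5s + 6) from the example above
G = signal.TransferFunction([1, 1], [1, 5, 6])

# Its impulse response should match the tabulated answer 2e^{-3t} - e^{-2t}
t, g = signal.impulse(G, T=np.linspace(0, 5, 500))
print(np.max(np.abs(g - (2*np.exp(-3*t) - np.exp(-2*t)))))  # close to zero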
In this section, frequency-domain analysis is utilised with the fundamentals of circuit analysis.
The concept of translating a classical time-domain circuit into its frequency-domain equivalent
is formalised.
Before we can understand how to find the transfer functions of electrical networks, it is important to understand the fundamentals of the basic elements. The basic circuit elements (resistor,
capacitor and inductor) are transformed from the time domain to the frequency domain. This
will help guide the understanding of how more complex networks interact. These elements are
covered through examples and activities.
Solution:
Firstly, we need to find a suitable time-domain transfer function. For simplicity, let us assume that the voltage across the resistor is the desired output signal, and the current through the resistor is the designated input signal. Then we have the system equation
$$v_R(t) = Ri_R(t) \;\Rightarrow\; V_R(s) = RI_R(s) \;\Rightarrow\; G_R(s) = \frac{V_R(s)}{I_R(s)} = R$$
Put simply, the resistor in the frequency domain is still a pure scaling element. The resistor scales and translates a frequency-domain current into a frequency-domain voltage. The equivalent s-domain circuit is then
[Circuit: s-domain resistor, current I_R(s) through and voltage V_R(s) across the impedance R]
Solution:
Firstly, we need to find a suitable time-domain transfer function. For simplicity, let us assume that the voltage across the inductor is the desired output signal, and the current through the inductor is the designated input signal. Then we have the system equation
$$v_L(t) = L\frac{di_L(t)}{dt} \;\Rightarrow\; \mathcal{L}\{v_L(t)\} = \mathcal{L}\left\{L\frac{di_L(t)}{dt}\right\}$$
$$V_L(s) = L\left(sI_L(s) - i_L(0)\right) = LsI_L(s) - Li_L(0)$$
For simplicity, let us assume zero initial conditions, as indicated by $i_L(0) = 0$. Then,
$$G_L(s) = \frac{V_L(s)}{I_L(s)} = Ls$$
However, we can include the memory component $-Li_L(0)$ in the circuit model. Put simply, this is an “added” frequency-domain voltage at the output voltage $V_L(s)$. Importantly, this is a constant that is present forever in the frequency domain. It represents the memory of the inductor: a record of the state of the inductor for t < 0, before the start of the analysis. If it is non-zero, it necessarily and inherently affects the circuit for all time after the start time t = 0. Therefore the equivalent circuits are
[Circuits: s-domain inductor equivalents. Left: impedance Ls carrying I_L(s) with voltage V_L(s) (zero initial conditions). Right: impedance Ls in series with an initial-condition voltage source Li_L(0)]
[Circuit: s-domain capacitor, impedance 1/(Cs) carrying I_C(s) with voltage V_C(s)]
What do you think happens to each element’s impedance at low and at high frequency?
In the above Examples and Activities, we found the equivalent frequency-domain circuit element representation of the resistor, inductor and capacitor [1, 2]. These are summarised in
Figures 2.2a-2.2c. If you are still struggling with the concept of frequency domain equivalent
circuit elements, watch the following video, Laplace Domain Circuit Analysis (13:44) [6].
However, in these examples and activities, the general assumption is that the output signal is the voltage across the element, and the input is the current through the element. This generates the general transfer function
$$Z(s) = \frac{V(s)}{I(s)} \qquad (2.11)$$
This is the generic frequency-domain definition of impedance [1]. However, nothing specifically requires us to define the input as the current and the output as the voltage. This was an artificial choice.
The definition of impedance, as a complex variable built from circuit elements, is
$$Z(s) = R + X$$
where R is the resistance and X is the reactance. The reactance is a form of complex resistance attributed to capacitive and inductive elements. The reactance of an individual capacitor is $X_C = \frac{1}{Cs}$, and the reactance of an individual inductor is $X_L = Ls$. In this way the resistance, and in general the impedance, represents the transfer function from a device’s input current to the output voltage across it. An important note on impedance is that it is frequency-dependent, specifically through the reactance component.
There is a similar but alternative concept to resistance, which takes the perspective of the voltage as the input and the current as the output. This is referred to as admittance, given by
$$Y(s) = \frac{I(s)}{V(s)} = \frac{1}{Z(s)} \qquad (2.13)$$
Try to find the equivalent admittance transfer functions and the associated equivalent
circuit diagrams of the
a) resistor
b) inductor
c) capacitor
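To make the impedance/admittance perspective concrete, here is a small Python sketch; the helper names Z_R, Z_L, Z_C and Y are illustrative assumptions, not from the textbook:

import math

# Element impedances as functions of the generalised complex frequency s
def Z_R(R, s): return R            # resistor: pure scaling
def Z_L(L, s): return L * s        # inductor: reactance X_L = Ls
def Z_C(C, s): return 1 / (C * s)  # capacitor: reactance X_C = 1/(Cs)

def Y(Z): return 1 / Z             # admittance is the reciprocal, Y(s) = 1/Z(s)

# Example: a 1 uF capacitor on the j*omega axis at 1 kHz (sigma -> 0)
s = 1j * 2 * math.pi * 1000
print(Z_C(1e-6, s))     # about -159j ohms: impedance falls as frequency rises
print(Y(Z_C(1e-6, s)))  # the corresponding admittance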
Now that there is an understanding of how basic circuit elements transform, this can be em-
ployed in frequency-domain circuit analysis. Essentially, we do not need to actually transform
the differential equation, but rather simply transform the elements to their frequency-domain
circuit representation, and apply circuit theory as normal. This is explored further in the next
subsection 2.4.2.
However, if you are still struggling with the concept of s-domain circuit elements, watch Laplace
Transforms of Circuit Elements (16:06) [7].
A major benefit of understanding how basic circuit elements can transform into a frequency
domain equivalent, is that a circuit can be entirely transformed into the frequency domain.
Since Kirchhoff’s voltage and current laws (giving rise to mesh and nodal analysis respectively)
are essentially only additive, these laws are readily used in the frequency domain equivalent
circuit [1, 2]. This is possible since the Laplace transform of a sum (of elements) is the sum of
the transform (of each element). This is a powerful technique, not only for finding the transfer
function of a circuit, but for characterising the frequency response of circuits.
There is however a point of practicality behind this. For the examples in this section, we
will primarily look at circuits called passive filters. Once the concept of frequency-domain
circuit analysis is covered with these examples, some more complex networks of elements will
be investigated.
Find the transfer function of the circuit below, taking v_C(t) as the output.
[Circuit: series RC filter; the source v(t) drives current i(t) through R, with output v_C(t) across C]
Solution:
The translated circuit is simply given as
[Circuit: the s-domain equivalent; V(s) in series with R and the capacitor impedance 1/(Cs), output V_C(s) across the capacitor]
Using the simple voltage divider rule, we get the following equation
$$V_C(s) = \frac{\frac{1}{Cs}}{R + \frac{1}{Cs}}\,V(s)$$
Manipulating the equation we get
$$\frac{V_C(s)}{V(s)} = \frac{1}{1 + RCs}$$
The same result follows when using mesh analysis, with $V(s) = V_R(s) + V_C(s)$, $V_R(s) = RI(s)$, and $I(s) = CsV_C(s)$.
Find the transfer function of the circuit below, taking v_R(t) as the output.
[Circuit: series RL filter; the source v(t) drives current i(t) through L, with output v_R(t) across R]
Solution:
The translated circuit is simply given as
[Circuit: the s-domain equivalent; V(s) in series with Ls and R, output V_R(s) across the resistor]
Using the simple voltage divider rule, we get the following equation
$$V_R(s) = \frac{R}{R + Ls}\,V(s)$$
Manipulating the equation we get
$$\frac{V_R(s)}{V(s)} = \frac{1}{1 + \frac{L}{R}s}$$
The same result follows when using mesh analysis, with $V(s) = V_R(s) + V_L(s)$, $V_R(s) = RI(s)$, and $I(s) = \frac{V_L(s)}{Ls}$.
• Place a voltage probe at the required output (and don’t forget the ground reference).
• Set the voltage source attribute AC Amplitude = 1 (right-click on source > Attributes > scroll down to the setting).
• Use the following AC analysis spice-net command: .ac dec 10 0.001 10 (Simulation tab > Simulation Setting [Ctrl+J] > AC Analysis: Type=Decade, Points=10, Start=0.001, Stop=10).
Hint: The results will display the frequency in units of Hertz. However, the s = σ + jω parameter is the generalised complex frequency. To relate back to pure frequency we let σ → 0 and s = jω, but ignore the imaginary value. The value ω has units of radians/sec, and relates to pure frequency f by ω = 2πf. Take this into consideration when finding the half-value throughput frequency.
Find the transfer functions of each high-pass filter variant. Confirm the high-pass behaviour analytically and find the half-value throughput frequency. Practically simulate the circuit using EasyEDA to confirm the behaviour as well. Use the same settings as in Activity 2.11.
In this section, some more advanced circuit networks are analysed in the frequency domain. Here, specifically, the applications of nodal and mesh analysis will be employed out of necessity, as the circuits become more intricate.
The context of study here will also continue on the point of practicality, in that multiple order
filters will be analysed. However, the fundamental concepts of the order of systems, their
(filter) types and design parameters are covered in detail later in Chapter 4. For now, the
general process of finding the transfer function and interpretation of the basic behaviour of
more intricate networks of passive elements are covered here.
Find the transfer function of the circuit below, taking v_C(t) as the output.
[Circuit: series RLC filter; the source v(t) drives current i(t) through L and R in series, with output v_C(t) across C]
Solution:
As before, the circuit is easily transformed into the frequency-domain equivalent.
[Circuit: the s-domain equivalent; V(s) in series with Ls, R and 1/(Cs), output taken across the capacitor]
From here the voltage divider rule can be used again to find the output.
$$V_C(s) = \frac{\frac{1}{Cs}}{Ls + R + \frac{1}{Cs}}\,V(s) \;\Rightarrow\; \frac{V_C(s)}{V(s)} = \frac{\frac{1}{LC}}{s^2 + \frac{R}{L}s + \frac{1}{LC}}$$
What are the differences and similarities between the low-pass filter in Activity 2.11 and this circuit?
Hint: Place two voltage probes, one at the inductor-resistor node and another at the resistor-capacitor node. What is the low-frequency behaviour of the RL and RC filters from the previous activity compared to this RLC filter? What about the high-frequency behaviour? Is the critical frequency the same?
Example 2.10
Find the transfer function $\frac{I_{R2}(s)}{V(s)}$ of the circuit network below.
[Circuit: the source v(t) feeds R1; C1 connects the R1–R2 junction to ground; R2 carries i_{R2}(t) to C2, which connects to ground]
Solution:
The frequency-domain circuit is seen below, with the addition of two loop currents, which will be used for mesh analysis.
[Circuit: the s-domain equivalent; V(s) drives mesh current I1 around the loop containing R1 and 1/(sC1); mesh current I2 circulates through 1/(sC1), R2 and 1/(sC2), with I_{R2}(s) through R2]
From the circuit we can see that $I_2 = I_{R2}(s)$. A system of equations can be generated from each of the loop currents.
For loop current I1 we use Kirchhoff’s Voltage Law (KVL) to define the loop voltage. In
this loop I1 goes through the voltage source, resistor R1 and capacitor C1 . The standard
defined polarities of the voltages are from the perspective of I1 . So for the voltage source,
I1 enters the negative voltage reference of the source, thus the polarity of the source is
negative. The voltage on the resistor R1 is not defined, but we can simply assume that
the voltage drops in the direction of the loop current, since the frame of reference is the
loop current I1 . The capacitor C1 is a bit more tricky. The capacitor actually has two
loop currents passing through it. Since the reference is I1 , in this circumstance, we use
the direction of I1 to define the positive current flow. Since I2 is opposed to this current,
the net current through C1 is (I1 − I2 ) in the direction of I1 . Therefore,
$$-V(s) + V_{R1} + V_{C1} = 0 \;\Rightarrow\; V(s) = I_1R_1 + \frac{(I_1 - I_2)}{sC_1}$$
$$V(s) = \left(R_1 + \frac{1}{sC_1}\right)I_1 + \left(\frac{-1}{sC_1}\right)I_2 \qquad (2.16)$$
A similar approach is applied to the loop current I2 . Note, the major difference is that
the frame of reference is the loop current I2 . Therefore, the net current through C1 will
now be (I2 − I1 ).
$$V_{R2} + V_{C2} + V_{C1} = 0 \;\Rightarrow\; 0 = I_2R_2 + \frac{I_2}{sC_2} + \frac{(I_2 - I_1)}{sC_1}$$
$$0 = \left(\frac{-1}{sC_1}\right)I_1 + \left(R_2 + \frac{1}{sC_1} + \frac{1}{sC_2}\right)I_2 \qquad (2.17)$$
Equation 2.16 and Equation 2.17 are a system of linear equations. This can be represented
as a linear matrix equation as follows.
$$\mathbf{v} = \mathbf{Z}\,\mathbf{i} \qquad (2.18)$$
$$\begin{bmatrix} V(s) \\ 0 \end{bmatrix} = \begin{bmatrix} R_1 + \frac{1}{sC_1} & \frac{-1}{sC_1} \\ \frac{-1}{sC_1} & R_2 + \frac{1}{sC_1} + \frac{1}{sC_2} \end{bmatrix} \begin{bmatrix} I_1 \\ I_2 \end{bmatrix} \qquad (2.19)$$
The linear matrix equation can then be solved. However, in this circumstance, we are
only interested in the loop current I2 since it is equal to the current IR2 (s) which is the
output of interest. To solve for I2 specifically, Cramer’s Rule can be used. This uses the
determinant of the sub-matrix ZI2 and the determinant of the matrix Z.
N.B.: The sub-matrix ZI2 is obtained by replacing the sub-vector in matrix Z corre-
sponding to the entries of the variable of concern (i.e. I2 , thus the second column of Z),
with the solution vector v. Therefore
$$I_2 = \frac{\det(\mathbf{Z}_{I_2})}{\det \mathbf{Z}} = \frac{\begin{vmatrix} R_1 + \frac{1}{sC_1} & V(s) \\ \frac{-1}{sC_1} & 0 \end{vmatrix}}{\det \mathbf{Z}} = \frac{\left(R_1 + \frac{1}{sC_1}\right)(0) - \left(\frac{-1}{sC_1}\right)V(s)}{\left(R_1 + \frac{1}{sC_1}\right)\left(R_2 + \frac{1}{sC_1} + \frac{1}{sC_2}\right) - \left(\frac{-1}{sC_1}\right)\left(\frac{-1}{sC_1}\right)}$$
$$\Rightarrow\; I_2 = \frac{\frac{1}{sC_1}\,V(s)}{R_1R_2 + \frac{R_1}{sC_1} + \frac{R_2}{sC_1} + \frac{R_1}{sC_2} + \frac{1}{s^2C_1C_2} + \frac{1}{s^2C_1^2} - \frac{1}{s^2C_1^2}}$$
Simplifying, the transfer function is
$$\frac{I_{R2}(s)}{V(s)} = \frac{sC_2}{R_1R_2C_1C_2\,s^2 + \left(R_1C_1 + R_1C_2 + R_2C_2\right)s + 1}$$
Note: There is another more useful form of this transfer function in terms of its practical realisation and behaviour. This will become relevant later once the concepts of the natural frequency and damping coefficient are covered. In the meantime, simply know that this form exists, as given below.
$$G(s) = \frac{I_{R2}(s)}{V(s)} = \frac{\sqrt{\dfrac{C_2}{R_1R_2C_1}}\cdot\dfrac{1}{\sqrt{R_1R_2C_1C_2}}\;s}{s^2 + 2\left(\dfrac{R_1C_1 + R_1C_2 + R_2C_2}{2\sqrt{R_1R_2C_1C_2}}\right)\dfrac{1}{\sqrt{R_1R_2C_1C_2}}\;s + \dfrac{1}{R_1R_2C_1C_2}}$$
Activity 2.14
In Example 2.10 you were asked to find the transfer function of the mesh network with the current i_{R2}(t) as the output for the input v(t). Using the same circuit, find the transfer function if v_{C2}(t) is the desired output.
Final answer:
$$\frac{V_{C2}(s)}{V(s)} = \frac{1}{R_1R_2C_1C_2\,s^2 + \left(R_1C_1 + R_1C_2 + R_2C_2\right)s + 1}$$
There is an intuitive way to translate a frequency-domain circuit (which inherently has a system of equations describing it) into a linear matrix algebra form. There are two ways of doing this, depending on whether mesh analysis (as Kirchhoff’s Voltage Law) or nodal analysis (as Kirchhoff’s Current Law) is used.
Consider the mesh analysis method, as demonstrated in Example 2.10, from which the concept
is generalised. Equation 2.18 is the matrix equation used to describe the system [1]. Incidentally,
it is synonymous with the standard v = Zi Ohm’s Law equation, where v is the applied voltage,
i is the current through the element and Z is the element impedance, but these parameters are
now vectors and matrices [1]. To translate a circuit, first identify the necessary meshes [1, 8].
For each mesh, there is an assumed mesh current and direction [1, 8]. This is the reference
perspective for this mesh (In general, any given mesh may have other mesh currents flowing
through elements in the particular mesh) [1, 8]. The mesh currents then form the current vector
[1]:
$$\mathbf{i} = \begin{bmatrix} i_{\text{mesh 1}} \\ i_{\text{mesh 2}} \\ \vdots \\ i_{\text{mesh }n} \end{bmatrix}$$
The algebraic sum of the voltage of all the sources in a given mesh, referenced to the mesh
current polarity, defines the mesh voltage vector [1]
$$\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} = \begin{bmatrix} \sum \text{voltages of sources in mesh 1} \\ \sum \text{voltages of sources in mesh 2} \\ \vdots \\ \sum \text{voltages of sources in mesh } n \end{bmatrix}$$
As mentioned and seen in Example 2.10, Z = [zij ] is a matrix. The entry zij corresponds to
the sum of the impedances of the elements common between the i and jth mesh [1]. So for z11 ,
this corresponds to the sum of impedance common to mesh 1 and mesh 1 [1]. This is simply
the sum of impedances in mesh 1. For entry z12 , this is the sum of impedances common to
mesh 1 and mesh 2 [1]. In Example 2.10 this was simply C1, or rather $\frac{1}{sC_1}$. This will always
be a symmetric square matrix, since entries that are common to mesh i and mesh j, will be
identical to entries common to mesh j and mesh i [1]. The diagonal entries will simply be the
total sum of the impedance in the particular ith mesh [1]. i.e.
$$\mathbf{Z} = \begin{bmatrix} z_{11} & z_{12} & \cdots & z_{1n} \\ z_{21} & z_{22} & \cdots & z_{2n} \\ \vdots & \vdots & & \vdots \\ z_{n1} & z_{n2} & \cdots & z_{nn} \end{bmatrix}$$
where the diagonal entries are $z_{ii} = \sum \text{impedance in mesh } i$ and, for $i \neq j$, the off-diagonal entries are $z_{ij} = -\sum \text{impedance common to meshes } i \text{ and } j$.
Note that the voltage vector can actually be split into a component matrix. The use of this is
to explicitly state the sources in each mesh. This can be defined as
$$\mathbf{v} = \mathbf{B}\,\mathbf{v}_s = \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1n} \\ b_{21} & b_{22} & \cdots & b_{2n} \\ \vdots & \vdots & & \vdots \\ b_{n1} & b_{n2} & \cdots & b_{nn} \end{bmatrix} \begin{bmatrix} v_{\text{source 1}} \\ v_{\text{source 2}} \\ \vdots \\ v_{\text{source } n} \end{bmatrix}$$
The entries bij are either 1, −1, or 0. This indicates if the source is present on the particular
mesh, and if it is, the sign indicates the source polarity referenced from the perspective of the mesh current. The index i simply indicates in which ith mesh the voltage source is being applied, and the index j indicates which specific jth source is being applied. For example, the B matrix and v vector in Example 2.10 would be $\begin{bmatrix}1\\0\end{bmatrix}$ and $V(s)$ respectively. The v vector is reduced to a scalar, since there is only one source. The B matrix is consequently reduced to a simple vector as well, and says that the first (and only) source is applied in mesh 1, but not in mesh 2.
The same basic concepts apply to nodal analysis, with some slight differences. Nodal analysis
uses Kirchhoff’s Current Law, the sum of all the currents into a node sum to zero [1, 8]. The
node itself is at a particular nodal voltage (this takes the place of the mesh current variables)
[1]. The net current flowing between two nodes is equal to the voltage difference between the
two nodes multiplied by the admittance between the nodes [1]. This concept is then extended
into a matrix equation as follows
$$\mathbf{i} = \mathbf{Y}\,\mathbf{v}$$
The admittance matrix Y is similar to the impedance matrix mentioned above. However, the
perspective is from the nodes, not meshes. Therefore the yij element is indexed according to
the admittance common to the ith node and jth node.
$$\mathbf{Y} = \begin{bmatrix} y_{11} & y_{12} & \cdots & y_{1n} \\ y_{21} & y_{22} & \cdots & y_{2n} \\ \vdots & \vdots & & \vdots \\ y_{n1} & y_{n2} & \cdots & y_{nn} \end{bmatrix}$$
where the diagonal entries are $y_{ii} = \sum \text{admittance connected to node } i$ and, for $i \neq j$, the off-diagonal entries are $y_{ij} = -\sum \text{admittance between nodes } i \text{ and } j$.
The current vector is simply the sum of all the source currents into the particular node. I.e.
$$\mathbf{i} = \begin{bmatrix} \sum \text{source currents into node 1} \\ \sum \text{source currents into node 2} \\ \vdots \\ \sum \text{source currents into node } n \end{bmatrix}$$
The i index refers to the particular ith node and the j index refers to the particular jth source.
The bij value is either ±1 or 0, with the same reasoning as before. The nodal analysis technique
is now demonstrated.
If there is some confusion on the generalised technical aspects then it is recommended to watch
the following two videos which show how to find the matrices in mesh and nodal analysis.
See How to obtain Matrix by Inspection in Mesh Analysis( Simple Method with Animation)
(7:08) [9] for a mesh example, and How to obtain Matrix by Inspection in Nodal Analysis
(Easy Technic with Animation) (9:00) [10] for a node example. One cautionary note: instead of the parameter s, the imaginary number j is used. This is permissible since the inductor and capacitor are special cases where the real (resistive) component is zero, and s is a general complex number, i.e. s = σ + jω. If you are comfortable with the concepts, look at the following example and try the following activity.
Example 2.11
Find the transfer function $\frac{V_{C2}(s)}{I(s)}$ of the circuit network below, using nodal analysis.
[Circuit: a current source i(t) feeds node a; R1 connects node a to node b; L1 connects node b to ground; R2 connects node b to node c; C1 bridges nodes a and c; C2 connects node c to ground, with output v_{C2}(t) across C2]
Solution:
Since the question asks to evaluate the transfer function using nodal analysis, we first
need to identify the nodes, and then define the currents into (or out of) the nodes. We can
arbitrarily choose any direction for the currents, as long as the directions are consistent.
The nodes that are identified are va , vb , and vc with the corresponding currents.
[Circuit: the s-domain equivalent with nodes v_a, v_b and v_c; current source I(s) into v_a; admittance G1 between v_a and v_b; sC1 between v_a and v_c; G2 between v_b and v_c; 1/(sL1) from v_b to ground; sC2 from v_c to ground, with V_{C2}(s) across it; branch currents i1–i6 labelled]
We can now define the circuit matrix equation. This will be done incrementally, defining the equation for each node. For node $v_a$ we have admittances G1 and sC1 connected. The admittance common to $v_a$ and $v_b$ is G1; the admittance common to $v_a$ and $v_c$ is sC1. The source current connected to $v_a$ is I(s). This translates to
$$(G_1 + sC_1)v_a - G_1v_b - sC_1v_c = I(s)$$
Repeating the process for nodes $v_b$ and $v_c$ gives the full matrix equation
$$\begin{bmatrix} I(s) \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} G_1 + sC_1 & -G_1 & -sC_1 \\ -G_1 & G_1 + G_2 + \frac{1}{sL_1} & -G_2 \\ -sC_1 & -G_2 & sC_1 + G_2 + sC_2 \end{bmatrix} \begin{bmatrix} v_a \\ v_b \\ v_c \end{bmatrix}$$
But we are interested in the transfer function $\frac{V_{C2}(s)}{I(s)}$. Recognising that node $v_c = V_{C2}(s)$, we can solve for $v_c$ using Cramer's Rule:
$$V_{C2}(s) = \frac{\det(\mathbf{Y}_{v_c})}{\det(\mathbf{Y})} = \frac{\begin{vmatrix} G_1 + sC_1 & -G_1 & I(s) \\ -G_1 & G_1 + G_2 + \frac{1}{sL_1} & 0 \\ -sC_1 & -G_2 & 0 \end{vmatrix}}{\det(\mathbf{Y})} \qquad (2.22)$$
$$= \frac{\left(G_1G_2 + sC_1\left(G_1 + G_2 + \frac{1}{sL_1}\right)\right)I(s)}{\dfrac{C_1C_2L_1(G_1+G_2)s^3 + C_2(C_1 + G_1G_2L_1)s^2 + (C_1G_1 + C_1G_2 + C_2G_1)s + G_1G_2}{sL_1}}$$
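As with the mesh case, the nodal result can be cross-checked symbolically; a sketch assuming Python with sympy:

import sympy as sp

s, G1, G2, C1, C2, L1, Iin = sp.symbols('s G1 G2 C1 C2 L1 I_in')

# Admittance matrix by inspection for nodes (va, vb, vc)
Y = sp.Matrix([[G1 + s*C1, -G1, -s*C1],
               [-G1, G1 + G2 + 1/(s*L1), -G2],
               [-s*C1, -G2, s*C1 + G2 + s*C2]])
i = sp.Matrix([Iin, 0, 0])

# Node voltages; the output V_C2(s) is the voltage at node vc
va, vb, vc = Y.solve(i)
print(sp.cancel(vc / Iin))  # compare with the result of Equation 2.22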
Activity 2.15
In Activity 2.14 you were asked to find the voltage vC2 (t) using mesh analysis. Try to
find the transfer function using the nodal analysis method and its associated admittance
matrix equation.
Hint: The voltage source does not have a well-defined current. To resolve this issue the
voltage source, and the impedance R1 can be considered as a Thévenin source at nodes v1
and GND. The Norton equivalent source can be readily substituted at these nodes.
[Circuit: the Activity 2.14 network with nodes labelled; v(t) in series with R1 feeds node v1; C1 connects v1 to GND; R2 connects v1 to v2; C2 connects v2 to GND, with output v_{C2}(t) across C2]
Operational Amplifiers (opamps) can be analysed in the frequency domain in much the same way as in the time domain [1, 2]. Importantly, the rules for the ideal opamp remain the same. As a reminder, the basic rules for the ideal opamp are as follows [1, 11] (summarised here from the ideal model of Figure 2.3):
• the input impedance is infinite, $Z_i = \infty$, so no current flows into the input terminals;
• the output impedance is zero, $Z_o = 0$;
• the open-loop gain $A_o$ is infinite, so with negative feedback the differential input voltage $v_i(t)$ is driven to zero.
The ideal circuit model for the opamp is defined as a Thévenin-equivalent voltage-controlled voltage source [1, 11]. The voltage is dependent on the differential input $v_i(t)$ [1, 11]. Note that the opamp can be modelled more realistically by having a finite input impedance and a non-zero output impedance [1, 11]. The importance of this statement is that these are impedances, and can therefore include capacitive (or inductive) elements as well, adding to the model’s complexity and capturing realistic, intrinsic frequency behaviour [11]. The circuit diagram of the ideal opamp is seen in Figure 2.3 [1, 11]. For simplicity, this section will analyse ideal opamps.
[Figure 2.3: the ideal opamp model; a voltage-controlled voltage source Ao·vi(t) with input impedance Zi = ∞ and output impedance Zo = 0, inputs v+(t) and v−(t), differential input vi(t), and output vo(t).]
There is one basic opamp circuit that is analysed and explored here: the amplifier circuit. The amplifier has two sub-types, the inverting amplifier and the non-inverting amplifier. The difference between the two is simply the polarity of the output. The non-inverting amplifier simply scales the input by the gain factor; the inverting amplifier scales by the gain factor as well, but the gain is inherently "negative", thereby inverting the signal about the zero (reference) voltage. The circuit diagrams for the inverting and non-inverting amplifiers can be seen in Figure 2.4a and Figure 2.4b.
Final answer:

Inverting:        vout(t) = −(RF/RG) vin(t)        (2.24)
Non-inverting:    vout(t) = (1 + RF/RG) vin(t)     (2.25)

[Figure 2.4: (a) the inverting amplifier, with vin(t) applied through RG to the inverting input and feedback resistor RF; (b) the non-inverting amplifier, with vin(t) applied to the non-inverting input and the RF and RG divider from vout(t) to the inverting input.]
As can be seen in Activity 2.16, the gain is easily determined by the ratio of the negative
feedback resistor RF and the grounding resistor RG . This is then slightly modified for each of
the inverting and non-inverting amplifiers.
For the inverting amplifier, the gain is simply the ratio of the resistors, but negative, because of the inverting relationship. More accurately, the ground could be any reference voltage (this is particularly useful for single-rail supply opamps).

For the non-inverting amplifier, the gain ratio is added to 1. This essentially means that the non-inverting amplifier will always have a gain ≥ 1. Additionally, there is a very useful special case, where RF = 0 (and RG = ∞). This is known as a unity-gain buffer. Its usefulness lies in isolating signals between circuits and removing the effects of loading.
The opamp is an incredibly useful device and can achieve some complex tasks with relatively few
additional elements. The next few examples and exercises will explore the frequency domain
analysis of opamps explicitly, and some fundamental circuits that are invaluable in control
design.
The impedance equivalent circuits of the inverting and non-inverting opamps are easily obtained by replacing the resistors with generic impedance elements, as seen in Figure 2.5a and Figure 2.5b.
[Figure 2.5: the inverting and non-inverting amplifiers with the resistors replaced by generic impedances ZF and ZG.]
The impedance elements are generalised as having some resistance and reactance. An important
note is that the reactive component comes from inductance or capacitance, and their effects
are frequency dependent.
[Figure: an inverting opamp circuit with capacitor C as the input element and resistor R as the feedback element.]
Additionally, find the time-domain mathematical relationship between the input and
output. Assume all initial conditions are zero.
Solution:
Changing the circuit to the frequency domain gives
[Figure: the frequency-domain equivalent, with input impedance 1/(sC) and feedback resistance R.]
Using the gain equation of the inverting amplifier, Equation 2.24, we get

Vout(s) = −( R / (1/(sC)) ) Vin(s)

∴ Vout(s)/Vin(s) = −RCs

Using the Laplace Transform Tables, the time-derivative section of Table C.3, we get the time-domain relationship

vout(t) = −RC (d/dt) vin(t)
[Figure: an inverting opamp circuit with resistor R as the input element and capacitor C as the feedback element.]
Additionally, find the time-domain mathematical relationship between the input and
output. Assume all initial conditions are zero.
Answer:
integrator amplifiers will do. Why would these amplifiers not necessarily be ideal?
Note that the opamp circuits given above only used capacitors, and had no inductors. This
is due to practical implementation. Capacitors are much more readily available with a wider
variety in value, and with higher precision. Inductors usually do not have a wide variety of
values and usually (always) have low precision; furthermore, there are other practical limitations
such as saturation and hysteresis effects that make the use of inductors impractical.
However, this does not mean that inductors are not valid in the circuit analysis of opamps. Sometimes it becomes necessary to consider inductors due to parasitic inductance effects in real circuit applications. This is out of necessity and not a choice! The effects are undesirable, but also unavoidable! These effects usually matter in power applications and/or high-frequency (≥ 1 MHz) applications; however, this is beyond the scope of this module.
There is another reason why inductors are not used explicitly. This is explored in the next
activity.
[Figure: two inverting amplifier circuits for the activity, built from a resistor R and an inductor L.]
Now that some of the theory is covered, it would be nice to confirm the behaviour of the
integrator circuit and differentiator circuit with a simulation. The integrator is covered as a
simulation activity below. However, the differentiator amplifier will not be covered.
The differentiator amplifier is difficult to implement practically, for reasons that are explored in other modules and will not be explicitly stated here. It is not impossible to design a differentiator amplifier, but due to the resonance and stability issues that arise inherently from the circuit, as well as effects from real opamps, the circuit is difficult to build and sometimes even difficult to simulate with the desired ideal results.
• The input voltage is a square wave with a peak voltage of 1V with a DC offset of
0V . This is generated from a voltage source with the following settings.
– Type = PULSE
– Vinitial = 1
– Von = -1
– Tdelay = 1n
– Trise = 1n
– Tfall = 1n
– Ton = 0.5m
– Tperiod = 1m
• Place two voltage probes, one at the input and one at the output.
Confirm that the output of the circuit is the integral of the input.
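If you would like to cross-check the LTspice result numerically, the ideal integrator response to the same square wave can be simulated in MATLAB. This is a minimal sketch, assuming the Control System Toolbox (for tf and lsim) and the Signal Processing Toolbox (for square); the values R = 1 kΩ and C = 250 nF are the ones used in the related MATLAB task of Activity 3.2:

% Ideal inverting integrator, Vout(s)/Vin(s) = -1/(R*C*s)
R = 1e3; C = 250e-9;             % 1 kOhm and 250 nF
s = tf('s');
G = -1/(R*C*s);
t = 0:1e-6:3e-3;                 % 3 ms simulation window
u = square(2*pi*1e3*t);          % 1 kHz, +/-1 V square wave
lsim(G, u, t); grid on           % the output should be a triangle wave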
Outcome 4 is complete
Much of the same theoretical applications described in passive electrical networks can be readily
extended to mechanical networks as well. This is briefly explored in this section.
Passive mechanical systems have three basic components, similar to electrical systems. Mechan-
ical systems are characterised as having springs, masses and dampers. Consider the following
simple mechanical system
[Figure: a mass m1 attached to a spring k1 and a damper q1, with position x(t), velocity v(t), and applied force f(t).]
The mechanical system has a reference position relative to the mass m1 with direction defined
by x(t). Additionally, the mass has defined velocity v(t). The input to the system is the
applied force f (t), with the positive direction defined the same as the linear position. The
mechanical system can be defined with the following integrodifferential equations, relating
either the position to the applied force, or the velocity to the applied force.
m1 (d²/dt²) x(t) + q1 (d/dt) x(t) + k1 x(t) = f(t)        (2.27)

m1 (d/dt) v(t) + q1 v(t) + k1 ∫₀ᵗ v(τ) dτ = f(t)        (2.28)
Equation 2.27 is the force-position differential equation. Equation 2.28 is the force-velocity differential equation. However, an important observation is that Equation 2.28 has the exact same form as the differential equation for an RLC circuit. The direct consequence of this is that the force-velocity D.E. and voltage-current D.E. are analogues of one another. There is another simple analogue for the force-position D.E. in the electrical circuit domain, specifically the voltage-charge D.E. This should make sense since the time integral of an object's velocity is its position, and the time integral of a circuit element's current is the charge through that element.
In this equivalence, the concept of the mechanical impedance of mechanical elements is formed, with the mechanical impedance ZM(s) defined through the force-position reference, ZM(s) = F(s)/X(s).
For the more classical circuit analogue equivalent of mechanical impedance, i.e. the voltage-
current and force-velocity analogue, we get the following.
f(t) = K ∫₀ᵗ v(τ) dτ         F(s) = (K/s) V(s)        ZM(s) = K/s        (2.32)

f(t) = Q v(t)                F(s) = Q V(s)            ZM(s) = Q          (2.33)

f(t) = M (d/dt) v(t)         F(s) = M s V(s)          ZM(s) = M s        (2.34)
This is a very powerful and helpful conclusion. This enables an electrical engineer (such as
yourself as the reader) to have a conceptual understanding and capability to analyse mechan-
ical systems; even though this may be more necessary and formally described in the field of
mechanical engineering. If you are uncertain about how these differential equations result,
watch the following video, Inductors Capacitors and their Mechanical Analogs (9:25) [12].
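Equations 2.32 to 2.34 can be combined directly in MATLAB to inspect the frequency behaviour of a mass-spring-damper; a minimal sketch with illustrative (assumed) element values, requiring the Control System Toolbox:

% Force-velocity transfer function V(s)/F(s) = 1/ZM(s)
M = 1; Q = 0.5; K = 2;           % assumed mass, damping and spring values
s = tf('s');
Zm = M*s + Q + K/s;              % mechanical impedance F(s)/V(s), Eq. 2.32 to 2.34
G = 1/Zm;                        % velocity response to an applied force
bode(G); grid on                 % resonant peak near sqrt(K/M) rad/s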
This cross-domain analysis becomes more necessary in the modern world where electrical control
systems integrate with mechanical, and/or other systems. An additional advantage is that it
enables the mechanical system to be represented entirely as a circuit equivalent model. This
concept is explored in more detail.
Solution:
The important starting point is to realise the meaning of the mechanical system and its associated equation. Specifically, a force balance analysis is applied to the mass in the mechanical system. The input force f(t) is balanced by the forces due to the spring, the mass, and the damper, i.e.:

fM(t) + fQ(t) + fK(t) = f(t)

This is then represented in terms of the velocity and the parameters M, Q, and K, i.e.:

fM(t) + fQ(t) + fK(t) = M (d/dt) v(t) + Q v(t) + K ∫₀ᵗ v(τ) dτ = f(t)

FM(s) + FQ(s) + FK(s) = M s V(s) + Q V(s) + (K/s) V(s) = F(s)
From the frequency domain equation above, we can consider an equivalent circuit in
two different forms. We can choose if the force is equivalent to voltage and velocity to
current; or if force is equivalent to current and velocity equivalent to voltage. The choice
is arbitrary, but the important distinction is knowing that the force is the input and
velocity is the output, and that the choice of voltage or current equivalence reflects this
input-output requirement.
Consider the force as voltage: The force balance equation becomes a voltage balance
equation. This is directly related to Kirchhoff’s voltage law, as used in mesh analysis,
and the velocity is the mesh current.
[Figure: the series mesh equivalent in the time and frequency domains: a voltage source f(t) (F(s)) in series with elements M, Q and K (impedances Ms, Q and K/s), the mesh current being the velocity v(t) (V(s)).]
Consider the force as current: The force balance equation becomes a current balance
equation. This is directly related to Kirchhoff’s current law, as used in nodal analysis,
and the velocity is the nodal voltage.
This then gives the following equivalent circuit analysis equation (note the force balance
is identical as before):
force balance:    FM(s) + FQ(s) + FK(s) = M s V(s) + Q V(s) + (K/s) V(s) = F(s)

current balance:  IC(s) + IR(s) + IL(s) = Cs V(s) + (1/R) V(s) + (1/(Ls)) V(s) = I(s)
Knowing that this comes from a nodal analysis (by choice), and that nodal analysis
requires the expression of the circuit parameters in admittance form, the equivalent circuit
is then:
[Figure: the parallel nodal equivalent in the time and frequency domains: a current source f(t) (F(s)) in parallel with elements M, Q and K (admittances Ms, Q and K/s), the node voltage being the velocity v(t) (V(s)).]
Note that in much the same way that circuits can be represented easily in a matrix equivalent
form for both mesh and nodal analysis, the same applies to mechanical systems.
Activity 2.21
Find the equivalent electrical circuit of the following mechanical system. Use a mesh
analysis approach.
[Figure: two masses m1 and m2 with velocities v1(t) and v2(t); m1 is attached to spring k1 and friction q1, m2 to spring k2 and friction q2, with damper q3 between the masses; the force f(t) is applied to m1.]
Note: q1 and q2 are friction components of masses m1 and m2 respectively (and act as
damping terms). Find the transfer function of the mechanical system if f (t) is the input
and v2 (t) is the desired output.
Outcome 5 is complete.
In the beginning of this learning unit, it was mentioned that the analysis of linear, time-
invariant, causal (LTIC) systems is covered. Time-invariance and causality as theoretical con-
cepts will not be explored in this module, or more precisely non-time-invariant systems and
non-causal systems will not be discussed, partly because these systems are atypical in everyday
life. However, it is common enough that a system is non-linear, that it warrants a discussion
on the concept, and how to handle these systems in control theory.
The property of linearity in systems is specifically about the property of the system as a linear operator. Put simply, a linear operator is a mathematical object that has both of the following two properties:

• Homogeneity (scaling): if an input x is scaled by a constant c, the output y is scaled by the same constant. I.e. if x → y, then cx → cy [1, 2].
• Additivity: if two inputs x1 and x2 are applied together, the total effective output is the sum of the outputs y1 + y2. I.e. if x1 → y1 and x2 → y2, then x1 + x2 → y1 + y2 [1, 2].
The combination of these two properties is known as the property of superposition [1, 2]. For
more clarity, watch the following video Linearity: Definition (7:48) [13].
Linearity as a property can apply to functions or even mathematical operations. For example,
differentiation is considered a linear operator and is the premise of the D-operator; the Laplace
transform is also a linear operator. A subtlety here is that the Laplace transform can change
the D-operator into a polynomial of s. A differential equation that consists of only successive
derivatives is linear.
A non-linear system is either not homogeneous, or not additive. In the context of systems
dealt with here, this usually occurs because the input is acted upon or the operator inherently
contains a non-linear function. There is no easy way of knowing if an operator is non-linear other
than checking the linearity property. However, the example below may assist with developing
some intuition.
a) D{x(t)} = (d²/dt²) x(t) + 2 (d/dt) x(t) − x(t) = (D² + 2D − 1) x(t)

b) R{f(x)} = 1/f(x)

c) P{x(t)} = (x(t))²

d) x(t) (d²/dt²) x(t) + 2 (d/dt) x(t) − x(t)

e) (d²/dt²) x(t) + 2 (d/dt) x(t) − ln(x(t))
a) is linear, since differentiation (and sums of derivatives) satisfies both homogeneity and additivity.

b) is non-linear, since R{f(x) + g(x)} = 1/(f(x) + g(x)) ≠ 1/f(x) + 1/g(x) = R{f(x)} + R{g(x)}

c) is non-linear, since P{cx(t)} = c²(x(t))² ≠ c(x(t))² = cP{x(t)}

d) and e) are likewise non-linear: d) contains the product of x(t) with its own derivative, and e) contains the non-linear function ln(x(t)).
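Intuition can also be built numerically: pick sample inputs and test the two properties directly. The sketch below (plain MATLAB) checks homogeneity for the squaring operator of part c); a single failure of either property is enough to conclude non-linearity:

% Numerical spot-check of homogeneity for P{x} = x^2
P = @(x) x.^2;                  % the operator from part c)
x = [1 2 3]; c = 5;
lhs = P(c*x);                   % P{c*x} = c^2 * x.^2
rhs = c*P(x);                   % c * P{x}
homogeneous = isequal(lhs, rhs) % logical 0: the property fails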
2.6.2 Linearisation
The process of linearisation in this module utilises the Taylor series expansion of a function.
However, for practical purposes, it is usually sufficient to truncate the Taylor series to a first-
order Taylor polynomial approximation. The Taylor series of an arbitrary function f (x) about
the point x0 is
f(x) = Σ_{n=0}^{∞} [ f⁽ⁿ⁾(x₀)/n! ] (x − x₀)ⁿ        (2.35)

     = f(x₀) + (f′(x₀)/1!)(x − x₀) + (f″(x₀)/2!)(x − x₀)² + (f‴(x₀)/3!)(x − x₀)³ + · · ·        (2.36)
The first-order approximation of f(x) about x = x₀ is the (linear) Taylor polynomial

f(x) ≈ f(x₀) + f′(x₀)(x − x₀)

This can be interpreted in one of two (very useful) ways. Firstly, that f(x) is now equated to a linear function of x, i.e.

f(x) ≈ f′(x₀)x + (f(x₀) − f′(x₀)x₀) = mx + c

Here f′(x₀) = m is the gradient of the linear function, and (f(x₀) − f′(x₀)x₀) = c is the constant, or y-intercept, of the linear function. A much more subtle fact is that this can be interpreted as the gradient equation of a straight line, i.e. Δy/Δx = (y − y₀)/(x − x₀) = m. The actual gradient m can be found exactly by finding the first derivative of f(x), i.e. f′(x) = m, but with the gradient specifically evaluated at x₀: f′(x₀). The function is approximated at a particular point x₀, and the associated "y₀" value for this is f(x₀). Therefore

f(x) − f(x₀) ≈ f′(x₀)(x − x₀)
Therefore, for some small perturbation δx (= x − x0 ) about the point x0 , there is a resultant
small perturbation δf (= f (x) − f (x0 )) about the nominal output f (x0 ). This can be used
to define a small linearised operating window for a function, or more usefully here, a system.
I.e. we define the linearised input region as x = x0 ± δx and the linearised output region as
f (x) = f (x0 ) ± δf = f (x0 ) ± f 0 (x0 )δx.
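The Symbolic Math Toolbox can produce the first-order Taylor polynomial directly; a minimal sketch, using cos(x) (the term that appears in Example 2.15 below) as the function to linearise:

% First-order Taylor polynomial (linearisation) about x0 = 0
syms x
f    = cos(x);
flin = taylor(f, x, 0, 'Order', 2)   % returns 1, i.e. cos(x) ~ 1 near x = 0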
Example 2.15
Linearise the following D.E. about the point x = 0:

(d²/dt²) x(t) + 2 (d/dt) x(t) + cos(x) = 0

Solution:
The cos(x) term makes this equation non-linear (check this yourself). We wish to linearise about x = 0, therefore we let x₀ = 0 and x take on the following linearised approximation: x = 0 + δx = δx.

The term cos(δx) can be linearised with a first-order Taylor polynomial approximation, i.e.

cos(δx) ≈ f(x₀) + f′(x₀)δx

Here f′(x) = −sin(x), therefore f′(x₀) = −sin(0) = 0, and f(x₀) = cos(0) = 1. Therefore

cos(δx) ≈ 1 + (0)δx = 1

for small perturbations δx about x = 0.
δx″ + 2δx′ + 1 = 0

s² δX(s) + 2s δX(s) + 1/s = 0

s(s + 2) δX(s) = −1/s

δX(s) = −1/(s²(s + 2))
This is generally unstable because the −(1/2)t term is unbounded as t → ∞.
If you are still struggling with the concepts of linearisation, watch the following video, Trimming and Linearization, Part 1: What Is Linearization? (14:00) [14]. Note, this video does touch on the concepts of state-space, which is covered in the next learning unit in Chapter 3. However, the concepts covered are consistent with this section, though some caution and discretion must be exercised.
Activity 2.22
Consider the following circuit with a non-linear resistor.
[Figure: a loop with small-signal source v(t), DC source vDC = 2, an inductor with voltage vL(t), and a non-linear resistor with voltage vR(t) and current iR(t).]
Find the small-signal transfer function of the resistor voltage with the small-signal input v(t). The non-linear resistor has the following characteristic equation:

vR(t) = 0.5 (iR(t))²
Outcome 6 is complete.
2.7 Summary
In this learning unit, the use of Laplace transform tables was covered and the transforms of
functions were found using these tables. The definition of a transfer function of a system and
its use in control theory was covered. The transfer functions of basic passive circuit elements
were introduced, and this was then used to find the transfer function of more complex circuits.
The primary principle used was the concept of circuit analysis using mesh and nodal analysis
in the frequency domain. This was mathematically described using matrices, and associated
matrix methods. The concepts were extended to operational amplifiers.
The electrical analogues of mechanical systems and the methods of finding the transfer function
of a mechanical system using circuit analysis were covered. Finally, the concept of non-linear
systems and their D.E. was introduced. The process of linearisation using first-order Taylor
polynomials as a linear approximation for the D.E. was covered, and the transfer functions of
these linearised systems were found.
Feedback
a) The expression is mostly in standard form, the only exception is the last term. This can be
resolved using a similar approach to that of Example 2.2. The only significant differences are
the values and the particular function otherwise the approach is the same.
6/(2 + s)⁴ + s/(s² − 9) − 2s(s² − 12)/(s² + 4)³
b) This is a standard form, though is a more tedious example. An important note is that the phase
is in radians. Since the Laplace transform fundamentally is an integral, trigonometric functions
in terms of degrees are in no way applicable. Apart from this, as mentioned, it is standard
(s² + 2s − 1)/(2(s² + 1)²) − (√3 s² + (4 + 6√3)s + 5√3 + 12)/(2(s² + 6s + 13)²)
c) In principle, this is not difficult to resolve, but it requires the application of 3 different rules and transform properties. It is recommended to first represent the equation in terms of (t − 1). This then allows the application of the time-shift rule in Table C.3, for f(t − 1) in this case. Then what is left is

L{ (1/√t) e^(3t) } e^(−s)

Here the frequency-shift rule applies, from the e^(3t) factor. Thus the transform of 1/√t simply needs to be evaluated, and then s replaced with s − 3. The final answer is then

e^(−s) √π / √(s − 3)

It is important to note that the order in which this is done should not matter, but it is impractical the other way around.
a) This is a straightforward problem and there are three different ways of solving it. First, complete
the square in the denominator, then it is in a standard form. The second, more tedious way,
is to find the (complex) linear factors, then find the partial fractions, transform each fraction,
and regroup the complex exponentials into a standard trigonometric function. The last way is
covered in the next example.
b) The easiest approach here would be to use the equations provided in the last few entries of
Table C.2. This is straightforward from here.
c) To solve this there are some notable features to suggest how to approach the problem. Firstly, there is a Gamma function. Thus, the use of Γ(p + 1)/s^(p+1) from Table C.1 is mandatory.
However, there is an (s + 1) factor and not the standard s as required. This, however, is simply a frequency shift as seen in Table C.3. The exponential in s almost always requires, and can be interpreted as, a time-shift property, also seen in Table C.3.

L⁻¹{ Γ(4/3) e^(−2s) / (s + 1)^(4/3) } = e^(2−t) (t − 2)^(1/3) u(t − 2)
d) This question is a bit more of a challenge but is similar in complexity to what you might be
expected to resolve in the real world. The most highly recommended step is to find the partial
fraction decomposition. However, the quadratic terms can be left alone for the most part,
though it is recommended to complete the square.
L⁻¹{ 2(s + 2)²/((s + 2)² + 1)² − 1/((s + 2)² + 1) + 2/((s + 2)² + 4) } = e^(−2t) (t cos t + sin(2t))
Once this is done, the steps to resolve this become a bit more obvious. One final suggestion is to regroup the partial fractions with the same denominator, i.e. have two partial fractions instead of three. This grouping will create a standard form.
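Inverse transforms like these can be double-checked with the Symbolic Math Toolbox; a minimal sketch for part d):

% Symbolic check of the inverse Laplace transform in part d)
syms s t
F = 2*(s+2)^2/((s+2)^2+1)^2 - 1/((s+2)^2+1) + 2/((s+2)^2+4);
f = simplify(ilaplace(F, s, t))   % expect exp(-2*t)*(t*cos(t) + sin(2*t))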
C(s) = R(s)G(s)

then

C(s) = (s + 1)/(s² + 5s + 6) = G(s)
Since the impulse function has a Laplace transform equal to 1, no matter what the transfer function is, the output is C(s) = G(s). In principle this means that an impulse input into a system will generate an output that is the characteristic transfer function of that system. However, this is very hard to implement practically, as it theoretically requires an infinite amount of energy to be imposed on the system in an infinitesimal amount of time. It can be approximated, but not very accurately, and the attempt could more than likely be destructive.
C(s) = R(s)G(s) = (1/s) · (s + 1)/(s² + 5s + 6)

= (s + 1)/(s(s + 2)(s + 3))

= 1/(6s) + 1/(2(s + 2)) − 2/(3(s + 3))

These are now in standard form and the time domain response can be determined using tables.

c(t) = 1/6 + e^(−2t)/2 − 2e^(−3t)/3

As t → ∞ the exponentials decay to zero. Therefore the final value approaches 1/6.
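The partial fractions and the final value are easy to confirm numerically; a minimal sketch (residue is base MATLAB; step assumes the Control System Toolbox):

% Partial fractions of C(s) = (s+1)/(s(s+2)(s+3)) and the step response
num = [1 1];
den = conv([1 0], [1 5 6]);        % s*(s^2 + 5s + 6)
[r, p, k] = residue(num, den)      % residues 1/6, 1/2, -2/3 at poles 0, -2, -3 (order may vary)
G = tf([1 1], [1 5 6]);
step(G); grid on                   % settles at the final value 1/6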
vC(t) = (1/C) ∫_{−∞}^{t} iC(τ) dτ   ⇒   L{vC(t)} = L{ (1/C) ∫_{−∞}^{t} iC(τ) dτ }

VC(s) = (1/C) [ IC(s)/s + (1/s) ∫_{−∞}^{0} iC(t) dt ]

VC(s) = IC(s)/(Cs) + (1/s)(1/C) ∫_{−∞}^{0} iC(t) dt

VC(s) = IC(s)/(Cs) + vC(0)/s

Again, for simplicity, assume zero initial conditions, vC(0) = 0. Then,

GC(s) = VC(s)/IC(s) = 1/(Cs)
The final circuits, both the basic and memory equivalents, are
[Figure: the capacitor equivalents: the basic element, current IC(s) through impedance 1/(Cs) with voltage VC(s); the memory equivalent, the impedance 1/(Cs) in series with the initial-condition source vC(0)/s; and the conductance element G with current IG(s) and voltage VG(s).]
iL(t) = (1/L) ∫_{−∞}^{t} vL(τ) dτ   ⇒   L{iL(t)} = L{ (1/L) ∫_{−∞}^{t} vL(τ) dτ }

IL(s) = (1/L) [ VL(s)/s + (1/s) ∫_{−∞}^{0} vL(t) dt ]

IL(s) = VL(s)/(Ls) + (1/s)(1/L) ∫_{−∞}^{0} vL(t) dt

IL(s) = VL(s)/(Ls) + iL(0)/s
[Figure: the inductor equivalents: the basic element, current IL(s) through impedance Ls with voltage VL(s), and the memory equivalent with the initial-condition current source iL(0)/s in parallel.]
VC(s)/V(s) = 1/(1 + RCs)   ∴   at DC (s = 0):  VC(0)/V(0) = 1/(1 + 0) = 1

and   lim_{s→∞} VC(s)/V(s) = lim_{s→∞} 1/(1 + RCs) → 1/∞ → 0
The half-value throughput is found by equating the transfer functions to 0.5, resulting in

1/2 = 1/(1 + RCs)   ⇒   RCs = 1   ⇒   s = 1/(RC)

and   1/2 = 1/(1 + (L/R)s)   ⇒   (L/R)s = 1   ⇒   s = R/L

Since R = 1Ω, C = 1F, and L = 1H, then 1/(RC) = 1 and R/L = 1. The resultant frequency in Hertz is then f = 1/(2π) ≈ 0.16 Hz = 160 mHz. This will correspond to the −3dB point in the graph. The simulation circuit and results are seen below:
[Simulation schematic: source V1 = SIN(0 1 50 10m 1 0) driving R1 = 1 into C1 = 1, referenced to GND.]
[Plot: magnitude (0 dB to −40 dB) and phase (260° to 420°) of the low-pass responses, swept from 1 mHz to 10 Hz.]
Note the output waveforms overlap. This is expected, since the capacitor and inductor low-pass filters
have the same transfer function.
VR(s) = [ R/(R + 1/(Cs)) ] V(s)   ⇒   VR(s)/V(s) = RCs/(RCs + 1) ≡ s/(s + 1/(RC)) ≡ 1/(1 + 1/(RCs))

Analysing the frequency responses, we get: at s = 0 the gain is 0, and as s → ∞ the gain tends to 1, i.e. high-pass behaviour.
A similar argument can be made for the inductor high-pass filter. The transfer function is

VL(s) = [ Ls/(R + Ls) ] V(s)   ⇒   VL(s)/V(s) = (L/R)s/(1 + (L/R)s) ≡ s/(s + R/L) ≡ 1/(1 + R/(Ls))
The frequency behaviour is similar to the capacitor above.
[Simulation schematic: the high-pass RC circuit, with V1 = SIN(0 1 50 10m 1 0), C1 = 1 and R1 = 1.]

[Plot: high-pass magnitude (0 dB to −50 dB) and phase (180° to 280°) response, swept from 1 mHz to 10 Hz.]
The simulation circuit and results of frequency characteristics can be seen below
[Simulation schematic: the corresponding filter circuit with V1 = SIN(0 1 50 10m 1 0), referenced to GND.]

[Plot: magnitude (0 dB to −80 dB) and phase (150° to 350°) response, swept from 1 mHz to 10 Hz.]
This is easily resolved by realising that the capacitor voltage is equal to the current through the
capacitor multiplied by its impedance. I.e.
VC2(s) = I2(s)/(sC2)

= (1/(sC2)) I2(s)

= (1/(sC2)) · sC2 V(s)/(R1R2C1C2 s² + (R1C1 + R1C2 + R2C2)s + 1)

∴ VC2(s)/V(s) = 1/(R1R2C1C2 s² + (R1C1 + R1C2 + R2C2)s + 1)
[Figure: the circuit, with source v(t) through R1 to node v1 (capacitor C1 to ground), then R2 to node v2, with output vC2(t) across capacitor C2, referenced to GND.]
For nodal analysis with an admittance matrix equation, all the impedance elements must be transferred
to their admittance equivalent, when changing to the frequency domain. This gives the following
frequency-domain circuit
[Figure: the frequency-domain equivalent, with admittances G1, G2, sC1 and sC2, the Norton source G1V(s), and node voltages v1 and VC2(s).]
Yv = i

[ G1 + G2 + sC1    −G2       ] [ v1     ]   =   [ G1 V(s) ]
[ −G2              G2 + sC2  ] [ VC2(s) ]       [ 0       ]
VC2(s) = det[ G1 + G2 + sC1   G1 V(s) ;  −G2   0 ]  ÷  det[ G1 + G2 + sC1   −G2 ;  −G2   G2 + sC2 ]

VC2(s) = G1G2 V(s) / [ (G1 + G2 + sC1)(G2 + sC2) − G2² ]

VC2(s) = G1G2 V(s) / [ C1C2 s² + (C1G2 + C2G1 + C2G2)s + G1G2 ]

VC2(s) = V(s) / [ (C1C2/(G1G2)) s² + (C1/G1 + C2/G1 + C2/G2) s + 1 ]

Finally, realising that R1 = 1/G1 and R2 = 1/G2, the transfer function VC2(s)/V(s) can be found:

VC2(s)/V(s) = 1 / [ R1R2C1C2 s² + (R1C1 + R1C2 + R2C2)s + 1 ]
The inverting amplifier can be analysed with simple circuit analysis methods and considering the ideal
opamp behaviour.
[Figure: the inverting amplifier with the current i(t) flowing from vin(t) through RG and on through RF towards vout(t); the current into the inverting input is i−, and the opamp output current is iop.]
Firstly the input impedance between the inverting and non-inverting inputs is infinite. This means
that the current into the inputs is zero, i.e. i− = 0. Since there is no current flow the volt drop across
the input impedance Zi is zero. Therefore the voltage at the inverting input is necessarily zero.
Since the inverting input sits at zero volts, the input current through RG is i(t) = vin(t)/RG. This current cannot flow into the inverting input (as explained, since the input impedance is infinite). Therefore it must flow entirely through RF, and the currents through RG and RF are equal. Defining the volt drop appropriately, we have i(t) = (0 − vout(t))/RF = −vout(t)/RF. Equating the two currents and solving gives the inverting gain of Equation 2.24.
Non-inverting amplifier
The non-inverting amplifier can be analysed more simply as a simple voltage divider, once the inverting
input voltage is equated to the non-inverting input voltage through the ideal opamp laws.
[Figure: the non-inverting amplifier, with v+ = vin(t), i− = 0, and the RF and RG divider from vout(t) back to the inverting input.]
[Figure: the integrator circuit in the frequency domain, with input resistance R and feedback impedance 1/(sC).]
Using the gain equation of the inverting amplifier from Equation 2.24 we get

Vout(s) = −( (1/(sC)) / R ) Vin(s)

∴ Vout(s)/Vin(s) = −1/(RCs)

Using the Laplace Transform Tables, the time integration section of Table C.3, we get the time-domain relationship

L⁻¹{Vout(s)} = L⁻¹{ −(1/(RCs)) Vin(s) }

L⁻¹{Vout(s)} = −(1/(RC)) L⁻¹{ (1/s) Vin(s) }

vout(t) = −(1/(RC)) ∫₀ᵗ vin(τ) dτ        (2.40)
The mathematical operation of this circuit is to integrate the input (with some scaling factor).
So the output will have the gain transfer function Z(s) summed with the original output. This may
be useful in a specific application, but will no longer be a “pure” integrator or differentiator, or other
complex transfer function effect.
[Simulation: the integrator with C1 = 250 nF; plot of V(volprobe1) (the input square wave) and V(volprobe2) (the output) from −500 µs to 3.5 ms, over the range −1.5 V to 1.5 V.]
As can be seen, the square wave input has been changed into a triangle wave. This makes sense since
the square wave levels are effectively constants ±1. Integrating these “constant” functions and taking
the sign into consideration generates a “ramp” or linear voltage increase/decrease over the duration
of the applied constant voltage.
Then looking at all the impedances in "mesh 1", i.e. all the mechanical parameters connected to v1(t), we can see that k1, m1, q1 and q3 are connected to v1(t). The only mechanical parameter between v1(t) and v2(t) is q3. Note the masses do not count, since they are the reference objects of the actual motions v1(t) and v2(t).

Similarly, looking at all the impedances in "mesh 2", the mechanical parameters connected to v2(t) are q3, q2, m2 and k2. From this it is easy to form not only the circuit but also the matrix equation.
[Figure: the two-mesh equivalent circuit: voltage source F(s); mesh 1 elements 1/K1, M1 and Q1; shared element Q3; mesh 2 elements Q2, M2 and 1/K2.]

v = Zi

[ K1/s + M1s + (Q1 + Q3)    −Q3                      ] [ v1(s) ]   =   [ F(s) ]
[ −Q3                       (Q2 + Q3) + M2s + K2/s   ] [ v2(s) ]       [ 0    ]
The desired output parameter is v2(s), so using Cramer's rule we get the following
v2(s) = det[ K1/s + M1s + (Q1 + Q3)   F(s) ;  −Q3   0 ]  ÷  det[ K1/s + M1s + (Q1 + Q3)   −Q3 ;  −Q3   (Q2 + Q3) + M2s + K2/s ]

v2(s)/F(s) = Q3 s² / [ M1M2 s⁴ + (M1(Q2 + Q3) + M2(Q1 + Q3)) s³ + (M1K2 + M2K1 + Q1Q2 + Q3(Q1 + Q2)) s² + (K1(Q2 + Q3) + K2(Q1 + Q3)) s + K1K2 ]
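The 4th-order denominator makes hand expansion error-prone, so the mesh matrix can be handed to the Symbolic Math Toolbox as a check; a minimal sketch:

% Symbolic check of Activity 2.21's transfer function v2(s)/F(s)
syms s M1 M2 Q1 Q2 Q3 K1 K2 F
Z = [K1/s + M1*s + Q1 + Q3,  -Q3;
     -Q3,                    K2/s + M2*s + Q2 + Q3];
v  = simplify(Z\[F; 0]);     % mesh "currents" are the velocities v1, v2
Tf = simplify(v(2)/F)        % expect Q3*s^2 over the 4th-order polynomial above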
Find the D.E. of this system using normal circuit analysis. Mesh analysis is the simplest for this problem. Therefore, by Kirchhoff's voltage law we get

vL(t) + vR(t) = v(t) + vDC

This gives

L (d/dt) iR(t) + 0.5 (iR(t))² − 2 = v(t)
The DC operating point must be found to know where the linearisation approximation must be made.
Therefore, the voltage and current at the DC point are constant and assumed respectively vR (t) = v0
and iR (t) = i0 ; the small-signal voltage is shorted, i.e. v(t) = 0. Substituting the current into the
non-linear equation,
L (d/dt) i0 + 0.5 (i0)² − 2 = 0

L(0) + (1/2)(i0)² = 2

∴ (i0)² = 4

⇒ i0 = +√4 = 2

Then using this value and substituting into the non-linear resistor equation,

v0 = 0.5 (i0)² = 0.5(2)² = 2
The linearised voltage and current are now redefined as some perturbation about the DC operating
point. I.e.:
iR (t) → i0 + δi(t)
vR (t) → v0 + δv(t)
but this has not linearised the non-linear relationship; for this, the first-order Taylor polynomial is found. The resistor voltage is now defined as a function of the resistor current,

vR = vR(iR) = 0.5 (iR)²

and vR and iR are implicit functions of time. This is then linearised as follows
" #
d
vR ≈ vR (line) = vR (i0 ) + vR δi(t)
diR iR =i0
2 d
0.5(iR )2
= 0.5(iR ) i =i0 + δi(t)
R diR iR =i0
1 2 1
= (i0 ) + 2(i0 ) δi(t)
2 2
1 2
= (2) + (2)δi(t)
2
= 2 + 2δi(t)
Now considering, with foresight, that the desired transfer function must have the resistor voltage as the output, the equation needs to be changed in terms of δv(t). Since vR(t) ≈ vR(line)(t) = v0 + δv(t), this relationship is easily found: δv(t) = 2δi(t).
The linearised D.E. can now be found by substituting the linearised resistor voltage vR(line) in terms of the perturbation current δi(t), and the inductor current explicitly as the DC operating current plus the perturbation current.

L (d/dt) iR(t) + 0.5 (iR(t))² − 2 = v(t)

⇒ L (d/dt)(i0 + δi(t)) + [2 + 2δi(t)] − 2 = v(t)

L (d/dt)(2) + L (d/dt) δi(t) + 2δi(t) + 2 − 2 = v(t)

L (d/dt) δi(t) + 2δi(t) = v(t)

now substituting δi(t) = δv(t)/2,

L (d/dt)(δv(t)/2) + 2(δv(t)/2) = v(t)

(L/2)(d/dt) δv(t) + δv(t) = v(t)

(d/dt) δv(t) + (2/L) δv(t) = (2/L) v(t)
The transfer function can be easily found. Using the Laplace tables,

(d/dt) δv(t) + (2/L) δv(t) = (2/L) v(t)

⇒ s δV(s) + (2/L) δV(s) = (2/L) V(s)

(s + 2/L) δV(s) = (2/L) V(s)

∴ δV(s)/V(s) = (2/L) / (s + 2/L)
This should make sense, as the circuit is nothing more than a low-pass filter using an inductor. The small signal "sees" a linear resistor. This resistance is defined by the small-signal voltage-current equation above, i.e. δi(t) = δv(t)/2, which means δv(t) = 2δi(t) = "R" δi(t), a small-signal resistance of 2Ω. Note the linear resistance in the linear region is not equal to the "resistance" at the DC operating point (which would look like v0/i0 = 1Ω)!
From the perspective of the small signal, the circuit looks like the following:

[Figure: the small-signal equivalent circuit: source v(t) driving the inductor L in series with a 2Ω resistance.]
References
[1] N. Nise, Control Systems Engineering, 8th (EMEA) Edition. Hoboken, NJ: John Wiley & Sons, Inc., 2019, ISBN: 9781119590132.
[2] B. P. Lathi, Signal Processing and Linear Systems. New York: Oxford University Press, 2010, ISBN: 9780195392579.
[3] R. Burns, Advanced Control Engineering. Oxford; Boston: Butterworth-Heinemann, 2001, ISBN: 0750651008.
[4] Math and Science, Lesson 1 - Laplace Transform Definition (Engineering Math) (28:53), 2016. [Online]. Available: https://www.youtube.com/watch?v=8oE1shAX96U.
[5] Brian Douglas, Control Systems Lectures - Transfer Functions, 2012. [Online]. Available: https://www.youtube.com/watch?v=RJleGwXorUk.
[6] D. Morrell, Laplace Domain Circuit Analysis (13:44), 2010. [Online]. Available: https://www.youtube.com/watch?v=L0PZvvt36DA&t=164s.
[7] SnugglyHappyMathTime, Laplace Transforms of Circuit Elements (16:06), 2016. [Online]. Available: https://www.youtube.com/watch?v=QlM1dC2gBLM.
[8] R. Boylestad, Introductory Circuit Analysis. Harlow: Pearson Education, 2016, ISBN: 9781292098951.
[9] Digital Blackboard, How to obtain Matrix by Inspection in Mesh Analysis (Simple Method with Animation) (7:08), 2018. [Online]. Available: https://www.youtube.com/watch?v=LG30Fi3Lldo.
[10] Digital Blackboard, How to obtain Matrix by Inspection in Nodal Analysis (Easy Technic with Animation) (9:00), 2018. [Online]. Available: https://www.youtube.com/watch?v=g4EXOLk6cvI.
[11] B. Carter and R. Mancini, Op Amps for Everyone. Newnes, 2017, ISBN: 9780128116487.
[12] Forrest Charnock, Inductors Capacitors and their Mechanical Analogs (9:25), 2016. [Online]. Available: https://www.youtube.com/watch?v=enmn0joTKMA.
[13] D. Morrell, Linearity: Definition (7:48), 2010. [Online]. Available: https://www.youtube.com/watch?v=fOcPRnC3DvQ.
[14] MATLAB, Trimming and Linearization, Part 1: What Is Linearization? (14:00), 2018. [Online]. Available: https://www.youtube.com/watch?v=5gEattuH3tI.
Learning Unit 3:
Modelling in the Time Domain
In the previous learning unit, Modelling in the Frequency Domain, Chapter 2, the benefit of
changing calculus problems, particularly integrodifferential equations, into algebra problems
was covered. From there, the benefits of the transfer function of systems in the s-domain were
demonstrated.
So why cover time-domain analysis in this Learning Unit, let alone modelling in it? Well,
the central theme of modelling in the frequency domain was to change calculus problems into
algebra problems. However, there is another way this can be done, without even leaving the time
domain. This concept is central to state-space modelling which has many powerful advantages
over pure frequency domain modelling.
This learning unit covers contents necessary for the state-space representation and modelling
of systems.
Study this learning unit in conjunction with Chapter 3 of the prescribed textbook, Control
Systems Engineering, by Norman Nise [1].
Consider the title of this section, the aim is to understand and use “State-Space Modelling”,
but what does this actually mean? Firstly this is a modelling technique. Okay, why is this
technique useful considering that the frequency domain is useful in its own right? That is the
first important question.
Secondly, what is the so-called “State-Space”, and why is it specifically useful for modelling?
This is the second important question. Lastly, what exactly is a state, and how might this
differ from the methods in the previous learning unit? These three questions are subtle but
fundamental to actually be able to understand and use the state-space modelling technique.
Tied into this is the fact that this is a time-domain analysis technique, and the benefits of this
analysis in the time domain become apparent in this learning unit.
An upfront statement of the benefits (and partly answering the first question) can be summarised as follows: classical control design techniques in the frequency domain are usually only applicable to [2]:

• linear, time-invariant systems (or systems that can be reasonably approximated as such); and
• systems with a single input and a single output (SISO).
Additionally, the method extends itself to the use of alternative controller design techniques
which have the benefits of State-Space modelling embedded into them.
The system has a collection of variables, known as system variables, that change over time due
to the initial conditions of the system and any inputs imposed on the system. At any point in
time, the system has a so-called “state”. The state of a system is defined as follows.
“The set of variables (called the state variables) with some known value at some initial time t0 ,
together with the output variables, completely determine the (future) behaviour of the system
for time t ≥ t0 .” [2]
The (minimum) subset of system variables required to fully describe the system is known as the
state-variables. State-variables are not necessarily unique, but the size of the subset required
to describe the system fully is always the same for the system.
An important note on the practical use of state-variables, however, is that the choice of certain
system-variables as state-variables over others, may force the requirement of other system-
variables to be used as part of the state-variables subset. Additionally, and most importantly,
not all system variables are valid as state-variables. The reason will be clarified later.
With some terminology defined, let’s cover a few examples identifying system variables, and
find a collection of state-variables.
Example 3.1
Identify the system variables of the following circuit and find a valid set of state-variables.

[Figure: a series RL circuit driven by the source v(t), with circuit current i(t).]
Solution:
There are three intrinsic system variables, all specific attributes of the electrical elements, and one extrinsic input system variable. These are: the circuit current i(t), the resistor voltage vR, the inductor voltage vL, and the input voltage v(t).
All these variables are the set of system-variables. From this, we can choose a sufficient
subset to describe the system fully. A mandatory value in the state-variable subset (and
indeed in any system) will be the input(s), v(t).
An important note is that the collection of intrinsic system variables chosen must suffi-
ciently describe the other system variables fully.
1. The input voltage v(t) and the current i(t), and its initial value i(t0 ).
2. The input voltage v(t) and the resistor voltage vR , and its initial value vR (t0 ).
Whichever of these subset state-variables are chosen the system state can be fully de-
termined for t > t0 . Further detail on this is briefly discussed below for each unique
combination.
1. Since the current is known, the resistor voltage can easily be determined, Ri(t). Therefore the inductor voltage can be found, vL = v(t) − Ri(t). Since the inductor is a reactive component, the rate of change of current must also be determined: di/dt = vL/L = (1/L)(v(t) − Ri(t)). In a classical approach, this is changed to an ordinary first-order D.E.: di/dt + (R/L)i = (1/L)v(t). The initial value i(t0) can be used to solve the D.E. and subsequently used to fully determine the system behaviour. Note the "other" system-variables, in this case vR and vL, are both expressed in terms of the state variables v(t) and i(t).

2. Since the resistor voltage is known, the circuit current can be determined, vR/R. The inductor voltage can easily be found, vL = v(t) − vR; additionally, the rate of change of current must be found, di/dt = vL/L, but this must be expressed in terms of the state variable vR. This is required since, as the current through the inductor changes (due to the inductor voltage), the resistor voltage changes. Expressed as a D.E. in terms of the state-variables, this gives dvR/dt + (R/L)vR = (R/L)v(t). The initial value vR(t0) can be used to solve the D.E. and used to fully determine the system behaviour. Again the "other" system-variables, i(t) and vL, are expressed in terms of the state variables v(t) and vR.
In the above example, the initial condition was not explicitly included. However, in each case, the initial condition is used to determine the explicit solution of the state-variable from the D.E. solution. A more immediate conclusion is that the derivative of the state variable is in some way "necessary" if there is a component that relates one system-variable as the derivative of another. The importance of this is covered in subsection 3.2.2.
Example 3.2
Identify the system variables of the following circuit and find valid sets of state-variables.

[Figure: a series RLC circuit driven by the source v(t), with circuit current i(t).]
Solution:
The system-variables are as follows: the input voltage v(t), the circuit current i(t), and the element voltages vR, vL and vC.
Notice that the capacitor charge is also a valid system variable. In order to describe the
system, KVL can be used as a balancing equation on the system state. I.e.:
vL + vR + vC = v(t)
We notice immediately that the KVL equation has many system-variables but we can
choose a minimum number of system-variables as the state variables.
A.
As an off-the-cuff choice, we can choose the circuit current as one variable and the capacitor voltage as another. This results in

L (di/dt) + Ri + vC = v(t)

But this is not sufficient, since as current flows in the circuit the capacitor voltage will change. Therefore we need an equation that also relates the change in capacitor voltage to the circuit current. But this is simply

dvC/dt = i/C

The pair of equations

dvC/dt = i/C
L (di/dt) + Ri + vC = v(t)

along with the initial values i(t0) and vC(t0), are sufficient to determine the entire system's future behaviour.
B.
As an immediate alternative, we can note that the capacitor charge can be substituted as a state-variable, i.e. the two state-variables can be i(t) and q(t), with the dynamic equations

dq/dt = i
L (di/dt) + Ri + q/C = v(t)
C.
Additionally, knowing that vR = Ri, we get another set of equations with the state variables vR and q(t):

dq/dt = vR/R
(L/R)(dvR/dt) + vR + q/C = v(t)
What has become apparent in the above examples is that there exists a subset of system-variables, called state-variables, that can describe the system fully (in theory, as long as the initial conditions and inputs are known). The concept of a state-space model has been subtly introduced. The state-space model consists of the collection of state-variable equations that can fully describe the system, given the initial conditions and inputs. This is covered in detail in subsection 3.2.2 below.
Activity 3.1
Consider for a moment the two examples above, Example 3.1 and Example 3.2. Notice
that in each case the inductor voltage as a system-variable was not used. This is not by
chance and was specifically avoided.
The state-space model is generally easy enough to find and define, however, the theory is neces-
sary in order to fully understand it. Continuing from the examples above and the observations
made; it seems as if a collection of first-order differential equations are necessary. This is exactly
the idea of state-space modelling, a collection of first-order differential equations of each state
variable in terms of the other state variables and the inputs.
Put another way, each of the state-variable derivatives must be expressed as a linear combination
of the state-variables and the inputs. This is generalised as [2]:
dx1/dt = a11x1 + a12x2 + · · · + a1nxn + b11u1 + · · · + b1mum
dx2/dt = a21x1 + a22x2 + · · · + a2nxn + b21u1 + · · · + b2mum
  ⋮
dxn/dt = an1x1 + an2x2 + · · · + annxn + bn1u1 + · · · + bnmum
dt
This can be expressed in vector form as the System-State Equation 3.1 [1, 2]:
ẋ = Ax + Bu (3.1)
x is the n-dimensional state vector (containing the state-variables), and ẋ is simply its time derivative:

x = [x1, x2, …, xn]ᵀ        (3.2)

u is the m-dimensional input vector (consisting of the m unique inputs to the system):

u = [u1, u2, …, um]ᵀ        (3.3)
An important advantage in the state-space model is that the individual states and their deriva-
tives need not all be measurable for them to be observable [2].
The state-space model also has the so called System-Output Equation 3.6 [1, 2]:
y = Cx + Du (3.6)
Note that the matrix Equations 3.1 and 3.6 are only valid as matrix equations for linear, time-invariant systems. However, if the system is non-linear or time-variant, the state-space equations will still be valid and can still be solved; the method will just not utilise the linear matrix equation method to solve the equations. Outcome 1 is complete
Example 3.3
Find the state-space matrix model of Example 3.2. Use i(t) and q(t) as the state-variables.
Additionally find the output equation if vL is the desired output.
Solution:
The system-state equation is easily changed into matrix form as follows:

[ q̇ ]   [ 0          1     ] [ q ]   [ 0   ]
[ i̇ ] = [ −1/(LC)    −R/L  ] [ i ] + [ 1/L ] v(t)

The desired output is not directly a system-state, but we know that vL = L(di/dt), therefore

vL = [ −1/C   −R ] [ q ; i ] + [1] v(t)

i.e.

vL = v(t) − Ri − q/C
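A model in this form drops straight into MATLAB's ss object; a minimal sketch with assumed (illustrative) component values, requiring the Control System Toolbox:

% State-space model of Example 3.3 with illustrative values
R = 1; L = 0.5; C = 0.1;          % assumed component values
A  = [0, 1; -1/(L*C), -R/L];
B  = [0; 1/L];
Cm = [-1/C, -R];                  % output equation for vL
D  = 1;
sys = ss(A, B, Cm, D);
step(sys); grid on                % vL response to a unit-step input v(t)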
Since the focus in this learning unit is to find state-space models and solutions to electrical
circuits, it would be useful to have some simple "fail-safe" techniques for finding these models.
This is expanded on from the definition of the state-space equation. Specifically, the equation
necessarily requires the expression of the first order state-variable rates in terms of the state-
variables. Conveniently in electrical systems, capacitors and inductors have voltage and current
definitions that are in first-order differential form. Specifically,
dvC/dt = iC/C        (3.10)

diL/dt = vL/L        (3.11)

Therefore we can choose dvC/dt and diL/dt to be state derivatives, and consequently vC and iL to be
state variables. This technique essentially looks at all energy storage devices (like the capacitor
and inductor, which store energy in electric and magnetic fields respectively) and uses their
inherent differential equation relationships to define the state variables.
To solve for the state-variable derivatives, it will most likely be necessary to use circuit analysis
techniques such as Ohm’s Law, Kirchhoff’s Voltage and Current Laws, or other suitable tech-
niques. This is to find expressions of the capacitor currents and inductor voltages in terms of
the state-variables. Example 3.4 demonstrates how to apply these concepts to find the state-
space model for a more complex circuit; both for the matrix state equation and the matrix
output equation.
Example 3.4
Find the state equation of the following circuit
[Figure: ladder network with source v(t); series elements R1, L and R3, and shunt elements C1, R2 and C2 respectively.]
Also find the output equation if the outputs are iR2 and vC2 .
Solution:
Counting the capacitors and inductors, we have the following state-variable derivatives:
• C1 with D.E. dvC1/dt = iC1/C1. Therefore the state-variable is x1 = vC1.
• C2 with D.E. dvC2/dt = iC2/C2. Therefore the state-variable is x2 = vC2.
• L with D.E. diL/dt = vL/L. Therefore the state-variable is x3 = iL.

The state-vector, state-vector derivative and input vector are then defined as

x = [vC1; vC2; iL]        ẋ = [v̇C1; v̇C2; i̇L]        u = v(t) = u
Using normal circuit analysis further, we can define the mesh currents, these will assist
in finding the expressions necessary for the ẋ states.
[Figure: the same ladder network with mesh currents i1, i2 and i3 defined.]
v̇C1 = (i1 − i2)/C1

The equation must be defined in terms of the state-variables. First we can see that i2 = iL, and this can be immediately substituted. Next an expression for i1 must be found in terms of the state-variables. We note that i1 = vR1/R1, and vR1 can be easily expressed in terms of state-variables, so that

i1 = (u − vC1)/R1

and

v̇C1 = [ (u − vC1)/R1 − iL ] / C1

v̇C1 = −(1/(R1C1)) vC1 + 0·vC2 − (1/C1) iL + (1/(R1C1)) u

The next state-variable derivative is

v̇C2 = i3/C2

We must now find an expression for i3 in terms of state-variables (this is a bit more tricky and involved). We note two simultaneous equations that must be true for vR2. By Ohm's Law and KVL respectively,

vR2 = R2(i2 − i3)
vR2 = i3R3 + vC2
vL = vC1 − R3 i3 − vC2
*Additional Task: The state-space model can be easily checked using MATLAB. The script
for this is seen in Section 3.7.
In the above Example 3.4, an additional technique that can be used to assist both with un-
derstanding and clarity is to replace the inductors with current sources, whose current is the
state variable; and capacitors with voltage sources, whose voltage is the state variable. The
state equations are solved by finding the voltage across the inductors/current sources, and the
current through the capacitors/voltage sources.
The method replaces all energy storage devices (as required in the state space analysis) and
the circuit analysis is greatly simplified containing only a network of (passive) resistor elements
and standard sources.
Activity 3.2
Find the state-space model of the following circuit. Try using the method of replacing
the energy storage devices with voltage and current sources.
[Figure: circuit with source v(t), resistors R1, R2 and R3, inductor L, and capacitors C1 and C2.]
Find the output equation if the two outputs are the capacitor voltages.
*Additionally: Use MATLAB to set up a state-space model. Simulate the output of the integrator for a 1 kHz square wave. The resistance is 1 kΩ and the capacitance is 250 nF. Compare the results to those obtained in Activity 2.20.
This is a challenge!
Outcome 2 is complete
The benefits of the state-space were not explicitly discussed or demonstrated. Put simply
the state-space equation represents an arbitrary vector space of the specific state-variables.
Theoretically, the state-vector can exist anywhere, but there are usually conditions that restrict
this. Regardless, the initial values of the state vector easily capture the initial state of the
system. The small change in the state vector, i.e. the state-vector derivative, can be easily
computed from the state equation ẋ = Ax + Bu, to whatever precision is necessary. This is
incredibly useful. Since the output(s) of the system is simply a linear combination of the states,
this is also easily calculated computationally.
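This computational convenience is worth seeing concretely: the state equation can be stepped forward in time with nothing more than repeated matrix multiplication. A minimal sketch in base MATLAB, using simple forward-Euler integration and an assumed example system:

% Forward-Euler propagation of x' = A*x + B*u, y = C*x + D*u
A = [0 1; -2 -3]; B = [0; 1];     % assumed example system matrices
Cm = [1 0]; D = 0;
dt = 1e-3; t = 0:dt:5;            % time step and simulation window
x = [0; 0]; y = zeros(size(t));   % zero initial state
for k = 1:numel(t)
    u = 1;                        % unit-step input
    y(k) = Cm*x + D*u;            % output from the current state
    x = x + dt*(A*x + B*u);       % small state change from the state equation
end
plot(t, y); grid on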
The benefits of state-space modelling are exceptional, however, it can be difficult to obtain
a model, and the models are not entirely intuitive. The frequency-domain transfer function
is generally easier to find. Thus it would be convenient if there is a way to simply change a
transfer function into a state-space model. Luckily this is relatively easy to do. Some of the
theoretical aspects of this are covered below.
These are simply polynomials of s, where it is required that the transfer function is a proper rational function, i.e. m < n. We also know that this polynomial can be decomposed into partial fractions. However, for the purpose of the analysis here, the transfer function is separated into two transfer functions, 1/Q(s) and P(s), in cascade, i.e.:

(a) Original:    R(s) → [ G(s) = P(s)/Q(s) ] → C(s)

(b) Cascaded:    R(s) → [ 1/Q(s) ] → X(s) → [ P(s) ] → C(s)
1/Q(s) = 1/(an sⁿ + an−1 s^(n−1) + · · · + a0)

Consider for a moment the intermediate transfer function X(s) = (1/Q(s)) R(s). The expression can be rearranged to R(s) = Q(s)X(s), and using the polynomial expansion of Q(s),

R(s) = an sⁿ X(s) + an−1 s^(n−1) X(s) + · · · + a0 X(s)

But why is this useful? Recall (from the Laplace Table C.3) that sⁿX(s) is the transform of the nth derivative of x(t). Therefore the equation becomes

x(t) = (1/a0) r(t) − (an/a0)(dⁿ/dtⁿ) x(t) − · · · − (a1/a0)(d/dt) x(t)        (3.14)
Let us assume for a moment that we know (dⁿ/dtⁿ)x(t); then to get (d^(n−1)/dt^(n−1))x(t) we simply need to integrate. To get (d^(n−2)/dt^(n−2))x(t) we integrate (d^(n−1)/dt^(n−1))x(t) again. This can be repeated until x(t) itself is obtained.
Equation 3.14 is a general nth order D.E. A general nth order D.E. can be easily changed into a state-space model. The requirement is that we change the general nth order D.E. into n first-order D.E.'s. To do this, Equation 3.14 is simply rearranged into

(dⁿ/dtⁿ) x(t) = (1/an) r(t) − (an−1/an)(d^(n−1)/dt^(n−1)) x(t) − · · · − (a1/an)(d/dt) x(t) − (a0/an) x(t)        (3.15)
We then define x(t) as the first state-variable, x1 = x(t). Then we define (d/dt)x(t) = ẋ1 = x2. This definition is forced and artificial; continuing with this, we define up to (d^(n−1)/dt^(n−1))x(t) = ẋn−1 = xn. Lastly we define (dⁿ/dtⁿ)x(t) = ẋn. Effectively, this changes the nth order D.E. into n first-order D.E.'s.
ẋ1 = x2
ẋ2 = x3
  ⋮
ẋn−1 = xn
ẋn = −(an−1/an)(d^(n−1)/dt^(n−1))x(t) − (an−2/an)(d^(n−2)/dt^(n−2))x(t) − · · · − (a1/an)(d/dt)x(t) − (a0/an)x(t) + (1/an)r(t)

The last D.E. can be rewritten in terms of the state variables:

ẋn = −(a0/an)x1 − (a1/an)x2 − · · · − (an−2/an)xn−1 − (an−1/an)xn + (1/an)r(t)

and the input(s) r(t) is (are) assigned as the input vector u.
Since these are first-order D.E.'s of state variables, the matrix equivalent form is easily obtained:

[ ẋ1   ]   [ 0         1         0       · · ·   0         ] [ x1   ]   [ 0    ]
[ ẋ2   ]   [ 0         0         1       · · ·   0         ] [ x2   ]   [ 0    ]
[  ⋮   ] = [ ⋮         ⋮         ⋱               ⋮         ] [  ⋮   ] + [ ⋮    ] u
[ ẋn−1 ]   [ 0         0         · · ·   0       1         ] [ xn−1 ]   [ 0    ]
[ ẋn   ]   [ −a0/an    −a1/an    · · ·   · · ·   −an−1/an  ] [ xn   ]   [ 1/an ]

Note that the (n×n) A matrix has a skew unit diagonal, and that the B matrix is simply a vector with only the last entry non-zero.
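MATLAB's Control System Toolbox automates this conversion with tf2ss. Note that it returns a different (but equivalent) ordering of the phase variables, so its matrices need not match the layout above entry for entry; a minimal sketch using the transfer function of Example 3.6 below:

% Transfer function to state-space (controller canonical form)
num = 6;  den = [2 10 -2];        % G(s) = 6/(2s^2 + 10s - 2)
[A, B, C, D] = tf2ss(num, den)    % one valid state-space realisation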
The above argument shows that the state equation is essentially derived from the 1/Q(s) transfer function. Similarly, the output equation is derived from the P(s) transfer function. This is simple to find, since the intermediate transfer function C(s) = P(s)X(s) is easily found using similar logic to the above argument.
c(t) = bm (d^m/dt^m) x(t) + bm−1 (d^(m−1)/dt^(m−1)) x(t) + · · · + b0 x(t)
But the derivatives contained in the output equation are of a lower order than those in the state-equation, by requirement (since G(s) must be a proper rational function). Therefore, the output c(t) is entirely in terms of the states and can be easily converted into an output state-equation. At most, the highest derivative will be one order less than the 1/Q(s) D.E., and thus the output equation will contain, at most, all the states.
In the scenario where the order of P(s) is equal to that of Q(s), the transfer function G(s) is not a proper rational polynomial in s. But this can be easily changed to a proper rational function and a constant d (see Appendix B), i.e.

C(s) = G(s)R(s) = [ P(s)/Q(s) + d ] R(s)
     = P(s) (1/Q(s)) R(s) + dR(s)
     = P(s)X(s) + dR(s)
In this scenario, there exists a feed-forward component in the output as well and can be readily
incorporated into the state output equation. Transfer functions with P (s) having an order
larger than Q(s) are beyond the scope of this module (as they are non-causal systems). There
are other practical aspects of converting a transfer function to a state-space model. These are
covered in the examples below.
Example 3.5
Find the state-space model of the following transfer function:

G(s) = 1/(2s + 3)

Solution:
The denominator has an order of one, therefore we need one state and its derivative. Following the layout of the A matrix, we simply get

ẋ = −(3/2)x + (1/2)u
y=x
There is another graphical, block diagram approach to solving the state equation. Firstly, make sure that the coefficient of the highest power of s in the denominator is unitary, i.e.:

G(s) = (1/2)/(s + 3/2)

The order of the polynomial in the denominator is the number of 1/s "integrator" blocks required; in this case, one. We have one state and one state derivative. Then the block diagram can be drawn directly from the transfer function.

[Block diagram: u is scaled by 1/2 into a summing junction; the junction output ẋ passes through an integrator 1/s to give the state x, which is the output y; x is fed back through a gain of 3/2 and subtracted at the summing junction.]

The associated rules for this are simple. Follow the state variables from the last state derivative at the "input", with the integrator blocks successively generating all the required states. The final state derivative results from the sum of the input with the appropriate gain, and the (negative) sum of the states as feedback, whose gains are the corresponding coefficients.
Example 3.6
Find the state equations of

G(s) = 6/(2s² + 10s − 2)

Solution:
Using the graphical approach, we can see the denominator is second order. Therefore, we need two state variables and two integrators. The transfer function is changed to the following equivalent:

G(s) = 3/(s² + 5s − 1)

The input gain is 3, the second state variable gain is 5, and the first state variable gain is −1.
UNISA 90 ECD3071
3. Modelling in the Time Domain
+ x˙2 R x2 = x˙1 R x1
u 3 y
−
−
−1
The state equation can now be easily found (taking note of the signs at the summing
block and the sign of the gains).
$$\begin{bmatrix}\dot{x}_1\\\dot{x}_2\end{bmatrix} = \begin{bmatrix}0 & 1\\ -(-1) & -5\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix} + \begin{bmatrix}0\\3\end{bmatrix}u$$
$$y = \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix}$$
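As a quick numerical cross-check, MATLAB can perform this conversion directly (a sketch, assuming the Control System Toolbox is available as in the Example 3.4 listing; note that tf2ss returns the controller canonical form, so the state ordering differs from the phase-variable form above even though the transfer function is identical):

>> num = 6; den = [2 10 -2];     % G(s) = 6/(2s^2 + 10s - 2)
>> [A,B,C,D] = tf2ss(num,den);   % controller canonical form
>> tf(ss(A,B,C,D))               % recovers 3/(s^2 + 5s - 1)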
Activity 3.5
Find the state equations of the following
a) $G(s) = \frac{s}{s-2}$ (Hint: use long division)
b) $G(s) = \frac{4s^2+2s}{s^3+s^2+5s+3}$
c) $G(s)$ from Activity 2.14 if $R_1 = 1\,\text{k}\Omega$, $R_2 = 500\,\Omega$, $C_1 = 100\,\mu\text{F}$ and $C_2 = 10\,\mu\text{F}$
d) $G(s) = \frac{s^3+6s^2+12s+3}{3s^3-2s^2+s+2}$ (Hint: use long division)
Objective 3 is complete
Although state-space modelling is very powerful, the transfer function can provide important information on the frequency behaviour of a system. Frequency behaviour is not immediately apparent in the state space. For this reason, and especially if one finds the state-space model first, converting from a state-space model to a transfer function is useful. The idea is straightforward: for the given state equations (and with a particular output in mind), reduce the matrix equation (as a system of equations) into just one equation as a transfer function. The theory behind this process is now shown.
$$\dot{\mathbf{x}} = A\mathbf{x} + B\mathbf{u}$$
$$\mathbf{y} = C\mathbf{x} + D\mathbf{u}$$

The goal is to convert this model, with the output vector $\mathbf{y}$ and input vector $\mathbf{u}$, into a transfer function $G(s)$ as the ratio of the output function $Y(s)$ to the input function $U(s)$,

$$G(s) = \frac{Y(s)}{U(s)}$$
One issue is that the state-equation vectors $\mathbf{y}$ and $\mathbf{u}$ are exactly that: vectors. However, we shall simply ignore this and "do the maths". Additionally, these vectors $\mathbf{y}$, $\mathbf{u}$, $\mathbf{x}$, and $\dot{\mathbf{x}}$ are implicitly time-domain vectors. These need to be transformed into equivalent s-domain vectors. But given that these are the variables, they are transformed as normal. The transform of vectors is simply the transform of each of their components, i.e., $\mathbf{y} \Rightarrow \mathbf{Y}(s)$, $\mathbf{u} \Rightarrow \mathbf{U}(s)$, $\mathbf{x} \Rightarrow \mathbf{X}(s)$, and we use the derivative property on each component for $\dot{\mathbf{x}}$, giving $s\mathbf{X}(s) - \mathbf{x}(0)$, where $\mathbf{x}(0)$ is the initial-condition state vector. The matrices in the state equations are simply "constants". We can now do the algebra and find the transfer function (whatever it may be).

$$s\mathbf{X}(s) - \mathbf{x}(0) = A\mathbf{X}(s) + B\mathbf{U}(s)$$
$$(sI - A)\mathbf{X}(s) = B\mathbf{U}(s) + \mathbf{x}(0)$$
$$\mathbf{X}(s) = (sI - A)^{-1}B\mathbf{U}(s) + (sI - A)^{-1}\mathbf{x}(0) = \Phi(s)B\mathbf{U}(s) + \Phi(s)\mathbf{x}(0)$$
Here, Φ(s) = (sI − A)−1 is known as the characteristic matrix (and is directly related to the
characteristic equation |sI − A| = 0). In the characteristic matrix, s is a simple scalar variable,
in order for a matrix (A) to be subtracted from it, we need to introduce a suitably sized identity
matrix I for the algebra to make sense. Next, we substitute the expression for X(s) into the
(second) output-equation. For the moment we shall consider x(0) = 0.
$$\mathbf{Y}(s) = C\mathbf{X}(s) + D\mathbf{U}(s) = C\Phi(s)B\mathbf{U}(s) + D\mathbf{U}(s)$$

This matrix equation is now only in terms of the output vector $\mathbf{Y}(s)$ and input vector $\mathbf{U}(s)$. The matrices $A$, $B$, $C$ and $D$ are all defined and known. Therefore, we can "solve" for $\mathbf{Y}(s)$ as

$$\mathbf{Y}(s) = (C\Phi(s)B + D)\mathbf{U}(s)$$

If the initial condition vector is non-zero, $\mathbf{x}(0) \neq 0$, then the equation solves to the following,

$$\mathbf{Y}(s) = (C\Phi(s)B + D)\mathbf{U}(s) + C\Phi(s)\mathbf{x}(0)$$
As can be seen, the only difference is an additional component $C\Phi(s)\mathbf{x}(0)$, which is the zero-input response (i.e. the response of the system because of the initial internal conditions/states, and zero input); $(C\Phi(s)B + D)\mathbf{U}(s)$ is called the zero-state response (i.e. the response of the system purely because of the input, and the system starting from a completely de-energised/zero state). The zero-state response contains the expression for the transfer function.
In a general MIMO system state-space model, Equation 3.16a is the transfer function matrix equation; and from Equation 3.16b, $G(s) = (C(sI - A)^{-1}B + D)$ is defined as the transfer function matrix. I.e.

$$\mathbf{Y}(s) = G(s)\mathbf{U}(s)$$
Note, it is not correct to write $G(s) = \frac{\mathbf{Y}(s)}{\mathbf{U}(s)}$ since generally both are vectors, and $\frac{1}{\mathbf{U}(s)}$ is nonsense and an abuse of notation (additionally, $\mathbf{U}(s)^{-1}$ is also an abuse of notation because inverses of vectors, in this sense, also do not exist). But the reciprocal vector $\frac{\mathbf{U}(s)^T}{|\mathbf{U}(s)|^2}$ takes on this equivalent definition.
If the state-space model is of a SISO system, then $\mathbf{Y}(s) = Y(s)$ and $\mathbf{U}(s) = U(s)$ are single-entry elements, and $G(s) = \frac{\mathbf{Y}(s)}{\mathbf{U}(s)} = \frac{Y(s)}{U(s)}$ is properly defined, as before in Chapter 2.
The procedure to find a transfer function from a state-space model is summarised below:
1. Negate the $A$ matrix to obtain $-A$.
2. Add $s$ along the diagonal to form $(sI - A)$.
3. Invert the matrix: $\Phi(s) = (sI - A)^{-1} = \frac{\mathrm{adj}(sI - A)}{\det(sI - A)}$.
4. Substitute into $G(s) = C\Phi(s)B + D$.

Consider the following example:
$$\dot{x} = 3x + 3u$$
$$y = x + \frac{1}{3}u$$
Solution:
Note that although the values here are not “typical” matrices, the algebra is exactly the
same. I.e.
$$A = 3 \qquad B = 3 \qquad C = 1 \qquad D = \frac{1}{3}$$
The matrix algebra of the four steps above reduces to simple scalar algebra, since the
matrices are simple scalars. Therefore, using Equation 3.16a and substituting directly,
we get
$$G(s) = (1)(s-3)^{-1}(3) + \frac{1}{3} = \frac{3}{s-3} + \frac{1}{3}$$
or in improper form:
$$G(s) = \frac{s+6}{3(s-3)}$$
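The same result can be sanity-checked in MATLAB (a sketch, again assuming the Control System Toolbox; for scalar "matrices" the ss object works exactly the same way):

>> sys = ss(3,3,1,1/3);   % A = 3, B = 3, C = 1, D = 1/3
>> tf(sys)                % gives (0.3333 s + 2)/(s - 3), i.e. (s+6)/(3(s-3))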
Find the transfer function of the following state-space model:
$$\dot{\mathbf{x}} = \begin{bmatrix}0 & 1\\ -2 & -4\end{bmatrix}\mathbf{x} + \begin{bmatrix}0\\1\end{bmatrix}u \qquad y = \begin{bmatrix}-3 & 1\end{bmatrix}\mathbf{x}$$
Solution:
In this model the matrices are as follows:
$$A = \begin{bmatrix}0 & 1\\ -2 & -4\end{bmatrix} \qquad B = \begin{bmatrix}0\\1\end{bmatrix} \qquad C = \begin{bmatrix}-3 & 1\end{bmatrix} \qquad D = 0$$
Following the four steps mentioned above, we first find −A, which is literally changing
the signs of each element in A.
$$-A = \begin{bmatrix}0 & -1\\ 2 & 4\end{bmatrix}$$
Next, we simply add “s” along the diagonal of this matrix. I.e.
$$(sI - A) = \begin{bmatrix}0+s & -1\\ 2 & 4+s\end{bmatrix} = \begin{bmatrix}s & -1\\ 2 & s+4\end{bmatrix} \;(= \Lambda(s))$$
Now to find the inverse. For practical intents and purposes, this can be divided up into
finding the adjoint/adjugate of the matrix, and dividing it by the determinant of the
matrix (which is just a simple scalar). Firstly, let’s find the determinant.
$$\det(\Lambda(s)) = \begin{vmatrix}s & -1\\ 2 & s+4\end{vmatrix} = (s)(s+4) - (-1)(2) = s^2 + 4s + 2$$
Now to find the adjoint/adjugate. This is achieved in three steps. Firstly finding the
matrix of minors MΛ of Λ(s). Then finding the cofactor matrix CΛ , and subsequently
the adjoint adj(Λ).
The minor Mij of element aij of matrix Λ is found by removing the ith row and jth column
of Λ and finding the determinant of this matrix. For a 2 × 2 matrix, this is simplified by
replacing the element with its diagonal element. I.e.
$$\Lambda = \begin{bmatrix}s & -1\\ 2 & s+4\end{bmatrix} \Rightarrow M_\Lambda = \begin{bmatrix}s+4 & 2\\ -1 & s\end{bmatrix}$$
The cofactor matrix is easily obtained from the matrix of minors. By definition each
element CΛij of the matrix CΛ is obtained with the following formula CΛij = (−1)(i+j) Mij .
This simplifies to a “checkered lattice” of alternating positive and negative factors on the
original matrix of minors. I.e.
$$C_\Lambda = \begin{bmatrix}(+1)(s+4) & (-1)(2)\\ (-1)(-1) & (+1)(s)\end{bmatrix} = \begin{bmatrix}s+4 & -2\\ 1 & s\end{bmatrix}$$
The adjoint/adjugate of a matrix is simply the transpose of its cofactor matrix, i.e. $\mathrm{adj}(\Lambda(s)) = C_\Lambda^T$, therefore $\mathrm{adj}(\Lambda(s))_{ij} = C_{\Lambda ji}$. For this example, we have
$$\mathrm{adj}(\Lambda(s)) = \begin{bmatrix}s+4 & -2\\ 1 & s\end{bmatrix}^T = \begin{bmatrix}s+4 & 1\\ -2 & s\end{bmatrix}$$
We have now found the equivalent of Φ(s) and can continue with finding the transfer
function.
$$G(s) = C\,\frac{\mathrm{adj}(\Lambda(s))}{\det(\Lambda(s))}\,B + D$$
$$= \frac{1}{s^2+4s+2}\begin{bmatrix}-3 & 1\end{bmatrix}\begin{bmatrix}s+4 & 1\\ -2 & s\end{bmatrix}\begin{bmatrix}0\\1\end{bmatrix} + 0$$
$$= \frac{1}{s^2+4s+2}\begin{bmatrix}-3 & 1\end{bmatrix}\left(\begin{bmatrix}s+4\\-2\end{bmatrix}(0) + \begin{bmatrix}1\\s\end{bmatrix}(1)\right) + 0$$
$$= \frac{1}{s^2+4s+2}\begin{bmatrix}-3 & 1\end{bmatrix}\begin{bmatrix}1\\s\end{bmatrix}$$
$$= \frac{1}{s^2+4s+2}(-3+s) = \frac{s-3}{s^2+4s+2}$$
Find the transfer function of the following state-space model:
$$\dot{\mathbf{x}} = \begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -2 & -3 & -4\end{bmatrix}\mathbf{x} + \begin{bmatrix}1\\-1\\0\end{bmatrix}u \qquad y = \begin{bmatrix}1 & -2 & 0\end{bmatrix}\mathbf{x}$$
Solution:
Here the matrices are as follows:
$$A = \begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -2 & -3 & -4\end{bmatrix} \qquad B = \begin{bmatrix}1\\-1\\0\end{bmatrix} \qquad C = \begin{bmatrix}1 & -2 & 0\end{bmatrix} \qquad D = 0$$
$$\det(\Lambda(s)) = \begin{vmatrix}s & -1 & 0\\ 0 & s & -1\\ 2 & 3 & s+4\end{vmatrix}$$
$$= (s)\begin{vmatrix}s & -1\\ 3 & s+4\end{vmatrix} - (-1)\begin{vmatrix}0 & -1\\ 2 & s+4\end{vmatrix} + (0)\begin{vmatrix}0 & s\\ 2 & 3\end{vmatrix}$$
$$= (s)[(s)(s+4) - (-1)(3)] + [(0)(s+4) - (-1)(2)] + (0)[(0)(3) - (s)(2)]$$
$$= (s)[s^2+4s+3] + [0+2] + 0 = s^3 + 4s^2 + 3s + 2$$
Following the same adjugate procedure as the previous example gives
$$\mathrm{adj}(\Lambda(s)) = \begin{bmatrix}s^2+4s+3 & s+4 & 1\\ -2 & s^2+4s & s\\ -2s & -3s-2 & s^2\end{bmatrix}$$
and therefore
$$G(s) = \frac{1}{s^3+4s^2+3s+2}\begin{bmatrix}1 & -2 & 0\end{bmatrix}\begin{bmatrix}s^2+4s+3 & s+4 & 1\\ -2 & s^2+4s & s\\ -2s & -3s-2 & s^2\end{bmatrix}\begin{bmatrix}1\\-1\\0\end{bmatrix} = \frac{3s^2+11s+3}{s^3+4s^2+3s+2}$$
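Both the characteristic polynomial and the full transfer function can be verified in MATLAB (a sketch, assuming the Control System Toolbox; the base-MATLAB function poly(A) returns the coefficients of det(sI − A)):

>> A = [0 1 0; 0 0 1; -2 -3 -4];
>> poly(A)                                   % returns [1 4 3 2]
>> [num,den] = ss2tf(A,[1;-1;0],[1 -2 0],0)  % should match the hand calculation above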
a)
$$\dot{\mathbf{x}} = \begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -4/3 & -5/3 & -11/3\end{bmatrix}\mathbf{x} + \begin{bmatrix}1\\0\\0\end{bmatrix}u \qquad y = \begin{bmatrix}2/3 & 1/3 & 1/3\end{bmatrix}\mathbf{x} + 0u$$
The final answer is $\frac{2s^2+6s+2}{3s^3+11s^2+5s+4}$
b)
$$\dot{\mathbf{x}} = \begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -9 & -2 & -3\end{bmatrix}\mathbf{x} + \begin{bmatrix}1\\0\\0\end{bmatrix}u \qquad y = \begin{bmatrix}3 & 2 & -1\end{bmatrix}\mathbf{x} + 0u$$
The final answer is $\frac{3s^2+18s-12}{s^3+3s^2+2s+9}$
c)
$$\dot{\mathbf{x}} = \begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -9 & -2 & -3\end{bmatrix}\mathbf{x} + \begin{bmatrix}1\\-1\\-2\end{bmatrix}u \qquad y = \begin{bmatrix}1 & -2 & 3\end{bmatrix}\mathbf{x} + 0u$$
d)
$$\dot{\mathbf{x}} = \begin{bmatrix}1 & 1 & 3\\ 2 & 3 & 1\\ -1 & -1 & -1\end{bmatrix}\mathbf{x} + \begin{bmatrix}1\\3\\2\end{bmatrix}u \qquad y = \begin{bmatrix}2 & 2 & -3\end{bmatrix}\mathbf{x} + 0u$$
e)
$$\dot{\mathbf{x}} = \begin{bmatrix}1 & 2 & -1\\ 1 & -2 & 1\\ 3 & 5 & -2\end{bmatrix}\mathbf{x} + \begin{bmatrix}1\\-5\\2\end{bmatrix}u \qquad y = \begin{bmatrix}2 & 2 & -3\end{bmatrix}\mathbf{x} + 5u$$

$$\mathbf{y} = \begin{bmatrix}2 & 3 & 1\\ -3 & -1 & 0\end{bmatrix}\mathbf{x} + \begin{bmatrix}0\\1\end{bmatrix}u$$
b) Go back to Example 3.4, find the transfer function matrix of the state-space model.
Specify your own component values.
Outcome 4 is complete.
The characteristic equation of a matrix is used to find the eigenvalues λ and eigenvectors x of
the matrix. Eigenvectors and eigenvalues of a matrix satisfy the following equation,
Ax = λx (3.17)
This means that the matrix A (as a linear transformation) maps the input vector x to a λ
scaled version of itself. I.e.
λx − Ax = 0
(λI − A) (x) = 0
The nontrivial solutions (i.e. assuming the input vector $\mathbf{x}$ is nonzero) exist when
$$|\lambda I - A| = 0$$
This can be simplified into the characteristic ($\lambda$-polynomial) equation (rather than a matrix equation) by taking the determinant of this matrix, i.e. the characteristic equation of a matrix $A$ is of the form
$$\det(\lambda I - A) = 0$$
det(sI −A) is called the characteristic polynomial of A. This polynomial is necessary for finding
the inverse of (sI − A) which is necessary for finding transfer function matrices in state-space
modelling, i.e.
G(s) = CΦ(s)B + D
= C(sI − A)−1 B + D (3.20)
and
$$\Phi(s) = (sI - A)^{-1} = \frac{\mathrm{adj}(sI - A)}{\det(sI - A)}$$
The characteristic equation $\det(sI - A) = 0$ is important for analysing the state-space system behaviour and is discussed further in Chapters 4, 5 and 6.
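In MATLAB, the eigenvalues (the roots of the characteristic equation) and the characteristic polynomial itself can be obtained directly (a sketch using base-MATLAB functions, reusing the second-order example above):

>> A = [0 1; -2 -4];
>> eig(A)    % eigenvalues: the roots of det(sI - A) = s^2 + 4s + 2
>> poly(A)   % characteristic polynomial coefficients [1 4 2]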
3.6 Summary
In this learning unit, the state-space model was covered as a time-domain system model. The concept of representing a system in state space was discussed. The knowledge of transfer functions from the frequency-domain learning unit (Chapter 2) was linked to the state-space model; converting a transfer function to a state-space model, and converting a state-space model to a transfer function(s), were discussed. The fundamental relation between the two modelling methods, the characteristic equation, was also discussed.
Example 3.4
>> r1 = 1; r2 = 1; r3 = 1; c1 = 1; c2 = 1; l = 1;
>> A = [ (-1./(r1*c1)), 0 , (-1./(c1)) ; ...
         0 , (-1./(c2*(r2+r3))) , (r2./(c2*(r2+r3))) ; ...
         (1./l) , (-r2./(l*(r2+r3))) , ((-r2*r3)./(l*(r2+r3)))];
>> B = [(1./r1*r2) ; 0 ; 0];
>> C = [0 , (1./(r2+r3)) , (r3./(r2+r3)); 0 , 1 , 0]
>> D = [0 ; 0]
>> sys = ss(A,B,C,D,'StateName',{'vc1' 'vc2' 'il'},'InputName','Vin');
>> step(sys)
Activity 3.3
Feedback
This does not limit the output being the inductor voltage, as in each case the inductor voltage is simply $v_L = L\frac{di}{dt}$, and the value $\frac{di}{dt}$ can either be explicitly determined from the state variable $i$ or as a combination of the state variables. From the last example
$$v_L = v(t) - Ri - \frac{q}{C}$$
vout = −vC
$$\dot{x} = 0\,x + \frac{1}{RC}u$$
$$y = -1\,x + 0\,u$$
[Circuit diagram: nodes A, B and C; resistors R1 (current iR1), R2 (current iR2) and R3; capacitor currents iC1 and iC2; inductor current x3 (= iL) with voltage vL.]
Next, equations for iC1 , iC2 and vL must be found in terms of the state-variables x1 , x2 , x3 , and u.
$$\dot{x}_1 = -\frac{R_1+R_2}{C_1R_1R_2}x_1 + \frac{1}{C_1R_2}x_2 + 0x_3 + \frac{1}{C_1R_1}u$$
iC2 = iR2 + x3
Using the expression for $i_{R2}$ as before, and realising (similarly) $i_{C2} = C_2\dot{v}_{C2} = C_2\dot{x}_2$, then
$$\dot{x}_2 = \frac{1}{C_2R_2}x_1 - \frac{1}{C_2R_2}x_2 + \frac{1}{C_2}x_3$$
Lastly, KVL could be used in the inductor loop. But the loop simplifies using the voltage between nodes A and C,
vL + vR3 = u − x2
⇒ vL = −x2 − R3 x3 + u
a)
$$\dot{x} = 2x + 2u$$
$$y = x + u$$
b)
$$\dot{\mathbf{x}} = \begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -3 & -5 & -1\end{bmatrix}\mathbf{x} + \begin{bmatrix}0\\0\\1\end{bmatrix}u \qquad y = \begin{bmatrix}0 & 2 & 4\end{bmatrix}\mathbf{x}$$
c)
$$\dot{\mathbf{x}} = \begin{bmatrix}0 & 1\\ -\frac{1}{R_1R_2C_1C_2} & -\frac{R_1C_1+R_1C_2+R_2C_2}{R_1R_2C_1C_2}\end{bmatrix}\mathbf{x} + \begin{bmatrix}0\\ \frac{1}{R_1R_2C_1C_2}\end{bmatrix}u$$
$$\dot{\mathbf{x}} = \begin{bmatrix}0 & 1\\ -2000 & -230\end{bmatrix}\mathbf{x} + \begin{bmatrix}0\\2000\end{bmatrix}u \qquad y = \begin{bmatrix}1 & 0\end{bmatrix}\mathbf{x}$$
d) The transfer function is improper. Using long division, $G(s)$ is changed into
$$G(s) = \frac{1}{3} + \frac{\frac{20}{3}s^2 + \frac{35}{3}s + \frac{7}{3}}{3s^3 - 2s^2 + s + 2}$$

$$\det(sI-A) = \begin{vmatrix}s & -1 & 0\\ 0 & s & -1\\ 4/3 & 5/3 & s+11/3\end{vmatrix} = \frac{1}{3}\begin{vmatrix}s & -1 & 0\\ 0 & s & -1\\ 4 & 5 & 3s+11\end{vmatrix} = \frac{1}{3}(3s^3+11s^2+5s+4)$$
$$\det(sI-A) = \begin{vmatrix}s & -1 & 0\\ 0 & s & -1\\ 9 & 2 & s+3\end{vmatrix} = s^3+3s^2+2s+9$$
The adjoint is the same as above, and the final transfer function is
$$G(s) = \frac{1}{s^3+3s^2+2s+9}\begin{bmatrix}1 & -2 & 3\end{bmatrix}\begin{bmatrix}s^2+3s+2 & s+3 & 1\\ -9 & s^2+3s & s\\ -9s & -2s-9 & s^2\end{bmatrix}\begin{bmatrix}1\\-1\\-2\end{bmatrix} = \frac{-3s^2-9s+42}{s^3+3s^2+2s+9}$$
$$\det(sI-A) = \begin{vmatrix}s-1 & -1 & -3\\ -2 & s-3 & -1\\ 1 & 1 & s+1\end{vmatrix} = s^3-3s^2+s-2$$
$$\det(sI-A) = \begin{vmatrix}s-1 & -2 & 1\\ -1 & s+2 & -1\\ -3 & -5 & s+2\end{vmatrix} = s^3+3s^2-4s+2$$
$$\det(sI-A) = \begin{vmatrix}s & -1 & 0\\ 0 & s & -1\\ 4 & 2 & s+5\end{vmatrix} = s^3+5s^2+2s+4$$
References
[1] N. Nise, Control Systems Engineering, 8th (EMEA) Edition. Hoboken, NJ: John Wiley & Sons, Inc., 2019, ISBN: 9781119590132.
[2] R. Burns, Advanced Control Engineering. Oxford, Boston: Butterworth-Heinemann, 2001, ISBN: 0750651008.
4 Time Response

This learning unit moves away from finding system models and begins to find quantitative descriptions of a system. Concepts regarding both frequency-domain modelling and time-domain modelling are used here. The particular details concerning a system's time response can be "extracted" from either a time-domain or frequency-domain model.
Study this learning unit in conjunction with Chapter 4 of the prescribed textbook, Control
Systems Engineering, by Norman Nise [1]. After completing this learning unit, you should be able to:
1. Use the poles and zeros of a system transfer function to determine the time response of
a system;
5. Find the time response characteristics (rise time, peak time, settling time, overshoot) of
under-damped second order systems;
6. Approximate higher-order system time responses with first and second-order system re-
sponse characteristics.
The concept of poles and zeros is more traditionally aligned with a system transfer function.
However as has been shown in time-domain modelling Chapter 3, a state-space time-domain
model can be transformed into the frequency-domain in matrix equation form. Therefore to say
that a pole-zero analysis can only be done on a transfer function is not entirely true. However,
the pole-zero analysis of state-space models will be explored in Section 4.9.
The poles and zeros of a system are directly related to a system transfer function [1–3], as given
in Equation 4.2 (from Equation 2.9) below.
The zeros of a transfer function are the values of "s" that result in $G(s) = 0$, or specifically $P(s) = 0$, i.e. the roots of $P(s)$ [1].
The poles of a transfer function are the values of “s” that result in G(s) → ∞, or specifically
Q(s) = 0, i.e. the roots of Q(s) [1].
For additional clarity, look at Pole/Zero Plots Part 1 (12:40), Pole/Zero Plots Part 2 (12:36),
and Intro to Control - 7.1 Poles and Zeros (6:29) [4–6]. The concepts of poles and zeros are
discussed and are briefly related to the time responses that can occur. This is also strongly
linked to the next Chapter 5.
Remember that “s” is a complex variable, and therefore G(s) is a complex function. It is helpful
to plot the function G(s) vs. s. However, because both the domain (input) and range (output)
are complex variables, we would need 4-D space, which is not easily visualised [7]. There are a
few approaches to plot “G(s) vs s”. Firstly, to plot the constant real and imaginary component
contours of s in the G(s) plane [7]. However, this is not immediately useful from a control
engineering perspective. A second way is to plot the real (Re) surface of G(s) over s, and
imaginary (Im) surface of G(s) over s. This is an uncommon approach. The third and most
useful way is to plot the magnitude (|G(s)|) surface of G(s) over s, and phase (argG(s)) surfaces
of G(s) over s [1–3]. The benefit of |G(s)| is it quite literally represents the poles and zeros
of G(s); since the magnitude of the function necessarily will be infinite at the poles (which
are “holes” in the surface that are stretched to infinity), and the magnitude will necessarily
be zero at the zeros (which are pinched points that touch the original “s”-plane). The rest of
the surface lies somewhere (finite) in between 0 and ∞. This is now demonstrated with the
following transfer function,
$$G(s) = \frac{s(s+1.5)}{(s-1.5)(s+1+1.5j)(s+1-1.5j)}$$
To make it easier to visualise, the log magnitude log10 |G(s)| is typically used to represent the
value of the surface (due to the span of the values of the surface). The effect of this is to still
have the poles going to ∞, but the zeros go to −∞ (Note the zeros have also become “holes”
in the surface). The $|G(s)|$ and $\log_{10}|G(s)|$ plots of the above transfer function are seen below in Figure 4.1 (generated with Matlab).
[Figure 4.1: (c) |G(s)| surface, real-axis view; (d) log10|G(s)| surface, real-axis view; (e) |G(s)| contour, s-plane view; (f) log10|G(s)| contour, s-plane view.]
Since the actual magnitude is relatively unimportant, it is often easier for control engineers to
simply indicate the location of the poles and zeros on the complex s-plane. This is typically
done by using "×" to indicate the location of poles, and using "○" to indicate the locations of zeros, on the s-plane [1–3]. This is demonstrated with the same transfer function above in
Figure 4.2.
[Figure 4.2: pole-zero plot of G(s) on the s-plane (σ horizontal axis, jω vertical axis, both spanning −3 to 3).]
The poles of a transfer function fundamentally come from analysis and concepts in the s-plane,
the complex-frequency plane. But this can be transformed back into the time domain. Let’s
establish a simple conceptual understanding of what a “pole” does in the time-domain.
Consider a single pole in the s-domain forming the transfer function, and for simplicity, confined
on the real line.
$$G(s) = \frac{1}{s+a} \qquad \therefore\ s = -a \text{ is a pole}$$
[s-plane sketch: the pole of G(s) at s = −a on the negative real axis.]
The Laplace Transform of the transfer function can be found; however, it is more useful to find the response to an input. An almost natural choice for the input is the unit step, which has a transform of $\frac{1}{s}$. The purpose of this is that the unit step represents a "switch" which turns on the system (this is covered more in depth in Chapter 6). This input pole is then multiplied with the system transfer function and the output transform is evaluated. I.e.:
[s-plane sketch: the poles of C(s) at s = 0 and s = −a.]
$$C(s) = R(s)G(s) = \frac{1}{s}\frac{1}{s+a} = \frac{K_1}{s} + \frac{K_2}{s+a} = \frac{1/a}{s} + \frac{-1/a}{s+a}$$
$$\therefore\ c(t) = \left(K_1 + K_2e^{-at}\right)u(t) = \frac{1}{a}\left(1 - e^{-at}\right)u(t)$$
NB: The unilateral transform requires the inclusion of the "unit step function" to define real systems. For the moment the factor $\frac{1}{a}$ is somewhat arbitrary, as it is only a scaling factor, which would change if there were some other gain factor. It is important to acknowledge that a pole's location can affect the gain if it is large or small enough; however, this is ignored for now. The overall system's time-varying functional behaviour is effectively $e^{-at}$. Therefore the pole effectively determines the system's time response to an input [1, 3–6]. This is inferred from the understanding of Laplace Transforms and the partial fractions techniques.
[Plot: first-order step response c(t), rising from 0 towards its final value over t = 0 to 5.]
This example covered only a single pole, and with an assumed pure real component. This is not
always the case. Zeros also affect a system’s response, these are covered in their own sections
below. Additionally, there are metrics to quantitatively analyse and classify systems in the
time domain. These concepts are explored further in the sections below.
Here again, a factor of $a$ is included to ensure the step response output has a magnitude of 1. The time-domain response to a unit step is,
$$c(t) = 1 - e^{-at}$$
This output is specifically called an exponential recovery curve. The system "recovers" from the old state $= 0$, and follows the new reference, $= 1$. The recovery path followed is exponential.
This means that a system can be quantified from the step response. Clearly, the variable of
interest is the “pole value” a, since knowing this describes the system entirely for a single-
pole/first-order system. One of the easiest ways to extract this value is simply through the
derivative [1–3]. I.e.
$$\frac{d}{dt}c(t) = (-e^{-at})(-a) = ae^{-at}$$
[Figure 4.4: two views of the first-order step response — (a) the initial-gradient line y = at reaching the final value at t = 1/a; (b) the response reaching 63% of the final value at one time constant.]
There is another way to determine the value of $a$, by using the behaviour of the exponential and the initial gradient. The (recovering) exponential reaches some limiting value, in this case the normalised value of 1. Notice that the straight line formed by the initial gradient has the form $y = at$ [1, 3]. This means that it reaches the final value in a time of $t = \frac{1}{a}$. This value is called the time constant and can be used with the original exponential to define a metric [1, 3]. Substituting the time constant, we get,
c(1/a) = 1 − e−a(1/a)
= 1 − e−1
≈ 1 − 0.3679 = 0.6321
This means that a first-order system reaches 63% of the final value in one time constant [1,
3]. The practical method and representation of this is seen in Figure 4.4b. The value a itself
is called the rate constant for a first-order system. The rate constant can be calculated if the
output of a system is measured and left for “long enough” to get to the final value. Then the
rate constant describing the first-order system can be calculated by finding the reciprocal of the time taken for the output to reach 63% of the final value.
Although the concept of the rise time is presented here in a first-order system, it applies to
systems of any order. Its definition stems from a practical purpose and is a simple concept.
It is important to stress that this parameter is defined here specifically for a system unit-step
response output.
The rise time is defined as the time for the output to go from 10% to 90% of the final value. In other words, it is the difference in time between $c(t_{90\%}) = 0.9$ and $c(t_{10\%}) = 0.1$ for a normalised output. Solving for this mathematically, $t_{10\%} = \frac{-\ln(0.9)}{a}$ and $t_{90\%} = \frac{-\ln(0.1)}{a}$, so
$$\therefore\ t_{r10-90} \approx \frac{2.2}{a} \tag{4.3}$$
This can be visualised in Figure 4.5a. For additional clarity, watch first order system - unit step
response (7:46) [8] (again) and Intro to Control - 9.1 System Time Response Terms (7:26) [9].
There are other specified metrics of rise time, such as tr20−80 the 20% to 80% rise time, amongst
others. These are somewhat arbitrarily defined, but are still useful from a practical viewpoint.
[Figure 4.5: (a) rise time, showing the 10% and 90% crossing times t10% and t90%; (b) settling times ts10%, ts5% and ts2%.]
Much like the rise time, the settling time is also arbitrarily defined out of practical usefulness.
It also applies to systems of any order. It is defined as the time taken for an output to reach
within a certain percentage of the final value such that the output will always be within that
percentage (to infinite time). This again specifically applies to the system unit-step response.
For a first order system, the step response is a monotonically increasing function, so the lower
boundary of the percentage range is used. Here the 2% settling time ts2% is used and defined
in Equation 4.4 below [1, 3],
$$\therefore\ t_{s2\%} = \frac{4}{a} \tag{4.4}$$
The 2% settling time can be visualised in Figure 4.5b. Additional parameters that are common are the 10%, $t_{s10\%} = \frac{2.303}{a}$; 5%, $t_{s5\%} = \frac{3}{a}$; and 1% (not shown), $t_{s1\%} = \frac{4.605}{a}$ settling times.
These are again used out of their practical use, determined from the particular situation at
hand. To see this explained again, watch Intro to Control - 9.1 System Time Response Terms
(7:26) [9].
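These first-order metrics can also be checked numerically in MATLAB (a sketch, assuming the Control System Toolbox; the pole value a = 2 is an arbitrary illustrative choice):

>> a = 2; sys = tf(a,[1 a]);                          % normalised first-order system a/(s+a)
>> S = stepinfo(sys,'RiseTimeLimits',[0.1 0.9], ...
                'SettlingTimeThreshold',0.02);
>> S.RiseTime       % approx 2.2/a = 1.1 s
>> S.SettlingTime   % approx 4/a = 2 s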
Second-order systems are characterised as having two distinct poles in the transfer functions
[1–3], i.e.:
$$G(s) = \frac{1}{(s+\lambda_1)(s+\lambda_2)}$$
However, many different types of behaviour can arise from a step-response of second-order
systems. These behaviours are essentially sub-types of second-order systems.
To start, an important but non-trivial statement must be realised for a second-order system.
The context of this module is real control systems, therefore, the transfer function must be
“real”. This is not to say that the parameter s is real, but that the resultant system is “real”.
This reduces to a simple statement, the coefficients of the polynomials P (s) and Q(s) must all
be real [1–3, 7, 10]. I.e.:
$$G(s) = \frac{1}{as^2+bs+c} \qquad a, b, c \in \mathbb{R}$$
or generally, any $G(s) = \frac{P(s)}{Q(s)}$ whose polynomial coefficients are all real.
If any are non-real, then the partial fraction decomposition will result in an equation having
partial fraction scaling factors that are non-real [7, 10]. This means that the time-domain
function will have a “non-real” component. This is broadly speaking meaningless. This is why
the first-order system and specifically its pole, as described above, was confined to the real axis
of the s-plane.
Even though the coefficients of the Q(s) polynomial of the second-order system must necessarily
be real, the roots of the polynomial (i.e. the poles) need not be real. However, as will be seen,
if there are non-real roots/complex poles in a second-order system, they necessarily have to be
complex conjugates of one another [1–3, 7, 10]. The reason for this is covered below.
The best way to explore the phenomena of second-order systems is to get into the "dirty work". From this you should try to find trends and build up an intuition about what happens. This is explained afterwards and is covered sufficiently in Nise [1]. However, it is to your benefit to try to figure out what is happening, and why, on your own first, and then confirm it with the textbook and study guide.
Activity 4.1
Find the unit-step response of each of the following transfer functions.
Hint:
See if you can find a conceptual understanding of what is happening in this activity. The
poles are being moved progressively in a certain way. What is the conceptual consequence
of moving the poles on the step-response given how they are moved?
a) 1. $G(s) = \frac{1}{s^2+12s+35}$ \quad 2. $G(s) = \frac{35}{s^2+12s+35}$
b) 1. $G(s) = \frac{1}{s^2+10s+21}$ \quad 2. $G(s) = \frac{21}{s^2+10s+21}$
c) 1. $G(s) = \frac{1}{s^2+8s+7}$ \quad 2. $G(s) = \frac{7}{s^2+8s+7}$
d) 1. $G(s) = \frac{1}{s^2+6s+5}$ \quad 2. $G(s) = \frac{5}{s^2+6s+5}$
e) 1. $G(s) = \frac{1}{s^2+4s+3}$ \quad 2. $G(s) = \frac{3}{s^2+4s+3}$
f) 1. $G(s) = \frac{1}{25s^2+100s+99}$ \quad 2. $G(s) = \frac{99}{25s^2+100s+99}$
g) 1. $G(s) = \frac{1}{s^2+2s+1}$ \quad 2. $G(s) = \frac{1}{s^2+2s+1}$
h) 1. $G(s) = \frac{1}{4s^2+12s+9}$ \quad 2. $G(s) = \frac{9}{4s^2+12s+9}$
i) 1. $G(s) = \frac{1}{s^2+4s+4}$ \quad 2. $G(s) = \frac{4}{s^2+4s+4}$
j) 1. $G(s) = \frac{1}{s^2+4s+5}$ \quad 2. $G(s) = \frac{5}{s^2+4s+5}$
k) 1. $G(s) = \frac{1}{s^2+4s+13}$ \quad 2. $G(s) = \frac{13}{s^2+4s+13}$
l) 1. $G(s) = \frac{1}{s^2+2s+10}$ \quad 2. $G(s) = \frac{10}{s^2+2s+10}$
m) 1. $G(s) = \frac{1}{4s^2+4s+37}$ \quad 2. $G(s) = \frac{37}{4s^2+4s+37}$
n) 1. $G(s) = \frac{1}{s^2+4s+13}$ \quad 2. $G(s) = \frac{13}{s^2+4s+13}$
o) 1. $G(s) = \frac{1}{s^2+4s+29}$ \quad 2. $G(s) = \frac{29}{s^2+4s+29}$
p) 1. $G(s) = \frac{1}{s^2+2s+26}$ \quad 2. $G(s) = \frac{26}{s^2+2s+26}$
q) 1. $G(s) = \frac{1}{4s^2+4s+101}$ \quad 2. $G(s) = \frac{101}{4s^2+4s+101}$
r) 1. $G(s) = \frac{1}{s^2+25}$ \quad 2. $G(s) = \frac{25}{s^2+25}$
Many phenomena are highlighted in the activity above; these are now explored. Consider a general second-order system
$$G(s) = \frac{1}{Q(s)} = \frac{1}{as^2+bs+c} = \frac{1}{(s+\lambda_1)(s+\lambda_2)} \qquad a,b,c \in \mathbb{R} \tag{4.5}$$
The intention is normally to change this into partial fractions, or at least to factorise it to find
the location of the poles s1 = −λ1 and s2 = −λ2 when Q(s1 ) = Q(s2 ) = 0. This is easily done
using the quadratic formula [1–3]. I.e.
$$s_1, s_2 = -\frac{b}{2a} \pm \frac{1}{2a}\sqrt{b^2-4ac} \tag{4.6}$$
∆ = b2 − 4ac is called the discriminant (of the quadratic equation), and this fundamentally
changes the nature of the roots of Q(s) [1, 3]. Note s1 and s2 are the roots of Q(s), where λ1
and λ2 are the values in the factors of Q(s) = (s + λ1 )(s + λ2 ).
If $\Delta > 0$, then $\lambda_1 \neq \lambda_2$ are distinct and real, i.e. $\lambda_1 = \frac{b}{2a} + \frac{\sqrt{\Delta}}{2a}$ and $\lambda_2 = \frac{b}{2a} - \frac{\sqrt{\Delta}}{2a}$ [1, 3].
If $\Delta = 0$, then $\lambda_1 = \lambda_2 = \frac{b}{2a}$, and are repeated real roots [1, 3].
If $\Delta < 0$, i.e. explicitly $\Delta = -\delta$, then $\lambda_{1,2} = \frac{b}{2a} \pm \frac{1}{2a}\sqrt{-\delta} = \frac{b}{2a} \pm \frac{j\sqrt{\delta}}{2a}$. In other words, $\lambda_1 = \frac{b}{2a} + \frac{j\sqrt{\delta}}{2a}$ and $\lambda_2 = \frac{b}{2a} - \frac{j\sqrt{\delta}}{2a}$, which are necessarily complex conjugates, i.e. $\lambda_2 = \overline{\lambda_1}$ [1, 3].
These cases for the second-order system will be explored in further detail below. An important
note, we will consider a = 1 and the transfer function is in “normalised” form, i.e.
$$G(s) = \frac{\lambda_1\lambda_2}{(s+\lambda_1)(s+\lambda_2)} \tag{4.7a}$$
$$G(s) = \frac{\lambda_1\lambda_2}{s^2+(\lambda_1+\lambda_2)s+\lambda_1\lambda_2} \tag{4.7b}$$
$$G(s) = \frac{\omega_n^2}{s^2+2\zeta\omega_ns+\omega_n^2} \tag{4.8}$$
The convenience of this form is not immediately apparent. This is covered specifically for each
sub-type of second-order systems below. However, the relationship between the two equations
is now briefly demonstrated and is relevant to each of the sub-types generally.
$$\omega_n^2 = \lambda_1\lambda_2 \tag{4.9a}$$
$$\therefore\ \omega_n = \sqrt{\lambda_1\lambda_2} \quad (\text{or } \omega_n = \sqrt{c}) \tag{4.9b}$$
and
$$\zeta = \frac{1}{2}\frac{\lambda_1+\lambda_2}{\sqrt{\lambda_1\lambda_2}} \tag{4.10a}$$
It is also useful to relate $\lambda_1$ and $\lambda_2$ back to the quadratic formula (and assuming $a = 1$).
$$\lambda_1, \lambda_2 = \sigma \pm \sqrt{\Delta} \quad\text{or alternatively}\quad \lambda_1, \lambda_2 = \omega_n\zeta \pm \omega_n\sqrt{\zeta^2-1} \tag{4.11}$$
$$= \frac{b}{2} \pm \sqrt{\left(\frac{b}{2}\right)^2 - c} = \omega_n\zeta \pm \omega_n\sqrt{\zeta^2-1} \tag{4.12}$$
This subsection goes into further details on the nature of second order system behaviour. It is
more in depth than in Nise [1] and aims to show the “proofs” as to why the system-subtypes
are defined the way they are. This is not essential for finding the time responses but can help
on the critical understanding of why the subsystems are defined the way they are, and why
certain time response parameters are only applicable to certain subtypes, as is covered later
in subsection 4.5.2. If you are comfortable in your understanding, skip to the examples and
activities, Example 4.1, Activity 4.2 and Activity 4.4. However, it is still strongly recommended
that you understand the following and why [1–3]:
• the difference between, and the mathematical relation of, the natural frequency $\omega_n$ and the damped frequency $\omega_d$.
From Activity 4.1 above, some of the first examples demonstrated the effect of two distinct, and necessarily real, roots. These had a step response of the following form,
$$c(t) = 1 - k_1e^{-\lambda_1t} - k_2e^{-\lambda_2t}$$
The second-order system step response, with real distinct poles, behaves very similarly to a first-order system step response. This is because $e^{-\lambda_2t} \to 0$ faster than $e^{-\lambda_1t} \to 0$. Additionally, $\frac{1}{\lambda_1}$ (and to a lesser extent $\frac{1}{\lambda_2}$) acts exactly equivalently to the time constant (of a first-order system).
Arbitrarily speaking, $\lambda_2 > \lambda_1$, i.e. one pole is more positive than the other (N.B. $s_1 = -\lambda_1 > s_2 = -\lambda_2$). When comparing the step response of the second-order system with the step response of a first-order system that has a pole equal to the more positive pole, i.e. $G(s) = \frac{1}{s+\lambda_1}$, the second-order system follows this first-order system closely. The more positive pole is called the "dominant pole" for this reason. Additionally, the more positive $s_1 = -\lambda_1$ is relative to $s_2 = -\lambda_2$, the more dominant the effect of $s_1 = -\lambda_1$ as the dominant pole. The closer the poles get to one another, the less the second-order system follows the dominant first-order equivalent.
$$\zeta = \frac{1}{2}\left(\sqrt{\frac{\lambda_2}{\lambda_1}} + \sqrt{\frac{\lambda_1}{\lambda_2}}\right)$$
From above, we say $\lambda_2 > \lambda_1$, $\Rightarrow \frac{\lambda_2}{\lambda_1} > 1$ and $1 > \frac{\lambda_1}{\lambda_2}$. We then define $a = \frac{\lambda_2}{\lambda_1} > 1$. Then $\sqrt{a} = \sqrt{\frac{\lambda_2}{\lambda_1}} > 1$ and $1 > \sqrt{\frac{\lambda_1}{\lambda_2}} = \frac{1}{\sqrt{a}}$. Therefore
$$\zeta = \frac{1}{2}\left(\sqrt{a} + \frac{1}{\sqrt{a}}\right)$$
Now, since $a > 1$, then $\sqrt{a} - 1 > 0$, and therefore $(\sqrt{a}-1)^2 > 0$. Therefore,
$$(\sqrt{a}-1)^2 > 0$$
$$a - 2\sqrt{a} + 1 > 0$$
$$a + 1 > 2\sqrt{a}$$
$$\Rightarrow \frac{a+1}{2\sqrt{a}} > 1$$
$$\therefore\ \frac{1}{2}\left(\sqrt{a} + \frac{1}{\sqrt{a}}\right) > 1$$
$$\zeta = \frac{1}{2}\left(\sqrt{a} + \frac{1}{\sqrt{a}}\right) > 1$$
$$\zeta > 1$$
The defining characteristic of an overdamped second-order system is $\zeta > 1$. The larger the difference between $\lambda_2$ and $\lambda_1$, the larger the damping ratio $\zeta$ is, the more dominant the effect of the dominant pole is, and the more the system is "first-order" like. The extreme of this is to consider one of the poles at $-\infty$; then the system is basically first order. The effect of $\omega_n$ doesn't fit the traditional meaning of "frequency", but loosely represents the "speed" of the system. The larger $\omega_n$, the "faster" the system.
Two Real, Repeated Poles; The Critically Damped Second Order System
From Activity 4.1 above, some of the systems exhibited the following form of their step response,
$$c(t) = 1 - k_0e^{-at} - k_1te^{-at}$$
The introduction of the $-k_1te^{-at}$ term occurs because of the repeated-pole partial fraction
decomposition and its Laplace transform. Importantly this means that the step-response is
always slower than an equivalent first-order system (with a single pole). Although this is
slower than the first-order system, it is the fastest possible second-order system for a given
natural frequency ωn ! This may seem contradictory, but will be clarified later in the summary
subsection 4.5.3 below.
The defining characteristic of the critically damped second-order system is that the poles are real and repeated, i.e. $\Delta = 0$ and $\lambda_1 = \lambda_2 = \lambda$. Therefore,
$$\zeta = \frac{1}{2}\frac{\lambda_1+\lambda_2}{\sqrt{\lambda_1\lambda_2}} = \frac{1}{2}\frac{\lambda+\lambda}{\sqrt{\lambda\lambda}} = \frac{1}{2}\frac{2\lambda}{\sqrt{\lambda^2}} = \frac{1}{2}\frac{2\lambda}{\lambda}$$
$$\therefore\ \zeta = 1$$
For some of the questions in Activity 4.1, the poles were complex, with both a real and an imaginary part. I.e.
$$\lambda_1 = \sigma + j\omega_d \qquad \lambda_2 = \sigma - j\omega_d \qquad \therefore\ \lambda_2 = \overline{\lambda_1}$$
This gave rise to a step response that was of the following form,
$$c(t) = 1 - k_0e^{-\sigma t}\cos(\omega_dt - \phi)$$
Here $\frac{1}{\sigma}$ is exactly equivalent to the time constant, and $\sigma$ is called the exponential decay frequency. The sinusoid has a frequency and phase shift. The frequency is incidentally $\omega_d$, the imaginary component of $\lambda$. The only issue is the exact values of $k_0$ and $\phi$. These can be solved from the step response of the original complex-pole partial fraction decomposition.
$$C(s) = \frac{\lambda\overline{\lambda}}{s(s+\lambda)(s+\overline{\lambda})}$$
$$= \frac{1}{s} + \frac{-\overline{\lambda}}{(-\lambda+\overline{\lambda})}\frac{1}{s+\lambda} + \frac{\lambda}{(\overline{\lambda}-\lambda)}\frac{1}{s+\overline{\lambda}}$$
$$= \frac{1}{s} - \frac{1}{2}\left(1 - \frac{\sigma}{j\omega_d}\right)\frac{1}{s+\lambda} - \frac{1}{2}\left(\frac{\sigma}{j\omega_d} + 1\right)\frac{1}{s+\overline{\lambda}}$$
Rearranging and taking the inverse transform (combining the conjugate terms via Euler's identity) gives the sinusoidal form derived geometrically below.
Geometrically, σ + jωd represents a complex number, that has a magnitude and argument. The
geometric relation is shown in Figure 4.6.
[Geometric sketch: the pole at $-\sigma + j\omega_d$ drawn as a vector of length $r$ from the origin, with $\omega_n = r = \sqrt{\sigma^2+\omega_d^2}$, the angle $\theta$ measured from the negative real axis such that $\zeta = \cos(\theta)$, and $\phi$ the complementary angle.]
Figure 4.6: Geometric relationship of: the exponential decay frequency σ; the damped frequency
ωd ; theta θ and phi φ; the damping ratio ζ; and the natural frequency ωn
Geometrically, $\tan(\phi) = \frac{\sin(\phi)}{\cos(\phi)} = \frac{\sigma/\sqrt{\sigma^2+\omega_d^2}}{\omega_d/\sqrt{\sigma^2+\omega_d^2}} = \frac{\sigma}{\omega_d}$. Therefore,
$$c(t) = 1 - e^{-\sigma t}\left[\cos(\omega_dt) + \frac{\sigma}{\omega_d}\sin(\omega_dt)\right] \tag{4.13}$$
$$= 1 - e^{-\sigma t}\left[\cos(\omega_dt) + \frac{\sin(\phi)}{\cos(\phi)}\sin(\omega_dt)\right]$$
$$= 1 - \frac{1}{\cos\phi}e^{-\sigma t}\left[\cos(\phi)\cos(\omega_dt) + \sin(\phi)\sin(\omega_dt)\right]$$
$$= 1 - \frac{\sqrt{\sigma^2+\omega_d^2}}{\omega_d}e^{-\sigma t}\cos(\omega_dt - \phi) \tag{4.14}$$
$$= 1 - \frac{\sqrt{\sigma^2+\omega_d^2}}{\omega_d}e^{-\sigma t}\cos\left(\omega_dt - \arctan\frac{\sigma}{\omega_d}\right) \tag{4.15}$$
The geometry of the poles is generalised further for every case in subsection 4.5.3. Additionally,
this can be related back to the damping ratio ζ and natural frequency ωn .
Firstly, the natural frequency is simply equal to the magnitude of the complex number, i.e.:
$$\omega_n = \sqrt{\lambda\overline{\lambda}}$$
$$\omega_n = |\lambda| = \sqrt{\sigma^2+\omega_d^2} \tag{4.16}$$
Secondly, the damping ratio ζ can be determined from the primary definition in Equation 4.10a,
$$\zeta = \frac{1}{2}\frac{\lambda+\overline{\lambda}}{\sqrt{\lambda\overline{\lambda}}} = \frac{1}{2}\frac{(\sigma+j\omega_d)+(\sigma-j\omega_d)}{\omega_n} = \frac{1}{2}\frac{2\sigma}{\sqrt{\sigma^2+\omega_d^2}}$$
$$\zeta = \frac{\sigma}{\sqrt{\sigma^2+\omega_d^2}} = \frac{\sigma}{\omega_n} \tag{4.17}$$
The undamped system is readily continued from the underdamped system. Specifically, the poles of the undamped system are purely imaginary. In other words, the real component $\sigma = 0$ (and $\zeta = \cos(\frac{\pi}{2}) = 0$, i.e. the damping is zero). Substituting $\sigma = 0$ into the underdamped step response equation
$$c(t) = 1 - \frac{\sqrt{0^2+\omega_d^2}}{\omega_d}e^{-0t}\cos\left(\omega_dt - \arctan\frac{0}{\omega_d}\right)$$
$$= 1 - \frac{\omega_d}{\omega_d}e^{0}\cos(\omega_dt - \arctan(0))$$
$$c(t) = 1 - \cos(\omega_dt) \tag{4.19}$$
Importantly for comparison, the undamped system simply has no real component σ that exists
as a decaying exponential to “dampen” the oscillatory behaviour of the system. Such systems
are not unstable in the same sense as an exponential growth (with or without oscillatory
behaviour) would be, by exploding to infinity. However, they do not approach a limiting value
as is required by stable systems. For this reason, a pure oscillatory system is still considered
unstable, but is also sometimes referred to as marginally stable.
Determine whether each of the following second-order systems is overdamped, critically damped, underdamped or undamped.
1. $G(s) = \frac{1}{s^2+10s+24}$
2. $G(s) = \frac{1}{s^2+8s+16}$
3. $G(s) = \frac{1}{s^2+4s+5}$
4. $G(s) = \frac{1}{s^2+4}$
Solution:
1. $G(s) = \frac{1}{(s+4)(s+6)}$. The poles are real and distinct, so the system is overdamped.
2. $G(s) = \frac{1}{(s+4)^2}$. Additionally, $\omega_n = \sqrt{16} = 4$, therefore $\zeta = \frac{8}{2\omega_n} = 1$. Since $\zeta = 1$, the system is critically damped.
3. $G(s) = \frac{1}{(s+2-i)(s+2+i)}$. The poles are complex conjugates with a non-zero real part, so the system is underdamped.
4. $G(s) = \frac{1}{(s+2i)(s-2i)}$. The poles are purely imaginary, so the system is undamped.
Now classify the following systems in the same way.
1. $G(s) = \frac{1}{s^2+12s+37}$
2. $G(s) = \frac{1}{s^2+16}$
3. $G(s) = \frac{1}{s^2+4}$
4. $G(s) = \frac{1}{s^2+8s+32}$
5. $G(s) = \frac{1}{s^2+2s+17}$
6. $G(s) = \frac{1}{s^2+20s+100}$
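The damp function can be used to check your classifications (a sketch, assuming the Control System Toolbox; it lists each pole with its damping ratio ζ and natural frequency ωn):

>> damp(tf(1,[1 12 37]))   % zeta > 1 indicates overdamped, zeta = 1 critically damped,
>> damp(tf(1,[1 2 17]))    % 0 < zeta < 1 underdamped, zeta = 0 undamped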
There are a few time-domain parameters of a second-order system. These parameters are typical for an underdamped system. These parameters are the peak time $t_p$, the percentage overshoot $\%OS$, the settling time $t_s$, and the rise time $t_r$.
The first three are very useful practical parameters for determining the damping ratio and
natural frequency of an unknown system from a step response. Importantly the first two
(tp and %OS) are uniquely defined for underdamped systems only, and are meaningless for
overdamped, critically damped and undamped systems [1–3]. The settling time ts can be found
for each of the stable systems, overdamped, critically damped and underdamped systems, but
not the undamped system (which is strictly unstable or at least marginally stable) [1–3]. The
rise-time tr , however, is a bit more interesting. tr is defined differently for underdamped systems,
versus the other type of stable second-order systems. Generally, tr is not as helpful other than
as a practical specification. These parameters are discussed in detail below with the concepts
compiled from [1–3] and are graphically referenced both explicitly and implicitly to Figure 4.7.
[Figure 4.7: underdamped step response, showing the peak value c(tp) at the peak time tp, the final value c(∞), and the settling time ts; the dotted red decaying exponentials form the response envelope.]
Peak Time, tp
The peak time is defined as the time of the maximum peak of the underdamped second-order
system step response. To relate this parameter to the damping ratio ζ and the natural frequency
ωn , we note that this peak occurs at a “local maximum”. I.e. the function has a gradient = 0
at the peak time. Therefore, this time can be related back to the derivative of the equation.
The step response of a second-order system is
$$c(t) = 1 - \frac{\sqrt{\sigma^2+\omega_d^2}}{\omega_d}e^{-\sigma t}\cos(\omega_dt - \phi)$$
However, the intention is to relate tp to ζ and ωn . This can be done by going back to the
frequency domain step response
$$C(s) = \frac{\lambda\overline{\lambda}}{s(s+\lambda)(s+\overline{\lambda})} = \frac{\omega_n^2}{s(s^2+2\zeta\omega_ns+\omega_n^2)}$$
Both equations will be used to find the expression for the peak time. Continuing with C(s), we
can use frequency-domain differentiation (multiplication by s) to find the expression. Therefore,
$$sC(s) = \frac{\omega_n^2}{s^2+2\zeta\omega_ns+\omega_n^2}$$
By completing the square in the denominator, we get
$$sC(s) = \frac{\omega_n^2}{s^2+2\zeta\omega_ns+(\zeta\omega_n)^2-(\zeta\omega_n)^2+\omega_n^2} = \frac{\omega_n^2}{(s+\zeta\omega_n)^2+\omega_n^2(1-\zeta^2)}$$
$$= \frac{\omega_n}{\sqrt{1-\zeta^2}}\cdot\frac{\omega_n\sqrt{1-\zeta^2}}{(s+\zeta\omega_n)^2+\omega_n^2(1-\zeta^2)}$$
Using the Laplace Table C.1, we can find the inverse transform,
$$\dot{c}(t) = \frac{\omega_n}{\sqrt{1-\zeta^2}}e^{-\zeta\omega_nt}\sin\left(\omega_n\sqrt{1-\zeta^2}\,t\right)$$
Then the expression for the peak time can be found, by substituting $t_p$ and knowing that $\dot{c}(t_p) = 0$,
$$\dot{c}(t_p) = \frac{\omega_n}{\sqrt{1-\zeta^2}}e^{-\zeta\omega_nt_p}\sin\left(\omega_n\sqrt{1-\zeta^2}\,t_p\right) = 0$$
$$\Rightarrow \sin\left(\omega_n\sqrt{1-\zeta^2}\,t_p\right) = 0$$
$$\therefore\ \omega_n\sqrt{1-\zeta^2}\,t_p = n\pi$$
The $n\pi$ arises from the fact that the standard sine function is $= 0$ every $n\pi$ radians, $n \in \mathbb{Z}$. In this case, we are interested in the principal value $n = 1$, corresponding to the first peak. Therefore
$$t_p = \frac{\pi}{\omega_n\sqrt{1-\zeta^2}} \tag{4.20}$$
Alternatively, the derivative of the step response $c(t)$ can be taken directly, therefore,
$$\frac{d}{dt}c(t) = -\frac{\sqrt{\sigma^2+\omega_d^2}}{\omega_d}\left[(-\sigma)e^{-\sigma t}\cos(\omega_dt-\phi) + e^{-\sigma t}(-\omega_d\sin(\omega_dt-\phi))\right]$$
$$= \frac{\sqrt{\sigma^2+\omega_d^2}}{\omega_d}e^{-\sigma t}\left(\sigma\cos(\omega_dt-\phi) + \omega_d\sin(\omega_dt-\phi)\right)$$
Geometrically, $\frac{\sigma}{\omega_n} = \cos(\theta)$ and $\frac{\omega_d}{\omega_n} = \sin(\theta)$. Remembering that $\omega_n = \sqrt{\sigma^2+\omega_d^2}$, we get
$$\frac{d}{dt}c(t) = \frac{1}{\sin(\theta)}e^{-\sigma t}\left(\omega_n\cos(\theta)\cos(\omega_dt-\phi) + \omega_n\sin(\theta)\sin(\omega_dt-\phi)\right)$$
$$= \frac{\omega_n}{\sin(\theta)}e^{-\sigma t}\cos(\omega_dt-\phi-\theta)$$
$$= \frac{\omega_n}{\sin(\theta)}e^{-\sigma t}\cos(\omega_dt-(\phi+\theta))$$
Geometrically $\phi + \theta = \frac{\pi}{2}$ are complementary angles. Therefore
$$\frac{d}{dt}c(t) = \frac{\omega_n}{\sin(\theta)}e^{-\sigma t}\cos\left(\omega_dt-\frac{\pi}{2}\right) = \frac{\omega_n}{\sin(\theta)}e^{-\sigma t}\sin(\omega_dt)$$
$$0 = \sin(\omega_dt_p) \Rightarrow t_p = \frac{n\pi}{\omega_d}$$
For the same reasoning as above, $n = 1$ and
$$t_p = \frac{\pi}{\omega_d} \tag{4.21}$$
This expression helps in determining the peak time from the pole's damped frequency, or its imaginary component. Note that this also implies that $\omega_d = \omega_n\sqrt{1-\zeta^2}$. This makes sense since, geometrically, $\frac{\omega_d}{\omega_n} = \sin(\theta) = \sqrt{1-\cos^2(\theta)} = \sqrt{1-\zeta^2}$. Therefore it is mathematically consistent.
The percentage overshoot is readily continued from the concept of the peak time, since it directly
relates to the peak time. The percentage overshoot is the percentage of the peak overshoot
value, occurring at the peak time, compared to the final value. The final value occurs when
t → ∞, and for a normalised step response, this is c(∞) = 1. The base expression for the
percentage overshoot is,
$$\%OS = \frac{c(t_p) - c(\infty)}{c(\infty)} \times 100$$
From the geometrical relationship of the poles, $\frac{\omega_d}{\omega_n} = \sin(\theta) = \sqrt{1-\cos^2(\theta)} = \sqrt{1-\zeta^2}$. Therefore, substituting $t_p$ into the step response,
$$\%OS = -e^{-\pi\frac{\sigma}{\omega_n\sqrt{1-\zeta^2}}}\left[\cos\left(\pi\sqrt{1-\zeta^2}\frac{1}{\sqrt{1-\zeta^2}}\right) + \frac{\sigma}{\omega_d}\sin\left(\pi\sqrt{1-\zeta^2}\frac{1}{\sqrt{1-\zeta^2}}\right)\right] \times 100$$
$$= -e^{-\pi\frac{\sigma}{\omega_n\sqrt{1-\zeta^2}}}\left[\cos(\pi) + \frac{\sigma}{\omega_d}\sin(\pi)\right] \times 100$$
$$= -e^{-\pi\frac{\sigma}{\omega_n\sqrt{1-\zeta^2}}}\left[(-1) + \frac{\sigma}{\omega_d}(0)\right] \times 100$$
Realising that $\sigma = \zeta\omega_n$, the final equation for the percentage overshoot is,
$$\%OS = e^{-\pi\frac{\zeta\omega_n}{\omega_n\sqrt{1-\zeta^2}}} \times 100$$
$$\%OS = e^{-\frac{\pi\zeta}{\sqrt{1-\zeta^2}}} \times 100 \tag{4.22}$$
A fundamental takeaway from this is that the overshoot is only dependent on the damping ratio $\zeta$. It is in fact, from a practical point, the damping ratio that characterises and determines the overshoot. Because of this relationship, in a practical application the damping ratio is also expressed in terms of the $\%OS$. This means that the damping can "easily" be found from a step response waveform of an underdamped second-order system, i.e.
$$\zeta = \frac{-\ln(\%OS/100)}{\sqrt{\pi^2 + \ln^2(\%OS/100)}} \tag{4.23}$$
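Equation 4.23 is a one-liner in MATLAB (a sketch using base-MATLAB functions; the 25% overshoot is an arbitrary illustrative value):

>> OS = 25;                                         % measured percentage overshoot
>> zeta = -log(OS/100)/sqrt(pi^2 + log(OS/100)^2)   % Eq. 4.23, gives approx 0.404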
Settling time, ts
The settling time is a more arbitrary metric purely for practical use. Much like in first-order
systems, the settling time is the time taken for the step function to be guaranteed to be within
a certain percentage S% of the final value. Various settling values are common (as mentioned
in the first-order system settling time), these are 10%, 5%, 2% and 1% (or indeed any “valid”
choice). Importantly, this is only dependent on the scaled exponential component of the second-order system, i.e. $\frac{\sqrt{\sigma^2+\omega_d^2}}{\omega_d}e^{-\sigma t}$. This is because the sinusoidal component $\cos(\omega_dt - \phi) \leq 1$.
Additionally, the scaled exponential forms the “envelope” of the response about the unitary
step response, as can be seen as the dotted red decaying exponentials in Figure 4.7; clearly, the
step response is confined between the two exponentials. So for a normalised step response, the
(arbitrary) settling time can be determined from the time taken for the scaled exponential to
reach the desired percentage. I.e.
$$\frac{\sqrt{\sigma^2+\omega_d^2}}{\omega_d}e^{-\sigma t_{s\%}} = S\%$$
Manipulating the equation to solve for the settling time $t_{s\%}$, gives
$$e^{-\sigma t_{s\%}} = \frac{\omega_d}{\sqrt{\sigma^2+\omega_d^2}}S\%$$
$$-\sigma t_{s\%} = \ln\left(\frac{\omega_d}{\sqrt{\sigma^2+\omega_d^2}}S\%\right)$$
$$\therefore\ t_{s\%} = -\frac{\ln\left(\frac{\omega_d}{\sqrt{\sigma^2+\omega_d^2}}S\%\right)}{\sigma} \tag{4.24}$$
This form is convenient if the relationship between the system settling time and system poles is important. However, the damping-ratio and natural-frequency form is also of practical use. Noting again that $\frac{\omega_d}{\sqrt{\sigma^2+\omega_d^2}} = \sin\theta = \sqrt{1-\cos^2\theta} = \sqrt{1-\zeta^2}$ and $\sigma = \zeta\omega_n$, then the settling time is,
$$t_{s\%} = -\frac{\ln\left(S\%\sqrt{1-\zeta^2}\right)}{\zeta\omega_n} \tag{4.25}$$
The settling time for overdamped and critically damped systems can also be defined.
For an over-damped system, the effect of the dominant pole (the pole closest to s=0) is
similar to a first-order system. The settling time is therefore approximately that of the first-
order system. I.e. assume $\lambda_1$ is the dominant pole, then
$$c(t) \approx 1 - k_1e^{-\lambda_1t}$$
Therefore the settling time can be approximated to be the same as that of the first-order system,
$$t_{s2\%} \approx \frac{4}{\lambda_1} \tag{4.27}$$
This approximation fails the "closer" $\lambda_2$ is to $\lambda_1$.
For the critically damped systems, the relation is a bit more complicated. The relation is made as follows (for a normalised step response),
$$(1 + \lambda t_s)e^{-\lambda t_s} = S\%$$
This has an analytical solution; however, it involves the use of the Lambert W function (or product log function) [11]. The nature of this function is beyond the scope of this module, and the final expression is therefore not derived here.
Rise time, tr
In this section, the rise time for each of the stable system types is shown. This is to highlight
the subtle differences between each of the systems and the reasoning behind why the rise time
for an underdamped system can be found using two similar, but different methods. Importantly
the rise time of importance for this module is that of the underdamped system.
The rise time for overdamped and critically damped systems is defined identically to a first-
order system. The Rise time is defined as the time for the output to go from 10% to 90% of
the final value. However, the actual calculations are quite different.
For an over-damped system, the same rationale can be used as for the settling time. The rise time is approximately dependent on the system's dominant pole only. I.e. assume $\lambda_1$ is the dominant pole; then the response approximates that of a first-order system with pole $s = -\lambda_1$. Therefore the rise time can be approximated to be the same as that of the first-order system, $t_{r10-90} \approx \frac{2.2}{\lambda_1}$. This approximation fails the "closer" $\lambda_2$ is to $\lambda_1$.
Solving the rise-time for a critically damped system using the tr10−90 definition is analytically
possible. However, it again requires the use of the Lambert W function [11] (as before with the
settling time) which is beyond the scope of the module. The analytical result for t10 and t90 of
a normalised critically damped system is given as
$$t_{10} = -\frac{W_{-1}\left(\frac{1}{e}(c(t_{10})-1)\right)+1}{\lambda} \qquad t_{90} = -\frac{W_{-1}\left(\frac{1}{e}(c(t_{90})-1)\right)+1}{\lambda}$$
$$t_{10} = -\frac{W_{-1}\left(-\frac{9}{10e}\right)+1}{\lambda} \approx \frac{0.531812}{\lambda} \qquad t_{90} = -\frac{W_{-1}\left(-\frac{1}{10e}\right)+1}{\lambda} \approx \frac{3.88972}{\lambda}$$
For underdamped second order systems, the rise time is defined in one of two ways.
Firstly as the tr10−90 rise time, however, this definition does not have an analytical solution. It
is solved using numerical methods and can be seen in chapter 4, Evaluation of Tr , in Nise [1].
The more usable, and more widely used definition of rise time for an underdamped system is
the tr0−100 , which also has an analytical solution. This is possible since the overshoot of the
underdamped system means that the response will reach a 100% level. Since c(t0% ) = 0, tr
simply reduces to the first time interval for the function to reach the c(t100% ) or c(∞) value.
Solving for this gives,
$$c(t_{100}) = 1 - \frac{\sqrt{\sigma^2+\omega_d^2}}{\omega_d}e^{-\sigma t_r}\cos(\omega_dt_r - \phi) = 1$$
$$\therefore\ 0 = -\frac{\sqrt{\sigma^2+\omega_d^2}}{\omega_d}e^{-\sigma t_r}\cos(\omega_dt_r - \phi)$$
However, $\sigma$ and $\omega_d$ are non-zero, i.e. $\frac{\sqrt{\sigma^2+\omega_d^2}}{\omega_d} \neq 0$ and $e^{-\sigma t_r} > 0$. Therefore the only factor that can satisfy the equation is $\cos(\omega_dt_r - \phi)$. Therefore,
$$\cos(\omega_dt_r - \phi) = 0$$
$$\omega_dt_r - \phi = \frac{\pi}{2} + n\pi,\quad n \in \mathbb{Z}$$
The solution of interest is when $n = 0$, the "first" zero of $\cos(\omega_dt_r - \phi)$. Additionally, $\frac{\pi}{2} + \phi = \pi - \theta$ geometrically. Furthermore, using the geometric relationships, $\omega_d = \omega_n\frac{\omega_d}{\omega_n} = \omega_n\sin(\theta) = \omega_n\sqrt{1-\cos^2(\theta)} = \omega_n\sqrt{1-\zeta^2}$. Therefore, the rise time can be found in terms of the natural frequency and damping ratio,
$$t_r = \frac{\pi - \arccos(\zeta)}{\omega_n\sqrt{1-\zeta^2}} \tag{4.30}$$
Note that this only works for $0 \leq \zeta < 1$, as the $\arccos\zeta$ function is only defined for inputs between $-1$ and $1$, with the restriction here that $\zeta$ is positive. Naturally, this means that the rise time is inversely proportional to the system's natural frequency. A false conclusion is that the fastest possible response is when $\zeta \to 0$. Although this is true for the explicit value of rise time, it must not be forgotten that $\zeta$ also contributes to $\%OS$, $t_p$ and $t_s$. Therefore, the closer $\zeta$ is to zero, the more the step response oscillates (by the nature of the damping $\zeta \to 0$). This gives a false indication of the actual speed (which is more appropriately quantified by $t_s$), and more importantly, the "quality" of the step response.
For additional help and clarity on the time parameters, watch the following videos: Intro to Control - 9.1 System Time Response Terms (7:26) [9], Time Domain Specifications: Second Order Control System (8:05) [12], and rise time, peak time, peak overshoot, settling time and steady state error (35:42) [13].
Find the time response parameters ($\omega_n$, $\zeta$, and $t_p$, $\%OS$, $t_s$ and $t_r$ where applicable) of the following systems:
1. $G(s) = \frac{1}{s^2+2s+2}$
2. $G(s) = \frac{2501}{s^2+100s+2501}$
3. $G(s) = \frac{1}{s^2+4s+3}$
4. $G(s) = \frac{1}{s^2+28s+212}$
5. $G(s) = \frac{2669}{s^2+26s+2669}$
6. $G(s) = \frac{1}{s^2+600s+90000}$
Solution:
1. The factorised function is $G(s) = \frac{1}{(s+1+i)(s+1-i)}$. Therefore
1. ωn = 1.414
2. ζ = 0.707
3. tp = πs
4. %OS = 4.3%
5. ts = 4.26s
6. tr = 2.36s
2. The factorised function is $G(s) = \frac{1}{(s+50+i)(s+50-i)}$. Therefore
1. $\omega_n = 50.01$
2. ζ = 0.999 ≈ 1
3. tp = πs
4. %OS negligible
5. ts = 0.156s
6. tr = 3.122s
3. The factorised function is $G(s) = \frac{1}{(s+1)(s+3)}$. Therefore
1. ωn = 1.732
2. ζ = 1.155
3. ts = 3.912s
4. tr = 2.2s
4. The factorised function is $G(s) = \frac{1}{(s+14+4i)(s+14-4i)}$. Therefore
1. ωn = 14.56
2. ζ = 0.962
3. tp = 0.785s
4. %OS negligible
5. ts = 0.372s
6. tr = 0.716s
5. The factorised function is $G(s) = \frac{1}{(s+13+50i)(s+13-50i)}$. Therefore
1. $\omega_n = 51.66$
2. $\zeta = 0.252$
3. $t_p = 0.062$s
4. $\%OS = 44.2\%$
5. $t_s = 0.304$s
6. $t_r = 0.036$s
6. The factorised function is $G(s) = \frac{1}{(s+300)^2}$. Therefore
1. ωn = 300
2. ζ = 1
3. ts = 0.020s
4. $t_r = 0.011$s
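These hand calculations can be compared against stepinfo (a sketch, assuming the Control System Toolbox; shown for the first system — note the 0–100% rise time of Eq. 4.30 must be computed separately, since stepinfo uses the 10–90% definition by default):

>> G = tf(1,[1 2 2]);                            % system 1, poles at -1 +/- i
>> stepinfo(G)                                   % reports Overshoot approx 4.3%, PeakTime approx pi
>> zeta = 0.707; wn = sqrt(2);
>> tr = (pi - acos(zeta))/(wn*sqrt(1-zeta^2))    % Eq. 4.30, approx 2.36 s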
Estimate the transfer function of a system from its measured unit-step response, shown below.
[Step-response graph: normalised output with a visible overshoot before settling at 1; vertical axis from 0.2 to 1.2, time axis in units of 10⁻² seconds.]
Solution:
The time parameters can be estimated from the graph.
Considering the %OS ≈ 9.5%, then ζ can be found using Equation 4.23,
$$\zeta = \frac{-\ln(\%OS/100)}{\sqrt{\pi^2+\ln^2(\%OS/100)}} = \frac{-\ln(0.095)}{\sqrt{\pi^2+\ln^2(0.095)}} = 0.59962 \approx 0.6$$
This is approximately the damping of the poles of the system that generated the graph, $\lambda_1, \lambda_2 = -300 \pm 400i$ (which for this example is provided by an omniscient mythical being), and an accuracy of about 98% is acceptable.
Substituting as needed,
$$G(s) = \frac{240100}{s^2+588s+240100}$$
and alternatively in factorised form
$$G(s) = \frac{240100}{(s+294-392i)(s+294+392i)}$$
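The estimate can be checked in MATLAB (a sketch, assuming the Control System Toolbox): rebuild G(s) from the estimated ζ and ωn, and confirm that its step response reproduces the measured overshoot.

>> zeta = 0.6; wn = sqrt(240100);        % estimated parameters (wn = 490)
>> G = tf(wn^2,[1 2*zeta*wn wn^2]);      % standard form, Eq. 4.8
>> S = stepinfo(G); S.Overshoot          % approx 9.5 %, matching the graph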
[Figure 4.8: s-plane contour of constant natural frequency — a circle of radius ωn centred at the origin, with the angle θ measured from the negative real axis.]
Figure 4.8 summarises the concepts of poles and their locations such that they refer to the same natural frequency, i.e. the figure indicates the circle of constant natural frequency. This includes the pure-real straight-line "circle". From this, the $\zeta$ value can be qualitatively understood. On the imaginary axis, $\zeta = 0$; on the actual circle, $0 < \zeta < 1$; where the circle intercepts the (negative) real axis, $\zeta = 1$; anywhere else on the real axis the poles are effectively reciprocals of one another (their product is equal to the constant $\omega_n^2$), and $\zeta > 1$.
For a general second-order system, $\zeta$ is equal to the arithmetic mean of the pole factor values, $\frac{\lambda_1+\lambda_2}{2}$, divided by the geometric mean of the pole factor values, $\sqrt{\lambda_1\lambda_2}$. Additionally, the natural frequency is simply the geometric mean.
$$\zeta = \frac{\lambda_1+\lambda_2}{2\sqrt{\lambda_1\lambda_2}} \quad \text{(arithmetic mean)/(geometric mean)} \tag{4.31}$$
$$\omega_n = \sqrt{\lambda_1\lambda_2} \tag{4.32}$$
Systems are characterised by, and compared with, their "speed". Ideally, a system will respond to a step input by producing the exact same step output. The second-order system parameters $t_s$, $t_p$, $\%OS$ and $t_r$ provide a means to determine how close the output is to a real step response. Ideally the step is characterised as having no overshoot, and hence no peak, and the step is immediate, i.e. $t_s = t_p = \%OS = t_r = 0$. This means that faster systems are characterised as having these parameters as close to zero as possible.
Systems are first compared to each other by their natural frequency ωn . The higher the natural
frequency the faster the system. However, the comparative effects of ζ cannot be ignored.
As explained above, the rise time is a poor indicator of the system’s speed from a qualitative
perspective. Indeed the rise time is the shortest if ωn is large and ζ is small. However, a system
with small zeta has a poor qualitative “step” shape. The quality is better quantitatively
characterised by the settling time, peak time and overshoot. The second two parameters are
specific to underdamped systems.
$$\%OS = e^{-\frac{\pi\zeta}{\sqrt{1-\zeta^2}}} \tag{4.33}$$
$$= e^{-\pi\cot(\theta)} \tag{4.34}$$
This is minimum iff ζ → 1 ⇒ θ → 0, since cot θ → ∞ and therefore e−π cot θ → 0. For systems
with the same overshoot, ζ and hence θ is constant. This corresponds to a ray on the pole-zero
plot on the s-plane. The fastest system is then the system with the largest natural frequency,
ωn , i.e. the pole furthest away from the origin s = 0.
The peak time is dependent on both $\omega_n$ and $\zeta$, but in such a way that systems with the same peak time are characterised by having the same damped frequency $\omega_d$ (the imaginary component of the system poles), since $\omega_d = \omega_n\sin(\theta) = \omega_n\sqrt{1-\zeta^2}$. I.e.
$$t_p = \frac{\pi}{\omega_n\sqrt{1-\zeta^2}} \tag{4.35}$$
$$= \frac{\pi}{\omega_d} \tag{4.36}$$
hence, systems with the same damped frequency ωd (or same imaginary Im(s) component) have
the same peak-time [1–3].
Systems compared with a constant peak time will have their poles on the same horizontal line
in the s-domain. The step response is better approximated with the real component σ moving
towards the left. This directly affects the exponential envelope e−σt of the system step response.
I.e. the envelope becomes a faster-decaying exponential the larger sigma gets. Additionally,
both ζ and ωn change, ζ gets smaller (the system quality is improving) and ωn gets larger (the
system speed is increasing).
The settling time is also dependent on both $\omega_n$ and $\zeta$, but reduces to simply being only dependent on $\sigma = \zeta\omega_n$, i.e. $t_{s2\%} \approx \frac{4}{\sigma}$. However, this is specifically for the settling time of 2% of the final value. The more general expression for a chosen settling tolerance $s\%$ of the final value is
$$t_{s\%} = -\frac{\ln\left(s\%\sqrt{1-\zeta^2}\right)}{\zeta\omega_n} = -\frac{\ln(s\%\sin(\theta))}{\zeta\omega_n}$$
$$= -\frac{\ln\left(s\%\sqrt{1-\zeta^2}\right)}{\sigma} = -\frac{\ln(s\%\sin(\theta))}{\sigma}$$
If ζ > 1 (the overdamped system case), then the pole closest to zero dominates, but will
necessarily have a slower response time (compared to a system with the same natural frequency),
since this translates to a slower decaying exponential [1, 3].
This means that the most ideal and fastest second-order step response (for the family of systems
with the same natural frequency contours, as seen in Figure 4.8) is when ζ = 1, a critically
damped system, for a given ωn [1–3].
A pure zero in the time domain is meaningless to control engineering. The inverse Laplace transform of the transfer function $G(s) = s + a$ resolves to $\frac{d}{dt}\delta(t) + a\delta(t)$. This is the sum of the derivative of an impulse and a scaled impulse, again meaningless in a practical sense. Additionally, this is a non-causal system. However, the combination of a zero with a pole (as is more common in real systems) does provide more insight into the effect that a zero has in the time domain. For simplicity, consider the zero (and pole) confined to the real axis in the following transfer function:
$$G(s) = \frac{s+b}{s+a}$$
$$C(s) = \frac{1}{s}\frac{s+b}{s+a}$$
To find the inverse transform, the partial fraction decomposition is found. However, the char-
acteristics of this are only directly dependent on the denominators, i.e. the characteristic is
determined by the poles.
$$C(s) = \frac{K_1}{s} + \frac{K_2}{s+a} \tag{4.37}$$
The original transfer function $\frac{1}{s}\frac{1}{s+a}$ has an "identical" partial-fraction form $\frac{K_1}{s} + \frac{K_2}{s+a}$. The only direct effect the zero has is scaling the numerator coefficients, $K_1$ and $K_2$. There is a special
case where a = b, i.e. the pole and zero are equal. This is called pole-zero cancellation. This
is covered in more detail in subsection 4.7.4. What will be left is $G(s) = 1$ and subsequently $C(s) = \frac{1}{s} \Rightarrow c(t) = u(t)$, i.e. the system follows the step response perfectly. This is not practically realisable though, and pole-zero cancellation in this way is not ideal or even possible in practice. More on this is discussed later.
Another way of interpreting the effect of a zero on a transfer function is to “isolate and separate”
the zero from the rest of the function. I.e.
$$G(s) = \frac{s+b}{s+a} = (s+b)\frac{1}{s+a} = G_0(s)(s+b)$$
Assume that $C_0(s)$ is the intermediate output of the system $G_0(s)$ to a unit step input, i.e.:
$$C_0(s) = \frac{1}{s}\frac{1}{s+a}$$
Since this is a simple first-order system, it will have a first-order response 1 − e−at . Then the
effect of the zero can be fully understood. The final output is
$$C(s) = C_0(s)(s+b) = sC_0(s) + bC_0(s)$$
This can be transformed back; however, understanding that $sC_0(s)$ is simply the derivative of $c_0(t)$, the resultant output is the sum of the derivative and a scaled version of the original output. If $b$ is "relatively large" (positive or negative) then the $bC_0(s)$ term dominates and the effect of the zero is simply a "large" gain. Therefore, $c(t) \approx b(1 - e^{-at})$. If $b$ is relatively small (or $= 0$), then the derivative has a dominant effect, and $c(t) \approx ae^{-at}$.
To understand more of the detail and effect of a zero, a second order system that has a zero
can also be analysed, i.e.:
$$G(s) = \frac{(s+b)}{(s+\lambda_1)(s+\lambda_2)} = \frac{-\lambda_1+b}{-\lambda_1+\lambda_2}\frac{1}{s+\lambda_1} + \frac{-\lambda_2+b}{-\lambda_2+\lambda_1}\frac{1}{s+\lambda_2}$$
There are three possible scenarios, firstly |b| >> |λ1 |, |λ2 |, which is a (gain) dominant zero;
secondly b = 0, which is a pure derivative, and other derivative-dominant zeros; and thirdly b is
“close” to either λ1 or λ2 , which is effectively pole-zero cancellation covered in subsection 4.7.4.
In the first case, since |b| >> |λ1 |, |λ2 |, then −λ1 + b ≈ b and −λ2 + b ≈ b. Then,
$$G(s) \approx \frac{b}{-\lambda_1+\lambda_2}\frac{1}{s+\lambda_1} + \frac{b}{-\lambda_2+\lambda_1}\frac{1}{s+\lambda_2}$$
$$= b\left[\frac{1}{-\lambda_1+\lambda_2}\frac{1}{s+\lambda_1} + \frac{1}{-\lambda_2+\lambda_1}\frac{1}{s+\lambda_2}\right]$$
$$= b\,\frac{1}{(s+\lambda_1)(s+\lambda_2)} = \frac{b}{(s+\lambda_1)(s+\lambda_2)}$$
So if b is very negative compared to the poles, i.e. Re(b) << Re(λ1 ), Re(λ2 ), then the effect is
approximately a “pure” gain proportional to the zero’s value [1].
This is similar to the first-order explanation before: this is simply a derivative effect on the output of the transfer function $\frac{1}{(s+\lambda_1)(s+\lambda_2)}$.
The derivative effect of a zero occurs if the zero is very close to s = 0. There is another condition
where the zero can have a noticeable effect. For a system to be generally stable, no poles must
be located in the Right-Hand Plane of the s-plane. This is covered in more detail in Chapter 5.
Therefore, if there is a system that is assumed stable and there is a zero in the RHP then the
real part of the zero is positive compared to the poles, and the condition Re(b) > Re(λ1 ), Re(λ2 )
is certain. Then the intermediate transfer function becomes (and explicitly using $-b$ to indicate an RHP zero in the rational polynomial),
$$C(s) = (s-b)C_0(s) = sC_0(s) - bC_0(s)$$
This shows that the derivative component of the intermediate output has an opposite sign to
the gain component (regardless of what the actual “sign” is, the thing of significance is that
they have an opposite sign either way). If b is sufficiently large then again the gain component
dominates. However, in the early transient state, the derivative dominates. This is because the
nature of a derivative is to amplify differences. In the transient state of t > 0, the difference
between the step value 1 and the initial condition, which is assumed 0, is “large” and the
derivative dominates. As time moves forward, the (stable) system approaches the unit step
and the difference is small, therefore the gain dominates. However, in this specific scenario,
the gain and derivative are of opposite sign. Therefore, the derivative initially moves the
output away from the unit step reference input, this is called an “undershoot” [1]. This system
behaviour is referred to as a non minimum-phase system [1].
This is akin to steering a car, where the steering wheel is turned right, but the car momentarily turns left! In fact, this is a characteristic of bicycle models [14, 15]. For a more easily understood explanation, watch Most People Don't Know How Bikes Work (11:21) [16].
If a transfer function has a pole and zero, located at the exact same point in the s-plane, then
they cancel each other out. This is called pole-zero cancellation [1].
$$G(s) = \frac{(s+a)}{(s+a)(s+b)(s+c)} = \frac{\cancel{(s+a)}}{\cancel{(s+a)}(s+b)(s+c)} = \frac{1}{(s+b)(s+c)}$$
The poles determine the time behaviour of the system output. Therefore, pole-zero
cancellation removes some of the functional behaviour of the system output [1]. Exact
cancellation is only possible if the pole and zero coincide exactly, but the effect is approximately
the same if the pole and zero are merely “close” [1].
G(s) = (s + a + δa)/((s + a)(s + b)(s + c))
     = [(−a + a + δa)/((b − a)(c − a))]·1/(s + a) + [(a + δa − b)/((a − b)(c − b))]·1/(s + b) + [(a + δa − c)/((a − c)(b − c))]·1/(s + c)
     = [δa/((b − a)(c − a))]·1/(s + a) + [(a − b + δa)/((a − b)(c − b))]·1/(s + b) + [(a − c + δa)/((a − c)(b − c))]·1/(s + c)

The residue of the nearly-cancelled pole at s = −a is proportional to δa, so its contribution to the output is correspondingly small.
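As a quick numerical check of near pole-zero cancellation, the residues of the partial-fraction expansion can be computed (a sketch assuming numpy and scipy; the values a = 1, b = 2, c = 3 and the perturbation δa = 0.01 are illustrative):

import numpy as np
from scipy import signal

a, b, c, da = 1.0, 2.0, 3.0, 0.01
num = np.poly([-(a + da)])        # (s + a + da)
den = np.poly([-a, -b, -c])       # (s + a)(s + b)(s + c)

# signal.residue returns the residues r at the poles p of num/den.
r, p, _ = signal.residue(num, den)
for ri, pi in zip(r, p):
    print(f"pole {pi.real:+.2f}: residue {ri.real:+.5f}")
# The residue at s = -1 is ~0.005 = da/((b - a)(c - a)): nearly cancelled.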
Outcome 1 is complete.
The behaviour of systems of higher order can be qualitatively comprehended in a similar way
to the arguments of first- and second-order systems. Specifically, it can be noted that a second-
order system with two real, distinct poles (an overdamped system) behaves very similarly to
a first-order system. The quantitative behaviour is that such a second-order system has a
dominant pole (λ1 say, the closest pole to s = 0) and a so-called “transient” pole [1, 2]. The
second-order system behaves very much like a first-order system (with a pole at s = λ1 ) [1, 2].
This demonstrates the concept of a dominant pole(s) of a system [1–3]. This concept readily
extends to higher-order systems by saying that a higher-order system will have either a single
(real) dominant pole, or two (complex conjugate) dominant poles that are closer to s = 0 than
any other pole in the system. This means that higher-order systems can have either first-order
or second-order dominant behaviour responses to step inputs [1–3]. There is in reality some
small deviation from a pure first or second-order response, especially if the pole nearest s = 0
is not “dominant enough” [1–3].
This deviation is greatest and most extreme when the real pole λr is equal to the critical
damping point of the complex poles, ζωn. This is essentially a transition point where the step
response does not follow either a first-order system or a second-order system very strongly, but
has properties of both. Essentially, this is the emergent third-order system behaviour. This
behaviour is “most pure” when ζ = 1 and the system has three real repeated poles that are
the “closest” to the origin [1, 2].
Ideally, the dominant pole(s) is five time-constants away from the next closest pole [1, 2]. The
system behaviour then approximates a first- or second-order response according to the number
of dominant poles, as the sketch below illustrates.
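The sketch below (illustrative pole locations; numpy and scipy assumed) makes this concrete: a third-order system whose far poles are many time-constants away behaves almost exactly like its dominant first-order approximation.

import numpy as np
from scipy import signal

t = np.linspace(0, 6, 600)
# 120/((s + 1)(s + 10)(s + 12)): dominant pole at s = -1, unity DC gain.
full = signal.lti([120.0], np.poly([-1.0, -10.0, -12.0]))
dom  = signal.lti([1.0], [1.0, 1.0])    # first-order approximation 1/(s + 1)

_, y_full = signal.step(full, T=t)
_, y_dom  = signal.step(dom, T=t)
print("max deviation from the first-order approximation:",
      np.abs(y_full - y_dom).max())    # small, since the far poles decay quickly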
For further assistance on the concept of dominant poles (and zeros) and their relation to higher
order system behaviour, watch the following videos: Control Systems, Lecture 9: Dominant
poles and zeros. (19:47) [17], 107 Higher Order Systems - Part I (14:51) [18], and The Concept
of Dominant Pole (7:24) [19].
Outcome 6 is complete.
This solution can be modified to apply to state space models through matrix algebra. This is
also combined with the characteristic matrix of the state-space matrix equations, i.e. Φ(s) =
(sI − A)−1 [1–3]. The state transition matrix is the inverse-Laplace transform of the char-
acteristic matrix, i.e. Φ(t) = L−1 {(sI − A)−1 } [1–3]. However, this can be represented by
abstracting the first-order ordinary-differential equation to a matrix form.
This is almost identical to the ordinary scalar function version of the equation above. Consider
momentarily abusing the matrix algebra notation and writing 1/(sI − A); this would be the
matrix equivalent of 1/(s − a) from the O.D.E. above. This can be more appropriately abstracted
to the proper matrix representation of “division” by writing these as inverses, i.e.
1/(s − a) = (s − a)⁻¹, which is equivalent to the proper matrix notation (sI − A)⁻¹.

The inverse Laplace transform of (sI − A)⁻¹ can be found by the same abstraction: if
L⁻¹{1/(s − a)} = L⁻¹{(s − a)⁻¹} = e^{at}, then L⁻¹{(sI − A)⁻¹} = L⁻¹{Φ(s)} = e^{At} = Φ(t),
which is called the state transition matrix [1–3]. Finishing the matrix equation then gives,
This equation is the matrix equation form of the O.D.E. form seen above. The integral is
the convolution Φ(t) ∗ (Bu(t)). Note: this is a matrix equation and the order of matrix
multiplication (and pre-multiplication) matters! The e^{At}x(0) term is the initial-condition
component (usually zero when finding the zero-state response), and ∫₀ᵗ e^{A(t−τ)}Bu(τ)dτ is the
zero-state component. Since the actual time response is of interest, y(t) = Cx(t) + Du(t).
Assuming there is no feed-forward matrix, D = 0, then

y(t) = Cx(t)
     = C ∫₀ᵗ e^{A(t−τ)}Bu(τ)dτ
     = ∫₀ᵗ Ce^{A(t−τ)}Bu(τ)dτ        (4.43)
  or = ∫₀ᵗ CΦ(t − τ)Bu(τ)dτ          (4.44)

The state transition matrix can also be written as its power series,

e^{At} = Φ(t) = I + At + A²t²/2! + A³t³/3! + ⋯        (4.45)
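The series in Equation 4.45 can be checked numerically (a sketch with an illustrative 2 × 2 state matrix; scipy.linalg.expm computes the matrix exponential directly):

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t = 0.5

Phi_exact = expm(A * t)              # Phi(t) = e^{At}
Phi_series = np.zeros((2, 2))
term = np.eye(2)
for k in range(1, 15):               # accumulate I + At + (At)^2/2! + ...
    Phi_series += term
    term = term @ (A * t) / k

print(np.allclose(Phi_exact, Phi_series))   # True once enough terms are kept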
This means that Φ(t) can be sufficiently computed by considering successive derivatives of Φ(t)
at t = 0 [2], i.e.

Φ(0) = I        (4.46a)
Φ̇(0) = A        (4.46b)
Φ̈(0) = A²       (4.46c)
⋮
Since Φ(s) = (sI − A)⁻¹ = adj(sI − A)/det(sI − A), the partial fraction decomposition depends
only on det(sI − A) = (s + p₁)(s + p₂)(s + p₃)⋯. Writing Nᵢⱼ(s) for the entries of adj(sI − A),
Φ(t) can be expressed (illustrated here for a 2 × 2 case) as [1],

Φ(t) = L⁻¹ [ N₁₁(s)/((s + p₁)(s + p₂)⋯) ,  N₁₂(s)/((s + p₁)(s + p₂)⋯) ;
             N₂₁(s)/((s + p₁)(s + p₂)⋯) ,  N₂₂(s)/((s + p₁)(s + p₂)⋯) ]        (4.47)

     = L⁻¹ [ a₁/(s + p₁) + a₂/(s + p₂) + ⋯ ,  b₁/(s + p₁) + b₂/(s + p₂) + ⋯ ;
             c₁/(s + p₁) + c₂/(s + p₂) + ⋯ ,  d₁/(s + p₁) + d₂/(s + p₂) + ⋯ ]   (4.48)

     = [ a₁e^{−p₁t} + a₂e^{−p₂t} + a₃e^{−p₃t} + ⋯ ,  b₁e^{−p₁t} + b₂e^{−p₂t} + b₃e^{−p₃t} + ⋯ ;
         c₁e^{−p₁t} + c₂e^{−p₂t} + c₃e^{−p₃t} + ⋯ ,  d₁e^{−p₁t} + d₂e^{−p₂t} + d₃e^{−p₃t} + ⋯ ]   (4.49)
If the characteristic polynomial det(sI − A) is of nth order, then n derivatives of Φ(t) (at
t = 0) are necessary to find the aᵢ, bᵢ, cᵢ and dᵢ coefficients [2].
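Alternatively, Φ(t) can be obtained symbolically by inverting (sI − A) and taking inverse Laplace transforms entry by entry (a sketch with an illustrative 2 × 2 A; sympy assumed):

import sympy as sp

s, t = sp.symbols('s t', positive=True)
A = sp.Matrix([[0, 1], [-2, -3]])        # det(sI - A) = (s + 1)(s + 2)
Phi_s = (s * sp.eye(2) - A).inv()        # the characteristic matrix Phi(s)
Phi_t = Phi_s.applyfunc(
    lambda entry: sp.inverse_laplace_transform(sp.simplify(entry), s, t))
sp.pprint(sp.simplify(Phi_t))            # entries are sums of e^{-t} and e^{-2t}
                                         # terms (with Heaviside(t) factors)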
The time response analysis above is quite complex and can be computationally tricky. Alter-
natively, the state-space time-response can be found directly in the s-domain [1–3].
Assume for the moment that there is no feed-forward, D = 0; then the output s-domain
response Y(s) is

Y(s) = C(sI − A)⁻¹[x(0) + BU(s)]

Assume that U(s) = U u(s), where U is the input weighting vector (or matrix) and u(s) is the
Laplace transform of the input (such as a step function), so that u(s) = p(s)/q(s). Then, using
(sI − A)⁻¹ = adj(sI − A)/det(sI − A), the matrix equation can be simplified further,

Y(s) = C adj(sI − A)[x(0) + BU u(s)] / det(sI − A)
     = C adj(sI − A)[x(0) + BU p(s)/q(s)] / det(sI − A)
     = C adj(sI − A)[x(0)q(s) + BU p(s)] / (det(sI − A) q(s))        (4.53)
     = P(s)·(1/Q(s))        (4.54)

Here, P(s) is a numerator matrix, and Q(s) is a denominator polynomial [1–3]. This
means that all the information about the poles in the time response is contained in
Q(s) = det(sI − A) q(s), which is the product of the characteristic polynomial det(sI − A) and
the poles q(s) of the input (from u(s) = p(s)/q(s)).
For simplicity, Y(s) can be a scalar function as well (for a SISO system). Then P(s) is simply
a numerator polynomial, and the inverse Laplace transform can be easily found to determine the
time response [1]. If there are multiple outputs of interest, then Y(s) is a vector, and
the specific output of interest is found by taking the inverse Laplace transform of that
component of the P(s) matrix (vector) divided by the Q(s) denominator polynomial, as the
sketch below illustrates.
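In practice, this P(s)/Q(s) form can be obtained directly from the state-space matrices (a sketch with illustrative SISO matrices; scipy assumed):

import numpy as np
from scipy import signal

A = np.array([[0.0, 1.0], [-5.0, -6.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# ss2tf returns the numerator and the denominator det(sI - A),
# i.e. the characteristic polynomial of the system.
num, den = signal.ss2tf(A, B, C, D)
print("numerator coefficients:  ", num[0])
print("denominator coefficients:", den)
print("poles:", np.roots(den))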
Outcome 7 is complete.
4.10 Summary
In this learning unit, the time response of a system to a unit step input and the resulting
time-domain performance characteristics were discussed. The concept of the poles and zeros of a
system was presented as essential to these performance characteristics. The first-order and
second-order system time performance characteristics were discussed, along with extensive detail
on their relations to one another. The concepts of rise time and settling time were presented for
both first- and second-order systems, and the percentage overshoot and peak time of second-order
systems, and their relation to one another, were presented and discussed. The second-order
characteristic parameters of natural frequency and damping ratio were related to these time
parameters and to the interpretive meaning of the system poles, as well as to their real and
imaginary components (the exponential decay frequency and the damped frequency). Higher-order
system approximation with first- or second-order systems was discussed in relation to the
dominant pole(s) of a system. The time performance parameters were also related to the
state-space model.
Feedback
[Figure: pole-zero plot (σ–jω axes) and unit-step response c(t).]
b)
C(s) = 21/(s(s + 3)(s + 7))
     = (1/s)·21/((0 + 3)(0 + 7)) + (1/(s + 3))·21/((−3)(−3 + 7)) + (1/(s + 7))·21/((−7)(−7 + 3))
     = 1/s − (7/4)·1/(s + 3) + (3/4)·1/(s + 7)

∴ c(t) = 1 − (7/4)e^{−3t} + (3/4)e^{−7t}

e^{−7t} → 0 faster than e^{−3t}, so the pole at s = −3 is dominant.
[Figure: pole-zero plot (σ–jω axes) and unit-step response c(t).]
c)
C(s) = 7/(s(s + 1)(s + 7))
     = 1/s − (7/6)·1/(s + 1) + (1/6)·1/(s + 7)

∴ c(t) = 1 − (7/6)e^{−t} + (1/6)e^{−7t}

e^{−7t} → 0 faster than e^{−t}, so the pole at s = −1 is dominant.
[Figure: pole-zero plot (σ–jω axes) and unit-step response c(t).]
d)
C(s) = 5/(s(s + 1)(s + 5))
     = (1/s)·5/((0 + 1)(0 + 5)) + (1/(s + 1))·5/((−1)(−1 + 5)) + (1/(s + 5))·5/((−5)(−5 + 1))
     = 1/s − (5/4)·1/(s + 1) + (1/4)·1/(s + 5)

∴ c(t) = 1 − (5/4)e^{−t} + (1/4)e^{−5t}

e^{−5t} → 0 faster than e^{−t}, so the pole at s = −1 is dominant.
[Figure: pole-zero plot (σ–jω axes) and unit-step response c(t).]
e)
C(s) = 3/(s(s + 1)(s + 3))
     = 1/s − (3/2)·1/(s + 1) + (1/2)·1/(s + 3)

∴ c(t) = 1 − (3/2)e^{−t} + (1/2)e^{−3t}

e^{−3t} → 0 faster than e^{−t}, so the pole at s = −1 is dominant.
[Figure: pole-zero plot (σ–jω axes) and unit-step response c(t).]
f)
C(s) = 99/(25 s(s + 9/5)(s + 11/5))
     = 1/s − (11/2)·1/(s + 9/5) + (9/2)·1/(s + 11/5)

∴ c(t) = 1 − (11/2)e^{−(9/5)t} + (9/2)e^{−(11/5)t}

The poles are getting similar in value; neither is really “dominant”.
[Figure: pole-zero plot (σ–jω axes) and unit-step response c(t).]
g)
C(s) = 1/(s(s + 1)²)
     = 1/s − 1/(s + 1) − 1/(s + 1)²

∴ c(t) = 1 − e^{−t} − te^{−t}

The poles are repeated (real). The effect is a te^{−t} term, which seems to slow down the
response.
[Figure: pole-zero plot (σ–jω axes) and unit-step response c(t).]
h)
C(s) = (9/4)/(s(s + 3/2)²)
     = 1/s − 1/(s + 3/2) − (3/2)·1/(s + 3/2)²

∴ c(t) = 1 − e^{−(3/2)t} − (3/2)te^{−(3/2)t}

The poles are repeated. The effect is a te^{−(3/2)t} term, which seems to slow down the
response.
[Figure: pole-zero plot (σ–jω axes) and unit-step response c(t).]
i)
C(s) = 4/(s(s + 2)²)
     = 1/s − 1/(s + 2) − 2·1/(s + 2)²

∴ c(t) = 1 − e^{−2t} − 2te^{−2t}

The poles are repeated. The effect is a te^{−2t} term, which seems to slow down the
response.
[Figure: pole-zero plot (σ–jω axes) and unit-step response c(t).]
j)
C(s) = 5/(s(s + 2 + j)(s + 2 − j))
     = K₀/s + (K₂s + K₃)/(s² + 4s + 5) = 1/s − (s + 4)/(s² + 4s + 5)

∴ c(t) = 1 − √5 e^{−2t} cos(t − 1.107)
[Figure: pole-zero plot (σ–jω axes) and unit-step response c(t).]
k)
C(s) = 13/(s(s + 2 + 3j)(s + 2 − 3j))
     = K₀/s + (K₂s + K₃)/(s² + 4s + 13) = 1/s − (s + 4)/(s² + 4s + 13)

∴ c(t) = 1 − (√13/3) e^{−2t} cos(3t − 0.588)
[Figure: pole-zero plot (σ–jω axes) and unit-step response c(t).]
l)
C(s) = 10/(s(s + 1 + 3j)(s + 1 − 3j))
     = K₀/s + (K₂s + K₃)/(s² + 2s + 10) = 1/s − (s + 2)/(s² + 2s + 10)

∴ c(t) = 1 − (√10/3) e^{−t} cos(3t − 0.322)
[Figure: pole-zero plot (σ–jω axes) and unit-step response c(t).]
m)
C(s) = 37/(s·4(s + 1/2 + 3j)(s + 1/2 − 3j))
     = K₀/s + (K₂s + K₃)/(4s² + 4s + 37) = 1/s − (4s + 4)/(4s² + 4s + 37)

∴ c(t) = 1 − (√37/6) e^{−(1/2)t} cos(3t − 0.165)
[Figure: pole-zero plot (σ–jω axes) and unit-step response c(t).]
n)
C(s) = 9/(s(s + 3j)(s − 3j))
     = K₀/s + (K₂s + K₃)/(s² + 9) = 1/s − s/(s² + 9)

∴ c(t) = 1 − cos(3t)
[Figure: pole-zero plot (σ–jω axes) and unit-step response c(t).]
o)
C(s) = 29/(s(s + 2 + 5j)(s + 2 − 5j))
     = K₀/s + (K₂s + K₃)/(s² + 4s + 29) = 1/s − (s + 4)/(s² + 4s + 29)

∴ c(t) = 1 − (√29/5) e^{−2t} cos(5t − 0.381)
[Figure: pole-zero plot (σ–jω axes) and unit-step response c(t).]
p)
C(s) = 26/(s(s + 1 + 5j)(s + 1 − 5j))
     = K₀/s + (K₂s + K₃)/(s² + 2s + 26) = 1/s − (s + 2)/(s² + 2s + 26)

∴ c(t) = 1 − (√26/5) e^{−t} cos(5t − 0.198)
[Figure: pole-zero plot (σ–jω axes) and unit-step response c(t).]
q)
C(s) = 101/(s·4(s + 1/2 + 5j)(s + 1/2 − 5j))
     = K₀/s + (K₂s + K₃)/(4s² + 4s + 101) = 1/s − (4s + 4)/(4s² + 4s + 101)

∴ c(t) = 1 − (√101/10) e^{−(1/2)t} cos(5t − 0.100)
[Figure: pole-zero plot (σ–jω axes) and unit-step response c(t).]
r)
C(s) = 25/(s(s + 5j)(s − 5j))
     = K₀/s + (K₂s + K₃)/(s² + 25) = 1/s − s/(s² + 25)

∴ c(t) = 1 − cos(5t)
[Figure: pole-zero plot (σ–jω axes) and unit-step response c(t).]
b) 1. ts = 1.304 s;  2. tr = 0.733 s
l) 1. tp = 1.047 s;  2. %OS = 35.1%;  3. ts = 3.964 s;  4. tr = 0.630 s
o) 1. tp = 0.628 s;  2. %OS = 28.5%;  3. ts = 1.993 s;  4. tr = 0.390 s
p) 1. tp = 0.628 s;  2. %OS = 53.4%;  3. ts = 3.932 s;  4. tr = 0.354 s
q) 1. tp = 0.628 s;  2. %OS = 73%;  3. ts = 7.834 s;  4. tr = 0.334 s
References
[1] N. Nise, Control Systems Engineering, 8th (EMEA) Edition. Hoboken, NJ: John Wiley &
Sons, Inc, 2019, isbn: 9781119590132.
[2] B. P. Lathi, Signal Processing and Linear Systems. New York: Oxford University Press,
2010, isbn: 9780195392579.
[3] R. Burns, Advanced Control Engineering. Oxford Boston: Butterworth-Heinemann, 2001,
isbn: 0750651008.
[4] D. Morrell, Pole/zero plots part 1 (12:40), 2010. [Online]. Available: https : / / www .
youtube.com/watch?v=cQdIVwKqj2M.
[5] ——, Pole/zero plots part 2 (12:36), 2010. [Online]. Available: https://www.youtube.
com/watch?v=5jYr0QktWxE.
[6] katkimshow, Intro to control - 7.1 poles and zeros (6:29), 2014. [Online]. Available: https:
//www.youtube.com/watch?v=Em5TuH4TVr4.
[7] D. Zill, A first course in complex analysis with applications. Boston: Jones and Bartlett,
2003, isbn: 0763714372.
[8] First order system - unit step response (7:46), 2015. [Online]. Available: https://www.
youtube.com/watch?v=r8LUG7p8QXo.
[9] katkimshow, Intro to control - 9.1 system time response terms (7:26), 2014. [Online].
Available: https://www.youtube.com/watch?v=uKUA147B3D8.
[10] D. G. Zill, Advanced Engineering Mathematics. JONES & BARTLETT PUB INC, Sep. 14,
2016, isbn: 1284105903.
[11] N. Higham, The Princeton Companion to Applied Mathematics. Princeton University
Press, 2015, isbn: 0691150397.
[12] L. Electronics, Time domain specifications: Second order control system (8:05), 2021.
[Online]. Available: https://www.youtube.com/watch?v=q1sSWoEkUDQ.
[13] techgurukula, Rise time, peak time, peak overshoot, settling time and steady state error
(35:42), 2015. [Online]. Available: https://www.youtube.com/watch?v=FU8bzKweMQg&
list=RDCMUC2XCO0-wiZHeyswwgsVvIKA&start_radio=1.
[14] A. Ailon and S. Arogeti, “Steering-based controllers for stabilizing lean angles in two-
wheeled vehicles,” Jun. 2018. doi: 10.1109/med.2018.8442782.
[15] N. Getz, “Control of balance for a nonlinear nonholonomic non-minimum phase model of
a bicycle,” Jun. 1994. doi: 10.1109/acc.1994.751712.
[16] Veritasium, Most people don’t know how bikes work (11:21), 2021. [Online]. Available:
https://www.youtube.com/watch?v=9cNmUNHSBac.
[17] bioMechatronics Lab, Control systems, lecture 9: Dominant poles and zeros (19:47), 2020.
[Online]. Available: https://www.youtube.com/watch?v=m7hL8qP1I1c.
[18] U. M. Systems, 107 higher order systems - part i (14:51), 2018. [Online]. Available: https:
//www.youtube.com/watch?v=lNgqNPFQSCA.
[19] N. Academy, The concept of dominant pole (7:24), 2020. [Online]. Available: https :
//www.youtube.com/watch?v=_s1Z33VXjbU.
This learning unit covers concepts regarding system stability. Systems can be classified as
stable or unstable. The factors determining this “stability” are covered below. One overarching
method for determining stability is utilising the Routh Table. This fundamentally depends on
the Routh-Hurwitz stability criterion. The criterion, as well as the Routh table method, are
explored below.
Study this learning unit in conjunction with Chapter 6 of the prescribed textbook, Control
Systems Engineering, by Norman Nise [1].
As mentioned above, the fundamental concept for stability is “no poles in the (positive) right-
half plane”. This can be checked for some functions reasonably easily by the first Routh-Hurwitz
stability criterion. Most systems will not fall into this simple categorical check and will need
to be checked (under the second criterion) using the Routh table.

1. For a system to have no right-hand plane poles, a necessary but not sufficient condition
is that all coefficients of the characteristic equation (the denominator polynomial Q(s) of
the system’s transfer function) must have the same sign and be non-zero.
2. The system is stable by satisfying either of the two equivalent necessary and sufficient
conditions: the Hurwitz determinants or the Routh table.
The Hurwitz determinants can be found in [2], and will not be discussed further. The Routh
table is a simpler and robust method that is more easily implemented than the Hurwitz deter-
minants. Special cases do arise with the Routh table but are easily resolved.
e) An open-loop system with G(s) = 1/((s + 1)(s + 2)).

f) An open-loop system with G(s) = 1/(s(s + 1)(s + 2)).

g) An open-loop system with G(s) = 1/(s(s + 1)(s − 2)).

h) A unity negative-feedback system with forward path G(s) = 1/(s(s + 1)(s + 2)).

i) A unity negative-feedback system with forward path G(s) = 1/((s − 1)(s + 2)).

j) A unity negative-feedback system with forward path G(s) = 3/((s − 1)(s + 2)).
Solution:
a) All coefficients are positive, and the first criterion is satisfied. Therefore, no con-
clusion.
b) The s2 coefficient is negative (−6). Therefore, the first criterion is violated and the
system is unstable.
c) The s4 coefficient is zero. Therefore, the first criterion is violated and the system
is unstable.
d) The constant term/ s0 coefficient is zero. Therefore, the first criterion is violated
and the system is unstable.
e) The first criterion is satisfied; therefore, no conclusion. However, the poles of the
transfer function are in the left half-plane, which definitely means the system is stable.

g) There is a right half-plane pole and there is a pole at s = 0. This results in a
characteristic equation of s³ − s² − 2s = 0; the first criterion is violated and the system
is unstable.

h) The block diagram must be reduced (refer to Appendix A). The resultant charac-
teristic equation is s³ + 3s² + 2s + 1 = 0. Therefore, the first criterion is satisfied
and there is no conclusion.
The Routh table is formed directly from the coefficients of the characteristic equation (the
denominator polynomial Q(s) of the system’s transfer function). However, attention must be
given to how this is done [1, 2]. Suppose the characteristic equation of a transfer function is

aₙsⁿ + aₙ₋₁sⁿ⁻¹ + ⋯ + a₁s + a₀ = 0

Then the first two rows of the Routh table are filled in as follows [1, 2]:
s⁰   |
s¹   |
⋮    |  ⋮      ⋮      ⋮
sⁿ⁻¹ | aₙ₋₁  aₙ₋₃  aₙ₋₅  ⋯
sⁿ   | aₙ    aₙ₋₂  aₙ₋₄  ⋯
Note that the column is filled from the bottom up, up to the second row; then the next column
is filled in the same way, and so on until all coefficients are used up [2]. If the characteristic
polynomial is of even order, then the last coefficient (corresponding to the constant term) will only
partly fill a column. The empty entry is simply filled with a zero. This will be made clearer
in an example.

Note: The way the table is formed here is from the bottom up [2]. The prescribed text builds the
table from the top down. One implication is that the determinants are slightly different
and are negative compared to here. The mechanics are practically identical. Use what is
convenient for you, but take caution with the differences.
The next row is formed by 2 × 2 determinants of the first column and successive columns of the
two rows below, divided by the aₙ₋₁ coefficient [2]. This is shown below.

sⁿ⁻² | b₁    b₂    b₃    ⋯
sⁿ⁻¹ | aₙ₋₁  aₙ₋₃  aₙ₋₅  ⋯
sⁿ   | aₙ    aₙ₋₂  aₙ₋₄  ⋯

where

b₁ = (1/aₙ₋₁)·det[aₙ₋₁ aₙ₋₃; aₙ aₙ₋₂],  b₂ = (1/aₙ₋₁)·det[aₙ₋₁ aₙ₋₅; aₙ aₙ₋₄],  b₃ = (1/aₙ₋₁)·det[aₙ₋₁ aₙ₋₇; aₙ aₙ₋₆]
The subsequent rows are formed in exactly the same way, using 2 × 2 determinants of the two
rows below (specifically the first column and the next successive column) [2]. This is seen below.
This is done until the whole table is completed, with the same number of rows as the polynomial
order [2].

sⁿ⁻³ | c₁    c₂    c₃    ⋯
sⁿ⁻² | b₁    b₂    b₃    ⋯
sⁿ⁻¹ | aₙ₋₁  aₙ₋₃  aₙ₋₅  ⋯
sⁿ   | aₙ    aₙ₋₂  aₙ₋₄  ⋯

where

c₁ = (1/b₁)·det[b₁ b₂; aₙ₋₁ aₙ₋₃],  c₂ = (1/b₁)·det[b₁ b₃; aₙ₋₁ aₙ₋₅],  c₃ = (1/b₁)·det[b₁ b₄; aₙ₋₁ aₙ₋₇]
For a simple worked example watch Routh Hurwitz Stability Criterion Basic Worked Exam-
ple (5:39) [3], and for similar discussion on the details, watch Routh-Hurwitz Criterion, An
Introduction (12:56) [4].
b) A unity negative-feedback system with forward path G(s) = 1/(s(s + 1)(s + 2)).
Solution:
a) The first two rows are filled, and here, padding with zero is necessary

s⁰ |
s¹ |
s² |
s³ | 1  3  0
s⁴ | 3  6  15

The s² row is then computed:

b₁ = (1/1)·det[1 3; 3 6] = −3,  b₂ = (1/1)·det[1 0; 3 15] = 15,  b₃ = (1/1)·det[1 0; 3 0] = 0

This gives

s⁰ |
s¹ |
s² | −3  15  0
s³ |  1   3  0
s⁴ |  3   6  15
If we are only interested in whether the system is stable or not, we could stop here, as
there is a sign change and the system is definitely unstable. However, we are also
asked to determine the number of right-hand plane poles. Therefore, we must find the
total number of sign changes. Continuing, the table becomes
s⁰ | 15  0  0
s¹ |  8  0  0
s² | −3 15  0
s³ |  1  3  0
s⁴ |  3  6  15
This shows that there are a total of two sign changes in the first column: one
from +1 to −3 and then one from −3 to +8. Therefore, the system is unstable with two
right-hand plane poles.
Note: we only know that there are two RHP poles, we do not know their values.
This must be determined by finding the roots of the characteristic equation.
b) The completed table for the closed-loop characteristic equation s³ + 3s² + 2s + 1 = 0 is

s⁰ | 1    0
s¹ | 5/3  0
s² | 3    1
s³ | 1    2

There are no sign changes in the first column. Therefore the system is definitely
stable.
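The basic table construction is easy to automate. The sketch below (assuming numpy) builds the table top-down — the prescribed textbook’s convention, so the determinant signs differ from the bottom-up tables above — and handles neither a zero in the first column nor a row of zeros:

import numpy as np

def routh(coeffs):
    """Routh table for a characteristic polynomial, highest power first."""
    n = len(coeffs) - 1
    cols = n // 2 + 1
    table = np.zeros((n + 1, cols))
    table[0, :len(coeffs[0::2])] = coeffs[0::2]   # s^n row
    table[1, :len(coeffs[1::2])] = coeffs[1::2]   # s^(n-1) row
    for i in range(2, n + 1):
        for j in range(cols - 1):
            table[i, j] = (table[i-1, 0] * table[i-2, j+1]
                           - table[i-2, 0] * table[i-1, j+1]) / table[i-1, 0]
    return table

T = routh([3, 1, 6, 3, 15])          # 3s^4 + s^3 + 6s^2 + 3s + 15 (example a)
print(T)
changes = np.count_nonzero(np.diff(np.sign(T[:, 0])))
print("sign changes in first column:", changes)   # 2 -> two RHP poles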
Outcome 3 is complete.
Two cases can occur where the standard method of evaluating a Routh table fails. These are
easily resolved and covered below.
For some systems, a zero occurs in the first column, before the table is complete [1, 2]. This is
resolved in one of two ways, either using an epsilon approximation, or coefficient reversal.
The concept behind the epsilon approximation is to replace the zero with an epsilon ε > 0
(i.e. positive) and assume that it is very small [1, 2]. Then the table is completed with the ε
variable. The variable is then interpreted in the limit ε → 0 [1, 2]. This then enables
the signs of the numbers in the first column to be examined. Alternatively, the epsilon value
can be practically approximated by a value near zero, e.g. ε ≈ 0.1 or ε ≈ 0.01 [1]. The exact
choice will depend on the other numbers in the table, as it needs to be sufficiently smaller than
all other numbers in the table at that point.
Example 5.3 Second criterion Routh tables with zero in first column
Find the Routh table and determine the stability of the following systems. If the system
is unstable determine the number of right-hand poles.
Solution:
The first two rows are

s⁰ |
s¹ |
s² |
s³ | 6  6  0
s⁴ | 1  9  8

Computing the next rows gives

s⁰ |
s¹ | 0
s² | 8  8  0
s³ | 6  6  0
s⁴ | 1  9  8

Here the first column has a zero element. This is replaced with ε. Then the table
is calculated further,

s⁰ | 8
s¹ | ε  0
s² | 8  8  0
s³ | 6  6  0
s⁴ | 1  9  8
If we choose ε > 0, then there is no sign change and the system is stable.
Note: ε is specified as starting as positive. However, this choice is determined by
trying to “keep” the column positive. If one were to choose ε < 0 then there are
sign changes! This isn’t trivial. Essentially this occurs when there are roots on the
imaginary axis of the s-plane. The pseudo-stability/pseudo-instability is character-
istic of poles on the imaginary axis as they are marginally stable and correspond to
pure sinusoids.
For the next system, the first two rows are

s⁰ |
s¹ |
s² |
s³ |
s⁴ |
s⁵ | 1  1  2  0
s⁶ | 5  7  10  4

Computing up to the s² row gives

s⁰ |
s¹ |
s² | 0  4  0
s³ | 1  0  0  0
s⁴ | 2  0  4  0
s⁵ | 1  1  2  0
s⁶ | 5  7  10  4

Replacing the 0 with ε > 0 and filling in the rest of the table, we get

s⁰ | 4
s¹ | −4/ε  0  0
s² | ε  4  0
s³ | 1  0  0  0
s⁴ | 2  0  4  0
s⁵ | 1  1  2  0
s⁶ | 5  7  10  4

Letting ε → 0⁺,

s⁰ | 4
s¹ | −∞  0  0
s² | +0  4  0
s³ | 1  0  0  0
s⁴ | 2  0  4  0
s⁵ | 1  1  2  0
s⁶ | 5  7  10  4
Therefore there are two sign changes, one from +0 to −∞, and another from −∞
to 4. The system is unstable with two RHP poles.
For the next system, the first two rows are

s⁰ |
s¹ |
s² |
s³ |
s⁴ |
s⁵ |
s⁶ | 2  6  4  1
s⁷ | 1  3  2  2

Computing the s⁵ row gives

s⁰ |
s¹ |
s² |
s³ |
s⁴ |
s⁵ | 0  0  3/2  0
s⁶ | 2  6  4  1
s⁷ | 1  3  2  2
Replacing the 0 with ε > 0, and additionally letting E = 1/ε > 0 (this will become a large
number as ε → 0), gives

s⁰ |
s¹ |
s² |
s³ |
s⁴ |
s⁵ | ε  0  3/2  0
s⁶ | 2  6  4  1
s⁷ | 1  3  2  2
Completing the table,

s⁰ | 1  0
s¹ | (1/6)·(81 + 270ε + 36ε² + 6ε³)/(9 + 30ε + 10ε²)  0
s² | −(9E + 30 + 10ε)/(3 − 4ε)  1  0
s³ | (1/6)(3 − 4ε)  (3/2 − ε/6)  0
s⁴ | 6  −(3E − 4)  1  0
s⁵ | ε  0  3/2  0
s⁶ | 2  6  4  1
s⁷ | 1  3  2  2
Letting ε → 0⁺ (so E → ∞),

s⁰ | 1    0
s¹ | 3/2  0
s² | −∞   1  0
s³ | 1/2  3/2  0
s⁴ | 6   −∞   1
s⁵ | +0   0   3/2  0
s⁶ | 2    6   4    1
s⁷ | 1    3   2    2

There are two sign changes in the first column: one from 1/2 to −∞ and another from
−∞ to 3/2. Thus the system is unstable with two poles in the RHP.
For the last system, the first two rows are

s⁰ |
s¹ |
s² |
s³ |
s⁴ |
s⁵ |
s⁶ | 2  6  4  2
s⁷ | 1  3  2  2

Computing the s⁵ row gives

s⁰ |
s¹ |
s² |
s³ |
s⁴ |
s⁵ | 0  0  1  0
s⁶ | 2  6  4  2
s⁷ | 1  3  2  2
Replacing the 0 with ε = 1/10 ≈ 0 and continuing the calculations, we get

s⁰ | 2  0
s¹ | 1481/1510  0
s² | −151/4  2  0
s³ | 8/30  29/30
s⁴ | 6  −16  2
s⁵ | 1/10  0  1  0
s⁶ | 2  6  4  2
s⁷ | 1  3  2  2
Clearly there are two sign changes in the first column: one from 8/30 to −151/4 and
another from −151/4 to 1481/1510. Thus the system is unstable with two poles in the RHP.
Note: Although the use of a number “close to zero” has made the calculations more
tangible and much easier, there is still some difficulty in the actual calculation.
Outcome 4 is complete.
As can be seen in the example above, the ε method can become tedious and difficult to compute
and track. Thankfully, this can be remedied retrospectively. In cases where the ε method
becomes difficult, the process can be abandoned and restarted using the reversed coefficients
[1]. This reverses the order of the polynomial coefficients: essentially the existing exponents of s
are negated and the highest power is added. I.e. the characteristic equation becomes

a₀sⁿ + a₁sⁿ⁻¹ + ⋯ + aₙ₋₁s + aₙ = 0

and the first two rows of the Routh table become

s⁰   |
s¹   |
⋮    |  ⋮    ⋮         ⋮      ⋮
sⁿ⁻¹ | a₁  a₃  ⋯  aₙ₋₂  aₙ
sⁿ   | a₀  a₂  ⋯  aₙ₋₃  aₙ₋₁
It is not guaranteed that this table will not require an epsilon method as well. However, this
could make the computational process much easier [1].
(Note these are some of the same equations as in the previous exercise)
Solution:
The reversed coefficients give the first two rows

s⁰ |
s¹ |
s² |
s³ | 6  6  0
s⁴ | 8  9  1

Continuing,

s⁰ |
s¹ | 0  0
s² | 1  1
s³ | 6  6  0
s⁴ | 8  9  1

Here the first column has a zero element. This demonstrates that even though a
polynomial is reversed, it may still result in a zero in the first column. This zero is
replaced with ε. Then the table is calculated further,

s⁰ | 1  0
s¹ | ε  0
s² | 1  1  0
s³ | 6  6  0
s⁴ | 8  9  1
If we choose ε > 0, then there is no sign change and the system is stable (as before).
For the next system, reversing the coefficients gives

s⁰ |
s¹ |
s² |
s³ |
s⁴ |
s⁵ | 2  1  1  0
s⁶ | 4  10  7  5

Continuing, we get

s⁰ | 5  0
s¹ | −(2/3)  0
s² | −3  5  0
s³ | −(1/4)  −(1/4)  0
s⁴ | 8  5  5  0
s⁵ | 2  1  1  0
s⁶ | 4  10  7  5
Clearly there are two sign changes in the first column: one from 8 to −(1/4) and
another from −(2/3) to 5. Therefore the system is unstable with two poles in the
RHP.
The Routh table for some systems may result in an entire row being zero before the table
is completed. When this occurs the epsilon method, as described above, will not work, as
continuing will simply regenerate another row of zeros.
sᵖ⁻¹ | 0     0     0     ⋯
sᵖ   | c₁    c₂    c₃    ⋯
⋮    | ⋮     ⋮     ⋮     ⋮
sⁿ⁻² | b₁    b₂    b₃    ⋯
sⁿ⁻¹ | aₙ₋₁  aₙ₋₃  aₙ₋₅  ⋯
sⁿ   | aₙ    aₙ₋₂  aₙ₋₄  ⋯
To remedy this, the auxiliary polynomial of the row of zeros is found and used as a substitute [1,
2]. The row of zeros generally occurs on an odd-powered row [1, 2]. The auxiliary polynomial is
formed from the preceding (even-powered) row [1, 2]. The polynomial starts with that row’s
power, and each successive term is the power reduced by two [1, 2]. I.e.

c₁sᵖ + c₂sᵖ⁻² + c₃sᵖ⁻⁴ + ⋯ + cₚ

The derivative of this polynomial is then taken,

d/ds (c₁sᵖ + c₂sᵖ⁻² + c₃sᵖ⁻⁴ + ⋯ + cₚ) = [(p)c₁]sᵖ⁻¹ + [(p − 2)c₂]sᵖ⁻³ + [(p − 4)c₃]sᵖ⁻⁵ + ⋯ + 0

These coefficients are then used to fill the sᵖ⁻¹ row of zeros [1, 2].
The process of filling in the table then continues as normal [1, 2]. If a zero occurs in the first column,
or there is another row of zeros, the appropriate method is used.

Note: This cannot be used as a substitute for the epsilon method. The epsilon substitution
and the auxiliary polynomial should be used mutually exclusively in their application. The
auxiliary polynomial can only be used if the entire row is zeros. Conversely, the
epsilon substitution can only be used if the zero occurs in the first column and there is at least
one non-zero term in that same row.
For more clarity on the special cases mentioned above, watch Routh-Hurwitz Criterion, Special
Cases (13:08) [5].
a) The system’s characteristic equation is s⁶ + 10s⁵ + (51/2)s⁴ + 15s³ + (73/2)s² + 5s + 12 = 0

b) The system’s characteristic equation is
s⁷ + 12s⁶ + 48s⁵ + 96s⁴ + 180s³ + 240s² + 176s + 192 = 0
Solution:
a) The first two rows are

s⁰ |
s¹ |
s² |
s³ |
s⁴ |
s⁵ | 10  15  5  0
s⁶ | 1  51/2  73/2  12

The s⁵ row can be simplified by dividing it through by 5 (this does not change the signs in
the first column):

s⁰ |
s¹ |
s² |
s³ |
s⁴ |
s⁵ | 2  3  1  0
s⁶ | 1  51/2  73/2  12

Continuing, the s³ row turns out to be all zeros:

s⁰ |
s¹ |
s² |
s³ | 0  0  0
s⁴ | 48/2 = 24  72/2 = 36  12  0 → (÷12) 2  3  1  0
s⁵ | 2  3  1  0
s⁶ | 1  51/2  73/2  12

The auxiliary polynomial of the s⁴ row is then
2s⁴ + 3s² + 1 = 0

Taking the derivative, d/ds(2s⁴ + 3s² + 1) = 8s³ + 6s, and substituting these coefficients
(simplified by dividing by 2, giving 4 and 3) into the s³ row and completing the table gives
s⁰ | 1  0
s¹ | 1/3  0
s² | 6/4 = 3/2  1  0
s³ | 4  3  0
s⁴ | 2  3  1  0
s⁵ | 2  3  1  0
s⁶ | 1  51/2  73/2  12
There are no sign changes in the first column. Therefore the system is stable.
b) The first two rows are

s⁰ |
s¹ |
s² |
s³ |
s⁴ |
s⁵ |
s⁶ | 12  96  240  192
s⁷ | 1  48  180  176

Computing the s⁵ row gives

s⁰ |
s¹ |
s² |
s³ |
s⁴ |
s⁵ | 40  160  160  0
s⁶ | 12  96  240  192
s⁷ | 1  48  180  176

Again, simplifying (s⁵ row ÷ 40, s⁶ row ÷ 12),

s⁰ |
s¹ |
s² |
s³ |
s⁴ |
s⁵ | 1  4  4  0
s⁶ | 1  8  20  16
s⁷ | 1  48  180  176
Computing and simplifying the s⁴ row (÷ 4),

s⁰ |
s¹ |
s² |
s³ |
s⁴ | 4  16  16  0 → (÷4) 1  4  4  0
s⁵ | 1  4  4  0
s⁶ | 1  8  20  16
s⁷ | 1  48  180  176

The s³ row is then a row of zeros:

s⁰ |
s¹ |
s² |
s³ | 0  0  0
s⁴ | 1  4  4  0
s⁵ | 1  4  4  0
s⁶ | 1  8  20  16
s⁷ | 1  48  180  176

The auxiliary polynomial of the s⁴ row is
s⁴ + 4s² + 4 = 0
Its derivative is 4s³ + 8s (simplified ÷ 4 to coefficients 1 and 2), which fills the s³ row.
Continuing (and simplifying the s² row ÷ 2),

s⁰ |
s¹ | 0  0
s² | 2  4  0 → (÷2) 1  2  0
s³ | 1  2  0
s⁴ | 1  4  4  0
s⁵ | 1  4  4  0
s⁶ | 1  8  20  16
s⁷ | 1  48  180  176
Another row of zeros appears. In this example, this can be approached in two different ways:
either the auxiliary polynomial or the epsilon approximation can be used (again, only in this
unique situation, but not generally).
The auxiliary polynomial (s² + 2, with derivative 2s) gives

s⁰ | 2
s¹ | 2  0
s² | 1  2  0
s³ | 1  2  0
s⁴ | 1  4  4  0
s⁵ | 1  4  4  0
s⁶ | 1  8  20  16
s⁷ | 1  48  180  176

while the epsilon approximation gives

s⁰ | 2
s¹ | ε → +0
s² | 1  2  0
s³ | 1  2  0
s⁴ | 1  4  4  0
s⁵ | 1  4  4  0
s⁶ | 1  8  20  16
s⁷ | 1  48  180  176
Either way, the first column is all positive. Therefore the system is stable.
b) s⁴ + 2s³ + 2s² + 4s + 3 = 0

d) Here, find the value of K that makes the system just unstable. The system is a unity
negative-feedback loop with forward path G(s) = K·(1/s)·8/((s + 1)(s + 3)) = 8K/(s(s + 1)(s + 3)).

e) Here, find the value of K that makes the system just unstable. The system is a unity
negative-feedback loop with forward path G(s) = K·(1/s)·1/((s + 3)(s + 4)) = K/(s(s + 3)(s + 4)).
Outcome 5 is complete.
The stability of state-space models is readily determined, since the stability of a system is
determined only by the closed-loop poles of the system. In other words, only the expression in the
denominator of the system transfer function matters, which is the characteristic equation
det(sI − A) = 0 [1]. The analysis is the same [1].
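A minimal numerical check (illustrative matrix; numpy assumed):

import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
eigs = np.linalg.eigvals(A)          # roots of det(sI - A) = 0
print("eigenvalues:", eigs)
print("stable:", bool(np.all(eigs.real < 0)))   # no RHP poles -> stable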
5.5 Summary
The stability of a closed-loop system was discussed in this learning unit. The Routh-Hurwitz
criterion as a basis of stability analysis for a system was presented and the theory briefly
discussed. The use of the first criterion to determine whether a system is definitely unstable was
demonstrated. The second criterion and the use of a Routh table to determine the stability of
a system in relation to the second criterion were also discussed. The two special cases of a zero
in the first column and a row of zeros in the Routh table were discussed, and the methods to
resolve the table in these cases presented. Additionally, the reversed-coefficients method was
also presented.
Feedback
s⁰ |
s¹ |
s² |
s³ | ε  1/2  0
s⁴ | 2  4  5
s⁵ | 1  2  3

Reversing coefficients,

s⁰ | 1  0
s¹ | −1  0
s² | 1/2  1  0
s³ | 2/3  1/3  0
s⁴ | 3  2  1  0
s⁵ | 5  4  2  0
s⁰ |
s¹ |
s² | 0  3  0
s³ | 2  4  0
s⁴ | 1  2  3

Substituting in ε,

s⁰ | 3  0
s¹ | 4 − 6/ε  0
s² | ε  3
s³ | 2  4  0
s⁴ | 1  2  3

Letting ε → 0⁺,

s⁰ | 3  0
s¹ | −∞  0
s² | +0  3
s³ | 2  4  0
s⁴ | 1  2  3
s⁰ |
s¹ |
s² |
s³ | 0  0  0
s⁴ | 2  12  16  0
s⁵ | 1  6  8  0

Using the auxiliary polynomial 2s⁴ + 12s² + 16 (derivative 8s³ + 24s) and completing,

s⁰ | 16  0
s¹ | 16/6  0
s² | 6  16  0
s³ | 8  24  0
s⁴ | 2  12  16  0
s⁵ | 1  6  8  0
s⁰ | 8K
s¹ | 3 − 2K  0
s² | 4  8K  0
s³ | 1  3  0
References
[1] N. Nise, Control Systems Engineering, 8th (EMEA) Edition. Hoboken, NJ: John Wiley &
Sons, Inc, 2019, isbn: 9781119590132.
[2] R. Burns, Advanced Control Engineering. Oxford Boston: Butterworth-Heinemann, 2001,
isbn: 0750651008.
[3] T. C. G. to Everything, Routh hurwitz stability criterion basic worked example (5:39),
2018. [Online]. Available: https://www.youtube.com/watch?v=CzzsR5FT-8U.
[4] B. Douglas, Routh-hurwitz criterion, an introduction (12:56), 2012. [Online]. Available:
https://www.youtube.com/watch?v=WBCZBOB3LCA.
[5] ——, Routh-hurwitz criterion, special cases (13:09), 2012. [Online]. Available: https:
//www.youtube.com/watch?v=oMmUPvn6lP8.
In this learning unit, the errors of systems are defined and quantified, specifically for a system
in so-called “steady-state”. These quantified errors also serve as a definition of system type.
The error of a system depends not only on the input but also on any disturbances in or on the
system. These are quantified below.
Study this learning unit in conjunction with Chapter 7 of the prescribed textbook, Control
Systems Engineering, by Norman Nise [1].
The concept of a control system is that there is some given reference input signal, and then
the system (both the plant, controller and feedback) operates on this input to generate the
actual output controlled signal. As is the reality of real systems, it is more often than not that
a system’s controlled output signal will not be exactly the same as the reference input signal.
[Figure: feedback loop with input R(s), error signal E(s), controller G₁(s), plant G₂(s), output C(s), feedback transducer H(s) and measured signal M(s).]
There is an easy way to interpret what this difference/error is. It is actually built into the
very design of the feedback control block diagram by definition. E(s) is the error signal and is
the primary signal of interest in this learning unit. For the moment we will assume that the
measurement transducer H(s) = 1, therefore M (s) = C(s), and that the controller block G1 (s)
and plant block G2 (s) simply form a single system block G(s).
We can then find this error signal in terms of the input R(s) and the system block G(s), i.e.

E(s) = R(s)/(1 + G(s))        (6.1)
This means that the error is dependent on the system transfer function and the input reference
signal [1–3].
Of interest is the system’s error after some time. Generally, it is desired for the error to be
constant or at least approaching some limit. The most convenient method to quantify this is
to determine the error after an infinite amount of time, i.e. the value e(∞) is of interest. This
value is either bounded or explodes to infinity. The s-domain expression of the error is known;
using the final value theorem (at the bottom of C.3), e(∞) can be found for the closed-loop
form of G(s) [1–3],

e(∞) = lim_{s→0} sE(s) = lim_{s→0} sR(s)/(1 + G(s))        (6.2)
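A small symbolic sketch of this limit (sympy assumed; the type-0 system G(s) is an illustrative choice):

import sympy as sp

s = sp.symbols('s')
G = 20 / ((s + 2) * (s + 5))         # illustrative open-loop transfer function
R = 1 / s                            # unit step input
e_inf = sp.limit(s * R / (1 + G), s, 0)
print(e_inf)                         # 1/3, since Kp = 20/(2*5) = 2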
Outcome 1 is complete.
As seen above, the steady-state error is dependent on the system input. In control engineering,
there are three standard types of system inputs that are used to quantify steady-state errors.
These are the step function, the ramp function and the parabolic input, which are seen in
Figure 6.1a-6.1c respectively [1–3].
[Figure 6.1: the three standard test inputs — (a) unit step, (b) unit ramp, (c) unit parabola.]
These inputs correspond to the position of an object under different motion conditions, and it is
no surprise that they are used synonymously with these concepts [1–3]. I.e., the
step input is related to an object that has a constant position; the ramp input corresponds
to the position of an object with a constant velocity (its position changes at a
constant rate); and the parabolic input corresponds to the position of an object under a constant
force/acceleration.
The steady-state error e(∞) for each of the test inputs to the closed-loop form of G(s) will now
be explored.
As an introduction to these concepts, watch Intro to Control - 11.4 Steady State Error with the
Final Value Theorem (6:31) [4] and Final Value Theorem and Steady State Error (12:45) [5]
The error associated with a unit step is found by substituting R(s) = L{u(t)} = 1/s into the
steady-state error Equation 6.2:

e_step(∞) = lim_{s→0} s·L{u(t)}/(1 + G(s))
          = lim_{s→0} s·(1/s)/(1 + G(s))
          = lim_{s→0} 1/(1 + G(s))

The only part of this equation that depends on s is G(s). It is ideal that this error is as small
as possible for a system, i.e. e_step(∞) → 0. This will happen if G(s) → ∞. The limit itself is
called the position coefficient Kp [1–3], i.e.

Kp = lim_{s→0} G(s)        (6.3)

and

e_step(∞) = 1/(1 + Kp)        (6.4)
The error associated with the (unit) ramp input is found by substituting R(s) = L{t} = 1/s²
into the steady-state error Equation 6.2:

e_ramp(∞) = lim_{s→0} s·L{t}/(1 + G(s))
          = lim_{s→0} s·(1/s²)/(1 + G(s))
          = lim_{s→0} (1/s)/(1 + G(s))
          = lim_{s→0} 1/(s(1 + G(s)))
          = lim_{s→0} 1/(s + sG(s))
          = 1/(lim_{s→0} s + lim_{s→0} sG(s))
          = 1/(0 + lim_{s→0} sG(s))

∴ e_ramp(∞) = 1/lim_{s→0} sG(s)

The only part of this equation that depends on s is sG(s). It is ideal that this error is as small
as possible for a system, i.e. e_ramp(∞) → 0. This will happen if sG(s) → ∞. The limit itself
is called the velocity coefficient Kv [1–3], i.e.

Kv = lim_{s→0} sG(s)        (6.5)

and

e_ramp(∞) = 1/Kv        (6.6)
The error associated with the (unit) parabola input is found by substituting R(s) = L{t²/2} = 1/s³
into the steady-state error Equation 6.2:

e_parabola(∞) = lim_{s→0} s·(1/s³)/(1 + G(s))
             = lim_{s→0} (1/s²)/(1 + G(s))
             = lim_{s→0} 1/(s²(1 + G(s)))
             = lim_{s→0} 1/(s² + s²G(s))
             = 1/(lim_{s→0} s² + lim_{s→0} s²G(s))
             = 1/(0 + lim_{s→0} s²G(s))

∴ e_parabola(∞) = 1/lim_{s→0} s²G(s)

The only part of this equation that depends on s is s²G(s). It is ideal that this error is as small
as possible for a system, i.e. e_parabola(∞) → 0. This will happen if s²G(s) → ∞. The limit
itself is called the acceleration coefficient Ka [1–3], i.e.

Ka = lim_{s→0} s²G(s)        (6.7)

and

e_parabola(∞) = 1/Ka        (6.8)
As can be seen above, the three standard steady-state errors depend directly on the
system for the three given inputs: step, ramp, and parabola. The only difference
between the position, velocity, and acceleration constants is the successive power of s in each limit.
As stated, it is desired that the error itself be zero or at least finite. For each
steady-state error, this is only possible if Kp = lim_{s→0} G(s) → ∞, Kv = lim_{s→0} sG(s) → ∞, or
Ka = lim_{s→0} s²G(s) → ∞ respectively. These may not all be satisfied for a given system G(s). The
system can be categorised into a system type depending on which of these constants are infinite,
bounded (i.e. a constant) or zero. The properties of G(s), and specifically how they relate to the
different test inputs, are now discussed for these different conditions.

Firstly, assume G(s) has a reduced factored form with m zeros, q poles, and n poles at
s = 0 (i.e. a factor sⁿ), and a system gain of K. Then G(s) is of the form [1, 3],

G(s) = K(s + z₁)(s + z₂)⋯ / (sⁿ(s + p₁)(s + p₂)⋯)  =  K·∏ᵢ₌₁ᵐ (s + zᵢ) / (sⁿ·∏ₖ₌₁^q (s + pₖ))        (6.9)
The value of n and how it relates to the different constants (and subsequently the different test
inputs) is now discussed.
Kp = lim_{s→0} K(s + z₁)(s + z₂)⋯/(sⁿ(s + p₁)(s + p₂)⋯)        (6.10)
   = lim_{s→0} K(s + z₁)(s + z₂)⋯/((s + p₁)(s + p₂)⋯) × lim_{s→0} 1/sⁿ
   = K(0 + z₁)(0 + z₂)⋯/((0 + p₁)(0 + p₂)⋯) × lim_{s→0} 1/sⁿ
   = (Kz₁z₂z₃⋯/(p₁p₂p₃⋯)) × lim_{s→0} 1/sⁿ        (6.11)
1. If n = 0, then sⁿ = 1 and the position constant is simply equal to the product of the
system gain K and the system zeros divided by the system poles, Kp = Kz₁z₂z₃⋯/(p₁p₂p₃⋯). This
is a finite number, so Kp = constant. Then the system step-response error is
e_step(∞) = 1/(1 + Kp) (a finite constant) for n = 0 [1–3].

2. If n ∈ ℕ (n ≥ 1), then lim_{s→0} 1/sⁿ → ∞ and the position constant is infinite. This means that the
steady-state error to a step, 1/(1 + Kp), → 0; i.e. the system has no steady-state error to a
step input: e_step(∞) = 0 for n ≥ 1 [1–3].
Similarly, Kv = lim_{s→0} sG(s) = (Kz₁z₂z₃⋯/(p₁p₂p₃⋯)) × lim_{s→0} 1/sⁿ⁻¹, and:

1. If n = 0, then lim_{s→0} 1/s⁻¹ = lim_{s→0} s → 0. This means that Kv = 0. Therefore the steady-state
error is e_ramp(∞) = ∞ for n = 0. I.e. a system G(s) with no open-loop pole at s = 0 has an
infinite error to a ramp input in its closed-loop form [1–3].

2. If n = 1, then sⁿ⁻¹ = s⁰ = 1. Therefore lim_{s→0} 1/s⁰ = 1, and the velocity
constant is simply Kv = Kz₁z₂z₃⋯/(p₁p₂p₃⋯), which is a finite constant. Therefore the steady-state
error is e_ramp(∞) = 1/Kv for a system G(s) with a single open-loop pole at s = 0 [1–3].

3. If n ≥ 2, n ∈ ℕ, then lim_{s→0} 1/sⁿ⁻¹ = lim_{s→0} 1/sᵐ → ∞, m ≥ 1, m ∈ ℕ. Therefore the steady-state
error e_ramp(∞) → 0. I.e. a system G(s) with more than one open-loop pole at s = 0 has
no error to a ramp input in its closed-loop form [1–3].
Likewise, Ka = lim_{s→0} s²G(s) = (Kz₁z₂z₃⋯/(p₁p₂p₃⋯)) × lim_{s→0} 1/sⁿ⁻², and:

1. If n = 0 or n = 1, then n − 2 = −2 or −1, and lim_{s→0} 1/s⁻² = lim_{s→0} s² or lim_{s→0} 1/s⁻¹ = lim_{s→0} s. These
both → 0. This means that Ka = 0. Therefore the steady-state error is e_parabola(∞) = ∞ for
n = 0 or n = 1. I.e. a system G(s) with one or no open-loop pole at s = 0 has an infinite
error to a parabolic input in its closed-loop form [1–3].

2. If n = 2, then sⁿ⁻² = s⁰ = 1. Therefore lim_{s→0} 1/s⁰ = 1, and the acceleration
constant is simply Ka = Kz₁z₂z₃⋯/(p₁p₂p₃⋯), which is a finite constant. Therefore the steady-state
error is e_parabola(∞) = 1/Ka for a system G(s) with two open-loop poles at s = 0 [1–3].

3. If n > 2, n ∈ ℕ, then lim_{s→0} 1/sⁿ⁻² = lim_{s→0} 1/sᵐ → ∞, m ≥ 1, m ∈ ℕ. Therefore the steady-state
error e_parabola(∞) → 0. I.e. a system G(s) with more than two open-loop poles at s = 0
has no error to a parabolic input in its closed-loop form [1–3].
Clearly, the system’s value of n in G(s) (of the form in Equation 6.9) has different effects on
the error for the different types of test inputs. This property is used to quantify and define
the system type [1–3]. Importantly, the meaning of sⁿ, when transformed back into the time
domain, is simply n orders of integration in the open-loop transfer function [1–3]. I.e., writing

G(s) = (1/sⁿ)·K(s + z₁)(s + z₂)⋯/((s + p₁)(s + p₂)⋯) = (1/sⁿ)·G₀(s)

then

L⁻¹{G(s)} = L⁻¹{(1/sⁿ)·G₀(s)}
g(t) = ∫∫∫ ⋯ g₀(t)(dt)ⁿ
Then the closed-loop system type and error characteristics can be quantified directly from the
nth power of the open-loop pole at s = 0 [1–3]. This is summarised in Table 6.1 below.
Table 6.1: System types and system errors to standard test inputs.

input r(t)    | error formula | Type 0               | Type 1               | Type 2               | Higher order
step u(t)     | 1/(1 + Kp)    | Kp = cnst, 1/(1+Kp)  | Kp = ∞, error 0      | Kp = ∞, error 0      | Kp = ∞, error 0
ramp t        | 1/Kv          | Kv = 0, error ∞      | Kv = cnst, 1/Kv      | Kv = ∞, error 0      | Kv = ∞, error 0
parabola t²/2 | 1/Ka          | Ka = 0, error ∞      | Ka = 0, error ∞      | Ka = cnst, 1/Ka      | Ka = ∞, error 0
G(s) = (s + 3)(s + 4)/(s²(s + 6)(s + 8))
Solution:
There are two poles at s = 0, therefore the system is type 2 and will have zero unit
step and zero ramp error. It will have a finite, non-zero parabolic error. These are all
explicitly calculated below.
Kp = lim_{s→0} G(s)
   = lim_{s→0} (s + 3)(s + 4)/(s²(s + 6)(s + 8))
   = lim_{s→0} (s + 3)(s + 4)/((s + 6)(s + 8)) × lim_{s→0} 1/s²
   = ((0 + 3)(0 + 4)/((0 + 6)(0 + 8))) × lim_{s→0} 1/s²
   = (1/4)·lim_{s→0} 1/s² → ∞

Therefore the step error is

e_s(∞) = 1/(1 + Kp) = lim_{Kp→∞} 1/(1 + Kp) = 0
Kv = lim_{s→0} sG(s)
   = lim_{s→0} s(s + 3)(s + 4)/(s²(s + 6)(s + 8))
   = lim_{s→0} (s + 3)(s + 4)/((s + 6)(s + 8)) × lim_{s→0} s/s²
   = ((0 + 3)(0 + 4)/((0 + 6)(0 + 8))) × lim_{s→0} 1/s
   = (1/4)·lim_{s→0} 1/s → ∞

Therefore the ramp error is

e_r(∞) = 1/Kv = lim_{Kv→∞} 1/Kv = 0
Ka = lim_{s→0} s²G(s)
   = lim_{s→0} s²(s + 3)(s + 4)/(s²(s + 6)(s + 8))
   = lim_{s→0} (s + 3)(s + 4)/((s + 6)(s + 8)) × lim_{s→0} s²/s²
   = ((0 + 3)(0 + 4)/((0 + 6)(0 + 8))) × 1
   = 1/4

Therefore the parabolic error is finite and non-zero: e_p(∞) = 1/Ka = 4.
Is this gain allowed, i.e. is the system still stable with this gain?
Solution:
The input for the specified error is not given. However, the system has no poles at s = 0
in the open-loop transfer function, therefore the closed-loop system can only be a type 0
system. Therefore, it is implied that the input is a unit step (since a ramp or parabola
would produce an infinite error).
Therefore, e_s(∞) = 1/(1 + Kp) < 10%, i.e.

1 + Kp > 1/0.1
Kp > 99

With Kp = lim_{s→0} G(s) = K((0) + 3)/(((0) + 2)((0)² + 10(0) + 30)) = 3K/60,

99 < 3K/60
∴ K > 1980
Activity 6.1
Find the minimum possible steady-state error for the system, and find the value of K
necessary to achieve this. Hint: Analyse the stability using a Routh table
If you are still struggling with the error constants, what they mean and why they are associated
with system-types, then watch the following videos for more guidance: The three videos ECE320
Lecture1-3a: Steady-State Error, System Type (7:46) [6], ECE320 Lecture1-3b: Steady-State
Error, System Type (12:43) [7], and ECE320 Lecture1-3c: Steady-State Error, System Type
(6:18) [8]; as well as System Dynamics and Control: Module 16 - Steady-State Error (41:32) [9]
for a longer lecture style.
Systems can have errors due to disturbances[1, 3]. These disturbances can be combined into a
single disturbance for simpler analysis [1, 3]. This is visualised in Figure 6.2 below. Here the
inclusion of a “controller” G1 (s) and the “plant” G2 (s) is made distinct.
[Figure 6.2: feedback loop with input R(s), error E(s), controller G₁(s), disturbance D(s) added at the plant input, plant G₂(s), and output C(s).]
Using the principle of superposition (see Section A.6) and setting R(s) = 0, the equation for
E_D(s), the error due to the disturbance, can be found in terms of D(s) [1]:

E_D(s) = −[G₂(s)/(1 + G₁(s)G₂(s))]·D(s)

For completeness, the error due to the input R(s) can be found by superposition as well [1, 3].
It reduces simply to

E_R(s) = R(s)/(1 + G₁(s)G₂(s))        (6.17)

Effectively, the total error is the superposition of these two errors E_R(s) and E_D(s) [1, 3].
The same process of error analysis is used to quantify the effect of the disturbance on the
disturbance error. It is usually assumed that the disturbance is a step input [1]. Therefore,

e_D(∞) = lim_{s→0} sE_D(s) = −1/(lim_{s→0} 1/G₂(s) + lim_{s→0} G₁(s))

Example: consider the feedback system with controller G₁(s) = 5/s, plant G₂(s) = (s + 1)/(s + 5),
and a disturbance D(s) injected at the plant input.
Solution:
The system has a disturbance (assumed to be a unit step disturbance), and the error needs to
include the contributions from both the input R(s) and the disturbance D(s), i.e.
e(∞) = e_R(∞) + e_D(∞).

The input is not specified and is interpreted from the open-loop poles at s = 0. The
open-loop system (with D(s) = 0) is

G(s) = G₁(s)G₂(s) = 5(s + 1)/(s(s + 5))

Therefore, the system is type 1, and the system has at least a finite (non-zero) e_r(∞). Therefore,

Kv = lim_{s→0} sG(s)
   = lim_{s→0} s·5(s + 1)/(s(s + 5))
   = lim_{s→0} 5(s + 1)/(s + 5)
   = 5(0 + 1)/(0 + 5) = 1

The disturbance error is

e_D(∞) = −1/(lim_{s→0} 1/G₂(s) + lim_{s→0} G₁(s)) = −1/(5 + ∞) → 0

Therefore, the system error with a step disturbance is e(∞) = 0 + 0 = 0 for a unit step
input, and e(∞) = 1/Kv + 0 = 1 for a unit ramp input.

Outcome 4 is complete.
6.3 Sensitivity
The sensitivity of a system characteristic G(s) to a parameter a is defined as the ratio of the
fractional change in G to the fractional change in a [1], i.e.

S_{G:a} = (a/G(s))·(∂G(s)/∂a)        (6.24)
Example: find the requested sensitivities for the following systems.

1. A unity negative-feedback system with forward path G(s) = 1/(s²(s + a)).

2. A negative-feedback system with forward path G(s) = K/(s(s + 1)) and feedback H(s) = s + a.
Solution:
1. We are asked to find the sensitivity of the error of the system to parameter a. The
system has two poles at s = 0, so this is a Type 2 system, and the finite
error is e_p(∞) = 1/Ka. Therefore the acceleration constant is of interest. I.e.

Ka = lim_{s→0} s²G(s) = lim_{s→0} s²·1/(s²(s + a)) = 1/(0 + a) = 1/a

with

e_p(∞) = 1/Ka = a

Therefore, the sensitivity of the error to a is

S_{e:a} = (a/e_p(∞))·(∂e_p(∞)/∂a) = (a/a)·(∂/∂a)(a) = 1
2. Here we are asked to find the sensitivity of the system to parameter a. The system’s
closed-loop form must be found and is simply

G_CL(s) = C(s)/R(s) = G(s)/(1 + H(s)G(s))

with G(s) = K/(s(s + 1)) and H(s) = s + a. Therefore,

G_CL(s) = K/(s(s + 1) + K(s + a))
        = K/(s² + (K + 1)s + Ka)
        = 1/(a + f(s)),  where f(s) = (s² + (K + 1)s)/K

The sensitivity is

S_{G_CL:a} = (a/G_CL)·(∂G_CL/∂a)

First, let’s find ∂G_CL/∂a:

∂G_CL/∂a = (∂/∂a)(a + f(s))⁻¹

Here f(s) is a function of s and is independent of the parameter a, so it can be
treated as a “constant” w.r.t. a. Using the chain rule, we get

∂G_CL/∂a = (−1)(a + f(s))⁻²·(1) = −1/(a + f(s))² = −G_CL²

Therefore,

S_{G_CL:a} = (a/G_CL)·(−G_CL²) = −a·G_CL = −aK/(s² + (K + 1)s + Ka)
Activity: for the following systems,

1. Find the sensitivity of the steady-state error to changes in the parameter a and in the gain
K, for a unity negative-feedback system with forward path G(s) = K/(s²(s + a)).

2. Find the sensitivity of the closed-loop transfer function G_CL(s) to changes in the
plant G₂(s), for a unity negative-feedback system with forward path G₁(s)G₂(s).

Additionally:
What is the sensitivity of the closed-loop system to the other plant G₁?
What does this expression of sensitivity actually mean?
There are more complete and fundamental forms of sensitivity analysis, such as advanced
system model sensitivities [3] and signal input/disturbance noise sensitivity (through statistical
analysis) [10]. These are loosely expanded from the basic concept of sensitivity explained here
(and in [1]). Though these concepts are more advanced and beyond the scope here, it is
recommended to go through the readings of sensitivity in [3, 10].
Outcome 5 is complete.
The steady-state of state-space models can be analysed in two different ways. The first method
is to analyse the error between the input and output in the s-domain and use the final value
theorem (as above) to evaluate the error [1]. The second method is to remain in the time
domain and use the steady-state definition to determine the error [1].
Consider the s-domain analysis. It is assumed that there is no feed-forward component, and
that the state-space model is of a SISO system, i.e. u(t) = u(t) and y(t) = y(t) are scalar
functions. Then the error is defined in the s-domain as [1],

E(s) = R(s) − Y(s) = R(s)[1 − G(s)]

where G(s) is the closed-loop transfer function of the state-space model. Therefore,

E(s) = R(s)[1 − C(sI − A)⁻¹B]

and G(s) is as defined in Equation 3.16a and 3.16b. Then, using the final value theorem
lim_{t→∞} e(t) = lim_{s→0} sE(s), the steady-state error is [1]:

e(∞) = lim_{s→0} sR(s)[1 − C(sI − A)⁻¹B]

Then for each of the step, ramp and parabolic inputs (R(s) = 1/s, 1/s², 1/s³), the state-space
error follows by substitution [1].
The error analysis can also be done in the time domain. This is done by considering the type
of input for the steady-state analysis. For simplicity assume that there is no feed-forward
component in this analysis.
Step response

Firstly, consider the input as a unit step u(t) = 1; then the state of the system settles at some
constant value, i.e. the state vector is

x_ss-step = [v₁, v₂, …, vₙ]ᵀ = v        (6.32)

Every vᵢ is constant, and the steady-state vector v is therefore constant. If the state vector
x_ss-step = v is constant with time, then the time derivative ẋ(t) = (d/dt)v = 0 [1]. Therefore, the
state equation ẋ = Ax + Bu gives

0 = Av + B  ⟹  v = −A⁻¹B
This means that the state vector’s steady state is dependent on the state matrix and input (as
logically expected). The output state equation (assuming no feed-forward component) is

y = Cx(t)        (6.35)

so that

y_step-ss = Cx_step-ss = Cv = C(−A⁻¹B)

The state-space error can then be analysed as lim_{t→∞} e(t) = u(t) − y(t). Assume that the state space
models a SISO system, so y(t) = y(t) and u(t) = u(t) = 1 (a unit step function) are scalar
functions. Then the time-domain error is [1]

e_step(∞) = 1 − C(−A⁻¹B) = 1 + CA⁻¹B

This can be directly compared to the s-domain solution above in Equation 6.29. The equations
are identical when s → 0, since (sI − A)⁻¹ = (0I − A)⁻¹ = (−A)⁻¹, and the equation reduces
to the same expression, as should be expected.
Ramp response
A similar approach is used to find the ramp response in the time domain of a state-space model.
The only difference is that u(t) = t, and the components of the steady-state state vector x_ss-ramp
are now expressed as general linear ramp functions [1]. I.e.

x_ss-ramp = [v₁t + w₁, v₂t + w₂, …, vₙt + wₙ]ᵀ = vt + w        (6.39)

However, this means that the state derivative ẋ_ss-ramp is non-zero [1], and is

ẋ_ss-ramp = [v₁, v₂, …, vₙ]ᵀ = v        (6.40)
Substituting into the state equation, v = A(vt + w) + Bt, and equating like terms (the
coefficient matrix of t and the constant-coefficient matrix), the following two equations are
obtained [1]:

Av + B = 0  ⟹  v = −A⁻¹B
Aw = v      ⟹  w = A⁻¹v = −(A⁻¹)²B

The output equation is

y = Cx(t)        (6.47)

so that

y_ramp-ss = Cx_ramp-ss = C(−A⁻¹Bt − (A⁻¹)²B) = −CA⁻¹Bt − C(A⁻¹)²B        (6.48)

The steady-state error of the ramp input u(t) = t to the state-space model is then readily found
from lim_{t→∞} e(t) = lim_{t→∞} [r(t) − y_ramp-ss]; the ramp error is [1]

e_ramp-ss(∞) = lim_{t→∞} [(1 + CA⁻¹B)t + C(A⁻¹)²B]
In order for any solution to exist, A⁻¹ must exist; in other words, the system’s state matrix
must be invertible. The ramp equation also has an intuitive explanation. Essentially, for the limit
to exist and be bounded, (1 + CA⁻¹B) = 0 is required. But this is exactly the step-response error
determined earlier. This falls within the understanding of the ramp response: the ramp response
is finite iff the system is at least type 1, and type 1 systems necessarily have a zero step-response
error. Therefore, if the system is known to be at least type 1, then the ramp error is simply

e_ramp-ss(∞) = lim_{t→∞} [(1 + CA⁻¹B)t + C(A⁻¹)²B]
            = lim_{t→∞} [e_step(∞)·t + C(A⁻¹)²B]
            = lim_{t→∞} [(0)t + C(A⁻¹)²B]
∴ e_ramp-ss(∞) = C(A⁻¹)²B        (6.51)
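These formulas are straightforward to evaluate numerically; the sketch below (numpy assumed) uses the matrices of the worked example that follows:

import numpy as np

A = np.array([[-5.0, -4.0, -2.0],
              [-3.0, -10.0, 0.0],
              [-1.0, 1.0, -5.0]])
B = np.array([[1.0], [1.0], [0.0]])
C = np.array([[-1.0, 2.0, 1.0]])

Ainv = np.linalg.inv(A)
e_step = 1 + (C @ Ainv @ B).item()          # 1 + C A^{-1} B
print("e_step(inf) =", e_step)              # ~1.098: non-zero, so the
                                            # ramp error is infinite here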
The concept of time domain steady-state error analysis for state-space models can be expanded
and continued for the parabolic error. This is done by generalising the components of the
steady-state state-vector as li t2 + vi t + wi , and the resultant steady-state vectors are then
xss−parabola = lt2 + vt + w. The rest of the procedure is the same. This is left as an exercise for
the student.
ẋ = [−5 −4 −2; −3 −10 0; −1 1 −5]x + [1; 1; 0]u
y = [−1 2 1]x + 0u

Solution:
Here the matrices are as follows:

A = [−5 −4 −2; −3 −10 0; −1 1 −5],  B = [1; 1; 0],  C = [−1 2 1],  D = 0
The transfer function is

C(sI − A)⁻¹B = (s² + 3s − 16)/(s³ + 20s² + 111s + 164)

Therefore the error in the s-domain, E(s) = R(s)[1 − C(sI − A)⁻¹B], can be found and is simply

E(s) = R(s)·(s³ + 19s² + 108s + 180)/(s³ + 20s² + 111s + 164)

so that, for a unit step, e_u(∞) = lim_{s→0} sE(s) = 180/164 = 1.098. In the time domain,

e_u(∞) = 1 + CA⁻¹B
The term CA⁻¹B must then be evaluated. First, finding A⁻¹, we need to evaluate
det(A) and then adj(A):

det(A) = (−2)[(−3)(1) − (−10)(−1)] − (0)[⋯] + (−5)[(−5)(−10) − (−4)(−3)]
       = −164
The matrix of cofactors is

co-minors(A) = [ +det[−10 0; 1 −5]   −det[−3 0; −1 −5]   +det[−3 −10; −1 1] ;
                 −det[−4 −2; 1 −5]   +det[−5 −2; −1 −5]  −det[−5 −4; −1 1] ;
                 +det[−4 −2; −10 0]  −det[−5 −2; −3 0]   +det[−5 −4; −3 −10] ]
             = [ 50 −15 −13; −22 23 9; −20 6 38 ]

Then, find the adjoint, which is simply the transpose of the cofactor matrix of minors, i.e.

adj(A) = [ 50 −15 −13; −22 23 9; −20 6 38 ]ᵀ = [ 50 −22 −20; −15 23 6; −13 9 38 ]
Rather than calculating the inverse directly, the matrix algebra can be carried out with the
adjoint and the result divided by the determinant at the end. Therefore, C[adj(A)]B is
calculated. To make the calculation easier, [adj(A)]B is calculated first:

[adj(A)]B = [ 50 −22 −20; −15 23 6; −13 9 38 ]·[1; 1; 0]
          = [ 50 − 22; −15 + 23; −13 + 9 ]
          = [ 28; 8; −4 ]

Then C[adj(A)]B = [−1 2 1]·[28; 8; −4] = −28 + 16 − 4 = −16, so CA⁻¹B = −16/(−164) = 0.098.
Therefore

e_u(∞) = 1 + CA⁻¹B = 1.098
This is the same as before. This step error is non-zero, therefore the ramp error will be
infinite. I.e.

e_r(∞) = lim_{t→∞} [(1 + CA⁻¹B)t + C(A⁻¹)²B]
       = lim_{t→∞} [1.098t + C(A⁻¹)²B]
       = ∞
ẋ = [0 1 0; −5 −9 8; −1 0 1]x + [0; 1; 0]u
y = [−1 1 0]x + 0u

ẋ = [0 1 0 0; 0 0 1 0; 0 0 0 1; −100 −10 −6 −5]x + [0; 0; 0; 1]u
y = [100 10 0 0]x + 0u
6.5 Summary
The steady-state error of transfer functions was discussed in this learning unit. The error
parameters related to the unit step, unit ramp and unit parabola inputs were discussed. These
parameters were related back to the definition of the system type as well. The errors due to
disturbances on a system were also briefly discussed. The sensitivity of a system to its
parameters was also briefly defined and discussed. Additionally, the methods of steady-state
error analysis for state-space models were also discussed.
Feedback
Kv = lim_{s→0} sG(s)
   = lim_{s→0} s·K(s + 10)/(s(s + 4)(s + 8)(s + 13))
   = K(0 + 10)/((0 + 4)(0 + 8)(0 + 13))
   = 10K/416

Therefore

e_ramp = 416/(10K)
This is minimised the larger K is. Ideally K → ∞, but this will also affect system stability. Therefore,
the maximum possible value of K needs to be found such that the system is still stable. This is found
using the Routh table of the closed-loop system. The closed-loop system must first be found, i.e.

G_CL = G(s)/(1 + G(s))
     = [K(s + 10)/(s(s + 4)(s + 8)(s + 13))] / [1 + K(s + 10)/(s(s + 4)(s + 8)(s + 13))]
     = (Ks + 10K)/(s⁴ + 25s³ + 188s² + (416 + K)s + 10K)
The denominator is the characteristic polynomial; therefore, the first two rows of the table are

s⁰ |
s¹ |
s² |
s³ | 25  416 + K  0
s⁴ | 1  188  10K

Completing the table (rows may be scaled by positive factors without affecting the signs),

s⁰ | 250K  0
s¹ | −K² − 2382K + 1782144  0
s² | 4284 − K  250K  0
s³ | 25  416 + K  0
s⁴ | 1  188  10K
For the system to be stable, the first column must have one sign (all + or all −). This is highly
dependent on the value of K. The first column has three entries dependent on K. From the s⁰
entry, we can see that K must be positive (this is stricter than the s² entry, 4284 − K, which would
additionally require K < 4284). The s¹ entry is −K² − 2382K + 1782144. This must also be
positive; therefore, the inequality −K² − 2382K + 1782144 > 0 must be satisfied for the system
to remain stable. The roots of the quadratic give −2980.03 < K < 598.03. However, K > 0, therefore

0 < K < 598.03
A good maximum value of K is then K = 598, and the minimum error e_ramp is then

e_ramp = 416/(10K) = 416/(10·598) = 0.070
Ka = lim_{s→0} s²G(s)
   = lim_{s→0} s²·K/(s²(s + a))
   = lim_{s→0} K/(s + a)
   = K/(0 + a)
∴ Ka = K/a

The error is therefore e_para = 1/Ka = a/K. The sensitivity can now be found:
S_{e_para:a} = (a/e_para)·(∂e_para/∂a)
            = (a/(a/K))·(∂/∂a)(a/K)
            = K·(1/K)
∴ S_{e_para:a} = 1

S_{e_para:K} = (K/e_para)·(∂e_para/∂K)
            = (K/(a/K))·(∂/∂K)(a/K)
            = (K²/a)·(∂/∂K)(aK⁻¹)
            = (K²/a)·(−aK⁻²)
∴ S_{e_para:K} = −1
2. First, the closed-loop transfer function needs to be found. This is easily done using G = G₁G₂
and the unity-gain closed-loop form,

G_CL = G/(1 + G) = G₁G₂/(1 + G₁G₂)

The sensitivity of this transfer function to the plant G₂ can now be found. To make the
calculation easier, G_CL is rewritten as G_CL = G₁/(1/G₂ + G₁). The sensitivity is therefore

S_{G_CL:G₂} = (G₂/G_CL)·(∂G_CL/∂G₂)
            = G₂·(G₂⁻¹ + G₁)/G₁ · (∂/∂G₂)[G₁(G₂⁻¹ + G₁)⁻¹]
            = G₂·(G₂⁻¹ + G₁)/G₁ · G₁G₂⁻²(G₂⁻¹ + G₁)⁻²
            = G₂⁻¹(G₂⁻¹ + G₁)⁻¹
∴ S_{G_CL:G₂} = 1/(1 + G₁G₂)
This sensitivity has the same form with respect to the other plant G₁: the expressions of the two
sensitivities are identical, S_{G_CL:G₁} = S_{G_CL:G₂} = 1/(1 + G₁G₂).

This means that any “proportional” change made to the plant G₂ is attenuated in the overall
closed-loop transfer function: the feedback reduces the closed-loop system’s sensitivity to the
change. This is somewhat counterintuitive.
s-domain

The matrix (sI − A) is

(sI − A) = [ s −1 0; 5 s + 9 −8; 1 0 s − 1 ]

so that

C(sI − A)⁻¹B = (s² − 2s + 1)/(s³ + 8s² − 4s + 3)

and

1 − C(sI − A)⁻¹B = (s³ + 7s² − 2s + 2)/(s³ + 8s² − 4s + 3)

The error is simply E(s) = R(s)[1 − C(sI − A)⁻¹B], which is
E(s) = R(s)·(s³ + 7s² − 2s + 2)/(s³ + 8s² − 4s + 3). Therefore the step-response error is

e_step(∞) = lim_{s→0} s·(1/s)·(s³ + 7s² − 2s + 2)/(s³ + 8s² − 4s + 3) = 2/3

Therefore the steady-state error is e_step(∞) = 0.667. Since the step error is finite and non-zero, the
ramp and parabola input errors are infinite.
time domain
The inverse of A needs to be found. This is done by finding the adjoint, adj(A), and then later dividing by the determinant. The determinant is easily found, and det A = 100. Then adj(A) is
\[
\operatorname{adj}(A) = \begin{bmatrix} -10 & -6 & -5 & -1 \\ 100 & 0 & 0 & 0 \\ 0 & 100 & 0 & 0 \\ 0 & 0 & 100 & 0 \end{bmatrix}
\]
Continuing, [adj(A)]B is
\[
[\operatorname{adj}(A)]B = \begin{bmatrix} -10 & -6 & -5 & -1 \\ 100 & 0 & 0 & 0 \\ 0 & 100 & 0 & 0 \\ 0 & 0 & 100 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}
= 0\begin{bmatrix} -10 \\ 100 \\ 0 \\ 0 \end{bmatrix} + 0\begin{bmatrix} -6 \\ 0 \\ 100 \\ 0 \end{bmatrix} + 0\begin{bmatrix} -5 \\ 0 \\ 0 \\ 100 \end{bmatrix} + 1\begin{bmatrix} -1 \\ 0 \\ 0 \\ 0 \end{bmatrix}
= \begin{bmatrix} -1 \\ 0 \\ 0 \\ 0 \end{bmatrix}
\]
Therefore, dividing this by the determinant gives C[A^{-1}]B = −1, and the step error is
\[
e_{\text{step}}(\infty) = 1 + CA^{-1}B = 1 + (-1) = 0
\]
Therefore, the system is at least type 1, because the step error is zero.
To find the ramp error, we need to go back to [adj(A)]B. In fact we can leave this as is, and simply find C[adj(A)]. Then these two matrices can be multiplied and divided by the determinant squared.
Therefore C[adj(A)] is
\[
C[\operatorname{adj}(A)] = \begin{bmatrix} 100 & 10 & 0 & 0 \end{bmatrix}\begin{bmatrix} -10 & -6 & -5 & -1 \\ 100 & 0 & 0 & 0 \\ 0 & 100 & 0 & 0 \\ 0 & 0 & 100 & 0 \end{bmatrix}
= \begin{bmatrix} 0 & -600 & -500 & -100 \end{bmatrix}
\]
Then dividing this by the determinant (100) to assist the calculation gives
\[
CA^{-1} = \begin{bmatrix} 0 & -6 & -5 & -1 \end{bmatrix}
\]
Then multiplying this CA^{-1} with [adj(A)]B (and "not forgetting" to divide by the determinant again later) gives
\[
CA^{-1}[\operatorname{adj}(A)]B = \begin{bmatrix} 0 & -6 & -5 & -1 \end{bmatrix}\begin{bmatrix} -1 \\ 0 \\ 0 \\ 0 \end{bmatrix}
= (0)(-1) + (-6)(0) + (-5)(0) + (-1)(0) = 0
\]
Therefore CA^{-2}B = 0, and the system has a ramp error of zero as well, i.e. the system is at least type 2.
Some methods have already been covered in the module content; however, more will be explored here, incorporating activities for active learning. Block diagram reduction can be done through the following manipulation techniques: cascade combination, feed-forward (parallel) combination, feedback reduction, summing-junction manipulation, and pickoff-point manipulation.
A complex system block diagram can be reduced by cycling through these manipulation methods. More detailed explanations of reduction are covered for each technique where required. An important point is that primary input and output signals cannot be removed, and thus, in general, the primary signals themselves cannot be reduced away. This point is made apparent in Example 1.2 and Example 1.3 (Solution): in the associated diagrams only primary input and output signals are present, so nothing can be simplified or reduced.
A. Block Diagram Reduction
A simple system cascade is synonymous with the multiplication of the system functions. As
with ordinary multiplication, the system multiplication is commutative.
[Figure A.1: Cascade combination. (a) R(s) → G1(s) → G2(s) → C(s), with intermediate signal U(s); (c) the commuted form R(s) → G2(s) → G1(s) → C(s), with intermediate signal V(s).]
An important note is that the intermediate signal between G1 (s) and G2 (s) in Figure A.1a is
not the same as the intermediate signal between G2 (s) and G1 (s) in Figure A.1c.
Parallel systems are formally called feed-forward combinations. These can be reduced in two different ways: the more common system addition, as seen in Figure A.2b, or the unitary feed-forward, as seen in Figure A.2c.
[Figure A.2: Feed-forward (parallel) combination. (a) Feed-forward form: R(s) feeds G1(s) and G2(s), whose outputs are combined (±) to give C(s); (b) the reduced addition form G1(s) ± G2(s); (c) the unitary feed-forward form, G2(s) cascaded with a block involving the ratio G1(s)/G2(s).]
The unitary feed-forward introduces the idea of inter-system division; though this is a more artificial construct, it is still valid. Needless to say, feed-forward structures are also mathematically commutative, but this doesn't add any additional meaning to the block diagram since these structures are in parallel anyway.
Feedback system reduction was covered in Chapter 1. However, it shall be covered here again for completeness.
[Figure A.3: Feedback reduction. (a) Feedback form: R(s) enters a summing junction (∓), producing the error E(s) that drives G(s) to C(s); H(s) feeds the measured signal M(s) back to the junction. (b) Reduced feedback form: the single block G(s)/(1 ± G(s)H(s)). (c) An equivalent rearrangement in terms of the loop gain G(s)H(s) and the reciprocal block 1/H(s).]
Note the arrangement of the signs in Equation A.2, and how this correlates to Figure A.3a. Then, substituting Equation A.2 and Equation A.3 into Equation A.1, the following results:
\[
C(s) = G(s)E(s) = G(s)\left[R(s) \mp H(s)C(s)\right]
\;\Rightarrow\;
\frac{C(s)}{R(s)} = \frac{G(s)}{1 \pm G(s)H(s)}
\]
and is as seen in the reduced feedback form in Figure A.3b. Again, the important note here is the change between the ∓ and ± signs in the two feedback forms.
If in the primary form the measured signal M(s) is subtracted from the reference R(s), i.e. the sign is "−" and E(s) = R(s) − M(s), then the reduced block will have a "+", i.e. G(s)/(1 + G(s)H(s)). If in the primary form the measured signal M(s) is added to the reference R(s), i.e. the sign is "+" and E(s) = R(s) + M(s), then the reduced block will have a "−", i.e. G(s)/(1 − G(s)H(s)).
Summing junctions are simple signal operators. Summing junctions can be manipulated, usually in conjunction with pickoff points, to form feed-forward or feedback loops that are then further reduced. There are three basic manipulation techniques that have in principle already been introduced: commutativity, and moving a summing junction either ahead of or beyond a system block.
[Figure A.4: Summing-junction manipulation of the signals W(s), X(s), Y(s) and Z(s): (a) original form with two cascaded summing junctions; (b) reduced form combining them into a single junction; (c) commutative form with the order of the junctions exchanged.]
Unlike system blocks, the commutative property of summing junctions is of use and sometimes
of significance in system block diagrams. Especially in circumstances when a (feed-forward or
feedback) loop needs to be correctly moved and manipulated.
[Figure A.5: Moving a summing junction beyond a block: (a) original form, where X(s) and Y(s) are summed (±) and the sum passes through G(s) to give Z(s); (b) equivalent form, where G(s) is duplicated onto each input path before the junction.]
In Figure A.5b, it can be seen that the system block has simply been duplicated as necessary. This occurs because the system fundamentally operates on both (summed) signals, i.e. Z(s) = (±X(s) ± Y(s))G(s) = ±X(s)G(s) ± Y(s)G(s). Incidentally, this demonstrates the equivalent of the mathematical distributive property in system diagrams.
[Figure A.6: Moving a summing junction ahead of a block: (a) original form, Z(s) = ±G(s)X(s) ± Y(s); (b) alternative form, where Y(s) is preprocessed by 1/G(s) before the junction and the sum then passes through G(s).]
In Figure A.6b, the introduction of the moved system block has, strangely, resulted in the reciprocal of the system appearing. Considering the validity of this for a moment, this can be compared to the original seen in Figure A.6a. In the original, the signal Y(s) is simply added at the summing junction and Z(s) = ±G(s)X(s) ± Y(s).
When looking at the equivalent equation for Figure A.6b and following the signal path from Y(s) to Z(s), we have:
\[
Z(s) = \left[\pm X(s) \pm Y(s)\frac{1}{G(s)}\right] G(s)
\;\Rightarrow\;
Z(s) = \pm X(s)G(s) \pm Y(s)\frac{1}{G(s)}G(s) = \pm G(s)X(s) \pm Y(s)
\]
This is exactly as before. This occurs because the system only affects one of the signals, X(s), and not the other, Y(s). However, since the system block consequently operates on the sum of both signals in the modified form, the system's operation on the Y(s) signal needs to be "undone". This is achieved by preprocessing Y(s) by 1/G(s) and feeding this to the summing junction.
Most of the fundamental understanding of pickoff point manipulation is synonymous with that of summing junctions. Likewise, this helps manipulate and form feed-forward or feedback loops into the required form. The associated manipulations are commutativity, and moving a point beyond or ahead of a system block. The commutativity property, although useful, is trivial to demonstrate in isolation.
[Figure A.7: Moving a pickoff point ahead of a block: (a) original form, with the pickoff on the output Y(s); (b) alternative form, with the pickoff moved to the input and G(s) duplicated on the picked-off branch.]
Moving a pickoff point ahead of a block, as seen in Figure A.7, essentially interchanges the
pickoff point from the output signal Y (s) to the input signal X(s). A similar concept applies
to moving a pickoff point from an input signal beyond a block to the output signal, as seen in
Figure A.8.
[Figure A.8: Moving a pickoff point beyond a block: (a) original form, with the pickoff on the input X(s); (b) alternative form, with the pickoff on the output and 1/G(s) on the picked-off branch.]
Here again, the concept of a reciprocal, “cancelling” operation is seen. This is required to
generate the original input signal X(s) from the output signal Y (s).
Consider the following worked reduction. [Original diagram: R(s) enters a negative summing junction, then G1(s), then a positive summing junction, then the cascade G2(s) and G3(s) to C(s); H1(s) feeds back to the inner (positive) junction, and H2(s) feeds back to the outer (negative) junction. The cascade is first combined into the single block G2(s)G3(s).]
Then there is a feedback loop that can be reduced... (NB the sign of the summing
junction!)
[The inner positive-feedback loop reduces to G2(s)G3(s)/(1 − G2(s)G3(s)H1(s)), leaving R(s) → (−) → G1(s) → G2(s)G3(s)/(1 − G2(s)G3(s)H1(s)) → C(s), with H2(s) in the feedback path.]
All that is left is a standard feedback-loop form, albeit with a more complex system block.
The final form is... (NB the sign of the summing junction)
\[
\frac{C(s)}{R(s)} = \frac{\dfrac{G_1(s)G_2(s)G_3(s)}{1 - G_2(s)G_3(s)H_1(s)}}{1 + \dfrac{G_1(s)G_2(s)G_3(s)}{1 - G_2(s)G_3(s)H_1(s)}H_2(s)}
\]
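The reduced form can be spot-checked numerically with the Control System Toolbox. The first-order blocks below are arbitrary values chosen only for illustration (a sketch, not part of the original example):

% Build the loops step by step and compare with the closed-form answer.
s  = tf('s');
G1 = 1/(s+1);  G2 = 2/(s+3);  G3 = 1/(s+5);
H1 = 1/(s+2);  H2 = 3/(s+4);
inner = feedback(G2*G3, H1, +1);      % inner loop has a positive summing junction
T     = feedback(G1*inner, H2);       % outer loop is standard negative feedback
Tform = (G1*G2*G3/(1 - G2*G3*H1))/(1 + (G1*G2*G3/(1 - G2*G3*H1))*H2);
norm(minreal(T - Tform), inf)         % ~0, so the two forms agree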
Activity: reduce the following block diagram to a single block. [Diagram: the forward path from R(s) to C(s) is G1, G2, G3, G4 in cascade; H1 closes a negative feedback loop around G1 and G2, H2 closes a negative feedback loop around G3 and G4, and H3 closes a negative feedback loop around G2 and G3, overlapping both of the other loops.]
Hint: Try and move the pickoff point feeding - and the summing junction following - the
signal loop containing the system block H3 to the outer parts of the diagram. You will
need to add the required reciprocal system blocks on the H3 path for this.
The final reduced form is
\[
\frac{C(s)}{R(s)} = \frac{G_1G_2G_3G_4}{1 + G_1G_2H_1 + G_3G_4H_2 + G_2G_3H_3 + G_1G_2G_3G_4H_1H_2}
\]
A.6 Superposition
The concept of superposition can also be applied to system block diagrams. This is useful
for reducing highly complex systems that are Multi-Input-Multi-Output (MIMO) in general.
This technique involves cycling through each input and determining its contribution to each
output, through the particular superposition subsystem involved. In other words, a simple
Single-Input-Single-Output superposition system is created for each input, to each output.
This will ideally result in a simple and reduced open-loop form. The contributions from each
input are then systematically combined through addition to form each of the required outputs.
Importantly, this means that in general there is no open-loop reduced form for a MIMO system
block diagram. But hybrid reduced forms can be formulated. This is demonstrated with a
simple analysis below.
[Figure A.9: a two-input system. (a) Original form with inputs R1(s) and R2(s), blocks G1(s) and G2(s) (with G2(s) feeding back on itself positively), and feedback path H(s). (b) Reduced form: R1(s) enters a negative summing junction driving G1(s)G2(s)/(1 − G2(s)) to C(s), while H(s) acts on C(s) combined (−, +) with R2(s) in the feedback path.]
In Figure A.9b there are clearly two inputs, R1(s) and R2(s). Using superposition, the contribution of R1(s) to the output C(s) is found first, which will be defined as CI(s). Following this, the contribution of R2(s) to the output C(s) will be found, which will be defined as CII(s). Starting with R1(s) and letting R2(s) = 0, the system block diagram in Figure A.10 results.
[Figure A.10: the contribution of R1(s), with R2(s) = 0: (a) substituted form; (b) the resulting standard negative feedback loop with forward path G1(s)G2(s)/(1 − G2(s)) and feedback H(s); (c) final form:]
\[
C_I(s) = \frac{G_1(s)G_2(s)}{1 - G_2(s) + G_1(s)G_2(s)H(s)}\,R_1(s)
\]
Similarly for R2 (s), letting R1 (s) = 0, we can find the contribution to the output, which we
shall call CII (s). The manipulation is seen in Figure A.11.
[Figure A.11: the contribution of R2(s), with R1(s) = 0: (a) substituted form; (b, c) rearranged so that R2(s) drives the loop through H(s); (d) final form:]
\[
C_{II}(s) = \frac{G_1(s)G_2(s)H(s)}{1 - G_2(s) + G_1(s)G_2(s)H(s)}\,R_2(s)
\]
From the two diagrams of the contribution of R1 (s) and R2 (s), seen in Figure A.10c and Fig-
ure A.11d respectively, it can easily be seen that the only difference is the contribution of
H(s) in the superposition of R2 (s). Taking this into account and the final output being
C(s) = CI (s) + CII (s), the final system function is obtained:
\[
C(s) = \frac{G_1(s)G_2(s)}{1 - G_2(s) + G_1(s)G_2(s)H(s)}\,\big(R_1(s) + R_2(s)H(s)\big) \tag{A.5}
\]
The inputs are collected in a manner that is easily interpreted in a system format. I.e. There
is a bulk system that operates on preprocessed inputs R1 (s) and R2 (s)H(s) which are added
together beforehand. The resultant system diagram is:
[Final diagram: R2(s) passes through H(s), is added to R1(s), and their sum drives the single block G1(s)G2(s)/(1 − G2(s) + G1(s)G2(s)H(s)) to produce C(s).]
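Equation A.5 can also be confirmed symbolically. The loop equation below is read off Figure A.9a; since the original diagram is not reproduced here, treat this reading as an assumption (it does reproduce Equation A.5 exactly):

% Symbolic check of Equation A.5 (Symbolic Math Toolbox).
% Assumed loop equation from Figure A.9a: C = G2*( G1*(R1 - H*(C - R2)) + C )
syms G1 G2 H R1 R2 C
Csol = simplify(solve(C == G2*(G1*(R1 - H*(C - R2)) + C), C))
% -> (G1*G2*(R1 + H*R2))/(1 - G2 + G1*G2*H), matching Equation A.5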
B. Partial Fraction Decomposition
In mathematics, one often comes across expressions that are rational polynomials. Simply put, the expression consists of a polynomial divided by another polynomial.
\[
F(x) = \frac{P(x)}{Q(x)} \tag{B.1}
\]
\[
F(x) = \frac{b_m x^m + b_{m-1}x^{m-1} + b_{m-2}x^{m-2} + \cdots + b_2x^2 + b_1x + b_0}{x^n + a_{n-1}x^{n-1} + a_{n-2}x^{n-2} + \cdots + a_2x^2 + a_1x + a_0} \tag{B.2}
\]
The function F (x) is referred to as a proper rational function if m < n, and an improper
rational function if m ≥ n. Improper rational functions can be separated into the sum of an
ordinary polynomial and a proper rational function. This is of the form,
\[
F(x) = r(x) + \frac{p(x)}{q(x)} \tag{B.3}
\]
where r(x) is the ordinary polynomial, and p(x)/q(x) is the proper (and reduced) rational function.
Polynomial long division is a technique used to change an improper rational function into a proper form. Consider the following improper function:
\[
F(x) = \frac{x^5 - 25x^3 + 19x^2 + 98x - 102}{x^3 - 3x^2 - 6x + 8}
\]
Let's go through the process of polynomial long division. Firstly, set up the long division.
Importantly, we must arrange the terms of the polynomial from highest order to lowest. The 0x^4 term has been included as a reminder and to help align the columns while calculating.
To find the first term of the quotient, we divide the highest-order term of the dividend (numerator) polynomial by the highest-order term of the divisor (denominator) polynomial. Incidentally this is x^5/x^3 = x^2. Next, as in ordinary long division, we multiply this quotient term with the entire divisor, i.e. (x^2)(x^3 − 3x^2 − 6x + 8) = x^5 − 3x^4 − 6x^3 + 8x^2, and subtract it from the appropriate terms of the dividend, leaving 3x^4 − 19x^3 + 11x^2. Since there are four terms in the divisor, we need to bring down a term (98x) to balance the new expression to the same number of terms.

The process is essentially repeated to evaluate the next term of the quotient: the first term of the reduced dividend divided by the first term of the divisor gives 3x^4/x^3 = 3x. The 3x term alone is multiplied with the divisor, (3x)(x^3 − 3x^2 − 6x + 8) = 3x^4 − 9x^3 − 18x^2 + 24x, and this is subtracted from the reduced dividend, leaving −10x^3 + 29x^2 + 74x; the final term, −102, is brought down.

Finally, −10x^3/x^3 = −10 gives the last quotient term. Multiplying, (−10)(x^3 − 3x^2 − 6x + 8) = −10x^3 + 30x^2 + 60x − 80, and subtracting leaves −x^2 + 14x − 22. The complete long division is:

                          x^2 +  3x  - 10
                 _____________________________________________
x^3 - 3x^2 - 6x + 8 ) x^5 + 0x^4 - 25x^3 + 19x^2 + 98x - 102
                    -(x^5 - 3x^4 -  6x^3 +  8x^2)
                    -----------------------------------------
                           3x^4 - 19x^3 + 11x^2 + 98x
                         -(3x^4 -  9x^3 - 18x^2 + 24x)
                    -----------------------------------------
                                - 10x^3 + 29x^2 + 74x - 102
                              -(- 10x^3 + 30x^2 + 60x -  80)
                    -----------------------------------------
                                         - x^2 + 14x -  22

The process of long division is now complete. The terms on the top line form the quotient polynomial, and the bottom line is the remainder, which, together with the original divisor, forms the proper rational function. I.e.
\[
F(x) = x^2 + 3x - 10 + \frac{-x^2 + 14x - 22}{x^3 - 3x^2 - 6x + 8}
\]
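For reference, MatLab's deconv performs exactly this polynomial long division on coefficient vectors; the commands below reproduce the example just completed:

% Polynomial long division with deconv: quotient q and remainder r.
b = [1 0 -25 19 98 -102];   % dividend: x^5 + 0x^4 - 25x^3 + 19x^2 + 98x - 102
a = [1 -3 -6 8];            % divisor:  x^3 - 3x^2 - 6x + 8
[q, r] = deconv(b, a)
% q = [1 3 -10]         -> quotient  x^2 + 3x - 10
% r = [0 0 0 -1 14 -22] -> remainder -x^2 + 14x - 22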
Now reduce the following improper rational functions into proper form:

a) (2x^3 + 9x^2 + 11x + 2)/(x^2 + 4x + 3)
b) (2x^4 + 31x^3 + 160x^2 + 317x + 210)/(x^2 − 4x + 3)
c) (6x^6 + 45x^5 + 3x^4 − 118x^3 + 71x^2 + 37x − 391)/(2x^3 + 15x^2 + x − 42)

Solutions:

a) 2x + 1 + (x − 1)/(x^2 + 4x + 3)
b) 2x^2 + 39x + 310 + (1440x − 720)/(x^2 − 4x + 3)
c) 3x^3 + 4 + (11x^2 + 33x − 223)/(2x^3 + 15x^2 + x − 42)
Remember, the intention is to develop techniques that are useful for evaluating Laplace transforms. Given an expression in the s-domain, which will commonly be a rational polynomial that could be either proper or improper, we want to manipulate the expression into a sum of standard forms. Long division reduces an improper rational polynomial into the sum of a standard polynomial r(s) and a proper rational polynomial p(s)/q(s). The standard polynomial r(s) is easily transformed back to the time domain, since the rules for this are contained in the Laplace tables. Usually, the time derivative rule is applied to the polynomial r(s), but this is not a rule, simply a suggested approach.
What is of more significance, and more commonly experienced, is a proper rational polynomial expression p(s)/q(s). Being able to handle this is of equal importance, and essential, in manipulating an s-domain expression into standard forms. This is covered in the next Section B.2.
This section covers the various methods required for "decomposing" a proper rational polynomial into a "partial fraction" form. Partial fractions are a sum of fractional expressions whose numerator is a constant and whose denominator is a linear polynomial (to some power), i.e.
\[
\sum_i \frac{k_i}{(s-\lambda_i)^{r_i}}
\]
An expression of this form is usually much easier to apply the inverse Laplace transform to, since it is easily changed into a standard form with an appropriate n! factor as needed (see the third entry of the time exponentials in Table C.1). The methods of finding this form are covered in the subsections below.
This method is the most robust, in that it will always work for partial fraction decomposition.
This is because it is the fundamental principle of partial fraction decomposition. For this
reason, the theory is also covered, but the theory itself is not essential for you to be able to do
partial fraction decomposition.
Theory of decomposition
This method of partial fraction decomposition works directly on the defining Equation B.7. However, it requires that the rational polynomial is proper in order to find its corresponding partial fraction decomposition. From the fundamental theorem of algebra:
If f(z) is a nonconstant polynomial of a single complex variable z, then the equation f(z) = 0 has at least one root [4, 5].
This essentially means that any polynomial can be reduced to a product of only linear factors; however, the roots may be complex valued. Considering this in conjunction with Equation B.7, the polynomial in the denominator can be written as
\[
q(x) = (x - \lambda_1)^{r_1}(x - \lambda_2)^{r_2}\cdots(x - \lambda_i)^{r_i}
\]
These factors correspond with the factors required in the partial fraction decomposition, specifically \(\sum_i k_i/(x-\lambda_i)^{r_i}\). At this stage the numerator is not of significant importance with regard to factorising. In fact, this method (and all others that follow) explicitly requires us to leave the numerator polynomial p(x) in expanded form! The purpose of this will become apparent later. With this in mind, consider the following equation:
\[
\frac{p(x)}{(x-\lambda_1)^{r_1}(x-\lambda_2)^{r_2}\cdots(x-\lambda_i)^{r_i}}
= \frac{\alpha_1}{(x-\lambda_1)} + \cdots + \frac{\alpha_{r_1}}{(x-\lambda_1)^{r_1}}
+ \frac{\beta_1}{(x-\lambda_2)} + \cdots + \frac{\beta_{r_2}}{(x-\lambda_2)^{r_2}}
+ \cdots
+ \frac{\kappa_1}{(x-\lambda_i)} + \cdots + \frac{\kappa_{r_i}}{(x-\lambda_i)^{r_i}}
\]
The polynomial denominator is factorised and, as explained above, equated to the general partial fraction form. Consider for a moment a single linear factor in the denominator, (x − λ1)^{r1}. This is decomposed into a set of fractions (the first group above), with assigned coefficients α in the numerators, and the denominator of each fraction being the linear factor (x − λ1). However, for each successive fraction, the power of the linear factor increases up to the appropriate value r1. This is for the single factor (x − λ1), and must be repeated as necessary for each factor (x − λi) in the denominator.
Therefore, each factor, with the corresponding coefficients in the numerators (αj, βj, ..., κj) and the particular linear factor in the denominator (x − λi) raised to increasing powers up to the order of that factor ri, forms the partial fraction decomposition.
The reason why this should make sense is that when the partial fraction sum is simplified, the lowest common denominator is found; but this is the original denominator, as required by definition and by necessity. Let's look at a brief example of this before discussing how to handle the numerator, and also how to find the coefficients of the numerators.
Example 2.1
Find the partial fraction decomposition with general coefficients of the following proper rational polynomial:
\[
\frac{2x^3 + 3x^2 - 4x + 6}{(x-1)^2(x+2)(x-3)^3} \tag{B.12}
\]
The first factor (x − 1) is raised to the power of 2, so this will form two partial fractions, i.e.
\[
\frac{\alpha_1}{(x-1)} + \frac{\alpha_2}{(x-1)^2}
\]
The factor (x + 2) is not repeated, so it forms the single fraction
\[
\frac{\beta}{(x+2)}
\]
Finally, (x − 3) is raised to the power of 3, and therefore forms three partial fractions,
\[
\frac{\kappa_1}{(x-3)} + \frac{\kappa_2}{(x-3)^2} + \frac{\kappa_3}{(x-3)^3}
\]
Altogether:
\[
\frac{2x^3 + 3x^2 - 4x + 6}{(x-1)^2(x+2)(x-3)^3}
= \frac{\alpha_1}{(x-1)} + \frac{\alpha_2}{(x-1)^2} + \frac{\beta}{(x+2)}
+ \frac{\kappa_1}{(x-3)} + \frac{\kappa_2}{(x-3)^2} + \frac{\kappa_3}{(x-3)^3} \tag{B.13}
\]
The method has the name "clearing fractions" because we literally clear the fractions of the original rational polynomial function and of the partial fractions equation. This then resolves to an ordinary polynomial equation. The theory of this is covered below. Continuing with the following,
\[
\frac{b_m x^m + b_{m-1}x^{m-1} + \cdots + b_1x + b_0}{x^n + a_{n-1}x^{n-1} + \cdots + a_1x + a_0}
= \frac{\alpha_1}{(x-\lambda_1)} + \cdots + \frac{\alpha_{r_1}}{(x-\lambda_1)^{r_1}}
+ \frac{\beta_1}{(x-\lambda_2)} + \cdots + \frac{\beta_{r_2}}{(x-\lambda_2)^{r_2}}
+ \cdots
+ \frac{\kappa_1}{(x-\lambda_i)} + \cdots + \frac{\kappa_{r_i}}{(x-\lambda_i)^{r_i}}
\]
Then we can eliminate the denominators by multiplying both sides with the original (factorised) denominator:
\[
\text{L.H.S.} = \frac{b_m x^m + \cdots + b_1x + b_0}{x^n + \cdots + a_1x + a_0}\,(x-\lambda_1)^{r_1}(x-\lambda_2)^{r_2}\cdots(x-\lambda_i)^{r_i} = p(x)
\]
\[
\text{R.H.S.} = \left[\frac{\alpha_1}{(x-\lambda_1)} + \cdots + \frac{\kappa_{r_i}}{(x-\lambda_i)^{r_i}}\right](x-\lambda_1)^{r_1}(x-\lambda_2)^{r_2}\cdots(x-\lambda_i)^{r_i}
\]
The left-hand side is simply the numerator p(x) of the original proper rational polynomial. The right-hand side is the partial fraction sum multiplied by q(x). Although the resulting expression is quite intimidating, what is important is a simple understanding of what is happening, and more importantly what this eventually becomes. Let's look momentarily at the α1 fraction: here α1/(x − λ1) is multiplied by q(x) = (x − λ1)^{r1}(x − λ2)^{r2}···(x − λi)^{ri}, so one power of the factor (x − λ1)^{r1} of q(x) cancels with the single factor (x − λ1) in the denominator. What results is α1 multiplied by q(x) less one factor of (x − λ1), i.e. α1(x − λ1)^{r1−1}(x − λ2)^{r2}···(x − λi)^{ri}.
Similarly for α2: α2/(x − λ1)² is multiplied by q(x) = (x − λ1)^{r1}(x − λ2)^{r2}···(x − λi)^{ri}. The difference is that this partial fraction has two factors of (x − λ1), thus reducing the corresponding factor of the multiplied q(x) by 2, i.e. α2(x − λ1)^{r1−2}(x − λ2)^{r2}···(x − λi)^{ri}. This process continues for the factor (x − λ1) for each of the corresponding partial fraction coefficients αj, until the last one, α_{r1}, which completely eliminates that factor of q(x), i.e. α_{r1}(x − λ2)^{r2}···(x − λi)^{ri}.
But what does this result in? The reduced polynomial q(x) for each coefficient is exactly that, a polynomial. This polynomial is generally different for each coefficient. Importantly, the order of any of these polynomials is necessarily less than that of q(x).
The following expression then results, where each of the coefficients of the polynomial factors (a11, ..., a_{r1 n}, b11, ..., b_{r2 n}, k11, ..., k_{ri n}) is evaluated from the roots (λ1, λ2, ..., λi) of q(x) as required. Note that this is a generalisation and some of these values could be zero. But this means that these values are known. Therefore, the polynomial can be rewritten in collected form, as a standard polynomial, i.e.
\[
\text{R.H.S.} = (a_{11}\alpha_1 + a_{21}\alpha_2 + \cdots + a_{r_1 1}\alpha_{r_1} + b_{11}\beta_1 + \cdots + b_{r_2 1}\beta_{r_2} + k_{11}\kappa_1 + \cdots + k_{r_i 1}\kappa_{r_i})\,x^{n-1}
\]
\[
+\ (a_{12}\alpha_1 + a_{22}\alpha_2 + \cdots + a_{r_1 2}\alpha_{r_1} + b_{12}\beta_1 + \cdots + b_{r_2 2}\beta_{r_2} + k_{12}\kappa_1 + \cdots + k_{r_i 2}\kappa_{r_i})\,x^{n-2} + \cdots
\]
\[
+\ (a_{1n}\alpha_1 + a_{2n}\alpha_2 + \cdots + a_{r_1 n}\alpha_{r_1} + b_{1n}\beta_1 + \cdots + b_{r_2 n}\beta_{r_2} + k_{1n}\kappa_1 + \cdots + k_{r_i n}\kappa_{r_i})
\]
Considering this, the right-hand side and left-hand side form a system of equations. The values of the coefficients in p(x) are known, and the coefficients a11 through to k_{ri n} are found from the roots of q(x), and are therefore also known. Therefore, the L.H.S. and R.H.S. form a system of equations in the numerator coefficients of interest, α1 through to κ_{ri}, by equating terms of the polynomial variable x on both sides of the equation. Clearly, in a generalised form, the system of equations can be quite voluminous and intimidating. To more easily conceptualise the system of equations, we switch to matrix algebra. Consider the following:
\[
\underbrace{\begin{bmatrix}
a_{11} & a_{21} & \cdots & a_{r_1 1} & \cdots & k_{11} & k_{21} & \cdots & k_{r_i 1} \\
a_{12} & a_{22} & \cdots & a_{r_1 2} & \cdots & k_{12} & k_{22} & \cdots & k_{r_i 2} \\
\vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
a_{1n} & a_{2n} & \cdots & a_{r_1 n} & \cdots & k_{1n} & k_{2n} & \cdots & k_{r_i n}
\end{bmatrix}}_{A}
\underbrace{\begin{bmatrix}
\alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_{r_1} \\ \vdots \\ \kappa_1 \\ \kappa_2 \\ \vdots \\ \kappa_{r_i}
\end{bmatrix}}_{k}
=
\underbrace{\begin{bmatrix}
b_m \\ b_{m-1} \\ \vdots \\ b_1 \\ b_0
\end{bmatrix}}_{b} \tag{B.14}
\]
\[
Ak = b \tag{B.15}
\]
Remember, the matrix A is already populated, since its entries come from the coefficients of the polynomial multiplication of q(x) with each partial fraction; the vector b is also known, as it consists of the vector form of the numerator polynomial p(x) of the original rational polynomial. The vector k is the variable of interest, as its entries correspond to the numerator coefficients of the actual partial fraction decomposition. This is simply solved by finding A⁻¹, i.e.
\[
k = A^{-1}b \tag{B.16}
\]
Then simply reading out these values and filling them in appropriately to the corresponding
partial fraction completes the partial fraction decomposition process.
This system of linear equations can be calculated manually; however, this may be extremely tedious as more coefficients need to be solved for. It is recommended to use a software solution to solve the matrix equation. As an example, MatLab (and its open-source equivalent Octave) is very good at performing this task, and using it with this method is generally the best way to perform partial fraction decomposition.
Example 2.2
Find the partial fraction decomposition of F (x) given below:
\[
F(x) = \frac{x^3 + 3x^2 + 4x + 6}{(x+1)(x+2)(x+3)^3}
\]
The first step is to rewrite the rational polynomial in partial fraction form, with general coefficients. Thus, we have:
\[
\frac{x^3 + 3x^2 + 4x + 6}{(x+1)(x+2)(x+3)^3}
= \frac{k_1}{(x+1)} + \frac{k_2}{(x+2)} + \frac{k_3}{(x+3)} + \frac{k_4}{(x+3)^2} + \frac{k_5}{(x+3)^3}
\]
The second step is to multiply through by the denominator q(x) to "clear the fractions". This gives
\[
x^3 + 3x^2 + 4x + 6 = k_1(x+2)(x+3)^3 + k_2(x+1)(x+3)^3 + k_3(x+1)(x+2)(x+3)^2 + k_4(x+1)(x+2)(x+3) + k_5(x+1)(x+2)
\]
\[
= k_1(x^4 + 11x^3 + 45x^2 + 81x + 54) + k_2(x^4 + 10x^3 + 36x^2 + 54x + 27)
+ k_3(x^4 + 9x^3 + 29x^2 + 39x + 18) + k_4(x^3 + 6x^2 + 11x + 6) + k_5(x^2 + 3x + 2)
\]
What you should immediately notice, or at least be aware of, is that there is a term
(k1 + k2 + k3 )x4 on the “partial fraction side”. But there is no x4 term in the original
rational polynomial! What do we do? Well, equate it as necessary, i.e. there is actually
an x4 term in the original polynomial, it’s just that its coefficient is zero, and this is what
we equate it to. So it may be necessary to introduce “zero coefficients” in the b vector
discussed in the theory above.
\[
\begin{aligned}
k_1 + k_2 + k_3 &= 0 && (x^4)\\
11k_1 + 10k_2 + 9k_3 + k_4 &= 1 && (x^3)\\
45k_1 + 36k_2 + 29k_3 + 6k_4 + k_5 &= 3 && (x^2)\\
81k_1 + 54k_2 + 39k_3 + 11k_4 + 3k_5 &= 4 && (x^1)\\
54k_1 + 27k_2 + 18k_3 + 6k_4 + 2k_5 &= 6 && (x^0)
\end{aligned}
\]
Try to solve this system of equations manually yourself, or in matrix form. Here, the solution is obtained by converting the equations into matrix form:
\[
\begin{bmatrix}
1 & 1 & 1 & 0 & 0 \\
11 & 10 & 9 & 1 & 0 \\
45 & 36 & 29 & 6 & 1 \\
81 & 54 & 39 & 11 & 3 \\
54 & 27 & 18 & 6 & 2
\end{bmatrix}
\begin{bmatrix} k_1 \\ k_2 \\ k_3 \\ k_4 \\ k_5 \end{bmatrix}
=
\begin{bmatrix} 0 \\ 1 \\ 3 \\ 4 \\ 6 \end{bmatrix}
\]
Solving gives k1 = 1/2, k2 = −2, k3 = 3/2, k4 = 2 and k5 = −3.
The MatLab commands below show how this is solved:

>> A = [1 1 1 0 0; 11 10 9 1 0; 45 36 29 6 1; 81 54 39 11 3; 54 27 18 6 2];
>> b = [0;1;3;4;6];
>> k = linsolve(A,b)

k =

    0.5000
   -2.0000
    1.5000
    2.0000
   -3.0000

Therefore the decomposition is
\[
F(x) = \frac{1/2}{(x+1)} - \frac{2}{(x+2)} + \frac{3/2}{(x+3)} + \frac{2}{(x+3)^2} - \frac{3}{(x+3)^3}
\]
Although very useful when you have (or are allowed) access to a computer and MatLab, this method has the major drawback of being incredibly tedious by hand. However, as mentioned, it is the most robust, since the method constructs a system of equations that can be solved with matrix algebra to find the coefficients, even for complex-valued coefficients and entries in general.
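It is worth knowing that MatLab also provides residue, which automates the whole decomposition. The sketch below applies it to Example 2.2 (the output ordering can vary; for a repeated pole, successive residues correspond to increasing powers of the factor):

% Automated partial fraction decomposition with residue.
p = [1 3 4 6];                                      % numerator x^3 + 3x^2 + 4x + 6
q = conv(conv([1 1],[1 2]), conv([1 3], conv([1 3],[1 3])));  % (x+1)(x+2)(x+3)^3
[r, poles, k] = residue(p, q)
% The residues reproduce k1...k5 found above: 1/2 at the pole -1, -2 at -2,
% and 3/2, 2, -3 for the increasing powers of (x+3).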
Heaviside's "Cover-up" method is a quick and convenient way of finding the coefficients of a partial fraction decomposition. However, there is a major drawback: this method is not capable of fully resolving the partial fraction coefficients if there are repeated factors in q(x). There is a workaround, which is discussed later. For now, the basic method is discussed.
Recall the fundamental principle of partial fraction decomposition, however, consider the case
where the factors in q(x) are unique (i.e. unrepeated),
\[
F(x) = \frac{b_m x^m + b_{m-1}x^{m-1} + \cdots + b_1x + b_0}{x^n + a_{n-1}x^{n-1} + \cdots + a_1x + a_0} \tag{B.17}
\]
\[
= \frac{p(x)}{(x-\lambda_1)(x-\lambda_2)\cdots(x-\lambda_n)} \tag{B.18}
\]
\[
\Rightarrow\; F(x) = \frac{k_1}{(x-\lambda_1)} + \frac{k_2}{(x-\lambda_2)} + \cdots + \frac{k_n}{(x-\lambda_n)} \tag{B.19}
\]
Consider Equation B.19. If we multiply both sides of the equation by one of the factors, say (x − λ1), then we get
\[
(x-\lambda_1)F(x) = k_1 + \frac{k_2(x-\lambda_1)}{(x-\lambda_2)} + \cdots + \frac{k_n(x-\lambda_1)}{(x-\lambda_n)} \tag{B.20}
\]
Now consider the right-hand side. If we let x → λ1 , then all the terms with (x − λ1 ) vanish,
except for the only term that does not have the (x − λ1 ) factor, i.e. k1 . Similarly on the
left-hand side. We are explicitly evaluating
\[
\lim_{x\to\lambda_1}(x-\lambda_1)F(x)
= \lim_{x\to\lambda_1}(x-\lambda_1)\frac{p(x)}{(x-\lambda_1)(x-\lambda_2)\cdots(x-\lambda_n)}
= \lim_{x\to\lambda_1}\frac{p(x)}{(x-\lambda_2)\cdots(x-\lambda_n)}
= \frac{p(\lambda_1)}{(\lambda_1-\lambda_2)\cdots(\lambda_1-\lambda_n)} = k_1
\]
This is the principle of Heaviside's Cover-up method: the factor (x − λ1) is covered up (removed) and the remaining expression of F(x) is evaluated at x = λ1. This is then generalised to all (unrepeated) factors,
\[
k_i = \lim_{x\to\lambda_i}(x-\lambda_i)F(x) = \frac{p(\lambda_i)}{\prod_{j\neq i}(\lambda_i-\lambda_j)}
\]
Example 2.3
Find the partial fraction decomposition of F(x) below, using Heaviside's Cover-up method:
\[
F(x) = \frac{3x^2 + 4x + 6}{(x+1)(x+2)(x+3)}
\]
Solution:
Covering up the factor (x + 1) and evaluating the remaining expression at x = −1 gives
\[
k_1 = \frac{3(-1)^2 + 4(-1) + 6}{((-1)+2)((-1)+3)} = \frac{5}{2}
\]
For k2 we use the same conceptual approach, covering up (x + 2) and substituting the value of x that makes that factor zero, i.e.
\[
k_2 = \frac{3(-2)^2 + 4(-2) + 6}{((-2)+1)((-2)+3)} = \frac{12 - 8 + 6}{(-1)(1)} = -10
\]
\[
k_3 = \frac{3(-3)^2 + 4(-3) + 6}{((-3)+1)((-3)+2)} = \frac{21}{2}
\]
Therefore
\[
F(x) = \frac{5/2}{(x+1)} - \frac{10}{(x+2)} + \frac{21/2}{(x+3)}
\]
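A one-line cross-check of this example with residue (pole ordering may vary):

% Cross-check of Example 2.3.
[r, p] = residue([3 4 6], conv([1 1], conv([1 2],[1 3])))
% residues 21/2, -10 and 5/2 at the poles -3, -2 and -1 respectively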
This section deals with the circumstance where one cannot factorise the denominator q(x) of a
rational polynomial into simple linear (or repeated linear) terms. This can lead to a quadratic
factor that has no real roots. There are two techniques for handling this. The first simply
leaves the factor “as is” and works with it, the second follows the generalised approach and
utilises complex-valued roots to find the generalised linear complex-valued factors.
Quadratic factors
Sometimes the denominator polynomial q(x) of a rational function contains quadratic factors, e.g.
\[
F(x) = \frac{p(x)}{q(x)} = \frac{x^2 + x - 36}{(x+1)(x^2 + 4x + 15)}
\]
The factor (x² + 4x + 15) has no real roots and thus cannot be expressed as (x − λ1)(x − λ2) with λ1, λ2 ∈ ℝ. The theory of how this is resolved will not be covered here, but it is conceptually and implicitly covered later in this Section B.2.3. Here a simple example is given of how to approach a situation where a quadratic factor appears in q(x).
The general approach to any quadratic factor is to leave it in a proper rational function form, i.e.
\[
F(x) = \frac{b_m x^m + b_{m-1}x^{m-1} + \cdots + b_1x + b_0}{(x-\lambda_1)(x-\lambda_2)\cdots(x-\lambda_r)(ax^2+bx+c)}
= \frac{k_1}{(x-\lambda_1)} + \frac{k_2}{(x-\lambda_2)} + \cdots + \frac{k_r}{(x-\lambda_r)} + \frac{d_0 + d_1x}{(ax^2+bx+c)}
\]
How a quadratic factor is handled is simply to introduce a reduced rational function. Therefore,
for the quadratic factor (ax2 + bx + c), a linear term (d0 + d1 x) is introduced as its partial
fraction numerator. Unfortunately, the coefficients d0 and d1 cannot be as easily resolved
using Heaviside’s Cover-up method, but the remaining ones can (if they are not coefficients of
repeated factors!). Let’s see an example of how this is solved.
Example 2.4
Find the partial fraction decomposition of F(x),
\[
F(x) = \frac{x^2 + x - 36}{(x+1)(x^2 + 4x + 15)}
\]
Solution:
We know that the partial fraction should have the following form:
\[
F(x) = \frac{k_1}{(x+1)} + \frac{d_0 + d_1x}{(x^2 + 4x + 15)}
\]
The coefficient k1 can be found with the cover-up rule:
\[
k_1 = \frac{(-1)^2 + (-1) - 36}{(-1)^2 + 4(-1) + 15} = \frac{-36}{12} = -3
\]
Clearing fractions then gives
\[
x^2 + x - 36 = -3(x^2 + 4x + 15) + (d_0 + d_1x)(x+1)
\]
Conveniently (for this example) we can just read off the values for d1 and d0, and there is a built-in check too! Equating the x² coefficients gives 1 = −3 + d1, so d1 = 4; equating the constant terms gives −36 = −45 + d0, so d0 = 9. The x term provides the check, since it requires d1 + d0 = 13, and 4 + 9 does indeed equal 13 as required. Therefore
\[
F(x) = \frac{-3}{(x+1)} + \frac{9 + 4x}{(x^2 + 4x + 15)}
\]
There are generalisations that follow from the quadratic factor. These are covered later in this section.
As mentioned in subsection B.2.1, the clearing fractions method works for a generalised rational polynomial, even with complex variables. We now explore this fact, along with the cover-up rule, and apply it to quadratic factors. Say for a moment there is a unique and non-repeated quadratic factor in a rational polynomial denominator q(x), i.e.
\[
F(x) = \frac{b_m x^m + b_{m-1}x^{m-1} + \cdots + b_1x + b_0}{(x-\lambda_1)^{r_1}(x-\lambda_2)^{r_2}\cdots(x-\lambda_i)^{r_i}(ax^2+bx+c)} \tag{B.22}
\]
Note, only the quadratic factor is specified to be unique and non-repeated; the other linear factors may be repeated, and the form is generalised. As before, this simply decomposes into the following:
\[
= \frac{k_1}{(x-\lambda_1)^{r_1}} + \frac{k_2}{(x-\lambda_2)^{r_2}} + \cdots + \frac{k_r}{(x-\lambda_i)^{r_i}} + \frac{d_0 + d_1x}{(ax^2+bx+c)} \tag{B.24}
\]
But by the fundamental theorem of algebra [4, 5], we know that ax² + bx + c can be factorised, though it may have complex roots. Assuming that ax² + bx + c has no real roots (which is the case of interest here), it will necessarily have a pair of complex conjugate roots. These can be found using the ordinary quadratic formula
\[
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \tag{B.25}
\]
Remember, the roots are complex-valued if the discriminant is negative, i.e. if 4ac > b². Let us try an example.
Example 2.5
Find the partial fraction decomposition of F(x),
\[
F(x) = \frac{5x^3 - 6x^2 + 3x - 7}{(x-1)^2(x^2 + 2x + 2)}
\]
Solution:
Let's find the linear factors of the quadratic. This can be done by simply using the quadratic formula (B.25):
\[
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
= \frac{-2 \pm \sqrt{(2)^2 - 4(1)(2)}}{2(1)}
= \frac{-2 \pm \sqrt{-4}}{2}
= \frac{-2 \pm 2j}{2}
= -1 \pm j
\]
We know that the partial fraction should then have the following form:
\[
F(x) = \frac{k_1}{(x-1)} + \frac{k_2}{(x-1)^2} + \frac{k_3}{(x+1+j)} + \frac{k_4}{(x+1-j)}
\]
We can use Heaviside's rule to find k3 and k4, since they are not repeated roots. For k3, covering up (x + 1 + j) and evaluating at x = −1 − j gives
\[
k_3 = \frac{5(-1-j)^3 - 6(-1-j)^2 + 3(-1-j) - 7}{((-1-j)-1)^2((-1-j)+1-j)} = \frac{-25j}{8-6j} = \frac{3}{2} - 2j
\]
We can use the exact same method to find k4; however, another convenient shortcut is that, since the roots of the quadratic factor are conjugates, k4 is also the conjugate of k3. Verify this yourself by solving for k4 explicitly.
So far we have
\[
F(x) = \frac{k_1}{(x-1)} + \frac{k_2}{(x-1)^2} + \frac{3/2 - 2j}{(x+1+j)} + \frac{3/2 + 2j}{(x+1-j)}
\]
It may be of use to combine the complex partial fractions in order to solve for the other terms. In this case, it is; we also remember that we have the original quadratic factor available:
\[
\frac{3/2 - 2j}{(x+1+j)} + \frac{3/2 + 2j}{(x+1-j)}
= \frac{(3/2 - 2j)(x+1-j) + (3/2 + 2j)(x+1+j)}{(x^2+2x+2)}
\]
\[
= \frac{\left(\tfrac{3}{2}x + \tfrac{3}{2} - \tfrac{3}{2}j - 2jx - 2j - 2\right) + \left(\tfrac{3}{2}x + \tfrac{3}{2} + \tfrac{3}{2}j + 2jx + 2j - 2\right)}{(x^2+2x+2)}
= \frac{3x + 3 - 4}{(x^2+2x+2)}
= \frac{3x - 1}{(x^2+2x+2)}
\]
We can find the coefficients k1 and k2 using the clearing fractions method:
\[
5x^3 - 6x^2 + 3x - 7 = k_1(x-1)(x^2+2x+2) + k_2(x^2+2x+2) + (3x-1)(x-1)^2
\]
As before, we can at least read off the value of k1 = 2 by equating the x³ terms (5 = k1 + 3). This leaves only k2, and we can simply pick the easiest equation to solve for it. Let's use the constant terms: −7 = −2k1 + 2k2 − 1, i.e. −6 = −2k1 + 2k2. Solving with k1 = 2, we find k2 = −1. Therefore
\[
F(x) = \frac{2}{(x-1)} - \frac{1}{(x-1)^2} + \frac{3x - 1}{(x^2+2x+2)}
\]
Various intuitive tricks can be used to solve partial fractions more easily. These are, by nature, based on intuition; however, two general approaches are dominant: substitution and limits. In some circumstances we can eliminate variables by substituting a convenient value of x into the partial fraction equation. Likewise, if we use limits intelligently, we can achieve a similar result. This is best demonstrated using examples.
Example 2.6
Find the partial fraction decomposition of
\[
F(x) = \frac{x^2 - 8x - 3}{(x-1)(x^2 + 2x + 2)}
\]
Solution:
The form of the partial fraction decomposition is
\[
F(x) = \frac{k_1}{(x-1)} + \frac{d_0 + d_1x}{(x^2 + 2x + 2)}
\]
The coefficient k1 follows from the cover-up rule: k1 = (1 − 8 − 3)/(1 + 2 + 2) = −2.
Substitution
Instead of complex factors and clearing-fractions methods, we can recognise that the unknown d0 is not attached to a factor of x. We can then use the value x = 0 to extract it from the partial fraction and the rational polynomial, i.e.
\[
F(0) = \frac{-3}{(-1)(2)} = \frac{3}{2} = \frac{k_1}{-1} + \frac{d_0}{2} = 2 + \frac{d_0}{2}
\quad\Rightarrow\quad d_0 = -1
\]
Limits
To find d1 we can use a sneaky technique, by recognising that d1 is attached to a factor of x while d0 is not. This is considered in conjunction with the order of the polynomial factor x² + 2x + 2 and the orders of the original rational function: p(x) is of order 2 and q(x) of order 3. We can multiply both sides of the equation by x (thereby making the orders of the numerator and denominator equal) and let x → ∞. This is helpful because, when evaluating these limits, only the highest-order terms are of significance; the value of d0 can be ignored.
\[
\lim_{x\to\infty} xF(x) = \lim_{x\to\infty}\frac{x(x^2-8x-3)}{(x-1)(x^2+2x+2)}
= \lim_{x\to\infty}\left[\frac{k_1 x}{(x-1)} + \frac{x(d_0 + d_1x)}{(x^2+2x+2)}\right]
\]
\[
\lim_{x\to\infty}\frac{1 - 8/x - 3/x^2}{1 + 1/x - 2/x^3}
= \lim_{x\to\infty}\left[\frac{k_1}{(1 - 1/x)} + \frac{d_0/x + d_1}{(1 + 2/x + 2/x^2)}\right]
\]
\[
\frac{1 - 8(0) - 3(0)}{1 + 1(0) - 2(0)} = \frac{k_1}{1 - 1(0)} + \frac{d_0(0) + d_1}{1 + 2(0) + 2(0)}
\quad\Rightarrow\quad 1 = k_1 + d_1
\]
With k1 = −2, this gives d1 = 3.
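Both tricks can be mirrored symbolically (a sketch assuming the Symbolic Math Toolbox):

% Symbolic versions of the substitution and limit tricks above.
syms x
F = (x^2 - 8*x - 3)/((x - 1)*(x^2 + 2*x + 2));
limit(x*F, x, inf)   % -> 1, i.e. k1 + d1 = 1, giving d1 = 3 with k1 = -2
subs(F, x, 0)        % -> 3/2, the substitution used above to find d0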
There have been numerous examples given above. Although each handled a specific method, nothing limits you from using them in conjunction with one another, utilising each of their strengths when convenient.
There are, however, some advanced methods, built on those above, to handle more difficult partial fraction decompositions. These methods are primarily used to handle repeated roots. One such technique is the "hybrid" approach just described, using each technique where convenient. There is another alternative, though not necessarily an easier or faster one, depending on the complexity of the partial fraction. It requires some theory, though it builds directly on the theory of Heaviside's cover-up rule.
For a repeated factor in q(x), we have already discussed what the form of the partial fraction will be. Consider for a moment a single factor which is repeated:
\[
F(x) = \frac{p(x)}{(x-\lambda_1)^r(x-\lambda_2)\cdots(x-\lambda_i)} \tag{B.26}
\]
Let us approach solving this as usual with Heaviside's cover-up method, by multiplying both sides by the repeated factor. However, we need to multiply by the highest power of the factor, as the intention is to substitute x = λ1, or at least to have x → λ1. If we do not multiply by (x − λ1)^r, then some fractions will still contain at least one factor of (x − λ1) in the denominator, and the limit will not exist, since this would result in division by zero. Consider then the following:
\[
(x-\lambda_1)^r F(x) = \frac{p(x)}{(x-\lambda_2)\cdots(x-\lambda_i)}
= a_0 + a_1(x-\lambda_1) + \cdots + a_{r-1}(x-\lambda_1)^{r-1}
+ \frac{k_2(x-\lambda_1)^r}{(x-\lambda_2)} + \cdots + \frac{k_i(x-\lambda_1)^r}{(x-\lambda_i)} \tag{B.28}
\]
Therefore the limit x → λ1 can be taken for both forms of (x − λ1)^r F(x). Since the only term in the partial fraction expansion that does not carry a factor of (x − λ1) is the a0 term, we get
\[
a_0 = \lim_{x\to\lambda_1}(x-\lambda_1)^r F(x)
\qquad\therefore\quad
a_0 = \frac{p(\lambda_1)}{(\lambda_1-\lambda_2)\cdots(\lambda_1-\lambda_i)}
\]
as before. However, this has only evaluated the a0 coefficient; there are still r − 1 more coefficients to evaluate for this family of partial fractions.
There is, however, a very useful workaround to this problem. Consider Equation B.28 again. If we take the derivative of this equation w.r.t. x, applying the chain rule and quotient rule as necessary, we get
"
d d
(x − λ1 )r F (x) = a0 + a1 (x − λ1 ) + · · · + ar−1 (x − λ1 )r−1
dx dx
#
k1 (x − λ1 )r ki (x − λ1 )r
+ + ··· + (B.30)
(x − λ2 ) (x − λi )
d
(x − λ1 )r F (x) = a1 + a2 (2(x − λ1 )1 ) + a3 (3(x − λ1 )2 ) + · · · + ar−1 ((r − 1)(x − λ1 )r−2 )
dx
(r(x − λ1 )r−1 (x − λ2 )) − (x − λ1 )r (1)
+ k1 + ···
(x − λ2 )2
(r(x − λ1 )r−1 (x − λi )) − (x − λ1 )r (1)
+ ki
(x − λi )2
(B.31)
Notice that every term except the a1 term has at least one factor of (x − λ1). But this is the same logic as used for the ordinary Heaviside Cover-up rule! All we need to do is take the limit x → λ1 of this derivative; then all that is left is the a1 term, which has thereby been extracted. Thus, we have
\[
a_1 = \lim_{x\to\lambda_1}\frac{d}{dx}\left[(x-\lambda_1)^r F(x)\right]
= \left.\frac{d}{dx}\left[(x-\lambda_1)^r F(x)\right]\right|_{x=\lambda_1}
\]
This can be repeated up to the a_{r−1} term, and we have the modified Heaviside's limit
\[
a_i = \frac{1}{i!}\left.\frac{d^i}{dx^i}\left[(x-\lambda_1)^r F(x)\right]\right|_{x=\lambda_1} \tag{B.32}
\]
In other words, to find the coefficients of a repeated factor in q(x), one must evaluate the Heaviside expression (x − λ1)^r F(x), which is the expression left after "covering up" the repeated factor. To find the coefficient a0 of the r-th power partial fraction, simply substitute x = λ1. For the subsequent a1 coefficient, differentiate the Heaviside expression (x − λ1)^r F(x) once w.r.t. x and substitute x = λ1 again. For the higher-index coefficients we also need to divide by the factorial of the index, because each differentiation of a (x − λ1)^i term brings down its current power: after i differentiations the coefficient ai has been multiplied by i!, which must be divided out again. Let's see this in action.
Example 2.7
Find the partial fraction decomposition of
\[
F(x) = \frac{-3x^5 + 6x^4 + 9x^3 + 7x^2 - 2x - 1}{(x-1)^3(x+1)^3}
\]
Solution:
We know the partial fraction should be of the form
\[
F(x) = \frac{a_0}{(x-1)^3} + \frac{a_1}{(x-1)^2} + \frac{a_2}{(x-1)} + \frac{b_0}{(x+1)^3} + \frac{b_1}{(x+1)^2} + \frac{b_2}{(x+1)}
\]
Covering up (x − 1)³ and substituting x = 1 gives
\[
a_0 = \frac{-3(1)^5 + 6(1)^4 + 9(1)^3 + 7(1)^2 - 2(1) - 1}{((1)+1)^3} = \frac{16}{8} = 2
\]
and
\[
b_0 = \frac{-3(-1)^5 + 6(-1)^4 + 9(-1)^3 + 7(-1)^2 - 2(-1) - 1}{((-1)-1)^3}
= \frac{3 + 6 - 9 + 7 + 2 - 1}{(-2)^3} = \frac{8}{-8} = -1
\]
So far we have
\[
F(x) = \frac{2}{(x-1)^3} + \frac{a_1}{(x-1)^2} + \frac{a_2}{(x-1)} - \frac{1}{(x+1)^3} + \frac{b_1}{(x+1)^2} + \frac{b_2}{(x+1)}
\]
To find a1 we differentiate the covered-up expression (using the quotient rule; the remaining derivatives are left as an exercise for the reader), then substitute x = 1:
\[
a_1 = \left.\frac{d}{dx}\left[\frac{p(x)}{(x+1)^3}\right]\right|_{x=1}
= \frac{p'(1)(2)^3 - 3(2)^2 p(1)}{(2)^6} = \frac{384 - 192}{64} = 3
\]
What is obvious is that solving these derivatives is tedious, and it becomes more difficult as more derivatives are needed for higher orders of the repeated factor. It is, however, helpful to use the first or second derivative to at least reduce the number of unknowns when there are many repeated roots across different factors. Even so, this approach is recommended only as a last resort, and only if absolutely necessary.
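Equation B.32 is, on the other hand, very easy to automate. The sketch below (Symbolic Math Toolbox) evaluates a0, a1 and a2 for the repeated factor (x − 1)³ of Example 2.7:

% Modified Heaviside limits of Equation B.32, done symbolically.
syms x
px = -3*x^5 + 6*x^4 + 9*x^3 + 7*x^2 - 2*x - 1;
F  = px/((x - 1)^3*(x + 1)^3);
H  = simplify((x - 1)^3*F);                   % the covered-up expression p(x)/(x+1)^3
a0 = subs(H, x, 1)                            % -> 2
a1 = subs(diff(H, x), x, 1)                   % -> 3
a2 = subs(diff(H, x, 2), x, 1)/factorial(2)   % -> -1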
Sometimes you may come across the following scenario for finding a partial fraction form:
\[
F(x) = \frac{p(x)}{(a_1x^2 + b_1x + c_1)^{r_1}(a_2x^2 + b_2x + c_2)^{r_2}\cdots(a_ix^2 + b_ix + c_i)^{r_i}} \tag{B.34}
\]
This is essentially a repeated quadratic factor. In theory this is approached in much the same way as above, in that it decomposes into the following form:
\[
F(x) = \frac{A_0 + A_1x}{(a_1x^2 + b_1x + c_1)^{r_1}} + \frac{A_2 + A_3x}{(a_1x^2 + b_1x + c_1)^{r_1-1}} + \cdots + \frac{A_{2r_1-2} + A_{2r_1-1}x}{(a_1x^2 + b_1x + c_1)} + \cdots \tag{B.35}
\]
This is quite difficult to evaluate: the most robust approach is clearing fractions, or alternatively Heaviside's derivative approach, and both will be exceptionally tedious here, though clearing fractions is the more direct of the two. There is another approach which, while it will not necessarily make the repeated-factor problem easier, can reduce the necessary calculations: representing the quadratic factors as complex-valued linear factors. The intention here is simply to present the situation and suggest different ways to approach the problem.
This can also be generalised to a factor of any order that is left un-factorised. Say you have an r-th degree polynomial factor in q(x) that you wish to keep in real, un-factorised form. The following applies:
\[
F(x) = \frac{p(x)}{(\alpha_rx^r + \alpha_{r-1}x^{r-1} + \cdots + \alpha_1x + \alpha_0)(x-\lambda_1)\cdots}
\]
Whatever the order of the factor that is kept in un-factorised form, the corresponding partial fraction numerator will be a polynomial of one degree less than that factor. If the factor is repeated, then there will be multiple partial fractions, each having the degree relationship described, with the factor's power reducing down to one, as has been seen in the simpler examples. The "only" way this can be solved efficiently is with the clearing fractions method.
This can also be flipped on its head, in that we may wish to extract a single linear factor out of q(x) as a partial fraction, i.e.
\[
F(x) = \frac{p_0(x)}{q_0(x)} = \frac{p_0(x)}{(x-\lambda_1)q_1(x)} = \frac{k_1}{(x-\lambda_1)} + \frac{p_1(x)}{q_1(x)}
\]
This is a primitive form of the concepts behind partial fraction decomposition: the method of decomposing a rational polynomial is to partially extract a fraction whose denominator is a factor of the rational polynomial's denominator. If you repeat this process, you eventually fully decompose the fraction into its partial fraction constituents.
Sometimes it is desired to have a partial fraction decomposition in the form kx/(x − λi)^r, rather than the typical k/(x − λi)^r. This is easily achieved by dividing the rational polynomial F(x) by x, i.e.
\[
F(x) = \frac{p(x)}{q(x)} \;\Rightarrow\; \frac{F(x)}{x} = \frac{p(x)}{x\,q(x)}
\]
Then find the partial fraction expansion of F(x)/x,
\[
\frac{F(x)}{x} = \frac{p(x)}{x\,q(x)}
= \frac{p(x)}{x(x-\lambda_1)^{r_1}(x-\lambda_2)^{r_2}\cdots(x-\lambda_i)^{r_i}}
= \frac{k}{x} + \frac{a_0}{(x-\lambda_1)^{r_1}} + \cdots + \frac{b_0}{(x-\lambda_2)^{r_2}} + \cdots + \frac{k_0}{(x-\lambda_i)^{r_i}} + \cdots
\]
Once this is found, the modified form is obtained by simply multiplying by x, thereby restoring the expression back to F(x):
\[
F(x) = k + \frac{a_0x}{(x-\lambda_1)^{r_1}} + \cdots + \frac{b_0x}{(x-\lambda_2)^{r_2}} + \cdots + \frac{k_0x}{(x-\lambda_i)^{r_i}} + \cdots
\]
References
[1] B. P. Lathi, Signal Processing and Linear Systems. New York: Oxford University Press,
2010, isbn: 9780195392579.
[2] R. Burns, Advanced Control Engineering. Oxford Boston: Butterworth-Heinemann, 2001,
isbn: 0750651008.
[3] N. Nise, Control Systems Engineering, 8th (EMEA) Edition. Hoboken, NJ: John Wiley &
Sons, Inc, 2019, isbn: 9781119590132.
[4] D. Zill, A first course in complex analysis with applications. Boston: Jones and Bartlett,
2003, isbn: 0763714372.
[5] D. G. Zill, Advanced Engineering Mathematics. Jones & Bartlett Publishers, Sep. 2016, isbn: 1284105903.
NOTE!
The pages are arranged in such a way to be physically removed for the convenience of use
and reference (take care while doing so). It is also recommended that these are copied and
laminated for further wear-and-tear protection.
C. Laplace Tables
Table C.1: Laplace transform pairs, f(t) ↔ F(s):

u(t) ↔ 1/s                                  δ(t) ↔ 1
a u(t) ↔ a/s                                a δ(t) ↔ a
u(t − t0) ↔ e^{−t0 s}/s, t0 ≥ 0             δ(t − t0) ↔ e^{−t0 s}, t0 ≥ 0
t u(t) ↔ 1/s²                               e^{at} u(t) ↔ 1/(s − a)
t^n u(t), n = 0, 1, 2, ... ↔ n!/s^{n+1}     t e^{at} u(t) ↔ 1/(s − a)²
t^p u(t), p > −1 ↔ Γ(p+1)/s^{p+1}           t^n e^{at} u(t) ↔ n!/(s − a)^{n+1}
√t u(t) ↔ √π/(2 s^{3/2})                    t^{n−1/2} u(t), n = 1, 2, 3, ... ↔ 1·3·5···(2n−1)√π/(2^n s^{n+1/2})
(1/√t) u(t) ↔ √(π/s)
sin(ωt) u(t) ↔ ω/(s² + ω²)                  cos(ωt) u(t) ↔ s/(s² + ω²)
sinh(ωt) u(t) ↔ ω/(s² − ω²)                 cosh(ωt) u(t) ↔ s/(s² − ω²)
e^{−at} sin(ωt) u(t) ↔ ω/((s+a)² + ω²)      e^{−at} cos(ωt) u(t) ↔ (s+a)/((s+a)² + ω²)
e^{−at} sinh(ωt) u(t) ↔ ω/((s+a)² − ω²)     e^{−at} cosh(ωt) u(t) ↔ (s+a)/((s+a)² − ω²)
t sin(ωt) u(t) ↔ 2ωs/(s² + ω²)²             t cos(ωt) u(t) ↔ (s² − ω²)/(s² + ω²)²
t sinh(ωt) u(t) ↔ 2ωs/(s² − ω²)²            t cosh(ωt) u(t) ↔ (s² + ω²)/(s² − ω²)²
sin(ωt + φ) u(t) ↔ (s sin φ + ω cos φ)/(s² + ω²)
cos(ωt + φ) u(t) ↔ (s cos φ − ω sin φ)/(s² + ω²)
sinh(ωt + φ) u(t) ↔ (s sinh φ + ω cosh φ)/(s² − ω²)
cosh(ωt + φ) u(t) ↔ (s cosh φ + ω sinh φ)/(s² − ω²)
t sin(ωt + φ) u(t) ↔ [(s² − ω²) sin φ + 2ωs cos φ]/(s² + ω²)²
t cos(ωt + φ) u(t) ↔ [(s² − ω²) cos φ − 2ωs sin φ]/(s² + ω²)²
t sinh(ωt + φ) u(t) ↔ [(s² + ω²) sinh φ + 2ωs cosh φ]/(s² − ω²)²
t cosh(ωt + φ) u(t) ↔ [(s² + ω²) cosh φ + 2ωs sinh φ]/(s² − ω²)²
e^{−at} sin(ωt + φ) u(t) ↔ [(s+a) sin φ + ω cos φ]/((s+a)² + ω²)
e^{−at} cos(ωt + φ) u(t) ↔ [(s+a) cos φ − ω sin φ]/((s+a)² + ω²)
e^{−at} sinh(ωt + φ) u(t) ↔ [(s+a) sinh φ + ω cosh φ]/((s+a)² − ω²)
e^{−at} cosh(ωt + φ) u(t) ↔ [(s+a) cosh φ + ω sinh φ]/((s+a)² − ω²)
t e^{−at} sin(ωt + φ) u(t) ↔ [((s+a)² − ω²) sin φ + 2ω(s+a) cos φ]/((s+a)² + ω²)²
t e^{−at} cos(ωt + φ) u(t) ↔ [((s+a)² − ω²) cos φ − 2ω(s+a) sin φ]/((s+a)² + ω²)²
t e^{−at} sinh(ωt + φ) u(t) ↔ [((s+a)² + ω²) sinh φ + 2ω(s+a) cosh φ]/((s+a)² − ω²)²
t e^{−at} cosh(ωt + φ) u(t) ↔ [((s+a)² + ω²) cosh φ + 2ω(s+a) sinh φ]/((s+a)² − ω²)²
r e^{−at} cos(ωt + φ) u(t) ↔ (As + B)/(s² + 2as + c),
    with r = √[(A²c + B² − 2ABa)/(c − a²)], ω = √(c − a²), φ = arctan[(Aa − B)/(A√(c − a²))]
e^{−at}[A cos(ωt) + ((B − Aa)/ω) sin(ωt)] u(t) ↔ (As + B)/(s² + 2as + c), with ω = √(c − a²)
f(t) periodic with period T ↔ [1/(1 − e^{−Ts})] ∫₀ᵀ e^{−sτ} f(τ) dτ
scalability additivity