Fundamentals of Linear Control
A Concise Approach
Taking a different approach from standard thousand-page reference-style control
textbooks, Fundamentals of Linear Control provides a concise yet comprehensive
introduction to the analysis and design of feedback control systems in fewer than 300
pages.
The text focuses on classical methods for dynamic linear systems in the frequency
domain. The treatment is, however, modern and the reader is kept aware of
contemporary tools and techniques, such as state-space methods and robust and
nonlinear control.
Featuring fully worked design examples, richly illustrated chapters, and an extensive
set of homework problems and examples spanning across the text for gradual challenge
and perspective, this textbook is an excellent choice for senior-level courses in systems
and control or as a complementary reference in introductory graduate-level courses.
The text is designed to appeal to a broad audience of engineers and scientists
interested in learning the main ideas behind feedback control theory.
A Concise Approach
MAURÍCIO C. DE OLIVEIRA
University of California, San Diego
University Printing House, Cambridge CB2 8BS, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
4843/24, 2nd Floor, Ansari Road, Daryaganj, Delhi - 110002, India
79 Anson Road, #06-04/06, Singapore 079906
www.cambridge.org
Information on this title: www.cambridge.org/9781107187528
DOI: 10.1017/9781316941409
A catalog record for this publication is available from the British Library.
Cambridge University Press has no responsibility for the persistence or accuracy of URLs
for external or third-party internet websites referred to in this publication, and does
not guarantee that any content on such websites is, or will remain, accurate or
appropriate.
To Beatriz and Victor
Contents
Preface
Overview
1 Introduction
1.1 Models and Experiments
1.2 Cautionary Note
1.3 A Control Problem
1.4 Solution without Feedback
1.5 Solution with Feedback
1.6 Sensitivity
1.7 Disturbances
Problems
2 Dynamic Systems
2.1 Dynamic Models
2.2 Block-Diagrams for Differential Equations
2.3 Dynamic Response
2.4 Experimental Dynamic Response
2.5 Dynamic Feedback Control
2.6 Nonlinear Models
2.7 Disturbance Rejection
2.8 Integral Action
Problems
3 Transfer-Function Models
3.1 The Laplace Transform
3.2 Linearity, Causality, and Time-Invariance
3.3 Differential Equations and Transfer-Functions
3.4 Integration and Residues
3.5 Rational Functions
3.6 Stability
3.7 Transient and Steady-State Response
3.8 Frequency Response
3.9 Norms of Signals and Systems
Problems
4 Feedback Analysis
4.1 Tracking, Sensitivity, and Integral Control
4.2 Stability and Transient Response
4.3 Integrator Wind-up
4.4 Feedback with Disturbances
4.5 Input-Disturbance Rejection
4.6 Measurement Noise
4.7 Pole–Zero Cancellations and Stability
Problems
6 Controller Design
6.1 Second-Order Systems
6.2 Derivative Action
6.3 Proportional–Integral–Derivative Control
6.4 Root-Locus
6.5 Control of the Simple Pendulum – Part I
Problems
7 Frequency Domain
7.1 Bode Plots
7.2 Non-Minimum-Phase Systems
7.3 Polar Plots
7.4 The Argument Principle
7.5 Stability in the Frequency Domain
7.6 Nyquist Stability Criterion
7.7 Stability Margins
7.8 Control of the Simple Pendulum – Part II
Problems
References
Index
Preface
The book you have in your hands grew out of a set of lecture notes scribbled down for
MAE 143B, the senior-level undergraduate Linear Control class offered by the
Department of Mechanical and Aerospace Engineering at the University of California,
San Diego.
The focus of the book is on classical methods for analysis and design of feedback
systems that take advantage of the powerful and insightful representation of dynamic
linear systems in the frequency domain. The required mathematics is introduced or
revisited as needed. In this way the text is made mostly self-contained, with accessory
work shifted occasionally to homework problems.
Key concepts such as tracking, disturbance rejection, stability, and robustness are
introduced early on and revisited throughout the text as the mathematical tools
become more sophisticated. Examples illustrate graphical design methods based on the
root-locus, Bode, and Nyquist diagrams. Whenever possible, without straying too much
from the classical narrative, the reader is made aware of contemporary tools and
techniques such as state-space methods, robust control, and nonlinear systems theory.
With so much to cover in the way of insightful engineering and relevant mathematics,
I tried to steer clear of the curse of the engineering systems and control textbook:
becoming a treatise with 1000 pages. The depth of the content exposed in fewer than
300 pages is the result of a compromise between my utopian goal of at most 100 pages
on the one hand and the usefulness of the work as a reference and, I hope,
inspirational textbook on the other. Let me know if you think I failed to deliver on this
promise.
I shall be forever indebted to the many students, teaching assistants, and colleagues
whose exposure to earlier versions of this work helped shape what I am finally not
afraid of calling the first edition. Special thanks are due to Professor Reinaldo Palhares,
who diligently read the original text and delighted me with an abundance of helpful
comments.
I would like to thank Sara Torenson from the UCSD Bookstore, who patiently worked
with me to make sure earlier versions were available as readers for UCSD students, and
Steven Elliot from Cambridge University Press for his support in getting this work to a
larger audience.
Maurício de Oliveira
San Diego, California
Overview
This book is designed to be used in a quarter- or semester-long senior-level
undergraduate linear control systems class. Readers are assumed to have had some
exposure to differential equations and complex numbers (good references are [BD12]
and [BC14]), and to have some familiarity with the engineering notion of signals and
systems (a standard reference is [Lat04]). It is also assumed that the reader has access
to a high-level software program, such as MATLAB, to perform calculations in many of
the homework problems. In order to keep the focus on the content, examples in the
book do not discuss MATLAB syntax or features. Instead, we provide supplementary
MATLAB files which can produce all calculations and figures appearing in the book.
These files can be downloaded from http://www.cambridge.org/deOliveira.
Chapters 1 and 2 provide a quick overview of the basic concepts in control, such as
feedback, tracking, dynamics, disturbance rejection, integral action, etc. Math is kept at
a very basic level and the topics are introduced with the help of familiar examples, such
as a simplistic model of a car and a toilet bowl.
Chapter 5 takes a slight detour from classic methods to introduce the reader to
state-space models. The focus is on practical questions, such as realization of dynamic
systems and controllers, linearization of nonlinear systems, and basic issues that arise
when using linear controllers with nonlinear systems. It is from this vantage point that
slightly more complex dynamic systems models are introduced, such as a simple
pendulum and a pendulum in a cart, as well as a simplified model of a steering car.
The simple pendulum model is used in subsequent chapters as the main illustrative
example.
Chapter 6 takes the reader back to the classic path with an emphasis on control
design. Having flirted with second-order systems many times before in the book, the
chapter starts by taking a closer look at the time-response of second-order systems and
associated performance metrics, followed by a brief discussion on derivative action and
the popular proportional–integral–derivative control. It then introduces the root-locus
method and applies it to the design of a controller with integral action to the simple
pendulum model introduced in the previous chapter.
Chapters 4 through 7 constitute the core material of the book. Chapters 5 and 7,
especially, offer many opportunities for instructors to select additional topics for
coverage in class or relegate to reading, such as discussions on nonlinear analysis and
control, a detailed presentation of the argument principle, and more unorthodox
topics such as non-minimum-phase systems and stability analysis of systems with
delays.
The more advanced material in Chapter 8 can be covered, time permitting, or may be
left just for the more interested reader without compromising a typical undergraduate
curriculum.
This book contains a total of almost 400 homework problems that appear at the end
of each chapter, with many problems spanning across chapters. Table I.1 on page xiv
provides an overview of select problems grouped by their motivating theme. Instructors
may choose to follow a few of these problems throughout the class. As mentioned
previously, many of the problems require students to use MATLAB or a similar
computer program. The supplementary MATLAB files provided with this book are a
great resource for readers who need to develop their programming skills to tackle these
problems.
[Table I.1 (select problems grouped by motivating theme, spanning Chapters 1 through 5) is not reproduced here.]
1 Introduction
We often represent the relationship between a system and its input and output
signals in the form of a block-diagram, such as the ones in Fig. 1.1 through Fig. 1.3. The
diagram in Fig. 1.1 indicates that a system, G, produces an output signal, y, in the
presence of the input signal, u. Block-diagrams will be used to represent the
interconnection of systems and even algorithms. For example, Fig. 1.2 depicts the
components and signals in a familiar controlled system, a water heater; the block-
diagram in Fig. 1.3 depicts an algorithm for converting temperature in degrees
Fahrenheit to degrees Celsius, in which the output of the circle in Fig. 1.3 is the
algebraic sum of the incoming signals with signs as indicated near the incoming arrows.
Figure 1.1 System represented as a block-diagram; u is the input signal; y is the output
signal; y and u are related through y = G(u), or simply y = G u.
Figure 1.2 Block-diagram of a controlled system: a gas water heater; the blocks
thermostat, burner, and tank, represent components or sub-systems; the arrows
represent the flow of input and output signals.
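The temperature-conversion algorithm of Fig. 1.3 illustrates how a block-diagram encodes a computation. A minimal sketch in Python (the function name is ours, and the block structure is inferred from the text, since the figure itself is not reproduced here):

```python
def fahrenheit_to_celsius(f):
    # Summing junction: form the algebraic sum f - 32
    e = f - 32
    # Gain block: scale the result by 5/9
    return e * 5 / 9

print(fahrenheit_to_celsius(212))  # -> 100.0
print(fahrenheit_to_celsius(32))   # -> 0.0
```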
Systems, signals, and models are often associated with concrete or abstract
experiments. A model reflects a particular setup in which the outputs appear correlated
with a prescribed set of inputs. For example, we might attempt to model a car by
performing the following experiment: on an unobstructed and level road, we depress
the accelerator pedal and let the car travel in a straight line. We keep the pedal
excursion constant and let the car reach constant velocity. We record the amount the
pedal has been depressed and the car’s terminal velocity. The results of this
experiment, repeated multiple times with different amounts of pedal excursion, might
look like the data shown in Fig. 1.4. In this experiment the signals are the pedal
excursion (the input, u) and the car's terminal velocity (the output, y).
The system is the car and the particular conditions of the experiment. The data
captures the fact that the car does not move at all for small pedal excursions and that
the terminal velocity saturates as the pedal reaches the end of its excursion range.
Figure 1.4 Experimental determination of the effect of pressing the gas pedal on the
car’s terminal velocity; the pedal excursion is the input signal, u, and the car’s terminal
velocity is the output signal, y.
From Fig. 1.4, one might try to fit a particular mathematical function to the
experimental data in hope of obtaining a mathematical model. In doing so, one
invariably loses something in the name of a simpler description. Such trade-offs are
commonplace in science, and it should be no different in the analysis and design of
control systems. Figure 1.5 shows the result of fitting a curve of the form
y = a arctan(b u),
where u is the input, pedal excursion in inches, and y is the output, terminal velocity in
mph. The two parameters, a and b, shown in Fig. 1.5 were obtained from a
standard least-squares fit. See also P1.11.
Figure 1.5 Fitting an arc-tangent curve to the data from Fig. 1.4.
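The fitting procedure can be sketched in a few lines of code. Python stands in here for the MATLAB files mentioned in the Overview; the data is synthetic, and the parameter names, their values, and the grid-search fit are illustrative assumptions rather than the book's actual data or method:

```python
import math

# Synthetic pedal-excursion (in) vs. terminal-velocity (mph) data generated
# from a hypothetical arc-tangent law y = a*atan(b*u); parameter values and
# data points are illustrative, not the book's experimental data
a_true, b_true = 100.0, 0.8
data = [(u, a_true * math.atan(b_true * u)) for u in (0.5, 1.0, 2.0, 3.0, 4.0)]

def sse(a, b):
    """Sum of squared fitting errors of the candidate model y = a*atan(b*u)."""
    return sum((y - a * math.atan(b * u)) ** 2 for u, y in data)

# A coarse grid search stands in for a proper least-squares routine
candidates = ((a, b) for a in range(50, 151)
                     for b in (k / 100 for k in range(10, 201)))
a_fit, b_fit = min(candidates, key=lambda p: sse(*p))
print(a_fit, b_fit)  # -> 100 0.8
```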
The choice of the above particular function involving the arc-tangent might seem
somewhat arbitrary. When possible, one should select candidate functions from first
principles derived from physics or other scientific reasoning, but this does not seem to
be easy to do in the case of the experiment we described. Detailed physical modeling of
the vehicle would involve knowledge and further modeling of the components of the
vehicle, not to mention the many uncertainties brought in by the environment, such as
wind, road conditions, temperature, etc. Instead, we make an “educated choice” based
on certain physical aspects of the experiment that we believe the model should
capture. In this case, from our daily experience with vehicles, we expect that the
terminal velocity will eventually saturate, either as one reaches full throttle or as a
result of limitations on the maximum power that can be delivered by the vehicle’s
powertrain. We also expect that the function be monotone, that is, the more you press
the pedal, the larger the terminal velocity will be. Our previous exposure to the
properties of the arc-tangent function and engineering intuition about the expected
outcome of the experiment allowed us to successfully select this function as a suitable
candidate for a model.
Other families of functions might suit the data in Fig. 1.5. For example, we could have
used polynomials, perhaps constrained to pass through the origin and ensure
monotonicity. One of the most useful classes of mathematical models one can consider
is that of linear models, which are, of course, first-order polynomials. One might be
tempted to equate linear with simple. Whether or not this might be true in some cases,
simplicity is far from a sin. More often than not, the loss of some feature neglected by a
linear model is offset by the availability of a much broader set of analytic tools. It is
better to know when you are wrong than to believe you are right. As the title suggests,
this book is mostly concerned with linear models. Speaking of linear models, one might
propose describing the data in Fig. 1.4 by a linear mathematical model of the form
y = F u, (1.1)
where the constant F is the slope of the line.
Figure 1.6 shows two such models (dashed lines). The slope coefficient of one curve
was obtained by performing a least-squares fit to all data points (see P1.11).
The slope coefficient of the other is a first-order approximation of the nonlinear
model calculated in Fig. 1.5 (see P1.12). Clearly, each model has its limitations in
describing the experiment. Moreover, one model might be better suited to describe
certain aspects of the experiment than the other. Responsibility rests with the engineer
or the scientist to select the model, or perhaps set of models, that better fits the
problem in hand, a task that at times may resemble an art more than a science.
Figure 1.6 Linear mathematical models of the form (1.1) for the data in Fig. 1.4
(dashed); one model was obtained by a least-squares fit; the other
was obtained after linearization of the nonlinear model (solid) obtained in
Fig. 1.5; see P1.12 and P1.11.
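The first-order approximation can be checked numerically. A sketch under illustrative assumptions (the arc-tangent parameters a and b below are ours, not the fitted values of Fig. 1.5): since the derivative of a*atan(b*u) with respect to u is a*b/(1 + (b*u)^2), the slope of the linearization around u = 0 is a*b.

```python
import math

a, b = 100.0, 0.8   # illustrative arc-tangent parameters (not the fitted ones)
F = a * b           # slope of the first-order approximation around u = 0

# For small pedal excursions the nonlinear and linearized models nearly agree
u = 0.1
nonlinear = a * math.atan(b * u)
linear = F * u
print(nonlinear, linear)
```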
1.2 Cautionary Note
It goes without saying that the mathematical models described in Section 1.1 do not
purport to capture every detail of the experiment, not to mention reality. Good models
are the ones that capture essential aspects that we perceive or can experimentally
validate as real, for example how the terminal velocity of a car responds to the
acceleration pedal in the given experimental conditions. A model does not even need
to be correct to be useful: for centuries humans used a model in which the sun
revolves around the earth to predict and control their days! What is important is that
models provide a way to express relevant aspects of reality using mathematics. When
mathematical models are used in control design, it is therefore with the understanding
that the model is bound to capture only a subset of features of the actual
phenomenon they represent. At no time should one be fooled into believing in a
model. The curious reader will appreciate [Fey86] and the amusingly provocative
[Tal07].
With this caveat in mind, it is useful to think of an idealized true or nominal model,
just as is done in physics, against which a particular setup can be mathematically
evaluated. This nominal model might even be different than the model used by a
particular control algorithm, for instance, having more details or being more complex
or more accurate. Of course physical evaluation of a control system with respect to the
underlying natural phenomenon is possible only by means of experimentation which
should also include the physical realization of the controller in the form of computer
hardware and software, electric circuits, and other necessary mechanical devices. We
will discuss in Chapter 5 how certain physical devices can be used to implement the
dynamic controllers you will learn to design in this book.
The models discussed so far have been static, meaning that the relationship between
inputs and outputs is instantaneous and is independent of the past history of the
system or their signals. Yet the main objective of this book is to work with dynamic
models, in which the relationship between present inputs and outputs may depend on
the present and past history of the signals.
With the goal of introducing the main ideas behind feedback control in a simpler
setup, we will continue to work with static models for the remainder of this chapter. In
the case of static models, a mathematical function or a set of algebraic equations will be
used to represent such relationships, as done in the models discussed just above in
Section 1.1.
Dynamic models will be considered starting in Chapter 2. In this book, signals will be
continuous functions of time, and dynamic models will be formulated with the help of
ordinary differential equations. As one might expect, experimental procedures that can
estimate the parameters of dynamic systems need to be much more sophisticated than
the ones discussed so far. A simple experimental procedure will be briefly discussed in
Section 2.4, but the interested reader is encouraged to consult one of the many
excellent works on this subject, e.g. [Lju99].
1.3 A Control Problem
Under the experimental conditions described in Section 1.1 and given a target
terminal velocity, , is it possible to design a system, the controller, that is able to
command the accelerator pedal of a car, the input, u, to produce a terminal velocity,
the output, y, equal to the target velocity?
An automatic system that can solve this problem is found in many modern cars, with
the name cruise controller. Of course, another system that is capable of solving the
same problem is a human driver. In this book we are mostly interested in solutions
that can be implemented as an automatic control, that is, which can be performed by
some combination of mechanical, electric, hydraulic, or pneumatic systems running
without human intervention, often being programmed in a digital computer or some
other logical circuit or calculator.
Problems such as this are referred to in the control literature as tracking problems:
the controller should make the system, a car, follow or track a given target output, the
desired terminal velocity. In the next sections we will discuss two possible approaches
to the cruise control problem.
1.4 Solution without Feedback
The role of the controller in tracking is to compute the input signal u which produces
the desired output signal y. One might therefore attempt to solve a tracking problem
using a system (controller) of the form
u = K(ȳ).
This controller can use only the reference signal, the target output, ȳ, and is said to be
in open-loop, as the controller output signal, u, is not a function of the system output
signal, y.
With the intent of analyzing the proposed solution using mathematical models,
assume that the car can be represented by a nominal model, say G, that relates the
input u (pedal excursion) to the output y (terminal velocity) through the mathematical
function
y = G(u).
The connection of the controller with this idealized model is depicted in the block-
diagram in Fig. 1.7. Here the function G can be obtained after fitting experimental data
as done in Figs. 1.5 and 1.6, or borrowed from physics or engineering science principles.
The block-diagram in Fig. 1.7 represents the following relationships:
y = G(u), u = K(ȳ).
When both the nominal model G and the controller K are linear,
y = G K ȳ,
from which y = ȳ only if the product of the constants K and G is equal to one:
K G = 1, that is, K = 1/G.
Because the control law K = 1/G relies on knowledge of the nominal model G to achieve its
goal, any imperfection in the model or in the implementation of the controller will lead
to less than perfect tracking.
Figure 1.7 Open-loop control: the controller, K, is a function of the reference input, ȳ,
but not a function of the system output, y.
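A quick numeric sketch of this fragility (the gain values are illustrative): design the open-loop gain from a nominal model, then evaluate tracking when the true system gain differs from the model.

```python
G_nominal = 10.0      # nominal model used for the open-loop design
K = 1 / G_nominal     # open-loop design rule: K * G_nominal = 1

# If the true system gain is 20% off the nominal model, the output is
# 20% off the target: open-loop tracking inherits every modeling error
for G_true in (8.0, 10.0, 12.0):
    print(G_true, G_true * K)   # ratio y/ybar achieved in open-loop
```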
1.5 Solution with Feedback
The controller in the open-loop solution considered in Section 1.4 is allowed to make
use only of the target output, ȳ. When a measurement, even if imprecise, of the
system output is available, one may benefit from allowing the controller to make use of
the measurement signal, y. In the case of the car cruise control, the terminal velocity, y,
can be measured by an on-board speedometer. Of course the target velocity, ȳ, is set
by the driver.
Controllers that make use of output signals to compute the control inputs are called
feedback controllers. In its most general form, a feedback controller has the functional
form
u = K(ȳ - y). (1.2)
This scheme is depicted in the block-diagram in Fig. 1.8. One should question whether it
is possible to implement a physical system that replicates the block-diagram in Fig. 1.8.
In this diagram, the measurement, y, that takes part in the computation of the control,
u, in the controller block, K, is the same as that which comes out of the system, G. In
other words, the signals flow in this diagram is instantaneous. Even though we are not
yet properly equipped to address this question, we anticipate that it will be possible to
construct and analyze implementable or realizable versions of the feedback diagram in
Fig. 1.8 by taking into account dynamic phenomena, which we will start discussing in
the next chapter.
Figure 1.8 Closed-loop feedback control: the controller, K, is a function of the reference
input, ȳ, and the system output, y, by way of the error signal, e = ȳ - y.
At this point, we are content to say that if the computation implied by feedback is
performed fast enough, then the scheme should work. We analyze the proposed
feedback solution only in the case of static linear models, that is, when both the
controller, K, and the system to be controlled, G, are linear. Feedback controllers of the
form (1.2), which are linear and static, are known by the name proportional controllers,
or P controllers for short. In the closed-loop diagram of Fig. 1.8, we can think of the
signal ȳ, the target velocity, as an input, and of the signal y, the terminal velocity, as
an output. A mathematical description of the relationship between the input signal, ȳ,
and output signal, y, assuming linear models, can be computed from the diagram:
y = G u = G K (ȳ - y), from which y = H ȳ, where H = G K/(1 + G K).
Ironically, a first conclusion from the closed-loop analysis is that it is not possible to
achieve exact tracking of the target velocity since H cannot be equal to one for any
finite value of the constants G and K, not even when K = 1/G, which was the open-
loop solution. However, it is not so hard to make H get close to one: just make K large!
More precisely, make the product G K large. How large it needs to be depends on the
particular system G. However, a welcome side-effect of the closed-loop solution is that
the controller gain, K, does not depend directly on the value of the system model, G. As
the calculations in Table 1.1 reveal, the closed-loop transfer-function, H, remains within
1% of 1 for values of K greater than or equal to 3 for any value of G lying between the two
crude linear models estimated earlier in Fig. 1.6.
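Calculations like the ones in Table 1.1 are easy to reproduce. A sketch with illustrative values of G (the slopes estimated in Fig. 1.6 are not reproduced here), using the closed-loop relationship H = G K/(1 + G K):

```python
def H(G, K):
    """Closed-loop transfer-function G*K/(1 + G*K) of the loop in Fig. 1.8."""
    return G * K / (1 + G * K)

# H approaches 1 as the product G*K grows, regardless of the exact
# (positive) value of G, which is the point of the feedback solution
for K in (1.0, 3.0, 10.0, 100.0):
    print(K, H(20.0, K), H(40.0, K))
```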
In other words, feedback control does not seem to rely on exact knowledge of the
system model in order to achieve good tracking performance. This is a major feature of
feedback control, and one of the reasons why we may get away with using incomplete
and not extremely accurate mathematical models for feedback design. This may seem
strange, especially to scientists and engineers trained to look for accuracy and
fidelity in their models of the world, a line of thought that might lead one to believe
that better accuracy requires the use of more complex models. For example, the complexity
required for accurately modeling the interaction of an aircraft with its surrounding air
may be phenomenal. Yet, as the Wright brothers and other flight pioneers
demonstrated, it is possible to design and implement effective feedback control of
aircraft without relying explicitly on such complex models.
This remarkable feature remains for the most part true even if nonlinear models are
considered, as Fig. 1.9 illustrates in the case of the car cruise control.
Figure 1.9 Effect of the gain K on the ability of the terminal velocity, y, to track a given
target velocity, ȳ, when the linear feedback control, u = K(ȳ - y), is in closed-loop
(Fig. 1.8) with the nonlinear model from Fig. 1.5.
Insight on the reasons why feedback control can achieve tracking without relying on
precise models is obtained if we look at the control, the signal u, that is effectively
computed by the closed-loop solution. Following steps similar to the ones used in the
derivation of the closed-loop transfer-function, we calculate
u = K (ȳ - y) = K/(1 + G K) ȳ.
Note that u approaches ȳ/G as the product G K grows, which is exactly the same control
as that computed in open-loop (see Section 1.4). This time, however, it is the feedback
loop that computes the function 1/G based on the error signal, e = ȳ - y. Indeed, u is
simply equal to K e, which, when K is made large, converges to ȳ/G by virtue of feedback,
no matter what the value of G is. A natural question is: what are the side-effects of raising
the control gain in order to improve the tracking performance? We will come back to
this question at many points in this book as we learn more about dynamic systems and
feedback.
1.6 Sensitivity
As seen before, in both open- and closed-loop solutions to the tracking control
problem, the output y is related to the target output, ȳ, through
y = H ȳ.
Now consider that G assumes values in the neighborhood of a certain nominal model,
and that H varies with G. Assume that those changes in G affect H in a continuous and
differentiable way, so that the relative change in H produced by a relative change in G,
the sensitivity, is
S = (G/H) (dH/dG).
Using this formula we compute the sensitivity of the open-loop solution. In the case
of linear models, H = G K, so that
S = (G/(G K)) K = 1.
This can be interpreted as follows: in open-loop, a relative change in the system model,
G, produces a relative change in the output, y, of the same order.
In closed-loop, H = G K/(1 + G K), and a similar calculation yields
S = 1/(1 + G K). (1.3)
By making K large we not only improve the tracking performance but also reduce the
sensitivity S. Note that S + H = 1, hence S = 1 - H, so that the values of S can be
easily calculated from Table 1.1 in the case of the car cruise control. For this reason, H
is known as the complementary sensitivity function.
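The complementary relationship between S and H is easy to verify numerically; a sketch with illustrative gains:

```python
def H(G, K):
    """Closed-loop (complementary sensitivity) transfer-function."""
    return G * K / (1 + G * K)

def S(G, K):
    """Sensitivity of the closed-loop transfer-function to changes in G."""
    return 1 / (1 + G * K)

# S + H = 1 for any gains, and raising K drives the sensitivity to zero
for K in (1.0, 10.0, 100.0):
    print(K, S(20.0, K), S(20.0, K) + H(20.0, K))
```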
In the closed-loop diagram of Fig. 1.8, the transfer-function from the reference input,
ȳ, to the tracking error, e = ȳ - y, is the sensitivity function itself, since
e = ȳ - H ȳ = S ȳ.
1.7 Disturbances
It is easy to incorporate disturbances into the basic open- and closed-loop schemes
of Figs. 1.7 and 1.8, which we do in Figs. 1.12 and 1.13. In both cases, one can write the
output, y, in terms of the reference input, ȳ, and the disturbance, w. Better yet, we can
write the transfer-function from the inputs, ȳ and w, to the tracking error, e = ȳ - y.
In open-loop we calculate with Fig. 1.12 that
Substituting the proposed open-loop solution, K = 1/G, we obtain
which means that open-loop control is very effective at tracking but has no capability
to reject the disturbance w, as one could have anticipated from the block-diagram in
Fig. 1.12. Open-loop controllers will perform poorly in the presence of disturbances.
This is similar to the conclusion obtained in Section 1.6 that showed open-loop
controllers to be sensitive to changes in the system model.
(1.4)
The control gain, K, shows up in both transfer-functions from the inputs, w and ȳ, to
the tracking error, e. High control gains reduce both terms at the same time. That is,
the closed-loop solution achieves good tracking and rejects the disturbance. This is a
most welcome feature and often the main reason for using feedback in control systems.
By another coincidence, the coefficient of the first term in (1.4) is the same as the
sensitivity function, S, calculated in Section 1.6.
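A numeric sketch of this trade-off, assuming for illustration that the disturbance w enters additively at the system output, y = G u + w, with the proportional control u = K (ybar - y); the gain values below are illustrative:

```python
def tracking_error(G, K, ybar, w):
    # Closing the loop y = G*u + w with u = K*(ybar - y) and solving for
    # e = ybar - y gives e = (ybar - w)/(1 + G*K)
    return (ybar - w) / (1 + G * K)

# A large gain K shrinks the error due to the reference and the disturbance
# at the same time; a small gain leaves both almost untouched
for K in (0.1, 1.0, 100.0):
    print(K, tracking_error(G=20.0, K=K, ybar=60.0, w=5.0))
```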
Problems
1.1 For each block-diagram in Fig. 1.14 identify inputs, outputs, and other relevant
signals, and also describe what physical quantities the signals could represent.
Determine whether the system is in closed-loop or open-loop based on the presence or
absence of feedback. Is the relationship between the inputs and outputs dynamic or
static? Write a simple equation for each block if possible. Which signals are
disturbances?
1.2 Sketch block-diagrams that can represent the following phenomena as systems:
Identify potential input and output signals that could be used to identify cause–effect
relationships. Discuss the assumptions and limitations of your model. Is the
relationship between the inputs and outputs dynamic or static? Write simple equations
for each block if possible.
1.4 Most cars are equipped with an anti-lock braking system (ABS), which is designed
to prevent the wheels from locking up when the driver actuates the brake pedal. It
helps with emergencies and adverse road conditions by ensuring that traction is
maintained on all wheels throughout braking. An ABS system detects locking of a
wheel by comparing the rotational speeds among wheels and modifies the pressure on
the hydraulic brake actuator as needed. Sketch a block-diagram that could represent
the signals and systems involved in ABS.
1.5 Humans learn to balance standing up early in life. Sketch a block-diagram that
represents signals and systems required for standing up. Is there a sensor involved?
Actuator? Feedback?
1.6 Sketch a block-diagram that represents the signals and systems required for a
human to navigate across an unknown environment. Is there a sensor involved?
Actuator? Feedback?
1.7 Repeat P1.5 and P1.6 from the perspective of a blind person.
1.8 Repeat P1.5 and P1.6 from the perspective of a robot or an autonomous vehicle.
1.9 For each block-diagram in Fig. 1.15 compute the transfer-function from the input
u to the output y assuming that all blocks are linear.
Figure 1.15 Block diagrams for P1.9.
1.10 Students participating in Rice University’s Galileo Project [Jen14] set out to
carefully reproduce some of Galileo’s classic experiments. One was the study of
projectile motion using an inclined plane, in which a ball accelerates down a plane
inclined at a certain angle then rolls in the horizontal direction with uniform motion for
a short while until falling off the edge of a table, as shown in Fig. 1.16. The distance the
ball rolled along the inclined plane, in feet, and the distance from the end of the
table to the landing site of the ball, d in inches, were recorded. Some of their data, five
trials at two different angles, is reproduced in Table 1.2. Use MATLAB to plot and
visualize the data. Fit simple equations, e.g. linear, quadratic, etc., to the data to relate
the fall height, h, to the horizontal travel distance, d, given in Table 1.2. Justify your
choice of equations and comment on the quality of the fit obtained in each case.
Estimate using the given data the vertical distance y. Can you also estimate gravity?
[Table 1.2: horizontal travel distance, d, recorded in five trials at each of two ramp angles, for ramp distances of 1, 2, 4, 6, and 8 ft; the data is not reproduced here.]
1.11 Use MATLAB to determine the parameters that produce the least-squares fit of the
data in Fig. 1.4 to the arc-tangent curve of Fig. 1.5 and the linear curve of Fig. 1.6.
Compare your answers with Figs. 1.5 and 1.6.
1.13 Show that the sensitivity function in (1.3) is the one associated with the closed-
loop transfer-function H = G K/(1 + G K).
1 This may bring to memory a bad joke about physicists and spherical cows.
2 All data used to produce the figures in this book is available for download from the website www.cambridge.org/deOliveira.
In the present book, mathematical models for dynamic systems take the form of ordinary differential equations where signals evolve continuously in time. Bear in mind that this is not a course on differential equations, but previous exposure to the mathematical theory of differential equations helps. Familiarity with the material covered in standard textbooks, e.g. [BD12], is enough. We make extensive use of the Laplace transform and provide a somewhat self-contained review of the relevant facts in Chapter 3.
These days, when virtually all control systems are implemented in some form of digital computer, one is compelled to justify why control systems are not discussed directly from the point of view of discrete-time signals and systems. One reason is that
continuous-time signals and systems have a long tradition in mathematics and physics
that has established a language that most scientists and engineers are accustomed to.
The converse is unfortunately not true, and it takes time to get comfortable with
interpreting discrete-time models and making sense of some of the implied
assumptions that come with them, mainly the effects of sampling and related practical
issues, such as quantization and aliasing. In fact, for physical systems, it is impossible to choose an adequate sampling rate without having a good idea of the
continuous-time model of the system being controlled. Finally, if a system is well
modeled and a controller is properly designed in continuous-time, implementation in
the form of a discrete-time controller is most of the time routine, especially when the
available hardware sampling rates are fast enough.
Let us start with some notation: we denote time by the real variable t in the interval [0, ∞), where 0 can be thought of as an arbitrary origin of time before which we are not interested in the behavior of the system or its signals. We employ functions of real variables to describe signals and use standard functional notation to indicate the dependence of signals on time. For example, the dynamic signals y and u are denoted as y(t) and u(t). At times, when no confusion is possible, we omit the dependence of signals on t.
where x is the linear coordinate representing the position of the car, b is the coefficient of friction, and f is a force, which we will use to put the car into motion. Much can be argued about the exact form of the friction force, which we have assumed to be viscous, that is of the form −b dx/dt, linear, and opposed to the velocity v = dx/dt. As we are interested in modeling the velocity of the car and not its position, it is convenient to rewrite the differential equation in terms of the velocity v, obtaining

m dv/dt + b v = f.    (2.1)
In order to complete our model we need to relate the car driving force, f , to the pedal
excursion, u. Here we could resort to experimentation or appeal to basic principles.
With simplicity in mind we choose a linear static model:

f = p u,    (2.2)

where p is the pedal gain.
Of course, no one should believe that the simple force model (2.2) can accurately
represent the response of the entire powertrain of the car in a variety of conditions.
Among other things, the powertrain will have its own complex dynamic behaviors,
which (2.2) gracefully ignores. Luckily, the validity of such simplification depends not
2
only on the behavior of the actual powertrain but also on the purpose of the model. In
many cases, the time-constants of the powertrain are much faster than the time-
3
constant due to the inertial effects of the entire car. In this context, a simplified model
can lead to satisfactory or at least insightful results when the purpose of the model is,
say, predicting the velocity of the car. A human driver certainly does not need to have a
deep knowledge of the mechanical behavior of an automobile for driving one!
Combining Equations (2.1) and (2.2), and labeling the velocity as the output of the system, i.e. y = v, we obtain the differential equation

m dy/dt + b y = p u,    (2.3)
which is the mathematical dynamic model we will use to represent the car in the
dynamic analysis of the cruise control problem.
Assuming that integrator blocks are available, all that is left to do is to rewrite the ordinary differential equation, isolating its highest derivative. For example, we rewrite (2.3) as

dy/dt = (p u − b y)/m.
Figure 2.3 Dynamic model of the car: m is the mass, b is the viscous friction coefficient,
p is the pedal gain, u is the pedal excursion, and y is the car’s velocity.
Note the presence of a feedback loop in the diagram of Fig. 2.3! For this reason, tools
for analyzing feedback loops often draw on the theory of differential equations and
vice versa. We will explore the realization of differential equations using block-
diagrams with integrators in detail in Chapter 5.
The differential equation (2.3) looks very different from the static linear models
considered earlier in Chapter 1. In order to understand their differences and similarities
we need to understand how the model (2.3) responds to inputs. Our experiment in
Section 1.1 consisted of having a constant pedal excursion and letting the car reach a
terminal velocity. We shall first attempt to emulate this setup using the differential
equation (2.3) as a model.
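Before turning to the analytic solution, the constant-pedal experiment can be emulated numerically. The sketch below (Python standing in for the MATLAB used in the book, with illustrative parameter values that are not the book's) integrates (2.3) from rest and checks that the velocity settles at the terminal value p u/b:

```python
import numpy as np

# Illustrative parameters (assumed, not the book's): mass m, friction b,
# pedal gain p, and a constant pedal excursion u.
m, b, p = 1500.0, 75.0, 500.0    # kg, kg/s, and pedal gain
u = 1.0                           # constant pedal excursion, in

# Forward-Euler integration of m*dy/dt + b*y = p*u starting from rest.
dt, T = 0.01, 200.0
t = np.arange(0.0, T, dt)
y = np.zeros_like(t)
for k in range(1, len(t)):
    y[k] = y[k-1] + dt * (p * u - b * y[k-1]) / m

terminal = p * u / b              # predicted terminal velocity
```

The simulated velocity rises monotonically toward `terminal`, mirroring the experiment of Section 1.1.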
It is this relation that should be compared with the static model developed earlier. Experiments similar to the ones in Section 1.1 can be used to determine the value of the ratio p/b. In the language of differential equations the constant function y = (p/b)u is a particular solution to the differential equation (2.3). See [BD12] for details.
The particular solution cannot, however, be a complete solution: if the initial velocity of the car at time t = 0 is not equal to (p/b)u then y(0) does not match the particular solution. The remaining component of the solution is found by solving the homogeneous version of Equation (2.3):

m dy/dt + b y = 0.    (2.4)
Substituting y(t) = e^{λt} into (2.4) yields (m λ + b) e^{λt} = 0. This is an algebraic equation that needs to hold for all t ≥ 0, in particular t = 0, which will happen only if λ is a zero of the characteristic equation

m λ + b = 0,    (2.5)

whose single zero is

λ = −b/m.
Plots of y(t) = y(0) e^{λt} for various values of λ and y(0) are shown in Fig. 2.4 for t ≥ 0. Note how the responses converge to 0 for all negative values of λ. The more negative the value of λ, the faster the convergence. When λ is positive the response does not converge to 0, even when y(0) is very close to 0.
Figure 2.4 Plots of y(t) = y(0) e^{λt}, t ≥ 0; the values of λ and y(0) are as shown.
It is customary to evaluate how fast the solution of the differential equation (2.3) converges to its steady-state value by analyzing its response to a zero initial condition and a nonzero constant input. This is known as a step response. When λ = −b/m < 0, the constant τ = −1/λ = m/b is the time-constant. In P2.1 you will show that τ has units of time. At the select times t = τ and t = 3τ, the step response has reached approximately 63% and 95% of its final value.
Another measure of the rate of change of the response is the rise-time, t_r, which is the time it takes the step response to go from 10% to 90% of its final value. Calculating, we obtain

t_r = τ ln 9 ≈ 2.2 τ.
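The 10%-to-90% rise-time of a first-order lag equals τ ln 9 ≈ 2.2 τ, a fact that is easy to check numerically. A small sketch (illustrative τ, not a value from the book):

```python
import numpy as np

# Normalized step response of a first-order lag: y(t) = 1 - exp(-t/tau).
tau = 20.0                          # illustrative time-constant, s
t = np.linspace(0.0, 10.0 * tau, 100001)
y = 1.0 - np.exp(-t / tau)          # final value is 1

# y is monotonically increasing, so searchsorted finds the crossings.
t10 = t[np.searchsorted(y, 0.1)]    # first time the response exceeds 10%
t90 = t[np.searchsorted(y, 0.9)]    # ... and 90%
rise_time = t90 - t10               # should be close to tau*ln(9)
```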
As shown in Section 2.3, the terminal velocity attained by the dynamic linear model, the differential equation (2.3), is related to the static linear model, the algebraic equation (1.1), through the ratio p/b. This means that the ratio p/b can be determined in the same way as was done in Section 1.1. A new experiment is needed to determine the time-constant τ = m/b, which does not influence the terminal velocity but affects the rate at which the car approaches the terminal velocity.
First let us select a velocity around which we would like to build our model,
preferably a velocity close to the expected operation of the cruise controller. Looking at
Figs. 1.4 through 1.6, we observe that a pedal excursion of around in will lead to a
terminal velocity around mph, which is close to highway speeds at which a cruise
controller may be expected to operate. We perform the following dynamic experiment:
starting at rest, apply constant pedal excursion, , , and collect
samples of the instantaneous velocity until the velocity becomes approximately
constant. In other words, perform an experimental step response. The result of one
such experiment may look like the plot in Fig. 2.5, in which samples (marked as circles,
crosses, and squares) have been collected approximately every s for s.
Figure 2.5 Experimental velocity response of a car to a constant pedal excursion,
in, ; samples are marked as circles, crosses, and squares.
We proceed by fitting the data in Fig. 2.5 to a function like (2.6) where the initial
condition, , is set to zero before estimating the parameters and . This fit can be
performed in many ways. We do it as follows: we first average the samples over the last
s in Fig. 2.5 (squares) to compute an estimate of the terminal velocity. From the data
shown in Fig. 2.5 we obtain the estimate . If is of the form (2.6)
then

log(1 − y(t)/ȳ) = −t/τ,

where ȳ denotes the estimated terminal velocity. That is, log(1 − y(t)/ȳ) is a line with slope −1/τ. With this in mind we plot this quantity in Fig. 2.6 using the early samples of Fig. 2.5 (circles) and estimate the slope of the line, that is, −1/τ. The model parameters are then estimated based on the relationships
Note that this model has a static gain of about mph/in which lies somewhere
between the two static linear models estimated earlier in Fig. 1.6. Indeed, this is the
intermediate gain value that was used in Section 1.5 to calculate one of the static
closed-loop transfer-functions in Table 1.1.
The estimation of the structure and the parameters of a dynamic system from
experiments is known as system identification. The interested reader is referred to
[Lju99] for an excellent introduction to a variety of useful methods.
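The two-step identification procedure described above (average the late samples to estimate the terminal velocity, then fit a line to the logarithm of the early samples to estimate the time-constant) can be sketched on synthetic data. Python/NumPy stands in for MATLAB, and the true values ȳ = 60 and τ = 20 are illustrative only:

```python
import numpy as np

# Synthetic step-response samples y(t) = ybar*(1 - exp(-t/tau)) playing
# the role of the measurements in Fig. 2.5.
ybar_true, tau_true = 60.0, 20.0
t = np.arange(0.0, 120.0, 2.0)
y = ybar_true * (1.0 - np.exp(-t / tau_true))

# Step 1: estimate the terminal velocity from the last samples (squares).
ybar = np.mean(y[t >= 100.0])

# Step 2: log(1 - y/ybar) is (nearly) a line of slope -1/tau; fit the
# early samples (circles), where the exponential dominates.
early = t <= 40.0
z = np.log(1.0 - y[early] / ybar)
slope = np.polyfit(t[early], z, 1)[0]
tau = -1.0 / slope
```

Because the terminal velocity is itself an estimate, the recovered τ is close to, but not exactly, the true value.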
We are now ready to revisit the feedback solution proposed in Section 1.3 for solving
the cruise control problem. Let us keep the structure of the feedback loop the same, that is, let the proportional controller

u = K e    (2.11)

be connected as in Fig. 1.8. Note a fundamental difference between this controller and the one analyzed before: in Section 1.5 the signals e and y were the terminal error and terminal velocity; controller (2.11) uses the dynamic error signal e(t) and velocity y(t). This dynamic feedback loop can be practically implemented if a sensor for the instantaneous velocity, y(t), is used. Every vehicle comes equipped with one such sensor, the speedometer.
In order to analyze the resulting dynamic feedback control loop we replace the
system model, G, with the dynamic model, the differential equation (2.3), to account
for the car’s dynamic response to changes in the pedal excursion. In terms of block-
diagrams, we replace G in Fig. 1.8 by the block-diagram representation of the
differential equation (2.3) from Fig. 2.3. The result is the block-diagram shown in Fig.
2.7. Using Equations (2.3) and (2.11) we eliminate the input signal, u, to obtain

m dy/dt + (b + p K) y = p K r,    (2.12)

where r is the constant target velocity.
This linear ordinary differential equation governs the behavior of the closed-loop
system. In the next chapters, you will learn to interpret this equation in terms of a
closed-loop transfer-function using the Laplace transform. For now we proceed in the
time domain and continue to work with differential equations.
Figure 2.7 Dynamic closed-loop connection of the car model with proportional
controller.
Since Equation (2.12) has the same structure as Equation (2.3), its solution is also given by (2.6), with the appropriate substitution of parameters.
We refer to components of the response of a dynamic system that persist as the time grows large as steady-state solutions. In this case, the closed-loop has a constant steady-state solution, equal to the fraction pK/(b + pK) of the constant target velocity, given in Table 2.1. The corresponding dynamic responses calculated from zero initial conditions, y(0) = 0, are plotted in Fig. 2.8.
Figure 2.8 Open- and closed-loop dynamic response, , for the linear car velocity
model (2.12) calculated for and and a constant target output
of mph with proportional control (2.11) for various values of gain, K; the open-
loop solution is from Section 1.4.
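Combining the car model with the proportional controller gives another first-order lag, so both the closed-loop time-constant and the steady-state output can be read off directly. A sketch of how they vary with the gain K (illustrative parameter values, not the book's):

```python
# Closed loop from m*dy/dt + b*y = p*u and u = K*(r - y):
# m*dy/dt + (b + p*K)*y = p*K*r, again a first-order lag.
m, b, p = 1500.0, 75.0, 500.0    # illustrative values (assumed)
r = 60.0                          # constant target velocity, mph

def closed_loop(K):
    """Return (time-constant, steady-state output) for gain K."""
    tau = m / (b + p * K)
    y_ss = p * K * r / (b + p * K)
    return tau, y_ss

gains = [0.05, 0.5, 5.0]
taus, outputs = zip(*(closed_loop(K) for K in gains))
```

Raising K shrinks the time-constant and pushes the steady-state output toward, but never exactly onto, the target r.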
Some numbers in Table 2.1 and Fig. 2.8 look suspicious. Is it really possible to lower
time-constants so much? Take, for example, the case of the largest gain : here
we have almost perfect tracking (3% error) with a closed-loop rise-time that is more
than 40 times faster than in open-loop. This kind of performance improvement is unlikely to be achieved by any controller that simply steps on the accelerator pedal.
Surely there must be a catch! Indeed, so far we have been looking at the system
output, the car’s velocity, , and have paid little attention to the control input, the
pedal excursion, . We shall now look at the control input in search of clues that
could explain the impressive performance of the closed-loop controller.
The control inputs, , associated with the dynamic responses in Fig. 2.8 are
plotted in Fig. 2.9. In Fig. 2.9 we see that the feedback controller is injecting into the
system, the car, large inputs, pedal excursions, in order to achieve better tracking and
faster response. The larger the control gain, K, the larger the required pedal excursion.
Note that in this case the maximum required control signal happens at t = 0, when the tracking error is at a maximum, so that u(0) = K e(0).
Clearly, the larger the gain, K, the larger the control input, u. For instance, with
the controller produces an input that exceeds the maximum possible pedal excursion of 3 in, which corresponds to full throttle. In other words, the control input is
saturated. With we have , which is full throttle. Of course, any
conclusions drawn for will no longer be valid or, at least, not very accurate.
It is not possible to achieve some of the predicted ultra-fast response times due to
limitations in the system, in this case pedal and engine saturation, that were not
represented in the linear models used to design and analyze the closed-loop. Ironically,
the gain is one for which the pedal excursion remains well below saturation,
and is perhaps the one case in which the (poor) performance predicted by the linear
model is likely to be accurate.
Figure 2.9 Open- and closed-loop control inputs (pedal excursion) corresponding to the
dynamic responses in Fig. 2.8; the largest possible pedal excursion is 3 in.
2.6 Nonlinear Models
In Section 2.5 we saw controllers that produced inputs that led to saturation of the
system input, the car’s pedal excursion. In some cases the required control input
exceeded full throttle. In this section we digress a little to introduce a simple nonlinear
model that can better predict the behavior of the system in closed-loop when
saturation is present. This is not a course in nonlinear control, and the discussion will
be kept at a very basic level. The goal is to be able to tell what happens in our simple
example when the system reaches saturation.
In order to model the effect of saturation we will work with a nonlinear differential
equation of the form
This means that the steady-state response of the nonlinear differential equation (2.15)
matches the empirical nonlinear fit performed earlier in Fig. 1.5. Moreover, at least for
small values of , we should expect that (see P2.2)
The resulting nonlinear model has a steady-state solution that matches the static fit
from Fig. 1.5 and a time-constant close to that of the linear model from Section 2.1.
It is in closed-loop, however, that the nonlinear model will likely expose serious
limitations of the controller based on the linear model (2.3). In order to capture the
limits on pedal excursion we introduce the saturation nonlinearity

sat(u) = 0 if u < 0, u if 0 ≤ u ≤ 3, 3 if u > 3,

which limits the pedal excursion to at most 3 in. Eliminating u we obtain
The above nonlinear ordinary differential equations cannot be solved analytically but
can be simulated using standard numerical integration methods, e.g. one of the Runge–
Kutta methods [BD12].
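A simple numerical simulation shows what saturation does to the closed loop. The sketch below uses forward Euler rather than a Runge–Kutta scheme, Python rather than MATLAB, and illustrative parameter values, so it is only a stand-in for the book's model (2.17); the qualitative conclusion is the point:

```python
import numpy as np

def sat(u, u_max=3.0):
    """Pedal saturation: excursion limited between 0 and u_max inches."""
    return min(max(u, 0.0), u_max)

# Closed loop with saturation: m*dy/dt + b*y = p*sat(K*(r - y)).
# Illustrative parameters (assumed, not the book's values).
m, b, p, r = 1500.0, 75.0, 500.0, 60.0

def simulate(K, T=400.0, dt=0.01):
    t = np.arange(0.0, T, dt)
    y = np.zeros_like(t)
    for k in range(1, len(t)):
        u = sat(K * (r - y[k-1]))
        y[k] = y[k-1] + dt * (p * u - b * y[k-1]) / m
    return y[-1]

y_lo = simulate(0.05)   # small gain: pedal stays below saturation
y_hi = simulate(5.0)    # large gain: pedal saturates hard at first
```

With these numbers the small-gain loop settles at its linear prediction, while the large-gain loop is capped by full throttle: no gain can push the terminal velocity past the open-loop full-throttle value p·3/b.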
Figure 2.10 Open- and closed-loop dynamic response, , produced by the nonlinear
car velocity model (2.17) calculated with , , , and
and a constant target output of mph under proportional control
(2.11) and various values of gain, K. Compare this with Fig. 2.8.
Figure 2.11 Open- and closed-loop control input, pedal excursion, , produced by
the car velocity nonlinear model equation (2.17) under proportional control (2.11); the
largest possible pedal excursion is 3 in; note the marked effect of pedal saturation in
the case of the highest gain and its impact in Fig. 2.10.
2.7 Disturbance Rejection
We now return to linear models to talk about a much desired feature of feedback
control: disturbance rejection. Consider a modified version of the cruise control
problem where the car is on a slope, as illustrated in Fig. 2.12. Newton's law applied to the car produces the differential equation

m dv/dt + b v = f − m g sin θ,

where v is the velocity of the car, θ is the angle the slope makes with the horizontal, and g is the gravitational acceleration. When θ = 0 the car is on the flat, sin θ = 0, and the model reduces to (2.1). As before, adoption of a linear model for the relationship between the force, f, and the pedal excursion, u, i.e. f = p u from (2.2), produces the differential equation

m dv/dt + b v = p u − m g sin θ.
This is a linear differential equation except for the way in which θ enters the equation. In most cases, the signal θ(t) is not known ahead of time, and can be seen as a nuisance or disturbance. Instead of working directly with θ, it is convenient to introduce the disturbance signal

w = m g sin θ,

in terms of which the model becomes m dv/dt + b v = p u − w. This differential equation is linear and can be analyzed with simpler tools.
Figure 2.12 Free-body diagram showing forces acting on a car on a road slope.
The car model with the input disturbance is represented in closed-loop by the block-
diagram in Fig. 2.13, which corresponds to the equations
Figure 2.13 Closed-loop connection of the car showing the slope disturbance w = m g sin θ.
The dynamic response of the car to the change in slope can be computed by formula
(2.6) after setting
and calculating
The closed-loop response and the open-loop response are plotted in Fig. 2.14 for
various values of the gain, K. The predicted change in velocity is equal to

−w/(b + p K).    (2.21)

From (2.21), the larger K, the smaller the factor 1/(b + p K), hence the smaller the change in velocity induced by the disturbance.
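The steady-state velocity drop caused by a constant slope disturbance follows directly from the first-order model: −w/b in open loop and −w/(b + pK) in closed loop. A small sketch with illustrative (assumed) parameter values and a 10% grade:

```python
import math

# Illustrative parameters (not the book's values).
m, b, p = 1500.0, 75.0, 500.0
g, grade = 9.81, 0.10                 # gravity, 10% road grade
theta = math.atan(grade)
w = m * g * math.sin(theta)           # slope disturbance force, N

def delta_y(K):
    """Predicted steady-state velocity change; open loop when K = 0."""
    return -w / (b + p * K)

drop_open = delta_y(0.0)              # open loop: -w/b
drop_closed = delta_y(5.0)            # closed loop with gain K = 5
```

Feedback shrinks the velocity drop by the factor b/(b + pK), which with these numbers is more than an order of magnitude.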
Figure 2.14 Closed-loop response of the velocity of the car with proportional cruise
control (linear model (2.12), and ) to a change in road slope
at , from flat to 10% grade for various values of the control gain.
Compare the above analysis with the change in velocity produced by the open-loop solution (see P2.3), which equals −w/b. Since b < b + p K for any positive gain, we conclude that the feedback solution always provides better regulation of the velocity in the presence of a road slope disturbance. Finally, large gains will bring down not only the tracking error but also the regulation error in response to a disturbance. Indeed, bigger Ks make both errors small.
We close this chapter with an analysis of a simple and familiar controlled system: a
toilet water tank, Fig. 2.15. This system has a property of much interest in control, the
so-called integral action. As seen in previous sections, large gains will generally lead to
small tracking errors but with potentially damaging consequences, such as large control
inputs that can lead to saturation and other nonlinear effects. In the examples
presented so far, only an infinitely large gain could provide zero steady-state tracking
error in closed-loop. As we will see in this section, integral action will allow closed-loop
systems to track constant references with zero steady-state tracking error without
resorting to infinite gains.
Figure 2.15 A toilet with a water tank and a simplified schematic diagram showing the
ballcock valve. The tank is in the shape of a rectangular prism with cross-sectional area
A. The water level is y and the fill line is .
A schematic diagram of a toilet water tank is shown in Fig. 2.15(b). Assuming that the tank has a constant cross-sectional area, A, the amount of water in the tank, i.e. the volume, v, is related to the water level, y, by

v = A y.

When the tank is closed, for instance, right after a complete flush, water flows in at a rate u, which is controlled by the ballcock valve. Without leaks, the water volume in the tank is preserved, hence

dv/dt = u.

On combining these two equations in terms of the water level, y, we obtain the differential equation

A dy/dt = u,

which reveals that the toilet water tank is essentially a flow integrator, as shown in the block-diagram representation in Fig. 2.16.
A ballcock valve, shown in Fig. 2.15(b), controls the inflow of water by using a float to
measure the water level. When the water level reaches the fill line, , a lever
connected to the float shuts down the valve. When the water level is below the fill line,
such as right after a flush, the float descends and actuates the fill valve. This is a
feedback mechanism. Indeed, we can express the flow valve as a function of the error
between the fill line, , and the current water level, y, through
where the profile of the function K is similar to the saturation curves encountered
before in Figs. 1.4–1.6. The complete system is represented in the block-diagram Fig.
2.17, which shows that the valve is indeed a feedback element: the water level, y, tracks
the reference level, fill line, .
Figure 2.17 Block-diagram for water tank with ballcock valve.
With simplicity in mind, assume that the valve is linear. In this case, the behavior of
the tank with the ballcock valve is given by the differential equation
This equation is of the form (2.3) and has once again as solution (2.6), that is,
In other words, the steady-state solution is always equal to the target fill line whenever K > 0, no matter what the actual values of K and A are! The toilet water tank level, y, tracks the fill line level exactly without a high-gain feedback controller. As will
become clear in Chapter 4, the reason for this remarkable property is the presence of a
pure integrator in the feedback loop. Of course, the values of K and A do not affect the
steady-state solution but do influence the rate at which the system converges to it.
Integral action can be understood with the help of the closed-loop diagram in Fig. 2.17. First note that for the output of an integrator to converge to a constant value it is necessary that its input converge to zero. In Fig. 2.17, it is therefore necessary that the error between the fill line and the water level converge to zero, that is, that the level y converge to the fill line.
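The tank's remarkable tracking property is easy to verify in simulation: with a linear valve the loop is A dy/dt = K(fill line − y), and the level settles exactly at the fill line for any positive K and A. A Python sketch with illustrative values (the specific K, A pairs are assumptions, not from the book):

```python
import numpy as np

# Tank with linear ballcock valve: A*dy/dt = K*(y_fill - y). The pure
# integrator in the loop makes y converge to y_fill for ANY K, A > 0.
y_fill = 1.0                     # fill line, arbitrary units

def tank_level(K, A, T=500.0, dt=0.01):
    t = np.arange(0.0, T, dt)
    y = 0.0                      # tank starts empty, e.g. after a flush
    for _ in t[1:]:
        y += dt * K * (y_fill - y) / A
    return y

# Very different gains and areas, yet the same final level.
levels = [tank_level(K, A) for K, A in [(0.1, 1.0), (1.0, 2.0), (5.0, 0.5)]]
```

K and A change only the time-constant A/K of the approach, never the final value.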
We conclude this section by revisiting the car example with linear model (2.3), by noting that when there is no damping, i.e. b = 0, then the car becomes a pure integrator. Indeed, in this case

m dy/dt = p u,

which implies that the closed-loop tracking error converges to zero independently of the value of K. We saw in Section 2.7 that large controller gains lead not only to small tracking errors but also to effective disturbance rejection. The same is true for an integrator in the controller, which leads to asymptotic tracking and asymptotic disturbance rejection. However, the position of the integrator in the loop matters: an integrator in the system but not in the controller will lead to zero tracking error but nonzero disturbance rejection error. For instance, in the example of the car we have seen that b = 0 implies zero tracking error but a steady-state disturbance error equal to w/(p K), where w is the disturbance input, which is in general not zero. Nevertheless, it does get smaller as the gain, K, gets large. By contrast, an integrator in the controller will generally lead to zero steady-state error in response to both constant references and constant disturbances, independently of the loop gain, K. We will study this issue in more detail in Section 4.5.
Problems
2.1 Consider the solution (2.6) to the first-order ordinary differential equation (2.3)
where the constant parameters m, b, and p are from the car velocity dynamic model
developed in Section 2.1. Assign compatible units to the signals and constants in (2.3) and calculate the corresponding units of the parameter λ, from (2.7), and the time-constant τ, from (2.8).
2.2 Use the Taylor series expansion of the function to justify the
approximation
when .
2.3 Calculate the dynamic response, , of the open-loop car velocity model (2.19)
when
and . Calculate the change in speed and compare your
answer with (2.22).
The next problems involve the motion of particle systems using Newton’s law.
2.5 The first-order ordinary differential equation obtained in P2.4 can be seen as a
dynamic system where the output is the vertical velocity, v, and the input is the
gravitational force, mg. Calculate the solution to this equation. Consider
kg/s, m/s . Sketch or use MATLAB to plot the response, ,
when , , or .
2.8 A sky diver weighing 70 kg reaches a constant vertical speed of 200 km/h during
the free-fall phase of the dive and a vertical speed of 20 km/h after the parachute is
opened. Approximate each phase of the fall by the ordinary differential equation
obtained in P2.4 and estimate the resistance coefficients using the given information.
Use m/s . What are the time-constants in each phase? At what time and
distance from the ground should the parachute be opened if the landing speed is to be
less than or equal to 29 km/h? If a dive starts at a height of 4 km with zero vertical
velocity at the moment of the jump and the parachute is opened 60 s into the dive,
how long is the diver airborne?
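One way to begin P2.8 is to note that at a constant (terminal) speed the drag force balances gravity, m g = b v, so each phase's resistance coefficient is b = m g/v and its time-constant is τ = m/b = v/g. The sketch below assumes g = 9.8 m/s² (the specific value of g is elided in the text above) and uses the stated data:

```python
# Terminal-speed balance m*g = b*v for each phase of the dive.
m, g = 70.0, 9.8            # diver mass (kg), assumed gravity (m/s^2)

v_free = 200.0 / 3.6        # free-fall terminal speed, m/s
v_chute = 20.0 / 3.6        # terminal speed under parachute, m/s

b_free = m * g / v_free     # resistance coefficient, free fall (kg/s)
b_chute = m * g / v_chute   # resistance coefficient, parachute (kg/s)

tau_free = m / b_free       # = v_free/g, roughly 5.7 s
tau_chute = m / b_chute     # = v_chute/g, roughly 0.57 s
```

Since the terminal speeds differ by a factor of ten, so do the resistance coefficients and, inversely, the time-constants.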
The next problems involve the planar rotation of a rigid body. Such systems can be approximately modeled by the first-order ordinary differential equation

J dω/dt = τ,

where ω is the body's angular speed, J is the body's moment of inertia about its center of mass, and τ is the sum of all torques about the center of mass of the body. In constrained rotational systems, e.g. levers, gears, etc., the center of mass can be replaced by the center of rotation.
2.10 An (inextensible and massless) belt is used to drive a rotating machine without
slip as shown in Fig. 2.18(a). The simplified motion of the inertia is described by
where is the torque applied by the driving motor and and are tensions on the
belt. The machine is connected to the inertia , which represents the sum of the
inertias of all machine parts. The motion of the inertia is described by
Show that the motion of the entire system can be described by the differential
equation
Figure 2.18 Diagrams for P2.10 and P2.18.
2.13 Determine a first-order ordinary differential equation based on P2.10 and P2.12
to describe the rotating machine as a dynamic system where the output is the angular
velocity of the inertia , , and the input is the motor torque, . Calculate the
solution to this equation. Consider N m, mm, mm,
kg m /s, kg m /s, kg m , kg m . Sketch or use
MATLAB to plot the response, , when rad/s, rad/s, or
rad/s.
2.14 Calculate the (open-loop) motor torque, , for the rotating machine model in
P2.10 and P2.12 so that the rotational speed of the mass , , converges to
rad/s as t gets large. Use the same data as in P2.13 and sketch or use MATLAB to plot
the response, , when rad/s, rad/s, or rad/s.
2.15 What happens with the response in P2.14 if the actual damping coefficients
and are 20% larger than the ones you used to calculate the open-loop torque?
2.16 A proportional controller can be used to control the speed of the inertia in the rotating machine discussed in
P2.10 and P2.12. Calculate and solve a differential equation that describes the closed-
loop response of the rotating machine. Using data from P2.13, select a controller gain,
K, with which the time-constant of the rotating machine is 3 s. Compare your answer
with the open-loop time-constant. Calculate the closed-loop steady-state error
between the desired rotational speed, rad/s, and . Sketch or use MATLAB
to plot the response, , when rad/s, rad/s, or
rad/s.
2.17 What happens with the response in P2.16 if the actual damping coefficients
and are 20% larger than the ones you used to calculate the closed-loop gain?
2.21 Calculate the (open-loop) motor torque, , for the elevator model in P2.19 so
that the vertical velocity of the mass , , converges to m/s as t gets large.
Use the same data as in P2.19 and sketch or use MATLAB to plot the response, ,
when , m/s, or m/s.
2.22 Let kg and use the same motor torque, , you calculated in P2.21 and
the rest of the data from P2.19 to sketch or use MATLAB to plot the response of the
elevator mass velocity, , when , m/s, or
m/s. Did the velocity converge to m/s? If not, recalculate a suitable torque. Plot
the response with the modified torque, compare your answer with P2.21, and comment
on the value of torque you obtained.
2.23 A proportional controller can be used to control the ascent and descent speed of the mass in the elevator
discussed in P2.18 and P2.19. Calculate and solve a differential equation that describes
the closed-loop response of the elevator. Using data from P2.19, select a controller
gain, K, with which the time-constant of the elevator is approximately 5 s. Compare this
value with the open-loop time-constant. Calculate the closed-loop steady-state error
between the desired vertical velocity, m/s, and . Sketch or use MATLAB to
plot the response of the elevator mass velocity, , when ,
m/s, or m/s. Compare the response with the open-loop control
response from P2.22.
2.24 Repeat P2.23, this time setting the closed-loop time-constant to be about s.
What is the effect on the response? Do you see any problems with this solution?
2.26 Repeat P2.25 this time setting the closed-loop time-constant to be about s.
What is the effect on the response? Do you see any problems with this solution?
Figures 2.19 through 2.21 show diagrams of mass–spring–damper systems. Assume that there is no friction between the wheels and the floor, and that all springs and dampers are linear: elongating a linear spring with rest length ℓ by x produces an opposing force k x (Hooke's law), where k is the spring stiffness; changing the length of a linear damper at a rate v produces an opposing force b v, where b is the damper's damping coefficient.
2.27 Choose wisely to show that the ordinary differential equation
2.31 Can you replace the two springs in P2.30 by a single spring and still obtain the
same ordinary differential equation?
The next problems involve simple electric circuits, which can be accurately modeled using ordinary differential equations.
2.34 An electric circuit in which a capacitor is in series with a resistor is shown in Fig. 2.22(a). In an electric circuit, the sum of the voltages around a loop must equal zero. This is Kirchhoff's voltage law. The voltage and the current on the capacitor and resistor satisfy

i_C = C dv_C/dt,    v_R = R i_R,

where C is the capacitor's capacitance and R is the resistor's resistance. In this circuit i_C = i_R because all elements are in series. This is Kirchhoff's current law. Show that

R C dv_C/dt + v_C = v,

where v is the input voltage.
2.36 An electric circuit in which an inductor, a capacitor, and a resistor are in series is shown in Fig. 2.22(b). As in P2.34, the sum of the voltages around the loop must equal zero. This is Kirchhoff's voltage law. The voltages and the currents on the capacitor and resistor are as in P2.34 and the voltage on the inductor is

v_L = L di_L/dt,

where L is the inductor's inductance.
2.37 Consider the differential equation for the RLC-circuit from P2.36. Compare this
equation with the equations of the mass–spring–damper system from P2.27 and
explain how one could select values of the resistance, R, capacitance, C, inductance, L,
and input voltage, v, to simulate the movement of the mass–spring–damper system in
P2.27. The resulting device is an analog computer.
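One common way to set up the analog computer of P2.37 is the force–voltage analogy: the series RLC loop L q″ + R q′ + q/C = v has the same form as m x″ + b x′ + k x = f, so charge plays the role of position with L ↔ m, R ↔ b, 1/C ↔ k, and v ↔ f. A sketch of the mapping with illustrative mechanical values (assumed, not from P2.27):

```python
import math

# Illustrative mechanical parameters (assumed): mass, damping, stiffness.
m, b, k = 2.0, 0.5, 8.0           # kg, kg/s, N/m

# Force-voltage analogy: pick the electrical values element by element.
L = m                              # inductance (H) plays the mass
R = b                              # resistance (ohm) plays the damper
C = 1.0 / k                        # capacitance (F) plays 1/stiffness

# Sanity check: both systems share the same undamped natural frequency.
wn_mech = math.sqrt(k / m)
wn_elec = 1.0 / math.sqrt(L * C)
```

With this mapping the circuit's charge (or, up to scaling, the capacitor voltage) reproduces the mass's position.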
2.38 An approximate model for the electric circuit in Fig. 2.23, where the triangular
element is an amplifier with a very large gain (operational amplifier, OpAmp), is
obtained from
and
Show that
2.40 In P2.38, set and solve for in terms of . Name one application
for this circuit.
2.41 The mechanical motion of the rotor of a DC motor shown schematically in Fig. 2.24(a) can be described by the differential equation

J dω/dt + b ω = τ,

where ω is the rotor angular speed, J is the rotor moment of inertia, and b is the coefficient of viscous friction. The rotor torque, τ, is given by

τ = k_t i_a,

where i_a is the armature current and k_t is the motor torque constant. Neglecting the effects of the armature inductance (L_a ≈ 0), the current is determined by the circuit in Fig. 2.24(b):

i_a = (v_a − k_e ω)/R_a,

where v_a is the armature voltage, R_a is the armature resistance, and k_e is the back-EMF constant. Combine these equations to show that

J dω/dt + (b + k_t k_e/R_a) ω = (k_t/R_a) v_a.
2.42 Show that k_t = k_e, that is, the motor torque constant equals the back-EMF constant. Hint: Equate the mechanical power with the electric power.
2.43 The first-order ordinary differential equation obtained in P2.41 can be seen as a
dynamic system where the output is the angular velocity, , and the input is the
armature voltage, . Calculate the solution to this equation when is constant.
Estimate the parameters of the first-order differential equation describing a DC motor
that achieves a steady-state angular velocity of 5000 RPM when V and has a
time-constant of s. Can you also estimate the "physical" parameters J, b, K_t, K_e,
and R_a with this information?
2.44 Can you estimate the parameters J, K_t, K_e, and b of the DC motor in P2.43 if
you know R_a and the stall torque, in N m, at a given armature voltage? Hint: The stall
torque is attained when the motor is held in place.
2.45 DC motors with high-ratio gear boxes can be damaged if held in place. Can you
estimate the parameters J, K_t, K_e, and b of the DC motor in P2.43 if you know R_a
and that after you attach an additional inertia, in kg m², the
motor time-constant becomes s?
2.47 A proportional control law, v_a = K(ω̄ − ω),
can be used to regulate the angular speed of the DC motor, ω, for which a model
was developed in P2.41. Calculate and solve a differential equation that describes the
closed-loop response of the DC motor. Using data from P2.43, select a controller gain,
K, with which the closed-loop steady-state error between the desired angular speed, ω̄,
and the actual angular speed, ω, is less than 10%. Calculate the resulting closed-
loop time-constant and sketch or use MATLAB to plot the output, ω, and the voltage,
v_a, generated in response to a reference RPM assuming zero initial
conditions. What is the maximum value of v_a?
2.48 Redo P2.47 but this time design K such that is always smaller than 12 V
when RPM.
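For a proportional law of the form v = K(ω̄ − ω) (the control law assumed here; the motor parameters are again hypothetical), the closed-loop steady-state error and time constant follow directly from the first-order closed-loop equation, as this sketch illustrates:

```python
import numpy as np

# Hypothetical motor parameters (illustrative only, not from the text)
J, beff, Kt, Ra = 1e-4, 4.1e-4, 0.02, 1.0
w_ref = 5000 * 2 * np.pi / 60      # 5000 RPM expressed in rad/s

# Closed loop with v = K*(w_ref - w):
#   J*dw/dt + (beff + Kt*K/Ra)*w = (Kt*K/Ra)*w_ref
# Steady-state error fraction: beff / (beff + Kt*K/Ra), required < 0.1
K = 9 * beff * Ra / Kt             # makes the error exactly 10% ...
K *= 1.2                           # ... so pick a slightly larger gain

den = beff + Kt * K / Ra
tau_cl = J / den                   # closed-loop time constant
err = beff / den                   # steady-state error fraction
v_max = K * w_ref                  # largest voltage, at t = 0 (w = 0)
print(K, tau_cl, err, v_max)
```

Raising K shrinks both the steady-state error and the closed-loop time constant, at the cost of a larger initial voltage v(0) = K ω̄, which is the trade-off P2.48 asks about.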
The next problems have simple examples of heat and fluid flow using ordinary differential
equations. Detailed modeling of such phenomena often requires partial differential
equations.
2.49 The temperature, T, of a substance held in the container shown in Fig. 2.25 can
be described by the differential equation

m c (dT/dt) = q + w c (T_i − T) + (T_a − T)/R,

where m and c are the substance's mass and specific heat, and R is the overall system's
thermal resistance. The input and output flow mass rates are assumed to be equal to
w. This differential equation model can be seen as a dynamic system where the output
is the substance's temperature, T, and the inputs are the heat source, q, the flow rate,
w, and the temperatures T_i and T_a. Calculate the solution to this equation when q, w,
T_i, and T_a are constants.
Figure 2.25 Diagram for P2.49.
2.50 Assume that water's density and specific heat are ρ = 1000 kg/m³ and c =
4186 J/kg K. A 50 gal (≈0.19 m³) water heater is turned off full with water at F (
C). Use the differential equation in P2.49 to estimate the heater's thermal
resistance, R, knowing that after 7 days left at a constant ambient temperature, F (
C), without turning it on, q = 0, or cycling any water, w = 0, the temperature
of the water was about F ( C).
2.51 For the same conditions as in P2.50, calculate how much time it takes for the
water temperature to reach F( C) with a constant in/out flow of 20 gal/h (
m /s) at ambient temperature. Compare your answer with the case
when no water flows through the water heater.
2.52 Consider a water heater as in P2.50 rated at 40,000 BTU/h (≈11.7 kW). Calculate
the time it takes to heat up a heater initially full with water at ambient temperature to
F ( C) without any in/out flow of water, w = 0.
2.53 Repeat P2.52 for a constant in/out flow of 20 gal/h at ambient temperature.
Compare the solutions.
2.54 Most residential water heaters have a simple on/off-type controller: the water
heater is turned on at full power when the water temperature, T, falls below a set
value, T_on, and is turned off when it reaches a second set point, T_off. For a 50 gal
(≈0.19 m³) heater as in P2.50 rated at 40,000 BTU/h (≈11.7 kW) and with thermal resistance
K/W, sketch or use MATLAB to plot the temperature of the water during 24
hours for a heater with an on/off controller set with T_on = F ( C) and T_off =
F ( C), without any in/out flow of water, w = 0. Assume that the
heater is initially full with water a tad below T_on. Compute the average water
temperature and power consumption for a complete on/off cycle.
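The on/off behavior described in P2.54 can be simulated with a simple forward-Euler loop. All numeric values below (tank size, heater rating, thermal resistance, set points, ambient temperature) are illustrative assumptions, not the text's data:

```python
import numpy as np

# Illustrative parameters (assumed values, not the text's data)
rho, c = 1000.0, 4186.0       # water density [kg/m^3], specific heat [J/kg K]
V = 0.19                      # roughly a 50 gal tank [m^3]
m = rho * V
P = 11.7e3                    # roughly 40,000 BTU/h [W]
R = 0.05                      # assumed thermal resistance [K/W]
Ta = 21.0                     # ambient temperature [C]
T_on, T_off = 48.0, 54.0      # hypothetical thermostat set points [C]

dt, hours = 10.0, 24.0
T, heater = T_on - 0.1, False # start a tad below the lower set point
temps, duty = [], 0
n = int(hours * 3600 / dt)
for _ in range(n):
    if T < T_on:              # hysteresis: turn on below T_on ...
        heater = True
    elif T > T_off:           # ... and off above T_off
        heater = False
    q = P if heater else 0.0
    # m*c*dT/dt = q - (T - Ta)/R, with no in/out flow (w = 0)
    T += dt * (q - (T - Ta) / R) / (m * c)
    temps.append(T)
    duty += heater

temps = np.array(temps)
print(temps.min(), temps.max(), duty / n)  # T cycles between the set points
```

Because heating is much faster than the slow ambient cooling, the duty cycle is small and the temperature sawtooths between the two set points, which is the behavior the problem asks you to plot.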
2.55 Repeat P2.54 for a constant in/out flow of 20 gal/h at ambient temperature.
Compare the solutions.
2.56 Repeat P2.54 with F ( C) and F ( C). What
is the impact of the choice of the on and off set points on the performance of the
controller?
3 Transfer-Function Models
The dynamic models developed in Chapter 2 relate input and output signals evolving in
the time domain through differential equations. In this chapter we will introduce
transform methods that can relate input and output signals in the frequency domain,
establishing a correspondence between a time-invariant linear system model and its
frequency-domain transfer-function. A linear system model may not be time-invariant
but, for linear time-invariant systems, the frequency domain provides an alternative
vantage point from which to perform calculations and interpret the behavior of the
system and the associated signals. This perspective will be essential to many of the
control design methods to be introduced later in this book, especially those of Chapter
7.
The Laplace transform of a function f is defined by the integral

F(s) = L{f}(s) = ∫_0⁻^∞ f(t) e^{−st} dt.    (3.1)

This integral may not converge for every function f or every s, and when it
converges it may not have a closed-form solution. However, for a large number of
common functions, some of which are given in Table 3.1, the integral converges and has
a closed-form solution. This short collection will be enough to get us through all
applications considered in this book. A sufficient condition for convergence is
exponential growth. A function f has exponential growth if there exist M > 0 and α
such that

|f(t)| ≤ M e^{αt},  t ≥ 0.

Indeed, for Re(s) > α and |f(t)| ≤ M e^{αt},

|F(s)| ≤ ∫_0^∞ M e^{αt} e^{−Re(s) t} dt = M/(Re(s) − α).
In other words, the Laplace transform integral (3.1) converges and is bounded for all s
such that Re(s) > α. The smallest possible value of α, labeled σ_c, for which the
Laplace transform converges for all s such that Re(s) > σ_c, is called the abscissa of
convergence. Because the complex-valued function F is bounded it does not have
any singularities in its region of convergence, that is for all s such that Re(s) > σ_c. In
fact F is analytic in its region of convergence. See [LeP10] or [Chu72] for a more
detailed discussion.
Table 3.1 Laplace transform pairs.

Function       f(t)        F(s)
Impulse        δ(t)        1
Step           1(t)        1/s
Ramp           t           1/s²
Monomial       tⁿ          n!/s^{n+1},  n ≥ 0
Exponential    e^{−at}     1/(s + a)
Sine           sin(ωt)     ω/(s² + ω²)
Cosine         cos(ωt)     s/(s² + ω²)
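The pairs in Table 3.1 can be checked symbolically. A quick sanity check using sympy (the standard transform pairs are assumed; `noconds=True` drops the convergence conditions that `laplace_transform` returns by default):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, w = sp.symbols('a omega', positive=True)

def L(f):
    """Unilateral Laplace transform of f(t)."""
    return sp.laplace_transform(f, t, s, noconds=True)

assert L(sp.S(1)) == 1 / s                     # step
assert L(t) == 1 / s**2                        # ramp
assert L(sp.exp(-a * t)) == 1 / (s + a)        # exponential
assert L(sp.sin(w * t)) == w / (s**2 + w**2)   # sine
assert L(sp.cos(w * t)) == s / (s**2 + w**2)   # cosine
print('Table 3.1 pairs verified')
```

The impulse row is the one entry that cannot be checked this way, since δ(t) is not an ordinary function; it is discussed at length later in this section.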
For a concrete example, consider the Laplace transform of the constant function
f(t) = 1, t ≥ 0. This function has exponential growth (M = 1, α = 0) and its
Laplace transform is

F(s) = ∫_0^∞ e^{−st} dt = 1/s,  Re(s) > 0.
From a practical standpoint, one might find it useful to think of the Laplace
transform as a calculator: it takes functions in the time domain and transforms
them into the frequency domain, where operations are quicker and easier. If needed,
the results can be transformed back into the time domain. This can be done with the
help of the inverse Laplace transform:

f(t) = (1/2πj) ∫_{σ−j∞}^{σ+j∞} F(s) e^{st} ds,    (3.4)
which takes the form of a line integral in the complex plane. Note that the line
Re(s) = σ must lie in the region of convergence of F. Fortunately it will not be necessary to
explicitly compute the integral (3.4) to find inverse Laplace transforms. The key is to use
powerful results from complex analysis to convert the integral into a sum of simpler
integrals, the residues, which need be evaluated only at points where F is singular.
We will study residues later in Section 3.4, but before we delve further into the details,
let us bring up a cautionary note about the use of the particular form of Laplace
transform in (3.1).
First, the transform uses information from f only on t ≥ 0, and we expect the
inverse Laplace transform (3.4) to be zero when t < 0, a fact we will prove later in
Section 3.4. Because of the integration, discrete discontinuities do not contribute to the
value of the integral. Consequently, the inverse Laplace transform is generally not
unique, as piecewise continuous functions that differ only on a discrete set of
discontinuities will have the same Laplace transform. Indeed, uniqueness of the Laplace
transform has to be interpreted up to a discrete set of discontinuities. With this caveat
in mind, each row in Table 3.1 is a Laplace transform pair.
Because one cannot distinguish the step function 1(t) from the constant function
f(t) = 1 in t ≥ 0, they have the same Laplace transform. However, the integration in (3.1) is
performed in the interval [0⁻, ∞), which makes one wonder whether the difference
matters. The issue is delicate and has deceived many [LMT07]. Its
importance is due to the fact that the choice of the interval [0⁻, ∞) impacts many
useful properties of the Laplace transform which depend on f(0⁻). One reason for the
trouble with t = 0 is a remarkable property of the Laplace transform: it is capable of
handling the differentiation of functions at points which are not continuous by cleverly
encoding the size of the discontinuities with the help of impulses, that is the function
δ(t) appearing on the first row of Table 3.1, for which L{δ} = 1. In order to
understand this behavior we need to learn how to compute integrals and derivatives
using the Laplace transform, which we do by invoking the differentiation and integration
properties listed in Table 3.2. All properties in Table 3.2 will be proved by you in P3.1–
P3.14 at the end of this chapter. Key properties are linearity and convolution, which we
will discuss in more detail in Section 3.2. For now we will continue on differentiation
and integration.
Table 3.2 Properties of the Laplace transform.

Linearity                     α f(t) + β g(t)      →  α F(s) + β G(s)
Integration                   ∫_0^t f(τ) dτ        →  F(s)/s
Differentiation in time       f′(t)                →  s F(s) − f(0⁻)
Differentiation in frequency  t f(t)               →  −dF(s)/ds
Convolution                   (f ∗ g)(t)           →  F(s) G(s)
Shift in time                 f(t − T) 1(t − T)    →  e^{−sT} F(s),  T ≥ 0
Shift in frequency            e^{−at} f(t)         →  F(s + a)
Initial-value                 f(0⁺) = lim_{s→∞} s F(s)
Final-value (a)               lim_{t→∞} f(t) = lim_{s→0} s F(s)

(a) The final-value property may not hold if s F(s) has a singularity on the imaginary axis or the right-hand side of the complex plane.
Consider the constant function f(t) = 1 and the unit step 1(t). Their values
agree for all t ≥ 0 and they have the same Laplace transform, F(s) = 1/s. The
constant function is differentiable everywhere and has f′(t) = 0. Application
of the formal differentiation property from Table 3.2 to the Laplace transform of the
constant function produces

s F(s) − f(0⁻) = s (1/s) − 0 = 1,

which is an impulse at the origin, obtained from Table 3.1. How should this be
interpreted? The Laplace transform can formally compute (generalized) derivatives of
piecewise smooth functions even at points of discontinuity and it indicates that by
formally adding impulses to the derivative function. If t_d is a point where the
function f is not continuous, the term

(f(t_d⁺) − f(t_d⁻)) δ(t − t_d)

is added to the generalized derivative computed by the Laplace transform at t = t_d. See
[LMT07] for more examples and details.
Things get complicated if one tries too hard to understand impulses. The first
complication is that there exists no piecewise smooth function whose Laplace
transform is equal to 1. The impulse is not a function in the strict sense. Nevertheless, some see
the need to define the impulse by means of the limit of a regular piecewise smooth
function. Choices are varied, and a common one is

δ(t) = lim_{ε→0⁺} δ_ε(t),  δ_ε(t) = 1/ε for |t| ≤ ε/2 and 0 otherwise.    (3.7)
These definitions of the impulse are unnecessary if one understands the role played by
the impulse in encoding the size and order of the discontinuities of piecewise smooth
functions. A definition of the impulse based on the formal inverse Laplace transform
formula (3.4),

δ(t) = (1/2π) ∫_{−∞}^{∞} e^{jωt} dω,

is popular when the impulse is introduced using Fourier rather than Laplace
transforms. All these attempts to define the impulse leave a bitter taste. For example,
both definitions require careful analysis of the limit as ε → 0 and the functions used in
(3.7) are not zero for t < 0, a fact that causes trouble for the unilateral definition of
the Laplace transform in (3.1). See also Section 5.1 for a discussion of a physical system,
an electric circuit with a capacitor, whose response approximates an impulse as the
losses in the capacitor become zero.
The reason for this apparent inconsistency is the fact that the impulse is not a
piecewise smooth function! The role of the impulse here, as seen before, is to encode a
discontinuity at the origin. This behavior is consistent with the formal properties of
Table 3.2 outside the origin as well. For example, a step at time t = T > 0, that is
1(t − T), for which L{1(t − T)} = e^{−sT}/s, is related to an impulse at t = T, that is
δ(t − T), for which L{δ(t − T)} = e^{−sT}, by integration: 1(t − T) = ∫_0^t δ(τ − T) dτ and
e^{−sT}/s = (1/s) e^{−sT}.
which correctly indicates the discontinuity of the first derivative. Formal differentiation
of the impulse can be interpreted as higher-order impulses [LeP10], but these will have
little use in the present book.
We are now ready to formalize a notion of linearity that can be used with dynamic
systems. Roughly speaking, a dynamic system with input u and output y
described by the mathematical model y = G(u)
is linear whenever the following property holds: any linear combination of input signals
u_1, u_2, in the form

u = α_1 u_1 + α_2 u_2,    (3.8)

generates the output signal

y = α_1 y_1 + α_2 y_2,    (3.9)

where

y_1 = G(u_1),  y_2 = G(u_2),

for any α_1 and α_2. Time-invariant systems respond the same way to the same
input regardless of when the input is applied.
It is important to note that there are systems which are linear but not time-invariant.
For example, an amplitude modulator is a linear system for which

y(t) = u(t) cos(ω_0 t).    (3.11)

Modulators are the basic building block in virtually all communication systems. You will
show in P3.50 that a modulator is linear but not time-invariant. Of course, there are
also time-invariant systems which are not linear. See P3.50 through P3.52 for more
examples of time-varying linear systems.
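The two claims about the modulator, that it is linear but not time-invariant, are easy to check numerically. In the sketch below y(t) = u(t) cos(ω₀t) with an assumed carrier frequency; a linear combination of inputs produces the matching combination of outputs, but a delayed input does not produce a correspondingly delayed output:

```python
import numpy as np

w0 = 2 * np.pi * 10            # assumed carrier frequency [rad/s]
t = np.linspace(0, 1, 2001)    # time grid, step 0.0005 s

def modulator(u):
    """Amplitude modulator y(t) = u(t) * cos(w0 * t)."""
    return u * np.cos(w0 * t)

u1 = np.sin(2 * np.pi * t)
u2 = t

# Linearity: response to 2*u1 + 3*u2 equals 2*y1 + 3*y2 (up to rounding)
lhs = modulator(2 * u1 + 3 * u2)
rhs = 2 * modulator(u1) + 3 * modulator(u2)
print(np.max(np.abs(lhs - rhs)))

# Time-invariance fails: delaying the input by 0.25 s ...
shift = 500
u_delayed = np.concatenate([np.zeros(shift), u1[:-shift]])
y_delayed_in = modulator(u_delayed)
# ... is not the same as delaying the output by 0.25 s
y_shifted_out = np.concatenate([np.zeros(shift), modulator(u1)[:-shift]])
print(np.max(np.abs(y_delayed_in - y_shifted_out)))
```

The first difference is zero up to floating-point rounding; the second is large, because the carrier cos(ω₀t) keeps running while the input is delayed.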
In this book we are mostly interested in linear time-invariant systems. While some of
the results to be presented will hold true for linear time-varying systems, many do not.
See [Bro15] for more details on time-varying linear systems. Most notably many
conclusions to be obtained in the frequency domain will not hold for time-varying
systems.
For time-invariant or time-varying linear systems, it is possible to take advantage of
the properties of the impulse, , to obtain the following compact and powerful
representation in terms of an integral known as the convolution integral.
In this very special case, in view of the sifting property of the impulse that you will
prove in P3.12, one can simply set

u(t) = δ(t − τ)

to obtain

y(t) = h(t, τ),

the response of the linear system to each component of the basis of delayed impulses,
δ(t − τ).
This is the system's impulse response. If the system model is linear, that is, if (3.8) and
(3.9) hold, then

y(t) = ∫_{−∞}^{∞} h(t, τ) u(τ) dτ.

This is the general form of the response of a linear time-varying system, in which the
impulse response, h(t, τ), may change depending on the time the impulse is applied,
τ. Note that a consequence of causality is that h(t, τ) = 0, t < τ. More importantly, the
output, y(t), depends only on past values of the input, that is u(τ), τ ≤ t. In
other words, a causal or non-anticipatory system cannot anticipate or predict its input.
For a causal linear time-invariant system the impulse response can be described in
terms of a single signal, g, with h(t, τ) = g(t − τ).
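For a causal linear time-invariant system the convolution integral can be approximated by a discrete sum. The sketch below uses the assumed impulse response g(t) = e^{−t} (that of the first-order system y′ + y = u, not an example from the text) and checks the discretized convolution against direct integration of the differential equation:

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 10, dt)
g = np.exp(-t)                 # impulse response of y' + y = u
u = np.sin(t)                  # an arbitrary bounded input

# Convolution integral y(t) = int_0^t g(t - tau) u(tau) dtau,
# approximated by a discrete sum scaled by dt
y_conv = np.convolve(g, u)[:len(t)] * dt

# Direct forward-Euler integration of y' = u - y, zero initial conditions
y_ode = np.zeros_like(t)
for i in range(1, len(t)):
    y_ode[i] = y_ode[i-1] + dt * (u[i-1] - y_ode[i-1])

print(np.max(np.abs(y_conv - y_ode)))  # small discretization error
```

Both computations approximate the same response, here y(t) = (sin t − cos t + e^{−t})/2, which confirms that the impulse response alone characterizes the LTI system.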
In this section we illustrate how the Laplace transform can be used to compute the
transfer-function and, more generally, the response of linear models to arbitrary inputs
directly from linear ordinary differential equation models. Recall a familiar example:
the car model in the form of the differential equation (2.3). Application of the
differentiation and linear properties of the Laplace transform from Table 3.2 yields
The first term is related to the Laplace transform of the input, U(s), through a rational
function; the second term is related to the initial condition through a second rational
function. The two rational functions are similar but not equal. Because they share the same
denominator, they will display similar properties, as will soon become clear. Indeed,
most of the time we can safely assume zero initial conditions and focus on the first term. Some
exceptions to this rule are discussed in Sections 4.7 and 5.3.
From the discussion in Section 3.2, the rational function G(s) multiplying U(s) is precisely the system
transfer-function. When initial conditions are zero, the system response can be
computed from the transfer-function through the formula (3.14) or, equivalently, by
the impulse response and the time-domain convolution formula (3.13). The impulse
response associated with the transfer-function (3.16) is
which was obtained directly from Table 3.1. Because g(t) = 0 for t < 0, the ordinary
differential equation model (2.3) is causal, linear, and time-invariant.
Analysis of the impulse response can provide valuable information on the dynamic
behavior of a system and clues on the response to actual input functions. Some might
be tempted to record the impulse response, g(t), experimentally after applying an
input that resembles the infinitesimal pulse (3.6) to obtain an approximate transfer-
function, a practice that we discourage due to the many difficulties in realizing the
impulse and collecting the resulting data. Better results can be obtained by system
identification methods, such as the ones described in [Lju99].
The above procedure for calculating transfer-functions directly from linear ordinary
differential equations is not limited to first-order equations. Assume for the moment
zero initial conditions, e.g. y(0⁻) = 0 in (3.15), and consider a linear system modeled
by the generic linear ordinary differential equation

a_n y^{(n)}(t) + ⋯ + a_1 y′(t) + a_0 y(t) = b_m u^{(m)}(t) + ⋯ + b_1 u′(t) + b_0 u(t),    (3.17)

whose transfer-function is the rational function

G(s) = (b_m s^m + ⋯ + b_1 s + b_0)/(a_n s^n + ⋯ + a_1 s + a_0).    (3.18)
Given the Laplace transform of any input signal, U(s), and the system transfer-
function, G(s), formula (3.14) computes the Laplace transform of the output response
signal, Y(s). From Y(s) one can compute the output response, y(t), using the
inverse Laplace transform (3.4). So far we have been able to compute the inverse
Laplace transform by simply looking up Table 3.1. This will not always be possible. For
example, Table 3.1 does not tell one how to invert the generic rational transfer-
function (3.18) when n > 1. If we want to take full advantage of the frequency-domain
formula (3.14) we need to learn more sophisticated methods.
Our main tool is a result that facilitates the computation of integrals of functions of a
single complex variable known as Cauchy’s residue theorem. The theory leading to the
theorem is rich and beyond the scope of this book. However, the result itself is
pleasantly simple and its application to the computation of the inverse Laplace
transform is highly practical. One does not need to talk about integration and residues
at all to be able to calculate inverse Laplace transforms. However, as we shall also use
residues in connection with Chapter 7, we choose to briefly review some of the key
results. Readers who might feel intimidated by the language can rest assured that
practical application of the theory will be possible even without complete mastery of
the subject. Of course, a deeper knowledge is always to be preferred, and good books
on complex variables, e.g. [BC14, LeP10], are good complementary sources of
information.
We will use the concepts of an analytic function and singular points. A function f of a
single complex variable, s, is said to be analytic at a given point s_0 if f and f′, the
complex derivative of f, exist at s_0 and at every point in a neighborhood of s_0. Asking
for a function to be analytic at a point is no small requirement, as one requires more
than the mere existence of the derivative. Indeed, it is possible to show that if f is
analytic at s_0 then not only do f and f′ exist but also derivatives of all orders exist
and are themselves analytic at s_0 [BC14, Section 57]. Furthermore, the function's Taylor
series expansion exists and is guaranteed to converge in a neighborhood of s_0. By
extension, we say that f is analytic in an open set S if f is analytic at all points of S.
Functions that have only a finite number of singularities, such as rational functions,
have only isolated singularities. In the case of rational functions, the singular points are
the finite number of roots of a polynomial, the denominator, and all such singularities
are poles. When we say singularity in this book we mean an isolated singularity.
The actual integration requires a suitable path. The contour in Fig. 3.1 is a positively
oriented circle that can be parametrized as

s(θ) = s_0 + ρ e^{jθ},  0 ≤ θ ≤ 2π,

where s_0 is the center of the circle, ρ its radius, and θ a parameter describing the
traversal of the circle in the counter-clockwise direction. The parameters used to plot
in Fig. 3.1 were , . The same parametrization can describe
the negatively oriented circle in Fig. 3.1 on replacing "θ" with "−θ" to reverse the
direction of travel.
Figure 3.1 Simple closed contours in the complex plane. The rectangle (thick) and
the circle (thin) contours are positively oriented and the circle (dashed) is
negatively oriented.
and substituting the parametrization into the integral.
Note that one obtains zero after integration along any circle since the result depends
neither on s_0 nor on ρ. It gets better than that: the integral is zero even if C is
replaced by any simple closed contour and f is replaced by any function that is analytic
on and inside the contour. This incredibly strong statement is a consequence of the
following key result [BC14, p. 76]:
Theorem 3.1 (Cauchy's residue theorem) If a function f is analytic inside and on the
positively oriented simple closed contour C except at the singular points s_k,
k = 1, …, n, inside C, then

∮_C f(s) ds = 2πj Σ_{k=1}^{n} Res_{s=s_k} f(s).
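Theorem 3.1 can be verified numerically for a simple integrand. The sketch below parametrizes a circle, approximates the contour integral of f(s) = 1/(s − 1) by a Riemann sum, and recovers 2πj times the residue when the pole lies inside the contour and zero when it does not:

```python
import numpy as np

def contour_integral(f, center, radius, n=20000):
    """Approximate the integral of f along a positively oriented circle,
    using s(theta) = center + radius*exp(j*theta) and
    ds = j*radius*exp(j*theta) d(theta)."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    s = center + radius * np.exp(1j * theta)
    ds = 1j * radius * np.exp(1j * theta)
    return np.sum(f(s) * ds) * (2 * np.pi / n)

f = lambda s: 1 / (s - 1)    # simple pole at s = 1, residue 1

I_in = contour_integral(f, center=0, radius=2)     # pole inside contour
I_out = contour_integral(f, center=0, radius=0.5)  # pole outside contour

print(I_in)    # close to 2*pi*j
print(I_out)   # close to 0
```

For a smooth periodic integrand the trapezoidal-type sum converges extremely fast, so both results agree with the theorem essentially to machine precision.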
Application of Cauchy’s residue theorem to calculate the integral (3.20) may not
seem very impressive. A more sophisticated and useful application is the computation
of the inverse Laplace transform. The main idea is to perform the line integral (3.4) as
part of a suitable contour integral, such as the ones shown in Fig. 3.2, where the linear
path is made closed by traversing a semi-circular path of radius R centered at s = σ,
as labeled in the figure.
Figure 3.2 Simple closed contours used to compute the inverse Laplace transform;
crosses indicate singular points of ; and denote the semi-circular part of
the contours; encloses all singularities, is free of singularities, and
vanishes on the semi-circular parts of the contours as .
For example, integration along the simple positively oriented contour can be
split into
If F is such that
for instance, if F vanishes for large s, then it is possible to evaluate the inverse Laplace
transform by calculating residues, that is,
where the s_k's are the singular points of F(s)e^{st} located inside the contour, which in
this case coincides with the half-plane Re(s) < σ as R → ∞. Moreover, if σ is
chosen such that σ > σ_c then all singular points of F are enclosed as R → ∞.
Because e^{st} is analytic everywhere, the singular points of F(s)e^{st} are simply the singular
points of F. The above integral formula is known as the Bromwich integral [BC14,
Section 95]. The main advantage of this reformulation is that evaluation of the
Bromwich integral reduces to a calculus of residues.
Sufficient conditions for (3.21) to hold exist but are often restrictive. Roughly
speaking, it is necessary that F becomes small as |s| gets large, i.e.

|F(s)| ≤ M/|s|^k    (3.24)

for some M > 0 and k > 0. That k needs to be such that k > 1 in (3.24) for the semi-circular path integral in
(3.21) to converge to zero also at t = 0 can be attributed to a loss of continuity of the
inverse Laplace transform. Indeed, the inverse Laplace transform of a rational function
with a simple pole at s = −a, i.e. F(s) = 1/(s + a), is of the form e^{−at}, t ≥ 0.
Because the inverse Laplace transform must be zero for all t < 0, this means that there
is a discontinuity at the origin. For continuity at the origin, the initial-value property in
Table 3.2 suggests that f(0⁺) = lim_{s→∞} s F(s) = 0, which requires k > 1 in (3.24). This
situation can be partially remedied by using a slightly more sophisticated setup using
the inverse Fourier transform [Chu72, Theorem 6, Section 68] that allows k = 1 at the
expense of averaging at discontinuities. A key result is unicity of the inverse Laplace
transform [Chu72, Theorem 7, Section 69], which allows one to be content with inverse
Laplace transforms obtained by any reasonable method, be it table lookup, calculus of
residues, or direct integration.
When F is only bounded, that is

|F(s)| ≤ M,    (3.25)

but M cannot be made zero, it is usually possible to split the calculation as a sum of a
bounded function, often a constant, and a convergent function satisfying (3.23). We will
illustrate this procedure with rational functions in Section 3.5. If impulses at the origin
are present one needs to explicitly compute the integral (3.4) at t = 0. The simplest
possible example is F(s) = 1. Since F(s)e^{st} = e^{st} is analytic everywhere,
f(t) = 0 for all t > 0. In some special cases when (3.25) does not hold, it may
still be possible to compute the inverse Laplace transform using residues. For example,
does not satisfy (3.25) but , , can be computed
after shifting the origin of time (see the time-shift property in Table 3.2). Of course
for if so one should be careful when applying this result.
Care is also required when the function is multivalued. See [LeP10, Section 10-21]
for a concrete example.
In order to take advantage of the Bromwich integral formula (3.22) we need to learn
how to calculate residues. If s_0 is an isolated singular point, the residue of f at s_0 is the
result of the integration

Res_{s=s_0} f(s) = (1/2πj) ∮_C f(s) ds,

where C is any positively oriented simple closed contour that encloses s_0 but no other
singular point of f.
We saw in Section 3.3 that transfer-functions for a large class of linear system models
are rational. The Laplace transforms of many input functions of interest are also
rational or can be approximated with rational functions. For this reason, we often have
to compute the inverse Laplace transform of rational functions. For example, in Section
3.3 we computed the inverse Laplace transform of a rational transfer-function, G(s), to
obtain the impulse response, g(t). Many calculations discussed in
Section 3.4 also become much simpler when the functions involved are rational.
A rational function of the single complex variable s is one which can be brought
through algebraic manipulation to a ratio of polynomials in s. When F(s) = n(s)/d(s) is a rational
function its poles are the roots of the denominator, that is solutions to the polynomial
equation

d(s) = 0.
Compare this equation with (2.5). The zeros of a rational transfer-function are the roots
of the numerator. We will have more to say about zeros later.
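Numerically, the poles and zeros of a rational transfer-function are just polynomial roots. A small sketch with an illustrative transfer-function (not an example from the text):

```python
import numpy as np

# G(s) = (s + 3) / (s^2 + 3 s + 2), an illustrative rational function
num = [1.0, 3.0]         # numerator coefficients:   s + 3
den = [1.0, 3.0, 2.0]    # denominator coefficients: s^2 + 3 s + 2

zeros = np.roots(num)    # roots of the numerator
poles = np.roots(den)    # roots of the denominator

print(zeros)  # the single zero at -3
print(poles)  # the poles -2 and -1
```

Repeated roots show up in the output according to their algebraic multiplicity, which is exactly the information needed for the residue formulas that follow.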
Roots may appear more than once, according to their algebraic multiplicity. Having
obtained the poles of a rational function we are ready to compute the residues in
(3.22).
Consider first the case of a simple pole p, i.e. a pole with multiplicity one. Because
(s − p) F(s) e^{st} is analytic for all s near p we can write

Res_{s=p} F(s) e^{st} = lim_{s→p} (s − p) F(s) e^{st}.    (3.31)

For example, F(s) = 1/(s + a) has a simple pole at p = −a, and

f(t) = Res_{s=−a} F(s) e^{st} = e^{−at},  t ≥ 0.
Of course one could have computed this simple inverse Laplace transform just by
looking up Table 3.1. The next example, which we borrow from Section 2.3, is slightly
more involved.
We repeat the calculation of the step response performed in Section 2.3, this time
using Laplace transforms. Recall the unit step function, 1(t), and its Laplace transform,
1/s. Now let

ū(t) = ū 1(t)

be a step of amplitude ū. Substituting into (3.15) we obtain a transform with two
poles. Both have multiplicity one. Applying (3.22) and (3.31) the response is
obtained by evaluating the residue at each pole.
Converse application of the formula (3.22) in the case of a rational function provides
an alternative perspective on the above calculation. Combining (3.22) and (3.31) we
obtain
Formula (3.22) requires the convergence condition (3.23), which means that F
needs to be strictly proper when F is rational. If F is proper but not strictly
proper, then the boundedness condition (3.25) holds and it is possible to split the
calculation as discussed in Section 3.4. For example,

F(s) = s/(s + 1) = 1 − 1/(s + 1),  f(t) = δ(t) − e^{−t},  t ≥ 0.
When F is proper but not strictly proper the polynomial term is simply a constant
which can be calculated directly by evaluating the limit

lim_{s→∞} F(s).
The case of poles with multiplicity greater than one is handled in much the same
way. Let p be a pole with multiplicity m > 1. In this case

Res_{s=p} F(s) e^{st} = lim_{s→p} (1/(m−1)!) (d^{m−1}/ds^{m−1}) [(s − p)^m F(s) e^{st}].    (3.33)
Let us compute the closed-loop response of the car linear model, Equation (2.12) in
Section 2.5, to a reference input of the form

v̄(t) = ā t 1(t).

This is a ramp of slope ā. For the car, the coefficient ā represents a desired
acceleration. For example, this type of reference input could be used instead of a
constant velocity reference to control the closed-loop vehicle acceleration when a user
presses a “resume” button on the cruise control panel and the difference between the
current and the set speed is too large. In Chapter 4 we will show how to compute the
closed-loop transfer-function directly by combining the open-loop transfer-function
with the transfer-function of the controller. For now we apply the Laplace transform to
the previously computed closed-loop differential equation (2.12) to calculate the
closed-loop transfer-function
where
The rational function is the closed-loop transfer-function. From Table 3.1 the
Laplace transform of the ramp input is ā/s², where ā is the ramp slope.
Combined application of (3.22), (3.31), and (3.33) leads to a generalized form for the
expansion of rational functions in partial fractions in which poles with higher
multiplicities appear in powers up to their multiplicities. In our example
In anticipation of things to come, we substitute the car parameters (3.36) into the
response and rewrite it as
and this error grows unbounded unless . We will have more to say about this
in Section 4.1. The closed-loop ramp response for the car model is plotted in Fig. 3.3 for
various values of the gain K using and estimated previously.
Figure 3.3 Closed-loop response of the velocity of the car cruise control according to the
linear model (2.12) to a ramp with slope mph/s (reference, dashed line),
calculated for and and various values of the control gain K.
The figure shows the reference ramp (dashed line), and the complete response and the
steady-state response (dash–dotted lines) for various values of K.
The last case we shall cover is that of complex poles. Even though no new theory is
necessary to handle complex poles, an example is in order to illustrate simplifications
that will lead to a real signal, f(t), even when the Laplace transform, F(s), has
complex poles. Consider the strictly proper transfer-function

F(s) = 1/((s − p)(s − p̄)),

where the symbol x̄ denotes the complex conjugate of x. Because both poles are
simple, the inverse Laplace transform is computed by straightforward application of
formulas (3.22) and (3.31):

f(t) = r e^{pt} + r̄ e^{p̄t} = 2 Re(r e^{pt}),

where

r = Res_{s=p} F(s) = 1/(p − p̄).
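The cancellation of the imaginary parts can be checked numerically. The sketch below uses the illustrative pole pair p = −1 + 2j, for which F(s) = 1/((s + 1)² + 4) and the inverse transform is the standard damped sine (1/2)e^{−t} sin 2t:

```python
import numpy as np

# F(s) = 1/((s+1)^2 + 4) has complex-conjugate poles p and conj(p)
p = -1 + 2j
r = 1 / (p - np.conj(p))     # residue of F at p: 1/(p - pbar) = 1/(4j)

t = np.linspace(0, 5, 11)
# Sum of the two residue terms: should be purely real
f_residues = r * np.exp(p * t) + np.conj(r) * np.exp(np.conj(p) * t)
# Known inverse transform of F for comparison
f_real = 0.5 * np.exp(-t) * np.sin(2 * t)

print(np.max(np.abs(f_residues.imag)))           # imaginary parts cancel
print(np.max(np.abs(f_residues.real - f_real)))  # matches the real signal
```

Because the residues at conjugate poles are themselves conjugates, the two terms always combine into 2 Re(r e^{pt}), a real exponentially weighted sinusoid.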
3.6 Stability
A measure of good behavior of a system with impulse response g is that the output,
y, be bounded whenever the input, u, is bounded. That is, we would like to
verify the implication

sup_t |u(t)| < ∞  ⟹  sup_t |y(t)| < ∞    (3.39)

for all possible input functions u at all times t. If (3.39) holds we say that the system
is bounded-input–bounded-output stable (BIBO stable). Because

|y(t)| ≤ ∫_0^∞ |g(τ)| |u(t − τ)| dτ ≤ (sup_t |u(t)|) ∫_0^∞ |g(τ)| dτ,

the implication (3.39) holds whenever

‖g‖₁ = ∫_0^∞ |g(τ)| dτ < ∞.    (3.40)

The above condition depends only on the system, in this case represented by the
impulse response g, and not on a particular input, u, which is reassuring. The
reason for using the symbol ‖g‖₁ will be explained in Section 3.9. Boundedness of
‖g‖₁ is not only sufficient but also necessary for BIBO stability: for any given
t, the bounded input signal u(τ) = sgn g(t − τ) produces an output that attains
∫_0^∞ |g(τ)| dτ at time t.
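The condition can be explored numerically by approximating the integral of |g|. In this sketch (both impulse responses are assumed examples, not the text's) the stable impulse response g(t) = e^{−t} has unit integral, while the integrator's impulse response g(t) = 1 is bounded yet not absolutely integrable, so the integrator is not BIBO stable:

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 100, dt)

# A stable impulse response: the integral of |g| converges (to 1)
g_stable = np.exp(-t)
print(np.sum(np.abs(g_stable)) * dt)           # approximately 1

# The integrator g(t) = 1: the partial integrals keep growing,
# so the L1 norm is unbounded even though g itself is bounded
g_integrator = np.ones_like(t)
norm_T = lambda g, T: np.sum(np.abs(g[:int(T / dt)])) * dt
print(norm_T(g_integrator, 10), norm_T(g_integrator, 100))
```

This matches the classical fact that an integrator turns the bounded input u(t) = 1 into the unbounded output y(t) = t.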
One should be careful when generalized functions take part in the impulse response.
Without getting into details on how to evaluate the resulting integrals, note that
derivatives of the impulse cannot be present, since they produce unbounded outputs
in response to bounded inputs.
For linear time-invariant systems, it is possible to verify the stability condition (3.40)
in the frequency domain. A linear time-invariant system with impulse response g is
asymptotically stable if and only if its transfer-function, G(s) = L{g}(s), converges
and is bounded for all Re(s) ≥ 0; in other words, if G does not have poles on the
imaginary axis or on the right-hand side of the complex plane. The meaning of the
terminology asymptotic stability will be discussed later in Chapter 5. To show one side
of this statement, recall that |G(s)| ≤ ‖g‖₁ for all s such that Re(s) ≥ 0. From (3.1)
where all singularities of the first component are located on the left-hand side of the
complex plane, i.e. they are such that Re(s) < 0; all singularities of the second
component are located on the right-hand side of the complex plane, i.e. Re(s) > 0;
and all singularities of the third component are on the imaginary axis. When F
is rational, the components of (3.41) can be obtained
from a partial-fraction expansion. The corresponding components of the output,
The component of the output associated with the singularities on the left-hand side of
the complex plane, y_tr, is called the transient. In order to justify this name we use the
notion of asymptotic stability. From Section 3.6, this component is asymptotically
stable and therefore bounded. It would be great if we could at this point conclude from
asymptotic stability that

lim_{t→∞} y_tr(t) = 0.    (3.43)

Since the poles, p_k, and the residues are bounded and Re(p_k) < 0, the
function y_tr is comprised of an impulse at the origin and exponentials that vanish
as time grows, from which (3.43) follows.
When G is proper and rational it is possible to go even further and prove the
converse implication of asymptotic stability mentioned at the end of Section 3.6. Taking
absolute values:
which implies that ‖g‖₁ is bounded. Having poles with multiplicity greater than one does
not affect the above conclusion since any additional terms involving
Conversely, the component with all singularities on the right-hand side of the complex
plane will, by an argument similar to the one used above, display unbounded growth.
This growth is at least exponential. In the case that F is rational the output is
comprised of terms of the form t^{l−1} e^{p_k t}, l = 1, …, m_k, stemming from each
pole p_k, k = 1, …, n, with multiplicity m_k.
is what we refer to as the steady-state response. The signal may not necessarily
be bounded. For example, if is a pole with multiplicity higher than one then
will grow unbounded. However, this growth is polynomial. For example, if
is rational and is a pole with multiplicity m then will display growth of
the order of . Note that one might argue that it makes no sense to talk about
steady-state or even transient response if the response grows unbounded. One can, however, always speak of transient and steady-state components of the response even when the overall response diverges, as done in this section.
We shall revisit some earlier examples. In the calculation of the step response of the
car in Section 3.3,
from which
from which
Note the linear growth of the steady-state response because of the double pole at the
origin.
The key idea behind the notion of frequency response is the calculation of the steady-
state response of a linear time-invariant system to the family of inputs:
As we will see shortly, for a linear time-invariant system without poles on the imaginary
axis,
where is the system’s transfer-function. That is, the steady-state response is
another cosine function with the same frequency, , but different amplitude and
phase. The amplitude and phase can be computed by evaluating the transfer-function,
, at . We stress that the assumption that the linear system is time-
invariant should not be taken for granted. For example, the modulator (3.11) is linear
but not time-invariant. In particular, its steady-state response to the input (3.44) is
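The frequency-response recipe can be sketched numerically. The snippet below uses a hypothetical first-order G(s) = 1/(s+1), not a system from the text: evaluating it at s = jω gives the amplitude scaling |G(jω)| and phase shift of the steady-state cosine response.

```python
import cmath
import math

def G(s):
    # Hypothetical first-order transfer-function G(s) = 1/(s + 1);
    # its single pole at s = -1 is off the imaginary axis.
    return 1.0 / (s + 1.0)

omega = 1.0                  # input frequency, rad/s
Gjw = G(1j * omega)          # evaluate the transfer-function at s = j*omega
gain = abs(Gjw)              # steady-state amplitude scaling, here 1/sqrt(2)
phase = cmath.phase(Gjw)     # steady-state phase shift, here -pi/4 rad

def y_ss(t):
    # Steady-state response to the input cos(omega*t):
    # same frequency, scaled amplitude, shifted phase.
    return gain * math.cos(omega * t + phase)
```

At ω = 1 this gives gain 1/√2 and phase −π/4, the familiar −3 dB, −45° point of a first-order lag.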
Assuming zero initial conditions, this input is applied to a linear time-invariant system
with transfer-function to produce an output with transform
Assume that does not have poles on the imaginary axis, i.e. . In this case the only poles in will be the pair originating from the input:
is given by
The notion of stability discussed in Section 3.6 highlights the importance of asserting
the boundedness of signals that interact with systems. Signals can be measured by
computing norms. A variety of signal norms exist, spanning a large number of
applications. We do not have room to go into the details here, so the interested reader
is referred to [DFT09] for a more advanced yet accessible discussion.
We have seen these norms at work in Section 3.6. For example, the stability condition
(3.40) is equivalent to bounded and BIBO stability implies
Any norm has the following useful properties: is never negative; moreover,
if and only if for all ; and finally, the triangle inequality
Some signals can be bounded when measured by a norm while being unbounded
according to a different norm. For instance, the signal
This bound is tight in the case , i.e. (3.46), as we have shown in Section 3.6. For
other values of p there might be tighter bounds. One example is , which we
address next.
after rearranging the order of the integrals. The key step now is to use the formula (3.7)
to write
where
The quantity
which is of the form (3.46) with the 2-norm replacing the -norm for the input and
output signals. It is possible to show that this bound is tight, in the sense that there
exists an input signal such that is bounded and equality holds in (3.48). The proof is rather technical, though; see [Vid81, Chapter 3].
stable, but the 2-norm is unbounded. A rational needs to be proper but not
strictly proper if is to be bounded. The norm also provides the useful
bound
You will show that this bound is tight in P3.31 and will learn how to compute the
norm using residues in P3.32.
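The norm on the right-hand side of the 2-norm bound (3.48) is the peak of the magnitude of the frequency response. A crude numeric sketch, for the hypothetical G(s) = 1/(s+1) (this grid-search estimate is illustrative, not a reliable general-purpose algorithm):

```python
def G(s):
    # Hypothetical proper, asymptotically stable G(s) = 1/(s + 1).
    return 1.0 / (s + 1.0)

# Sample |G(j*omega)| on a frequency grid. For this G the supremum over
# all frequencies is attained at omega = 0, where |G(j0)| = 1.
grid = [k * 0.01 for k in range(10001)]   # omega from 0 to 100 rad/s
hinf_estimate = max(abs(G(1j * w)) for w in grid)
```

For a first-order lag the estimate returns exactly 1, the DC gain; for systems with resonant peaks a much finer grid (or a dedicated algorithm) would be needed.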
Problems
3.1 Show that the Laplace transform, (3.1), and the inverse Laplace transform, (3.4),
are linear operators. Hint: The operator L is linear when
for all functions , and scalars and .
In P3.2–P3.14, assume that all functions of time and their derivatives exist and are
continuous. Assume also that they are zero when .
3.2 Assume that the function f has exponential growth. Prove the integration
property in Table 3.2. Hint: Evaluate the integral in by parts.
3.3 Assume that the function f has exponential growth and that the Laplace
transform of exists. Prove the first differentiation property in Table 3.2 for the first
derivative. Hint: Evaluate the integral in by parts.
3.4 Use the differentiation property for the first-order derivative to formally prove
the time differentiation property in Table 3.2 for higher-order derivatives.
3.5 Prove the frequency differentiation property in Table 3.2. Hint: Calculate
by differentiating inside the Laplace integral.
3.6 Change variables under the Laplace integral to prove the time-shift property in
Table 3.2.
3.7 Change variables under the Laplace integral to prove the frequency-shift property
in Table 3.2.
Hint: Use .
3.10 Use P3.9, the time integration property, and the final value property to show
that
3.12 Use the convolution property to prove the sifting property of the impulse:
Interpret the result in terms of the differentiation property of the Laplace transform.
In the following questions assume that is the continuous pulse in Fig. 3.4(a).
when f is continuous and differentiable. Hint: Use the mean-value theorem to write
, where .
3.19 Calculate
3.20 Calculate
3.21 Can one switch the order of the limit and integration in P3.15–P3.20?
In the following questions assume that is the continuous pulse in Fig. 3.4(b).
3.24 Calculate
3.25 Calculate
Use this inequality to prove (3.27). Hint: Use P3.27 and P3.28.
3.30 Modify the arguments in P3.27–P3.29 to prove (3.21) assuming (3.24) is true and
.
3.31 Assume that the causal impulse response is such that the transfer-function
is asymptotically stable and is bounded. If
show that
and
where is the contour from Fig. 3.2 with . Explain how to compute this
integral using Cauchy’s residue theorem (Theorem 3.1). Use this method to verify that
when .
3.39 Compute the Laplace transform of the following signals defined for :
(a) ;
(b) ;
(c) ;
(d) ;
(e) ;
(f) ;
(g) ;
(h) ;
(i) ;
(j) ;
(k) ;
(l) .
3.40 Compute the expansion in partial fractions of the following rational functions:
(a) ;
(b) ;
(c) ;
(d) ;
(e) ;
(f) ;
(g) ;
(h) ;
(i) ;
(j) ;
(k) ;
(l) .
(a) ;
(b) ;
(c) ;
(d) ;
(e) ;
(f) ;
(g) ;
(h) ;
(i) ;
(j) ;
(k) .
3.42 Use the Laplace transform to solve the following linear ordinary differential
equations:
(a) , ;
(b) , ;
(c) , ;
(d) , ;
(e) , , ;
(f) , ;
(g) , , ;
(h) , ;
(i) , ;
(j) , ;
(k) , ;
(l) , , ;
(m) , ;
(n) , , ;
(o) , .
3.43 Prove that, when , , is such that , for all , then its
Laplace transform is analytic in (entire) or its singularities are removable. Verify this
by showing that the following functions have only removable singularities:
(a) ;
(b) ;
(c) ;
(d) ;
(e) ;
(f) ;
(g) .
(a) ;
(b) ;
(c) ;
(d) ;
(e) ;
(f) ;
(g) ;
(h) ;
(i)
3.45 Let C be the unit circle traversed in the counter-clockwise direction and calculate
the contour integral
using Cauchy’s residue theorem (Theorem 3.1) for the following functions:
(a) ;
(b) ;
(c) ;
(d) ;
(e) ;
(f) ;
(g) .
Compute the system’s transfer-function. What is the order of the system? Is the system
asymptotically stable? Assuming zero initial conditions, calculate and sketch the
response to a constant input , . Identify the transient and steady-state
components of the response.
3.48 Let , , be such that for all and let be its Laplace
transform. Show that
is a periodic function with period T. Use this fact to calculate and sketch the plot of the
inverse Laplace transform of
3.50 A modulator is the basic building block of any radio. A modulator converts
signals from one frequency range to another frequency range. Given an input ,
, an amplitude modulator (AM modulator) produces the output
Show that the amplitude modulator system is linear and causal, but not time-invariant.
3.52 Show that the impulse response of the sample-and-hold system, P3.51, is the
function
by verifying that
In the following questions, assume zero initial conditions unless otherwise noted.
3.53 You have shown in P2.4 that the ordinary differential equation
3.55 Consider the simplified model of the rotating machine described in P3.54.
Calculate the machine’s angular velocity, , obtained in response to a constant-
torque input, , . Identify the transient and the steady-state components
of the response.
3.57 Whenever possible, calculate the steady-state responses in P3.55 and P3.56
using the frequency-response method. Compare answers.
3.58 Consider the simplified model of the rotating machine described in P3.54.
Calculate the machine’s angular position, , obtained in response to a constant-
torque input, , . Identify the transient and the steady-state components
of the response.
3.60 Whenever possible, calculate the steady-state responses in P3.58 and P3.59
using the frequency-response method. Compare your answers.
3.62 You have shown in P2.18 that the ordinary differential equation
is a simplified description of the motion of the elevator in Fig. 2.18(b), where is the
angular velocity of the driving shaft and is the elevator’s load linear velocity.
Treating the gravitational torque, , as an input, calculate the
transfer-function, , from the gravitational torque, w, to the elevator’s load linear
velocity, , and the transfer-function, , from the motor torque, , to the elevator’s
load linear velocity, , and show that
Assume that all constants are positive. Are these transfer-functions asymptotically
stable?
3.63 Consider the simplified model of the elevator described in P3.62. Calculate the
elevator’s linear velocity, , obtained in response to a constant-torque input,
, , and a constant gravitational torque , .
Identify the transient and the steady-state components of the response.
3.65 Whenever possible, calculate the steady-state responses in P3.63 and P3.64
using the frequency-response method. Compare answers.
3.66 Consider the simplified model of the elevator described in P3.62. Calculate the
elevator’s linear position, , obtained in response to a constant-torque input,
, , and a constant gravitational torque , .
Identify the transient and the steady-state components of the response.
3.68 Whenever possible, calculate the steady-state responses in P3.66 and P3.67
using the frequency-response method. Compare answers.
3.71 You have shown in P2.27 that the ordinary differential equation
3.74 Calculate the steady-state responses in P3.72 and P3.73 using the frequency-
response method. Compare answers.
3.76 You have shown in P2.32 that the ordinary differential equations
3.80 You have shown in P2.34 that the ordinary differential equation
is an approximate model for the RC electric circuit in Fig. 2.22(a). Calculate the transfer-
function from the input voltage, v, to the capacitor voltage, . Assume that all
constants are positive. Is this transfer-function asymptotically stable?
3.81 Consider the model of the circuit described in P3.80. Calculate the capacitor
voltage, , obtained in response to a constant input voltage, , .
Identify the transient and the steady-state components of the response.
3.83 Calculate the steady-state responses in P3.81 and P3.82 using the frequency-
response method. Compare answers.
3.85 You have shown in P2.36 that the ordinary differential equation
is an approximate model for the RLC electric circuit in Fig. 2.22(b). Calculate the
transfer-function from the input voltage, v, to the capacitor voltage, . Assume that
all constants are positive. Is this transfer-function asymptotically stable?
3.86 Consider the model of the circuit described in P3.85. Calculate the capacitor
voltage, , obtained in response to a constant input voltage, , .
Identify the transient and the steady-state components of the response. How do the
parameters R, L, and C affect the response?
3.88 Calculate the steady-state responses in P3.86 and P3.87 using the frequency-
response method. Compare answers.
3.90 You have shown in P2.38 that the ordinary differential equation
is an approximate model for the electric circuit in Fig. 2.23. Calculate the transfer-
function from the input voltage, v, to the output voltage, . Assume that all constants
are positive. Is this transfer-function asymptotically stable?
3.91 Consider the model of the circuit described in P3.90. Calculate the output
voltage, , obtained in response to a constant input voltage, , .
Identify the transient and the steady-state components of the response.
3.93 Calculate the steady-state responses in P3.91 and P3.92 using the frequency-
response method. Compare answers.
3.95 You have shown in P2.41 that the ordinary differential equation
is a simplified description of the motion of the rotor of the DC motor in Fig. 2.24, where
is the rotor angular velocity. Calculate the transfer-function from the armature
voltage, , to the rotor’s angular velocity, , then calculate the transfer-function from
the voltage, , to the motor’s angular position, . Assume that all
constants are positive. Are these transfer-functions asymptotically stable?
3.96 Consider the simplified model of the DC motor described in P3.95. Calculate the
motor’s angular velocity, , obtained in response to a constant-armature-voltage
input, , . Identify the transient and the steady-state components of
the response.
3.98 Calculate the steady-state responses in P3.96 and P3.97 using the frequency-
response method. Compare answers.
3.100 You have shown in P2.49 that the temperature of a substance, T (in K or in C),
flowing in and out of a container kept at the ambient temperature, , with an inflow
temperature, , and a heat source, q (in W), can be approximated by the differential
equation
where m and c are the substance’s mass and specific heat, and R is the overall system’s
thermal resistance. The input and output flow mass rates are assumed to be constant
and equal to w (in kg/s). Calculate the transfer-functions from the inflow temperature,
, the ambient temperature, , and the heat source, q, to the substance’s
temperature, T. Assume that all constants are positive. Are these transfer-functions
asymptotically stable?
3.103 Calculate the steady-state responses in P3.101 and P3.102 using the frequency-
response method. Compare your answers.
3.104 Assume that water’s density and specific heat are kg/m and
J/kg K and consider a 50 gal ( m ) tank with K/W,
F( C), and BTU/h ( kW). Sketch or use MATLAB
to plot the responses you obtained in P3.101 and P3.102.
1 A formal setup that is comfortable is that of piecewise continuous or piecewise smooth (continuous and infinitely differentiable) functions with only a discrete set of discontinuities, such as the one adopted in [LeP10].
2 The notation means the one-side limit , which is used to accommodate possible discontinuities of at the origin.
4 A much older engineering student would be familiar with a slide rule, which was a
5 Versions of the Laplace transform that operate on functions on the entire real axis also
7 Formally it is a generalized function or distribution. See [KF75, Chapter 21] and [LMT07].
8 See P3.15–P3.26.
9 The Laplace transform of the impulse converges everywhere and has no singularities, and hence can be chosen to be 0 in (3.4).
10 See [Kai80, Chapter 1] for some not so obvious caveats.
13 The initial conditions here are .
14 A neighborhood is a sufficiently small open disk , with .
15 If , where x, and then exists only if and . These are the Cauchy–Riemann equations [BC14, Section 21].
16 The function is singular at and every where is integer. The singular points are all isolated. However, is not isolated since for any such that there exists a large enough k such that is in .
18 This is one case in which the conditions (3.23) and (3.25) may be too strong. In fact, if
19 One of the most remarkable results in abstract algebra is the Abel–Ruffini theorem [Wae91, Section 8.7], which states that no general “simple formula” exists that can express the roots of polynomials of order 5 or higher. In this sense, it is not possible to factor such polynomials exactly and we shall rely on numerical methods and approximate roots.
21 In this form, this formula follows formally from integration by parts. See P3.13. Alternatively, one can use the derivative property of the Laplace transform in Table 3.2.
22 If has a component without singularities (entire) it can be grouped with . As discussed at the end of Section 3.4, these terms affect the response only at .
24 If there are repeated poles one must use the more complicated expressions in Section 3.5.
25 Our definition of steady-state is indeed a bit unorthodox. One might prefer to further split the steady-state response in order to separate imaginary poles with multiplicity greater than one and enforce boundedness. This kind of exercise is not free from flaws. For example, what are the steady-state components of the signal with Laplace transform
26 When does have poles on the imaginary axis the steady-state response will have contributions from these imaginary poles as well. However, as we will see in Chapter 7, the frequency response is still useful in the presence of imaginary poles.
30 We have not paid much attention to what form of integral to use so far. For most of our developments the Lebesgue integral [KF75] is the one assumed implicitly.
31 is a fancy replacement for . When the maximum of a certain function is not attained at any point in its domain we use to indicate its least upper bound. For example, the function is such that for all , hence . On the other hand, is equal to 1.
32 Impulse derivatives of any order have unbounded p-norms, and hence are not
Using the Laplace transform we can operate with dynamic linear systems and their
interconnections as if they were static systems, as was done in Chapter 1. Consider for
instance the series connection of systems depicted in Fig. 4.1. We apply the Laplace
transform to the signals , , and :
where and are the transfer-functions for the systems in blocks and
. These are the same expressions that would have been obtained if and
were static linear gains. Eliminating we obtain
Virtually all formulas computed so far that describe the interconnection of static
systems hold verbatim for the interconnection of dynamic systems if static gains are
replaced by transfer-functions. This is true also for closed-loop feedback configurations
such as the one in Fig. 4.2, which is a reproduction of the block-diagram originally
depicted in Fig. 1.8. In this feedback loop
Eliminating and
We assume that to compute
This result agrees with (3.35) and (3.36), which were computed earlier in Section 3.5
from the closed-loop ordinary differential equation (2.12).
The dynamic formulas should be interpreted in the light of the Laplace transform
formalism discussed in Chapter 3. Take, for example, the condition ,
which was assumed to hold in order to compute . What it really means is that s is
taken to be in , where is the abscissa of convergence of the function
. For all such s, the function converges and
therefore is bounded, hence . Note that there might exist particular
values of s for which , but those have to be outside of the region of
convergence. In fact, we will spend much time later in Chapter 7 investigating the
behavior of the function near this somewhat special point “ .” When
several interconnections are involved, the algebraic manipulations are assumed to have
been performed in an appropriate region of convergence. If all of the functions
involved are rational then such a region of convergence generally exists and is well
defined.
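The block-diagram algebra above can be carried out mechanically on rational transfer-functions represented as pairs of polynomial coefficient lists. The sketch below is illustrative, with hypothetical blocks G(s) = 1/(s+1) and a static gain K = 2; `series` multiplies transfer-functions exactly as static gains would multiply, and `feedback` forms GK/(1+GK) for the unit negative-feedback loop:

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (highest power first)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    """Add two coefficient lists, aligning them at the constant term."""
    n = max(len(a), len(b))
    a = [0.0] * (n - len(a)) + list(a)
    b = [0.0] * (n - len(b)) + list(b)
    return [x + y for x, y in zip(a, b)]

# Hypothetical blocks: G(s) = 1/(s + 1) and the static gain K(s) = 2,
# each stored as (numerator, denominator) coefficient lists.
G = ([1.0], [1.0, 1.0])
K = ([2.0], [1.0])

def series(g1, g2):
    # Series connection: numerators and denominators multiply,
    # just as if the blocks were static gains.
    return (poly_mul(g1[0], g2[0]), poly_mul(g1[1], g2[1]))

def feedback(g, k):
    # Unit negative feedback: H = GK / (1 + GK).
    ng, dg = series(g, k)
    return (ng, poly_add(dg, ng))

H = feedback(G, K)   # H(s) = 2 / (s + 3)
```

All the algebraic manipulations here implicitly happen in a common region of convergence, as discussed above; for rational functions such a region exists and the coefficient arithmetic is all that is needed.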
The notion of tracking embodies the idea of following a given reference. In Section 1.5,
we proposed a feedback solution based on the block-diagram in Fig. 4.2. A key signal is
the tracking error:
The functions S and H have the same denominator, the roots of which are the poles of
S and H, i.e. the roots of the characteristic equation
The roots of (4.4) are the same as the zeros of the equation
For asymptotic stability, the single root of the characteristic equation (4.4) must satisfy
Because b, m, and p are all positive, S has its pole located on the left-hand side of the
complex plane and the closed-loop is stable for all positive values of the proportional
controller gain, K.
where , , and are constant, a general formula for the calculation of the steady-
state tracking error can be obtained using the frequency-response method introduced
in Section 3.8. Compare the above waveform with (3.44). Assuming that S is linear time-
invariant and asymptotically stable, the steady-state closed-loop response to the above
sinusoidal input is given by the frequency-response formula (3.45):
from where
For example, the car cruise control with proportional feedback responds to a reference
input in the form of a step of amplitude with steady-state error
This is (2.14) obtained in Section 2.5. The higher K, the smaller the steady-state error, as seen in Fig. 2.8.
As discussed above, it is not enough that S be asymptotically stable for the tracking
error to converge to zero. Indeed, it is necessary that the product
confirming the presence of the zero at , the single pole of G, which appears in the
numerator of the sensitivity transfer-function. Because of this zero, and the
toilet water tank achieves asymptotic zero tracking error in response to a constant
reference fill line, , regardless of the area, A, the valve gain, K, and the position of the
fill line, , which can be adjusted to meet the needs of the user’s installation site.
Returning to the car cruise control with Lemma 4.1 in mind, let us add an integrator
to the controller as shown in the closed-loop diagram in Fig. 4.3. In this case
and the corresponding closed-loop sensitivity transfer-function is
As expected, S has a zero at the origin, , and the closed-loop system achieves
asymptotic tracking of constant references as long as is chosen so as to keep the
closed-loop system asymptotically stable. As we will see in Section 4.2, any will
stabilize the closed-loop. The closed-loop response of the car with the integral
controller (I controller) is shown in Fig. 4.4 for various values of the gain. All solutions
converge to the desired target velocity, but the plot now shows oscillations. Oscillations
are present despite the fact that the gains seem to be one order of magnitude lower
than the proportional gains used to plot Fig. 2.8. Oddly enough, high gains seem to destabilize the system, leading to larger oscillations. The addition of the integrator solved
one problem, steady-state asymptotic tracking, but compromised the closed-loop
transient response. We will fix this problem in the next section.
Before we get to the transient response, it is important to note that the idea that
high gains drive down tracking error is still present in this section. The new information
provided by the dynamic analysis of Lemma 4.1 is that high gains are not needed at all
frequencies, but just at those that match the frequencies of the reference being
tracked. An integrator, , effectively achieves a very high gain. It is indeed
true that , but only at the constant zero frequency . The
trade-off imposed by the dynamic solution is that the results of such high gains are
achieved only asymptotically, in steady-state as time grows. We will return to this issue
in a broader context in Chapter 7.
The results in this section can be extended to handle more complex reference inputs.
The key property is that the zeros of S, i.e. the poles of , lead to asymptotic
tracking when they cancel out the poles of . For instance, if is a ramp of slope
, that is , then asymptotic tracking happens if has at least two
poles at , in other words, if S has two zeros at .
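A quick numeric check of this zero-cancellation property, using the first-order car velocity model G(s) = p/(ms + b) with a pure integral controller K(s) = Ki/s (the parameter values below are hypothetical, chosen only for illustration): the sensitivity S = 1/(1 + GK) works out to s(ms + b)/(s(ms + b) + pKi), which vanishes at s = 0, so constant references are tracked asymptotically.

```python
# Hypothetical first-order plant and integral-controller values.
m, b, p, Ki = 1500.0, 90.0, 1000.0, 0.05

def G(s):
    return p / (m * s + b)     # first-order car velocity model G(s) = p/(ms+b)

def K(s):
    return Ki / s              # pure integral controller

def S(s):
    # Sensitivity transfer-function of the unit feedback loop.
    return 1.0 / (1.0 + G(s) * K(s))

# S(s) = s(ms + b) / (s(ms + b) + p*Ki) has a zero at the origin,
# so |S(s)| -> 0 as s -> 0, while S -> 1 at high frequencies.
```

Evaluating S very close to s = 0 confirms the zero numerically; a ramp reference would require a second zero at the origin, i.e. a double integrator in the loop.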
Fact (c) is new information. It is a consequence of the fact that no additional pole–zero cancellations can happen when computing S from G and K. This means that, when the
Let us start by revisiting the closed-loop cruise control with proportional control of
Fig. 2.7. The car, G, has order 1, and the controller, K, has order 0, so the closed-loop
system has order 1. The sensitivity transfer-function, S, has been computed in (4.3) and
its single pole is the root of the closed-loop first-order characteristic equation, that is
When b, p, and m are positive, any nonnegative K will lead to stability. The choice of K
affects the speed of the loop as it is directly associated with the closed-loop time-
constant and rise-time. For instance, from (2.9),
Now consider the closed-loop poles in the case of the cruise control with integral
control of Fig. 4.3. The car model, G, has order 1, and the integral controller, K, has
order 1, so the closed-loop system has order 2. The poles of the sensitivity transfer-
function, S, computed in (4.11), are the zeros of the second-order characteristic
equation
It is not difficult to study the roots of this second-order polynomial equation in terms
of its coefficient. For a more general analysis of second-order systems see Section 6.1.
The roots of (4.12) are
so at least one root has a positive real part, leading to an unstable closed-loop system.
When the closed-loop system has one root at the origin. Finally, when ,
the closed-loop is asymptotically stable. Furthermore, if the discriminant of the second-
order polynomial (4.12) is positive, that is
then the resulting closed-loop poles are real and negative. For and
this occurs in the interval
and the constant gains and can be selected independently. The difference
between the PI controller and a pure integral controller is the presence of a zero, which
becomes evident if we rewrite (4.14) as
It is now possible to coordinate the two gains and to place the closed-loop
poles anywhere in the complex plane by manipulating the two coefficients of the
closed-loop characteristic equation. For instance, let us revisit the requirement that the
poles be real and negative. In this case a positive discriminant leads to
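The coefficient-matching idea can be sketched numerically. Assuming the plant is the first-order model G(s) = p/(ms + b) and the controller is the PI law K(s) = Kp + Ki/s, the closed-loop characteristic polynomial is m s² + (b + pKp)s + pKi, and matching its coefficients against a desired factored polynomial places both poles. The plant constants below are hypothetical placeholders, not the book's values:

```python
# Hypothetical plant constants for G(s) = p/(ms + b).
m, b, p = 1500.0, 90.0, 1000.0

def pi_gains(pole1, pole2):
    """Return (Kp, Ki) placing the closed-loop poles at pole1 and pole2.

    Desired characteristic polynomial: m*(s - pole1)*(s - pole2),
    matched against m*s^2 + (b + p*Kp)*s + p*Ki.
    """
    desired_s1 = -m * (pole1 + pole2)   # coefficient of s
    desired_s0 = m * pole1 * pole2      # constant coefficient
    Kp = (desired_s1 - b) / p
    Ki = desired_s0 / p
    return Kp, Ki

Kp, Ki = pi_gains(-1.0, -2.0)           # ask for real negative poles

def char(s):
    # Closed-loop characteristic polynomial with the computed gains.
    return m * s**2 + (b + p * Kp) * s + p * Ki
```

With the gains returned, the characteristic polynomial factors as m(s + 1)(s + 2), i.e. both requested poles are real and negative, as in the discriminant discussion above.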
With , you will show in P4.1 that the largest value of the control input in
response to a step reference will be achieved at . We use the initial
value property in Table 3.2 to compute
which we compute in Table 4.1 for various values of and in. As with
proportional-only control, the trend is that bigger gains lead to a faster response. For
example, the closed-loop time-constant is , with which one can
calculate the time-constants and rise-times shown in Table 4.1. Those should be
compared with the time-responses plotted in Fig. 4.6 in response to a constant
reference input mph. Note the extraordinary rise-time s obtained
with .
Table 4.1 Maximum value of for which the response of the closed-loop
connection of the linear car velocity model (2.12), , , with the
proportional–integral controller (4.14), , to a constant target output,
, is such that the throttle never saturates, that is, , . The last
three columns are the corresponding integral gain, , the closed-loop time-constant,
, and the closed-loop rise-time, .
Figure 4.6 Closed-loop dynamic response, , for the linear car velocity model (2.12),
, , to a constant target output mph with proportional–
integral control (4.14), , for various values of the proportional gain,
.
For perspective, the magnitude of the frequency response of the sensitivity transfer-
function, S, is plotted in Fig. 4.7. Note the zero at both for the integral (I) and for
the proportional–integral (PI) controllers, and the accentuated response of the integral
controller (I) near rad/s. As we will explore in detail in Chapter 7, this peak in
the frequency response explains the oscillatory behavior in Fig. 4.4. The PI design
preserves the same overall frequency response of the proportional (P) design with the
addition of the zero at to ensure asymptotic tracking.
Figure 4.7 Magnitude of the frequency response of the closed-loop sensitivity transfer-
function, , for the linear car velocity model (2.12) calculated for
and with the following controllers: P, proportional, Fig. 2.7, ; I,
integral, Fig. 4.3, ; and PI, proportional–integral, Fig. 4.5, with
and .
4.3 Integrator Wind-up
By now you should have learned not to trust responses which look too good to be true.
Because the input produced by the proportional–integral controller is guaranteed to be
less than in for only if mph, the throttle input must have
saturated in response to a reference input mph in Fig. 4.6, just as it did with
proportional control. Simulation of the nonlinear model introduced in Section 2.6 in
closed-loop with the PI controller (4.14), with chosen as (4.18), reveals saturation
for and of the system input in Fig. 4.8, as predicted by (4.22).
Figure 4.8 Closed-loop control input, pedal excursion, , produced by the car velocity nonlinear model (2.17) under proportional–integral control (4.14).
The plots in Fig. 4.9 show the corresponding output time-responses, from which we
can see that the saturation of the throttle imposes a limit on how fast the car
accelerates, limiting the slope of the initial response and significantly increasing the
actual time-constant and rise-time. Note that the initial slope coincides for all values of
that saturate the throttle input, namely and , and that, as
seen in Fig. 4.9, the output responses split as soon as the input is no longer saturated.
Compared with the nonlinear responses in the case of proportional control, Fig. 2.10,
something new happens in the case of proportional–integral control: the response
obtained with the largest gain, , overshoots the target velocity, forcing the
system input to take negative values. The car must now brake after its velocity has
exceeded the target. This is something that could never be predicted by the linear
closed-loop model, which is of order one.
Figure 4.9 Closed-loop dynamic response, , for the nonlinear car velocity model
(2.17) to a constant target output mph with proportional–integral control
(4.14), , for various values of proportional gain, . Compare with the
linear response in Fig. 4.6.
No less disturbing is the fact that it seems to take a very long time for the response
to converge back to the target after it overshoots. This phenomenon happens
frequently with controllers that have integral action and systems with saturation
nonlinearities, and is known as integrator wind-up. The reason for the name wind-up
can be understood by looking at the plots in Fig. 4.10. No matter which form of control
law is used, if a closed-loop system achieves zero tracking error with respect to a
nonzero step input, an additional nonzero constant term must be present at the input
of the system, u, for the tracking error, e, to be zero. We will discuss this in some detail
in Section 5.8. This nonzero constant is precisely the steady-state solution of the
integral component of the controller. In the case of the proportional–integral controller (4.14)
which we plot in Fig. 4.10. Beware that the time scale in this plot is different than the
time scale in previous plots.
Figure 4.10 Integral component of the closed-loop dynamic response, , for the
nonlinear car velocity model (2.17) to a constant target output mph with
proportional–integral control (4.14), , for various values of the
proportional gain, .
Note how the integral components, from (4.23), converge to a common steady-
state value when zero tracking error is achieved, irrespective of the values of the
particular gains, and . Comparing these plots with the ones in Fig. 4.8 we see
that the saturation of the system input, u, causes the integrated error component, ,
to grow faster than the linear model predicted. The reason for this behavior is that the
system produces less output than expected, due to the saturated input, generating
larger tracking errors. Even as the error is reduced, the component of the control due to the integrator, from (4.23), remains large: the integrator has wound up. As a result,
the system input, , is not small by the time the output gets close to its
target, causing the system to overshoot.
Integrator wind-up can lead to instability and oscillation. With integrators in the
loop, saturation of a component, which might not be relevant when controllers without
integral action are used, may reveal itself in unexpected ways. One example is any
physical system in which tracking of position is achieved by integral action. No matter
how sophisticated your controller may be, in the end, as your system gets close to its
target position, it will have to stop. But it is right there, when you thought you had
everything “under control,” that the devil is hidden. If any amount of dry friction is
present and the controller reduces the system input as it approaches the target, as in
any linear control law, the system will end up stopping before reaching the target! The
controller is left operating in closed-loop with a nonzero tracking error which might be
small at first, but gets integrated by the controller until it produces an output large
enough to move your system and, most likely, miss the target again, this time by
overshooting. Before the dry friction is overcome, the system is effectively saturated:
the input has a dead-zone. The closed-loop system with integral action ends up
oscillating around the target without ever hitting it.
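The wind-up mechanism can be reproduced in a few lines of simulation. The sketch below (a hypothetical first-order plant and hypothetical gains, not the car model (2.17)) closes a PI loop around a saturated actuator and records the peak of the integrator state, which grows far larger when the actuator saturates:

```python
def simulate_pi(sat_limit, kp=2.0, ki=1.0, a=0.1, b=1.0,
                r=1.0, dt=0.01, t_end=40.0):
    """Euler simulation of a hypothetical plant dy/dt = -a*y + b*sat(u)
    under PI control u = kp*e + ki*integral(e). Returns the peak of the
    integrator state, which winds up when the actuator saturates."""
    y, xi = 0.0, 0.0            # plant output and integrator state
    xi_peak = 0.0
    for _ in range(int(t_end / dt)):
        e = r - y               # tracking error
        u = kp * e + ki * xi    # PI control law
        u_sat = max(-sat_limit, min(sat_limit, u))  # actuator saturation
        y += dt * (-a * y + b * u_sat)
        xi += dt * e            # integrator keeps winding during saturation
        xi_peak = max(xi_peak, xi)
    return xi_peak

wound = simulate_pi(sat_limit=0.2)    # tight saturation: large wind-up
unsat = simulate_pi(sat_limit=100.0)  # effectively unsaturated loop
print(wound > unsat)                  # the saturated loop winds up more
```

Because the saturated plant produces less output than the linear model predicts, the error persists longer and its integral overshoots by an order of magnitude.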
which relate the inputs , w, and v to the outputs y and u. We also compute the
tracking error,
Note that the tracking error signal, e, is different from the controller error signal, ,
which is corrupted by the measurement noise v. Remarkably, these nine transfer-
functions can be written in terms of only four transfer-functions:
(4.26)
We have already encountered all of these four transfer-functions before. By now you
should be more than familiar with H and S and their role in tracking. The transfer-
function Q appeared in the previous section in the computation of the amplitude of
the control input signal, u, in response to the reference input, . Indeed,
when . The transfer-function D will be investigated in detail in Section 4.5,
but it has already made a discreet appearance in Section 2.7. Before proceeding, note
that
Input disturbances are often used to model variations in the system’s operating
conditions, such as the road slope in the cruise control problem (Section 2.7). After
setting we obtain
from (4.25) and (4.26). As seen earlier, we need to have S small near the poles of the
reference input for good tracking. Similarly we will need to have D small in the range
of frequencies where the input disturbances are most prominent. A natural question is
that of whether we can make S and D simultaneously small at the frequencies of
interest. Say and w are constants. Can we make and both small?
Because the zeros of S are the poles of the product , the zeros of D are only the
poles of K. The poles of G cancel out with the product . Therefore, for both S
and D to be small at , the DC gain of the controller has to be large.
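This conclusion is easy to check at DC, where S(0) = 1/(1 + G(0)K(0)) and D(0) = G(0)/(1 + G(0)K(0)): both shrink as the controller DC gain K(0) grows. A quick sketch with a hypothetical plant DC gain:

```python
# S(0) = 1/(1 + G(0)K(0)) and D(0) = G(0)/(1 + G(0)K(0)) both shrink
# as the controller DC gain K(0) grows.
def dc_gains(G0, K0):
    S0 = 1.0 / (1.0 + G0 * K0)
    D0 = G0 / (1.0 + G0 * K0)
    return S0, D0

G0 = 10.0                      # hypothetical plant DC gain
for K0 in (1.0, 10.0, 100.0):  # increasing controller DC gain
    S0, D0 = dc_gains(G0, K0)
    print(K0, S0, D0)          # both S(0) and D(0) decrease with K0
```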
Consider the example of the car cruise control with proportional control (P
controller) in Fig. 2.7. With G from (3.16) and , we compute
Evaluation at coincides with (2.14) and (2.21). Because there is no value of for
which either or is zero, in this example, proportional control does not
achieve asymptotic tracking and asymptotic input-disturbance rejection. As seen in Fig.
2.14, a change in road slope will imply a change in the regulated speed.
We compute S and D for the car cruise control with integral control (I controller),
, Fig. 4.3, to obtain
This time, both S and D have a zero at , hence the closed-loop achieves
asymptotic tracking and asymptotic disturbance rejection of step input references and
disturbances. This is illustrated in Fig. 4.12, where a step change in the slope from flat
to a 10% grade slope ( ) happens at s. The car eventually recovers the
originally set speed of 60 mph without error, that is, asymptotic input-disturbance
rejection is achieved. The transient performance is, however, adversely affected by the
integral controller. This should be no surprise since the transient behavior in response
to an input disturbance is also dictated by the poles of S.
Figure 4.12 Closed-loop response of the velocity of the car cruise control with integral
control (Fig. 4.3) to a change in road slope at s, from flat to % grade, for
and and various values of the control gain, .
At this point you should know what is coming: the cruise control with proportional–
integral control (PI controller), , Fig. 4.5, will display asymptotic
tracking and input-disturbance rejection with better transient performance. Indeed, we
compute S and D with the choice of in (4.18). Recall that this leads to a pole–zero
cancellation in so that
As with pure integral control, both S and D have a zero at , therefore the closed-
loop system achieves asymptotic tracking and asymptotic disturbance rejection of step
inputs. Compare the responses of the integral controller, Fig. 4.12, and of the
proportional–integral controller, Fig. 4.13, to a step change in the slope from flat to a
10% grade slope ( ) at s. The response of the PI controller is not
oscillatory (the poles of D are real) and is much faster than that of the integral
controller. A surprising observation is that pole–zero cancellations take place in S and H
but not in D. The canceled pole, , still appears in D, which is second-order!
We will comment on this and other features of pole–zero cancellations later in Section
4.7.
Figure 4.13 Closed-loop response of the velocity of the car cruise control with
proportional–integral control (Fig. 4.5) with to a change in road slope
at s, from flat to 10 % grade, for and and various
values of the proportional control gain, .
Differences among the various control schemes can be visualized in the plot of the
magnitude of the frequency response of the transfer-function D shown in Fig. 4.14. The
high peak in the frequency response of the sensitivity function, S, of the I controller
seen in Fig. 4.7 appears even more accentuated in the frequency response of D in Fig.
4.14. For both controllers with integral action, I and PI, the frequencies around
rad/s are the ones that are most sensitive to input disturbances. This was already
evident for the controller in Fig. 4.7. However, it became clear for the PI controller
only in Fig. 4.14. In this example, the relative weakness at rad/s should be
evaluated with respect to possible road conditions that can excite this frequency. An
analysis is possible with our current tools.
where x denotes the horizontal distance traveled and h the elevation of the road. The
constant is the characteristic length of the hill oscillation. See Fig. 4.15. Starting at an
elevation of , the car will reach the lowest elevation at and will
reach after traversing units of horizontal length. A car traversing this road
with a constant horizontal speed will experience a slope profile
and the designed cruise control system will experience the worst possible input
disturbance. Such a road profile is not so far-fetched!
If the difference between the peak and the valley on the road is that of a 1% grade
road, that is,
For and and (2.18), the input disturbance seen by the car
would be approximately
With rad/s, pure integral (I) control produces a steady-state velocity error of
where D is from (4.27). The proportional–integral (PI) controller produces the much
smaller error
where D is from (4.28). The I controller responds with oscillations of amplitude 6 mph,
which might be noticeable, whereas the PI controller produces oscillations of amplitude
four times smaller.
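The size of these steady-state oscillations is just the disturbance amplitude scaled by the magnitude of D at the excitation frequency. The sketch below repeats the comparison with a hypothetical first-order plant G(s) = b/(s + a) and hypothetical gains (not the book's numbers): for pure integral control D = bs/(s² + as + bk_i), and for PI control with the zero placed at the plant pole D = bs/((s + a)(s + bk_p)), which is indeed second order.

```python
import numpy as np

a, b = 0.1, 1.0          # hypothetical first-order plant G(s) = b/(s + a)
ki, kp = 0.05, 1.0       # hypothetical integral and proportional gains
w0 = 0.1                 # disturbance frequency, rad/s

s = 1j * w0
# Pure integral control K = ki/s:  D = b*s / (s**2 + a*s + b*ki)
D_I = b * s / (s**2 + a * s + b * ki)
# PI control with the zero at the plant pole:  D = b*s / ((s+a)*(s + b*kp))
D_PI = b * s / ((s + a) * (s + b * kp))

print(abs(D_I), abs(D_PI))   # the PI loop attenuates the disturbance more
```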
We could not conclude without going back to the toilet. For the toilet water tank
example
after (4.8) and (4.9). As expected, the sensitivity function, S, contains a zero at
but D does not, since the integrator is located on the system, G, and not on the
controller, K. The water tank system achieves asymptotic tracking but not asymptotic
input-disturbance rejection of step inputs. For example, if a leak is present that offsets
the input flow, the valve may end up permanently open as the water never reaches the
fill line. In an actual toilet with a real valve, this produces oscillations, with the valve
opening periodically even if no one has flushed the toilet, which is the number one sign
that you should check the rubber gasket and the flapper valve of your toilet water tank.
Let us now analyze the effect of the measurement noise disturbance, v, on the
measured output, y. We use Fig. 4.11 and Equation (4.25) to calculate
The key is that (4.29) has to hold for every frequency, that is,
For instance, the cruise control closed-loop sensitivity functions shown in Fig. 4.7 are
all such that is small for frequencies close to zero while is close to
one for rad/s. This means that they will perform adequately if the
measurement noise is confined to frequencies higher than, say, 1 rad/s. A
complementary picture is Fig. 4.16, which shows the magnitude of the frequency
response of the transfer-function . The peak for the integral
controller around rad/s is likely to cause trouble in closed-loop if
measurement noise is significant at those frequencies.
Note that (4.29) does not imply that . Therefore S and H can
be simultaneously large at a given frequency as long as they have opposite phases.
Indeed, the large peaks in and in the case of the integral controller
both happen very close to rad/s. What is true, however, is that when
is small then should be close to one, and vice versa. We will have much more
to say about this in Chapter 7.
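The identity S + H = 1, and the fact that |S| and |H| can nevertheless both be large at the same frequency, are easy to verify numerically. A sketch with a hypothetical first-order plant under integral control, deliberately tuned to be lightly damped:

```python
import numpy as np

a, b, ki = 0.1, 1.0, 0.05          # hypothetical plant b/(s+a), I control ki/s
w = np.logspace(-3, 2, 200)
s = 1j * w
L = (b / (s + a)) * (ki / s)       # loop transfer-function G*K
S = 1.0 / (1.0 + L)                # sensitivity
H = L / (1.0 + L)                  # complementary sensitivity

print(np.max(np.abs(S + H - 1)))   # ~ 0: S + H = 1 at every frequency
# Near the lightly damped resonance, |S| and |H| simultaneously exceed one:
both_large = bool(np.any((np.abs(S) > 1.0) & (np.abs(H) > 1.0)))
print(both_large)
```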
The validity of this solution in a dynamic scenario is at best suspect. There are at
least two obstacles. (a) Is the connection of K and G stable? (b) If it is stable, can one
construct a physical realization of the controller? We will consider the issue of stability
in the rest of this section. The next chapter will address the problem of constructing a
physical realization of the controller in detail. For now, be aware that it might not be
possible to implement open-loop controllers that attempt to invert the system. When
the system, G, is rational and strictly proper, its inverse is not proper. That is, the
degree of the numerator is greater than the degree of the denominator. For example,
At this point one must ask why is stability relevant if the unstable pole or zero is
being canceled? It is not uncommon to find explanations based on the following
argument: it is impossible to perfectly cancel a pole with a zero, and any imperfection
in the cancellation will reveal itself in the form of instabilities. Take (4.30), and suppose
the actual zero was not at 1 but at . In this case
If is small but not zero, this term would eventually grow out of bounds, hence
revealing the instability in the connection.
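The size of this growing term can be made concrete. In the hypothetical cascade (s − 1 + ε)/((s − 1)(s + 2)), in the spirit of (4.30), the partial-fraction residue attached to the unstable pole is ε/3: negligible at first, but it multiplies e^t in the time response.

```python
import numpy as np

# Imperfect cancellation: a zero at 1 - eps against a pole at +1, in series
# with a stable pole at -2 (a hypothetical example in the spirit of (4.30)).
eps = 1e-3
num = np.array([1.0, eps - 1.0])             # s - (1 - eps)
den = np.polymul([1.0, -1.0], [1.0, 2.0])    # (s - 1)(s + 2)

# Residue at the simple unstable pole s = +1: num(1) / den'(1)
r = np.polyval(num, 1.0) / np.polyval(np.polyder(den), 1.0)
print(r)   # eps/3: tiny, but it multiplies exp(t) in the time response
```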
A less publicized but much better reasoning is that canceling a pole or a zero with
positive real part is a really bad idea even if perfect cancellation were possible! If we
modify Fig. 4.17 to include an input disturbance as in Fig. 1.12, then, with perfect
cancellation,
so that if either G or K were not asymptotically stable then the corresponding signal y
or u would grow unbounded. A more detailed argument will be provided next as a
particular case of the closed-loop analysis.
Identifying the reference, , and the disturbances, w and v, as inputs, and y, u, and z as
outputs, we write
where S is the sensitivity transfer-function:
We say that the closed-loop connection in Fig. 4.18 is internally stable if the eight
transfer-functions appearing above, namely
are asymptotically stable. The rationale is that all output signals in the loop will be
bounded if their inputs are bounded (see Section 3.6).
In open-loop,
and . Therefore, internal stability means asymptotic
stability of the transfer-functions
Note the important fact that asymptotic stability of the product is not enough!
Indeed, as shown earlier, in the presence of a nonzero input disturbance, w, we shall
have
which shows that the signals y and u will have unbounded components if any one of G,
K, and is not asymptotically stable. It was necessary to introduce the disturbance,
w, but also to look inside the loop, at the internal signal u for traces of instability, hence
the name internal stability. Requiring that G, K, and be asymptotically stable has
important implications in the case of open-loop solutions: (a) it means that open-loop
solutions cannot be used to control unstable systems (G must be asymptotically
stable); (b) it rules out choices of K that perform a pole–zero cancellation with positive
real part (K must be asymptotically stable); (c) it reveals that open-loop solutions do
not possess any capability to reject input disturbances other than those already
rejected by the original system G, since .
Now consider internal stability in the case of the unit-feedback loop in Fig. 4.11, that
is, Fig. 4.18 with . In this case, internal stability is equivalent to asymptotic
stability of the four transfer-functions
which are nothing but the four transfer-functions S, D, Q, and H presented earlier in
(4.26). Because , this is really stability of the three transfer-functions
As discussed in Section 4.2, no pole–zero cancellation occurs in S that has not already
occurred when forming the product . Furthermore, the zeros of S are the poles of
. If no pole–zero cancellations occur in , then all we need to check is stability
of the poles of S. On the other hand, if there are pole–zero cancellations in the product
, then these cancellations have to be of stable poles. This is better illustrated with
an example. Suppose that
perform a pole–zero cancellation of the zero z and the pole p. Assume that there are
no further cancellations when forming the product
that is, and are polynomials without a common factor. Then the roots of
the polynomial
will be stable only if p and z are both negative. This argument can be extended without
difficulty to complex-conjugate poles and zeros. The discussion is summarized in the
next lemma.
Lemma 4.3 (Internal stability) Consider the closed-loop diagram in Fig. 4.11. The closed-
loop system is internally stable if and only if S, the transfer-function from the reference
input, , to the tracking error, , is asymptotically stable and any pole–zero
cancellations performed when forming the product are of poles and zeros with
negative real part.
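Lemma 4.3 suggests a simple numerical test: form the closed-loop characteristic polynomial without performing any cancellations, so that a cancelled unstable pole or zero still shows up as a root. A sketch, assuming G and K are each given in coprime numerator/denominator form (the coefficient arrays below are hypothetical examples):

```python
import numpy as np

def internally_stable(ng, dg, nk, dk):
    """Internal stability of the unity-feedback loop of G = ng/dg and
    K = nk/dk: form phi = dg*dk + ng*nk WITHOUT cancelling common
    factors, so an unstable pole-zero cancellation still appears as a
    root of phi, then check that every root has negative real part."""
    phi = np.polyadd(np.polymul(dg, dk), np.polymul(ng, nk))
    return bool(np.all(np.roots(phi).real < 0))

# Stable cancellation: G = 1/(s+1), K = (s+1)/s  -> internally stable
print(internally_stable([1.0], [1.0, 1.0], [1.0, 1.0], [1.0, 0.0]))
# Unstable cancellation: G = 1/(s-1), K = (s-1)/s -> NOT internally stable
print(internally_stable([1.0], [1.0, -1.0], [1.0, -1.0], [1.0, 0.0]))
```

In the second example the product GK = 1/s looks harmless, but the cancelled pole at +1 reappears as a root of the uncancelled characteristic polynomial, exactly as the lemma requires.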
Recall that we designed the proportional–integral controller (PI controller) for the
cruise controller with a choice of in (4.18) that performed a pole–zero cancellation.
In that case we had
which are all stable when b, p, m, and are positive. If one chooses to perform a
pole–zero cancellation in the product , this does not mean that the canceled pole
or zero will simply disappear from the loop. As illustrated above, the canceled pole or
zero will still appear as a pole or a zero in one of the loop transfer-functions and
therefore should have negative real part for internal stability.
Interestingly, even when the cancellation of a pole or zero with negative real part is
not perfect, the resulting transfer-function should not be significantly affected by a
small mismatch in the value of the canceled zero or pole. How large the mismatch can
be is, of course, dependent on the problem data. Too much mismatch might even lead
to closed-loop instability. For the car cruise control with proportional–integral control
(PI controller) we evaluate this possibility in Fig. 4.19 after recalculating the step
responses for the choice of integral gain:
Figure 4.19 Closed-loop dynamic response, , for the linear car velocity model (2.12)
calculated for and and a constant target output of
mph with proportional–integral control where and .
When ( %), the controller performs an exact pole–zero cancellation.
Problems
4.3 Calculate
4.4 Repeat P4.3 for the following system and controller transfer-functions:
(a) , ;
(b) , ;
(c) , ;
(d) , ;
(e) , ;
(f) , ,
4.5 Consider the standard feedback connection in Fig. 4.20(a) in which the system
and controller transfer-functions are
Is the closed-loop system internally stable? Does the closed-loop system achieve
asymptotic tracking of a constant input , ?
Figure 4.20 Diagrams for P4.5 and P4.7.
4.6 Repeat P4.5 for the following combinations of system and controller transfer-
functions:
(a) , ;
(b) , ;
(c) , ;
(d) , ;
(e) , ;
(f) , ;
(g) , ;
(h) , ;
(i) , ;
(j) , ;
(k) , ;
(l) , ;
(m) , ;
(n) , .
4.7 Consider the closed-loop connection in Fig. 4.20(b) with in which the
system and controller transfer-functions are
4.8 Repeat P4.7 for the combination of system and controller transfer-functions in
P4.6.
4.9 Consider the closed-loop connection in Fig. 4.20(b) with and the system
and controller transfer-functions
Show that the closed-loop system asymptotically tracks a constant reference input
, , and asymptotically rejects an input disturbance ,
.
4.10 Consider the closed-loop connection in Fig. 4.20(b) with and the system
and controller transfer-functions
Calculate the steady-state component of the output, y, and the tracking error,
, in response to
Does the closed-loop achieve asymptotic tracking of the reference input, ? Does
the closed-loop achieve asymptotic rejection of the measurement noise input, ?
What happens if is very large and is zero?
4.11 The scheme in the diagram in Fig. 4.21 is used to control a time-invariant system
with a time-delay , represented by the transfer-function . Show that
, where
The controller is known as a Smith predictor. Rearrange the closed-loop diagram in
Fig. 4.21 so as to reveal the controller .
4.12 Show that if G and K are rational transfer-functions and there are no pole–zero
cancellations when forming the product GK then the zeros of , from P4.11, are the
poles of G.
4.13 Explain why the controller in Fig. 4.21 can be used only if G is asymptotically
stable. Hint: P4.12 and internal stability.
4.15 Show that the block-diagrams in Fig. 4.22 have the same transfer-function from
to e:
and that
4.17 Show that there exists a proper controller that stabilizes the proper
transfer-function , where is strictly proper if and only if there
exists a proper controller that stabilizes the strictly proper transfer-function
. Calculate the relationship between the two controllers. Hint: Use P4.15 and
P4.16.
4.19 What is wrong with the feedback diagram in Fig. 4.23(b)? Hint: Use P4.15 and
P4.18.
4.20 You have shown in P2.10 and P2.12 that the ordinary differential equation
is a simplified description of the motion of a rotating machine driven by a belt without
slip as in Fig. 2.18(a), where is the angular velocity of the driving shaft and is the
machine’s angular velocity. Let mm, mm, kg m /s,
kg m /s, kg m , and kg m . Design a feedback
controller
and select K such that the closed-loop system is internally stable. Can the closed-loop
system asymptotically track a constant reference , ? Assuming zero as
the initial condition, sketch or use MATLAB to plot the closed-loop response when
rad/s.
where
4.22 In P4.21, does it matter whether the angle used by the controller is
coming from an angular position sensor or from integrating the output of an angular
velocity sensor?
4.23 You have shown in P2.18 that the ordinary differential equation
is a simplified description of the motion of the elevator in Fig. 2.18(b), where is the
angular velocity of the driving shaft and is the elevator’s load linear velocity. Let
m/s , m, kg, kg m /s, and
kg m . Design a feedback controller
and select K such that the closed-loop system is internally stable. Can the closed-loop
system asymptotically track a constant velocity reference , ? Assuming
zero as the initial condition, sketch or use MATLAB to plot the closed-loop response
when m/s.
where
4.26 In P4.25, does it matter whether the position used by the controller is
coming from a position sensor or from integrating the output of a velocity sensor?
4.28 Modify the feedback controller in P4.25 so that the closed-loop elevator system
can asymptotically track a constant reference m, , when
kg.
4.29 You have shown in P2.41 that the ordinary differential equation
is a simplified description of the motion of the rotor of the DC motor in Fig. 2.24, where
is the rotor angular velocity. Let kg m , N m/A,
V s/rad, kg m /s, and . Design a feedback
controller
and select K such that the closed-loop system is internally stable. Can the closed-loop
system asymptotically track a constant-angular-velocity reference , ?
Assuming zero as the initial condition, sketch or use MATLAB to plot the closed-loop
response when RPM.
where
4.32 It seems that the controllers in P4.30 and P4.31 are the same if .
Explain their differences.
4.33 Why would you want to run the controller from problem P4.29 with a zero
velocity reference ? What is the role of the control gain K in this case?
The armature current, , is related to the armature voltage, , and the rotor angular
velocity, , through
Show that
4.35 Consider the DC motor in P4.34 with the same physical parameters as in P4.29.
Design a feedback controller
and select K such that the closed-loop system is internally stable. Can the closed-loop
system asymptotically track a constant-torque reference , ? Assuming
zeros as the initial conditions, sketch or use MATLAB to plot the closed-loop response
when N m.
4.37 Contrast the similarities and differences between the solutions to P4.30 and
P4.35.
4.38 You have shown in P2.49 that the temperature of a substance, T (in K or in °C),
flowing in and out of a container kept at the ambient temperature, , with an inflow
temperature, , and a heat source, q (in W), can be approximated by the differential
equation
where m and c are the substance’s mass and specific heat, and R is the overall system’s
thermal resistance. The input and output flow mass rates are assumed to be equal to w
(in kg/s). Assume that water’s density and specific heat are kg/m and
J/kg K. Design a feedback controller
where .
Compare S with the closed-loop sensitivity function computed earlier in Section 1.6 to
1. In fact, any control law, not necessarily linear, that is continuous at the origin!
4. Unless
6. has a zero exactly at “1,” in which case a similar argument about the
exactness of the location of the zero would apply.
The simplest dynamic system for which we can envision a construction is the integrator.
Any device that is capable of storing mass, charge, or energy in some form is basically
an integrator. Indeed, we have already met one such device in Section 2.8: the toilet
water tank. In Fig. 2.16, the water level, y, is the result of integrating the water
input flow, u,
where A is the constant cross-section area of the tank. The voltage across the terminals
of a capacitor, v, is the integral of the current, i,
where C is the capacitor’s capacitance. A fly-wheel with negligible friction integrates the
input torque, f , to produce an angular velocity, ,
indexed by the integer k and where T is a small enough sampling period at which
periodic samples of the continuous input are obtained.
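In a digital computer the integrator is just such an accumulation. A minimal sketch (forward-Euler accumulation, x[k+1] = x[k] + T·u[k], with hypothetical names):

```python
# Forward-Euler accumulator x[k+1] = x[k] + T*u[k]: a digital integrator.
def integrate(u_samples, T, x0=0.0):
    """Accumulate samples of u with sampling period T, starting from x0."""
    x = x0
    out = []
    for u in u_samples:
        out.append(x)       # record the state before the update
        x += T * u
    return out

# Integrating a constant input u = 1 reproduces the ramp t = k*T:
T = 0.1
x = integrate([1.0] * 5, T)
print(x)   # approximately [0.0, 0.1, 0.2, 0.3, 0.4]
```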
Two difficulties are common to all physical realizations of integrators: (a) providing
enough storage capacity, be it in the form of mass, volume, charge, energy, or numbers
in a computer; and (b) controlling the losses or leaks. The storage capacity should be
sized to fit the application at hand, and losses must be kept under control with
appropriate materials and engineering. In the following discussion we assume that
these issues have been worked out and proceed to utilize integrators to realize more
complex dynamic systems.
Let us start by revisiting the diagram in Fig. 2.3 which represents the linear ordinary
differential equation (2.3). This diagram is reproduced in Fig. 5.1. The trick used to
represent the differential equation (2.3) in a block-diagram with integrators was to
isolate the highest derivative:
The highest derivative is then integrated in a series of cascade integrators from which
all lower-order derivatives become available. For example, in order to represent the
second-order differential equation
which is then integrated twice. The input signal u and the signals y and are run
through amplifiers (gains) and a summer is used to reconstruct as shown in Fig. 5.2.
Of course, one could use the exact same scheme to implement the associated transfer-
function:
and replace integrators by “ .” The result is Fig. 5.3, which is virtually the same as
Fig. 5.2. Initial conditions for the differential equation, and in this case, are
implicitly incorporated in the block-diagrams as initial conditions for the integrators.
For example, the second integrator in Fig. 5.2 implements the definite integral
Physically, is the amount of the quantity being integrated, water, current, torque,
etc., which is present in the integrator at . There will be more about that in
Section 5.2.
using only integrators. The main difficulty is the derivatives of the input signal u. One
idea is to use linearity. First solve the differential equation
One can use the diagram in Fig. 5.2 with for that. A solution to
This idea is implemented in the diagram of Fig. 5.4. See also P2.38.
Here is an alternative: isolate the highest derivative and collect the right-hand terms
with the same degree, that is,
Now integrate twice to obtain
These operations are represented in the diagram in Fig. 5.5. Because Figs. 5.4 and 5.5
represent the same system, the realization of a differential equation, or its
corresponding transfer-function, is not unique. Other forms are possible, which we do
not have room to discuss here. They can be found, for instance, in the excellent text
[Kai80].
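The non-uniqueness is easy to demonstrate: any invertible change of state coordinates yields a different realization with the same input-output behavior. A sketch with a hypothetical transfer-function, not one from the text:

```python
import numpy as np

# Controllable-canonical realization of G(s) = (s + 2)/(s**2 + 3*s + 1)
A = np.array([[0.0, 1.0], [-1.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[2.0, 1.0]])

# A second, different realization of the SAME transfer-function,
# obtained by the change of coordinates x -> T x:
T = np.array([[2.0, 1.0], [0.0, 1.0]])   # any invertible matrix works
Ti = np.linalg.inv(T)
A2, B2, C2 = T @ A @ Ti, T @ B, C @ Ti

def step_response(A, B, C, dt=1e-3, t_end=5.0):
    """Forward-Euler unit-step response from zero initial state."""
    x = np.zeros((A.shape[0], 1))
    y = []
    for _ in range(int(t_end / dt)):
        y.append((C @ x).item())
        x = x + dt * (A @ x + B)   # unit step input u = 1
    return np.array(y)

y1 = step_response(A, B, C)
y2 = step_response(A2, B2, C2)
print(np.max(np.abs(y1 - y2)))   # ~ 0: different realizations, same response
```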
Besides the issue of uniqueness, a natural question is the following: can we apply
these ideas to any differential equation and obtain a block-diagram using only
integrators? The answer is no. Consider for example
where u is an input and y is the output. It is not possible to represent this equation in a
diagram where y is obtained as a function of u without introducing a derivative block.
In general, by a straightforward generalization of the techniques discussed above, it
should be possible to represent a general linear ordinary differential equation of the
form (3.17):
using only integrators if the highest derivative of the input signal, u, appearing in (3.17)
is not higher than the highest derivative of the output signal, y. That is, if . In
terms of the associated rational transfer-function, G in (3.18), it means that the degree
of the numerator, m, is no higher than the degree of the denominator, n, that is, G is
proper. As we will soon argue, it is not possible to physically implement a
differentiator, hence one should not ordinarily find transfer-function models of
physical systems which are not proper. Nor should one expect to be able to implement
a controller obtained from a transfer-function which is not proper.
That is, however, not to say that components of a system cannot be modeled as
differentiators or have improper transfer functions. Take for example the electric circuit
in Fig. 5.6. The relationship between the voltage and current of an ideal capacitor is the
differentiator:
which is not proper. A real capacitor will, however, have losses, which can be
represented by some small nonzero resistance R appearing in series with the capacitor
in the circuit of Fig. 5.6. The complete circuit relations are
from which we obtain the transfer-function
after eliminating . See P2.34 and P3.80. The overall circuit has a proper transfer-
function, . The smaller the losses, that is, R, the more the circuit behaves as an
ideal capacitor, and hence as a differentiator. Indeed,
which is not proper. It was nevertheless very useful to work with the ideal capacitor
model and its improper component transfer-function to understand the overall circuit
behavior and its limitations. It is in this spirit that one should approach improper
transfer-functions.
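This limiting behavior can be explored numerically. The sketch below assumes the voltage-to-current transfer-function of the lossy capacitor is the series-RC admittance Cs/(RCs + 1) (hypothetical R and C values): at low frequency the gain grows like that of the ideal differentiator Cs, but the losses cap it at 1/R.

```python
import numpy as np

R, C = 10.0, 1e-3        # hypothetical loss resistance and capacitance
w = np.logspace(0, 6, 100)
H = C * 1j * w / (R * C * 1j * w + 1.0)   # lossy capacitor: Cs/(RCs + 1)

print(abs(H[0]) / (C * w[0]))  # ~ 1: behaves like the differentiator Cs
print(abs(H[-1]))              # high-frequency gain saturates near 1/R
```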
Even if the issue of gain could be addressed, there are still potential problems with
losses in practical differentiators. For example, the unit step response of the ideal
capacitor with model is the impulse . The unit step response of the capacitor
with losses is
currents and voltages will cause other problems for the circuit and the physical
materials it is made of; one of them is that the materials will stop responding linearly.
Figure 5.7 Normalized response of a capacitor with losses to a V step input.
On the issue of high-frequency gains, the steady-state response of the capacitor with
losses to a sinusoidal input of frequency and unit amplitude, , is
which shows that the amount of amplification in this circuit is limited by the losses
represented by the resistor R. Note also that
We conclude our discussion with another electric circuit. The circuit in Fig. 5.8
contains a resistor, , two capacitors, and , and an (operational) amplifier with
very high gain, the triangular circuit element. You have shown in problems P2.38 and
P3.90 that the transfer-function from the voltage to the voltage is
Therefore, by adjusting the ratio between the two capacitors, and , and the
resistor, , it is possible to set the gain K and the zero z to be exactly the ones
needed to implement the PI controller (4.15). The particular case when the capacitor
is removed from the circuit, i.e. , is important. In this case the circuit
transfer-function reduces to
which is a pure integrator with gain K (see P2.40). This circuit can be combined with
amplifiers to build a physical realization for the diagrams in Figs. 5.1–5.5. Items of
electronic hardware with specialized circuitry implementing blocks such as the one in
Fig. 5.8 were manufactured and used for analysis and simulation of dynamic systems in
the second half of the twentieth century under the name analog computers. An analog
computer used by NASA is shown in Fig. 5.9. These computers have all but been
replaced by digital computers, in which dynamic systems are simulated via numerical
integration.
After learning how to convert differential equations and transfer-functions into block-
diagrams using integrators we will now introduce a formalism to transform differential
equations of arbitrary order into a set of vector first-order differential equations. The
key is to look at the integrators in the block-diagrams. Start by defining a new variable
for each variable being integrated. For example, in the diagram in Fig. 5.4, define two
variables,
one per output of each integrator. Next write two first-order differential equations at
the input of each integrator:
(5.1)
A similar procedure works for the diagram in Fig. 5.5. With and representing
the output of each integrator, we write the differential equations
to obtain
(5.2)
Equations (5.1) and (5.2) are in a special form called state-space equations.
Differential equations for linear time-invariant systems are in state-space form when
they match the template
(5.3)
In (5.3), not only the state, x, but also the input, u, and output, y, can be vectors. This
means that state-space is capable of providing a uniform representation for single-
input–single-output (SISO) as well as multiple-input–multiple-output (MIMO) systems.
Furthermore, the matrix and linear algebra formalism enables the use of compact and
powerful notation. For instance, applying the Laplace transform to the state-space
and compute
where
parametrize the response to the initial state, , which plays the role of the initial
conditions. The simplicity of these formulas hides the complexity of the underlying
calculations. For example, with
we compute
from which
and
from one’s empty hat. The exponential function of a matrix hides the complexities
which will ultimately be able to correctly compute the response of linear systems even
in the most complicated cases, e.g. for systems with multiple roots or complex-conjugate
roots (see Chapter 3). Likewise, in response to a nonzero initial condition we
have that
and
After some algebra, we obtain first-order vector equations for the system state,
These equations are put in the form (5.3) after collecting all inputs, outputs, and
closed-loop state components into the vectors
and rearranging:
where
Closed-loop internal stability (Lemma 4.3) can be shown to be equivalent to matrix
being Hurwitz.
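Both the matrix exponential and the Hurwitz test can be explored numerically. A sketch with a hypothetical 2 × 2 matrix, using a plain power-series implementation of e^{At} (adequate for small arguments):

```python
import numpy as np

def expm(M, terms=60):
    """Matrix exponential via its power series (fine for modest ||M||)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Hypothetical closed-loop matrix; Hurwitz = every eigenvalue in the
# open left half-plane, the state-space counterpart of asymptotic stability.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
hurwitz = bool(np.all(np.linalg.eigvals(A).real < 0))
print(hurwitz)                      # eigenvalues are -1 and -2

# The response to an initial state is x(t) = expm(A*t) @ x0, and the
# exponential obeys the semigroup property e^{A(t+s)} = e^{At} e^{As}:
lhs = expm(A * 3.0)
rhs = expm(A * 1.0) @ expm(A * 2.0)
print(np.max(np.abs(lhs - rhs)))    # ~ 0
```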
The discussion in Sections 5.1 and 5.2 may leave one with the impression that the
passage from transfer-function to state-space model and vice versa is a relatively
straightforward process. Other than the issue of non-uniqueness of state-space realizations,
state-space formulas look and feel simple. However, we would like to draw the reader’s
attention to an important issue we have overlooked so far, which we motivate with
two simple examples.
Consider for instance the state-space realization in the form (5.3) with
You will verify in P5.4 that, for the matrices in (5.9) and ,
satisfy (5.10), which shows that this state-space realization is not observable if .
An interpretation for the loss of observability is that not all coordinates of the state
vector can be estimated by taking measurements of the output and its derivatives
alone [Kai80]. A similar statement, this time involving the matrices A and B, can be used
to test for controllability (see P5.3). Lack of controllability can be interpreted as the
impossibility of computing a control input that can steer all coordinates of the state
vector to a desired location [Kai80].
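The eigenvector-based tests referenced above have an equivalent, easily coded counterpart: the Kalman rank tests. The sketch below, with illustrative matrices (not those of (5.9)), builds the controllability and observability matrices and checks their ranks:

```python
import numpy as np

def ctrb(A, B):
    """Kalman controllability matrix [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def obsv(A, C):
    """Kalman observability matrix [C; CA; ...; CA^{n-1}]."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Illustrative system: the second state never appears in the output y
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

controllable = np.linalg.matrix_rank(ctrb(A, B)) == A.shape[0]   # full rank: True
observable = np.linalg.matrix_rank(obsv(A, C)) == A.shape[0]     # rank 1 < 2: False
```

The rank deficiency of the observability matrix reflects exactly the interpretation above: the second state coordinate cannot be estimated from the output and its derivatives.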
which appears to be of first order. Also is a zero for this transfer-function but
is not.
no matter what f is. That is, the quantity z, which is the sum of the particles’ linear
momentum, is conserved. This is of course a restatement of Newton’s third law that
internal forces cannot change a system’s total linear momentum. No control action
through internal forces can change that!
At a more philosophical level, one might argue that controllability and observability
are not to be verified for any physical system because, at some level, there will always
be variables that cannot be controlled or that cannot be estimated without explicit
measurement. Can we control or observe the exact position of atoms in any object
subject to automatic control? Controllability and observability, and hence minimality,
are properties of one’s mathematical model, never of the underlying physical system.
(5.12)
When f and g are continuous and differentiable functions and the state vector, x, and
the input, u, are in the neighborhood of a point or trajectory , it is
natural to expect that the behavior of the nonlinear system can be approximated by
that of a properly defined linear system. The procedure used to compute such a linear
system is known as linearization, and the resulting system is a linearized approximation.
For reasons which will become clear soon, one is often interested in special points
satisfying the nonlinear equation
(5.16)
in the standard state-space form (5.3). The next lemma, due to Lyapunov and
presented without proof, links asymptotic stability of the linearized system with
local asymptotic stability of the original nonlinear system.
Lemma 5.1 (Lyapunov) Consider the nonlinear dynamic system in state-space form
defined in (5.12). Let be an equilibrium point satisfying (5.14) and consider the
linearized system (5.16) for which the quadruple is given in (5.13).
If A is Hurwitz then there is for which any trajectory with initial condition in
and input , , converges asymptotically to the
equilibrium point , that is, .
On the other hand, if A has at least one eigenvalue with positive real part then for
any there exists at least one trajectory with initial condition in
and input , , that diverges from the equilibrium point , that is,
there exists for which for all .
This lemma is of major significance for control systems. The first statement means
that it suffices to check whether a linearized version of a nonlinear system around an
equilibrium point is asymptotically stable in order to ensure convergence to that
equilibrium point. The lemma’s main weakness is that it does not tell us anything
about the size of the neighborhood of the equilibrium point in which convergence to
equilibrium happens, that is, the size of . The second statement says that instability
of the linearized system implies instability of the original nonlinear system. Lemma 5.1
is inconclusive when A has eigenvalues on the imaginary axis.
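Both statements, and the unspecified size of the neighborhood, can be illustrated on the made-up scalar system ẋ = −x + x³ (not one of the book's examples). The linearization at the equilibrium x̄ = 0 is ẋ = −x, which is Hurwitz, so nearby trajectories converge to 0; yet trajectories starting beyond the neighboring equilibria at ±1 diverge:

```python
import numpy as np
from scipy.integrate import solve_ivp

f = lambda t, x: -x + x**3    # nonlinear dynamics; equilibria at 0, +1, and -1
# d/dx(-x + x^3) at x = 0 equals -1: the linearization at the origin is Hurwitz

# Initial condition inside the (a priori unknown) region of attraction of 0
sol_in = solve_ivp(f, [0.0, 5.0], [0.5], rtol=1e-8)
# Initial condition beyond the unstable equilibrium at +1: the trajectory diverges
sol_out = solve_ivp(f, [0.0, 0.6], [1.1], rtol=1e-8)

x_converged = abs(sol_in.y[0, -1])   # decays toward the equilibrium at 0
x_diverged = sol_out.y[0, -1]        # grows away from the equilibrium
```

The Hurwitz linearization correctly predicts local convergence, but says nothing about how large the convergent neighborhood is — here it ends at |x| = 1.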
For some systems, linearizing around an equilibrium point might be too restrictive.
Take, for example, an airplane or spacecraft in orbit which cannot be in equilibrium
with zero velocities. Another example is a bicycle. In such cases it is useful to linearize
around a time-dependent equilibrium trajectory satisfying
(5.17)
which is the familiar statement that a mass in equilibrium will be at rest or at constant
velocity, that is, Newton’s first law. See Section 5.7 for another example.
In the next sections we will illustrate how to obtain linearized systems from
nonlinear models through a series of simple examples.
Consider the simple planar pendulum depicted in Fig. 5.11. The equation of motion of
the simple pendulum obtained from Newton’s law in terms of the pendulum’s angle
is the second-order nonlinear differential equation
where
J is the pendulum’s moment of inertia about its center of mass, m is the pendulum’s
mass, b is the (viscous) friction coefficient, and r is the distance to the pendulum’s
center of mass. For example, if the pendulum is a uniform cylindrical rod of length
ℓ, then r = ℓ/2 and the moment of inertia about its center of mass is J = mℓ²/12.
The input, u, is a torque, which is applied by a motor mounted on the axis of the
pendulum. We assume that the motor is attached in such a way that the pendulum can
rotate freely and possibly complete multiple turns around its axis. The motor will not
be modeled.
Figure 5.11 Simple pendulum.
We set and look for equilibrium points by solving the system of equations
This time, however, matrix A is never Hurwitz. The transfer-function associated with
which is not asymptotically stable. According to Lemma 5.1, trajectories starting close
enough to will diverge from , again as we would expect from a physical
pendulum. All other equilibria in which is an integer multiple of will lead to one
of the above linearized systems.
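Both linearizations can be checked numerically. Assuming the standard pendulum equation in the form Jt θ̈ + b θ̇ + m g r sin θ = u, with Jt = J + m r² the moment of inertia about the pivot and θ measured from the hanging position, the sketch below (parameter values are illustrative, not the book's) computes the eigenvalues at the two equilibria:

```python
import numpy as np

# Illustrative pendulum data
m, g, r, b = 1.0, 9.8, 0.5, 0.2
Jt = 0.4                     # Jt = J + m r^2, inertia about the pivot

def A_pendulum(theta_bar):
    """Linearization of Jt*theta'' + b*theta' + m*g*r*sin(theta) = u at theta_bar,
    state (theta, theta_dot)."""
    return np.array([[0.0, 1.0],
                     [-m * g * r * np.cos(theta_bar) / Jt, -b / Jt]])

eig_down = np.linalg.eigvals(A_pendulum(0.0))     # hanging equilibrium
eig_up = np.linalg.eigvals(A_pendulum(np.pi))     # inverted equilibrium

down_stable = bool(np.all(eig_down.real < 0))     # stable when b > 0
up_unstable = bool(np.any(eig_up.real > 0))       # one pole in the right half-plane
```

At θ̄ = 0 the gravity term restores, and positive damping makes A Hurwitz; at θ̄ = π the cosine flips sign and one eigenvalue moves to the right half-plane, as the text describes.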
We now complicate the pendulum by attaching it to a cart that can move only in the
direction, as shown in Fig. 5.13(a). In Fig. 5.13(a) the cart is on a rail, which would be
the case, for example, in a crane. A similar model can be used to describe the inverted
pendulum in a cart shown in Fig. 5.13(b). Without delving into the details of the
derivation, the equations of motion for the pendulum are the following pair of coupled
nonlinear second-order differential equations:
(5.20)
where is the pendulum’s angle and is the cart’s position, shown in the diagram in
Fig. 5.13(a). The positive constants , , and are the pendulum’s moment of
inertia, mass, and viscous friction coefficient, r is the distance to the pendulum’s center
of mass, and and are the cart’s mass and viscous friction coefficient. An
important difference is that the input, u, is a force applied to the cart, as opposed to a
torque applied to the pendulum. The goal is to equilibrate the pendulum by moving
the cart, as done in the Segway Personal Transporter shown in Fig. 5.13(b).
Figure 5.13 Pendula in carts.
When , the first equation in (5.20) reduces to the equation of motion of the
simple pendulum developed in Section 5.5 but with zero input torque. Note that the
term is the torque applied to the pendulum by virtue of accelerating
the cart in the direction. As expected, a positive acceleration produces a negative
torque. Check that in Fig. 5.13(a)!
When moving from vector second-order to state-space form one needs to invert the
mass matrix, which might not always be a trivial task. Setting we calculate the
equilibrium points:
Note that the constant is arbitrary, which means that equilibrium of the
pendulum does not depend on a particular value of . In the case of the inverted
pendulum, this is due to the fact that the coordinate does not appear directly
in the equations of motion. This is analogous to what happens in the car model (2.1),
and indicates the existence of equilibrium trajectories such as (5.18), in which the
velocity is constant as opposed to zero. Indeed, for the pendulum in a cart,
and so that a reduced
state-space model is possible, with
or when :
where we defined and used the positive quantities
to simplify the entries in the matrices. The linearized matrices are very similar except
for a couple of sign changes. However, these small changes are fundamental for
understanding the behavior of the system around each equilibrium.
where the imaginary eigenvalues are indicative of an oscillatory system. Indeed, when
the damping coefficients and are positive, all eigenvalues of have negative
real part (you should verify this) and, from Lemma 5.1, the equilibrium point is
asymptotically stable.
and one of them will always have positive real part. When the damping coefficients
and are positive, two of the eigenvalues of have negative real part but one
remains on the right-hand side of the complex plane. From Lemma 5.1, the equilibrium
point is unstable.
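Without reproducing the algebra, the sign difference between the two equilibria can be confirmed with a numerical sketch of the standard small-angle cart–pendulum model (all parameter values below are made up; the state is (x, θ, ẋ, θ̇) and the mass matrix is inverted exactly as described above):

```python
import numpy as np

# Illustrative data: cart mass M, pendulum mass m, pendulum inertia J about its
# center of mass, distance r to the center of mass, frictions bc (cart), bp (pendulum)
M, m, J, r, g = 1.0, 0.3, 0.025, 0.5, 9.8
bc, bp = 0.1, 0.05

def A_cart_pendulum(up):
    """Linearized state matrix about the hanging (up=False) or inverted (up=True)
    equilibrium for the standard small-angle cart-pendulum model."""
    s = -1.0 if up else 1.0               # gravity restores (hanging) or repels (inverted)
    Mm = np.array([[M + m, m * r],        # mass matrix to invert
                   [m * r, J + m * r**2]])
    K = np.array([[0.0, 0.0],
                  [0.0, s * m * g * r]])  # gravity stiffness: the sign change
    Cd = np.diag([bc, bp])                # viscous damping
    Mi = np.linalg.inv(Mm)
    return np.block([[np.zeros((2, 2)), np.eye(2)],
                     [-Mi @ K, -Mi @ Cd]])

eig_down = np.linalg.eigvals(A_cart_pendulum(up=False))
eig_up = np.linalg.eigvals(A_cart_pendulum(up=True))
```

The hanging equilibrium has no eigenvalue in the open right half-plane (one eigenvalue sits at zero because the cart position does not enter the equations of motion, the rigid-body mode discussed above), while the inverted equilibrium always has one eigenvalue with positive real part.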
Our third example is that of a simplified four-wheel vehicle traveling as depicted in Fig.
5.14. Without any slip, the wheels of the car remain tangent to circles centered at the
virtual point c, as shown in the figure. A real car uses a more complicated steering
mechanism to traverse the same geometry shown in Fig. 5.14: only the front wheels
turn, as opposed to the front axle, and real tires allow some slip to occur. The front
axle steering angle, , is related to the radius of the circle that passes through the mid-point
of the rear axle by the formula
We assume that the steering angle is in the interval . If v is the rear
axle’s mid-point tangential velocity, then v is related to the car’s angular velocity, , by
If is the position of the mid-point of the rear axle then the velocity vector
is
When , the car does not have any equilibrium points because there exists no
such that . For this reason we linearize around a moving trajectory.
For example, a straight horizontal line
which are evaluated to compute the matrices of the linearized system (5.17):
In this case, the matrices and happen not to depend on the time t, and
hence the linearized system is in fact time-invariant. See the footnote for an example
where m and J are the car’s mass and moment of inertia, r is the distance measured
along the car’s main axis from the mid-point of the rear axle to the car’s center of mass,
f is a tangential force applied at the rear axle, and b is a (viscous) damping coefficient.
Note that when u is small or is small then
which is the same equation as (2.1) used before to model a car moving in a straight
line.
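The controllability issue raised later in P5.7 can be previewed numerically. Under a common kinematic sketch of the linearization about straight driving at constant speed v̄ — deviation model δẋ = 0, δẏ = v̄ δθ, δθ̇ = (v̄/L) δφ with wheelbase L, which is an assumption here and not necessarily the book's (5.23) — steering alone cannot correct the along-track deviation:

```python
import numpy as np

v_bar, L = 10.0, 2.5   # cruise speed and wheelbase (illustrative values)

# Deviations from a straight trajectory: state (dx, dy, dtheta), input dphi (steering)
A = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, v_bar],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [v_bar / L]])

# Kalman controllability matrix [B, AB, A^2 B]
ctrb = np.hstack([B, A @ B, A @ A @ B])
rank = np.linalg.matrix_rank(ctrb)   # rank 2 < 3: not controllable
```

The deficient rank corresponds to the along-track coordinate: with the speed fixed, no amount of steering can move the car forward or backward along its own trajectory.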
Suppose that one designs a linear controller based on a model, , linearized around
the equilibrium point, , of a certain nonlinear system, G. Two questions need to
be answered. (a) What is the exact form of the controller? (b) Under what conditions
will this controller stabilize the original nonlinear system?
To answer the first question, observe that the linearized system (5.16) is developed
in terms of deviations, , , and from (5.15), and that a linear feedback controller,
K, designed based on the linearized model, , as shown in the diagram in Fig. 5.15(a)
corresponds to
The actual control that needs to be applied to the original nonlinear
system, G, is therefore
The resulting closed-loop diagram is shown in Fig. 5.15(b). In practice, whenever
possible, most control systems are linearized about a zero input (can you explain
why?), which further simplifies the diagram. Moreover, if integral control is used, one
can often dispense with the input if it is constant. Compare the block-diagram in Fig.
5.15(b) with the one in Fig. 4.11 and recall the discussion in Section 4.5 on how integral
control can reject the constant “disturbance,” . We will discuss integral control for
nonlinear systems at the end of this section.
where is given by (5.19). As discussed in Section 5.5, if all parameters are positive
then is asymptotically stable and is unstable. If a linear proportional controller
is to be used to stabilize the pendulum around the unstable equilibrium then it
must, at a minimum, stabilize the linearized model . In other words, the poles of
the sensitivity transfer-function
must have negative real part. As we will discuss in detail in Sections 6.1 and 6.5, the
linearized closed-loop system is asymptotically stable if
Following Fig. 5.15, this controller must be implemented as in Fig. 5.16 after setting
and .
Figure 5.16 Linear control of the simple pendulum using the scheme of Fig. 5.15;
, ; around stable equilibrium, and around unstable
equilibrium.
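As a simulation sketch of this scheme (pendulum data and gain are illustrative, not the book's values), let φ denote the deviation of the pendulum angle from the upright equilibrium, where the equilibrium torque ū is zero, so that the applied control reduces to the linear feedback δu = −k φ:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative pendulum data: Jt = J + m r^2, gravity torque coefficient m*g*r, damping b
Jt, mgr, b = 1.0, 2.0, 0.5
k = 5.0                     # proportional gain; k > m*g*r is needed for stability

def closed_loop(t, z):
    """phi = deviation from upright; du = -k*phi is the linear feedback."""
    phi, phidot = z
    du = -k * phi
    return [phidot, (mgr * np.sin(phi) - b * phidot + du) / Jt]

sol = solve_ivp(closed_loop, [0.0, 30.0], [0.3, 0.0], rtol=1e-8)
phi_final = abs(sol.y[0, -1])   # converges toward the upright equilibrium
```

With the gain exceeding the gravity torque coefficient, the simulated trajectory settles at the (otherwise unstable) upright position.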
One must be especially careful when a linear controller is used to move a nonlinear
system away from its natural equilibrium point. If the reference is constant,
is a dynamic linear controller, and the nonlinear system, G, is described in
state-space by Equations (5.12), then the closed-loop equilibrium point must satisfy
When the reference input is zero, , the controller is a regulator (see Section
4.5). From Lemma 5.1, we know that a regulator designed to stabilize a model
linearized at also stabilizes the original nonlinear system in the neighborhood of
the equilibrium point . If the initial conditions of the closed-loop system are close
enough to , then we should expect that x converges to , u converges to , and y
converges to , so that and . This is a partial answer to
question (b).
to the controller. Washout filters are used in applications where the controller should
have no authority over the system’s steady-state response but needs to act during
transients. A typical application is in the control of electric power systems, in which the
steady-state response is dictated by the loads of the circuit and should not be affected
by the controller [PSL96]. Another application is in flight control. See [FPE14, Section
10.3] for a complete design of a yaw damper for a Boeing 747 aircraft that uses a
washout filter to preserve pilot authority.
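A washout filter is typically a first-order high-pass element; a common form (an illustrative choice, not necessarily the one used in the cited designs) is H(s) = s/(s + 1/T), which passes transients but has H(0) = 0 and therefore no authority over the steady state:

```python
import numpy as np
from scipy import signal

T = 2.0  # washout time-constant (illustrative value)
washout = signal.TransferFunction([1.0, 0.0], [1.0, 1.0 / T])  # H(s) = s/(s + 1/T)

# Step response: the filter reacts to the transient, then "washes out" to zero
t = np.linspace(0.0, 20.0, 500)
t, y = signal.step(washout, T=t)
```

The step response starts at 1 and decays to zero: the controller behind a washout filter sees the change, not the steady-state level.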
When the controller is not a regulator, that is, when the reference is nonzero, the closed-loop controller will
typically modify the closed-loop equilibrium point. Tracking is all about modifying the
natural (open-loop) equilibrium of systems. In light of Lemma 5.1, a necessary condition
for stability of the closed-loop system when is a constant is that the controller K
stabilizes the closed-loop system linearized at the new equilibrium point , given
in (5.25). If is close to and the nonlinearities in f and g are mild, then one
can hope that stabilization of the system linearized at might also imply
stabilization of the system linearized at , but there are no guarantees that can be
offered in all cases. As with linear systems, there will likely be a nonzero steady-state
error, i.e. . In many cases, one can enforce a zero closed-loop steady-state error
using integral control. With an integrator in the loop, one often dispenses with the
constant inputs and in the closed-loop diagram of Fig. 5.15(b), which
reduces the controller to the standard block-diagram in Fig. 1.8:
An informal argument to support integral control in nonlinear systems is as follows:
given that the initial conditions are close enough to . As with linear systems, the
price to be paid is a more complex dynamic system to contend with. On the other
hand, there is solace in the fact that neither nor need be accurately
estimated.
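A sketch of this argument on a made-up scalar plant ẋ = −x − x³ + u: a PI controller drives the tracking error to zero even though the equilibrium input ū = r̄ + r̄³ is never computed by the controller (here ū would be 2 for r̄ = 1):

```python
import numpy as np
from scipy.integrate import solve_ivp

kp, ki, r_bar = 2.0, 1.0, 1.0   # illustrative gains and constant reference

def closed_loop(t, s):
    """State s = (x, z): plant state and integrator state."""
    x, z = s
    e = r_bar - x                 # tracking error
    u = kp * e + ki * z           # PI control: no feedforward u_bar is needed
    return [-x - x**3 + u, e]     # plant: xdot = -x - x^3 + u; integrator: zdot = e

sol = solve_ivp(closed_loop, [0.0, 40.0], [0.0, 0.0], rtol=1e-8)
x_final = sol.y[0, -1]            # approaches r_bar: zero steady-state error
```

At equilibrium the integrator state settles at whatever value makes u = ū, which is exactly the point of the argument: neither ū nor x̄ has to be estimated accurately.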
While a detailed analysis of nonlinear feedback systems is beyond the scope of this
book, some methods to be introduced in Chapter 8 in the context of robustness can be
used to rigorously analyze feedback systems with certain types of nonlinearities. For
example, in Section 8.4, a rigorous analysis of the effectiveness of integral control will
be provided in the case of the simple pendulum which takes into account the
pendulum’s full nonlinear model.
Whether we call it nonlinearity, uncertainty, or nature, we can never stress too much
that predictions based on models, including closed-loop predictions, can be far off if
not validated experimentally. One should be suspicious of predicted closed-loop
performances that seem to be too good to be true. For example, in Chapters 2 and 4,
aggressive controllers produced inputs that saturated the available throttle input in the
car cruise controller. At a minimum, one should always look at the control signal
produced by the controller to perform the desired task. This has been done in Chapters
2 and 4 first using linear models, then simulating the closed-loop system using a more
refined model which was nonlinear. Ultimately, one would move on to
experimentation to validate the predicted performance in a variety of operating
conditions. That humans are prone to take models for reality more than we should is
not news. Check out [Tal07] for some provocative discussion.
Problems
5.1 Write state-space equations for the block-diagrams in Fig. 5.17 and in each case
compute the transfer-function from u to y.
Figure 5.17 Block-diagrams for P5.1.
and
in feedback,
5.3 Verify that the state-space equations (5.3) with matrices as in (5.9) are
controllable except when , i.e. there exist some nonzero vector z and some
complex number such that
only when .
5.4 Verify that the state-space equations (5.3) with matrices as in (5.9) are observable
except when , i.e. there exist some nonzero vector x and scalar that solve
Equations (5.10) only when .
5.5 Show that the state-space equations (5.3) with matrices as in (5.11) are minimal.
Hint: Apply the conditions in P5.3 and (5.10) to show that they are controllable and
observable.
5.6 Compute the linearized equations for the pendulum in a cart model, Equations
(5.20), developed in Section 5.6. Let kg, kg, m, kg m
/s, km/s, , and , and use MATLAB to compute the
transfer-function from u to and from u to around the equilibrium points
calculated with and and . Are the equilibria asymptotically stable?
5.7 Verify that the linearized model of a steering car, Equations (5.23), developed in
Section 5.7 is not controllable. Interpret this result on the basis of your knowledge of
the physical behavior of this system. Hint: Use P5.3.
5.8 You have shown in P2.4 that the ordinary differential equation
is a simplified description of the motion of an object of mass m dropping vertically
under constant gravitational acceleration, g, and linear air resistance, . Let the
gravitational force, , be the input and let the vertical velocity, v, be the output, and
represent this equation in a block-diagram using only integrators. Rewrite the
differential equation in state-space form.
5.10 Repeat P5.8 considering the vertical acceleration, , as the output. Hint: Use
the original equation to obtain as a function of v.
5.12 Use the block-diagram obtained in P5.11 and MATLAB to simulate the velocity of
an object with kg, kg/m, and m/s falling with zero initial
velocity. Compare your solution with the one from P2.9.
5.15 You have shown in P2.10 and P2.12 that the ordinary differential equation
5.18 Use the block-diagram obtained in P5.15 and MATLAB to simulate the rotating
machine’s angular velocity, , with N m, mm, mm,
kg m /s, kg m /s, kg m , kg m , and zero
initial angular velocity. Compare your solution with the one from P2.13.
5.19 You have shown in P2.18 that the ordinary differential equation
is a simplified description of the motion of the elevator in Fig. 2.18(b), where is the
angular velocity of the driving shaft and is the elevator’s load linear velocity. Let the
torque, , and the gravitational torque, , be inputs and let the
elevator’s linear velocity, , be the output, and represent this equation in a block-
diagram using only integrators. Rewrite the differential equation in state-space form.
5.21 Use the block-diagram obtained in P5.19 and MATLAB to simulate the elevator’s
linear velocity, , with m/s , N m, m, kg,
kg m /s, kg m , and initial velocity m/s.
Compare your solution with the one from P2.19.
5.23 You have shown in P2.28 that the ordinary differential equation
5.25 Use the block-diagram obtained in P5.23 and MATLAB to simulate the mass–
spring–damper position, x, with m/s , kg, N/m, kg/s,
and initial position cm.
5.26 You have shown in P2.32 that the ordinary differential equations
5.28 You have shown in P2.38 that the ordinary differential equation
is an approximate model for the OpAmp-circuit in Fig. 2.23. Let the voltage v be the
input and let the voltage be the output, and represent this equation in a block-
diagram using only integrators. Rewrite the differential equation in state-space form.
5.30 You have shown in P2.41 that the ordinary differential equation
is a simplified description of the motion of the rotor of the DC motor in Fig. 2.24, where
is the rotor angular velocity. Let the armature voltage, , be the input and let the
angular velocity, , be the output, and represent this equation in a block-diagram
using only integrators. Rewrite the differential equation in state-space form.
5.32 Use the block-diagram obtained in P5.30 and MATLAB to simulate the DC motor
angular velocity, , with kg m , N m/A, V
s/rad, kg m /s, , V, and zero initial angular
velocity.
where the armature current, , is related to the armature voltage, , and the rotor
angular velocity, , through
As in P5.30, let the armature voltage, , be the input and let the torque, , be the
output, and represent this equation in a block-diagram using only integrators. Rewrite
the differential equation in state-space form and calculate the associated transfer-
function. Compare your answer with that for P4.34.
5.34 As in Section 2.8, the water level, h, in a rectangular water tank of cross-sectional area A can be modeled as the integrator
where is the inflow rate. If water is allowed to flow out from the bottom of the
tank through an orifice then
where the resistance and depend on the shape of the outflow orifice,
is the ambient pressure outside the tank, and
is the pressure at the water level, where is the water density and g is the
gravitational acceleration. Combine these equations to write a nonlinear differential
equation in state-space relating the water inflow rate, , to the water tank level, h,
and represent this equation in a block-diagram using only integrators.
5.35 Determine a water inflow rate, , such that the tank system in P5.34 is in
equilibrium with a water level . Linearize the state-space equations about
this equilibrium point for and compute the corresponding transfer-function. Is
the equilibrium point asymptotically stable?
5.36 You have shown in P2.49 that the temperature of a substance, T (in K or in C),
flowing in and out of a container kept at the ambient temperature, , with an inflow
temperature, , and a heat source, q (in W), can be approximated by the differential
equation
where m and c are the substance’s mass and specific heat, and R is the overall system’s
thermal resistance. When the flow rate, w, is not constant, this model is nonlinear. Let
the heat source, q, the flow rate, w, and the ambient and inflow temperatures, and
, be the inputs and let the temperature, T, be the output, and represent this
equation in a block-diagram using only integrators. Rewrite the differential equation in
state-space form.
5.38 Assume that water’s density and specific heat are kg/m and
J/kg K. Design a feedback controller
5.39 The equations of motion of a rigid body with principal moments of inertia ,
, and are given by Euler’s equations:
Represent this set of equations in a block-diagram using only integrators and rewrite
the differential equations in state-space, where
are the angular velocity and torque vectors, is at the same time the state vector and
the output, and is the input.
is an equilibrium point for the rigid body in P5.39. Linearize the equations about this
equilibrium point and show that if or then this is an
unstable equilibrium point. Interpret this result.
are a simplified description of the motion of a satellite orbiting earth as in Fig. 5.18,
where r is the satellite’s radial distance from the center of the earth, is the satellite’s
angular velocity, m is the mass of the satellite, M is the mass of the earth, G is the
universal gravitational constant, is a force applied by a thruster in the tangential
direction, and is a force applied by a thruster in the radial direction. Represent
these equations in a block-diagram using only integrators and rewrite the differential
equations in state-space.
where
5.43 Consider the satellite model from P5.41 and P5.42. Letting kg be
the mass of the earth, and N m /kg , calculate the altitude, R, of a
1600 kg GPS satellite in medium earth orbit (MEO) with a period of 11 h. If earth’s
radius is approximately km, calculate the satellite’s altitude measured from
the surface of the earth. Is the GPS satellite equilibrium point asymptotically stable? Is
it unstable? Does it depend on the mass of the satellite?
5.44 Verify that the linearized model of a satellite in P5.42 is not controllable if only
radial thrust is used. Interpret this result on the basis of your knowledge of the system.
Hint: Use P5.3.
5.45 Verify that the linearized model of a satellite in P5.42 is controllable if tangential
thrust alone is used. Interpret this result on the basis of your knowledge of the system.
Hint: Use P5.3.
has been used by Malthus to study population growth. In this context, x is the current
population and is the growth rate, where b is the birth rate and m is the
mortality rate. Explain the behavior of this model when , , and .
5.47 Verhulst suggested that the growth rate in the model of P5.46 often depends on
the size of the current population:
Calculate the equilibrium points for this model. Assume that all constants are positive,
linearize about the equilibrium points, and classify the equilibria as asymptotically
stable or unstable. Represent the equation in a block-diagram using integrators and
use MATLAB to simulate a population with starting at for
equal to 0, , 1, and 2.
where r, a, e, and m are positive constants, is used to model two populations where
one of the species is the prey and the other is the predator, e.g. foxes and rabbits. The
variable is the prey population, is the predator population, r is the intrinsic rate
of prey population increase, a is the death rate of prey per predator encounter, e is the
efficiency rate of turning prey into predators, and m is the intrinsic predator mortality
rate. Calculate the equilibrium points for this model. Assume that all constants are
positive, linearize about the equilibrium points, and classify the equilibria as
asymptotically stable or unstable. Represent the equations in a block-diagram using
integrators and use MATLAB to simulate the predator and prey populations for
, , , and , with and . Try other
initial conditions and comment on your findings.
5.49 A simplified model for the level of glucose, y, in humans as a function of the
insulin concentration, , is the following set of nonlinear ordinary differential equations
(see [Ste+03]):
where and all constants are positive. Calculate the unique equilibrium point,
, when is constant. Represent the equations in a block-diagram
using integrators.
5.50 Show that the insulin model from P5.49 linearized at its equilibrium point is
with transfer-function
5.51 The authors of [Ste+03] have verified that when mg/dL then
min and mg/dL per U/mL provide a good
experimental fit. Calculate the poles and zeros of the linearized transfer-function
model from P5.50 and classify the corresponding equilibrium point as asymptotically
stable or unstable.
5.52 After insulin has been released in the plasma at a rate u, its concentration, ,
does not reach steady-state values instantaneously. Instead,
Combine the results from P5.49–P5.51 to show that the transfer function from to
is
Use the value min and from [Ste+03] and substitute numerical
values from P5.51 to calculate the corresponding poles and zeros.
2. The response plotted in Fig. 5.7 is still idealized as it assumes a voltage source that has a step discontinuity. This model could be improved to account for that as well, if desired.
5. There are many ways to make peace with the notion of a square matrix exponential. One is through the power series
e^{At} = I + At + (At)²/2! + (At)³/3! + ⋯,
which is a direct extension of the standard power series of the scalar exponential function. For the most part, the exponential function of a matrix operates like, and has properties similar to those of, the regular exponential, e.g. e^{A(t+s)} = e^{At} e^{As}. Beware that some properties hold only if the matrices involved commute, e.g. e^{A+B} ≠ e^{A} e^{B} unless AB = BA. Other properties require that A be nonsingular, e.g. ∫₀ᵗ e^{Aτ} dτ = A⁻¹(e^{At} − I). Note that A, e^{At}, and A⁻¹ all commute, that is, A e^{At} = e^{At} A and A⁻¹ e^{At} = e^{At} A⁻¹.
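The commuting caveat is easy to check numerically with SciPy's matrix exponential (the matrices below are arbitrary examples):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])   # A and B do not commute: AB != BA

# The scalar rule e^{a+b} = e^a e^b fails for non-commuting matrices
commute_free_rule_holds = np.allclose(expm(A + B), expm(A) @ expm(B))   # False

# With commuting matrices (B2 = 2A commutes with A) the rule does hold
B2 = 2.0 * A
rule_holds = np.allclose(expm(A + B2), expm(A) @ expm(B2))              # True
```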
7. The next lines may be a bit too advanced for some readers. The important idea is that
8. Messier formulas are available in the case of proper systems. See P5.2.
10. In this case J + mr² = mℓ²/12 + m(ℓ/2)² = mℓ²/3, which is the rod's moment of inertia about one of its ends.
11. See P2.41 and P3.95 for a model of a DC motor.
12. The eigenvalues of A are the roots of the characteristic equation det(sI − A) = 0, which is similar to Equation (4.12) studied in Section 4.2. See also Section 6.1.
With ,
16. One problem with this argument is that there might not exist an equilibrium input such that
For example, consider the open-loop stable first-order linear system with an input
saturation
17. A more subtle obstacle is that, even when there exists an equilibrium, it might not be reachable. Consider for example an integral controller in feedback with the nonlinear system
system
In state-space,
is Hurwitz this equilibrium point is asymptotically stable. For instance, with initial
conditions the closed-loop system converges to as t
grows. However, with initial condition , the system never reaches
this equilibrium. As tries to grow continuously toward 2, the value of grows
linearly but the value of becomes more and more negative as
approaches 1. Indeed, can never exceed one, so it never reaches equilibrium.
6
Controller Design
In this chapter we introduce a number of techniques that can be used to design
controllers. Our main goal is to understand how the locations of the poles and zeros of
the open-loop system influence the locations of the poles of the closed-loop system.
We start with a thorough study of second-order systems and culminate with a graphic
tool known as root-locus, in which the poles of the closed-loop system can be plotted in
relation to the open-loop poles and zeros and the loop gain. Along the way we
introduce derivative control and the ubiquitous proportional–integral–derivative
controller.
Stable first- and second-order systems epitomize the basic behaviors of stable dynamic
linear systems: exponentially decaying, potentially oscillatory responses. Consider a
second-order system with characteristic equation
s² + 2ζωn s + ωn² = 0, (6.1)
where the parameter ζ is the damping ratio and the parameter ωn is the natural
frequency. Most second-order polynomials can be put in this form by an adequate
choice of ζ and ωn. We will provide concrete examples later. The nature of the
response of a second-order system is controlled by the location of the roots of the
characteristic equation (6.1):
where
Note that the parameter ωn only scales the roots, and that the parameter ζ controls
whether the roots are real or complex-conjugate: if |ζ| ≥ 1 the roots are real; if |ζ| < 1
they are complex conjugates.
When ζ > 1, a second-order system has real roots and its response is the
superposition of the responses of two first-order systems. Indeed, the inverse Laplace
transform of a second-order transfer-function with real poles has terms
where the exact values of and depend on the zeros of the transfer-function and
can be computed from residues as in Chapter 3. In the complex plane, the roots are
located on the real axis symmetrically about the point −ζωn, as shown in Fig.
6.1(a). If ζ > 0 the roots are on the left-hand side of the complex plane, hence the
associated transfer-function is asymptotically stable. If ζ < 0 the roots are on the
right-hand side of the complex plane, and the associated transfer-function is unstable.
Figure 6.1 Locations of the roots of the second-order equation (6.1); roots are marked
with “×.”
where
The following facts can be used to locate the roots in the complex plane:
(1) the real part of the roots is equal to −ζ ω_n;
(2) the absolute value of both roots is equal to ω_n;
(3) the angle θ measured between the root and the imaginary axis satisfies sin θ = ζ.
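These three facts are easy to verify numerically for a complex-conjugate pair; a small Python check (variable names are ours):

```python
import math

zeta, wn = 0.6, 5.0
# one of the two complex-conjugate roots for 0 < zeta < 1
s = complex(-zeta * wn, wn * math.sqrt(1 - zeta**2))

real_part = s.real                                   # fact (1): equals -zeta*wn
magnitude = abs(s)                                   # fact (2): equals wn
angle_from_imag_axis = math.atan2(-s.real, s.imag)   # fact (3): equals asin(zeta)
```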
The damping ratio, ζ, controls how damped the system response is. The name and
the following terminology are borrowed from the standard analysis of the harmonic
oscillator: when ζ = 0 the roots are purely imaginary and the response contains an
oscillatory component at the natural frequency, ω_n; when 0 < ζ < 1 the system
is said to be underdamped, as any response will be oscillatory with damped natural
frequency ω_d; when ζ = 1 the system is said to be critically damped, with two
repeated real roots; when ζ > 1 the system has two real roots with negative real part
and is said to be overdamped.
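The terminology can be summarized in a small classification helper (our own sketch; the function name is not from the text):

```python
def damping_class(zeta):
    """Terminology for the roots of s^2 + 2*zeta*wn*s + wn^2 = 0, zeta >= 0."""
    if zeta == 0:
        return "undamped"           # purely imaginary roots, sustained oscillation
    if zeta < 1:
        return "underdamped"        # complex-conjugate roots, decaying oscillation
    if zeta == 1:
        return "critically damped"  # repeated real root at -wn
    return "overdamped"             # two distinct negative real roots
```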
Consider the standard second-order system with transfer-function

G(s) = ω_n² / (s² + 2 ζ ω_n s + ω_n²).   (6.5)

When 0 < ζ < 1, its unit step response is

y(t) = 1 − e^(−σt) [ cos(ω_d t) + (σ/ω_d) sin(ω_d t) ],   t ≥ 0,

where the damped natural frequency, ω_d, is given by (6.3), and σ is given by (6.4). We
plot the step response for various values of the damping ratio, ζ, in Fig. 6.2.
It is useful to define and quantify some terminology with respect to the step
response of the standard second-order system (6.5) when 0 < ζ < 1. As can be seen in
Fig. 6.2, there is always some overshoot, that is, the response exceeds its steady-state
value. This is a new dynamic characteristic, since strictly proper first-order systems
never overshoot. By differentiating the response with respect to t and looking for
peaks, you will determine in P6.3 that

t_p = π / ω_d,   y(t_p) = 1 + e^(−σπ/ω_d)   (6.8)

are the time and value of the first peak. It is common to quantify overshoot in terms of
the percentage overshoot:

PO = 100 e^(−σπ/ω_d) % = 100 e^(−πζ/√(1−ζ²)) %.
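Both peak formulas are straightforward to evaluate numerically; a minimal Python sketch of these standard expressions (function names are ours):

```python
import math

def peak_time(zeta, wn):
    """Time of the first peak of the unit step response, pi/wd (0 < zeta < 1)."""
    wd = wn * math.sqrt(1 - zeta**2)   # damped natural frequency
    return math.pi / wd

def percent_overshoot(zeta):
    """Percentage overshoot 100*exp(-pi*zeta/sqrt(1 - zeta**2)) (0 < zeta < 1)."""
    return 100.0 * math.exp(-math.pi * zeta / math.sqrt(1 - zeta**2))

po = percent_overshoot(0.5)   # about 16.3 percent
tp = peak_time(0.5, 2.0)      # about 1.81 seconds
```

Note that the percentage overshoot depends only on ζ, not on ω_n.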
Another useful metric is the settling-time, t_s, which is the time it takes the step
response to become confined within 2% of its steady-state value. You will show in P6.4
that the settling-time can be approximated by

t_s ≈ 4 / (ζ ω_n).
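The 2% settling-time rule can be checked against a simulated step response; a sketch using the standard second-order step-response formula (all names are ours):

```python
import math

def settling_time_sim(zeta, wn, band=0.02, t_end=20.0, dt=1e-3):
    """Last time the simulated unit step response of the standard second-order
    system leaves a +/-2% band around its steady-state value (0 < zeta < 1)."""
    sigma = zeta * wn
    wd = wn * math.sqrt(1 - zeta**2)
    ts = 0.0
    for i in range(int(t_end / dt) + 1):
        t = i * dt
        y = 1 - math.exp(-sigma * t) * (math.cos(wd * t)
                                        + (sigma / wd) * math.sin(wd * t))
        if abs(y - 1) > band:
            ts = t   # remember the most recent band violation
    return ts

ts = settling_time_sim(0.5, 2.0)   # simulated settling time
ts_rule = 4 / (0.5 * 2.0)          # rule of thumb: 4/(zeta*wn) = 4 s
```

For ζ = 0.5 and ω_n = 2 rad/s the simulated value lands within a few percent of the 4/(ζω_n) estimate.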
These figures of merit are illustrated in Fig. 6.3. The rise-time, t_r, and the time-constant,
τ, are also shown in this figure. They are computed in exactly the same way as for first-
order systems (see Section 2.3): the rise-time is the time it takes for the response to go
from 10% to 90% of its steady-state value, and the time-constant is the time it takes the
response to reach 1 − e^(−1) ≈ 63% of its steady-state value. Unfortunately, it is not
possible to give simple exact formulas for the rise-time and the time-constant of second-
order systems. The approximate formulas in (6.11), which you will validate in P6.5, can
be used instead.
With the above facts in mind, we revisit the cruise control solution with integral
control discussed in Section 4.2. The closed-loop characteristic equation obtained in
(4.12) can be put in the form (6.1) after a suitable choice of ζ and ω_n.
From these relations it is clear that one cannot raise the integral gain in order to
increase ω_n, i.e. the speed of the response, without compromising the damping ratio,
ζ. By contrast, if proportional–integral control is used, the closed-loop poles are
governed by the characteristic equation (4.17), meaning that they are in the form (6.1)
with ζ and ω_n depending on both gains.
One can now choose the integral gain to set the natural frequency, ω_n, to any desired
value and then choose the proportional gain to set the damping ratio, ζ, independently.
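To make this two-degree-of-freedom idea concrete, here is a hedged sketch assuming a first-order plant m·v̇ + b·v = u under PI control, so that the closed-loop characteristic polynomial is m s² + (b + k_p) s + k_i; this plant form and the numerical values are our assumptions for illustration, not taken from the text:

```python
def pi_gains(m, b, zeta, wn):
    """PI gains placing the closed-loop poles of the assumed plant
    m*v' + b*v = u at the roots of s^2 + 2*zeta*wn*s + wn^2 = 0, using
    the closed-loop characteristic polynomial m*s^2 + (b + kp)*s + ki."""
    ki = m * wn**2                  # integral gain sets the natural frequency
    kp = 2 * zeta * wn * m - b      # proportional gain then sets the damping
    return kp, ki

kp, ki = pi_gains(m=1500.0, b=75.0, zeta=0.7, wn=0.2)  # illustrative numbers
```

Solving first for k_i and then for k_p mirrors the order suggested in the text: pick ω_n, then ζ.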
As in Section 5.5, if all parameters are positive then the model linearized around the
stable equilibrium θ = 0 is asymptotically stable and the model linearized around the
inverted equilibrium, θ = π, is unstable. Our first task is to develop a controller that can
stabilize the unstable linearized system. Recalling the discussion in Section 5.8, a linear
controller that stabilizes the system linearized around an equilibrium point will also
stabilize the nonlinear system, at least in a neighborhood of that equilibrium.
The connection of the nonlinear model of the pendulum derived in Section 5.5 with
the proportional controller is depicted in the block-diagram in Fig. 6.4. The
feedback connection of the pendulum with this controller, linearized in the
neighborhood of θ = π, has as closed-loop transfer-function
This is a second-order system with characteristic equation (6.1), where
These expressions are similar to the ones found in the case of the integral cruise
control of the car, where increasing K improves the speed of response, ω_n, at the
expense of decreasing the damping ratio, ζ. A complication is the fact that K needs to
be chosen high enough to stabilize the pendulum.
Figure 6.4 Linear proportional control of the simple pendulum around the stable
equilibrium, θ = 0, and around the unstable equilibrium, θ = π.
It is desirable to have a controller that can work around both equilibrium points,
that is, near both θ = 0 and θ = π. A controller that is capable of operating a system
in a variety of conditions is a robust controller. We will discuss robustness in more
detail later, in Sections 7.7 and 8.2. For now, we are happy to verify that the same
proportional control does not de-stabilize the system when operated near the stable
equilibrium θ = 0. Repeating the same steps as above, we obtain the closed-loop
transfer-function
transfer-function
for which
We can see that any value of the gain K that stabilizes the unstable transfer-function
does not de-stabilize the asymptotically stable transfer-function. For example, the
selection of the gain in (6.12) implies
The choice (6.12) halves the damping ratio near the stable equilibrium θ = 0 while
providing only a fraction of the open-loop damping ratio around the unstable
equilibrium θ = π. In both cases the controlled pendulum has a damping
ratio that is smaller than that of the open-loop pendulum. The only way to improve the
damping ratio is to reduce K, which makes the closed-loop system response slower. In
the car cruise control problem, a better solution was proposed in Section 4.2 in the
form of a more complex controller: a proportional–integral controller enabled us to set
the damping ratio and the natural frequency through two independent gains. As we
will discuss in detail in the next section, the key in the case of the pendulum is the
addition of a zero in the controller transfer-function: instead of an integrator we need
a differentiator.
We close this section with a note on the behavior of some systems of order higher
than two. Even though it is not possible to provide simple formulas for the response of
such systems, asymptotically stable higher-order systems that have a real pole or a pair
of complex-conjugate poles with real part much smaller in magnitude than that of the
system's other poles tend to behave like a first- or second-order system. Indeed, the
response quickly becomes dominated by the slower response associated with such
poles: the contribution of the faster, more highly damped poles decays to zero sooner,
leaving a response that appears to be that of a first- or second-order system; the slow
poles are called dominant poles. For this reason, it is common to see performance
specifications of open- and closed-loop systems given in terms of first- or second-order
figures of merit.
We have seen in previous sections that increasing the control gain may have a
detrimental effect on the damping properties of a feedback system. This was the
case in the examples of integral cruise control as well as proportional control of the
pendulum. In order to improve damping we generally need to increase the complexity
of the controller. In a mechanical system, the idea of damping is naturally associated
with a force that opposes and increases with the system velocity, for instance, the
viscous friction term in the simple pendulum model. Using velocity or, more generally,
the derivative of a signal is an effective way to improve damping. However, as will
become clear later, velocity feedback alone may not be enough to asymptotically
stabilize an unstable system, and hence derivative action is often combined with
proportional control.
Let the proportional–derivative control law (6.13) be applied to the simple pendulum
model derived in Section 5.5. This controller can be implemented as shown in the
block-diagram of Fig. 6.5, in which the controller makes use of two measurements: the
angular position, θ, and the angular velocity, θ̇. If we are not able to measure the
velocity, θ̇, directly, then we have to be careful when building the controller, given the
difficulties involved in physically implementing a derivative block, as discussed in
Chapter 5. Either way, in order to avoid working with two outputs, it is convenient to
introduce a derivative block during controller analysis and design. With this caveat in
mind, we can study the single-output block-diagrams in Fig. 6.6, where we use the
Laplace transform variable "s" to represent an idealized derivative operation.
Figure 6.5 Proportional–derivative control of the simple pendulum.
The closed-loop systems in Figs. 6.6(a) and (b) are not equivalent and behave slightly
differently if the reference is not constant. The diagram in Fig. 6.6(a) corresponds to
Fig. 6.5, which implements the control law (6.13). The diagram in Fig. 6.6(b) is the
standard proportional–derivative controller, or PD controller.
From an implementation perspective, the diagram in Fig. 6.6(a) offers some advantages,
especially when the reference has discontinuities or is rich in high-frequency
components. Note that the output y is continuous even if the reference is not. For this
reason, in the presence of discontinuities, one should expect to see short-lived large
spikes in the control input u in Fig. 6.6(b) in comparison with Fig. 6.6(a). This is due
to the differentiation of the reference in Fig. 6.6(b). In terms of transfer-functions, this
difference amounts to an extra zero, with both diagrams having the same closed-loop
poles. To see this, verify (see P6.6) that the extra zero appears in Fig. 6.6(b). Since
Fig. 6.6(b) is a better fit for the standard feedback diagram of Fig. 4.11 that we
analyzed earlier, we shall use Fig. 6.6(b) in the rest of this section.
Because the controller does not have any poles, this is also a second-order system with
characteristic equation (6.1) for which
The same calculations repeated for the model linearized about the stable equilibrium
θ = 0 produce
Note how the values of ω_n are not affected by the introduction of the derivative gain,
which leads to the important conclusion that damping alone (derivative action) would
not have been able to stabilize the pendulum. The effect of the derivative gain is
confined to the damping ratio, ζ.
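This conclusion can be illustrated with a generic second-order characteristic polynomial J s² + (c + k_d) s + k₀, in which the derivative gain enters only the coefficient of s; the polynomial and the names below are our illustration, not the text's pendulum expressions:

```python
import math

def wn_zeta(J, c, kd, k0):
    """Natural frequency and damping ratio of J*s^2 + (c + kd)*s + k0 = 0;
    the derivative gain kd enters only the coefficient of s."""
    wn = math.sqrt(k0 / J)                     # independent of kd
    zeta = (c + kd) / (2 * math.sqrt(J * k0))  # grows linearly with kd
    return wn, zeta

# sweep the derivative gain: wn stays put while zeta increases
results = [wn_zeta(1.0, 0.1, kd, 4.0) for kd in (0.0, 1.0, 2.0)]
```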
For example, after stabilizing the closed-loop pendulum with the choice of gain from
(6.12), we can choose the derivative gain to provide any desired level of damping
around the equilibrium θ = π. This corresponds to setting the derivative gain so that
the resulting damping ratio is a bit smaller than before, but still acceptable. We will
continue to improve this controller in Sections 6.5 and 7.8.
Derivative action can help improve the damping properties of a feedback system, as
seen in the last section. If integral action is required to track constant or low-frequency
references, or reject low-frequency input disturbances, a controller can be constructed
by combining a proportional term with integral and derivative terms to form a
proportional–integral–derivative controller, or PID controller. A generic block-diagram of
a feedback system with a PID controller is shown in Fig. 6.7. PID control is arguably the
most popular form of control and is widely used in industry. Manufacturers of
instrumentation and control hardware offer implementations of PID controllers in
which the gains K_p, K_i, and K_d can be tuned by the user to fit the controlled
process. Most modern hardware also comes with algorithms for identifying the process
model and automatically tuning the gains, among other bells and whistles.
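A textbook discrete-time PID loop can be sketched in a few lines (forward-Euler integration, backward-difference derivative; all names are ours, and a practical implementation would also filter the derivative term and guard against integrator windup):

```python
def make_pid(kp, ki, kd, dt):
    """Parallel-form PID u = kp*e + ki*integral(e) + kd*de/dt, discretized
    with forward-Euler integration and a backward-difference derivative."""
    state = {"integral": 0.0, "e_prev": 0.0}
    def control(e):
        state["integral"] += e * dt
        de = (e - state["e_prev"]) / dt   # idealized derivative term
        state["e_prev"] = e
        return kp * e + ki * state["integral"] + kd * de
    return control

pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
u0 = pid(1.0)   # first sample of a unit error: 2.0 + 0.05 + 1.0
```

The large derivative contribution on the first sample mirrors the spike caused by differentiating a discontinuous reference, as discussed for Fig. 6.6(b).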
One difficulty is that the controller (6.18) is of order one, and application to a
second-order system generally leads to a third-order closed-loop system. The situation
gets much more complicated if additional poles are introduced to make (6.18) proper
and therefore implementable, in which case the order of the PID controller is at least
two. Calculating the roots of a closed-loop characteristic polynomial of order three or
four in terms of the parameters , , and is possible but requires working with
highly convoluted formulas. In addition to that, it is impossible to come up with any
type of analytic formula for computing the roots of a polynomial of order five or higher
except in very special cases. Instead, we will rely on numerical computations and
6
auxiliary graphic tools that we will introduce in the next sections and following
chapters.
6.4 Root-Locus
In this section we introduce a technique for controller design that is based on a graphic
known as root-locus. The root-locus is a plot showing the poles of a system with
loop transfer-function L in feedback with a static gain, K, as shown in Fig. 6.8; that is,
a plot of the roots of the characteristic equation

1 + K L(s) = 0

obtained as K varies from 0 to infinity. The root-locus plot is used to study general
SISO feedback systems after properly grouping all dynamic elements into the loop
transfer-function, L.
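To illustrate sweeping the gain, take the simple loop L(s) = 1/(s(s+1)); this example loop and the names below are ours, not one of the text's models. The closed-loop poles are then the roots of s² + s + K = 0:

```python
import cmath

def closed_loop_roots(k):
    """Roots of 1 + k*L(s) = 0 for the example loop L(s) = 1/(s*(s+1)),
    i.e. the roots of s^2 + s + k = 0."""
    d = cmath.sqrt(1 - 4 * k)
    return (-1 + d) / 2, (-1 - d) / 2

# k = 0:    roots sit on the open-loop poles of L, at 0 and -1
# k = 1/4:  break-away point, double root at -1/2
# k > 1/4:  complex-conjugate pair climbing the vertical line Re(s) = -1/2
samples = {k: closed_loop_roots(k) for k in (0.0, 0.25, 1.0)}
```

Tracing these roots over a fine grid of gains is exactly what a root-locus plot displays.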
Figure 6.8 Closed-loop feedback configuration for root-locus.
Take for instance the diagram of Fig. 4.18. As shown in Chapter 4, the closed-loop
poles are the zeros of the corresponding characteristic equation, and the root-locus
plot is used to determine a suitable gain.
As mentioned earlier, one reason for working with a graphic rather than an algebraic
tool is that it is not possible to find a closed-form expression for the roots of a
polynomial of arbitrary degree as a function of its coefficients, even in this simple case
where the roots depend on only one parameter, . It is, however, relatively simple to
sketch the location of the roots with respect to without computing their exact
location. This can even be done by hand using a set of rules that depend only on the
computation of the zeros and poles of . With the help of a computer program,
such as MATLAB, very accurate root-locus plots can be traced without effort. Knowledge
of some but not necessarily all of the root-locus rules can help one predict the effect of
moving or adding extra poles or zeros to L, which is a very useful skill to have when
designing controllers and analyzing feedback systems. It is with this intent that we
introduce some of the root-locus rules. Readers are referred to standard references,
e.g. [FPE14, DB10], for a complete set of rules, including a detailed discussion of the
case of negative gains.
A key observation behind the root-locus is the following property: suppose the
complex number s is a root of 1 + K L(s) = 0 and K > 0. In this case,

L(s) = −1/K.

In other words, L(s) is a negative real number and hence has phase equal to ±180°.
Conversely, suppose that ∠L(s) = ±180° for some s; then L(s) is a negative real number,
that is, L(s) = −1/K for some K > 0. It is much easier to locate points in the complex
plane for which L has phase equal to ±180° than to compute the roots of
1 + K L(s) = 0 as a function of K. This is especially simple when L is rational,
as discussed below.
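The phase condition is easy to test numerically at a candidate point; a sketch using the example loop L(s) = 1/(s(s+1)) (our choice, for illustration):

```python
import cmath
import math

def on_locus(L, s, tol=1e-9):
    """Phase condition: s lies on the root-locus of 1 + K*L(s) = 0, K > 0,
    exactly when the phase of L(s) is +/-180 degrees."""
    return abs(abs(cmath.phase(L(s))) - math.pi) < tol

L = lambda s: 1 / (s * (s + 1))        # example loop transfer-function

on = on_locus(L, complex(-0.5, 1.0))   # on the vertical branch of the locus
off = on_locus(L, complex(-1.0, 1.0))  # not on the locus
```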
The phase of L, i.e. ∠L(s), can be quickly computed by evaluating the angles measured
from the zeros and poles of L to the point s. This is the key idea behind
sketching the root-locus plot and is illustrated in Fig. 6.9.
Figure 6.9 Measuring the phase of L at a point s; "×" denotes a pole of L and "○"
denotes a zero of L; the point shown is not a root of 1 + K L(s) = 0 because
∠L(s) ≠ ±180°.
Because L, and therefore the roots of 1 + K L, are continuous functions of s at any
point that is not a pole of L, one should expect the root-locus to be a set of continuous
curves in the complex plane. Indeed, this fact and more is part of our first root-locus
rule:
The roots depart from the poles and arrive at the zeros of L, because a root must
satisfy L(s) = −1/K: |L(s)| → ∞ as K → 0, which happens only as s approaches a pole
of L, and L(s) → 0 as K → ∞, which happens only as s approaches a zero of L.
Take for example the linear model we have obtained for the speed of the car in (2.3)
with the parameters estimated in Chapter 2. This model
has an open-loop transfer-function (3.16):
The stability of the feedback loop with proportional control in Fig. 2.7 can be studied
with the root-locus plot after setting
The root-locus plot in this case is very simple, namely the black solid segment of line in
Fig. 6.10. Because L has one pole and no zeros, the locus consists of a single curve
starting at the open-loop pole, marked with a cross, and ending at a "zero at minus
infinity." The markers in Fig. 6.10 locate selected values of K that correspond to the
control gains used to generate the responses in Fig. 2.8. There are no other curves in
this root-locus. It is not possible for the closed-loop system to have complex roots
because L is of order one, and the root-locus remains entirely on the real axis.
Figure 6.10 Root-locus for the proportional control of the car speed model (2.3);
symbols show roots for the gains indicated
in the legend. Compare this with the step responses in Fig. 2.10.
This rule is all that is needed to completely explain the root-locus of Fig. 6.10. For
another example, take the poles and zeros in Fig. 6.9. Application of the rule leads to
the real segment shown in Fig. 6.11 being the only part of the root-locus on the real
axis. The arrow indicates the fact that the locus departs from the pole and arrives at
the zero. We will add the remaining curves to this plot later. Note that the above rule
applies only to the case when K > 0.
Figure 6.11 Locus of real roots; complex locus is not shown; see Fig. 6.13 for complete
root-locus.
When L is of order 2 or higher the root-locus will generally have complex roots.
Consider for example the car model
this time in closed-loop with the integral controller shown in Fig. 4.3. In this case L
has two poles and no zeros, and the corresponding root-locus is shown in Fig. 6.12.
There are two curves (solid segments of line) starting at the open-loop poles and
ending at zeros at infinity. The two curves meet at a break-away point on the real axis,
after which the two roots become a complex-conjugate pair. The markers locate values
of K that correspond to the control gains used to generate the responses in Fig. 4.4.
The root-locus helps us understand why this example's transient performance with
integral-only control is much worse than with proportional-only control: in the I case,
the closed-loop roots can never have real part more negative than the open-loop pole,
whereas in the P case, the closed-loop roots always have real part more negative than
the open-loop pole. Compare Fig. 6.12 with Fig. 6.10. The behavior after the roots have
broken away from the real line is explained by the next rule: as the gain grows, n − m
of the roots approach infinity along asymptotes centered at the real point

(sum of the poles of L − sum of the zeros of L) / (n − m)

at angles

(2k + 1) × 180° / (n − m),   k = 0, 1, …, n − m − 1,

measured from the positive real axis.
The behavior away from the real axis in Fig. 6.12 is completely determined by the
asymptotes. In a more complicated example, the finite portion of the locus depends on
the relative location of the poles and zeros. For the pole–zero configuration in Fig. 6.9
we have n = 3, m = 1, and hence two asymptotes. The exact values of the
poles and zeros are not given in the figure, but we can assume by symmetry that the
center of the asymptotes is at the origin. That is, the two asymptotes coincide with the
imaginary axis. The path followed by the complex root-locus will, however, depend on
the particular values of a and b. Figure 6.13 illustrates three possibilities: in Fig. 6.13(a) the
complex locus never intersects the real locus; in Fig. 6.13(b) the complex locus and the
real locus merge at break-away points where the roots have multiplicity two; finally, in
Fig. 6.13(c) the complex locus and the real locus touch at a single break-away point
where the roots have multiplicity three. Which one will take place depends on the
particular values of the open-loop poles and zeros. There exist rules to determine the
existence of multiple roots on the real axis which allow one to distinguish among these
three cases. These days, however, the best practical way to determine which case
occurs is with the help of computer software, such as MATLAB. Note that, intuitively,
the root paths tend to attract each other. So, if the imaginary component of the
complex-conjugate poles is large, Fig. 6.13(a) is likely to apply, whereas if the imaginary
component is small, Fig. 6.13(b) is likely to apply; finally, Fig. 6.13(c) represents the
transition from Fig. 6.13(a) to Fig. 6.13(b) and applies only for very special values of
a and b.
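The asymptote centroid and angles can be computed directly from the pole and zero locations; a small sketch of the rule above (function name is ours):

```python
def asymptotes(poles, zeros):
    """Centroid and angles (degrees) of the n - m root-locus asymptotes:
    centroid = (sum(poles) - sum(zeros)) / (n - m),
    angles   = (2k + 1)*180/(n - m), k = 0, ..., n - m - 1."""
    n, m = len(poles), len(zeros)
    centroid = (sum(poles) - sum(zeros)) / (n - m)
    angles = [(2 * k + 1) * 180.0 / (n - m) for k in range(n - m)]
    return centroid, angles

# an integrator plus two real poles, no zeros: three asymptotes
c, a = asymptotes([0.0, -1.0, -2.0], [])   # centroid -1; angles 60, 180, 300
```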
Figure 6.12 Root-locus for the integral control of the car speed model (2.3);
symbols show roots for the gains indicated
in the legend. Compare this with the step responses in Fig. 4.4.
Figure 6.13 Possible root-locus for transfer-function with three poles and one zero.
Finally, the car cruise control model with the proportional and integral (PI) controller
in Fig. 4.5 can be analyzed in the same way after grouping the controller and the
model into the loop transfer-function L.
We will now work a complete design example: the control of the simple pendulum. The
goal is to design a controller that can drive the pendulum to any desired angular
position. That is, we want to track a given reference angle around both stable and
unstable equilibrium points. We start as we left off in Section 6.2, where we designed a
proportional–derivative (PD) controller with transfer-function
that was able to stabilize the simple pendulum around both equilibrium points. The
choice of gains
was shown to produce a stable second-order closed-loop system around the unstable
equilibrium, θ = π, with the natural frequency and damping ratio computed earlier.
In order to better quantify the performance of this controller we consider
the following numerical parameters:
Note that we are designing a controller for a frictionless pendulum model, so all
damping must be provided by the controller. We compute the open-loop natural
frequency around the stable equilibrium θ = 0:
The problem with the introduction of the extra pole at the origin is that this controller
no longer stabilizes the closed-loop system for any value of the gain. The introduction
of the pole shifts the center of the asymptotes, and most likely the roots on the
right-hand side of the complex plane will simply converge toward the two vertical
asymptotes, which have positive real part in this case. This is illustrated by the
root-locus in Fig. 6.16.
A better alternative is to move the controller pole from the origin to a point s = −p,
p > 0, on the left-hand side of the complex plane. This is a controller of the form

C(s) = K (s + z)/(s + p),   p > z > 0.

For reasons that will become clear in Chapter 7, the above controller is known as a lead
compensator. We shall use the root-locus diagram to help us select the position of the
pole, that is, the value of p. This is accomplished by defining the loop transfer-function
accordingly.
Placing the pole, −p, too far to the left means that the controller still has to deliver
high gains over a wide region of the frequency spectrum. We compare the magnitudes
of the frequency responses of the above controllers, that is, |C(jω)| (dB) versus ω, in
Fig. 6.18. We will be much better equipped to understand such plots, called Bode plots,
in Chapter 7. For now we just want to compare the magnitude of the frequency
response of the PD controller with that of the lead controller, both in Fig. 6.18. The
lead controller still requires large amplification at high frequencies: the magnitude of
the frequency response of the controller at high frequencies is about 10 times (20 dB,
see Section 7.1) higher than at low frequencies. Because of the high gain at high
frequencies, we will attempt to bring the controller pole closer to the controller zero.
On placing the pole twice as far to the left as the zero, that is, at p = 2z, the resulting
controller has to deliver much less gain at high frequencies, only about 5 dB (roughly
1.8 times) higher than at low frequencies, as seen in Fig. 6.18. However, the
closed-loop behavior is strongly affected, with the root-locus taking a completely
different shape, Fig. 6.17(b). The asymptotes are now located between the two system
poles, which results in much less damping if the natural frequency, ω_n, is to stay close
to its previous value. A comparable damping ratio is now possible only if the
closed-loop system operates at a much slower natural frequency, which is not what one
would want in this case.
In order to improve the closed-loop damping ratio we fix the pole and shift the zero
of the controller toward the right in an attempt to move the asymptotes farther to the
left. This forces the intersection of the locus with the circle of radius ω_n to happen at
higher damping ratios. After moving the zero to the right we can keep the same
natural frequency and still obtain a much higher damping ratio, a huge improvement
when compared with the damping ratio obtained with the zero in its original location.
As a side-effect, as seen in Fig. 6.18, the resulting controller operates with lower gains
at lower frequencies. Interestingly, after moving the zero we have almost performed a
pole–zero cancellation of the stable pole of the pendulum! The result is the controller
Figure 6.19 Root-locus for the lead control of the simple pendulum after fixing the pole
and adjusting the zero in order to improve the closed-loop damping ratio, ζ.
The controller (6.22) was designed to achieve good performance around the unstable
equilibrium θ = π. Before proceeding we shall also evaluate its performance around
the stable equilibrium θ = 0. Figure 6.20 shows the root-locus obtained with controller
(6.22) in closed-loop with the pendulum model linearized around θ = 0. The root-locus
with the pendulum model linearized around θ = π is shown as a dashed line for
comparison. Note that the closed-loop natural frequency is close to the one predicted
before with the proportional–derivative controller, but this controller displays much
less closed-loop damping. This means that one
should be careful if the same controller is to be used around both equilibria, even
though, in practice, additional mechanical damping, which we chose to ignore during
the design, might improve the overall damping. We will justify this later in Section 8.2
when we perform a more complete analysis of the closed-loop system.
Figure 6.20 Root-locus for the lead control (6.22) of the simple pendulum in closed-loop
with the model linearized around the stable equilibrium θ = 0; the root-locus for the
model linearized around the unstable equilibrium θ = π is shown as a dashed line for
comparison.
Let us turn our attention once again to the issue of tracking. As discussed earlier in
Section 5.8, one should be careful when using a model linearized around equilibrium as
the basis for a tracking controller that attempts to move the system away from
equilibrium: if the model near the new equilibrium is too different from the one for
which the controller was originally designed then the design might no longer be valid.
Having said that, we cautiously proceed using the model linearized around the unstable
equilibrium, θ = π, ready to perform a redesign if necessary.
Our goal is to add integral action to the loop. Our previous attempt to simply add a
pole at the origin, Fig. 6.16, revealed serious problems with closed-loop stability. For
this reason we must take a different route. Instead of adding the integrator in series
with the controller, we add it in parallel, the resulting controller being very close to an
implementable PID controller, as we will see. We assume that the lead compensator has
already been designed to provide regulation performance, our goal being to design the
integral term to achieve adequate levels of integral action. The feedback connection is
illustrated in Fig. 6.21. The controller resulting from this diagram has the
transfer-function (6.23), which has not only an extra pole at the origin but also an extra
zero, because both the denominator and the numerator of the controller are of degree 2.
At first it might not be obvious what loop transfer-function, L, to choose for a root-
locus analysis of the diagram in Fig. 6.21. Our goal is to visualize the closed-loop poles
as a function of the integral gain, so we compare Fig. 6.21 with Fig. 6.8 to conclude
that L must be the transfer-function between the input w and the error signal e. From
Figs. 6.21 and 4.11, the closed-loop transfer-function from w to e has already been
computed in (4.25) and (4.26), from which it follows that L has as poles the closed-loop
poles of the connection of the pendulum with the lead compensator plus the origin,
and has as zero the single pole of the lead compensator. The corresponding root-locus
is plotted in Fig. 6.22.
Figure 6.22 Root-locus for the lead control with additional integral action (6.23); the
integral gain is chosen for the closed-loop system to have double real poles and a
slightly smaller natural frequency, marked with diamonds; an alternative choice of gain
that matches the original natural frequency leads to highly underdamped poles close
to the imaginary axis, marked with squares; higher integral gains eventually lead to
instability.
Roots on the path that departs from the pair of complex poles gain more and more
damping as the integral gain grows. It is the pair of roots departing from the real poles
that becomes dominant and needs special attention. With this in mind we select as
integral gain the root-locus gain which corresponds to closed-loop poles at the
break-away point from the real axis, marked in Fig. 6.22 by diamonds. This implies the
fastest possible response from the dominant poles that is not oscillatory. Note that the
alternative choice of gain leads to the only possible stable roots that intersect the
circle of radius equal to the original natural frequency, marked in Fig. 6.22 by squares,
but we do not opt for this choice because it produces a pair of highly underdamped
complex poles on the dominant branch.
Note how the magnitude of the frequency response of the final controller and that
of the lead controller, plotted in Fig. 6.23, are very close at high frequencies. At
low frequencies, the final controller displays the typical unbounded low-frequency gain
provided by the integrator.
Figure 6.23 Magnitude of the frequency response of the lead controller, from
(6.22), and the lead controller with integral action, from (6.25).
Of course one may take different paths to arrive at a suitable controller. For instance,
one might have concluded after the root-locus plotted in Fig. 6.16 that a second zero
was needed and designed a second-order controller by placing the additional zero and
pole and studying the resulting root-locus diagram. The diagram in Fig. 6.24 shows the
root-locus one would study if a second-order controller with the same poles and zeros
as the controller from (6.25) were used. The gain corresponding to the closed-loop
poles marked by diamonds in Fig. 6.24 recovers those shown in Fig. 6.22. Note how
arguments similar to the ones raised when discussing previous root-locus plots can be
used to help locate the poles and zeros in this direct design. One might also enjoy the
complementary discussion in Section 7.8, in which we will revisit the control of the
simple pendulum using frequency domain techniques.
Figure 6.24 Alternative root-locus for the direct study of the control of the pendulum in
closed-loop with a second-order controller with the same poles and zeros as (6.25);
diamond marks indicate the choice of gain that leads to the same closed-loop poles as
in the root-locus in Fig. 6.22.
Problems
6.2 Show that the step response of an underdamped second-order system with
transfer-function as in (6.5) is

y(t) = 1 − e^(−σt) [ cos(ω_d t) + (σ/ω_d) sin(ω_d t) ],

where σ = ζ ω_n and ω_d = ω_n √(1 − ζ²), for 0 < ζ < 1 and t ≥ 0.
6.3 Maximize the step response to show that t_p and y(t_p) given in (6.8) are the time
and value of the first peak of the step response of an underdamped second-order
system with transfer-function as in P6.2 with 0 < ζ < 1. Hint: Differentiate the
response and solve ẏ(t) = 0.
6.5 Verify graphically that the formulas in (6.11) have a maximum relative error of
less than 1% in the range of interest.
6.7 Sketch the root-locus for the SISO systems with poles and zeros shown in Fig. 6.25,
then use MATLAB to verify your answer.
Figure 6.25 Pole–zero diagrams for P6.7.
6.8 Use the root-locus method to determine a proper feedback controller that can
stabilize the SISO systems with poles and zeros shown in Fig. 6.25. Is the transfer-
function of the controller asymptotically stable? Recall that you should never perform a
pole–zero cancellation on the right-hand side of the complex plane. Note: Some are
not trivial!
6.9 You were shown in Section 5.5 that the nonlinear differential equation
is an approximate model for the motion of the simple pendulum in Fig. 5.11 and that
θ = 0 and θ = π are the pendulum equilibrium points. Calculate the
nonlinear differential equation obtained in closed-loop with the linear proportional
controller
Show that θ = 0 and θ = π are still equilibrium points. Linearize the
closed-loop system about θ = 0 and θ = π and calculate the associated
transfer-functions. Assuming all constants are positive, find the range of
values of K that stabilize both equilibrium points.
6.11 You have shown in P2.10 and P2.12 that the ordinary differential equation
and select the gain so that both closed-loop poles are real and as negative as possible.
Is the closed-loop system capable of asymptotically tracking a constant reference
input? Is the closed-loop system capable of asymptotically rejecting a constant input
torque disturbance?
and select the gains such that both closed-loop poles have real part more negative
than the pole of the machine, without performing a pole–zero cancellation. Is the
closed-loop system capable of asymptotically tracking a constant reference input?
6.13 The rotating machine in P6.11 is connected to a piston that applies a periodic
torque that can be approximated by , where the angular frequency
is equal to the angular velocity . Show that the modified equation including this
additional torque is given by
Use the root-locus method to design a dynamic feedback controller that uses as
control input and as the measured output so that the closed-loop system is capable
of asymptotically tracking a constant reference input , , and
asymptotically rejecting the torque perturbation when .
6.14 You have shown in P2.18 that the ordinary differential equation
is a simplified description of the motion of the elevator in Fig. 2.18(b), where is the
angular velocity of the driving shaft and is the elevator’s load linear velocity. Let
m, kg, kg m²/s, kg m², and
m/s². Use the root-locus method to design a dynamic feedback controller that
uses as control input and the elevator’s load vertical position
6.16 You have shown in P2.28 that the ordinary differential equation:
6.17 You have shown in P2.32 that the ordinary differential equations
6.20 Consider the one-eighth-car model from P6.19. Calculate the transfer-function
from the road profile, y, to the mass relative displacement, z. Calculate the value of the
spring stiffness, k, and shock absorber damping coefficient, b, for a car with mass
equal to 640 kg to have a natural frequency Hz and damping ratio .
Locate the roots in the complex plane.
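The arithmetic behind P6.20 can be sketched in a few lines. In the Python fragment below the car mass is the one given in the problem, while the natural frequency and damping ratio are placeholders (the problem's actual values are not reproduced here); the stiffness follows from k = mωn² and the damping from b = 2ζωn m:

```python
import cmath, math

m = 640.0                      # car mass in kg, as given in P6.20
fn, zeta = 1.0, 0.5            # hypothetical natural frequency (Hz) and damping ratio

wn = 2.0 * math.pi * fn        # natural frequency in rad/s
k = m * wn ** 2                # spring stiffness, N/m
b = 2.0 * zeta * wn * m        # shock absorber damping coefficient, N s/m

# Roots of the characteristic polynomial m*s^2 + b*s + k
disc = cmath.sqrt(b ** 2 - 4.0 * m * k)
s1 = (-b + disc) / (2.0 * m)
s2 = (-b - disc) / (2.0 * m)

print(round(k, 1), round(b, 1))
print(s1, s2)                  # poles at -zeta*wn +/- j*wn*sqrt(1 - zeta^2)
```

Substituting the natural frequency and damping ratio requested in the problem gives the corresponding k, b, and pole locations.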
6.21 Consider the one-eighth-car model from P6.19. Use MATLAB to plot the
response of the car suspension with parameters as in P6.20 to a pothole with a profile
as shown in Fig. 6.27, where m and cm for a car traveling at 10 km/h.
Repeat for a car traveling at 100 km/h. Comment on your findings.
6.22 Consider the one-eighth-car model from P6.19 and a road with profile
, where v is the car’s velocity. What is the worst possible velocity a
car with suspension parameters as in P6.20 could be traveling at as a function of the
road wavelength ?
from which u can be interpreted as the output of a PD controller. Use this fact to repeat
P6.20 using the root-locus method.
6.25 Consider the one-quarter-car model from P6.24. If the tire stiffness is
N/m and the tire damping coefficient is negligible, i.e. , use
MATLAB to calculate the transfer-function from the road profile, y, to the relative
displacement and select values of the spring stiffness, , and shock absorber
damping coefficient, , for a car with mass kg and wheel mass
kg to have its dominant poles display a natural frequency Hz and
damping ratio . Locate all the roots in the complex plane.
6.26 Consider the one-quarter-car model from P6.24. Use MATLAB to plot the
response of the car suspension with parameters as in P6.25 to a pothole with a profile
as shown in Fig. 6.27, where 1 m and cm for a car traveling at 10 km/h.
Repeat for a car traveling at 100 km/h. Comment on your findings.
6.27 Consider the one-quarter-car model from P6.24 and a road with profile
, where v is the car’s velocity. What is the worst possible velocity a
car with suspension parameters as in P6.25 could be traveling at as a function of the
road wavelength ?
where
can be interpreted as the output of a PD controller. Use this fact to repeat P6.25 using
the root-locus method.
6.29 Compare the answers from P6.25–P6.28 with the answers from P6.20–P6.23.
6.30 You have shown in P2.41 that the ordinary differential equation
is a simplified description of the motion of the rotor of the DC motor in Fig. 2.24. Let
the voltage be the control input and the rotor angular velocity be the measured
output. Let kg m², N m/A, V s/rad,
kg m²/s, and . Use the root-locus method to design a
dynamic feedback controller so that the closed-loop system is capable of asymptotically
tracking a constant reference input , .
6.31 Repeat P6.30 to design a position controller that uses a measurement of the
angular position
6.32 What difference does it make using an integrator to integrate the angular
velocity rather than a sensor to directly measure angular position in P6.31?
6.33 You have shown in P4.34 that the torque of a DC motor, , is related to the
armature voltage, , through the transfer-function
Use the data from P6.30 and the root-locus method to design a controller that uses the
voltage as the control input and the torque as the measured output so that the
closed-loop system is capable of asymptotically tracking a constant reference input
torque , .
6.34 You have shown in P2.49 that the temperature of a substance, T (in K or in °C),
flowing in and out of a container kept at the ambient temperature, , with an inflow
temperature, , and a heat source, q (in W), can be approximated by the differential
equation
where m and c are the substance’s mass and specific heat, and R is the overall system’s
thermal resistance. The input and output flow mass rates are assumed to be equal to w
in kg/s. Assume that water’s density and specific heat are kg/m³ and
J/kg K. Use the root-locus method to design a dynamic feedback controller that uses the
heat source q as the control input and the temperature T as the measured output for a
50 gal ( m³) water heater rated at BTU/h ( kW) and thermal
resistance K/W at ambient temperature, °F ( °C). The
controller should achieve asymptotic tracking of a reference temperature °F (
°C) without any in/out flow, i.e. .
where y is the glucose level and u is the rate of release of insulin in the plasma. Draw a
block diagram representing the complete closed-loop insulin homeostasis system,
including the signals , y, and u. What kind of “controller” is represented by (6.26)?
Explain why the feedback controller (6.26) can be defined in terms of the actual glucose
level, y, rather than its variation from equilibrium, , when is constant.
6.39 In [Ste+03], the authors have determined experimentally that the values
, , and for the “controller” proposed in (6.26) in P6.38
seem to match the physiological glucose-level response to insulin plasma delivery.
delivery. Calculate the transfer-function corresponding to the controller (6.26). Use the
values of and above and calculate the loop transfer-function, , that can be
used for feedback analysis of the closed-loop glucose homeostasis system with respect
to the proportional gain, , and sketch the corresponding root-locus diagram. Is
the closed-loop insulin homeostasis system asymptotically stable?
1 The only second-order polynomials that are not encoded in (6.1) have the form . You will verify in P6.1 that in this case the roots are always real, with one of the roots always positive.
2 These curves were also used to generate the three-dimensional figure on the cover of this book.
5 See the discussion in Section 5.1 about impulses appearing in the response due to differentiation.
7 If then .
9 If L is proper but not strictly proper then the characteristic equation may have fewer than n roots for some values of . For example, if then , which has no roots when .
10 Imagine what happens if the point is placed on the real axis in Fig. 6.9.
11 Can you figure out the correct rule when ?
12 If , drop from the formula.
13 Recall that is a negative feedback gain in Fig. 6.21.
14 Verify the location of the poles and zeros in the root-locus diagram in Fig. 6.22!
15 The other two poles still contribute oscillatory components to the response.
7 Frequency Domain
The frequency response of a linear time-invariant system is a complex-valued function
that encodes the response of the system to a family of sinusoidal input functions
parametrized by the frequency variable . The frequency response can be obtained
experimentally or from a model in the form of a transfer-function. The study of the
frequency response is a powerful source of insight into the behavior of feedback
systems. It also plays a key role in many controller design methods to be introduced in
this chapter. You will learn how to sketch Bode plots, polar plots, and Nyquist plots,
with which you can analyze the stability of open- and closed-loop systems.
in which case we say that the magnitude is expressed in decibels (dB), no matter the
original units of G, such as in Figs. 6.18 and 6.23. Figures 4.7, 4.14, and 4.16 were
plotted in linear scale, not in dB, taking the units of the associated transfer-function.
The reason for the scaling factor “ ” and the use of base-10 logarithms is mostly
historical, dating back to measurements of gain and attenuation in early
communication systems. A pair of logarithmic plots of the magnitude in dB and the
phase in degrees is known as a Bode plot. If G is a transfer-function with real
coefficients then and . For this reason it
is necessary only to plot the frequency response for . When the transfer-function
is rational, it is possible to compute straight-line asymptotes to quickly sketch a Bode
plot. The trick is to break up the frequency response into contributions from individual
poles and zeros.
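The conjugate-symmetry property and the decibel conversion are easy to confirm numerically. The sketch below (Python, with an arbitrary illustrative transfer-function, not one from the text) evaluates G(jω) and G(−jω) and checks that the magnitudes agree and the phases are negatives of each other:

```python
import cmath, math

def G(s):
    # Arbitrary illustrative rational transfer-function with real coefficients
    return (s + 1.0) / (s ** 2 + 2.0 * s + 10.0)

for w in (0.1, 1.0, 10.0):
    gp, gm = G(1j * w), G(-1j * w)
    mag_db = 20.0 * math.log10(abs(gp))          # magnitude in decibels
    # Real coefficients imply |G(-jw)| = |G(jw)| and arg G(-jw) = -arg G(jw)
    assert abs(abs(gm) - abs(gp)) < 1e-12
    assert abs(cmath.phase(gm) + cmath.phase(gp)) < 1e-12
    print(w, round(mag_db, 2), round(math.degrees(cmath.phase(gp)), 1))
```

This is why plotting the response for positive frequencies only loses no information.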
where
Normalization makes each term with a nonzero pole or nonzero zero equal to one at
. The term
After applying logarithms, the magnitude of the frequency response is split into sums
and differences of first-order terms:
each term involving a distinct root. The phase of the frequency response naturally
splits into sums and differences without the need to use logarithms:
In the next paragraphs we will derive asymptotes for the low- and high-frequency
behavior of first- and also second-order terms.
First-Order Real Poles and Zeros
Consider the transfer-function with a single real nonzero pole with multiplicity k:
where is assumed to be real and the integer k is positive. The complex case will be
analyzed later. When is small, that is when , the term
and the magnitude of the frequency response in dB is
Figure 7.1 compares the exact response (thick solid) with the asymptotes (thin) when
over two decades. If more information is required at the corner frequency
we can use the fact that and
when is large. As Fig. 7.1 illustrates, it is reasonable to consider small and large to be
about a decade below and above , and the straight-line approximation
corresponds to
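The quality of the straight-line approximation for a single real pole can be quantified numerically. In the Python sketch below (illustrative corner frequency |p| = 2 rad/s and multiplicity k = 1, not values from the text), the exact magnitude of the normalized term is compared with its asymptotes:

```python
import math

p, k = 2.0, 1            # illustrative corner frequency |p| and multiplicity

def mag_db(w):
    # Normalized single-pole term 1/(1 + j*w/p)^k, magnitude in dB
    return -20.0 * k * math.log10(math.hypot(1.0, w / p))

def asymptote_db(w):
    # Straight-line approximation: 0 dB below the corner, -20k dB/dec above
    return 0.0 if w <= p else -20.0 * k * math.log10(w / p)

# A decade away from the corner the approximation is already good ...
print(round(mag_db(p / 10) - asymptote_db(p / 10), 3))   # about -0.043 dB
# ... while at the corner frequency the exact curve sits about 3k dB below
print(round(mag_db(p) - asymptote_db(p), 2))             # about -3.01 dB
```

The largest error occurs exactly at the corner frequency, which is why the corner is the only point that may need manual correction when sketching by hand.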
where is real and the integer is positive, exactly the same analysis is possible
upon substitution of and . When this implies a change in slope
in both the magnitude and phase plots, as shown in Fig. 7.2.
Figure 7.2 Normalized Bode plots for the first-order transfer-function ,
; the thick solid curve is the exact frequency response and the thin solid curve is
the straight-line approximation.
The sign of (or ) does not affect the magnitude of , which depends on
(or ) alone, but does change the sign of the slope in the phase plot. The
diagrams in Fig. 7.3 illustrate the frequency response when both and are negative:
the phase of a pole looks like the phase of a zero and vice versa. Poles with negative ,
i.e. positive real part, are associated with unstable systems, and therefore will appear
frequently in control design. Zeros with negative , i.e. positive real part, appear in
interesting and sometimes difficult-to-control systems. Systems with poles and zeros
with positive real part introduce extra phase into the phase diagram and for this
reason are called non-minimum-phase systems. See Section 7.2 for details.
Figure 7.3 Exact (thick) and straight-line approximations (thin) of the normalized Bode
plots for the first-order transfer-functions , (solid), and
, (dashed); the magnitude response is unaffected by the
sign of or but the slope of the phase is reversed.
Second-Order Complex Poles and Zeros
The diagrams studied so far are for transfer-functions with real poles and zeros. When
complex poles and zeros are present the best approach is to group these poles and
zeros into complex-conjugate pairs and study the magnitude and phase of the pairs.
which is the kth power of the canonical second-order system (6.5). As studied in detail
in Section 6.1, when the roots are complex conjugate and
from which
whose two lines intersect at . Figure 7.4 compares the exact responses (thick)
obtained for various values of with the asymptotes (thin solid) when
over two decades. Note the impact of the damping ratio, , on the magnitude of the
response near . When the poles are real and the response is similar to
that of a transfer-function with a single real pole with multiplicity . Note the
dB/decade slope. When approaches zero, the magnitude shows accentuated peaks
near , which are characteristic of lightly damped second-order systems. In particular,
when the magnitude response is unbounded at , since G has imaginary poles
at . When sketching Bode plots of second-order systems by hand it is
therefore important to take into account the value of the damping ratio near the
natural frequency .
As seen in Fig. 7.4, the normalized magnitude of the frequency response of a pair of
complex poles can have a maximum that exceeds dB. On differentiating
with respect to and equating to zero we obtain
which indicates that the magnitude of the frequency response can be potentially
maximized at
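These resonance expressions can be verified numerically. The Python sketch below (illustrative ζ = 0.2 < 1/√2 and ωn = 1 rad/s) compares the standard closed-form resonance frequency ωr = ωn√(1 − 2ζ²) and peak magnitude 1/(2ζ√(1 − ζ²)) against a brute-force search over two decades of frequency:

```python
import math

def mag(w, zeta, wn):
    # |wn^2 / ((jw)^2 + 2*zeta*wn*jw + wn^2)|
    return wn ** 2 / math.hypot(wn ** 2 - w ** 2, 2.0 * zeta * wn * w)

zeta, wn = 0.2, 1.0                     # illustrative values, zeta < 1/sqrt(2)

# Closed-form resonance frequency and peak magnitude
wr = wn * math.sqrt(1.0 - 2.0 * zeta ** 2)
Mr = 1.0 / (2.0 * zeta * math.sqrt(1.0 - zeta ** 2))

# Brute-force search over two decades around wn on a logarithmic grid
ws = [10.0 ** (-1.0 + 2.0 * i / 100000) for i in range(100001)]
w_best = max(ws, key=lambda w: mag(w, zeta, wn))

print(round(wr, 4), round(w_best, 4))   # both about 0.9592
print(round(Mr, 4))                     # about 2.5516
```

Repeating the search with ζ closer to zero shows the peak growing without bound, consistent with the discussion of lightly damped systems above.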
Note that as the ratio inside the inverse tangent approaches when
and when , from which we conclude that
When the ratio inside the inverse tangent approaches when and
when , and, if we use the second branch of the inverse tangent,
This implies that the phase is actually continuous at when . See Fig. 7.4.
As before,
and
As with real poles, it is reasonable to consider small and large to be about a decade
below and above and to use the straight-line approximation
which is shown by the thin lines in Fig. 7.4. As with the magnitude, the value of has
important effects on the phase, especially near . The value of needs
special attention. In this case
can be treated similarly by letting . The sign of will flip the slopes in
both the magnitude and phase diagrams. As with real poles, when the poles (or
zeros) have positive real part, leaving the magnitude of unaltered but flipping
the phase to produce non-minimum-phase systems.
Poles and Zeros at the Origin
The last case we need to discuss is the simplest: poles and zeros at the origin. First note
that a SISO transfer-function can have either poles or zeros at the origin, but not both,
due to cancellations. In either case, the frequency response for a pole at the origin with
multiplicity k,
is simply
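For a single pole at the origin the magnitude is a straight line of slope −20 dB/decade passing through 0 dB at ω = 1, and the phase is a constant −90°. The small Python sketch below (illustrative multiplicity k = 1) confirms both facts:

```python
import cmath, math

k = 1                                   # a single pole at the origin

for w in (0.01, 1.0, 100.0):
    g = 1.0 / (1j * w) ** k             # frequency response of 1/s^k at s = jw
    mag_db = 20.0 * math.log10(abs(g))
    phase_deg = math.degrees(cmath.phase(g))
    # Straight line of slope -20k dB/decade through 0 dB at w = 1 ...
    assert abs(mag_db + 20.0 * k * math.log10(w)) < 1e-9
    # ... and constant phase of -90k degrees
    assert abs(phase_deg + 90.0 * k) < 1e-9
    print(w, mag_db, phase_deg)
```

A zero at the origin simply flips both signs: +20k dB/decade and +90k degrees.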
As a first example consider the transfer-function of the lead controller from (6.22):
First normalize:
We will sketch the magnitude of the frequency response first. The gain will offset the
magnitude by
and it will remain at this level because the dB/decade slope contributed by the
pole cancels the dB/decade of the earlier zero. Points (A) and (B) are joined to
trace the straight-line approximation for the magnitude of the frequency response
shown in Fig. 7.5 as thin solid lines.
Figure 7.5 Exact plot (thick solid line) and straight-line approximation (thin solid line) of
the magnitude of the frequency response of the lead controller (6.22).
We will now sketch the phase response in degrees. Start by compiling the slope
contribution from each pole and zero. The gain contributes nothing since .
The phase remains constant until one decade below the first zero at , which
contributes a slope of
Add and compile all slope contributions per interval to obtain the slope profile:
Figure 7.6 Exact plot (thick solid line) and straight-line approximation (thin solid line) of
the phase of the frequency response of the lead controller (6.22).
Putting It All Together
We will illustrate how you can use straight-line approximations to sketch both the
magnitude and the phase of the frequency response of the more involved transfer-
function
where
Start by sketching the straight-line approximations for the magnitude of the frequency
response. The gain offsets the magnitude of the frequency response by
Locate the first pole starting from , which in this case is the pole at zero, which
contributes a slope of dB/decade. This pole is followed by a complex-conjugate pair of
poles at
The complex-conjugate pair of zeros will cancel the slope of the last pair of poles, which
brings the slope of the magnitude back to dB/decade. The next complex
conjugate pair of poles is at
with magnitude
Figure 7.7 Exact plot (thick solid line) and straight-line approximation (thin solid line) of
the magnitude of the frequency response of the transfer-function (7.1).
Let us now sketch the phase response. We start by compiling the slope contribution
from each pole and zero. The pole at zero and the gain contribute a constant phase
of
and no slope. Indeed, the phase is constant until one decade below the first pair of
complex-conjugate poles at , which contributes a slope of
Points (A)–(H) are joined to trace the straight-line approximation for the phase of the
frequency response shown in Fig. 7.8 as thin solid lines. Note how the s produce large
errors in the straight-line approximation in the neighborhood of complex poles and
zeros.
Figure 7.8 Exact plot (thick solid line) and straight-line approximation (thin solid line) of
the phase of the frequency response of the transfer-function (7.1).
Of course, these days, modern computer software, such as MATLAB, can produce
exact Bode plots without much effort. The point of learning about the straight-line
approximations is to be able to anticipate changes in the diagram that come with the
introduction of poles and zeros. This is a very useful skill to have when designing
controllers and analyzing feedback systems.
Verify that
In other words, their frequency responses have exactly the same magnitude. However,
since they are not the same transfer-functions, their phases
must differ. The transfer-function has the minimum possible phase among all of
the transfer-functions that have , which include and . For this
reason, it is known as a minimum-phase transfer-function. By extension, a linear system
modeled as a minimum-phase transfer-function is a minimum-phase system. All other
transfer-functions and systems with the same frequency-response magnitude are said
to be of non-minimum-phase.
It is not hard to be convinced that minimum-phase rational transfer-functions are
well behaved: they have all poles and zeros with negative real part. Non-minimum-
phase systems, on the other hand, can have intriguing behaviors. For example, when
excited by a unit step input, , non-minimum-phase systems can respond in the
opposite direction. For instance, the unit step response, , of a system with transfer-
function is simply a unit step in the same direction as the input step, i.e.
. However, the unit step response of a system with transfer-function is
in the direction opposite to the input step, i.e. . The step response of the
transfer-function is even more interesting. We calculate the step response using
the inverse Laplace transform:
A subtler example is
but the system with transfer-function will first veer in the opposite direction, as
shown in Fig. 7.10. Indeed, using the initial-value theorem to calculate the derivative of
the step response, , at ,
which explains why the trajectories for and diverge initially. Yet
so and start with the same derivative, but the non-minimum-phase system
ends up veering in the opposite direction after a while, as shown in Fig. 7.10. The odd
behavior of non-minimum-phase systems often translates into challenges in control
design.
Figure 7.9 Bode plots for the transfer-functions , , and in (7.2); the
magnitudes of their frequency response are the same but their phases differ; is
minimum-phase; and are non-minimum-phase.
Figure 7.10 Step responses for the transfer-functions , , and in (7.2). is
minimum-phase; and are non-minimum-phase. All responses converge to one.
The responses of and start with the same derivative but and eventually
veer toward negative values before converging to one.
from which the transfer-function from the steering input, , to the output,
the y-coordinate of the mid-point of the front axle, is
Figure 7.11 Trajectories of a four-wheeled vehicle changing its y-coordinate; the mid-
point of the forward axle is marked with a circle. The forward maneuver is minimum-
phase; backward motion is non-minimum-phase. The mid-point of the front axle has to
first decrease its y-coordinate before that can be increased.
7.3 Polar Plots
Information about the segment of the curve is contained in the Bode plot
of G. Because G has real coefficients and ,
, the segment is obtained using symmetry: just
reflect the plot about the real axis. From the Bode plot of G we can directly obtain
some points, say
and calculate
which can be used to sketch the polar plot by hand. The points (7.3) and (7.4), along
with complete Bode and polar plots, are shown in Fig. 7.12.
We will encounter more examples of polar plots later when we learn how to use
these beautiful figures to make inferences about the stability of open- and closed-loop
systems in Sections 7.5 and 7.6.
Before we talk about stability in the frequency domain it is necessary to introduce the
argument principle. The principle itself is very simple, and many readers may choose to
skip, on a first reading, the latter parts of this section that are dedicated to its proof.
Indeed, many standard books do not provide a proof, but rather work out the principle
using graphical arguments, e.g. [FPE14, Section 6.3.1] and [DB10, Section 9.2]. If you
have endured (perhaps even secretly enjoyed) the most technical parts of Chapter 3,
then the proof will be enlightening. The argument principle can be stated as follows.
Theorem 7.1 (Argument principle) If a function f is analytic inside and on the positively
oriented simple closed contour C except at a finite number of poles inside C and f has
no zeros on C then
where is the number of zeros and is the number of poles of f that lie inside
the contour C counting their multiplicities.
Because C is a closed contour and f has no singularities on C, the quantity
which is the total argument variation recorded as we traverse the image of C under f ,
must be an integer multiple of . Indeed, the quantity on the left-hand side of (7.5) is
an integer that indicates how many times the image of the contour encircles the origin.
The notation reflects the fact that encirclements should be counted
around the origin.
and recall the positively oriented simple closed contours , , and introduced
earlier in Fig. 3.1 and reproduced in Fig. 7.13(a). The images of , , and under
the mapping G are plotted in Fig. 7.13(b). The direction of traversal of the contours and
their images is indicated by arrows along the paths. The total argument variation can
be obtained directly from Fig. 7.13(b) by simply counting the net number of times the
image of , , or under G encircles the origin, taking into account their direction
of traversal. From Fig. 7.13(b)
because the image of the positively oriented simple closed contour encircles the
origin once in the clockwise (negative) direction. Because the image of the contour
never encircles the origin,
The case is a bit trickier because the closed contour is negatively oriented. This
case can be accounted for after observing that reversing the direction of travel along a
negatively oriented contour simply reverses the direction of travel of its image, that is,
The right-hand side of (7.5) can be evaluated by locating the poles and zeros relative
to the contours. In Fig. 7.13(a) poles are marked with crosses and zeros with circles.
has two simple poles, at and , and one simple zero, at .
From Fig. 7.13(a)
and
which agree with our previous argument calculations obtained graphically from Fig.
7.13(b).
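The graphical counting above can be automated. The Python sketch below (with an illustrative function, not the book's example; the zero and pole locations are chosen arbitrarily) accumulates the phase increments of f along a positively oriented circle and recovers Z − P, exactly as in (7.5):

```python
import cmath, math

def f(s):
    # Illustrative function: one zero (s = 1) and two poles (s = -1 and s = 2)
    return (s - 1.0) / ((s + 1.0) * (s - 2.0))

def winding_number(f, center, radius, n=100000):
    # Net counter-clockwise encirclements of the origin by the image, under f,
    # of a positively oriented circle, obtained by accumulating phase increments
    total = 0.0
    prev = cmath.phase(f(center + radius))
    for i in range(1, n + 1):
        s = center + radius * cmath.exp(2j * math.pi * i / n)
        cur = cmath.phase(f(s))
        d = cur - prev
        if d > math.pi:       # unwrap jumps across the +-pi branch cut
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
        prev = cur
    return round(total / (2 * math.pi))

# Circle of radius 1.5 about the origin encloses the zero at 1 and the pole
# at -1: Z - P = 1 - 1 = 0.  About -1 it encloses only that pole: Z - P = -1.
print(winding_number(f, 0.0, 1.5))    # 0
print(winding_number(f, -1.0, 1.5))   # -1
```

The same accumulation of phase increments is what one does visually when counting encirclements on a polar plot.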
It is also possible to restate the argument principle for negatively oriented contours.
If C is a negatively oriented contour satisfying the assumptions of Theorem 7.1 then
which once again agrees with our previous calculations. This form of the argument
principle will be used in Sections 7.5 and 7.6. The rest of this section is a proof of the
argument principle.
As seen in our simple example, application of the argument principle usually involves
evaluating the quantities on the far left and far right of (7.5). The proof will rely on the
contour integral in the middle.
and
with g analytic at . This fact has been used before in Section 3.4. We perform the
same analysis as in the case of zeros, replacing h by g and the multiplicity m by ,
to obtain
concluding that a pole of f is always a pole of at which the residue is equal to the
negative of the multiplicity n, i.e. .
which does not seem very enlightening until we expand in partial fractions
to reveal the poles of at the zeros and poles of G and the corresponding
residues of or depending on whether it concerns what was originally a zero or a
pole in G.
Summing the residues at the poles of , that is, the poles and zeros of f , inside
the positively oriented contour C we obtain from Theorem 3.1
where is the number of zeros and is the number of poles inside C, counting
their multiplicities. This is the right-hand side of (7.5).
The left-hand side of (7.5) is obtained through direct evaluation of the integral. For
that we introduce a parametric representation of the contour C in terms of the real
variable :
Because C is closed we have that and the first term is zero. The second
term is related to the desired total angular variation:
which is the left-hand side of (7.5).
In order to use the argument principle, Theorem 7.1, to check for stability of a linear
time-invariant system with transfer-function we need to define a suitable closed
contour. We shall use a special case of the contour , introduced earlier in Fig. 3.2(a),
in which . The resulting contour, which we refer to in the rest of this book simply
as , is reproduced in Fig. 7.14(a). As in Section 3.4, it can cover the entire right-hand
side of the complex plane by taking the limit . It is convenient to split the
contour into three smooth segments:
The thick- and dashed-line segments lie on the imaginary axis and their image under
the mapping G coincides exactly with the polar plot of G, Section 7.3, as is made
larger. The thin semi-circular part of the path closes the contour to allow the
application of the argument principle. Note, however, that transfer-functions that
satisfy the convergence condition (3.23) are such that
consequently the radius, , cannot be made infinite without violating the assumption
of Theorem 7.1 that no zeros should lie on the contour. We will handle this
complication later. For now, assume that G does not have any finite or infinite poles or
zeros on . In this case, the total argument of the image of under G offers an
assessment of the number of poles and zeros of G on the right-hand side of the
complex plane. Because the contour is negatively oriented, see Fig. 7.14(a), the
argument principle, Theorem 7.1, is best applied in the form given by formula (7.8):
In practice, the reversal of the direction of travel of the contour means that
encirclements of the origin should be counted as positive if they are clockwise and as
negative if they are counter-clockwise. See Section 7.4.
Figure 7.14 Simple closed contour, , for assessing stability; imaginary poles need to be
excluded from the contour as shown in (b); as is made large and is made small
these contours cover the entire right-hand side of the complex plane; the image of the
thick solid and thick dashed segments under a mapping G can be obtained directly from
the polar or Bode plot of G.
It is easy to compute the total argument variation, , from the polar plot
of G. If is known or is easy to compute, then G is asymptotically stable if and only if
Otherwise, G has at least one pole on the right-hand side of the complex plane and
therefore is not asymptotically stable.
The function has no poles or zeros on the contour . It has a zero, , on the
left-hand side of the complex plane so that . Because , the polar
plot of is the polar plot of , the clockwise circle of radius centered at
shown in Fig. 7.12, translated by . See Fig. 7.15. That is, the polar plot of is the
circle of radius centered at , which never encircles the origin.
Consequently
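The circular shape of first-order polar plots is easy to confirm numerically. The Python sketch below (illustrative G(s) = 1/(s + a) with a = 1) checks that every sampled point G(jω) lies on the circle of radius 1/(2a) centered at 1/(2a) on the real axis:

```python
# Polar plot of the first-order transfer-function G(s) = 1/(s + a), a > 0
# (illustrative value a = 1): every point G(jw) lies on the circle of
# radius 1/(2a) centered at 1/(2a) on the real axis.
a = 1.0
center = radius = 1.0 / (2.0 * a)

for w in (0.0, 0.5, 1.0, 2.0, 10.0, 1000.0):
    g = 1.0 / complex(a, w)             # G(jw)
    assert abs(abs(g - center) - radius) < 1e-12
print("all sampled points lie on the circle")
```

Since this circle stays strictly to the right of the origin, no encirclements of the origin can occur, consistent with the argument above.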
When the transfer-function G has poles or zeros on the contour , the assumptions
of the argument principle, Theorem 7.1, are violated. The case of poles is simple: if G
satisfies (3.23) or the weaker (3.25), then it can only have finite poles; therefore, any
poles on must be on the imaginary axis and G is not asymptotically stable. The case
of zeros on the contour requires some more thought because a transfer-function can
have zeros on the imaginary axis and still be asymptotically stable. Moreover, as
mentioned earlier, if G satisfies (3.23) then it necessarily has zeros at infinity. In either
case, the symptom is the same: if is a zero then , therefore the
image of under G contains the origin, making it difficult to analyze encirclements.
This should be no surprise. In fact, the transfer-function
has a zero at and its polar plot is a clockwise circle of radius centered
at , Fig. 7.15, which contains the origin. Both and are
asymptotically stable but it is not clear how many encirclements of the origin have
occurred. In the case of infinite zeros, a formal workaround is possible if all poles of G
on the right-hand side of the complex plane are finite so that there exists a large
enough for which contains all such poles. For example, by plotting the image of
the contour under the mapping G for such a large yet finite radius in the case of
the transfer-function , we obtain the graphic in Fig. 7.16. It is now possible to see
that no encirclements of the origin occur and, since and
Do not let these difficulties at the origin obfuscate the power of the argument
principle applied to stability analysis. The real virtue of the test is that one does not
need to explicitly calculate the poles of a transfer-function, in other words the roots of
the characteristic equation, to make inferences about the presence of poles on the
right-hand side of the complex plane. This makes it useful even if the characteristic
equation is not polynomial or G is not rational. For example, a linear system in closed-
loop with a delay has a characteristic equation which is not polynomial. A simple
example is
It is possible to use the argument principle to assess the location of the roots of
even in this case in which we do not know the exact number of roots. Define the
function
Since for all s such that ,
which means that G does not have any zeros on the right-hand side of the complex
plane, . All poles on the right-hand side of the complex plane must also be
finite because
that is the image of under G does not encircle the origin and G has no zeros or poles
on the imaginary axis. This condition is easily checked graphically in Fig. 7.17. Note how
the term significantly affects the phase of , which oscillates and crosses the
positive real axis an infinite number of times. Yet, as the polar plot of G in Fig. 7.17(c)
shows, the image of under G never encircles the origin, and G does not have any
poles (the plot is bounded) or zeros (does not contain the origin) on the imaginary axis.
The conclusion is that G is asymptotically stable, that is, the characteristic equation
has no roots on the right-hand side of the complex plane.
Figure 7.17 Bode and polar plots of .
7.6 Nyquist Stability Criterion
We are finally ready to study the celebrated Nyquist stability criterion. The setup is the
same as for root-locus analysis (Section 6.4): the loop transfer-function, L, is placed in
feedback with a static gain, , as shown in Fig. 7.18. The Nyquist criterion is an indirect
graphical method to assess the location of the zeros of the characteristic equation
Furthermore, the poles of are the same as the (open-loop) poles of L, which we
assume are known by the designer. The information we are after is the locations of the
zeros of , which correspond to the closed-loop poles of the feedback system in Fig.
7.18.
Figure 7.18 Closed-loop feedback configuration for Nyquist criterion; .
The idea behind the Nyquist criterion is that the image of the simple closed contour
, Fig. 7.14(a), under the (closed-loop) mapping is readily obtained from the image
of under the (open-loop) mapping L: the image of under is simply the image
of shifted by , as illustrated in Fig. 7.19. Such plots are known as Nyquist plots.
Moreover, clockwise encirclements of the origin by the image of under are the
same as clockwise encirclements of the image of under L around
See Fig. 7.19. Recalling the argument principle, Theorem 7.1, and formula (7.8), the
number of encirclements is equal to
Because the poles of on the right-hand side of the complex plane are the same as
the poles of L, that is , the zeros of on the right-hand side of the complex plane
are
which are the right-hand-side closed-loop poles of the feedback system in Fig. 7.18.
Closed-loop asymptotic stability ensues if and only if , that is,
Theorem 7.2 (Nyquist) Assume that the transfer-function L satisfies (3.25) and has no
poles on the imaginary axis. For any given the closed-loop connection in Fig. 7.18
is asymptotically stable if and only if the number of counter-clockwise encirclements of
the image of the contour , Fig. 7.14(a), under the mapping L around the point
is equal to the number of poles of L on the right-hand side of the complex plane.
Note that the Nyquist stability criterion can be applied even if L is not rational, in
contrast with the root-locus method studied in Chapter 6, which is better suited to
rational transfer-functions. The assumption that L has no poles on the imaginary axis
will be removed later in this section.
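For rational loops, Theorem 7.2 is easy to test numerically. In the illustrative Python sketch below, the loop L(s) = 2/((s − 1)(s + 2)) is an assumed example, not one from the text; it has one pole on the right-hand side of the complex plane, so closed-loop stability requires exactly one counter-clockwise encirclement of −1/K.

```python
import cmath, math

def L(s):
    # illustrative open-loop transfer-function with one
    # right-half-plane pole (at s = 1)
    return 2 / ((s - 1) * (s + 2))

def encirclements(f, point, omega_max=1000.0, n=200000):
    """Counter-clockwise encirclements of `point` by f(j*omega) as omega
    runs from -omega_max to +omega_max (f is strictly proper, so the
    large arc of the Nyquist contour maps to the origin)."""
    total, prev = 0.0, f(complex(0, -omega_max)) - point
    for k in range(1, n + 1):
        w = -omega_max + 2 * omega_max * k / n
        cur = f(complex(0, w)) - point
        total += cmath.phase(cur / prev)
        prev = cur
    return round(total / (2 * math.pi))

# stability needs exactly one CCW encirclement of -1/K
for K in (0.5, 2.0):
    ccw = encirclements(L, -1.0 / K)
    print(K, ccw, "stable" if ccw == 1 else "unstable")
```

The result agrees with a direct check: the characteristic polynomial (s − 1)(s + 2) + 2K = s² + s + (2K − 2) has both roots in the left half-plane exactly when K > 1.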
in Fig. 7.19. Verify that L is asymptotically stable, that is . The image of under
L encircles the point once in the clockwise direction. Therefore
and the closed-loop system is not asymptotically stable. Because , no net
encirclements of the point should occur if the closed-loop system is to be
asymptotically stable. Note also that
indicating that exactly one closed-loop pole is on the right-hand side of the complex
plane.
We have traced their polar plots in Fig. 7.15, which we reproduce in Fig. 7.20. All these
transfer-functions have one pole at , therefore . For closed-loop
stability, there should be no net encirclement of the point , . For and
, this is always the case, as their Nyquist plots never intersect the negative real axis.
The Nyquist plots of and intersect the negative real axis at and produce no
encirclement if . That is, and are asymptotically stable in closed-
loop if , e.g. in Fig. 7.20. However, when , the Nyquist plots of
and display one clockwise (negative) encirclement, hence the associated closed-
loop systems are not stable and exhibit
poles on the right-hand side of the complex plane. Verify that these conclusions match
exactly what is expected from the corresponding root-locus plots.14 In the case of
and , the point where the Nyquist plot intersects the negative real axis also
provides the gain, , at which the closed-loop system ceases to be asymptotically
stable, that is, the value of the gain at which the single branch of the root-locus crosses
toward the right-hand side of the complex plane. There is more to come in Section 7.7.
Figure 7.20 Nyquist plots for , ,
, and ; and intersect the
negative real axis at . for all transfer-functions. and are
asymptotically stable in closed-loop for any feedback gain because their Nyquist
plots do not encircle any negative real number. and are asymptotically stable in
closed-loop for any feedback gain because their Nyquist plots do not encircle
; e.g. does and does not stabilize and in closed-loop.
As in Section 7.5, difficulties arise when a pole or zero of lies on the contour .
We will study poles on the contour first. If L satisfies (3.25) then
which means that has only finite poles. Since the poles of are the same as the
poles of L, any finite pole of or L on must be imaginary. The solution is to indent
the contour by drawing a small semicircle around each imaginary pole of L, thus
removing the pole from the indented contour , as illustrated in Fig. 7.14(b) in the case
of three poles on the imaginary axis. The image of the indented contour is then
traced as we make large and , the radii of the indented semicircles, small. This
covers the entire open right-hand side of the complex plane. Because L is singular at
the indented poles, one should expect that the image of under L becomes
unbounded as . Yet stability can still be assessed if we are careful about keeping
track of the direction and number of encirclements.
When is small,15
the phase of the image of the indented contour under the mapping L decreases by a
total of radians. That is, it is an arc with very large radius that spins radians in
the clockwise direction. A negative value of does not change this behavior and the
direction of the spin is independent of the segment within which the pole is located
(thick solid or thick dashed line in Fig. 7.14(b)).
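The indented contour can be traversed numerically as well. The sketch below is purely illustrative: the plant 1/(s² − 1), the controller K(s + 0.5)(s + 1.5)/(s(s + 10)), and the gain K = 20 are all hypothetical choices, not the text's example. The loop has an integrator, so the contour detours to the right of the pole at the origin along a small semicircle, and one counter-clockwise encirclement of −1 is required because the plant has one unstable pole.

```python
import cmath, math

K = 20.0  # hypothetical gain, chosen large enough for closed-loop stability

def L(s):
    # hypothetical loop: unstable plant 1/(s^2 - 1) in series with a
    # controller K (s + 0.5)(s + 1.5) / (s (s + 10)) with integral action
    return K * (s + 0.5) * (s + 1.5) / (s * (s + 10) * (s * s - 1))

def nyquist_path(omega_max=1e4, eps=1e-4, n=100000):
    """Indented Nyquist contour: up the imaginary axis, detouring to the
    right of the pole at the origin along a small semicircle."""
    for k in range(n + 1):  # from -j*omega_max up to -j*eps
        yield complex(0, -omega_max + (omega_max - eps) * k / n)
    for k in range(1, n + 1):  # semicircle eps * e^{j theta}
        theta = -math.pi / 2 + math.pi * k / n
        yield eps * cmath.exp(1j * theta)
    for k in range(1, n + 1):  # from j*eps up to j*omega_max
        yield complex(0, eps + (omega_max - eps) * k / n)

total, prev = 0.0, None
for s in nyquist_path():
    cur = L(s) + 1  # vector from the -1 point to the Nyquist plot
    if prev is not None:
        total += cmath.phase(cur / prev)
    prev = cur
ccw = round(total / (2 * math.pi))
print(ccw)  # one CCW encirclement = number of unstable poles -> stable
```

Note how the huge clockwise arc produced by the indentation is harmless: what matters is the net count of encirclements of −1, which here equals the number of open-loop unstable poles.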
has a pole at zero, and its frequency response is unbounded as . This can be
seen in the Bode plot of L shown in Fig. 7.21, where the straight-line approximation to
the magnitude of the frequency response has a constant negative slope of
dB/decade for . The behavior of the Nyquist plot as it approaches the origin
can be inferred by analyzing
from which, for ,
and
so that the Nyquist plot has a semicircle with large radius that swings from to
, as shown in Fig. 7.22(a). The asymptote is obtained from
Unfortunately, this asymptote is not easily computed from the Bode plot. In this
example, L has no poles inside the indented contour , that is . If then
no encirclements happen and
That is, there are no closed-loop poles on the right-hand side of the complex plane and
the closed-loop system is asymptotically stable. Note that the hypothesis that is
important. In this example, the closed-loop system with is not asymptotically
stable because of the pole at the origin.
Figure 7.21 Combined Bode plot of ; is the gain
margin and is the phase margin.
In other words,
and the Nyquist plot of L crosses the negative real axis at exactly when
is a zero of . Because the location of the zeros of , that is, the closed-loop poles,
is precisely the information we are looking for, it is not practical to indent the contour
at these (unknown) zeros. Luckily, it often suffices simply to shift the point .
For example, in Fig. 7.22(a), the function has a zero on the contour only if
. On the one hand, no encirclements of the point happen if
, that is, the closed-loop system is asymptotically stable if . On
the other hand, if then one clockwise encirclement occurs and the closed-loop
system is not asymptotically stable for any .
More generally, the exact behavior at the crossing of the point will depend
on whether the associated zero is finite or infinite. For example, if
the Nyquist plot of L is a clockwise circle of radius centered at that crosses the
negative real axis at ( in Fig. 7.20). The loop transfer-function, L, is
asymptotically stable, , and the closed-loop system is asymptotically stable if
, because ; the closed-loop system is not asymptotically stable if
, because . At , the Nyquist plot crosses the point ,
which means that has a zero on the contour . In this case, this is a zero at
infinity, and an analysis similar to the one used in Section 7.5 to obtain Fig. 7.16 reveals
that no crossings of the point occur if we let be large yet finite, and the
corresponding closed-loop system is asymptotically stable. Indeed,
which has no finite zeros. In this very special case the numerator has order zero, which
indicates that an exact closed-loop pole–zero cancellation happened.
Most commonly, however, crossings of the negative real axis by the Nyquist plot
happen due to finite zeros of in . This will be the case, for example, whenever L
satisfies the convergence condition (3.23), e.g. L is strictly proper. In this case, zeros on
must be imaginary. In fact, any point for which is real
and negative is such that
for some . On comparing this with (6.19) we conclude that these very special
points are also part of the root-locus of . Indeed, when the Nyquist plot crosses
the negative real axis at , a root must cross the imaginary axis in the root-locus
plot for a corresponding . If has n poles on the right-hand side of the
complex plane, all n poles will eventually have to cross over the imaginary axis on the
root-locus plot if the closed-loop system is to become asymptotically stable. We
illustrate this relationship with a simple yet somewhat puzzling example. Consider the
loop transfer-function
Its root-locus is shown in Fig. 7.23(a). There exists no for which the closed-loop
system is asymptotically stable. When the closed-loop system has two real
poles, one of which is always on the right-hand side of the complex plane, and when
the closed-loop system has a pair of imaginary poles. At the closed-loop
poles are both at the origin. Because
the Nyquist plot is simply the segment of line on the negative real axis from to
shown in Fig. 7.23(b). Since L has one pole with positive real part, there must be at least
one encirclement of the point for the closed-loop system to be asymptotically
stable. No encirclements happen when , indicating that one pole remains on the
right-hand side of the complex plane. When no encirclements happen but the16
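The correspondence between crossings of the negative real axis in the Nyquist plot and crossings of the imaginary axis in the root locus can be checked numerically. The sketch below is illustrative: it assumes the loop L(s) = 1/(s(s + 1)(s + 2)), which is not the text's example, locates the negative-real-axis crossing, and verifies that the gain it determines places a closed-loop pole exactly on the imaginary axis.

```python
import cmath, math

def L(s):
    # illustrative loop: one integrator plus two real poles
    return 1 / (s * (s + 1) * (s + 2))

# locate the negative-real-axis crossing of the Nyquist plot by bisection
# on Im L(jw); the sign changes between w = 1 and w = 2
lo, hi = 1.0, 2.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if L(complex(0, mid)).imag < 0:
        lo = mid
    else:
        hi = mid
w = 0.5 * (lo + hi)
K = -1 / L(complex(0, w)).real  # gain at which the crossing hits -1/K

# at this gain the characteristic equation 1 + K L(s) = 0 has a root at
# s = jw, i.e. the root locus crosses the imaginary axis here
residual = abs(1 + K * L(complex(0, w)))
print(round(w, 3), round(K, 2), residual < 1e-9)  # 1.414 6.0 True
```

For this loop the crossing is at ω = √2 with gain K = 6, the same marginal gain a Routh table for s³ + 3s² + 2s + K would predict.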
Crossings of the negative real axis in the Nyquist plot and crossings of the imaginary
axis in the root-locus plot play a critical role in the stability of closed-loop systems. By
measuring how much additional gain or phase is needed to make an asymptotically
stable closed-loop system become unstable we obtain a measure of the available
stability margin. In the Nyquist plot, we define gain and phase margins after
normalizing the closed-loop gain to . The gain margin is defined as
The margins (7.15) and (7.16) can be computed in the Bode plot or the Nyquist
diagram. For G given in (7.14), the margins are17
which are indicated in the Bode plot in Fig. 7.21 and in the Nyquist plot in Fig. 7.22(b).
As noted earlier, the Nyquist plot often crosses the negative real axis at points that
are part of the root-locus. This means that the gain margin can also be calculated in the
root-locus diagram, corresponding to the smallest distance (smallest additional gain)
needed for a root to cross the imaginary axis toward the right-hand side of the complex
plane. Indeed, when the Nyquist plot crosses the negative real axis at some , a
root must cross the imaginary axis in the root-locus plot for some .
Gain and phase margins can be fully understood only in the broader context of the
Nyquist stability criterion. For example, if the closed-loop system is unstable, crossings
of the negative real axis and the unit circle can be computed but they do not imply any
margins. Even when closed-loop stability is possible, the Nyquist plot may intersect the
negative real axis many times, so one needs to be careful about calculating the
appropriate margins. If the loop transfer-function, L, is unstable and has n poles on the
right-hand side of the complex plane, then it will be necessary to encircle the point
exactly n times in the counter-clockwise direction in order to have closed-loop
stability. This implies crossing the negative real axis at least n times! In this case, the
gain margin may be less than one (negative in dB), which means that a reduction in the
gain will lead to instability. If L is asymptotically stable then the gain margin is greater
than one, and is often obtained at the first crossing. This is the case, for example, in
Fig. 7.19(a).
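Both margins can be extracted from a frequency-response scan. The Python sketch below is illustrative: the stable loop L(s) = 4/(s + 1)³ is an assumed example, for which the gain margin is 8/4 = 2 (the plot crosses the negative real axis at −1/2) and the phase margin is about 27°.

```python
import cmath, math

def L(s):
    # illustrative stable loop transfer-function
    return 4 / (s + 1) ** 3

def margins(L, w_lo=1e-3, w_hi=1e3, n=200000):
    """Gain margin (as a ratio) and phase margin (in degrees) found by
    scanning the frequency response on a logarithmic grid."""
    gm = pm = None
    prev = L(complex(0, w_lo))
    for k in range(1, n + 1):
        w = w_lo * (w_hi / w_lo) ** (k / n)
        cur = L(complex(0, w))
        # phase crossover: polar plot crosses the negative real axis
        if gm is None and prev.imag <= 0 < cur.imag and cur.real < 0:
            gm = 1 / abs(cur)
        # gain crossover: magnitude passes through 1
        if pm is None and abs(prev) >= 1 > abs(cur):
            pm = 180 + math.degrees(cmath.phase(cur))
        prev = cur
    return gm, pm

gm, pm = margins(L)
print(round(gm, 2), round(pm, 1))  # roughly 2.0 and 27.1
```

As the text warns, a scan like this finds only the first crossing of each kind; with several crossings of the negative real axis or of the unit circle the appropriate margin must be selected with the Nyquist criterion in mind.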
The margins can also be interpreted in terms of robustness to changes in the gain
and phase of the system being controlled. For example, gain margin provides
guarantees that the closed-loop system will remain asymptotically stable if the overall
gain changes. With respect to the closed-loop diagram of Fig. 7.18 with ,
Phase margin offers guarantees in case the overall phase changes. A pure change in
phase requires the introduction of a complex disturbance, shown in Fig. 7.24(a). In this
case
When and L is asymptotically stable under unit feedback then the closed-loop
system in Fig. 7.24(a) remains asymptotically stable as long as
because keeping the phase away from will prevent changes in the number of
encirclements of the point . For example, L given in (7.14) is such that the closed-
loop system in Fig. 7.24(a) remains asymptotically stable for all .
When a similar reasoning applies.
The pure complex phase disturbance in the block-diagram in Fig. 7.24(a) is a bit
artificial, as it involves a complex element. In practical systems, additional phase is
often introduced by delays, as shown in the diagram in Fig. 7.24(b), for which
Changes in phase due to delay are frequency-dependent and one needs to be careful
when interpreting the phase margin as robustness against delays. When , L is
asymptotically stable, and intercepts the unit circle only once for ,
then the closed-loop system in Fig. 7.24 remains asymptotically stable if
For example, L given in (7.14) intercepts the unit circle only once and
at , therefore the closed-loop system in Fig. 7.24(b) is
asymptotically stable for all s. If then stability of
the closed-loop system is guaranteed for all values of delay. If L intercepts the unit
circle more than once, one needs to be careful and evaluate at all such
that and select the smallest value. When a similar reasoning
applies but this time
Recall that because , delays can only add negative phase. We will have more to
say about robustness and stability margins in Section 8.2.
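When the loop crosses the unit circle only once, the largest tolerable delay follows from the standard bound: the phase margin in radians divided by the gain-crossover frequency. The sketch below is illustrative, reusing the assumed loop L(s) = 4/(s + 1)³ from before rather than the text's example.

```python
import cmath, math

def L(s):
    # same illustrative loop transfer-function as before
    return 4 / (s + 1) ** 3

# find the gain-crossover frequency |L(jw)| = 1 by geometric bisection
# (the magnitude of this L decreases monotonically with frequency)
lo, hi = 1e-6, 1e3
for _ in range(200):
    mid = math.sqrt(lo * hi)
    if abs(L(complex(0, mid))) > 1:
        lo = mid
    else:
        hi = mid
wc = lo
pm_rad = math.pi + cmath.phase(L(complex(0, wc)))  # phase margin, radians
tau_max = pm_rad / wc  # largest delay before the -1 point is reached
print(round(wc, 3), round(tau_max, 3))  # 1.233 0.384
```

A delay of about 0.38 s rotates the gain-crossover point of this loop onto the −1 point; shorter delays preserve the encirclement count and hence closed-loop stability.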
Gain and phase margins are complementary measures of how close the Nyquist plot
is from the point . An alternative stability margin is obtained by directly measuring
the smallest distance from the Nyquist plot to18:
When the loop transfer-function, L, is obtained from the standard feedback connection
diagram in Figs. 1.8 and 4.2 then
that is, the inverse of the norm of the sensitivity transfer-function S (see Section
3.9). When L satisfies the convergence condition (3.23) then
, in which case we can conclude that
The smaller the closer the Nyquist plot is to the point . Note also that, if
has a peak, for example S has low-damped complex poles, then we expect
that will be small. We will have much more to say about the role of the sensitivity
function in stability and performance in Section 8.1.
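The distance-based margin is a one-line computation once the frequency response is available. The sketch below is illustrative, again using the assumed loop L(s) = 4/(s + 1)³ rather than the text's example.

```python
import math

def L(s):
    return 4 / (s + 1) ** 3  # illustrative loop transfer-function

def stability_margin(L, w_lo=1e-3, w_hi=1e3, n=200000):
    """Smallest distance from the Nyquist plot to -1, i.e. the minimum of
    |1 + L(jw)|, which equals the inverse of the peak of the sensitivity
    magnitude |S| = |1/(1 + L)|."""
    best = float("inf")
    for k in range(n + 1):
        w = w_lo * (w_hi / w_lo) ** (k / n)
        best = min(best, abs(1 + L(complex(0, w))))
    return best

sm = stability_margin(L)
print(round(sm, 3))
```

For this loop the margin comes out noticeably smaller than the distance 0.5 measured at the phase crossover, a reminder that the closest approach to −1 need not occur on the negative real axis and is not captured by the gain margin alone.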
We now revisit the control of the simple pendulum from Section 6.5. Recall that
which are models linearized around the stable equilibrium point, , and around
the unstable equilibrium point, , respectively. We use the same set of
parameters as in (6.20) and (6.21):
As in Section 6.5, we assume during design that there is no friction and that the
controller is responsible for providing all necessary damping. We require integral
action, which means that the controller must have a pole at the origin. For this reason
we trace in Fig. 7.25 the Bode plots of the loop transfer-functions:
We analyze first because its frequency response looks less intimidating. We start by
normalizing:
Because of the pole at the origin, the magnitude of the frequency response is
unbounded as . The phase of the frequency response at is , with
the negative numerator contributing and the pole at the origin contributing
. The phases of the symmetric pair of real poles cancel each other and the phase
of remains constant at for any frequency , as seen in Fig. 7.25. The polar
plot of is therefore the entire imaginary axis. In addition to the imaginary axis, the
pole at zero creates a arc of infinite radius in the clockwise direction starting at
and ending at in the Nyquist
plot of . Verify that this Nyquist plot coincides with the contour , from Fig.
3.2(a), traversed in the reverse direction, which encircles the point once in the
clockwise (negative) direction. Hence there are
closed-loop poles on the right-hand side of the complex plane. For closed-loop stability
it is necessary that the Nyquist plot encircles the point once but in the counter-
clockwise (positive) direction. For that to happen we need to add zeros to the
controller. We prefer minimum-phase poles and zeros,19 which do not affect the
Note how quickly the frequency response leads to a general location of the controller
poles and zeros that matches the general form of the controller , from (6.25),
obtained only at the end of Section 6.5. Further analysis will reveal even more
information about the locations of the zeros.
The frequencies and are marked in Fig. 7.26 by circles. The fact that the gain
margin is less than one (negative in dB), indicates that the closed-loop will become
unstable if the gain is reduced.
Figure 7.26 Bode plots of (dashed), (thin solid) from (6.25), and
(thick solid).
This can be visualized in the associated Nyquist diagram shown in Fig. 7.27, which
displays a large arc on the left-hand side of the complex plane due to the pole at the
origin and one counter-clockwise (positive) encirclement of . Consequently,
and the closed-loop system is asymptotically stable. The key to achieving a magnitude
greater than one at the crossing of the negative real axis is the location of the zeros: by
placing the first zero well before , at , we stop the decrease in
magnitude due to the pole at the origin; by adding the second zero near , at
, we compensate for the magnitude drop in and add the necessary
phase to produce the crossing at ; finally, a controller pole added beyond both
zeros, at , keeps the controller transfer-function proper without affecting the
behavior near . Locate these features in the Bode plot in Fig. 7.26.
In the control of the simple pendulum, the ordering of poles and zeros established in
(7.19) seems to be more important than the actual locations of the poles and zeros.
With that in mind, we will attempt to shift the poles and zeros in the controller to suit
other design requirements. For example, we would like to move the zero and the
pole in order to increase the loop gain near . Since the role of the first
controller zero, , is to interrupt the gain decrease due to the integrator, we set the
first controller zero at , which is close to the original . By shifting the
second controller zero, , and pole, , to
we should be able to raise the controller gain, and hence the loop gain, near .
We also adopt the following guideline: the controller must compensate for magnitude
losses introduced by at . Because
(double real poles at ) we select the controller gain, K, so that . The
result is the controller
For comparison, the Bode plots of the controller and are plotted in Fig. 7.28,
from which . The controller is slightly more aggressive than the
controller in the bandwidth of the system but has a smaller high-frequency
gain. Because the crossing of the negative real axis happens near we expect a
slightly increased gain margin. Controller also has smaller phase near ,
which reduces the phase margin when compared with . Indeed,
The gain and phase margins are indicated in the frequency response of the controller
, from (7.20), along with the frequency response of and the loop transfer-
function, , in Fig. 7.29. The overall behavior is very close to the one
obtained with controller , from (6.25). Of course, the corresponding Nyquist diagram
is also very close to the one plotted in Fig. 7.27, and can be easily sketched if wanted.
Note that there is a somewhat significant reduction in stability margin: the value of
is % smaller than before, which may stand as a red flag for a possible deterioration
in closed-loop performance and robustness. We will investigate these issues in more
detail in Sections 8.1 and 8.2.
Figure 7.28 Combined Bode plots of (solid), from (6.25), and (dashed), from
(7.20); thick curves are magnitude and thin curves are phase.
Figure 7.29 Bode plots of (dashed), (thin solid) from (7.20), and
(thick solid).
Finally, as was done in Section 6.5, we verify that the controller designed to
stabilize also works when placed in feedback with the model linearized around the
equilibrium . On substituting (7.18) into L with we obtain
and trace the Bode plots of , C, and the loop transfer-function, , with
from (7.20) in Fig. 7.30. Because of the pair of imaginary poles at
, the magnitude of the frequency response is singular and the
phase is discontinuous at .
Figure 7.30 Bode plots of (dashed), (thin solid) from (7.20), and
(thick solid).
The Nyquist diagram, Fig. 7.31, is obtained after indenting the contour, , to exclude
the three imaginary poles. The loop transfer-function, , has no poles or
zeros on the right-hand side of the complex plane, that is, , therefore the
Nyquist plot no longer needs to encircle in order for the closed-loop system to be
asymptotically stable. This seems completely different from the design for , which
required a counter-clockwise encirclement. Yet, the requirements on the form and
locations of the zeros and poles are very similar. Starting from at , the
phase of the loop transfer-function swings in the clockwise direction to reach
at because of the integrator in the controller. As approaches
the phase swings in the clockwise direction, connecting the discontinuity
in the phase diagram, because of the pair of imaginary poles of . This time it is
necessary to raise the phase above before reaching in order to avoid crossing
the negative real axis and producing an encirclement of . This can be done by
adding two zeros and a pole as in (7.18) and (7.19) to introduce phase in excess of
and keep the controller proper. Remarkably, in spite of the significant differences in the
frequency responses of and , both designs require controllers that have the
same structure (7.18)! Because the Nyquist plot in Fig. 7.31, with three large
arc swings, neither crosses the negative real axis nor encircles the point , that
is,
and the frequency is marked in Fig. 7.30 by a circle. The gain margin is infinite due
to the absence of crossings of the negative real axis. The phase and stability margins
are slightly smaller than the ones obtained in closed-loop around the unstable
equilibrium, i.e. with .
Figure 7.31 Nyquist plot of , with from (7.20).
Problems
7.1 Draw the straight-line approximation and sketch the Bode magnitude and phase
diagrams for each of the following transfer-functions:
(a) ;
(b) ;
(c) ;
(d) ;
(e) ;
(f) ;
(g) ;
(h) ;
(i) ;
(j) ;
(k) ;
(l) .
7.2 Draw the polar plots associated with the rational transfer-functions in P7.1.
7.3 If necessary, modify the polar plots associated with the rational transfer-
functions in P7.1 to make them suitable for the application of the Nyquist stability
criterion.
7.4 Use the Nyquist stability criterion to decide whether the rational transfer-
functions in P7.1 are stable under negative unit feedback. If not, is there a gain for
which the closed-loop system can be made asymptotically stable? Draw the
corresponding root-locus diagrams and compare them.
7.5 Draw the Bode plots associated with the pole–zero diagrams in Fig. 7.32 assuming
that the straight-line approximations of the magnitude of the frequency-response have
unit gain at (or at if there is a pole or zero at zero).
Figure 7.32 Pole–zero diagrams for P7.5–P7.7.
7.6 Draw the Nyquist plots associated with the pole–zero diagrams in Fig. 7.32
assuming that the straight-line approximations of the magnitude of the frequency-
response have unit gain at (or at if there is a pole or zero at zero).
7.7 Use the Nyquist stability criterion to decide whether the rational transfer-
functions in P7.5 and P7.6 are stable under negative unit feedback. If not, is there a
gain for which the closed-loop system can be made asymptotically stable? Draw the
corresponding root-locus diagram and compare.
7.9 Find a minimum-phase rational transfer-function that matches the Bode phase
diagrams in Fig. 7.34. The straight-line approximations are plotted as thin lines.
Figure 7.34 Phase diagrams for P7.8–P7.11.
7.10 Calculate the rational transfer-function that simultaneously matches the Bode
magnitude diagrams in Fig. 7.33 and the corresponding phase diagrams in Fig. 7.34.
7.11 Draw the polar plot associated with the Bode diagrams in Figs. 7.33 and 7.34.
Use the Nyquist stability criterion to decide whether the corresponding rational
transfer-functions are stable under negative unit feedback. If not, is there a gain for
which the closed-loop system can be made asymptotically stable?
7.12 You have shown in P2.10 and P2.12 that the ordinary differential equation
and select so that the closed-loop system is asymptotically stable. Calculate the
corresponding gain and phase margin. Is the closed-loop capable of asymptotically
tracking a constant reference input ? Is the closed-loop capable of asymptotically
rejecting a constant input torque disturbance?
7.14 The rotating machine in P6.11 is connected to a piston that applies a periodic
torque that can be approximated by , where the angular frequency
is equal to the angular velocity . The modified equation including this additional
torque is given by
Use Bode plots and the Nyquist stability criterion to design a dynamic feedback
controller that uses as control input and as the measured output so that the
closed-loop system is capable of asymptotically tracking a constant reference input
, , and asymptotically rejecting the torque perturbation
when . Calculate the corresponding gain and phase margins.
7.15 You have shown in P2.18 that the ordinary differential equation
is a simplified description of the motion of the elevator in Fig. 2.18(b), where is the
angular velocity of the driving shaft and is the elevator’s load linear velocity. Let
m, kg, kg m /s, kg m , and
m/s . Use Bode plots and the Nyquist stability criterion to design a dynamic
feedback controller that uses as control input and the elevator’s load vertical
position
7.17 You have shown in P2.28 that the ordinary differential equation
7.18 You have shown in P2.32 that the ordinary differential equations
constitute a simplified description of the motion of the mass–spring–damper system in
Fig. 2.20(b), where and are displacements, and and are forces applied on
the masses and . Let the force, , be the control input and let the
displacement, , be the measured output. Let kg, kg/s,
N/m, and N/m. Use Bode plots and the Nyquist stability criterion to
design a dynamic feedback controller that uses as control input and as the
measured output and that can regulate the position, , at zero for any constant
possible value of force . Calculate the corresponding gain and phase margins. Hint:
Treat the force as a disturbance.
7.20 In P6.20 you have designed the spring and damper on the one-eighth-car model
from P6.19 for a car with mass equal to kg to have a natural frequency
Hz and damping ratio . Draw the Bode magnitude and phase
diagrams corresponding to the design in P6.20. Interpret your result using Bode plots
and the frequency-response method.
7.21 In P6.21 you calculated the response of the one-eighth-car model you designed
in P6.20 to a pothole with a profile as shown in Fig. 6.27, where 1m and cm
for a car traveling first at km/h and then at km/h. Interpret your results using
Bode plots and the frequency-response method.
7.22 In P6.22 you calculated the worst possible velocity a car modeled by the one-
eighth-car model you designed in P6.20 could have when traveling on a road with
profile , where v is the car’s velocity. Interpret your results using
Bode plots and the frequency-response method.
7.23 You showed in P6.23 that the design of the spring and damper for the one-
eighth-car model from P6.19 can be interpreted as a PD control design problem. Use
this reformulation to evaluate the solution obtained in P6.20 using Bode plots and the
Nyquist stability criterion. Calculate the corresponding gain and phase margins.
7.24 In P6.25 you designed the spring and damper on the one-quarter-car model
from P6.24 for a car with mass kg, wheel mass kg, tire
stiffness equal to N/m, and negligible tire damping coefficient ,
to have its dominant poles display a natural frequency Hz and damping ratio
. Interpret your result using Bode plots and the frequency-response method.
7.25 In P6.26 you calculated the response of the one-quarter-car model you designed
in P6.25 to a pothole with a profile as shown in Fig. 6.27, where 1 m and
cm for a car traveling first at km/h and then at km/h. Interpret your results
using Bode plots and the frequency-response method.
7.26 In P6.27 you have calculated the worst possible velocity a car modeled by the
one-quarter-car model you designed in P6.25 could have when traveling on a road with
profile , where v is the car’s velocity. Interpret your results using
Bode plots and the frequency-response method.
7.27 You have shown in P6.28 that the design of the spring and damper for the one-
quarter-car model from P6.24 can be interpreted as a PD control design problem. Use
this reformulation to evaluate the solution obtained in P6.25 using Bode plots and the
Nyquist stability criterion. Calculate the corresponding gain and phase margins.
7.28 Compare the answers from P7.24–P7.27 with the answers from P7.20–P7.23.
7.29 You have shown in P2.41 that the ordinary differential equation
is a simplified description of the motion of the rotor of the DC motor in Fig. 2.24. Let
the voltage, , be the control input and the rotor angular velocity, , be the
measured output. Let kgm , N m/A, V s/rad,
kg m /s, and . Use Bode plots and the Nyquist stability
criterion to design a dynamic feedback controller so that the closed-loop system is
capable of asymptotically tracking a constant reference input , .
Calculate the corresponding gain and phase margins.
7.30 Repeat P7.29 to design a position controller that uses a measurement of the
angular position
7.31 You showed in P4.34 that the torque of a DC motor, , is related to the
armature voltage, , through the transfer-function
Use the data from P7.29 and Bode and Nyquist plots to design a controller that uses
the voltage as the control input and the torque as the measured output so that
the closed-loop system is capable of asymptotically tracking a constant reference input
torque , . Calculate the corresponding gain and phase margins.
7.32 You showed in P2.49 that the temperature of a substance, T (in K or in C),
flowing in and out of a container kept at the ambient temperature, , with an inflow
temperature, , and a heat source, q (in W), can be approximated by the differential
equation
where m and c are the substance’s mass and specific heat, and R is the overall system’s
thermal resistance. The input and output flow mass rates are assumed to be equal to w
(in kg/s). Assume that water’s density and specific heat are kg/m and
J/kg K. Use Bode plots and the Nyquist stability criterion to design a dynamic feedback
controller that uses the heat source q as the control input and the temperature T as
the measured output for a gal ( m ) water heater rated at
BTU/h ( kW) and thermal resistance K/W at ambient temperature,
F( C). The controller should achieve asymptotic tracking of a reference
temperature F( C) without any in/out flow, i.e. . Calculate the
corresponding gain and phase margins.
7.36 In P6.38 and P6.39 you reproduced the results of [Ste+03] by verifying using the
root-locus method that the PID controller (6.26) is capable of stabilizing the insulin
homeostasis system in closed-loop. Use the values and , and
calculate the loop transfer-function, , that can be used for feedback analysis of the
closed-loop glucose homeostasis system with respect to the proportional gain,
, and sketch the corresponding Bode and polar plots. Use the Nyquist stability criterion
to show that the closed-loop insulin homeostasis system is asymptotically stable.
3 The function is equal to when .
4 A better reason will be provided in Section 7.6, where you will learn how to handle imaginary poles in Nyquist plots.
5 This implies that all singularities are isolated.
8 Because h and are analytic at and .
continuous function.
11 See discussion at the end of Section 7.4.
12 This is for convenience, as it allows one to read Bode plots from left to right as we traverse the imaginary axis from to to produce polar and Nyquist plots.
13 The characteristic equations of linear systems with delay have an infinite number of roots [GKC03].
14 Note that and have when their numerators and denominators are made monic, so be careful with their root-locus plots!
15 This argument can be made precise by expanding F in a power series.
16 Do not get fooled by the drawing in Fig. 7.23(b)! The two line segments coincide.
17 , which is in ; hence .
18 is a fancy replacement for as is for . See footnote 31 on p. 75.
19 Minimum-phase poles and zeros also result in controllers that are easier to implement. How would you “start” a feedback loop with an unstable controller?
8
Performance and Robustness
In earlier chapters you learned the basic techniques and methods used in classical control design. One main goal was to achieve closed-loop stability. In this chapter we take our discussion of the performance and robustness of feedback systems further. We also introduce the concepts of filtering and feedforward.
In Chapter 7 we saw how frequency-domain methods can be used for control design.
We studied the frequency response of the loop transfer-function and the associated
Bode and Nyquist diagrams to obtain clues on the structure and location of the poles
and zeros of a controller needed for closed-loop stabilization. We introduced gain,
phase, and stability margins, in Section 7.7, as measures of closed-loop robustness.
However, it is not yet clear how closed-loop robustness relates to performance. We have also not established whether closed-loop performance specifications can be translated into requirements on the (open-)loop transfer-function.
Often the energy of the reference signal is concentrated in a limited region of the spectrum, and the job of the control system designer is to keep the magnitude of the sensitivity function as small as possible in this region in order to achieve good tracking. A number of obstacles can make this a difficult task. First, one should not expect to be able to make the sensitivity zero at all frequencies. Indeed, the poles and zeros of L dictate the behavior of S, which makes this task impossible: if p is a pole of L and z is a zero of L then S(p) = 0 and S(z) = 1. For instance, if L satisfies (3.23), e.g. L is rational and strictly proper, then S approaches one at high frequencies. This is the same as thinking of L as having a zero at infinity. Loosely speaking, achieving (8.1) would amount to having infinitely large loop gains throughout the entire spectrum! Nevertheless, as detailed in Chapter 4, it is possible and even desirable to make S(jω₀) = 0 at select frequencies by cleverly placing the poles of the controller. The resulting closed-loop system achieves asymptotic tracking (Lemma 4.1).
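A minimal numerical sketch of this idea, with an illustrative loop that is not taken from the book: a controller pole at s = 0 (an integrator) forces S(0) = 0, so constant references are tracked asymptotically.

```python
import numpy as np

# Illustrative loop (not the book's data): G(s) = 1/(s+2) with the
# integrating controller K(s) = (s+1)/s, so L(s) = (s+1)/(s(s+2)).
def L(s):
    return (s + 1.0) / (s * (s + 2.0))

def S(s):
    # sensitivity function S = 1/(1 + L); the integrator makes S(0) = 0
    return 1.0 / (1.0 + L(s))

# closed-loop characteristic polynomial s^2 + 3s + 1 is stable
poles = np.roots([1.0, 3.0, 1.0])
print(poles.real)

for w in (1.0, 0.1, 0.01):
    print(w, abs(S(1j * w)))  # |S(jw)| -> 0 as w -> 0
```

The pole of the controller at the frequency of the reference (here ω₀ = 0) becomes a zero of S, which is exactly the mechanism behind Lemma 4.1.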
Even if it were possible to make the sensitivity function equal to zero at all frequencies, there would still be plenty of reasons why it would not be a good idea to do so. As discussed in Section 4.6, by making S small at a certain frequency we automatically make the complementary sensitivity, T = 1 − S, large at that frequency, in which case we should expect a deterioration of performance in the presence of measurement noise. Most control systems get around this limitation by making S small at the low end of the spectrum, where most practical reference signals are concentrated, while making T small at the upper end of the spectrum, which is permeated by noise. One usually makes S small by making the loop gain large. Indeed, if |L(jω)| is large at some frequency ω then |S(jω)| = 1/|1 + L(jω)| ≈ 1/|L(jω)| is small.
This parallels the discussion in Chapters 1 and 4: large gains mean better tracking. But the real problem is not when gains are small or large; it occurs when the gain is somewhere in the middle. If |L(jω)| ≈ 1 then attention shifts from the magnitude to the phase. If |L(jω)| ≈ 1 and the phase of L(jω) is close to −180° then a crossing of the negative real axis might occur, which, as seen in Chapter 7, has far-reaching implications for closed-loop stability; in terms of tracking performance, |S(jω)| might become large, sometimes much larger than 1 if |1 + L(jω)| < 1, compromising the overall stability margin, as measured by (7.17). We illustrate this situation with an example.
Recall the two controllers for the simple pendulum, from (6.25) and (7.20), which were analyzed in Sections 6.5 and 7.8. The Bode plots of the sensitivity functions, calculated for the simple pendulum model linearized around the unstable equilibrium, are shown in Fig. 8.2. The corresponding loop transfer-functions are plotted for comparison. As expected, the value of |S(jω)| reaches its peak, which in both cases is much higher than one (0 dB), precisely when the loop gain is near 0 dB, that is |L(jω)| ≈ 1. Note how much more pronounced the peak is for the controller whose loop phase is closer to −180° near this crossover. Because the poles of the sensitivity function are the closed-loop poles, the magnitude of the sensitivity is often indicative of low-damped poles. Indeed, this can be confirmed by calculating the closed-loop poles.
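The peak of |S(jω)| and its link to the phase at gain crossover can be sketched numerically; the loop below is illustrative, not the pendulum data:

```python
import numpy as np

# Illustrative loop (not the pendulum model): L(s) = 4 / (s (s + 1)).
def L(s):
    return 4.0 / (s * (s + 1.0))

w = np.logspace(-2, 2, 20000)
Ljw = L(1j * w)
S = 1.0 / (1.0 + Ljw)
Ms = np.max(np.abs(S))                     # peak of the sensitivity magnitude

i = np.argmin(np.abs(np.abs(Ljw) - 1.0))   # gain-crossover index, |L| ~ 1
pm = 180.0 + np.degrees(np.angle(Ljw[i]))  # phase margin in degrees

# The closer the phase at crossover is to -180 deg (small pm), the larger Ms:
# Ms >= 1 / (2 sin(pm/2)) since |1 + L| = 2 sin(pm/2) at the crossover.
print(Ms, pm)
```

For this loop the phase margin is about 28°, and the sensitivity peaks well above one, illustrating the connection between a small phase margin and a large sensitivity peak.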
The peaking of the closed-loop sensitivity function in the above example may not be completely obvious to an inexperienced designer who is looking only at the open-loop transfer-functions. The plots in Fig. 8.2 suggest, however, that something else may be at work: it seems as if the reduction in sensitivity achieved by the controller at lower frequencies is balanced by an increase in sensitivity at higher frequencies. This phenomenon is similar to what is sometimes known in the literature as the waterbed effect [DFT09, Section 6.2]: reducing the sensitivity at one frequency seems to raise the sensitivity at another frequency. In fact, the controller was designed precisely to increase the loop transfer-function gain at low frequencies. This was accomplished, but at the expense of raising the sensitivity at higher frequencies.
When L is rational, condition (8.2) is satisfied only if L is strictly proper. The limit of sL(s) as s → ∞ is zero if the difference between the degree of the denominator, n, and the degree of the numerator, m, is at least two, i.e. n − m ≥ 2. The integer n − m is known as the relative degree of the transfer-function L.
Without getting distracted by the right-hand side of (8.3), notice the rather strong implication of Bode's sensitivity integral: the total variation of the logarithm of the magnitude of the closed-loop sensitivity function about 1 is bounded. This means that if a controller is able to reduce the closed-loop sensitivity in a certain frequency range, this must come at the expense of the sensitivity increasing somewhere else. Poles of L on the right-hand side of the complex plane further compromise the performance by raising the overall integral of the log of the magnitude of the sensitivity function. If the unstable poles of L come from the system, G, as they often do, then the right-hand side of (8.3) is the price to be paid to make these poles stable in closed loop.
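Bode's sensitivity integral can be checked numerically. The loop below is an illustrative example with one unstable pole at s = 1 and relative degree two, so the integral of ln |S(jω)| over all positive frequencies should equal π times the real part of that pole, i.e. π:

```python
import numpy as np
from scipy.integrate import quad

# L(s) = 10 / ((s - 1)(s + 2)): one RHP pole at s = 1, relative degree 2,
# and S = 1/(1 + L) = (s^2 + s - 2)/(s^2 + s + 8) is asymptotically stable.
def ln_abs_S(w):
    s = 1j * w
    L = 10.0 / ((s - 1.0) * (s + 2.0))
    return np.log(abs(1.0 / (1.0 + L)))

val, _ = quad(ln_abs_S, 0.0, np.inf, limit=200)
print(val, np.pi)  # Bode integral: pi times the sum of the RHP poles of L
```

The regions where ln |S| is negative (sensitivity reduction) are exactly balanced, plus the penalty π for the unstable pole, by the regions where it is positive.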
The balance imposed by (8.3) does not necessarily mean that the sensitivity function will have to peak. In practice, however, it often does. For instance, if the overall bandwidth of the loop transfer-function is bounded then peaking will occur. In Fig. 8.2 the loop gain rolls off quickly after the crossover frequency, and the decreased sensitivity at lower frequencies must be balanced within a relatively narrow region above the crossover. But peaking of the sensitivity need not occur if the controller is allowed to arbitrarily, and very often unrealistically, increase the bandwidth of the loop transfer-function. For example, let G be a first-order system in feedback with a static controller of gain K, so that L = KG. In this simple example one can obtain from the Bode plot of G the bandwidth of the open-loop system, that is, the range of frequencies over which the magnitude of G stays above a given threshold. The range of frequencies over which |L(jω)| = K|G(jω)| stays above the same threshold grows with K. In other words, the bandwidth of the loop transfer-function L can be made arbitrarily large by increasing the control gain K. In terms of the Bode integral (8.3), L has no unstable poles and has relative degree one, so the integral of the logarithm of the sensitivity magnitude becomes more negative as K grows.
In this case raising the gain K lowers the overall sensitivity function. Interestingly, for a system in which n − m ≥ 2, raising the loop gain will not reduce the overall sensitivity and generally leads to peaking: recall from the root-locus method, Section 6.4, that if the relative degree of L is two or more, that is, if L has at least two more poles than zeros, then the root locus will have at least two asymptotes, and raising the gain will necessarily lead to low-damped closed-loop poles, and even instability when the relative degree is greater than or equal to three.
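A quick numerical sketch of this contrast, with illustrative first- and second-order loops sharing the same gain:

```python
import numpy as np

# Illustrative loops: same gain K, relative degree one vs two.
w = np.logspace(-2, 3, 20000)
s = 1j * w
K = 100.0

S1 = 1.0 / (1.0 + K / (s + 1.0))        # L1 = K/(s+1), relative degree 1
S2 = 1.0 / (1.0 + K / (s + 1.0) ** 2)   # L2 = K/(s+1)^2, relative degree 2

# With relative degree 1 the sensitivity never exceeds one; with relative
# degree 2 raising K produces a large peak (low-damped closed-loop poles).
print(np.max(np.abs(S1)), np.max(np.abs(S2)))
```

For the first loop |S| stays below one at every frequency, while for the second loop the same gain produces a sensitivity peak above five near the resonance of the low-damped closed-loop poles.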
From (8.3) alone, it is not clear how the zeros of L impact the closed-loop sensitivity function. Indeed, it does not seem to be possible to provide a simple account of the impact of zeros on the closed-loop performance. However, in the following simplified case, a quantitative analysis is possible. Let L have a single pole, p, and a single zero, z, z ≠ p, on the right-hand side of the complex plane. The asymptotically stable sensitivity function, S, will have a zero at p. Factor out this zero to define a modified sensitivity function that is not only asymptotically stable but also minimum-phase, since all remaining zeros are on the left-hand side of the complex plane. In P8.7 you will show how the two functions are related. Because z is a zero of L we have that S(z) = 1, and application of the maximum modulus principle [BC14, Section 59] (see also P8.1) yields the bound ‖S‖∞ ≥ |z + p|/|z − p|, which shows that an unstable zero can amplify the already detrimental impact of an unstable pole: if z and p are close then ‖S‖∞ can be much larger than one. See [GGS01] and [DFT09] for much more.
Proof of Theorem 8.1
Some of the steps in this proof will be developed in problems at the end of the
chapter. A proof of Theorem 8.1 is relatively simple after P8.2 and P8.3 make us realize
that
and evaluate the corresponding contour integral, where the contour is the one in Fig. 7.14 in which the radius of the semi-circular segment is made large. If L has no poles on the right-hand side of the complex plane then S has no zeros on the right-hand side of the complex plane. Because S is asymptotically stable it has no poles on the right-hand side either, which means that the function ln S is analytic on the right-hand side of the complex plane. From Cauchy's residue theorem (Theorem 3.1), the integral around this contour is zero.
This simplification results in
You will prove the left-hand equality in (8.5) in P8.5. This results in a special case of (8.3) that holds only when L has no poles on the right-hand side of the complex plane.
Repeating the same sequence of steps as above, but this time with the modified sensitivity function instead of S, we obtain from (8.4) an expression whose right-hand-side integral can be evaluated to give (8.3).
8.2 Robustness
For instance, recall the block-diagram in Fig. 6.4, which shows the connection of a feedback controller to the nonlinear model of the simple pendulum derived in Section 5.5. The only nonlinear block in this diagram is the sine function. Upon renaming this block and rearranging the signal flow we arrive at the closed-loop diagram in Fig. 8.4, where the renamed block is referred to as an uncertainty. The particular sine function belongs to the set of functions bounded by a linear function. Any function in this set must lie in the shaded area (sector) in Fig. 8.5, which also shows the sine nonlinearity as a solid curve.
Figure 8.4 Feedback configuration of the controlled simple pendulum for robust
analysis; ; .
Figure 8.5 Functions in satisfy and must lie on the shaded area
(sector); solid curve is .
If the closed-loop feedback connection in Fig. 8.4 is asymptotically stable for every uncertainty in this set, we say that it is robustly stable. Of course, robust stability implies asymptotic stability for any particular member of the set, such as the sine nonlinearity in the case of the pendulum.
Back to the block-diagram of Fig. 8.3, assume that G is an asymptotically stable linear time-invariant system model with ∞-norm less than one, that is, ‖G‖∞ < 1. This inequality appears frequently in the robust control literature, where it is known as a small-gain condition [DFT09, ZDG96]. After constructing a suitable state-space realization, the required inequalities come from applying the triangle inequality (3.47), (3.48) and (5.8). As in Section 5.2, one signal is used to represent the response to a possibly nonzero initial condition, and the constant M is related to the eigenvalues of the observability Gramian (see (5.8)). It might be a good idea to go back now to review the material in Sections 3.9 and 5.2 before continuing. If the external input has bounded 2-norm then all signals in the loop have bounded 2-norm; in other words, all signals will eventually converge to zero. Convergence to zero from any bounded initial condition is guaranteed in the presence of any input v with bounded 2-norm.
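A crude way to check the small-gain condition is to estimate the ∞-norm as the peak of the frequency-response magnitude on a dense grid; the transfer-function below is illustrative:

```python
import numpy as np

# Estimate ||G||_inf as the peak of |G(jw)| on a dense grid (a sketch;
# production code would use a bisection algorithm on a Hamiltonian matrix).
def hinf_norm(num, den, w=np.logspace(-3, 3, 100000)):
    s = 1j * w
    return np.max(np.abs(np.polyval(num, s) / np.polyval(den, s)))

# Illustrative stable G(s) = 0.8/(s^2 + s + 1): resonant peak near w = 0.7
g = hinf_norm([0.8], [1.0, 1.0, 1.0])
print(g, g < 1.0)  # small-gain condition satisfied: peak below one
```

Here the resonant peak is about 0.92, below one, so the small-gain condition guarantees stability of the loop for any uncertainty with gain at most one.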
An important particular instance of the above analysis is obtained when the input, v, is zero and the uncertainty is a nonlinear element in the sector, as in the simple pendulum model. In this case, it is possible to prove that the origin is a globally asymptotically stable equilibrium point of the nonlinear system resulting from the feedback diagram in Fig. 8.3. In other words, the origin is the only equilibrium point of the nonlinear system and it is asymptotically stable. Unfortunately, with nonlinear systems, it is not always possible to go from asymptotic stability or input–output stability in the 2-norm sense to input–output stability in the ∞-norm sense, that is, BIBO stability. In the special case of the diagram in Fig. 8.3 this is possible, but it requires technical devices which are beyond the scope of this book. We will instead present an ad hoc methodology for the control of the simple pendulum.
Speaking of the simple pendulum, note that its open-loop model has infinitely many stable equilibria, spaced 2π apart. However, in closed loop with a linear controller, the origin becomes the only stable equilibrium. This brings up a curious practical situation: if the sensor that measures the pendulum angle is not reset before closed-loop operation, that is, if its initial reading is not within one revolution of zero, then the controller will wind up or wind down the pendulum until the sensor reading is zero!
The setup in Fig. 8.3 can also be used to prove stability for different classes of uncertainty. One example is when the uncertainty is an asymptotically stable linear time-invariant system. In this case, stability of the loop in Fig. 8.3 can be assessed in terms of the loop transfer-function and the frequency-domain definition of the ∞-norm from Section 3.9. An important application is a loop in which the feedback control at time t is based on a delayed signal, i.e. the signal produced some seconds earlier, a situation which is common in practice. When all blocks in the loop are linear time-invariant systems it is possible to write the loop in terms of transforms to obtain the feedback diagram in Fig. 8.6. Compare this diagram with the standard diagram for robustness analysis in Fig. 8.3. Application of the small-gain condition to linear systems with feedback delays then follows from this correspondence. Therefore, if both G and K are asymptotically stable and the corresponding small-gain condition holds, then the closed-loop connection will be asymptotically stable in the presence of any constant delay.
Conditions for stability in the presence of delay obtained in this way are often conservative [GKC03]. Even in this very simple example, we already know that the feedback connection is asymptotically stable in the presence of any constant delay, which can be shown as an application of the Nyquist criterion. Indeed, the sensitivity transfer-function of the above feedback connection is the same as (7.12), which we have already shown to be asymptotically stable in Section 7.5.
We will now apply robustness tools to the design of a controller for the simple pendulum. Instead of working with the models linearized at the equilibrium points, we shall work directly with the closed-loop configuration for robust analysis from Fig. 8.4 and treat the nonlinearity as a member of the sector shown in Fig. 8.5, as discussed earlier in Section 8.2. Consider for now that the reference input is zero. We will treat the nonzero case later. A transfer-function, G, suitable for robust analysis of the closed-loop control of the simple pendulum is obtained from the input w to the output y in Fig. 8.4, where the controller enters through its transfer-function. We set the parameters as in (6.20) and (6.21) and substitute the transfer-functions of the controllers from (6.25) and (7.20) to compute the corresponding transfer-functions using (8.11). You will verify in P8.9 that both are asymptotically stable. Moreover, the magnitude of their frequency response, plotted using a linear scale in Fig. 8.7, shows that the peak, and hence the ∞-norm, exceeds one for at least one of the controllers.
Figure 8.7 Magnitude of the frequency response of the transfer-function G, from (8.11), plotted on a linear scale for robust analysis of the simple pendulum in closed loop with the controllers from (6.25) (thick solid), (7.20) (thick dashed), and (8.13) (thin solid); the ∞-norm is the peak value.
The idea is to make changes in the zeros and poles of the controller so as to lower the peaking seen in Fig. 8.7. Compare Fig. 8.2 with Fig. 8.7 and note that, in the case of the simple pendulum, the peaking of the magnitude of the transfer-function G, from (8.11), is similar to the peaking of the magnitude of the sensitivity transfer-function, S. Therefore we hope that a reasoning similar to that employed in Section 8.1 to explain the peaking of the magnitude of the sensitivity can also explain the peaking of the magnitude of the transfer-function G. We expect that the peaking will be reduced if we allow the sensitivity to be higher at lower frequencies. This can be accomplished by reducing the gain of the loop transfer-function at low frequencies. One strategy is to stick with the general controller structure from (7.18), where the first zero is kept at the same location and the second zero is shifted from 5 to 3 while preserving the same level of gain near the crossover region by enforcing a gain constraint. Because the zero occurs earlier, the overall magnitude at low frequencies is reduced. We finally move the pole slightly to limit the gain at high frequencies. This results in the controller given in (8.13).
The features discussed above can be visualized in the Bode plots of the three controllers shown in Fig. 8.8. The impact on the closed-loop performance can be evaluated in the Bode plots of the loop transfer-functions in Fig. 8.9. Note that the new controller accomplishes the goal of reducing the loop gain at low frequencies while preserving the same level of loop gain at higher frequencies. More importantly, notice how the phase of its loop transfer-function is pushed further away from −180° near the gain-crossover frequency, where |L(jω)| ≈ 1. Looking back at Fig. 8.7 we see how the peak of the magnitude of the corresponding transfer-function for robust analysis has been significantly reduced with the new controller; indeed, its ∞-norm is even lower than the values achieved by the earlier designs. Of course, improved robustness has been achieved at the expense of some degradation of performance at lower frequencies, namely around 1 Hz. It is the control engineer's job to assess whether such trade-offs are acceptable in view of the intended application of the system at hand.
Figure 8.8 Magnitude of the frequency response of the controllers from (6.25) (thick solid), (7.20) (thick dashed), and (8.13) (thin solid).
We shall now address the issue of tracking. As mentioned at the end of Section 8.3, because of the presence of nonlinearities in the closed loop, one needs to deploy heavy theoretical machinery to go from asymptotic stability to BIBO stability, which would allow one to extend the above analysis to a nonzero reference. Even then, it is often the case that additional arguments need to be invoked to prove asymptotic tracking. This is the motivation for the discussion in the next paragraphs, in which we consider the problem of asymptotic tracking of a constant reference input for the simple pendulum.
Figure 8.10 Modified feedback configuration of the controlled simple pendulum from Fig. 8.4 for robust tracking analysis; the constant reference signal has been incorporated into the state of the last integrator.
In order for the diagrams in Fig. 8.4 and Fig. 8.10 to be equivalent we need to determine appropriate signals and a modified nonlinearity. The required relationships among the signals follow after a bit of trigonometry. As you will show in P8.11, the modified nonlinearity is, in this case, also in the original sector, which is remarkable! Note that the transfer-function of the linear part of the system in Fig. 8.10, G, is the same as the one in Fig. 8.4, which is given by (8.11).
With the setup from Fig. 8.10 in mind we repeat the calculations as in Section 8.3:
A final touch is the evaluation of stability with respect to various levels of damping. In Section 6.5 we stated that one might expect that controllers designed under the assumption of no damping should also perform well in the presence of natural damping. Robust analysis with respect to more than one parameter is often a complicated task. What we do here is simply evaluate the ∞-norm of the transfer-function G, from (8.11), for various values of the damping. The result is shown in Fig. 8.11, from which we can see that as b grows the norm of G decreases, so that the closed-loop nonlinear connection in Figs. 8.4 and 8.10 with the controllers designed earlier will remain asymptotically stable and achieve asymptotic tracking for any constant level of damping. Note that for large enough b even the norm of G calculated with the controller from (7.20) is below one. Interestingly, for high enough levels of damping one of the transfer-functions attains the lowest norm of all, suggesting that the corresponding controller is the most robust setup in that regime.
Figure 8.11 The ∞-norm of the transfer-function, G, from (8.11), for robust analysis of the simple pendulum in closed loop with the controllers from (6.25) (thick solid), (7.20) (thick dashed), and (8.13) (thin solid), as a function of the damping b.
Theorem 8.2 (Circle criterion) Consider the diagram in Fig. 8.3. The origin of a state-space realization of the connection of the linear time-invariant system G with any nonlinearity in the sector is globally asymptotically stable if one of the following conditions holds:
(a) the lower sector bound is negative, G is asymptotically stable, and the polar plot of G never leaves and never touches the critical circle;
(b) the lower sector bound is positive, and the Nyquist plot of G encircles the critical circle exactly m times in the counter-clockwise direction, where m is the number of poles of G on the right-hand side of the complex plane, but never enters and never touches the critical circle.
Both conditions can be easily checked graphically by analyzing the behavior of the polar plot or Nyquist plot of G with respect to the critical circle; this is the reason why this criterion is referred to as the circle criterion. Note that in condition (b) the transfer-function G need not be stable, and one might need to indent poles on the imaginary axis, hence the need for a Nyquist rather than a simpler polar plot.
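A sketch of such a graphical check, with an illustrative plant and sector bounds: for a stable G and positive sector bounds a (lower) and b (upper), the critical disk crosses the real axis at −1/a and −1/b, and the test reduces to verifying that the polar plot stays outside the disk.

```python
import numpy as np

# For sector bounds 0 < a < b, the critical disk crosses the real axis at
# -1/a and -1/b.  For a stable G (m = 0) the test is: stay outside the disk.
def outside_disk(Gjw, a, b):
    center = -0.5 * (1.0 / a + 1.0 / b)   # on the real axis
    radius = 0.5 * (1.0 / a - 1.0 / b)
    return bool(np.all(np.abs(Gjw - center) > radius))

w = np.logspace(-2, 2, 10000)
Gjw = 1.0 / (1j * w + 1.0) ** 2           # illustrative stable G(s) = 1/(s+1)^2

print(outside_disk(Gjw, 0.5, 2.0))        # sector [0.5, 2]: test passes
print(outside_disk(Gjw, 8.0, 50.0))       # aggressive sector: test fails
```

Widening the sector shrinks and shifts the disk toward the origin, so the same polar plot may fail the test; the criterion then becomes inconclusive for that sector.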
When a transfer-function has positive real part, its polar plot never ventures into the left-hand side of the complex plane, and therefore never encircles the point −1 in closed loop with any constant feedback gain. The corresponding sector description covers the entire first and third quadrants. Positive-realness implies that the phase of G is bounded by ±90°, and therefore the phase of the loop transfer-function, L, stays away from ±180°, so that the Nyquist plot of L never crosses the negative real axis.
As an illustration of the use of the circle criterion, consider once again the control of the simple pendulum. As shown in Fig. 8.13, it is possible to find a tighter description for the sine nonlinearity in terms of a sector uncertainty (see also P8.13). The polar plot of the transfer-function G, from (8.11), calculated for the simple pendulum model with the controllers from (6.25), (7.20), and (8.13), is shown in Fig. 8.15, superimposed on the unit circle, from the small-gain condition, and on the critical circle, from the circle criterion with the tighter uncertainty description. All controllers designed so far can now be proven to asymptotically stabilize the simple pendulum because all three polar plots lie inside the critical circle. Recall that the model with one of the earlier controllers previously failed the small-gain test, which can be visualized in Fig. 8.15 as its polar plot leaving the unit circle.
Figure 8.15 Polar plots of the transfer-function G, from (8.11), calculated for the simple pendulum model with the controllers from (6.25) (thick solid), (7.20) (thick dashed), and (8.13) (thin dashed). All polar plots are contained in the critical circle from the circle criterion; one of the earlier designs is the only one whose plot is not also contained in the unit circle.
Proof of the Circle Criterion
The next paragraphs are dedicated to translating these two conditions into conditions on the open-loop transfer-function G.
Figure 8.16 Closed-loop feedback configuration for robust analysis with the circle
criterion.
that is, the critical circle we introduced earlier. Some examples were shown in Fig. 8.14 for different choices of the sector parameters.
For robust stability we must locate not only the image of the unit circle but also the image of its interior. A summary of the conclusions (see [Neh52, Section V.2] for details) is as follows: if the pole of the mapping T is inside the critical circle then T maps the interior of that circle into the interior of the unit circle; otherwise, if the pole is outside the critical circle, then T maps the exterior of that circle into the interior of the unit circle. With that information, all that is left is to locate the pole of T relative to the critical circle. There are two possibilities, which we consider in turn, depending on the signs of the sector parameters. The intersection of the critical circle with the real axis is a segment of the negative real axis (see Fig. 8.14).
Continuing with our discussion on performance, consider now the system with a
control input, u, and a disturbance input, w, shown in Fig. 8.17. The key assumption in
this section is that the disturbance, w, is either known ahead of time or can be
measured online.
Figure 8.17 System with control input, u, and measurable disturbance, w.
One control architecture that can take advantage of this knowledge of the disturbance is the one in Fig. 8.18. In this figure the controller, C, makes use of three inputs: the measured output, y, the reference input, and the measured disturbance, w. Internally, it is composed of a feedback term, the block K, and a feedforward term, the block F. The feedforward block, F, makes no use of the output, y, and in this sense it implements essentially open-loop control. Any instabilities will have to be dealt with by the feedback block, K.
Figure 8.18 Closed-loop feedback configuration for tracking and measured disturbance,
w, with feedback, K, and feedforward, F, control blocks.
If the feedforward path exactly cancels the disturbance path, then the tracking error, e, becomes completely decoupled from the disturbance, which would be a remarkable feat. Unfortunately, it is not always possible to achieve this. Indeed, one would need to implement the feedforward controller F = G⁻¹W, which might not be possible, depending on G and W. For instance, if W is static, i.e. of zeroth order, and G is rational and strictly proper, then F would not be proper, and therefore would not be realizable (see Chapter 5). Even when the filter F is realizable, the need to invert the system, G, should bring to mind the many difficulties with open-loop control discussed in various parts of this book, such as the nefarious effects of cancellations of non-minimum-phase poles and zeros addressed in Section 4.7. For instance, if G has a zero on the right-hand side of the complex plane then F = G⁻¹W is unstable, and therefore the connection in Fig. 8.18 will not be internally stable even though the error appears decoupled from the disturbance. Indeed, because W and F appear in series with the rest of the components of the closed loop, one should expect trouble whenever W or F is not asymptotically stable. Note that instability of the system, G, is not a problem as long as it is stabilized by the feedback controller, K.
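A tiny numerical illustration of this pitfall, with hypothetical W(s) = 1 and a non-minimum-phase G(s) = (1 − s)/(s + 1): the inverse-based feedforward F = G⁻¹W inherits an unstable pole at the plant zero.

```python
import numpy as np

# Hypothetical data: W(s) = 1, G(s) = (1 - s)/(s + 1) (RHP zero at s = 1).
# Perfect rejection needs F = W/G = (s + 1)/(1 - s), whose pole is the zero of G.
F_den = np.array([-1.0, 1.0])     # denominator 1 - s
F_poles = np.roots(F_den)

print(F_poles)                    # pole at s = +1: F is unstable, so the
                                  # connection cannot be internally stable
```

Any bounded disturbance would drive the internal signal through the unstable F, so the series connection blows up even though the nominal error transfer-function looks fine on paper.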
A particular case of interest is that of input disturbances, such as in Fig. 1.13 or Fig. 4.11. These diagrams become equivalent to Fig. 8.18 if we set W = G. In this case the choice F = 1 achieves perfect disturbance rejection, since W − GF = 0. This means that if we know ahead of time that a known disturbance will affect the control input of a system, we can simply cancel it by applying an opposite input. For example, in the case of the car cruise control subject to a change of road
slope discussed in Section 4.5, if one could anticipate the slope, and compute and apply
the exact force necessary to counteract the additional component of gravitational force
due to the slope, then one would achieve “perfect” disturbance rejection. Of course
this is easier said than done since, in practice, accurately estimating an upcoming slope
might require sophisticated resources, such as for instance fast vision-processing
capabilities. Moreover, other forces and uncertainties, e.g. friction and drag forces,
uncertainties on the total mass as well as the mass distribution of the car, etc. are
guaranteed to torment the optimistic control engineer. Nevertheless, even if perfect
rejection is not achieved, the solution proposed in Fig. 8.18 is certainly much better
than simply open-loop control and can be better than simply closed-loop feedback. In
fact, the combination of feedforward and feedback controllers can deliver enhanced
performance, as the feedforward control attempts to invert the system model, while
providing robustness to uncertainties through feedback.
which is very similar to the expression obtained for the configuration in Fig. 8.18. For this reason much of the former discussion applies verbatim to the diagram in Fig. 8.19. The case W = G is of special importance here as well, not only because perfect tracking is then possible with a unit feedforward gain, but also because of its interpretation: the auxiliary reference input, w, can be thought of as the control input required to drive the system, G, to a desired output. The feedforward controller makes sure that this reference input, w, is applied to the actual system, while the feedback controller, K, corrects any mistakes in the achieved trajectory, y. This is the scheme behind virtually all control systems in which open-loop control inputs are used. For a concrete example, w might be the calculated thrust input necessary to take a rocket from the surface of the earth to the surface of the moon along a desired trajectory. The actual thrust input, u, is produced by the rocket in closed loop with a feedback controller that makes sure the desired trajectory is being closely tracked. What other way could we have made it to the moon?
Figure 8.19 Closed-loop feedback configuration for tracking with reference input, w,
feedback, K, and feedforward, F, control blocks.
When perfect input rejection is not possible we have to settle for more modest goals. For instance, we could seek to achieve asymptotic disturbance rejection. The setup is similar to the one in Chapter 4, as for instance in Lemma 4.1 and Lemma 4.2. For a disturbance concentrated at a frequency ω₀, if S(jω₀) = 0, or equivalently if G or K has a pole at jω₀, then the first term in (8.21) will approach zero asymptotically. Likewise, if W(jω₀) − G(jω₀)F(jω₀) = 0 then the second term in (8.21) will also approach zero asymptotically. Because the zeros of S are the poles of K and G (see Chapter 4), when K has a pole at jω₀ then S will have at least one zero at jω₀ if no pole–zero cancellations occur. Likewise, if we assume that W and F are asymptotically stable, then such zeros cannot be canceled.
That feedforward will not destroy what feedback worked hard to achieve is good
news. However, when K does not have a pole at , it may still be possible to
achieve asymptotic disturbance rejection using feedforward. Indeed, all that is needed
is that a stable ) be chosen so that
and hence that . Note that this requirement is much weaker than asking
that F be equal to . Indeed, when , that is, in the case of step inputs, this
can be done with a static feedforward gain . A complete
feedforward controller for the case will be worked out in P8.21 and P8.22.
Compare this with (8.23), where F is forced to be unstable. Finally, it may be possible
that G has a pole at , in which case the closed-loop system already achieves
asymptotic tracking and a choice of or even will suffice to
achieve asymptotic disturbance rejection.
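The static-gain case can be sketched numerically. In this minimal example the disturbance model W and plant G are placeholders chosen for illustration, not the systems in (8.22), and the sign convention assumes the feedforward path subtracts G·F from the disturbance path:

```python
import numpy as np
from scipy import signal

# Placeholder stable systems with nonzero DC gain (illustrative only).
G = signal.TransferFunction([1], [1, 1])   # G(s) = 1/(s + 1)
W = signal.TransferFunction([2], [1, 2])   # W(s) = 2/(s + 2)

def dcgain(tf):
    """Evaluate a transfer-function at s = 0."""
    return np.polyval(tf.num, 0) / np.polyval(tf.den, 0)

# For step disturbances only the DC gains need to match: the static
# feedforward gain F = W(0)/G(0) makes the residual W(0) - G(0) F zero,
# which is what asymptotic rejection of a step requires.
F = dcgain(W) / dcgain(G)
residual_dc = dcgain(W) - dcgain(G) * F
print(F, residual_dc)
```

Only the DC gains enter, which is why a single number suffices for step inputs even when matching F to the full frequency response would require a dynamic filter.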
where
from which the optimal feedforward filter is
Note that this filter is not proper, and therefore not realizable. It is possible to formally
show that for any the proper filter has a performance
which is as close as desired to that of if is made small enough. See [DFT09,
Chapter 10] for details. The argument here is the same as the one used in Section 6.3,
where an extra pole far enough on the left-hand side of the complex plane was
introduced to make a PID controller proper and the same practical considerations
apply. This problem is often much more complicated if the norm is used instead
of the norm [DFT09, Chapter 9]. The solution in the case of the above example is,
however, still simple enough to be computed in P8.30 and P8.31, where it is shown that
is the optimal solution to (8.24). Note that this feedforward filter is even more
aggressive at higher frequencies than the optimal filter.
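The rolloff argument above can be illustrated numerically. Here F(s) = s + 2 is a hypothetical improper filter, not the one in (8.27), and the proper approximation adds a single fast pole, F_tau(s) = (s + 2)/(tau s + 1):

```python
import numpy as np

# Hypothetical improper filter (one more zero than poles).
def F(s):
    return s + 2

# Proper approximation: extra pole at s = -1/tau, far left for small tau.
def F_tau(s, tau):
    return (s + 2) / (tau * s + 1)

w = np.logspace(-2, 2, 200)   # frequency grid, rad/s
s = 1j * w
errs = [np.max(np.abs(F(s) - F_tau(s, tau))) for tau in (1e-1, 1e-2, 1e-3)]
print(errs)  # worst-case mismatch on the grid shrinks as tau -> 0
```

On any fixed frequency band the mismatch can be made as small as desired by shrinking tau, which is the sense in which the proper filter approaches the performance of the improper one.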
In some applications one can get away with an unstable estimator, for example when
the estimator is used in closed-loop with a feedback controller. It may also be possible
to relax some of the requirements on realizability if the filter is not to be used online.
Contrast (8.28) with (8.24) to see that the optimal feedforward design problem (8.24) is
in fact a particular case of the optimal filtering problem (8.28) where and
.
Figure 8.20 Standard configuration for filtering; F is the filter that is to be designed to
render the filtered error, , as small as possible.
It is worth noticing that the diagram in Fig. 8.20 shows its true versatility when the
signals and systems are allowed to be multivariate. For instance, in the context of
filtering we can let
Figure 8.21 Filter estimates the input signal, , and the measurement, ; feedback
compares the estimate, , with the actual measurement, y.
There are many more examples in the systems and control literature where the
problems of filtering, estimation, and control are connected. Perhaps one of the most
fundamental problems is that of parametrizing all stabilizing controllers, a procedure
that converts problems in feedback control into the design of filters, at the heart of
many developments in robust control and estimation theory [DFT09].
The simplest case, that of a stable system, can be visualized easily in the block-diagram
of Fig. 8.22, where the system block, G, has been duplicated and absorbed into the filter
block, Q. Because G is asymptotically stable, the connection is internally stable if and
only if Q is also asymptotically stable. Moreover, for any stabilizing controller, K, the
filter, Q, is given by
must be stabilizing. The complete theory requires the introduction of many new ideas,
which can be found, for instance, in [DFT09]. It is also remarkable that once the
feedback controller, K, has been parametrized in terms of the auxiliary filter, Q, all
transfer-functions in the feedback loop, say those in Fig. 4.18, become linear functions
practical since they tend to produce solutions with very large dimensions even when
the input data is made of low-order systems.
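The key identity behind Fig. 8.22 can be checked in the frequency domain. With an illustrative stable plant G and stable filter Q (neither taken from the text), the controller K = Q/(1 - GQ) yields a closed-loop transfer-function GK/(1 + GK) that collapses to GQ, which is linear in Q:

```python
import numpy as np

def G(s):
    return 1.0 / (s**2 + 3*s + 2)   # illustrative stable plant

def Q(s):
    return 5.0 / (s + 4)            # any stable, proper Q

w = np.logspace(-2, 2, 100)
s = 1j * w
K = Q(s) / (1 - G(s) * Q(s))        # controller recovered from Q
T_loop = G(s) * K / (1 + G(s) * K)  # closed-loop transfer-function
T_youla = G(s) * Q(s)               # linear in Q
err = np.max(np.abs(T_loop - T_youla))
print(err)
```

The two frequency responses agree up to rounding, so searching over stable Q sweeps over stabilizing controllers while keeping the closed-loop maps linear in the design variable.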
8.1 The maximum modulus principle [BC14, Section 59] states that if a complex
function f is analytic in an open set D then for any . Show that
if |f| attains a maximum in the interior of D then f is a constant.
8.6 Explain why in P8.5 one does not have to worry about the assumption
when evaluating .
for all .
and
8.9 Show that G, from (8.11), is asymptotically stable when the parameters , ,
and are given by (8.12) and K is , from (6.25), , from (7.20), or , from (8.13).
8.10 Construct state-space realizations for the block-diagrams in Fig. 8.4 and Fig. 8.10
and show how their states are related.
8.11 Let
Hint: Calculate all tangents to the curve that pass through the origin and
evaluate their coefficients.
8.14 Show that the image of the imaginary axis under the bilinear
mapping
is the circle
where
8.17 Let be defined as in P8.15. Show that if the image of the unit circle
under the mapping is the circle , where
8.18 The controller in Fig. 8.23 is known as a two degrees-of-freedom controller. The
transfer-function K is the feedback part of the controller and the transfer-function F is
the feedforward part of the controller. Show that
How does the choice of the feedforward term F affect closed-loop stability? Name one
advantage and one disadvantage of this scheme when compared with the standard
diagram of Fig. 8.1 if you are free to pick any suitable feedforward, F, and feedback, K,
transfer-functions.
8.19 With respect to P8.18 and the block diagram in Fig. 8.23 show that if
8.20 Let
Select K and design F in the block-diagram in Fig. 8.23 so that . If is a unit step,
what is the corresponding signal w?
is finite then it is always possible to calculate the gain K and the time-delay of
the feedforward controller
so that .
8.23 Show that any rational transfer-function without poles on the imaginary axis
can be factored as the product , where has all its poles and zeros on the
left-hand side of the complex plane, that is, is minimum phase, and U is a product of
factors of the form
Prove also that both and its inverse are asymptotically stable.
8.24 Transfer-functions of the form (8.31) are known as all-pass. Show that
and that
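The defining property of an all-pass factor can be seen numerically. This check uses a sample first-order factor U(s) = (s - a)/(s + a) with a > 0 (the exact form assumed in (8.31) may differ): on the imaginary axis |U(jω)| = 1 at every frequency.

```python
import numpy as np

a = 3.0                          # sample right-half-plane zero location
w = np.logspace(-3, 3, 500)      # frequencies spanning six decades
U = (1j*w - a) / (1j*w + a)      # all-pass factor evaluated at s = jw
mag = np.abs(U)
print(mag.min(), mag.max())      # both equal 1 up to rounding
```

The magnitudes of numerator and denominator are identical for every ω (both equal sqrt(w² + a²)), so the factor reshapes only the phase.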
8.25 Show that for any rational transfer-function, G, the poles of the transfer-
function
lie on the mirror image of the poles of G with respect to the imaginary axis.
8.26 Show that any rational transfer-function without poles on the imaginary axis
can be factored as the sum , where and are asymptotically
stable. Hint: Expand in partial fractions and use P8.25.
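The hint can be carried out numerically for a sample G with no imaginary-axis poles. Here G(s) = 1/((s + 1)(s - 2)), which is not from the text; the partial-fraction terms are grouped by the sign of the real part of each pole:

```python
import numpy as np
from scipy import signal

num = [1.0]
den = np.polymul([1, 1], [1, -2])      # (s + 1)(s - 2)
r, p, k = signal.residue(num, den)     # residues, poles, direct term

# Terms with left-half-plane poles form the stable part; the rest form
# the part whose mirror image (as in P8.25) is stable.
stable = [(ri, pi) for ri, pi in zip(r, p) if pi.real < 0]
antistable = [(ri, pi) for ri, pi in zip(r, p) if pi.real > 0]
print(stable, antistable)
```

For this G the split is -1/3/(s + 1) plus 1/3/(s - 2); substituting s → -s in the second term lands its pole in the left-half plane, as the hint suggests.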
8.28 Let X and Y be asymptotically stable. Factor as in P8.23 and show that
if F is asymptotically stable.
8.29 Let and and use P8.28 to calculate (8.25) and (8.26) when
W and G are as in (8.22) and .
8.30 Assume that X and Y are asymptotically stable and that the only zero of Y on the
right-hand side of the complex plane is . Use the maximum modulus principle
(see P8.1) to show that
8.31 Let and and use P8.30 to verify the value of F given in
(8.27) when W and G are as in (8.22) and .
8.33 Consider the diagram in Fig. 8.21. Let X and Y be represented by the state-space
equations
(8.32)
8.34 Consider the same setup as in P8.33, but instead of making and
, let the relationship among signals , , and be described by the state-
space equations
(8.34)
where is the state estimation error vector given in (8.33). What conditions are
required in order for the system described by the state-space equations (8.32) to be
asymptotically stable? Is it necessary that X and Y be asymptotically stable as well?
Compare your answer with that for P8.33.
8.35 In P7.12, you designed an I controller for the simplified model of a rotating
machine driven by a belt without slip as in Fig. 2.18(a). Using the data and the
controller you designed in P7.12, plot the Bode diagram of the closed-loop sensitivity
transfer-function and comment on the relationship of this plot, the closed-loop poles,
and the stability margins you calculated in P7.12.
8.36 Repeat P8.35 for the PI controller you designed in P7.13. Compare your answer
with that for P8.35.
8.37 In P7.15, you designed a dynamic feedback controller for the simplified model of
the elevator in Fig. 2.18(b). Using the data and the controller you designed in P7.15,
plot the Bode diagram of the closed-loop sensitivity transfer-function and comment on
the relationship of this plot, the closed-loop poles, and the stability margins you
calculated in P7.15.
8.38 In P7.17, you designed a dynamic feedback controller for the simplified model of
the mass–spring–damper system in Fig. 2.19(b). Using the data and the controller you
designed in P7.17, plot the Bode diagram of the closed-loop sensitivity transfer-
function and comment on the relationship of this plot, the closed-loop poles, and the
stability margins you calculated in P7.17.
8.39 In P7.18, you designed a dynamic feedback controller for the simplified model of
the mass–spring–damper system in Fig. 2.20(b). Using the data and the controller you
designed in P7.18, plot the Bode diagram of the closed-loop sensitivity transfer-
function and comment on the relationship of this plot, the closed-loop poles, and the
stability margins you calculated in P7.18.
8.40 Repeat problem P8.39 for the controller you designed in P7.19. Compare your
answer with that for P8.39.
8.41 In P7.29, you designed a dynamic velocity feedback controller for the simplified
model of the rotor of the DC motor in Fig. 2.24. Using the data and the controller you
designed in P7.29, plot the Bode diagram of the closed-loop sensitivity transfer-
function and comment on the relationship of this plot, the closed-loop poles, and the
stability margins you calculated in P7.29.
8.42 Repeat P8.41 for the position controller you designed in P7.30.
8.43 Repeat P8.41 for the torque controller you designed in P7.31.
8.44 In P7.32, you designed a dynamic feedback controller to control the temperature
of the water in a water heater. Using the data and the controller you designed in P7.32,
plot the Bode diagram of the closed-loop sensitivity transfer-function and comment on
the relationship of this plot, the closed-loop poles, and the stability margins you
calculated in P7.32.
8.45 Repeat P8.44 for the controller you designed in P7.33. Compare your answer
with that for P8.44.
8.46 Repeat P8.44 for the controller you designed in P7.34. Compare your answer
with those for P8.44 and P8.45.
8.47 In P7.35, you designed a dynamic feedback controller for the simplified model of
a satellite orbiting earth as in Fig. 5.18. Using the data and the controller you designed
in P7.35, plot the Bode diagram of the closed-loop sensitivity transfer-function and
comment on the relationship of this plot, the closed-loop poles, and the stability
margins you calculated in P7.35.
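Problems P8.35 through P8.47 all follow the same recipe. A minimal sketch, with a placeholder plant and static gain standing in for the data and controllers from Chapter 7, forms the sensitivity S = 1/(1 + GK) and inspects its Bode magnitude; the peak of |S| is the quantity to relate to the closed-loop poles and stability margins:

```python
import numpy as np
from scipy import signal

G = signal.TransferFunction([1], [1, 2, 1])   # placeholder plant
k = 10.0                                      # placeholder static gain

# S = 1/(1 + k G) = den/(den + k num) for G = num/den.
S = signal.TransferFunction(
    G.den, np.polyadd(G.den, k * np.atleast_1d(G.num)))
w, mag_db, phase = signal.bode(S, n=500)      # Bode data for plotting
peak_db = mag_db.max()
print(peak_db)  # peak sensitivity in dB; a large peak means poor margins
```

The arrays w and mag_db are what one would pass to a semilog plot; small stability margins show up as a large sensitivity peak near the crossover frequency.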
Notes
1. Because of the triangle inequality , and hence .
2. The waterbed effect, as discussed in detail in [DFT09, Section 6.2], applies only to systems with non-minimum-phase zeros. A peaking phenomenon akin to the one displayed in Fig. 8.2 is also discussed in [DFT09, p. 98] in connection with a constraint on the bandwidth of the loop transfer-function.
4. With .
5. Because .
6. One that is minimal, that is, observable and controllable. See Section 5.3 and [Kai80].
7. This result is surprisingly hard to prove under the general framework we are working in.
9. Some nonlinear systems are asymptotically stable but not BIBO stable: is asymptotically stable at the origin when but unstable when [Son08].
10. The linear system G provides a Lyapunov function that can be used as in Theorem 6.1 of [Kha96].
11. One can only hope that this pendulum is not a rocket.
12. We will have more to say about such circles in Section 8.5.
13. See [GKC03] for many more connections between delay systems and robustness.
14. Structured uncertainty is the terminology used in the literature [DFT09].
17. With the condition relaxed to in the case of strictly proper transfer-functions [Kha96, Section 10.1].
20. Technically affine.
21. This is because the space of rational functions of a single complex variable is infinite-dimensional: a basis for the space of rational functions cannot be written using a finite number of rational functions, not even if the order is fixed. Contrast that with the space of polynomials of fixed degree, which can be written in terms of a finite number of monomials, e.g. 1, x, x², x³.
References
[AM05] Brian D. O. Anderson and John B. Moore. Optimal Filtering. New York, NY: Dover
Publications, 2005.
[ÅM08] Karl J. Åström and Richard M. Murray. Feedback Systems: An Introduction for
Scientists and Engineers. v2.10d. Princeton, NJ: Princeton University Press, 2008.
[BB91] Stephen P. Boyd and Craig H. Barratt. Linear Controller Design: Limits of
Performance. Englewood Cliffs, NJ: Prentice-Hall, 1991.
[BC14] James Brown and Ruel Churchill. Complex Variables and Applications. Ninth
Edition. New York, NY: McGraw-Hill, 2014.
[LeP10] Wilbur R. LePage. Complex Variables & Laplace Transform for Engineers. New
York, NY: Dover Publications, 2010.
[Lju99] Lennart Ljung. System Identification: Theory for the User. Second Edition.
Englewood Cliffs, NJ: Prentice-Hall, 1999.
[LMT07] Kent H. Lundberg, Haynes R. Miller, and David L. Trumper. “Initial Conditions,
Generalized Functions, and the Laplace Transform: Troubles at the Origin.” In IEEE
Control Systems Magazine 27(1) (2007), pp. 22–35.
[Neh52] Zeev Nehari. Conformal Mapping. New York, NY: McGraw-Hill, 1952.
[Olm61] John M. H. Olmsted. Advanced Calculus. New York, NY: Appleton–Century–
Crofts, 1961.
[PSL96] John J. Paserba, Juan J. Sanchez-Gasca, and Einar V. Larsen. “Control of Power
Transmission.” In The Control Handbook. Edited by William S. Levine. Boca Raton,
FL: CRC Press, 1996, pp. 1483–1495.
[PW34] Raymond E. A. C. Paley and Norbert Wiener. Fourier Transforms in the Complex
Domain. New York, NY: American Mathematical Society, 1934.
[Son08] Eduardo D. Sontag. “Input to State Stability: Basic Concepts and Results.” In
Nonlinear and Optimal Control Theory. Edited by P. Nistri and G. Stefani. Lecture
Notes in Mathematics. New York, NY: Springer, 2008, pp. 163–220.
[Tal07] Nassim Nicholas Taleb. The Black Swan: The Impact of the Highly Improbable.
New York, NY: Random House, 2007.
[Vid81] Mathukumalli Vidyasagar. Input–Output Analysis of Large-Scale Interconnected
Systems. Lecture Notes in Control and Information Sciences. New York, NY:
Springer, 1981.
[Vid93] Mathukumalli Vidyasagar. Nonlinear Systems Analysis. Englewood Cliffs, NJ:
Prentice-Hall, 1993.
[Wae91] Bartel L. van der Waerden. Algebra. Vol. I. New York, NY: Springer, 1991.
[WW06] Orville Wright and Wilbur Wright. “Flying Machine.” US Patent No. 821.383.
1906. URL: http://airandspace.si.edu/exhibitions/wright-
brothers/online/images/fly/1903_07_pdf_pat821393.pdf.
[ZDG96] Kemin Zhou, John C. Doyle, and Keith Glover. Robust and Optimal Control.
Englewood Cliffs, NJ: Prentice-Hall, 1996.
Index
Abel–Ruffini theorem, 63
amplifier, 132
analog computer, 133
analytic function, 48, 57, 62, 221, 231, 260
Cauchy–Riemann conditions, 57
entire, 58
meromorphic, 218
approximation, 4, 56, 141, 168, 203, 216
argument principle, 218, 223, 226, 229
for negatively oriented contour, 220
ballcock valve, 34
bandwidth, 242, 258, 259
BIBO, see bounded-input–bounded-output
block-diagram, 1, 17
all stabilizing controllers, 283
closed-loop control, 8, 91, 255
closed-loop with input disturbance, 12, 31
closed-loop with input disturbance and measurement noise, 105, 138
closed-loop with reference, input disturbance, measurement noise, and
feedback filter, 113
closed-loop with time-delay, 265
differential equation, 20
feedforward control, 278, 279
filter as feedback system, 282
filtering, 282
first-order differential equation, 127
integrator, 19
linear control of nonlinear systems, 153
open-loop control, 7
open-loop with input disturbance, 12
pendulum, 144
proportional control, 170
robust control, 262
robust tracking, 269
phase disturbance, 237
proportional–derivative control, 172
proportional–integral control, 99
proportional–integral–derivative control, 174
robust analysis, 262
second-order differential equation, 127
series connection, 91
system with input disturbance, 11
system with measured disturbance, 278
Bode diagram, 185, 201, 216, 232, 236, 255
asymptotes, 201, 203, 206, 232
first-order model, 203
magnitude, 201, 202
phase, 201, 202
poles and zeros at the origin, 208
rational transfer-function, 202
second-order model, 205, 207
Bode’s sensitivity integral, 258
bounded signals, 69, 75, 93, 114, 137, 264
damper, 153
damping, 35, 150, 167, 174, 189, 239, 270
overdamped, 167
ratio, see second-order model, 257
underdamped, 167, 191
dB, see decibel
DC gain, 106, 202
DC motor, 104, 144
dead-zone, 104
decibel, 186, 201, 203
derivative control, 165
determinant, 136
differential equations
homogeneous solution, 21
initial conditions, 21
linear, 22, 24, 56
nonlinear, 28, 126
ordinary, 17, 19, 22, 24, 29, 56
particular solution, 20
Runge–Kutta, 29
differentiator, 130, 171
realization, 130
digital, 6, 17
discrete-time model, 17
aliasing, 18
sampling, 17, 127
distribution, 51
disturbance, 11, 17, 105, 151, 174, 261, 278
input, 30
measured, 278
rejection, 30, 35, 280, 281
asymptotic, 35
dynamic model, 52
eigenfunction, 75
eigenvalue, 136, 142
eigenvector, 136
equilibrium, 142, 148, 150
closed-loop, 152
point, 141
trajectory, 142, 149
error signal, 7, 9, 24, 189, 255
with measurement noise, 105
estimation, 140
estimator, 282
examples
car steering, see car steering
cruise control, see cruise control
inverted pendulum, see pendulum in a cart
pendulum, see pendulum
pendulum in a cart, see pendulum in a cart
toilet water tank, see toilet water tank
experiments, 1, 6, 18, 19, 22, 23, 28, 47, 56, 74, 154
feedback, 5, 9, 20
controller, 7
in state-space, 138
feedforward, 255, 278–280, 282
filter, 113, 152, 279, 281
Kalman, 282, 283
filtering, 255, 282
first-order model
Bode diagram, 202
characteristic equation, 21
rise-time, 22
time-constant, 22
Fourier series, 54
Fourier transform, 47, 51, 54, 61, 62
frequency domain, 47, 49, 92
frequency response, 73, 74, 94, 201, 255
magnitude, 74
phase, 74
zeros
complex-conjugate, 115
rational function, 63