Dr. Dong-Jin Lim, Control Systems Engineering: Design and Implementation Using Arm Cortex-M Microcontrollers, Independently Published (2021)
Copying prohibited
All rights reserved. No part of this publication may be reproduced or transmitted in any
form or by any means, electronic or mechanical, including photocopying and recording, or
by any information storage or retrieval system, without the prior written permission of the
publisher.
Edition 1.0
2.4 Lab 2 45
2.4.1 A/D Conversion 45
2.4.2 Real-Time Simulation of First-Order Dynamic Systems 49
Appendix 442
Bibliography 444
Index 445
Preface
Control systems engineering is one of the most difficult topics in engineering because it in-
volves complex mathematical theories. Electrical and mechanical engineering departments
in universities worldwide offer several relevant courses. The learning curve is accelerated
by laboratory sessions. However, the relevant equipment and software tend to be expensive
and complex.
This book describes hands-on laboratory sessions using inexpensive equipment. The book
is for undergraduate engineering students; all required material is covered. The last section
of every chapter explains the laboratory session. The assignments in the laboratory sections
are related to the topics discussed in the chapters. The equipment used in the laboratory
assignments is inexpensive, readily available, and compact.
Today, almost all controllers are microcontrollers, of which the Arm Cortex-M 32-bit microcontroller is one of the most powerful. The price is falling rapidly, approaching that of 8-bit microcontrollers. Herein, we use the STM32F429 Discovery board manufactured by STMicroelectronics. The board is equipped with a very powerful microcontroller and is inexpensive. Throughout the book, all necessary code is provided and explained. The book includes the MATLAB source code for problems that require computer assistance. All
code can be downloaded from http://limdj.com. The book can serve as a textbook for a
course that features laboratory sessions. If students have a good mathematical background,
the material may be covered in a single semester. Industrial engineers who are interested in
control systems featuring Arm Cortex-M microcontrollers may find the book useful.
I hope the book assists students and engineers interested in control systems engineering.
Questions and suggestions can be sent to the email [email protected].
Dong-Jin Lim
http://limdj.com
1. The Concept of a Control System
A change in the input signal varies the output signal. Usually, the input and output sig-
nals are physical data such as position, velocity, or temperature. Sometimes, the signals are
abstract quantities. Almost anything can be a system. Consider a room with a heater. The
input is the heat generated; the output is the temperature.
As another example, consider an automotive engine. The power and the revolutions per
minute (rpm) change as the amount of vaporized gasoline varies. The power generated or
the rpm may be the output; the gasoline used is the input. Note that different variables may
serve as inputs or outputs for the same system. For example, the engine output may be the
rpm or the torque depending on the aim of control. The control system ensures that the
output obeys the input command. Thus, it is essential to determine what is to be controlled.
Sometimes, the system to be controlled is termed a plant. Figure 1.3 shows the concept of a
control system.
Above, the controller ensures that the output is as required. The control system is a com-
bination of a controller and what is controlled. A control systems engineer ensures that the
control system behaves as expected.
1.2 Feedback Control
The most crucial concept in control systems engineering is feedback. Not all control systems
feature feedback, but most do. First, consider the open-loop control system of Figure 1.4.
In an open-loop control system, the controller accepts a command and calculates the control
signal to the input. If the characteristics of the controlled system are completely understood
(there is no uncertainty), the system will work as desired. However, such an assumption is
impractical, as there are always uncertainties. The shortcomings of open-loop control can be addressed using feedback. Figure 1.5 shows a feedback control system.
In a system with feedback, the controller compares the actual output to the commanded
output. If an error is apparent, the controller seeks to steer the output in the direction of
error reduction. Such feedback control ensures that the output follows the input commands
even when uncertainties exist. Feedback is not confined to control systems. Figure 1.6 shows
an inverting operational (OP) amplifier circuit with feedback. We assume that the OP am-
plifier is ideal, thus exhibiting infinite gain.
Above, the resistor Rf serves as a feedback resistor ensuring that the OP amplifier output
will not become infinite. Feedback is widely used in electronic circuits. Indeed, we use
feedback in everyday life. For example, when driving on a straight road, we have to take
corrective action to maintain a straight line. Visual feedback is used to steer the automobile;
the driver serves as the feedback controller. Figure 1.7 shows this concept.
Chapter 1. The Concept of a Control System 11
As can be seen, the driver determines the direction of motion, detects what the automobile
is doing, and takes corrective action. Soon, many automobiles may be automated; feedback
will be received by a computer. Figure 1.8 shows this concept.
As can be seen, when implementing feedback control, we need to deliver an electrical output
signal. In most control systems, the initial output signal may not be electrical but, rather,
physical. The device used to convert a physical quantity to an electrical signal is termed a
sensor or transducer. The controller accepts an electrical signal from the sensor and com-
pares it with the command when calculating a control signal. The controller may be an
analog circuit or a digital computer. Most modern control systems feature inexpensive digi-
tal microcontrollers.
The following equation shows the relationship between the input signal u(t) and the output
signal y(t) in the form of a differential equation.
RC\,\frac{dy(t)}{dt} + y(t) = u(t) \qquad (1.1)
To find the output signal y(t) for a given input signal u(t) , we solve this equation; this yields
the output signal for a given input as a function of t. As we are dealing with time, we are
operating in the “time domain”. When we mathematically transform a function of time, we
obtain a different function with a different variable. Of the many mathematical transforms,
the Laplace Transform is the most frequently used in engineering. A one-to-one relation-
ship exists between the original and the transformed function. When we mathematically
transform a function of time, the new variable is usually a frequency variable. We thus enter
the frequency-domain; this yields valuable information lacking in the time function. Figure
1.12 shows the relationship between the time and frequency domains.
1.4 Lab 1
1.4.1 The Digital Control System
The digital control system features a microcontroller. Figure 1.13 shows the configuration
of a digital control system.
Before starting the first project, prepare a host computer and an STM32F429 Discov-
ery board. Here, we use a Microsoft Windows computer, but it is also possible to use a
Linux computer. First, download the STM32CubeIDE program from the STMicroelectronics
website. STM32CubeIDE is an integrated development environment for STMicroelectronics
STM32 Cortex-M microcontrollers. STM32CubeIDE can compile and download the pro-
gram. It is also capable of generating the source code required for setting up the hardware
registers. Usually, programmers must be familiar with the registers and write the routines
that set them up. However, STM32CubeIDE generates source code that sets up the registers
using a hardware abstraction layer (HAL) driver library. The library delivers API functions
to the programmers.
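As a small illustration (not a listing from the book), calling a HAL API function looks like the following; the handle name huart1 is assumed to be the one generated by STM32CubeIDE for USART1.

uint8_t msg[] = "Hello from HAL\r\n";                  /* data to transmit */
HAL_UART_Transmit(&huart1, msg, sizeof(msg) - 1, 100); /* send over USART1 with a 100 ms timeout */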
First, we print the “Hello World” message using a serial terminal program, many of
which are license-free. We employ SmarTTY from SysProgs. SmarTTY can be downloaded
from the SysProgs website. Before programming, connect the board to the host computer
via a USB cable. The STM32F429 Discovery board has two USB connectors, a mini-USB and
a micro-USB connector. The mini-USB connector supplies power to the board and connects
the ST-Link on-board debugger. We do not use the micro-USB connector. The board also
has a USB-to-serial function; it is possible to communicate with the host computer via a USB
connection. After connecting the mini-USB cable to the board and opening the SmarTTY
program, the following screen should appear. The COM port number may differ depending
on the computer’s configuration.
Double-click on the serial port icon, and the following screen should appear. The computer
is now ready to communicate with the Discovery board via the serial port.
To start a new STM32 project, pull down the File menu and select New. It is also possible to
commence by pressing the ”Start new STM32 project” button on the welcome screen. Then,
the following board-selection screen appears.
Select the STM32F429IDISCOVERY board and press NEXT. The following project setup
screen appears:
Type the name of the project in the Project Name field and click Finish. Memorize or record
the location of the project files, or change the location to a folder of choice. It is essential to
know where the project files are saved. Then, when the following dialog window on project
options appears, click Yes.
The following figure shows the project window used to configure the project. On the left,
note the project in Project Explorer.
As initialization employed the default mode, there is no need to change the configuration.
However, it is possible to check if all required peripherals are appropriately configured.
When the Connectivity button is clicked, a list of all communication peripherals is shown,
as seen in the following figure. The USART1 is the serial port used to communicate with the
host computer. A click on the USART1 button reveals the port configuration:
Before generating code, FREERTOS must be disabled, as shown in the following figure.
Although all projects in this book can be performed using FREERTOS, for simplicity, we
decided not to use FREERTOS.
Now, source code can be generated by selecting Generate Code from the Project menu,
as shown in the following figure:
After generating code, source files can be opened by double-clicking. The following
figure shows the screen that appears when the main source file is opened.
Find the sections running from /* USER CODE BEGIN . . . */ to /* USER CODE END . . .
*/. If the source code is to be modified or code is to be inserted, do so between these lines.
If the code is regenerated, anything written outside these sections is erased. Always remember to insert code as
described above. When developing embedded programs, the printf function is very useful.
In an embedded programming environment, however, a low-level output implementation must be provided for printf to produce any output. Type the following code into the main source file. Remember to insert
the code between /* USER CODE BEGIN . . . */ and /* USER CODE END . . . */. Below,
Code 1.2 is the implementation function for printf . Note that the printf call is made after
all initialization functions of the main program are complete. Always remember to call the HAL library API functions only after this initialization is complete.
Code 1.1
Code 1.2
Code 1.3
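Code 1.1 through Code 1.3 are not reproduced here. As a rough sketch only, a common way to retarget printf with the HAL UART driver is shown below; the handle name huart1 and the placement of the user-code sections are assumptions, and the book's actual listings may differ.

/* USER CODE BEGIN Includes */
#include <stdio.h>
/* USER CODE END Includes */

/* USER CODE BEGIN 4 */
/* Retarget the C library output (newlib _write syscall) to USART1. */
int _write(int file, char *ptr, int len)
{
    (void)file;
    HAL_UART_Transmit(&huart1, (uint8_t *)ptr, (uint16_t)len, 100);
    return len;
}
/* USER CODE END 4 */

/* USER CODE BEGIN 2 */
/* Called after all peripheral initialization in main(). */
printf("Hello World\r\n");
/* USER CODE END 2 */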
Build the project by selecting Build Project from the Project menu, as shown in the following
figure:
Wait until the build is finished and make sure that no error is shown in the Console window,
as in the following figure:
After a successful build, the program is ready to run on the Discovery board. By selecting
Debug from the Run menu, as shown in the following figure, it is possible to download and
run the program.
Wait for downloading to finish. At the bottom of the window, a status bar shows progress.
Await the message “Download verified successfully” in the Console window before contin-
uing. After a successful download, the following debugging window appears:
At this point, the program is not yet running. To start the program, click the green arrow (Resume) button on the debugging menu bar. The arrow button becomes gray and the
program runs, as shown in the following figure. To stop the program and exit debugging
mode, press the square red stop button.
The following final screen indicates that the first project is successful.
The D/A converter has 12-bit resolution. With a 3 V reference, the smallest output voltage step is

\frac{3}{2^{12}} = 0.0007324\ \text{V} \qquad (1.2)
In hexadecimal, the values written to the D/A converter range from 0x000 to 0xFFF (0 to 4095 in decimal). When 0 is written to the D/A converter data register, the output voltage is 0 V. When the maximum value, 0xFFF in hexadecimal (4095 in decimal), is written to the D/A converter register, the output voltage is as follows:

4095 \times \frac{3}{2^{12}} = 4095 \times \frac{3}{4096} = 2.99927\ \text{V} \qquad (1.3)
In this project, we use timer TIM10 to generate an interrupt every 1 msec and update the D/A converter output at each timer interrupt so that we can check that the interval is accurate.
Start a new project, as explained above, and name the project Timer10. As can be seen
in the following figure, we have two projects in one workspace. When you have more than
one project in one workspace, you have to be careful in selecting the right project. You can
select the project by clicking the project name. In the following figure, you can see that the
Timer10 project is selected since the background color of the project name is changed.
Before configuring the timer, the clock circuit has to be configured. Clicking the Clock
Configuration tab brings up the following clock configuration window. Change the default
value to 168 in HCLK (MHz), as shown in the following figure.
After the clock circuit is set, select the Pinout & Configuration tab. Then, click the Timers button to list the timers and select TIM10. Check the Activated box and type 839 in Prescaler and 199 in Counter Period, as shown in the following figure. Since TIM10 uses the 168 MHz APB2 timer clock, the following relationship shows that we have a 1 msec interval.

\frac{168\ \text{MHz}}{840 \times 200} = 1000\ \text{Hz} \qquad (1.4)
Since the counter starts at 0, you have to type in 839 instead of 840 and 199 instead of 200.
Then, select the NVIC Settings tab for the timer and check the Enabled box to enable the
timer interrupt.
The next device to set up is the D/A converter. Click the Analog button to list the ADCs and DACs. Since DAC OUT1 is not available due to a pin conflict, you have to select OUT2, as shown in the following figure. You can see that the PA5 pin is used for the D/A converter output.
Before generating code, do not forget to disable FREERTOS from Middleware selection.
After the code generation, open the main source code file and type in Code 1.4 and Code 1.5.
Insert Code 1.4 right after the initialization of the peripherals. Code 1.4 starts the timer and the D/A converter.
Code 1.5 goes inside the function HAL_TIM_PeriodElapsedCallback near the bottom of the main source file. HAL_TIM_PeriodElapsedCallback is a callback function that is called when the processor detects a periodic interrupt signal from one of the timers. In Code 1.5, when the interrupt signal from timer TIM10 is detected, the processor outputs the maximum D/A value, which is 3 V. After a brief delay, the processor outputs
the minimum D/A value, which is 0V. The following figure shows the waveform from the
D/A output port measured using an oscilloscope. As you can see, the maximum value is 3V,
and the period of the signal is 1 msec.
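Code 1.4 and Code 1.5 themselves are not reproduced here. The following is a minimal sketch of what they might look like, assuming the CubeIDE-generated handle names htim10 and hdac; the book's actual listings may differ.

/* Code 1.4 (sketch): start the timer interrupt and the D/A converter,
   inserted right after the peripheral initialization in main(). */
HAL_TIM_Base_Start_IT(&htim10);
HAL_DAC_Start(&hdac, DAC_CHANNEL_2);

/* Code 1.5 (sketch): timer interrupt callback, near the bottom of main.c. */
void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
    if (htim->Instance == TIM10)
    {
        /* output the maximum D/A value (about 3 V) ... */
        HAL_DAC_SetValue(&hdac, DAC_CHANNEL_2, DAC_ALIGN_12B_R, 0xFFF);
        for (volatile int i = 0; i < 100; i++) ;   /* brief delay (assumed) */
        /* ... then the minimum D/A value (0 V) */
        HAL_DAC_SetValue(&hdac, DAC_CHANNEL_2, DAC_ALIGN_12B_R, 0x000);
    }
}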
Exercise 1.1 Modify the above program so that the output signal from the D/A con-
verter is a sawtooth wave with a period of 100 msec, as shown in the following figure.
The maximum and minimum of the wave signal are 3V and 0V, respectively.
2. Differential Equations and Laplace Transforms

2.1 Differential Equations

Consider the following simple differential equation.
ẋ = −x (2.3)
Note that the derivative of the variable in the above equation has to take the same form as the
variable itself. For example, if we assume x = t 2 , then ẋ = 2t . This function cannot satisfy
the equation. Also, if we assume x = cos t, then ẋ = − sin t . Then, again, this function cannot
satisfy the equation. The function whose derivative has the same form as the function is an
exponential function. In other words, if we assume x = Keλt ( K and λ are constants), then
ẋ = Kλeλt . Therefore, the derivative ẋ has the term eλt . Plugging these terms into Eq. (2.3)
gives us the following equation:
K (λ + 1) eλt = 0 (2.5)
For the above equation to hold, we must have either K = 0 or λ = −1 . When K = 0 , the
solution is trivial and meaningless, since x ≡ 0 . If we exclude the trivial solution, λ has to
be −1. Therefore, the solution of Eq. (2.3) is as follows.

x(t) = Ke^{−t} \qquad (2.6)
In the above equation, the value of K can be determined, when the initial value of x(t) is
given, i.e. x(0) . For example, if x(0) = 1, we can determine K = 1 from the above equation.
Next, try to solve the differential equation (2.2), when the right-hand side is 1, as shown
in the following equation.
ẋ + x = 1 (2.7)
We expect that the solution of the above equation also has the form of an exponential function. However, since we have a non-zero constant on the right-hand side, let us assume that the solution has the form of the following equation.

x(t) = K_1 e^{λt} + K_2 \qquad (2.8)

Plugging this assumed solution into Eq. (2.7) gives the following equation.

K_1(λ + 1)e^{λt} + K_2 = 1 \qquad (2.9)

K_2 in Eq. (2.9) has to be equal to 1. If not, Eq. (2.9) does not hold for all t. Therefore, the above equation can be changed to the following equation.

K_1(λ + 1)e^{λt} = 0 \qquad (2.10)
We have the following solution of the differential equation since we can obtain λ = −1 from
the above equation.
x(t) = K1 e−t + 1 (2.11)
K1 in the above equation can be determined from the initial condition. For example, if we
assume x(0) = 0, we have the following relationship.
x(0) = K1 + 1 = 0 (2.12)
From the above relationship, we have K1 = −1, which determines the unique solution of the
differential equation. Note that the solution has two terms. The first term is an exponential
function, and the second term is a constant. We call the first term a homogeneous solution,
and the second term a particular solution. Generally, the homogeneous solution has a form
of exponential functions. The particular solutions depend on the right-hand side of the
differential equations. In our above example, the particular solution is a constant, since the
right-hand side of the differential equation is a constant.
Next, consider the following second-order differential equation.

ẍ + 3ẋ + 2x = 0 \qquad (2.13)
If we assume the solution is of the form x = Keλt , the first and second derivative are ẋ = Kλeλt
and ẍ = Kλ2 eλt , respectively. If we plug these terms into the differential equation, we obtain
the following relationship,
ẍ + 3ẋ + 2x = Kλ^2 e^{λt} + 3Kλe^{λt} + 2Ke^{λt} = K(λ^2 + 3λ + 2)e^{λt} = 0 \qquad (2.14)
λ2 + 3λ + 2 = (λ + 1)(λ + 2) = 0 (2.15)
Solving the above algebraic equation gives us λ = −1, −2. Then, we can assume that the solution is of the following form.

x(t) = K_1 e^{−t} + K_2 e^{−2t} \qquad (2.16)
When we solve a differential equation, we need to solve an algebraic equation, such as Eq.
(2.15), which is called a characteristic equation. To determine the unique solution of a dif-
ferential equation containing second derivative terms, we need two initial conditions, x(0)
and ẋ(0). For example, if we assume x(0) = 1 and ẋ(0) = 0 we can obtain the following
relationship.
x(0) = K1 + K2 = 1 (2.17)
ẋ(0) = −K1 − 2K2 = 0 (2.18)
Solving the above simultaneous algebraic equations gives us K_1 = 2 and K_2 = −1. Then, we obtain the following unique solution of the differential equation.

x(t) = 2e^{−t} − e^{−2t} \qquad (2.19)

Next, let us solve the following differential equation.
ẍ + 3ẋ + 2x = 1 (2.20)
Since the left-hand side of the above equation is the same as the previous example, we can
expect that the solution has the same exponential terms. Since the equations have a constant
on the right-hand side, we assume the solution is of the following form.

x(t) = K_1 e^{−t} + K_2 e^{−2t} + K_3 \qquad (2.21)

The value K_3 in the above equation has to be 0.5. If we assume the initial conditions x(0) = 0 and ẋ(0) = 0, we obtain the following equations.

x(0) = K_1 + K_2 + 0.5 = 0 \qquad (2.22)

ẋ(0) = −K_1 − 2K_2 = 0 \qquad (2.23)

Solving these equations gives K_1 = −1 and K_2 = 0.5, and therefore

x(t) = −e^{−t} + 0.5e^{−2t} + 0.5
As in the first-order differential equation, the exponential terms are homogeneous solutions,
and the constant term is a particular solution.
In the above example, the solutions of the characteristic equation are real numbers. The next
example is the case where the solutions of the characteristic equation are complex numbers.
Consider the following differential equation.
ẍ + 2ẋ + 5x = 0 (2.26)
For the above differential equation, we have the following characteristic equation.
λ2 + 2λ + 5 = 0 (2.27)
Solving the above algebraic equation gives us the following complex number solutions.
λ = −1 ± j2 (2.28)
Then, we can assume that the solution is of the following form.

x(t) = K_1 e^{(−1+j2)t} + K_2 e^{(−1−j2)t} \qquad (2.29)

Without going into detail, the above equation can be changed to the following equation. Note that K_1 and K_2 are complex conjugate numbers.

x(t) = K_1 e^{(−1+j2)t} + K_1^{*} e^{(−1−j2)t} = 2\,\mathrm{Re}\left\{K_1 e^{(−1+j2)t}\right\} \qquad (2.30)
If we assume the initial conditions x(0) = 1 and ẋ(0) = 0, we obtain the following equations.

x(0) = K_1 + K_1^{*} = 1

ẋ(0) = (−1 + j2)K_1 + (−1 − j2)K_1^{*} = 0

Solving these gives K_1 = 0.5 − j0.25, and therefore x(t) = e^{−t}(\cos 2t + 0.5\sin 2t). Except for the complex number calculations, the procedure is similar to the case where the characteristic equation has real solutions.
In summary, the solutions of differential equations have a homogeneous solution and a
particular solution. The homogeneous solution has forms of exponential functions, while
a particular solution depends on the right-hand side of the differential equation. In the
above examples, we consider simple cases where we have a constant term on the right-hand
side of the differential equations. For more complicated cases, refer to books on differential
equations.
2.2 The Relationship between Differential Equations and Control Systems
We use a linear differential equation with constant coefficients to describe the dynamic char-
acteristics of linear time-invariant systems. Almost all the systems discussed in this book are
linear time-invariant systems. Also, remember that frequency-domain analysis is applied to linear time-invariant systems.

Consider the room heating example from Chapter 1. The room temperature system has dynamic characteristics. If we describe the dynamic behavior of the room temperature system approximately, we obtain the following differential equa-
tion.
τ ẋ + x = u (2.38)
In the above equation, τ is a constant related to the response speed. If we have a large τ , the
temperature rises slowly. If we solve the above differential equation with an initial condition
x(0) = 10 , and the input u = 20, we have the following solution. Figure 2.1 is the graph of
the solution.
x(t) = 20 − 10e^{−t/τ}, \quad t ≥ 0 \qquad (2.39)
From the above simple example, we can see how the differential equation enables us to ex-
pect the response of the system for a given input.
As a more complicated example, consider a system shown in Figure 2.2, where u is the
input variable, and x is the output variable.
The system in Figure 2.2 is a dynamic system whose input and output variables satisfy the
following differential equation.
ẍ + 3ẋ + 2x = u (2.40)
Assume the initial conditions x(0) = 0 and ẋ(0) = 0. We want to know how the system
responds when a constant input value of 1 is applied to the system at t = 0. If we solve the
following differential equation with the given initial conditions, we can see how the output
variable changes in time.
ẍ + 3ẋ + 2x = 1 (2.41)
The following is the solution of the above differential equation.

x(t) = 0.5 − e^{−t} + 0.5e^{−2t} \qquad (2.42)
From the above equation, we can see that the output approaches 0.5 exponentially. If we
have a different input applied, we can predict how the system responds by solving the dif-
ferential equation for the given input.
2.3 Laplace Transform

The Laplace transform of a function of time f(t) is defined by the following integral.

F(s) = \mathcal{L}[f(t)] = \int_{-\infty}^{\infty} f(t)e^{-st}\,dt \qquad (2.43)

Since the limits of the above integration are infinite, we need the condition for convergence given below. For the Laplace transform to exist, the function f(t) has to satisfy the following inequality for a certain real number σ.

\int_{-\infty}^{\infty} \left|f(t)\right| e^{-σt}\,dt < \infty \qquad (2.45)
As a result of the Laplace transform of f (t) that is a function of time, we obtain a new
function F(s) that is a function of a new variable s. The new variable s has a meaning of
frequency, which will be explained later. For this reason, the Laplace transform method is
called the frequency-domain method.
The Laplace transform defined in Eq. (2.43) is called the two-sided Laplace transform,
where both limits of the integration are infinite. If we change the lower limit of the inte-
gration to 0− , we have the following definition of the Laplace transform, which is called a
one-sided Laplace transform.
F(s) = \int_{0^-}^{\infty} f(t)e^{-st}\,dt \qquad (2.46)
In the above definition, the meaning of 0− is the limit value at zero approaching from the
negative side. The reason for using 0− instead of 0 in the above definition is that we should
be able to define the Laplace transform of functions that are not continuous at t = 0, such as
an impulse function δ(t).
In control systems engineering, we can assume that all the functions are zero for t < 0
without loss of generality. Therefore, in this book, we use only the one-sided Laplace trans-
form. Also, we assume all the functions in the following examples are zero for t < 0.
The Laplace transform of the unit step function can be obtained as follows:
\mathcal{L}[u_s(t)] = \int_{0^-}^{\infty} u_s(t)e^{-st}\,dt = \int_{0^-}^{\infty} e^{-st}\,dt = \left[-\frac{1}{s}e^{-st}\right]_{0^-}^{\infty} = \frac{1}{s} \qquad (2.48)
The Laplace transform of an exponential function can be obtained in a similar way:

\mathcal{L}\left[e^{at}\right] = \frac{1}{s - a} \qquad (2.52)

In the above equation, a is a constant. For example, if a = −2, the Laplace transform is as follows:

\mathcal{L}\left[e^{-2t}\right] = \frac{1}{s+2} \qquad (2.53)
Example 2.4  The Laplace transform of the first-order time function t can be obtained as follows.

\mathcal{L}[t] = \int_{0^-}^{\infty} te^{-st}\,dt = \left.-\frac{1}{s}te^{-st}\right|_{t=0^-}^{t=\infty} - \int_{0^-}^{\infty}\left(-\frac{1}{s}\right)e^{-st}\,dt = \left.-\frac{1}{s}te^{-st}\right|_{t=0^-}^{t=\infty} - \left.\frac{1}{s^2}e^{-st}\right|_{t=0^-}^{t=\infty} = \frac{1}{s^2} \qquad (2.54)

In the above derivation, integration by parts is used.
Note: Derivation of the formula for integration by parts is shown in the following. Let us start with the derivative of the product of two functions.

(f(t) \cdot g(t))' = f'(t) \cdot g(t) + f(t) \cdot g'(t) \qquad (2.55)

Take definite integrals of both sides of the above equation and obtain the following relationship.

\int_a^b (f(t) \cdot g(t))'\,dt = \int_a^b f'(t) \cdot g(t)\,dt + \int_a^b f(t) \cdot g'(t)\,dt \qquad (2.56)

Rearranging the terms of the above equation, we obtain the following relationships.

\int_a^b f'(t) \cdot g(t)\,dt = \int_a^b (f(t) \cdot g(t))'\,dt - \int_a^b f(t) \cdot g'(t)\,dt \qquad (2.57)

\int_a^b f'(t) \cdot g(t)\,dt = (f(t) \cdot g(t))\Big|_{t=a}^{t=b} - \int_a^b f(t) \cdot g'(t)\,dt \qquad (2.58)
Example 2.5  In this example, let us find the Laplace transform of a cosine function. Using Euler's relationship, we obtain the following relationship.

\cos ωt = \frac{e^{jωt} + e^{-jωt}}{2} \qquad (2.59)

By taking the Laplace transform of the above equation, we can obtain the following relationships.

\mathcal{L}[\cos ωt] = \frac{1}{2}\mathcal{L}\left[e^{jωt}\right] + \frac{1}{2}\mathcal{L}\left[e^{-jωt}\right] = \frac{1}{2}\,\frac{1}{s - jω} + \frac{1}{2}\,\frac{1}{s + jω} = \frac{s}{s^2 + ω^2} \qquad (2.60)
If the frequency of the cosine function is 100 Hz, the Laplace transform is as follows.
Note ω = 2π × 100 = 200π.
\mathcal{L}[\cos 200πt] = \frac{s}{s^2 + (200π)^2} \qquad (2.61)
In the above examples, we assume that the integrals do not diverge. For each case, there is a region of s where the integral converges. We call this region the region of convergence.
Linearity
The Laplace transform satisfies the following linearity properties.

\mathcal{L}[af(t)] = a\mathcal{L}[f(t)] = aF(s) \qquad (2.62)

\mathcal{L}[f_1(t) + f_2(t)] = \mathcal{L}[f_1(t)] + \mathcal{L}[f_2(t)] = F_1(s) + F_2(s) \qquad (2.63)
Shift in Frequency
If we take the Laplace transform of a function multiplied by an exponential function, we ob-
tain the following relationship. In the following relationship, F(s) is the Laplace transform
of f(t).

\mathcal{L}\left[e^{-at}f(t)\right] = F(s + a) \qquad (2.64)
Example 2.6  In this example, we find the Laplace transform of a cosine function multiplied by an exponential function. Note that the shift-in-frequency property is used in this example.

\mathcal{L}\left[e^{-2t}\cos(200πt)\right] = \frac{s + 2}{(s + 2)^2 + (200π)^2} \qquad (2.65)
Shift in Time
The following relationship shows the Laplace transform of a function shifted in time by T. In the following relationship, F(s) is the Laplace transform of f(t).

\mathcal{L}[f(t - T)] = e^{-sT}F(s) \qquad (2.66)
Differentiation
The following shows the Laplace transform of a derivative. In the following relationship,
F(s) is the Laplace transform of f(t).

\mathcal{L}\left[\frac{df(t)}{dt}\right] = sF(s) - f(0) \qquad (2.67)
More generally, the Laplace transform of the nth derivative is as follows.

\mathcal{L}\left[\frac{d^n f(t)}{dt^n}\right] = s^n F(s) - s^{n-1}f(0) - s^{n-2}f^{(1)}(0) - \cdots - f^{(n-1)}(0) \qquad (2.68)

In the above relationship, f^{(i)}(t) is the ith derivative of f(t), and f^{(i)}(0) is the value of f^{(i)}(t) at t = 0.
Integration
The following is the Laplace transform of an integral. In the following relationship, F(s) is
the Laplace transform of f(t).

\mathcal{L}\left[\int_0^t f(τ)\,dτ\right] = \frac{F(s)}{s} \qquad (2.69)
Initial-value theorem
\lim_{t \to 0} f(t) = \lim_{s \to \infty} sF(s) \qquad (2.70)

F(s) is the Laplace transform of f(t). The above relationship is valid only when the limit of sF(s) as s → ∞ exists.
Final-value theorem
\lim_{t \to \infty} f(t) = \lim_{s \to 0} sF(s) \qquad (2.71)
F(s) is the Laplace transform of f (t). The above relationship is valid only when all the poles
of sF(s) are in the left half-plane.
Example 2.7  This is an example of the final value theorem. Consider the following function.

f(t) = \frac{1}{2} - \frac{1}{2}e^{-2t} \qquad (2.72)

The Laplace transform of the above function is as follows:

\mathcal{L}[f(t)] = F(s) = \frac{1}{2}\left(\frac{1}{s} - \frac{1}{s+2}\right) = \frac{1}{s(s+2)} \qquad (2.73)

We obtain the following relationship from the application of the final value theorem.

\lim_{t \to \infty} f(t) = \lim_{s \to 0} sF(s) = \lim_{s \to 0} \frac{1}{s+2} = \frac{1}{2} \qquad (2.74)

The above relationship shows that the final value theorem is valid for the above function.
Convolution
The following is the definition of the convolution of two functions, f1 (t) and f2 (t). The nota-
tion of convolution is ∗. In the definition, we assume that the functions are zero for t < 0.
f_1(t) * f_2(t) = \int_0^t f_1(τ)f_2(t-τ)\,dτ = \int_0^t f_1(t-τ)f_2(τ)\,dτ \qquad (2.75)
The following relationship shows the Laplace transform of a convolution of two functions.
In this relationship, the Laplace transforms of f1 (t) and f2 (t) are F1 (s) and F2 (s), respectively.
"Z t # "Z t #
L [f1 (t) ∗ f2 (t)] = L f1 (τ)f2 (t − τ)dτ = L f1 (t − τ)f2 (τ)dτ = F1 (s)F2 (s) (2.79)
0 0
From the above relationship, we can see that Laplace transform turns convolution into mul-
tiplication. This relationship is very useful in control systems engineering. Additionally, we have the following relationship the other way around.

\mathcal{L}[f_1(t)f_2(t)] = \frac{1}{2πj}\,F_1(s) * F_2(s) \qquad (2.80)
From the above relationship, we can see that the Laplace transform turns multiplication into
convolution.
Example 2.8  As an example, consider the convolution of the exponential function e^{-at} and the unit step u_s(t). The following are the Laplace transforms of these two functions.

\mathcal{L}\left[e^{-at}\right] = \frac{1}{s+a} \qquad (2.81)

\mathcal{L}[u_s(t)] = \frac{1}{s} \qquad (2.82)

The convolution of the two functions is

e^{-at} * u_s(t) = \int_0^t e^{-aτ}\,dτ = -\frac{1}{a}e^{-at} + \frac{1}{a}

If we take the Laplace transform of this convolution, the result is the following, which is the same as the product of the two Laplace transforms above.

\mathcal{L}\left[-\frac{1}{a}e^{-at} + \frac{1}{a}\right] = \frac{1}{a}\left(-\frac{1}{s+a} + \frac{1}{s}\right) = \frac{1}{s(s+a)} \qquad (2.83)
As an example of finding the inverse Laplace transform using partial fraction expansion, consider the following function.

G(s) = \frac{2}{s^2 + 5s + 6} \qquad (2.85)

We obtain the following relationship by factorizing the denominator.

G(s) = \frac{2}{(s+2)(s+3)} = \frac{K_1}{s+2} + \frac{K_2}{s+3} \qquad (2.86)

Combining the two terms on the right-hand side over a common denominator gives

G(s) = \frac{2}{(s+2)(s+3)} = \frac{(K_1 + K_2)s + 3K_1 + 2K_2}{(s+2)(s+3)} \qquad (2.87)

Comparing the coefficients of the numerators, we obtain

K_1 = 2, \quad K_2 = -2 \qquad (2.89)

g(t) = \mathcal{L}^{-1}[G(s)] = \mathcal{L}^{-1}\left[\frac{2}{s+2} - \frac{2}{s+3}\right] = \mathcal{L}^{-1}\left[\frac{2}{s+2}\right] - \mathcal{L}^{-1}\left[\frac{2}{s+3}\right] = 2e^{-2t} - 2e^{-3t} \qquad (2.90)

Note that linearity and Eq. (2.52) are used in the above process.
Consider a rational function whose denominator and numerator are polynomial func-
tions as follows:
G(s) = \frac{N(s)}{D(s)} = \frac{b_m s^m + b_{m-1}s^{m-1} + \cdots + b_1 s + b_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0} \qquad (2.91)
Without loss of generality, we can assume n > m in the above equation. If m ≥ n, we can
divide the numerator by the denominator and apply the partial fraction expansion to the
remainder. Consider the following example.
G(s) = \frac{s^2 + 3s + 4}{s^2 + 3s + 2} = 1 + \frac{2}{s^2 + 3s + 2} \qquad (2.92)
After the division in the above equation, we obtain the quotient of 1 and the remainder
function. Since we know the inverse Laplace transform of 1 is δ(t), we need to do partial
fraction expansion for the remainder function.
When we do factorization of the denominator, we have two cases depending on the types
of factors, as shown below.
G(s) = \frac{N(s)}{(s+p_1)(s+p_2)\cdots(s+p_n)} = \frac{K_1}{s+p_1} + \frac{K_2}{s+p_2} + \cdots + \frac{K_n}{s+p_n} \qquad (2.93)
Multiplying the above equation by (s + pi ) gives us the following relationship. From this
relationship, we can find the value of Ki .
G(s)(s + p_i) = \frac{K_1(s + p_i)}{s + p_1} + \frac{K_2(s + p_i)}{s + p_2} + \cdots + K_i + \cdots + \frac{K_n(s + p_i)}{s + p_n} \qquad (2.94)

If we plug s = −p_i into the right-hand side of the above equation, all the terms except K_i become zero. Therefore, we have the following relationship.

K_i = \left. G(s)(s + p_i)\right|_{s = -p_i} \qquad (2.95)
Example 2.9  In the above, when we found the partial fraction expansion of Eq. (2.85), we used the coefficient comparison method. In this example, we find the same K_i's using Eq. (2.95), as follows.

K_1 = \left.(s+2)\,\frac{2}{(s+2)(s+3)}\right|_{s=-2} = 2 \qquad (2.97)

K_2 = \left.(s+3)\,\frac{2}{(s+2)(s+3)}\right|_{s=-3} = -2 \qquad (2.98)
MATLAB The coefficients in this example can be found using the residue function in
MATLAB, as follows:
num = 2;
den = [1 5 6];
[r,p,k] = residue(num,den)
In the above MATLAB code, r contains the coefficients of the partial fraction expansion, p contains the roots of the denominator, and k is the quotient of the division. If the order of the denominator is higher than that of the numerator, k is empty.
As another example, consider a function whose denominator has complex roots.

G(s) = \frac{2}{s^2 + 2s + 5} = \frac{2}{(s+1+j2)(s+1-j2)} = \frac{K_1}{s+1+j2} + \frac{K_2}{s+1-j2} \qquad (2.99)

Using the same relationship as in the case of real roots, we have the following values for the coefficients.

K_1 = \left.(s+1+j2)\,\frac{2}{(s+1+j2)(s+1-j2)}\right|_{s=-1-j2} = \frac{j}{2} \qquad (2.100)

K_2 = \left.(s+1-j2)\,\frac{2}{(s+1+j2)(s+1-j2)}\right|_{s=-1+j2} = -\frac{j}{2} \qquad (2.101)

Therefore, we have the partial fraction expansion of the given function, as follows.

G(s) = \frac{1}{2}\left(\frac{j}{s+1+j2} + \frac{-j}{s+1-j2}\right) \qquad (2.102)

Taking the inverse Laplace transform of the above function gives us the following result.

g(t) = \frac{1}{2}\left[je^{-(1+j2)t} - je^{-(1-j2)t}\right] = e^{-t}\,\frac{e^{j2t} - e^{-j2t}}{2j} = e^{-t}\sin 2t \qquad (2.103)

Note that we used Euler's relationship in the above relationship.
Alternatively, the following shows the same result using a different method. First, modify the given function, as shown below.
G(s) = \frac{2}{s^2 + 2s + 5} = \frac{2}{(s+1)^2 + 2^2} \qquad (2.104)
The above equation is the same as the function given below, except that s is replaced
by s + 1.
G_1(s) = \frac{2}{s^2 + 2^2} \qquad (2.105)
Using the Laplace transform table, we can see that the inverse Laplace transform of
G1 (s) is sin 2t. If we use the relationship Eq. (2.64), we can find that the inverse Laplace
transform of G(s) is e−t sin 2t.
Next, consider the case where the denominator has repeated roots, as in the following function.

G(s) = \frac{s^2}{(s+2)^3} = \frac{K_1}{(s+2)^3} + \frac{K_2}{(s+2)^2} + \frac{K_3}{s+2} \qquad (2.106)
The denominator of the above function has a triple root at −2. Multiplying the above equa-
tion by (s + 2)3 and plugging in s = −2 enables us to find the value of K1 , as shown below.
\left. G(s)(s+2)^3 \right|_{s=-2} = \left.\left[K_3(s+2)^2 + K_2(s+2) + K_1\right]\right|_{s=-2} = K_1 \qquad (2.107)

If we differentiate G(s)(s+2)^3 with respect to s, K_1 is removed. Then, plugging in s = −2 enables us to find the value of K_2, as shown below.

\left.\frac{d\left[G(s)(s+2)^3\right]}{ds}\right|_{s=-2} = \left.\left[2K_3(s+2) + K_2\right]\right|_{s=-2} = K_2 \qquad (2.108)
Again, if we differentiate G(s)(s+2)^3 with respect to s twice, K_2 is removed. Then, plugging in s = −2 enables us to find the value of K_3, as shown below.

\left.\frac{d^2\left[G(s)(s+2)^3\right]}{ds^2}\right|_{s=-2} = 2K_3 \qquad (2.109)
Using the above coefficients, we can find the partial fraction expansion, as follows.

G(s) = \frac{4}{(s+2)^3} - \frac{4}{(s+2)^2} + \frac{1}{s+2} \qquad (2.110)

If we take the inverse Laplace transform of the terms on the right-hand side, we have the inverse Laplace transform of the given function.

g(t) = e^{-2t}\left(2t^2 - 4t + 1\right) \qquad (2.111)

Note that the Laplace transform of t is 1/s^2 and the Laplace transform of t^2 is 2/s^3. Also, since we used Eq. (2.64), we have the term e^{-2t} on the right-hand side of the above equation.
The following is the more general case for multiple roots.

G(s) = \frac{N(s)}{(s+p_1)(s+p_2)\cdots(s+p_{i-1})(s+p_i)^r} = \frac{K_1}{s+p_1} + \frac{K_2}{s+p_2} + \cdots + \frac{K_{i-1}}{s+p_{i-1}} + \frac{K_{i1}}{(s+p_i)^r} + \frac{K_{i2}}{(s+p_i)^{r-1}} + \cdots + \frac{K_{ir}}{(s+p_i)^1} \qquad (2.112)

In the above equation, the denominator has r roots at s = −p_i. The following formula for the coefficients can be derived using a procedure similar to the one explained above.

K_{ij} = \left.\frac{1}{(j-1)!}\,\frac{d^{j-1}\left[G(s)(s+p_i)^r\right]}{ds^{j-1}}\right|_{s=-p_i}, \quad j = 1, 2, \ldots, r \qquad (2.113)
Example 2.11  In this example, we solve the following differential equation for given initial conditions using the Laplace transform.

\frac{d^2 y}{dt^2} + 5\frac{dy}{dt} + 6y = u_s(t) \qquad (2.114)

Assume the initial conditions are y(0) = 1 and ẏ(0) = 0. Taking the Laplace transform of both sides and using the differentiation property gives

s^2 Y(s) - s + 5\left(sY(s) - 1\right) + 6Y(s) = \frac{1}{s}

Solving for Y(s), we obtain

Y(s) = \frac{s^2 + 5s + 1}{s(s^2 + 5s + 6)} = \frac{s^2 + 5s + 1}{s(s+2)(s+3)} \qquad (2.118)

Partial fraction expansion gives

Y(s) = \frac{1/6}{s} + \frac{5/2}{s+2} - \frac{5/3}{s+3}

Taking the inverse Laplace transform gives the solution.

y(t) = \left(\frac{1}{6} + \frac{5}{2}e^{-2t} - \frac{5}{3}e^{-3t}\right)u_s(t) \qquad (2.120)
MATLAB Using the residue command in MATLAB, we can find the coefficients in
this example.
num=[1 5 1];
den=[1 5 6 0];
[r,p,k]=residue(num,den)
The function F(s) can give us a better understanding of control problems. In this section, the meaning of the function F(s) obtained by the Laplace transform is explained. Let us start the discussion
with the Fourier series that is familiar to most of us. When we have a periodic function f (t)
with a period of T , we can express the function as a sum of an infinite number of sinusoidal
terms. We call this sum Fourier series, and the following is the definition.
f(t) = a_0 + \sum_{n=1}^{\infty}\left(a_n \cos nω_0 t + b_n \sin nω_0 t\right) \qquad (2.121)
In the above equation, the radian frequency ω0 is related to the period T as follows.
ω_0 = \frac{2π}{T} = 2πf_0 \qquad (2.122)
The coefficients of the Fourier series are as follows.
a_0 = \frac{1}{T}\int_0^T f(t)\,dt, \qquad a_n = \frac{2}{T}\int_0^T f(t)\cos nω_0 t\,dt, \qquad b_n = \frac{2}{T}\int_0^T f(t)\sin nω_0 t\,dt \qquad (2.123)
If we look at the definition of the Fourier series, we can see that the Fourier coefficients
represent the amplitudes of sine and cosine components. For example, the coefficient an
represents the amplitude of the component cos nω0 t in the function f (t). After modifying
and rearranging terms, the Fourier series equation can be represented as follows.
f(t) = \sum_{n=-\infty}^{\infty} c_n e^{jnω_0 t} \qquad (2.124)

c_n = \frac{1}{T}\int_{-T/2}^{T/2} f(t)e^{-jnω_0 t}\,dt \qquad (2.125)
In the above equation, coefficients are complex numbers. However, similarly, the coefficient
c_n may represent the weight of the component e^{jnω_0 t} in the function f(t). Extension of the Fourier series to non-periodic functions gives us the Fourier transform. In other words, a non-
periodic function can be regarded as a function with an infinite period. If we increase the
period T in Eqs. (2.124) and (2.125) to infinity, we obtain the following Fourier transform
relationship.

f(t) = \frac{1}{2π}\int_{-\infty}^{\infty} F(jω)e^{jωt}\,dω \qquad (2.126)

F(jω) = \int_{-\infty}^{\infty} f(t)e^{-jωt}\,dt \qquad (2.127)
Eq. (2.127) is the definition of Fourier transform, and Eq. (2.126) is the definition of inverse
Fourier transform. By increasing the period in the Fourier series to infinity, the infinite sum
in Eq. (2.124) becomes the integral in Eq. (2.126). Also, the Fourier series coefficient in
Eq. (2.125) becomes the Fourier transform in Eq. (2.127) after some modifications. In other
words, the Fourier transform in Eq. (2.127) represents the weight of the component ejωt in
function f (t).
If we compare Fourier transform with Laplace transform definition in Eq. (2.43), we can
see that replacing jω in Fourier transform with s gives us the definition of Laplace trans-
form. In other words, Laplace transform definition in Eq. (2.43) represents the weight of the
component est in function f (t). Here, the variable s is a complex variable with the real part
and imaginary part, as shown below.
s = σ + jω (2.128)
The variable s has the meaning of a frequency. In particular, the imaginary part ω is called the radian frequency, and the real part σ is called the neper frequency.
2.4 Lab 2
2.4.1 A/D Conversion
To implement feedback control systems, we need sensors to feed back the output signals. The sensor output signals may be analog or digital. Digital sensors are becoming popular, but many analog sensors are still in use. For microcontrollers to read analog sensor signals, they need A/D (analog-to-digital) converters. A/D converters read analog signals and convert them into digital numbers. The STM32F429 Discovery board has three 12-bit A/D converters. There are 16 channels for each converter. However, many of them may not be available since some of the pins are used for other purposes.
Before we start to discuss programming A/D converters, we need to build OP amp cir-
cuits to convert voltage ranges. Since the applied voltage for our microcontroller is 3V, the
voltage ranges of A/D and D/A converters are 0 ∼ 3V . When we build digital control sys-
tems, we may have to input and output bipolar voltages. In other words, A/D and D/A
converters should be able to handle both positive and negative voltages.
First, let us build a voltage conversion circuit for the A/D converter. The input voltage
range for our circuit is from -10V to +10V. The output voltage range is from 0V to 3V. The
following figure shows a conversion curve for the A/D converter.
The following is the voltage conversion equation for the A/D converter input.
V_o = \frac{3}{20}V_i + \frac{3}{2} \qquad (2.129)

For example, V_i = −10 V gives V_o = 0 V, and V_i = +10 V gives V_o = 3 V.
There are many possible circuits to realize the above relationship. Figure 2.5 shows an ex-
ample. In this circuit, you may want to use a precision voltage reference IC for the 10V
reference, such as the REF102 from Texas Instruments. Also, you have to connect a series trimmer to the voltage reference input resistor so that you can adjust the voltage level. One way to do this is an 18 kΩ resistor connected in series with a 5 kΩ trimmer, which allows both positive and negative trims.
Next, let us build a voltage conversion circuit for the D/A converter. The input voltage
range for our circuit is from 0V to +3V. The output voltage range is from -10V to +10V. The
following figure shows a conversion curve for the D/A converter.
The following equation is the voltage conversion equation for D/A converter input.
V_o = \frac{20}{3}V_i - 10 \qquad (2.130)
There are many possible circuits to realize the above relationship. Figure 2.7 shows an ex-
ample. Again, you have to connect a series trimmer to the voltage reference input resistor so that you can adjust the voltage level. One way to do this is a 9 kΩ resistor connected in series with a 2 kΩ trimmer, which allows both positive and negative trims.
In this lab, we write a simple program to input an analog signal from a function generator
using an A/D converter and output the data to a D/A converter. Figure 2.8 shows the setup
for this lab.
Start a new STM32 project in STM32CubeIDE with the name ADDAconversion. The setup
procedure is the same as in Section 1.4.3. Remember to disable FREERTOS as we did in Lab
1. In addition to the setup in section 1.4.3, we need to configure the A/D converter, as shown
in Figure 2.9. Note that we use ADC1 channel 13, and PC3 is the analog input pin.
After generating code, find the sections between /* USER CODE BEGIN . . . */ and /* USER CODE END . . . */ in the main source file, and insert Code 2.1 and Code 2.2. Remember that we have a timer interrupt with a period of 1 msec. Since Code 2.2 is within the timer interrupt callback function, it is executed every 1 msec. In this code, the microcontroller reads the A/D converter and sends the digital value to the D/A converter unchanged.
/* USER CODE BEGIN 2 */
HAL_TIM_Base_Start_IT(&htim10);        /* start TIM10 in interrupt mode */
HAL_DAC_Start(&hdac, DAC_CHANNEL_2);   /* enable D/A converter output 2 */
/* USER CODE END 2 */
Code 2.1
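Code 2.2 itself is not reproduced here. As a rough sketch only (the handle names hadc1, hdac, and htim10 and the blocking poll are assumptions based on the surrounding text), the callback might look like the following.

uint32_t ad_value, da_value;

void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
    if (htim->Instance == TIM10)
    {
        HAL_ADC_Start(&hadc1);                           /* trigger one conversion */
        if (HAL_ADC_PollForConversion(&hadc1, 10) == HAL_OK)
        {
            ad_value = HAL_ADC_GetValue(&hadc1);         /* read the 12-bit result */
        }
        da_value = ad_value;                             /* pass the sample through unchanged */
        HAL_DAC_SetValue(&hdac, DAC_CHANNEL_2, DAC_ALIGN_12B_R, da_value);
    }
}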
Set the function generator to generate a 100 Hz sine wave with 2V peak-to-peak amplitude
and 0V offset. Connect the channel one probe of the oscilloscope to the D/A converter and
channel two to the A/D converter. If you display both channels, you should be able to see
the screen shown in Figure 2.10. Note that we have ten samples per period of the sine wave
since we sample a 100 Hz sine wave at 1000 Hz sampling frequency.
Exercise 2.1 Suppose you want to double the amplitude of the sine wave from the
D/A converter, i.e., 4V peak-to-peak voltage and 0V offset. See what happens to the
D/A output signal if you change
da_value = ad_value;
to
da_value = 2*ad_value;
in Code 2.2. Discuss why you do not get the correct result. Think about what you have
to do to get a correct output signal.
From the result of Exercise 2.1, we can see that the numbers from the A/D converter cannot be used for arithmetic calculations as they are. If you look at Table 2.1, you can see that the A/D converter assigns the lowest number to the lowest voltage and the highest number to the highest voltage. The input voltage range in this table is for the voltage conversion circuit, i.e., ±10 V. Since the A/D converter in our controller is 12-bit, the lowest number is 0, and the highest number is 0xFFF in hexadecimal (4095 in decimal). The A/D converter reading has to be converted to two's complement before doing arithmetic calculations. In two's complement representation, we have a negative number for a negative voltage and a positive number for a positive voltage. Also, note that the two's complement value is zero for zero volts, while the raw A/D number is 2048 for zero volts. Conversion to two's complement is very simple: subtracting 2048 from the value obtained from the A/D converter gives you a two's complement number. After all the arithmetic calculations, you have to add 2048 back before writing to the D/A converter, since the D/A converter uses the same scheme. Apply this method to Exercise 2.1 and check that you now get the correct result. A sketch of this conversion is shown below.
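As an illustration only (not the book's listing), the doubling in Exercise 2.1 could be done on the offset-corrected value like this; the variable names follow Code 2.2 and Code 2.6.

int32_t signed_value = (int32_t)ad_value - 2048;    /* offset binary -> two's complement */
signed_value = 2 * signed_value;                    /* arithmetic on the signed value */
if (signed_value >  2047) signed_value =  2047;     /* clamp to the 12-bit output range */
if (signed_value < -2048) signed_value = -2048;
da_value = (uint32_t)(signed_value + 2048);         /* back to offset binary for the DAC */
HAL_DAC_SetValue(&hdac, DAC_CHANNEL_2, DAC_ALIGN_12B_R, da_value);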
2.4.2 Real-Time Simulation of First-Order Dynamic Systems

In this lab, we simulate, in real time, a first-order dynamic system described by the following differential equation.

τ\frac{dx(t)}{dt} + x(t) = u(t) \qquad (2.131)
In the above equation, x(t) is the output and u(t) is the input of the system. τ is a time-
constant that governs the speed of the system response. Since the above system is a continuous-time system, the equation has to be changed to a digital form. There are many ways to convert a continuous-time system to a digital form, and the following relationship is the simplest one. It is called Euler's method.

\frac{dx(t)}{dt} \approx \frac{x_k - x_{k-1}}{\Delta t} \qquad (2.132)

In the above relationship, x_k is the value of x at the kth sampling instant and Δt is the sampling period (1 msec in this lab). Substituting this approximation into Eq. (2.131) gives

τ\frac{x_k - x_{k-1}}{\Delta t} + x_k = u_k \qquad (2.133)

Solving for x_k gives the update equation implemented in Code 2.6.

x_k = \frac{τ\,x_{k-1} + \Delta t\,u_k}{τ + \Delta t} \qquad (2.134)
  if (htim->Instance == TIM10)
  {
    /* read the A/D converter; the start/poll lines are an assumed completion of the fragment */
    HAL_ADC_Start(&hadc1);
    if (HAL_ADC_PollForConversion(&hadc1, 10) == HAL_OK)
    {
      ad_value = HAL_ADC_GetValue(&hadc1);
    }
    u = ad_value - 2048.0;                   /* convert to a signed (two's complement) value */
    x = (tau*x_old + delt*u)/(tau + delt);   /* Euler update, Eq. (2.134) */
    x_old = x;
    da_value = x + 2048.0;                   /* back to offset binary for the DAC */
    HAL_DAC_SetValue(&hdac, DAC_CHANNEL_2, DAC_ALIGN_12B_R, (uint32_t)(da_value));
  }
}
/* USER CODE END Callback 0 */
Code 2.6
Set the function generator to generate a square wave with 4 Hz frequency and 2V peak-
to-peak amplitude. When you run the program, you should be able to see the step response
of the system, as in Figure 2.11. You can see that the time constant of the system is approxi-
mately 25 msec.
Exercise 2.2 (1) Repeat the above project with τ = 0.0125 and check the time constant
of the response.
(2) Repeat the above project with the following differential equation.
τ\frac{dx(t)}{dt} + 2x(t) = u(t) \qquad (2.135)
Problems

Problem 2.1  Find the characteristic equations and homogeneous solutions of the following differential equations.

(1) 5\frac{dy}{dt} + 4y = 0, \quad y(0) = 1

(2) \frac{d^2 y}{dt^2} + 5\frac{dy}{dt} + 4y = 0, \quad y(0) = 1, \ \dot{y}(0) = 0

(3) \frac{d^2 y}{dt^2} + 2\frac{dy}{dt} + 2y = 0, \quad y(0) = 1, \ \dot{y}(0) = 0

(4) \frac{d^2 y}{dt^2} + 4y = 0, \quad y(0) = 1, \ \dot{y}(0) = 0
Problem 2.2  Find the complete solutions of the following differential equations.

(1) 5\frac{dy}{dt} + 4y = u_s(t), \quad y(0) = 0

(2) \frac{d^2 y}{dt^2} + 5\frac{dy}{dt} + 4y = u_s(t), \quad y(0) = 0, \ \dot{y}(0) = 0

(3) \frac{d^2 y}{dt^2} + 2\frac{dy}{dt} + 2y = u_s(t), \quad y(0) = 0, \ \dot{y}(0) = 0
Problem 2.4  Find the Laplace transforms of the following functions. Assume the functions are zero for t < 0.

(1) f_1(t) = \begin{cases} 1 & 0 ≤ t < 1 \\ 0 & 1 ≤ t \end{cases}

(2) f_2(t) = δ(t) + δ(t − 1)
Problem 2.5  Answer the questions for the following functions. Assume the functions are zero for t < 0.

f_1(t) = \begin{cases} 1 & 0 ≤ t < 1 \\ 0 & 1 ≤ t \end{cases}, \qquad f_2(t) = u_s(t)

(1) Find the convolution f_1(t) * f_2(t).
(2) Find the Laplace transform of the function found in part (1).
(3) Show that the result of part (2) is the same as the product of the Laplace transforms of the above two functions.
Problem 2.6  Answer the questions for the following functions. Assume the functions are zero for t < 0.

f_1(t) = \begin{cases} 1 & 0 ≤ t < 1 \\ 0 & 1 ≤ t \end{cases}, \qquad f_2(t) = e^{-t}

(1) Find the convolution f_1(t) * f_2(t).
(2) Find the Laplace transform of the function found in part (1).
(3) Show that the result of part (2) is the same as the product of the Laplace transforms of the above two functions.
Problem 2.7  Find the inverse Laplace transforms of the following functions.

(1) G(s) = \frac{1}{s(s + 3)(s + 4)}

(2) G(s) = \frac{10}{(s + 1)^2(s + 3)}

(3) G(s) = \frac{1}{s^2(s + 2)}

(4) G(s) = \frac{2}{s(s^2 + s + 2)}
Problem 2.8  Find the solutions of the following differential equations using Laplace transforms.

(1) \frac{d^2 y}{dt^2} + 9y = 0, \quad y(0) = 1, \ \dot{y}(0) = 0

(2) \frac{d^2 y}{dt^2} + 5\frac{dy}{dt} + 4y = e^{-3t}u_s(t), \quad y(0) = 1, \ \dot{y}(0) = 0

(3) \frac{d^2 y}{dt^2} + 2\frac{dy}{dt} + 2y = u_s(t), \quad y(0) = 0, \ \dot{y}(0) = 0

(4) \frac{d^2 y}{dt^2} + 4y = u_s(t), \quad y(0) = 1, \ \dot{y}(0) = 0
Problem 2.9  Answer the following question for the differential equations given below. First, find the value of y when t → ∞ using the final value theorem. Next, find y(t) by solving the differential equations and find the limit \lim_{t→∞} y(t). Check whether the limit values are the same as the results obtained by the final value theorem.

(1) \frac{d^2 y}{dt^2} + 5\frac{dy}{dt} + 4y = u_s(t), \quad y(0) = 1, \ \dot{y}(0) = 0

(2) \frac{d^2 y}{dt^2} + 3\frac{dy}{dt} + 2y = u_s(t), \quad y(0) = 1, \ \dot{y}(0) = −1

(3) \frac{d^2 y}{dt^2} + 2\frac{dy}{dt} + y = 2u_s(t), \quad y(0) = 0, \ \dot{y}(0) = 0
Problem 2.10  Answer the following question for the differential equations given below. First, find the value of y when t → ∞ using the final value theorem. Next, find y(t) by solving the differential equations and find the limit \lim_{t→∞} y(t). Check whether the limit values are the same as the results obtained by the final value theorem.

(1) \frac{dy}{dt} − 2y = u_s(t), \quad y(0) = 0

(2) \frac{d^2 y}{dt^2} + \frac{dy}{dt} − 2y = u_s(t), \quad y(0) = 0, \ \dot{y}(0) = 0
3. Modeling of Dynamic Systems
3.1 Modeling of Dynamic Systems using Differential Equations
To analyze and design control systems, we need to describe the characteristics of controlled
systems mathematically. One of the most common ways to describe dynamic characteristics
is using differential equations. If a system can be described using a differential equation,
we call it a dynamic system. The process of finding the differential equation for a system is
called modeling. The differential equation describing a system is called a mathematical model of the system, or simply a model. Almost everything around us
can be a dynamic system that can be described using differential equations. In this chapter,
we discuss the modeling of various systems, such as mechanical systems, electrical systems,
and electromechanical systems. In addition to differential equations, there are several meth-
ods describing the characteristics of dynamic systems, such as transfer functions and state
equations. These methods may be able to give us a better understanding of the systems.
In this chapter, transfer functions and state-variable equations are discussed. Besides equa-
tions, graphical representations of dynamic systems such as block diagrams and signal flow
graphs are very helpful methods. We also discuss these graphical methods in this chapter.
f = ma (3.1)
In the above equation, f is the force applied to the object, and a is the acceleration of the
object. These two quantities are vectors. The scalar quantity m is the mass of the object. The
above equation is the fundamental equation to find the dynamic equations of the moving
object. The SI units of force f , mass m, and acceleration a are N (Newton), kg, and m/sec2 ,
respectively.
Three basic elements comprise mechanical systems in translational motion: mass, spring, and friction.
Mass
The mass of an object is measured in kilograms (kg) in SI units. Weight is the force exerted
on an object by gravity. We usually measure the mass by finding the weight of an object.
In other words, if you divide the weight of the object by gravitational acceleration, you can
obtain the mass of the object. For example, if you have an object of 1kg mass, and the grav-
itational acceleration is 9.8m/sec2 , the weight of the object is 9.8N on earth. If you say the
weight of the object is 1kg, you are wrong. It is correct to say the weight of the object is
1kgf (kilogram-force), or the mass of the object is 1kg. Note that 1kgf is equal to 9.8N in
earth gravity. Suppose you have an object of mass M and apply the force f to the object, as
in Figure 3-1. If the variable y represents the position of the object, we have the following
dynamic equation by applying Newton’s law of motion.
f = M\frac{d^2 y}{dt^2} \qquad (3.2)
Spring
Elasticity is the ability of an object to return to its original shape after it is stretched or com-
pressed. A spring is a mechanical element with elasticity. The mechanical element with the
following relationship is called a linear spring. There are springs with a nonlinear relation-
ship, which we do not discuss in this book.
f = Ky (3.3)
In the above equation, f is the applied force, y is a change in position, and K is the spring
constant. We can see that the unit in the SI unit system of the spring constant is N /m. For
example, if you apply 1N force to the spring with a 100N /m spring constant, you have 0.01m
displacement. If the spring constant of a spring is large, it is a hard or stiff spring. On the
other hand, if the spring constant is small, it is a soft spring. Figure 3-2 shows the spring el-
ement. Depending on the force direction, a spring can be stretched or compressed; however,
we can use the equation (3.3) for both cases. When we build a mechanical system, springs
can be used as mechanical parts. However, a certain mechanical part that is not a spring may
have elastic characteristics. We can also use the spring equation to model elastic character-
istics approximately, even though they are not actual mechanical spring parts.
Friction
Friction is the force acting against the motion of an object. The direction of the friction is
the opposite of the motion. The magnitude of friction may or may not depend on the veloc-
ity. The friction whose magnitude is proportional to the velocity is called the linear friction.
The mechanical element with linear friction is called a damping element or a damper. The
following is the relationship for the linear friction.
f = B\frac{dy}{dt} \qquad (3.4)
In the above equation, f is the force acting against the motion. B is the damping coefficient
whose unit is N · sec /m in the SI unit system. The linear damping element may exist as
a natural phenomenon; however, we may want to make a mechanical part with the linear
friction. Such an example is the shock absorber for automotive suspension systems. Figure
3-3 shows the linear friction element. In addition to the linear friction, there are nonlinear
frictions that are not proportional to the velocity. The Coulomb friction is the most common
nonlinear friction. The magnitude of Coulomb friction is constant regardless of the velocity
magnitude, and the direction is changed as the movement direction changes. The following
equation represents the Coulomb friction.
f = \begin{cases} F_c & \dot{y} > 0 \\ -F_c & \dot{y} < 0 \end{cases} \qquad (3.5)
Figure 3-4 shows the friction forces versus velocity for the linear friction and Coulomb fric-
tion. In real mechanical systems, there are many cases where the magnitude of nonlinear
friction is larger than the linear friction. However, dealing with nonlinear frictions is very
difficult and beyond the scope of this book.
Example 3.1  Consider the system with a mass, a spring, and a linear friction element, as shown in Figure 3-5. f is the force applied, and y is the position of the object. Let us find the dynamic equation of the system. Figure 3-6 shows all the forces applied to the object. The total force applied to the mass is the sum of the external force f, the spring force Ky, and the damping force B(dy/dt). By applying Newton's law of motion, we obtain the following dynamic equation, whose left-hand side is the total force applied to the object.

f - Ky - B\frac{dy}{dt} = M\frac{d^2 y}{dt^2} \qquad (3.6)

Rearranging the terms gives us the following second-order differential equation.

M\frac{d^2 y}{dt^2} + B\frac{dy}{dt} + Ky = f \qquad (3.7)
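The lab in Chapter 2 simulated a first-order system in real time with Euler's method. As an illustration only (not a listing from the book), the same idea applied offline to the mass-spring-damper model of Eq. (3.7) might look like the following C sketch; the values of M, B, K, the force, and the time step are arbitrary example assumptions.

#include <stdio.h>

int main(void)
{
    /* example parameters (assumed values, not from the book) */
    const double M = 1.0, B = 2.0, K = 20.0;   /* mass, damping, spring constant */
    const double f = 1.0;                      /* constant applied force (step input) */
    const double dt = 0.001;                   /* integration step, 1 msec */

    double y = 0.0, v = 0.0;                   /* position and velocity, zero initial conditions */

    for (int k = 0; k < 5000; k++)             /* simulate 5 seconds */
    {
        double a = (f - B*v - K*y) / M;        /* acceleration from Eq. (3.7) */
        y += dt * v;                           /* forward Euler update of position */
        v += dt * a;                           /* forward Euler update of velocity */
        if (k % 500 == 0)
            printf("t = %.2f s, y = %.4f\n", k*dt, y);
    }
    return 0;                                  /* y settles near f/K = 0.05 */
}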
Exemple 3.2 Consider two masses connected by a spring, as shown in Figure 3-7. f is
the force applied, and y1 and y2 are positions of the objects. Let us find the dynamic
equation of the system. Figure 3-8 shows the forces applied to the objects. If we apply
Newton’s law to the objects in Figure 3-8, we obtain the following equations.
f - K(y_1 - y_2) = M_1\frac{d^2 y_1}{dt^2} \qquad (3.8)

K(y_1 - y_2) - B\frac{dy_2}{dt} = M_2\frac{d^2 y_2}{dt^2} \qquad (3.9)
Arranging terms in the above equations gives us the following simultaneous differen-
tial equations.
M_1\frac{d^2 y_1}{dt^2} + Ky_1 - Ky_2 = f \qquad (3.10)

M_2\frac{d^2 y_2}{dt^2} + B\frac{dy_2}{dt} + Ky_2 - Ky_1 = 0 \qquad (3.11)
For rotational motion, Newton's law of motion takes the following form.

τ = Jα \qquad (3.12)

In the above equation, τ is the torque applied to the object, J is the moment of inertia, and α
is the angular acceleration. Both τ and α are vectors. The SI units of torque τ, inertia J, and
angular acceleration α are N·m, kg·m², and rad/sec², respectively.
Three basic elements comprise mechanical systems in rotational motion: the moment of
inertia, the torsional spring, and rotational friction.
Moment of inertia
As a simple example, consider a single object that is fixed to a rotating rod, as shown
in Figure 3-9. The mass of the object is M, and the length of the rod is R. Assume the rod is
massless for simplicity.
The following equation is the definition of the inertia for the rotating object in Figure 3-9.
J = MR2 (3.13)
The above equation is the basic equation that can be used to calculate the inertia of more
complex shape objects. For example, the inertia of a rotating disk in Figure 3-10 can be
calculated using the equation. Assume the mass of the disk is M, and the radius is R.
Assume ρ is the mass per area of the disk. Consider an infinitesimally small square at a
distance r from the center of the disk. The area of this square is rdθ · dr, and the mass is
ρ·rdθ·dr. The inertia of this small square is r 2 ·ρ·rdθ·dr . If we integrate this term throughout
the whole disk, the inertia of the disk can be obtained as in the following equation.
J = \int_0^{R}\!\int_0^{2\pi} r^2 \cdot \rho \cdot r\,d\theta\,dr = \frac{1}{4}R^4 \cdot 2\pi \cdot \rho = \frac{1}{2}R^2 \cdot \pi R^2 \rho = \frac{1}{2}R^2 M \qquad (3.14)
If we look at Newton's equation for rotational motion, we can see that dividing the torque
by the angular acceleration gives us the inertia. The following relationship shows how to
obtain the SI unit of inertia. Remember that the radian is a ratio and therefore a dimensionless
quantity.
\frac{\text{torque}}{\text{angular acceleration}} = \frac{N \cdot m}{rad/sec^2} = \frac{kg \cdot m/sec^2 \cdot m}{rad/sec^2} = \frac{kg \cdot m^2}{rad} \qquad (3.15)
Torsion Spring
A torsion spring is the rotational version of spring. If we apply torque to a torsion spring,
it is twisted. However, it returns to its original shape after the removal of the torque. The
following is the relationship for the linear torsion spring.
τ = Kθ (3.16)
In the above equation, τ is the applied torque, and θ is the twisted angle. K is the spring
constant. Figure 3-11 shows the torsion spring.
Rotational Friction
Rotational friction is the rotational version of friction. Rotational friction is the torque act-
ing against the rotation of an object. The following is the relationship of linear rotational
friction. Figure 3-12 shows the rotational friction.
\tau = B\frac{d\theta}{dt} \qquad (3.17)
There are nonlinear rotational frictions. Coulomb friction is the most common one. The
following is the definition of rotational Coulomb friction.
\tau = \begin{cases} F_c & \dot{\theta} > 0 \\ -F_c & \dot{\theta} < 0 \end{cases} \qquad (3.18)
Figure 3-13 shows the relationships between the rotational friction torques and angular velocities.
Example 3.3 Consider the system in Figure 3-14. In this system, τ is the external
torque applied, and θ is the angle of the rotating member. Figure 3-15 shows all the
torques applied to the rotating member whose inertia is J. Applying Newton’s law to
this system gives us the following dynamic equation.
\tau - K\theta - B\frac{d\theta}{dt} = J\frac{d^2\theta}{dt^2} \qquad (3.19)
Rearranging the terms gives us the following equation.

J\frac{d^2\theta}{dt^2} + B\frac{d\theta}{dt} + K\theta = \tau \qquad (3.20)
Gears
Many mechanical systems employ gears to amplify the torque. Consider a set of ideal gears
shown in Figure 3.16. The numbers of teeth are N1 and N2 . We assume that ideal gears are
without inertia and frictions. If we assume that the number of teeth is proportional to the
radius of the gear, we have the following relationship.
\frac{R_2}{R_1} = \frac{N_2}{N_1} \qquad (3.21)
We also assume that the lengths of arcs traveled by both gears are equal during the same
period. If the rotational angles of the gears are θ1 and θ2 during the same period, we have
the following relationship.
R1 θ1 = R2 θ2 (3.22)
Taking derivatives of both sides of the above equation gives us the following equation.
\dot{\theta}_2 = \frac{N_1}{N_2}\dot{\theta}_1 \qquad (3.23)
Suppose τ1 is the input torque, and τ2 is the output torque in the above ideal gear system.
Since there should be no energy loss in the ideal gear system, the input energy must be equal
to the output energy. Remember that energy or work in the translational motion is the force
multiplied by the distance of motion. Similarly, in rotational motion, the work or energy is
the product of torque and angle of rotation. Therefore, we have the following relationship
for the above ideal gear system.
τ1 θ1 = τ2 θ2 (3.24)
If we rewrite the above equation, we have the following relationship.
\tau_2 = \frac{\theta_1}{\theta_2}\tau_1 = \frac{N_2}{N_1}\tau_1 \qquad (3.25)
If N2 /N1 is greater than one, we have a reduction gear system, where output speed is re-
duced. From the above equation, we can see that the output torque is greater than the input
torque. Therefore, we have a torque amplification at the cost of reduced speed. Automotive
transmission is an example. When we start an automobile, we need to have a large traction
force. Therefore, the gear ratio N2 /N1 has to be greater than one to obtain a large torque.
When the automobile is running at high speed, the gear ratio N2 /N1 is less than one. In this
case, the wheel is rotating at a higher speed than the engine.
Before we discuss the mechanical system with gears, consider the two rotors connected
with a rigid shaft, as shown in Figure 3-17. Since the two rotors can be considered as a single
rotor with the sum of two inertia, we have the following dynamic equation. τ is the external
torque applied.
\tau = (J_1 + J_2)\frac{d^2\theta_1}{dt^2} = J_1\frac{d^2\theta_1}{dt^2} + J_2\frac{d^2\theta_2}{dt^2} \qquad (3.26)
Let us define τ2 as follows:
\tau_2 = J_2\frac{d^2\theta_2}{dt^2} \qquad (3.27)
\tau - \tau_2 = J_1\frac{d^2\theta_1}{dt^2} \qquad (3.28)
The above equation is the dynamic equation for the rotor with the inertia J1 . The left-hand
side is the sum of torques applied to the rotor with the inertia J1 . τ2 is the reaction torque
from the rotor with the inertia J2 . Next, let us assume that the two rotors are connected with
ideal gears, as shown in Figure 3-18. In Figure 3-18, τ1 is the reaction torque τ2 conveyed
through the gears. Then, we have the following dynamic equation for the rotor with the
inertia J1 .
\tau - \tau_1 = J_1\frac{d^2\theta_1}{dt^2} \qquad (3.29)
Plugging Eq. (3.25) and Eq. (3.27) into Eq. (3.29) gives us the following equation.
\tau = J_1\frac{d^2\theta_1}{dt^2} + \tau_1 = J_1\frac{d^2\theta_1}{dt^2} + \frac{N_1}{N_2}\tau_2 = J_1\frac{d^2\theta_1}{dt^2} + \frac{N_1}{N_2}J_2\frac{d^2\theta_2}{dt^2} = \left[J_1 + \left(\frac{N_1}{N_2}\right)^{\!2} J_2\right]\frac{d^2\theta_1}{dt^2} \qquad (3.30)
Suppose τ is the torque generated by a motor and N1/N2 is less than one, i.e., we have a
reduction gear. We can see that the motor sees the load inertia J2 reduced by the square of
the gear ratio. Therefore, a small motor can drive a large load at the expense of reduced speed.
Electric Circuits
The electric circuits are also dynamic systems, even though they do not have any moving
parts. Differential equations can describe electric circuits. In the following discussion about
circuits, R, L, and C mean a resistor, an inductor, and a capacitor, respectively. Consider a
simple RC circuit in Figure 3-19. In the above circuit, u is the input voltage, y is the ca-
pacitor voltage, and i is the current. The following equation is the relationship between the
current and the capacitor voltage.
i = C\frac{dy}{dt} \qquad (3.31)
Applying KVL (Kirchhoff's Voltage Law) gives us the following first-order differential equation.

Ri + y = RC\frac{dy}{dt} + y = u \qquad (3.32)
As another circuit example, consider the following RL circuit in Figure 3.20.
In the above circuit, u is the input voltage, i is current, and y is the resistor voltage. The
following is the relationship between the inductor voltage and the current.
v_L = L\frac{di}{dt} \qquad (3.33)
Applying KVL (Kirchhoff's Voltage Law) gives us the following first-order differential equation.

v_L + Ri = L\frac{di}{dt} + Ri = u \qquad (3.34)
If we use the variable y instead of i, we have the following alternative differential equation
since y = Ri.
\frac{L}{R}\frac{dy}{dt} + y = u \qquad (3.35)
As a more complex circuit example, consider the following RLC circuit in Figure 3-21.
In the above circuit, u is the input voltage, i is current, and y is the capacitor voltage. The
following is the relationship between the capacitor voltage y and the current.

i = C\frac{dy}{dt} \qquad (3.36)
Applying KVL (Kirchhoff's Voltage Law) gives us the following first-order differential equation.

L\frac{di}{dt} + Ri + y = u \qquad (3.37)
Plugging Eq. (3.36) into (3.37) gives us the following equation. Note that the following equa-
tion is a second-order differential equation since it has a second derivative of the variable.
LC\frac{d^2 y}{dt^2} + RC\frac{dy}{dt} + y = u \qquad (3.38)
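MATLAB The transfer function implied by Eq. (3.38) can be checked quickly in MATLAB (Control System Toolbox). This is only a sketch; the component values below are assumptions chosen for illustration.
R = 1e3; L = 10e-3; C = 1e-6;   % assumed values: 1 kOhm, 10 mH, 1 uF
G = tf(1, [L*C R*C 1])          % 1/(LC s^2 + RC s + 1)
step(G)                          % capacitor voltage for a unit step input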
DC Servo Motor
Inside a DC servo motor, there are permanent magnets to generate magnetic fields. The
rotor is an armature with windings of insulated wire wrapped around an iron core. The
armature windings are connected to commutators. The commutators allow currents to flow
to the armature windings. Figure 3-22 shows a conceptual diagram of a DC servo motor. Using
two electromagnetic principles, namely the force exerted on a current-carrying conductor in a
magnetic field and the voltage induced in a conductor moving through a magnetic field, we can
obtain two fundamental equations
to model a DC servo motor. The following is the torque generated by the motor. The torque
is proportional to the current.
τ = Kt ia (3.39)
In the above equation, ia is the armature current, and Kt is the torque constant. Torque is
also dependent on the strength of the magnetic field. However, since permanent magnets
generate a magnetic field, the torque constant includes their effects. The next equation is the
relationship between the back emf and the rotational speed of the motor.
e_b = K_b\frac{d\theta}{dt} = K_b\,\omega \qquad (3.40)
The above equation is the voltage generated by the rotation of the armature in the magnetic
field. The reason we call this back emf is that it is generated in the direction of opposing the
current flow. In the above equation, ω is the rotational speed of the armature in rad/sec, and
Kb is the back emf constant. Figure 3-23 shows the equivalent circuit of a DC servo motor.
In Figure 3-23, ea is the voltage applied to the armature, and ia is the current flowing through the armature
windings. Ra and La are resistance and inductance of the armature, respectively. Ja is the
inertia of the rotor, and B is the linear friction coefficient. In the actual system, the effect of
nonlinear friction is larger than the linear friction; however, nonlinear friction is ignored in
this linear model. Alternatively, we can assume that nonlinear friction is contributing to the
part of the linear friction. Applying KVL to the above circuit gives us the following equation.
e_a = R_a i_a + L_a\frac{di_a}{dt} + e_b = R_a i_a + L_a\frac{di_a}{dt} + K_b\frac{d\theta}{dt} \qquad (3.41)
Since the torque generated by the motor turns the rotor, we have the following dynamic
equation by applying Newton’s law.
\tau = K_t i_a = J_a\frac{d^2\theta}{dt^2} + B\frac{d\theta}{dt} \qquad (3.42)
To find the model of the DC servo motor, we need to determine constant parameters. Finding
motor parameters is not a simple task. Particularly, finding torque constant Kt is not easy.
However, if we ignore the inductance and energy loss, we can simplify the parameter finding
process. The main contributions of energy loss in electric motors are electrical loss due to
the resistance and mechanical loss due to friction. Roughly speaking, the energy efficiency
of an electric motor is relatively high. Therefore, if we ignore energy losses in a DC motor,
we have the following relationships. The following is the input electric power to the motor.
Pe = ea ia (3.43)
The following is the mechanical power produced by the motor.

P_m = \tau\omega = K_t i_a \frac{e_b}{K_b} \qquad (3.44)
If we assume no energy loss, we have Pm = Pe . Also, if we ignore the inductance and resis-
tance of the armature, we have eb = ea . Plugging these relations to Eq. (3.44) gives us the
following relationship.
Kt = Kb (3.45)
Measuring back emf constant Kb is relatively easy. Therefore, from the above relationship,
we may use the same value for the torque constant Kt .
Example 3.4 Consider a robot manipulator in Figure 3-24. Each axis of the robot
manipulator is equipped with a servo motor. By controlling the motors at the axes, we
can make the robot perform desired tasks. Generally, robot manipulators are driven
by geared motors. The model for one axis is shown in Figure 3-25. In Figure 3-25, J1
is the inertia of the motor, and J2 is the inertia of the robot link. Applying KVL to the
model gives us the following equation.
e_a = R_a i_a + L_a\frac{di_a}{dt} + e_b = R_a i_a + L_a\frac{di_a}{dt} + K_b\frac{d\theta_1}{dt} \qquad (3.46)
In Figure 3-25, τ1 is the reaction torque τ2 conveyed through the gears. Then, we have
the following dynamic equation for the motor.
\tau - \tau_1 = K_t i_a - \tau_1 = J_1\frac{d^2\theta_1}{dt^2} + B_1\frac{d\theta_1}{dt} \qquad (3.47)
The reaction torque τ2 is defined as follows:
\tau_2 = J_2\frac{d^2\theta_2}{dt^2} + B_2\frac{d\theta_2}{dt} \qquad (3.48)
We also have the following relationships.
\tau_1 = \frac{N_1}{N_2}\tau_2 \qquad (3.49)

\theta_2 = \frac{N_1}{N_2}\theta_1 \qquad (3.50)
\tau = K_t i_a = J_1\frac{d^2\theta_1}{dt^2} + B_1\frac{d\theta_1}{dt} + \tau_1 = J_1\frac{d^2\theta_1}{dt^2} + B_1\frac{d\theta_1}{dt} + \frac{N_1}{N_2}\tau_2
= J_1\frac{d^2\theta_1}{dt^2} + B_1\frac{d\theta_1}{dt} + \frac{N_1}{N_2}\left[J_2\frac{d^2\theta_2}{dt^2} + B_2\frac{d\theta_2}{dt}\right] \qquad (3.51)
= \left[J_1 + \left(\frac{N_1}{N_2}\right)^{\!2} J_2\right]\frac{d^2\theta_1}{dt^2} + \left[B_1 + \left(\frac{N_1}{N_2}\right)^{\!2} B_2\right]\frac{d\theta_1}{dt}
In most cases, robot manipulators use servo motors with reduction gears. Therefore,
N1 /N2 in Eq. (3.51) is less than one. For example, if we have a reduction ratio of 100:1,
(N1 /N2 )2 in Eq. (3.51) is 1/10000. Generally, robot link inertia J2 is much larger than
the motor inertia J1. However, since J2 is multiplied by (N1/N2)² in Eq. (3.51), the
influence of the robot link on the motor is relatively small. Also, the changes in the
inertia due to the movements of the robot are small for the same reason.
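MATLAB To get a feel for the numbers in Eq. (3.51), the effective inertia and friction seen by the motor can be computed with a short calculation. All parameter values below are assumptions chosen only for illustration.
J1 = 1e-5;  B1 = 1e-4;      % motor inertia and friction (assumed values)
J2 = 0.5;   B2 = 0.05;      % robot link inertia and friction (assumed values)
n  = 1/100;                 % gear ratio N1/N2 for a 100:1 reduction
Jeff = J1 + n^2*J2          % link inertia appears divided by 10000
Beff = B1 + n^2*B2          % link friction is reduced by the same factor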
3.2 Transfer Function
Consider a system described by the following differential equation.

\frac{d^2 y(t)}{dt^2} + 3\frac{dy(t)}{dt} + 2y(t) = u(t)

In the above equation, the input is u(t) and the output is y(t). Since the equation has a
second derivative term, this is a second-order system. Taking the Laplace transform of the
above equation gives us the following relationship. Remember that we assume zero initial
conditions.
s2 Y (s) + 3sY (s) + 2Y (s) = U (s) (3.54)
Note that the Laplace transform of the second derivative of y(t) is s2 Y (s). The above equation
can be rearranged to the following equation.
\left(s^2 + 3s + 2\right)Y(s) = U(s) \qquad (3.55)
If we divide both sides by s² + 3s + 2, we obtain the ratio of the output to the input.

G(s) = \frac{Y(s)}{U(s)} = \frac{1}{s^2 + 3s + 2} = \frac{1}{(s+1)(s+2)} \qquad (3.56)

This ratio of the Laplace transform of the output to the Laplace transform of the input, taken
with zero initial conditions, is called the transfer function. Consider the above example system again. Suppose we want to find
the output for the unit step input with zero initial conditions. We can obtain the following
equation from Eq. (3.52).
Y (s) = G(s)U (s) (3.57)
Since the input is a unit step, the Laplace transform of the input is 1/s. Then, we have the
following relationship using the transfer function obtained above.
Y(s) = G(s)U(s) = \frac{1}{(s+1)(s+2)}\cdot\frac{1}{s} \qquad (3.58)
Taking the inverse Laplace transform of the above equation gives us the output function y(t).
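MATLAB If the Control System Toolbox is available, the same result can be checked numerically; this is only a sketch of one possible check.
G = tf(1, conv([1 1],[1 2]));             % G(s) = 1/((s+1)(s+2))
step(G)                                    % unit step response y(t)
[r, p] = residue(1, conv([1 1 0],[1 2]))   % partial fractions of Y(s) = G(s)*(1/s)
% the residues and poles correspond to y(t) = 0.5 - exp(-t) + 0.5*exp(-2t)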
Example 3.5 Consider an RC circuit in Figure 3-27. Let us find the transfer function
of the circuit. The output of the circuit is the capacitor voltage y(t), and the input
is the voltage u(t). Applying KCL (Kirchhoff's Current Law) to the circuit gives us the
following equation.
\frac{u(t) - y(t)}{R} = C\frac{dy(t)}{dt} \qquad (3.59)
After arranging terms, we have the following differential equation.
RC\frac{dy(t)}{dt} + y(t) = u(t) \qquad (3.60)
Taking the Laplace transform of the equation gives us the following relationship.
Y(s) = \frac{1}{RCs + 1}\left(U(s) + y(0)\right) \qquad (3.62)
Since y(0) = 0, we can obtain the transfer function as follows:
G(s) = \frac{Y(s)}{U(s)} = \frac{1}{RCs + 1} \qquad (3.63)
Example 3.6 Consider the mass-spring-damper system in Example 3.1 again. The
dynamic equation of the system is as follows:

M\frac{d^2 y}{dt^2} + B\frac{dy}{dt} + Ky = f \qquad (3.64)
With the assumption of zero initial conditions, taking the Laplace transform of the
above equation gives us the following equation.

Ms^2 Y(s) + BsY(s) + KY(s) = \left(Ms^2 + Bs + K\right)Y(s) = F(s) \qquad (3.65)
From the above equation, we can obtain the transfer function of the system as follows:
G(s) = \frac{Y(s)}{F(s)} = \frac{1}{Ms^2 + Bs + K} \qquad (3.66)
Example 3.7 Consider a mass-spring-damper system in Example 3.2. Since the system
has two mass objects, we have the following simultaneous differential equations.
f - K(y_1 - y_2) = M_1\frac{d^2 y_1}{dt^2} \qquad (3.67)

K(y_1 - y_2) - B\frac{dy_2}{dt} = M_2\frac{d^2 y_2}{dt^2} \qquad (3.68)
With the assumption of zero initial conditions, taking the Laplace transform of the
above equations gives us the following equations.

\left(M_1 s^2 + K\right)Y_1(s) = F(s) + KY_2(s) \qquad (3.69)
\left(M_2 s^2 + Bs + K\right)Y_2(s) = KY_1(s) \qquad (3.70)
Since the above equations are simultaneous, we have to solve for Y1(s) and Y2(s) to obtain the
following transfer functions.
\frac{Y_1(s)}{F(s)} = \frac{M_2 s^2 + Bs + K}{\left(M_1 s^2 + K\right)\left(M_2 s^2 + Bs + K\right) - K^2} \qquad (3.71)

\frac{Y_2(s)}{F(s)} = \frac{K}{\left(M_1 s^2 + K\right)\left(M_2 s^2 + Bs + K\right) - K^2} \qquad (3.72)
Since we have two outputs, we have a set of two transfer functions. Note that the two
transfer functions have the same denominators.
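MATLAB With numerical values, the two transfer functions can be formed and compared; the parameter values below are assumptions for illustration (Control System Toolbox).
M1 = 1; M2 = 2; B = 0.5; K = 10;                  % assumed parameter values
den = conv([M1 0 K], [M2 B K]) - [0 0 0 0 K^2];   % (M1 s^2+K)(M2 s^2+Bs+K) - K^2
G1 = tf([M2 B K], den)                            % Y1(s)/F(s), Eq. (3.71)
G2 = tf(K, den)                                   % Y2(s)/F(s), Eq. (3.72)
% both transfer functions share the same fourth-order denominator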
Impulse Response
The impulse response is closely related to the transfer function. Before we discuss the im-
pulse response, we need to define an impulse function. First, consider a pulse function
defined as in Figure 3-28.
The pulse in Figure 3-28 has ∆t width and 1/∆t height. Therefore, the area of this pulse is
one. If we integrate the pulse function, we have the following relationship.
\int_{-\infty}^{\infty} \delta_\Delta(t)\,dt = \int_{-\Delta t/2}^{\Delta t/2} \frac{1}{\Delta t}\,dt = \Delta t \cdot \frac{1}{\Delta t} = 1 \qquad (3.73)
If we let ∆t approach zero, the width of this pulse approaches zero while the height of the
pulse approaches infinity. The function obtained by this limiting procedure is called an
impulse function. The notation for the impulse
function is δ(t) . The impulse function satisfies the following relationship.
\int_{-\infty}^{\infty} \delta(t)\,dt = \int_{0^-}^{0^+} \delta(t)\,dt = 1 \qquad (3.74)
The impulse function does not exist in the real world; however, it is very useful in mathe-
matics. One of the important facts is that the Laplace transform of the impulse function is
one, as shown below.
\mathcal{L}[\delta(t)] = \int_{0^-}^{\infty} \delta(t)e^{-st}\,dt = \int_{0^-}^{0^+} \delta(t)\cdot 1\,dt = 1 \qquad (3.75)
Now, we are ready to define an impulse response. The impulse response is the response of
the system when an impulse is applied to the input. Remember that the transfer function
has the following relationship.
Y (s) = G(s)U (s) (3.76)
The above relationship enables us to find the Laplace transform of the output for a given
input. Since the Laplace transform of the impulse function is one, we have U(s) = 1. Then,
we have the following relationship.

Y(s) = G(s)U(s) = G(s) \qquad (3.77)
g(t) = \mathcal{L}^{-1}[Y(s)] = \mathcal{L}^{-1}[G(s)] \qquad (3.78)

The function g(t) in the above equations is the impulse response, i.e., g(t) is the response of
the system for the impulse input.
Using the impulse response, we can obtain an important relationship between the output
and input of the system. If we take the inverse Laplace transform of Eq. (3.76) and apply the
convolution property for the Laplace transform, we can obtain the following relationships.
y(t) = \mathcal{L}^{-1}[Y(s)] = \mathcal{L}^{-1}[G(s)U(s)] = \mathcal{L}^{-1}[G(s)] * \mathcal{L}^{-1}[U(s)] = g(t) * u(t) = \int_0^t g(t-\tau)u(\tau)\,d\tau \qquad (3.79)
Eq. (3.79) shows the relationship between the input function and the output function. If we
know the impulse response g(t) of a system, we can determine the output of the system for
any inputs using Eq. (3.79). Figure 3-29 summarizes this relationship.
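MATLAB Eq. (3.79) can be verified numerically for a simple case. The sketch below assumes G(s) = 1/(s+1) and a unit step input, and compares a discrete approximation of the convolution integral with the output computed by lsim (Control System Toolbox).
dt = 1e-3;  t = 0:dt:5;
g = exp(-t);                          % impulse response of G(s) = 1/(s+1)
u = ones(size(t));                    % unit step input
y_conv = conv(g, u)*dt;               % rectangle-rule approximation of the convolution
y_conv = y_conv(1:length(t));
y_lsim = lsim(tf(1, [1 1]), u, t)';   % toolbox solution for the same input
max(abs(y_conv - y_lsim))             % small difference confirms Eq. (3.79)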
Example 3.8 Consider the RC circuit in Example 3.5 again. Let us find the impulse
response of the system. The Laplace transform of the output is as follows:
Y(s) = \frac{1}{RCs + 1}\left(U(s) + y(0)\right) \qquad (3.80)
Taking the inverse Laplace transform gives us the following equations.
y(t) = \mathcal{L}^{-1}[Y(s)] = \mathcal{L}^{-1}\!\left[\frac{1}{RCs+1}U(s)\right] + \mathcal{L}^{-1}\!\left[\frac{1}{RCs+1}y(0)\right]
= \mathcal{L}^{-1}\!\left[\frac{1}{RCs+1}\right] * \mathcal{L}^{-1}[U(s)] + y(0)\cdot\mathcal{L}^{-1}\!\left[\frac{1}{RCs+1}\right] \qquad (3.81)
= e^{-t/RC} * u(t) + y(0)\cdot e^{-t/RC}
If we assume zero initial condition, the above equation is the same as Eq. (3.79). The
impulse response of this RC circuit is as follows:
g(t) = e^{-t/RC} \qquad (3.82)
Consider a general linear time-invariant system described by the following differential equation.

\frac{d^n y}{dt^n} + a_{n-1}\frac{d^{n-1} y}{dt^{n-1}} + \cdots + a_1\frac{dy}{dt} + a_0 y = b_m\frac{d^m u}{dt^m} + \cdots + b_1\frac{du}{dt} + b_0 u

In the above equation, the coefficients a0, a1, ..., an−1 and b0, b1, ..., bm are constant real
numbers. The differential equation above represents a linear time-invariant system. With
the assumption of zero initial conditions, taking the Laplace transform of the above equation
gives us the following relationship.

G(s) = \frac{Y(s)}{U(s)} = \frac{b_m s^m + \cdots + b_1 s + b_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0}
Characteristic Equation
If we set the denominator of the transfer function to zero, we obtain the characteristic
equation, which delivers very important information about the system. The characteristic
equation of the above system is as follows:

s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0 = 0

The above equation has n roots, which determine the stability of the system. We discuss the
stability of the system in chapter 4.
Example 3.9 Let us find the poles and zeros of the following transfer function.
G(s) = \frac{(s + 1)^2}{(s + 2)\left(s^2 + 2s + 2\right)} \qquad (3.87)
We can find the poles by solving the following equation.
(s + 2)\left(s^2 + 2s + 2\right) = 0 \qquad (3.88)
Therefore, the system has three poles at s = −2, −1 ± j. On the other hand, the system
has two zeros at s = −1. Since the denominator has a higher order than the numerator,
the transfer function becomes zero at s = ∞. Therefore, the system has a zero at infin-
ity. Figure 3-30 shows the locations of poles and zeros on the complex plane.
MATLAB Poles of this example can be found using the following MATLAB com-
mands. MATLAB command conv computes the coefficients of the two multiplied poly-
nomials.
den = conv([1 2],[1 2 2])   % coefficients of (s+2)(s^2+2s+2)
roots(den)                  % the three poles of G(s)
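Alternatively, if the Control System Toolbox is available, the poles and zeros can be read directly from a transfer-function object; this is just another way to obtain the same result.
G = tf([1 2 1], conv([1 2],[1 2 2]));   % (s+1)^2 / ((s+2)(s^2+2s+2))
pole(G)                                  % -2, -1+j, -1-j
zero(G)                                  % -1, -1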
Example 3.10 Let us find the poles and zeros of the following transfer function.
G(s) = \frac{s(s + 1)}{s + 2} \qquad (3.89)
This system has two zeros at s = 0, −1. This system has a pole at s = −2. Since the
transfer function becomes infinity at s = ∞, this system has a pole at infinity.
3.3 Block Diagrams
The meaning of the basic block in Figure 3-31 is that the output signal can be obtained by
multiplying the transfer function by the input in the frequency-domain.
To represent a complex system, we need to connect many basic blocks in various ways.
To do so, definitions of various connections for blocks are necessary.
Series Connection
When we have two blocks in series as in Figure 3-32, the transfer function of the whole
system is as follows:
G(s) = \frac{Y_2(s)}{U_1(s)} = \frac{Y_2(s)}{U_2(s)}\cdot\frac{Y_1(s)}{U_1(s)} = G_2(s)G_1(s) \qquad (3.91)
Parallel connection
When we have two blocks in parallel, as in Figure 3-33, the transfer function of the whole
system is as follows:

G(s) = G_1(s) + G_2(s)
Feedback Connection
When we have two blocks connected by a feedback loop, as in Figure 3-34, we have the
following relationships.
G(s) = \frac{Y_1(s)}{R(s)} = \frac{G_1(s)}{1 + G_1(s)G_2(s)} \qquad (3.95)
Figure 3-35 shows various operations for block diagrams. These relationships are very help-
ful when we deal with complex block diagrams.
Example 3.11 Let us find the transfer function of the system in Figure 3-36(a) by
simplifying block diagrams. The block diagram in Figure 3-36(a) has two feedback
loops. First, if we simplify the inner loop, we obtain the block diagram in Figure
3-36(b). Next, if we simplify the outer loop and combine the parallel blocks, we obtain
the block diagram in Figure 3-36(c). Finally, we have the transfer function as follows:
Example 3.12 Let us find the transfer function of the system in Figure 3-37(a). The
block diagram in Figure 3-37(a) needs to be rearranged into the block diagram in Figure
3-37(b). If we simplify the feedback loops and parallel blocks, we can obtain the block
diagram in Figure 3-37(c). From this, we can find the transfer function as follows:
3.4 Signal Flow Graph
Figure 3-39 shows the signal flow graph of the feedback block diagram in Figure 3-34. If
we need to subtract a signal at the summing junction, we must add the negative sign to the
transfer function or gain, as shown in the case of U1 (s). Note that U1 (s) is the sum of signals
entering the node.
Example 3.13 Consider the block diagram in Figure 3-36(a). Figure 3-40 shows the
signal flow graph of the block diagram.
Example 3.14 Figure 3-41 shows the signal flow graph of the block diagram in Figure
3-37.
In the previous section, the transfer function is found by simplifying the block diagram.
If we have a very complex block diagram, it may be very complicated to find transfer func-
tions by simplifications. Mason’s rule is the formula to find the transfer function of a signal
flow graph. To explain Mason’s rule, we need several definitions for the signal flow graph.
• Forward path: A path from an input node to an output node. No node is traversed
more than once.
• Loop: A path starts and ends on the same node. No node is traversed more than once.
• Path gain: The product of the gains of all the branches in the path.
• Loop gain: The product of the gains of all the branches in the loop.
• Non-touching: Two parts of a signal flow graph are non-touching if they do not share
a common node.
Mason’s Rule
Mason’s rule gives the formula to compute the gain or transfer function from the input node
to the output node. Without proof, the following equation shows the formula.
M = \frac{\sum_{k=1}^{n} M_k \Delta_k}{\Delta}, \qquad (3.98)
where
n = total number of forward paths,
Mk = path gain of the kth forward path,
∆ = 1 − L1 + L2 − L3 + · · · (−1)i Li · · · ,
L1 = sum of loop gains of all loops,
L2 = sum of products of the loop gains of any two non-touching loops,
L3 = sum of products of the loop gains of any three non-touching loops,
...
Li = sum of products of the loop gains of any i non-touching loops,
...
∆k = the ∆ computed for the part of the signal flow graph that is non-touching with the kth
forward path.
Example 3.15 Consider the signal flow graph in Figure 3-40. Let us find the gain
using Mason’s rule. This signal flow graph has two forward paths whose path gains
are as follows:
M1 = G1 G4 G5 (3.99)
M2 = G1 G4 G6 (3.100)
This signal flow graph has two loops whose loop gains are −G1 G2 and −G1 G3 G4 . There-
fore, we can obtain the following equation.
L1 = −G1 G2 − G1 G3 G4 (3.101)
Since there are no non-touching loops, the term L2 and the rest of the terms do not
exist. Since there are no non-touching loops with two forward paths, both ∆1 and ∆2
are one. Therefore, the gain of the whole system is as follows:
M = \frac{M_1\Delta_1 + M_2\Delta_2}{\Delta} = \frac{G_1 G_4 G_5 + G_1 G_4 G_6}{1 + G_1 G_2 + G_1 G_3 G_4} \qquad (3.102)
Example 3.16 The purpose of this example is to point out a common mistake in the
signal flow graph. First, consider the signal flow graph in Figure 3-42. If we assume that
there is no feedback loop in the signal flow graph (G3 = 0), the gain of the whole system
is G4 + G1G2. In other words, the summation of the gains of the two parallel paths gives us
the whole gain. However, someone may think that there are still two parallel paths for the
case G3 ≠ 0, compute the gain of the feedback loop first, and then add the gains of the two
parallel paths to get the whole gain G4 + G1G2/(1 + G1G3). However, applying Mason's rule
gives us the following gain, which does not agree with that result.
\frac{Y}{R} = \frac{G_4 + G_1 G_2}{1 + G_1 G_3} \qquad (3.103)
Figure 3-43 is the block diagram of the above signal flow graph. If there is no feedback
loop, we have two parallel paths in the block diagram. However, with the feedback
loop, we do not have any parallel paths anymore. Note that the variable representing
the summing junction (node E in Figure 3-42) is the sum of the entering signals. If we
have two parallel paths together with a feedback loop, the block diagram and the signal flow
graph should look like Figure 3-44 and Figure 3-45, respectively. Applying Mason's
rule to Figure 3-45 gives us the following equation.
\frac{Y}{R} = \frac{G_4(1 + G_1 G_3) + G_1 G_2}{1 + G_1 G_3} = G_4 + \frac{G_1 G_2}{1 + G_1 G_3} \qquad (3.104)
Example 3.17 Consider the signal flow graph in Figure 3-46. Let us find the whole
gain using Mason’s rule. There are two forward paths, whose path gains are as follows:
M1 = G1 G2 G3 G4 G5 G6 (3.105)
M2 = G11 (3.106)
There are four loops in the signal flow graph. L1 is obtained as follows:

L_1 = -G_1 G_7 - G_3 G_8 - G_5 G_9 - G_1 G_2 G_3 G_{10} \qquad (3.107)
Find the pairs of two non-touching loops, and L2 can be obtained as follows:
L2 = G1 G7 G3 G8 + G1 G7 G5 G9 + G3 G8 G5 G9 + G1 G2 G3 G10 G5 G9 (3.108)
L3 = −G1 G7 G3 G8 G5 G9 (3.109)
Since there are no set of four non-touching loops, the term L4 and the rest of the terms
do not exist. Since the forward path with the gain M1 does not have a non-touching
loop, ∆1 is one. Non-touching loop gains with the forward path of the gain M2 are
−G3 G8 and −G5 G9 . Then, we can find ∆2 as follows:
∆2 = 1 + G3 G8 + G5 G9 + G3 G8 G5 G9 (3.110)
M = \frac{Y}{R} = \frac{M_1 \cdot 1 + M_2 \cdot \Delta_2}{1 - L_1 + L_2 - L_3} \qquad (3.111)
3.5 State Equation
Consider the following two transfer functions.

G_1(s) = \frac{2}{s + 3} \qquad (3.112)
G_2(s) = \frac{2s + 2}{s^2 + 4s + 3} = \frac{2(s + 1)}{(s + 3)(s + 1)} = \frac{2}{s + 3} \qquad (3.113)
The above two systems are different. The first one is a first-order system, while the second
one is a second-order system. As can be seen above, the systems have the same transfer
functions. It is very unlikely that the above second-order system exists in the real world.
However, these simple examples show the problem of the transfer function representation.
Using state variables may be a solution to this problem. The equations using state variables
show the internal dynamic characteristics of the system.
Consider again the system described by the following differential equation.

\frac{d^2 y(t)}{dt^2} + 3\frac{dy(t)}{dt} + 2y(t) = u(t) \qquad (3.114)

The above system is a second-order system with the output y(t) and the input u(t). To find
the state equation of the system, we need to define state variables. In this example, let us
define the state variables x1 (t) and x2 (t) as below. Note that the number of state variables is
the same as the order of the system.
x1 (t) = y(t)
(3.115)
x2 (t) = ẏ(t)
There are many ways to define state variables. The state variable definition is not unique for
a system. One of the easiest ways to define state variables is using the output variable and
its derivatives. Using the above variable definition, we obtain the following equations.

\dot{x}_1(t) = x_2(t) \qquad (3.116)
\dot{x}_2(t) = \ddot{y}(t) = -2x_1(t) - 3x_2(t) + u(t) \qquad (3.117)
If we look at the state equations, we can see that they are a set of first-order differential
equations. Finding the state equation is like converting a single differential equation to a set
of multiple first-order differential equations. Note that the number of first-order equations
is the same as the order of the system. The state equations can be represented by a vector
equation. Let us define a vector variable as follows:
" #
x1 (t)
x(t) = (3.118)
x2 (t)
The above variable is called a state vector. Using the above vector variable, we can obtain
the following state equation in a vector form.
" # " # " #" # " #
ẋ1 (t) x2 (t) 0 1 x1 (t) 0
ẋ(t) = = = + u(t) (3.119)
ẋ2 (t) −2x1 (t) − 3x2 (t) + u(t) −2 −3 x2 (t) 1
In addition to the vector state equation, we need to define the output equation. Since the
output variable is y(t), and x1 (t) = y(t), we have the following output equation.
" #
h i x (t) h i
1
y(t) = x1 (t) = 1 0 = 1 0 x(t) (3.122)
x2 (t)
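MATLAB As a quick consistency check (Control System Toolbox), the state equation (3.119) and output equation (3.122) can be converted back to a transfer function; the original system 1/(s² + 3s + 2) should be recovered.
A = [0 1; -2 -3];  B = [0; 1];
C = [1 0];         D = 0;
sys = ss(A, B, C, D);   % state-space model of Eqs. (3.119) and (3.122)
tf(sys)                 % returns 1/(s^2 + 3s + 2)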
Example 3.18 Let us find state equations for the systems in Figure 3-47 and Figure 3-
48. Since the system in Figure 3-47 is the first-order system, we need one state variable.
Using the state variable defined in Figure 3-47 gives us the following state equations.
\dot{x}_1 = -3x_1 + u
y = 2x_1 \qquad (3.123)
Since the system in Figure 3-48 is a second-order system, we need two state variables.
Using the state variable defined in Figure 3-48 gives us the following state equations.
ẋ1 = x2
ẋ2 = −3x1 − 4x2 + u (3.124)
y = 2x1 + 2x2
As we know, the systems in Figure 3-47 and Figure 3-48 have the same transfer func-
tions. However, as we can see in the above, they are different systems with different
state equations.
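MATLAB A sketch of the same comparison (Control System Toolbox): both state-space models reduce to the same transfer function even though they are different systems.
sys1 = ss(-3, 1, 2, 0);                       % model of Figure 3-47
sys2 = ss([0 1; -3 -4], [0; 1], [2 2], 0);    % model of Figure 3-48
tf(sys1)    % 2/(s+3)
tf(sys2)    % (2s+2)/(s^2+4s+3), which cancels to 2/(s+3)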
In the above equation, n is the order of the system, and it is the number of independent state
variables that can describe the system completely. This number is also the same as the order
of the differential equation. The following is the general form of state equations.

\dot{x}(t) = A\,x(t) + B\,u(t) \qquad (3.129)
y(t) = C\,x(t) + D\,u(t) \qquad (3.130)

x : (n \times 1),\quad u : (1 \times 1),\quad y : (1 \times 1)
A : (n \times n),\quad B : (n \times 1),\quad C : (1 \times n),\quad D : (1 \times 1) \qquad (3.131)
A is called a system matrix, and B is called an input matrix. The system matrix A has very
important information about the system, which we discuss later.
There are numerous ways to define state variables. Therefore, a system may have many
different forms of state equations depending on the state variable definition. Some forms
of state equations are very useful. In the following, we discuss how to derive several useful
forms of state equations.
If we define the following constant matrices, the above state equations can be represented
by the vector equations as in Eqs. (3.129),(3.130).
A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \end{bmatrix}, \quad D = 0 \qquad (3.136)
Next, consider a system described by the following differential equation. Note that this
equation has derivatives of input variable u(t) on the right-hand side.
Note that the above equation is the same form as Eq. (3.132). Then, we can define state
variables as follows:
x1 (t) = z(t)
x2 (t) = z(1) (t)
.. (3.143)
.
xn (t) = z(n−1) (t)
Using the above definitions, we can obtain the following state equations.
Next, to find the equation for the output variable y(t), if we convert Eq. (3.141) to a differ-
ential equation, we have the following equation.
Changing the above state equations to vector forms gives us the following equation.
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \vdots \\ \dot{x}_n \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}u
\qquad (3.146)
y = \begin{bmatrix} b_0 & b_1 & \cdots & b_{n-2} & b_{n-1} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
Figure 3-49 shows the block diagram of the controllable canonical form. In the figure, a
block with the transfer function 1/s is an integrator. Note that the number of integrators is
the same as the order of the system.
In Eq. (3.138), the order of the denominator is greater than the order of the numerator. In this
case, D in Eq. (3.130) is always zero. In one of the examples, we will discuss the case where
the order of the denominator is equal to the order of the numerator. When we have a system
where the order of the denominator is less than the order of the numerator, we cannot find a
state equation for the system since the system is not realizable. We will discuss systems
that are not realizable in the frequency response chapters.
Example 3.19 Let us find the controllable canonical form of the following system.
G(s) = \frac{Y(s)}{U(s)} = \frac{1}{s^3 + 2s^2 + 3s + 1} \qquad (3.147)
Defining the state variables as x1 = y, x2 = ẏ, and x3 = ÿ gives the following state equations.

\dot{x}_1 = x_2,\quad \dot{x}_2 = x_3,\quad \dot{x}_3 = -x_1 - 3x_2 - 2x_3 + u,\quad y = x_1

Note that the last equation of the above equations can be obtained from the differential
equation. The following equation is the vector form of the above state equations.
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & -3 & -2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}u
\qquad (3.152)
y = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
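MATLAB If the function tf2ss is available, a controller-canonical realization of this transfer function can be generated automatically. Note, as an implementation detail, that tf2ss typically orders the state variables in reverse relative to Eq. (3.152), so the matrices look permuted even though they describe the same system.
[A, B, C, D] = tf2ss(1, [1 2 3 1])   % realization of 1/(s^3 + 2s^2 + 3s + 1)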
Example 3.20 Let us find the controllable canonical form of the following system.
Note that the numerator of the transfer function has the term s.
G(s) = \frac{Y(s)}{U(s)} = \frac{5s + 4}{s^3 + 2s^2 + 3s + 1} \qquad (3.153)
If we multiply both the numerator and the denominator by Z(s), we have the following
relationships.
\frac{Y(s)}{U(s)} = \frac{5s + 4}{s^3 + 2s^2 + 3s + 1}\cdot\frac{Z(s)}{Z(s)} \qquad (3.154)
U(s) = \left(s^3 + 2s^2 + 3s + 1\right)Z(s) \qquad (3.155)
Y(s) = (5s + 4)Z(s) \qquad (3.156)
Let us define state variables as follows:
x1 = z, x2 = ż, x3 = z̈ (3.157)
Using the above state variables, we have the state equation of the system.
\dot{x}_1 = x_2
\dot{x}_2 = x_3
\dot{x}_3 = \dddot{z} = -x_1 - 3x_2 - 2x_3 + u \qquad (3.158)
y = 4x_1 + 5x_2
Let us move all the terms without s to the left-hand side and terms with s to the right-hand
side. Then, define sX1 (s) as in the following equation. X1 (s) is the Laplace transform of the
state variable x1 (t) , and sX1 (s) is the Laplace transform of ẋ1 (t) .
b_0 U(s) - a_0 Y(s) = \left(s^n + a_{n-1}s^{n-1} + \cdots + a_1 s\right)Y(s) - \left(b_{n-1}s^{n-1} + b_{n-2}s^{n-2} + \cdots + b_1 s\right)U(s) = sX_1(s) \qquad (3.163)
If we divide the above equation by s, we can obtain the following relationship for X1 (s).
X_1(s) = \left(s^{n-1} + a_{n-1}s^{n-2} + \cdots + a_1\right)Y(s) - \left(b_{n-1}s^{n-2} + b_{n-2}s^{n-3} + \cdots + b_1\right)U(s) \qquad (3.164)
Then, again, let us move all the terms without s to the left-hand side and terms with s to
the right-hand side. Then, define sX2 (s) as in the following equation. X2 (s) is the Laplace
transform of the state variable x2 (t) , and sX2 (s) is the Laplace transform of ẋ2 (t).
X_1(s) + b_1 U(s) - a_1 Y(s) = \left(s^{n-1} + a_{n-1}s^{n-2} + \cdots + a_2 s\right)Y(s) - \left(b_{n-1}s^{n-2} + b_{n-2}s^{n-3} + \cdots + b_2 s\right)U(s) = sX_2(s) \qquad (3.165)
Again, if we divide the above equation by s, we can obtain the following relationship for
X2 (s).
X_2(s) = \left(s^{n-2} + a_{n-1}s^{n-3} + \cdots + a_2\right)Y(s) - \left(b_{n-1}s^{n-3} + b_{n-2}s^{n-4} + \cdots + b_2\right)U(s) \qquad (3.166)
Repeat the above process until we have the following relationship.
Xn−1 (s) + bn−1 U (s) − an−1 Y (s) = sY (s) = sXn (s) (3.167)
From the above process, we can obtain the following set of equations.
b0 U (s) − a0 Y (s) = sX1 (s)
X1 (s) + b1 U (s) − a1 Y (s) = sX2 (s)
.. (3.168)
.
Xn−1 (s) + bn−1 U (s) − an−1 Y (s) = sY (s) = sXn (s)
If we convert the above equations to the time-domain functions, we have the observable
canonical state equations as follows:
ẋ1 = −a0 xn + b0 u
ẋ2 = x1 − a1 xn + b1 u
.. (3.169)
.
ẋn = xn−1 − an−1 xn + bn−1 u
y = xn
The following equation is the vector form of the state equations.
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \vdots \\ \dot{x}_n \end{bmatrix} = \begin{bmatrix} 0 & 0 & \cdots & 0 & -a_0 \\ 1 & 0 & \cdots & 0 & -a_1 \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & -a_{n-1} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} + \begin{bmatrix} b_0 \\ b_1 \\ \vdots \\ b_{n-1} \end{bmatrix}u
\qquad (3.170)
y = \begin{bmatrix} 0 & 0 & \cdots & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
Figure 3-52 shows the block diagram of the system.
Example 3.21 Let us find the observable canonical form of the following system.
G(s) = \frac{Y(s)}{U(s)} = \frac{5s + 4}{s^3 + 2s^2 + 3s + 1} \qquad (3.171)
We can obtain the following relationship from the above transfer function.
\left(s^3 + 2s^2 + 3s + 1\right)Y(s) = (5s + 4)U(s) \qquad (3.172)
Let us move all the terms without s to the left-hand side and terms with s to the right-
hand side. Then, define sX1 (s) as in the following equation. X1 (s) is the Laplace trans-
form of the state variable x1 (t), and sX1 (s) is the Laplace transform of ẋ1 (t).
4U(s) - Y(s) = \left(s^3 + 2s^2 + 3s\right)Y(s) - 5sU(s) = sX_1(s) \qquad (3.173)
Then, let us move all the terms without s to the left-hand side and terms with s to
the right-hand side. Define sX2(s) as in the following equation. X2(s) is the Laplace
transform of the state variable x2(t), and sX2(s) is the Laplace transform of ẋ2(t).

X_1(s) + 5U(s) - 3Y(s) = \left(s^2 + 2s\right)Y(s) = sX_2(s) \qquad (3.175)
Then, let us move all the terms without s to the left-hand side and the term with s to
the right-hand side. Define sX3(s) as in the following equation.

X_2(s) - 2Y(s) = sY(s) = sX_3(s)

With y = x3, these definitions give the observable canonical form of the system.
Example 3.22 Let us find the state equation of the following system.

G(s) = \frac{2s^3 + 4s^2 + 11s + 6}{s^3 + 2s^2 + 3s + 1}

Note that the order of the denominator is the same as the order of the numerator. In
this case, we divide the numerator by the denominator to get the quotient and the
remainder. We can then apply the same process as above to the remainder.
G(s) = \frac{2\left(s^3 + 2s^2 + 3s + 1\right) + 5s + 4}{s^3 + 2s^2 + 3s + 1} = 2 + \frac{5s + 4}{s^3 + 2s^2 + 3s + 1} \qquad (3.183)
If we take the Laplace transform of the above function, we have the following relationship.

\mathcal{L}[y(t)] = Y(s) = G(s)U(s) = 2U(s) + \frac{5s + 4}{s^3 + 2s^2 + 3s + 1}U(s) \qquad (3.184)
Let us define Y1(s) and Y2(s) as shown below.

Y_1(s) = \mathcal{L}[y_1(t)] = 2U(s)
Y_2(s) = \mathcal{L}[y_2(t)] = \frac{5s + 4}{s^3 + 2s^2 + 3s + 1}U(s) \qquad (3.187)
Then, if we consider y2 as an output, we have the following state equations.
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & -3 & -2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}u
\qquad (3.188)
y_2 = \begin{bmatrix} 4 & 5 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
Therefore, we have the following output equation for the system.

y = y_1 + y_2 = \begin{bmatrix} 4 & 5 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + 2u \qquad (3.189)
As we can see, the matrix D in Eq. (3.130) is not zero; it equals the quotient of
the division. Figure 3-54 shows the block diagram of the system. As we can see in
the block diagram, there is a direct path from the input to the output without going
through any integrators.
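MATLAB The realization with a nonzero D can be checked by converting it back to a transfer function (Control System Toolbox); the quotient 2 reappears as the direct feedthrough term.
A = [0 1 0; 0 0 1; -1 -3 -2];  B = [0; 0; 1];
C = [4 5 0];                   D = 2;
tf(ss(A, B, C, D))   % (2s^3 + 4s^2 + 11s + 6)/(s^3 + 2s^2 + 3s + 1)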
Diagonal Canonical Form
When the transfer function has n distinct poles at s = -α1, -α2, ..., -αn, it can be expanded
into partial fractions as follows:

G(s) = \frac{k_1}{s + \alpha_1} + \frac{k_2}{s + \alpha_2} + \cdots + \frac{k_n}{s + \alpha_n} \qquad (3.191)
We can consider this system as a parallel connection of n blocks. Figure 3-55 shows the block
diagram.
Using the state variable definition in Figure 3-55, we can obtain the following state equation.
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \vdots \\ \dot{x}_n \end{bmatrix} = \begin{bmatrix} -\alpha_1 & 0 & \cdots & 0 \\ 0 & -\alpha_2 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & -\alpha_n \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}u
\qquad (3.192)
y = \begin{bmatrix} k_1 & k_2 & \cdots & k_n \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
As we can see in the above equation, the system matrix is a diagonal matrix. When the
transfer function has multiple poles, we may not be able to find a diagonal canonical form.
In this case, we have to use a Jordan canonical form, which we do not discuss in this book.
Interested readers may find the topic in books about linear system theory.
Example 3.23 Consider the following system. Let us find the diagonal canonical form
of the system.
G(s) = \frac{1}{(s + 1)(s + 2)} = \frac{1}{s + 1} + \frac{-1}{s + 2} \qquad (3.193)
From the partial fraction expansion of the transfer function, we can find the diagonal
canonical form of the system.

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \end{bmatrix}u, \qquad y = \begin{bmatrix} 1 & -1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
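MATLAB The partial-fraction expansion used here can also be obtained with the residue function; this is a minimal sketch, and the ordering of the returned poles may differ.
[r, p] = residue(1, conv([1 1],[1 2]))
% r = [-1; 1] and p = [-2; -1]: residue -1 at s = -2 and 1 at s = -1, matching Eq. (3.193)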
Example 3.24 Let us find a state equation for the mass-spring-damper system in
Example 3.1. First, consider the following differential equation.
M\frac{d^2 y}{dt^2} + B\frac{dy}{dt} + Ky = f \qquad (3.195)
Since the system is a second-order system, we need two state variables. Let us define
the state variables as follows:
x1 = y
(3.196)
x2 = ẏ
In other words, we have two state variables representing the position and the velocity
of the object. Using the above state variables, we have the following state equations.
\dot{x}_1 = x_2
\dot{x}_2 = \ddot{y} = -\frac{K}{M}y - \frac{B}{M}\dot{y} + \frac{f}{M} = -\frac{K}{M}x_1 - \frac{B}{M}x_2 + \frac{1}{M}u \qquad (3.197)
The following equation is a vector form of the state equation.
" # " #" # " #
ẋ1 0 1 x1 0
= + u
ẋ2 −K/M −B/M x2 1/M
" # (3.198)
h i x
1
y= 1 0
x2
Note that the above equation is not exactly a controllable canonical form, but it is
similar to it. Let us find the observable canonical form of the system as follows:
" # " #" # " #
ż1 0 −K/M z1 1/M
= + u
ż2 1 −B/M z2 0
" # (3.199)
h i z
1
y= 0 1
z2
If we compare the state variables in Eq. (3.198) and the state variables in Eq. (3.199),
we can obtain the following relationship.
z_2 = y = x_1
z_1 = \frac{B}{M}z_2 + \dot{z}_2 = \frac{B}{M}x_1 + \dot{x}_1 = \frac{B}{M}x_1 + x_2 \qquad (3.200)
As we can see from the above equations, the state variables in Eq. (3.199) are the linear
combinations of the state variables in Eq. (3.198). The following is the matrix form of
this relationship.

\begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \begin{bmatrix} B/M & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \qquad (3.201)
Note that the state variables in the observable canonical form do not have physical
meanings. The above relationships tell us that we can find a new set of state variables
by the linear combinations of state variables. It is called the state transformation,
which we discuss in chapter 8.
Example 3.25 Consider the circuit in Figure 3-56. Let us find the state equations. Let
us define the capacitor voltage and the inductor current as state variables, as follows:
x1 = y
(3.202)
x2 = iL
\dot{x}_1 = \dot{y} = \frac{1}{C}\left(\frac{u - y}{R} - i_L\right) = -\frac{1}{RC}x_1 - \frac{1}{C}x_2 + \frac{1}{RC}u
\dot{x}_2 = \dot{i}_L = \frac{1}{L}y = \frac{1}{L}x_1 \qquad (3.203)
y = x_1
The state variables in the above state equation have physical meanings. Next, let us
find the controllable canonical form. Consider the transfer function of the circuit.
G(s) = \frac{Y(s)}{U(s)} = \frac{s/(RC)}{s^2 + s/(RC) + 1/(LC)} \qquad (3.204)
From the above transfer function, we can find the controllable canonical form.
" # " #" # " #
ż1 0 1 z1 0
= + u
ż2 −1/(LC) −1/(RC) z2 1
" # (3.205)
h i z
1
y= 0 1/(RC)
z2
Example 3.26 Consider the two-inertia system in Figure 3-57. Let us find the state
equations of the system. The input to the system is an external torque τ, and the
output is the angle θ1. The following equations are the dynamic equations of the system.
\tau - K_1(\theta_1 - \theta_2) - B_1\left(\frac{d\theta_1}{dt} - \frac{d\theta_2}{dt}\right) = J_1\frac{d^2\theta_1}{dt^2} \qquad (3.206)

K_1(\theta_1 - \theta_2) + B_1\left(\frac{d\theta_1}{dt} - \frac{d\theta_2}{dt}\right) - K_2\theta_2 - B_2\frac{d\theta_2}{dt} = J_2\frac{d^2\theta_2}{dt^2} \qquad (3.207)
Since we have two second-order differential equations, the order of the system is four.
Let us define four state variables as follows:
x_1 = \theta_1,\quad x_2 = \frac{d\theta_1}{dt},\quad x_3 = \theta_2,\quad x_4 = \frac{d\theta_2}{dt} \qquad (3.208)
Then, we can obtain the following state equations.
\dot{x}_1 = x_2
\dot{x}_2 = \frac{d^2\theta_1}{dt^2} = -\frac{K_1}{J_1}x_1 - \frac{B_1}{J_1}x_2 + \frac{K_1}{J_1}x_3 + \frac{B_1}{J_1}x_4 + \frac{1}{J_1}u
\dot{x}_3 = x_4 \qquad (3.209)
\dot{x}_4 = \frac{d^2\theta_2}{dt^2} = \frac{K_1}{J_2}x_1 + \frac{B_1}{J_2}x_2 - \left(\frac{K_1}{J_2} + \frac{K_2}{J_2}\right)x_3 - \left(\frac{B_1}{J_2} + \frac{B_2}{J_2}\right)x_4
y_1 = x_1
There are various ways to define state variables. We can use physical variables as state
variables, as in the previous examples. If we use state variables with physical meanings, we
can easily tell the internal state of the system from the state variable responses. Sometimes,
we may want to define state variables to find canonical forms. We discuss the usefulness of
canonical forms in later chapters.
3.7 Linearization of Nonlinear Systems
In the real world, there are many systems with nonlinear characteristics. Nonlinear systems
are more common than linear systems in nature. However, dealing with nonlinear systems
is very difficult. Nonlinear system theory is complicated and not as well developed as linear
system theory. There are many types of nonlinear systems, and it is very difficult to
find theories to cover all the types of nonlinear systems. On the other hand, linear system
theory has a long history and is very well developed. One of the easiest ways to deal with
nonlinear systems is to find an approximate linear model of the nonlinear system and ap-
ply linear theories. Due to the approximate nature of this approach, we cannot always use
this method for nonlinear systems. However, there are many cases where we can use ap-
proximate linear models of nonlinear systems. This approach is called the linearization of
nonlinear systems.
Let us start with the definition of linear functions. The functions that satisfy the following
relationships are defined as linear functions.

f(x_1 + x_2) = f(x_1) + f(x_2)
f(ax) = a f(x)

If a function fails to satisfy one of the above conditions, it is a nonlinear function. For
example, y = x² is a nonlinear function since it does not satisfy the first condition, as follows:

(x_1 + x_2)^2 = x_1^2 + 2x_1 x_2 + x_2^2 \neq x_1^2 + x_2^2

The function y = x² does not satisfy either of the linearity conditions. The linear function
with one variable is the following function. In this equation, K is a constant.
y = Kx (3.214)
The graph of a linear function is a straight line through the origin, as shown in Figure 3-58.
Next, consider the following nonlinear function with one variable.

y = f(x) \qquad (3.215)
To linearize a nonlinear function, the first thing to do is to determine the center of lineariza-
tion. Let us assume that we want to linearize the function at x0 , and the value of the function
at the point is as follows:
y0 = f (x0 ) (3.216)
When we have a change in x by ∆x, the approximate value of y can be obtained as follows:
y = f(x_0 + \Delta x) \approx f(x_0) + \left(\frac{df}{dx}\right)_{x = x_0}\!\cdot \Delta x = f(x_0) + K\cdot\Delta x = y_0 + \Delta y \qquad (3.217)
We can find the approximate value of y at x = x0 + ∆x, using the following relationship. In
this relationship, the slope of the nonlinear curve is found by calculating the derivative at
x = x0 . Then, the approximate change ∆y is calculated using the slope.
\Delta y = \left(\frac{df}{dx}\right)_{x = x_0}\!\cdot \Delta x = K\cdot\Delta x \qquad (3.218)
There is an error between the accurate value f (x0 + ∆x) and the approximate value y0 + ∆y.
However, this error can be ignored if we assume that ∆x is small enough. Then, we have a
linear relationship between ∆x and ∆y , i.e., ∆y = K∆x. We have to remember three facts
about linearization. First, if we have to change the center of the linearization, we have to cal-
culate a new slope. In other words, we have different linear functions for the different centers
of linearization. Second, we have an increased error if we move away from the center of lin-
earization. Third, the nonlinear function must be differentiable at the center of linearization.
This linearization method is frequently used in electronic circuits. The small-signal models
for transistors are typical examples.
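MATLAB The one-variable case can be illustrated numerically with the nonlinear function y = x² used above; this is a minimal sketch.
f  = @(x) x.^2;             % nonlinear function
x0 = 2;  K = 2*x0;          % center of linearization and slope df/dx at x0
dx = 0.1;                   % small change in x
y_approx = f(x0) + K*dx     % linearized prediction: 4.4
y_exact  = f(x0 + dx)       % exact value: 4.41; the error grows with dx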
Next, let us consider the linearization of a nonlinear function with two variables. A
linear function with two variables is as follows. As can be seen in the following equation,
the constant and the variable can be vectors.
" #
h i x
1
y = K1 x1 + K2 x2 = K 1 K 2 (3.219)
x2
The above function is a plane in the three-dimensional space, as shown in Figure 3-60. The
plane passes through the origin. Remember that the graph of a linear function with one
variable is a line through the origin in the two-dimensional space.
As can be seen in Figure 3-60, a linear function with two variables has two slopes, i.e., K1
and K2. K1 is the slope on the x1–y plane, and K2 is the slope on the x2–y plane. Next, consider
the linearization of the following nonlinear function with two variables.
y = f (x1 , x2 ) (3.220)
The above function is a surface in the three-dimensional space. Let us assume the center of
the linearization is (x10, x20), and the value of the function at the point is as follows:

y_0 = f(x_{10}, x_{20}) \qquad (3.221)

When we have changes ∆x1 in x1 and ∆x2 in x2, the approximate value of y can be obtained
as follows:
y = f(x_{10} + \Delta x_1, x_{20} + \Delta x_2)
\approx f(x_{10}, x_{20}) + \left(\frac{\partial f}{\partial x_1}\right)_{x_1 = x_{10}, x_2 = x_{20}}\!\cdot \Delta x_1 + \left(\frac{\partial f}{\partial x_2}\right)_{x_1 = x_{10}, x_2 = x_{20}}\!\cdot \Delta x_2 \qquad (3.222)
= f(x_{10}, x_{20}) + K_1\cdot\Delta x_1 + K_2\cdot\Delta x_2 = y_0 + \Delta y
Then, the approximate change ∆y is a linear function of ∆x1 and ∆x2, as in the following
equation.

\Delta y = K_1\cdot\Delta x_1 + K_2\cdot\Delta x_2 = \begin{bmatrix} K_1 & K_2 \end{bmatrix}\begin{bmatrix} \Delta x_1 \\ \Delta x_2 \end{bmatrix} \qquad (3.223)
The above method can be extended to nonlinear functions with more than two variables.
The above method can be applied to the linearization of nonlinear state equations. Con-
sider the following state equation. If any of the functions on the right-hand side are nonlin-
ear, the system is nonlinear.
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \vdots \\ \dot{x}_n \end{bmatrix} = \begin{bmatrix} f_1(x_1, x_2, \cdots, x_n, u) \\ f_2(x_1, x_2, \cdots, x_n, u) \\ \vdots \\ f_n(x_1, x_2, \cdots, x_n, u) \end{bmatrix} \qquad (3.224)
First, we need to have a center of linearization. Let us assume that the center of linearization
is (x10 , x20 , · · · , u0 ), which satisfies the following relationship.
ẋi0 = fi (x10 , x20 , · · · , u0 ) , i = 1, 2, · · · , n (3.225)
Suppose we have small changes in variables as in the following equations:
xi = xi0 + ∆xi , i = 1, 2, · · · n
(3.226)
u = u0 + ∆u
Then, we have the approximate relationships as follows:

\dot{x}_i = \dot{x}_{i0} + \Delta\dot{x}_i = f_i(x_1, x_2, \cdots, x_n, u)
\approx f_i(x_{10}, x_{20}, \cdots, u_0) + \left(\frac{\partial f_i}{\partial x_1}\right)_{\!0}\Delta x_1 + \left(\frac{\partial f_i}{\partial x_2}\right)_{\!0}\Delta x_2 + \cdots + \left(\frac{\partial f_i}{\partial x_n}\right)_{\!0}\Delta x_n + \left(\frac{\partial f_i}{\partial u}\right)_{\!0}\Delta u, \quad i = 1, 2, \cdots, n \qquad (3.227)

where the subscript 0 denotes evaluation at x_1 = x_{10}, x_2 = x_{20}, \cdots, x_n = x_{n0}, u = u_0.
From the above equations, we have the linearized state equation as follows:
\begin{bmatrix} \Delta\dot{x}_1 \\ \Delta\dot{x}_2 \\ \vdots \\ \Delta\dot{x}_n \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}\begin{bmatrix} \Delta x_1 \\ \Delta x_2 \\ \vdots \\ \Delta x_n \end{bmatrix} + \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}\Delta u \qquad (3.228)

a_{ij} = \left(\frac{\partial f_i}{\partial x_j}\right)_{x_1 = x_{10}, \cdots, x_n = x_{n0}, u = u_0}, \qquad b_i = \left(\frac{\partial f_i}{\partial u}\right)_{x_1 = x_{10}, \cdots, x_n = x_{n0}, u = u_0} \qquad (3.229)
Example 3.27 Consider a pendulum in Figure 3-61. The mass of the pendulum is M,
and the length of the rod is L. Assume the rod is massless. An external torque τ moves
the pendulum. Let us find the linearized state equation at the equilibrium position. If
we assume the gravitational acceleration is g, the pulling force by gravity is Mg. The
component acting in the direction of movement is Mg sin θ. Therefore, the total torque
acting on the pendulum is τ −LMg sin θ. Applying Newton’s law gives us the following
dynamic equation.
\tau - LMg\sin\theta = ML^2\frac{d^2\theta}{dt^2} \qquad (3.230)
To find the state equation, let us define state variables as follows:
x1 = θ, x2 = θ̇, u = τ (3.231)
Using the above state variables, we can obtain the following state equation in the vector
form.
" # " # x2
ẋ1 f1 (x1 , x2 , u)
= = g 1 (3.232)
ẋ2 f2 (x1 , x2 , u) − sin x1 + 2
u
L ML
Let us assume the center of linearization is x1 = 0, x2 = 0, u = 0. We can see that this
position is an equilibrium position. In other words, with zero external torque, the
pendulum can stay at this position forever. We evaluate the following partial derivatives
at the equilibrium position.

\left(\frac{\partial f_1}{\partial x_1}\right)_{\!0} = 0, \quad \left(\frac{\partial f_1}{\partial x_2}\right)_{\!0} = 1, \quad \left(\frac{\partial f_2}{\partial x_1}\right)_{\!0} = -\frac{g}{L}\cos x_1\Big|_{\,0} = -\frac{g}{L}, \quad \left(\frac{\partial f_2}{\partial x_2}\right)_{\!0} = 0 \qquad (3.233)

\left(\frac{\partial f_1}{\partial u}\right)_{\!0} = 0, \qquad \left(\frac{\partial f_2}{\partial u}\right)_{\!0} = \frac{1}{ML^2} \qquad (3.234)

where the subscript 0 denotes evaluation at x_1 = 0, x_2 = 0, u = 0.
Using the above values, we can obtain the linearized state equation as follows.
# " # 0
0 1 ∆x1
"
∆ẋ1
= g + 1 ∆u (3.235)
∆ẋ2 − 0 ∆x2
L ML 2
If we take Laplace transforms of the above equations, we have the following equations.
s\Delta X_1(s) = \Delta X_2(s)
s\Delta X_2(s) = -\frac{g}{L}\Delta X_1(s) + \frac{1}{ML^2}\Delta U(s) \qquad (3.236)
From the above equations, we can find the transfer function of the linearized system.
\frac{\Delta X_1(s)}{\Delta U(s)} = \frac{1}{ML^2}\cdot\frac{1}{s^2 + (g/L)} \qquad (3.237)
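MATLAB If the Symbolic Math Toolbox is available, the linearization of Eq. (3.232) can be reproduced by evaluating the Jacobians at the equilibrium; this is a sketch of one way to do it.
syms x1 x2 u M L g real
f = [x2; -(g/L)*sin(x1) + u/(M*L^2)];                    % nonlinear state equation, Eq. (3.232)
A = subs(jacobian(f, [x1 x2]), {x1, x2, u}, {0, 0, 0})   % [0 1; -g/L 0]
B = subs(jacobian(f, u),       {x1, x2, u}, {0, 0, 0})   % [0; 1/(M*L^2)]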
Example 3.28 Consider a magnetic levitation system in Figure 3-62. The magnetic
levitation system floats a metal ball in the air using an electromagnet. The input of
the system is the current i flowing through the electromagnet, and the output is the distance
y of the ball from the magnet. The pulling force of the magnet is proportional to the
square of the current. It is inversely proportional to the square of the distance. There
is also a gravitational pull on the ball. We can obtain the following dynamic equation
using Newton’s law.
Mg - K\frac{i^2}{y^2} = M\frac{d^2 y}{dt^2} \qquad (3.238)
M is the mass of the ball, g is the gravitational acceleration, and K is a proportionality con-
stant. To find the state equation, let us define the state variables as follows:
x1 = y
x2 = ẏ (3.239)
u=i
Using the above state variables, we can obtain the state equation as follows:
" # " # x2
ẋ1 f1 (x1 , x2 , u)
= = K u2 (3.240)
ẋ2 f2 (x1 , x2 , u) g−
M x12
Let us assume the ball is stationary at the position y = Y0 , and the current is i = I0 at
the position. Since the ball is at rest at the equilibrium position, we have the condition
ẋ2 = 0, from which we can obtain the following relationship.
K = \frac{gMY_0^2}{I_0^2} \qquad (3.241)
Therefore, state variables have the following values in the equilibrium position.
x1 = Y0 , x2 = 0, u = I0 (3.242)
From the above results, we can obtain the linearized state equation as follows.

\begin{bmatrix} \Delta\dot{x}_1 \\ \Delta\dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ \dfrac{2g}{Y_0} & 0 \end{bmatrix}\begin{bmatrix} \Delta x_1 \\ \Delta x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ -\dfrac{2g}{I_0} \end{bmatrix}\Delta u \qquad (3.245)
From the above equations, we can find the transfer function of the linearized system.
\frac{\Delta X_1(s)}{\Delta U(s)} = -\frac{2g}{I_0}\cdot\frac{1}{s^2 - (2g/Y_0)} \qquad (3.247)
3.8 Lab 3
3.8.1 Encoder for Motor Control
An encoder is one of the most frequently used devices to measure the angle of the motor.
Figure 3.63 shows a picture of an optical encoder. The optical encoder has a disk with
stripes around the perimeter. It also has an LED light source and a photodetector. When
the stripes on the disk pass through the space between the light source and the detector, the
detector circuit generates a train of pulses. By counting the number of pulses generated, we
can measure the angle of rotation.
To detect the direction of rotation, we need two light source and detector pairs so that they
can generate a pair of pulse trains with a phase difference. These types of encoders are
called quadrature encoders. Figure 3-64 shows pulse trains from the quadrature encoder.
Quadrature encoders can provide quadrupled resolution. In other words, the resolution of
the encoder can be four times the number of pulses. For example, if an encoder generates 96
pulses per revolution, the resolution we can obtain from this encoder is 4 × 96 = 384 counts
per revolution. From Figure 3-64, we can see that the changes in angle can be detected at
both the rising edge and the falling edge. Since we have a pair of pulse trains with a phase
difference, we can obtain four times the resolution.
Some microcontrollers have timer/counters with encoder reading capability. If the timers in
the microcontroller do not have encoder capability, we can write a program to read encoders
using external interrupts. Fortunately, the STM32F429 Discovery board has timers with the
capability of reading encoders. In this lab, we write a simple program to read a quadrature
encoder. For this lab, we can use any DC motor with an encoder. Since our board uses a 3V
supply, we have to check if the encoder works with a 3V supply. If not, we have to use a
voltage level converter.
Start a new STM32 project in STM32CubeIDE with the name Encoder. In this project,
we read the encoder and print the value of count on the serial terminal. For this project,
we choose to use Timer 2 for reading the encoder. In project configuration, select TIM2 and
enable Encoder Mode, as shown in Figure 3-65. Note that TIM2 is a 32-bit timer/counter.
Since we are reading a quadrature encoder, we need two ports for two channels. The
configuration menu assigns PA5 and PB3 for input ports, as we can see in Figure 3-65. How-
ever, we need to reserve the port PA5, because PA5 will be used for the D/A converter later.
We can remap the port in the Pinout view window. Find PA15 in the Pinout view and change
it to TIM2_CH1, as shown in Figure 3-66.
Then, you can check that TIM2_CH1 has changed to PA15, as shown in Figure 3-67.
We also need to change the Counter Period to 0xFFFFFFFF, which is the maximum value for a
32-bit register, as shown in Figure 3-68. This value is the maximum count value for the
timer/counter. (This is necessary to use two's complement for count values.) We also have to choose
Encoder Mode TI1 and TI2 in Encoder Mode to be able to read the quadrature encoder sig-
nal at the maximum resolution.
After generating code, open the main source file and type in the following codes.
/* USER CODE BEGIN Includes */
#include "stdio.h"
/* USER CODE END Includes */
Code 3.1
Code 3.3
Code 3.4
In Code 3.4, HAL_Delay(100) is the function call that produces a 100 ms delay. Before we run the above program, we have to connect the two encoder channel outputs to the ports, along with the 3 V power supply and ground. We also need to open the serial terminal, as in the Lab 1 Hello project. Rotate the motor shaft by hand forward and backward, and we should be able to see the count change, as shown in Figure 3-69. Make sure that negative counts are displayed correctly. We also have to check the count for one full revolution.
To drive a DC motor, we need a power amplifier. There are two types of power amplifiers for DC motors: PWM amplifiers and linear amplifiers. Both types have pros and cons. If we want to use a linear power amplifier, the microcontroller must have a D/A converter. If the microcontroller does not have a D/A converter, there is no option but to choose a PWM amplifier. Since our board has D/A converters, we choose to use a linear power amplifier. We can build an inexpensive power amplifier with op-amps and power transistors, as shown in Figure 3-70.
Exercise 3.1 Enable DAC2 and connect D/A output to the linear power amplifier
for a DC motor. Output a constant positive voltage (output of the voltage conversion
circuit for D/A) and watch encoder counter value changes as the motor runs. If the
counter value decreases for a positive D/A converter voltage, reverse the motor input
wiring. Repeat for the negative D/A voltage (output of the voltage conversion circuit
for D/A). We must have an increasing encoder count for a positive D/A output volt-
age and a decreasing encoder count for a negative D/A output voltage. Remember the wiring polarity; this is the setup we will use for the feedback control of the DC motor in the following chapters.
In the above equation, we can regard Tm as a time constant. If we want to use the above transfer function as a control system model, we do not need to know all the individual parameters. Finding Kb and Tm is enough for control system analysis and design. Note that the output of the above transfer function is the angle of the motor. If we take the velocity as the output of the system, we have the following first-order transfer function:

Ω(s)/Ea(s) = (1/Kb)/(Tm s + 1)

In the above equation, Ω(s) is the Laplace transform of the angular velocity ω, which is the derivative of the angle θ. If we know the time constant Tm and the constant Kb (the DC gain of the velocity model is 1/Kb), we can determine the transfer function.
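As a minimal MATLAB sketch (not part of the lab code), the first-order velocity model can be built directly from Kb and Tm; the numerical values below are only examples, taken from the measurement later in this section.

Kb = 1/55.6; Tm = 0.015;      % example parameter values
G = tf(1/Kb,[Tm 1]);          % Omega(s)/Ea(s) = (1/Kb)/(Tm*s + 1)
step(10*G, 0.1)               % velocity response to a 10 V step input
grid on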
In this lab, we record the actual step response of a DC motor velocity and try to find the
parameters of the above transfer function. Start a new STM32 project in STM32CubeIDE
with the name MotorSpeed. Project configuration is similar to the project ADDAconversion
in Lab 2, except we need to use Timer 2 to read encoder signals. We may want to start
with the same configuration as the project ADDAconversion, then additionally configure
the Timer 2 for the encoder signal as above. In addition to the timer, we need to enable an external interrupt triggered by a push button so that we can generate a step input to the motor. There is a blue user push button on the STM32F429 Discovery board, and we want an interrupt generated by this button. Find PA0 in the Pinout view and change it to GPIO_EXTI0. Then, select the NVIC menu and enable the EXTI line0 interrupt, as shown in Figure 3-71.
After generating code, type the following code in the main source file. In addition to the following code, we need Code 3.1 and Code 3.2 for the printf function.
/* USER CODE BEGIN PV */
#define CAPTURE_START 1
#define CAPTURE_FINISHED 2
int data_flag=0;
int data_counter=0,sf=1000;
int data[4000];
/* USER CODE END PV */
Code 3.5
Code 3.6
/* Infinite loop */
/* USER CODE BEGIN WHILE */
while (1)
{
if (data_flag == CAPTURE_FINISHED) {
printf("%d %d\r\n",0,0);
for (int i=1; i < 2*sf ;i++){
printf("%d %d\r\n",i,data[i]-data[i-1]);
}
HAL_GPIO_WritePin(GPIOG, GPIO_PIN_14, GPIO_PIN_RESET);
data_flag = 0;
}
/* USER CODE END WHILE */
Code 3.7
Code 3.8
}
}
/* USER CODE END Callback 0 */
Code 3.9
After completing all the wiring, open the serial terminal program. The SmarTTY program has a feature to save the screen output to a file. If we press the blue user button on the Discovery board, the board outputs 5 V to the linear power amplifier for two seconds. After two seconds, the program stops the motor and prints the velocity values on the serial screen, as shown in Figure 3-72. The first column is the index of the data array, and the second column is the velocity value. Since the sampling frequency is 1 kHz and the duration is 2 seconds, we have 2000 data points. After the data capture is finished, press the diskette icon in the menu bar, type in a data file name of your choice, and save the file. Before you start capturing data, make sure to clear the screen by pressing the clear-screen icon (red cross).
By running the following MATLAB program, we can draw an angular velocity step response,
as shown in Figure 3-73. In the MATLAB program, the data file name is assumed to be data.
clear
clf
sf=1000;
load -ascii data
for i=1:2*sf
x(i)=data(i,1)/sf;
y(i)=data(i,2);
end
figure(1)
plot(x,y)
axis([0 0.2 0 50])
xlabel(’Time(sec)’);
ylabel(’Encoder count/(1msec)’);
grid on
Code 3.10
The quadrature encoder used in Figure 3-73 has 384 counts per revolution since the encoder
generates 96 pulses per revolution. The steady-state value is about 34 in Figure 3-73. There-
fore, we can calculate the steady-state speed of the motor as follows. Note that the sampling
period is 1 ms.

34 × 1000 × 2π / (96 × 4) = 556 (rad/s)    (3.254)
Since the power amplifier gain used in this response is 2, the applied motor voltage is 10
Volts. Therefore, the DC gain of the system is 55.6. We can estimate the time constant
by finding the time when the response reaches 63 percent of the steady-state value. From
Figure 3-73, we can find that the approximate time constant is 0.015. From the above mea-
surements, we can estimate the transfer function as follows.
Ω(s)/Ea(s) = sΘ(s)/Ea(s) = (1/Kb)/(Tm s + 1) = 55.6/(0.015s + 1)    (3.255)
We may repeat the above experiment for various D/A converter values. Since the real motor
is not an ideal linear system, we may find variations in values for the same parameter. De-
pending on the number of experiments, we can take the average values or use the regression
method.
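The following minimal MATLAB sketch shows one way to automate the estimation described above; it assumes the data file name data and the 10 V step amplitude from the text, while the variable names and the averaging window are choices made only for this example.

sf = 1000;                          % sampling frequency (Hz)
load -ascii data                    % two columns: index, counts per 1 ms
v   = data(:,2)*sf*2*pi/(4*96);     % convert counts/ms to rad/s, as in Eq. (3.254)
vss = mean(v(end-200:end));         % steady-state speed (average of last 0.2 s)
Kdc = vss/10;                       % DC gain for the 10 V motor voltage
idx = find(v >= 0.632*vss, 1);      % first sample reaching 63.2% of the final value
Tm  = data(idx,1)/sf;               % time constant estimate in seconds
fprintf('DC gain = %.1f, Tm = %.3f s\n', Kdc, Tm);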
Problems
Problem 3.6 There are transmission mechanisms such as gears, belts, and chains in
the robot manipulator axes. Motors drive links through transmission mechanisms. We
usually regard these mechanisms as rigid bodies. However, these mechanical parts are
not ideally rigid and have some elasticity. Due to the elasticities of mechanical parts,
robot manipulators have vibrations when they move. In the control system design
for robots, controlling vibration is a very important problem. We have to include the
elasticity in the model to handle vibrations properly. We can model the elasticity in
the transmission mechanisms using a spring, as shown in Figure 3.79.
(1) Find the dynamic equation of the system.
(2) Find the transfer function Θ2 (s)/Ea (s).
Problem 3.7 Find the transfer function and impulse response of the following sys-
tems. Also, find poles and zeros.
(1) d²y/dt² + 2 dy/dt + y = u(t)

(2) d²y/dt² + 5 dy/dt + 4y = du(t)/dt + 2u(t)
Problem 3.8 Find the transfer functions of the following block diagrams.
(1)
(2)
(3)
Problem 3.9 Convert the block diagrams in the previous problem to signal flow graphs.
Then, find the transfer functions using Mason’s rule.
Problem 3.10 Find the transfer functions of the following signal flow graphs using
Mason’s rule.
(1)
(2)
(3)
Problem 3.11 Find the controllable canonical forms and observable canonical forms
of the following systems. Draw signal flow graphs.
(1) d²y/dt² + 2 dy/dt + y = u(t)

(2) d²y/dt² + 5 dy/dt + 4y = du(t)/dt + 2u(t)

(3) d³y/dt³ + 2 d²y/dt² + 5 dy/dt + 4y = d²u(t)/dt² + 2 du(t)/dt + 3u(t)
Problem 3.12 Draw signal flow graphs of the systems in the previous problem. Find
the transfer functions using Mason’s rule.
Problem 3.13 Find the controllable canonical forms and observable canonical forms
of the following systems. Draw signal flow graphs.
(1) G(s) = (s + 4)/(s² + 3s + 2)
(2) G(s) = (s² + 2s + 4)/(s³ + 5s² + 3s + 2)
(3) G(s) = (s² + 5s + 4)/(s² + 3s + 2)
Problem 3.14 Using the following state variables, find the state equation of the system
in Problem 3.1.
x1 = y1 , x2 = ẏ1 , x3 = y2
Problem 3.15 Using the following state variables, find the state equation of the system
in Problem 3.3.
x1 = y1 , x2 = ẏ1 , x3 = y2 , x4 = ẏ2
Problem 3.16 Using the following state variables, find the state equation of the system
in Problem 3.4.
x1 = θ1 , x2 = θ̇1 , x3 = θ2 , x4 = θ̇2
Problem 3.17 Using the following state variables, find the state equation of the system
in Problem 3.6.
x1 = θ1 , x2 = θ̇1 , x3 = θ2 , x4 = θ̇2 , x5 = ia
Problem 3.18 In the following circuit, the input is , and the output is the capacitor
voltage y. Using the following state variables, find the state equation of the circuit.
x1 = y, x2 = iL
Problem 3.19 In the following circuit, the input is , and the output is the capacitor
voltage y. Using the following state variables, find the state equation of the circuit.
x1 = y, x2 = iL , x3 = vC
Chapter 4. Control System Performances
The purpose of control systems is to make the system respond to the command as desired.
Most control systems are designed using the feedback concept. There are some cases where
non-feedback controllers are used. However, control systems engineering deals with mostly
feedback control systems.
Let us consider a room temperature control system as a simple example. Every room has
a room temperature control system, and the users can set the desired temperature. Figure
4.1 shows a feedback concept used in the room temperature control system.
When we enter a room in cold weather, we set the desired room temperature on the control
panel. Then, the controller compares the current room temperature with the desired tem-
perature. If the room temperature is lower than the desired temperature, the temperature
controller turns on the heater. The heater stays on until the room temperature reaches the
desired temperature. Once the desired temperature is reached, the controller maintains the
temperature at the desired value. In this situation, we have two objectives. First, we want the temperature to reach the desired value as soon as possible. Second, we want the room temperature to stay at the desired value without error. How well these two objectives are achieved is determined by the performance of the control system. In control systems engineering, we call these two objectives transient and steady-state performance.
Let us take another example. A multicopter or multirotor is a UAV (unmanned aerial vehicle) that is propelled by multiple motor-driven propellers. If it has four propellers, we call it a quadcopter or quadrotor. Figure 4.2 shows a quadcopter.
Propulsion forces generated by four propellers control the attitude of a quadcopter. The
flight controller has sensors to measure the attitude of the quadcopter. Using the feedback
signals from sensors, the controller maintains the attitude of the aircraft. Figure 4.3 shows
the conceptual diagram of the attitude feedback control system.
Chapter 4. Control System Performances 129
If the controller receives a desired angle command, the controller compares the feedback
signal from the angle sensor with the desired value. If there is an error, the controller out-
puts the control signal to the motors to take corrective action. Figure 4.4 shows examples of responses when the corrective action takes place.
If we look at the response on the left in Figure 4.4, there is an oscillation before the response
curve reaches the final value. In this case, the quadcopter shows oscillatory movements. On
the other hand, the response on the right smoothly reaches the final value. The performance
of the control system determines the differences in the responses of the system. It is crucial
to design a controller so that the performance of the control system satisfies the require-
ments. In this chapter, we discuss the definitions of control system performances and how
to evaluate them.
If H(s) in the feedback path is a unity gain transfer function, i.e., H(s) = 1, the system is called
a unity feedback control system. We may convert a non-unity feedback control system to a
unity feedback control system. For example, we can change the block diagram in Figure 4.5
to the unity feedback control system in Figure 4.6 by a block diagram operation. Note that
both block diagrams have the same transfer function.
The purpose of the control system is to make the output follow the command. Therefore,
when there is a change in the reference command, the controller must take action to make
the output track the reference command promptly and accurately. The performances of the
control system are determined by how the system reacts to the changes of command. Let us
take a simple example in Figure 4.7.
The controlled system in Figure 4.7 is a first-order system, and the controller is a constant gain. This control system has the simplest configuration. Y(s) is the output of the system,
U (s) is the control signal, and R(s) is the reference input. Consider the case when the refer-
ence input is changed from zero to one at t = 0. Then, the output of the system starts from
zero and approaches one. Figure 4.8 shows the response curves for K = 5 and K = 10.
As we can see in Figure 4.8, the response of the case K = 10 is faster than that of the case
K = 5. Also, the final value of the case K = 10 is closer to one than that of the case K = 5. We
can say that the case K = 10 shows a better performance than the case K = 5. When we assess
the performance of the control system, we have to look at two features of the response. First, we have to see how fast and smoothly the output approaches the reference. Second, we have to see how close the final value is to the reference. We call these features the transient and steady-state performances. The transient performance is about the behavior of the output before it reaches the final value. We usually use several quantitative measures to evaluate the transient performance. The steady-state performance is about the final state of the output. We typically use the steady-state error to assess the steady-state performance.
To assess the performance of a control system, we apply test input signals to it. There are three typical input signals used to test control systems: a unit step, a unit ramp, and a unit parabolic input, as defined below. We multiply t² by 1/2 because we want the Laplace transform of the function to be 1/s³. Figure 4.9 shows the graphs of the test signals.
unit step input: us(t) = 1 for t ≥ 0, 0 for t < 0    (4.1)

unit ramp input: ur(t) = t for t ≥ 0, 0 for t < 0    (4.2)

unit parabolic input: up(t) = t²/2 for t ≥ 0, 0 for t < 0    (4.3)
The responses to the above test signals are called a unit step response, a unit ramp response,
and a unit parabolic response. Figure 4.8 is an example of a unit step response. We usu-
ally do not use these test signals in the actual operations of control systems; however, we
frequently use these signals for testing.
Y(s)/R(s) = K/(s + K + 1)    (4.4)
In the above equation, let us assume that Y(s) is the Laplace transform of y(t) and R(s) is the Laplace transform of r(t). Since sY(s) is the Laplace transform of the derivative of y(t), we can change the above equation to the following differential equation:

dy(t)/dt + (K + 1) y(t) = K r(t)    (4.6)

We can obtain the unit step response in Figure 4.8 by solving Eq. (4.6) with r(t) = us(t) and
zero initial condition. The following equation is the solution.
y(t) = −(K/(K + 1)) e^(−(K+1)t) + K/(K + 1), t ≥ 0    (4.7)
On the right-hand side in the above equation, the first term is the homogeneous solution,
and the second term is the particular solution. When the time t approaches infinity, the
homogeneous solution approaches zero, and the whole solution approaches the particular
solution. The following is the limit value of the solution as the time t goes to infinity.
yss = lim(t→∞) y(t) = K/(K + 1)    (4.8)
The above state is called a steady-state. Theoretically, the steady-state is reached when the
time is infinity. However, in reality, this never happens. In real applications, we assume that the steady state is reached when the homogeneous solution is small enough to be ignored.
If we look at the above solution of the differential equation, we can see that the transient
performance is determined by the homogeneous solution, while the particular solution de-
termines the steady-state performance. We can obtain the response in Figure 4.8 by plugging
K = 5 and K = 10 in Eq. (4.7), as follows:
K = 5 : y(t) = −(5/6) e^(−6t) + 5/6, t ≥ 0    (4.9)

K = 10 : y(t) = −(10/11) e^(−11t) + 10/11, t ≥ 0    (4.10)
From the above equations, we can see that the case K = 10 is faster than the case K = 5. Also, the steady-state value of the case K = 10 is closer to 1 than that of the case K = 5.
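A minimal MATLAB sketch reproducing the two responses, assuming the controlled system is 1/(s + 1) with a constant-gain controller in unity feedback as in Eq. (4.4):

G = tf(1,[1 1]);              % first-order controlled system
GT5  = feedback(5*G,1);       % closed loop with K = 5
GT10 = feedback(10*G,1);      % closed loop with K = 10
step(GT5,2); hold on
step(GT10,2); hold off
legend('K = 5','K = 10'); grid on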
There are several parameters we can use to assess the transient performance. The follow-
ing list and Figure 4.10 show the definitions of the parameters.
• Overshoot Mp : If the output passes over the steady-state value, the overshoot is the maximum peak output minus the steady-state value. The overshoot is usually expressed as a percentage of the steady-state value.
• Peak time tp : Peak time is the time required for the output to reach the first peak of the
overshoot.
• Settling time ts : The settling time is the time required for the output to reach and stay
within an error boundary about the final value. The error boundary is usually rep-
resented by the percentage of the final value (for example, 2% or 5%). The shorter
the settling time, the faster the response. The settling time is dependent on the error
boundary. We have a longer settling time for the smaller error boundary. To define the
settling time, we have to determine the error boundary first. For example, if the error
boundary is 2%, we call it 2% settling time.
• Rise time tr : The rise time is the time required for the output to rise from 10% to 90% of
its final value.
Let us find the unit step response of the above first-order system, GT(s) = Y(s)/R(s) = 1/(τs + 1) from Eq. (4.12), with zero initial conditions. Assume the Laplace transform of the unit step function us(t) is Us(s). Then, if we replace R(s) with Us(s), we can obtain the following relationship.

Y(s) = Us(s)/(τs + 1)

In the time domain, this corresponds to the first-order differential equation

τ dy(t)/dt + y(t) = us(t)    (4.14)

The following equation and Figure 4.11 show the solution of Eq. (4.14).
y(t) = 1 − e^(−t/τ), t ≥ 0    (4.15)
The steady-state value of Eq. (4.15) is 1. Therefore, if we subtract the time when the output
reaches 0.1 from the time when the output reaches 0.9, we can find the rise time as follows:
tr = τ ln 9 (4.16)
As defined above, we need to determine the error boundary to find the settling time. For
example, let us find the 2% settling time. We can find the 2% settling time ts by solving the
following equation.
1 − e^(−ts/τ) = 1 − 0.02    (4.17)
Since the first-order system response does not have an overshoot, the overshoot value and
peak time are not defined. As can be seen above, both the rise time and the settling time
are proportional to the value of τ. In the first-order system response, τ is called the time
constant and determines the speed of the response. The unit of the time constant is seconds. At time t = τ, the output has the value 1 − e⁻¹ = 0.632. In other words, the time constant is the time when the output response reaches 63.2% of the steady-state value. We can see that the 2% settling time is about four times the time constant τ. Ideally, the steady state is reached only when the transient response has decayed to zero. In practice, however, we can say that the steady state is reached after about four time constants, since the error is then less than 2%.
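These relationships are easy to verify numerically. The following minimal MATLAB sketch compares Eq. (4.16) and the four-time-constant rule with the values reported by stepinfo; tau = 0.5 is an arbitrary example value.

tau = 0.5;
G = tf(1,[tau 1]);
S = stepinfo(G,'SettlingTimeThreshold',0.02);   % 2% settling time
[S.RiseTime  tau*log(9)]                        % rise time vs. tau*ln(9)
[S.SettlingTime  4*tau]                         % 2% settling time vs. about 4*tau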
Example 4.1 Let us consider the DC motor model in section 3.1. Assume the input of the system is the armature voltage ea , and the output is the motor speed dθ/dt. To simplify the problem, let us assume the inductance La and the damping coefficient b are zero. By taking the Laplace transform of the differential equations Eqs. (3.41) and (3.42), we can find the following transfer function.

Ω(s)/Ea(s) = (1/Kb)/(τm s + 1)    (4.19)

Since Eq. (4.19) has the same form as Eq. (4.12), the motor time constant τm can be defined as follows:

τm = Ja Ra /(Kt Kb)    (4.20)
The motor time constant is proportional to the inertia Ja and inversely proportional to
the torque constant Kt . Therefore, the response of the motor with large inertia is slow.
On the other hand, the response of the motor with a large torque constant is fast. Since
the numerator of the transfer function is 1/Kb , the steady-state value of the unit step
response is 1/Kb . Therefore, by finding the steady-state value of the motor speed, we
can find the back emf constant Kb . Let us configure a feedback speed control system,
as in Figure 4.12. The following equation is the closed-loop transfer function of the
speed control system in Figure 4.12.
GT(s) = Y(s)/R(s) = [(K/Kb)/(τm s + 1)] / [1 + (K/Kb)/(τm s + 1)] = (K/Kb)/(τm s + 1 + K/Kb) = (K/(Kb + K)) / ((Kb τm/(Kb + K)) s + 1) = (K/(Kb + K)) · 1/(τs + 1)    (4.21)
From the above equation, we can see that the time constant of the closed-loop system
is as follows:
τ = Kb τm /(Kb + K)    (4.22)
If we increase the controller gain K, we can make the time constant τ smaller; in turn, we have a faster response.
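A minimal MATLAB sketch of this effect, using the motor parameters estimated in Lab 3 (1/Kb = 55.6, τm = 0.015) as example values and a few arbitrary gains:

Kb = 1/55.6; taum = 0.015;
G = tf(1/Kb,[taum 1]);              % open-loop speed model, Eq. (4.19)
for K = [0.01 0.05 0.2]
    GT  = feedback(K*G,1);          % closed loop of Figure 4.12
    tau = Kb*taum/(Kb + K);         % Eq. (4.22)
    fprintf('K = %.2f, tau = %.4f s\n', K, tau);
    step(GT,0.1); hold on
end
hold off; grid on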
Next, consider the transient response of the second-order system. Figure 4.13 shows a simple
example of the second-order system.
Y(s)/R(s) = GT(s) = 1/((s/ωn)² + 2ζ(s/ωn) + 1) = ωn²/(s² + 2ζωn s + ωn²)    (4.25)
First, let us find the unit step response of the prototype second-order system. For simplicity,
let us assume all the initial conditions are zero. If we assume the Laplace transform of the
unit step us (t) is Us (s), then we can plug R(s) = Us (s) in Eq. (4.25). Since Us (s) = 1/s , we have
the following relationship.
Y(s) = GT(s)Us(s) = ωn²/((s² + 2ζωn s + ωn²) s)    (4.26)
We can obtain the unit step response of the second-order system by finding the inverse
Laplace transform of Eq. (4.26). We can find the inverse Laplace transform by applying
the partial fraction expansion to Eq. (4.26). The characteristic equation of the above system is

s² + 2ζωn s + ωn² = 0    (4.27)

First, let us assume that the characteristic equation has two distinct roots.
If we assume s1 and s2 are roots of the characteristic equation, we can find the partial fraction
expansion of Eq. (4.26) as follows:
Y(s) = ωn²/((s − s1)(s − s2)s) = K1/(s − s1) + K2/(s − s2) + 1/s    (4.28)
K1 = ωn²/((s1 − s2) s1)    (4.29)

K2 = ωn²/((s2 − s1) s2)    (4.30)
If Eq. (4.27) has complex roots, they are complex conjugate numbers. We can easily show
that K1 and K2 are also complex conjugate numbers. Using the above results, we can find
the unit step response as follows:

y(t) = K1 e^(s1 t) + K2 e^(s2 t) + 1, t ≥ 0    (4.31)
Second, let us assume that the characteristic equation of the above system has double roots.
The partial fraction expansion of Eq. (4.26) is as follows:
Y(s) = ωn²/((s − s1)² s) = K1/(s − s1) + K2/(s − s1)² + 1/s    (4.32)
K1 = −ωn²/s1²    (4.33)

K2 = ωn²/s1    (4.34)
Using the above results, we can find the unit step response as follows:

y(t) = K1 e^(s1 t) + K2 t e^(s1 t) + 1, t ≥ 0    (4.35)
The inverse Laplace transform of Eq. (4.26) is the complete solution of the differential equation Eq. (4.37). The complete solution of the differential equation is the sum of the homogeneous solution and the particular solution. We can also solve the differential equation Eq. (4.37) without using the Laplace transform. To do so, we have to solve the following characteristic equation. Note that it is the same as Eq. (4.27), which we solved to find the partial fraction expansion.

s² + 2ζωn s + ωn² = 0    (4.38)
Using the quadratic formula, we can find the solution of the characteristic equation as follows:

s1 = −ζωn + √((ζ² − 1) ωn²)    (4.39)

s2 = −ζωn − √((ζ² − 1) ωn²)    (4.40)
Let us assume ζ ≥ 0. Then, we have three different cases depending on the value of ζ. In
each case, the response shows a different transient response.
• ζ > 1 : two distinct real roots
In Eqs. (4.39) and (4.40), if ζ > 1, the numbers inside the roots are positive. Therefore,
both roots are real numbers. Let us define the following real constants.
τ1 = −1/s1 = 1/(ζωn − √((ζ² − 1) ωn²))    (4.41)

τ2 = −1/s2 = 1/(ζωn + √((ζ² − 1) ωn²))    (4.42)
The above constants are positive real numbers and play a role similar to that of the time constant in the first-order system. (Remember that the root of the characteristic equation of the first-order differential equation Eq. (4.14) is −1/τ.) In this case, the solution of the differential equation Eq. (4.37) is as follows:
y(t) = K1 e^(−t/τ1) + K2 e^(−t/τ2) + 1, t ≥ 0    (4.43)
Since τ1 and τ2 are both positive, the output converges to the particular solution, which is
one in this case. Note that the homogeneous solution is in the form of exponential functions.
This case is called overdamped.
Example 4.2 Let us find the unit step response of the following system with zero
initial conditions.
G(s) = 2/(s² + 3s + 2) = 2/((s + 1)(s + 2))    (4.44)
If we assume Y (s) is the Laplace transform of the output and U (s) is the Laplace trans-
form of the input, we have the relationship Y (s) = G(s)U (s). Since the Laplace trans-
form of the unit step is 1/s, the Laplace transform of the output is as follows:
Y(s) = G(s)U(s) = G(s)(1/s) = 2/((s + 1)(s + 2)s) = −2/(s + 1) + 1/(s + 2) + 1/s    (4.45)
If we take the inverse Laplace transform of the above equation, we can obtain the following output response.

y(t) = −2e^(−t) + e^(−2t) + 1, t ≥ 0    (4.46)

MATLAB We can specify the final time in the step function, which is 10 sec in this case. We can set the axis limits using the axis command.
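A minimal sketch of the MATLAB commands described above (the original listing is not reproduced here; the axis limits are an assumption):

G = tf(2,[1 3 2]);        % G(s) of Eq. (4.44)
step(G,10)                % unit step response with a 10 sec final time
axis([0 10 0 1.2])        % example axis limits
grid on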
If ζ < 1 in Eq. (4.39), the numbers inside the roots are negative, and the roots become
complex numbers, as shown below.
s1 = −ζωn + √((ζ² − 1) ωn²) = −ζωn + j√((1 − ζ²) ωn²)    (4.47)

s2 = −ζωn − √((ζ² − 1) ωn²) = −ζωn − j√((1 − ζ²) ωn²)    (4.48)
Let us define the following constants:
σ = −ζωn (4.49)
ωd = √((1 − ζ²) ωn²) = ωn √(1 − ζ²)    (4.50)
With the above constants, the roots can be represented as follows:
s1 = σ + jωd (4.51)
s2 = σ − jωd (4.52)
Then, the solution of Eq. (4.37) is as follows:
y(t) = K1 e^(s1 t) + K2 e^(s2 t) + 1 = K1 e^(s1 t) + K1* e^(s1* t) + 1, t ≥ 0    (4.53)
In the above equation, * denotes the complex conjugate, and K1 is as follows:

K1 = ωn²/(2jωd (σ + jωd))    (4.54)
We have complex function terms in Eq. (4.53); however, we can rearrange the terms so that
the response has only real function terms as in the following.
y(t) = K1 e^(σt + jωd t) + K1* e^(σt − jωd t) + 1
     = |K1| e^(j∠K1) e^(σt + jωd t) + |K1| e^(−j∠K1) e^(σt − jωd t) + 1
     = |K1| e^(σt) (e^(j(ωd t + ∠K1)) + e^(−j(ωd t + ∠K1))) + 1
     = 2|K1| e^(σt) cos(ωd t + ∠K1) + 1, t ≥ 0    (4.55)
The homogeneous solution in the above response is a cosine function multiplied by an ex-
ponential function eσ t . The frequency of the cosine function is ωd (rad/sec). Since σ is a
negative real number, the homogeneous solution converges to zero. This case is called underdamped. If we plug Eqs. (4.49), (4.50), and (4.54) into (4.55), we can obtain the following
equations.

y(t) = 2|K1| e^(−ζωn t) cos(ωn √(1 − ζ²) t + ∠K1) + 1
     = −(1/√(1 − ζ²)) e^(−ζωn t) cos(ωn √(1 − ζ²) t − φ) + 1, t ≥ 0    (4.56)

In the above equation, φ is defined as follows:

φ = tan⁻¹(ζ/√(1 − ζ²))    (4.57)
Remember that the roots of the characteristic equation are the poles of the transfer function.
Figure 4.15 shows the locations of the poles in the underdamped case.
In Figure 4.15, the distance between the origin and the pole is ωn regardless of the value
of ζ. Note that we have the relationship ζ = sin θ. In Eq. (4.56), all the time variables
are multiplied by ωn . Therefore, the speed of the response is directly proportional to ωn ; a system with a larger ωn responds faster. As an example, Figure 4.16 shows the
responses of the case ωn = 1 and ωn = 2 for ζ = 0.2. As we can see in the figure, the speed of
the response of the case ωn = 2 is twice the speed of the case ωn = 1. The transient response
speed of the second-order system is proportional to the distance between the origin and the
pole.
Next, consider the influence of ζ on the response. In Eq. (4.56), the decay rate of the
exponential term is varied by changing the value of ζ. In other words, ζ controls the decay
rate of the oscillation and is called the damping ratio. In addition to the decay rate, the value of ζ influences the frequency of the sinusoidal function. The frequency of the sinusoidal function ωd
is determined by Eq. (4.50). We can see that ωd is less than ωn for 0 < ζ < 1. If we increase
ζ, the frequency ωd is decreased, and the oscillation gets slower. We call the frequency ωd
a damped natural frequency. Figure 4.17 shows how the response varies according to the
changes in the value of ζ when ωn = 1.
Example 4.3 If we remove the spring element in the Example 3.1 system, we have the
system in Figure 4.18. Let the mass position y be the output, and the external force f
be the input. Then, the transfer function is as follows:
G(s) = Y(s)/U(s) = 1/(s(Ms + B))    (4.58)
Let us configure a simple position feedback system, as in Figure 4.19. The closed-loop
transfer function of the system in Figure 4.19 is as follows:
GT(s) = Y(s)/R(s) = [K/(s(Ms + B))] / [1 + K/(s(Ms + B))] = K/(Ms² + Bs + K)    (4.59)
In Example 3.6, we found the transfer function of the system in Example 3.1. If we compare the above transfer function with the transfer function in Example 3.6, we can see
that both transfer functions have the same denominators. The system in Figure 4.18
does not have a spring element; however, the feedback system has the same character-
istic as the system with a spring element. Let us find the unit step response for the
following parameter values.
M = 2, B = 6, K = 18 (4.60)
Then, the closed-loop transfer function is as follows:
GT(s) = 18/(2s² + 6s + 18) = 9/(s² + 3s + 9)    (4.61)
If we compare the above transfer function with Eq. (4.25), we can obtain the following
parameters.
2ζωn = 3 (4.62)
ωn2 = 9 (4.63)
From the above relationship, we can determine ζ = 0.5, ωn = 3. Figure 4.20 shows the
step response.
MATLAB We can draw a unit step response of the system using the following com-
mands. We can find the closed-loop transfer function using the MATLAB command
feedback.
M=2;B=6;K=18;
G=tf(1,[M B 0])
GT=feedback(K*G,1)
step(GT,20)
Using the unit step response found above, we can find the formulas for the transient perfor-
mance parameters, as shown below.
If we take the derivative of y(t) in Eq. (4.56), we have the following equation.

ẏ(t) = (ωn/√(1 − ζ²)) e^(−ζωn t) [ζ cos(ωn √(1 − ζ²) t − φ) + √(1 − ζ²) sin(ωn √(1 − ζ²) t − φ)]
     = (ωn/√(1 − ζ²)) e^(−ζωn t) [sin φ cos(ωn √(1 − ζ²) t − φ) + cos φ sin(ωn √(1 − ζ²) t − φ)]
     = (ωn/√(1 − ζ²)) e^(−ζωn t) sin(ωn √(1 − ζ²) t)    (4.64)
The following equation is the time when the derivative of y(t) is zero.
t = nπ/(ωn √(1 − ζ²))    (4.65)
Since the peak time is the time when the response reaches the first peak, the peak time can
be obtained by plugging n = 1 in the above equation. Therefore, the following is the formula
for the peak time tp .

tp = π/(ωn √(1 − ζ²))    (4.66)
If we plug tp in Eq. (4.56), we can obtain the first peak value as follows:
y(tp) = 1 + e^(−ζπ/√(1 − ζ²))    (4.67)
By the definition of the overshoot, the following is the formula for the overshoot Mp .
Mp = e^(−ζπ/√(1 − ζ²))    (4.68)
As we can see in the above equation, the overshoot is dependent on ζ while it is not related
to ωn . Figure 4.16 shows the responses with the same ζ for two different ωn . We can see that
the two responses in Figure 4.16 have the same overshoot. Figure 4.21 shows the changes in
the overshoot as the damping ratio ζ varies.
Settling time ts :
In order to define the settling time, the error boundary has to be determined first. Let us
assume that the error boundary is 5% and find the 5% settling time. For different error
boundaries, we can follow a similar procedure. The 5% settling time is the time required
for the response in Eq. (4.56) to reach and stay between 0.95 and 1.05. If we use the enve-
lope function in Eq. (4.56), we can find the settling time easily. The response in Eq. (4.56)
is in the form of the sinusoidal function multiplied by an exponential function. As we can
see in Figure 4.22, we may use the exponential envelope function to find the approximate settling time, which can be obtained by finding the time when the value of e^(−ζωn t)/√(1 − ζ²) is 0.05. Note that the settling time obtained from the envelope function may be longer than the actual settling time.
Therefore, the formula for the approximate 5% settling time can be obtained as follows:
e^(−ζωn ts)/√(1 − ζ²) = 0.05    (4.70)

5% settling time: ts = −ln(0.05 √(1 − ζ²))/(ζωn) (sec)    (4.71)
Figure 4.23 shows the graph of Eq. (4.71). Note that the vertical axis of this graph is ωn ts .
Therefore, we can use this graph for various values of ωn . For example, the value of the
graph is 30 when ζ = 0.1. If ωn = 2.0, the settling time is ts = 30/ωn = 30/2.0 = 15 . As we
can see in Figure 4.23, the settling time is almost inversely proportional to the damping ratio
ζ. We can approximate the relationship as follows:
5% settling time: ts ≈ 3.2/(ζωn) (sec)    (4.72)
Using a similar procedure, we can find the following formula for the 2% settling time.
Note that 2% settling time is longer than 5% settling time.
2% settling time: ts = −ln(0.02 √(1 − ζ²))/(ζωn) ≈ 4/(ζωn) (sec)    (4.73)
Rise time tr :
We can find the rise time for different damping ratios by reading the time required for the
output to rise from 0.1 to 0.9 in Figure 4.17. Instead of finding the analytical formula, we
can draw the curve in Figure 4.24 by numerically finding the values using a computer. Note
that the horizontal axis is ωn tr . Since we do not have an analytical formula, the following ap-
proximate formula may be useful. Figure 4.24 shows the curve obtained from the numerical
computation and the approximate line.
tr ≈ (2.16ζ + 0.60)/ωn    (4.74)
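The formulas above can be checked numerically. The following minimal MATLAB sketch compares Eqs. (4.66), (4.68), (4.72), and (4.74) with the values reported by stepinfo for an example system with ζ = 0.5 and ωn = 3.

zeta = 0.5; wn = 3;
GT = tf(wn^2,[1 2*zeta*wn wn^2]);                % prototype system, Eq. (4.25)
S  = stepinfo(GT,'SettlingTimeThreshold',0.05);  % 5% settling time
Mp = exp(-zeta*pi/sqrt(1-zeta^2));               % overshoot, Eq. (4.68)
tp = pi/(wn*sqrt(1-zeta^2));                     % peak time, Eq. (4.66)
ts = 3.2/(zeta*wn);                              % 5% settling time, Eq. (4.72)
tr = (2.16*zeta + 0.60)/wn;                      % rise time, Eq. (4.74)
[100*Mp S.Overshoot; tp S.PeakTime; ts S.SettlingTime; tr S.RiseTime]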
Double roots: ζ = 1
As a boundary case, let us consider the case ζ = 1. In this case, Eq. (4.39) has the following double root.

s = −ωn    (4.75)

Plugging this double root into Eq. (4.35), the unit step response becomes y(t) = 1 − e^(−ωn t) − ωn t e^(−ωn t), t ≥ 0.
If we plug ζ = 0 in Eqs. (4.39) and (4.40), we have the following roots with zero real parts.
s1 = jωn (4.77)
s2 = −jωn (4.78)
We can obtain the following response by plugging ζ = 0 in Eq. (4.55).

y(t) = 1 − cos(ωn t), t ≥ 0    (4.79)
Since the above response has a purely sinusoidal function, it oscillates forever. In other
words, the zero damping factor means we have a sustained oscillation without damping.
The frequency of this oscillation is ωn , which is called an undamped natural frequency. The
undamped natural frequency ωn is always greater than the damped natural frequency ωd .
Example 4.4 If we remove the damping element in Figure 2.20, we have the system in Figure 4.25. Let the position of the mass y be the output, and the external force f be the input u. Then, we have the following transfer function.

G(s) = Y(s)/U(s) = 1/(Ms² + K) = (1/M)/(s² + K/M)    (4.80)
If we compare the above transfer function with Eq. (4.25), we can obtain the following
parameters.
ζ=0 (4.81)
ωn = √(K/M)    (4.82)
Figure 4.26 shows the step response for the following cases.
M = 0.25, K = 1, ωn = 2 (4.83)
M = 1, K = 1, ωn = 1 (4.84)
As we can see in the figure, we have a slower oscillation for a larger mass.
MATLAB We can draw the step responses of this example using the following MAT-
LAB commands.
M=0.25;K=1;
G=tf(1,[M 0 K])
t=0:0.001:20;
step(G,t)
In the MATLAB step function, we can specify the time vector instead of the final time.
In this example, we generate a time vector from 0 to 20 sec with 0.001 sec interval.
In this way, we have a fine resolution in the time scale; in turn, we can have a smooth
response curve.
If ζ < 0, the roots of the characteristic equation have positive real parts, and they are on the
right half-plane. In this case, the response is the same as Eq. (4.55); however, the exponen-
tial term e−ζωn t increases as the time increases. Therefore, the amplitude of the oscillation
increases as time increases. This case is called negative damping.
Example 4.5 Let us assume that the damping coefficient B is negative in Example 4.3.
If B = −0.6, then ζ = −0.05 and ωn = 3. Figure 4.27 shows a unit step response.
We can summarize all the above cases in the following. If we draw the paths of the roots
for varying ζ with a constant ωn , we have the trajectories in Figure 4.28.
Remember that the distance between the origin and the pole is ωn . Therefore, if we vary
the damping ratio ζ while ωn is fixed, the roots follow a circle, as can be seen in Figure
4.28. If we increase ζ from zero, the roots start from the imaginary axis and move to the left
half-plane following the circle. As the roots approach the real axis, the overshoot and the
oscillation frequency decrease. The case in which the roots are on the circle but have not yet reached the real axis is called underdamped. When the roots reach the real axis, the
Chapter 4. Control System Performances 151
damping ratio ζ is one, and two roots coincide. It is the case called critically damped. If we
further increase ζ, the two roots become two separate real roots and stay on the real axis.
When the two roots are on the real axis, the response does not have oscillations. It is the case
called overdamped. On the other hand, if we decrease ζ from zero, the roots move to the
right half-plane following the circle. In this case, the response diverges as time increases.
Figure 4.29 shows the relationship between the pole locations and the step responses.
4.4 Stability of Control Systems

As an example of an unstable system, consider the following transfer function.
Y(s)/R(s) = G(s) = 2/(s² + s − 2) = 2/((s − 1)(s + 2))    (4.85)
If we find the unit step response of the above system, we have the following response.
y(t) = (2/3)e^t + (1/3)e^(−2t) − 1, t ≥ 0    (4.86)
Since the above system has two poles at s = 1 and s = −2, the response has the exponential terms e^t and e^(−2t). Since the term e^t goes to infinity as time goes to infinity, the response diverges.
Even one diverging exponential term makes the response diverge. If the output response of
the system diverges regardless of the input, the system is unstable.
Next, consider the following system.
Y(s)/R(s) = G(s) = 2/(s² + 3s + 2) = 2/((s + 1)(s + 2))    (4.87)
The following equation is the step response of the system obtained in a similar way as in the
previous example.
y(t) = −2e^(−t) + e^(−2t) + 1, t ≥ 0    (4.88)
Since the above system has two poles at s = −1 and s = −2 , the output response of the system
has the exponential terms e−t and e−2t . All the exponential terms converge to zero; in turn,
the output response converges to the final value 1. This system is an example of a stable
system. If the output of the system converges to a particular solution, the system is stable.
From the above two cases, we can see that the output response must not contain even a single diverging exponential term if the system is to be stable. Since the locations of poles determine the
exponential terms in the output response, all the poles of the system should have negative
real parts to be a stable system. If the output response of the system diverges to infinity, it is
meaningless to evaluate the transient performances. Also, it is not possible to evaluate the
steady-state performance since the output response does not have a steady state. Therefore,
it is necessary to find out if the system has a steady-state before we try to evaluate the per-
formance of the system. In other words, we have to determine if the system is stable before
we begin to do any performance analysis. The condition of being stable is called stability. In
this section, we discuss how to determine the stability of control systems.
To discuss the definition of stability, consider a control system with the following transfer
function:
GT(s) = Y(s)/R(s) = (b_m s^m + b_{m−1} s^(m−1) + · · · + b1 s + b0)/(s^n + a_{n−1} s^(n−1) + · · · + a1 s + a0)
       = K (s − z1)(s − z2) · · · (s − zm) / ((s − p1)(s − p2) · · · (s − pn))    (4.89)
In the above transfer function, Y (s) and R(s) are Laplace transforms of the output and the
input, respectively. Let us assume that the degree of denominator n is greater than the degree
of numerator m. In most control systems, the degrees of denominators are greater than the
degree of numerators, and the reason is discussed later. For a given input r(t), the output
response y(t) is of the following form:
y(t) = K1 e^(p1 t) + K2 e^(p2 t) + · · · + Kn e^(pn t) + yp(t)    (4.90)
The exponential functions in the above equation are terms for the homogeneous solution,
and yp (t) is the particular solution. If all the exponential terms in the homogeneous solution
converge to zero as time goes to infinity, the system is stable. On the other hand, if the
homogeneous solution has at least one diverging exponential term, the system is unstable.
There are certain cases when the homogeneous solution neither diverges nor converges to
zero. These cases are called marginally stable. In the control system design, the marginally
stable cases are usually considered to be unstable. In Eq. (4.90), p1 , p2 , . . . , pn are poles of
the transfer function, and they determine the stability of the system. Poles are either real or
complex. For the exponential terms in the homogeneous solution of Eq. (4.90) to converge
to zero, real parts of all the poles should be less than zero. In other words, the poles should
satisfy the following relationship.
Re {pi } < 0, i = 1, 2, · · · , n (4.91)
If there is at least one pole with a positive real part, the corresponding exponential term in
the homogeneous solution diverges, and the system is unstable. Therefore, for the system to
be stable, the real parts of all the poles should be less than zero. In other words, all the poles
of the system should be in the left half-plane to be a stable system. The transfer function in
Eq. (4.89) does not have multiple poles. If the transfer function has multiple poles, we can
draw the same conclusion. Consider the following transfer function with multiple poles. Let
us assume that the poles of the following system have negative real parts.
GT(s) = Y(s)/R(s) = K (s − z1)(s − z2) · · · (s − zm)/(s − p1)^n    (4.92)
For a given input r(t), the output response y(t) of the above system is of the following form:
y(t) = K1 e^(p1 t) + K2 t e^(p1 t) + · · · + Kn t^(n−1) e^(p1 t) + yp(t)    (4.93)

The homogeneous solution terms of the above equation are of the form t^i e^(p1 t). Even though the term t^i diverges, the whole term converges to zero since the exponential term goes to zero at a much faster rate. Therefore, all the homogeneous solution terms converge to zero; in turn,
this system is stable.
In general, the stability theory is a very complicated mathematical theory; however, the
stability of the linear time-invariant systems is rather simple. In this book, we restrict our
attention to the stability of the linear time-invariant system. In mathematical terms, the
stability discussed above is BIBO stability (bounded-input bounded-output stability). When a system is BIBO stable, the output of the system is always bounded for a bounded input. As
we can see from the above discussion, the BIBO stability of the linear time-invariant system
is determined by the locations of poles. As we discussed above, the stability of the system
Eq. (4.89) can be determined by the locations of the roots of the following characteristic
equation.
s^n + a_{n−1} s^(n−1) + · · · + a1 s + a0 = 0    (4.94)
To be stable, all the roots of the above characteristic equation should be in the left half-plane.
We can determine the stability by finding the roots of the characteristic equation. However,
there is a method to determine the stability without finding the roots of the characteristic
equation. In the following, we discuss the Routh-Hurwitz method that enables us to deter-
mine the stability without finding the roots of the characteristic equation.
In the above equation, since α and β are positive, expanding the equation as in the form of
Eq. (4.94) shows that all the coefficients are positive. Since any higher-order equation can
be factored with the first-order and second-order factors, we can draw the same conclusion
for the higher-order equation. Therefore, if any coefficients of the characteristic equation are
zero or negative, the system is unstable, and there is no need to proceed with the Routh-
Hurwitz method to determine the stability of the system.
The Routh-Hurwitz method begins with building the following table. First, fill in the
first and second rows of the table using the coefficients of Eq. (4.94).
s^n      1        a_{n−2}   a_{n−4}   · · ·
s^(n−1)  a_{n−1}  a_{n−3}   a_{n−5}   · · ·
s^(n−2)  b1       b2        b3
s^(n−3)  c1       c2        c3                    (4.96)
s^(n−4)  d1       d2        d3
  ⋮        ⋮
s^0
Use the following relationships to fill in the third row and below; each new entry is formed from the two rows directly above it.

b1 = (a_{n−1} a_{n−2} − a_{n−3})/a_{n−1}    (4.97)

b2 = (a_{n−1} a_{n−4} − a_{n−5})/a_{n−1}    (4.98)

c1 = (b1 a_{n−3} − b2 a_{n−1})/b1    (4.99)

c2 = (b1 a_{n−5} − b3 a_{n−1})/b1    (4.100)
When all the rows are filled in the table Eq. (4.96), the number of sign changes in the left-
most column (1, an−1 , b1 , c1 , d1 , · · · ) is the number of roots in the right half-plane. In other
words, for the system to be stable, all the coefficients in the left-most column of the Routh-
Hurwitz table should be positive. The case when the equation has roots on the imaginary
axis is a special case and discussed later. The necessary and sufficient condition that all the
roots of Eq. (4.94) lie in the left half-plane is that all the coefficients of Eq. (4.94) are positive
and all the terms in the first column of the Routh-Hurwitz table have positive signs.
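For checking hand calculations, the basic table-building procedure is easy to automate. The following minimal MATLAB sketch (a hypothetical helper, not from the book) builds the table of Eq. (4.96) for the regular case, i.e., when no zero appears in the left-most column and no row is all zero; save it as routh_table.m and count the sign changes in the first column of the result.

function R = routh_table(p)
% p : coefficient vector [1 a_{n-1} ... a_1 a_0] of Eq. (4.94)
% Example: R = routh_table([1 2 3 4 5 2])   % reproduces the table of Eq. (4.108)
n    = numel(p) - 1;                      % degree of the polynomial
cols = ceil((n + 1)/2);
R    = zeros(n + 1, cols);
R(1,:) = p(1:2:end);                      % row for s^n
r2 = p(2:2:end);  R(2,1:numel(r2)) = r2;  % row for s^(n-1)
for i = 3:n+1                             % remaining rows, Eqs. (4.97)-(4.100)
    for j = 1:cols-1
        R(i,j) = (R(i-1,1)*R(i-2,j+1) - R(i-2,1)*R(i-1,j+1))/R(i-1,1);
    end
end
end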
Example 4.6 Using the Routh-Hurwitz method, find the number of roots of the fol-
lowing equation in the right half-plane.
s5 + 2s4 + 3s3 + 4s2 + 5s + 2 = 0 (4.101)
First, fill in the first and second rows of the following Routh-Hurwitz table.

s^5   1   3   5
s^4   2   4   2
s^3
s^2                    (4.102)
s
1
To fill in the rest of the rows, use the following relationships.

b1 = (2 · 3 − 4)/2 = 1    (4.103)

b2 = (2 · 5 − 2)/2 = 4    (4.104)

c1 = (1 · 4 − 2 · 4)/1 = −4    (4.105)

c2 = (1 · 2 − 2 · 0)/1 = 2    (4.106)

d1 = (−4 · 4 − 1 · 2)/(−4) = 9/2    (4.107)
The following is the complete Routh-Hurwitz table.

s^5   1     3   5
s^4   2     4   2
s^3   1     4
s^2   −4    2                    (4.108)
s     9/2
1     2
Since we have two sign changes in the left-most column, the equation has two roots in
the right half-plane. The following roots of Eq. (4.101) confirm that the equation has
two roots in the right half-plane.
MATLAB The roots of the equation in this example can be found using the following
MATLAB commands.
p=[1 2 3 4 5 2];
roots(p)
There are special cases when we cannot complete the table with the procedure explained
above. We discuss the special cases of the Routh-Hurwitz method in the following.
In the first special case, there are zeros in the left-most column, while the rest of the columns
have non-zero values. The following example shows this case.
s^5   1   3   5
s^4   2   6   2
s^3   0   4
s^2                    (4.113)
s
1
We cannot continue the process since we cannot divide a non-zero value by the zero in the left-most column. To get around this problem, let us assume that b1 is not zero but an infinitesimally small number ε. Then, continue with the procedure.

c1 = (ε · 6 − 2 · 4)/ε = (6ε − 8)/ε ≈ −8/ε    (4.114)

c2 = (ε · 2 − 2 · 0)/ε = 2    (4.115)

d1 = (4 · (6ε − 8)/ε − 2ε) / ((6ε − 8)/ε) ≈ 4    (4.116)
With ε, we can complete the table as in the following.
s^5   1      3   5
s^4   2      6   2
s^3   ε      4
s^2   −8/ε   2                    (4.117)
s     4
1     2
In the above table, ε can be either positive or negative. Either way, we can see that the
number of sign changes in the left-most column is two. Therefore, we can conclude that the
equation has two roots in the right half-plane. The following roots of the equation confirm
this.
s = 0.3226 ± j1.5490, −0.5414 ± j0.4671, −1.5624 (4.118)
Next, consider the equation s^5 + 2s^4 + 4s^3 + 5s^2 + 4s + 2 = 0 (Eq. (4.119)). The following is the Routh-Hurwitz table for this equation.
s^5   1     4   4
s^4   2     5   2
s^3   1.5   3
s^2   1     2                    (4.120)
s     0     0
1
As we can see, the above Routh-Hurwitz table is incomplete. Since all the entries in the row corresponding to the term s are zero, we cannot continue to fill in the rest of the table. In this case, we cannot use the method of replacing 0 with ε, as explained in the previous case.
We encounter this case when the equation has even polynomial factors. All the terms in even
polynomials have even degrees, as shown in the following equation.
Factorizing Eq. (4.119), we can see that it has an even polynomial factor, as shown in the
following equation.
(s² + 2)(s³ + 2s² + 2s + 1) = 0    (4.122)
When the polynomial has an even polynomial factor, the entries right above the all-zero row
in the Routh-Hurwitz table show coefficients of the even polynomial factor. For example,
the entries right above the all-zero row in Eq. (4.120) are 1 and 2, which are the coefficients
of the even polynomial s2 + 2.
The roots of even polynomial have a particular pattern in the complex plane. Let us find
the roots of the even polynomial (4.121). First, define a new variable p, and obtain a new
equation with the variable p as follows.
p = s2 (4.123)
The following are all the possible forms of the roots of the above equation.
p = ±γ (4.125)
p = α ± jβ = re±jφ (4.126)
In the above equation, we assume that γ is a positive real number. α and β are a real part
and an imaginary part, respectively. re±jφ is a polar coordinate representation of the complex
root. If p = γ, the original variable s has the following values:
s = ±√γ    (4.127)
In other words, the original equation has two real roots, which are symmetric with respect
to the origin. If p = −γ, the original variable s has the following values:
s = ±√(−γ) = ±j√γ    (4.128)
In other words, the original equation has two complex conjugate roots on the imaginary axis.
If p = α ± jβ, the original equation has four complex roots which are symmetric with respect
to the origin as follows:
s = ±√(α ± jβ) = ±√(r e^(±jφ)) = √r e^(±jφ/2), √r e^(±j(π + φ/2))    (4.129)
If a row with all zero entries appears in the process of building the Routh-Hurwitz table,
the entries right above the all-zero row are coefficients of the even polynomial factor. If we
divide the original polynomial by the even polynomial factor, we can obtain the quotient
polynomial. We can find the number of roots of the quotient in the right half-plane by
building the Routh-Hurwitz table. For the even polynomial factor, we can find the number of
roots in the right half-plane using the process, as explained above. As an example, consider
Eq. (4.122). When we divide the original polynomial by the even polynomial s2 +2, we obtain
a quotient polynomial s3 +2s2 +2s+1. We can find out all the roots of the quotient polynomial
lie in the left half-plane by building the Routh-Hurwitz table. Since the even polynomial
s2 + 2 has two complex conjugate roots on the imaginary axis, the original polynomial has
three roots in the left half-plane and two complex roots on the imaginary axis. There is
an alternative method for this special case. If we have an all-zero row, the entries in the
row right above the all-zero row are coefficients of the auxiliary polynomial. Note that this
auxiliary polynomial is the even polynomial factor, as discussed above. Find the derivative
of the auxiliary polynomial and replace entries in an all-zero row with the coefficients of
the derivative polynomial. Then, continue with the process. Consider Eq. (4.120) as an
example. The auxiliary polynomial is s2 + 2, and the derivative of this polynomial is 2s. We
obtain the following table by replacing the entries of the all-zero row with the coefficients of the derivative polynomial 2s.
s^5   1     4   4
s^4   2     5   2
s^3   1.5   3
s^2   1     2        → s² + 2
s     2     0        ← d(s² + 2)/ds = 2s                    (4.130)
1     2     0
Since there is no change of sign in the first column, we can conclude that all the roots are in
the left-half plane.
Example 4.7 Consider the following equation and find the number of roots in the left and right half-planes using the Routh-Hurwitz method.

s^6 + s^5 + 3s^4 + 2s^3 + 4s^2 + 2s + 2 = 0    (4.131)

The following is the Routh-Hurwitz table for the above equation; the entries corresponding to s³ are all zero.
s^6   1   3   4   2
s^5   1   2   2
s^4   1   2   2
s^3   0   0                    (4.132)
s^2
s
1
The polynomial has an even polynomial factor s4 + 2s2 + 2 and can be factored as in the
following equation.
(s⁴ + 2s² + 2)(s² + s + 1) = 0    (4.133)
Define a new variable p as follows:
p = s2 (4.134)
Then, the even polynomial can be rewritten as follows:
s4 + 2s2 + 2 = p2 + 2p + 2 = 0 (4.135)
If we solve the above equation for the variable p, we can see that the equation has two
complex conjugate roots. Therefore, the even polynomial with the variable s has four
roots, which are symmetric with respect to the origin and, in turn, has two roots in the
right half-plane, as in the case of Eq. (4.129). Since the equation s2 + s + 1 = 0 has two
roots in the left half-plane, Eq. (4.131) has two roots in the right half-plane and four in
the left half-plane. As an alternative solution, let us try the method in Eq. (4.130). The
auxiliary polynomial is s4 + 2s2 + 2. We can obtain the derivative polynomial 4s3 + 4s
by taking the derivative of the auxiliary polynomial. Fill in the all-zero row with the
coefficients of the derivative polynomial and continue with the process as follows:

s^6   1    3   4   2
s^5   1    2   2
s^4   1    2   2        → s⁴ + 2s² + 2
s^3   4    4             ← d(s⁴ + 2s² + 2)/ds = 4s³ + 4s    (4.136)
s^2   1    2
s     −4
1     2
Since we have two sign changes in the first column, we can conclude that the equation
has two roots in the right half-plane. This coincides with the result of the first method.
The advantage of the Routh-Hurwitz method is to enable us to find the number of roots
in the right half-plane without solving the polynomial equation. This could be a big advan-
tage when there is no computer available. The advantage is diminished now since computers
are everywhere. We can find the roots of high order polynomial equations very easily using
computer software such as MATLAB. However, the Routh-Hurwitz method can still be use-
ful when there are unknown coefficients in the polynomial. The following example shows
how the Routh-Hurwitz method can be used when the polynomial has unknown coefficients.
Example 4.8 Consider the feedback control system shown in Figure 4.30. The system has an unknown controller gain K. Let us find the range of K which makes the closed-loop system stable. The transfer function of the closed-loop system is as follows:

Y(s)/R(s) = K/(s(s + 1)² + K)    (4.137)

The characteristic equation of the closed-loop system is

s³ + 2s² + s + K = 0    (4.138)

The following is the Routh-Hurwitz table of Eq. (4.138).

s^3   1           1
s^2   2           K                    (4.139)
s     (2 − K)/2
1     K
For the closed-loop system to be stable, all the entries of the first column in Eq. (4.139) should be positive. From this requirement, we have the following inequalities:

(2 − K)/2 > 0, K > 0    (4.140)

Therefore, we have the following range of K, which makes the closed-loop system stable.

0 < K < 2    (4.141)
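A minimal numerical check of Eq. (4.141), assuming G(s) = 1/(s(s + 1)²) in unity feedback as in Figure 4.30:

G = tf(1,[1 2 1 0]);                % G(s) = 1/(s(s+1)^2)
for K = [0.5 1.9 2.1]
    GT = feedback(K*G,1);           % closed-loop transfer function, Eq. (4.137)
    fprintf('K = %.1f, max real part of poles = %+.3f\n', K, max(real(pole(GT))));
end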
Example 4.9 This example shows the case when we have two unknown coefficients. Consider the feedback control system in Figure 4.31. In this system, we have a derivative term sK2 in addition to the proportional term K1 in the controller. Let us find the ranges of K1 and K2 which make the closed-loop system stable. The transfer function of the closed-loop system is as follows:

Y(s)/R(s) = (K2 s + K1)/(s(s + 1)² + K2 s + K1)    (4.142)

The characteristic equation of the closed-loop system is

s³ + 2s² + (1 + K2)s + K1 = 0    (4.143)

The following is the Routh-Hurwitz table of Eq. (4.143).

s^3   1                   1 + K2
s^2   2                   K1                    (4.144)
s     (2 + 2K2 − K1)/2
1     K1
For the closed-loop system to be stable, all the entries of the first column in Eq. (4.144) should be positive. From this requirement, we have the following inequalities:

K1 > 0, 2 + 2K2 − K1 > 0    (4.145)

Since we have two unknown coefficients, the region which satisfies the above condi-
tions can be drawn on the two-dimensional space, as shown in Figure 4.32. The shaded
area is the region of the coefficients which makes the system stable.
4.5 Steady-State Performance

To discuss the steady-state error of a unity feedback system, consider the system in Figure 4.34.
To calculate the steady-state error ess = lim(t→∞)[r(t) − y(t)] using Eq. (4.146), we need to find the output response y(t) of the system by solving a differential equation of the closed-loop system. However, there is a simpler way of finding the steady-state error in the frequency domain. Consider the frequency domain representation of the system in Figure 4.34, as shown in Figure 4.35.
The steady-state error can be calculated easily using the final value theorem of the Laplace
transform, as shown below:
Y(s) = [D(s)G(s) / (1 + D(s)G(s))] R(s)    (4.148)

We can obtain the following relationship by plugging the above equation into Eq. (4.147):

ess = lim_{s→0} s [R(s) − (D(s)G(s) / (1 + D(s)G(s))) R(s)] = lim_{s→0} [sR(s) / (1 + D(s)G(s))]    (4.149)
To calculate the steady-state error using the above equation, consider three standard types
of command inputs, the unit step input, the unit ramp input, and the unit parabolic in-
put. Usually, these three standard command inputs are used to evaluate steady-state perfor-
mance. Evidently, industrial control systems encounter many other kinds of command inputs. As an example, consider an industrial robot manipulator, as shown in
Figure 4.36. Suppose the tip of the robot manipulator is following a straight line to perform
a welding job. To move the tip of the robot manipulator in a straight line, the command
inputs for the joint servo systems have very complex forms that are far from the types of the
command inputs mentioned above.
It is impossible to find steady-state errors for all kinds of command inputs for industrial
control systems. Therefore, it is reasonable to estimate the steady-state errors for the three
standard command inputs to evaluate the steady-state performance. In fact, any complex
form of input commands can be approximated using the three standard types of command
inputs.
Let us assume that the input of the control system is a unit step as follows:

R(s) = 1/s    (4.151)

Using Eq. (4.149), the steady-state error can be calculated as follows:

ess = lim_{s→0} [s / (1 + D(s)G(s))] (1/s) = 1 / (1 + lim_{s→0} D(s)G(s))    (4.152)

The position error constant Kp is defined as follows:

Kp = lim_{s→0} D(s)G(s)    (4.153)

Then, the steady-state error for a unit step input can be written as follows:

ess = 1 / (1 + Kp)    (4.154)
For the above steady-state error to be zero, Kp has to be infinity. For Kp to be infinity, D(s)G(s)
should have at least one pole at the origin.
Let us assume that the input of the control system is a unit ramp as follows:

R(s) = 1/s^2    (4.156)

Using Eq. (4.149), the steady-state error can be calculated as follows:

ess = lim_{s→0} [s / (1 + D(s)G(s))] (1/s^2) = lim_{s→0} 1 / (s + sD(s)G(s)) = 1 / lim_{s→0} sD(s)G(s)    (4.157)

The velocity error constant Kv is defined as follows:

Kv = lim_{s→0} sD(s)G(s)    (4.158)

Then, the steady-state error for a unit ramp input can be written as follows:

ess = 1/Kv    (4.159)
For the above steady-state error to be zero, Kv has to be infinity. For Kv to be infinity, D(s)G(s)
should have at least two poles at the origin.
Let us assume that the input of the control system is a unit parabolic input as follows:

r(t) = (t^2 / 2) u_s(t)    (4.160)

Then, the Laplace transform of the input is as follows:

R(s) = 1/s^3    (4.161)

Using Eq. (4.149), the steady-state error can be calculated as follows:

ess = lim_{s→0} [s / (1 + D(s)G(s))] (1/s^3) = lim_{s→0} 1 / (s^2 + s^2 D(s)G(s)) = 1 / lim_{s→0} s^2 D(s)G(s)    (4.162)

The acceleration error constant Ka is defined as follows:

Ka = lim_{s→0} s^2 D(s)G(s)    (4.163)

Then, the steady-state error for a unit parabolic input can be written as follows:

ess = 1/Ka    (4.164)
For the above steady-state error to be zero, Ka has to be infinity. For Ka to be infinity, D(s)G(s)
should have at least three poles at the origin.
System Type
The system type is a way of classifying control systems according to their steady-state errors. The system type is defined as the order of the command input for which the steady-state error has a non-zero finite value. For example, when the steady-state error of a system for a step input is a non-zero constant, the system is type 0. As another example, when the steady-state error of a system for a ramp input is a non-zero finite constant, the system is type 1. In general, if the steady-state error of a system for an N-th order input function is a non-zero constant, the system is type N.
In a unity-feedback system, the system type is the same as the number of poles of D(s)G(s)
at the origin. If D(s)G(s) does not have a pole at the origin, the position error constant Kp is a
finite number, and the steady-state error for a step input is a non-zero constant. Therefore, in
this case, the system is type 0. If D(s)G(s) has a pole at the origin, the velocity error constant
Kv is a finite number, and the steady-state error for a ramp input is a non-zero constant.
Therefore, this system is type 1. In a similar way, it can be seen that the number of poles of D(s)G(s) at the origin is equal to the system type number.
Example 4.10 Let us calculate the steady-state error of the system in Figure 4.37 for a unit step input and a unit ramp input. First, let us assume that the input is a unit step. In order to calculate the steady-state error, the position error constant Kp can be calculated as follows:

Kp = lim_{s→0} D(s)G(s) = lim_{s→0} K1 A / (τs + 1) = K1 A    (4.165)

The steady-state error for a unit step input is as follows:

ess = 1 / (1 + Kp) = 1 / (1 + K1 A)    (4.166)

Next, let us assume that the input is a unit ramp. In order to calculate the steady-state error, the velocity error constant Kv can be calculated as follows:

Kv = lim_{s→0} sD(s)G(s) = lim_{s→0} sK1 A / (τs + 1) = 0    (4.167)

The steady-state error for a unit ramp input is as follows:

ess = 1/Kv = ∞    (4.168)
To check the above results, let us assume that the parameters have the following values and find the steady-state errors.

K1 = 2, A = 2, τ = 1    (4.169)

Y(s) = [K1 A / (τs + 1)] / [1 + K1 A / (τs + 1)] R(s) = [K1 A / (τs + 1 + K1 A)] R(s) = [4 / (s + 5)] R(s)    (4.170)

If the input is a unit step, the Laplace transform of the output is as follows:

Y(s) = 4 / ((s + 5)s) = −0.8 / (s + 5) + 0.8/s    (4.171)

By taking the inverse Laplace transform, the output can be found as follows:

y(t) = (−0.8e^{−5t} + 0.8) u_s(t)    (4.172)
The above results coincide with Eq. (4.166). Figure 4.38 shows the simulation result.
If the input is a unit ramp, the Laplace transform of the output y(t) is as follows:

Y(s) = 4 / ((s + 5)s^2) = 0.16 / (s + 5) − 0.16/s + 0.8/s^2

By taking the inverse Laplace transform, the output can be found as follows:

y(t) = (0.16e^{−5t} − 0.16 + 0.8t) u_s(t)    (4.175)
The above results coincide with Eq. (4.168). Figure 4.39 shows the simulation result.
MATLAB In this example, the lsim MATLAB function is used instead of the step MATLAB function. We cannot use the step MATLAB function since we have to do the simulations for a ramp input. The lsim MATLAB function enables us to do simulations for an arbitrary input. t=0:0.001:5 generates a time vector variable, and R=ones(size(t)) generates a unit step input with the same number of elements as the time vector variable. When we need to write many MATLAB commands in sequence, it is convenient to open an editor, write the sequence of commands, and save it as a text file. The name of the file has to have the extension .m. A MATLAB script file with the extension .m can be executed in the console window like any other MATLAB command. For example, if you write a MATLAB script file with the name Example4_10.m, typing Example4_10 in the console window executes the MATLAB commands in the file.
The following are the MATLAB commands to do simulations for a ramp input. Note that R=t generates a ramp input.

K1=2; A=2; tau=1;
G=tf(K1*A,[tau 1]);     % open-loop transfer function K1*A/(tau*s + 1)
GT=feedback(G,1)        % closed-loop transfer function
t=0:0.001:5;
R=t;                    % unit ramp input
lsim(GT,R,t)
grid on
Since the transfer function D(s)G(s) of the system in Example 4.10 does not have a pole
at the origin, the system is type 0. The steady-state error of the system for a unit step input
is a non-zero constant, and this result coincides with the definition of the system type.
Example 4.11 If we add an integrator to the controller for the system in Example 4.10, we have the system in Figure 4.40. Let us calculate the steady-state error of the system in Figure 4.40 for a unit step input, a unit ramp input, and a unit parabolic input. First, let us assume that the input is a unit step. In order to calculate the steady-state error, the position error constant Kp can be calculated as follows:

Kp = lim_{s→0} D(s)G(s) = lim_{s→0} (K1 s + K2)A / (s(τs + 1)) = ∞    (4.177)

The steady-state error for a unit step input is as follows:

ess = 1 / (1 + Kp) = 0    (4.178)

Next, let us assume that the input is a unit ramp. In order to calculate the steady-state error, the velocity error constant Kv can be calculated as follows:

Kv = lim_{s→0} sD(s)G(s) = lim_{s→0} s(K1 s + K2)A / (s(τs + 1)) = K2 A    (4.179)

The steady-state error for a unit ramp is as follows:

ess = 1 / (K2 A)    (4.180)

Finally, let us assume that the input is a unit parabolic input. In order to calculate the steady-state error, the acceleration error constant Ka can be calculated as follows:

Ka = lim_{s→0} s^2 D(s)G(s) = lim_{s→0} s^2 (K1 s + K2)A / (s(τs + 1)) = 0    (4.181)

The steady-state error for a unit parabolic input is as follows:

ess = 1/Ka = ∞    (4.182)
To check the above results, let us assume that the parameters have the following values and find the steady-state errors.

K1 = 2, K2 = 3, A = 2, τ = 1    (4.183)

Y(s) = [(K1 s + K2)A / (τs^2 + (K1 A + 1)s + K2 A)] R(s) = [(4s + 6) / ((s + 2)(s + 3))] R(s)    (4.184)

If the input is a unit step, the Laplace transform of the output is as follows:

Y(s) = (4s + 6) / ((s + 2)(s + 3)s) = 1/(s + 2) − 2/(s + 3) + 1/s    (4.185)

By taking the inverse Laplace transform, the output can be found as follows:

y(t) = (e^{−2t} − 2e^{−3t} + 1) u_s(t)    (4.186)
The above results coincide with Eq. (4.178). Figure 4.41 shows the simulation result.
If the input is a unit ramp, the Laplace transform of the output y(t) is as follows:

Y(s) = (4s + 6) / ((s + 2)(s + 3)s^2) = −1/(2(s + 2)) + 2/(3(s + 3)) − 1/(6s) + 1/s^2    (4.188)

By taking the inverse Laplace transform, the output can be found as follows:

y(t) = (−(1/2)e^{−2t} + (2/3)e^{−3t} − 1/6 + t) u_s(t)    (4.189)

By taking the limit, the steady-state error can be found as follows:

ess = lim_{t→∞} (t − y(t)) = lim_{t→∞} (t + (1/2)e^{−2t} − (2/3)e^{−3t} + 1/6 − t) = 1/6    (4.190)
The above results coincide with Eq. (4.180). Figure 4.42 shows the simulation result.
Finally, if the input is a unit parabolic input, the Laplace transform of the output y(t) is as follows:

Y(s) = (4s + 6) / ((s + 2)(s + 3)s^3) = 1/(4(s + 2)) − 2/(9(s + 3)) − 1/(36s) − 1/(6s^2) + 1/s^3    (4.191)

By taking the inverse Laplace transform, the output can be found as follows:

y(t) = ((1/4)e^{−2t} − (2/9)e^{−3t} − 1/36 − (1/6)t + (1/2)t^2) u_s(t)    (4.192)

By taking the limit, the steady-state error can be found as follows:

ess = lim_{t→∞} ((1/2)t^2 − y(t)) = lim_{t→∞} ((1/2)t^2 − (1/4)e^{−2t} + (2/9)e^{−3t} + 1/36 + (1/6)t − (1/2)t^2) = ∞    (4.193)
The above results coincide with Eq. (4.182). Figure 4.43 shows the simulation result.
MATLAB The MATLAB commands for a unit step response and a unit ramp response are similar to those of the previous example. The following are the MATLAB commands for a unit parabolic response. In MATLAB, the 'dot' operator denotes an element-wise operation. In the following MATLAB code, t.*t/2 is the element-wise multiplication of the time vector t with itself, divided by two, and generates a parabolic input vector.
K1=2;K2=3;A=2;tau=1;
G=tf(A*[K1 K2],[tau 1 0]);   % open-loop transfer function (K1*s + K2)*A/(s*(tau*s + 1))
GT=feedback(G,1)             % closed-loop transfer function
t=0:0.001:10;
R=t.*t/2;                    % unit parabolic input
lsim(GT,R,t)
Since the transfer function D(s)G(s) of the system in Example 4.11 has a pole at the origin, the system is type 1. The steady-state error of the system for a ramp input is a non-zero constant, and this result coincides with the definition of the system type.
Table 4.1 summarizes the steady-state errors for type 0, type 1, and type 2 systems for
various inputs.
Table 4.1 Steady-state errors for type 0, type 1, and type 2 systems

                    Step input      Ramp input      Parabolic input
Type 0 system       1/(1 + Kp)      ∞               ∞
Type 1 system       0               1/Kv            ∞
Type 2 system       0               0               1/Ka
Let us assume that the transfer function of the closed-loop system in Figure 4.44 is as follows:

GT(s) = Y(s)/R(s) = D(s)G(s) / (1 + D(s)G(s)H(s)) = (b_n s^n + b_{n−1} s^{n−1} + ··· + b_1 s + b_0) / (a_n s^n + a_{n−1} s^{n−1} + ··· + a_1 s + a_0)    (4.194)
The steady-state error of this system can be determined as follows:

ess = lim_{t→∞} e(t) = lim_{s→0} sE(s) = lim_{s→0} s(Y(s) − R(s)) = lim_{s→0} s(GT(s) − 1)R(s)
    = lim_{s→0} s [((b_n − a_n)s^n + (b_{n−1} − a_{n−1})s^{n−1} + ··· + (b_1 − a_1)s + (b_0 − a_0)) / (a_n s^n + a_{n−1} s^{n−1} + ··· + a_1 s + a_0)] R(s)    (4.195)
For the above system to be a type 0 system, the steady-state error for a step input should
be a non-zero constant. The steady-state error can be found using Eq. (4.195) as follows:
ess = lim_{s→0} s [((b_n − a_n)s^n + ··· + (b_1 − a_1)s + (b_0 − a_0)) / (a_n s^n + ··· + a_1 s + a_0)] (1/s)
    = lim_{s→0} ((b_n − a_n)s^n + ··· + (b_1 − a_1)s + (b_0 − a_0)) / (a_n s^n + ··· + a_1 s + a_0)
    = (b_0 − a_0) / a_0    (4.196)
For Eq. (4.196) to have a non-zero constant value, the following conditions have to be satis-
fied.
(b_0 − a_0) ≠ 0,   a_0 ≠ 0    (4.197)
In other words, for the system with the transfer function Eq. (4.194) to be a type 0 system,
the constant term in the denominator should be non-zero, and the constant terms of the nu-
merator and denominator should have different values.
For the above system to be a type 1 system, the steady-state error for a ramp input should be
a non-zero constant. The steady-state error can be found using Eq. (4.195) as follows:
ess = lim_{s→0} s [((b_n − a_n)s^n + ··· + (b_1 − a_1)s + (b_0 − a_0)) / (a_n s^n + ··· + a_1 s + a_0)] (1/s^2)
    = lim_{s→0} [((b_n − a_n)s^n + ··· + (b_1 − a_1)s + (b_0 − a_0)) / (a_n s^n + ··· + a_1 s + a_0)] (1/s)
    = lim_{s→0} (1/a_0) [(b_1 − a_1) + (b_0 − a_0)/s]    (4.198)
For Eq. (4.198) to have a non-zero constant value, the following conditions should be satis-
fied.
(b_1 − a_1) ≠ 0,   (b_0 − a_0) = 0,   a_0 ≠ 0    (4.199)
In other words, for the system with the transfer function Eq. (4.194) to be a type 1 system,
the constant term in the denominator should be non-zero, the constant terms of the numer-
ator and denominator should have the same values, and the coefficients of the first-order
terms of the numerator and denominator should have different values.
For the above system to be a type 2 system, the steady-state error for a parabolic input
should be a non-zero constant. The steady-state error can be found using Eq. (4.195) as
follows:
ess = lim_{s→0} s [((b_n − a_n)s^n + ··· + (b_1 − a_1)s + (b_0 − a_0)) / (a_n s^n + ··· + a_1 s + a_0)] (1/s^3)
    = lim_{s→0} [((b_n − a_n)s^n + ··· + (b_1 − a_1)s + (b_0 − a_0)) / (a_n s^n + ··· + a_1 s + a_0)] (1/s^2)
    = lim_{s→0} (1/a_0) [(b_2 − a_2) + (b_1 − a_1)/s + (b_0 − a_0)/s^2]    (4.200)
For Eq. (4.200) to have a non-zero constant value, the following conditions should be satis-
fied.
(b_2 − a_2) ≠ 0,   (b_1 − a_1) = 0,   (b_0 − a_0) = 0,   a_0 ≠ 0    (4.201)
In other words, for the system with the transfer function Eq. (4.194) to be a type 2 system, the
constant term in the denominator should be non-zero, the constant terms of the numerator
and denominator should have the same values, the coefficients of the first-order terms of
the numerator and denominator should have the same values, and the coefficients of the
second-order terms of the numerator and denominator should have different values.
The above process can be extended to the system type N. Using a similar process, the
condition for a type N system can be obtained as follows:
(b_N − a_N) ≠ 0,   (b_{N−1} − a_{N−1}) = 0,   ···,   (b_0 − a_0) = 0,   a_0 ≠ 0    (4.202)
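The condition of Eq. (4.202) is easy to automate. The following MATLAB sketch (an added illustration; the coefficient vectors are examples, not taken from the text) determines the system type by comparing the closed-loop numerator and denominator coefficients written in ascending powers of s:

b = [6 4 0];                  % example numerator coefficients:  6 + 4s
a = [6 5 1];                  % example denominator coefficients: 6 + 5s + s^2
if a(1) == 0
    disp('a0 = 0: the conditions of Eq. (4.202) cannot be satisfied');
else
    N = find(b - a ~= 0, 1) - 1;       % first index where b_k differs from a_k
    if isempty(N)
        disp('numerator equals denominator: zero error for all inputs considered');
    else
        fprintf('The system is type %d\n', N);
    end
end

The example coefficients correspond to the closed-loop transfer function of Example 4.12 with K1 = 2, K2 = 3, A = 2, and τ = 1, and the result is type 1.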
Example 4.12 We already know that the system in Example 4.11 is a type 1 system. Let us apply the above method to the system in Example 4.11. The transfer function of the closed-loop system is as follows:

GT(s) = Y(s)/R(s) = [(K1 s + K2)A / (s(τs + 1))] / [1 + (K1 s + K2)A / (s(τs + 1))] = (K1 A s + K2 A) / (τs^2 + (1 + K1 A)s + K2 A)    (4.203)
In Eq. (4.203), we can see that the constant terms have the same values, and the first-
order coefficients are different. This system satisfies the condition for system type 1,
as shown above. Note that this condition is always satisfied as long as K2 ≠ 0, which is
the gain for an integrator.
Example 4.13 In Example 4.11, we made the system in Example 4.10 a type 1 system by adding an integrator to the controller. In this example, we make the system a type 1 system without adding an integrator. If we add a feedforward term DF(s) to the system in Example 4.10, we have the system in Figure 4.45. The transfer function of the whole system is as follows:

GT(s) = Y(s)/R(s) = [DF(s) · K1 A / (τs + 1)] / [1 + K1 A / (τs + 1)] = DF(s) · K1 A / (τs + (1 + K1 A))    (4.204)

Let us assume that DF(s) is constant. For the above system to be a type 1 system, the constant terms of the numerator and the denominator should have the same values, as shown below.

DF(s) · K1 A = 1 + K1 A    (4.205)

Therefore, DF(s) should have the following value:

DF(s) = (1 + K1 A) / (K1 A)    (4.206)
Note that the value of DF (s) is dependent on the value of A. If the value of A is different
from the actual value of the plant, the condition for the system type 1 is not satisfied.
In this case, the steady-state error for a step input would be non-zero. In contrast, the system in Example 4.12 is always a type 1 system regardless of the parameter values. Note that the controller in Example 4.12 has an integrator term.
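A quick MATLAB check (an added sketch; the 50% gain mismatch is an arbitrary illustration) confirms the observation above: with DF = (1 + K1 A)/(K1 A) designed for the nominal gain A, the step error of Eq. (4.204) is zero, but it becomes non-zero when the actual plant gain differs from A.

K1 = 2;  A = 2;  tau = 1;
DF = (1 + K1*A)/(K1*A);                  % feedforward gain of Eq. (4.206)
for Ap = [A 1.5*A]                       % nominal and mismatched plant gains
    GT = tf(DF*K1*Ap, [tau 1 + K1*Ap]);  % Eq. (4.204) with actual plant gain Ap
    fprintf('plant gain = %g  step error = %g\n', Ap, 1 - dcgain(GT));
end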
Example 4.14 Consider the system in Figure 4.46. Since the open-loop transfer function of the system has a pole at the origin, this system is a type 1 system. Let us make this system a type 2 system without adding an integrator. As in the previous example, the condition for a type 2 system can be satisfied by adding a feedforward term DF(s). The transfer function of the whole system is as follows:

GT(s) = Y(s)/R(s) = DF(s) · K1 A / (τs^2 + s + K1 A)    (4.207)
For this system to be a type 2 system, the constant terms and the coefficients of the
first-order terms of the numerator and denominator should have the same values, and
the coefficients of the second-order terms should have different values. To satisfy these
conditions, DF (s) should have a constant and a first-order term. Let us assume that
DF (s) has the following form.
DF (s) = K2 s + K3 (4.208)
Then, the transfer function of the whole system is as follows:

GT(s) = Y(s)/R(s) = (K2 s + K3) · K1 A / (τs^2 + s + K1 A) = (K2 K1 A s + K3 K1 A) / (τs^2 + s + K1 A)    (4.209)

For the type 2 conditions to be satisfied, K2 and K3 should have the following values:

K2 = 1 / (K1 A),   K3 = 1    (4.210)
As in the previous example, the value of DF (s) is dependent on the value of A. If the
value of A is different from the actual value of the plant, the condition for the system
type 2 is not satisfied.
4.6 Lab 4
4.6.1 Real-Time Simulation of a Second-Order Dynamic System
In this lab, let us implement the following second-order system and observe the step re-
sponses for various parameters.
Y(s)/U(s) = 1 / ((s/ωn)^2 + (2ζ/ωn)s + 1) = ωn^2 / (s^2 + 2ζωn s + ωn^2)    (4.211)
The above transfer function can be converted to the following second-order differential
equation.
d^2 y(t)/dt^2 + 2ζωn dy(t)/dt + ωn^2 y(t) = ωn^2 u(t)    (4.212)
To implement the above differential equation using a microcontroller, we need to convert
the above differential equation to a digital form. There are many ways to find the digital
form of the differential equations, and using a state-variable form is one of the easiest ways.
Let us define the state variables as follows:

x1(t) = y(t),   x2(t) = ẏ(t)    (4.213)
Then, we can obtain the state-variable form of the above differential equation as follows:

ẋ1(t) = x2(t)
ẋ2(t) = −ωn^2 x1(t) − 2ζωn x2(t) + ωn^2 u(t)    (4.214)
By taking integrals of both sides of the above equation, we obtain the following relationship:
x1(t) = ∫_0^t x2(τ) dτ
x2(t) = ∫_0^t [−ωn^2 x1(τ) − 2ζωn x2(τ) + ωn^2 u(τ)] dτ    (4.215)
There are many ways to find approximate equations for the above integral equations. Using the definition of the integral as a sum of rectangles, the simplest approximation is the following recursion, where T is the sampling period:

x1((k + 1)T) = x1(kT) + T · x2(kT)
x2((k + 1)T) = x2(kT) + T · (−ωn^2 x1(kT) − 2ζωn x2(kT) + ωn^2 u(kT))    (4.216)
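Before running the recursion on the microcontroller, it can be checked offline. The following MATLAB sketch (an added illustration; the 10 kHz sampling rate is an assumption, not a value given in the lab text) simulates the forward-difference recursion above for a unit step input and overlays the exact step response of Eq. (4.211):

wn = 2*pi*20;  zeta = 0.2;               % parameters used in this lab
T = 1/10000;  N = round(0.2/T);          % assumed 10 kHz sampling, 0.2 s horizon
x1 = 0;  x2 = 0;  y = zeros(1,N);
for k = 1:N
    y(k) = x1;                           % output is the first state variable
    x1_next = x1 + T*x2;
    x2_next = x2 + T*(-wn^2*x1 - 2*zeta*wn*x2 + wn^2*1);   % unit step input u = 1
    x1 = x1_next;  x2 = x2_next;
end
t = (0:N-1)*T;
plot(t, y); hold on
step(tf(wn^2, [1 2*zeta*wn wn^2]), t); grid on

The two curves should be nearly indistinguishable; if the sampling period is made too large, the discrete simulation starts to deviate from the exact response.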
Set the function generator to generate a square wave with 2 Hz frequency and 2V peak-
to-peak amplitude. When you run the program, you should be able to see the step response
of the system, as in Figure 4.47. Channel 1 of the oscilloscope is connected to the function
generator, and channel 2 is connected to the D/A converter output. Note that ωn = 2π ·
20, ζ = 0.2 for this simulation. Using Eqs. (4.66) and (4.68), calculate the peak time and the
overshoot, and compare them with the measurements.
Figure 4.47 Oscilloscope screen for real-time simulation of the second-order system
Exercise 4.1 Repeat the above simulations for the following parameters. Calculate
the peak time and the overshoot, and compare with the measurements.
ωn = 2π · 20, ζ = 0.5
ωn = 2π · 20, ζ = 0.8    (4.217)
From the basic circuit theory, the transfer function of the above OP amp circuit can be found as follows:

Vo(s)/Vi(s) = (−1 / (0.1s + 1)) · (−10/s) = 10 / (s(0.1s + 1))    (4.218)
Let us build the feedback control system for an analog dynamic simulator with a constant
controller gain, as shown in Figure 4.49.
Figure 4.49 Feedback control system for the analog dynamic simulator
Start a new STM32 project in STM32CubeIDE and set up the project as in section 2.4.1. After the code generation, open the main source file and type in the following code.
/* USER CODE BEGIN PV */
float y,Kp,ref,control;
int interrupt_counter, sampling_frequency;
/* USER CODE END PV */
Code 4.5
Code 4.8
When you run the program, you should be able to see the step response of the closed-loop
system, as in Figure 4.50. Channel 1 of the oscilloscope is connected to the output of the
analog dynamic simulator. Make sure all the capacitors are discharged by turning off the
power supply before starting the experiment. Note that the controller gain Kp is 5. The
reference r is a square wave with a magnitude of 1 Volt and a period of 4 seconds.
From Figure 4.49, the transfer function of the closed-loop system is as follows:

Y(s)/R(s) = [10Kp / (s(0.1s + 1))] / [1 + 10Kp / (s(0.1s + 1))] = 10Kp / (0.1s^2 + s + 10Kp) = 100Kp / (s^2 + 10s + 100Kp)    (4.219)
By comparing the above transfer function with the second-order prototype, we can find ζ
and ωn as follows:
100Kp / (s^2 + 10s + 100Kp) = ωn^2 / (s^2 + 2ζωn s + ωn^2)    (4.220)

ωn = √(100Kp),   ζ = 10 / (2ωn) = 10 / (2√(100Kp)) = 1 / (2√Kp)    (4.221)
Using Eqs. (4.66) and (4.68), we can calculate ζ and ωn by measuring the overshoot and the
peak time of the step response. Compare the measured results with the calculated ones.
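The expected values can also be computed in MATLAB. The following sketch (added here; it uses the standard second-order overshoot and peak-time formulas, which should correspond to Eqs. (4.66) and (4.68) in the text) evaluates ζ, ωn, the overshoot, and the peak time for the gain Kp = 5 used above:

Kp = 5;
wn = sqrt(100*Kp);  zeta = 1/(2*sqrt(Kp));    % from Eq. (4.221)
Mp = exp(-pi*zeta/sqrt(1 - zeta^2));          % fractional overshoot
tp = pi/(wn*sqrt(1 - zeta^2));                % peak time in seconds
fprintf('zeta = %5.3f  wn = %6.2f  Mp = %4.1f%%  tp = %6.4f s\n', zeta, wn, 100*Mp, tp);

The same commands can be repeated for the gains in Exercise 4.2.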
Exercise 4.2 Repeat the above experiment for the following controller gains. Calcu-
late ζ and ωn , and compare with the measurements.
Kp = 10, Kp = 2 (4.222)
Problem
Problem 4.1 Find the time constants and the rise times of the following unit step
responses.
Problem 4.2 Find the time constants and the rise times of the output responses of the
systems with the following transfer functions.
(1) G(s) = 10/(s + 4)    (2) G(s) = 10/(2s + 4)    (3) G(s) = 1/(s + 8)
Problem 4.3 Consider the motor speed control system in Figure 4.51. Find the time
constants and 2% settling times of the closed-loop system for the following values of
the controller gain K.
Figure 4.51
(1) 1/(s^2 + s + 1)    (2) 2/(2s^2 + 4s + 2)    (3) 12/(3s^2 + 9s + 6)
Figure 4.52
Y(s)/R(s) = GT(s) = 1 / [(τs + 1)((s/ωn)^2 + (2ζ/ωn)s + 1)]
(1) For τ = 0, ωn = 1, ζ = 0.5, find the overshoot Mp and 5% settling time ts .
(2) For τ = 1, ωn = 1, ζ = 0.5, draw the unit step response curve using MATLAB, and find the overshoot Mp and 5% settling time ts from the curve.
(3) For τ = 0.5, ωn = 1, ζ = 0.5, draw the unit step response curve using MATLAB, and find the overshoot Mp and 5% settling time ts from the curve.
(4) For τ = 0.2, ωn = 1, ζ = 0.5, draw the unit step response curve using MATLAB, and find the overshoot Mp and 5% settling time ts from the curve.
(5) For τ = 0, ωn = 5, ζ = 0.5, find the overshoot Mp and 5% settling time ts.
(6) For τ = 1, ωn = 5, ζ = 0.5, draw the unit step response curve using MATLAB, and find the overshoot Mp and 5% settling time ts from the curve.
(7) Draw pole locations for the above cases. Discuss the relationship between the pole
locations and the transient responses.
Problem 4.7 Consider the spring-mass system in Figure 4.53(a). The output is the
position y of the mass, and the input is the force f applied to the mass. Suppose we
want to build the closed-loop control system as shown in Figure 4.53(b). G(s) is the
transfer function of the spring-mass system, and D(s) is the transfer function of the
controller. Find D(s) such that ωn and ζ satisfy the following relationships.

ωn = √(K/M),   ζ = 1
Figure 4.53
Problem 4.8 Consider the control system in Figure 4.54 with the transfer functions
given below. Find K1 and K2 such that the closed-loop system satisfies the following
conditions.
(a) G(s) = 1/(s(10s + 1))    (b) G(s) = 1/(s(0.1s + 1))
(1) ωn = 5π, ζ = 1/√2
(2) ωn = 20π, ζ = 1/√2
(3) ωn = 5π, ζ = 1
(4) ωn = 20π, ζ = 1
Figure 4.54
Problem 4.9 Consider the system given in Problem 4.8. Find K1 and K2 such that the closed-loop system satisfies the following conditions.
(1) Overshoot: 20%, 5% settling time: 2 sec
(2) Overshoot: 30%, 5% settling time: 1 sec
(3) Draw the unit step response curves of parts (1) and (2) using MATLAB, and check whether the response curves satisfy the given requirements.
Problem 4.10 Find the poles of the characteristic equations given below and deter-
mine the stability.
Problem 4.11 Find the poles of the characteristic equations given below and deter-
mine the stability. All the equations have a root at -1.
Problem 4.12 Determine the stabilities of the following characteristic equations using
the Routh-Hurwitz method. Find the roots of the following equations using MATLAB
and confirm the results from the Routh-Hurwitz method.
(1) s^4 + 2s^3 + 2s^2 + 3s + 1 = 0
(2) s^4 + 2s^3 + 4s^2 + 3s + 1 = 0
(3) s^5 + 2s^4 + 4s^3 + 3s^2 + 3s + 2 = 0
(4) s^5 + 2s^4 + 4s^3 + 5s^2 + 3s + 2 = 0
(5) s^5 + 3s^4 + 4s^3 + 4s^2 + 3s + 1 = 0
(6) s^6 + 3s^5 + 3s^4 + 3s^3 + 3s^2 + 3s + 2 = 0
Problem 4.13 Using the Routh-Hurwitz method, find the ranges of the gain K that
stabilizes the control system in Figure 4.55 with the following transfer functions.
(a) G(s) = 1/(s(s + 1))    (b) G(s) = 1/(s + 1)^3    (c) G(s) = 1/(s^3 + 2s^2 + 2s + 1)
Figure 4.55
Problem 4.14 Using the Routh-Hurwitz method, find the ranges of the gain K1 and K2
that stabilize the control system in Figure 4.56 with the following transfer functions.
Draw the stable regions on the K1 -K2 planes.
(a) G(s) = 1/(s(s + 1))    (b) G(s) = 1/(s^2(s + 1))
Figure 4.56
Problem 4.15 Using the Routh-Hurwitz method, find the ranges of the gain K1 and K2
that stabilize the control system in Figure 4.57 with the following transfer functions.
Draw the stable regions on the K1 -K2 planes.
(a) G(s) = 1/(s + 1)^2    (b) G(s) = 1/(s(s + 1))
Figure 4.57
Problem 4.16 Find the position error constant Kp , the velocity error constant Kv ,
and the acceleration error constant Ka for the system in Figure 4.58 with the following
transfer functions. Find the steady-state errors for a unit step input, a unit ramp input,
and a unit parabolic input.
(a) G(s) = 4/(s(s + 2))    (b) G(s) = 10/(s + 2)^2    (c) G(s) = (2s + 1)/(s^2(s + 2))
Figure 4.58
Problem 4.17 Consider the system in Figure 4.59. The following problems are about
the steady-state errors of the system.
(1) Determine the system type and the steady-state error for a unit step input for the
following transfer functions and controller gains.
(a) G(s) = 4/(s^2 + 2s + 2), K = 1    (b) G(s) = 4/(s^2 + 2s + 2), K = 10    (c) G(s) = 4/(s^2 + 2s), K = 0.1
(2) Determine the system type and the steady-state error for a unit ramp input for the following transfer functions and controller gains.
(a) G(s) = 4/(s^2 + 2), K = 1    (b) G(s) = 4/(s^2 + s), K = 10    (c) G(s) = (10s + 4)/(s^3 + 2s^2), K = 0.1
Figure 4.59
Problem 4.18 The system in Figure 4.60 has an external disturbance input W (s). An-
swer the following problems for the transfer functions given below:
(a) G(s) = 1/(s(s + 3)), D(s) = 2    (b) G(s) = 1/(s + 3), D(s) = 2/s
Figure 4.60
Problem 4.19 Determine the system types of the system in Figure 4.61 for the follow-
ing transfer functions.
(a) G(s) = 4/(s^2 + s), D(s) = 1, H(s) = s + 1    (b) G(s) = 4/(s^2 + s), D(s) = 1, H(s) = s + 2
(c) G(s) = 4/(s^2 + s), D(s) = 1 + 1/s, H(s) = 4s + 1    (d) G(s) = 4/(s^2 + s), D(s) = 1 + 1/s, H(s) = 4s + 2
Figure 4.61
DF(s) = 1, D(s) = 1/s
(2) When D(s) = 1, find DF (s) such that the steady-state error for a unit step input is
zero.
(3) Draw the unit step response curves of parts (1) and (2) using MATLAB and compare the transient responses.
(4) When the transfer function of the plant is changed to 4/(0.2s + 1), find the steady-state errors of parts (1) and (2).
Figure 4.62
Problem 4.21 Using the system type determination method in section 4.5.2, show that
the system in Figure 4.63 is a type N system. N is a positive integer.
Figure 4.63
5. The Root Locus Method
5.2 Properties of the Root Loci and Rules for Plotting 196
5.1 The Basic Concept of the Root Locus

Consider the simple feedback control system in Figure 5.1, in which the plant 1/(s + 1) is controlled with a constant gain K. The transfer function of the closed-loop system is as follows:

Y(s)/R(s) = GT(s) = [K/(s + 1)] / [1 + K/(s + 1)] = K / (s + K + 1)    (5.1)
From the denominator of the transfer function, we have the characteristic equation of the
closed-loop system as follows:
s+K +1 = 0 (5.2)
The pole of the system is as follows:
s = −K − 1 (5.3)
If we vary the value of K, the location of the pole changes. The pole follows a locus on the
complex plane as we vary the value of K. Figure 5.2 shows the locus of the pole for K ≥ 0.
In the figure, the pole is at −6 when K = 5, and the pole is at −11 when K = 10. Figure 5.2 is
the root locus of the simple system. We can see that the locus moves to the left as the gain is
increased. Figure 5.3 shows step responses for two controller gains.
From Figure 5.3, we can see that the response for K = 10 is faster than the response for
K = 5. If we relate this finding to the root locus in Figure 5.2, we can see that moving the
pole location to the left makes the system respond faster. The location of the pole is closely
related to the system response, and it is possible to find how the system performs from the
location of the pole.
Example 5.1 Let us draw the root locus of the system in Figure 5.4 for K ≥ 0. The transfer function of the closed-loop system is as follows:

Y(s)/R(s) = GT(s) = [K/(s(s + 1))] / [1 + K/(s(s + 1))] = K / (s^2 + s + K)    (5.4)
The characteristic equation of the system is as follows:
s2 + s + K = 0 (5.5)
The roots of the above equation are as follows:
s = (−1 ± √(1 − 4K)) / 2    (5.6)
Depending on the value of K, the roots are either real or complex numbers. For 0 ≤
K ≤ 1/4, the roots are real. For 1/4 < K, the characteristic equation has two complex
roots as follows:

s = (−1 ± j√(4K − 1)) / 2    (5.7)
The real part of the above complex root is always −1/2, and the magnitude of the
imaginary part increases as the value of K is increased. Figure 5.5 shows the loci of the
roots. For K = 0, the roots start at s = 0 and s = −1 and move along the real axis towards each
other. Then, after meeting at −1/2, they depart from the real axis and move towards
infinity. The root locus plot shows the movements of two roots of the closed-loop
system. In general, the number of branches of the root locus is the same as the number
of roots. Therefore, since the system in this example is a second-order system, the root
locus in this example has two branches.
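A root locus plot for this example can be produced in MATLAB in the same way as in the later examples (this snippet is an added note following that pattern):

g=tf(1,[1 1 0]);        % open-loop transfer function 1/(s(s+1))
rlocus(g)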
The above examples show the root loci for simple systems. In order to define the root
locus generally, consider the system in Figure 5.6.
In the system, K is a constant gain. The transfer function of the above closed-loop system is
as follows:
Y(s)/R(s) = GT(s) = KG(s) / (1 + KG(s)H(s))    (5.8)
In the previous chapters, it is mentioned that the stability and the transient performance of
the system are determined by the poles of the closed-loop transfer function. The poles of the
system can be found by finding the roots of the characteristic equation. The characteristic
equation is obtained by setting the denominator of the closed-loop transfer function to zero.
The characteristic equation of the system in Figure 5.6 is as follows:
1 + KG(s)H(s) = 0 (5.9)
The roots of the above equation are the poles of the system. The root locus is defined as the
set of loci of the poles on the complex plane as the gain K is varied. Also, it is the same as
the set of loci of the roots of Eq. (5.9). Let us assume that G(s)H(s) is as follows:
G(s)H(s) = (s^m + b_{m−1} s^{m−1} + ··· + b_1 s + b_0) / (s^n + a_{n−1} s^{n−1} + ··· + a_1 s + a_0)    (5.10)
In the above transfer function, let us assume that n is greater than or equal to m. The reason for this assumption is discussed in a later chapter.
It is not difficult to draw root loci of simple systems, as shown in the above examples.
However, finding the root locus of the high-order system is not a simple task. In the next
section, the properties of the root loci and the rules for plotting are discussed.

5.2 Properties of the Root Loci and Rules for Plotting

The root locus is the collection of points on the complex plane satisfying the following characteristic equation:
1 + KG(s)H(s) = 0 (5.11)
In order to find the rules for plotting the root locus, we need to examine the conditions for
the points on the root locus to satisfy. Eq. (5.11) can be changed to the following equation:
G(s)H(s) = −1/K    (5.12)
We assume that K in Eq. (5.12) is a real number and K ≥ 0. The case when K is less than
zero is discussed later. The right-hand side of Eq. (5.12) is a negative real number and can
be represented as follows:
−1/K = (1/K) ∠(180° + 360°k),   (k = 0, ±1, ±2, ···)    (5.13)
The angle of the above value can be 180° or −180° depending on the value of k; however, these represent the same angle. Eq. (5.12) can be represented as follows:

|G(s)H(s)| ∠(G(s)H(s)) = (1/K) ∠(180° + 360°k),   (k = 0, ±1, ±2, ···)    (5.14)

By comparing the magnitudes and the angles of both sides, we obtain the following magnitude and angle conditions:

|G(s)H(s)| = 1/K    (5.15)

∠(G(s)H(s)) = 180° + 360°k,   (k = 0, ±1, ±2, ···)    (5.16)
In order to use Eq. (5.16) for plotting the root locus, we need to review the subtraction of
complex numbers and the angle of its resultant on the complex plane. As a simple example,
consider the following complex number subtraction.
s1 = 2 + j, s2 = 1, s1 − s2 = 1 + j (5.17)
Let us show how to find the angle ∠ (s1 − s2 ) on the complex plane. Figure 5.7 is the graphical
representation of the above subtraction on the complex plane. In the figure, s1 − s2 can be
represented by the vector pointing from s2 to s1 . The angle ∠ (s1 − s2 ) can be found by the
angle between the real axis and the vector s1 − s2 . In this example, we can easily see that the
angle ∠ (s1 − s2 ) is 45 degrees.
In order to explain how to use Eq. (5.16) for plotting the root locus, let us consider a sys-
tem with two complex poles and one zero, as shown in Figure 5.8. The open-loop transfer
function G(s)H(s) of the system is as follows:
G(s)H(s) = (s − z1) / ((s − p1)(s − p2))    (5.18)
The root locus of this system is a collection of all the points satisfying Eq. (5.16). If any point
s1 on the complex plane satisfies the following equation, it is on the root locus; otherwise, it
is not.
∠ (G(s1 )H(s1 )) = 360◦ k + 180◦ (5.19)
The above angle for this system can be found as follows:
∠(G(s1)H(s1)) = ∠[(s1 − z1) / ((s1 − p1)(s1 − p2))] = ∠(s1 − z1) − ∠(s1 − p1) − ∠(s1 − p2)    (5.20)
Figure 5.8 shows the angles between the complex number vectors and the real axis, and the angles are as follows:

∠(s1 − z1) = θ1,   ∠(s1 − p1) = θ2,   ∠(s1 − p2) = θ3    (5.21)

Therefore, we can determine if the point s1 is on the root locus by finding if s1 satisfies the
following equation.
∠ (G(s1 )H(s1 )) = θ1 − θ2 − θ3 = 360◦ k + 180◦ (5.22)
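The angle condition can also be checked numerically at a candidate point. The following MATLAB sketch (an added illustration; the pole and zero locations are assumed values, not taken from Figure 5.8) evaluates ∠(G(s1)H(s1)) for a transfer function of the form of Eq. (5.18):

p1 = -1 + 1j;  p2 = -1 - 1j;  z1 = -2;     % assumed pole and zero locations
s1 = -3;                                   % candidate test point
GH = (s1 - z1)/((s1 - p1)*(s1 - p2));
ang = rad2deg(angle(GH));                  % angle of G(s1)H(s1) in degrees
fprintf('angle = %g deg, on root locus: %d\n', ang, abs(mod(ang,360) - 180) < 1e-6);

For this choice of s1 the angle is 180 degrees, so the point satisfies Eq. (5.22) and lies on the root locus.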
However, it is not possible to check whether the above relationship is satisfied for every point on the complex plane. When we sketch the root locus, we use the rules obtained from the above relationship. The rules for plotting the root locus are explained in the following for the case K ≥ 0. The case for negative K is discussed later. When we sketch the root locus, we mark arrows to show the directions of increasing K. In the range of K ≥ 0, the root loci start at K = 0 and move towards K = ∞.
The root locus is a set of loci of the roots of the characteristic equation. Since it is assumed
that the coefficients of the characteristic equations are real, the complex roots are always
conjugate. The complex conjugate is symmetric about the real axis; in turn, the root locus is
symmetric about the real axis.
Number of Branches
Substituting Eq. (5.10) into Eq. (5.11), the characteristic equation can be written as follows:

1 + K (s^m + b_{m−1} s^{m−1} + ··· + b_1 s + b_0) / (s^n + a_{n−1} s^{n−1} + ··· + a_1 s + a_0) = 0    (5.23)
Since it is assumed that n is the same or higher than m, the order of the above equation is
n, which is the order of the denominator. Since the root locus is the loci of the roots of the
above equation, the number of branches of the root locus is n. Note that n is the same as the
number of poles of the open-loop transfer function G(s)H(s).
The root loci start at K = 0. We can obtain the following equation by plugging K = 0 into Eq.
(5.11).
1 + 0 · G(s)H(s) = 0 (5.25)
Since the above equation is not valid when G(s)H(s) has a finite value, G(s)H(s) should be
infinity in the above equation. Note that the open-loop transfer function G(s)H(s) is infinity
at poles. Therefore, the root loci start at the pole locations of G(s)H(s). For example, the root
loci of the Example 5.1 start at s = 0 and s = −1, which are the poles of G(s)H(s).
The root loci ends at K = ∞. We can obtain the following equation by plugging K = ∞
into Eq. (5.11).
1 + ∞ · G(s)H(s) = 0 (5.26)
In order for the above relationship to be valid, G(s)H(s) should be 0. The above relationship
does not hold for a non-zero G(s)H(s). The open-loop transfer function G(s)H(s) is 0 at zeros.
Therefore, the root loci end at zeros of the open-loop transfer function G(s)H(s). In Example
5.1, the root loci end at infinity. Note that the open-loop transfer function of Example 5.1
does not have any finite zeros. When the order of the denominator is higher than the order of the numerator, the remaining root loci end at zeros at infinity.
We can determine if the point on the real axis belongs to the root locus easily using Eq.
(5.16). As a simple example, consider the pole-zero map in Figure 5.8 again. Let us suppose
a point s1 is on the real axis, as shown in Figure 5.9.
In Figure 5.9, we can see that the sum of the angles θ2 and θ3 is 360 degrees due to the
symmetry of complex conjugate poles. Since θ1 is 180 degrees, we have the following relationship for the point s1 on the real axis:

∠(G(s1)H(s1)) = θ1 − θ2 − θ3 = 180° − 360° = −180°    (5.27)
From the above relationship, we can see that the point s1 on the real axis belongs to the root
locus. Since we have the same relationship for all the points on the left side of the zero z1 ,
they all belong to the root locus. If the zero z1 is replaced by a pole, θ1 is -180 degrees, which
is the same angle as 180 degrees. Also, note that we have the same result if the complex poles
are replaced by the complex zeros.
In the case of Figure 5.10, θ1 is zero and does not contribute to the sum of the angle. If the
zero z1 is replaced by a pole, θ1 is still zero. We can ignore all the poles and zeros on the left
side of the test point s1 when we determine if the test point s1 is on the root locus.
Next, consider the case when we have two zeros on the real axis, as in Figure 5.11. In this case, the two zeros to the right of s1 contribute 180 degrees each, while the complex conjugate poles again contribute 360 degrees, so we have the following relationship:

∠(G(s1)H(s1)) = 180° + 180° − 360° = 0°    (5.28)

Since the above relationship does not satisfy the angle condition for the root locus, the test point s1 is not on the root locus.
In general, we have the following relationship for the test point s1:

∠(G(s1)H(s1)) = 360°k + 180°(Np + Nz)    (5.29)
In the above equation, k = 0, ±1, ±2, · · · , Np is the number of poles on the right side of s1 ,
and Nz is the number of zeros on the right side of s1 . In order for Eq. (5.29) to satisfy the
condition of Eq. (5.16), Np + Nz should be an odd number. In other words, for the test point
s1 to be on the root locus, the sum of the number of poles and zeros on the right side of s1
should be an odd number.
If the degree n of the denominator of the open-loop transfer function G(s)H(s) is higher
than the degree m of the numerator, G(s)H(s) has n − m zeros at infinity. The root loci start at
n poles and end at zeros. Since we have m finite zeros, n − m branches go to infinity. In order
to draw the root locus, we need to find asymptotic lines for the branches that go to infinity.
In order to examine how the root locus behaves when K is infinity, let us rewrite Eq.
(5.23) as follows:
(s^n + a_{n−1} s^{n−1} + ··· + a_1 s + a_0) / (s^m + b_{m−1} s^{m−1} + ··· + b_1 s + b_0) = −K    (5.30)
If we do the division on the left-hand side of the above equation, we have the following
equation.
s^{n−m} + (a_{n−1} − b_{m−1}) s^{n−m−1} + ··· = −K    (5.31)
In the above equation, s should be infinity when K is infinity. For s at infinity, we have the
following approximate equation by ignoring non-dominant terms in Eq. (5.31).
s^{n−m} (1 + (a_{n−1} − b_{m−1})/s) ≈ −K    (5.32)
We have the following equation by taking the 1/ (n − m)-th power of the above equation.
s (1 + (a_{n−1} − b_{m−1})/s)^{1/(n−m)} = (−K)^{1/(n−m)}    (5.33)
For s at infinity, we can obtain the following equation by taking the first and second terms
of the expanded series for the left-hand side of Eq. (5.33).
s + (a_{n−1} − b_{m−1}) / (n − m) = (−K)^{1/(n−m)}    (5.34)
The right-hand side can be written as follows:

(−K)^{1/(n−m)} = K^{1/(n−m)} [cos((2k + 1)π/(n − m)) + j sin((2k + 1)π/(n − m))]    (5.35)

In the above equation, k = 0, ±1, ±2, ···. Substituting s = σ + jω into Eq. (5.34) and using Eq. (5.35), we have the following equation:

σ + jω + (a_{n−1} − b_{m−1}) / (n − m) = K^{1/(n−m)} [cos((2k + 1)π/(n − m)) + j sin((2k + 1)π/(n − m))]    (5.36)

By comparing the real and imaginary parts of Eq. (5.36), we have the following equations:

σ + (a_{n−1} − b_{m−1}) / (n − m) = K^{1/(n−m)} cos((2k + 1)π/(n − m))    (5.37)

ω = K^{1/(n−m)} sin((2k + 1)π/(n − m))    (5.38)
If we divide Eq. (5.38) by Eq. (5.37), we have the following equation:

ω / (σ + (a_{n−1} − b_{m−1})/(n − m)) = sin((2k + 1)π/(n − m)) / cos((2k + 1)π/(n − m))    (5.39)
We can rewrite the above equation as follows:
ω = tan((2k + 1)π/(n − m)) · (σ + (a_{n−1} − b_{m−1})/(n − m))    (5.40)
Note that the above equation represents a straight line. The above equation can be simplified
as follows:
ω = M (σ − σ0 ) (5.41)
M is the slope of the straight line and can be written as follows:
M = tan((2k + 1)π/(n − m))    (5.42)

or

M = tan((2k + 1) · 180°/(n − m))    (5.43)
Then, we can find the angle between the asymptote and the real axis as follows:
φ = (2k + 1) · 180°/(n − m),   k = 0, ±1, ±2, ···    (5.44)
We can also find the intersection of the asymptote and the real axis as follows:
σ0 = −(a_{n−1} − b_{m−1}) / (n − m)    (5.45)
Let us assume that the open-loop transfer function G(s)H(s) is as follows:
G(s)H(s) = ((s − z1)(s − z2) ··· (s − zm)) / ((s − p1)(s − p2) ··· (s − pn))    (5.46)
Expanding the denominator and the numerator, we can obtain the following equations.
(s − p1)(s − p2) ··· (s − pn) = s^n − (p1 + p2 + ··· + pn) s^{n−1} + ···    (5.47)
(s − z1)(s − z2) ··· (s − zm) = s^m − (z1 + z2 + ··· + zm) s^{m−1} + ···    (5.48)
Using the above relationships, Eq. (5.45) can be written as follows:
σ0 = (−a_{n−1} + b_{m−1}) / (n − m) = (Σ_{i=1}^{n} pi − Σ_{i=1}^{m} zi) / (n − m)    (5.49)
Example 5.2 Draw the root locus of the system in Figure 5.12 for K ≥ 0. The open-loop transfer function is as follows:

G(s)H(s) = 1 / (s(s^2 + 2s + 2))    (5.50)

The poles of the open-loop transfer function are at the following locations:

s = 0, −1 ± j    (5.51)
The open-loop transfer function has no finite zero. Since the transfer function has
three poles and no finite zero, we have three asymptotes. Using Eq. (5.44), the angles
of asymptotes are found as follows:

φ = 60°, −60°, 180°    (5.52)

The above values are found by plugging k = 0, ±1 into Eq. (5.44). Using Eq. (5.49), the
intersection on the real axis is found as follows:
σ0 = ((−1 + j) + (−1 − j) − 0) / (3 − 0) = −2/3    (5.53)
Figure 5.13 shows three asymptotes. The root locus for this system is shown in Figure
5.14. We can see that three root loci start at poles and approach zeros at infinity along
with the asymptotes found above. When K = 0, the closed-loop system has a pole
at the origin; in turn, the system is not asymptotically stable. As we increase K, this closed-loop pole moves away from the origin to the left, and the closed-loop system becomes stable. In
the meantime, two root loci starting at complex poles move toward the imaginary axis.
If we further increase K, the closed-loop poles move into the right half-plane; in turn,
the closed-loop system becomes unstable. Therefore, for the closed-loop system to be
stable, K must be greater than zero and less than a certain value. We discuss later how
to find the range of K that makes the closed-loop system stable.
MATLAB We can draw the root locus of this example using the following MATLAB
commands.
g=tf(1,[1 2 2 0]);
rlocus(g)
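The asymptote data computed by hand above can also be obtained numerically. The following MATLAB sketch (an added illustration) evaluates Eq. (5.44) and Eq. (5.49) directly from the open-loop poles and zeros of this example:

p = [0, -1+1j, -1-1j];                    % open-loop poles of Example 5.2
z = [];                                   % no finite zeros
n = numel(p);  m = numel(z);
k = 0:(n - m - 1);
phi = (2*k + 1)*180/(n - m);              % asymptote angles in degrees (Eq. 5.44)
sigma0 = real(sum(p) - sum(z))/(n - m);   % real-axis intersection (Eq. 5.49)
fprintf('asymptote angles: %s degrees\n', mat2str(phi));
fprintf('intersection sigma0 = %g\n', sigma0);

The printed angles 60, 180, and 300 degrees are the same as ±60° and 180°, and sigma0 = −2/3 agrees with Eq. (5.53).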
Example 5.3 Draw the root locus of the system in Figure 5.15 for K ≥ 0. The poles of
the open-loop transfer function G(s)H(s) are as follows:
s = 0, −1 ± j (5.54)
The poles are at the same locations as in Example 5.2; however, G(s)H(s) has one finite
zero at the following location:
s = −1 (5.55)
Since G(s)H(s) has two zeros at infinity, we have two asymptotes. Using Eq. (5.44), the asymptote angles can be obtained as follows:

φ = 90°, −90°    (5.56)

The above values are found by plugging k = 0, −1 into Eq. (5.44). Using Eq. (5.49), the
intersection on the real axis is found as follows:
σ0 = ((−1 + j) + (−1 − j) − (−1)) / (3 − 1) = −1/2    (5.57)
Figure 5.16 shows the two asymptotes. Figure 5.17 shows the root locus of the system. The root locus starting at the origin moves towards the zero at −1. The two root loci starting at the complex poles approach the zeros at infinity along the asymptotes shown in Figure 5.16. We can see that the closed-loop system is stable for K > 0.
MATLAB We can draw the root locus of this example using the following MATLAB
commands.
g=tf([1 1],[1 2 2 0]);
rlocus(g)
axis([-3 3 -3 3])
We can draw the approximate root locus using the above rules; however, we can add de-
tails to the root locus using the following rules.
If we compare the root loci in Example 5.2 and Example 5.3, we can see that the angles
of departure from the complex poles are different from each other. If we can determine the
angle of departure from a complex pole, we can draw a more accurate root locus. We can
determine the angle using Eq. (5.16). Consider an example in Figure 5.18. Let us assume
that the test point s1 is on the root locus and very close to the pole p1 .
Since we assume that s1 is on the root locus, s1 satisfies the following relationship.
Also, since we assume that s1 is very close to the pole p1 , the angles θ1 and θ3 can be regarded
as the same as the angles obtained from the vectors pointing to the pole p1 instead of s1 . In
this way, we can determine the departure angle θ2 by plugging θ1 and θ3 into Eq. (5.58).
Example 5.4 Let us find the departure angles of the root locus in Example 5.2. Figure 5.19 shows the vectors to calculate the angle of departure from the pole −1 + j. In Figure 5.19, let us assume that s1 is very close to the pole −1 + j. Then, we can assume that θ1 = 135° and θ3 = 90°, and we can obtain the following relationship:

∠(G(s1)H(s1)) = −θ1 − θ2 − θ3 = −135° − θ2 − 90° = 360°k + 180°    (5.59)

We can obtain θ2 = −45° by plugging k = −1 into the above equation. Similarly, the
angle of departure from the pole −1 − j is 45◦ .
Example 5.5 Let us find the departure angles of the root locus in Example 5.3. Figure 5.20 shows the vectors to calculate the angle of departure from the pole −1 + j. In Figure 5.20, let us assume that s1 is very close to the pole −1 + j. Then, we can assume that θ1 = 135°, θ3 = 90°, and θ4 = 90°, and we can obtain the following relationship:

∠(G(s1)H(s1)) = θ4 − θ1 − θ2 − θ3 = 90° − 135° − θ2 − 90° = 360°k + 180°    (5.60)

Note that we have to add θ4 since it is the angle for a zero. We can obtain θ2 = 45°
by plugging k = −1 into the above equation. Similarly, the angle of departure from the
pole −1 − j is −45◦ .
If the open-loop transfer function G(s)H(s) has finite zeros, the root locus ends at the
zeros. Using a similar method, we can find the angles of arrival to the zeros, as shown in the
following example.
Example 5.6 Draw the root locus for the following open-loop transfer function:

G(s)H(s) = (s^2 + 2s + 2) / (s^2(s + 2))    (5.61)
The open-loop transfer function has three poles and two finite complex zeros. There-
fore, one root locus moves to infinity, and two root loci arrive at two complex conjugate
zeros −1 ± j. Figure 5.21 shows the vectors to calculate the angle of arrival at the zero
−1 + j. In Figure 5.21, let us assume that s1 is very close to the zero −1 + j. Then, we
can assume that θ1 = 135◦ , θ3 = 90◦ , and θ4 = 45◦ , and we can obtain the following
relationship.
∠(G(s1)H(s1)) = −2θ1 + θ2 + θ3 − θ4 = −2 × 135° + θ2 + 90° − 45° = 360°k + 180°    (5.62)
Note that θ1 is multiplied by 2 since it is the angle for the double pole. By plugging
k = 0 into Eq. (5.62), we can obtain θ2 = 405◦ , which is equivalent to θ2 = 45◦ . Simi-
larly, the angle of arrival at the zero −1 − j is −45◦ . Let us find the angles of departure
from the double pole at the origin. The left and right-hand sides of the origin on the
real axis do not belong to the root locus according to the real axis condition previously
explained. Therefore, the root loci depart in the direction of the complex planes. Fig-
ure 5.22 shows the vectors to calculate the angle of departure. In Figure 5.22, let us
assume that s1 is very close to the double pole at the origin. Then, we can assume that
θ2 = 315◦ , θ3 = 45◦ , and θ4 = 0◦ , and we can obtain the following relationship.
∠(G(s1)H(s1)) = −2θ1 + θ2 + θ3 − θ4 = −2θ1 + 315° + 45° − 0° = 360°k + 180°    (5.63)
From the above equation, we can obtain the following equation for θ1:

θ1 = 90° − 180°k    (5.64)

Since the above equation has the term 180°k, we can obtain two different angles for θ1.
Note that we have two root loci departing from the double pole. By plugging k = 0, 1
into Eq. (5.64), we can obtain two departure angles θ1 = 90◦ , 270◦ . Figure 5.23 shows
the root locus, and we can see that the departure angles coincide with the above result.
Example 5.7 Draw the root locus for the following open-loop transfer function:

G(s)H(s) = (s + 2) / s^3    (5.65)
The above transfer function has a triple pole at the origin and one zero on the real axis.
Figure 5.24 shows the vectors to calculate the departure angles. In Figure 5.24, let us
assume that s1 is very close to the pole at the origin. Then, we can assume θ2 = 0°, and we can obtain the following relationship:

∠(G(s1)H(s1)) = θ2 − 3θ1 = 0° − 3θ1 = 360°k + 180°    (5.66)

The angle θ1 is multiplied by three since it is the angle for the triple pole. From the above equation, we can obtain the following equation for θ1:

θ1 = −60° − 120°k    (5.67)
Since the above equation has the term 120◦ k, we can obtain three different angles
for θ1 . By plugging k = 0, 1, 2 into Eq. (5.67), we have three departure angles θ1 =
−60◦ , 60◦ , 180◦ . Figure 5.25 shows the root locus, and we can see that the departure
angles coincide with the above result.
MATLAB We can draw the root locus using the following MATLAB commands.
g=tf([1 2],[1 0 0 0]);
rlocus(g)
axis([-3 3 -3 3])
When we use the rlocus MATLAB command, the resulting plot may not be smooth.
This problem can be solved by increasing the number of points in the gain vector K as
in the following MATLAB commands.
In the rlocus MATLAB command, ‘b-‘ is the option for the solid blue line.
When the root locus crosses over the imaginary axis, we may need to know the intersection point on the imaginary axis. Knowing the intersection point is important since it is the boundary between the stable and unstable regions. The intersection point can be found using the Routh-Hurwitz method, as the following example shows.
Example 5.8 Let us consider Example 5.2 and find the value of K and the intersec-
tion point when the root locus crosses over the imaginary axis. The following is the
characteristic equation of the system.
s3 + 2s2 + 2s + K = 0 (5.68)
Build the following Routh-Hurwitz table to find the range of K that makes the system
stable.
s^3 | 1    2
s^2 | 2    K
s   | (4 − K)/2
1   | K                                                              (5.69)
According to the above table, the range of K that makes the system stable is 0 < K < 4.
The system becomes unstable when the root locus crosses over the imaginary axis.
Therefore, the value of K is 4 when the root locus crosses over the imaginary axis. We
can obtain the following equation by plugging K = 4 into Eq. (5.68).
s3 + 2s2 + 2s + 4 = 0 (5.70)
The roots of the above equation are s = −2, ±1.414j. We can obtain the same result by
plugging K = 4 into Eq. (5.69) as follows:
s^3 | 1    2
s^2 | 2    4
s   | 0
1   | 4                                                              (5.71)
Note that the above table has an all-zero row. It is the special case of the Routh-
Hurwitz table, and we can see that Eq. (5.70) has a factor of 2s2 + 4. By solving the
following equation, we can obtain the roots on the imaginary axis.
2s^2 + 4 = 2(s^2 + 2) = 0    (5.72)
MATLAB The root locus in this example passes the imaginary axis at K = 4. We
can print the roots for the values of K in the range of 3 ≤ K ≤ 5 using the following
MATLAB commands.
K=3:0.1:5;                 % gains around the expected crossing value K = 4
g=tf([1],[1 2 2 0]);
[r,K]=rlocus(g,K);         % closed-loop roots for each gain in K
[K' r(1,:)' r(2,:)' r(3,:)']
In the printed result, the leftmost column shows the values of K, and the remaining columns are the three roots. We can see that the roots are on the imaginary axis when K = 4.
The root loci departing from the poles on the real axis may meet on the real axis. The root
loci move away from the real axis after meeting on the real axis. As an example, consider the
following open-loop transfer function.
G(s)H(s) = s^2 / (s^2 + 3s + 2)    (5.73)
Figure 5.26 shows the root locus for the above transfer function.
The transfer function Eq. (5.73) has poles at −1 and −2, and two root loci departing from the
poles move away from the real axis after meeting on the real axis. If we know the breakaway
point, it will help us to draw the root locus more accurately. On the breakaway point, the
characteristic equation of the closed-loop system has a double root, and the system is crit-
ically damped in the case of the second-order system. Since the breakaway point is on the
real axis, we assume that the complex variable s has only a real part as follows:
s = σ    (5.74)

Substituting s = σ into Eq. (5.11) and solving for K, we obtain the following relationship:

K = −1 / (G(σ)H(σ))    (5.76)
Using Eq. (5.76), we can draw the graph of K for −2 ≤ σ ≤ −1 as shown in Figure 5.27.
The values of K are zero at both ends of the range since the root loci start at poles. As the
root loci move away from the poles, the value of K continues to increase until two root loci
meet on the real axis. In Figure 5.27, K has the maximum value at σmax , where the root locus
has a breakaway point.
Next, consider the following open-loop transfer function.
G(s)H(s) = (s^2 + 3s + 2) / s^2    (5.77)
In Figure 5.28, the root loci depart from the double pole at the origin and move toward the
real axis until they meet on the real axis. Then, the root loci move away from each other
until they arrive at the zeros on the real axis. Using Eq. (5.76), we can draw the graph of K
for −2 ≤ σ ≤ −1 as shown in Figure 5.29.
As the root loci move towards the zeros, the value of K continues to increase until the two root loci arrive at the zeros. In Figure 5.29, K has the minimum value at σmin, where the root locus has a breakaway point.
As we can see from the above two examples, we can find the breakaway point by finding
the point where Eq. (5.76) has either a maximum or minimum value. By taking the derivative
of K, we have the following relationship.
dK/dσ = d/dσ (−1 / (G(σ)H(σ))) = 0    (5.78)
For ease of explanation, we assumed that the breakaway point is on the real axis; however, it may be anywhere on the complex plane. Therefore, Eq. (5.78) can be generalized to the following relationship, where s is a complex variable:
dK/ds = d/ds (−1 / (G(s)H(s))) = 0    (5.79)
Let us assume that G(s)H(s) is given as a ratio of two polynomials:

G(s)H(s) = b(s) / a(s)    (5.80)

Then K = −a(s)/b(s), and Eq. (5.79) leads to the following condition:

b(s) (da(s)/ds) − a(s) (db(s)/ds) = 0    (5.82)
Example 5.9 Let us find the breakaway point for the root locus of Eq. (5.73). Using Eq. (5.82), we have the following:

a(s) = s^2 + 3s + 2    (5.83)
b(s) = s^2    (5.84)
da(s)/ds = 2s + 3    (5.85)
db(s)/ds = 2s    (5.86)

b(s) (da(s)/ds) − a(s) (db(s)/ds) = s^2(2s + 3) − (s^2 + 3s + 2)(2s) = 0    (5.87)
By solving the above equation, we have s = −4/3, which coincides with the root locus
in Figure 5.26.
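The same computation can be automated. The following MATLAB sketch (an added illustration) forms the left-hand side of Eq. (5.82) with polynomial operations and finds its roots:

a = [1 3 2];                              % a(s) = s^2 + 3s + 2
b = [1 0 0];                              % b(s) = s^2
lhs = conv(b, polyder(a));                % b(s)*da/ds
rhs = conv(a, polyder(b));                % a(s)*db/ds
roots(lhs - rhs)                          % candidate breakaway points

The candidates are s = 0 and s = −4/3; the breakaway point between the poles −1 and −2 is s = −4/3, in agreement with Eq. (5.87).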
In the previous examples, the breakaway points are on the real axis. The next example
shows the case when the breakaway points are on the complex plane.
Example 5.10 Draw the root locus for the following open-loop transfer function:

G(s)H(s) = 1 / (s(s + 2)(s^2 + 2s + 5))    (5.88)
The transfer function has poles at s = 0, −2, −1 ± j2, and no finite zero. Therefore, we
have four asymptotes, and the angles and the intersection point of the asymptotes are
as follows:
φ = ±45◦ , ±135◦ (5.89)
σ0 = ((−1 + j2) + (−1 − j2) + (−2) − 0) / (4 − 0) = −1    (5.90)
On the real axis, we have an odd number of poles and zeros to the right of points in the
section between −2 and 0. Therefore, this section belongs to the root locus. The root
locus departing from 0 moves to the left, and the root locus departing from −2 moves
to the right. Then, they meet with each other on the real axis. Using Eq. (5.82), the
breakaway points can be obtained as follows:
a(s) = s(s + 2)(s^2 + 2s + 5) = s^4 + 4s^3 + 9s^2 + 10s    (5.91)
b(s) = 1    (5.92)
da(s)/ds = 4s^3 + 12s^2 + 18s + 10    (5.93)
db(s)/ds = 0    (5.94)

b(s) (da(s)/ds) − a(s) (db(s)/ds) = 4s^3 + 12s^2 + 18s + 10 = 0    (5.95)
The roots of Eq. (5.95) are s = −1, −1 ± j1.225, and we can see that s = −1 is the break-
away point on the real axis. However, we need to check if s = −1 ± j1.225 can also be
breakaway points for the root loci. To do so, let us find the angles of departure from
the poles s = −1 ± j2. Figure 5.30 shows the vectors to find the angle of departure from −1 + j2. The following is the condition for s1 to be on the root locus:

∠(G(s1)H(s1)) = −θ1 − θ2 − θ3 − θ4 = 360°k + 180°    (5.96)

At the pole −1 + j2, the angles θ1 and θ4 from the poles at 0 and −2 add up to 180°, and θ3 = 90° from the pole at −1 − j2, so the condition gives −180° − θ2 − 90° = 360°k + 180°.
We have θ2 = −90◦ from the above equation and can see that the root locus departing
from −1 + j2 moves downward. Similarly, the departure angle from −1 − j2 is θ2 =
90◦ , and the root locus moves upward. Let us suppose that s1 is on the straight line
between the poles −1 + j2 and −1 − j2. Then, since θ1 + θ4 = 180° and the angles from the two complex poles cancel each other, ∠(G(s1)H(s1)) = −180°, which satisfies the angle condition.
In other words, every point on the line segment between the poles −1 + j2 and −1 − j2
satisfies the angle condition and belongs to the root locus. The points s = −1 ± j1.225,
which are the solutions of Eq. (5.95), are on the straight line segment and on the root
locus. Therefore, we can see that these points can be breakaway points on the complex
plane, as shown in Figure 5.31.
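MATLAB The breakaway candidates of Eq. (5.95) and the root locus of Eq. (5.88) can be checked as follows; since b(s) = 1, Eq. (5.82) reduces to da(s)/ds = 0.
a = conv([1 0], conv([1 2], [1 2 5]));   % a(s) = s^4 + 4s^3 + 9s^2 + 10s
roots(polyder(a))                        % -1 and -1 +/- j1.225, as in Eq. (5.95)
rlocus(tf(1, a))                         % compare with Figure 5.31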
We can find the value of K of the point on the root locus using the following relationship, as
shown in the next example.
K = −1/(G(s)H(s))    (5.98)
Example 5.11 Let us find the value of K of the breakaway point on the real axis in
Example 5.9. The breakaway point is s = −4/3, and K can be obtained as follows:
K = −1/(G(s)H(s)) = −(s² + 3s + 2)/s²
  = −[(−4/3)² + 3(−4/3) + 2]/(−4/3)² = 0.125    (5.99)
In other words, the closed-loop system has a double pole at s = −4/3 for the controller
gain K = 0.125.
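MATLAB The same value can be obtained numerically with evalfr, taking G(s)H(s) = b(s)/a(s) = s²/(s² + 3s + 2) as implied by the a(s) and b(s) of Example 5.9.
gh = tf([1 0 0], [1 3 2]);     % open-loop transfer function implied by Example 5.9
K  = -1/evalfr(gh, -4/3)       % Eq. (5.98) gives K = 0.125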
So far, we have investigated the root locus for the following characteristic equation in the
range K ≥ 0.
1 + KG(s)H(s) = 0 (5.100)
Now, let us draw the root locus in the range K ≤ 0. If K = −∞, Eq. (5.100) is satisfied when
s is at the zeros of G(s)H(s). We already know that Eq. (5.100) is satisfied at poles for K = 0.
In the range K ≤ 0, the root locus starts at zeros and ends at poles of G(s)H(s).
We can rewrite Eq. (5.100) as follows:
G(s)H(s) = −1/K    (5.101)
The right-hand side of the above equation is positive since K has a negative value. Then, we have the following angle condition for the root locus.
∠(G(s)H(s)) = 360◦ k,  k = 0, ±1, ±2, · · ·    (5.102)
If we compare the above equation with Eq. (5.16), the angle on the right-hand side is differ-
ent by 180 degrees. When we determine if the point s1 on the real axis is on the root locus,
we used the following relationship.
∠(G(s1)H(s1)) = 360◦ k + 180◦ (Np + Nz)    (5.103)
In the above equation, k = 0, ±1, ±2, · · · , Np is the number of poles to the right of s1 , and Nz
is the number of zeros to the right of s1 . In order for s1 to be on the root locus, Eq. (5.103)
should satisfy the condition of Eq. (5.102). Therefore, if the number of poles and zeros to the right of s1 is even, s1 is on the root locus.
Similarly, the slope of the asymptotes is changed to the following equation.
M = tan(360◦ k/(n − m))    (5.104)
The intersection of the asymptotes and the real axis is not changed. When we find the angles
of departure and arrival, we use the angle condition in Eq. (5.102).
Exemple 5.12 Let us consider the system in Example 5.2 again. Draw the root locus
for the following open-loop transfer function in the range K ≤ 0.
1
G(s)H(s) = (5.105)
s (s2 + 2s + 2)
The above transfer function has three poles at the following locations:
s = 0, −1 ± j (5.106)
The transfer function has no finite zero. Therefore, we have three asymptotes. Using
Eq. (5.104), the angles of asymptotes are as follows:
φ = 0◦, ±120◦    (5.107)
The above angles are obtained by plugging k = 0, ±1 in Eq. (5.104). Using Eq. (5.49),
the intersection is obtained as follows:
σ₀ = [(−1 + j) + (−1 − j) − 0]/(3 − 0) = −2/3    (5.108)
Figure 5.32 shows the asymptotes. The root loci depart from zeros at infinity and arrive
at poles along with the asymptotes in Figure 5.32. Let us find the angle of arrival at
poles −1 ± j. Figure 5.33 shows the vectors to find the angle of arrival at pole −1 + j.
Using Eq. (5.102), we have the following angle condition for the test point s1 , which is
assumed to be very close to the pole. Figure 5.34 shows the resulting root locus. As we can see, the system is unstable for K ≤ 0. If we combine the root locus in
Figure 5.34 with the root locus in Figure 5.14, we can obtain the complete root locus
for the whole range of K, i.e., from −∞ to ∞.
MATLAB If we add a negative sign to the transfer function, we can draw the root
locus using MATLAB for the negative gain, as in the following.
g=tf(-[1],[1 2 2 0])
rlocus(g)
Consider the open-loop transfer function below, for which the root locus was drawn in Figure 5.5.
G(s)H(s) = 1/(s(s + 1))    (5.110)
Let us add a pole at s = −2 and draw the root locus. The following is the open-loop transfer
function with the added pole.
G(s)H(s) = 1/(s(s + 1)(s + 2))    (5.111)
Figure 5.35 shows the root locus for the above transfer function.
Note that the root locus in Figure 5.5 stays in the left half-plane, and the closed-loop system
is stable in the range K > 0. However, Figure 5.35 shows that the root locus is pushed to the
direction of the right half-plane by adding a pole; in turn, the closed-loop system becomes
unstable for a large value of K. The change in the root locus can be predicted from the
asymptotic angles. In the root locus of Figure 5.5, since the difference between the degrees
of the numerator and denominator is two, we have the following equation for the asymptote
angles.
φ = (2k + 1)180◦/2,  k = 0, ±1, ±2, · · ·    (5.112)
Therefore, we have φ = ±90◦ from the above equation. On the other hand, in the root locus
of Figure 5.35, since the difference between the degrees of the numerator and denominator
is three, we have the following equation for the asymptote angles.
φ = (2k + 1)180◦/3,  k = 0, ±1, ±2, · · ·    (5.113)
Therefore, we have φ = ±60◦ , 180◦ from the above equation, and the root loci are pushed
to the direction of the right half-plane. We can see that adding a pole has an adverse effect
on stability. Let us investigate further by adding one more pole to the open-loop transfer
function. By adding a pole at s = −3, we have the following transfer function.
G(s)H(s) = 1/(s(s + 1)(s + 2)(s + 3))    (5.114)
Figure 5.36 shows the root locus for the above transfer function.
If we compare the root locus of Figure 5.36 with Figure 5.35, we can see that the root loci
cross over the imaginary axis at the smaller value of K. Therefore, we have a smaller range
of K that makes the system stable. The following is the angle of asymptotes.
φ = (2k + 1)180◦/4,  k = 0, ±1, ±2, · · ·    (5.115)
We have φ = ±45◦ , ±135◦ from the above equation. Since the difference between the degrees
of the numerator and denominator is four at this time, we have smaller values of the angles.
As another example, let us consider the following open-loop transfer function obtained
by adding a zero s = −3 to Eq. (5.111).
G(s)H(s) = (s + 3)/(s(s + 1)(s + 2))    (5.118)
Figure 5.37 shows the root locus for the above transfer function.
The root locus in Figure 5.37 is pushed to the left in comparison with the root locus in
Figure 5.35. Therefore, we have an increased range of K that makes the system stable in
Figure 5.37. From the above examples, we can see that adding poles has an adverse effect on
stability, while adding zeros has a favorable effect on stability. This fact can be used in the
controller design, as shown in the next section.
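MATLAB A minimal sketch that draws the four root loci above side by side, so the effect of the added poles and zero can be compared directly.
g1 = tf(1, [1 1 0]);                           % Eq. (5.110): 1/(s(s+1))
g2 = tf(1, conv([1 1 0], [1 2]));              % Eq. (5.111): added pole at s = -2
g3 = tf(1, conv(conv([1 1 0], [1 2]), [1 3])); % Eq. (5.114): added pole at s = -3
g4 = tf([1 3], conv([1 1 0], [1 2]));          % Eq. (5.118): added zero at s = -3
subplot(2,2,1), rlocus(g1), title('Eq. (5.110)')
subplot(2,2,2), rlocus(g2), title('Eq. (5.111)')
subplot(2,2,3), rlocus(g3), title('Eq. (5.114)')
subplot(2,2,4), rlocus(g4), title('Eq. (5.118)')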
First, consider a constant gain controller D(s) = K for the system in Figure 5.38. In order
to investigate the steady-state performance, the steady-state error is found in the following.
ess = 1/(1 + Kp) = 1/(1 + K/2) = 2/(2 + K)    (5.119)
We can see that K should be increased to reduce the steady-state error. Figure 5.39 shows
unit step responses for the following values of K.
K = 3, K = 8, K = 38    (5.120)
As can be seen in Eq. (5.119) and Figure 5.39, the steady-state error is reduced as K is
increased. However, the overshoot is excessive for the large value of K. In other words, even
though increasing K improves steady-state performance, it makes the transient performance
worse. To confirm this result, let us draw the root locus. The following is the characteristic
equation of the closed-loop system, and Figure 5.40 shows the root locus.
1 + K · 1/((s + 1)(s + 2)) = 0    (5.121)
As we can see in Figure 5.40, the roots of the characteristic equation move away from the
origin as K is increased; in turn, the angle θ between the vector pointing to the pole and the
imaginary axis becomes smaller. Since the damping ratio ζ of the second-order system is
sin θ, the damping ratio ζ becomes smaller as K is increased. This explains why we have an
excessive overshoot for a large K. Therefore, we may not be able to find the controller gain
K such that both the transient and the steady-state performance are satisfactory.
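MATLAB The step responses of Figure 5.39 and the root locus of Figure 5.40 can be reproduced with the sketch below, which assumes the plant of Figure 5.38 is G(s) = 1/((s + 1)(s + 2)), consistent with Eq. (5.119) and Eq. (5.121).
G = tf(1, conv([1 1], [1 2]));       % assumed plant of Figure 5.38
figure(1), hold on
for K = [3 8 38]                     % gains of Eq. (5.120)
    step(feedback(K*G, 1), 5)        % larger K: smaller steady-state error, larger overshoot
end
hold off, grid on
figure(2), rlocus(G), grid on        % root locus of Eq. (5.121), compare with Figure 5.40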
With a PD controller of the form D(s) = K(1 + s/3), the following is the characteristic equation, and Figure 5.41
shows the root locus.
1 + K (s + 3)/(3(s + 1)(s + 2)) = 0    (5.126)
Figure 5.41 Root locus of the system in Figure 5.38 with the PD controller
As we can see in Eq. (5.126), the open-loop transfer function has an additional zero at s = −3.
Adding a zero to the open-loop transfer function pushes the root locus to the left and has a
positive effect on the stability, as we can see in Figure 5.41. If we compare the root locus in
Figure 5.41 with Figure 5.40, the angles θ for the root locus in Figure 5.41 are larger than in
Figure 5.40; in turn, the damping ratios have larger values. Figure 5.42 shows improved unit
step responses. As we can see, the magnitudes of the overshoot are reduced considerably due
to the increased damping ratios.
Figure 5.42 Unit step responses of the system in Figure 5.38 with the PD controller
If the transfer function D(s) of the controller has a variable s, we call it a dynamic controller or a dynamic compensator. The PD controller is the simplest form of the dynamic
controllers. When the controller transfer function D(s) has a variable s, the controller can be
represented by a differential equation. There are various ways to design dynamic controllers.
We discuss how to design the dynamic controllers in the rest of this book.
The following is the general form of the lead controller.
D(s) = K (s/b + 1)/(s/a + 1),  a > b    (5.127)
The lead controller is similar to the PD controller, except that the lead controller has an
additional pole. The lead controller is also called the phase-lead controller, or the lead com-
pensator.
Let us consider the following lead controller for the system in Figure 5.38.
D(s) = K (s/3 + 1)/(s/11 + 1)    (5.128)
Note that the above controller can be regarded as the PD controller with an additional pole
at s = −11. Figure 5.43 shows the root locus for this controller. Remember that the pole of
this controller is always on the left side of the controller’s zero.
If we compare the root locus in Figure 5.43 with Figure 5.40, we can see that the root locus
in Figure 5.43 is pushed to the left. If the root locus is pushed to the left, the damping ratio
is increased. Figure 5.44 shows the unit step responses for the lead controller. As we can
see in the figure, the unit step responses show improved transient responses with reduced
overshoots, even though not as much as the PD controller.
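MATLAB A sketch of the lead controller of Eq. (5.128) applied to the same assumed plant 1/((s + 1)(s + 2)); the root locus can be compared with Figure 5.43 and the step responses with Figure 5.44.
G = tf(1, conv([1 1], [1 2]));       % assumed plant of Figure 5.38
D = tf([1/3 1], [1/11 1]);           % lead controller of Eq. (5.128) without the gain K
figure(1), hold on
for K = [3 8 38]
    step(feedback(K*D*G, 1), 5)      % reduced overshoot compared with D(s) = K
end
hold off, grid on
figure(2), rlocus(D*G), grid on      % compare with Figure 5.43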
As we move the pole location of the lead controller to the left, the lead controller resem-
bles the PD controller. As a limit case, if a = ∞, Eq. (5.127) is of the same form as Eq. (5.122).
Figure 5.45 shows the root loci for the pole locations of the lead controller at s = −15 and
s = −20 with the zero at s = −3.
Figure 5.45 Root locus for the pole locations of the lead controller at s = −15 and s = −20
The root loci in Figure 5.45 resemble the root loci for the PD controller in Figure 5.41, except
they have branches going to infinity. The branches move to the left and have little effect on
the system performance as we move the poles to the left. On the other hand, if we move the
pole of the lead controller to the right, the root locus for the lead controller resembles the
root locus for the constant gain controller in Figure 5.40. Figure 5.46 shows the root locus
for the pole locations of the lead controller at s = −14, −11, −8, −3. Note that if a = 3, the pole
cancels the zero of the controller.
Figure 5.46 Root loci for the pole locations of the lead controller at s = −14, −11, −8, −3
The PI (proportional-integral) controller has the transfer function D(s) = K (1 + 1/(Ti s)). If we convert the controller output to the time domain, we have the following equation for the controller output signal.
u(t) = K [ e(t) + (1/Ti) ∫₀ᵗ e(τ) dτ ]    (5.132)
The PI controller has the proportional term and the integral term. The proportional term
is the error signal e(t) = r(t) − y(t) multiplied by a constant gain, and the integral term is
the integral of the error signal multiplied by a constant gain. The PI controller in the unity
feedback system increases the system type by one; in turn, it improves the steady-state per-
formance.
Let us apply the PI controller to the system in Figure 5.38. As we can see in Figure
5.39, when D(s) = 3 is applied, the transient response is relatively acceptable. However, the
steady-state error is quite large for this controller. Let us consider the following PI controller
to reduce the steady-state error. Figure 5.47 shows the step response for the PI controller.
D(s) = 3 (1 + 0.8/s) = 3(s + 0.8)/s    (5.133)
Figure 5.47 Unit step response for the PI controller Eq. (5.133)
As we can see in Figure 5.47, the PI controller reduces the steady-state error to zero.
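MATLAB The step response of Figure 5.47 can be reproduced with the PI controller of Eq. (5.133), again assuming the plant 1/((s + 1)(s + 2)).
G = tf(1, conv([1 1], [1 2]));       % assumed plant of Figure 5.38
D = tf(3*[1 0.8], [1 0]);            % D(s) = 3(s + 0.8)/s, Eq. (5.133)
step(feedback(D*G, 1), 10), grid on  % the steady-state error goes to zero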
Even though the PI controller is effective in reducing the steady-state error, the controller
itself is unstable since the controller transfer function has a pole at the origin. In general,
the controller does not necessarily have to be stable; however, an unstable controller poses several
problems. First, the unstable controller is not easy to test since the controller alone diverges.
Second, the control system that is controlled by the PI controller becomes unstable when the
feedback loop is broken for some reason.
To avoid having an unstable controller, we may consider the following controller.
D(s) = 3(s + 0.8)/(s + 0.08) = 30 (s/0.8 + 1)/(s/0.08 + 1)    (5.134)
The above controller is obtained by moving the pole at the origin in Eq. (5.133) slightly to
the left. Then, we have a stable controller. Figure 5.48 shows the step response for the above
controller.
Figure 5.48 Unit step response for the controller Eq. (5.134)
Since the controller Eq. (5.134) does not increase the system type, the unit step response
in Figure 5.48 has a steady-state error. However, the controller reduces the steady-state
error considerably when compared with the constant gain controller. As mentioned in the
previous chapter, the position error constant is defined as follows:
Kp = lim(s→0) D(s)G(s)    (5.135)
Using the position error constant, the steady-state error for the unit step input is as follows:
ess = 1/(1 + Kp)    (5.136)
If we plug s = 0 into Eq. (5.134), we have a value of 30; in turn, the position error constant is ten times larger than with the constant gain controller. Therefore, we have a reduced steady-state error. This effect is different from simply increasing the gain. The effect of this form of the controller is discussed in a later chapter.
The controller in the form of Eq. (5.134) is called the lag controller or phase-lag con-
troller. The following is the general form of the lag controller.
D(s) = K (s/b + 1)/(s/a + 1),  a < b    (5.137)
The transfer function of the lag controller has the same form as the lead controller; however,
the pole and zero locations are the opposite. The lag controller has a pole to the right of the
zero, while the lead controller has a pole to the left of the zero.
The previous example shows that we can use the lag controller to reduce the steady-state
error. We can also use the lag controller to improve the transient response. Figure 5.39
shows the step response of the system in Figure 5.38 when D(s) = 38 is used. Since the gain
is quite large, the steady-state error is reduced; however, the overshoot is excessive due to
the low damping ratio. Let us use the following lag controller for the system. Note that the
position error constant remains the same for the following controller. Figure 5.49 shows the
step response.
D(s) = 38 (s/0.7 + 1)/(s/0.07 + 1) = (38/10)(s + 0.7)/(s + 0.07)    (5.138)
Figure 5.49 Unit step response for the controller Eq. (5.138)
As we can see, the overshoot is reduced considerably. Figure 5.50 shows the root locus.
Let us compare the root locus in Figure 5.50 with Figure 5.40. We can see that the roots
at the gain K = 38 in Figure 5.50 are much closer to the real axis than in Figure 5.40; in turn,
they have a larger damping ratio. In Figure 5.40, the roots at K = 38 are s = −1.5 ± j6.14, and the damping ratio is as follows:
ζ = 1.5/√(1.5² + 6.14²) = 0.237    (5.139)
In Figure 5.50, the roots at the same gain are s = −1.22 ± j1.73, and the damping ratio is as follows:
ζ = 1.22/√(1.22² + 1.73²) = 0.576    (5.140)
We can see that the damping ratio of the roots in Figure 5.50 is larger than in Figure 5.40
at the same gain. Remember that increasing the gain reduces the damping ratio when we
use a constant gain controller. However, we can maintain a large gain without reducing the
damping ratio if we use the lag controller. Consider Eq. (5.98), which enables us to find the
gain on the root locus. If we use the lag controller Eq. (5.137), Eq. (5.98) can be written as
follows:
|K| = 1 / ( |(s/b + 1)/(s/a + 1)| · |G(s)H(s)| ) = |(s/a + 1)/(s/b + 1)| · 1/|G(s)H(s)|    (5.141)
In general, when we design the lag controller, we usually choose small values for a and b.
If the point s on the root locus is far from the origin, Eq. (5.141) can be approximated as
follows:
|K| ≈ (b/a) · 1/|G(s)H(s)| > 1/|G(s)H(s)|    (5.142)
Since a < b for the lag controller, we can see that the gain with the lag controller can be
larger than without the lag controller. Therefore, with the lag controller, we can reduce the
steady-state error by increasing the gain without reducing the damping ratio.
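MATLAB The comparison above can be checked with the sketch below, again assuming the plant 1/((s + 1)(s + 2)); damp lists the closed-loop damping ratios obtained with the lag controller of Eq. (5.138).
G    = tf(1, conv([1 1], [1 2]));          % assumed plant of Figure 5.38
Dlag = 38*tf([1/0.7 1], [1/0.07 1]);       % lag controller of Eq. (5.138)
step(feedback(38*G, 1), 5), hold on        % constant gain D(s) = 38
step(feedback(Dlag*G, 1), 5), hold off, grid on
legend('D(s) = 38', 'lag controller Eq. (5.138)')
damp(feedback(Dlag*G, 1))                  % damping ratio of about 0.58 at the same gain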
5.5 Lab 5
5.5.1 PD Control of an Analog Dynamic Simulator
In this lab, we use the same analog dynamic simulator used in section 4.6.2. In addition to
a constant gain controller, we add a derivative term to implement the PD controller. Figure
5.51 shows the block diagram of the system with the PD controller.
u(t) = Kp e(t) + Kd de(t)/dt    (5.144)
The transfer function of the closed-loop system in Figure 5.51 is as follows:
Y(s)/R(s) = 10(sKd + Kp)/(0.1s² + s + 10(sKd + Kp))    (5.145)
The above transfer function has two poles and one zero. Having a zero does not affect the
stability of the system; however, the output of the system may have an undesirable overshoot
depending on the type of the input reference signal r. If the input reference is the step
function, the output signal contains the derivative of the step function. Since the derivative
of the step function is an impulse function, it may cause the overshoot regardless of the
damping ratio. To avoid this problem, we may reconfigure the system, as shown in Figure
5.52.
After the code generation, open the main source file and type in the following code.
/* USER CODE BEGIN Includes */
#include "stdio.h"
/* USER CODE END Includes */
Code 5.1
Code 5.5
Code 5.6
Code 5.7
data_counter=0;
data_flag=2;
}
ref=205;
}
if (interrupt_counter >= sampling_frequency*2) {
ref=0;
}
if (data_flag==2) {
if (data_counter<=sampling_frequency*4) {
data[data_counter++]=(int16_t)y;
}
else {
data_done=1;
}
}
control = Kp*(float)(ref-y)-(float)sampling_frequency*Kd*(float)(y-oldy);  /* PD law: Kp*e + Kd*de/dt, derivative taken on the measurement, de/dt ~ -(y-oldy)*fs */
oldy=y;
if (control > 2047) control = 2047;      /* saturate to the signed 12-bit range */
if (control < -2048) control = -2048;
da_value = control + 2048;               /* shift to the unsigned 0..4095 DAC range */
HAL_DAC_SetValue(&hdac, DAC_CHANNEL_2, DAC_ALIGN_12B_R, (uint32_t)(da_value));
}
/* USER CODE END Callback 0 */
Code 5.8
Figure 5.55 shows the step response. If we compare the response with Figure 4.50, we can
see that the PD controller increases the damping ratio and reduces the overshoot.
Now, let us save the response data and transfer it to the PC for plotting. Start the ter-
minal program SmarTTY and open the serial port. Press the user button (blue color) on the
STM32F429 Discovery board and wait for a while (longer than four seconds). Then, you are
able to see the data captured on the terminal screen. The first column is the index of the
data array, and the second column is the A/D value. Since the sampling frequency is 1 kHz
and the period of the reference input is 4 seconds, we have 4000 data points for one period
of the response curve. After the data capture is finished, press the diskette icon in the menu
bar, type in the data file name of your choice, and save the file. Before you start capturing
data, make sure to clear the screen by pressing the clear screen icon (red-cross).
With the data file in the same folder, run the following MATLAB script file to draw the
response curve. Figure 5.57 shows the response curve drawn by the following MATLAB
script.
clear
clf
sf=1000;
load -ascii data
for i=1:4*sf
x(i)=data(i,1)/sf;
y(i)=data(i,2)/205;
end
figure(1)
plot(x,y)
axis([0 4 -1 2])
xlabel('Time(sec)');
ylabel('Output (Volt)');
grid on
Code 5.9
}
/* USER CODE END 0 */
Code 5.12
Code 5.13
Code 5.14
Code 5.15
interrupt_counter=0;
if (data_flag==1) {
data_counter=0;
data_flag=2;
ref = 192;
}
else {
ref = 0;
}
}
if (interrupt_counter >= sampling_frequency*1) {
ref=0;
}
if (data_flag==2) {
if (data_counter<=sampling_frequency*2) {
data[data_counter++]=(int16_t)y;
}
else {
data_done=1;
}
}
control = Kp*(float)(ref-y)-(float)sampling_frequency*Kd*(float)(y-oldy);
oldy=y;
if (control > 2047) control = 2047;
if (control < -2048) control = -2048;
da_value = control + 2048;
HAL_DAC_SetValue(&hdac, DAC_CHANNEL_2, DAC_ALIGN_12B_R, (uint32_t)(da_value));
}
/* USER CODE END Callback 0 */
Code 5.16
In this program, the motor stays at the initial position until you press the user button
(blue color). Before you press the user button, open the serial port in the terminal program
SmarTTY. When you press the user button, the motor turns to the reference position and
returns to the initial position after one second. Then, the program prints the captured re-
sponse data on the terminal screen. You can save the data by pressing the diskette icon on
the SmarTTY program.
With the data file in the same folder, the following MATLAB script draws the response
curve.
clear
clf
sf=1000;
load -ascii data
for i=1:2*sf
x(i)=data(i,1)/sf;
y(i)=data(i,2)/384;
end
figure(1)
plot(x,y)
axis([0 .2 0 1])
xlabel('Time(seconds)');
ylabel('Revolution');
grid on
Code 5.17
Figure 5.58 shows the step response for Kp = 8.0 and Kd = 0.0. Without the derivative
term in the controller, we can see that the step response has an excessive overshoot.
To reduce the overshoot, let us add the derivative term. Figure 5.59 and Figure 5.60 show
the step responses for Kp = 8.0, Kd = 0.02 and Kp = 8.0, Kd = 0.04, respectively.
We can do the simulations using the DC motor model obtained in section 3.8.2. Figure
5.61 shows the block diagram for the simulation. Note that the simulation model must
include the gains for the encoder and the D/A converter in addition to the motor model.
Figure 5.61 Block diagram for the position control system of the DC motor
Kp=8;Kd=0.0;Ki=0;
D=Kp+tf([Ki],[1 0])
G=tf([2*55.6*384/(2*3.14*204.8)],[0.015 1 0])
G1=feedback(G, tf([Kd 0],[1]))
Gt=feedback(D*G1,1)
t=0:0.001:0.2;
R=0.5*ones(size(t));
figure(2)
lsim(Gt,R,t)
pole(Gt)
axis([0 .2 0 1])
xlabel('Time');
ylabel('Revolution');
grid on
Code 5.18
Figure 5.62, Figure 5.63, and Figure 5.64 show the step response simulations for the
same controller gains as above. Note that the simulation model does not include the nonlinear friction. The simulations show quite a similar tendency to the experimental responses.
However, the experimental responses decay faster and have smaller overshoots than the simula-
tions due to the nonlinear friction.
Problems
Problem 5.1 Repeat Example 5.1 with the following transfer functions. Sketch the
root locus using the same method as in Example 5.1.
(1) 1/((s + 1)(s + 2))   (2) 1/((s − 1)(s + 2))   (3) 1/s²
Problem 5.2 Consider the following characteristic equation.
1 + K/(s(s + 1)(s + 4)) = 0,  K ≥ 0
(1) Find the sections that belong to the root locus on the real axis.
(2) Draw the asymptotes for K → ∞.
(3) Find the root locus crossing points on the imaginary axis. Also, find the gain K at
the crossing point.
(4) Sketch the root locus and compare it with the root locus drawn by MATLAB.
Problem 5.3 Consider the following characteristic equation.
1 + K/((s + 1)(s² + 2s + 2)) = 0,  K ≥ 0
(1) Find the sections that belong to the root locus on the real axis.
(2) Draw the asymptotes for K → ∞.
(3) Find the root locus crossing points on the imaginary axis. Also, find the gain K at
the crossing point.
(4) Find the departure angles from the complex poles.
(5) Sketch the root locus and compare it with the root locus drawn by MATLAB.
Problem 5.4 Consider the following characteristic equation.
1 + K(s + 2)/((s + 1)(s² + 2s + 2)) = 0,  K ≥ 0
(1) Find the sections that belong to the root locus on the real axis.
(2) Draw the asymptotes for K → ∞.
(3) Find the departure angles from the complex poles.
(4) Sketch the root locus and compare it with the root locus drawn by MATLAB.
Problem 5.5 Consider the following characteristic equation.
1 + K(s² + 2s + 2)/(s(s + 1)(s + 2)) = 0,  K ≥ 0
(1) Find the sections that belong to the root locus on the real axis.
(2) Find the multiple root points on the real axis. Also, find K at the multiple root points.
(3) Find the arrival angles at the complex zeros.
(4) Sketch the root locus and compare it with the root locus drawn by MATLAB.
Problem 5.6 Draw the root loci of the following characteristic equations for K ≥ 0.
(1) s3 + 5s2 + 6s + K = 0
(2) s3 + 5s2 + 6s + K(s + 1) = 0
(3) s3 + s2 + s + K(s2 + 2s + 2) = 0
Problem 5.7 Draw the root loci of the systems in Problem 4.13.
Problem 5.8 Consider the system in Figure 5.6. Sketch the root loci for the following
open-loop transfer functions in the range of K ≥ 0, and compare them with the root
loci drawn by MATLAB.
(1) G(s)H(s) = 1/(s³ + s²)
(2) G(s)H(s) = 1/((s + 1)(s + 2)(s² + 2s + 2))
(3) G(s)H(s) = (s + 3)/((s + 1)(s + 2)(s² + 2s + 2))
Problem 5.9 Sketch the root loci for the systems in Problem 5.8 in the range of K ≤ 0,
and compare them with the root loci drawn by MATLAB.
1 + K/(s(s + 1)²) = 0,  K > 0
(1) Draw the root locus and find the multiple root points on the real axis. Also, find
the value of K at the multiple root points.
(2) Let the value of K found in part (1) be K0 . Plug K = K0 + K1 in the above equation
and rearrange the equation as in the following form.
1 + K1 G1 (s) = 0
Draw the root locus for the above characteristic equation when K1 is the variable gain.
Compare it with the root locus obtained in part (1).
Problem 5.13 The following is the characteristic equation of the second-order proto-
type system.
s² + 2ζωn s + ωn² = 0
Draw the root locus of the characteristic equation as the value of ζ is varied from 0 to
∞ for the cases given below.
(a) ωn = 1 (b) ωn = 2
Problem 5.14 Consider the unity feedback control system for the following plant
transfer function.
G(s) = 1/(s(s + 2)²)
Suppose we want to control the above system with the following PD controller.
D(s) = K (1 + Td s)
(1) Determine K such that the steady-state error for the unit ramp input is 0.05.
(2) With the value of K determined in part (1), we want to find the range of Td that
makes the closed-loop system stable. Draw the root locus as the value of Td is varied.
From the root locus, find the range of Td that makes the closed-loop system stable.
(3) Since the system is a third-order system, the root locus has three branches. If we
ignore the branch on the real axis, we can regard this system as a second-order system. By
considering the complex roots only, find the value of Td such that the damping ratio ζ
has the maximum value. Draw the unit step response for the value using MATLAB.
Problem 5.15 Consider the unity feedback control system for the following plant
transfer function.
G(s) = 1/(s(s + 1)(s + 5))
We want to investigate the root loci and the unit step responses for the various con-
trollers D(s).
(1) When D(s) = K, draw the root locus and the unit step response using MATLAB.
(2) When D(s) = K(s + 1), draw the root locus and the unit step response using MAT-
LAB.
(3) When D(s) = K(s+1)/(0.2s+1), draw the root locus and the unit step response using
MATLAB.
(4) When D(s) = K(s+1)/(0.5s+1), draw the root locus and the unit step response using
MATLAB.
Problem 5.16 Consider the unity feedback control system for the following plant
transfer function.
G(s) = 1/((s + 1)(s + 0.2))
We want to control the above system using the PI controller as follows:
D(s) = K1 + K2/s
(1) Find K2 such that the steady-state error for the unit ramp input is 0.05.
(2) With the value of K2 determined in part (1), draw the root locus of the closed-loop
system as the value of K1 is varied. Find the range of K1 such that the closed-loop
system is stable from the root locus.
(3) Since the system is a third-order system, the root locus has three branches. If we
ignore the branch on the real axis, we can regard this system as a second-order system. By
considering the complex roots only, find the value of K1 such that the damping ratio ζ
has the maximum value. Draw the unit step response for the value using MATLAB.
6. Frequency Response Analysis
Sinusoidal Wave
In order to discuss the frequency response, we need to define the sinusoidal wave signal.
Figure 6.2 shows the definition of the sinusoidal wave, which is mathematically described
by A sin ωt.
Suppose a rod with a length of A is rotating around the origin at a constant speed. If we
draw the projected length of the rod on the vertical axis with respect to time, we have the
sinusoidal wave signal. In Figure 6.2, ω is the radial frequency (angular frequency) in the
unit of rad/sec. The radial frequency is the angle swept by the rod per second. In Figure
6.2, the bottom sinusoidal wave curve is drawn with respect to time, while the top curve is drawn with respect to the angle in radians. Per revolution, the rod sweeps 2π radians during 2π/ω seconds.
Therefore, the period T can be defined as follows:
T = 2π/ω    (6.1)
Since the frequency f in Hz is the reciprocal of the period T , we have the following relation-
ships.
f = 1/T = ω/(2π),   ω = 2πf = 2π/T    (6.2)
In the above relationships, the frequency f is in the unit of Hz, while the radial frequency
ω is in the unit of rad/sec. When we encounter the term frequency, we have to be careful
about the unit. In the fields of engineering, the radial frequency is used more frequently.
Consider the following linear time-invariant system. Let us assume that the system is stable.
Y(s)/U(s) = G(s)    (6.3)
Suppose the sinusoidal signal shown below is applied to the input of the system.
u(t) = A sin ωt (6.4)
In steady-state, the output of the system is the sinusoidal signal with the same frequency as
the input, as shown in Figure 6.3.
Figure 6.3 Steady-state output of the system with the sinusoidal input
y(t) = A |G(jω)| sin(ωt + ∠G(jω))    (6.5)
In the above equation, the amplitude of the output is A |G(jω)|. In other words, |G(jω)|
is the gain of the system. Remember that the gain is the ratio of the output divided by
the input. Note that the gain |G(jω)| is dependent on the frequency ω. The magnitude
of gain changes as we vary the frequency. The magnitude frequency response is defined
as the function |G(jω)|. Similarly, the phase frequency response is defined as the function
∠G(jω). Note that ∠G(jω) is the phase angle difference between the input and the output.
The frequency response is usually represented as a set of the magnitude and phase angle
graphs with respect to frequency.
The steady-state output in Eq. (6.5) can be derived as in the following paragraph. For
simplicity, assume that G(s) is of the following form:
G(s) = b(s)/((s − p1)(s − p2) · · · (s − pn))    (6.6)
Let us find the response y(t) using the Laplace transform. The Laplace transform of the
output y(t) is as follows:
Y(s) = G(s)U(s) = G(s) · Aω/(s² + ω²)    (6.7)
After the partial fraction expansion, we can obtain the following equation.
Y(s) = α1/(s − p1) + α2/(s − p2) + · · · + αn/(s − pn) + α0/(s + jω) + α0*/(s − jω)    (6.8)
In the above equation, α0∗ and α0 are complex conjugates. By taking the inverse Laplace
transform of Eq. (6.8), we can obtain the following equation.
y(t) = α1 e^(p1 t) + α2 e^(p2 t) + · · · + αn e^(pn t) + α0 e^(−jωt) + α0* e^(jωt)    (6.9)
In Eq. (6.9), the last two terms are particular solutions, and the rest of the terms are ho-
mogeneous solutions. We can find the steady-state output by taking the limit of Eq. (6.9)
as t → ∞. Since the system is assumed to be stable, all the homogeneous solution terms
converge to zero. Therefore, we have the following steady-state solution.
y(t) = α0 e^(−jωt) + α0* e^(jωt)    (6.10)
The coefficients for the partial fraction expansion can be found as follows:
α0 = G(s) · Aω/(s² + ω²) · (s + jω) |s=−jω = −(A/j2) G(−jω) = −(A/j2) |G(jω)| e^(−j∠G(jω))    (6.11)
α0* = G(s) · Aω/(s² + ω²) · (s − jω) |s=jω = (A/j2) G(jω) = (A/j2) |G(jω)| e^(j∠G(jω))    (6.12)
Then, Eq. (6.10) can be changed to the following equation.
y(t) = −(A/j2) |G(jω)| e^(−j∠G(jω)) e^(−jωt) + (A/j2) |G(jω)| e^(j∠G(jω)) e^(jωt)
     = A |G(jω)| [ −e^(−j(ωt+∠G(jω)))/j2 + e^(j(ωt+∠G(jω)))/j2 ]
     = A |G(jω)| sin(ωt + ∠G(jω))    (6.13)
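MATLAB The result of Eq. (6.13) can be verified numerically with lsim: after the transient dies out, the output amplitude equals A|G(jω)|. The transfer function below is an arbitrary stable example.
G = tf(1, [1 2 2]);                  % any stable transfer function will do
A = 1; w = 3;
t = 0:0.001:20;
y = lsim(G, A*sin(w*t), t);
gain_sim   = max(y(t > 15))          % steady-state output amplitude from simulation
gain_exact = A*abs(evalfr(G, 1j*w))  % A|G(jw)| from Eq. (6.13); the two values agree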
By applying KCL (Kirchhoff’s Current Law), we can obtain the following differential equa-
tion.
C dy/dt = (u − y)/R    (6.14)
In the above equation, the left-hand side is the current flowing through the capacitor, and the
right-hand side is the current flowing through the resistor. By taking the Laplace transform,
we have the following equation.
CsY(s) = (U(s) − Y(s))/R    (6.15)
If we assume the zero initial condition, the transfer function is as follows:
Y(s)/U(s) = G(s) = 1/(RCs + 1)    (6.16)
|G(jω)| = |1/(jRCω + 1)| = 1/√((RCω)² + 1)    (6.17)
∠G(jω) = ∠(1/(jRCω + 1)) = −tan⁻¹(RCω)    (6.18)
Figure 6.5 shows the frequency response for the following values.
Figure 6.6 and Figure 6.7 show the time responses for the following two frequencies.
The amplitude of the input sinusoidal wave is 1. Remember that the radial frequency in
rad/sec is related to the frequency in Hz as in the following relationship:
ω = 2πf (6.21)
If you read the magnitude frequency response in Figure 6.5 at ω = 2π rad/sec, the gain is 0.847. This gain coincides with the time response in Figure 6.6. Similarly, if you read the magnitude frequency response in Figure 6.5 at ω = 6π rad/sec, the gain is 0.469. Again,
this gain coincides with the time response in Figure 6.7. It is not easy to read the phase angle
differences in time responses; however, we may be able to obtain similar results from the
phase frequency response.
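MATLAB The gains quoted above can be checked as follows. The value RC = 0.1 s is an assumption; it is the time constant consistent with the gains 0.847 and 0.469 read from Figure 6.5.
RC = 0.1;                            % assumed time constant of the circuit in Figure 6.4
G  = tf(1, [RC 1]);                  % Eq. (6.16)
abs(evalfr(G, 1j*2*pi))              % about 0.847 at w = 2*pi rad/sec
abs(evalfr(G, 1j*6*pi))              % about 0.469 at w = 6*pi rad/sec
bode(G), grid on                     % compare with Figure 6.5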
A transfer function is said to be proper if the degree of the denominator is the same or
higher than the degree of the numerator. A transfer function is said to be strictly proper if
the degree of the denominator is higher than the degree of the numerator. If the transfer
function is strictly proper, |G(jω)| converges to zero as ω → ∞. Since |G(jω)| is the gain, the output of a system with a strictly proper transfer function goes to zero as the input frequency goes to infinity. If the transfer function is not strictly proper, |G(jω)| has a non-zero value at infinite frequency. In the physical world, no system can have a non-zero gain at infinite frequency. When we try to control a plant in the real world, we can therefore assume that the transfer function of the plant is strictly proper.
Next, consider the second-order prototype system with the following transfer function.
G(s) = ωn²/(s² + 2ζωn s + ωn²) = 1/((s/ωn)² + (2ζs/ωn) + 1)    (6.22)
The magnitude frequency response function of the above second-order system is as follows:
|G(jω)| = 1/|(jω/ωn)² + (j2ζω/ωn) + 1|    (6.23)
For simplicity, assume ωn = 1. Figure 6.8 shows the magnitude frequency responses for the following damping ratios:
ζ = 0.3, 0.5, 0.7, 0.9    (6.24)
We can see that the peak of the magnitude frequency responses increases as the damping
ratio is decreased. Figure 6.9 shows the pole locations for the damping ratios in Eq. (6.24)
when ωn = 1. The poles follow the unit circle, centered at the origin, as the damping ratio
is varied while ωn = 1. Let us try to relate the pole locations in Figure 6.9 to the magnitude
frequency responses in Figure 6.8. Then, we can see that the poles approach the imaginary
axis, and the peak of the magnitude frequency responses increases as the damping ratio is
decreased.
To understand the relationship between the frequency response and poles, let us consider
the magnitude of the transfer function as follows:
|G(s)| = 1/|(s/ωn)² + (2ζs/ωn) + 1|    (6.25)
Note that |G(s)| is a function of the complex variable s = σ + jω. The graphical representation of the function |G(s)| is a surface graph in three-dimensional space. Figure 6.10
shows the graphs of |G(s)| for the damping ratios in Eq. (6.24). If we relate the pole locations
in Figure 6.9 to the graphs in Figure 6.10, we can see that |G(s)| goes to infinity at the poles
of G(s).
Note that |G(jω)| is obtained by plugging s = jω into |G(s)|. Therefore, if we slice the three-dimensional graph of |G(s)| along the imaginary axis, the shape of the cut surface is the
same as the magnitude frequency response |G(jω)|. We can see that the shapes of the cut
surfaces in Figure 6.10 are the same as the graphs in Figure 6.8. If we decrease the damping
ratio, the poles approach the imaginary axis; in turn, the peak of |G(jω)| increases. When
ζ = 0, the poles are on the imaginary axis and the peak of |G(jω)| goes to infinity.
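MATLAB The magnitude frequency responses of Figure 6.8 can be drawn with the sketch below for ωn = 1 and the damping ratios of Eq. (6.24).
wn = 1; w = logspace(-1, 1, 500);
hold on
for zeta = [0.3 0.5 0.7 0.9]              % damping ratios of Eq. (6.24)
    G   = tf(wn^2, [1 2*zeta*wn wn^2]);   % Eq. (6.22)
    mag = squeeze(bode(G, w));
    plot(w, 20*log10(mag))                % the peak grows as zeta decreases
end
hold off, set(gca, 'XScale', 'log'), grid on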
Figure 6.10 Surface plots of |G(s)| for (a) ζ = 0.3, (b) ζ = 0.5, (c) ζ = 0.7, (d) ζ = 0.9
Generally, the frequency response is drawn in the form of the Bode plot. The Bode plot
consists of the magnitude and phase of a transfer function plotted versus frequency in log
scale. In the Bode plot, the magnitude is usually in dB (decibel). The magnitude in dB is
obtained by taking 20 times the common logarithm of the magnitude, as shown below:
|G(jω)|dB = 20 log10 |G(jω)|    (6.26)
Figure 6.11 shows the Bode plot for the circuit in Figure 6.4.
We can draw the frequency response easily using a computer program, such as MAT-
LAB. However, the approximate Bode plot can be drawn without using a computer. In the
following paragraphs, we discuss how to draw the Bode plot approximately without com-
plex calculations. In the past, when computers were not available to everybody, plotting the
frequency responses without complex calculations was very useful to control system engi-
neers. Now, we have computers everywhere, and the merits of the approximate Bode plot have diminished. However, it is still important to know how to draw the approximate Bode
plot because it gives us insight into the system. Also, when we use the computer to draw the
frequency responses, we should be able to check if the result from the computer is correct.
Let us consider the following transfer function for the Bode plot.
G(s) = K (s − z1)(s − z2) · · · / (s^m (s − p1)(s − p2) · · ·)    (6.27)
Since the Bode plot is the graphical representation of the frequency response, it is the graph
of the following function.
G(jω) = G(s)|s=jω = K (jω − z1)(jω − z2) · · · / ((jω)^m (jω − p1)(jω − p2) · · ·)    (6.28)
The above forms of the transfer function help us to identify the poles and zeros easily; how-
ever, the following form is more convenient to draw the Bode plots.
G(jω) = K0 (jωτa + 1)(jωτb + 1) · · · / ((jω)^m (jωτ1 + 1)(jωτ2 + 1) · · ·)    (6.29)
If we represent the magnitude of the above function in dB, we have the following relationships.
|G(jω)|dB = 20 log10 |G(jω)| = 20 log10 | K0 (jωτa + 1)(jωτb + 1) · · · / ((jω)^m (jωτ1 + 1)(jωτ2 + 1) · · ·) |
          = 20 log10 [ |K0| |jωτa + 1| |jωτb + 1| · · · / ( |(jω)^m| |jωτ1 + 1| |jωτ2 + 1| · · · ) ]
          = 20 log10 |K0| + 20 log10 |jωτa + 1| + 20 log10 |jωτb + 1| + · · ·
            − 20 log10 |(jω)^m| − 20 log10 |jωτ1 + 1| − 20 log10 |jωτ2 + 1| − · · ·    (6.30)
In the above relationships, we use the property that the logarithm of a product is the sum of
the logarithms. This property enables us to draw the Bode magnitude plot very easily. After
finding the factors of the transfer function, we can draw the Bode magnitude plot curves
of each factor individually and add them to obtain the Bode magnitude plot for the whole
function.
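MATLAB The additive property of Eq. (6.30) can be checked at a single frequency; the factorization and the numbers below are arbitrary example choices.
K0 = 10; ta = 1; t1 = 0.1; w = 5;         % example factors: K0 (jw*ta+1) / (jw (jw*t1+1))
G  = K0*(1j*w*ta + 1)/((1j*w)*(1j*w*t1 + 1));
total = 20*log10(abs(G));                 % magnitude of the whole function in dB
bysum = 20*log10(K0) + 20*log10(abs(1j*w*ta + 1)) ...
        - 20*log10(abs(1j*w)) - 20*log10(abs(1j*w*t1 + 1));
[total, bysum]                            % equal, so the factor curves can simply be added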
The following is the phase of the frequency response function.
∠G(jω) = ∠[ K0 (jωτa + 1)(jωτb + 1) · · · / ((jω)^m (jωτ1 + 1)(jωτ2 + 1) · · ·) ]
       = ∠K0 + ∠(jωτa + 1) + ∠(jωτb + 1) + · · · − ∠(jω)^m − ∠(jωτ1 + 1) − ∠(jωτ2 + 1) − · · ·    (6.31)
Remember that the phase of the product of two complex numbers is the sum of the phases of
the individual numbers. We can draw the phase curve by adding the individual phases of
the factors.
All transfer functions discussed in this book are composed of the following basic factors.
• K0 s^m
• (sτ + 1), 1/(sτ + 1)
• [(s/ωn)² + (2ζs/ωn) + 1], 1/[(s/ωn)² + (2ζs/ωn) + 1]
In the above factors, all the coefficients are real numbers. Any term with a degree higher than two can be factored into a product of the above terms.
If we know the Bode plots for the above basic factors, the Bode plot for any transfer func-
tion can be obtained by combining the Bode plots for individual factors. In the following
paragraphs, let us find the Bode plots for the above basic factors. The Bode plot can be
drawn using a computer program, such as MATLAB. However, we can draw the asymptotic
Bode plot without using a computer. In the following, we discuss the asymptotic Bode plots
for the basic factors.
K0 s^m
The following is the magnitude of the factor K0 (jω)^m in dB.
|K0 (jω)^m|dB = 20 log10 |K0| + 20m log10 ω    (6.32)
Since the horizontal axis of the Bode plot is the logarithmic value of ω, the above function is a straight line. Figure 6.12 shows the Bode magnitude plot when 20 log10 |K0| = 0.
When the frequency ω is increased by ten times, the corresponding section of the hori-
zontal axis in log scale is called one decade. For example, the section between 0.1 and 1 is
one decade, and the section between 1 and 100 is two decades. We can see that the graph
for m = 1 increases by 20 dB per decade. The slope for m = 1 is 20 dB/dec. Similarly, the
slope for m = 2 is 40 dB/dec, and the slope for m = −1 is -20 dB/dec. In general, the slope
is m times 20 dB/dec. Note that the Bode magnitude plot for (jω)m crosses 0 dB at ω = 1
regardless of the slope. Since 20 log10 |K0| is a constant, it shifts the graph up or down in the vertical direction by that amount.
The following is the phase function for the above factor.
∠(K0 (jω)^m) = ∠K0 + 90◦ m    (6.33)
The above phase function is not dependent on the frequency. ∠K0 is zero for K0 > 0, and 180 degrees for K0 < 0.
(sτ + 1) , 1/ (sτ + 1)
The asymptotic Bode plot is composed of straight lines which the Bode plot curve approaches as the frequency is increased to a large value or decreased to a small value. First, consider the following two extreme cases for (jωτ + 1).
|jωτ + 1|dB ≈ 20 log10 1 = 0,  ω ≪ 1/τ
|jωτ + 1|dB ≈ 20 log10 |ωτ|,  ω ≫ 1/τ    (6.34)
In the case of ω ≫ 1/τ, the approximate function has the same form as Eq. (6.32) for m = 1. Therefore, it is a straight line with a slope of 20 dB/dec. Note that the graph crosses 0 dB at ω = 1/τ. Figure 6.13 shows the asymptotic Bode magnitude plot, and Figure 6.14 shows the accurate Bode magnitude plot for (jωτ + 1) when τ = 1. The difference between
the asymptotic and the accurate curve is maximum at ω = 1/τ. The exact magnitude in dB is as follows:
|jωτ + 1|dB = 20 log10 √((ωτ)² + 1)    (6.35)
If we plug ω = 1/τ into the above equation, we have 20 log10 √2 ≈ 3 dB.
The following are the approximate equations for the phase angle. Figure 6.15 shows
the asymptotic Bode phase plots, and Figure 6.16 shows the accurate Bode phase plot for
(jωτ + 1) when τ = 1.
∠(jωτ + 1) ≈ ∠1 = 0◦,  ω ≪ 1/τ
∠(jωτ + 1) = ∠(j + 1) = 45◦,  ω = 1/τ
∠(jωτ + 1) ≈ ∠(jωτ) = 90◦,  ω ≫ 1/τ    (6.36)
Next, consider the function 1/ (jωτ + 1). Similarly as above, the approximate functions
for two extreme cases are as follows:
|1/(jωτ + 1)|dB = −20 log10 |jωτ + 1| ≈ −20 log10 1 = 0,  ω ≪ 1/τ
|1/(jωτ + 1)|dB = −20 log10 |jωτ + 1| ≈ −20 log10 |ωτ|,  ω ≫ 1/τ    (6.37)
Figure 6.17 shows the asymptotic Bode magnitude plot of 1/(jωτ + 1). Note that the slope of the asymptotic Bode magnitude plot for 1/(jωτ + 1) is -20 dB/dec for ω ≫ 1/τ. Figure 6.18 shows the accurate Bode magnitude plot for 1/(jωτ + 1) when τ = 1. The system with the magnitude frequency response in Figure 6.18 passes low frequency signals, and does not pass high frequency signals. In other words, this system works as a low pass filter. The bandwidth is defined as ω = 1/τ. The frequency ω = 1/τ is called the corner frequency or cut-off frequency.
The following are the approximate phase angles of 1/(jωτ + 1). Figure 6.19 shows
the asymptotic Bode phase plots, and Figure 6.20 shows the accurate Bode phase plot for
1/ (jωτ + 1) when τ = 1.
∠(1/(jωτ + 1)) ≈ −∠1 = 0◦,  ω ≪ 1/τ
∠(1/(jωτ + 1)) = −∠(j + 1) = −45◦,  ω = 1/τ
∠(1/(jωτ + 1)) ≈ −∠(jωτ) = −90◦,  ω ≫ 1/τ    (6.38)
Example 6.1 Let us draw the asymptotic Bode plot of the following transfer function
and compare it with the Bode plot drawn by MATLAB.
G(s) = 1000/(s(s + 10))    (6.39)
Let us change the above transfer function to the following form:
G(jω) = 1000/(s(s + 10)) |s=jω = 100/(jω(j0.1ω + 1))    (6.40)
The corner frequency of the first-order term is 1/τ = 10 rad/sec. Figure 6.21 shows the asymptotic Bode magnitude plot, and Figure 6.22 shows the accurate Bode magnitude plot. Note that the asymptote for the term 1/(j0.1ω + 1) is 0 dB in the low frequency range. Therefore, the asymptotic Bode plot for the
whole transfer function is the same as the asymptotic Bode plot for 100/ (jω) in the low
frequency. Similarly, we can obtain the asymptotic phase plot for the whole transfer
function by adding the asymptotic phase plots for 100/ (jω) and 1/ (j0.1ω + 1). The
asymptotic phase plot for 1/ (j0.1ω + 1) is the case for 1/τ = 10(rad/ sec) in Figure 6.19,
and the phase of the term 100/ (jω) is -90 degrees. Figure 6.23 and Figure 6.24 show
the asymptotic and accurate Bode phase plot, respectively.
MATLAB We can draw the Bode plot in this example using the following MATLAB
commands.
g=tf([1000],[1 10 0])
bode(g)
grid
Figure 6.21 Asymptotic Bode magnitude plot for the system in Example 6.1
Figure 6.22 Bode magnitude plot for the system in Example 6.1
Figure 6.23 Asymptotic Bode phase plot for the system in Example 6.1
Figure 6.24 Bode phase plot for the system in Example 6.1
Example 6.2 Let us draw the asymptotic Bode plot of the following transfer function
and compare it with the Bode plot drawn by MATLAB.
G(s) = 1000(s + 1)/((s + 10)(s + 100))    (6.42)
Figure 6.25 and Figure 6.26 show the asymptotic and accurate Bode magnitude plots,
respectively. Figure 6.27 and Figure 6.28 show the asymptotic and accurate Bode phase
plots, respectively.
MATLAB We can draw the Bode plot using the following MATLAB commands. The
conv MATLAB command computes the coefficients of the product of two polynomials.
g=tf(1000*[1 1],conv([1 10],[1 100]))
bode(g)
grid
Figure 6.25 Asymptotic Bode magnitude plot for the system in Example 6.2
Figure 6.26 Bode magnitude plot for the system in Example 6.2
Figure 6.27 Asymptotic Bode phase plot for the system in Example 6.2
Figure 6.28 Bode phase plot for the system in Example 6.2
[(s/ωn)² + (2ζs/ωn) + 1], 1/[(s/ωn)² + (2ζs/ωn) + 1]
For the above terms, we only consider the case when the damping ratio is less than one (ζ < 1). When ζ ≥ 1, the term can be factored into first-order factors with real coefficients, which are included in the previous cases. Also, let us consider the term 1/[(s/ωn)² + (2ζs/ωn) + 1] only, since it is more common.
Similarly to the first-order term, the approximate functions are as follows:
|1/((jω/ωn)² + (2ζjω/ωn) + 1)|dB ≈ −20 log10 1 = 0,  ω ≪ ωn
|1/((jω/ωn)² + (2ζjω/ωn) + 1)|dB ≈ −20 log10 (ω/ωn)² = −40 log10 (ω/ωn),  ω ≫ ωn    (6.44)
When ω ≫ ωn, the first-order and constant terms can be ignored since the second-order term is dominant. In this case, the approximate function is a straight line with a slope of -40 dB/dec. Note that the graph crosses 0 dB at ω = ωn. Figure 6.29 shows the asymptotic Bode magnitude plot. We can see that it is a low pass filter with the cut-off frequency of ωn.
Figure 6.29 Asymptotic Bode magnitude plot for 1/[(jω/ωn)² + (2ζjω/ωn) + 1]
When we draw the asymptotic Bode magnitude plot of the second-order term, the cut-off
frequency is determined by ωn . The damping ratio ζ has no influence on the asymptotic
Bode plot; however, it affects the accurate Bode plot. Figure 6.30 shows the accurate Bode
magnitude plot for various values of the damping ratio ζ when ωn = 1.
Figure 6.30 Bode magnitude plot for 1/[(jω/ωn)² + (2ζjω/ωn) + 1]
As we can see in Figure 6.30, the peak of the Bode magnitude plot increases as the damping
ratio ζ is decreased. We cannot add the influence of the damping ratio to the asymptotic
Bode magnitude plot; however, knowing the value of the magnitude at ω = ωn helps us to
refine the approximate Bode plot. If we plug ω = ωn into the magnitude function, we have
the following relationship.
|1/((jω/ωn)² + (2ζjω/ωn) + 1)| at ω = ωn = 1/(2ζ)    (6.45)
In dB, the magnitude at ω = ωn is as follows:
20 log10 (1/(2ζ))    (6.46)
Figure 6.31 Asymptotic Bode phase plot for 1/[(jω/ωn)² + (2ζjω/ωn) + 1]
Similarly, as in the case of the magnitude plot, the accurate Bode phase plot is dependent
on the damping ratio ζ. Figure 6.32 shows the accurate Bode phase plot for various values
of the damping ratio ζ when ωn = 1.
Figure 6.32 Bode phase plot for 1/[(jω/ωn)² + (2ζjω/ωn) + 1]
The Bode plot for [(s/ωn)² + (2ζs/ωn) + 1] can be obtained by flipping the Bode plot curves for 1/[(s/ωn)² + (2ζs/ωn) + 1] with respect to the horizontal axis.
Example 6.3 Let us draw the asymptotic Bode plot and compare it with the Bode plot drawn by MATLAB.
G(s) = 10000/(s(s² + 2s + 100))    (6.48)
The above transfer function can be changed to the following form:
G(jω) = 100/(s(s²/100 + 2s/100 + 1)) |s=jω = 100/(jω[(jω/10)² + 0.2(jω/10) + 1])    (6.49)
The second-order term in Eq. (6.49) has the cut-off frequency of ωn = 10 and the damping ratio of ζ = 0.1. Figure 6.33 shows the asymptotic Bode magnitude plot.
Figure 6.34 shows the accurate Bode plot for the system.
If we plug ζ = 0.1 into Eq. (6.46), we can obtain the magnitude at ω = ωn = 10 as
follows:
20 log10 (1/(2ζ)) = 20 log10 5 = 14 dB    (6.50)
Since the asymptotic magnitude has the value of 20 dB at ω = 10, the actual magnitude value at ω = 10 is 34 dB, which we can confirm in Figure 6.34.
Figure 6.33 Asymptotic Bode magnitude plot for the system in Example 6.3
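MATLAB The 34 dB value can be checked directly by evaluating Eq. (6.48) at ω = 10 rad/sec.
G = tf(10000, [1 2 100 0]);               % Eq. (6.48)
20*log10(abs(evalfr(G, 1j*10)))           % about 34 dB, as read from Figure 6.34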
When a system has a zero in the right half-plane, the phase of the system is greater than the phase of the corresponding system with no right half-plane zero over the entire frequency range. Such a system is called a non-minimum phase system. For the discussion of the non-minimum phase system, consider the following two simple transfer functions.
G1(s) = (s + 1)/(0.1s + 1)    (6.51)
G2(s) = (s − 1)/(0.1s + 1)    (6.52)
Note that the function in Eq. (6.52) has a zero in the right half-plane. The following are the magnitude functions of the two transfer functions.
|G1(jω)| = √(ω² + 1)/√((0.1ω)² + 1)    (6.53)
|G2(jω)| = √(ω² + 1)/√((0.1ω)² + 1)    (6.54)
As we can see, the above transfer functions have identical magnitude functions. The following are the phase functions of the two transfer functions. Figure 6.35 shows the phase plots for the two transfer functions.
∠G1(jω) = ∠((jω + 1)/(0.1jω + 1)) = tan⁻¹ ω − tan⁻¹ 0.1ω    (6.55)
∠G2(jω) = ∠((jω − 1)/(0.1jω + 1)) = tan⁻¹(ω/(−1)) − tan⁻¹ 0.1ω    (6.56)
Figure 6.35 Bode phase plots of Eq. (6.51) and Eq. (6.52)
As we can see in Figure 6.35, the phase of the transfer function with the zero in the right
half-plane is greater than the phase of the transfer function with no right half-plane zero.
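MATLAB Drawing the two Bode plots on the same axes shows the identical magnitude curves and the different phase curves.
G1 = tf([1 1],  [0.1 1]);                 % Eq. (6.51)
G2 = tf([1 -1], [0.1 1]);                 % Eq. (6.52), right half-plane zero
bode(G1, G2), grid on
legend('G_1(s)', 'G_2(s)')                % compare the phase curves with Figure 6.35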
The root locus method allows us to determine the stability of the closed-loop system with knowledge of the open-loop transfer function, using the graphical representation of the closed-loop pole trajectories.
The Nyquist stability criterion is another graphical method to determine the stability of
the closed-loop systems. Before we discuss the Nyquist stability criterion, we need to under-
stand Cauchy’s principle of the argument in complex analysis.
The complex function G(s) is a function of the complex variable s. The function G(s) maps
a point in the complex plane to a point in another complex plane. Figure 6.37 shows the
graphical representation of the concept. In Figure 6.37, s-plane is the complex plane for the
complex variable s, and G(s)-plane is the complex plane for the complex function G(s). Poles
and zeros of the function G(s) are defined in the s-plane.
Let us define a closed contour in the s-plane, which does not pass through any poles or
zeros of G(s). Assume that the variable s moves along the contour. Then, G(s) also follows a
closed contour in the G(s)-plane. There is a relationship between the number of poles and
zeros enclosed by the contour in s-plane and the number of encirclements of the origin in
the G(s)-plane. The following is the relationship.
Z −P = N (6.57)
In the above equation, Z and P represent the number of zeros and poles enclosed by the
contour in s-plane, respectively. N is the number of encirclements of the origin in the G(s)-
plane. N has a negative value when the direction of the encirclements is the opposite. Eq.
(6.57) is called Cauchy’s Principle of the Argument. Let us consider the following transfer
function to explain the principle.
G(s) = (s − z1)(s − z2)/((s − p1)(s − p2))    (6.58)
Figure 6.38 shows the s-plane and G(s)-plane. Suppose the variable s follows a closed con-
tour in s-plane. Figure 6.38 shows the contour mapped by the function G(s).
First, let us assume that the contour on the s-plane does not encircle poles nor zeros. Let s1
be a point on the contour, then we have the following relationship, which maps a point on
the G(s)-plane.
G(s1) = (s1 − z1)(s1 − z2)/((s1 − p1)(s1 − p2))    (6.59)
The following is the phase angle of the above complex number.
∠G(s1) = ∠[(s1 − z1)(s1 − z2)/((s1 − p1)(s1 − p2))]
        = ∠(s1 − z1) + ∠(s1 − z2) − ∠(s1 − p1) − ∠(s1 − p2)    (6.60)
As we can see in Figure 6.38, the complex numbers (s1 − z1 ), (s1 − z2 ), (s1 − p1 ), and (s1 − p2 )
are represented by the vectors pointing to s1 from their respective zeros and poles. The phase
angles of the complex numbers are the angles between the vectors and the horizontal line.
Therefore, we have the following relationship for the phase angle.
∠G(s1 ) = θ1 + θ2 − θ3 − θ4 (6.61)
Let us try to find the number of encirclements of the origin by G(s1) when s1 traverses the contour in
Figure 6.38 once. In order for G(s1 ) to encircle the origin once, the net change in the angle
∠G(s1 ) should be 360 degrees. For example, if the angle starts from zero degrees and ends
at 360 degrees, the contour encircles the origin once. If we look at Figure 6.38, none of the
angles θ1, θ2, θ3, and θ4 has a net change of 360 degrees. Each angle increases by some amount and then decreases back, so its net change is zero. Therefore, the contour of G(s1) does not encircle the origin.
Next, consider the contour in Figure 6.39, where the contour in the s-plane encircles a
zero once.
In the case of Figure 6.39, the angles θ2, θ3, and θ4 do not have a net change of 360 degrees; however, the angle θ1 traverses 360 degrees. Therefore, the contour of G(s1) encircles the origin once in the same direction. If the contour in the s-plane encircles two zeros, the net change in ∠G(s1) is 720 degrees, and the contour of G(s1) encircles the origin twice. If the contour in the s-plane encircles a pole, the net change in ∠G(s1) is -360 degrees, and the contour of G(s1) encircles the origin once in the opposite direction. The above example shows that
Eq. (6.57) is valid.
If we apply Cauchy’s Principle of the Argument to the stability of such a closed-loop sys-
tem, as shown in Figure 6.36, we can obtain the Nyquist stability criterion. The transfer
function of the closed-loop system in Figure 6.36 is as follows:
Y(s)/U(s) = G(s)/(1 + G(s)H(s))    (6.62)
The stability of the closed-loop system is determined by the number of roots of the following
characteristic equation in the right half-plane.
1 + G(s)H(s) = 0 (6.63)
We may assume that G(s) in the above equation includes the transfer function of the con-
troller as well as that of the plant. The transfer function G(s)H(s) is called an open-loop
transfer function, while Eq. (6.62) is called a closed-loop transfer function. Let us suppose
that a(s) and b(s) are the denominator and the numerator of G(s)H(s), respectively. Then the
open-loop transfer function G(s)H(s) can be represented as follows:
G(s)H(s) = b(s)/a(s)    (6.64)
Then, Eq. (6.63) can be changed to the following equation:
1 + G(s)H(s) = 1 + b(s)/a(s) = (a(s) + b(s))/a(s) = 0    (6.65)
As we can see in Eq. (6.65), the number of poles of 1 + G(s)H(s) is the same as the number
of poles of G(s)H(s). Remember that the stability of the closed-loop system is determined by
the number of zeros of 1 + G(s)H(s) in the right half-plane. Under the assumption that we
know the number of poles of G(s)H(s), using Cauchy’s Principle of the Argument, we can
determine the number of zeros of 1 + G(s)H(s) in the right half-plane.
In order to apply Cauchy’s Principle of the Argument, we need to define a closed contour
in the s-plane, as shown in Figure 6.40. The contour is a half-circle with an infinite radius,
which encloses the whole right half-plane. We call this contour the Nyquist path.
When the variable s moves along the Nyquist path, we can find the closed contour mapped
by the function 1 + G(s)H(s). Then, we have the following relationship.
Z −P = N (6.66)
In the above equation, Z is the number of zeros encircled by the Nyquist path. Since the
Nyquist path encloses the whole right half-plane, Z is the number of finite zeros in the right
half-plane. Similarly, P is the number of finite poles encircled by the Nyquist path, which is
the same as the number of poles in the right half-plane. N is the number of encirclements of
the origin by the contour of 1 + G(s)H(s). If we draw the contour of 1 + G(s)H(s), we can count
the encirclements of the origin, i.e., N. Then, if we already know P, we can find Z
from Eq. (6.66). In order for the closed-loop system to be stable, Z has to be zero because Z
is the same as the number of poles of the closed-loop system in the right half-plane.
As an example, let us consider the following open-loop transfer function.
G(s)H(s) = 1/(s + 1)²   (6.67)
Figure 6.41 shows the contour of 1 + G(s)H(s), when the variable s moves along the Nyquist
path in Figure 6.40.
As we can see in Figure 6.41, the contour does not encircle the origin, and we have N = 0.
Since the open-loop transfer function Eq. (6.67) does not have any poles in the right half-
plane, we have P = 0. Therefore, since we have Z = 0 from Eq. (6.66), the closed-loop system
is stable. When we draw a contour, drawing G(s)H(s) is simpler than drawing 1 + G(s)H(s).
Note that the number of encirclements of the origin by 1 + G(s)H(s) is the same as the number
of encirclements of −1 by G(s)H(s). This is because the contour of 1 + G(s)H(s) is identical to
the contour of G(s)H(s) shifted to the right by 1. Figure 6.42 shows the contour of G(s)H(s)
for Eq. (6.67).
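MATLAB As a quick sketch, the following commands draw the Nyquist plot of Eq. (6.67) and, as a cross-check, the step response of the resulting closed-loop system; the plotting choices here are illustrative.
g = tf(1,[1 2 1])          % open-loop transfer function 1/(s+1)^2
nyquist(g)                 % the contour stays to the right of -1, so N = 0
step(feedback(g,1))        % the closed-loop step response confirms stability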
In the Nyquist stability criterion, the closed-loop stability is determined by the number of
encirclements of −1 by G(s)H(s). The contour of G(s)H(s) is called a Nyquist plot when the
variable s moves along the Nyquist path in Figure 6.40.
All physical control systems have a finite bandwidth. In the transfer function of a finite
bandwidth system, the degree of the denominator is greater than the degree of the numerator.
If we assume that the bandwidth of the system is finite, the Nyquist plot becomes simpler.
Let us consider the Nyquist plot for a finite bandwidth system. The Nyquist path consists
of two parts: a half-circle with an infinite radius and a line along the imaginary axis. When
the variable s moves along the half-circle at infinity, the contour of G(s)H(s) collapses to
the origin. The reason is that the degree of the denominator is greater than the degree of
the numerator in the open-loop transfer function G(s)H(s); in turn, G(s)H(s) goes to zero as s
goes to infinity. Most of the Nyquist plot therefore comes from the section where the variable s
moves along the imaginary axis. When the variable s moves along the imaginary axis, we have
the following relationship.

s = jω   (6.68)

Therefore, when we draw the Nyquist plot for a finite bandwidth system, it is enough to
draw the contour of G(jω)H(jω). Also, note that the contour of G(jω)H(jω) for ω < 0 can be
obtained by reflecting the contour for ω > 0 about the real axis.
The Bode plot can be helpful when we draw a Nyquist plot since the Bode plot is the
magnitude and phase graph of G(jω)H(jω). For example, Figure 6.43 shows the Bode plot
of Eq. (6.67).
Let us try to relate the Bode plot in Figure 6.43 to the Nyquist plot from ω = 0 to ω = ∞.
At ω = 0, the magnitude of G(jω)H(jω) is 1, and the phase is 0. Note that the frequency ω = 0
does not appear on the Bode plot’s logarithmic frequency scale. From the Bode plot, we can see
that, as the frequency approaches 0, the magnitude converges to 0 dB, and the phase angle
converges to zero. Therefore, the contour of G(jω)H(jω) starts from 1 on the real axis at
ω = 0. In the Bode plot, as the frequency ω increases, the magnitude decreases, and the
phase goes down below zero degrees. Therefore, in the Nyquist plot, the distance from the
origin decreases, and the phase becomes increasingly negative. At ω = 1, the Bode plot
shows that the magnitude is -6 dB, and the phase is -90 degrees. We can see that the Nyquist
plot crosses the point −0.5j on the imaginary axis, which coincides with the magnitude and
the phase in the Bode plot. As the frequency ω goes to ∞, the magnitude approaches zero,
and the phase converges to -180 degrees. We can see that, in the Nyquist plot, the contour
approaches the origin, and the phase angle becomes -180 degrees.
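MATLAB The point of the Nyquist plot at ω = 1 can be checked numerically; this small sketch simply evaluates G(jω)H(jω) of Eq. (6.67) at that frequency.
w = 1;
G = 1/(1j*w + 1)^2;          % G(j1)H(j1) for 1/(s+1)^2
20*log10(abs(G))             % about -6 dB
angle(G)*180/pi              % -90 degrees, i.e., the point -0.5j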
Even though helpful, the Bode plot is not always necessary for drawing the Nyquist plot.
Since the purpose of drawing the Nyquist plot is to determine the number of encirclements
of −1, we may not need an accurate trajectory. Knowing the point where the Nyquist plot
crosses the real axis may be enough to determine the number of encirclements. The
following examples show cases in which the crossing point on the real axis is determined.
Also, in some cases we need Nyquist paths that are different from the one in Figure 6.40.
Some of the following examples show such alternative Nyquist paths.
Example 6.4 Consider the following open-loop transfer function and draw the
Nyquist plot to determine the closed-loop stability.
G(s)H(s) = 4/(s(s + 1)²)   (6.69)
The above transfer function has a pole at the origin. Since the origin is on the imag-
inary axis, we cannot use the Nyquist path in Figure 6.40. The closed contour in the
s-plane should not pass through any poles nor zeros. In this case, we can use the
Nyquist path shown in Figure 6.44, which does not pass through the pole at the origin.
The path in Figure 6.44 goes around the origin, following a circle with an infinitesi-
mally small radius ε. A point on the small circle can be represented by εejθ . If we plug
the following relationship into Eq. (6.69), we can find the Nyquist plot when s moves
along the small circle.
s = εejθ (6.70)
Since s on the small circle moves in counterclockwise direction, θ is increased from
−90◦ to 90◦ . By plugging Eq. (6.70) into Eq. (6.69), we can obtain the following
relationship.
G(s)H(s)|s=εe^{jθ} = 4/(s(s + 1)²)|s=εe^{jθ} = 4/(εe^{jθ}(εe^{jθ} + 1)²) ≈ (4/ε)e^{−jθ}   (6.71)
Since εejθ is infinitesimally small, the contour of Eq. (6.71) is a half-circle with the
radius of infinity. Note that the contour of Eq. (6.71) moves from 90◦ to −90◦ in
the clockwise direction. To draw the plot from s = j0+ to s = j∞, we need to find
G(jω)H(jω) as follows:
G(jω)H(jω) = 4/(s(s + 1)²)|s=jω = 4/(jω(jω + 1)²)
           = 4/(−2ω² + j(ω − ω³)) = 4[−2ω² − j(ω − ω³)]/[(−2ω²)² + (ω − ω³)²]   (6.72)
In order to find where the contour of G(jω)H(jω) crosses the real axis, we need to
find the frequency of the crossing point. When the contour of G(jω)H(jω) crosses the
real axis, the imaginary part of Eq. (6.72) should be zero. Therefore, we can find the
frequency of the crossing point by solving the following equation.
ω − ω³ = ω(1 − ω)(1 + ω) = 0   (6.73)
Among the solutions of the above equation, the frequencies ω = ±1 give us the finite
crossing point on the real axis. Note that we always have a pair of frequency solutions
since the Nyquist plot is symmetric with respect to the real axis. Plugging ω = ±1
into Eq. (6.72) gives us the crossing point at −2. From the above results, we can draw
the overall approximate Nyquist plot, as shown in Figure 6.45. Figure 6.46 shows the
Nyquist plot drawn by MATLAB. Note that a Nyquist plot drawn by a computer
does not show the contour at infinity. When you draw the Nyquist plot using a
computer, you always have to remember that there is a part of the contour at infinity.
Since the Nyquist path in Figure 6.44 does not encircle any poles, we have P = 0. Also,
since the Nyquist plot in Figure 6.45 encircles −1 twice in the clockwise direction, we
have N = 2. Therefore, we can obtain Z = 2 from Eq. (6.66); in turn, we can conclude
that the closed-loop system has two poles in the right half-plane, and the closed-loop
system is unstable.
MATLAB We can draw the Nyquist plot in this example using the following MATLAB
commands.
g=tf([4],[1 2 1 0])
nyquist(g)
axis([-4 0 -2 2])
We can find the crossing point of the real axis using the MATLAB commands. If we run
the following script, we are able to see the frequencies and the points on the Nyquist
plot.
w=logspace(-2,2,200);
g=tf([4],[1 2 1 0])
[re,im]=nyquist(g,w);
for i=1:200
re1(i)=re(1,1,i);
im1(i)=im(1,1,i);
end
[w’ re1’ im1’]
Figure 6.45 Approximate Nyquist plot for the system in Example 6.4
Example 6.5 In this example, we consider the same system as in Example 6.4. But,
we use an alternative Nyquist path shown in Figure 6.47. The Nyquist path in Figure
6.47 moves around the origin in the clockwise direction. Using a similar method as
in the previous example, we can draw the overall approximate Nyquist plot, as shown
in Figure 6.48. Note that the contour at infinity moves in the direction opposite to
that in the previous example. Since the Nyquist path in Figure 6.47 encircles a pole
at the origin, we have P = 1. Since the Nyquist plot in Figure 6.48 encircles −1 once
in the clockwise direction, we have N = 1. Therefore, we can obtain the following
relationship.
Z −P = Z −1 = N = 1 (6.74)
From the above relationship, we have Z = 2, which is the same result as in Example
6.4.
Example 6.6 Consider the system in Figure 6.49. Let us find the range of K that makes
the closed-loop system stable using the Nyquist plot. Note that the open-loop transfer
function has a pole in the right half-plane.
The characteristic equation of the closed-loop system is as follows:
1 + K(s + 1)/(s(0.1s − 1)) = 0   (6.75)
Since the controller gain K is not fixed, we need to draw the Nyquist plot as many
times as necessary for varying K. However, if we change the above equation to the
following form, we need to draw only one Nyquist plot.
1/K + (s + 1)/(s(0.1s − 1)) = 0   (6.76)
If we use the above characteristic equation, we need to determine the number of encirclements
of −1/K instead of −1. We can find the range of K that makes the closed-loop
system stable by moving the position −1/K for a fixed Nyquist plot. First, let us draw
the Nyquist plot of the following open-loop transfer function.
G(s)H(s) = (s + 1)/(s(0.1s − 1))   (6.77)
Since the open-loop transfer function has a pole at the origin, we choose to use the
Nyquist path in Figure 6.50. Since the path encircles a pole in the right half-plane, we
have P = 1.
When the variable s moves along the small circle around the origin, the open-loop
transfer function is as follows:
(s + 1)/(s(0.1s − 1))|s=εe^{jθ} = (εe^{jθ} + 1)/(εe^{jθ}(0.1εe^{jθ} − 1)) ≈ −(1/ε)e^{−jθ} = (1/ε)e^{−j(θ+180°)}   (6.78)

Note that we use e^{j180°} = e^{−j180°} = −1 in the above relationship. While θ is increased
from −90◦ to 90◦ , the contour of G(s)H(s) moves from −90◦ to −270◦ in clockwise di-
rection at infinity. To draw the plot from s = j0+ to s = j∞, we need to find G(jω)H(jω)
as follows:
G(jω)H(jω) = (jω + 1)/(jω(0.1jω − 1)) = [−1.1ω² + j(−0.1ω³ + ω)]/(0.01ω⁴ + ω²)   (6.79)
If we find the frequency at which the imaginary part of the above equation equals
zero, we have ω = ±√10. By plugging ω = ±√10 into Eq. (6.79), we can obtain −1,
which is the crossing point on the real axis. From the above results, we can draw the
approximate Nyquist plot, as shown in Figure 6.51. Let us consider two cases in the
range of K > 0. First, consider the case when −1/K < −1, i.e., 0 < K < 1. In this case,
since the Nyquist plot encircles −1/K once in the clockwise direction, we have N = 1.
Therefore, we have the following relationship.
Z −P = Z −1 = N = 1 (6.80)
Since we have Z = 2 from the above relationship, the closed-loop system has two poles
in the right half-plane, and the closed-loop system is unstable. Next, consider the case
when −1 < −1/K < 0, i.e., K > 1. In this case, since the Nyquist plot encircles −1/K once
in the counterclockwise direction, we have N = −1. Therefore, we have the following
relationship.
Z − P = Z − 1 = N = −1 (6.81)
Since we have Z = 0 from the above relationship, the closed-loop system has no poles
in the right half-plane, and the closed-loop system is stable. When K < 0, we have
N = 0 and Z = 1, and the closed-loop system is unstable. Therefore, we can conclude
that the controller gain K should be greater than 1 to stabilize the system. Let us draw
the root locus to compare the result from the Nyquist plot. Figure 6.52 shows the root
locus of this system. The root loci start from the origin and the pole in the right half-
plane and cross the imaginary axis at K = 1. For K > 1, the root loci remain in the left
half-plane, and the closed-loop system is stable. This result is the same as we obtained
from the Nyquist plot.
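MATLAB The crossing point and the root locus in this example can be checked with a few commands; this is only a sketch, and the variable names are arbitrary.
s = tf('s');
gh = (s + 1)/(s*(0.1*s - 1));     % open-loop transfer function Eq. (6.77)
evalfr(gh, 1j*sqrt(10))           % approximately -1, the real-axis crossing point
rlocus(gh)                        % root locus; the loci cross the imaginary axis near K = 1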
Example 6.7 Consider the following open-loop transfer function and determine the
stability of the closed-loop system using the Nyquist plot.
G(s)H(s) = K(s + 1)/s²   (6.82)
The following is the characteristic equation in a similar form as in the previous example.
We can determine the stability of the closed-loop system by finding the number of
encirclements of −1/K.
1/K + (s + 1)/s² = 0   (6.83)
Figure 6.53 is the Nyquist path moving around the pole at the origin. When the vari-
able s moves along the small circle around the origin, G(s)H(s) is as follows:
G(s)H(s)|s=εe^{jθ} = (s + 1)/s²|s=εe^{jθ} = (εe^{jθ} + 1)/(ε²e^{j2θ}) ≈ (1/ε²)e^{−j2θ}   (6.84)
When the angle of s is increased from −90° to 90°, the contour of G(s)H(s) moves
along a circle at infinity from 180° to −180° in the clockwise direction. Note that
the angle of G(s)H(s) changes twice as fast as the angle of s since the open-loop transfer function
has a double pole at the origin. Next, to draw the plot from s = j0+ to s = j∞, we need
to find G(jω)H(jω) as follows:
to find G(jω)H(jω) as follows:
s + 1 jω + 1 jω + 1
G(jω)H(jω) = 2 = 2
=− (6.85)
s s=jω (jω) ω2
We can see that the imaginary part of the above equation is not zero, and the Nyquist
plot does not cross the real axis in the range of finite frequency. From the above results,
we can draw the approximate Nyquist plot, as shown in Figure 6.54. Figure 6.55 is the
accurate Nyquist plot drawn by MATLAB. From the Nyquist plot, we can conclude
that the Nyquist plot does not encircle −1/K, and the closed-loop system is stable in
the range of K > 0. Figure 6.56 shows the root locus. Since the root locus remains in
the left half-plane for K > 0, the closed-loop system is stable. This result is the same as
we obtained from the Nyquist plot.
Figure 6.54 Approximate Nyquist plot for the system in Example 6.7
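MATLAB A minimal sketch for this example draws the Nyquist plot and the root locus of Eq. (6.82) with K = 1; the commands below are illustrative.
s = tf('s');
gh = (s + 1)/s^2;      % open-loop transfer function Eq. (6.82) with K = 1
nyquist(gh)            % the plot never encircles -1/K for K > 0
rlocus(gh)             % the root locus stays in the left half-plane for K > 0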
6.4 Stability Margin

There are many performance criteria for judging how good a control system is. Stability
is the most basic requirement; when we determine the stability of a control system, the answer
is simply stable or unstable. However, merely being stable may not be enough to make a good
control system, and among stable control systems some may still perform poorly. We
should be able to answer the question of how stable the system is. In other words, we need
a quantitative way of judging the stability of control systems, which we call relative
stability.
The most frequently used measure of relative stability is the stability margin,
a quantitative measure of how much margin the control system has before it
becomes unstable. There are two kinds of stability margins: the gain margin and the phase
margin. To explain them, let us consider the system in Figure 6.57.
Figure 6.57 Control system considered for the gain and phase margin
Note that the system in Figure 6.57 is the same as the system in Example 6.4, except for the
controller gain. The approximate Nyquist plot is similar to Figure 6.45. Figure 6.58 shows
the accurate Nyquist plot for K = 0.5 and K = 2.
When K = 0.5, the Nyquist plot in Figure 6.58 crosses the real axis at −0.25 and does not
encircle −1. Therefore, we have N = 0, and the closed-loop system is stable (P = 0, Z = 0).
However, when K = 2, the Nyquist plot crosses the real axis at −1. In this case, we can find
the frequency ω, which satisfies the following equation.
1 + G(jω)H(jω) = 0 (6.86)
Also, in this case, the following characteristic equation has the roots on the imaginary axis.
1 + G(s)H(s) = 0 (6.87)
Figure 6.59 shows the root locus for the system. We can see that the roots are on the imagi-
nary axis when K = 2. Therefore, we can confirm that K = 2 is the boundary value between
the stable and unstable regions. Figure 6.60 shows the Bode plot for K = 2 and K = 0.5. Note
that the phase plot is the same for the two gains. Note also that, when K = 2, the magnitude Bode
plot crosses 0 dB, and the phase Bode plot crosses -180 degrees at ω = 1. This coincides with
the Nyquist plot crossing −1 on the real axis. Recall that the angle of −1 is -180 degrees.
The Nyquist plot does not show the frequencies; however, the Bode plot shows us that the
Nyquist plot crosses −1 at ω = 1.
In this example, when K = 0.5, the system is stable and remains stable until we increase the
gain up to K = 2. We can say that the system with K = 0.5 has a stability margin, while the
system with K = 2 has no stability margin at all. The larger the stability margin is, the more
stable the system is. Figure 6.61 shows how we define the gain and phase margin. The gain
margin is defined by the difference between 0 dB and the magnitude at the frequency when
the phase plot crosses -180 degrees. The phase margin is defined by the difference between
-180 degrees and the phase at the frequency when the magnitude crosses 0 dB.
Figure 6.61 Definitions of the gain margin and the phase margin
Let us read the gain and phase margin for K = 0.5 in Figure 6.60. The phase plot crosses -180
degrees at ω = 1 (rad/sec), and the magnitude at ω = 1 is -12 dB. Therefore, the gain margin
is 12 dB. Also, the magnitude plot crosses 0 dB at ω = 0.42 (rad/sec), at which frequency the
phase is -136 degrees. Therefore, the phase margin is 44 degrees.
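MATLAB Assuming the open-loop transfer function of Figure 6.57 is K/(s(s + 1)²), which is consistent with the crossing points quoted above, the margins for K = 0.5 can be checked with the following sketch.
g = tf(0.5,[1 2 1 0]);            % 0.5/(s(s+1)^2)
[gm,pm,wgm,wpm] = margin(g);
20*log10(gm)                      % gain margin, about 12 dB (at w = 1 rad/sec)
pm                                % phase margin, about 44 degrees (at w = 0.42 rad/sec)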
We can also read the gain and phase margins from the Nyquist plot. Let ω180 be the frequency
where the phase crosses -180 degrees. Then, the gain margin can be calculated as follows:

GM = 0 dB − 20log10|G(jω180)H(jω180)|   (6.88)

Since the Nyquist plot crosses the real axis at ω = ω180, |G(jω180)H(jω180)| is the distance
of the crossing point from the origin. Therefore, to determine the gain margin, find the
reciprocal of the distance between the crossing point and the origin, and take the dB value.
Next, let ωc be the frequency where the magnitude plot crosses 0 dB. Then, the phase margin
is as follows:

PM = ∠G(jωc)H(jωc) − (−180°) = 180° + ∠G(jωc)H(jωc)   (6.89)
The frequency where the magnitude plot crosses 0 dB is called the cross-over frequency. If
we draw a unit circle centered at the origin, the circle intersects the Nyquist plot at the cross-
over frequency. The phase margin is the angle at the cross-over frequency from 180 degrees.
Figure 6.62 shows the gain and phase margin in the Nyquist plot.
In Figure 6.58, the Nyquist plot for K = 0.5 crosses the real axis at −0.25. Therefore, the gain
margin can be calculated as follows:
20log10(1/0.25) = 20log10 4 = 12 dB   (6.90)
When K = 2, both the gain and phase margins are zero. The gain and phase margins for
unstable systems have negative values. For example, the Nyquist plot in Example 6.4 crosses
the real axis at −2, and the gain margin is as follows:
20log10(1/2) = 20log10 0.5 = −6 dB   (6.91)
Also, we can find that the phase at the cross-over frequency is -198 degrees, and therefore
the phase margin is -18 degrees.
Generally, the larger the gain and phase margins are, the better the transient response is.
In the second-order systems, the damping ratio is roughly proportional to the phase margin.
Frequently, performance specifications are given in gain and phase margins. In the next
chapter, how to design control systems for given stability margins is discussed.
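MATLAB A common rule of thumb, assumed here rather than taken from the text, is that the phase margin in degrees is roughly 100 times the damping ratio for lightly damped second-order systems. The following sketch checks it for the open-loop form ωn²/(s(s + 2ζωn)).
for zeta = [0.2 0.4 0.6]
    G = tf(1,[1 2*zeta 0]);              % wn = 1, open loop 1/(s(s + 2*zeta))
    [~,pm] = margin(G);
    fprintf('zeta = %.1f  PM = %.1f degrees\n', zeta, pm)
end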
The discussion of the gain and phase margins above applies to stable minimum
phase systems. If we consider systems with poles or zeros in the right half-plane, we
may have to modify the definitions of the gain and phase margins. For example, consider
the system in Example 6.6 when K = 2. From the result in the example, we know that the
closed-loop system is stable for K = 2. Figure 6.63 shows the Bode plot for the following
open-loop transfer function.

G(s)H(s) = 2(s + 1)/(s(0.1s − 1))   (6.92)
Since the magnitude in Figure 6.63 is 6 dB when the phase is 180 degrees, the gain margin
is negative, according to Eq. (6.88). If we want to have a positive gain margin for the stable
system, the gain margin has to be modified as follows:
GM = |G(jω180 )H(jω180 )|dB − 0dB
= 20log10 |G(jω180 )H(jω180 )| − 20log10 1 (6.93)
= 20log10 |G(jω180 )H(jω180 )|
As we can see in the above example, the definitions of the gain and phase margins for non-
minimum phase systems and unstable systems have to be modified such that the stable sys-
tem has positive margins. Since finding the gain and phase margins from the Bode plot may
be misleading, the root locus or the Nyquist plot can be used to confirm the result.
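MATLAB As a numerical cross-check of Eq. (6.93), the following sketch evaluates the open-loop magnitude of Eq. (6.92) at the -180 degree crossing frequency ω180 = √10 found in Example 6.6.
w180 = sqrt(10);
G = 2*(1j*w180 + 1)/(1j*w180*(0.1*1j*w180 - 1));
20*log10(abs(G))      % about 6 dB, so Eq. (6.93) gives a gain margin of +6 dB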
Example 6.8 Let us draw the asymptotic phase plot for the following open-loop transfer
function using Eq. (6.94).

G(s)H(s) = 10⁵(s + 0.1)/((s + 10)(s + 100)²)   (6.95)

In the time-constant (Bode) form, the same transfer function is

G(s)H(s) = 0.1(10s + 1)/((0.1s + 1)(0.01s + 1)²)   (6.96)
Figure 6.64 shows the asymptotic Bode plot. Note that the asymptotic phase plot in
Figure 6.64 changes by 90 degrees. Figure 6.65 shows an accurate Bode plot. We can
see that the asymptotic phase approaches the accurate value as the distance between
the break frequencies is increased. Even though the approximate phase is far from
accurate, we can draw it very quickly to find out how the phase curve changes.
Figure 6.64 Asymptotic Bode plot for the system in Example 6.8
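MATLAB The accurate Bode plot in Figure 6.65 can be reproduced for comparison with the asymptotic sketch; the commands below are a minimal illustration.
num = 1e5*[1 0.1];                              % 10^5 (s + 0.1)
den = conv([1 10],conv([1 100],[1 100]));       % (s + 10)(s + 100)^2
bode(tf(num,den))
grid on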
Bode’s gain-phase relationship can be used for the controller design of the minimum
phase systems. In order for the control system to be stable, the phase should be above -180
degrees at the cross-over frequency. From Bode’s gain-phase relationship, if the slope of the
magnitude plot is -40 dB/dec at the cross-over frequency, the phase would be around -180
degrees. Therefore, if we want to increase the phase margin, we have to make the slope
of the magnitude plot at the cross-over frequency shallower than -40 dB/dec. In other
words, we can judge the relative stability solely from the slope of the magnitude plot at
the cross-over frequency, which we can adjust by the proper design of the controller.
6.6 Frequency Response of the Closed-loop Transfer Function

In the previous sections, we used the Bode plot of the open-loop transfer function to determine
the relative stability of the closed-loop system: we first draw the Bode plot of the
open-loop transfer function, and then we determine the relative stability of the closed-loop
system. In this section, we draw the Bode plot of the closed-loop system and compare it with
the Bode plot of the open-loop transfer function. Let us consider the unity feedback system
shown in Figure 6.66.
Y(s)/R(s) = G(s)/(1 + G(s)) = GT(s)   (6.97)
When we describe the closed-loop frequency response, two parameters are critical: the bandwidth
and the resonant peak. Figure 6.67 shows a typical closed-loop Bode magnitude plot and
the definitions of these parameters.
The bandwidth is the most important parameter in the closed-loop frequency response. The
bandwidth is defined as the frequency at which the magnitude is decreased by 3dB from the
DC gain. The bandwidth is directly related to the response speed; generally, systems with a
wide bandwidth have fast responses. The resonant peak is the highest peak of the magnitude
plot. The resonant frequency is the frequency at which the magnitude plot has the resonant
peak.
In Eq. (6.97), G(s) is the open-loop transfer function. In many control systems, the open-
loop transfer functions have a high gain in the low frequency range and low gain in the
high frequency range. Therefore, we have the following approximate relationship for the
closed-loop transfer function.
|GT(jω)| = |G(jω)/(1 + G(jω))| ≈ 1 (ω ≪ ωc),  |G(jω)| (ω ≫ ωc)   (6.98)
In the above equation, ωc is the cross-over frequency, where the magnitude of the open-loop
transfer function crosses 0dB. In the low-frequency range, the open-loop transfer function
is dominant, and 1 in the denominator can be ignored. In the high-frequency range, the
open-loop transfer function in the denominator can be ignored.
Let us consider the following open-loop transfer function as an example.
ωn2
G(s) = (6.99)
s(s + 2ζωn )
Chapter 6. Frequency Response Analysis 303
G(s) ωn2
GT (s) = = (6.100)
1 + G(s) s2 + 2ζωn s + ωn2
Let us compare the open-loop and closed-loop Bode plots for the following open-loop trans-
fer functions.
G1(s) = 1/(s(s + 0.4))   (6.101)

G2(s) = 10/(s(s + 0.4√10))   (6.102)

G3(s) = 100/(s(s + 4))   (6.103)
The above three systems have the same damping ratio ζ = 0.2, while they have three different
natural frequencies ωn = 1, √10, 10. Figure 6.68 shows the open-loop and closed-loop Bode
magnitude plots of each system. As we can see, the cross-over frequency of the open-loop
transfer function is closely related to the bandwidth of the closed-loop system. The closed-
loop system with the higher cross-over frequency has a wider bandwidth. Figure 6.69 shows
the unit step responses of the closed-loop systems. As we can see in the step responses, the
system with a wider bandwidth shows a faster step response.
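MATLAB The open-loop and closed-loop comparison of Eqs. (6.101) to (6.103) can be reproduced with the following sketch; the final step-response time of 30 seconds is an arbitrary choice.
s  = tf('s');
G1 = 1/(s*(s + 0.4));              % wn = 1
G2 = 10/(s*(s + 0.4*sqrt(10)));    % wn = sqrt(10)
G3 = 100/(s*(s + 4));              % wn = 10
bodemag(G1,feedback(G1,1),G2,feedback(G2,1),G3,feedback(G3,1)), grid on
figure
step(feedback(G1,1),feedback(G2,1),feedback(G3,1),30)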
Next, let us vary the damping ratio as ζ = 0.4, 0.2, 0.1 for the fixed natural frequency
ωn = √10. The three different open-loop transfer functions are as follows:
Figure 6.68 Open-loop and closed-loop Bode plots for Eq. (6.101),(6.102),(6.103)
G4(s) = 10/(s(s + 0.8√10))   (6.104)

G5(s) = 10/(s(s + 0.4√10))   (6.105)

G6(s) = 10/(s(s + 0.2√10))   (6.106)
Figure 6.70 shows the open-loop and closed-loop Bode magnitude plots for the above trans-
fer functions. The three systems have similar cross-over frequencies and bandwidths. Figure
6.71 shows the unit step responses of the closed-loop systems. The three systems have simi-
lar response speeds since they have similar bandwidths. We can see that the overshoots of the
step responses are closely related to the peak of the closed-loop Bode plots. The closed-loop
system with a higher peak has a larger overshoot in the step response.
Figure 6.70 Open-loop and closed-loop Bode magnitude plots for Eq. (6.104),(6.105),(6.106)
As we can see from the above examples, we can predict the time responses from the closed-loop
frequency responses.
Let x(ω) and y(ω) be the real and imaginary parts of the complex function G(jω), respectively;
then G(jω) can be represented as follows:

G(jω) = x(ω) + jy(ω)   (6.108)

If we plug the above equation into Eq. (6.107), we have the following equation.
GT(jω) = (x(ω) + jy(ω))/(1 + x(ω) + jy(ω))   (6.109)
The magnitude of the closed-loop frequency response function can be obtained as follows:
M(ω) = |GT(jω)| = |x(ω) + jy(ω)|/|1 + x(ω) + jy(ω)| = √(x²(ω) + y²(ω))/√([1 + x(ω)]² + y²(ω))   (6.110)
The above equation can be changed to the following form. For simplicity, the variable ω is
omitted.
(x − M²/(1 − M²))² + y² = (M/(1 − M²))²,   M ≠ 1   (6.111)
If we assume that M in Eq. (6.111) is a constant, the above equation represents a circle with
the following center coordinates and radius.

x = M²/(1 − M²),   y = 0   (6.112)

r = |M/(1 − M²)|   (6.113)
Figure 6.72 shows the trajectories of the open-loop transfer function for various values of M.
When M = 1, we have to use Eq. (6.110), since Eq. (6.111) is not valid. Then, we have the
following equation of a straight line.

x = −0.5   (6.114)

As M grows toward infinity, the constant-M circles shrink toward the following point.

x = −1,   y = 0   (6.115)
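MATLAB The constant-M circles of Eq. (6.111) can be drawn directly from the center and radius above; this sketch uses a few illustrative values of M.
theta = linspace(0,2*pi,361);
hold on
for M = [0.7 0.9 1.2 1.5 2]
    c = M^2/(1 - M^2);  r = abs(M/(1 - M^2));   % center and radius of the constant-M circle
    plot(c + r*cos(theta), r*sin(theta))
end
plot([-0.5 -0.5],[-3 3],'--')                   % the M = 1 straight line x = -0.5
axis equal, grid on, hold off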
Let us try to relate the Nyquist plot to Figure 6.72. As an example, consider the following
open-loop transfer function in Figure 6.66.
G(s) = K/(s(s + 1)²)   (6.116)
Let us consider the following values of K, and draw the magnitudes of the closed-loop fre-
quency response, as shown in Figure 6.73.
K = 0.5 (6.117)
K = 0.8 (6.118)
K = 1.3 (6.119)
The peak values for the above values of K can be read from the closed-loop magnitude plots in Figure 6.73.
Figure 6.73 Closed-loop magnitude frequency responses for the open-loop transfer function
Eq. (6.116)
Figure 6.74 Nyquist plots of the open-loop transfer function Eq. (6.116)
The graphical approach explained above enables us to find the closed-loop properties
from the Nyquist plots. However, when we change the gain of the open-loop transfer func-
tion, we need to redraw the Nyquist plot for the new gains. In the case of the Bode plot, it
is very easy to apply new gains. We only need to move the Bode magnitude plot upward
or downward for the new gains. Also, we do not have to redraw the Bode phase plot for
the new gains. Therefore, we can easily predict the effect of the gain changes from the Bode
plot. If we combine the Bode magnitude and phase plots into a single plot, we can obtain
the Nichols chart. The Nichols chart has a similar advantage to the Bode plot. Figure 6.75
shows how we convert the Nyquist plot domain to the Nichols chart domain. In the Nichols
chart domain, the vertical axis is the magnitude in dB, and the horizontal axis is the phase angle.
If we change Figure 6.72 to the Nichols chart domain, we have Figure 6.76.
We can convert the Nyquist plot in Figure 6.74 to the Nichols chart, as shown in Figure
6.77. As we can see in Figure 6.77, if we want to apply new gains to the Nichols chart, we
simply move the curve upward or downward vertically. Therefore, we can easily predict the
effect of changing the gains, just as in the Bode plot. Also, as in Figure 6.74, we can
estimate the closed-loop magnitude response by reading the value of M as the open-loop
transfer function trajectory crosses the constant-M curve (Note that the constant-M curves
are not circles anymore.).
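MATLAB The Nichols chart for one of the gains used above can be drawn with the built-in commands; the gain K = 0.8 below is just one of the values from Eq. (6.118).
g = tf(0.8,[1 2 1 0]);     % 0.8/(s(s+1)^2)
nichols(g)
ngrid                      % overlays the constant closed-loop magnitude contours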
We can repeat a similar approach to the phase plot. However, since we do not use the
phase plot as frequently as the magnitude plot, we omit the Nichols chart for the phase plot.
6.7 Lab 6
In this lab, we measure the approximate magnitude frequency response of the second-order
digital filter. We use the same project setup as in section 4.6.1 except for the increased
sampling frequency. Figure 6.78 shows the timer setup for the sampling frequency of 50
kHz.
We use the same codes as in section 4.6.1 except for the sampling period, the natural
frequency, and the damping ratio. Replace Code 4.2 with the following code.
Code 6.1
Set the function generator to generate a sine wave with a 2V peak-to-peak amplitude. Chan-
nel one of the oscilloscope is connected to the function generator, and channel two is con-
nected to the D/A converter output. When you run the program, you should be able to see
the sinusoidal wave output with the same frequency as the input frequency. Figure 6.79
shows the input and output sinusoidal waves for ζ = 0.4 at 100 Hz. From the oscilloscope
screen, we can measure the input-output amplitude ratio.
Fill in the table 6.1 with the input-output amplitude ratios for different damping ratios. The
first column in the table is the frequency. The rest of the columns are the input-output
amplitude ratios for ζ = 0.2, 0.4, 0.6, 0.8.
After filling the table, save the data in a text-format file and run the following MATLAB
script with the data file in the same folder. Type the frequencies in the first column of the
data file and the amplitude ratios in the rest of the columns. The data file name is data.
Then, you should be able to see the approximate magnitude frequency responses, as shown
in Figure 6.80.
clear
load data          % text file "data": column 1 = frequency (Hz), columns 2-5 = amplitude ratios
semilogx(data(:,1),20*log10(data(:,2)),data(:,1),20*log10(data(:,3)),...
data(:,1),20*log10(data(:,4)),data(:,1),20*log10(data(:,5)))   % each column plotted in dB
grid
axis([10 1000 -40 10])
xlabel('Frequency(Hz)')
ylabel('Magnitude(dB)')
Code 6.2
Exercise 6.1 Using a similar method as above, find the approximate magnitude fre-
quency response of the first-order system in section 2.4.2. Find the cut-off frequency
fc (3dB frequency) from the measured frequency response, and check the following
relationship.
τ = 1/ωc = 1/(2πfc)   (6.125)
Note that the above digital filter is an approximation of the analog filter. We use a
sampling frequency that is significantly higher than the bandwidth of the filter. Since
discrete-time system theory is outside the scope of this book, it is not discussed here.
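MATLAB As a small illustration of Eq. (6.125), and not a value taken from the lab, the cut-off frequency for an assumed time constant can be computed as follows.
tau = 1.6e-3;              % an illustrative time constant of 1.6 ms
fc  = 1/(2*pi*tau)         % cut-off frequency, about 99.5 Hz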
Problem
Problem 6.1 Suppose the input signal to the following systems is a sinusoidal
wave sin(2π × 100t). Find the output signal in the steady-state. If you cannot find the
steady-state signal, state the reason.
(1) 1/(s + 5)   (2) 1/(s − 1)   (3) 1/(s² + 2s + 1)   (4) 1/((s + 1)(s − 2))
Problem 6.2 Consider the following RL circuit. The function generator is set to gen-
erate a sinusoidal wave with 1 Volt amplitude (sin ωt). The output signal of the circuit
is the voltage vR across the resistor. The frequencies of the sinusoidal wave are shown
in the table below. Fill in the table with the steady-state output values. The values of
the components are R = 100 Ω, L = 100 mH. Note: vL = L diL/dt.
Problem 6.3 In the following circuit, e(t) is the input, and the capacitor voltage v(t)
is the output. Find the transfer function of the circuit. Also, find the magnitude fre-
quency response function and the phase frequency response function. Note: i(t) = C dv/dt.
Problem 6.4 The input force f = A cos ω0 t is applied to the following system. Without
solving a differential equation, find the output y(t) in steady-state.
Problem 6.5 The input voltage ea = A cos ω0 t is applied to the following DC servo
motor system. Ignore nonlinear friction and assume B is linear friction.
(1) Without solving a differential equation, find the angle θ in steady-state using the
frequency response concept. Ignore the inductance of the motor, i.e., La = 0.
(2) Without solving a differential equation, find the angular velocity ω = dθ/dt in
steady-state using the frequency response concept. Do not ignore the inductance of
the motor, i.e., La ≠ 0.
Problem 6.6 Consider the following transfer functions. Draw the asymptotic Bode
plots and compare them with the Bode plots drawn by MATLAB.
(1) G(s) = 10/((s + 5)(s + 20))
(2) G(s) = 10/(s(s + 20))
(3) G(s) = 10(s + 10)/((s + 1)(s + 100))
(4) G(s) = 1000/((s + 5)(s + 20)(s + 200))
(5) G(s) = 100(s + 50)/(s(s + 20)(s + 200))
(6) G(s) = 100(s + 50)/(s²(s + 20))
Problem 6.7 Find the transfer functions whose asymptotic Bode magnitude plots are
shown below. Assume that there are no poles and zeros in the right half-plane.
(a)
(b)
(c)
Problem 6.8 Consider the following second-order transfer function.

G(s) = ωn²/(s² + 2ζωn s + ωn²)
(1) Draw the Bode plots of the above transfer function by hand as accurately as possible
for the following parameters. Include the position of the magnitude Bode plot at
the frequency ω = ωn.
(2) Draw the Bode plots of the above transfer function by MATLAB for the parameters
in part (1) and compare them with the results in part (1).
Problem 6.9 Draw the Bode plots of the system in Problem 6.4 by hand for the
following parameters. Draw the same Bode plots by MATLAB and compare them with
the plots drawn by hand.
(1) K = 100 (N/m), M = 1 (kg), B = 10 (N·sec/m)
(2) K = 16 (N/m), M = 4 (kg), B = 3.2 (N·sec/m)
Problem 6.10 Consider the servo motor system in Problem 6.5. Draw the Bode plot
of the system by hand for the following parameters. Draw the same Bode plot by
MATLAB and compare it with the plot drawn by hand.
Problem 6.11 Consider the system in Problem 3.6. Draw the Bode plot of the system
for the following parameters by MATLAB.
Problem 6.12 Draw the Nyquist plots for the following open-loop transfer functions.
Determine the stabilities of the unity-feedback systems.
(1) G(s) = 1/(s + 1)
(2) G(s) = 1/(s(s + 1))
(3) G(s) = 1/(s²(s + 1))
(4) G(s) = 1/((s + 1)(s + 10))
(5) G(s) = 1/(s(s + 1)(s + 10))
(6) G(s) = (s + 5)/(s(s + 1)(s + 10))
Problem 6.13 Consider a unity-feedback system with the following open-loop trans-
fer function.
G(s) = 100/(s(s² + 11s + 10))
(1) Draw the Bode plot using MATLAB.
(2) Fill in the following table using the values of the Bode plot drawn in part (1).
Frequency (rad/sec) | 1 | 2 | 5 | 10 | 20
Magnitude           |   |   |   |    |
Phase               |   |   |   |    |
(3) Draw the approximate Nyquist plot using the values in the table in part (2).
(4) Draw the Nyquist plot using MATLAB. Compare the plot with the approximate
plot drawn in part (3).
Problem 6.14 Consider a unity-feedback system with the following open-loop trans-
fer function.
G(s) = K/(s(s² + 11s + 10)),   K > 0
(1) Draw the Bode plot using MATLAB for K = 10 and find the gain margin.
(2) Find the range of K that makes the closed-loop system stable using the Routh-
Hurwitz method.
(3) Using the result in part (2), find the boundary value of K between being stable and
unstable. Let the value of K be K1. Draw the Bode plot for the value of K = K1 and find
the gain margin.
(4) Using the value of K1 in part (3), calculate 20 log K1 − 20 log 10. Check if the value
is the same as the gain margin in part (1).
(5) Draw the Bode plot using MATLAB for K = 5 and check if the gain margin is the
same as 20 log K1 − 20 log 5.
Problem 6.15 Consider a unity-feedback system with the following open-loop trans-
fer function.
G(s) = K/(s(s² + s + 2))
(1) Find the range of K that makes the closed-loop system stable using the Nyquist
plot.
(2) When K has the value in the range found in part (1), find the gain margin.
(3) Find the value of K when the gain margin is 10 dB. Find the phase margin for the
value of K.
(4) Draw the Nyquist plot using MATLAB for the value of K found in part (3). Find
the gain and phase margin.
(5) Draw the unit step response using MATLAB for the value found in part (3).
Problem 6.16 Consider a unity-feedback system with the following open-loop trans-
fer function.
G(s) = K(s + 0.2)/(s(s + 1)³)
(1) Find the range of K that makes the closed-loop system stable using the Nyquist
plot.
(2) When K has the value in the range found in part (1), find the gain margin.
(3) Find the value of K when the gain margin is 10 dB. Find the phase margin for the
value of K.
(4) Draw the Nyquist plot using MATLAB for the value of K found in part (3). Find
the gain and phase margin.
(5) Draw the unit step response using MATLAB for the value found in part (3).
Problem 6.17 Consider the following unstable open-loop transfer function. Let us
stabilize the system using a PD controller.
G(s) = 1/(s(s − 10))
The system is configured as a unity-feedback system, and the following is the transfer
function of the PD controller.
D(s) = K(1 + s)
(1) Find the range of K that makes the closed-loop system stable using the Nyquist
plot.
(2) When K has the value in the range found in part (1), find the gain margin.
(3) Find the value of K when the gain margin is 10 dB. Find the phase margin for the
value of K.
(4) Draw the Nyquist plot using MATLAB for the value of K found in part (3). Find
the gain and phase margin.
(5) Draw the unit step response using MATLAB for the value found in part (3).
Problem 6.18 Consider a unity-feedback system with the following open-loop trans-
fer function. Assume all the constants are positive.
G(s) = K/(s(τ1 s + 1)(τ2 s + 1))
(1) Find the range of K that makes the closed-loop system stable using the Nyquist
plot.
(2) When K has the value in the range found in part (1), find the gain margin.
Problem 6.19 Consider unity-feedback systems with the following open-loop transfer
functions. Find the ranges of K that make the closed-loop systems stable using the
Nyquist plot.
(1) G(s) = K/s³
(2) G(s) = K(s + 1)/s³
(3) G(s) = K(s + 1)²/s³
(4) G(s) = K(s + 1)³/s³
Problem 6.20 Consider unity-feedback systems with the following open-loop transfer
functions. Determine the stability of the closed-loop systems using the Nyquist plots.
(1) G(s) = 1/(s² + 1)
(2) G(s) = 1/((s + 1)(s² + 1))
Problem 6.21 Let the phase of Eq. (6.109) be N (ω). Find the relationship between the
variables x and y for the constant values of the phase N (ω). Find the phase-version
of Figure 6.72. In other words, using MATLAB, draw the trajectories of the open-loop
transfer functions for constant closed-loop phases.
Problem 6.22 Consider the transfer functions given in Problem 6.6. Draw the Nichols
charts using MATLAB and find the gain and phase margins from the plots. Also,
compare them with the gain and phase margins obtained from the Bode plots.
7. Controller Design in Frequency Domain
The purpose of the control system design is to find the controller that makes the system
behave as desired. Figure 7.1 shows the general configuration of a control system. In the
figure, G(s) is the transfer function of the plant to be controlled, H(s) is the transfer function
of the feedback path, and D(s) is the transfer function of the controller. In this configuration,
the purpose of the control system design is to determine D(s) so that the closed-loop system
has the desired characteristics.
Before we start to design a controller, we need to decide on the performance requirements for
the control system to meet. The performance requirements for control systems are usually
given as the steady-state error, transient performance, and stability margin. There are many
ways to find the controller satisfying the given performance requirements. In this chapter,
we discuss the controller design method in the frequency-domain.
In the previous chapters, control system analysis methods are discussed. The root locus,
the Bode plot, and the Nyquist plot are tools for the control system analysis in the frequency-
domain. These tools can also be used for the control system design. In this chapter, we
discuss how to design controllers that satisfy given performance requirements using the
tools in the frequency-domain. The following is a list of procedures of the control system
design.
• Determination of the performance requirements: In many cases, the performance require-
ments are given to the control system designer. If they are not given, the control sys-
tem designer should determine the requirements before he or she starts to design a
controller. The performance requirements usually include transient requirements and
steady-state requirements. The transient requirements are given as the overshoot, the
settling time, and the stability margin. The steady-state requirements are given as the
steady-state error for the specified reference inputs.
• Selection of sensors and actuators: Sometimes, the control system designer may have to
choose sensors and actuators. The selection of sensors and actuators is based on the
performance requirements, cost, reliability, and maintainability.
• Modeling: In order to design a control system, the control system designer needs a math-
ematical model of the controlled plant. In Chapter 3, the modeling of the dynamic
systems is discussed. Sometimes, the model of the controlled plant may be supplied to
the control system designer. However, in many cases, the control system designer may
have to find the model of the controlled plant. When the dynamic characteristics of
sensors and actuators have considerable effects on the control system, they should be
included in the model of the whole system.
• Controller design: The control system designer finds the controller that satisfies the per-
formance requirements. In this chapter, we discuss the controller design in the frequency-
domain. In the following chapters, we discuss the controller design using the state-
space concept.
• Simulation: The objective of the control system simulation is to show that the designed
control system will function as desired. It is the safest and least expensive way of finding
out whether the controller is designed properly. Evidently, the simulation results may not
exactly match the actual experimental results since the simulation uses the system model,
which is prone to error. However, repeated simulations help the control system engineer
anticipate the performance of the actual system. If the simulation results are not
satisfactory, the control system engineer can go back to the controller design stage and
repeat the process until a satisfactory performance is reached.
• Experiments: When the simulation results are satisfactory, the control system engineer
may decide to actually implement the whole control system. If the experimental results
are not satisfactory, the control system engineer may go back to the controller design
stage and repeat the process until a satisfactory result is obtained.
7.1 PD Controller Design

The transfer function of the PD controller is D(s) = K(1 + Td s). From the Bode phase plot of the
PD controller, we can see that the phase of the PD controller increases from zero degrees to
90 degrees. Using this property, we can improve the
stability margin by increasing the phase margin of the closed-loop system. However, if we
look at the Bode magnitude plot of the PD controller, we can see that the magnitude is
increasing over the high-frequency range. Since noise signals usually have high-frequency
components, the PD controller may amplify the magnitude of noise. The amplified noise
may generate excessive vibrations in mechanical components and shorten the life of the
system. Therefore, when we use a PD controller, we have to make the value of Td as small as
possible to avoid the excessive amplification of the noise.
When we design a PD controller, first, determine the value of the constant gain K so
that the steady-state error requirement is satisfied. Then, the value of Td may be chosen to
satisfy the transient requirement. Since the transient performance is closely related to the
stability margin, the transient performance requirements may be converted to the required
stability margin. The phase margin is frequently used as the stability margin. Note that the
PD controller has a positive phase value of 45 degrees at the corner frequency of 1/Td . If
we choose 1/Td to be equal to the crossover frequency of the open-loop transfer function, we
may increase the phase margin by 45 degrees approximately. Since the magnitude is also
increased by 3dB at the corner frequency 1/Td , the increase in the phase margin may be less
than 45 degrees.
Example 7.1 Consider the control system in Figure 7.1 with the following transfer
function.
G(s) = 1/(s(s + 1))   (7.2)
Assume that the system is a unity feedback system, i.e., H(s) = 1. Let us design a PD
controller so that the steady-state error for a unit ramp reference input is less than 0.01
and the phase margin is over 50 degrees. First, determine the constant gain of K that
makes the system satisfy the steady-state requirement. The velocity error constant for
D(s) = K is obtained as follows:
Kv = lim(s→0) s · K/(s(s + 1)) = K   (7.3)

The steady-state error for a unit ramp input is ess = 1/Kv = 1/K; to make it less than 0.01,
choose K = 100. With D(s) = 100, the phase margin is only about 5 degrees, so we add a
derivative term. Choosing the corner frequency 1/Td near the crossover frequency of about
10 (rad/sec), i.e., Td = 0.1, gives D(s) = 100(1 + 0.1s) and raises the phase margin above the
required 50 degrees. Figure 7.3 shows the Bode plot of D(s)G(s) for this design.
For comparison, Figure 7.3 also shows the Bode plot of the open-loop transfer function
D(s)G(s) for 1/Td = 5 and 1/Td = 20. The phase margins for 1/Td = 5 and 1/Td = 20 are
79 degrees and 33 degrees, respectively. Figure 7.4 shows the unit step responses for
D(s) = 100 and D(s) = 100(1 + 0.1s).
MATLAB The following is the MATLAB code to draw the Bode plots of the system
in this example. Save the following source code in the text format file with the file
extension .m and run in the MATLAB command window.
clf
N=1001;
w=logspace(-1,2,N);
sys(1)=tf(100*[1],[1 1 0]);
sys(2)=tf(100*[0.1 1],[1 1 0]);
sys(3)=tf(100*[0.2 1],[1 1 0]);
sys(4)=tf(100*[0.05 1],[1 1 0]);
for k=1:4
[mag,phase]=bode(sys(k),w);
for i=1:N
mag_bode(i)=20*log10(mag(i));
phase_bode(i)=phase(i);
end
subplot(2,1,1)
semilogx(w, mag_bode,’-b’,’LineWidth’,3);
axis([.1 100 -40 60])
set(gca,’GridLineStyle’,’-’,’FontName’,’times’,’FontSize’,18)
set(gca,’ytick’,[-60 -40 -20 0 20 40 60])
xlabel(’{\omega} (rad/sec)’);
ylabel(’Magnitude (dB)’)
grid on;
hold on
subplot(2,1,2)
semilogx(w, phase_bode,’-b’,’LineWidth’,3);
axis([.1 100 -180 -90])
set(gca,’GridLineStyle’,’-’,’FontName’,’times’,’FontSize’,18)
set(gca,’ytick’,[-180 -150 -120 -90])
xlabel(’{\omega} (rad/sec)’);
ylabel(’Phase (degree)’)
grid on;
hold on
[gm,pm]=margin(sys(k))
end
hold off
The following is the code to draw step responses of the system in this example.
clf
K=100
h=feedback(K*tf([1],[1 1 0]),1)
[y1,t1]=step(h,10);
h=feedback(K*tf([0.1 1],[1 1 0]),1)
[y2,t2]=step(h,10);
plot(t1,y1,’-k’,t2,y2,’-b’,’LineWidth’,3);
axis([0 10 0 2])
set(gca,’GridLineStyle’,’-’,’FontName’,’times’,’FontSize’,18)
xlabel(’Time(sec)’);
ylabel(’Output’)
grid;
7.2 Lead Controller Design

The transfer function of the lead controller has the form D(s) = K(Ts + 1)/(αTs + 1) with α < 1,
and its phase is

φ(ω) = tan⁻¹(ωT) − tan⁻¹(ωαT)   (7.8)

Since we assume α < 1, the value of the above equation is always positive. Let ωmax be the
frequency at which the phase has the maximum value φmax. We can easily show that ωmax
satisfies the following relationship.

log10 ωmax = (1/2)[log10(1/T) + log10(1/(αT))]   (7.9)

ωmax = 1/(T√α)   (7.10)
From the above relationships, we can see that ωmax is the center frequency between 1/T and
1/(αT) on a log scale. If we plug ωmax into Eq. (7.8), we can obtain the following maximum
value of the phase.
φmax = tan⁻¹(1/√α) − tan⁻¹(√α)   (7.11)
We can obtain the following relationships from the above equation.
tan φmax = (1 − α)/(2√α)   (7.12)

sin φmax = (1 − α)/(1 + α)   (7.13)
From the above equation, we can derive the equation for α, as follows:
α = (1 − sin φmax)/(1 + sin φmax)   (7.14)
Before we design the lead controller, we need to determine the value of K that satisfies the
steady-state error requirement. Then, find the phase margin of the closed-loop system with
the constant gain K only, and determine the phase margin increase to achieve the required
relative stability. When we design a lead controller, we need to choose the value of φmax .
We usually choose φmax to be somewhat larger than the value of the required phase margin
increase. For example, suppose that we have a phase margin of 10 degrees with the constant
gain controller K, and we want to increase the phase margin to 40 degrees. Then, since the
phase margin increase we desire is 30 degrees, we may choose φmax to be somewhat larger
than 30 degrees, i.e., 35 degrees or 40 degrees. The value of α can be calculated by plugging
the chosen value of φmax into Eq. (7.14).
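MATLAB For instance, assuming a chosen φmax of 40 degrees, Eq. (7.14) gives the following value of α; the number is only an illustration of the calculation.
phimax = 40;                                     % degrees
alpha = (1 - sind(phimax))/(1 + sind(phimax))    % about 0.22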
If we have the value of ωmax , we can find the value of T using Eq. (7.10). Therefore, the
next step is finding the value of ωmax . Remember that the phase margin is the difference
between -180 degrees and the phase at the crossover frequency, where the gain crosses 0 dB.
If we choose ωmax to be equal to the crossover frequency ωc , we can maximize the use of a
positive phase shift. However, the lead controller increases not only the phase but also the
magnitude. If we choose ωmax to be equal to ωc , the crossover frequency will be shifted,
and the phase of the lead controller does not have the maximum value at the new crossover
frequency. Therefore, we have to choose ωmax to coincide with the new crossover frequency.
From Figure 7.5, we can see that the magnitude of the lead controller at the frequency ωmax
is half of 20log10(1/α). Choose ωmax to be the frequency where the magnitude of the
open-loop transfer function is equal to −0.5 × 20log10 (1/α). In other words, choose ωmax so
that the following relationship is satisfied.
20log10|KG(jωmax)H(jωmax)| = −0.5 × 20log10(1/α)   (7.15)
Then, ωmax becomes the new crossover frequency, and the phase of the lead controller has
the maximum value at the new crossover frequency. Using the value of ωmax , we can find
the value of T as follows:
T = 1/(ωmax√α)   (7.16)
Figure 7.6 shows how we find ωmax . In Figure 7.6, the solid lines show the Bode plot of
KG(s)H(s), and the dotted lines show the Bode plot of D(s)G(s)H(s). Even though we can
choose the new crossover frequency so that we can use the maximum phase shift of the lead
controller, we cannot avoid increasing the crossover frequency. Note that the increase in
the crossover frequency results in a decrease in the phase margin. For this reason, we need
to add the extra margin when we choose φmax . The following is a summary of the lead
controller design steps.
• Determine K so that the steady-state error requirement is satisfied.
• Draw the Bode plot of the open-loop transfer function KG(s)H(s), and find the phase
margin.
• Choose φmax to be somewhat larger than the value of the required phase margin increase.
• Find α by plugging φmax into Eq. (7.14).
• Choose ωmax to be the frequency where the magnitude of the open-loop transfer function
is equal to −0.5 × 20log10 (1/α).
• Find T by plugging ωmax into Eq. (7.16).
• With the designed lead controller D(s), draw the Bode plot of D(s)G(s)H(s) and find the
phase margin. If the phase margin requirement is not satisfied, increase φmax and
repeat the above process.
Example 7.2 Consider the system in Example 7.1. Let us design a lead controller with
the same requirements. The value of K is 100, as in Example 7.1. The phase margin
with the constant gain controller D(s) = 100 is 5 degrees; we need to increase the phase
margin by 45 degrees. Let us choose φmax = 50◦ with the extra margin of 5 degrees. By
plugging φmax = 50◦ into Eq. (7.14), we can obtain α = 0.13. Then, find the magnitude
of the lead controller at ωmax as follows:

0.5 × 20log10(1/α) = 0.5 × 20log10(1/0.13) ≈ 9 dB   (7.17)

The frequency where the magnitude of D(s)G(s) with D(s) = 100 is −9 dB is 16.7
(rad/sec). Then, we can choose ωmax to be 16.7 (rad/sec). Using the above values,
we can find T as follows:
T = 1/(ωmax√α) = 1/(16.7√0.13) = 0.17   (7.18)
The following is the designed lead controller.
D(s) = K(Ts + 1)/(αTs + 1) = 100(0.17s + 1)/(0.13(0.17s) + 1)   (7.19)
Figure 7.7 shows the Bode plot with the lead controller, and the phase margin is 53
degrees, which satisfies the requirement. Figure 7.8 shows the step response with the
lead controller.
MATLAB The following MATLAB program helps to design a lead controller easily.
When k=1 and k=2, the program draws the Bode plot without and with the
lead controller, respectively. First, run the program with k=1. Then, it will calculate
0.5*20*log10(1/alpha) and draw the Bode plot without the lead controller. Since
we cannot read the frequency in the Bode plot accurately, the program also prints the
frequency response values: the first column is the frequency in rad/sec, the second
column is the magnitude in dB, and the third column is the phase in degrees. In this
example, the value of 0.5*20*log10(1/alpha) is 9 dB. We can find that the frequency
where the magnitude is -9 dB is 16.7 (rad/sec). Then, run the program again with k=2
and wmax=16.7. From the Bode plot, we can see that the phase margin is 53 degrees.
clf
k=1
N=1001;
w=logspace(-1,2,N);
phimax=50;
alpha=(1-sin(pi*phimax/180))/(1+sin(pi*phimax/180))
0.5*20*log10(1/alpha)
wmax=16.7;
T=1/(wmax*sqrt(alpha))
sys(1)=tf(100*[1],[1 1 0]);
sys(2)=tf(100*[T 1],conv([alpha*T 1],[1 1 0]));
[mag,phase]=bode(sys(k),w);
for i=1:N
mag_bode(i)=20*log10(mag(i));
phase_bode(i)=phase(i);
end
[w’ mag_bode’ phase_bode’]
subplot(2,1,1)
semilogx(w, mag_bode,’-b’,’LineWidth’,3);
axis([.1 100 -40 60])
set(gca,’GridLineStyle’,’-’,’FontName’,’times’,’FontSize’,16)
set(gca,’ytick’,[-60 -40 -20 0 20 40 60])
xlabel(’{\omega} (rad/sec)’);
ylabel(’Magnitude (dB)’)
grid on;
hold on
subplot(2,1,2)
semilogx(w, phase_bode,’-b’,’LineWidth’,3);
axis([.1 100 -180 -90])
set(gca,’GridLineStyle’,’-’,’FontName’,’times’,’FontSize’,16)
set(gca,’ytick’,[-180 -150 -120 -90])
xlabel(’{\omega} (rad/sec)’);
ylabel(’Phase (degree)’)
grid on;
hold on
[Gm,Pm,Wcg,Wcp]=margin(sys(k))
hold off
h=feedback(sys(2),1);              % closed-loop system with the lead controller from the script above
[y2,t2]=step(h,10);                % unit step response
plot(t2,y2,'-b','LineWidth',3);
axis([0 10 0 2])
set(gca,'GridLineStyle','-','FontName','times','FontSize',18)
xlabel('Time(sec)');
ylabel('Output')
grid;
Example 7.3 Consider the control system in Figure 7.1 with the following transfer
function.
G(s) = 1/(s(s + 1)(s/5 + 1))   (7.20)
Assume that the system is a unity feedback system, i.e., H(s) = 1. Let us design a lead
controller so that the steady-state error for a unit ramp reference input is less than 0.1
and the phase margin is over 40 degrees. First, determine the constant gain of K that
makes the system satisfy the steady-state requirement. The velocity error constant for
D(s) = K is obtained as follows:

Kv = lim(s→0) s · K/(s(s + 1)(s/5 + 1)) = K   (7.21)
ess = 1/Kv = 1/K   (7.22)
To have the steady-state error of 0.1, choose K = 10. Figure 7.9 shows the Bode plot
of the open-loop transfer function D(s)G(s) for the controller D(s) = 10. The phase
margin for D(s) = 10 is -10 degrees and the closed-loop system is unstable. We need
to increase the phase margin by 50 degrees. Let us choose φmax = 55◦ with the extra
margin of 5 degrees. By plugging φmax = 55◦ into Eq. (7.14), we can obtain α = 0.1.
The magnitude contribution of the lead term at ωmax is 10·log10(1/α) = 10·log10(1/0.1) = 10 dB.
The frequency where the magnitude of D(s)G(s) with D(s) = 10 is −10 dB is 4.73 (rad/sec). Therefore, we choose ωmax to be 4.73 (rad/sec). Using the above values,
we can find T as follows:
T = 1/(ωmax·√α) = 1/(4.73·√0.1) = 0.67    (7.24)
The following is the transfer function of the lead controller.
D(s) = K·(Ts + 1)/(αTs + 1) = 10·(0.67s + 1)/(0.1·0.67s + 1)    (7.25)
Figure 7.9 shows the Bode plot of D(s)G(s), and the phase margin is 23 degrees, which
is far below the required phase margin. We cannot increase the phase margin by sim-
ply increasing the extra margin to φmax , because the controlled plant is the third-order
system, and the phase of the third-order system drops below -180 degrees. In this case,
we cannot achieve the required phase margin by using a single lead controller. Let us
try to add an additional lead controller and find if we can achieve the required phase
margin. Since the phase margin with one lead controller is 23 degrees, and the re-
quired phase margin is 40 degrees, we need an additional 17 degrees of the phase
margin. Considering the steep falling rate of the phase curve, let us add an extra 15
degrees margin to the required phase margin increase; i.e., choose φmax = 32◦ . Then,
we can obtain α = 0.3 with the chosen value of φmax. The magnitude contribution of the additional lead term at ωmax is 10·log10(1/α) = 10·log10(1/0.3) ≈ 5.2 dB.
The frequency where the magnitude of D(s)G(s) with Eq. (7.25) is −5.2 dB is 6.75 (rad/sec). Therefore, we choose ωmax to be 6.75 (rad/sec). Using the above values, we
can find T as follows:
T = 1/(ωmax·√α) = 1/(6.75·√0.3) = 0.27    (7.27)
The overall controller with the two cascaded lead sections is
D(s) = 10·(0.67s + 1)/(0.1·0.67s + 1)·(0.27s + 1)/(0.3·0.27s + 1)    (7.28)
Figure 7.9 shows the Bode plot, and the phase margin is 41 degrees, which satisfies the
requirement. Figure 7.10 shows the step responses with the controllers Eqs. (7.25) and
(7.28).
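MATLAB As a quick check of the cascaded design (a minimal sketch; the transfer functions are taken from Eqs. (7.20), (7.25), and (7.28), and the Control System Toolbox margin command is assumed to be available):
G=tf(1,conv([1 1 0],[0.2 1]));      % G(s)=1/(s(s+1)(s/5+1))
D1=tf(10*[0.67 1],[0.1*0.67 1]);    % single lead controller, Eq. (7.25)
D2=tf([0.27 1],[0.3*0.27 1]);       % additional lead section of Eq. (7.28)
margin(D1*G)                        % phase margin with one lead controller
figure
margin(D1*D2*G)                     % phase margin with the cascaded lead controllers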
7.3 PI Controller Design
The PI controller has the transfer function D(s) = K·(1 + 1/(Ti·s)) (7.29); the integral term raises the low-frequency gain and removes the steady-state error for a step reference.
Example 7.4 Consider the control system in Figure 7.1 with the following transfer function.
G(s) = 1/((s + 1)(s/5 + 1))    (7.30)
Assume that the system is a unity feedback system, i.e., H(s) = 1. Let us design a
controller so that the steady-state error for a unit step reference input is zero and the
phase margin is over 40 degrees. We need to make the system type one in order to
have a zero steady-state error for a unit step input. Let us consider a PI controller of
Eq. (7.29). First, choose K = 10 and apply the controller D(s) = 10 without an integral
term to the system. Figure 7.12 shows the Bode plot of the open-loop transfer function
D(s)G(s) with the controller D(s) = 10, and the phase margin is 48 degrees. Figure 7.12
shows the unit step response for the controller. We can see that the unit step response
has a non-zero steady-state error for a unit step input. In order to make the system
type one, let us add an integral term. The corner frequency 1/Ti is chosen to be one
decade below the crossover frequency 6 (rad/sec), i.e., 1/Ti = 0.6 (rad/sec). Therefore,
the transfer function of the PI controller is as follows:
D(s) = 10·(1 + 0.6/s)    (7.31)
Figure 7.12 shows the Bode plot with the PI controller, and the phase margin is 42
degrees. Figure 7.13 shows the unit step response with the PI controller. We can
see that the steady-state error is reduced to zero without a significant change in the
transient response. For comparison, let us choose the corner frequency 1/Ti to be the
half of the crossover frequency, i.e., 1/Ti = 3 (rad/sec). The PI controller is as follows:
D(s) = 10·(1 + 3/s)    (7.32)
Figure 7.12 and Figure 7.13 show the Bode plot and the step response with the above
PI controller, respectively. Since the corner frequency of the above PI controller is not
far from the crossover frequency, the phase margin is reduced to 21 degrees; in turn,
the transient response becomes worse.
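MATLAB A minimal sketch comparing the two PI controllers of Eqs. (7.31) and (7.32); it assumes the Control System Toolbox commands margin, feedback, and step:
G=tf(1,conv([1 1],[0.2 1]));       % G(s)=1/((s+1)(s/5+1))
D1=tf(10*[1 0.6],[1 0]);           % PI controller of Eq. (7.31)
D2=tf(10*[1 3],[1 0]);             % PI controller of Eq. (7.32)
margin(D1*G), figure, margin(D2*G)
figure
step(feedback(D1*G,1),feedback(D2*G,1),10)
legend('1/Ti = 0.6','1/Ti = 3')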
7.4 Lag Controller Design
The lag controller has the same form as the lead controller, D(s) = K·(Ts + 1)/(αTs + 1), but with α > 1. Since α > 1, the phase angle is negative in the whole frequency range.
The Bode magnitude plot of the lag controller shows that the lag controller has a large gain
in the low-frequency range. Even though the gain of the lag controller does not approach
infinity in the low frequency, unlike a PI controller, the large gain in the low frequency
reduces the steady-state error. When we design a lag controller, we use the gain reduction
in the high-frequency range. By reducing the high-frequency gain, we can decrease the
crossover frequency; in turn, we can increase the phase margin.
First, determine the constant gain of K so that the system satisfies the steady-state error
requirement. Then, draw the Bode plot of D(s)G(s) with D(s) = K and find the phase margin.
If the phase margin is below the required value, find the new crossover frequency ωcnew
where the phase margin is the required value. Next, determine α so that the following
relationship is satisfied:
20·log10(α) = 20·log10|KG(jωcnew)H(jωcnew)|    (7.35)
With the above value of α, the Bode magnitude plot of the lag controller has a value of −20·log10(α) in the high-frequency range, and ωcnew becomes the new crossover frequency with
the lag controller. Figure 7.15 shows how to find α. In Figure 7.15, the solid lines are the
Bode plot of KG(s)H(s), and the dotted lines are the Bode plot of D(s)G(s)H(s) with the lag
controller. The value of 1/T should be chosen far below the new crossover frequency so that
the phase at the new crossover frequency is not affected significantly. The usual value for
1/T is one decade below the new crossover frequency, i.e.,
1/T = ωcnew/10    (7.36)
The steps for the lag controller design are summarized below:
• Draw the Bode plot of the open-loop transfer function KG(s)H(s) and find the phase mar-
gin.
• Find the new crossover frequency ωcnew where the phase margin equals the required value.
• Determine α from Eq. (7.35) and T from Eq. (7.36), which give the lag controller D(s) = K·(Ts + 1)/(αTs + 1).
• Draw the Bode plot of D(s)G(s)H(s) with the lag controller and check whether the phase margin requirement is satisfied. If not, repeat the above steps with a new value of ωcnew.
Example 7.5 Consider the system in Example 7.4 again. The following is the transfer function of the controlled plant.
G(s) = 1/((s + 1)(s/5 + 1))    (7.37)
Let us design the lag controller D(s) so that the steady-state error for a unit step input
is less than 0.01, and the phase margin is over 40 degrees. Since the lag controller does
not change the system type, unlike the PI controller, the steady-state error cannot be
zero.
Choose K = 100 so that the closed-loop system satisfies the steady-state requirement.
Figure 7.16 shows the Bode plot of the open-loop transfer function D(s)G(s) with
D(s) = 100, and the phase margin is 15 degrees. With the extra margin, choose the
target phase margin to be 45 degrees. In order to have 45 degrees of the phase margin,
the phase should be -135 degrees at the new crossover frequency. The phase is -135
degrees at the frequency of 6.75 (rad/sec), where the magnitude is 18.8 dB. Using Eq.
(7.35), we can obtain α = 10^(18.8/20) ≈ 8.7.
Choose 1/T to be one decade below the new crossover frequency of 6.75 (rad/sec).
Then, we can obtain the value of T as follows:
T = 10/ωcnew = 10/6.75 = 1.5    (7.40)
The resulting lag controller is
D(s) = 100·(1.5s + 1)/(8.7·1.5s + 1)    (7.41)
Figure 7.16 shows the Bode plot of the open-loop transfer function D(s)G(s) with the
controller Eq. (7.41), and the phase margin is 40 degrees. The resulting phase mar-
gin is slightly below the target phase margin since the phase is decreased at the new
crossover frequency. Figure 7.17 shows the step responses. The step response with the
lag controller shows a similar response to the PI controller with a small steady-state
error.
MATLAB The following MATLAB program helps to design a lag controller easily.
When k=1 and k=2, the program draws the Bode plot without and with the lag con-
troller, respectively. First, run the program with k=1. Then, it draws the Bode plot
without the lag controller. Since we cannot read numbers from the Bode plot accu-
rately, the program prints the values of magnitude in dB and the phase. The first
column contains the frequency values in rad/sec, the second column the magnitude values in dB, and the third column the phase values in degrees. From the printed values, we can see that the phase is −135 degrees, and the magnitude is 18.8 dB at the frequency of 6.75 (rad/sec). Next, run the program with k=2, wcnew=6.75, and alpha=10^(18.8/20). Then, we can find that the phase margin is around 40 degrees.
clf
k=1
N=1001;
num=100;den=conv([1 1],[0.2 1]);
wcnew=6.75;
w=logspace(-2,2,N);
alpha=10^(18.8/20)
T=10/wcnew
sys(1)=tf(num,den);
sys(2)=tf(num*[T 1],conv([alpha*T 1],den));
[mag,phase]=bode(sys(k),w);
for i=1:N
mag_bode(i)=20*log10(mag(i));
phase_bode(i)=phase(i);
end
[w’ mag_bode’ phase_bode’]
subplot(2,1,1)
semilogx(w, mag_bode,’-b’,’LineWidth’,3);
axis([.01 100 -40 40])
set(gca,’GridLineStyle’,’-’,’FontName’,’times’,’FontSize’,18)
set(gca,’ytick’,[-60 -40 -20 0 20 40 60])
xlabel(’{\omega} (rad/sec)’);
ylabel(’Magnitude (dB)’)
grid on;
hold on
subplot(2,1,2)
semilogx(w, phase_bode,’-b’,’LineWidth’,3);
axis([.01 100 -180 0])
set(gca,’GridLineStyle’,’-’,’FontName’,’times’,’FontSize’,18)
set(gca,’ytick’,[-180 -135 -90 -45 0])
xlabel(’{\omega} (rad/sec)’);
ylabel(’Phase (degree)’)
grid on;
hold on
[Gm,Pm,Wcg,Wcp]=margin(sys(k))
hold off
7.5 PID Controller and Lead-Lag Controller
Example 7.6 Consider the system in Example 7.4 again. In the example, the phase margin of the closed-loop system with the PI controller D(s) = 10(1 + 0.6/s) is about 40 degrees, and the response has a little bit of overshoot. Let us add an additional PD
controller to increase the phase margin up to 70 degrees. If we choose the corner fre-
quency 1/Td to be equal to the crossover frequency of 6 rad/sec, we can increase the
phase margin by approximately 45 degrees. Since we need to increase the phase mar-
gin by only 30 degrees, let us choose the corner frequency 1/Td to be somewhat higher than the crossover frequency, i.e., 1/Td = 10 (rad/sec), so that the system does not become too sensitive to noise. The transfer function of the PID controller is as follows:
D(s) = 10·(1 + 0.6/s)·(1 + 0.1s)    (7.44)
Figure 7.18 shows the Bode plot of the open-loop transfer function D(s)G(s), and the
phase margin is 73 degrees. Figure 7.19 shows step responses. We can see that the
overshoot is decreased, and the transient is improved with the PID controller.
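MATLAB A minimal sketch to check the PID design of Eq. (7.44), assuming the Control System Toolbox:
G=tf(1,conv([1 1],[0.2 1]));       % G(s)=1/((s+1)(s/5+1))
Dpi=tf(10*[1 0.6],[1 0]);          % PI part, Eq. (7.31)
Dpid=Dpi*tf([0.1 1],1);            % PID controller, Eq. (7.44)
margin(Dpid*G)                     % phase margin should be roughly 73 degrees
figure
step(feedback(Dpi*G,1),feedback(Dpid*G,1),10)
legend('PI','PID')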
The lead-lag controller is the series connection of the lead and lag controllers. The fol-
lowing is the transfer function of the lead-lag controller, where α1 < 1, α2 > 1.
D(s) = K·(T1·s + 1)/(α1·T1·s + 1)·(T2·s + 1)/(α2·T2·s + 1)    (7.45)
We can design the above controller in two steps. Design one controller first, then design the
other one for the system, including the controller designed in the first step. The following
shows an example.
Example 7.7 Consider the system in Example 7.5 again. In the example, the phase
margin of the closed-loop system with the lag controller is 40 degrees, and the response
has a little bit of overshoot. Let us add an additional lead controller to increase the
phase margin up to 60 degrees. We need to increase the phase margin by 20 degrees.
However, since the phase decreases at a steep rate, we need to add an extra margin of
15 degrees and choose φmax = 35◦ . Then, we can obtain α = 0.27 with the chosen value
of φmax. The magnitude contribution of the lead term at ωmax is 10·log10(1/α) = 10·log10(1/0.27) ≈ 5.7 dB.
The frequency where the magnitude of D(s)G(s) with Eq. (7.41) is −5.7 dB is 10
(rad/sec). Then, we can choose ωmax to be 10 (rad/sec). Using the above values, we
can find T as follows:
T = 1/(ωmax·√α) = 1/(10·√0.27) = 0.2    (7.47)
The final controller is obtained by cascading two controllers as follows:
D(s) = 100·(0.2s + 1)/(0.27·0.2s + 1)·(1.5s + 1)/(8.7·1.5s + 1)    (7.48)
Figure 7.20 shows the Bode plot, and the phase margin is 63 degrees, which satisfies
the requirement. Figure 7.21 shows the step responses with the controllers Eqs. (7.41)
and (7.48).
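MATLAB A minimal sketch to check the lead-lag design of Eq. (7.48) against the lag-only controller of Eq. (7.41):
G=tf(1,conv([1 1],[0.2 1]));             % G(s)=1/((s+1)(s/5+1))
Dlag=tf(100*[1.5 1],[8.7*1.5 1]);        % lag controller, Eq. (7.41)
Dll=Dlag*tf([0.2 1],[0.27*0.2 1]);       % lead-lag controller, Eq. (7.48)
margin(Dlag*G), figure, margin(Dll*G)    % compare phase margins
figure
step(feedback(Dlag*G,1),feedback(Dll*G,1),5)
legend('lag only','lead-lag')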
7.6 Digital Implementation of Controller
When a controller designed in the continuous-time domain is implemented on a microcontroller, its derivative and integral terms are approximated numerically, and the controller is described by a difference equation such as y(k) = a·y(k − 1) + u(k) (7.49). Instead of the derivative terms, we have time-delayed terms in the difference equation.
To analyze discrete-time systems, we need a mathematical tool. Remember that the Laplace
transform is widely used in analyzing the continuous-time system. For discrete-time sys-
tems, a similar mathematical tool is available. The following is the definition of the z-
transform, where y(k) is the sampled signal of y(t), as shown in Figure 7.23, and Ts is the
sampling period.
Z[y(k)] = Y(z) = Σ_{k=0}^{∞} y(k)·z^(−k)    (7.50)
Since we have the delayed terms in the difference equation, we need the following relationship, where F(z) is the z-transform of f(k).
Z[f(k − 1)] = z^(−1)·Z[f(k)] = z^(−1)·F(z)    (7.51)
Using the above relationship, we can define the transfer function of the discrete-time sys-
tem. The transfer function is the ratio of the z-transform of the output of a system to the
z-transform of the input of a system. For example, if we take the z-transform of Eq. (7.49),
we can obtain the following relationship, where Y(z) and U(z) are the z-transforms of y(k)
and u(k), respectively.
Y(z) = a·z^(−1)·Y(z) + U(z)    (7.52)
From the above relationship, we can find the transfer function of the system Eq. (7.49) as
follows:
G(z) = Y(z)/U(z) = 1/(1 − a·z^(−1)) = z/(z − a)    (7.53)
Note that we can always convert the transfer function D(z) to a difference equation by going
through the above steps in the reverse order.
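MATLAB As an illustration (a minimal sketch with a = 0.9 chosen arbitrarily), the difference equation y(k) = a·y(k − 1) + u(k) can be simulated directly with the filter command, whose coefficient vectors are read off from G(z) = 1/(1 − a·z^(−1)):
a=0.9; N=50;
u=ones(1,N);                  % unit step input sequence
y=filter(1,[1 -a],u);         % implements y(k) = a*y(k-1) + u(k)
stem(0:N-1,y); xlabel('k'); ylabel('y(k)'); grid on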
Consider a continuous-time integrator with input e(t) and output u(t); its sampled output is u(kTs) = ∫₀^(kTs) e(τ) dτ, where Ts is the sampling period and k is an integer. In Figure 7.24, we can see that u(kTs) is the area under the curve e(t).
Since we do not have the input values between sampling instances, the simplest way of
finding the approximate output value is using the following relationship, i.e., replacing the
area under the curve with a rectangle. Figure 7.25 shows the concept.
u(kTs ) = u(kTs − Ts ) + Ts · e(kTs ) (7.56)
When we write a difference equation, we leave out the sampling period Ts in the sampled
signal under the assumption that the sampling period is known, as shown in the following
equation.
u(k) = u(k − 1) + Ts · e(k) (7.57)
This method is called the backward rectangular rule. By taking the z-transform of Eq. (7.56),
we can find the transfer function as follows:
U(z)/E(z) = Ts/(1 − z^(−1)) = 1/[(1 − z^(−1))/Ts]    (7.58)
In a similar way, we can find the discrete-time transfer function of the following system.
D(s) = a/(s + a)    (7.59)
The transfer function of the approximate discrete-time system of the above system is as
follows:
D(z) = a/[(1 − z^(−1))/Ts + a]    (7.60)
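For example, multiplying out Eq. (7.60) (a short side calculation, not part of the original equation numbering) gives U(z)·(1 − z^(−1) + a·Ts) = a·Ts·E(z), i.e., the difference equation u(k) = [u(k − 1) + a·Ts·e(k)]/(1 + a·Ts), which can be coded directly on a microcontroller.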
In general, if we have a continuous-time system transfer function D(s), we can find the
discrete-time transfer function D(z) by substituting as follows:
s = (1 − z^(−1))/Ts    (7.61)
We can easily show that the above method is virtually the same as the derivative term ap-
proximation that we use to implement PD controllers in the previous chapter.
Alternatively, we can use the relationship u(k) = u(k − 1) + Ts·e(k − 1) (7.62), which again replaces the area under the curve with a rectangle, this time of height e(k − 1). Figure 7.26 shows the concept.
This method is called the forward rectangular rule. By taking the z-transform of Eq.(7.62),
we can find the transfer function as follows:
U(z)/E(z) = Ts·z^(−1)/(1 − z^(−1)) = Ts/(z − 1) = 1/[(z − 1)/Ts]    (7.63)
The corresponding substitution for the forward rectangular rule is
s = (z − 1)/Ts    (7.64)
The problem with the above methods is that they may suffer considerable errors for low
sampling frequencies because we use rectangles to approximate the area under the curves.
The problem may be alleviated by using a trapezoid in the place of a rectangle, as shown in
the following relationship. Figure 7.27 shows the concept.
u(k) = u(k − 1) + (Ts/2)·[e(k − 1) + e(k)]    (7.65)
This method is called the trapezoidal rule, or Tustin's method. By taking the z-transform
of Eq. (7.65), we can find the transfer function as follows:
U(z)/E(z) = (Ts/2)·(1 + z^(−1))/(1 − z^(−1)) = 1/[(2/Ts)·(1 − z^(−1))/(1 + z^(−1))]    (7.66)
The corresponding substitution is
s = (2/Ts)·(1 − z^(−1))/(1 + z^(−1))    (7.67)
Once we have a discrete-time transfer function D(z), we can easily convert it to a difference
equation, which we can use to implement the controller in a digital form.
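MATLAB The conversion can also be done numerically. A minimal sketch (using the lead controller of Eq. (7.19) and an illustrative sampling period of 0.01 sec), assuming the Control System Toolbox command c2d:
Ds=tf(100*[0.17 1],[0.13*0.17 1]);   % lead controller of Eq. (7.19)
Ts=0.01;                             % sampling period chosen for illustration
Dz=c2d(Ds,Ts,'tustin')               % discrete equivalent by the trapezoidal rule
[numd,dend]=tfdata(Dz,'v')           % coefficients of the corresponding difference equation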
7.7 Lab7
7.7.1 Lead Controller for an Analog Dynamic Simulator
Consider the analog dynamic simulator circuit shown in Figure 7.28.
Vo(s)/Vi(s) = (−1/(0.1s + 1))·(−40/s) = 40/(s(0.1s + 1))    (7.68)
Let us design and implement a lead controller for the above system. The digital transfer
function of a lead controller can be found using the trapezoidal rule (Tustin’s method) as
follows:
D(z) = U(z)/E(z) = K·(T·s + 1)/(α·T·s + 1) with s = (2/Ts)·(z − 1)/(z + 1), i.e.,
D(z) = K·[T·(2/Ts)·(z − 1)/(z + 1) + 1] / [α·T·(2/Ts)·(z − 1)/(z + 1) + 1]    (7.69)
The above transfer function can be converted to the following difference equation.
u(k) = [K·(Ts + 2T)·e(k) + K·(Ts − 2T)·e(k − 1) + (2αT − Ts)·u(k − 1)] / (Ts + 2αT)    (7.70)
In the above equation, e(k) is the error signal and u(k) is the control signal.
Exercise 7.1 Design and implement a lead controller for the analog dynamic simula-
tor shown in Figure 7.28. Let K = 1. The required phase margin is 55 degrees. Modify
the codes given in Section 5.5.1 to implement the lead controller.
Exercise 7.2 Design and implement a lead controller for the DC motor used in Section
5.5.2. Let K = 5. The required phase margin is 50 degrees. Modify the codes given in
Section 5.5.2 to implement the lead controller.
The magnetic levitation system floats a metal ball in the air using an electromagnet. The
displacement of the ball is governed by the following dynamic equation:
M·g − K·(i²/y²) = M·(d²y/dt²)    (7.71)
where i is the current through the electromagnet, y is the distance of the ball from the elec-
tromagnet, M is the mass of the ball, g is the gravitational acceleration constant, and K is a
proportional constant. The current i through the electromagnet is controlled by a current
amplifier, and it is related to the voltage controller output u by the expression:
i = 0.15u + I0 (7.72)
where I0 is the nominal current corresponding to the nominal operating position Y0 . The
photosensor (infrared-based) measures the ball position. It provides a measurement of the
distance of the ball from the electromagnet by providing a voltage v such that:
v = γ (y − Y0 ) (7.73)
Let us define the state variables as
x1 = v,  x2 = v̇ = γ·ẏ    (7.74)
Using the above state variables, we can obtain the state equation as follows:
[ẋ1; ẋ2] = [f1(x1, x2, u); f2(x1, x2, u)] = [x2; γg − (γK/M)·(0.15u + I0)²/(x1/γ + Y0)²]    (7.75)
Let us assume the ball is stationary at the position Y0 , and the current is I0 at the position.
Since the ball is at rest at the equilibrium position, we have the condition ẋ2 = 0, from which
we can obtain the following relationship.
K = M·g·Y0²/I0²    (7.76)
Therefore, the state variables and control have the following values in the equilibrium posi-
tion.
x1 = 0, x2 = 0, u = 0 (7.77)
Using the above values, we can evaluate the following equations.
∂f1/∂x1 = 0,  ∂f1/∂x2 = 1,
∂f2/∂x1 = (2K/M)·(0.15u + I0)²/(x1/γ + Y0)³ = 2g/Y0,  ∂f2/∂x2 = 0    (7.78)
∂f1/∂u = 0,
∂f2/∂u = −(0.3γK/M)·(0.15u + I0)/(x1/γ + Y0)² = −0.3γg/I0    (7.79)
where all partial derivatives are evaluated at the equilibrium point x1 = 0, x2 = 0, u = 0.
From the above results, we can obtain the linearized state equation as follows:
[Δẋ1; Δẋ2] = [0, 1; 2g/Y0, 0]·[Δx1; Δx2] + [0; −0.3γg/I0]·Δu    (7.80)
From the above equations, we can find the transfer function of the linearized system. With the numerical parameter values of the laboratory setup plugged in, the transfer function is obtained as follows:
G(s) = −1200/(s² − 852)    (7.84)
The above transfer function is an unstable transfer function since it has a real pole in the
right-half plane. Note that the above transfer function has a negative sign at the front. When
we design a lead controller, we can regard the negative sign to be at the summing junction
with the reference input, as shown in Figure 7.30. When we implement the lead controller,
we have to consider that the negative sign for the negative feedback is not necessary since
the transfer function already has a negative sign at the front.
Let us design a lead controller so that the closed-loop system has a phase margin of 70
degrees. Using the following MATLAB code, we can design the lead controller.
clf;format shortg;
k=1;
N=1001;w=logspace(-1,4,N);
phimax=70;alpha=(1-sin(pi*phimax/180))/(1+sin(pi*phimax/180))
0.5*20*log10(1/alpha)
wmax=112;T=1/(wmax*sqrt(alpha))
sys(1)=tf(2*[1200],[1 0 -852]);
sys(2)=tf(2*1200*[T 1],conv([alpha*T 1],[1 0 -852]));
[mag,phase]=bode(sys(k),w);
for i=1:N
mag_bode(i)=20*log10(mag(i));
phase_bode(i)=phase(i);
end
[w’ mag_bode’ phase_bode’]
figure(1)
subplot(2,1,1)
semilogx(w, mag_bode,’-b’);
axis([.1 10000 -60 20])
set(gca,’GridLineStyle’,’-’,’FontName’,’times’,’FontSize’,12)
set(gca,’ytick’,[-60 -40 -20 0 20 40 60])
xlabel(’{\omega} (rad/sec)’);
ylabel(’Magnitude (dB)’)
grid on;
hold on
subplot(2,1,2)
semilogx(w, phase_bode,’-b’);
axis([.1 10000 -180 -90])
set(gca,’GridLineStyle’,’-’,’FontName’,’times’,’FontSize’,12)
set(gca,’ytick’,[-180 -150 -120 -90])
xlabel(’{\omega} (rad/sec)’);
ylabel(’Phase (degree)’)
grid on;
hold off
[Gm,Pm,Wcg,Wcp]=margin(sys(k))
Code 7.1
The constant gain K of the lead controller is chosen to be 2 after trial and error. If the
constant gain is too large, the controller circuit may be saturated. If it is too small, the ball
cannot be held in the air. φmax is chosen to be 70 degrees. The above MATLAB code with
k=1 shows the phase margin without the controller and prints the values of magnitude in
dB. The first column is the values of frequency in rad/sec, and the second column is the
magnitude values in dB. In this example, the value of 0.5*20*log10(1/alpha) is 15 dB. We
can find that the frequency where the magnitude is -15 dB is 112 (rad/sec). Then, run the
program again with k=2 and wmax=112. From the Bode plot, we can see that the phase
margin is 70 degrees. The values for the lead controller are found to be T ≈ 0.05 and αT ≈ 0.00155; i.e., D(s) = 2·(0.05s + 1)/(0.00155s + 1) (7.86). The implementation codes for the Discovery board are given in Code 7.2 through Code 7.4. Replace Code 5.2, Code 5.4, and Code 5.8 with Code 7.2, Code 7.3, and Code 7.4, respectively.
Code 7.2
Code 7.3
if (data_flag==2) {
if (data_counter<=sampling_frequency*4) {
data[data_counter++]=(int16_t)y;
}
else {
data_done=1;
}
}
error=-ref+y;   /* error = y - ref; the sign is reversed because the plant transfer function Eq. (7.84) already carries a negative sign (see Figure 7.30) */
/* lead controller difference equation, Eq. (7.70): k = K, ta = T, tb = alpha*T, delt = sampling period */
control=(k*(delt+2*ta)*error+k*(delt-2*ta)*olderror+(2*tb-delt)*oldcontrol)/(delt+2*tb);
oldcontrol=control;
olderror=error;
if (control > 2047) control = 2047;
if (control < -2048) control = -2048;
da_value = control + 2048;
HAL_DAC_SetValue(&hdac, DAC_CHANNEL_2, DAC_ALIGN_12B_R, (uint32_t)(da_value));
}
/* USER CODE END Callback 0 */
Code 7.4
Figure 7.31 shows the step response of the magnetic levitation system with the designed lead
controller. As can be seen in the figure, the response shows no overshoot, which is consistent
with the phase margin.
To see how the change in the lead controller parameter affects the response, let us try the
following lead controller. Note that the coefficient of the derivative term is reduced from
0.05 to 0.02.
D(s) = 2·(0.02s + 1)/(0.00155s + 1)    (7.87)
Figure 7.32 shows the response for the above controller. As expected, the response shows a
little overshoot.
For comparison, let us reduce the derivative term coefficient to 0.01 as in the following con-
troller.
D(s) = 2·(0.01s + 1)/(0.00155s + 1)    (7.88)
Figure 7.33 shows the step response for the above controller. Again, as expected, the re-
sponse shows a significant increase in the overshoot.
Exercise 7.3 Using the linearized model of the magnetic levitation system, do the
MATLAB step response simulations of the closed-loop system with the lead controllers
Eqs. (7.86),(7.87), and (7.88). Compare the simulation results with the above experi-
mental results.
Problem
Problem 7.1 Consider the following PD controller for the system in Figure 7.29. The
steady-state error for a unit ramp input should be 0.01.
D(s) = Kp + Kd s
(1) Determine the PD controller gains so that the damping ratio (ζ) is 0.5.
(2) Determine the PD controller gains so that the damping ratio (ζ) is 0.707.
(3) Determine the PD controller gains so that the damping ratio (ζ) is 1.0.
Problem 7.2 Consider the PD controller for the system in Figure 7.29. Determine the
PD controller gains so that the following requirements are satisfied.
Problem 7.3 Consider the system in Figure 7.29. Let us design the PD controller so
that the steady-state error is less than 0.001 for the unit-ramp input.
(1) Determine Kp so that the steady-state error requirement is satisfied. Let Kd be zero.
Draw the Bode plot and find the phase margin using MATLAB. Also, draw the unit-step response of the closed-loop system using MATLAB.
(2) With the value of Kp found in (1), determine Kd so that the phase margin is over 45 degrees and the crossover frequency of the Bode magnitude plot is not over 20 Hz.
Draw the unit-step response of the closed-loop system using MATLAB.
Problem 7.4 Consider the system in Figure 7.29. Let us design the following lead
controller for the system.
D(s) = K·(Ts + 1)/(αTs + 1),  α < 1
(1) With the same steady-state requirement as in Problem 3, design the lead controller
so that the phase margin is between 45 and 50 degrees. Draw the Bode plot using
MATLAB and find the crossover frequency of the Bode magnitude plot.
(2) Draw the unit-step response of the closed-loop system with the controller designed
in (1) using MATLAB.
Problem 7.5 Consider the system in Figure 7.29. Let us design the following lead
controller for the system.
D(s) = K·(Ts + 1)/(αTs + 1),  α < 1
(1) Design the lead controller so that the steady-state error for the unit-ramp is less
than 0.01 and the phase margin is over 75 degrees. Draw the Bode plot using MATLAB
and find the crossover frequency of the Bode magnitude plot.
(2) Draw the unit-step response of the closed-loop system with the controller designed
in (1) using MATLAB.
Problem 7.6 Consider the unity-feedback system with the following open-loop trans-
fer function.
G(s) = 10/s²
(1) Design the following lead controller so that the phase margin is over 45 degrees.
D(s) = (Ts + 1)/(αTs + 1),  α < 1
(2) Draw the unit-step response of the closed-loop system with the controller designed
in (1) using MATLAB.
Problem 7.7 Consider the unity-feedback system with the following open-loop trans-
fer function.
G(s) = 6/(s(0.2s + 1)(0.5s + 1))
(1) Draw the Bode plot using MATLAB and find the phase margin.
(2) Design the following lead controller so that the phase margin is over 35 degrees.
D(s) = (Ts + 1)/(αTs + 1),  α < 1
(3) Using two lead controllers in series, design the controller so that the phase margin
is over 65 degrees.
(4) Draw the unit-step responses for the controllers found in (2) and (3) using MAT-
LAB.
Problem 7.8 Consider the unity-feedback system with the following open-loop trans-
fer function. Let us design the lead controller so that the steady-state error for the
unit-ramp is less than 0.05.
G(s) = K/(s(s/2 + 1)(s/6 + 1))
(1) Using two cascaded lead controllers, design the controller so that the phase margin
is over 45 degrees.
(2) Draw the unit-step responses for the controllers found in (1) using MATLAB.
Problem 7.9 Consider the following lead controller, where α < 1.
D(s) = K·(Ts + 1)/(αTs + 1)
Show that the above lead controller has the maximum phase shift at the following radial frequency.
ωmax = 1/(T·√α)
Problem 7.10 Consider the following PI controller for the system in Figure 7P-1.
D(s) = K·(1 + 1/(Ti·s))
(1) Determine the controller gains of the above PI controller so that the phase margin
is over 50 degrees.
(2) Draw the unit-step responses for the controllers found in (1).
Problem 7.11 Let us design the following lag controller for the system in Figure 7P-1.
D(s) = K·(Ts + 1)/(αTs + 1),  α > 1
(1) Design the lag controller so that the steady-state error for a unit-ramp is less than
0.01 and the phase margin is over 50 degrees.
(2) Draw the unit-step responses for the controllers found in (1) using MATLAB.
Problem 7.12 Consider the unity-feedback system with the following open-loop
transfer function. Let us design the lag controller so that the steady-state error for
the unit-ramp is less than 0.05.
G(s) = 1/(s(s + 2))
(1) Design the following lag controller so that the phase margin is over 45 degrees.
D(s) = K·(Ts + 1)/(αTs + 1),  α > 1
(2) Draw the unit-step responses for the controllers found in (1) using MATLAB.
Problem 7.13 Consider the unity-feedback system with the following open-loop
transfer function. Let us design the lag controller so that the steady-state error for
the unit-ramp is less than 0.05.
G(s) = 1/(s(s + 10)²)
(1) Design the following lag controller so that the phase margin is over 65 degrees.
D(s) = K·(Ts + 1)/(αTs + 1),  α > 1
(2) Draw the unit-step responses for the controllers found in (1) using MATLAB.
Problem 7.14 Consider the unity-feedback system with the following open-loop
transfer function.
G(s) = 3/(s(s + 1)(0.5s + 1))
(1) Design the following lag controller so that the phase margin is over 45 degrees.
D(s) = (Ts + 1)/(αTs + 1),  α > 1
(2) Draw the unit-step responses for the controllers found in (1) using MATLAB.
Problem 7.15 Consider the following PID controller for the system in Figure 7P-1.
D(s) = K·(1 + 1/(Ti·s))·(1 + Td·s)
(1) Determine the controller gains of the above PID controller so that the phase margin
is over 70 degrees.
(2) Draw the unit-step responses for the controllers found in (1) using MATLAB.
Problem 7.16 Consider the unity-feedback system with the following open-loop
transfer function. Let us design the lag controller so that the steady-state error for
the unit-ramp is less than 0.005.
G(s) = 20/(s(0.1s + 1)(0.05s + 1))
(1) Design the controller in the form of two stages so that the phase margin is over 75
degrees.
(2) Draw the unit-step responses for the controllers found in (1) using MATLAB.
8. Control System Analysis in State-Space
In the previous chapters, analysis and design methods in the frequency-domain were dis-
cussed. In this and the next chapter, the analysis and design methods using the state equations
are discussed. The methods using the state equations may be called the methods in the state-
space. Since the state variable is a function of time, the state-space method can be regarded
as the time-domain method. When compared with the frequency-domain design methods,
the state-space design method may be regarded as a more systematic method. Before we dis-
cuss the control system design using state equations, the analysis methods in the state-space
are explained in this chapter.
8.1 Relationship between State-Equation and Transfer Function
Consider a system described by the following state equation and output equation.
ẋ = Ax + Bu    (8.1)
y = Cx + Du (8.2)
The state variable vector x is defined as follows:
x = [x1  x2  ···  xn]^T    (8.3)
The Laplace transform of the above state variable vector is denoted X(s) = L[x(t)] (8.4).
If we take the Laplace transform of the above state equation, we can obtain the following
equations.
sX(s) − x(0) = AX(s) + BU (s) (8.5)
Y (s) = CX(s) + DU (s) (8.6)
First, find X(s) from Eq. (8.5) as follows:
X(s) = (sI − A)^(−1)·x(0) + (sI − A)^(−1)·B·U(s)    (8.7)
By plugging the above equation into Eq. (8.6), we can obtain the following relationship.
Y(s) = C(sI − A)^(−1)·x(0) + [C(sI − A)^(−1)·B + D]·U(s)    (8.8)
With zero initial conditions x(0) = 0, the transfer function of the system is therefore
G(s) = Y(s)/U(s) = C(sI − A)^(−1)·B + D    (8.10)
Example 8.1 Let us find the transfer function of the system described by the following state equation.
ẋ = [0, 1; −2, −3]·x + [0; 1]·u    (8.12)
y = [1, 0]·x    (8.13)
First, let us find the transfer function of the system without using Eq. (8.10). We can
obtain the following equations from the above vector equation.
ẋ1 = x2
ẋ2 = −2x1 − 3x2 + u    (8.14)
With the assumption of zero initial conditions, if we take the Laplace transform
of the above equations, we can obtain the following relationships, where X1(s) = L[x1(t)] and X2(s) = L[x2(t)]: sX1(s) = X2(s) and sX2(s) = −2X1(s) − 3X2(s) + U(s) (8.15).
If we eliminate X2 (s) in the above simultaneous equations, we can obtain the following
equation.
s²·X1(s) = −2X1(s) − 3s·X1(s) + U(s)    (8.16)
From the above relationship, we can obtain the transfer function as follows:
(s² + 3s + 2)·X1(s) = U(s)    (8.17)
G(s) = Y(s)/U(s) = X1(s)/U(s) = 1/(s² + 3s + 2)    (8.18)
Secondly, let us use Eq. (8.10) to find the transfer function. (sI − A)^(−1) can be obtained as follows:
(sI − A)^(−1) = ([s, 0; 0, s] − [0, 1; −2, −3])^(−1) = [s, −1; 2, s + 3]^(−1) = (1/(s² + 3s + 2))·[s + 3, 1; −2, s]    (8.19)
Then, we can find the transfer function using Eq. (8.10) as follows:
G(s) = [1, 0]·(1/(s² + 3s + 2))·[s + 3, 1; −2, s]·[0; 1] = 1/(s² + 3s + 2)    (8.20)
MATLAB In the following MATLAB script, the MATLAB command ss(A,B,C,D) re-
turns the state-space model. We can use the MATLAB command tf to convert the
state-space model to the transfer function model.
A=[0 1;-2 -3];B=[0;1];C=[1 0];D=0;
ss(A,B,C,D)
tf(ss(A,B,C,D))
In Eq. (8.10), the degree of the denominator of C(sI − A)^(−1)·B is always higher than the degree of the numerator. Therefore, when D = 0, the denominator of the transfer function has a higher degree than the numerator. However, when D ≠ 0, the denominator of the transfer function has the same degree as the numerator. The following shows an example.
Example 8.2 Let us find the transfer function of the system described by the following state equation.
ẋ = [0, 1; −2, −3]·x + [0; 1]·u    (8.21)
y = [1, 0]·x + 4u    (8.22)
In this example, D = 4. The transfer function is as follows:
G(s) = [1, 0]·(1/(s² + 3s + 2))·[s + 3, 1; −2, s]·[0; 1] + 4 = 1/(s² + 3s + 2) + 4 = (4s² + 12s + 9)/(s² + 3s + 2)    (8.23)
As we can see in the above equation, the denominator of the transfer function has the
same degree as the numerator.
Since a physical system has a finite bandwidth, the denominator of the transfer function
has a higher degree than the numerator. Therefore, we can assume that D = 0 for physical
systems.
The transfer function can also be written in terms of the adjugate and determinant of (sI − A):
G(s) = C(sI − A)^(−1)·B + D = C·Adj(sI − A)·B / |sI − A| + D    (8.24)
As we can see in the above equation, the denominator of the transfer function is the deter-
minant of (sI − A), i.e., |sI − A| = det(sI − A). The system stability can be determined by the
locations of poles, which can be found by solving the characteristic equation. Since the char-
acteristic equation is obtained by letting the denominator of the transfer function be zero,
we have the following characteristic equation.
|sI − A| = 0    (8.25)
We can determine the stability by finding the root locations of the above characteristic equa-
tion.
Roots of Eq. (8.25) are also the eigenvalues of the matrix A. Suppose s satisfies the
following equation for a non-zero vector x.
Ax = sx (8.26)
Then, we have the following relationship.
(sI − A) x = 0 (8.27)
The value of s that satisfies the above equation is called an eigenvalue, and the vector x is
called an eigenvector. For Eq. (8.27) to hold for a non-zero vector x, Eq. (8.25) should be
satisfied. Therefore, the eigenvalues of the system matrix A are the poles of the system. In
order for a system described by the state equation to be stable, all the eigenvalues of the
system matrix A should be in the left-half plane.
Example 8.3 Consider the system described by ẋ = [0, 1; 2, −1]·x + [0; 1]·u, y = [1, 0]·x. The characteristic equation is |sI − A| = s² + s − 2 = (s − 1)(s + 2) = 0, and the poles are
s = 1, s = −2    (8.31)
Since one of the poles is in the right-half plane, the system is unstable.
MATLAB The MATLAB command zpk represents the system by zeros, poles, and the
gain. The MATLAB command eig returns the eigenvalues of a matrix. By running the
following MATLAB script, we can see that the eigenvalues of the system matrix match
the system poles.
A=[0 1;2 -1];B=[0;1];C=[1 0];D=0;
tf(ss(A,B,C,D))
zpk(ss(A,B,C,D))
eig(A)
Example 8.4 Consider the system in Example 8.1. Let us find the unit step response of the system with the assumption of zero initial conditions. We can obtain the following equations from the given vector equation.
ẋ1 = x2
ẋ2 = −2x1 − 3x2 + u    (8.32)
By eliminating the variable x2 in the above simultaneous equations, we can obtain the
following differential equation.
ẍ1 + 3ẋ1 + 2x1 = u    (8.33)
We can obtain the following equation by taking the Laplace transform of the above
equation. X1 (s) and U (s) are the Laplace transforms of x1 (t) and u(t), respectively.
Since the input is the unit step, U (s) = 1/s.
X1(s) = (1/(s² + 3s + 2))·U(s) = 1/((s² + 3s + 2)·s)    (8.34)
After applying the partial fraction expansion, we have the following equation.
X1(s) = 1/((s + 1)(s + 2)·s) = 1/(2s) − 1/(s + 1) + 1/(2(s + 2))    (8.35)
By taking the inverse Laplace transform of the above equation, we can obtain the fol-
lowing solutions for t ≥ 0.
y(t) = x1(t) = 1/2 − e^(−t) + (1/2)·e^(−2t)    (8.36)
The previous example shows how to find the response of the system represented by the
state equation. Since the previous example is a simple second-order system, the solution can
be found by the variable elimination method. However, for the high order systems, we need
a systematic method to find the responses of the systems. Therefore, we need to find the
general form of the solution to the state equations. In the first step of finding the solution,
let us assume u(t) = 0 in Eq. (8.1) and find the solution. Also, assume the x(0) is the initial
condition.
ẋ(t) = Ax(t) (8.38)
If the above equation is a scalar equation, the solution of the above differential equation is
as follows:
x(t) = eAt x(0) (8.39)
For example, if A = −2, the solution is x(t) = e−2t x(0). If the above equation is a vector
equation, the exponential function eAt in Eq. (8.39) should be a matrix function. We need to
define an exponential function in a matrix form as follows:
e^(At) = I + At + A²t²/2 + ··· + Aⁿtⁿ/n! + ···    (8.40)
The above matrix function is called the matrix exponential function. If we take the derivative
of the matrix exponential function, we have the following relationship.
d(e^(At))/dt = A + A²t + ··· + Aⁿtⁿ⁻¹/(n − 1)! + ···
= A·(I + At + A²t²/2 + ···) = A·e^(At)
= (I + At + A²t²/2 + ···)·A = e^(At)·A    (8.42)
Using the above relationship, we can easily see that d/dt[e^(At)·x(0)] = A·e^(At)·x(0), and therefore Eq. (8.39) is the solution of the vector state equation Eq. (8.38). The matrix exponential function e^(At) is also called the state transition matrix.
We can also find the solution of Eq. (8.38) using the Laplace transform. If we take the
Laplace transform of Eq. (8.38), we can obtain the following relationship.
sX(s) − x(0) = AX(s), i.e., X(s) = (sI − A)^(−1)·x(0)    (8.45)
By taking the inverse Laplace transform of the above equation, we can find the solution as follows:
x(t) = L^(−1)[X(s)] = L^(−1)[(sI − A)^(−1)·x(0)]    (8.46)
Since Eq. (8.46) and Eq. (8.39) are the solutions of the same differential equation, we can obtain the following relationship.
e^(At) = L^(−1)[(sI − A)^(−1)]    (8.47)
Eq. (8.47) is used to find the matrix exponential function or the transition matrix.
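MATLAB A minimal sketch (assuming the Symbolic Math Toolbox is available and that ilaplace is applied elementwise) that evaluates Eq. (8.47) for the system matrix of Example 8.1; expm gives a numerical value at a fixed time instant.
syms s
A=[0 1;-2 -3];
Phi=ilaplace(inv(s*eye(2)-A))    % e^(At) from Eq. (8.47)
expm(A*0.5)                      % numerical value of e^(At) at t = 0.5 sec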
Next, let us find the solution of Eq. (8.1) for u(t) ≠ 0. Rearranging Eq. (8.1) gives us the
following relationship.
ẋ(t) − Ax(t) = Bu(t) (8.49)
If we multiply both sides of the above relationship by e^(−At), we can obtain the following relationship.
e^(−At)·[ẋ(t) − Ax(t)] = e^(−At)·B·u(t)    (8.50)
The left-hand side equals d/dt[e^(−At)·x(t)]; integrating both sides from the initial time 0 to t and multiplying the result by e^(At) gives the complete solution
x(t) = e^(At)·x(0) + ∫₀ᵗ e^(A(t−τ))·B·u(τ) dτ    (8.61)
Example 8.5 Let us find the unit step response of the system in Example 8.1 using Eq. (8.61). Assume that the initial time is zero. Also, let us assume zero initial conditions. The coefficient matrices for the state equation are as follows:
A = [0, 1; −2, −3],  B = [0; 1],  C = [1, 0],  D = 0    (8.62)
Evaluating Eq. (8.61) with the above matrices and a unit step input gives x(t) = [1/2 − e^(−t) + (1/2)·e^(−2t); e^(−t) − e^(−2t)] (8.65). From this expression, we can see that the following relationship holds.
ẋ1(t) = d/dt·[1/2 − e^(−t) + (1/2)·e^(−2t)] = e^(−t) − e^(−2t) = x2(t)    (8.66)
The above relationship coincides with the definitions of the state variables. From Eq.
(8.65), we can find the unit step response as follows:
y(t) = [1, 0]·[1/2 − e^(−t) + (1/2)·e^(−2t); e^(−t) − e^(−2t)] = 1/2 − e^(−t) + (1/2)·e^(−2t),  t ≥ 0    (8.67)
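MATLAB A minimal sketch comparing the analytical step response of Eq. (8.67) with a numerical simulation:
A=[0 1;-2 -3];B=[0;1];C=[1 0];D=0;
t=0:0.01:5;
y=step(ss(A,B,C,D),t);                         % numerical unit step response
plot(t,y,t,0.5-exp(-t)+0.5*exp(-2*t),'--')     % analytical solution, Eq. (8.67)
legend('simulation','Eq. (8.67)'); grid on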
8.4 Controllability
In the previous chapters, the controller design methods using the transfer function are ex-
plained. Similarly, we also can design controllers using state equations. Before we consider
the controller design using the state equations, we need to investigate the properties of the
state equations describing the system. Among the properties of the state equations, control-
lability is the most important one. The concept of controllability can be applied to either
states or systems. When we can control a state variable using the control signal, we can say
that the state is controllable. In other words, the controllable state can be influenced by the
control signal. As a simple example, consider the system in Figure 8.1.
In Figure 8.1, the control signal u can control the state variable x1 , and we can say that the
state x1 is controllable. However, the state x2 cannot be controlled by the control signal; the
state variable x2 is not controllable. When all the state variables are controllable, the system
is controllable. Obviously, the system in Figure 8.1 is not controllable since one of the state
variables is not controllable.
The controllability of the state equation Eq. (8.1) can be determined using the controlla-
bility matrix defined in the following equation, where n is the order of the system.
MC = [B  AB  A²B  ···  Aⁿ⁻¹B]    (8.68)
When the input to the system is a single variable, the dimension of the matrix B is n × 1, and
the above matrix is a square matrix with the dimension of n × n. If the inverse matrix of the
controllability matrix exists, the system is controllable. In other words, if the determinant
of the controllability matrix MC is not zero, the system is controllable. On the other hand,
if the determinant of the controllability matrix MC is zero, the system is uncontrollable. We
can easily show that the controllability matrix of the controllable canonical form is always
invertible and, in turn, the controllable canonical form is always controllable.
Example 8.6 Let us determine the controllability of the system in Figure 8.1. The state equation for the system is as follows:
[ẋ1; ẋ2] = [−3, 1; 0, −4]·[x1; x2] + [1; 0]·u    (8.69)
y = [1, 0]·[x1; x2]    (8.70)
The controllability matrix of the system is as follows:
MC = [B  AB] = [1, −3; 0, 0]    (8.71)
Since the determinant of the above matrix is zero, the controllability matrix is not invertible. Therefore, the system is uncontrollable.
MATLAB Using the following MATLAB commands, we can calculate the determi-
nant of the controllability matrix of the system in this example.
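A minimal sketch of such commands (following the pattern used in Example 8.9; the built-in ctrb command could be used instead of forming the matrix by hand):
A=[-3 1;0 -4];B=[1;0];C=[1 0];
Mc=[B A*B]
det(Mc)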
Example 8.7 Let us determine the controllability of the system in Figure 8.2. The state equation of the system is as follows:
[ẋ1; ẋ2] = [−3, 1; 0, −4]·[x1; x2] + [0; 1]·u    (8.72)
y = [1, 0]·[x1; x2]    (8.73)
x2
The controllability matrix of the system is as follows:
MC = [B  AB] = [0, 1; 1, −4]    (8.74)
Since the determinant of the above matrix is −1, the controllability matrix is invert-
ible and, in turn, the system is controllable. While the state variable x2 in Figure 8.1
cannot be influenced by the control signal, all the state variables in Figure 8.2 can be
controlled by the control signal.
Example 8.8 Let us find the value of the constant a so that the system in Figure 8.3 is uncontrollable. The state equation of the system is as follows:
[ẋ1; ẋ2] = [−3, 0; 0, −a]·[x1; x2] + [1; 1]·u    (8.75)
y = [1, 1]·[x1; x2]    (8.76)
The controllability matrix is as follows:
MC = [B  AB] = [1, −3; 1, −a]    (8.77)
The determinant of MC is −a + 3, so the system becomes uncontrollable when a = 3.
The controllability of the state equation is the property concerning the relationship be-
tween the state variables and the control input. Note that the controllability is determined
by the system matrix A and the input matrix B. The output matrix C is not related to the
controllability. In the next chapter, the relationship between the controllability and the con-
troller design using the state equation is discussed.
8.5 Observability
Since the state variables are internal variables, there may be state variables that cannot be
observed or measured. When we need to observe the state variables which are not available
for measurement, we may be able to form the state estimator, which is discussed in the next
chapter. The state estimator is also called the state observer. In order to estimate the state
variables using the state estimator, the system has to be observable. Observability is the property that indicates whether the internal states can be inferred from the output measurements. Along with controllability, observability is one of the most important properties of the system.
The concept of observability can be applied to either states or systems. A state is said
to be observable if we can find the information about the state from the output signal. As a
simple example, consider the system in Figure 8.4.
In Figure 8.4, since we can find the information about the state x2 from the output signal, the
state variable x2 is observable. However, since there is no signal path from x1 to the output
y, it is not possible to find the information about the state x1 using the output signal. The
state variable x1 is said to be unobservable. When all the state variables are observable, the
system is observable. Obviously, the system in Figure 8.4 is not observable since one of the
state variables is not observable.
The observability of the state equation Eq. (8.1),(8.2) can be determined using the ob-
servability matrix defined in the following equation, where n is the order of the system.
MO = [C; CA; CA²; ⋮; CAⁿ⁻¹]    (8.78)
When the output of the system is a single variable, the dimension of the matrix C is 1 × n,
and the above matrix is a square matrix with the dimension of n × n. If the inverse matrix of
the observability matrix exists, the system is observable. In other words, if the determinant
of the observability matrix MO is not zero, the system is observable. On the other hand,
if the determinant of the observability matrix MO is zero, the system is unobservable. We
can easily show that the observability matrix of the observable canonical form is always
invertible and, in turn, the observable canonical form is always observable.
Example 8.9 Let us determine the observability of the system in Figure 8.4. The state equation of the system is as follows:
[ẋ1; ẋ2] = [−3, 1; 0, −4]·[x1; x2] + [0; 1]·u    (8.79)
y = [0, 1]·[x1; x2]    (8.80)
The observability matrix of the system is as follows:
MO = [C; CA] = [0, 1; 0, −4]    (8.81)
Since the determinant of the above matrix is zero, the observability matrix is not invertible. Therefore, the system is unobservable.
MATLAB Using the following MATLAB commands, we can calculate the determinant
of the observability matrix of the system in this example.
A=[-3 1;0 -4];B=[0;1];C=[0 1];
Mo=[C;C*A]
det(Mo)
Example 8.10 Let us determine the observability of the system in Example 8.7. The observability matrix of the system is as follows:
MO = [C; CA] = [1, 0; −3, 1]    (8.82)
Since the determinant of the observability matrix MO is 1, MO is invertible, and the system is observable.
The observability of the state equation is the property concerning the relationship be-
tween the state variables and the output variable. Note that the observability is determined
by the system matrix A and the output matrix C. The input matrix B is not related to the
observability. In the next chapter, the relationship between the observability and the state
estimator design is discussed.
8.6 State Variable Transformation
The choice of state variables for a given system is not unique. Given the state equation ẋ = Ax + Bu (8.83) with output y = Cx + Du (8.84), we can define new state variables z through an invertible transformation matrix P.
x = P·z    (8.85)
z = P^(−1)·x    (8.86)
In the above relationships, the matrix P is called the transformation matrix. By plugging Eq.
(8.86) into Eqs. (8.83) and (8.84), we have the following relationships.
ż = P^(−1)·ẋ = P^(−1)·(Ax + Bu) = P^(−1)AP·z + P^(−1)B·u    (8.87)
y = CP·z + Du    (8.88)
Let us define the following constant matrices.
Az = P −1 AP , Bz = P −1 B, Cz = CP , Dz = D (8.89)
If we use the above definition, we can rewrite Eqs. (8.87) and (8.88) as follows:
ż = Az·z + Bz·u    (8.90)
y = Cz·z + Dz·u    (8.91)
As we can see, the above state equation is a new state equation, and we can find an infinite
number of state equations by choosing different transformation matrices P ’s.
As a simple example, consider the system in Figure 8.5.
The following is the relationship between the original and new state variables.
z = [z1; z2] = P^(−1)·x = [1, −1; 0, 1]^(−1)·[x1; x2] = [1, 1; 0, 1]·[x1; x2] = [x1 + x2; x2]    (8.102)
As we can see, the new state variable z1 is the sum of x1 and x2 , and z2 is the same as x2 .
Obtaining the new state variables using the relationship Eq. (8.85) is to find the new state
variables by the linear combination of the original state variables. The condition that the
transformation matrix P is invertible is necessary for the new state variables to be linearly
independent. As a simple example, consider the following relationship.
[z1; z2] = [1, 1; 2, 2]·[x1; x2] = [x1 + x2; 2x1 + 2x2]    (8.103)
Obviously, z1 and z2 are not linearly independent, and the transformation matrix is not
invertible. With the above new state variables, we cannot form a new state equation.
Forming a new state equation using Eq. (8.85) does not change the properties of the
system. It is mentioned that the eigenvalues of the system matrix are the same as the poles
of the system and determine the stability of the system. Changing the state variables using
the state transformation matrix does not change the eigenvalues. The eigenvalues of the new
system matrix Az can be found by solving the following characteristic equation.
|sI − Az | = sI − P −1 AP = sP −1 P − P −1 AP
(8.104)
= P −1 (sI − A)P = P −1 |sI − A| |P | = |sI − A| = 0
As we can see in the above relationships, since the characteristic equation for the new state
equation is the same as the original one, the change of the state variables does not change the
eigenvalues of the system. Also, the transfer function is invariant under the state variable
transformation, as we can see in the following relationships.
G(s) = Cz(sI − Az)^(−1)·Bz + Dz = CP·(sI − P^(−1)AP)^(−1)·P^(−1)B + D = CP·P^(−1)(sI − A)^(−1)P·P^(−1)B + D = C(sI − A)^(−1)·B + D    (8.105)
The controllability and the observability of the system are not affected by the state vari-
able transformation. The following is the controllability matrix of the new state equation
obtained by the state variable transformation.
M̄C = [Bz  AzBz  Az²Bz  ···  Azⁿ⁻¹Bz]
= [P^(−1)B  (P^(−1)AP)P^(−1)B  (P^(−1)AP)²P^(−1)B  ···  (P^(−1)AP)ⁿ⁻¹P^(−1)B]
= P^(−1)·[B  AB  A²B  ···  Aⁿ⁻¹B] = P^(−1)·MC    (8.106)
Since P^(−1) is invertible, the controllability of the system is not affected by the state variable transformation.
Similarly, the observability matrix of the new state equation obtained by the state vari-
able transformation is as follows:
M̄O = [Cz; CzAz; CzAz²; ⋮; CzAzⁿ⁻¹] = [CP; CAP; CA²P; ⋮; CAⁿ⁻¹P] = [C; CA; CA²; ⋮; CAⁿ⁻¹]·P = MO·P    (8.108)
Since |P| ≠ 0, the observability of the system is not affected by the state variable transformation.
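MATLAB A minimal sketch verifying these invariance properties for the system of Example 8.1; the transformation matrix P below is an arbitrary invertible matrix chosen only for illustration.
A=[0 1;-2 -3];B=[0;1];C=[1 0];D=0;
P=[1 1;0 1];                          % illustrative invertible transformation matrix
Az=inv(P)*A*P; Bz=inv(P)*B; Cz=C*P;
eig(A), eig(Az)                       % identical eigenvalues
tf(ss(A,B,C,D)), tf(ss(Az,Bz,Cz,D))   % identical transfer functions
det(ctrb(A,B)), det(ctrb(Az,Bz))      % controllability unchanged (nonzero determinants)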
We can always change a controllable state equation to the controllable canonical form. In
other words, we can always find a transformation matrix P , which enables us to change the
state equation to the controllable canonical form using the relationship Eq. (8.85). Similarly,
we can always change an observable state equation to the observable canonical form. In
other words, we can always find a transformation matrix P , which enables us to change the
state equation to the observable canonical form using the relationship Eq. (8.85).
The following procedure shows how to find the state transformation matrix P for the
controllable canonical form. Consider the following state equation, which is assumed to be
controllable.
ẋ = Ax + Bu (8.110)
Let us find the state transformation matrix P , which transforms the above state equation to
the following controllable canonical form.
[ẋ1; ẋ2; ⋮; ẋn] = [0, 1, 0, ···, 0; 0, 0, 1, ···, 0; ⋮; −a0, −a1, −a2, ···, −a(n−1)]·[x1; x2; ⋮; xn] + [0; 0; ⋮; 1]·u    (8.111)
Let p1, p2, ..., pn denote the rows of P^(−1). Matching the input matrix of the controllable canonical form requires
P^(−1)·B = [p1·B; p2·B; ⋮; pn·B] = [0; 0; ⋮; 1]    (8.114)
and matching the system matrix requires
p2 = p1·A
p3 = p2·A = p1·A²
⋮
pn = p(n−1)·A = p1·Aⁿ⁻¹    (8.115)
Plugging Eq. (8.115) into Eq. (8.114) gives us the following relationships.
p1·B = 0
p2·B = p1·AB = 0
⋮
p(n−1)·B = p(n−2)·AB = p1·Aⁿ⁻²B = 0
pn·B = p(n−1)·AB = p1·Aⁿ⁻¹B = 1    (8.116)
The above relationships can be written compactly as p1·[B  AB  ···  Aⁿ⁻¹B] = p1·MC = [0  0  ···  0  1]. Since the controllability matrix MC is invertible, we can find the following relationship for p1.
p1 = [0  0  ···  0  1]·MC^(−1)    (8.118)
By plugging Eq. (8.118) into Eq. (8.115), we can obtain P^(−1) as follows:
P^(−1) = [p1; p1·A; p1·A²; ⋮; p1·Aⁿ⁻¹]    (8.119)
As we can see in the above procedure, we can always transform a controllable state equation
to the controllable canonical form. The procedure for finding the state transformation matrix
for the observable canonical form is similar, and it is left as an exercise.
Example 8.11 Let us find the state transformation matrix which transforms the following state equation to the controllable canonical form.
ẋ = Ax + Bu = [−3, 0, 0; 0, −4, 0; 0, 0, −5]·x + [1; 1; 1]·u    (8.120)
y = Cx = [1, 1, 1]·x    (8.121)
For this system, MC = [1, −3, 9; 1, −4, 16; 1, −5, 25], which gives p1 = [0 0 1]·MC^(−1) = [0.5, −1, 0.5] and P^(−1) = [0.5, −1, 0.5; −1.5, 4, −2.5; 4.5, −16, 12.5]. Using this transformation matrix, we can find the following matrices for the controllable canonical form.
P^(−1)AP = [0, 1, 0; 0, 0, 1; −60, −47, −12]    (8.125)
P^(−1)B = [0; 0; 1]    (8.126)
CP = [47, 24, 3]    (8.127)
The transfer function of the above system is as follows:
G(s) = (3s² + 24s + 47)/(s³ + 12s² + 47s + 60)    (8.128)
The transfer function conforms to the above matrices for the controllable canonical
form.
MATLAB Using the following MATLAB commands, we can find the matrices in this
example.
A=[-3 0 0;0 -4 0;0 0 -5];B=[1;1;1];C=[1 1 1];
Mc=[B A*B A*A*B]
p1=[0 0 1]*inv(Mc)
Pinv=[p1;p1*A;p1*A*A]
Pinv*A*inv(Pinv)
Pinv*B
C*inv(Pinv)
8.7 Relationship between Transfer Functions and Controllability/Observability
As an example, consider the system of Eq. (8.129), whose transfer function G(s) = (s + 2)/(s² + 5s + 6) has the common factor (s + 2) in its numerator and denominator. Its controllable canonical form has A = [0, 1; −6, −5], B = [0; 1], C = [2, 1], and the controllability matrix is MC = [B  AB] = [0, 1; 1, −5].
Obviously, the controllability matrix is invertible. Next, let us find the observability matrix
of the state equation. The following is the observability matrix MO of the above controllable
canonical form.
MO = [C; CA] = [2, 1; −6, −3]    (8.134)
The determinant of MO is zero, and the state equation is not observable. The reason for not
being observable is that we have a canceled factor in the transfer function, and we cannot
see the canceled state from the output signal.
Next, let us find the observable canonical form of the system Eq. (8.129). The matrices
for the observable canonical form of the system are as follows:
Ao = [0, −6; 1, −5]    (8.135)
Bo = [2; 1]    (8.136)
Co = [1, 0]    (8.137)
The observability matrix MO is as follows:
MO = [Co; Co·Ao] = [1, 0; 0, −6]    (8.138)
Obviously, the observability matrix is invertible. Next, let us find the controllability matrix
of the state equation. The following is the controllability matrix MC of the above observable
canonical form.
MC = [Bo  Ao·Bo] = [2, −6; 1, −3]    (8.139)
The determinant of MC is zero, and the state equation is not controllable.
In the above example, the numerator and denominator of the transfer function of the
system have a common factor. As we can see in the example, the controllable canonical
form is not observable, and the observable canonical form is not controllable. In general, in
order for a system to be controllable and observable, the numerator and denominator of the
transfer function of the system should not have a common factor.
8.8 Lab 8
In this lab, we use the same analog dynamic simulator as in sections 4.6.2 and 5.5.1. We find
the state equation of the dynamic simulator and capture the state variables to observe their
behavior during the transient period. First, let us define the state variables of the dynamic
simulator as in Figure 8.6.
Note that the state variable x2 is the output signal of the input stage OP amp. Let X1 (s),X2 (s),
and U (s) be the Laplace transforms of x1 , x2 , and u, respectively. Then, we have the following
relationships.
X1(s)/X2(s) = −10/s    (8.140)
X2(s)/U(s) = −1/(0.1s + 1)    (8.141)
Rearranging the terms gives us the following relationships: ẋ1(t) = −10·x2(t) and ẋ2(t) = −10·x2(t) − 10·u(t).
In vector form, the state equation for the analog dynamic simulator is as follows:
[ẋ1(t); ẋ2(t)] = [0, −10; 0, −10]·[x1(t); x2(t)] + [0; −10]·u(t)
y(t) = [1, 0]·[x1(t); x2(t)]    (8.146)
Since the above analog dynamic simulator is an unstable system, we have to stabilize the
system using a feedback controller to observe the response. Let us use the following simple
P-controller and observe the step response.
u(t) = Kp·(r(t) − y(t))    (8.147)
With the above P-controller, the state equation of the closed-loop system is as follows:
[ẋ1(t); ẋ2(t)] = [0, −10; 0, −10]·[x1(t); x2(t)] + [0; −10]·Kp·(r(t) − [1, 0]·[x1(t); x2(t)])    (8.148)
The following MATLAB code simulates the unit step response of the closed-loop system.
Figure 8.7 shows the MATLAB simulation responses of the state variables x1 and x2 .
A=[0 -10;0 -10];B=[0;-10];C=[1 0];D=0;
Kp=5;
t=0:0.001:2;
G=ss(A,B,C,D);
Gc=ss(A-B*Kp*C,Kp*B,C,D);
R=ones(size(t));
[y,t,x]=lsim(Gc,R,t);
subplot(2,1,1)
plot(t,x(:,1))
grid
axis([0 2 0 2])
ylabel('x1');
subplot(2,1,2)
plot(t,x(:,2))
grid
axis([0 2 -2 2])
ylabel('x2');xlabel('time(sec)');
Code 8.1
The implementation code for the Discovery board is almost the same as in section 5.5.1,
except that we need an additional A/D converter channel for the state variable x2 . Follow
the same setup procedure as in section 5.5.1. Then, enable ADC3, as shown in Figure 8.8.
We cannot select ADC2, since pins for ADC2 are not available.
Replace Code 5.2, Code 5.4, Code 5.6, and Code 5.8 with Code 8.2, Code 8.3, Code 8.4, and
Code 8.5, respectively. Changes in the code are to capture the state variable x2 and save it in
the array.
/* USER CODE BEGIN PV */
float Kp,control;
volatile int32_t x1,x2,ref,interrupt_counter,sampling_frequency,data_counter;
int16_t data[4000],data2[4000];
volatile uint8_t data_flag=0,data_done=0;
/* USER CODE END PV */
Code 8.2
Code 8.3
Code 8.4
}
}
x2 = sum/20 - 2048;
interrupt_counter++;
if (interrupt_counter >= sampling_frequency*4) {
interrupt_counter=0;
if (data_flag==1) {
data_counter=0;
data_flag=2;
}
ref=205;
}
if (interrupt_counter >= sampling_frequency*2) {
ref=0;
}
if (data_flag==2) {
if (data_counter<sampling_frequency*4) {
data[data_counter]=(int16_t)x1;
data2[data_counter]=(int16_t)x2;
data_counter++;
}
else {
data_done=1;
}
}
control = Kp*(float)(ref-x1);
if (control > 2047) control = 2047;
if (control < -2048) control = -2048;
da_value = control + 2048;
HAL_DAC_SetValue(&hdac, DAC_CHANNEL_2, DAC_ALIGN_12B_R, (uint32_t)(da_value));
}
/* USER CODE END Callback 0 */
Code 8.5
MATLAB code in Code 8.6 draws the responses of the state variables x1 and x2 using the
captured data from the experiment. Figure 8.9 shows the state variable responses of the
analog dynamic simulator.
clear
sf=1000;
load data
for i=1:4*sf
t(i)=data(i,1)/sf;
x1(i)=data(i,2)*10/2048;
x2(i)=data(i,3)*10/2048;
end
subplot(2,1,1);
plot(t,x1)
axis([0 2 0 2])
ylabel('x1');
grid on
subplot(2,1,2);
plot(t,x2)
axis([0 2 -2 2])
ylabel('x2');xlabel('time(sec)');
grid on
Code 8.6
Problem
Problem 8.1 Find the transfer function of the system represented by the following
state equations.
" # " #
−3 −2 1 h i
(a) ẋ = x+ u, y = 0 1 x
1 0 0
" # " #
0 1 0 h i
(b) ẋ = x+ u, y = 0 1 x
−2 −3 1
" # " #
0 1 0 h i
(c) ẋ = x+ u, y = 1 1 x
−2 −3 1
Problem 8.2 Determine the stability of the following systems by finding the eigenvalues of the system matrix.

(a) \dot{x} = \begin{bmatrix} -2 & 0 \\ 1 & -3 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u, \quad y = \begin{bmatrix} 0 & 1 \end{bmatrix} x

(b) \dot{x} = \begin{bmatrix} 0 & 1 \\ 6 & -1 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u, \quad y = \begin{bmatrix} 0 & 1 \end{bmatrix} x

(c) \dot{x} = \begin{bmatrix} -3 & -2 \\ 1 & 0 \end{bmatrix} x + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u, \quad y = \begin{bmatrix} 0 & 1 \end{bmatrix} x
Problem 8.3 Find the unit step response of the system represented by the following
state equations.
(a) ẋ = −2x + u, y = x
" # " #
0 1 0 h i
(b) ẋ = x+ u, y = 1 1 x
−2 −3 1
" # " #
−3 −2 1 h i
(c) ẋ = x+ u, y = 0 1 x
1 0 0
Problem 8.4 Consider the systems with the following transfer functions.
(a) G(s) = \frac{3}{s^2 + 2s + 3} \qquad (b) G(s) = \frac{3s + 1}{s^2 + 2s + 3} \qquad (c) G(s) = \frac{s^2 + 4s + 5}{s^2 + 2s + 3} \qquad (d) G(s) = \frac{1}{s^3}
(1) Find the controllable canonical form.
(2) Find the observable canonical form.
(3) Find the transfer functions of the state equations obtained in part (1) and (2).
Check if they are the same as the transfer functions given above.
Problem 8.5 Consider the system with the following transfer function.

G(s) = \frac{1}{s^4 + 7s^3 + 16s^2 + 18s + 8}
(1) Find the poles of the system using MATLAB.
(2) Find the controllable canonical form of this system.
(3) Find the eigenvalues of the system matrix obtained in part (2) using MATLAB, and
check if they are the same as the poles found in part (1).
Problem 8.6 Find the eigenvalues and the matrix exponential function e^{At} of the following system matrices. Discuss the relationship between the eigenvalues and the matrix exponential function.

(a) A = \begin{bmatrix} -3 & 0 \\ 0 & -4 \end{bmatrix} \qquad (b) A = \begin{bmatrix} -2 & -1 \\ 0 & -3 \end{bmatrix} \qquad (c) A = \begin{bmatrix} 1 & 2 \\ 3 & 0 \end{bmatrix}
ẋ = Ax + Bu
y = Cx + Du
Problem 8.8 Determine the controllability of the systems with the following system
and input matrices.

(1) A = \begin{bmatrix} 0 & 1 \\ -5 & -10 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \end{bmatrix}

(2) A = \begin{bmatrix} 0 & 1 \\ -5 & -10 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}

(3) A = \begin{bmatrix} 1 & 2 & 0 \\ 0 & 4 & 0 \\ 1 & 2 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}

(4) A = \begin{bmatrix} 1 & 2 & 0 \\ 0 & 4 & 0 \\ 1 & 2 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}
Problem 8.9 Determine the observability of the systems with the following system
and output matrices.

(1) A = \begin{bmatrix} 0 & 1 \\ -5 & -10 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix}

(2) A = \begin{bmatrix} 0 & 1 \\ -5 & -10 \end{bmatrix}, \quad C = \begin{bmatrix} 0 & 1 \end{bmatrix}

(3) A = \begin{bmatrix} 1 & 2 & 0 \\ 0 & 4 & 0 \\ 1 & 2 & 0 \end{bmatrix}, \quad C = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix}

(4) A = \begin{bmatrix} 1 & 2 & 0 \\ 0 & 4 & 0 \\ 1 & 2 & 0 \end{bmatrix}, \quad C = \begin{bmatrix} 0 & 1 & 1 \end{bmatrix}
Problem 8.10 Find the controllable canonical form of the following system and show
that it is controllable.
G(s) = \frac{1}{s^3 + a_2 s^2 + a_1 s + a_0}
Problem 8.11 Find the observable canonical form of the following system and show
that it is observable.
G(s) = \frac{1}{s^3 + a_2 s^2 + a_1 s + a_0}
Problem 8.12 Consider the system represented by the following state equation.

\dot{x} = Ax + Bu
y = Cx + Du

where

A = \begin{bmatrix} 1 & 2 & 0 \\ 0 & 4 & 0 \\ 1 & 2 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}, \quad D = 0
Find the transformation matrix P , which converts the above state equation to the con-
trollable canonical form. Find the controllable canonical form of the system.
Problem 8.13 If the following state equation is observable, derive the state transfor-
mation matrix P , which converts the following equation to an observable canonical
form.
ẋ = Ax + Bu
y = Cx + Du
ẋ = Ax + Bu
y = Cx + Du
ẋ = Ax + Bu
y = Cx + Du
Chapter 9
Control System Design in State-Space

9.3.1 State Estimator Based Controller with Zero Reference Input
9.3.2 State Estimator Based Controller with Non-Zero Reference Input
9.5.2 State Estimator Based Controller for an Analog Dynamic Simulator
9.5.3 State Estimator Based Controller for a Magnetic Levitation System
In this chapter, controller design methods using the state equation are discussed. Since the
state equation is the dynamic equation with the time variable, the method may be called the
time-domain controller design. The design procedure is in three steps.
• Step 1: Assume all the state variables are available for measurements. Design the state
feedback controller by feeding back all the state variables.
• Step 2: In the real system, some state variables cannot be measured. Design the state
estimator to estimate the state variables which are not available for measurements.
• Step 3: Form the estimator-based controller by combining the state feedback controller
and the state estimator.
In the above steps, it can be shown that the state feedback controller and the state estima-
tor can be designed independently. The state-space design method is relatively systematic
and reduces trial and error in the controller design procedures. When compared with the
frequency-domain method, the state space method has pros and cons. Both methods have
the same goal to design a good control system. The control system designer has to decide
which approach is better in the given circumstances.
The state feedback controller is the sum of the state variables multiplied by constant feedback gains,

u = -Kx    (9.3)

We can obtain the following equation by plugging Eq. (9.3) into the state equation \dot{x} = Ax + Bu.
ẋ = Ax − BKx = (A − BK)x (9.4)
Figure 9.1 shows the block diagram of the system with the state feedback controller.
Eq. (9.4) is the state equation of the closed-loop system with the state feedback controller.
The dynamic behavior of this closed-loop system is determined by the eigenvalues of the
closed-loop system matrix A − BK. These eigenvalues can be found by solving the following
characteristic equation.
|sI − (A − BK)| = 0 (9.5)
The easiest way of designing the state feedback controller is to determine the feedback gain
K so that the solutions of Eq. (9.5) match the desired closed-loop pole locations. In other
words, determine K which satisfies the following equation for the desired poles r_1, r_2, \cdots, r_n.

|sI - (A - BK)| = (s - r_1)(s - r_2) \cdots (s - r_n) = 0    (9.6)

If the order of the system is not high, we can easily find K by simply comparing coefficients
of the above equation.
Example 9.1 Example 7.1 shows a simple PD controller. In this example, let us find
the state feedback controller for the system in Example 7.1 so that the closed-loop pole
locations are the same as in the closed-loop system in Example 7.1. The following is
the transfer function of the controlled plant.
G(s) = \frac{1}{s(s + 1)}    (9.7)
The characteristic equation of the closed-loop system with the PD controller is as fol-
lows:
1 + D(s)G(s) = 1 + \frac{100(1 + 0.1s)}{s(s + 1)} = 0    (9.9)

s^2 + 11s + 100 = 0    (9.10)
The roots of the above characteristic equation are −5.5 ± j8.35. Let us find the state
feedback controller so that the closed-loop system has the same poles. The following
is the state equation obtained from the transfer function.
" # " #
0 1 0
ẋ = Ax + Bu = x+ u (9.11)
0 −1 1
The following is the characteristic equation of the closed-loop system with the state
feedback controller.
" # " # " # !
s 0 0 1 0 h i
|sI − (A − BK)| = − − k1 k2
0 s 0 −1 1
" # (9.12)
s −1
= s2 + (1 + k2 ) s + k1
=
k1 s + 1 + k
2
By comparing Eq. (9.12) with Eq. (9.10), we can find the following feedback gains.
k1 = 100, k2 = 10 (9.13)
Figure 9.2 shows the block diagram of the closed-loop system. Figure 9.3 shows the
state variables responses for the initial conditions x1 (0) = 1, x2 (0) = 0. As expected, all
the state variables converge to zero.
MATLAB Using the following MATLAB code, we can draw the responses in this ex-
ample.
clf
A=[0 1;0 -1];B=[0;1];C=[1 0];D=0;
K(1)=100;K(2)=10;
Acl=A-B*K;
h=ss(Acl,B,C,D);
t = 0:0.001:4;
ref=zeros(size(t));
[ys ts xs]=lsim(h,ref,t,[1;0]);
plot(ts,xs(:,1),'-b',ts,xs(:,2),'--b','LineWidth',3);
axis([0 2 -6 2])
set(gca,'GridLineStyle','-','FontName','times','FontSize',18)
xlabel('Time(sec)');
ylabel('Output')
grid on
Figure 9.2 Closed-loop system in Example 9.1 with the state feedback controller
If the order of the system is not high, we can find the feedback gains by comparing the
coefficients of the characteristic equation. However, we need a systematic way to find the
state feedback gains for higher-order systems.
First, let us find the state feedback gains for the following controller canonical form. We
will consider the general case later.
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \vdots \\ \dot{x}_n \end{bmatrix} =
\begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} +
\begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix} u    (9.15)
Note that the entries in the last row of the system matrix of the controller canonical form are the same as the coefficients of the characteristic equation except for the negative signs. Therefore, we can easily see that the characteristic equation of Eq. (9.15) is as follows:

s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0 = 0    (9.16)
Let us assume that the following is the desired characteristic equation of the closed-loop system with the state feedback controller.

\alpha_c(s) = s^n + \alpha_{n-1}s^{n-1} + \cdots + \alpha_1 s + \alpha_0 = 0    (9.17)
By plugging Eq. (9.3) into the state equation, we can obtain the following closed-loop state
equation.
\dot{x} = (A - BK)x
= \left( \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}\begin{bmatrix} k_1 & k_2 & \cdots & k_n \end{bmatrix} \right) x    (9.18)
= \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ -a_0 - k_1 & -a_1 - k_2 & -a_2 - k_3 & \cdots & -a_{n-1} - k_n \end{bmatrix} x
Since the above state equation is also in the controllable canonical form, we can easily find the closed-loop characteristic equation as follows:

s^n + (a_{n-1} + k_n)s^{n-1} + \cdots + (a_1 + k_2)s + (a_0 + k_1) = 0    (9.19)
By comparing Eq. (9.19) with Eq. (9.17), we can find the state feedback gains as follows:
k_1 = \alpha_0 - a_0
k_2 = \alpha_1 - a_1
\quad\vdots    (9.20)
k_n = \alpha_{n-1} - a_{n-1}
Now, let us consider the state feedback controller for the general case. In the previ-
ous section, it is shown that any controllable system can be transformed to the controllable
canonical form using the state variable transformation. We can use the relationship Eq.
(9.20) to find the state feedback gains for the transformed controllable canonical form. Since
the feedback gains obtained using Eq. (9.20) are not for the original system, we need to
change the gains for the original state equation. Since the above process is rather complex,
we use the following Ackermann’s formula to find the state feedback gains.
K = \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix} M_C^{-1}\, \alpha_c(A)    (9.21)

In the above formula, \alpha_c(A) is obtained by replacing the variable s with the system matrix A in the desired closed-loop characteristic polynomial, that is, \alpha_c(A) = A^n + \alpha_{n-1}A^{n-1} + \cdots + \alpha_1 A + \alpha_0 I.
If the controllability matrix MC is invertible, we can always find K using Eq. (9.21). There-
fore, if the system is controllable, we can always find the state feedback controller, which
enables us to place the closed-loop poles anywhere.
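As a quick illustration of Eq. (9.21), the following MATLAB sketch computes the gain both directly from the formula and with the built-in acker command; the third-order system and the pole locations are chosen arbitrarily for the check.

A=[0 1 0;0 0 1;-6 -11 -6]; B=[0;0;1];   % an arbitrary controllable system
p=[-2 -3 -4];                           % desired closed-loop poles
alpha=poly(p);                          % coefficients of alpha_c(s)
Mc=ctrb(A,B);                           % controllability matrix
K1=[0 0 1]*inv(Mc)*polyvalm(alpha,A)    % Ackermann's formula, Eq. (9.21)
K2=acker(A,B,p)                         % MATLAB's implementation gives the same gains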
Example 9.2 Let us design the state feedback controller for the following system.
" # " #
0 1 0
ẋ = x+ u (9.25)
−1 0 1
h i
y= 1 0 x (9.26)
Since the poles of the above system are at ±j, the response of the system has a sus-
tained oscillation. Let us design the state feedback controller so that the closed-loop
system has a double pole at −2. The following is the desired closed-loop characteristic
equation.
\alpha_c(s) = (s + 2)^2 = s^2 + 4s + 4 = 0    (9.27)
First, let us find the state feedback gains by comparing the coefficients of the charac-
teristic equation. The following is the state feedback controller for the system.
" #
h i x h i
1
u = − (k1 x1 + k2 x2 ) = − k1 k2 = − k1 k2 x (9.28)
x2
By plugging the above controller into Eq. (9.25), we can obtain the following relation-
ship.

\dot{x} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} x - \begin{bmatrix} 0 \\ 1 \end{bmatrix}\begin{bmatrix} k_1 & k_2 \end{bmatrix} x = \begin{bmatrix} 0 & 1 \\ -1 - k_1 & -k_2 \end{bmatrix} x    (9.29)
The above system is a closed-loop system, and the following is the characteristic equa-
tion.

\left| \begin{bmatrix} s & 0 \\ 0 & s \end{bmatrix} - \begin{bmatrix} 0 & 1 \\ -1 - k_1 & -k_2 \end{bmatrix} \right| = \left| \begin{bmatrix} s & -1 \\ 1 + k_1 & s + k_2 \end{bmatrix} \right| = 0    (9.30)

s^2 + k_2 s + (1 + k_1) = 0    (9.31)
By comparing Eq. (9.31) with the desired characteristic equation Eq. (9.27), we can find the following feedback gains.
k1 = 3, k2 = 4 (9.32)
Next, let us use Ackermann’s formula. To use the formula, we need to find the follow-
ing controllability matrix MC .
" " # " #" # # " #
0 0 1 0 0 1
MC = = (9.33)
1 −1 0 1 1 0
Plugging the above matrices into Ackermann’s formula, we can find the feedback gain
as follows:
\alpha_c(A) = A^2 + 4A + 4I = \begin{bmatrix} 3 & 4 \\ -4 & 3 \end{bmatrix}

K = \begin{bmatrix} 0 & 1 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}^{-1}\begin{bmatrix} 3 & 4 \\ -4 & 3 \end{bmatrix} = \begin{bmatrix} 3 & 4 \end{bmatrix}    (9.35)
The above state feedback controller coincides with the results obtained by the coeffi-
cient comparison. The state responses of the closed-loop system and the control signal
for the initial conditions x1 (0) = 1, x2 (0) = 0 are shown in Figure 9.4. For comparison,
let us design the state feedback controller so that the closed-loop system has a double
pole at −4. The desired characteristic equation of the closed-loop system is as follows:

\alpha_c(s) = (s + 4)^2 = s^2 + 8s + 16 = 0    (9.37)

Using the same procedure, we can find the state feedback controller as follows:

k_1 = 15, \quad k_2 = 8    (9.38)
Note that the feedback gains are larger than the previous case. The state responses of
the closed-loop system and the control signal for the initial conditions x1 (0) = 1, x2 (0) =
0 are shown in Figure 9.5. If we compare Figure 9.5 with Figure 9.4, we can see that
pushing the closed-loop poles to the left results in faster responses and a larger control
signal. We have to note that a large control signal requires control devices with a larger capacity. Therefore, if we move the desired closed-loop poles to the left, we may
face an increase in the implementation expenses.
MATLAB Using the following MATLAB code, we can draw the responses in this ex-
ample. The MATLAB command for Ackermann’s formula is acker.
clf
A=[0 1;-1 0];B=[0;1];C=[1 0];D=0;
K=acker(A,B,[-2 -2])
Acl=A-B*K;
h=ss(Acl,B,C,D);
t = 0:0.001:4;
ref=zeros(size(t));
[ys ts xs]=lsim(h,ref,t,[1;0]);
subplot(2,1,1)
plot(ts,xs(:,1),'-b',ts,xs(:,2),'--b','LineWidth',3);
axis([0 4 -2 2])
set(gca,'GridLineStyle','-','FontName','times','FontSize',18)
xlabel('Time(sec)');
ylabel('Output')
grid on
subplot(2,1,2)
plot(ts,-K(1)*xs(:,1)-K(2)*xs(:,2),'LineWidth',3);
axis([0 4 -15 5])
set(gca,'GridLineStyle','-','FontName','times','FontSize',18)
xlabel('Time(sec)');
ylabel('Control')
grid on
Figure 9.4 Responses for the controller Eq. (9.36) in Example 9.2
Figure 9.5 Responses for the controller Eq. (9.38) in Example 9.2
Next, let us introduce a reference input r_0 and modify the state feedback controller as follows:

u = r_0 - Kx    (9.39)
Figure 9.6 shows the unit step response of the system with the controller.
The output response in Figure 9.6 approaches the steady-state value that is far from the
reference input value. Even though the controlled plant has a pole at the origin, the closed-
loop system is not a unity feedback system, and the system type is not one. We can achieve
zero steady-state error by adding a feedforward term, as shown in Figure 9.7.
Let us define the transfer function of the feedforward term in Figure 9.7 as follows:
\frac{R_0(s)}{R(s)} = D_F(s)    (9.41)
Then, the transfer function of the whole system can be obtained as follows:
\frac{Y(s)}{R(s)} = C(sI - A + BK)^{-1}B\, D_F(s)    (9.45)
Figure 9.8 shows the block diagram of the system with the feedforward term.
In Chapter 4, the conditions for a non-unity feedback system to have zero steady-state
error are discussed. For example, to be a type one system that has zero steady-state error
for a step reference input, the constant terms of the denominator and the numerator in the
transfer function should be equal. To be a type two system that has zero steady-state error
for a ramp reference input, the constant terms and the coefficients of the first-order terms of
the denominator and the numerator in the transfer function should be equal. If we apply the
conditions to Eq. (9.45), we can obtain the feedforward term which makes the steady-state error zero. For example, consider the controller Eq. (9.40) for the system in Example
9.1. The transfer function of the closed-loop system with the controller is as follows:
\frac{Y(s)}{R_0(s)} = C(sI - A + BK)^{-1}B
= \begin{bmatrix} 1 & 0 \end{bmatrix}\left( \begin{bmatrix} s & 0 \\ 0 & s \end{bmatrix} - \begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}\begin{bmatrix} 100 & 10 \end{bmatrix} \right)^{-1}\begin{bmatrix} 0 \\ 1 \end{bmatrix}    (9.46)
= \frac{1}{s^2 + 11s + 100}
Let us choose the following feedforward term.
DF (s) = 100 (9.47)
Then, the transfer function of the whole system in this example is as follows:
\frac{Y(s)}{R(s)} = \frac{Y(s)}{R_0(s)} \cdot \frac{R_0(s)}{R(s)} = \frac{Y(s)}{R_0(s)} \cdot D_F(s) = \frac{100}{s^2 + 11s + 100}    (9.48)
The above transfer function satisfies the condition for the system type one. Therefore, the
controller with the feedforward term is as follows:
u(t) = 100r − (100x1 + 10x2 ) (9.49)
Figure 9.9 shows the closed-loop system with the feedforward term.
If we want to make the system to be a type two system, the transfer function of the whole
system should be as follows:
\frac{Y(s)}{R(s)} = \frac{Y(s)}{R_0(s)} \cdot D_F(s) = \frac{D_F(s)}{s^2 + 11s + 100} = \frac{11s + 100}{s^2 + 11s + 100}    (9.50)
Then, the feedforward term should be as follows:
DF (s) = 11s + 100 (9.51)
Therefore, the controller is as follows:
u(t) = 100r + 11ṙ − (100x1 + 10x2 ) (9.52)
Figure 9.10 shows the closed-loop system with the controller Eq. (9.52).
Note that the feedforward term in the above controller includes a differentiation term. Gen-
erally, we have to avoid differentiating signal inputs from sensors since the differentiation
amplifies noises in the signals. However, since we differentiate the reference signal, which
is usually generated inside the control computer, we may not have noise problems in this
controller. Figure 9.11 shows the unit ramp responses for controller Eqs. (9.49) and (9.52).
As we can see in Figure 9.11, the unit ramp response for the controller Eq. (9.52) has no
steady-state error.
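The comparison in Figure 9.11 can be reproduced with a short MATLAB sketch that simulates the closed-loop transfer functions of Eqs. (9.48) and (9.50) for a unit ramp reference:

t=0:0.001:4;
r=t;                                 % unit ramp reference
G1=tf(100,[1 11 100]);               % closed-loop system with the controller Eq. (9.49)
G2=tf([11 100],[1 11 100]);          % closed-loop system with the controller Eq. (9.52)
y1=lsim(G1,r,t);
y2=lsim(G2,r,t);
plot(t,r,':k',t,y1,'--b',t,y2,'-b','LineWidth',2)
grid on
xlabel('Time(sec)'); ylabel('Output')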
The feedforward term in the above controller is dependent on the coefficients of the
closed-loop system. If there are changes in the coefficients for some reason, the steady-state
error may not be zero. This problem is common to controllers with feedforward terms. In order to design controllers that guarantee zero steady-state error regardless of changes in coefficients, we need to introduce integral terms in the controller. The integral state feedback controller is discussed later in this chapter.
9.2 State Estimator
Since the state estimator estimates the states of a dynamic system, we may assume that the
state estimator is also a dynamic system described by the following equation.
x̂˙ = Ae x̂ + Be u + Ly (9.53)
Since it is required that the state estimate x̂ converge to the state x, we have to find A_e, B_e, and L so that the following condition is satisfied.

x̂(t) → x(t)  as  t → ∞    (9.54)

Let us define the estimation error as follows:

e = x − x̂    (9.55)
Then, we have to find the state estimator which satisfies the following condition.
e → 0, t → ∞ (9.56)
Subtracting Eq. (9.53) from Eq. (9.1) gives us the following error dynamic equation.
ẋ − x̂˙ = Ax + Bu − Ae x̂ − Be u − Ly
= Ax + Bu − Ae x̂ − Be u − LCx (9.57)
= (A − LC)x − Ae x̂ + (B − Be )u
In the above equation, let us assume that the following relationships hold.
Ae = A − LC (9.58)
Be = B (9.59)
Then, we have the following error dynamic equation.
ė = (A − LC)e (9.60)
In the above error equation, if we choose L so that all the eigenvalues of (A − LC) lie in the
left-half plane, the estimation error e converges to zero as the time goes to infinity. Therefore,
Eq. (9.53) can be changed to the following equation.

\dot{\hat{x}} = A\hat{x} + Bu + L(y - C\hat{x})    (9.61)

The above equation is the dynamic equation of the state estimator. Note that the estimator
equation is almost the same as the system dynamic equation except for the addition of the
term L(y − C x̂).
Let us consider what will happen if the term L(y − Cx̂) is absent from the estimator equation. In other words, let us assume that the estimator equation is as follows:

\dot{\hat{x}} = A\hat{x} + Bu    (9.62)
In order for the estimator state x̂ to be equal to the state x, the initial conditions, as well as
the input signal, should be the same. In other words, the initial conditions must satisfy the
following condition.
x̂(0) = x(0) (9.63)
However, since we do not know the initial conditions of the states in the real systems, we
cannot use the above condition to initialize the estimator states. The role of the term L(y−C x̂)
in the estimator equation is to compensate for the error induced by the difference in the
initial condition. Note that the term C x̂ can be regarded as the estimate of the output signal
y. Therefore, the term L(y − C x̂) may represent the output estimation error multiplied by the
estimator gain L. In a word, the estimator compensates the state estimation error by using
the feedback of the output estimation error.
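The effect of this correction term can be seen in a small simulation. The sketch below uses a double-integrator model chosen only for illustration, starts both estimators from a wrong initial guess, and compares the estimation error with and without the output-error feedback:

A=[0 1;0 0]; B=[0;1]; C=[1 0];            % an illustrative double-integrator system
L=acker(A',C',[-2 -2])';                  % estimator gain for a double pole at -2
dt=0.001; t=0:dt:4; u=zeros(size(t));
x=[1;0.5]; xo=[0;0]; xc=[0;0];            % true state, uncorrected copy, corrected estimator
for k=1:numel(t)-1
    y=C*x;
    x =x +dt*(A*x +B*u(k));               % plant
    xo=xo+dt*(A*xo+B*u(k));               % estimator without the L(y - C*xhat) term
    xc=xc+dt*(A*xc+B*u(k)+L*(y-C*xc));    % estimator with output-error feedback
    eo(k)=norm(x-xo); ec(k)=norm(x-xc);   % estimation error norms
end
plot(t(1:end-1),eo,'--b',t(1:end-1),ec,'-b'); grid on
xlabel('Time(sec)'); ylabel('Estimation error')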
The convergent rate of the error e in Eq. (9.60) is governed by the eigenvalues of the
matrix (A−LC). First of all, all the eigenvalues must lie in the left-half plane. As we move the
eigenvalues to the left, we have a faster converging estimator. When we design an estimator
in the form of Eq. (9.61), we have to choose first the eigenvalues of (A − LC). Suppose we
have chosen eigenvalues β1 , β2 , · · · , βn . Then, we have the following desired characteristic
equation.
\alpha_e(s) = (s - \beta_1)(s - \beta_2) \cdots (s - \beta_n) = s^n + \alpha_{n-1}s^{n-1} + \cdots + \alpha_1 s + \alpha_0 = 0    (9.64)
Then, we have to determine L so that the following equation is the same as the above desired
characteristic equation.
|sI − (A − LC)| = 0 (9.65)
Determination of the estimator gain L is similar to the process of determining the state feed-
back gain. Remember that we choose the state feedback gain K so that the eigenvalues of
(A − BK) are the desired values. Let us take the transpose of the matrix (A − LC) as follows:

(A - LC)^T = A^T - C^T L^T    (9.66)
Using the above relationship, we can modify Ackermann’s formula to determine the estima-
tor gain L. Let us replace the matrices in Ackermann’s formula, as shown below:
A \leftrightarrow A^T, \quad B \leftrightarrow C^T, \quad K \leftrightarrow L^T    (9.67)
Note that the transpose of Eq. (9.69) is the observability matrix. By taking the transpose of
Eq. (9.68), we can obtain the following Ackermann’s formula for the estimator gain.
L = \alpha_e(A)\, M_O^{-1} \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix}^T    (9.71)
If the observability matrix MO is invertible, we can always find L using Eq. (9.71). Therefore,
if the system is observable, we can always place the estimator poles anywhere.
Example 9.3 Let us design a state estimator for the following system.
" # " #" # " #
ẋ1 0 1 x1 0
= + u (9.72)
ẋ2 −1 −2 x2 1
" #
h i x
1
y= 1 0 (9.73)
x2
Suppose we desire that the matrix (A − LC) has a double eigenvalue at −2. Then, the
desired characteristic equation of the state estimator is as follows:

\alpha_e(s) = (s + 2)^2 = s^2 + 4s + 4 = 0    (9.74)

First, let us find the estimator gain L by comparing coefficients. The characteristic equation of the estimator is as follows:

|sI - (A - LC)| = s^2 + (2 + l_1)s + (1 + 2l_1 + l_2) = 0
By comparing the above equation with Eq. (9.74), we can find the following estimator
gains.
l1 = 2, l2 = −1 (9.78)
Next, let us find the gains using Ackermann’s formula. The observability matrix is as
follows:

M_O = \begin{bmatrix} C \\ CA \end{bmatrix} = \begin{bmatrix} \begin{bmatrix} 1 & 0 \end{bmatrix} \\ \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}    (9.79)
By plugging the above matrices into Eq. (9.71), we can obtain the following estimator
gain vector.
" #" #−1 " # " #
3 2 1 0 0 2
L= = (9.81)
−2 −1 0 1 1 −1
The above gain vector coincides with the result of the comparison method. The esti-
mator equation can be written as follows:
\begin{bmatrix} \dot{\hat{x}}_1 \\ \dot{\hat{x}}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix}\begin{bmatrix} \hat{x}_1 \\ \hat{x}_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u + \begin{bmatrix} 2 \\ -1 \end{bmatrix}(y - \hat{x}_1)    (9.82)
Figure 9.13 shows the system with the state estimator. Figure 9.14 shows the responses
for u ≡ 0, x1 = 1, x2 = 0.5, x̂1 = 0, x̂2 = 0. We can see that the estimator state x̂ converges
to the state x of the system. Next, let us move the eigenvalues of the estimator to the
left. Suppose we desire that the matrix (A − LC) has a double eigenvalue at −4. Then,
the desired characteristic equation of the state estimator is as follows:

\alpha_e(s) = (s + 4)^2 = s^2 + 8s + 16 = 0    (9.83)
The estimator gain vector for the above characteristic equation is found to be as fol-
lows:

L = \begin{bmatrix} 6 \\ 3 \end{bmatrix}    (9.84)
Figure 9.15 shows the responses for the above gain with the same initial conditions as
in Figure 9.14. We can see that the estimator responses in Figure 9.15 are faster than
in Figure 9.14.
MATLAB Using the following MATLAB code, we can draw the responses in this ex-
ample.
clf
A=[0 1;-1 -2];B=[0;1];C=[1 0];D=0;
L=acker(A',C',[-2 -2])
h=ss([A [0 0;0 0];L'*C A-L'*C],[B;B],[C 0 0],0);
t = 0:0.001:10;
ref=zeros(size(t));
[ys ts xs]=lsim(h,ref,t,[1;0.5;0;0]);
subplot(2,1,1)
plot(ts,xs(:,1),'-b',ts,xs(:,3),'--b','LineWidth',3);
axis([0 8 0 1.5])
set(gca,'GridLineStyle','-','FontName','times','FontSize',18)
xlabel('Time(sec)');
ylabel('State')
grid on
subplot(2,1,2)
plot(ts,xs(:,2),'-b',ts,xs(:,4),'--b','LineWidth',3);
axis([0 8 -0.5 0.5])
set(gca,'GridLineStyle','-','FontName','times','FontSize',18)
xlabel('Time(sec)');
ylabel('State')
grid on
As we can see in Example 9.3, moving the eigenvalues of (A − LC) to the left results in a faster converging state estimator. This is similar to the way that pushing the closed-loop poles to the left results in faster responses in state feedback control. In the previous
section, it is mentioned that making the state feedback control system faster may increase
the implementation cost. Then, we have to ask what the cost of making the estimator faster
is. Since the state estimator is nothing more than a part of the computer program, the larger
estimator gains do not increase the implementation cost. However, if we increase the con-
vergent speed of the state estimator, the bandwidth of the system is increased; in turn, the
noise sensitivity is increased. Therefore, it is important to design the state estimator so that
we have a proper balance between the convergent speed and the noise sensitivity.
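This trade-off can be illustrated with a small simulation. The sketch below reuses the system of Example 9.3 and the two estimator gains found there, adds hypothetical measurement noise to y, and compares the error in the x1 estimate for the slow and fast estimators:

A=[0 1;-1 -2]; B=[0;1]; C=[1 0];
Lslow=[2;-1]; Lfast=[6;3];               % estimator gains from Example 9.3
dt=0.001; t=0:dt:8; u=zeros(size(t));
rng(0); noise=0.05*randn(size(t));       % hypothetical measurement noise
x=[1;0.5]; xs=[0;0]; xf=[0;0];
for k=1:numel(t)-1
    y=C*x+noise(k);                      % noisy measurement
    x =x +dt*(A*x +B*u(k));
    xs=xs+dt*(A*xs+B*u(k)+Lslow*(y-C*xs));
    xf=xf+dt*(A*xf+B*u(k)+Lfast*(y-C*xf));
    e1(k)=x(1)-xs(1); e2(k)=x(1)-xf(1);  % errors in the x1 estimates
end
plot(t(1:end-1),e1,'-b',t(1:end-1),e2,'--b'); grid on
xlabel('Time(sec)'); ylabel('Error in x1 estimate')
legend('slow estimator','fast estimator')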
Example 9.4 Let us design the state estimator for the following system.
" # " #" # " #
ẋ1 0 1 x1 0
= + u (9.85)
ẋ2 0 0 x2 1
" #
h i x
1
y= 1 0 (9.86)
x2
Suppose we desire that the matrix (A − LC) has a double eigenvalue at −2. Then, the
desired characteristic equation of the state estimator is as follows:

\alpha_e(s) = (s + 2)^2 = s^2 + 4s + 4 = 0    (9.87)

The estimator gain vector for the above characteristic equation is as follows:

L = \begin{bmatrix} 4 \\ 4 \end{bmatrix}    (9.88)
Note that the system is unstable since it has two poles at the origin. Before we apply
the state estimator to the system, the system should be stabilized. Let us design the
state feedback controller so that the closed-loop system has a double pole at −1. The
feedback gain K for the state feedback controller u = −Kx is as follows:
K = \begin{bmatrix} 1 & 2 \end{bmatrix}    (9.89)
Figure 9.16 shows the block diagram of the whole system. Note that both the state es-
timator and the controlled plant in Figure 9.16 receive the same control input. Figure
9.17 shows the responses for u ≡ 0, x1 = 1, x2 = 0.5, x̂1 = 0, x̂2 = 0. We can see that the
estimator state x̂ converges to the state x of the system.
9.3 State Estimator Based Controller
9.3.1 State Estimator Based Controller with Zero Reference Input
The state estimator based controller is formed by combining the state feedback controller and the state estimator as follows:

u = -K\hat{x}    (9.90)
x̂˙ = Ax̂ + Bu + L (y − C x̂) = Ax̂ − BK x̂ + L (y − C x̂) (9.91)
In the above equations, note that the state variable x in the control u is replaced with the
state estimate variable x̂. To design the above controller, we have to determine two gain
vectors, the state feedback gain K and the state estimator gain L. In the previous sections,
the state feedback gain and the state estimator gain are determined independently. However,
when these two gain vectors are used in the same controller, we need to know if one gain
vector has an influence on the other. The state equations of the closed-loop system with the
estimator based controller are as follows:
\dot{x} = Ax - BK\hat{x}    (9.92)
\dot{\hat{x}} = A\hat{x} - BK\hat{x} + L(Cx - C\hat{x}) = LCx + (A - BK - LC)\hat{x}    (9.93)
The state variables of the above closed-loop system are x and x̂, and the order of the whole
system is 2n, which is twice the order of the controlled plant. In the above state equations,
we can replace the state estimate variable x̂ with the estimation error e = x − x̂. Subtracting
Eq. (9.93) from Eq. (9.92) gives us the following equation.

\dot{e} = \dot{x} - \dot{\hat{x}} = (A - LC)(x - \hat{x}) = (A - LC)e    (9.94)
Then, using the above relationship, we can change the state equation of the whole system to
the following equation.

\begin{bmatrix} \dot{x} \\ \dot{e} \end{bmatrix} = \begin{bmatrix} A - BK & BK \\ 0 & A - LC \end{bmatrix}\begin{bmatrix} x \\ e \end{bmatrix}    (9.95)
Since the above state equation is obtained by the state variable transformation, Eq. (9.95)
has the same eigenvalues as Eqs. (9.92) and (9.93). The characteristic equation of the above
system is as follows:
" # " #
sI 0 A − BK BK sI − (A − BK) −BK
0 sI − = = 0 (9.96)
0 A − LC 0 sI − (A − LC)
According to the property of the determinant, the above equation can be changed to the
following equation.
|sI − (A − BK)| |sI − (A − LC)| = 0 (9.97)
As we can see in the above equation, the eigenvalues of the whole closed-loop system are the
union of the roots of the following two separate characteristic equations.

|sI - (A - BK)| = 0
|sI - (A - LC)| = 0

In other words, the addition of the state estimator does not change the locations of the closed-loop poles. The addition of the state estimator merely
adds the poles of the estimator to the poles of the closed-loop system with the state feedback
controller. When we design the state estimator based controller, we can design the state
feedback controller and the state estimator separately.
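This separation property is easy to confirm numerically. Using the gains found in Example 9.4 (K = [1 2] for a double pole at −1 and L = [4; 4] for a double pole at −2), a short MATLAB check is:

A=[0 1;0 0]; B=[0;1]; C=[1 0];
K=[1 2]; L=[4;4];                      % gains from Example 9.4
Acl=[A-B*K B*K; zeros(2) A-L*C];       % closed-loop system matrix of Eq. (9.95)
eig(Acl)                               % returns -1, -1, -2, -2: the union of both pole sets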
Example 9.5 Consider the system in Example 9.4 again. Let us design the state es-
timator based controller so that (A − BK) has a double eigenvalue at −1 and (A − LC)
has a double eigenvalue at −2. We can use the same state feedback gain and the state
estimator gain as in Example 9.4. All we have to do in this example is to replace x
with x̂ in u = −Kx. Figure 9.19 shows the closed-loop system with the estimator based
controller. Note that the closed-loop system has four poles, one double pole at −1 and
one double pole at −2, as expected in Eq. (9.97). Figure 9.20 shows the responses for
x1 = 1, x2 = 0.5, x̂1 = 0, x̂2 = 0
MATLAB Using the following MATLAB code, we can draw the responses in this ex-
ample.
clf
A=[0 1;0 0];B=[0;1];C=[1 0];D=0;
K=acker(A,B,[-1 -1])
L=acker(A',C',[-2 -2])'
h=ss([A -B*K;L*C A-B*K-L*C],[B;0;0],[C 0 0],0);
t = 0:0.001:10;
ref=zeros(size(t));
[ys ts xs]=lsim(h,ref,t,[1;0.5;0;0]);
subplot(2,1,1)
plot(ts,xs(:,1),'-b',ts,xs(:,3),'--b','LineWidth',3);
axis([0 8 -.5 1.5])
set(gca,'GridLineStyle','-','FontName','times','FontSize',18)
xlabel('Time(sec)');
ylabel('State')
grid on
subplot(2,1,2)
plot(ts,xs(:,2),'-b',ts,xs(:,4),'--b','LineWidth',3);
axis([0 8 -1 1])
set(gca,'GridLineStyle','-','FontName','times','FontSize',18)
xlabel('Time(sec)');
ylabel('State')
grid on
In Figure 9.18, the system output signal y is the input to the controller, and the control
signal u is the output of the controller. If we view the controller as a system, we can rewrite
the state equation of the controller as follows:

\dot{\hat{x}} = A\hat{x} - BK\hat{x} + L(y - C\hat{x}) = (A - BK - LC)\hat{x} + Ly    (9.100)

u = -K\hat{x}    (9.101)

Note that y is the input and u is the output in the above state equation. Then, we can find the transfer function of the controller as follows:

\frac{U(s)}{Y(s)} = -K[sI - (A - BK - LC)]^{-1}L    (9.102)
Using the above transfer function of the controller, we can draw the block diagram of the
closed-loop system as in Figure 9.21.
We can compare the block diagram in Figure 9.21 with the block diagram of the closed-loop
system in the frequency domain method. Let us define the transfer functions of the blocks
as follows:
G(s) = C(sI − A)−1 B (9.103)
D(s) = K[sI − (A − BK − LC)]−1 L (9.104)
Then, we can see that the block diagram in Figure 9.21 has a similar structure to the control
system in the frequency domain method.
Using Eq. (9.104), we can find the transfer function of the controller in Example 9.5 as
follows:
D(s) = \frac{12s + 4}{s^2 + 6s + 13}    (9.105)
Figure 9.22 shows the Bode plot of the controller.
Under the frequency of 3(rad/sec), the Bode plot in Figure 9.22 looks similar to the Bode plot
of a lead controller. Especially, the positive phase in the low-frequency range can increase
the phase margin by pulling up the phase curve of the controlled plant. The effect of phase
lead can be confirmed in Figure 9.23, where the Bode plots of G(s) and D(s)G(s) are drawn. In
Figure 9.23, the dotted lines are for the Bode plot of G(s), which shows zero phase margin. In
the same figure, the solid lines are for the Bode plot of D(s)G(s), which shows the increased
phase margin by 45 degrees.
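Both the controller transfer function and the phase-margin improvement can be checked with a few MATLAB commands; a sketch using the gains of Example 9.5 is given below:

A=[0 1;0 0]; B=[0;1]; C=[1 0];
K=[1 2]; L=[4;4];                   % gains from Example 9.5
D=tf(ss(A-B*K-L*C,L,K,0))           % controller of Eq. (9.104): (12s+4)/(s^2+6s+13)
G=tf(ss(A,B,C,0));                  % controlled plant, 1/s^2
margin(G)                           % phase margin of G(s) alone (zero)
figure
margin(D*G)                         % phase margin with the controller, about 45 degrees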
9.3.2 State Estimator Based Controller with Non-Zero Reference Input

\dot{\hat{x}} = (A - BK - LC)\hat{x} + L(y - r_0)    (9.106)
u = -K\hat{x}    (9.107)
If we compare the above state equation with the zero-reference input case, we can see that
the output signal y is replaced by y − r0 . DF (s) is a feedforward term to eliminate or reduce
the steady-state error. The above configuration has the benefit of being a unity feedback
system. The number of poles of the controlled plant at the origin is the same as the system
type number. For example, if the controlled plant has a pole at the origin, we don’t have to
add a feedforward term to follow a constant reference input.
Example 9.6 Let us introduce a reference input to the system in Example 9.5 and
find a unit step response. To speed up the response, let us choose (A − BK) to have a
double eigenvalue at −3 and (A − LC) to have a double eigenvalue at −12. Then, the
gain vectors are obtained as follows:
K = \begin{bmatrix} 9 & 6 \end{bmatrix}    (9.108)

L = \begin{bmatrix} 24 \\ 144 \end{bmatrix}    (9.109)
Since we move the poles of the state estimator to the left, the estimator gains are in-
creased. Using the above values, we can find the transfer function of the controller as
follows:
D(s) = K[sI - (A - BK - LC)]^{-1}L = \frac{1080s + 1296}{s^2 + 30s + 297}    (9.110)
The transfer function of the controlled plant is as follows:
G(s) = \frac{1}{s^2}    (9.111)
Figure 9.25 shows the block diagram of the closed-loop system. Since the system is
already a type two system, we don’t have to add a feedforward term. The transfer
function of the whole closed-loop system is as follows:

\frac{Y(s)}{R(s)} = \frac{D(s)G(s)}{1 + D(s)G(s)} = \frac{1080s + 1296}{(s + 3)^2(s + 12)^2}    (9.112)
Figure 9.26 shows the unit step response. As expected, there is no steady-state error.
MATLAB Using the following MATLAB code, we can draw the response of the system
in this example.
clf
A=[0 1;0 0];B=[0;1];C=[1 0];D=0;
K=acker(A,B,[-3 -3])
L=acker(A',C',[-12 -12])'
sys(1)=ss([A-B*K-L*C],L,K,0);
sys(2)=sys(1)*ss(A,B,C,D);
h=feedback(sys(2),1);
h=tf(h);
t = 0:0.001:10;
ref=ones(size(t));
[ys ts xs]=lsim(h,ref,t);
plot(ts,ys(:,1),'-b','LineWidth',3);
axis([0 10 0 2])
set(gca,'GridLineStyle','-','FontName','times','FontSize',18)
xlabel('Time(sec)');
ylabel('Output')
grid on
The second method to introduce a reference input is using Eq. (9.39). If we replace the
state variable x in Eq. (9.39) with the state estimate x̂, we have the following equation for
the controller.
x̂˙ = Ax̂ + B(r0 − K x̂) + L(y − C x̂) (9.113)
u = r0 − K x̂ (9.114)
Note that the control signal is applied to the state estimator as well as to the controlled plant.
Figure 9.27 shows the block diagram of the closed-loop system with a reference input.
Figure 9.27 Block diagram of the closed-loop system with the controller Eq. (9.114)
The state equations of the closed-loop system with the controller are as follows:

\dot{x} = Ax + B(r_0 - K\hat{x})    (9.115)
\dot{\hat{x}} = A\hat{x} + B(r_0 - K\hat{x}) + L(Cx - C\hat{x})    (9.116)

Let us define the estimation error as follows:

e = x - \hat{x}    (9.117)

Then, Eqs. (9.115) and (9.116) can be changed to the following equations.

\dot{x} = (A - BK)x + BKe + Br_0    (9.118)
\dot{e} = (A - LC)e    (9.119)
By taking the Laplace transforms of the above equations, we can obtain the following equa-
tions.
sX(s) − x(0) = (A − BK)X(s) + BKE(s) + BR0 (s) (9.120)
sE(s) − e(0) = (A − LC)E(s) (9.121)
Rearranging the terms, we have the following equations.
Example 9.7 Let us apply the controller Eqs. (9.113) and (9.114) to the system in
Example 9.5 and find a unit step response. Use the same gain vectors as in Example
9.6. The followings are the state equations with the controller.
" # " #" # " #
ẋ1 0 1 x1 0
= + u (9.126)
ẋ2 0 0 x2 1
x̂˙ 1
" # " # " # " # " # !" # " #
24 h i x
1 0 1 24 h i x̂
1 0
= 1 0 + − 1 0 + u (9.127)
x̂˙ 2 144 x2 0 0 144 x̂2 1
" #
h i x̂
1
u = r0 − 9 6 (9.128)
x̂2
" #
h i x
1
y= 1 0 (9.129)
x2
The transfer function from the reference input r_0 to the output y is as follows:

\frac{Y(s)}{R_0(s)} = \frac{(s + 12)^2}{(s + 3)^2(s + 12)^2} = \frac{1}{s^2 + 6s + 9}    (9.130)
As we can see in the above equation, the estimator poles are canceled and do not ap-
pear in the closed-loop transfer function. To make the system type 1, we can use the
following feedforward term.
DF (s) = 9 (9.131)
Then, the transfer function of the whole system is as follows:
\frac{Y(s)}{R(s)} = \frac{9}{s^2 + 6s + 9}    (9.132)
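The pole-zero cancellation in Eq. (9.130) can also be confirmed numerically; the following MATLAB sketch builds the composite system of Eqs. (9.126)-(9.129) and reduces it with minreal:

A=[0 1;0 0]; B=[0;1]; C=[1 0];
K=[9 6]; L=[24;144];
sys=ss([A -B*K; L*C A-B*K-L*C],[B;B],[C 0 0],0);   % closed-loop system from r0 to y
zpk(sys)              % shows the zeros at -12, -12 cancelling two of the poles
minreal(zpk(sys))     % reduces to 1/(s+3)^2, as in Eq. (9.130)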
Figure 9.28 shows the unit step response. While the response in Figure 9.26 has an
overshoot, the response in Figure 9.28 does not have an overshoot. However, we have
to note that the system in Example 9.6 is a type two system, and the system in this
example is a type one system. The system type of the system in this example can be
made two by using the following feedforward term.
DF (s) = 6s + 9 (9.133)
plot(ts,ys(:,1),'-b','LineWidth',3);
axis([0 10 0 2])
set(gca,'GridLineStyle','-','FontName','times','FontSize',18)
xlabel('Time(sec)');
ylabel('Output')
grid on
If we replace the fifth line in the above code with the following line, we can draw the
response in Figure 9.29.
h=tf([6 9],[1])*ss([A -B*K;L*C A-B*K-L*C],[B;B],[C 0 0],0);
9.4 Integral State Feedback Controller
Adding an integrator increases the order of the system. Therefore, if we add an inte-
grator, we have to define an extra state variable. Let r be the reference input. The output
variable is y. Let n be the order of the system. The new state variable due to the addition of
an integrator can be defined as follows:
ẋn+1 = r − y = r − Cx (9.135)
The new state variable xn+1 is the integration of the error between the reference input and
the output. With the new state variable, the state feedback controller is modified as follows:
u = −Kx − kn+1 xn+1 (9.136)
Figure 9.30 shows the block diagram of the closed-loop system with the integral state feed-
back controller.
If we make the closed-loop system stable, the steady-state error for a constant reference
input is zero since the system is a type one system. The new state equation with the new
state variable is as follows:
" # " #" # " # " #
ẋ A 0 x B 0
= + u+ r (9.137)
ẋn+1 −C 0 xn+1 0 1
Let us define the new state variable as follows:

\bar{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \\ x_{n+1} \end{bmatrix}    (9.138)
Let us define the new system matrices as follows:
" #
A 0
Ā = (9.139)
−C 0
422 9.4. Integral State Feedback Controller
" #
B
B̄ = (9.140)
0
Let us assume that the reference input r is zero since the stability of the closed-loop system
is not dependent on the reference input. Then, we can rewrite Eq. (9.137) as follows:

\dot{\bar{x}} = \bar{A}\bar{x} + \bar{B}u    (9.141)
We can make the system stable by applying the following state feedback controller.
u = -\bar{K}\bar{x} = -\begin{bmatrix} K & k_{n+1} \end{bmatrix}\bar{x}    (9.142)
With the stabilizing state feedback controller, the system is a type one system, and the output
of the system follows a constant reference input with zero steady-state error.
Example 9.8 Let us design the integral state feedback controller for the system in
Example 9.1. The system matrices for the controlled plant are as follows:
" #
0 1
A= (9.143)
0 −1
" #
0
B= (9.144)
1
h i
C= 1 0 (9.145)
With the addition of an integrator, the system matrices are changed to the following
matrices.
" # 0 1 0
A 0
Ā = = 0 −1 0 (9.146)
−C 0
−1 0 0
" # 0
B
B̄ = = 1 (9.147)
0
0
Let us find the state feedback controller so that the closed-loop system has a triple pole
at −8. The following is the state feedback gain.
\bar{K} = \begin{bmatrix} k_1 & k_2 & k_3 \end{bmatrix} = \begin{bmatrix} 192 & 23 & -512 \end{bmatrix}    (9.148)
Figure 9.31 shows the block diagram of the closed-loop system. Figure 9.32 shows the
output response and the output of the integrator.
MATLAB Using the following MATLAB code, we can draw the responses in this ex-
ample.
clf
A=[0 1;0 -1];B=[0;1];C=[1 0];D=0;
Abar=[A [0;0] ; -C 0]
Bbar=[B;0]
K=acker(Abar,Bbar,[-8 -8 -8])
Acl=Abar-Bbar*K;
h=ss(Acl,[0;0;1],[C 0],D);
t = 0:0.001:4;
ref=ones(size(t));
[ys ts xs]=lsim(h,ref,t,[0;0;0]);
plot(ts,ys(:,1),'-b',ts,xs(:,3),'--b','LineWidth',3);
axis([0 2 0 2])
set(gca,'GridLineStyle','-','FontName','times','FontSize',18)
xlabel('Time(sec)');
ylabel('Output')
grid on
As discussed above, the system with the integral state feedback controller is a type one system as long as the closed-loop system is stable. The integral state
feedback controller is robust against the coefficient variations. However, since an integrator
itself is a low-pass filter, the addition of an integrator may result in slow responses. Also,
the system response may have an overshoot due to the increase of the system order. There-
fore, when compared with the feedforward method, the integral state feedback method has
pros and cons. The control system designer has to decide which method is better to design a
reference tracking controller for a given system.
9.5 Lab 9
Figure 9.34 shows the closed-loop configuration of the state feedback control system for the
simulator. The outputs of the integrators are defined as the state variables, x1 and x2.
Figure 9.34 State feedback control system for the analog dynamic simulator
Let X1 (s),X2 (s), and U (s) be the Laplace transforms of x1 , x2 , and u, respectively. Then, we
have the following relationships.
\frac{X_1(s)}{X_2(s)} = -\frac{10}{s}    (9.149)

\frac{X_2(s)}{U(s)} = -\frac{10}{s}    (9.150)
From the above equations, we can obtain the following state equations.

\dot{x}_1(t) = -10 x_2(t)    (9.151)
\dot{x}_2(t) = -10 u(t)    (9.152)
In a vector form, the state equation for the analog dynamic simulator is as follows:
" # " #" # " #
ẋ1 (t) 0 −10 x1 (t) 0
= + u(t)
ẋ2 (t) 0 0 x2 (t) −10
h
"
i x (t)
# (9.153)
1
y(t) = 1 0
x2 (t)
Let us choose the closed-loop poles as −10 ± j20. Using Ackermann's formula, we can find the state feedback gains as follows:

k_1 = 5, \quad k_2 = -2    (9.154)
The following MATLAB code simulates the step response of the state feedback control sys-
tem.
a=[0 -10;0 0]
b=[0;-10]
c=[1 0]
d=0
p=[-10+j*20 -10-j*20]
k=acker(a,b,p)
a1=a-b*k
h=ss(a1,b,c,d)
t = 0:0.001:4; % vector of time samples
ref = (rem(t,4)<=2)*k(1); % square wave values
%ref=ones(size(t))*k(1); % unit step
[ys ts xs]=lsim(h,ref,t,[0;0]);
% plot states
figure(1)
subplot(2,1,1);
plot(ts,xs(:,1),'-',ts,xs(:,2),':');axis([0 4 -0.5 1.5])
grid on
ylabel('x1,x2');
% plot control
subplot(2,1,2)
plot(ts,ref'-k(1)*xs(:,1)-k(2)*xs(:,2));axis([0 4 -10 10])
grid on
ylabel('u');xlabel('time(sec)')
Code 9.1
Figure 9.35 shows the MATLAB simulation responses, where the state variables and control
signals are shown.
The implementation code for the Discovery board is similar to the code in section 8.8.
Replace Code 8.2, Code 8.3, Code 8.4, and Code 8.5 with Code 9.2, Code 9.3, Code 9.4, and
Code 9.5, respectively.
Code 9.2
Code 9.3
Code 9.4
Code 9.5
Figure 9.36 shows the state responses and control of the analog dynamic simulator.
Figure 9.36 State responses and control of the analog dynamic simulator
Exercise 9.1 Choose a real double pole in the left-half plane as the closed-loop poles
and repeat the above experiment.
9.5.2 State Estimator Based Controller for an Analog Dynamic Simulator
Next, let us design a state estimator based controller for the analog dynamic simulator. The state estimator has the form of Eq. (9.61),

\dot{\hat{x}}(t) = A\hat{x}(t) + Bu(t) + L(y(t) - C\hat{x}(t))

where
" # " # " # " #
x̂1 (t) 0 −10 0 h i l1
x̂(t) = ,A = ,B = ,C = 1 0 ,L = (9.157)
x̂2 (t) 0 0 −10 l2
We need to find the digital form of the above equations for the digital implementation. Using
the forward rectangular Euler’s rule, we can obtain the following digital form of the above
equations.
x̂1 (t) = x̂1 (t − ∆t) + ∆t [−10x̂2 (t − ∆t) + l1 (y(t − ∆t) − x̂1 (t − ∆t))]
x̂2 (t) = x̂2 (t − ∆t) + ∆t [−10(k1 ref − k1 x̂1 (t − ∆t) − k2 x̂2 (t − ∆t)) + l2 (y(t − ∆t) − x̂1 (t − ∆t))]
u(t) = k1 ref − k1 x̂1 (t) − k2 x̂2 (t)
(9.160)
Code 9.6
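The listing of Code 9.6 is not reproduced above; a minimal MATLAB sketch of the simulation it describes is given below. The controller gains k1 = 5 and k2 = −2 come from Eq. (9.154), while the estimator gains l1 and l2 are computed here from an assumed estimator pole pair, so the lab's actual choice may differ.

A=[0 -10;0 0]; B=[0;-10]; C=[1 0];
k=acker(A,B,[-10+20j -10-20j]);           % state feedback gains of Eq. (9.154)
l=acker(A',C',[-40+80j -40-80j])';        % estimator gains for an assumed pole pair
dt=0.001; t=0:dt:4;
ref=(rem(t,4)<=2)*1;                      % square-wave reference, as in Code 9.1
x=[0;0]; xhat=[0;0];
for i=1:numel(t)-1
    y=C*x;
    u=k(1)*(ref(i)-xhat(1))-k(2)*xhat(2);        % control law of Eq. (9.160)
    xhat=xhat+dt*(A*xhat+B*u+l*(y-C*xhat));      % estimator update (forward Euler)
    x=x+dt*(A*x+B*u);                            % plant update (forward Euler)
    X1(i)=x(1); X1h(i)=xhat(1); X2(i)=x(2); X2h(i)=xhat(2);
end
subplot(2,1,1); plot(t(1:end-1),X1,'-',t(1:end-1),X1h,':'); grid on; ylabel('x1 and estimate');
subplot(2,1,2); plot(t(1:end-1),X2,'-',t(1:end-1),X2h,':'); grid on; ylabel('x2 and estimate'); xlabel('time(sec)')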
Figure 9.37 shows the MATLAB simulation responses, where the state variables and state
estimates are shown.
The implementation code for the Discovery board is similar to the code in section 8.8.
Replace Code 9.2, Code 9.3, Code 9.4, and Code 9.5 with Code 9.7, Code 9.8, Code 9.9, and
Code 9.10, respectively.
/* USER CODE BEGIN PV */
float x1hat,x1hat_old,x2hat,x2hat_old;
float delt;
float K1,K2,L1,L2,control;
volatile int32_t
x1,x1_old,x2,ref,interrupt_counter,sampling_frequency,data_counter;
int16_t data[4000],data2[4000],data3[4000],data4[4000];
volatile uint8_t data_flag=0,data_done=0;
/* USER CODE END PV */
Code 9.7
Code 9.8
printf("%d %d %d %d %d\r\n",i,data[i],data2[i],data3[i],data4[i]);
}
data_flag=0;
data_done=0;
HAL_GPIO_TogglePin(GPIOG, GPIO_PIN_14);
}
/* USER CODE END WHILE */
Code 9.9
}
}
x1hat=x1hat_old+delt*(-10.0*x2hat_old+L1*(x1_old-x1hat_old));
x2hat=x2hat_old+delt*(-10.0*(K1*ref-K1*x1hat_old-K2*x2hat_old)+L2*(x1_old-x1hat_old));
control = K1*(float)(ref-x1hat)-K2*(float)x2hat;
x1_old=x1;
x1hat_old=x1hat;
x2hat_old=x2hat;
if (control > 2047) control = 2047;
if (control < -2048) control = -2048;
da_value = control + 2048;
HAL_DAC_SetValue(&hdac, DAC_CHANNEL_2, DAC_ALIGN_12B_R, (uint32_t)(da_value));
}
/* USER CODE END Callback 0 */
Code 9.10
Figure 9.38 shows the state responses and state estimates of the analog dynamic simulator.
Figure 9.38 State responses and state estimates of the analog dynamic simulator
Exercise 9.2 Choose a real double pole in the left-half plane as the closed-loop poles
for the state feedback controller and repeat the above experiment. Use the same esti-
mator poles as in the above.
9.5.3 State Estimator Based Controller for a Magnetic Levitation System
Using the forward rectangular Euler's rule, we can obtain the digital form of the estimator state equations for the magnetic levitation system as follows:
\hat{x}_1(t) = \hat{x}_1(t - \Delta t) + \Delta t\left( \hat{x}_2(t - \Delta t) + l_1(x_1(t - \Delta t) - \hat{x}_1(t - \Delta t)) \right)
\hat{x}_2(t) = \hat{x}_2(t - \Delta t) + \Delta t\left( \frac{2g}{X_0}\hat{x}_1(t - \Delta t) - \frac{0.3\gamma g}{I_0}u(t - \Delta t) + l_2(x_1(t - \Delta t) - \hat{x}_1(t - \Delta t)) \right)    (9.164)
First, let us choose the closed-loop poles for the state feedback control as −15 ± j40 and the
estimator poles as −60 ± j160. The state feedback gains and the estimator gains are found to
be as follows:
k1 = −2.23, k2 = −0.025 (9.165)
l1 = 120, l2 = 30052 (9.166)
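These gains can be reproduced with the short MATLAB sketch below, which builds the linearized levitation model from the physical constants that appear in Code 9.11 (this check is not part of the original lab listings):

Gamma=333.3; g=9.8; I0=0.817; X0=0.023;    % constants from Code 9.11
A=[0 1; 2*g/X0 0];                         % linearized magnetic levitation model
B=[0; -0.3*Gamma*g/I0];
C=[1 0];
K=acker(A,B,[-15+40j -15-40j])             % state feedback gains, Eq. (9.165)
L=acker(A',C',[-60+160j -60-160j])'        % estimator gains, Eq. (9.166)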
The implementation code for the Discovery board is similar to the code in section 9.5.2.
Replace Code 9.7, Code 9.8, Code 9.9, and Code 9.10 with Code 9.11, Code 9.12, Code 9.13,
and Code 9.14, respectively.
/* USER CODE BEGIN PV */
long interrupt_counter;
long ref,data_counter;
short data[8001];
char data_flag,data_done;
long sampling_frequency;
int y,oldy;
float delt,control;
float k1,k2;
float Gamma=333.3;float g=9.8;float I0=0.817;float X0=0.023;
float x1hat,x1hat_old,x2hat,x2hat_old;
float l1,l2;
/* USER CODE END PV */
Code 9.11
}
if (data_flag==2) {
if (data_counter<=sampling_frequency*4) {
data[data_counter++]=(int16_t)y;
}
else {
data_done=1;
}
}
x1hat=x1hat_old+delt*(x2hat_old+l1*(oldy-x1hat_old));
x2hat=x2hat_old+delt*((2*g/X0)*x1hat_old-(0.3*Gamma*g/I0)*control+l2*(oldy-x1hat_old));
control=k1*(ref-x1hat)-k2*(x2hat);
x1hat_old=x1hat;
x2hat_old=x2hat;
oldy=y;
if (control > 2047) control = 2047;
if (control < -2048) control = -2048;
da_value = control + 2048;
HAL_DAC_SetValue(&hdac, DAC_CHANNEL_2, DAC_ALIGN_12B_R, (uint32_t)(da_value));
}
/* USER CODE END Callback 0 */
Code 9.14
Figure 9.39 shows the step response of the magnetic levitation system with the state estimator-
based controller. The response is under-damped, which is consistent with the complex closed-loop poles.
Figure 9.39 Step response of the magnetic levitation system: complex closed-loop poles
For comparison, let us choose a double pole at −30 as closed-loop poles for the state
feedback control and keep the estimator poles as in the above. The state feedback gains and
the estimator gains are found to be as follows:
Figure 9.40 Step response of the magnetic levitation system: real closed-loop poles
Problem
Problem 9.1 Consider the system in Example 9.1. Design the state feedback controller
so that the closed-loop system satisfies the following conditions.
(1) ωn = 10, ζ = 0.5
(2) ωn = 100, ζ = 0.5
(3) ωn = 10, ζ = 1
(4) ωn = 100, ζ = 1
Problem 9.2 Consider the system in Figure 9.41, where f is the force applied to the
mass, and y is the displacement of the mass. The parameters are as follows:
Define the state variables as the displacement and velocity of the mass. Design the
state feedback controller so that the closed-loop system has a double pole at −5.
Problem 9.3 Let us design the state feedback controller for the DC motor system in
Figure 9.42. Define the state variables as the motor angle θ and the angular velocity
dθ/dt. Determine the state feedback gains so that the characteristic equation of the
closed-loop system is as follows:
s^2 + 2\zeta\omega_n s + \omega_n^2 = 0
(1) Using the coefficient comparison method, find the state feedback gains so that the
closed-loop system has a double pole at −3.
(2) Using Ackermann’s formula, find the state feedback gains so that the closed-loop
system has a double pole at −3 and check if they are the same as in part (1).
(3) Using MATLAB, draw x1 and x2 responses. The initial conditions are x1 (0) =
1, x2 (0) = 0.
Problem 9.5 Let us design the state feedback controller for the following system.
" # " #
0 1 0
ẋ = x+ u
3 0 1
The requirement of the closed-loop system is that the damping coefficient ζ = 1 and
2% settling time is less than 2 seconds. Find the state feedback gains. Using MATLAB,
draw x1 and x2 responses. The initial conditions are x1 (0) = 1, x2 (0) = 0.
(1) Find the state feedback gains so that the closed-loop system satisfies the following
conditions.
ωn = 10, ζ = 0.5
(2) In addition to the conditions in part (1), the closed-loop system must follow a unit ramp reference without steady-state error. Find the feedforward term.
(3) Using MATLAB, draw the unit step and unit ramp responses for the controller
designed in part (2)
Problem 9.7 Consider the state feedback controller for the following systems.
" # " #
0 1 0
ẋ = x+ u
(a) 0 −1 1
h i
y= 1 0
" # " #
0 1 0
ẋ = x+ u
(b) 0 1 1
h i
y= 1 0
(1) Find the state feedback gains so that the closed-loop system has a double
pole at −3 and determine the feedforward term so that the closed-loop system has
zero steady-state error for a unit step input.
(2) Using MATLAB, draw the unit step response and the control signal for the control
systems designed in part (1). Compare the control systems for the above two systems
and discuss the results.
Problem 9.8 Consider the system in Example 9.1. Find the state feedback gains and
the feedforward term so that the closed-loop system has the following characteristic
equation and has zero steady-state error for a unit ramp input.
s^2 + 2\zeta\omega_n s + \omega_n^2 = 0
Problem 9.9 Consider the state estimator for the following system.
" #
0 1
ẋ = x
−2 −2
h i
y= 1 0 x
(1) Using the coefficient comparison method, find the state estimator gain so that the
estimator has a double pole at −10.
(2) Using Ackermann’s formula, find the state estimator gain so that the estimator has
a double pole at −10 and check if the gains are the same as in part (1)
(3) Using MATLAB, draw the state variables and the state estimates and check if the
state estimates converge to the states. Assume zero initial conditions for the state
estimates. The initial conditions for the state variables are x1 (0) = 1, x2 (0) = 0.
Problem 9.10 Derive the state estimator equation for the following system.
ẋ = Ax + Bu
y = Cx + Du
(1) Design a state estimator based controller. Place the state feedback controller poles
at −2 as a double pole and the state estimator poles at −10 as a double pole.
(2) Using MATLAB, draw the state variables and the state estimates and check if the
state estimates converge to the states. Assume zero initial conditions for the state
estimates. The initial conditions for the state variables are x1 (0) = 1, x2 (0) = 0.
(3) Using MATLAB, draw the Bode plot and find the phase margin.
(4) Add the feedforward term so that the closed-loop system has a zero steady-state
error for a unit step input. Using MATLAB, draw the step response.
Problem 9.12 Design the state estimator based controllers for the following system so
that the closed-loop pole requirements given below are satisfied.
" # " #
0 1 0
ẋ = x+ u
0 0 1
h i
y= 1 0 x
(a) Place the state feedback controller poles at −2 as a double pole and the state
estimator poles at −4 as a double pole.
(b) Place the state feedback controller poles at −2 as a double pole and the state
estimator poles at −8 as a double pole.
(c) Place the state feedback controller poles at −4 as a double pole and the state
estimator poles at −8 as a double pole.
(d) Place the state feedback controller poles at −4 as a double pole and the state
estimator poles at −16 as a double pole.
(1) Using MATLAB, draw the Bode plots and find the crossover frequencies for the
above cases. Discuss the relationship between the closed-loop pole locations and the
crossover frequencies.
(2) Introduce the reference input to the system in the unity feedback configuration.
Note that the closed-loop systems are type two systems since the controlled plant has a
double pole at the origin. Using MATLAB, draw the unit step responses for the above
cases. Discuss how the unit step responses are related to the closed-loop pole locations
and the crossover frequencies.
Problem 9.13 Let us design the state estimator based controller for the following
system.
G(s) = \frac{1}{s^3 + 3s^2 + 4s + 2}
(1) Find the controllable canonical form of the above system.
(2) Design the state estimator based controller using the state equation obtained in
part (1). Place the state feedback controller poles at −3 as a double pole and the state
estimator poles at −10 as a double pole.
(3) Find the feedforward term so that the closed-loop system designed in part (2) has a
zero steady-state error for the unit ramp input.
(4) Using MATLAB, draw the unit step and unit ramp responses of the system designed
in part (3).
(5) Using MATLAB, draw the Bode plot, and find the phase margin.
Problem 9.14 Propose the double integrator state feedback control system by extend-
ing the single integrator state feedback controller in section 9.4. Show that the double
integrator state feedback control system has a zero steady-state error for the ramp ref-
erence input.
Problem 9.15 Let us design the integral state feedback controller for the system in
Problem 6.
(1) Design the single integrator state feedback controller for the system in Problem 6
so that the system has a similar unit step response as in Problem 6.
(2) We can extend the single integrator state feedback controller to the double inte-
grator case (Refer to the result of Problem 14). Design the double integrator state
feedback controller for the system in Problem 6 so that the system has a similar unit
step response as in Problem 6. Show that the double integrator state feedback control
system is a type two system by finding the closed-loop transfer function.
Appendix
A. Laplace Transform Table
B. Mathematical formulas
B.1 Exponential function
e = 2.7182818284\ldots
e^x e^y = e^{x+y}
(e^x)^y = e^{xy}
B.4 Differentiation
(fg)' = f'g + fg'

\left(\frac{f}{g}\right)' = (fg^{-1})' = f'g^{-1} + f(g^{-1})' = f'g^{-1} - fg^{-2}g' = \frac{f'g - fg'}{g^2}

Integration by parts follows from the product rule:

(fg)' = f'g + fg'
\Rightarrow f'g = (fg)' - fg'
\Rightarrow \int f'g = \int (fg)' - \int fg'
\Rightarrow \int f'g = fg - \int fg'
Bibliography
[1] C-T Chen. Linear System Theory and Design. Oxford University Press, 1984.
[2] Richard C. Dorf and Robert H. Bishop. Modern Control Systems. 8th ed. Addison Wes-
ley, 1998.
[3] Gene F. Franklin, J. David Powell, and Abbas Emami-Naeini. Feedback Control of Dynamic Systems. 7th ed. Pearson, 2014.
[4] William H. Hayt, Jack E. Kemmerly, and Steven M. Durbin. Engineering Circuit Analy-
sis. 8th ed. McGraw-Hill, 2012.
[5] T. Kailath. Linear Systems. Prentice-Hall, 1980.
[6] Erwin Kreyszig. Advanced Engineering Mathematics. 10th ed. John Wiley & Sons, 2011.
[7] Benjamin C. Kuo and Farid Golnaraghi. Automatic Control Systems. 9th ed. John Wiley
& Sons, 2010.
[8] Feedback Instruments Ltd. Control in a MATLAB Environment Magnetic Levitation Sys-
tem 33-006. Feedback Instruments Ltd., 2003.
[9] S.J. Mason. “Feedback Theory-Further Properties of Signal Flow Graphs”. In: Proc. IRE
44 (1956–7), pp. 920–926.
[10] S.J. Mason. “Feedback Theory-Some Properties of Signal Flow Graphs”. In: Proc. IRE
41 (1953–9), pp. 1144–1156.
[11] Mathworks. Control System Toolbox. Mathworks, Inc., 2018.
[12] Mathworks. MATLAB. Mathworks, Inc., 2018.
[13] K. Ogata. Modern Control Engineering. 4th ed. Prentice-Hall, 2002.
[14] Bruce O. Watkins. “A Partial Fraction Algorithm”. In: IEEE Trans. Automatic Control
AC-16 (1971–10), pp. 489–491.