
ME 5659: Control Systems Engineering Spring 2022

Lecture 01: Overview of Control Systems


Tuesday January 18, 2022
Lecturer: Peter Whitney Scribe: Chunpeng Wang

1 Introduction
Two impressive videos (https://youtu.be/cyN-CRNrb3E and https://youtu.be/meMWfva-Jio)
show how feedback control can be applied to multi-dimensional complex dynamical systems.
• In the first video, a small cart is running on a rack connected to three separate pendulum links.
The motor position and all pendulum link angles can be captured by encoders, and all velocities
can be estimated by numerical differentiation. How is it possible to control all 8 system state
variables (positions and velocities) with only a single input?!
• In the second video, a similar setup is running, but subject to external disturbances. The
disturbance forces and their locations can’t be measured directly, but it is possible to com-
pensate for them through continuous feedback control—the effect of the disturbances is seen
indirectly in the joint angle measurements. A good, robust feedback controller can handle all
manner of external disturbances.

2 Signals & Systems


Feed-forward control system A picture of a general feed-forward control system is shown in
Fig. 1. In this picture,

[Figure 1: General feed-forward control system. The reference r(t) enters the controller (control logic followed by actuation), which produces the input u(t) to the dynamical system ("Plant"), which in turn produces the output y(t).]

• A plant is a dynamical system. It’s some system or process that changes with time. It has one
or more inputs and one or more outputs. Physically realizable systems are causal. Causality
refers to the flow of time; if a system is causal, then changes in the input will always precede
changes in the outputs.
• Inputs u(t) and outputs y(t) for a dynamical system, for example in the inverted pendulum
system, might be motor voltage and cart position. In general, inputs and outputs may be
vectors of signals.
• How do we generate the inputs to our plant? A controller containing control logic and actuation
is used. The control logic part takes one or more virtual signal inputs, such as a desired position
of the cart in the inverted pendulum example, to make decisions about the voltage we must
generate to give the desired behavior, at each instant of time. Our control logic is a dynamical
process like any other system, constantly changing its output(s) in response to changes in its
input(s). The actuation component actually generates the desired voltage for the cart. In a
motor control example, the whole controller can be code running on a microcontroller, and
the actuation would be the power amplifier and motor needed to drive the cart.
• The exogenous inputs for the controller are called the reference r(t), or target. It represents
the desired output signals, for example the desired cart position and pendulum angles in the
inverted pendulum system. In a feedback system, the controller will take additional signals as
inputs, such as measurements of the output.

Example A car with mass m on a frictionless surface is pushed with a force f (t). This results
in changes to the car position y(t) and velocity ẏ(t).

[Figure 2: 1 DOF car example. A car of mass m is pushed by a force f (t); the outputs are its position y(t) and velocity ẏ(t).]

From Newton’s second law, the car dynamics satisfy the equation of motion

mÿ(t) = f (t).

If the reference is a particular trajectory we want to follow, and we know the mass and
equation of motion precisely, we can invert this equation and calculate the exact f (t) that will give us the
desired y(t). If there is no disturbance (other uncontrolled or unmonitored forces on the car) and
we have perfect information about the plant, this works perfectly. The controller in this case is just
the inverse of the plant. The following picture shows this special feed-forward control system.

[Block diagram: r(t) → 1/P (controller) → P (plant) → y(t) = r(t)]

But we can easily imagine how this feed-forward control system is fragile. The car mass could
change or be different than expected, or there may be disturbances like wind force.
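As an illustrative sketch (not part of the original notes), the plant inversion above can be implemented numerically: given a sampled reference trajectory r(t), the feed-forward force is f(t) = m r̈(t), with the second derivative approximated by finite differences. The function name and the sinusoidal reference below are assumptions made for illustration only.

```python
import numpy as np

def feedforward_force(r, dt, m):
    """Feed-forward input f(t) = m * r_ddot(t) for the frictionless car,
    with the acceleration estimated by finite differences of the reference."""
    r_dot = np.gradient(r, dt)        # first derivative of the reference
    r_ddot = np.gradient(r_dot, dt)   # second derivative of the reference
    return m * r_ddot

# Example: follow a sinusoidal position reference with an assumed 2 kg car.
dt = 0.001                                 # sample time [s]
t = np.arange(0.0, 5.0, dt)
r = 0.5 * np.sin(2.0 * np.pi * 0.5 * t)    # desired position y(t) [m]
f = feedforward_force(r, dt, m=2.0)        # open-loop force command [N]
```

In the ideal case, applying this f(t) to mÿ = f reproduces r(t) exactly; any mass error or unmodeled wind force shows up directly as tracking error, which is the fragility noted above.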

Feedback control system A picture of a general feedback control system is shown in Fig. 3.
Comparing it with the feed-forward control system, we can see there are a few more components
in this system.

[Figure 3: General feedback system. The reference r(t) enters the control logic, which drives the actuation to produce the input u(t); the dynamical system ("Plant"), also subject to a disturbance d(t), produces the output y(t); a sensor, corrupted by noise n(t), produces the measurement ym(t), which is fed back to the control logic.]

• Sensor measurement. A sensor is used to measure the output of the plant. It can be
a speed sensor, position sensor, or temperature sensor. Technically, we can never know the
output directly. For example, some mechanism is needed to measure the speed. Because of

causality, the measured output ym (t) always has a little bit of delay or lag compared to real
output y(t). A perfect sensor would look like the following figure, with no delay and no noise.

[Diagram: perfect sensor modeled as a gain block with gain 1, mapping y(t) to ym(t)]

The triangle symbol is called a gain block, and the output signal is simply the input multiplied by
the constant inside the triangle. In this case, our perfect sensor would achieve ym (t) = y(t).
• Unfortunately, real sensor measurements always contain some noise, n(t). For example, a
velocity measurement might contain some high frequency noise because of electrical noise.
Because of the noisy measurement, we may need some kind of averaging or smoothing of the
measurement signals.
• The most common job for the control logic is to compare the reference r(t) with the measured
output ym (t). If these signals are equal, we don’t need to change anything to correct the output
y(t). A specific feedback controller may look like Fig. 4. In this example, the measured output
is subtracted from the reference, which results in an error signal e(t). The error is the difference
between where we want to be and where we think we are. A common PID controller sums up
a weighted combination of the error, the integral of the error, and the rate-of-change of the error
to give a single input u(t) for the plant (a simple code sketch of such a controller appears at
the end of this section). Consider what happens if the error is zero and unchanging with time:
the controller output will be zero; there is nothing for the controller to do, since we are on-target
already.

[Figure 4: General PID controller block diagram. The error e(t) = r(t) − ym(t) drives the control law u(t) = Kp e(t) + Kd ė(t) + Ki ∫ e(τ) dτ, with the integral taken from t0 to t.]

There are many reasons to employ feedback control:
• Stabilize unstable systems
• Improve performance (passive behavior → active behavior)


• Automation (reduce human monitoring and control)
• Reject disturbances
• Tolerate plant variation (e.g. the same controller needs to work with changing payloads of a
robot arm)
Sometimes our requirements may be in conflict. A controller may reject disturbances very well, but
might be more sensitive to noise or vice versa. Also, performance and relative stability may be in
conflict. For example, the early humanoid robots were designed to walk in a “crouch” posture to
increase stability, but this comes at the expense of speed and agility.
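The following is a minimal discrete-time sketch of the PID idea in Fig. 4, written in Python. The class name, gains, and sample time are illustrative assumptions, not something specified in the lecture; the derivative is approximated by a simple backward difference of the error.

```python
class PID:
    """Minimal discrete-time PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0      # running approximation of the integral of e
        self.prev_error = None   # previous error, for the derivative term

    def update(self, r, ym, dt):
        """Compute the plant input u given reference r, measurement ym, and step dt."""
        e = r - ym                           # where we want to be minus where we think we are
        self.integral += e * dt              # rectangular integration of the error
        de = 0.0 if self.prev_error is None else (e - self.prev_error) / dt
        self.prev_error = e
        return self.kp * e + self.ki * self.integral + self.kd * de

# Illustrative use with assumed gains: one control step at 100 Hz.
controller = PID(kp=2.0, ki=0.5, kd=0.1)
u = controller.update(r=1.0, ym=0.8, dt=0.01)
```

In practice the derivative term is usually filtered and the integral limited (anti-windup), but those refinements are beyond this overview.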

3 Classification of dynamical systems


Specific analysis methods and controller synthesis techniques are available for different types of
systems.
1. Continuous time system vs. Discrete time system
• A continuous time system, according to our current understanding, is how the physical
world works. For example, if a system is governed by Newton’s laws, the equations of motion
are expressed as ordinary or partial differential equations in continuous time, t.
• Discrete time systems include digital computers, where time is expressed with an index
k, representing a specific point at which data is sampled from an ADC, or the number of
times we have passed through a control loop. Discrete input and output
signals can be written as uk , yk or u[k], y[k]. Equations of motion for discrete systems are
expressed as difference equations. These equations use input and output values sampled
backwards in time (e.g. u[k − 1], the previous input) whereas differential equations are
expressed in first, second, or higher-order derivatives of the inputs and outputs.
2. SISO vs. MIMO
• SISO (single input, single output) systems
• MIMO (multiple input, multiple output) systems
MIMO systems can only be handled with “modern control” theory.

[Figure 5: Continuous time system vs. discrete time system. A continuous time system maps u(t) to y(t) through an ODE or PDE, e.g. mÿ(t) = f(t) or τ dy/dt + y = u, with signals plotted against t. A discrete time system maps uk or u[k] to yk or y[k] through a difference equation, e.g. y[k] = (u[k] + u[k−1])/2, with signals plotted against the index k.]
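As a small illustration (not from the notes themselves), the difference equation in Fig. 5 can be implemented directly as a loop over sample indices; the step input used here is an assumed example.

```python
def two_point_average(u):
    """Discrete time system y[k] = (u[k] + u[k-1]) / 2 from Fig. 5.
    For k = 0 the previous input u[-1] is taken as 0 (system initially at rest)."""
    y = []
    u_prev = 0.0
    for u_k in u:
        y.append((u_k + u_prev) / 2.0)   # uses the previous input u[k-1]: a causal system
        u_prev = u_k
    return y

# Step input: u[k] = 1 for all k; the output settles after one sample of lag.
print(two_point_average([1.0] * 5))   # [0.5, 1.0, 1.0, 1.0, 1.0]
```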

In classical control, we use the Laplace transform, and systems are represented by transfer func-
tions, which only apply to SISO systems. In modern control, we use linear algebra (state-space)
representations, which apply to SISO or MIMO systems.
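For instance (a worked example added here for illustration): taking the Laplace transform of the car dynamics mÿ(t) = f (t) from Fig. 2, with zero initial conditions, gives m s² Y(s) = F(s), so the transfer function from force to position is P(s) = Y(s)/F(s) = 1/(m s²). A transfer function like this relates exactly one input to one output, which is why the classical toolset is limited to SISO systems.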
3. Linear vs. Nonlinear
In short, a linear system must satisfy two properties: additivity and homogeneity. Nonlinear
systems have equations of motion that contain sinusoidal functions, power functions, or other
nonlinear terms. Recall the small car example in Fig. 2. That system is linear. If, instead,
the car is subject to a drag force fd = CD v² q∞, the system is no longer linear,

mÿ(t) + q∞ CD (ẏ)² = f (t).

[Diagram: car of mass m with applied force f (t) and drag force fd = CD v² q∞; outputs y(t), ẏ(t).]

In this class, we will focus on linear systems. For nonlinear systems, unfortunately, every
different type of nonlinearity is a field by itself and requires a different set of tools. There are
few general purpose tools that cover all types of nonlinearity. A common way to deal with
nonlinear systems is to linearize them around an equilibrium point or equilibrium trajectory.
Even though we will make an error in our model, robust feedback controllers will often still do
a good job even if synthesized using a linear system approximation.
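As a brief worked illustration (added here, not part of the lecture), linearizing the drag term about a constant cruise speed ẏ = v0 uses a first-order Taylor expansion: fd = q∞ CD ẏ² ≈ q∞ CD v0² + 2 q∞ CD v0 (ẏ − v0). Writing ẏ = v0 + δẏ and f = f0 + δf, where f0 = q∞ CD v0² is the force needed to hold the cruise speed, the perturbation dynamics become the linear equation m δÿ + 2 q∞ CD v0 δẏ = δf, which is a good approximation as long as δẏ stays small compared to v0.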

4. Time varying (TV) vs. time invariant (TI)
Again in the small car example, consider a more complex system, in which the mass of the car is
changing with time due to burning fuel. Let us assume the mass is decreasing exponentially
as m(t) = M0 e−t/τ. The equation of motion for this system becomes

[M0 e−t/τ] ÿ(t) + q∞ CD (ẏ)² = f (t).

We can see the coefficient of one of the terms in the ODE is changing with time.

[Diagram: car with time-varying mass M0 e−t/τ, applied force f (t), and drag force fd = CD v² q∞; outputs y(t), ẏ(t).]

For classical control methods, unfortunately, we cannot deal with time varying systems. Modern
control methods, on the other hand, can deal with both TI and TV systems.
5. Distributed systems vs. Lumped systems

[Figure 6: Front view of the U2 high-altitude aircraft. The wing of length L is modeled as a distributed system with deflection w(x, t) along the span x; an aileron at the wing tip deflects by an angle δ(t).]

We will start from an example for this part. In Fig. 6, we can see the aircraft has very long,
thin wings, which are very floppy. The wing can be modeled as an ideal Euler cantilevered
beam. The aileron at the wing tip can change its angle δ(t), to generate up-force or down-
force. Rather than a point mass (as in the accelerating car example), the wing has distributed
mass and distributed deflection along its length.
• Distributed systems
In a vibrating Euler beam, we must solve a partial differential equation (PDE) to find the
deflection of the wing, w(x, t) (the blue curve in Fig. 6). We can also consider each deflection
point as being connected to a small mass (the red block) from a vibration point of view. In this
system, the state variables or output variables are not just the tip deflection, but the whole
wing. Thus we require an infinite number of state variables, or degrees of freedom, continuously
distributed along the wing, to represent it. All real world (non-quantum) systems are
distributed systems.

[Figure 7: Mode shapes for the first three modes (n = 1, 2, 3) of a vibrating cantilever beam, plotted as deflection w against position x.]

• Lumped systems
Lumped systems have finite DOFs and are modeled by ODEs. In the wing example, the
first three vibrational mode shapes are shown in Fig. 7.

[Figure 8: Lumped element models of the wing. (a) 1 DOF: equivalent tip mass meq on an equivalent spring keq, with tip deflection wL and aerodynamic force Fa (t). (b) 2 DOF: masses m1 and m2 with displacements x1 , ẋ1 and x2 , ẋ2 and springs k1 and k2 .]

For the case of the n = 1 bending mode, we can model it as an equivalent 1-DOF lumped element
model as shown in Fig. 8a, where meq and keq are the wing tip equivalent mass
and stiffness, Fa is the aerodynamic input force, and wL is the tip deflection. In this special
case, the deflection can only behave like the n = 1 curve shown in Fig. 7. This lowest
mode represents the quasi-static behavior. If some tip-load is applied, no matter what
transient behavior the system has, it will eventually come to this steady position. Alternatively,
this lowest mode can be considered as the deflection of the system under a very low frequency input.

If the real system is under some high frequency input, such as high frequency turbulence,
the n = 1 mode may not fully describe the possible vibrations of the system. For example,
a 2-DOF lumped element model would look like Fig. 8b. By adding more lumped degrees of
freedom (moving the lumped model closer to the distributed system), we can track higher
frequency modes, but it makes the equivalent system more complex. So, there’s a trade-off
between model complexity and model accuracy. In this course, we will only learn how to
deal with lumped systems, which means we are only trying to solve a finite number of ODEs.
In the 2-DOF lumped element model, we have two second order ODEs for the two masses,
which means it’s a fourth order system. In state representation, it has four state variables:
x1 , ẋ1 , x2 , ẋ2 . State variables are the sufficient set of signals or “states” we need to fully
describe a trajectory of the system from any point in time. Often state variables are associated
with the set of initial conditions needed to integrate the equations of motion forward in time.
State variables “summarize the influence of prior inputs” on the system.
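To make the fourth order claim concrete, here is a small sketch (added for illustration, not from the lecture) that builds state-space matrices for a 2-DOF mass-spring chain like Fig. 8b. The topology assumed here, k1 from m1 to ground and k2 between m1 and m2 with the force Fa applied at m2, and all numerical values, are assumptions for the example.

```python
import numpy as np

# Assumed parameters for the 2-DOF lumped model (illustrative values only).
m1, m2 = 1.0, 0.5      # masses [kg]
k1, k2 = 200.0, 80.0   # spring stiffnesses [N/m]

# State vector x = [x1, x1_dot, x2, x2_dot]; input u = Fa applied to m2.
#   m1*x1_ddot = -k1*x1 + k2*(x2 - x1)
#   m2*x2_ddot = -k2*(x2 - x1) + Fa
A = np.array([
    [0.0,             1.0, 0.0,      0.0],
    [-(k1 + k2) / m1, 0.0, k2 / m1,  0.0],
    [0.0,             0.0, 0.0,      1.0],
    [k2 / m2,         0.0, -k2 / m2, 0.0],
])
B = np.array([[0.0], [0.0], [0.0], [1.0 / m2]])

# Two second order ODEs become four first order ODEs: x_dot = A x + B u.
print(A.shape)  # (4, 4): a fourth order system with four state variables
```

The four initial conditions x1(0), ẋ1(0), x2(0), ẋ2(0) are exactly what is needed to integrate ẋ = Ax + Bu forward in time.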

[Figure 9: One second delayed system. The block maps u(t) to y(t) = u(t − 1): the output at time t∗ is the input from one second earlier, t∗ − 1.]

Systems may also be temporally distributed. Consider a one second pure delay system, as in Fig. 9.
In order to provide the correct current output (and to continue to provide it in the future), we
need a buffer that stores the past one second of input, which is an infinite number of signals
(ε seconds prior, 2ε seconds prior, and so on). A lumped approximation, for example, could
divide the previous 1-second into 1/10 second intervals, and we could keep track of 10 previous
values of the input. We will see later on how best to approximate pure time delays with a
small number of states.
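As a final sketch (an addition for illustration), the lumped approximation of the delay described above can be implemented with a fixed-length buffer; the 10-sample buffer matches the 1/10 second intervals mentioned in the notes, and the class and sample rate are assumptions.

```python
from collections import deque

class LumpedDelay:
    """Approximate a 1-second pure delay by remembering the last n input samples."""

    def __init__(self, n_samples=10, initial=0.0):
        # With dt = 1/n seconds per sample, the oldest stored value is
        # approximately the input from one second ago.
        self.buffer = deque([initial] * n_samples, maxlen=n_samples)

    def step(self, u):
        """Push the current input and return the (approximately) 1-second-old input."""
        y = self.buffer[0]      # oldest sample, about one second in the past
        self.buffer.append(u)   # deque with maxlen drops the oldest automatically
        return y

# With 10 samples at 10 Hz, a step input appears at the output after 10 steps (1 s).
delay = LumpedDelay(n_samples=10)
outputs = [delay.step(1.0) for _ in range(12)]
print(outputs)  # ten 0.0 values, then 1.0
```

This finite buffer is exactly the trade-off discussed above: more stored samples approximate the pure delay more closely, at the cost of more states.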
