SysCon_Lec_Notes
※ Lecture 1 (1/23)
This course will have a midterm (25%) on 6 February in-class in Remsen 101, a final
(45%) on 14 March at 6-9pm EST in the same room, completion-based homework (20%),
and a project (10%) due on 11 March. TAs are Katia, Nina, Sabahat, Serena, Susan,
and Brian.
ASF 580.246 1.2 Introduction to control systems
In Linear Signals and Systems, we were introduced to the linear, time-invariant (LTI)
system H that transformed an input signal u into an output y for t ≥ 0:

u → [ H(s) ] → y
Let’s review the formal definitions of these ideas and how they motivate the analytic
approach of this course.
Linear maps
A map V → W between two vector spaces that preserves addition and scalar
multiplication (homogeneity of degree 1) is called linear. Formally, for V and W
vector spaces over the same field K, a function f : V → W is a linear map if, for
any two vectors v, u ∈ V and any scalar c ∈ K,

f(u + v) = f(u) + f(v) and f(cv) = c f(v).
Linearity preserves linear combinations. From this, the superposition principle allows
us to assert that a linear combination of inputs to a linear system produces the same
linear combination of the individual zero-state outputs (i.e., outputs with the initial
conditions set to zero) corresponding to those inputs. For a continuous-time system,
given arbitrary inputs u1(t) and u2(t), as well as their respective zero-state outputs
y1(t) = H{u1(t)} and y2(t) = H{u2(t)}, a linear system must satisfy

H{a u1(t) + b u2(t)} = a y1(t) + b y2(t)

for any scalars a and b.
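Superposition is easy to check numerically on a concrete LTI system. A quick sketch (my own check, not from lecture; the impulse response h and the inputs are arbitrary choices), using discrete convolution as the zero-state map H:

```python
import numpy as np

# A concrete LTI system: zero-state response via discrete convolution
# with an arbitrary (hypothetical) impulse response h.
h = np.array([1.0, 0.5, 0.25])

def H(u):
    """Zero-state output: convolution of input u with impulse response h."""
    return np.convolve(u, h)

rng = np.random.default_rng(0)
u1, u2 = rng.standard_normal(50), rng.standard_normal(50)
a, b = 2.0, -3.0

# Superposition: H{a*u1 + b*u2} must equal a*H{u1} + b*H{u2}.
lhs = H(a * u1 + b * u2)
rhs = a * H(u1) + b * H(u2)
assert np.allclose(lhs, rhs)
```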
A time-invariant system's input-output map does not depend explicitly on time. Given
y(t) = H{u(t)}, the system is time-invariant if a time delay on the input, u(t + δ),
directly equates to the same time delay of the output, y(t + δ). Let's do a quick exercise
to understand what this means.
Time-invariance
Consider two systems, y(t) = t u(t) and y(t) = 10 u(t). The system function for
the first system explicitly depends on t outside of u(t), so it is not time-invariant.
Given a delayed input ud(t) = u(t + δ), delaying the input to the first system gives
y1,A(t) = t ud(t) = t u(t + δ), while delaying its original output gives
y1,B(t) = y1(t + δ) = (t + δ) u(t + δ). Since y1,A(t) ≠ y1,B(t), we can confirm it is
time-non-invariant. (For the second system both orderings give 10 u(t + δ), so it is
time-invariant.)
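The same exercise can be run numerically. A sketch (my own, not from the notes; the test input sin t and the shift δ = 1 are arbitrary choices):

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)
delta = 1.0                       # arbitrary time shift
u = lambda t: np.sin(t)           # arbitrary test input

# System A: y(t) = t * u(t)   (explicit t-dependence -> not time-invariant)
# System B: y(t) = 10 * u(t)  (pure gain -> time-invariant)
sysA = lambda u_fn: (lambda t: t * u_fn(t))
sysB = lambda u_fn: (lambda t: 10.0 * u_fn(t))

u_delayed = lambda t: u(t + delta)

# Compare "delay the input, then apply the system" against
# "apply the system, then delay the output" on the same grid.
for system, name in [(sysA, "t*u"), (sysB, "10*u")]:
    y_delay_first = system(u_delayed)(t)
    y_system_first = system(u)(t + delta)
    invariant = np.allclose(y_delay_first, y_system_first)
    print(f"{name}: time-invariant = {invariant}")
```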
The system at the start of this subsection is written H(s) because LTI systems produce
outputs that are a convolution of their input signal with their impulse response:
y(t) = (u ∗ h)(t) = ∫_0^t u(t − τ) h(τ) dτ  (4)
Recall that the Fourier transform is simply a "slice" of the Laplace transform taken at
σ = 0 (we will expand on this idea more thoroughly later), so

Y(iω) = H(iω) U(iω) ⇌ y(t),  (6)

where the two sides are related by the transform pair F / F⁻¹.
We need to design u(t) ∀t ≥ 0 such that y(t) ≃ r(t) ∀t ≥ 0 where r(t) is a desired
reference signal that we know. u(t) is defined as the control input.
[Block diagram: r enters a summing junction (+/−) producing the error e = r − ym;
e passes through the controller K(s) to give the control u; u drives the plant H(s)
to give the output y; the measurement ym = G(s) y is fed back to the summing junction.]
ż = T⁻¹AT z = Dz,  (10)

where D = diag(λ1, . . . , λn), so each decoupled mode satisfies żn = λn zn.
Feeding u(t) = e^{st} into the convolution gives y(t) = e^{st} ∫ h(τ) e^{−sτ} dτ,
with the rightmost term simply being H(s). The complex exponentials are eigenfunctions
for all LTI systems. In general, for an LTI system H,

e^{st} → [ H(s) ] → H(s) e^{st}
The bilateral Laplace transform is

L{f(t)} = F(s) = ∫_{−∞}^{∞} f(t) e^{−st} dt,

with the typical unilateral transform integrating from 0 to ∞ on the real axis. The
Fourier transform is a special case of the bilateral Laplace transform. While the Fourier
transform of a function is a complex function of a real variable (frequency, or ω), the
Laplace transform of a function is a complex function of the complex variable s. The
Laplace transform is typically restricted to transformations of functions ∀t ≥ 0, which
makes it holomorphic in s (i.e., infinitely differentiable and locally equal to its own
Taylor series, making it analytically desirable). We can write that the Fourier transform
is equivalent to the bilateral Laplace transform with argument s = iω,

F{f(t)} = L{f(t)}|_{s=iω},

which is valid iff the region of convergence (ROC) of L{f(t)} contains the imaginary
axis, σ = 0. This is the "Fourier slice" that the Laplace transform must capture in order
to return the Fourier transform.
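The slice is easy to verify numerically. A sketch (my own check; f(t) = e^{−2t} u_s(t) and ω = 3 are arbitrary choices whose ROC, ℜ(s) > −2, contains the iω-axis):

```python
import numpy as np
from scipy.integrate import quad

# "Fourier slice": for f(t) = e^{-2t} u_s(t), the ROC contains the imaginary
# axis, so F{f}(omega) = L{f}(s)|_{s = i omega} = 1/(2 + i omega).
omega = 3.0

# Real and imaginary parts of the Fourier integral of f(t), t >= 0.
re, _ = quad(lambda t: np.exp(-2.0 * t) * np.cos(omega * t), 0.0, np.inf)
im, _ = quad(lambda t: -np.exp(-2.0 * t) * np.sin(omega * t), 0.0, np.inf)
F_numeric = re + 1j * im

assert np.isclose(F_numeric, 1.0 / (2.0 + 1j * omega))
```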
※ Lecture 2 (1/25) – Laplace Transforms and Transfer Functions
The bilateral Laplace transform is linear, which we can easily prove by its definition.
The list of these properties is too long to discuss one-by-one, so here is a big list of
them:
• L{e^{−at} u_s(t)} = 1/(s + a), ℜ(s) > −a.
• L{−e^{−at} u_s(−t)} = 1/(s + a), ℜ(s) < −a.
• L{t f(t)} = −d/ds [F(s)], no ROC change.
• L{δ(t − τ)} = e^{−sτ}, ∀s ∈ C.
• L{sin(ωt) u_s(t)} = ω/(s² + ω²), ℜ(s) > 0.
• L{cos(ωt) u_s(t)} = s/(s² + ω²), ℜ(s) > 0.
• L{f(at)} = (1/|a|) F(s/a), ROC scaled by a.
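Each pair can be spot-checked from the defining integral. A sketch for the first pair (my own check; a = 2 and the real test point s = 1, inside the ROC, are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

# Numerically verify L{e^{-at} u_s(t)} = 1/(s+a) at a real test point s
# inside the ROC (Re(s) > -a). The values of a and s are arbitrary.
a = 2.0
s = 1.0   # s > -a, so inside the ROC

integral, _ = quad(lambda t: np.exp(-a * t) * np.exp(-s * t), 0.0, np.inf)
assert np.isclose(integral, 1.0 / (s + a))
```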
The “general problem,” as Professor Sarma puts it, is, given an input-output model of
H (e.g., ẏ + ay = bu), y(0), and u(t) ∀t ≥ 0, how can we compute y(t) ∀t ≥ 0? We can
simply:
1. Compute H(s).
2. Compute U(s).
3. Form Y(s) = H(s)U(s), adding any initial-condition terms from the unilateral
transform of the differential equation.
4. Take the inverse Laplace transform to recover y(t).
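This recipe can be sketched in code (my own illustration, not from lecture; scipy's `lsim` handles the transform bookkeeping internally, and a = 3, b = 6, a step input, and zero initial conditions are arbitrary choices):

```python
import numpy as np
from scipy.signal import lti, lsim

# The "general problem" for H: y' + a y = b u, i.e. H(s) = b/(s + a).
# Drive H with a step input and compare against the analytic answer
# y(t) = (b/a)(1 - e^{-at}) for zero initial conditions.
a, b = 3.0, 6.0
H = lti([b], [1.0, a])          # numerator b, denominator s + a

t = np.linspace(0.0, 5.0, 501)
u = np.ones_like(t)             # step input for t >= 0
_, y, _ = lsim(H, U=u, T=t)     # zero initial state by default

y_exact = (b / a) * (1.0 - np.exp(-a * t))
assert np.allclose(y, y_exact, atol=1e-3)
```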
Now, we can write down a list of useful inverse Laplace transforms (assuming causal H):
• L⁻¹{1/(s + a)} = e^{−at} u_s(t), ℜ(s) > −a.
• L⁻¹{1/(s + a)} = −e^{−at} u_s(−t), ℜ(s) < −a.
• L⁻¹{1/s} = u_s(t), ℜ(s) > 0.
• L⁻¹{ω/(s² + ω²)} = sin(ωt) u_s(t), ℜ(s) > 0.
Now, if H is LTI and causal, then H(s) is rational in s, meaning that we can apply
partial fraction decomposition to it such that
H(s) = c1/(s − p1) + c2/(s − p2) + . . . + cn/(s − pn)  (16)

for poles p1, . . . , pn. Thus, the impulse response is simply a sum of scaled
exponentials: h(t) = (c1 e^{p1 t} + . . . + cn e^{pn t}) u_s(t).
Note that if we have repeated roots in the denominator, say, p1 = p2 , then we must write
that
H(s) = c1/(s − p1) + c2/(s − p1)².  (18)
For each distinct factor, include the denominator in first order. For quadratic factors
as² + bs + c, write (As + B)/(as² + bs + c). To solve, cover one denominator factor and
evaluate the rest at that factor's root (the cover-up method):

(s + 3)/((s + 2)(s + 4)) = A/(s + 2) + B/(s + 4),

with A = (−2 + 3)/(−2 + 4) = 1/2 and B = (−4 + 3)/(−4 + 2) = 1/2, so

(s + 3)/((s + 2)(s + 4)) = (1/2)/(s + 2) + (1/2)/(s + 4).  (19)
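scipy can reproduce the cover-up result for this same example (a quick check of my own, not from the notes):

```python
import numpy as np
from scipy.signal import residue

# Partial fraction decomposition of (s+3)/((s+2)(s+4)) = (s+3)/(s^2+6s+8).
# residue returns residues r, poles p, and the direct polynomial term k.
r, p, k = residue([1.0, 3.0], [1.0, 6.0, 8.0])

# Expect residues 1/2 at s = -2 and 1/2 at s = -4, no direct term.
assert np.allclose(np.sort(p), [-4.0, -2.0])
assert np.allclose(r, [0.5, 0.5])
```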
e^{z̄t} → [ H(s) ] → H(z̄) e^{z̄t} = 0
The zeroes z̄ of a system are absorbed frequencies that result in y(t) = 0 ∀t. We can find
them by solving H(z̄) = 0 for z̄.
As an example, a system can have z̄ = {−3, −3, ∞} (we are usually asked to find bounded
zeroes, so ∞ is almost always discarded) and p̄ = {−2, −1, −4}. For a causal system, the ROC is to the right
of the right-most pole. For an anti-causal system, the ROC is to the left of the left-most
pole. For other, intermediate cases, we have a non-causal system.
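A quick check of such zero/pole sets in code (my own sketch; the H(s) below is an assumed transfer function consistent with the sets quoted above, since the original expression did not survive in these notes):

```python
import numpy as np
from scipy.signal import tf2zpk

# An assumed transfer function with zeros {-3, -3} and poles {-2, -1, -4}:
#   H(s) = (s+3)^2 / ((s+1)(s+2)(s+4))
num = np.polymul([1.0, 3.0], [1.0, 3.0])                     # (s+3)^2
den = np.polymul([1.0, 1.0], np.polymul([1.0, 2.0], [1.0, 4.0]))

z, p, gain = tf2zpk(num, den)
assert np.allclose(np.sort(z), [-3.0, -3.0], atol=1e-5)
assert np.allclose(np.sort(p), [-4.0, -2.0, -1.0])
# Numerator degree 2 < denominator degree 3, so one zero sits at infinity.
```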
※ Lecture 3 (1/30)
A signal u(t) is bounded if ∃B > 0 s.t. max_{t∈ℝ} |u(t)| ≤ B. This is the bounded input,
which can be visualized as a signal confined to the band [−B, B].
An LTI system is BIBO-stable if, given a bounded u(t) ∀t, the output is similarly bounded
with some constant C:
max_{t∈ℝ} |y(t)| ≤ C max_{t∈ℝ} |u(t)|.  (21)
H(s) is BIBO-stable ⇐⇒ ∫_{−∞}^{∞} |h(t)| dt < ∞ ⇐⇒ the ROC of H(s) includes the
iω-axis ⇐⇒ all poles of a causal H(s) are in the left half of the s-plane (right half
for an anti-causal H(s)).
R∞
Reverse proof. Given an LTI system, |y(t)| ≤ |u ∗ h| ≤ maxt∈ℜ |u(t)| −∞ |h(t)| dτ . We
know that the integral is bounded by some C ≤ 0, so the max of y is constrained by the
upper bound of the input.
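The absolute-integrability test is easy to run numerically (my own sketch; the pole locations ∓2 are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

# BIBO check via absolute integrability of the impulse response.
# Stable causal pole at s = -2: h(t) = e^{-2t} u_s(t), integral = 1/2 < inf.
val_stable, _ = quad(lambda t: abs(np.exp(-2.0 * t)), 0.0, np.inf)
assert np.isclose(val_stable, 0.5)

# Unstable pole at s = +2: h(t) = e^{2t} u_s(t) is not absolutely integrable;
# its truncated integral (e^{2T} - 1)/2 grows without bound with the horizon T.
T = np.array([1.0, 5.0, 10.0])
truncated = [(np.exp(2.0 * Ti) - 1.0) / 2.0 for Ti in T]
assert truncated[0] < truncated[1] < truncated[2]
```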
We can write a block diagram for this first order system (with no feedforward term) as:
Or, in series, (s + 3)/(s + 2) · 1/(s + 4) = (1 + 1/(s + 2)) · 1/(s + 4):
Note that we have a d = 1 feedforward term in the H1 block diagram which is realized
as a wire.
※ Lecture 4 (2/1) – State Space Models
Here, the state vector x ∈ Rn , output vector y ∈ Rq , input (or control) vector u ∈ Rp ,
state matrix A ∈ R(n×n) , input matrix B ∈ R(n×p) , output matrix C ∈ R(q×n) , and
feedforward matrix D ∈ R(q×p) . Let’s do an example. Suppose we are given the block
diagram:
and

y = [0 1] [x1; x2] + [0] u.  (26)
4.2 Examples
※ Smash-cut of remaining content
we can either proceed by drawing a block diagram and parsing out the elements of the
matrices or by recalling that
H(s) ≡ Y(s)/U(s) = C(sI − A)⁻¹B + D.  (28)
I will start with the second option first. We must convert H(s) into a first order system,
which we can do by PFD:
H(s) = (s + 1)/((s + 2)(s + 3)) = −1/(s + 2) + 2/(s + 3).  (29)
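We can confirm equation (28) on the diagonal realization suggested by this decomposition (my own sketch; the realization A = diag(−2, −3), B = [1; 1], C = [−1, 2], D = 0 is one valid choice, not necessarily the one used in lecture):

```python
import numpy as np
from scipy.signal import ss2tf

# H(s) = C (sI - A)^{-1} B + D for a diagonal realization of
# H(s) = -1/(s+2) + 2/(s+3) = (s+1)/((s+2)(s+3)).
A = np.array([[-2.0, 0.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[-1.0, 2.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
assert np.allclose(den, [1.0, 5.0, 6.0])          # (s+2)(s+3) = s^2+5s+6
assert np.allclose(num.ravel()[-2:], [1.0, 1.0])  # numerator s + 1
```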
BD → SS. For A ∈ ℝ^{2×2}, B ∈ ℝ^{2×1}, C ∈ ℝ^{1×2}, and D ∈ ℝ, our state and output
equations are

ẋ = [ẋ1; ẋ2] = [u − a·x1; x2],  y = x2.  (31)
and

y = [0 1] [x1; x2] + [0] u.  (33)
Our task is to compute Aⁿ. If A is sparse, we can find a pattern directly by induction.
Otherwise, we can attempt an eigenvalue decomposition:
1. Find the eigenvalues λi of A.
2. Find the corresponding eigenvectors vi.
3. Assemble V = [v1 . . . vn] and D = diag(λ1, . . . , λn), so A = V D V⁻¹.
4. Finally, Aⁿ = (V D V⁻¹)ⁿ = V Dⁿ V⁻¹.
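The steps above can be sketched numerically (my own check; A = [[1, 1], [1, 0]] and n = 10 are arbitrary choices):

```python
import numpy as np

# A^n via eigendecomposition: A = V D V^{-1}  =>  A^n = V D^n V^{-1}.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])      # diagonalizable example (Fibonacci matrix)
n = 10

evals, V = np.linalg.eig(A)     # steps 1-3: eigenvalues and eigenvectors
Dn = np.diag(evals ** n)        # powering the diagonal is trivial
An_eig = V @ Dn @ np.linalg.inv(V)

An_direct = np.linalg.matrix_power(A, n)
assert np.allclose(An_eig, An_direct)
```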
Continuous-time. For ẋ = Ax + Bu with output y = Cx + Du, the solution is

x(t) = e^{At} x(0) + ∫_0^t e^{A(t−τ)} B u(τ) dτ,
y(t) = C x(t) + D u(t).
Recall the power series expansion: e^{At} = I + At + (At)²/2! + . . .. We can compute this
directly if A is sparse, via eigenvalue decomp. (same as before, except eAt = V eDt V −1 ),
or compute in the Laplace domain. L{eAt } = (sI − A)−1 , so X(s) = (sI − A)−1 [x(0) +
BU(s)] and x(t) is just the inverse Laplace. Remark. The inverse of a (2 × 2) matrix

A = [a b; c d]  ⇒  A⁻¹ = (1/det(A)) [d −b; −c a],  (34)
where det(A) = ad − bc. For instance, suppose we want to compute e^{At} for

ẋ = [1 1; 1 0] x + [0; 1] u,  and  y = [1 0] x + u.
sI − A = [s − 1  −1; −1  s]
⇒ (sI − A)⁻¹ = (1/(s(s − 1) − 1)) [s  1; 1  s − 1]
= [s/(s² − s − 1)  1/(s² − s − 1); 1/(s² − s − 1)  (s − 1)/(s² − s − 1)]
= [ ((5+√5)/10)/(s − (1+√5)/2) + ((5−√5)/10)/(s − (1−√5)/2)    (√5/5)(1/(s − (1+√5)/2) − 1/(s − (1−√5)/2)) ;
(√5/5)(1/(s − (1+√5)/2) − 1/(s − (1−√5)/2))    ((5−√5)/10)/(s − (1+√5)/2) + ((5+√5)/10)/(s − (1−√5)/2) ]

⇒ e^{At} = u_s(t) [ (5+√5)/10 e^{(1+√5)t/2} + (5−√5)/10 e^{(1−√5)t/2}    (√5/5)(e^{(1+√5)t/2} − e^{(1−√5)t/2}) ;
(√5/5)(e^{(1+√5)t/2} − e^{(1−√5)t/2})    (5−√5)/10 e^{(1+√5)t/2} + (5+√5)/10 e^{(1−√5)t/2} ]
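This closed form can be verified against scipy's matrix exponential (my own check, not from the notes):

```python
import numpy as np
from scipy.linalg import expm

# Verify the closed-form e^{At} for A = [[1, 1], [1, 0]], whose
# eigenvalues are (1 +/- sqrt(5))/2.
A = np.array([[1.0, 1.0], [1.0, 0.0]])
s5 = np.sqrt(5.0)
lp, lm = (1 + s5) / 2, (1 - s5) / 2    # lambda_plus, lambda_minus

def eAt_closed(t):
    """Closed-form matrix exponential from the partial fraction expansion."""
    ep, em = np.exp(lp * t), np.exp(lm * t)
    off = (s5 / 5) * (ep - em)
    return np.array([
        [(5 + s5) / 10 * ep + (5 - s5) / 10 * em, off],
        [off, (5 - s5) / 10 * ep + (5 + s5) / 10 * em],
    ])

for t in [0.0, 0.5, 1.0, 2.0]:
    assert np.allclose(eAt_closed(t), expm(A * t))
```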
[Figure: first-order step response y(t) rising from 0 toward its steady-state value 1
over t ∈ [0, 5].]
Performance metrics. The rise time tR = t2 − t1 ≃ 2.2/σ, where t1 is where y(t) first
reaches 10% of its final value ySS and t2 is where y(t) first reaches 90% of ySS. A
larger |σ| yields a smaller rise time, so we reach ySS more quickly. The time constant
τ = 1/σ is the time needed for y(t) to reach 0.63 ySS; tR ≃ 2.2τ.
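The rise-time rule of thumb checks out numerically (my own sketch; σ = 4 is an arbitrary choice, and the exact value is tR = ln(9)/σ ≈ 2.2/σ):

```python
import numpy as np

# Rise time of the first-order step response y(t) = 1 - e^{-sigma t}.
sigma = 4.0
t = np.linspace(0.0, 3.0, 200001)
y = 1.0 - np.exp(-sigma * t)   # steady-state value y_SS = 1

t1 = t[np.argmax(y >= 0.1)]    # first crossing of 10% of y_SS
t2 = t[np.argmax(y >= 0.9)]    # first crossing of 90% of y_SS
tR = t2 - t1
assert np.isclose(tR, 2.2 / sigma, rtol=0.01)   # ln(9) ~ 2.197
```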
※ Personal Project (Spring Break) – Fixing the Mass Balance: a PKPD Model
Second order. For H(s) = (ω² + σ²)/((s + σ)² + ω²), the step response is
y(t) = (1 − e^{−σt}(cos ωt + (σ/ω) sin ωt)) u_s(t):
[Figure: second-order step response y(t) overshooting and then settling to 1 over
t ∈ [0, 4].]
Here, tR ≃ 1.8/√(σ² + ω²); the settling time ts ≃ 4.6/σ, where ts satisfies
|y(ts) − 1| ≤ e^{−σts} < 0.01; and the overshoot parameter
Mp = max y(t) − 1 = e^{−πσ/ω}.
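The overshoot formula can be checked numerically (my own sketch; σ = 0.5 and ω = 2 are arbitrary choices, and the first peak occurs at t = π/ω):

```python
import numpy as np

# Overshoot of the second-order step response
# y(t) = 1 - e^{-sigma t} (cos(omega t) + (sigma/omega) sin(omega t)):
# the first peak is at t = pi/omega with M_p = e^{-pi sigma / omega}.
sigma, omega = 0.5, 2.0
t = np.linspace(0.0, 20.0, 200001)
y = 1.0 - np.exp(-sigma * t) * (np.cos(omega * t)
                                + (sigma / omega) * np.sin(omega * t))

Mp = y.max() - 1.0
assert np.isclose(Mp, np.exp(-np.pi * sigma / omega), rtol=1e-3)
```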
Feedback control. If σ is small and the response is slow, we can make it faster by
adding feedback control with r(t) and e(t) passing through K, a gain.
model to change its first-order exchange parameters and to make meaningful statements
about its behavior with respect to them. So, we can rewrite this system as:

[ẋ_C; ẋ_E] = [−k_CE − k_CO  k_EC; k_CE  −k_EC] [x_C; x_E] + [1; 0] u.
6.1 Reachability
For this,

R ≡ [AB  B] = [−k_CE − k_CO  1; k_CE  0]
⟹ det(R) = −k_CE.
For the system to be reachable, det(R) ≠ 0, therefore k_CE ≠ 0. There must
always be a degree of preferential flow out of the central compartment.
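The reachability test is quick to run in code (my own sketch; the rate constants are hypothetical positive test values):

```python
import numpy as np

# Reachability of the two-compartment model: R = [B  AB] must be full rank.
kCE, kCO, kEC = 0.3, 0.1, 0.2   # hypothetical rate constants
A = np.array([[-kCE - kCO, kEC],
              [kCE, -kEC]])
B = np.array([[1.0], [0.0]])

R = np.hstack([B, A @ B])
assert np.linalg.matrix_rank(R) == 2           # reachable since kCE != 0
assert np.isclose(abs(np.linalg.det(R)), kCE)  # |det R| = kCE
```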