
Course Notes: 580.246 Systems and Controls


Spring 2024 – JHU WSE

Alejandro Soto Franco

Last updated: August 19, 2024

Contents

Lecture 1 (1/23) – Representations of Systems
  1.1 Course logistics
  1.2 Introduction to control systems
  1.3 Eigenfunctions of LTI systems
Lecture 2 (1/25) – Laplace Transforms and Transfer Functions
  2.1 Properties of the Laplace transform
  2.2 Partial fraction decomposition
  2.3 Zeroes of a transfer function
  2.4 Poles of a transfer function
Lecture 3 (1/30) – Transfer Functions and Block Diagrams
  3.1 BIBO stability
  3.2 Elementary block diagrams
Lecture 4 (2/1) – State Space Models
  4.1 Introduction to state space
  4.2 Examples
Smash-cut of remaining content
Personal Project (Spring Break) – Fixing the Mass Balance: a PKPD Model
  6.1 Reachability

※ Lecture 1 (1/23) – Representations of Systems

1.1 Course logistics

This course will have a midterm (25%) on 6 February in-class in Remsen 101, a final (45%) on 14 March at 6–9pm EST in the same room, completion-based homeworks (20%), and a project (10%) due on 11 March. TAs are Katia, Nina, Sabahat, Serena, Susan,
and Brian.

1.2 Introduction to control systems

In Linear Signals and Systems, we were introduced to the linear, time-invariant (LTI) system H that transforms an input signal u into an output y for t ≥ 0.

[Block diagram: u → H(s) → y]

Let’s review the formal definitions of these ideas and how they motivate the analytic
approach of this course.

Linear maps

A map V → W between two vector spaces that preserves addition and scalar
multiplication (homogeneity of degree 1) is called linear. Formally, for V and W
vector spaces over the same field K, a function f : V → W is a linear map if, for
any two vectors v, u ∈ V and any scalar c ∈ K, we can satisfy:

1. Additivity: f (u + v) = f (u) + f (v)

2. Scalar multiplication (degree 1 homogeneity): f (cu) = cf (u)

By induction on these two properties, for any vectors un ∈ V and scalars cn ∈ K,

f (c1 u1 + . . . + cn un ) = c1 f (u1 ) + . . . + cn f (un ). (1)

Linearity preserves linear combinations. From this, the superposition principle allows us to assert that a linear combination of inputs to a linear system produces the same linear combination of the individual zero-state outputs (i.e., outputs with the initial conditions set to zero) corresponding to those inputs. For a continuous-time system,
given arbitrary inputs u1 (t) and u2 (t), as well as their respective zero-state outputs
y1 (t) = H{u1 (t)} and y2 (t) = H{u2 (t)}, then a linear system must satisfy

c1 y1 (t) + c2 y2 (t) = H{c1 u1 (t) + c2 u2 (t)}. (2)

A time-invariant system has a system function that does not depend explicitly on time. Given y(t) and u(t), a system is time-invariant if a time delay on the input, u(t + δ), produces exactly the same time delay on the output, y(t + δ). Let's do a quick exercise to understand what this means.


Time-invariance
Consider two systems, y(t) = tu(t) and y(t) = 10u(t). The system function of the first explicitly depends on t outside of u(t), so it is not time-invariant. Delaying the output of each system by δ gives

y1,B (t) = (t + δ)u(t + δ) and y2,B (t) = 10u(t + δ). (3)

For the non-invariant system, if we instead start with a delayed input ud (t) = u(t + δ), we get y1,A (t) = tud (t) = tu(t + δ). Since y1,A (t) ̸= y1,B (t), we confirm it is not time-invariant.
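We can sanity-check this exercise numerically. The following sketch (Python; the test input u(t) = sin t and the delay δ = 2 are arbitrary choices, not from the course) compares the response to a delayed input against the delayed response:

```python
import numpy as np

# Compare the response to a delayed input against the delayed response for
# y(t) = t*u(t) (not time-invariant) and y(t) = 10*u(t) (time-invariant).
t = np.linspace(0, 10, 1001)
delta = 2.0
u = lambda tt: np.sin(tt)

for name, H in [("y = t*u(t)", lambda uv, tt: tt * uv),
                ("y = 10*u(t)", lambda uv, tt: 10 * uv)]:
    y_A = H(u(t + delta), t)          # response to the delayed input u(t + delta)
    y_B = H(u(t + delta), t + delta)  # delayed response y(t + delta)
    print(name, "time-invariant:", np.allclose(y_A, y_B))
```

This prints False for the first system and True for the second, matching the analysis above.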

The system at the start of this subsection is written H(s) because LTI systems produce
outputs that are a convolution of their input signal with their impulse response:
y(t) = (u ∗ h)(t) = ∫_0^t u(t − τ) h(τ) dτ (4)

or, in the Laplace domain, given s = σ + iω,


Y (s) = H(s)U (s) ⇌ y(t), (5)

where the forward direction of the pair is L⁻¹ and the reverse is L.

Recall that the Fourier transform is simply a “slice” of the Laplace transform taken at σ = 0 (we will expand on this idea more thoroughly later), so

Y (iω) = H(iω)U (iω) ⇌ y(t), (6)

with F⁻¹ and F as the corresponding forward and reverse maps.

We need to design u(t) ∀t ≥ 0 such that y(t) ≃ r(t) ∀t ≥ 0 where r(t) is a desired
reference signal that we know. u(t) is defined as the control input.

[Block diagram: r → (+) summer → e → K(s) → u → H(s) → y, with y → G(s) → ym fed back into the summer]

Here, we have a system H, controller K, sensor G, and an interconnecting summer.

1.3 Eigenfunctions of LTI systems

Consider a square matrix A ∈ C^{n×n}. An eigenvector v of A is a non-zero vector such that Av = λv, where λ ∈ C is the corresponding eigenvalue. The equation Av = λv can be rewritten as (A − λI)v = 0, where I is the identity matrix. This implies that the set


of eigenvectors corresponding to a specific eigenvalue λ (together with the zero vector) forms a subspace, known as the eigenspace, denoted by Eλ.
Let’s do a simple problem from Brunton’s linear systems video [1]. Given ẋ = Ax for x ∈ R^n, where x(t) = e^{At} x(0), we can recall that the Taylor expansion of the matrix exponential is

e^{At} = I + At + A²t²/2! + A³t³/3! + … (7)
and that we can define, for Aξ = λξ, two matrices: the transformation matrix

T = [ξ₁ ξ₂ … ξₙ] (8)

and the eigenvalue diagonal matrix

D = diag(λ₁, λ₂, …, λₙ). (9)

Fundamentally, AT = TD. If we suppose x = Tz, then ẋ = Tż = Ax. Therefore,

ż = T⁻¹ATz = Dz, (10)

which we can write explicitly as

d/dt [z₁, z₂, …, zₙ]ᵀ = diag(λ₁, λ₂, …, λₙ) [z₁, z₂, …, zₙ]ᵀ, (11)

i.e., every mode decouples into żᵢ = λᵢzᵢ.

So, by simply finding D = T⁻¹AT (potentially using MATLAB), we can solve z(t) = e^{Dt} z(0) as

z(t) = diag(e^{λ₁t}, e^{λ₂t}, …, e^{λₙt}) z(0). (12)
Now, as nice as these eigenvectors are, we may want to work back to the ẋ = Ax expression. To do this, we note that A = TDT⁻¹, so that e^{At} = Te^{Dt}T⁻¹. With more careful inspection, we can note that z(0) = T⁻¹x(0) and z(t) = e^{Dt} z(0).
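A minimal numeric sketch of this eigendecomposition route to e^{At} (the matrix A below is an arbitrary diagonalizable example, not one from the course):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # eigenvalues -1 and -2, so A is diagonalizable
t = 1.5

lam, T = np.linalg.eig(A)             # columns of T are the eigenvectors xi_i
eDt = np.diag(np.exp(lam * t))        # e^{Dt} = diag(e^{lambda_i * t})
eAt = T @ eDt @ np.linalg.inv(T)      # e^{At} = T e^{Dt} T^{-1}

print(np.allclose(eAt, expm(A * t)))  # True: matches scipy's matrix exponential
```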
That was the linear algebraic approach to eigenfunctions. In the analytic approach of this course, if we suppose u(t) = e^{st} ∀t, s ∈ C, then, for an LTI system H,

y(t) = ∫_{−∞}^{∞} e^{s(t−τ)} h(τ) dτ = e^{st} ∫_{−∞}^{∞} e^{−sτ} h(τ) dτ (13)


with the rightmost term simply being H(s). The complex exponentials are eigenfunctions
for all LTI systems. In general, for an LTI system H,

[Block diagram: e^{st} → H(s) → H(s)e^{st}]

which is a fascinating property! Recall that the bilateral Laplace transform is

L{f (t)} = ∫_{−∞}^{∞} e^{−st} f (t) dt (14)

with the typical unilateral transform integrating from 0 to ∞ on the real axis. The Fourier transform is a special case of the bilateral Laplace transform. While the Fourier transform of a function is a complex function of a real variable (frequency, or ω), the Laplace transform of a function is a complex function of the complex variable s. The Laplace transform is typically restricted to transformations of functions ∀t ≥ 0, which makes it holomorphic in s on its region of convergence (i.e., infinitely differentiable and locally equal to its own Taylor series, making it analytically desirable). We can write that the Fourier transform is equivalent to the bilateral Laplace transform with argument s = iω,

f̂(ω) = L{f (t)}|_{s=iω} = ∫_{−∞}^{∞} e^{−iωt} f (t) dt,

which is valid iff the region of convergence (ROC) of L{f (t)} contains the imaginary
axis, σ = 0. This is the “Fourier slice” that the Laplace transform must capture in order
to return the Fourier transform.

※ Lecture 2 (1/25) – Laplace Transforms and Transfer Functions

2.1 Properties of the Laplace transform

The Laplace transform is linear, which we can easily prove from its definition.

Proof: Linearity of the Laplace transform

Given two functions in the time domain, f (t) and g(t),


L(αf (t) + βg(t)) = ∫_{0⁻}^{∞} (αf (t) + βg(t)) e^{−st} dt
= αL(f (t)) + βL(g(t))
= αF (s) + βG(s), with ROC ⊇ ROC_F ∩ ROC_G. □ (15)


There are too many of these properties to discuss one by one, so here is a big list of them:

Useful properties of the Laplace transform

• L{e^{−at} u_s(t)} = 1/(s + a), ℜ(s) > −a.
• L{−e^{−at} u_s(−t)} = 1/(s + a), ℜ(s) < −a.
• L{af (t) + bg(t)} = aF (s) + bG(s), ROC ⊇ ROC_F ∩ ROC_G.
• L{tf (t)} = −d/ds [F (s)], no ROC change.
• L{tⁿ f (t)} = (−1)ⁿ F⁽ⁿ⁾(s).
• L{f ′(t)} = sF (s) − f (0⁻), same ROC.
• L{f (t − τ)} = e^{−sτ} F (s), same ROC.
• L{e^{−at} f (t)} = F (s + a), ROC shifted by −a.
• L{δ(t − τ)} = e^{−sτ}, ∀s ∈ C.
• L{sin(ωt) u_s(t)} = ω/(s² + ω²), ℜ(s) > 0.
• L{cos(ωt) u_s(t)} = s/(s² + ω²), ℜ(s) > 0.
• L{e^{−at} cos(ωt) u_s(t)} = (s + a)/((s + a)² + ω²), ℜ(s) > −a.
• L{e^{−at} sin(ωt) u_s(t)} = ω/((s + a)² + ω²), ℜ(s) > −a.
• L{u_s(t)} = 1/s, ℜ(s) > 0.
• L{−u_s(−t)} = 1/s, ℜ(s) < 0.
• L{f (at)} = (1/|a|) F (s/a), ROC scaled by a.
• L{f ∗ g} = F (s)G(s), ROC ⊇ ROC_F ∩ ROC_G.
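Two entries from this table can be spot-checked with sympy (a sketch; sympy returns the transform together with its convergence information, so we index the first element):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')
a, w = sp.symbols('a omega', positive=True)

print(sp.laplace_transform(sp.exp(-a*t), t, s)[0])  # 1/(a + s)
print(sp.laplace_transform(sp.cos(w*t), t, s)[0])   # s/(omega**2 + s**2)
```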

The “general problem,” as Professor Sarma puts it, is: given an input-output model of H (e.g., ẏ + ay = bu), y(0), and u(t) ∀t ≥ 0, how can we compute y(t) ∀t ≥ 0? We can simply:

1. Compute H(s).

2. Compute U (s).

3. Compute Y (s) = H(s)U (s), keeping track of the ROC.

4. Compute y(t) = L⁻¹{Y (s)} for t ≥ 0 (see the sketch below).
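Here is a sympy sketch of the four steps for the example model ẏ + ay = bu, with a unit-step input, zero initial conditions, and a, b left symbolic:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')
a, b = sp.symbols('a b', positive=True)

H = b / (s + a)            # 1. transfer function of ydot + a*y = b*u
U = 1 / s                  # 2. L{u_s(t)} = 1/s, Re(s) > 0
Y = sp.apart(H * U, s)     # 3. Y(s) = H(s)U(s), split into partial fractions
y = sp.inverse_laplace_transform(Y, s, t)   # 4. back to the time domain
print(sp.simplify(y))      # (b/a)*(1 - exp(-a*t))*Heaviside(t), up to rearrangement
```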


Now, we can write down a list of useful inverse Laplace transforms (assuming causal H):

Useful properties of the inverse Laplace transform

• 1/(s + a), ℜ(s) > −a ⟶ e^{−at} u_s(t).

• 1/(s + a), ℜ(s) < −a ⟶ −e^{−at} u_s(−t).

• 1/s, ℜ(s) > 0 ⟶ u_s(t).

• ω/(s² + ω²), ℜ(s) > 0 ⟶ sin(ωt) u_s(t).

(Each arrow denotes L⁻¹.)

Now, if H is LTI, causal, and described by a linear constant-coefficient ODE, then H(s) is rational in s, meaning that we can apply partial fraction decomposition to it such that

H(s) = c₁/(s − p₁) + c₂/(s − p₂) + … + cₙ/(s − pₙ) (16)

for poles p₁, …, pₙ. Thus, the impulse response is simply a sum of scaled exponentials:

h(t) = (c₁ e^{p₁t} + … + cₙ e^{pₙt}) u_s(t). (17)

Note that if we have repeated roots in the denominator, say, p₁ = p₂, then we must write

H(s) = c₁/(s − p₁) + c₂/(s − p₁)². (18)

2.2 Partial fraction decomposition


In a proper rational P (s)/Q(s), deg P < deg Q. If improper, divide to yield P (s)/Q(s) = F (s) + R(s)/Q(s), where F is a polynomial and the remaining fraction is proper. If the denominator contains a repeated factor (as + b)^α, we write A₁/(as + b) + A₂/(as + b)² + … + A_α/(as + b)^α. For each distinct linear factor, include the denominator in first order only. For an irreducible quadratic factor as² + bs + c, write (As + B)/(as² + bs + c). To solve, cover up one denominator factor and evaluate the rest at its root to resolve that coefficient. For example,

(s + 3)/((s + 2)(s + 4)) = A/(s + 2) + B/(s + 4), with A = (−2 + 3)/(−2 + 4) = 1/2 and B = (−4 + 3)/(−4 + 2) = 1/2, so H(s) = (1/2)/(s + 2) + (1/2)/(s + 4). (19)


2.3 Zeroes of a transfer function

[Block diagram: e^{z̄t} → H(s) → H(z̄)e^{z̄t} = 0]

The zeroes z̄ of a system are absorbed frequencies that result in y(t) = 0 ∀t. We can find
them by solving H(z̄) = 0 for z̄.

2.4 Poles of a transfer function


If H(s) = P (s)/Q(s), then p̄ is a pole if H(p̄) diverges (i.e., Q(p̄) = 0). These are the natural frequencies of the system. For example, if

H(s) = (s² + 6s + 9)/((s + 4)(s² + 3s + 2)) = (s + 3)²/((s + 4)(s + 2)(s + 1)), ℜ(s) > −1, (20)

then z̄ = {−3, −3, ∞} (we are usually asked to find bounded zeroes, so ∞ is almost
always discarded) and p̄ = {−2, −1, −4}. For a causal system, the ROC is to the right
of the right-most pole. For an anti-causal system, the ROC is to the left of the left-most
pole. For other, intermediate cases, we have a non-causal system.
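A quick numeric cross-check (sketch) of the bounded zeroes and poles of eq. (20) using scipy:

```python
import scipy.signal as sig

num = [1, 6, 9]          # s^2 + 6s + 9
den = [1, 7, 14, 8]      # (s + 4)(s^2 + 3s + 2) expanded
z, p, k = sig.tf2zpk(num, den)
print(z, p, k)           # zeroes ~ [-3, -3], poles ~ [-4, -2, -1], gain 1.0
```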

※ Lecture 3 (1/30) – Transfer Functions and Block Diagrams

3.1 BIBO stability

A signal u(t) is bounded if ∃B > 0 s.t. max_{t∈R} |u(t)| ≤ B. This is the bounded input.

An LTI system is BIBO-stable if, given a bounded u(t) ∀t, the output is similarly bounded with some constant C:

max_{t∈R} |y(t)| ≤ C max_{t∈R} |u(t)|. (21)

H(s) is BIBO-stable ⟺ ∫_{−∞}^{∞} |h(t)| dt < ∞ ⟺ the ROC of H(s) includes the iω-axis ⟺ all poles of a causal H(s) are in the left half of the s-plane (and in the right half for an anti-causal H(s)).

Reverse proof. Given an LTI system, |y(t)| ≤ |u ∗ h| ≤ max_{t∈R} |u(t)| ∫_{−∞}^{∞} |h(τ)| dτ. We know that the integral is bounded by some C ≥ 0, so the max of y is constrained by the upper bound of the input.
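For a causal rational H(s), the pole condition gives a one-line BIBO check (a sketch, reusing the denominator of eq. (20)):

```python
import numpy as np

den = [1, 7, 14, 8]              # (s + 4)(s^2 + 3s + 2)
poles = np.roots(den)
print(np.all(poles.real < 0))    # True: all poles in the left half-plane
```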

3.2 Elementary block diagrams

Consider the system ẏ + ay = bu for a ∈ R and zero initial conditions. By taking the Laplace transform of each side,

sY (s) + aY (s) = bU (s) ⇒ Y (s)/U (s) = H(s) = b/(s + a). (22)

We can write a block diagram for this first order system (with no feedforward term) using a single integrator in a feedback loop. [Diagram lost in extraction.]

Now, suppose we have

H(s) = (s + 3)/((s + 2)(s + 4)), ℜ(s) > −2. (23)

We can rewrite H(s) in parallel as (1/2)/(s + 2) + (1/2)/(s + 4): [parallel-form diagram lost in extraction]

Or, in series, (s + 3)/(s + 2) · 1/(s + 4) = (1 + 1/(s + 2)) · 1/(s + 4): [series-form diagram lost in extraction]

Note that we have a d = 1 feedforward term in the H₁ = 1 + 1/(s + 2) block diagram, which is realized as a wire.
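Both decompositions can be verified symbolically (a sketch):

```python
import sympy as sp

s = sp.symbols('s')
H = (s + 3) / ((s + 2) * (s + 4))
parallel = sp.Rational(1, 2) / (s + 2) + sp.Rational(1, 2) / (s + 4)
series = (1 + 1 / (s + 2)) * (1 / (s + 4))
print(sp.simplify(H - parallel), sp.simplify(H - series))   # 0 0
```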

※ Lecture 4 (2/1) – State Space Models

4.1 Introduction to state space

State-space representations, also known as state-space models, are mathematical models of physical systems specified as sets of inputs, outputs, and state variables related by first-order differential equations or discrete difference equations. The state variables evolve over time in a way that depends on the values they have at any instant and on the externally imposed values of the input. The output variables' values depend on the values of the state variables, with a potential additional dependence on the input variables (the feedforward term). The state space (or phase space) is the geometric space in which the variables on the axes are the state variables. The state of the system can be represented as a vector, the state vector, within the state space.
If our dynamical system of interest is LTI and finite-dimensional, then the differential and algebraic expressions may be written in matrix form. The most general state-space model of a linear system with p inputs, q outputs, and n state variables is written as follows:

ẋ(t) = A(t)x(t) + B(t)u(t)
y(t) = C(t)x(t) + D(t)u(t). (24)

This can be represented as a block diagram model built around a bank of integrators. [Diagram lost in extraction.]


Here, the state vector x ∈ R^n, output vector y ∈ R^q, input (or control) vector u ∈ R^p, state matrix A ∈ R^{n×n}, input matrix B ∈ R^{n×p}, output matrix C ∈ R^{q×n}, and feedforward matrix D ∈ R^{q×p}. Let's do an example. Suppose we are given the block diagram: [diagram lost in extraction]

As a rule, the output of each integrator in continuous-time (delay in discrete-time) is a state variable. Therefore,

x ≡ [x₁; x₂] ⇒ ẋ = [ẋ₁; ẋ₂] = [−a, 0; 1, 0][x₁; x₂] + [1; 0] u (25)

and

y = [0, 1][x₁; x₂] + [0] u. (26)
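As a sanity check (a sketch; a = 2 is an arbitrary test value), scipy's ss2tf recovers the transfer function realized by eqs. (25)–(26):

```python
import numpy as np
import scipy.signal as sig

a = 2.0
A = np.array([[-a, 0.0],
              [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])
D = np.array([[0.0]])

num, den = sig.ss2tf(A, B, C, D)
print(num, den)   # den = [1, a, 0], i.e. H(s) = 1/(s(s + a))
```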

4.2 Examples

If we are tasked with deriving a state space model for

H(s) = (s + 1)/((s + 2)(s + 3)), (27)


we can either proceed by drawing a block diagram and parsing out the elements of the matrices or by recalling that

H(s) ≡ Y (s)/U (s) = C(sI − A)⁻¹B + D. (28)

I will start with the second option. We must convert H(s) into a first order system, which we can do by PFD:

H(s) = (s + 1)/((s + 2)(s + 3)) = −1/(s + 2) + 2/(s + 3). (29)

Now, with A = diag(−2, −3) read off from the poles, we must compute (sI − A)⁻¹:

(sI − A)⁻¹ = [s + 2, 0; 0, s + 3]⁻¹ = [1/(s + 2), 0; 0, 1/(s + 3)]. (30)
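A symbolic check (sketch) that a modal realization with A = diag(−2, −3), B = [1; 1], C = [−1, 2], D = 0 reproduces eq. (27); this B and C are one valid choice built from the partial-fraction residues of eq. (29), not the only one:

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[-2, 0], [0, -3]])
B = sp.Matrix([1, 1])
C = sp.Matrix([[-1, 2]])    # residues from eq. (29)
H = (C * (s * sp.eye(2) - A).inv() * B)[0, 0]
print(sp.simplify(H))       # (s + 1)/((s + 2)*(s + 3)), possibly in expanded form
```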

※ Smash-cut of remaining content

BD → SS. For A ∈ R^{2×2}, B ∈ R^{2×1}, C ∈ R^{1×2}, and D ∈ R, our state and output equations are

ẋ = [ẋ₁; ẋ₂] = [u − ax₁; x₁], y = x₂. (31)

As a rule, the output of each integrator in continuous-time (delay in discrete-time) is a state variable. Therefore,

ẋ = [ẋ₁; ẋ₂] = [−a, 0; 1, 0][x₁; x₂] + [1; 0] u (32)

and

y = [0, 1][x₁; x₂] + [0] u. (33)


Discrete-time. Given x[0] and u[0], . . . , u[n],

x[n] = Aⁿ x[0] + Σ_{i=0}^{n−1} A^{n−1−i} B u[i], n ≥ 0
y[n] = CAⁿ x[0] + C Σ_{i=0}^{n−1} A^{n−1−i} B u[i] + Du[n]

Our task is to compute Aⁿ. If A is sparse, we can induct it directly. Else, we can attempt an eigenvalue decomposition:

1. Find the eigenvalues of A by solving det(A − λI) = 0.

2. For each λᵢ, find the corresponding eigenvector from (A − λᵢI)v = 0.

3. Construct the diagonalization A = V DV⁻¹, where V is the matrix of eigenvectors and D has the eigenvalues along the diagonal.

4. Finally, Aⁿ = (V DV⁻¹)ⁿ = V DⁿV⁻¹ (see the sketch below).
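A numeric sketch of this recipe (the matrix A below is an arbitrary diagonalizable example):

```python
import numpy as np

A = np.array([[0.5, 1.0],
              [0.0, 0.25]])    # distinct eigenvalues -> diagonalizable
n = 8

lam, V = np.linalg.eig(A)                        # steps 1-2: eigenpairs
An = V @ np.diag(lam**n) @ np.linalg.inv(V)      # step 4: A^n = V D^n V^{-1}
print(np.allclose(An, np.linalg.matrix_power(A, n)))   # True
```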

Continuous-time. For

x(t) = e^{At} x(0) + ∫_0^t e^{A(t−τ)} B u(τ) dτ
y(t) = Cx(t) + Du(t)

Recall the power series expansion: e^{At} = I + At + (At)²/2! + …. We can compute this directly if A is sparse, via eigenvalue decomp. (same as before, except e^{At} = V e^{Dt} V⁻¹), or compute in the Laplace domain. L{e^{At}} = (sI − A)⁻¹, so X(s) = (sI − A)⁻¹[x(0) + BU (s)] and x(t) is just the inverse Laplace. Remark. The inverse of a (2 × 2) matrix

A = [a, b; c, d] ⇒ A⁻¹ = (1/det(A)) [d, −b; −c, a], (34)

where det(A) = ad − bc. For instance, suppose we want to compute e^{At} for
" # " #
1 1 0
ẋ = x+ u, and
1 0 1
h i
y= 1 0 x + u.

" #
s − 1 −1
sI − A =
−1 s
" #
1 s 1
⇒ (sI − A)−1 =
s(s − 1) − 1 1 s − 1
" #
s 1
= s2 −s−1 s2 −s−1
1 s−1
s2 −s−1 s2 −s−1

 √ √ √ √ 
5+ 5 5− 5 5 − 5
10 √
1+ 5
+ 10 √
1− 5
5 √
1+ 5
+ 5 √
 s− √ s− √2 s− √2 s− 1− 5 
=
 5
2
− 5 5+ 5
√2
5− 5


5 √
+ 5 √ 10 √
+ 10 √
s− 1+2 5 s− 1−2 5 s− 1−2 5 s− 1+2 5
√ √ √ √ √ √ √ √
5+ 5 12 (1+ 5)t 1
5 12 (1+ 5)t 5 12 (1− 5)t
" #
5− 5
At 10 e + 10 e 2 (1− 5)t 5 e − e
5√
⇒e = us (t) √ √
5 12 (1+ 5)t
√ 1 √
5 2 (1− 5)t
√ √
5+ 5 12 (1− 5)t

5− 5 12 (1+ 5)t
5 e − 5 e 10 e + 10 e
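A numeric spot-check (sketch) of this closed form against scipy's matrix exponential at the arbitrary time t = 1:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
t = 1.0
phi, psi = (1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2
r5 = np.sqrt(5)

off = (r5 / 5) * (np.exp(phi * t) - np.exp(psi * t))   # off-diagonal entry
eAt = np.array([
    [(5 + r5) / 10 * np.exp(phi * t) + (5 - r5) / 10 * np.exp(psi * t), off],
    [off, (5 - r5) / 10 * np.exp(phi * t) + (5 + r5) / 10 * np.exp(psi * t)],
])
print(np.allclose(eAt, expm(A * t)))   # True
```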

Asymptotic stability. An LTI system is A.S. if, ∀x(0) ∈ R^n with u(t) = 0 ∀t ≥ 0, lim_{t→∞} x(t) = 0. In other words: for any initial condition, if you wait long enough without an input, the state goes to zero. In CT, we must find that ℜ(λᵢ(A)) < 0 ∀i. In DT, |λᵢ(A)| < 1 ∀i. Note that due to pole-zero cancellation, the λᵢ are not necessarily the poles p̄ of H(s).
Reachability. This considers whether it is possible to design u(t) such that x(t) reaches a target state. This is the case if the reachability matrix R = [AB B] (written here for n = 2) has full rank n, where n is the number of states, or, when square, det(R) ̸= 0. In general, R = [A^{n−1}B A^{n−2}B . . . B].
Observability. Given {u(0), . . . , u(n − 1)} and {y(0), . . . , y(n − 1)}, can we reconstruct (i.e., observe) x(0) ∈ R^n? This is the case if the observability matrix O = [C; CA] (written here for n = 2) has full rank n or, in the SISO case with y(0) and y(1), det(O) ̸= 0. In general, O stacks C, CA, . . . , CA^{n−1}.
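A rank-based sketch of both tests for the realization of eqs. (25)–(26), with a = 2 as an arbitrary test value:

```python
import numpy as np

a = 2.0
A = np.array([[-a, 0.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])

R = np.hstack([A @ B, B])    # reachability matrix [AB B]
O = np.vstack([C, C @ A])    # observability matrix [C; CA]
print(np.linalg.matrix_rank(R), np.linalg.matrix_rank(O))   # 2 2: both full rank
```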
Step response. This is y(t) when u(t) = u_s(t). By def., U (s) = 1/s, so Y = HU = H/s. A first order system H = σ/(s + σ) will have Y = −1/(s + σ) + 1/s (ℜ(s) > max{−σ, 0}), which under L⁻¹ gives y(t) = (1 − e^{−σt}) u_s(t). y(t) is a monotonic function that approaches 1.

[Figure: first-order step response y(t) rising monotonically from 0 toward 1 for t ∈ [0, 5].]

Performance metrics. The rise time t_R = t₂ − t₁ ≃ 2.2/σ, where t₁ is where y(t) first reaches 10% of its final value y_SS and t₂ is where y(t) first reaches 90% of y_SS. A larger |σ| yields a smaller rise time, so we reach y_SS more quickly. The time constant τ = 1/σ, which is the time needed for y(t) to reach 0.63 y_SS; t_R ≃ 2.2τ.
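A numeric check (sketch) of t_R ≃ 2.2/σ for the first-order step response, with σ = 3 as an arbitrary test value:

```python
import numpy as np

sigma = 3.0
t = np.linspace(0, 10 / sigma, 100001)
y = 1 - np.exp(-sigma * t)          # first-order step response, y_SS = 1
t1 = t[np.argmax(y >= 0.1)]         # first crossing of 10% of y_SS
t2 = t[np.argmax(y >= 0.9)]         # first crossing of 90% of y_SS
print(t2 - t1, 2.2 / sigma)         # ~0.732 vs ~0.733
```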


Second order. For H(s) = (ω² + σ²)/((s + σ)² + ω²), the step response is y(t) = (1 − e^{−σt}(cos ωt + (σ/ω) sin ωt)) u_s(t):

[Figure: underdamped second-order step response overshooting and oscillating about 1 before settling, for t ∈ [0, 4].]

Here, t_R ≃ 1.8/√(σ² + ω²), the settling time t_s ≃ 4.6/σ, where t_s satisfies |y(t_s) − 1| ≤ e^{−σt_s} < 0.01, and the overshoot parameter M_p = max y(t) − 1 = e^{−πσ/ω}.
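The overshoot formula can be checked numerically (a sketch; σ = 1 and ω = 4 are arbitrary test values):

```python
import numpy as np

sigma, omega = 1.0, 4.0
t = np.linspace(0, 10, 200001)
y = 1 - np.exp(-sigma * t) * (np.cos(omega * t) + (sigma / omega) * np.sin(omega * t))
print(y.max() - 1, np.exp(-np.pi * sigma / omega))   # both ~0.456
```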
Feedback control. If σ is small and the response is slow, we can make it faster by
adding feedback control with r(t) and e(t) passing through K, a gain.

※ Personal Project (Spring Break) – Fixing the Mass Balance: a PKPD Model

In the 2016 Systems and Controls final, Problem 2 introduces a two-compartmental model of the brain under anesthesia. The dynamics are written as

[ẋ_C; ẋ_E] = [1 − (k_CE + k_CO), k_EC; k_CE, 1 − k_EC] [x_C; x_E] + [1; 0] u.

From the PK model diagram, we can write

dx_C/dt = u(t) + k_EC x_E(t) − k_CE x_C(t) − k_CO x_C(t). (35)

However, by writing out the equation from the state-space model, we get

dx_C/dt = x_C(t) − k_CE x_C(t) − k_CO x_C(t) + k_EC x_E(t) + u(t). (36)
The extra x_C(t) term (colored red in the original notes) arises from the matrix multiplication of the 1 − (k_CE + k_CO) entry with x_C in the state vector. This self-referential term breaks the mass balance that we see in (35). Mechanistic pharmacokinetic modeling requires rigorous mass balance in the ODEs that describe the dynamics of mass exchange amongst the various compartments.
As well, in the problem, we are allowed to let k_CE = 1 and k_EC = 0.5. These numbers make for useful test-taking simplifications, but they remove much of the model's predictive power: we can no longer vary its first-order exchange parameters or make meaningful statements about its behavior with respect to them. So, we can rewrite this system as:
" # " #" # " #
ẋC −kCE − kCO kEC xC 1
= + u.
ẋE kCE −kEC xE 0

Let’s start asking some questions about this system!

6.1 Reachability

For this,

R ≡ [AB B] = [−k_CE − k_CO, 1; k_CE, 0]
⇒ det(R) = −k_CE.

For the system to be reachable, det(R) ̸= 0, therefore k_CE ̸= 0: there must always be some flow from the central compartment into the effect compartment.
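The same conclusion falls out symbolically (a sketch):

```python
import sympy as sp

kCE, kCO, kEC = sp.symbols('k_CE k_CO k_EC', positive=True)
A = sp.Matrix([[-kCE - kCO, kEC],
               [kCE, -kEC]])
B = sp.Matrix([1, 0])
R = sp.Matrix.hstack(A * B, B)   # reachability matrix [AB B]
print(R.det())                   # -k_CE: reachable iff k_CE != 0
```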


References

[1] Brunton, Steve. Linear Systems [Control Bootcamp]. 2018. Published on YouTube. https://www.youtube.com/watch?v=nyqJJdhReiA&ab_channel=SteveBrunton
