Note 03
1 Matrix Form
1.1 Introduction
Last time we saw how to solve simple scalar first order linear differential equations.
Now, we are going to expand our tool set and learn how to tackle multivariable differential equations. (You
will see in homeworks that these exact same ideas will also equip us to deal with higher-order differential
equations where we have second and third derivatives involved.) Let’s motivate the need to understand
two dimensional systems with the circuit in fig. 1.
[Figure 1: An RC circuit. The source Vin drives R1 (carrying I1) into node 1, where capacitor C1 hangs to ground; R2 (carrying I2) connects node 1 to node 2, where capacitor C2 hangs to ground.]
1. Initial condition $t = 0$: The voltage source has been charging the capacitors for an infinite amount of time. Hence, both capacitors have voltage $V_{C_1} = V_{C_2} = 1\text{V}$ and current $I_1 = I_2 = 0\text{A}$.
2. As $t \to \infty$: After the capacitors have been allowed to discharge for a long period of time, they carry no charge on their plates; hence $V_{C_1} = V_{C_2} = 0\text{V}$.
Now, we solve for the transients. That is, how does our system go from condition 1 to condition 2? First, we need to set up the circuit equations. We will define $V_1 = V_{C_1}$ and $V_2 = V_{C_2}$ as the voltages across our capacitors.
$V_2 = V_1 - I_2 R_2$ (1)
$I_2 = C_2 \frac{d}{dt} V_2$ (2)
$0 - V_1 = I_1 R_1 \implies I_1 = -\frac{V_1}{R_1}$ (3)
1 The SI prefixes ‘M’ and ‘µ’ stand for "mega" and "micro," and correspond to the decimal multiples of $10^6$ and $10^{-6}$ respectively.
$I_1 = I_2 + C_1 \frac{d}{dt} V_1$ (4)
For eq. (1) and eq. (2), we look at the current through Node 2, using Ohm's law for the former and the voltage-current relation for the capacitor in the latter. Similarly, we find eq. (3) and eq. (4) by looking at Node 1. Similar to what we saw in a single-capacitor system, we have effectively introduced two new variables: $\frac{d}{dt}V_1$ and $\frac{d}{dt}V_2$. Next, to solve for the transients, we need to first define our system variables. The standard approach that we will always take is to make anything that gets differentiated into a state variable. Hence, we will need two state variables: $V_1$ and $V_2$.
We get a system of differential equations by isolating the derivative terms and solving for them in terms of
their non-differentiated selves.2 In principle, every other circuit quantity of interest can be found once we
know how the state variables and their derivatives behave.
$\frac{d}{dt}V_1(t) = -\left(\frac{1}{R_1 C_1} + \frac{1}{R_2 C_1}\right) V_1(t) + \frac{V_2(t)}{R_2 C_1}$ (5)
$\frac{d}{dt}V_2(t) = \frac{V_1(t)}{R_2 C_2} - \frac{V_2(t)}{R_2 C_2}$ (6)
The steps outlining how we obtained eq. (5) and eq. (6) are in Appendix A. The fact that every linear circuit involving capacitors (and, as we will see, inductors as well) can be expressed in this form is effectively a consequence of Gaussian Elimination.
Concept Check: Take a second to work through setting up the above differential equations. It is important to try to reduce all your branch equations from the circuit analysis to as few variables as possible.
As seen in EECS16A, when encountering a system of equations, we try to put it into vector/matrix form to try to solve it. So, let's put eq. (5) and eq. (6) in matrix differential form. We define $\vec{x}(t) = \begin{bmatrix} V_1(t) \\ V_2(t) \end{bmatrix}$, and:
# "
V2
" #
dV1 1 1
dt
− R C + R C V1 + R C
dV2 = (7)
1 1 2 1 2 1
V1 V2
dt R2 C2 − R2 C2
" #
− R11C1 + R21C1 1
R C V1 (t)
= 2 1 (8)
1
R2 C2 − R21C2 V2 (t)
d −5 2
~x (t) = ~x (t) (9)
dt 2 −2
In eq. (9), we have substituted in component values to get a matrix with concrete numbers.
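To make this concrete, here is a small numpy sketch that builds the matrix in eq. (8) from component values. The specific values of R1, R2, C1, C2 below are assumptions chosen to reproduce the entries of eq. (9); the note itself only gives the final matrix.

```python
import numpy as np

# Assumed component values chosen so that 1/(R1*C1) = 3, 1/(R2*C1) = 2,
# and 1/(R2*C2) = 2, reproducing the matrix in eq. (9).
R1, R2 = 1/3, 1/2   # resistances (consistent units assumed)
C1, C2 = 1.0, 1.0   # capacitances

A = np.array([
    [-(1/(R1*C1) + 1/(R2*C1)), 1/(R2*C1)],
    [1/(R2*C2), -1/(R2*C2)],
])
print(A)  # [[-5.  2.] [ 2. -2.]]
```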
Quick Aside: You may come across a dot over the variable of interest (e.g. $\dot{V}_1$) as shorthand for the $\frac{d}{dt}$ operator. This is known as Newton's notation and is sometimes used for conciseness.
In general, we want to set up our systems in the following generic form:
$\frac{d}{dt}\vec{x}(t) = A\vec{x}(t) + \vec{b}$ (10)
So, why do we choose this form? Because it most closely resembles the first order scalar differential equation
we studied in the previous note, and our goal in this note is to convert this equation into a collection of first
2 You can view this as doing the downward pass of Gaussian Elimination using an ordering of the variables so that the $\frac{d}{dt}(\cdot)$ variables come second-to-last and their non-differentiated counterparts come last. Because there will be more unknowns than equations, Gaussian elimination will stop on the last $\frac{d}{dt}$-ed variable. Then, we can do a limited upward pass (back-substitution pass) of Gaussian elimination to purge any dependence of one $\frac{d}{dt}$-ed variable on any other $\frac{d}{dt}$-ed variable. This gives rise to the system of differential equations in a systematic way: it is the lower block that remains after doing this back-substitution.
order differential equations. In the context of our circuit example, $A = \begin{bmatrix} -5 & 2 \\ 2 & -2 \end{bmatrix}$ and $\vec{b} = \vec{0}$ (because there is no external voltage or current being applied for $t \geq 0$).
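As a sanity check on this form, the sketch below numerically integrates $\frac{d}{dt}\vec{x}(t) = A\vec{x}(t) + \vec{b}$ with a simple forward-Euler loop. This is a crude method used here only for intuition; `simulate` and its step size are our own hypothetical choices, not part of the note.

```python
import numpy as np

def simulate(A, b, x0, dt=1e-4, T=5.0):
    """Crude forward-Euler integration of d/dt x = A x + b."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * (A @ x + b)
    return x

A = np.array([[-5.0, 2.0], [2.0, -2.0]])
b = np.zeros(2)                       # no input for t >= 0
x_final = simulate(A, b, [1.0, 1.0])  # start from V1(0) = V2(0) = 1 V
print(x_final)                        # both voltages near 0, matching condition 2
```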
Before we delve into solving our system of differential equations, let's review a core concept from EECS16A: eigenvalues and eigenvectors. While this is an abridged review, the corresponding EECS16A note has more detail.
An eigenvector is a vector that is only scaled by a value (the corresponding eigenvalue) when acted upon
by a matrix:
$A\vec{v}_\lambda = \lambda \vec{v}_\lambda$ (11)
Here, $\vec{v}_\lambda \neq \vec{0}$ is an eigenvector of $A$ corresponding to the scalar eigenvalue $\lambda$. If we look at all the eigenvectors of $A$ corresponding to a single $\lambda$, these form a subspace known as the $\lambda$-eigenspace. Each distinct eigenvalue is associated with its own nontrivial eigenspace3. We can see this subspace property by rearranging the above equation into:
$(A - \lambda I_n)\vec{v} = \vec{0}.$ (12)
Here, $I_n$ is the $n \times n$ identity matrix. Since $A\vec{v}_\lambda = \lambda\vec{v}_\lambda$ for some $\vec{v}_\lambda \neq \vec{0}$, we know that $A - \lambda I_n$ has a nontrivial nullspace (namely the eigenspace corresponding to the eigenvalue $\lambda$ of $A$). Consequently, $A - \lambda I_n$ must be non-invertible and thus $\det(A - \lambda I_n) = 0$. Remember, the determinant of a square matrix measures the oriented volume4 of the unit hypercube as transformed by that matrix. Non-invertible matrices destroy information by squashing the space along at least one direction (the nullspace directions), and therefore result in something with volume zero.
This gives us the characteristic equation (also called the characteristic polynomial); solving this equation will give us the eigenvalues $\lambda$. As an example, let's compute the eigenvalues of $A = \begin{bmatrix} 1 & 2 \\ 4 & 3 \end{bmatrix}$:
$\det(A - \lambda I_2) = \det\left(\begin{bmatrix} 1 & 2 \\ 4 & 3 \end{bmatrix} - \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix}\right)$ (13)
$= \det\begin{bmatrix} 1 - \lambda & 2 \\ 4 & 3 - \lambda \end{bmatrix}$ (14)
$= (1 - \lambda)(3 - \lambda) - 8 = \lambda^2 - 4\lambda - 5$ (15)
3 Moreover, any collection of eigenvectors drawn from eigenspaces corresponding to distinct eigenvalues is linearly independent. This is proved by contradiction; the simplest case is for two distinct eigenvalues. (A vector can't simultaneously be two different multiples of the same thing!) From there, the proof continues to three vectors. If one is expressible as a combination of the others, then those two must be expressible as a multiple of each other, which we have already ruled out, and so on. As a result, there cannot be more than $n$ distinct eigenvalues for an $n \times n$ matrix, since each one must be associated with its own eigenspace, and there can only be $n$ linearly independent directions in an $n$-dimensional space.
4 Recall how the determinant is computed by doing Gaussian elimination. Multiplication of a row by −1 incurs a factor of −1
because it is like looking through a mirror, since a mirror flips orientation. Every time we swap two rows, we also incur a factor of
−1. This is because swapping two rows is like rotating and then looking through a mirror. Rotating doesn’t change volume, but the
mirror flips the orientation. Adding a multiple of one row to another does nothing to the oriented volume because it is like shearing
a cube—just as pushing over a deck of cards doesn’t change the volume of the deck, shearing doesn’t change volume either. Scaling
a row by a positive number has that scaling effect on the volume, and so by reversing the scalings done in Gaussian Elimination, we
can get the volume effect of the original matrix.
Setting eq. (15) to 0 and solving the quadratic equation, we find $\lambda_1 = 5$, $\lambda_2 = -1$. Each eigenvalue has its own eigenspace, namely the nullspace of the matrix $A - \lambda I_n$. Hence, to find the eigenspace, we can just find the relevant nullspace5:
$(A - \lambda_1 I_2)\vec{v}_1 = (A - 5I_2)\vec{v}_1 = \begin{bmatrix} -4 & 2 \\ 4 & -2 \end{bmatrix}\begin{bmatrix} v_{11} \\ v_{12} \end{bmatrix} = \vec{0}$ (16)
We see that both rows indicate that $4v_{11} - 2v_{12} = 0$, and hence an eigenvector is $\vec{v}_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$.
Concept Check: Find an eigenvector $\vec{v}_2$ corresponding to $\lambda_2$.
Answer: A second eigenvector is $\vec{v}_2 = \alpha\begin{bmatrix} 1 \\ -1 \end{bmatrix}$. Note that any $\alpha \neq 0$ gives a valid eigenvector.
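If you want to check such computations numerically, numpy can do it. This is a sketch; note that numpy normalizes eigenvectors to unit length and does not guarantee their order.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [4.0, 3.0]])
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)   # 5 and -1 (in some order)
print(eigvecs)   # columns proportional to [1, 2] and [1, -1] (up to sign and order)
```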
For a 2 × 2 matrix, it's possible that the two eigenvalues you end up with have the same value, leading to a repeated eigenvalue. This repeated eigenvalue can have a one- or two-dimensional eigenspace (unlike a single, unrepeated eigenvalue, which will only have a one-dimensional eigenspace).
For example, the following matrix has a repeated eigenvalue of $\lambda$:
$A = \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix}$ (17)
The $\lambda$-eigenspace of this matrix is $\mathbb{R}^2$, since for any vector $\vec{v} \in \mathbb{R}^2$, $A\vec{v} = \lambda\vec{v}$.
We can also have examples like
$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ (18)
that have a single eigenvalue $\lambda = 0$. In this case, the relevant eigenspace is one-dimensional: only $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and its multiples are eigenvectors here.6
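Numerically, this defective case shows up as numpy returning two essentially parallel eigenvector columns (a sketch):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)   # [0. 0.]: a repeated eigenvalue
print(eigvecs)   # both columns are (numerically) multiples of [1, 0]
```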
It can also be the case that when we solve $\det(A - \lambda I_n) = 0$, there will be no real solutions for $\lambda$. Consider the rotation matrix (described in Note j):
$R_\theta = \begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix}$ (19)
Concept Check: When does $R_\theta$ have real eigenvalues? Why? Answer: For $\theta = 0°$ or $\theta = 180°$.
However, when $\theta$ does not correspond to a 0° or 180° rotation, there are no real vectors that are scaled versions of themselves after the transformation. This results in complex eigenvalues. For example, let's look
5 You know from EECS16A how you can find nullspaces systematically using Gaussian Elimination. Simply run Gaussian elimina-
tion until it terminates—you will have at least one row of zeros. Then work your way backward from the end finding free variables
(variables that you did not “pivot” on, or variables that you never explicitly tried to eliminate from lower equations. These are
variables that got eliminated on their own!) and then expressing others in terms of them. This will give you a basis for the nullspace.
6 If there were any other linearly independent eigenvectors corresponding to the 0 eigenvalue, then everything would have to be
an eigenvector and that would mean the matrix mapped everything to the zero vector. But the matrix is definitely not the zero matrix,
so that isn’t what is going on.
at 45° rotation:
$R = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}$
$\det(R - \lambda I) = \left(\frac{1}{\sqrt{2}} - \lambda\right)\left(\frac{1}{\sqrt{2}} - \lambda\right) + \frac{1}{2}$
Setting this determinant equal to zero and solving yields the complex eigenvalues $\lambda = \frac{1}{\sqrt{2}}(1 + j)$ and $\lambda = \frac{1}{\sqrt{2}}(1 - j)$, which makes sense because there are no physical (real) eigenvectors for a rotation transformation in two dimensions.
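The same computation in numpy returns complex eigenvalues, consistent with the algebra above (a sketch):

```python
import numpy as np

theta = np.pi / 4  # 45 degree rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.linalg.eigvals(R))  # (1 + 1j)/sqrt(2) and (1 - 1j)/sqrt(2)
```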
Let's start with a vector $\vec{v} \in \mathbb{R}^n$. This vector represents a point in space. When you think about this vector written out using its coordinates, you are scaling the vectors in the standard basis (i.e. the columns of the identity matrix $I_n$) by the components of $\vec{v}$ and then adding them up:
$\vec{v} = I_n\vec{v} = v_1\begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} + v_2\begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix} + \cdots + v_n\begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}$ (20)
But, suppose that I think about this vector in terms of a different set of directions. We can define a new coordinate system using a basis, i.e. $n$ linearly independent vectors $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$. Now, let's say I have a vector $\vec{u}$ which I am representing as $\begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}$ with respect to my new coordinate system. How can I translate this to the coordinates you are familiar with? Well, instead of scaling the vectors of the standard basis, we are now scaling the vectors defining my new basis (the new basis vectors). Assuming that both of us were thinking of the same physical point in space, then the vector $\vec{v}$ in the standard basis is:
$\vec{v} = u_1\vec{v}_1 + u_2\vec{v}_2 + \cdots + u_n\vec{v}_n = \begin{bmatrix} \vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_n \end{bmatrix}\vec{u} = V\vec{u}$
where $V$ is the matrix whose columns are the new basis vectors. Hence, transforming a vector from my basis to your standard basis involves just a matrix multiplication. This can be expressed in a diagram given in fig. 2 using the up and down arrows.
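Concretely, converting between the two coordinate systems is one matrix-vector multiplication each way. Here is a sketch with a made-up basis and made-up coordinates:

```python
import numpy as np

# Hypothetical basis: the columns of V are the new basis vectors v1, v2.
V = np.array([[1.0, 1.0],
              [2.0, -1.0]])

u = np.array([3.0, -2.0])        # coordinates in the V-basis (made-up)
v = V @ u                        # the same point in standard coordinates
u_again = np.linalg.solve(V, v)  # back to V-basis coordinates, i.e. V^{-1} v
print(v, u_again)
```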
Next, let's see what happens if we have a matrix $A$ that transforms vectors. Consider $A = \begin{bmatrix} 1 & 2 \\ 4 & 3 \end{bmatrix}$. This matrix performs a linear transformation, $\vec{y} = A\vec{x}$. Is there another basis within which this transformation is much simpler to understand?
Let's suppose we had our new basis $V$ so that $\vec{x} = V\widetilde{\vec{x}}$ and correspondingly, $\widetilde{\vec{x}} = V^{-1}\vec{x}$. Similarly, $\vec{y} = V\widetilde{\vec{y}}$ and correspondingly, $\widetilde{\vec{y}} = V^{-1}\vec{y}$. Then:
$\widetilde{\vec{y}} = V^{-1}\vec{y} = V^{-1}A\vec{x} = V^{-1}AV\widetilde{\vec{x}}$
so the same transformation, viewed in the V-basis, is represented by the matrix $D = V^{-1}AV$.
[Commutative diagram: $\vec{x} \xrightarrow{A} \vec{y}$ along the top; $\widetilde{\vec{x}} \xrightarrow{D} \widetilde{\vec{y}}$ along the bottom; $V^{-1}$ maps down and $V$ maps up on each side.]
Figure 2: Change of Basis Mapping. The top row has everything in the standard basis. The bottom row is in V-basis. The matrix $D$ is supposed to represent the same linear transformation as $A$, except that it does so for vectors expressed in V-basis. It turns out $D = V^{-1}AV$ since vectors are columns. We use tilde ($\widetilde{\cdot}$) coordinates for things in the lower basis.
Now, if we chose our new basis to be the one defined by the eigenvectors of $A$, then we can simplify:
$D = V^{-1}AV = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}$
where $D$ is the diagonal matrix of eigenvalues and $V$ is a matrix with the corresponding eigenvectors as its columns. Thus we have proved that $A = VDV^{-1}$; equivalently, $D = V^{-1}AV$.
In general, the pattern we have used above will hold whenever we can find an eigenbasis, which is a full basis consisting of eigenvectors. Why? From the eigenvalue-eigenvector relationship, we have $AV = \begin{bmatrix} A\vec{v}_1 & A\vec{v}_2 & \cdots & A\vec{v}_n \end{bmatrix} = \begin{bmatrix} \lambda_1\vec{v}_1 & \lambda_2\vec{v}_2 & \cdots & \lambda_n\vec{v}_n \end{bmatrix} = VD$. If $V$ is full rank and thus invertible, we can left multiply both sides by $V^{-1}$ to get:
$V^{-1}AV = V^{-1}VD = D = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & \lambda_n \end{bmatrix}$ (31)
But, we know that there also exist matrices for which an eigenbasis does not exist. For those, this trick
won’t work.
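As a quick numerical check of eq. (31), we can diagonalize our earlier example (a sketch; V below uses the eigenvectors [1, 2] and [1, −1] found above as its columns):

```python
import numpy as np

A = np.array([[1.0, 2.0], [4.0, 3.0]])
V = np.array([[1.0, 1.0],
              [2.0, -1.0]])       # eigenvectors as columns
D = np.linalg.inv(V) @ A @ V
print(np.round(D, 12))            # diag(5, -1), up to floating-point error
```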
3 How to Solve?
Now that we understand how to convert our systems of differential equations into vector/matrix form,
let’s understand how to solve them. We would ideally like to use the tools we developed for the scalar case.
But how can we do this when our equations seem to be fundamentally dependent on two independent
variables? We wouldn’t have this problem of "coupling" if our A matrix were diagonal. So, let’s try to
change our basis to one where the A matrix does become diagonal.
Note that if our system of differential equations were "uncoupled" to begin with, our A matrix would have
been diagonal. Hence, we could proceed to solve the first order differential equations independently of
each other, as seen before. Consider the circuit in fig. 3.
© UCB EECS 16B, Spring 2022. All Rights Reserved. This may not be publicly shared without explicit permission. 6
EECS 16B Note 3: Vector Differential Equations 2022-01-30 12:36:01-08:00
[Figure 3: An uncoupled circuit. The source Vin sits between two separate RC loops: R1 (carrying I1) with C1 (holding V1) on the left, and R2 (carrying I2) with C2 (holding V2) on the right.]
Both capacitors have been charged to Vin. At $t = 0$, we set Vin = 0V and allow the capacitors to discharge. Hence, our initial conditions are $V_1(0) = V_2(0) = V_{in}$. We find the branch equations, eqs. (32) and (33):
$I_1 = -C_1\frac{d}{dt}V_1 = \frac{V_1}{R_1}$ (32)
$I_2 = -C_2\frac{d}{dt}V_2 = \frac{V_2}{R_2}$ (33)
Hence, from eq. (32) and eq. (33), we arrive at the following uncoupled system of differential equations:
$\frac{d}{dt}\begin{bmatrix} V_1(t) \\ V_2(t) \end{bmatrix} = \begin{bmatrix} -\frac{1}{R_1C_1} & 0 \\ 0 & -\frac{1}{R_2C_2} \end{bmatrix}\begin{bmatrix} V_1(t) \\ V_2(t) \end{bmatrix}$ (34)
Concept Check: We can easily solve the above system of equations by separately solving for V1 and V2 .
Review Note 1 if you are unsure about how to solve for the voltages here.
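For reference, here is what the uncoupled solutions look like numerically. This is a sketch; the time constants R1C1 = 1 and R2C2 = 0.5 are made-up values, not from the note.

```python
import numpy as np

# Each state decays on its own: V_i(t) = V_i(0) * exp(-t / (R_i * C_i)).
R1C1, R2C2 = 1.0, 0.5            # assumed time constants (seconds)
t = np.linspace(0.0, 5.0, 6)
V1 = 1.0 * np.exp(-t / R1C1)     # V1(0) = Vin = 1 V
V2 = 1.0 * np.exp(-t / R2C2)     # V2(0) = Vin = 1 V
print(np.column_stack([t, V1, V2]))
```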
Coming back to our original system, we define $\vec{x}(t) = \begin{bmatrix} V_1(t) \\ V_2(t) \end{bmatrix}$. Then, as in eq. (9), we set up the differential equations as follows:
$\frac{d}{dt}\vec{x}(t) = \begin{bmatrix} -5 & 2 \\ 2 & -2 \end{bmatrix}\vec{x}(t)$ (35)
As discussed, let's begin by finding a coordinate system in which the transformation represented by this matrix is diagonal (which we know how to solve). First, we must find its eigenvalues:
$\det(\lambda I - A) = \det\begin{bmatrix} \lambda + 5 & -2 \\ -2 & \lambda + 2 \end{bmatrix}$ (36)
$= (\lambda + 5)(\lambda + 2) - 4$ (37)
$= \lambda^2 + 7\lambda + 6 = (\lambda + 6)(\lambda + 1)$ (38)
Hence, the eigenvalues are $\lambda_1 = -6$ and $\lambda_2 = -1$, with corresponding normalized eigenvectors $\vec{v}_1 = \frac{1}{\sqrt{5}}\begin{bmatrix} -2 \\ 1 \end{bmatrix}$ and $\vec{v}_2 = \frac{1}{\sqrt{5}}\begin{bmatrix} 1 \\ 2 \end{bmatrix}$. Writing $A = V\Lambda V^{-1}$ with these eigenvectors as the columns of $V$:
$\frac{d}{dt}\vec{x}(t) = V\Lambda V^{-1}\vec{x}(t)$ (39)
$= \begin{bmatrix} -\frac{2}{\sqrt{5}} & \frac{1}{\sqrt{5}} \\ \frac{1}{\sqrt{5}} & \frac{2}{\sqrt{5}} \end{bmatrix}\begin{bmatrix} -6 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} -\frac{2}{\sqrt{5}} & \frac{1}{\sqrt{5}} \\ \frac{1}{\sqrt{5}} & \frac{2}{\sqrt{5}} \end{bmatrix}^{-1}\vec{x}(t)$ (40)
Let's perform a change of basis by setting $\widetilde{\vec{x}}(t) = \begin{bmatrix} \widetilde{x}_1(t) \\ \widetilde{x}_2(t) \end{bmatrix} = V^{-1}\vec{x}(t)$. By left multiplying both sides of eq. (39) by $V^{-1}$, we get the following:
$V^{-1}\frac{d}{dt}\vec{x}(t) = V^{-1}A\vec{x}(t) = V^{-1}V\Lambda V^{-1}\vec{x}(t)$ (41)
$\implies \frac{d}{dt}V^{-1}\vec{x}(t) = \Lambda\widetilde{\vec{x}}(t)$ (42)
$\implies \frac{d}{dt}\widetilde{\vec{x}}(t) = \begin{bmatrix} -6 & 0 \\ 0 & -1 \end{bmatrix}\widetilde{\vec{x}}(t)$ (43)
Because $V^{-1}$ is a constant matrix, we can pull it inside the derivative to go from eq. (41) to eq. (42). In eq. (43), we have successfully uncoupled our equations, and we can proceed to solve them independently as mentioned earlier:
$\frac{d}{dt}\widetilde{x}_1(t) = -6\widetilde{x}_1(t) \implies \widetilde{x}_1(t) = k_1e^{-6t}$ (44)
$\frac{d}{dt}\widetilde{x}_2(t) = -\widetilde{x}_2(t) \implies \widetilde{x}_2(t) = k_2e^{-t}$ (45)
Next, we need to solve for our constants $k_1$ and $k_2$. Recall our initial conditions, $V_1(0) = V_2(0) = 1\text{V}$. Hence, $\widetilde{x}_1(0)$ and $\widetilde{x}_2(0)$ are given by7:
$\widetilde{\vec{x}}(0) = V^{-1}\begin{bmatrix} V_1(0) \\ V_2(0) \end{bmatrix}$ (46)
$\implies \begin{bmatrix} \widetilde{x}_1(0) \\ \widetilde{x}_2(0) \end{bmatrix} = \begin{bmatrix} -\frac{2}{\sqrt{5}} & \frac{1}{\sqrt{5}} \\ \frac{1}{\sqrt{5}} & \frac{2}{\sqrt{5}} \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ (47)
$= \begin{bmatrix} -\frac{1}{\sqrt{5}} \\ \frac{3}{\sqrt{5}} \end{bmatrix}$ (48)
Hence, $k_1 = -\frac{1}{\sqrt{5}}$ and $k_2 = \frac{3}{\sqrt{5}}$. Now, we can transform back into our original basis to find $V_1(t)$ and $V_2(t)$:
$\vec{x}(t) = V\widetilde{\vec{x}}(t)$ (49)
$= \begin{bmatrix} -\frac{2}{\sqrt{5}} & \frac{1}{\sqrt{5}} \\ \frac{1}{\sqrt{5}} & \frac{2}{\sqrt{5}} \end{bmatrix}\begin{bmatrix} -\frac{1}{\sqrt{5}}e^{-6t} \\ \frac{3}{\sqrt{5}}e^{-t} \end{bmatrix}$ (50)
$= \begin{bmatrix} \frac{2}{5}e^{-6t} + \frac{3}{5}e^{-t} \\ -\frac{1}{5}e^{-6t} + \frac{6}{5}e^{-t} \end{bmatrix}$ (51)
For $t \geq 0$, we find that $V_1(t) = \frac{2}{5}e^{-6t} + \frac{3}{5}e^{-t}$ and $V_2(t) = -\frac{1}{5}e^{-6t} + \frac{6}{5}e^{-t}$. Fig. 4 shows a plot of our solutions.
Using the same argument, we can see how the voltage will vary with different initial conditions (fig. 5).
7 Note that in this case, $V^{-1} = V^\top = V$, so when we plug in $V^{-1}$ below, it's the same as plugging in $V$.
[Plot: $V_1(t)$ and $V_2(t)$, Voltage (V) vs. Time (t), both decaying from 1V toward 0V.]
Figure 4: Initial Conditions: V1 (0) = 1V and V2 (0) = 1V. Homogeneous Case Solution.
[Two plots: $V_1(t)$ and $V_2(t)$, Voltage (V) vs. Time (t), one per set of initial conditions.]
(a) Initial Conditions: $V_1(0) = 0\text{V}$ and $V_2(0) = 1\text{V}$. Here, $V_1(t) = -\frac{2}{5}e^{-6t} + \frac{2}{5}e^{-t}$, and $V_2(t) = \frac{1}{5}e^{-6t} + \frac{4}{5}e^{-t}$.
(b) Initial Conditions: $V_1(0) = 1\text{V}$ and $V_2(0) = 0\text{V}$. Here, $V_1(t) = \frac{4}{5}e^{-6t} + \frac{1}{5}e^{-t}$, and $V_2(t) = -\frac{2}{5}e^{-6t} + \frac{2}{5}e^{-t}$.
Figure 5: Voltage transients for different initial conditions on the capacitors. The specific solutions are in the subcaptions for your reference.
Concept Check: Take a minute to qualitatively reason about the initial increase in voltages in Figure 5.
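We can also verify the closed-form solution in eq. (51) against a numerical ODE solver (a sketch using scipy's solve_ivp):

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-5.0, 2.0], [2.0, -2.0]])
sol = solve_ivp(lambda t, x: A @ x, (0.0, 5.0), [1.0, 1.0],
                dense_output=True, rtol=1e-9, atol=1e-12)

t = 1.0
closed_form = [2/5 * np.exp(-6*t) + 3/5 * np.exp(-t),
               -1/5 * np.exp(-6*t) + 6/5 * np.exp(-t)]
print(sol.sol(t), closed_form)  # the two should agree closely
```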
Now that we have a good understanding of the homogeneous case, let's look at the voltage transients of charging up our two-capacitor system. Now we have uncharged capacitors to start ($V_1(0) = V_2(0) = 0\text{V}$), and we apply a voltage $V_{in} = 1\text{V}$ for time $t > 0$. We get the following equations:
$V_2 = V_1 - I_2R_2$ (52)
$I_2 = C_2\frac{d}{dt}V_2$ (53)
$V_{in} - V_1 = I_1R_1 \implies I_1 = \frac{V_{in} - V_1}{R_1}$ (54)
$I_1 = I_2 + C_1\frac{d}{dt}V_1$ (55)
Rearranging these just as before, the only change is the extra input term in the equation for $\frac{d}{dt}V_1(t)$:
$\frac{d}{dt}V_1(t) = -\left(\frac{1}{R_1C_1} + \frac{1}{R_2C_1}\right)V_1(t) + \frac{V_2(t)}{R_2C_1} + \frac{V_{in}}{R_1C_1}$ (56)
$\frac{d}{dt}V_2(t) = \frac{V_1(t)}{R_2C_2} - \frac{V_2(t)}{R_2C_2}$ (57)
With the same component values as before, this is
$\frac{d}{dt}\vec{x}(t) = \begin{bmatrix} -5 & 2 \\ 2 & -2 \end{bmatrix}\vec{x}(t) + \begin{bmatrix} 3 \\ 0 \end{bmatrix}$ (58)
Now the system has a non-homogeneous term, which we haven't seen before, but let's see if diagonalization can help us simplify the problem. Letting $\widetilde{\vec{x}}(t) = V^{-1}\vec{x}(t)$ and using $A = V\Lambda V^{-1}$, we plug in:
$\frac{d}{dt}\widetilde{\vec{x}}(t) = \frac{d}{dt}V^{-1}\vec{x}(t) = V^{-1}\frac{d}{dt}\vec{x}(t)$ (59)
$= V^{-1}(A\vec{x}(t) + \vec{b})$ (60)
$= V^{-1}A\vec{x}(t) + V^{-1}\vec{b}$ (61)
$= V^{-1}AV\widetilde{\vec{x}}(t) + V^{-1}\vec{b}$ (62)
$= \Lambda\widetilde{\vec{x}}(t) + V^{-1}\vec{b}.$ (63)
so then
$\frac{d}{dt}\widetilde{\vec{x}}(t) = \Lambda\widetilde{\vec{x}}(t) + V^{-1}\vec{b}$ (65)
$= \begin{bmatrix} -6 & 0 \\ 0 & -1 \end{bmatrix}\widetilde{\vec{x}}(t) + \begin{bmatrix} -\frac{2}{\sqrt{5}} & \frac{1}{\sqrt{5}} \\ \frac{1}{\sqrt{5}} & \frac{2}{\sqrt{5}} \end{bmatrix}^{-1}\begin{bmatrix} 3 \\ 0 \end{bmatrix}$ (66)
$= \begin{bmatrix} -6 & 0 \\ 0 & -1 \end{bmatrix}\widetilde{\vec{x}}(t) + \begin{bmatrix} -\frac{6}{\sqrt{5}} \\ \frac{3}{\sqrt{5}} \end{bmatrix}.$ (67)
Componentwise, these are:
$\frac{d}{dt}\widetilde{x}_1(t) = -6\widetilde{x}_1(t) - \frac{6}{\sqrt{5}}$ (68)
$\frac{d}{dt}\widetilde{x}_2(t) = -\widetilde{x}_2(t) + \frac{3}{\sqrt{5}}$ (69)
We already know how to solve these non-homogeneous differential equations (see near the end of Note 1).
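As a reminder, the scalar result (restated here in our notation) is that for $\frac{d}{dt}y(t) = \lambda y(t) + c$ with $\lambda \neq 0$, the solution is
$y(t) = e^{\lambda t}\left(y(0) + \frac{c}{\lambda}\right) - \frac{c}{\lambda}.$
Applying this to eq. (68) (with $\lambda = -6$, $c = -\frac{6}{\sqrt{5}}$) and eq. (69) (with $\lambda = -1$, $c = \frac{3}{\sqrt{5}}$):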
$\widetilde{x}_1(t) = e^{-6t}\left(\widetilde{x}_1(0) + \frac{1}{\sqrt{5}}\right) - \frac{1}{\sqrt{5}}$ (70)
$\widetilde{x}_2(t) = e^{-t}\left(\widetilde{x}_2(0) - \frac{3}{\sqrt{5}}\right) + \frac{3}{\sqrt{5}}$ (71)
To get a full expression for $\widetilde{\vec{x}}(t)$, we need to find $\widetilde{\vec{x}}(0)$. By the definition of $\widetilde{\vec{x}}(t)$,
$\widetilde{\vec{x}}(t) = V^{-1}\vec{x}(t) \implies \widetilde{\vec{x}}(0) = V^{-1}\vec{x}(0) = \begin{bmatrix} -\frac{2}{\sqrt{5}} & \frac{1}{\sqrt{5}} \\ \frac{1}{\sqrt{5}} & \frac{2}{\sqrt{5}} \end{bmatrix}^{-1}\begin{bmatrix} 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$ (72)
Hence:
$\widetilde{x}_1(t) = \frac{1}{\sqrt{5}}e^{-6t} - \frac{1}{\sqrt{5}}$ (73)
$\widetilde{x}_2(t) = -\frac{3}{\sqrt{5}}e^{-t} + \frac{3}{\sqrt{5}}$ (74)
or, putting it together,
$\widetilde{\vec{x}}(t) = \begin{bmatrix} \frac{1}{\sqrt{5}}e^{-6t} - \frac{1}{\sqrt{5}} \\ -\frac{3}{\sqrt{5}}e^{-t} + \frac{3}{\sqrt{5}} \end{bmatrix}$ (75)
Transforming back into the standard basis via $\vec{x}(t) = V\widetilde{\vec{x}}(t)$, we find, for $t \geq 0$:
$V_1(t) = 1 - \frac{2}{5}e^{-6t} - \frac{3}{5}e^{-t}$
$V_2(t) = 1 + \frac{1}{5}e^{-6t} - \frac{6}{5}e^{-t}$
Both voltages start at 0V and settle at $V_{in} = 1\text{V}$, as plotted in fig. 6.
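As with the homogeneous case, we can check this against a numerical solver (a sketch using scipy's solve_ivp with the input term included):

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-5.0, 2.0], [2.0, -2.0]])
b = np.array([3.0, 0.0])         # input term for Vin = 1 V
sol = solve_ivp(lambda t, x: A @ x + b, (0.0, 5.0), [0.0, 0.0],
                dense_output=True, rtol=1e-9, atol=1e-12)

t = 1.0
closed_form = [1 - 2/5 * np.exp(-6*t) - 3/5 * np.exp(-t),
               1 + 1/5 * np.exp(-6*t) - 6/5 * np.exp(-t)]
print(sol.sol(t), closed_form)   # the two should agree closely
```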
[Plot: $V_1(t)$ and $V_2(t)$, Voltage (V) vs. Time (t), both rising from 0V toward 1V.]
Figure 6: Initial Conditions: V1 (0) = 0V and V2 (0) = 0V. Nonhomogeneous Case Solution.
Appendix A
We restate the branch equations for the circuit in fig. 1 (with the source set to 0V for $t \geq 0$):
$V_2 = V_1 - I_2R_2$ (79)
$I_2 = C_2\frac{d}{dt}V_2$ (80)
$0 - V_1 = I_1R_1 \implies I_1 = -\frac{V_1}{R_1}$ (81)
$I_1 = I_2 + C_1\frac{d}{dt}V_1$ (82)
Our goal is to rearrange this into a form where the derivatives of our state quantities (voltages in this case)
are expressed in terms of the voltages themselves and the input voltage.8 Keeping this in mind, we start
with the simpler equation to derive:
eq. (80) $\implies \frac{d}{dt}V_2(t) = \frac{I_2}{C_2}$ (83)
eq. (79) $\implies I_2 = \frac{V_1(t)}{R_2} - \frac{V_2(t)}{R_2}$ (84)
$\implies \frac{d}{dt}V_2(t) = \frac{V_1(t)}{R_2C_2} - \frac{V_2(t)}{R_2C_2}$ (85)
And now:
eq. (82) $\implies \frac{d}{dt}V_1(t) = \frac{I_1}{C_1} - \frac{I_2}{C_1}$ (86)
eq. (84), eq. (81) $\implies \frac{d}{dt}V_1(t) = \frac{I_1}{C_1} - \frac{1}{C_1}\left(\frac{V_1(t)}{R_2} - \frac{V_2(t)}{R_2}\right)$ (87)
$= -\frac{V_1(t)}{R_1C_1} - \frac{1}{C_1}\left(\frac{V_1(t)}{R_2} - \frac{V_2(t)}{R_2}\right)$ (88)
Simplifying:
$\frac{d}{dt}V_1(t) = -\frac{V_1(t)}{R_1C_1} - \frac{V_1(t)}{R_2C_1} + \frac{V_2(t)}{R_2C_1}$ (89)
$= -\left(\frac{1}{R_1C_1} + \frac{1}{R_2C_1}\right)V_1(t) + \frac{V_2(t)}{R_2C_1}$ (90)
And these two equations (eq. (90), eq. (85)) exactly match eq. (5) and eq. (6).
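The same elimination can be done symbolically. Here is a sympy sketch that solves the branch equations (79)-(82) for the derivative terms; dV1 and dV2 are stand-in symbols we introduce for $\frac{d}{dt}V_1$ and $\frac{d}{dt}V_2$.

```python
import sympy as sp

V1, V2, I1, I2, dV1, dV2 = sp.symbols('V1 V2 I1 I2 dV1 dV2')
R1, R2, C1, C2 = sp.symbols('R1 R2 C1 C2', positive=True)

# Branch equations (79)-(82), with dV1, dV2 standing in for the derivatives.
eqs = [
    sp.Eq(V2, V1 - I2 * R2),
    sp.Eq(I2, C2 * dV2),
    sp.Eq(I1, -V1 / R1),
    sp.Eq(I1, I2 + C1 * dV1),
]
sol = sp.solve(eqs, [I1, I2, dV1, dV2], dict=True)[0]
print(sp.simplify(sol[dV1]))  # matches eq. (90)
print(sp.simplify(sol[dV2]))  # matches eq. (85)
```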
Contributors:
• Aditya Arun.
• Anant Sahai.
• Nikhil Shinde.
• Jennifer Shih.
• Kareem Ahmad.
• Druv Pai.
• Neelesh Ramachandran.
8 There are multiple ways to arrive at eq. (5) and eq. (6); this is just one valid sequence of steps.