Notes 9 DEs Systems
Using vector-valued and matrix-valued functions, a n × n system of 1st order linear ODEs
can be expressed as follows:
\[
\frac{dx}{dt} = A(t)x + g(t)
\quad\longleftrightarrow\quad
\underbrace{\begin{bmatrix} x_1'(t) \\ x_2'(t) \\ \vdots \\ x_n'(t) \end{bmatrix}}_{x'(t)}
=
\underbrace{\begin{bmatrix}
a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\
a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\
\vdots & \vdots & & \vdots \\
a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t)
\end{bmatrix}}_{A(t)}
\underbrace{\begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{bmatrix}}_{x(t)}
+
\underbrace{\begin{bmatrix} g_1(t) \\ g_2(t) \\ \vdots \\ g_n(t) \end{bmatrix}}_{g(t)}
\]
where aij (t) and gi (t) are known functions, and xi (t) are unknown functions to be found.
The system of 1st order linear ODEs is called homogeneous if g(t) ≡ 0 is a zero
vector. It is called nonhomogeneous if g(t) ≢ 0.
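As a quick numerical sketch (using SciPy; the constant matrix A and the forcing term g(t) = (e^{4t}, 0)ᵀ are hypothetical choices made for easy checking), one can integrate such a nonhomogeneous system and compare against a known particular solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

# x' = A x + g(t) with constant A and forcing g(t) = (e^{4t}, 0)^T
A = np.array([[2.0, -1.0], [-1.0, 2.0]])

def rhs(t, x):
    return A @ x + np.array([np.exp(4 * t), 0.0])

# A particular solution has the form x_p(t) = e^{4t} w with (4I - A) w = (1, 0)^T,
# which is solvable since 4 is not an eigenvalue of A.
w = np.linalg.solve(4 * np.eye(2) - A, np.array([1.0, 0.0]))

# Starting from x(0) = w, the solution is exactly x(t) = e^{4t} w.
sol = solve_ivp(rhs, (0.0, 1.0), w, rtol=1e-10, atol=1e-12)
print(np.allclose(sol.y[:, -1], np.exp(4.0) * w, rtol=1e-6))
```

Here the eigenvalues of A turn out to be 1 and 3, so 4I − A is invertible and the guess e^{4t}w works; with x(0) = w the homogeneous part of the solution vanishes identically.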
In these notes, we outline the basic structure of the solutions of such systems, and
discuss how eigenvalues and eigenvectors in linear algebra play a role in dealing with such
linear systems with constant coefficients.
Some additional applications will also be discussed in the lecture.
• On the other hand, if you have any two solutions x1 (t), x2 (t) of a nonhomogeneous
linear system of ODEs x′ = A(t)x + g(t), then their difference is actually a solution
of the homogeneous linear system x′ = A(t)x, since
\[
(x_1 - x_2)' = \bigl[A(t)x_1 + g(t)\bigr] - \bigl[A(t)x_2 + g(t)\bigr] = A(t)(x_1 - x_2) .
\]
Hence the general solution of the nonhomogeneous system is of the form
\[
x = x_h + x_p ,
\]
where x_p is any particular solution of the nonhomogeneous system and x_h runs through
the solutions of the homogeneous system.
Example Consider the homogeneous 2×2 linear system of 1st order ODEs with constant
coefficients
\[
\begin{cases} x' = 2x - y \\ y' = -x + 2y \end{cases}
\quad\longleftrightarrow\quad
\begin{bmatrix} x' \\ y' \end{bmatrix}
= \begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
\]
From the first equation, y = 2x − x′; substituting this into the second equation, one
finds that x(t) satisfies the 2nd order linear differential equation
\[
x'' - 4x' + 3x = 0 ,
\]
whose characteristic roots are 1 and 3, so that
x = c1 et + c2 e3t
Thus
y(t) = 2x − x′ = 2c1 et + 2c2 e3t − c1 et − 3c2 e3t = c1 et − c2 e3t
The solution
\[
x(t) = c_1 e^{t} + c_2 e^{3t}, \qquad y(t) = c_1 e^{t} - c_2 e^{3t}
\]
can also be expressed in terms of vector-valued functions as follows:
" # " # " # " #
x(t) c1 et + c2 e3t t 1 3t 1
←→ = t t = c1 e + c2 e = c1 x1 (t) + c2 x2 (t)
y(t) c1 e − c2 e 1 −1
which is a linear combination of two linearly independent solutions x1 (t) and x2 (t)
(neither is a scalar multiple of the other); i.e., the solution set of the homogeneous
linear system of ODEs is a two-dimensional vector space.
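One can verify numerically that x1(t) = e^t (1, 1)ᵀ and x2(t) = e^{3t} (1, −1)ᵀ solve the system: since x1′ = x1 and x2′ = 3x2, it suffices to check Ax1 = x1 and Ax2 = 3x2 at sample times (a quick NumPy sketch):

```python
import numpy as np

A = np.array([[2.0, -1.0], [-1.0, 2.0]])

def x1(t):
    # x1(t) = e^t (1, 1)^T, so x1'(t) = 1 * x1(t)
    return np.exp(t) * np.array([1.0, 1.0])

def x2(t):
    # x2(t) = e^{3t} (1, -1)^T, so x2'(t) = 3 * x2(t)
    return np.exp(3 * t) * np.array([1.0, -1.0])

for t in np.linspace(0.0, 1.0, 5):
    assert np.allclose(A @ x1(t), 1.0 * x1(t))  # x1' = A x1
    assert np.allclose(A @ x2(t), 3.0 * x2(t))  # x2' = A x2
print("both solutions satisfy x' = Ax")
```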
Remark This approach requires solving a higher order linear differential equation
obtained from the first order linear system of ODEs by suitable substitution/elimination.
In general, solving a 1st order n × n linear system of ODEs is equivalent to solving an
nth order linear differential equation.
In fact, we expect that the general solution of a homogeneous linear system of 1st
order ODEs can be expressed as a linear combination of n linearly independent solutions
of the system.
Theorem If A(t), g(t) are continuous on an open interval I containing t0 , then the
initial value problem
x′ = A(t)x + g(t), x(t0 ) = v
has a unique solution on the interval I for any given initial vector v at t = t0 .
Suppose x1 (t), . . . , xn (t) are solutions of x′ = Ax whose initial vectors v1 = x1 (t0 ),
. . . , vn = xn (t0 ) are linearly independent, and hence span Rn. Given any solution x(t)
with initial vector x(t0 ) = v, write v = c1 v1 + · · · + cn vn . Then
\[
u(t) = c_1 x_1(t) + \cdots + c_n x_n(t)
\]
is also a solution of x′ = Ax, with the same initial vector u(t0 ) = v. By the uniqueness
of the solution of the IVP, we must have x(t) = u(t); i.e., the general linear combination
c1 x1 (t) + · · · + cn xn (t) does exhaust all possible solutions of x′ = Ax.
• If Φ(t) is a fundamental matrix of x′ = Ax, i.e., a matrix whose columns are n
linearly independent solutions,
\[
\Phi(t) = \begin{bmatrix} x_1(t) & x_2(t) & \cdots & x_n(t) \end{bmatrix},
\]
then the general solution of the homogeneous system can be written in matrix form as
x(t) = Φ(t)c, where c is an arbitrary constant vector. If an initial vector x(t0 ) = v is
given, then x(t0 ) = Φ(t0 )c = v, and hence the solution of the IVP is given by
\[
x(t) = \Phi(t)\Phi^{-1}(t_0)v .
\]
The Wronskian of the n solutions is the determinant
\[
W(t) = W[x_1, \dots, x_n](t) = |\Phi(t)| = \det \begin{bmatrix} x_1(t) & x_2(t) & \cdots & x_n(t) \end{bmatrix}.
\]
Recall that n vectors v1 , . . . , vn taken from Rn are linearly independent if and only
if they form a non-zero determinant
\[
\det \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} \neq 0 .
\]
Equivalently, v1 , . . . , vn are linearly independent if and only if the system of linear equa-
tions
\[
\begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}
\begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} = 0
\]
has only the trivial solution c1 = c2 = · · · = cn = 0.
Note that any linear relation
\[
c_1 x_1(t) + \cdots + c_n x_n(t) = 0
\]
of solutions x1 (t), . . . , xn (t) of x′ = Ax must lead to a linear relation of their initial
vectors at any point t = t0 :
\[
0 = c_1 x_1(t_0) + \cdots + c_n x_n(t_0)
= \begin{bmatrix} x_1(t_0) & x_2(t_0) & \cdots & x_n(t_0) \end{bmatrix}
\begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix}
= \Phi(t_0)c ;
\]
i.e., if the initial vectors at some t = t0 are linearly independent, we must have c1 = c2 =
· · · = cn = 0, and hence x1 (t), . . . , xn (t) must be linearly independent solutions which
form a fundamental matrix.
The following Abel-Liouville Theorem implies that the Wronskian of n solutions of
x′ = Ax on an interval is either identically zero, or never 0.
That is, the Wronskian
\[
W(t) = W(t_0)\, e^{\int_{t_0}^{t} \operatorname{tr} A(s)\, ds}
\]
is always non-zero if W (t0 ) is non-zero at any one initial point t = t0 , and identically
zero if W (t0 ) = 0 at one initial point t = t0 .
The 2 × 2 case is particularly simple. Suppose
\[
x_1' = \begin{bmatrix} x_{11}'(t) \\ x_{21}'(t) \end{bmatrix}
= \begin{bmatrix} a_{11}(t) & a_{12}(t) \\ a_{21}(t) & a_{22}(t) \end{bmatrix}
\begin{bmatrix} x_{11}(t) \\ x_{21}(t) \end{bmatrix}
\quad\text{and}\quad
x_2' = \begin{bmatrix} x_{12}'(t) \\ x_{22}'(t) \end{bmatrix}
= \begin{bmatrix} a_{11}(t) & a_{12}(t) \\ a_{21}(t) & a_{22}(t) \end{bmatrix}
\begin{bmatrix} x_{12}(t) \\ x_{22}(t) \end{bmatrix}
\]
Then
\[
W'(t) = \frac{d}{dt}\begin{vmatrix} x_{11}(t) & x_{12}(t) \\ x_{21}(t) & x_{22}(t) \end{vmatrix}
= \begin{vmatrix} x_{11}'(t) & x_{12}'(t) \\ x_{21}(t) & x_{22}(t) \end{vmatrix}
+ \begin{vmatrix} x_{11}(t) & x_{12}(t) \\ x_{21}'(t) & x_{22}'(t) \end{vmatrix}
\]
\[
= \begin{vmatrix} a_{11}x_{11} + a_{12}x_{21} & a_{11}x_{12} + a_{12}x_{22} \\ x_{21} & x_{22} \end{vmatrix}
+ \begin{vmatrix} x_{11} & x_{12} \\ a_{21}x_{11} + a_{22}x_{21} & a_{21}x_{12} + a_{22}x_{22} \end{vmatrix}
\]
\[
= \begin{vmatrix} a_{11}x_{11} & a_{11}x_{12} \\ x_{21} & x_{22} \end{vmatrix}
+ \begin{vmatrix} x_{11} & x_{12} \\ a_{22}x_{21} & a_{22}x_{22} \end{vmatrix}
= \bigl(a_{11}(t) + a_{22}(t)\bigr) W(t)
\]
(In the last step, a12 times the second row is subtracted from the first row of the first
determinant, and a21 times the first row is subtracted from the second row of the second
determinant.)
Exercise Work out the 3 × 3 case of the Abel-Liouville Theorem, and see how suitable
elementary row operations on determinants can lead to W ′ (t) = [trA(t)]W (t), which
actually works also for the n × n case.
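For a constant matrix A, the relation W′ = (tr A)W integrates to W(t) = W(0) e^{(tr A)t}. As a numerical sketch, this can be checked with NumPy using the two solutions e^t(1, 1)ᵀ and e^{3t}(1, −1)ᵀ of the earlier 2 × 2 example as the columns of Φ(t):

```python
import numpy as np

A = np.array([[2.0, -1.0], [-1.0, 2.0]])  # tr A = 4

def Phi(t):
    # columns: the solutions e^t (1,1)^T and e^{3t} (1,-1)^T
    return np.array([[np.exp(t), np.exp(3 * t)],
                     [np.exp(t), -np.exp(3 * t)]])

def W(t):
    return np.linalg.det(Phi(t))

# Abel-Liouville: W(t) = W(0) * e^{(tr A) t}   (here W(0) = -2)
for t in [0.0, 0.5, 1.0]:
    assert np.isclose(W(t), W(0.0) * np.exp(np.trace(A) * t))
print("W(t) = W(0) e^{4t} verified")
```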
Putting a vector-valued function of the form x(t) = eλt v, with x′ (t) = λeλt v, into the
system x′ (t) = Ax, one arrives at an “eigenvalue-eigenvector” problem:
\[
\lambda e^{\lambda t} v = A e^{\lambda t} v \iff Av = \lambda v .
\]
Consequently, if you can find the eigenvalues λ and corresponding eigenvectors v ≠ 0 of
the matrix A (so that Av = λv), solutions of the form eλt v can be found.
The only problem is whether there are n such linearly independent solutions to gen-
erate all solutions by taking linear combinations.
Back to the Previous Example
( " # " #" #
x′ = 2x − y x′ 2 −1 x
←→ ′ =
y ′ = −x + 2y y −1 2 y
" #
2 −1
Here A = , and solving Av = λv is considering
−1 2
" #" # " # ( (
2 −1 v1 λv1 2v1 − v2 = λv1 (2 − λ)v1 − v2 = 0
= ⇐⇒ ⇐⇒
−1 2 v2 λv2 −v1 + 2v2 = λv2 −v1 + (2 − λ)v2 = 0
" #" # " #
2 − λ −1 v1 0
⇐⇒ =
−1 2 − λ v2 0
2 − λ −1
= 0 ⇐⇒ (2 − λ)2 − 1 = 0 ⇐⇒ λ = 1, 3
−1 2 − λ
(Note that this is the same as the characteristic equation of the 2nd order linear ODE
considered in the previous example.)
Now, taking λ = 1, the equations for the corresponding eigenvectors are
\[
\begin{cases} (2-1)v_1 - v_2 = 0 \\ -v_1 + (2-1)v_2 = 0 \end{cases}
\iff
\begin{cases} v_1 - v_2 = 0 \\ -v_1 + v_2 = 0 \end{cases}
\iff v_1 = v_2 ,
\]
so v = (1, 1)ᵀ is an eigenvector, giving the solution x1 (t) = et (1, 1)ᵀ. Similarly, λ = 3
gives v2 = −v1 , an eigenvector v = (1, −1)ᵀ, and the solution x2 (t) = e3t (1, −1)ᵀ.
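The eigenvalues and eigenvectors can also be checked numerically (a NumPy sketch; note that numpy.linalg.eig returns normalized eigenvectors, so they agree with (1, 1)ᵀ and (1, −1)ᵀ only up to scaling):

```python
import numpy as np

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
vals, vecs = np.linalg.eig(A)

# eigenvalues should be 1 and 3 (numpy does not guarantee their order)
assert np.allclose(np.sort(vals), [1.0, 3.0])

# each column of vecs is an eigenvector: A v = lambda v
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
print("eigenvalues:", np.sort(vals))
```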
In fact, it is not hard to see that these combinations can generate all possible initial
vectors, say at t = 0:
" # " # " #" # " #
1 1 1 1 c1 v1
x(0) = c1 + c2 = v ⇐⇒ =
1 −1 1 −1 c2 v2
When putting together the two fundamental solutions to form a fundamental matrix,
we have
\[
\Phi(t) = \begin{bmatrix} e^{t} & e^{3t} \\ e^{t} & -e^{3t} \end{bmatrix},
\]
Recall here that given a fundamental matrix, the solution of an initial value problem
is given by:
\[
\begin{cases} x' = Ax \\ x(t_0) = v \end{cases}
\iff x(t) = \Phi(t)\Phi^{-1}(t_0)v
\]
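The recipe x(t) = Φ(t)Φ⁻¹(t0)v is easy to test numerically; here is a sketch with a hypothetical initial vector v = (2, 0)ᵀ at t0 = 0, for which c = Φ(0)⁻¹v = (1, 1)ᵀ:

```python
import numpy as np

def Phi(t):
    return np.array([[np.exp(t), np.exp(3 * t)],
                     [np.exp(t), -np.exp(3 * t)]])

t0 = 0.0
v = np.array([2.0, 0.0])          # hypothetical initial vector
c = np.linalg.solve(Phi(t0), v)   # c = Phi(t0)^{-1} v; here c = (1, 1)

def x(t):
    return Phi(t) @ c

assert np.allclose(x(t0), v)      # initial condition holds
# agrees with the closed-form solution for c1 = c2 = 1
t = 0.7
expected = np.exp(t) * np.array([1.0, 1.0]) + np.exp(3 * t) * np.array([1.0, -1.0])
assert np.allclose(x(t), expected)
print("x(t) = Phi(t) Phi(t0)^{-1} v reproduces the closed-form solution")
```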
Some Basic Results About Eigenvalues-Eigenvectors
• Similar matrices share the same characteristic polynomial, and hence the same
eigenvalues:
\[
|P^{-1}AP - \lambda I| = |P^{-1}(A - \lambda I)P| = |P^{-1}|\,|A - \lambda I|\,|P| = |A - \lambda I| ,
\]
since |P −1 | = 1/|P |.
• The eigenvectors corresponding to an eigenvalue λ of A are the non-zero solutions
v of (A − λI)v = 0, or equivalently the set of non-zero vectors in the kernel (or null
space) of A − λI.
• dim ker (A − λI) for any eigenvalue λ of A is called the geometric dimension of
the eigenvalue. Note that if |A − λI| = (λ − λi )ki p(λ), where p(λi ) ≠ 0, then the
geometric dimension of λi cannot exceed the “algebraic dimension” ki of λi ; i.e.,
dim ker (A − λi I) ≤ ki .
• An n × n matrix A may or may not have n distinct eigenvalues. Over the field of
complex numbers, by the Fundamental Theorem of Algebra, which states that any
complex polynomial of degree greater than 0 has a (complex) root, the characteristic
polynomial can be factored as
\[
|A - \lambda I| = (\lambda_1 - \lambda)(\lambda_2 - \lambda)\cdots(\lambda_n - \lambda) .
\]
If A has n linearly independent eigenvectors v1 , . . . , vn , with Avi = λi vi , then
\[
\begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}^{-1}
A
\begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}
= \begin{bmatrix}
\lambda_1 & & & \\
& \lambda_2 & & \\
& & \ddots & \\
& & & \lambda_n
\end{bmatrix}
\]
Here are some examples on 3 × 3 1st order linear systems of ODEs by working with
eigenvalues-eigenvectors.
Example The following matrix has three distinct eigenvalues λ = −2, 1, 3, with corre-
sponding eigenvectors given as follows:
\[
A = \begin{bmatrix} 1 & -1 & 4 \\ 3 & 2 & -1 \\ 2 & 1 & -1 \end{bmatrix},
\qquad
\underbrace{\begin{bmatrix} 1 \\ -1 \\ -1 \end{bmatrix}}_{\lambda = -2},\quad
\underbrace{\begin{bmatrix} 1 \\ -4 \\ -1 \end{bmatrix}}_{\lambda = 1},\quad
\underbrace{\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}}_{\lambda = 3}
\]
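Each claimed eigenpair can be verified by checking Av = λv directly (a NumPy sketch):

```python
import numpy as np

A = np.array([[1.0, -1.0, 4.0],
              [3.0, 2.0, -1.0],
              [2.0, 1.0, -1.0]])
pairs = [(-2.0, np.array([1.0, -1.0, -1.0])),
         (1.0,  np.array([1.0, -4.0, -1.0])),
         (3.0,  np.array([1.0, 2.0, 1.0]))]

for lam, v in pairs:
    # A v should equal lambda v for an eigenpair (lambda, v)
    assert np.allclose(A @ v, lam * v)
print("all three eigenpairs check out")
```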
67
Example The following matrix has only two distinct eigenvalues λ = −1, 1, but three
linearly independent eigenvectors can still be found:
\[
A = \begin{bmatrix} 1 & -2 & 2 \\ -2 & 1 & -2 \\ -2 & 2 & -3 \end{bmatrix},
\qquad
\underbrace{\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}}_{\lambda = -1},\quad
\underbrace{\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}}_{\lambda = -1},\quad
\underbrace{\begin{bmatrix} -1 \\ 1 \\ 1 \end{bmatrix}}_{\lambda = 1}
\]
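Again one can check Av = λv for each pair, and that the three eigenvectors are linearly independent (non-zero determinant), in a short NumPy sketch:

```python
import numpy as np

A = np.array([[1.0, -2.0, 2.0],
              [-2.0, 1.0, -2.0],
              [-2.0, 2.0, -3.0]])
pairs = [(-1.0, np.array([1.0, 1.0, 0.0])),
         (-1.0, np.array([-1.0, 0.0, 1.0])),
         (1.0,  np.array([-1.0, 1.0, 1.0]))]

for lam, v in pairs:
    assert np.allclose(A @ v, lam * v)

# the eigenvectors form a matrix with non-zero determinant,
# hence are linearly independent
V = np.column_stack([v for _, v in pairs])
assert abs(np.linalg.det(V)) > 1e-12
print("three linearly independent eigenvectors confirmed")
```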
Example The following matrix has one real and two complex eigenvalues λ = 1, 2 ± i,
with some corresponding eigenvectors given as follows:
(or, the same as taking the linear combinations ½(1st soln + 2nd soln) and
(1/2i)(1st soln − 2nd soln).)
So, the general solution of x′ = Ax is:
\[
x(t) = c_1 e^{t} \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
+ c_2 e^{2t} \begin{bmatrix} 0 \\ \sin t \\ \cos t \end{bmatrix}
+ c_3 e^{2t} \begin{bmatrix} 0 \\ -\cos t \\ \sin t \end{bmatrix}
\]
Remark The discussion above can be rephrased in terms of the so called “Jordan
Canonical Form” of A.
Remark One could, of course, find a particular solution by the method of undetermined
coefficients: just put
\[
\begin{bmatrix} a \\ b \end{bmatrix} e^{4t}
+ \begin{bmatrix} c \\ d \end{bmatrix} e^{5t}
\]
into the nonhomogeneous system and determine a, b, c, d.
Exercise Show that the exponential matrix for an n × n matrix A, defined by
\[
\Phi(t) = e^{tA} = I + tA + \frac{t^2}{2!}A^2 + \cdots = \sum_{k=0}^{\infty} \frac{t^k}{k!}A^k ,
\]
satisfies
Φ′ (t) = AΦ(t), Φ(0) = I .
Hence the columns of etA form a fundamental set of solutions of the linear system x′ = Ax.
" # " #
cos t − sin t 0 −1
Exercise Show that etA = , where A = .
sin t cos t 1 0
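The claimed formula can be confirmed numerically with scipy.linalg.expm (a quick sketch):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0], [1.0, 0.0]])

for t in [0.0, 0.3, 1.5]:
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    # e^{tA} should be the rotation matrix by angle t
    assert np.allclose(expm(t * A), R)
print("e^{tA} matches the rotation matrix")
```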
(Find it algebraically, and also by solving a suitable linear system of ODEs.)
If the characteristic polynomial of A is
\[
|A - \lambda I| = (-1)^n \left( \lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1\lambda + a_0 \right),
\]
then by the Cayley–Hamilton Theorem,
\[
a_0 I + a_1 A + \cdots + a_{n-1}A^{n-1} + A^n = O ,
\]
and hence A^{n+k}, for any k ≥ 0, can be expressed as a linear combination of
I, A, . . . , A^{n−1}. Consequently, the exponential matrix
\[
e^{A} = I + A + \frac{1}{2!}A^2 + \cdots
\]
can be written as
\[
e^{A} = c_0 I + c_1 A + c_2 A^2 + \cdots + c_{n-1}A^{n-1} ,
\]
or
\[
e^{A} = c_0 I + c_1 A + c_2 A^2 + \cdots + c_k A^k
\]
for some 0 ≤ k ≤ n − 1, if the “minimal polynomial” h(λ) of A is considered; i.e., the
polynomial h(λ) of least degree such that h(A) = O.
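The Cayley–Hamilton identity underlying this is easy to check numerically; here is a sketch for the 2 × 2 matrix from the earlier example, whose characteristic polynomial is λ² − 4λ + 3:

```python
import numpy as np

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
n = A.shape[0]

# characteristic polynomial coefficients (highest degree first):
# lambda^2 - 4 lambda + 3, i.e. [1, -4, 3]
coeffs = np.poly(A)
assert np.allclose(coeffs, [1.0, -4.0, 3.0])

# Cayley-Hamilton: A^2 - 4A + 3I = O
cayley = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
assert np.allclose(cayley, np.zeros((n, n)))
print("Cayley-Hamilton identity holds")
```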
Using the exponential matrix, the solution of x′ = Ax can be expressed as x = etA c.