
Notes 9

Systems of 1st Order Linear ODEs

Using vector-valued and matrix-valued functions, an n × n system of 1st order linear ODEs
can be expressed as follows:

$$\frac{d\mathbf{x}}{dt} = A(t)\mathbf{x} + \mathbf{g}(t)
\quad\longleftrightarrow\quad
\underbrace{\begin{bmatrix} x_1'(t) \\ x_2'(t) \\ \vdots \\ x_n'(t) \end{bmatrix}}_{\mathbf{x}'(t)}
=
\underbrace{\begin{bmatrix}
a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\
a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\
\vdots & \vdots & & \vdots \\
a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t)
\end{bmatrix}}_{A(t)}
\underbrace{\begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{bmatrix}}_{\mathbf{x}(t)}
+
\underbrace{\begin{bmatrix} g_1(t) \\ g_2(t) \\ \vdots \\ g_n(t) \end{bmatrix}}_{\mathbf{g}(t)}$$

where aij (t) and gi (t) are known functions, and xi (t) are unknown functions to be found.
The system of 1st order linear ODEs is called homogeneous if g(t) ≡ 0 is the zero
vector. It is called nonhomogeneous if g(t) ≢ 0.
In these notes, we outline the basic structure of the solutions of such systems, and
discuss how eigenvalues and eigenvectors in linear algebra play a role in dealing with such
linear systems with constant coefficients.
Some additional applications will also be discussed in the lecture.

9.1 Basic Structure of the Solution Sets of 1st Order Linear Systems of Differential Equations
• Superposition of Solutions (Linear Structure)
It is easy to check that if x1 (t) and x2 (t) are two solutions of the homogeneous linear
system of ODEs x′ = A(t)x, then so is their linear combination c1 x1 (t) + c2 x2 (t)
for any constants c1 , c2 , since

$$[c_1\mathbf{x}_1(t) + c_2\mathbf{x}_2(t)]' = c_1\mathbf{x}_1'(t) + c_2\mathbf{x}_2'(t) = c_1 A(t)\mathbf{x}_1 + c_2 A(t)\mathbf{x}_2 = A(t)[c_1\mathbf{x}_1 + c_2\mathbf{x}_2]$$

Note also that x = 0 is an obvious solution of x′ = A(t)x.


In the language of linear algebra, the solutions of the homogeneous system x′ =
A(t)x form a vector space under the addition and scalar multiplication of solutions.


• On the other hand, if you have any two solutions x1 (t), x2 (t) of a non-homogeneous
linear system of ODEs x′ = A(t)x + g(t), then their difference is actually a solution
of the homogeneous linear system x′ = A(t)x, since obviously

[x1 (t) − x2 (t)]′ = [A(t)x1 + g(t)] − [A(t)x2 + g(t)] = A(t)[x1 − x2 ]

• Consequently, if xh is the general solution of the homogeneous system x′ = A(t)x,
and xp is one particular solution of the nonhomogeneous system x′ = A(t)x + g(t),
then the general solution of the nonhomogeneous system is given by

$$\mathbf{x} = \mathbf{x}_h + \mathbf{x}_p$$

Example Consider the homogeneous 2 × 2 linear system of 1st order ODEs with constant
coefficients

$$\begin{cases} x' = 2x - y \\ y' = -x + 2y \end{cases}
\quad\longleftrightarrow\quad
\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix}$$
By substituting y = 2x − x′ (from the first equation) into the second equation, x(t) can
be found by solving a second order linear differential equation:

(2x − x′ )′ = −x + 2(2x − x′ ) ⇐⇒ x′′ − 4x′ + 3x = 0

with characteristic equation $r^2 - 4r + 3 = (r-1)(r-3) = 0$, and hence the solution is

$$x = c_1 e^t + c_2 e^{3t}$$

Thus

$$y(t) = 2x - x' = 2c_1 e^t + 2c_2 e^{3t} - c_1 e^t - 3c_2 e^{3t} = c_1 e^t - c_2 e^{3t}$$

The solution

$$\begin{cases} x(t) = c_1 e^t + c_2 e^{3t} \\ y(t) = c_1 e^t - c_2 e^{3t} \end{cases}$$

can also be expressed in terms of vector-valued functions as follows:

$$\begin{bmatrix} x(t) \\ y(t) \end{bmatrix}
= \begin{bmatrix} c_1 e^t + c_2 e^{3t} \\ c_1 e^t - c_2 e^{3t} \end{bmatrix}
= c_1 e^t \begin{bmatrix} 1 \\ 1 \end{bmatrix} + c_2 e^{3t} \begin{bmatrix} 1 \\ -1 \end{bmatrix}
= c_1 \mathbf{x}_1(t) + c_2 \mathbf{x}_2(t)$$

which is a linear combination of two linearly independent solutions x1(t) and x2(t)
(neither being a scalar multiple of the other); i.e., the solution set of the homogeneous
linear system of ODEs is a two-dimensional vector space.
Remark This approach requires solving a higher order linear differential equation
obtained from the first order linear system of ODEs by suitable substitution/elimination.
In general, solving a 1st order n × n linear system of ODEs is equivalent to solving an
nth order linear differential equation.
In fact, we expect that the general solution of a homogeneous linear system of 1st
order ODEs can be expressed as a linear combination of n linearly independent solutions
of the system.
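As a quick sanity check of this example, one can also integrate the system numerically and compare against the closed-form solution found above. The following sketch is an addition to these notes and assumes NumPy and SciPy are available:

import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[2.0, -1.0], [-1.0, 2.0]])

# Initial condition x(0) = (2, 0) corresponds to c1 = c2 = 1,
# since x(0) = c1*(1, 1) + c2*(1, -1).
v0 = np.array([2.0, 0.0])

sol = solve_ivp(lambda t, x: A @ x, (0.0, 1.0), v0, dense_output=True)

t = 1.0
numeric = sol.sol(t)
exact = np.array([np.exp(t) + np.exp(3*t), np.exp(t) - np.exp(3*t)])
print(numeric, exact)  # the two agree to integrator tolerance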

The following fundamental Existence and Uniqueness Theorem can be obtained
by applying the general method of Picard iteration again to the initial value problem:

Theorem If A(t), g(t) are continuous on an open interval I containing t0 , then the
initial value problem
x′ = A(t)x + g(t), x(t0 ) = v
has a unique solution on the interval I for any given initial vector v at t = t0 .

Some consequences of the theorem above are:

• If we consider a homogeneous linear system x′ = Ax together with n linearly
independent initial vectors v1, v2, · · · , vn which form a basis of ℝⁿ, then there is a
unique solution for each of these n initial value problems; i.e., there exists a unique
solution xi(t) for each 1 ≤ i ≤ n which satisfies x′i = Axi and xi(t0) = vi.

• The general solution of the system is then given by

x(t) = c1 x1 (t) + c2 x2 (t) + · · · + cn xn (t)

where c1 , c2 , · · · , cn are arbitrary constants.

In fact, if x′ = Ax, then consider v = x(t0). Since v1, . . . , vn form a basis of ℝⁿ, v
can be written as a linear combination of these basis vectors:

$$\mathbf{v} = c_1 \mathbf{v}_1 + \cdots + c_n \mathbf{v}_n$$

for some constants c1 , c2 , · · · , cn . Note that the vector-valued function

u(t) = c1 x1 (t) + · · · + cn xn (t)

is also a solution of x′ = Ax, with the same initial vector u(t0 ) = v. By the uniqueness
of the solution of the IVP, we must have x(t) = u(t); i.e., the general linear combination
c1 x1 (t) + · · · + cn xn (t) does exhaust all possible solutions of x′ = Ax.

Fundamental Matrix of an n × n linear system x′ = Ax

Let

$$\Phi(t) = \begin{bmatrix} | & | & & | \\ \mathbf{x}_1(t) & \mathbf{x}_2(t) & \cdots & \mathbf{x}_n(t) \\ | & | & & | \end{bmatrix}$$

be a matrix whose columns are solutions of an n × n homogeneous linear system x′ = Ax.

• Φ(t) is called a fundamental matrix of the system x′ = Ax if x1(t), . . . , xn(t) are
linearly independent solutions of the system; i.e., none of these solutions can be
written as a linear combination of the others.

• A set of fundamental solutions of x′ = Ax is just a set of n linearly independent
solutions of the system, which can form a fundamental matrix.

• If Φ(t) is a fundamental matrix of x′ = Ax, then the general solution of the homogeneous system can be written in matrix form as follows:

$$\mathbf{x}(t) = c_1\mathbf{x}_1(t) + \cdots + c_n\mathbf{x}_n(t) = \Phi(t)\mathbf{c}.$$

If an initial vector x(t0) = v is given, then x(t0) = Φ(t0)c = v, and hence the
solution of the IVP is given by

$$\mathbf{x}(t) = \Phi(t)[\Phi(t_0)]^{-1}\mathbf{v}.$$

• The Wronskian of n solutions x1(t), . . . , xn(t) of x′ = Ax is defined accordingly
by the following determinant:

$$W(t) = W[\mathbf{x}_1, \dots, \mathbf{x}_n](t) = \det\Phi(t) = \begin{vmatrix} | & | & & | \\ \mathbf{x}_1(t) & \mathbf{x}_2(t) & \cdots & \mathbf{x}_n(t) \\ | & | & & | \end{vmatrix}$$

Recall that n vectors v1, . . . , vn taken from ℝⁿ are linearly independent if and only
if they form a non-zero determinant

$$\begin{vmatrix} | & | & & | \\ \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_n \\ | & | & & | \end{vmatrix} \neq 0$$

Equivalently, v1, . . . , vn are linearly independent if and only if the system of linear equations

$$\begin{bmatrix} | & | & & | \\ \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_n \\ | & | & & | \end{bmatrix} \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} = \mathbf{0}$$

has only the trivial solution c1 = c2 = · · · = cn = 0.
Note that any linear relation

$$c_1 \mathbf{x}_1(t) + \cdots + c_n \mathbf{x}_n(t) = \mathbf{0}$$

of solutions x1(t), . . . , xn(t) of x′ = Ax must lead to a linear relation of their initial
vectors at any point t = t0:

$$\mathbf{0} = c_1 \mathbf{x}_1(t_0) + \cdots + c_n \mathbf{x}_n(t_0) = \begin{bmatrix} | & | & & | \\ \mathbf{x}_1(t_0) & \mathbf{x}_2(t_0) & \cdots & \mathbf{x}_n(t_0) \\ | & | & & | \end{bmatrix} \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} = \Phi(t_0)\mathbf{c}$$

i.e., if the initial vectors at some point t = t0 are linearly independent, we must have
c1 = c2 = · · · = cn = 0, and hence x1(t), . . . , xn(t) must be linearly independent solutions
which form a fundamental matrix.
The following Abel-Liouville Theorem implies that the Wronskian of n solutions of
x′ = Ax on an interval is either identically zero, or never 0.

Abel-Liouville Theorem The Wronskian of any n solutions of an n × n homogeneous
linear system x′ = Ax satisfies the following first order linear differential equation

$$\frac{dW}{dt} = [\operatorname{tr} A(t)]\,W(t)$$

where the trace of the matrix A(t), denoted by tr A(t), is the sum of the diagonal entries
of A(t). In particular,

$$W(t) = W(t_0)\,e^{\int_{t_0}^{t} \operatorname{tr} A(u)\,du}$$

is always non-zero if W(t0) is non-zero at any one initial point t = t0, and identically
zero if W(t0) = 0 at one initial point t = t0.
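As an added illustration (not part of the original notes), the Abel-Liouville formula can be verified numerically for the earlier example, where A = [[2, −1], [−1, 2]] is constant with tr A = 4; this sketch assumes NumPy:

import numpy as np

# Fundamental matrix of x' = Ax for A = [[2, -1], [-1, 2]], tr A = 4
def Phi(t):
    return np.array([[np.exp(t),  np.exp(3*t)],
                     [np.exp(t), -np.exp(3*t)]])

for t in [0.0, 0.5, 1.0]:
    W = np.linalg.det(Phi(t))
    predicted = np.linalg.det(Phi(0.0)) * np.exp(4*t)  # W(0) e^{integral of tr A}
    print(t, W, predicted)  # agree: W(t) = -2 e^{4t}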
The 2 × 2 case is particularly simple. Suppose

$$\mathbf{x}_1' = \begin{bmatrix} x_{11}'(t) \\ x_{21}'(t) \end{bmatrix} = \begin{bmatrix} a_{11}(t) & a_{12}(t) \\ a_{21}(t) & a_{22}(t) \end{bmatrix}\begin{bmatrix} x_{11}(t) \\ x_{21}(t) \end{bmatrix}
\quad\text{and}\quad
\mathbf{x}_2' = \begin{bmatrix} x_{12}'(t) \\ x_{22}'(t) \end{bmatrix} = \begin{bmatrix} a_{11}(t) & a_{12}(t) \\ a_{21}(t) & a_{22}(t) \end{bmatrix}\begin{bmatrix} x_{12}(t) \\ x_{22}(t) \end{bmatrix}$$

Then

$$W'(t) = \frac{d}{dt}\begin{vmatrix} x_{11}(t) & x_{12}(t) \\ x_{21}(t) & x_{22}(t) \end{vmatrix}
= \begin{vmatrix} x_{11}'(t) & x_{12}'(t) \\ x_{21}(t) & x_{22}(t) \end{vmatrix}
+ \begin{vmatrix} x_{11}(t) & x_{12}(t) \\ x_{21}'(t) & x_{22}'(t) \end{vmatrix}$$

$$= \begin{vmatrix} a_{11}x_{11} + a_{12}x_{21} & a_{11}x_{12} + a_{12}x_{22} \\ x_{21} & x_{22} \end{vmatrix}
+ \begin{vmatrix} x_{11} & x_{12} \\ a_{21}x_{11} + a_{22}x_{21} & a_{21}x_{12} + a_{22}x_{22} \end{vmatrix}$$

Note that the right hand side of the equation can be simplified by elementary row operations on the determinants (subtracting $a_{12}(t)$ times the second row from the first row of the first determinant, and $a_{21}(t)$ times the first row from the second row of the second determinant) to the form

$$\begin{vmatrix} a_{11}x_{11} & a_{11}x_{12} \\ x_{21} & x_{22} \end{vmatrix}
+ \begin{vmatrix} x_{11} & x_{12} \\ a_{22}x_{21} & a_{22}x_{22} \end{vmatrix}
= (a_{11}(t) + a_{22}(t))\,W(t)$$

i.e., W′(t) = [tr A(t)]W(t) as expected.

Exercise Work out the 3 × 3 case of the Abel-Liouville Theorem, and see how suitable
elementary row operations on determinants can lead to W ′ (t) = [trA(t)]W (t), which
actually works also for the n × n case.

9.2 Eigenvalues-Eigenvectors and 1st Order Homogeneous Linear Systems with Constant Coefficients
The previous example suggests that the general solution of a 1st order homogeneous linear
system with constant coefficients x′ (t) = Ax, where A is a constant matrix, may look like
a linear combination of solutions of the form x(t) = eλt v, where λ is a number, and v is
a non-zero vector.

Putting a vector-valued function of the form x(t) = e^{λt}v, with x′(t) = λe^{λt}v, into the
system x′(t) = Ax, one arrives at an “eigenvalue-eigenvector” problem:

$$\lambda e^{\lambda t}\mathbf{v} = A(e^{\lambda t}\mathbf{v}) = e^{\lambda t}A\mathbf{v} \iff A\mathbf{v} = \lambda\mathbf{v}$$

Consequently, if you can find the eigenvalues λ and corresponding eigenvectors v ≠ 0 of
the matrix A (so that Av = λv), solutions of the form e^{λt}v can be found.
The only problem is whether there are n such linearly independent solutions to generate
all solutions by taking linear combinations.
Back to the Previous Example

$$\begin{cases} x' = 2x - y \\ y' = -x + 2y \end{cases}
\quad\longleftrightarrow\quad
\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix}$$

Here $A = \begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}$, and solving Av = λv means considering

$$\begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} \lambda v_1 \\ \lambda v_2 \end{bmatrix}
\iff \begin{cases} 2v_1 - v_2 = \lambda v_1 \\ -v_1 + 2v_2 = \lambda v_2 \end{cases}
\iff \begin{cases} (2-\lambda)v_1 - v_2 = 0 \\ -v_1 + (2-\lambda)v_2 = 0 \end{cases}$$

$$\iff \begin{bmatrix} 2-\lambda & -1 \\ -1 & 2-\lambda \end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

To have a nonzero solution v ≠ 0, we must have

$$\begin{vmatrix} 2-\lambda & -1 \\ -1 & 2-\lambda \end{vmatrix} = 0 \iff (2-\lambda)^2 - 1 = 0 \iff \lambda = 1, 3$$

(Note that this is the same as the characteristic equation of the 2nd order linear ODE
considered in the previous example.)
Now, taking λ = 1, the equations for the corresponding eigenvectors are

$$\begin{cases} (2-1)v_1 - v_2 = 0 \\ -v_1 + (2-1)v_2 = 0 \end{cases}
\iff \begin{cases} v_1 - v_2 = 0 \\ -v_1 + v_2 = 0 \end{cases}
\iff v_1 = v_2$$

i.e., the eigenvectors corresponding to the eigenvalue λ = 1 are given by

$$\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} v_1 \\ v_1 \end{bmatrix} = v_1 \begin{bmatrix} 1 \\ 1 \end{bmatrix},
\quad\text{a non-zero multiple of } \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$

Hence $e^t \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ is a solution of the given system.
Similarly, taking λ = 3, the equations for the corresponding eigenvectors are

$$\begin{cases} (2-3)v_1 - v_2 = 0 \\ -v_1 + (2-3)v_2 = 0 \end{cases}
\iff \begin{cases} -v_1 - v_2 = 0 \\ -v_1 - v_2 = 0 \end{cases}
\iff v_1 = -v_2$$

i.e., the eigenvectors corresponding to the eigenvalue λ = 3 are given by

$$\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} v_1 \\ -v_1 \end{bmatrix} = v_1 \begin{bmatrix} 1 \\ -1 \end{bmatrix},
\quad\text{a non-zero multiple of } \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$

Hence $e^{3t} \begin{bmatrix} 1 \\ -1 \end{bmatrix}$ is another solution of the given system.
The two solutions are easily seen to be linearly independent, and hence their linear
combination gives the general solution:

$$\mathbf{x}(t) = c_1 e^t \begin{bmatrix} 1 \\ 1 \end{bmatrix} + c_2 e^{3t} \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$

In fact, it is not hard to see that these combinations can generate all possible initial
vectors, say at t = 0:

$$\mathbf{x}(0) = c_1 \begin{bmatrix} 1 \\ 1 \end{bmatrix} + c_2 \begin{bmatrix} 1 \\ -1 \end{bmatrix} = \mathbf{v}
\iff \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$$

is always solvable for any given initial vector v ∈ ℝ².

When putting together the two fundamental solutions to form a fundamental matrix,
we have

$$\Phi(t) = \begin{bmatrix} e^t & e^{3t} \\ e^t & -e^{3t} \end{bmatrix}$$

Recall here that given a fundamental matrix, the solution of an initial value problem
is given by:

$$\begin{cases} \mathbf{x}' = A\mathbf{x} \\ \mathbf{x}(t_0) = \mathbf{v} \end{cases}
\iff \mathbf{x}(t) = \Phi(t)\Phi^{-1}(t_0)\mathbf{v}$$
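The eigenvalue computation and the IVP formula above can be reproduced numerically; a minimal sketch (an addition to the notes) assuming NumPy:

import numpy as np

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)  # columns of eigvecs are eigenvectors
print(eigvals)    # 1 and 3 (order may vary)
print(eigvecs)    # scalar multiples of (1, 1) and (1, -1)

# Solve the IVP x' = Ax, x(0) = v via x(t) = Phi(t) Phi(0)^{-1} v
def Phi(t):
    return np.array([[np.exp(t),  np.exp(3*t)],
                     [np.exp(t), -np.exp(3*t)]])

v = np.array([2.0, 0.0])
t = 1.0
print(Phi(t) @ np.linalg.solve(Phi(0.0), v))  # = e^t (1,1) + e^{3t} (1,-1)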
Some Basic Results About Eigenvalues-Eigenvectors

• For any n × n matrix A and constant λ, Av = λv has a non-trivial solution v ≠ 0 if
and only if |A − λI| = 0. Hence the eigenvalues of A are the roots of the characteristic
polynomial $|\lambda I - A| = (-1)^n |A - \lambda I|$, which is a degree n polynomial.
Note that similar matrices have the same characteristic polynomial:

$$|P^{-1}AP - \lambda I| = |P^{-1}(A - \lambda I)P| = |P^{-1}|\,|A - \lambda I|\,|P| = |A - \lambda I|$$

since $|P^{-1}| = 1/|P|$.

• If λ is an eigenvalue of A, then the eigenvectors of A corresponding to λ are given
by the non-trivial solutions of Av = λv, i.e., the set of vectors

$$\{\mathbf{v} \in \mathbb{R}^n \setminus \{0\} : A\mathbf{v} = \lambda\mathbf{v}\} = \ker(A - \lambda I) \setminus \{0\},$$

or equivalently the set of non-zero vectors in the kernel (or null space) of A − λI.

• dim ker(A − λI) for any eigenvalue λ of A is called the geometric dimension of
the eigenvalue. Note that if $|A - \lambda I| = (\lambda - \lambda_i)^{k_i} p(\lambda)$, where $p(\lambda_i) \neq 0$, then the
geometric dimension of λi cannot exceed the “algebraic dimension” ki of λi; i.e.,

$$\dim \ker(A - \lambda_i I) \le k_i.$$

• The eigenvectors v1, . . . , vk of an n × n matrix A corresponding to distinct eigenvalues λ1, . . . , λk are linearly independent.

• An n × n matrix A may or may not have n distinct eigenvalues. Over the field of
complex numbers, by the Fundamental Theorem of Algebra, which states that any
complex polynomial of degree greater than 0 has a (complex) root, the characteristic
polynomial can be factored as

$$|\lambda I - A| = (\lambda - \lambda_1)^{k_1} \cdots (\lambda - \lambda_m)^{k_m}$$

for some distinct roots λ1, . . . , λm, where k1 + · · · + km = n.

• Thus if the characteristic polynomial can be factored as

$$|\lambda I - A| = (\lambda - \lambda_1)^{k_1} \cdots (\lambda - \lambda_m)^{k_m}$$

for some distinct real roots λ1, . . . , λm, where k1 + · · · + km = n, then A has n
linearly independent real eigenvectors if and only if the geometric dimension of
each eigenvalue is equal to its algebraic dimension.

• If v1, . . . , vn are linearly independent eigenvectors of an n × n matrix A corresponding to the eigenvalues λ1, . . . , λn, then

$$A \begin{bmatrix} | & | & & | \\ \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_n \\ | & | & & | \end{bmatrix}
= \begin{bmatrix} | & | & & | \\ \lambda_1\mathbf{v}_1 & \lambda_2\mathbf{v}_2 & \cdots & \lambda_n\mathbf{v}_n \\ | & | & & | \end{bmatrix}$$

and thus

$$\begin{bmatrix} | & | & & | \\ \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_n \\ | & | & & | \end{bmatrix}^{-1}
A
\begin{bmatrix} | & | & & | \\ \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_n \\ | & | & & | \end{bmatrix}
= \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{bmatrix}$$

is a diagonal matrix; A is diagonalizable.

• Solving an n × n system of 1st order linear ODEs with constant coefficients is thus
a matter of looking for n linearly independent eigenvectors of the coefficient matrix,
and seeing what else can be done in case the coefficient matrix has fewer than n
linearly independent eigenvectors.
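To make the diagonalizable case concrete, here is a small numerical sketch (an addition to the notes, assuming NumPy), where the eigenvector matrix P diagonalizes A and immediately yields e^{tA}:

import numpy as np

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
eigvals, P = np.linalg.eig(A)      # columns of P are eigenvectors

D = np.linalg.inv(P) @ A @ P       # equals diag(eigvals) up to round-off
print(np.round(D, 12))

# Diagonalization gives e^{tA} = P e^{tD} P^{-1}, hence x(t) = e^{tA} x(0)
t = 1.0
expm_tA = P @ np.diag(np.exp(t * eigvals)) @ np.linalg.inv(P)
print(expm_tA)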

9.3 2 × 2 or 3 × 3 Homogeneous Linear Systems with Constant Coefficients

Given a 2 × 2 real matrix A, its characteristic polynomial has either two distinct roots
(both real, or a pair of conjugate complex roots) or one repeated real root.
The general solution of x′ = Ax can then be found as follows:

• Two Distinct Real Roots Case: λ1 ≠ λ2
A pair of linearly independent solutions (fundamental solutions) can be found as
$e^{\lambda_1 t}\mathbf{v}_1, e^{\lambda_2 t}\mathbf{v}_2$, with corresponding eigenvectors v1, v2.
The general solution is then given by $\mathbf{x} = c_1 e^{\lambda_1 t}\mathbf{v}_1 + c_2 e^{\lambda_2 t}\mathbf{v}_2$, where c1, c2 are
arbitrary constants.

• Two Conjugate Complex Roots Case: α ± iβ, where α, β ∈ ℝ
Two linearly independent complex solutions can be found as $e^{(\alpha+i\beta)t}\mathbf{v}_1, e^{(\alpha-i\beta)t}\mathbf{v}_2$,
with corresponding eigenvectors v1, v2.
Since $A\mathbf{v} = \lambda\mathbf{v} \iff A\bar{\mathbf{v}} = \bar{\lambda}\bar{\mathbf{v}}$, the conjugates of eigenvectors corresponding to
α + iβ are eigenvectors corresponding to α − iβ. Thus a pair of linearly independent
real solutions can be found by taking the real part and imaginary part of $e^{(\alpha+i\beta)t}\mathbf{v}_1$.
The general solution can thus be found as the linear combination of the two linearly
independent real solutions thus found.

• One Repeated Real Root Case: λ1
The characteristic polynomial is $(\lambda - \lambda_1)^2$, and hence $(A - \lambda_1 I)^2 = O$ by the
Cayley-Hamilton Theorem.
If λ1 does not have two linearly independent eigenvectors, then in addition to one solution
of the form $e^{\lambda_1 t}\mathbf{v}$, we look for a linearly independent solution of the form $e^{\lambda_1 t}(t\mathbf{v}+\mathbf{u})$
for some vector u ≠ 0.
In fact, putting $e^{\lambda_1 t}(t\mathbf{v} + \mathbf{u})$, with derivative $e^{\lambda_1 t}(\mathbf{v} + \lambda_1\mathbf{u} + \lambda_1 t\mathbf{v})$, into x′ = Ax, we
have

$$e^{\lambda_1 t}(\mathbf{v} + \lambda_1\mathbf{u} + \lambda_1 t\mathbf{v}) = e^{\lambda_1 t}(A\mathbf{u} + tA\mathbf{v}) = e^{\lambda_1 t}(A\mathbf{u} + t\lambda_1\mathbf{v})$$

and comparing the terms without t gives

$$A\mathbf{u} - \lambda_1\mathbf{u} = \mathbf{v} \iff (A - \lambda_1 I)\mathbf{u} = \mathbf{v}.$$

Here are some examples of 3 × 3 1st order linear systems of ODEs, worked out using
eigenvalues and eigenvectors.
Example The following matrix has three distinct eigenvalues λ = −2, 1, 3, with corresponding eigenvectors given as follows:

$$A = \begin{bmatrix} 1 & -1 & 4 \\ 3 & 2 & -1 \\ 2 & 1 & -1 \end{bmatrix}, \qquad
\underbrace{\begin{bmatrix} 1 \\ -1 \\ -1 \end{bmatrix}}_{\lambda = -2}, \quad
\underbrace{\begin{bmatrix} 1 \\ -4 \\ -1 \end{bmatrix}}_{\lambda = 1}, \quad
\underbrace{\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}}_{\lambda = 3}$$

So, the general solution of x′ = Ax is:

$$\mathbf{x}(t) = c_1 e^{-2t}\begin{bmatrix} 1 \\ -1 \\ -1 \end{bmatrix} + c_2 e^{t}\begin{bmatrix} 1 \\ -4 \\ -1 \end{bmatrix} + c_3 e^{3t}\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}$$

Example The following matrix has only two distinct eigenvalues λ = −1, 1, but three
linearly independent eigenvectors can still be found:

$$A = \begin{bmatrix} 1 & -2 & 2 \\ -2 & 1 & -2 \\ -2 & 2 & -3 \end{bmatrix}, \qquad
\underbrace{\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}}_{\lambda = -1}, \quad
\underbrace{\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}}_{\lambda = -1}, \quad
\underbrace{\begin{bmatrix} -1 \\ 1 \\ 1 \end{bmatrix}}_{\lambda = 1}$$

So, the general solution of x′ = Ax is:

$$\mathbf{x}(t) = c_1 e^{-t}\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + c_2 e^{-t}\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} + c_3 e^{t}\begin{bmatrix} -1 \\ 1 \\ 1 \end{bmatrix}$$

Example The following matrix has one real and two complex eigenvalues λ = 1, 2 ± i,
with some corresponding eigenvectors given as follows:

$$A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & -1 & 2 \end{bmatrix}, \qquad
\underbrace{\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}}_{\lambda = 1}, \quad
\underbrace{\begin{bmatrix} 0 \\ -i \\ 1 \end{bmatrix}}_{\lambda = 2+i}, \quad
\underbrace{\begin{bmatrix} 0 \\ i \\ 1 \end{bmatrix}}_{\lambda = 2-i}$$

From the two complex solutions

$$e^{(2+i)t}\begin{bmatrix} 0 \\ -i \\ 1 \end{bmatrix}
= e^{2t}\begin{bmatrix} 0 \\ -i\cos t + \sin t \\ \cos t + i\sin t \end{bmatrix}
= e^{2t}\begin{bmatrix} 0 \\ \sin t \\ \cos t \end{bmatrix} + i\,e^{2t}\begin{bmatrix} 0 \\ -\cos t \\ \sin t \end{bmatrix}$$

$$e^{(2-i)t}\begin{bmatrix} 0 \\ i \\ 1 \end{bmatrix}
= e^{2t}\begin{bmatrix} 0 \\ i\cos t + \sin t \\ \cos t - i\sin t \end{bmatrix}
= e^{2t}\begin{bmatrix} 0 \\ \sin t \\ \cos t \end{bmatrix} + i\,e^{2t}\begin{bmatrix} 0 \\ \cos t \\ -\sin t \end{bmatrix}$$

just take the real part and imaginary part to find two linearly independent real solutions

$$e^{2t}\begin{bmatrix} 0 \\ \sin t \\ \cos t \end{bmatrix} \quad\text{and}\quad e^{2t}\begin{bmatrix} 0 \\ -\cos t \\ \sin t \end{bmatrix}$$

(or, the same, taking the linear combinations $\frac{1}{2}$(1st soln + 2nd soln) and $\frac{1}{2i}$(1st soln − 2nd soln)).
So, the general solution of x′ = Ax is:

$$\mathbf{x}(t) = c_1 e^{t}\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + c_2 e^{2t}\begin{bmatrix} 0 \\ \sin t \\ \cos t \end{bmatrix} + c_3 e^{2t}\begin{bmatrix} 0 \\ -\cos t \\ \sin t \end{bmatrix}$$
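The passage from a complex eigenpair to two real solutions can also be done numerically; a sketch (added here, assuming NumPy) follows. Note that NumPy may scale the eigenvector by an arbitrary complex unit, so the printed solutions agree with those above only up to a real linear combination:

import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, -1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)

# Pick the eigenpair with positive imaginary part, i.e. lambda = 2 + i
k = np.argmax(eigvals.imag)
lam, v = eigvals[k], eigvecs[:, k]

def real_solutions(t):
    z = np.exp(lam * t) * v   # complex solution e^{lambda t} v
    return z.real, z.imag     # two linearly independent real solutions

print(real_solutions(0.5))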

Example (Not enough eigenvectors!)
The matrix $A = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}$ has only one real eigenvalue λ = 1, with eigenvectors $c\begin{bmatrix} 1 \\ 0 \end{bmatrix}$,
where c ≠ 0.
Thus a solution of the form $\mathbf{x}_1(t) = e^t \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ can be found.
The remaining task is to find another linearly independent solution of the form

$$\mathbf{x}_2(t) = e^t\left(\begin{bmatrix} u_1 \\ u_2 \end{bmatrix} + t\begin{bmatrix} 1 \\ 0 \end{bmatrix}\right)$$

where u1, u2 are some coefficients to be determined. Putting that into the system x′ = Ax,
we have

$$e^t\left(\begin{bmatrix} u_1 \\ u_2 \end{bmatrix} + t\begin{bmatrix} 1 \\ 0 \end{bmatrix}\right) + e^t\begin{bmatrix} 1 \\ 0 \end{bmatrix}
= \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}\left(e^t\begin{bmatrix} u_1 \\ u_2 \end{bmatrix} + te^t\begin{bmatrix} 1 \\ 0 \end{bmatrix}\right)$$

which reduces to

$$\begin{bmatrix} 0 & 2 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}
\iff 2u_2 = 1 \iff u_2 = \frac{1}{2}$$

So, just picking u1 = 0, u2 = 1/2, we have another solution

$$\mathbf{x}_2(t) = e^t\begin{bmatrix} t \\ \frac{1}{2} \end{bmatrix}$$

and the general solution of the system is given by

$$\mathbf{x}(t) = c_1 e^t\begin{bmatrix} 1 \\ 0 \end{bmatrix} + c_2 e^t\begin{bmatrix} t \\ \frac{1}{2} \end{bmatrix}$$
Remark Generally speaking, if there are not enough eigenvalue-eigenvector solutions
$e^{\lambda t}\mathbf{v}$ to generate the general solution of the linear system x′ = Ax, consider solutions of
the form $e^{\lambda t}(\mathbf{u} + t\mathbf{v})$, which will result in a linear system for u:

$$(A - \lambda I)\mathbf{u} = \mathbf{v}$$

Non-zero solutions u ≠ 0 of this system, for any eigenvector v of A, are called generalized
eigenvectors of A.
Note that $\mathbf{u} \in \ker(A - \lambda I)^2 \setminus \ker(A - \lambda I)$.
If there are still not enough solutions to generate the general solution, then go for
more generalized eigenvectors by considering solutions of the form $e^{\lambda t}(\mathbf{w} + t\mathbf{u} + \frac{1}{2!}t^2\mathbf{v})$,
which would result in a system for $\mathbf{w} \in \ker(A - \lambda I)^3 \setminus \ker(A - \lambda I)^2$, namely

$$(A - \lambda I)\mathbf{w} = \mathbf{u}$$

Continuing this process if necessary, i.e., picking up solutions of the form

$$e^{\lambda t}\left(\mathbf{u}_1 + t\mathbf{u}_2 + \frac{t^2}{2!}\mathbf{u}_3 + \cdots + \frac{t^{k-1}}{(k-1)!}\mathbf{u}_k + \frac{t^k}{k!}\mathbf{v}\right)$$

where $(A - \lambda I)\mathbf{u}_k = \mathbf{v}$, $(A - \lambda I)\mathbf{u}_{k-1} = \mathbf{u}_k$, $(A - \lambda I)\mathbf{u}_{k-2} = \mathbf{u}_{k-1}$, . . . , $(A - \lambda I)\mathbf{u}_1 = \mathbf{u}_2$,
one can eventually find enough solutions to generate the general solution.
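For the “not enough eigenvectors” example above, the generalized-eigenvector equation (A − λI)u = v can also be solved numerically; a sketch (added here, assuming NumPy). Since A − λI is singular, a least-squares solve is used, and it happens to return the particular solution with u1 = 0:

import numpy as np

A = np.array([[1.0, 2.0], [0.0, 1.0]])
lam = 1.0
v = np.array([1.0, 0.0])    # eigenvector for lambda = 1

# Solve (A - lam*I) u = v; the matrix is singular, so use least squares
u, *_ = np.linalg.lstsq(A - lam * np.eye(2), v, rcond=None)
print(u)  # [0.  0.5], matching u1 = 0, u2 = 1/2 found above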

Remark The discussion above can be rephrased in terms of the so-called “Jordan
Canonical Form” of A.

9.4 Nonhomogeneous Linear Systems of ODEs with Constant Coefficients

As discussed earlier, to find the general solution of a nonhomogeneous system of 1st order
linear ODEs, we need to find

1. the general solution of the corresponding homogeneous system;

2. one particular solution of the nonhomogeneous system, e.g., by educated guessing
of the form of solution (undetermined coefficients), variation of parameters, or even
the Laplace transform, etc.

Variation of Parameters for Nonhomogeneous Systems

Roughly speaking, by taking a ‘linear combination with function coefficients’ of a
set of fundamental solutions of the homogeneous system x′ = A(t)x(t), one may find a
particular solution of the nonhomogeneous system x′ = A(t)x(t) + g(t).
In fact, if we have found a set of fundamental solutions x1(t), . . . , xn(t) of the
homogeneous system x′ = A(t)x(t), with the corresponding fundamental matrix denoted
by Φ(t), so that Φ′(t) = A(t)Φ(t), then

$$\mathbf{x}(t) = u_1(t)\mathbf{x}_1(t) + u_2(t)\mathbf{x}_2(t) + \cdots + u_n(t)\mathbf{x}_n(t) = \Phi(t)\mathbf{u}(t)$$

is a solution of the nonhomogeneous system if and only if

$$\Phi(t)\mathbf{u}'(t) + \Phi'(t)\mathbf{u}(t) = A(t)\Phi(t)\mathbf{u}(t) + \mathbf{g}(t)$$
$$\iff \Phi(t)\mathbf{u}'(t) + A(t)\Phi(t)\mathbf{u}(t) = A(t)\Phi(t)\mathbf{u}(t) + \mathbf{g}(t)$$
$$\iff \Phi(t)\mathbf{u}'(t) = \mathbf{g}(t)$$

from which we can solve for u(t):

$$\mathbf{u}(t) = \int \Phi^{-1}(t)\mathbf{g}(t)\,dt$$

Example Find a particular solution of

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} e^{4t} \\ e^{5t} \end{bmatrix}$$

Recall that a fundamental matrix can be found as $\Phi(t) = \begin{bmatrix} e^t & e^{3t} \\ e^t & -e^{3t} \end{bmatrix}$, where

$$\Phi^{-1}(t) = -\frac{1}{2e^{4t}}\begin{bmatrix} -e^{3t} & -e^{3t} \\ -e^{t} & e^{t} \end{bmatrix}$$

Then

$$\Phi^{-1}(t)\mathbf{g}(t) = -\frac{1}{2}\begin{bmatrix} -e^{3t} - e^{4t} \\ -e^{t} + e^{2t} \end{bmatrix}$$

Take

$$\mathbf{u}(t) = -\frac{1}{2}\int \begin{bmatrix} -e^{3t} - e^{4t} \\ -e^{t} + e^{2t} \end{bmatrix} dt
= \begin{bmatrix} \frac{1}{6}e^{3t} + \frac{1}{8}e^{4t} \\ \frac{1}{2}e^{t} - \frac{1}{4}e^{2t} \end{bmatrix}$$

Hence a particular solution is

$$\mathbf{x}_p(t) = \Phi(t)\mathbf{u}(t)
= \begin{bmatrix} e^{t} & e^{3t} \\ e^{t} & -e^{3t} \end{bmatrix}\begin{bmatrix} \frac{1}{6}e^{3t} + \frac{1}{8}e^{4t} \\ \frac{1}{2}e^{t} - \frac{1}{4}e^{2t} \end{bmatrix}
= \begin{bmatrix} \frac{2}{3}e^{4t} - \frac{1}{8}e^{5t} \\ -\frac{1}{3}e^{4t} + \frac{3}{8}e^{5t} \end{bmatrix}$$

The general solution of the system is:

$$\mathbf{x}(t) = c_1 e^{t}\begin{bmatrix} 1 \\ 1 \end{bmatrix} + c_2 e^{3t}\begin{bmatrix} 1 \\ -1 \end{bmatrix} + e^{4t}\begin{bmatrix} \frac{2}{3} \\ -\frac{1}{3} \end{bmatrix} + e^{5t}\begin{bmatrix} -\frac{1}{8} \\ \frac{3}{8} \end{bmatrix}$$

Remark One could, of course, find a particular solution by the method of undetermined
coefficients: just put

$$\mathbf{x}_p(t) = e^{4t}\begin{bmatrix} a \\ b \end{bmatrix} + e^{5t}\begin{bmatrix} c \\ d \end{bmatrix}$$

into the nonhomogeneous system and determine a, b, c, d.
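Either way, the result can be checked symbolically; a sketch (added here, assuming SymPy) verifying that the particular solution found above satisfies the nonhomogeneous system:

import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, -1], [-1, 2]])
g = sp.Matrix([sp.exp(4*t), sp.exp(5*t)])

# Particular solution found by variation of parameters above
xp = sp.Matrix([sp.Rational(2, 3)*sp.exp(4*t) - sp.Rational(1, 8)*sp.exp(5*t),
                -sp.Rational(1, 3)*sp.exp(4*t) + sp.Rational(3, 8)*sp.exp(5*t)])

residual = sp.simplify(xp.diff(t) - (A*xp + g))
print(residual)  # Matrix([[0], [0]])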

Exercise Show that the exponential matrix for an n × n matrix A defined by

$$\Phi(t) = e^{tA} = I + tA + \frac{t^2A^2}{2!} + \frac{t^3A^3}{3!} + \cdots + \frac{t^nA^n}{n!} + \cdots$$

satisfies

$$\Phi'(t) = A\Phi(t), \qquad \Phi(0) = I.$$

Hence the columns of $e^{tA}$ form a fundamental set of solutions of the linear system x′ = Ax.

Exercise Show that $e^{tA} = \begin{bmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{bmatrix}$, where $A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$.
(Find it algebraically, and also by solving a suitable linear system of ODEs.)
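A quick numerical check of this exercise (an addition to the notes, assuming SciPy):

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0], [1.0, 0.0]])

for t in [0.0, np.pi/6, np.pi/2]:
    rotation = np.array([[np.cos(t), -np.sin(t)],
                         [np.sin(t),  np.cos(t)]])
    print(np.allclose(expm(t * A), rotation))  # True for each t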

Some More Linear Algebra

The Cayley-Hamilton Theorem in Linear Algebra states that p(A) = O is the zero
matrix, where p(λ) = |λI − A| is the characteristic polynomial of A. More precisely, if

$$p(\lambda) = |\lambda I - A| = a_0 + a_1\lambda + \cdots + a_{n-1}\lambda^{n-1} + \lambda^n$$

then

$$a_0 I + a_1 A + \cdots + a_{n-1}A^{n-1} + A^n = O,$$

and hence $A^n$, and more generally $A^{n+k}$, can be expressed as a linear combination of
$I, A, \dots, A^{n-1}$. Consequently, the exponential matrix

$$e^A = I + A + A^2/2! + A^3/3! + \cdots + A^n/n! + \cdots$$

can be written as

$$e^A = c_0 I + c_1 A + c_2 A^2 + \cdots + c_{n-1}A^{n-1},$$

or

$$e^A = c_0 I + c_1 A + c_2 A^2 + \cdots + c_k A^k$$

for some 0 ≤ k ≤ n − 1, if the “minimal polynomial” h(λ) of A is considered, i.e., the
polynomial h(λ) of least degree such that h(A) = O.
Using the exponential matrix, the solution of x′ = Ax can be expressed as $\mathbf{x} = e^{tA}\mathbf{c}$.
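As a numeric illustration of the Cayley-Hamilton Theorem (an added sketch assuming NumPy; np.poly returns the characteristic polynomial coefficients from highest to lowest degree):

import numpy as np

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
coeffs = np.poly(A)   # [1., -4., 3.] for lambda^2 - 4 lambda + 3
n = A.shape[0]

# Evaluate p(A) = A^2 - 4A + 3I; it should be the zero matrix
pA = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
print(np.round(pA, 12))  # [[0. 0.] [0. 0.]]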
