
LINEAR DIFFERENTIAL EQUATIONS WITH VARIABLE COEFFICIENTS AND
SYSTEMS OF FIRST ORDER LINEAR DIFFERENTIAL EQUATIONS

Contents of this Unit


1 Introduction
2 Definition of a Power Series
3 Shifting the Index of a Power Series
4 Derivatives of Power Series
5 Finding Power Series Solutions to ODEs
6 Existence of a Power Series Solution
7 Euler Equations
8 Systems of First Order Linear Differential Equations
9 Conclusion

1 Introduction

Linear differential equations with variable coefficients arise in many areas of mathematics,
physics, and engineering. Unlike constant coefficient equations, these equations require spe-
cialized techniques for their solution. In this chapter, we focus on two important methods:
the Power Series Method and the analysis of Euler (Cauchy-Euler) Equations.


2 Definition of a Power Series

Definition 2.1: Power Series

A power series is an infinite series of the form

    \sum_{n=0}^{\infty} a_n (x - x_0)^n,

where:

• a_n are the coefficients,

• x_0 is the center of the series,

• x is the variable.

When x_0 = 0, the power series becomes

    \sum_{n=0}^{\infty} a_n x^n.

Throughout this topic, we will be working with power series whose center is 0.
Example 0.1: Examples of Power Series

Here are some examples of power series:


1. Geometric Power Series:

    \sum_{n=0}^{\infty} x^n = 1 + x + x^2 + x^3 + x^4 + \cdots = \frac{1}{1-x}, \quad \text{for } |x| < 1

2. Arithmetic Power Series:

    \sum_{n=0}^{\infty} n x^n = 0 + x + 2x^2 + 3x^3 + 4x^4 + \cdots, \quad \text{for } |x| < 1

3. Logarithmic Power Series:

    \sum_{n=1}^{\infty} \frac{x^n}{n} = x + \frac{x^2}{2} + \frac{x^3}{3} + \frac{x^4}{4} + \cdots, \quad \text{for } |x| < 1

4. Alternating Geometric Series:

    \sum_{n=0}^{\infty} (-1)^n x^n = 1 - x + x^2 - x^3 + x^4 - \cdots = \frac{1}{1+x}, \quad \text{for } |x| < 1

Other examples of power series include the Taylor series expansions of various functions.

Example 0.2: More Examples of Power Series

1. Exponential Function:

    e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots, \quad x \in \mathbb{R}.

2. Sine Function:

    \sin x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!} x^{2n+1} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots

3. Cosine Function:

    \cos x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!} x^{2n} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots

4. Hyperbolic Sine Function:

    \sinh x = \sum_{n=0}^{\infty} \frac{x^{2n+1}}{(2n+1)!} = x + \frac{x^3}{3!} + \frac{x^5}{5!} + \frac{x^7}{7!} + \cdots

5. Hyperbolic Cosine Function:

    \cosh x = \sum_{n=0}^{\infty} \frac{x^{2n}}{(2n)!} = 1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \frac{x^6}{6!} + \cdots

3 Shifting the Index of a Power Series


Shifting the index in a power series is useful when simplifying expressions or matching terms.
The transformation n → n + 1 or n → n − 1 adjusts the summation index while keeping the
meaning of the series unchanged.

Example 0.3:

Consider the series:

    \sum_{n=1}^{\infty} n x^{n-1}.

If we let k = n - 1, then n = k + 1, and the series becomes:

    \sum_{k=0}^{\infty} (k + 1) x^k.

Example 0.4:

1. Shift n → n + 1:

    \sum_{n=1}^{\infty} n x^n \quad \Rightarrow \quad \sum_{n=0}^{\infty} (n + 1) x^{n+1}.

2. Shift n → n − 1:

    \sum_{n=0}^{\infty} a_n x^{n+1} \quad \Rightarrow \quad \sum_{n=1}^{\infty} a_{n-1} x^n.

4 Derivatives of Power Series


A power series can be differentiated term-by-term within its radius of convergence:

    \frac{d}{dx} \left( \sum_{n=0}^{\infty} a_n (x - x_0)^n \right) = \sum_{n=1}^{\infty} n a_n (x - x_0)^{n-1}.

Example 0.5:
Given:

    y = \sum_{n=0}^{\infty} \frac{x^n}{n!},

its derivative is:

    y' = \sum_{n=1}^{\infty} \frac{n x^{n-1}}{n!} = \sum_{n=1}^{\infty} \frac{x^{n-1}}{(n-1)!} = \sum_{n=0}^{\infty} \frac{x^n}{n!} = e^x.

Combining differentiation with shifting the index allows for further simplifications.
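As a quick sanity check on term-by-term differentiation, one can differentiate a truncated partial sum and compare it with the series itself. The short Python sketch below assumes SymPy is available; the truncation order N is an arbitrary illustrative choice.

    # Sketch: term-by-term differentiation of a truncated exponential series.
    import sympy as sp

    x = sp.symbols('x')
    N = 8  # truncation order (illustrative)

    # Partial sum of sum_{n=0}^{N} x^n / n!
    partial = sum(x**n / sp.factorial(n) for n in range(N + 1))

    # Differentiating term by term shifts the index down by one
    derivative = sp.diff(partial, x)

    # The difference is only the highest-order term -x^N / N!,
    # so the derivative reproduces the series up to order N - 1.
    print(sp.expand(derivative - partial))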

5 Finding Power Series Solutions to ODEs


Power series can be used to solve ordinary differential equations (ODEs) by assuming a
solution of the form:
    y(x) = \sum_{n=0}^{\infty} a_n x^n.

Example 0.6:

Use the power series method to find the general solution of the following Ordinary
Differential Equation.
    y'' - y = 0

Step 1: Assume a Power Series Solution
We assume that y(x) can be expressed as a power series:

    y(x) = \sum_{n=0}^{\infty} a_n x^n.

Differentiating term by term,

    y'(x) = \sum_{n=1}^{\infty} n a_n x^{n-1},

    y''(x) = \sum_{n=2}^{\infty} n(n-1) a_n x^{n-2}.

Step 2: Substitute into the Differential Equation

Substituting into the given equation,

    \sum_{n=2}^{\infty} n(n-1) a_n x^{n-2} - \sum_{n=0}^{\infty} a_n x^n = 0.

Shifting the index in the first sum by letting m = n - 2, we rewrite it as:

    \sum_{m=0}^{\infty} (m+2)(m+1) a_{m+2} x^m - \sum_{n=0}^{\infty} a_n x^n = 0.

For the series to be zero for all x, the coefficients must satisfy:

    (m+2)(m+1) a_{m+2} - a_m = 0.

Rewriting,

    a_{m+2} = \frac{a_m}{(m+2)(m+1)}.

Step 3: Compute the First Few Terms
Let a_0 and a_1 be arbitrary constants:

- m = 0: a_2 = a_0/2! = a_0/2.
- m = 1: a_3 = a_1/3! = a_1/6.
- m = 2: a_4 = a_2/(4 \cdot 3) = a_0/4! = a_0/24.
- m = 3: a_5 = a_3/(5 \cdot 4) = a_1/5! = a_1/120.

The general pattern is:

    a_{2n} = \frac{a_0}{(2n)!}, \qquad a_{2n+1} = \frac{a_1}{(2n+1)!}.

Step 4: Express the General Solution
Thus, the power series solution is:

    y(x) = a_0 \sum_{n=0}^{\infty} \frac{x^{2n}}{(2n)!} + a_1 \sum_{n=0}^{\infty} \frac{x^{2n+1}}{(2n+1)!}.

Recognizing these as the Maclaurin series for cosh x and sinh x, we conclude:

    y(x) = a_0 \cosh x + a_1 \sinh x.

Thus, the general solution to y'' - y = 0 is:

    y(x) = C_1 e^x + C_2 e^{-x},

where C_1 = \frac{a_0 + a_1}{2} and C_2 = \frac{a_0 - a_1}{2}.
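The recurrence a_{m+2} = a_m / ((m+2)(m+1)) lends itself to direct computation. The sketch below, in plain Python, generates the first coefficients from arbitrary a_0 and a_1 and compares the truncated series with a_0 cosh x + a_1 sinh x at a sample point; the values of a_0, a_1, N and x are illustrative choices.

    # Sketch: iterate the recurrence for y'' - y = 0 and compare with a0*cosh(x) + a1*sinh(x).
    import math

    a0, a1 = 1.0, 2.0      # arbitrary constants (the two-parameter family of solutions)
    N = 12                 # number of coefficients to generate

    a = [0.0] * N
    a[0], a[1] = a0, a1
    for m in range(N - 2):
        a[m + 2] = a[m] / ((m + 2) * (m + 1))

    x = 0.5
    series_value = sum(a[n] * x**n for n in range(N))
    closed_form = a0 * math.cosh(x) + a1 * math.sinh(x)
    print(series_value, closed_form)   # the two values agree to many decimal places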

Example 0.7:
Find the first four terms in each portion of the series solution around x_0 = 0 for the
following differential equation:

    (x^2 + 1) y'' - 4x y' + 6y = 0

This time the differential equation does not have constant coefficients.
Here,

    p(x) = x^2 + 1, \qquad p(0) = 1 \neq 0,

so x_0 = 0 is an ordinary point.
Assume a power series solution:

    y(x) = \sum_{n=0}^{\infty} a_n x^n

Differentiate:

    y'(x) = \sum_{n=1}^{\infty} n a_n x^{n-1}

    y''(x) = \sum_{n=2}^{\infty} n(n-1) a_n x^{n-2}

Substitute into the differential equation:

    (x^2 + 1) \sum_{n=2}^{\infty} n(n-1) a_n x^{n-2} - 4x \sum_{n=1}^{\infty} n a_n x^{n-1} + 6 \sum_{n=0}^{\infty} a_n x^n = 0

Simplify term by term.

First term:

    x^2 \sum_{n=2}^{\infty} n(n-1) a_n x^{n-2} = \sum_{n=2}^{\infty} n(n-1) a_n x^n

    1 \cdot \sum_{n=2}^{\infty} n(n-1) a_n x^{n-2} = \sum_{n=0}^{\infty} (n+2)(n+1) a_{n+2} x^n

Second term:

    -4x \sum_{n=1}^{\infty} n a_n x^{n-1} = -4 \sum_{n=1}^{\infty} n a_n x^n

Third term remains:

    6 \sum_{n=0}^{\infty} a_n x^n

Combine:

    \sum_{n=0}^{\infty} \left[ n(n-1) a_n + (n+2)(n+1) a_{n+2} - 4n a_n + 6 a_n \right] x^n = 0

Simplify:

    \sum_{n=0}^{\infty} \left[ (n^2 - 5n + 6) a_n + (n+2)(n+1) a_{n+2} \right] x^n = 0

Factor:

    \sum_{n=0}^{\infty} \left[ (n-2)(n-3) a_n + (n+2)(n+1) a_{n+2} \right] x^n = 0

Since the series equals zero, the coefficient of each power of x must vanish:

    (n-2)(n-3) a_n + (n+2)(n+1) a_{n+2} = 0, \qquad n = 0, 1, 2, \ldots

Solve the recurrence:

    a_{n+2} = -\frac{(n-2)(n-3)}{(n+2)(n+1)} a_n

Compute terms:
For n = 0:

    a_2 = -\frac{(0-2)(0-3)}{(2)(1)} a_0 = -\frac{(-2)(-3)}{2} a_0 = -\frac{6}{2} a_0 = -3 a_0

For n = 1:

    a_3 = -\frac{(1-2)(1-3)}{(3)(2)} a_1 = -\frac{(-1)(-2)}{6} a_1 = -\frac{2}{6} a_1 = -\frac{1}{3} a_1

For n = 2:

    a_4 = -\frac{(2-2)(2-3)}{(4)(3)} a_2 = -\frac{(0)(-1)}{12} a_2 = 0

For n = 3:

    a_5 = -\frac{(3-2)(3-3)}{(5)(4)} a_3 = -\frac{(1)(0)}{20} a_3 = 0

Both sequences terminate! The solution is:

    y(x) = a_0 \left( 1 - 3x^2 \right) + a_1 \left( x - \frac{1}{3} x^3 \right)
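Because both coefficient sequences terminate, the answer is a pair of polynomials and can be checked by direct substitution. A minimal SymPy sketch (assuming SymPy is available):

    # Sketch: verify y = a0*(1 - 3x^2) + a1*(x - x^3/3) solves (x^2 + 1) y'' - 4x y' + 6y = 0.
    import sympy as sp

    x, a0, a1 = sp.symbols('x a0 a1')
    y = a0 * (1 - 3 * x**2) + a1 * (x - x**3 / 3)

    residual = (x**2 + 1) * sp.diff(y, x, 2) - 4 * x * sp.diff(y, x) + 6 * y
    print(sp.simplify(residual))   # prints 0, confirming both polynomial solutions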


6 Existence of a Power Series Solution

Theorem 6.1: Existence of a Power Series Solution


Consider a second-order linear ordinary differential equation of the form:

y ′′ + P (x)y ′ + Q(x)y = 0

where P (x) and Q(x) are functions of x.


A power series solution about a point x = x0 exists if the coefficients P (x) and Q(x)
are analytic at x = x0 . A function is said to be analytic at a point if it can be expressed
as a convergent power series in a neighborhood of that point.

6.1 Ordinary and Singular Points


• The point x = x0 is called an ordinary point of the differential equation if both P (x)
and Q(x) are analytic at x = x0 .

• If either P (x) or Q(x) is not analytic at x = x0 , then x = x0 is called a singular point.

6.2 Singular Points


At a singular point, a power series solution may still exist, but additional methods such
as the Frobenius method are typically used to find solutions. The classification of singular
points into regular and irregular singular points further determines the approach for finding
a solution.

7 Euler Equations
Euler equations, also known as Cauchy-Euler equations, represent a special class of linear
differential equations with variable coefficients that can be solved by exploiting their homo-
geneous structure.
The general form of an Euler equation is:

    x^2 y'' + \alpha x y' + \beta y = 0,

where α and β are constants.

Definition 7.1: Euler Equation

An Euler (or Cauchy-Euler) equation is a differential equation of the form:

    x^2 y'' + \alpha x y' + \beta y = 0.

To solve an Euler equation, we assume a solution of the form:

    y(x) = x^r.

Then, the derivatives are:

    y'(x) = r x^{r-1}, \qquad y''(x) = r(r-1) x^{r-2}.

Substituting these into the Euler equation yields:

    x^2 \cdot r(r-1) x^{r-2} + \alpha x \cdot r x^{r-1} + \beta x^r = x^r \left[ r(r-1) + \alpha r + \beta \right] = 0.

Thus, the indicial (or characteristic) equation is:

    r(r-1) + \alpha r + \beta = 0.

Theorem 7.2: Solution of Euler Equations


The general solution of the Euler equation depends on the nature of the roots r1 and
r2 of the indicial equation:

• If r_1 and r_2 are real and distinct, then:

    y(x) = C_1 x^{r_1} + C_2 x^{r_2}.

• If r_1 = r_2 = r (a repeated root), then:

    y(x) = C_1 x^r + C_2 x^r \ln(x).

• If r_1 and r_2 are complex conjugates, with r = \mu \pm i\nu, then:

    y(x) = x^{\mu} \left[ C_1 \cos(\nu \ln x) + C_2 \sin(\nu \ln x) \right].
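The three cases can be reproduced mechanically: solve the indicial equation r(r - 1) + \alpha r + \beta = 0 and branch on its roots. The sketch below assumes SymPy is available and is meant as an illustration of the case analysis, not a general-purpose solver; the helper name euler_general_solution is an arbitrary choice.

    # Sketch: general solution of x^2 y'' + alpha*x*y' + beta*y = 0 from the indicial roots.
    import sympy as sp

    x = sp.symbols('x', positive=True)
    C1, C2 = sp.symbols('C1 C2')

    def euler_general_solution(alpha, beta):
        r = sp.symbols('r')
        roots = sp.roots(r * (r - 1) + alpha * r + beta, r)   # indicial equation
        (r1, mult), *rest = roots.items()
        if mult == 2:                                          # repeated real root
            return C1 * x**r1 + C2 * x**r1 * sp.log(x)
        r2 = rest[0][0]
        if sp.im(r1) == 0:                                     # real and distinct roots
            return C1 * x**r1 + C2 * x**r2
        mu, nu = sp.re(r1), sp.Abs(sp.im(r1))                  # complex conjugate pair
        return x**mu * (C1 * sp.cos(nu * sp.log(x)) + C2 * sp.sin(nu * sp.log(x)))

    # Example 0.10 below has alpha = 3, beta = 4 and roots -1 ± i*sqrt(3):
    print(euler_general_solution(3, 4))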

Example 0.8:
Solve the following IVP:

    2x^2 y'' + 3x y' - 15 y = 0, \qquad y(1) = 0, \quad y'(1) = 1

We first need to find the roots of the characteristic equation.

Assume y = x^r, then:

    y' = r x^{r-1}, \qquad y'' = r(r-1) x^{r-2}

Substitute into the differential equation:

    2x^2 \cdot r(r-1) x^{r-2} + 3x \cdot r x^{r-1} - 15 x^r = 0

Simplify:

    2r(r-1) x^r + 3r x^r - 15 x^r = 0

Factor out x^r:

    x^r \left( 2r(r-1) + 3r - 15 \right) = 0

Since x^r \neq 0, we solve:

    2r^2 - 2r + 3r - 15 = 0

    2r^2 + r - 15 = 0

Factor:

    (2r - 5)(r + 3) = 0

Thus:

    r_1 = \frac{5}{2}, \qquad r_2 = -3

The general solution is:

    y(x) = c_1 x^{5/2} + c_2 x^{-3}

Differentiate to prepare for the initial conditions:

    y'(x) = \frac{5}{2} c_1 x^{3/2} - 3 c_2 x^{-4}

Apply the initial conditions:

    y(1) = 0 \;\Rightarrow\; c_1 \cdot 1^{5/2} + c_2 \cdot 1^{-3} = 0 \;\Rightarrow\; c_1 + c_2 = 0

    y'(1) = 1 \;\Rightarrow\; \frac{5}{2} c_1 \cdot 1^{3/2} - 3 c_2 \cdot 1^{-4} = 1 \;\Rightarrow\; \frac{5}{2} c_1 - 3 c_2 = 1

Substitute c_2 = -c_1 into the second equation:

    \frac{5}{2} c_1 - 3(-c_1) = 1

    \frac{5}{2} c_1 + 3 c_1 = 1

    \frac{11}{2} c_1 = 1

    c_1 = \frac{2}{11}

Thus:

    c_2 = -c_1 = -\frac{2}{11}

The particular solution is:

    y(x) = \frac{2}{11} x^{5/2} - \frac{2}{11} x^{-3}
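As a check, the candidate solution can be substituted back into the equation and the initial conditions evaluated at x = 1. A short SymPy sketch (SymPy assumed available):

    # Sketch: verify y = (2/11) x^(5/2) - (2/11) x^(-3) solves the IVP above.
    import sympy as sp

    x = sp.symbols('x', positive=True)
    y = sp.Rational(2, 11) * x**sp.Rational(5, 2) - sp.Rational(2, 11) * x**(-3)

    residual = 2 * x**2 * sp.diff(y, x, 2) + 3 * x * sp.diff(y, x) - 15 * y
    print(sp.simplify(residual))                      # 0
    print(y.subs(x, 1), sp.diff(y, x).subs(x, 1))     # 0 and 1, matching y(1) = 0, y'(1) = 1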


Example 0.9:
Find the general solution to the following differential equation:

    x^2 y'' - 7x y' + 16 y = 0

First, we find the roots of the characteristic equation.

Assume y = x^r, then:

    y' = r x^{r-1}, \qquad y'' = r(r-1) x^{r-2}

Substitute into the differential equation:

    x^2 \cdot r(r-1) x^{r-2} - 7x \cdot r x^{r-1} + 16 x^r = 0

Simplify:

    r(r-1) x^r - 7r x^r + 16 x^r = 0

Factor out x^r:

    x^r \left( r(r-1) - 7r + 16 \right) = 0

Since x^r \neq 0, solve:

    r^2 - r - 7r + 16 = 0

    r^2 - 8r + 16 = 0

Factor:

    (r - 4)^2 = 0

Thus, the repeated root is:

    r = 4

Since we have a repeated root, the general solution is:

    y(x) = c_1 x^4 + c_2 x^4 \ln x


Example 0.10:
Find the solution to the following differential equation:

    x^2 y'' + 3x y' + 4y = 0

First, we find the roots of the characteristic equation.
Assume y = x^r, then:

    y' = r x^{r-1}, \qquad y'' = r(r-1) x^{r-2}

Substitute into the differential equation:

    x^2 \cdot r(r-1) x^{r-2} + 3x \cdot r x^{r-1} + 4 x^r = 0

Simplify:

    r(r-1) x^r + 3r x^r + 4 x^r = 0

Factor out x^r:

    x^r \left( r(r-1) + 3r + 4 \right) = 0

Since x^r \neq 0, solve:

    r^2 - r + 3r + 4 = 0

    r^2 + 2r + 4 = 0

Use the quadratic formula:

    r = \frac{-2 \pm \sqrt{2^2 - 4 \cdot 1 \cdot 4}}{2} = \frac{-2 \pm \sqrt{4 - 16}}{2} = \frac{-2 \pm \sqrt{-12}}{2}
      = \frac{-2 \pm i\sqrt{12}}{2} = \frac{-2 \pm 2i\sqrt{3}}{2} = -1 \pm i\sqrt{3}

Thus, the roots are:

    r_{1,2} = -1 \pm i\sqrt{3}

Since the roots are complex, the general solution is:

    y(x) = x^{-1} \left[ c_1 \cos\!\left( \sqrt{3} \ln x \right) + c_2 \sin\!\left( \sqrt{3} \ln x \right) \right]

Or equivalently:

    y(x) = c_1 x^{-1} \cos\!\left( \sqrt{3} \ln x \right) + c_2 x^{-1} \sin\!\left( \sqrt{3} \ln x \right)

8 Systems of First Order Linear Differential Equations


8.1 Eigenvalues and Eigenvectors
When an n × n matrix multiplies an n × 1 vector, the result is another n × 1 vector:

A⃗x = ⃗y

We are interested in situations where:

A⃗x = r⃗x

Here, r is a scalar and ⃗x is a nonzero vector. If such a pair exists, r is called an eigenvalue
of A, and ⃗x is its corresponding eigenvector.
To find eigenvalues and eigenvectors, we rearrange:

A⃗x − r⃗x = ⃗0

(A − rI)⃗x = ⃗0

For non-trivial solutions (⃗x ̸= ⃗0), the matrix must be singular:


det(A − rI) = 0
This is called the characteristic equation. Solving it gives the eigenvalues. For each
eigenvalue, we solve (A − rI)⃗x = ⃗0 to find eigenvectors.

Important Facts

• The characteristic equation is an nth-degree polynomial.

• An n × n matrix has n eigenvalues (counting multiplicities).

• If an eigenvalue appears once, it is simple.

• If it appears k times, it has multiplicity k and between 1 and k linearly independent eigenvectors.
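In practice the characteristic polynomial is rarely expanded by hand; a routine such as numpy.linalg.eig returns eigenvalues and eigenvectors numerically. A minimal sketch (NumPy assumed available), using the matrix of Example 0.11 below:

    # Sketch: numerical eigenvalues and eigenvectors with NumPy.
    import numpy as np

    A = np.array([[2.0, 7.0],
                  [-1.0, -6.0]])

    eigenvalues, eigenvectors = np.linalg.eig(A)
    print(eigenvalues)     # 1 and -5 appear (in some order)
    print(eigenvectors)    # each COLUMN is an eigenvector, normalized to unit length,
                           # hence a scalar multiple of the hand-computed vectors below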

Examples
Example 0.11:

Find the eigenvalues and eigenvectors of:

    A = \begin{pmatrix} 2 & 7 \\ -1 & -6 \end{pmatrix}

Solution:
Find det(A - rI):

    \det \begin{pmatrix} 2 - r & 7 \\ -1 & -6 - r \end{pmatrix} = (2 - r)(-6 - r) - (-1)(7)

    = (2 - r)(-6 - r) + 7 = (-12 - 2r + 6r + r^2) + 7 = r^2 + 4r - 5

Set to zero:

    r^2 + 4r - 5 = 0

    (r + 5)(r - 1) = 0

Eigenvalues: r = -5, r = 1.
For r = -5:

    (A + 5I)\vec{x} = \vec{0}

    \begin{pmatrix} 7 & 7 \\ -1 & -1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \vec{0}

Equation: 7x_1 + 7x_2 = 0 \Rightarrow x_1 = -x_2. Eigenvector: \vec{x} = \begin{pmatrix} -1 \\ 1 \end{pmatrix}.
For r = 1:

    (A - I)\vec{x} = \vec{0}

    \begin{pmatrix} 1 & 7 \\ -1 & -7 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \vec{0}

Equation: x_1 + 7x_2 = 0 \Rightarrow x_1 = -7x_2. Eigenvector: \vec{x} = \begin{pmatrix} -7 \\ 1 \end{pmatrix}.

Example 2. Find the eigenvalues and eigenvectors of:

    A = \begin{pmatrix} 1 & -1 \\ \frac{4}{9} & -\frac{1}{3} \end{pmatrix}

Solution:
Compute det(A - rI):

    \det \begin{pmatrix} 1 - r & -1 \\ \frac{4}{9} & -\frac{1}{3} - r \end{pmatrix} = (1 - r)\left( -\frac{1}{3} - r \right) + \frac{4}{9}

    = -\frac{1}{3} + \frac{r}{3} - r + r^2 + \frac{4}{9}

Simplify:

    = r^2 - \frac{2}{3} r + \frac{1}{9}

Set to zero:

    r^2 - \frac{2}{3} r + \frac{1}{9} = 0

    \left( r - \frac{1}{3} \right)^2 = 0

Eigenvalue: r = \frac{1}{3} (multiplicity 2).
Find the eigenvector:

    \left( A - \frac{1}{3} I \right) \vec{x} = \vec{0}

    \begin{pmatrix} \frac{2}{3} & -1 \\ \frac{4}{9} & -\frac{2}{3} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \vec{0}

First equation: \frac{2}{3} x_1 - x_2 = 0 \Rightarrow x_2 = \frac{2}{3} x_1. Eigenvector (scaled for clarity): \vec{x} = \begin{pmatrix} 3 \\ 2 \end{pmatrix}. ■

Example 0.12:

Find the eigenvalues and eigenvectors of:

    A = \begin{pmatrix} -4 & -17 \\ 2 & 2 \end{pmatrix}

Solution:
Compute det(A - rI):

    \det \begin{pmatrix} -4 - r & -17 \\ 2 & 2 - r \end{pmatrix} = (-4 - r)(2 - r) + 34

    = (r^2 + 2r - 8) + 34 = r^2 + 2r + 26

Set to zero:

    r^2 + 2r + 26 = 0 \;\Rightarrow\; r = \frac{-2 \pm \sqrt{4 - 104}}{2} = \frac{-2 \pm 10i}{2} = -1 \pm 5i

Eigenvalues: r = -1 + 5i, r = -1 - 5i.
For r = -1 + 5i:

    (A - rI)\vec{x} = \vec{0}

    \begin{pmatrix} -3 - 5i & -17 \\ 2 & 3 - 5i \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \vec{0}

Second equation: 2x_1 + (3 - 5i)x_2 = 0 \Rightarrow x_1 = -\frac{3 - 5i}{2} x_2. Choosing x_2 = 2 gives the eigenvector: \vec{x} = \begin{pmatrix} -3 + 5i \\ 2 \end{pmatrix}.

For r = -1 - 5i: the eigenvector is the complex conjugate of the previous one: \vec{x} = \begin{pmatrix} -3 - 5i \\ 2 \end{pmatrix}. ■

Example 0.13:

Find the eigenvalues and eigenvectors of:

    A = \begin{pmatrix} 3 & 9 \\ -4 & -3 \end{pmatrix}

Solution:
First, compute the characteristic polynomial:

    \det(A - rI) = \det \begin{pmatrix} 3 - r & 9 \\ -4 & -3 - r \end{pmatrix}

    = (3 - r)(-3 - r) - (-4)(9)

    = (-9 - 3r + 3r + r^2) + 36

    = r^2 + 27

Set to zero:

    r^2 + 27 = 0

    r^2 = -27

    r = \pm\sqrt{-27} = \pm 3i\sqrt{3}

Eigenvalues: r = 3i\sqrt{3}, r = -3i\sqrt{3}.
Now, find the eigenvectors.
For r = 3i\sqrt{3}:

    (A - rI)\vec{x} = \vec{0}

    \begin{pmatrix} 3 - 3i\sqrt{3} & 9 \\ -4 & -3 - 3i\sqrt{3} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \vec{0}

From the first row:

    (3 - 3i\sqrt{3}) x_1 + 9 x_2 = 0

    x_1 = -\frac{9}{3 - 3i\sqrt{3}} x_2

Simplify:

    x_1 = -\frac{9}{3(1 - i\sqrt{3})} x_2 = -\frac{3}{1 - i\sqrt{3}} x_2

Multiply numerator and denominator by the conjugate:

    x_1 = -3 \cdot \frac{1 + i\sqrt{3}}{(1)^2 - (i\sqrt{3})^2} x_2 = -3 \cdot \frac{1 + i\sqrt{3}}{1 + 3} x_2 = -\frac{3}{4} (1 + i\sqrt{3}) x_2

Choose x_2 = 4 for simplicity:

    x_1 = -3(1 + i\sqrt{3})

Eigenvector:

    \vec{x} = \begin{pmatrix} -3(1 + i\sqrt{3}) \\ 4 \end{pmatrix}

For r = -3i\sqrt{3}: by the complex conjugate fact, the eigenvector is the conjugate of the previous:

    \vec{x} = \begin{pmatrix} -3(1 - i\sqrt{3}) \\ 4 \end{pmatrix}

Example 0.14:

Find the eigenvalues and eigenvectors of:

    A = \begin{pmatrix} 7 & 1 \\ -4 & 3 \end{pmatrix}

Solution:
First, compute the characteristic polynomial:

    \det(A - rI) = \det \begin{pmatrix} 7 - r & 1 \\ -4 & 3 - r \end{pmatrix}

    = (7 - r)(3 - r) - (-4)(1)

    = (21 - 7r - 3r + r^2) + 4

    = r^2 - 10r + 25

Set to zero:

    r^2 - 10r + 25 = 0

    (r - 5)^2 = 0

Eigenvalue: r = 5 (multiplicity 2).
Now, find the eigenvectors.
Solve (A - 5I)\vec{x} = \vec{0}:

    \begin{pmatrix} 2 & 1 \\ -4 & -2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \vec{0}

From the first row:

    2x_1 + x_2 = 0 \;\Rightarrow\; x_2 = -2x_1

General solution:

    \vec{x} = \begin{pmatrix} x_1 \\ -2x_1 \end{pmatrix}

Choose x_1 = 1:

    \vec{x} = \begin{pmatrix} 1 \\ -2 \end{pmatrix}

So, the eigenvector corresponding to r = 5 is:

    \vec{x} = \begin{pmatrix} 1 \\ -2 \end{pmatrix}

Since the eigenvalue has multiplicity 2 but we only get one linearly independent eigenvector, the matrix is defective. ■

Example 0.15:

Find the eigenvalues and eigenvectors of:

    A = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}

Solution:
First, compute the characteristic polynomial:

    \det(A - rI) = \det \begin{pmatrix} 2 - r & 0 \\ 0 & 2 - r \end{pmatrix} = (2 - r)(2 - r) = (2 - r)^2

Set to zero:

    (2 - r)^2 = 0

Eigenvalue: r = 2 (multiplicity 2).
Now, find the eigenvectors.
Solve (A - 2I)\vec{x} = \vec{0}:

    \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \vec{0}

Since the matrix is zero, every vector is a solution. That means:

    \vec{x} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \quad \text{where } x_1, x_2 \text{ are arbitrary.}

So, the eigenspace is:

    \operatorname{Span}\left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\}

Thus, the matrix has two linearly independent eigenvectors:

    \vec{x}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \vec{x}_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}

8.2 Solutions to Systems of First Order Linear Differential Equations
Now that we have covered the foundational concepts of systems of differential equations, we
are ready to explore methods for solving them. We will begin by examining the homogeneous
system expressed in matrix notation:
⃗x′ = A⃗x (1)
Here, A is a 2 × 2 matrix, and ⃗x is a vector composed of the unknown functions in the
system.
To build our intuition, let us first consider a familiar first-order linear differential equation:

x′ = ax

which has the well-known solution:


x(t) = ceat
Inspired by this, let us investigate whether a similar approach works for the 2 × 2 system. Specifically,
we propose a solution of the form:
⃗x(t) = ⃗v ert (2)
The key difference here is that our constant is now a vector ⃗v . To test this form, we substitute
it into the differential equation. Differentiating gives:

⃗x′ (t) = r⃗v ert

Substituting into the system yields:

r⃗v ert = A⃗v ert

This simplifies to:


(A⃗v − r⃗v )ert = ⃗0
or, by factoring:
(A − rI)⃗v ert = ⃗0

Since the exponential term is never zero, we can discard it, leading to the condition:

(A − rI)⃗v = ⃗0

Thus, for our proposed solution to satisfy the given system, the scalar r and vector ⃗v must
correspond to an eigenvalue and eigenvector of the matrix A.
Consequently, solving a system of first order linear differential equations reduces to
finding the eigenvalues and eigenvectors of A. Once these are known, solutions can be built
as seen above. We will encounter three main scenarios depending on the nature of the
eigenvalues:
• Two distinct real eigenvalues,

• Complex eigenvalues,

• Unique real eigenvalue.


It is important to note that, while this gives us a pathway to constructing solutions, it
does not yet provide a complete method for solving the entire system. We will need a few
more tools to fully resolve the system.

Two distinct real eigenvalues


We begin solving systems of differential equations. We know that solutions to

⃗x′ = A⃗x

are of the form


⃗x(t) = ⃗v ert
where r and ⃗v are eigenvalues and eigenvectors of the matrix A. We will work with 2 × 2
systems, seeking two solutions ⃗x1 (t) and ⃗x2 (t).
For distinct real eigenvalues r1 and r2 , the corresponding eigenvectors are linearly inde-
pendent, and so are the solutions. The general solution is:

    \vec{x}(t) = c_1 e^{r_1 t} \vec{v}_1 + c_2 e^{r_2 t} \vec{v}_2


Example 0.16:
Find the solution to the following initial value problem.

    \vec{x}\,' = \begin{pmatrix} 1 & 2 \\ 3 & 2 \end{pmatrix} \vec{x}, \qquad \vec{x}(0) = \begin{pmatrix} 0 \\ -4 \end{pmatrix}

Solution:
First, find the eigenvalues:

    \det(A - rI) = \begin{vmatrix} 1 - r & 2 \\ 3 & 2 - r \end{vmatrix} = (1 - r)(2 - r) - 6 = r^2 - 3r - 4 = (r - 4)(r + 1)

Thus, r_1 = 4, r_2 = -1.
Find eigenvectors: For r = 4,

    (A - 4I)\vec{v} = \vec{0} \;\Rightarrow\; \begin{pmatrix} -3 & 2 \\ 3 & -2 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \vec{0}

From the first row: -3x + 2y = 0 \Rightarrow y = \frac{3}{2} x. Let x = 2, then y = 3, so \vec{v}_1 = \begin{pmatrix} 2 \\ 3 \end{pmatrix}.
For r = -1,

    (A + I)\vec{v} = \vec{0} \;\Rightarrow\; \begin{pmatrix} 2 & 2 \\ 3 & 3 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \vec{0}

From the first row: 2x + 2y = 0 \Rightarrow y = -x. Let x = 1, then y = -1, so \vec{v}_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.
General solution:

    \vec{x}(t) = c_1 e^{4t} \begin{pmatrix} 2 \\ 3 \end{pmatrix} + c_2 e^{-t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}

Apply the initial condition:

    \vec{x}(0) = c_1 \begin{pmatrix} 2 \\ 3 \end{pmatrix} + c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 0 \\ -4 \end{pmatrix}

This gives:

    2c_1 + c_2 = 0 \quad (1), \qquad 3c_1 - c_2 = -4 \quad (2)

Solving: from (1), c_2 = -2c_1. Substitute into (2):

    3c_1 - (-2c_1) = -4 \;\Rightarrow\; 5c_1 = -4 \;\Rightarrow\; c_1 = -\frac{4}{5}, \qquad c_2 = -2\left( -\frac{4}{5} \right) = \frac{8}{5}

Final solution:

    \vec{x}(t) = -\frac{4}{5} e^{4t} \begin{pmatrix} 2 \\ 3 \end{pmatrix} + \frac{8}{5} e^{-t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}
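The closed-form answer can be cross-checked by numerical integration. The sketch below assumes SciPy and NumPy are available; solve_ivp integrates the system from the same initial condition, and the result is compared with the eigenvalue/eigenvector formula at a sample time.

    # Sketch: check x(t) = -(4/5) e^{4t} (2,3)^T + (8/5) e^{-t} (1,-1)^T against a numerical solution.
    import numpy as np
    from scipy.integrate import solve_ivp

    A = np.array([[1.0, 2.0],
                  [3.0, 2.0]])

    def closed_form(t):
        return -4/5 * np.exp(4*t) * np.array([2.0, 3.0]) + 8/5 * np.exp(-t) * np.array([1.0, -1.0])

    sol = solve_ivp(lambda t, x: A @ x, (0.0, 1.0), [0.0, -4.0], rtol=1e-10, atol=1e-12)
    print(sol.y[:, -1])        # numerical solution at t = 1
    print(closed_form(1.0))    # formula at t = 1; the two agree closely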

Example 0.17:
Solve the following initial value problem.

    \vec{x}\,' = \begin{pmatrix} -5 & 1 \\ 4 & -2 \end{pmatrix} \vec{x}, \qquad \vec{x}(0) = \begin{pmatrix} 1 \\ 2 \end{pmatrix}

Solution:
Find the eigenvalues:

    \det \begin{pmatrix} -5 - r & 1 \\ 4 & -2 - r \end{pmatrix} = (r + 5)(r + 2) - 4 = r^2 + 7r + 6 = (r + 6)(r + 1)

Eigenvalues: r_1 = -1, r_2 = -6.
Find eigenvectors: For r = -1,

    (A + I)\vec{v} = \vec{0} \;\Rightarrow\; \begin{pmatrix} -4 & 1 \\ 4 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \vec{0}

From the first row: -4x + y = 0 \Rightarrow y = 4x. Choose x = 1, y = 4, so \vec{v}_1 = \begin{pmatrix} 1 \\ 4 \end{pmatrix}.
For r = -6,

    (A + 6I)\vec{v} = \vec{0} \;\Rightarrow\; \begin{pmatrix} 1 & 1 \\ 4 & 4 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \vec{0}

From the first row: x + y = 0 \Rightarrow y = -x. Choose x = 1, y = -1, so \vec{v}_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.
General solution:

    \vec{x}(t) = c_1 e^{-t} \begin{pmatrix} 1 \\ 4 \end{pmatrix} + c_2 e^{-6t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}

Apply the initial condition:

    \vec{x}(0) = c_1 \begin{pmatrix} 1 \\ 4 \end{pmatrix} + c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}

First row: c_1 + c_2 = 1. Second row: 4c_1 - c_2 = 2.

Solving: from the first, c_2 = 1 - c_1. Substitute:

    4c_1 - (1 - c_1) = 2 \;\Rightarrow\; 5c_1 = 3 \;\Rightarrow\; c_1 = \frac{3}{5}

Thus, c_2 = 1 - \frac{3}{5} = \frac{2}{5}.
Final solution:

    \vec{x}(t) = \frac{3}{5} e^{-t} \begin{pmatrix} 1 \\ 4 \end{pmatrix} + \frac{2}{5} e^{-6t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}

Complex Eigenvalues
In this section, we will look at solutions to

⃗x′ = A⃗x

where the eigenvalues of the matrix A are complex. With complex eigenvalues, we are going to
have the same problem that we had back when we were looking at second-order differential
equations. We want our solutions to only have real numbers in them; however, since our
solutions to systems are of the form
⃗x = ⃗v ert
we are going to have complex numbers appear in our solution from both the eigenvalue r and
the eigenvector ⃗v . Getting rid of the complex numbers here will be similar to what we did
in the case of second-order differential equations but will involve a little more work this time
around. It is easiest to see how to do this in an example.

When dealing with a 2 × 2 system of first-order linear differential equations with complex
eigenvalues, the typical approach is as follows:

• Suppose the matrix A has complex conjugate eigenvalues r = \alpha + i\beta and \bar{r} = \alpha - i\beta,
  with corresponding eigenvectors \vec{v} and \bar{\vec{v}}.

• We separate \vec{v} into its real and imaginary parts:

    \vec{v} = \vec{p} + i\vec{q}

  where \vec{p} and \vec{q} are real vectors.

• We want to compute the real and imaginary parts of:

    \vec{x}(t) = \vec{v} e^{rt} = (\vec{p} + i\vec{q}) e^{(\alpha + i\beta)t}

• Expand the exponential term:

    e^{(\alpha + i\beta)t} = e^{\alpha t} e^{i\beta t} = e^{\alpha t} \left( \cos(\beta t) + i \sin(\beta t) \right)

• Multiply out:

    (\vec{p} + i\vec{q}) e^{(\alpha + i\beta)t} = (\vec{p} + i\vec{q}) e^{\alpha t} \left( \cos(\beta t) + i \sin(\beta t) \right)

    = e^{\alpha t} \left[ \vec{p} \cos(\beta t) + i\vec{p} \sin(\beta t) + i\vec{q} \cos(\beta t) + i^2 \vec{q} \sin(\beta t) \right]

    = e^{\alpha t} \left[ \vec{p} \cos(\beta t) - \vec{q} \sin(\beta t) + i \left( \vec{p} \sin(\beta t) + \vec{q} \cos(\beta t) \right) \right]

• Identify the real and imaginary parts:

    \text{Real part} = e^{\alpha t} \left( \vec{p} \cos(\beta t) - \vec{q} \sin(\beta t) \right)

    \text{Imaginary part} = e^{\alpha t} \left( \vec{p} \sin(\beta t) + \vec{q} \cos(\beta t) \right)

• We write the general solution entirely in real terms:

    \vec{x}(t) = C_1 e^{\alpha t} \left( \vec{p} \cos(\beta t) - \vec{q} \sin(\beta t) \right) + C_2 e^{\alpha t} \left( \vec{p} \sin(\beta t) + \vec{q} \cos(\beta t) \right)
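These formulas translate directly into code: take one complex eigenpair, split the eigenvector into real and imaginary parts, and form the two real solutions. A NumPy sketch (NumPy assumed available), using the matrix of the next example; which conjugate eigenvalue comes first is irrelevant, since the real and imaginary parts of a complex solution are real solutions either way.

    # Sketch: build two real solutions u(t), w(t) from one complex eigenpair of A.
    import numpy as np

    A = np.array([[3.0, 9.0],
                  [-4.0, -3.0]])

    eigenvalues, eigenvectors = np.linalg.eig(A)
    r = eigenvalues[0]                 # one of the complex conjugate eigenvalues
    v = eigenvectors[:, 0]             # corresponding eigenvector
    alpha, beta = r.real, r.imag
    p, q = v.real, v.imag              # v = p + i q

    def u(t):   # real part of v * exp(r t)
        return np.exp(alpha * t) * (p * np.cos(beta * t) - q * np.sin(beta * t))

    def w(t):   # imaginary part of v * exp(r t)
        return np.exp(alpha * t) * (p * np.sin(beta * t) + q * np.cos(beta * t))

    # Check that u really solves x' = A x: compare A u(t) with a numerical derivative of u.
    t, h = 0.3, 1e-6
    print(A @ u(t), (u(t + h) - u(t - h)) / (2 * h))   # the two vectors agree closely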

Example 0.18:

Solve the following IVP.


   
    \vec{x}\,' = \begin{pmatrix} 3 & 9 \\ -4 & -3 \end{pmatrix} \vec{x}, \qquad \vec{x}(0) = \begin{pmatrix} 2 \\ -4 \end{pmatrix}

We first need the eigenvalues and eigenvectors for the matrix.

    \det(A - rI) = \begin{vmatrix} 3 - r & 9 \\ -4 & -3 - r \end{vmatrix} = r^2 + 27

Thus,

    r_{1,2} = \pm 3\sqrt{3}\, i

Since we have complex eigenvalues, we only need to compute the eigenvector for one eigenvalue.
For r_1 = 3\sqrt{3}\, i, we solve:

    \begin{pmatrix} 3 - 3\sqrt{3}\, i & 9 \\ -4 & -3 - 3\sqrt{3}\, i \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}

From the first equation:

    (3 - 3\sqrt{3}\, i) v_1 + 9 v_2 = 0

Thus:

    v_2 = -\frac{1}{3}(1 - \sqrt{3}\, i) v_1

So the eigenvector is:

    \vec{v} = \begin{pmatrix} v_1 \\ -\frac{1}{3}(1 - \sqrt{3}\, i) v_1 \end{pmatrix}

Choosing v_1 = 3 to simplify:

    \vec{v} = \begin{pmatrix} 3 \\ -(1 - \sqrt{3}\, i) \end{pmatrix} = \begin{pmatrix} 3 \\ -1 + \sqrt{3}\, i \end{pmatrix}

The solution corresponding to this eigenvalue and eigenvector is:

    \vec{x}_1(t) = e^{3\sqrt{3}\, i t} \begin{pmatrix} 3 \\ -1 + \sqrt{3}\, i \end{pmatrix}

Using Euler's formula:

    e^{3\sqrt{3}\, i t} = \cos(3\sqrt{3}\, t) + i \sin(3\sqrt{3}\, t)

Multiplying out:

    \vec{x}_1(t) = \left( \cos(3\sqrt{3}\, t) + i \sin(3\sqrt{3}\, t) \right) \begin{pmatrix} 3 \\ -1 + \sqrt{3}\, i \end{pmatrix}

Compute:

    = \begin{pmatrix} 3\cos(3\sqrt{3}\, t) + 3i\sin(3\sqrt{3}\, t) \\ -\cos(3\sqrt{3}\, t) - i\sin(3\sqrt{3}\, t) + \sqrt{3}\, i\cos(3\sqrt{3}\, t) - \sqrt{3}\sin(3\sqrt{3}\, t) \end{pmatrix}

Separate real and imaginary parts:

    = \underbrace{\begin{pmatrix} 3\cos(3\sqrt{3}\, t) \\ -\cos(3\sqrt{3}\, t) - \sqrt{3}\sin(3\sqrt{3}\, t) \end{pmatrix}}_{\vec{u}(t)} + i \underbrace{\begin{pmatrix} 3\sin(3\sqrt{3}\, t) \\ -\sin(3\sqrt{3}\, t) + \sqrt{3}\cos(3\sqrt{3}\, t) \end{pmatrix}}_{\vec{w}(t)}

Thus, \vec{u}(t) and \vec{w}(t) are linearly independent real solutions.

Therefore, the general solution is:

    \vec{x}(t) = c_1 \vec{u}(t) + c_2 \vec{w}(t)

    = c_1 \begin{pmatrix} 3\cos(3\sqrt{3}\, t) \\ -\cos(3\sqrt{3}\, t) - \sqrt{3}\sin(3\sqrt{3}\, t) \end{pmatrix} + c_2 \begin{pmatrix} 3\sin(3\sqrt{3}\, t) \\ -\sin(3\sqrt{3}\, t) + \sqrt{3}\cos(3\sqrt{3}\, t) \end{pmatrix}

Now apply the initial condition \vec{x}(0) = \begin{pmatrix} 2 \\ -4 \end{pmatrix}:

    \vec{x}(0) = c_1 \begin{pmatrix} 3 \\ -1 \end{pmatrix} + c_2 \begin{pmatrix} 0 \\ \sqrt{3} \end{pmatrix}

Equating components:

    3c_1 = 2, \qquad -c_1 + \sqrt{3}\, c_2 = -4

From the first: c_1 = \frac{2}{3}.
Substitute into the second:

    -\frac{2}{3} + \sqrt{3}\, c_2 = -4

    \sqrt{3}\, c_2 = -4 + \frac{2}{3} = -\frac{10}{3}

    c_2 = -\frac{10}{3\sqrt{3}}

Thus, the final solution is:

    \vec{x}(t) = \frac{2}{3} \begin{pmatrix} 3\cos(3\sqrt{3}\, t) \\ -\cos(3\sqrt{3}\, t) - \sqrt{3}\sin(3\sqrt{3}\, t) \end{pmatrix} - \frac{10}{3\sqrt{3}} \begin{pmatrix} 3\sin(3\sqrt{3}\, t) \\ -\sin(3\sqrt{3}\, t) + \sqrt{3}\cos(3\sqrt{3}\, t) \end{pmatrix}

Example 0.19:

Solve the following IVP.

    \vec{x}\,' = \begin{pmatrix} 3 & -13 \\ 5 & 1 \end{pmatrix} \vec{x}, \qquad \vec{x}(0) = \begin{pmatrix} 3 \\ -10 \end{pmatrix}

First, let's compute the eigenvalues and eigenvectors of the matrix.

    \det(A - rI) = \begin{vmatrix} 3 - r & -13 \\ 5 & 1 - r \end{vmatrix} = r^2 - 4r + 68

So,

    r_{1,2} = 2 \pm 8i

Now, we find the eigenvector corresponding to r_1 = 2 + 8i. We solve:

    \begin{pmatrix} 1 - 8i & -13 \\ 5 & -1 - 8i \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}

From the second equation:

    5 v_1 + (-1 - 8i) v_2 = 0

So,

    v_1 = \frac{1 + 8i}{5} v_2

Thus, the eigenvector is:

    \vec{v} = \begin{pmatrix} \frac{1 + 8i}{5} v_2 \\ v_2 \end{pmatrix}

Choosing v_2 = 5, we get:

    \vec{v} = \begin{pmatrix} 1 + 8i \\ 5 \end{pmatrix}

The solution corresponding to this eigenvalue and eigenvector is:

    \vec{x}_1(t) = e^{(2 + 8i)t} \begin{pmatrix} 1 + 8i \\ 5 \end{pmatrix} = e^{2t} e^{8it} \begin{pmatrix} 1 + 8i \\ 5 \end{pmatrix}

Using Euler's formula:

    e^{8it} = \cos(8t) + i \sin(8t)

So:

    \vec{x}_1(t) = e^{2t} \left( \cos(8t) + i \sin(8t) \right) \begin{pmatrix} 1 + 8i \\ 5 \end{pmatrix}

Multiplying out:

    = e^{2t} \begin{pmatrix} \cos(8t) - 8\sin(8t) \\ 5\cos(8t) \end{pmatrix} + i\, e^{2t} \begin{pmatrix} 8\cos(8t) + \sin(8t) \\ 5\sin(8t) \end{pmatrix}

Denoting:

    \vec{u}(t) = e^{2t} \begin{pmatrix} \cos(8t) - 8\sin(8t) \\ 5\cos(8t) \end{pmatrix}, \qquad \vec{w}(t) = e^{2t} \begin{pmatrix} 8\cos(8t) + \sin(8t) \\ 5\sin(8t) \end{pmatrix}

Thus, the general solution is:

    \vec{x}(t) = c_1 \vec{u}(t) + c_2 \vec{w}(t)

Substitute the initial condition \vec{x}(0) = \begin{pmatrix} 3 \\ -10 \end{pmatrix}:

    \vec{x}(0) = c_1 \vec{u}(0) + c_2 \vec{w}(0)

Compute:

    \vec{u}(0) = \begin{pmatrix} 1 \\ 5 \end{pmatrix}, \qquad \vec{w}(0) = \begin{pmatrix} 8 \\ 0 \end{pmatrix}

Thus:

    \begin{pmatrix} 3 \\ -10 \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ 5 \end{pmatrix} + c_2 \begin{pmatrix} 8 \\ 0 \end{pmatrix}

This gives:

    c_1 + 8c_2 = 3, \qquad 5c_1 = -10

From the second equation: c_1 = -2.
Substitute into the first:

    -2 + 8c_2 = 3 \;\Rightarrow\; 8c_2 = 5 \;\Rightarrow\; c_2 = \frac{5}{8}

Thus, the final solution is:

    \vec{x}(t) = -2 e^{2t} \begin{pmatrix} \cos(8t) - 8\sin(8t) \\ 5\cos(8t) \end{pmatrix} + \frac{5}{8} e^{2t} \begin{pmatrix} 8\cos(8t) + \sin(8t) \\ 5\sin(8t) \end{pmatrix}

Unique Eigenvalue

Case: Repeated Eigenvalue with Two Linearly Independent Eigenvectors

In this section, we explore the case of a system

    \vec{x}\,' = A\vec{x}

where the matrix A has a repeated eigenvalue.

In general, a repeated eigenvalue can lead to a problem: we may struggle to find enough
linearly independent eigenvectors to form a complete solution. However, if the matrix A is
diagonalizable, even with repeated eigenvalues, we will still have two linearly independent
eigenvectors.
When this happens, the system behaves like one with distinct real eigenvalues. Specifically,
we can write the general solution as:

    \vec{x}(t) = c_1 e^{rt} \vec{v}_1 + c_2 e^{rt} \vec{v}_2

where:

• r is the repeated eigenvalue,

• \vec{v}_1 and \vec{v}_2 are two linearly independent eigenvectors corresponding to r.
Example 0.20:

Consider the system:

    \vec{x}\,' = \begin{pmatrix} 4 & 0 \\ 0 & 4 \end{pmatrix} \vec{x}

First, find the eigenvalues:

    \det(A - rI) = \begin{vmatrix} 4 - r & 0 \\ 0 & 4 - r \end{vmatrix} = (4 - r)^2

Thus, the eigenvalue is:

    r = 4 \quad \text{(with multiplicity 2)}

Next, find the eigenvectors by solving (A - rI)\vec{v} = \vec{0}:

    \left( \begin{pmatrix} 4 & 0 \\ 0 & 4 \end{pmatrix} - 4 \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right) \vec{v} = \vec{0}

    \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} \vec{v} = \vec{0}

Thus, any non-zero vector is an eigenvector! For example:

    \vec{v}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \vec{v}_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}

Hence, the general solution is:

    \vec{x}(t) = c_1 e^{4t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{4t} \begin{pmatrix} 0 \\ 1 \end{pmatrix}


The final case that we need to take a look at is solutions to the system

    \vec{x}\,' = A\vec{x}

where the eigenvalue is a repeated eigenvalue and corresponding to it is a single linearly
independent eigenvector. So, the system will have a double eigenvalue r.
This presents a challenge. We want two linearly independent solutions so that we can
form a general solution. However, with a double eigenvalue, we will have only one:

    \vec{x}_1 = \vec{v} e^{rt}

Thus, we need to find a second solution.
To fix this, let's guess that the second linearly independent solution is:

    \vec{x} = t e^{rt} \vec{v} + e^{rt} \vec{\rho}

where \vec{\rho} is an unknown vector to be determined.
Compute the derivative:

    \vec{x}\,' = \frac{d}{dt} \left( t e^{rt} \vec{v} + e^{rt} \vec{\rho} \right) = \vec{v} e^{rt} + r \vec{v}\, t e^{rt} + r \vec{\rho}\, e^{rt}

Compute the right-hand side:

    A\vec{x} = A \left( t e^{rt} \vec{v} + e^{rt} \vec{\rho} \right) = t e^{rt} A\vec{v} + e^{rt} A\vec{\rho}

Equate both sides:

    \vec{v} e^{rt} + r \vec{v}\, t e^{rt} + r \vec{\rho}\, e^{rt} = t e^{rt} A\vec{v} + e^{rt} A\vec{\rho}

Now compare terms:

• Coefficient of t e^{rt}:

    r \vec{v} = A\vec{v}

which is true since r is an eigenvalue and \vec{v} is its eigenvector.

• Coefficient of e^{rt}:

    \vec{v} + r \vec{\rho} = A\vec{\rho}

Rearrange:

    (A - rI)\vec{\rho} = \vec{v}

This is a solvable equation for \vec{\rho}. Therefore, our new guess works!
Thus, the second solution is:

    \vec{x}_2 = t e^{rt} \vec{v} + e^{rt} \vec{\rho}

where \vec{\rho} satisfies:

    (A - rI)\vec{\rho} = \vec{v}

Finally, the general solution to the system with a double eigenvalue is:

    \vec{x}(t) = c_1 e^{rt} \vec{v} + c_2 \left( t e^{rt} \vec{v} + e^{rt} \vec{\rho} \right)


A Note on the Vector \vec{\rho}

The vector \vec{\rho} is not an ordinary eigenvector, but rather a generalized eigenvector of
degree 2.
In general, generalized eigenvectors of degree k are solutions to:

    (A - rI)^k \vec{v} = \vec{0}

for some integer k ≥ 1.
Specifically:

• For k = 1, we recover the usual eigenvectors:

    (A - rI)\vec{v} = \vec{0}

• For k = 2, we have:

    (A - rI)^2 \vec{\rho} = \vec{0}

In our case, \vec{\rho} satisfies:

    (A - rI)\vec{\rho} = \vec{v}

Since \vec{v} itself satisfies (A - rI)\vec{v} = \vec{0}, applying (A - rI) once more to both sides gives:

    (A - rI)^2 \vec{\rho} = (A - rI)\vec{v} = \vec{0}

Thus, \vec{\rho} is indeed a generalized eigenvector of degree 2.
The presence of \vec{\rho} in our solution fills in the gap left by the missing second eigenvector and
ensures that we have two linearly independent solutions, which are necessary for constructing
the general solution of the system.
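Finding \vec{\rho} amounts to solving the singular linear system (A - rI)\vec{\rho} = \vec{v}, which has infinitely many solutions; any particular one will do. A NumPy sketch (NumPy assumed available), using the matrix of the next example, where least squares is used simply to pick one particular solution:

    # Sketch: compute a generalized eigenvector rho with (A - r I) rho = v for a defective matrix.
    import numpy as np

    A = np.array([[7.0, 1.0],
                  [-4.0, 3.0]])
    r = 5.0                            # double eigenvalue of A
    v = np.array([1.0, -2.0])          # the single ordinary eigenvector

    M = A - r * np.eye(2)              # singular, so use least squares to pick one solution
    rho, *_ = np.linalg.lstsq(M, v, rcond=None)
    print(rho)                         # one valid rho, approximately [0.4, 0.2]
    print(M @ rho)                     # reproduces v, confirming (A - r I) rho = v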

Example 0.21:

Solve the following IVP.

    \vec{x}\,' = \begin{pmatrix} 7 & 1 \\ -4 & 3 \end{pmatrix} \vec{x}, \qquad \vec{x}(0) = \begin{pmatrix} 2 \\ -5 \end{pmatrix}

First, find the eigenvalues for the system.

    \det(A - rI) = \begin{vmatrix} 7 - r & 1 \\ -4 & 3 - r \end{vmatrix} = r^2 - 10r + 25 = (r - 5)^2

Thus, r_1 = r_2 = 5.
So, we have a double eigenvalue. Of course, that shouldn't be too surprising given the
section that we're in. Let's find the eigenvector for this eigenvalue.
We solve:

    \begin{pmatrix} 2 & 1 \\ -4 & -2 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}

From the first equation: 2v_1 + v_2 = 0, so v_2 = -2v_1.
The eigenvector is then:

    \vec{v} = \begin{pmatrix} v_1 \\ -2v_1 \end{pmatrix}

Since v_1 \neq 0, we can take v_1 = 1, so:

    \vec{v} = \begin{pmatrix} 1 \\ -2 \end{pmatrix}

Next, we find \vec{\rho}. To do this, we solve:

    \begin{pmatrix} 2 & 1 \\ -4 & -2 \end{pmatrix} \begin{pmatrix} \rho_1 \\ \rho_2 \end{pmatrix} = \begin{pmatrix} 1 \\ -2 \end{pmatrix}

From the first equation: 2\rho_1 + \rho_2 = 1, so \rho_2 = 1 - 2\rho_1.

Thus, the general \vec{\rho} is:

    \vec{\rho} = \begin{pmatrix} \rho_1 \\ 1 - 2\rho_1 \end{pmatrix}

If we choose \rho_1 = 0, then:

    \vec{\rho} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}

In this case, unlike the eigenvector system, we can choose the constant to be anything we
want, so we might as well pick it to make our life easier. This usually means picking it to
be zero.
We can now write down the general solution to the system:

    \vec{x}(t) = c_1 e^{5t} \begin{pmatrix} 1 \\ -2 \end{pmatrix} + c_2 \left( t e^{5t} \begin{pmatrix} 1 \\ -2 \end{pmatrix} + e^{5t} \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right)

Applying the initial condition:

    \vec{x}(0) = \begin{pmatrix} 2 \\ -5 \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ -2 \end{pmatrix} + c_2 \begin{pmatrix} 0 \\ 1 \end{pmatrix}

Equating components:

    c_1 = 2, \qquad -2c_1 + c_2 = -5

Substituting c_1 = 2:

    -2(2) + c_2 = -5 \;\Rightarrow\; c_2 = -1

Therefore, the solution is:

    \vec{x}(t) = 2 e^{5t} \begin{pmatrix} 1 \\ -2 \end{pmatrix} - \left( t e^{5t} \begin{pmatrix} 1 \\ -2 \end{pmatrix} + e^{5t} \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right)

Simplifying:

    \vec{x}(t) = e^{5t} \begin{pmatrix} 2 \\ -4 \end{pmatrix} - e^{5t} t \begin{pmatrix} 1 \\ -2 \end{pmatrix} - e^{5t} \begin{pmatrix} 0 \\ 1 \end{pmatrix}

Finally:

    \vec{x}(t) = e^{5t} \left[ \begin{pmatrix} 2 \\ -5 \end{pmatrix} - t \begin{pmatrix} 1 \\ -2 \end{pmatrix} \right]

Note that we combined terms at the end to simplify the solution a little. ■

9 Conclusion
In this chapter, we have explored two fundamental techniques for solving linear differential
equations with variable coefficients. The power series method is a versatile tool for addressing
equations near ordinary points, while Euler equations provide a structured approach using
the substitution y = xr . Additionally, we examined systems of first-order linear differential
equations, developing methods to handle multiple interdependent equations simultaneously.
Mastery of these methods is essential for solving complex differential equations encountered
in various scientific and engineering problems.
The University of Zambia
Department of Mathematics, Statistics and
Actuarial Science
MAT 3110 - Engineering Mathematics II

Tutorial Sheet 2: Power Series Method, Euler Equations and Systems of Differential Equations (March 2025)

1. Use the power series method to find the solution to each of the following differential
equations. Express your final answers as power series expansions about x = 0.
(a) y ′ = 2y
(b) y ′′ + y = 0
(c) y ′ = ky (where k is a constant)
(d) (1 − x)y ′ = y
(e) (x + 1)y ′ = 3y
(f) (1 + x)y ′ + y = 0
(g) y ′ + 2xy = 0
(h) y ′ = 3x2 y
(i) y ′′ − y = 0
(j) y ′′ + 2xy = 0
(k) y ′′ − y ′ = 0
(l) y ′′ − 9y = 0

2. Show that

    \sum_{m=2}^{\infty} m(m-1) a_m x^{m-2} = \sum_{j=1}^{\infty} (j+1)j\, a_{j+1} x^{j-1} = \sum_{s=0}^{\infty} (s+2)(s+1)\, a_{s+2} x^s

3. For each of the following series, shift the index so that the power of x under the summation sign is x^m.

    (a) \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{3^n} x^{n+2}

    (b) \sum_{s=1}^{\infty} \frac{s(s+1)}{s^2+1} x^{s-1}

    (c) \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{6^k} x^{k-3}

4. Solve the following differential equations. Find the general solution in each case.
(a) xy ′ = 3y + 3

(b) (x − 3)y ′ − xy = 0
(c) y ′ = 2xy
(d) (1 − x4 )y ′ = 4x3 y
(e) (x + 1)y ′ − (2x + 3)y = 0
(f) (1 + x)y ′′ − y = x
(g) y ′′ − 3y ′ + 2y = 0
(h) y ′′ − 4xy ′ + (4x2 − 2)y = 0
(i) (1 − x2 )y ′′ − 2xy ′ + 2y = 0
(j) y ′′ − xy ′ + y = 0

5. Find the general solution of the following second-order differential equations.


(a) x2 y ′′ − 6y = 0
(b) x2 y ′′ + 4xy ′ = 0
(c) x2 y ′′ − 2xy ′ + 2y = 0
(d) x2 y ′′ + 9xy ′ + 16y = 0
(e) x2 y ′′ + xy ′ − y = 0
(f) x2 y ′′ + 3xy ′ + y = 0
(g) x2 y ′′ + 3xy ′ + 5y = 0
(h) x2 y ′′ + xy ′ + y = 0

6. Solve the following initial value problems (IVPs).


(a) x2 y ′′ − 4xy ′ + 4y = 0, y(1) = 4, y ′ (1) = 13
(b) 4x2 y ′′ − 4xy ′ − y = 0, y(4) = 2, y ′ (4) = 1/4
(c) x2 y ′′ − 5xy ′ + 8y = 0, y(1) = 5, y ′ (1) = 18

7. Find the eigenvalues and corresponding eigenvectors of the following matrices:

    (a) \begin{pmatrix} 5 & -2 \\ 9 & -6 \end{pmatrix}

    (b) \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}

    (c) \begin{pmatrix} 0 & 3 \\ -3 & 0 \end{pmatrix}

    (d) \begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix}

    (e) \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}

8. Find the general solution of each of the following systems of first-order linear differential
equations. Write each system in component form.
(a)

x′1 = 2x1 + 7x2


x′2 = −5x1 − 10x2

(b)

x′1 = −3x1 + 6x2


x′2 = −3x1 + 3x2

(c)

x′1 = 8x1 − 4x2


x′2 = x1 + 4x2

(d)

x′1 = −3x1 + 2x2


x′2 = −x1 − 5x2

(e)

x′1 = 2x1 + x2
x′2 = x1 + 2x2

(f)

x′1 = 2x1 − 5x2


x′2 = x1 − 4x2

End of Tutorial Sheet
