
MATH2010

Analysis of Ordinary Differential


Equations

WORKBOOK
Semester 2, 2019

These lecture notes belong to:

I can be contacted via:

If you find them, please return them to me!

© School of Mathematics and Physics, The University of Queensland, Brisbane QLD 4072, Australia.
Contents

1 Differential Equations
  1.1 Introduction
    1.1.1 Electrical Circuit
    1.1.2 Systems of ODE's
    1.1.3 Laplace Transforms
    1.1.4 Texts
  1.2 Introduction to Systems of ODE's and Classification of ODE's
    1.2.1 Introduction to Systems of ODE's
    1.2.2 Classifying ODE's: Linear, Order, Homogeneous
    1.2.3 The Superposition Principle
  1.3 Solving systems of two coupled 1st order ODE's
    1.3.1 The system in matrix form
    1.3.2 The Homogeneous case with constant coefficients
  1.4 Theory and Theorems for first order systems
  1.5 Homogeneous Constant Coefficient Linear 2-dimensional Systems and the Phase Plane
    1.5.1 The Phase Portrait
    1.5.2 Phase Portraits for Real Eigenvalues and Direction Fields
    1.5.3 Phase Portraits for Complex Eigenvalues
    1.5.4 SUMMARY of 6 Types of LINEAR PHASE PORTRAITS in 2D
  1.6 Critical Points and Stability
    1.6.1 Critical Points
    1.6.2 Stability of Critical points
  1.7 Non homogeneous Linear systems
  1.8 Nonlinear Systems
    1.8.1 Solving for the Phase Curves
    1.8.2 Critical Points for Nonlinear Systems
    1.8.3 Linearization of Nonlinear Systems
    1.8.4 Summary for Nonlinear Systems
  1.9 Diagonalization and 2D Phase Portraits
    1.9.1 Relevance to 2D Phase Portraits

2 Laplace Transforms
  2.1 Finding the Laplace Transform of a function
    2.1.1 Definition
    2.1.2 The Laplace Transform of simple functions
    2.1.3 The Laplace Transform for Piecewise Continuous functions
    2.1.4 The First Shifting Theorem
    2.1.5 Summary of Laplace Transforms
    2.1.6 Inverting Laplace Transforms
    2.1.7 The Gamma Function and L(t^a), where a is not an integer
  2.2 Laplace Transforms of Derivatives and Solving Simple Linear Constant Coefficient ODEs and Systems of ODE's
    2.2.1 The Laplace Transform of the differential of a function
    2.2.2 Solving Linear ODEs
    2.2.3 Forcing Functions and Transfer Functions
    2.2.4 Solving Systems of ODE's
  2.3 Finding the inverse Laplace Transform of complicated functions using Partial Fractions
    2.3.1 Simple Factors
    2.3.2 Repeated simple factors
    2.3.3 Irreducible Quadratic Factors
    2.3.4 Repeated irreducible factors and the Inverse Laplace Transform of dF(s)/ds
  2.4 The Second Shifting Theorem
    2.4.1 Using the Second Shifting Theorem
    2.4.2 Circuit Examples
  2.5 The Convolution Theorem
    2.5.1 Using the Convolution Theorem
    2.5.2 Solving Linear ODE's using Convolution
    2.5.3 The Laplace Transform of Periodic Functions

1 Differential Equations

1.1 Introduction

Differential Equations come in two basic types: ODE's and PDE's.

Ordinary Differential Equations (ODE's), MATH2010, and Partial Differential Equations (PDE's), MATH2011.

In ODE's the unknown is a function of one independent variable, so that only ordinary derivatives are involved.
For example the equation for a mass-spring system,

m d^2x/dt^2 = −kx − c dx/dt,

is an equation for the unknown x as a function of the independent variable t; x(t).

In PDE's the unknown is a function of 2 or more independent variables and there are partial derivatives. For example the heat equation,

m ∂^2H/∂x^2 = c ∂H/∂t,

is an equation for the unknown H as a function of the independent variables x, t; H(x, t).

MATH2010 has two parts;


SYSTEMS OF ODE’s and LAPLACE TRANSFORMS

SYSTEMS OF ODE's often arise naturally, or an nth order linear differential equation can be written as a system of differential equations.
For instance to model the spread of an infectious disease the rate of change of the number
of those infected depends on the number of those who are susceptible.

d(Infected)/dt = f(Infected, Susceptible)
d(Susceptible)/dt = g(Infected, Susceptible)
In an electrical circuit the rate of change of charge gives the current, but the rate of
change of current itself depends on charge.

1.1.1 Electrical Circuit

[Figure: a series RLC circuit with inductance L, resistance R, capacitance C, charge Q(t), current I(t) and applied voltage E(t).]

Kirchhoff's Law says that the voltage drop across the Inductor plus the voltage drop across the Resistor plus the voltage drop across the Capacitor equals the applied voltage:

L dI(t)/dt + RI(t) + Q(t)/C = E(t)
But charge is related to current:

So we could substitute this back to give one second order equation for Q(t).

Or we can write as a system of two coupled first order ODE’s.



1.1.2 Systems of ODE’s

Take the Electrical circuit


L d^2Q(t)/dt^2 + R dQ(t)/dt + Q(t)/C = E(t)

with R = 0, L = 1, C = 1 and E(t) = 0, then

If Q(0) = 0 this has solution

So the charge and current oscillate out of phase with each other.
We can see from the equations that

Q^2 + I^2 =

the solution curves are circles in (Q, I) space.
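The claim that solutions lie on circles can be checked numerically. The sketch below is not from the workbook: it integrates Q̇ = I, İ = −Q (the R = 0, L = C = 1 circuit with E = 0) with a classical fourth-order Runge-Kutta step; the step count and initial condition are arbitrary choices.

```python
import math

def rk4_step(f, y, h):
    """One classical Runge-Kutta step for the autonomous system y' = f(y)."""
    k1 = f(y)
    k2 = f([y[i] + h / 2 * k1[i] for i in range(len(y))])
    k3 = f([y[i] + h / 2 * k2[i] for i in range(len(y))])
    k4 = f([y[i] + h * k3[i] for i in range(len(y))])
    return [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(len(y))]

def circuit(state):
    Q, I = state
    return [I, -Q]          # Q' = I and I' = -Q when R = 0, L = C = 1, E = 0

def integrate(y0, t_end, n=1000):
    h = t_end / n
    y = list(y0)
    for _ in range(n):
        y = rk4_step(circuit, y, h)
    return y

# With Q(0) = 0, I(0) = 1 the exact solution is Q = sin(t), I = cos(t).
Q, I = integrate([0.0, 1.0], math.pi / 2)
```

At t = π/2 the numerical values should be close to Q = 1, I = 0, and Q^2 + I^2 should still equal 1, confirming that the trajectory stays on a circle and that Q and I oscillate out of phase.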



But the full 3-dimensional space (t, Q, I) = (t, B sin(t), B cos(t)) is really hard to work
with.

[Figure: in (t, Q, I) space the solution curve is a helix.]
So we have two options;

Work with Q(t), or in (Q,I) space.


[Figure: left, Q(t) plotted against t; right, the solution curve in (Q(t), I(t)) phase space.]

In the second option, (Q(t), I(t)) space, the curve representing the solutions is parametrized by time. Because the curve is a circle we can see that the behavior is cyclic and that when Q(t) is at a maximum I(t) is 0, etc. But some information has been lost. For instance we don't know how fast to move around the circle.
The (Q(t), I(t)) space is called Phase Space.

There are about 6 qualitatively different Linear systems in 2D Phase Space which we will
look at and classify. Then we will begin to ask what we can do with Nonlinear Systems.
Search on the Internet for pplane and try entering your own linear system.

1.1.3 Laplace Transforms

In LAPLACE TRANSFORMS we will solve systems which are time dependent, such
as the circuit above with a variable applied voltage. What makes Laplace Transforms so
useful is that you can use them to solve equations with discontinuous terms, say a voltage
that is suddenly switched on!

L d^2Q(t)/dt^2 + R dQ(t)/dt + Q(t)/C = E(t)

Suppose the voltage has been switched on and is then switched off. We could assume that
it is switched off at say t = k

E(t) = E0 (1 − u(t − k)),  i.e.  E(t) = E0 for 0 ≤ t < k and E(t) = 0 for k ≤ t.

[Figure: the applied voltage, switched off at t = k.]

Then for some circuits the result is oscillatory behavior.

[Figure: an oscillatory response involving cos(ωt)/ω^2, with period 2π/ω.]

1.1.4 Texts

William A. Adkins, Mark G. Davidson: Ordinary Differential Equations,


Springer, 2012. PDF is available from Springer-Link through UQ-Library.
Kreyszig, Advanced Engineering Mathematics (9th Edition).
We use Chapters 4 and 6 from this text.
For Laplace Transforms:
W. T. Thompson, Laplace Transforms, QA432.T5, 1960.

1.2 Introduction to Systems of ODE’s and Classification of ODE’s.

1.2.1 Introduction to Systems of ODE’s

Systems of first order ODE’s can arise naturally.

Take the Lotka-Volterra, or Predator Prey Population Model.

dr(t)/dt =

df(t)/dt =

where r(t) is the number of rabbits and f (t) is the number of foxes and a, b, l and k are
constants.
These equations are said to be coupled (together) meaning that we cannot solve either
one independently.

Or take a Mixing Problem. Two tanks, one containing water and the other fertilizer, are connected by two flow pipes. In one pipe fluid flows from tank 1 to tank 2 with flow rate r1. In the other, fluid flows in the opposite direction with flow rate r2. If yi is the amount of fertilizer in tank i and Vi is the volume in tank i, the equations are the following.

[Figure: Tank 1 and Tank 2 connected by pipes with flow rates r1 and r2.]

dy1(t)/dt =

dy2(t)/dt =

[Figure: Mass-Spring-Damper System: a mass hanging from a spring (constant k) with a damper (constant c); x measures displacement from the natural length.]

From Newton’s 2nd Law

dp/dt =

where p is          , x is          ,

m is the mass, k is the spring constant and c is the damping constant.



From the linear momentum

p = mẋ →

Sometimes two (or a higher number of) first order ODE's can be written as one 2nd (or higher) order ODE.
For instance for the Mass-Spring-Damper System

dp/dt = −kx − cẋ,   dx/dt = p/m

can be written as

(Since from linear momentum we have that dp/dt = mẍ.)

Have a look at mathsims on the web: http://teaching.smp.uq.edu.au/mathsims


In this course we will introduce the Phase Space. In the simulation you can see how the
trajectory in phase space relates to the solution to the system.

And, going the other way, ANY n-th order ODE of the form

d^n y/dt^n = F(t, y, dy/dt, d^2y/dt^2, ..., d^(n−1)y/dt^(n−1))

can be written as a system of n first order ODE's:



Let y1 = y

dy1/dt =

dy2/dt =

⋮

dy(n−1)/dt =

dyn/dt =
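The reduction above is mechanical enough to script. The helper below is a sketch (its name and interface are made up for illustration): given F with y^(n) = F(t, y, y', ..., y^(n−1)), it returns the right-hand side of the equivalent first-order system.

```python
def as_first_order_system(F, n):
    """Turn y^(n) = F(t, y, y', ..., y^(n-1)) into a first-order RHS.

    The state vector is u = [u1, ..., un] with u1 = y, u2 = dy/dt, etc.,
    so u1' = u2, ..., u_{n-1}' = un, and un' = F(t, u1, ..., un).
    """
    def rhs(t, u):
        return [u[i + 1] for i in range(n - 1)] + [F(t, *u)]
    return rhs

# Example: y'' = -y becomes u1' = u2, u2' = -u1.
rhs = as_first_order_system(lambda t, y, ydot: -y, 2)
out = rhs(0.0, [3.0, 4.0])   # expect [4.0, -3.0]
```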

1.2.2 Classifying ODE’s: Linear, Order, Homogeneous.

LINEAR
An ODE is said to be Linear if it is linear in the unknown and its derivatives.
But it need not be linear in the independent variable.
For example

d^2y(t)/dt^2 = e^t dy(t)/dt − cos(5t) y(t) + t^5   is

But

d^4y(t)/dt^4 = 2y(t) dy(t)/dt − y(t) + 5   is

And

(d^3u(x)/dx^3)^2 + d^5u(x)/dx^5 − xu(x) + x^2 = 0   is

ORDER
The order of an ODE is the order of the highest derivative.
In the examples above the first is          order, the second          order and the third is          order.

HOMOGENEOUS
A linear ODE is either homogeneous or inhomogeneous.
An ODE is homogeneous if when y(t) is a solution then so is cy(t) for any con-
stant c.

This is great when it happens of course because if you can find one solution you immedi-
ately have a whole family of others.
Take some examples

L(y(t)) = d^2y(t)/dt^2 + 2 dy(t)/dt − cos(5t) y(t) = 0 is homogeneous.
Try it.

But
L(y(t)) = d^2y(t)/dt^2 − e^t dy(t)/dt + cos(5t) y(t) − t^5 = 0 is inhomogeneous.

Or

d^3u(x)/dx^3 = sin(x) u(x) + 5 is

1.2.3 The Superposition Principle

Linear, Homogeneous ODE’s obey

THE SUPERPOSITION PRINCIPLE

This means that you can take linear combinations of known solutions to form new solutions. So if y1(t) and y2(t) are two solutions then c1 y1(t) + c2 y2(t) is also a solution for any constants c1 and c2.
It also works for systems. If [y1a, y2a]^T and [y1b, y2b]^T are solutions to a 2D linear homogeneous system then for any constants c1 and c2,

c1 [y1a, y2a]^T + c2 [y1b, y2b]^T is also a solution.

But be careful: the superposition principle only applies to linear homogeneous systems.
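A quick numerical sanity check of superposition (a sketch; the matrix and constants are arbitrary choices, not an example from the notes): build two solutions of ẏ = Ay from the eigenpairs of A = [[0, 1], [−2, −3]], whose eigenvalues are −1 and −2, and verify that a combination of them still satisfies the system.

```python
import math

A = [[0.0, 1.0], [-2.0, -3.0]]   # eigenvalues -1 and -2

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def y1(t):
    return [math.exp(-t), -math.exp(-t)]             # eigenvector [1, -1], eigenvalue -1

def y2(t):
    return [math.exp(-2 * t), -2 * math.exp(-2 * t)]  # eigenvector [1, -2], eigenvalue -2

def combo(t, c1=0.7, c2=-1.3):
    a, b = y1(t), y2(t)
    return [c1 * a[0] + c2 * b[0], c1 * a[1] + c2 * b[1]]

# Compare a central-difference derivative of the combination with A times it.
h, t = 1e-5, 0.5
plus, minus = combo(t + h), combo(t - h)
deriv = [(plus[i] - minus[i]) / (2 * h) for i in range(2)]
rhs = matvec(A, combo(t))
residual = max(abs(deriv[i] - rhs[i]) for i in range(2))
```

A near-zero residual is exactly what the superposition principle predicts; the same check run on a nonlinear system would fail.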

1.3 Solving systems of two coupled 1st order ODE’s.

1.3.1 The system in matrix form.

Any second order linear ODE can be written as a system of two coupled 1st order ODE’s:

d^2y(t)/dt^2 + p(t) dy(t)/dt + q(t) y(t) = r(t)

Now this system of two first order ODE’s can be written in matrix form:

If we let

y = [y1, y2]^T,   A(t) = [[0, 1], [−q(t), −p(t)]]   and   R(t) = [0, r(t)]^T

Then the system in matrix form is

If R(t) = 0 the system is homogeneous.


If R(t) ≠ 0 the system is inhomogeneous.

1.3.2 The Homogeneous case with constant coefficients.


 
Suppose ẏ = Ay with A = [[0, 1], [−q, −p]], a constant matrix.

We know that the solutions to 2nd order linear ODE’s with constant coefficients are linear
sums of exponentials:

If ÿ + pẏ + qy = 0, let y = e^{λt}.



Dividing by e^{λt} gives the Auxiliary equation

Say there are two roots λ1 and λ2 then the General Solution to the ODE is

(Actually these constants are fixed by the initial conditions y(0) and ẏ(0) in an initial
value problem (IVP).)

For a system
ẏ = Ay
Try
 
y = x e^{λt}, where x = [u, v]^T is a constant vector
and λ is a constant scalar.

So dividing by e^{λt} gives

Ax = λx
This is called the EIGENVALUE EQUATION for A.

(It implies the auxiliary equation for λ!)



If there are two eigenvalues λ1 and λ2 with eigenvectors x(1) and x(2) then

x^(1) e^{λ1 t}  and  x^(2) e^{λ2 t}

are solutions to
ẏ = Ay.

The General Solution to the matrix equation is a linear combination of


these:
y = c1 x^(1) e^{λ1 t} + c2 x^(2) e^{λ2 t}
( from the superposition principle), for constants c1 and c2 .

We now have TWO ways to solve Initial value Problems (IVP’s)


Take
ÿ + 2ẏ − 15y = 0, with y(0) = −1 and ẏ(0) = 13.

METHOD I (OLD Method, see Kreyszig Sec 2.2 and 2.3)


Let y = e^{λt}, for some constant λ, substitute this into the ODE and obtain an equation for λ.

Luckily we can divide by e^{λt} so that λ satisfies

General Solution to the ODE is a linear combination of these two solutions

Then ẏ is found by differentiating y. Here this implies that ẏ = 3c1 e^{3t} − 5c2 e^{−5t}.

Now use the initial conditions to find c1 and c2 .

Recall y = c1 e^{3t} + c2 e^{−5t} and ẏ = 3c1 e^{3t} − 5c2 e^{−5t}, with y(0) = −1 and ẏ(0) = 13.

Solution to IVP is

y(t) = e^{3t} − 2e^{−5t}.

There are two special cases: Complex roots λ± = α ± iβ, and Equal roots.

Complex Roots

As before y = c1 e^{λ− t} + c2 e^{λ+ t}, but c1 and c2 are now complex. For a real solution c2 must be the complex conjugate of c1. However complex numbers are tricky, particularly if we are assuming that everything is real.

So we often rewrite this solution in real form:

Let λ+ = α + iβ

e^{λ+ t} =

Let c1 = a + ib, then c2 = a − ib



y = (a + ib) e^{αt} (cos(βt) − i sin(βt)) + (a − ib) e^{αt} (cos(βt) + i sin(βt)) =

So we say the general solution (in real form) is y = A e^{αt} cos(βt) + B e^{αt} sin(βt), for some real constants A and B.

Equal Roots

The problem here is that there is only one value of λ and so only one solution y1 = e^{λt}. However the second solution can be found by variation of parameters and is y2 = t e^{λt}. So the general solution is y = c1 e^{λt} + c2 t e^{λt}.

METHOD II (the NEW way) uses matrices.

Write the second order ODE ÿ + 2ẏ − 15y = 0 as a system.

Let y1 (t) = y(t), y2 (t) = ẏ(t), then

Now this system of two first order ODE’s can be written in matrix form:

Now let

y = x e^{λt}

where x = [u, v]^T is a constant vector and λ is a constant scalar.

Sub in

LHS = ẏ = λ x e^{λt} = RHS = A x e^{λt}


So
Ax = λx

This is the EIGENVALUE EQUATION for A.


 
To solve Ax = λx note that the identity matrix I = [[1, 0], [0, 1]] is such that Ix = x, so the eigenvalue equation can be written as

(A − λI)x = 0,

which can only be satisfied for non-trivial x (i.e. x ≠ 0) if

det(A − λI) = 0,  where A = [[0, 1], [15, −2]].

(There is a review of eigenvalues and vectors on the web and in Kreyszig Sec 4.0.)
The Eigenvectors corresponding to λ1 = 3, λ2 = −5 are found by solving

(A − λi I) x^(i) = 0   for x^(i) = [u, v]^T.

If λ1 = 3

If λ2 = −5

So that

y^(1) = [1, 3]^T e^{3t}  and  y^(2) = [1, −5]^T e^{−5t}  are solutions to ẏ = Ay.

The General Solution to the matrix equation is


   
1 1
y = c1 3t
e + c2 e−5t , where c1 and c2 are constants,
3 −5

provided that the two solutions y(1) and y(2) are linearly independent.

The solution can be written in matrix form:

y = [[e^{3t}, e^{−5t}], [3e^{3t}, −5e^{−5t}]] [c1, c2]^T

or more generally

y = [[y1^(1), y1^(2)], [y2^(1), y2^(2)]] [c1, c2]^T.

The matrix

Y = [[y1^(1), y1^(2)], [y2^(1), y2^(2)]]  is called the fundamental matrix.

If the determinant of this matrix, often called the Wronskian W = det Y, is nonzero then y^(1) and y^(2) are linearly independent and the General Solution to the matrix equation is given by y = c1 y^(1) + c2 y^(2).
 
If y(0) = −1 and ẏ(0) = 13 then y(0) = [−1, 13]^T.

For those who like matrices, you can solve this using the matrix form. For t = 0 we have

y(0) = [[1, 1], [3, −5]] [c1, c2]^T = [−1, 13]^T

or

[c1, c2]^T = [[1, 1], [3, −5]]^{−1} [−1, 13]^T = (−1/8) [[−5, −1], [−3, 1]] [−1, 13]^T = [1, −2]^T

Finally, the unique solution to the IVP:

y = [1, 3]^T e^{3t} − 2 [1, −5]^T e^{−5t}
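The worked example can also be reproduced in a few lines of code. The sketch below uses no libraries, only the facts that for A = [[0, 1], [15, −2]] the eigenvalues solve λ^2 − tr(A)λ + det(A) = 0 and that companion-form matrices have eigenvectors [1, λ]^T.

```python
import math

# A = [[0, 1], [15, -2]] from the worked example.
tr, det = -2.0, -15.0                  # trace and determinant of A
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2   # eigenvalues 3 and -5

# Eigenvectors of the companion matrix [[0, 1], [15, -2]] are [1, lambda]^T.
x1, x2 = [1.0, lam1], [1.0, lam2]

# Solve [x1 x2][c1, c2]^T = y(0) = [-1, 13]^T by Cramer's rule.
y0 = [-1.0, 13.0]
D = x1[0] * x2[1] - x2[0] * x1[1]
c1 = (y0[0] * x2[1] - x2[0] * y0[1]) / D
c2 = (x1[0] * y0[1] - y0[0] * x1[1]) / D

def y(t):
    return c1 * math.exp(lam1 * t) + c2 * math.exp(lam2 * t)
```

This recovers c1 = 1, c2 = −2 and hence y(t) = e^{3t} − 2e^{−5t}, matching the solution above.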

SUMMARY
Solving a system of two coupled linear constant coefficient equations.

ẏ = Ay

where A is a constant 2 × 2 matrix and y = [y1, y2]^T.
Solve for the eigenvalues λi and eigenvectors x^(i) of A.

Then

both y^(1) = x^(1) e^{λ1 t} and y^(2) = x^(2) e^{λ2 t} are solutions to the system.

The general solution is a linear combination of these solutions

y = c1 x^(1) e^{λ1 t} + c2 x^(2) e^{λ2 t}

provided that y^(1) and y^(2) are linearly independent. (Problems can only arise if λ1 = λ2.)
Example. Solve

ÿ − 3ẏ − 4y = 0, with y(0) = 4 and ẏ(0) = 6.

by converting to a first order system of equations.


Let y1 (t) = y(t)

(Remember you can check by substitution.)
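If you want to check your final answer for this example numerically, the sketch below substitutes a candidate solution back into the ODE. The candidate shown is what the eigenvalue method gives (eigenvalues 4 and −1); treat it as something to verify against your own working, not to copy.

```python
import math

def y(t):
    # Candidate: eigenvalues of lambda^2 - 3*lambda - 4 = 0 are 4 and -1,
    # and the initial conditions y(0) = 4, y'(0) = 6 give c1 = c2 = 2.
    return 2 * math.exp(4 * t) + 2 * math.exp(-t)

def ydot(t):
    return 8 * math.exp(4 * t) - 2 * math.exp(-t)

def yddot(t):
    return 32 * math.exp(4 * t) + 2 * math.exp(-t)

# Residual of y'' - 3y' - 4y at several times, plus the initial conditions.
residuals = [yddot(t) - 3 * ydot(t) - 4 * y(t) for t in (0.0, 0.3, 1.0)]
```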



What if the eigenvalues are complex?


Usually what is wanted is a real solution. This can be expressed in complex form or in real form. For example, ÿ + 4y = 0, when written in matrix form, becomes

[ẏ1, ẏ2]^T = [[0, 1], [−4, 0]] [y1, y2]^T

Now the eigenvalues of A = [[0, 1], [−4, 0]] are given by

The eigenvectors are also complex, however because the eigenvalues are complex conju-
gates of each other the eigenvectors are also complex conjugates of each other.
(For any real matrix with complex eigenvalues the eigenvalues and vectors are complex
conjugates of each other.)

The general solution in complex form is

y = c1 y^(1) + c2 y^(2) = c1 [1, 2i]^T e^{2it} + c2 [1, −2i]^T e^{−2it}

But note that here c1 and c2 are assumed to be complex constants.


Real Solutions
For a real y the constant c2 must be the complex conjugate of c1.
Then c1 y^(1) = c1 [1, 2i]^T e^{2it} and c2 y^(2) = c2 [1, −2i]^T e^{−2it} are complex conjugates of each other, so their sum is real.
So for a real solution let c1 = a + ib and then c2 = a − ib. Then, since y^(1) and y^(2) are complex conjugates of each other, if y^(1) = y_r^(1) + i y_i^(1) then y^(2) = y_r^(1) − i y_i^(1).
This means that the general solution in real form

y = (a + ib)(y_r^(1) + i y_i^(1)) + (a − ib)(y_r^(1) − i y_i^(1)) =

is a linear combination of the real and imaginary parts of y^(1).

So you only need to find the real and imaginary parts of y(1) :

Then the general solution in real form is a linear combination of these real and
imaginary parts;

for real constants d1 and d2 .


In fact you can show that d1 = c1 + c2 = 2a and d2 = i(c1 − c2 ) = −2b, which are real if
c2 is the complex conjugate of c1 .
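This complex-to-real bookkeeping is easy to get wrong, so a numeric check helps. A sketch for the ÿ + 4y = 0 example above (the values of a and b are arbitrary): with c1 = a + ib and c2 = c1*, the complex-form solution should come out real and agree with the real form using d1 = 2a, d2 = −2b.

```python
import cmath
import math

a, b = 0.8, -0.5
c1 = complex(a, b)
c2 = c1.conjugate()

def complex_form(t):
    # y = c1 [1, 2i]^T e^{2it} + c1* [1, -2i]^T e^{-2it}
    ep, em = cmath.exp(2j * t), cmath.exp(-2j * t)
    return [c1 * ep + c2 * em,
            c1 * 2j * ep + c2 * (-2j) * em]

def real_form(t):
    # Real and imaginary parts of y^(1):
    # y_r^(1) = [cos 2t, -2 sin 2t],  y_i^(1) = [sin 2t, 2 cos 2t]
    d1, d2 = 2 * a, -2 * b
    yr = [math.cos(2 * t), -2 * math.sin(2 * t)]
    yi = [math.sin(2 * t), 2 * math.cos(2 * t)]
    return [d1 * yr[0] + d2 * yi[0], d1 * yr[1] + d2 * yi[1]]

t = 0.37
y_complex = complex_form(t)
y_real = real_form(t)
```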

Summary

1. Any Linear Homogeneous second order ODE with constant coefficients


ÿ + pẏ + qy = 0 can be written as a system in matrix form:

ẏ = Ay, where A = [[0, 1], [−q, −p]] is a constant matrix.

2. To solve a system of two coupled linear constant coefficient equations


in matrix form;

ẏ = Ay, where A is a constant 2 × 2 matrix and y = [y1, y2]^T,

solve for the eigenvalues λi and eigenvectors x^(i) of A.

Then the general solution is a linear combination of the two solutions

(y^(1) = x^(1) e^{λ1 t} and y^(2) = x^(2) e^{λ2 t})

y = c1 y^(1) + c2 y^(2) = c1 x^(1) e^{λ1 t} + c2 x^(2) e^{λ2 t}

provided that y^(1) and y^(2) are linearly independent.
(Problems can only arise if λ1 = λ2.)

3. If the eigenvalues are complex, λ = α ± iβ, the eigenvectors are complex conjugates of each other (x^(2) = x^(1)*) and a REAL solution is obtained if c2 is the complex conjugate of c1 (c2 = c1*).

y = c1 x^(1) e^{(α+iβ)t} + c1* x^(1)* e^{(α−iβ)t}   (complex form)

Or, given that x_r^(1) is the real part of x^(1) and x_i^(1) is the imaginary part, the solution in real form is

y = d1 e^{αt} (x_r^(1) cos(βt) − x_i^(1) sin(βt)) + d2 e^{αt} (x_i^(1) cos(βt) + x_r^(1) sin(βt))

for some real constants d1 and d2.



1.4 Theory and Theorems for first order systems.

Take a general n-dimensional system


dy1/dt = f1(t, y1, y2, ..., yn)
dy2/dt = f2(t, y1, y2, ..., yn)
etc.
dyn/dt = fn(t, y1, y2, ..., yn).
We can write this as

ẏ = f(t, y),

where y and f are n-dimensional vectors and the fi(t, y1, y2, ..., yn) are not necessarily linear functions of yi or t.

For an Initial Value Problem (IVP) there is an initial condition for each yi ;
yi (t0 ) = Ki or y(t0 ) = K. So IVP is written as

ẏ = f (t, y) with y(t0 ) = K

Existence Uniqueness
Basically if f is smooth at a given initial condition then there is one and only one solution
for that initial condition. It may not exist for all time, but it must exist in some open
time neighborhood of t0 .

Existence Uniqueness Theorem


Let fi be continuous functions with continuous partial derivatives with respect to yi in
some domain of (t, y1 , y2 , ..., yn ) space containing (t0 , K1 , K2 , ..., Kn ).
Then the IVP
ẏ = f (t, y) with y(t0 ) = K
has a unique solution on some interval t0 − α < t < t0 + α.

Note Solutions may not exist for all time.


Take dy/dt = y^2
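The working is left blank above; as a check on the claim (a sketch): separating variables with y(0) = 1 gives the exact solution y = 1/(1 − t), which satisfies the ODE wherever it is defined but grows without bound as t → 1, so no solution exists for all time.

```python
def y(t):
    # Exact solution of dy/dt = y^2 with y(0) = 1; only valid for t < 1.
    return 1.0 / (1.0 - t)

def dydt(t, h=1e-6):
    # Central-difference derivative.
    return (y(t + h) - y(t - h)) / (2 * h)

# The ODE is satisfied where the solution exists...
residual = max(abs(dydt(t) - y(t) ** 2) for t in (0.0, 0.5, 0.9))

# ...but the solution blows up as t approaches 1.
near_blowup = y(0.999)
```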

Note If fi are NOT continuous at (t0 , yi (0)) the Existence Uniqueness Theorem is not
satisfied.
Take dy/dt = 2y/t

Note If ∂fi/∂yi are not continuous at (t0, yi(0)) the Existence Uniqueness Theorem is not satisfied.

1.5 Homogeneous Constant Coefficient Linear 2-dimensional Systems and the Phase Plane

We have a vague idea of the types of behavior we can expect from linear constant coefficient
ODE’s because they have exponential solutions.
So we expect exponential decay (from terms like e^{−3t}) or exponential growth (from terms like e^{2t}),

[Figure: e^{−3t}, exponential decay; e^{2t}, exponential growth; sin(5t), oscillation; e^{−t/5} cos(t), decaying oscillation.]

oscillatory behavior (from terms like sin(5t)) and decaying or growing oscillatory behavior (from terms like e^{−3t} cos(t)).

But in a 2-dimensional system there are always two fundamental solutions and one may
grow while the other decays meaning that different initial conditions may give different
behavior. We really need to consider the 3-dimensional space (t, y1 , y2 ). But that is too
complicated so we consider (y1 (t), y2 (t)) as coordinates in (y1 , y2 ) space, which is called
Phase Space.

The Linear Pendulum is a good visual model.

θ̈ = −(g/l) θ

where g is gravity, l is length and θ is the angle the pendulum makes with the vertical.
Take g/l = 9 say, then letting y1 = θ and y2 = θ̇, in matrix notation we have

So since the real solutions are


   
cos(3t) sin(3t)
and
−3 sin(3t) 3 cos(3t)

In both cases

Now the General solution is a linear combination of these and in fact you can show that in general the solutions satisfy y1^2 + y2^2/9 = c^2, for some constant c. In the (y1, y2) plane these are ellipses.

[Figure: elliptical trajectories in the (y1, y2) phase plane.]

1.5.1 The Phase Portrait

Each initial condition gives a curve in phase space, which is called a trajectory. These trajectories, ellipses here, represent solutions to the ODE in phase space. You can build up a complete picture by taking lots of different initial conditions, each of which will give you a trajectory in phase space. This is called the Phase Portrait of the system. Here the phase portrait is simply lots of ellipses of the form y1^2 + y2^2/9 = c^2, plus the origin.

[Figure: the phase portrait: nested ellipses in the (y1, y2) plane.]
The Phase Plane representation of the solutions does not tell you everything. It cannot
say how fast you move along a phase curve. However we do usually indicate the direction
of increasing time by an arrow.

Existence Uniqueness For a constant coefficient system that is smooth, two trajectories cannot cross, otherwise they would violate existence-uniqueness: at the point where they cross there would be two different solutions coming out of one point.

The Trivial Solution i.e. y=0


A system of the form

ẏ = Ay,  where y = [y1, y2]^T

and A is a 2 × 2 constant matrix, has the trivial solution.
That is if y1 (t0 ) = 0 and y2 (t0 ) = 0 then y1 (t) = 0 and y2 (t) = 0 for all time.
This is one trajectory that is easy to plot!

There are 6 qualitatively different phase portraits, apart from special cases. Four are
concerned with real eigenvalues and two with complex eigenvalues. We will look at all 6.
The next section is on real eigenvalues.

Calculating the Phase Portrait Numerically with mathsims and pplane.


Have a look at mathsims on the web: http://teaching.smp.uq.edu.au/mathsims
There are 7 models to look at. To see the 6 types of qualitatively different phase portraits
go to the General Linear Model. For ẋ = −3y and ẏ = 3x, let a = 0, b = −3, c = 3 and
d = 0. This gives purely oscillatory motion.
There is also a great free software package on the Internet called pplane.
(http://math.rice.edu/~dfield/dfpp.html)
All you need to do is type in the right hand side of a two dimensional system of ODE’s.
(Only 2D I’m afraid and no explicit time dependence.)
Say ẋ = −3y and ẏ = 3x.
Then click ’Graph Phase Plane’ to get the direction field and click on the actual screen
to get a trajectory. Each click is a whole trajectory, so the program integrates forwards
in time and backwards. The arrows show the direction of time. You can also see the
solution in full 3D (x, y, t)!

1.5.2 Phase Portraits for Real Eigenvalues and Direction Fields.

If the eigenvalues of A are real and distinct then the solution to ẏ = Ay is in the form

y = c1 x^(1) e^{λ1 t} + c2 x^(2) e^{λ2 t}
In this case there are always two straight lines in the phase space on which it is easy
to find the direction of the flow.
Let's take an example.
If

A = [[−2, 0], [1, −1]]

then, after finding the eigenvalues and vectors of A, we can write down the general solution, which is

y = c1 [1, −1]^T e^{−2t} + c2 [0, 1]^T e^{−t}

Now if either c1 or c2 are zero the trajectory is a straight line because then y1 is just a
multiple of y2 .

Say c2 = 0, then

Also since y1 = c1 e^{−2t}, solutions move into the origin.

Say c1 = 0, then

Exponential decay again, so solutions move into the origin.



Note that we now have 5 different trajectories.


(1) the origin (2)
(3) (4)
(5)

[Figure: the five trajectories in the (y1, y2) plane.]
In between, trajectories move into the origin, but not along straight lines. In fact, provided the eigenvalues are real, negative and distinct, the trajectories approach the origin tangent to the straight line corresponding to the eigenvector of the eigenvalue with least magnitude.

Any system with two negative real and different eigenvalues gives similar
results and is called a - STABLE (improper) NODE.
Of course the actual straight lines are different for each case, as are the eigenvectors. (See if you can show that for an eigenvector [a, b]^T the slope of the straight line is b/a.)

Have a look at mathsims the General Linear Model for another example. Click on the
stable node tab below the simulations, then click in the phase plane window to see the
solutions for different initial conditions.

The fact that the trajectories approach the origin tangent to the straight line correspond-
ing to the eigenvector of the eigenvalue with least magnitude, is messy to show in general!
But the result is easy to show for a system where the eigenvectors are parallel to the axes.

        
[ẏ1, ẏ2]^T = [[−2, 0], [0, −5]] [y1, y2]^T,  which has solution  y = c1 [1, 0]^T e^{−2t} + c2 [0, 1]^T e^{−5t}.

Now c2 = 0 =⇒

Also c1 = 0 =⇒

Away from these solution curves, for y1 ≠ 0 and y2 ≠ 0, consider the individual ODE's for y1 and y2.

ẏ1 = −2y1  and  ẏ2 = −5y2.

Since time is not explicitly present

which becomes an equation for y2 as a function of y1 .

Here we have
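The blank above is for your working; as a numerical cross-check (a sketch, assuming that eliminating t gives phase curves of the form y2 = C y1^{5/2} for y1 > 0): the quantity y2 / y1^{5/2} should be constant along any trajectory of this system.

```python
import math

def trajectory(c1, c2, t):
    # Solution of y1' = -2 y1, y2' = -5 y2 from the example above.
    return c1 * math.exp(-2 * t), c2 * math.exp(-5 * t)

# Sample one trajectory (with y1 > 0) at several times.
c1, c2 = 2.0, 3.0
points = [trajectory(c1, c2, t) for t in (0.0, 0.4, 1.0)]
invariants = [y2 / y1 ** 2.5 for y1, y2 in points]
```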

Now because 5/2 > 1, these curves for C ≠ 0 have zero slope at the origin. So they approach the origin tangent to the y1 axis.

[Figure: trajectories approaching the origin tangent to the y1 axis.]

What if both eigenvalues are positive? Actually the situation isn't much different; after all, the two minuses cancelled when we solved for y2 as a function of y1. Let's take an example though.
    
[ẏ1, ẏ2]^T = [[1, 3], [0, 2]] [y1, y2]^T

which has solution


   
y = c1 [1, 0]^T e^{t} + c2 [3, 1]^T e^{2t}.

[Figure: phase portrait of the unstable node in the (y1, y2) plane.]

Now the straight line solutions are



In between the curves are tangent to the straight line corresponding to the eigenvector of
the eigenvalue with least magnitude.

Any system with two positive real and different eigenvalues gives similar re-
sults and is called an -UNSTABLE (improper) NODE.

Direction Fields and Nullclines.

Considered as a vector equation: dy/dt = f(y) with y = [y1, y2]^T.

f(y) evaluated at (y1, y2) is the velocity vector at (y1, y2).

The direction of the velocity vector gives the direction of the flow and the length of the
velocity vector is the speed. The Direction Field is the field of these vectors: f (y).
(’Graph Phase Plane’ in pplane gives you a direction field.)

The great thing about the direction field is that it can be calculated without ever solving
the ODE. This is true for a nonlinear system as well as a linear one.

      
So for the system

  [ẏ1]   [1 3] [y1]              [y1 + 3y2]
  [ẏ2] = [0 2] [y2]   ⇒  f(y) = [   2y2  ]

At say (y1, y2) = (0, 2)

or (y1, y2) = (2, 0)

or (y1, y2) = (−2, 1)

[Figure: direction field for this system, with arrows drawn at these points.]
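The arrows above can be computed without solving anything. As a sketch (in Python rather than pplane; the function name `f` is just the direction field of this example):

```python
# Direction field f(y) = (y1 + 3*y2, 2*y2) for the system above.
# Evaluating f at a point gives the velocity vector drawn at that point.
def f(y1, y2):
    return (y1 + 3 * y2, 2 * y2)

for point in [(0, 2), (2, 0), (-2, 1)]:
    print(point, "->", f(*point))
# (0, 2) -> (6, 4); (2, 0) -> (2, 0); (-2, 1) -> (1, 2)
```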

The Slope of a Trajectory. Often the most useful aspect of the vector is its slope, given by

  dy2/dy1 = f2(y1, y2) / f1(y1, y2).
The slope along y2 = 0 is

or along y1 = 0

or along y2 = y1

or along y2 = −(1/3) y1

[Figure: direction field with arrows drawn along these lines.]
Nullclines
A Nullcline is a line or curve where the slope of the trajectory is 0 or ∞. (The package
pplane will plot the nullclines for you. Go to Solution and then Show Nullclines in the
phase plane window.)
The Nullclines for the example above are

What if the eigenvalues have opposite sign? Here again it is useful to be able to solve for y2 as a function of y1, so I will take a simple example where the eigenvectors lie on the axes.
        
  [ẏ1]   [2  0] [y1]
  [ẏ2] = [0 −4] [y2]   ⇒   y = c1 (1, 0)ᵀ e^(2t) + c2 (0, 1)ᵀ e^(−4t).

The straight line solutions lie on the axes:

c2 = 0 ⇒

c1 = 0 ⇒

[Figure: phase portrait of the saddle; the straight line solutions lie on the axes.]

On the y1 axis we have growth, so the arrow goes away from the origin, while on the y2 axis we have decay, so the arrow comes into the origin.
Also when we solve for y2 as a function of y1 ;

Once again this is typical of the case where the eigenvalues have opposite sign.
Suppose the solution was
   
  y = c1 (1, 3)ᵀ e^t + c2 (1, −1)ᵀ e^(−3t).
A sketch of the trajectories gives the following.
[Figure: sketch of the saddle trajectories.]

Any system with one positive and one negative eigenvalue gives similar results and is called a SADDLE.

SUMMARY for REAL and DISTINCT Eigenvalues

1. Straight line solutions.


If the eigenvalues of A are real and distinct then the solution to ẏ = Ay is in the
form y = c1 x(1) eλ1 t + c2 x(2) eλ2 t

AND there are two straight lines in the phase portrait associated with the solutions for c1 = 0 and c2 = 0.

[Figure: phase portrait showing the two straight lines, labelled C1=0 and C2=0.]

2. Node or saddle

If the eigenvalues of A are both negative
the origin is said to be a stable (improper) NODE.

[Figure: stable node.]

If the eigenvalues of A are both positive
the origin is said to be an unstable (improper) NODE.

[Figure: unstable node.]

If the eigenvalues of A are of opposite sign
the origin is said to be a SADDLE.

[Figure: saddle.]

Have a look at the General Linear Model in mathsims for more examples.
3. Direction Field
Since f(y) evaluated at (y1, y2) is the velocity vector at (y1, y2), the slope of the curves in phase space is given by

  dy2/dy1 = ẏ2/ẏ1 = (a21 y1 + a22 y2) / (a11 y1 + a12 y2)   (from the chain rule).

[Figure: example direction field with slope dy2/dy1 = 4y2/(2y1 + 3y2). On y2 = −2y1/3 the slope of the trajectory is infinite; on y1 = 0 the slope is 4/3.]

In particular the lines where the phase curves are horizontal and where they are vertical are called the Nullclines.
4. The solutions to the equation

  dy2/dy1 = (a21 y1 + a22 y2) / (a11 y1 + a12 y2)

are the phase space curves.

If the eigenvalues are equal there may still be two linearly independent eigenvectors.

For instance if A =
  [1 0]
  [0 1]

So the General Solution is

  y = (c1 (1, 0)ᵀ + c2 (0, 1)ᵀ) e^t.   ⟹

[Figure: star-shaped phase portrait of the proper node.]

This is called a PROPER NODE and it is unstable if λ > 0 and stable if λ < 0.

Alternatively there may be only one eigenvector.


 
For instance for A =
  [ 0 1]
  [−1 2]

Then there is only one straight line, y2/y1 = 1, in the phase plane.

[Figure: phase portrait with the single straight line y2 = y1.]

To get some idea as to how the other trajectories come into the origin consider

  dy2/dy1 = (−y1 + 2y2) / y2

which is the slope of the trajectory at the point (y1, y2).

Along y2 = 12 y1

but along y2 = 0

The trajectories bend around, almost spiraling, but not quite.


This is called a DEGENERATE or INFLECTED NODE and it is unstable if λ > 0 and stable if λ < 0.

1.5.3 Phase Portraits for Complex Eigenvalues

If the eigenvalues are pure imaginary you can prove that the trajectories are ellipses.
Take the following example
    
  [ẏ1]   [ 0 1] [y1]
  [ẏ2] = [−2 0] [y2]

  dy2/dy1 = ẏ2/ẏ1 =

[Figure: elliptical trajectories of the center.]

Determining the direction of flow.


Go back to the equation of motion for y1 ; and consider the sign of y˙1 on the upper part
of the y2 axis.

Any system with pure imaginary eigenvalues is called a CENTER and has elliptical
trajectories, but finding the actual elliptic orbit may be tricky (until you know about
diagonalization).
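For this particular center you can check the ellipses directly: from dy2/dy1 = −2y1/y2 the quantity 2y1² + y2² is constant along trajectories. A numerical sketch (classical RK4 stepping, helper names are mine):

```python
# For y1' = y2, y2' = -2*y1 the chain rule gives dy2/dy1 = -2*y1/y2,
# so 2*y1**2 + y2**2 is constant: trajectories are ellipses.
def f(y):
    return (y[1], -2 * y[0])

def rk4_step(y, h):
    def shift(a, b, c):                      # a + c*b, componentwise
        return (a[0] + c * b[0], a[1] + c * b[1])
    k1 = f(y)
    k2 = f(shift(y, k1, h / 2))
    k3 = f(shift(y, k2, h / 2))
    k4 = f(shift(y, k3, h))
    return (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

y = (1.0, 0.0)
E0 = 2 * y[0] ** 2 + y[1] ** 2
for _ in range(2000):                        # integrate to t = 20
    y = rk4_step(y, 0.01)
E = 2 * y[0] ** 2 + y[1] ** 2
print(E0, E)                                 # the two values agree closely
```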

If the eigenvalues are complex with nonzero real part then the trajectories spiral
in (if the real part is negative) or out (if the real part is positive).
To prove this in general requires diagonalization, but it is easily seen in the following case.
    
  [ẏ1]   [2 −1] [y1]
  [ẏ2] = [1  2] [y2]

which has complex solutions (1, −i)ᵀ e^((2+i)t) and its complex conjugate.

So the real solutions are

For each solution y12 + y22 = e4t


In polar coordinates

(y1 = r cos θ, y2 = r sin θ)

So solutions spiral out. From a topological point of view there are two possibilities:

Determining the direction of flow.


Go back to the equation of motion for y1 ; and consider the sign of y˙1 on the upper part
of the y2 axis.

For a more exact picture of the flow you can use the slope of the trajectories as a guide.

  dy2/dy1 = (y1 + 2y2) / (2y1 − y2)

[Figure: trajectories spiralling out from the origin.]
Any system with complex eigenvalues is called a SPIRAL.
If the real part of the eigenvalues is positive solutions spiral out from the origin and the
critical point at the origin is UNSTABLE.
If the real part of the eigenvalues is negative solutions spiral in to the origin and the
critical point at the origin is STABLE.

1.5.4 SUMMARY Of 6 Types of LINEAR PHASE PORTRAITS in 2D

The type depends on the eigenvalues and eigenvectors of A . (For more examples go to
the mathsims website and click on the General Linear Model. To see the full picture
with many sample trajectories, click on other initial conditions in the phase plane.)
1 SADDLE One positive eigenvalue and one negative eigenvalue.

[Figure: saddle.]

2 IMPROPER NODE Two different positive eigenvalues UNSTABLE or two different negative eigenvalues STABLE.

[Figure: improper node.]

3 PROPER NODE Equal eigenvalues and two linearly independent eigenvectors.

[Figure: proper node.]

4 DEGENERATE or INFLECTED NODE Equal eigenvalues, but only one corresponding eigenvector.

[Figure: degenerate node.]

5 SPIRAL or FOCUS Two complex eigenvalues with either negative real part STABLE or positive real part UNSTABLE.

[Figure: spiral.]

6 CENTER Two pure imaginary eigenvalues.

[Figure: center.]

1.6 Critical Points and Stability

1.6.1 Critical Points

In all the systems we have looked at so far (linear, constant coefficient, homogeneous 2-d
systems) the origin has been one trajectory all on its own, because if you start at the
origin y = 0 you stay there.

If ẏ = Ay then y = 0 =⇒ ẏ = 0.

It is the so called trivial solution we mentioned before.

In fact any point in Phase Space where ẏ = 0 must be stationary and it is often called a
stationary, equilibrium or critical point.

DEFN A 2-dimensional system

  [ẏ1]   [f1(y1, y2)]
  [ẏ2] = [f2(y1, y2)]

has a critical (or stationary) point at (y1∗, y2∗)

Nonlinear systems may have more than one critical point.


Take the example y˙1 = y1 − y22 , y˙2 = y2 − y1 y2 .

But Linear systems of the form ẏ = Ay always have one critical point at the origin and
unless det A = 0 this is the only one.

1.6.2 Stability of Critical points.

Trajectories starting close to the origin may move away, hang around or tend towards the critical point itself. Think of a nonlinear pendulum. There is a critical point where the pendulum hangs vertically down, which we might call stable because if we disturb it a little the pendulum doesn't move very far. But there is another critical point, where the pendulum is vertically above, which is unstable because any slight displacement results in the pendulum falling away.
Roughly speaking

if every trajectory that starts close to the critical point at 0

A critical point that is not stable is called UNSTABLE.


For instance a stable node or a stable focus:

[Figure: two phase portraits; nearby solutions tend to the critical point at the origin.]

A center is also stable.

But a saddle or an unstable focus or node are

[Figure: three phase portraits — a saddle, an unstable focus and an unstable node.]

One way to classify the types of critical points is via their stability properties.

Stability Criteria for Critical points

It is the eigenvalues of A
that determine the stability properties of the critical point at 0.

  det(A − λI) = | a11 − λ    a12    |
                |   a21    a22 − λ |  =

But the eigenvalues really only depend on traceA = a11 + a22 and detA.

λ± =

So if detA < 0 the eigenvalues are real and

If detA > 0 and (traceA)2 − 4(detA) > 0 the eigenvalues are real and

If detA > 0 and (traceA)2 − 4(detA) < 0 the eigenvalues



If detA > 0 and (traceA)2 − 4(detA) = 0 the eigenvalues

If detA > 0 and (traceA) = 0 the eigenvalue

Finally if detA = 0

Stability Chart

From the stability Chart you can see that

if detA ≥ 0 and (traceA) ≤ 0 then the critical point is STABLE.

But if detA < 0 or if (detA ≥ 0 and (traceA) > 0) then the critical point is UNSTABLE.
The Stability Chart is given in the General Linear Model in mathsims. Try dragging one
of the parameters, say a, and watch the phase portrait change as you cross between the
different types.
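The rules above mechanize nicely. A rough sketch (the function name `classify` and the exact return strings are mine; borderline cases with equal eigenvalues are lumped together):

```python
# Classify the critical point of ydot = A y from traceA and detA,
# following the stability chart: det < 0 gives a saddle; det > 0 with
# trace = 0 a center; otherwise the sign of trace**2 - 4*det separates
# nodes from spirals, and the sign of trace gives stability.
def classify(a11, a12, a21, a22):
    det = a11 * a22 - a12 * a21
    tr = a11 + a22
    disc = tr ** 2 - 4 * det
    if det < 0:
        return "saddle"
    if det == 0:
        return "one zero eigenvalue"
    if tr == 0:
        return "center"
    kind = "node" if disc > 0 else ("spiral" if disc < 0 else "proper/inflected node")
    return ("stable " if tr < 0 else "unstable ") + kind

print(classify(2, -1, 3, -2))   # detA = -1 < 0
print(classify(2, 3, 2, 5))     # detA = 4 > 0, traceA = 7 > 0
print(classify(-2, 1, -6, 2))   # detA = 2 > 0, traceA = 0
```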

Take some examples. If

  [ẏ1]   [2 −1] [y1]
  [ẏ2] = [3 −2] [y2]

If

  [ẏ1]   [2 3] [y1]
  [ẏ2] = [2 5] [y2]

If

  [ẏ1]   [−2 1] [y1]
  [ẏ2] = [−6 2] [y2]

1.7 Nonhomogeneous Linear Systems

Nonhomogeneous linear systems, of the form

  ẏ = Ay + g,  where g is a constant vector,

are simply a linear translation away from their homogeneous counterpart. The critical point y∗ of the system is given by (linear systems have exactly one critical point unless detA = 0):

It follows that if we set

which we can solve as before. The Linear Model of the Economy in mathsims is a nonhomogeneous linear system provided G ≠ 0. Try setting G = 0 and then increasing it away from zero to see the critical point move away from the origin.

1.8 Nonlinear Systems

A 2-dimensional nonlinear system

  [ẏ1]   [f1(y1, y2)]
  [ẏ2] = [f2(y1, y2)]

is one where the fi are nonlinear functions of the yi.

1.8.1 Solving for the Phase Curves

In some cases you can still solve for the phase curves.

Take the nonlinear pendulum θ̈ = −(g/l) sin(θ).
Letting y1 = θ and y2 = θ̇ gives

  [ẏ1]   [      y2       ]
  [ẏ2] = [−(g/l) sin(y1)]

Using the chain rule gives

Integrating gives

Solving for y2

The curves y2 = ±√(2C + (2g/l) cos(y1)) are hard to sketch, but easy to draw on a computer.

Suppose g/l = 1/2

[Figure: Phase Curves for the Pendulum.]
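Equivalently, the quantity C = y2²/2 − (g/l) cos(y1) is constant on each phase curve. A numerical sketch with g/l = 1/2 (RK4 stepping; helper names are mine):

```python
import math

# Pendulum with g/l = 1/2: y1' = y2, y2' = -0.5*sin(y1).
# The phase-curve relation y2**2/2 - 0.5*cos(y1) = C should be
# conserved along any trajectory.
def f(y):
    return (y[1], -0.5 * math.sin(y[0]))

def rk4_step(y, h):
    k1 = f(y)
    k2 = f((y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
    k3 = f((y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
    k4 = f((y[0] + h*k3[0], y[1] + h*k3[1]))
    return (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def C(y):
    return y[1]**2 / 2 - 0.5 * math.cos(y[0])

y = (1.0, 0.0)
c0 = C(y)
for _ in range(1000):          # integrate to t = 10
    y = rk4_step(y, 0.01)
print(c0, C(y))                # conserved along the trajectory
```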

Note that it looks as if there are critical points at y1 = nπ, y2 = 0.

Near to the critical points at y1 = 2mπ, y2 = 0 the trajectories look like those of a center.
Near to the critical points at y1 = (2m − 1)π, y2 = 0 the trajectories look like those of a saddle.

Another system for which you can solve for the phase curves
is Lotka- Volterra’s Predator Prey Population Model.

  dr(t)/dt = a r(t) − b r(t) f(t)

  df(t)/dt = k r(t) f(t) − l f(t)

Taking a = 2, b = 1, k = 1 and l = 1 gives



[Figure: phase curves for Lotka−Volterra's Predator Prey Model; horizontal axis r (prey), vertical axis f (predator).]

Once again we can recognize some local types of behavior: for instance there appears to be a critical point that looks like a center in the 'middle', and the critical point at the origin looks like a saddle.

(This is a special case of more general predator prey systems. For the more general system
see Predator-Prey in mathsims. Set c = 0 and h = 0 for Lotka- Volterra’s model.)
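For these parameter values the phase curves are level sets of V(r, f) = r − ln r + f − 2 ln f (differentiate V along the flow and the terms cancel). A numerical sketch checking this, with hypothetical initial data (1.5, 1.0):

```python
import math

# Lotka-Volterra with a=2, b=1, k=1, l=1:
#   r' = 2r - r*f,  f' = r*f - f.
# V = r - ln(r) + f - 2*ln(f) should be constant on phase curves.
def rhs(y):
    r, f = y
    return (2*r - r*f, r*f - f)

def rk4_step(y, h):
    k1 = rhs(y)
    k2 = rhs((y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
    k3 = rhs((y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
    k4 = rhs((y[0] + h*k3[0], y[1] + h*k3[1]))
    return (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def V(y):
    r, f = y
    return r - math.log(r) + f - 2 * math.log(f)

y = (1.5, 1.0)
v0 = V(y)
for _ in range(1000):          # integrate to t = 10
    y = rk4_step(y, 0.01)
print(v0, V(y))                # constant along the orbit
```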

1.8.2 Critical Points for Nonlinear Systems.

As we defined before a critical point or stationary point y∗ of a nonlinear system

ẏ = f (y) is where ẏ = 0 =⇒ f (y∗ ) = 0.

This generally results in (non-linear) equations to solve for the critical points. Take the
nonlinear pendulum again, with gl = 1, then
   
  [ẏ1]   [    y2   ]
  [ẏ2] = [−sin(y1)] ,

critical points occur where

Similarly we may consider the Lotka-Volterra, or Predator Prey, Population Model looked at before:

  dr(t)/dt = 2r − rf

  df(t)/dt = rf − f
Critical points must satisfy 2r − rf = 0 AND rf − f = 0.

1.8.3 Linearization of Nonlinear Systems.

A general method for finding the linearized system about any critical point of a nonlinear
system is provided by Taylor series - which determines the nature of the system locally
about any critical point.
   
Suppose (y1∗, y2∗) is a critical point of

  [ẏ1]   [f1(y1, y2)]
  [ẏ2] = [f2(y1, y2)]

then f1(y1∗, y2∗) = 0 and f2(y1∗, y2∗) = 0,

The Linearized system near to (y1, y2) = (y1∗, y2∗) is

  [ẏ1]   [∂f1/∂y1  ∂f1/∂y2] [ȳ1]
  [ẏ2] = [∂f2/∂y1  ∂f2/∂y2] [ȳ2]

where all the partial derivatives are evaluated at the critical point (y1∗, y2∗) and ȳ1 = y1 − y1∗, ȳ2 = y2 − y2∗.

Note that ȳ˙1 = , ȳ˙2 = , so this becomes

The Linearized system is ȳ˙ = (∂f/∂y) ȳ
where the Linearized matrix is just the Jacobian ∂f/∂y evaluated at the critical point.

  Df = ∂f/∂y = [∂f1/∂y1  ∂f1/∂y2]
               [∂f2/∂y1  ∂f2/∂y2]   evaluated at (y1, y2) = (y1∗, y2∗)

Consider, for example, the Nonlinear Pendulum


   
  [ẏ1]   [    y2   ]
  [ẏ2] = [−sin(y1)]

where g/l has been taken as 1. Recall y1 = θ and y2 = θ̇.
The linearized matrix is

For the critical point at (π, 0), this reduces to

In terms of the variables ȳ1 = y1 − π and ȳ2 = y2 , the linearized system is
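Here the exact Jacobian is Df = [[0, 1], [−cos(y1), 0]], so Df(0, 0) = [[0, 1], [−1, 0]] and Df(π, 0) = [[0, 1], [1, 0]]. As a cross-check, the Jacobian can also be approximated numerically by central differences (a sketch; the helper `jacobian` and the step size are mine):

```python
import math

# Finite-difference Jacobian of f(y) = (y2, -sin(y1)), evaluated at the
# pendulum's critical points. Exact answer: [[0, 1], [-cos(y1), 0]].
def f(y1, y2):
    return (y2, -math.sin(y1))

def jacobian(y1, y2, h=1e-6):
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j, (d1, d2) in enumerate([(h, 0.0), (0.0, h)]):
        up = f(y1 + d1, y2 + d2)
        dn = f(y1 - d1, y2 - d2)
        for i in range(2):
            J[i][j] = (up[i] - dn[i]) / (2 * h)   # central difference
    return J

print(jacobian(0.0, 0.0))        # close to [[0, 1], [-1, 0]]: a center
print(jacobian(math.pi, 0.0))    # close to [[0, 1], [ 1, 0]]: a saddle
```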

[Figure: Phase Curves for the Pendulum.]

Similarly we may consider the Lotka-Volterra, or Predator Prey, Population Model looked at before:

  dr(t)/dt = 2r − rf

  df(t)/dt = rf − f

which has Critical points (r, f ) = (0, 0), (1, 2).

For the case (r, f ) = (0, 0) the linearization matrix is

Now the Linearized System provides an approximate system for r and f small. So the
stability properties of the critical point at the origin can be determined either from the
eigenvalues of the linearized matrix

or by calculating the determinant of the linearized matrix

Now consider the critical point (r, f ) = (1, 2). In this case the linearization matrix is

Note that in terms of the new variables r̄ and f¯ such that r̄ = r − 1 and f¯ = f − 2 the
Linearized System is

The linearized matrix in this case gives a CENTER, because detA = 2 > 0 and traceA =
0. In general if traceA = 0 the nonlinear terms may make solutions slowly spiral in or out.
But here the nonlinear system also has a center, as we know from the integral curves.

Finally consider the Damped Nonlinear Pendulum with g/l = 1 and damping constant c, for which we DON'T have integral curves.

Usually you can’t solve for integral curves, but you can solve for the critical points.

Let y1 = θ and y2 = θ̇.


   
  [ẏ1]   [      y2       ]
  [ẏ2] = [−sin(y1) − cy2]
As for the undamped case there are critical points at (nπ, 0).

To investigate the stability of the critical points, simply work out the Jacobian ∂f/∂y, evaluate it at the critical point, and then the Linearized system is ȳ˙ = (∂f/∂y) ȳ.

  Df = [∂f1/∂y1  ∂f1/∂y2]
       [∂f2/∂y1  ∂f2/∂y2]  =

Df (0, 0) =

Df (π, 0) =

Now detDf ≠ 0 and traceDf ≠ 0 in both cases, so the nonlinear system will also have a stable spiral or node at (2nπ, 0) and a saddle at ((2n − 1)π, 0).
But this only gives information that is local to the critical points.
[Figure: phase portrait of the damped pendulum.]

However one can sort of imagine how the trajectories might join up, and in fact here you can use energy considerations to prove that the resulting picture is qualitatively correct.

Note:

Most of the time the linearized system gives a local approximation for the flow of the nonlinear system which is also topologically equivalent to the nonlinear flow in some neighborhood of the critical point. But there are two special cases where the linearized system may give flows that are not qualitatively similar to the nonlinear flow even locally. These are when detA = 0, or when traceA = 0 and detA > 0.
In this next example the linearized system gives a center (traceA = 0 and detA > 0 ) but
the full nonlinear system does something else!

  [ẏ1]   [ 0 1] [y1]   [y1(y1² + y2²)]
  [ẏ2] = [−1 0] [y2] + [y2(y1² + y2²)]

The linearized system is a center, actually the phase curves are circles. But taking polar
coordinates y1 = r cos(θ), y2 = r sin(θ) and calculating the rate of change of r from
r2 = y12 + y22 .

So the radial component, instead of remaining constant as predicted by the linear system,
increases. Solutions slowly spiral out!
[Figure: trajectories of the nonlinear system slowly spiralling out.]

Have a look at more general nonlinear terms in Linear Models Versus Nonlinear Models in mathsims. The starting parameters give a center in the linear system, but the nonlinear system is usually a spiral. Try different values of e, f, g and h, the coefficients of the nonlinear terms, to see if you can get oscillatory motion as in a center.

The Competing Species Model


Suppose x and y are the populations of two species, which compete for food. Then without
y, x would grow exponentially ẋ = r1 x (assuming overcrowding isn’t a problem). Similarly
without x, y would grow exponentially ẏ = r2 y. But if both compete for the same food
supply their growth rates will decrease by an amount proportional to the product of x
and y.

ẋ =, ẏ =

Suppose r1 = 1, r2 = 4, s1 = 1 and s2 = 1 then we have

Critical points are given by

Now the linearized matrix is

At (0, 0)

To sketch a node you need to know the eigenvalues and vectors. In fact since the matrix
is diagonal the eigenvalues are just the diagonal elements and the eigenvectors are parallel
to the axes.
At (4, 1)

Here detA 6= 0 and traceA 6= 0 so the linearized systems truly reflect the local behavior
of the nonlinear system. Also you can prove that there are no limit cycles or other
complicated behavior and so you can put the whole picture together.

[Figure: phase portrait for the competing species model.]

For more examples see Competing Species in mathsims.



1.8.4 Summary for Nonlinear Systems

1. If you can solve for the phase curves from dy2/dy1 = f2(y1, y2)/f1(y1, y2), do so and sketch the phase curves, either as y2(y1) or as contours of some function of (y1, y2). Then add the direction of flow.
If you can't solve for the phase curves:
2. Find the Critical Points, investigate their Type and Stability and Sketch them on the Phase Portrait.
If you can’t solve for the phase curves:
2. Find the Critical Points, investigate their Type and Stability and Sketch them
on the Phase Portrait.

a) Critical Points: solve for (y1∗, y2∗) from f1(y1∗, y2∗) = 0 AND f2(y1∗, y2∗) = 0.
b) Type and Stability

Work out the general Linearized Matrix (Jacobian)

  Df = ∂f/∂y = [∂f1/∂y1  ∂f1/∂y2]
               [∂f2/∂y1  ∂f2/∂y2]

Evaluate Df(y1∗, y2∗) at a critical point; then the Linearized system at that critical point is

  ȳ˙ = (∂f/∂y)(y1∗, y2∗) ȳ = A ȳ,  where A is now some constant matrix.
c) To classify the critical point, calculate detA and traceA and recall the stability chart.
[Stability Chart: horizontal axis trace(A), vertical axis det(A). Below det(A) = 0: saddles, with one zero eigenvalue on det(A) = 0 itself. Above det(A) = 0 and below the parabola trace(A)² − 4 det(A) = 0: stable nodes (trace(A) < 0) and unstable nodes (trace(A) > 0). On the parabola: proper or inflected nodes. Above the parabola: stable foci and unstable foci, separated by centers on trace(A) = 0.]

d) To Sketch the flow near the Critical points on the phase portrait.
i) First locate the critical point in the (y1 , y2 ) space.
ii) If the critical point is a node or a saddle
Find the eigenvalues and eigenvectors, sketch the straight line trajectories and finally add
some of the other trajectories and the direction of flow.
iii) If the critical point is a spiral find if it is stable or unstable, then find out the direction
of the flow from one of the equations ( say y˙1 = f1 (y1 , y2 )).

1.9 Diagonalization and 2D Phase Portraits

Diagonalization relies on a result from the theory of matrices.


If A, assumed to be an n×n matrix, has a n linearly independent eigenvectors x(1) , x(2) , ..., x(n)
(i)
then the matrix X constructed by taking these eigenvectors as its columns Xi,j = xj ’di-
agonalizes A’:

X −1 AX = D where D is a diagonal matrix.


Now a diagonal matrix is one which is zero everywhere except on the diagonal. In fact
here the entries on the diagonal are simply the eigenvalues of A.
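A concrete check, using the earlier example A = [[1, 3], [0, 2]] with eigenvectors (1, 0)ᵀ and (3, 1)ᵀ as the columns of X (plain 2×2 arithmetic; helper names are mine):

```python
# Verify X^{-1} A X = D for a 2x2 example with eigenvalues 1 and 2.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # determinant
    return [[A[1][1] / d, -A[0][1] / d],
            [-A[1][0] / d, A[0][0] / d]]

A = [[1, 3], [0, 2]]
X = [[1, 3], [0, 1]]            # eigenvectors as columns
D = matmul(inv2(X), matmul(A, X))
print(D)                        # [[1.0, 0.0], [0.0, 2.0]]
```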

1.9.1 Relevance to 2D Phase Portraits.

ANY system with two real distinct eigenvalues is a linear transformation away from

  [ż1]   [λ1  0] [z1]
  [ż2] = [ 0 λ2] [z2]

Since the eigenvectors of a system such as this, in Normal Form, are parallel to the axes,
the axes are trajectories.
Also the phase curves are easy to solve for because

  dz2/dz1 =

[Figure: three normal-form phase portraits — |λ2| > |λ1| > 0, |λ1| > |λ2| > 0, and λ1 > 0 > λ2 or λ2 > 0 > λ1.]

The system in the yi variables is a linear transformation, y = Xz, away from Normal Form.
Now a linear transformation can shear, rotate and enlarge or reduce, so in a general system with two real distinct eigenvalues the phase curves will be sheared, rotated and/or enlarged or reduced versions of these curves.

[Figure: the system in Normal Form (left) and after transforming back (right).]

Complex Eigenvalues
Similar results hold when the eigenvalues are complex. You can show that ANY system with complex eigenvalues is a linear transformation away from one whose phase curves are ellipses or logarithmic spirals. But in a general system with complex eigenvalues the logarithmic spiral may be sheared, rotated and enlarged or reduced.

[Figure: the spiral in Normal Form (left) and after transforming back (right).]

2 Laplace Transforms

Differential Equation for y(t)   ⟶   Algebraic Equation for Y(s)

Solution to D.E.   ⟵   Solution to Algebraic Equation

The Laplace Transform is an integral transform, so before we can solve any equations we
will have to build up some knowledge of Laplace Transforms. For instance, what is the
Laplace Transform of a constant function or an exponential or a power?

2.1 Finding the Laplace Transform of a function

2.1.1 Definition

Given a function f (t) for t ≥ 0 define the Laplace Transform of f (t) as F (s) where

Note that the only place where s appears is in the exponential. The integral is a definite
integral from 0 to ∞.

2.1.2 The Laplace Transform of simple functions.

The Laplace Transform of a constant.

If f (t) = K, where K is a constant,

This assumes that e−st → 0 as t → ∞, which is OK if s > 0 for s real


and if Re(s) > 0 for complex s.

The Laplace Transform of a power tn .

If f (t) = t

Now try the general case f (t) = tn , for an integer n > 0

If f (t) = tn

The Laplace Transform of an exponential eat .

If f (t) = eat

We don’t need a to be real here. Suppose a is complex then the transform goes through
for Re(s) > Re(a).
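These closed forms can be sanity-checked numerically by truncating the definition to a finite interval and approximating it with a trapezoidal sum (a sketch; the helper `laplace` and the choices of T and n are mine):

```python
import math

# Approximate F(s) = integral_0^infty e^{-s t} f(t) dt by a trapezoidal
# sum on [0, T]; for s > 0 the tail beyond T is exponentially small.
def laplace(f, s, T=40.0, n=100000):
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

print(laplace(lambda t: 1.0, 2.0))           # K/s with K=1, s=2: 1/2
print(laplace(lambda t: t**3, 2.0))          # n!/s^{n+1} = 6/16 = 0.375
print(laplace(lambda t: math.exp(-t), 2.0))  # 1/(s-a) with a=-1: 1/3
```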

LINEARITY

Suppose f (t) = ag(t) + bh(t) then

so that F (s) =
Take the following examples.

If f(t) = 3t⁴ + 5e^(4t) − 2 ⟹

Going backwards:

If F(s) = 5/s² − 4/(s + 3) ⟹

The Laplace Transform of sine and cosine.


Use Euler’s formula: eiαt = cos(αt) + i sin(αt)

If f (t) = eiαt

For example
L(cos(3t)) =

L(sin(2t)) =

L(cos²(t)) =

2.1.3 The Laplace Transform for Piecewise Continuous functions.

Firstly a Piecewise Continuous function is made up of a finite number of continuous pieces


on each finite subinterval [0, T ]. Also the limit of f (t) as t tends to each point of continuity
is finite.
An example is the unit step function.

 
  u(t) = { 0,  t < 0
        { 1,  0 ≤ t < ∞

This is sometimes denoted H(t) or Θ(t).


Now if the step is at t = k rather than zero the result is simply a shift in time.

u(t − k) =

The Laplace Transform of the unit step function.

L(u(t − k)) =

Suppose the step is down:


 
  f(t) = { 1,  t < k
        { 0,  t ≥ k

Or up and down:

  f(t) = { 0,  0 ≤ t < a
        { 1,  a ≤ t ≤ b
        { 0,  t ≥ b

A slightly more complicated example, which can still be written in terms of step functions:

  f(t) = { 1,  0 ≤ t < 3
        { 2,  3 ≤ t < 4
        { 0,  t ≥ 4

2.1.4 The First Shifting Theorem.

If the Laplace transform of f (t) is known, then you can work out L(eat f (t)) without
integrating.

  L(e^(at) f(t)) = ∫₀^∞ e^(−st) e^(at) f(t) dt =

So since L(tⁿ) = n!/s^(n+1) ⟹

  L(tⁿ e^(at)) =

for n = 0, 1, 2, ... and (s − a) > 0.

Also

L(cos(αt)eat )) =

And

L(sin(αt)eat )) =

You can also use the Shifting Theorem on Piecewise Continuous functions.

  L(u(t − k)) = e^(−ks)/s   ⟹   L(e^(at) u(t − k)) =

2.1.5 Summary of Laplace Transforms

  f(t)                                              F(s)
  K                                                 K/s
  tⁿ                                                n!/s^(n+1)
  e^(at)                                            1/(s − a)
  LINEARITY: ag(t) + bh(t)                          aG(s) + bH(s)
  cos(αt)                                           s/(s² + α²)
  sin(αt)                                           α/(s² + α²)
  u(t − k) = 0 for 0 ≤ t < k, 1 for k ≤ t < ∞       e^(−ks)/s
  First Shifting Theorem: e^(at) f(t)               F(s − a)
  tⁿ e^(at)                                         n!/(s − a)^(n+1)
  e^(at) cos(αt)                                    (s − a)/((s − a)² + α²)
  e^(at) sin(αt)                                    α/((s − a)² + α²)
  e^(at) u(t − k)                                   e^(−k(s−a))/(s − a)

So for instance if

  f(t) = 5e^(3t) cos(2t) + 4t³ e^(−5t)



2.1.6 Inverting Laplace Transforms.

The Inverse of a Power

The inverse of a power such as 1/sⁿ, or a shifted power such as 1/(s − a)ⁿ, can be found by working backwards:

  L⁻¹( 1/(s − a)ⁿ ) =

So for example

  if F(s) = 5/(s + 10)²

The inverse of a function with an Irreducible Quadratic in the denominator involves sines and cosines.

  L⁻¹( (bs + c)/(s² + α²) ) =

So for example

  if F(s) = (5s − 2)/(s² + 4)

  If F(s) = 3/(s − 4)³ + 6/(s² + 9)

Using Partial Fractions to find inverses.

Recall that for fractions

  A/(s − b) + B/(s − c) =

So a function of the form P1(s)/((s − b)(s − c)), where P1(s) is a polynomial of degree 1, can be expressed as

  P1(s)/((s − b)(s − c)) =   for some constants A and B.

So that

  L⁻¹( P1(s)/((s − b)(s − c)) ) =

For example

  if F(s) = 6/(s(s + 2))

2.1.7 The Gamma Function and L(tᵃ), where a is not an integer.

If a is not an integer, but a > 0, then

  L(tᵃ) = ∫₀^∞ e^(−st) tᵃ dt = ∫₀^∞ e^(−x) (xᵃ/sᵃ) (dx/s),  making the substitution x = st,

  L(tᵃ) = (1/s^(a+1)) ∫₀^∞ e^(−x) xᵃ dx = Γ(a + 1)/s^(a+1)  for a ≥ 0.

The Gamma Function was defined by Leonhard Euler as

  Γ(p) = ∫₀^∞ e^(−x) x^(p−1) dx  for p ≥ 1.

[Figure: graph of Γ(p) for 0 < p ≤ 5.]

It generalizes the factorial function and if p = n + 1 is an integer then


  Γ(n + 1) = ∫₀^∞ e^(−x) xⁿ dx = n!

In fact for any p you can use integration by parts to show that

  Γ(p + 1) = ∫₀^∞ e^(−x) xᵖ dx = [−e^(−x) xᵖ]₀^∞ + p ∫₀^∞ e^(−x) x^(p−1) dx = 0 + pΓ(p)

So for an integer
Γ(n + 1) = nΓ(n) = n(n − 1)Γ(n − 1)... = n!
Note however that the integral defining Γ(p) converges only for p > 0: for p ≤ 0 the factor x^(p−1) blows up too fast at x = 0.
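Python's standard library implements Γ directly, which makes the factorial and recurrence properties easy to check:

```python
import math

# math.gamma is Euler's Gamma function. Check Gamma(n+1) = n! and the
# recurrence Gamma(p+1) = p * Gamma(p) at a non-integer point.
print(math.gamma(5))                           # 4! = 24.0
print(math.gamma(2.5), 1.5 * math.gamma(1.5))  # equal by the recurrence
print(math.gamma(1.5) / 2.0 ** 1.5)            # value of L(sqrt(t)) at s = 2
```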

Back to Laplace Transforms.

So L(√t) = Γ(3/2)/s^(3/2), or L(t^3.2) = Γ(4.2)/s^4.2.

When is L(f(t)) well defined?

Basically e^(−st) f(t) must be defined for all t ≥ 0, and f(t) cannot grow faster than an exponential. So for instance L(e^(t²)) is UNdefined.
In general we need

  |f(t)| ≤ M e^(γt) for all t > 0,  for some M > 0 and some γ > 0,

and f(t) should be at least piecewise continuous for t ≥ 0.
Then F(s) is defined for s > γ.

2.2 Laplace Transforms of Derivatives and Solving Simple Linear Constant Coefficient ODEs and Systems of ODE's.

2.2.1 The Laplace Transform of the differential of a function.

Consider the Laplace Transform of the first derivative of a function:

  L(ḟ(t)) = ∫₀^∞ e^(−st) ḟ(t) dt

Use integration by parts:

So L(f˙(t)) =

Note f (0) and not F (0) appears on the right hand side.

Now use this result on f¨(t):



So consider L(f (n) (t)):

L(f (n) (t)) = for n = 1, 2, ...

2.2.2 Solving Linear ODEs

To solve a linear ODE take the Laplace Transform of the whole equation.

Suppose you have

  ẏ(t) + 2y(t) = 2e^(−2t)

The Laplace Transform of the whole equation gives

Now let Y (s) = L(y(t)), then L(ẏ(t)) =



Now to transform back!

Note y(0) plays the role of a constant of integration, but its specific form makes solving
initial value problems very easy.
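Working through the algebra gives Y(s) = y(0)/(s + 2) + 2/(s + 2)², whose inverse (from the tⁿe^(at) table entry) is y(t) = (y(0) + 2t)e^(−2t) — worth checking against your own working. A numerical sketch that substitutes it back into the ODE, taking y(0) = 3 as an illustrative initial value:

```python
import math

# Candidate solution of y' + 2y = 2e^{-2t}: y(t) = (y0 + 2t) e^{-2t}.
# Check the ODE residual using a central-difference derivative.
y0 = 3.0
def y(t):
    return (y0 + 2 * t) * math.exp(-2 * t)

h = 1e-6
for t in [0.0, 0.5, 1.0, 2.0]:
    ydot = (y(t + h) - y(t - h)) / (2 * h)     # central difference
    residual = ydot + 2 * y(t) - 2 * math.exp(-2 * t)
    print(t, residual)                         # close to zero
```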

Suppose you have

  ÿ(t) + 2ẏ(t) + y(t) = e^(−t)

The Laplace Transform of the whole equation gives

Now let Y (s) = L(y(t)), then L(ẏ(t)) = and

L(ÿ(t)) =

Now to transform back!

2.2.3 Forcing Functions and Transfer Functions.

Often a fixed circuit or a system of equations can be forced in different ways. This means
we want to consider the effect of different forcing functions, say

ÿ(t) − 4ẏ(t) + 4y(t) = r(t)

where the function r(t), called the forcing or input function, may vary. Laplace
Transforms gives us a general method for finding the response for a given forcing function.

Let Y (s) = L(y(t)) and take Laplace Transforms of the whole equation.

where R(s) is the Laplace Transform of the forcing or input function.


Solving for Y (s) gives

Now consider the case where y(0) = 0 and ẏ(0) = 0.

⇒ Y (s) =

The function Q(s) = 1/(s² − 4s + 4) is called the Transfer function and is determined by the left hand side of the equation.
In fact for a general second order constant coefficient ODE, aÿ(t) + bẏ(t) + cy(t) = r(t), with y(0) = 0 and ẏ(0) = 0.

Y (s) = Q(s)R(s) or L(output ) = transfer function L( input )

Take for instance ÿ(t) − 4ẏ(t) + 4y(t) = r(t) = 5te^(2t), then

and provided y(0) = 0 and ẏ(0) = 0

Y (s) = Q(s)R(s) =
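Here R(s) = 5/(s − 2)² and Q(s) = 1/(s − 2)², so Y(s) = 5/(s − 2)⁴, whose inverse (from the tⁿe^(at) table entry) works out to y(t) = (5/6)t³e^(2t) — worth checking against your own working. A numerical sketch substituting it back into the ODE:

```python
import math

# Candidate output for y'' - 4y' + 4y = 5 t e^{2t} with zero initial
# conditions: y(t) = (5/6) t**3 e^{2t}. Check the residual numerically
# with central differences.
def y(t):
    return 5 / 6 * t ** 3 * math.exp(2 * t)

h = 1e-4
for t in [0.5, 1.0, 1.5]:
    ydot = (y(t + h) - y(t - h)) / (2 * h)
    yddot = (y(t + h) - 2 * y(t) + y(t - h)) / h ** 2
    residual = yddot - 4 * ydot + 4 * y(t) - 5 * t * math.exp(2 * t)
    print(t, residual)   # small compared with the terms themselves
```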

In the more general case

Y (s) = Q(s) (sy(0) + ẏ(0) − 4y(0)) + Q(s)R(s)



2.2.4 Solving Systems of ODE’s

The method for solving systems is essentially the same, but now there are n equations
to take the Laplace Transform of, n functions Yi (s) to solve for and n Inverse Laplace
transforms to find yi (t), for n = 1, 2, ...

Take the following example:

  [ẏ1]   [3 1] [y1]   [e^(−2t)]
  [ẏ2] = [0 3] [y2] + [e^(−2t)]
In component form this is

  ẏ1 = 3y1 + y2 + e^(−2t)   and   ẏ2 = 3y2 + e^(−2t)

Let Yi (s) = L(yi ) and take the Laplace Transform of each equation.

  Y1(s) = (y1(0) + Y2(s))/(s − 3) + 1/((s + 2)(s − 3))

  Y2(s) = y2(0)/(s − 3) + 1/((s − 3)(s + 2))
Now to take the inverse transform we need to use partial fractions.

Taking inverse transforms gives
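The inverse transforms are left to fill in above. For reference, with zero initial conditions the closed forms work out (an assumption to verify) to y2(t) = (e^(3t) − e^(−2t))/5 and y1(t) = te^(3t)/5 + 4(e^(3t) − e^(−2t))/25; a numerical sketch assuming SciPy checks them against direct integration of the system:

```python
# Integrate y1' = 3y1 + y2 + e^{-2t}, y2' = 3y2 + e^{-2t}, y(0) = 0,
# and compare with the candidate closed forms at t = 1.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    y1, y2 = y
    return [3*y1 + y2 + np.exp(-2*t), 3*y2 + np.exp(-2*t)]

sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0], rtol=1e-10, atol=1e-12)
t = 1.0
y2_exact = (np.exp(3*t) - np.exp(-2*t)) / 5
y1_exact = t*np.exp(3*t)/5 + 4*(np.exp(3*t) - np.exp(-2*t))/25
print(abs(sol.y[0, -1] - y1_exact) < 1e-6, abs(sol.y[1, -1] - y2_exact) < 1e-6)
```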



2.3 Finding the inverse Laplace Transform of complicated functions using Partial Fractions.

Partial fractions applies to any function F(s) which is a rational function of s, that is
F(s) = N(s)/D(s) where N(s) and D(s) are polynomials and degree(N(s)) < degree(D(s)).

Then if D(s) factorizes into factors D(s) = D1(s)D2(s)
such that D1(s) and D2(s) have no factors in common,
you can find polynomials N1(s) and N2(s), with degree(Ni(s)) < degree(Di(s)), such that

F(s) = N(s)/D(s) = N1(s)/D1(s) + N2(s)/D2(s)

So say F(s) = 1/((s − 2)(s − 3)); you can find A1 and A2 such that

F(s) = 1/((s − 2)(s − 3)) =

If F(s) = 1/((s − 2)²(s − 3)) you can find A1, B1 and A2 such that

F(s) = 1/((s − 2)²(s − 3)) =

Then the inverse transform is easy to find.

L−1 (F (s)) =

The result is not restricted to linear factors.


If F(s) = 1/((s² + 4)(s − 3)) you can find A1, B1 and A2 such that

F(s) = 1/((s² + 4)(s − 3)) =

Taking the inverse transform.


L−1 (F (s)) =
There are 4 different cases of Partial Fractions to consider

2.3.1 Simple Factors

F(s) = N(s)/((s − a1)(s − a2)(s − a3)(s − a4)...(s − an))

Then we can find Ai such that

F (s) =

L−1 (F (s)) =

Where only simple factors are involved, start by factorizing the denominator. Say

F(s) = (−7s − 1)/((s − 1)(s − 2)(s + 3))
Then

F(s) = (−7s − 1)/((s − 1)(s − 2)(s + 3)) =

The constants Ai can be found by equating coefficients of the powers of s or taking values
for s, say s =

If s = 1
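The substitution of the roots can be automated by the "cover-up" rule, Ai = N(ai)/∏(ai − aj): a small NumPy sketch (the ordering A1 with (s − 1), A2 with (s − 2), A3 with (s + 3) is an assumption for this example) reproduces A1 = 2, A2 = −3, A3 = 1:

```python
# Cover-up rule for F(s) = (-7s - 1)/((s - 1)(s - 2)(s + 3)):
# A_i = N(a_i) / prod over j != i of (a_i - a_j), at each simple root a_i.
import numpy as np

N = lambda s: -7*s - 1
roots = [1.0, 2.0, -3.0]
coeffs = [N(a) / np.prod([a - r for r in roots if r != a]) for a in roots]
print(coeffs)
```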

2.3.2 Repeated simple factors

Where there are repeated factors, but no irreducible quadratic factors (factors with complex roots), there is an extra term for each repeat. The basic rule is

F(s) = N(s)/(s − a)^m = A1/(s − a) + A2/(s − a)² + A3/(s − a)³ + ... + Am/(s − a)^m for constants Ai

More generally, say F(s) = N(s)/((s − a1)³(s − a2)(s − a3)²(s − a4)...(s − an))
Then we can find Ai , Bi and Ci such that

F (s) =

L−1 (F (s)) =

Take the example

F(s) = (15s + 12)/((s + 2)³(s − 1)) =
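With a repeated factor, equating coefficients by hand gets tedious; as a convenience check (assuming SymPy is available), `apart()` produces the same partial fraction decomposition this example asks for:

```python
# Partial fractions for (15s + 12)/((s + 2)^3 (s - 1)) via SymPy.
import sympy as sp

s = sp.symbols('s')
F = (15*s + 12) / ((s + 2)**3 * (s - 1))
pf = sp.apart(F, s)
print(pf)  # a sum of 1/(s - 1) and 1/(s + 2)^k terms
```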

2.3.3 Irreducible Quadratic Factors

If there are factors that are irreducible (that have complex roots), then the numerator
N(s) for that factor is a polynomial of degree 1. Here the basic rule is

N(s)/((s − a)² + α²) = A(s − a)/((s − a)² + α²) + B/((s − a)² + α²)

for constants A, B. The inverse Laplace transform can then be obtained using:

L⁻¹( (s − a)/((s − a)² + α²) ) = e^(at) cos(αt)  and  L⁻¹( α/((s − a)² + α²) ) = e^(at) sin(αt)

Alternatively you can use complex numbers and factorize the denominator:

N(s)/((s − a)² + α²) = N(s)/((s − a + iα)(s − a − iα)) = A/(s − a + iα) + B/(s − a − iα)

More generally, say F(s) = N(s)/(((s − a1)² + b1²)(s − a2)(s − a3)(s − a4)...(s − an))

Then we can find Ai , Bi and Ci such that

F (s) =

L−1 (F (s)) =

Take the example


F(s) = 3(s − 10)/(s² − 4s + 20)
A quadratic factor is irreducible if its roots are complex.

Here the roots of s2 − 4s + 20 are s = .

If the quadratic factor is irreducible it can always be written in the form ((s − a)2 + α2 )
by completing the square.

s2 − 4s + 20 = (s − 2)2 − 4 + 20 =

2.3.4 Repeated irreducible factors and the Inverse Laplace Transform of dF(s)/ds.

Then we need to find out how to take the inverse Laplace Transform of functions like
1/((s − a)² + α²)^m

Trouble. We don't even know the inverse transform of 1/(s² + α²)²!

You can always use complex numbers, and that method works.
But a more interesting method involves the differential of the transformed function.

Suppose you know the transform of f(t), that is F(s) = L(f(t)) = ∫₀^∞ e^(−st) f(t) dt.

Then the differential of F (s) with respect to s is



dF(s)/ds =

So L(−tf(t)) = dF(s)/ds.

So for instance if you know that L(t³) = 6/s⁴, the theorem implies that

−d/ds ( 6/s⁴ ) =

which is true!
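The t³ claim can be tested numerically too; a sketch assuming SciPy evaluates the Laplace integral of t·t³ directly and compares it with −d/ds(6/s⁴) = 24/s⁵:

```python
# Verify L(t f(t)) = -dF/ds for f(t) = t^3, F(s) = 6/s^4:
# L(t * t^3)(s) should equal 24/s^5.
import numpy as np
from scipy.integrate import quad

s0 = 2.0
lhs, _ = quad(lambda t: t * t**3 * np.exp(-s0*t), 0, np.inf)
rhs = 24 / s0**5
print(abs(lhs - rhs) < 1e-8)
```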

Suppose you know that L(cos(t)) = s/(s² + 1);

the theorem implies that

L(−t cos(t)) =

We can use this to find the inverse transform of repeated irreducible quadratic factors.

L(t sin(αt)) =

L(t cos(αt)) =

Here are the results

L⁻¹( s/(s² + α²)² ) = (1/(2α)) t sin(αt)

L⁻¹( s²/(s² + α²)² ) = (1/(2α)) sin(αt) + (t/2) cos(αt)

L⁻¹( α²/(s² + α²)² ) = (1/(2α)) sin(αt) − (t/2) cos(αt)

I won't go into the details, but you can continue using this idea to find the inverse
transform of N(s)/((s − a)² + α²)^m for m = 3, 4, ...
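Two of the boxed results can be confirmed numerically (a sketch assuming SciPy, with arbitrary test values α = 3, s = 2): the forward transforms of the stated inverses should land on s/(s² + α²)² and s²/(s² + α²)²:

```python
# Numerically check:
#   L( t sin(at)/(2a) )              = s / (s^2 + a^2)^2
#   L( sin(at)/(2a) + t cos(at)/2 )  = s^2 / (s^2 + a^2)^2
import numpy as np
from scipy.integrate import quad

alpha, s0 = 3.0, 2.0
d = (s0**2 + alpha**2)**2

i1, _ = quad(lambda t: t*np.sin(alpha*t)/(2*alpha) * np.exp(-s0*t), 0, np.inf)
i2, _ = quad(lambda t: (np.sin(alpha*t)/(2*alpha) + t*np.cos(alpha*t)/2) * np.exp(-s0*t),
             0, np.inf)
print(abs(i1 - s0/d) < 1e-7, abs(i2 - s0**2/d) < 1e-7)
```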

Example

F(s) = (s² − 7)/(s² + 4)² =

2.4 The Second Shifting Theorem

Consider the function f(t), taken zero for t negative, and then shifted over to k:

f(t − k)u(t − k) = { 0, 0 ≤ t < k ; f(t − k), k ≤ t < ∞ }

[figure: f(t), and its shifted copy f(t − k)u(t − k), which is zero until t = k]

The Laplace Transform of this function is actually rather simple:

L(f (t − k)u(t − k)) =

So that

L(f (t − k)u(t − k)) = e−sk F (s) SECOND SHIFTING THM.
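The theorem is easy to test numerically; a sketch assuming SciPy, with f(t) = t² and k = 1 (arbitrary test values), compares the Laplace integral of the shifted function with e^(−sk)F(s) = e^(−sk)·2/s³:

```python
# Second Shifting Theorem check with f(t) = t^2, k = 1:
# integral from k to inf of (t - k)^2 e^{-st} dt should equal e^{-sk} * 2/s^3.
import numpy as np
from scipy.integrate import quad

s0, k = 2.0, 1.0
lhs, _ = quad(lambda t: (t - k)**2 * np.exp(-s0*t), k, np.inf)
rhs = np.exp(-s0*k) * 2 / s0**3
print(abs(lhs - rhs) < 1e-8)
```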

2.4.1 Using the Second Shifting Theorem

L⁻¹( e^(−ks)/s² ) =

[figure: the shifted ramp (t − 2)u(t − 2)]

Or consider L⁻¹( (1 − e^(−s))²/s² ) =

Going the other way is harder because the function needs to be expressed as a sum of
functions in the form f (t − k)u(t − k).

(L(f (t − k)u(t − k)) = e−sk F (s))

 
h(t) = { 2, t < 1 ; 0, 1 ≤ t } =

 
h(t) = { 0, t < 2 ; t, 2 ≤ t } =

h(t) = { t², t < 1 ; 1, 1 ≤ t }

[figure: graph of h(t)]

Then h(t) =

Or take


g(t) = { 2, 0 ≤ t < π ; 0, π ≤ t < 3π ; sin t, 3π ≤ t }

2.4.2 Circuit Examples.

Consider an LC circuit, where the applied EMF, E(t), is a function of time.

[figure: series circuit with inductor L, capacitor C, applied EMF E(t), current I(t) and charge Q(t)]

As before for an RLC circuit

L d²Q(t)/dt² + R dQ(t)/dt + Q(t)/C = E(t)

where Q(t) is the charge at time t.

The unforced system.

If R = 0 the unforced system (E(t) = 0) has purely oscillatory solutions (centers), and
then it is usual to set 1/(LC) equal to ω². In that case the unforced system undergoes
oscillations with period 2π/ω.

So dividing by L, setting 1/(LC) = ω² and letting y(t) = Q(t) gives

d²y(t)/dt² + ω²y(t) = E(t)/L = Ē(t)

Take Laplace Transforms of both sides of the equation, which gives

Now you can see that the unforced system Ē(t) = 0 has purely oscillatory solutions

Y(s) = (sy(0) + ẏ(0))/(s² + ω²) =⇒

Now consider the forcing, so set the initial current and charge zero; Q(0) = y(0) = 0
and Q̇(0) = ẏ(0) = 0.

Suppose the voltage is switched on for a short time and then switched off. We could
assume that it is switched on at t = 0 and off at say t = k

Ē(t) = { Ē0, 0 ≤ t < k ; 0, k ≤ t }

[figure: the pulse E0(1 − u(t − k)), height E0, switched off at t = k]

To use Laplace transforms write Ē(t) in terms of the unit step function.

L(Ē(t)) = Ē0(1 − e^(−ks))/s

The effect of the forcing term is Y(s) = L(Ē(t))/(s² + ω²) = Ē0(1 − e^(−ks))/(s(s² + ω²)).
Using partial fractions

Given an applied voltage that is Ē0 for t ≤ k, but zero for t > k,

Q(t) = y(t) = (Ē0/ω²) { 1 − cos(ωt), 0 ≤ t < k ; cos(ω(t − k)) − cos(ωt), k ≤ t }
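This piecewise formula can be verified without Laplace transforms at all, by integrating the forced oscillator directly in two phases (a sketch assuming SciPy; ω = 2, Ē0 = 1, k = 1 are arbitrary test values):

```python
# Check the piecewise charge formula by direct integration of
# y'' + omega^2 y = Ebar(t), with Ebar = E0 on [0, k) and 0 afterwards.
import numpy as np
from scipy.integrate import solve_ivp

omega, E0, k = 2.0, 1.0, 1.0
# Phase 1: forcing on.
s1 = solve_ivp(lambda t, z: [z[1], E0 - omega**2*z[0]], (0.0, k), [0.0, 0.0],
               rtol=1e-10, atol=1e-12)
# Phase 2: forcing off, continuing from the phase-1 state.
s2 = solve_ivp(lambda t, z: [z[1], -omega**2*z[0]], (k, 2.0), s1.y[:, -1],
               rtol=1e-10, atol=1e-12)
q_numeric = s2.y[0, -1]
q_formula = E0/omega**2 * (np.cos(omega*(2.0 - k)) - np.cos(omega*2.0))
print(abs(q_numeric - q_formula) < 1e-7)
```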

So oscillations in charge result and are usually sustained. Take the case where k = π/ω.

But if the applied EMF is turned off after t = 2π/ω

The Dirac Delta Function (A pulse of applied voltage.)

Consider an impulse of strength 1/k applied for an interval k between t = a and t = a + k.

(u(t − a) − u(t − (a + k)))/k = { 0, 0 ≤ t < a ; 1/k, a ≤ t < a + k ; 0, a + k ≤ t }

[figure: a rectangular pulse of height 1/k and width k, so Area = k × 1/k = 1]

Then the integral of this function over any interval containing [a, a + k] is 1! But as k → 0
the function's height tends to ∞.
The Dirac delta function is the limit of this function as k → 0.

δ(t − a) = lim(k→0) (u(t − a) − u(t − (a + k)))/k = { ∞, t = a ; 0, otherwise }

A bit odd! Nevertheless, by construction, the integral of the Dirac Delta function is still 1
(∫₀^∞ δ(t − a)dt = 1) and the Laplace Transform of the Dirac Delta function is actually
finite.

L( (u(t − a) − u(t − (a + k)))/k ) = (e^(−sa) − e^(−(a+k)s))/(ks) = e^(−sa) (1 − e^(−ks))/(ks)

Taking the limit as k → 0 is not so easy (it looks like 0/0) until you recall L'Hopital's
Rule, which gives lim(k→0) (1 − e^(−ks))/(ks) = lim(k→0) se^(−ks)/s = 1. So L(δ(t − a)) = e^(−sa)

So for instance the Laplace Transform of the charge for an LC circuit with Q(0) = y(0) = 0
and Q̇(0) = ẏ(0) = 0 and Ē(t) = Ē0 δ(t − a) is

Y(s) = Ē0 e^(−as)/(s² + ω²).

Taking the inverse Laplace Transform

Q(t) = y(t) = (Ē0/ω) sin(ω(t − a))u(t − a)

So even if y(0) = 0 and ẏ(0) = 0,

Q(t) = y(t) = (Ē0/ω) sin(ω(t − a))u(t − a) = { 0, 0 ≤ t < a ; (Ē0/ω) sin(ω(t − a)), a ≤ t }

[figure: Q(t) is identically zero until t = a, then oscillates]

the impulse starts an oscillation going at t = a.
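The delta-function construction can be mimicked numerically: replace δ(t − a) by a tall narrow pulse of width kk and check that the response approaches (Ē0/ω) sin(ω(t − a))u(t − a). A sketch assuming SciPy, with arbitrary test values ω = 2, Ē0 = 1, a = 1:

```python
# Approximate the impulse by a pulse of height E0/kk on [a, a + kk],
# then integrate freely and compare with (E0/omega) sin(omega (t - a)) at t = 2.
import numpy as np
from scipy.integrate import solve_ivp

omega, E0, a, kk = 2.0, 1.0, 1.0, 1e-5
# Nothing happens before t = a (zero data, zero forcing), so start at t = a.
p = solve_ivp(lambda t, z: [z[1], E0/kk - omega**2*z[0]], (a, a + kk), [0.0, 0.0],
              rtol=1e-10, atol=1e-12)
fr = solve_ivp(lambda t, z: [z[1], -omega**2*z[0]], (a + kk, 2.0), p.y[:, -1],
               rtol=1e-10, atol=1e-12)
q_numeric = fr.y[0, -1]
q_delta = E0/omega * np.sin(omega*(2.0 - a))
print(abs(q_numeric - q_delta) < 1e-3)  # error shrinks with the pulse width kk
```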



2.5 The Convolution Theorem

Unfortunately L⁻¹(F(s)G(s)) ≠ f(t)g(t).

It does not even work for the simplest of functions.


Take F(s) = G(s) = 1/s; then

However you can prove the following result


L⁻¹(F(s)G(s)) = ∫₀^t f(τ)g(t − τ)dτ   CONVOLUTION THEOREM

So in our simple example L⁻¹(F(s)G(s)) = ∫₀^t 1 × 1 dτ = [τ]₀ᵗ = t, which is correct!

But how do we get such a strange result?


It comes from using the 2nd shift theorem along with a bit of knowledge about double
integrals.

First the Second Shift Theorem: e−sτ G(s) = L(g(t − τ )u(t − τ ))

So e−sτ G(s) =

Now using the fact that F(s) = ∫₀^∞ e^(−sτ) f(τ)dτ

F (s)G(s) =

The trick at this point is to change the order of integration. That is, integrate with respect
to τ first and then with respect to t. This also changes the limits, because the area over
which the integration is taking place is not square: now τ is integrated from 0 to t and t
from 0 to ∞.

[figure: the triangular region of integration, first swept by t from τ to ∞ with τ from 0 to ∞, then by τ from 0 to t with t from 0 to ∞]

F (s)G(s) =

Note the order of f and g is not important; in fact if you change variables inside the
integration from τ to T = t − τ, then dτ = −dT and

∫₀^t f(τ)g(t − τ)dτ =

(τ and T are both dummy variables.)

Also sometimes the Convolution is represented by a *, so that the fact that it satisfies
commutativity can be written as f ∗ g = g ∗ f.
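Before using the theorem, it can be sanity-checked on a pair not covered in the notes (my own test case, assuming SciPy): with f(t) = t (F = 1/s²) and g(t) = e^(−t) (G = 1/(s + 1)), the product FG = 1/(s²(s + 1)) has inverse t − 1 + e^(−t), which should equal the convolution integral:

```python
# Convolution theorem sanity check: (f * g)(t) for f(t) = t, g(t) = e^{-t}
# should match t - 1 + e^{-t}, the inverse transform of 1/(s^2 (s + 1)).
import numpy as np
from scipy.integrate import quad

t = 1.5
conv, _ = quad(lambda tau: tau * np.exp(-(t - tau)), 0, t)
closed = t - 1 + np.exp(-t)
print(abs(conv - closed) < 1e-9)
```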

2.5.1 Using the Convolution Theorem


 
Let's take an example. What is L⁻¹( 2s/(s² + 1)² )?

Let F (s) =
Let G(s) =
 
So that L⁻¹( 2s/(s² + 1)² ) =

Now using 2 sin A cos B = sin(A + B) + sin(A − B) gives

 
L⁻¹( 2s/(s² + 1)² ) =

 
Another example: L⁻¹( 1/((s − 2)²(s + 1)) ).

Let F(s) = 1/(s − 2)²
Let G(s) = 1/(s + 1)

 
By convolution L⁻¹( 1/((s − 2)²(s + 1)) ) =

So using integration by parts

 
L⁻¹( 1/((s − 2)²(s + 1)) ) =

Inverses of functions with exponentials.


To work out the Inverse Laplace Transform of a function with an exponential first work out
the Inverse Laplace Transform without the exponential and then use the second shifting
theorem: (e−sτ G(s) = L(g(t − τ )u(t − τ )))
 
So say we know that L⁻¹( 1/s² ) = t

L⁻¹( e^(−3s)/s² ) =

Or the example from above

L⁻¹( e^(−3s)/((s − 2)²(s + 1)) ) =

2.5.2 Solving Linear ODE’s using Convolution.

For a second order linear constant coefficient ODE, with time dependent forcing

ÿ(t) + aẏ(t) + by(t) = r(t) with y(0) = 0 and ẏ(0) = 0

the Laplace Transform, with L(y(t)) = Y (s) and L(r(t)) = R(s), has a particularly simple
form:

Y (s) = Q(s)R(s) where the transfer function Q(s) =

Now by Convolution

y(t) = where L(q(t)) = Q(s)

So you never have to work out R(s)!


Mass attached to a spring which is periodically forced with a frequency equal
to the natural frequency of the spring.

[figure: a mass on a spring of stiffness k hanging from an oscillating support]

Assuming the initial position and velocity of the mass are zero the initial value problem
looks something like the following.

ẍ(t) + 9x(t) = r(t) = cos(3t) with x(0) = 0 and ẋ(0) = 0



So that if X(s) = L(x(t)) taking Laplace Transforms gives

Which by convolution has inverse transform

x(t) =

Now 2 sin A cos B = sin(A + B) + sin(A − B), which implies that

[figure: x(t) = t sin(3t)/6 oscillating inside the growing envelope lines ±t/6]

The amplitude of the resulting oscillations grows linearly with time (t/6)!
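The resonance solution x(t) = t sin(3t)/6 can be confirmed by integrating the initial value problem directly (a sketch assuming SciPy):

```python
# Integrate x'' + 9x = cos(3t), x(0) = x'(0) = 0, and compare with
# the resonant solution x(t) = t sin(3t)/6 at t = 5.
import numpy as np
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, z: [z[1], np.cos(3*t) - 9*z[0]], (0.0, 5.0), [0.0, 0.0],
                rtol=1e-10, atol=1e-12)
x_numeric = sol.y[0, -1]
x_exact = 5.0*np.sin(15.0)/6
print(abs(x_numeric - x_exact) < 1e-6)
```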


Resonance behaviour.

2.5.3 The Laplace Transform of Periodic Functions

Suppose that f (t) is periodic with period p, then f (t + p) = f (t) for all t.
Now the Laplace Transform of f (t)

L(f(t)) = ∫₀^∞ e^(−st) f(t)dt =

But ∫_p^(2p) e^(−st) f(t)dt =

Similarly
∫_(np)^((n+1)p) e^(−st) f(t)dt =

Finally

L(f (t)) =

But the term in front of the integral is simply a Geometric Progression.

1 + e−sp + e−2sp + ... =




So finally, since L(f(t)) = (1 + e^(−sp) + e^(−2sp) + ...) ∫₀^p e^(−sx) f(x)dx,

L(f (t)) =

Take the example of a square wave.


 
f(t) = { 1, 0 ≤ t < a ; 0, a ≤ t < 2a }  with f(t + 2a) = f(t)

[figure: square wave of height 1 and period 2a, on for the first half of each period]

F(s) =
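The blank above is left to fill in; for reference, summing the per-period contributions suggests (an assumption to verify) F(s) = (1 − e^(−as))/(s(1 − e^(−2as))) = 1/(s(1 + e^(−as))). A numerical sketch sums the geometric series of "on" intervals directly:

```python
# Each "on" interval [2na, (2n+1)a) contributes exp(-2nas)(1 - exp(-as))/s to the
# Laplace integral; the sum should equal 1/(s (1 + e^{-as})).
import numpy as np

a, s = 1.0, 1.0
partial_sum = sum(np.exp(-2*n*a*s) * (1 - np.exp(-a*s)) / s for n in range(200))
formula = 1 / (s * (1 + np.exp(-a*s)))
print(abs(partial_sum - formula) < 1e-12)
```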

Summary of Laplace Transforms

F(s) = L(f(t)) = ∫₀^∞ e^(−st) f(t)dt

Powers

L(tⁿ) = n!/s^(n+1)    L(tᵃ) = Γ(a + 1)/s^(a+1)

Sine and Cosine

L(cos(αt)) = s/(s² + α²) and L(sin(αt)) = α/(s² + α²)
Dirac Delta function
L(δ(t − k)) = e−ks
Linearity

L(ag(t) + bh(t)) = aG(s) + bH(s)

First Shifting Theorem

L(eat f (t)) = F (s − a)

Second Shifting Theorem

L(f (t − k)u(t − k)) = e−ks F (s)

Convolution Theorem
L( ∫₀^t f(τ)g(t − τ)dτ ) = F(s)G(s)

Differentials

L(f⁽ⁿ⁾(t)) = sⁿF(s) − sⁿ⁻¹f(0) − sⁿ⁻²ḟ(0) − ... − s f⁽ⁿ⁻²⁾(0) − f⁽ⁿ⁻¹⁾(0)

L(tf(t)) = −dF(s)/ds
Periodic Functions f (t + p) = f (t)
Z p
1
L(f (t)) = e−st f (t)dt
1 − e−sp 0
