WORKBOOK
Semester 2, 2019
© School of Mathematics and Physics, The University of Queensland, Brisbane QLD 4072, Australia.
Contents
1 Differential Equations
1.1 Introduction
1.1.1 Electrical Circuit
1.1.2 Systems of ODE's
1.1.3 Laplace Transforms
1.1.4 Texts
1.2 Introduction to Systems of ODE's and Classification of ODE's
1.2.1 Introduction to Systems of ODE's
1.2.2 Classifying ODE's: Linear, Order, Homogeneous
1.2.3 The Superposition Principle
1.3 Solving systems of two coupled 1st order ODE's
1.3.1 The system in matrix form
1.3.2 The Homogeneous case with constant coefficients
1.4 Theory and Theorems for first order systems
1.5 Homogeneous Constant Coefficient Linear 2-dimensional Systems and the Phase Plane
1.5.1 The Phase Portrait
1.5.2 Phase Portraits for Real Eigenvalues and Direction Fields
1.5.3 Phase Portraits for Complex Eigenvalues
1.5.4 SUMMARY of 6 Types of LINEAR PHASE PORTRAITS in 2D
1.6 Critical Points and Stability
1.6.1 Critical Points
1.6.2 Stability of Critical Points
1.7 Non homogeneous Linear systems
1.8 Nonlinear Systems
1.8.1 Solving for the Phase Curves
1.8.2 Critical Points for Nonlinear Systems
2 Laplace Transforms
2.1.1 Definition
2.3 Finding the inverse Laplace Transform of complicated functions using Partial Fractions
1 Differential Equations
1.1 Introduction
In ODE’s the unknown is a function of one independent variable, so that only ordinary
differentials are involved.
For example the equation for a mass spring system:
$m\frac{d^2x}{dt^2} = -kx - c\frac{dx}{dt}$
is an equation for the unknown x as a function of the independent variable t; x(t).
In PDE’s the unknown is a function of 2 or more independent variables and there are
partial differentials. For example the heat equation:
$m\frac{\partial^2 H}{\partial x^2} = c\frac{\partial H}{\partial t}$
is an equation for the unknown H as a function of the independent variables x, t; H(x, t).
SYSTEMS OF ODE's often arise naturally, or an nth order linear differential equation can be written as a system of differential equations.
For instance to model the spread of an infectious disease the rate of change of the number
of those infected depends on the number of those who are susceptible.
$\frac{d\,\mathrm{Infected}}{dt} = f(\mathrm{Infected}, \mathrm{Susceptible})$
$\frac{d\,\mathrm{Susceptible}}{dt} = g(\mathrm{Infected}, \mathrm{Susceptible})$
In an electrical circuit the rate of change of charge gives the current, but the rate of
change of current itself depends on charge.
[Figure: a series RLC circuit with inductance L, resistance R, capacitance C, charge Q(t), current I(t) and applied voltage E(t).]
Kirchhoff's Law says that the voltage drop across the Inductor plus the voltage drop across the Resistor plus the voltage drop across the Capacitor equals the applied voltage.
$L\frac{dI(t)}{dt} + RI(t) + \frac{Q(t)}{C} = E(t)$
But charge is related to current:
So we could substitute this back to give one second order equation for Q(t).
So the charge and current oscillate out of phase with each other.
We can see from the equations that
$Q^2 + I^2 = $
But the full 3-dimensional space $(t, Q, I) = (t, B\sin(t), B\cos(t))$ is really hard to work with.
[Figure: left, the solution helix in the 3-dimensional $(t, Q, I)$ space; middle, $Q(t)$ plotted against t; right, the circular curve in the $(Q, I)$ Phase space.]
In the second option, $(Q(t), I(t))$ space, the curve representing the solutions is parametrized by time. Because the curve is a circle we can see that the behavior is cyclic and that when Q(t) is at a maximum I(t) is 0, etc. But some information has been lost. For instance we don't know how fast to move around the circle.
The (Q(t), I(t)) space is called Phase Space.
There are about 6 qualitatively different Linear systems in 2D Phase Space which we will
look at and classify. Then we will begin to ask what we can do with Nonlinear Systems.
Search on the Internet for pplane and try entering your own linear system.
In LAPLACE TRANSFORMS we will solve systems which are time dependent, such
as the circuit above with a variable applied voltage. What makes Laplace Transforms so
useful is that you can use them to solve equations with discontinuous terms, say a voltage
that is suddenly switched on!
Suppose the voltage has been switched on and is then switched off. We could assume that it is switched off at say t = k:
$E(t) = \begin{cases} E_0 & 0 \le t < k \\ 0 & k \le t \end{cases}$
[Figure: the switched voltage $E_0(1 - u(t-k))$ and the resulting oscillatory response.]
1.1.4 Texts
$\frac{dr(t)}{dt} = $
$\frac{df(t)}{dt} = $
where r(t) is the number of rabbits and f (t) is the number of foxes and a, b, l and k are
constants.
These equations are said to be coupled (together) meaning that we cannot solve either
one independently.
$\frac{dy_1(t)}{dt} = $
$\frac{dy_2(t)}{dt} = $
[Figure: Mass-Spring-Damper system: a mass hanging from a spring (constant k) and a damper (constant c), with x measured from the natural length.]
$\frac{dp}{dt} = $
where p is                and x is
$p = m\dot{x} \;\rightarrow\; $
Sometimes a system of two (or more) first order ODE's can be written as one 2nd (or higher) order ODE.
For instance for the Mass-Spring-Damper System
$\frac{dp}{dt} = -kx - c\dot{x}, \qquad \frac{dx}{dt} = \frac{p}{m}$
can be written as
(Since from linear momentum we have that $\frac{dp}{dt} = m\ddot{x}$.)
And, going the other way, ANY n-th order ODE of the form
$\frac{d^ny}{dt^n} = F\left(t, y, \frac{dy}{dt}, \frac{d^2y}{dt^2}, \ldots, \frac{d^{(n-1)}y}{dt^{(n-1)}}\right)$
can be written as a system of n first order ODE's.
Let $y_1 = y$
$\frac{dy_1}{dt} = $
$\frac{dy_2}{dt} = $
$\frac{dy_{(n-1)}}{dt} = $
$\frac{dy_n}{dt} = $
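As an aside, here is a minimal Python sketch of this reduction in practice for the mass-spring-damper equation $m\ddot{x} = -kx - c\dot{x}$, integrating the resulting first order system numerically with scipy; the parameter values and initial condition are made up for illustration.

import numpy as np
from scipy.integrate import solve_ivp

m, k, c = 1.0, 4.0, 0.5      # illustrative values only

def rhs(t, y):
    # y[0] = x, y[1] = dx/dt: the 2nd order ODE written as two 1st order ODE's
    x, v = y
    return [v, (-k * x - c * v) / m]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0])   # x(0) = 1, x'(0) = 0
print(sol.y[0, -1])                             # displacement at t = 20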
LINEAR
An ODE is said to be Linear if it is linear in the unknown and its derivatives.
But it need not be linear in the independent variable.
For example
$\frac{d^2y(t)}{dt^2} = e^t\frac{dy(t)}{dt} - \cos(5t)\,y(t) + t^5$, is
But
$\frac{d^4y(t)}{dt^4} = 2y(t)\frac{dy(t)}{dt} - y(t) + 5$ is
And
$\left(\frac{d^3u(x)}{dx^3}\right)^2 + \frac{d^5u(x)}{dx^5} - xu(x) + x^2 = 0$ is
ORDER
The order of an ODE is the order of the highest derivative.
In the examples above the first is order, the second order and
the third is order.
HOMOGENEOUS
A linear ODE is either homogeneous or inhomogeneous.
An ODE is homogeneous if when y(t) is a solution then so is cy(t) for any con-
stant c.
This is great when it happens of course because if you can find one solution you immedi-
ately have a whole family of others.
Take some examples
$L(y(t)) = \frac{d^2y(t)}{dt^2} + 2\frac{dy(t)}{dt} - \cos(5t)\,y(t) = 0$ is homogeneous.
Try it.
But
$L(y(t)) = \frac{d^2y(t)}{dt^2} - e^t\frac{dy(t)}{dt} + \cos(5t)\,y(t) - t^5 = 0$ is inhomogeneous.
Or
$\frac{d^3u(x)}{dx^3} = \sin(x)\,u(x) + 5$ is
This means that you can take linear combinations of known solutions to form new solu-
tions. So if y1 (t) and y2 (t) are two solutions then c1 y1 (t) + c2 y2 (t) is also a solution for
any constants c1 and c2 .
It also works for systems. If $\begin{pmatrix} y_{1a} \\ y_{2a} \end{pmatrix}$ and $\begin{pmatrix} y_{1b} \\ y_{2b} \end{pmatrix}$ are solutions to a 2D linear homogeneous system then for any constants $c_1$ and $c_2$
$c_1\begin{pmatrix} y_{1a} \\ y_{2a} \end{pmatrix} + c_2\begin{pmatrix} y_{1b} \\ y_{2b} \end{pmatrix}$ is also a solution.
But be careful: the superposition principle only applies to Linear homogeneous systems.
Any second order linear ODE can be written as a system of two coupled 1st order ODE’s:
$\frac{d^2y(t)}{dt^2} + p(t)\frac{dy(t)}{dt} + q(t)\,y(t) = r(t)$
Now this system of two first order ODE’s can be written in matrix form:
If we let
$\mathbf{y} = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}$, $A(t) = \begin{pmatrix} 0 & 1 \\ -q(t) & -p(t) \end{pmatrix}$ and $R(t) = \begin{pmatrix} 0 \\ r(t) \end{pmatrix}$
We know that the solutions to 2nd order linear ODE’s with constant coefficients are linear
sums of exponentials:
Say there are two roots λ1 and λ2 then the General Solution to the ODE is
(Actually these constants are fixed by the initial conditions y(0) and ẏ(0) in an initial
value problem (IVP).)
For a system
$\dot{\mathbf{y}} = A\mathbf{y}$
Try
$\mathbf{y} = \mathbf{x}e^{\lambda t}$ where $\mathbf{x} = \begin{pmatrix} u \\ v \end{pmatrix}$ is a constant vector and $\lambda$ is a constant scalar.
$A\mathbf{x} = \lambda\mathbf{x}$
This is called the EIGENVALUE EQUATION for A.
If there are two eigenvalues $\lambda_1$ and $\lambda_2$ with eigenvectors $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$ then
are solutions to
$\dot{\mathbf{y}} = A\mathbf{y}$.
Then $\dot y$ is found by differentiating y. Here this implies that $\dot y = 3c_1e^{3t} - 5c_2e^{-5t}$.
Recall $y = c_1e^{3t} + c_2e^{-5t}$ and $\dot y = 3c_1e^{3t} - 5c_2e^{-5t}$, with $y(0) = -1$ and $\dot y(0) = 13$.
Solution to IVP is
There are two special cases: Complex roots λ± = α ± iβ, and Equal roots.
Complex Roots
As before $y = c_1e^{\lambda_- t} + c_2e^{\lambda_+ t}$, but $c_1$ and $c_2$ are now complex. For a real solution $c_2$ must be the complex conjugate of $c_1$. However complex numbers are tricky, particularly if we are assuming that everything is real.
Let $\lambda_+ = \alpha + i\beta$
$e^{\lambda_+ t} = $
So we say the general solution (in real form) is $y = Ae^{\alpha t}\cos(\beta t) + Be^{\alpha t}\sin(\beta t)$, for some real constants A and B.
Equal Roots
The problem here is that there is only one value of $\lambda$ and so only one solution $y_1 = e^{\lambda t}$.
However the second solution can be found by variation of parameters and is $y_2 = te^{\lambda t}$. So the general solution is $y = c_1e^{\lambda t} + c_2te^{\lambda t}$.
Now this system of two first order ODE’s can be written in matrix form:
Now let
$\mathbf{y} = \mathbf{x}e^{\lambda t}$
where $\mathbf{x} = \begin{pmatrix} u \\ v \end{pmatrix}$ is a constant vector and $\lambda$ is a constant scalar.
Sub in
$(A - \lambda I)\mathbf{x} = 0,$
$\det(A - \lambda I) = 0$, where $A = \begin{pmatrix} 0 & 1 \\ 15 & -2 \end{pmatrix}$.
(There is a review of eigenvalues and eigenvectors on the web and in Kreyszig Sec 4.0.)
The Eigenvectors corresponding to $\lambda_1 = 3$, $\lambda_2 = -5$ are found by solving
$(A - \lambda_i I)\mathbf{x}^{(i)} = 0$ for $\mathbf{x}^{(i)} = \begin{pmatrix} u \\ v \end{pmatrix}$.
If $\lambda_1 = 3$
If $\lambda_2 = -5$
So that
$\mathbf{y}^{(1)} = \begin{pmatrix} 1 \\ 3 \end{pmatrix}e^{3t}$ and $\mathbf{y}^{(2)} = \begin{pmatrix} 1 \\ -5 \end{pmatrix}e^{-5t}$ are solutions to $\dot{\mathbf{y}} = A\mathbf{y}$.
provided that the two solutions y(1) and y(2) are linearly independent.
The matrix
$Y = \begin{pmatrix} y_1^{(1)} & y_1^{(2)} \\ y_2^{(1)} & y_2^{(2)} \end{pmatrix}$ is called the fundamental matrix.
If the determinant of this matrix, often called the Wronskian $W = \det Y$, is nonzero then $\mathbf{y}^{(1)}$ and $\mathbf{y}^{(2)}$ are linearly independent and the General Solution to the matrix equation is given by $\mathbf{y} = c_1\mathbf{y}^{(1)} + c_2\mathbf{y}^{(2)}$.
If $y(0) = -1$ and $\dot y(0) = 13$ then $\mathbf{y}(0) = \begin{pmatrix} -1 \\ 13 \end{pmatrix}$
For those who like matrices, you can solve this using the matrix form:
For t = 0 we have
$\mathbf{y}(0) = \begin{pmatrix} 1 & 1 \\ 3 & -5 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} -1 \\ 13 \end{pmatrix}$
Or
$\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 3 & -5 \end{pmatrix}^{-1}\begin{pmatrix} -1 \\ 13 \end{pmatrix} = -\frac{1}{8}\begin{pmatrix} -5 & -1 \\ -3 & 1 \end{pmatrix}\begin{pmatrix} -1 \\ 13 \end{pmatrix} = \begin{pmatrix} 1 \\ -2 \end{pmatrix}$
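For readers who want to check such hand calculations numerically, a short numpy sketch that recomputes the eigenvalues of A and the constants $c_1$, $c_2$ for this initial condition (the scaled eigenvectors (1, 3) and (1, −5) are taken from the working above):

import numpy as np

A = np.array([[0.0, 1.0], [15.0, -2.0]])
evals, evecs = np.linalg.eig(A)              # should give eigenvalues 3 and -5
print(evals)

Y0 = np.array([[1.0, 1.0], [3.0, -5.0]])     # eigenvectors (1,3) and (1,-5) as columns
c = np.linalg.solve(Y0, np.array([-1.0, 13.0]))
print(c)                                     # should give c1 = 1, c2 = -2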
SUMMARY
Solving a system of two coupled linear constant coefficient equations
$\dot{\mathbf{y}} = A\mathbf{y}$
where A is a constant $2\times 2$ matrix and $\mathbf{y} = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}$.
Solve for the eigenvalues $\lambda_i$ and eigenvectors $\mathbf{x}^{(i)}$ of A.
Then
both $\mathbf{y}^{(1)} = \mathbf{x}^{(1)}e^{\lambda_1 t}$ and $\mathbf{y}^{(2)} = \mathbf{x}^{(2)}e^{\lambda_2 t}$ are solutions to the system,
provided that $\mathbf{y}^{(1)}$ and $\mathbf{y}^{(2)}$ are linearly independent. (Problems can only arise if $\lambda_1 = \lambda_2$.)
Example. Solve
The eigenvectors are also complex, however because the eigenvalues are complex conju-
gates of each other the eigenvectors are also complex conjugates of each other.
(For any real matrix with complex eigenvalues the eigenvalues and vectors are complex
conjugates of each other.)
So you only need to find the real and imaginary parts of y(1) :
Then the general solution in real form is a linear combination of these real and
imaginary parts;
Summary
For an Initial Value Problem (IVP) there is an initial condition for each $y_i$; $y_i(t_0) = K_i$, or $\mathbf{y}(t_0) = \mathbf{K}$. So the IVP is written as
Existence Uniqueness
Basically if f is smooth at a given initial condition then there is one and only one solution
for that initial condition. It may not exist for all time, but it must exist in some open
time neighborhood of t0 .
Note If the $f_i$ are NOT continuous at $(t_0, y_i(0))$ the Existence Uniqueness Theorem is not satisfied.
Take $\frac{dy}{dt} = \frac{2y}{t}$
Note If the $\frac{\partial f_i}{\partial y_i}$ are not continuous at $(t_0, y_i(0))$ the Existence Uniqueness Theorem is not satisfied.
We have a vague idea of the types of behavior we can expect from linear constant coefficient
ODE’s because they have exponential solutions.
So we expect exponential decay (from terms like $e^{-3t}$) or exponential growth (from terms like $e^{2t}$), oscillatory behavior (from terms like $\sin(5t)$) and decaying or growing oscillatory behavior (from terms like $e^{-3t}\cos(t)$).
[Figure: plots of exponential decay $e^{-3t}$, exponential growth $e^{2t}$, oscillation $\sin(5t)$ and decaying oscillation $e^{-t/5}\cos(t)$.]
But in a 2-dimensional system there are always two fundamental solutions and one may
grow while the other decays meaning that different initial conditions may give different
behavior. We really need to consider the 3-dimensional space (t, y1 , y2 ). But that is too
complicated so we consider (y1 (t), y2 (t)) as coordinates in (y1 , y2 ) space, which is called
Phase Space.
$\ddot\theta = -\frac{g}{l}\theta$
where g is gravity, l is length and $\theta$ is the angle the pendulum makes with the vertical.
Take $\frac{g}{l} = 9$ say, then letting $y_1 = \theta$ and $y_2 = \dot\theta$, in matrix notation we have
In both cases
Now the General solution is a linear combination of these and in fact you can show that in general the solutions satisfy $y_1^2 + y_2^2/9 = c^2$, for some constant c. In the $(y_1, y_2)$ plane these are ellipses.
[Figure: the elliptical trajectories $y_1^2 + y_2^2/9 = c^2$ in the $(y_1, y_2)$ plane.]
Each initial condition gives a curve in phase space, which is called a trajectory. These
trajectories, ellipses here, represent solutions to the ODE in phase space. You can build
up a complete picture by taking lots of different initial conditions, each of which will give
you a trajectory in phase space. This is called the Phase Portrait of the system. Here
the phase portrait is simply lots of ellipses of the form $y_1^2 + \frac{y_2^2}{9} = c^2$, plus the origin.
[Figure: the phase portrait, a family of ellipses in the $(y_1, y_2)$ plane.]
The Phase Plane representation of the solutions does not tell you everything. It cannot
say how fast you move along a phase curve. However we do usually indicate the direction
of increasing time by an arrow.
Existence Uniqueness For a constant coefficient system, which is smooth, two trajectories cannot cross; otherwise they would violate existence uniqueness, because at the point where they cross there would be two different solutions coming out of one point.
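A minimal Python sketch of how such a phase portrait can be produced for this pendulum system (g/l = 9), by integrating $\dot{\mathbf{y}} = A\mathbf{y}$ from several made-up initial conditions:

import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-9.0, 0.0]])      # y1 = theta, y2 = theta-dot, g/l = 9

t_eval = np.linspace(0.0, 2.5, 400)
for y1_0 in [0.5, 1.0, 2.0, 3.0]:            # a few initial conditions
    sol = solve_ivp(lambda t, y: A @ y, (0.0, 2.5), [y1_0, 0.0], t_eval=t_eval)
    plt.plot(sol.y[0], sol.y[1])             # each trajectory traces out an ellipse

plt.xlabel("y1"); plt.ylabel("y2")
plt.show()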
There are 6 qualitatively different phase portraits, apart from special cases. Four are
concerned with real eigenvalues and two with complex eigenvalues. We will look at all 6.
The next section is on real eigenvalues.
If the eigenvalues of A are real and distinct then the solution to $\dot{\mathbf{y}} = A\mathbf{y}$ is of the form
$\mathbf{y} = c_1\mathbf{x}^{(1)}e^{\lambda_1 t} + c_2\mathbf{x}^{(2)}e^{\lambda_2 t}$
In this case there are always two straight lines in the phase space on which it is easy
to find the direction of the flow.
Let's take an example.
If
$A = \begin{pmatrix} -2 & 0 \\ 1 & -1 \end{pmatrix}$
Then, after finding the eigenvalues and eigenvectors of A, we can write down the general solution, which is
$\mathbf{y} = c_1\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{-2t} + c_2\begin{pmatrix} 0 \\ 1 \end{pmatrix}e^{-t}$
Now if either $c_1$ or $c_2$ is zero the trajectory is a straight line because then $y_1$ is just a multiple of $y_2$.
Say c2 = 0, then
Say c1 = 0, then
[Figure: trajectories of this system; the two straight-line trajectories lie along the eigenvectors.]
In between, trajectories move into the origin, but not along straight lines. In fact, provided the eigenvalues are real, negative and distinct, the trajectories approach the origin tangent to the straight line corresponding to the eigenvector of the eigenvalue with least magnitude.
[Figure: sketch of the trajectories in the $(y_1, y_2)$ plane.]
Any system with two negative, real and different eigenvalues gives similar results and is called a STABLE (improper) NODE.
Of course the actual straight lines are different for each case, as are the eigenvectors. (See if you can show that for an eigenvector $\begin{pmatrix} a \\ b \end{pmatrix}$ the slope of the straight line is $\frac{b}{a}$.)
Have a look at the General Linear Model in mathsims for another example. Click on the stable node tab below the simulations, then click in the phase plane window to see the solutions for different initial conditions.
The fact that the trajectories approach the origin tangent to the straight line correspond-
ing to the eigenvector of the eigenvalue with least magnitude, is messy to show in general!
But the result is easy to show for a system where the eigenvectors are parallel to the axes.
$\begin{pmatrix} \dot y_1 \\ \dot y_2 \end{pmatrix} = \begin{pmatrix} -2 & 0 \\ 0 & -5 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}$ which has solution $\mathbf{y} = c_1\begin{pmatrix} 1 \\ 0 \end{pmatrix}e^{-2t} + c_2\begin{pmatrix} 0 \\ 1 \end{pmatrix}e^{-5t}$.
Now c2 = 0 =⇒
Also c1 = 0 =⇒
Away from these solution curves, for $y_1 \neq 0$ and $y_2 \neq 0$, consider the individual ODE's for $y_1$ and $y_2$.
Here we have
Now because $\frac{5}{2} > 1$ these curves for $C \neq 0$ have zero slope at the origin. So they approach the origin tangent to the $y_1$ axis.
[Figure: the curves approaching the origin tangent to the $y_1$ axis.]
What if both eigenvalues are positive? Actually the situation isn't much different; after all, the two minuses cancelled when we solved for $y_2$ as a function of $y_1$. Let's take an example though.
$\begin{pmatrix} \dot y_1 \\ \dot y_2 \end{pmatrix} = \begin{pmatrix} 1 & 3 \\ 0 & 2 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}$
[Figure: trajectories of this system moving away from the origin.]
In between, the curves are tangent to the straight line corresponding to the eigenvector of the eigenvalue with least magnitude.
Any system with two positive, real and different eigenvalues gives similar results and is called an UNSTABLE (improper) NODE.
The direction of the velocity vector gives the direction of the flow and the length of the
velocity vector is the speed. The Direction Field is the field of these vectors: f (y).
(’Graph Phase Plane’ in pplane gives you a direction field.)
The great thing about the direction field is that it can be calculated without ever solving
the ODE. This is true for a nonlinear system as well as a linear one.
So for the system $\begin{pmatrix} \dot y_1 \\ \dot y_2 \end{pmatrix} = \begin{pmatrix} 1 & 3 \\ 0 & 2 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \;\Rightarrow\; f(\mathbf{y}) = \begin{pmatrix} y_1 + 3y_2 \\ 2y_2 \end{pmatrix}$
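A small Python sketch that computes and plots this direction field on a grid, in the spirit of pplane's 'Graph Phase Plane':

import numpy as np
import matplotlib.pyplot as plt

y1, y2 = np.meshgrid(np.linspace(-4, 4, 15), np.linspace(-4, 4, 15))
f1 = y1 + 3 * y2             # first component of f(y)
f2 = 2 * y2                  # second component of f(y)

plt.quiver(y1, y2, f1, f2)   # velocity vectors of the flow
plt.xlabel("y1"); plt.ylabel("y2")
plt.show()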
At say $(y_1, y_2) = (0, 2)$
or $(y_1, y_2) = (2, 0)$
or $(y_1, y_2) = (-2, 1)$
[Figure: the direction field arrows at these points in the $(y_1, y_2)$ plane.]
The Slope of a Trajectory. Often the most useful aspect of the vector is its slope, given by $\frac{dy_2}{dy_1} = \frac{f_2(y_1, y_2)}{f_1(y_1, y_2)}$.
The slope along $y_2 = 0$ is
or along $y_1 = 0$
or along $y_2 = y_1$
or along $y_2 = -\frac{1}{3}y_1$
[Figure: the trajectory slopes along these lines in the $(y_1, y_2)$ plane.]
Nullclines
A Nullcline is a line or curve where the slope of the trajectory is 0 or ∞. (The package
pplane will plot the nullclines for you. Go to Solution and then Show Nullclines in the
phase plane window.)
The Nullclines for the example above are
What if the eigenvalues have opposite sign? Here again it is useful to be able to solve for $y_2$ as a function of $y_1$, so I will take a simple example where the eigenvectors lie on the axes.
$\begin{pmatrix} \dot y_1 \\ \dot y_2 \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 0 & -4 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \;\Rightarrow\; \mathbf{y} = c_1\begin{pmatrix} 1 \\ 0 \end{pmatrix}e^{2t} + c_2\begin{pmatrix} 0 \\ 1 \end{pmatrix}e^{-4t}$.
$c_2 = 0 \Rightarrow$
$c_1 = 0 \Rightarrow$
[Figure: the saddle trajectories in the $(y_1, y_2)$ plane.]
On the $y_1$ axis we have growth, so the arrow goes away from the origin, while on the $y_2$ axis we have decay, so the arrow comes into the origin.
Also when we solve for y2 as a function of y1 ;
Once again this is typical of the case where the eigenvalues have opposite sign.
Suppose the solution was
$\mathbf{y} = c_1\begin{pmatrix} 1 \\ 3 \end{pmatrix}e^{t} + c_2\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{-3t}$.
A sketch of the trajectories gives the following.
[Figure: sketch of the saddle trajectories.]
Any system with one positive and one negative eigenvalue gives similar results and is called a SADDLE.
AND there are two straight lines in the phase portrait associated with the solutions for $c_1 = 0$ and $c_2 = 0$.
[Figure: the straight-line trajectories labelled $c_1 = 0$ and $c_2 = 0$.]
2. Node or saddle
the origin is said to be a SADDLE.
[Figures: sketches of a node and of a saddle in the $(y_1, y_2)$ plane.]
Have a look at the General Linear Model in mathsims for more examples.
3. Direction Field
Since f (y) evaluated at (y1 , y2 ) is the velocity vector at (y1 , y2 ), the slope of the
curves in phase space is given by
$\frac{dy_2}{dy_1} = \frac{\dot y_2}{\dot y_1}$ (from the chain rule) $= \frac{a_{21}y_1 + a_{22}y_2}{a_{11}y_1 + a_{12}y_2}$
For instance, if the slope of the trajectory at any point is $\frac{dy_2}{dy_1} = \frac{4y_2}{2y_1 + 3y_2}$, then on $y_2 = -\frac{2}{3}y_1$ the slope is infinite, while on $y_1 = 0$ the slope is $\frac{4}{3}$. In particular the lines where the phase curves are horizontal and where they are vertical are called the Nullclines.
[Figure: annotated phase plane showing these lines and the straight-line trajectory $c_2 = 0$.]
4. The solutions to the equation
$\frac{dy_2}{dy_1} = \frac{a_{21}y_1 + a_{22}y_2}{a_{11}y_1 + a_{12}y_2}$ are the phase space curves.
If the eigenvalues are equal there may still be two linearly independent eigenvec-
tors.
For instance if $A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$
[Figure: the proper node; every ray through the origin is a straight-line trajectory.]
This is called a PROPER NODE and it is unstable if λ > 0 and stable if λ < 0.
Then there is only one straight line, $\frac{y_2}{y_1} = 1$, in the phase plane.
[Figure: the phase portrait with the single straight-line trajectory $y_2 = y_1$.]
To get some idea as to how the other trajectories come into the origin consider
$\frac{dy_2}{dy_1} = \frac{-y_1 + 2y_2}{y_2}$, which is the slope of the trajectory at the point $(y_1, y_2)$.
Along $y_2 = \frac{1}{2}y_1$
but along $y_2 = 0$
If the eigenvalues are pure imaginary you can prove that the trajectories are ellipses.
Take the following example
$\begin{pmatrix} \dot y_1 \\ \dot y_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -2 & 0 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}$
$\frac{dy_2}{dy_1} = \frac{\dot y_2}{\dot y_1}$
[Figure: the elliptical trajectories of the center.]
Any system with pure imaginary eigenvalues is called a CENTER and has elliptical
trajectories, but finding the actual elliptic orbit may be tricky (until you know about
diagonalization).
If the eigenvalues are complex with nonzero real part then the trajectories spiral
in (if the real part is negative) or out (if the real part is positive).
To prove this in general requires diagonalization, but it is easily seen in the following case.
$\begin{pmatrix} \dot y_1 \\ \dot y_2 \end{pmatrix} = \begin{pmatrix} 2 & -1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}$
which has complex solution $\begin{pmatrix} 1 \\ -i \end{pmatrix}e^{(2+i)t}$ and its complex conjugate.
So solutions spiral out. From a topological point of view there are two possibilities:
For a more exact picture of the flow you can use the slope of the trajectories as a guide.
$\frac{dy_2}{dy_1} = \frac{y_1 + 2y_2}{2y_1 - y_2}$
[Figure: trajectories spiralling out from the origin.]
Any system with complex eigenvalues is called a SPIRAL.
If the real part of the eigenvalues is positive solutions spiral out from the origin and the
critical point at the origin is UNSTABLE.
If the real part of the eigenvalues is negative solutions spiral in to the origin and the
critical point at the origin is STABLE.
The type depends on the eigenvalues and eigenvectors of A . (For more examples go to
the mathsims website and click on the General Linear Model. To see the full picture
with many sample trajectories, click on other initial conditions in the phase plane.)
1 SADDLE One positive eigenvalue and one negative eigenvalue.
[Figures: sample phase portraits for the saddle and the node types.]
5 SPIRAL or FOCUS Two complex eigenvalues, with either negative real part (STABLE) or positive real part (UNSTABLE).
[Figures: stable and unstable spiral phase portraits.]
In all the systems we have looked at so far (linear, constant coefficient, homogeneous 2-d
systems) the origin has been one trajectory all on its own, because if you start at the
origin y = 0 you stay there.
If ẏ = Ay then y = 0 =⇒ ẏ = 0.
In fact any point in Phase Space where ẏ = 0 must be stationary and it is often called a
stationary, equilibrium or critical point.
But Linear systems of the form ẏ = Ay always have one critical point at the origin and
unless det A = 0 this is the only one.
Trajectories starting close to the origin may move away, hang around or tend towards the critical point itself. Think of a nonlinear pendulum. There is a critical point where the pendulum hangs vertically down, which we might call stable because if we disturb it a little the pendulum doesn't move very far. But there is another critical point, where the pendulum is vertically above, which is unstable because any slight displacement results in the pendulum falling away.
Roughly speaking
[Figures: phase portraits illustrating stability; in the stable case nearby solutions tend to the critical point at the origin.]
One way to classify the types of critical points is via their stability properties.
It is the eigenvalues of A that determine the stability properties of the critical point at 0.
$\det(A - \lambda I) = \begin{vmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda \end{vmatrix} = $
But the eigenvalues really only depend on $\mathrm{trace}A = (a_{11} + a_{22})$ and $\det A$.
$\lambda_\pm = $
If $\det A > 0$ and $(\mathrm{trace}A)^2 - 4\det A > 0$ the eigenvalues are real and
Finally if $\det A = 0$
Stability Chart
But if detA < 0 or if (detA ≥ 0 and (traceA) > 0) then the critical point is UNSTABLE.
The Stability Chart is given in the General Linear Model in mathsims. Try dragging one
of the parameters, say a, and watch the phase portrait change as you cross between the
different types.
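A small Python helper that applies this trace-determinant classification to a constant 2 × 2 matrix; the category names follow the stability chart, and the two test matrices are the stable node and unstable spiral examples met earlier.

import numpy as np

def classify(A):
    # classify the critical point at the origin of y' = A y from trace and det
    tr, det = np.trace(A), np.linalg.det(A)
    disc = tr**2 - 4 * det
    if det < 0:
        return "saddle (unstable)"
    if det == 0:
        return "one zero eigenvalue (degenerate)"
    if tr == 0:
        return "center"
    kind = "node" if disc >= 0 else "spiral/focus"
    return ("stable " if tr < 0 else "unstable ") + kind

print(classify(np.array([[-2.0, 0.0], [1.0, -1.0]])))   # the stable node example
print(classify(np.array([[2.0, -1.0], [1.0, 2.0]])))    # the unstable spiral example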
If
$\begin{pmatrix} \dot y_1 \\ \dot y_2 \end{pmatrix} = \begin{pmatrix} 2 & 3 \\ 2 & 5 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}$
$\begin{pmatrix} \dot y_1 \\ \dot y_2 \end{pmatrix} = \begin{pmatrix} -2 & 1 \\ -6 & 2 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}$
are simply a linear translation away from their homogeneous counterpart. The critical point $\mathbf{y}^*$ of the system is given by (linear systems have at most one critical point unless $\det A = 0$):
which we can solve as before. The Linear model of the Economy in mathsims is a nonhomogeneous linear system provided $G \neq 0$. Try setting G = 0 and then increasing it away from zero to see the critical point move away from the origin.
In some cases you can still solve for the phase curves.
Take the nonlinear pendulum $\ddot\theta = -\frac{g}{l}\sin(\theta)$
Letting $y_1 = \theta$ and $y_2 = \dot\theta$ gives
$\begin{pmatrix} \dot y_1 \\ \dot y_2 \end{pmatrix} = \begin{pmatrix} y_2 \\ -\frac{g}{l}\sin(y_1) \end{pmatrix}$
Integrating gives
Solving for y2
The curves $y_2 = \pm\sqrt{2C + \frac{2g}{l}\cos(y_1)}$ are hard to sketch, but easy to draw on a computer (see the sketch after the figure).
Suppose $\frac{g}{l} = \frac{1}{2}$
[Figure: the phase curves of the nonlinear pendulum for $g/l = 1/2$.]
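A minimal Python sketch that draws these phase curves as contours of $\frac{1}{2}y_2^2 - \frac{g}{l}\cos(y_1)$, taking g/l = 1/2 as above:

import numpy as np
import matplotlib.pyplot as plt

g_over_l = 0.5
y1, y2 = np.meshgrid(np.linspace(-7, 7, 400), np.linspace(-2, 2, 400))
E = 0.5 * y2**2 - g_over_l * np.cos(y1)   # constant along each phase curve

plt.contour(y1, y2, E, levels=15)
plt.xlabel("y1"); plt.ylabel("y2")
plt.show()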
Another system for which you can solve for the phase curves is Lotka-Volterra's Predator Prey Population Model.
$\frac{dr(t)}{dt} = ar(t) - br(t)f(t)$
$\frac{df(t)}{dt} = kr(t)f(t) - lf(t)$
[Figure: closed phase curves of the Lotka-Volterra model in the (r, f) plane.]
Once again we can recognize some local types of behavior: for instance there appears to be a critical point that looks like a center in the 'middle', and the origin looks like a critical point which is a saddle.
(This is a special case of more general predator prey systems. For the more general system see Predator-Prey in mathsims. Set c = 0 and h = 0 for Lotka-Volterra's model.)
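For comparison with the figure, a brief Python sketch that integrates the Lotka-Volterra equations numerically; the parameter values a = b = k = l = 1 and the initial condition are made up for illustration.

import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

a, b, k, l = 1.0, 1.0, 1.0, 1.0            # illustrative values only

def lotka_volterra(t, y):
    r, f = y                               # rabbits and foxes
    return [a * r - b * r * f, k * r * f - l * f]

sol = solve_ivp(lotka_volterra, (0.0, 15.0), [2.0, 0.5],
                t_eval=np.linspace(0.0, 15.0, 1000))
plt.plot(sol.y[0], sol.y[1])               # a closed curve in the (r, f) plane
plt.xlabel("r"); plt.ylabel("f")
plt.show()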
This generally results in (non-linear) equations to solve for the critical points. Take the nonlinear pendulum again, with $\frac{g}{l} = 1$, then
$\begin{pmatrix} \dot y_1 \\ \dot y_2 \end{pmatrix} = \begin{pmatrix} y_2 \\ -\sin(y_1) \end{pmatrix},$
A general method for finding the linearized system about any critical point of a nonlinear
system is provided by Taylor series - which determines the nature of the system locally
about any critical point.
Suppose $(y_1^*, y_2^*)$ is a critical point of $\begin{pmatrix} \dot y_1 \\ \dot y_2 \end{pmatrix} = \begin{pmatrix} f_1(y_1, y_2) \\ f_2(y_1, y_2) \end{pmatrix}$, then $f_1(y_1^*, y_2^*) = 0$ and $f_2(y_1^*, y_2^*) = 0$.
The Linearized system is $\dot{\bar{\mathbf{y}}} = \frac{\partial f}{\partial y}\bar{\mathbf{y}}$
where the Linearized matrix is just the Jacobian $\frac{\partial f}{\partial y}$ evaluated at the critical point.
$Df = \frac{\partial f}{\partial y} = \begin{pmatrix} \frac{\partial f_1}{\partial y_1} & \frac{\partial f_1}{\partial y_2} \\ \frac{\partial f_2}{\partial y_1} & \frac{\partial f_2}{\partial y_2} \end{pmatrix}$ evaluated at $(y_1, y_2) = (y_1^*, y_2^*)$
[Figure: phase portrait of the nonlinear pendulum.]
$\frac{df(t)}{dt} = rf - f$
Now the Linearized System provides an approximate system for r and f small. So the
stability properties of the critical point at the origin can be determined either from the
eigenvalues of the linearized matrix
Now consider the critical point (r, f ) = (1, 2). In this case the linearization matrix is
Note that in terms of the new variables $\bar r$ and $\bar f$ such that $\bar r = r - 1$ and $\bar f = f - 2$ the Linearized System is
The linearized matrix in this case gives a CENTER, because $\det A = 2 > 0$ and $\mathrm{trace}A = 0$. In general if $\mathrm{trace}A = 0$ the nonlinear terms may make solutions slowly spiral in or out. But here the nonlinear system also has a center, as we know from the integral curves.
Finally consider the Damped Nonlinear Pendulum with $\frac{g}{l} = 1$ and damping constant c, for which we DON'T have integral curves.
Usually you can't solve for integral curves, but you can solve for the critical points.
To investigate the stability of the critical points, simply work out the Jacobian $\frac{\partial f}{\partial y}$, evaluate it at the critical point; then the Linearized system is $\dot{\bar{\mathbf{y}}} = \frac{\partial f}{\partial y}\bar{\mathbf{y}}$.
$Df = \begin{pmatrix} \frac{\partial f_1}{\partial y_1} & \frac{\partial f_1}{\partial y_2} \\ \frac{\partial f_2}{\partial y_1} & \frac{\partial f_2}{\partial y_2} \end{pmatrix} = $
Df (0, 0) =
Df (π, 0) =
Now $\det Df \neq 0$ and $\mathrm{trace}\,Df \neq 0$ in both cases so the nonlinear system will also have a stable spiral or node at $(2n\pi, 0)$ and a saddle at $((2n-1)\pi, 0)$.
But this only gives information that is local to the critical points.
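One way to check Jacobian calculations like these is symbolically with sympy; a sketch for the damped pendulum, assuming the usual form $\ddot\theta = -\sin\theta - c\dot\theta$ (g/l = 1) and a made-up damping constant c = 1/5.

import sympy as sp

y1, y2 = sp.symbols("y1 y2")
c = sp.Rational(1, 5)                       # illustrative damping constant

f = sp.Matrix([y2, -sp.sin(y1) - c * y2])   # damped pendulum with g/l = 1
J = f.jacobian([y1, y2])                    # the linearized matrix Df

print(J.subs({y1: 0, y2: 0}))               # Df at (0, 0)
print(J.subs({y1: sp.pi, y2: 0}))           # Df at (pi, 0)
print(J.subs({y1: 0, y2: 0}).eigenvals())   # eigenvalues at (0, 0)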
[Figure: phase portrait of the damped nonlinear pendulum.]
However one can sort of imagine how the trajectories might join up, and in fact here you can use energy considerations to prove that the resulting picture is qualitatively correct.
Note:
Most of the time the linearized system gives a local approximation for the flow of the
nonlinear system which is also topologically equivalent to nonlinear flow in some neigh-
borhood of the critical point. But there are two special cases where the linearized systems
may give flows that are not qualitatively similar to the nonlinear flow even locally. These
are when detA = 0 or when traceA = 0 and detA > 0.
In this next example the linearized system gives a center (traceA = 0 and detA > 0 ) but
the full nonlinear system does something else!
$\begin{pmatrix} \dot y_1 \\ \dot y_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} + \begin{pmatrix} y_1(y_1^2 + y_2^2) \\ y_2(y_1^2 + y_2^2) \end{pmatrix}$
The linearized system is a center; actually the phase curves are circles. But taking polar coordinates $y_1 = r\cos(\theta)$, $y_2 = r\sin(\theta)$ and calculating the rate of change of r from $r^2 = y_1^2 + y_2^2$:
So the radial component, instead of remaining constant as predicted by the linear system,
increases. Solutions slowly spiral out!
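For reference, a quick sketch of that calculation:
$2r\dot r = 2y_1\dot y_1 + 2y_2\dot y_2 = 2y_1\left(y_2 + y_1(y_1^2+y_2^2)\right) + 2y_2\left(-y_1 + y_2(y_1^2+y_2^2)\right) = 2(y_1^2+y_2^2)^2 = 2r^4,$
so $\dot r = r^3 > 0$ for $r \neq 0$.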
[Figure: a trajectory slowly spiralling out of the origin.]
Have a look at more general nonlinear terms in Linear Models Versus Nonlinear Models in mathsims. The starting parameters give a center in the linear system, but the nonlinear system is usually a spiral. Try different values of e, f, g and h, the coefficients of the nonlinear terms, to see if you can get oscillatory motion as in a center.
ẋ =, ẏ =
At (0, 0)
To sketch a node you need to know the eigenvalues and vectors. In fact since the matrix
is diagonal the eigenvalues are just the diagonal elements and the eigenvectors are parallel
to the axes.
At (4, 1)
Here $\det A \neq 0$ and $\mathrm{trace}A \neq 0$ so the linearized systems truly reflect the local behavior of the nonlinear system. Also you can prove that there are no limit cycles or other complicated behavior and so you can put the whole picture together.
[Figure: the full phase portrait of the nonlinear system.]
1. If you can solve for the phase curves from $\frac{dy_2}{dy_1} = \frac{f_2(y_1, y_2)}{f_1(y_1, y_2)}$, do so and sketch the phase curves, either as $y_2(y_1)$ or as contours of some function of $(y_1, y_2)$. Then add the direction of flow.
If you can’t solve for the phase curves:
2. Find the Critical Points, investigate their Type and Stability and Sketch them
on the Phase Portrait.
a) Critical Points: solve for $(y_1^*, y_2^*)$ from $f_1(y_1^*, y_2^*) = 0$ AND $f_2(y_1^*, y_2^*) = 0$.
b) Type and Stability
Work out the general Linearized Matrix (Jacobian) $Df = \frac{\partial f}{\partial y} = \begin{pmatrix} \frac{\partial f_1}{\partial y_1} & \frac{\partial f_1}{\partial y_2} \\ \frac{\partial f_2}{\partial y_1} & \frac{\partial f_2}{\partial y_2} \end{pmatrix}$
Evaluate $Df(y_1^*, y_2^*)$ at a critical point; then the Linearized system at that critical point is
$\dot{\bar{\mathbf{y}}} = \frac{\partial f}{\partial y}(y_1^*, y_2^*)\,\bar{\mathbf{y}} = A\bar{\mathbf{y}}$, where A is now some constant matrix.
c) To classify the critical point calculate the detA and traceA and recall the stability
chart.
[Figure: the Stability Chart in the (Trace(A), Determinant(A)) plane. Det(A) < 0: saddles. Det(A) > 0 with Trace(A) < 0: stable nodes or stable foci; with Trace(A) > 0: UNstable nodes or UNstable foci. Centers lie on Trace(A) = 0 with Det(A) > 0; proper or inflected nodes lie on Tr(A)² − 4 Det(A) = 0; one zero eigenvalue on Det(A) = 0.]
d) To Sketch the flow near the Critical points on the phase portrait.
i) First locate the critical point in the (y1 , y2 ) space.
ii) If the critical point is a node or a saddle
Find the eigenvalues and eigenvectors, sketch the straight line trajectories and finally add
some of the other trajectories and the direction of flow.
iii) If the critical point is a spiral find if it is stable or unstable, then find out the direction of the flow from one of the equations (say $\dot y_1 = f_1(y_1, y_2)$).
ANY system with two real distinct eigenvalues is a linear transformation away from
$\begin{pmatrix} \dot z_1 \\ \dot z_2 \end{pmatrix} = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}\begin{pmatrix} z_1 \\ z_2 \end{pmatrix}$
Since the eigenvectors of a system such as this, in Normal Form, are parallel to the axes,
the axes are trajectories.
Also the phase curves are easy to solve for because
$\frac{dz_2}{dz_1} = $
[Figure: phase curves in Normal Form for the cases $|\lambda_2| > |\lambda_1| > 0$, $|\lambda_1| > |\lambda_2| > 0$, and $\lambda_1 > 0 > \lambda_2$ (or $\lambda_2 > 0 > \lambda_1$).]
The system in the $y_i$ variables is a linear transformation, $\mathbf{y} = X\mathbf{z}$, away from Normal form.
Now a linear transformation can shear, rotate and enlarge or reduce, so in a general system with two real distinct eigenvalues the phase curves will be sheared, rotated and/or enlarged or reduced versions of these curves.
[Figure: left, System 1 in Normal Form in the $(z_1, z_2)$ plane; right, transforming back to the $(y_1, y_2)$ plane.]
Complex Eigenvalues
Similar results hold when the eigenvalues are complex. You can show that ANY system with complex eigenvalues is a linear transformation away from one whose phase curves are ellipses or logarithmic spirals. But in a general system with complex eigenvalues the logarithmic spiral may be sheared, rotated and enlarged or reduced.
[Figure: a logarithmic spiral in Normal Form and the spiral after transforming back.]
2 Laplace Transforms
The Laplace Transform is an integral transform, so before we can solve any equations we
will have to build up some knowledge of Laplace Transforms. For instance, what is the
Laplace Transform of a constant function or an exponential or a power?
2.1.1 Definition
Given a function f (t) for t ≥ 0 define the Laplace Transform of f (t) as F (s) where
Note that the only place where s appears is in the exponential. The integral is a definite
integral from 0 to ∞.
If $f(t) = t$
If $f(t) = t^n$
If $f(t) = e^{at}$
We don't need a to be real here. Suppose a is complex; then the transform goes through for $\mathrm{Re}(s) > \mathrm{Re}(a)$.
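These transforms can be checked symbolically; a small Python/sympy sketch:

import sympy as sp

t, s = sp.symbols("t s", positive=True)
a = sp.symbols("a")
n = sp.symbols("n", positive=True, integer=True)

print(sp.laplace_transform(t, t, s, noconds=True))              # should give 1/s**2
print(sp.laplace_transform(t**n, t, s, noconds=True))           # should give gamma(n+1)/s**(n+1)
print(sp.laplace_transform(sp.exp(a * t), t, s, noconds=True))  # should give 1/(s - a)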
LINEARITY
so that $F(s) = $
Take the following examples.
Going backwards:
If $F(s) = \frac{5}{s^2} - \frac{4}{s+3} \implies$
If $f(t) = e^{i\alpha t}$
For example
$L(\cos(3t)) = $
$L(\sin(2t)) = $
$L(\cos^2 t) = $
$u(t) = \begin{cases} 0 & t < 0 \\ 1 & 0 \le t < \infty \end{cases}$
$u(t - k) = $
$L(u(t - k)) = $
Or up and down:
$f(t) = \begin{cases} 0 & 0 \le t < a \\ 1 & a \le t \le b \\ 0 & t \ge b \end{cases}$
A slightly more complicated example, which can still be written in terms of step functions:
$f(t) = \begin{cases} 1 & 0 \le t < 3 \\ 2 & 3 \le t < 4 \\ 0 & t \ge 4 \end{cases}$
If the Laplace transform of f(t) is known, then you can work out $L(e^{at}f(t))$ without integrating.
$L(e^{at}f(t)) = \int_0^\infty e^{-st}e^{at}f(t)\,dt = $
So since $L(t^n) = \frac{n!}{s^{n+1}} \implies$
$L(t^ne^{at}) = $
for n = 0, 1, 2, ... and $(s - a) > 0$.
Also
$L(\cos(\alpha t)e^{at}) = $
And
$L(\sin(\alpha t)e^{at}) = $
You can also use the Shifting Theorem on Piecewise Continuous functions.
$L(u(t-k)) = \frac{e^{-ks}}{s} \implies L(e^{at}u(t-k)) = $
f(t)    F(s)
$K$    $\frac{K}{s}$
$t^n$    $\frac{n!}{s^{n+1}}$
$e^{at}$    $\frac{1}{s-a}$
LINEARITY: $a\,g(t) + b\,h(t)$    $a\,G(s) + b\,H(s)$
$\cos(\alpha t)$    $\frac{s}{s^2+\alpha^2}$
$\sin(\alpha t)$    $\frac{\alpha}{s^2+\alpha^2}$
$u(t-k) = \begin{cases} 0 & 0 \le t < k \\ 1 & k \le t < \infty \end{cases}$    $\frac{e^{-ks}}{s}$
So for instance if
So for example
if $F(s) = \frac{5}{(s+10)^2}$
So for example
if $F(s) = \frac{5s-2}{s^2+4}$
If $F(s) = \frac{3}{(s-4)^3} + \frac{6}{s^2+9}$
$\frac{A}{(s-b)} + \frac{B}{(s-c)} = $
So a function of the form $\frac{P_1(s)}{(s-b)(s-c)}$, where $P_1(s)$ is a polynomial of degree 1, can be expressed as
Since $\frac{P_1(s)}{(s-b)(s-c)} = $ for some constants A and B.
So that
$L^{-1}\left(\frac{P_1(s)}{(s-b)(s-c)}\right) = $
For example
if $F(s) = \frac{6}{s(s+2)}$
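Partial-fraction work like this can be checked with sympy; a short sketch using the example above:

import sympy as sp

s, t = sp.symbols("s t", positive=True)
F = 6 / (s * (s + 2))

print(sp.apart(F, s))                           # the partial fraction form of F(s)
print(sp.inverse_laplace_transform(F, s, t))    # the corresponding f(t)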
by Leonhard Euler as
$\Gamma(p) = \int_0^\infty e^{-x}x^{p-1}\,dx$ for $p \ge 1$.
[Figure: plot of $\Gamma(p)$ for $0 \le p \le 5$.]
In fact for any p you can use integration by parts to show that
$\Gamma(p+1) = \int_0^\infty e^{-x}x^p\,dx = \left[-e^{-x}x^p\right]_0^\infty + p\int_0^\infty e^{-x}x^{p-1}\,dx = 0 + p\Gamma(p)$
So for an integer
$\Gamma(n+1) = n\Gamma(n) = n(n-1)\Gamma(n-1)\cdots = n!$
Note however that $\Gamma(p)$ is only defined for $p \ge 1$, otherwise $x^{p-1}$ in the definition is undefined at x = 0.
Basically $e^{-st}f(t)$ must be defined for all $t \ge 0$ and f(t) cannot grow faster than an exponential in t. So for instance $L(e^{t^2})$ is UNdefined.
In general we need
$|f(t)| \le Me^{\gamma t}$ for all t > 0
for some M > 0
for some $\gamma > 0$
and f(t) should be at least piecewise continuous for $t \ge 0$.
Then F(s) is defined for $s > \gamma$.
So $L(\dot f(t)) = $
Note f (0) and not F (0) appears on the right hand side.
To solve a linear ODE take the Laplace Transform of the whole equation.
Note y(0) plays the role of a constant of integration, but its specific form makes solving
initial value problems very easy.
L(ÿ(t)) =
Often a fixed circuit or a system of equations can be forced in different ways. This means
we want to consider the effect of different forcing functions, say
where the function r(t), called the forcing or input function, may vary. Laplace Transforms give us a general method for finding the response for a given forcing function.
Let Y (s) = L(y(t)) and take Laplace Transforms of the whole equation.
$\Rightarrow Y(s) = $
The function $Q(s) = \frac{1}{s^2 - 4s + 4}$ is called the Transfer function and is determined by the left hand side of the equation.
In fact for a general second order constant coefficient ODE, $a\ddot y(t) + b\dot y(t) + cy(t) = r(t)$, with y(0) = 0 and $\dot y(0) = 0$:
Y (s) = Q(s)R(s) =
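A brief Python/sympy sketch of this transfer-function approach for a made-up example, $\ddot y + 3\dot y + 2y = u(t)$ with zero initial conditions, so that $Q(s) = 1/(s^2 + 3s + 2)$ and $R(s) = 1/s$:

import sympy as sp

s, t = sp.symbols("s t", positive=True)

Q = 1 / (s**2 + 3 * s + 2)   # transfer function from the left hand side
R = 1 / s                    # Laplace transform of the unit step forcing
Y = Q * R                    # Y(s) = Q(s) R(s) since y(0) = ydot(0) = 0

y = sp.inverse_laplace_transform(Y, s, t)
print(sp.simplify(y))        # the response y(t)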
The method for solving systems is essentially the same, but now there are n equations to take the Laplace Transform of, n functions $Y_i(s)$ to solve for and n Inverse Laplace transforms to find $y_i(t)$, for i = 1, 2, ..., n.
$\begin{pmatrix} \dot y_1 \\ \dot y_2 \end{pmatrix} = \begin{pmatrix} 3 & 1 \\ 0 & 3 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} + \begin{pmatrix} e^{-2t} \\ e^{-2t} \end{pmatrix}$
In component form this is
Let Yi (s) = L(yi ) and take the Laplace Transform of each equation.
$Y_1(s) = \frac{y_1(0) + Y_2(s)}{s-3} + \frac{1}{(s+2)(s-3)}$
$Y_2(s) = \frac{y_2(0)}{s-3} + \frac{1}{(s-3)(s+2)}$
Now to take the inverse transform we need to use partial fractions.
Partial fractions applies to any function F(s) which is a rational function of s, that is $F(s) = \frac{N(s)}{D(s)}$ where N(s) and D(s) are polynomials and $\deg(N(s)) < \deg(D(s))$.
So say $F(s) = \frac{1}{(s-2)(s-3)}$; you can find $A_1$ and $A_2$ such that
$F(s) = \frac{1}{(s-2)(s-3)} = $
If $F(s) = \frac{1}{(s-2)^2(s-3)}$ you can find $A_1$, $B_1$ and $A_2$ such that
$F(s) = \frac{1}{(s-2)^2(s-3)} = $
$L^{-1}(F(s)) = $
$F(s) = \frac{1}{(s^2+4)(s-3)} = $
$F(s) = \frac{N(s)}{(s-a_1)(s-a_2)(s-a_3)(s-a_4)\cdots(s-a_n)}$
$F(s) = $
$L^{-1}(F(s)) = $
Where only simple factors are involved start by factorizing the denominator. For say
$F(s) = \frac{-7s-1}{(s-1)(s-2)(s+3)}$
Then
$F(s) = \frac{-7s-1}{(s-1)(s-2)(s+3)} = $
The constants $A_i$ can be found by equating coefficients of the powers of s or taking values for s, say s =
If s = 1
Where there are repeated factors, but no irreducible quadratic factors (factors with complex roots), there is an extra term for each repeat. The basic rule is
$F(s) = \frac{N(s)}{(s-a)^m} = \frac{A_1}{(s-a)} + \frac{A_2}{(s-a)^2} + \frac{A_3}{(s-a)^3} + \cdots + \frac{A_m}{(s-a)^m}$ for constants $A_i$
More generally, say $F(s) = \frac{N(s)}{(s-a_1)^3(s-a_2)(s-a_3)^2(s-a_4)\cdots(s-a_n)}$
Then we can find $A_i$, $B_i$ and $C_i$ such that
$F(s) = $
$L^{-1}(F(s)) = $
$F(s) = \frac{15s+12}{(s+2)^3(s-1)} = $
If there are factors that are irreducible (that have complex roots) then the numerator N(s) for that factor is a polynomial of degree 1; here the basic rule is
$\frac{N(s)}{(s-a)^2 + \alpha^2} = \frac{A(s-a)}{(s-a)^2 + \alpha^2} + \frac{B}{(s-a)^2 + \alpha^2}$
for constants A, B. The inverse Laplace transform can then be obtained using:
$L^{-1}\left(\frac{s-a}{(s-a)^2+\alpha^2}\right) = e^{at}\cos(\alpha t)$ and $L^{-1}\left(\frac{\alpha}{(s-a)^2+\alpha^2}\right) = e^{at}\sin(\alpha t)$
Alternatively you can use complex numbers and factorize the denominator:
$\frac{N(s)}{(s-a)^2+\alpha^2} = \frac{N(s)}{(s-a+i\alpha)(s-a-i\alpha)} = \frac{A}{(s-a+i\alpha)} + \frac{B}{(s-a-i\alpha)}$
More generally, say $F(s) = \frac{N(s)}{((s-a_1)^2 + b_1^2)(s-a_2)(s-a_3)(s-a_4)\cdots(s-a_n)}$
$F(s) = $
$L^{-1}(F(s)) = $
If the quadratic factor is irreducible it can always be written in the form ((s − a)2 + α2 )
by completing the square.
$s^2 - 4s + 20 = (s-2)^2 - 4 + 20 = $
Then we need to find out how to take the inverse Laplace Transform of functions like
$\frac{1}{((s-a)^2 + \alpha^2)^m}$
Trouble. We don't even know the inverse transform of $\frac{1}{(s^2+\alpha^2)^2}$!
You can always use complex numbers and that method works.
But a more interesting method involves the differential of the transformed function.
Suppose you know the transform of f(t), that is $F(s) = L(f(t)) = \int_0^\infty e^{-st}f(t)\,dt$.
$\frac{dF(s)}{ds} = $
So $L(-tf(t)) = \frac{dF(s)}{ds}$.
So for instance if you know that $L(t^3) = \frac{6}{s^4}$ the theorem implies that
$-\frac{d}{ds}\left(\frac{6}{s^4}\right) = $
which is true!
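A quick sympy check of the same theorem:

import sympy as sp

t, s = sp.symbols("t s", positive=True)

F = sp.laplace_transform(t**3, t, s, noconds=True)         # 6/s**4
lhs = sp.laplace_transform(-t * t**3, t, s, noconds=True)  # L(-t f(t))
print(sp.simplify(lhs - sp.diff(F, s)))                    # should print 0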
Suppose you know that $L(\cos(t)) = \frac{s}{s^2+1}$
$L(-t\cos(t)) = $
We can use this to find the inverse transform of repeated irreducible quadratic factors.
L(t sin(αt)) =
L(t cos(αt)) =
I won't go into the details, but you can continue using this idea to find the inverse transform of $\frac{N(s)}{((s-a)^2+\alpha^2)^m}$ for m = 3, 4, ...
Example
$F(s) = \frac{s^2-7}{(s^2+4)^2} = $
Consider the function f(t), taken zero for t negative and then shifted over to k.
[Figure: f(t), and the shifted function f(t−k)u(t−k).]
$f(t-k)u(t-k) = \begin{cases} 0 & 0 \le t < k \\ f(t-k) & k \le t < \infty \end{cases}$
So that
$L^{-1}\left(\frac{e^{-ks}}{s^2}\right) = $
[Figure: plot of $(t-2)u(t-2)$.]
Or consider
Or consider $L^{-1}\left(\frac{(1-e^{-s})^2}{s^2}\right)$
Going the other way is harder because the function needs to be expressed as a sum of
functions in the form f (t − k)u(t − k).
$h(t) = \begin{cases} 2 & t < 1 \\ 0 & 1 \le t \end{cases} = $
$h(t) = \begin{cases} 0 & t < 2 \\ t & 2 \le t \end{cases} = $
$h(t) = \begin{cases} t^2 & t < 1 \\ 1 & 1 \le t \end{cases}$
[Figure: plot of h(t).]
Then h(t) =
Or take
$g(t) = \begin{cases} 2 & 0 \le t < \pi \\ 0 & \pi \le t < 3\pi \\ \sin t & 3\pi \le t \end{cases}$
As before, for an RLC circuit
$L\frac{d^2Q(t)}{dt^2} + R\frac{dQ(t)}{dt} + \frac{Q(t)}{C} = E(t)$
where Q(t) is the charge at time t.
[Figure: the series RLC circuit with applied voltage E(t).]
If R = 0 the unforced system (E(t) = 0) has purely oscillatory solutions (centers) and then it is usual to set $\frac{1}{LC}$ equal to $\omega^2$. In which case the unforced system undergoes oscillations with period $\frac{2\pi}{\omega}$.
So dividing by L and setting $\frac{1}{LC} = \omega^2$ and letting y(t) = Q(t) gives
$\frac{d^2y(t)}{dt^2} + \omega^2 y(t) = \frac{E(t)}{L} = \bar E(t)$
Now you can see that the unforced system $\bar E(t) = 0$ has purely oscillatory solutions
$Y(s) = \frac{sy(0) + \dot y(0)}{s^2 + \omega^2} \implies$
Now consider the forcing, so set the initial current and charge zero; Q(0) = y(0) = 0
and Q̇(0) = ẏ(0) = 0.
Suppose the voltage is switched on for a short time and then switched off. We could assume that it is switched on at t = 0 and off at say t = k:
$\bar E(t) = \begin{cases} \bar E_0 & 0 \le t < k \\ 0 & k \le t \end{cases}$
[Figure: the pulse $E_0(1 - u(t-k))$.]
To use Laplace transforms write Ē(t) in terms of the unit step function.
$Q(t) = y(t) = \frac{\bar E_0}{\omega^2}\begin{cases} 1 - \cos(\omega t) & 0 \le t < k \\ \cos(\omega(t-k)) - \cos(\omega t) & k \le t \end{cases}$
So oscillations in charge result and are usually sustained. Take the case where $k = \frac{\pi}{\omega}$
But if the applied EMF is turned off after $t = \frac{2\pi}{\omega}$
$\frac{u(t-a) - u(t-(a+k))}{k} = \begin{cases} 0 & 0 \le t < a \\ \frac{1}{k} & a \le t < a+k \\ 0 & a+k \le t \end{cases}$
[Figure: a rectangular pulse of height 1/k on [a, a+k]; Area = k × 1/k = 1.]
Then the integral of this function over any interval containing [a, a+k] is 1! But as $k \to 0$ the function's height tends to $\infty$.
The Dirac delta function is the limit of this function as k → 0.
$\delta(t-a) = \lim_{k\to 0}\frac{u(t-a) - u(t-(a+k))}{k} = \begin{cases} \infty & t = a \\ 0 & \text{otherwise} \end{cases}$
A bit odd! Nevertheless, by construction, the Integral of the Dirac Delta function is still 1,
$\int_0^\infty \delta(t-a)\,dt = 1$, and the Laplace Transform of the Dirac Delta function is actually finite.
Taking the limit as $k \to 0$, which is not so easy (looks like $\frac{0}{0}$) until you recall L'Hopital's Rule, which gives $\lim_{k\to 0}\frac{1 - e^{-ks}}{ks} = \lim_{k\to 0}\frac{se^{-ks}}{s} = 1$. So that $L(\delta(t-a)) = e^{-sa}$.
So for instance the Laplace Transform of the charge for an LC circuit with Q(0) = y(0) = 0 and $\dot Q(0) = \dot y(0) = 0$ and
$\bar E(t) = \bar E_0\,\delta(t-a)$ is $Y(s) = \frac{\bar E_0 e^{-as}}{s^2 + \omega^2}$.
$Q(t) = y(t) = \frac{\bar E_0}{\omega}\sin(\omega(t-a))u(t-a)$
So even if y(0) = 0 and $\dot y(0) = 0$
$Q(t) = y(t) = \frac{E_0}{\omega}\sin(\omega(t-a))u(t-a) \;\Rightarrow\; Q(t) = y(t) = \begin{cases} 0 & 0 \le t < a \\ \frac{E_0}{\omega}\sin(\omega(t-a)) & a \le t \end{cases}$
[Figure: Q(t), zero until t = a and oscillating thereafter.]
So in our simple example $L^{-1}(F(s)G(s)) = \int_0^t 1\times 1\,d\tau = [\tau]_0^t = t$, which is correct!
So $e^{-s\tau}G(s) = $
Now using the fact that $F(s) = \int_0^\infty e^{-s\tau}f(\tau)\,d\tau$
F (s)G(s) =
The trick at this point is to change the order of integration. That is integrate with respect
to τ first and then with respect to t.
[Figure: the region of integration in the (t, τ) plane; integrating with respect to t from τ to ∞ becomes integrating with respect to τ from 0 to t.]
This also changes the limits, because the area over which the integration is taking place is not square.
F (s)G(s) =
Note the order of f and g is not important, in fact if you change variables inside the
integration from τ to T = t − τ , then dτ = −dT and
$\int_0^t f(\tau)g(t-\tau)\,d\tau = $
Also sometimes the Convolution is represented by a *, so that the fact that it satisfies
commutativity can be written as f ∗ g = g ∗ f.
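A quick Python/sympy check of the Convolution Theorem using the simple example F(s) = G(s) = 1/s mentioned above:

import sympy as sp

t, tau, s = sp.symbols("t tau s", positive=True)

f = sp.Integer(1)                          # f(t) = 1, so F(s) = 1/s
g = sp.Integer(1)                          # g(t) = 1, so G(s) = 1/s
conv = sp.integrate(f * g, (tau, 0, t))    # the convolution (f*g)(t) = t

print(sp.laplace_transform(conv, t, s, noconds=True))   # should give 1/s**2 = F(s)G(s)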
Let F(s) =
Let G(s) =
So that $L^{-1}\left(\frac{2s}{(s^2+1)^2}\right) = $
$L^{-1}\left(\frac{2s}{(s^2+1)^2}\right) = $
Another example: $L^{-1}\left(\frac{1}{(s-2)^2(s+1)}\right)$
Let $F(s) = \frac{1}{(s-2)^2}$
Let $G(s) = \frac{1}{(s+1)}$
By convolution $L^{-1}\left(\frac{1}{(s-2)^2(s+1)}\right) = $
$L^{-1}\left(\frac{1}{(s-2)^2(s+1)}\right) = $
$L^{-1}\left(\frac{e^{-3s}}{s^2}\right) = $
$L^{-1}\left(\frac{e^{-3s}}{(s-2)^2(s+1)}\right) = $
For a second order linear constant coefficient ODE, with time dependent forcing
the Laplace Transform, with L(y(t)) = Y (s) and L(r(t)) = R(s), has a particularly simple
form:
Now by Convolution
[Figure: a mass hanging from a spring (constant k) attached to an oscillating support.]
Assuming the initial position and velocity of the mass are zero, the initial value problem looks something like the following.
x(t) =
[Figure: plot of x(t) against t, with the line −t/6 shown.]
Suppose that f (t) is periodic with period p, then f (t + p) = f (t) for all t.
Now the Laplace Transform of f(t)
$L(f(t)) = \int_0^\infty e^{-st}f(t)\,dt = $
But
$\int_p^{2p} e^{-st}f(t)\,dt = $
Similarly
$\int_{np}^{(n+1)p} e^{-st}f(t)\,dt = $
Finally
$L(f(t)) = $
So finally, since $L(f(t)) = \left(1 + e^{-sp} + e^{-2sp} + \cdots\right)\int_0^p e^{-sx}f(x)\,dx$
$L(f(t)) = $
[Figure: a square wave with period 2a.]
F (s) =
$F(s) = L(f(t)) = \int_0^\infty e^{-st}f(t)\,dt$
Powers
$L(t^n) = \frac{n!}{s^{n+1}} \qquad L(t^a) = \frac{\Gamma(a+1)}{s^{a+1}}$
Sine and Cosine
$L(\cos(\alpha t)) = \frac{s}{s^2+\alpha^2}$ and $L(\sin(\alpha t)) = \frac{\alpha}{s^2+\alpha^2}$
Dirac Delta function
$L(\delta(t-k)) = e^{-ks}$
Linearity
$L(a\,g(t) + b\,h(t)) = a\,G(s) + b\,H(s)$
First Shifting Theorem
$L(e^{at}f(t)) = F(s-a)$
Convolution Theorem
$L\left(\int_0^t f(\tau)g(t-\tau)\,d\tau\right) = F(s)G(s)$
Differentials
$L(f^{(n)}(t)) = s^nF(s) - s^{n-1}f(0) - s^{n-2}\dot f(0) - \cdots - sf^{(n-2)}(0) - f^{(n-1)}(0)$
$L(tf(t)) = -\frac{dF(s)}{ds}$
Periodic Functions $f(t+p) = f(t)$
$L(f(t)) = \frac{1}{1-e^{-sp}}\int_0^p e^{-st}f(t)\,dt$