MA133 Notes
Differential Equations
Lecture Notes
Dave Wood
originally typeset by David McCormick
Autumn 2017
Introduction
How do you reconstruct a curve given its slope at every point? Can you predict the trajectory
of a tennis ball? The basic theory of ordinary differential equations (ODEs) as covered in this
module is the cornerstone of all applied mathematics. Indeed, modern applied mathematics
essentially began when Newton developed the calculus in order to solve (and to state precisely)
the differential equations that followed from his laws of motion.
However, this theory is not only of interest to the applied mathematician: indeed, it is
an integral part of any rigorous mathematical training, and is developed here in a systematic
way. Just as a ‘pure’ subject like group theory can be part of the daily armory of the ‘applied’
mathematician, so ideas from the theory of ODEs prove invaluable in various branches of pure
mathematics, such as geometry and topology.
In this module, we will cover only relatively simple examples: first order equations (dy/dx = f(x, y)),
linear second order equations (ẍ + p(t)ẋ + q(t)x = g(t)) and coupled first order linear
systems with constant coefficients, for most of which we can find an explicit solution. However,
even when we can write the solution down it is important to understand what the solution
means, i.e. its ‘qualitative’ properties. This approach is invaluable for equations for which we
cannot find an explicit solution.
We also show how the techniques we learned for second order differential equations have
natural analogues that can be used to solve difference equations. The course looks at solutions
to differential equations for one- and two-dimensional systems, following the increase in
complexity as the lectures progress. At the end of the module, in preparation for more advanced
modules in this subject, we will discuss why in three dimensions we see new phenomena, and
have a first glimpse of chaotic solutions.
The primary text will be J. C. Robinson, An Introduction to Ordinary Differential Equations,
Cambridge University Press 2003, available from the bookshop or the library. This is invaluable
for reference and for the large numbers of examples and exercises to be found within.
CHAPTER 0
Definitions
Variables
Variables measure things. We have independent variables and dependent variables. Dependent
variables are unknown quantities that depend on the independent variable(s); for example, the
position x(t) of a particle depends on the time t. Dependent variables can (usually) be
differentiated with respect to the independent variable(s).
Notation
When it’s obvious what we are differentiating with respect to, we use
    dy/dx = y′.
When differentiating with respect to time, we use the following notation:
    dx/dt = ẋ,   d²x/dt² = ẍ,   etc.
We also use the following:
    y⁽ᵏ⁾(x) = dᵏy/dxᵏ
Definition 0.1. A differential equation is an equation relating variables and their derivatives.
For example:
    ẍ + aẋ + x(x² − 1) = b cos ωt        (1)
is a differential equation known as the Forced Duffing’s Equation.
If the equation has only one independent variable, it is called an ordinary differential equation
or ODE. If we have a function of several variables, for example v(x, y, t), then we can differentiate
with respect to x getting ∂v/∂x, or y getting ∂v/∂y, or t getting ∂v/∂t; in this case the equation is known
as a partial differential equation. For now we are interested in ODEs, and for the first half of
this module, we will only have one dependent variable – “one-dimensional problems”.
Order
Definition 0.2. Assume that we have an ODE that can be written F(t, y, y′, . . . , y⁽ⁿ⁾) = 0, where
y⁽ⁿ⁾ is the highest derivative that appears. Then the order of the ODE is n.
Thus the order of Forced Duffing's Equation (equation (1)) is order 2 due to the ẍ term. As
a further example, the equation y′′′ + 2eˣy′′ + yy′ = x⁴ is order 3 due to the y′′′ term.
Definition 0.3. If the independent variable does not appear explicitly in the ODE, e.g.
    F(y, ẏ) = 0,
then the ODE is called autonomous.
Definition 0.4. A solution of an ODE is a function that, when substituted in, satisfies the equation
on some interval α ≤ t ≤ β or t ∈ ℝ.
Such a solution may not exist, but there are conditions that guarantee existence and unique-
ness of solutions. We will discuss this later, but for now assume that solutions to the examples
in this module exist and are unique.
Linearity
Definition 0.5. An nth order ODE that can be written in the form
    aₙ(t) dⁿy/dtⁿ + · · · + a₁(t) dy/dt + a₀(t)y = f(t)        (2)
is called linear.
e.g.
    d²y/dt² + t dy/dt + e⁻ᵗy = cos kt
Forced Duffing’s Equation (equation (1)) is nonlinear because of the x(x2 − 1) term. Fur-
thermore, if, in equation (2), f (t) = 0, then it is called a linear homogeneous ODE.
Examples
Example 0.6. In Forced Duffing's Equation, the dependent variable is x and the independent
variable is t. As another example, consider
    ax² d²y/dx² + bx dy/dx + cy = 0
This is a second-order, linear, homogeneous ODE, with dependent variable y and independent
variable x.
CHAPTER 1
First-Order ODEs
Definition 1.1. A function F that satisfies F ′ (t) = f (t) is called an anti-derivative of f (t).
Theorem 1.2 (Fundamental Theorem of Calculus). Let G(x) = ∫ f(t) dt taken from a to x. Then
    dG/dx (x) = f(x)
Furthermore,
    ∫_a^b f(x) dx = F(b) − F(a)
for any F with F′(x) = f(x) (i.e. F is an anti-derivative of f).
What theorem 1.2 basically says is that there is a link between differentiation and integration.
So if
    dy/dx = f(x)        (1.2)
the solution will satisfy
    “y(x) = ∫ f(x) dx”.
Choose a particular anti-derivative F(x) of f(x); then y(x) = F(x) is a solution, but so is
    y(x) = F(x) + c        (1.3)
for any constant c. So equation (1.3) is the general solution of equation (1.2). We can specify a
particular solution by saying what y(x) is at a particular value of x (this sort of problem is called
an initial value problem).
Example 1.3. Let x(t) be position along the M6, measured from Carlisle. Assume a car is
driving south along the M6 at constant speed, then
    dx/dt = a
taking the positive direction as south. The solution to this ODE is simply
x = at + c
where at gives how far the car has travelled, and c represents where the car started.
Example 1.4. Newton's second law says that an applied force F produces a change in momentum p:
over a short time ∆t,
    ∆p = F∆t.
From this we obtain a differential equation by dividing both sides by ∆t and letting ∆t tend
to zero (ANALYSIS II):
    lim_(∆t→0) ∆p/∆t = dp/dt = F(t)
Since momentum = mass × velocity, if mass is constant then
    dp/dt = d/dt(mv) = m dv/dt = F(t)
or “force = mass × acceleration”.
Example 1.5. A car of mass m is travelling at a speed v0 when it has to brake. The brakes
apply a constant force k until the car stops. How long does it take to stop, and how far does it
travel before stopping?
By Newton's second law, m dv/dt = −k, i.e.
    dv/dt = −k/m.
Note that the force is negative since it acts in the opposite direction to the direction of travel.
We integrate this to give
    v(t) = −kt/m + c
When t = 0, we know that v(0) = v0 , so c = v0 , giving
    v(t) = −kt/m + v0
We want to find the time at which the car stops; call it t′. We know v(t′) = 0, so −kt′/m + v0 = 0, or
    t′ = mv0/k
Now velocity is the rate of change of distance, i.e.
    dx/dt = −kt/m + v0
Integrate (by theorem 1.2) to give
    x(t) = −kt²/2m + v0·t + A
When t = 0, we want x(0) = 0, so x(0) = A, so A = 0. So we have
    x(t) = −kt²/2m + v0·t.
To find how far the car travels before coming to a stop, we substitute t = t′ = mv0/k
into our displacement equation, so
    x(t′) = −(k/2m)(mv0/k)² + v0(mv0/k)
          = −mv0²/2k + mv0²/k
i.e. x(t′) = mv0²/2k
Note that we may not always be able to solve these equations, e.g.
    dy/dx = e^(−x²).        (1.4)
Here e^(−x²) has no anti-derivative that can be written in terms of elementary functions. However,
it will be shown in GEOMETRY AND MOTION that
    ∫_0^∞ e^(−x²) dx = √π/2.
(This integral also has many applications in PROBABILITY and the normal distribution.)
This means we can write the solution as
    y(x) = y(0) + ∫_0^x e^(−t²) dt
and while we cannot solve this we do know what happens as x → ∞. We will discuss qualitative
approaches to ODEs later in the module.
Equations with no solutions (non-existence)
    x² + t² dx/dt = 0 :  x(0) = c :  c ≠ 0
When t = 0, the equation must satisfy x² = 0, so x(0) = 0. But this is impossible given
x(0) ≠ 0.
Equations with more than one solution (non-uniqueness)
Consider
    dx/dt = x^(1/2) :  x(0) = 0        (1.6)
Then x(t) ≡ 0 is a solution, but so are the curves
    x(t) = (t − c)²/4  for t ≥ c  (with x(t) = 0 for t < c)
for any c ≥ 0.
[Figure: the parabolas x = (t − c)²/4 leaving the t-axis at t = c.]
Partial Derivatives
For a function of two or more variables f (x, y), it doesn’t make sense to talk about its derivative,
as you don’t know whether to use x or y. So we call them partial derivatives – we specify which
variable to use in the differentiation, and treat the other(s) as a constant, e.g.
    f(x, y) = x² sin y
    ∂f/∂x = 2x sin y   (treating y as a constant)
    ∂f/∂y = x² cos y   (treating x as a constant)
Note the use of the ∂ symbol instead of a d – this signifies a partial derivative.
Continuous Functions
For the purposes of this course, for “a function f (x) is continuous” read “you can draw a graph
of y = f (x) against x without taking your pen off the paper”. So, for example, x2 , e−x and sin x
are continuous, but tan x is not, though it is continuous on the interval (− π2 , π2 ). For reference,
the formal definition of continuous which you will learn in ANALYSIS II is as follows:
Definition 1.6. f(x) is continuous at a point y if ∀ ε > 0, ∃ δ > 0 such that |f(x) − f(y)| < ε
whenever |x − y| < δ.
The proof of the general existence and uniqueness theorem is beyond the scope of this course,
and we will just state the result. But in order to state the theorem properly, we need to have
a more precise definition of a ‘solution’ to an initial value problem:
Definition 1.7 (Solution). Given an open interval, I, that contains t0 , a solution of the initial
value problem (IVP)
    dx/dt (t) = f(x, t) :  x(t0) = x0        (1.7)
on I is a continuous function x(t) with x(t0 ) = x0 and
ẋ(t) = f (x, t) ∀t ∈ I
Essentially this means that a solution does not have to be defined for every value of t ∈ R,
only some interval thereof. (Intervals will be formally defined in ANALYSIS .) But, given this
formal definition of a solution, we can state the existence and uniqueness theorem.
Theorem 1.8 (Theorem of Existence and Uniqueness). If f(x, t) and ∂f/∂x (x, t) are continuous
for a < x < b and for c < t < d, then for any x0 ∈ (a, b) and t0 ∈ (c, d), the initial value
problem (1.7) has a unique solution on some open interval I containing t0 .
This theorem will be proved in ANALYSIS III .
Coming back to our example in equation (1.6), where f(x) = x^(1/2), this fails the test of theorem 1.8 because
    ∂f/∂x = ½x^(−1/2)
This isn't even defined at x = 0, so ∂f/∂x isn't continuous there.
Consider the linear equation
    dx/dt + px = 0        (1.8)
with p constant, whose general solution is x(t) = Ae^(−pt). Setting x(0) = x0 shows that we must have A = x0 and thus
    x(t) = x0·e^(−pt)        (1.9)
So by uniqueness (theorem 1.8) this is the only solution.
Example 1.9. Equations of the form of equation 1.8 arise frequently when looking at half-life
decay of radioactive isotopes. In particular 14 C exists in all living matter. When a plant or
animal dies the amount of radioactive carbon decreases due to decomposition into nitrogen
(14 N). The rate of decomposition is proportional to the amount of 14 C:
    dx/dt = −kx
where x(t) is the amount of 14 C (number of atoms).
For ¹⁴C, k ≈ 1.216 × 10⁻⁴ year⁻¹. If N(t) is the number of ¹⁴C atoms at time t then
N (t) = N0 e−kt
The half-life is the time taken for half of the atoms to have decomposed, i.e. t_(1/2) such that
    N0·e^(−k·t_(1/2)) = N0/2
    −k·t_(1/2) = −log 2
    t_(1/2) = log 2 / k
    t_(1/2) ≈ 5700 years
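A quick numerical check of this, as a minimal Python sketch (nothing here beyond the decay law and the value of k quoted above):

    import math

    # Half-life from N(t) = N0*exp(-k*t):
    # N0*exp(-k*t_half) = N0/2  =>  t_half = log(2)/k.
    k = 1.216e-4          # per year, for carbon-14
    t_half = math.log(2) / k
    print(round(t_half))  # about 5700 years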
Now consider the first-order linear homogeneous equation with a non-constant coefficient,
    dx/dt + r(t)x = 0        (1.10)
and try a solution of the form
    x(t) = Ae^(−R(t))        (1.11)
So this is a solution if
    dR/dt = r(t)
i.e. if R(t) is an anti-derivative of r(t), i.e.
    “R(t) = ∫ r(t) dt”.
Example 1.10. Find a solution to dy/dt + 2ty = 0, given that y(10) = 3.
The general solution, from equation 1.11, is
    y(t) = Ae^(−∫2t dt)
i.e. y(t) = Ae^(−t²)
Then y(10) = 3 gives
    3 = Ae^(−100)
    ∴ A = 3e^(100)
    ∴ y(t) = 3e^(100−t²)
Example 1.11 (A Simple Mixing Problem). A 1000 litre tank contains a mixture of water and
chlorine. To reduce the amount of chlorine, fresh water is pumped into the tank at a rate of
6 ℓ/s; the fluid is well-stirred and pumped out at a rate of 8 ℓ/s. If the initial concentration of
chlorine is 0.02 g/ℓ, find the amount of chlorine in the tank as a function of time and the interval
of validity of the model.
[Figure: the tank, with inflow 6 ℓ/s and outflow 8 ℓ/s.]
Solution. Let Q(t) denote amount of chlorine and V (t) denote the volume of fluid in the tank.
Then:
• dQ/dt measures the rate of change of chlorine; and
• 8·Q(t)/V(t) is the rate fluid is pumped out times the concentration of chlorine, i.e. the outflow
of chlorine.
So dQ/dt and 8·Q(t)/V(t) must balance, i.e. dQ/dt + 8·Q(t)/V(t) = 0. Since V(t) = 1000 − 2t, we have
    dQ/dt + 8Q(t)/(1000 − 2t) = 0
This is of the form in equation 1.10 with x = Q and r(t) = 8/(1000 − 2t). Integrating r(t) gives
    R(t) = −4 log(500 − t)
so
    Q(t) = ce^(−R(t)) = ce^(4 log(500−t))
    Q(t) = c(500 − t)⁴
    Q(0) = 20 ⟹ 20 = c × 500⁴
so Q(t) = 20·((500 − t)/500)⁴
For practical reasons, this is only valid until t = 500.
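As a sanity check, a minimal SymPy sketch (assuming SymPy is available) confirms that Q(t) satisfies the ODE and the initial condition:

    import sympy as sp

    t = sp.symbols('t')
    Q = 20 * ((500 - t) / 500)**4

    # Residual of dQ/dt + 8Q/(1000 - 2t) = 0 should simplify to zero.
    residual = sp.diff(Q, t) + 8 * Q / (1000 - 2 * t)
    print(sp.simplify(residual))  # 0
    print(Q.subs(t, 0))           # 20, i.e. 0.02 g/l in 1000 l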
The antiderivative of ept g(t) may be difficult (or even impossible) to find, but we still have
an expression for the solution. The function ept here is called the integrating factor.
The general solution consists of two parts,
    y(t) = yp(t) + yh(t)        (1.14)
where:
• yp(t) is a particular solution of the full equation; and
• yh(t) is called the complementary solution that solves the homogeneous case
    dy/dt + py = 0.
Example. Find the general solution of
    dy/dt + y = 3t − 5
Here p = 1, so the integrating factor is eᵗ, so
    d/dt(eᵗy) = eᵗ(3t − 5)
An anti-derivative of eᵗ(3t − 5) is just 3teᵗ − 8eᵗ, therefore
    eᵗy = 3teᵗ − 8eᵗ + A
    y = 3t − 8 + Ae⁻ᵗ
Example (Newton's Law of Cooling). The temperature T(t) of a body in surroundings of ambient
temperature A(t) changes at a rate proportional to the temperature difference, i.e.
    dT/dt = −k(T − A(t))
Here k > 0. The minus sign is so that if the ambient temperature is greater than T, then
T − A(t) is negative and so the temperature T increases, i.e. dT/dt > 0. Assuming A(t) is a
constant, we can write this as
    dT/dt + kT = kA
By inspection, the integrating factor is e^(kt), so
    d/dt(e^(kt)·T) = kA·e^(kt)
and integrating gives
    T(t) = A + ce^(−kt)
Let's assume we find a dead body in a room with constant temperature 24°C. At 8am, the
body's temperature is 28°C and an hour later it is 26°C. Assuming normal body temperature
is 37°C, when was the person killed?
We first need to establish k. Measuring t in hours from 8am, T(t) = 24 + (28 − 24)e^(−kt), so
    26 = 24 + (28 − 24)e^(−k)
    e^(−k) = 1/2 ⟹ k = log 2
If the person died at time t_d, then 37 = 24 + 4e^(−k·t_d), so e^(−k·t_d) = 13/4 and
    t_d = −log(13/4)/log 2 ≈ −1.7,
i.e. about 1.7 hours before 8am: the person was killed at around 6:20am.
Non-constant coefficients
We now return to the general first-order linear inhomogeneous ODE:
    dy/dt + r(t)y = g(t)        (1.15)
We try a similar “integrating factor” approach, i.e. multiply both sides by a function I(t)
giving
    I(t) dy/dt + r(t)I(t)y = I(t)g(t)
As before, we want I(t) such that the left-hand side becomes d/dt(I(t)y(t)). Doing the differentiation gives
    d/dt(I(t)y) = I(t) dy/dt + (dI/dt)·y
So for this to work we need
    dI/dt = r(t)I(t)
By inspection,
    I(t) = e^(∫r(t)dt)
works. So we take
    dy/dt + r(t)y = g(t)
Multiplying by I(t) = e^(∫r(t)dt) gives
    I(t) dy/dt + r(t)I(t)y = d/dt(I(t)y) = I(t)g(t)
Integrating both sides gives
    I(t)y = ∫ I(t)g(t) dt + A
Multiplying by 1/I(t) = e^(−∫r(t)dt) gives the general solution
    y = e^(−∫r(t)dt) ∫ e^(∫r(t)dt) g(t) dt + A·e^(−∫r(t)dt)        (1.16)
(the first term is yp(t), the second yh(t)), since e^(∫r(t)dt) ≠ 0. And we have a solution, explicit
if we can solve the integration. Again I(t) = e^(∫r(t)dt) is called an integrating factor.
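To see formula (1.16) in action on a concrete equation, here is a minimal SymPy sketch (the equation dy/dt + 2ty = t is chosen here purely for illustration; its integrating factor is e^(t²)):

    import sympy as sp

    t = sp.symbols('t')
    y = sp.Function('y')

    # dy/dt + 2t*y = t: by (1.16), y = 1/2 + A*exp(-t**2).
    sol = sp.dsolve(sp.Eq(y(t).diff(t) + 2 * t * y(t), t), y(t))
    print(sol)  # Eq(y(t), C1*exp(-t**2) + 1/2)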
Note again that the general solution consists of a particular solution yp(t) plus the complementary
solution yh(t) = Ae^(−∫r(t)dt), which solves the homogeneous case
    dy/dt + r(t)y = 0.
1.6 Separable Equations
Example 1.15. Find the general solution to
    dy/dt = −eᵗy²
Rearranging gives
    dy/y² = −eᵗ dt
Then integrate:
    −∫ dy/y² = ∫ eᵗ dt
    1/y = eᵗ + A
i.e. y(t) = 1/(eᵗ + A)
with the restriction that eᵗ + A ≠ 0.
Note that if y = 0 then dy/dt = 0, so y is a constant; thus y ≡ 0 is also a solution.
Justification
Consider again
    dx/dt = f(x)g(t)        (1.17)
where f is sufficiently “nice”.
Note that if x(t) is a solution to (1.17) with f(x(s)) = 0 for some s, then x(t) = x(s) ∀ t ∈ ℝ,
since if f(x(s)) = 0 then dx/dt = 0, so x is constant.
So assume that x(t) is a non-constant solution of (1.17) and so f (x(t)) is non-zero on an
interval I. Now divide both sides of (1.17) by f (x(t)):
    (1/f(x(t))) · d/dt x(t) = g(t)
Now note that if G(t) is an anti-derivative of g(t) and F(x) is an anti-derivative of 1/f(x), then we
can integrate both sides with respect to t to give
    F(x(t)) = G(t) + c        (1.18)
because
    d/dt F(x(t)) = (dF/dx)(dx/dt) = (1/f(x)) · d/dt x(t)
Note that (1.18) does not give an explicit solution for x(t), but rather a functional relationship
between x(t) and t, also known as an implicit solution. We would like to rearrange this to get
an explicit solution, i.e. x(t) = . . . , but unfortunately this is not always possible.
Example 1.16. Solve the initial value problem
    dy/dt = 3t²e⁻ʸ :  y(0) = 1
dt
Since e−y is never zero, we have no constant solutions.
Separating variables,
    d/dt(e^(y(t))) = eʸ dy/dt = 3t²
Integrating gives the implicit solution
    e^(y(t)) = t³ + c
Taking logs of both sides gives the explicit solution
    y(t) = log(t³ + c)
We want y(0) = 1:
y(0) = log c = 1
so c=e
Therefore the solution is
    y(t) = log(t³ + e)
For this to be valid, we require that t³ + e > 0, so the solution is valid provided
    t > −e^(1/3) ≈ −1.396.
Population Dynamics
The equation
    dN/dt = rN
can be thought of as a simple population model where N is the population size. A better
population model due to Verhulst is:
    dN/dt = rN(1 − N/K)        (1.19)
where r is the birth/death rate, and K limits the growth – (1 − N/K) becomes small if N gets too
large.
This equation is separable:
    K/(N(K − N)) dN = r dt
    (1/N + 1/(K − N)) dN = r dt
where we have used partial fractions. Integrating gives
log |N | − log |K − N | = rt + A
After a little light algebra:
    N(t) = cKe^(rt)/(1 + ce^(rt)).
If N(0) = N0 then N0 = cK/(1 + c), giving c = N0/(K − N0), so the solution is:
    N(t) = N0·K·e^(rt)/(K − N0 + N0·e^(rt))
Note that there are also two constant solutions: since dN/dt = 0 when N = 0 or N = K,
N(t) ≡ 0 and N(t) ≡ K are also solutions.
Thinking about limits ANALYSIS we can also see by inspection that
lim N (t) = K
t→∞
unless N0 = 0.
1.7 Substitution Methods
Note: there are things called “exact equations” – we will not cover them here, but you should
be aware that they exist.
We will look at two types of equation in this section.
Type 1
Equations of the form:
    dy/dx = F(y/x)        (1.20)
We use the substitution u = y/x, so y = ux, giving
    dy/dx = x du/dx + u
so
    x du/dx = F(u) − u
which is separable.
Example 1.17. Solve xy + y² + x² − x² dy/dx = 0 by using the substitution u = y/x.
Solution. The equation is
    dy/dx = 1 + y/x + y²/x²
Using y = ux, we get dy/dx = u + x du/dx, so
    u + x du/dx = 1 + u + u²
    x du/dx = 1 + u²
    du/(1 + u²) = dx/x
Integrating gives arctan u = log |x| + C, i.e. y = x tan(log |x| + C).
Type 2
Equations of the form:
    dy/dx + p(x)y = q(x)yⁿ        (1.21)
dx
which are known as “Bernoulli Equations”.
Note that we have already solved the cases where n = 0, 1. For other n, we use the substitution
u = y^(1−n), giving us
    du/dx = (1 − n)y⁻ⁿ dy/dx
          = (1 − n)y⁻ⁿ[−p(x)y + q(x)yⁿ]
          = (1 − n)[−p(x)y^(1−n) + q(x)]
i.e.
    du/dx + (1 − n)p(x)u = (1 − n)q(x)
which can be solved by integrating factors.
Example 1.18. Solve dy/dx − 6xy = 2xy².
Solution. Set u = 1/y (that is, n = 2 in the general case). Then
    du/dx = −(1/y²) dy/dx = −6x/y − 2x = −6ux − 2x
so
    du/dx + 6xu = −2x.
The integrating factor is thus e^(∫6x dx) = e^(3x²), so
    d/dx(u·e^(3x²)) = −2x·e^(3x²)
    u(x)·e^(3x²) = −(1/3)·e^(3x²) + A
so
    y(x) = 3/(B·e^(−3x²) − 1)
where B = 3A.
If y was a function of time t, i.e. y(t) = 3/(B·e^(−3t²) − 1), then assuming B > 1, as t increases from
0, y(t) gets larger and larger. In fact, as t → √((log B)/3), y(t) → ∞, i.e. the solution “becomes
infinite” at t = √((log B)/3). This is an example of “finite time blowup”; this behaviour is not at
all obvious from the equation we started with (we essentially require y0 > 0).
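To see the blowup numerically, a minimal Python sketch evaluating the solution near the critical time (B = 4 is an arbitrary illustrative choice):

    import math

    # y(t) = 3/(B*exp(-3*t**2) - 1) blows up when B*exp(-3*t**2) = 1,
    # i.e. at t* = sqrt(log(B)/3).
    B = 4.0
    t_star = math.sqrt(math.log(B) / 3)
    for frac in (0.9, 0.99, 0.999):
        t = frac * t_star
        print(t, 3 / (B * math.exp(-3 * t**2) - 1))
    # y grows without bound as t approaches t_star, roughly 0.68 here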
Figure 1.1: The direction field for dy/dt = 1 − 2ty.
1.9 Autonomous First-Order ODEs
Sometimes we may not be able to find an explicit solution to an ODE, but this may not always
be a huge disadvantage. One such class are autonomous ODEs, which are of the form
    dx/dt = f(x)        (1.23)
Note that we put no restriction on the linearity of f (x): for example, consider the equation
    dx/dt = f(x) = x² − 1
Think of x(t) as the position of a particle on a line at time t; the sign of f(x(t)) tells us which way
the particle is moving.
[Phase line: the particle moves right for x < −1, left for −1 < x < 1, and right for x > 1, with
marked points at x = ±1.]
Finally we have two special points, x(0) = ±1, where dx/dt = 0 – the particle just sits there.
These points are called fixed points (stationary points, equilibrium points).
Without having solved the ODE we know the qualitative behaviour of the solutions: a
particle starting between −1 < x < 1 will move towards −1, as will a particle starting at x < −1,
while particles starting off with x > 1 will move to the right indefinitely. The behaviour of a
solution as t → ∞ is called the asymptotic behaviour (compare with the equation dx/dt = e^(−t²) on
page 7).
Points such as x = −1 are called stable fixed points – “if you start near it, you get pulled
towards it.” Points such as x = 1 are called unstable fixed points – “points nearby get pushed
away”.
It should be clear that the stability of a fixed point depends on the gradient of the graph of
f at the fixed point:
[Graphs of f near a fixed point x∗: where f crosses zero with negative slope (f′(x∗) < 0) the fixed
point is stable; with positive slope (f′(x∗) > 0) it is unstable.]
Finally, if f ′ (x) = 0 at a F.P., the equation is called structurally unstable – a small change
to the equation, e.g.
    dx/dt = f(x) + ε(x) :  |ε(x)| ≪ 1
can make the fixed point ‘stable’ or ‘unstable’ by making f′(x) non-zero at the fixed point
(which may also move a little).
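This criterion is easy to automate; a minimal Python sketch for the example dx/dt = x² − 1 above:

    # Classify fixed points of dx/dt = f(x) = x**2 - 1 by the sign of f'(x*).
    def fprime(x):
        return 2 * x

    for x_star in (-1.0, 1.0):
        slope = fprime(x_star)
        kind = ("stable" if slope < 0 else
                "unstable" if slope > 0 else
                "structurally unstable")
        print(f"x* = {x_star:+.0f}: f'(x*) = {slope:+.0f}, {kind}")
    # x* = -1 is stable, x* = +1 is unstable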
Justification
Consider the Taylor series expansion ANALYSIS II of f (x) near the fixed point x0 . This says
that for small h
    f(x + h) ≈ f(x) + hf′(x) + (h²/2!)f″(x) + . . .        (1.24)
Then
    f(x0 + y) ≈ f(x0) + yf′(x0) + (y²/2!)f″(x0) + . . .
Since x0 is a fixed point, f (x0 ) = 0, so if y is small, i.e. |y| ≪ 1, then
f (x0 + y) ≈ yf ′ (x0 )
But
    f(x0 + y) = d/dt(x0 + y) = (dx/dt)|_(x0) + dy/dt
And f(x0) = (dx/dt)|_(x0) = 0, so
    dy/dt = y·f′(x0)        (1.25)
This is called the linear approximation of dx/dt = f(x) at x0.
We can solve this to give
    y(t) = y0·e^(f′(x0)·t)
So if f ′ (x0 ) < 0, then as t → ∞, y(t) → 0. Similarly if f ′ (x0 ) > 0, as t → ∞, y(t) → ±∞.
Example 1.19 (Terminal Velocity). Find the fixed points of the equation dv/dt = g − kv².
(This balances the downward force due to gravity with the upward force due to air resistance
in a falling body.)
The fixed points are where dv/dt = 0, i.e. v² = g/k, so the fixed points lie at v = ±√(g/k). So we draw
a graph of f(v) = g − kv² against v.
[Figure: the downward parabola f(v) = g − kv², crossing the v-axis at ±√(g/k).]
As a check, we find f′(v) = −2kv.
At v = +√(g/k), f′(v) = −2k√(g/k) < 0, so the fixed point is stable.
At v = −√(g/k), f′(v) = +2k√(g/k) > 0, so the fixed point is unstable.
Example 1.20 (Verhulst). Find the fixed points of the equation dN/dt = rN(1 − N/K), where
r > 0 (see discussion on page 17).
The fixed points are where rN(1 − N/K) = 0, i.e. N = 0 or K.
As a check, we find f′(N) = r(1 − 2N/K), so:
at N = 0, f′(0) = r > 0, so the fixed point is unstable;
at N = K, f′(K) = −r < 0, so the fixed point is stable.
So solutions tend towards K and away from zero (compare with the actual solution on page
17).
CHAPTER 2
Linear Second-Order ODEs
We now consider homogeneous linear second-order ODEs, i.e. equations of the form
    a(t) d²x/dt² + b(t) dx/dt + c(t)x = 0        (2.1)
Observations
Observation 1 (Initial Value Problem) For such equations, to make an initial value prob-
lem we require two initial values, e.g. x(0) and ẋ(0). For example,
    d²x/dt² = 0
Integrating gives dx/dt = A
Integrating again gives x(t) = At + B
This is a general solution: to specify a particular solution, we require A and B.
Observation 2 (Linearity) The underlying assumption of linearity means that if x1 (t) and
x2 (t) are both solutions to (2.1) then so is
αx1 (t) + βx2 (t)
Proof. Substitute αx1(t) + βx2(t) into the left-hand side of the general equation,
    a(t) d²x/dt² + b(t) dx/dt + c(t)x
to give
    a(t) d²/dt²[αx1 + βx2] + b(t) d/dt[αx1 + βx2] + c(t)[αx1 + βx2]
    = α[a(t)ẍ1 + b(t)ẋ1 + c(t)x1] + β[a(t)ẍ2 + b(t)ẋ2 + c(t)x2]
    = α·0 + β·0 = 0
since x1 and x2 are both solutions.
Equation (2.1) defines something called a “linear differential operator”. LINEAR ALGEBRA
In a similar vein to first order ODEs, there are conditions on a second order ODE under which
the solution to the IVP, given two initial values, exists and is unique:
Theorem 2.1. For “nice” second-order ODEs with two initial values x(t0 ) = x0 , and ẋ(t0 ) = v0
there exists a unique solution to the initial value problem
    a(t) d²x/dt² + b(t) dx/dt + c(t)x = 0 :  x(t0) = x0, ẋ(t0) = v0        (2.2)
We will now show that we need precisely two such solutions for our ODE to form a general
solution (i.e. to produce all possible solutions by choosing suitable initial values).
There is a notion of linear independence LINEAR ALGEBRA ; for functions this can be
stated as
Definition 2.2. Two functions x1 (t) and x2 (t) defined on an interval I are linearly independent
if the only solution to
α1 x1 (t) + α2 x2 (t) = 0
is α1 = α2 = 0.
For two functions, this is the same as saying that x1 (t) is not a scalar multiple of x2 (t),
since if x1 is a scalar multiple of x2 then x1 = cx2 so x1 − cx2 = 0 on I, so x1 , x2 are linearly
dependent.
Similarly if α1x1 + α2x2 = 0 (∀ t ∈ I) for some α1, α2 ≠ 0 then
    x1 = (−α2/α1)x2 = cx2
Having defined linear independence of solutions, we now show that it is not possible to
obtain all solutions to 2.2 as multiples of a single special solution.
Suppose x1 (t) is a solution to 2.2 satisfying x1 (0) = 1 and ẋ1 (0) = 0 and x2 (t) is another
solution satisfying x2 (0) = 0 and ẋ2 (0) = 1. By theorem 2.1, both exist and are unique. It
should be clear that x1 (t) cannot be a multiple of x2 (t), since if it were, ẋ1 (t) would have to be
the same multiple of ẋ2 (t), and it isn’t.
Now we show that two linearly independent solutions x1(t) and x2(t) are sufficient. We want
    x(t) = α1x1(t) + α2x2(t)
to solve the IVP in equation 2.2 with x(0) = x0 and ẋ(0) = v0. The correct values
α1 , α2 can be obtained by solving:
x0 = α1 x1 (0) + α2 x2 (0)
v0 = α1 ẋ1 (0) + α2 ẋ2 (0)
We can solve this pair of simultaneous equations provided the matrix of coefficients has a non-zero
determinant, i.e.
    x1(0)·ẋ2(0) − x2(0)·ẋ1(0) ≠ 0.
Suppose instead that the determinant is zero; then (x1(0), ẋ1(0)) is proportional to (x2(0), ẋ2(0)),
say with constant of proportionality c.
But then, x1 (0) = cx2 (0) and ẋ1 (0) = cẋ2 (0). Since x2 (t) is a solution, y(t) = cx2 (t) is a
solution with
y(0) = cx2 (0) = x1 (0)
ẏ(0) = cẋ2 (0) = ẋ1 (0)
i.e. another solution with the same initial values as x1 (t). By uniqueness, this implies that
x1 (t) = cx2 (t) ∀ t, i.e. x1 (t), x2 (t) are linearly dependent. But this is a contradiction, since we
assumed that x1 (t), x2 (t) are linearly independent.
(Of course the same argument works if we take a different initial value at t0 6= 0.)
2.2 Linear Second-Order ODEs with Constant Coefficients
We now restrict attention to
    a d²x/dt² + b dx/dt + cx = 0        (2.3)
where a, b, c are constant.
We expect solutions of the form x(t) = e^(kt). Substituting this in gives (ak² + bk + c)e^(kt) = 0,
so k must satisfy the auxiliary equation
    ak² + bk + c = 0
with roots k = (−b ± √(b² − 4ac))/2a.
Case 1 – Two real roots k1 , k2
In this case, x1 (t) = Aek1 t and x2 (t) = Bek2 t are both solutions. Obviously x1 (t) is not a scalar
multiple of x2 (t), so we have two linearly independent solutions. So by uniqueness (theorem
2.1), the general solution is
x(t) = Aek1 t + Bek2 t (2.6)
Example. Solve
    d²y/dx² + dy/dx − 6y = 0
subject to y(0) = 5, y′(0) = 0. The auxiliary equation is
    k² + k − 6 = 0
    (k + 3)(k − 2) = 0
so k = −3, 2 and the general solution is y(x) = Ae^(−3x) + Be^(2x). The initial conditions give
A + B = 5 and −3A + 2B = 0, so A = 2, B = 3 and
    y(x) = 2e^(−3x) + 3e^(2x)
Case 2 – One repeated real root k = −b/2a
Here the auxiliary equation supplies only one solution, e^(kt), so we look for solutions of the form
    y = u·e^(kt)
for some function u(t). Then
    dy/dt = e^(kt) du/dt + ku·e^(kt)
    d²y/dt² = e^(kt) d²u/dt² + 2k·e^(kt) du/dt + k²u·e^(kt)
Substituting into (2.3) gives
    a d²u/dt² + (2ak + b) du/dt + (ak² + bk + c)u = 0
but ak² + bk + c = 0 by construction, and since k = −b/2a we have that 2ak + b = 0, so if u(t)e^(kt)
is a solution, then we must have
    d²u/dt² = 0.
Integrating twice yields u(t) = A + Bt, so the general solution is
    y(t) = (A + Bt)e^(kt)
Example. Solve
    d²y/dx² + 6 dy/dx + 9y = 0
subject to y(0) = 2, y′(0) = −5. The auxiliary equation is
    k² + 6k + 9 = 0
or (k + 3)² = 0
so we have the repeated root k = −3 and general solution y(x) = (A + Bx)e^(−3x). Then
    y(0) = 2 ⟹ A = 2
    y′(0) = −5 ⟹ B − 3A = −5 ⟹ B = 1
so
    y(x) = (2 + x)e^(−3x)
Case 3 – Complex roots k = p ± iq
Now using Euler's formula, e^((p±iq)t) = e^(pt)(cos qt ± i sin qt), and taking suitable combinations
of the two complex solutions, the general (real) solution can be written
    y(t) = e^(pt)(A cos qt + B sin qt).
2.3 Mass/Spring Systems
Recall Newton’s Second Law of Motion:
    m dv/dt = F(t)
or  m d²x/dt² = F(t)
that is, “force = mass × acceleration”.
Consider a (linear) spring, then Hooke’s Law states
“the restoring force is proportional to the extension” or
F = kx
where x is the displacement of the spring from rest (compression is negative and
extension is positive).
Consider a mass attached to such a spring, displaced by x(t) from its rest position.
Pull the mass and let go, then by Newton’s Second Law, if there is no friction,
    m d²x/dt² = −kx
since the force is a “restoring force”.
The simplest model for friction is that it is proportional to the velocity of the object. This
force acts in the opposite direction to the velocity, i.e. against the direction of travel.
So this gives
    m d²x/dt² = −kx − c dx/dt
or
    m d²x/dt² + c dx/dt + kx = 0        (2.11)
In the “ideal case” where there is no friction (i.e. c = 0), we get simple harmonic motion
governed by
    m d²x/dt² + kx = 0
which we can solve by the methods in section 2.2. The auxiliary equation is mλ² + k = 0, which
has roots λ = ±i√(k/m), so the solution is
    x(t) = A cos(√(k/m)·t) + B sin(√(k/m)·t)
or  x(t) = C cos(√(k/m)·t − φ)
Note that the time taken for one oscillation is dependent only on k and m, and not the initial
position of the mass relative to the equilibrium position.
Adding friction or air resistance we retrieve equation (2.11), whose solution, as we have seen,
depends on whether the discriminant (b² − 4ac) is positive, negative or zero.
In this case the discriminant is c² − 4mk, where c, m, k ≥ 0.
There are thus four cases to be considered:
• undamped, c = 0
• underdamped, c² − 4mk < 0
• critically damped, c² − 4mk = 0
• overdamped, c² − 4mk > 0
Undamped, c = 0
We have already seen that in this case equation (2.11) gives the solution
    x(t) = C cos(√(k/m)·t − φ)
[Figure: a cosine wave oscillating between ±C.]
In this case, there is no friction and so the mass oscillates back and forth ad infinitum.
Underdamped, c2 − 4mk < 0
In this case, the auxiliary equation has roots
    −c/2m ± i·√(4mk − c²)/2m
so the solution is
    x(t) = A·e^(−ct/2m) cos(ωt − φ)
where ω = √(4mk − c²)/2m.
[Figure: oscillations whose amplitude decays between the envelope curves ±A·e^(−ct/2m).]
In this case, the system still oscillates, but the amplitude decreases gradually over time.
Critically damped, c² − 4mk = 0
In this case the auxiliary equation has the repeated root −c/2m, so the solution is
    x(t) = (A + Bt)e^(−ct/2m).
Now if the mass is released from rest (i.e. ẋ(0) = 0) then its distance from equilibrium
decreases monotonically to zero. Sufficient initial momentum (velocity) will make the mass
overshoot, after which its displacement decays back to zero.
[Figure: the displacement for release from rest, and for release with a shove (one overshoot).]
This is called critical damping since a small decrease in c will allow oscillations.
Overdamped, c2 − 4mk > 0
In this case, the roots of the auxiliary equation are (−c ± √(c² − 4mk))/2m, both negative since
√(c² − 4mk) < √(c²) = c. This gives the solution
    x(t) = A·e^(−k1·t) + B·e^(−k2·t)
where k1, k2 = (c ∓ √(c² − 4mk))/2m > 0.
[Figure: monotonic decay from rest, and decay after a shove.]
This gives a similar result to critical damping, but the decay is less severe.
2.4 Inhomogeneous Linear Second-Order ODEs
We now allow a non-zero right-hand side:
    a(t) d²x/dt² + b(t) dx/dt + c(t)x = f(t)        (2.12)
Note that f (t) is sometimes known as a “forcing term”.
We observe that if xp (t) is a solution to (2.12) and Ax1 (t) + Bx2 (t) is a solution to the
homogeneous case
    a(t) d²x/dt² + b(t) dx/dt + c(t)x = 0        (2.3)
then Ax1 (t) + Bx2 (t) + xp (t) is also a (general) solution to equation (2.12) because of linearity.
This should be obvious; if not, then look back at section 2.1, observation 2, on page 25.
Compare this with first order ODEs, where the solution to inhomogeneous first order ODEs
is in two bits (see equation (1.14) on page 12, and equation (1.16) on page 15).
Here the solution Ax1 (t) + Bx2 (t) to the homogeneous case is called the complementary
function, and the solution xp (t) is called the particular integral.
So solving equation (2.12) is a two-part process:
1. find the complementary function Ax1(t) + Bx2(t) solving the homogeneous case; then
2. find one particular integral xp(t) of the full equation, and add the two together.
2.5 Inhomogeneous Linear Second-Order ODEs
with constant coefficients
Case 1 – f (t) is a polynomial
If f(t) is an nth degree polynomial then x(t) must also be nth degree, so dx/dt is an (n−1)th degree
polynomial and d²x/dt² is an (n−2)th degree polynomial.
Example 2.6. Find the general solution of
    d²y/dt² + dy/dt − 2y = t²        (2.13)
The complementary function, or C.F., is the solution to the equivalent homogeneous case
d²y/dt² + dy/dt − 2y = 0, which is just y(t) = αeᵗ + βe^(−2t).
For the particular integral, or P.I., we try the most general possible quadratic:
    y(t) = Ct² + Dt + E
    dy/dt = 2Ct + D
    d²y/dt² = 2C
Substituting all that in gives
    d²y/dt² + dy/dt − 2y = 2C + 2Ct + D − 2Ct² − 2Dt − 2E = t²
    −2Ct² + (2C − 2D)t + (2C + D − 2E) = t²
    −2C = 1,  2C − 2D = 0,  2C + D − 2E = 0
The first gives C = −½; substituting this into the second gives D = −½ as well, and finally
substituting these into the third yields E = −¾.
Thus the P.I. = −½t² − ½t − ¾, and so the general solution to equation 2.13 is
    y(t) = αeᵗ + βe^(−2t) − ½t² − ½t − ¾.
Case 2 – f(t) is an exponential e^(kt)
For f(t) = ae^(kt) we try a multiple of e^(kt). For example, for
    d²y/dt² + dy/dt − 2y = e^(−t)
we try:
    y(t) = Ae^(−t)
    dy/dt = −Ae^(−t)
    d²y/dt² = Ae^(−t)
Substituting those in gives
    d²y/dt² + dy/dt − 2y = Ae^(−t) − Ae^(−t) − 2Ae^(−t) = e^(−t)
    −2Ae^(−t) = e^(−t)
Equating coefficients gives A = −½ and hence the general solution is
    y(t) = αeᵗ + βe^(−2t) − ½e^(−t)        (2.16)
There is, however, a potential problem with this method – what if k is a root of the auxiliary
equation? For example:
    d²y/dt² + dy/dt − 2y = 3eᵗ        (2.17)
Here trying Aet on the LHS will give 0, not et , on the RHS.
Instead, we try Atet , since differentiating this will yield multiples of et . For (2.17):
    y(t) = Ateᵗ
    dy/dt = Aeᵗ + Ateᵗ
    d²y/dt² = 2Aeᵗ + Ateᵗ
Substituting those in gives
    d²y/dt² + dy/dt − 2y = 2Aeᵗ + Ateᵗ + Aeᵗ + Ateᵗ − 2Ateᵗ = 3eᵗ
    3Aeᵗ = 3eᵗ
giving A = 1 and general solution
    y(t) = αeᵗ + βe^(−2t) + teᵗ        (2.18)
Similarly, in the case where the RHS is of the form e^(kt) and the auxiliary equation has repeated
root k, so that the C.F. has terms in both e^(kt) and te^(kt), we try a P.I. of the form
    y(t) = At²e^(kt)
Functions to “guess”:
    f(t)                        Try solution xp(t) =
    ae^(kt) (k not a root)      Ae^(kt)
    ae^(kt) (k a root)          Ate^(kt), or At²e^(kt) if k is a repeated root
2.6 Mass/Spring Systems with Forcing
Consider again the mass/spring system we saw in section 2.3. If we now force this oscillating
mass/spring system, the solutions become much more interesting. If we can push the mass every
second, say, it may reduce the oscillations, or, if we hit a certain frequency, the oscillations may
become much larger very quickly. We can approximate the force as F cos Ωt, giving us our
general equation
    m d²x/dt² + c dx/dt + ω²x = F cos Ωt        (2.19)
representing a mass/spring system being forced periodically.
Assume that the discriminant c² − 4mω² < 0, i.e. that the auxiliary equation has complex
roots. Simplifying the problem slightly by taking m = 1, it can be shown (exercise!) that
equation 2.19 has the solution:
    x(t) = A cos(Ωt − φ) + B·e^(−ct/2) cos(αt + δ)        (2.20)
where
    α² = ω² − c²/4,   A = F/√((ω² − Ω²)² + c²Ω²),   φ = arccos[(ω² − Ω²)/√((ω² − Ω²)² + c²Ω²)]
Comments
1. As t → ∞, the second term “vanishes”. This second term is called the “transient be-
haviour”, while the first term is known as the “steady state solution”.
[Figure: |A| against Ω; for c = 0 (no friction) the curve blows up at Ω = ω, while for c > 0 it has a finite peak.]
Figure 2.1: A graph of |A| against Ω for the solution of equation (2.20).
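The location and height of the peak can be explored numerically; a minimal Python sketch using the formula for A above (F = 1 and ω = 2 are illustrative choices):

    import math

    F, w = 1.0, 2.0
    for c in (0.5, 1.0, 2.0):
        amps = [(F / math.sqrt((w**2 - O**2)**2 + (c * O)**2), O)
                for O in [i * 0.01 for i in range(1, 500)]]
        A_max, O_max = max(amps)
        print(f"c = {c}: max |A| = {A_max:.3f} near Omega = {O_max:.2f}")
    # The peak sits just below Omega = w, and is lower for larger c.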
CHAPTER 3
Difference Equations
3.1 Terminology
In the previous two chapters, when we were finding a solution to an ODE, we found an ex-
pression for x(t) which gave us a value of x for all times t. While we sometimes came across
integrals that were not solvable analytically, we were still able to obtain an expression for the
solution. However, the cases in which it is possible to obtain an explicit expression for the
solution x(t) to an ODE are in fact extremely limited, and it becomes next to impossible for
ODEs of higher orders. For equations we cannot solve explicitly, we can turn to numerical
solution methods which involve increasing the time t by very small increments, and generating
a numerical approximation to the solution; one such scheme is known as Euler’s method, and
we’ll come back to this notion in example 3.1 below.
Moreover, quite often we’re interested not in obtaining a continuous solution, but rather a
model relating one day/hour/year to the next, e.g. stock prices at the end of trading each day,
or population models in which we’re given the number of, say, rabbits this year, and want to
figure out the population next year. In both examples we want a discrete solution, rather than
a continuous solution. For example, a population model for discrete values of time is:
    N_(t+1) = rN_t(1 − N_t/k)        (3.1)
where Nt is the number of rabbits in year t; compare this with equation (1.19) on page 17.
This kind of equation, and similar types relating one value to the previous value(s), is
known as a difference equation, or sometimes as a “recurrence equation”, in which the values
N0, N1, N2, N3, . . . (where N0 is the initial value supplied in the question) form a sequence.
ANALYSIS
Example 3.1 (Euler’s Method). Consider an ODE of the form
    dx/dt = f(t, x) :  x(0) = x0        (3.2)
that we cannot solve. In this case, we must give up the idea of finding a solution for all values
of the independent variable (x(t) for any t ∈ R) and instead try to find an approximation to
the solution at a discrete set of values of t.
A computer drawing a graph of a first-order ODE with a solution essentially draws a series
of dots representing the solution. That series of dots looks like a line if it is dense enough, but
it’s still a series of dots. So for a computer plotting a graph of equation (3.2), it assumes that a
solution x(t) exists and that it is unique (i.e. that theorem 1.8 applies) and then all it needs is
a series of dots which approximate the solution x(t). The simplest way to do this is by Euler’s
method.
For this method, we choose a small time step h and make the assumption that over that
interval h, the derivative dx/dt is constant, and so by the Taylor expansion (equation (1.24) on
page 23):
    x(t + h) = x(t) + hẋ(t) = x(t) + hf(t, x(t))        (3.3)
(We can ignore subsequent terms in the Taylor expansion since we are assuming that dx/dt is
constant over the small timestep h, and thus d²x/dt² = 0.¹)
Implementing this to solve (3.2) yields the following:
    x0 = x(0)
    x1 = x(h) = x(0 + h) = x(0) + hẋ(0) = x0 + hf(0, x0)
    x2 = x(2h) = x(h + h) = x(h) + hf(h, x(h)) = x1 + hf(h, x1)
    . . .
    x_(k+1) = x_k + hf(kh, x_k)        (3.4)
At each step we have everything we need without knowing the explicit solution x(t).
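A minimal Python implementation of (3.4), tested on dx/dt = x with x(0) = 1, whose exact solution is eᵗ:

    import math

    def euler(f, x0, h, n_steps):
        # x_{k+1} = x_k + h * f(k*h, x_k)
        x, xs = x0, [x0]
        for k in range(n_steps):
            x = x + h * f(k * h, x)
            xs.append(x)
        return xs

    xs = euler(lambda t, x: x, x0=1.0, h=0.1, n_steps=10)
    print(xs[-1], math.e)  # 2.5937... vs 2.7183...: the error accumulates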
Figure 3.1: The exact solution curve and a series of approximations to it with h = 1.
As you can see, while Euler’s method yields a solution, the errors grow the longer we continue
in this iterative fashion.
¹In reality, dx/dt is not constant, and this assumption can in fact lead to very poor approximations. The Improved
Euler Method, surprisingly enough, improves this method by assuming not that dx/dt is constant, but that d²x/dt² is
constant, and hence achieves a better approximation by using two terms of the Taylor expansion. The Runge–Kutta
Method takes this further by assuming that the fourth derivative, d⁴x/dt⁴, is constant, and in most cases this
method is indistinguishable from the true solution. However, these methods are very much more complicated to
put in to practice, and we will not study them here: those interested should consider taking MA228 Numerical
Analysis next year.
Example 3.2 (Fibonacci’s Rabbits). Consider the Fibonacci sequence, given by the difference
equation
xn+2 = xn+1 + xn : x0 = x1 = 1 (3.5)
This models a population of rabbits under the following assumptions:
• a single unisex adult rabbit gives birth to one baby per year;
• a newborn rabbit takes two years to become an adult (spending one year as a one year old); and
• rabbits never die.
    Year               0   1   2   3   4   5   6  . . .
    Adults             0   0   1   1   2   3   5  . . .
    One year olds      0   1   0   1   1   2   3  . . .
    Newborns           1   0   1   1   2   3   5  . . .
    TOTAL              1   1   2   3   5   8   13 . . .
    number in year n   x0  x1  x2  x3  x4  x5  x6 . . .
From the table, we can see that the number of rabbits in year n is equal to the number of
rabbits in year n − 1 plus the number of babies born in year n. But the number of babies in
year n is just equal to the number of adults in year n, which is itself the number of rabbits in
year n − 2, since two years later all the rabbits will be adults. This gives us the formula:
    x_k = x_(k−1) + x_(k−2)
(rabbits in year k) = (rabbits in year k−1) + (rabbits in year k−2, who are the adults in year k)
There are, in fact, strong similarities between difference equations and ODEs; for example,
    x_(n+2) − 5x_(n+1) + 3x_n = cos(n/63)
is described as a second-order inhomogeneous linear difference equation.
Definition 3.3. The order of a difference equation is the difference between the highest index
of x and the lowest.
For example,
    x_(n+7) − cos x_(n+3) = Ae^(n²)
has order 4. Euler’s method (example 3.1) is order 1, while the Fibonacci series (example 3.2)
is order 2.
3.2 First-Order Homogeneous Linear Difference Equations
First-order equations, or “next-generation models” are equations of the form
    x_(n+1) = f(x_n, n)
The simplest is the linear homogeneous equation x_(n+1) = ax_n, which we can solve by iterating:
    x1 = ax0
    x2 = ax1 = a²x0
    x3 = ax2 = a³x0
    . . .
    x_n = aⁿx0        (3.7)
For first-order difference equations, we saw solutions of the form aⁿ (in place of e^(at) for
first-order ODEs of the form dx/dt = ax). So, we try analogous solutions x_n = Akⁿ in the
second-order case x_(n+2) + ax_(n+1) + bx_n = 0. Substituting gives Ak^(n+2) + aAk^(n+1) + bAkⁿ = 0,
which is true if
    Akⁿ(k² + ak + b) = 0
So, either k = 0 and xn ≡ 0, which isn’t particularly interesting, or alternatively, k is a root
of the auxiliary equation
    k² + ak + b = 0        (3.10)
so
    k = (−a ± √(a² − 4b))/2        (3.11)
which, as with the ODE case, gives us three distinct cases detailed below.
1. Distinct real roots k1, k2: by linearity the general solution is x_n = A·k1ⁿ + B·k2ⁿ. For the
Fibonacci equation (3.5), the auxiliary equation is k² − k − 1 = 0, with roots (1 ± √5)/2.
When x0 = x1 = 1 as in the classic Fibonacci sequence, then we can substitute these in and
get two simultaneous equations in A and B:
    x0 = A + B = 1
    x1 = A(1 + √5)/2 + B(1 − √5)/2 = 1
We can solve these (Exercise) to give A = (1 + √5)/(2√5), B = (√5 − 1)/(2√5), which gives us our solution
    x_n = (1/√5)·((1 + √5)/2)^(n+1) − (1/√5)·((1 − √5)/2)^(n+1)        (3.14)
which, despite the proliferation of √5s, does in fact give an integer for every n ∈ ℕ, as required.
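A minimal Python sketch comparing (3.14) with direct iteration of (3.5):

    import math

    def closed_form(n):
        s5 = math.sqrt(5)
        return (((1 + s5) / 2)**(n + 1) - ((1 - s5) / 2)**(n + 1)) / s5

    x = [1, 1]
    for _ in range(8):
        x.append(x[-1] + x[-2])
    print(x)
    print([round(closed_form(n)) for n in range(10)])  # the same integers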
2. Repeated real roots, k = −a/2 (when b = a²/4)
Once again, we look to ODEs for inspiration, and try a solution of the form
    x_n = (A + Bn)kⁿ        (3.15)
Since x_n = Akⁿ yields one solution, the question is whether x_n = Bnkⁿ is another solution. In
order to find out, we substitute it into the difference equation:
which, rearranged, gives
    Bnkⁿ(k² + ak + b) + Bkⁿ(2k² + ak) = 0
The first bracket is 0 by the auxiliary equation, and the second is 0 since k = −a/2,
as required. So nkn is also a solution when k is a repeated root. Therefore by linearity, equation
3.15 is indeed the general solution when k is a repeated root.
Example 3.5. Find a general solution of
xn − 6xn−1 + 9xn−2 = 0 (3.16)
The auxiliary equation is just
    λ² − 6λ + 9 = 0
    (λ − 3)² = 0
    λ = 3 (repeated)
so from equation 3.15, the general solution is just
xn = (A + Bn)3n (3.17)
3. Complex roots, k = p ± iq
In order to make all our lives easier, we write the roots in the form k± = re^(±iθ), where r² = p² + q²
and θ = arctan(q/p).
In this case, the general solution is
xn = r n (A cos nθ + B sin nθ) (3.18)
This follows by considering
    C(re^(iθ))ⁿ + C̄(re^(−iθ))ⁿ
where C̄ is the complex conjugate of C (with C chosen so that the solution is real), and by
noting that re^(iθ) = r(cos θ + i sin θ) and that (cos θ + i sin θ)ⁿ = cos nθ + i sin nθ (de Moivre's
theorem).
Example 3.6. Solve
xn+2 − 2xn+1 + 2xn = 0 (3.19)
The auxiliary equation is
    λ² − 2λ + 2 = 0
    λ = 1 ± i = √2·e^(±iπ/4)
so, plugging this into our general equation 3.18, we get
    x_n = (√2)ⁿ(A cos(nπ/4) + B sin(nπ/4))        (3.20)
As with inhomogeneous linear ODEs, for an inhomogeneous linear difference equation
the general solution is simply the complementary function and a particular solution added
together. In this section we will look at two forms for f_n; other forms follow in a similar way.
1. f_n is a polynomial in n
We will first look at a simple first-order difference equation,
    x_(n+1) = ax_n + b   (a ≠ 1)
The homogeneous case gives a complementary function x_n = Aaⁿ. For the particular solution,
we try x_n = B, a constant. Then
    B = aB + b
    B = b/(1 − a)
giving the general solution
    x_n = Aaⁿ + b/(1 − a)        (3.23)
For higher-degree polynomials and higher-order difference equations, the method is the same
as ODEs, so for example if fn = n2 , we try An2 + Bn + C.
Example 3.7. Solve
xn − xn−1 − 6xn−2 = −36n (3.24)
Solving the homogeneous case xn − xn−1 − 6xn−2 = 0 gives us our complementary function,
so we find the auxiliary equation:
    k² − k − 6 = 0
    (k − 3)(k + 2) = 0
    k = 3, −2
    x_n = A·3ⁿ + B(−2)ⁿ        (3.25)
It doesn't matter in this case that the leading term is x_n rather than x_(n+2) – you could argue
that the solution is x_n = A·3^(n−2) + B(−2)^(n−2), but then x_n = (A/9)·3ⁿ + (B/4)·(−2)ⁿ, and then we can
choose A′ = A/9, B′ = B/4 since A and B are just arbitrary constants, giving x_n = A′·3ⁿ + B′(−2)ⁿ.
For the particular solution, since f_n is a polynomial of degree 1, we try the most general
degree-1 polynomial possible, i.e. x_n = Cn + D, and substitute this into (3.24):
    (Cn + D) − (C(n−1) + D) − 6(C(n−2) + D) = −6Cn + 13C − 6D = −36n
so C = 6 and D = 13, giving the general solution
    x_n = A·3ⁿ + B(−2)ⁿ + 6n + 13        (3.26)
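A minimal Python sketch checking (3.26) against the recurrence, for an arbitrary choice of A and B:

    A, B = 2.0, -1.5

    def x(n):
        return A * 3**n + B * (-2)**n + 6 * n + 13

    for n in range(2, 10):
        assert abs(x(n) - x(n - 1) - 6 * x(n - 2) + 36 * n) < 1e-9
    print("(3.26) satisfies x_n - x_{n-1} - 6 x_{n-2} = -36n")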
2. f_n is an exponential, aⁿ
This is the “equivalent” case to that of e^(at) in ODEs. Once again, we have to be careful when aⁿ
is also part of the complementary function – but once again, the ideas from ODEs carry over.
So, for the particular solution when f_n = aⁿ, try x_n = Caⁿ if a is not a root of the auxiliary
equation, and x_n = Cnaⁿ if it is.
Example. Solve x_(n+2) + x_(n+1) − 6x_n = 40·2ⁿ.
First we find the complementary function. Our auxiliary equation is k² + k − 6 = 0, giving roots
k = 2, −3 and so we have our complementary function
    x_n = A·2ⁿ + B(−3)ⁿ        (3.28)
For the particular solution we cannot try
    x_n = C·2ⁿ
since 2 is a root of the auxiliary equation and this would give 0 on the left-hand side; instead we try
    x_n = Cn·2ⁿ
Substituting this in gives C(n+2)·2^(n+2) + C(n+1)·2^(n+1) − 6Cn·2ⁿ = 10C·2ⁿ = 40·2ⁿ, so C = 4 and
    x_n = A·2ⁿ + B(−3)ⁿ + 4n·2ⁿ        (3.31)
3.4 First-Order (Autonomous) Nonlinear Difference Equations
Solving nonlinear difference equations, even first-order ones, can be extremely difficult. We will
approach this topic largely through one example, the “logistic equation”:
    x_(n+1) = λx_n(1 − x_n)        (3.32)
This is related to the Verhulst ODE, and can be thought of as a population model where x_n is
population size as a proportion of a maximum size K. Typically 0 ≤ x_n ≤ 1, and if so, then we
must have the restriction 0 ≤ λ ≤ 4, or else the values would leave the range 0 ≤ x_n ≤ 1.
For small xn , then xn+1 ≈ λxn ; λ can be thought of as the “birth rate” when λ > 1.
However, be warned: this equation looks deceptively simple.
In general, we are thinking of nonlinear autonomous difference equations of the form
    x_(n+1) = f(x_n)        (3.33)
given some x0. This is autonomous because f is a function of x_n, but not of n, i.e. it doesn't
involve terms like n2 or en xn etc.
Running our sausage machine (3.33) gives:
x1 = f (x0 )
x2 = f (x1 ) = f (f (x0 )) = f 2 (x0 )
..
.
xn = f n (x0 )
Here for shorthand purposes we have used the notation f²(x) = f(f(x)), or more generally:
    fⁿ(x) = f(f(. . . f(f(x)) . . . ))   (n applications of f)
Note that fⁿ(x) ≠ [f(x)]ⁿ, and also that fⁿ(x) ≠ f⁽ⁿ⁾(x) = dⁿf/dxⁿ.
If f (xn ) is nonlinear, then (most of the time) you cannot write an explicit solution of the
form
xn = some function of x
and, moreover, we can get very strange things happening, but first some theory.
We met fixed points of difference equations in section 3.2, but we now give a more rigorous
definition:
Definition. A fixed point of x_(n+1) = f(x_n) is a point x∗ with f(x∗) = x∗. A fixed point is stable
if every sequence starting sufficiently close to x∗ converges to x∗; otherwise it is unstable.
From section 3.2, we know that the linear equation xn+1 = axn has solution xn = an x0 , and
that x0 = 0 is a fixed point, stable if |a| < 1 and unstable if |a| > 1. The upshot of this and the
definition above is that limits of sequences and stability are related. ANALYSIS
The $64,000 question is: does this generalise? Fortunately the answer is yes.
47
Say x∗ is a fixed point of xn+1 = f (xn ), i.e. f (x∗ ) = x∗ . (Note the difference with ODEs:
a common mistake in exams is to set f (x) = 0 instead of f (x) = x.) Consider a nearby point
x0 = x∗ + h with |h| ≪ 1, then
x1 = f (x0 )
= f (x∗ + h)
h2 ′′
= f (x∗ ) + hf ′ (x∗ ) + f (x∗ ) + . . .
2
x1 ≈ f (x∗ ) + hf ′ (x∗ )
x1 ≈ x∗ + hf ′ (x∗ ) (3.35)
If |f ′ (x∗ )| < 1, then |x0 − x∗ | > |x1 − x∗ |, i.e. we have moved closer to x∗ . (More formally, if
|f ′ (x∗ )| < 1, then f n (x∗ + h) → x∗ as n → ∞.) This means that if |f ′ (x∗ )| < 1, x∗ is a stable
fixed point. Conversely if |f ′ (x∗ )| > 1 then x∗ is unstable.
Note that the sequence xn+1 = f (xn ) need not be monotonic (unlike the ODE equivalent);
it can converge to a fixed point via an alternating (i.e. oscillating) sequence.
Returning to the example of the logistic equation (3.32), we can now look for its fixed points,
i.e. x∗ such that
    x∗ = λx∗(1 − x∗)
i.e.
    λx∗² + (1 − λ)x∗ = 0        (3.36)
giving the two fixed points x∗ = 0 and x∗ = (λ − 1)/λ (with λ > 1).
Since f(x) = λx(1 − x), f′(x) = λ(1 − 2x), so:
at x∗ = 0, f′(0) = λ, so 0 is stable if λ < 1 and unstable if λ > 1;
at x∗ = (λ − 1)/λ, f′((λ − 1)/λ) = 2 − λ, so this fixed point is stable if 1 < λ < 3.
So that would seem to be it, yes? We have a stable fixed point at x∗ = (λ − 1)/λ to which all x_n
with 0 < x0 < 1 converge.
No. We quietly slipped in the fact that x∗ = (λ − 1)/λ was only stable if 1 < λ < 3. What
happens if 3 ≤ λ ≤ 4? Well, in that case, f′((λ − 1)/λ) = 2 − λ ≤ −1 and the fixed point at
x∗ = (λ − 1)/λ is now unstable⁴. That means that both fixed points are unstable, and there are
no other fixed points left.
Think about the equivalent ODE case. The phase diagram (see section 1.9) would look like
this:
[Phase line: two unstable fixed points, at 0 and (λ − 1)/λ, with nothing between them.]
⁴Note that λ = 3 is an example of structural instability – the sequence does converge to x∗ = (λ − 1)/λ, but very,
very slowly indeed.
If this were an ODE, there would have to be a fixed point between those two where dx/dt = 0.
But since difference equations need not be monotonic, this is not a problem, but rather a critical
feature – difference equations allow more complicated behaviour, which is not easy to pick
up simply from looking at fixed points.
We thus need a graphical way of studying the solutions.
Figure 3.2: The cobweb diagram of the logistic map for λ = 2.5.
As in figure 3.2, we plot the graphs of y = x and y = f(x) on one set of axes. The two
graphs cross where x_(n+1) = f(x_n) = x_n, i.e. at the fixed points. The procedure for drawing our
diagram is as follows: starting from x0 on the horizontal axis, draw a vertical line to the curve
y = f(x) (the height reached is x1 = f(x0)); draw a horizontal line to the diagonal y = x (so that
the horizontal coordinate becomes x1); and repeat.
These diagrams are known as cobweb diagrams. While they can be drawn by hand, as we
shall see accuracy is vital, and so the drawings are best created by computer software such as
Matlab. MATHS BY COMPUTER
To see the behaviour of the logistic equation in detail, we will consider a number of examples.
λ = 2.5
In this case, we simply spiral in towards the fixed point x∗ = (λ − 1)/λ. The cobweb diagram is
plotted in figure 3.3.
λ = 3.2
Now the fixed point x∗ = (λ − 1)/λ is unstable. The cobweb diagram is plotted in figure 3.4.
Figure 3.3: The logistic equation with λ = 2.5, with initial value x0 = 0.15, iterated 50 times.
As we can see, the solution simply spirals into the fixed point x∗ = (λ − 1)/λ.
Figure 3.4: The logistic equation with λ = 3.2, with initial value x0 = 0.15, iterated 50 times.
Here the solution settles down to oscillate with period 2.
As we can see, when λ = 3.2, we are attracted to a “period two orbit”, i.e. two alternating
points; the solution repeats every two steps. In some sense, this is stable – we get attracted towards the orbit.
Each point of the orbit is a fixed point of f²(x) (i.e. f(f(x))), i.e. a point x∗ s.t. f²(x∗) = x∗.
For f(x) = λx(1 − x), we have
    f²(x) = λ·[λx(1 − x)]·(1 − λx(1 − x))
We now want x∗ s.t. f 2 (x∗ ) = x∗ , i.e. we need to solve a quartic equation. While there is a
formula for this, it is hugely complicated.
However, we already know two of the factors – we know that x∗ = 0 and x∗ = (λ − 1)/λ will solve
the quartic, since if x∗ solves f(x∗) = x∗, then f²(x∗) = f(f(x∗)) = f(x∗) = x∗. Factoring these out
leaves a quadratic,
    λ²x² − λ(1 + λ)x + (1 + λ) = 0
with roots
    x± = ((1 + λ) ± √((λ + 1)(λ − 3)))/(2λ)        (3.39)
These exist provided λ > 3, which is consistent with the lack of stable fixed points of f (x)
beyond λ > 3.
By increasing λ through λ = 3, we change the stability of the fixed point x∗ = (λ − 1)/λ and
create new solutions to f²(x) = x. This is an example of bifurcation. These points are stable
fixed points of f²(x_n) if 3 < λ < 1 + √6 ≈ 3.45. If we call the two points x+ and x−, then we
find that f(x+) = x− and f(x−) = x+, and we have a period two orbit.
Once again, however, we hit the same problem: beyond λ ≈ 3.45, these fixed points become
unstable. So, we investigate what happens in this case.
λ = 3.5
The cobweb diagram appears in figure 3.5.
In this case, the oscillations settle down, but we no longer have a period two orbit, but
rather a period four orbit, in which f 4 (x∗ ) = x∗ .
If we then increase λ further in small increments, we get periods of 8, 16, 32 etc. occurring;
this is called a period-doubling cascade. Beyond a certain point⁵, the solutions we observe are
apparently random – in fact it is an example of chaos.
⁵This occurs at roughly λ ≈ 3.57, and is known as the accumulation point or Feigenbaum point.
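All of this behaviour can be seen by simply iterating the map; a minimal Python sketch (the choices of x0 and the number of discarded iterates are illustrative):

    def tail(lam, x0=0.15, burn=1000, keep=8):
        # iterate x -> lam*x*(1-x), discard transients, return what remains
        x = x0
        for _ in range(burn):
            x = lam * x * (1 - x)
        out = []
        for _ in range(keep):
            x = lam * x * (1 - x)
            out.append(round(x, 4))
        return out

    for lam in (2.5, 3.2, 3.5, 3.9):
        print(lam, tail(lam))
    # 2.5: a single value 0.6 = (lam-1)/lam;  3.2: period 2;
    # 3.5: period 4;  3.9: no repeating pattern (chaos)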
Figure 3.5: The logistic equation with λ = 3.5, with initial value x0 = 0.4, iterated 50 times.
We can see that the oscillations settle down, but that the period is now 4.
CHAPTER 4
Coupled First-Order Linear Systems
So far we have concentrated on one-dimensional systems, i.e. one dependent variable. The
“phase plane” was a line, as in the phase diagrams of section 1.9.
We now increase the dimension to two, by considering two dependent variables x(t), y(t).
Some uses for such equations include:
• competing populations, e.g. rabbits and foxes
• chemical reactions
• more complicated mixing problems (compare with example 1.11 on page 11)
Now the phase plane is two-dimensional:
(x0 , y0 )
55
4.1 In General
An n × n system of first-order ODEs is:
    ẋ1 = f1(x1, x2, . . . , xn, t)
    ẋ2 = f2(x1, x2, . . . , xn, t)
    . . .        (4.1)
    ẋn = fn(x1, x2, . . . , xn, t)
We write this compactly as ẋ = f(x, t), where
    x = (x1, . . . , xn)ᵀ,  ẋ = (ẋ1, . . . , ẋn)ᵀ,  f(x, t) = (f1(x1, . . . , xn, t), . . . , fn(x1, . . . , xn, t))ᵀ
Having established our basic notation, we can now proceed to cover the technicalities. This
is very much a generalisation of first-order one-dimensional equations.
Definition 4.1. A solution of the IVP
    dx/dt = f(x, t) :  x(t0) = x0,  x ∈ ℝⁿ        (4.3)
on an open interval I that contains t0 is a continuous (ANALYSIS II – i.e. “nice”)
function x : I → ℝⁿ with x(t0) = x0 and ẋ(t) = f(x, t) ∀ t ∈ I.
Recall that for a function g(u, v), ∂g/∂u is the partial derivative of g with respect to u, i.e. we
treat v as a constant and differentiate with respect to u alone. Armed with this knowledge, we
make the following definition:
Definition 4.2. The Jacobian matrix of a function f : ℝⁿ → ℝⁿ is the matrix of partial
derivatives as follows:
    Df = [ ∂f1/∂x1  ∂f1/∂x2  . . .  ∂f1/∂xn
           ∂f2/∂x1  ∂f2/∂x2  . . .  ∂f2/∂xn
           . . .
           ∂fn/∂x1  ∂fn/∂x2  . . .  ∂fn/∂xn ]        (4.4)
Important Special Case
When we discussed second-order ODEs, we assumed existence and uniqueness. Consider
    d²x/dt² = f(dx/dt, x, t)        (4.5)
Setting y = dx/dt turns this into the system
    dx/dt = y = f1(x, y, t)
    dy/dt = ẍ = f(y, x, t) = f2(x, y, t)
so to guarantee unique solutions for second order ODEs we require that the partial derivatives
∂f/∂x and ∂f/∂y are “nice”, and we also require an initial condition (x0, y0), i.e. x(t0) = x0, ẋ(t0) = y0.
For example, for
    ẍ + (cos t)ẋ − x² = 0
set y = ẋ, so that ẏ = x² − (cos t)y = f(x, y, t). Then
    ∂f/∂x = 2x
    ∂f/∂y = −cos t
are both continuous, so solutions exist and are unique.
An example of a linear system (with non-constant coefficients) is:
    dx/dt = 5x − 2y + cos t
    dy/dt = eᵗx + y
Here, however, as in ODEs we consider constant coefficients. One approach is to see if we can
transform the equations into a second-order ODE, and then solve that using our techniques
from chapter 2.
Example 4.5. Find an explicit solution to
    dx/dt = x + y        (4.7a)
    dy/dt = 4x − 2y + 4e^(−2t)        (4.7b)
We first rearrange (4.7a) to get:
    y = ẋ − x
    ⟹ ẏ = ẍ − ẋ = 4x − 2(ẋ − x) + 4e^(−2t)   (using (4.7b) with −2y = −2(ẋ − x))
    ⟹ ẍ + ẋ − 6x = 4e^(−2t)        (4.8)
with solution (by the methods of chapter 2)
    x(t) = Ae^(2t) + Be^(−3t) − e^(−2t)
and then y = ẋ − x. However, this method is not easy to generalise to n × n equations. Is there a better way?
Consider the linear system with constant coefficients
    ẋ = Ax        (4.11)
We try a solution of the form x = e^(λt)v for a constant λ and a constant vector v. Substituting gives:
    ẋ = d/dt(x) = Ax = λe^(λt)v
but since x = e^(λt)v, Ax = e^(λt)Av, so we require
    e^(λt)Av = λe^(λt)v
    Av = λv.        (4.12)
So finding a solution x = eλt v is equivalent to finding the eigenvalues (λ) and eigenvectors (v)
of A. LINEAR ALGEBRA
If we have two such solutions, then by linearity
x(t) = v1 eλ1 t + v2 eλ2 t (4.13)
is also a solution to (4.11).
We will discuss repeated (equal) eigenvalues later. If, however, the eigenvalues are distinct,
be they real or complex, then so are the eigenvectors, and so the two solutions will be linearly
independent. LINEAR ALGEBRA
Example 4.6. Find the general solution to
    ẋ = [2  6; −2  −5]x        (4.14)
The eigenvalues of [2  6; −2  −5] are −1 and −2, with corresponding eigenvectors (−2, 1)ᵀ and
(−3, 2)ᵀ respectively. (Exercise.) So, the general solution is
    (x(t), y(t))ᵀ = a·e^(−t)·(−2, 1)ᵀ + b·e^(−2t)·(−3, 2)ᵀ        (4.15)
Note that for any initial value (x0 , y0 ), the solution will head towards (0, 0) as t → ∞.
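The eigenvalue computation can be checked numerically; a minimal NumPy sketch:

    import numpy as np

    A = np.array([[2.0, 6.0], [-2.0, -5.0]])
    vals, vecs = np.linalg.eig(A)
    print(vals)  # -1 and -2
    print(vecs)  # columns proportional to (-2, 1) and (-3, 2)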
As with second order ODEs, there will be three cases of eigenvalues. (This should come as
no surprise when we consider that second order ODEs with constant coefficients can be written
as a 2 × 2 system.)
1. Distinct Real Eigenvalues As we’ve already seen, if A has two distinct real eigenvalues
λ1 , λ2 with eigenvectors v1 , v2 , then the solution is x(t) = v1 eλ1 t + v2 eλ2 t .
2. Complex Eigenvalues λ = p ± iq. Writing the eigenvector of λ = p + iq as v = v1 + iv2 and
taking the real part of Ce^(λt)v for a complex constant C, a real general solution is
    x(t) = e^(pt)[(a cos qt + b sin qt)v1 + (b cos qt − a sin qt)v2]        (4.17)
where v = v1 + iv2 and a, b are constants determined by the initial conditions.
Example 4.7. Solve
    ẋ = [1  5; −1  −3]x        (4.18)
The matrix [1  5; −1  −3] has eigenvalues λ+ = −1 + i, λ− = −1 − i, with corresponding eigenvectors (Exercise)
    v+ = (5, −2 + i)ᵀ,  v− = (5, −2 − i)ᵀ
So v1 = (5, −2)ᵀ and v2 = (0, 1)ᵀ, and by (4.17) the general solution is
    x(t) = e^(−t)[(a cos t + b sin t)(5, −2)ᵀ + (b cos t − a sin t)(0, 1)ᵀ]
Note again that whatever the initial value (a, b), x(t) → 0 as t → ∞.
Example. Consider
    ẋ = [1  1; 4  −2]x        (4.20)
This equation has eigenvalues 2 and −3, with eigenvectors (1, 1)ᵀ and (1, −4)ᵀ. (Exercise.)
(See Example 28.1 in Robinson, pp. 270–271 for further details.)
Equation (4.20) therefore has solution
    x(t) = a·e^(2t)·(1, 1)ᵀ + b·e^(−3t)·(1, −4)ᵀ        (4.21)
• Any solution with initial value a = 0, b ≠ 0, i.e. x(0) = (k, −4k)ᵀ, will give x(t) → (0, 0)ᵀ as
t → ∞. In the language of the 1-D case, (0, 0)ᵀ is “stable” for such points.
• Any solution with initial value a ≠ 0, b = 0, i.e. x(0) = (k, k)ᵀ, will give |x(t)| → ∞ as t → ∞.
For such points, (0, 0)ᵀ is “unstable”.
All this gives us the phase diagram in figure 4.1 on page 61.
[Figure 4.1: the phase diagram of the saddle point at the origin: trajectories approach along the
(1, −4) direction and escape along the (1, 1) direction.]
The eigenvector directions organise the whole picture. If A has two linearly independent
eigenvectors v1, v2, let P = (v1 v2) and change coordinates by x = Py. Then
    ẏ = P⁻¹ẋ
       = P⁻¹Ax   (since ẋ = Ax)
       = P⁻¹APy   (since x = Py)
i.e.
    ẏ = [λ1  0; 0  λ2]y        (4.24)
Figure 4.2: The phase diagram when λ1 < λ2 < 0. Here (0, 0) is a stable fixed point, also known
as a “sink”.
Figure 4.3: The phase diagram when λ1 > λ2 > 0. Here (0, 0) is an unstable fixed point, also
known as a “source”.
62
More Phase Diagrams
When we have distinct, real eigenvalues, we have to take into account whether the eigenvalues
are positive or negative. This gives rise to three cases:
• We have already seen the case where λ1 < 0 < λ2 . Here (0, 0) is called a “saddle point”.
• If λ1 < λ2 < 0, all solutions tend to (0, 0) – but they move faster in the v1 direction
(like e^(λ1·t) rather than e^(λ2·t)), so the phase portrait will look like figure 4.2 on page 62. Now (0, 0)
is a stable fixed point or a “sink”.
• Alternatively, if λ1 > λ2 > 0, we will get something like figure 4.3 on page 62. Now (0, 0)
is an unstable fixed point or a “source”.
[Figure: a point marked both with Cartesian coordinates (x, y) and polar coordinates (r, θ).]
However, we can also represent any point “uniquely” by a distance r > 0 from the origin and an angle 0 ≤ θ < 2π; the coordinates are now (r, θ). By convention we denote the origin as (0, 0). Compare this with complex numbers:

    x + iy ↦ r(cos θ + i sin θ)

To swap between the two:

    x = r cos θ        r = √(x² + y²)
    y = r sin θ        θ = arctan(y/x)    (choosing the quadrant to match (x, y))
Returning to our system ẋ = Ax when A has complex eigenvalues λ± = p ± iq, changing coordinates using the eigenvectors yields the system

    ẏ = (  p   q )
        ( −q   p ) y.
We now convert this into polar form. Differentiating the relationship r² = y₁² + y₂² yields

    2rṙ = 2y₁ẏ₁ + 2y₂ẏ₂ = 2y₁(py₁ + qy₂) + 2y₂(−qy₁ + py₂) = 2p(y₁² + y₂²) = 2pr²,

while differentiating θ = arctan(y₂/y₁) yields

    θ̇ = 1/(1 + (y₂/y₁)²) · (y₁ẏ₂ − y₂ẏ₁)/y₁² = (y₁ẏ₂ − y₂ẏ₁)/r² = −q(y₁² + y₂²)/r².

Hence

    ṙ = pr    (4.27a)
    θ̇ = −q    (4.27b)

with solution r(t) = r(0)e^{pt}, θ(t) = θ(0) − qt. Solutions therefore rotate about the origin at constant angular speed q, spiralling inwards if p = ℜ[λ] < 0 and outwards if p > 0.
(See Robinson chapter 29, pp. 285 ff. for a further discussion of this topic.)
Figure 4.4: The phase diagrams when the eigenvalues are complex; the example on the left is
when ℜ[λ] > 0, while on the right ℜ[λ] < 0.
4.4.3 Repeated Real Eigenvalues
Finally, we return to the case of a repeated eigenvalue λ with corresponding eigenvector v. The problem in this case is that we only have one solution, x(t) = e^{λt} v. Thinking of previous work, we try a solution of the form

    x(t) = t e^{λt} a.

Substituting into ẋ = Ax gives

    a e^{λt} + a λ t e^{λt} = A a t e^{λt}

(the left-hand side being ẋ, the right-hand side Ax). Collecting terms gives λa = Aa (from the coefficients of t e^{λt}), but also a = 0 (from the coefficients of e^{λt}), which means there are no non-zero solutions of the form x(t) = t e^{λt} a. So, what do we try instead? Let’s try a more general solution:

    x(t) = e^{λt} (a + t b).

Substituting and comparing coefficients of e^{λt} and t e^{λt} as before gives

    aλ + b = Aa,   or   (A − λI)a = b    (4.29)
    bλ = Ab,   or   (A − λI)b = 0.    (4.30)
From (4.30) we see that b is an eigenvector of A. So, take b = v, and we can now find a from (4.29). And so we have our solution:

    x(t) = B e^{λt} v + C e^{λt} (a + t v)    (4.31)

where B, C are arbitrary constants.
Example. Solve

    ẋ = ( 3  −4 )
        ( 1  −1 ) x.    (4.32)

The matrix has the repeated eigenvalue λ = 1 (Exercise), with eigenvector v = (2, 1), so

    x(t) = a e^{t} (2, 1)

is one solution.
We now need a vector u such that (A − λI)u = v, i.e.

    [ ( 3  −4 )  −  ( 1  0 ) ] u = ( 2 )
    [ ( 1  −1 )     ( 0  1 ) ]     ( 1 )

i.e.

    ( 2  −4 ) u = ( 2 )
    ( 1  −2 )     ( 1 )

If u = (u₁, u₂), then we require

    2u₁ − 4u₂ = 2
    u₁ − 2u₂ = 1;

one solution is u = (1, 0). This gives the second solution to (4.32) as

    b e^{t} [ (1, 0) + t (2, 1) ] = b e^{t} (1 + 2t, t).
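A small symbolic check, assuming sympy is available, that this second solution really does satisfy (4.32):

    # Substitute x(t) = e^t (1 + 2t, t) into x' = Ax and simplify the residual.
    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[3, -4], [1, -1]])
    x = sp.exp(t) * sp.Matrix([1 + 2*t, t])
    residual = sp.diff(x, t) - A * x
    print(residual.applyfunc(sp.simplify))   # Matrix([[0], [0]])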
[Figure: phase diagram for the repeated-eigenvalue example (4.32).]
Second Order ODEs as Systems

Consider the linear second order equation

    a ẍ + b ẋ + c x = 0.

By setting ẋ = y, we can rewrite this as a coupled first-order system of two equations in two unknowns:

    ẋ = y
    ẏ = −(c/a)x − (b/a)y

or, in matrix form:

    ẋ = (   0      1   )
        ( −c/a   −b/a  ) x.    (4.35)
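The eigenvalues of the matrix in (4.35) are exactly the roots of the auxiliary equation aλ² + bλ + c = 0 from the second order theory. A hedged sketch illustrating this (the coefficients a = 1, b = 3, c = 2 are an illustrative choice, not from the notes):

    # Compare companion-matrix eigenvalues with the auxiliary equation's roots.
    import numpy as np

    a, b, c = 1.0, 3.0, 2.0                  # i.e. x'' + 3x' + 2x = 0
    companion = np.array([[0.0, 1.0],
                          [-c / a, -b / a]])
    print(np.linalg.eig(companion)[0])       # -1 and -2, in some order
    print(np.roots([a, b, c]))               # the same roots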
However, so far we have only considered systems ẋ = f₁(x, y), ẏ = f₂(x, y) in which f₁ and f₂ are linear. What if they are no longer linear? To answer this, we first discuss derivatives of functions of two variables. Throughout we assume that we have “nice” functions.
For a function of one variable, f(x), we can think of the graph y = f(x) as a curve in R². At any point x, f′(x) gives the gradient of f at x, i.e. the gradient of the tangent at x. The derivative is defined as

    f′(x) = lim_{h→0} [f(x + h) − f(x)] / h.
For a function f(x, y), the “graph” of f(x, y) against (x, y) is a surface in R³, z = f(x, y). Now the gradient depends on the direction we wish to move. Taking x = (x, y), the directional derivative in the direction v is defined as

    D_v f(x) = lim_{h→0} [f(x + h v̂) − f(x)] / h.

Equivalently this is

    D_v f(x) = (∂f/∂x, ∂f/∂y) · v̂

where v̂ is the unit vector in the direction of v. For v = (1, 0) or (0, 1), this is just ∂f/∂x or ∂f/∂y respectively, where, for example,

    ∂f/∂x = lim_{h→0} [f(x + h, y) − f(x, y)] / h.
For shorthand purposes, (∂f/∂x, ∂f/∂y) is denoted ∇f(x, y), or grad(f), the gradient of f.
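A numerical sanity check that the limit definition agrees with ∇f · v̂; the function, point and direction below are illustrative choices, not from the notes:

    # Compare a finite-difference directional derivative with grad(f) . v_hat.
    import numpy as np

    def f(x, y):
        return x**2 + 3*x*y                  # an arbitrary "nice" function

    def grad_f(x, y):
        return np.array([2*x + 3*y, 3*x])    # its gradient, computed by hand

    x0, y0 = 1.0, 2.0
    v = np.array([3.0, 4.0])
    v_hat = v / np.linalg.norm(v)
    h = 1e-6
    numeric = (f(x0 + h*v_hat[0], y0 + h*v_hat[1]) - f(x0, y0)) / h
    print(numeric, grad_f(x0, y0) @ v_hat)   # both approximately 7.2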
If x = x(t) and y = y(t) are themselves functions of t, then along the curve (x(t), y(t)) the chain rule gives

    df/dt = ∂f/∂x · dx/dt + ∂f/∂y · dy/dt.

For example, taking f(x, y) = x² + y with x(t) = 3t² + 1 and y(t) = 2t,

    df/dt = 2(3t² + 1) · 6t + 1 · 2 = 12t(3t² + 1) + 2,

the four factors being ∂f/∂x, dx/dt, ∂f/∂y and dy/dt respectively.
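A one-line symbolic confirmation of this computation (assuming sympy, and using the same f, x(t), y(t) as above):

    import sympy as sp

    t = sp.symbols('t')
    x, y = 3*t**2 + 1, 2*t
    f = x**2 + y
    print(sp.expand(sp.diff(f, t)))          # 36*t**3 + 12*t + 2 = 12t(3t^2+1) + 2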
Level curves
For a function f (x, y), we can ask ourselves at which points f (x, y) = k, where k is some
constant. For “nice” functions, these points (x, y) lie on smooth curves, called level curves (or
sometimes level sets). For our graph they will look something like this (think of contours of a
map):
[Figure: contour plot showing the level curves of f in the (x, y)-plane.]
The question is: how can we walk around the hill keeping at constant height?
Parametric Curves
A smooth curve in R² is parametric if it can be written as a function of one variable, say t, i.e. (x(t), y(t)); t now says “how far along the curve we are”. For example:

    (x(t), y(t)) = (t² + 5, 1/t),   1 ≤ t ≤ 35/12,

or

    (x(t), y(t)) = (cos t, sin t),   0 ≤ t < 2π.
For such curves, the vector (dx/dt, dy/dt) is the tangent vector to the curve at (x, y) (think of dy/dx for y = f(x) in 1-D): in a small time δt we move by δx ≈ (dx/dt) δt and δy ≈ (dy/dt) δt, so

    dy/dx = (dy/dt) / (dx/dt).
Theorem 4.9. At a point (x₀, y₀), the vector (∂f/∂x, ∂f/∂y) evaluated at (x₀, y₀), i.e. ∇f(x₀, y₀), is normal to the level curve through (x₀, y₀).
[Diagram: the level curve f(x, y) = k through (x₀, y₀), with the normal vector ∇f(x₀, y₀) drawn at that point.]
Proof. On a level curve parametrised by t, i.e. given by (x(t), y(t)), we must have df/dt = 0, as the value of f does not change. By the chain rule,

    df/dt = ∂f/∂x · dx/dt + ∂f/∂y · dy/dt = ∇f · (dx/dt, dy/dt).

So ∇f · (dx/dt, dy/dt) = 0, i.e. ∇f is perpendicular to the tangent vector and hence normal to the curve.
Theorem 4.10. The maximum value of D_v f(x, y) (“the gradient in the direction of v”) occurs in the direction of ∇f, with maximum value |∇f(x, y)|.

Proof. If θ is the angle between v and ∇f(x, y), then

    D_v f(x, y) = ∇f(x, y) · v̂ = |∇f(x, y)| |v̂| cos θ = |∇f(x, y)| cos θ.

The maximum occurs when cos θ = 1, i.e. θ = 0, i.e. v in the direction of ∇f(x, y).
Recall the Taylor expansion in one variable:

    f(x + h) = f(x) + h f′(x) + (h²/2) f″(x) + ⋯

Similarly, for f(x, y) and a small displacement h = (h₁, h₂),

    f(x + h) = f(x + h₁, y + h₂) ≈ f(x, y) + h₁ ∂f/∂x (x, y) + h₂ ∂f/∂y (x, y)
                 + (h₁²/2) ∂²f/∂x² + h₁h₂ ∂²f/∂x∂y + (h₂²/2) ∂²f/∂y² + ⋯    (4.38)

If h is small, then to first order

    f(x + h) ≈ f(x) + ∇f(x) · h.

If f(x) = 0 then

    f(x + h) ≈ ∇f(x) · h.
Approximating a solution of a 2 × 2 system near fixed points
We can now finally return to the general nonlinear 2 × 2 system of ODEs. Consider the system

    dx/dt = f₁(x, y)    (4.39a)
    dy/dt = f₂(x, y)    (4.39b)
Or, letting x = (x, y) and f(x) = (f₁(x), f₂(x)), we have

    ẋ = f(x).
Assume we have a fixed point, that is, an x∗ = (x∗, y∗) such that f(x∗) = 0, and consider points
near x∗ . We consider a small change u = (u, v) in x near x∗ , such that if (u, v) = (0, 0) then
we are at x∗ , and if u and v are small then we are nearby, given by x = x∗ + u, y = y∗ + v.
In essence then, we have changed coordinates, with (u, v) = (0, 0) being the fixed point. For
points near x∗ , we can write the system as follows. Consider equation (4.39a):
    dx/dt = d/dt (x∗ + u)
          = d/dt (x∗) + du/dt = du/dt    since x∗ is a constant
          = f₁(x∗ + u)
          ≈ f₁(x∗) + ∇f₁(x∗) · u = ∇f₁(x∗) · u    since f₁(x∗) = 0,

so du/dt ≈ ∇f₁(x∗) · u.
Similarly, considering equation (4.39b):
    dy/dt = d/dt (y∗ + v)
          = d/dt (y∗) + dv/dt = dv/dt    since y∗ is a constant
          = f₂(x∗ + u)
          ≈ f₂(x∗) + ∇f₂(x∗) · u = ∇f₂(x∗) · u    since f₂(x∗) = 0,

so dv/dt ≈ ∇f₂(x∗) · u.
Putting this all together, we get the following:
    u̇ = ( ∂f₁/∂x (x∗)   ∂f₁/∂y (x∗) )
        ( ∂f₂/∂x (x∗)   ∂f₂/∂y (x∗) ) u    (4.40)
That is, close to the fixed point x∗, the solutions behave like

    u̇ = Df(x∗) u,
i.e. a linear system where the stability of the fixed point x∗ is given by the eigenvalues of Df (x∗ ),
the Jacobian matrix of f evaluated at the fixed point. In other words, the behaviour of solutions
to nonlinear systems can be approximated by a linear system near fixed points (there is actually
a theorem justifying this more rigorously, the Hartman-Grobman Theorem). All this now allows
us to compute the stability of fixed points.
Example 4.11 (Lotka-Volterra). Consider the predator-prey model
    dN/dt = N(a − bP)    (4.41a)
    dP/dt = P(cN − d)    (4.41b)
with a, b, c, d > 0.
The fixed points occur at (0, 0) and (d/c, a/b), and the Jacobian matrix is

    Df = ( a − bP     −bN   )
         (   cP     cN − d  )

So, at (0, 0), this is

    ( a    0 )
    ( 0   −d )

with eigenvalues a, −d and eigenvectors (1, 0), (0, 1); since a > 0 > −d, the origin is a saddle point.

At (d/c, a/b), we have

    (   0     −bd/c )
    ( ac/b      0   )

with eigenvalues ±i√(ad), a centre.
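A sketch of this stability computation in code, with the illustrative parameter choice a = b = c = d = 1 (the notes keep the parameters general, and the helper `jacobian` below is ours, not from the notes):

    # Eigenvalues of the Jacobian at the two fixed points of (4.41).
    import numpy as np

    a, b, c, d = 1.0, 1.0, 1.0, 1.0

    def jacobian(N, P):
        return np.array([[a - b*P, -b*N],
                         [c*P, c*N - d]])

    print(np.linalg.eig(jacobian(0.0, 0.0))[0])    # [1., -1.]: a saddle
    print(np.linalg.eig(jacobian(d/c, a/b))[0])    # [+1j, -1j] = +-i*sqrt(ad): a centre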
Note: at every point on a solution x(t), the vector (dx/dt, dy/dt) points in the direction of the solution. The collection of all such arrows is called the vector field. For any (x, y), we can draw the vector (dx/dt, dy/dt) without solving anything, meaning that we can once again find integral curves without actually obtaining an explicit solution. This is exhibited in figure 4.8.
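A minimal sketch, assuming numpy and matplotlib are available, of drawing such a vector field without solving anything; the Lotka-Volterra system with a = b = c = d = 1 is used purely as an illustration:

    # Draw the vector field (dN/dt, dP/dt) on a grid of points.
    import numpy as np
    import matplotlib.pyplot as plt

    N, P = np.meshgrid(np.linspace(0.1, 3, 20), np.linspace(0.1, 3, 20))
    dN = N * (1 - P)                         # dN/dt = N(a - bP)
    dP = P * (N - 1)                         # dP/dt = P(cN - d)
    plt.quiver(N, P, dN, dP)
    plt.xlabel('N'); plt.ylabel('P')
    plt.show()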
APPENDIX A
Ready Reference
This is a summary sheet for the first half of the module, solving first and second order ordinary differential equations. We assume conditions for the FTC hold. You should learn all these techniques by heart, and practice, practice, practice!
Linear First Order ODEs

    dx/dt + p(t)x = q(t)

Multiply both sides by an Integrating Factor P(t) = exp(∫ p(t) dt) so that

    d/dt (P(t)x(t)) = P(t)q(t).

Then integrate, so that

    x(t) = P(t)⁻¹ ∫ᵗ P(s)q(s) ds + A P(t)⁻¹.
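A worked instance of this recipe, checked against sympy's dsolve (the equation dx/dt + 2x = e^t is an illustrative choice, not from the notes): here p(t) = 2, so P(t) = e^{2t}, and the recipe gives x(t) = e^t/3 + Ae^{−2t}.

    import sympy as sp

    t = sp.symbols('t')
    x = sp.Function('x')
    print(sp.dsolve(sp.Eq(x(t).diff(t) + 2*x(t), sp.exp(t)), x(t)))
    # Eq(x(t), C1*exp(-2*t) + exp(t)/3), agreeing with the recipe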
Separable Equations (Section 1.6)
    dx/dt = f(x)g(t)

First look for constant solutions, i.e. where f(x) = 0. Then look for non-constant solutions (so f(x) is never zero) and “divide both sides by f(x), multiply both sides by dt and integrate”:

    ∫ dx/f(x) = ∫ g(t) dt
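An illustrative instance (dx/dt = xt, not from the notes): the constant solution is x = 0, and otherwise ∫ dx/x = ∫ t dt gives x(t) = Ae^{t²/2}, which sympy confirms:

    import sympy as sp

    t = sp.symbols('t')
    x = sp.Function('x')
    print(sp.dsolve(sp.Eq(x(t).diff(t), x(t)*t), x(t)))
    # Eq(x(t), C1*exp(t**2/2))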
Autonomous First Order ODEs (Section 1.9)
    dx/dt = f(x)

Look for fixed points x∗, which satisfy f(x∗) = 0, i.e. are points where dx/dt = 0. A fixed point x∗ is stable if f′(x∗) < 0 and unstable if f′(x∗) > 0.
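A short sketch, assuming sympy, applying this stability test to the illustrative equation dx/dt = x(1 − x) (not from the notes):

    import sympy as sp

    x = sp.symbols('x')
    f = x*(1 - x)
    for xstar in sp.solve(f, x):             # fixed points: 0 and 1
        slope = sp.diff(f, x).subs(x, xstar)
        print(xstar, "stable" if slope < 0 else "unstable")
    # 0 unstable (f'(0) = 1 > 0), 1 stable (f'(1) = -1 < 0)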
Linear Second Order ODEs

    a d²x/dt² + b dx/dt + cx = f(t)

The solution consists of x(t) = x_c(t) + x_p(t), where x_c(t), the complementary solution, solves the homogeneous case f(t) = 0, and x_p(t), the particular integral, is any single solution of the full equation with f(t) present.
The Complementary Solution

    a d²x/dt² + b dx/dt + cx = 0

Find the roots of the auxiliary equation

    aλ² + bλ + c = 0,

i.e. λ± = (−b ± √(b² − 4ac)) / 2a. Then:

• Real distinct roots k₁, k₂: the complementary solution is

    A e^{k₁t} + B e^{k₂t}

• A repeated real root k: the complementary solution is

    A e^{kt} + B t e^{kt}

• Complex roots p ± iq: the complementary solution is

    e^{pt} (A cos qt + B sin qt),   or equivalently   A e^{pt} cos(qt − φ).
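The three cases can be mechanised; a hedged sketch using numpy (the coefficients, and the helper name complementary_form, are illustrative, not from the notes):

    import numpy as np

    def complementary_form(a, b, c):
        """Return the complementary solution of a*x'' + b*x' + c*x = 0 as a string."""
        disc = b*b - 4*a*c
        r1, r2 = np.roots([a, b, c])
        if disc > 0:
            return f"A exp({r1:.3g} t) + B exp({r2:.3g} t)"
        if disc == 0:
            return f"(A + B t) exp({r1:.3g} t)"
        p, q = r1.real, abs(r1.imag)
        return f"exp({p:.3g} t) (A cos({q:.3g} t) + B sin({q:.3g} t))"

    print(complementary_form(1, 3, 2))       # real roots -1, -2
    print(complementary_form(1, 2, 1))       # repeated root -1
    print(complementary_form(1, 2, 5))       # complex roots -1 +- 2i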
The Particular Integral

Functions to “guess” for x_p(t), according to the form of f(t):

• f(t) a polynomial of degree n: try a general polynomial of degree n.
• f(t) = Ce^{kt}: try x_p(t) = De^{kt}.
• f(t) = C cos ωt or C sin ωt: try x_p(t) = D cos ωt + E sin ωt.

If the guess solves the homogeneous equation, multiply it by t (and by t² if that too solves the homogeneous equation).