c_1 x'''(t) + c_2 x''(t) + c_3 x'(t) + c_4 x(t) = 0

can be reduced to a first order system using the following set of substitutions:
x_1 = x, \quad x_2 = x', \quad x_3 = x''
giving:
x_1' = x_2, \quad x_2' = x_3, \quad x_3' = -\frac{c_4}{c_1} x_1 - \frac{c_3}{c_1} x_2 - \frac{c_2}{c_1} x_3
We can write this in matrix form:
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}' =
\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -c_4/c_1 & -c_3/c_1 & -c_2/c_1 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
Hence, any linear constant coefficient ODE or system of such ODEs can be written in the following matrix form:
x'(t) = A x(t)
which has solution:
x(t) = exp(tA) x(0)
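As a concrete illustration of this solution formula, the sketch below solves a hypothetical third order equation, x''' + 6x'' + 11x' + 6x = 0 (so c_1, ..., c_4 = 1, 6, 11, 6), via its companion matrix; the equation, initial conditions, and the helper expm_series are all assumptions made for the example, not part of the notes.

```python
import numpy as np

# Hypothetical example: x''' + 6x'' + 11x' + 6x = 0, whose characteristic
# roots are -1, -2, -3; in the notation above c1, c2, c3, c4 = 1, 6, 11, 6.
c1, c2, c3, c4 = 1.0, 6.0, 11.0, 6.0

# Companion matrix from the substitutions x1 = x, x2 = x', x3 = x''.
A = np.array([[0.0,     1.0,     0.0],
              [0.0,     0.0,     1.0],
              [-c4/c1, -c3/c1, -c2/c1]])

def expm_series(M, terms=60):
    """Truncated power series sum_n t^n A^n / n! for the matrix exponential."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

t = 0.5
x0 = np.array([1.0, 0.0, 0.0])          # x(0) = 1, x'(0) = 0, x''(0) = 0
x_t = expm_series(t * A) @ x0

# For these initial conditions the closed form is 3e^{-t} - 3e^{-2t} + e^{-3t}.
exact = 3*np.exp(-t) - 3*np.exp(-2*t) + np.exp(-3*t)
print(abs(x_t[0] - exact) < 1e-9)  # True
```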
2 Computing Matrix Exponentials
The exponential of the matrix tA is given by:
\exp(tA) = \sum_{n=0}^{\infty} \frac{1}{n!} t^n A^n
For a diagonal matrix,
\exp \begin{pmatrix} a & 0 & \cdots & 0 \\ 0 & b & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & n \end{pmatrix}
= \begin{pmatrix} \exp(a) & 0 & \cdots & 0 \\ 0 & \exp(b) & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & \exp(n) \end{pmatrix}
Given two matrices A and B then
exp(A + B) = exp(A)exp(B)
if AB = BA. Note that any scalar multiple of the identity commutes with all matrices.
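The commuting condition matters: the sketch below checks the identity for a scalar multiple of the identity (which always commutes) and shows it failing for a non-commuting pair. The matrices and the expm_series helper are assumptions made for the example.

```python
import numpy as np

def expm_series(M, terms=40):
    """Truncated power series for the matrix exponential."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

# B is a scalar multiple of the identity, so it commutes with any A.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = 0.5 * np.eye(2)

lhs = expm_series(A + B)
rhs = expm_series(A) @ expm_series(B)
print(np.allclose(lhs, rhs))  # True

# With a non-commuting pair the identity generally fails.
C = np.array([[0.0, 1.0], [0.0, 0.0]])
D = np.array([[0.0, 0.0], [1.0, 0.0]])
print(np.allclose(expm_series(C + D), expm_series(C) @ expm_series(D)))  # False
```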
2 by 2 Matrices
A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}
= \begin{pmatrix} \frac{a+d}{2} & 0 \\ 0 & \frac{a+d}{2} \end{pmatrix}
+ \begin{pmatrix} \frac{a-d}{2} & b \\ c & \frac{d-a}{2} \end{pmatrix}
= B + C
and we have BC = CB so that \exp(B + C) = \exp B \exp C. Letting \lambda = \frac{a+d}{2}, we then have

\exp(tA) = \exp(tB)\exp(tC) = \begin{pmatrix} \exp(\lambda t) & 0 \\ 0 & \exp(\lambda t) \end{pmatrix} \exp(tC)
Now, the discriminant of A is
\Delta = (\operatorname{tr} A)^2 - 4 \det A

and C^2 = \frac{\Delta}{4} I. This leads to three cases:

i) \Delta = 0, then

\exp(tC) = I + tC

ii) \Delta < 0, then

\exp(tC) = \cos\!\left(\frac{\sqrt{-\Delta}}{2} t\right) I + \frac{\sin\!\left(\frac{\sqrt{-\Delta}}{2} t\right)}{\frac{\sqrt{-\Delta}}{2}}\, C

iii) \Delta > 0, then

\exp(tC) = \cosh\!\left(\frac{\sqrt{\Delta}}{2} t\right) I + \frac{\sinh\!\left(\frac{\sqrt{\Delta}}{2} t\right)}{\frac{\sqrt{\Delta}}{2}}\, C
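The three cases can be checked numerically against a truncated power series; this is a sketch, with expm_2x2 and expm_series hypothetical helpers and the three test matrices chosen to hit each sign of the discriminant.

```python
import numpy as np

def expm_2x2(A, t):
    """exp(tA) via the splitting A = lambda*I + C with C traceless."""
    a, b = A[0]
    c, d = A[1]
    lam = (a + d) / 2.0
    C = A - lam * np.eye(2)
    disc = (a + d)**2 - 4*(a*d - b*c)       # (tr A)^2 - 4 det A
    if disc == 0:
        f = np.eye(2) + t*C
    elif disc < 0:
        w = np.sqrt(-disc) / 2.0
        f = np.cos(w*t)*np.eye(2) + (np.sin(w*t)/w)*C
    else:
        w = np.sqrt(disc) / 2.0
        f = np.cosh(w*t)*np.eye(2) + (np.sinh(w*t)/w)*C
    return np.exp(lam*t) * f

def expm_series(M, terms=60):
    out, term = np.eye(2), np.eye(2)
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

t = 0.7
cases = (np.array([[0.0, 1.0], [-1.0, 0.0]]),   # disc < 0: rotation
         np.array([[1.0, 1.0], [0.0, 1.0]]),    # disc = 0: shear
         np.array([[2.0, 1.0], [1.0, 2.0]]))    # disc > 0
ok = all(np.allclose(expm_2x2(A, t), expm_series(t*A)) for A in cases)
print(ok)  # True
```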
n by n Matrices
Every n by n matrix A is similar to its Jordan form J, which can be written as the sum of
a diagonal and a nilpotent matrix, J = D + N. We have
A = PJP^{-1}, \qquad \exp(tA) = P \exp(tJ) P^{-1} = P \exp(tD)\exp(tN) P^{-1}
The Jordan form J has the eigenvalues of A on the diagonal, and some ones below the diagonal when eigenvalues are repeated. The columns of the matrix P are the (generalised) eigenvectors of A. The entries of P can also be found once you know J, using AP = PJ.
The exponential of the nilpotent matrix N is computed directly using the exponential formula; since N is nilpotent, the series terminates after finitely many terms.
Note that in the case of a higher order scalar equation, we only need the first row of P, as we are just looking for x(t).
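The terminating series for a nilpotent part can be seen directly on a single Jordan block; this is a sketch with an assumed eigenvalue and block size, and expm_series a hypothetical helper used only as a cross-check.

```python
import numpy as np

# A single 3x3 Jordan block J = D + N with eigenvalue lam; following the
# notes, the ones sit below the diagonal, so N is strictly lower triangular.
lam = 2.0
D = lam * np.eye(3)
N = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

t = 0.3
# D is a scalar multiple of I, so it commutes with N, and N^3 = 0, so the
# series for exp(tN) terminates: exp(tJ) = exp(tD) (I + tN + t^2 N^2 / 2).
expJ = np.exp(lam*t) * (np.eye(3) + t*N + (t**2/2) * (N @ N))

def expm_series(M, terms=60):
    out, term = np.eye(3), np.eye(3)
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

print(np.allclose(expJ, expm_series(t*(D + N))))  # True
```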
3 Higher Order Scalar ODEs
Consider a higher order scalar ODE,
c_n \frac{d^n x}{dt^n} + \dots + c_2 \frac{d^2 x}{dt^2} + c_1 \frac{dx}{dt} + c_0 x = 0
which we can write as
p\!\left(\frac{d}{dt}\right) x = 0
where p is the polynomial
p(s) = c_n s^n + \dots + c_2 s^2 + c_1 s + c_0

which has roots \lambda_i.
A basis for the solution space is then
\left\{ \exp(\lambda_1 t),\; t\exp(\lambda_1 t),\; \dots,\; t^{r_1 - 1}\exp(\lambda_1 t),\; \dots,\; \exp(\lambda_k t),\; \dots,\; t^{r_k - 1}\exp(\lambda_k t) \right\}
where the \lambda_i are the individual roots of the equation and r_i is the multiplicity of the i-th root.
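A repeated root is where the powers of t enter the basis. The sketch below takes the hypothetical example x'' - 2x' + x = 0, whose characteristic polynomial (s-1)^2 has a double root, so the basis is {e^t, t e^t}; the cross-check against the companion-matrix solution and the expm_series helper are assumptions made for the example.

```python
import numpy as np

# x'' - 2x' + x = 0: characteristic polynomial s^2 - 2s + 1 = (s - 1)^2.
roots = np.roots([1.0, -2.0, 1.0])
print(np.allclose(roots, [1.0, 1.0]))   # True: a repeated root

# With x(0) = 1, x'(0) = 0 the combination of e^t and t e^t is (1 - t) e^t.
def x_closed(t):
    return (1.0 - t) * np.exp(t)

# Cross-check against the companion-matrix solution exp(tA) x(0).
A = np.array([[0.0, 1.0], [-1.0, 2.0]])

def expm_series(M, terms=60):
    out, term = np.eye(2), np.eye(2)
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

t = 0.8
x_t = expm_series(t*A) @ np.array([1.0, 0.0])
print(abs(x_t[0] - x_closed(t)) < 1e-9)  # True
```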
In the inhomogeneous case, we have p\!\left(\frac{d}{dt}\right) x = f, and have the special case where f itself satisfies some differential equation q\!\left(\frac{d}{dt}\right) f = 0. Hence

q\!\left(\frac{d}{dt}\right) p\!\left(\frac{d}{dt}\right) x = 0

and we can form a basis for the solution space using the roots of r(s) = q(s)p(s). It is then possible to evaluate the coefficients of the particular solution to the inhomogeneous equation by evaluating p\!\left(\frac{d}{dt}\right) x = f.
4 Non-constant Coefficients
Homogeneous Scalar Equations
The homogeneous equation
x'(t) = a(t)x(t)

has unique solution:

x(t) = x(0) \exp\!\left( \int_0^t a(s)\, ds \right)
Inhomogeneous Scalar Equations
The inhomogeneous equation
x'(t) = A(t)x(t) + f(t)

has unique solution:

x(t) = W(t)x(0) + \int_0^t W(t) W^{-1}(s)\, f(s)\, ds
where W(t) satisfies the matrix initial value problem

W'(t) = A(t)W(t), \qquad W(0) = I

5 Reduction of Order

Consider a second order scalar equation

p(t)x''(t) + q(t)x'(t) + r(t)x(t) = 0 \qquad (1)

with two solutions x_1, x_2, and define the Wronskian

w(t) = x_1(t)x_2'(t) - x_1'(t)x_2(t)

so that

w'(t) = x_1(t)x_2''(t) - x_1''(t)x_2(t)
giving

p(t)w'(t) + q(t)w(t) = x_1(t)\big(p(t)x_2''(t) + q(t)x_2'(t) + r(t)x_2(t)\big) - x_2(t)\big(p(t)x_1''(t) + q(t)x_1'(t) + r(t)x_1(t)\big)
so if x_1, x_2 solve (1) then w(t) solves

p(t)w'(t) + q(t)w(t) = 0

giving

w(t) = w(0) \exp\!\left( -\int_0^t \frac{q(s)}{p(s)}\, ds \right)
and as

\frac{d}{dt}\left( \frac{x_2(t)}{x_1(t)} \right) = \frac{w(t)}{x_1^2(t)},

\frac{x_2(t)}{x_1(t)} = \frac{x_2(0)}{x_1(0)} + \int_0^t \frac{w(s)}{x_1(s)^2}\, ds
The general solution is then any linear combination of x_1 and x_2:

x(t) = c_1 x_1(t) + c_2 x_2(t)
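The Wronskian route can be walked through on a hypothetical example: for x'' - 2x' + x = 0 with known solution x_1 = e^t, the Wronskian solves w' = 2w, so w(t) = w(0)e^{2t}; taking x_2(0) = 0 and w(0) = 1, the quotient formula gives x_2/x_1 = t, i.e. x_2(t) = t e^t. The check below verifies this x_2 against the ODE using its exact derivatives.

```python
import numpy as np

# Hypothetical example: p = 1, q = -2, r = 1, known solution x1 = e^t.
# Reduction of order (see lead-in) yields the second solution x2(t) = t e^t.
def x2(t):
    return t * np.exp(t)

# Exact derivatives of t e^t: x2' = (1 + t) e^t, x2'' = (2 + t) e^t, so the
# residual x2'' - 2 x2' + x2 should vanish identically.
t = np.linspace(0.0, 2.0, 9)
residual = (2 + t)*np.exp(t) - 2*(1 + t)*np.exp(t) + t*np.exp(t)
print(np.allclose(residual, 0.0))  # True
```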
Part II
Stability
6 Non-linear ODEs
Non-linear ODEs
A non-linear ODE is of the form
x'(t) = F\big(x(t), t\big)
Autonomous Systems
An autonomous system is of the form
x'(t) = F\big(x(t)\big)
7 Equilibria and Stability
Equilibria
An equilibrium of an autonomous system x'(t) = F(x(t)) is a c such that

F(c) = 0

i.e. the equilibria of a system are the zeros of F.
Stability
An equilibrium c is said to be stable if \forall \varepsilon > 0, \exists \delta > 0 such that if

\| x(0) - c \| \le \delta

then

\| x(t) - c \| \le \varepsilon

for all positive t.
Asymptotic Stability
An equilibrium c is said to be asymptotically stable if \exists \delta > 0 such that

\| x(0) - c \| \le \delta \implies \lim_{t \to \infty} x(t) = c
Strict Stability
An equilibrium c is said to be strictly stable if it is both stable and asymptotically stable.
Stability and Invariants
If c is an equilibrium of an autonomous system and E is a continuously differentiable invariant of the system which has a strict local minimum at c, then c is stable but not asymptotically stable.
Stability of Linear Constant Coecient First Order Systems
These are systems
x'(t) = Ax(t)

with solution

x(t) = \exp(tA)x(0) = P \exp(tJ) P^{-1} x(0)
8 Linearisation

The linearisation of an autonomous system x'(t) = F(x(t)) about an equilibrium c is the matrix A defined by

a_{jk} = \frac{\partial F_j}{\partial x_k}(c)
If all eigenvalues of A have negative real parts, then c is strictly stable.
If some eigenvalue of A has positive real part, then c is neither stable nor asymptotically
stable.
Otherwise, we learn nothing.
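The eigenvalue test can be run on a standard example (not from the notes): the damped pendulum x' = y, y' = -sin(x) - y has an equilibrium at the origin, and its Jacobian there has eigenvalues with real part -1/2, so the origin is strictly stable.

```python
import numpy as np

# Damped pendulum: x' = y, y' = -sin(x) - y, equilibrium c = (0, 0).
# Jacobian entries a_jk = dF_j/dx_k evaluated at c:
A = np.array([[0.0,            1.0],
              [-np.cos(0.0),  -1.0]])   # = [[0, 1], [-1, -1]]

# Characteristic polynomial s^2 + s + 1: roots (-1 +/- i sqrt(3)) / 2.
eigs = np.linalg.eigvals(A)
print(np.all(eigs.real < 0))  # True: c is strictly stable
```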
9 Method of Lyapunov
Lyapunov Function
A Lyapunov function for the equilibrium c of an autonomous system is a continuously differentiable function V with a strict local minimum at c such that

\sum_j \frac{\partial V}{\partial x_j} F_j \le 0
Strict Lyapunov Function
A strict Lyapunov function is a Lyapunov function satisfying

\sum_j \frac{\partial V}{\partial x_j} F_j \le -r \big( V(x) - V(c) \big)

for some positive r.
An equilibrium c is stable if it admits a Lyapunov function, and strictly stable if it admits
a strict Lyapunov function.
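A standard example (not from the notes): for the system x' = -y - x^3, y' = x - y^3, the function V(x, y) = x^2 + y^2 has a strict minimum at the origin and its derivative along trajectories is -2(x^4 + y^4) <= 0, so V is a Lyapunov function and the origin is stable. The sketch below evaluates the sum from the definition on a grid.

```python
import numpy as np

# System: F = (-y - x^3, x - y^3); candidate V(x, y) = x^2 + y^2.
def lyapunov_derivative(x, y):
    Fx, Fy = -y - x**3, x - y**3
    dVdx, dVdy = 2*x, 2*y
    # sum_j dV/dx_j * F_j = -2x*y - 2x^4 + 2x*y - 2y^4 = -2(x^4 + y^4)
    return dVdx * Fx + dVdy * Fy

# Sample the plane: the derivative along trajectories is never positive.
xs = np.linspace(-2, 2, 41)
vals = [lyapunov_derivative(x, y) for x in xs for y in xs]
print(max(vals) <= 0.0)  # True: the origin is stable
```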