Part I
Dynamical Systems
Lotka-Volterra equations
Simplified ecology with two species:
F(t) = number of foxes at time t,
R(t) = number of rabbits at time t.
Assume populations large enough that F and R can be treated as
smoothly varying in time.
In the 1920s, Alfred Lotka and Vito Volterra independently proposed the predator-prey model
    dF/dt = -aF + γFR,
    dR/dt = bR - δFR,
    F(0) = F0,    R(0) = R0.
If γ = δ = 0 (no interaction) then
    dF/dt = -aF    and    dR/dt = bR,
so
    F(t) = F0 e^{-at}    and    R(t) = R0 e^{bt}.
Interpretation: if the foxes fail to catch any rabbits, then the foxes starve (F → 0) and the rabbits multiply without limit (R → ∞).
Interaction terms
Rewrite the equations as
    (1/F) dF/dt = -a + γR,
    (1/R) dR/dt = b - δF.
So
    γR = relative rate of increase in the fox population due to predation on rabbits,
and
    δF = relative rate of decrease in the rabbit population due to predation by foxes.
Periodic solutions
Example with a = 1.0, γ = 0.5, b = 1.5 and δ = 0.75.
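The periodic orbits can be reproduced numerically. Below is a minimal Julia sketch (not part of the original notes) that integrates the system with the forward Euler scheme introduced later in this section; the parameter values are those quoted above, while the initial populations, step size and final time are illustrative assumptions.

function lotka_volterra(F0, R0; a=1.0, gamma=0.5, b=1.5, delta=0.75, T=20.0, N=20_000)
    Dt = T / N
    F, R = F0, R0
    for n in 1:N
        dF = -a*F + gamma*F*R      # dF/dt = -aF + γFR
        dR =  b*R - delta*F*R      # dR/dt =  bR - δFR
        F, R = F + Dt*dF, R + Dt*dR
    end
    return F, R
end

println(lotka_volterra(3.0, 2.0))   # hypothetical initial populations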
Vector field
Any first-order system of N ODEs in the form
    dx1/dt = F1(x1, x2, ..., xN),
    dx2/dt = F2(x1, x2, ..., xN),
    ...
    dxN/dt = FN(x1, x2, ..., xN),
with initial conditions
    x1(0) = x10,    x2(0) = x20,    ...,    xN(0) = xN0,
can be written in vector form as
    dx/dt = F(x),    x(0) = x0,
where x = (x1, ..., xN) and F = (F1, ..., FN).
Predator-prey example
Recall
    dF/dt = -aF + γFR,
    dR/dt = bR - δFR,
    F(0) = F0,    R(0) = R0.
So defining
    x = [F, R],    F(x) = [-aF + γFR, bR - δFR],    x0 = [F0, R0],
we have
    dx/dt = [dF/dt, dR/dt] = [-aF + γFR, bR - δFR] = F(x),
with x(0) = x0.
Geometric viewpoint
Think of the trajectory x(t) as a parametric curve in the phase space R^N. Then dx/dt = F(x) means that along any trajectory, F(x(t)) always points in the forward tangent direction to the trajectory, with the speed of x(t) equal to the magnitude of F.
Non-autonomous ODEs
A system of ODEs of the form
    dx/dt = F(x)
is said to be autonomous. For example, we have just seen that the Lotka-Volterra equations are autonomous.
In a non-autonomous system, F may depend explicitly on t:
    dx/dt = F(x, t).    (1)
Second-order ODE
Consider an initial-value problem for a general (possibly non-autonomous) second-order ODE
    d²x/dt² = f(x, dx/dt, t),    with x = x0 and dx/dt = y0 at t = 0.
If x = x(t) is a solution, and if we let y = dx/dt, then
    dy/dt = d²x/dt² = f(x, dx/dt, t) = f(x, y, t),
that is, (x, y) is a solution of the first-order system
    dx/dt = y,
    dy/dt = f(x, y, t),
    x(0) = x0,    y(0) = y0.
Simple oscillator
Second-order ODEs arise naturally in classical mechanics.
Consider a particle of mass m that moves along the x-axis with velocity v = ẋ = dx/dt under the influence of
    an external applied force f(t),
    a frictional resistance force -r(v)v,
    a restoring force -k(x)x.
Newton's second law then gives
    m ẍ = m d²x/dt² = f(t) - r(v)v - k(x)x.
Simple oscillator
The simplest case is when r(v) = r0 > 0 and k(x) = k0 > 0 are constant, giving a linear ODE with constant coefficients:
    m ẍ + r0 ẋ + k0 x = f(t).
Written as a first-order system with x = [x, v], this becomes dx/dt = F(x, t) where
    F(x, t) = [v,  m⁻¹(f(t) - r0 v - k0 x)].
Example: human T-cells and simian virus
Let E(t) denote the population of human T-cells and V(t) the population of simian virus at time t, modelled by
    dE/dt = β - γE + ηVE,
    dV/dt = rV - δVE.
In the absence of virus (V = 0),
    dE/dt = β - γE,    so    E(t) = E0 e^{-γt} + Ē(1 - e^{-γt}),
and E(t) → Ē = β/γ as t → ∞.
Example parameter values: β = 0.5, γ = 0.01, r = 1.25, δ = 0.01.
[Figures: numerical solution and phase portrait.]
Lipschitz constant
Definition
The number L is a Lipschitz constant for a function f : [a, b] → R if
    |f(x) - f(y)| ≤ L|x - y|    for all x, y ∈ [a, b].
Example
Consider f(x) = 2x² - x + 1 for 0 ≤ x ≤ 1. Since
    f(x) - f(y) = 2(x² - y²) - (x - y) = 2(x + y)(x - y) - (x - y) = (2x + 2y - 1)(x - y),
we have |f(x) - f(y)| ≤ |2x + 2y - 1| |x - y|, so a Lipschitz constant is
    L = max_{x,y ∈ [0,1]} |2x + 2y - 1| = 3.
Theorem
If f is Lipschitz then f is (uniformly) continuous.
Proof.
Suppose L is a Lipschitz constant for f : [a, b] → R. Given ε > 0, if δ = ε/L then
    |f(x) - f(y)| ≤ L|x - y| < ε    whenever |x - y| < δ,
so f is (uniformly) continuous on [a, b].
Example
Consider f(x) = 3 + √x for 0 ≤ x ≤ 4. In this case, if x, y ∈ (0, 4] then
    f(x) - f(y) = √x - √y = (√x - √y)(√x + √y)/(√x + √y) = (x - y)/(√x + √y),
so if a Lipschitz constant L exists then
    L ≥ |f(x) - f(y)| / |x - y| = 1/(√x + √y),
which is unbounded as x, y → 0+. Hence no Lipschitz constant exists for f on [0, 4].
Theorem
For any closed and bounded interval I = [a, b], if f is C¹ on I then L = max_{x∈I} |f′(x)| is a Lipschitz constant for f on I.
Proof.
Given a ≤ x < y ≤ b, the Mean Value Theorem says that there exists a number c (depending on x and y) such that
    f(x) - f(y) = f′(c)(x - y),    (2)
and therefore |f(x) - f(y)| = |f′(c)| |x - y| ≤ L|x - y|.
Integrating the ODE dx/dt = f(x) with x(0) = x0 gives the equivalent integral equation
    x(t) = x0 + ∫_0^t f(x(s)) ds.    (3)
Picard iterates
We try to solve (3) by fixed-point iteration, letting
    x1(t) = x0,
    x2(t) = x0 + ∫_0^t f(x1(s)) ds,
    x3(t) = x0 + ∫_0^t f(x2(s)) ds,
and in general,
    xk(t) = x0 + ∫_0^t f(x_{k-1}(s)) ds    for k > 1.    (4)
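The Picard iterates can be approximated numerically by discretising the integral in (4); the Julia sketch below does this with the trapezoidal rule for the assumed model problem dx/dt = x, x(0) = 1, whose iterates approach e^t. The grid and number of iterations are illustrative choices.

function picard(f, x0, T; M=200, iters=8)
    t = range(0.0, T, length=M+1)
    h = step(t)
    x = fill(x0, M+1)                              # x_1(t) = x0
    for k in 2:iters
        fx = f.(x)
        # x_k(t_i) = x0 + ∫_0^{t_i} f(x_{k-1}(s)) ds, trapezoidal rule
        x = x0 .+ h .* (cumsum(fx) .- 0.5 .* (fx .+ fx[1]))
    end
    return t, x
end

t, x = picard(x -> x, 1.0, 1.0)
println(maximum(abs.(x .- exp.(t))))   # iterates approach e^t, so this is small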
Increments
Subtracting
    xk(t) = x0 + ∫_0^t f(x_{k-1}(s)) ds
from
    x_{k+1}(t) = x0 + ∫_0^t f(xk(s)) ds
gives
    x_{k+1}(t) - xk(t) = ∫_0^t [f(xk(s)) - f(x_{k-1}(s))] ds,    t ≥ 0.
Define Δk(t) = |x_{k+1}(t) - xk(t)|. If L is a Lipschitz constant for f, then
    Δk(t) ≤ L ∫_0^t Δ_{k-1}(s) ds,
so for 0 ≤ t ≤ T,
    Δ2(t) ≤ L ∫_0^t Δ1(s) ds ≤ L ∫_0^t Δ1(T) ds ≤ Δ1(T) Lt,
    Δ3(t) ≤ L ∫_0^t Δ2(s) ds ≤ L ∫_0^t Δ1(T) Ls ds = Δ1(T) L² t²/2,
    Δ4(t) ≤ L ∫_0^t Δ3(s) ds ≤ L ∫_0^t Δ1(T) L² s²/2 ds = Δ1(T) L³ t³/3!,
and in general,
    Δk(t) ≤ Δ1(T) (Lt)^{k-1}/(k-1)!    for |t| ≤ T.
Thus,
    Σ_{k=1}^∞ Δk(t) ≤ Δ1(T) Σ_{k=1}^∞ (Lt)^{k-1}/(k-1)! = Δ1(T) e^{Lt} < ∞,
so the telescoping series Σ_{k=1}^∞ [x_{k+1}(t) - xk(t)] converges, and hence the Picard iterates xk(t) converge as k → ∞.
Proof.
See Technical Proofs handout.
Example of non-uniqueness
The initial-value problem
    dx/dt = 3x^{2/3},    x(0) = 0,
has more than one solution: both x(t) = 0 and x(t) = t³ satisfy the ODE and the initial condition for -∞ < t < ∞. (Note that f(x) = 3x^{2/3} is not Lipschitz on any interval containing 0.)

Example of finite-time blow-up
The initial-value problem
    dx/dt = x² + 2x + 2,    x(0) = 0,
has the solution x(t) = tan(t + π/4) - 1, which exists only for -3π/4 < t < π/4. Here f(x) = x² + 2x + 2 satisfies |f(x)| ≤ r² + 2r + 2 for |x - x0| ≤ r, so the existence time r/max|f| guaranteed by the Picard iteration argument satisfies
    r/(r² + 2r + 2) ≤ [r/(r² + 2r + 2)]_{r=√2} = (√2 - 1)/2 < π/4,
that is, the guaranteed interval is much shorter than the actual interval of existence.

If x(t) is a solution of an autonomous system dx/dt = F(x), then so is the time-shifted function x(t - c) for all t. Therefore, the two solutions trace out the same trajectory in phase space.
Discrete-time approximation
Initial-value problem in 1D:
    dx/dt = f(x)    for 0 ≤ t ≤ T,    with x(0) = x0.
Choose a time step Δt = T/N and grid points tn = n Δt for n = 0, 1, 2, ..., N, so
    0 = t0 < t1 < t2 < ... < tN = T.
Aim: compute numbers X1, X2, ..., XN such that
    x(tn) ≈ Xn    for 1 ≤ n ≤ N.
Since
    dx/dt = lim_{Δt→0} Δx/Δt = lim_{Δt→0} [x(t + Δt) - x(t)]/Δt,
for small Δt we have
    [x(t + Δt) - x(t)]/Δt ≈ f(x(t)).
Also, when t = tn,
    x(tn + Δt) = x(t_{n+1}) ≈ X_{n+1}    and    x(tn) ≈ Xn.
Eulers method
Rearranging:
    X_{n+1} = Xn + f(Xn) Δt.
Thus, given x0 we let X0 = x0 and calculate
    X1 = X0 + f(X0) Δt,
    X2 = X1 + f(X1) Δt,
    X3 = X2 + f(X2) Δt,
    ...
    XN = X_{N-1} + f(X_{N-1}) Δt.
Easily programmed on a computer. For instance, using Julia,

X = zeros(N+1)          # X[n+1] approximates x(t_n)
X[1] = x0
for n = 1:N
    X[n+1] = X[n] + f(X[n]) * Dt    # one Euler step of size Dt
end
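A complete, runnable version of this loop is sketched below; the choice of model problem f(x) = x (so that the exact solution e^t is available for comparison) and the values of x0, T and N are illustrative assumptions, not part of the notes.

f(x) = x
x0, T, N = 1.0, 2.0, 100
Dt = T / N

function euler(f, x0, Dt, N)
    X = zeros(N + 1)
    X[1] = x0
    for n in 1:N
        X[n+1] = X[n] + f(X[n]) * Dt   # one Euler step
    end
    return X
end

X = euler(f, x0, Dt, N)
println("X_N = ", X[end], ",  exact = ", x0 * exp(T))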
Accuracy
How large is the error En = Xn - x(tn)?
Taylor expansion: if x is C², then
    x(t + Δt) = x(t) + ẋ(t) Δt + R1(t),    where R1(t) = ẍ(c) Δt²/2 for some c between t and t + Δt.
Since ẋ(t) = f(x(t)),
    x(t_{n+1}) = x(tn) + f(x(tn)) Δt + R1(tn),
whereas
    X_{n+1} = Xn + f(Xn) Δt.
Subtracting:
    E_{n+1} = En + [f(Xn) - f(x(tn))] Δt - R1(tn).
Here, E0 = X0 - x0 = 0.
Error bound
Each remainder satisfies
    |R1(tj)| = |ẍ(cj)| Δt²/2 ≤ ½ M Δt²,    where M = max_{0≤t≤T} |ẍ(t)|,
and the Lipschitz condition gives |f(Xj) - f(x(tj))| ≤ L|Ej|, so
    |E_{j+1}| ≤ (1 + L Δt)|Ej| + ½ M Δt².
Since E0 = 0 and 1 + L Δt ≤ e^{L Δt}, it follows that
    |En| ≤ ½ M Δt² Σ_{j=0}^{n-1} (e^{L Δt})^{n-1-j} ≤ ½ M Δt² n e^{nL Δt} ≤ ½ M T e^{LT} Δt,
using n Δt ≤ T. Hence
    |En| ≤ C Δt M,    with C = ½ T e^{LT}.
Taylor method of order 2
Keeping one more term in the Taylor expansion: if x is C³, then
    x(t + Δt) = x(t) + ẋ(t) Δt + ½ ẍ(t) Δt² + R2(t),
where
    R2(t) = (d³x/dt³)(c) Δt³/3!    and t < c < t + Δt.
Since ẋ = f(x), the chain rule gives ẍ = f′(x) ẋ = f′(x) f(x). We therefore define the Taylor method of order 2 by
    X_{n+1} = Xn + f(Xn) Δt + ½ f′(Xn) f(Xn) Δt².
Comparison
We saw earlier that for Euler's method,
    |Xn - x(tn)| ≤ C Δt max_{0≤t≤T} |ẍ(t)|    for 0 ≤ tn ≤ T.
It can be shown in a similar way that for the Taylor method of order 2,
    |Xn - x(tn)| ≤ C Δt² max_{0≤t≤T} |d³x/dt³(t)|    for 0 ≤ tn ≤ T.
    1/Δt        Δt          Δt²
    10          0.1         0.01
    100         0.01        0.0001
    1000        0.001       0.000001
    10000       0.0001      0.00000001
    100000      0.00001     0.0000000001
Order notation
Definition
We write
    φ(y) = O(ψ(y))    as y → 0
if there exist constants C and y0 > 0 such that
    |φ(y)| ≤ C ψ(y)    whenever |y| ≤ y0.
Order notation
Example
Taylor expansion implies that, as x → 0,
    sin x = x + O(x³),
    cos x = 1 - ½x² + O(x⁴),
    log(1 + x) = x - ½x² + O(x³).
Example
We have seen that as Δt → 0, Euler's method satisfies
    Xn = x(tn) + O(Δt),
whereas the Taylor method of order 2 satisfies
    Xn = x(tn) + O(Δt²).
Systems of ODEs
Euler's method works in the same way if F : R^M → R^M and
    dx/dt = F(x):
    X_{n+1} = Xn + F(Xn) Δt    for n = 0, 1, 2, ..., N - 1.
Runge-Kutta methods use several evaluations of F per step. The second-order method (RK2) is
    κ1 = F(X_{n-1}),    κ2 = F(X_{n-1} + Δt κ1),
    Xn = X_{n-1} + ½ Δt (κ1 + κ2),    for n = 1, 2, ..., N,
and the classical fourth-order method (RK4) is
    κ1 = F(X_{n-1}),    κ2 = F(X_{n-1} + ½ Δt κ1),    κ3 = F(X_{n-1} + ½ Δt κ2),    κ4 = F(X_{n-1} + Δt κ3),
    Xn = X_{n-1} + (1/6) Δt (κ1 + 2κ2 + 2κ3 + κ4),    for n = 1, 2, ..., N.
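The sketch below implements the classical RK4 scheme in Julia using the stage weights quoted above; the stage definitions are those of the standard classical Runge-Kutta method, and the test problem (the undamped oscillator x'' = -x written as a first-order system) and step parameters are illustrative assumptions.

function rk4(F, x0, Dt, N)
    X = [copy(x0)]
    for n in 1:N
        x  = X[end]
        k1 = F(x)
        k2 = F(x + 0.5*Dt*k1)
        k3 = F(x + 0.5*Dt*k2)
        k4 = F(x + Dt*k3)
        push!(X, x + Dt*(k1 + 2k2 + 2k3 + k4)/6)
    end
    return X
end

F(x) = [x[2], -x[1]]                 # x'' = -x as a first-order system
X = rk4(F, [1.0, 0.0], 0.1, 100)
println(X[end])                      # should be close to [cos(10), -sin(10)]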
Convergence rate
Denote the maximum error by
    EN = max_{0≤n≤N} |En| = max_{0≤n≤N} ||Xn - x(tn)||.
If the method converges with order r, then
    EN ≈ C_T Δt^r = C_T (T/N)^r ∝ N^{-r},
and thus
    r ≈ log₂(E_{N/2}/EN).
           Euler                    RK2                      RK4
      error       rate        error       rate         error       rate
    4.64e+00              3.02e-01               4.44e-04
    2.21e+00    1.072     7.77e-02    1.958      2.81e-05    3.981
    1.06e+00    1.055     1.97e-02    1.979      1.77e-06    3.993
    5.19e-01    1.033     4.96e-03    1.989      1.11e-07    3.997
    2.56e-01    1.018     1.25e-03    1.995      6.92e-09    3.999
    1.27e-01    1.009     3.12e-04    1.997      4.33e-10    3.999

Each successive row halves Δt; the observed rates approach r = 1 for Euler, r = 2 for RK2 and r = 4 for RK4.
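The observed rates are estimated exactly as in the formula r ≈ log2(E_{N/2}/E_N). A small Julia sketch (using dx/dt = x, x(0) = 1 as an assumed model problem, not the problem behind the table above) is:

function euler_error(N; T=1.0)
    Dt = T / N
    x, err = 1.0, 0.0
    for n in 1:N
        x += x * Dt                        # Euler step for f(x) = x
        err = max(err, abs(x - exp(n*Dt))) # compare with exact solution e^t
    end
    return err
end

for N in (20, 40, 80, 160)
    E_half, E = euler_error(div(N, 2)), euler_error(N)
    println("N = ", N, "  E_N = ", E, "  rate ≈ ", log2(E_half / E))
end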
Theorem
If A(t) and b(t) are continuous for 0 ≤ t ≤ T, then the linear initial-value problem
    dx/dt = A(t)x + b(t)    for 0 ≤ t ≤ T,    with x(0) = x0,
has a unique solution.
General solution
If Av = λv and we define x(t) = e^{λt} v, then
    dx/dt = λ e^{λt} v = e^{λt} (λv) = e^{λt} (Av) = A(e^{λt} v) = Ax,
that is, x is a solution of dx/dt = Ax.
If A vj = λj vj for 1 ≤ j ≤ N, then the linear combination
    x(t) = Σ_{j=1}^N cj e^{λj t} vj    (5)
is also a solution, and it satisfies x(0) = x0 provided we choose the coefficients so that
    Σ_{j=1}^N cj vj = x0.
Example
Consider
    dx/dt = 5x - 2y,        x(0) = 5,
    dy/dt = 6x - 3y,        y(0) = 7.
In this case,
    A = [5  -2; 6  -3],
with eigenvalues and eigenvectors
    λ1 = 3,  v1 = (1, 1),        λ2 = -1,  v2 = (1, 3).
The general solution (5) is
    [x(t), y(t)] = c1 e^{3t} (1, 1) + c2 e^{-t} (1, 3),
and the initial conditions require c1 + c2 = 5 and c1 + 3c2 = 7, so c1 = 4 and c2 = 1.
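For comparison, the same computation can be carried out numerically; the sketch below uses Julia's LinearAlgebra standard library. (eigen may scale the eigenvectors differently from v1 and v2 above, which is harmless because the coefficients adjust accordingly.)

using LinearAlgebra

A  = [5.0 -2.0; 6.0 -3.0]
x0 = [5.0, 7.0]

lambda, Q = eigen(A)          # eigenvalues and matrix of eigenvectors
c = Q \ x0                    # coefficients: sum_j c_j v_j = x0

x(t) = Q * (c .* exp.(lambda .* t))   # x(t) = sum_j c_j e^{lambda_j t} v_j
println(x(0.0))               # reproduces x0 = [5, 7]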
Exponential of a matrix
Recall that the Taylor series
    e^z = Σ_{k=0}^∞ z^k/k! = 1 + z + z²/2! + z³/3! + ...
converges for every z. For any square matrix A we therefore define
    e^A = Σ_{k=0}^∞ A^k/k! = I + A + A²/2! + A³/3! + ... .
The series converges because
    ||e^A|| ≤ Σ_{k=0}^∞ ||A^k||/k! ≤ Σ_{k=0}^∞ ||A||^k/k! = e^{||A||}.
Term-by-term differentiation
Given A and x0, let
    x(t) = e^{tA} x0 = (I + tA + t²A²/2! + ... + t^k A^k/k! + ...) x0.
Since
    d/dt e^{tA} = 0 + A + tA² + ... + t^{k-1} A^k/(k-1)! + ...
                = A (I + tA + ... + t^{k-1} A^{k-1}/(k-1)! + ...) = A e^{tA},
we have
    dx/dt = A e^{tA} x0 = Ax    and    x(0) = I x0 = x0.
But how can we calculate e^{tA} explicitly?
Diagonalising a matrix
Definition
A square matrix A ∈ C^{N×N} is diagonalisable if there exists a non-singular matrix Q ∈ C^{N×N} such that Q⁻¹AQ is diagonal.
Theorem
A square matrix A ∈ C^{N×N} is diagonalisable if and only if there exists a basis {v1, v2, ..., vN} for C^N consisting of eigenvectors of A. Indeed, if
    A vk = λk vk    for k = 1, 2, ..., N,
and we put Q = [v1  v2  ...  vN], then Q⁻¹AQ = Λ where
    Λ = diag(λ1, λ2, ..., λN).
Matrix powers
Consider a diagonalisable matrix A. Since Q⁻¹AQ = Λ, it follows that A has an eigenvalue decomposition
    A = QΛQ⁻¹.
Thus
    A² = (QΛQ⁻¹)(QΛQ⁻¹) = QΛ(Q⁻¹Q)ΛQ⁻¹ = QΛIΛQ⁻¹ = QΛ²Q⁻¹
and
    A³ = A²A = QΛ²Q⁻¹QΛQ⁻¹ = QΛ³Q⁻¹.
In general, we see by induction on k that
    A^k = QΛ^kQ⁻¹    for k = 0, 1, 2, ... .
Matrix powers
Since Λ is diagonal, so is
    Λ² = diag(λ1², λ2², ..., λN²),
and in general,
    Λ^k = diag(λ1^k, λ2^k, ..., λN^k)    for k = 0, 1, 2, 3, ... .
Matrix powers
Example
If
    A = [5  -2; 6  -3]
then
    Λ = [3  0; 0  -1],    Q = [1  1; 1  3],    Q⁻¹ = ½ [3  -1; -1  1],
so
    A^k = Q Λ^k Q⁻¹ = ½ [ 3^{k+1} - (-1)^k      (-1)^k - 3^k
                          3^{k+1} - 3(-1)^k     3(-1)^k - 3^k ].
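A quick numerical check of this closed form (a sketch comparing it against Julia's built-in matrix power for a few values of k):

A = [5 -2; 6 -3]

function Ak_formula(k)
    p, m = 3.0^k, (-1.0)^k
    return [3p-m m-p; 3p-3m 3m-p] / 2     # the closed form above
end

for k in 0:4
    println(k, ": ", A^k ≈ Ak_formula(k) ? "match" : "mismatch")
end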
Polynomial of a matrix
For any polynomial
    p(z) = c0 + c1 z + c2 z² + ... + cm z^m
and any square matrix A, we define
    p(A) = c0 I + c1 A + c2 A² + ... + cm A^m.
When A is diagonalisable, A^k = QΛ^kQ⁻¹, so
    p(A) = c0 QIQ⁻¹ + c1 QΛQ⁻¹ + c2 QΛ²Q⁻¹ + ... + cm QΛ^mQ⁻¹
         = Q (c0 I + c1 Λ + c2 Λ² + ... + cm Λ^m) Q⁻¹
         = Q p(Λ) Q⁻¹.
Polynomial of a matrix
Lemma
For any polynomial p and any diagonal matrix Λ,
    p(Λ) = diag(p(λ1), p(λ2), ..., p(λN)).
Theorem
If two polynomials p and q are equal on the spectrum of a diagonalisable matrix A, that is, if
    p(λk) = q(λk)    for k = 1, 2, ..., N,
then p(A) = q(A).
Polynomial of a matrix
Example
Recall that
    A = [5  -2; 6  -3],
and take p(z) = z² - 4 and q(z) = 2z - 1, and observe
    p(3) = 5 = q(3)    and    p(-1) = -3 = q(-1).
We find
    p(A) = A² - 4I = [9  -4; 12  -7] = 2A - I = q(A).
Theorem
If A = QΛQ⁻¹ is diagonalisable, then
    e^A = Q e^Λ Q⁻¹,    where e^Λ = diag(e^{λ1}, e^{λ2}, ..., e^{λN}).
Proof.
e^A equals
    Σ_{k=0}^∞ A^k/k! = Σ_{k=0}^∞ QΛ^kQ⁻¹/k! = Q (Σ_{k=0}^∞ Λ^k/k!) Q⁻¹ = Q e^Λ Q⁻¹.
Example
Again put
    A = [5  -2; 6  -3].
We have
    e^A = Q e^Λ Q⁻¹ = [1  1; 1  3] [e³  0; 0  e⁻¹] · ½ [3  -1; -1  1]
        = ½ [ 3e³ - e⁻¹      -e³ + e⁻¹
              3e³ - 3e⁻¹     -e³ + 3e⁻¹ ].
Notice that tA = Q(tΛ)Q⁻¹, so
    e^{tA} = Q e^{tΛ} Q⁻¹ = [1  1; 1  3] [e^{3t}  0; 0  e^{-t}] · ½ [3  -1; -1  1]
           = ½ [ 3e^{3t} - e^{-t}      -e^{3t} + e^{-t}
                 3e^{3t} - 3e^{-t}     -e^{3t} + 3e^{-t} ].
If p is any polynomial such that
    p(λk) = e^{λk}    for k = 1, 2, ..., N,
then
    e^A = Q e^Λ Q⁻¹ = Q p(Λ) Q⁻¹ = p(A).
For example, we can choose p with degree ≤ N - 1 using the Lagrange interpolation formula,
    p(λ) = Σ_{k=1}^N e^{λk} Π_{j=1, j≠k}^N (λ - λj)/(λk - λj).
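The formula suggests a simple way to compute e^A numerically for a diagonalisable matrix. The Julia sketch below builds p(A) directly from the Lagrange formula and compares it with the built-in matrix exponential; it assumes the eigenvalues are distinct and real, and the test matrix is the 2×2 example above.

using LinearAlgebra

function exp_by_interpolation(A)
    lambda = eigvals(A)
    n = length(lambda)
    E = zero(float.(A))
    for k in 1:n
        P = Matrix{eltype(E)}(I, n, n)   # accumulates Π_{j≠k} (A - λ_j I)/(λ_k - λ_j)
        for j in 1:n
            j == k && continue
            P = P * (A - lambda[j]*I) / (lambda[k] - lambda[j])
        end
        E += exp(lambda[k]) * P
    end
    return E
end

A = [5.0 -2.0; 6.0 -3.0]
println(norm(exp_by_interpolation(A) - exp(A)))   # should be tiny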
A 3×3 example
Problem: find e^{tA} for
    A = [6  2  2;  -2  8  4;  0  1  7].
It can be shown that A = QΛQ⁻¹ where
    Q = [1  2  2;  -1  0  1;  1  1  1]    and    Λ = diag(6, 7, 8),
so
    tA = Q(tΛ)Q⁻¹    for all t.
A 3×3 example
Fix t and put
    p(λ) = e^{λ1 t} (λ - λ2)(λ - λ3)/[(λ1 - λ2)(λ1 - λ3)]
         + e^{λ2 t} (λ - λ1)(λ - λ3)/[(λ2 - λ1)(λ2 - λ3)]
         + e^{λ3 t} (λ - λ1)(λ - λ2)/[(λ3 - λ1)(λ3 - λ2)]
         = e^{6t} (λ - 7)(λ - 8)/[(6 - 7)(6 - 8)]
         + e^{7t} (λ - 6)(λ - 8)/[(7 - 6)(7 - 8)]
         + e^{8t} (λ - 6)(λ - 7)/[(8 - 6)(8 - 7)],
so that
    p(6) = e^{6t},    p(7) = e^{7t},    p(8) = e^{8t}.
A 3×3 example
Thus,
    e^{tA} = p(A) = (e^{6t}/2)(A - 7I)(A - 8I) - e^{7t}(A - 6I)(A - 8I) + (e^{8t}/2)(A - 6I)(A - 7I)
           = e^{6t} [-1  0  2;  1  0  -2;  -1  0  2]
           - e^{7t} [-4  2  6;  0  0  0;  -2  1  3]
           + e^{8t} [-2  2  4;  -1  1  2;  -1  1  2],
which equals
    [ -e^{6t} + 4e^{7t} - 2e^{8t}      -2e^{7t} + 2e^{8t}        2e^{6t} - 6e^{7t} + 4e^{8t}
       e^{6t} - e^{8t}                  e^{8t}                  -2e^{6t} + 2e^{8t}
      -e^{6t} + 2e^{7t} - e^{8t}       -e^{7t} + e^{8t}          2e^{6t} - 3e^{7t} + 2e^{8t} ].
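This formula is easy to verify numerically; the Julia sketch below compares e^{6t}P6 + e^{7t}P7 + e^{8t}P8 with the built-in exp(t*A) at an arbitrarily chosen value of t.

using LinearAlgebra

A  = [6.0 2.0 2.0; -2.0 8.0 4.0; 0.0 1.0 7.0]
P6 = 0.5 * (A - 7I) * (A - 8I)      # projector for eigenvalue 6
P7 = -(A - 6I) * (A - 8I)           # projector for eigenvalue 7
P8 = 0.5 * (A - 6I) * (A - 7I)      # projector for eigenvalue 8

t = 0.3
E = exp(6t)*P6 + exp(7t)*P7 + exp(8t)*P8
println(norm(E - exp(t*A)))         # should be tiny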
Equilibrium points
A point a is an equilibrium point of dx/dt = F(x) if F(a) = 0; the constant function x(t) = a is then a solution.
For the T-cell/virus model, F(E, V) = 0 when
    E = β/γ    and    V = 0,
and when
    E = r/δ    and    V = (γr - βδ)/(ηr).
Stable equilibrium
Definition
An equilibrium point a is stable if for every ε > 0 there exists δ > 0 such that whenever ||x0 - a|| < δ the solution of
    dx/dt = F(x),    x(0) = x0,
satisfies
    ||x(t) - a|| < ε    for all t ≥ 0.
(In particular, x(t) must exist for all t ≥ 0.)
In this case, if x0 → a then x(t) → a, uniformly for t ≥ 0.
In particular, a trajectory stays close to a stable equilibrium point a if it starts out sufficiently close to a.
Asymptotic stability
Definition
Let D be an open subset of R^N that contains an equilibrium point a. We say that a is asymptotically stable in D if a is stable and, whenever x0 ∈ D, the solution of
    dx/dt = F(x),    x(0) = x0,
satisfies
    x(t) → a    as t → ∞.
Now consider the linear system
    dx/dt = Ax + b    with x(0) = x0.    (6)
Since
    Ax + b = 0    if and only if    x = -A⁻¹b,
the only equilibrium point is a = -A⁻¹b. Moreover,
    x(t) = a + e^{tA}(x0 - a),
because
    dx/dt = A e^{tA}(x0 - a) = A(x - a) = Ax - Aa = Ax + b
and
    x(0) = a + I(x0 - a) = x0.
Theorem
Let A be a diagonalisable matrix with eigenvalues λ1, λ2, ..., λN. If Re λj ≤ 0 for all j, then the equilibrium point a = -A⁻¹b of (6) is stable.
Proof.
Using the eigenvalue decomposition A = QΛQ⁻¹, we have
    x(t) - a = e^{tA}(x0 - a) = Q e^{tΛ} Q⁻¹ (x0 - a).
If Re λj ≤ 0 then 0 < |e^{λj t}| = e^{(Re λj) t} ≤ 1 for all t ≥ 0, so
    ||e^{tΛ} w||² = Σ_{j=1}^N |e^{λj t} wj|² ≤ Σ_{j=1}^N |wj|² = ||w||²,
and hence ||x(t) - a|| ≤ ||Q|| ||Q⁻¹|| ||x0 - a|| for all t ≥ 0.
Theorem
Let A be a diagonalisable matrix with eigenvalues λ1, λ2, ..., λN. The equilibrium point a = -A⁻¹b of (6) is asymptotically stable if and only if Re λj < 0 for all j. In this case, the domain of attraction is the whole of R^N.
Proof.
For asymptotic stability it is necessary and sufficient that e^{λj t} → 0 as t → ∞ for all j, that is, Re λj < 0 for all j.
Example: stable but not asymptotically stable
Consider dx/dt = Ax where
    A = [2  -5; 1  -2].
The eigenvalues are λ = ±i, so Re λ1 = Re λ2 = 0, and
    e^{tA} = Q e^{tΛ} Q⁻¹ = [ cos t + 2 sin t     -5 sin t
                              sin t               cos t - 2 sin t ],
which remains bounded for all t: the origin is stable but not asymptotically stable.
Example: asymptotically stable
Consider dx/dt = Ax where
    A = [14  -9; 30  -19].
The eigenvalues are λ1 = -1 and λ2 = -4, with eigenvectors v1 = (3, 5) and v2 = (1, 2), so Re λ1 < 0 and Re λ2 < 0, and thus
    e^{tA} = Q e^{tΛ} Q⁻¹ = [ 6e^{-t} - 5e^{-4t}       -3e^{-t} + 3e^{-4t}
                              10e^{-t} - 10e^{-4t}     -5e^{-t} + 6e^{-4t} ] → 0    as t → ∞.
Example: unstable
Consider
    dx/dt = Ax    where    A = [26  -36; 18  -25].
The eigenvalues are λ1 = -1 and λ2 = 2, so Re λ2 > 0, and
    e^{tA} = Q e^{tΛ} Q⁻¹ = [ -8e^{-t} + 9e^{2t}     12e^{-t} - 12e^{2t}
                              -6e^{-t} + 6e^{2t}     9e^{-t} - 8e^{2t} ],
which is unbounded as t → ∞: the origin is unstable.
Linearization
Suppose that x0 is close to an equilibrium point a. If
    dx/dt = F(x),    x(0) = x0,    (7)
and we write u = x - a, then, since F(a) = 0, Taylor expansion gives
    du/dt = F(x) = F(a) + F′(a)(x - a) + ... ≈ F′(a)u,
where F′(a) denotes the Jacobian matrix of F at a. Near a, the solution therefore behaves like that of the linear system du/dt = F′(a)u.
For the T-cell/virus model,
    [dE/dt, dV/dt] = F(E, V) = [β - γE + ηVE,  rV - δVE].
Thus,
    F′(E, V) = [ -γ + ηV    ηE
                 -δV        r - δE ],
and at the equilibrium point E = β/γ, V = 0,
    F′(β/γ, 0) = [ -γ    ηβ/γ
                   0     r - δβ/γ ].
Damped pendulum
The angular deflection θ of a damped pendulum satisfies
    m d²θ/dt² + r dθ/dt + k sin θ = 0,
where θ = 0 is the down position of the bob. For simplicity, take m = 1, so that we have the equivalent first-order system
    dθ/dt = ω,
    dω/dt = -k sin θ - rω.
The equilibrium points are
    (θ, ω) = (nπ, 0)    for n ∈ {..., -2, -1, 0, 1, 2, ...}.
Damped pendulum
Writing
    d/dt [θ, ω] = F(θ, ω) = [ω,  -k sin θ - rω],
we see that
    F′(θ, ω) = [ 0           1
                 -k cos θ    -r ].
Damped pendulum
When n is even, cos(nπ) = 1 and the eigenvalues of F′(nπ, 0) satisfy λ² + rλ + k = 0, so
    λ± = ½(-r ± i√(4k - r²))    if 0 ≤ r < 2√k,
    λ+ = λ- = -r/2 = -√k        if r = 2√k,
    λ± = ½(-r ± √(r² - 4k))     if r > 2√k.
When n is odd, cos(nπ) = -1 and the eigenvalues satisfy λ² + rλ - k = 0, so
    λ± = ½(-r ± √(r² + 4k)),    with λ- < 0 < λ+.
Damped pendulum
Conclusion: the equilibrium point at (θ, ω) = (nπ, 0) is classified as follows.

    n even,  r = 0:          Re λ+ = Re λ- = 0      stable
    n even,  0 < r ≤ 2√k:    Re λ+ = Re λ- < 0      asymptotically stable
    n even,  r > 2√k:        λ- < λ+ < 0            asymptotically stable
    n odd,   r ≥ 0:          λ- < 0 < λ+            unstable
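The classification can be checked numerically by computing the eigenvalues of F′(nπ, 0) directly; the Julia sketch below does this for a few damping values r, taking k = 1 as an illustrative assumption.

using LinearAlgebra

k = 1.0
jacobian(theta, r) = [0.0 1.0; -k*cos(theta) -r]   # F'(θ, ω) above

for r in (0.0, 1.0, 3.0)
    println("r = ", r,
            "  n even: ", eigvals(jacobian(0.0, r)),
            "  n odd: ",  eigvals(jacobian(pi, r)))
end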
First integrals
Definition
A function G : R^N → R is a first integral (or constant of the motion) for the system of ODEs
    dx/dt = F(x)
if G(x(t)) is constant for every solution x(t).
By the chain rule,
    d/dt G(x(t)) = Σ_{j=1}^N (∂G/∂xj)(dxj/dt) = ∇G(x) · dx/dt = ∇G(x) · F(x),
so G is a first integral if and only if
    ∇G(x) · F(x) = 0    for all x.
Simple example
The function G(x, y) = x² + y² is a first integral of the linear system of ODEs
    dx/dt = -y,
    dy/dt = x.
In fact, putting
    F(x, y) = [-y, x],
we have
    ∇G · F = [2x, 2y] · [-y, x] = (2x)(-y) + (2y)(x) = 0,
or equivalently,
    dG/dt = (∂G/∂x)(dx/dt) + (∂G/∂y)(dy/dt) = (2x)(-y) + (2y)(x) = 0.
Partial solutions
In effect, a first integral provides a partial solution of the ODE. Putting C = G(x0), we know that x(t) is confined to the surface
    G(x) = C.
If N = 2, the equation G(x1, x2) = C implicitly gives x2 = g(x1), so
    dx1/dt = F1(x1, x2) = F1(x1, g(x1)),
and F1 becomes a known function of x1 alone. If we can then evaluate
    t = ∫ dx1/F1
to obtain t = t(x1), then implicitly we know x1 = x1(t) and finally x2 = g(x1(t)).
Energy integral
Consider a second-order ODE of the form
    d²x/dt² = f(x),
or equivalently,
    d/dt [x, ẋ] = [ẋ, f(x)].    (8)
If V is an antiderivative of -f, that is dV/dx = -f(x), then
    d/dt V(x) = (dV/dx)(dx/dt) = -f(x) ẋ,
and therefore
    d/dt (½ẋ² + V(x)) = ẋ ẍ - f(x) ẋ = ẋ (ẍ - f(x)) = 0,
so the total energy ½ẋ² + V(x) is a first integral of (8).
Undamped pendulum
The angular deflection θ of an undamped pendulum satisfies
    m d²θ/dt² + k sin θ = 0,
or
    d²θ/dt² + Ω² sin θ = 0,    Ω = √(k/m),    (9)
which has the form (8) with f(θ) = -Ω² sin θ. Taking
    dV/dθ = -f(θ) = Ω² sin θ,    that is V(θ) = -Ω² cos θ,
gives
    d/dt (½(dθ/dt)² - Ω² cos θ) = (dθ/dt)(d²θ/dt² + Ω² sin θ) = 0,
so along every solution
    ½(dθ/dt)² - Ω² cos θ = C.
Lorenz equations
Edward N. Lorenz, Journal of the Atmospheric Sciences 20:130-141, 1963.
The system of three ODEs
    dx/dt = -σx + σy,
    dy/dt = rx - y - xz,
    dz/dt = xy - bz,
is a very simplified model of convection in a thin layer of fluid heated uniformly from below and cooled uniformly from above. The dependent variables give the coefficients of three Fourier modes.
Lorenz equations
For r > 1, the system has three equilibrium points:
    (0, 0, 0),
    (√(b(r-1)), √(b(r-1)), r-1),
    (-√(b(r-1)), -√(b(r-1)), r-1).
Classical parameter values:
    σ = 10,    b = 8/3.
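For reference, below is a minimal Julia sketch that integrates the Lorenz equations with the classical RK4 method; σ and b are as above, while the value r = 28, the step size and the initial point are illustrative assumptions.

function lorenz_rk4(x0; sigma=10.0, r=28.0, b=8/3, Dt=0.01, N=5000)
    F(u) = [sigma*(u[2] - u[1]),          # dx/dt = -σx + σy
            r*u[1] - u[2] - u[1]*u[3],    # dy/dt = rx - y - xz
            u[1]*u[2] - b*u[3]]           # dz/dt = xy - bz
    x = copy(x0)
    for n in 1:N
        k1 = F(x)
        k2 = F(x + 0.5*Dt*k1)
        k3 = F(x + 0.5*Dt*k2)
        k4 = F(x + Dt*k3)
        x += Dt*(k1 + 2k2 + 2k3 + k4)/6
    end
    return x
end

println(lorenz_rk4([1.0, 1.0, 1.0]))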