APM214—Y/1/00-02
Contents
Page
Contents iii
Preface v
CHAPTER 1 Overview 1
APPENDIX 193
v MAT217—X/1
APM214—Y/1
Preface
CHAPTER 1
OVERVIEW
1. Introduction
First-year calculus courses can give the impression that all problems
involving differential equations are soluble, in the sense that the solution
can be expressed as a simple formula. In fact, this impression
is quite wrong: most differential equations do not have such solutions.
(The reason the impression arises, of course, is that the
exercises in a first calculus course are carefully chosen so as to have
simple solutions.) So, if faced with a problem that cannot, in the
above sense, be solved, what can one do (other than give up)? One
could use
dv/dt = g − (k/m) v^1.9,  v > 0   (1)
where v is her downward speed, g is the acceleration due to gravity
(approximately 9.8 m s^(−2)), m is the mass of the woman and parachute,
and k is a constant associated with the parachute. For safety,
the woman should hit the ground at not more than 5 m s^(−1). What
is the least possible value for the ratio k/m?
The integral on the right is easy (it equals T), but the first integral
is difficult. Trying all the tricks learnt in MAT102, plus some others,
leads NOWHERE. So we resort to qualitative methods: the right-hand
side of (1) is zero (i.e. dv/dt = 0) at

v = v∗ ≡ (mg/k)^(1/1.9).
(The symbol “ ≡ ” means “defined to be”.)
For v > v∗ we have dv/dt < 0 and for v < v∗ we get dv/dt > 0.
Thus for this problem, whatever the initial value of v, as time evolves
v → v∗ . This is illustrated in the diagram below:
[Phase line: arrows point from both sides towards v∗.]
x(n + 1) = 2x(n),
p(n + 1) = q(n)p(n)
q(n + 1) = 1 − p(n).
[Note that the recommended textbooks for this module use the
above notation, but other textbooks might write the difference equa-
tions as
xn+1 = 2xn
and
pn+1 = qn pn ,
qn+1 = 1 − pn .]
dx/dt = 0
or, in the case of a difference equation, if at x(n) = x∗ we have
x(n + 1) = x(n) = x∗ .
Examples
1. dx/dt = x^2 − x.

[Phase line: fixed points at x = 0 and x = 1.]
2. dx/dt = x^2.
We see that dx/dt > 0 if x < 0 or if x > 0, so we get the phase line
below:
[Phase line: a single fixed point at x = 0, with arrows pointing right on both sides.]
This system is neither stable nor unstable. The reasons why will
be discussed later — we included this example to show some of the
subtleties that can arise in dynamical systems theory.
3. x(n + 1) = −(1/2) x(n) + 6.
To find the fixed point(s) we substitute x∗ for x(n) and x(n + 1) in
the difference equation to get x∗ = −(1/2) x∗ + 6. Solving this equation
for x∗ gives x∗ = 4. Plotting x(n) against n, starting at x(0) = 2,
gives the following graph:
[Graph: the iterates x(0) = 2, x(1) = 5, x(2) = 3.5, x(3) = 4.25, ... oscillate about, and converge to, x∗ = 4.]
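As a quick numerical check (not part of the original text), the map can be iterated directly; a minimal Python sketch, with the starting value x(0) = 2 taken from the example:

```python
# Iterate the difference equation x(n+1) = -x(n)/2 + 6 of Example 3.
def iterate(x0, n):
    x = x0
    seq = [x]
    for _ in range(n):
        x = -0.5 * x + 6.0   # the map of Example 3
        seq.append(x)
    return seq

seq = iterate(2.0, 50)
# The first few iterates oscillate about the fixed point x* = 4:
# 2, 5, 3.5, 4.25, 3.875, ...
```

After 50 iterations the deviation from x∗ = 4 is of order (1/2)^50, i.e. negligible.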
Now let x1 (n) = y(n), x2 (n) = y(n + 1). Then the above difference
equation is equivalent to
x1(n + 1) = x2(n)
x2(n + 1) = x2(n) x1(n) − x2(n)^2 + a.
There is more about this matter, and some more examples, in Lu-
enberger on pages 96 and 97.
where the vector u(n) is called the control. The idea is to use the
control u(n) to steer the system state vector x to some desired value.
CHAPTER 2
LINEAR AUTONOMOUS
DYNAMICAL SYSTEMS
solving
ẋ = 0 ⇒ ax∗ + f = 0 ⇒ x∗ = −f/a.   (3a)
In the discrete case x(n + 1) = ax(n) + f, we replace x(n) and
x(n + 1) with x∗, yielding
x∗ = ax∗ + f ⇒ x∗ = f/(1 − a).   (3b)
Let
X(t) = x(t) − x∗ or X(n) = x(n) − x∗ . (4)
since x∗ = −f/a.
x (n + 1) = X (n + 1) + x∗ = a (X (n) + x∗ ) + f.
Hence
X(n + 1) = aX(n) + x∗(a − 1) + f
= aX(n),
since x∗ = f/(1 − a).
We see that the substitution (4) has got rid of the constant term f .
Equations (5) can be solved immediately. For the continuous case
we have
∫ (Ẋ(t)/X(t)) dt = ∫ a dt
ln X(t) = at + k   (k is a constant)
so that
X(t) = Ce^(at)   (C = e^k),
where the initial condition gives C = X(0).
For the discrete case in equation (5), it follows from the initial con-
dition X (0) that
X(1) = aX(0)
X(2) = aX(1) = a(aX(0)) = a^2 X(0)
X(3) = aX(2) = a(a^2 X(0)) = a^3 X(0)
...
clearly yielding
X (n) = an X (0) . (6b)
Using (4), (3a) and (3b), the solutions of the original equation (2)
are therefore:
x(t) = −f/a + e^(at) (x(0) + f/a)
or   (7)
x(n) = f/(1 − a) + a^n (x(0) − f/(1 − a)).
[The only problem that can arise with (7) is if a = 1 (discrete case),
or if a = 0 (continuous case). Then (2) is
ẋ = f or x(n + 1) = x(n) + f,
with solutions x(t) = x(0) + ft and x(n) = x(0) + nf respectively.]
From (6a) and (6b), it is easy to see under what conditions this dy-
namical system has a stable fixed point. If |a| < 1 (discrete case),
or a < 0 (continuous case) then an → 0 as n → ∞ or eat → 0
as t → ∞, and therefore from equations (6a) and (6b) we see that
X → 0 so that x → x∗ and the fixed point is stable. On the other
hand, if |a| > 1 (discrete case) or a > 0 (continuous case) then
X = x − x∗ → ∞ and the fixed point is unstable.
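The stability criterion for the discrete case can be illustrated numerically. A small Python sketch (illustrative only; the particular values of a are arbitrary choices):

```python
# Demonstrate the criterion for X(n+1) = a X(n):
# |a| < 1 gives X(n) -> 0; |a| > 1 gives |X(n)| -> infinity.
def trajectory(a, x0, n):
    x = x0
    for _ in range(n):
        x = a * x
    return x  # equals a**n * x0

stable = trajectory(0.5, 1.0, 100)    # |a| < 1: decays to zero
unstable = trajectory(1.5, 1.0, 100)  # |a| > 1: blows up
```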
Example 1
Consider the discrete system
x(n + 1) = (1/2) x(n) + 2
with initial condition
x(0) = 2.
Then, replacing x(n + 1) and x(n) with the fixed point x∗, we get
x∗ = (1/2) x∗ + 2
and therefore
x∗ = 4.
Let X(n) = x(n) − 4. (This of course also means that X (n + 1) =
x (n + 1)−4.) In terms of the new variable X the difference equation
becomes
X(n + 1) + 4 = (1/2)(X(n) + 4) + 2
clearly yielding
X(n + 1) = (1/2) X(n)
with solution
X(n) = (1/2)^n X(0).
[Graph: starting from x(0) = 2, the iterates x(n) increase monotonically towards x∗ = 4.]
Example 2
Consider the discrete system
x(n + 1) = 2x(n) + 1
Then
x∗ = 2x∗ + 1
and therefore
x∗ = −1.
with solution
X(n) = 2n X(0).
Example 3
Consider the discrete system:
x(n + 1) = −(2/3) x(n),  x(0) = 3.
This is easy to solve. The solution is x(n) = 3(−2/3)n .
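A sketch verifying the closed-form solution against direct iteration (illustrative, using the data of Example 3):

```python
# Check x(n) = 3(-2/3)^n against iteration of x(n+1) = -(2/3) x(n), x(0) = 3.
def x_closed(n):
    return 3.0 * (-2.0 / 3.0) ** n

x = 3.0   # x(0)
for n in range(1, 20):
    x = -(2.0 / 3.0) * x
    assert abs(x - x_closed(n)) < 1e-12
```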
Example 4
Consider the system
dx/dt = −2x + 1,  x(0) = −1.
Since the system is continuous, the fixed point is found by solving
dx/dt = 0 ⇒ 0 = −2x∗ + 1,
which gives
x∗ = 1/2.
x(t) − 1/2 = (x(0) − 1/2) e^(−2t),
which, together with the initial condition x(0) = −1, leads to the
solution
x(t) = 1/2 + (−1 − 1/2) e^(−2t)
= 1/2 − (3/2) e^(−2t).
Example 5
Consider the system
dx/dt = (3/2) x − 3,  x(0) = 0.
The solution is
x(t) = −2e^((3/2)t) + 2.
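As a rough check, the closed-form solution can be compared with a simple Euler integration of the differential equation (the step size h is an arbitrary choice):

```python
import math

# Euler integration of dx/dt = (3/2)x - 3, x(0) = 0, compared with
# the closed-form solution x(t) = -2 e^((3/2)t) + 2.
def euler(t_end, h=1e-4):
    n = int(round(t_end / h))
    x = 0.0
    for _ in range(n):
        x += h * (1.5 * x - 3.0)
    return x

approx = euler(1.0)
exact = -2.0 * math.exp(1.5) + 2.0
# The two agree to a few decimal places for this step size.
```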
Ax + f = 0.
Therefore
x∗ = −A−1 f . (10)
Put in words, the transformation (11) moves the fixed point from
−A−1 f to the origin. In order to understand the properties of equa-
tion (9), it is sufficient to understand the simpler equation (12).
Even so, the solution of (12) is rather more complicated than solv-
ing equation (5).
Since the solution of the one-dimensional equation (5) is X =
constant × exp(constant × t), we try as a solution of (12):
X = Ke^(λt).   (13)
Kλe^(λt) = AKe^(λt).
Therefore
AK = λK. (14)
(A − λI)K = 0 (15)
where I is the identity matrix [[1, 0], [0, 1]]. We require that K
should be non-trivial (i.e. K ≠ 0), which implies that
det [[a − λ, b], [c, d − λ]] = 0
i.e.
(a − λ)(d − λ) − bc = 0 (18)
or
λ^2 − σλ + ∆ = 0   (19)
with solutions
λ1 = (σ + √(σ^2 − 4∆))/2,  λ2 = (σ − √(σ^2 − 4∆))/2.   (20)
In general, the procedure for obtaining the solution of equation (12)
is as follows: First, we use (15) to find the eigenvector K = (Kx, Ky)^T:
(A − λI)K = 0.   (15)
aKx + bKy = 0
caKx + cbKy = 0
[For a 2 × 2 matrix det = 0 means that the two rows of the matrix
are multiples of each other.] Thus K is defined in direction only,
and we represent this by writing down the solution of (15) as cK for
some undetermined constant c. Of course, there are two eigenvalues
λ1 , λ2 so there are two corresponding eigenvectors
c1 K1 , c2 K2 .
X(0) = c1 K1 + c2 K2 .
Case (A)
Once λ1 and λ2 are known, equation (15) is solved for K1 and K2 .
Example 1
Consider the system:
dx
= x + 3y
dt
dy
= 5x + 3y. (22)
dt
Let X = (x, y)^T; then Ẋ = AX with A = [[1, 3], [5, 3]]. Thus equation
(19) becomes
λ^2 − 4λ − 12 = 0.
Thus λ1 = +6, λ2 = −2. Let K1 and K2 be the eigenvectors
corresponding to eigenvalues λ1 , λ2 respectively. Then equation
(15), for K1 is:
[[−5, 3], [5, −3]] K1 = (0, 0)^T.   (23)
Note that, although (23) looks like two equations, one is a multi-
ple of the other so there is, in effect, only one equation. As re-
marked earlier, this always happens because λ has been chosen so
that det(A − λI) = 0, and a zero determinant means, for a 2 × 2
matrix, that one row is a multiple of the other. Let, for example
K1 = (a, b)^T.
Then (23) becomes
−5a + 3b = 0   ...(A)
5a − 3b = 0   ...(B)
K1 = c1 (3, 5)^T
for some constant c1. Note that the choice of a and b is not
unique. We could also have chosen a = c1 (say). Then b = (5/3)c1 and
our vector K1 would be given by
K1 = c1 (1, 5/3)^T.
Similarly, for λ2 = −2,
[[3, 3], [5, 5]] K2 = 0
so that
K2 = c2 (1, −1)^T.
Thus from (13), and using the principle of superposition for linear
systems, the general solution of (22) is:
X = (x, y)^T = c1 (3, 5)^T e^(6t) + c2 (1, −1)^T e^(−2t).   (24)
Note that X → ∞ as t → ∞ (unless c1 = 0).
The above solution is the general solution because it has two arbitrary
constants, which is correct as (21) is a system of two first-order
differential equations. The solution curves for various values of c1,
c2 are shown in Figure 1. The fixed point at (0, 0) is called a saddle
point.
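Equations (19) and (20) translate directly into a short computation; the sketch below (illustrative only) recovers the eigenvalues 6 and −2 of the matrix in Example 1:

```python
import math

# Eigenvalues of a 2x2 matrix A = [[a, b], [c, d]] from equation (19):
# lambda^2 - sigma*lambda + Delta = 0, with sigma = a + d (trace)
# and Delta = a*d - b*c (determinant).
def eigenvalues_2x2(a, b, c, d):
    sigma = a + d
    delta = a * d - b * c
    root = math.sqrt(sigma ** 2 - 4 * delta)   # assumes real roots (case A)
    return (sigma + root) / 2, (sigma - root) / 2

lam1, lam2 = eigenvalues_2x2(1, 3, 5, 3)   # the matrix of Example 1
# lam1 = 6.0, lam2 = -2.0
```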
Example 2
Consider the system:
dx/dt = 2x + 8y,
dy/dt = 4y,   (25)
so that clearly
A = [[2, 8], [0, 4]].
Then (19) becomes
λ2 − 6λ + 8 = 0.
Solving this equation we get λ1 = 2, λ2 = 4. Let K1 = (a, b)^T be
the eigenvector corresponding to the eigenvalue λ1 = 2. Then, from
equation (15) we need to solve
(A − 2I) K1 = 0,
i.e.
[[0, 8], [0, 2]] (a, b)^T = (0, 0)^T
yielding the two equations
8b = 0
2b = 0.
Thus b = 0 with a arbitrary, so
K1 = c1 (1, 0)^T.
Next, let K2 = (c, d)^T be the eigenvector corresponding to the eigenvalue
λ2 = 4. Then, from (A − λ2 I)K2 = 0 (see equation (15)), we
get
[[−2, 8], [0, 0]] (c, d)^T = (0, 0)^T
i.e.
−2c + 8d = 0 ⇒ 2c = 8d ⇒ c = 4d.
Again we have only one equation with two unknowns. (The second
equation is zero multiplied by the first one.) Clearly we can solve one
unknown in terms of the other, that is, let d = c2 , then c = 4d = 4c2
giving the vector
K2 = c2 (4, 1)^T,
with c2 an arbitrary constant.
Therefore
X = (x, y)^T = c1 (1, 0)^T e^(2t) + c2 (4, 1)^T e^(4t).   (26)
Example 3
Consider the system:
dx/dt = −x − 2y + 4
dy/dt = x − 4y + 2.   (27)
First, we need to find the location of the fixed point (x∗ , y∗ ) by
solving the equations:
0 = −x∗ − 2y∗ + 4,
0 = x∗ − 4y∗ + 2.
λ^2 + 5λ + 6 = 0,
so λ1 = −2, λ2 = −3. The corresponding eigenvectors are
K1 = c1 (2, 1)^T, K2 = c2 (1, 1)^T. Then
X = (x − x∗, y − y∗)^T = c1 (2, 1)^T e^(−2t) + c2 (1, 1)^T e^(−3t),
or
Case (B)
In this case equation (19) has two distinct complex solutions for λ.
Writing β = √(4∆ − σ^2)/2, the solutions are
λ1 = σ/2 + iβ,  λ2 = σ/2 − iβ.   (29)
Note that λ1 and λ2 are complex conjugates of each other.
Example 1
Consider the system
dx/dt = 6x − y,
dy/dt = 5x + 4y.   (30)
Then
A = [[6, −1], [5, 4]],
so that equation (19) becomes
λ2 − 10λ + 29 = 0.
λ1 = 5 + 2i, λ2 = 5 − 2i (31)
[[1 − 2i, −1], [5, −1 − 2i]] K1 = 0.   (32)
[Note that, as in case (A), the second row of the matrix is linearly
dependent on the first: just multiply row one by (1 + 2i) to see
this.] Solving (32) gives
K1 = c1 (1, 1 − 2i)^T.   (33)
Similarly, for λ2,
K2 = c2 (1, 1 + 2i)^T.   (34)
Hence
X = (x, y)^T = c1 (1, 1 − 2i)^T e^((5+2i)t) + c2 (1, 1 + 2i)^T e^((5−2i)t).   (35)
Let
C1 = c1 + c2 and C2 = c1 i − c2 i. (37)
Then
X = (x, y)^T = C1 e^(5t) (cos 2t, cos 2t + 2 sin 2t)^T
+ C2 e^(5t) (sin 2t, −2 cos 2t + sin 2t)^T.   (38)
Since
(1, 1 − 2i)^T = (1, 1)^T + i (0, −2)^T,
we have that
B1 = Re(K1) = (1, 1)^T,  B2 = Im(K1) = (0, −2)^T.
X = e^(5t) C1 [ (1, 1)^T cos 2t − (0, −2)^T sin 2t ]
+ e^(5t) C2 [ (0, −2)^T cos 2t + (1, 1)^T sin 2t ]
The solution curves are shown in Figure 4. The fixed point at (0,0)
is called an unstable focus.
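The same quadratic (19) can be evaluated with complex arithmetic; a sketch for the matrix of this example (σ = 10, ∆ = 29):

```python
import cmath

# Complex eigenvalues from equation (19) for A = [[6, -1], [5, 4]]:
# sigma = 10, Delta = 29, so lambda = (10 +/- sqrt(100 - 116)) / 2 = 5 +/- 2i.
sigma, delta = 10, 29
root = cmath.sqrt(sigma ** 2 - 4 * delta)
lam1 = (sigma + root) / 2
lam2 = (sigma - root) / 2
# Re(lambda) = 5 > 0, so the focus at the origin is unstable.
```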
Example 2
Consider the system
d/dt (x, y)^T = [[−1, 2], [−1/2, −1]] (x, y)^T + (2, 1)^T.   (41)
The fixed point satisfies
[[−1, 2], [−1/2, −1]] (x∗, y∗)^T = −(2, 1)^T
λ2 + 2λ + 2 = 0
[[−i, 2], [−1/2, −i]] (a, b)^T = (0, 0)^T,
i.e.
−ia + 2b = 0   ...(1)
−(1/2)a − ib = 0   ...(2)
From (1), choose a = 1 (say); then b = i/2, so that
K1 = (1, i/2)^T
= (1, 0)^T + i (0, 1/2)^T
and therefore
B1 = (1, 0)^T and B2 = (0, 1/2)^T.
Hence
X = (x − x∗, y − y∗)^T = e^(−t) [ C1 ( (1, 0)^T cos t − (0, 1/2)^T sin t )
+ C2 ( (0, 1/2)^T cos t + (1, 0)^T sin t ) ].
Thus
Example 3
Consider the system
Ẋ = [[2, 8], [−1, −2]] X.   (43)
Here σ = 0 and ∆ = 4, so λ = ±2i, and for λ1 = 2i equation (15) is
[[2 − 2i, 8], [−1, −2 − 2i]] K1 = 0
Figure 6: Centre
Case (C)
Here we have “equal roots”, so σ^2 − 4∆ = 0. This is a special case
which, for the purposes of this module, is not very important, so we
just briefly summarise the solutions. There are really two sub-cases:
C − 1:
Here
Ẋ = [[a, 0], [0, a]] X   (45)
which, for X = (x(t), y(t))^T, can be written as
(ẋ(t), ẏ(t))^T = [[a, 0], [0, a]] (x(t), y(t))^T
i.e.
ẋ (t) = ax (t) and ẏ = ay (t) .
so that
X = (x, y)^T = c1 (1, 0)^T e^(at) + c2 (0, 1)^T e^(at).   (46)
C − 2:
All other cases in which σ 2 − 4∆ = 0. It turns out that there is only
one eigenvector K. So one solution is
X = c1 Keλt (47)
39 MAT217—X/1
APM214—Y/1
so that
(A − λI)K = 0 (50)
and
(A − λI)P = K. (51)
Equation (50) is the usual eigenvector equation for K and (51) then
determines P.
Example
Consider the system
3 −18
Ẋ = X. (52)
2 −9
λ2 + 6λ + 9 = 0
or
(λ + 3)2 = 0,
so
λ = −3.
From (50), with λ = −3,
[[6, −18], [2, −6]] K = 0,
which has a solution K = (3, 1)^T. Then from (51), the equation for
P is
[[6, −18], [2, −6]] P = (3, 1)^T
i.e.
2a − 6b = 1.
Now choose (say) b = 0; then a = 1/2, so that P = (1/2, 0)^T. Thus the
general solution of (52) is
X = c1 (3, 1)^T e^(−3t) + c2 [ (3, 1)^T t e^(−3t) + (1/2, 0)^T e^(−3t) ].   (53)
is illustrated in figure 2.
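The construction (50)-(51) can be checked numerically; a sketch for this example, verifying that K and P satisfy the two defining equations:

```python
# Verify, for A = [[3, -18], [2, -9]] and lambda = -3, that
# (A - lambda*I)K = 0 and (A - lambda*I)P = K.
A = [[3.0, -18.0], [2.0, -9.0]]
lam = -3.0
K = [3.0, 1.0]
P = [0.5, 0.0]

def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

AmL = [[A[0][0] - lam, A[0][1]], [A[1][0], A[1][1] - lam]]
r1 = mat_vec(AmL, K)   # should be the zero vector
r2 = mat_vec(AmL, P)   # should equal K
```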
ẋ = Ax + f (54)
Let
X = x − x∗ (56)
then
Ẋ = AX. (57)
For a large system, carrying out the procedure to obtain (59) from
(54) is at least tedious if not impractical. So, is there any useful
information that can be obtained from (54) without finding the com-
plete general solution? The stability of the solution about the fixed
point x∗ is often of great importance, and is determined entirely by
the eigenvalues λ1 , ..., λm . More precisely, it is obvious from (59)
that (54) is stable at x = x∗ if and only if
Example 1
Consider the system
Ẋ = [[−4, 1, 1], [1, 5, −1], [0, 1, −3]] X.   (61)
Thus the eigenvalues are −3, −4 and 5, and the general solution is
Example 2
Consider the system
ẋ = [[−1, 1, −2], [1, −1, 0], [1, 0, −1]] x + (−1, 3, 2)^T.   (63)
λ^3 + 3λ^2 + 4λ + 2 = 0,
which factorises as
(λ + 1)(λ^2 + 2λ + 2) = 0,
so
λ1 = −1,
λ2 = −1 + i,
λ3 = −1 − i.
It is clear that the solution is stable, since all eigenvalues have neg-
ative real part. The terms e±it can be expressed in terms of sin t
and cos t, and therefore the solution has a component which oscillates
with frequency 1/2π. [Recall that sin ωt or cos ωt has period
T = 2π/ω and frequency ω/2π.] In some applications the frequency
(or frequencies) may be important.
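A small sketch checking that all roots of the factorised characteristic polynomial have negative real part (using the factorisation given above):

```python
import cmath

# Roots of (lambda + 1)(lambda^2 + 2*lambda + 2) = 0.
roots = [-1.0 + 0j]
d = cmath.sqrt(2 ** 2 - 4 * 2)          # sqrt of the quadratic's discriminant = 2i
roots += [(-2 + d) / 2, (-2 - d) / 2]   # -1 + i and -1 - i
stable = all(r.real < 0 for r in roots)  # all real parts negative: stable
```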
5. Discrete systems
First we describe the general situation, and then investigate some
aspects of 2-D systems in more detail.
x∗ = Ax∗ + f .
Therefore
x∗ = (I − A)−1 f . (66)
Unfortunately, (68) does not give much insight into how X(n) evolves
in (discrete) time. Rather, we again use eigenvalue theory. For sim-
plicity, we assume that A has m distinct eigenvalues λ1 , ..., λm , with
corresponding eigenvectors K1 , ..., Km . Then, as discussed earlier,
the eigenvectors form a basis of the linear space and for any X(0)
we can write
X(0) = Σ_{i=1}^{m} ci Ki   (69)
where the constants ci are unique. Putting (69) into (68) gives
X(n) = Σ_{i=1}^{m} ci A^n Ki = Σ_{i=1}^{m} ci λi^n Ki.   (70)
From (70) it is clear that if |λi| < 1 (i = 1, ..., m) then X(n) → 0
as n → ∞ for any given initial condition X(0). Such a system is
stable. On the other hand, if any |λi| > 1 then, under certain initial
conditions, X(n) → ∞ as n → ∞ and the system is unstable.
Example 1
Consider the system
with
x(0) = 1, y(0) = 2.
Let
X(n) = (x(n), y(n))^T.
Then equation (71) can be written
X(n + 1) = [[1, 2], [−1, 4]] X(n)
with
X(0) = (1, 2)^T.
We have calculated eigenvalues and eigenvectors for a 2 × 2 matrix
many times in this chapter, and we just state the result:
λ1 = 2, K1 = (2, 1)^T;  λ2 = 3, K2 = (1, 1)^T.
X(0) = c1 K1 + c2 K2 .
This gives
1 = 2c1 + c2
2 = c1 + c2 .
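Solving these equations gives c1 = −1, c2 = 3 (a step the extract omits). The sketch below verifies the resulting formula X(n) = c1 2^n K1 + c2 3^n K2 against direct iteration:

```python
# Check the eigenvector solution of X(n+1) = [[1, 2], [-1, 4]] X(n), X(0) = (1, 2).
c1, c2 = -1.0, 3.0
assert abs(2 * c1 + c2 - 1) < 1e-12 and abs(c1 + c2 - 2) < 1e-12

x, y = 1.0, 2.0
for n in range(1, 10):
    x, y = x + 2 * y, -x + 4 * y          # direct iteration
    fx = c1 * 2 ** n * 2 + c2 * 3 ** n    # formula, first component
    fy = c1 * 2 ** n * 1 + c2 * 3 ** n    # formula, second component
    assert (x, y) == (fx, fy)
```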
Example 2
Consider the system
X(n + 1) = [[0.5, −0.25], [1, 0.5]] X(n)
with
X(0) = (1, 2)^T.   (73)
The eigenvalue equation is
λ2 − λ + 0.5 = 0
so that
λ1 = 0.5 + 0.5i,
λ2 = 0.5 − 0.5i.
K1 = (i, 2)^T,
K2 = (−i, 2)^T.
Thus
X(n) = c1 (i, 2)^T (0.5 + 0.5i)^n + c2 (−i, 2)^T (0.5 − 0.5i)^n.
The constants c1, c2 are determined from X(0) = (1, 2)^T.
We get
(1, 2)^T = (i(c1 − c2), 2(c1 + c2))^T
which are solved to give
c1 = 0.5 − 0.5i
c2 = 0.5 + 0.5i.
Therefore
X(n) = (0.5 − 0.5i)(0.5 + 0.5i)^n (i, 2)^T
+ (0.5 + 0.5i)(0.5 − 0.5i)^n (−i, 2)^T.   (74)
There is one important point to make about equation (74). The sec-
ond term is the complex conjugate of the first term, and therefore
their sum is real. This type of behaviour must always occur when
the matrix A and the initial vector X(0) are real, since the complex
eigenvalues and their coefficients then occur in conjugate pairs.
One can also interpret (74) in terms of sine and cosine functions.
Using the (r, θ) representation of complex numbers z = a + ib, i.e.
z = re^(iθ)
where
r = √(a^2 + b^2),  θ = tan^(−1)(b/a),
we write:
0.5 − 0.5i = (1/√2) e^(−iπ/4),
0.5 + 0.5i = (1/√2) e^(iπ/4).
Then (74) becomes
X(n) = (1/√2)^(n+1) e^(iπ(n−1)/4) (e^(iπ/2), 2)^T
+ (1/√2)^(n+1) e^(iπ(1−n)/4) (e^(−iπ/2), 2)^T
= (1/√2)^(n+1) ( e^(iπ(n−1)/4 + iπ/2) + e^(iπ(1−n)/4 − iπ/2), 2e^(iπ(n−1)/4) + 2e^(iπ(1−n)/4) )^T
= (1/√2)^(n+1) ( 2 cos(π/2 + (n − 1)π/4), 4 cos((n − 1)π/4) )^T
= (1/√2)^(n−1) ( −sin((n − 1)π/4), 2 cos((n − 1)π/4) )^T,   (75)
cos t = (e^(it) + e^(−it))/2.
Example 3
Consider the system
x(n + 1) = [[2, 1], [0, 2]] x(n) + (−2, −1)^T.   (76)
x∗ = [[2, 1], [0, 2]] x∗ + (−2, −1)^T.
Therefore
[[1, 1], [0, 1]] x∗ + (−2, −1)^T = 0,
and hence
x∗ = (1, 1)^T.
Let
X(n) = x(n) − x∗ .
Then
X(n + 1) = [[2, 1], [0, 2]] X(n),
with eigenvalue equation
(λ − 2)^2 = 0.
[Here P = c2 (0, 1)^T.] Then the general solution is
6. Some Applications
The key assumption in the model is that each force has a “hitting
strength”, which is proportional to its size.
which gives
λ1 = −√(αβ),  λ2 = +√(αβ),   (80)
and, for λ2,
[[−√(αβ), −α], [−β, −√(αβ)]] (K2x, K2y)^T = 0.
In the sketch above you will see a dividing line between those
solutions which, as time increases, lead to a total elimination
yielding
c1 = (x(0)√β + y(0)√α) / (2√(αβ)),
c2 = (y(0)√α − x(0)√β) / (2√(αβ)).
lim_{t→∞} (x(t), y(t))^T = (0, 0)^T,
that is
c1 (√α, √β)^T lim_{t→∞} e^(−√(αβ) t) + c2 (−√α, √β)^T lim_{t→∞} e^(√(αβ) t) = (0, 0)^T.
(a) If y(0) > x(0)√(β/α) then x is totally eliminated.
(b) If y(0) < x(0)√(β/α) then y is totally eliminated.
(c) In the case that the two sides are equally matched, y(0) =
x(0)√(β/α), the two sides eliminate each other.
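The combat model can be simulated directly. The sketch below uses the system ẋ = −αy, ẏ = −βx (the form consistent with the eigenvalues ±√(αβ) above) and illustrates case (b), where y is eliminated; the numerical values are arbitrary choices:

```python
# Euler simulation of dx/dt = -alpha*y, dy/dt = -beta*x.
# With alpha = 1, beta = 4, x(0) = y(0) = 10 we have y(0) < x(0)*sqrt(beta/alpha),
# so case (b) predicts y is eliminated.
alpha, beta = 1.0, 4.0
x, y = 10.0, 10.0
h = 1e-5
while y > 0 and x > 0:
    x, y = x - h * alpha * y, y - h * beta * x

# The quantity beta*x^2 - alpha*y^2 is conserved, so when y reaches 0,
# x = sqrt(x0^2 - (alpha/beta)*y0^2) = sqrt(75).
final_x = x
```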
The idea is that only the first cohort is increased due to birth,
and the birthrate depends very much on the age of the parents.
The other cohorts are not affected by birth but by the survival
rate from the previous cohort:
Now, because λ1 > |λi| (i = 2, ..., m), the Σ_{i=2}^{m} term tends
to zero as n → ∞. Thus, as n → ∞,
Example
Consider the fictional species S. Suppose that, measuring n
in years, the population can be divided into three age groups:
x1, x2, x3. Let the birth rates be b1 = 0.5, b2 = 5, b3 = 3, and
let the survival rates be S1 = 0.5, S2 = 2/3. Then equation
(84) is
x(n + 1) = [[0.5, 5, 3], [0.5, 0, 0], [0, 2/3, 0]] x(n).   (85)
The eigenvalue equation is
(0.5 − λ)(−λ)^2 − 5(0.5 × (−λ)) + 3 × 0.5 × (2/3) = 0.
Therefore
−λ^3 + (1/2)λ^2 + (5/2)λ + 1 = 0,
and hence
−(λ + 1/2)(λ + 1)(λ − 2) = 0.   (86)
2
Let
K1 = (K11, K12, K13)^T.
Suppose that K13 = 1; then the last row of (87),
(2/3) K12 − 2 K13 = 0,
gives
K12 = 3,
and the second row, 0.5 K11 − 2 K12 = 0, gives K11 = 12. Thus
K1 = (12, 3, 1)^T.   (88)
and, for large n,
(x1, x2, x3)^T ∝ (12, 3, 1)^T.   (89)
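Repeated multiplication by the Leslie matrix shows the age distribution settling onto the dominant eigenvector; a sketch using the matrix of (85):

```python
# Power iteration on the Leslie matrix of equation (85). The age distribution
# approaches the eigenvector (12, 3, 1) of the dominant eigenvalue 2.
L = [[0.5, 5.0, 3.0],
     [0.5, 0.0, 0.0],
     [0.0, 2.0 / 3.0, 0.0]]

x = [1.0, 1.0, 1.0]
for _ in range(60):
    x = [sum(L[i][j] * x[j] for j in range(3)) for i in range(3)]

ratios = [x[0] / x[2], x[1] / x[2]]   # approaches [12, 3]
```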
where
A = a [[1, −1, 0], [−1, 2, −1], [0, −1, 2]],
M = m [[1, 0, 0], [0, 1, 0], [0, 0, 2]].   (91)
x = K e^(±iωt).   (92)
Therefore
ω^2 K = BK.   (93)
0 = det(B − ω^2 I)
= det [[a/m − ω^2, −a/m, 0], [−a/m, 2a/m − ω^2, −a/m], [0, −a/2m, a/m − ω^2]]
= (a/m − ω^2)[(2a/m − ω^2)(a/m − ω^2) − a^2/(2m^2)]
+ (a/m)[−(a/m)(a/m − ω^2)]
= (a/m)^3 [ (1 − λ)((2 − λ)(1 − λ) − 1/2) − (1 − λ) ]
= (a/m)^3 (1 − λ)(λ^2 − 3λ + 2 − 1/2 − 1)
= (a/m)^3 (1 − λ)(λ^2 − 3λ + 1/2),
where λ = (m/a) ω^2.
Hence we solve
(1 − λ)(λ^2 − 3λ + 1/2) = 0   (94)
λ1 = 1,
λ2 = (3 + √7)/2,
λ3 = (3 − √7)/2,
so that
ω1 = (a/m)^(1/2),
ω2 = 1.68 (a/m)^(1/2),
ω3 = 0.42 (a/m)^(1/2).   (95)
Thus the lowest value of ω is ω3, and therefore the lowest
natural frequency (frequency ν = ω/2π) is
ν = 0.067 (a/m)^(1/2).   (96)
Problems occur when the lowest frequency is small, meaning
in practice when ν ≤ 10 (cycles/second). Thus one
requires
a/m ≥ (10/0.067)^2 ≈ 22277.   (97)
The ratio elastic coefficient : mass must be high. The
analysis described above is very important in engineering
design. It was not applied to the design of the Tacoma
Narrows bridge in the USA, and in 1940 that bridge
shook itself to pieces during a storm.
C = [[0, I], [−B, 0]].   (99)
±i√λ1, ±i√λ2, ±i√λ3   (100)
EXERCISES
(d) X(n + 1) = [[−1, 2], [−7, 4]] X(n).
(e) X(n + 1) = [[1, 2], [1/2, 1/2]] X(n) + (−4, −1)^T.
(b) dX/dt = [[−2, 3], [−3, 5]] X.
(c) dx/dt = 6x − y − 5,  dy/dt = 5x + 2y − 3.
(d) dX/dt = [[−1, −1, 0], [3/4, −3/2, 3], [1/8, 1/4, −1/2]] X.
3. For the following systems, find the solution that satisfies the
given initial condition; state the location and nature of the
singular point.
(a) dX/dt = [[−1, −2], [3, 4]] X + (3, 3)^T subject to X(0) = (3, 4)^T.
(b) dX/dt = [[4, −5], [5, −4]] X subject to X(0) = (6, 2)^T.
(c) dx/dt = −x + y, with x(0) = 2,
dy/dt = x + 2y + z − 5, with y(0) = 3,
dz/dt = 3y − z − 1, with z(0) = 4.
−2 12 2
(a) X(n + 1) = X(n), with X(0) = .
5 2
(b) X(n + 1) = [[2, −5], [1, −2]] X(n), with X(0) = (5, 1)^T.
(d) X(n + 1) = [[13, 25], [−9, −17]] X(n) + (1, 0)^T
with X(0) = (2, 2)^T.
where 0 ≤ α ≤ 1.
x(k + 1) = Ax(k)
where
A = [[α − β(1 − γ), βγ], [β(1 − γ), α − βγ]]
and
x(k) = (r(k), u(k))^T.
(a) X(n + 1) = [[7, −1], [5, 3]] X(n)
(b) X(n + 1) = [[6, 1], [−2, 4]] X(n) − (1, 2)^T
(c) X(n + 1) = [[5, −5], [5, −3]] X(n)
(d) x(n + 1) = 4x(n) − y(n) − 2
y(n + 1) = 9x(n) − 2y(n) − 6
(e) x(n + 1) = 3y(n) − 4
y(n + 1) = −3x(n) + 6y(n) − 4
(f) x(n + 1) = −5x(n) + 5y(n)
y(n + 1) = −5x(n) + 5y(n)
(a) dx/dt = z,  dy/dt = −z,  dz/dt = y
(b) dX/dt = [[−1, 1, 0], [1, 2, 1], [0, 3, −1]] X + (0, −4, −2)^T
(c) dX/dt = [[1, 1, 4], [0, 2, 0], [1, 1, 1]] X
(a) x(n + 2) = 3x(n + 1) − 2x(n)
(b) x(n + 2) = (1/4) x(n)
(c) d^2x/dt^2 + 4 dx/dt + 3 = 0
(d) d^2x/dt^2 − 5 dx/dt + 6 = 0
CHAPTER 3
1. Introduction
It was not possible to change the state of the dynamical systems we
studied in Chapter 2 by means of an “outside” control — the changes
in state were entirely determined by the equations describing the dy-
namical system and the initial conditions. In many practical situa-
tions we do have some control over the system (for example, turning
a tap on or off to control the level of water in a tank, or a heater
on or off to control the temperature in a room). In this chapter we
consider some aspects of linear dynamical systems subject to linear
controls.
(i) Controllability
Here we want to know whether we can always choose an input u to
change the state vector x from any initial value to any final value.
Roughly, how well we can control the system. This is dealt with in
sections 3 and 4.
(ii) Observability
The question here is “If we know the output vector y and the input
u can we determine the state vector x?” In other words, can we use
the output and input to observe the state of the system? Sections
5 and 6 are concerned with this question.
2. Dynamic Diagrams
These diagrams are built up from five elementary components shown
in figure 1:
Figure 1
The summer (a) adds whatever comes into it, instantaneously producing
the sum. Any number of lines can come into a summer and
one line comes out. The transmission (b) multiplies what comes in
by the number written in the box. Diagram (c) is a splitter — it
connects an incoming line with any number of outgoing lines, each
having the value of the incoming line. The delay (d) is the basic
dynamic component of discrete-time systems. If x(k) comes in to
a delay diagram, then the value of x one time unit earlier goes out,
that is, x(k − 1) goes out. So if x(k + 1) goes in, x(k) goes out,
and if x(k − 1) goes in, x(k − 2) goes out, and so on. The integrator
(e) is the basic dynamic component of continuous-time systems. If
ẋ goes in to the integrator then x goes out. The way in which these
diagrams are put together to form linear control systems is best described
by examples.
ẋ(t) = u(t).
The output x(t) is the integral of the input function u(t). The dia-
gram is therefore:
Figure 2
Example 2
Consider the control system
(ẋ1, ẋ2)^T = [[3, 0], [1, 2]] (x1, x2)^T + (1, 0)^T u(t)
y (t) = x2 (t) .
Figure 3
Similarly the diagram corresponding to the equation
ẋ2 = x1 + 2x2 is
Figure 4
Figure 5
Example 3
The diagram for the one-dimensional discrete system
is
Figure 6
Example 4 (A very simple model for driving a car)
Suppose a car is driven along a straight road, its distance from an
initial point O being s(t) at time t. Assume the car is controlled
by the accelerator, providing a force of u1 (t) per unit mass, and the
x1 (t) = s(t)
and speed
x2 (t) = ṡ(t).
Then
ẋ1 = x2
ẋ2 = u1 − u2 .
In matrix notation:
(ẋ1, ẋ2)^T = [[0, 1], [0, 0]] (x1, x2)^T + [[0, 0], [1, −1]] (u1, u2)^T.
Figure 7
Suppose the stick has length L and the mass M of the stick is
concentrated at the top. Let the variables u, θ, and x be as shown
in the figure below:
Figure 8
We also have
x(t) = u(t) + L sin θ(t).
(ẋ(t), v̇(t))^T = [[0, 1], [g/L, 0]] (x(t), v(t))^T + (0, −g/L)^T u(t).
The dynamic diagram for the system is
Figure 9
Figure 10
or
θ̈ + (g/L) sin θ = 0.
Define
x1 = θ
x2 = θ̇.
Then we get
ẋ1 = x2
ẋ2 = −(g/L) sin x1.
The output y is the angle θ so
y = x1 .
If the angle θ is small we can put sin x1 ≈ x1, and the matrix equations
for the system are then
(ẋ1, ẋ2)^T = [[0, 1], [−g/L, 0]] (x1, x2)^T
y = (1 0) (x1, x2)^T.
Figure 11
Example 7
Consider a system consisting of an interconnected pair of water
tanks, as shown in figure 12:
Figure 12
dV1/dt = u − y1 + λ(V2/A2 − V1/A1),  y1 = µV1/A1
dV2/dt = u − y2 + λ(V1/A1 − V2/A2),  y2 = µV2/A2.
Thus
dV1/dt = −(λ + µ)V1/A1 + λV2/A2 + u,
dV2/dt = λV1/A1 − (λ + µ)V2/A2 + u.
(V̇1, V̇2)^T = [[−(λ + µ)/A1, λ/A2], [λ/A1, −(λ + µ)/A2]] (V1, V2)^T + (1, 1)^T u
y = (µ/A1, µ/A2) (V1, V2)^T.
Definition
The n-dimensional system
Example 8
Consider the system with dynamic diagram
Figure 13
x1 (k + 1) = ax1 (k)
x2 (k + 1) = x1 (k) + bx2 (k) + u(k).
It is clear from both the diagram and equations that the control u
cannot have any effect on the state variable x1. Hence this system
is not completely controllable.
Example 9
Consider the one-dimensional discrete system with equation
Lemma 1
Suppose A is an n × n matrix and B is an n × m matrix. Then, for
any integer N ≥ n ≥ 1, the rank of the matrix
The square bracket [...] notation used here is explained in the ap-
pendix.
Theorem 1
The discrete-time system
has rank n.
Proof
Suppose a sequence of inputs u(0), u(1), u(2), ..., u(N −1) is applied
to the system
x(k + 1) = Ax(k) + Bu(k)
rank K = rank M,
Remark
It follows from the above proof that N ≤ n. Thus we can reach any
vector from the zero vector in n or fewer steps if the system is com-
pletely controllable. In fact we can transfer the state between two
arbitrary vectors within n steps. To see this, suppose x(0) and x(n)
are specified arbitrarily. With zero input the system would move to
An x(0) at time n. Then the desired input sequence is the one that
would transfer the state from the zero vector to x(n) − An x(0) at
time n.
Example 10
Consider the control system in example 8. We have
A = [[a, 0], [1, b]]
and
B = (0, 1)^T.
M = [B, AB] = [[0, 0], [1, b]].
rank M = 1 < 2.
Example 11
Suppose we modify the system in example 10 by shifting the input
to the first stage rather than the second. Then we get the dynamic
diagram
Figure 14
Then
a 0
A=
1 b
and
1
B= .
0
The controllability matrix M is therefore
M = [B, AB] = [[1, a], [0, 1]].
Therefore rank M = 2, and hence the system is completely control-
lable.
Example 12
Consider the simple one—dimensional system in example 9. Then
A = 1 and B = 1. Hence
M = [B] = [1] .
Clearly rank M = 1, and therefore the system is completely con-
trollable.
Example 13
We investigate the discrete system with scalar control u and equa-
tion
x(k + 1) = Ax(k) + Bu(k) (1)
where
A = [[−2, 2], [1, −1]],
B = (1, 0)^T
and n = 2. Then
M = [B, AB] = [[1, −2], [0, 1]].
Clearly rank M = 2 = n, and so the system is completely control-
lable. Let us find a control sequence which transfers the zero vector
to the vector
(1, 2)^T.
We know we can do this in at most two steps. Let u(0), u(1) be a
control sequence such that
x(0) = (0, 0)^T
and
x(2) = (1, 2)^T.
Then from equation (1) it follows that
since x (0) = 0.
This gives
(1, 2)^T = (−2, 1)^T u(0) + (1, 0)^T u(1).
Therefore
u(1) − 2u(0) = 1
u(0) = 2,
so that u(1) = 5.
If instead we try to make the transfer in one step, we need
(1, 2)^T = (1, 0)^T u(0).
This is obviously not possible, and so the transfer cannot be accom-
plished in one step.
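The whole example can be checked in a few lines; the sketch below forms M = [B, AB] and applies the control sequence u(0) = 2, u(1) = 5 (with u(1) = 5 following from u(1) − 2u(0) = 1):

```python
# Controllability check and two-step transfer for example 13.
A = [[-2.0, 2.0], [1.0, -1.0]]
B = [1.0, 0.0]

AB = [A[0][0] * B[0] + A[0][1] * B[1],
      A[1][0] * B[0] + A[1][1] * B[1]]          # = (-2, 1)
M = [[B[0], AB[0]], [B[1], AB[1]]]
det_M = M[0][0] * M[1][1] - M[0][1] * M[1][0]   # nonzero, so rank M = 2

def step(x, u):
    # One step of x(k+1) = A x(k) + B u(k).
    return [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
            A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]

x = step([0.0, 0.0], 2.0)   # x(1) with u(0) = 2
x = step(x, 5.0)            # x(2) with u(1) = 5; should equal (1, 2)
```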
Definition
The system
ẋ(t) = Ax(t) + Bu(t)
is said to be completely controllable if for x(0) = 0 and any given
state x1 there exists a finite number t1 and a piecewise continuous
input u(t), 0 ≤ t ≤ t1 , such that x(t1 ) = x1 .
As with the discrete case there is a simple test for complete controllability.
The test is exactly the same as for the discrete case
but the proof is different. Some of the mathematical results used
will most likely be new to you, so you should study sections 3-5 of
the appendix before working through the next theorem, whose proof
uses the following lemma:
Lemma 2
Suppose A is a constant n × n matrix, B is a constant n × m matrix
and
rank [B, AB, A^2 B, . . . , A^(n−1) B] = n.
Theorem 2
The n-dimensional continuous-time system
has rank n.
Proof
We first show that if rank M < n, then the system is not completely
controllable. From the results given in paragraph 5 in the appendix,
we get for any t1 and any input function u(t), 0 ≤ t ≤ t1 ,
x(t1) = ∫_0^{t1} e^(A(t1 − t)) B u(t) dt.   (1)
We verify that this input transfers the state from zero to x1 at time
t1 by substituting for u in equation (1):
x(t1) = ∫_0^{t1} e^(A(t1 − t)) B B^T e^(−A^T t) K^(−1) e^(−At1) x1 dt
= e^(At1) K K^(−1) e^(−At1) x1 = x1.
Note that in the above proof we select any t1 > 0, so in fact the
state can be transferred to x1 in an arbitrarily short period of time.
Example 14
The system is 1—dimensional with equation
ẋ = x + u.
M = [1].
dx/dt = x + u = x + 2e^(−(1+t))/(1 − e^(−2)).
Solving this 1st order linear differential equation for x(t) gives
x(t) = e(e^t − e^(−t))/(e^2 − 1)
and we see x(1) = 1.
Example 15
The matrix form of the system equations is
(ẋ, v̇)^T = [[0, 1], [g/L, 0]] (x, v)^T + (0, −g/L)^T u.
Therefore
A = [[0, 1], [g/L, 0]]
and
B = (0, −g/L)^T.
The controllability matrix M is then
M = [[0, −g/L], [−g/L, 0]].
Since rank M = 2 the system is controllable as we expected. But
you may be surprised by the following extension of the problem: Can
two sticks (with masses concentrated at their tops) placed side by
side a little distance apart on one hand be simultaneously balanced?
Let the suffixes 1 and 2 distinguish the various variables for the two
sticks. The same control is applied to both sticks, and there is no
interference between the sticks, so the system equations will be the
same as for one stick but repeated twice:
ẋ1 = v1
g
v̇1 = (x1 − u)
L1
ẋ2 = v2
g
v̇2 = (x2 − u).
L2
In matrix form:
(ẋ1, v̇1, ẋ2, v̇2)^T = [[0, 1, 0, 0], [g/L1, 0, 0, 0], [0, 0, 0, 1], [0, 0, g/L2, 0]] (x1, v1, x2, v2)^T
+ (0, −g/L1, 0, −g/L2)^T u.
Example 16
Consider the water tanks system in example 7. Let us investigate
under what conditions this system is completely controllable. In
this case
B = (1, 1)^T
and
AB = (−(λ + µ)/A1 + λ/A2, λ/A1 − (λ + µ)/A2)^T.
Hence the controllability matrix for this system is
M = [B, AB] = [[1, −(λ + µ)/A1 + λ/A2], [1, λ/A1 − (λ + µ)/A2]].
det M = 0 ⇔ A1 = A2 .
Definition
The discrete-time system
Note that once the initial state x(0) is known, all subsequent states
can be determined.
Example 17
Consider the system with dynamic diagram shown in figure 15 be-
low:
Figure 15
The system has the single output y which is not connected at all to
x1 , and therefore there is no way to infer the value of x1 from y. So
this system is not completely observable.
Example 18
Suppose we have a system with the same dynamic diagram as in
the previous example, except that the single output is taken from
x1 instead of x2 . The system is now completely observable. This
is easily shown as follows: Suppose we know y(0) and y(1), that is,
x1 (0) and x1 (1). From Figure 15 we see that
But we know x1 (0) and x1 (1), so we can solve for x2 (0). Hence if we
know y(0) and y(1) we can determine x(0) and therefore the system
is completely observable.
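The recovery argument can be sketched numerically; the entries a, b and the initial state below are arbitrary illustrative values (the text introduces A = [[a, 1], [0, b]] for this diagram in example 19):

```python
# For the modified system of example 18 (x1(k+1) = a*x1(k) + x2(k), output y = x1),
# x2(0) can be recovered from two outputs: x2(0) = y(1) - a*y(0).
a, b = 0.7, 0.3
x1, x2 = 2.0, 5.0           # "unknown" initial state to be inferred

y0 = x1                     # y(0) = x1(0)
y1 = a * x1 + x2            # y(1) = x1(1) = a*x1(0) + x2(0)

x1_est = y0                 # x1(0) read off directly
x2_est = y1 - a * y0        # recovers x2(0)
```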
Theorem 3
The system
has rank n.
Example 19
The system equations for the dynamic diagram in figure 15 are:
Then
A = [[a, 1], [0, b]]
and
C = (0 1).
For CA we get
CA = (0 1) [[a, 1], [0, b]] = (0 b).
The observability matrix is therefore
S = [[0, 1], [0, b]].
Example 20
If we modify figure 15 as described in example 18 then the system
equations are
C = (1 0).
Therefore
CA = (1 0) [[a, 1], [0, b]] = (a 1),
and the observability matrix S is
S = [[1, 0], [a, 1]].
Definition
A system
ẋ = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
The following theorem (we omit the proof) gives a condition for
complete observability for continuous systems:
Theorem 4
The system in the above definition is completely observable if and
only if the observability matrix
S = [C; CA; CA^2; . . . ; CA^(n−1)]   (the matrices stacked as rows)
has rank n.
Example 21
Under what conditions is the system of water tanks in example 7
completely observable? In this case
C = (µ/A1, µ/A2)
and the observability matrix is
[C; CA] = [[µ/A1, µ/A2],
[µ(λ(A1 − A2) − µA2)/(A1^2 A2), µ(λ(A2 − A1) − µA1)/(A1 A2^2)]].
The determinant of the observability matrix is
det [C; CA] = µ^2 (A2 − A1)(µ + 2λ)/(A1^2 A2^2).
7. Feedback
Control systems can be classified into two types: open loop and
closed loop (feedback) control systems. In an open loop system the
control input follows a pre-set pattern without taking account of the
output or state variables, whereas in a feedback control system the
control input is obtained by feeding the output back in some way
to the input. Simple illustrations of open and closed loop systems
are provided by robots at traffic intersections which change at fixed
intervals of time, and those which measure traffic flow in some way
and change accordingly. Two advantages of feedback control are:
(ii) Feedback control is often simpler than open loop control. For
example, one could design a complicated open loop control for
a traffic robot which varied the green and red cycle lengths
according to the expected traffic flow at various hours of the
day, but feedback control would be simpler and more effective.
Consider a system

ẋ = Ax + Bu

with linear state feedback u = Cx. Then

ẋ = (A + BC)x.
Example 22
Consider the one-dimensional system
ẋ(t) = u(t).
(a) A = [ −5  1 ],    B = [ 1 ].
        [  0  4 ]         [ 1 ]

(b) A = [ 0  1  0 ],    B = [ 0 ].
        [ 0  0  1 ]         [ 0 ]
        [ 0  0  0 ]         [ 1 ]

(c) A = [ 3  3  6 ],    B = [ 0 ].
        [ 1  1  2 ]         [ 0 ]
        [ 2  2  4 ]         [ 1 ]
where

A = [  2  −5 ],    B = [  1 ],
    [ −4   0 ]         [ −1 ]

and

C = [ 1  1 ].
Is this system (a) completely controllable, (b) completely observable?
where

A = [ 2  1 ]
    [ 0  2 ]

and let

b1 = [ 0 ],    b2 = [ 1 ].
     [ 1 ]          [ 0 ]
ẋ1 = x2 + u
ẋ2 = −2x1 − 3x2
y = αx1 + x2

ẋ3 = x3 + w
z = x3
The masses of the rods, which have unit length, can be ne-
glected. Equal and opposite control forces u(t) are applied to
the particles, which have mass m. For small oscillations (so
Take
x1 = θ1 + θ2
x2 = θ1 − θ2
x3 = ẋ1
x4 = ẋ2
CHAPTER 4
AUTONOMOUS NONLINEAR
SYSTEMS
4.1 STABILITY
At this stage, we need to be rather more precise about the concept
of stability. A linear system has, in general, only one fixed point,
and the whole space is affected by the stability (or not) of the fixed
point. In nonlinear systems there may be several fixed points, and
the stability (or not) about each fixed point is in general a concept
that is valid only locally. First, a trajectory means a path x(t)
or x(n) representing a solution of the system. The diagrams in
Chapter 2 showing the solution behaviour near a fixed point are
diagrams of trajectories. A fixed point x∗ is asymptotically stable
if and only if there exists a neighbourhood N of x∗ such that every
trajectory that starts in N approaches x∗ as time tends to infinity:
The examples given above are all for linear systems and the domain
of stability is the whole space. A simple example illustrating a finite
domain of stability is:
dx/dt = x(x² − 1).
The singular points are at x = 0, x = −1, x = +1 and the phase
diagram for the system is:
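The three fixed points can be classified by the sign of g′(x∗), anticipating the linearisation criterion derived next; a minimal check:

```python
# Stability of the fixed points of dx/dt = x(x**2 - 1) via the sign of g'.
def g(x):
    return x * (x**2 - 1)

def gprime(x):
    return 3 * x**2 - 1       # derivative of x**3 - x

stability = {x0: ('stable' if gprime(x0) < 0 else 'unstable')
             for x0 in (-1.0, 0.0, 1.0)}
# x* = 0 is stable; x* = -1 and x* = +1 are unstable, so the domain of
# stability of x* = 0 is only the interval (-1, 1)
```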
Consider a continuous system

dx/dt = g(x)     (1)

and suppose that x = x∗ is a fixed point, i.e. g(x∗) = 0. Then by
Taylor’s theorem, for x near x∗:

dx/dt ≈ g(x∗) + (x − x∗)g′(x∗).

Thus

dx/dt ≈ (x − x∗)g′(x∗).     (2)

In the language of the first part of Chapter 2, g′(x∗) is equivalent
to the constant a, and it is clear that the fixed point x = x∗ is
asymptotically stable for g′(x∗) < 0 and unstable for g′(x∗) > 0.
Similarly, for a discrete system x(n + 1) = g(x(n)) with fixed point
x∗ = g(x∗), the linearisation gives

(x(n + 1) − x∗) / (x(n) − x∗) ≈ g′(x∗),     (4)

and the system is asymptotically stable if |g′(x∗)| < 1 and unstable
if |g′(x∗)| > 1.
2.
x(n + 1) = x(n)² − 3x(n) + 4.     (6)

The fixed points satisfy

x∗ = x∗² − 3x∗ + 4.

Thus

x∗² − 4x∗ + 4 = (x∗ − 2)² = 0,

so that

x∗ = 2.
Furthermore, g(x) = x² − 3x + 4 so that g′(x) = d/dx (x² − 3x + 4) =
2x − 3, and at x = x∗ = 2 we have g′(2) = 1. Thus linearisation
does not say whether x∗ = 2 is a stable or unstable fixed point.
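Iterating the map on either side of x∗ = 2 shows what linearisation cannot: the fixed point is semi-stable, attracting from below and repelling from above. A quick numerical check:

```python
# Iterating x(n+1) = x(n)**2 - 3 x(n) + 4 near the fixed point x* = 2,
# where g'(2) = 1 and linearisation is inconclusive.
def g(x):
    return x**2 - 3*x + 4

x = 1.9                       # start just below x* = 2
for _ in range(2000):
    x = g(x)
below = x                     # creeps back up towards 2

x, escaped = 2.1, False       # start just above x* = 2
for _ in range(100):
    x = g(x)
    if x > 5:                 # the iterates slowly escape
        escaped = True
        break
```

Writing x = 2 + ε gives g(x) − 2 = (x − 1)(x − 2) = ε + ε², which explains the one-sided behaviour.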
A = [ 2x  2y ].     (10)
    [ 2x  −1 ]
The procedure for analysing the behaviour about the fixed points of
a nonlinear system is:
1. Find the fixed point(s).
2. For each fixed point evaluate the Jacobian.
3. Analyse the behaviour of the resulting linearised system

dx/dt = Ax.     (11)

For a discrete system the fixed points are the solutions of

x∗ = g(x∗).     (13)
Examples
1.
ẋ = x² + y² − 6
ẏ = x² − y     (14)

The fixed points are given by

ẋ = 0 ⇒ x² + y² − 6 = 0     (1)
ẏ = 0 ⇒ x² − y = 0     (2)

Substituting y = x² from (2) into (1) gives

y + y² − 6 = 0 ⇒ (y − 2)(y + 3) = 0,
i.e.
y = 2 or y = −3.
We only consider real solutions of x and y and thus ignore the option
y = −3, since this would give complex values for x. From y = 2 we
get x = ±√2 and our two critical points are therefore given by

P1 = (√2, 2) and P2 = (−√2, 2).
The Jacobian matrix is

A = [ 2x  2y ]
    [ 2x  −1 ]

and so at P1

A = [ 2√2   4 ].
    [ 2√2  −1 ]

Thus

∆ = −2√2 − 8√2 = −10√2 < 0,
σ = 2√2 − 1,

and since ∆ < 0, P1 is a saddle point. At P2

A = [ −2√2   4 ],
    [ −2√2  −1 ]

from which

∆ = 10√2 > 0,
σ = −2√2 − 1 < 0,

and

σ² − 4∆ = (2√2 + 1)² − 40√2 = 9 − 36√2 < 0,

so P2 is a stable focus.
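The classification of P1 and P2 can be confirmed by computing the eigenvalues of the two Jacobians numerically:

```python
# Eigenvalues of the Jacobians of example 1 at P1 = (sqrt 2, 2)
# and P2 = (-sqrt 2, 2).
import numpy as np

s = np.sqrt(2.0)
eig1 = np.linalg.eigvals(np.array([[2*s, 4.0], [2*s, -1.0]]))    # at P1
eig2 = np.linalg.eigvals(np.array([[-2*s, 4.0], [-2*s, -1.0]]))  # at P2
# eig1: real, opposite signs        -> saddle point
# eig2: complex, negative real part -> stable focus
```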
2.
ẍ + x − x³ = 0     (15)

is equivalent to the system

ẋ = y,  ẏ = x³ − x.     (16)

The fixed points are given by

ẋ = 0 ⇒ y = 0
ẏ = 0 ⇒ x³ − x = x(x − 1)(x + 1) = 0,

yielding the critical points P1 = (0, 0), P2 = (1, 0) and P3 = (−1, 0).
The Jacobian matrix is

A = [   0       1 ].
    [ 3x² − 1   0 ]
Thus
A1 = [  0   1 ],    A2 = A3 = [ 0  1 ].
     [ −1   0 ]               [ 2  0 ]
Now, det(A2 ) = det(A3 ) = −2 < 0, and therefore both P2 and P3
are saddle points. The matrix A1 has ∆ = 1, σ = 0. It is therefore a
focus, but since σ is on the borderline between stable and unstable,
it is not clear whether the behaviour near P1 is a stable focus, an
unstable focus, or a centre.
3.
x(n + 1) = x(n)² − 2y(n) + 3,     (i)
y(n + 1) = x(n)y(n) − 7y(n).     (ii)

One fixed point is P1 = (8, 29.5). The Jacobian is

A = [ 2x   −2    ],
    [  y   x − 7 ]

thus

A1 = [ 16    −2 ]
     [ 29.5   1 ]

at the singular point P1. The eigenvalue equation has solutions
that are complex with |λ| > 1, so P1 is unstable.
The x-isocline and y-isocline are curves in the (x, y) plane on which
dx/dt = 0 and dy/dt = 0, respectively. They are obtained by solving
the (algebraic) equations
(b) they divide the (x, y) plane into regions in which the direction
of a trajectory is known to within 90°;
In this model it does not make sense to talk about negative popu-
lations, so we are only interested in solutions with x, y ≥ 0.
where Q is at the intersection of (ii) and (iv). There are four
separate cases to consider, depending on the values of the constants
P1, P2, α12, α21. The constants r1, r2 do not appear in the isocline
equations and so do not affect the fixed points of the system:
they affect the rate at which the dynamic system evolves, but not
the end-point(s) to which it can evolve. The cases are presented
diagrammatically as:
Figure 1
We will investigate case II here, i.e. the case where, as can clearly
be seen from the sketch, P1 > P2 /α21 and P2 > P1 /α12 . The other
cases are left to you to do as exercises.
129 MAT217—X/1
APM214—Y/1
At (0, 0) we have, from the Jacobian,

∆ = r1r2 > 0,
σ = r1 + r2 > 0,
σ² − 4∆ = (r1 + r2)² − 4r1r2 = (r1 − r2)² > 0,

so (0, 0) is an unstable node. At (0, P2) the Jacobian is

J = [ r1(1 − α12P2/P1)     0   ].     (24)
    [ −r2α21              −r2  ]
In this case, case II, P2 > P1 /α12 so the first term in the Jacobian
is negative. Thus, by setting
P2
β = 1 − α12
P1
and keeping in mind that β < 0 (from P2 > P1 /α12 ), we have
∆ = −r1r2β > 0,
σ = r1β − r2 < 0
and

σ² − 4∆ = (r1β − r2)² + 4r1r2β
        = r1²β² − 2r1r2β + r2² + 4r1r2β
        = r1²β² + 2r1r2β + r2²
        = (r1β + r2)²
        ≥ 0,

so the singular point (0, P2) is a stable node. Also, P1 > P2/α21, so
the last term in the Jacobian at (P1, 0) is negative and that singular
point can also be shown to be a stable node.
In order to deal with the singular point Q, we could solve (ii) and
(iv), substitute into the Jacobian and then evaluate σ and ∆. How-
ever, the algebra involved in such a direct approach is rather lengthy
(and therefore it is easy to make a mistake). It is better to approach
the algebra with some cunning. At the point Q equations (ii) and
(iv) apply, so it is permissible to use them to simplify the diagonal
terms of the Jacobian. The result is:
At Q,

J = [ −r1x∗/P1        −α12r1x∗/P1 ].     (26)
    [ −r2α21y∗/P2     −r2y∗/P2    ]
Thus

∆ = (r1r2x∗y∗)/(P1P2) · (1 − α21α12),
σ = −(r1x∗/P1 + r2y∗/P2).
In case II we have

P2 > P1/α12 > P2/(α12α21),     (27)

so that clearly

α12α21 > 1.

Hence ∆ < 0 and Q is a saddle point.
Note from the sketch that a singular point occurs only where an
x-isocline intersects a y-isocline. Since the y-axis is an x-isocline
and since the trajectories crossing the x-isocline are parallel to the
y-axis, the y-axis itself will be a trajectory. A similar argument
applies to the x-axis. The direction of the trajectories is away
from the unstable singular points, towards the stable points. Note
also that the trajectories crossing the other x-isocline cross it
parallel to the y-axis, and the trajectories crossing the y-isocline
cross it parallel to the x-axis.
From a diagram like that above there is usually only one way to
sketch the global, qualitative behaviour of the trajectories. In this
case:
The other cases I, III and IV are left as exercises. Here we shall
simply state that stable coexistence is possible only in case I.
Example
The above may be clearer if instead of using arbitrary constants r1 ,
P1 , α12 , r2 , P2 , α21 , we let the constants have definite numerical
values:
r1 = 0.1
P1 = 10000
α12 = 1
r2 = 0.2
P2 = 15000
α21 = 2.
The condition defining the case being considered here, case II, is
P2 > P1 /α12 and P1 > P2 /α21 , which is clearly satisfied. The fixed
points, and the values of the Jacobian at the fixed points, are:
1. (0, 0). From (23),

J = [ 0.1    0  ].
    [  0    0.2 ]

Clearly this is an unstable node.
2. (0, P2) = (0, 15000). From (24),

J = [ 0.1(1 − 1.5)    0  ]  =  [ −0.05    0  ].
    [ −0.2 × 2      −0.2 ]     [ −0.4   −0.2 ]

The eigenvalues are −0.05 and −0.2, so this is a stable node.
3. (P1, 0). From (25),

J = [ −0.1    −0.1    ].
    [   0    −0.0666  ]

Thus ∆ = 0.00666, σ = −0.1666; thus σ² > 4∆ and the fixed
point is a stable node.
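The isocline equations (i)–(iv) are not reproduced in this chunk; the simulation below assumes the standard competition form dx/dt = r1x(1 − (x + α12y)/P1), dy/dt = r2y(1 − (y + α21x)/P2), which is consistent with the Jacobians (23)–(25). Starting in the basin of (P1, 0), species 2 dies out:

```python
# Euler integration of the case II competition model with the numerical
# constants above (step size and horizon chosen ad hoc).
r1, P1, a12 = 0.1, 10000.0, 1.0
r2, P2, a21 = 0.2, 15000.0, 2.0

x, y, dt = 8000.0, 1000.0, 0.05        # a start in the basin of (P1, 0)
for _ in range(40_000):                # integrate to t = 2000
    dx = r1 * x * (1 - (x + a12 * y) / P1)
    dy = r2 * y * (1 - (y + a21 * x) / P2)
    x, y = x + dt * dx, y + dt * dy
# the trajectory runs to the stable node (P1, 0) = (10000, 0)
```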
For discrete systems we analyse the behaviour near the fixed points,
but it is not possible to apply the other phase plane techniques
that were described for continuous systems. The fixed points are
obtained by solving simultaneously:
x = 0, y = 0;
x = 0, y = P;
x = (r/c)(1 − a/(bP)),  y = a/b.     (30)

The Jacobian is

J = [ 1 − a + by     bx                 ].     (31)
    [ −cy            1 + r − 2ry/P − cx ]
At this stage we should say a little about the values of the constants
a, b, c, r, P . They are all > 0, and the conditions that we will
impose are
1. bP > a (32)
What does this mean? If the prey population is at its (logis-
tic) equilibrium value P , and a small number of predators are
introduced, then the predators would have enough to eat and
their population would grow. It is clear that if this condition
is not satisfied then there is no possibility of a stable situation
involving both predators and prey.
2. r < 2 (33)
This is required so that the model consisting of prey only is
stable at y = P.
(i) x = 0, y = 0:

J = [ 1 − a    0    ].     (34)
    [   0    1 + r  ]

The eigenvalues are 1 − a and 1 + r, so it is unstable and anal-
ogous to a saddle point.
(ii) x = 0, y = P:

J = [ 1 − a + bP    0    ].     (35)
    [    −cP      1 − r  ]
Since bP > a, the diagonal elements of J, and therefore in this
case its eigenvalues, are λ1 = 1−a+bP > 1 and λ2 = 1−r with
|λ2 | = |1 − r| < 1. Thus the singular point is again unstable
and analogous to a saddle point.
(iii) x = (r/c)(1 − a/(bP)), y = a/b:

J = [  1        b(r/c)(1 − a/(bP)) ].     (36)
    [ −ca/b     1 − ra/(bP)        ]
It can be shown that both eigenvalues of J are less than 1 in
modulus, although the algebra is rather intricate. For simplicity, let

d = ra/(bP), and note that 0 < d < 2;
e = ar(1 − a/(bP)), and note that e > 0;

so that

∆ = 1 − d + e,
σ = 2 − d.
bP < a + 4.
a < bP < a + 4
Example
We let the constants a, b, c, P , r have definite numerical values:
a = 0.5
b = 0.001
c = 0.01
P = 1000
r = 0.5.
The conditions (32) and (33) are clearly satisfied. The fixed points
and corresponding Jacobians are:
1. (0, 0),  J = [ 0.5    0  ].
                [  0    1.5 ]
The eigenvalues are 0.5 and 1.5, so the fixed point has one
eigenvalue greater than 1, and is therefore unstable and anal-
ogous to a saddle point.
2. (0, P),  J = [ 1.5    0  ].
                [ −10   0.5 ]
The eigenvalues are 1.5 and 0.5, so the fixed point is unstable
and analogous to a saddle point.
3. (25, 500),  J = [  1    0.025 ].
                   [ −5    0.75  ]
Thus the eigenvalue equation is:
λ2 − 1.75λ + 0.875 = 0
λ = 0.875 ± 0.331i.
Thus |λ| = √(0.875² + 0.331²) ≈ 0.935; this is less than 1 so the
fixed point is stable, and analogous to a stable focus because
of the imaginary part in λ.
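A numerical check of this eigenvalue computation (note that |λ| here equals √(det J)):

```python
# Eigenvalues of the Jacobian at the fixed point (25, 500).
import numpy as np

J = np.array([[1.0, 0.025], [-5.0, 0.75]])
lam = np.linalg.eigvals(J)       # 0.875 +/- 0.331i
mod = abs(lam[0])                # sqrt(0.875) ~ 0.935 < 1: stable focus
```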
dy/dt = βxy − γy.     (38)
When y = 0, this system reduces to the logistic equation. For
simplicity, we will scale the population to have a value of 1 at the
logistic population value x = P = α/δ; this implies α = δ. Thus
x and y are given as fractions of P . To translate into numbers of
people, we would multiply by P . For example P ≈ 50 million for
South Africa, and P ≈ 6 × 109 for the world. Then the system is
dx/dt = αx − α(x + y)² − βxy     (39)
dy/dt = βxy − γy.
In some places the algebra can be rather heavy, and we will some-
times use numerical values for
the constants α, β, γ:
α = 0.02
β = 0.5
γ = 0.2.
1. (0, 0),  J = [ α    0  ].
                [ 0   −γ  ]

It is clearly a saddle point.

2. (1, 0),  J = [ −α   −2α − β ].
                [  0    β − γ  ]
If β < γ then J has two negative eigenvalues and the fixed
point is a stable node. If β > γ, which is the case considered
here, then ∆ < 0 and the fixed point is a saddle point.
Discussion
The values of α, β, γ have been chosen as follows. The con-
stant α = 0.02 represents a population that grows at 2% per
year, if resources are available. Putting β = 0.5 means that
the number of infected people increases by about 50% every
year. If γ = 0.2 then about 20% of infected people die every
year. In this case the stable value of the population is reduced
to (x∗ + y∗ ), i.e. 0.437 of what it could be in the absence of
AIDS. The crucial factor is γ/β, and the goal of public health
policy must be to increase γ/β. According to this model if
public education etc. can decrease β so that γ/β > 1, then
the AIDS epidemic will fade away. This can be understood as
saying that if the average infective, before he dies, passes the
disease on to less than, on average, one other person, then the
number of infectives will diminish to zero.
Of course this model is rather crude, but it does show the
power of applied dynamical systems theory to make predic-
tions about important matters in everyday life.
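The discussion can be checked numerically. The fixed-point values below follow from the system (39) exactly as printed (x∗ = γ/β, with y∗ obtained from a quadratic), so they may differ in the last digits from the figures quoted above:

```python
# Interior fixed point of the scaled AIDS model (39) and its stability.
import numpy as np

al, be, ga = 0.02, 0.5, 0.2           # alpha, beta, gamma

xs = ga / be                          # x* = gamma/beta = 0.4
# s = x* + y* solves al*s**2 + be*xs*s - al*xs - be*xs**2 = 0
s = (-be*xs + np.sqrt((be*xs)**2 + 4*al*(al*xs + be*xs**2))) / (2*al)
ys = s - xs

def f(x, y):
    return np.array([al*x - al*(x + y)**2 - be*x*y,
                     be*x*y - ga*y])

h = 1e-7                              # numerical Jacobian at (x*, y*)
J = np.column_stack([(f(xs + h, ys) - f(xs - h, ys)) / (2*h),
                     (f(xs, ys + h) - f(xs, ys - h)) / (2*h)])
eigs = np.linalg.eigvals(J)           # real parts negative: stable
```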
(3) V(x) has a unique minimum at x∗, i.e. V(x∗) < V(x) for all
x ∈ Ω, x ≠ x∗.
Example
Consider the system
x1(k + 1) = x2(k) / (1 + x1(k)² + x2(k)²)
x2(k + 1) = x1(k) / (1 + x1(k)² + x2(k)²)     (42)

and take V(x) = x1² + x2². Then

V(x(k + 1)) = (x2(k)² + x1(k)²) / (1 + x1(k)² + x2(k)²)²
= V(x(k)) / (1 + x1(k)² + x2(k)²)² ≤ V(x(k)).     (44)
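Iterating the map confirms that V decreases monotonically along trajectories:

```python
# Iterating the system (42) and evaluating V(x) = x1**2 + x2**2.
def step(x1, x2):
    d = 1 + x1**2 + x2**2
    return x2 / d, x1 / d

x1, x2 = 3.0, -2.0                 # an arbitrary starting state
values = [x1**2 + x2**2]
for _ in range(20):
    x1, x2 = step(x1, x2)
    values.append(x1**2 + x2**2)
# values is strictly decreasing and the state tends to the origin
```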
V̇(x) ≡ d/dt (V(x(t))) ≤ 0     (46)

with equality if and only if x = x∗.
By the chain rule, this is
d/dt (V(x(t))) = (∂V/∂x1)(dx1/dt) + (∂V/∂x2)(dx2/dt) + ··· + (∂V/∂xn)(dxn/dt) ≤ 0.     (47)
Now apply the system equation (45):
d/dt (V(x(t))) = (∂V/∂x1)g1(x(t)) + (∂V/∂x2)g2(x(t)) + ··· + (∂V/∂xn)gn(x(t)) ≤ 0.     (48)
[Note: For those of you also taking MAT215 = APM212, this is
more conveniently written as
d/dt (V(x(t))) = (∇V) · g ≤ 0     (49)
dt
(ii) x∗ ∈ Ω;
(v) V satisfies (48) [or equivalently (49)] with equality if and only
if x = x∗ .
lim_{t→∞} x(t) = x∗.
Note that condition (iv) above was not used explicitly, but it is nec-
essary to ensure that a trajectory that starts in Ω cannot leave Ω.
Note also that condition (v) can be weakened to V̇ ≤ 0 with equality
only at x = x∗ and at isolated points on any trajectory; the proof
is somewhat intricate and is omitted.
∆V (x) ≤ 0 (52)
(ii) x∗ ∈ Ω;
lim x(n) = x∗
n→∞
Proof. Omitted. The basic idea is the same as for the continuous
case, but because a trajectory can jump out of Ω the proof of the
theorem is rather complicated.
Liapunov theory can also be used to show that a fixed point is unsta-
ble. In the above theorems, if the condition (v) V̇ ≤ 0 (continuous
case) or (iv) ∆V ≤ 0 (discrete case) is replaced by (v) V̇ ≥ 0 (con-
tinuous case) or (iv) ∆V ≥ 0 (discrete case), then the fixed point
x = x∗ is unstable.
Examples
J = [ 0  1 ].     (53)
    [ 1  0 ]
V̇ = 2x(−x³ − xy² + y) + 2y(−y³ − x²y − x)
  = −2x⁴ − 2x²y² + 2xy − 2y⁴ − 2x²y² − 2xy
  = −2(x⁴ + 2x²y² + y⁴)
  = −2(x² + y²)².     (56)

Since −2(x² + y²)² ≤ 0 with equality only at x = y = 0,
the condition (v) is satisfied. Thus the fixed point (0, 0) is
asymptotically stable. In this case Ω is the whole space, so all
trajectories tend towards (0, 0). Note that linear analysis is of
no help. At (0, 0):
J = [  0   1 ]     (57)
    [ −1   0 ]
Let V(x) = x1² + x2². Then V̇(x) = 2x1x2 + 2x2(−x1) = 0. Thus V
is a constant of the motion and the system trajectories are circles
centred at the origin.
k = D / √((xd − xr)² + yd²)     (67)

ẋd = −(xd − xr) D / √((xd − xr)² + yd²)
ẏd = −yd D / √((xd − xr)² + yd²).     (68)
x = xd − xr ,  y = yd ,     (69)

then

ẋd = ẋ + R,  ẏd = ẏ     (70)

and

ẋ = −xD/√(x² + y²) − R,  ẏ = −yD/√(x² + y²).     (71)
We examine the system (71). Will the dog catch the rabbit?
This is equivalent to asking whether a trajectory of (71) with
arbitrary initial condition x(0), y(0) will eventually go to the
origin, where the relative coordinates are zero. Since (71)
Now

V̇(x, y) = d/dt V(x(t), y(t))
        = 2x(−xD/√(x² + y²) − R) + 2y(−yD/√(x² + y²))
        = −2D√(x² + y²) − 2Rx.     (73)
(Note that conditions (i) to (iv) are also satisfied. Thus V (x, y)
is a Liapunov function.)
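Simulating (71) with illustrative values D = 2, R = 1 (any D > R will do; the initial position is arbitrary) shows the relative position reaching the origin, i.e. the dog catching the rabbit:

```python
# Euler simulation of the pursuit system (71) in relative coordinates.
import math

D, R = 2.0, 1.0                  # dog faster than rabbit (assumed values)
x, y = 10.0, 5.0                 # arbitrary initial relative position
t, dt = 0.0, 1e-3
while math.hypot(x, y) > 0.05 and t < 100.0:
    r = math.hypot(x, y)
    x += dt * (-x * D / r - R)
    y += dt * (-y * D / r)
    t += dt
caught = math.hypot(x, y) <= 0.05
# the distance shrinks at a rate of at least D - R, so capture occurs
# within roughly r(0)/(D - R) time units
```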
Attractors
So far we have talked about equilibrium and stability in terms of
one isolated point. While this is (usually) the case for linear sys-
tems, non-linearity permits more interesting possibilities. We start
by generalising the concept of an equilibrium point to that of an
invariant set:
Theorem Consider the dynamical system (50) [or (45)]. Let V (x)
be a scalar function with continuous first partial derivatives and let
s > 0. Let Ωs denote the region where V (x) < s. Assume that Ωs is
bounded and that ∆V (x) ≤ 0 [or V̇ (x) ≤ 0 in continuous time]. Let
S be the set of points within Ωs where ∆V (x) = 0 [or V̇ (x) = 0],
and let G be the largest invariant set within S, i.e. the union of all
the invariant sets within S. Then every trajectory in Ωs tends to G
as time increases.
Proof Omitted.
ẋ = y + x[1 − x² − y²],  ẏ = −x + y[1 − x² − y²].
Define
V(x, y) = (1 − x² − y²)².     (77)
Then

V̇ = 2(1 − x² − y²)(−2xẋ − 2yẏ)
  = −4(x² + y²)(1 − x² − y²)² ≤ 0,

with equality only at the origin and on the circle x² + y² = 1.
Every bounded trajectory therefore tends to the largest invariant
set where V̇ = 0, which (apart from the fixed point at the origin)
is the unit circle: the circle is an attractor.
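Integrating the system from a point inside the unit circle shows the trajectory being drawn onto the invariant set x² + y² = 1:

```python
# Euler integration of xdot = y + x(1 - x^2 - y^2),
# ydot = -x + y(1 - x^2 - y^2), starting inside the unit circle.
import math

x, y, dt = 0.1, 0.0, 1e-3
for _ in range(20_000):            # integrate to t = 20
    f = 1 - x*x - y*y
    x, y = x + dt * (y + x * f), y + dt * (-x + y * f)
r = math.hypot(x, y)               # ~ 1: the unit circle is the attractor
```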
Similarly,

d = g(c) = a1 + (c − a2)g′(a2).     (83)

Substituting for (c − a2) from (82) into (83) gives

d = a1 + (b − a1)g′(a1)g′(a2),

∴ (d − a1)/(b − a1) = g′(a1) g′(a2).     (84)

Thus the 2-cycle is an attractor if

|g′(a1) g′(a2)| < 1.
We will show algebraically that the system has a 2-cycle, and that
it is an attractor. Having a cycle (a1 , a2 ) means:
We therefore solve
0 = g(g(x)) − x (88)
with

g(x) = 3.2x − 2.2x².     (89)

That is,

0 = g(3.2x − 2.2x²) − x,
∴ 0 = 3.2(3.2x − 2.2x²) − 2.2(3.2x − 2.2x²)² − x.     (90)
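Instead of solving the quartic (90) by hand, the 2-cycle can be located by iteration, and the attractor condition |g′(a1)g′(a2)| < 1 verified:

```python
# Locating the 2-cycle of g(x) = 3.2x - 2.2x**2 by iteration.
def g(x):
    return 3.2 * x - 2.2 * x**2

def gprime(x):
    return 3.2 - 4.4 * x

x = 0.5
for _ in range(1000):              # settle onto the attracting cycle
    x = g(x)
a1, a2 = x, g(x)                   # the two points of the 2-cycle

multiplier = gprime(a1) * gprime(a2)   # ~ 0.16, inside the unit interval
```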
Exercises
2. For the AIDS epidemic model consider the case γ/β > 1. Find
the nature of the singular points and sketch the phase plane
diagram with the trajectories.
3. Classify (if possible) each critical point of the given plane au-
tonomous system as a stable node, an unstable node, a stable
spiral point, an unstable spiral point, or a saddle point.
(a) ẋ = 1 − 2xy
ẏ = 2xy − y
(b) ẋ = y − x² + 2
ẏ = x² − xy

(c) ẋ = −3x + y² + 2
ẏ = x² − y²

(d) ẋ = −2xy
ẏ = y − x + xy − y³

(e) ẋ = x² − y² − 1
ẏ = 2y

(f) ẋ = 2x − y²
ẏ = −y + xy

(g) ẋ = xy − 3y − 4
ẏ = y² − x²

(h) ẋ = x(1 − x² − 2y²)
ẏ = y(3 − x² − 3y²)
(i) ẋ = −2x + y + 10
ẏ = 2x − y − 15y/(y + 5).
(a) Find the equilibrium vectors (or points) for this system,
and for each equilibrium vector, determine if it is asymp-
totically stable or unstable.
(b) Pick a point (a(0), b(0)) close to each equilibrium vector
and compute (a(1), b(1)), (a(2), b(2)) and (a(3), b(3))
using this nonlinear system to see if (a(k), b(k)) seems to
agree with your results in part (a).
ẍ + [x2 − 1]ẋ + x = 0
ẋ = −ax + bxy − ε1x
ẏ = −cxy + dy − ε2y,
13. Use phase plane analysis to analyze the solutions to the dy-
namical system
a(n + 1) = (2/3)[2 − a(n) − b(n)]a(n)
b(n + 1) = (2/3)[2 − b(n) − a(n)]b(n)
in the first quadrant.
14. Use phase plane analysis to analyze the solutions to the dy-
namical system
and the function V(x) = x1² + x2². Show that the system is stable
for a = 0 and unstable for a ≠ 0. Thus, the transition from
(asymptotic) stability to instability does not pass through a
point of marginal stability.
16. Prove that the origin is asymptotically stable for each of the
systems below using Liapunov’s direct method. [In parts (a)
and (b) find a suitable Liapunov function. In part (c) try the
suggested function.]
(a) ẋ = y
ẏ = −x³

(b) ẋ = −x³ − y²
ẏ = xy − y³

(c) ẋ = y(1 − x)
ẏ = −x(1 − y)
V = −x − log(1 − x) − y − log(1 − y)
ẍ + [x2 − 1]ẋ + x = 0
ẋ1 = x2
ẋ2 = x3
ẋ3 = −(x1 + cx2 )3 − bx3
m d²θ/dt² = ω²m sin θ cos θ − mg sin θ − β dθ/dt.
ẋ = yx² − x
ẏ = −x³ − y
dI1/dt = (0.5 − 0.01I2 − 0.0005I1)I1
dI2/dt = (−0.5 + 0.001I1)I2.
(b) The farmer now decides to try to reduce the pest population
I1 by spraying his crops with insecticide. The effect of this is
to amend the dynamical system equations to:
dI1/dt = ((0.5 − k) − 0.01I2 − 0.0005I1)I1
dI2/dt = ((−0.5 − k) + 0.001I1)I2.
a1 = g(a2 ), a2 = g(a1 ),
dx/dt = 0.1x(1 − x/10000 − 0.1y/10000)
dy/dt = 0.2y(1 − y/15000 − 0.2x/15000)
33. Use V(x1, x2) = (x1/a)² + (x2/b)² to show that the system
is unstable by using
36. Show that the fixed point at the origin of the system
CHAPTER 5
ADVANCED TOPICS
• bifurcation
• chaos
• fractals.
Bifurcation theory
We often think that the world is continuous in the sense that a small
change in input causes a small change in the output. But this is not
always the case, as exemplified by the phrase “the straw that broke
the camel’s back”. Bifurcation theory is the study of that breaking
point, that is, it is the study of the point at which the qualitative
behaviour of a system changes.
Example
Consider
Consider

dx/dt = g(x) = (1 + x)(1 − a + x²).     (1)

First, we find all the fixed points:

x∗ = −1 and, for a ≥ 1, x∗ = ±√(a − 1).     (2)
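The fixed points and their stability can be tabulated with sympy for sample values of a on either side of the bifurcation at a = 1:

```python
# Fixed points of dx/dt = (1 + x)(1 - a + x**2) and the sign of g'.
import sympy as sp

x = sp.symbols('x', real=True)
a = sp.symbols('a')
g = (1 + x) * (1 - a + x**2)
gp = sp.diff(g, x)

def analyse(aval):
    pts = sorted(sp.solve(g.subs(a, aval), x))
    return [(float(p), 'stable' if gp.subs({a: aval, x: p}) < 0 else 'unstable')
            for p in pts]

low = analyse(sp.Rational(1, 2))   # a < 1: only x* = -1
high = analyse(3)                  # a > 1: x* = -1 and x* = +/- sqrt(a - 1)
```

At a = 1 the pair ±√(a − 1) is born; that is the bifurcation point.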
Chaos
From a mathematical point of view there are three ingredients nec-
essary for a dynamical system to exhibit chaotic behaviour. We
will not give formal definitions of these ingredients, but the essen-
tial idea is to have a combination of attraction and repulsion. More
precisely we need:
ẋ = σ(y − x)
ẏ = rx − y − xz
ż = xy − bz. (9)
the discs a number of times and then “flips” over to the other
disc. After a few loops round this second disc, it flips back to
the original disc. This pattern continues, with an apparently
random number of circuits before leaving each disc.
The above figure shows a view of the Lorenz attractor for σ = 10,
b = 8/3, r = 28. Note the spiralling round the two discs and the
“jumps” from one disc to the other. Points that are initially close
together soon have completely different patterns of residence in the
two discs of the attractor. Thus the system exhibits sensitive de-
pendence on initial values, and is chaotic.
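Sensitive dependence is easy to demonstrate: integrate two copies of (9) from initial points 10⁻⁹ apart (crude Euler stepping is enough for this qualitative experiment):

```python
# Two Lorenz trajectories (sigma = 10, b = 8/3, r = 28) started 1e-9 apart.
def step(s, dt=1e-3, sigma=10.0, b=8.0/3.0, r=28.0):
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (r * x - y - x * z),
            z + dt * (x * y - b * z))

s1 = (1.0, 1.0, 1.0)
s2 = (1.0 + 1e-9, 1.0, 1.0)
maxgap = 0.0
for _ in range(40_000):                  # integrate to t = 40
    s1, s2 = step(s1), step(s2)
    gap = max(abs(u - v) for u, v in zip(s1, s2))
    maxgap = max(maxgap, gap)
# maxgap is of the order of the attractor size: the tiny initial
# difference has been amplified to a macroscopic one
```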
Fractals
Consider the dynamic system

z(n + 1) = z(n)² + c,     (12)

where z(n) and c are complex numbers.
(a) For fixed c, find the domain of stability, i.e. find those points
z(0) such that |z(n)| is bounded for all n > 0. The boundary
of this domain of stability is called the Julia set. If c = 0,
the Julia set is the unit circle |z| = 1, and for small c one
obtains a closed curve that is a slightly distorted version of
this circle.
(b) For fixed z(0) (usually taken as z(0) = 0), find those values
of c for which |z(n)| is bounded for all n > 0. This is the
Mandelbrot set M . It is a bifurcation diagram in the sense
that as you cross the boundary of M you go from bounded to
unbounded solutions, or vice versa.
Often pictures of the Mandelbrot set, and of Julia sets, are
given in colour. From their popularity it is clear that these
pictures are having an influence on modern art and design.
The definition of the Julia and Mandelbrot sets is a yes/no
issue — either a point is in the set, or it isn’t. So how is colour
introduced into what is essentially a black/white picture? In
practice diagrams of the Julia and Mandelbrot sets are calcu-
lated by computer. Starting with z(0), we apply (12) repeat-
edly to find z(1), z(2), ..., z(N ), where N is some (previously
decided) maximum. If |z(n)| ≤ a, n = 0, ..., N , where a is
a constant usually taken as 2, then the trajectory starting at
z(0) is regarded as bounded, so that c ∈ M , or z(0) ∈ interior
of the Julia set. Otherwise the trajectory may be regarded as
escaping to infinity. In this case, let k be the first iterate such
that |z(k)| > a. The larger the value of k, the closer z(0) is to
the boundary. It is this value of k that is used to determine the
colour of the point.
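A minimal sketch of this escape-time procedure, with N and the escape radius a = 2 as described above:

```python
# Escape-time count for the Mandelbrot iteration z -> z**2 + c, z(0) = 0.
def escape_time(c, N=100, a=2.0):
    z = 0j
    for k in range(1, N + 1):
        z = z * z + c
        if abs(z) > a:
            return k        # first iterate with |z(k)| > a
    return N                # never escaped: c is treated as being in M

inside = escape_time(0j)              # orbit stays at 0
outside = escape_time(1 + 1j)         # escapes almost immediately
slow = escape_time(-0.75 + 0.05j)     # near the boundary: large k
```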
Exercises
For b > 0 find the fixed points of the system, and then draw
the bifurcation diagram. What limit is imposed on the amount
of harvesting?
(b) State the dynamic equation that is used for generating the
Mandelbrot set, and then define the Mandelbrot set.
APPENDIX
1. Notation
If A is an n × p matrix and B is an n × q matrix then [A, B] denotes
the n × (p + q) matrix obtained by placing B alongside and to the
right of A. For example, if
A = [ 1  3 ]
    [ 5  4 ]

and

B = [ 4 ]
    [ 2 ]

then

[A, B] = [ 1  3  4 ].
         [ 5  4  2 ]
This notation is extended in an obvious way to more than two ma-
trices.
2. Rank of a Matrix
The set of all column m—vectors
[ k1 ]
[ k2 ]
[ k3 ]
[ ⋮  ]
[ km ]
is denoted by Rm . (The vectors can also be written in row form, but
in this chapter we often think of the columns of a matrix as vectors,
so for us it is more convenient to use column vectors.) As explained
in module MAT103, we can multiply these vectors by scalars and
add them, so forming linear combinations of vectors. Thus if
x1, x2, x3, ..., xn

are scalars and a1, a2, a3, ..., an are vectors in Rm, we can form
the linear combination

x1 a1 + x2 a2 + x3 a3 + ... + xn an .

The vectors a1, ..., an are linearly dependent if there exist scalars
x1, ..., xn, not all zero, such that

x1 a1 + x2 a2 + x3 a3 + ... + xn an = 0.
For example, the reduced row echelon form of

A = [ 1  2 ]
    [ 4  8 ]

is

[ 1  2 ],
[ 0  0 ]

so the rank of A is 1.
e^A = I + A + A²/2! + A³/3! + A⁴/4! + ... + A^k/k! + ...
It can be shown that this series converges for any square matrix A.
If A is an n × n matrix then e^A will also be an n × n matrix. In
this module you will never have to calculate the value of e^A; all
you need to know is the above definition, and the following results:
d/dt e^{At} = d/dt (I + At + A²t²/2! + A³t³/3! + ...)
            = A + A²t + A³t²/2! + A⁴t³/3! + ...
            = A(I + At + A²t²/2! + ...)  or  (I + At + A²t²/2! + ...)A
            = Ae^{At}  or  e^{At}A.
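The result d/dt e^{At} = Ae^{At} = e^{At}A can be checked numerically against a truncated series, for a sample matrix chosen here purely for illustration:

```python
# Numerical check of d/dt e^{At} = A e^{At} = e^{At} A via the series.
import numpy as np

def expm_series(M, terms=30):
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k            # now holds M**k / k!
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t, h = 0.7, 1e-6

deriv = (expm_series(A*(t + h)) - expm_series(A*(t - h))) / (2*h)
left_ok = np.allclose(deriv, A @ expm_series(A*t), atol=1e-4)
right_ok = np.allclose(deriv, expm_series(A*t) @ A, atol=1e-4)
```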
or
e−At ẋ(t) − e−At Ax(t) = e−At Bu(t).
But
d/dt (e^{−At} x(t)) = e^{−At} ẋ(t) − e^{−At} A x(t)
so we get
d/dt (e^{−At} x(t)) = e^{−At} B u(t).
Therefore

e^{−At} x(t) = ∫_a^t e^{−As} B u(s) ds.