Lecture Notes Difdif
2013-2014
Contents

1 Difference equations
  1.1 Introduction
    1.1.1 Asymptotic behaviour and stability
    1.1.2 Exercises
  1.2 Autonomous first order difference equations
    1.2.1 Stability of periodic and equilibrium solutions of autonomous first order difference equations
    1.2.2 Bifurcations
    1.2.3 Exercises
  1.3 Linear difference equations
    1.3.1 Inhomogeneous linear difference equations
    1.3.2 First order linear difference equations
    1.3.3 Stability of solutions of linear difference equations
    1.3.4 Exercises
  1.4 Systems of first order difference equations
    1.4.1 Homogeneous systems of first order linear difference equations
    1.4.2 Inhomogeneous systems of first order linear difference equations
    1.4.3 Systems of first order linear difference equations with constant coefficients
    1.4.4 Stability of solutions of systems of linear difference equations with constant coefficients
    1.4.5 Exercises
  1.5 Autonomous systems of first order difference equations
    1.5.1 Exercises
  1.6
2 Differential equations
  2.1 First order scalar differential equations (supplement to S&B 24.2)
    2.1.1 Solving equations of type 1
    2.1.2 Separable equations (type 2)
    2.1.3 Homogeneous linear equations (type 3)
    2.1.4 Inhomogeneous linear equations (type 4)
    2.1.5 Exercises
  2.2 Systems of first order differential equations
    2.2.1 Systems of first order linear differential equations
    2.2.2 Homogeneous linear vectorial differential equations with constant coefficients
    2.2.3 Inhomogeneous linear vectorial differential equations with constant coefficients
    2.2.4 Exercises
  2.3 Stability of solutions of systems of first order differential equations
    2.3.1 Autonomous systems of nonlinear differential equations
    2.3.2 Exercises
Appendix
  A.1 Change of basis and Jordan normal form
  A.2 Computation of J^n
  A.3 Definition and properties of e^A
  A.4 Vector and matrix functions
The figures in these lecture notes were produced with the program Dynamics
of H. E. Nusse and J. A. Yorke.
Chapter 1
Difference equations
1.1 Introduction
or

D(p_{t+1}) - S(p_t) = 0

If the demand function D is invertible, with inverse D^{-1}, then p_{t+1} can be expressed explicitly in terms of p_t:

p_{t+1} = D^{-1}(S(p_t))

If the price p_0 at time t = 0 is known, the prices at subsequent times can be computed successively: p_1 = D^{-1}(S(p_0)), p_2 = D^{-1}(S(p_1)), etc. The simplest case is when demand and supply are linear, time-independent functions of p: D(p) = ap + b, S(p) = cp + d, where a < 0 and c > 0. In that case p satisfies the linear difference equation

p_{t+1} = (c/a) p_t + (d - b)/a
In mathematics, it is customary to place the independent variable in brackets after the dependent variable: p(t) instead of p_t.
Example 1.1.2. A neoclassical growth model in discrete time.

Y_t = Q(K_t, L_t)
K_{t+1} - K_t = sY_t
L_{t+1} - L_t = nL_t

Here, Y, K and L denote production, capital and labour, respectively. The rate of growth n of the labour force is a positive constant and the propensity to save s is a number between 0 and 1. Q is a production function, assumed homogeneous of degree 1. We can write the model in per capita form by defining k := K/L (capital per worker), and

q(k) := Q(K, L)/L = Q(K/L, 1) = Q(k, 1)

From the above equations we infer that k must satisfy the first order difference equation

k_{t+1} = s/(1 + n) q(k_t) + 1/(1 + n) k_t    (1.1)
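Equation (1.1) is easy to explore numerically. The sketch below iterates it for an illustrative concave choice of q and illustrative values of s and n; none of these numerical choices come from the text.

```python
# Minimal numerical sketch of equation (1.1):
#   k_{t+1} = s/(1+n) * q(k_t) + 1/(1+n) * k_t
# q(k) = sqrt(k), s = 0.2 and n = 0.02 are illustrative assumptions.
def solow_step(k, s=0.2, n=0.02, q=lambda k: k ** 0.5):
    """One step of the per-capita capital accumulation equation (1.1)."""
    return (s * q(k) + k) / (1 + n)

def iterate(k0, steps, step=solow_step):
    """Return the orbit k_0, k_1, ..., k_steps."""
    ks = [k0]
    for _ in range(steps):
        ks.append(step(ks[-1]))
    return ks

orbit = iterate(1.0, 200)
# For these parameters the equilibrium solves n*k = s*sqrt(k), i.e. k = (s/n)**2 = 100,
# and the orbit increases monotonically towards it.
```

The monotone approach of the orbit to the equilibrium is exactly the qualitative behaviour discussed for equation (1.1) later in the chapter.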
Technical progress can be introduced into the model by replacing the first equation with

Y_t = Q(K_t, E_t L_t)    (1.2)

Here, E is a parameter, representing the increase in labour efficiency due to technological improvement. E is an increasing function of t, and E_0 = 1. Defining k := K/(EL) and

q(k) = Q(K, EL)/(EL) = Q(K/(EL), 1)

we find, if E_{t+1}/E_t is constant and equal to 1 + g, that

(E_{t+1}/E_t) k_{t+1} = s/(1 + n) q(k_t) + 1/(1 + n) k_t

and hence

k_{t+1} = s/((1 + n)(1 + g)) q(k_t) + 1/((1 + n)(1 + g)) k_t    (1.3)
With t = x/h, j = 1, ..., k, and y(x) mapped to u(t) := y(x) = y(th), the equation takes the form

(u(t + 1) - u(t))/h = 3    (1.5)
The difference quotient on the left-hand side of (1.5) can be used to approximate dy/dx. More generally, solutions of the difference equation

(y(x + h) - y(x))/h = f(x, y(x))

are used to approximate solutions of the differential equation

dy/dx = f(x, y(x))

The simplest numerical method to compute solutions of differential equations, Euler's method, is based on this idea.
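As a minimal sketch of this idea, here is Euler's method applied to a test problem of our own choosing, dy/dx = y with y(0) = 1, whose exact solution is e^x; the step size is also an illustrative choice.

```python
# Euler's method: replace dy/dx = f(x, y) by the difference equation
#   (y(x + h) - y(x))/h = f(x, y(x))
# and iterate. The test problem dy/dx = y, y(0) = 1 is an illustrative choice.
def euler(f, x0, y0, h, steps):
    """Approximate the solution of dy/dx = f(x, y) on a grid with shift h."""
    xs, ys = [x0], [y0]
    for _ in range(steps):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
# ys[-1] approximates y(1) = e = 2.71828..., with an error of order h.
```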
From now on, we take the shift equal to 1 and work with a discrete variable n in Z. A solution of the difference equation

F(n, y(n), y(n + 1), ..., y(n + k)) = 0    (1.6)

is often written with its domain made explicit:

F(n, y(n), y(n + 1), ..., y(n + k)) = 0,  n in D    (1.7)

to indicate that the solutions we seek are defined on D. The following example demonstrates the importance of this addition.
Example 1.1.4. The set of all solutions of the difference equation

y(n + 1) - ny(n) = 0,  n in N    (1.8)

consists of all functions y on N with the property that y(n) = 0 for all n >= 1. y(0) can take on every real (or complex) value. However, the equation

y(n + 1) - ny(n) = 0,  n in {1, 2, ...}    (1.9)
n in N

The equilibrium values are

1/2 +/- (1/2)sqrt(1 + 4a)

if 1 + 4a >= 0, and

1/2 +/- (i/2)sqrt(-1 - 4a)

if 1 + 4a < 0.
If, in the above example, we restrict ourselves to real solutions, then the equation has no equilibrium solutions for a < -1/4, a single equilibrium solution y = 1/2 for a = -1/4 and two equilibrium solutions for a > -1/4. At the parameter value a = -1/4, an abrupt change in the properties of the equation occurs. Such qualitative changes, due to a change in a parameter, are called bifurcations. The value of the parameter(s) at which the bifurcation occurs is called the bifurcation value of that parameter. There are many different types of bifurcations, only a few of which will be reviewed here. The bifurcation in example 1.1.7 is a so-called saddle-node bifurcation. Bifurcations can be represented in a bifurcation diagram, where certain characteristic properties of the equation, such as the equilibrium values, are plotted versus a parameter.
1.1.1 Asymptotic behaviour and stability
Example 1.1.9. The null sequence is a globally asymptotically stable solution of the equation

y(n + 1) - 1/(n + 1) y(n) = 0,  n in N,

since, for every solution of this equation, the following relation holds:

y(n) = y(0)/n!

Hence lim_{n->inf} y(n) = 0. On the other hand, the null sequence is an unstable solution of

y(n + 1) - (n + 1)y(n) = 0,  n in N,

as, in this case, y(n) = n! y(0) for every solution of the equation, and thus lim_{n->inf} |y(n)| = inf when y(0) != 0. Finally, the null sequence is a neutrally stable solution of the equation

y(n + 1) - y(n) = 0,  n in N

(Why?).
Example 1.1.10. Newton's method is used to approximate zeroes of functions by means of an iterative procedure. The difference e_n between the exact value and the approximation after n iterations satisfies an equation of the form

e_{n+1} = phi(n) e_n^2,  n in N

The method converges if, for an appropriate choice of the starting value of the algorithm, i.e. for sufficiently small e_0, this difference tends to 0 as n -> inf. This happens precisely when the null solution of the above equation is asymptotically stable. Obviously, this will depend on the function phi. Let's consider the simple case that phi = 1 and replace e_n by y(n). Then the above equation reads:

y(n + 1) = y(n)^2,  n in N    (1.11)
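A small numerical illustration of the quadratic error decay described above; the function f(x) = x^2 - 2 (with zero sqrt(2)) and the starting value are illustrative choices, not taken from the text.

```python
import math

# Newton's method for f(x) = x**2 - 2. The error e_n = |x_n - sqrt(2)| roughly
# satisfies e_{n+1} ~ phi(n) * e_n**2, the quadratic behaviour of example 1.1.10.
def newton(f, fprime, x0, steps):
    """Iterate x <- x - f(x)/f'(x) and record the error at each step."""
    x = x0
    errors = []
    for _ in range(steps):
        x = x - f(x) / fprime(x)
        errors.append(abs(x - math.sqrt(2)))
    return x, errors

root, errs = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5, 5)
# The number of correct digits roughly doubles at each step.
```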
We claim that the null solution of (1.11) is asymptotically stable. The null solution is clearly stable, as |y(n+1)| = |y(n)|^2 <= |y(n)|, provided |y(n)| <= 1. Let eps > 0. An inductive argument shows that |y(n) - 0| <= eps for all n in N, if |y(0) - 0| <= delta := min{1, eps}. It remains to prove the existence of a positive number eta, such that lim_{n->inf} y(n) = 0 for every solution y of (1.11) with the property that |y(0)| <= eta. Let eta < 1. Another inductive argument shows that |y(n)| <= eta^{n+1} for all n in N, if |y(0)| <= eta. Suppose that |y(n)| <= eta^{n+1} for a certain n in N. Then

|y(n + 1)| = |y(n)|^2 <= eta^{2n+2} <= eta^{n+2}

So the inequality is valid for n + 1 as well, and thus it is valid for all n in N. As lim_{n->inf} eta^n = 0, this implies that lim_{n->inf} y(n) = 0.
The equation (1.11) has a second equilibrium solution, viz. the sequence y_1(n) = 1 for all n in N. This solution is unstable. This can be proved in the following way. Let eps > 0 and consider the solution y_eps with initial value y_eps(0) = 1 + eps. Thus, |y_eps(0) - y_1(0)| = eps. A simple inductive argument shows that

|y_eps(n)| >= (1 + eps)^{n+1}

for all n in N. Hence it follows that lim_{n->inf} y_eps(n) = inf. This implies that |y_eps(n) - y_1(n)| = |y_eps(n) - 1| can take on arbitrarily large values. Since this holds for any eps > 0, the condition for stability of the solution y_1 is not fulfilled. Note that the existence of a second equilibrium solution implies that the null solution is not globally asymptotically stable!
Sometimes, by a change of variables, a difference equation can be simplified, or reduced to an equation whose properties are already known. Thus, for example, the change of variables n = m + n_0, y(n) = y(m + n_0) = z(m), transforms the equation

F(n, y(n), ..., y(n + k)) = 0,  n in {n_0, n_0 + 1, ...}

into

G(m, z(m), ..., z(m + k)) := F(m + n_0, y(m + n_0), ..., y(m + n_0 + k)) = 0,  m in N,

which is of the same type as the original equation, but now the independent variable runs from 0 to inf. This is why, from now on, we will usually take n_0 = 0. Another useful transformation is the following. Suppose that a specific solution y* of equation (1.6) is known. Now substitute y = y* + z into the equation. Then y is a solution of (1.6) iff the new dependent variable z satisfies the equation

G(n, z(n), ..., z(n + k)) := F(n, z(n) + y*(n), ..., z(n + k) + y*(n + k)) = 0

A particular property of the new equation is that it possesses a null solution.
Lemma 1.1.11. The sequence y* is an asymptotically stable, neutrally stable, or unstable solution of the equation

y(n + k) = f(n, y(n), y(n + 1), ..., y(n + k - 1)),  n in N    (1.12)

iff the null sequence is, respectively, an asymptotically stable, neutrally stable, or unstable solution of the equation

z(n + k) = g(n, z(n), z(n + 1), ..., z(n + k - 1))
         := f(n, z(n) + y*(n), ..., z(n + k - 1) + y*(n + k - 1)) - y*(n + k),  n in N    (1.13)
1.1.2 Exercises
1. Show that the solutions of equation (1.11) do not form a linear space.

2. Find all real, periodic solutions of the equation

y(n + 1) = y(n)^2,  n in N

3. Show that all solutions of the equation

...,  n in N

are stable. Does this equation have any asymptotically stable solutions?

4. Determine the equilibrium solutions of the equation

y(n + 1) = mu y(n)(1 - y(n)),  n in N

where mu in R.

5. Determine all equilibrium solutions of the equation

y(n + 2) = y(n) + 2hy(n + 1)(1 - y(n + 1)),  n in N

where h in R.

6. Determine the equilibrium solutions of the equations (1.1) and (1.3) in example 1.1.2, when Q is a Cobb-Douglas function of the form Q(x, y) = x^alpha y^{1-alpha}, with 0 < alpha < 1.

7. Determine the equilibrium solutions of the equation

y(n + 1) = 2y(n)^2,  n in N
1.2 Autonomous first order difference equations

The most general first order difference equation of the type (1.10), with n_0 = 0, is

y(n + 1) = f(n, y(n)),  n in N

We restrict ourselves in this section to autonomous equations. These are of the form

y(n + 1) = f(y(n)),  n in N    (1.15)

(1.11) is an example of such an equation. Another example is the logistic difference equation:

y(n + 1) = mu y(n)(1 - y(n))    (1.16)
x_{1,2} = ((mu + 1) +/- sqrt((mu + 1)(mu - 3))) / (2 mu)

Check that f(x_1) = x_2 and f(x_2) = x_1. So, if mu > 3, (1.16) has two real, periodic solutions of minimal period 2, viz. the sequences x_1, x_2, x_1, x_2, ... and x_2, x_1, x_2, x_1, .... Both have the same orbit: gamma+(x_1) = gamma+(x_2) = {x_1, x_2}.
The following simple, graphical method serves to determine the solution of
an equation of the type (1.15), with a given, real, initial value y0 (cf. figure
1.3). In an X Y -coordinate system, draw the graph of the function f
and the line y = x. The first element of the sequence we are looking for
is y0 . The next element of the sequence, y(1), has the value f (y0 ), which
is found by intersecting the graph of f with the vertical line x = y0 . Now
draw a horizontal line through the point (y0 , f (y0 )) and determine the point
(y(1), y(1)), where it intersects the line y = x. The vertical line through
this point intersects the graph of f at (y(1), f (y(1))) = (y(1), y(2)), etc...
The sequence formed by the x-coordinates of the points of the graph of f ,
determined in this way, is the required solution of (1.15). This method can
also give us a first indication of the stability or instability of equilibrium
solutions of the equation. The equilibrium solutions themselves correspond
to the fixed points of f , i.e. the points where the graph of f intersects the
line y = x.
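The same orbit that the graphical method traces can of course be generated directly. A short sketch; the map (the logistic map with mu = 1/2, matching figure 1.4) and the initial value are illustrative choices.

```python
# Generate the orbit y0, f(y0), f(f(y0)), ... of the autonomous equation (1.15).
def orbit(f, y0, length):
    """Return the first `length` entries of the solution of y(n+1) = f(y(n))."""
    ys = [y0]
    for _ in range(length - 1):
        ys.append(f(ys[-1]))
    return ys

mu = 0.5                                  # illustrative parameter value
logistic = lambda y: mu * y * (1 - y)
ys = orbit(logistic, 0.3, 50)
# For mu = 1/2 and y(0) = 0.3 the orbit tends to the fixed point 0.
```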
A point x in D_f is called a positive limit point or omega-limit point of x_0, if there exists a subsequence of {f^n(x_0)}_{n=0}^inf converging to x. The positive limit set or omega-limit set omega(x_0) of x_0 is the set of all positive limit points of x_0.

Example 1.2.3. In the case that f(x) = x^2, x in R, omega(x_0) = {0} if x_0 in (-1, 1), omega(1) = omega(-1) = {1} and omega(x_0) is empty for all other (real) values of x_0.

If {f^n(x_0)}_{n=0}^inf is a periodic solution of (1.15), then omega(x_0) = gamma+(x_0). If lim_{n->inf} f^n(x_0) = c, then omega(x_0) = {c}, but the converse is not true. If omega(x_0) = {c} and gamma+(x_0) is unbounded, then lim_{n->inf} f^n(x_0) does not exist (cf. ex. 1.2.3; in such cases, it is sometimes said that inf is a positive limit point of x_0).
Figure 1.4: solutions of the logistic difference equation, with mu = 1/2, with different initial values, varying from 2.2 to 2.5. Observe the asymptotic behaviour of the different solutions. The basin of attraction of 0 is the interval (-1, 2) (on the y-axis).
1. y(0) >= 1/mu. This implies that mu y(0) >= 1 and 1 - y(0) <= 1 - 1/mu < 0, hence y(1) = mu y(0)(1 - y(0)) <= 1 - 1/mu. From here on, proceed as in case 2 below (starting from y(1) instead of y(0)).

2. y(0) <= 1 - 1/mu. As an inductive argument shows, this implies that y(n) <= 1 - 1/mu for all n in N. For, suppose that for some n in N, y(n) <= 1 - 1/mu (< 0); then we have mu(1 - y(n)) >= 1, so y(n + 1) = mu(1 - y(n))y(n) <= y(n) <= 1 - 1/mu.

From 1. and 2. it follows that none of the points examined so far belongs to the basin of attraction of 0, as in both cases y(n) <= 1 - 1/mu for all n >= 1, hence lim_{n->inf} y(n) != 0. Thus, the basin of attraction of 0 must be contained in the interval (1 - 1/mu, 1/mu).

3. 0 <= y(0) <= 1. If 0 <= y(n) <= 1 for some n in N, then 0 <= 1 - y(n) <= 1, and, consequently, 0 <= y(n + 1) = mu y(n)(1 - y(n)) <= mu y(n). With the principle of induction it follows that

0 <= y(n) <= mu^n y(0)

for all n in N. As 0 < mu < 1, lim_{n->inf} mu^n = 0. With the aid of the inclusion theorem we conclude that lim_{n->inf} y(n) = 0 for all sequences y such that 0 <= y(0) <= 1. This implies that the interval [0, 1] is contained in the basin of attraction of 0.

4. 1 < y(0) < 1/mu. Then 1 - 1/mu < 1 - y(0) < 0 and 0 < mu y(0) < 1, hence 1 - 1/mu < y(1) < 0. From here on, proceed as in case 5.

5. If 1 - 1/mu < y(0) < 0, an inductive argument shows that 1 - 1/mu < y(n) < y(n + 1) < 0 for all n in N. For, suppose that 1 - 1/mu < y(n) < 0 for some n in N. Then we have mu(1 - y(n)) < 1, hence y(n) < y(n + 1) = mu y(n)(1 - y(n)) < 0. The sequence y is monotone increasing and bounded above, hence it has a limit l in (1 - 1/mu, 0]. Letting n tend to inf on both sides of the equation

y(n + 1) = mu y(n)(1 - y(n))

we find that l = mu l(1 - l). In the interval (1 - 1/mu, 0] the latter equation has the unique solution l = 0.

From 3., 4. and 5. we infer that lim_{n->inf} y(n) = 0 for all sequences y with 1 - 1/mu < y(0) < 1/mu. Hence the interval (1 - 1/mu, 1/mu) is contained in the basin of attraction of 0. As we have also proved that the basin of attraction of 0 is contained in the interval (1 - 1/mu, 1/mu), it must be equal to this interval.
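The interval just found can be checked numerically. The helper below classifies initial values for mu = 1/2, whose basin of attraction of 0 should be (-1, 2); the iteration count, tolerance and divergence threshold are pragmatic choices of ours.

```python
# Numerical check of the basin of attraction (1 - 1/mu, 1/mu) for mu = 1/2,
# i.e. the interval (-1, 2). Thresholds are illustrative assumptions.
def tends_to_zero(y0, mu=0.5, steps=400, tol=1e-8):
    """Return True iff the logistic orbit started at y0 appears to tend to 0."""
    y = y0
    for _ in range(steps):
        y = mu * y * (1 - y)
        if abs(y) > 1e6:        # orbit diverged: certainly not in the basin of 0
            return False
    return abs(y) < tol
```

Initial values just inside (-1, 2) are driven to 0, while values just outside are not, in agreement with cases 1-5 above.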
1.2.1 Stability of periodic and equilibrium solutions of autonomous first order difference equations
The following theorem provides a test to determine, in many cases, but not
all, the stability of equilibrium solutions of (1.15).
Theorem 1.2.5. Suppose c in C is a fixed point of f and f is differentiable at c.
(i) If |f'(c)| < 1, then y = c is a (locally) asymptotically stable solution of (1.15).
(ii) If |f'(c)| > 1, then y = c is an unstable solution of (1.15).

Proof. (i) Suppose that |f'(c)| < 1. By the definition of derivative,

|lim_{x->c} (f(x) - f(c))/(x - c)| < 1

Let d in (|f'(c)|, 1) and eps = d - |f'(c)|. Then there exists a neighbourhood U of c, of the form |x - c| < delta, with delta > 0, such that, for all x in U \ {c},

| |(f(x) - f(c))/(x - c)| - |f'(c)| | <= |(f(x) - f(c))/(x - c) - f'(c)| < eps

Hence

|(f(x) - f(c))/(x - c)| < |f'(c)| + eps = d    (1.17)

for all x in U \ {c}. (In case (ii) one takes d in (1, |f'(c)|) and eps = |f'(c)| - d, and obtains |(f(x) - f(c))/(x - c)| > |f'(c)| - eps = d for all x in U \ {c}.)
Application of Theorem 1.2.8 reproduces the known fact that the null solution of the equation is asymptotically stable for mu in (0, 1) and unstable for mu > 1. Theorems 1.2.5 and 1.2.8 are based on a method that is frequently used in the study of nonlinear difference equations, known as linearization. It consists in approximating solutions of the nonlinear equation by solutions of a linear equation. In the case of Theorem 1.2.8 that linear equation is

y(n + 1) = ay(n)    (1.18)

The 2-periodic solutions are asymptotically stable when 3 < mu < 1 + sqrt(6) and unstable when mu > 1 + sqrt(6).
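The claim about the 2-periodic solutions can be checked numerically for a single parameter value. In the sketch below, mu = 3.2 (which lies in (3, 1 + sqrt(6))), the initial value and the iteration counts are all illustrative choices.

```python
import math

# For mu = 3.2 the orbit of the logistic map should settle on the 2-cycle
# x_{1,2} = ((mu+1) +/- sqrt((mu+1)(mu-3))) / (2 mu) found earlier.
def logistic_orbit(mu, y0, n):
    """Return the n-th iterate of y0 under the logistic map."""
    y = y0
    for _ in range(n):
        y = mu * y * (1 - y)
    return y

mu = 3.2
d = math.sqrt((mu + 1) * (mu - 3))
x1 = (mu + 1 - d) / (2 * mu)
x2 = (mu + 1 + d) / (2 * mu)

y_even = logistic_orbit(mu, 0.3, 2000)   # after an even number of steps
y_odd = logistic_orbit(mu, 0.3, 2001)    # one step further
# The two iterates alternate between the two points of the asymptotically
# stable 2-cycle.
```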
In the previous section we mentioned an asymptotically stable equilibrium
point as an example of an attractor. More generally, a periodic attractor
is an attractor consisting of a periodic orbit.
Lemma 1.2.15. If f is a continuous map, then the orbit of any asymptotically stable periodic solution of (1.15) is a (periodic) attractor.

Proof. We give the proof for the case of an asymptotically stable, 2-periodic solution. Let c be the initial value of the solution and A := gamma+(c) = {c, f(c)}. A being a finite set, it is closed, and it is also invariant, as f(A) = {f(c), f^2(c)} = {f(c), c} = A. Due to the asymptotic stability of the solution, there exists a neighbourhood U_1 of c, of the form |x - c| < delta_1, with delta_1 > 0, such that lim_{n->inf} (f^n(x) - f^n(c)) = 0 for all x in U_1. As f^n(c) in A for all n in N, this implies that the distance of f^n(x) to A tends to 0 for all x in U_1. It remains to prove the existence of a neighbourhood U_2 of f(c), of the form |x - f(c)| < delta_2, with delta_2 > 0, such that lim_{n->inf} (f^n(x) - f^n(f(c))) = 0 for all x in U_2. Due to the continuity of f at f(c), there exists a positive number delta_2, such that |f(x) - f(f(c))| < delta_1 for all x with the property that |x - f(c)| < delta_2. Consequently, f(x) in U_1 and thus lim_{n->inf} (f^n(f(x)) - f^n(c)) = lim_{n->inf} (f^{n+1}(x) - f^n(f^2(c))) = lim_{n->inf} (f^{n+1}(x) - f^{n+1}(f(c))) = 0 for these values of x. Hence there exists a neighbourhood U := U_1 union U_2 of A such that, for all x in U, the distance of f^n(x) to A tends to 0 as n -> inf.
We conclude this section with two examples of very simple economic models that give rise to nonlinear, autonomous difference equations.

Example 1.2.16. Inventory adjustment model
This is described by the equation

X_t - X_{t-1} = alpha X_{t-1}(K - X_{t-1})

or, equivalently,

X_{t+1} = (1 + alpha K)X_t - alpha X_t^2,

which is a variant of the logistic difference equation. Here, X_t denotes the inventory level in the period t, K is the desired or maximal stock and alpha is a positive constant.
Example 1.2.17. Cobweb model with adaptive expectations.
The cobweb model with adaptive expectations is described by the equations

q_t^d = D(p_t)
q_t^s = S(p_t^e)
q_t^d = q_t^s
p_t^e = p_{t-1}^e + w(p_{t-1} - p_{t-1}^e),  0 < w < 1

Here, q_t^d, q_t^s, p_t and p_t^e denote the demand, supply, price and expected price in the period t, respectively. We assume the demand to be a strictly decreasing function of the price, hence its inverse exists. Then p_t^e satisfies the autonomous, first order difference equation

p_{t+1}^e = (1 - w)p_t^e + wD^{-1}(S(p_t^e))    (1.20)
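A minimal simulation of (1.20). The linear demand and supply functions, D(p) = -2p + 10 and S(p) = p + 1, and the value w = 1/2 are illustrative choices of ours, not from the text.

```python
# One step of p^e_{t+1} = (1 - w) p^e_t + w D^{-1}(S(p^e_t)), equation (1.20),
# with the illustrative choices D(p) = -2p + 10 (so D^{-1}(q) = (10 - q)/2)
# and S(p) = p + 1.
def cobweb_step(pe, w=0.5):
    s = pe + 1.0                # S(p^e_t)
    d_inv = (10.0 - s) / 2.0    # D^{-1}(S(p^e_t))
    return (1 - w) * pe + w * d_inv

pe = 2.0
for _ in range(100):
    pe = cobweb_step(pe)
# The market equilibrium solves D(p) = S(p): -2p + 10 = p + 1, i.e. p = 3,
# and the expected price converges to it.
```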
1.2.2 Bifurcations
In this section we briefly discuss some examples of frequently occurring bifurcations. We already encountered one type of bifurcation in 1.2, in the 1-parameter family of autonomous, first order difference equations

y(n + 1) = y(n)^2 - a    (1.21)

where a is a real parameter. If a < -1/4, the equation has no real-valued equilibrium solutions. If a > -1/4, however, it has 2 real-valued equilibrium solutions.

y(n + 1) = ay(n) - y(n)^3    (1.22)

where a is a real parameter. If a < 1, 0 is the unique real fixed point of f(x) = ax - x^3. f'(0) = a, so the null solution of (1.22) is asymptotically stable when -1 < a < 1 and unstable when a > 1 (the case a = 1 was discussed in example 1.2.7). If a > 1, there are two additional real-valued equilibrium solutions: y = +/- sqrt(a - 1). From the fact that f'(+/- sqrt(a - 1)) = 3 - 2a we deduce that both solutions are asymptotically stable when 1 < a < 2. At a = 1 a so-called pitchfork bifurcation occurs (cf. fig. 1.6).
To illustrate a fourth type of bifurcation, we consider once more the logistic difference equation (cf. the examples 1.2.2 and 1.2.4). The function f(x) = mu x(1 - x) has two fixed points, 0 and 1 - 1/mu, when mu != 1. f'(0) = mu and f'(1 - 1/mu) = 2 - mu. According to Theorem 1.2.5, the null solution of (1.16) is asymptotically stable when 0 < mu < 1 and unstable when mu > 1. The second equilibrium solution is asymptotically stable when 1 < mu < 3 and unstable when mu > 3. When 3 < mu < 1 + sqrt(6), the equation has two asymptotically stable periodic solutions. At the parameter values mu = 1 and mu = 3, the properties of the equation change significantly; in both cases it undergoes a bifurcation. The bifurcation at mu = 1 is of the same type as in (1.21); this is a transcritical bifurcation. At mu = 3 a so-called period-doubling or flip bifurcation occurs: an asymptotically stable equilibrium solution gives way to two asymptotically stable 2-periodic solutions (having the same orbit).
The above examples all concern bifurcations of equilibrium solutions. Similar bifurcations occur in periodic solutions of period greater than 1. Bifurcations can be represented in a bifurcation diagram. Usually, the values of the asymptotically stable equilibrium (or periodic) solutions are plotted versus the parameter. Sometimes, unstable periodic solutions are indicated by means of dotted lines. In order to make bifurcation diagrams, like the ones shown in figs. 1.5 through 1.7, on a computer, one roughly proceeds as follows. For each value of the parameter, first, a certain number of values of the solution with a given initial value x_0 (the so-called pre-iterates) is computed. Next, a number of subsequent entries of the same sequence is plotted.
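The procedure just described can be sketched as follows. The parameter values, pre-iterate count, number of plotted iterates and rounding precision are all illustrative choices.

```python
# For one parameter value: discard `pre` pre-iterates so that transients die
# out, then record the next `keep` iterates (the points that would be plotted
# in the bifurcation diagram).
def bifurcation_points(mu, y0=0.3, pre=500, keep=64):
    y = y0
    for _ in range(pre):          # pre-iterates
        y = mu * y * (1 - y)
    pts = set()
    for _ in range(keep):         # entries that are actually plotted
        y = mu * y * (1 - y)
        pts.add(round(y, 6))
    return sorted(pts)

fixed = bifurcation_points(2.8)   # one point: the equilibrium 1 - 1/mu
cycle = bifurcation_points(3.2)   # two points: the 2-periodic orbit
```

Sweeping mu over a grid and plotting `bifurcation_points(mu)` against mu reproduces diagrams like figs. 1.5 through 1.7.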
Figure 1.8: The solutions of (2.2) with mu = 2.8, y(0) = 0.1 and y(0) = 0.2.
Figure 1.9: The solutions of (2.2) with mu = 3.5, y(0) = 0.1 and y(0) = 0.2. (Also compare this figure to Fig. 1.2.)
Figure 1.10: The solutions of (2.2) with mu = 4, y(0) = 0.1 and y(0) = 0.11.
1.2.3 Exercises

mu x/(1 + x^2),  x in R,  mu in R
1.3 Linear difference equations

Sum_{j=0}^{k} a_j sigma^j y = b    (1.25)

Ly(n) = Sum_{j=0}^{k} a_j(n)y(n + j) = 0,  n in N    (1.26)

y(n + k) + a_{k-1}(n)y(n + k - 1) + ... + a_0(n)y(n) = b(n),  n in N    (1.27)
In fact, this equation is a special case of a recurrence relation (to see this, take all terms except y(n + k) to the right-hand side). In particular, any solution is determined by its first k values y(0), ..., y(k - 1) (the initial values), which can be chosen arbitrarily. We begin by studying the homogeneous equation

y(n + k) + a_{k-1}(n)y(n + k - 1) + ... + a_0(n)y(n) = 0,  n in N    (1.28)

If we take the first k entries of the sequence to be zero, then all subsequent entries will be zero as well and we end up with the null solution.
Theorem 1.3.2. The (real- or complex-valued) solutions of the kth order
homogeneous linear difference equation (1.28) form a k-dimensional linear space (over R or C, respectively).
Proof. The k initial values y(0), ..., y(k - 1) form a k-dimensional vector. If l <= k, we can choose l linearly independent such k-dimensional vectors (y_1(0), ..., y_1(k-1)), ..., (y_l(0), ..., y_l(k-1)) and these will determine l solutions y_1, ..., y_l of (1.28). These solutions are linearly independent. For, suppose there exist numbers c_1, c_2, ..., c_l with the property that c_1 y_1(n) + c_2 y_2(n) + ... + c_l y_l(n) = 0 for all n in N; then, of course,

c_1 (y_1(0), ..., y_1(k-1))^T + c_2 (y_2(0), ..., y_2(k-1))^T + ... + c_l (y_l(0), ..., y_l(k-1))^T = (0, ..., 0)^T    (1.29)

The linear independence of the vectors on the left-hand side of (1.29) implies that c_1 = c_2 = ... = c_l = 0. Hence the equation (1.28) has at least k linearly independent solutions. Conversely, for any number l of linearly independent solutions y_1, ..., y_l of (1.28), the vectors (y_1(0), ..., y_1(k - 1)) through (y_l(0), ..., y_l(k - 1)) are linearly independent. For, suppose there exist numbers c_1, c_2, ..., c_l such that (1.29) holds. Define a sequence y by

y := c_1 y_1 + c_2 y_2 + ... + c_l y_l

Then y is a solution of (1.28) with the property that y(n) = 0 for 0 <= n <= k - 1 and thus y = 0, i.e.

c_1 y_1(n) + c_2 y_2(n) + ... + c_l y_l(n) = 0 for all n in N

The linear independence of the solutions y_1, ..., y_l now implies that c_1 = c_2 = ... = c_l = 0, and, consequently, the vectors (y_1(0), ..., y_1(k - 1)), ..., (y_l(0), ..., y_l(k - 1)) are linearly independent. Since there can be at most k linearly independent k-dimensional vectors, we conclude that the dimension of the solution space of (1.28) is at most k and thus is exactly k.
We have proved the following lemma in the process.
Lemma 1.3.3. Let l in N. l solutions y_1, ..., y_l of the homogeneous difference equation (1.28) are linearly independent (over R or C), if and only if the vectors (y_1(0), ..., y_1(k - 1)), ..., (y_l(0), ..., y_l(k - 1)) are linearly independent (over R or C, respectively).
| y_1(0)      y_2(0)      ...  y_k(0)      |
| y_1(1)      y_2(1)      ...  y_k(1)      |
| ...                                      |  != 0
| y_1(k - 1)  y_2(k - 1)  ...  y_k(k - 1)  |

The above determinant is called the Casorati determinant. For all nonnegative integers n, C_{y_1,...,y_k}(n) is defined by

                     | y_1(n)          y_2(n)          ...  y_k(n)          |
C_{y_1,...,y_k}(n) = | y_1(n + 1)      y_2(n + 1)      ...  y_k(n + 1)      |    (1.30)
                     | ...                                                  |
                     | y_1(n + k - 1)  y_2(n + k - 1)  ...  y_k(n + k - 1)  |
Proof. We begin by proving the lemma for the case that k = 2. For every nonnegative integer n we have

C_{y_1,y_2}(n + 1) = | y_1(n + 1)  y_2(n + 1) |
                     | y_1(n + 2)  y_2(n + 2) |

Furthermore,

y_j(n + 2) = -a_0(n)y_j(n) - a_1(n)y_j(n + 1),  j = 1, 2

so

C_{y_1,y_2}(n + 1) = | y_1(n + 1)                          y_2(n + 1)                          |
                     | -a_0(n)y_1(n) - a_1(n)y_1(n + 1)    -a_0(n)y_2(n) - a_1(n)y_2(n + 1)    |

The last row being the sum of the two row vectors (-a_0(n)y_1(n), -a_0(n)y_2(n)) and (-a_1(n)y_1(n + 1), -a_1(n)y_2(n + 1)), and the determinant of a matrix being a linear function of its row vectors, this is equal to

-a_0(n) | y_1(n + 1)  y_2(n + 1) |  -  a_1(n) | y_1(n + 1)  y_2(n + 1) |
        | y_1(n)      y_2(n)     |            | y_1(n + 1)  y_2(n + 1) |

The second term vanishes for all n. Interchanging rows in the first determinant, we find

C_{y_1,y_2}(n + 1) = a_0(n) | y_1(n)      y_2(n)     |  =  a_0(n) C_{y_1,y_2}(n)
                            | y_1(n + 1)  y_2(n + 1) |

For general k, we have

                         | y_1(n + 1)  y_2(n + 1)  ...  y_k(n + 1) |
C_{y_1,...,y_k}(n + 1) = | y_1(n + 2)  y_2(n + 2)  ...  y_k(n + 2) |
                         | ...                                     |
                         | y_1(n + k)  y_2(n + k)  ...  y_k(n + k) |

where, by (1.28),

y_j(n + k) = - Sum_{l=0}^{k-1} a_l(n)y_j(n + l),  j = 1, ..., k

Expanding the last row accordingly, we get

C_{y_1,...,y_k}(n + 1) = - Sum_{l=0}^{k-1} a_l(n) | y_1(n + 1)  y_2(n + 1)  ...  y_k(n + 1) |
                                                  | y_1(n + 2)  y_2(n + 2)  ...  y_k(n + 2) |
                                                  | ...                                     |
                                                  | y_1(n + l)  y_2(n + l)  ...  y_k(n + l) |

The determinants on the right-hand side are seen to vanish, except for l = 0, hence
                                 | y_1(n + 1)  y_2(n + 1)  ...  y_k(n + 1) |
C_{y_1,...,y_k}(n + 1) = -a_0(n) | y_1(n + 2)  y_2(n + 2)  ...  y_k(n + 2) |  =  (-1)^k a_0(n) C_{y_1,...,y_k}(n),  n in N
                                 | ...                                     |
                                 | y_1(n)      y_2(n)      ...  y_k(n)     |

Moreover,

C_{y_1,y_2}(0) = | 0  1 |  =  -1
                 | 1  0 |

Thus, y_1 and y_2 constitute a basis of the solution space of the equation. The Casorati determinant C_{y_1,y_2}(n) satisfies the first order difference equation

y(n + 1) = (-1)^2 y(n),  n in N

hence C_{y_1,y_2}(n) = -1 for all n in N. (Check this by computing C_{y_1,y_2}(n) directly.)
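For k = 2 the Casorati determinant is easy to compute directly. In the sketch below we use the sequences y_1(n) = (-2)^n, y_2(n) = 3^n, which also appear in exercise 3 of subsection 1.3.4; the underlying equation y(n + 2) = y(n + 1) + 6y(n) is our reconstruction (its characteristic roots are -2 and 3), so by the recurrence C(n + 1) = (-1)^k a_0(n) C(n) with a_0(n) = -6 the determinant should be multiplied by -6 at each step.

```python
# Casorati determinant (1.30) for k = 2, computed directly from two sequences.
def casorati2(y1, y2, n):
    """C_{y1,y2}(n) = y1(n)*y2(n+1) - y2(n)*y1(n+1)."""
    return y1(n) * y2(n + 1) - y2(n) * y1(n + 1)

# Illustrative solutions of the (reconstructed) equation y(n+2) = y(n+1) + 6y(n):
y1 = lambda n: (-2) ** n
y2 = lambda n: 3 ** n
# C(0) = 1*3 - 1*(-2) = 5, and C(n) = 5 * (-6)^n.
```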
1.3.1 Inhomogeneous linear difference equations

Ly(n) = Sum_{j=0}^{k} a_j(n)y(n + j) = b(n),  n in N    (1.31)
where b does not vanish identically (is not a null sequence). As the homogeneous equation Ly = 0 has other solutions besides the null solution, the linear operator L is not invertible and equation (1.31) does not have a unique solution. If y_0 is a solution, i.e. if Ly_0 = b, and if y~ is an arbitrary solution of the homogeneous equation, so Ly~ = 0, then y_0 + y~ is another solution of (1.31), as

L(y_0 + y~) = Ly_0 + Ly~ = b + 0 = b

Conversely, the difference of two solutions y_1 and y_2 of the inhomogeneous equation (1.31) is a solution of the homogeneous equation, as

L(y_1 - y_2) = Ly_1 - Ly_2 = b - b = 0

Theorem 1.3.7. Let y_0 be a particular solution of the inhomogeneous linear difference equation (1.31). Every solution of (1.31) can be written as the sum of y_0 and a solution of the homogeneous equation. Conversely, any sequence that can be written as the sum of y_0 and a solution of the homogeneous equation is a solution of (1.31).

1.3.2 First order linear difference equations
y(n + 1) = a(n)y(n),  n in N    (1.32)

This equation has a 1-dimensional solution space. If the initial value y(0) is known, all subsequent values can be computed. It is easily seen that, for n >= 1,

y(n) = a(n - 1) ... a(0)y(0)

y(0) can be chosen arbitrarily. Therefore, the general solution of (1.32) has the form

y(0) = c,  y(n) = c Prod_{m=0}^{n-1} a(m) for n >= 1,  c in C
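A direct check that the product formula agrees with stepwise iteration; the coefficient sequence a(m) = 1/(m + 1) is an arbitrary illustrative choice.

```python
# General solution of y(n+1) = a(n)y(n): y(n) = c * a(n-1) * ... * a(0).
def solve_first_order(a, c, n):
    """Value y(n) of the solution with y(0) = c, via the product formula."""
    prod = 1.0
    for m in range(n):
        prod *= a(m)
    return c * prod

def iterate_direct(a, c, n):
    """Compute y(n) by iterating y(n+1) = a(n)y(n) from y(0) = c."""
    y = c
    for m in range(n):
        y = a(m) * y
    return y

a = lambda m: 1.0 / (m + 1)   # illustrative coefficient sequence
# With this choice, y(n) = c / n!, the sequence from example 1.1.9.
```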
Example 1.3.8. The terms of a geometric progression with common ratio r in R or C satisfy the equation

y(n + 1) = ry(n),  n in N

Here a(n) equals the common ratio r for all n, and thus is a constant sequence. For any solution y we have

y(n) = r^n y(0),  n in N

y(n + 1) = y(n) + b(n),  n in N    (1.33)

Writing down this relation for the indices 0, 1, ..., n - 1 and summing the expressions on both sides of the equality signs, we get

y(n) - y(0) = b(0) + b(1) + ... + b(n - 1)

As y(0) can be chosen arbitrarily, the general solution of (1.33) is

y(0) = c,  y(n) = c + Sum_{m=0}^{n-1} b(m) for n >= 1,  c in C
Substituting z(n) := y(n)/y_0(n), we obtain

z(n + 1) = z(n) + b(n)/y_0(n + 1)

This is an equation of the form (1.33), where, on the right-hand side, b(n) has been replaced by b(n)/y_0(n + 1). Hence we have, for n >= 1,

y(n)/y_0(n) = c + Sum_{m=0}^{n-1} b(m)/y_0(m + 1),  c in C

so that

y(0) = cy_0(0),  y(n) = y_0(n)(c + Sum_{m=0}^{n-1} b(m)/y_0(m + 1)) for n >= 1,  c in C    (1.35)

(1.35) represents the general solution of (1.34), even if y_0 does have zeroes (verify this).
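Formula (1.35) can be checked against direct iteration of the inhomogeneous first order equation y(n + 1) = a(n)y(n) + b(n); the test sequences a and b below are arbitrary illustrative choices (with a nowhere zero).

```python
# Compare direct iteration of y(n+1) = a(n)y(n) + b(n) with formula (1.35).
def direct(a, b, c, n):
    """y(n) obtained by iterating from y(0) = c."""
    y = c
    for m in range(n):
        y = a(m) * y + b(m)
    return y

def formula_135(a, b, c, n):
    """y(n) = y0(n) * (c + sum_{m<n} b(m)/y0(m+1)), with y0(n) = a(n-1)...a(0)."""
    def y0(j):
        p = 1.0
        for m in range(j):
            p *= a(m)
        return p
    return y0(n) * (c + sum(b(m) / y0(m + 1) for m in range(n)))

a = lambda m: 0.5 + m        # nowhere-zero illustrative coefficients
b = lambda m: 1.0 / (m + 1)  # illustrative inhomogeneous term
```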
y(n + 1) = ry(n) + v,  n in N

Here, r and v are constants: the common ratio and difference of the progression, respectively. The homogeneous equation has a solution y_n^0 := r^n. Thus, the general solution of the inhomogeneous equation is

y_0 = c,  y_n = cr^n + r^n Sum_{m=0}^{n-1} v/r^{m+1} for n >= 1,  c in C
1.3.3 Stability of solutions of linear difference equations

all solutions are neutrally stable, or all are asymptotically stable, or all are unstable. Moreover, every asymptotically stable solution is globally asymptotically stable. For, suppose that y* is an asymptotically stable solution of (1.27). Then z = 0 is an asymptotically stable solution of the homogeneous equation. Hence there exists a positive number delta, such that lim_{n->inf} z(n) = 0 for every solution z of the homogeneous equation with the property that |z(n)| <= delta for n = 0, ..., k - 1. Now consider an arbitrary solution z (not identically 0) of the homogeneous equation and let eta := max{|z(0)|, ..., |z(k - 1)|}. Obviously, eta > 0 and the sequence z~ defined by z~(n) = (delta/eta)z(n) also satisfies the homogeneous equation. Moreover, |z~(n)| <= delta for n = 0, ..., k - 1, hence lim_{n->inf} z~(n) = 0, and this implies that lim_{n->inf} z(n) = 0. Consequently, all solutions of the homogeneous equation tend to 0 as n -> inf and it immediately follows that lim_{n->inf} (y(n) - y*(n)) = 0 for any solution y of (1.27). Summarizing, we have the following theorem.

Theorem 1.3.14. All solutions of the linear difference equation (1.27) are neutrally stable, globally asymptotically stable, or unstable, iff the null solution of the homogeneous equation is neutrally stable, asymptotically stable, or unstable, respectively.
Example 1.3.15. The solutions of the equation

y(n + 1) - ay(n) = b(n),  n ∈ N   (1.36)

where b is an arbitrary sequence, are stable when |a| ≤ 1, as in that case every solution y of the homogeneous equation has the property that |y(n) - 0| = |a^n y(0) - 0| = |a^n y(0)| ≤ |y(0)| for all n ∈ N (cf. example 1.3.8). The solutions of (1.36) are globally asymptotically stable when |a| < 1, as in that case every solution y of the homogeneous equation has the property that

lim_{n→∞} |y(n) - 0| = lim_{n→∞} |a^n y(0)| = 0

which implies that the null solution of the homogeneous equation is asymptotically stable. The solutions of (1.36) are unstable when |a| > 1, as in that case, for every solution y ≢ 0 of the homogeneous equation, lim_{n→∞} |y(n) - 0| = lim_{n→∞} |y(n)| = ∞.
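The contraction in the case |a| < 1 is easy to see numerically: the difference of two solutions satisfies the homogeneous equation, so it equals a^n (y(0) - z(0)) and dies out regardless of the forcing sequence b (chosen arbitrarily below):

```python
# For y(n+1) = a*y(n) + b(n), the difference of two solutions satisfies the
# homogeneous equation, so it equals a**n * (y(0) - z(0)).
a = 0.8
b = lambda n: (-1) ** n        # an arbitrary bounded forcing sequence
y, z = 5.0, -3.0               # two different initial values
for n in range(200):
    y, z = a * y + b(n), a * z + b(n)
assert abs(y - z) < 1e-15      # the difference has (essentially) died out
```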
1.3.4
Exercises
y(n)/n!,  n ∈ N

are unstable.
3. Check whether or not the sequences y1 and y2, defined by

y1(n) = (-2)^n, y2(n) = 3^n,  n ∈ N

Σ_{m=0}^{n} 1/m!,  n ∈ N
1.4 Systems of first order difference equations

Many problems involve more than one, say k, dependent variables, simultaneously satisfying a certain number, say m, of relations. This is referred to as a system of m equations in k unknowns. Here, we restrict ourselves to systems of k first order difference equations in k unknowns, of the following form
y1(n + 1) = f1(n, y1(n), ..., yk(n))
y2(n + 1) = f2(n, y1(n), ..., yk(n))
...
yk(n + 1) = fk(n, y1(n), ..., yk(n))   (1.37)

Defining, for every n, a k-dimensional vector y(n) with components y1(n), ..., yk(n), and a k-dimensional vector function f with components f1, ..., fk, we can write the above system of equations in the form of a so-called vectorial difference equation

(y1(n + 1), ..., yk(n + 1))^T = (f1(n, y1(n), ..., yk(n)), ..., fk(n, y1(n), ..., yk(n)))^T   (1.38)
or, briefly
y(n + 1) = f(n, y(n))   (1.39)
The difference between (1.37) and (1.38) or (1.39) is mainly a matter of notation. In what follows we will not discriminate between systems of equations
and vectorial equations.
Example 1.4.1. The system of first order, linear difference equations

y1(n + 1) = a11(n)y1(n) + ... + a1k(n)yk(n) + b1(n)
y2(n + 1) = a21(n)y1(n) + ... + a2k(n)yk(n) + b2(n)
...
yk(n + 1) = ak1(n)y1(n) + ... + akk(n)yk(n) + bk(n)   (1.40)

can be written as the vectorial difference equation

(y1(n + 1), ..., yk(n + 1))^T = [a11(n) ... a1k(n); ...; ak1(n) ... akk(n)] (y1(n), ..., yk(n))^T + (b1(n), ..., bk(n))^T   (1.41)

or, briefly,

y(n + 1) = A(n)y(n) + b(n)   (1.42)

Here, y(n + 1), y(n) and b(n) are k-dimensional vectors and A(n) is the k × k matrix in (1.41).
Systems of first order difference equations are important for more than one
reason. In the first place, systems of difference equations emerge naturally in
mathematical models of various problems, involving several time-dependent
variables. Secondly, any kth order, scalar difference equation can be easily
transformed into a system of k first order equations. Here is one way to do
that. As a starting point we take the kth order equation
y(n + k) = f(n, y(n), y(n + 1), ..., y(n + k - 1)),  n ∈ N   (1.43)

Define

y1(n) = y(n), y2(n) = y(n + 1), ..., yk(n) = y(n + k - 1),  n ∈ N

Then

y1(n + 1) = y(n + 1) = y2(n),
y2(n + 1) = y(n + 2) = y3(n),
...
yk-1(n + 1) = y(n + k - 1) = yk(n)

If y is a solution of (1.43), we have

yk(n + 1) = y(n + k) = f(n, y(n), y(n + 1), ..., y(n + k - 1)) = f(n, y1(n), ..., yk(n))

Hence, the sequence of vectors or the vector function (y1, ..., yk), defined by: (y1, ..., yk)(n) = (y1(n), ..., yk(n)), is a solution of the vectorial difference equation

(y1(n + 1), ..., yk-1(n + 1), yk(n + 1))^T = (y2(n), ..., yk(n), f(n, y1(n), ..., yk(n)))^T
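The conversion can be sketched in code; the concrete recurrence used below (a Fibonacci-type equation with k = 2) is an illustrative assumption, not taken from the text:

```python
# Convert y(n+k) = f(n, y(n), ..., y(n+k-1)) into the first order system
# (y1, ..., yk)(n+1) = (y2(n), ..., yk(n), f(n, y1(n), ..., yk(n))).
def f(n, *ys):                 # here k = 2 and f(n, y0, y1) = y0 + y1
    return ys[0] + ys[1]

def step(n, u):                # one step of the vectorial equation
    return u[1:] + (f(n, *u),)

y = [1, 1]                     # scalar solution from y(0) = y(1) = 1
for n in range(10):
    y.append(f(n, y[n], y[n + 1]))

u = (1, 1)                     # vector solution with u(0) = (y(0), y(1))
for n in range(10):
    assert u == (y[n], y[n + 1])   # u(n) = (y(n), y(n+1)) at every step
    u = step(n, u)
```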
Example 1.4.2. Suppose that y is a solution of the kth order, linear difference equation

y(n + k) = -a_{k-1}(n)y(n + k - 1) - ... - a_0(n)y(n) + b(n)   (1.44)

Then the sequence of vectors or vector function (y1, ..., yk), defined by: (y1, ..., yk)(n) = (y1(n), ..., yk(n)) = (y(n), y(n + 1), ..., y(n + k - 1)), is a solution of the vectorial difference equation

(y1(n + 1), ..., yk(n + 1))^T =
[ 0        1        0     ...  0
  0        0        1     ...  0
  .        .        .     ...  .
  -a_0(n)  -a_1(n)  ...   ...  -a_{k-1}(n) ] (y1(n), ..., yk(n))^T + (0, ..., 0, b(n))^T   (1.45)

Conversely: if (y1, ..., yk) is a solution of (1.45), then y1 is a solution of the kth order scalar equation (1.44) and, for j = 1, ..., k, yj(n) = y1(n + j - 1).
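The companion matrix of (1.44)-(1.45) can be built directly in code; the coefficient values below are an arbitrary illustration:

```python
import numpy as np

# Companion matrix of y(n+k) = -a_{k-1}*y(n+k-1) - ... - a_0*y(n) (b = 0).
a = np.array([0.5, -1.0, 0.25])        # hypothetical a_0, a_1, a_2, so k = 3
k = len(a)
A = np.zeros((k, k))
A[:-1, 1:] = np.eye(k - 1)             # superdiagonal of ones
A[-1, :] = -a                          # last row (-a_0, ..., -a_{k-1})

y = [1.0, 2.0, 3.0]                    # initial values y(0), y(1), y(2)
for n in range(20):
    y.append(float(-a @ y[n:n + k]))   # scalar recurrence

u = np.array(y[:k])
for n in range(20):
    assert np.allclose(u, y[n:n + k])  # u(n) = (y(n), ..., y(n+k-1))
    u = A @ u                          # the vectorial equation (1.45)
```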
(y1(n + 1), y2(n + 1))^T = [0  1; (n + 1)  n + 3] (y1(n), y2(n))^T + (0, 3)^T

(y1(n + 1), y2(n + 1), y3(n + 1))^T = [0 1 0; 0 0 1; n 0 0] (y1(n), y2(n), y3(n))^T + (0, 0, 2n)^T
1.4.1 Homogeneous systems of first order linear difference equations

We consider a system of k first order, homogeneous, linear difference equations, in the form of the vectorial equation
y(n + 1) = A(n)y(n)
(1.46)
for all n ≥ 1. Consequently, det(y1(n), ..., yk(n)) ≠ 0 for at least one value of n iff det(y1(0), ..., yk(0)) ≠ 0. This completes the proof of the theorem.
A k k matrix function Y , having, for its column vectors, k linearly independent solutions y 1 , ..., y k of the homogeneous vectorial difference equation
(1.46), is called a fundamental matrix of this equation. It is easily seen
that Y itself satisfies the equation, i.e.
Y(n + 1) = A(n)Y(n),  n ∈ N

Moreover, Theorem 1.4.6 implies that det Y(n) ≠ 0 for at least one value of n. Conversely, the column vectors of a matrix function Y(n) with the property that Y(n + 1) = A(n)Y(n) for all n ∈ N, also satisfy (1.46). If det Y(n) ≠ 0 for at least one value of n, then, according to Theorem 1.4.6, these column vectors form a basis of the solution space of (1.46), hence Y is a fundamental matrix of this equation. If Y is a fundamental matrix of (1.46), with column vectors y^1, ..., y^k, then every solution y of (1.46) can be written as a linear combination of y^1, ..., y^k:
y(n) = Σ_{i=1}^{k} c_i y^i(n) = Y(n)(c1, ..., ck)^T   (1.47)

1.4.2 Inhomogeneous systems of first order linear difference equations
1.4.3 Systems of first order linear difference equations with constant coefficients
This section is concerned with systems of the form (1.40), where the sequences aij (i ∈ {1, ..., k}, j ∈ {1, ..., k}) do not depend on n, so are constants. Again, we write the system as a vectorial equation

y(n + 1) = Ay(n) + b(n)   (1.48)

with associated homogeneous equation

y(n + 1) = Ay(n)   (1.49)

Every solution of (1.49) has the form

y(n) = A^n c,  c ∈ C^k   (1.50)

Let A = SJS^{-1}, where J is a Jordan normal form of A, so that

A^n = SJ^n S^{-1}   (1.51)
If the columns s1, ..., sk of S form a chain of generalized eigenvectors, i.e. (A - λI)s1 = 0 and (A - λI)sj = sj-1, then J consists of a single Jordan block and the column vectors of A^n S take the form

λ^n s1,  λ^n s2 + C(n,1) λ^{n-1} s1,  ...,  λ^n sk + C(n,1) λ^{n-1} sk-1 + ... + C(n,k-1) λ^{n-k+1} s1   (1.52)

(C(n, j) denoting the binomial coefficient; cf. the appendix, A.1 and A.2.)
Example. Consider the equation

y(n + 1) = [5 1; -1 3] y(n),  n ∈ N

The coefficient matrix has the double eigenvalue 4 and admits the Jordan decomposition

A = SJS^{-1} = [1 0; -1 1] [4 1; 0 4] [1 0; 1 1]

A fundamental matrix is Y1(n) := A^n. Note that Y1(0) = I. With the aid of A.2 of the appendix, we find (cf. also example A.1.6)

Y1(n) = [1 0; -1 1] [4^n  n4^{n-1}; 0  4^n] [1 0; 1 1] = 4^{n-1} [4+n  n; -n  4-n]

Verify that the column vectors of this matrix do indeed form a basis of the solution space. Another fundamental matrix is

Y2(n) = [1 0; -1 1] [4 1; 0 4]^n = [4^n  n4^{n-1}; -4^n  (4-n)4^{n-1}] = 4^{n-1} [4  n; -4  4-n]
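Taking A = [5 1; -1 3] (the reading of the example's matrix assumed here), the closed form 4^{n-1}[4+n n; -n 4-n] for A^n can be checked numerically:

```python
import numpy as np

# Check A**n = 4**(n-1) * [[4+n, n], [-n, 4-n]] for A = [[5, 1], [-1, 3]],
# a matrix with the double eigenvalue 4 and a single Jordan block.
A = np.array([[5.0, 1.0], [-1.0, 3.0]])
for n in range(1, 9):
    closed = 4.0 ** (n - 1) * np.array([[4 + n, n], [-n, 4 - n]])
    assert np.allclose(np.linalg.matrix_power(A, n), closed)
```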
Example 1.4.10. Consider the population model

y1(n + 1) = (3/2)y2(n) + y3(n)
y2(n + 1) = (1/2)y1(n)
y3(n + 1) = (1/2)y2(n)

i.e.

y(n + 1) = [0 3/2 1; 1/2 0 0; 0 1/2 0] y(n),  n ∈ N

The eigenvalues of the above matrix are the solutions of the equation

-λ^3 + (3/4)λ + 1/4 = (1 - λ)(λ^2 + λ + 1/4) = 0

The matrix has a simple eigenvalue 1 and an eigenvalue -1/2 with algebraic multiplicity 2. The eigenspace associated with the eigenvalue 1 is spanned by the vector (4, 2, 1). The eigenspace associated with the eigenvalue -1/2 turns out to be 1-dimensional as well and is spanned by the vector (1, -1, 1). We can find a generalized eigenvector of degree 2, corresponding to the eigenvalue -1/2, by solving the following system of equations

[1/2 3/2 1; 1/2 1/2 0; 0 1/2 1/2] (x1, x2, x3)^T = (1, -1, 1)^T

One solution is (x1, x2, x3) = (-2, 0, 2). Hence it follows that

A := [0 3/2 1; 1/2 0 0; 0 1/2 0] = [4 1 -2; 2 -1 0; 1 1 2] [1 0 0; 0 -1/2 1; 0 0 -1/2] [4 1 -2; 2 -1 0; 1 1 2]^{-1}

and

A^n = [4 1 -2; 2 -1 0; 1 1 2] [1 0 0; 0 -1/2 1; 0 0 -1/2]^n · (1/18)[2 4 2; 4 -10 4; -3 3 6]

Suppose we are interested in the long term development of a population consisting of 100 first-year individuals in the year 0. Then we compute

A^n (100, 0, 0)^T = [4 1 -2; 2 -1 0; 1 1 2] [1 0 0; 0 (-1/2)^n n(-1/2)^{n-1}; 0 0 (-1/2)^n] · (1/18)(200, 400, -300)^T
= (50/9) [4 1 -2; 2 -1 0; 1 1 2] (2, (6n + 4)(-1/2)^n, -3(-1/2)^n)^T
= (100/9) (4 + (3n + 5)(-1/2)^n, 2 - (3n + 2)(-1/2)^n, 1 + (3n - 1)(-1/2)^n)^T

Among other things we find that

lim_{n→∞} y(n) = (100/9)(4, 2, 1)   (1.53)
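The long-run behaviour can be reproduced numerically (the matrix entries are as read from this example; treat this reading as an assumption about the garbled display):

```python
import numpy as np

# Population model: first row holds the birth rates (0, 3/2, 1), the
# subdiagonal the survival rates (1/2, 1/2); eigenvalues 1 and -1/2 (double).
A = np.array([[0.0, 1.5, 1.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.5, 0.0]])
y = np.array([100.0, 0.0, 0.0])        # 100 first-year individuals in year 0
for n in range(60):
    y = A @ y
# the (-1/2)**n terms have died out, leaving (100/9) * (4, 2, 1)
assert np.allclose(y, np.array([4.0, 2.0, 1.0]) * 100 / 9, atol=1e-9)
```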
y(n) = SJ^n c = S [J1(λ1)^n 0 ... 0; 0 Jk2(λ2)^n ... 0; ...; 0 0 ... Jkr(λr)^n] c,   (1.54)

where c = (c1, ..., ck) ∈ C^k and J1(λ1), Jk2(λ2), ..., Jkr(λr) are the Jordan blocks of J. A being a primitive matrix, with dominant eigenvalue λ1, we have λ1 > |λi| for i = 2, ..., r, and this implies that

lim_{n→∞} λ1^{-n} Jki(λi)^n = 0 (= the null matrix of order ki)   (1.55)

for i = 2, ..., r. Moreover, λ1^{-n} J1(λ1)^n = 1. It follows that

lim_{n→∞} λ1^{-n} y(n) = S [1 0 ... 0; 0 0 ... 0; ...; 0 0 ... 0] c = c1 s1   (1.56)
Using the continuity of the norm (cf. Simon & Blume, exercise 30.4), we obtain

lim_{n→∞} λ1^{-n} ‖y(n)‖ = lim_{n→∞} ‖λ1^{-n} y(n)‖ = ‖c1 s1‖ = |c1| ‖s1‖   (1.57)

Dividing (1.56) by (1.57) we find, for any solution y of (1.49) with y(0) ≠ 0,

lim_{n→∞} y(n)/‖y(n)‖ = c1 s1 / (|c1| ‖s1‖), provided c1 ≠ 0   (1.58)

It remains to be proved that c1 > 0 when y(0) >> 0, as that would imply |c1| = c1. From (1.54) we deduce that c = S^{-1} y(0) and thus

c1 = Σ_{j=1}^{k} (S^{-1})_{1j} y(0)_j > 0

since the first row of S^{-1} is a left eigenvector of A associated with λ1, which can be chosen strictly positive.
Example. Consider the system

y(n + 1) = Ay(n) := [2 -1 1; 0 1 -1; -1 1 0] y(n)   (1.59)

A has an eigenvalue 1 with eigenvector s1 := (1, 1, 0), an eigenvalue 1 - i with eigenvector s2 := (1, i, -1) and an eigenvalue 1 + i with eigenvector s3 := (1, -i, -1). Hence a basis of the solution space of (1.59) is formed by the equilibrium solution y ≡ s1 and the (complex conjugated) sequences

{(1 - i)^n s2}_{n=0}^∞ = {(√2 e^{-iπ/4})^n s2}_{n=0}^∞ and {(1 + i)^n s3}_{n=0}^∞ = {(√2 e^{iπ/4})^n s3}_{n=0}^∞

The general term of the second solution can be written in the following form

(1 - i)^n s2 = (√2)^n (e^{-inπ/4}, i e^{-inπ/4}, -e^{-inπ/4})^T

so that

Re y2(n) = 2^{n/2} (cos(nπ/4), sin(nπ/4), -cos(nπ/4))^T and Im y2(n) = 2^{n/2} (-sin(nπ/4), cos(nπ/4), sin(nπ/4))^T
y(0) = 0, y(n) = Σ_{m=0}^{n-1} A^{n-m-1} b(m)   (1.60)

This can be verified by inserting the above expression for y into the equation:

y(n + 1) - Ay(n) = Σ_{m=0}^{n} A^{n-m} b(m) - Σ_{m=0}^{n-1} A^{n-m} b(m) = b(n)

and

y(1) - Ay(0) = y(1) = A^0 b(0) = b(0)

Hence the general solution of (1.48) is

y(0) = c, y(n) = A^n c + Σ_{m=0}^{n-1} A^{n-m-1} b(m)   (1.61)

Now suppose that b is periodic with period p. Substituting m → m + p we find

Σ_{m=p}^{n+p-1} A^{n+p-m-1} b(m) = Σ_{m=0}^{n-1} A^{n-m-1} b(m + p) = Σ_{m=0}^{n-1} A^{n-m-1} b(m)

hence the condition y(p) = y(0) is equivalent to

A^p c + Σ_{m=0}^{p-1} A^{p-m-1} b(m) = c

hence also to

(I - A^p)c = Σ_{m=0}^{p-1} A^{p-m-1} b(m)

(why?) This shows that (1.48) has a unique periodic solution of period p, iff the matrix I - A^p is nonsingular. The initial vector of this solution is

y(0) = (I - A^p)^{-1} Σ_{m=0}^{p-1} A^{p-m-1} b(m)
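A small numerical sketch of this criterion (the matrix A and the 3-periodic sequence b below are arbitrary illustrative choices):

```python
import numpy as np

# Unique p-periodic solution of y(n+1) = A y(n) + b(n) for p-periodic b:
# initial vector y(0) = (I - A^p)^{-1} * sum_{m=0}^{p-1} A^{p-m-1} b(m).
A = np.array([[0.5, 0.1], [0.0, 0.3]])
bs = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([1.0, 1.0])]
p = len(bs)                            # period 3
Ap = np.linalg.matrix_power(A, p)
rhs = sum(np.linalg.matrix_power(A, p - m - 1) @ bs[m] for m in range(p))
y0 = np.linalg.solve(np.eye(2) - Ap, rhs)

y = y0.copy()
for n in range(p):
    y = A @ y + bs[n]
assert np.allclose(y, y0)              # y(p) = y(0): the solution is p-periodic
```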
1.4.4 Stability of solutions of systems of linear difference equations with constant coefficients

y(n + 1) = Ay(n) + b(n),  n ∈ N   (1.62)

Asymptotic stability of the solutions of (1.62) is equivalent to |λi| < 1 for all eigenvalues λi of A. As

det A = Π_{i=1}^{k} λi and tr A = Σ_{i=1}^{k} λi

(this follows from lemma A.1.1 in the appendix), this implies that

|det A| < 1 and |tr A| < k

Analogously, stability of the solutions of (1.62) implies that

|det A| ≤ 1 and |tr A| ≤ k
The homogeneous equation y(n + 1) = Ay(n) is a simple example of an autonomous system of the form y(n + 1) = f(y(n)) (cf. 1.5), with f(x) = Ax. Analogously to the scalar case, we define the stable set of the fixed point 0 (i.e. the null vector) of f to be the set of all vectors x ∈ C^k with the property that lim_{n→∞} A^n x = 0. Suppose x is an eigenvector of A, with eigenvalue λ. Then A^n x = λ^n x and it is obvious that lim_{n→∞} A^n x = 0 iff |λ| < 1. With the aid of (1.52), it is easily shown that this also holds when x is a generalized eigenvector with respect to λ, of degree greater than 1. Let E denote the span of all generalized eigenvectors of A corresponding to eigenvalues λ ∈ σ(A) with |λ| < 1.
c1 + h1 m1 m2
m1
c2 + h2 m2
Yt1 +
(1.63)
1.4.5
Exercises
2 1
1 4
nN
y(n),
1
1+i
1
1
A=
y(n + 1) =
y(n) + 4n
1
1
nN
1 0 2
1
1 1 0
1
0
0 1
2
6. Determine a fundamental matrix of the system

y(n + 1) = [1 0 1; 1 1 1; 1 0 1] y(n),  n ∈ N
7. Find the population vector y(n) in example 1.4.10, when the initial population is composed of 100 second-year individuals. Compute lim_{n→∞} y(n). Check whether there exists an initial vector y(0) such that, in the long run, the population becomes extinct.
8. Determine a Jordan normal form and a basis of generalized eigenvectors of the matrix in the model of example 1.4.10, when the birth rates for the first-year, second-year and third-year individuals are 1/2, 1/2 and 1, respectively. In this case we are dealing with a so-called Markov matrix (i.e. a matrix with nonnegative entries, having the additional property that the components of each column vector add up to 1). Compute Σ_{i=1}^{3} yi(n).
9. Examine the stability of the solutions of the system
1
2
13
1
3
y(n + 1) = 13
1
3
0
y(n) + 2n ,
1
3n
3
2
3
nN
10. a. Determine all solutions of the system

y(n + 1) = [0 1 0; 0 0 1; 1 1 1] y(n) + (1, 2, 1)^T,  n = 0, 1, ...   (1.64)

and examine their stability.
b. Determine all real-valued solutions of the homogeneous part.
c. Determine the solution of the equation in a. with initial vector y(0) = (0, 0, 1)^T.
11. Find all periodic solutions of the system

y(n + 1) = [1 1; a 1] y(n)
1.5 Autonomous systems of first order difference equations

In this section we extend the theory discussed in 1.2 to systems of first order difference equations of the type:

y(n + 1) = f(y(n)),  n ∈ N   (1.65)

Figure 1.12: Orbit of the point (1/2, 1/2), subject to the map f(x) = (x2, 1 - x1²); cf. example 1.5.7.
Example 1.5.1. The iterates of the function f : R² → R², defined by

f(x) := (x2, x2/x1),  x ∈ (R - {0})²,

are

f²(x) = (x2/x1, 1/x1), f³(x) = (1/x1, 1/x2), f⁴(x) = (1/x2, x1/x2), f⁶(x) = x

f has a single fixed point, viz. (1, 1). Hence the system

y1(n + 1) = y2(n), y2(n + 1) = y1(n)^{-1} y2(n)

has a unique equilibrium solution: y ≡ (1, 1), no 2-periodic solutions and three 3-periodic solutions having one and the same orbit. All other solutions are 6-periodic.
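Assuming the map in this example reads f(x1, x2) = (x2, x2/x1) (consistent with the stated 6-periodicity and the fixed point (1, 1)), the claims can be checked with exact rational arithmetic:

```python
from fractions import Fraction as F

# f(x1, x2) = (x2, x2/x1): every orbit off the coordinate axes is 6-periodic.
def f(x):
    x1, x2 = x
    return (x2, x2 / x1)

x = (F(3), F(7))
orbit = [x]
for _ in range(6):
    orbit.append(f(orbit[-1]))

assert orbit[6] == orbit[0]                  # f^6 = identity
assert orbit[3] == (F(1, 3), F(1, 7))        # f^3(x) = (1/x1, 1/x2)
assert f((F(1), F(1))) == (F(1), F(1))       # the unique fixed point (1, 1)
```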
The following result is a generalization of Theorem 1.2.5.
Theorem 1.5.2. Let c ∈ C^k be a fixed point of f and suppose f is differentiable at c. Let A := Df(c) (the Jacobian matrix of f at c).
(i) If r(A) < 1, then y ≡ c is an asymptotically stable equilibrium solution of (1.65).
(ii) If r(A) > 1, then y ≡ c is an unstable equilibrium solution of (1.65).
In the case that c = 0, similarly to Theorem 1.2.8, the above theorem can be
stated as follows.
Theorem 1.5.3. Let A be a constant k × k matrix and g a k-dimensional vector function, continuous at 0 (the origin of R^k or C^k), with the property that, for some (hence for any!) vector norm ‖·‖,

lim_{x→0} ‖g(x)‖ / ‖x‖ = 0   (1.66)
If r (A) < 1, then the null solution of the system of first order difference
equations
y(n + 1) = Ay(n) + g(y(n)),  n ∈ N   (1.67)
is asymptotically stable. If, on the other hand, r (A) > 1, then the null
solution of (1.67) is unstable.
Like in the scalar case, we say that the linear system

y(n + 1) = Ay(n)

is obtained by linearization of (1.67) at 0. More generally, (1.65) can be linearized at any fixed point c where f is differentiable, and the linearized equation is obtained from (1.65) by replacing the function f on the right-hand side by its linear approximation at c, i.e.

y(n + 1) = f(c) + Df(c)(y(n) - c) = c + Df(c)(y(n) - c)
Example 1.5.4. Consider the system

(y1(n + 1), y2(n + 1))^T = A (y1(n), y2(n))^T + (y2(n)², y1(n)y2(n))^T,  n ∈ N

Here

g(x) = g(x1, x2) = (x2², x1x2)

and, since ‖g(x)‖1 = x2² + |x1x2| = |x2| ‖x‖1,

lim_{x→0} ‖g(x)‖1 / ‖x‖1 = lim_{x→0} |x2| = 0
Example 1.5.5. We illustrate Theorem 1.5.2 with a discrete predatorprey model. We assume that in a certain area, after n units of time (e.g.
years, months, generations), there are y1 (n) prey animals and y2 (n) predators
present and that these numbers satisfy the following system of equations
y1(n + 1) = ((1 + r) - ay1(n) - by2(n)) y1(n)
y2(n + 1) = cy1(n)y2(n),  n ∈ N   (1.68)
Here, a, b, c and r are positive constants. The number of prey that are eaten
during one unit of time (viz. by2 (n)y1 (n)) is proportional to both the number
of prey and the number of predators present, and the same applies to the
number of predators born during that period. Moreover, it is assumed in this
model that no predator lives more than 1 unit of time. The system (1.68)
has three equilibrium solutions, including the null solution and the solution
(y1(n), y2(n)) = (1/c, (1/b)(r - a/c)) for all n ∈ N   (1.69)

The Jacobian matrix of f, where x := (x1, x2), is

Df(x) = [1 + r - 2ax1 - bx2   -bx1; cx2   cx1]
Df (0, 0) is a diagonal matrix with diagonal entries 1+r and 0. From Theorem
1.5.3 it follows immediately that the null solution is unstable. In most cases
however, the more interesting question is, whether or not the equilibrium
solution (1.69) is (asymptotically) stable. In the given (real) context, this
solution exists, provided r > a/c. The matrix A now reads

A = [1 - a/c   -b/c; (1/b)(cr - a)   1]

with tr A = 2 - a/c and det A = 1 + r - 2a/c.
This shows that r (A) < 1 and, with Theorem 1.5.3, we conclude that the
equilibrium solution under consideration is asymptotically stable. The
linearized system at the equilibrium point (1.69) is
(y1(n + 1), y2(n + 1))^T = (1/c, (1/b)(r - a/c))^T + [1 - a/c   -b/c; (1/b)(cr - a)   1] (y1(n) - 1/c, y2(n) - (1/b)(r - a/c))^T   (1.70)

In terms of the deviation u(n) := y(n) - (1/c, (1/b)(r - a/c))^T, this reads u(n + 1) = Au(n).
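A short simulation illustrates the result; the parameter values below (a = b = c = 1, r = 3/2, giving the interior equilibrium (1, 1/2)) are an illustrative assumption satisfying r > a/c:

```python
# Predator-prey model (1.68): y1' = ((1+r) - a*y1 - b*y2)*y1, y2' = c*y1*y2.
# With a = b = c = 1, r = 1.5 the interior equilibrium (1/c, (r - a/c)/b)
# equals (1, 0.5) and is asymptotically stable.
a, b, c, r = 1.0, 1.0, 1.0, 1.5
y1, y2 = 1.02, 0.48                    # start close to the equilibrium
for n in range(300):
    y1, y2 = ((1 + r) - a * y1 - b * y2) * y1, c * y1 * y2
assert abs(y1 - 1.0) < 1e-9 and abs(y2 - 0.5) < 1e-9
```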
Figure 1.13: The attractor of the equation in example 1.5.6, with α = 2.01. The white area is the basin of attraction.
is converted into the system of first order equations y(n + 1) = f(y(n)), where

f(x) = (x2, αx2(1 - x1))

If α ≠ 1, f has two fixed points: the origin and (1 - 1/α, 1 - 1/α). The Jacobian matrix of f is

Df(x) = [0  1; -αx2  α(1 - x1)]

Df(0, 0) has eigenvalues 0 and α, hence the null solution of the equation is asymptotically stable when |α| < 1 and unstable when |α| > 1. The Jacobian matrix at the second equilibrium point is

Df(1 - 1/α, 1 - 1/α) = [0  1; -(α - 1)  1]

Its eigenvalues are 1/2 ± (1/2)√(5 - 4α), which form a complex conjugate pair 1/2 ± (i/2)√(4α - 5) when α > 5/4. Both eigenvalues have absolute value < 1 iff 1 < α < 2. For these values of α, the equilibrium solution y ≡ (1 - 1/α, 1 - 1/α) is therefore asymptotically stable. At α = 2 a bifurcation occurs: for values of α > 2, the system no longer has a point attractor, cf. fig. 1.13.
Example 1.5.7. Next, we consider the second order, autonomous difference equation

u(n + 2) = a - u(n)²
The corresponding first order system is y(n + 1) = f(y(n)) with f(x) = (x2, a - x1²). The iterates of f are

f²(x) = (a - x1², a - x2²), f³(x) = (a - x2², a - (a - x1²)²), f⁴(x) = (a - (a - x1²)², a - (a - x2²)²)

When a < -1/4, f has no real fixed points; when a > -1/4, it has two real fixed points: c± = (c±, c±), where c± = -1/2 ± (1/2)√(1 + 4a), and
Df(c±) = [0  1; -2c±  0]

The eigenvalues of Df(c±) are the square roots of -2c±, so their absolute value equals √(2|c±|) = √|1 ∓ √(1 + 4a)|. In particular, the eigenvalues of Df(c⁺) have absolute value √|√(1 + 4a) - 1|, and this is less than 1 when -1/4 < a < 3/4 and greater than 1 for all a > 3/4. Thus the equilibrium solution y ≡ c⁺ is asymptotically stable for all a ∈ (-1/4, 3/4) and unstable for all a > 3/4. f² has two more fixed points, in addition to c⁺ and c⁻, viz. p1 := (c⁺, c⁻) and p2 := (c⁻, c⁺). These correspond to two 2-periodic solutions with orbit {p1, p2}. The Jacobian matrix of f² at p1 and p2 is diagonal, with diagonal entries 1 - √(1 + 4a) and 1 + √(1 + 4a). Hence it follows that both 2-periodic solutions are unstable for all a > -1/4. Finally, we look for 4-periodic solutions. To that end we try to solve the equation

f⁴(x) = (a - (a - x1²)², a - (a - x2²)²) = (x1, x2)

This requires computing the zeroes of the 4th degree function a - (a - x²)² - x. Since all fixed points of f and f² are fixed points of f⁴ as well, all zeroes of a - x² - x are zeroes of a - (a - x²)² - x. This implies that a - x² - x divides a - (a - x²)² - x. Division yields: a - (a - x²)² - x = (a - x² - x)(x² - x + 1 - a). When a > 3/4, the second factor has 2 real zeroes: 1/2 ± (1/2)√(4a - 3). Hence
Figure 1.14: 4-periodic attractor of the map f(x) = (x2, 1 - x1²) and its basin of attraction (the white region). Which points in the white region do not belong to the basin of attraction? Also compare this figure to fig. 5.2

there are in total 16 real, periodic solutions of period 4. For 12 of these, 4 is the minimal period. Thus, there are 3 4-periodic orbits of the form {c, f(c), f²(c), f³(c)}. The corresponding values of c are:

c = ((1 + √(4a - 3))/2, (1 + √(4a - 3))/2), c = ((1 + √(4a - 3))/2, (-1 + √(4a + 1))/2) and c = ((1 + √(4a - 3))/2, (-1 - √(4a + 1))/2)
all x ∈ U \ {x0}.
(i) If V(f(x)) ≤ V(x) for all x ∈ U such that f(x) ∈ U, then x0 is a stable equilibrium point.
(ii) If V(f(x)) < V(x) for all x ∈ U \ {x0} such that f(x) ∈ U, then x0 is an asymptotically stable equilibrium point.
(iii) If V(f(x)) > V(x) for all x ∈ U \ {x0} such that f(x) ∈ U, then x0 is an unstable equilibrium point.
Proof. (i) Let ε be a sufficiently small positive number, so that the closed ball B(x0; ε) := {x ∈ C^k : ‖x - x0‖ ≤ ε}, where ‖·‖ denotes a vector norm on C^k, is contained in U. Then V(x) > 0 for all x ∈ B(x0; ε) \ {x0}. f is continuous at x0, hence there exists a δ1 ∈ (0, ε) with the property that

‖f(x) - f(x0)‖ ≤ ε for all x ∈ B(x0; δ1)   (1.71)

Let

m := min{V(x) : δ1 ≤ ‖x - x0‖ ≤ ε}   (1.72)

Then we have: m > 0 (why?). Since V is continuous and V(x0) = 0, there exists a δ2 ∈ (0, δ1), such that

V(x) < m for all x ∈ B(x0; δ2)

We want to prove that f^n(x) ∈ B(x0; δ1) for every n ∈ N and every x ∈ B(x0; δ2), as this implies

‖f^n(x) - f^n(x0)‖ = ‖f^n(x) - x0‖ ≤ δ1 < ε for all x ∈ B(x0; δ2) and all n ∈ N

hence the stability of the equilibrium point x0. We give a proof by contradiction: suppose there exist x ∈ B(x0; δ2) and n ∈ N such that f^n(x) ∉ B(x0; δ1). Let n0 denote the smallest value of n with this property. Thus, n0 ≥ 1, as f^0(x) = x ∈ B(x0; δ2) ⊂ B(x0; δ1). Now we have

f^{n0-1}(x) ∈ B(x0; δ1), but f^{n0}(x) ∉ B(x0; δ1)   (1.73)
Thus we have produced a contradiction and hence f n (x) B(x0 ; 1 ) for every
n N and every x B(x0 ; 2 ).
(iii) Choose ε > 0 such that B := B(x0; ε) ⊂ U. We will show that no solution with initial vector x ∈ B \ {x0} stays within B. Again we give an indirect proof. Suppose that f^n(x) ∈ B for all n ∈ N. B being bounded and closed, thus compact, there exists a subsequence {f^{nm}(x)}_{m=0}^∞ of {f^n(x)}_{n=0}^∞ converging to an element of B. Consequently,

lim_{n→∞} V(f^{n+1}(x)) = V(x0)
f(x) = (x1(1 - x2²), x2(1 - x1²))

has a fixed point at the origin. The matrix of the linearized system at (0, 0) in this case is the identity matrix, so Theorem 1.5.2 doesn't provide any information about the stability of the null solution of (1.65). Now define

V(x) = ‖x‖2² = x1² + x2²

Then we have

V(f(x)) = x1²(1 - x2²)² + x2²(1 - x1²)² = x1² + x2² + x1²x2²(x1² + x2² - 4)

Hence it follows that V(f(x)) ≤ V(x), when x1² + x2² ≤ 4. Thus, V is a Lyapunov function for (1.65) on the interior of the circle about the origin with radius 2 (U := B(0; 2) = {(x1, x2) ∈ R² : x1² + x2² < 4}), and, by Theorem 1.5.8, the null solution of this equation is stable. The null solution is not asymptotically stable, as every solution with initial vector (c, 0), c ∈ R, again is an equilibrium solution. So every neighbourhood of the origin contains infinitely many equilibrium points.
Lyapunov's method does not only provide us with a test for the stability of equilibrium solutions (or of periodic solutions). It can also be used to determine the basin of attraction of an asymptotically stable equilibrium point, or a part of that basin. We illustrate this with the following example.
Example 1.5.10. The function f : R² → R², defined by

f(x) = (x2(1 - (1/2)(x1² + x2²)), x1(1 - (1/2)(x1² + x2²)))

again has a fixed point at the origin. Df(0, 0) has eigenvalues 1 and -1, so that Theorem 1.5.2 does not apply. We use the same function V as in the previous example. In this case we have

V(f(x)) - V(x) = (1/4)(x1² + x2²)²(x1² + x2² - 4)

Hence V(f(x)) < V(x) for every x ∈ B(0; 2) \ {0}. Thus, if y is a solution of (1.65), with initial vector y(0) ∈ B(0; 2), then we have, for every n ∈ N,

‖y(n)‖2 < ... < ‖y(1)‖2 < ‖y(0)‖2 < 2

Hence it follows that lim_{n→∞} ‖y(n)‖2 exists. As y is a bounded sequence, it has a subsequence {y(nm)}_{m=1}^∞, with n1 < n2 < ..., converging to an element
hence

lim_{m→∞} ‖y(nm + 1)‖2 = ‖f(x)‖2

Thus, V(f(x)) = V(x) and this implies x = (0, 0). We conclude that lim_{n→∞} ‖y(n)‖2 = 0 for every solution with initial vector in B(0; 2), and hence B(0; 2) is contained in the basin of attraction of (0, 0).
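Both the decrement identity for V and the resulting convergence can be checked numerically:

```python
import math

# f(x) = (x2*(1 - s/2), x1*(1 - s/2)) with s = x1**2 + x2**2, and
# V(x) = s satisfies V(f(x)) - V(x) = (1/4) * s**2 * (s - 4).
def f(x1, x2):
    s = x1 * x1 + x2 * x2
    return x2 * (1 - s / 2), x1 * (1 - s / 2)

V = lambda p: p[0] ** 2 + p[1] ** 2

x = (1.2, -0.9)                        # a point inside B(0, 2)
s = V(x)
assert math.isclose(V(f(*x)) - V(x), 0.25 * s ** 2 * (s - 4))

for _ in range(500):                   # the orbit creeps towards the origin
    x = f(*x)
assert V(x) < 0.01                     # (the decay is slow, roughly like 1/n)
```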
Theorem 1.5.11. Let f be a continuous vector function on Rk or Ck , with
fixed point x0 . Suppose there exists a positive invariant neighbourhood U of
x0, such that, for every x ∈ U, the sequence {f^n(x)}_{n=0}^∞ is bounded and all its accumulation points belong to U, and a continuous function V : U → R with the property that V(x0) = 0, V(x) > 0 and V(f(x)) < V(x) for all x ∈ U \ {x0}. Then U is contained in the basin of attraction of {x0}.
Proof. Suppose that y is a solution of (1.65) with initial vector y(0) ∈ U \ {x0}. Then y is bounded and y(n) = f^n(y(0)) ∈ U for all n ∈ N. Now, 0 ≤ V(y(n + 1)) < V(y(n)) for all n ∈ N, so lim_{n→∞} V(y(n)) exists. Moreover, if {y(nm)}_{m=1}^∞, with n1 < n2 < ..., is a convergent subsequence, with limit x, then x ∈ U. Due to the continuity of V, lim_{m→∞} V(y(nm)) = V(x), hence lim_{n→∞} V(y(n)) = V(x). From the continuity of f it follows that lim_{m→∞} f(y(nm)) = f(x), hence lim_{m→∞} V(f(y(nm))) = V(f(x)). On the other hand, lim_{m→∞} V(f(y(nm))) = lim_{m→∞} V(y(nm + 1)) = V(x). Thus, V(f(x)) = V(x) and this implies that x = x0. The above argument shows that every convergent subsequence of y has limit x0 and, since y is bounded, the sequence as a whole converges to x0.
Remark 1.5.12. If U is positive invariant and bounded, then, for every x ∈ U, the sequence {f^n(x)}_{n=0}^∞ is bounded. If U is positive invariant and closed, then, for every x ∈ U, all accumulation points of the sequence {f^n(x)}_{n=0}^∞ belong to U. If U is positive invariant and compact, then, for every x ∈ U, both conditions on {f^n(x)}_{n=0}^∞ in Theorem 1.5.11 are automatically fulfilled.
1.5.1
Exercises
f(x) = (x2(1 - x1²), x1(1 - x2²)),  x ∈ R²

f(x) = (x2/(1 + x1²), x1/(1 + x2²)),  x ∈ R²
9. Prove that the basin of attraction of the fixed point of the map in example
1.5.10 is B(0; 2).
10. The map f : R2 R2 is defined by
f (x1 , x2 ) = (x21 x22 , 2x1 x2 )
a. Determine all equilibrium solutions of the system y(n + 1) = f (y(n))
and examine their stability.
b. Prove that the basin of attraction of {(0, 0)} is the set {(x1 , x2 ) R2 :
x21 + x22 < 1}.
11. a. Convert the scalar difference equation
y(n + 2) = y(n + 1)2 y(n) y(n)
into a system of first order difference equations.
b. Determine all equilibrium solutions of the system.
c. Prove that any solution starting from a point on one of the coordinate
axes is periodic, of period 4.
d. Using Lyapunov's direct method, prove that the null solution is neutrally stable.
1.6 Linear difference equations with constant coefficients

The kth order, linear difference equation

y(n + k) + a_{k-1} y(n + k - 1) + ... + a_0 y(n) = b(n),  n ∈ N   (1.74)

with constant coefficients a_0, ..., a_{k-1}, can be converted into a system of first order equations, by means of the transformation described in 1.4. Let u denote the k-dimensional vector, defined by

uj(n) = y(n + j - 1),  j = 1, ..., k

Then the kth order equation (1.74) is equivalent to the first order vectorial equation
u(n + 1) = [ 0     1     0    ...  0
             0     0     1    ...  0
             .     .     .    ...  .
             0     0     0    ...  1
             -a_0  -a_1  -a_2 ...  -a_{k-1} ] u(n) + (0, 0, ..., 0, b(n))^T   (1.75)
1.6.1 The homogeneous equation

The behaviour of the solutions of the homogeneous, vectorial equation associated with (1.75) is, to a great extent, determined by the Jordan normal form of A. We begin by computing the eigenvalues of A. These are the zeroes of the characteristic polynomial of A, i.e. the solutions of the equation

det(A - λI) = 0

The above equation is known as the characteristic equation of (1.74).
Theorem 1.6.1. The eigenvalues of the companion matrix A of the kth order, linear difference equation (1.74) are the solutions of the characteristic equation

λ^k + a_{k-1} λ^{k-1} + ... + a_0 = 0

Proof. For every positive integer j ≤ k, we define a function Pj as the determinant of the lower-right j × j submatrix of A - λI:

Pj(λ) = det [ -λ        1          0    ...  0
              0         -λ         1    ...  0
              .         .          .    ...  1
              -a_{k-j}  -a_{k-j+1} ...  ...  -a_{k-1} - λ ],  j = 1, ..., k

We claim that Pj(λ) = (-1)^j (λ^j + a_{k-1} λ^{j-1} + ... + a_{k-j}). For j = 1 we have: P1(λ) = -a_{k-1} - λ. Now, let j > 1 and assume the above equality holds for j - 1. Expanding the determinant along the first column, we get:

Pj(λ) = -λ · Pj-1(λ) + (-1)^{j+1} (-a_{k-j}) · det [ 1   0  ...  0
                                                     -λ  1  ...  0
                                                     .   .  ...  .
                                                     0   ... -λ  1 ]

The first determinant on the right-hand side equals Pj-1(λ) and the second one, being the determinant of a lower triangular matrix with diagonal entries equal to 1, has the value 1. Thus, we find

Pj(λ) = -λ Pj-1(λ) + (-1)^j a_{k-j}

Using the inductive hypothesis we obtain

Pj(λ) = -λ (-1)^{j-1} (λ^{j-1} + a_{k-1} λ^{j-2} + ... + a_{k-j+1}) + (-1)^j a_{k-j}
      = (-1)^j (λ^j + a_{k-1} λ^{j-1} + ... + a_{k-j})

Hence, by the principle of mathematical induction, the equality holds for j = 1, ..., k. In particular,

Pk(λ) = (-1)^k (λ^k + a_{k-1} λ^{k-1} + ... + a_0)

and this completes the proof of the theorem.
Equation (1.74) can be written in the more compact form

Ly = b

where L denotes the kth order linear difference operator

L = σ^k + a_{k-1} σ^{k-1} + ... + a_0

(σ denoting the shift operator: (σy)(n) = y(n + 1)).
n^m λj^n,  0 ≤ m ≤ kj - 1,  j = 1, ..., r

Example. The characteristic equation of y(n + 2) - 4y(n + 1) + 4y(n) = 0, viz. (λ - 2)² = 0, has a double root λ = 2. Thus, the solution space is spanned by the sequences {2^n}_{n=0}^∞ and {n2^n}_{n=0}^∞. The general solution is

y(n) = (c1 + c2 n)2^n,  n ∈ N,  c1, c2 ∈ C
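The double root 2 corresponds to the characteristic polynomial (λ - 2)² = λ² - 4λ + 4, and the general solution can be checked directly:

```python
# y(n) = (c1 + c2*n) * 2**n solves y(n+2) - 4*y(n+1) + 4*y(n) = 0
# for arbitrary constants c1, c2 (integers here, so the check is exact).
c1, c2 = 3, -5
y = lambda n: (c1 + c2 * n) * 2 ** n
for n in range(20):
    assert y(n + 2) - 4 * y(n + 1) + 4 * y(n) == 0
```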
Example. Consider next a third order equation whose characteristic equation has the roots λ1 = 1 and λ2,3 = (3 ± i√3)/6. We have

arg λ2 = arctan(1/√3) = π/6

The absolute value of λ2 and λ3 is the square root of the sum of the squares of the real and imaginary parts, i.e.

|λ2| = |λ3| = √(9/36 + 3/36) = √(1/3) = 1/√3

(In fact, there is a simpler proof of the above identity: from the coefficient of y(n) in the equation it follows that λ1λ2λ3 = 1/3. As λ1 = 1 and λ2λ3 = λ2 λ̄2 = |λ2|² = |λ3|², we have |λ2|² = |λ3|² = 1/3.) Every solution of the equation is a linear combination of the sequences λ1^n, λ2^n and λ3^n. If one is solely interested in real-valued solutions, the last two sequences can be replaced by the real and imaginary parts of λ2^n, i.e. by

3^{-n/2} cos(nπ/6) and 3^{-n/2} sin(nπ/6),

respectively. Thus, every real-valued solution of the equation can be written as follows

y(n) = c1 + c2 3^{-n/2} cos(nπ/6) + c3 3^{-n/2} sin(nπ/6),  n ∈ N,  c1, c2, c3 ∈ R

1.6.2 The inhomogeneous equation
From (1.75) and (1.60) we deduce that the inhomogeneous equation (1.74) has a particular solution y with initial values y(0) = ... = y(k - 1) = 0, such that, for n ≥ 1, y(n) equals the first component of the vector

Σ_{m=0}^{n-1} A^{n-m-1} (0, ..., 0, b(m))^T

i.e.

y(0) = 0, y(n) = Σ_{m=0}^{n-1} (A^{n-m-1})_{1k} b(m)
It is not easy, in general, to extract very precise information about the long
term behaviour of y from the above formula and therefore its use in practical
applications is rather limited. In some special cases, a particular solution of
(1.74) can be found by the so-called annihilator method. This is a systematic procedure to find particular solutions to certain types of inhomogeneous
difference (or differential) equations, using the technique of undetermined
coefficients. It can be applied in the case that the function b is itself a
solution of a homogeneous linear difference equation with constant
coefficients, that is, whenever b can be written as a linear combination of
sequences of the form n^m λ^n. Then there exists an lth order linear difference operator with constant coefficients,

L0 = σ^l + Σ_{h=0}^{l-1} b_h σ^h

such that

L0 b = 0

(L0 annihilates b.) Then Ly = b implies

L0 Ly = L0 b = 0

L0 L is a linear difference operator with constant coefficients, of order k + l:

L0 L = σ^{l+k} + Σ_{h=0}^{l-1} b_h σ^{h+k} + Σ_{j=0}^{k-1} a_j σ^{j+l} + Σ_{h=0}^{l-1} Σ_{j=0}^{k-1} b_h a_j σ^{h+j}
or, equivalently,

L0 y := (σ - 3)y = 0

The left-hand side can be written as follows

Ly := (σ² - 4σ + 4)y = (σ - 2)²y

Hence, every solution of the inhomogeneous equation also satisfies the (third order) homogeneous equation

L0 Ly = (σ - 3)(σ - 2)²y = 0

with characteristic equation

(λ - 3)(λ - 2)² = 0

This implies that y must have the following form

y(n) = c1 2^n + c2 n2^n + c3 3^n,  n ∈ N

where c1, c2, c3 ∈ C (or R). Note that c1 2^n + c2 n2^n is the general solution of the homogeneous equation Ly = 0. In order to find a particular solution of the inhomogeneous equation, we set c1 = c2 = 0 and compute c3 by the method of undetermined coefficients. Inserting y(n) = c3 3^n into the equation, we get

c3 3^{n+2} - 4 c3 3^{n+1} + 4 c3 3^n = 3^n for all n ∈ N

and hence we deduce that c3 = 1. Thus, the general solution of the inhomogeneous equation is

y(n) = (c1 + c2 n)2^n + 3^n,  n ∈ N,  c1, c2 ∈ C
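The outcome of the annihilator computation is easy to verify:

```python
# y(n) = (c1 + c2*n)*2**n + 3**n solves y(n+2) - 4*y(n+1) + 4*y(n) = 3**n:
# the homogeneous part is annihilated by L, while
# 3**(n+2) - 4*3**(n+1) + 4*3**n = (9 - 12 + 4) * 3**n = 3**n.
c1, c2 = 7, -2
y = lambda n: (c1 + c2 * n) * 2 ** n + 3 ** n
for n in range(20):
    assert y(n + 2) - 4 * y(n + 1) + 4 * y(n) == 3 ** n
```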
Example 1.6.6.

y(n + 1) - y(n) = n²,  n ∈ N

The right-hand side can be written as: n² · 1^n and thus satisfies a third order, homogeneous linear difference equation, with characteristic equation (λ - 1)³ = 0, viz.

L0 y := (σ - 1)³y = 0
1.6.3 Stability of the solutions

Let y and ỹ be two solutions of (1.74). Then there are two corresponding solutions u and ũ of (1.75), such that

u(0) = (y(0), ..., y(k - 1)) and ũ(0) = (ỹ(0), ..., ỹ(k - 1))

hence ‖u(0) - ũ(0)‖ is small iff the initial values of y and ỹ are close, and vice versa. From Theorem 1.4.17 we deduce the following result.
Theorem 1.6.7. Suppose that the characteristic equation of (1.74) has r solutions λj, each with multiplicity kj, j ∈ {1, ..., r}. The solutions of (1.74) are asymptotically stable iff |λj| < 1 for all j. The solutions are stable iff |λj| ≤ 1 for all j and, in addition, kj = 1 for all values of j such that |λj| = 1. In all other cases the solutions are unstable.
Example 1.6.8. Samuelson's multiplier-accelerator model.
This model is described by the following equations.

Ct = cYt-1
It = a(Yt-1 - Yt-2) + A
Yt = Ct + It

Here, A denotes autonomous investments, assumed independent of the time t, c is a number between 0 and 1 and the accelerator a is a positive constant. In this model, the income Yt satisfies the second order, linear difference equation

Yt+2 - (c + a)Yt+1 + aYt = A,  t ∈ N   (1.76)

This equation has the equilibrium solution

Yt = A/(1 - c),  t ∈ N

and characteristic equation

λ² - (c + a)λ + a = 0   (1.77)
We distinguish 3 cases.
1). The discriminant D of (1.77) is negative. This is the case when

(c + a)² - 4a = c² + 2ca + a² - 4a < 0

or, equivalently,

(a + c - 2)² < 4(1 - c)

i.e.

(1 - √(1 - c))² < a < (1 + √(1 - c))²

Then the characteristic equation (1.77) has two distinct, complex conjugated solutions λ+ and λ-, given by

λ± = (1/2)(c + a ± i√(4a - (c + a)²))

Furthermore, we have

|λ±|² = λ+ λ- = a

The general, real-valued solution of (1.76) has the form

Yt = A/(1 - c) + a^{t/2}(c1 cos ωt + c2 sin ωt),  t ∈ N,  c1, c2 ∈ R

where

ω = arg λ+ = arccos((c + a)/(2√a))
2). D = 0, i.e.

a = (1 + √(1 - c))² or a = (1 - √(1 - c))²

The characteristic equation (1.77) has the unique solution

λ = (1/2)(c + a) = √a

with multiplicity 2. The general (real-valued) solution of (1.76) has the form

Yt = A/(1 - c) + a^{t/2}(c1 + c2 t),  c1, c2 ∈ R

The solutions are asymptotically stable iff a < 1. If, on the other hand, a ≥ 1, all solutions are unstable.
3). D > 0, i.e.

a < (1 - √(1 - c))² or a > (1 + √(1 - c))²   (1.79)
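Case 1 can be illustrated numerically; the parameter values below are an assumption for the sketch (c = a = 1/2 gives D = -1 < 0 and |λ±|² = a < 1, so income oscillates towards the equilibrium A/(1 - c)):

```python
# Samuelson model: Y(t+2) = (c + a)*Y(t+1) - a*Y(t) + A, cf. (1.76).
c, a, A = 0.5, 0.5, 10.0
Y = [30.0, 25.0]                        # arbitrary initial incomes Y_0, Y_1
for t in range(300):
    Y.append((c + a) * Y[t + 1] - a * Y[t] + A)
assert abs(Y[-1] - A / (1 - c)) < 1e-9  # converges to the equilibrium 20
```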
1.6.4
Exercises
7. Set up a second order, linear difference equation, with constant coefficients, for the sum S(n) of the first n terms of an arithmetic-geometric
progression, with common ratio r and common difference v (cf. Example
1.3.13). Let S(0) = 0 and determine S(n) as the solution of an initial
value problem.
8. Determine the solution of the initial value problem
y(n + 2) - 2y(n + 1) + 2y(n) = 3, y(0) = 5, y(1) = 6
y(0) = 2, y(1) = 2
b. Convert the above initial value problem into an initial value problem
for a system of first order difference equations and solve it.
10. Determine the general solution of the equation
y(n + 2) - 4y(n + 1) + 3y(n) = 2n², n ∈ N
n2
nN
b. Convert the above equation into a system of first order equations and
determine the general solution of this system. Compare the result to
that of part a.
13. Determine the general solution of the equation
y(n + 2) - 4y(n + 1) + 4y(n) = 2ⁿ, n ∈ N
nN
nN
nN
Chapter 2
Differential equations
An ordinary differential equation, to be denoted by ODE, is an equation of
the form
F(t, y(t), y⁽¹⁾(t), ..., y⁽ᵏ⁾(t)) = 0   (2.1)
Here, k is a positive integer (the order of the ODE), t a real- or complex-valued independent variable, and F a function of k + 2 variables, defined on a subset D of R^{k+2} or C^{k+2}. y is an unknown function of t (the dependent variable) and y⁽ʲ⁾ denotes the jth order derivative of y with respect to t (we usually write y′ and y″ instead of y⁽¹⁾ and y⁽²⁾; other authors use ẏ and ÿ, respectively). A given function y, defined on a subset T of R or C, is a solution of (2.1) if, for every t ∈ T, we have (t, y(t), y⁽¹⁾(t), ..., y⁽ᵏ⁾(t)) ∈ D and F(t, y(t), y⁽¹⁾(t), ..., y⁽ᵏ⁾(t)) = 0.
In many applications, the independent variable t is the time. From now on, t will be a real variable and T a (finite or infinite) interval of R. Often, the t-dependence of y is omitted and (2.1) is written in the form
F(t, y, y⁽¹⁾, ..., y⁽ᵏ⁾) = 0
The ODE is called autonomous when F doesn't depend explicitly on t, so that the equation can be written in the form
F(y(t), y⁽¹⁾(t), ..., y⁽ᵏ⁾(t)) = 0 or F(y, y⁽¹⁾, ..., y⁽ᵏ⁾) = 0
F can be a scalar or a complex-valued function. In this section we will only
consider the first case.
Example 2.0.10. The logistic differential equation
y′ = y(a - by)
2.1 First order scalar differential equations (supplement to S&B 24.2)
In a first order differential equation only the unknown, say y, itself and its
first derivative y 0 can occur. In this section, we discuss a number of standard
methods to solve simple, first order differential equations.
Let f, g and G be continuous functions. We consider the following 4 types
of differential equations:
1. y′ = g
2. y′ = f · G(y) (separable differential equation)
3. y′ = f y (homogeneous linear differential equation)
4. y′ = f y + g (inhomogeneous linear differential equation if g ≢ 0)
Obviously, type 1 is a special case of type 4 (with f ≡ 0), and type 3 is both a special case of type 2, with G(y) = y, and of type 4, with g ≡ 0.
Example 2.1.1. A neoclassical growth model in continuous time:
Y(t) = Q(K(t), L(t))
K′(t) = sY(t)
L′(t) = nL(t)
Setting k := K/L and
q(k) := Q(K, L)/L = Q(K/L, 1) = Q(k, 1)
(assuming Q homogeneous of degree one), we find that k must satisfy the first order separable differential equation (type 2):
k′(t) = s q(k(t)) - n k(t)   (2.2)
Here, we can choose, for example, f ≡ 1 and G(k) = s q(k) - n k.
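Equation (2.2) is easy to explore numerically. A minimal Euler-method sketch, assuming the illustrative Cobb-Douglas form q(k) = k^α (not specified at this point in the text):

```python
# Euler discretization of k'(t) = s*q(k) - n*k with the illustrative
# choice q(k) = k**alpha (Cobb-Douglas per-capita production).
# The steady state solves s*k**alpha = n*k, i.e. k* = (s/n)**(1/(1-alpha)).
def solve_k(s, n, alpha, k0, dt=0.01, t_end=600.0):
    k = k0
    for _ in range(int(t_end / dt)):
        k += dt * (s * k**alpha - n * k)
    return k

s, n, alpha = 0.3, 0.05, 0.5
k_star = (s / n) ** (1 / (1 - alpha))   # = 36
print(solve_k(s, n, alpha, k0=1.0), k_star)
```

Whatever the (positive) initial value, the capital-labour ratio approaches the steady state k*.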
2.1.1 Solving equations of type 1
Consider the equation
y′ = g   (2.3)
Integrating both sides from t0 to t gives
y(t) = ∫_{t0}^{t} y′(s) ds + y(t0) = ∫_{t0}^{t} g(s) ds + y(t0)
so the general solution has the form
y(t) = ∫_{t0}^{t} g(s) ds + C
For example, the general solution of y′(t) = 4e^{2t} is y(t) = ∫ 4e^{2t} dt = 2e^{2t} + C.
Example 2.1.4. The differential equation y′(t) = 6t e^{t²} has the general solution y(t) = ∫ 6t e^{t²} dt = 3e^{t²} + C, C constant. The unique solution with the property that y(1) = 7 is given by y(t) = 3e^{t²} + 7 - 3e.
Example 2.1.5. The general solution of the differential equation
y′(t) = (t - 1)^{-1} + t^{-2}, t ∈ [2, ∞)
is
y(t) = log(t - 1) - t^{-1} + C, C constant.
The solution with initial value y(2) = 1.5 is y(t) = log(t - 1) - t^{-1} + 2.
2.1.2 Solving equations of type 2
Consider a separable equation, written in the notation
y′ = f · G(y)   (2.4)
Setting p := 1/G, separation of variables amounts to rewriting (2.4) as
∫ p(y(t)) y′(t) dt = ∫ f(t) dt
Example 2.1.9. Consider the initial value problem for the logistic differential equation
y′(t) = a(1 - y(t)/k) y(t), y(0) = y0,   (2.5)
where the parameters a and k are positive and y0 ∈ R. Check that the constant functions y ≡ 0 and y ≡ k satisfy the equation. Applying the method of separation of variables, we get
∫ y′(t) / ((1 - y(t)/k) y(t)) dt = ∫ a dt.
From example 24.11 in Simon and Blume (pp 644-645) we conclude that
y(t) = k y0 / (y0 + (k - y0) e^{-at}).
Note that this expression also represents the solution of the initial value problem when y0 = 0 or y0 = k, even though 0 and k are zeroes of G (the right-hand side of (2.5)). So it represents the solution to the initial value problem for all values of y0. (Determine the domain of the solution y with initial value y0 < 0.)
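The closed-form solution can be checked against a direct numerical integration of (2.5); a small sketch (parameter values illustrative):

```python
# Check the closed-form logistic solution y(t) = k*y0 / (y0 + (k - y0)*exp(-a*t))
# against a Runge-Kutta integration of y' = a*(1 - y/k)*y.
import math

def exact(t, a, k, y0):
    return k * y0 / (y0 + (k - y0) * math.exp(-a * t))

def rk4(a, k, y0, t_end, dt=1e-3):
    f = lambda y: a * (1 - y / k) * y
    y = y0
    for _ in range(int(t_end / dt)):
        k1 = f(y); k2 = f(y + dt*k1/2); k3 = f(y + dt*k2/2); k4 = f(y + dt*k3)
        y += dt * (k1 + 2*k2 + 2*k3 + k4) / 6
    return y

a, k, y0 = 1.0, 2.0, 0.1
print(exact(3.0, a, k, y0), rk4(a, k, y0, 3.0))
```

The two values agree to high precision, as they should.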
2.1.3 Solving equations of type 3
Consider the homogeneous linear equation
y′ = f y   (2.6)
Let F be an antiderivative of f. On any interval where y does not vanish, separation of variables gives log |y(t)| = ∫ f(t) dt = F(t) + C̃, hence |y(t)| = e^{F(t)+C̃}, and the general solution is y(t) = C e^{F(t)}.
2.1.4 Solving equations of type 4
Consider the inhomogeneous linear equation
y′ = f y + g   (2.7)
and let F again denote an antiderivative of f. The general solution of (2.7) is
y(t) = e^{F(t)} ( C + ∫ e^{-F(τ)} g(τ) dτ )
Moreover, the solution of (2.7) with initial value y(t0) = y0 can be represented as follows
y(t) = e^{F(t)} { e^{-F(t0)} y0 + ∫_{t0}^{t} e^{-F(τ)} g(τ) dτ }   (2.8)
Remark 2.1.16. In the above expressions e^{F(t)} can be replaced by any nonvanishing solution of the homogeneous equation (2.6). (Verify this.)
Example 2.1.17. Consider the initial value problem
y′(t) = 3y(t) + 8t, y(0) = 2
We begin by solving the associated homogeneous equation y′(t) = 3y(t). As we have seen in example 2.1.11, the general solution is a e^{3t}. By (2.8), the solution to the initial value problem is given by the expression
y(t) = e^{3t} ( 2 + ∫_0^t e^{-3τ} 8τ dτ ) = (26/9) e^{3t} - (8/3) t - 8/9
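The equality of the integral representation (2.8) and the elementary antiderivative can be verified numerically; a sketch using composite Simpson quadrature:

```python
# Evaluate y(t) = e^{3t} (2 + \int_0^t e^{-3s} 8s ds) from (2.8) by Simpson
# quadrature, and compare with the closed form
# y(t) = (26/9) e^{3t} - (8/3) t - 8/9 obtained by integration by parts.
import math

def y_quadrature(t, n=2000):
    h = t / n
    g = lambda s: math.exp(-3 * s) * 8 * s
    total = 0.0
    for i in range(n):                      # Simpson on each subinterval
        s0, s1, s2 = i * h, (i + 0.5) * h, (i + 1) * h
        total += h / 6 * (g(s0) + 4 * g(s1) + g(s2))
    return math.exp(3 * t) * (2 + total)

def y_closed(t):
    return 26 / 9 * math.exp(3 * t) - 8 / 3 * t - 8 / 9

print(y_quadrature(1.0), y_closed(1.0))
```

Both expressions also satisfy y(0) = 2, as required.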
2.1.5 Exercises
2.2 Systems of first order differential equations
Every scalar, kth order ODE can be converted into a system of k first order ODEs, by means of a transformation resembling the one we discussed in the case of a kth order difference equation. Consider the general ODE
F(t, y, y⁽¹⁾, ..., y⁽ᵏ⁾) = 0   (2.9)
Introducing the functions
y1 := y, y2 := y′, ..., yk := y^{(k-1)}   (2.10)
we obtain the relations
yj′ = y_{j+1}, j = 1, ..., k - 1   (2.11)
In particular, consider the kth order, linear ODE
ak(t) y⁽ᵏ⁾(t) + a_{k-1}(t) y^{(k-1)}(t) + ... + a0(t) y(t) = b(t), t ∈ T,   (2.13)
where the function ak does not vanish on T. In this case,
yk′(t) = -(a0(t)/ak(t)) y1(t) - ... - (a_{k-1}(t)/ak(t)) yk(t) + b(t)/ak(t)
This system of first order differential equations can also be written in the form of a first order, linear, vectorial ODE:
d/dt (y1(t), ..., yk(t))ᵀ = M(t) (y1(t), ..., yk(t))ᵀ + (0, ..., 0, b(t)/ak(t))ᵀ   (2.14)
where M(t) is the k × k matrix with rows
(0, 1, 0, ..., 0), (0, 0, 1, ..., 0), ..., (0, 0, ..., 0, 1), (-a0(t)/ak(t), -a1(t)/ak(t), ..., -a_{k-1}(t)/ak(t))
If the function y is a solution of the kth order, linear ODE (2.13), then the vector function (y1, ..., yk), defined by (2.10), is a solution of the vectorial ODE (2.14) and, conversely, if the vector function (y1, ..., yk) satisfies (2.14), then y1 is a solution of the kth order ODE (2.13) and yj = y1^{(j-1)} for j = 2, ..., k.
From now on, we consider systems of k first order ODEs, of the form
y1′(t) = f1(t, y1(t), ..., yk(t))
y2′(t) = f2(t, y1(t), ..., yk(t))
...
yk′(t) = fk(t, y1(t), ..., yk(t)),  t ∈ [t0, ∞)   (2.15)
2.2.1 Linear systems
Consider a system of k first order, linear ODEs:
d/dt (y1(t), ..., yk(t))ᵀ = A(t) (y1(t), ..., yk(t))ᵀ + (b1(t), ..., bk(t))ᵀ,  A(t) = (aij(t))_{i,j=1}^{k}   (2.16)
or, briefly,
y′(t) = A(t)y(t) + b(t)
Here, y and b denote k-dimensional vector functions and A is a k × k matrix function. In what follows we will not discriminate between systems of ODEs and vectorial ODEs. (2.16) is called a system of coupled ODEs, if there exists at least one pair (i, j), with i ≠ j, such that the function aij does
not vanish identically. In that case, the behaviour of yi depends on that of
yj . When the ODEs are not coupled, the corresponding matrix A(t) is a
diagonal matrix for every value of t. The vectorial ODE
y 0 (t) = A(t)y(t) + b(t), t T
is called homogeneous iff b ≡ 0 (i.e. b(t) = 0 for all t ∈ T). A homogeneous, linear, vectorial ODE has the solution y ≡ 0, and every linear vectorial
2.2.2 Homogeneous systems with constant coefficients
To solve the scalar initial value problem
y′(t) = ay(t), t ∈ [t0, ∞), y(t0) = y0
where y0 is a given real (or complex) number, we have to choose the parameter c in the general solution ce^{at} such that ce^{at0} = y0. Hence, this solution is
y(t) = e^{a(t-t0)} y0
If A = diag{λ1, ..., λk} is a diagonal matrix, the system y′(t) = Ay(t) consists of k uncoupled scalar equations, and its general solution is
y(t) = (c1 e^{λ1 t}, ..., ck e^{λk t})ᵀ = diag{e^{λ1 t}, ..., e^{λk t}} (c1, ..., ck)ᵀ   (2.17)
We will prove that the general solution of the vectorial ODE y′(t) = Ay(t) is
represented by (2.17), even if A is not a diagonal matrix. For this purpose,
we need the following lemma.
Lemma 2.2.7. For every k-dimensional vector c ∈ Cᵏ, the vector function y defined by y(t) = e^{tA} c is a solution of the vectorial ODE
y′(t) = Ay(t)
From the above lemma it is easily deduced that
d/dt e^{tA} = A e^{tA}
(Verify this.) Thus, the matrix function e^{tA} satisfies the matrix ODE
Y′(t) = A Y(t)
Theorem 2.2.8. (i) The general solution of the equation
y′(t) = Ay(t)   (2.18)
is given by
y(t) = e^{tA} c, c ∈ Cᵏ   (2.19)
(ii) The solution of (2.18) with initial value y(t0) = y0, where y0 ∈ Cᵏ, is
y(t) = e^{(t-t0)A} y0
Proof. (i) According to lemma 2.2.7, every vector function of the form (2.19) is a solution of equation (2.18). It remains to be proved that all solutions are of this form. So suppose that y is a k-dimensional vector function, satisfying (2.18). We have to show the existence of a vector c ∈ Cᵏ, such that y = e^{tA} c. Noting that
d/dt (e^{-tA} y(t)) = -A e^{-tA} y(t) + e^{-tA} y′(t) = -A e^{-tA} y(t) + e^{-tA} A y(t) = 0
we conclude that the vector function e^{-tA} y(t) is a constant (vector), which we denote by c. Then we have
e^{tA} e^{-tA} y(t) = y(t) = e^{tA} c
(ii) In order to find the solution of the initial value problem, we need to choose the vector c in such a manner that
y(t0) = e^{t0 A} c = y0
Left multiplication with the matrix e^{-t0 A} yields
e^{-t0 A} y(t0) = c = e^{-t0 A} y0
Inserting this into the general expression for y, we find
y(t) = e^{tA} e^{-t0 A} y0 = e^{(t-t0)A} y0
(The last equality is due to the fact that the matrices tA and -t0 A commute, cf. Theorem A.3.1.)
Theorem 2.2.9. The solutions of the k-dimensional vectorial ODE (2.18) form a k-dimensional linear space. The vector functions yi, i = 1, ..., k, defined by
yi(t) = e^{tA} ci, ci ∈ Cᵏ
constitute a basis of the solution space if and only if the vectors c1, ..., ck constitute a basis of Cᵏ.
Proof. e^{tA} is a nonsingular matrix for every value of t. Hence, the linear combination of vector functions
Σ_{i=1}^{k} αi yi = e^{tA} Σ_{i=1}^{k} αi ci,  αi ∈ C,
vanishes identically (i.e. equals the null vector for all values of t) if and only if
Σ_{i=1}^{k} αi ci = 0
i.e. precisely then, when the vectors c1, ..., ck are linearly dependent. Furthermore, according to the previous theorem, every solution y of (2.18) has the form y = e^{tA} c, with c ∈ Cᵏ. If the vectors c1, ..., ck form a basis of Cᵏ, then c is a linear combination of c1, ..., ck and thus y is a linear combination of the vector functions e^{tA} c1, ..., e^{tA} ck. Hence the statements of the theorem follow easily.
A basis of the solution space of (2.18) can be obtained by choosing for c1, ..., ck the standard basis of Cᵏ, i.e.
yi(t) = e^{tA} ei, i = 1, ..., k
The solution yi is the ith column vector of the matrix function e^{tA}. If A has k linearly independent eigenvectors s1, ..., sk, with eigenvalues λ1, ..., λk, respectively, then also the vector functions yi(t) := e^{tA} si, i = 1, ..., k, span the solution space. Since, for every value of t, si is an eigenvector of the matrix e^{tA}, with eigenvalue e^{tλi}, equation (2.18) in this case has a second basis of solutions, of the form
yi(t) = e^{λi t} si, i = 1, ..., k
The advantage of this basis over the previous one is that the t-dependence of the individual basis elements is immediately clear, without the need to compute the entries of e^{tA}. The first basis is sometimes convenient in solving initial value problems.
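For a diagonalizable A, both constructions are easy to carry out numerically. A sketch with an illustrative 2 × 2 matrix (chosen here for its real eigenvalues, not prescribed by the text):

```python
# For a diagonalizable A, e^{tA} = S diag(e^{t*lam_i}) S^{-1}, and the
# eigen-solutions e^{lam_i t} s_i satisfy y' = A y since A s_i = lam_i s_i.
import numpy as np

A = np.array([[4.0, 1.0], [-2.0, 1.0]])   # illustrative matrix
lam, S = np.linalg.eig(A)

def expm_t(t):
    return S @ np.diag(np.exp(lam * t)) @ np.linalg.inv(S)

t = 0.7
for i in range(2):
    y = np.exp(lam[i] * t) * S[:, i]      # eigen-solution at time t
    assert np.allclose(A @ y, lam[i] * y)  # A y = lam y (= y' / (d/dt factor))
print(expm_t(t))
```

A quick consistency check: this construction satisfies the semigroup property e^{sA} e^{tA} = e^{(s+t)A}.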
Example 2.2.10. The system of first order, homogeneous ODEs
y1′(t) = 4y1(t) + y2(t)
y2′(t) = -2y1(t) + y2(t)
can be written in the form of the 2-dimensional, vectorial ODE:
dy/dt = [[4, 1], [-2, 1]] y   (2.20)
The matrix A in this case has an eigenvector (1, -2), with eigenvalue 2, and an eigenvector (1, -1), with eigenvalue 3. The vector functions e^{2t}(1, -2) and e^{3t}(1, -1) span the solution space of (2.20) and the general solution is
y(t) = c1 e^{2t} (1, -2)ᵀ + c2 e^{3t} (1, -1)ᵀ = [[e^{2t}, e^{3t}], [-2e^{2t}, -e^{3t}]] (c1, c2)ᵀ,  (c1, c2) ∈ C²
The solution of the initial value problem with y(0) = (1, 0)ᵀ is
y(t) = -e^{2t} (1, -2)ᵀ + 2e^{3t} (1, -1)ᵀ
We can summarize the above procedure for solving the initial value problem as follows. First, write the initial vector (1, 0) as a linear combination of the eigenvectors (1, -2) and (1, -1) of A (and thus of e^{tA}):
(1, 0)ᵀ = -1 · (1, -2)ᵀ + 2 · (1, -1)ᵀ
Second, multiply each vector on the right-hand side with the corresponding eigenvalue of e^{tA}.
A slightly different approach is by using the second part of Theorem 2.2.8:
y(t) = e^{tA} (1, 0)ᵀ
The matrix function e^{tA} can be computed with the aid of the eigenvectors and eigenvalues found above (cf. A.3):
e^{tA} = [[1, 1], [-2, -1]] diag{e^{2t}, e^{3t}} [[1, 1], [-2, -1]]^{-1}
so that
y(t) = e^{tA} (1, 0)ᵀ = (-e^{2t} + 2e^{3t}, 2e^{2t} - 2e^{3t})ᵀ
Any matrix function whose column vectors form a basis of the solution space of (2.18) is called a fundamental matrix of (2.18). Every fundamental matrix is itself a (matrix) solution of the equation. For, if Y is a fundamental matrix of (2.18), with column vectors y1, ..., yk, then
d/dt Y(t) = (y1′(t) ... yk′(t)) = (Ay1(t) ... Ayk(t)) = A (y1(t) ... yk(t)) = A Y(t)
In the above example, the matrix function
e^{tA} [[1, 1], [-2, -1]] = [[1, 1], [-2, -1]] diag{e^{2t}, e^{3t}} = [[e^{2t}, e^{3t}], [-2e^{2t}, -e^{3t}]]
is a fundamental matrix of (2.20). If Y is a fundamental matrix of (2.18), then every solution y can be written in the form
y = Σ_{i=1}^{k} ci yi = Y (c1, ..., ck)ᵀ
Example 2.2.11. Consider the initial value problem
y′(t) = [[-5, 3], [-15, 7]] y(t), y(0) = (1, 1)ᵀ
The matrix A has eigenvalues 1 + 3i and 1 - 3i, with corresponding eigenvectors (1, 2 + i) and (1, 2 - i). With S = [[1, 1], [2 + i, 2 - i]], the initial vector is expressed in the eigenvector basis by
S^{-1} (1, 1)ᵀ = (1/2) (1 + i, 1 - i)ᵀ
Hence
y(t) = (1/2)(1 + i) e^{(1+3i)t} (1, 2 + i)ᵀ + (1/2)(1 - i) e^{(1-3i)t} (1, 2 - i)ᵀ
= Re{ e^{(1+3i)t} (1 + i, 1 + 3i)ᵀ }
= e^t cos 3t (1, 1)ᵀ - e^t sin 3t (1, 3)ᵀ
Alternatively, we can use the second part of Theorem 2.2.8 to compute this solution:
y(t) = e^{tA} y(0) = S diag{e^{(1+3i)t}, e^{(1-3i)t}} S^{-1} (1, 1)ᵀ
= (1/2) e^t ( e^{3it}(1 + i) + e^{-3it}(1 - i), e^{3it}(1 + 3i) + e^{-3it}(1 - 3i) )ᵀ
= e^t (cos 3t - sin 3t, cos 3t - 3 sin 3t)ᵀ
(Verify that this vector function does indeed satisfy all requirements.) Note that it is unnecessary, and somewhat awkward, to compute the matrix function e^{tA}.
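The requested verification can be done by finite differences (taking A = [[-5, 3], [-15, 7]], the sign pattern assumed in this reconstruction):

```python
# Check that y(t) = e^t (cos 3t - sin 3t, cos 3t - 3 sin 3t) satisfies
# y' = A y and y(0) = (1, 1) for A = [[-5, 3], [-15, 7]].
import numpy as np

A = np.array([[-5.0, 3.0], [-15.0, 7.0]])

def y(t):
    return np.exp(t) * np.array([np.cos(3*t) - np.sin(3*t),
                                 np.cos(3*t) - 3*np.sin(3*t)])

assert np.allclose(y(0.0), [1.0, 1.0])        # initial condition
h = 1e-6
for t in [0.0, 0.5, 1.3]:
    dy = (y(t + h) - y(t - h)) / (2 * h)      # central difference ~ y'(t)
    assert np.allclose(dy, A @ y(t), atol=1e-4)
print("formula satisfies y' = A y and y(0) = (1, 1)")
```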
If A is nondiagonalizable, there is no basis of Cᵏ consisting of eigenvectors of A, but there always exists a basis {s1, ..., sk} of generalized eigenvectors. The solution space of (2.18) then again is spanned by the vector functions
yi(t) = e^{tA} si, i = 1, ..., k
However, only for those values of i for which si is a genuine eigenvector do we have yi(t) = e^{λi t} si.
Example 2.2.12. Consider the equation
y′(t) = [[0, 1], [0, 0]] y(t)
with general solution
y(t) = c1 (1, 0)ᵀ + c2 (t, 1)ᵀ
The matrix A has an eigenvector s1 := (1, 0), with eigenvalue 0, and a generalized eigenvector s2 := (1, 1), of degree 2, with respect to this eigenvalue. The matrix function e^{tA} is easily computed by means of (45) in A.3, since Aⁿ vanishes for all n > 1, so
e^{tA} = I + tA = [[1, t], [0, 1]]
The column vectors of this fundamental matrix form a basis of the solution space of the equation. Another basis consists of the vector functions
e^{tA} s1 = (1, 0)ᵀ and e^{tA} s2 = (1 + t, 1)ᵀ
Verify that both bases yield the general solution determined above.
In the general case, the matrix function e^{tA} can be computed as follows. Suppose that, by a change of coordinates, A is transformed into a Jordan normal form:
S^{-1} A S = J = Λ + N
where Λ = diag{λ1, ..., λk} and N is a nilpotent matrix of a particular form, which commutes with Λ (cf. A.1). Then, due to Theorem A.3.1,
S^{-1} e^{tA} S = e^{tJ} = e^{tΛ} e^{tN} = diag{e^{λ1 t}, ..., e^{λk t}} Σ_{n=0}^{k-1} (tⁿ Nⁿ)/n!   (2.21)
In particular, for a single Jordan block Jk(λ) (cf. (42) in A.1),
e^{t Jk(λ)} = e^{λt} U(t)   (2.22)
where U(t) is the upper triangular k × k matrix with (i, j) entry t^{j-i}/(j-i)! for j ≥ i, i.e. rows
(1, t, t²/2!, ..., t^{k-1}/(k-1)!), (0, 1, t, ..., t^{k-2}/(k-2)!), ..., (0, ..., 0, 1, t), (0, ..., 0, 1)
Example 2.2.13. Consider the matrix
A = [[5, 1], [-1, 3]] = [[1, 0], [-1, 1]] [[4, 1], [0, 4]] [[1, 0], [-1, 1]]^{-1}
(cf. the example in A.1). Hence the equation y′(t) = Ay(t) has a fundamental matrix of the form
S e^{tJ} = [[1, 0], [-1, 1]] e^{4t} [[1, t], [0, 1]] = e^{4t} [[1, t], [-1, 1 - t]]
while
e^{tA} = S e^{tJ} S^{-1} = e^{4t} [[1, t], [-1, 1 - t]] [[1, 0], [1, 1]] = e^{4t} [[1 + t, t], [-t, 1 - t]]
The general solution is
y(t) = S e^{tJ} (c1, c2)ᵀ = e^{4t} (c1 + c2 t, -c1 + c2 - c2 t)ᵀ, c1, c2 ∈ C
and the solution of the initial value problem with y(0) = (1, 2)ᵀ is
y(t) = e^{tA} (1, 2)ᵀ = e^{4t} [[1 + t, t], [-t, 1 - t]] (1, 2)ᵀ = e^{4t} (1 + 3t, 2 - 3t)ᵀ
(Verify that this vector function has all the required properties.)
In cases where A ∈ R^{k×k} has eigenvalues with nonzero imaginary part and where one is exclusively interested in real-valued solutions, it is inconvenient to use the fundamental matrix S e^{tJ}. In such cases, the following theorem can be used to construct a real-valued basis of the solution space of (2.18).
Theorem 2.2.14. Suppose A ∈ R^{k×k} has an eigenvalue λ with nonzero imaginary part and eigenvector v. Then the functions y1 and y2 defined by
y1(t) = Re e^{λt} v and y2(t) = Im e^{λt} v
are linearly independent solutions of (2.18).
Example 2.2.15. Consider once more the equation in example 2.2.11. The matrix A in this example has an eigenvector (1, 2 + i) with eigenvalue 1 + 3i. With the aid of Theorem 2.2.14, we obtain the following real-valued basis of the solution space of (2.18):
y1(t) = Re{ e^{(1+3i)t} (1, 2 + i)ᵀ } = e^t (cos 3t, 2 cos 3t - sin 3t)ᵀ
y2(t) = Im{ e^{(1+3i)t} (1, 2 + i)ᵀ } = e^t (sin 3t, 2 sin 3t + cos 3t)ᵀ
The initial vector of example 2.2.11 can be written as
(1, 1)ᵀ = y1(0) - y2(0) = (1, 2)ᵀ - (0, 1)ᵀ
so the solution of the initial value problem is y(t) = y1(t) - y2(t).
Example 2.2.16. Consider a 3 × 3 matrix A with eigenvalues 1, i and -i, and corresponding eigenvectors s1 = (1, 0, 1), s2 = (1, 1 + i, 1) and s3 = s̄2 = (1, 1 - i, 1). The solution space of the equation y′(t) = Ay(t) has a basis formed by e^t s1, e^{it} s2 and e^{-it} s3. By Theorem 2.2.14, the solution space has a real-valued basis {y1, y2, y3}, where
y1(t) = e^t (1, 0, 1)ᵀ,
y2(t) = Re{ e^{it} (1, 1 + i, 1)ᵀ } = (cos t, cos t - sin t, cos t)ᵀ
and
y3(t) = Im{ e^{it} (1, 1 + i, 1)ᵀ } = (sin t, sin t + cos t, sin t)ᵀ
2.2.3 Inhomogeneous systems with constant coefficients
Consider the inhomogeneous equation y′(t) = Ay(t) + b(t), t ∈ T. Multiplying both sides by e^{-tA} and rearranging, we obtain d/dt (e^{-tA} y(t)) = e^{-tA} b(t). The right-hand side being continuous on T, both sides are integrable and, for every t ∈ T, we have
∫_{t0}^{t} d/dτ (e^{-τA} y(τ)) dτ = ∫_{t0}^{t} e^{-τA} b(τ) dτ
hence
e^{-tA} y(t) - e^{-t0 A} y(t0) = ∫_{t0}^{t} e^{-τA} b(τ) dτ, t ∈ T
The solution with initial value y(t0) = y0 is therefore
y(t) = e^{(t-t0)A} y0 + e^{tA} ∫_{t0}^{t} e^{-τA} b(τ) dτ, t ∈ T
More generally, for every c ∈ Cᵏ, the function
y(t) = e^{tA} c + e^{tA} ∫_{t0}^{t} e^{-τA} b(τ) dτ, t ∈ T
satisfies the equation (check this). Hence, the above expression represents
the general solution of the inhomogeneous equation. Note that the second
term on the right-hand side is the solution of the initial value problem with
initial vector 0 (the nullvector of Ck ). Setting b 0 we retrieve the general
solution of the homogeneous equation (cf. Theorem 2.2.8).
Example 2.2.17. The solution of the initial value problem
y′(t) = [[4, 1], [-2, 1]] y(t) + (0, 2e^t)ᵀ, y(0) = (1, 0)ᵀ
is given by
y(t) = e^{tA} (1, 0)ᵀ + ∫_0^t e^{(t-τ)A} (0, 2e^τ)ᵀ dτ
With the aid of example 2.2.10, we find
e^{(t-τ)A} = [[1, 1], [-2, -1]] diag{e^{2(t-τ)}, e^{3(t-τ)}} [[1, 1], [-2, -1]]^{-1}
Carrying out the integration yields
y(t) = (3e^{3t} - 3e^{2t} + e^t, -3e^{3t} + 6e^{2t} - 3e^t)ᵀ
(Verify this.)
2.2.4 Exercises
1 0
1 2
A=
0 1
1 0
, A=
0 1
1 0
, A=
y (t) =
y(t)
0 1
1 0
y (t) =
y(t), y(0) =
0
1
y (t) =
y(t), y(0) =
1
0
1
0
y (t) =
y(t), y(0) =
y (t) =
1
1
2 1
y(t)
9. Show that the vector functions
y1(t) = (e^t, 3e^t)ᵀ and y2(t) = (e^{2t}, 4e^{2t})ᵀ, t ∈ R
are linearly independent. Find out whether or not there exists a homogeneous, linear, vectorial ODE, such that both y1 and y2 are solutions.
10. Determine a fundamental matrix and the general solution of the vectorial ODE
y′(t) = [[5, 2], [-4, -1]] y(t)
11. a. Convert the second order ODE
y 00 (t) + 4y 0 (t) + 5y(t) = 0
into a first order vectorial ODE.
b. Determine a real-valued basis of the solution space of the vectorial
ODE.
c. From the result in b. derive a real-valued basis of the solution space of
the original, second order ODE.
12. Determine the general solution of the vectorial ODE
y′(t) = [[2, 0, 0], [0, 1, 0], [0, 1, 1]] y(t)
13. Determine the solution of the initial value problem
y′(t) = [[3, 1, 1], [2, 0, 1], [1, 1, 2]] y(t), y(0) = (1, 1, 2)ᵀ
14. Determine the general solution of the vectorial ODE: y′(t) = Ay(t), where
A = [[0, 1, 0], [0, 0, 1], [2, -5, 4]]
2.3 Stability
t ∈ [t0, ∞)   (2.23)
t ∈ [t0, ∞)   (2.24)
t ∈ [t0, ∞)   (2.25)
t ∈ [0, ∞)   (2.26)
Solving (2.26) by means of the Jordan decomposition A = S(Λ + N)S^{-1}, we obtain
y(t) = S e^{tΛ} ( Σ_{l=0}^{k-1} (t^l N^l)/l! ) S^{-1} y(0)   (2.27)
The matrix e^{tΛ} is a diagonal matrix, with diagonal entries e^{tλ1}, ..., e^{tλk}, and the sum, in parentheses, on the right-hand side of (2.27) is an upper triangular matrix with unit diagonal, whose entries are polynomials in t, of degree not exceeding k - 1.
Σ_{j≠i} |Bij| < |Bii| for all i (dominant diagonal condition)   (2.29)
2.3.1 Stability of equilibria of autonomous systems
Consider the autonomous system
y′(t) = f(y(t)), t ∈ [0, ∞)   (2.31)
The homogeneous, linear system with constant coefficients, y′(t) = Ay(t), is a particular case of (2.31), where f(x) = Ax.
Theorem 2.3.6. Let c ∈ Cᵏ be a zero of f and suppose that f is differentiable at c (sufficient condition: all first order partial derivatives of f exist and are continuous on a neighbourhood of c). Let A := Df(c). The solution y ≡ c of equation (2.31) is (locally) asymptotically stable if all eigenvalues of A have negative real parts. If at least one eigenvalue of A has a positive real part, then the solution y ≡ c is unstable.
The following theorem is the continuous analogue of Theorem 1.5.3.
Theorem 2.3.7. Let A be a constant k × k matrix and g a k-dimensional vector function, continuous at 0 (the origin of Rᵏ or Cᵏ), with the property that, for some (hence for any) given vector norm ‖·‖,
lim_{y→0} ‖g(y)‖ / ‖y‖ = 0   (2.32)
If all eigenvalues of A have negative real parts, then the null solution of the
equation
y′(t) = Ay(t) + g(y(t)), t ∈ [0, ∞)
(2.33)
is asymptotically stable. If, on the other hand, A has at least one eigenvalue with positive real part, then the null solution is unstable.
As in the case of systems of first order difference equations, Theorems 2.3.6 and 2.3.7 are based on linearization of the system of ODEs at the equilibrium point under consideration. The linearized system is obtained by replacing f with its linear approximation at c, thus:
y′(t) = f(c) + Df(c)(y(t) - c) = Df(c)(y(t) - c)
Example 2.3.8. Replacing, in the predator-prey model of example 1.5.5, the discrete variable n by a continuous time-variable t and the difference yi(n + 1) - yi(n) by the derivative yi′(t) (i = 1, 2), we obtain a continuous predator-prey model, described by the following system of ODEs
y1′(t) = (r - a y1(t) - b y2(t)) y1(t)
y2′(t) = (c y1(t) - 1) y2(t),  t ∈ [0, ∞)   (2.34)
Here, y1(t) and y2(t) denote the number of prey and the number of predators at time t, respectively. a, b, c and r again are positive constants. The Jacobian matrix of the vector function
f(x) := ((r - a x1 - b x2) x1, (c x1 - 1) x2)
is
Df(x) = [[r - 2a x1 - b x2, -b x1], [c x2, c x1 - 1]]
At the equilibrium point (0, 0), the linearized system is
y′(t) = [[r, 0], [0, -1]] y(t)
Like its discrete analogue, the system (2.34) has two more equilibrium solutions, one of which is the following:
(y1(t), y2(t)) ≡ (1/c, (1/b)(r - a/c)) = (1/c, (cr - a)/(cb))
which is meaningful provided (1/b)(cr - a) ≥ 0.
Example 2.3.9. The mathematical pendulum without friction is described by the equation
y″(t) + sin y(t) = 0, t ∈ [0, ∞)   (2.35)
Here, y denotes the angle, in radians, that the pendulum makes with the vertical. The equation has two families of equilibrium solutions: y1 ≡ 0 (or y1 ≡ 2nπ, n ∈ Z) and y2 ≡ π (or y2 ≡ (2n + 1)π, n ∈ Z). From experience, we expect the first equilibrium to be stable, but not asymptotically stable: if, at time t = 0, the pendulum is released from an initial position close to, but different from 0 (with initial velocity 0), it will, in the absence of friction, start a periodic movement about the equilibrium position 0 (with maximal deviation from 0 equal to the initial one). The second equilibrium, on the other hand, we expect to be unstable.
Introducing
y1 := y, y2 := y′   (2.36)
the equation is converted into the vectorial ODE
d/dt (y1(t), y2(t))ᵀ = (y2(t), -sin y1(t))ᵀ = [[0, 1], [-1, 0]] (y1(t), y2(t))ᵀ + (0, y1(t) - sin y1(t))ᵀ, t ∈ [0, ∞)   (2.37)
This is of the form (2.33), with g(y) = (0, y1 - sin y1). Now we have
lim_{y→0} ‖g(y)‖₁ / ‖y‖₁ = lim_{(y1,y2)→(0,0)} |y1 - sin y1| / (|y1| + |y2|) ≤ lim_{y1→0} |y1 - sin y1| / |y1| = 0,
and thus condition (2.32) holds. The eigenvalues of the matrix A are the
solutions of the equation λ² + 1 = 0, so the numbers i and -i. Unfortunately,
however, both eigenvalues are purely imaginary, so that Theorem 2.3.7 does
not apply. This was to be expected, as the theorem provides a sufficient
condition for asymptotic stability, but not for neutral stability of the
null solution. Next, we consider the equilibrium solution y ≡ π of (2.35). It corresponds to the solution
(y1, y2) ≡ (π, 0)
of the vectorial ODE (2.37) (check this). Introducing new (dependent) variables u1 and u2 , defined by
u1 := y1 - π, u2 := y2
we have
d/dt (u1(t), u2(t))ᵀ = (u2(t), -sin(u1(t) + π))ᵀ, t ∈ [0, ∞)
or, equivalently,
d/dt (u1(t), u2(t))ᵀ = [[0, 1], [1, 0]] (u1(t), u2(t))ᵀ + (0, sin u1(t) - u1(t))ᵀ, t ∈ [0, ∞)   (2.38)
This is of the form (2.33) again. The matrix A in this case has eigenvalues 1 and -1. By Theorem 2.3.7, the null solution of (2.38), and thus the solution y ≡ (π, 0) of (2.37), is unstable.
We conclude the example with a version of (2.35) that takes into account the
friction experienced by the pendulum:
y″(t) + w y′(t) + sin y(t) = 0, t ∈ [0, ∞)   (2.39)
Here, w is a positive constant, the frictional coefficient. By means of the transformation (2.36), the equation is converted into the vectorial differential equation
d/dt (y1(t), y2(t))ᵀ = (y2(t), -sin y1(t) - w y2(t))ᵀ = [[0, 1], [-1, -w]] (y1(t), y2(t))ᵀ + (0, y1(t) - sin y1(t))ᵀ, t ∈ [0, ∞)   (2.40)
The eigenvalues of the 2 × 2 matrix on the right-hand side of (2.40) are the solutions of the equation
λ² + wλ + 1 = 0
For every positive value of w, both eigenvalues have negative real part (the
proof of this assertion is left to the reader, cf. exercise 10 below). With
the aid of Theorem 2.3.7, we conclude that the null solution of (2.40) is
asymptotically stable.
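The assertion about the roots of λ² + wλ + 1 = 0 is easy to confirm numerically: their product is 1 and their sum is -w, so both must have negative real part. A quick check over several illustrative values of w:

```python
# Roots of lambda^2 + w*lambda + 1 = 0 for several positive w:
# both always have negative real part.
import cmath

def roots(w):
    d = cmath.sqrt(w * w - 4)
    return (-w + d) / 2, (-w - d) / 2

for w in [0.1, 1.0, 2.0, 5.0]:
    r1, r2 = roots(w)
    print(w, r1, r2)
    assert r1.real < 0 and r2.real < 0
```

For 0 < w < 2 the roots are complex (damped oscillations); for w ≥ 2 they are real (overdamped motion).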
Similarly to the case of difference equations, stability or instability of solutions of systems of differential equations can sometimes be established by means of Lyapunov's direct method.
Theorem 2.3.10. Let f be a differentiable function from Rᵏ to Rᵏ, vanishing at x0. Suppose there exists a neighbourhood U of x0 and a C¹-function V : U → R, with the property that V(x0) = 0 and V(x) > 0 for all x ∈ U \ {x0}.
(i) If V̇(x) := DV(x) f(x) = Σ_{j=1}^{k} (∂V/∂xj)(x) fj(x) ≤ 0 for all x ∈ U, then y ≡ x0 is a stable equilibrium solution of (2.31).
(ii) If V̇(x) < 0 for all x ∈ U \ {x0}, then x0 is an asymptotically stable equilibrium solution. Moreover, any solution y of (2.31), with the property that there exists r > 0 such that, for all t ∈ [0, ∞), y(t) ∈ B(x0, r) ⊂ U, converges to x0 as t → ∞.
(iii) If V̇(x) > 0 for all x ∈ U \ {x0}, then x0 is an unstable equilibrium solution.
A function V satisfying the conditions of part (i) of Theorem 2.3.10 is called a Lyapunov function on U, centered at x0, for equation (2.31). A Lyapunov function satisfying the conditions of part (ii) of Theorem 2.3.10 is called a strict Lyapunov function on U, centered at x0, for equation (2.31).
Remark 2.3.11. If f is a continuous function from Rᵏ to Rᵏ, then any solution y of equation (2.31) is a C¹-function. Consequently, if V : U → R is a C¹-function, then so is V ∘ y and
d/dt V(y(t)) = DV(y(t)) y′(t) = DV(y(t)) f(y(t)) = V̇(y(t))
The condition that f should be differentiable ensures the existence and uniqueness of solutions with a given initial value. It is a sufficient condition and is often replaced by a weaker, so-called Lipschitz condition.
If x0 is an asymptotically stable equilibrium for equation (2.31), the set of all x in the domain of f, with the property that the solution y of (2.31), with initial value y(0) = x, converges to x0 as t → ∞, is called the basin of attraction of x0.
Example 2.3.12. We can apply Theorem 2.3.10 to the example of the pendulum, with or without friction, i.e. equation (2.40) with w ≥ 0. We define
U = {x ∈ R² : |x1| < 2π} and V(x) = (1/2) x2² + 1 - cos x1
(the energy of the pendulum). It is easily seen that the right-hand side is ≥ 0 for all x ∈ R², and equals 0 iff x2 = 0 and x1 = 0 mod 2π; hence V(x) > 0 for all x ∈ U \ {0}. Furthermore,
∂V/∂x1 = sin x1,  ∂V/∂x2 = x2,
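The Lyapunov property of the energy can be observed directly in a simulation: along trajectories of the damped pendulum, V̇ = -w x2² ≤ 0, so the energy never increases. A sketch (parameter values illustrative):

```python
# The energy V(x) = x2^2/2 + 1 - cos(x1) should not increase along
# trajectories of the damped pendulum x1' = x2, x2' = -sin(x1) - w*x2.
import math

def V(x1, x2):
    return 0.5 * x2 * x2 + 1 - math.cos(x1)

def step(x1, x2, w, dt):
    # one RK4 step for the pendulum system
    def f(a, b):
        return (b, -math.sin(a) - w * b)
    k1 = f(x1, x2)
    k2 = f(x1 + dt/2*k1[0], x2 + dt/2*k1[1])
    k3 = f(x1 + dt/2*k2[0], x2 + dt/2*k2[1])
    k4 = f(x1 + dt*k3[0], x2 + dt*k3[1])
    return (x1 + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            x2 + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

w, dt = 0.3, 1e-3
x1, x2 = 2.0, 0.0
energies = [V(x1, x2)]
for _ in range(20000):
    x1, x2 = step(x1, x2, w, dt)
    energies.append(V(x1, x2))
print(energies[0], energies[-1])   # energy decreases towards 0
```

With w = 0 the same simulation conserves the energy (up to integration error), matching the neutral stability of the frictionless pendulum.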
2.3.2 Exercises
y (t) =
1
1
2 2
y(t) t [0, )
y (t) =
1 1
2 1
y(t) +
et
t [0, )
b.
y 0 (t) =
0
2
0
0
2 1
0 0
0 0
0 2
0
1
2
0
y(t),
t [0, )
t [0, )
t [0, )
t [0, )
t [0, )
t [0, )
y1′ = a y1 + y2²
y2′ = b y1 y2 - y2³
where a, b ∈ R.
14. a. Convert the real-valued equation
y″(t) + ε y′(t) + y(t)³ = 0 (ε > 0)
into a system of first order ODEs.
b. Use the function V(x1, x2) = (1/2) x1⁴ + x2² to examine the stability of the null solution of this system.
15. a. Convert the real-valued equation
y″(t) + y′(t) y(t)² + y(t)³ - a² y(t) = 0
into a system of first order differential equations.
b. Determine the equilibrium solutions of this system and examine their stability in the case that a ∈ R, a ≠ 0.
c. Examine the stability of the null solution of the system in the case that a = 0, using Lyapunov's direct method. Try a Lyapunov function of the form V(x1, x2) = x1⁴ + b x2².
Appendix
A.1 The Jordan normal form
Example: the linear map L : R² → R² defined by L(x1, x2) = (x2, x1) is represented, w.r.t. the standard basis {e1, e2}, by the matrix [[0, 1], [1, 0]], since
Le1 = 0·e1 + 1·e2 and Le2 = 1·e1 + 0·e2
whereas, w.r.t. the basis {e1′, e2′} := {(1, 1), (1, -1)}, it is represented by [[1, 0], [0, -1]], as
Le1′ = 1·e1′ + 0·e2′ and Le2′ = 0·e1′ - 1·e2′
A change of basis of Rᵏ, usually called a coordinate transformation, or change of coordinates, is a bijective linear map, represented by a nonsingular k × k matrix. Conversely, every nonsingular matrix with real entries corresponds to a change of coordinates in Rᵏ. Likewise, there is a one-to-one correspondence between changes of coordinates in Cᵏ and nonsingular k × k
matrices with complex entries. In what follows, a coordinate transformation refers to a coordinate transformation of Ck . If {e1 , ..., ek } denotes
the standard basis of Ck and {s1 , ..., sk } any other basis, then the change of
coordinates transforming the first basis into the second, is, w.r.t. the standard basis, represented by the matrix S with column vectors s1 , ..., sk . A
linear map L : Cᵏ → Cᵏ, which, w.r.t. the standard basis, is represented by
a matrix A, in the new coordinate system is represented by
S 1 AS
Matrices that are related through a change of coordinates, are, in a certain
sense, equivalent, and are called similar matrices. Several important properties of square matrices, such as determinant, trace, etc., are invariant under
coordinate transformations.
Lemma A.1.1. Similar square matrices A and B possess the same characteristic polynomial.
Proof. Suppose that A and B ∈ C^{k×k} are similar matrices. Then there exists a nonsingular matrix S ∈ C^{k×k}, such that B = S^{-1}AS. The characteristic polynomial pB of B is given by
pB(λ) = det(B - λI) = det(S^{-1}AS - λI) = det(S^{-1}(A - λI)S) = det S^{-1} · det(A - λI) · det S
As det S^{-1} = (det S)^{-1}, this implies that pB(λ) = det(A - λI) = pA(λ).
Many problems, related to linear mappings, can be simplified by an appropriate choice of coordinates, such that the action of a specific, linear map
is as simple as possible. In terms of matrices, this amounts to choosing,
from the similarity class of matrices representing this map, an element with
the simplest possible form, such as a triangular matrix, or, even better, a
diagonal matrix. Suppose, for instance, that a coordinate transformation,
represented by the nonsingular matrix S, converts a given matrix A into a
diagonal matrix Λ, i.e. S^{-1}AS = Λ. Then we have, for every nonnegative integer n,
Aⁿ = S Λⁿ S^{-1}
and the right-hand side of this identity can be easily computed, for arbitrarily large values of n.
Consider the nilpotent matrix
B = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
The matrix
B² = [[0, 0, 1], [0, 0, 0], [0, 0, 0]]
has an eigenvector (0, 1, 0), but this is not an eigenvector of B, as B(0, 1, 0) = (1, 0, 0). (0, 1, 0) is a so-called generalized eigenvector of B, of degree 2, with respect to the eigenvalue 0. At the same time, it is a generalized eigenvector of A := λI + B w.r.t. the eigenvalue λ.
Definition A.1.4. A generalized eigenvector of degree j w.r.t. the eigenvalue λ of a square matrix A is a vector x with the property that
(A - λI)ʲ x = 0, but (A - λI)^{j-1} x ≠ 0
The generalized eigenspace of an eigenvalue is the linear space spanned by all generalized eigenvectors w.r.t. that eigenvalue.
An ordinary eigenvector is a generalized eigenvector of degree 1. If x is a generalized eigenvector of degree j w.r.t. the eigenvalue λ of A, and y is a generalized eigenvector of degree h < j, then x + y is a generalized eigenvector of degree j, for (A - λI)ʲ(x + y) = 0 and
(A - λI)^{j-1}(x + y) = (A - λI)^{j-1} x ≠ 0
If λ ≠ 0, then also Ax is a generalized eigenvector of degree j w.r.t. λ, for
(A - λI)ʲ Ax = (A - λI)^{j+1} x + λ(A - λI)ʲ x = 0
whereas
(A - λI)^{j-1} Ax = (A - λI)ʲ x + λ(A - λI)^{j-1} x = λ(A - λI)^{j-1} x ≠ 0
Nk denotes the k × k matrix with entries 1 on the superdiagonal and 0 elsewhere:
Nk = the matrix with rows (0, 1, 0, ..., 0), (0, 0, 1, ..., 0), ..., (0, ..., 0, 1), (0, ..., 0)   (41)
and Jk(λ) := λIk + Nk denotes the k × k Jordan block with eigenvalue λ, i.e. the matrix with diagonal entries λ, superdiagonal entries 1, and zeros elsewhere:
Jk(λ) = λIk + Nk   (42)
Example: the matrix
A = [[1, 1], [-1, -1]]
satisfies A² = 0; every vector x ≠ 0 with Ax = 0 is a multiple of (1, -1), and every vector outside this line, e.g. x = (1, 1), is a generalized eigenvector of degree 2 w.r.t. the eigenvalue 0.
Example: the matrix A = [[5, 1], [-1, 3]] has the Jordan normal form
S^{-1} A S = [[4, 1], [0, 4]], with S = [[1, 0], [-1, 1]]
We conclude with some general remarks concerning the Jordan normal form. Let J be a k × k matrix in Jordan normal form. J is a block matrix, with the property that all off-diagonal blocks are null matrices. A matrix with this property is called a block-diagonal matrix. In the case considered here, the diagonal entries are Jordan blocks, i.e. matrices of the form (42). We will denote them by J_{ki}(λi), i = 1, ..., r, where r is a positive integer, 1 ≤ r ≤ k, and Σ_{i=1}^{r} ki = k. Each Jordan block can be written in the form
J_{ki}(λi) = λi I_{ki} + N_{ki},  i = 1, ..., r
A.2 Computation of Jⁿ
Lemma A.2.1. If the matrices A and B commute, then, for every positive integer n,
(A + B)ⁿ = Σ_{m=0}^{n} (n choose m) A^{n-m} Bᵐ
Proof. We use induction on n. For n = 1, the identity reduces to A + B = A + B,
which is obviously true. Now suppose the statement holds for a certain
positive integer n. Then we have
(A + B)^{n+1} = (A + B) Σ_{m=0}^{n} (n choose m) A^{n-m} Bᵐ
= Σ_{m=0}^{n} (n choose m) A^{n-m+1} Bᵐ + Σ_{m=0}^{n} (n choose m) A^{n-m} B^{m+1}
= Σ_{m=0}^{n} n!/(m!(n-m)!) A^{n+1-m} Bᵐ + Σ_{m=1}^{n+1} n!/((m-1)!(n+1-m)!) A^{n+1-m} Bᵐ
= A^{n+1} + Σ_{m=1}^{n} n!/(m!(n+1-m)!) (n + 1 - m + m) A^{n+1-m} Bᵐ + B^{n+1}
= Σ_{m=0}^{n+1} (n+1)!/(m!(n+1-m)!) A^{n+1-m} Bᵐ
(using AB = BA to collect powers of A and B).
Hence, by the principle of mathematical induction, the identity holds for all
positive integers.
Noting that (Nk)ᵏ = 0 and using lemma A.2.1, we find
Jk(λ)ⁿ = Σ_{m=0}^{min{n,k-1}} (n choose m) λ^{n-m} (Nk)ᵐ   (43)
For example,
J4(λ)ⁿ = [[λⁿ, nλ^{n-1}, (n choose 2)λ^{n-2}, (n choose 3)λ^{n-3}], [0, λⁿ, nλ^{n-1}, (n choose 2)λ^{n-2}], [0, 0, λⁿ, nλ^{n-1}], [0, 0, 0, λⁿ]]

A.3 The exponential of a matrix
It can be proved that, for every matrix A ∈ C^{k×k}, the sequence of partial sums
S_N := Σ_{n=0}^{N} Aⁿ/n!   (44)
converges as N → ∞, i.e. lim_{N→∞} (S_N)_{ij} exists for every i and j ∈ {1, ..., k}. The limit of (44) is denoted e^A, thus
e^A := Σ_{n=0}^{∞} Aⁿ/n!   (45)
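The partial sums (44) are easy to compute. A sketch with a nilpotent matrix, for which the series terminates after finitely many terms:

```python
# Partial sums S_N = sum_{n<=N} A^n / n! of the exponential series.
# For the nilpotent A = [[0, 1], [0, 0]] the series terminates:
# e^A = I + A = [[1, 1], [0, 1]] exactly.
import numpy as np

def partial_sum(A, N):
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, N + 1):
        term = term @ A / n     # next term A^n / n!
        out = out + term
    return out

A = np.array([[0.0, 1.0], [0.0, 0.0]])
print(partial_sum(A, 50))
```

For a general matrix the partial sums converge rapidly once N exceeds the norm of A.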
For every invertible k × k matrix S,

    S^{−1} e^A S = S^{−1} (Σ_{n=0}^{∞} A^n / n!) S = Σ_{n=0}^{∞} (S^{−1} A S)^n / n! = e^{S^{−1} A S}
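Both the definition (45) and this similarity rule can be illustrated with a short script; expm_series is our own truncation of the series, not a library routine, and the matrices are chosen arbitrarily:

```python
import numpy as np

def expm_series(M, terms=60):
    """Approximate e^M by the partial sum S_N of (44), N = terms - 1."""
    out = np.zeros_like(M, dtype=float)
    term = np.eye(M.shape[0])            # M^0 / 0!
    for n in range(terms):
        out += term
        term = term @ M / (n + 1)        # M^(n+1)/(n+1)! from M^n/n!
    return out

A = np.array([[5.0, 1.0], [-1.0, 3.0]])
S = np.array([[1.0, 0.0], [-1.0, 1.0]])
Sinv = np.linalg.inv(S)

# S^{-1} e^A S = e^{S^{-1} A S}
print(np.allclose(Sinv @ expm_series(A) @ S,
                  expm_series(Sinv @ A @ S)))   # True
```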
If AB = BA, then lemma A.2.1 yields

    e^{A+B} = Σ_{n=0}^{∞} (A + B)^n / n!
            = Σ_{n=0}^{∞} (1/n!) Σ_{m=0}^{n} C(n, m) A^{n−m} B^m
            = Σ_{n=0}^{∞} Σ_{m=0}^{n} A^{n−m} B^m / ((n−m)! m!)
            = (Σ_{l=0}^{∞} A^l / l!) (Σ_{m=0}^{∞} B^m / m!)
            = e^A e^B

where in the next-to-last step we substituted l = n − m and rearranged the double series.
The matrices

    A = | 0 1 |   and   B = | 1 0 |
        | 0 0 |             | 0 0 |

do not commute, for

    AB − BA = | 0 -1 | ≠ | 0 0 |
              | 0  0 |   | 0 0 |

Now, we have

    e^A = I + A = | 1 1 |
                  | 0 1 |

(since A^n = 0 for n ≥ 2) and

    e^B = Σ_{n=0}^{∞} B^n / n! = | e 0 |
                                 | 0 1 |

(since B^n = B for all n ≥ 1). Hence

    e^A e^B = | e 1 |   and   e^B e^A = | e e |
              | 0 1 |                   | 0 1 |
Furthermore,

    A + B = | 1 1 | = | 1  1 | | 1 0 | | 1  1 |
            | 0 0 |   | 0 -1 | | 0 0 | | 0 -1 |

Consequently, by the similarity rule derived above,

    e^{A+B} = | 1  1 | e^{(1 0; 0 0)} | 1  1 |
              | 0 -1 |                | 0 -1 |

and thus

    e^{A+B} = | 1  1 | | e 0 | | 1  1 | = | e  e-1 |
              | 0 -1 | | 0 1 | | 0 -1 |   | 0   1  |

which differs from both e^A e^B and e^B e^A.
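The failure of e^{A+B} = e^A e^B for these non-commuting matrices can be confirmed numerically; as before, expm_series is our own truncation of the series (45), not a library routine:

```python
import numpy as np

def expm_series(M, terms=40):
    """Approximate e^M by partial sums of the series (45)."""
    out = np.zeros_like(M, dtype=float)
    term = np.eye(M.shape[0])
    for n in range(terms):
        out += term
        term = term @ M / (n + 1)
    return out

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0]])

print(np.allclose(expm_series(A + B), expm_series(A) @ expm_series(B)))  # False
print(np.allclose(expm_series(A + B),
                  [[np.e, np.e - 1.0], [0.0, 1.0]]))                     # True
```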
A.4 Matrix functions

Integration and differentiation of matrix functions are defined entrywise. Thus, if A is an m × n matrix function whose entries are integrable on [a, b], then ∫_a^b A(x) dx is the m × n matrix with entries

    (∫_a^b A(x) dx)_{ij} := ∫_a^b A_{ij}(x) dx
The computational rules for integration and differentiation are easily deduced from those for ordinary (i.e. scalar) functions. Thus, for example, the derivative of the sum of two differentiable m × n matrix functions A and B equals the sum of the derivatives:

    d/dx (A + B) = dA/dx + dB/dx
One should be well aware of the fact that matrix multiplication is non-commutative. If A and B are a differentiable l × m and a differentiable m × n matrix function, respectively, then the (i, j)th entry of (AB)′ is given by

    (AB)′_{ij} = d/dx (Σ_{h=1}^{m} A_{ih} B_{hj}) = Σ_{h=1}^{m} (A′_{ih} B_{hj} + A_{ih} B′_{hj}) = (A′B + AB′)_{ij}

Hence the product rule takes the form (AB)′ = A′B + AB′, with the order of the factors preserved.
Let

    A(x) = | 1 0 |
           | x 2 |

In this case,

    d/dx A^2(x) = d/dx | 1  0 | = | 0 0 |
                       | 3x 4 |   | 3 0 |

However,

    A dA/dx = | 1 0 | | 0 0 | = | 0 0 |
              | x 2 | | 1 0 |   | 2 0 |

and

    dA/dx A = | 0 0 | | 1 0 | = | 0 0 |
              | 1 0 | | x 2 |   | 1 0 |

so (A^2)′ = (dA/dx) A + A (dA/dx) ≠ 2 A (dA/dx).
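This example can be checked with a finite-difference sketch (the helper names A and dA are ours; the evaluation point x and step h are arbitrary):

```python
import numpy as np

def A(x):
    # The example matrix function A(x) from the notes.
    return np.array([[1.0, 0.0], [x, 2.0]])

def dA(x):
    # Its entrywise derivative dA/dx.
    return np.array([[0.0, 0.0], [1.0, 0.0]])

x, h = 1.5, 1e-6
# Numerical derivative of A^2 via a central difference.
num = (A(x + h) @ A(x + h) - A(x - h) @ A(x - h)) / (2 * h)

print(np.allclose(num, dA(x) @ A(x) + A(x) @ dA(x)))  # True: (A^2)' = A'A + AA'
print(np.allclose(num, 2 * A(x) @ dA(x)))             # False: (A^2)' != 2 A A'
```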
Index

annihilator method, 91
arithmetic-geometric progression, 47
asymptotic behaviour, 13
asymptotically stable, 14, 24, 30, 49, 55, 58, 67, 73, 74, 94, 130, 134, 138
attracting set, 22
attractor, 22, 71
autonomous difference equation, 7
autonomous differential equation, 101
basin of attraction, 22, 23, 71, 77, 79, 138
bifurcation, 13, 32
    flip-, 34
    period-doubling, 34
    pitchfork-, 33
    saddle-node, 13, 32
    transcritical, 34
bifurcation diagram, 13, 34, 35
boundary conditions, 97
business-cycle model, 83
Casorati determinant, 41, 57
cobweb model, 5, 31
coordinate transformation, 143
cyclic solution, 10
difference equation, 7
    autonomous, 7
    homogeneous, 38
    inhomogeneous, 38
    linear, 8, 38
    linearized, 28
    logistic, 19
    scalar, 7
differential equation
    autonomous, 101
    linear, 102
    logistic, 101
dominant diagonal, 132
dynamic multiplier model, 69
equilibrium point, 20
equilibrium solution, 10, 65
Euler's method, 9
fixed point, 19
flip-bifurcation, 34
fundamental matrix, 57, 59, 120
general solution, 11
generalized eigenspace, 146
generalized eigenvector, 146
geometric multiplicity, 148
graphical solution of difference eqs., 21, 22
homogeneous difference equation, 89
homogeneous linear difference equation, 38
hyperbolic, 26, 78