Difference and Differential Equations

Lecture 1

Allard van der Made

Week 46 (Tuesday)

Difference and Differential Equations

WELCOME!
This Lecture

Organization of the course

What are difference and differential equations?

Difference equations and their solutions

Autonomous first order difference equations

Course Schedule

I Lectures by Allard van der Made (DUI 761,


[email protected]):
I Tuesday, 11:00-13:00 in the Blauwe Zaal.
I Friday, 11:00-13:00 in 5419.0015.
I Tutorials by Rens Kamphuis, Iris Meesters, and Annemieke
Kruijt:
I Group 1: Thursday, 15:00-17:00 in 5419.0236 (Rens
Kamphuis).
I Group 2: Thursday, 15:00-17:00 in 5419.0102 (Iris
Meesters).
I Group 3: Thursday, 15:00-17:00 in 5419.0114 (Annemieke
Kruijt).
I DifDif Question hours: each Monday, 13:00-14:30 in DUI
761.
Exercise Lectures

During Lectures 6, 10, and 14 we will look at some exercises.


Please inform me in advance which exercises I should discuss.

Nestor

Everything will be posted on Nestor, please do check the


website regularly!

Already on Nestor:
I Lecture Notes.
I Planning lectures and tutorials.
I Old exams.
I Tutorials grouping.
Required Background

From Simon and Blume (1994):


I Chapters 13 and 14: Continuity and differentiability of
multivariate functions, linear approximation.
I Chapter 23: Linear coordinate transformations and Jordan
normal form.
I Chapter 27: Linear spaces over R and C, linear
(in)dependence.
I Chapter 29: Infinite sequences, closed sets, norms on R^n
and C^n.
I Appendix A3: Complex numbers.

We will discuss some exercises from Simon and Blume (1994)


during the first tutorials.

Grading

You need to do two things to pass this course:


I Make assignments and write a report about your findings
with your assignment group.
I Make the exam.
Your final grade is 0.2*(grade report)+0.8*(grade exam). This
grade must be at least 5.5 to pass the course.

There is a resit exam. Your ‘resit final grade’ is


0.2*(grade report)+0.8*(grade resit exam). This grade must be
at least 5.5 to pass the course.

You are not allowed to consult written texts, use electronic


devices, or go to the bathroom during the exam or the resit
exam.
Assignments

I Please find three fellow students with whom you would like
to collaborate and enroll yourself in the same assignment
group as those teammates. Do so before Thursday, 26
November. The set of students who haven’t found a group
by 26 November will be randomly partitioned into groups of
four (or fewer).
I Assignments will be handed out during Lecture 6 (27
November).
I You are not allowed to make the exam or the resit exam if
you have not handed in a decent assignment report.

Why?

With the aid of difference equations (discrete time) and


differential equations (continuous time) one can study the
dynamics (behaviour over time) of numerous phenomena.

Lots and lots of applications in physics, biology, numerical


analysis, and economics.

Let’s have a look at an example of a difference equation (we will


start studying differential equations in week 50).
Bank Account

I During the Christmas holiday you deposit A0 euros into a
bank account.
I Each year you receive r% interest on the money in your
account.
I Right after receiving interest you withdraw G euros to buy
Christmas gifts.
How much money is in your bank account after t years? Does
the interest you receive suffice to buy the Christmas gifts or will
you eventually run out of money? What happens if you change
A0 a little bit?

Translating into a Difference Equation

The amount of money in the bank account after t + 1 years


depends on the amount of money after t years:
A(t + 1) = (1 + r/100)A(t) − G.

Is there an easy way to calculate each A(t), i.e. find a solution


of this difference equation?

What should A0 be such that the interest exactly offsets G, i.e.


what is the equilibrium solution of this difference equation?

What happens if you use a slightly different A0 , i.e. what are the
stability properties of the equilibrium solution?
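These questions are easy to explore numerically. A minimal sketch, not part of the slides; the function name `simulate` and the parameter values are illustrative:

```python
# Iterate A(t+1) = (1 + r/100) * A(t) - G and compare with the
# equilibrium deposit A* = 100*G/r, at which the interest (r/100)*A*
# exactly offsets the yearly withdrawal G.
def simulate(A0, r, G, years):
    A = A0
    history = [A]
    for _ in range(years):
        A = (1 + r / 100) * A - G
        history.append(A)
    return history

r, G = 5.0, 100.0          # 5% interest, 100 euros of gifts per year
A_star = 100 * G / r       # equilibrium deposit: 2000 euros
print(simulate(A_star, r, G, 3))      # stays at 2000.0 every year
print(simulate(A_star - 1, r, G, 3))  # drifts away from the equilibrium
```

Starting exactly at A* the balance never changes; starting slightly below it, the gap grows each year, a first hint that this equilibrium is unstable.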
General Difference Equations

A scalar difference equation can be written as follows:

F (x, z(x), z(x + h), z(x + 2h), . . . , z(x + kh)) = 0, (1)

where:
I z is the (real-valued or complex-valued) variable we are
interested in (dependent variable).
I F is a function of at most k + 2 variables.
I k ∈ N is the order of the difference equation.
I x is the independent variable, usually time.
I h is the shift, usually the length of some time period.

Normalizing the Shift

Let n := x/h and y(n) := z(x) = z(nh).

Then: z(x + h) = y (n + 1), . . . , z(x + kh) = y (n + k ).

Substituting this into (1) yields

F (nh, y (n), y (n + 1), . . . , y (n + k )) = 0.

We only consider cases where this normalization has already


taken place and we work with a discrete independent variable
n ∈ D ⊆ Z (usually D = N ∪ {0}).
Important Classes of Difference Equations

I Linear difference equations:

a0 (n)y (n)+a1 (n)y (n+1)+. . .+ak (n)y (n+k )+b(n) = 0, n ∈ D.

In this case the function F is affine.


I Autonomous difference equations:

F (y (n), y (n + 1), . . . , y (n + k )) = 0, n ∈ D.

So, the function F does not depend explicitly on n.

Solutions of Difference Equations

Consider the difference equation

F (n, y (n), y (n + 1), . . . , y (n + k )) = 0, n ∈ D. (2)

A solution of this difference equation is a sequence {y (n)}n∈D


that ‘fits’ the difference equation.

In other words, a complex-valued solution is a function


y : D → C and a real-valued solution is a function y : D → R.

Example: {1, 1, 1, . . .} is a solution of

y (t + 1) − 3y (t) + 2 = 0, t ∈ N ∪ {0}.
The General Solution of a Difference Equation

The general solution of a difference equation is the set of all of


its solutions.

Example: {1, 1, 1, . . .} is not the general solution of

y (t + 1) − 3y (t) + 2 = 0, t ∈ N ∪ {0}.

Reason: y1(t) = 3^t + 1, t ≥ 0, is also a solution of the
difference equation.

In fact, for any c ∈ C, yc(t) = c·3^t + 1, t ≥ 0, is a solution.
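A quick sanity check of this family of solutions (illustrative code, not from the slides):

```python
# y_c(t) = c*3**t + 1 should satisfy y(t+1) - 3*y(t) + 2 = 0 for every c;
# c = 0 gives the constant solution {1, 1, 1, ...}.
def y(c, t):
    return c * 3**t + 1

for c in [0, 1, -2, 0.5]:
    for t in range(10):
        residual = y(c, t + 1) - 3 * y(c, t) + 2
        assert abs(residual) < 1e-9
print("all residuals vanish")
```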

Important Types of Solutions

I An equilibrium, constant, or stationary solution y of (2)
satisfies y(n) = c for all n ∈ D for some c ∈ C. So:

F(n, c, . . . , c) = 0, ∀n ∈ D.

Notation: y ≡ c. If c = 0, then y is called the null solution.


I A periodic or cyclic solution of period p (p as small as
possible) is a solution y such that y (n + p) = y (n), ∀n ∈ D.

Example: {1, −1, 1, −1, . . .} is a 2-periodic solution of

y (n + 1) + y (n) = 0, n ∈ N ∪ {0}.

Remark: {−1, 1, −1, 1, . . .} is another 2-periodic solution


of this equation.
Pictures of Solutions (I)
The two stationary solutions of y (n + 1) = 2y (n)(1 − y (n)):


Pictures of Solutions (II)


One 4-periodic solution of y(n + 1) = (7/2)y(n)(1 − y(n)):

Recurrence Relations and Initial Value Problems

Difference equations can often be written as recurrence


relations:

y(n + k) = f(n, y(n), . . . , y(n + k − 1)), n ∈ D = {n0, n0 + 1, . . .}. (3)
If the first k values y (n0 ), . . . , y (n0 + k − 1), i.e. the initial
values are known, then the subsequent values
y (n0 + k ), y (n0 + k + 1), . . . can be computed using (3), i.e. one
can determine the solution of this initial value problem.

Suppose f : D × C^k → C. Then any set c1, . . . , ck of initial
(complex) values yields a solution to (3), and a solution to (3) is
uniquely determined by its k initial values. Likewise for
f : D × R^k → R and real-valued initial values.

Autonomous First Order Difference Equations

Let us consider autonomous first order difference equations


that can be written as recurrence relations:

y (n + 1) = f (y (n)), n ≥ 0. (4)

Important example: the logistic difference equation:

y (n + 1) = αy (n)(1 − y (n)), n ≥ 0,

where α is a positive parameter.
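A small sketch for experimenting with this recurrence (the helper name `logistic_orbit` and the chosen α and y(0) are illustrative, not from the slides):

```python
# Iterate the logistic recurrence y(n+1) = alpha * y(n) * (1 - y(n)).
def logistic_orbit(alpha, y0, n):
    y = y0
    orbit = [y]
    for _ in range(n):
        y = alpha * y * (1 - y)
        orbit.append(y)
    return orbit

# For alpha = 2 the orbit approaches the fixed point 1 - 1/alpha = 0.5.
print(logistic_orbit(2.0, 0.1, 5))
```

Changing α changes the long-run behaviour dramatically, which is the subject of the bifurcation discussion in Lecture 2.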


(Positive) Invariant Sets

The difference equation y (n + 1) = f (y (n)), n ∈ N ∪ {0},


combined with the initial value y (0) = y0 only yields a solution
{y(n)}_{n=0}^∞ if the iterates f(y0), f(f(y0)), f(f(f(y0))), etc. are
well-defined.

No problems if the domain Df of f is positive invariant: a set


G ⊆ Df is f -positive invariant if f (G) ⊆ G and f -invariant if
f (G) = G.
From now on we assume that the domain of f is invariant.

Iterating a Solution
The first few elements of the solution of
y(n + 1) = (7/2)y(n)(1 − y(n)) with initial value y(0) = 1/10.

Finding Equilibrium Solutions

If y ≡ c is an equilibrium solution of (4), then c = f (c). So, if


y ≡ c is an equilibrium solution, then c is a fixed point of f .
Conversely, any fixed point of f yields an equilibrium solution.

Remark 1: A fixed point of a map f can only exist if domain and


image of f have a non-empty intersection. No problem if the
domain of f is positive invariant.

Remark 2: If c is a fixed point of f , then {c} is f -invariant.

Picture of a Fixed Point


At the two fixed points of x ↦ f(x) = (7/2)x(1 − x) the graph of f
and the graph of x ↦ x intersect.

Solutions of First Order Autonomous Difference
Equations

If y is a solution of (4), then y(n + k) = f^k(y(n)) for each k ≥ 0.
Taking n = 0 yields

y(k) = f^k(y(0)), k ≥ 0.

So, the general solution of (4) reads {{f^k(x)}_{k=0}^∞ : x ∈ Df}.

y is a p-periodic solution of (4) only if c0 := y(0) is a fixed point
of f^p: f^p(c0) = c0. Then y(n + mp) indeed equals

f^{n+mp}(c0) = f^n(f^{mp}(c0)) = f^n(f^{(m−1)p}(c0)) = . . . = f^n(c0) = y(n).

Note: mp with m > 1 is also a period of y, but not the period.
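This characterization can be tested on the example of the next slide, f(x) = x² − 1, whose 2-periodic solution {0, −1, 0, −1, . . .} starts at a fixed point of f². A sketch (function names are illustrative):

```python
# A p-periodic solution starts at a fixed point of the p-th iterate f^p.
# For f(x) = x**2 - 1, the points 0 and -1 satisfy f(f(x)) = x without
# being fixed points of f itself, so they generate a 2-periodic solution.
def f(x):
    return x**2 - 1

def iterate(f, x, p):
    for _ in range(p):
        x = f(x)
    return x

for c in (0, -1):
    assert iterate(f, c, 2) == c   # fixed point of f^2 ...
    assert f(c) != c               # ... but not of f
print("0 and -1 generate the 2-periodic solution {0, -1, 0, -1, ...}")
```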

Periodic Solutions and Fixed Points


The graphs of f(x) = x^2 − 1 and f^2(x) = (x^2 − 1)^2 − 1 convey the
two stationary solutions and the 2-periodic solution of
y(n + 1) = (y(n))^2 − 1.

Difference and Differential Equations
Lecture 2

Allard van der Made

Week 46 (Friday)

This Lecture

Stability of Solutions

Stability: autonomous first order difference equations

Bifurcations
Bank Account Revisited

Recall the first order difference equation that tells you how
much money is in your bank account:

A(t + 1) = (1 + r/100)A(t) − G.

The equilibrium solution of this equation is A ≡ (100/r)G.

What happens if you deposit slightly less or slightly more than
(100/r)G?

The distance between A(t) and (100/r)G grows with t.

Staying close to an Equilibrium Solution

Consider the difference equation

y (n + 1) = y (n), n ≥ 0.

An equilibrium solution of this equation is the null solution


(y ≡ 0).

What happens if the initial value y (0) is unequal to zero?

Consider now the difference equation

y(n + 1) = (1/2)y(n) + 1, n ≥ 0.

The equilibrium solution of this equation is y ≡ 2.

What happens if the initial value y (0) is unequal to 2?


Stability Concepts (I)

A solution y of the difference equation

y (n + k ) = f (n, y (n), . . . , y (n + k − 1)), n ≥ n0 (1)

is stable if for every ε > 0 there exists a δ = δ(ε) > 0 such that
for every solution ỹ with the property

|ỹ(n) − y(n)| ≤ δ, n = n0, n0 + 1, . . . , n0 + k − 1

one has |ỹ(n) − y(n)| ≤ ε for all n ≥ n0.

Remark: mere stability does not require convergence.

Stability Concepts (II)

A stable solution y is (locally) asymptotically stable if there


exists a δ > 0 such that for every solution ỹ with the property

|ỹ (n) − y (n)| ≤ δ, n = n0 , n0 + 1, . . . , n0 + k − 1

one has limn→∞ (ỹ (n) − y (n)) = 0.

A stable solution y is globally asymptotically stable if


limn→∞ (ỹ (n) − y (n)) = 0 for every solution ỹ .

A stable solution that is not asymptotically stable is neutrally


stable.

Not surprisingly, a solution that is not stable is called unstable.


Orbits and Limit Points

The positive orbit γ+(x0) of x0 ∈ Df subject to the map f is the
set

γ+(x0) := {f^n(x0) : n ∈ N}.

So, γ+(x0) contains all the elements of the solution of the initial
value problem y(n + 1) = f(y(n)), y(0) = x0.
It is not a sequence and hence not equal to the solution!

A point x ∈ Df is called a positive limit point or ω-limit point of x0


if there exists a subsequence of {f^n(x0)}_{n=0}^∞ converging to x.

The positive limit set or ω-limit set ω(x0 ) of x0 is the set of all
ω-limit points of x0 .

Orbits and Limit Points: Example

Consider the difference equation

y (n + 1) = −y (n), n ≥ 0.

This difference equation can be written as y (n + 1) = f (y (n)),


where f : C → C is defined by f (x) = −x.

The orbit of 1 subject to f is {1, −1}; both −1 and 1 are ω-limit
points of 1, and hence its ω-limit set is {−1, 1}.
Attracting Sets, Basins of Attraction

The notion of local asymptotic stability can be extended to sets:

A closed, f -invariant set A ⊆ Df is an attracting set for f if there


exists a neighbourhood U of A such that the distance between
f n (x) and A, i.e.

d(f^n(x), A) := min_{z∈A} |f^n(x) − z|,

tends to 0 as n → ∞ for all x ∈ U ∩ Df.

The basin of attraction of an attracting set A is the set of all


x ∈ Df such that lim_{n→∞} d(f^n(x), A) = 0.
Special case: the basin of attraction of {c} with c a fixed point
of f is called the stable set of c.

Example of a Stable Set (I)

Conjecture: The stable set of the fixed point 1/2 of the map

f : x ↦ x^2 + 1/4, x ∈ R (2)

is the interval [−1/2, 1/2].

Proof: We have to show that lim_{n→∞} f^n(x) = 1/2 iff x ∈ [−1/2, 1/2].
I Note first that 1/2 is the unique fixed point of f and that
f(−x) = f(x), x ∈ R.
I Since
f(x) − x = (x − 1/2)^2 ≥ 0 ⇒ f(x) ≥ x,
we know that f^{n+1}(x) = f(f^n(x)) ≥ f^n(x), n ≥ 0, i.e. the
sequence {f^n(x)}_{n=0}^∞ is monotonically nondecreasing.
I In fact, f^{n+1}(x) > f^n(x), n ≥ 1, if f^n(x) ≠ 1/2.
Example of a Stable Set (II)

I If |x| > 1/2, then by induction f^n(x) > |x| > 1/2, n ≥ 1, and
therefore f^n(x) does not converge to 1/2.
I If |x| ≤ 1/2, then by induction f^n(x) ≤ 1/2, n ≥ 1.
I So, if |x| ≤ 1/2, then the sequence {f^n(x)}_{n=0}^∞ is
monotonically nondecreasing as well as bounded above,
so it has a limit, say lim_{n→∞} f^n(x) = ℓ = lim_{n→∞} f^{n+1}(x).
I Continuity of f implies that

ℓ = lim_{n→∞} f^{n+1}(x) = lim_{n→∞} f(f^n(x)) = f(lim_{n→∞} f^n(x)) = f(ℓ).

I Because 1/2 is the unique fixed point of f, we conclude that
ℓ = 1/2.
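The conjecture can also be probed numerically. A sketch (the iteration cap and divergence threshold are arbitrary choices, not from the slides):

```python
# Iterate f(x) = x**2 + 1/4 and watch whether the orbit converges to
# the fixed point 1/2.
def f(x):
    return x * x + 0.25

def orbit_limit(x, n=10000):
    for _ in range(n):
        x = f(x)
        if x > 1e6:          # orbit has clearly diverged
            return float("inf")
    return x

print(orbit_limit(0.3))   # inside [-1/2, 1/2]: approaches 0.5 (slowly)
print(orbit_limit(-0.5))  # boundary point: lands on 0.5 after one step
print(orbit_limit(0.51))  # just outside the interval: the orbit blows up
```

The slow convergence from 0.3 reflects that |f′(1/2)| = 1, the borderline case not covered by the stability theorem below.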

Stable Set but no Stability

We have seen that the stable set of the fixed point 1/2 of (2) is
[−1/2, 1/2].

Since f^n(1/2 + δ) does not converge to 1/2 for any δ > 0, the
equilibrium solution y ≡ 1/2 is not asymptotically stable! (There
is no neighbourhood of 1/2 that is a subset of the stable set.)

In fact, since {f^n(x)}_{n=0}^∞ is not bounded if x > 1/2, the
equilibrium solution y ≡ 1/2 is unstable!
Stability Properties of Equilibrium Solutions

Consider the difference equation

y (n + 1) = f (y (n)), n ∈ N ∪ {0}. (3)

Theorem (1.2.5)
Suppose c ∈ C is a fixed point of f and f is differentiable at c.
Then:
1. If |f′(c)| < 1, then y ≡ c is a (locally) asymptotically stable
solution of (3).
2. If |f′(c)| > 1, then y ≡ c is an unstable solution of (3).

Intuition: Compare y(n + 1) = 2y(n) with y(n + 1) = −(1/2)y(n).
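A sketch applying this criterion to the logistic map f(y) = αy(1 − y), whose fixed points are 0 and 1 − 1/α and whose derivative is f′(y) = α(1 − 2y); the function name `classify` is illustrative:

```python
# Classify a fixed point c of the logistic map via |f'(c)|.
def classify(alpha, c):
    slope = abs(alpha * (1 - 2 * c))
    if slope < 1:
        return "asymptotically stable"
    if slope > 1:
        return "unstable"
    return "inconclusive"   # the theorem says nothing when |f'(c)| = 1

alpha = 2.8
print(classify(alpha, 0.0))             # unstable: |f'(0)| = 2.8
print(classify(alpha, 1 - 1 / alpha))   # asymptotically stable: |f'(c)| = 0.8
```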

Proof of Theorem 1.2.5 (I)

1. Suppose |f′(c)| < 1. Then by definition

lim_{x→c} |(f(x) − f(c))/(x − c)| = |f′(c)| < 1.

I Choose an ε ∈ (0, 1 − |f′(c)|). There exists a δ > 0 such
that 0 < |x − c| < δ implies that

| |(f(x) − f(c))/(x − c)| − |f′(c)| | ≤ |(f(x) − f(c))/(x − c) − f′(c)| < ε.

I Hence: |(f(x) − f(c))/(x − c)| < |f′(c)| + ε =: η < 1 and consequently:

|f(x) − c| = |f(x) − f(c)| < η|x − c| < ηδ.

I So: |f^n(x) − c| < δ ⇒ |f^{n+1}(x) − c| = |f(f^n(x)) − f(c)| <
η|f^n(x) − c|.
Proof of Theorem 1.2.5 (II)

I Using induction we find that if |x − c| < δ (also if x = c!),
then

|f^n(x) − c| ≤ η^n |x − c|, n ≥ 0,

implying that lim_{n→∞} f^n(x) = c. We conclude that y ≡ c is
an asymptotically stable solution.

2. Suppose |f′(c)| > 1.
I Choose an ε ∈ (0, |f′(c)| − 1). One can find a δ > 0 such
that 0 < |x − c| < δ implies that

| |(f(x) − f(c))/(x − c)| − |f′(c)| | ≤ |(f(x) − f(c))/(x − c) − f′(c)| < ε.

I Hence: |(f(x) − f(c))/(x − c)| > |f′(c)| − ε =: d > 1.

Proof of Theorem 1.2.5 (III)

I Consequently:

|f(x) − c| = |f(x) − f(c)| > d|x − c| > |x − c|.

I Therefore:

|f^n(x) − c| < δ ⇒ |f^{n+1}(x) − c| = |f(f^n(x)) − f(c)| > d|f^n(x) − c|.

I Suppose by contradiction that y ≡ c is a stable solution.
Then there exists an η = η(δ) > 0 such that any solution ỹ with
the property |ỹ(0) − c| < η remains within distance δ from
c: |ỹ(n) − c| < δ, ∀n ∈ N.
I But then also |ỹ(n) − c| = |f^n(ỹ(0)) − c| > d^n |ỹ(0) − c| > δ
for n sufficiently large, a contradiction!
A Closely Related Theorem

Theorem (1.2.8)
Let g be a continuous function on a neighbourhood of 0 with
the property lim_{x→0} g(x)/x = 0. Consider the equation

y(n + 1) = ay(n) + g(y(n)).

Then:
I If |a| < 1, then the null solution is asymptotically stable.
I If |a| > 1, then the null solution is unstable.

Proof of Theorem 1.2.8

I Note first that lim_{x→0} g(x)/x = 0 only if
lim_{x→0} g(x) = g(0) = 0. So, 0 is indeed a fixed point of
f : x ↦ ax + g(x).
I Observe that

0 = lim_{x→0} g(x)/x = lim_{x→0} (g(x) − 0)/(x − 0) = lim_{x→0} (g(x) − g(0))/(x − 0) = g′(0).

I Therefore: |f′(0)| = |a + g′(0)| = |a| and the statements
follow from Theorem 1.2.5.
Stability of Periodic Solutions

Recall that y is a p-periodic solution of (3) iff c = y(0) is a fixed
point of f^p (but not of f^k for some k < p!).

Theorem (1.2.11)
Suppose c ∈ C is a fixed point of f^p and f^p is differentiable at c.
Then:
I If f is continuous at f^{k−1}(c) for k = 1, . . . , p − 1 and
|(f^p)′(c)| < 1, then the periodic solution y of (3) with initial
value c is locally (asymptotically) stable.
I If |(f^p)′(c)| > 1, then the periodic solution y of (3) with
initial value c is unstable.

Sketch of Proof of Theorem 1.2.11 (I)

I If |(f^p)′(c)| < 1, then z ≡ c is an asymptotically stable
solution of

z(n + 1) = f^p(z(n)), (4)

i.e. |z̃(n) − c| → 0 as n → ∞ for all solutions z̃ of (4) with
|z̃(0) − c| < δ for some δ > 0.
I Fix a solution ỹ of (3) such that |ỹ(0) − c| < δ. Let
ẑ(n) = ỹ(pn).
I This is a solution of (4) with the property |ẑ(0) − c| < δ, so
|ẑ(n) − c| → 0 as n → ∞.
I Next ingredient: If f is continuous at f^{k−1}(c) for
k = 1, . . . , p − 1, then f^k is continuous at c (exercise!).

Sketch of Proof of Theorem 1.2.11 (II)

I Combining the two observations yields:

|ỹ(pn + k) − y(pn + k)| = |f^k(ỹ(pn)) − f^k(y(pn))| = |f^k(ẑ(n)) − f^k(c)| → 0 as n → ∞

for k = 0, . . . , p − 1.

Remark regarding Theorem 1.2.11

Stability of the p-periodic solution of (3) does imply stability of
the fixed point c of (4), but...
You really need continuity of f at f^{k−1}(c), k = 1, . . . , p − 1, for
the converse statement! (‘In-between iterates’ can diverge if
you do not impose these continuity conditions.)
Linearizing a Difference Equation

Equations like (3) can be linearized around a fixed point c of f
as long as f is differentiable at c. Such a linear approximation is
much easier to work with (but is only an approximation!).

The linearization of y(n + 1) = f(y(n)) is just the Taylor
polynomial of f of order 1:

y(n + 1) = f(c) + f′(c)(y(n) − c) = c + f′(c)(y(n) − c).

This is (of course) a linear difference equation.

The previous few theorems are based on this method.

A Change in the Number of Equilibrium Points

Consider the difference equation

y(n + 1) = f(y(n)) = (y(n))^2 − α. (5)

If α < −1/4, then (5) has no real-valued equilibrium solutions, if
α = −1/4, then it has one equilibrium solution, and if α > −1/4,
then (5) has two real-valued equilibrium solutions.

So, the number of equilibrium solutions of (5) changes as the
parameter α passes through −1/4.

Let us first look at the stability of these solutions.


Stability of the Equilibrium Solutions for various α

The stability of equilibrium solutions changes as α changes,
even if the number of solutions remains constant:
I α < −1/4: No real-valued equilibrium solutions.
I α = −1/4: One equilibrium solution that is unstable (see
previous example).
I α > −1/4: The fixed points are c1 = 1/2 + (1/2)√(1 + 4α) and
c2 = 1/2 − (1/2)√(1 + 4α). Since |f′(c1)| = |2c1| > 1, y1 ≡ c1 is
unstable. Furthermore:
I If −1/4 < α < 3/4, then |f′(c2)| < 1 → y2 ≡ c2 is asymptotically
stable.
I If α > 3/4, then |f′(c2)| > 1 → y2 ≡ c2 is unstable.

Bifurcations

A (local) bifurcation occurs when a change in a parameter


causes the stability of equilibrium solutions to change.

The example: As α passes through 3/4 the stability of y2 ≡ c2
changes (from asymptotically stable to unstable).

The point (in the parameter space) at which the bifurcation


occurs is called the bifurcation point.

There are different types of bifurcations.


Saddle-node Bifurcations

A saddle-node bifurcation occurs if two equilibrium solutions


appear at the bifurcation point, one of which is unstable and the
other is asymptotically stable.

This happened in our example at α = −1/4.

Transcritical Bifurcations (I)

A transcritical bifurcation occurs when the stability of a fixed


point is transferred to another fixed point.

Example: Consider y (n + 1) = αy (n) − (y (n))2 . If −1 < α < 1,


then y ≡ 0 is asymptotically stable and the other fixed point
(α − 1) generates an unstable equilibrium solution. However, if
1 < α < 3, then y ≡ 0 is unstable whereas z ≡ α − 1 is
asymptotically stable. So, at α = 1 a transcritical bifurcation
occurs.
Transcritical Bifurcations (II)
Picture of a bifurcation diagram with a transcritical bifurcation:

Pitchfork Bifurcations (I)

A pitchfork bifurcation occurs when one asymptotically stable


solution is replaced by two other asymptotically stable
solutions.

Example: Consider y (n + 1) = αy (n) − (y (n))3 . If α < 1, the


null solution is the unique equilibrium solution and it is
asymptotically stable. If 1 < α < 2, then there are two
additional equilibrium solutions, both of which are
asymptotically stable. However, y ≡ 0 is unstable if α > 1. So,
at α = 1 a pitchfork bifurcation occurs.
Pitchfork Bifurcations (II)
Picture of a bifurcation diagram with a pitchfork bifurcation:

Period-doubling or Flip Bifurcation

A period-doubling bifurcation or flip bifurcation occurs when an


asymptotically stable equilibrium solution is replaced by two
asymptotically stable 2-periodic solutions with the same orbit.

Consider the logistic difference equation
y(n + 1) = αy(n)(1 − y(n)). If 1 < α < 3, then the null solution
is unstable and the equilibrium solution y ≡ 1 − 1/α is
asymptotically stable. If 3 < α < 1 + √6, then both these
equilibrium solutions are unstable, but there are two
asymptotically stable 2-periodic solutions. So, at α = 3 a
period-doubling bifurcation occurs.
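A sketch illustrating the attracting 2-cycle; α = 3.2 is an arbitrary value in (3, 1 + √6), and the transient length is an illustrative choice:

```python
# Past the period-doubling bifurcation, orbits of the logistic map
# settle onto a 2-cycle: discard a transient, then read off the two
# alternating values.
def f(y, alpha=3.2):
    return alpha * y * (1 - y)

y = 0.1
for _ in range(1000):   # discard the transient
    y = f(y)
print(sorted([round(y, 4), round(f(y), 4)]))  # the two points of the 2-cycle
```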
The Logistic Difference Equation has a lot of
Bifurcations

The Logistic Difference Equation with α = 2.8


Solutions with initial values y1 (0) = 0.1 and y2 (0) = 0.2:
The Logistic Difference Equation with α = 3.5
Solutions with initial values y1 (0) = 0.1 and y2 (0) = 0.2:

The Logistic Difference Equation with α = 4


Solutions with initial values y1 (0) = 0.1 and y2 (0) = 0.11:
Difference and Differential Equations
Lecture 3

Allard van der Made

Week 47 (Tuesday)

This Lecture

Linear Difference Equations

The Solution Space of Homogeneous LDEs

Inhomogeneous LDEs

Solutions of First Order LDEs


Linear Difference Equations
Let us consider linear difference equations (LDEs):

F(n, y(n), y(n + 1), . . . , y(n + k)) =
b(n) + a0(n)y(n) + a1(n)y(n + 1) + . . . + ak(n)y(n + k) = 0, n ≥ 0,

where, to ensure that k is the order of this LDE, we assume that
ak(n) ≠ 0, ∀n ∈ N ∪ {0}. This allows us to normalize ak ≡ 1.

Such expressions can be written more concisely:

Σ_{j=0}^k aj τ^j(y) = −b, (1)

where b and aj, j ∈ {0, 1, . . . , k}, are functions of n, i.e.
sequences.

The symbol τ is the shift operator.

The Shift Operator


With the shift operator τ you can ‘move back and forth in a
sequence’:
Let z : Z → C. Then:
I τ^0(z(n)) = z(n), τ(z(n)) = z(n + 1), τ^k(z(n)) = z(n + k),
k ∈ N.
I Or ‘go back in time’:
τ^{−1}(z(n)) = z(n − 1), τ^{−k}(z(n)) = z(n − k), k ∈ N.

I A linear combination of shifts L = Σ_{j=0}^k aj τ^j is a linear
operator acting on sequences indexed by Z and is called a linear
difference operator of order k. Remark: aj depends in
general on n, j ∈ {0, 1, . . . , k}.
I Each L uniquely defines a homogeneous LDE L(y) = 0.
I Any LDE can be written as L(y) = −b for some linear
operator L and sequence b.
Homogeneous and Inhomogeneous Linear Difference
Equations
If an LDE is homogeneous, then b ≡ 0. Such a difference
equation can be written as:

a0 (n)y (n) + a1 (n)y (n + 1) + . . . + ak (n)y (n + k ) = 0, n ≥ 0.


Equivalently: Σ_{j=0}^k aj τ^j(y) = 0.

An LDE is homogeneous if and only if the null sequence is one


of its solutions.

If an LDE is inhomogeneous, then b 6≡ 0. Such a difference


equation can be written as:

a0 (n)y (n) + a1 (n)y (n + 1) + . . . + ak (n)y (n + k ) = −b(n), n ≥ 0.

Equivalently: Σ_{j=0}^k aj τ^j(y) = −b.

The Solutions of Homogeneous LDEs

Lemma
Let L(y ) = 0 be a homogeneous LDE. Then:
I The set of all real-valued solutions of this equation is a
linear space over R.
I The set of all complex-valued solutions of this equation is a
linear space over C.
Sketch of proof: Fix an L = Σ_{j=0}^k aj τ^j. Suppose that L(y1) = 0
and L(y2) = 0. We have to prove that for any n ≥ 0 and any
(λ1, λ2) in R^2 or C^2 one has:

L(λ1 y1 + λ2 y2 )(n) = λ1 L(y1 )(n) + λ2 L(y2 )(n).

This follows from the fact that L is a linear operator.


The General Solution of a Homogeneous LDE
Theorem (1.3.2)
Let L(y ) = 0 be a homogeneous LDE of order k . Then:
I The set of all real-valued solutions of this equation forms a
k-dimensional linear space over R.
I The set of all complex-valued solutions of this equation
forms a k-dimensional linear space over C.

Proof: It remains to show that the dimension of the solution


space is exactly k .
I A solution of L(y ) = 0 is uniquely defined by its k initial
values, which form a k -dimensional vector. You can
choose at most k linearly independent k -dimensional
vectors, so the dimension is at most k .
I Each k -dimensional vector yields k initial values and hence
a solution. Since one can construct k linearly independent
k -dimensional vectors, the dimension is at least k .

Bases of Solution Spaces


Once you have a basis of the solution space of the k th order
difference equation L(y ) = 0, you have the general solution of
this difference equation and can solve initial value problems.

But how do you check that y1 , . . . , yk form a basis of the


solution space of L(y ) = 0?
I Of course, yj must be a solution of L(y ) = 0, j ∈ {1, . . . , k }.
I The vectors containing the initial values must be linearly
independent (Lemma 1.3.3).
I Actually, it suffices (see Theorem 1.3.5) if the vectors

(y1 (n), . . . , y1 (n + k − 1)), . . . , (yk (n), . . . , yk (n + k − 1))

are linearly independent for some n ∈ N ∪ {0} (shorter


vectors won’t work of course).
Casorati Determinants (I)

The vectors with the initial values of y1, . . . , yk are linearly
independent if and only if

det [ y1(0)       y2(0)       . . .  yk(0)
      y1(1)       y2(1)       . . .  yk(1)
      . . .
      y1(k − 1)   y2(k − 1)   . . .  yk(k − 1) ] ≠ 0.

Casorati Determinants (II)

Similarly, the vectors
(y1(n), . . . , y1(n + k − 1)), . . . , (yk(n), . . . , yk(n + k − 1)) are
linearly independent iff their Casorati determinant is not zero.
That is, iff

Cy1,...,yk(n) = det [ y1(n)           . . .  yk(n)
                     y1(n + 1)       . . .  yk(n + 1)
                     . . .
                     y1(n + k − 1)   . . .  yk(n + k − 1) ] ≠ 0.
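A sketch computing a 2 × 2 Casorati determinant for the solutions y1(n) = 2^n and y2(n) = 3^n of y(n + 2) − 5y(n + 1) + 6y(n) = 0 (the example used later in this lecture); here C(n) = 6^n, which is never zero:

```python
# 2x2 Casorati determinant of two sequences y1, y2 at index n.
def casorati(y1, y2, n):
    return y1(n) * y2(n + 1) - y1(n + 1) * y2(n)

y1 = lambda n: 2**n
y2 = lambda n: 3**n
print([casorati(y1, y2, n) for n in range(4)])  # [1, 6, 36, 216]
```

This also illustrates the lemma below: with k = 2 and a0(n) = 6 the determinant satisfies C(n + 1) = (−1)^2 · 6 · C(n).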

Casorati determinants have an interesting property:


The Casorati Determinant is itself a Solution

Lemma (1.3.4)
Suppose y1, . . . , yk are solutions of L(y) = 0, where
L = Σ_{j=0}^k aj τ^j. Then Cy1,...,yk is a solution of

y(n + 1) = (−1)^k a0(n)y(n), n ∈ N ∪ {0}.

Proof:
I The case k = 1 is trivial.
I If k = 2, then

Cy1,y2(n + 1) = det [ y1(n + 1)   y2(n + 1)
                      y1(n + 2)   y2(n + 2) ]

              = det [ y1(n + 1)                        y2(n + 1)
                      −a0(n)y1(n) − a1(n)y1(n + 1)     −a0(n)y2(n) − a1(n)y2(n + 1) ].
Proof of Lemma 1.3.4 (continued)

I The determinant of a matrix is a linear function of its rows:

Cy1,y2(n + 1) = −a0(n) det [ y1(n + 1)   y2(n + 1)
                             y1(n)       y2(n) ]
                − a1(n) det [ y1(n + 1)   y2(n + 1)
                              y1(n + 1)   y2(n + 1) ]

              = +a0(n) det [ y1(n)       y2(n)
                             y1(n + 1)   y2(n + 1) ] = (−1)^2 a0(n)Cy1,y2(n).

I General case: replace the entries of the last row by
−Σ_{ℓ=0}^{k−1} aℓ(n)yj(n + ℓ), j = 1, . . . , k, and use the linearity of
the determinant.
The General Solution of Inhomogeneous LDEs
Once you know the general solution of the homogeneous LDE
L(y ) = 0 and one particular solution of the inhomogeneous
LDE L(y ) = b (b 6≡ 0), then you can construct the general
solution of L(y ) = b:
Theorem (1.3.7)
Let y0 be a particular solution of L(y ) = b. Then every solution
of L(y ) = b can be written as the sum of y0 and some solution
of the homogeneous LDE L(y ) = 0. Conversely, any sequence
that can be written as the sum of y0 and a solution of L(y ) = 0
is a solution of L(y ) = b.
Proof: If ỹ is a solution of L(y) = b, then
L(ỹ − y0) = L(ỹ) − L(y0) = b − b = 0. So, ỹ = y0 + (ỹ − y0) can
be written as such a sum. Conversely, if L(ŷ) = 0, then
L(y0 + ŷ) = L(y0) + L(ŷ) = b + 0 = b. So, y0 + ŷ is a solution of
L(y) = b.

The General Solution of an Inhomogeneous LDE: Simple Example
Consider the difference equation

y (n + 2) − 5y (n + 1) + 6y (n) = 2, n ≥ 0. (2)

I The sequences y1(n) = 2^n, n ≥ 0, and y2(n) = 3^n, n ≥ 0,
are solutions of the homogeneous equation
y(n + 2) − 5y(n + 1) + 6y(n) = 0.
I Because det [ 1 1 ; 2 3 ] = 1 ≠ 0, y1 and y2 are linearly
independent and form a basis of the solution space of the
second order LDE y(n + 2) − 5y(n + 1) + 6y(n) = 0.
I The sequence y0 ≡ 1 is a particular solution of (2).
I The general solution of (2) is thus:

ỹ(n) = c1 2^n + c2 3^n + 1, n ≥ 0, c1, c2 ∈ C or c1, c2 ∈ R.
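A quick numerical sanity check of this general solution (an illustrative Python sketch; the computer work in this course uses MATLAB):

```python
# Check that y(n) = c1*2^n + c2*3^n + 1 satisfies y(n+2) - 5*y(n+1) + 6*y(n) = 2
# for arbitrary constants c1, c2.
def y(n, c1, c2):
    return c1 * 2**n + c2 * 3**n + 1

def residual(n, c1, c2):
    # equals 2 for every n whenever y is a solution of (2)
    return y(n + 2, c1, c2) - 5 * y(n + 1, c1, c2) + 6 * y(n, c1, c2)

checks = [residual(n, c1, c2) for n in range(10) for (c1, c2) in [(1, 0), (0, 1), (-3, 7)]]
print(all(r == 2 for r in checks))  # → True
```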
An Inhomogeneous LDE of Order 2 (I)
Consider the following LDE:

y (n + 2) = αy (n + 1) + βy (n) + γ, n ≥ 0, (3)

with α + β ≠ 1 and γ ≠ 0.

The general (complex-valued) solution of its homogeneous
counterpart y(n + 2) − αy(n + 1) − βy(n) = 0 reads:

y(n) = c1 ((α − √(α^2 + 4β))/2)^n + c2 ((α + √(α^2 + 4β))/2)^n, c1, c2 ∈ C.

Trick: Solve λ^2 − αλ − β = 0.

Remark: Verify that the solutions {((α − √(α^2 + 4β))/2)^n}_{n=0}^∞ and
{((α + √(α^2 + 4β))/2)^n}_{n=0}^∞ are independent.

An Inhomogeneous LDE of Order 2 (II)

Furthermore: y0 ≡ γ/(1 − α − β) is the (unique) equilibrium solution of
(3).
Therefore: any solution z of (3) can be written as

z(n) = γ/(1 − α − β) + c1 ((α − √(α^2 + 4β))/2)^n + c2 ((α + √(α^2 + 4β))/2)^n,

where n ∈ N ∪ {0}, for some c1 , c2 ∈ C.
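This closed form can be checked numerically. With α = 5, β = −6, γ = 2 (the numbers of the earlier simple example) the roots are 2 and 3 and the equilibrium is 1. An illustrative Python sketch (the course's computer examples use MATLAB):

```python
import math

alpha, beta, gamma = 5.0, -6.0, 2.0  # same numbers as the earlier example
disc = math.sqrt(alpha**2 + 4 * beta)   # sqrt(25 - 24) = 1
lam1 = (alpha - disc) / 2               # root (alpha - sqrt(alpha^2+4*beta))/2 = 2
lam2 = (alpha + disc) / 2               # root (alpha + sqrt(alpha^2+4*beta))/2 = 3
eq = gamma / (1 - alpha - beta)         # equilibrium gamma/(1-alpha-beta) = 1

def z(n, c1, c2):
    return eq + c1 * lam1**n + c2 * lam2**n

# z must satisfy z(n+2) = alpha*z(n+1) + beta*z(n) + gamma for any c1, c2
ok = all(abs(z(n + 2, 1.5, -0.5)
             - (alpha * z(n + 1, 1.5, -0.5) + beta * z(n, 1.5, -0.5) + gamma)) < 1e-9
         for n in range(10))
print(ok)  # → True
```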


Homogeneous First Order LDEs

Finding the solutions of homogeneous first order LDEs is


relatively easy. The general solution of

y(n + 1) = a(n)y(n), n ∈ N ∪ {0},

given an initial value y (0) = c ∈ C is


y(0) = c, y(n) = c Π_{m=0}^{n−1} a(m), n ∈ N. (4)

For instance, if a ≡ r , then you get a geometric progression:

y(n) = c r^n, c ∈ R.

Finding Solutions of Inhomogeneous First Order LDEs

Consider the inhomogeneous first order LDE

y (n + 1) = y (n) + b(n), n ≥ 0. (5)

Note that (telescoping property):

y (n) − y (0) = b(0) + b(1) + . . . + b(n − 1).

The general solution of (5) is consequently:


y(0) = c, y(n) = c + Σ_{m=0}^{n−1} b(m), n ∈ N, c ∈ C.
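The telescoping argument is easy to mirror in code; a small Python sketch (names are illustrative, the course itself uses MATLAB):

```python
def solve_first_order(c, b, N):
    """Iterate y(n+1) = y(n) + b(n) and also build the closed form c + sum(b[:n])."""
    y = [c]
    for n in range(N):
        y.append(y[-1] + b[n])            # direct iteration
    closed = [c + sum(b[:n]) for n in range(N + 1)]  # telescoped closed form
    return y, closed

b = [3, -1, 4, -1, 5, 9, -2, 6]
y, closed = solve_first_order(7, b, len(b))
print(y == closed)  # → True
```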
Variation of Constants (I)
Consider now

y (n + 1) = a(n)y (n) + b(n), n ≥ 0. (6)

Suppose you know a solution y0 of y(n + 1) − a(n)y(n) = 0
such that y0(n) ≠ 0 for all n. Then:

y(n + 1)/y0(n + 1) = (a(n)y(n) + b(n))/y0(n + 1) = (a(n)y(n))/(a(n)y0(n)) + b(n)/y0(n + 1)

for any solution of (6).

Let us look at the difference equation governing the ratio
z(n) = y(n)/y0(n):

z(n + 1) = z(n) + b(n)/y0(n + 1), n ≥ 0.

Variation of Constants (II)

We know how to find solutions of (5), hence:


z(0) = c, z(n) = c + Σ_{m=0}^{n−1} b(m)/y0(m + 1), n ∈ N, c ∈ C.

We can deduce y (n) = y0 (n)z(n) from this expression:


y(0) = c y0(0), y(n) = c y0(n) + y0(n) Σ_{m=0}^{n−1} b(m)/y0(m + 1), n ∈ N, c ∈ C.

But we already know what y0 (n) looks like!


Variation of Constants (III)
Since y0(n) = c̃ Π_{m=0}^{n−1} a(m) for some c̃ ∈ C, one has:

y0(n) b(m)/y0(m + 1) = c̃ Π_{j=0}^{n−1} a(j) · b(m)/(c̃ Π_{j=0}^{m} a(j)) = b(m) Π_{j=m+1}^{n−1} a(j),

where by definition Π_{j∈∅} a(j) = 1.

This implies that the general solution of (6) is as follows:


y(0) = c y0(0), y(n) = c y0(n) + Σ_{m=0}^{n−1} ( b(m) Π_{j=m+1}^{n−1} a(j) ), n ∈ N, c ∈ C.
(7)
Remark: This is also the general solution if y0 (n) = 0 for some
n ∈ N ∪ {0}.
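Formula (7) can be verified against direct iteration of (6); a Python sketch with arbitrarily chosen coefficient sequences (note that a(n) = 0 is allowed, in line with the remark above):

```python
from functools import reduce

def vc_solution(c, a, b, n):
    """Variation-of-constants formula (7):
    y(n) = c*prod(a[0:n]) + sum_{m=0}^{n-1} b[m]*prod(a[m+1:n])."""
    def prod(seq):
        return reduce(lambda p, x: p * x, seq, 1)  # empty product = 1
    return c * prod(a[:n]) + sum(b[m] * prod(a[m + 1:n]) for m in range(n))

a = [2, -1, 3, 0, 5, 2]    # a(n) may vanish; (7) still applies
b = [1, 1, 2, 3, 5, 8]

# Compare against direct iteration of y(n+1) = a(n)*y(n) + b(n).
y, iterates = 4, [4]
for n in range(len(a)):
    y = a[n] * y + b[n]
    iterates.append(y)
print(all(vc_solution(4, a, b, n) == iterates[n] for n in range(len(a) + 1)))  # → True
```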

Variation of Constants: Example

Consider the LDE y(n + 1) = ay(n) + b, with a ≠ 1 and b ≠ 0


and initial value y (0) = c, c ∈ C.

Using (7) we obtain the solution of this initial value problem:


y(0) = c, y(n) = c a^n + Σ_{m=0}^{n−1} ( b Π_{j=m+1}^{n−1} a )

              = c a^n + b Σ_{ℓ=0}^{n−1} a^ℓ = c a^n + b (1 − a^n)/(1 − a), n ≥ 1.
Difference and Differential Equations
Lecture 4

Allard van der Made

Week 47 (Friday)

This Lecture

Stability of Solutions of LDEs

Systems of First Order Difference Equations

Solutions of Homogeneous Systems of First Order LDEs

Fundamental Matrices

Solutions of Inhomogeneous Systems of First Order LDEs


Stability of Solutions of LDEs

How do you determine whether a solution of L(y) = b is


neutrally stable, locally or globally asymptotically stable, or
unstable?

Recall the definition of neutral stability:


A solution y of the k th order LDE L(y) = b is (neutrally) stable if
for every ε > 0 one can find a δ > 0 such that for every solution
ỹ with the property

|y(n) − ỹ(n)| ≤ δ, n = 0, 1, . . . , k − 1,

we have: |y(n) − ỹ(n)| ≤ ε for all n ≥ 0.

Focusing on the Null Sequence

Suppose y is a stable solution of L(y ) = b and that the solution


ỹ of L(y) = b is such that |y(n) − ỹ(n)| ≤ ε, n ≥ 0, for some
ε > 0. Then:
I The sequence y − ỹ is a solution of L(y ) = 0:
L(y − ỹ ) = L(y ) − L(ỹ ) = b − b = 0.
I If y is a stable solution of L(y ) = b, then the null sequence
is a stable solution of L(y ) = 0 and vice versa.
I A similar conclusion can be drawn regarding asymptotic
stability.
The Null Solution Reveals all Stability Properties

Theorem (1.3.14’)
All solutions of L(y ) = b are neutrally stable, globally
asymptotically stable, or unstable iff the null solution of L(y ) = 0
is neutrally stable, globally asymptotically stable, or unstable,
respectively. Furthermore, every asymptotically stable solution
of either LDE is globally asymptotically stable.
Proof: The iff-statement follows from the previous slide and
Theorem 1.3.7.
I Suppose z ≡ 0 is an asymptotically stable solution of
L(y) = 0, i.e. there exists a δ > 0 such that for every
solution z̃ with |z̃(n)| < δ, n = 0, 1, . . . , k − 1, one has
lim_{n→∞} z̃(n) = 0.
I Now take an arbitrary solution ẑ of L(y ) = 0 and let
M = 1 + max{|ẑ(0)|, |ẑ(1)|, . . . , |ẑ(k − 1)|}.

Proof of Theorem 1.3.14’ (continued)

I The sequence (δ/M)ẑ is a solution of L(y) = 0 and, because
  |(δ/M)ẑ(n)| < δ, n = 0, 1, . . . , k − 1, we know that
  lim_{n→∞} (δ/M)ẑ(n) = 0.
I But then

  lim_{n→∞} ẑ(n) = (M/δ) lim_{n→∞} (δ/M)ẑ(n) = 0

and we conclude that the null solution is globally


asymptotically stable.
I Combining this observation with the iff-statement of the
theorem reveals that L(y ) = b cannot have solutions which
are merely locally asymptotically stable.
Systems of First Order Difference Equations

Until now we have confined attention to situations with only one


dependent variable. However, it is not difficult to consider
multiple dependent variables:

y1 (n + 1) = f1 (n, y1 (n), y2 (n), . . . , yk (n)),


y2 (n + 1) = f2 (n, y1 (n), y2 (n), . . . , yk (n)),
.. .. (1)
. .
yk (n + 1) = fk (n, y1 (n), y2 (n), . . . , yk (n)).

This is a system of k first order difference equations with k


dependent variables.

From Separate Equations to a Vectorial Equation

The system (1) can be written more concisely as

y (n + 1) = f (n, y (n)), (2)

where

        ( y1(n) )                 ( f1(n, y(n)) )
        ( y2(n) )                 ( f2(n, y(n)) )
y(n) =  (   ⋮   ) , f(n, y(n)) =  (      ⋮      ) .
        ( yk(n) )                 ( fk(n, y(n)) )
Solutions of Systems of First Order Difference
Equations

A complex-valued solution y of (2) is a complex-valued
sequence of k-dimensional vectors or, put differently, a
k-dimensional complex-valued vector function on (a subset of)
Z:
y : Z → C^k.
Real-valued solutions y of (2) are real-valued k-dimensional
vector functions:
y : Z → R^k.

Systems of First Order LDEs

Suppose each fj appearing in (1) is affine, i.e.

fj(n, y1(n), y2(n), . . . , yk(n)) = Σ_{ℓ=1}^{k} a_jℓ(n) yℓ(n) + bj(n), j = 1, 2, . . . , k.

Then the system (2) is linear and can be written as follows:

( y1(n + 1) )   ( a11(n)  a12(n)  . . .  a1k(n) ) ( y1(n) )   ( b1(n) )
( y2(n + 1) )   ( a21(n)  a22(n)  . . .  a2k(n) ) ( y2(n) )   ( b2(n) )
(     ⋮     ) = (   ⋮       ⋮       ⋱      ⋮    ) (   ⋮   ) + (   ⋮   ) ,
( yk(n + 1) )   ( ak1(n)  ak2(n)  . . .  akk(n) ) ( yk(n) )   ( bk(n) )

or as
y (n + 1) = A(n)y (n) + b(n). (3)
From Order k to Order 1 (I)
We have already seen that finding solutions of

y (n + k ) = f (n, y (n), y (n + 1), . . . , y (n + k − 1)), n ≥ 0

can be challenging if k ≥ 2. Luckily, this difference equation


can be transformed into a system of k first order difference
equations:
I Introduce k new variables: yj (n) = y (n + j − 1),
n ∈ N ∪ {0}, j = 1, . . . , k .
I Then:

y1 (n + 1) =y (n + 1) = y2 (n),
y2 (n + 1) =y (n + 2) = y3 (n),
.. ..
. .
yk −1 (n + 1) =y (n + k − 1) = yk (n).

I And: yk (n + 1) = f (n, y1 (n), y2 (n), . . . , yk (n)).

From Order k to Order 1 (II)

So we obtain the following system of first order difference


equations:
   
y1 (n + 1) y2 (n)
y2 (n + 1)  y3 (n) 
=  , n ≥ 0. (4)
   
 .. ..
 .   . 
yk (n + 1) f (n, y1 (n), y2 (n), . . . , yk (n))
Example: y(n + 3) = y(n + 1) + (y(n))^2 is equivalent to

( y1(n + 1) )   (       y2(n)       )
( y2(n + 1) ) = (       y3(n)       ) .
( y3(n + 1) )   ( y2(n) + (y1(n))^2 )
From Order k to Order 1: LDEs

If you start with an LDE of order k, say L(y) = −b with
L = Σ_{j=0}^{k} aj τ^j (ak ≡ 1), then (4) reduces to

( y1(n + 1) )   (                          y2(n)                          )
( y2(n + 1) )   (                          y3(n)                          )
(     ⋮     ) = (                            ⋮                            )
( yk(n + 1) )   ( −ak−1 yk(n) − ak−2 yk−1(n) − . . . − a0 y1(n) − b(n)    )

                (    0        1        0     . . . ) ( y1(n) )   (   0   )
                (    0        0        1           ) ( y2(n) )   (   0   )
              = (    ⋮        ⋮        ⋮       ⋱   ) (   ⋮   ) + (   ⋮   ) .
                ( −a0(n)  −a1(n)  −a2(n)  . . .    ) ( yk(n) )   ( −b(n) )
(5)

Homogeneous First Order Vectorial LDEs

The solutions of the homogeneous first order vectorial LDE

y (n + 1) = A(n)y (n), (6)

where each A(n) is a k × k matrix, are reminiscent of those of


homogeneous scalar first order LDEs:

y (n) = A(n − 1)A(n − 2) . . . A(0)c, n ≥ 1, y (0) = c ∈ Ck .

Of course, you can choose k linearly independent vectors


c1 , . . . , ck ∈ Ck , so...
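The product formula y(n) = A(n−1)A(n−2) . . . A(0)c is easy to implement; a plain-Python 2×2 sketch with a made-up, illustrative coefficient matrix A(n) (the course's computer examples use MATLAB):

```python
def matvec(A, v):
    # 2x2 matrix times 2-vector
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

def solve(A_of, c, n):
    """y(n) = A(n-1) ... A(0) c, computed by applying A(0) first."""
    y = list(c)
    for m in range(n):
        y = matvec(A_of(m), y)
    return y

A_of = lambda n: [[1, n], [0, 2]]   # illustrative n-dependent coefficient matrix
print(solve(A_of, [1, 1], 3))       # → [11, 8]
```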
The Solution Space of Homogeneous First Order
Vectorial LDEs

Theorem (1.4.6)
The solutions of (6) form a k -dimensional linear space. A
collection of k solutions {y1 , . . . , yk } is a basis of this space if
and only if
det[y1(n) y2(n) . . . yk(n)] ≠ 0
for some n ∈ N ∪ {0}.

The sequence of k × k matrices
Y(n) = [y1(n) y2(n) . . . yk(n)], whose columns are k linearly
independent solutions, is called a fundamental matrix of (6).

Note that:
Y (n + 1) = A(n)Y (n), n ≥ 0.

Using the Fundamental Matrix

The column vectors of a fundamental matrix Y (n) of (6) form a


basis of its solution space. So, every solution ỹ of (6) can be
written as follows:
 
c1
k
X  c2 
ỹ (n) = cj yj (n) = Y (n)  . 
 
 .. 
j=1
ck

for some numbers c1 , . . . , ck ∈ C.


Fundamental Matrices of Autonomous First Order
Vectorial LDEs

If the k × k matrix A in the homogeneous equation

y (n + 1) = Ay (n) (7)

does not depend on n, then the general solution of (7) is simply

y(n) = A^n c, n ≥ 0, c ∈ C^k.

A fundamental matrix of (7) is thus

Y(n) = A^n C, n ≥ 0

for some invertible matrix Y(0) = C ∈ C^{k×k}.

Fundamental Matrix: Example


Consider the following vectorial LDE:

y(n + 1) = ( 2  1 ) y(n), n ≥ 0.
           ( 0  3 )

The vectorial sequences

y1(n) = ( 1 ) 2^n, n ≥ 0,   y2(n) = ( 1 ) 3^n, n ≥ 0
        ( 0 )                       ( 1 )

are two linearly independent solutions of this vectorial LDE.

So, a fundamental matrix of this LDE is

Y(n) = ( 2^n  3^n ), n ≥ 0.
       (  0   3^n )
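One can confirm numerically that this Y satisfies Y(n + 1) = AY(n) and that Y(0) is invertible (illustrative Python sketch; the course uses MATLAB for such checks):

```python
def Y(n):
    # fundamental matrix [[2^n, 3^n], [0, 3^n]] of y(n+1) = [[2,1],[0,3]] y(n)
    return [[2**n, 3**n], [0, 3**n]]

def matmul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A = [[2, 1], [0, 3]]
ok = all(matmul(A, Y(n)) == Y(n + 1) for n in range(8))
det0 = Y(0)[0][0] * Y(0)[1][1] - Y(0)[0][1] * Y(0)[1][0]
print(ok, det0)  # → True 1
```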
Jordan Normal Forms

Being able to calculate the solution y(n) = A^n c hinges on being
able to calculate A^n.
Writing the k × k matrix A in Jordan normal form makes
calculating A^n relatively easy. Recall:

A = SJS −1 , where:
I The matrix S contains the (generalized) eigenvectors of A.
I The matrix J has the eigenvalues of A on its main diagonal
and possibly 1s right above some of the main diagonal
entries.
Easiest case: A has k linearly independent eigenvectors
s1 , . . . , sk with eigenvalues λ1 , . . . , λk . Then:

S = [s1 s2 . . . sk ], J = diag{λ1 , . . . , λk }.

Jordan Blocks: Example 1.4.10

Consider the matrix

    ( 0    3/2  1 )
A = ( 1/2  0    0 ) .
    ( 0    1/2  0 )

This matrix has a simple eigenvalue (λ1 = 1) and an eigenvalue
with algebraic multiplicity 2 and geometric multiplicity 1
(λ2 = −1/2). The Jordan matrix J of A is therefore:

    ( 1  0     0    )
J = ( 0  −1/2  1    ) .
    ( 0  0     −1/2 )
Generalized Eigenvectors: Example 1.4.10

An eigenvector associated with λ1 is (4, 2, 1)^T and one
associated with λ2 is u = (1, −1, 1)^T, but a third eigenvector
cannot be found (the eigenspace of each eigenvalue is
1-dimensional).
Solution: λ2 has algebraic multiplicity 2 and hence has a
generalized eigenvector v, which solves

(A − λ2 I)v = u ⇔ ( 1/2  3/2  1   ) ( v1 )   (  1 )     ( v1 )   ( −2 )
                  ( 1/2  1/2  0   ) ( v2 ) = ( −1 )  ⇔  ( v2 ) = (  0 ) .
                  ( 0    1/2  1/2 ) ( v3 )   (  1 )     ( v3 )   (  2 )

Remark: The generalized eigenvector v of degree 2 is such
that (A − λ2 I)v ≠ 0 and (A − λ2 I)^2 v = 0.

A Jordan Normal Form: Example 1.4.10

Combining the last few slides yields:

A = SJS^{−1} ⇐⇒

( 0    3/2  1 )   ( 4   1  −2 ) ( 1   0    0   ) ( 4   1  −2 )^{−1}
( 1/2  0    0 ) = ( 2  −1   0 ) ( 0  −1/2  1   ) ( 2  −1   0 )      .
( 0    1/2  0 )   ( 1   1   2 ) ( 0   0   −1/2 ) ( 1   1   2 )
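Since A = SJS^{−1} is equivalent to AS = SJ (S being invertible), the decomposition above can be checked exactly with rational arithmetic; an illustrative Python sketch:

```python
from fractions import Fraction as F

A = [[F(0), F(3, 2), F(1)],
     [F(1, 2), F(0), F(0)],
     [F(0), F(1, 2), F(0)]]
S = [[F(4), F(1), F(-2)],   # columns: eigenvector for 1, eigenvector for -1/2,
     [F(2), F(-1), F(0)],   # and the generalized eigenvector v
     [F(1), F(1), F(2)]]
J = [[F(1), F(0), F(0)],
     [F(0), F(-1, 2), F(1)],
     [F(0), F(0), F(-1, 2)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

# A = S J S^{-1}  <=>  A S = S J  (no inverse needed for the check)
print(matmul(A, S) == matmul(S, J))  # → True
```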
Using Jordan Normal Forms (I)

The general solution of (7) can now be written as follows:

y(n) = A^n c = (SJS^{−1})^n c = S J^n S^{−1} c, c ∈ C^k.

You can take Y(0) = S, resulting in the following fundamental
matrix (which you can calculate relatively easily):

Y(n) = A^n S = S J^n.

If J = diag{λ1, . . . , λk}, then J^n = diag{λ1^n, . . . , λk^n} and hence:

Y(n) = [ λ1^n s1  λ2^n s2  . . .  λk^n sk ].

Remark: See Appendices A.1 and A.2 for further details
regarding Jordan normal forms.

Using Jordan Normal Forms (II)

Using Y(n) = S J^n, the solution of the initial value problem
y(n + 1) = Ay(n), y(0) = y0, is:

y(n) = S J^n c, c = S^{−1} y0.
Using Jordan Normal Forms: Example (I)

Consider again the vectorial LDE:

y(n + 1) = ( 2  1 ) y(n), n ≥ 0.
           ( 0  3 )

Suppose that y(0) = (5, −4). Determine the solution of this
initial value problem.

I The eigenvalues of A = ( 2  1 ) are λ1 = 2 and λ2 = 3.
                         ( 0  3 )
I An eigenvector associated with λ1 is s1 = (1, 0) and an
  eigenvector associated with λ2 is s2 = (1, 1).
I Let J := diag{λ1, λ2} = ( 2  0 ) and S := ( 1  1 ).
                          ( 0  3 )          ( 0  1 )
I The Jordan normal form of A is SJS^{−1}.

Using Jordan Normal Forms: Example (II)


I A fundamental matrix of the vectorial LDE is

  SJ^n = ( 1  1 ) ( 2^n   0  ) = ( 2^n  3^n ).
         ( 0  1 ) (  0   3^n )   (  0   3^n )

I To find the solution of the initial value problem we calculate
  S^{−1} y(0):

  S^{−1} y(0) = ( 1  −1 ) (  5 ) = (  9 ).
                ( 0   1 ) ( −4 )   ( −4 )

I So, the solution of the initial value problem is:

  y(n) = (SJ^n)(S^{−1} y(0)) = ( 2^n  3^n ) (  9 ) = ( 9 × 2^n − 4 × 3^n ), n ≥ 0.
                               (  0   3^n ) ( −4 )   (     −4 × 3^n      )
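A check of this solution against direct iteration of the recurrence (illustrative Python sketch; the course uses MATLAB):

```python
def y(n):
    # claimed solution (9*2^n - 4*3^n, -4*3^n) of the initial value problem
    return [9 * 2**n - 4 * 3**n, -4 * 3**n]

A = [[2, 1], [0, 3]]
def step(v):
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

print(y(0), all(step(y(n)) == y(n + 1) for n in range(10)))  # → [5, -4] True
```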
Solutions of Inhomogeneous First Order Vectorial
LDEs
Consider the inhomogeneous vectorial LDE

y (n + 1) = A(n)y (n) + b(n), n ≥ 0, (8)

where each A(n) is a k × k matrix and each b(n) is a


k -dimensional vector.

Just as in the scalar case, you only need one particular solution
of (8) on top of the general solution of its homogeneous
counterpart to obtain the general solution of (8):
Theorem (1.4.7)
Every solution of (8) can be written as the sum of one particular
solution y0 of (8) and a solution of y (n + 1) = A(n)y (n).
Conversely, every sequence of vectors that can be written as
such a sum is a solution of (8).

Equilibrium Solutions of Autonomous Inhomogeneous


First Order Vectorial LDEs

Consider the vectorial LDE

y (n + 1) = Ay (n) + b, n ≥ 0,

where A is a k × k matrix and b ≠ 0 is a k-dimensional vector.

If I − A is nonsingular, then:

c = Ac + b ⇔ c = (I − A)−1 b.

So, y ≡ (I − A)−1 b is the unique equilibrium solution of the LDE.
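For a 2×2 example one can compute this equilibrium by hand-rolling the inverse of I − A; the matrix and vector below are illustrative choices (Python sketch, not the course's MATLAB):

```python
# Equilibrium of y(n+1) = A y(n) + b when I - A is nonsingular: c = (I - A)^{-1} b.
A = [[0.5, 0.25], [0.0, 0.5]]
b = [1.0, 2.0]

M = [[1 - A[0][0], -A[0][1]], [-A[1][0], 1 - A[1][1]]]   # M = I - A
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
c = [(M[1][1] * b[0] - M[0][1] * b[1]) / det,            # c = M^{-1} b via the
     (-M[1][0] * b[0] + M[0][0] * b[1]) / det]           # 2x2 adjugate formula

# Check the fixed-point property c = A c + b:
resid = [A[0][0]*c[0] + A[0][1]*c[1] + b[0] - c[0],
         A[1][0]*c[0] + A[1][1]*c[1] + b[1] - c[1]]
print(c, max(abs(r) for r in resid) < 1e-12)  # → [4.0, 4.0] True
```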


Difference and Differential Equations
Lecture 5

Allard van der Made

Week 48 (Tuesday)

This Lecture

More on Solutions of Vectorial LDEs

Real-valued Solutions of Homogeneous First Order Vectorial


LDEs

Stability of Solutions of Vectorial Difference Equations


The General Solution of Inhomogeneous Vectorial
LDEs (I)

Consider the vectorial LDE

y (n + 1) = Ay (n) + b(n), n ≥ 0, (1)

where A is a k × k matrix and each b(n) is a k -dimensional


vector.

The general solution of (1) is


y(0) = c, y(n) = A^n c + Σ_{m=0}^{n−1} A^{n−1−m} b(m), n ≥ 1, c ∈ C^k.

The General Solution of Inhomogeneous Vectorial


LDEs (II)

Let us prove this claim:

y(n + 1) = A^{n+1} c + Σ_{m=0}^{n} A^{n−m} b(m)

         = A · A^n c + A Σ_{m=0}^{n−1} A^{n−1−m} b(m) + b(n)

         = Ay(n) + b(n).

Since you can find k linearly independent solutions of this form


(by choosing k linearly independent initial vectors), there are no
other solutions of (1). This completes our proof.
Equilibrium and Periodic Solutions of Homogeneous
First Order Vectorial LDEs
Consider the vectorial LDE

y (n + 1) = Ay (n), n ≥ 0, (2)

where A is a k × k matrix.
I An equilibrium solution of (2) is a constant sequence
  y ≡ c, where c ∈ C^k is such that Ac = c. If c ≠ 0, then this
  occurs if and only if 1 is an eigenvalue of A.
I A p-periodic solution of (2) is a solution y such that
  y(n + p) = y(n) for all n ≥ 0 and p is as small as possible.
  So:
  A^p y(n) = y(n), n ∈ N ∪ {0}.
  This occurs if and only if A has an eigenvalue λ such that
  λ^p = 1, but λ^r ≠ 1, r = 1, 2, . . . , p − 1.

Moving to Real-valued Solutions


Even if A ∈ R^{k×k}, the entries of the fundamental matrix S J^n
need not be real-valued. But:
Theorem (1.4.13)
Let A ∈ R^{k×k} and suppose A has an eigenvalue λ with nonzero
imaginary part and associated eigenvector v . Then the
sequences

y1(n) = Re(λ^n v), y2(n) = Im(λ^n v)

are linearly independent real-valued solutions of


y (n + 1) = Ay (n).

Remark 1: If z = α + βi, then Re(z) = α and Im(z) = β.


Remark 2: No, you do not end up with too many independent
solutions: λ̄ is also an eigenvalue (with eigenvector v̄ ).
Proof of Theorem (1.4.13)

I Suppose the contrary. Then u2 = αu1 for some α ∈ R\{0},


where u1 = <(v ) and u2 = =(v ).
I This implies:

Av = A(u1 + αu1 i) = (1 + αi)Au1 .

I We also have:

Av = λv = λ(1 + αi)u1 .

I These two observations imply that Au1 = λu1, which
  cannot hold because Au1 is real while λu1 is not (λ has
  nonzero imaginary part). This contradiction proves the
  theorem.

Moving to Real-valued Solutions: Example (I)

Consider the vectorial LDE

y(n + 1) = (  0  1 ) y(n), n ≥ 0. (3)
           ( −2  2 )

I The eigenvalues of A := (  0  1 ) are λ1 = 1 − i and
                          ( −2  2 )
  λ2 = 1 + i.
I An eigenvector associated with λ1 is s1 = (1, 1 − i) and an
  eigenvector associated with λ2 is s2 = (1, 1 + i).
I A complex-valued basis of the solution space of (3) is thus:

  z1(n) = (1 − i)^n (1, 1 − i)^T,   z2(n) = (1 + i)^n (1, 1 + i)^T.
Moving to Real-valued Solutions: Example (II)

I Let us use z2(n) to construct a real-valued basis of the
  solution space.
I Note that |1 + i| = √((1 + i)(1 − i)) = √2. Hence:

  1 + i = √2 ((1/2)√2 + (1/2)√2 i) = √2 (cos(π/4) + i sin(π/4)) = √2 e^{πi/4}.

I Consequently:

  Re((1 + i)^n) = Re((√2)^n e^{nπi/4}) = (√2)^n Re(cos(nπ/4) + i sin(nπ/4))
                = (√2)^n cos(nπ/4),
  Im((1 + i)^n) = (√2)^n sin(nπ/4).

Moving to Real-valued Solutions: Example (III)

I We thus get the following real-valued basis:

  y1(n) = Re((1 + i)^n (1, 1 + i)^T) = ( (√2)^n cos(nπ/4), (√2)^{n+1} cos((n+1)π/4) )^T,

  y2(n) = Im((1 + i)^n (1, 1 + i)^T) = ( (√2)^n sin(nπ/4), (√2)^{n+1} sin((n+1)π/4) )^T.
Stability Concepts for Vectorial Difference Equations
Consider the k -dimensional vectorial difference equation
y (n + 1) = f (n, y (n)), n ≥ 0, (4)
where f : Df → C^k (Df positive invariant). Then:
I A solution y of (4) is stable iff for some norm || · || on C^k
  and for every ε > 0 there exists a δ > 0 such that for every
  solution ỹ with the property ||y(0) − ỹ(0)|| ≤ δ one has:
  ||y(n) − ỹ(n)|| ≤ ε, ∀n ≥ 0.

I A stable solution y of (4) is asymptotically stable iff there


exists a δ > 0 such that for every solution ỹ with the
property ||y (0) − ỹ (0)|| ≤ δ one has:
lim (y (n) − ỹ (n)) = 0.
n→∞

I A stable solution that is not asymptotically stable is called


neutrally stable. Solutions that are not stable are called
unstable.

A First Stability Result

Consider the vectorial LDE

y (n + 1) = A(n)y (n) + b(n), n ≥ 0, (5)

where each A(n) is a k × k matrix and each b(n) is a


k -dimensional vector.

Combining Theorem 1.3.14 and Theorem 1.4.7 one


immediately infers
Theorem (1.4.8)
All solutions of (5) are neutrally stable, globally asymptotically
stable, or unstable if and only if the null solution of its
homogeneous counterpart (y (n + 1) = A(n)y (n)) is neutrally
stable, globally asymptotically stable, or unstable, respectively.
The Spectral Radius of a Matrix

The stability properties of a solution y of

y (n + 1) = Ay (n) + b(n), n ≥ 0,

depend on the eigenvalues of A. In particular, the spectral


radius of A plays an important role:

I The spectrum σ(A) of A is the set of all eigenvalues of A.


I The spectral radius rσ(A) of A is

  rσ(A) = max_{λ∈σ(A)} |λ|.

Stability of Solutions of Autonomous Vectorial LDEs

Consider the vectorial LDE

y (n + 1) = Ay (n) + b(n), n ≥ 0, (6)

where A is a k × k matrix and each b(n) is a k -dimensional


vector.
Theorem (1.4.17)
All solutions of (6) are stable if
I rσ (A) ≤ 1.
I The algebraic and geometric multiplicities of every
eigenvalue λ of A with |λ| = 1 are equal.
In all other cases the solutions are unstable.
The solutions are globally asymptotically stable if and only if
rσ (A) < 1.
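For 2×2 matrices the spectral radius follows from the trace and determinant, since the eigenvalues solve λ^2 − tr(A)λ + det(A) = 0. A hedged Python sketch (`spectral_radius_2x2` is a hypothetical helper name, not from the course):

```python
import math

def spectral_radius_2x2(A):
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = tr * tr - 4 * det
    if disc >= 0:                      # two real eigenvalues
        r = math.sqrt(disc)
        return max(abs((tr + r) / 2), abs((tr - r) / 2))
    return math.sqrt(det)              # complex pair: |lambda|^2 = det(A)

print(spectral_radius_2x2([[0, 1], [0.25, 0]]))  # eigenvalues ±1/2 → 0.5
print(spectral_radius_2x2([[0, 1], [-2, 2]]))    # eigenvalues 1 ± i → sqrt(2)
```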
Necessary Conditions for Stability

If the solutions of (6) are asymptotically stable, then the
eigenvalues λ1, . . . , λk of A are such that |λi| < 1, i = 1, . . . , k.
This implies:
I | det(A)| = |Π_{i=1}^{k} λi| < 1.
I | tr(A)| = |Σ_{i=1}^{k} λi| < k.

Remark 1: These conditions are replaced by
| det(A)| = |Π_{i=1}^{k} λi| ≤ 1 and | tr(A)| = |Σ_{i=1}^{k} λi| ≤ k if the
solutions are stable.

Remark 2: These conditions are necessary conditions for


asymptotic stability, but they are not sufficient.

Asymptotically Stable Solutions: Example (I)

Consider the difference equation

y(n + 1) = (  0   1 ) y(n), n ≥ 0. (7)
           ( 1/4  0 )

Are its solutions (asymptotically) stable?

I The eigenvalues of A := (  0   1 ) are λ1 = −1/2 and λ2 = 1/2.
                          ( 1/4  0 )
  So, rσ(A) = 1/2 < 1 and hence the solutions of (7) are
  asymptotically stable.
I The general solution of (7) is

  y(n) = c1 (1, −1/2)^T (−1/2)^n + c2 (1, 1/2)^T (1/2)^n, n ≥ 0, c1, c2 ∈ C.
Asymptotically Stable Solutions: Example (II)

I Let us consider two different solutions:

  y(n) = (1, −1/2)^T (−1/2)^n + (1, 1/2)^T (1/2)^n, n ≥ 0,
  ỹ(n) = 2 (1, −1/2)^T (−1/2)^n + 4 (1, 1/2)^T (1/2)^n, n ≥ 0.

I Then:

  lim_{n→∞} ||y(n) − ỹ(n)||_2
  = lim_{n→∞} || −(1, −1/2)^T (−1/2)^n − 3 (1, 1/2)^T (1/2)^n ||_2
  = lim_{n→∞} √( (−(−1/2)^n − 3 (1/2)^n)^2 + ((1/2)(−1/2)^n − (3/2)(1/2)^n)^2 ) = 0.

I Also: lim_{n→∞} ||y(n) − 0||_2 = 0.


Difference and Differential Equations
Lecture 6

Allard van der Made

Week 48 (Friday)

This Lecture

MATLAB Tips and Tricks


Command Window, Functions, Scripts

I After opening MATLAB you can type commands in its


command window.
I It is very cumbersome to type everything directly in the
command window and keep track of what is stored in
MATLAB’s memory. Using scripts and functions is much
more convenient.
I Scripts and functions can be written in MATLAB’s editor.
I You can open the editor by clicking New Script, New, or
Open.

Functions: Example 1

Consider the following function:

function example1(a,b)
f=@(x) sin(x);
g=@(x,y) y*cos(x);
T=a:(b-a)/99:b;
plot(T,f(T*pi));
hold on;
for j=1:10
    plot(T,g(T*pi,j));
end
xlabel('time');
ylabel('variables of interest');
hold off;
end
Functions: Example 2
This function does almost the same things:

function T=example2(a,b)
    function y=f(x)
        y=sin(x);
    end
g=@(x,y) y*cos(x);
T=a:(b-a)/99:b;
p=pi*ones(1,100);
plot(T,f(T.*p));
hold on;
for j=1:10
    plot(T,g(T.*p,j));
end
xlabel('time');
ylabel('variables of interest');
hold off;
end

Help!

MATLAB is a very rich programming language. Go to Help and


then Documentation to discover how you can program
something.
Difference and Differential Equations
Lecture 7

Allard van der Made

Week 49 (Tuesday)

This Lecture

Stability Properties of Autonomous Systems of First Order


Difference Equations

Lyapunov’s Direct Method


Autonomous Systems of First Order Difference
Equations

Consider the following autonomous system of first order


difference equations:

y (n + 1) = f (y (n)), n ≥ 0, (1)

where f is a k -dimensional vector function defined on a


(positive) invariant subset Df of Ck (or Rk ).

We are interested in the stability properties of the solutions of


(1).

The Jacobian of a Vector Function


Consider a function

f = (f1, . . . , fk)^T : Df → V,

where V = R^k or V = C^k and Df ⊆ V.

The Jacobian of f at x = (x1, . . . , xk) is:

        ( ∂f1/∂x1(x)  ∂f1/∂x2(x)  . . .  ∂f1/∂xk(x) )
        ( ∂f2/∂x1(x)  ∂f2/∂x2(x)  . . .  ∂f2/∂xk(x) )
Df(x) = (     ⋮            ⋮        ⋱        ⋮      ) .
        ( ∂fk/∂x1(x)  ∂fk/∂x2(x)  . . .  ∂fk/∂xk(x) )
Using the Jacobian to Assess Stability Properties

Theorem (1.5.2)
Let c ∈ C^k be a fixed point of f and suppose f is differentiable
at c. Then:
I If rσ (Df (c)) < 1, then y ≡ c is an asymptotically stable
equilibrium solution of (1).
I If rσ (Df (c)) > 1, then y ≡ c is an unstable equilibrium
solution of (1).

Using the Jacobian to Assess Stability Properties:


Example (I)
Consider the system of difference equations
y(n + 1) = fα(y(n)), n ≥ 0, where

fα : R^2 → R^2,   fα(x1, x2) = ( x1 x2  )
                               ( α x1^2 )

and α > 0. Determine the equilibrium solutions of this system
of difference equations and examine their stability properties.
I If y ≡ c is an equilibrium solution, then c = fα(c). Solving

  ( c1 )   ( c1 c2  )
  ( c2 ) = ( α c1^2 )

  yields three equilibrium solutions: the null solution,
  y− ≡ (−√(1/α), 1), and y+ ≡ (√(1/α), 1).
Using the Jacobian to Assess Stability Properties:
Example (II)
I The Jacobian of fα is defined by

  Dfα(x1, x2) = (  x2    x1 ).
                ( 2αx1    0 )

I The eigenvalue of Dfα (0, 0) is 0 (with algebraic multiplicity


2). So, the null solution is asymptotically stable.
I Note that:

  Dfα(±√(1/α), 1) = (   1    ±√(1/α) ).
                    ( ±2√α      0    )

The eigenvalues of these matrices are λ1 = −1 and


λ2 = 2.
I Because |λ2 | > 1, the equilibrium solutions y− and y+ are
unstable.
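The fixed points and the Jacobian eigenvalues of this example can be checked numerically; the value α = 4 below is an arbitrary illustrative choice (Python sketch, not the course's MATLAB):

```python
import math

alpha = 4.0
f = lambda x: (x[0] * x[1], alpha * x[0] ** 2)   # f_alpha(x1, x2) = (x1*x2, alpha*x1^2)

for s in (+1, -1):
    c = (s * math.sqrt(1 / alpha), 1.0)          # the fixed points y_±
    assert max(abs(f(c)[i] - c[i]) for i in range(2)) < 1e-12
    # Jacobian [[x2, x1], [2*alpha*x1, 0]] at c has tr = 1, det = -2*alpha*x1^2 = -2,
    # so the eigenvalues solve lambda^2 - lambda - 2 = 0:
    lam1 = (1 - math.sqrt(1 + 8)) / 2            # = -1
    lam2 = (1 + math.sqrt(1 + 8)) / 2            # = 2
    print(c, lam1, lam2)
```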

Adding a Locally Negligible Function

You can use Theorem 1.5.2 to generalize Theorem 1.2.8 to


systems of difference equations:
Theorem (1.5.3)
Let A be a constant k × k matrix and g a k -dimensional vector
function, continuous at the origin, such that

||g(x)||2
lim = 0.
x→0 ||x||2

If rσ (A) < 1, then the null solution of

y (n + 1) = Ay (n) + g(y (n)), n ≥ 0

is asymptotically stable. On the other hand, if rσ (A) > 1, then


this null solution is unstable.
Linearizing a System of First Order Difference
Equations

It is often useful to linearize (1) at one of its fixed points c when


studying the local behaviour of that system. Of course, f needs
to be differentiable at c to do so. The linearized equation
associated with (1) reads:

y (n + 1) = f (c) + Df (c)(y (n) − c) = c + Df (c)(y (n) − c).

Of course, Theorem 1.5.2 is based on this linearization!

What about the Stability of Non-hyperbolic Fixed


Points?

If c is a non-hyperbolic fixed point of (1), then at least one of the


eigenvalues of Df (c) has absolute value 1 and we cannot use
Theorem 1.5.2 (rσ (Df (c)) = 1).
I Possible solution: Use the direct method of Lyapunov.
I Idea: You want to know whether the next iterate f (x) is
closer to the fixed point than x itself. That is difficult to
determine, but V (f (x)) versus V (x) for V ‘appropriately
chosen’ is much easier.
I Such a function V serves as a measure of distance to the
fixed point.
Lyapunov’s Direct Method

Theorem (1.5.8)
Let c be a fixed point of (1) with f continuous. Suppose there
exists a neighbourhood U of c and a continuous function
V : U → R such that V (c) = 0 and V (x) > 0 for all x ∈ U\{c}.
Then:
1. If V (f (x)) ≤ V (x) for all x ∈ U such that f (x) ∈ U, then
y ≡ c is a stable equilibrium solution.
2. If V (f (x)) < V (x) for all x ∈ U\{c} such that f (x) ∈ U,
then y ≡ c is an asymptotically stable equilibrium solution.
3. If V (f (x)) > V (x) for all x ∈ U\{c} such that f (x) ∈ U,
then y ≡ c is an unstable equilibrium solution.

Proof of Theorem 1.5.8 (I)


Statement 1: We have to prove for every ε > 0 sufficiently
small that there exists a δ > 0 such that:

||x − c|| ≤ δ ⇒ ||f^n(x) − c|| ≤ ε, n ∈ N.

I Fix an ε > 0 such that B(c; ε) ⊂ U.
I By continuity of f there exists a δ1 ∈ (0, ε) such that

  ||f(x) − f(c)|| = ||f(x) − c|| ≤ ε, ∀x ∈ B(c; δ1).

I Let m_ε := min{V(x) : δ1 ≤ ||x − c|| ≤ ε}. Note that
  m_ε > 0.
I By continuity of V and the fact that V(c) = 0 there exists a
  δ2 ∈ (0, δ1) such that

  V(x) < m_ε, ∀x ∈ B(c; δ2).


Proof of Theorem 1.5.8 (II)
I It suffices to prove that f^n(x) ∈ B(c; δ1) for every n ∈ N and
  every x ∈ B(c; δ2).
I Suppose by contradiction that f^n(x) ∉ B(c; δ1) for some
  x ∈ B(c; δ2) and n ∈ N and let n0 be the smallest such n.
I Then we have:

  f^{n0−1}(x) ∈ B(c; δ1) and f^{n0}(x) ∉ B(c; δ1).

I Also: δ1 < ||f^{n0}(x) − c|| = ||f(f^{n0−1}(x)) − f(c)|| ≤ ε.
I So, V(f^{n0}(x)) ≥ m_ε.
I But we also have:

  V(f^{n0}(x)) ≤ V(f^{n0−1}(x)) ≤ . . . ≤ V(x) < m_ε.

I This contradiction proves Statement 1.

Proof of Theorem 1.5.8 (III)

Statement 2: We have to prove that there exists a δ > 0 such
that lim_{n→∞} f^n(x) = c for all x ∈ B(c; δ).
I Fix an ε > 0 such that B(c; ε) ⊂ U and find δ1 ∈ (0, ε) and
  δ2 ∈ (0, δ1) as in Statement 1.
I For all x ∈ B(c; δ2)\{c} one has:

  V(x) > V(f(x)) > . . . > V(f^n(x)) > V(f^{n+1}(x)) > . . . ≥ 0.

I The sequence {V(f^n(x))}_{n=0}^∞ consequently converges.
I Because {f^n(x)}_{n=0}^∞ ⊆ B(c; ε), there exists a converging
  subsequence {f^{n_j}(x)}_{j=0}^∞: lim_{j→∞} f^{n_j}(x) = ℓ.
Proof of Theorem 1.5.8 (IV)

I Hence:

  lim_{j→∞} V(f^{n_j}(x)) = V(lim_{j→∞} f^{n_j}(x)) = V(ℓ),

  implying that lim_{n→∞} V(f^n(x)) = V(ℓ).
I Then: V(ℓ) = lim_{j→∞} V(f(f^{n_j}(x))) = V(f(ℓ)).
I Because V(f(z)) < V(z), ∀z ∈ U\{c}, we conclude that
  ℓ = c.
I So, any converging subsequence of {f^n(x)}_{n=0}^∞ converges
  to c and because the sequence is contained in the
  compact set B(c; δ1) the sequence itself converges to c.

Proof of Theorem 1.5.8 (V)

Statement 3: Fix an ε > 0 such that B := B(c; ε) ⊂ U. It
suffices to show that for each x ∈ B\{c} there exists an n ∈ N
such that f^n(x) ∉ B.
I Suppose by contradiction that f^n(x) ∈ B for all n ∈ N for
  some x ∈ B\{c}.
I Because B is compact, there exists a converging
  subsequence {f^{n_j}(x)}_{j=0}^∞ of {f^n(x)}_{n=0}^∞ converging to some
  ℓ ∈ B.
I By continuity of V one has: lim_{j→∞} V(f^{n_j}(x)) = V(ℓ).
I The sequence {V(f^n(x))}_{n=0}^∞ is monotone, strictly
  increasing and bounded above by max_{z∈B} V(z).
I So, {V(f^n(x))}_{n=0}^∞ converges and its limit must be V(ℓ).
Proof of Theorem 1.5.8 (VI)

I Because V(f^n(x)) > V(x) > 0, we have that ℓ ≠ c.
I It follows from the continuity of V and f that:

  lim_{n→∞} V(f^n(x)) = lim_{j→∞} V(f^{n_j+1}(x)) = V(f(lim_{j→∞} f^{n_j}(x))) = V(f(ℓ)).

I But then:

  lim_{n→∞} V(f^n(x)) = V(f(ℓ)) > V(ℓ) = lim_{n→∞} V(f^n(x)),

  a contradiction!

Lyapunov Functions

I A continuous function V : U → R satisfying the conditions


of Statement 1 is called a Lyapunov function on U
centered at c for (1).
I A Lyapunov function satisfying the conditions of Statement
2 is called a strict Lyapunov function on U centered at c for
(1). The set U is in general contained in the basin of
attraction of {c} (see Theorem 1.5.11).
Lyapunov’s Direct Method: Example (I)
Consider the system of difference equations

y (n + 1) = f (y (n)), n ≥ 0, (2)

where f : R^2 → R^2 is defined by
f(x1, x2) = (x1(1 − x2^2), x2(1 − x1^2)). Examine the stability of the
null solution of this equation.
I It is not difficult to see that y ≡ 0 is indeed an equilibrium
  solution of (2).
I The Jacobian of f is given by:

  Df(x1, x2) = ( 1 − x2^2   −2x1x2  ).
               ( −2x1x2    1 − x1^2 )

I The eigenvalue of Df (0, 0) is 1 (with algebraic multiplicity


2), so we cannot use Theorem 1.5.2.

Lyapunov’s Direct Method: Example (II)


I Let us use the squared Euclidean distance as a candidate
  Lyapunov function (verify that this function has the right
  properties!):

  ||f(x1, x2)||_2^2 = x1^2 (1 − x2^2)^2 + x2^2 (1 − x1^2)^2,
  ||(x1, x2)||_2^2 = x1^2 + x2^2 > 0, ∀(x1, x2) ∈ R^2\{(0, 0)}.

I Consequently:

  ||f(x1, x2)||_2^2 − ||(x1, x2)||_2^2 = x1^2 x2^2 (x1^2 + x2^2 − 4).

I Observe that:

  ||f(x1, x2)||_2^2 − ||(x1, x2)||_2^2 ≤ 0, ∀(x1, x2) ∈ B(0; 2).

  So, || · ||_2^2 is a Lyapunov function on B(0; 2) for (2) and we
  conclude that the null solution is stable.
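The algebraic identity underlying this Lyapunov argument can be spot-checked numerically (illustrative Python sketch; the course uses MATLAB):

```python
# Check: ||f(x)||^2 - ||x||^2 = x1^2 x2^2 (x1^2 + x2^2 - 4)
# for f(x1, x2) = (x1(1 - x2^2), x2(1 - x1^2)).
def lhs(x1, x2):
    return (x1 * (1 - x2**2))**2 + (x2 * (1 - x1**2))**2 - (x1**2 + x2**2)

def rhs(x1, x2):
    return x1**2 * x2**2 * (x1**2 + x2**2 - 4)

pts = [(0.3, -0.7), (1.0, 1.0), (-1.5, 0.2), (0.0, 1.9)]
print(all(abs(lhs(a, b) - rhs(a, b)) < 1e-9 for a, b in pts))  # → True
```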
Difference and Differential Equations
Lecture 8

Allard van der Made

Week 49 (Friday)

This Lecture

Solutions of Homogeneous Higher Order LDEs

Solutions of Inhomogeneous Higher Order LDEs

Stability of Solutions of Higher Order LDEs


Higher Order LDEs

Consider the following scalar k th order linear difference


equation:

y(n + k) + ak−1 y(n + k − 1) + . . . + a0 y(n) + b(n) = 0, n ≥ 0, (1)

with constant coefficients a0 , a1 , . . . , ak −1 (ak has been


normalized to 1).

Let us use the k variables

uj (n) = y (n + j − 1), j = 1, . . . , k

to convert this k th order LDE into a system of first order LDEs.

The Companion Matrix of (1)


Equation (1) is equivalent to

                   (   0   )
                   (   0   )
u(n + 1) = Au(n) + (   ⋮   ) ,  (2)
                   ( −b(n) )

where

     (  0     1     0    . . .    0    )
     (  0     0     1    . . .    0    )
A := (  ⋮     ⋮     ⋮      ⋱      ⋮    ) .
     ( −a0   −a1   −a2   . . .  −ak−1  )
If u is a solution of the companion equation (2), then u_1 is a
solution of (1), and if u_1 is a solution of (1), then
(u_1, τu_1, ..., τ^{k-1} u_1) is a solution of (2). (τ is the shift operator.)
The matrix A is called the companion matrix of (1).
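Remark: the companion matrix is easy to build programmatically. A hedged Python sketch (the helper name `companion` is ours, not from the slides):

```python
import numpy as np

def companion(a):
    """Companion matrix of y(n+k) + a[k-1] y(n+k-1) + ... + a[0] y(n) = 0,
    where a = [a_0, a_1, ..., a_{k-1}] (a_k normalized to 1)."""
    k = len(a)
    A = np.zeros((k, k))
    A[:-1, 1:] = np.eye(k - 1)   # superdiagonal of ones
    A[-1, :] = -np.asarray(a)    # last row: -a_0, -a_1, ..., -a_{k-1}
    return A

# Second-order example y(n+2) + y(n+1) - 6 y(n) = 0, i.e. a = [-6, 1]
A = companion([-6.0, 1.0])
print(A)                          # rows [0, 1] and [6, -1]
print(np.linalg.eigvals(A))       # eigenvalues 2 and -3 (in some order)
```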
The Eigenvalues of the Companion Matrix

We already know how to obtain the general solution of


u(n + 1) = Au(n): Construct a Jordan normal form of A and
use that to obtain a fundamental matrix of u(n + 1) = Au(n).
The following theorem is useful in this process:
Theorem (1.6.1)
The eigenvalues of the companion matrix of (1) are the roots of
the following characteristic equation of (1):

    λ^k + a_{k-1} λ^{k-1} + ... + a_1 λ + a_0 = 0.

Proof: Use induction on k .

From the Characteristic Equation to Solutions of (1)


Recall that (1) can be written as Ly = b, where:

    L = τ^k + a_{k-1} τ^{k-1} + ... + a_1 τ + a_0.

Note the similarity between L and the characteristic equation!


Hence:
L = (τ − λ1 )(τ − λ2 ) . . . (τ − λk ),
where λj , j = 1, . . . , k , are the eigenvalues of A.
Consider an eigenvalue λ_j and let x be an associated
eigenvector. Then:

    x_2 = λ_j x_1, x_3 = λ_j x_2, ..., x_k = λ_j x_{k-1}
      ⇒  x = x_1 (1, λ_j, λ_j^2, ..., λ_j^{k-1})^T.  (3)
The Eigenvalues of the Companion Matrix: Example

Consider the following second order LDE:

y (n + 2) + y (n + 1) − 6y (n) = 0, n ≥ 0.

I This LDE can be written as: (τ^2 + τ - 6) y = 0.
I Solving λ^2 + λ - 6 = 0 yields λ_1 = 2 and λ_2 = -3, which
  are the eigenvalues of

    [ 0   1
      6  -1 ].

I An eigenvector associated with λ_1 is s_1 = (1, 2) and an
  eigenvector associated with λ_2 is s_2 = (1, -3).
I The general solution of the scalar LDE is:

    y(n) = c_1 2^n + c_2 (-3)^n,  n ≥ 0,  c_1, c_2 ∈ C.
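A quick numerical check (Python; the constants c_1, c_2 are illustrative choices of ours) that this general solution satisfies the recurrence:

```python
# Check that y(n) = c1*2^n + c2*(-3)^n satisfies y(n+2) + y(n+1) - 6 y(n) = 0
# for arbitrary constants, here c1 = 1.5 and c2 = -0.25 (chosen for the demo).
c1, c2 = 1.5, -0.25
y = lambda n: c1 * 2.0**n + c2 * (-3.0)**n

for n in range(20):
    residual = y(n + 2) + y(n + 1) - 6 * y(n)
    assert abs(residual) < 1e-6 * max(1.0, abs(y(n)))
print("recurrence satisfied for n = 0, ..., 19")
```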

A Basis of the Solution Space of Homogeneous k th


Order LDEs

Theorem (1.6.2)
Suppose the characteristic equation of (1) has r distinct
roots λ_j with algebraic multiplicities k_j, j = 1, ..., r
(∑_{j=1}^r k_j = k). Then the solution space of the homogeneous
equation

    y(n + k) + a_{k-1} y(n + k - 1) + ... + a_0 y(n) = 0

is spanned by the sequences

    {λ_j^n}_{n=0}^∞, {n λ_j^n}_{n=0}^∞, ..., {n^{k_j - 1} λ_j^n}_{n=0}^∞,  j = 1, ..., r.
Using Theorem 1.6.2: Example
Consider the following third order LDE:

y (n + 3) + 3y (n + 2) + 3y (n + 1) + y (n) = 0.

I The root of its characteristic equation is −1, with algebraic


multiplicity 3.
I The solution space of the LDE is thus spanned by

    y_1(n) = (-1)^n,  y_2(n) = n(-1)^n,  y_3(n) = n^2 (-1)^n,  n ≥ 0.

I So, the general solution of the LDE is

    y(n) = c_1 (-1)^n + c_2 n(-1)^n + c_3 n^2 (-1)^n,  n ≥ 0,  c_1, c_2, c_3 ∈ C.

Finding Solutions of Inhomogeneous Higher Order


LDEs

To be able to construct the general solution of

    y(n + k) + a_{k-1} y(n + k - 1) + ... + a_0 y(n) + b(n) = 0,  n ≥ 0,

with {b(n)}_{n=0}^∞ a non-constant sequence, one needs to find a
particular solution of this LDE. Two methods:
I Use variation of constants. This is in general very
  impractical.
I Use the annihilator method. This is often relatively
  straightforward.
The Annihilator Method (I)
Consider the inhomogeneous LDE L(y) = b, where
L = ∑_{j=0}^k a_j τ^j, a_k = 1, is an autonomous difference operator.
I Suppose that the sequence b is a solution of some
  homogeneous LDE L_b(y) = 0, where L_b = ∑_{h=0}^ℓ γ_h τ^h,
  γ_ℓ = 1, is some autonomous difference operator.
I Let ỹ be a particular solution of L(y) = b. So,
  L(ỹ)(n) = b(n), n ≥ 0.
I Then:

    (L_b L)(ỹ) = L_b(b) = 0.

I We conclude that ỹ is also a solution of (L_b L)(y) = 0.
I The converse is not true: a solution of (L_b L)(y) = 0 need
  not be a solution of L(y) = b (the solution space of
  (L_b L)(y) = 0 has dimension k + ℓ).

The Annihilator Method (II)

I Finding the general solution of (L_b L)(y) = 0 is relatively
  easy.
I To go from the general solution of (L_b L)(y) = 0 to the
  general solution of L(y) = b we have to get rid of the
  solutions of (L_b L)(y) = 0 that are not solutions of L(y) = b.
I The solution space of L(y) = b has dimension k, implying
  that we have to get rid of ℓ linearly independent solutions of
  (L_b L)(y) = 0.
I Trick: set the k parameters in the general solution of
  (L_b L)(y) = 0 'belonging to' the solutions of L(y) = 0 equal
  to 0 (these parameters can be chosen freely) and solve for
  the remaining ℓ parameters.
Annihilator Method: Example (I)

Consider the following inhomogeneous LDE:

    y(n+2) + y(n+1) - 6y(n) = 2n,  n ≥ 0  ⇔  L(y)(n) = 2n,  n ≥ 0.

I The roots of the characteristic equation of L(y ) = 0 are 2


and −3. So, L = (τ − 2)(τ + 3).
I The sequence {2n}_{n=0}^∞ is a solution of the LDE L_b(y) = 0,
  where L_b := (τ - 1)^2.
I The general solution of (L_b L)(y) = 0 reads

    z(n) = c_1 2^n + c_2 (-3)^n + c_3 + c_4 n,  n ≥ 0,  c_1, ..., c_4 ∈ C.

Annihilator Method: Example (II)

I Setting c_1 = 0 and c_2 = 0 yields z_0(n) = c_3 + c_4 n, n ≥ 0.
I Solve for c_3 and c_4:

    z_0(n + 2) + z_0(n + 1) - 6 z_0(n) = 2n  ⇐⇒
    (-4c_3 + 3c_4) - 4c_4 n = 2n  ⇔  -4c_3 + 3c_4 = 0  ∧  -4c_4 = 2.

  Consequently: c_3 = -3/8 and c_4 = -1/2.
I The general solution of L(y)(n) = 2n, n ≥ 0, is thus:

    y(n) = c_1 2^n + c_2 (-3)^n - 3/8 - n/2,  n ≥ 0,  c_1, c_2 ∈ C.
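The particular solution found by the annihilator method can be verified directly; a short Python check (ours, not part of the slides):

```python
# Check that y0(n) = -3/8 - n/2 is a particular solution of
# y(n+2) + y(n+1) - 6 y(n) = 2n.
y0 = lambda n: -3.0 / 8 - n / 2.0

for n in range(50):
    lhs = y0(n + 2) + y0(n + 1) - 6 * y0(n)
    assert abs(lhs - 2 * n) < 1e-9
print("y0(n) = -3/8 - n/2 is a particular solution")
```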
Stability of Solutions of Higher Order LDEs

Using the fact that (1) and (2) are equivalent, we have:
Theorem (1.6.7)
Suppose λj is a root of the characteristic equation/eigenvalue of
the companion matrix of (1) with algebraic multiplicity kj ,
j = 1, . . . , r . Then:
I The solutions of (1) are asymptotically stable if and only if
|λj | < 1 for all j ∈ {1, . . . , r }.
I The solutions of (1) are stable if |λj | ≤ 1 for all j and kj = 1
for all λj with |λj | = 1.
I The solutions of (1) are unstable in all other cases.

A Higher Order LDE: Example (I)


Consider the following LDE:

    y(n + 2) - y(n + 1) + (1/2) y(n) = 2^n,  n ≥ 0.

I The homogeneous LDE y(n + 2) - y(n + 1) + (1/2) y(n) = 0 is
  equivalent to (τ - (1+i)/2)(τ - (1-i)/2) y = 0.
I The sequence {2^n}_{n=0}^∞ is a solution of (τ - 2)y = 0.
I Setting c_1 = 0 and c_2 = 0 in

    z(n) = c_1 ((1+i)/2)^n + c_2 ((1-i)/2)^n + c_3 2^n,  n ≥ 0,  c_1, c_2, c_3 ∈ C

  and solving

    c_3 2^{n+2} - c_3 2^{n+1} + (1/2) c_3 2^n = 2^n

  yields c_3 (4 - 2 + 1/2) = 1, i.e. c_3 = 2/5.
A Higher Order LDE: Example (II)

I The general solution of the inhomogeneous LDE is thus:

    y(n) = c_1 ((1+i)/2)^n + c_2 ((1-i)/2)^n + (2/5) 2^n,  n ≥ 0,  c_1, c_2 ∈ C.

I Since

    |(1+i)/2| = |(1-i)/2| = (1/2)√2 < 1,

  all solutions of the LDE are asymptotically stable.


I In fact:

    lim_{n→∞} |ỹ(n) - (2/5) 2^n| = 0

  for all solutions ỹ of the LDE.
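Substituting c_3 2^n into the left-hand side gives c_3 (4 - 2 + 1/2) 2^n, so c_3 = 2/5. A short Python check of this coefficient (ours, not from the slides):

```python
# Substituting c3*2^n into y(n+2) - y(n+1) + (1/2) y(n) = 2^n gives
# c3*(4 - 2 + 1/2) = 1, hence c3 = 2/5.
c3 = 2.0 / 5
yp = lambda n: c3 * 2.0**n

for n in range(30):
    lhs = yp(n + 2) - yp(n + 1) + 0.5 * yp(n)
    assert abs(lhs - 2.0**n) < 1e-9 * 2.0**n
print("c3 = 2/5 gives a particular solution")
```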


Difference and Differential Equations
Lecture 9

Allard van der Made

Week 50 (Tuesday)

This Lecture

Differential Equations

Solutions of Ordinary Differential Equations

Solutions of Inhomogeneous Linear ODEs


Differential Equations

Differential equations are like difference equations, but in


continuous time. Example:

    y'(t) = dy/dt = -(1 - a) y(t) + b,  t ∈ R+.
This one looks a lot like the difference equation

y (n + 1) = ay (n) + b, n ≥ 0.

Bank Account Revisited

If the instantaneous interest rate is ρ > 0 and you deposit an


amount A0 at t = 0, then the amount A(t) on your bank account
after t years is a solution of

    A'(t) = ρA(t),  t ∈ [0, ∞),  A(0) = A_0.

How much money is in your bank account after six months, i.e.
at t = 1/2?
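The IVP has solution A(t) = A_0 e^{ρt}, so the balance after six months is A_0 e^{ρ/2}. A small Python illustration (the values of ρ and A_0 are made up for the example, not taken from the slides):

```python
import math

rho, A0 = 0.04, 1000.0                 # illustrative values, not from the slides
A = lambda t: A0 * math.exp(rho * t)   # solution of A'(t) = rho*A(t), A(0) = A0

print(A(0.5))                          # balance after six months: A0 * e^{rho/2}
```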
Ordinary Differential Equations

We are only going to study ordinary differential equations


(ODEs). Ordinary means that there is only one ‘time’ variable.
An ODE is an equation of the form:

    F(t, y(t), y^{(1)}(t), ..., y^{(k)}(t)) = 0,  t ∈ T  (1)

where y (j) (t) is the j th derivative (with respect to t) of the


function y and F is a function of (at most) k + 2 variables. The
set T ⊆ R is connected.

ODEs can be autonomous and they can be linear.

A solution of (1) is a k-times differentiable function y that
satisfies (1).

Finding Solutions of First Order ODEs: a


Categorization

For four types of first order ODEs a general method can be


used to find solutions. Let f , g, and G be continuous functions.
The four types are:
1. y'(t) = g(t) (type I ODE).
2. y'(t) = f(t)G(y(t)) (separable ODE).
3. y'(t) = f(t)y(t) (homogeneous linear ODE).
4. y'(t) = f(t)y(t) + g(t), g ≢ 0 (inhomogeneous linear ODE).
Remark: Sometimes the dependence on t of the various
functions is omitted and one writes, for instance, y' = g instead
of y'(t) = g(t).
Solutions of Type I ODEs

Consider an ODE of the following form:

    y'(t) = g(t),  t ∈ T.

Solutions of this type of ODE can be found by integrating both
sides (choice of t_0 depends on the situation):

    y(t) - y(t_0) = ∫_{t_0}^t y'(s) ds = ∫_{t_0}^t g(s) ds  ⇒  y(t) = ∫_{t_0}^t g(s) ds + y(t_0).

So, the general solution of a type I equation reads

    y(t) = ∫_{t_0}^t g(s) ds + c,  c ∈ C.

Solutions of Type I ODEs: Example


Consider the following initial value problem:

    y'(t) = 1/(1 + t),  t ∈ [0, ∞),  y(0) = 3.

I The ODE is of type I. Hence

    y(t) = ∫_0^t 1/(1 + s) ds + c = log(1 + t) + c,  t ∈ [0, ∞),  c ∈ C

  is the general solution of this ODE.


I Solving 3 = log(1) + c yields c = 3. So, the solution of the
initial value problem is:

y (t) = log(1 + t) + 3, t ∈ [0, ∞).


Solutions of Separable ODEs
Consider an ODE of the following form:

    y'(t) = f(t) G(y(t)),  t ∈ T.

Suppose G(y) ≠ 0, for all y, and define p(y) := 1/G(y).
The ODE can then be written as follows:

    p(y(t)) y'(t) = f(t).

Suppose we can find a primitive P of p and a primitive F of f.
Then by the chain rule:

    ∫_{t_0}^t p(y(s)) y'(s) ds = P(y(t)) - P(y(t_0)) = ∫_{t_0}^t f(s) ds = F(t) - F(t_0),

yielding the implicit general solution

    P(y(t)) = F(t) + c,  c ∈ C.

Solutions of Separable ODEs: Example

Consider the following initial value problem:

    y'(t) = (t + 1) e^{-y(t)},  t ≥ 0,  y(0) = 0.

Multiplying both sides with e^{y(t)} yields y'(t) e^{y(t)} = t + 1 and
hence:

    ∫_0^t y'(s) e^{y(s)} ds = ∫_0^t (s + 1) ds  ⇒  e^{y(t)} = (1/2)(t + 1)^2 - 1/2 + e^{y(0)}.

So, the solution of the initial value problem is:

    y(t) = log((1/2)(t + 1)^2 + 1/2),  t ≥ 0.
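One can confirm this closed form by differentiating it numerically and comparing with the right-hand side of the ODE; a Python sketch (ours, not part of the slides):

```python
import math

y = lambda t: math.log(0.5 * (t + 1)**2 + 0.5)   # candidate solution
f = lambda t, yv: (t + 1) * math.exp(-yv)        # right-hand side of the ODE

assert abs(y(0.0)) < 1e-15                        # initial condition y(0) = 0
h = 1e-6
for t in [0.0, 0.5, 1.0, 2.0, 5.0]:
    dydt = (y(t + h) - y(t - h)) / (2 * h)        # central difference
    assert abs(dydt - f(t, y(t))) < 1e-7
print("y(t) = log((t+1)^2/2 + 1/2) solves the IVP")
```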


Solutions of Homogeneous Linear ODEs
Consider an ODE of the following form:

    y'(t) = f(t) y(t),  t ∈ T.

Because this ODE is a special case of a separable equation
(with G(y) = y), we can again apply the method of separation
of variables:

    ∫_{t_0}^t y'(s)/y(s) ds = ∫_{t_0}^t f(s) ds  ⇒  log |y(t)| = F(t) + c,  c ∈ R,

where F is a primitive of f.
So, |y(t)| = e^{F(t)+c} and the general solution is consequently

    y(t) = D e^{F(t)},  t ∈ T,

where D = y(0) e^{-F(0)} ∈ R.

Solutions of Homogeneous Linear ODEs: Example

Consider the following initial value problem:

    y'(t) = -y(t) sin(t),  t ≥ 0,  y(0) = -1.

Since cos(t) is a primitive of -sin(t), we obtain:

    log(|y(t)|) = cos(t) + c  ⇒  |y(t)| = e^c e^{cos(t)}.

Combining with y (0) = −1 gives us the solution of the initial


value problem:

    y(t) = -e^{-1 + cos(t)},  t ≥ 0.
Inhomogeneous Linear ODEs

Consider an ODE of the form

    y'(t) = f(t) y(t) + g(t),  t ∈ T,  (2)

with g ≢ 0.

To find solutions of this ODE we use the same method as the


one we used to determine solutions of inhomogeneous
difference equations:
I Find the general solution of the homogeneous ODE y' = fy
  and then add a particular solution of the inhomogeneous
  ODE y' = fy + g.

The General Solution of Inhomogeneous Linear ODEs

Theorem (2.1.13)
Let y_0 be a particular solution of (2). Then:
1. Every solution of (2) can be written as the sum of y_0 and a
   solution of y' = fy.
2. Any function that can be written as the sum of y_0 and a
   solution of y' = fy is a solution of (2).

Proof: Suppose y_1 is a solution of (2). Then:

    y_1'(t) - y_0'(t) = (f(t) y_1(t) + g(t)) - (f(t) y_0(t) + g(t))
                      = f(t)(y_1(t) - y_0(t)).

So, z = y_1 - y_0 solves y' = fy and y_1 = y_0 + z.
Conversely, any function y_1 = y_0 + z with z a solution of y' = fy
is a solution of (2).
The General Solution of Inhomogeneous Linear
ODEs: Example

Consider the following ODE:


    y'(t) = 2t y(t) + e^{t^2 + t},  t ∈ R.

We first determine the general solution of y'(t) = 2t y(t):

    y'(t)/y(t) = 2t  ⇒  log |y(t)| = t^2  ⇒  y(t) = c e^{t^2},  c ∈ R.

Next we figure out that y_0(t) = e^{t^2 + t} is a particular solution of
the inhomogeneous ODE. So, the general solution reads:

    y(t) = c e^{t^2} + e^{t^2 + t},  t ∈ R,  c ∈ R.

Variation of Constants (I)

Idea: replace the constant in the general solution of y' = fy by
a function and then try to find the 'right function' to obtain the
general solution of y' = fy + g.
I The general solution of the homogeneous equation y' = fy
  is y = c e^F, where F is a primitive of f and c ∈ C.
I Replace the constant c by an unknown function C : T → C.
I This results in y(t) = C(t) e^{F(t)}, t ∈ T, with C an unknown
  function.
I Substitute y(t) = C(t) e^{F(t)} into the ODE and solve for C.
Variation of Constants (II)

I Substituting y(t) = C(t) e^{F(t)} into y' = fy + g results in:

    C'(t) e^{F(t)} + C(t) f(t) e^{F(t)} = f(t) C(t) e^{F(t)} + g(t)  ⇒  C'(t) = e^{-F(t)} g(t).

I Integrating the last equality yields:

    C(t) = ∫_{t_0}^t e^{-F(s)} g(s) ds + C(t_0).

I Consequently:

    y(t) = ( ∫_{t_0}^t e^{-F(s)} g(s) ds + C(t_0) ) e^{F(t)}.

Variation of Constants (III)

I If the initial value y(t_0) = y_0 is given, then since
  y_0 = C(t_0) e^{F(t_0)}, the solution of the initial value problem is:

    y(t) = ( ∫_{t_0}^t e^{-F(s)} g(s) ds + y_0 e^{-F(t_0)} ) e^{F(t)},  t ≥ t_0.
Variation of Constants: Example (I)

Consider the following initial value problem:

    y'(t) = αy(t) + βt^2,  y(0) = 1.

I We first have to analyze the homogeneous equation
  y' = αy.
I A primitive of the map t ↦ α is F(t) = αt.
I Let g : R → R be defined by g(t) = βt^2. We have to
  determine ∫_0^t e^{-F(s)} g(s) ds = ∫_0^t βs^2 e^{-αs} ds.
I Integration by parts yields:

    ∫_0^t e^{-αs} βs^2 ds = -(β/α) e^{-αt} ( t^2 + (2/α)t + 2/α^2 ) + 2β/α^3.

Variation of Constants: Example (II)

I So:

    C(t) = -(β/α) e^{-αt} ( t^2 + (2/α)t + 2/α^2 ) + 2β/α^3 + C(0).

I Since y(0) = 1 and y(0) = C(0) e^{F(0)}, we obtain C(0) = 1.
I The solution of the initial value problem is consequently:

    y(t) = ( -(β/α) e^{-αt} ( t^2 + (2/α)t + 2/α^2 ) + 2β/α^3 + 1 ) e^{αt}
         = -(β/α) ( t^2 + (2/α)t + 2/α^2 ) + ( 1 + 2β/α^3 ) e^{αt},  t ≥ 0.
Difference and Differential Equations
Lecture 11

Allard van der Made

Week 51 (Tuesday)

This Lecture

Systems of First Order ODEs

Solutions of Systems of First Order ODEs

Homogeneous Linear Vectorial ODEs with Constant


Coefficients
Converting k th Scalar Order ODEs

Consider the general k th order ODE:

    F(t, y(t), y^{(1)}(t), ..., y^{(k)}(t)) = 0.  (1)

This scalar equation can be converted into a system of k first


order ODEs:
Define y_1 := y, y_2 := y^{(1)}, ..., y_k := y^{(k-1)}; then (1) is
equivalent to:

    y_1'(t) = y_2(t),
       :
    y_{k-1}'(t) = y_k(t),
    F(t, y_1(t), y_2(t), ..., y_k(t), y_k'(t)) = 0.

Example: Converting a Linear ODE (I)

Consider a k th order linear ODE:

    a_k(t) y^{(k)}(t) + a_{k-1}(t) y^{(k-1)}(t) + ... + a_0(t) y(t) = b(t),  t ∈ T ⊆ R,

with a_k(t) ≠ 0 for all t ∈ T.

This ODE is equivalent to:

    y_1'(t) = y_2(t),
       :
    y_{k-1}'(t) = y_k(t),  (2)
    y_k'(t) = -(a_0(t)/a_k(t)) y_1(t) - ... - (a_{k-1}(t)/a_k(t)) y_k(t) + b(t)/a_k(t).
Example: Converting a Linear ODE (II)

The system (2) can also be written as follows:

    d/dt (y_1(t), ..., y_k(t))^T = A(t) (y_1(t), ..., y_k(t))^T + (0, ..., 0, b(t)/a_k(t))^T,

where

    A(t) = [       0              1              0        ...        0
                   0              0              1        ...        0
                   :              :              :                   :
            -a_0(t)/a_k(t)  -a_1(t)/a_k(t)  -a_2(t)/a_k(t)  ...  -a_{k-1}(t)/a_k(t) ].

Systems of First Order ODEs

We confine attention to systems of first order ODEs that can be
solved for the derivatives, i.e. systems of the following form:

    y_1'(t) = f_1(t, y_1(t), ..., y_k(t)),
    y_2'(t) = f_2(t, y_1(t), ..., y_k(t)),
       :
    y_k'(t) = f_k(t, y_1(t), ..., y_k(t)).

Equivalently:

    y'(t) = f(t, y(t)),  t ∈ T,  (3)

where f is defined on T × D, D ⊆ C^k (or R^k).
Solutions of Systems of First Order ODEs

Luckily, systems of first order ODEs do have solutions:


Theorem (2.2.4)
Let D ⊆ C^k, T a connected subset of R, and f : T × D → C^k be
continuous on a neighbourhood of (t_0, y_0) ∈ T × D. Then there
exists a δ > 0 and a C^1 function y : (t_0 - δ, t_0 + δ) → C^k such
that y'(t) = f(t, y(t)) and y(t_0) = y_0. Moreover, if f is C^1 on a
neighbourhood of (t_0, y_0), then this solution is unique.

Intuition: See §24.4 of Simon and Blume (1994).

Systems of Linear First Order ODEs

A system of k linear first order ODEs can be written as

    y'(t) = A(t) y(t) + b(t),  t ∈ T,  (4)

where A is a k × k matrix function.


I The system (4) is a system of coupled ODEs if the system
  cannot be written as k separate scalar ODEs, i.e. a_ij ≢ 0
  for at least one pair i, j with i ≠ j.
I If b ≡ 0, then the system is homogeneous. Clearly, y ≡ 0
is a solution of a homogeneous system.
I Conversely, if y ≡ 0 is a solution of a system of linear
ODEs, then the system is homogeneous.
Homogeneous Linear Vectorial ODEs with Constant
Coefficients
Consider the vectorial ODE

    y'(t) = Ay(t),  t ∈ T,  A ∈ C^{k×k}.  (5)

What is the general solution of this ODE?

Finding the general solution (and solutions to initial value
problems) is easy if k = 1 (then A reduces to a ∈ C):

    y(t) = c e^{ta},  t ∈ T,  c ∈ C.

It turns out that the general solution of (5) looks a lot like the
one of the scalar case:

    y(t) = e^{tA} c,  t ∈ T,  c ∈ C^k.

But what does e^{tA} mean?

Calculating e^A (Appendix A.3)

Recall that e^a = ∑_{n=0}^∞ a^n/n!, a ∈ C. Similarly:

    e^A = ∑_{n=0}^∞ (1/n!) A^n,  A ∈ C^{k×k}.

This definition implies:

    e^{tA} = ∑_{n=0}^∞ (t^n/n!) A^n  ⇒  d/dt e^{tA} = A e^{tA}.

Furthermore:

    (e^{tA})^{-1} = e^{-tA}.
Calculating e^{tA}: Example

What is e^{tA}, where A = [ 2  0 ; 0  3 ]?
I

    A^n = [ 2  0 ; 0  3 ]^n = [ 2^n  0 ; 0  3^n ].

I

    (t^n/n!) A^n = [ (2t)^n/n!  0 ; 0  (3t)^n/n! ].

I So:

    e^{tA} = ∑_{n=0}^∞ [ (2t)^n/n!  0 ; 0  (3t)^n/n! ] = [ e^{2t}  0 ; 0  e^{3t} ].

The General Solution of Autonomous Homogeneous


Linear Vectorial ODEs

Theorem (2.2.8)
The general solution of y'(t) = Ay(t), t ∈ T, A ∈ C^{k×k} is

    y(t) = e^{tA} c,  t ∈ T,  c ∈ C^k.  (6)

Furthermore, the solution to the initial value problem

    y'(t) = Ay(t),  y(t_0) = y_0,  y_0 ∈ C^k

is y(t) = e^{(t-t_0)A} y_0, t ∈ T.
Proof of Theorem 2.2.8

I Since d/dt e^{tA} c = A e^{tA} c, y(t) = e^{tA} c is indeed a solution for
  each c ∈ C^k (Lemma 2.2.7).
I Suppose now that the k-dimensional vectorial function z is
  a solution of (5). We have to show that z(t) = e^{tA} c for
  some c ∈ C^k.
I Differentiating e^{-tA} z(t) yields (e^{-tA} commutes with A):

    d/dt ( e^{-tA} z(t) ) = -A e^{-tA} z(t) + e^{-tA} z'(t)
                          = -A e^{-tA} z(t) + A e^{-tA} z(t) = 0.

  This implies that indeed z(t) = e^{tA} c for some c ∈ C^k.
I Combining y(t_0) = y_0 with (6) yields c = e^{-t_0 A} y_0.

The Solution Space of Autonomous Homogeneous


Linear Vectorial ODEs

Theorem (2.2.9)
The solutions of y'(t) = Ay(t), t ∈ T, A ∈ C^{k×k} form a
k-dimensional linear space. The vector functions

    y_i(t) = e^{tA} c_i,  c_i ∈ C^k,  i = 1, ..., k

constitute a basis of this space if and only if c_1, ..., c_k
constitute a basis of C^k.

Sketch of proof: A vectorial function y is a solution of
y'(t) = Ay(t) if and only if y(t) = e^{tA} c for some c ∈ C^k. The
claims now follow from the fact that e^{tA} is invertible.
Basis of the Solution Space of Autonomous
Homogeneous Linear Vectorial ODEs
Lemma
Let A ∈ C^{k×k}. If λ is an eigenvalue of A with associated
eigenvector v, then e^{λt} is an eigenvalue of e^{tA} with associated
eigenvector v.
Sketch of proof: If Av = λv, then A^n v = λ^n v, i.e. λ^n is an
eigenvalue of A^n with eigenvector v.

Hence:
Theorem
Suppose A ∈ R^{k×k} has k linearly independent eigenvectors
s_1, ..., s_k with eigenvalues λ_1, ..., λ_k. Then a basis of the
solution space of y'(t) = Ay(t), t ∈ T, is:

    ỹ_1(t) = e^{λ_1 t} s_1, ..., ỹ_k(t) = e^{λ_k t} s_k.

The Solution Space of an ODE: Example (I)


Find a basis of the solution space of the following ODE:

    y'(t) = [ 3  -4 ; 1  -2 ] y(t),  t ∈ [0, ∞).

I The eigenvalues of [ 3  -4 ; 1  -2 ] are λ_1 = -1 and λ_2 = 2.
I An eigenvector associated with λ_1 is s_1 = (1, 1) and an
  eigenvector associated with λ_2 is s_2 = (4, 1).
I A basis of the solution space is thus:

    y_1(t) = e^{-t} (1, 1)^T,  y_2(t) = e^{2t} (4, 1)^T.

I So, the general solution of the ODE reads

    y(t) = c_1 e^{-t} (1, 1)^T + c_2 e^{2t} (4, 1)^T,  t ∈ [0, ∞),  c_1, c_2 ∈ C.
The Solution Space of an ODE: Example (II)
Find the solution of the following initial value problem:

    y'(t) = [ 3  -4 ; 1  -2 ] y(t),  t ∈ [0, ∞),  y(0) = (-3, 3)^T.

I Note that:

    y(0) = c_1 (1, 1)^T + c_2 (4, 1)^T.

I We thus have to solve:

    -3 = c_1 + 4c_2,  3 = c_1 + c_2.

I It follows that c_1 = 5 and c_2 = -2 and hence the solution
  of the initial value problem is:

    y(t) = 5 e^{-t} (1, 1)^T - 2 e^{2t} (4, 1)^T,  t ∈ [0, ∞).
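A quick Python verification (ours, not part of the slides) that this function satisfies the initial condition and y'(t) = Ay(t):

```python
import numpy as np

A = np.array([[3.0, -4.0], [1.0, -2.0]])
# Candidate solution y(t) = 5 e^{-t} (1,1) - 2 e^{2t} (4,1)
y = lambda t: 5 * np.exp(-t) * np.array([1.0, 1.0]) - 2 * np.exp(2 * t) * np.array([4.0, 1.0])

assert np.allclose(y(0.0), [-3.0, 3.0])          # initial condition
h = 1e-6
for t in [0.0, 0.5, 1.0, 2.0]:
    dydt = (y(t + h) - y(t - h)) / (2 * h)       # central difference
    assert np.allclose(dydt, A @ y(t), atol=1e-4)
print("y(t) solves the initial value problem")
```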
Difference and Differential Equations
Lecture 12

Allard van der Made

Week 51 (Friday)

This Lecture

Solution Space of Homogeneous Linear ODEs (continued)

Solutions of Inhomogeneous Linear ODEs


Linear Homogeneous Vectorial ODEs

Let us reconsider the following equation:

    y'(t) = Ay(t),  t ∈ T,  (1)

where A ∈ R^{k×k}.

If A has k linearly independent eigenvectors, then we can easily


construct a basis of the solution space of (1). If A does not
have k linearly independent (nongeneralized) eigenvectors,
then it is a bit more difficult to construct a useful fundamental
matrix, i.e. a matrix whose column vectors form a basis of the
solution space.

Jordan Normal Forms and Fundamental Matrices

If A = SJS^{-1}, then:

    e^A = e^{SJS^{-1}} = ∑_{n=0}^∞ (1/n!) (SJS^{-1})^n = S ( ∑_{n=0}^∞ (1/n!) J^n ) S^{-1} = S e^J S^{-1}.

The k × k matrix function S e^{tJ} is a fundamental matrix of (1).

Each fundamental matrix Y(t) of (1) is a matrix solution of (1)
(so Y'(t) = AY(t)). You can of course take Y(t) = e^{tA} C for any
nonsingular C ∈ C^{k×k}.
Fundamental Matrices: Example (I)
Consider the system

    y'(t) = Ay(t)  with  A = [ 3  1 ; -1  5 ].

I A Jordan normal form of A is:

    A = SJS^{-1} = [ 1  0 ; 1  1 ] [ 4  1 ; 0  4 ] [ 1  0 ; -1  1 ].

I Note that:

    (tJ)^n = [ 4t  t ; 0  4t ]^n = t^n [ 4^n  n 4^{n-1} ; 0  4^n ].

I Consequently:

    e^{tJ} = ∑_{n=0}^∞ (1/n!) (tJ)^n
           = [ ∑_{n=0}^∞ (4t)^n/n!   t ∑_{n=1}^∞ (4t)^{n-1}/(n-1)! ; 0   ∑_{n=0}^∞ (4t)^n/n! ]
           = e^{4t} [ 1  t ; 0  1 ].

Fundamental Matrices: Example (II)

I The following matrix can thus be used as fundamental
  matrix:

    Y(t) = S e^{tJ} = e^{4t} [ 1  0 ; 1  1 ] [ 1  t ; 0  1 ] = e^{4t} [ 1  t ; 1  t+1 ].
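One can check numerically that this Y(t) solves the matrix equation Y'(t) = AY(t) and has linearly independent columns; a short Python sketch (ours):

```python
import numpy as np

A = np.array([[3.0, 1.0], [-1.0, 5.0]])
# Fundamental matrix from the example: Y(t) = e^{4t} [[1, t], [1, t+1]]
Y = lambda t: np.exp(4 * t) * np.array([[1.0, t], [1.0, t + 1.0]])

h = 1e-6
for t in [0.0, 0.3, 1.0]:
    dY = (Y(t + h) - Y(t - h)) / (2 * h)     # central difference
    assert np.allclose(dY, A @ Y(t), atol=1e-4)
assert abs(np.linalg.det(Y(0.0))) > 0.5      # columns are independent
print("Y(t) = e^{4t} [[1, t], [1, t+1]] is a fundamental matrix")
```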
From a Complex-valued Basis to a Real-valued Basis

Once you have found a complex-valued basis of the solution


space, you can construct a real-valued basis:
Theorem (2.2.14)
Suppose A ∈ R^{k×k} has an eigenvalue λ with nonzero imaginary
part and associated eigenvector v. Then the functions y_1 and
y_2 defined by

    y_1(t) = Re(e^{tλ} v),  y_2(t) = Im(e^{tλ} v)

are linearly independent solutions of y'(t) = Ay(t).

Remark: Recall that such complex eigenvalues come in
complex conjugate pairs if A ∈ R^{k×k}.

Constructing a Real-valued Basis: Example (I)

Consider the following ODE:

    y'(t) = [ 1  -3 ; 1  -2 ] y(t) =: Ay(t),  t ∈ [0, ∞).

Find a real-valued basis of its solution space.

I The eigenvalues of A are λ± = -1/2 ± i (1/2)√3.
I An eigenvector associated with λ_- is s_- = (1, 1/2 + i (1/6)√3)
  and an eigenvector associated with λ_+ is s_+ = conj(s_-).
I Let us determine the real part and the imaginary part of
  the solution

    z(t) = e^{λ_+ t} s_+ = e^{(-1/2 + i (1/2)√3) t} (1, 1/2 - i (1/6)√3)^T.
Constructing a Real-valued Basis: Example (II)
I Note that:

    e^{(-1/2 + i (1/2)√3) t} = e^{-t/2} ( cos((1/2)√3 t) + i sin((1/2)√3 t) )

  and

    ( cos((1/2)√3 t) + i sin((1/2)√3 t) ) ( 1/2 - i (1/6)√3 )
      = (1/2) cos((1/2)√3 t) + (1/6)√3 sin((1/2)√3 t)
        + i ( -(1/6)√3 cos((1/2)√3 t) + (1/2) sin((1/2)√3 t) ).

I So,

    y_1(t) = Re(z(t)) = e^{-t/2} ( cos((1/2)√3 t),
                                   (1/2) cos((1/2)√3 t) + (1/6)√3 sin((1/2)√3 t) )^T,

    y_2(t) = Im(z(t)) = e^{-t/2} ( sin((1/2)√3 t),
                                   -(1/6)√3 cos((1/2)√3 t) + (1/2) sin((1/2)√3 t) )^T

  constitute a real-valued basis of the solution space.

Inhomogeneous Linear Vectorial ODEs with Constant


Coefficients

Consider the inhomogeneous vectorial ODE

    y'(t) = Ay(t) + b(t),  t ∈ T,  (2)

where A is a constant k × k matrix, b is a continuous
k-dimensional vector function, and T ⊆ [t_0, ∞) is connected
and compact.

We can find the general solution of (2) by using a method akin


to variation of constants.
The General Solution of Inhomogeneous Linear ODEs
with Constant Coefficients (I)
Suppose y is a solution of (2). Then:

    d/dt ( e^{-tA} y(t) ) = -A e^{-tA} y(t) + e^{-tA} y'(t)
                          = -A e^{-tA} y(t) + e^{-tA} (Ay(t) + b(t))
                          = e^{-tA} b(t).

Because b is continuous on the compact set T, both sides of
this equality can be integrated (over a subset [t_0, t] of T):

    ∫_{t_0}^t d/ds ( e^{-sA} y(s) ) ds = ∫_{t_0}^t e^{-sA} b(s) ds  ⇒
    e^{-tA} y(t) - e^{-t_0 A} y(t_0) = ∫_{t_0}^t e^{-sA} b(s) ds.

The General Solution of Inhomogeneous Linear ODEs


with Constant Coefficients (II)

The general solution of (2) is hence:

    y(t) = e^{(t-t_0)A} c + e^{tA} ∫_{t_0}^t e^{-sA} b(s) ds,  t ∈ T,  c ∈ C^k.

The solution to the initial value problem y'(t) = Ay(t) + b(t),
y(t_0) = y_0, reads:

    y(t) = e^{(t-t_0)A} y_0 + e^{tA} ∫_{t_0}^t e^{-sA} b(s) ds,  t ∈ T.
Finding General Solutions: Example (I)
Find the general solution of:

    y'(t) = [ 0  -1 ; 1  0 ] y(t) + (1, 1)^T,  t ≥ 0.

I Determine first a Jordan normal form of A = [ 0  -1 ; 1  0 ]:

    A = SJS^{-1} = [ 1  1 ; -i  i ] [ i  0 ; 0  -i ] [ 1/2  i/2 ; 1/2  -i/2 ].

I Hence:

    e^{tA} = S e^{tJ} S^{-1} = [ 1  1 ; -i  i ] [ e^{it}  0 ; 0  e^{-it} ] [ 1/2  i/2 ; 1/2  -i/2 ].

I Consequently:

    e^{-tA} (1, 1)^T = S [ e^{-it}  0 ; 0  e^{it} ] ( (1+i)/2, (1-i)/2 )^T
                     = S ( ((1+i)/2) e^{-it}, ((1-i)/2) e^{it} )^T.

Finding General Solutions: Example (II)

I We can now calculate the required integral:

    ∫_0^t e^{-sA} (1, 1)^T ds = S ∫_0^t ( ((1+i)/2) e^{-is}, ((1-i)/2) e^{is} )^T ds
      = S ( (i(1+i)/2)(e^{-it} - 1), (-i(1-i)/2)(e^{it} - 1) )^T
      = (1/2) ( (-1+i)(e^{-it} - 1) - (1+i)(e^{it} - 1),
                (1+i)(e^{-it} - 1) + (1-i)(e^{it} - 1) )^T.

I Use the fact that e^{-it} = cos(t) - i sin(t) and
  e^{it} = cos(t) + i sin(t):

    ∫_0^t e^{-sA} (1, 1)^T ds
      = (1/2) ( 2 - (1-i)(cos t - i sin t) - (1+i)(cos t + i sin t),
                -2 + (1+i)(cos t - i sin t) + (1-i)(cos t + i sin t) )^T
      = ( 1 - cos t + sin t, -1 + cos t + sin t )^T.
Finding General Solutions: Example (III)

I Furthermore:

    e^{tA} = [ (1/2)e^{it} + (1/2)e^{-it}    (i/2)e^{it} - (i/2)e^{-it} ;
               -(i/2)e^{it} + (i/2)e^{-it}   (1/2)e^{it} + (1/2)e^{-it} ]
           = [ cos t  -sin t ; sin t  cos t ].

I The general solution reads (using (cos t)^2 + (sin t)^2 = 1):

    y(t) = e^{tA} c + e^{tA} ∫_0^t e^{-sA} (1, 1)^T ds
         = ( c_1 cos t - c_2 sin t, c_1 sin t + c_2 cos t )^T
           + ( -1 + cos t + sin t, 1 - cos t + sin t )^T,

  where c = (c_1, c_2) ∈ C^2.
Difference and Differential Equations
Lecture 13

Allard van der Made

Week 02 (Tuesday)

This Lecture

Stability of Solutions of First Order ODEs

Stability of Solutions of Linear ODEs

Stability of Autonomous Systems of Nonlinear ODEs


Stability Concepts related to ODEs (I)

Consider the first order ODE

    y'(t) = f(t, y(t)),  t ∈ T = [t_0, ∞),  (1)

where f : T × D → C^k, with D ⊆ C^k or D ⊆ R^k.

To assess the stability properties of a solution y of (1) on T,
one must be able to compare y with solutions ỹ starting 'close
to y' for all t ∈ T. In other words, such a ỹ must also be a
solution on T.

Stability Concepts related to ODEs (II)

I A solution y of (1) is called stable if for every ε > 0 there
  exists a δ > 0 such that for every solution ỹ defined on an
  interval [t_0, t_1], where t_1 > t_0, with ||ỹ(t_0) - y(t_0)|| ≤ δ one
  has that ỹ is a solution on T and

    ||ỹ(t) - y(t)|| ≤ ε,  ∀t ∈ T.

I A stable solution y is called (locally) asymptotically stable if
  there exists a δ > 0 such that for every solution ỹ with
  ||ỹ(t_0) - y(t_0)|| ≤ δ one has

    lim_{t→∞} (ỹ(t) - y(t)) = 0.
Stability Concepts related to ODEs (III)

I A stable solution y is called globally asymptotically stable if
  every solution ỹ defined on an interval [t_0, t_1], where
  t_1 > t_0, is a solution on T and

    lim_{t→∞} (ỹ(t) - y(t)) = 0.

I A stable solution that is not asymptotically stable is called


neutrally stable.
I A solution that is not stable is called unstable.

Stability of Linear First Order ODEs

If you want to examine the stability of linear ODEs, then you do


not have to worry about the inhomogeneous part of the ODE:
Theorem (2.3.2)
All solutions of the equation

    y'(t) = A(t) y(t) + b(t),  t ∈ T

are neutrally stable, globally asymptotically stable, or unstable if


and only if the null solution (y ≡ 0) of

    y'(t) = A(t) y(t)

is neutrally stable, globally asymptotically stable, or unstable,


respectively.
Stability of Homogeneous, Linear First Order ODEs
with Constant Coefficients

Consider the following ODE:

    y'(t) = Ay(t),  t ∈ [0, ∞),  (2)

where A is a constant k × k matrix.


Theorem (2.3.3)
The null solution of (2) is asymptotically stable if and only if the
real parts of all eigenvalues of A are negative. It is stable if A
has no eigenvalues with positive real part and the algebraic and
geometric multiplicities of every purely imaginary eigenvalue
are equal. The null solution is unstable in all other cases.

Intuition behind Theorem 2.3.3 (I)

Recall that the general solution of (2) is:

    y(t) = e^{tA} c,  c ∈ C^k.

We can write e^{tA} as follows:

    e^{tA} = e^{S(tJ)S^{-1}} = S e^{tΛ} e^{tN} S^{-1},

where J = Λ + N is the sum of the diagonal matrix Λ containing
the eigenvalues of A and a matrix N with Jordan blocks.

It is not difficult to see that the statements hold if N = 0.

What if, for instance, k = 2, the eigenvalue λ = 0 (twice), and
N = [ 0  1 ; 0  0 ]?
Intuition behind Theorem 2.3.3 (II)
Because k × k matrices with Jordan blocks are upper triangular
matrices, only the first k - 1 powers of N are relevant:

    e^{tN} = ∑_{ℓ=0}^{k-1} (t^ℓ/ℓ!) N^ℓ.

So, if N = [ 0  1 ; 0  0 ], then e^{tN} = (t^0/0!) N^0 + (t^1/1!) N^1 = [ 1  t ; 0  1 ].
Furthermore, λ = 0 (twice) implies that e^{tΛ} = I.

So:

    e^{tA} = S [ 1  t ; 0  1 ] S^{-1}.

The t in [ 1  t ; 0  1 ] causes the null solution to be unstable!

Sufficient Conditions for Asymptotic Stability of the


Null Solution of Linear First Order ODEs
Sometimes a glance at A suffices to infer that the null solution
of (2) is stable (see Lemma 2.3.4 for necessary conditions for
stability):
Theorem (2.3.5)
Suppose A = (A_ij) ∈ R^{k×k} has a dominant negative diagonal,
i.e.:
I A has a dominant diagonal:

    ∑_{j≠i} |A_ij| < |A_ii|,  i = 1, ..., k.

I A_ii < 0 for i = 1, ..., k.

Then the null solution of y'(t) = Ay(t), t ≥ 0, is asymptotically
stable.
Proof of Theorem 2.3.5
Let λ = α + βi be an eigenvalue of A. We have to show that
α < 0.
I Note that:

    |A_ii - λ|^2 ≥ (A_ii - α)^2 = (-|A_ii| - α)^2 = (|A_ii| + α)^2.

I Suppose that α ≥ 0. Then |A_ii - λ| ≥ |A_ii| + α ≥ |A_ii|.
I Let B = A - λI. The above observations imply:

    ∑_{j≠i} |B_ij| = ∑_{j≠i} |A_ij| < |A_ii| ≤ |A_ii - λ| = |B_ii|.

I So B has a dominant diagonal and is consequently


nonsingular. But then λ cannot be an eigenvalue of A, a
contradiction!

Stability of Autonomous Systems of Nonlinear ODEs


Consider the following k-dimensional nonlinear system of
ODEs:

    y'(t) = f(y(t)),  t ≥ 0.  (3)

Analysis of its linearization

    y'(t) = f(c) + Df(c)(y(t) - c) = Df(c)(y(t) - c)

at a zero c of f (f(c) = 0) reveals:


Theorem (2.3.6)
Let c ∈ Ck be a zero of f and suppose that all first order partial
derivatives of f exist and are continuous on a neighbourhood of
c. Then the solution y ≡ c is locally asymptotically stable if all
eigenvalues of Df (c) have negative real parts. If at least one
eigenvalue of Df (c) has positive real part, then y ≡ c is
unstable.
Stability of Autonomous Systems of Nonlinear ODEs:
Example
Examine the stability of the null solution of y'(t) = f(y(t)),
where f : R^2 → R^2 is defined by

    f(x_1, x_2) = ( x_1 x_2 - 2x_1 + x_2, -2x_1 x_2 + (1/2)x_1 - x_2 )^T.

I The Jacobian matrix of f is given by

    Df(x_1, x_2) = [ x_2 - 2       x_1 + 1
                     -2x_2 + 1/2   -2x_1 - 1 ].

I One has consequently: Df(0, 0) = [ -2  1 ; 1/2  -1 ].
I Because Df(0, 0) has a dominant negative diagonal, we
  conclude that the null solution is asymptotically stable.
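The conclusion can be cross-checked by computing the eigenvalues of Df(0, 0) directly (Python sketch, ours; numpy assumed available):

```python
import numpy as np

# Df(0,0) from the example; it has a dominant negative diagonal:
# |1| < |-2| in row 1 and |1/2| < |-1| in row 2.
J = np.array([[-2.0, 1.0], [0.5, -1.0]])
ev = np.linalg.eigvals(J)

print(ev)                    # both eigenvalues have negative real part
assert all(ev.real < 0)
```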

A Corollary to Theorem 2.3.6


Theorem (2.3.7)
Let A be a constant k × k matrix and g a k-dimensional vector
function continuous at 0 such that

    lim_{y→0} ||g(y)|| / ||y|| = 0.

Then the null solution of

    y'(t) = Ay(t) + g(y(t)),  t ≥ 0  (4)

is asymptotically stable if all eigenvalues of A have negative
real parts. If A has at least one eigenvalue with positive real
part, then the null solution is unstable.
Sketch of proof: Check first that lim_{y→0} ||g(y)||/||y|| = 0 implies that
the null solution is indeed an equilibrium solution of (4), then
invoke Theorem 2.3.6.
Lyapunov’s Direct Method (I)
If one cannot establish the stability properties of equilibrium
solutions of (3) with the above results, then Lyapunov’s direct
method might help:
Theorem (2.3.10)
Let f : R^k → R^k be differentiable and let x_0 be a zero of f.
Suppose there exists a neighbourhood U of x_0 and a
continuously differentiable function V : U → R such that
V(x_0) = 0 and V(x) > 0, for all x ∈ U \ {x_0}. Then:
1. If V̇(x) := ∑_{j=1}^k (∂V/∂x_j)(x) f_j(x) ≤ 0 for all x ∈ U, then
   y ≡ x_0 is a stable equilibrium solution of (3).
2. If V̇(x) < 0 for all x ∈ U \ {x_0}, then y ≡ x_0 is an
   asymptotically stable equilibrium solution of (3).
3. If V̇(x) > 0 for all x ∈ U \ {x_0}, then y ≡ x_0 is an unstable
   equilibrium solution of (3).

Lyapunov’s Direct Method (II)

I A function V satisfying the conditions of statement 1 of


Theorem 2.3.10 is called a Lyapunov function on U
centered at x0 for equation (3).
I A function V satisfying the conditions of statement 2 is
called a strict Lyapunov function on U centered at x0 for
equation (3).
Remark: Polynomials with only even powers are often suitable
candidate Lyapunov functions.
Lyapunov’s Direct Method: Example (I)

Examine the stability of the null solution of y'(t) = f(y(t)),
where f : R^2 → R^2 is defined by

    f(x_1, x_2) = ( -x_2^3, x_1 - (1/2)x_2^3 )^T.

I The Jacobian matrix of f evaluated at (0, 0) is
  Df(0, 0) = [ 0  0 ; 1  0 ]. This matrix has eigenvalue 0 with
  algebraic multiplicity 2.
I Let us consider the candidate Lyapunov function V_{a,b} : R^2 → R,
  V_{a,b}(x_1, x_2) = a x_1^2 + b x_2^4, where a, b > 0 are yet to be
  determined. Note that V_{a,b} is C^1, V_{a,b}(0, 0) = 0 and
  V_{a,b}(x_1, x_2) > 0 for all (x_1, x_2) ∈ R^2 \ {(0, 0)}.

Lyapunov’s Direct Method: Example (II)

I Note that:

    V̇_{a,b}(x_1, x_2) = (∂V_{a,b}/∂x_1)(x_1, x_2) · (-x_2^3)
                         + (∂V_{a,b}/∂x_2)(x_1, x_2) · (x_1 - (1/2)x_2^3)
                       = -2a x_1 x_2^3 + 4b x_1 x_2^3 - 2b x_2^6.

I If a = 2b, then V̇_{a,b}(x_1, x_2) = -2b x_2^6 ≤ 0 for all
  (x_1, x_2) ∈ R^2.
I We conclude that for each b > 0, V_{2b,b} is a Lyapunov
  function on R^2 centered at (0, 0) and hence that the null
  solution is stable.
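The identity V̇_{2b,b}(x) = -2b x_2^6 can be verified numerically; a Python sketch (ours) with the illustrative choice b = 1:

```python
import numpy as np

# V(x1, x2) = 2b*x1^2 + b*x2^4 with b = 1 (illustrative); f from the example.
b = 1.0
f = lambda x: np.array([-x[1]**3, x[0] - 0.5 * x[1]**3])
# Vdot = (dV/dx1)*f1 + (dV/dx2)*f2 with dV/dx1 = 4b*x1, dV/dx2 = 4b*x2^3
Vdot = lambda x: 4*b*x[0]*(-x[1]**3) + 4*b*x[1]**3*(x[0] - 0.5*x[1]**3)

rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.uniform(-3, 3, size=2)
    assert Vdot(x) <= 1e-12                                  # Vdot <= 0 everywhere
    assert abs(Vdot(x) + 2*b*x[1]**6) < 1e-8 * max(1.0, x[1]**6)
print("Vdot(x) = -2*b*x2^6 <= 0 on all sampled points")
```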
