

Numerical approximations of solutions of ordinary differential equations

Anotida Madzvamuse

Department of Mathematics
Pevensey III, 5C15,
Brighton, BN1 9QH,
UK

THE FIRST MASAMU ADVANCED STUDY INSTITUTE
AND WORKSHOPS IN MATHEMATICAL SCIENCES
December 1 - 14, 2011
Livingstone, Zambia

Outline

1 Introduction and Preliminaries

2 Picard’s Theorem

3 One-step Methods

4 Error analysis of the θ-method

5 General explicit one-step method

6 Runge-Kutta methods

7 Linear multi-step methods



Applications of ODEs

Ordinary differential equations (ODEs) are a fundamental tool in
applied mathematics and mathematical modelling.
They can be found in the modelling of
biological systems,
population dynamics,
micro/macroeconomics,
game theory,
financial mathematics.
They also constitute an important branch of mathematics with
applications to different fields such as geometry, mechanics, partial
differential equations, dynamical systems, mathematical astronomy
and physics.

Applications of ODEs

The problem consists of finding y : I −→ R such that it satisfies
the differential equation

y′ := dy/dx = f(x, y(x)),   (1)

and the initial condition

y(x0) = y0.   (2)

The above is known as an initial value problem (IVP).



The need for computations

Note that an analytical solution of (1)-(2) can be found only
in very particular situations, which are usually quite simple
ones.
In general, especially in equations that are of modelling
relevance, there is no systematic way of writing down a
formula for the function y(x).
Therefore, in applications where quantitative knowledge of
the solution is fundamental, one has to turn to a numerical
(i.e., digital or computer) approximation of y(x). This is a
computational mathematics problem, and such a problem
raises three main questions.

Discretisation

The first question is about our ability to come up with a
computable version of the problem. For instance, derivatives
appear in (1), but on a computer a derivative (or
an integral) cannot be evaluated exactly and needs to be
replaced by some approximation.
Also, there is a continuous infinity of time instants between x0
and x0 + T > x0; it is not possible to determine (or even to
approximate) y(x) for each x ∈ [x0, x0 + T), and one has to
settle for a finite number of points x0 < x1 < · · · < xN = x0 + T.
The process of transforming a continuous problem (which
involves a continuous infinity of operations) into one involving a
finite number of operations is called discretisation and
constitutes the first phase of establishing a computational method.

Numerical Analysis

The second important question regarding a computational


method is to understand whether it yields the wanted results.
We want to guarantee that the figures that the computer will
output are really related to the problem.
It must be checked mathematically that the discrete solution
(i.e., the solution of the discretised problem) is a good
approximation of the exact solution and deserves the name of
approximate solution.
This is usually done by conducting an error analysis, which
involves concepts such as stability, consistency and
convergence.

Efficiency and implementation

Finally, the third important question regarding a


computational method is that of efficiency and its actual
implementation on a computer. By efficiency, roughly
speaking, we mean the amount of time needed to compute the
solution. It is very easy to write an
algorithm that computes a quantity, but it is less easy to write
one that does so efficiently.
Once a discretisation is found to be convergent and to have
an acceptable level of efficiency, it can be implemented by
using a computer language, and used for practical purposes.

Deliverables of the lecture series

The main goal of this course is to derive, understand, analyse


and implement effective numerical algorithms that provide
(good) approximations to the solution y of problem (1)-(2).
A theoretical stream in which we derive and analyse the
various methods.
A practical stream where these methods are coded on a
computer using easy programming languages such as Matlab,
Octave (a free and legal competitor of Matlab with very
similar syntax) or Scilab (also free, good-quality
software with a Matlab-equivalent syntax).
The implementation of methods is fundamental in
understanding and appreciating the methods and it provides a
good feeling of reward once a numerical method is
(successfully) seen in action.

Picard’s theorem

In general, even if f (x, y (x)) is a continuous function, there is no


guarantee that the initial value problem (1)-(2) possesses a unique
solution. Fortunately, under a further mild condition on the
function f (x, y (x)), the existence and uniqueness of a solution to
(1)-(2) can be ensured: the result is encapsulated in the next
theorem.
Picard’s Theorem
Theorem (Picard’s Theorem)
Suppose that f(·, ·) is a continuous function of its arguments in a
region U of the (x, y) plane which contains the rectangle

R = {(x, y) : x0 ≤ x ≤ XM, |y − y0| ≤ YM},

where XM > x0 and YM > 0 are constants. Suppose further that there
exists L > 0 such that

|f(x, y) − f(x, z)| ≤ L|y − z|, ∀(x, y), (x, z) ∈ R.   (3)

Finally, suppose that M = max{|f(x, y)| : (x, y) ∈ R} satisfies
M(XM − x0) ≤ YM. Then there exists a unique continuously differentiable
function x −→ y(x), defined on the closed interval [x0, XM], which
satisfies (1) and (2).

Picard’s Theorem: Conceptual Proof I

The essence of the proof is to consider the sequence of functions
{yn}, n = 0, 1, 2, · · · , defined recursively through what is known
as the Picard Iteration:

y0(x) = y0,
yn(x) = y0 + ∫_{x0}^{x} f(ξ, yn−1(ξ)) dξ,   n = 1, 2, · · · ,   (4)

and show, using the conditions of the theorem, that {yn} converges
uniformly on the interval [x0, XM] to a function y defined on
[x0, XM] such that

y(x) = y0 + ∫_{x0}^{x} f(ξ, y(ξ)) dξ.

Picard’s Theorem: Conceptual Proof II


This then implies that y is continuously differentiable on [x0 , XM ]
and it satisfies the differential equation (1) and the initial condition
(2). The uniqueness of the solution follows from the Lipschitz
condition.

Picard’s Theorem: System of ODEs

Picard’s Theorem has a natural extension to an initial value
problem for a system of m differential equations of the form

y′ = f(x, y),   y(x0) = y0,   (5)

where y0 ∈ Rm and f : [x0, XM] × Rm −→ Rm.



Picard’s Theorem: System of ODEs

On introducing the Euclidean norm ‖ · ‖ on Rm,

‖v‖ = ( Σ_{i=1}^{m} |vi|² )^{1/2},   v ∈ Rm,

we can state the following result.



Picard’s Theorem: System of ODEs

Theorem (Picard’s theorem)


Suppose that f(·, ·) is a continuous function of its arguments in a
region U of the (x, y) space R1+m which contains the
parallelepiped R = {(x, y) : x0 ≤ x ≤ XM, ‖y − y0‖ ≤ YM},
where XM > x0 and YM > 0 are constants. Suppose also that
there exists a positive constant L such that

‖f(x, y) − f(x, z)‖ ≤ L‖y − z‖   (6)

holds whenever (x, y) and (x, z) lie in R. Finally, letting
M = max{‖f(x, y)‖ : (x, y) ∈ R}, suppose that
M(XM − x0) ≤ YM. Then there exists a unique continuously
differentiable function x −→ y(x), defined on the closed interval
[x0, XM], which satisfies (5).

Picard’s Theorem: System of ODEs

A sufficient condition for (6) is that f is continuous on R,
differentiable at each point (x, y) in int(R), the interior of R, and
that there exists L > 0 such that

‖ ∂f/∂y (x, y) ‖ ≤ L   for all (x, y) ∈ int(R),   (7)

where ∂f/∂y denotes the m × m Jacobi matrix of
y ∈ Rm −→ f(x, y) ∈ Rm, and ‖ · ‖ is a matrix norm subordinate
to the Euclidean vector norm on Rm. Indeed, when (7) holds, the
Mean Value Theorem implies that (6) is also valid. The converse
of this statement is not true: the function
f(y) = (|y1|, · · · , |ym|)T, with x0 = 0 and y0 = 0, satisfies (6) but
violates (7), because y −→ f(y) is not differentiable at the point
y = 0.

Picard’s Theorem: System of ODEs

Definition (Stability of solutions)


A solution y = v(x) to (5) is said to be stable on the interval
[x0, XM] if for every ε > 0 there exists δ > 0 such that for all z
satisfying ‖v(x0) − z‖ < δ the solution y = w(x) to the differential
equation y′ = f(x, y) satisfying the initial condition w(x0) = z is
defined for all x ∈ [x0, XM] and satisfies ‖v(x) − w(x)‖ < ε for all
x ∈ [x0, XM].
A solution which is stable on [x0, ∞) (i.e., stable on [x0, XM] for
each XM and with δ independent of XM) is said to be stable in
the sense of Lyapunov. Moreover, if

lim_{x−→∞} ‖v(x) − w(x)‖ = 0,

then the solution y = v(x) is called asymptotically stable.



Picard’s Theorem: System of ODEs

Theorem (Stability of solutions)


Under the hypotheses of Picard’s theorem, the (unique) solution
y = v(x) to the initial value problem (5) is stable on the interval
[x0, XM] (where we assume that −∞ < x0 < XM < ∞).

Proof.
Exercise

Euler’s method and its relatives: the θ-method

Suppose that the IVP (1)-(2) is to be solved on [x0, XM].
We divide this interval by the mesh points
xn = x0 + nh, n = 0, · · · , N, where h = (XM − x0)/N and N ∈ Z+.
The positive real number h is called the step size.
Now let us suppose that, for each n, we seek a numerical
approximation yn to y(xn), the value of the analytical solution
at the mesh point xn. Given that y(x0) = y0 is known, suppose
that we have already calculated yn, up to some n,
0 ≤ n ≤ N − 1; we define

yn+1 = yn + hf(xn, yn),   n = 0, · · · , N − 1.   (8)

Thus, taking in succession n = 0, 1, · · · , N − 1, one step at a
time, the approximate values yn at the mesh points xn can be
easily obtained. This numerical method is known as
Euler’s method.
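The recursion (8) is a single loop in code. The course codes its methods in Matlab or Octave; the sketch below expresses the same recursion in Python, with function and variable names of my own choosing.

```python
def explicit_euler(f, x0, y0, XM, N):
    """Approximate the IVP y' = f(x, y), y(x0) = y0 on [x0, XM] using (8)."""
    h = (XM - x0) / N
    xs, ys = [x0], [y0]
    for n in range(N):
        ys.append(ys[n] + h * f(xs[n], ys[n]))  # y_{n+1} = y_n + h f(x_n, y_n)
        xs.append(x0 + (n + 1) * h)
    return xs, ys

# Example: y' = y, y(0) = 1.  Each step multiplies y_n by (1 + h), so with
# h = 0.1 the value at x = 1 is 1.1**10, a rough approximation to e.
xs, ys = explicit_euler(lambda x, y: y, 0.0, 1.0, 1.0, 10)
```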

Euler’s method: Method II

Suppose the function y(x) is twice continuously differentiable with
respect to x. By Taylor’s Theorem we have

y(xn+1) = y(xn) + hy′(xn) + O(h²),   (9)

hence if h ≪ 1 then we can write

yn+1 ≈ yn + hf(xn, yn),

where we have neglected the O(h²) and higher-order terms.



Euler’s method: Method III

Integrating (1) between two consecutive mesh points xn and xn+1,
we deduce that

y(xn+1) = y(xn) + ∫_{xn}^{xn+1} f(x, y(x)) dx,   n = 0, · · · , N − 1,   (10)

and then applying the numerical integration rule

∫_{xn}^{xn+1} g(x) dx ≈ hg(xn),

called the rectangle rule, with g(x) = f(x, y(x)), we get

y(xn+1) ≈ y(xn) + hf(xn, y(xn)),   n = 0, · · · , N − 1,   y(x0) = y0.



Euler’s method and its relatives: the θ-method

We now replace the rectangle rule in the derivation of Euler’s method
with a one-parameter family of integration rules of the form

∫_{xn}^{xn+1} g(x) dx ≈ h[(1 − θ)g(xn) + θg(xn+1)],   (11)

with θ ∈ [0, 1] a parameter. On applying this with
g(x) = f(x, y(x)) we find that

y(xn+1) ≈ y(xn) + h[(1 − θ)f(xn, y(xn)) + θf(xn+1, y(xn+1))],   n = 0, · · · , N − 1,
y(x0) = y0.   (12)

Euler’s method and its relatives: the θ-method

This motivates the introduction of the following
one-parameter family of methods: given that y0 is supplied by (2),
define

yn+1 = yn + h[(1 − θ)f(xn, yn) + θf(xn+1, yn+1)],   n = 0, · · · , N − 1,   (13)

parametrised by θ ∈ [0, 1]; (13) is called the θ-method. For
θ = 0 we recover the explicit Euler method.
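For θ > 0 the update (13) is implicit in yn+1. A minimal Python sketch (names are mine, not the slides’) resolves the implicit equation with the same plain fixed-point iteration that Example 1 below uses; this converges for small enough h when f is Lipschitz in y.

```python
def theta_method(f, x0, y0, XM, N, theta, its=50):
    """The theta-method (13).  For theta > 0 the implicit equation for
    y_{n+1} is solved by a plain fixed-point iteration."""
    h = (XM - x0) / N
    x, y = x0, y0
    for _ in range(N):
        z = y + h * f(x, y)  # explicit Euler predictor as the starting guess
        for _ in range(its):
            z = y + h * ((1.0 - theta) * f(x, y) + theta * f(x + h, z))
        x, y = x + h, z
    return y

# theta = 0 is the explicit Euler method; theta = 1 the implicit Euler method.
# For y' = -y, y(0) = 1, implicit Euler gives y_N = (1 + h)**(-N).
y_impl = theta_method(lambda x, y: -y, 0.0, 1.0, 1.0, 10, 1.0)
```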

The implicit Euler method

For θ = 1, and y0 specified by (2), we get

yn+1 = yn + hf(xn+1, yn+1),   n = 0, · · · , N − 1,   (14)

referred to as the implicit Euler method since, unlike Euler’s
method considered above, (14) requires the solution of an implicit
equation in order to determine yn+1, given yn.

The Trapezium Rule method: θ = 1/2

The scheme which results for the value θ = 1/2 is also of interest:
y0 is supplied by (2) and subsequent values yn+1 are computed
from

yn+1 = yn + (1/2) h[f(xn, yn) + f(xn+1, yn+1)],   n = 0, · · · , N − 1;   (15)

this is called the Trapezium Rule Method.

Example 1

Given the IVP

y′ = x − y²,   y(0) = 0,

on the interval x ∈ [0, 0.4], we compute an approximate solution
using the θ-method, for θ = 0, θ = 1/2 and θ = 1, with the step
size h = 0.1. The results are shown in Table 1. In the case of the
two implicit methods, corresponding to θ = 1/2 and θ = 1, the
nonlinear equations have been solved by a fixed-point iteration.
For comparison, we also compute the value of the analytical
solution y(x) at the mesh points xn = 0.1 × n, n = 0, · · · , 4. Since
the solution is not available in closed form, we use a Picard
iteration to calculate an accurate approximation to the analytical
solution on the interval [0, 0.4] and call this the exact solution.

Table 1

k    xk     yk for θ = 0    yk for θ = 1/2    yk for θ = 1
0    0      0               0                 0
1    0.1    0               0.005             0.00999
2    0.2    0.01            0.01998           0.02990
3    0.3    0.02999         0.04486           0.05955
4    0.4    0.05990         0.07944           0.09857

Table: The values of the numerical solution at the mesh points.
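As a check, the θ = 0 (explicit Euler) column of Table 1 can be reproduced in a few lines; this Python sketch is mine, not part of the slides.

```python
# Explicit Euler for y' = x - y^2, y(0) = 0, with h = 0.1, as in Table 1.
h, x, y = 0.1, 0.0, 0.0
col = [y]
for k in range(4):
    y = y + h * (x - y**2)  # one Euler step (8)
    x = x + h
    col.append(y)
# col reproduces the theta = 0 column: [0, 0, 0.01, 0.02999, 0.05990].
```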

Picard’s Iteration

Given y(0) = 0, we compute the sequence

yk(x) = y(0) + ∫_{0}^{x} ( ξ − yk−1(ξ)² ) dξ,   k = 1, 2, · · · ,

to obtain

y0(x) = 0,
y1(x) = (1/2)x²,
y2(x) = (1/2)x² − (1/20)x⁵,
y3(x) = (1/2)x² − (1/20)x⁵ + (1/160)x⁸ − (1/4400)x¹¹.

It is now easy to prove by induction that

y(x) = (1/2)x² − (1/20)x⁵ + (1/160)x⁸ − (1/4400)x¹¹ + O(x¹⁴).
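Since each Picard iterate here is a polynomial with rational coefficients, the iteration can be carried out exactly on a computer. The sketch below (my own, using Python’s exact rational arithmetic) represents a polynomial as a dict mapping exponent to coefficient and reproduces y1, y2, y3 above.

```python
from fractions import Fraction

def picard_step(poly):
    """One Picard iterate for y' = x - y^2, y(0) = 0: integrate
    xi - y(xi)^2 from 0 to x.  Polynomials are dicts exponent -> coeff."""
    sq = {}  # square the current iterate
    for i, a in poly.items():
        for j, b in poly.items():
            sq[i + j] = sq.get(i + j, Fraction(0)) + a * b
    integrand = {1: Fraction(1)}  # the term xi
    for e, c in sq.items():       # minus y(xi)^2
        integrand[e] = integrand.get(e, Fraction(0)) - c
    # integrate term by term: c x^e -> c/(e+1) x^(e+1)
    return {e + 1: c / (e + 1) for e, c in integrand.items() if c != 0}

y = {}  # y_0(x) = 0
iterates = []
for _ in range(3):
    y = picard_step(y)
    iterates.append(y)
# iterates[0] is x^2/2, iterates[1] is x^2/2 - x^5/20, and iterates[2] is
# x^2/2 - x^5/20 + x^8/160 - x^11/4400, matching the slide.
```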

Tabulating results

Tabulating y3(x) on the interval [0, 0.4] with step size h = 0.1, we
get the exact solution at the mesh points shown in Table 2.
The exact solution is in good agreement with the results obtained
with θ = 1/2: the error is ≤ 5 × 10⁻⁵. For θ = 0 and θ = 1 the
discrepancy between yk and y(xk) is larger: the error is ≤ 3 × 10⁻².

k    xk     y(xk)
0    0      0
1    0.1    0.005
2    0.2    0.01998
3    0.3    0.04488
4    0.4    0.07949

Table: Values of the ”exact solution” at the mesh points.

MAPLE: Exact Solution

We note in conclusion that a plot of the analytical solution can be
obtained, for example, by using the MAPLE package at the
command line.

Error analysis of the θ-method

First we have to explain what we mean by error.


The exact solution of the initial value problem (1)-(2) is a
function of a continuously varying argument x ∈ [x0 , XM ],
while the numerical solution yn is only defined at the mesh
points xn , n = 0, · · · , N, so it is a function of a discrete
argument.
We can compare these two functions either by extending in
some fashion the approximate solution from the mesh points
to the whole of the interval [x0 , XM ] (say by interpolating
between the values yn ), or by restricting the function y to the
mesh points and comparing y (xn ) with yn for n = 0, · · · , N.
Since the first of these approaches is somewhat arbitrary
because it does not correspond to any procedure performed in
a practical computation, we adopt the second approach, and
we define the global error e by
en = y (xn ) − yn , n = 0, · · · , N.

θ=0

We wish to investigate the decay of the global error for the
θ-method as the mesh size h is reduced.
From the explicit Euler method,

yn+1 = yn + hf(xn, yn),   n = 0, · · · , N − 1,   y0 given,

we can define the truncation error by

Tn = ( y(xn+1) − y(xn) ) / h − f(xn, y(xn)),   (16)

obtained by inserting the analytical solution y(x) into the
numerical method and dividing by the mesh size.
Indeed, it measures the extent to which the analytical solution
fails to satisfy the difference equation for Euler’s method.
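The truncation error (16) can be evaluated directly whenever the exact solution is known. As a small numerical illustration of my own (anticipating the bound (17) below): for y′ = y with y = eˣ and xn = 0, Tn = (e^h − 1)/h − 1, and the ratio Tn/h tends to y″(0)/2 = 1/2.

```python
import math

# T_n at x_n = 0 for y' = y (so y'' = y, y(0) = 1), divided by h;
# the ratio approaches 1/2 from above as h shrinks.
ratios = [((math.exp(h) - 1.0) / h - 1.0) / h for h in (0.1, 0.01, 0.001)]
```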

θ = 0 Continued

By noting that f(xn, y(xn)) = y′(xn) and applying Taylor’s
Theorem, it follows from (16) that there exists ξn ∈ (xn, xn+1)
such that

Tn = (1/2) h y″(ξn),   (17)

where we have assumed that f is a sufficiently smooth
function of two variables, so as to ensure that y″ exists and is
bounded on the interval [x0, XM]. Since, from the definition of
Euler’s method,

0 = ( yn+1 − yn ) / h − f(xn, yn),

subtracting this from (16) we deduce that

en+1 = en + h[f(xn, y(xn)) − f(xn, yn)] + hTn.



θ = 0 Continued

Thus, assuming that |yn − y0| ≤ YM, from the Lipschitz condition
(3) we get

|en+1| ≤ (1 + hL)|en| + h|Tn|,   n = 0, · · · , N − 1.

Now let T = max_{0≤n≤N−1} |Tn|; then

|en+1| ≤ (1 + hL)|en| + hT,   n = 0, · · · , N − 1.

By induction, and noting that 1 + hL ≤ e^{hL},

|en| ≤ (T/L)[(1 + hL)ⁿ − 1] + (1 + hL)ⁿ |e0|
     ≤ (T/L)( e^{L(xn−x0)} − 1 ) + e^{L(xn−x0)} |e0|,   n = 1, · · · , N.

θ = 0 Continued

This estimate, together with the bound

T ≤ (1/2) h M2,   where M2 = max_{x∈[x0,XM]} |y″(x)|,

which follows from (17), yields

|en| ≤ e^{L(xn−x0)} |e0| + (M2 h / (2L)) ( e^{L(xn−x0)} − 1 ),   n = 0, · · · , N.   (18)

Analogously, for the general θ-method we can prove that

|en| ≤ |e0| exp( L(xn − x0)/(1 − θLh) )
     + (h/L) [ |1/2 − θ| M2 + (1/3) h M3 ] { exp( L(xn − x0)/(1 − θLh) ) − 1 },   (19)

for n = 0, · · · , N, where now M3 = max_{x∈[x0,XM]} |y‴(x)|.

Orders of Convergence

W.l.o.g. suppose that e0 = y(x0) − y0 = 0. Then it follows that:
if θ = 1/2, then |en| = O(h²);
if θ ≠ 1/2 (in particular for θ = 0 and θ = 1), then |en| = O(h).
Hence, when the mesh size h is halved:
for θ ≠ 1/2, the truncation and global errors are reduced by a factor of 2;
for θ = 1/2, they are reduced by a factor of 4.
What price do we pay?
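These convergence rates are easy to observe numerically. As an illustrative sketch of my own (not from the slides), take y′ = −y, y(0) = 1, for which the θ-method update (13) can be solved in closed form, and compare the error at x = 1 for step sizes h and h/2.

```python
import math

def theta_step_linear(y, h, theta, lam=-1.0):
    # For y' = lam*y the implicit update (13) solves exactly:
    # y_{n+1} = y_n (1 + (1 - theta) h lam) / (1 - theta h lam)
    return y * (1.0 + (1.0 - theta) * h * lam) / (1.0 - theta * h * lam)

def error_at_1(theta, N):
    h, y = 1.0 / N, 1.0
    for _ in range(N):
        y = theta_step_linear(y, h, theta)
    return abs(y - math.exp(-1.0))

# Halving h roughly halves the error for theta = 0 (first order)
# and quarters it for theta = 1/2 (second order).
ratio_euler = error_at_1(0.0, 10) / error_at_1(0.0, 20)
ratio_trap = error_at_1(0.5, 10) / error_at_1(0.5, 20)
```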

Improved Explicit Euler’s Method

The trapezium rule method is less convenient than Euler’s
method: it requires the solution of an implicit equation at each
mesh point xn+1 to compute yn+1.
An attractive compromise is to use the explicit Euler method
to compute an initial crude approximation ỹn+1 to y(xn+1) and
then use this value within the trapezium rule to obtain a more
accurate approximation for y(xn+1): the resulting numerical
method is

ỹn+1 = yn + hf(xn, yn),
yn+1 = yn + (1/2) h[f(xn, yn) + f(xn+1, ỹn+1)],   (20)

for n = 0, · · · , N − 1, y0 given.
This is frequently referred to as the improved explicit Euler
method.
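The predictor-corrector structure of (20) is two lines per step in code. A minimal Python sketch (names mine):

```python
def improved_euler(f, x0, y0, XM, N):
    """The improved explicit Euler method (20): an explicit Euler
    predictor followed by a trapezium-rule corrector."""
    h = (XM - x0) / N
    x, y = x0, y0
    for _ in range(N):
        y_pred = y + h * f(x, y)                        # crude predictor
        y = y + 0.5 * h * (f(x, y) + f(x + h, y_pred))  # trapezium corrector
        x = x + h
    return y

# For y' = y, y(0) = 1 each step multiplies y by 1 + h + h^2/2, so with
# h = 0.1 the value at x = 1 is 1.105**10, much closer to e than Euler's.
```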

General explicit one-step method

A general explicit one-step method may be written in the form

yn+1 = yn + hΦ(xn, yn; h),   n = 0, · · · , N − 1,   y0 = y(x0),   (21)

where Φ(·, ·; ·) is a continuous function of its variables. For example,
in the case of the explicit Euler method,

Φ(xn, yn; h) = f(xn, yn),

while for the improved explicit Euler method

Φ(xn, yn; h) = (1/2)[ f(xn, yn) + f(xn + h, yn + hf(xn, yn)) ].

In order to assess the accuracy of the numerical method (21), we
define the global error by

en = y(xn) − yn.

Truncation error

Tn = ( y(xn+1) − y(xn) ) / h − Φ(xn, y(xn); h).   (22)

A bound on the global error in terms of Tn

Theorem
Consider the general one-step method (21) where, in addition to
being a continuous function of its arguments, Φ is assumed to
satisfy a Lipschitz condition with respect to its second argument;
namely, there exists a positive constant LΦ such that, for
0 ≤ h ≤ h0 and for the same region R as in Picard’s theorem,

|Φ(x, y; h) − Φ(x, z; h)| ≤ LΦ|y − z|,   for (x, y), (x, z) in R.   (23)

Then, assuming that |yn − y0| ≤ YM, it follows that

|en| ≤ e^{LΦ(xn−x0)} |e0| + [ ( e^{LΦ(xn−x0)} − 1 ) / LΦ ] T,   n = 0, · · · , N,   (24)

where T = max_{0≤n≤N−1} |Tn|.



Proof

Proof.
Exercise.

Example 2: Consider

y′ = tan⁻¹ y,   y(0) = y0.   (25)

The aim of the exercise is to apply (24) to quantify the size of the
associated global error; thus, we need to find L and M2. Here
f(x, y) = tan⁻¹ y, so by the Mean Value Theorem

|f(x, y) − f(x, z)| = | ∂f/∂y (x, η) | |y − z|,

where η lies between y and z.

Example 2 Cont’d

In our case

| ∂f/∂y | = |(1 + y²)⁻¹| ≤ 1,

and therefore L = 1. To find M2 we need to obtain a bound on
|y″| (without actually solving the initial value problem!). This is
easily achieved by differentiating both sides of the differential
equation with respect to x:

y″ = d/dx (tan⁻¹ y) = (1 + y²)⁻¹ dy/dx = (1 + y²)⁻¹ tan⁻¹ y.

Therefore |y″(x)| ≤ M2 = (1/2)π. Inserting the values of L and M2
into (18), we have that

|en| ≤ e^{xn} |e0| + (1/4)π ( e^{xn} − 1 ) h,   n = 0, · · · , N.

In particular, if we assume that no error has been committed
initially (i.e., e0 = 0), we have

|en| ≤ (1/4)π ( e^{xn} − 1 ) h,   n = 0, · · · , N.

Thus, given a tolerance TOL specified beforehand, we can ensure
that the error between the (unknown) analytical solution and its
numerical approximation does not exceed this tolerance by
choosing a positive step size h such that

h ≤ (4/π) ( e^{XM} − 1 )⁻¹ TOL.

For such h we shall have |y(xn) − yn| = |en| ≤ TOL for each
n = 0, · · · , N, as required.
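This step-size selection can be checked numerically. The Python sketch below (my own; TOL and XM values are illustrative) chooses h from the bound, runs Euler’s method for y′ = tan⁻¹ y, and compares against a much finer Euler mesh used as a stand-in for the unknown exact solution.

```python
import math

TOL, XM, y0 = 1e-2, 1.0, 1.0

# Step size from the bound h <= (4/pi) (e^XM - 1)^(-1) TOL.
h_max = 4.0 / math.pi / (math.exp(XM) - 1.0) * TOL
N = math.ceil(XM / h_max)  # smallest N giving h <= h_max

def euler(steps):
    h, y = XM / steps, y0
    for _ in range(steps):
        y = y + h * math.atan(y)
    return y

# Fine-mesh Euler run as a reference "exact" value.
error = abs(euler(N) - euler(100 * N))
```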

Consistency

Definition (Consistency)
The numerical method (21) is consistent with the differential
equation (1) if the truncation error defined by (22) is such that for
any ε > 0 there exists a positive h(ε) for which |Tn| < ε for
0 < h < h(ε) and any pair of points (xn, y(xn)), (xn+1, y(xn+1)) on
any solution curve in R.

NOTE:

lim_{h−→0} Tn = y′(xn) − Φ(xn, y(xn); 0).

Therefore the one-step method (21) is consistent if and only if

Φ(x, y; 0) ≡ f(x, y).   (26)


Convergence Theorem

Theorem
Suppose that the solution of the initial value problem (1)-(2) lies
in R, as does its approximation generated from (21) when h ≤ h0.
Suppose also that the function Φ(·, ·; ·) is uniformly continuous on
R × [0, h0] and satisfies the consistency condition (26) and the
Lipschitz condition

|Φ(x, y; h) − Φ(x, z; h)| ≤ LΦ|y − z|   on R × [0, h0].   (27)

Then, if successive approximation sequences (yn), generated for
xn = x0 + nh, n = 1, 2, · · · , N, are obtained from (21) with
successively smaller values of h, each less than h0, we have
convergence of the numerical solution to the solution of the initial
value problem in the sense that |y(xn) − yn| −→ 0 as h −→ 0,
xn −→ x ∈ [x0, XM].

Order of accuracy

Definition (Order of accuracy)
The numerical method (21) is said to have order of accuracy p if
p is the largest positive integer such that, for any sufficiently
smooth solution curve (x, y(x)) in R of the initial value problem
(1)-(2), there exist constants K and h0 such that

|Tn| ≤ Kh^p   for 0 < h ≤ h0,

for any pair of points (xn, y(xn)), (xn+1, y(xn+1)) on the solution
curve.

R-stage Runge-Kutta family

Runge–Kutta methods aim to achieve higher accuracy, at the price
of some of the efficiency of Euler’s method, by re-evaluating f(·, ·)
at points intermediate between (xn, y(xn)) and (xn+1, y(xn+1)).
The general R-stage Runge–Kutta family is defined by

yn+1 = yn + hΦ(xn, yn; h),
Φ(xn, yn; h) = Σ_{r=1}^{R} cr kr,
k1 = f(xn, yn),
kr = f( xn + h ar, yn + h Σ_{s=1}^{r−1} brs ks ),   r = 2, · · · , R,
ar = Σ_{s=1}^{r−1} brs,   r = 2, · · · , R.   (28)
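The family (28) translates directly into code driven by the coefficients cr and brs. The Python sketch below (names mine) implements a general explicit R-stage method and, as an assumed illustrative instance not discussed in the slides, plugs in the coefficients of the classical fourth-order Runge–Kutta method.

```python
def rk_explicit(c, B, f, x0, y0, XM, N):
    """General explicit R-stage Runge-Kutta method (28).
    c[r] holds c_{r+1}; B[r][s] holds b_{rs} (strictly lower triangular),
    and a_r = sum_s b_{rs} as in (28)."""
    R = len(c)
    a = [sum(B[r]) for r in range(R)]
    h = (XM - x0) / N
    x, y = x0, y0
    for _ in range(N):
        k = []
        for r in range(R):
            k.append(f(x + h * a[r],
                       y + h * sum(B[r][s] * k[s] for s in range(r))))
        y = y + h * sum(c[r] * k[r] for r in range(R))
        x = x + h
    return y

# Classical 4-stage, fourth-order Runge-Kutta coefficients (an assumed
# example instance of the family, for illustration):
c4 = [1/6, 1/3, 1/3, 1/6]
B4 = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
y_rk4 = rk_explicit(c4, B4, lambda x, y: y, 0.0, 1.0, 1.0, 10)
# y_rk4 approximates e = 2.71828... very closely.
```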

One-stage Runge-Kutta methods

Suppose that R = 1. Then the resulting one-stage Runge–Kutta
method is simply Euler’s explicit method:

yn+1 = yn + hf(xn, yn).

Two-stage Runge-Kutta methods

Next, consider the case R = 2, corresponding to the following
family of methods:

yn+1 = yn + h(c1 k1 + c2 k2),

where

k1 = f(xn, yn),
k2 = f(xn + a2 h, yn + b21 hk1),

and where the parameters c1, c2, a2 and b21 are to be determined.



Consistency condition

By the consistency condition

Φ(xn, yn; 0) ≡ f(xn, yn),

we have

c1 f(xn, yn) + c2 f(xn, yn) = f(xn, yn)   =⇒   c1 + c2 = 1.



Order of accuracy

Further conditions on the parameters are obtained by attempting to
maximise the order of accuracy of the method. Indeed, expanding
the truncation error in powers of h, after some algebra we obtain

Tn = (1/2) h y″(xn) + (1/6) h² y‴(xn)
   − c2 h [ a2 fx + b21 fy f ]
   − c2 h² [ (1/2) a2² fxx + a2 b21 fxy f + (1/2) b21² fyy f² ] + O(h³).

Noting that

y″ = fx + fy f,

it follows that Tn = O(h²) for any f provided that

(1/2) h y″ − c2 h [ a2 fx + b21 fy f ] = 0   =⇒   a2 c2 = b21 c2 = 1/2.

Two-stage Runge-Kutta methods

This implies that if b21 = a2 , c2 = 1/(2a2 ) and c1 = 1 − 1/(2a2 ), then

the method is second-order accurate; this still leaves one
free parameter, a2 .
It is easy to see that no choice of the parameters will make
the method generally third-order accurate. There are two
well-known examples of second-order Runge-Kutta methods:
The modified Euler method: Taking a2 = 1/2 we have

yn+1 = yn + hf (xn + h/2, yn + (h/2) f (xn , yn )).

The improved Euler method: By choosing a2 = 1 we have

yn+1 = yn + (h/2) [f (xn , yn ) + f (xn + h, yn + hf (xn , yn ))].
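Both methods can be sketched in a few lines of Python. This is an illustrative check (the model problem y' = −y, y(0) = 1 is my own choice): halving h should shrink the error by roughly a factor of 4, as expected of second-order methods.

```python
import math

def modified_euler_step(f, x, y, h):
    # y_{n+1} = y_n + h f(x_n + h/2, y_n + (h/2) f(x_n, y_n))
    return y + h * f(x + h / 2, y + (h / 2) * f(x, y))

def improved_euler_step(f, x, y, h):
    # y_{n+1} = y_n + (h/2)[f(x_n, y_n) + f(x_n + h, y_n + h f(x_n, y_n))]
    k1 = f(x, y)
    k2 = f(x + h, y + h * k1)
    return y + (h / 2) * (k1 + k2)

def integrate(step, f, y0, h, n):
    x, y = 0.0, y0
    for _ in range(n):
        y = step(f, x, y, h)
        x += h
    return y

f = lambda x, y: -y
exact = math.exp(-1.0)
for step in (modified_euler_step, improved_euler_step):
    e_h = abs(integrate(step, f, 1.0, 0.1, 10) - exact)
    e_half = abs(integrate(step, f, 1.0, 0.05, 20) - exact)
    print(step.__name__, e_h / e_half)  # ratio close to 4 => O(h^2)
```

(For this linear f the two updates happen to coincide; for nonlinear f they differ, as the truncation errors on the next slide show.)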

Truncation errors of these two methods

For these two methods it is easily verified by Taylor series


expansion that the truncation error is of the form, respectively,
Tn = (1/6) h^2 [fy F1 + (1/4) F2 ] + O(h^3),

Tn = (1/6) h^2 [fy F1 − (1/2) F2 ] + O(h^3),

where

F1 = fx + f fy and F2 = fxx + 2f fxy + f^2 fyy .

Exercise
Let α be a non-zero real number and let xn = a + nh, n = 0, · · · , N, be a uniform mesh
on the interval [a, b] of step size h = (b − a)/N. Consider the explicit one-step method for the
numerical solution of the initial value problem y' = f (x, y), y(a) = y0 , which determines
approximations yn to the values y(xn ) from the recurrence relation

yn+1 = yn + h(1 − α)f (xn , yn ) + hαf (xn + h/(2α), yn + (h/(2α)) f (xn , yn )).

Show that this method is consistent and that its truncation error, Tn (h, α), can be
expressed as

Tn (h, α) = (h^2/(8α)) [((4/3)α − 1) y'''(xn ) + y''(xn ) ∂f/∂y (xn , y(xn ))] + O(h^3).

This numerical method is applied to the initial value problem y' = −y^p , y(0) = 1, where
p is a positive integer. Show that if p = 1 then Tn (h, α) = O(h^2) for every non-zero real
number α. Show also that if p ≥ 2 then there exists a non-zero real number α0 such that
Tn (h, α0 ) = O(h^3).

Three-stage Runge-Kutta methods

Let us now suppose that R = 3 to illustrate the general idea.


Thus, we consider the family of methods:

yn+1 = yn + h [c1 k1 + c2 k2 + c3 k3 ] ,

where

k1 = f (x, y ),
k2 = f (x + ha2 , y + hb21 k1 ),
k3 = f (x + ha3 , y + hb31 k1 + hb32 k2 ),
a2 = b21 , a3 = b31 + b32 .

Three-stage Runge-Kutta methods

Writing b21 = a2 and b31 = a3 − b32 in the definitions of k2 and k3


respectively and expanding k2 and k3 into Taylor series about the
point (x, y ) yields:

k2 = f + ha2 (fx + k1 fy ) + (1/2) h^2 a2^2 (fxx + 2k1 fxy + k1^2 fyy ) + O(h^3)
   = f + ha2 (fx + f fy ) + (1/2) h^2 a2^2 (fxx + 2f fxy + f^2 fyy ) + O(h^3)
   = f + ha2 F1 + (1/2) h^2 a2^2 F2 + O(h^3),

where

F1 = fx + f fy and F2 = fxx + 2f fxy + f^2 fyy ,

Three-stage Runge-Kutta methods

and

k3 = f + h {a3 fx + [(a3 − b32 )k1 + b32 k2 ]fy }
     + (1/2) h^2 {a3^2 fxx + 2a3 [(a3 − b32 )k1 + b32 k2 ]fxy
     + [(a3 − b32 )k1 + b32 k2 ]^2 fyy } + O(h^3)
   = f + ha3 F1 + h^2 (a2 b32 F1 fy + (1/2) a3^2 F2 ) + O(h^3).

Substituting these expressions for k2 and k3 with R = 3 we find
that

Φ(x, y , h) = (c1 + c2 + c3 )f + h(c2 a2 + c3 a3 )F1
              + h^2 (c3 a2 b32 F1 fy + (1/2)(c2 a2^2 + c3 a3^2 )F2 ) + O(h^3).

Three-stage Runge-Kutta methods

We match this with the Taylor series expansion:


(y(x + h) − y(x))/h = y'(x) + (1/2) h y''(x) + (1/6) h^2 y'''(x) + O(h^3)
                    = f + (1/2) h F1 + (1/6) h^2 (F1 fy + F2 ) + O(h^3).

This yields:

c1 + c2 + c3 = 1,
c2 a2 + c3 a3 = 1/2,
c2 a2^2 + c3 a3^2 = 1/3,
c3 a2 b32 = 1/6.
Solving this system of 4 equations in 6 unknowns results in a
2-parameter family of 3-stage Runge-Kutta methods.
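One classical member of this family (my example, not singled out in the slides) is Kutta's third-order method, with a2 = 1/2, a3 = 1, b32 = 2 and weights c = (1/6, 2/3, 1/6). The short Python check below confirms that these coefficients satisfy all four order conditions exactly:

```python
from fractions import Fraction as F

# Kutta's third-order method: a2 = 1/2, a3 = 1, b32 = 2,
# weights c = (1/6, 2/3, 1/6); note b31 = a3 - b32 = -1.
c1, c2, c3 = F(1, 6), F(2, 3), F(1, 6)
a2, a3, b32 = F(1, 2), F(1), F(2)

assert c1 + c2 + c3 == 1              # consistency
assert c2 * a2 + c3 * a3 == F(1, 2)   # second-order condition
assert c2 * a2**2 + c3 * a3**2 == F(1, 3)
assert c3 * a2 * b32 == F(1, 6)       # third-order conditions
print("all four order conditions hold")
```

Exact rational arithmetic (fractions.Fraction) avoids any floating-point ambiguity in the check.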

Linear multi-step methods

While Runge-Kutta methods present an improvement over Euler's


method in terms of accuracy, this is achieved by investing
additional computational effort; in fact, Runge-Kutta methods
require more evaluations of f (·, ·) than would seem necessary. For
example, the fourth-order method involves four function
evaluations per step. For comparison, by considering three
consecutive points xn−1 , xn = xn−1 + h, xn+1 = xn−1 + 2h,
integrating the differential equation between xn−1 and xn+1 , and
applying Simpson's rule to approximate the resulting integral yields

y(xn+1 ) = y(xn−1 ) + ∫_{xn−1}^{xn+1} f (x, y(x)) dx
         ≈ y(xn−1 ) + (1/3) h [f (xn−1 , y(xn−1 )) + 4f (xn , y(xn )) + f (xn+1 , y(xn+1 ))]

Linear multi-step methods

which leads to the method


yn+1 = yn−1 + (1/3) h [f (xn−1 , yn−1 ) + 4f (xn , yn ) + f (xn+1 , yn+1 )].   (29)
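Since (29) is implicit in yn+1, each step requires solving a (generally nonlinear) equation. The sketch below is my own illustration, not from the slides: it resolves the implicit relation by simple fixed-point iteration on the model problem y' = −y, y(0) = 1, with y1 taken from the exact solution (the two-step scheme needs two starting values).

```python
import math

def simpson_step(f, x_prev, y_prev, x_mid, y_mid, h, iters=20):
    """One step of (29): solve y_new = known + (h/3) f(x_new, y_new)
    by fixed-point iteration (converges for small enough h)."""
    known = y_prev + (h / 3) * (f(x_prev, y_prev) + 4 * f(x_mid, y_mid))
    y_new = y_mid  # initial guess for the implicit unknown
    for _ in range(iters):
        y_new = known + (h / 3) * f(x_mid + h, y_new)
    return y_new

f = lambda x, y: -y
h, N = 0.05, 20                     # integrate to x = N*h = 1
ys = [1.0, math.exp(-h)]            # y0 exact, y1 taken from exact solution
for n in range(1, N):
    ys.append(simpson_step(f, (n - 1) * h, ys[n - 1], n * h, ys[n], h))
print(abs(ys[N] - math.exp(-1.0)))  # very small: the rule is O(h^4) globally
```

In practice the implicit equation would be solved by Newton's method or a predictor-corrector pairing rather than plain iteration; this sketch only demonstrates the scheme itself.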

Linear multi-step methods

The Adams-Bashforth method: an explicit linear four-step

method:

yn+4 = yn+3 + (1/24) h [55fn+3 − 59fn+2 + 37fn+1 − 9fn ]

The Adams-Moulton method: an implicit linear four-step
method:

yn+4 = yn+3 + (1/24) h [9fn+4 + 19fn+3 − 5fn+2 + fn+1 ].
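As an illustrative sketch (my own example: the four starting values y0..y3 are taken from the exact solution for simplicity, though in practice a one-step method would supply them), the explicit Adams-Bashforth method can be driven in Python as follows:

```python
import math

def adams_bashforth4(f, ys, h, x0, n_steps):
    """Extend the list ys (which must hold y_0..y_3) by n_steps values using
    y_{n+4} = y_{n+3} + (h/24)(55 f_{n+3} - 59 f_{n+2} + 37 f_{n+1} - 9 f_n)."""
    fs = [f(x0 + i * h, y) for i, y in enumerate(ys)]
    for _ in range(n_steps):
        y_next = ys[-1] + (h / 24) * (55 * fs[-1] - 59 * fs[-2]
                                      + 37 * fs[-3] - 9 * fs[-4])
        ys.append(y_next)
        fs.append(f(x0 + (len(ys) - 1) * h, y_next))
    return ys

f = lambda x, y: -y
h = 0.05
start = [math.exp(-i * h) for i in range(4)]  # exact starting values y0..y3
ys = adams_bashforth4(f, start, h, 0.0, 17)   # reach x = 20*h = 1
print(abs(ys[20] - math.exp(-1.0)))           # small: O(h^4) global error
```

Note the efficiency gain over Runge-Kutta: only one new evaluation of f per step, with the three previous values reused.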
