WBMT2049-T2/WI2032TH - Numerical Analysis For ODE's

This document discusses numerical differentiation and the approximation of derivatives using finite differences. It examines forward, backward and central difference formulas and their truncation errors, and shows that the central difference formula provides a higher-order approximation with error O(h^2) compared to O(h) for forward/backward differences. Round-off error in the central difference formula implementation is also analyzed.

WBMT2049-T2/WI2032TH – Numerical Analysis for ODE’s

Lecture 3

numerical differentiation

Lecture 3 1 / 40
first derivative

Definition of the first derivative

Let f(x) be right- and left-differentiable at some point x. Then, with h > 0,

  f′_R(x) = lim_{h→0} [f(x + h) − f(x)] / h   (right first derivative)

but also

  f′_L(x) = lim_{h→0} [f(x) − f(x − h)] / h   (left first derivative)

If f(x) is differentiable at x, then

  f′_R(x) = f′_L(x) = f′(x)   (first derivative)
Forward and backward differences

With h > 0:

  Q_F(h) = [f(x + h) − f(x)] / h   (forward difference)

  Q_B(h) = [f(x) − f(x − h)] / h   (backward difference)
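Both formulas translate directly into code. A minimal Python sketch (the choice f(x) = eˣ at x = 1 and the step size are illustrative assumptions, not from the slides):

```python
import math

def forward_diff(f, x, h):
    # Q_F(h) = (f(x + h) - f(x)) / h
    return (f(x + h) - f(x)) / h

def backward_diff(f, x, h):
    # Q_B(h) = (f(x) - f(x - h)) / h
    return (f(x) - f(x - h)) / h

# Example: f(x) = e^x, so f'(1) = e
qf = forward_diff(math.exp, 1.0, 1e-5)
qb = backward_diff(math.exp, 1.0, 1e-5)
# both agree with e to roughly 4 decimal places for this h
```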
Approximation of derivative by forward difference

The idea of the finite-difference approximation:

  f′(x) ≈ Q_F(h)

The truncation error (Dutch: afbreekfout) of the forward-difference approximation of f′(x) is defined as

  R_F(h) = f′(x) − Q_F(h) = f′(x) − [f(x + h) − f(x)] / h

How big is this error?
Forward-difference truncation error

Use the Taylor series to express f(x + h) via f(x) and its derivatives:

  f(x + h) = f(x) + f′(x)h + (f″(ξ)/2)h²,   ξ ∈ (x, x + h)

The truncation error:

  R_F(h) = f′(x) − Q_F(h) = f′(x) − [f(x + h) − f(x)] / h
         = f′(x) − (1/h)[ f(x) + f′(x)h + (f″(ξ)/2)h² − f(x) ]
         = −(f″(ξ)/2)h,   ξ ∈ (x, x + h)
Order of the forward-difference approximation

  Q_F(h) = [f(x + h) − f(x)] / h,
  R_F(h) = −(f″(ξ)/2)h = O(h)

  f′(x) = Q_F(h) + O(h)

The forward-difference formula Q_F(h) is an order-h approximation of f′(x). The truncation error R_F(h) goes to zero as fast as h does, i.e., linearly.

Is it possible to make a better approximation, with an error that goes to zero faster than h?
Central-difference approximation

  Q_C(h) = [f(x + h) − f(x − h)] / (2h)   (central difference)

  f′(x) = Q_C(h) + R_C(h)

What is the truncation error R_C(h)?
Central-difference truncation error
Use the Taylor series for f(x + h) and f(x − h):

  f(x + h) = f(x) + f′(x)h + (f″(x)/2)h² + (f‴(ξ₊)/6)h³,   ξ₊ ∈ (x, x + h)
  f(x − h) = f(x) − f′(x)h + (f″(x)/2)h² − (f‴(ξ₋)/6)h³,   ξ₋ ∈ (x − h, x)

The truncation error:

  R_C(h) = f′(x) − Q_C(h) = f′(x) − [f(x + h) − f(x − h)] / (2h)
         = f′(x) − (1/(2h))[ f(x) + f′(x)h + (f″(x)/2)h² + (f‴(ξ₊)/6)h³
                             − f(x) + f′(x)h − (f″(x)/2)h² + (f‴(ξ₋)/6)h³ ]
Central-difference truncation error

  R_C(h) = −[(f‴(ξ₊) + f‴(ξ₋))/12] h²,   ξ₊ ∈ (x, x + h), ξ₋ ∈ (x − h, x)

Assuming that f(x) ∈ C³(x − h, x + h), we can make this formula simpler, using the Intermediate Value Theorem. The number (f‴(ξ₊) + f‴(ξ₋))/2 is the average of f‴(ξ₊) and f‴(ξ₋). Hence, it lies between f‴(ξ₊) and f‴(ξ₋). Hence, there is ξ ∈ (ξ₋, ξ₊) such that

  f‴(ξ) = (f‴(ξ₊) + f‴(ξ₋))/2

Thus,

  R_C(h) = −(f‴(ξ)/6)h²,   ξ ∈ (x − h, x + h)
Order of the central-difference approximation

  Q_C(h) = [f(x + h) − f(x − h)] / (2h),
  R_C(h) = −(f‴(ξ)/6)h² = O(h²)

  f′(x) = Q_C(h) + O(h²)

The central-difference formula Q_C(h) is an order-h² approximation of f′(x). The truncation error R_C(h) goes to zero as fast as h² does, i.e., quadratically, faster than h.

Is it possible to make an even better approximation, with an error that goes to zero faster than h²?
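The two convergence orders can be observed numerically. In this sketch (f(x) = sin x at x = 1 is an assumed test case), halving h roughly halves the forward-difference error but quarters the central-difference error:

```python
import math

def forward_diff(f, x, h):
    # Q_F(h), first-order accurate
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # Q_C(h), second-order accurate
    return (f(x + h) - f(x - h)) / (2 * h)

x, exact = 1.0, math.cos(1.0)      # f(x) = sin(x), so f'(1) = cos(1)
ef1 = abs(forward_diff(math.sin, x, 0.1) - exact)
ef2 = abs(forward_diff(math.sin, x, 0.05) - exact)   # about ef1 / 2
ec1 = abs(central_diff(math.sin, x, 0.1) - exact)
ec2 = abs(central_diff(math.sin, x, 0.05) - exact)   # about ec1 / 4
```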
Round-off error of the central-difference formula
On the computer we also have round-off errors. Let ỹ = fl(y).

  Q̃_C = [f̃(x + h) − f̃(x − h)] / (2h̃)

Using the expressions

  f̃(x + h) = f(x + h)(1 + ε₊),
  f̃(x − h) = f(x − h)(1 + ε₋),
  h̃ = h(1 + ε_h),

we get

  Q̃_C = ( [f(x + h) − f(x − h)]/(2h) + [f(x + h)ε₊]/(2h) − [f(x − h)ε₋]/(2h) ) · 1/(1 + ε_h)
Round-off error of the central-difference formula

 
  Q̃_C = ( Q_C + [f(x + h)ε₊]/(2h) − [f(x − h)ε₋]/(2h) ) · 1/(1 + ε_h)

To simplify this we use the Taylor series for (1 + x)⁻¹ around x = 0:

  1/(1 + ε_h) = 1 − ε_h + O(ε_h²)

  Q̃_C = Q_C (1 − ε_h + O(ε_h²)) + [f(x + h)ε₊/(2h)] (1 − ε_h + O(ε_h²))
        − [f(x − h)ε₋/(2h)] (1 − ε_h + O(ε_h²))
Round-off error depends on h

  Q̃_C − Q_C = Q_C (−ε_h + O(ε_h²)) + [f(x + h)ε₊/(2h)] (1 − ε_h + O(ε_h²))
              − [f(x − h)ε₋/(2h)] (1 − ε_h + O(ε_h²))
            = (1/h)[ h Q_C (−ε_h + O(ε_h²)) + (f(x + h)/2)ε₊ (1 − ε_h + O(ε_h²))
                     − (f(x − h)/2)ε₋ (1 − ε_h + O(ε_h²)) ],

so the round-off error satisfies

  |Q̃_C − Q_C| = ε̂/h,

where

  ε̂ = | −h Q_C ε_h + (f(x + h)/2)ε₊ − (f(x − h)/2)ε₋ | + O(ε_m²),

with ε_m the machine precision.
The total error of the central-difference approximation

In exact arithmetic the central-difference formula Q_C(h) introduces only the truncation error. In finite-precision floating-point arithmetic, we have both truncation and round-off errors. The total error is bounded by:

  |E_C(h)| = |f′(x) − Q̃_C(h)| ≤ |f′(x) − Q_C(h)| + |Q̃_C(h) − Q_C(h)|
           = (|f‴(ξ)|/6)h² + ε̂/h ≤ (M/6)h² + ε̂/h,

where we assume that

  |f‴(ξ)| ≤ M for all ξ ∈ (x − h, x + h)
Minimal useful step size
From the formula

  |E_C(h)| ≤ (M/6)h² + ε̂/h = φ(h),

it follows that the upper bound φ(h) on the total error |E_C(h)| has a minimal value φ(h_opt), and it probably does not make sense to decrease h below h_opt.

  φ′(h) = (M/3)h − ε̂/h² = 0,

  h_opt = (3ε̂/M)^(1/3)

  |E_C(h_opt)| ≤ (9Mε̂²/8)^(1/3)
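A rough numerical illustration of h_opt. The values M = 1 (a bound on |f‴| for f = sin) and ε̂ of the order of machine epsilon are assumptions for this example, not prescribed by the slides:

```python
import math

# phi(h) = (M/6) h^2 + eps_hat / h is the total-error bound for the
# central difference; its minimizer is h_opt = (3 eps_hat / M)^(1/3)
f, x = math.sin, 1.0
exact = math.cos(x)
M, eps_hat = 1.0, 2.0**-52                    # assumed illustration values
h_opt = (3.0 * eps_hat / M) ** (1.0 / 3.0)    # root of phi'(h) = 0, ~ 1e-5

def central(h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

err_opt = abs(central(h_opt) - exact)   # near the minimum of the error bound
err_big = abs(central(0.1) - exact)     # truncation-dominated, much larger
```

Decreasing h far below h_opt makes the computed error grow again, because the round-off term ε̂/h then dominates.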
general approach to difference formulas

It is intuitively clear that the difference formulas approximating the first derivative have the form

  Q(h) = [α₁f(x₁) + α₂f(x₂) + · · · + αₙf(xₙ)] / h

We only need to choose xᵢ and αᵢ to maximize the order of the truncation error.
Uniform grid
Consider a uniform grid with step h in the neighborhood of the point x:

  . . . , x − 3h, x − 2h, x − h, x, x + h, x + 2h, x + 3h, . . .

The corresponding function values (data points) are:

  . . . , f(x − 3h), f(x − 2h), f(x − h), f(x), f(x + h), f(x + 2h), f(x + 3h), . . .

With three data points one has various possibilities, e.g.:

  Q(h) = [α₋₁f(x − h) + α₀f(x) + α₁f(x + h)] / h,
  Q(h) = [α₋₂f(x − 2h) + α₋₁f(x − h) + α₀f(x)] / h,
  Q(h) = [α₀f(x) + α₁f(x + h) + α₂f(x + 2h)] / h, etc.,

where the αᵢ still need to be determined.
Central-difference formula again

Let’s see how this works with the central-difference choice of data points:

  Q(h) = [α₋₁f(x − h) + α₀f(x) + α₁f(x + h)] / h

Using the Taylor series for all shifted data points, keeping as many terms as there are data points:

  f(x − h) = f(x) − f′(x)h + (f″(x)/2)h² + O(h³)
  f(x + h) = f(x) + f′(x)h + (f″(x)/2)h² + O(h³)
Central-difference formula again
Substituting these we get:

  Q(h) = (1/h) α₋₁ [ f(x) − f′(x)h + (f″(x)/2)h² + O(h³) ]
         + (1/h) α₀ f(x)
         + (1/h) α₁ [ f(x) + f′(x)h + (f″(x)/2)h² + O(h³) ]

Now, assemble the coefficients of all derivatives separately:

  Q(h) = ( (1/h)α₋₁ + (1/h)α₀ + (1/h)α₁ ) f(x)
         + ( −α₋₁ + α₁ ) f′(x)
         + ( (h/2)α₋₁ + (h/2)α₁ ) f″(x) + O(h²)
Central-difference formula again
We want the difference formula Q(h) to approximate f′(x). Hence, we want

  Q(h) = ( (1/h)α₋₁ + (1/h)α₀ + (1/h)α₁ ) f(x)
         + ( −α₋₁ + α₁ ) f′(x)
         + ( (h/2)α₋₁ + (h/2)α₁ ) f″(x) + O(h²)
       = (0)f(x) + (1)f′(x) + (0)f″(x) + O(h²)

This leads to the algebraic equations:

  (1/h)α₋₁ + (1/h)α₀ + (1/h)α₁ = 0,
  −α₋₁ + α₁ = 1,
  (h/2)α₋₁ + (h/2)α₁ = 0

Notice that we have three equations and three unknowns.
Central-difference formula again

Solving the linear system we get:

  α₋₁ = −1/2,
  α₀ = 0,
  α₁ = 1/2,

which leads to the central-difference formula:

  f′(x) = [−f(x − h) + f(x + h)] / (2h) + O(h²)

How many data points do you need to derive an O(h³) difference formula for f′(x)?
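The 3×3 system can also be solved mechanically. A sketch in exact rational arithmetic; the system below is the one from this derivation with the first equation multiplied by h and the third by 2/h, so that h drops out:

```python
from fractions import Fraction as F

# Conditions on (a_{-1}, a_0, a_1), after clearing the factors of h:
#   a_{-1} + a_0 + a_1 = 0   (coefficient of f(x) must vanish)
#  -a_{-1}       + a_1 = 1   (coefficient of f'(x) must equal 1)
#   a_{-1}       + a_1 = 0   (coefficient of f''(x) must vanish)
A = [[F(1), F(1), F(1)],
     [F(-1), F(0), F(1)],
     [F(1), F(0), F(1)]]
b = [F(0), F(1), F(0)]

def solve3(A, b):
    # Gauss-Jordan elimination in exact rational arithmetic
    n = len(b)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for i in range(n):
        p = next(r for r in range(i, n) if M[r][i] != 0)   # pivot row
        M[i], M[p] = M[p], M[i]
        for r in range(n):
            if r != i and M[r][i] != 0:
                factor = M[r][i] / M[i][i]
                M[r] = [a - factor * c for a, c in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

coeffs = solve3(A, b)    # the central-difference weights -1/2, 0, 1/2
```

The same elimination routine works unchanged for the other coefficient systems in this lecture.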
Higher-accuracy forward difference formula

Let’s use three ”forward” data points to derive a higher-order formula:

  Q(h) = [α₀f(x) + α₁f(x + h) + α₂f(x + 2h)] / h

We need these Taylor series:

  f(x + h) = f(x) + f′(x)h + (f″(x)/2)h² + O(h³)
  f(x + 2h) = f(x) + f′(x)(2h) + (f″(x)/2)(2h)² + O(h³)
Higher-accuracy forward difference formula
Substituting and grouping the terms we get:

  Q(h) = ( (1/h)α₀ + (1/h)α₁ + (1/h)α₂ ) f(x)
         + ( α₁ + 2α₂ ) f′(x)
         + ( (h/2)α₁ + 2hα₂ ) f″(x) + O(h²)

The requirement Q(h) = (0)f(x) + (1)f′(x) + (0)f″(x) + O(h²) leads to the algebraic equations:

  (1/h)α₀ + (1/h)α₁ + (1/h)α₂ = 0,
  α₁ + 2α₂ = 1,
  (h/2)α₁ + 2hα₂ = 0,

with the solution: α₀ = −3/2, α₁ = 2, α₂ = −1/2.
Higher-accuracy forward difference formula

Thus, we obtain an O(h²) three-point forward-difference formula:

  Q(h) = [−3f(x) + 4f(x + h) − f(x + 2h)] / (2h) = f′(x) + O(h²)
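A quick numerical check of the second-order behavior of this one-sided formula (f(x) = sin x at x = 1 is an assumed test case):

```python
import math

def forward3(f, x, h):
    # Q(h) = (-3 f(x) + 4 f(x + h) - f(x + 2h)) / (2h)
    return (-3 * f(x) + 4 * f(x + h) - f(x + 2 * h)) / (2 * h)

x, exact = 1.0, math.cos(1.0)    # f = sin, f'(1) = cos(1)
e1 = abs(forward3(math.sin, x, 0.1) - exact)
e2 = abs(forward3(math.sin, x, 0.05) - exact)
# halving h cuts the error by roughly a factor of four (second order)
```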
second derivative

Deriving a difference formula for f 00 (x)
Let us use the general approach:

Since it is f″(x), we divide by h², the multiplier next to f″(x) in the Taylor series.

To get to f″(x) we need to keep at least three terms of the Taylor series. Hence, we also need to use at least three data points.

To have an O(h²) formula, we actually need to keep four terms of the Taylor series, since O(h⁴)/h² = O(h²).

With the choice

  Q(h) = [α₋₁f(x − h) + α₀f(x) + α₁f(x + h)] / h²,

we need the Taylor series:

  f(x − h) = f(x) − f′(x)h + (f″(x)/2)h² − (f‴(x)/6)h³ + O(h⁴)
  f(x + h) = f(x) + f′(x)h + (f″(x)/2)h² + (f‴(x)/6)h³ + O(h⁴)
Deriving a difference formula for f 00 (x)

Substituting and collecting the terms we get:

  Q(h) = ( (1/h²)α₋₁ + (1/h²)α₀ + (1/h²)α₁ ) f(x)
         + ( −(1/h)α₋₁ + (1/h)α₁ ) f′(x)
         + ( (1/2)α₋₁ + (1/2)α₁ ) f″(x)
         + ( −(h/6)α₋₁ + (h/6)α₁ ) f‴(x) + O(h²)

Now we require that

  Q(h) = (0)f(x) + (0)f′(x) + (1)f″(x) + (0)f‴(x) + O(h²) = f″(x) + O(h²)
Deriving a difference formula for f 00 (x)
This leads to the algebraic equations:

  (1/h²)α₋₁ + (1/h²)α₀ + (1/h²)α₁ = 0,
  −(1/h)α₋₁ + (1/h)α₁ = 0,
  (1/2)α₋₁ + (1/2)α₁ = 1,
  −(h/6)α₋₁ + (h/6)α₁ = 0,

which are solved by:

  α₋₁ = 1,
  α₀ = −2,
  α₁ = 1.

From the point of view of linear algebra, we are lucky to have a solution. Why?
A difference formula for f 00 (x)

Hence,

  Q(h) = [f(x − h) − 2f(x) + f(x + h)] / h² = f″(x) + O(h²),

or

  f″(x) = [f(x − h) − 2f(x) + f(x + h)] / h² + O(h²),

and we have obtained an O(h²) central-difference formula for f″(x).
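The same kind of numerical check works for the second-derivative formula (again with the assumed test case f(x) = sin x at x = 1, so f″(1) = −sin 1):

```python
import math

def second_central(f, x, h):
    # (f(x - h) - 2 f(x) + f(x + h)) / h^2 = f''(x) + O(h^2)
    return (f(x - h) - 2 * f(x) + f(x + h)) / h**2

x, exact = 1.0, -math.sin(1.0)   # f = sin, f''(1) = -sin(1)
e1 = abs(second_central(math.sin, x, 0.1) - exact)
e2 = abs(second_central(math.sin, x, 0.05) - exact)
# the error ratio e1 / e2 is close to 4, confirming O(h^2)
```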
Determining the truncation error of a given formula

After this derivation we only know that the difference formula is O(h²). To determine the exact form of the truncation error we need to compute:

  f″(x) − Q(h) = f″(x) − (1/h²)[ f(x − h) − 2f(x) + f(x + h) ]

Use the Taylor series for the shifted data points with explicit remainders:

  f(x − h) = f(x) − f′(x)h + (f″(x)/2)h² − (f‴(x)/6)h³ + (f⁽⁴⁾(ξ₋)/24)h⁴
  f(x + h) = f(x) + f′(x)h + (f″(x)/2)h² + (f‴(x)/6)h³ + (f⁽⁴⁾(ξ₊)/24)h⁴
Determining the truncation error of a given formula

Substitution gives

  f″(x) − Q(h) = f″(x) − (1/h²)[ f(x) − f′(x)h + (f″(x)/2)h² − (f‴(x)/6)h³ + (f⁽⁴⁾(ξ₋)/24)h⁴
                                 − 2f(x)
                                 + f(x) + f′(x)h + (f″(x)/2)h² + (f‴(x)/6)h³ + (f⁽⁴⁾(ξ₊)/24)h⁴ ]
               = −[(f⁽⁴⁾(ξ₋) + f⁽⁴⁾(ξ₊))/2] · (h²/12) = −(f⁽⁴⁾(ξ)/12)h²,   ξ ∈ (x − h, x + h),

where we have used the Intermediate Value Theorem.
Richardson’s extrapolation method

General expression for the truncation error

We have seen that the truncation error of finite-difference formulas for the derivatives has the form

  M − Q(h) = c_p(h) hᵖ,

where, for example, M = f′(x), c_p(h) = −f″(ξ)/2 with ξ ∈ (x, x + h), and p = 1 for the forward-difference approximation of f′(x). We see that, in general, ξ depends on h.

If we use a c_p that does not depend on h, then the truncation error can be written as

  M − Q(h) = c_p hᵖ + O(hᵖ⁺¹),

where we assume the existence of the Taylor series (higher derivatives of the quantity M).
Idea of Richardson’s extrapolation
The idea of Richardson’s extrapolation is to use several estimates Q(h) with different h’s and determine the unknown parameters c_p and p of the error term numerically. This gives a good, O(hᵖ⁺¹), estimate of the truncation error and even allows us to improve (extrapolate) the formula M ≈ Q(h), which is O(hᵖ), to the formula M ≈ Q(h) + c_p hᵖ, which is O(hᵖ⁺¹).
Determining the parameters of the Richardson extrapolation

Use the formula Q(h) to produce three estimates of the unknown M, e.g.: Q(h), Q(2h), Q(4h). Neglecting the O(hᵖ⁺¹) term we obtain three equations:

  M − Q(h) = c_p hᵖ,
  M − Q(2h) = c_p (2h)ᵖ = c_p 2ᵖ hᵖ,
  M − Q(4h) = c_p (4h)ᵖ = c_p 2ᵖ 2ᵖ hᵖ.

This leads to:

  Q(2h) − Q(4h) = c_p hᵖ (2ᵖ − 1) 2ᵖ,
  Q(h) − Q(2h) = c_p hᵖ (2ᵖ − 1),

and

  [Q(2h) − Q(4h)] / [Q(h) − Q(2h)] = 2ᵖ
Determining the parameters of the Richardson extrapolation

From

  Q(h) − Q(2h) = c_p hᵖ (2ᵖ − 1),

we obtain

  c_p hᵖ = [Q(h) − Q(2h)] / (2ᵖ − 1)
         = [Q(h) − Q(2h)] / ( [Q(2h) − Q(4h)] / [Q(h) − Q(2h)] − 1 )
         = [Q(h) − Q(2h)]² / ( 2Q(2h) − Q(4h) − Q(h) )

Thus,

  M − Q(h) = [Q(h) − Q(2h)] / (2ᵖ − 1) + O(hᵖ⁺¹),

which is useful if we happen to know p. Alternatively,

  M − Q(h) = [Q(h) − Q(2h)]² / ( 2Q(2h) − Q(4h) − Q(h) ) + O(hᵖ⁺¹)
Extrapolating Q(h) to a higher order
Extrapolation of Q(h) can be obtained as:

  M = Q(h) + [Q(h) − Q(2h)] / (2ᵖ − 1) + O(hᵖ⁺¹)

Consider the forward-difference formula for f′(x):

  f′(x) = Q(h) + O(h) = [f(x + h) − f(x)] / h + O(h)

From O(h), we know that p = 1 for this Q(h). Extrapolating:

  f′(x) = Q(h) + [Q(h) − Q(2h)] / (2¹ − 1) + O(h¹⁺¹)
        = [f(x + h) − f(x)]/h + [f(x + h) − f(x)]/h − [f(x + 2h) − f(x)]/(2h) + O(h²)
        = [−3f(x) + 4f(x + h) − f(x + 2h)] / (2h) + O(h²).

Previously, we derived this O(h²) formula via a different approach.
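The whole procedure can be tried numerically with the forward difference (f(x) = sin x at x = 1 and h = 0.1 are assumed example values): estimate p from three estimates Q(h), Q(2h), Q(4h), then extrapolate:

```python
import math

def qf(f, x, h):
    # forward difference, order p = 1
    return (f(x + h) - f(x)) / h

f, x, h = math.sin, 1.0, 0.1
q1, q2, q4 = qf(f, x, h), qf(f, x, 2 * h), qf(f, x, 4 * h)

# estimate the order p from (Q(2h) - Q(4h)) / (Q(h) - Q(2h)) = 2^p
p_est = math.log2((q2 - q4) / (q1 - q2))      # close to 1

# extrapolate using the known p = 1
extrapolated = q1 + (q1 - q2) / (2**1 - 1)
exact = math.cos(x)
# the extrapolated value is much closer to cos(1) than q1 is
```

The extrapolated value agrees, up to rounding, with what the O(h²) three-point forward formula gives directly, as the derivation above predicts.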
Questions

Compute the truncation error R_B(h) of the backward-difference approximation Q_B(h) of the first derivative f′(x).

Compute the minimal useful step size h_opt and the optimal upper bound on the total error |E_B(h_opt)| for the backward-difference approximation Q̃_B(h) in finite-precision floating-point arithmetic.

Use the general approach to derive a difference formula approximating f′(x) to O(h³).

Let f(x) = sin(x). Compute f′(1) using the central-difference formula with h = 0.1. Use Richardson’s extrapolation to estimate the truncation error and to derive a better approximation for f′(1).
