Numerical Differentiation and Differential Equations

This document discusses numerical differentiation using finite difference methods. It introduces forward, backward and central difference approximations of the first and second derivatives using Taylor series expansions. Examples are provided to demonstrate applying these methods and estimating errors. The section also covers extrapolation techniques to improve the accuracy of derivative estimates from evenly spaced data.


Chapter 6

Numerical differentiation

6.1 Finite Difference Methods (FDMs)


6.1.1 FDMs by Taylor’s expansion
Consider the Taylor expansions of a function

$$f(x+h) = f(x) + hf'(x) + \frac{h^2}{2!}f''(x) + \frac{h^3}{3!}f'''(x) + \frac{h^4}{4!}f^{(4)}(x) + \cdots \tag{6.1}$$

$$f(x-h) = f(x) - hf'(x) + \frac{h^2}{2!}f''(x) - \frac{h^3}{3!}f'''(x) + \frac{h^4}{4!}f^{(4)}(x) - \cdots \tag{6.2}$$
Rearranging Eq. (6.1) gives

$$f'(x) = \frac{f(x+h) - f(x)}{h} - \frac{h}{2!}f''(x) - \frac{h^2}{3!}f'''(x) - \cdots$$

$$f'(x) = \frac{f(x+h) - f(x)}{h} + O(h). \tag{6.3}$$
This latter equation is the Forward Difference approximation to f 0 (x).
Rearranging Eq. (6.2) gives

$$f'(x) = \frac{f(x) - f(x-h)}{h} + \frac{h}{2!}f''(x) - \frac{h^2}{3!}f'''(x) + \cdots$$

$$f'(x) = \frac{f(x) - f(x-h)}{h} + O(h).$$
This equation is the Backward Difference approximation to f 0 (x).
Subtracting Eq. (6.2) from Eq. (6.1) and rearranging, we have

$$f'(x) = \frac{f(x+h) - f(x-h)}{2h} - \frac{h^2}{3!}f'''(x) - \frac{h^4}{5!}f^{(5)}(x) - \cdots,$$

$$f'(x) = \frac{f(x+h) - f(x-h)}{2h} + O(h^2). \tag{6.4}$$


Eq. (6.4) is called the Central Difference approximation to $f'(x)$. The parameter $h$ is called the step size.
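
These formulas translate directly into code. Below is a minimal Python sketch (the function names are our own, not from any library); applied to $f(x) = \ln x$ at $x = 1.8$ it reproduces the forward and central difference columns of the tables in the example below.

```python
import math

def forward_diff(f, x, h):
    # Forward difference, Eq. (6.3): first-order accurate, error O(h)
    return (f(x + h) - f(x)) / h

def backward_diff(f, x, h):
    # Backward difference: first-order accurate, error O(h)
    return (f(x) - f(x - h)) / h

def central_diff(f, x, h):
    # Central difference, Eq. (6.4): second-order accurate, error O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

for h in (0.1, 0.01, 0.001):
    print(h, forward_diff(math.log, 1.8, h),
          backward_diff(math.log, 1.8, h),
          central_diff(math.log, 1.8, h))
# Exact value: 1/1.8 = 0.5555...
```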

Example
Let $f(x) = \ln x$ and $x = x_0 = 1.8$. Approximate $f'(1.8)$ by the Forward Difference and estimate the error.

$$f'(1.8) \approx \frac{f(1.8+h) - f(1.8)}{h}, \qquad h > 0,$$

with error
$$\frac{|h f''(\xi)|}{2} = \frac{|h|}{2\xi^2} \le \frac{|h|}{2(1.8)^2}, \qquad \text{where } 1.8 < \xi < 1.8 + h.$$
The results are summarised in the following table for various values of $h$.

h        f(1.8+h)       [f(1.8+h) - f(1.8)]/h    |h|/(2(1.8)^2)
0.1      0.64185389     0.5406722                0.0154321
0.01     0.59332685     0.5540180                0.0015432
0.001    0.58834207     0.5554013                0.0001543

The exact value is $f'(1.8) = 0.5555\cdots$.
Using the central difference approximation, we have

$$f'(1.8) \approx \frac{f(1.8+h) - f(1.8-h)}{2h}, \qquad h > 0,$$

with error
$$\frac{h^2 |f^{(3)}(\xi)|}{3!} = \frac{h^2}{3\xi^3} \approx \frac{h^2}{3(1.8)^3}, \qquad \text{where } 1.8 - h < \xi < 1.8 + h.$$
h        f(1.8+h)       f(1.8-h)       [f(1.8+h) - f(1.8-h)]/(2h)    h^2/(3(1.8)^3)
0.1      0.64185389     0.53062825     0.5561282                     5.719 × 10^-4
0.01     0.59332685     0.58221562     0.5555615                     5.7155 × 10^-6
0.001    0.58834207     0.58723096     0.555555                      5.7155 × 10^-8
We can also obtain an approximation to the second derivative. For that we add Eqs. (6.1)
and (6.2):
$$f(x+h) + f(x-h) = 2f(x) + h^2 f''(x) + \frac{h^4}{12} f^{(4)}(x) + \cdots;$$

solving for $f''(x)$ we get

$$f''(x) = \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} - \frac{h^2}{12} f^{(4)}(x) - \frac{h^4}{360} f^{(6)}(x) - \cdots,$$

$$f''(x) = \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} + O(h^2).$$

This is the Central Difference approximation to $f''(x)$, which is second-order accurate.
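
A minimal Python sketch of this formula (the function name is again our own); for $f(x) = \ln x$ at $x = 1.8$ it reproduces the table in the next example.

```python
import math

def central_second_diff(f, x, h):
    # Central difference for f''(x): second-order accurate, error O(h^2)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

for h in (0.1, 0.01, 0.001):
    print(h, central_second_diff(math.log, 1.8, h))
# Exact value: f''(1.8) = -1/1.8**2 = -0.30864198...
```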

Example
Following the previous example, we approximate $f''(1.8)$ for $f(x) = \ln x$ and obtain the results given in the table below:

h        f(1.8+h)       f(1.8)         f(1.8-h)       [f(1.8+h) - 2f(1.8) + f(1.8-h)]/h^2    h^2/(2(1.8)^4)
0.1      0.64185389     0.58778666     0.53062825     -0.30911926                            4.76299 × 10^-4
0.01     0.59332685     0.58778666     0.58221562     -0.30864674                            4.76299 × 10^-6
0.001    0.58834207     0.58778666     0.58723096     -0.30864202                            4.76299 × 10^-8

The exact value is $f''(1.8) = -0.30864198$, correct to 8 significant digits.


For $h = 0.001$, the relative error is $1.3 \times 10^{-5}\,\%$ and the absolute error is $4 \times 10^{-8}$, which is below the bound $4.76299 \times 10^{-8}$ predicted by the theory.

6.1.2 Extrapolation Techniques


Extrapolation is a way to improve the accuracy of derivative estimates obtained from a table of evenly spaced data. We illustrate the method through the following example.
Example
Estimate the value of $f'(2.4)$ from the following data

x_i    f_i
2.0 0.123060
2.1 0.105706
2.2 0.089584
2.3 0.074764
2.4 0.061277
2.5 0.049126
2.6 0.038288
2.7 0.028722
2.8 0.020371
Using central differences with $h = 0.1$, we have
$$f'(2.4) = \frac{f(2.5) - f(2.3)}{2(0.1)} + O(h^2) = -0.12819 + C_1 (0.1)^2 + \hat{C}_1 (0.1)^4 + \cdots$$
Repeating the same procedure with $h = 0.2$, we have
$$f'(2.4) = \frac{f(2.6) - f(2.2)}{2(0.2)} + O(h^2) = -0.12824 + C_2 (0.2)^2 + \hat{C}_2 (0.2)^4 + \cdots$$

The values of $C_1$ and $C_2$ are not identical, but we assume they are the same. It can be shown that we make an error of $O(h^4)$ in taking the two values as equal. Based on this assumption, we can eliminate the $C$'s and get
$$f'(2.4) = -0.12819 + \frac{1}{(0.2/0.1)^2 - 1}\bigl[-0.12819 - (-0.12824)\bigr] = -0.12817 + O(h^4).$$

The latter assumption is an instance of a general rule stated as follows:

Given two estimates of a value that have errors of $O(h^n)$, where the $h$'s are in the ratio of 2 to 1, we can extrapolate to a better estimate of the exact value as
$$\text{Better estimate} = \text{more accurate} + \frac{1}{2^n - 1}\,(\text{more accurate} - \text{less accurate}).$$
The more accurate value is the one computed with the smaller value of h.
From the table of data, one can calculate an approximation with $h = 0.4$ and extrapolate it against the approximation with $h = 0.2$ to obtain a second $O(h^4)$ estimate. We then have two first-order extrapolations with $O(h^4)$ error; a second extrapolation can be performed on these two $O(h^4)$ approximations to give an $O(h^6)$ approximation. The extrapolations are laid out in table form as follows:

h      Initial estimate    First-order est.    Second-order est.
0.1    -0.12819            -0.12817            -0.12817
0.2    -0.12824            -0.12820
0.4    -0.12836
The second-order extrapolation in the above table comes from the following computation:
$$f'(2.4) = -0.12817 + \frac{1}{2^4 - 1}\bigl(-0.12817 - (-0.12820)\bigr) = -0.12817 + O(h^6).$$

The rule stated above is general, so it may be used for second-derivative approximations as well. For example, let us approximate $f''(2.4)$. First we take $h = 0.1$; the central difference approximation gives
$$f''(2.4) = \frac{f(2.5) - 2f(2.4) + f(2.3)}{(0.1)^2} + O(h^2) = 0.13360 + O(h^2). \tag{6.5}$$

Then take $h = 0.2$:
$$f''(2.4) = \frac{f(2.6) - 2f(2.4) + f(2.2)}{(0.2)^2} + O(h^2) = 0.13295 + O(h^2). \tag{6.6}$$

And finally take $h = 0.4$:
$$f''(2.4) = \frac{f(2.8) - 2f(2.4) + f(2.0)}{(0.4)^2} + O(h^2) = 0.13048 + O(h^2). \tag{6.7}$$

The extrapolations are laid out in the following table

h      Initial estimate    First-order est.    Second-order est.
0.1    0.13360             0.13382             0.13382
0.2    0.13295             0.13377
0.4    0.13048

Since the data was constructed using $f(x) = e^{-x} \sin x$, we can compute the true values $f'(2.4) = -0.12817$ and $f''(2.4) = 0.13379$. The remaining difference in the results for $f''(2.4)$ is due to round-off error.
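
The whole calculation is easy to check in code. The sketch below (the helper name extrapolate and the layout are our own choices) rebuilds both tableaux from the data table.

```python
def extrapolate(better, worse, n):
    # Rule above with h-ratio 2: improved = better + (better - worse)/(2^n - 1)
    return better + (better - worse) / (2**n - 1)

# Tabulated values f(x_i) for x_i = 2.0, 2.1, ..., 2.8
f = [0.123060, 0.105706, 0.089584, 0.074764, 0.061277,
     0.049126, 0.038288, 0.028722, 0.020371]
i = 4  # index of x = 2.4

# Central-difference estimates with h = 0.1*s for s = 1, 2, 4
d1 = {s: (f[i + s] - f[i - s]) / (2 * s * 0.1) for s in (1, 2, 4)}            # f'(2.4)
d2 = {s: (f[i + s] - 2 * f[i] + f[i - s]) / (s * 0.1)**2 for s in (1, 2, 4)}  # f''(2.4)

for name, d in (("f'(2.4)", d1), ("f''(2.4)", d2)):
    first_a = extrapolate(d[1], d[2], 2)       # O(h^4), from h = 0.1 and 0.2
    first_b = extrapolate(d[2], d[4], 2)       # O(h^4), from h = 0.2 and 0.4
    second = extrapolate(first_a, first_b, 4)  # O(h^6)
    print(name, round(d[1], 5), round(first_a, 5), round(second, 5))
```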

6.1.3 Richardson’s Extrapolation


We can apply the same technique when we want to differentiate a known function numerically. In this application, we can make the $h$-values smaller, rather than use larger values as required when the function is known only from tabulated values as above. We begin at some arbitrary value of $h$ and compute an approximation to $f'(x)$, then compute a second using $h/2$, and then use the extrapolation formula to compute an improved estimate. Generally, one builds a table by continuing with higher-order extrapolations, the value of $h$ being halved at each stage.

Example
Build a Richardson table for $f(x) = x^2 \cos x$ to evaluate $f'(1)$. Start with $h = 0.1$.

h         Initial estimate    First-order est.    Second-order est.    Third-order est.
0.1       0.226736
0.05      0.236031            0.239129
0.025     0.238358            0.239133            0.239134
0.0125    0.238938            0.239132            0.239132             0.239132

Richardson's technique indicates convergence when two successive values in the same row agree. Note that the table is oriented differently from that of the tabulated-data problem; this is done to reflect the decreasing value of $h$.
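
The procedure is easy to automate. Below is a sketch (the helper name richardson is ours; the notes do not prescribe an implementation) that builds the table above, starting from the central difference formula.

```python
import math

def richardson(f, x, h0, levels=4):
    # Richardson table for f'(x): halve h at each stage and extrapolate.
    # The central-difference error expands in even powers of h, so the
    # n-th extrapolation uses the factor 2^(2n) - 1.
    table = []
    for k in range(levels):
        h = h0 / 2**k
        row = [(f(x + h) - f(x - h)) / (2 * h)]  # initial central-difference estimate
        prev = table[-1] if table else []
        for n, p in enumerate(prev, start=1):
            row.append(row[-1] + (row[-1] - p) / (2**(2 * n) - 1))
        table.append(row)
    return table

g = lambda t: t**2 * math.cos(t)
for row in richardson(g, 1.0, 0.1):
    print("  ".join("%.6f" % v for v in row))
# Converges to f'(1) = 2*cos(1) - sin(1) = 0.23913...
```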
Chapter 8

Ordinary differential equations

8.1 Initial value problems


8.1.1 First-order ODEs
First-order ODEs are of the form
$$y' = \frac{dy}{dx} = f(x, y) \tag{8.1}$$
We seek a solution for $x \in [a, b]$ with the initial condition $y(a) = \eta$. A numerical method proceeds from the initial point and integrates Eq. (8.1) over the range $x \in [a, b]$. Defining the grid points $(x_i)$ by $x_i = a + ih$, $i = 0, 1, \ldots, n$, where $h$ is the constant step size, successive values $y_1, y_2, \ldots, y_n$ are computed at $a + h, a + 2h, \ldots, a + nh$.
The following notation is used:

$$x_i = x_0 + ih, \qquad y_i = y(x_i), \qquad f_i = f(x_i, y_i), \qquad \text{where } x_0 = a \text{ and } x_n = b.$$

8.1.2 Euler’s method


If we use the forward difference operator

$$f'(x) = \frac{f(x+h) - f(x)}{h} + O(h)$$

to approximate $dy/dx$ at a point $x_i$, then

$$y_i' \approx \frac{1}{h}(y_{i+1} - y_i),$$

and we obtain the algorithm for Euler's method:

$$y_{i+1} = y_i + h f_i \tag{8.2}$$

Thus, given $(x_0, y_0) = (a, \eta)$, we can calculate $(x_i, y_i)$ for $i = 1, \ldots, n$. Since the new value $y_{i+1}$ can be calculated from known values of $x_i$ and $y_i$, this method is explicit.


8.1.3 Error in Euler’s method


Consider the Taylor expansion of $y_{i+1}$:

$$y_{i+1} = y_i + h y_i' + \frac{h^2}{2!} y_i'' + \frac{h^3}{3!} y_i''' + \cdots;$$

since $y_i' = f_i$,

$$y_{i+1} = y_i + h f_i + \frac{h^2}{2!} f_i' + \frac{h^3}{3!} f_i'' + \cdots$$

If we subtract Eq. (8.2) from this we obtain

$$\varepsilon_E = \frac{h^2}{2!} f_i' + \frac{h^3}{3!} f_i'' + \cdots = \frac{h^2}{2!} f'(\xi),$$

where $x_i < \xi < x_{i+1}$. Thus the error per step is $O(h^2)$ and is called the Local Truncation Error (LTE). If we integrate over $n$ steps, the Global Truncation Error is $O(h)$.

Example
Solve the IVP
$$x'(t) = 2\sqrt{x}, \qquad x(0) = 4$$
to compute x(1) using Euler’s method with 4 and 8 steps respectively. Display all the intermediate
steps in the form of a table including the exact solution and the error.

With 4 steps

k    t_k     x_k      x(t_k)    Error
0    0       4        4         0
1    0.25    5.000    5.063     0.063
2    0.50    6.118    6.250     0.132
3    0.75    7.355    7.563     0.208
4    1.00    8.711    9.000     0.289
With 8 steps (every second step shown)

k    t_k     x_k      x(t_k)    Error
0    0       4        4         0
2    0.25    5.030    5.063     0.033
4    0.50    6.182    6.250     0.068
6    0.75    7.456    7.563     0.107
8    1.00    8.852    9.000     0.148
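
These tables can be reproduced with a short Python sketch (euler is our own function name; the exact solution $x(t) = (t+2)^2$ follows from separating variables).

```python
import math

def euler(f, t0, x0, t_end, n):
    # Explicit Euler, Eq. (8.2): x_{k+1} = x_k + h*f(t_k, x_k)
    h = (t_end - t0) / n
    t, x = t0, x0
    rows = [(t, x)]
    for _ in range(n):
        x = x + h * f(t, x)
        t = t + h
        rows.append((t, x))
    return rows

f = lambda t, x: 2 * math.sqrt(x)
exact = lambda t: (t + 2)**2   # from dx/(2*sqrt(x)) = dt, so sqrt(x) = t + 2
for n in (4, 8):
    print("n =", n)
    for t, x in euler(f, 0.0, 4.0, 1.0, n):
        print("%.3f  %.3f  %.3f  %.3f" % (t, x, exact(t), abs(exact(t) - x)))
```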

8.1.4 Improved Euler method


A much more efficient method than Euler's can be derived using a predictor-corrector strategy. The resulting method is second-order accurate and is called the improved Euler method or Heun's method.
The algorithm is derived as follows. Assume we want to solve a more general non-autonomous
IVP
$$y'(t) = f(t, y(t)), \qquad y(a) = \eta \tag{8.3}$$

We choose evenly spaced times $a = t_0 < t_1 < \cdots < t_{n-1} < t_n = T$ with step size $h$. Integrating Eq. (8.3) over the interval $[t_k, t_{k+1}]$ and using the fundamental theorem of calculus yields

$$y(t_{k+1}) = y(t_k) + \int_{t_k}^{t_{k+1}} f(t, y(t))\,dt$$

Using the trapezoidal rule for numerical integration, we obtain

$$y(t_{k+1}) \approx y(t_k) + \frac{h}{2}\bigl(f(t_k, y(t_k)) + f(t_{k+1}, y(t_{k+1}))\bigr).$$

Knowing that $y_k \approx y(t_k)$ and $y_{k+1} \approx y(t_{k+1})$, we assume

$$y_{k+1} = y_k + \frac{h}{2}\bigl(f(t_k, y_k) + f(t_{k+1}, y_{k+1})\bigr)$$
This is an implicit method: if $y_k$ is known, we cannot in general determine $y_{k+1}$ directly, because the term $f(t_{k+1}, y_{k+1})$ makes the equation non-linear in $y_{k+1}$ (when $f$ is non-linear in $y$), so it must be solved with an iterative method such as Newton's method. A way out is to estimate $y_{k+1}$ using Euler's method; that estimate is called a predictor and is denoted $p_{k+1}$. We can now write the complete algorithm for the improved Euler method:

$$\begin{aligned}
t_0 &= a, \\
\text{Predictor:}\quad p_{k+1} &= y_k + h f(t_k, y_k), \\
t_{k+1} &= t_k + h, \\
\text{Corrector:}\quad y_{k+1} &= y_k + \frac{h}{2}\bigl(f(t_k, y_k) + f(t_{k+1}, p_{k+1})\bigr), \qquad k = 0, 1, \ldots, n-1.
\end{aligned}$$

Example
Solve the IVP
$$x'(t) = 2\sqrt{x}, \qquad x(0) = 4$$
to compute $x(1)$ using the improved Euler method with 4 and 8 steps respectively, and compare your results with those of Euler's method. (Partially solved in class.)
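
As a sketch of one possible solution (the function name heun is ours), the predictor-corrector loop for this IVP might look as follows; its errors can be compared directly with the Euler tables above.

```python
import math

def heun(f, t0, y0, t_end, n):
    # Improved Euler (Heun): Euler predictor, trapezoidal corrector
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        p = y + h * f(t, y)                      # predictor
        y = y + h / 2 * (f(t, y) + f(t + h, p))  # corrector
        t = t + h
    return y

f = lambda t, x: 2 * math.sqrt(x)
for n in (4, 8):
    x1 = heun(f, 0.0, 4.0, 1.0, n)
    print(n, x1, abs(9.0 - x1))   # exact value: x(1) = 9
```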

8.1.5 Runge-Kutta (RK) methods


The general one-step method takes the form

$$y_{i+1} = y_i + h\,\phi(x_i, y_i; h)$$

In Euler's method we have $\phi(x_i, y_i; h) = f_i = y_i'$: we use the slope at the point $(x_i, y_i)$ to extrapolate and obtain $y_{i+1}$. Runge-Kutta methods extend this idea: instead of calculating the slope only at $(x_i, y_i)$, we take a 'weighted' average of the slopes at $(x_i, y_i)$ and at intermediate points.

A second-order Runge-Kutta method is

$$\begin{aligned}
q_1 &= f(x_i, y_i), \\
q_2 &= f\!\left(x_i + \frac{h}{2},\; y_i + \frac{h}{2} q_1\right), \\
y_{i+1} &= y_i + h q_2,
\end{aligned}$$

which has an LTE of $O(h^3)$.

We can include more sample points in $\phi(x_i, y_i; h)$ to increase the accuracy. The most widely used formula is the fourth-order RK method

$$\begin{aligned}
q_1 &= f(x_i, y_i), \\
q_2 &= f\!\left(x_i + \frac{h}{2},\; y_i + \frac{h}{2} q_1\right), \\
q_3 &= f\!\left(x_i + \frac{h}{2},\; y_i + \frac{h}{2} q_2\right), \\
q_4 &= f(x_i + h,\; y_i + h q_3), \\
y_{i+1} &= y_i + \frac{h}{6}(q_1 + 2q_2 + 2q_3 + q_4).
\end{aligned}$$

This method has an LTE of $O(h^5)$ and hence a global error of $O(h^4)$.
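
A Python sketch of the scheme (rk4 is our own function name); run on the example that follows, it reproduces $y_1 \approx 2.4950$ and $y_2 \approx 3.2566$.

```python
def rk4(f, x0, y0, h, n):
    # Classical fourth-order Runge-Kutta method
    x, y = x0, y0
    for _ in range(n):
        q1 = f(x, y)
        q2 = f(x + h / 2, y + h / 2 * q1)
        q3 = f(x + h / 2, y + h / 2 * q2)
        q4 = f(x + h, y + h * q3)
        y = y + h / 6 * (q1 + 2 * q2 + 2 * q3 + q4)
        x = x + h
    return y

# IVP of the example below: y' = t*y + y + t^2, y(0) = 2, h = 0.2
f = lambda t, y: t * y + y + t**2
print(rk4(f, 0.0, 2.0, 0.2, 1))  # y1, approximately 2.4950
print(rk4(f, 0.0, 2.0, 0.2, 2))  # y2, approximately 3.2566, estimating y(0.4)
```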

Example
Approximate y(0.4) using RK4 with 2 steps for the following non-autonomous IVP

$$y'(t) = t\,y(t) + y(t) + t^2, \qquad y(0) = 2$$

Here $f(t, y) = ty + y + t^2$, $t_0 = 0$, $y_0 = 2$, $h = 0.2$.
Step 1:

$$q_1 = f(t_0, y_0) = t_0 y_0 + y_0 + t_0^2 = 0 + 2 + 0 = 2$$

$$\begin{aligned}
y_0 + \tfrac{h}{2} q_1 &= 2 + (0.1)(2) = 2.2 \\
q_2 &= f(t_0 + \tfrac{h}{2},\, y_0 + \tfrac{h}{2} q_1) = 0.1(2.2) + 2.2 + 0.1^2 = 2.43 \\
y_0 + \tfrac{h}{2} q_2 &= 2.243 \\
q_3 &= f(t_0 + \tfrac{h}{2},\, y_0 + \tfrac{h}{2} q_2) = 2.4773 \\
y_0 + h q_3 &= 2.4955 \\
q_4 &= f(t_0 + h,\, y_0 + h q_3) = 3.0346 \\
y_1 &= y_0 + \tfrac{h}{6}(q_1 + 2q_2 + 2q_3 + q_4) = 2.4950
\end{aligned}$$
Step 2:
Proceeding in the same manner, we find

$$y(0.4) \approx y_2 = 3.2566$$

8.1.6 Linear multistep methods (LMMs)

The methods considered above are called single-step methods since they calculate the solution at $x_{i+1}$ from known information at $x_i$ only. This approach ignores earlier points which may already have been calculated; linear multistep methods attempt to take more than one point into consideration.
If we take the central difference approximation (cf. Eq. (6.4))

$$y_i' = \frac{1}{2h}(y_{i+1} - y_{i-1}),$$

then

$$y_{i+1} = y_{i-1} + 2h f(x_i, y_i).$$


Thus calculating $y_{i+1}$ requires information at $x_{i-1}$ and $x_i$. If instead we integrate $y' = f(x, y)$ using Simpson's rule over the range $[x_{i-1}, x_{i+1}]$, we obtain

$$y_{i+1} - y_{i-1} = \frac{h}{3}(f_{i-1} + 4f_i + f_{i+1}).$$

This time an estimate of $f_{i+1}$ is required on the right-hand side; this is called an implicit method. Both formulas above are two-step methods.
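
As a sketch, the explicit two-step formula above (often called the leapfrog or explicit midpoint method; the name leapfrog below is ours) needs two starting values. Taking $y_1$ from a single Euler step is a common choice, though not the only one.

```python
import math

def leapfrog(f, x0, y0, h, n):
    # Two-step explicit method: y_{i+1} = y_{i-1} + 2*h*f(x_i, y_i).
    # The second starting value y_1 is obtained here from one Euler step.
    ys = [y0, y0 + h * f(x0, y0)]
    for i in range(1, n):
        ys.append(ys[i - 1] + 2 * h * f(x0 + i * h, ys[i]))
    return ys

f = lambda t, x: 2 * math.sqrt(x)
print(leapfrog(f, 0.0, 4.0, 0.25, 4)[-1])   # approximates x(1); exact value is 9
```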
