Numerical Methods: King Saud University
In this chapter we deal with techniques for approximating numerically the two
fundamental operations of the calculus, differentiation and integration. Both of
these problems may be approached in the same way. Although both numerical
differentiation and numerical integration formulas will be discussed, it should be
noted that numerical differentiation is inherently much less accurate than
numerical integration, and its application is generally avoided whenever possible.
Nevertheless, it has been used successfully in certain applications.
Important Points
Firstly, we discuss the numerical process for approximating the derivative of the
function f(x) at a given point. A function f(x), known either explicitly or as a
set of data points, is replaced by a simpler function. A polynomial p(x) is the
obvious choice of approximating function, since the operation of differentiation is
then easily performed. The polynomial p(x) is differentiated to obtain p'(x), which
is taken as an approximation to f'(x) for any numerical value of x. Geometrically,
this is equivalent to replacing the slope of f(x), at x, by that of p(x). Here,
formulas for numerical differentiation are derived by differentiating interpolating polynomials.
We now turn our attention to the numerical process for approximating the
derivative of a function f(x) at x, that is,

    f'(x) = lim_{h→0} [f(x + h) − f(x)]/h,  provided the limit exists.   (1)
In principle, it is always possible to determine an analytic form (1) of a derivative
for a given function. In some cases, however, the analytic form is very complicated,
and a numerical approximation of the derivative may be sufficient for our purpose.
The formula (1) provides an obvious way to get an approximation to f'(x); simply
compute

    D_h f(x) = [f(x + h) − f(x)]/h,   (2)

for small values of the stepsize h; this is called the numerical differentiation formula for (1).
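As an illustration, the formula (2) is a single line of code; the function and evaluation point below are arbitrary choices for the demonstration.

```python
import math

def forward_difference(f, x, h):
    """Approximate f'(x) by the forward-difference quotient D_h f(x) of (2)."""
    return (f(x + h) - f(x)) / h

# Demonstration with f(x) = sin(x), whose exact derivative is cos(x).
approx = forward_difference(math.sin, 1.0, 1e-5)
print(approx, abs(approx - math.cos(1.0)))  # the error is roughly proportional to h
```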
Here, we shall derive some formulas for estimating derivatives but we should avoid
as far as possible, numerically calculating derivatives higher than the first, as the
error in their evaluation increases with their order. In spite of some inherent
shortcomings, numerical differentiation is important for deriving formulas for
evaluating integrals and for the numerical solution of both ordinary and partial
differential equations.
There are three different approaches for deriving the numerical differentiation
formulas. The first approach is based on the Taylor expansion of a function about
a point, the second is to use difference operators, and the third approach to
numerical differentiation is to fit a curve with a simple form to a function, and
then to differentiate the curve-fit function. For example, the polynomial
interpolation or spline methods of Chapter 4 can be used to fit a curve to
tabulated data for a function and the resulting polynomial or spline can then be
differentiated. When a function is represented by a table of values, the most
obvious approach is to differentiate the Lagrange interpolation formula
    f(x) = p_n(x) + [f^(n+1)(η(x))/(n + 1)!] ∏_{i=0}^{n} (x − x_i),   (3)
where the first term pn (x) of the right hand side is the Lagrange interpolating
polynomial of degree n and the second term is its error term.
It is interesting to note that the process of numerical differentiation may be less
satisfactory than interpolation: the closeness of the ordinates of f(x) and p_n(x) on
the interval of interest does not guarantee the closeness of their respective
derivatives. Note that the derivation and analysis of formulas for numerical
differentiation is considerably simplified when the data is equally spaced. It will be
assumed, therefore, that the points xi are given by xi = x0 + ih, (i = 0, 1, . . . , n)
for some fixed tabular interval h.
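The warning above can be made concrete: two functions can agree to within ε everywhere while their derivatives differ by an amount of order 1/ε. The perturbation below is an illustrative choice, not from the text.

```python
import math

# f and g agree to within eps at every x, yet g' - f' = cos(x/eps) is O(1).
eps = 1e-3
f = math.sin
g = lambda x: math.sin(x) + eps * math.sin(x / eps)

xs = [i * 0.01 for i in range(101)]
value_gap = max(abs(f(x) - g(x)) for x in xs)        # at most eps
deriv_gap = max(abs(math.cos(x / eps)) for x in xs)  # |g'(x) - f'(x)|; reaches 1
print(value_gap, deriv_gap)
```

Shrinking eps makes the ordinates agree ever more closely while the derivative gap stays of size one.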
Numerical Differentiation Formulas
Here, we will find approximations of the first and second derivatives of a function at
a given arbitrary point x. For the approximation of the first derivative of a
function we will use the two-point formula, the three-point formula, and Richardson's
extrapolation formula, while for the second derivative we will discuss the
three-point formula only.
First Derivative Numerical Formulas
The formula (4) is called the (n+1)-point formula to approximate f'(x_k), with its
error term. From this formula we can obtain many numerical differentiation
formulas, but here we shall discuss only two formulas to approximate (1) at a given
point x = x_k. The first is called the two-point formula, with its error term, which
we can get from (4) by taking n = 1 and k = 0. The second numerical
differentiation formula is called the three-point formula, with its error term, which
can be obtained from (4) when n = 2 and k = 0, 1, 2.
Two-point Formula
Consider two distinct points x0 and x1; then, to find the approximation of (1), the
first derivative of a function at a given point, take x0 ∈ (a, b), where f ∈ C^2[a, b], and
let x1 = x0 + h for some h ≠ 0 that is sufficiently small to ensure that x1 ∈ [a, b].
The linear Lagrange interpolating polynomial p1(x) which interpolates f(x) at the
given points is

    f(x) = p1(x) = [(x − x1)/(x0 − x1)] f(x0) + [(x − x0)/(x1 − x0)] f(x1).   (5)

Differentiating with respect to x and evaluating at x = x0 gives

    f'(x)|_{x=x0} ≈ p1'(x)|_{x=x0} = f(x0)/(x0 − x1) + f(x1)/(x1 − x0).

Simplifying the above expression, we have

    f'(x0) ≈ −f(x0)/h + f(x0 + h)/h,

which can be written as

    f'(x0) ≈ [f(x0 + h) − f(x0)]/h = D_h f(x0).   (6)
It is called the two-point formula, for small values of h. For h > 0, the formula (6)
is also sometimes called the two-point forward-difference formula, because it
involves only function values forward from f(x0).
The two-point forward-difference formula has a simple geometric interpretation as
the slope of the forward secant line, as shown in Figure 1.
[Figure 1: the forward secant line through (x0, f(x0)) and (x0 + h, f(x0 + h)), whose slope approximates f'(x0).]
Similarly, using the point x0 − h behind x0 gives the two-point backward-difference formula

    f'(x0) ≈ [f(x0) − f(x0 − h)]/h.   (7)
In this case, a value of x behind the point of interest is used. The formula (7) is
useful in cases where the independent variable represents time. If x0 denotes the
present time, the backward-difference formula uses only present and past samples;
it does not rely on future data samples that may not yet be available in a
real-time application.
The geometric interpretation of the two-point backward-difference formula, as the
slope of the backward secant line, is shown in Figure 2.
[Figure 2: the backward secant line through (x0 − h, f(x0 − h)) and (x0, f(x0)), whose slope approximates f'(x0).]
Example 0.1
Let f(x) = e^x. Use the two-point forward-difference formula (6) to find the
approximate value of f'(2), taking h = 0.1 and h = 0.01.
Solution. By the formula (6), we have

    f'(2) ≈ [f(2 + h) − f(2)]/h.

Then for h = 0.1, we get

    f'(2) ≈ (e^{2.1} − e^2)/0.1 = 7.7711,

and for h = 0.01, we get

    f'(2) ≈ (e^{2.01} − e^2)/0.01 = 7.4262.

Since the exact value of f'(2) = e^2 is 7.3891, the corresponding actual errors
with h = 0.1 and h = 0.01 are −0.3821 and −0.0371, respectively. This shows that
the approximation obtained with h = 0.01 is better than the approximation with
h = 0.1. •
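The computation in this example can be checked in a few lines; four-digit rounding of the intermediate exponentials accounts for any last-digit differences.

```python
import math

def forward_difference(f, x, h):
    # two-point forward-difference formula (6)
    return (f(x + h) - f(x)) / h

exact = math.exp(2.0)  # f'(2) = e^2 = 7.3891...
for h in (0.1, 0.01):
    approx = forward_difference(math.exp, 2.0, h)
    print(h, approx, exact - approx)  # errors about -0.3821 and -0.0371
```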
Error Term of Two-point Formula
The formula (6) by itself is not very useful, therefore let us attempt to find the
error involved in our first numerical differentiation formula (6). Consider the error
term for the linear Lagrange polynomial, which can be written as

    f(x) − p1(x) = [f''(η(x))/2!] ∏_{i=0}^{1} (x − x_i),
for some unknown point η(x) ∈ (x0, x1). Taking the derivative of the above equation
with respect to x, we have

    f'(x) − p1'(x) = [(x − x0)(x − x1)/2] d/dx[f''(η(x))] + [f''(η(x))/2] d/dx[(x − x0)(x − x1)].

We have no knowledge of d/dx[f''(η(x))], but at x = x0 its coefficient
(x − x0)(x − x1) vanishes, while d/dx[(x − x0)(x − x1)] at x = x0 equals
(x0 − x1) = −h. Hence the error in the forward-difference formula (6) is

    E_F(f, h) = f'(x0) − D_h f(x0) = −(h/2) f''(η(x)),  where η(x) ∈ (x0, x1),   (8)
which is called the error formula of the two-point formula (6). Hence the formula
(6) can be written as
    f'(x0) = [f(x0 + h) − f(x0)]/h − (h/2) f''(η),  where η ∈ (x0, x1).   (9)
The formula (9) is more useful than the formula (6) because now, for a large class
of functions, an error term is available along with the basic numerical formula.
Note that the formula (9) may also be derived from Taylor's theorem. Expanding
the function f(x1) about x0 as far as the term involving h^2 gives

    f(x1) = f(x0) + h f'(x0) + (h^2/2!) f''(η(x)).   (10)

From this the result follows by putting x1 = x0 + h, subtracting f(x0) from both
sides, and dividing both sides by h.
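The error term in (9) predicts first-order behavior: halving h roughly halves the error. A quick check with f(x) = e^x, all of whose derivatives are e^x:

```python
import math

x = 2.0
errors = []
for h in (0.1, 0.05, 0.025):
    approx = (math.exp(x + h) - math.exp(x)) / h
    err = math.exp(x) - approx                     # equals -(h/2) f''(eta), eta in (x, x+h)
    errors.append(err)
    print(h, err, err / (-(h / 2) * math.exp(x)))  # ratio f''(eta)/f''(x) -> 1 as h -> 0
print(errors[1] / errors[0])                       # close to 1/2
```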
Example 0.2
Let f(x) = x^3 be defined on the interval [0.2, 0.3]. Use the error formula (8) of the
two-point formula for the approximation of f'(0.2) to compute a value of η.
Solution. The exact value of the first derivative of the function at x0 = 0.2 is
f'(0.2) = 3(0.2)^2 = 0.12, while the two-point formula with h = 0.1 gives
f'(0.2) ≈ [f(0.3) − f(0.2)]/0.1 = 0.19, so the actual error is 0.12 − 0.19 = −0.07.
Since f''(x) = 6x, the error formula (8) gives

    −0.07 = −(0.1/2) 6η,

and solving for η, we get η = 0.233. •
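A short script reproduces the value of η; the steps mirror the solution above.

```python
f = lambda x: x**3
exact = 3 * 0.2**2                   # f'(0.2) = 0.12
approx = (f(0.3) - f(0.2)) / 0.1     # two-point formula: 0.19
err = exact - approx                 # -0.07
# error formula (8): err = -(h/2) f''(eta) with f''(eta) = 6*eta
eta = err / (-(0.1 / 2) * 6)
print(round(eta, 3))                 # 0.233, inside (0.2, 0.3) as the theory requires
```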
Example 0.3
Let f(x) = x^2 cos x and h = 0.1. Then
(a) Compute the approximate value of f'(1) using the forward-difference two-point
formula (6).
(b) Compute the error bound for your approximation using the formula (8).
(c) Compute the absolute error.
(d) What is the best maximum value of the stepsize h required to obtain the
approximate value of f'(1) correct to within 10^{−2}?
Solution. (a) Given x0 = 1, h = 0.1, then by using the formula (6), we have
    f'(1) ≈ [f(1 + 0.1) − f(1)]/0.1 = [f(1.1) − f(1)]/0.1 = D_h f(1).

Thus

    f'(1) ≈ [(1.1)^2 cos(1.1) − (1)^2 cos(1)]/0.1 ≈ (0.5489 − 0.5403)/0.1 = 0.0860,

which is the required approximation of f'(x) at x = 1.
(b) To find the error bound, we use the formula (8), which gives

    E_F(f, h) = −(0.1/2) f''(η(x)),  where η(x) ∈ (1, 1.1),

or

    |E_F(f, h)| = (0.1/2) |f''(η(x))|,  for η ∈ (1, 1.1).

The second derivative of the function f(x) = x^2 cos x is

    f''(x) = (2 − x^2) cos x − 4x sin x.

The value of f''(η(x)) cannot be computed exactly because η(x) is not known, but
one can bound the error by computing the largest possible value of |f''(η(x))|.
The bound on |f''| on [1, 1.1] is M = max_{1≤x≤1.1} |f''(x)| = 3.5630, and hence

    |E_F(f, h)| ≤ (0.1/2) M = 0.05(3.5630) = 0.1782,

which is the possible maximum error in our approximation.
(c) Since the exact value of the derivative f'(1) is 0.2392, the absolute
error |E| can be computed as follows:

    |E| = |0.2392 − 0.0860| = 0.1532.

(d) For the required accuracy, we need

    |E_F(f, h)| = |−(h/2) f''(η(x))| ≤ 10^{−2},

for η(x) ∈ (1, 1.1). This gives

    (h/2) M ≤ 10^{−2},  or  h ≤ (2 × 10^{−2})/M.

Using M = 3.5630, we obtain

    h ≤ 0.02/3.5630 = 0.0056,

which is the best maximum value of h to get the required accuracy. •
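All four parts of this example can be verified numerically. Exact arithmetic differs in the last digit from the text's four-digit intermediate rounding (the script gives 0.0855 rather than 0.0860 for part (a)). Taking the maximum of |f''| over only the endpoints is a shortcut that is justified here because |f''| is monotone on [1, 1.1].

```python
import math

f   = lambda x: x**2 * math.cos(x)
fpp = lambda x: (2 - x**2) * math.cos(x) - 4 * x * math.sin(x)

x0, h = 1.0, 0.1
approx  = (f(x0 + h) - f(x0)) / h                 # part (a)
M       = max(abs(fpp(x0)), abs(fpp(x0 + h)))     # |f''| attains its max at x = 1.1
bound   = (h / 2) * M                             # part (b)
exact   = 2 * math.cos(1.0) - math.sin(1.0)       # f'(x) = 2x cos x - x^2 sin x at x = 1
abs_err = abs(exact - approx)                     # part (c)
h_max   = 2e-2 / M                                # part (d)
print(round(approx, 4), round(bound, 4), round(abs_err, 4), round(h_max, 4))
```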
The truncation error in the approximation (9) is roughly proportional to the
stepsize h used in its computation. The situation is made worse by the fact that
the round-off error in computing the approximate derivative (6) is roughly
proportional to 1/h. The overall error therefore has the form

    E = ch + δ/h,

where c and δ are constants. This places a severe restriction on the accuracy that
can be achieved with this formula.
Example 0.4
Consider f(x) = x^2 cos x and x0 = 1. To show the effect of rounding error, the
values f̃_i are obtained by rounding f(x_i) to seven significant digits; compute the
total error for h = 0.1 and also find the optimum h.
Solution. Given |ε_i| ≤ (1/2) × 10^{−7} = δ and h = 0.1. Now to calculate the total
error, we use

    E(h) = (h/2) M + 10^{−t}/h,

where

    M = max_{1≤x≤1.1} |(2 − x^2) cos x − 4x sin x| = 3.5630.

Then

    E(0.1) = (0.1/2)(3.5630) + 10^{−7}/0.1 = 0.17815 + 0.000001 = 0.178151.
Now to find the optimum h, we use

    h_opt = sqrt((2/M) × 10^{−t}) = sqrt((2/3.5630) × 10^{−7}) = 0.00024,

which is the value of h below which the total error will begin to increase. Note
that for

    h = 0.00024,  E(h) = 0.000844,
    h = 0.00015,  E(h) = 0.000934,
    h = 0.00001,  E(h) = 0.010018.
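The trade-off in Example 0.4 is easy to tabulate; M and the rounding level 10^{−7} are taken from the text.

```python
import math

M = 3.5630          # bound on |f''| from the example

def total_error(h):
    # truncation error grows like h; rounding error grows like 1/h
    return (h / 2) * M + 1e-7 / h

h_opt = math.sqrt(2e-7 / M)   # stationary point of total_error
print(round(total_error(0.1), 6), round(h_opt, 5))
for h in (0.00024, 0.00015, 0.00001):
    print(h, round(total_error(h), 6))
```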
Three-point Central Difference Formula
Consider three distinct points x0, x1, and x2. The quadratic Lagrange interpolating
polynomial which interpolates f(x) at these points is

    f(x) = p2(x) = [(x − x1)(x − x2)/((x0 − x1)(x0 − x2))] f(x0)
    + [(x − x0)(x − x2)/((x1 − x0)(x1 − x2))] f(x1)
    + [(x − x0)(x − x1)/((x2 − x0)(x2 − x1))] f(x2).

Now taking the derivative of the above expression with respect to x and then
setting x = x_k, for k = 0, 1, 2, we have

    f'(x_k) ≈ [(2x_k − x1 − x2)/((x0 − x1)(x0 − x2))] f(x0)
    + [(2x_k − x0 − x2)/((x1 − x0)(x1 − x2))] f(x1)
    + [(2x_k − x0 − x1)/((x2 − x0)(x2 − x1))] f(x2).   (11)
Three different numerical differentiation formulas can be obtained from (11) by
putting x_k = x0, x_k = x1, or x_k = x2, which are used to find the approximation
of the first derivative of a function defined by the formula (1) at the given point.
Firstly, we take x_k = x1; then the formula (11) becomes

    f'(x1) ≈ [(2x1 − x1 − x2)/((x0 − x1)(x0 − x2))] f(x0)
    + [(2x1 − x0 − x2)/((x1 − x0)(x1 − x2))] f(x1)
    + [(2x1 − x0 − x1)/((x2 − x0)(x2 − x1))] f(x2).

For equally spaced points x0 = x1 − h and x2 = x1 + h, this simplifies to

    f'(x1) ≈ [f(x1 + h) − f(x1 − h)]/(2h) = D_h f(x1).   (12)

It is called the three-point central-difference formula for finding the approximation
of the first derivative of a function at the given point x1. Note that the
formula (12) uses data points that are centered about the point of interest x1,
even though the value f(x1) itself does not appear on the right-hand side of (12).
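The gain from centering is visible numerically: with f(x) = e^x at x = 2, the central-difference error behaves like h^2 while the forward-difference error behaves like h.

```python
import math

def forward(f, x, h):
    return (f(x + h) - f(x)) / h            # formula (6), O(h) error

def central(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)  # formula (12), O(h^2) error

x, exact = 2.0, math.exp(2.0)
for h in (0.1, 0.01):
    print(h,
          abs(exact - forward(math.exp, x, h)),
          abs(exact - central(math.exp, x, h)))
```

Reducing h by a factor of 10 cuts the central-difference error by roughly a factor of 100.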
Error Formula of Central Difference Formula
The formula (12) by itself is not very useful, therefore let us attempt to find the
error involved in the formula (12) for numerical differentiation. Consider the error
term for the quadratic Lagrange polynomial, which can be written as

    f(x) − p2(x) = [f'''(η(x))/3!] ∏_{i=0}^{2} (x − x_i),

for some unknown point η(x) ∈ (x0, x2). Taking the derivative of the above
equation with respect to x, we have

    f'(x) − p2'(x) = [(x − x0)(x − x1)(x − x2)/6] d/dx[f'''(η(x))]
    + [f'''(η(x))/6] [(x − x1)(x − x2) + (x − x0)(x − x2) + (x − x0)(x − x1)].

At x = x1 the first term vanishes, because of the factor (x − x1), and the bracket
in the second term reduces to (x1 − x0)(x1 − x2) = −h^2. Therefore the error
formula of the central-difference formula (12) can be written as

    E_C(f, h) = f'(x1) − D_h f(x1) = −(h^2/6) f'''(η(x1)),   (13)

where η(x1) ∈ (x1 − h, x1 + h).
Hence the formula (12) can be written as

    f'(x1) = [f(x1 + h) − f(x1 − h)]/(2h) − (h^2/6) f'''(η(x1)),  where η(x1) ∈ (x1 − h, x1 + h).   (14)

Example 0.5
Let f(x) = x^2 + cos x and h = 0.1. (a) Compute the approximate value of f'(1)
using the central-difference formula (12). (b) Compute the error bound for your
approximation using the formula (13).
Solution. (a) By the formula (12), we have

    f'(1) ≈ ([(1.1)^2 + cos(1.1)] − [(0.9)^2 + cos(0.9)])/0.2 = (1.6636 − 1.4316)/0.2 = 1.1600.
(b) By using the error formula (13), we have

    E_C(f, h) = −((0.1)^2/6) f'''(η(x1)),  for η(x1) ∈ (0.9, 1.1),

or

    |E_C(f, h)| = ((0.1)^2/6) |f'''(η(x1))|,  for η(x1) ∈ (0.9, 1.1).

Since f'''(x) = sin x, the value f'''(η(x1)) = sin η(x1) cannot be computed exactly
because η(x1) is not known. But one can bound the error by computing the largest
possible value of |f'''(η(x1))|. The bound on |f'''| on [0.9, 1.1] is

    M = max_{0.9≤x≤1.1} |sin x| = 0.8912,

so that

    |E_C(f, h)| ≤ ((0.1)^2/6) (0.8912) = 0.0015,

which is the possible maximum error in our approximation.
(a) Use the three-point formula, for a small value of h, to find an approximation of f'(3).
(b) The function tabulated is ln x; find the error bound and absolute error for the
approximation of f'(3).
(c) What is the best maximum value of the stepsize h required to obtain the
approximate value of f'(3) within the accuracy 10^{−4}?
Solution. (a) For the given table of data points, we can use all three three-point
formulas; for the central-difference formula we can take

    x0 = x1 − h = 2,  x1 = 3,  x2 = x1 + h = 4,  which gives h = 1.
    E_B(f, h) = (h^2/3) f'''(η),  or  |E_B(f, h)| ≤ (h^2/3) |f'''(η)|.

Taking |f'''(η(x2))| ≤ M = max_{1.6≤x≤3} |f'''(x)| = max_{1.6≤x≤3} |2/x^3| = 0.4883
and h = 0.7, we obtain

    |E_B(f, h)| ≤ ((0.7)^2/3)(0.4883) = 0.0798,
which is the required error bound for the approximation. The absolute error is

    |E| = |f'(3) − 0.3214| = |0.3333 − 0.3214| = 0.0119.
(c) Since the required accuracy is 10^{−4}, we need

    |E_B(f, h)| = (h^2/3) |f'''(η)| ≤ 10^{−4},

for η ∈ (1.6, 3). Then

    (h^2/3) M ≤ 10^{−4}.

Solving for h by taking M = 0.4883, we obtain

    h^2 ≤ (3 × 10^{−4})/0.4883 = 0.0006,  so  h ≤ 0.0248,

and so h = 0.025 is the best maximum value of h. •
Example 0.7
Use the three-point formulas (12), (15) and (17) to approximate the first
derivative of the function f (x) = ex at x = 2, take h = 0.1. Also, compute the
error bound for each approximation.
Solution. Given f(x) = e^x and h = 0.1. Then:
Central-difference formula:

    |E_C(f, h)| ≤ ((0.1)^2/6) e^{2.1} = 0.0136.
Forward-difference formula:
    E_F(f, h) = (h^2/3) f'''(η(x0)),  or  |E_F(f, h)| ≤ (h^2/3) |f'''(η(x0))|.

Taking |f'''(η(x0))| ≤ M = max_{2≤x≤2.2} |e^x| = e^{2.2} and h = 0.1, we obtain

    |E_F(f, h)| ≤ ((0.1)^2/3) e^{2.2} = 0.0301.
Backward difference formula:
    E_B(f, h) = (h^2/3) f'''(η(x2)),  or  |E_B(f, h)| ≤ (h^2/3) |f'''(η(x2))|.

Taking |f'''(η(x2))| ≤ M = max_{1.8≤x≤2} |e^x| = e^2 and h = 0.1, we obtain

    |E_B(f, h)| ≤ ((0.1)^2/3) e^2 = 0.0246.

These are the required error bounds for the approximations. •
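The three bounds computed above follow the same pattern, differing only in the interval on which |f'''| = e^x is maximized:

```python
import math

h = 0.1
central  = (h**2 / 6) * math.exp(2.1)   # |f'''| <= e^2.1 on (1.9, 2.1)
forward  = (h**2 / 3) * math.exp(2.2)   # |f'''| <= e^2.2 on (2.0, 2.2)
backward = (h**2 / 3) * math.exp(2.0)   # |f'''| <= e^2.0 on (1.8, 2.0)
print(round(central, 4), round(forward, 4), round(backward, 4))  # 0.0136 0.0301 0.0246
```

As expected, the central-difference bound is the smallest of the three for the same h.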
Second Derivative Numerical Formula
    E_C(f, h) = −(h^2/12) f^(4)(η(x1)),  for η(x1) ∈ (0.9, 1.1),

or

    |E_C(f, h)| = (h^2/12) |f^(4)(η(x1))|,  for η(x1) ∈ (0.9, 1.1).
The fourth derivative of the given function at η(x1) is f^(4)(η(x1)) = cos η(x1),
and it cannot be computed exactly because η(x1) is not known. But one can
bound the error by computing the largest possible value of |f^(4)(η(x1))|. Using
the bound M on |f^(4)| on the interval (0.9, 1.1),

    |E_C(f, h)| ≤ (h^2/12) M.

Taking M = 0.4536 and h = 0.1, we obtain

    |E_C(f, h)| ≤ (0.01/12)(0.4536) = 0.0004,
which is the possible maximum error in our approximation.
(c) Since the exact value of f''(1) is 1.4597, the absolute error is

    |E| = |f''(1) − D_h^2 f(1)| = |1.4597 − 1.4600| = 0.0003.
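The approximation D_h^2 f(1) = 1.4600 used above comes from the standard three-point central formula for f''. The sketch below assumes, consistently with the values in this example, that the function is f(x) = x^2 + cos x; exact arithmetic gives 1.4601, the text's 1.4600 reflecting four-digit intermediate rounding.

```python
import math

def second_central(f, x, h):
    # three-point central-difference formula for the second derivative
    return (f(x - h) - 2 * f(x) + f(x + h)) / h**2

f = lambda x: x**2 + math.cos(x)   # assumed function, consistent with f''(1) = 1.4597
approx = second_central(f, 1.0, 0.1)
exact = 2 - math.cos(1.0)          # f''(x) = 2 - cos x
print(round(approx, 4), round(exact, 4))
```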
h2 (4)
|EC (f, h)| = − f (η(x1 )) ≤ 10−2 ,
12
for η(x1 ) ∈ (0.9, 1.1). Then for |f (4) (η(x1 ))| ≤ M , we have
h2
M ≤ 10−2 .
12
Solving for h2 , we obtain