
Numerical Methods

King Saud University


Aims

In this lecture, we will . . .

- find approximate values of derivatives (first- and second-order) and of
antiderivatives (definite integrals only).
Numerical Differentiation and Integration

In this chapter we deal with techniques for approximating numerically the two
fundamental operations of the calculus, differentiation and integration. Both of
these problems may be approached in the same way. Although both numerical
differentiation and numerical integration formulas will be discussed, it should be
noted that numerical differentiation is inherently much less accurate than
numerical integration, and its application is generally avoided whenever possible.
Nevertheless, it has been used successfully in certain applications.
Important Points

I. Here we shall find approximate values of derivatives (first- and
second-order) and antiderivatives (definite integrals only).
II. The given data points must be equally spaced (each subinterval must have the
same length). The smaller the length of the subinterval, the better the approximation.
III. Numerical methods for differentiation and integration can be derived using
the Lagrange interpolating polynomial at equally spaced data points.
IV. The error term for each numerical method will be discussed, which helps to
find the maximum error in the approximation.
V. The two-point formula (for the first derivative) and the three-point formulas (for the
first and second derivatives) for numerical differentiation, and the Trapezoidal and
Simpson's rules for numerical integration, will be discussed here.
Numerical Differentiation

Firstly, we discuss the numerical process for approximating the derivative of a
function f(x) at a given point. A function f(x), known either explicitly or as a
set of data points, is replaced by a simpler function. A polynomial p(x) is the
obvious choice of approximating function, since the operation of differentiation is
then easily performed. The polynomial p(x) is differentiated to obtain p'(x), which
is taken as an approximation to f'(x) for any numerical value of x. Geometrically,
this is equivalent to replacing the slope of f(x) at x by that of p(x). Here,
numerical differentiation formulas are derived by differentiating interpolating polynomials.
We now turn our attention to the numerical process for approximating the
derivative of a function f(x) at x, that is,

    f'(x) = lim_{h→0} [f(x + h) − f(x)] / h,  provided the limit exists.   (1)
In principle, it is always possible to determine an analytic form (1) of a derivative
for a given function. In some cases, however, the analytic form is very complicated,
and a numerical approximation of the derivative may be sufficient for our purpose.
The formula (1) provides an obvious way to get an approximation to f'(x): simply
compute

    D_h f(x) = [f(x + h) − f(x)] / h,   (2)

for small values of the stepsize h; this is called the numerical differentiation formula for (1).
Here, we shall derive some formulas for estimating derivatives, but we should avoid,
as far as possible, numerically calculating derivatives higher than the first, as the
error in their evaluation increases with the order. In spite of some inherent
shortcomings, numerical differentiation is important for deriving formulas to
approximate integrals and for the numerical solution of both ordinary and partial
differential equations.
There are three different approaches for deriving the numerical differentiation
formulas. The first approach is based on the Taylor expansion of a function about
a point, the second is to use difference operators, and the third approach to
numerical differentiation is to fit a curve with a simple form to a function, and
then to differentiate the curve-fit function. For example, the polynomial
interpolation or spline methods of Chapter 4 can be used to fit a curve to
tabulated data for a function, and the resulting polynomial or spline can then be
differentiated. When a function is represented by a table of values, the most
obvious approach is to differentiate the Lagrange interpolation formula

    f(x) = p_n(x) + [f^(n+1)(η(x)) / (n+1)!] ∏_{i=0}^{n} (x − x_i),   (3)

where the first term p_n(x) on the right-hand side is the Lagrange interpolating
polynomial of degree n and the second term is its error term.
It is interesting to note that the process of numerical differentiation may be less
satisfactory than interpolation: the closeness of the ordinates of f(x) and p_n(x) on
the interval of interest does not guarantee the closeness of their respective
derivatives. Note that the derivation and analysis of formulas for numerical
differentiation is considerably simplified when the data are equally spaced. It will be
assumed, therefore, that the points x_i are given by x_i = x_0 + ih, (i = 0, 1, . . . , n),
for some fixed tabular interval h.
Numerical Differentiation Formulas

Here, we will find approximations of the first and second derivatives of a function at
a given arbitrary point x. For the approximation of the first derivative of a
function we will use the two-point formula, the three-point formula, and Richardson's
extrapolation formula, while for the approximation of the second derivative we will
discuss the three-point formula only.
First Derivative Numerical Formulas

To obtain a general formula for the approximation of the first derivative of a function
f(x), suppose that {x_0, x_1, . . . , x_n} are (n + 1) distinct equally spaced points
in some interval I and that the function f(x) is continuous and its (n + 1)th derivative
exists in the given interval, that is, f ∈ C^{n+1}(I). Then by differentiating (3) with
respect to x and setting x = x_k, we have

    f'(x_k) = Σ_{i=0}^{n} f(x_i) L_i'(x_k)
            + [f^(n+1)(η(x_k)) / (n+1)!] ∏_{i=0, i≠k}^{n} (x_k − x_i).   (4)

The formula (4) is called the (n+1)-point formula to approximate f'(x_k), together with its
error term. From this formula we can obtain many numerical differentiation
formulas, but here we shall discuss only two formulas to approximate (1) at a given
point x = x_k. The first is called the two-point formula, with its error term, which
we can get from (4) by taking n = 1 and k = 0. The second numerical
differentiation formula is called the three-point formula, with its error term, which
can be obtained from (4) when n = 2 and k = 0, 1, 2.
Two-point Formula
Consider two distinct points x_0 and x_1. To find the approximation of (1), the
first derivative of a function at a given point, take x_0 ∈ (a, b), where f ∈ C^2[a, b], and
let x_1 = x_0 + h for some h ≠ 0 that is sufficiently small to ensure that x_1 ∈ [a, b].
The linear Lagrange interpolating polynomial p_1(x) which interpolates
f(x) at the given points is

    f(x) ≈ p_1(x) = [(x − x_1)/(x_0 − x_1)] f(x_0) + [(x − x_0)/(x_1 − x_0)] f(x_1).   (5)

By taking the derivative of (5) with respect to x at x = x_0, we obtain

    f'(x)|_{x=x_0} ≈ p_1'(x_0) = f(x_0)/(x_0 − x_1) + f(x_1)/(x_1 − x_0).

Substituting x_1 = x_0 + h and simplifying the above expression, we have

    f'(x_0) ≈ −f(x_0)/h + f(x_0 + h)/h,

which can be written as

    f'(x_0) ≈ [f(x_0 + h) − f(x_0)] / h = D_h f(x_0).   (6)
It is called the two-point formula, for small values of h. For h > 0, the
formula (6) is also sometimes called the two-point forward-difference formula because it
involves only differences of function values forward from f(x_0).
The two-point forward-difference formula has a simple geometric interpretation as
the slope of the forward secant line, as shown in Figure 1.

Figure: Forward-difference approximation as the slope of the forward secant line
through (x_0, f(x_0)) and (x_0 + h, f(x_0 + h)).


Note that if h < 0, then the formula (6) is also called the two-point
backward-difference formula, which (for h > 0) can be written as

    f'(x_0) ≈ [f(x_0) − f(x_0 − h)] / h.   (7)
In this case, a value of x behind the point of interest is used. The formula (7) is
useful in cases where the independent variable represents time. If x_0 denotes the
present time, the backward-difference formula uses only present and past samples;
it does not rely on future data samples that may not yet be available in a
real-time application.
The geometric interpretation of the two-point backward-difference formula, as the
slope of the backward secant line, is shown in Figure 2.

Figure: Backward-difference approximation as the slope of the backward secant line
through (x_0 − h, f(x_0 − h)) and (x_0, f(x_0)).
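The real-time use of the backward-difference formula described above can be sketched in a few lines of Python; the sample readings, the spacing, and the variable names below are made up for illustration.

```python
def backward_difference(f_now, f_prev, h):
    """Two-point backward-difference estimate (7) of the derivative at the
    present time, using only the present and the previous sample."""
    return (f_now - f_prev) / h

# Hypothetical temperature readings taken every 0.5 s; the newest value is last.
h = 0.5
samples = [20.0, 20.8, 21.9, 23.4]
rate = backward_difference(samples[-1], samples[-2], h)
print(rate)  # (23.4 - 21.9) / 0.5 = 3.0 degrees per second
```

No future sample is needed, which is exactly why this variant suits streaming data.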


Example 0.1
Let f(x) = e^x and h = 0.1, h = 0.01. Use the two-point forward-difference formula to
approximate f'(2). For which value of h do we get the better approximation, and why?
Solution. Using the formula (6) with x_0 = 2, we have

    f'(2) ≈ [f(2 + h) − f(2)] / h.

Then for h = 0.1, we get

    f'(2) ≈ [f(2.1) − f(2)] / 0.1 = (e^{2.1} − e^2) / 0.1 ≈ 7.7712.

Similarly, using h = 0.01, we obtain

    f'(2) ≈ (e^{2.01} − e^2) / 0.01 ≈ 7.4262.

Since the exact value f'(2) = e^2 ≈ 7.3891, the corresponding actual errors
with h = 0.1 and h = 0.01 are −0.3821 and −0.0371, respectively. This shows that
the approximation obtained with h = 0.01 is better than the approximation with
h = 0.1. •
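Example 0.1 can be checked with a short Python sketch of the two-point forward-difference formula (6); the function name is ours.

```python
import math

def forward_difference(f, x0, h):
    """Two-point forward-difference approximation (6) to f'(x0)."""
    return (f(x0 + h) - f(x0)) / h

exact = math.exp(2)  # f'(2) = e^2, since (e^x)' = e^x
for h in (0.1, 0.01):
    approx = forward_difference(math.exp, 2.0, h)
    print(h, approx, exact - approx)  # the error shrinks roughly like h
```

Shrinking h by a factor of ten shrinks the error by about the same factor, which anticipates the O(h) error term derived next.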
Error Term of Two-point Formula
The formula (6) alone is not very useful without an error estimate; therefore, let us
find the error involved in our first numerical differentiation formula (6). Consider
the error term for the linear Lagrange polynomial, which can be written as

    f(x) − p_1(x) = [f''(η(x)) / 2!] ∏_{i=0}^{1} (x − x_i),
for some unknown point η(x) ∈ (x_0, x_1). By taking the derivative of the above
equation with respect to x and setting x = x_0, we have

    f'(x_0) − p_1'(x_0) = [(d/dx) f''(η(x))]_{x=x_0} (x − x_0)(x − x_1)/2
        + [f''(η(x_0))/2] [(d/dx)(x − x_0)(x − x_1)]_{x=x_0}.

The first term on the right vanishes, since it contains the factor (x − x_0)
evaluated at x = x_0, while the derivative in the second term gives
(2x − x_0 − x_1)|_{x=x_0} = x_0 − x_1 = −h. So the error in the forward-difference
formula (6) is

    E_F(f, h) = f'(x_0) − D_h f(x_0) = −(h/2) f''(η(x)),  where η(x) ∈ (x_0, x_1),   (8)
which is called the error formula of the two-point formula (6). Hence the formula
(6) can be written as
    f'(x_0) = [f(x_0 + h) − f(x_0)] / h − (h/2) f''(η),  where η ∈ (x_0, x_1).   (9)

The formula (9) is more useful than the formula (6) because now, for a large class
of functions, an error term is available along with the basic numerical formula.
Note that the formula (9) may also be derived from Taylor's theorem. Expanding
f(x_1) about x_0 as far as the term involving h^2 gives

    f(x_1) = f(x_0) + h f'(x_0) + (h^2 / 2!) f''(η(x)).   (10)

From this the result follows by subtracting f(x_0) from both sides, dividing both
sides by h, and putting x_1 = x_0 + h.
Example 0.2
Let f(x) = x^3 be defined on the interval [0.2, 0.3]. Use the error formula (8) of the
two-point formula for the approximation of f'(0.2) to compute a value of η.
Solution. The exact value of the first derivative of the function at x_0 = 0.2 is

    f'(x) = 3x^2  and  f'(0.2) = 3(0.2)^2 = 0.12,

and the approximate value of f'(0.2) using the two-point formula is

    f'(0.2) ≈ [f(0.3) − f(0.2)] / 0.1 = [(0.3)^3 − (0.2)^3] / 0.1 = 0.19,

so the error E can be calculated as

    E = 0.12 − 0.19 = −0.07.

Using the error formula (8) with f''(η) = 6η, we have

    −0.07 = −(0.1/2)(6η),

and solving for η, we get η = 0.233. •
Example 0.3
Let f(x) = x^2 cos x and h = 0.1. Then
(a) Compute the approximate value of f'(1) using the forward-difference two-point
formula (6).
(b) Compute the error bound for your approximation using the formula (8).
(c) Compute the absolute error.
(d) What is the best maximum value of the stepsize h required to obtain the
approximate value of f'(1) correct to within 10^{-2}?
Solution. (a) Given x_0 = 1 and h = 0.1, by using the formula (6) we have

    f'(1) ≈ [f(1 + 0.1) − f(1)] / 0.1 = [f(1.1) − f(1)] / 0.1 = D_h f(1).

Thus

    f'(1) ≈ [(1.1)^2 cos(1.1) − (1)^2 cos(1)] / 0.1 ≈ (0.5489 − 0.5403) / 0.1 = 0.0860,

which is the required approximation of f'(x) at x = 1.
(b) To find the error bound, we use the formula (8), which gives

    E_F(f, h) = −(0.1/2) f''(η(x)),  where η(x) ∈ (1, 1.1),

or

    |E_F(f, h)| = (0.1/2) |f''(η(x))|,  for η ∈ (1, 1.1).

The second derivative f''(x) of the function can be found as

    f(x) = x^2 cos x  gives  f''(x) = (2 − x^2) cos x − 4x sin x.

The value of the second derivative f''(η(x)) cannot be computed exactly because
η(x) is not known, but one can bound the error by computing the largest possible
value of |f''(η(x))|. The bound on |f''| on [1, 1.1] is

    M = max_{1≤x≤1.1} |(2 − x^2) cos x − 4x sin x| = 3.5630,

attained at x = 1.1. Since |f''(η(x))| ≤ M, for h = 0.1 we have

    |E_F(f, h)| ≤ (0.1/2) M = 0.05(3.5630) = 0.1782,

which is the maximum possible error in our approximation.
(c) Since the exact value of the derivative f'(1) is 0.2391, the absolute
error |E| can be computed as follows:

    |E| = |f'(1) − D_h f(1)| = |0.2391 − 0.0860| = 0.1531.

(d) Since the required accuracy is 10^{-2}, we need

    |E_F(f, h)| = |−(h/2) f''(η(x))| ≤ 10^{-2},

for η(x) ∈ (1, 1.1). This gives

    (h/2) M ≤ 10^{-2},  or  h ≤ (2 × 10^{-2}) / M.

Using M = 3.5630, we obtain

    h ≤ 2 / 356.3000 = 0.0056,

which is the best maximum value of h to get the required accuracy. •
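Parts (b) and (d) can be reproduced numerically; the grid search below is a sketch that stands in for the calculus used to locate the maximum of |f''| on [1, 1.1].

```python
import math

def f2(x):
    # f(x) = x^2 cos x  gives  f''(x) = (2 - x^2) cos x - 4 x sin x
    return (2 - x * x) * math.cos(x) - 4 * x * math.sin(x)

# Estimate M = max |f''| on [1, 1.1] on a fine grid.
M = max(abs(f2(1 + 0.1 * i / 1000)) for i in range(1001))

h = 0.1
bound = (h / 2) * M    # |E_F(f, h)| <= (h/2) M, from (8)
h_max = 2e-2 / M       # largest h with (h/2) M <= 10^-2
print(M, bound, h_max)
```

The grid maximum agrees with the text's M = 3.5630 at x = 1.1, and h_max matches the 0.0056 found by hand.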
The truncation error in the approximation (9) is roughly proportional to the
stepsize h used in its computation. The situation is made worse by the fact that
the round-off error in computing the approximate derivative (6) is roughly
proportional to 1/h. The overall error therefore is of the form

    E = ch + δ/h,

where c and δ are constants. This places a severe restriction on the accuracy that
can be achieved with this formula.
Example 0.4
Consider f(x) = x^2 cos x and x_0 = 1. To show the effect of rounding error, the
values f̃_i are obtained by rounding f(x_i) to seven significant digits. Compute the
total error for h = 0.1, and also find the optimum h.
Solution. Given |ε_i| ≤ (1/2) × 10^{-7} = δ and h = 0.1. To calculate the total
error, we use

    E(h) = (h/2) M + 10^{-t}/h,

where

    M = max_{1≤x≤1.1} |(2 − x^2) cos x − 4x sin x| = 3.5630.

Then

    E(0.1) = (0.1/2)(3.5630) + 10^{-7}/0.1 = 0.17815 + 0.000001 = 0.178151.

To find the optimum h, we use

    h_opt = sqrt((2/M) × 10^{-t}) = sqrt((2/3.5630) × 10^{-7}) = 0.00024,

the value of h below which the total error will begin to increase. Note that for

    h = 0.00024,  E(h) = 0.000844,
    h = 0.00015,  E(h) = 0.000934,
    h = 0.00001,  E(h) = 0.010018.
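The truncation/round-off trade-off in Example 0.4 is easy to tabulate; below is a sketch of the total-error model E(h) = (h/2)M + 10^{-t}/h using the values from the text.

```python
import math

M = 3.5630        # bound on |f''| over [1, 1.1], as given in the text
delta = 1e-7      # rounding level 10^(-t) with t = 7

def total_error(h):
    # Truncation part grows like h; round-off part grows like 1/h.
    return (h / 2) * M + delta / h

h_opt = math.sqrt(2 * delta / M)
print(h_opt, total_error(h_opt), total_error(0.1))
```

Evaluating E at h smaller than h_opt confirms that the total error starts to climb again, exactly as the table shows.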
Three-point Central Difference Formula

Consider the quadratic Lagrange interpolating polynomial p_2(x) at the three
distinct equally spaced points x_0, x_1, and x_2, with x_1 = x_0 + h and
x_2 = x_0 + 2h, for a small value of h. We have

    f(x) ≈ p_2(x) = [(x − x_1)(x − x_2) / ((x_0 − x_1)(x_0 − x_2))] f(x_0)
                  + [(x − x_0)(x − x_2) / ((x_1 − x_0)(x_1 − x_2))] f(x_1)
                  + [(x − x_0)(x − x_1) / ((x_2 − x_0)(x_2 − x_1))] f(x_2).

Now taking the derivative of the above expression with respect to x and setting
x = x_k, for k = 0, 1, 2, we have

    f'(x_k) ≈ [(2x_k − x_1 − x_2) / ((x_0 − x_1)(x_0 − x_2))] f(x_0)
            + [(2x_k − x_0 − x_2) / ((x_1 − x_0)(x_1 − x_2))] f(x_1)
            + [(2x_k − x_0 − x_1) / ((x_2 − x_0)(x_2 − x_1))] f(x_2).   (11)

Three different numerical differentiation formulas can be obtained from (11) by
putting x_k = x_0, x_k = x_1, or x_k = x_2, which are used to find the approximation
of the first derivative of a function, defined by the formula (1), at the given point.
Firstly, we take x_k = x_1; then the formula (11) becomes

    f'(x_1) ≈ [(2x_1 − x_1 − x_2) / ((x_0 − x_1)(x_0 − x_2))] f(x_0)
            + [(2x_1 − x_0 − x_2) / ((x_1 − x_0)(x_1 − x_2))] f(x_1)
            + [(2x_1 − x_0 − x_1) / ((x_2 − x_0)(x_2 − x_1))] f(x_2).

After simplifying and replacing x_0 = x_1 − h and x_2 = x_1 + h, we obtain

    f'(x_1) ≈ [f(x_1 + h) − f(x_1 − h)] / (2h) = D_h f(x_1).   (12)

It is called the three-point central-difference formula for finding the approximation
of the first derivative of a function at the given point x_1.
Note that the formulation of the formula (12) uses data points that are centered
about the point of interest x_1, even though x_1 itself does not appear on the right
side of (12).
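A quick numerical experiment (a sketch, using f(x) = sin x, which is not one of the worked examples) shows the gain in accuracy of (12) over (6): halving h roughly halves the forward-difference error but quarters the central-difference error.

```python
import math

def forward(f, x, h):
    return (f(x + h) - f(x)) / h            # two-point formula (6), error O(h)

def central(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)  # three-point formula (12), error O(h^2)

exact = math.cos(1.0)  # derivative of sin x at x = 1
for h in (0.1, 0.05):
    print(h, abs(forward(math.sin, 1.0, h) - exact),
             abs(central(math.sin, 1.0, h) - exact))
```

For the same h, the central difference is far more accurate, which matches its O(h^2) error term derived below.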
Error Formula of Central Difference Formula
The formula (12) alone is not very useful without an error estimate; therefore, let us
find the error involved in the formula (12) for numerical differentiation. Consider
the error term for the quadratic Lagrange polynomial, which can be written as

    f(x) − p_2(x) = [f'''(η(x)) / 3!] ∏_{i=0}^{2} (x − x_i),

for some unknown point η(x) ∈ (x_0, x_2). By taking the derivative of the above
equation with respect to x and then setting x = x_1, we have

    f'(x_1) − p_2'(x_1) = [(d/dx) f'''(η(x))]_{x=x_1} (x − x_0)(x − x_1)(x − x_2)/6
        + [f'''(η(x_1))/6] [(x − x_1)(x − x_2) + (x − x_0)(x − x_2) + (x − x_0)(x − x_1)]_{x=x_1}.

The first term on the right vanishes, since it contains the factor (x − x_1)
evaluated at x = x_1, and in the second term only the middle product survives, with
(x_1 − x_0)(x_1 − x_2) = (h)(−h) = −h^2. Therefore the error formula of the
central-difference formula (12) can be written as

    E_C(f, h) = f'(x_1) − D_h f(x_1) = −(h^2/6) f'''(η(x_1)),   (13)

where η(x_1) ∈ (x_1 − h, x_1 + h).
Hence the formula (12) can be written as

    f'(x_1) = [f(x_1 + h) − f(x_1 − h)] / (2h) − (h^2/6) f'''(η(x_1)),   (14)

where η(x_1) ∈ (x_1 − h, x_1 + h). The formula (14) is more useful than the formula
(12) because now, for a large class of functions, an error term is available along
with the basic numerical formula.
Example 0.5
Let f(x) = x^2 + cos x and h = 0.1. Then
(a) Compute the approximate value of f'(1) by using the three-point
central-difference formula (12).
(b) Compute the error bound for your approximation using (13).
(c) Compute the absolute error.
(d) What is the best maximum value of the stepsize h required to obtain the
approximate value of f'(1) correct to within 10^{-2}?
Solution. (a) Given x_1 = 1 and h = 0.1, using the formula (12) we have

    f'(1) ≈ [f(1 + 0.1) − f(1 − 0.1)] / (2(0.1)) = [f(1.1) − f(0.9)] / 0.2 = D_h f(1).

Then

    f'(1) ≈ [((1.1)^2 + cos(1.1)) − ((0.9)^2 + cos(0.9))] / 0.2
          ≈ (1.6636 − 1.4316) / 0.2 = 1.1600.
(b) By using the error formula (13), we have

    E_C(f, h) = −((0.1)^2 / 6) f'''(η(x_1)),  for η(x_1) ∈ (0.9, 1.1),

or

    |E_C(f, h)| = ((0.1)^2 / 6) |f'''(η(x_1))|,  for η(x_1) ∈ (0.9, 1.1).

The third derivative is

    f'''(η(x_1)) = sin η(x_1).

This cannot be computed exactly because η(x_1) is not known, but one can bound
the error by computing the largest possible value of |f'''(η(x_1))|. The bound on
|f'''| on [0.9, 1.1] is

    M = max_{0.9≤x≤1.1} |sin x| = 0.8912,

attained at x = 1.1. Thus, for |f'''(η(x_1))| ≤ M and h = 0.1, we get

    |E_C(f, h)| ≤ (0.01/6) M = (0.01/6)(0.8912) = 0.0015,

which is the maximum possible error in our approximation.
(c) Since the exact value of the derivative is f'(1) = 2 − sin 1 = 1.1585, the
absolute error |E| can be computed as follows:

    |E| = |f'(1) − D_h f(1)| = |1.1585 − 1.1600| = 0.0015.
(d) Since the required accuracy is 10^{-2}, we need

    |E_C(f, h)| = |−(h^2/6) f'''(η(x_1))| ≤ 10^{-2},

for η(x_1) ∈ (0.9, 1.1). Then

    (h^2/6) M ≤ 10^{-2}.

Solving for h and taking M = 0.8912, we obtain

    h^2 ≤ (6 × 10^{-2}) / 0.8912 = 0.0673,  and  h ≤ 0.2594.

So the best value of h is 0.25. •
Three-point Forward and Backward Difference Formulas
Similarly, the two other three-point formulas can be obtained by taking x_k = x_0
and x_k = x_2 in the formula (11). Firstly, taking x_k = x_0 in the formula (11)
and simplifying, we have

    f'(x_0) ≈ [−3f(x_0) + 4f(x_0 + h) − f(x_0 + 2h)] / (2h) = D_h f(x_0),   (15)

which is called the three-point forward-difference formula and is used to
approximate the formula (1) at the given point x = x_0. The error term of this
approximation formula can be obtained in a similar way as for the
central-difference formula, and it is

    E_F(f, h) = (h^2/3) f'''(η(x_0)),   (16)
where η(x_0) ∈ (x_0, x_0 + 2h). Similarly, taking x_k = x_2 in the formula (11) and
simplifying, we obtain

    f'(x_2) ≈ [f(x_2 − 2h) − 4f(x_2 − h) + 3f(x_2)] / (2h) = D_h f(x_2),   (17)

which is called the three-point backward-difference formula and is used to
approximate the formula (1) at the given point x = x_2. It has the error term of
the form

    E_B(f, h) = (h^2/3) f'''(η(x_2)),   (18)

where η(x_2) ∈ (x_2 − 2h, x_2).
Note that the backward-difference formula (17) can be obtained from the
forward-difference formula by replacing h with −h. Also, note that the error in
(12) is approximately half the error in (15) and (17). This is reasonable since in
using the central-difference formula (12) data are examined on both sides of the
point x_1, while (15) and (17) use data on only one side. Note also that in using
the central-difference formula, the function f(x) needs to be evaluated at only two
points, whereas the other two formulas need the values of the function at three
points. The formulas (15) and (17) are useful near the ends of the interval, since
information about the function outside the interval may not be available.
Otherwise, the central-difference formula (12) is superior to both the
forward-difference formula (15) and the backward-difference formula (17). The
central difference is the average of the forward difference and the backward
difference.
Example 0.6
Consider the following table of data points:

    x    | 1.0  1.6  2.0  2.3  2.8  3.0  3.9  4.0  4.8  5.0
    f(x) | 0.00 0.47 0.69 0.83 1.03 1.10 1.36 1.39 1.57 1.61

(a) Use a three-point formula with the smallest available value of h to find an
approximation of f'(3).
(b) The function tabulated is ln x; find the error bound and the absolute error for
the approximation of f'(3).
(c) What is the best maximum value of the stepsize h required to obtain the
approximate value of f'(3) within the accuracy 10^{-4}?
Solution. (a) For the given table of data points, all three three-point formulas are
available. For the central-difference formula we can take

    x_0 = x_1 − h = 2,  x_1 = 3,  x_2 = x_1 + h = 4,  giving h = 1;

for the forward-difference formula we can take

    x_0 = 3,  x_1 = x_0 + h = 3.9,  x_2 = x_0 + 2h = 4.8,  giving h = 0.9;

and for the backward-difference formula we can take

    x_0 = x_2 − 2h = 1.6,  x_1 = x_2 − h = 2.3,  x_2 = 3,  giving h = 0.7.

Since we know that the smaller the value of h, the better the approximation of the
derivative, for the given problem the backward-difference formula is the best choice
to approximate f'(3):

    f'(3) ≈ [f(1.6) − 4f(2.3) + 3f(3)] / (2(0.7))
          = [0.47 − 4(0.83) + 3(1.10)] / 1.4 = 0.3214.

(b) Using the error term of the backward-difference formula, we have

    E_B(f, h) = (h^2/3) f'''(η),  or  |E_B(f, h)| ≤ (h^2/3) |f'''(η)|.

Taking |f'''(η(x_2))| ≤ M = max_{1.6≤x≤3} |f'''(x)| = max_{1.6≤x≤3} |2/x^3| = 0.4883
and h = 0.7, we obtain

    |E_B(f, h)| ≤ ((0.7)^2 / 3)(0.4883) = 0.0798,

the required error bound for the approximation. The absolute error is

    |E| = |f'(3) − 0.3214| = |0.3333 − 0.3214| = 0.0119.
(c) Since the required accuracy is 10^{-4}, we need

    |E_B(f, h)| = (h^2/3) |f'''(η)| ≤ 10^{-4},

for η ∈ (1.6, 3). Then

    (h^2/3) M ≤ 10^{-4}.

Solving for h with M = 0.4883, we obtain

    h^2 ≤ (3 × 10^{-4}) / 0.4883 = 0.000614,  and so  h ≤ 0.0248,

which is the best maximum value of h. •
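Part (a) can be verified directly from the tabulated values (a sketch; the variable names are ours).

```python
# Tabulated values of f(x) = ln x, rounded to two decimals, at 1.6, 2.3, and 3.0.
f16, f23, f30 = 0.47, 0.83, 1.10
h = 0.7

# Three-point backward-difference formula (17) at x2 = 3.
approx = (f16 - 4 * f23 + 3 * f30) / (2 * h)
print(approx)  # about 0.3214; the exact value is f'(3) = 1/3
```

Note that the result is limited both by the formula's O(h^2) error and by the two-decimal rounding of the table entries.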
Example 0.7
Use the three-point formulas (12), (15), and (17) to approximate the first
derivative of the function f(x) = e^x at x = 2, taking h = 0.1. Also, compute the
error bound for each approximation.
Solution. Given f(x) = e^x and h = 0.1, then:
Central-difference formula:

    f'(2) ≈ [f(2.1) − f(1.9)] / (2h) = (e^{2.1} − e^{1.9}) / 0.2 = 7.4014.

Forward-difference formula:

    f'(2) ≈ [−3f(2) + 4f(2.1) − f(2.2)] / (2h) = (−3e^2 + 4e^{2.1} − e^{2.2}) / 0.2 = 7.3625.

Backward-difference formula:

    f'(2) ≈ [f(1.8) − 4f(1.9) + 3f(2)] / (2h) = (e^{1.8} − 4e^{1.9} + 3e^2) / 0.2 = 7.3662.

Since the exact value of the first derivative of the given function at x = 2 is
7.3891, the corresponding actual errors are −0.0123, 0.0266, and 0.0229,
respectively. This shows that the approximation obtained by using the
central-difference formula is closer to the exact value than those from the other
two difference formulas.
The error bounds for the approximations obtained by (12), (15), and (17) are as
follows:
Central-difference formula:

    E_C(f, h) = −(h^2/6) f'''(η(x_1)),  or  |E_C(f, h)| ≤ (h^2/6) |f'''(η(x_1))|.

Taking |f'''(η(x_1))| ≤ M = max_{1.9≤x≤2.1} |e^x| = e^{2.1} and h = 0.1, we obtain

    |E_C(f, h)| ≤ ((0.1)^2 / 6) e^{2.1} = 0.0136.

Forward-difference formula:

    E_F(f, h) = (h^2/3) f'''(η(x_0)),  or  |E_F(f, h)| ≤ (h^2/3) |f'''(η(x_0))|.

Taking |f'''(η(x_0))| ≤ M = max_{2≤x≤2.2} |e^x| = e^{2.2} and h = 0.1, we obtain

    |E_F(f, h)| ≤ ((0.1)^2 / 3) e^{2.2} = 0.0301.

Backward-difference formula:

    E_B(f, h) = (h^2/3) f'''(η(x_2)),  or  |E_B(f, h)| ≤ (h^2/3) |f'''(η(x_2))|.

Taking |f'''(η(x_2))| ≤ M = max_{1.8≤x≤2} |e^x| = e^2 and h = 0.1, we obtain

    |E_B(f, h)| ≤ ((0.1)^2 / 3) e^2 = 0.0246.

Thus we get the required error bounds for the approximations. •
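The three approximations and their ordering in Example 0.7 can be reproduced with a short sketch; the function names are ours.

```python
import math

def central3(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)                      # formula (12)

def forward3(f, x, h):
    return (-3 * f(x) + 4 * f(x + h) - f(x + 2 * h)) / (2 * h)  # formula (15)

def backward3(f, x, h):
    return (f(x - 2 * h) - 4 * f(x - h) + 3 * f(x)) / (2 * h)   # formula (17)

x, h = 2.0, 0.1
exact = math.exp(x)  # f'(2) = e^2 for f(x) = e^x
for name, rule in (("central", central3), ("forward", forward3),
                   ("backward", backward3)):
    print(name, rule(math.exp, x, h), exact - rule(math.exp, x, h))
```

The central-difference error is roughly half the size of the one-sided errors, in line with the error terms (13), (16), and (18).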
Second Derivative Numerical Formula

It is also possible to estimate second and higher-order derivatives numerically.
Formulas for higher derivatives can be found by differentiating the interpolating
polynomial repeatedly or by using Taylor's theorem. The two-point and
three-point formulas for the approximation of the first derivative of a function
were derived by differentiating the Lagrange interpolating polynomials for f(x),
but the derivation of higher-order formulas in this way can be tedious. Therefore,
we shall use Taylor's theorem here to find the three-point central-difference
formula for approximating the second derivative f''(x) of a function f(x) at a
given point x = x_1. The process used to obtain numerical formulas for the first
and second derivatives of a function can be readily extended to third- and
higher-order derivatives.
Three-point Central Difference Formula
To find the three-point central-difference formula for the approximation of the
second derivative of a function at a given point, we use the third-order Taylor's
theorem, expanding the function f(x) about the point x_1 and evaluating at
x_1 + h and x_1 − h. Then

    f(x_1 + h) = f(x_1) + h f'(x_1) + (1/2) h^2 f''(x_1) + (1/6) h^3 f'''(x_1)
               + (1/24) h^4 f^(4)(η_1(x)),

and

    f(x_1 − h) = f(x_1) − h f'(x_1) + (1/2) h^2 f''(x_1) − (1/6) h^3 f'''(x_1)
               + (1/24) h^4 f^(4)(η_2(x)),

where (x_1 − h) < η_2(x) < x_1 < η_1(x) < (x_1 + h).
Adding these equations and simplifying, we have

    f(x_1 + h) + f(x_1 − h) = 2f(x_1) + h^2 f''(x_1)
        + [f^(4)(η_1(x)) + f^(4)(η_2(x))] h^4 / 24.

Solving this equation for f''(x_1), we obtain

    f''(x_1) = [f(x_1 − h) − 2f(x_1) + f(x_1 + h)] / h^2
             − (h^2/24) [f^(4)(η_1(x)) + f^(4)(η_2(x))].

If f^(4) is continuous on [x_1 − h, x_1 + h], then by the Intermediate Value
Theorem the above equation can be written as

    f''(x_1) = [f(x_1 − h) − 2f(x_1) + f(x_1 + h)] / h^2 − (h^2/12) f^(4)(η(x_1)).
Then the formula

    f''(x_1) ≈ [f(x_1 − h) − 2f(x_1) + f(x_1 + h)] / h^2 = D_h^2 f(x_1),   (19)

is called the three-point central-difference formula for the approximation of the
second derivative of a function f(x) at the given point x = x_1.
Note that the error term of the three-point central-difference formula (19) for the
approximation of the second derivative of a function f(x) at the given point
x = x_1 is of the form

    E_C(f, h) = −(h^2/12) f^(4)(η(x_1)),   (20)

for some unknown point η(x_1) ∈ (x_1 − h, x_1 + h).
Example 0.8
Let f(x) = x ln x + x and x = 0.9, 1.3, 2.1, 2.5, 3.2. Find the approximate value
of f''(x) = 1/x at x = 1.9. Also, compute the absolute error.
Solution. Given f(x) = x ln x + x, one can easily find the derivatives of the
function:

    f'(x) = ln x + 2  and  f''(x) = 1/x.

To find the approximation of f''(x) = 1/x at the given point x_1 = 1.9, we use the
three-point formula (19):

    f''(x_1) ≈ [f(x_1 + h) − 2f(x_1) + f(x_1 − h)] / h^2 = D_h^2 f(x_1).

Taking the three equally spaced points 1.3, 1.9, and 2.5, giving h = 0.6, we have

    f''(1.9) ≈ [f(2.5) − 2f(1.9) + f(1.3)] / 0.36
             = [(2.5 ln 2.5 + 2.5) − 2(1.9 ln 1.9 + 1.9) + (1.3 ln 1.3 + 1.3)] / 0.36
             ≈ (4.7907 − 6.2391 + 1.6411) / 0.36 = 0.5353 = D_h^2 f(1.9).

Since the exact value of f''(1.9) is 1/1.9 = 0.5263, the absolute error |E| can be
computed as follows:

    |E| = |f''(1.9) − D_h^2 f(1.9)| = |0.5263 − 0.5353| = 0.0090.
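Example 0.8 in code (a sketch; the tiny discrepancy with the hand computation comes from rounding the intermediate values to four decimals there):

```python
import math

def central_second(f, x1, h):
    """Three-point central-difference approximation (19) to f''(x1)."""
    return (f(x1 - h) - 2 * f(x1) + f(x1 + h)) / (h * h)

f = lambda x: x * math.log(x) + x
approx = central_second(f, 1.9, 0.6)  # uses the points 1.3, 1.9, 2.5
exact = 1 / 1.9                       # f''(x) = 1/x
print(approx, abs(exact - approx))
```

The error here is dominated by the large stepsize h = 0.6 imposed by the tabulated points, consistent with the O(h^2) error term (20).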
Example 0.9
Let f(x) = x^2 + cos x. Then
(a) Compute the approximate value of f''(x) at x = 1, taking h = 0.1, using (19).
(b) Compute the error bound for your approximation using (20).
(c) Compute the absolute error.
(d) What is the best maximum value of the stepsize h required to obtain the
approximate value of f''(1) within the accuracy 10^{-2}?
Solution. (a) Given x_1 = 1 and h = 0.1, the formula (19) becomes

    f''(1) ≈ [f(1 + 0.1) − 2f(1) + f(1 − 0.1)] / (0.1)^2 = D_h^2 f(1),

or

    f''(1) ≈ [f(1.1) − 2f(1) + f(0.9)] / 0.01
           = [((1.1)^2 + cos(1.1)) − 2(1^2 + cos 1) + ((0.9)^2 + cos(0.9))] / 0.01
           ≈ (1.6636 − 3.0806 + 1.4316) / 0.01 ≈ 1.4600 = D_h^2 f(1).
(b) To compute the error bound for the approximation in part (a), we use the
formula (20):

    E_C(f, h) = −(h^2/12) f^(4)(η(x_1)),  for η(x_1) ∈ (0.9, 1.1),

or

    |E_C(f, h)| = (h^2/12) |f^(4)(η(x_1))|,  for η(x_1) ∈ (0.9, 1.1).

The fourth derivative of the given function at η(x_1) is

    f^(4)(η(x_1)) = cos η(x_1),

and it cannot be computed exactly because η(x_1) is not known. But one can
bound the error by computing the largest possible value of |f^(4)(η(x_1))|. The
bound on |f^(4)| on the interval [0.9, 1.1] is

    M = max_{0.9≤x≤1.1} |cos x| = 0.6216,

attained at x = 0.9 (cos x decreases on this interval). Thus, for
|f^(4)(η(x))| ≤ M, we have

    |E_C(f, h)| ≤ (h^2/12) M.

Taking M = 0.6216 and h = 0.1, we obtain

    |E_C(f, h)| ≤ (0.01/12)(0.6216) = 0.0005,

which is the maximum possible error in our approximation.
(c) Since the exact value of f''(1) is

    f''(1) = 2 − cos 1 = 1.4597,

the absolute error |E| can be computed as follows:

    |E| = |f''(1) − D_h^2 f(1)| = |1.4597 − 1.4600| = 0.0003.

(d) Since the required accuracy is 10^{-2}, we need

    |E_C(f, h)| = |−(h^2/12) f^(4)(η(x_1))| ≤ 10^{-2},

for η(x_1) ∈ (0.9, 1.1). Then, for |f^(4)(η(x_1))| ≤ M, we have

    (h^2/12) M ≤ 10^{-2}.

Solving for h^2 with M = 0.6216, we obtain

    h^2 ≤ (12 × 10^{-2}) / 0.6216 = 0.1931,

and it gives the value of h as

    h ≤ 0.4394.

Thus the best maximum value of h is 0.43. •
Formulas for Computing Derivatives
For convenience, we collect below some useful central-difference,
forward-difference, and backward-difference formulas for computing derivatives of
different orders.
Central Difference Formulas
The central-difference formula (12) for the first derivative f'(x_1) of a function
requires that the function can be computed at points lying on both sides of x_1.
The Taylor series can be used to obtain central-difference formulas for higher
derivatives. The most useful are those of order O(h^2) and O(h^4); writing
f_j = f(x_0 + jh), they are given as follows:
    f'(x_0)    = (f_1 − f_{−1}) / (2h) + O(h^2)
    f'(x_0)    = (−f_2 + 8f_1 − 8f_{−1} + f_{−2}) / (12h) + O(h^4)
    f''(x_0)   = (f_1 − 2f_0 + f_{−1}) / h^2 + O(h^2)
    f''(x_0)   = (−f_2 + 16f_1 − 30f_0 + 16f_{−1} − f_{−2}) / (12h^2) + O(h^4)
    f'''(x_0)  = (f_2 − 2f_1 + 2f_{−1} − f_{−2}) / (2h^3) + O(h^2)
    f'''(x_0)  = (−f_3 + 8f_2 − 13f_1 + 13f_{−1} − 8f_{−2} + f_{−3}) / (8h^3) + O(h^4)
    f^(4)(x_0) = (f_2 − 4f_1 + 6f_0 − 4f_{−1} + f_{−2}) / h^4 + O(h^2)
    f^(4)(x_0) = (−f_3 + 12f_2 − 39f_1 + 56f_0 − 39f_{−1} + 12f_{−2} − f_{−3}) / (6h^4) + O(h^4)
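The O(h^2) and O(h^4) first-derivative formulas from the table can be compared in code (a sketch, using f(x) = sin x as a test function, which is ours):

```python
import math

def d1_order2(f, x, h):
    # f'(x) = (f1 - f_{-1}) / (2h) + O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_order4(f, x, h):
    # f'(x) = (-f2 + 8 f1 - 8 f_{-1} + f_{-2}) / (12h) + O(h^4)
    return (-f(x + 2 * h) + 8 * f(x + h)
            - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

exact = math.cos(1.0)  # derivative of sin x at x = 1
h = 0.1
e2 = abs(d1_order2(math.sin, 1.0, h) - exact)
e4 = abs(d1_order4(math.sin, 1.0, h) - exact)
print(e2, e4)  # the O(h^4) formula is far more accurate for the same h
```

The higher-order formula costs two extra function evaluations but gains several digits of accuracy at the same stepsize.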
Summary

In this lecture, we . . .

- found approximate values of derivatives (first- and second-order) and of
antiderivatives (definite integrals only).
