NA10 04 Notes

Chapter 4 of 'Numerical Analysis' discusses numerical differentiation and integration techniques, including various formulas for approximating derivatives and integrals when explicit forms are unavailable. It covers Richardson's extrapolation for improving derivative approximations, as well as Newton-Cotes quadrature rules for numerical integration. Additionally, the chapter addresses composite numerical integration, Romberg integration, adaptive quadrature methods, and handling improper integrals in numerical contexts.


Numerical Analysis (10th ed)

R.L. Burden, J. D. Faires, A. M. Burden

Chapter 4
Numerical Differentiation and Integration

Chapter 4.1: Numerical Differentiation*


Although various techniques for finding the derivative of a function are learned in beginning calculus,
sometimes a function is so complicated that an explicit form for its derivative is not evident with the
techniques we have learned in the past. Further, in reality, as discussed in previous sections, we may not
know the function at all; all we have available are function values at specific points. We might also only
have values of the derivative at these points. In situations such as these, numerical differentiation is key.

From calculus, the derivative of $f$ at $x_0$ is defined as
$$f'(x_0) = \lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h}.$$
Certainly, for small enough values of $h$, the derivative can be approximated by this difference quotient. Thus, our
first attempt at an approximation formula for the derivative of $f$ at $x_0$ is
$$f'(x_0) = \frac{f(x_0 + h) - f(x_0)}{h} - \frac{h}{2} f''(\xi), \qquad \xi \text{ between } x_0 \text{ and } x_0 + h,$$
where the last term represents the truncation error. Although we cannot compute this
error, we can find a bound on it by maximizing $|f''|$ on $[x_0, x_0 + h]$.

The formula for $f'(x_0)$ is called a forward-difference formula if $h > 0$ and a backward-difference
formula if $h < 0$.

If we increase the number of evaluation points, we obtain other numerical differentiation formulas, each
with its own error term.

THREE-POINT ENDPOINT FORMULA:

$$f'(x_0) = \frac{1}{2h}\left[-3 f(x_0) + 4 f(x_0 + h) - f(x_0 + 2h)\right] + \frac{h^2}{3} f^{(3)}(\xi), \qquad \xi \text{ between } x_0 \text{ and } x_0 + 2h$$

THREE-POINT MIDPOINT FORMULA:

$$f'(x_0) = \frac{1}{2h}\left[f(x_0 + h) - f(x_0 - h)\right] - \frac{h^2}{6} f^{(3)}(\xi_1), \qquad \xi_1 \text{ between } x_0 - h \text{ and } x_0 + h$$

FIVE-POINT MIDPOINT FORMULA:

$$f'(x_0) = \frac{1}{12h}\left[f(x_0 - 2h) - 8 f(x_0 - h) + 8 f(x_0 + h) - f(x_0 + 2h)\right] + \frac{h^4}{30} f^{(5)}(\xi), \qquad \xi \text{ between } x_0 - 2h \text{ and } x_0 + 2h$$

FIVE-POINT ENDPOINT FORMULA:

$$f'(x_0) = \frac{1}{12h}\left[-25 f(x_0) + 48 f(x_0 + h) - 36 f(x_0 + 2h) + 16 f(x_0 + 3h) - 3 f(x_0 + 4h)\right] + \frac{h^4}{5} f^{(5)}(\xi), \qquad \xi \text{ between } x_0 \text{ and } x_0 + 4h$$

A formula for the second derivative can also be derived using Taylor polynomial expansions.

SECOND DERIVATIVE MIDPOINT FORMULA:

$$f''(x_0) = \frac{1}{h^2}\left[f(x_0 - h) - 2 f(x_0) + f(x_0 + h)\right] - \frac{h^2}{12} f^{(4)}(\xi), \qquad \xi \text{ between } x_0 - h \text{ and } x_0 + h.$$

Not only do we need to be cognizant of the truncation error in each of the above formulas, but we must
also be aware of the round-off error produced by the computations (as was pointed out in Chapter 2).
Interestingly, as h decreases, the truncation error decreases, but the round-off error increases.
So it is important to find a suitably sized h that balances the two types of error.
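
To see this tradeoff numerically, here is a minimal sketch (assuming the test case f(x) = sin x at x0 = 1, which is not from the notes) that tabulates the error of the forward-difference and three-point midpoint formulas as h shrinks; the error falls at first and then grows once round-off dominates:

```python
from math import sin, cos

f, df = sin, cos      # assumed test function and its exact derivative
x0 = 1.0

def forward_diff(f, x0, h):
    """O(h) forward-difference approximation of f'(x0)."""
    return (f(x0 + h) - f(x0)) / h

def midpoint3(f, x0, h):
    """O(h^2) three-point midpoint approximation of f'(x0)."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

print(f"{'h':>8} {'forward error':>15} {'midpoint error':>15}")
for k in range(1, 11):
    h = 10.0 ** (-k)
    print(f"{h:8.0e} "
          f"{abs(forward_diff(f, x0, h) - df(x0)):15.3e} "
          f"{abs(midpoint3(f, x0, h) - df(x0)):15.3e}")
```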

Chapter 4.2: Richardson's Extrapolation*


The goal of extrapolation is to find an inexpensive way to combine low-order approximations so that the
result is an approximation whose truncation error is of higher order. Richardson's extrapolation makes this
possible: we can take ANY method whose truncation error can be expressed as a sum of powers of h and
generate new columns of approximations that have higher-order truncation error. The table below shows the
most useful case, in which the truncation error contains only even powers of h (as it does for the midpoint
difference formulas of Section 4.1 and for the Composite Trapezoidal rule used in Section 4.5). Given a
stepsize h and either the function f(x) or values of the function at specific x values, let $H_1(h)$ denote
the $O(h^2)$ approximation to $f'(x_0)$ obtained from such a method; each new column is generated by
$$H_j(h) = H_{j-1}\!\left(\tfrac{h}{2}\right) + \frac{H_{j-1}\!\left(\tfrac{h}{2}\right) - H_{j-1}(h)}{4^{\,j-1} - 1}:$$

Stepsize   O(h^2)      O(h^4)                                      O(h^6)                                       O(h^8)
h          H_1(h)
h/2        H_1(h/2)    H_2(h) = H_1(h/2) + [H_1(h/2) - H_1(h)]/3
h/4        H_1(h/4)    H_2(h/2)                                    H_3(h) = H_2(h/2) + [H_2(h/2) - H_2(h)]/15
h/8        H_1(h/8)    H_2(h/4)                                    H_3(h/2)                                     H_4(h) = H_3(h/2) + [H_3(h/2) - H_3(h)]/63

Each entry in the first column is a low-order approximation to f'(x_0); each entry to the right has a truncation error two orders higher than the column before it.
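
A minimal sketch of this tableau, assuming (as an illustration not taken from the notes) that H_1 is the O(h^2) three-point midpoint formula applied to f(x) = sin x at x_0 = 1:

```python
from math import sin, cos

def midpoint3(f, x0, h):
    """O(h^2) three-point midpoint approximation of f'(x0)."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

def richardson(f, x0, h, levels=4):
    """Richardson extrapolation table; row i uses stepsize h / 2**i,
    and column j (0-based) is an O(h^(2j+2)) approximation to f'(x0)."""
    table = []
    for i in range(levels):
        row = [midpoint3(f, x0, h / 2**i)]
        for j in range(1, i + 1):
            # new column: previous column value plus correction divided by 4**j - 1
            row.append(row[j - 1] + (row[j - 1] - table[i - 1][j - 1]) / (4**j - 1))
        table.append(row)
    return table

for row in richardson(sin, 1.0, 0.4):
    print("  ".join(f"{v:.10f}" for v in row))
print("exact f'(1) =", cos(1.0))
```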

Chapter 4.3: Elements of Numerical Integration*


Just as we found that we might need approximations to derivatives, we may also need to evaluate a
definite integral that has no explicit antiderivative or whose antiderivative is not easily obtained. The
basic method for approximating such a definite integral is called numerical quadrature: the integral
$\int_a^b f(x)\,dx$ is replaced by a sum of the form $\sum_{i=0}^{n} a_i f(x_i)$. The methods discussed
in this section are rooted in the interpolation polynomials from Chapter 3 of your text. Methods of this
type are called Newton-Cotes quadrature rules because they involve evaluating the integrand at equally
spaced points. There are two types of Newton-Cotes quadrature rules: open and closed.

This section opens with the development of two closed quadrature formulas (rules), the Trapezoidal
rule and Simpson's rule. These formulas are derived by integrating the first- and second-degree Lagrange
interpolating polynomials, respectively, built on distinct nodes $x_0, x_1, \ldots, x_n$ in the interval $[a, b]$.

When reading the derivation in the text, remember that $f(x_i)$ is a constant and can therefore be pulled
out of the integral; the integral of the corresponding Lagrange basis polynomial becomes the coefficient $a_i$.

Since the derivation is developed extensively in the text, we will only provide the rules here along with some
additional comments.

COMMON CLOSED NEWTON-COTES FORMULAS (here $h = (b - a)/n$ and $x_i = a + ih$, so $x_0 = a$ and $x_n = b$):

n = 1: TRAPEZOIDAL RULE:

$$\int_{x_0}^{x_1} f(x)\,dx = \frac{h}{2}\left[f(x_0) + f(x_1)\right] - \frac{h^3}{12} f''(\xi), \qquad x_0 < \xi < x_1$$

n = 2: SIMPSON'S RULE:

$$\int_{x_0}^{x_2} f(x)\,dx = \frac{h}{3}\left[f(x_0) + 4 f(x_1) + f(x_2)\right] - \frac{h^5}{90} f^{(4)}(\xi), \qquad x_0 < \xi < x_2$$

n = 3: SIMPSON'S THREE-EIGHTHS RULE:

$$\int_{x_0}^{x_3} f(x)\,dx = \frac{3h}{8}\left[f(x_0) + 3 f(x_1) + 3 f(x_2) + f(x_3)\right] - \frac{3h^5}{80} f^{(4)}(\xi), \qquad x_0 < \xi < x_3$$

n = 4:

$$\int_{x_0}^{x_4} f(x)\,dx = \frac{2h}{45}\left[7 f(x_0) + 32 f(x_1) + 12 f(x_2) + 32 f(x_3) + 7 f(x_4)\right] - \frac{8h^7}{945} f^{(6)}(\xi), \qquad x_0 < \xi < x_4$$
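
A quick sketch of a single application of each closed rule; the test integral $\int_0^{\pi/4} \sin x\,dx$ (exact value $1 - \sqrt{2}/2$) is an assumed example, not one from the notes:

```python
from math import sin, pi, sqrt

a, b = 0.0, pi / 4
f = sin
exact = 1 - sqrt(2) / 2          # exact value of the assumed test integral

def closed_newton_cotes(f, a, b, n):
    """Single application of the closed Newton-Cotes rule with n + 1 nodes (n = 1..4)."""
    weights = {1: [1, 1], 2: [1, 4, 1], 3: [1, 3, 3, 1], 4: [7, 32, 12, 32, 7]}[n]
    scale = {1: 1 / 2, 2: 1 / 3, 3: 3 / 8, 4: 2 / 45}[n]
    h = (b - a) / n
    return scale * h * sum(w * f(a + i * h) for i, w in enumerate(weights))

for n in range(1, 5):
    approx = closed_newton_cotes(f, a, b, n)
    print(f"n = {n}: approx = {approx:.8f}, error = {abs(approx - exact):.2e}")
```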

To measure the precision (degree of accuracy) of a quadrature formula, we simply look for the largest
positive integer n such that the formula is exact for $x^k$ for each k = 0, 1, ..., n.

For example, suppose we have a quadrature formula given by

$$\int_{-1}^{1} f(x)\,dx \approx f\!\left(-\tfrac{2}{3}\right) + f\!\left(\tfrac{2}{3}\right).$$

We compare as follows:

f(x)       Integral result                    Quadrature result
x^0 = 1    ∫_{-1}^{1} 1 dx = 2                f(-2/3) + f(2/3) = 1 + 1 = 2
x^1        ∫_{-1}^{1} x dx = 0                f(-2/3) + f(2/3) = -2/3 + 2/3 = 0
x^2        ∫_{-1}^{1} x^2 dx = 2/3            f(-2/3) + f(2/3) = 4/9 + 4/9 = 8/9

Since the last power of x for which the integral result equals the quadrature result is x^1, the precision of this
quadrature formula is one.
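
The same bookkeeping can be automated; a small sketch using exact rational arithmetic for the rule above:

```python
from fractions import Fraction

node = Fraction(2, 3)                         # the rule uses nodes -2/3 and 2/3 with weights 1

def quadrature(k):
    """Apply the rule f(-2/3) + f(2/3) to f(x) = x**k."""
    return (-node) ** k + node ** k

def exact(k):
    """Exact value of the integral of x**k over [-1, 1]."""
    return (Fraction(1) ** (k + 1) - Fraction(-1) ** (k + 1)) / (k + 1)

for k in range(4):
    print(k, exact(k), quadrature(k), exact(k) == quadrature(k))
# Agreement fails first at k = 2, so the degree of precision is 1.
```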

COMMON OPEN NEWTON-COTES FORMULAS (these do not include the endpoints of [a, b] as nodes;
here $h = (b - a)/(n + 2)$ and $x_i = a + (i + 1)h$, so $a = x_{-1}$ and $b = x_{n+1}$). A short numerical
sketch follows the formulas below.

n = 0: MIDPOINT RULE:

$$\int_{x_{-1}}^{x_1} f(x)\,dx = 2h\, f(x_0) + \frac{h^3}{3} f''(\xi), \qquad x_{-1} < \xi < x_1$$

n = 1:

$$\int_{x_{-1}}^{x_2} f(x)\,dx = \frac{3h}{2}\left[f(x_0) + f(x_1)\right] + \frac{3h^3}{4} f''(\xi), \qquad x_{-1} < \xi < x_2$$

n = 2:

$$\int_{x_{-1}}^{x_3} f(x)\,dx = \frac{4h}{3}\left[2 f(x_0) - f(x_1) + 2 f(x_2)\right] + \frac{14h^5}{45} f^{(4)}(\xi), \qquad x_{-1} < \xi < x_3$$

n = 3:

$$\int_{x_{-1}}^{x_4} f(x)\,dx = \frac{5h}{24}\left[11 f(x_0) + f(x_1) + f(x_2) + 11 f(x_3)\right] + \frac{95h^5}{144} f^{(4)}(\xi), \qquad x_{-1} < \xi < x_4$$
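
As promised above, a minimal sketch of a single application of each open rule, reusing the assumed test integral $\int_0^{\pi/4} \sin x\,dx$:

```python
from math import sin, pi, sqrt

a, b = 0.0, pi / 4
f = sin
exact = 1 - sqrt(2) / 2                      # exact value of the assumed test integral

def open_newton_cotes(f, a, b, n):
    """Single application of the open Newton-Cotes rule with nodes x_0..x_n (n = 0..3);
    the endpoints a = x_{-1} and b = x_{n+1} are not used."""
    weights = {0: [2],
               1: [3 / 2, 3 / 2],
               2: [8 / 3, -4 / 3, 8 / 3],
               3: [55 / 24, 5 / 24, 5 / 24, 55 / 24]}[n]
    h = (b - a) / (n + 2)
    return h * sum(w * f(a + (i + 1) * h) for i, w in enumerate(weights))

for n in range(4):
    approx = open_newton_cotes(f, a, b, n)
    print(f"n = {n}: approx = {approx:.8f}, error = {abs(approx - exact):.2e}")
```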

Chapter 4.4: Composite Numerical Integration*


The Newton-Cotes formulas that were discussed in the last section are not suitable for use over large
intervals of integration for the reasons specified in the text. This brings us to a piecewise approach to
numerical integration that uses the low-order Newton-Cotes formulas.

As an example, for an approximation to $\int_0^4 e^x\,dx$, the text compares the following:
• Simpson's rule on [0,4]
• Simpson's rule on each of the intervals [0,2] and [2,4]
• Simpson's rule on each of the four intervals [0,1], [1,2], [2,3], and [3,4]

What we found was that as the number of subintervals increased, the accuracy of our approximation
increased as well. Thus, the procedure was generalized as follows: subdivide the interval [a, b] into n
subintervals, and apply Simpson's rule on each consecutive pair of subintervals. We also found that the
same process could be used with the Trapezoidal rule and the Midpoint rule, giving the Composite
Simpson's rule, the Composite Trapezoidal rule, and the Composite Midpoint rule.

In each case, the error formula for the corresponding rule can be used to find a bound on the error. An
important property shared by all composite integration techniques is stability with respect to round-off
error.
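
A minimal sketch of the three composite rules applied to the example integral $\int_0^4 e^x\,dx$ from the text (exact value $e^4 - 1$); the node conventions follow the single-rule formulas above:

```python
from math import exp

def composite_trapezoid(f, a, b, n):
    """Composite Trapezoidal rule with n subintervals."""
    h = (b - a) / n
    return h * (f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n))) / 2

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return h * total / 3

def composite_midpoint(f, a, b, n):
    """Composite Midpoint rule with n (even) subintervals: h = (b-a)/(n+2),
    nodes x_0, x_2, ..., x_n where x_j = a + (j+1)h."""
    h = (b - a) / (n + 2)
    return 2 * h * sum(f(a + (2 * j + 1) * h) for j in range(n // 2 + 1))

f, a, b = exp, 0.0, 4.0
exact = exp(4.0) - 1.0
for n in (4, 8, 16, 32):
    print(f"n = {n:2d}  "
          f"trap err = {abs(composite_trapezoid(f, a, b, n) - exact):.2e}  "
          f"simp err = {abs(composite_simpson(f, a, b, n) - exact):.2e}  "
          f"mid err = {abs(composite_midpoint(f, a, b, n) - exact):.2e}")
```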

Chapter 4.5: Romberg Integration*


This section illustrates how Richardson's extrapolation (discussed in Section 4.2) can be used on the
results of the Composite Trapezoidal rule to obtain high-accuracy approximations with little
computational cost. Table 4.10 shows the process in notational form. The second column of values contains
the approximations to the integral using the Composite Trapezoidal rule. Since the Composite
Trapezoidal rule is $O(h^2)$, the Richardson's extrapolation $O(h^4)$ formulas from the Section 4.2 table
above are applied to that column to generate the third column. Richardson's extrapolation is then applied
to the third column to generate the fourth column, and so on.

All computations are shown in the example so we will not go into greater detail here.
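
A minimal sketch of Romberg integration: the first column is the Composite Trapezoidal rule on 1, 2, 4, ... subintervals, and later columns apply the extrapolation formula from Section 4.2 (the test integral $\int_0^{\pi} \sin x\,dx$, exact value 2, is an assumed example):

```python
from math import sin, pi

def romberg(f, a, b, levels=5):
    """Romberg table R: R[i][0] is the Composite Trapezoidal rule with 2**i subintervals;
    R[i][j] comes from Richardson's extrapolation of column j - 1."""
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = h * (f(a) + f(b)) / 2
    for i in range(1, levels):
        h /= 2
        # refine the trapezoidal value, reusing the previous row and adding the new midpoints
        R[i][0] = R[i - 1][0] / 2 + h * sum(f(a + (2 * k - 1) * h)
                                            for k in range(1, 2 ** (i - 1) + 1))
        for j in range(1, i + 1):
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R

for i, row in enumerate(romberg(sin, 0.0, pi)):
    print("  ".join(f"{v:.8f}" for v in row[: i + 1]))
```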

Chapter 4.6: Adaptive Quadrature Methods

Chapter 4.7: Gaussian Quadrature

Chapter 4.8: Multiple Integrals

Chapter 4.9: Improper Integrals*


Improper integrals are definite integrals where one or both limits of integration are infinite or where the
integrand approaches infinity (has a singularity) at one or more points in the interval of integration. We often
encounter improper integrals in physics and in probability and statistics applications, so it is important to
know how to deal with them numerically. In calculus, you took a limit from the left or right of the
offending x value to see what happened to the function values as it was approached.
In numerical integration, the rules of integral approximation must be modified to deal with the singularity.

LEFT ENDPOINT SINGULARITY:

• Given: $f(x) = \dfrac{g(x)}{(x - a)^p}$ where $0 < p < 1$ and $g$ is continuous on $[a, b]$.
• Construct the fourth-degree Taylor polynomial, $P_4$, for $g$ about $x = a$ (assuming that $g \in C^5[a, b]$).
• Evaluate $\displaystyle\int_a^b \frac{P_4(x)}{(x - a)^p}\,dx$ exactly (each term integrates in closed form).
• Use the Composite Simpson's rule to approximate $\displaystyle\int_a^b G(x)\,dx$, where the function values of $G(x)$ are determined by
$$G(x) = \begin{cases} \dfrac{g(x) - P_4(x)}{(x - a)^p}, & a < x \le b, \\[1ex] 0, & x = a. \end{cases}$$
• Find $\displaystyle\int_a^b \frac{P_4(x)}{(x - a)^p}\,dx + \int_a^b G(x)\,dx$ as the approximation to $\displaystyle\int_a^b f(x)\,dx$ (a sketch of this procedure is given below).
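
A minimal sketch of the procedure, assuming the concrete example $\int_0^1 e^x/\sqrt{x}\,dx$ (so g(x) = e^x, a = 0, p = 1/2; this particular integrand is an assumption used only for illustration):

```python
from math import exp, factorial

p = 0.5                                       # assumed strength of the singularity at a = 0
P4 = lambda x: sum(x ** k / factorial(k) for k in range(5))   # Taylor polynomial of e^x about 0

# 1) The singular piece integrates in closed form:
#    integral of x**k / x**p over [0, 1] is 1 / (k - p + 1)
singular_part = sum(1 / factorial(k) * 1 / (k - p + 1) for k in range(5))

# 2) The remainder G(x) = (g(x) - P4(x)) / x**p is smooth, with G(0) defined to be 0,
#    and is handled by the Composite Simpson's rule.
def G(x):
    return 0.0 if x == 0.0 else (exp(x) - P4(x)) / x ** p

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return h * total / 3

smooth_part = composite_simpson(G, 0.0, 1.0, 16)
print(singular_part + smooth_part)            # roughly 2.925
```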

RIGHT ENDPOINT SINGULARITY:

• Proceed as in the left endpoint singularity case but expand the Taylor polynomial in terms of the right endpoint b instead of the left endpoint a.
• OR make the substitution $z = -x$, $dz = -dx$.
• Evaluate $\displaystyle\int_a^b f(x)\,dx = \int_{-b}^{-a} f(-z)\,dz$, which has a singularity at the left endpoint, and proceed as in the left endpoint singularity case.

INFINITE SINGULARITY (infinite limit of integration):

• Given: $\displaystyle\int_a^\infty f(x)\,dx$ with $a > 0$, where $f$ behaves like $\dfrac{1}{x^p}$ with $p > 1$ as $x \to \infty$, so that the integral converges.
• Make the substitution $t = x^{-1}$, $dt = -x^{-2}\,dx$.
• Use a quadrature formula to approximate $\displaystyle\int_a^\infty f(x)\,dx = \int_0^{1/a} t^{-2} f\!\left(\tfrac{1}{t}\right) dt$; if the transformed integrand has a singularity at the left endpoint $t = 0$, handle it as in the left endpoint singularity case (a sketch follows below).
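
A minimal sketch of the substitution, assuming the test integral $\int_1^\infty \frac{dx}{1 + x^2} = \pi/4$ (an example chosen for illustration; here the transformed integrand happens to have no singularity at t = 0, so Composite Simpson's rule applies directly):

```python
from math import pi

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return h * total / 3

f = lambda x: 1.0 / (1.0 + x * x)     # assumed integrand on [1, infinity)
a = 1.0

def transformed(t):
    """t**(-2) * f(1/t); at t = 0 the limit is lim_{x->inf} x**2 f(x) = 1 for this f."""
    if t == 0.0:
        return 1.0
    return f(1.0 / t) / t ** 2

approx = composite_simpson(transformed, 0.0, 1.0 / a, 32)
print(approx, pi / 4)                 # the two values agree closely
```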
