
STEP 29

NUMERICAL DIFFERENTIATION

Finite differences

In Analysis, we are usually able to obtain the derivative of a function by the methods of elementary calculus. However, if a function is very complicated or known only from values in a table, it may be necessary to resort to numerical differentiation.

Procedure

Formulae for numerical differentiation may easily be obtained by differentiating interpolation polynomials. The essential idea is that the derivatives f', f'', . . . of a function are represented by the derivatives P'n, P''n, . . . of the interpolating polynomial Pn. For example, differentiation of Newton's forward difference formula (cf. STEP 22)

f(x) ≈ Pn(x) = f0 + θΔf0 + [θ(θ − 1)/2!]Δ²f0 + [θ(θ − 1)(θ − 2)/3!]Δ³f0 + · · · ,   θ = (x − x0)/h,

with respect to x, since dθ/dx = 1/h, etc., yields formally:

f'(x) ≈ (1/h)[Δf0 + ((2θ − 1)/2)Δ²f0 + ((3θ² − 6θ + 2)/6)Δ³f0 + · · ·],
f''(x) ≈ (1/h²)[Δ²f0 + (θ − 1)Δ³f0 + · · ·].

In particular, if we set θ = 0, we arrive at formulae for derivatives at the tabular points {xj}:

f'(x0) ≈ (1/h)[Δf0 − (1/2)Δ²f0 + (1/3)Δ³f0 − (1/4)Δ⁴f0 + · · ·],
f''(x0) ≈ (1/h²)[Δ²f0 − Δ³f0 + (11/12)Δ⁴f0 − · · ·].

If we set θ = 1/2, we have a relatively accurate formula at half-way points (without second differences):

f'(x0 + h/2) ≈ (1/h)[Δf0 − (1/24)Δ³f0 + · · ·];

if we set θ = 1 in the formula for the second derivative, we find (without third differences):

f''(x0 + h) ≈ (1/h²)[Δ²f0 − (1/12)Δ⁴f0 + · · ·],

i.e., a formula for the second derivative at the next point.

Note that, if one retains only one term, one arrives at the well-known formulae:

f'(x0) ≈ (f1 − f0)/h = Δf0/h,   f''(x0) ≈ (f2 − 2f1 + f0)/h² = Δ²f0/h².
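As a minimal sketch of these one-term formulae (the test function e^x, the point x0 = 0.1 and the step size are illustrative choices, not taken from the text):

    # Sketch: one-term forward-difference estimates of f'(x0) and f''(x0).
    import math

    def forward_first(f, x0, h):
        # f'(x0) ~ (f(x0 + h) - f(x0)) / h
        return (f(x0 + h) - f(x0)) / h

    def forward_second(f, x0, h):
        # f''(x0) ~ (f(x0 + 2h) - 2 f(x0 + h) + f(x0)) / h**2
        return (f(x0 + 2 * h) - 2 * f(x0 + h) + f(x0)) / h ** 2

    print(forward_first(math.exp, 0.1, 0.05))   # ~1.133 (exact e**0.1 = 1.10517)
    print(forward_second(math.exp, 0.1, 0.05))  # ~1.16, cruder still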

Errors in numerical differentiation

It must be recognized that numerical differentiation is subject to considerable error; the basic difficulty is that, while f(x) − Pn(x) may be small, the differences f'(x) − Pn'(x), f''(x) − Pn''(x), etc. may be very large. In geometrical terms, although two curves may be close together, they may differ considerably in slope, variation in slope, etc. (Figure 14).

FIGURE 14 Interpolating f(x)


It should also be noted that all these formulae involve division of a combination of differences (which are prone to loss of significance or cancellation errors, especially if h is small) by a positive power of h. Consequently, if we want to keep round-off errors down, we should use a large value of h. On the other hand, it can be shown (see Exercise 3 below) that the truncation error is approximately proportional to h^p, where p is a positive integer, so that h must be sufficiently small for the truncation error to be tolerable. We are in a cleft stick and must compromise with some optimum choice of h.
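A short sketch of this trade-off (the integrand e^x, the point x = 0.1 and the step sizes are illustrative choices):

    # Sketch: round-off versus truncation error in the one-term forward-
    # difference estimate of f'(x) for f(x) = exp(x) at x = 0.1.
    import math

    x, exact = 0.1, math.exp(0.1)
    for h in (1e-1, 1e-4, 1e-8, 1e-12):
        approx = (math.exp(x + h) - math.exp(x)) / h
        print(f"h = {h:.0e}   error = {abs(approx - exact):.3e}")
    # The error first decreases with h (truncation dominates) and then grows
    # again as cancellation in the numerator dominates (round-off).

The optimum h lies between these two extremes.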

In brief, large errors may occur in numerical differentiation based on direct polynomial approximation, so that an error check is always advisable. There are alternative methods, based on polynomials, which use more sophisticated procedures such as least-squares or mini-max, and other alternatives involving other basis functions (for example, trigonometric functions). However, the best policy is probably to use numerical differentiation only when it cannot be avoided!

Example

We will estimate the values of f'(0.1) and f''(0.1) for f(x) = e^x, using the data in STEP 20.

If we use the above formulae with θ = 0, we obtain estimates of f'(0.1) and f''(0.1) (ignoring fourth and higher differences).

Since f'(0.1) = f''(0.1) = f(0.1) = e^0.1 = 1.10517, it is obvious that the second result is much less accurate (due to round-off errors).

Checkpoint

1. How are formulae for the derivatives of a function obtained from interpolation formulae?
2. Why is the accuracy of the usual numerical differentiation process not
necessarily increased if the argument interval is reduced?
3. When should numerical differentiation be used?
EXERCISES

1. Derive formulae involving backward differences for the first and second
derivatives of a function.
2. The function f(x) = √x is tabulated for x = 1.00(0.05)1.30 to 5D:

a. Estimate the values of f'(1.00) and f''(1.00), using Newton's forward difference formula.
b. Estimate f'(1.30) and f''(1.30), using Newton's backward difference formula.

3. Use Taylor series to find the truncation errors in the formulae:

a.
b.
c.
d.

STEP 30

NUMERICAL INTEGRATION 1

The trapezoidal rule

It is often either difficult or impossible to evaluate by analytical methods definite integrals of the form

∫_a^b f(x) dx,

so that numerical integration or quadrature must be used.

It is well known that the definite integral may be interpreted as the area under the curve y = f(x) for a ≤ x ≤ b and may be evaluated by subdivision of the interval and summation of the component areas. This additive property of the definite integral permits evaluation in a piecewise sense. For any subinterval of [a, b], we may approximate f(x) by the interpolating polynomial Pn(x). Then we obtain the approximation

∫ f(x) dx ≈ ∫ Pn(x) dx over that subinterval,

which will be a good approximation, provided n is chosen so that the error |f(x) − Pn(x)| in each tabular subinterval is sufficiently small. Note that for n > 1 the error is often alternately positive and negative in successive subintervals, so that considerable cancellation of error occurs; in contrast with numerical differentiation, quadrature is inherently accurate! It is usually sufficient to use a rather low degree polynomial approximation over any subinterval.

The trapezoidal rule

Perhaps the most straightforward quadrature is to subdivide the interval [a, b] into N equal strips of width h by the points

xj = a + jh,   j = 0, 1, . . . , N,

such that b = a + Nh. Then one can use the additive property

∫_a^b f(x) dx = Σ_{j=0}^{N−1} ∫_{xj}^{xj+1} f(x) dx

and the linear approximations

∫_{xj}^{xj+1} f(x) dx ≈ (h/2)(fj + fj+1),   fj = f(xj),

to obtain the trapezoidal rule

∫_a^b f(x) dx ≈ T(h) = h[½f0 + f1 + f2 + · · · + fN−1 + ½fN],

which is suitable for computer implementation (cf. the pseudo-code).
Integration by the trapezoidal rule therefore involves computation of a finite sum of values of the integrand f, whence it is very quick. Note that this procedure can be interpreted geometrically (Figure 15) as the sum of the areas of N trapezoids of width h and average height (fj + fj+1)/2.
FIGURE 15 The trapezoidal rule
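A minimal sketch of the rule as just described (the test integrand and limits are illustrative choices):

    # Sketch: trapezoidal rule with N equal strips of width h = (b - a)/N.
    import math

    def trapezoidal(f, a, b, N):
        h = (b - a) / N
        interior = sum(f(a + j * h) for j in range(1, N))
        return h * (0.5 * (f(a) + f(b)) + interior)

    print(trapezoidal(math.exp, 0.0, 0.5, 8))  # ~0.64893 (exact e**0.5 - 1 = 0.64872)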

Accuracy

The trapezoidal rule corresponds to a rather crude polynomial approximation (a straight line) between successive points xj and xj+1 = xj + h, and hence can only be accurate for sufficiently small h. An approximate (upper) bound on the error may be derived as follows: The Taylor expansion

fj+1 = f(xj + h) = fj + h f'j + (h²/2)f''j + · · ·

yields the trapezoidal form:

∫_{xj}^{xj+1} f(x) dx ≈ (h/2)(fj + fj+1) = h fj + (h²/2)f'j + (h³/4)f''j + · · · ,

while f(x) may be expanded in xj ≤ x ≤ xj+1 as

f(x) = fj + (x − xj)f'j + ((x − xj)²/2)f''j + · · ·

and integrated to arrive at the exact form:

∫_{xj}^{xj+1} f(x) dx = h fj + (h²/2)f'j + (h³/6)f''j + · · · .

Comparison of these two forms shows that the truncation error per strip is approximately

−(h³/12)f''j.

(See STEP 2 regarding the concept of truncation error.) Ignoring higher-order terms, one arrives at an approximate bound on this error when using the trapezoidal rule over N subintervals:

|E| ≤ N(h³/12) max|f''(x)| = ((b − a)h²/12) max_{a≤x≤b}|f''(x)|.

Whenever possible, we will choose h small enough to make this error negligible. In the case of hand computations from tables, this may not be possible. On the other hand, in a computer program in which f(x) may be generated anywhere in [a, b], the interval may be resubdivided until sufficient accuracy is achieved. (The integral value for successive subdivisions can be compared, and the subdivision process terminated when there is adequate agreement between successive values.)
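A sketch of this subdivision strategy, assuming the trapezoidal rule of the previous sketch (the tolerance and test integrand are illustrative):

    # Sketch: halve the strip width (double N) until successive trapezoidal
    # estimates agree to within a tolerance.
    import math

    def trapezoidal(f, a, b, N):
        h = (b - a) / N
        return h * (0.5 * (f(a) + f(b)) + sum(f(a + j * h) for j in range(1, N)))

    def trapezoidal_to_tolerance(f, a, b, tol=1e-6, max_doublings=20):
        N, previous = 1, trapezoidal(f, a, b, 1)
        for _ in range(max_doublings):
            N *= 2
            current = trapezoidal(f, a, b, N)
            if abs(current - previous) < tol:
                return current
            previous = current
        return previous  # tolerance not met; return the last estimate

    print(trapezoidal_to_tolerance(math.exp, 0.0, 0.5))  # ~0.648722 (exact 0.6487213)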

Example

Obtain an estimate of the integral

using the trapezoidal rule and the data in STEP 20. If we use T(h) to denote the approximation with strip width h, we obtain estimates for successively halved strip widths.

Comparing these with the exact value, we may observe that the error sequence

−0.00081, −0.00020, −0.00005 decreases with h², as expected.

Checkpoint

1. Why is quadrature using a polynomial approximation for the integrand likely to be satisfactory, even if the polynomial is of low degree?
2. What is the degree of the approximating polynomial
corresponding to the trapezoidal rule?
3. Why is the trapezoidal rule well suited for implementation on a
computer?

EXERCISES

4. Estimate the value of the integral

∫_{1.00}^{1.30} √x dx,

using the trapezoidal rule and the data given in Exercise 2 of the preceding Step.

5. Use the trapezoidal rule with h = 1, 0.5, and 0.25 to estimate the value of the integral

STEP 31

NUMERICAL INTEGRATION 2

Simpson's Rule

If it is undesirable (for example, when using tables) to increase the subdivision of an interval [a, b] in order to improve the accuracy of a quadrature, one alternative is to use an approximating polynomial of higher degree. The integration formula based on a quadratic (i.e., parabolic) approximation is called Simpson's Rule. It is adequate for most purposes that one is likely to encounter in practice.

Simpson's Rule

Simpson's Rule gives, for a pair of strips,

∫_{xj}^{xj+2} f(x) dx ≈ (h/3)(fj + 4fj+1 + fj+2).

A parabolic arc is fitted to the curve y = f(x) at the three tabular points xj, xj+1, xj+2. Hence, if N = (b − a)/h is even, one obtains Simpson's Rule

∫_a^b f(x) dx ≈ S(h) = (h/3)[f0 + 4f1 + 2f2 + 4f3 + · · · + 2fN−2 + 4fN−1 + fN],

where xj = a + jh and fj = f(xj), j = 0, 1, . . . , N.

Integration by Simpson's Rule involves computation of a finite sum of given values of the integrand f, as in the case of the trapezoidal rule. Simpson's Rule is also effective for implementation on a computer; a single direct application in a hand calculation usually gives sufficient accuracy.
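A minimal sketch of Simpson's Rule as described (the test integrand and limits are illustrative choices):

    # Sketch: Simpson's Rule with N (even) strips of width h = (b - a)/N.
    import math

    def simpson(f, a, b, N):
        if N % 2:
            raise ValueError("N must be even for Simpson's Rule")
        h = (b - a) / N
        total = f(a) + f(b)
        for j in range(1, N):
            total += (4 if j % 2 else 2) * f(a + j * h)
        return h * total / 3.0

    print(simpson(math.exp, 0.0, 0.5, 8))  # ~0.6487213 (exact e**0.5 - 1 = 0.6487213)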

Accuracy

For a given integrand f, it is quite appropriate for a computer program to increase the interval subdivision in order to achieve a required accuracy, while for hand calculations an error bound may again be useful.

Let the function f(x) have in xj ≤ x ≤ xj+2 the Taylor expansion

f(x) = fj+1 + (x − xj+1)f'j+1 + ((x − xj+1)²/2!)f''j+1 + ((x − xj+1)³/3!)f'''j+1 + ((x − xj+1)⁴/4!)f⁽⁴⁾j+1 + · · · ;

then integration over the pair of strips gives the exact form

∫_{xj}^{xj+2} f(x) dx = 2h fj+1 + (h³/3)f''j+1 + (h⁵/60)f⁽⁴⁾j+1 + · · · .

One may reformulate the quadrature rule for the same pair of strips by replacing fj+2 = f(xj+1 + h) and fj = f(xj+1 − h) by their Taylor series; thus

(h/3)(fj + 4fj+1 + fj+2) = 2h fj+1 + (h³/3)f''j+1 + (h⁵/36)f⁽⁴⁾j+1 + · · · .

A comparison of these two versions shows that the truncation error is approximately

−(h⁵/90)f⁽⁴⁾j+1.

Ignoring higher order terms, we conclude that the approximate bound on this error while estimating

∫_a^b f(x) dx

by Simpson's Rule with N/2 subintervals of width 2h is

|E| ≤ (N/2)(h⁵/90) max|f⁽⁴⁾(x)| = ((b − a)h⁴/180) max_{a≤x≤b}|f⁽⁴⁾(x)|.

Note that the error bound is proportional to h⁴, compared with h² for the cruder trapezoidal rule. Note that Simpson's Rule is exact for cubics!

Example

We shall estimate the value of the integral

∫_{1.00}^{1.30} √x dx,

using Simpson's Rule and the data in Exercise 2 of STEP 29. If we choose h = 0.15 or h = 0.05, there will be an even number of intervals. Denoting the approximation with strip width h by S(h), we obtain the estimates S(0.15) and S(0.05).

Since f⁽⁴⁾(x) = −(15/16)x^(−7/2), an approximate truncation error bound is

|E| ≤ (0.30 h⁴/180) max_{1≤x≤1.3}|f⁽⁴⁾(x)| = (0.30 h⁴/180)(15/16),

whence it is 0.000 000 8 for h = 0.15 and 0.000 000 01 for h = 0.05. Note that the truncation error is negligible; within round-off error, the estimate is 0.32148(6).
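A sketch reproducing this Example, assuming (as the surrounding text suggests) that the tabulated function is f(x) = √x on [1.00, 1.30]:

    # Sketch: Simpson estimates S(0.15) and S(0.05) for the integral of sqrt(x)
    # over [1.00, 1.30]; the integrand and limits are inferred, not quoted.
    import math

    def simpson(f, a, b, N):           # same rule as in the earlier sketch
        h = (b - a) / N
        total = f(a) + f(b)
        for j in range(1, N):
            total += (4 if j % 2 else 2) * f(a + j * h)
        return h * total / 3.0

    print(simpson(math.sqrt, 1.0, 1.3, 2))   # S(0.15): ~0.321485
    print(simpson(math.sqrt, 1.0, 1.3, 6))   # S(0.05): ~0.321485
    # For comparison, the exact value is (2/3)(1.3**1.5 - 1) = 0.3214854...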

Checkpoint

1. What is the degree of the approximating polynomial corresponding to Simpson's Rule?
2. What is the error bound for Simpson's rule?
3. Why is Simpson's Rule well suited for implementation on a computer?

EXERCISES

1. Estimate by numerical integration the value of the integral

to 4D.

2. Use Simpson's Rule with N = 2 to estimate the value of

.
Estimate to 5D the resulting error, given that the true value of the
integral is 0.26247.

STEP 32

NUMERICAL INTEGRATION 3

Gauss integration formulae

The numerical integration procedures discussed so far (i.e., the trapezoidal rule and Simpson's Rule) involve equally spaced argument values. However, for a fixed number of points, accuracy may be increased if we do not insist on equidistant points. This is the background of an alternative integration process due to Gauss. Thus, assuming that we can compute a specified number of values of the integrand (at arbitrary points), we shall construct a formula by selecting arguments (or abscissae) within the range of integration, in order to arrive at the most accurate integration rule.

Gauss two-point integration formula

Consider any two-point formula of the form

∫_{−1}^{1} f(x) dx ≈ w1 f(x1) + w2 f(x2),

where the weights w1, w2 and the abscissae x1, x2 are to be determined such that the formula integrates exactly 1, x, x², and x³ (and hence all cubic functions). Then the following four conditions are imposed on the four unknowns:

1. f(x) = 1 integrates exactly: w1 + w2 = 2;
2. f(x) = x integrates exactly: w1x1 + w2x2 = 0;
3. f(x) = x² integrates exactly: w1x1² + w2x2² = 2/3;
4. f(x) = x³ integrates exactly: w1x1³ + w2x2³ = 0.

It is easily verified that

w1 = w2 = 1,   x2 = −x1 = 1/√3 = 0.577 350 27 . . .

satisfies the four equations 1-4. Thus, we arrive at the readily programmable Gauss two-point integration formula (cf. the pseudo-codes):

∫_{−1}^{1} f(x) dx ≈ f(−1/√3) + f(1/√3).

The following change of variable makes this formula applicable to any integration interval [a, b]:

x = ½[(b − a)t + (b + a)],

so that the evaluated integral becomes, say,

∫_a^b f(x) dx = ((b − a)/2) ∫_{−1}^{1} f(½[(b − a)t + (b + a)]) dt.

If we write

g(t) = f(½[(b − a)t + (b + a)]),

then:

∫_a^b f(x) dx ≈ ((b − a)/2)[g(−1/√3) + g(1/√3)],

since dx = ((b − a)/2) dt.
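A minimal sketch of the two-point formula with this change of variable built in (the test integrand anticipates the worked example later in this Step):

    # Sketch: Gauss two-point rule mapped from [-1, 1] onto [a, b].
    import math

    def gauss_two_point(f, a, b):
        t = 1.0 / math.sqrt(3.0)            # abscissae +/- 1/sqrt(3), weights 1
        half, mid = 0.5 * (b - a), 0.5 * (a + b)
        return half * (f(mid - half * t) + f(mid + half * t))

    print(gauss_two_point(math.sin, 0.0, math.pi / 2))  # ~0.998473 (exact value 1)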

Note that the Gauss two-point formula is exact for cubic polynomials, and hence may be compared in accuracy with Simpson's Rule. (In fact, the error for the Gauss formula is about 2/3 that for Simpson's Rule.) Since the Gauss formula requires one fewer evaluation of function values, it may be preferred, provided function evaluations at irrational abscissae are possible.

Other Gauss formulae

The Gauss two-point integration formula discussed is but one of a large family of such formulae. For example, the Gauss three-point formula

∫_{−1}^{1} f(x) dx ≈ (1/9)[5f(−√(3/5)) + 8f(0) + 5f(√(3/5))],

exact for fifth degree polynomials, involves an error smaller than

max_{−1≤x≤1}|f⁽⁶⁾(x)|/15 750.

These two formulae are the first in the series of so-called Gauss-Legendre formulae, because of their association with Legendre polynomials.

There are yet further formulae, associated with other orthogonal polynomials (Laguerre, Hermite, Chebyshev, etc.), of the general form

∫_a^b W(x) f(x) dx ≈ Σ_{i=1}^{n} wi f(xi),

where W(x) is referred to as the weight function, {x1, x2, . . . , xn} is a set of points in the integration range and the weights wi are known constants.

Application of Gauss quadrature

The sets {xi} and {wi} are tabulated in reference books, so that
application of Gauss quadrature is easy.

As a demonstration of the Gauss-Legendre two-point and four-point formulae we will evaluate the integral

∫_0^{π/2} sin x dx = 1.

The two-point formula, after the change of variable

x = (π/4)(t + 1),   g(t) = sin[(π/4)(t + 1)],

yields:

∫_0^{π/2} sin x dx ≈ (π/4)[g(−1/√3) + g(1/√3)].

Letting t = ∓1/√3 = ∓0.577 350 27, we find g(−0.577 350 27) = 0.325 886 and g(0.577 350 27) = 0.945 409, whence:

∫_0^{π/2} sin x dx ≈ (π/4)(0.325 886 + 0.945 409) = 0.998 473.

The four-point formula

∫_{−1}^{1} f(x) dx ≈ 0.347 855[f(−0.861 136) + f(0.861 136)] + 0.652 145[f(−0.339 981) + f(0.339 981)]

yields a result correct to 7D. This accuracy is impressive enough; Simpson's Rule with 64 points yields 0.999 999 83!
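A sketch of a four-point version using the standard tabulated Gauss-Legendre nodes and weights, applied to the same integral as above (the precise layout of the original formula is not shown, so this is an assumed but standard implementation):

    # Sketch: four-point Gauss-Legendre rule with standard nodes and weights.
    import math

    NODES   = (-0.8611363116, -0.3399810436, 0.3399810436, 0.8611363116)
    WEIGHTS = ( 0.3478548451,  0.6521451549, 0.6521451549, 0.3478548451)

    def gauss_four_point(f, a, b):
        half, mid = 0.5 * (b - a), 0.5 * (a + b)
        return half * sum(w * f(mid + half * t) for t, w in zip(NODES, WEIGHTS))

    print(gauss_four_point(math.sin, 0.0, math.pi / 2))  # ~1.0000000, correct to 7D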

Checkpoint

1. What is one disadvantage of integration formulae which employ equally spaced values of the argument?
2. What is the general form of the Gauss integration formula?
3. How accurate are the Gauss-Legendre two-point and three-point formulae?

EXERCISE

Apply the Gauss two-point and four-point formulae to evaluate the integral
