
CHAPTER 3   Interpolation and Polynomial Approximation

5. Neville's method is used to approximate f(0.4), giving the following table.

   x0 = 0       P0 = 1
   x1 = 0.25    P1 = 2    P0,1 = 2.6
   x2 = 0.5     P2        P1,2          P0,1,2
   x3 = 0.75    P3 = 8    P2,3 = 2.4    P1,2,3 = 2.96    P0,1,2,3 = 3.016

   Determine P2 = f(0.5).
6. Neville's method is used to approximate f(0.5), giving the following table.

   x0 = 0      P0 = 0
   x1 = 0.4    P1 = 2.8    P0,1 = 3.5
   x2 = 0.7    P2          P1,2          P0,1,2 = 27/7

   Determine P2 = f(0.7).
7. Suppose xj = j, for j = 0, 1, 2, 3 and it is known that
P0,1 (x) = 2x + 1, P0,2 (x) = x + 1, and P1,2,3 (2.5) = 3.
Find P0,1,2,3 (2.5).
8. Suppose xj = j, for j = 0, 1, 2, 3 and it is known that
P0,1 (x) = x + 1, P1,2 (x) = 3x − 1, and P1,2,3 (1.5) = 4.
Find P0,1,2,3 (1.5).
9. Neville’s Algorithm is used to approximate f (0) using f (−2), f (−1), f (1), and f (2). Suppose
f (−1) was understated by 2 and f (1) was overstated by 3. Determine the error in the original
calculation of the value of the interpolating polynomial to approximate f (0).
10. Neville’s Algorithm is used to approximate f (0) using f (−2), f (−1), f (1), and f (2). Suppose
f (−1) was overstated by 2 and f (1) was understated by 3. Determine the error in the original
calculation of the value of the interpolating polynomial to approximate f (0).

11. Construct a sequence of interpolating values yn to f(1 + √10), where f(x) = (1 + x^2)^(−1) for −5 ≤ x ≤ 5, as follows: For each n = 1, 2, . . . , 10, let h = 10/n and yn = Pn(1 + √10), where Pn(x) is the interpolating polynomial for f(x) at the nodes x0(n), x1(n), . . . , xn(n) and xj(n) = −5 + jh, for each j = 0, 1, 2, . . . , n. Does the sequence {yn} appear to converge to f(1 + √10)?
Inverse Interpolation   Suppose f ∈ C^1[a, b], f′(x) ≠ 0 on [a, b], and f has one zero p in [a, b]. Let x0, . . . , xn be n + 1 distinct numbers in [a, b] with f(xk) = yk, for each k = 0, 1, . . . , n. To approximate p, construct the interpolating polynomial of degree n on the nodes y0, . . . , yn for f^(−1). Since yk = f(xk) and 0 = f(p), it follows that f^(−1)(yk) = xk and p = f^(−1)(0). Using iterated interpolation to approximate f^(−1)(0) is called iterated inverse interpolation.
12. Use iterated inverse interpolation to find an approximation to the solution of x − e^(−x) = 0, using the data

    x         0.3        0.4        0.5        0.6
    e^(−x)    0.740818   0.670320   0.606531   0.548812

13. Construct an algorithm that can be used for inverse interpolation.

3.3 Divided Differences


Iterated interpolation was used in the previous section to generate successively higher-degree
polynomial approximations at a specific point. Divided-difference methods introduced in
this section are used to successively generate the polynomials themselves.


Suppose that Pn (x) is the nth Lagrange polynomial that agrees with the function f at
the distinct numbers x0 , x1 , . . . , xn . Although this polynomial is unique, there are alternate
algebraic representations that are useful in certain situations. The divided differences of f
with respect to x0 , x1 , . . . , xn are used to express Pn (x) in the form
Pn (x) = a0 + a1 (x − x0 ) + a2 (x − x0 )(x − x1 ) + · · · + an (x − x0 ) · · · (x − xn−1 ), (3.5)
for appropriate constants a0 , a1 , . . . , an . To determine the first of these constants, a0 , note
that if Pn (x) is written in the form of Eq. (3.5), then evaluating Pn (x) at x0 leaves only the
constant term a0 ; that is,
a0 = Pn (x0 ) = f (x0 ).

As in so many areas, Isaac Newton is prominent in the study of difference equations. He developed interpolation formulas as early as 1675, using his Δ notation in tables of differences. He took a very general approach to the difference formulas, so explicit examples that he produced, including Lagrange's formulas, are often known by other names.

Similarly, when Pn(x) is evaluated at x1, the only nonzero terms in the evaluation of Pn(x1) are the constant and linear terms,

    f(x0) + a1(x1 − x0) = Pn(x1) = f(x1);

so

    a1 = (f(x1) − f(x0)) / (x1 − x0).    (3.6)

We now introduce the divided-difference notation, which is related to Aitken's Δ^2 notation used in Section 2.5. The zeroth divided difference of the function f with respect to xi, denoted f[xi], is simply the value of f at xi:

    f[xi] = f(xi).    (3.7)
The remaining divided differences are defined recursively; the first divided difference of f with respect to xi and xi+1 is denoted f[xi, xi+1] and defined as

    f[xi, xi+1] = (f[xi+1] − f[xi]) / (xi+1 − xi).    (3.8)

The second divided difference, f[xi, xi+1, xi+2], is defined as

    f[xi, xi+1, xi+2] = (f[xi+1, xi+2] − f[xi, xi+1]) / (xi+2 − xi).

Similarly, after the (k − 1)st divided differences,

    f[xi, xi+1, xi+2, . . . , xi+k−1]   and   f[xi+1, xi+2, . . . , xi+k−1, xi+k],

have been determined, the kth divided difference relative to xi, xi+1, xi+2, . . . , xi+k is

    f[xi, xi+1, . . . , xi+k−1, xi+k] = (f[xi+1, xi+2, . . . , xi+k] − f[xi, xi+1, . . . , xi+k−1]) / (xi+k − xi).    (3.9)

The process ends with the single nth divided difference,

    f[x0, x1, . . . , xn] = (f[x1, x2, . . . , xn] − f[x0, x1, . . . , xn−1]) / (xn − x0).
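The recursion in Eq. (3.9) translates directly into code. The following minimal Python sketch (ours; the function name divided_difference is not from the text) computes f[xi, . . . , xi+k] from tabulated values by applying Eq. (3.9) with memoization. Algorithm 3.2 below builds the same quantities bottom-up in a table instead.

from functools import lru_cache

def divided_difference(x, y):
    """Return a function dd(i, k) giving f[x_i, ..., x_{i+k}] for tabulated
    data x[0..n], y[0..n], using the recursion of Eq. (3.9)."""
    @lru_cache(maxsize=None)
    def dd(i, k):
        if k == 0:                 # zeroth divided difference, Eq. (3.7)
            return y[i]
        # Eq. (3.9): (f[x_{i+1},...,x_{i+k}] - f[x_i,...,x_{i+k-1}]) / (x_{i+k} - x_i)
        return (dd(i + 1, k - 1) - dd(i, k - 1)) / (x[i + k] - x[i])
    return dd

# First three entries of Table 3.10 below, used only as an illustration.
x = (1.0, 1.3, 1.6)
y = (0.7651977, 0.6200860, 0.4554022)
dd = divided_difference(x, y)
print(dd(0, 2))   # f[x0, x1, x2], approximately -0.1087339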
Because of Eq. (3.6) we can write a1 = f [x0 , x1 ], just as a0 can be expressed as a0 =
f (x0 ) = f [x0 ]. Hence the interpolating polynomial in Eq. (3.5) is
Pn (x) = f [x0 ] + f [x0 , x1 ](x − x0 ) + a2 (x − x0 )(x − x1 )
+ · · · + an (x − x0 )(x − x1 ) · · · (x − xn−1 ).


As might be expected from the evaluation of a0 and a1 , the required constants are
ak = f [x0 , x1 , x2 , . . . , xk ],
for each k = 0, 1, . . . , n. So Pn (x) can be rewritten in a form called Newton’s Divided-
Difference:
    Pn(x) = f[x0] + Σ_{k=1}^{n} f[x0, x1, . . . , xk](x − x0) · · · (x − xk−1).    (3.10)

The value of f[x0, x1, . . . , xk] is independent of the order of the numbers x0, x1, . . . , xk, as shown in Exercise 21.
The generation of the divided differences is outlined in Table 3.9. Two fourth and one
fifth difference can also be determined from these data.

Table 3.9
x      f(x)      First divided differences                Second divided differences                             Third divided differences

x0     f[x0]
                 f[x0, x1] = (f[x1] − f[x0])/(x1 − x0)
x1     f[x1]                                              f[x0, x1, x2] = (f[x1, x2] − f[x0, x1])/(x2 − x0)
                 f[x1, x2] = (f[x2] − f[x1])/(x2 − x1)                                                            f[x0, x1, x2, x3] = (f[x1, x2, x3] − f[x0, x1, x2])/(x3 − x0)
x2     f[x2]                                              f[x1, x2, x3] = (f[x2, x3] − f[x1, x2])/(x3 − x1)
                 f[x2, x3] = (f[x3] − f[x2])/(x3 − x2)                                                            f[x1, x2, x3, x4] = (f[x2, x3, x4] − f[x1, x2, x3])/(x4 − x1)
x3     f[x3]                                              f[x2, x3, x4] = (f[x3, x4] − f[x2, x3])/(x4 − x2)
                 f[x3, x4] = (f[x4] − f[x3])/(x4 − x3)                                                            f[x2, x3, x4, x5] = (f[x3, x4, x5] − f[x2, x3, x4])/(x5 − x2)
x4     f[x4]                                              f[x3, x4, x5] = (f[x4, x5] − f[x3, x4])/(x5 − x3)
                 f[x4, x5] = (f[x5] − f[x4])/(x5 − x4)
x5     f[x5]

ALGORITHM 3.2   Newton's Divided-Difference Formula

To obtain the divided-difference coefficients of the interpolatory polynomial P on the (n + 1) distinct numbers x0, x1, . . . , xn for the function f:

INPUT   numbers x0, x1, . . . , xn; values f(x0), f(x1), . . . , f(xn) as F0,0, F1,0, . . . , Fn,0.

OUTPUT  the numbers F0,0, F1,1, . . . , Fn,n where

    Pn(x) = F0,0 + Σ_{i=1}^{n} Fi,i Π_{j=0}^{i−1} (x − xj).    (Fi,i is f[x0, x1, . . . , xi].)

Step 1  For i = 1, 2, . . . , n
          For j = 1, 2, . . . , i
            set Fi,j = (Fi,j−1 − Fi−1,j−1) / (xi − xi−j).    (Fi,j = f[xi−j, . . . , xi].)

Step 2  OUTPUT (F0,0, F1,1, . . . , Fn,n);
        STOP.
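Algorithm 3.2 transcribes directly into Python. The sketch below is ours (the name newton_divided_difference and the full-table layout are implementation choices, not part of the algorithm statement); it returns the diagonal F0,0, F1,1, . . . , Fn,n, that is, the coefficients of Eq. (3.10).

def newton_divided_difference(x, fx):
    """Compute f[x0], f[x0,x1], ..., f[x0,...,xn], the coefficients of the
    Newton divided-difference form (3.10), following Algorithm 3.2."""
    n = len(x) - 1
    # F[i][j] holds f[x_{i-j}, ..., x_i]; column 0 is the data itself.
    F = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        F[i][0] = fx[i]
    for i in range(1, n + 1):                      # Step 1
        for j in range(1, i + 1):
            F[i][j] = (F[i][j - 1] - F[i - 1][j - 1]) / (x[i] - x[i - j])
    return [F[i][i] for i in range(n + 1)]         # Step 2: the diagonal F_{i,i}

Applied to the data of Example 1 below, this should return approximately [0.7651977, −0.4837057, −0.1087339, 0.0658784, 0.0018251], the diagonal of Table 3.11.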


The form of the output in Algorithm 3.2 can be modified to produce all the divided
differences, as shown in Example 1.

Example 1   Complete the divided-difference table for the data used in Example 1 of Section 3.2, reproduced in Table 3.10, and construct the interpolating polynomial that uses all this data.

Table 3.10
   x      f(x)
   1.0    0.7651977
   1.3    0.6200860
   1.6    0.4554022
   1.9    0.2818186
   2.2    0.1103623

Solution   The first divided difference involving x0 and x1 is

    f[x0, x1] = (f[x1] − f[x0]) / (x1 − x0) = (0.6200860 − 0.7651977) / (1.3 − 1.0) = −0.4837057.

The remaining first divided differences are found in a similar manner and are shown in the fourth column of Table 3.11.

Table 3.11
 i    xi     f[xi]         f[xi−1, xi]    f[xi−2, xi−1, xi]    f[xi−3, . . . , xi]    f[xi−4, . . . , xi]

 0    1.0    0.7651977
                           −0.4837057
 1    1.3    0.6200860                    −0.1087339
                           −0.5489460                          0.0658784
 2    1.6    0.4554022                    −0.0494433                                  0.0018251
                           −0.5786120                          0.0680685
 3    1.9    0.2818186                    0.0118183
                           −0.5715210
 4    2.2    0.1103623

The second divided difference involving x0, x1, and x2 is

    f[x0, x1, x2] = (f[x1, x2] − f[x0, x1]) / (x2 − x0) = (−0.5489460 − (−0.4837057)) / (1.6 − 1.0) = −0.1087339.

The remaining second divided differences are shown in the fifth column of Table 3.11. The third divided difference involving x0, x1, x2, and x3 and the fourth divided difference involving all the data points are, respectively,

    f[x0, x1, x2, x3] = (f[x1, x2, x3] − f[x0, x1, x2]) / (x3 − x0) = (−0.0494433 − (−0.1087339)) / (1.9 − 1.0) = 0.0658784,

and

    f[x0, x1, x2, x3, x4] = (f[x1, x2, x3, x4] − f[x0, x1, x2, x3]) / (x4 − x0) = (0.0680685 − 0.0658784) / (2.2 − 1.0) = 0.0018251.

All the entries are given in Table 3.11.
The coefficients of the Newton forward divided-difference form of the interpolating polynomial are along the diagonal in the table. This polynomial is

    P4(x) = 0.7651977 − 0.4837057(x − 1.0) − 0.1087339(x − 1.0)(x − 1.3)
                      + 0.0658784(x − 1.0)(x − 1.3)(x − 1.6)
                      + 0.0018251(x − 1.0)(x − 1.3)(x − 1.6)(x − 1.9).

Notice that the value P4(1.5) = 0.5118200 agrees with the result in Table 3.6 for Example 2 of Section 3.2, as it must because the polynomials are the same.
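As a quick check, the Newton form can be evaluated by nested multiplication, in the spirit of Horner's method. The short Python sketch below (our own illustration, using the coefficients along the diagonal of Table 3.11) should reproduce P4(1.5) ≈ 0.5118200.

def newton_eval(coef, nodes, x):
    """Evaluate the Newton divided-difference form with coefficients
    f[x0], f[x0,x1], ... by nested multiplication."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - nodes[k]) + coef[k]
    return result

coef = [0.7651977, -0.4837057, -0.1087339, 0.0658784, 0.0018251]
nodes = [1.0, 1.3, 1.6, 1.9, 2.2]      # only the first four nodes enter the products
print(newton_eval(coef, nodes, 1.5))   # approximately 0.5118200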


We can use Maple with the NumericalAnalysis package to create the Newton Divided-
Difference table. First load the package and define the x and f (x) = y values that will be
used to generate the first four rows of Table 3.11.
xy := [[1.0, 0.7651977], [1.3, 0.6200860], [1.6, 0.4554022], [1.9, 0.2818186]]
The command to create the divided-difference table is
p3 := PolynomialInterpolation(xy, independentvar = ‘x’, method = newton)
A matrix containing the divided-difference table as its nonzero entries is created with the
DividedDifferenceTable(p3)
We can add another row to the table with the command
p4 := AddPoint(p3, [2.2, 0.1103623])
which produces the divided-difference table with entries corresponding to those in
Table 3.11.
The Newton form of the interpolation polynomial is created with
Interpolant(p4)
which produces the polynomial in the form of P4 (x) in Example 1, except that in place of
the first two terms of P4 (x):
0.7651977 − 0.4837057(x − 1.0)
Maple gives this as 1.248903367 − 0.4837056667x.
The Mean Value Theorem 1.8 applied to Eq. (3.8) when i = 0,

    f[x0, x1] = (f(x1) − f(x0)) / (x1 − x0),

implies that when f′ exists, f[x0, x1] = f′(ξ) for some number ξ between x0 and x1. The
following theorem generalizes this result.

Theorem 3.6   Suppose that f ∈ C^n[a, b] and x0, x1, . . . , xn are distinct numbers in [a, b]. Then a number ξ exists in (a, b) with

    f[x0, x1, . . . , xn] = f^(n)(ξ) / n!.

Proof   Let

    g(x) = f(x) − Pn(x).

Since f(xi) = Pn(xi) for each i = 0, 1, . . . , n, the function g has n + 1 distinct zeros in [a, b]. Generalized Rolle's Theorem 1.10 implies that a number ξ in (a, b) exists with g^(n)(ξ) = 0, so

    0 = f^(n)(ξ) − Pn^(n)(ξ).

Since Pn(x) is a polynomial of degree n whose leading coefficient is f[x0, x1, . . . , xn],

    Pn^(n)(x) = n! f[x0, x1, . . . , xn],

for all values of x. As a consequence,

    f[x0, x1, . . . , xn] = f^(n)(ξ) / n!.
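Theorem 3.6 is easy to check numerically. The sketch below (our illustration, assuming f(x) = e^x so that every derivative is again e^x) computes the nth divided difference on a few nodes and confirms that n! · f[x0, . . . , xn] lies between the smallest and largest values of f^(n) on the interval, as the theorem requires.

import math

def nth_divided_difference(x, y):
    """Return f[x0, ..., xn] by repeatedly applying Eq. (3.9)."""
    col = list(y)
    for k in range(1, len(x)):
        col = [(col[i + 1] - col[i]) / (x[i + k] - x[i]) for i in range(len(col) - 1)]
    return col[0]

# Hypothetical nodes; f(x) = e^x, so f^(n)(x) = e^x for every n.
x = [0.0, 0.3, 0.7, 1.0, 1.4]
y = [math.exp(t) for t in x]
n = len(x) - 1
value = math.factorial(n) * nth_divided_difference(x, y)
print(math.exp(x[0]) <= value <= math.exp(x[-1]))   # should print True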


Newton’s divided-difference formula can be expressed in a simplified form when the


nodes are arranged consecutively with equal spacing. In this case, we introduce the notation
h = xi+1 − xi , for each i = 0, 1, . . . , n − 1 and let x = x0 + sh. Then the difference x − xi
is x − xi = (s − i)h. So Eq. (3.10) becomes

Pn (x) = Pn (x0 + sh) = f [x0 ] + shf [x0 , x1 ] + s(s − 1)h2 f [x0 , x1 , x2 ]


+ · · · + s(s − 1) · · · (s − n + 1)hn f [x0 , x1 , . . . , xn ]
n
X
= f [x0 ] + s(s − 1) · · · (s − k + 1)hk f [x0 , x1 , . . . , xk ].
k=1

Using the binomial-coefficient notation C(s, k) for "s choose k",

    C(s, k) = s(s − 1) · · · (s − k + 1) / k!,

we can express Pn(x) compactly as

    Pn(x) = Pn(x0 + sh) = f[x0] + Σ_{k=1}^{n} C(s, k) k! h^k f[x0, x1, . . . , xk].    (3.11)

Forward Differences

The Newton forward-difference formula is constructed by making use of the forward-difference notation Δ introduced in Aitken's Δ^2 method. With this notation,

    f[x0, x1] = (f(x1) − f(x0)) / (x1 − x0) = (1/h)(f(x1) − f(x0)) = (1/h) Δf(x0),

    f[x0, x1, x2] = (1/(2h)) · (Δf(x1) − Δf(x0)) / h = (1/(2h^2)) Δ^2 f(x0),

and, in general,

    f[x0, x1, . . . , xk] = (1/(k! h^k)) Δ^k f(x0).

Since f[x0] = f(x0), Eq. (3.11) has the following form.

Newton Forward-Difference Formula

    Pn(x) = f(x0) + Σ_{k=1}^{n} C(s, k) Δ^k f(x0)    (3.12)
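For equally spaced data, the forward differences Δ^k f(x0) are built from the function values alone, and Eq. (3.12) then needs only the running binomial factors. A minimal Python sketch (ours, with hypothetical helper names) follows; applied to the data of Example 1 with x = 1.5, so that s = 5/3, it should reproduce P4(1.5) ≈ 0.5118200.

def forward_differences(fx):
    """Return [f(x0), Δf(x0), Δ^2 f(x0), ..., Δ^n f(x0)] for equally spaced data."""
    diffs = [fx[0]]
    col = list(fx)
    while len(col) > 1:
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
        diffs.append(col[0])
    return diffs

def newton_forward(x0, h, fx, x):
    """Evaluate the Newton forward-difference formula (3.12) at x."""
    s = (x - x0) / h
    diffs = forward_differences(fx)
    total, binom = diffs[0], 1.0
    for k in range(1, len(diffs)):
        binom *= (s - k + 1) / k        # updates C(s, k-1) to C(s, k)
        total += binom * diffs[k]
    return total

fx = [0.7651977, 0.6200860, 0.4554022, 0.2818186, 0.1103623]
print(newton_forward(1.0, 0.3, fx, 1.5))   # approximately 0.5118200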

Backward Differences
If the interpolating nodes are reordered from last to first as xn , xn−1 , . . . , x0 , we can write
the interpolatory formula as

Pn (x) = f [xn ] + f [xn , xn−1 ](x − xn ) + f [xn , xn−1 , xn−2 ](x − xn )(x − xn−1 )
+ · · · + f [xn , . . . , x0 ](x − xn )(x − xn−1 ) · · · (x − x1 ).


If, in addition, the nodes are equally spaced with x = xn + sh and x = xi + (s + n − i)h,
then

    Pn(x) = Pn(xn + sh) = f[xn] + sh f[xn, xn−1] + s(s + 1)h^2 f[xn, xn−1, xn−2]
                + · · · + s(s + 1) · · · (s + n − 1)h^n f[xn, . . . , x0].

This is used to derive a commonly applied formula known as the Newton backward-
difference formula. To discuss this formula, we need the following definition.

Definition 3.7   Given the sequence {pn}, n = 0, 1, 2, . . . , define the backward difference ∇pn (read "nabla pn") by

    ∇pn = pn − pn−1,   for n ≥ 1.

Higher powers are defined recursively by

    ∇^k pn = ∇(∇^(k−1) pn),   for k ≥ 2.

Definition 3.7 implies that

    f[xn, xn−1] = (1/h) ∇f(xn),    f[xn, xn−1, xn−2] = (1/(2h^2)) ∇^2 f(xn),

and, in general,

    f[xn, xn−1, . . . , xn−k] = (1/(k! h^k)) ∇^k f(xn).

Consequently,

    Pn(x) = f[xn] + s ∇f(xn) + (s(s + 1)/2) ∇^2 f(xn) + · · · + (s(s + 1) · · · (s + n − 1)/n!) ∇^n f(xn).
If we extend the binomial-coefficient notation to include all real values of s by letting

    C(−s, k) = (−s)(−s − 1) · · · (−s − k + 1) / k! = (−1)^k s(s + 1) · · · (s + k − 1) / k!,

then

    Pn(x) = f[xn] + (−1)^1 C(−s, 1) ∇f(xn) + (−1)^2 C(−s, 2) ∇^2 f(xn) + · · · + (−1)^n C(−s, n) ∇^n f(xn).

This gives the following result.

Newton Backward-Difference Formula

    Pn(x) = f[xn] + Σ_{k=1}^{n} (−1)^k C(−s, k) ∇^k f(xn)    (3.13)
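A backward-difference evaluation can be coded in the same way, working from the tail of the data. The sketch below is our own illustration of Eq. (3.13); for the data of Example 1 and x = 2.0, so that s = −2/3, it should give about 0.2238754, agreeing with the backward divided-difference computation in the Illustration that follows.

def backward_differences(fx):
    """Return [f(xn), ∇f(xn), ∇^2 f(xn), ..., ∇^n f(xn)] for equally spaced data."""
    diffs = [fx[-1]]
    col = list(fx)
    while len(col) > 1:
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
        diffs.append(col[-1])
    return diffs

def newton_backward(xn, h, fx, x):
    """Evaluate the Newton backward-difference formula (3.13) at x."""
    s = (x - xn) / h
    diffs = backward_differences(fx)
    total, factor = diffs[0], 1.0
    for k in range(1, len(diffs)):
        factor *= (s + k - 1) / k       # s(s+1)...(s+k-1)/k! = (-1)^k C(-s, k)
        total += factor * diffs[k]
    return total

fx = [0.7651977, 0.6200860, 0.4554022, 0.2818186, 0.1103623]
print(newton_backward(2.2, 0.3, fx, 2.0))   # approximately 0.2238754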


Illustration The divided-difference Table 3.12 corresponds to the data in Example 1.

Table 3.12
   x      f(x)            First divided   Second divided   Third divided   Fourth divided
                          differences     differences      differences     differences

   1.0    0.7651977*
                          −0.4837057*
   1.3    0.6200860                       −0.1087339*
                          −0.5489460                        0.0658784*
   1.6    0.4554022                       −0.0494433                       0.0018251*†
                          −0.5786120                        0.0680685†
   1.9    0.2818186                       0.0118183†
                          −0.5715210†
   2.2    0.1103623†

Only one interpolating polynomial of degree at most 4 uses these five data points, but we will organize the data points to obtain the best interpolation approximations of degrees 1, 2, and 3. This will give us a sense of the accuracy of the fourth-degree approximation for the given value of x.
If an approximation to f(1.1) is required, the reasonable choice for the nodes would be x0 = 1.0, x1 = 1.3, x2 = 1.6, x3 = 1.9, and x4 = 2.2, since this choice makes the earliest possible use of the data points closest to x = 1.1, and also makes use of the fourth divided difference. This implies that h = 0.3 and s = 1/3, so the Newton forward divided-difference formula is used with the divided differences marked with an asterisk (*) in Table 3.12:

    P4(1.1) = P4(1.0 + (1/3)(0.3))
            = 0.7651977 + (1/3)(0.3)(−0.4837057) + (1/3)(−2/3)(0.3)^2(−0.1087339)
              + (1/3)(−2/3)(−5/3)(0.3)^3(0.0658784)
              + (1/3)(−2/3)(−5/3)(−8/3)(0.3)^4(0.0018251)
            = 0.7196460.

To approximate a value when x is close to the end of the tabulated values, say, x = 2.0, we would again like to make the earliest use of the data points closest to x. This requires using the Newton backward divided-difference formula with s = −2/3 and the divided differences in Table 3.12 marked with a dagger (†). Notice that the fourth divided difference is used in both formulas.

    P4(2.0) = P4(2.2 − (2/3)(0.3))
            = 0.1103623 − (2/3)(0.3)(−0.5715210) − (2/3)(1/3)(0.3)^2(0.0118183)
              − (2/3)(1/3)(4/3)(0.3)^3(0.0680685) − (2/3)(1/3)(4/3)(7/3)(0.3)^4(0.0018251)
            = 0.2238754.
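The two computations above use the two diagonals of the same table. A compact way to check them is to evaluate each Newton form directly from its diagonal by the same nested multiplication used after Example 1; the sketch below is ours and simply hard-codes the entries of Table 3.12.

# Diagonals of Table 3.12: top (forward, *) and bottom (backward, †).
forward_coef   = [0.7651977, -0.4837057, -0.1087339, 0.0658784, 0.0018251]
forward_nodes  = [1.0, 1.3, 1.6, 1.9]
backward_coef  = [0.1103623, -0.5715210, 0.0118183, 0.0680685, 0.0018251]
backward_nodes = [2.2, 1.9, 1.6, 1.3]

def eval_newton(coef, nodes, x):
    """Nested evaluation of a Newton form with the given coefficients."""
    result = coef[-1]
    for c, node in zip(reversed(coef[:-1]), reversed(nodes)):
        result = result * (x - node) + c
    return result

print(eval_newton(forward_coef, forward_nodes, 1.1))    # about 0.7196460
print(eval_newton(backward_coef, backward_nodes, 2.0))  # about 0.2238754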


Centered Differences
The Newton forward- and backward-difference formulas are not appropriate for approximat-
ing f (x) when x lies near the center of the table because neither will permit the highest-order
difference to have x0 close to x. A number of divided-difference formulas are available for
this case, each of which has situations when it can be used to maximum advantage. These
methods are known as centered-difference formulas. We will consider only one centered-
difference formula, Stirling’s method.
For the centered-difference formulas, we choose x0 near the point being approximated and label the nodes directly below x0 as x1, x2, . . . and those directly above as x−1, x−2, . . . . With this convention, Stirling's formula is given by

    Pn(x) = P2m+1(x) = f[x0] + (sh/2)(f[x−1, x0] + f[x0, x1]) + s^2 h^2 f[x−1, x0, x1]    (3.14)
                + (s(s^2 − 1)h^3/2)(f[x−2, x−1, x0, x1] + f[x−1, x0, x1, x2])
                + · · · + s^2(s^2 − 1)(s^2 − 4) · · · (s^2 − (m − 1)^2) h^(2m) f[x−m, . . . , xm]
                + (s(s^2 − 1) · · · (s^2 − m^2) h^(2m+1)/2)(f[x−m−1, . . . , xm] + f[x−m, . . . , xm+1]),

if n = 2m + 1 is odd. If n = 2m is even, we use the same formula but delete the last line. The entries used for this formula are marked with an asterisk (*) in Table 3.13.

James Stirling (1692–1770) published this and numerous other formulas in Methodus Differentialis in 1720. Techniques for accelerating the convergence of various series are included in this work.

Table 3.13
   x       f(x)       First divided    Second divided       Third divided            Fourth divided
                      differences      differences          differences              differences

   x−2     f[x−2]
                      f[x−2, x−1]
   x−1     f[x−1]                      f[x−2, x−1, x0]
                      f[x−1, x0]*                           f[x−2, x−1, x0, x1]*
   x0      f[x0]*                      f[x−1, x0, x1]*                                f[x−2, x−1, x0, x1, x2]*
                      f[x0, x1]*                            f[x−1, x0, x1, x2]*
   x1      f[x1]                       f[x0, x1, x2]
                      f[x1, x2]
   x2      f[x2]

Example 2   Consider the table of data given in the previous examples. Use Stirling's formula to approximate f(1.5) with x0 = 1.6.

Solution   To apply Stirling's formula we use the entries marked with an asterisk (*) in Table 3.14.

Table 3.14
   x      f(x)            First divided   Second divided   Third divided   Fourth divided
                          differences     differences      differences     differences

   1.0    0.7651977
                          −0.4837057
   1.3    0.6200860                       −0.1087339
                          −0.5489460*                       0.0658784*
   1.6    0.4554022*                      −0.0494433*                      0.0018251*
                          −0.5786120*                       0.0680685*
   1.9    0.2818186                       0.0118183
                          −0.5715210
   2.2    0.1103623


The formula, with h = 0.3, x0 = 1.6, and s = −1/3, becomes

    f(1.5) ≈ P4(1.6 + (−1/3)(0.3))
           = 0.4554022 + (−1/3)(0.3/2)((−0.5489460) + (−0.5786120))
             + (−1/3)^2 (0.3)^2 (−0.0494433)
             + (1/2)(−1/3)((−1/3)^2 − 1)(0.3)^3 (0.0658784 + 0.0680685)
             + (−1/3)^2 ((−1/3)^2 − 1)(0.3)^4 (0.0018251)
           = 0.5118200.
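The arithmetic above is easy to mistype, so a short check helps. The following Python sketch is ours; it hard-codes the five Stirling terms for this particular table (h = 0.3, s = −1/3) rather than implementing Eq. (3.14) in general, and it should reproduce 0.5118200.

# Divided differences centered at x0 = 1.6 (the entries marked * in Table 3.14).
h, s = 0.3, -1.0 / 3.0
f0 = 0.4554022
d1_left, d1_right = -0.5489460, -0.5786120     # f[x-1, x0], f[x0, x1]
d2 = -0.0494433                                # f[x-1, x0, x1]
d3_left, d3_right = 0.0658784, 0.0680685       # f[x-2,...,x1], f[x-1,...,x2]
d4 = 0.0018251                                 # f[x-2,...,x2]

p = (f0
     + (s * h / 2) * (d1_left + d1_right)
     + s**2 * h**2 * d2
     + (s * (s**2 - 1) * h**3 / 2) * (d3_left + d3_right)
     + s**2 * (s**2 - 1) * h**4 * d4)
print(p)   # approximately 0.5118200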

Most texts on numerical analysis written before the widespread use of computers have
extensive treatments of divided-difference methods. If a more comprehensive treatment of
this subject is needed, the book by Hildebrand [Hild] is a particularly good reference.

EXERCISE SET 3.3
1. Use Eq. (3.10) or Algorithm 3.2 to construct interpolating polynomials of degree one, two, and three
for the following data. Approximate the specified value using each of the polynomials.
a. f (8.4) if f (8.1) = 16.94410, f (8.3) = 17.56492, f (8.6) = 18.50515, f (8.7) = 18.82091
b. f (0.9) if f (0.6) = −0.17694460, f (0.7) = 0.01375227, f (0.8) = 0.22363362, f (1.0) =
0.65809197
2. Use Eq. (3.10) or Algorithm 3.2 to construct interpolating polynomials of degree one, two, and three
for the following data. Approximate the specified value using each of the polynomials.
a. f (0.43) if f (0) = 1, f (0.25) = 1.64872, f (0.5) = 2.71828, f (0.75) = 4.48169
b. f (0) if f (−0.5) = 1.93750, f (−0.25) = 1.33203, f (0.25) = 0.800781, f (0.5) = 0.687500
3. Use the Newton forward-difference formula to construct interpolating polynomials of degree one, two, and three for the following data. Approximate the specified value using each of the polynomials.
   a. f(−1/3) if f(−0.75) = −0.07181250, f(−0.5) = −0.02475000, f(−0.25) = 0.33493750, f(0) = 1.10100000
   b. f(0.25) if f(0.1) = −0.62049958, f(0.2) = −0.28398668, f(0.3) = 0.00660095, f(0.4) = 0.24842440
4. Use the Newton forward-difference formula to construct interpolating polynomials of degree one,
two, and three for the following data. Approximate the specified value using each of the polynomials.
a. f (0.43) if f (0) = 1, f (0.25) = 1.64872, f (0.5) = 2.71828, f (0.75) = 4.48169
b. f (0.18) if f (0.1) = −0.29004986, f (0.2) = −0.56079734, f (0.3) = −0.81401972, f (0.4) =
−1.0526302
5. Use the Newton backward-difference formula to construct interpolating polynomials of degree one,
two, and three for the following data. Approximate the specified value using each of the polynomials.
a. f (−1/3) if f (−0.75) = −0.07181250, f (−0.5) = −0.02475000, f (−0.25) = 0.33493750,
f (0) = 1.10100000
b. f (0.25) if f (0.1) = −0.62049958, f (0.2) = −0.28398668, f (0.3) = 0.00660095, f (0.4) =
0.24842440


6. Use the Newton backward-difference formula to construct interpolating polynomials of degree one,
two, and three for the following data. Approximate the specified value using each of the polynomials.
a. f (0.43) if f (0) = 1, f (0.25) = 1.64872, f (0.5) = 2.71828, f (0.75) = 4.48169
b. f (0.25) if f (−1) = 0.86199480, f (−0.5) = 0.95802009, f (0) = 1.0986123, f (0.5) =
1.2943767
7. a. Use Algorithm 3.2 to construct the interpolating polynomial of degree three for the unequally
spaced points given in the following table:

x f (x)
−0.1 5.30000
0.0 2.00000
0.2 3.19000
0.3 1.00000

b. Add f (0.35) = 0.97260 to the table, and construct the interpolating polynomial of degree four.
8. a. Use Algorithm 3.2 to construct the interpolating polynomial of degree four for the unequally
spaced points given in the following table:

x f (x)
0.0 −6.00000
0.1 −5.89483
0.3 −5.65014
0.6 −5.17788
1.0 −4.28172

b. Add f (1.1) = −3.99583 to the table, and construct the interpolating polynomial of degree five.
9. a. Approximate f (0.05) using the following data and the Newton forward-difference formula:

x 0.0 0.2 0.4 0.6 0.8


f (x) 1.00000 1.22140 1.49182 1.82212 2.22554

b. Use the Newton backward-difference formula to approximate f (0.65).


c. Use Stirling’s formula to approximate f (0.43).
10. Show that the polynomial interpolating the following data has degree 3.

x −2 −1 0 1 2 3
f (x) 1 4 11 16 13 −4

11. a. Show that the cubic polynomials

P(x) = 3 − 2(x + 1) + 0(x + 1)(x) + (x + 1)(x)(x − 1)

and

Q(x) = −1 + 4(x + 2) − 3(x + 2)(x + 1) + (x + 2)(x + 1)(x)

both interpolate the data

x −2 −1 0 1 2
f (x) −1 3 1 −1 3

b. Why does part (a) not violate the uniqueness property of interpolating polynomials?
12. A fourth-degree polynomial P(x) satisfies Δ^4 P(0) = 24, Δ^3 P(0) = 6, and Δ^2 P(0) = 0, where ΔP(x) = P(x + 1) − P(x). Compute Δ^2 P(10).


13. The following data are given for a polynomial P(x) of unknown degree.

x 0 1 2
P(x) 2 −1 4

Determine the coefficient of x^2 in P(x) if all third-order forward differences are 1.


14. The following data are given for a polynomial P(x) of unknown degree.

x 0 1 2 3
P(x) 4 9 15 18

Determine the coefficient of x^3 in P(x) if all fourth-order forward differences are 1.


15. The Newton forward-difference formula is used to approximate f (0.3) given the following data.

x 0.0 0.2 0.4 0.6


f (x) 15.0 21.0 30.0 51.0

Suppose it is discovered that f (0.4) was understated by 10 and f (0.6) was overstated by 5. By what
amount should the approximation to f (0.3) be changed?
16. For a function f , the Newton divided-difference formula gives the interpolating polynomial

    P3(x) = 1 + 4x + 4x(x − 0.25) + (16/3) x(x − 0.25)(x − 0.5),
on the nodes x0 = 0, x1 = 0.25, x2 = 0.5 and x3 = 0.75. Find f (0.75).
17. For a function f, the forward divided differences are given by

    x0 = 0.0    f[x0]
                f[x0, x1]
    x1 = 0.4    f[x1]         f[x0, x1, x2] = 50/7
                f[x1, x2] = 10
    x2 = 0.7    f[x2] = 6

Determine the missing entries in the table.


18. a. The introduction to this chapter included a table listing the population of the United States from
1950 to 2000. Use appropriate divided differences to approximate the population in the years
1940, 1975, and 2020.
b. The population in 1940 was approximately 132,165,000. How accurate do you think your 1975
and 2020 figures are?
19. Given

Pn (x) = f [x0 ] + f [x0 , x1 ](x − x0 ) + a2 (x − x0 )(x − x1 )


+ a3 (x − x0 )(x − x1 )(x − x2 ) + · · ·
+ an (x − x0 )(x − x1 ) · · · (x − xn−1 ),

use Pn (x2 ) to show that a2 = f [x0 , x1 , x2 ].


20. Show that

    f[x0, x1, . . . , xn, x] = f^(n+1)(ξ(x)) / (n + 1)!,

for some ξ(x). [Hint: From Eq. (3.3),

    f(x) = Pn(x) + (f^(n+1)(ξ(x)) / (n + 1)!) (x − x0) · · · (x − xn).
