
510 CHAPTER 8 Approximation Theory

      W      R        W      R        W      R        W      R        W      R
    0.017  0.154    0.025  0.23     0.020  0.181    0.020  0.180    0.025  0.234
    0.087  0.296    0.111  0.357    0.085  0.260    0.119  0.299    0.233  0.537
    0.174  0.363    0.211  0.366    0.171  0.334    0.210  0.428    0.783  1.47
    1.11   0.531    0.999  0.771    1.29   0.87     1.32   1.15     1.35   2.48
    1.74   2.23     3.02   2.01     3.04   3.59     3.34   2.83     1.69   1.44
    4.09   3.58     4.28   3.28     4.29   3.40     5.48   4.15     2.75   1.84
    5.45   3.52     4.58   2.96     5.30   3.88     4.83   4.66
    5.96   2.40     4.68   5.10     5.53   6.94

14. Show that the normal equations (8.3) resulting from discrete least squares approximation yield a symmetric and nonsingular matrix and hence have a unique solution. [Hint: Let A = (aij), where

        a_{ij} = \sum_{k=1}^{m} x_k^{i+j-2},

    and x1, x2, …, xm are distinct with n < m − 1. Suppose A is singular and that c ≠ 0 is such that c^t A c = 0. Show that the nth-degree polynomial whose coefficients are the coordinates of c has more than n roots, and use this to establish a contradiction.]

8.2 Orthogonal Polynomials and Least Squares Approximation


The previous section considered the problem of least squares approximation to fit a collection of data. The other approximation problem mentioned in the introduction concerns the approximation of functions.
Suppose f ∈ C[a, b] and that a polynomial Pn (x) of degree at most n is required that
will minimize the error
    \int_a^b [f(x) - P_n(x)]^2 \, dx.

To determine a least squares approximating polynomial, that is, a polynomial that minimizes this expression, let

    P_n(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0 = \sum_{k=0}^{n} a_k x^k,

and define, as shown in Figure 8.6,


    E \equiv E_2(a_0, a_1, \ldots, a_n) = \int_a^b \left[ f(x) - \sum_{k=0}^{n} a_k x^k \right]^2 dx.

The problem is to find real coefficients a0 , a1 , . . . , an that will minimize E. A necessary


condition for the numbers a0 , a1 , . . . , an to minimize E is that

    \frac{\partial E}{\partial a_j} = 0, \quad \text{for each } j = 0, 1, \ldots, n.

Copyright 2010 Cengage Learning. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part. Due to electronic rights, some third party content may be suppressed from the eBook and/or eChapter(s).
Editorial review has deemed that any suppressed content does not materially affect the overall learning experience. Cengage Learning reserves the right to remove additional content at any time if subsequent rights restrictions require it.

Figure 8.6  The graph of f(x) on [a, b] together with the squared deviation [f(x) − Σ_{k=0}^{n} a_k x^k]^2 whose integral defines E.

Since
    E = \int_a^b [f(x)]^2 \, dx - 2\sum_{k=0}^{n} a_k \int_a^b x^k f(x) \, dx + \int_a^b \left[ \sum_{k=0}^{n} a_k x^k \right]^2 dx,

we have
    \frac{\partial E}{\partial a_j} = -2\int_a^b x^j f(x) \, dx + 2\sum_{k=0}^{n} a_k \int_a^b x^{j+k} \, dx.

Hence, to find Pn (x), the (n + 1) linear normal equations


    \sum_{k=0}^{n} a_k \int_a^b x^{j+k} \, dx = \int_a^b x^j f(x) \, dx, \quad \text{for each } j = 0, 1, \ldots, n, \qquad (8.6)

must be solved for the (n + 1) unknowns aj . The normal equations always have a unique
solution provided that f ∈ C[a, b]. (See Exercise 15.)

Example 1  Find the least squares approximating polynomial of degree 2 for the function f(x) = sin πx on the interval [0, 1].

Solution  The normal equations for P2(x) = a2x^2 + a1x + a0 are
    a_0\int_0^1 1\,dx + a_1\int_0^1 x\,dx + a_2\int_0^1 x^2\,dx = \int_0^1 \sin \pi x\,dx,

    a_0\int_0^1 x\,dx + a_1\int_0^1 x^2\,dx + a_2\int_0^1 x^3\,dx = \int_0^1 x\sin \pi x\,dx,

    a_0\int_0^1 x^2\,dx + a_1\int_0^1 x^3\,dx + a_2\int_0^1 x^4\,dx = \int_0^1 x^2\sin \pi x\,dx.

Performing the integration yields


    a_0 + \frac{1}{2}a_1 + \frac{1}{3}a_2 = \frac{2}{\pi}, \qquad \frac{1}{2}a_0 + \frac{1}{3}a_1 + \frac{1}{4}a_2 = \frac{1}{\pi}, \qquad \frac{1}{3}a_0 + \frac{1}{4}a_1 + \frac{1}{5}a_2 = \frac{\pi^2 - 4}{\pi^3}.


These three equations in three unknowns can be solved to obtain

    a_0 = \frac{12\pi^2 - 120}{\pi^3} \approx -0.050465 \quad \text{and} \quad a_1 = -a_2 = \frac{720 - 60\pi^2}{\pi^3} \approx 4.12251.

Consequently, the least squares polynomial approximation of degree 2 for f(x) = sin πx on [0, 1] is P2(x) = −4.12251x^2 + 4.12251x − 0.050465. (See Figure 8.7.)
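The 3 × 3 system above is small enough to check numerically. The following Python sketch (illustrative, not from the text) solves the normal equations and reproduces the coefficients:

```python
import numpy as np

# Normal-equation matrix for a quadratic on [0, 1]:
# entry (j, k) is the integral of x^(j+k) over [0, 1], i.e. 1/(j + k + 1).
A = np.array([[1.0 / (j + k + 1) for k in range(3)] for j in range(3)])

# Right-hand sides: the integrals of x^j * sin(pi x) over [0, 1], evaluated analytically.
b = np.array([2/np.pi, 1/np.pi, (np.pi**2 - 4)/np.pi**3])

a0, a1, a2 = np.linalg.solve(A, b)
print(a0, a1, a2)  # approximately -0.050465, 4.12251, -4.12251
```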

Figure 8.7  The graphs of y = sin πx and the least squares quadratic y = P2(x) on [0, 1].

Example 1 illustrates a difficulty in obtaining a least squares polynomial approximation. An (n + 1) × (n + 1) linear system for the unknowns a0, …, an must be solved, and the coefficients in the linear system are of the form

    \int_a^b x^{j+k} \, dx = \frac{b^{j+k+1} - a^{j+k+1}}{j+k+1},

a linear system that does not have an easily computed numerical solution. The matrix in the linear system is known as a Hilbert matrix, which is a classic example for demonstrating round-off error difficulties. (See Exercise 11 of Section 7.5.)

[Margin note: David Hilbert (1862–1943) was the dominant mathematician at the turn of the 20th century. He is best remembered for giving a talk at the International Congress of Mathematicians in Paris in 1900 in which he posed 23 problems that he thought would be important for mathematicians in the next century.]

Another disadvantage is similar to the situation that occurred when the Lagrange polynomials were first introduced in Section 3.1. The calculations performed in obtaining the best nth-degree polynomial Pn(x) do not lessen the amount of work required to obtain Pn+1(x), the polynomial of next higher degree.
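The ill-conditioning can be observed directly. This short Python check (a sketch, not part of the text) builds the Hilbert matrix, whose entries are the integrals above for [a, b] = [0, 1], and prints its condition number, which grows explosively with n:

```python
import numpy as np

def hilbert(n):
    # Entry (j, k) is the integral of x^(j+k) over [0, 1], i.e. 1/(j + k + 1).
    return np.array([[1.0 / (j + k + 1) for k in range(n)] for j in range(n)])

for n in (3, 6, 9, 12):
    # A huge condition number means round-off in the data is amplified
    # enormously when the normal equations are solved.
    print(n, np.linalg.cond(hilbert(n)))
```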

Linearly Independent Functions


A different technique to obtain least squares approximations will now be considered. This
turns out to be computationally efficient, and once Pn (x) is known, it is easy to determine
Pn+1 (x). To facilitate the discussion, we need some new concepts.

Definition 8.1 The set of functions {φ0 , . . . , φn } is said to be linearly independent on [a, b] if, whenever
c0 φ0 (x) + c1 φ1 (x) + · · · + cn φn (x) = 0, for all x ∈ [a, b],
we have c0 = c1 = · · · = cn = 0. Otherwise the set of functions is said to be linearly
dependent.


Theorem 8.2 Suppose that, for each j = 0, 1, . . . , n, φj (x) is a polynomial of degree j. Then {φ0 , . . . , φn }
is linearly independent on any interval [a, b].

Proof Let c0 , . . . , cn be real numbers for which

P(x) = c0 φ0 (x) + c1 φ1 (x) + · · · + cn φn (x) = 0, for all x ∈ [a, b].

The polynomial P(x) vanishes on [a, b], so it must be the zero polynomial, and the coefficients of all the powers of x are zero. In particular, the coefficient of x^n is zero. But cnφn(x) is the only term in P(x) that contains x^n, so we must have cn = 0. Hence
    P(x) = \sum_{j=0}^{n-1} c_j \phi_j(x).

In this representation of P(x), the only term that contains a power of x^{n-1} is c_{n-1}φ_{n-1}(x), so this term must also be zero and

    P(x) = \sum_{j=0}^{n-2} c_j \phi_j(x).

In like manner, the remaining constants cn−2 , cn−3 , . . . , c1 , c0 are all zero, which implies
that {φ0 , φ1 , . . . , φn } is linearly independent on [a, b].

Example 2  Let φ0(x) = 2, φ1(x) = x − 3, and φ2(x) = x^2 + 2x + 7, and Q(x) = a0 + a1x + a2x^2. Show that there exist constants c0, c1, and c2 such that Q(x) = c0φ0(x) + c1φ1(x) + c2φ2(x).

Solution  By Theorem 8.2, {φ0, φ1, φ2} is linearly independent on any interval [a, b]. First note that

    1 = \frac{1}{2}\phi_0(x), \qquad x = \phi_1(x) + 3 = \phi_1(x) + \frac{3}{2}\phi_0(x),

and

    x^2 = \phi_2(x) - 2x - 7 = \phi_2(x) - 2\left[\phi_1(x) + \frac{3}{2}\phi_0(x)\right] - 7\left[\frac{1}{2}\phi_0(x)\right] = \phi_2(x) - 2\phi_1(x) - \frac{13}{2}\phi_0(x).

Hence

    Q(x) = a_0\left[\frac{1}{2}\phi_0(x)\right] + a_1\left[\phi_1(x) + \frac{3}{2}\phi_0(x)\right] + a_2\left[\phi_2(x) - 2\phi_1(x) - \frac{13}{2}\phi_0(x)\right]
         = \left[\frac{1}{2}a_0 + \frac{3}{2}a_1 - \frac{13}{2}a_2\right]\phi_0(x) + \left[a_1 - 2a_2\right]\phi_1(x) + a_2\phi_2(x).
The situation illustrated in Example 2 holds in a much more general setting. Let Πn denote the set of all polynomials of degree at most n. The following result is used extensively in many applications of linear algebra. Its proof is considered in Exercise 13.

Theorem 8.3  Suppose that {φ0(x), φ1(x), …, φn(x)} is a collection of linearly independent polynomials in Πn. Then any polynomial in Πn can be written uniquely as a linear combination of φ0(x), φ1(x), …, φn(x).


Orthogonal Functions
To discuss general function approximation requires the introduction of the notions of weight
functions and orthogonality.

Definition 8.4  An integrable function w is called a weight function on the interval I if w(x) ≥ 0 for all x in I, but w(x) ≢ 0 on any subinterval of I.

The purpose of a weight function is to assign varying degrees of importance to approximations on certain portions of the interval. For example, the weight function

    w(x) = \frac{1}{\sqrt{1 - x^2}}

places less emphasis near the center of the interval (−1, 1) and more emphasis when |x| is near 1 (see Figure 8.8). This weight function is used in the next section.
Figure 8.8  The graph of the weight function w(x) = (1 − x^2)^{−1/2} on (−1, 1).

Suppose {φ0, φ1, …, φn} is a set of linearly independent functions on [a, b] and w is a weight function for [a, b]. Given f ∈ C[a, b], we seek a linear combination

    P(x) = \sum_{k=0}^{n} a_k \phi_k(x)

to minimize the error

    E = E(a_0, \ldots, a_n) = \int_a^b w(x)\left[f(x) - \sum_{k=0}^{n} a_k\phi_k(x)\right]^2 dx.

This problem reduces to the situation considered at the beginning of this section in the special case when w(x) ≡ 1 and φk(x) = x^k, for each k = 0, 1, …, n.
The normal equations associated with this problem are derived from the fact that for each j = 0, 1, …, n,

    0 = \frac{\partial E}{\partial a_j} = -2\int_a^b w(x)\left[f(x) - \sum_{k=0}^{n} a_k\phi_k(x)\right]\phi_j(x)\,dx.

The system of normal equations can be written

    \int_a^b w(x)f(x)\phi_j(x)\,dx = \sum_{k=0}^{n} a_k \int_a^b w(x)\phi_k(x)\phi_j(x)\,dx, \quad \text{for } j = 0, 1, \ldots, n.

If the functions φ0, φ1, …, φn can be chosen so that

    \int_a^b w(x)\phi_k(x)\phi_j(x)\,dx = \begin{cases} 0, & \text{when } j \ne k, \\ \alpha_j > 0, & \text{when } j = k, \end{cases} \qquad (8.7)

then the normal equations will reduce to

    \int_a^b w(x)f(x)\phi_j(x)\,dx = a_j \int_a^b w(x)[\phi_j(x)]^2\,dx = a_j\alpha_j,

for each j = 0, 1, …, n. These are easily solved to give

    a_j = \frac{1}{\alpha_j}\int_a^b w(x)f(x)\phi_j(x)\,dx.

[Margin note: The word orthogonal means right-angled. So in a sense, orthogonal functions are perpendicular to one another.]

Hence the least squares approximation problem is greatly simplified when the functions φ0, φ1, …, φn are chosen to satisfy the orthogonality condition in Eq. (8.7). The remainder of this section is devoted to studying collections of this type.


Definition 8.5  {φ0, φ1, …, φn} is said to be an orthogonal set of functions for the interval [a, b] with respect to the weight function w if

    \int_a^b w(x)\phi_k(x)\phi_j(x)\,dx = \begin{cases} 0, & \text{when } j \ne k, \\ \alpha_j > 0, & \text{when } j = k. \end{cases}

If, in addition, αj = 1 for each j = 0, 1, …, n, the set is said to be orthonormal.

This definition, together with the remarks preceding it, produces the following theorem.

Theorem 8.6  If {φ0, …, φn} is an orthogonal set of functions on an interval [a, b] with respect to the weight function w, then the least squares approximation to f on [a, b] with respect to w is

    P(x) = \sum_{j=0}^{n} a_j\phi_j(x),

where, for each j = 0, 1, …, n,

    a_j = \frac{\int_a^b w(x)\phi_j(x)f(x)\,dx}{\int_a^b w(x)[\phi_j(x)]^2\,dx} = \frac{1}{\alpha_j}\int_a^b w(x)\phi_j(x)f(x)\,dx.

Although Definition 8.5 and Theorem 8.6 allow for broad classes of orthogonal func-
tions, we will consider only orthogonal sets of polynomials. The next theorem, which is
based on the Gram-Schmidt process, describes how to construct orthogonal polynomials
on [a, b] with respect to a weight function w.

Theorem 8.7  The set of polynomial functions {φ0, φ1, …, φn} defined in the following way is orthogonal on [a, b] with respect to the weight function w:

    \phi_0(x) \equiv 1, \qquad \phi_1(x) = x - B_1, \quad \text{for each } x \text{ in } [a, b],

where

    B_1 = \frac{\int_a^b x\,w(x)[\phi_0(x)]^2\,dx}{\int_a^b w(x)[\phi_0(x)]^2\,dx},

and when k ≥ 2,

    \phi_k(x) = (x - B_k)\phi_{k-1}(x) - C_k\phi_{k-2}(x), \quad \text{for each } x \text{ in } [a, b],

where

    B_k = \frac{\int_a^b x\,w(x)[\phi_{k-1}(x)]^2\,dx}{\int_a^b w(x)[\phi_{k-1}(x)]^2\,dx}

and

    C_k = \frac{\int_a^b x\,w(x)\phi_{k-1}(x)\phi_{k-2}(x)\,dx}{\int_a^b w(x)[\phi_{k-2}(x)]^2\,dx}.

[Margin note: Erhard Schmidt (1876–1959) received his doctorate under the supervision of David Hilbert in 1905 for a problem involving integral equations. Schmidt published a paper in 1907 in which he gave what is now called the Gram-Schmidt process for constructing an orthonormal basis for a set of functions. This generalized results of Jorgen Pedersen Gram (1850–1916), who considered this problem when studying least squares. Laplace, however, presented a similar process much earlier than either Gram or Schmidt.]

Theorem 8.7 provides a recursive procedure for constructing a set of orthogonal polyno-
mials. The proof of this theorem follows by applying mathematical induction to the degree
of the polynomial φn (x).
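The recursion of Theorem 8.7 translates directly into code. The sketch below (illustrative function names; numerical quadrature stands in for the exact integrals) builds φ0, …, φn for a given weight function on [a, b]:

```python
from scipy.integrate import quad

def gram_schmidt_polys(a, b, n, w=lambda x: 1.0):
    """Return [phi_0, ..., phi_n] orthogonal on [a, b] w.r.t. weight w (Theorem 8.7)."""
    phis = [lambda x: 1.0]                       # phi_0(x) = 1
    B1 = quad(lambda x: x * w(x) * phis[0](x)**2, a, b)[0] / \
         quad(lambda x: w(x) * phis[0](x)**2, a, b)[0]
    phis.append(lambda x, B1=B1: x - B1)         # phi_1(x) = x - B_1
    for k in range(2, n + 1):
        p1, p2 = phis[k - 1], phis[k - 2]
        Bk = quad(lambda x: x * w(x) * p1(x)**2, a, b)[0] / \
             quad(lambda x: w(x) * p1(x)**2, a, b)[0]
        Ck = quad(lambda x: x * w(x) * p1(x) * p2(x), a, b)[0] / \
             quad(lambda x: w(x) * p2(x)**2, a, b)[0]
        # phi_k(x) = (x - B_k) phi_{k-1}(x) - C_k phi_{k-2}(x)
        phis.append(lambda x, Bk=Bk, Ck=Ck, p1=p1, p2=p2: (x - Bk)*p1(x) - Ck*p2(x))
    return phis
```

With a = −1, b = 1, and w(x) ≡ 1, this reproduces φ2(x) = x^2 − 1/3 and φ3(x) = x^3 − (3/5)x from the Legendre Illustration later in this section.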


Corollary 8.8 For any n > 0, the set of polynomial functions {φ0 , . . . , φn } given in Theorem 8.7 is linearly
independent on [a, b] and

    \int_a^b w(x)\phi_n(x)Q_k(x)\,dx = 0,

for any polynomial Qk (x) of degree k < n.

Proof For each k = 0, 1, . . . , n, φk (x) is a polynomial of degree k. So Theorem 8.2 implies


that {φ0 , . . . , φn } is a linearly independent set.
Let Qk (x) be a polynomial of degree k < n. By Theorem 8.3 there exist numbers
c0 , . . . , ck such that

    Q_k(x) = \sum_{j=0}^{k} c_j\phi_j(x).

Because φn is orthogonal to φj for each j = 0, 1, …, k, we have

    \int_a^b w(x)Q_k(x)\phi_n(x)\,dx = \sum_{j=0}^{k} c_j\int_a^b w(x)\phi_j(x)\phi_n(x)\,dx = \sum_{j=0}^{k} c_j \cdot 0 = 0.

Illustration The set of Legendre polynomials, {Pn (x)}, is orthogonal on [−1, 1] with respect to the
weight function w(x) ≡ 1. The classical definition of the Legendre polynomials requires
that Pn (1) = 1 for each n, and a recursive relation is used to generate the polynomials
when n ≥ 2. This normalization will not be needed in our discussion, and the least squares
approximating polynomials generated in either case are essentially the same.

Using the Gram-Schmidt process with P0 (x) ≡ 1 gives

    B_1 = \frac{\int_{-1}^{1} x\,dx}{\int_{-1}^{1} dx} = 0 \quad\text{and}\quad P_1(x) = (x - B_1)P_0(x) = x.

Also,

    B_2 = \frac{\int_{-1}^{1} x^3\,dx}{\int_{-1}^{1} x^2\,dx} = 0 \quad\text{and}\quad C_2 = \frac{\int_{-1}^{1} x^2\,dx}{\int_{-1}^{1} dx} = \frac{1}{3},

so

    P_2(x) = (x - B_2)P_1(x) - C_2P_0(x) = (x - 0)x - \frac{1}{3}\cdot 1 = x^2 - \frac{1}{3}.

The higher-degree Legendre polynomials shown in Figure 8.9 are derived in the same
manner. Although the integration can be tedious, it is not difficult with a Computer Algebra
System.


Figure 8.9  The graphs of the Legendre polynomials y = P1(x) through y = P5(x) on [−1, 1].

For example, the Maple command int is used to compute the integrals B3 and C3:

    B3 := int(x*(x^2 - 1/3)^2, x = -1..1)/int((x^2 - 1/3)^2, x = -1..1);
    C3 := int(x^2*(x^2 - 1/3), x = -1..1)/int(x^2, x = -1..1);

These return B3 = 0 and C3 = 4/15.

Thus

    P_3(x) = xP_2(x) - \frac{4}{15}P_1(x) = x^3 - \frac{1}{3}x - \frac{4}{15}x = x^3 - \frac{3}{5}x.

The next two Legendre polynomials are

    P_4(x) = x^4 - \frac{6}{7}x^2 + \frac{3}{35} \quad\text{and}\quad P_5(x) = x^5 - \frac{10}{9}x^3 + \frac{5}{21}x.

The Legendre polynomials were introduced in Section 4.7, where their roots, given on
page 232, were used as the nodes in Gaussian quadrature.
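With the orthogonal family in hand, the coefficients of Theorem 8.6 are single integrals. A short Python sketch (hypothetical helper names) approximates f(x) = e^x on [−1, 1] with the polynomials constructed in the Illustration:

```python
import math
from scipy.integrate import quad

# The orthogonal (monic Legendre) polynomials from the Illustration.
P = [lambda x: 1.0,
     lambda x: x,
     lambda x: x**2 - 1/3,
     lambda x: x**3 - (3/5)*x]

def ls_coeffs(f):
    # a_j = ∫ f(x) P_j(x) dx / ∫ P_j(x)^2 dx  (Theorem 8.6 with w ≡ 1 on [-1, 1])
    return [quad(lambda x: f(x) * p(x), -1, 1)[0] /
            quad(lambda x: p(x)**2, -1, 1)[0] for p in P]

coef = ls_coeffs(math.exp)
approx = lambda x: sum(c * p(x) for c, p in zip(coef, P))
print(coef[0])  # a_0 = sinh(1) ≈ 1.1752, the average value of e^x over [-1, 1]
```

Note that adding a fourth-degree term would require only one new coefficient; the first four are unchanged, which is exactly the computational advantage this section promised.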


EXERCISE SET 8.2

1. Find the linear least squares polynomial approximation to f(x) on the indicated interval if
   a. f(x) = x^2 + 3x + 2, [0, 1];    b. f(x) = x^3, [0, 2];
   c. f(x) = 1/x, [1, 3];             d. f(x) = e^x, [0, 2];
   e. f(x) = (1/2)cos x + (1/3)sin 2x, [0, 1];    f. f(x) = x ln x, [1, 3].
2. Find the linear least squares polynomial approximation on the interval [−1, 1] for the following functions.
   a. f(x) = x^2 − 2x + 3    b. f(x) = x^3
   c. f(x) = 1/(x + 2)       d. f(x) = e^x
   e. f(x) = (1/2)cos x + (1/3)sin 2x    f. f(x) = ln(x + 2)
3. Find the least squares polynomial approximation of degree two to the functions and intervals in
Exercise 1.
4. Find the least squares polynomial approximation of degree 2 on the interval [−1, 1] for the functions in Exercise 2.
5. Compute the error E for the approximations in Exercise 3.
6. Compute the error E for the approximations in Exercise 4.
7. Use the Gram-Schmidt process to construct φ0 (x), φ1 (x), φ2 (x), and φ3 (x) for the following intervals.
a. [0, 1] b. [0, 2] c. [1, 3]
8. Repeat Exercise 1 using the results of Exercise 7.
9. Obtain the least squares approximation polynomial of degree 3 for the functions in Exercise 1 using
the results of Exercise 7.
10. Repeat Exercise 3 using the results of Exercise 7.
11. Use the Gram-Schmidt procedure to calculate L1, L2, and L3, where {L0(x), L1(x), L2(x), L3(x)} is an orthogonal set of polynomials on (0, ∞) with respect to the weight function w(x) = e^{−x} and L0(x) ≡ 1. The polynomials obtained from this procedure are called the Laguerre polynomials.
12. Use the Laguerre polynomials calculated in Exercise 11 to compute the least squares polynomials of degree one, two, and three on the interval (0, ∞) with respect to the weight function w(x) = e^{−x} for the following functions:
    a. f(x) = x^2    b. f(x) = e^{−x}    c. f(x) = x^3    d. f(x) = e^{−2x}
13. Suppose {φ0, φ1, …, φn} is any linearly independent set in Πn. Show that for any element Q ∈ Πn, there exist unique constants c0, c1, …, cn such that

        Q(x) = \sum_{k=0}^{n} c_k \phi_k(x).

14. Show that if {φ0 , φ1 , . . . , φn } is an orthogonal set of functions on [a, b] with respect to the weight
function w, then {φ0 , φ1 , . . . , φn } is a linearly independent set.
15. Show that the normal equations (8.6) have a unique solution. [Hint: Show that the only solution for the function f(x) ≡ 0 is aj = 0, j = 0, 1, …, n. Multiply Eq. (8.6) by aj and sum over all j. Interchange the integral sign and the summation sign to obtain ∫_a^b [P(x)]^2 dx = 0. Thus, P(x) ≡ 0, so aj = 0, for j = 0, …, n. Hence, the coefficient matrix is nonsingular, and there is a unique solution to Eq. (8.6).]

8.3 Chebyshev Polynomials and Economization of Power Series


The Chebyshev polynomials {Tn(x)} are orthogonal on (−1, 1) with respect to the weight function w(x) = (1 − x^2)^{−1/2}. Although they can be derived by the method in the previous


section, it is easier to give their definition and then show that they satisfy the required orthogonality properties. For x ∈ [−1, 1], define

    T_n(x) = \cos[n \arccos x], \quad \text{for each } n \ge 0. \qquad (8.8)

It might not be obvious from this definition that for each n, Tn(x) is a polynomial in x, but we will now show this. First note that

    T_0(x) = \cos 0 = 1 \quad\text{and}\quad T_1(x) = \cos(\arccos x) = x.

For n ≥ 1, we introduce the substitution θ = arccos x to change this equation to

    T_n(\theta(x)) \equiv T_n(\theta) = \cos(n\theta), \quad \text{where } \theta \in [0, \pi].

[Margin note: Pafnuty Lvovich Chebyshev (1821–1894) did exceptional mathematical work in many areas, including applied mathematics, number theory, approximation theory, and probability. In 1852 he traveled from St. Petersburg to visit mathematicians in France, England, and Germany. Lagrange and Legendre had studied individual sets of orthogonal polynomials, but Chebyshev was the first to see the important consequences of studying the theory in general. He developed the Chebyshev polynomials to study least squares approximation and probability, then applied his results to interpolation, approximate quadrature, and other areas.]

A recurrence relation is derived by noting that

    T_{n+1}(\theta) = \cos((n+1)\theta) = \cos\theta\cos(n\theta) - \sin\theta\sin(n\theta)

and

    T_{n-1}(\theta) = \cos((n-1)\theta) = \cos\theta\cos(n\theta) + \sin\theta\sin(n\theta).

Adding these equations gives

    T_{n+1}(\theta) = 2\cos\theta\cos(n\theta) - T_{n-1}(\theta).

Returning to the variable x = cos θ, we have, for n ≥ 1,

Tn+1 (x) = 2x cos(n arccos x) − Tn−1 (x),

that is,

Tn+1 (x) = 2xTn (x) − Tn−1 (x). (8.9)

Because T0 (x) = 1 and T1 (x) = x, the recurrence relation implies that the next three
Chebyshev polynomials are

    T_2(x) = 2xT_1(x) - T_0(x) = 2x^2 - 1,

    T_3(x) = 2xT_2(x) - T_1(x) = 4x^3 - 3x,

and

    T_4(x) = 2xT_3(x) - T_2(x) = 8x^4 - 8x^2 + 1.

The recurrence relation also implies that when n ≥ 1, Tn(x) is a polynomial of degree n with leading coefficient 2^{n−1}. The graphs of T1, T2, T3, and T4 are shown in Figure 8.10.
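The recurrence (8.9) gives a fast way to evaluate Tn(x) without any trigonometry; the cosine form (8.8) then serves as an independent check. A Python sketch (the function name is illustrative):

```python
import math

def chebyshev_T(n, x):
    # Evaluate T_n(x) by the three-term recurrence T_{n+1} = 2x T_n - T_{n-1} (8.9),
    # starting from T_0(x) = 1 and T_1(x) = x.
    if n == 0:
        return 1.0
    t_prev, t_cur = 1.0, x
    for _ in range(n - 1):
        t_prev, t_cur = t_cur, 2*x*t_cur - t_prev
    return t_cur
```

For every x in [−1, 1] this agrees with cos(n arccos x), confirming that the trigonometric definition really is a polynomial.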


Figure 8.10  The graphs of the Chebyshev polynomials y = T1(x), y = T2(x), y = T3(x), and y = T4(x) on [−1, 1].

To show the orthogonality of the Chebyshev polynomials with respect to the weight function w(x) = (1 − x^2)^{−1/2}, consider

    \int_{-1}^{1} \frac{T_n(x)T_m(x)}{\sqrt{1 - x^2}}\,dx = \int_{-1}^{1} \frac{\cos(n\arccos x)\cos(m\arccos x)}{\sqrt{1 - x^2}}\,dx.

Reintroducing the substitution θ = arccos x gives

    d\theta = -\frac{1}{\sqrt{1 - x^2}}\,dx

and

    \int_{-1}^{1} \frac{T_n(x)T_m(x)}{\sqrt{1 - x^2}}\,dx = -\int_{\pi}^{0} \cos(n\theta)\cos(m\theta)\,d\theta = \int_{0}^{\pi} \cos(n\theta)\cos(m\theta)\,d\theta.

Suppose n ≠ m. Since

    \cos(n\theta)\cos(m\theta) = \frac{1}{2}\left[\cos((n+m)\theta) + \cos((n-m)\theta)\right],

we have

    \int_{-1}^{1} \frac{T_n(x)T_m(x)}{\sqrt{1 - x^2}}\,dx = \frac{1}{2}\int_0^{\pi} \cos((n+m)\theta)\,d\theta + \frac{1}{2}\int_0^{\pi} \cos((n-m)\theta)\,d\theta
        = \left[\frac{1}{2(n+m)}\sin((n+m)\theta) + \frac{1}{2(n-m)}\sin((n-m)\theta)\right]_0^{\pi} = 0.

By a similar technique (see Exercise 9), we also have


    \int_{-1}^{1} \frac{[T_n(x)]^2}{\sqrt{1 - x^2}}\,dx = \frac{\pi}{2}, \quad \text{for each } n \ge 1. \qquad (8.10)
The Chebyshev polynomials are used to minimize approximation error. We will see
how they are used to solve two problems of this type:


• an optimal placing of interpolating points to minimize the error in Lagrange interpolation;

• a means of reducing the degree of an approximating polynomial with minimal loss of


accuracy.

The next result concerns the zeros and extreme points of Tn (x).

Theorem 8.9  The Chebyshev polynomial Tn(x) of degree n ≥ 1 has n simple zeros in [−1, 1] at

    \bar{x}_k = \cos\left(\frac{2k-1}{2n}\pi\right), \quad \text{for each } k = 1, 2, \ldots, n.

Moreover, Tn(x) assumes its absolute extrema at

    \bar{x}'_k = \cos\left(\frac{k\pi}{n}\right), \quad \text{with } T_n(\bar{x}'_k) = (-1)^k, \quad \text{for each } k = 0, 1, \ldots, n.

Proof  Let

    \bar{x}_k = \cos\left(\frac{2k-1}{2n}\pi\right), \quad \text{for } k = 1, 2, \ldots, n.

Then

    T_n(\bar{x}_k) = \cos(n\arccos\bar{x}_k) = \cos\left(n\arccos\left(\cos\left(\frac{2k-1}{2n}\pi\right)\right)\right) = \cos\left(\frac{2k-1}{2}\pi\right) = 0.
But the x̄k are distinct (see Exercise 10) and Tn (x) is a polynomial of degree n, so all the
zeros of Tn (x) must have this form.
To show the second statement, first note that

    T'_n(x) = \frac{d}{dx}[\cos(n\arccos x)] = \frac{n\sin(n\arccos x)}{\sqrt{1 - x^2}},

and that, when k = 1, 2, …, n − 1,

    T'_n(\bar{x}'_k) = \frac{n\sin\left(n\arccos\left(\cos\left(\frac{k\pi}{n}\right)\right)\right)}{\sqrt{1 - \left[\cos\left(\frac{k\pi}{n}\right)\right]^2}} = \frac{n\sin(k\pi)}{\sin\left(\frac{k\pi}{n}\right)} = 0.

Since Tn (x) is a polynomial of degree n, its derivative Tn′ (x) is a polynomial of degree
(n − 1), and all the zeros of Tn′ (x) occur at these n − 1 distinct points (that they are distinct
is considered in Exercise 11). The only other possibilities for extrema of Tn (x) occur at the
endpoints of the interval [−1, 1]; that is, at x̄0′ = 1 and at x̄n′ = −1.
For any k = 0, 1, …, n we have

    T_n(\bar{x}'_k) = \cos\left(n\arccos\left(\cos\left(\frac{k\pi}{n}\right)\right)\right) = \cos(k\pi) = (-1)^k.
So a maximum occurs at each even value of k and a minimum at each odd value.
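Both formulas in Theorem 8.9 are easy to confirm numerically; the sketch below checks them for n = 5:

```python
import math

n = 5
Tn = lambda x: math.cos(n * math.acos(x))

# Zeros: x_k = cos((2k - 1)π / (2n)), k = 1, ..., n
zeros = [math.cos((2*k - 1) * math.pi / (2*n)) for k in range(1, n + 1)]

# Extrema: x'_k = cos(kπ / n), where T_n takes the value (-1)^k, k = 0, ..., n
extrema = [math.cos(k * math.pi / n) for k in range(n + 1)]

print([round(Tn(z), 12) for z in zeros])    # all (numerically) zero
print([round(Tn(x), 12) for x in extrema])  # alternating +1, -1
```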
The monic Chebyshev polynomials T̃n(x) (monic polynomials have leading coefficient 1) are derived from the Chebyshev polynomials Tn(x) by dividing by the leading coefficient 2^{n−1}. Hence

    \tilde{T}_0(x) = 1 \quad\text{and}\quad \tilde{T}_n(x) = \frac{1}{2^{n-1}}T_n(x), \quad \text{for each } n \ge 1. \qquad (8.11)


The recurrence relationship satisfied by the Chebyshev polynomials implies that

    \tilde{T}_2(x) = x\tilde{T}_1(x) - \frac{1}{2}\tilde{T}_0(x) \qquad (8.12)

and

    \tilde{T}_{n+1}(x) = x\tilde{T}_n(x) - \frac{1}{4}\tilde{T}_{n-1}(x), \quad \text{for each } n \ge 2.
The graphs of T˜1 , T˜2 , T˜3 , T˜4 , and T˜5 are shown in Figure 8.11.

Figure 8.11  The graphs of the monic Chebyshev polynomials y = T̃1(x) through y = T̃5(x) on [−1, 1].

Because T̃n(x) is just a multiple of Tn(x), Theorem 8.9 implies that the zeros of T̃n(x) also occur at

    \bar{x}_k = \cos\left(\frac{2k-1}{2n}\pi\right), \quad \text{for each } k = 1, 2, \ldots, n,

and the extreme values of T̃n(x), for n ≥ 1, occur at

    \bar{x}'_k = \cos\left(\frac{k\pi}{n}\right), \quad \text{with } \tilde{T}_n(\bar{x}'_k) = \frac{(-1)^k}{2^{n-1}}, \quad \text{for each } k = 0, 1, 2, \ldots, n. \qquad (8.13)
Let Π̃n denote the set of all monic polynomials of degree n. The relation expressed in Eq. (8.13) leads to an important minimization property that distinguishes T̃n(x) from the other members of Π̃n.

Theorem 8.10  The polynomials of the form T̃n(x), when n ≥ 1, have the property that

    \frac{1}{2^{n-1}} = \max_{x\in[-1,1]} |\tilde{T}_n(x)| \le \max_{x\in[-1,1]} |P_n(x)|, \quad \text{for all } P_n(x) \in \tilde{\Pi}_n.

Moreover, equality occurs only if Pn ≡ T̃n.


Proof  Suppose that Pn(x) ∈ Π̃n and that

    \max_{x\in[-1,1]} |P_n(x)| \le \frac{1}{2^{n-1}} = \max_{x\in[-1,1]} |\tilde{T}_n(x)|.

Let Q = T̃n − Pn. Then T̃n(x) and Pn(x) are both monic polynomials of degree n, so Q(x) is a polynomial of degree at most (n − 1). Moreover, at the n + 1 extreme points x̄k′ of T̃n(x), we have

    Q(\bar{x}'_k) = \tilde{T}_n(\bar{x}'_k) - P_n(\bar{x}'_k) = \frac{(-1)^k}{2^{n-1}} - P_n(\bar{x}'_k).

However,

    |P_n(\bar{x}'_k)| \le \frac{1}{2^{n-1}}, \quad \text{for each } k = 0, 1, \ldots, n,

so we have

    Q(\bar{x}'_k) \le 0 \text{ when } k \text{ is odd} \quad\text{and}\quad Q(\bar{x}'_k) \ge 0 \text{ when } k \text{ is even}.

Since Q is continuous, the Intermediate Value Theorem implies that for each j = 0, 1, …, n − 1 the polynomial Q(x) has at least one zero between x̄j′ and x̄j+1′. Thus, Q has at least n zeros in the interval [−1, 1]. But the degree of Q(x) is less than n, so Q ≡ 0. This implies that Pn ≡ T̃n.
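A numerical illustration of Theorem 8.10 (a sketch; the comparison polynomials are arbitrary choices): among monic quartics, T̃4 has the smallest maximum magnitude on [−1, 1], namely 1/2^3 = 0.125.

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 200001)

# Monic Chebyshev polynomial of degree 4: T~4(x) = T4(x)/2^3 = x^4 - x^2 + 1/8.
t4_monic = xs**4 - xs**2 + 0.125

# Two other monic quartics for comparison.
plain = xs**4                                    # max |x^4| on [-1, 1] is 1
equi = (xs + 1)*(xs + 1/3)*(xs - 1/3)*(xs - 1)   # roots at 4 equally spaced nodes

print(np.abs(t4_monic).max())   # 0.125
print(np.abs(equi).max())       # larger, as Theorem 8.10 guarantees
```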

Minimizing Lagrange Interpolation Error


Theorem 8.10 can be used to answer the question of where to place interpolating nodes
to minimize the error in Lagrange interpolation. Theorem 3.3 on page 112 applied to the
interval [−1, 1] states that, if x0 , . . . , xn are distinct numbers in the interval [−1, 1] and if
f ∈ C n+1 [−1, 1], then, for each x ∈ [−1, 1], a number ξ(x) exists in (−1, 1) with
    f(x) - P(x) = \frac{f^{(n+1)}(\xi(x))}{(n+1)!}(x - x_0)(x - x_1)\cdots(x - x_n),
where P(x) is the Lagrange interpolating polynomial. Generally, there is no control over
ξ(x), so to minimize the error by shrewd placement of the nodes x0 , . . . , xn , we choose
x0 , . . . , xn to minimize the quantity
|(x − x0 )(x − x1 ) · · · (x − xn )|
throughout the interval [−1, 1].
Since (x − x0)(x − x1)···(x − xn) is a monic polynomial of degree (n + 1), we have just seen that the minimum is obtained when

    (x - x_0)(x - x_1)\cdots(x - x_n) = \tilde{T}_{n+1}(x).

The maximum value of |(x − x0)(x − x1)···(x − xn)| is smallest when xk is chosen, for each k = 0, 1, …, n, to be the (k + 1)st zero of T̃n+1. Hence we choose xk to be

    \bar{x}_{k+1} = \cos\left(\frac{2k+1}{2(n+1)}\pi\right).

Because max_{x∈[−1,1]} |T̃n+1(x)| = 2^{−n}, this also implies that

    \frac{1}{2^n} = \max_{x\in[-1,1]} |(x - \bar{x}_1)\cdots(x - \bar{x}_{n+1})| \le \max_{x\in[-1,1]} |(x - x_0)\cdots(x - x_n)|,
for any choice of x0, x1, . . . , xn in the interval [−1, 1]. The next corollary follows from these observations.
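The minimax property just derived is easy to check numerically. The sketch below (an illustration added here, not from the text; it uses NumPy and approximates each maximum on a fine grid) compares max |(x − x0) · · · (x − xn)| for Chebyshev nodes against equally spaced nodes when n = 3:

```python
import numpy as np

def chebyshev_nodes(n):
    """The n + 1 zeros of T_{n+1}(x): x_k = cos((2k + 1)pi / (2(n + 1)))."""
    k = np.arange(n + 1)
    return np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))

def max_nodal_product(nodes, samples=200001):
    """Approximate max over [-1, 1] of |(x - x_0)(x - x_1)...(x - x_n)|."""
    x = np.linspace(-1.0, 1.0, samples)
    prod = np.ones_like(x)
    for xk in nodes:
        prod *= x - xk
    return np.abs(prod).max()

n = 3
cheb_max = max_nodal_product(chebyshev_nodes(n))          # ~ 1/2**n = 0.125
equal_max = max_nodal_product(np.linspace(-1, 1, n + 1))  # larger, ~ 0.198
print(cheb_max, equal_max)
```

The Chebyshev nodes attain approximately 2^{−3} = 0.125, the theoretical minimum, while equal spacing does measurably worse.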
Copyright 2010 Cengage Learning. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part. Due to electronic rights, some third party content may be suppressed from the eBook and/or eChapter(s).
Editorial review has deemed that any suppressed content does not materially affect the overall learning experience. Cengage Learning reserves the right to remove additional content at any time if subsequent rights restrictions require it.
524 CHAPTER 8 Approximation Theory
Corollary 8.11   Suppose that P(x) is the interpolating polynomial of degree at most n with nodes at the zeros of Tn+1(x). Then

    max_{x∈[−1,1]} |f(x) − P(x)| ≤ [1 / (2^n (n + 1)!)] max_{x∈[−1,1]} |f^(n+1)(x)|,   for each f ∈ C^{n+1}[−1, 1].
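As a concrete check of this corollary (a sketch added for illustration, not from the text), take f(x) = e^x and n = 3; then max |f⁗| on [−1, 1] is e, so the corollary promises a maximum interpolation error of at most e/(2^3 · 4!) = e/192 ≈ 0.0142:

```python
import math

import numpy as np

f = np.exp
n = 3
k = np.arange(n + 1)
nodes = np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))  # zeros of T_4

# Degree-3 interpolating polynomial through the four Chebyshev nodes.
coeffs = np.polyfit(nodes, f(nodes), n)

x = np.linspace(-1.0, 1.0, 20001)
err = np.max(np.abs(f(x) - np.polyval(coeffs, x)))
bound = math.e / (2**n * math.factorial(n + 1))  # = e/192, since max |f''''| = e
print(err <= bound)  # the observed error respects the corollary's bound
```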
Minimizing Approximation Error on Arbitrary Intervals

The technique for choosing points to minimize the interpolating error is extended to a general closed interval [a, b] by using the change of variables

    x̃ = (1/2)[(b − a)x + a + b]

to transform the numbers x̄k in the interval [−1, 1] into the corresponding numbers x̃k in the interval [a, b], as shown in the next example.

Example 1   Let f(x) = xe^x on [0, 1.5]. Compare the values given by the Lagrange polynomial with four equally spaced nodes with those given by the Lagrange polynomial with nodes given by the zeros of the fourth Chebyshev polynomial.
Solution   The equally spaced nodes x0 = 0, x1 = 0.5, x2 = 1, and x3 = 1.5 give

    L0(x) = −1.3333x^3 + 4.0000x^2 − 3.6667x + 1,
    L1(x) = 4.0000x^3 − 10.000x^2 + 6.0000x,
    L2(x) = −4.0000x^3 + 8.0000x^2 − 3.0000x,
    L3(x) = 1.3333x^3 − 2.000x^2 + 0.66667x,

which produces the polynomial

    P3(x) = L0(x)(0) + L1(x)(0.5e^{0.5}) + L2(x)e^1 + L3(x)(1.5e^{1.5})
          = 1.3875x^3 + 0.057570x^2 + 1.2730x.
For the second interpolating polynomial, we shift the zeros x̄k = cos((2k + 1)π/8), for k = 0, 1, 2, 3, of T̃4 from [−1, 1] to [0, 1.5], using the linear transformation

    x̃k = (1/2)[(1.5 − 0)x̄k + (1.5 + 0)] = 0.75 + 0.75x̄k.

Because

    x̄0 = cos(π/8) = 0.92388,    x̄1 = cos(3π/8) = 0.38268,
    x̄2 = cos(5π/8) = −0.38268,   and   x̄3 = cos(7π/8) = −0.92388,

we have

    x̃0 = 1.44291,   x̃1 = 1.03701,   x̃2 = 0.46299,   and   x̃3 = 0.05709.
The Lagrange coefficient polynomials for this set of nodes are

    L̃0(x) = 1.8142x^3 − 2.8249x^2 + 1.0264x − 0.049728,
    L̃1(x) = −4.3799x^3 + 8.5977x^2 − 3.4026x + 0.16705,
    L̃2(x) = 4.3799x^3 − 11.112x^2 + 7.1738x − 0.37415,
    L̃3(x) = −1.8142x^3 + 5.3390x^2 − 4.7976x + 1.2568.
8.3 Chebyshev Polynomials and Economization of Power Series 525

The functional values required for these polynomials are given in the last two columns of Table 8.7. The interpolation polynomial of degree at most 3 is

    P̃3(x) = 1.3811x^3 + 0.044652x^2 + 1.3031x − 0.014352.
Table 8.7
    x          f(x) = xe^x    x̃             f(x̃) = x̃e^{x̃}
    x0 = 0.0   0.00000        x̃0 = 1.44291   6.10783
    x1 = 0.5   0.824361       x̃1 = 1.03701   2.92517
    x2 = 1.0   2.71828        x̃2 = 0.46299   0.73560
    x3 = 1.5   6.72253        x̃3 = 0.05709   0.060444
For comparison, Table 8.8 lists various values of x, together with the values of f(x), P3(x), and P̃3(x). It can be seen from this table that, although the error using P3(x) is less than the error using P̃3(x) near the middle of the table, the maximum error involved with using P̃3(x), 0.0180, is considerably less than the maximum error using P3(x), which is 0.0290. (See Figure 8.12.)
Table 8.8
    x      f(x) = xe^x   P3(x)    |xe^x − P3(x)|   P̃3(x)    |xe^x − P̃3(x)|
    0.15   0.1743        0.1969   0.0226           0.1868    0.0125
    0.25   0.3210        0.3435   0.0225           0.3358    0.0148
    0.35   0.4967        0.5121   0.0154           0.5064    0.0097
    0.65   1.245         1.233    0.012            1.231     0.014
    0.75   1.588         1.572    0.016            1.571     0.017
    0.85   1.989         1.976    0.013            1.974     0.015
    1.15   3.632         3.650    0.018            3.644     0.012
    1.25   4.363         4.391    0.028            4.382     0.019
    1.35   5.208         5.237    0.029            5.224     0.016
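The comparison in Table 8.8 can be reproduced numerically. The following sketch (added for illustration; it builds a simple Lagrange evaluator in NumPy and estimates each maximum error on a fine grid) confirms that the Chebyshev-node interpolant has the smaller worst-case error on [0, 1.5]:

```python
import numpy as np

def lagrange_interp(nodes, fvals, x):
    """Evaluate the Lagrange interpolating polynomial at the points x."""
    result = np.zeros_like(x, dtype=float)
    for k, xk in enumerate(nodes):
        Lk = np.ones_like(x, dtype=float)
        for j, xj in enumerate(nodes):
            if j != k:
                Lk *= (x - xj) / (xk - xj)
        result += fvals[k] * Lk
    return result

f = lambda t: t * np.exp(t)
a, b = 0.0, 1.5

equal_nodes = np.linspace(a, b, 4)
k = np.arange(4)
cheb_ref = np.cos((2 * k + 1) * np.pi / 8)       # zeros of T_4 on [-1, 1]
cheb_nodes = 0.5 * ((b - a) * cheb_ref + a + b)  # shifted to [0, 1.5]

x = np.linspace(a, b, 5001)
err_equal = np.max(np.abs(f(x) - lagrange_interp(equal_nodes, f(equal_nodes), x)))
err_cheb = np.max(np.abs(f(x) - lagrange_interp(cheb_nodes, f(cheb_nodes), x)))
print(err_equal, err_cheb)  # the Chebyshev nodes give the smaller maximum error
```

The grid maxima come out near 0.03 for the equally spaced nodes and near 0.02 for the Chebyshev nodes, in line with the table.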

Figure 8.12
y

6
!
y = P3(x)
5 y " xe x

0.5 1.0 1.5 x

Reducing the Degree of Approximating Polynomials

Chebyshev polynomials can also be used to reduce the degree of an approximating polynomial with a minimal loss of accuracy. Because the Chebyshev polynomials have a minimum maximum-absolute value that is spread uniformly on an interval, they can be used to reduce the degree of an approximating polynomial without exceeding the error tolerance.
Consider approximating an arbitrary nth-degree polynomial

    Pn(x) = an x^n + an−1 x^{n−1} + · · · + a1 x + a0

on [−1, 1] with a polynomial of degree at most n − 1. The object is to choose Pn−1(x) in Πn−1 so that

    max_{x∈[−1,1]} |Pn(x) − Pn−1(x)|

is as small as possible.
We first note that (Pn(x) − Pn−1(x))/an is a monic polynomial of degree n, so applying Theorem 8.10 gives

    max_{x∈[−1,1]} |(1/an)(Pn(x) − Pn−1(x))| ≥ 1/2^{n−1}.

Equality occurs precisely when

    (1/an)(Pn(x) − Pn−1(x)) = T̃n(x).

This means that we should choose

    Pn−1(x) = Pn(x) − an T̃n(x),

and with this choice we have the minimum value of

    max_{x∈[−1,1]} |Pn(x) − Pn−1(x)| = |an| · max_{x∈[−1,1]} |(1/an)(Pn(x) − Pn−1(x))| = |an| / 2^{n−1}.
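This degree-reduction step is mechanical, so it can be automated. Below is a sketch (an illustration added here, not the book's code; the helper names are mine) that represents a polynomial as a NumPy coefficient array, highest degree first, and performs one economization step Pn−1 = Pn − an T̃n, also returning the error |an|/2^{n−1} the step introduces on [−1, 1]:

```python
import numpy as np

def monic_chebyshev(n):
    """Coefficients of the monic Chebyshev polynomial T~_n, highest degree first."""
    cheb = np.polynomial.chebyshev.Chebyshev.basis(n)
    poly = cheb.convert(kind=np.polynomial.Polynomial)
    monic = poly.coef / poly.coef[-1]  # leading coefficient of T_n is 2^(n-1)
    return monic[::-1]

def economize(coeffs):
    """One economization step: P_{n-1} = P_n - a_n * T~_n.

    coeffs holds a degree-n polynomial, highest degree first.
    Returns (reduced coefficients, error added on [-1, 1])."""
    n = len(coeffs) - 1
    a_n = coeffs[0]
    reduced = np.polysub(coeffs, a_n * monic_chebyshev(n))[1:]
    return reduced, abs(a_n) / 2 ** (n - 1)

# Fourth Maclaurin polynomial of e^x: x^4/24 + x^3/6 + x^2/2 + x + 1.
p4 = np.array([1 / 24, 1 / 6, 1 / 2, 1.0, 1.0])
p3, added = economize(p4)
print(p3, added)  # [1/6, 13/24, 1, 191/192] and 1/192
```

Applied to the fourth Maclaurin polynomial of e^x, one step reproduces the cubic derived in the Illustration that follows.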
Illustration   The function f(x) = e^x is approximated on the interval [−1, 1] by the fourth Maclaurin polynomial

    P4(x) = 1 + x + x^2/2 + x^3/6 + x^4/24,

which has truncation error

    |R4(x)| = |f^(5)(ξ(x))| |x|^5 / 120 ≤ e/120 ≈ 0.023,   for −1 ≤ x ≤ 1.

Suppose that an error of 0.05 is tolerable and that we would like to reduce the degree of the approximating polynomial while staying within this bound.
The polynomial of degree 3 or less that best uniformly approximates P4(x) on [−1, 1] is

    P3(x) = P4(x) − a4 T̃4(x) = 1 + x + x^2/2 + x^3/6 + x^4/24 − (1/24)(x^4 − x^2 + 1/8)
          = 191/192 + x + (13/24)x^2 + (1/6)x^3.
With this choice, we have

    |P4(x) − P3(x)| = |a4 T̃4(x)| ≤ (1/24) · (1/2^3) = 1/192 ≈ 0.0053.

Adding this error bound to the bound for the Maclaurin truncation error gives

    0.023 + 0.0053 = 0.0283,

which is within the permissible error of 0.05.
The polynomial of degree 2 or less that best uniformly approximates P3(x) on [−1, 1] is

    P2(x) = P3(x) − (1/6)T̃3(x) = 191/192 + x + (13/24)x^2 + (1/6)x^3 − (1/6)(x^3 − (3/4)x)
          = 191/192 + (9/8)x + (13/24)x^2.
However,

    |P3(x) − P2(x)| = |(1/6)T̃3(x)| = (1/6) · (1/2^2) = 1/24 ≈ 0.042,

which, when added to the already accumulated error bound of 0.0283, exceeds the tolerance of 0.05. Consequently, the polynomial of least degree that best approximates e^x on [−1, 1] with an error bound of less than 0.05 is

    P3(x) = 191/192 + x + (13/24)x^2 + (1/6)x^3.

Table 8.9 lists the function and the approximating polynomials at various points in [−1, 1]. Note that the tabulated entries for P2 are well within the tolerance of 0.05, even though the error bound for P2(x) exceeded the tolerance.
Table 8.9
    x       e^x       P4(x)     P3(x)     P2(x)     |e^x − P2(x)|
    −0.75   0.47237   0.47412   0.47917   0.45573   0.01664
    −0.25   0.77880   0.77881   0.77604   0.74740   0.03140
    0.00    1.00000   1.00000   0.99479   0.99479   0.00521
    0.25    1.28403   1.28402   1.28125   1.30990   0.02587
    0.75    2.11700   2.11475   2.11979   2.14323   0.02623
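A grid check of the Illustration's polynomials (a sketch added here; sampling at 20001 points stands in for the true maximum) makes the last remark concrete: P3 stays well inside the 0.05 tolerance, while the worst error of P2 in fact slightly exceeds 0.05 near x = 1, which is consistent with its error bound having been rejected even though the tabulated values look fine:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 20001)
P3 = 191/192 + x + (13/24) * x**2 + (1/6) * x**3
P2 = 191/192 + (9/8) * x + (13/24) * x**2

err3 = np.max(np.abs(np.exp(x) - P3))
err2 = np.max(np.abs(np.exp(x) - P2))
print(err3, err2)  # about 0.015 and 0.057, both maxima occurring at x = 1
```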
E X E R C I S E S E T 8.3
1. Use the zeros of T̃3 to construct an interpolating polynomial of degree 2 for the following functions
on the interval [−1, 1].
a. f(x) = e^x      b. f(x) = sin x      c. f(x) = ln(x + 2)      d. f(x) = x^4
2. Use the zeros of T̃4 to construct an interpolating polynomial of degree 3 for the functions in Exercise 1.
3. Find a bound for the maximum error of the approximation in Exercise 1 on the interval [−1, 1].
4. Repeat Exercise 3 for the approximations computed in Exercise 2.
5. Use the zeros of T̃3 and transformations of the given interval to construct an interpolating polynomial of degree 2 for the following functions.
   a. f(x) = 1/x, [1, 3]                              b. f(x) = e^{−x}, [0, 2]
   c. f(x) = (1/2) cos x + (1/3) sin 2x, [0, 1]       d. f(x) = x ln x, [1, 3]
6. Find the sixth Maclaurin polynomial for xex , and use Chebyshev economization to obtain a lesser-
degree polynomial approximation while keeping the error less than 0.01 on [−1, 1].
7. Find the sixth Maclaurin polynomial for sin x, and use Chebyshev economization to obtain a lesser-
degree polynomial approximation while keeping the error less than 0.01 on [−1, 1].
8. Show that for any positive integers i and j with i > j, we have Ti(x)Tj(x) = (1/2)[Ti+j(x) + Ti−j(x)].
9. Show that for each Chebyshev polynomial Tn(x), we have

       ∫_{−1}^{1} [Tn(x)]^2 / √(1 − x^2) dx = π/2.
10. Show that for each n, the Chebyshev polynomial Tn (x) has n distinct zeros in (−1, 1).
11. Show that for each n, the derivative of the Chebyshev polynomial Tn (x) has n − 1 distinct zeros
in (−1, 1).
8.4 Rational Function Approximation
The class of algebraic polynomials has some distinct advantages for use in approximation:

• There are a sufficient number of polynomials to approximate any continuous function on a closed interval to within an arbitrary tolerance;

• Polynomials are easily evaluated at arbitrary values; and

• The derivatives and integrals of polynomials exist and are easily determined.
The disadvantage of using polynomials for approximation is their tendency for oscil-
lation. This often causes error bounds in polynomial approximation to significantly exceed
the average approximation error, because error bounds are determined by the maximum
approximation error. We now consider methods that spread the approximation error more
evenly over the approximation interval. These techniques involve rational functions.
A rational function r of degree N has the form

    r(x) = p(x)/q(x),

where p(x) and q(x) are polynomials whose degrees sum to N.
Every polynomial is a rational function (simply let q(x) ≡ 1), so approximation by
rational functions gives results that are no worse than approximation by polynomials. How-
ever, rational functions whose numerator and denominator have the same or nearly the same
degree often produce approximation results superior to polynomial methods for the same
amount of computation effort. (This statement is based on the assumption that the amount
of computation effort required for division is approximately the same as for multiplication.)
Rational functions have the added advantage of permitting efficient approximation
of functions with infinite discontinuities near, but outside, the interval of approximation.
Polynomial approximation is generally unacceptable in this situation.