8.2 Orthogonal Polynomials and Least Squares Approximation

W       R       W       R       W       R       W       R       W       R
0.017 0.154 0.025 0.23 0.020 0.181 0.020 0.180 0.025 0.234
0.087 0.296 0.111 0.357 0.085 0.260 0.119 0.299 0.233 0.537
0.174 0.363 0.211 0.366 0.171 0.334 0.210 0.428 0.783 1.47
1.11 0.531 0.999 0.771 1.29 0.87 1.32 1.15 1.35 2.48
1.74 2.23 3.02 2.01 3.04 3.59 3.34 2.83 1.69 1.44
4.09 3.58 4.28 3.28 4.29 3.40 5.48 4.15 2.75 1.84
5.45 3.52 4.58 2.96 5.30 3.88 4.83 4.66
5.96 2.40 4.68 5.10 5.53 6.94
14. Show that the normal equations (8.3) resulting from discrete least squares approximation yield a symmetric and nonsingular matrix and hence have a unique solution. [Hint: Let A = (aij), where
$$a_{ij} = \sum_{k=1}^{m} x_k^{i+j-2}$$
and x1, x2, . . . , xm are distinct with n < m − 1. Suppose A is singular and that c ≠ 0 is such that cᵗAc = 0. Show that the nth-degree polynomial whose coefficients are the coordinates of c has more than n roots, and use this to establish a contradiction.]
$$\frac{\partial E}{\partial a_j} = 0, \quad\text{for each } j = 0, 1, \ldots, n.$$
Copyright 2010 Cengage Learning. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part. Due to electronic rights, some third party content may be suppressed from the eBook and/or eChapter(s).
Editorial review has deemed that any suppressed content does not materially affect the overall learning experience. Cengage Learning reserves the right to remove additional content at any time if subsequent rights restrictions require it.
Figure 8.6 (plot of the function f(x) and the least squares polynomial $P_n(x) = \sum_{k=0}^{n} a_k x^k$ on [a, b], with the squared deviation $\bigl(f(x) - \sum_{k=0}^{n} a_k x^k\bigr)^2$ indicated)
Since
$$E = \int_a^b [f(x)]^2\,dx - 2\sum_{k=0}^{n} a_k \int_a^b x^k f(x)\,dx + \int_a^b \left(\sum_{k=0}^{n} a_k x^k\right)^2 dx,$$
we have
$$\frac{\partial E}{\partial a_j} = -2\int_a^b x^j f(x)\,dx + 2\sum_{k=0}^{n} a_k \int_a^b x^{j+k}\,dx.$$
Hence, to find Pn(x), the (n + 1) linear normal equations
$$\sum_{k=0}^{n} a_k \int_a^b x^{j+k}\,dx = \int_a^b x^j f(x)\,dx, \quad\text{for each } j = 0, 1, \ldots, n,$$
must be solved for the (n + 1) unknowns aj. The normal equations always have a unique solution provided that f ∈ C[a, b]. (See Exercise 15.)
Example 1 Find the least squares approximating polynomial of degree 2 for the function f (x) = sin π x
on the interval [0, 1].
Solution The normal equations for P2(x) = a2x² + a1x + a0 are
$$a_0\int_0^1 1\,dx + a_1\int_0^1 x\,dx + a_2\int_0^1 x^2\,dx = \int_0^1 \sin \pi x\,dx,$$
$$a_0\int_0^1 x\,dx + a_1\int_0^1 x^2\,dx + a_2\int_0^1 x^3\,dx = \int_0^1 x\sin \pi x\,dx,$$
$$a_0\int_0^1 x^2\,dx + a_1\int_0^1 x^3\,dx + a_2\int_0^1 x^4\,dx = \int_0^1 x^2\sin \pi x\,dx.$$
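Evaluating these integrals produces a 3 × 3 linear system for a0, a1, a2. The system can be checked numerically; the sketch below is our own check, not part of the text, and uses numpy with the right-hand-side integrals evaluated analytically as 2/π, 1/π, and (π² − 4)/π³:

```python
import numpy as np

# Coefficient matrix: entry (j, k) is the integral of x^(j+k) over [0, 1]
H = np.array([[1.0/(j + k + 1) for k in range(3)] for j in range(3)])

# Right-hand side: integral of x^j * sin(pi x) over [0, 1], done analytically
b = np.array([2/np.pi, 1/np.pi, (np.pi**2 - 4)/np.pi**3])

a0, a1, a2 = np.linalg.solve(H, b)
# Least squares quadratic: P2(x) = a2*x^2 + a1*x + a0
```

Solving gives a0 ≈ −0.0505 and a1 = −a2 ≈ 4.1225, so P2(x) ≈ −4.1225x² + 4.1225x − 0.0505, symmetric about x = 1/2 as expected for sin πx.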
512 CHAPTER 8 Approximation Theory
Figure 8.7 (graphs of y = sin πx and the least squares approximation y = P2(x) on [0, 1])
Definition 8.1 The set of functions {φ0 , . . . , φn } is said to be linearly independent on [a, b] if, whenever
c0 φ0 (x) + c1 φ1 (x) + · · · + cn φn (x) = 0, for all x ∈ [a, b],
we have c0 = c1 = · · · = cn = 0. Otherwise the set of functions is said to be linearly
dependent.
Theorem 8.2 Suppose that, for each j = 0, 1, . . . , n, φj (x) is a polynomial of degree j. Then {φ0 , . . . , φn }
is linearly independent on any interval [a, b].
Proof Suppose c0, . . . , cn are real numbers for which
$$P(x) = c_0\varphi_0(x) + c_1\varphi_1(x) + \cdots + c_n\varphi_n(x) = 0, \quad\text{for all } x \in [a, b].$$
The polynomial P(x) vanishes on [a, b], so it must be the zero polynomial, and the coefficients of all the powers of x are zero. In particular, the coefficient of x^n is zero. But cnφn(x) is the only term in P(x) that contains x^n, so we must have cn = 0. Hence
$$P(x) = \sum_{j=0}^{n-1} c_j \varphi_j(x).$$
In this representation of P(x), the only term that contains x^{n−1} is c_{n−1}φ_{n−1}(x), so this term must also be zero, and
$$P(x) = \sum_{j=0}^{n-2} c_j \varphi_j(x).$$
In like manner, the remaining constants cn−2 , cn−3 , . . . , c1 , c0 are all zero, which implies
that {φ0 , φ1 , . . . , φn } is linearly independent on [a, b].
Orthogonal Functions
Discussing general function approximation requires introducing the notions of weight functions and orthogonality.
Definition 8.4 An integrable function w is called a weight function on the interval I if w(x) ≥ 0, for all x in I, but w(x) ≢ 0 on any subinterval of I.
Definition 8.5 {φ0, φ1, . . . , φn} is said to be an orthogonal set of functions for the interval [a, b] with respect to the weight function w if
$$\int_a^b w(x)\varphi_k(x)\varphi_j(x)\,dx = \begin{cases} 0, & \text{when } j \neq k,\\ \alpha_j > 0, & \text{when } j = k. \end{cases}$$
This definition, together with the remarks preceding it, produces the following theorem.
Theorem 8.6 If {φ0, . . . , φn} is an orthogonal set of functions on an interval [a, b] with respect to the weight function w, then the least squares approximation to f on [a, b] with respect to w is
$$P(x) = \sum_{j=0}^{n} a_j \varphi_j(x),$$
where, for each j = 0, 1, . . . , n,
$$a_j = \frac{\int_a^b w(x)\varphi_j(x) f(x)\,dx}{\int_a^b w(x)[\varphi_j(x)]^2\,dx} = \frac{1}{\alpha_j}\int_a^b w(x)\varphi_j(x) f(x)\,dx.$$
Although Definition 8.5 and Theorem 8.6 allow for broad classes of orthogonal func-
tions, we will consider only orthogonal sets of polynomials. The next theorem, which is
based on the Gram-Schmidt process, describes how to construct orthogonal polynomials
on [a, b] with respect to a weight function w.
Theorem 8.7 The set of polynomial functions {φ0, φ1, . . . , φn} defined in the following way is orthogonal on [a, b] with respect to the weight function w:
$$\varphi_0(x) \equiv 1, \qquad \varphi_1(x) = x - B_1, \quad\text{for each } x \text{ in } [a, b],$$
where
$$B_1 = \frac{\int_a^b x\,w(x)[\varphi_0(x)]^2\,dx}{\int_a^b w(x)[\varphi_0(x)]^2\,dx},$$
and when k ≥ 2,
$$\varphi_k(x) = (x - B_k)\varphi_{k-1}(x) - C_k \varphi_{k-2}(x), \quad\text{for each } x \text{ in } [a, b],$$
where
$$B_k = \frac{\int_a^b x\,w(x)[\varphi_{k-1}(x)]^2\,dx}{\int_a^b w(x)[\varphi_{k-1}(x)]^2\,dx}$$
and
$$C_k = \frac{\int_a^b x\,w(x)\varphi_{k-1}(x)\varphi_{k-2}(x)\,dx}{\int_a^b w(x)[\varphi_{k-2}(x)]^2\,dx}.$$

Erhard Schmidt (1876–1959) received his doctorate under the supervision of David Hilbert in 1905 for a problem involving integral equations. Schmidt published a paper in 1907 in which he gave what is now called the Gram-Schmidt process for constructing an orthonormal basis for a set of functions. This generalized results of Jorgen Pedersen Gram (1850–1916), who considered this problem when studying least squares. Laplace, however, presented a similar process much earlier than either Gram or Schmidt.
Theorem 8.7 provides a recursive procedure for constructing a set of orthogonal polyno-
mials. The proof of this theorem follows by applying mathematical induction to the degree
of the polynomial φn (x).
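The recurrence in Theorem 8.7 translates directly into a short program. The sketch below is our own; the function names and the coefficient-list representation are our conventions, and the weight is fixed at w(x) ≡ 1 so that exact rational arithmetic can be used:

```python
from fractions import Fraction

def poly_mul(p, q):
    # product of coefficient lists (index i = coefficient of x^i)
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def poly_axpy(p, c, q):
    # p + c*q, padded to a common length
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [pi + c * qi for pi, qi in zip(p, q)]

def poly_int(p, a, b):
    # exact integral of the polynomial p over [a, b]
    a, b = Fraction(a), Fraction(b)
    return sum(c * (b**(i + 1) - a**(i + 1)) / (i + 1) for i, c in enumerate(p))

def orthogonal_polys(n, a, b):
    # Theorem 8.7 recurrence with weight w(x) = 1
    inner = lambda p, q: poly_int(poly_mul(p, q), a, b)
    X = [Fraction(0), Fraction(1)]            # the polynomial x
    phi = [[Fraction(1)]]                     # phi_0 = 1
    B1 = inner(poly_mul(X, phi[0]), phi[0]) / inner(phi[0], phi[0])
    phi.append([-B1, Fraction(1)])            # phi_1 = x - B_1
    for k in range(2, n + 1):
        Bk = inner(poly_mul(X, phi[k-1]), phi[k-1]) / inner(phi[k-1], phi[k-1])
        Ck = inner(poly_mul(X, phi[k-1]), phi[k-2]) / inner(phi[k-2], phi[k-2])
        pk = poly_mul([-Bk, Fraction(1)], phi[k-1])   # (x - B_k) * phi_{k-1}
        phi.append(poly_axpy(pk, -Ck, phi[k-2]))      # ... - C_k * phi_{k-2}
    return phi

# On [-1, 1] this reproduces the monic Legendre polynomials of the
# Illustration below: phi_2 = x^2 - 1/3 and phi_3 = x^3 - (3/5)x.
legendre = orthogonal_polys(3, -1, 1)
```

Because all inner products here are integrals of polynomials, the arithmetic is exact; for a general weight function one would substitute a quadrature rule or a computer algebra system for `poly_int`.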
Corollary 8.8 For any n > 0, the set of polynomial functions {φ0, . . . , φn} given in Theorem 8.7 is linearly independent on [a, b] and
$$\int_a^b w(x)\varphi_n(x)Q_k(x)\,dx = 0,$$
for any polynomial Qk(x) of degree k < n.
Proof Since each φj is a polynomial of degree j, Theorem 8.2 implies that {φ0, . . . , φk} is linearly independent, so there exist constants c0, . . . , ck with
$$Q_k(x) = \sum_{j=0}^{k} c_j \varphi_j(x).$$
Because φn is orthogonal to each φj with j < n, we have
$$\int_a^b w(x)Q_k(x)\varphi_n(x)\,dx = \sum_{j=0}^{k} c_j \int_a^b w(x)\varphi_j(x)\varphi_n(x)\,dx = \sum_{j=0}^{k} c_j \cdot 0 = 0.$$
Illustration The set of Legendre polynomials, {Pn (x)}, is orthogonal on [−1, 1] with respect to the
weight function w(x) ≡ 1. The classical definition of the Legendre polynomials requires
that Pn (1) = 1 for each n, and a recursive relation is used to generate the polynomials
when n ≥ 2. This normalization will not be needed in our discussion, and the least squares
approximating polynomials generated in either case are essentially the same.
Using the recurrence of Theorem 8.7 with w(x) ≡ 1 and P0(x) ≡ 1 gives
$$B_1 = \frac{\int_{-1}^{1} x\,dx}{\int_{-1}^{1} dx} = 0 \quad\text{and}\quad P_1(x) = (x - B_1)P_0(x) = x.$$
Also,
$$B_2 = \frac{\int_{-1}^{1} x^3\,dx}{\int_{-1}^{1} x^2\,dx} = 0 \quad\text{and}\quad C_2 = \frac{\int_{-1}^{1} x^2\,dx}{\int_{-1}^{1} 1\,dx} = \frac{1}{3},$$
so
$$P_2(x) = (x - B_2)P_1(x) - C_2 P_0(x) = (x - 0)x - \frac{1}{3}\cdot 1 = x^2 - \frac{1}{3}.$$
The higher-degree Legendre polynomials shown in Figure 8.9 are derived in the same
manner. Although the integration can be tedious, it is not difficult with a Computer Algebra
System.
Figure 8.9 (graphs of the Legendre polynomials y = P1(x) through y = P5(x) on [−1, 1])
For example, the Maple command int is used to compute the integrals B3 and C3:

B3 := int(x*(x^2 - 1/3)^2, x = -1..1)/int((x^2 - 1/3)^2, x = -1..1)

which returns 0, and

C3 := int(x^2*(x^2 - 1/3), x = -1..1)/int(x^2, x = -1..1)

which returns 4/15.
Thus
$$P_3(x) = xP_2(x) - \frac{4}{15}P_1(x) = x^3 - \frac{1}{3}x - \frac{4}{15}x = x^3 - \frac{3}{5}x.$$
The next two Legendre polynomials are
$$P_4(x) = x^4 - \frac{6}{7}x^2 + \frac{3}{35} \quad\text{and}\quad P_5(x) = x^5 - \frac{10}{9}x^3 + \frac{5}{21}x.$$
The Legendre polynomials were introduced in Section 4.7, where their roots, given on
page 232, were used as the nodes in Gaussian quadrature.
E X E R C I S E S E T 8.2
1. Find the linear least squares polynomial approximation to f(x) on the indicated interval if
   a. f(x) = x² + 3x + 2, [0, 1];      b. f(x) = x³, [0, 2];
   c. f(x) = 1/x, [1, 3];              d. f(x) = e^x, [0, 2];
   e. f(x) = (1/2) cos x + (1/3) sin 2x, [0, 1];      f. f(x) = x ln x, [1, 3].
2. Find the linear least squares polynomial approximation on the interval [−1, 1] for the following
functions.
   a. f(x) = x² − 2x + 3      b. f(x) = x³
   c. f(x) = 1/(x + 2)        d. f(x) = e^x
   e. f(x) = (1/2) cos x + (1/3) sin 2x      f. f(x) = ln(x + 2)
3. Find the least squares polynomial approximation of degree two to the functions and intervals in
Exercise 1.
4. Find the least squares polynomial approximation of degree 2 on the interval [−1, 1] for the functions in Exercise 2.
5. Compute the error E for the approximations in Exercise 3.
6. Compute the error E for the approximations in Exercise 4.
7. Use the Gram-Schmidt process to construct φ0 (x), φ1 (x), φ2 (x), and φ3 (x) for the following intervals.
a. [0, 1] b. [0, 2] c. [1, 3]
8. Repeat Exercise 1 using the results of Exercise 7.
9. Obtain the least squares approximation polynomial of degree 3 for the functions in Exercise 1 using
the results of Exercise 7.
10. Repeat Exercise 3 using the results of Exercise 7.
11. Use the Gram-Schmidt procedure to calculate L1, L2, and L3, where {L0(x), L1(x), L2(x), L3(x)} is an orthogonal set of polynomials on (0, ∞) with respect to the weight function w(x) = e^{−x} and L0(x) ≡ 1. The polynomials obtained from this procedure are called the Laguerre polynomials.
12. Use the Laguerre polynomials calculated in Exercise 11 to compute the least squares polynomials of
degree one, two, and three on the interval (0, ∞) with respect to the weight function w(x) = e−x for
the following functions:
   a. f(x) = x²      b. f(x) = e^{−x}      c. f(x) = x³      d. f(x) = e^{−2x}
13. Suppose {φ0, φ1, . . . , φn} is any linearly independent set in Πn. Show that for any element Q ∈ Πn, there exist unique constants c0, c1, . . . , cn, such that
$$Q(x) = \sum_{k=0}^{n} c_k \varphi_k(x).$$
14. Show that if {φ0 , φ1 , . . . , φn } is an orthogonal set of functions on [a, b] with respect to the weight
function w, then {φ0 , φ1 , . . . , φn } is a linearly independent set.
15. Show that the normal equations (8.6) have a unique solution. [Hint: Show that the only solution for the function f(x) ≡ 0 is aj = 0, j = 0, 1, . . . , n. Multiply Eq. (8.6) by aj, and sum over all j. Interchange the integral sign and the summation sign to obtain $\int_a^b [P(x)]^2\,dx = 0$. Thus, P(x) ≡ 0, so aj = 0, for j = 0, . . . , n. Hence, the coefficient matrix is nonsingular, and there is a unique solution to Eq. (8.6).]
8.3 Chebyshev Polynomials and Economization of Power Series
Pafnuty Lvovich Chebyshev (1821–1894) did exceptional mathematical work in many areas, including applied mathematics, number theory, approximation theory, and probability. In 1852 he traveled from St. Petersburg to visit mathematicians in France, England, and Germany. Lagrange and Legendre had studied individual sets of orthogonal polynomials, but Chebyshev was the first to see the important consequences of studying the theory in general. He developed the Chebyshev polynomials to study least squares approximation and probability, then applied his results to interpolation, approximate quadrature, and other areas.

The Chebyshev polynomials {Tn(x)} are orthogonal on (−1, 1) with respect to the weight function w(x) = (1 − x²)^{−1/2}. Although they could be derived by the method of the preceding section, it is easier to give their definition and then show that they satisfy the required orthogonality properties. For x ∈ [−1, 1], define
$$T_n(x) = \cos[n \arccos x], \quad\text{for each } n \ge 0. \tag{8.8}$$
It might not be obvious from this definition that for each n, Tn(x) is a polynomial in x, but we will now show this. First note that
$$T_0(x) = \cos 0 = 1 \quad\text{and}\quad T_1(x) = \cos(\arccos x) = x.$$
For n ≥ 1, we introduce the substitution θ = arccos x to change this equation to
$$T_n(\theta(x)) \equiv T_n(\theta) = \cos(n\theta), \quad\text{where } \theta \in [0, \pi].$$
A recurrence relation is derived by noting that
$$T_{n+1}(\theta) = \cos(n+1)\theta = \cos\theta\cos(n\theta) - \sin\theta\sin(n\theta)$$
and
$$T_{n-1}(\theta) = \cos(n-1)\theta = \cos\theta\cos(n\theta) + \sin\theta\sin(n\theta).$$
Adding these equations gives
$$T_{n+1}(\theta) = 2\cos\theta\cos(n\theta) - T_{n-1}(\theta);$$
that is, returning to the variable x = cos θ, for n ≥ 1,
$$T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x).$$
Because T0(x) = 1 and T1(x) = x, the recurrence relation implies that the next three Chebyshev polynomials are
$$T_2(x) = 2xT_1(x) - T_0(x) = 2x^2 - 1,$$
$$T_3(x) = 2xT_2(x) - T_1(x) = 4x^3 - 3x,$$
and
$$T_4(x) = 2xT_3(x) - T_2(x) = 8x^4 - 8x^2 + 1.$$
The recurrence relation also implies that when n ≥ 1, Tn (x) is a polynomial of degree n
with leading coefficient 2n−1 . The graphs of T1 , T2 , T3 , and T4 are shown in Figure 8.10.
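The recurrence is easy to check computationally. The sketch below is our own (the function `chebyshev_coeffs` is not from the text); it builds the coefficient lists from the recurrence and confirms that they agree with the trigonometric definition (8.8):

```python
import math

def chebyshev_coeffs(n):
    """Coefficient lists (index i = coefficient of x^i) for T_0, ..., T_n,
    built from the recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)."""
    T = [[1], [0, 1]]                        # T_0 = 1, T_1 = x
    for k in range(1, n):
        nxt = [0] + [2*c for c in T[k]]      # 2x * T_k
        for i, c in enumerate(T[k-1]):       # subtract T_{k-1}
            nxt[i] -= c
        T.append(nxt)
    return T[:n+1]

T4 = chebyshev_coeffs(4)[4]                  # 8x^4 - 8x^2 + 1
poly = lambda coeffs, x: sum(c * x**i for i, c in enumerate(coeffs))
for x in (-0.9, -0.3, 0.0, 0.5, 1.0):
    assert abs(poly(T4, x) - math.cos(4*math.acos(x))) < 1e-12
```

The leading coefficient doubles at each step of the recurrence, which is exactly why Tn(x) has leading coefficient 2^{n−1}.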
Figure 8.10 (graphs of y = T1(x), y = T2(x), y = T3(x), and y = T4(x) on [−1, 1])
To show the orthogonality of the Chebyshev polynomials with respect to the weight function w(x) = (1 − x²)^{−1/2}, consider
$$\int_{-1}^{1} \frac{T_n(x)T_m(x)}{\sqrt{1-x^2}}\,dx = \int_{-1}^{1} \frac{\cos(n\arccos x)\cos(m\arccos x)}{\sqrt{1-x^2}}\,dx.$$
Reintroducing the substitution θ = arccos x gives
$$d\theta = -\frac{1}{\sqrt{1-x^2}}\,dx$$
and
$$\int_{-1}^{1} \frac{T_n(x)T_m(x)}{\sqrt{1-x^2}}\,dx = -\int_{\pi}^{0} \cos(n\theta)\cos(m\theta)\,d\theta = \int_{0}^{\pi} \cos(n\theta)\cos(m\theta)\,d\theta.$$
Suppose n ≠ m. Since
$$\cos(n\theta)\cos(m\theta) = \frac{1}{2}[\cos(n+m)\theta + \cos(n-m)\theta],$$
we have
$$\int_{-1}^{1} \frac{T_n(x)T_m(x)}{\sqrt{1-x^2}}\,dx = \frac{1}{2}\int_0^{\pi} \cos((n+m)\theta)\,d\theta + \frac{1}{2}\int_0^{\pi} \cos((n-m)\theta)\,d\theta$$
$$= \left[\frac{1}{2(n+m)}\sin((n+m)\theta) + \frac{1}{2(n-m)}\sin((n-m)\theta)\right]_0^{\pi} = 0.$$
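This orthogonality can also be verified numerically with the Gauss-Chebyshev quadrature rule, which integrates f(x)/√(1 − x²) over [−1, 1] exactly when f is a polynomial of degree less than 2N. The sketch below is our own check (the helper name is ours):

```python
import math

def cheb_weighted_integral(f, N=32):
    # Gauss-Chebyshev rule: (pi/N) * sum of f at the N Chebyshev nodes;
    # exact for polynomial f of degree < 2N
    nodes = [math.cos((2*k - 1)*math.pi/(2*N)) for k in range(1, N + 1)]
    return (math.pi/N) * sum(f(x) for x in nodes)

T = lambda n: (lambda x: math.cos(n*math.acos(x)))

# Distinct indices integrate to zero ...
assert abs(cheb_weighted_integral(lambda x: T(2)(x)*T(5)(x))) < 1e-12
# ... while equal indices (n >= 1) give pi/2 (compare Exercise 9)
assert abs(cheb_weighted_integral(lambda x: T(3)(x)*T(3)(x)) - math.pi/2) < 1e-12
```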
The next result concerns the zeros and extreme points of Tn (x).
Theorem 8.9 The Chebyshev polynomial Tn(x) of degree n ≥ 1 has n simple zeros in [−1, 1] at
$$\bar{x}_k = \cos\left(\frac{2k-1}{2n}\pi\right), \quad\text{for each } k = 1, 2, \ldots, n.$$
Moreover, Tn(x) assumes its absolute extrema at
$$\bar{x}'_k = \cos\left(\frac{k\pi}{n}\right) \quad\text{with}\quad T_n(\bar{x}'_k) = (-1)^k, \quad\text{for each } k = 0, 1, \ldots, n.$$
Proof Let
$$\bar{x}_k = \cos\left(\frac{2k-1}{2n}\pi\right), \quad\text{for } k = 1, 2, \ldots, n.$$
Then
$$T_n(\bar{x}_k) = \cos(n\arccos\bar{x}_k) = \cos\left(n\arccos\left(\cos\left(\frac{2k-1}{2n}\pi\right)\right)\right) = \cos\left(\frac{2k-1}{2}\pi\right) = 0.$$
But the x̄k are distinct (see Exercise 10) and Tn (x) is a polynomial of degree n, so all the
zeros of Tn (x) must have this form.
To show the second statement, first note that
$$T'_n(x) = \frac{d}{dx}[\cos(n\arccos x)] = \frac{n\sin(n\arccos x)}{\sqrt{1-x^2}},$$
and that, when k = 1, 2, . . . , n − 1,
$$T'_n(\bar{x}'_k) = \frac{n\sin\left(n\arccos\left(\cos\left(\frac{k\pi}{n}\right)\right)\right)}{\sqrt{1-\left[\cos\left(\frac{k\pi}{n}\right)\right]^2}} = \frac{n\sin(k\pi)}{\sin\left(\frac{k\pi}{n}\right)} = 0.$$
Since Tn (x) is a polynomial of degree n, its derivative Tn′ (x) is a polynomial of degree
(n − 1), and all the zeros of Tn′ (x) occur at these n − 1 distinct points (that they are distinct
is considered in Exercise 11). The only other possibilities for extrema of Tn (x) occur at the
endpoints of the interval [−1, 1]; that is, at x̄0′ = 1 and at x̄n′ = −1.
For any k = 0, 1, . . . , n we have
$$T_n(\bar{x}'_k) = \cos\left(n\arccos\left(\cos\left(\frac{k\pi}{n}\right)\right)\right) = \cos(k\pi) = (-1)^k.$$
So a maximum occurs at each even value of k and a minimum at each odd value.
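Both statements of Theorem 8.9 are easy to spot-check numerically; a sketch for n = 7 (the variable names are ours):

```python
import math

n = 7
Tn = lambda x: math.cos(n*math.acos(x))

# zeros x_k = cos((2k-1)pi/(2n)), k = 1..n
zeros = [math.cos((2*k - 1)*math.pi/(2*n)) for k in range(1, n + 1)]
assert all(abs(Tn(z)) < 1e-12 for z in zeros)

# extrema x'_k = cos(k*pi/n), k = 0..n, where T_n alternates between +1 and -1
extrema = [math.cos(k*math.pi/n) for k in range(n + 1)]
assert all(abs(Tn(x) - (-1)**k) < 1e-12 for k, x in enumerate(extrema))
```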
The monic (polynomials with leading coefficient 1) Chebyshev polynomials T̃n (x) are
derived from the Chebyshev polynomials Tn (x) by dividing by the leading coefficient 2n−1 .
Hence
$$\tilde{T}_0(x) = 1 \quad\text{and}\quad \tilde{T}_n(x) = \frac{1}{2^{n-1}}\,T_n(x), \quad\text{for each } n \ge 1. \tag{8.11}$$
Figure 8.11 (graphs of the monic Chebyshev polynomials y = T̃1(x) through y = T̃5(x) on [−1, 1])
Because T̃n(x) is just a multiple of Tn(x), Theorem 8.9 implies that the zeros of T̃n(x) also occur at
$$\bar{x}_k = \cos\left(\frac{2k-1}{2n}\pi\right), \quad\text{for each } k = 1, 2, \ldots, n,$$
and the extreme values of T̃n(x), for n ≥ 1, occur at
$$\bar{x}'_k = \cos\left(\frac{k\pi}{n}\right), \quad\text{with}\quad \tilde{T}_n(\bar{x}'_k) = \frac{(-1)^k}{2^{n-1}}, \quad\text{for each } k = 0, 1, \ldots, n.$$
Theorem 8.10 The polynomials of the form T̃n(x), when n ≥ 1, have the property that
$$\frac{1}{2^{n-1}} = \max_{x\in[-1,1]} |\tilde{T}_n(x)| \le \max_{x\in[-1,1]} |P_n(x)|, \quad\text{for all } P_n(x) \in \tilde{\Pi}_n,$$
where Π̃n denotes the set of monic polynomials of degree n. Moreover, equality occurs only if Pn ≡ T̃n.
Proof Suppose that Pn(x) ∈ Π̃n and that
$$\max_{x\in[-1,1]} |P_n(x)| \le \frac{1}{2^{n-1}} = \max_{x\in[-1,1]} |\tilde{T}_n(x)|.$$
Let Q = T˜n − Pn . Then T˜n (x) and Pn (x) are both monic polynomials of degree n, so Q(x) is
a polynomial of degree at most (n − 1). Moreover, at the n + 1 extreme points x̄k′ of T˜n (x),
we have
$$Q(\bar{x}'_k) = \tilde{T}_n(\bar{x}'_k) - P_n(\bar{x}'_k) = \frac{(-1)^k}{2^{n-1}} - P_n(\bar{x}'_k).$$
However,
$$|P_n(\bar{x}'_k)| \le \frac{1}{2^{n-1}}, \quad\text{for each } k = 0, 1, \ldots, n,$$
so we have
so we have
Q(x̄k′ ) ≤ 0, when k is odd and Q(x̄k′ ) ≥ 0, when k is even.
Since Q is continuous, the Intermediate Value Theorem implies that for each j = 0, 1, . . . , n − 1 the polynomial Q(x) has at least one zero between x̄′_j and x̄′_{j+1}. Thus,
Q has at least n zeros in the interval [−1, 1]. But the degree of Q(x) is less than n, so Q ≡ 0.
This implies that Pn ≡ T˜n .
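The minimax property can be illustrated numerically: on a fine grid, none of a few hand-picked monic competitors beats the monic Chebyshev polynomial's maximum of 1/2^{n−1}. This is our own sketch, an illustration rather than a proof:

```python
import math

def sup_norm(p, m=2001):
    # estimate max |p(x)| on [-1, 1] with a uniform grid
    return max(abs(p(-1 + 2*i/(m - 1))) for i in range(m))

n = 4
monic_cheb = lambda x: math.cos(n*math.acos(x)) / 2**(n - 1)   # monic T_4 / 8
competitors = [
    lambda x: x**4,                      # sup-norm 1
    lambda x: x**4 - x**2,               # sup-norm 1/4
    lambda x: x**4 - 0.5*x**2 + 0.1,     # another monic polynomial
]
assert abs(sup_norm(monic_cheb) - 1/2**3) < 1e-6
assert all(sup_norm(p) >= 1/2**3 for p in competitors)
```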
Theorem 8.10 answers the question of where to place interpolating nodes. Since (x − x0)(x − x1) · · · (x − xn) is a monic polynomial of degree n + 1, the theorem implies that
$$\max_{x\in[-1,1]} |(x - x_0)(x - x_1)\cdots(x - x_n)| \ge \frac{1}{2^n},$$
for any choice of x0, x1, . . . , xn in the interval [−1, 1], and that the minimum value 1/2^n is attained when the nodes are the zeros of T̃n+1(x). The next corollary follows from these observations.
Corollary 8.11 Suppose that P(x) is the interpolating polynomial of degree at most n with nodes at the zeros of Tn+1(x). Then
$$\max_{x\in[-1,1]} |f(x) - P(x)| \le \frac{1}{2^n(n+1)!}\max_{x\in[-1,1]} \left|f^{(n+1)}(x)\right|, \quad\text{for each } f \in C^{n+1}[-1,1].$$
Example 1 Let f(x) = xe^x on [0, 1.5]. Compare the values given by the Lagrange polynomial with four equally spaced nodes with those given by the Lagrange polynomial with nodes at the zeros of the fourth Chebyshev polynomial.
Solution The equally-spaced nodes x0 = 0, x1 = 0.5, x2 = 1, and x3 = 1.5 give
L0 (x) = −1.3333x 3 + 4.0000x 2 − 3.6667x + 1,
L1 (x) = 4.0000x 3 − 10.000x 2 + 6.0000x,
L2 (x) = −4.0000x 3 + 8.0000x 2 − 3.0000x,
L3 (x) = 1.3333x 3 − 2.000x 2 + 0.66667x,
which produces the polynomial
P3 (x) = L0 (x)(0) + L1 (x)(0.5e0.5 ) + L2 (x)e1 + L3 (x)(1.5e1.5 ) = 1.3875x 3
+ 0.057570x 2 + 1.2730x.
For the second interpolating polynomial, we shift the zeros x̄k = cos((2k + 1)π/8), for k = 0, 1, 2, 3, of T̃4 from [−1, 1] to [0, 1.5], using the linear transformation
$$\tilde{x}_k = \frac{1}{2}[(1.5 - 0)\bar{x}_k + (1.5 + 0)] = 0.75 + 0.75\bar{x}_k.$$
Because
$$\bar{x}_0 = \cos\frac{\pi}{8} = 0.92388, \qquad \bar{x}_1 = \cos\frac{3\pi}{8} = 0.38268,$$
$$\bar{x}_2 = \cos\frac{5\pi}{8} = -0.38268, \quad\text{and}\quad \bar{x}_3 = \cos\frac{7\pi}{8} = -0.92388,$$
we have
x̃0 = 1.44291, x̃1 = 1.03701, x̃2 = 0.46299, and x̃3 = 0.05709.
The Lagrange coefficient polynomials for this set of nodes are
L̃0 (x) = 1.8142x 3 − 2.8249x 2 + 1.0264x − 0.049728,
L̃1 (x) = −4.3799x 3 + 8.5977x 2 − 3.4026x + 0.16705,
L̃2 (x) = 4.3799x 3 − 11.112x 2 + 7.1738x − 0.37415,
L̃3 (x) = −1.8142x 3 + 5.3390x 2 − 4.7976x + 1.2568.
The functional values required for these polynomials are given in the last two columns
of Table 8.7. The interpolation polynomial of degree at most 3 is
P˜3 (x) = 1.3811x 3 + 0.044652x 2 + 1.3031x − 0.014352.
For comparison, Table 8.8 lists various values of x, together with the values of
f (x), P3 (x), and P˜3 (x). It can be seen from this table that, although the error using P3 (x) is
less than using P˜3 (x) near the middle of the table, the maximum error involved with using
P˜3 (x), 0.0180, is considerably less than when using P3 (x), which gives the error 0.0290.
(See Figure 8.12.)
Table 8.8   x      f(x) = xe^x    P3(x)    |xe^x − P3(x)|    P̃3(x)    |xe^x − P̃3(x)|
0.15 0.1743 0.1969 0.0226 0.1868 0.0125
0.25 0.3210 0.3435 0.0225 0.3358 0.0148
0.35 0.4967 0.5121 0.0154 0.5064 0.0097
0.65 1.245 1.233 0.012 1.231 0.014
0.75 1.588 1.572 0.016 1.571 0.017
0.85 1.989 1.976 0.013 1.974 0.015
1.15 3.632 3.650 0.018 3.644 0.012
1.25 4.363 4.391 0.028 4.382 0.019
1.35 5.208 5.237 0.029 5.224 0.016
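The comparison in Table 8.8 can be reproduced numerically. The sketch below is our own (using numpy; the evaluation grid and helper names are our choices): it interpolates f(x) = xe^x at the two node sets and estimates each polynomial's maximum error on a fine grid.

```python
import numpy as np

f = lambda x: x*np.exp(x)

# degree-3 interpolation through 4 nodes (polyfit is exact interpolation here)
fit = lambda nodes: np.polyfit(nodes, f(nodes), 3)

eq_nodes = np.linspace(0, 1.5, 4)                      # 0, 0.5, 1, 1.5
k = np.arange(4)
cheb_nodes = 0.75 + 0.75*np.cos((2*k + 1)*np.pi/8)     # shifted zeros of T~_4

xs = np.linspace(0, 1.5, 601)
max_err = lambda c: np.max(np.abs(f(xs) - np.polyval(c, xs)))

# Chebyshev nodes give the smaller worst-case error on [0, 1.5]
assert max_err(fit(cheb_nodes)) < max_err(fit(eq_nodes))
```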
Figure 8.12 (graphs of y = xe^x and the Chebyshev-node interpolating polynomial y = P̃3(x) on [0, 1.5])
Chebyshev polynomials can also be used to reduce the degree of an approximating polynomial with a minimal loss of accuracy. Consider approximating an arbitrary polynomial
$$P_n(x) = a_n x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0$$
of degree n on [−1, 1] with a polynomial of degree at most n − 1. The object is to choose Pn−1(x) in Π_{n−1} so that
$$\max_{x\in[-1,1]} |P_n(x) - P_{n-1}(x)|$$
is as small as possible.
We first note that (Pn(x) − Pn−1(x))/an is a monic polynomial of degree n, so applying Theorem 8.10 gives
$$\max_{x\in[-1,1]} \left|\frac{1}{a_n}(P_n(x) - P_{n-1}(x))\right| \ge \frac{1}{2^{n-1}}.$$
Equality occurs precisely when
$$\frac{1}{a_n}(P_n(x) - P_{n-1}(x)) = \tilde{T}_n(x).$$
This means that we should choose
$$P_{n-1}(x) = P_n(x) - a_n \tilde{T}_n(x),$$
and with this choice we have the minimum value
$$\max_{x\in[-1,1]} |P_n(x) - P_{n-1}(x)| = \frac{|a_n|}{2^{n-1}}.$$
Illustration The function f(x) = e^x is approximated on the interval [−1, 1] by the fourth Maclaurin polynomial
$$P_4(x) = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24},$$
which has truncation error
$$|R_4(x)| = \frac{|f^{(5)}(\xi(x))|\,|x^5|}{120} \le \frac{e}{120} \approx 0.023, \quad\text{for } -1 \le x \le 1.$$
Suppose that an error of 0.05 is tolerable and that we would like to reduce the degree of the
approximating polynomial while staying within this bound.
The polynomial of degree 3 or less that best uniformly approximates P4 (x) on [−1, 1] is
$$P_3(x) = P_4(x) - a_4\tilde{T}_4(x) = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} - \frac{1}{24}\left(x^4 - x^2 + \frac{1}{8}\right)$$
$$= \frac{191}{192} + x + \frac{13}{24}x^2 + \frac{1}{6}x^3.$$
The polynomial of degree 2 or less that best uniformly approximates P3(x) on [−1, 1] is
$$P_2(x) = P_3(x) - \frac{1}{6}\tilde{T}_3(x) = \frac{191}{192} + x + \frac{13}{24}x^2 + \frac{1}{6}x^3 - \frac{1}{6}\left(x^3 - \frac{3}{4}x\right) = \frac{191}{192} + \frac{9}{8}x + \frac{13}{24}x^2.$$
However,
$$|P_3(x) - P_2(x)| = \left|\frac{1}{6}\tilde{T}_3(x)\right| \le \frac{1}{6}\cdot\frac{1}{2^2} = \frac{1}{24} \approx 0.042,$$
which, when added to the already accumulated error bound of 0.0283, exceeds the tolerance of 0.05. Consequently, the polynomial of least degree that best approximates e^x on [−1, 1] with an error bound of less than 0.05 is
$$P_3(x) = \frac{191}{192} + x + \frac{13}{24}x^2 + \frac{1}{6}x^3.$$
Table 8.9 lists the function and the approximating polynomials at various points in [−1, 1].
Note that the tabulated entries for P2 are well within the tolerance of 0.05, even though the
error bound for P2 (x) exceeded the tolerance. !
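The economization step itself is a one-line coefficient subtraction. The sketch below is our own check of the Illustration, using exact rational arithmetic for the coefficients and a grid estimate for the final error:

```python
from fractions import Fraction as F
import math

# degree-4 Maclaurin coefficients of e^x (index i = coefficient of x^i)
P4 = [F(1), F(1), F(1, 2), F(1, 6), F(1, 24)]
T4_monic = [F(1, 8), F(0), F(-1), F(0), F(1)]   # T~_4 = x^4 - x^2 + 1/8

# economization: P3 = P4 - a4 * T~_4 removes the x^4 term exactly
a4 = P4[4]
P3 = [c - a4*t for c, t in zip(P4, T4_monic)][:4]
assert P3 == [F(191, 192), F(1), F(13, 24), F(1, 6)]

# the degree-3 polynomial stays within the 0.05 tolerance on [-1, 1]
eval_p = lambda p, x: sum(float(c)*x**i for i, c in enumerate(p))
err = max(abs(math.exp(x) - eval_p(P3, x)) for x in [i/500 - 1 for i in range(1001)])
assert err < 0.05
```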
E X E R C I S E S E T 8.3
1. Use the zeros of T̃3 to construct an interpolating polynomial of degree 2 for the following functions
on the interval [−1, 1].
a. f (x) = ex b. f (x) = sin x c. f (x) = ln(x + 2) d. f (x) = x 4
2. Use the zeros of T̃4 to construct an interpolating polynomial of degree 3 for the functions in Exercise 1.
3. Find a bound for the maximum error of the approximation in Exercise 1 on the interval [−1, 1].
4. Repeat Exercise 3 for the approximations computed in Exercise 2.
5. Use the zeros of T̃3 and transformations of the given interval to construct an interpolating polynomial of degree 2 for the following functions.
   a. f(x) = 1/x, [1, 3]      b. f(x) = e^{−x}, [0, 2]
   c. f(x) = (1/2) cos x + (1/3) sin 2x, [0, 1]      d. f(x) = x ln x, [1, 3]
6. Find the sixth Maclaurin polynomial for xex , and use Chebyshev economization to obtain a lesser-
degree polynomial approximation while keeping the error less than 0.01 on [−1, 1].
7. Find the sixth Maclaurin polynomial for sin x, and use Chebyshev economization to obtain a lesser-
degree polynomial approximation while keeping the error less than 0.01 on [−1, 1].
8. Show that for any positive integers i and j with i > j, we have Ti(x)Tj(x) = (1/2)[Ti+j(x) + Ti−j(x)].
9. Show that for each Chebyshev polynomial Tn(x), we have
$$\int_{-1}^{1} \frac{[T_n(x)]^2}{\sqrt{1-x^2}}\,dx = \frac{\pi}{2}.$$
10. Show that for each n, the Chebyshev polynomial Tn (x) has n distinct zeros in (−1, 1).
11. Show that for each n, the derivative of the Chebyshev polynomial Tn (x) has n − 1 distinct zeros
in (−1, 1).
8.4 Rational Function Approximation

Among the advantages of algebraic polynomials for approximation is that the derivatives and integrals of polynomials exist and are easily determined.
The disadvantage of using polynomials for approximation is their tendency for oscil-
lation. This often causes error bounds in polynomial approximation to significantly exceed
the average approximation error, because error bounds are determined by the maximum
approximation error. We now consider methods that spread the approximation error more
evenly over the approximation interval. These techniques involve rational functions.
A rational function r of degree N has the form
$$r(x) = \frac{p(x)}{q(x)},$$
where p(x) and q(x) are polynomials whose degrees sum to N.
Every polynomial is a rational function (simply let q(x) ≡ 1), so approximation by
rational functions gives results that are no worse than approximation by polynomials. How-
ever, rational functions whose numerator and denominator have the same or nearly the same
degree often produce approximation results superior to polynomial methods for the same
amount of computation effort. (This statement is based on the assumption that the amount
of computation effort required for division is approximately the same as for multiplication.)
Rational functions have the added advantage of permitting efficient approximation
of functions with infinite discontinuities near, but outside, the interval of approximation.
Polynomial approximation is generally unacceptable in this situation.