
Chapter 4

Curve Fitting

We consider two commonly used methods for curve fitting: interpolation and least squares. For interpolation, we first use polynomials and later splines. Polynomial interpolation may be implemented using a variety of bases; we consider power series, bases associated with the names of Newton and Lagrange, and finally Chebyshev series. We discuss the error arising directly from polynomial interpolation and the (linear algebra) error arising in the solution of the interpolation equations. This error analysis leads us to consider interpolating using piecewise polynomials, specifically splines. Finally, we introduce least squares curve fitting using simple polynomials and later generalize this approach sufficiently to permit other choices of least squares fitting functions, for example splines, Chebyshev series, or trigonometric series. In a final section we show how to use Matlab functions to compute curve fits using many of the ideas introduced here; indeed, we also consider for the first time fitting trigonometric series to periodic data, an approach that uses the fast Fourier transform.

4.1 Polynomial Interpolation


We can approximate a function $f(x)$ by interpolating to the data $\{(x_i, f_i)\}_{i=0}^N$ by another (computable) function $p(x)$. (Here, implicitly we assume that the data is obtained by evaluating the function $f(x)$; that is, we assume that $f_i = f(x_i)$, $i = 0, 1, \dots, N$.)

Definition 4.1.1. The function $p(x)$ interpolates to the data $\{(x_i, f_i)\}_{i=0}^N$ if the equations
$$p(x_i) = f_i, \quad i = 0, 1, \dots, N$$
are satisfied.

This system of $N + 1$ equations comprises the interpolating conditions. Note that the function $f(x)$ that has been evaluated to compute the data automatically interpolates to its own data.

Example 4.1.1. Consider the data: $(-\frac{\pi}{2}, -1)$, $(\pi, 0)$, $(\frac{5\pi}{2}, 1)$. This data was obtained by evaluating the function $f(x) = \sin(x)$ at $x_0 = -\frac{\pi}{2}$, $x_1 = \pi$ and $x_2 = \frac{5\pi}{2}$. Thus, $f(x) = \sin(x)$ interpolates to the given data. But the linear polynomial
$$p(x) = -\frac{2}{3} + \frac{2}{3\pi}x$$
also interpolates to the given data because $p(-\frac{\pi}{2}) = -1$, $p(\pi) = 0$, and $p(\frac{5\pi}{2}) = 1$. Similarly the cubic polynomial
$$q(x) = \frac{16}{15\pi}x - \frac{8}{5\pi^2}x^2 + \frac{8}{15\pi^3}x^3$$
interpolates to the given data because $q(-\frac{\pi}{2}) = -1$, $q(\pi) = 0$, and $q(\frac{5\pi}{2}) = 1$. The graphs of $f(x)$, $p(x)$ and $q(x)$, along with the interpolating data, are shown in Fig. 4.1. The interpolating conditions imply that these curves pass through the data points.


Figure 4.1: Graphs of $f(x) = \sin(x)$, $p(x) = -\frac{2}{3} + \frac{2}{3\pi}x$ and $q(x) = \frac{16}{15\pi}x - \frac{8}{5\pi^2}x^2 + \frac{8}{15\pi^3}x^3$. Each of these functions interpolates to the data $(-\frac{\pi}{2}, -1)$, $(\pi, 0)$, $(\frac{5\pi}{2}, 1)$. The data points are indicated by circles on each of the plots.

A common choice for the interpolating function $p(x)$ is a polynomial. Polynomials are chosen because there are efficient methods both for determining and for evaluating them; see, for example, Horner's rule described in Section 6.5. Indeed, they may be evaluated using only adds and multiplies, the most basic computer operations. A polynomial that interpolates to the data is an interpolating polynomial. A simple, familiar example of an interpolating polynomial is a straight line, that is, a polynomial of degree one, joining two points.
Before we can construct an interpolating polynomial, we need to establish a standard form
representation of a polynomial. In addition, we need to define what is meant by the “degree” of a
polynomial.

Definition 4.1.2. A polynomial $p_K(x)$ is of degree $K$ if there are constants $c_0, c_1, \dots, c_K$ for which
$$p_K(x) = c_0 + c_1 x + \cdots + c_K x^K.$$
The polynomial $p_K(x)$ is of exact degree $K$ if it is of degree $K$ and $c_K \neq 0$.

Example 4.1.2. $p(x) \equiv 1 + x - 4x^3$ is a polynomial of exact degree 3. It is also a polynomial of degree 4 because we can write $p(x) = 1 + x + 0x^2 - 4x^3 + 0x^4$. Similarly, $p(x)$ is a polynomial of degree 5, 6, 7, ....

4.1.1 The Power Series Form of The Interpolating Polynomial


Consider first the problem of linear interpolation, that is, straight line interpolation. If we have two data points $(x_0, f_0)$ and $(x_1, f_1)$ we can interpolate to this data using the linear polynomial $p_1(x) \equiv a_0 + a_1 x$ by satisfying the pair of linear equations
$$p_1(x_0) \equiv a_0 + a_1 x_0 = f_0$$
$$p_1(x_1) \equiv a_0 + a_1 x_1 = f_1$$
Solving these equations for the coefficients $a_0$ and $a_1$ gives the straight line passing through the two points $(x_0, f_0)$ and $(x_1, f_1)$. These equations have a solution if $x_0 \neq x_1$. If $f_0 = f_1$, the solution is a constant ($a_0 = f_0$ and $a_1 = 0$); that is, it is a polynomial of degree zero.

Generally, there are an infinite number of polynomials that interpolate to a given set of data. To explain the possibilities we consider the power series form of the complete polynomial (that is, a polynomial where all the powers of $x$ appear)
$$p_M(x) = a_0 + a_1 x + \cdots + a_M x^M$$

of degree $M$. If the polynomial $p_M(x)$ interpolates to the given data $\{(x_i, f_i)\}_{i=0}^N$, then the interpolating conditions form a linear system of $N + 1$ equations
$$p_M(x_0) \equiv a_0 + a_1 x_0 + \cdots + a_M x_0^M = f_0$$
$$p_M(x_1) \equiv a_0 + a_1 x_1 + \cdots + a_M x_1^M = f_1$$
$$\vdots$$
$$p_M(x_N) \equiv a_0 + a_1 x_N + \cdots + a_M x_N^M = f_N$$
for the $M + 1$ unknown coefficients $a_0, a_1, \dots, a_M$.


From linear algebra (see Chapter 3), this linear system of equations has a unique solution for every choice of data values $\{f_i\}_{i=0}^N$ if and only if the system is square (that is, $M = N$) and nonsingular. If $M < N$, there exist choices of the data values $\{f_i\}_{i=0}^N$ for which this linear system has no solution, while for $M > N$, if a solution exists it cannot be unique.

Think of it this way:
1. The complete polynomial $p_M(x)$ of degree $M$ has $M + 1$ unknown coefficients.
2. The interpolating conditions comprise $N + 1$ equations.
3. So,
   • If $M = N$, there are as many coefficients in the complete polynomial $p_M(x)$ as there are equations obtained from the interpolating conditions.
   • If $M < N$, there are fewer coefficients than data values, so in general we cannot fit all the data.
   • If $M > N$, the number of coefficients exceeds the number of data values and we would expect to be able to choose the coefficients in many ways to fit the data.
The square ($M = N$) coefficient matrix of the linear system of interpolating conditions is the Vandermonde matrix
$$V_N \equiv \begin{bmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^M \\ 1 & x_1 & x_1^2 & \cdots & x_1^M \\ & & \vdots & & \\ 1 & x_N & x_N^2 & \cdots & x_N^M \end{bmatrix}$$
and it can be shown that the determinant of $V_N$ is
$$\det(V_N) = \prod_{i>j} (x_i - x_j)$$
Recall from Problems 3.1.4, 3.2.21, and 3.3.4 that the determinant is a test for singularity, and this is an ideal situation in which to use it. In particular, observe that $\det(V_N) \neq 0$ (that is, the matrix $V_N$ is nonsingular) if and only if the nodes $\{x_i\}_{i=0}^N$ are distinct; that is, if and only if $x_i \neq x_j$ whenever $i \neq j$. So, for every choice of data $\{(x_i, f_i)\}_{i=0}^N$, there exists a unique solution satisfying the interpolation conditions if and only if $M = N$ and the nodes $\{x_i\}_{i=0}^N$ are distinct.
Theorem 4.1.1. (Polynomial Interpolation Uniqueness Theorem) When the nodes $\{x_i\}_{i=0}^N$ are distinct there is a unique polynomial of degree $N$, the interpolating polynomial $p_N(x)$, that interpolates to the data $\{(x_i, f_i)\}_{i=0}^N$.

Though the degree of the interpolating polynomial is $N$, corresponding to the number of data values, its exact degree may be less than $N$. For example, this happens when the three data points $(x_0, f_0)$, $(x_1, f_1)$ and $(x_2, f_2)$ are collinear: the interpolating polynomial for this data is of degree $N = 2$ but of exact degree 1; that is, the coefficient of $x^2$ turns out to be zero. (In this case, the interpolating polynomial is the straight line on which the data lies.)

Example 4.1.3. Determine the power series form of the quadratic interpolating polynomial $p_2(x) = a_0 + a_1 x + a_2 x^2$ to the data $(-1, 0)$, $(0, 1)$ and $(1, 3)$. The interpolating conditions, in the order of the data, are
$$a_0 + (-1)a_1 + (1)a_2 = 0$$
$$a_0 = 1$$
$$a_0 + (1)a_1 + (1)a_2 = 3$$
We may solve this linear system of equations for $a_0 = 1$, $a_1 = \frac{3}{2}$ and $a_2 = \frac{1}{2}$. So, in power series form, the interpolating polynomial is $p_2(x) = 1 + \frac{3}{2}x + \frac{1}{2}x^2$.
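As an illustration, here is a short sketch (not from the text) of how the Vandermonde system of Example 4.1.3 might be set up and solved with NumPy; the variable names are ours.

import numpy as np

x = np.array([-1.0, 0.0, 1.0])        # nodes x_0, x_1, x_2
f = np.array([0.0, 1.0, 3.0])         # data f_0, f_1, f_2

V = np.vander(x, increasing=True)     # rows [1, x_i, x_i^2]
a = np.linalg.solve(V, f)             # power series coefficients a_0, a_1, a_2

print(a)                              # [1.  1.5 0.5], i.e. p_2(x) = 1 + (3/2)x + (1/2)x^2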

Of course, polynomials may be written in many different ways, some more appropriate to a given task than others. For example, when interpolating to the data $\{(x_i, f_i)\}_{i=0}^N$, the Vandermonde system determines the values of the coefficients $a_0, a_1, \dots, a_N$ in the power series form. The number of floating-point computations needed by GEPP to solve the interpolating condition equations grows like $N^3$ with the degree $N$ of the polynomial (see Chapter 3). This cost can be significantly reduced by exploiting properties of the Vandermonde coefficient matrix. Another way to reduce the cost of determining the interpolating polynomial is to change the way the interpolating polynomial is represented. This is explored in the next two subsections.

Problem 4.1.1. Show that, when $N = 2$,
$$\det(V_2) = \det\left(\begin{bmatrix} 1 & x_0 & x_0^2 \\ 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \end{bmatrix}\right) = \prod_{i>j}(x_i - x_j) = \underbrace{(x_1 - x_0)(x_2 - x_0)}_{j=0;\ i=1,2}\,\underbrace{(x_2 - x_1)}_{j=1;\ i=2}$$

Problem 4.1.2. $\omega_{N+1}(x) \equiv (x - x_0)(x - x_1)\cdots(x - x_N)$ is a polynomial of exact degree $N + 1$. If $p_N(x)$ interpolates the data $\{(x_i, f_i)\}_{i=0}^N$, verify that, for any choice of polynomial $q(x)$, the polynomial
$$p(x) = p_N(x) + \omega_{N+1}(x)q(x)$$
interpolates the same data as $p_N(x)$. Hint: Verify that $p(x)$ satisfies the same interpolating conditions as does $p_N(x)$.
Problem 4.1.3. Find the power series form of the interpolating polynomial to the data
(1, 2), (3, 3) and (5, 4). Check that your computed polynomial does interpolate the data. Hint:
For these three data points you will need a complete polynomial that has three coefficients;
that is, you will need to interpolate with a quadratic polynomial p2 (x).
Problem 4.1.4. Let $p_N(x)$ be the unique interpolating polynomial of degree $N$ for the data $\{(x_i, f_i)\}_{i=0}^N$. Estimate the cost of determining the coefficients of the power series form of $p_N(x)$, assuming that you can set up the coefficient matrix at no cost. Just estimate the cost of solving the linear system via Gaussian elimination.

4.1.2 The Newton Form of The Interpolating Polynomial


Consider the problem of determining the interpolating quadratic polynomial for the data $(x_0, f_0)$, $(x_1, f_1)$ and $(x_2, f_2)$. Using the data written in this order, the Newton form of the quadratic interpolating polynomial is
$$p_2(x) = b_0 + b_1(x - x_0) + b_2(x - x_0)(x - x_1)$$
To determine the coefficients $b_i$ simply write down the interpolating conditions:
$$p_2(x_0) \equiv b_0 = f_0$$
$$p_2(x_1) \equiv b_0 + b_1(x_1 - x_0) = f_1$$
$$p_2(x_2) \equiv b_0 + b_1(x_2 - x_0) + b_2(x_2 - x_0)(x_2 - x_1) = f_2$$

The coefficient matrix
$$\begin{bmatrix} 1 & 0 & 0 \\ 1 & (x_1 - x_0) & 0 \\ 1 & (x_2 - x_0) & (x_2 - x_0)(x_2 - x_1) \end{bmatrix}$$
for this linear system is lower triangular. When the nodes $\{x_i\}_{i=0}^2$ are distinct, the diagonal entries of this lower triangular matrix are nonzero. Consequently, the linear system has a unique solution that may be determined by forward substitution.

Example 4.1.4. Determine the Newton form of the quadratic interpolating polynomial $p_2(x) = b_0 + b_1(x - x_0) + b_2(x - x_0)(x - x_1)$ to the data $(-1, 0)$, $(0, 1)$ and $(1, 3)$. Taking the data points in their order in the data we have $x_0 = -1$, $x_1 = 0$ and $x_2 = 1$. The interpolating conditions are
$$b_0 = 0$$
$$b_0 + (0 - (-1))b_1 = 1$$
$$b_0 + (1 - (-1))b_1 + (1 - (-1))(1 - 0)b_2 = 3$$
and this lower triangular system may be solved to give $b_0 = 0$, $b_1 = 1$ and $b_2 = \frac{1}{2}$. So, the Newton form of the quadratic interpolating polynomial is $p_2(x) = 0 + 1(x + 1) + \frac{1}{2}(x + 1)(x - 0)$. After rearrangement we observe that this is the same polynomial $p_2(x) = 1 + \frac{3}{2}x + \frac{1}{2}x^2$ as the power series form in Example 4.1.3.

Note, the Polynomial Interpolation Uniqueness theorem tells us that the Newton form of the polynomial in Example 4.1.4 and the power series form in Example 4.1.3 must be the same polynomial. However, we do not need to convert between forms to show that two polynomials are the same. We can use the Polynomial Interpolation Uniqueness theorem as follows. If we have two polynomials of degree $N$ and we want to check whether they are different representations of the same polynomial, we can evaluate each polynomial at any choice of $N + 1$ distinct points. If the values of the two polynomials are the same at all $N + 1$ points, then the two polynomials interpolate each other and so must be the same polynomial.

Example 4.1.5. We show that the polynomials $p_2(x) = 1 + \frac{3}{2}x + \frac{1}{2}x^2$ and $q_2(x) = 0 + 1(x + 1) + \frac{1}{2}(x + 1)(x - 0)$ are the same. The Polynomial Interpolation Uniqueness theorem tells us that since these polynomials are both of degree two we should check their values at three distinct points. We find that $p_2(1) = 3 = q_2(1)$, $p_2(0) = 1 = q_2(0)$ and $p_2(-1) = 0 = q_2(-1)$, and so $p_2(x) \equiv q_2(x)$.

Example 4.1.6. For the data $(1, 2)$, $(3, 3)$ and $(5, 4)$, using the data points in the order given, the Newton form of the interpolating polynomial is
$$p_2(x) = b_0 + b_1(x - 1) + b_2(x - 1)(x - 3)$$
and the interpolating conditions are
$$p_2(1) \equiv b_0 = 2$$
$$p_2(3) \equiv b_0 + b_1(3 - 1) = 3$$
$$p_2(5) \equiv b_0 + b_1(5 - 1) + b_2(5 - 1)(5 - 3) = 4$$
This lower triangular linear system may be solved by forward substitution giving
$$b_0 = 2, \qquad b_1 = \frac{3 - b_0}{(3 - 1)} = \frac{1}{2}, \qquad b_2 = \frac{4 - b_0 - (5 - 1)b_1}{(5 - 1)(5 - 3)} = 0$$
Consequently, the Newton form of the interpolating polynomial is
$$p_2(x) = 2 + \frac{1}{2}(x - 1) + 0(x - 1)(x - 3)$$
Generally, this interpolating polynomial is of degree 2, but its exact degree may be less. Here, rearranging into power series form we have $p_2(x) = \frac{3}{2} + \frac{1}{2}x$ and the exact degree is 1; this happens because the data $(1, 2)$, $(3, 3)$ and $(5, 4)$ are collinear.

The Newton form of the interpolating polynomial is easy to evaluate using nested multiplication (which is a generalization of Horner's rule, see Chapter 6). Start with the observation that
$$p_2(x) = b_0 + b_1(x - x_0) + b_2(x - x_0)(x - x_1) = b_0 + (x - x_0)\{b_1 + b_2(x - x_1)\}$$
To develop an algorithm to evaluate $p_2(x)$ at a value $x = z$ using nested multiplication, it is helpful to visualize the sequence of nested operations as:
$$t_2(z) = b_2$$
$$t_1(z) = b_1 + (z - x_1)t_2(z)$$
$$p_2(z) = t_0(z) = b_0 + (z - x_0)t_1(z)$$

An extension to degree $N$ of this nested multiplication scheme leads to the pseudocode given in Fig. 4.2. This pseudocode evaluates the Newton form of a polynomial of degree $N$ at a point $x = z$ given the values of the nodes $x_i$ and the coefficients $b_i$.

Evaluating the Newton Form

Input:  nodes x_i
        coefficients b_i
        scalar z
Output: p = p_N(z)

p := b_N
for i = N-1 downto 0 do
    p := b_i + (z - x_i) * p
next i

Figure 4.2: Pseudocode to use nested multiplication to evaluate the Newton form of the interpolating polynomial, $p_N(x) = b_0 + b_1(x - x_0) + \cdots + b_N(x - x_0)\cdots(x - x_{N-1})$, at $x = z$.
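A minimal Python rendering of the pseudocode in Fig. 4.2 might look as follows; the function name and argument order are ours, not the text's.

def newton_eval(nodes, b, z):
    """Evaluate p_N(z) = b_0 + b_1(z - x_0) + ... + b_N(z - x_0)...(z - x_{N-1})."""
    p = b[-1]                                # p := b_N
    for i in range(len(b) - 2, -1, -1):      # for i = N-1 downto 0
        p = b[i] + (z - nodes[i]) * p        # p := b_i + (z - x_i) * p
    return p

# Example 4.1.4: p_2(x) = 0 + 1(x + 1) + (1/2)(x + 1)(x - 0), so p_2(1) = 3
print(newton_eval([-1.0, 0.0], [0.0, 1.0, 0.5], 1.0))    # 3.0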

How can the triangular system for the coefficients in the Newton form of the interpolating polynomial be systematically formed and solved? First, note that we may form a sequence of interpolating polynomials as in Table 4.1.

Polynomial                                                   The Data
$p_0(x) = b_0$                                               $(x_0, f_0)$
$p_1(x) = b_0 + b_1(x - x_0)$                                $(x_0, f_0), (x_1, f_1)$
$p_2(x) = b_0 + b_1(x - x_0) + b_2(x - x_0)(x - x_1)$        $(x_0, f_0), (x_1, f_1), (x_2, f_2)$

Table 4.1: Building the Newton form of the interpolating polynomial

So, the Newton equations that determine the coefficients $b_0$, $b_1$ and $b_2$ may be written
$$b_0 = f_0$$
$$p_0(x_1) + b_1(x_1 - x_0) = f_1$$
$$p_1(x_2) + b_2(x_2 - x_0)(x_2 - x_1) = f_2$$
and we conclude that we may solve for the coefficients $b_i$ efficiently as follows
$$b_0 = f_0, \qquad b_1 = \frac{f_1 - p_0(x_1)}{(x_1 - x_0)}, \qquad b_2 = \frac{f_2 - p_1(x_2)}{(x_2 - x_0)(x_2 - x_1)}$$
N
This process clearly deals with any number of data points. So, if we have data $\{(x_i, f_i)\}_{i=0}^N$, we may compute the coefficients in the corresponding Newton polynomial
$$p_N(x) = b_0 + b_1(x - x_0) + \cdots + b_N(x - x_0)(x - x_1)\cdots(x - x_{N-1})$$
by evaluating $b_0 = f_0$ followed by
$$b_k = \frac{f_k - p_{k-1}(x_k)}{(x_k - x_0)(x_k - x_1)\cdots(x_k - x_{k-1})}, \quad k = 1, 2, \dots, N$$
Indeed, we can easily incorporate additional data values. Suppose we have fitted to the data $\{(x_i, f_i)\}_{i=0}^N$ using the above formulas and we wish to add the data $(x_{N+1}, f_{N+1})$. Then we may simply use the formula
$$b_{N+1} = \frac{f_{N+1} - p_N(x_{N+1})}{(x_{N+1} - x_0)(x_{N+1} - x_1)\cdots(x_{N+1} - x_N)}$$
Fig. 4.3 shows pseudocode that can be used to implement this computational sequence for the Newton form of the interpolating polynomial of degree $N$ given the data $\{(x_i, f_i)\}_{i=0}^N$.

Constructing the Newton Form

Input:  data (x_k, f_k)
Output: coefficients b_k

b_0 := f_0
for k = 1 to N
    num := f_k - p_{k-1}(x_k)
    den := 1
    for j = 0 to k-1
        den := den * (x_k - x_j)
    next j
    b_k := num/den
next k

Figure 4.3: Pseudocode to compute the coefficients, $b_i$, of the Newton form of the interpolating polynomial, $p_N(x) = b_0 + b_1(x - x_0) + \cdots + b_N(x - x_0)\cdots(x - x_{N-1})$. This pseudocode requires evaluating $p_{k-1}(x_k)$, which can be done by implementing the pseudocode given in Fig. 4.2.
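A sketch of Fig. 4.3 in Python, reusing the newton_eval function sketched after Fig. 4.2 to evaluate $p_{k-1}(x_k)$:

def newton_coeffs(x, f):
    b = [f[0]]                                     # b_0 := f_0
    for k in range(1, len(x)):
        num = f[k] - newton_eval(x[:k], b, x[k])   # num := f_k - p_{k-1}(x_k)
        den = 1.0
        for j in range(k):
            den *= x[k] - x[j]                     # den := den * (x_k - x_j)
        b.append(num / den)                        # b_k := num/den
    return b

print(newton_coeffs([-1.0, 0.0, 1.0], [0.0, 1.0, 3.0]))   # [0.0, 1.0, 0.5]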

If the interpolating polynomial is to be evaluated at many points, generally it is best first to deter-
mine its Newton form and then to use the nested multiplication scheme to evaluate the interpolating
polynomial at each desired point.

Problem 4.1.5. Use the data in Example 4.1.4 in reverse order (that is, use $x_0 = 1$, $x_1 = 0$ and $x_2 = -1$) to build an alternative quadratic Newton interpolating polynomial. Is this the same polynomial that was derived in Example 4.1.4? Why or why not?
Problem 4.1.6. Let $p_N(x)$ be the interpolating polynomial for the data $\{(x_i, f_i)\}_{i=0}^N$, and let $p_{N+1}(x)$ be the interpolating polynomial for the data $\{(x_i, f_i)\}_{i=0}^{N+1}$. Show that
$$p_{N+1}(x) = p_N(x) + b_{N+1}(x - x_0)(x - x_1)\cdots(x - x_N)$$
What is the value of $b_{N+1}$?

Problem 4.1.7. In Example 4.1.4 we showed how to form the quadratic Newton polynomial $p_2(x) = 0 + 1(x + 1) + \frac{1}{2}(x + 1)(x - 0)$ that interpolates to the data $(-1, 0)$, $(0, 1)$ and $(1, 3)$. Starting from this quadratic Newton interpolating polynomial build the cubic Newton interpolating polynomial to the data $(-1, 0)$, $(0, 1)$, $(1, 3)$ and $(2, 4)$.

Problem 4.1.8. Let
$$p_N(x) = b_0 + b_1(x - x_0) + b_2(x - x_0)(x - x_1) + \cdots + b_N(x - x_0)(x - x_1)\cdots(x - x_{N-1})$$
be the Newton interpolating polynomial for the data $\{(x_i, f_i)\}_{i=0}^N$. Write down the coefficient matrix for the linear system of equations for interpolating to the data with the polynomial $p_N(x)$.
Problem 4.1.9. Let the nodes $\{x_i\}_{i=0}^2$ be given. A complete quadratic $p_2(x)$ can be written in power series form or Newton form:
$$p_2(x) = \begin{cases} a_0 + a_1 x + a_2 x^2 & \text{power series form} \\ b_0 + b_1(x - x_0) + b_2(x - x_0)(x - x_1) & \text{Newton form} \end{cases}$$
By expanding the Newton form into a power series in $x$, verify that the coefficients $b_0$, $b_1$ and $b_2$ are related to the coefficients $a_0$, $a_1$ and $a_2$ via the equations
$$\begin{bmatrix} 1 & -x_0 & x_0 x_1 \\ 0 & 1 & -(x_0 + x_1) \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} b_0 \\ b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix}$$
Describe how you could use these equations to determine the $a_i$'s given the values of the $b_i$'s. Describe how you could use these equations to determine the $b_i$'s given the values of the $a_i$'s.

Problem 4.1.10. Write a code segment to determine the value of the derivative $p_2'(x)$ at a point $x$ of the Newton form of the quadratic interpolating polynomial $p_2(x)$. (Hint: $p_2(x)$ can be evaluated via nested multiplication just as a general polynomial can be evaluated by Horner's rule. Review how $p_2'(x)$ is computed via Horner's rule in Chapter 6.)

Problem 4.1.11. Determine the Newton form of the interpolating (cubic) polynomial to the data $(0, 1)$, $(-1, 0)$, $(1, 2)$ and $(2, 0)$. Determine the Newton form of the interpolating (cubic) polynomial to the same data written in reverse order. By converting both interpolating polynomials to power series form show that they are the same. Alternatively, show they are the same by evaluating the two polynomials at four distinct points of your choice. Finally, explain why the Polynomial Interpolation Uniqueness Theorem tells you directly that these two polynomials must be the same.

Problem 4.1.12. Determine the Newton form of the (quartic) interpolating polynomial to the data $(0, 1)$, $(-1, 2)$, $(1, 0)$, $(2, 1)$ and $(-2, 3)$.
Problem 4.1.13. Let $p_N(x)$ be the interpolating polynomial for the data $\{(x_i, f_i)\}_{i=0}^N$. Determine the number of adds (subtracts), multiplies, and divides required to determine the coefficients of the Newton form of $p_N(x)$. Hint: The coefficient $b_0 = f_0$, so it costs nothing to determine $b_0$. Now, recall that
$$b_k = \frac{f_k - p_{k-1}(x_k)}{(x_k - x_0)(x_k - x_1)\cdots(x_k - x_{k-1})}$$

Problem 4.1.14. Let $p_N(x)$ be the unique polynomial interpolating to the data $\{(x_i, f_i)\}_{i=0}^N$. Given its coefficients, determine the number of adds (or, equivalently, subtracts) and multiplies required to evaluate the Newton form of $p_N(x)$ at one value of $x$ by nested multiplication. Hint: The nested multiplication scheme for evaluating the polynomial $p_N(x)$ has the form
$$t_N = b_N$$
$$t_{N-1} = b_{N-1} + (x - x_{N-1})t_N$$
$$t_{N-2} = b_{N-2} + (x - x_{N-2})t_{N-1}$$
$$\vdots$$
$$t_0 = b_0 + (x - x_0)t_1$$

4.1.3 The Lagrange Form of The Interpolating Polynomial


Consider the data $\{(x_i, f_i)\}_{i=0}^2$. The Lagrange form of the quadratic polynomial interpolating to this data may be written:
$$p_2(x) \equiv f_0 \cdot \ell_0(x) + f_1 \cdot \ell_1(x) + f_2 \cdot \ell_2(x)$$
We construct each basis polynomial $\ell_i(x)$ so that it is quadratic and so that it satisfies
$$\ell_i(x_j) = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases} \qquad (4.1)$$
Then clearly
$$p_2(x_0) = 1 \cdot f_0 + 0 \cdot f_1 + 0 \cdot f_2 = f_0$$
$$p_2(x_1) = 0 \cdot f_0 + 1 \cdot f_1 + 0 \cdot f_2 = f_1$$
$$p_2(x_2) = 0 \cdot f_0 + 0 \cdot f_1 + 1 \cdot f_2 = f_2$$
This property of the basis functions may be achieved using the following construction:
$$\ell_0(x) = \frac{\text{product of linear factors for each node except } x_0}{\text{numerator evaluated at } x = x_0} = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)}$$
$$\ell_1(x) = \frac{\text{product of linear factors for each node except } x_1}{\text{numerator evaluated at } x = x_1} = \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)}$$
$$\ell_2(x) = \frac{\text{product of linear factors for each node except } x_2}{\text{numerator evaluated at } x = x_2} = \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)}$$
Notice that these basis polynomials, $\ell_i(x)$, satisfy the condition given in equation (4.1), namely:
$$\ell_0(x_0) = 1, \quad \ell_0(x_1) = 0, \quad \ell_0(x_2) = 0$$
$$\ell_1(x_0) = 0, \quad \ell_1(x_1) = 1, \quad \ell_1(x_2) = 0$$
$$\ell_2(x_0) = 0, \quad \ell_2(x_1) = 0, \quad \ell_2(x_2) = 1$$

Example 4.1.7. For the data $(1, 2)$, $(3, 3)$ and $(5, 4)$ the Lagrange form of the interpolating polynomial is
$$p_2(x) = (2)\frac{(x - 3)(x - 5)}{(1 - 3)(1 - 5)} + (3)\frac{(x - 1)(x - 5)}{(3 - 1)(3 - 5)} + (4)\frac{(x - 1)(x - 3)}{(5 - 1)(5 - 3)}$$
where each denominator is the corresponding numerator evaluated at its node. The polynomials multiplying the data values (2), (3) and (4), respectively, are quadratic basis functions.

More generally, consider the polynomial $p_N(x)$ of degree $N$ interpolating to the data $\{(x_i, f_i)\}_{i=0}^N$:
$$p_N(x) = \sum_{i=0}^{N} f_i \cdot \ell_i(x)$$
where the Lagrange basis polynomials $\ell_k(x)$ are polynomials of degree $N$ with the property
$$\ell_k(x_j) = \begin{cases} 1, & k = j \\ 0, & k \neq j \end{cases}$$
so that clearly
$$p_N(x_i) = f_i, \quad i = 0, 1, \dots, N$$
The basis polynomials may be defined by
$$\ell_k(x) \equiv \frac{(x - x_0)(x - x_1)\cdots(x - x_{k-1})(x - x_{k+1})\cdots(x - x_N)}{\text{numerator evaluated at } x = x_k} = \prod_{j=0,\, j \neq k}^{N} \frac{(x - x_j)}{(x_k - x_j)}$$
Algebraically, the basis function $\ell_k(x)$ is a fraction whose numerator is the product of the linear factors $(x - x_i)$ associated with each of the nodes $x_i$ except $x_k$, and whose denominator is the value of its numerator at the node $x_k$.
Pseudocode that can be used to evaluate the Lagrange form of the interpolating polynomial is given in Fig. 4.4. That is, given the data $\{(x_i, f_i)\}_{i=0}^N$ and the value $z$, the pseudocode in Fig. 4.4 returns the value of the interpolating polynomial $p_N(z)$.

Unlike when using the Newton form of the interpolating polynomial, the Lagrange form has no coefficients whose values must be determined. In one sense, the Lagrange form provides an explicit solution of the interpolating conditions. However, the Lagrange form of the interpolating polynomial can be more expensive to evaluate than either the power series form or the Newton form (see Problem 4.1.20).

In summary, the Newton form of the interpolating polynomial is attractive because it is easy to
• Determine the coefficients.
• Evaluate the polynomial at specified values via nested multiplication.
• Extend the polynomial to incorporate additional interpolation points and data.

The Lagrange form of the interpolating polynomial
• is useful theoretically because it does not require solving a linear system, and
• explicitly shows how each data value $f_i$ affects the overall interpolating polynomial.

Evaluating the Lagrange Form

Input:  data (x_i, f_i)
        scalar z
Output: p = p_N(z)

p := 0
for k = 0 to N
    l_num := 1
    l_den := 1
    for j = 0 to k-1
        l_num := l_num * (z - x_j)
        l_den := l_den * (x_k - x_j)
    next j
    for j = k+1 to N
        l_num := l_num * (z - x_j)
        l_den := l_den * (x_k - x_j)
    next j
    l := l_num/l_den
    p := p + f_k * l
next k

Figure 4.4: Pseudocode to evaluate the Lagrange form of the interpolating polynomial, $p_N(x) = f_0 \cdot \ell_0(x) + f_1 \cdot \ell_1(x) + \cdots + f_N \cdot \ell_N(x)$, at $x = z$.

Problem 4.1.15. Determine the Lagrange form of the interpolating polynomial to the data $(-1, 0)$, $(0, 1)$ and $(1, 3)$. Is it the same polynomial as in Example 4.1.4? Why or why not?

Problem 4.1.16. Show that $p_2(x) = f_0 \cdot \ell_0(x) + f_1 \cdot \ell_1(x) + f_2 \cdot \ell_2(x)$ interpolates to the data $\{(x_i, f_i)\}_{i=0}^2$.

Problem 4.1.17. For the data $(1, 2)$, $(3, 3)$ and $(5, 4)$ in Example 4.1.7, check that the Lagrange form of the interpolating polynomial agrees precisely with the power series form $\frac{3}{2} + \frac{1}{2}x$ fitting the same data, as for the Newton form of the interpolating polynomial.
Problem 4.1.18. Determine the Lagrange form of the interpolating polynomial for the data $(0, 1)$, $(-1, 0)$, $(1, 2)$ and $(2, 0)$. Check that you have determined the same polynomial as the Newton form of the interpolating polynomial for the same data, see Problem 4.1.11.

Problem 4.1.19. Determine the Lagrange form of the interpolating polynomial for the data $(0, 1)$, $(-1, 2)$, $(1, 0)$, $(2, 1)$ and $(-2, 3)$. Check that you have determined the same polynomial as the Newton form of the interpolating polynomial for the same data, see Problem 4.1.12.

Problem 4.1.20. Let $p_N(x)$ be the Lagrange form of the interpolating polynomial for the data $\{(x_i, f_i)\}_{i=0}^N$. Determine the number of additions (subtractions), multiplications, and divisions that the code segment in Fig. 4.4 uses when evaluating the Lagrange form of $p_N(x)$ at one value of $x = z$.
How does the cost of using the Lagrange form of the interpolating polynomial for $m$ different evaluation points $x$ compare with the cost of using the Newton form for the same task? (See Problem 4.1.14.)

Problem 4.1.21. In this problem we consider "cubic Hermite interpolation" where the function and its first derivative are interpolated at the points $x = 0$ and $x = 1$:
1. For the function $\phi(x) \equiv (1 + 2x)(1 - x)^2$ show that $\phi(0) = 1$, $\phi(1) = 0$, $\phi'(0) = 0$, $\phi'(1) = 0$.
2. For the function $\psi(x) \equiv x(1 - x)^2$ show that $\psi(0) = 0$, $\psi(1) = 0$, $\psi'(0) = 1$, $\psi'(1) = 0$.
3. For the cubic Hermite interpolating polynomial $P(x) \equiv f(0)\phi(x) + f'(0)\psi(x) + f(1)\phi(1 - x) - f'(1)\psi(1 - x)$ show that $P(0) = f(0)$, $P'(0) = f'(0)$, $P(1) = f(1)$, $P'(1) = f'(1)$.

4.1.4 Chebyshev Polynomials and Series


Next, we consider the Chebyshev polynomials, $T_j(x)$, and we explain why Chebyshev series provide a better way to represent polynomials $p_N(x)$ than do power series (that is, why a Chebyshev polynomial basis often provides a better representation for polynomials than a power series basis). This representation only works if all the data are in the interval $[-1, 1]$, but it is easy to transform any finite interval $[a, b]$ onto the interval $[-1, 1]$ as we see later. Rather than discuss transformations to put all the data in the interval $[-1, 1]$, for simplicity we assume the data is already in place.

The $j$th Chebyshev polynomial $T_j(x)$ is defined by
$$T_j(x) \equiv \cos(j \arccos(x)), \quad j = 0, 1, 2, \dots$$
Note, this definition only works on the interval $x \in [-1, 1]$, because that is the domain of the function $\arccos(x)$. Now, let us compute the first few Chebyshev polynomials (and convince ourselves that they are polynomials):

$$T_0(x) = \cos(0 \arccos(x)) = 1$$
$$T_1(x) = \cos(1 \arccos(x)) = x$$
$$T_2(x) = \cos(2 \arccos(x)) = 2[\cos(\arccos(x))]^2 - 1 = 2x^2 - 1$$
We have used the double angle formula, $\cos 2\theta = 2\cos^2\theta - 1$, to derive $T_2(x)$, and we use its extension, the addition formula $\cos(\alpha) + \cos(\beta) = 2\cos\left(\frac{\alpha+\beta}{2}\right)\cos\left(\frac{\alpha-\beta}{2}\right)$, to derive the higher degree Chebyshev polynomials. We have
$$T_{j+1}(x) + T_{j-1}(x) = \cos[(j + 1)\arccos(x)] + \cos[(j - 1)\arccos(x)]$$
$$= \cos[j \arccos(x) + \arccos(x)] + \cos[j \arccos(x) - \arccos(x)]$$
$$= 2\cos[j \arccos(x)]\cos[\arccos(x)]$$
$$= 2x\,T_j(x)$$
So, starting from $T_0(x) = 1$ and $T_1(x) = x$, the Chebyshev polynomials may be defined by the recurrence relation
$$T_{j+1}(x) = 2xT_j(x) - T_{j-1}(x), \quad j = 1, 2, \dots$$
By this construction we see that the Chebyshev polynomial $T_N(x)$ has exact degree $N$ and that if $N$ is even (odd) then $T_N(x)$ involves only even (odd) powers of $x$; see Problems 4.1.24 and 4.1.25.

A Chebyshev series of order $N$ has the form
$$b_0 T_0(x) + b_1 T_1(x) + \cdots + b_N T_N(x)$$
for a given set of coefficients $b_0, b_1, \dots, b_N$. It is a polynomial of degree $N$; see Problem 4.1.26. The first eight Chebyshev polynomials written in power series form, and the first eight powers of $x$ written in Chebyshev series form, are given in Table 4.2. This table illustrates the fact that every polynomial can be written either in power series form or in Chebyshev series form.

T_0(x) = 1                              T_0(x) = 1
T_1(x) = x                              T_1(x) = x
T_2(x) = 2x^2 - 1                       (1/2){T_2(x) + T_0(x)} = x^2
T_3(x) = 4x^3 - 3x                      (1/4){T_3(x) + 3T_1(x)} = x^3
T_4(x) = 8x^4 - 8x^2 + 1                (1/8){T_4(x) + 4T_2(x) + 3T_0(x)} = x^4
T_5(x) = 16x^5 - 20x^3 + 5x             (1/16){T_5(x) + 5T_3(x) + 10T_1(x)} = x^5
T_6(x) = 32x^6 - 48x^4 + 18x^2 - 1      (1/32){T_6(x) + 6T_4(x) + 15T_2(x) + 10T_0(x)} = x^6
T_7(x) = 64x^7 - 112x^5 + 56x^3 - 7x    (1/64){T_7(x) + 7T_5(x) + 21T_3(x) + 35T_1(x)} = x^7

Table 4.2: Chebyshev polynomial conversion table

It is easy to compute the zeros of the first few Chebyshev polynomials manually. You'll find that $T_1(x)$ has a zero at $x = 0$, $T_2(x)$ has zeros at $x = \pm\frac{1}{\sqrt{2}}$, $T_3(x)$ has zeros at $x = 0, \pm\frac{\sqrt{3}}{2}$, etc. What you'll observe is that all the zeros are real and lie in the interval $(-1, 1)$, and that the zeros of $T_n(x)$ interlace those of $T_{n+1}(x)$; that is, between each pair of zeros of $T_{n+1}(x)$ is a zero of $T_n(x)$. In fact, all these properties are seen easily using the general formula for the zeros of the Chebyshev polynomial $T_n(x)$:
$$x_i = \cos\left(\frac{2i - 1}{2n}\pi\right), \quad i = 1, 2, \dots, n$$
which are the so-called Chebyshev points. These points are important when discussing the error in polynomial interpolation; see Section 4.2.
One advantage of using a Chebyshev polynomial $T_j(x) = \cos(j \arccos(x))$ over using the corresponding power term $x^j$ of the same degree to represent data for values $x \in [-1, +1]$ is that $T_j(x)$ oscillates $j$ times between $x = -1$ and $x = +1$, see Fig. 4.5. Also, as $j$ increases the first and last zero crossings of $T_j(x)$ get progressively closer to, but never reach, the interval endpoints $x = -1$ and $x = +1$. This is illustrated in Fig. 4.6 and Fig. 4.7, where we plot some of the Chebyshev and power series basis functions on the interval $[0, 1]$, with transformed variable $\bar{x} = 2x - 1$. That is, Fig. 4.6 shows $T_3(\bar{x})$, $T_4(\bar{x})$ and $T_5(\bar{x})$ for $x \in [0, 1]$, and Fig. 4.7 shows the corresponding powers $\bar{x}^3$, $\bar{x}^4$ and $\bar{x}^5$. Observe that the graphs of the Chebyshev polynomials are quite distinct while those of the powers of $x$ are quite closely grouped. When we can barely discern the difference between two functions graphically, it is likely that the computer will have difficulty discerning the difference numerically.

Suppose we want to construct an interpolating polynomial of degree $N$ using a Chebyshev series $p_N(x) = \sum_{j=0}^{N} b_j T_j(x)$. Given the data $\{(x_i, f_i)\}_{i=0}^N$, where all the $x_i \in [-1, 1]$, we need to build and solve the linear system
$$p_N(x_i) \equiv \sum_{j=0}^{N} b_j T_j(x_i) = f_i, \quad i = 0, 1, \dots, N$$
This is simply a set of $N + 1$ equations in the $N + 1$ unknowns, the coefficients $b_j$, $j = 0, 1, \dots, N$. To evaluate $T_j(x_i)$, $j = 0, 1, \dots, N$ in the $i$th equation we simply use the recurrence relation $T_{j+1}(x_i) = 2x_i T_j(x_i) - T_{j-1}(x_i)$, $j = 1, 2, \dots, N-1$ with $T_0(x_i) = 1$, $T_1(x_i) = x_i$. The resulting linear system can then be solved using, for example, the techniques discussed in Chapter 3. By the Polynomial Interpolation Uniqueness theorem, we compute the same polynomial (written in a different form) as when using each of the previous interpolation methods on the same data.
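A short sketch of this construction in Python (assuming, as above, that the nodes already lie in $[-1, 1]$; the helper name is ours):

import numpy as np

def chebyshev_matrix(x):
    """Column j holds T_j evaluated at the nodes x, built by the recurrence."""
    n = len(x)
    T = np.ones((n, n))                               # T_0(x_i) = 1
    if n > 1:
        T[:, 1] = x                                   # T_1(x_i) = x_i
    for j in range(1, n - 1):
        T[:, j + 1] = 2 * x * T[:, j] - T[:, j - 1]   # T_{j+1} = 2x T_j - T_{j-1}
    return T

x = np.linspace(-1.0, 1.0, 5)                         # five nodes in [-1, 1]
b = np.linalg.solve(chebyshev_matrix(x), np.exp(x))   # coefficients b_j fitting e^x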

Figure 4.5: The Chebyshev polynomials $T_0(x)$ top left; $T_1(x)$ top right; $T_2(x)$ middle left; $T_3(x)$ middle right; $T_4(x)$ bottom left; $T_5(x)$ bottom right.

Figure 4.6: The shifted Chebyshev polynomials $T_2(\bar{x})$ top left; $T_3(\bar{x})$ top right; $T_4(\bar{x})$ bottom left; $T_5(\bar{x})$ bottom right.

Figure 4.7: The powers $\bar{x}^2$ top left; $\bar{x}^3$ top right; $\bar{x}^4$ bottom left; $\bar{x}^5$ bottom right.

The algorithm most often used to evaluate a Chebyshev series is a recurrence that is similar to Horner's scheme for power series (see Chapter 6). Indeed, the power series form and the corresponding Chebyshev series form of a given polynomial can be evaluated at essentially the same arithmetic cost using these recurrences.
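The recurrence alluded to here is commonly known as Clenshaw's algorithm; a minimal sketch for evaluating $b_0 T_0(x) + \cdots + b_N T_N(x)$ at a scalar x (the function name is ours):

def clenshaw_eval(b, x):
    u, v = 0.0, 0.0
    for k in range(len(b) - 1, 0, -1):       # k = N downto 1
        u, v = 2.0 * x * u - v + b[k], u
    return x * u - v + b[0]

print(clenshaw_eval([0.0, 0.0, 1.0], 0.5))   # T_2(0.5) = 2(0.5)^2 - 1 = -0.5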

Problem 4.1.22. In this problem you are to exploit the Chebyshev recurrence relation.
1. Use the Chebyshev recurrence relation to reproduce Table 4.2.
2. Use the Chebyshev recurrence relation and Table 4.2 to compute $T_8(x)$ as a power series in $x$.
3. Use Table 4.2 and the formula for $T_8(x)$ to compute $x^8$ as a Chebyshev series.

Problem 4.1.23. Use Table 4.2 to
1. Write $3T_0(x) - 2T_1(x) + 5T_2(x)$ as a polynomial in power series form.
2. Write the polynomial $4 - 3x + 5x^2$ as a Chebyshev series involving $T_0(x)$, $T_1(x)$ and $T_2(x)$.

Problem 4.1.24. Prove that the $N$th Chebyshev polynomial $T_N(x)$ is a polynomial of degree $N$.

Problem 4.1.25. Prove the following results that are easy to observe in Table 4.2.
1. The even indexed Chebyshev polynomials $T_{2n}(x)$ all have power series representations in terms of even powers of $x$ only.
2. The odd powers $x^{2n+1}$ have Chebyshev series representations involving only odd indexed Chebyshev polynomials $T_{2s+1}(x)$.

Problem 4.1.26. Prove that the Chebyshev series

b0 T0 (x) + b1 T1 (x) + · · · + bN TN (x)

is a polynomial of degree N .
Problem 4.1.27. Show that the Chebyshev points $x_i = \cos\left(\frac{2i - 1}{2N}\pi\right)$, $i = 1, 2, \dots, N$ are the complete set of zeros of $T_N(x)$.

Problem 4.1.28. Show that the points $x_i = \cos\left(\frac{i}{N}\pi\right)$, $i = 1, 2, \dots, N-1$ are the complete set of extrema of $T_N(x)$.

4.2 The Error in Polynomial Interpolation


Let $p_N(x)$ be the polynomial of degree $N$ interpolating to the data $\{(x_i, f_i = f(x_i))\}_{i=0}^N$. How accurately does the polynomial $p_N(x)$ approximate the data function $f(x)$ at any point $x$? Mathematically the answer to this question is the same for all forms of the polynomial of a given degree interpolating the same data. However, as we see later, there are differences in accuracy in practice, but these are related not to the accuracy of polynomial interpolation but to the accuracy of the implementation.

Let the evaluation point $x$ and all the interpolation points $\{x_i\}_{i=0}^N$ lie in a closed interval $[a, b]$. An advanced course in numerical analysis shows, using Rolle's Theorem (see Problem 4.2.1) repeatedly, that the error expression may be written as
$$f(x) - p_N(x) = \frac{\omega_{N+1}(x)}{(N+1)!} f^{(N+1)}(\xi_x)$$
where $\omega_{N+1}(x) \equiv (x - x_0)(x - x_1)\cdots(x - x_N)$ and $\xi_x$ is some (unknown) point in the interval $[a, b]$. The precise location of the point $\xi_x$ depends on the function $f(x)$, the point $x$ at which the interpolating polynomial is to be evaluated, and the data points $\{x_i\}_{i=0}^N$. The term $f^{(N+1)}(\xi_x)$ is the $(N+1)$st derivative of $f(x)$ evaluated at the point $x = \xi_x$. Some of the intrinsic properties of the interpolation error are:
1. For any value of $i$, the error is zero when $x = x_i$ because $\omega_{N+1}(x_i) = 0$ (the interpolating conditions).
2. The error is zero when the data $f_i$ are measurements of a polynomial $f(x)$ of exact degree $N$, because then the $(N+1)$st derivative $f^{(N+1)}(x)$ is identically zero. This is simply a statement of the uniqueness theorem of polynomial interpolation.
Taking absolute values in the interpolation error expression and maximizing both sides of the resulting inequality over $x \in [a, b]$, we obtain the polynomial interpolation error bound
$$\max_{x \in [a,b]} |f(x) - p_N(x)| \le \max_{x \in [a,b]} |\omega_{N+1}(x)| \cdot \frac{\max_{z \in [a,b]} |f^{(N+1)}(z)|}{(N+1)!} \qquad (4.2)$$
To make this error bound small we must make either the term $\max_{x \in [a,b]} \frac{|f^{(N+1)}(x)|}{(N+1)!}$ or the term $\max_{x \in [a,b]} |\omega_{N+1}(x)|$, or both, small. Generally, we have little information about the function $f(x)$ or its derivatives, even about their sizes, and in any case we cannot change them to minimize the bound.
In the absence of information about $f(x)$ or its derivatives, we might aim to choose the nodes $\{x_i\}_{i=0}^N$ so that $|\omega_{N+1}(x)|$ is small throughout $[a, b]$. The plots in Fig. 4.8 illustrate that generally equally spaced nodes, defined by $x_i = a + \frac{i}{N}(b - a)$, $i = 0, 1, \dots, N$, are not a good choice: for them the polynomial $\omega_{N+1}(x)$ oscillates with zeros at the nodes as anticipated, but with the amplitudes of the oscillations growing as the evaluation point $x$ approaches either endpoint of the interval $[a, b]$.

Figure 4.8: Plot of the polynomial $\omega_{N+1}(x)$ for equally spaced nodes $x_i \in [0, 1]$, and $N = 5$. The equally spaced nodes are denoted by circles. The left plot shows $\omega_{N+1}(x)$ on the interval $[0, 1]$, and the right plot shows $\omega_{N+1}(x)$ on the interval $[-0.05, 1.05]$.

Of course, data is usually measured at equally spaced points, and this is a matter over which you will have little or no control. So, though equally spaced points are a bad choice in theory, they may be our only choice!

The right plot in Fig. 4.8 depicts the polynomial $\omega_{N+1}(x)$ for points $x$ slightly outside the interval $[a, b]$, where $\omega_{N+1}(x)$ grows like $x^{N+1}$. Evaluating the polynomial $p_N(x)$ for points $x$ outside the interval $[a, b]$ is extrapolation. We can see from this plot that if we evaluate the polynomial $p_N(x)$ outside the interval $[a, b]$, then the error component $\omega_{N+1}(x)$ can be very large, indicating that extrapolation should be avoided whenever possible; this is true for all choices of interpolation points. This observation implies that we need to know the application intended for the interpolating function before we measure the data, a fact frequently ignored when designing an experiment. We may not be able to avoid extrapolation but we certainly need to be aware of its dangers.

For the data in Fig. 4.9, we use the Chebyshev points for the interval $[a, b] = [0, 1]$:
$$x_i = \frac{b + a}{2} + \frac{b - a}{2}\cos\left(\frac{2i + 1}{2N + 2}\pi\right), \quad i = 0, 1, \dots, N.$$

Note that the Chebyshev points do not include the endpoints $a$ or $b$; all the Chebyshev points lie inside the open interval $(a, b)$. For the interval $[a, b] = [-1, +1]$, the Chebyshev points are the zeros of the Chebyshev polynomial $T_{N+1}(x)$, hence the function $\omega_{N+1}(x)$ is a scaled version of $T_{N+1}(x)$ (see Section 4.1.4 for the definition of the Chebyshev polynomials and the Chebyshev points).
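A small sketch of this formula in Python (the function name is ours; the points come out in decreasing order):

import numpy as np

def chebyshev_points(a, b, N):
    """The N+1 Chebyshev points shifted to the interval [a, b]."""
    i = np.arange(N + 1)
    return (b + a) / 2 + (b - a) / 2 * np.cos((2 * i + 1) / (2 * N + 2) * np.pi)

print(chebyshev_points(0.0, 1.0, 5))    # six nodes, all strictly inside (0, 1)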
With the Chebyshev points as nodes, the maximum value of the polynomial $|\omega_{N+1}(x)|$ on $[a, b] = [0, 1]$ is less than half its value in Fig. 4.8. As $N$ increases, the improvement in the size of $|\omega_{N+1}(x)|$ when using the Chebyshev points rather than equally spaced points is even greater. Indeed, for polynomials of degree 20 or less, interpolation to the data measured at the Chebyshev points gives a maximum error not greater than twice the smallest possible maximum error (known as the minimax error) taken over all polynomial fits to the data (not just using interpolation). However, there are penalties:

1. As with equally spaced points, extrapolation should be avoided because the error component $\omega_{N+1}(x)$ can be very large for points outside the interval $[a, b]$ (see the right plot in Fig. 4.9). Indeed, extrapolation using the interpolating function based on data measured at Chebyshev points can be even more disastrous than extrapolation based on the same number of equally spaced data points.

2. It may be difficult to obtain the data $f_i$ measured at the Chebyshev points.



Figure 4.9: Plot of the polynomial $\omega_{N+1}(x)$ for Chebyshev nodes $x_i \in [0, 1]$, and $N = 5$. The Chebyshev nodes are denoted by circles. The left plot shows $\omega_{N+1}(x)$ on the interval $[0, 1]$, and the right plot shows $\omega_{N+1}(x)$ on the interval $[-0.05, 1.05]$.

Figs. 4.8 and 4.9 suggest that to choose $p_N(x)$ so that it will accurately approximate all reasonable choices of $f(x)$ on an interval $[a, b]$, we should
1. Choose the nodes $\{x_i\}_{i=0}^N$ so that, for all likely evaluation points $x$, $\min(x_i) \le x \le \max(x_i)$.
2. If possible, choose the nodes as the Chebyshev points. If this is not possible, attempt to choose them so they are denser close to the endpoints of the interval $[a, b]$.

Problem 4.2.1. Prove Rolle's Theorem: Let $f(x)$ be continuous and differentiable on the closed interval $[a, b]$ and let $f(a) = f(b) = 0$; then there exists a point $c \in (a, b)$ such that $f'(c) = 0$.

Problem 4.2.2. Let $x = x_0 + sh$ and $x_i = x_0 + ih$; show that $\omega_{N+1}(x) = h^{N+1} s(s - 1)\cdots(s - N)$.

Problem 4.2.3. Determine the following bound for straight line interpolation (that is, with a polynomial of degree one, $p_1(x)$) to the data $(a, f(a))$ and $(b, f(b))$:
$$\max_{x \in [a,b]} |f(x) - p_1(x)| \le \frac{1}{8}|b - a|^2 \max_{z \in [a,b]} |f^{(2)}(z)|$$
[Hint: You need to use the bound (4.2) for this special case, and use standard calculus techniques to find $\max_{x \in [a,b]} |\omega_{N+1}(x)|$.]

Problem 4.2.4. Let $f(x)$ be a polynomial of degree $N + 1$ for which $f(x_i) = 0$, $i = 0, 1, \dots, N$. Show that the interpolation error expression reduces to $f(x) = A\,\omega_{N+1}(x)$ for some constant $A$. Explain why this result is to be expected.

Problem 4.2.5. For $N = 5$, $N = 10$ and $N = 15$ plot, on one graph, the three polynomials $\omega_{N+1}(x)$ for $x \in [0, 1]$ defined using equally spaced nodes. For the same values of $N$ plot, on one graph, the three polynomials $\omega_{N+1}(x)$ for $x \in [0, 1]$ defined using Chebyshev nodes. Compare the three maximum values of $|\omega_{N+1}(x)|$ on the interval $[0, 1]$ for each of these two graphs. Also, compare the corresponding maximum values in the two graphs.

Runge's Example

To use polynomial interpolation to obtain an accurate estimate of $f(x)$ on a fixed interval $[a, b]$, it is natural to think that increasing the number of interpolating points in $[a, b]$, and hence increasing the degree of the polynomial, will reduce the error in the polynomial interpolation to $f(x)$. In fact, for some functions $f(x)$ this approach may worsen the accuracy of the interpolating polynomial. When this is the case, the effect is usually greatest for equally spaced interpolation points. At least a part of the problem arises from the term $\max_{x \in [a,b]} \frac{|f^{(N+1)}(x)|}{(N+1)!}$ in the expression for the interpolation error. This term may grow, or at least not decay, as $N$ increases, even though the denominator $(N+1)!$ grows very quickly as $N$ increases.

Consider the function $f(x) = \frac{1}{1 + x^2}$ on the interval $[-5, 5]$. Fig. 4.10 shows a plot of interpolating polynomials of degree $N = 10$ for this function using 11 (i.e., $N + 1$) equally spaced nodes and 11 Chebyshev nodes. Also shown in the figure is the error $f(x) - p_N(x)$ for each interpolating polynomial. Note that the behavior of the error for the equally spaced points clearly mimics the behavior of $\omega_{N+1}(x)$ in Fig. 4.8. For the same number of Chebyshev points the error is much smaller and more evenly distributed on the interval $[-5, 5]$, but it still mimics $\omega_{N+1}(x)$ for this placement of points.
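The comparison in Fig. 4.10 is easy to reproduce numerically; a sketch, reusing the lagrange_eval function sketched in Section 4.1.3, is given below (the 401-point evaluation grid is our choice):

import numpy as np

runge = lambda x: 1.0 / (1.0 + x ** 2)
N = 10
xs = np.linspace(-5.0, 5.0, 401)                      # evaluation grid

equal = np.linspace(-5.0, 5.0, N + 1)
cheb = 5.0 * np.cos((2 * np.arange(N + 1) + 1) / (2 * N + 2) * np.pi)
for name, nodes in [("equally spaced", equal), ("Chebyshev", cheb)]:
    p = np.array([lagrange_eval(nodes, runge(nodes), z) for z in xs])
    print(name, np.max(np.abs(runge(xs) - p)))        # max error roughly 1.9 vs 0.1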

Figure 4.10: Interpolating polynomials for $f(x) = \frac{1}{1 + x^2}$ using 11 equally spaced points (top left) and 11 Chebyshev points (top right) on the interval $[-5, 5]$. The associated error functions $f(x) - p_{10}(x)$ are shown in the bottom two plots.

Problem 4.2.6. Runge's example. Consider the function $f(x) = \frac{1}{1 + x^2}$ on the interval $[-5, +5]$.
1. For $N = 5$, $N = 10$ and $N = 15$ plot the error $e(x) = p_N(x) - f(x)$ where the polynomial $p_N(x)$ is computed by interpolating at the $N + 1$ equally spaced nodes $x_i = a + \frac{i}{N}(b - a)$, $i = 0, 1, \dots, N$.
2. Repeat Part 1 but interpolate at the $N + 1$ Chebyshev nodes shifted to the interval $[-5, 5]$.
Create a table listing the approximate maximum absolute error and its location in $[a, b]$ as a function of $N$; there will be two columns, one for each of Parts 1 and 2. [Hint: For simplicity, compute the interpolating polynomial $p_N(x)$ using the Lagrange form. To compute the approximate maximum error you will need to sample the error between each pair of nodes (recall, the error is zero at the nodes); sampling at the midpoints of the intervals between the nodes will give you a sufficiently accurate estimate.]

Problem 4.2.7. Repeat the previous problem with data taken from $f(x) = e^x$ instead of $f(x) = \frac{1}{1 + x^2}$. What major differences do you observe in the behavior of the error?

The Effect of the Method of Representation of the Interpolating Polynomial

A second source of error in polynomial interpolation comes from the computation of the polynomial interpolating function and its evaluation. It is "well-known" that large Vandermonde systems tend to be ill-conditioned, but we see errors of different sizes for all our four fitting techniques (Vandermonde, Newton, Lagrange and Chebyshev), and the Newton approach (in our implementation) turns out to be even more ill-conditioned than the Vandermonde as the number of interpolation points increases. To illustrate this fact we present results from interpolating to $e^x$ with $n = 10, 20, \dots, 100$ equally spaced points in the interval $[-1, 1]$, so we are interpolating with polynomials of degrees $N = 9, 19, \dots, 99$, respectively. In Table 4.3, we show the maximum absolute errors (over 201 equally spaced points in $[-1, 1]$) of this experiment. We don't know the size of the interpolation error, as the error shown is a combination of interpolation error, error in computing the fit, and evaluation error, though we can be fairly confident that the errors for $n = 10$ are dominated by interpolation error since the errors are all approximately the same. The interpolation errors then decrease, and at $n = 20$ the error is clearly dominated by calculation errors (since all the errors are different). For $n > 20$ it is our supposition that calculation errors dominate, and in any case, for each value of $n$ the interpolation error cannot be larger than the smallest of the errors shown. Clearly, for large enough $n$ using the Chebyshev series is best, and for small $n$ it doesn't matter much which method is used. Surprisingly, the Lagrange interpolating function, for which we do not solve a system of linear equations but which involves an expensive evaluation, has very large errors when $n$ is large, but Newton is always worst. A plot of the errors shows that in all cases the errors are largest close to the ends of the interval of interpolation.

We repeated the experiment using Chebyshev points; the results are shown in Table 4.4. Here, the interpolation error is smaller, as is to be anticipated using Chebyshev points, and the computation error does not show until $n = 50$ for Newton and hardly at all for the Lagrange and Chebyshev fits. When the errors are large they are largest near the ends of the interval of interpolation, but overall they are far closer to being evenly distributed across the interval than in the equally spaced points case.

n Vandermonde Newton Lagrange Chebyshev


10 3.837398e-009 3.837397e-009 3.837398e-009 3.837399e-009
20 3.783640e-013 1.100786e-013 1.179390e-012 1.857403e-013
30 6.497576e-010 7.048900e-011 7.481358e-010 2.436179e-011
40 5.791111e-008 1.385682e-008 8.337930e-007 3.987289e-008
50 3.855115e-005 2.178791e-005 2.021784e-004 9.303255e-005
60 1.603913e-001 4.659020e-002 9.780899e-002 2.822994e-003
70 1.326015e+003 2.905506e+004 2.923504e+002 1.934900e+000
80 8.751649e+005 3.160393e+010 3.784625e+004 4.930093e+001
90 2.280809e+010 7.175811e+015 6.391371e+007 1.914712e+000
100 2.133532e+012 4.297183e+022 1.122058e+011 1.904017e+000

Table 4.3: Errors fitting $e^x$ using n equally spaced data points on the interval $[-1, 1]$.

n Vandermonde Newton Lagrange Chebyshev


10 6.027099e-010 6.027103e-010 6.027103e-010 6.027103e-010
20 8.881784e-016 1.332268e-015 3.552714e-015 1.554312e-015
30 8.881784e-016 9.992007e-016 2.664535e-015 1.332268e-015
40 1.110223e-015 1.110223e-015 3.552714e-015 1.332268e-015
50 7.105427e-015 1.532131e-009 3.996803e-015 1.776357e-015
60 5.494938e-012 1.224753e-004 5.773160e-015 1.332268e-015
70 1.460477e-009 1.502797e+000 4.884981e-015 2.164935e-015
80 2.291664e-006 3.292080e+005 4.440892e-015 1.332268e-015
90 2.758752e-004 3.140739e+011 5.773160e-015 1.332268e-015
100 1.422977e+000 1.389857e+018 8.881784e-015 1.776357e-015

Table 4.4: Errors fitting $e^x$ using n Chebyshev data points in $[-1, 1]$.



The size of the errors reported above depends, to some extent, on the way that the interpolation function is coded. In each case, we coded the function in the "obvious" way (using the formulas presented in this text). But using these formulas is not necessarily the best way to reduce the errors. It is clear that in the case of the Newton form the choice of the order of the interpolation points determines the diagonal values in the lower triangular matrix, and hence its degree of ill-conditioning. In our implementation of all the forms, and specifically the Newton form, we ordered the points in increasing order of value, that is, starting from the left. It is less apparent what, if any, substantial effect changing the order of the points would have in the Vandermonde and Lagrange cases. It is, however, possible to compute these polynomials in many different ways. For example, the Lagrange form can be computed using "barycentric" formulas. Consider the standard Lagrange form
$$p_N(x) = \sum_{i=0}^{N} \ell_i(x) f_i \quad \text{where} \quad \ell_i(x) = \prod_{j=0,\, j \neq i}^{N} \left(\frac{x - x_j}{x_i - x_j}\right).$$
It is easy to see that we can rewrite this in the barycentric form
$$p_N(x) = l(x) \sum_{i=0}^{N} \frac{\omega_i}{x - x_i} f_i \quad \text{where} \quad l(x) = \prod_{i=0}^{N}(x - x_i) \quad \text{and} \quad \omega_j = \frac{1}{\prod_{k \neq j}(x_j - x_k)}$$
which is less expensive to evaluate. It is also somewhat more accurate, but not by enough to be even close to competitive with using the Chebyshev polynomial basis.
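A hedged Python sketch of this barycentric form (the function name and the guard for the case where x coincides with a node, anticipating the "zero over zero" issue raised in Problem 4.2.9, are ours):

import numpy as np

def barycentric_eval(x_nodes, f, z):
    x_nodes = np.asarray(x_nodes, dtype=float)
    f = np.asarray(f, dtype=float)
    w = np.array([1.0 / np.prod(xj - np.delete(x_nodes, j))   # weights omega_j
                  for j, xj in enumerate(x_nodes)])
    diff = z - x_nodes
    if np.any(diff == 0.0):                  # z is a node: return its data value
        return f[np.argmin(np.abs(diff))]
    return np.prod(diff) * np.sum(w * f / diff)   # l(z) * sum omega_i f_i/(z - x_i)

print(barycentric_eval([1.0, 3.0, 5.0], [2.0, 3.0, 4.0], 7.0))   # 5.0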

Problem 4.2.8. Produce the equivalent of Tables 4.3 and 4.4 for Runge's function defined on the interval $[-1, 1]$. You should expect relatively similar behavior to that exhibited in the two tables.

Problem 4.2.9. Derive the barycentric form of the Lagrange interpolation formula from the standard form. For what values of $x$ is the barycentric form indeterminate (requiring computing a "zero over zero"), and how would you avoid this problem in practice?

4.3 Polynomial Splines


Now, we consider techniques designed to reduce the problems that arise when data are interpolated by a single polynomial. The first technique interpolates the data by a collection of low degree polynomials rather than by a single high degree polynomial. Another technique, outlined in Section 4.4, approximates, but does not necessarily interpolate, the data by least squares fitting a single low degree polynomial.

Generally, by reducing the size of the interpolation error bound we reduce the actual error. Since the term $\omega_{N+1}(x)$ in the bound is the product of $N + 1$ linear factors $|x - x_i|$, each the distance between two points that both lie in $[a, b]$, we have $|x - x_i| \le |b - a|$ and so
$$\max_{x \in [a,b]} |f(x) - p_N(x)| \le |b - a|^{N+1} \cdot \frac{\max_{x \in [a,b]} |f^{(N+1)}(x)|}{(N+1)!}$$
This (larger) bound suggests that we can make the error as small as we wish by freezing the value of $N$ and then reducing the size of $|b - a|$. We still need an approximation over the original interval, so we use a piecewise polynomial approximation: the original interval is divided into non-overlapping subintervals and a different polynomial fit of the data is used on each subinterval.

4.3.1 Linear Polynomial Splines


A simple piecewise polynomial fit is the linear interpolating spline. For data $\{(x_i, f_i)\}_{i=0}^N$, where
$$a = x_0 < x_1 < \cdots < x_N = b, \qquad h \equiv \max_i |x_i - x_{i-1}|,$$
the linear spline $S_{1,N}(x)$ is a continuous function that interpolates to the data and is constructed from linear functions that we identify as two-point interpolating polynomials:
$$S_{1,N}(x) = \begin{cases} f_0\dfrac{x - x_1}{x_0 - x_1} + f_1\dfrac{x - x_0}{x_1 - x_0} & \text{when } x \in [x_0, x_1] \\[1ex] f_1\dfrac{x - x_2}{x_1 - x_2} + f_2\dfrac{x - x_1}{x_2 - x_1} & \text{when } x \in [x_1, x_2] \\[1ex] \qquad\vdots \\ f_{N-1}\dfrac{x - x_N}{x_{N-1} - x_N} + f_N\dfrac{x - x_{N-1}}{x_N - x_{N-1}} & \text{when } x \in [x_{N-1}, x_N] \end{cases}$$
From the bound on the error for polynomial interpolation, in the case of an interpolating polynomial of degree one,
$$\max_{z \in [x_{i-1}, x_i]} |f(z) - S_{1,N}(z)| \le \frac{|x_i - x_{i-1}|^2}{8} \cdot \max_{x \in [x_{i-1}, x_i]} |f^{(2)}(x)| \le \frac{h^2}{8} \cdot \max_{x \in [a,b]} |f^{(2)}(x)|$$
(See Problem 4.2.3.) That is, the bound on the maximum absolute error behaves like $h^2$ as the maximum interval length $h \to 0$. Suppose that the nodes are chosen to be equally spaced in $[a, b]$, so that $x_i = a + ih$, $i = 0, 1, \dots, N$, where $h \equiv \frac{b - a}{N}$. As the number of points $N$ increases, the error in using $S_{1,N}(z)$ as an approximation to $f(z)$ tends to zero like $\frac{1}{N^2}$.

Example 4.3.1. We construct the linear spline to the data $(-1, 0)$, $(0, 1)$ and $(1, 3)$:
$$S_{1,2}(x) = \begin{cases} 0 \cdot \dfrac{x - 0}{(-1) - 0} + 1 \cdot \dfrac{x - (-1)}{0 - (-1)} & \text{when } x \in [-1, 0] \\[1ex] 1 \cdot \dfrac{x - 1}{0 - 1} + 3 \cdot \dfrac{x - 0}{1 - 0} & \text{when } x \in [0, 1] \end{cases}$$
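As a hedged aside, NumPy's np.interp evaluates exactly this piecewise linear interpolant for points inside $[x_0, x_N]$; a one-line check against Example 4.3.1:

import numpy as np

x = np.array([-1.0, 0.0, 1.0])
f = np.array([0.0, 1.0, 3.0])
print(np.interp(0.5, x, f))     # 2.0, from the piece 1 + 2x on [0, 1]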

An alternative (but equivalent) method for representing a linear spline uses a linear B-spline basis. A B-spline basis is one made up of functions with minimal (compact) support; that is, the functions making up the basis should be defined to be nonzero on the minimum number of contiguous intervals. In the case of linear splines each B-spline basis function is nonzero on at most two contiguous intervals and may be represented as follows. The linear B-spline basis, $L_i(x)$, $i = 0, 1, \dots, N$, is chosen so that $L_i(x_j) = 0$ for all $j \neq i$ and $L_i(x_i) = 1$. Here, each $L_i(x)$ is a "roof" shaped function with the apex of the roof at $(x_i, 1)$ and the span on the interval $[x_{i-1}, x_{i+1}]$, and with $L_i(x) \equiv 0$ outside $[x_{i-1}, x_{i+1}]$. That is,
$$L_0(x) = \begin{cases} \dfrac{x - x_1}{x_0 - x_1} & \text{when } x \in [x_0, x_1] \\ 0 & \text{for all other } x, \end{cases}$$
$$L_i(x) = \begin{cases} \dfrac{x - x_{i-1}}{x_i - x_{i-1}} & \text{when } x \in [x_{i-1}, x_i] \\[1ex] \dfrac{x - x_{i+1}}{x_i - x_{i+1}} & \text{when } x \in [x_i, x_{i+1}] \\ 0 & \text{for all other } x \end{cases} \qquad i = 1, 2, \dots, N-1,$$
132 CHAPTER 4. CURVE FITTING

and
$$L_N(x) = \begin{cases} \dfrac{x - x_{N-1}}{x_N - x_{N-1}} & \text{when } x \in [x_{N-1}, x_N] \\ 0 & \text{for all other } x. \end{cases}$$
In terms of the linear B-spline basis we can write
$$S_{1,N}(x) = \sum_{i=0}^{N} L_i(x) \cdot f_i$$

Example 4.3.2. We construct the linear spline to the data $(-1, 0)$, $(0, 1)$ and $(1, 3)$ using the linear B–spline basis. First we construct the basis:
$$L_0(x) = \begin{cases} \dfrac{x - 0}{(-1) - 0} & x \in [-1, 0]\\[1ex] 0 & x \in [0, 1] \end{cases}$$
$$L_1(x) = \begin{cases} \dfrac{x - (-1)}{0 - (-1)} & x \in [-1, 0]\\[1ex] \dfrac{x - 1}{0 - 1} & x \in [0, 1] \end{cases}$$
$$L_2(x) = \begin{cases} 0 & x \in [-1, 0]\\[1ex] \dfrac{x - 0}{1 - 0} & x \in [0, 1] \end{cases}$$
which are shown in Fig. 4.11. Notice that each $L_i(x)$ is a "roof" shaped function. The linear spline interpolating to the data is then given by
$$S_{1,2}(x) = 0 \cdot L_0(x) + 1 \cdot L_1(x) + 3 \cdot L_2(x)$$
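A short Matlab sketch of this basis-sum evaluation follows. The helper hatL is our own illustrative function (it is not built into Matlab), it assumes the nodes are sorted, and it is meant to live in its own script or function file since recent versions of Matlab allow local functions at the end of a script:

% Evaluate S_{1,2}(x) = sum_i f_i * L_i(x) for the nodes -1, 0, 1.
nodes = [-1 0 1];  f = [0 1 3];
x = linspace(-1, 1, 201);
S = zeros(size(x));
for i = 1:3
    S = S + f(i) * hatL(x, nodes, i);
end
plot(x, S)

function L = hatL(x, nodes, i)
% Roof (hat) function: 1 at nodes(i), 0 at all other nodes.
L = zeros(size(x));
if i > 1                                  % rising edge on [nodes(i-1), nodes(i)]
    k = (x >= nodes(i-1)) & (x <= nodes(i));
    L(k) = (x(k) - nodes(i-1)) / (nodes(i) - nodes(i-1));
end
if i < length(nodes)                      % falling edge on [nodes(i), nodes(i+1)]
    k = (x >= nodes(i)) & (x <= nodes(i+1));
    L(k) = (x(k) - nodes(i+1)) / (nodes(i) - nodes(i+1));
end
end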

Problem 4.3.1. Let $S_{1,N}(x)$ be a linear spline.

1. Show that $S_{1,N}(x)$ is continuous and that it interpolates the data $\{(x_i, f_i)\}_{i=0}^{N}$.
2. At the interior nodes $x_i$, $i = 1, 2, \cdots, N-1$, show that $S'_{1,N}(x_i^-) = \dfrac{f_i - f_{i-1}}{x_i - x_{i-1}}$ and $S'_{1,N}(x_i^+) = \dfrac{f_{i+1} - f_i}{x_{i+1} - x_i}$.
3. Show that, in general, $S'_{1,N}(x)$ is discontinuous at the interior nodes.
4. Under what circumstances would $S_{1,N}(x)$ have a continuous derivative at $x = x_i$?
5. Determine the linear spline $S_{1,3}(x)$ that interpolates to the data $(0, 1)$, $(1, 2)$, $(3, 3)$ and $(5, 4)$. Is $S'_{1,3}(x)$ discontinuous at $x = 1$? At $x = 2$? At $x = 3$?

Problem 4.3.2. Given the data (0, 1), (1, 2), (3, 3) and (5, 4), write down the linear B–
spline basis functions Li (x), i = 0, 1, 2, 3 and the sum representing S1,3 (x). Show that
S1,3 (x) is the same linear spline that was described in Problem 4.3.1. Using this basis
function representation of the linear spline, evaluate the linear spline at x = 1 and at x = 2.
Figure 4.11: Linear B–spline basis functions for the data $(-1, 0)$, $(0, 1)$ and $(1, 3)$. Each $L_i(x)$ is a "roof" shaped function with the apex of the roof at $(x_i, 1)$.

4.3.2 Cubic Polynomial Splines


Linear splines suffer from a major limitation: the derivative of a linear spline is generally discontinuous at each interior node, $x_i$. To derive a piecewise polynomial approximation with a continuous derivative requires that we use polynomial pieces of higher degree and constrain the pieces to make the curve smoother.
Before the days of Computer Aided Design, a (mechanical) spline, for example a flexible piece of
wood, hard rubber, or metal, was used to help draw curves. To use a mechanical spline, pins were
placed at a judicious selection of points along a curve in a design, then the spline was bent so that it
touched each of these pins. Clearly, with this construction the spline interpolates the curve at these
pins and could be used to reproduce the curve in other drawings1 . The locations of the pins are
called knots. We can change the shape of the curve defined by the spline by adjusting the location
of the knots. For example, to interpolate to the data {(xi , fi )} we can place knots at each of the
nodes xi . This produces a special interpolating cubic spline.
To derive a mathematical model of this mechanical spline, suppose the data is $\{(x_i, f_i)\}_{i=0}^{N}$ where, as for linear splines, $x_0 < x_1 < \cdots < x_N$. The shape of a mechanical spline suggests the curve between the pins is approximately a cubic polynomial. So, we model the mechanical spline by a mathematical cubic spline, a special piecewise cubic approximation. Mathematically, a cubic spline $S_{3,N}(x)$ is a $C^2$ (that is, twice continuously differentiable) piecewise cubic polynomial. This means that

• $S_{3,N}(x)$ is piecewise cubic; that is, between consecutive knots $x_i$
$$S_{3,N}(x) = \begin{cases}
p_1(x) = a_{1,0} + a_{1,1}x + a_{1,2}x^2 + a_{1,3}x^3 & x \in [x_0, x_1]\\
p_2(x) = a_{2,0} + a_{2,1}x + a_{2,2}x^2 + a_{2,3}x^3 & x \in [x_1, x_2]\\
p_3(x) = a_{3,0} + a_{3,1}x + a_{3,2}x^2 + a_{3,3}x^3 & x \in [x_2, x_3]\\
\quad\vdots\\
p_N(x) = a_{N,0} + a_{N,1}x + a_{N,2}x^2 + a_{N,3}x^3 & x \in [x_{N-1}, x_N]
\end{cases}$$
where $a_{i,0}$, $a_{i,1}$, $a_{i,2}$ and $a_{i,3}$ are the coefficients in the power series representation of the $i$th cubic piece of $S_{3,N}(x)$. (Note: The approximation changes from one cubic polynomial piece to the next at the knots $x_i$.)

• $S_{3,N}(x)$ is $C^2$ (read "C two"); that is, $S_{3,N}(x)$ is continuous and has continuous first and second derivatives everywhere in the interval $[x_0, x_N]$ (and particularly at the knots).

To be an interpolating cubic spline we must have, in addition,

• $S_{3,N}(x)$ interpolates the data; that is,
$$S_{3,N}(x_i) = f_i, \quad i = 0, 1, \cdots, N$$
(Note: the points of interpolation $\{x_i\}_{i=0}^{N}$ are called nodes (pins), and we have chosen them to coincide with the knots.) For the mechanical spline, the knots where $S_{3,N}(x)$ changes shape and the nodes where $S_{3,N}(x)$ interpolates are the same. For the mathematical spline, it is traditional to place the knots at the nodes, as in the definition of $S_{3,N}(x)$. However, this placement is a choice and not a necessity.
Within each interval $(x_{i-1}, x_i)$ the corresponding cubic polynomial $p_i(x)$ is continuous and has continuous derivatives of all orders. Therefore, $S_{3,N}(x)$ or one of its derivatives can be discontinuous only at a knot. For example, consider the following illustration of what happens at the knot $x_i$.

1 Splines were used frequently to trace the plan of an airplane wing. A master template was chosen, placed on the material forming the rib of the wing, and critical points on the template were transferred to the material. After removing the template, the curve defining the shape of the wing was "filled-in" using a mechanical spline passing through the critical points.

For $x_{i-1} < x < x_i$, $S_{3,N}(x)$ has the value $p_i(x) = a_{i,0} + a_{i,1}x + a_{i,2}x^2 + a_{i,3}x^3$; for $x_i < x < x_{i+1}$, it has the value $p_{i+1}(x) = a_{i+1,0} + a_{i+1,1}x + a_{i+1,2}x^2 + a_{i+1,3}x^3$.

    Values of $S_{3,N}$ and its derivatives as $x \to x_i^-$:   $p_i(x_i)$,  $p'_i(x_i)$,  $p''_i(x_i)$,  $p'''_i(x_i)$
    Values of $S_{3,N}$ and its derivatives as $x \to x_i^+$:   $p_{i+1}(x_i)$,  $p'_{i+1}(x_i)$,  $p''_{i+1}(x_i)$,  $p'''_{i+1}(x_i)$

(Here, $x \to x_i^+$ means take the limit as $x \to x_i$ from above.) Observe that the function $S_{3,N}(x)$ has two cubic pieces incident to the interior knot $x_i$; to the left of $x_i$ it is the cubic $p_i(x)$ while to the right it is the cubic $p_{i+1}(x)$. Thus, a necessary and sufficient condition for $S_{3,N}(x)$ to be continuous and have continuous first and second derivatives is for these two cubic polynomials incident at the interior knot to match in value, and in first and second derivative values. So, we have a set of Smoothness Conditions; that is, at each interior knot:
$$p'_i(x_i) = p'_{i+1}(x_i), \quad p''_i(x_i) = p''_{i+1}(x_i), \qquad i = 1, 2, \cdots, N-1$$

In addition, to interpolate the data we have a set of Interpolation Conditions; that is, on the $i$th interval:
$$p_i(x_{i-1}) = f_{i-1}, \quad p_i(x_i) = f_i, \qquad i = 1, 2, \cdots, N$$
This way of writing the interpolation conditions also forces $S_{3,N}(x)$ to be continuous at the knots.

Each of the $N$ cubic pieces has four unknown coefficients, so our description of the function $S_{3,N}(x)$ involves $4N$ unknown coefficients. Interpolation imposes $2N$ linear constraints on the coefficients, and assuring continuous first and second derivatives imposes $2(N-1)$ additional linear constraints. (A linear constraint is a linear equation that must be satisfied by the coefficients of the polynomial pieces.) Therefore, there are a total of $4N - 2 = 2N + 2(N-1)$ linear constraints on the $4N$ unknown coefficients. In order to have the same number of equations as unknowns, we need 2 more (linear) constraints, and the whole set of constraints must be linearly independent.

Natural Boundary Conditions


A little thought about the mechanical spline as it is forced to touch the pins indicates why two
constraints are missing. What happens to the spline before it touches the first pin and after it
touches the last? If you twist the spline at its ends you find that its shape changes. A natural
condition is to let the spline rest freely, without stress or tension, at the first and last knot; that is, don't twist it at the ends. Such a spline has "minimal energy". Mathematically, this condition is expressed as the Natural Spline Condition:
$$p''_1(x_0) = 0, \qquad p''_N(x_N) = 0$$

The so-called natural spline results when these conditions are used as the 2 missing linear con-
straints.
Despite its comforting name and easily understood physical origin, the natural spline is seldom
used since it does not deliver an accurate approximation S3,N (x) near the ends of the interval
[x0 , xN ]. This may be anticipated from the fact that we are forcing a zero value on the second
derivative when this is not normally the value of the second derivative of the function that the data
measures. A natural cubic spline is built up from cubic polynomials, so it is reasonable to expect
that if the data is measured from a cubic polynomial then the natural cubic spline will reproduce
the cubic polynomial. However, for example, if the data are measured from the function $f(x) = x^2$ then the natural spline $S_{3,N}(x) \ne f(x)$; the function $f(x) = x^2$ has nonzero second derivatives at the nodes $x_0$ and $x_N$, where the value of the second derivative of the natural cubic spline $S_{3,N}(x)$ is zero by definition. In fact the error behaves like $O(h^2)$ where $h$ is the largest distance between interpolation points. Since it can be shown that the best possible error behavior is $O(h^4)$, the accuracy of the natural cubic spline leaves something to be desired.

Second Derivative Conditions


To clear up the inaccuracy problem associated with the natural spline conditions we could replace them with the correct second derivative values
$$p''_1(x_0) = f''(x_0), \qquad p''_N(x_N) = f''(x_N).$$
These second derivatives of the data are not usually available, but they can be replaced by sufficiently accurate approximations. If exact values or sufficiently accurate approximations are used then the resulting spline will be as accurate as possible for a cubic spline; that is, the error in the spline will behave like $O(h^4)$ where $h$ is the largest distance between interpolation points. (Approximations to the second derivative may be obtained by using polynomial interpolation. That is, two separate sets of data values near each of the endpoints of the interval $[x_0, x_N]$ are used to construct two interpolating polynomials. Then the two interpolating polynomials are each twice differentiated and the resulting twice differentiated polynomials are evaluated at the corresponding endpoints to approximate the values of $f''(x_0)$ and $f''(x_N)$.)

First Derivative (Slope) Conditions


Another choice of boundary conditions which delivers the full $O(h^4)$ accuracy of cubic splines is to use the correct first derivative values
$$p'_1(x_0) = f'(x_0), \qquad p'_N(x_N) = f'(x_N).$$
If we do not have access to the derivative of $f$, we can approximate it in a similar way to that described above for the second derivative.

Not-a-knot Conditions
A simpler, and accurate, spline may be determined by replacing the boundary conditions with the so-called not-a-knot conditions. Recall, at each knot, the spline $S_{3,N}(x)$ changes from one cubic to the next. The idea of the not-a-knot conditions is not to change cubic polynomials as one crosses both the first and the last interior nodes, $x_1$ and $x_{N-1}$. [Then, $x_1$ and $x_{N-1}$ are no longer knots!] These conditions are expressed mathematically as the Not-a-Knot Conditions
$$p'''_1(x_1) = p'''_2(x_1), \qquad p'''_{N-1}(x_{N-1}) = p'''_N(x_{N-1}).$$
By construction, the first two pieces, $p_1(x)$ and $p_2(x)$, of the cubic spline $S_{3,N}(x)$ agree in value, as well as in first and second derivative, at $x_1$. If $p_1(x)$ and $p_2(x)$ also satisfy the not-a-knot condition at $x_1$, it follows that $p_1(x) \equiv p_2(x)$; that is, $x_1$ is no longer a knot. The accuracy of this approach is also $O(h^4)$ but the error may still be quite large near the ends of the interval.

Cubic Spline Accuracy


For each way of supplying the additional linear constraints discussed above, the system of $4N$ linear constraints has a unique solution as long as the knots are distinct. So, the cubic spline interpolating function constructed using any one of the natural, the correct endpoint first or second derivative value, an approximated endpoint first or second derivative value, or the not-a-knot conditions is unique.

This uniqueness result permits an estimate of the error associated with approximations by cubic splines. From the error bound for polynomial interpolation, for a cubic polynomial $p_3(x)$ interpolating at data points in the interval $[a, b]$, we have
$$\max_{x\in[x_{i-1},x_i]} |f(x) - p_3(x)| \;\le\; C h^4 \cdot \max_{x\in[a,b]} |f^{(4)}(x)|$$
where $C$ is a constant and $h = \max_i |x_i - x_{i-1}|$. We might anticipate that the error associated with approximation by a cubic spline behaves like $h^4$ for $h$ small, as for an interpolating cubic polynomial. However, the maximum absolute error associated with the natural cubic spline approximation behaves like $h^2$ as $h \to 0$. In contrast, the maximum absolute error for a cubic spline based on correct endpoint first or second derivative values or on the not-a-knot conditions behaves like $h^4$. Unlike the natural cubic spline, the correct first and second derivative value and not-a-knot cubic splines reproduce cubic polynomials. That is, in both these cases, $S_{3,N} \equiv f$ on the interval $[a, b]$ whenever the data values are measured from a cubic polynomial $f$. This reproducibility property is a necessary condition for the error in the cubic spline $S_{3,N}$ approximation to a general function $f$ to behave like $h^4$.
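The predicted rate is easy to observe numerically. The sketch below (our own test, using Matlab's built-in spline, whose default end conditions are not-a-knot) measures the maximum error for a smooth function as $h$ is halved; each halving should reduce the error by roughly a factor of 16, consistent with $O(h^4)$:

f = @(x) sin(x);
xe = linspace(0, pi, 1001);            % fine grid for measuring the error
for N = [8 16 32 64]
    x = linspace(0, pi, N+1);          % h = pi/N
    err = max(abs(ppval(spline(x, f(x)), xe) - f(xe)));
    fprintf('N = %3d   max error = %.2e\n', N, err)
end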

B–splines
Codes that work with cubic splines do not use the power series representation of S3,N (x). Rather,
often they represent the spline as a linear combination of cubic B–splines; this approach is similar
to using a linear combination of the linear B–spline roof basis functions Li to represent a linear
spline. B–splines have compact support; that is, they are non-zero only inside a set of contiguous subintervals, just like the linear spline roof basis functions. So, the linear B–spline basis function, $L_i$, has support (is non-zero) over just the two contiguous intervals which combined make up the interval $[x_{i-1}, x_{i+1}]$, whereas the corresponding cubic B–spline basis function, $B_i$, has support (is non-zero) over four contiguous intervals which combined make up the interval $[x_{i-2}, x_{i+2}]$.

Construction of a B–spline
Assume the points $x_i$ are equally spaced with spacing $h$. We'll construct $B_p(x)$, the cubic B–spline centered on $x_p$. We know already that $B_p(x)$ is a cubic spline that is identically zero outside the interval $[x_p - 2h, x_p + 2h]$ and has knots at $x_p - 2h$, $x_p - h$, $x_p$, $x_p + h$, and $x_p + 2h$. We'll normalize it at $x_p$ by requiring $B_p(x_p) = 1$. So on the interval $[x_p - 2h, x_p - h]$ we can choose $B_p(x) = A(x - [x_p - 2h])^3$ where $A$ is a constant to be determined later. This is continuous and has continuous first and second derivatives matching the zero function at the knot $x_p - 2h$. Similarly, on the interval $[x_p + h, x_p + 2h]$ we can choose $B_p(x) = -A(x - [x_p + 2h])^3$, where $A$ will turn out to be the same constant by symmetry. Now, we need $B_p(x)$ to be continuous and have continuous first and second derivatives at the knot $x_p - h$. This is achieved by choosing $B_p(x) = A(x - [x_p - 2h])^3 + B(x - [x_p - h])^3$ on the interval $[x_p - h, x_p]$ and similarly $B_p(x) = -A(x - [x_p + 2h])^3 - B(x - [x_p + h])^3$ on the interval $[x_p, x_p + h]$, where again symmetry ensures the same constants. Now, all we need to do is to fix up the constants $A$ and $B$ to give the required properties at the knot $x = x_p$. Continuity and the requirement $B_p(x_p) = 1$ give
$$A(x_p - [x_p - 2h])^3 + B(x_p - [x_p - h])^3 = -A(x_p - [x_p + 2h])^3 - B(x_p - [x_p + h])^3 = 1$$
that is
$$8h^3 A + h^3 B = -8(-h)^3 A - (-h)^3 B = 1$$
which gives one equation, $8A + B = \frac{1}{h^3}$, for the two constants $A$ and $B$. Now first derivative continuity at the knot $x = x_p$ gives
$$3A(x_p - [x_p - 2h])^2 + 3B(x_p - [x_p - h])^2 = -3A(x_p - [x_p + 2h])^2 - 3B(x_p - [x_p + h])^2$$
After cancellation, this reduces to $4A + B = -4A - B$. The second derivative continuity condition gives an automatically satisfied identity. So, solving we have $B = -4A$. Hence $A = \frac{1}{4h^3}$ and $B = -\frac{1}{h^3}$. So,
$$B_p(x) = \begin{cases}
0 & x < x_p - 2h\\[0.5ex]
\frac{1}{4h^3}(x - [x_p - 2h])^3 & x_p - 2h \le x < x_p - h\\[0.5ex]
\frac{1}{4h^3}(x - [x_p - 2h])^3 - \frac{1}{h^3}(x - [x_p - h])^3 & x_p - h \le x < x_p\\[0.5ex]
-\frac{1}{4h^3}(x - [x_p + 2h])^3 + \frac{1}{h^3}(x - [x_p + h])^3 & x_p \le x < x_p + h\\[0.5ex]
-\frac{1}{4h^3}(x - [x_p + 2h])^3 & x_p + h \le x < x_p + 2h\\[0.5ex]
0 & x \ge x_p + 2h
\end{cases}$$

Interpolation using Cubic B–splines

Suppose we have data $\{(x_i, f_i)\}_{i=0}^{n}$ and the points $x_i$ are equally spaced so that $x_i = x_0 + ih$. Define the "exterior" equally spaced points $x_{-i}$, $x_{n+i}$, $i = 1, 2, 3$; then these are all the points we need to define the B–splines $B_i(x)$, $i = -1, 0, \ldots, n+1$. This is the B–spline basis; that is, the set of all B–splines which are nonzero in the interval $[x_0, x_n]$. We seek a B–spline interpolating function of the form $S_n(x) = \sum_{i=-1}^{n+1} a_i B_i(x)$. The interpolation conditions give
$$\sum_{i=-1}^{n+1} a_i B_i(x_j) = f_j, \qquad j = 0, 1, \ldots, n$$
which simplifies to
$$a_{j-1}B_{j-1}(x_j) + a_j B_j(x_j) + a_{j+1}B_{j+1}(x_j) = f_j, \qquad j = 0, 1, \ldots, n$$
as all other terms in the sum are zero at $x = x_j$. Now, by definition $B_j(x_j) = 1$, and we compute $B_{j-1}(x_j) = B_{j+1}(x_j) = \frac14$ by evaluating the above expression for the B–spline, giving the equations
$$\begin{aligned}
\tfrac14 a_{-1} + a_0 + \tfrac14 a_1 &= f_0\\
\tfrac14 a_0 + a_1 + \tfrac14 a_2 &= f_1\\
&\;\;\vdots\\
\tfrac14 a_{n-1} + a_n + \tfrac14 a_{n+1} &= f_n
\end{aligned}$$

These are $n+1$ equations in the $n+3$ unknowns $a_j$, $j = -1, 0, \ldots, n+1$. The additional equations come from applying the boundary conditions. For example, if we apply the natural spline conditions $S''_n(x_0) = S''_n(x_n) = 0$ we get the two additional equations
$$\tfrac{3}{2h^2}a_{-1} - \tfrac{3}{h^2}a_0 + \tfrac{3}{2h^2}a_1 = 0$$
$$\tfrac{3}{2h^2}a_{n-1} - \tfrac{3}{h^2}a_n + \tfrac{3}{2h^2}a_{n+1} = 0$$
The full set of $n+3$ linear equations may be solved by Gaussian elimination, but we can simplify the equations. Taking the first of the additional equations and the first of the previous set together we get $a_0 = \frac23 f_0$; similarly, from the last equations we find $a_n = \frac23 f_n$. So the set of linear equations reduces to
$$\begin{aligned}
a_1 + \tfrac14 a_2 &= f_1 - \tfrac16 f_0\\
\tfrac14 a_1 + a_2 + \tfrac14 a_3 &= f_2\\
\tfrac14 a_2 + a_3 + \tfrac14 a_4 &= f_3\\
&\;\;\vdots\\
\tfrac14 a_{n-3} + a_{n-2} + \tfrac14 a_{n-1} &= f_{n-2}\\
\tfrac14 a_{n-2} + a_{n-1} &= f_{n-1} - \tfrac16 f_n
\end{aligned}$$
The coefficient matrix of this linear system is
$$\begin{bmatrix}
1 & \tfrac14 & & & \\
\tfrac14 & 1 & \tfrac14 & & \\
& \ddots & \ddots & \ddots & \\
& & \tfrac14 & 1 & \tfrac14 \\
& & & \tfrac14 & 1
\end{bmatrix}$$

Matrices of this structure are called tridiagonal, and this particular tridiagonal matrix is of a special type known as positive definite. For this type of matrix, interchanges are not needed for the stability of Gaussian elimination. When interchanges are not needed for a tridiagonal system, Gaussian elimination reduces to a particularly simple algorithm. After we have solved the linear equations for $a_1, a_2, \ldots, a_{n-1}$ we can compute the values of $a_{-1}$ and $a_{n+1}$ from the "additional" equations above.
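A minimal Matlab sketch of this process follows (our own code, assuming f is a column vector holding the $n+1$ equally spaced data values $f_0, \ldots, f_n$ with $n \ge 3$); it builds the reduced tridiagonal system, solves it, and recovers the eliminated coefficients:

n  = numel(f) - 1;                   % data f(1), ..., f(n+1)
a0 = 2*f(1)/3;    an = 2*f(end)/3;   % from the natural end conditions
rhs = f(2:n);
rhs(1)   = rhs(1)   - f(1)/6;
rhs(end) = rhs(end) - f(end)/6;
T = eye(n-1) + diag(ones(n-2,1), 1)/4 + diag(ones(n-2,1), -1)/4;
amid = T \ rhs(:);                   % a_1, ..., a_{n-1}
am1  = 2*a0 - amid(1);               % a_{-1}: natural condition at x_0
anp1 = 2*an - amid(end);             % a_{n+1}: natural condition at x_n

(Here am1 and anp1 follow from rearranging the two "additional" natural spline equations, which reduce to $a_{-1} + a_1 = 2a_0$ and $a_{n-1} + a_{n+1} = 2a_n$.)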

Problem 4.3.3. Let $r(x) = r_0 + r_1 x + r_2 x^2 + r_3 x^3$ and $s(x) = s_0 + s_1 x + s_2 x^2 + s_3 x^3$ be cubic polynomials in $x$. Suppose that the value, first, second, and third derivatives of $r(x)$ and $s(x)$ agree at some point $x = a$. Show that $r_0 = s_0$, $r_1 = s_1$, $r_2 = s_2$, and $r_3 = s_3$, i.e., $r(x)$ and $s(x)$ are the same cubic. [Note: This is another form of the polynomial uniqueness theorem.]

Problem 4.3.4. Write down the equations determining the coefficients of the not-a-knot
cubic spline interpolating to the data (0, 1), (1, 0), (2, 3) and (3, 2). Just four equations are
sufficient. Why?

Problem 4.3.5. Let the knots be at the integers, i.e. $x_i = i$, so $B_0(x)$ has support on the interval $[-2, +2]$. Construct $B_0(x)$ so that it is a cubic spline normalized so that $B_0(0) = 1$. [Hint: Since $B_0(x)$ is a cubic spline it must be piecewise cubic and it must be continuous and have continuous first and second derivatives at all the knots, and particularly at the knots $-2, -1, 0, +1, +2$.]

Problem 4.3.6. In the derivation of the linear system for B–spline interpolation replace the equations corresponding to the natural boundary conditions by equations corresponding to (a) exact second derivative conditions and (b) not-a-knot conditions. In both cases use these equations to eliminate the coefficients $a_{-1}$ and $a_{n+1}$ and write down the structure of the resulting linear system.

Problem 4.3.7. Is the following function $S(x)$ a cubic spline? Why or why not?
$$S(x) = \begin{cases}
0 & x < 0\\
x^3 & 0 \le x < 1\\
x^3 + (x-1)^3 & 1 \le x < 2\\
-(x-3)^3 - (x-4)^3 & 2 \le x < 3\\
-(x-4)^3 & 3 \le x < 4\\
0 & 4 \le x
\end{cases}$$

4.3.3 Monotonic Piecewise Cubic Polynomials


Another approach to interpolation with piecewise cubic polynomials is, in addition to interpolating
the data, to attempt to preserve a qualitative property exhibited by the data. For example, the piece-
wise cubic polynomial could be chosen in such a way that it is monotonic in the intervals between
successive monotonic data values. In the simplest case where the data values are nondecreasing (or
non-increasing) throughout the interval then the fitted function is forced to have the same property
at all points in the interval. The piecewise cubic Hermite interpolating polynomial (PCHIP) achieves monotonicity by choosing the values of the derivative at the data points so that the function is non-decreasing, that is, with derivative nonnegative (or is non-increasing, that is, with derivative non-positive) throughout the interval. The resulting piecewise cubic polynomial will usually have discontinuous second derivatives at the data points and will hence be less smooth and a little less accurate for smooth functions than the cubic spline interpolating the same data. However, forcing monotonicity when it is present in the data has the effect of preventing oscillations and overshoot which can occur with a spline. For an illustration of this, see Examples 4.6.10 and 4.6.11 in Section 4.6.3.
It would be wrong, though, to think that because we don't enforce monotonicity directly when constructing a cubic spline we don't get it in practice. When the data points are sufficiently close (that is, $h$ is sufficiently small) and the function represented is smooth, the cubic spline is a very accurate approximation to the function that the data represents, and properties such as monotonicity are preserved.
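Looking ahead to the Matlab functions of Section 4.6, the contrast is easy to see with a short sketch: pchip follows monotonic, step-like data without overshoot, while spline can oscillate between the data points:

x = 1:6;   f = [0 0 0 1 1 1];
xx = linspace(1, 6, 301);
plot(x, f, 'o', xx, pchip(x, f, xx), '-', xx, spline(x, f, xx), '--')
legend('data', 'pchip', 'spline', 'Location', 'NW')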

4.4 Least Squares Fitting

In previous sections we determined an approximation of $f(x)$ by interpolating to the data $\{(x_i, f_i)\}_{i=0}^{N}$. An alternative to approximation via interpolation is approximation via a least squares fit.

Let $q_M(x)$ be a polynomial of degree $M$. Observe that $q_M(x_r) - f_r$ is the error in accepting $q_M(x_r)$ as an approximation to $f_r$. So, the sum of the squares of these errors
$$\Psi(q_M) \equiv \sum_{r=0}^{N} \{q_M(x_r) - f_r\}^2$$
gives a measure of how well $q_M(x)$ fits $f(x)$. The idea is that the smaller the value of $\Psi(q_M)$, the closer the polynomial $q_M(x)$ fits the data.

We say $p_M(x)$ is a least squares polynomial of degree $M$ if $p_M(x)$ is a polynomial of degree $M$ with the property that
$$\Psi(p_M) \le \Psi(q_M)$$
for all polynomials $q_M(x)$ of degree $M$; usually we have equality only if $q_M(x) \equiv p_M(x)$. For simplicity, we write $\Psi_M$ in place of $\Psi(p_M)$. As shown in an advanced course in numerical analysis, if the points $\{x_r\}$ are distinct and if $N \ge M$ there is one and only one least squares polynomial of degree $M$ for this data, so we say $p_M(x)$ is the least squares polynomial of degree $M$. So, the polynomial $p_M(x)$ that produces the smallest value $\Psi_M$ yields the least squares fit of the data.

While $p_M(x)$ produces the closest fit of the data in the least squares sense, it may not produce a very useful fit. For example, consider the case $M = N$: then the least squares fit $p_N(x)$ is the same as the interpolating polynomial; see Problem 4.4.1. We have seen already that the interpolating polynomial can be a poor fit in the sense of having a large and highly oscillatory error. So, a close fit in a least squares sense does not necessarily imply a very good fit and, in some cases, the closer the fit is to an interpolating function the less useful it might be.

Since the least squares criterion relaxes the fitting condition from interpolation to a weaker condition on the coefficients of the polynomial, we need fewer coefficients (that is, a lower degree polynomial) in the representation. For the problem to be well–posed, it is sufficient that all the data points be distinct and that $M \le N$.

Example 4.4.1. How is $p_M(x)$ determined for polynomials of each degree $M$? Consider the example:

• data:

    i   x_i   f_i
    0    1     2
    1    3     4
    2    4     3
    3    5     1

• least squares fit to this data by a straight line: $p_1(x) = a_0 + a_1 x$; that is, the coefficients $a_0$ and $a_1$ are to be determined.

We have
$$\Psi_1 = \sum_{r=0}^{3} \{p_1(x_r) - f_r\}^2 = \sum_{r=0}^{3} \{a_0 + a_1 x_r - f_r\}^2$$
Multivariate calculus provides a technique to identify the values of $a_0$ and $a_1$ that make $\Psi_1$ smallest; for a minimum of $\Psi_1$, the unknowns $a_0$ and $a_1$ must satisfy the linear equations
$$\frac{\partial \Psi_1}{\partial a_0} \equiv 2\sum_{r=0}^{3} \{a_0 + a_1 x_r - f_r\} = 2(4a_0 + 13a_1 - 10) = 0$$
$$\frac{\partial \Psi_1}{\partial a_1} \equiv 2\sum_{r=0}^{3} x_r\{a_0 + a_1 x_r - f_r\} = 2(13a_0 + 51a_1 - 31) = 0$$
that is, in matrix form after canceling the 2's throughout,
$$\begin{bmatrix} 4 & 13\\ 13 & 51 \end{bmatrix} \begin{bmatrix} a_0\\ a_1 \end{bmatrix} = \begin{bmatrix} 10\\ 31 \end{bmatrix}$$
Gaussian elimination followed by backward substitution computes the solution
$$a_0 = \frac{107}{35} \simeq 3.057, \qquad a_1 = -\frac{6}{35} \simeq -0.171$$
Substituting $a_0$ and $a_1$ gives the minimum value $\Psi_1 = \frac{166}{35}$.
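These numbers are easy to confirm with a quick Matlab sketch using only built-in operations:

x = [1 3 4 5]';   f = [2 4 3 1]';
A = [length(x)  sum(x);  sum(x)  sum(x.^2)]   % [4 13; 13 51]
b = [sum(f); sum(x.*f)]                       % [10; 31]
a = A \ b                                     % a(1) = 107/35, a(2) = -6/35
Psi1 = sum((a(1) + a(2)*x - f).^2)            % 166/35, about 4.743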

Generally, if we consider fitting data using a polynomial written in power series form
$$p_M(x) = a_0 + a_1 x + \cdots + a_M x^M$$
then $\Psi_M$ is quadratic in the unknown coefficients $a_0, a_1, \cdots, a_M$. For data $\{(x_i, f_i)\}_{i=0}^{N}$ we have
$$\Psi_M = \sum_{r=0}^{N} \{p_M(x_r) - f_r\}^2 = \{p_M(x_0) - f_0\}^2 + \{p_M(x_1) - f_1\}^2 + \cdots + \{p_M(x_N) - f_N\}^2$$
The coefficients $a_0, a_1, \cdots, a_M$ are determined by solving the linear system
$$\frac{\partial \Psi_M}{\partial a_0} = 0, \qquad \frac{\partial \Psi_M}{\partial a_1} = 0, \qquad \cdots, \qquad \frac{\partial \Psi_M}{\partial a_M} = 0$$
For each value $j = 0, 1, \cdots, M$, the linear equation $\frac{\partial \Psi_M}{\partial a_j} = 0$ is formed as follows. Observe that
$\frac{\partial}{\partial a_j} p_M(x_r) = x_r^j$ so, by the chain rule,
$$\begin{aligned}
\frac{\partial \Psi_M}{\partial a_j} &= \frac{\partial}{\partial a_j}\left[\{p_M(x_0) - f_0\}^2 + \{p_M(x_1) - f_1\}^2 + \cdots + \{p_M(x_N) - f_N\}^2\right]\\
&= 2\left[\{p_M(x_0) - f_0\}\frac{\partial p_M(x_0)}{\partial a_j} + \{p_M(x_1) - f_1\}\frac{\partial p_M(x_1)}{\partial a_j} + \cdots + \{p_M(x_N) - f_N\}\frac{\partial p_M(x_N)}{\partial a_j}\right]\\
&= 2\left[\{p_M(x_0) - f_0\}x_0^j + \{p_M(x_1) - f_1\}x_1^j + \cdots + \{p_M(x_N) - f_N\}x_N^j\right]\\
&= 2\sum_{r=0}^{N}\{p_M(x_r) - f_r\}x_r^j = 2\sum_{r=0}^{N} x_r^j\,p_M(x_r) - 2\sum_{r=0}^{N} f_r x_r^j.
\end{aligned}$$
Substituting the power series form for the polynomial $p_M(x_r)$ leads to
$$\frac{\partial \Psi_M}{\partial a_j} = 2\left(\sum_{r=0}^{N} x_r^j\bigl(a_0 + a_1 x_r + \cdots + a_M x_r^M\bigr) - \sum_{r=0}^{N} f_r x_r^j\right) = 2\left(a_0\sum_{r=0}^{N} x_r^j + a_1\sum_{r=0}^{N} x_r^{j+1} + \cdots + a_M\sum_{r=0}^{N} x_r^{j+M} - \sum_{r=0}^{N} f_r x_r^j\right)$$
Therefore, $\frac{\partial \Psi_M}{\partial a_j} = 0$ may be rewritten as the Normal equations:
$$a_0\sum_{r=0}^{N} x_r^j + a_1\sum_{r=0}^{N} x_r^{j+1} + \cdots + a_M\sum_{r=0}^{N} x_r^{j+M} = \sum_{r=0}^{N} f_r x_r^j, \qquad j = 0, 1, \cdots, M$$

In matrix form the Normal equations may be written
$$\begin{bmatrix}
\sum_{r=0}^{N} 1 & \sum_{r=0}^{N} x_r & \cdots & \sum_{r=0}^{N} x_r^M\\
\sum_{r=0}^{N} x_r & \sum_{r=0}^{N} x_r^2 & \cdots & \sum_{r=0}^{N} x_r^{M+1}\\
\vdots & \vdots & \ddots & \vdots\\
\sum_{r=0}^{N} x_r^M & \sum_{r=0}^{N} x_r^{M+1} & \cdots & \sum_{r=0}^{N} x_r^{2M}
\end{bmatrix}
\begin{bmatrix} a_0\\ a_1\\ \vdots\\ a_M \end{bmatrix} =
\begin{bmatrix} \sum_{r=0}^{N} f_r\\ \sum_{r=0}^{N} f_r x_r\\ \vdots\\ \sum_{r=0}^{N} f_r x_r^M \end{bmatrix}$$
The coefficient matrix of the Normal equations has special properties (it is both symmetric and positive definite). These properties permit the use of an accurate, efficient version of Gaussian elimination which exploits them, without the need for partial pivoting by rows for size.
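As a sketch (our own loop, not a library routine), the Normal equations for a general degree $M$ can be assembled directly from these sums; with $M = 1$ and the data of Example 4.4.1 this reproduces the 2-by-2 system above. For the conditioning reasons discussed at the end of this section, polyfit is preferable in practice when $M$ is large:

x = [1 3 4 5]';   f = [2 4 3 1]';   % data of Example 4.4.1
M = 1;                              % straight line fit
S = zeros(M+1);   b = zeros(M+1, 1);
for j = 0:M
    for k = 0:M
        S(j+1, k+1) = sum(x.^(j+k));   % entry is sum_r x_r^(j+k)
    end
    b(j+1) = sum(f .* x.^j);
end
a = S \ b;        % coefficients a_0, ..., a_M in ascending powers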

Example 4.4.2. To compute a straight line fit $a_0 + a_1 x$ to the data $\{(x_i, f_i)\}_{i=0}^{N}$ we set $M = 1$ in the Normal equations to give
$$a_0\sum_{r=0}^{N} 1 + a_1\sum_{r=0}^{N} x_r = \sum_{r=0}^{N} f_r$$
$$a_0\sum_{r=0}^{N} x_r + a_1\sum_{r=0}^{N} x_r^2 = \sum_{r=0}^{N} f_r x_r$$
Substituting the data

    i   x_i   f_i
    0    1     2
    1    3     4
    2    4     3
    3    5     1

from Example 4.4.1 we have the Normal equations
$$4a_0 + 13a_1 = 10$$
$$13a_0 + 51a_1 = 31$$
which gives the same result as in Example 4.4.1.

Example 4.4.3. To compute a quadratic fit $a_0 + a_1 x + a_2 x^2$ to the data $\{(x_i, f_i)\}_{i=0}^{N}$ we set $M = 2$ in the Normal equations to give
$$a_0\sum_{r=0}^{N} 1 + a_1\sum_{r=0}^{N} x_r + a_2\sum_{r=0}^{N} x_r^2 = \sum_{r=0}^{N} f_r$$
$$a_0\sum_{r=0}^{N} x_r + a_1\sum_{r=0}^{N} x_r^2 + a_2\sum_{r=0}^{N} x_r^3 = \sum_{r=0}^{N} f_r x_r$$
$$a_0\sum_{r=0}^{N} x_r^2 + a_1\sum_{r=0}^{N} x_r^3 + a_2\sum_{r=0}^{N} x_r^4 = \sum_{r=0}^{N} f_r x_r^2$$

The least squares formulation permits more general functions $f_M(x)$ than simply polynomials, but the unknown coefficients in $f_M(x)$ must still occur linearly. The most general form is
$$f_M(x) = a_0\phi_0(x) + a_1\phi_1(x) + \cdots + a_M\phi_M(x) = \sum_{r=0}^{M} a_r\phi_r(x)$$
with a linearly independent basis $\{\phi_r(x)\}_{r=0}^{M}$. By analogy with the power series case, the linear system of Normal equations is
$$a_0\sum_{r=0}^{N}\phi_0(x_r)\phi_j(x_r) + a_1\sum_{r=0}^{N}\phi_1(x_r)\phi_j(x_r) + \cdots + a_M\sum_{r=0}^{N}\phi_M(x_r)\phi_j(x_r) = \sum_{r=0}^{N} f_r\phi_j(x_r)$$
for $j = 0, 1, \cdots, M$. The Normal equations are


2 N N N
323 2 N 3
X X X X
2 a
6 0 (xr ) 0 (xr ) 1 (xr ) ··· 0 (xr ) M (xr ) 7 6 0 7 6 fr 0 (xr ) 7
6 r=0 76 7 6 r=0 7
6 r=0 r=0 76 7 6 7
6 X 76 7 6 X 7
6 N X N X N
76 7 6 N 7
6 (x ) (x ) 1 (xr )
2
··· (x ) (x ) 7 6 a 7 6 fr 1 (xr ) 7
6 1 r 0 r 1 r M r 76 1 7 6 7
6 r=0 r=0 r=0 76 7 = 6 r=0 7
6 76 7 6 7
6 . .. .. .. 76 . 7 6 .. 7
6 .. . . . 7 6 . 7 6 . 7
6 76 . 7 6 7
6 76 7 6 7
6 X N X N X N 76 7 6 X N 7
4 2 54 5 4 5
M (xr ) 0 (xr ) M (xr ) 1 (xr ) · · · M (xr ) aM fr M (xr )
r=0 r=0 r=0 r=0

The coefficient matrix of this linear system is symmetric and potentially ill–conditioned. In particular, the basis functions $\phi_j$ could be, for example, a linear polynomial spline basis, a cubic polynomial B–spline basis, Chebyshev polynomials in a Chebyshev series fit, or a set of linearly independent trigonometric functions.

The coefficient matrix of the Normal equations is usually reasonably well-conditioned when the number, $M$, of functions being fitted is small. The ill–conditioning grows with the number of functions. To avoid the possibility of ill–conditioning, the Normal equations are not usually formed; instead a stable QR factorization of a related matrix is employed to compute the least squares solution directly (the approach taken by Matlab's polyfit function), which is discussed in the next section.

Problem 4.4.1. Consider the data $\{(x_i, f_i)\}_{i=0}^{N}$. Argue why the interpolating polynomial of degree $N$ is also the least squares polynomial of degree $N$. Hint: What is the value of $\Psi(q_N)$ when $q_N(x)$ is the interpolating polynomial?

Problem 4.4.2. Show that $\Psi_0 \ge \Psi_1 \ge \cdots \ge \Psi_N = 0$ for any set of data $\{(x_i, f_i)\}_{i=0}^{N}$. Hint: The proof follows from the concept of minimization.
Problem 4.4.3. Find the least squares constant fit p0 (x) = a0 to the data in Example 4.4.1.
Plot the data, and both the constant and the linear least squares fits on one graph.
Problem 4.4.4. Find the least squares linear fit $p_1(x) = a_0 + a_1 x$ to the following data. Explain why you believe your answer is correct.

    i   x_i   f_i
    0    1     1
    1    3     1
    2    4     1
    3    5     1
    4    7     1

Problem 4.4.5. Find the least squares quadratic polynomial fits $p_2(x) = a_0 + a_1 x + a_2 x^2$ to each of the data sets:

    i   x_i   f_i        i   x_i   f_i
    0   -2     6         0   -2     5
    1   -1     3         1   -1     3
    2    0     1   and   2    0     0
    3    1     3         3    1     3
    4    2     6         4    2     5
Problem 4.4.6. Use the chain rule to derive the Normal equations for the general basis functions $\{\phi_j\}_{j=0}^{M}$.

Problem 4.4.7. Write down the Normal equations for the following choice of basis functions: $\phi_0(x) = 1$, $\phi_1(x) = \sin(\pi x)$ and $\phi_2(x) = \cos(\pi x)$. Find the coefficients $a_0$, $a_1$ and $a_2$ for a least squares fit to the data

    i   x_i   f_i
    0   -1     5
    1   -0.5   3
    2    0     0
    3    0.5   3
    4    1     5

Problem 4.4.8. Write down the Normal equations for the following choice of basis functions: $\phi_0(x) = T_0(x)$, $\phi_1(x) = T_1(x)$ and $\phi_2(x) = T_2(x)$. Find the coefficients $a_0$, $a_1$ and $a_2$ for a least squares fit to the data in Problem 4.4.7.
4.5 Least Squares and Matrix Factorizations

The previous section described how to use calculus to derive the normal equations for a least squares data fitting problem. In this section, we connect the process to the language of linear algebra. To begin, we start from the basic polynomial least squares data fitting problem:

• Given data points $\{(x_i, f_i)\}_{i=0}^{N}$,
• find a polynomial $p_M(x)$ of degree $M \le N$ so that $p_M(x_i) \approx f_i$.

The ideal situation would be that the polynomial exactly fits all of the data; that is,
$$p_M(x_i) = f_i, \quad i = 0, 1, 2, \ldots, N. \tag{4.3}$$
If we use the standard monomial basis for $p_M(x)$,
$$p_M(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_M x^M,$$
then the equations (4.3) can be written in matrix-vector form as $Va = f$:
$$\underbrace{\begin{bmatrix} 1 & x_0 & \cdots & x_0^M\\ 1 & x_1 & \cdots & x_1^M\\ 1 & x_2 & \cdots & x_2^M\\ \vdots & \vdots & & \vdots\\ 1 & x_N & \cdots & x_N^M \end{bmatrix}}_{V}\;
\underbrace{\begin{bmatrix} a_0\\ a_1\\ \vdots\\ a_M \end{bmatrix}}_{a} =
\underbrace{\begin{bmatrix} f_0\\ f_1\\ f_2\\ \vdots\\ f_N \end{bmatrix}}_{f}. \tag{4.4}$$

If $M = N$, then we have the polynomial interpolation problem, where we saw that if the $x_i$ are distinct, then the matrix $V$ is square and nonsingular, and therefore there is a unique solution to the linear system.

However, if $M < N$, then there are more rows than columns in the matrix, and we should recall from linear algebra that in this case, it is very unlikely that there is a vector $a$ that exactly solves $Va = f$. However, we can consider trying to find a vector $a$ such that
$$Va \approx f.$$
We need to define what "approximate" means in the language of linear algebra. A natural way to define approximation is to use norms, and to consider finding a vector $a$ that minimizes the norm of the residual vector; that is, we consider
$$\min_a \|Va - f\|$$
where $\|\cdot\|$ is a specified vector norm. As we have seen in Section 3.6.1, there are many vector norms we could use, such as $\|\cdot\|_1$, $\|\cdot\|_2$, $\|\cdot\|_\infty$, etc. The choice of norm will (very likely) give different approximation vectors $a$, and the choice of norm will have an effect on the computational cost. The most commonly used (and computationally easiest) is $\|\cdot\|_2$. Thus, we consider
$$\min_a \|Va - f\|_2 \quad\text{or, equivalently,}\quad \min_a \|Va - f\|_2^2. \tag{4.5}$$
Note that for the polynomial least squares data fitting problem, it is not difficult to show that
$$\|Va - f\|_2^2 = \sum_{i=0}^{N} \bigl(p_M(x_i) - f_i\bigr)^2,$$
and thus the polynomial least squares data fitting problem is equivalent to the linear algebra approximation problem (4.5). We will refer to (4.5) as a linear algebra least squares problem.

What is even better is that this linear algebra formulation can be used for any general function $g(x)$, provided that $g(x)$ can be written as a linear combination of $M+1$ given basis functions. That is, consider the problem:
• Given data points $\{(x_i, f_i)\}_{i=0}^{N}$, a set of basis functions $\{\phi_0(x), \phi_1(x), \ldots, \phi_M(x)\}$, and
$$g(x) = a_0\phi_0(x) + a_1\phi_1(x) + \cdots + a_M\phi_M(x),$$
• find coefficients $a_0, a_1, \ldots, a_M$ so that $g(x_i) \approx f_i$.

To find the best $a_0, a_1, \ldots, a_M$ using the least squares data fitting approach, we again consider the ideal situation where we can exactly fit the data; that is,
$$g(x_i) = f_i, \quad i = 0, 1, 2, \ldots, N.$$
These equations can be written in matrix-vector form as $Wa = f$:
$$\underbrace{\begin{bmatrix} \phi_0(x_0) & \phi_1(x_0) & \cdots & \phi_M(x_0)\\ \phi_0(x_1) & \phi_1(x_1) & \cdots & \phi_M(x_1)\\ \phi_0(x_2) & \phi_1(x_2) & \cdots & \phi_M(x_2)\\ \vdots & \vdots & & \vdots\\ \phi_0(x_N) & \phi_1(x_N) & \cdots & \phi_M(x_N) \end{bmatrix}}_{W}\;
\underbrace{\begin{bmatrix} a_0\\ a_1\\ \vdots\\ a_M \end{bmatrix}}_{a} =
\underbrace{\begin{bmatrix} f_0\\ f_1\\ f_2\\ \vdots\\ f_N \end{bmatrix}}_{f}.$$
Thus, to find the best least squares fit of the data to the function $g(x)$, we need only solve the linear algebra least squares problem
$$\min_a \|Wa - f\|_2.$$

Example 4.5.1. Suppose we are given the basis functions
$$\phi_0(x) = 1, \quad \phi_1(x) = \sin(\pi x), \quad \phi_2(x) = \cos(\pi x),$$
and suppose we want to find the best least squares fit of the function
$$g(x) = a_0\phi_0(x) + a_1\phi_1(x) + a_2\phi_2(x)$$
to the given data:

    i   x_i   f_i
    0   -1     5
    1   -0.5   3
    2    0     0
    3    0.5   3
    4    1     5

To set up the equivalent linear algebra least squares problem, we start by considering the ideal case where we can get equality for all data:
$$g(x_i) = f_i \;\Rightarrow\; a_0\phi_0(x_i) + a_1\phi_1(x_i) + a_2\phi_2(x_i) = f_i \;\Rightarrow\; a_0 + a_1\sin(\pi x_i) + a_2\cos(\pi x_i) = f_i.$$
That is,
$$\begin{aligned}
g(-1) = 5 &\;\Rightarrow\; a_0 + a_1\sin(-\pi) + a_2\cos(-\pi) = 5\\
g(-0.5) = 3 &\;\Rightarrow\; a_0 + a_1\sin(-\pi/2) + a_2\cos(-\pi/2) = 3\\
g(0) = 0 &\;\Rightarrow\; a_0 + a_1\sin(0) + a_2\cos(0) = 0\\
g(0.5) = 3 &\;\Rightarrow\; a_0 + a_1\sin(\pi/2) + a_2\cos(\pi/2) = 3\\
g(1) = 5 &\;\Rightarrow\; a_0 + a_1\sin(\pi) + a_2\cos(\pi) = 5
\end{aligned}$$
or equivalently in matrix-vector form, $Wa = f$:
$$\begin{bmatrix} 1 & 0 & -1\\ 1 & -1 & 0\\ 1 & 0 & 1\\ 1 & 1 & 0\\ 1 & 0 & -1 \end{bmatrix} \begin{bmatrix} a_0\\ a_1\\ a_2 \end{bmatrix} = \begin{bmatrix} 5\\ 3\\ 0\\ 3\\ 5 \end{bmatrix}.$$
Thus, to find $a_0$, $a_1$, $a_2$, we need to solve the linear algebra least squares problem
$$\min_a \|Wa - f\|_2.$$
There are many ways to solve linear algebra least squares problems, which are discussed in the following subsections.
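In Matlab this example takes only a few lines, because the backslash operator applied to a rectangular system returns the least squares solution (a quick sketch, with W built from the basis functions above):

x = [-1 -0.5 0 0.5 1]';
f = [ 5   3  0  3  5]';
W = [ones(size(x)), sin(pi*x), cos(pi*x)];
a = W \ f                 % least squares coefficients a_0, a_1, a_2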

4.5.1 Linear Algebra Normal Equations

In the previous section we used calculus to derive the normal equations for polynomial and general function least squares data fitting. In this subsection we describe how the normal equations can be found directly from the linear algebra least squares formulation.

To illustrate, consider the polynomial least squares data fitting problem, which we saw has the equivalent linear algebra least squares formulation
$$\min_a \|Va - f\|_2,$$
where
$$V = \begin{bmatrix} 1 & x_0 & \cdots & x_0^M\\ 1 & x_1 & \cdots & x_1^M\\ 1 & x_2 & \cdots & x_2^M\\ \vdots & \vdots & & \vdots\\ 1 & x_N & \cdots & x_N^M \end{bmatrix}, \qquad a = \begin{bmatrix} a_0\\ a_1\\ \vdots\\ a_M \end{bmatrix}, \qquad f = \begin{bmatrix} f_0\\ f_1\\ f_2\\ \vdots\\ f_N \end{bmatrix}.$$
Using the above defined matrix and vectors, it is not difficult to verify (we leave it as an exercise) that the normal equations can be obtained by the simple linear algebra computation
$$V^T V a = V^T f. \tag{4.6}$$
In fact, this simple approach can be used to construct the normal equations for any least squares problem (not just those associated with polynomial data fitting).

It is important to note some properties of the matrix $V^T V$. If $V$ is an $n \times m$ matrix (where, for example, $n = N + 1$ and $m = M + 1$ for polynomial least squares problems), then

• $V^T V$ is a square $m \times m$ matrix,
• if the columns of $V$ are linearly independent (i.e. $V$ has full column rank), then
  – $V^T V$ is nonsingular, and also
  – $V^T V$ is symmetric and positive definite (SPD).

Thus, if $V$ has full column rank, there is always a unique solution to the least squares problem. Moreover, because $V^T V$ is SPD, we can use the Cholesky factorization to efficiently solve (4.6).
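A small sketch makes the connection concrete; for a well-conditioned $V$ the normal-equations solution and Matlab's backslash (QR-based) least squares solution agree to rounding error:

x = [1 3 4 5]';   f = [2 4 3 1]';   % data from Example 4.4.1
V = [ones(size(x)), x];             % columns 1, x: fitting a_0 + a_1*x
a_ne = (V'*V) \ (V'*f);             % normal equations (4.6)
a_qr = V \ f;                       % least squares via QR (backslash)
norm(a_ne - a_qr)                   % tiny for this well-conditioned V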

4.5.2 Least Squares and QR Factorization

While the normal equations formulation is a convenient way to solve least squares problems (especially when doing hand calculations for small problems), it is not the best approach when $V$ is ill-conditioned. Instead, least squares problems are generally solved using the QR factorization.

Recall that if $V \in \mathbb{R}^{n\times m}$, $n > m$, and $\mathrm{rank}(V) = m$, then we can compute the factorization
$$V = QR$$
where $Q \in \mathbb{R}^{n\times n}$ is an orthogonal matrix (that is, $Q^T Q = I$) and $R \in \mathbb{R}^{n\times m}$ is an upper triangular matrix. Also recall that if $Q \in \mathbb{R}^{n\times n}$ is an orthogonal matrix and $r \in \mathbb{R}^n$ then
$$\|Q^T r\|_2 = \|r\|_2.$$
Using this property in the linear algebra least squares problem, observe that
$$\begin{aligned}
\|Va - f\|_2^2 &= \|QRa - f\|_2^2\\
&= \|Q^T(QRa - f)\|_2^2\\
&= \|Q^T Q R a - Q^T f\|_2^2\\
&= \|Ra - Q^T f\|_2^2\\
&= \left\|\begin{bmatrix} R_m\\ 0 \end{bmatrix} a - \begin{bmatrix} b\\ c \end{bmatrix}\right\|_2^2\\
&= \|R_m a - b\|_2^2 + \|c\|_2^2,
\end{aligned}$$
where we have used the notation

• $R = \begin{bmatrix} R_m\\ 0 \end{bmatrix}$, where $R_m \in \mathbb{R}^{m\times m}$ is upper triangular,
• $b$ is a vector containing the first $m$ entries of the vector $Q^T f$,
• $c$ is a vector containing the bottom $n - m$ entries of the vector $Q^T f$.

Thus, in order to solve
$$\min_a \|Va - f\|_2$$
we need only solve
$$\min_a \|R_m a - b\|_2.$$
Since $R_m$ is a square, upper triangular matrix, this latter minimization problem can easily be done by using backward substitution to compute the unique solution of
$$R_m a = b.$$
The singular value decomposition (SVD) can also be used to solve linear algebra least squares problems; the approach is similar to that done with the QR factorization, but because the SVD is much more expensive to compute, the QR factorization is the preferred approach in most applications.
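In Matlab, the steps above correspond to a few calls (a sketch; qr(V,0) is the built-in "economy size" factorization, which returns just the first m columns of Q and the square upper triangular block R_m, which is all the solve needs):

x = [1 3 4 5]';   f = [2 4 3 1]';
V = [ones(size(x)), x];
[Q, R] = qr(V, 0);      % economy QR: Q is n-by-m, R = R_m is m-by-m
b = Q' * f;             % the first m entries of Q^T f
a = R \ b               % backward substitution solves R_m * a = b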

4.6 Matlab Notes


Matlab has several functions designed specifically for manipulating polynomials, and for curve
fitting. Some functions that are relevant to the topics discussed in this chapter include:

polyfit used to create the coefficients of an interpolating or least squares polynomial

polyval used to evaluate a polynomial

spline used to create, and evaluate, an interpolatory cubic spline

pchip used to create, and evaluate, a piecewise cubic Hermite interpolating polynomial

interp1 used to create, and evaluate, a variety of piecewise interpolating polynomials,


including linear and cubic splines, and piecewise cubic Hermite polynomials

ppval used to evaluate a piecewise polynomial, such as given by spline, pchip or interp1

In general, these functions assume a canonical representation of the power series form of a polynomial to be
$$p(x) = a_1 x^N + a_2 x^{N-1} + \cdots + a_N x + a_{N+1}.$$
Note that this is slightly different than the notation used in Section 4.1, but in either case, all that is needed to represent the polynomial is a vector of coefficients. Using Matlab's canonical form, the vector representing $p(x)$ is:
$$a = \begin{bmatrix} a_1; & a_2; & \cdots & a_N; & a_{N+1} \end{bmatrix}.$$
Note that although a could be a row or a column vector, we will generally use column vectors.

Example 4.6.1. Consider the polynomial $p(x) = 7x^3 - x^2 + 1.5x - 3$. Then the vector of coefficients that represents this polynomial is given by:
>> a = [7; -1; 1.5; -3];
Similarly, the vector of coefficients that represents the polynomial $p(x) = 7x^5 - x^4 + 1.5x^2 - 3x$ is given by:
>> a = [7; -1; 0; 1.5; -3; 0];
Notice that it is important to explicitly include any zero coefficients when constructing the vector a.

4.6.1 Polynomial Interpolation


In Section 4.1, we constructed interpolating polynomials using three different forms: power series, Newton and Lagrange forms. Matlab's main tool for polynomial interpolation, polyfit, uses the power series form. To understand how this function is implemented, suppose we are given data points
$$(x_1, f_1), (x_2, f_2), \ldots, (x_{N+1}, f_{N+1}).$$
Recall that to find the power series form of the (degree $N$) polynomial that interpolates this data, we need to find the coefficients, $a_i$, of the polynomial
$$p(x) = a_1 x^N + a_2 x^{N-1} + \cdots + a_N x + a_{N+1}$$
such that $p(x_i) = f_i$. That is,
$$\begin{aligned}
p(x_1) = f_1 &\;\Rightarrow\; a_1 x_1^N + a_2 x_1^{N-1} + \cdots + a_N x_1 + a_{N+1} = f_1\\
p(x_2) = f_2 &\;\Rightarrow\; a_1 x_2^N + a_2 x_2^{N-1} + \cdots + a_N x_2 + a_{N+1} = f_2\\
&\;\;\vdots\\
p(x_N) = f_N &\;\Rightarrow\; a_1 x_N^N + a_2 x_N^{N-1} + \cdots + a_N x_N + a_{N+1} = f_N\\
p(x_{N+1}) = f_{N+1} &\;\Rightarrow\; a_1 x_{N+1}^N + a_2 x_{N+1}^{N-1} + \cdots + a_N x_{N+1} + a_{N+1} = f_{N+1}
\end{aligned}$$
or, more precisely, we need to solve the linear system $Va = f$:
$$\begin{bmatrix}
x_1^N & x_1^{N-1} & \cdots & x_1 & 1\\
x_2^N & x_2^{N-1} & \cdots & x_2 & 1\\
\vdots & \vdots & & \vdots & \vdots\\
x_N^N & x_N^{N-1} & \cdots & x_N & 1\\
x_{N+1}^N & x_{N+1}^{N-1} & \cdots & x_{N+1} & 1
\end{bmatrix}
\begin{bmatrix} a_1\\ a_2\\ \vdots\\ a_N\\ a_{N+1} \end{bmatrix} =
\begin{bmatrix} f_1\\ f_2\\ \vdots\\ f_N\\ f_{N+1} \end{bmatrix}.$$
In order to write a Matlab function implementing this approach, we need to:
• Define vectors containing the given data:
  x = [x_1; x_2; · · · x_{N+1}] and f = [f_1; f_2; · · · f_{N+1}]
• Let n = length(x) = N + 1.
• Construct the n × n matrix, V, which can be done one column at a time using the vector x:
  jth column of V = V(:, j) = x .^ (n-j)
• Solve the linear system, V a = f, using Matlab's backslash operator:
  a = V \ f
Putting these steps together, we obtain the following function:

function a = InterpPow1(x, f)
%
% a = InterpPow1(x, f);
%
% Construct the coefficients of a power series representation of the
% polynomial that interpolates the data points (x_i, f_i):
%
% p = a(1)*x^N + a(2)*x^(N-1) + ... + a(N)*x + a(N+1)
%
n = length(x);
V = zeros(n, n);
for j = 1:n
V(:, j) = x .^ (n-j);
end
a = V \ f;

We remark that Matlab provides a function, vander, that can be used to construct the matrix
V from a given vector x. Using vander in place of the first five lines of code in InterpPow1, we
obtain the following function:

function a = InterpPow(x, f)
%
% a = InterpPow(x, f);
%
% Construct the coefficients of a power series representation of the
% polynomial that interpolates the data points (x_i, f_i):
%
% p = a(1)*x^N + a(2)*x^(N-1) + ... + a(N)*x + a(N+1)
%
V = vander(x);
a = V \ f;

Problem 4.6.1. Implement InterpPow, and use it to find the power series form of the polynomial that interpolates the data $(-1, 0)$, $(0, 1)$, $(1, 3)$. Compare the results with that found in Example 4.1.3.

Problem 4.6.2. Implement InterpPow, and use it to find the power series form of the polynomial that interpolates the data $(1, 2)$, $(3, 3)$, $(5, 4)$. Compare the results with what you computed by hand in Problem 4.1.3.

The built-in Matlab function, polyfit, essentially uses the approach outlined above to construct an interpolating polynomial. The basic usage of polyfit is:
a = polyfit(x, f, N)
where N is the degree of the interpolating polynomial. In general, provided the $x_i$ values are distinct, N = length(x) - 1 = length(f) - 1. As we see later, polyfit can be used for polynomial least squares data fitting by choosing a different (usually smaller) value for N.

Once the coefficients are computed, we may want to plot the resulting interpolating polynomial. Recall that to plot any function, including a polynomial, we must first evaluate it at many (e.g., 200) points. Matlab provides a built-in function, polyval, that can be used to evaluate polynomials:
y = polyval(a, x);
Note that polyval requires the first input to be a vector containing the coefficients of the polynomial, and the second input a vector containing the values at which the polynomial is to be evaluated. At this point, though, we should be sure to distinguish between the (relatively few) "data points" used to construct the interpolating polynomial, and the (relatively many) "evaluation points" used for plotting. An example will help to clarify the procedure.

Example 4.6.2. Consider the data points $(-1, 0)$, $(0, 1)$, $(1, 3)$. First, plot the data using (red) circles, and set the axis to an appropriate scale:
x_data = [-1 0 1];
f_data = [0 1 3];
plot(x_data, f_data, 'ro')
axis([-2, 2, -1, 6])
Now we can construct and plot the polynomial that interpolates this data using the following set of Matlab statements:
hold on
a = polyfit(x_data, f_data, length(x_data)-1);
x = linspace(-2,2,200);
y = polyval(a, x);
plot(x, y)
By including labels on the axes, and a legend (see Section 1.3.2):
xlabel('x')
ylabel('y')
legend('Data points','Interpolating polynomial', 'Location', 'NW')
we obtain the plot shown in Fig. 4.12. Note that we use different vectors to distinguish between the given data (x_data and f_data) and the set of points x and values y used to evaluate and plot the resulting interpolating polynomial.

Figure 4.12: Plot generated by the Matlab code in Example 4.6.2.

Although the power series approach usually works well for a small set of data points, one difficulty
that can arise, especially when attempting to interpolate a large set of data, is that the matrix V
may be very ill–conditioned. Recall that an ill–conditioned matrix is close to singular, in which
case large errors can occur when solving V a = f . Thus, if V is ill–conditioned, the computed
polynomial coefficients may be inaccurate. Matlab’s polyfit function checks V and displays a
warning message if it detects it is ill–conditioned. In this case, we can try the alternative calling
sequence of polyfit and polyval:
[a, s, mu] = polyfit(x_data, f_data, length(x_data)-1);
y = polyval(a, x, s, mu);
This forces polyfit to first scale x data (using its mean and standard deviation) before constructing
the matrix V and solving the corresponding linear system. This scaling usually results in a matrix
that is better-conditioned.

Example 4.6.3. Consider the following set of data, obtained from the National Weather Service, http://www.srh.noaa.gov/fwd, which shows average high and low temperatures, total precipitation, and the number of clear days for each month in 2003 for Dallas-Fort Worth, Texas.

Monthly Weather Data, 2003, Dallas-Fort Worth, Texas

    Month        1     2     3     4     5     6     7     8     9    10    11    12
    Avg. High   54.4  54.6  67.1  78.3  85.3  88.7  96.9  97.6  84.1  80.1  68.8  61.1
    Avg. Low    33.0  36.6  45.2  55.6  65.6  69.3  75.7  75.8  64.9  57.4  50.0  38.2
    Precip.     0.22  3.07  0.85  1.90  2.53  5.17  0.08  1.85  3.99  0.78  3.15  0.96
    Clear Days   15    6    10    11     4     9    13    10    11    13     7    18
Suppose we attempt to fit an interpolating polynomial to the average high temperatures. Our first
attempt might use the following set of Matlab commands:
x_data = 1:12;
f_data = [54.4 54.6 67.1 78.3 85.3 88.7 96.9 97.6 84.1 80.1 68.8 61.1];
a = polyfit(x_data, f_data, 11);
x = linspace(1, 12, 200);
y = polyval(a, x);
plot(x_data, f_data, ’ro’)
hold on
plot(x, y)
If we run these commands in Matlab, then a warning message is printed in the command window
indicating that V is ill–conditioned. If we replace the two lines containing polyfit and polyval
with:
[a, s, mu] = polyfit(x_data, f_data, 11);
y = polyval(a, x, s, mu);
the warning no longer occurs. The resulting plot is shown in Fig. 4.13 (we also made use of the
Matlab commands axis, legend, xlabel and ylabel).

Figure 4.13: Interpolating polynomial from Example 4.6.3.

Notice that the polynomial does not appear to provide a good model of the monthly temperature
changes between months 1 and 2, and between months 11 and 12. This is a mild example of the more
serious problem, discussed in Section 4.2, of excessive oscillation of the interpolating polynomial. A
more extreme illustration of this is the following example.

Example 4.6.4. Suppose we wish to construct an interpolating polynomial approximation of the function $f(x) = \sin(x + \sin 2x)$ on the interval $[-\pi/2, 3\pi/2]$. The following Matlab code can be used to construct an interpolating polynomial approximation of $f(x)$ using 11 equally spaced points:

f = @(x) sin(x+sin(2*x));
x_data = linspace(-pi/2, 3*pi/2, 11);
f_data = f(x_data);
a = polyfit(x_data, f_data, 10);
A plot of the resulting polynomial is shown on the left of Fig. 4.14. Notice that, as with Runge's example (Example 4.2), the interpolating polynomial has severe oscillations near the end points of the interval. However, because we have an explicit function that can be evaluated, instead of using equally spaced points, we can choose to use the Chebyshev points. In Matlab these points can be generated as follows:
c = -pi/2; d = 3*pi/2;
N = 10; I = 0:N;
x_data = (c+d)/2 - (d-c)*cos((2*I+1)*pi/(2*N+2))/2;
Notice that I is a vector containing the integers 0, 1, . . . , 10, and that we avoid using a loop to create x_data by making use of Matlab's ability to operate on vectors. Using this x_data, and the corresponding f_data = f(x_data), to construct the interpolating polynomial, we can create the plot shown on the right of Fig. 4.14. We observe from this example that much better approximations can be obtained by using the Chebyshev points instead of equally spaced points.

Figure 4.14: Interpolating polynomials for $f(x) = \sin(x + \sin 2x)$ on the interval $[-\pi/2, 3\pi/2]$. The plot on the left uses 11 equally spaced points, and the plot on the right uses 11 Chebyshev points to generate the interpolating polynomial.

Problem 4.6.3. Consider the data (0, 1), (1, 2), (3, 3) and (5, 4). Construct a plot that
contains the data (as circles) and the polynomial of degree 3 that interpolates this data. You
should use the axis command to make the plot look sufficiently nice.

Problem 4.6.4. Consider the data (1, 1), (3, 1), (4, 1), (5, 1) and (7, 1). Construct a plot
that contains the data (as circles) and the polynomial of degree 4 that interpolates this data.
You should use the axis command to make the plot look sufficiently nice. Do you believe
the curve is a good representation of the data?

Problem 4.6.5. Construct plots that contain the data (as circles) and interpolating polynomials for the following sets of data:

    x_i   f_i        x_i   f_i
    -2     6         -2     5
    -1     3         -1     3
     0     1   and    0     0
     1     3          1     3
     2     6          2     5

Problem 4.6.6. Construct interpolating polynomials through all four sets of weather data
given in Example 4.6.3. Use subplot to show all four plots in the same figure, and use the
title, xlabel and ylabel commands to document the plots. Each plot should show the
data points as circles on the corresponding curves.

Problem 4.6.7. Consider the function given in Example 4.6.4. Write a Matlab script
M–file to create the plots shown in Fig. 4.14. Experiment with using more points to construct
the interpolating polynomial, starting with 11, 12, . . . . At what point does Matlab print
a warning that the polynomial is badly conditioned (both for equally spaced and Chebyshev
points)? Does centering and scaling improve the results?

4.6.2 Chebyshev Polynomials and Series


Matlab does not provide specific functions to construct a Chebyshev representation of the interpolating polynomial. However, it is not difficult to write our own Matlab functions similar to polyfit, spline and pchip. Recall that if $x \in [-1, 1]$, the $j$th Chebyshev polynomial, $T_j(x)$, is defined as
$$T_j(x) = \cos(j \arccos(x)), \qquad j = 0, 1, 2, \ldots$$
We can easily plot Chebyshev polynomials with just a few Matlab commands.

Example 4.6.5. The following set of Matlab commands can be used to plot the 5th Chebyshev
polynomial.

x = linspace(-1, 1, 200);
y = cos( 5*acos(x) );
plot(x, y)

Recall that the Chebyshev form of the interpolating polynomial (for $x \in [-1, 1]$) is
$$p(x) = b_0 T_0(x) + b_1 T_1(x) + \cdots + b_N T_N(x).$$
Suppose we are given data $(x_i, f_i)$, $i = 0, 1, \ldots, N$, and that $-1 \le x_i \le 1$. To construct the Chebyshev form of the interpolating polynomial, we need to determine the coefficients $b_0, b_1, \ldots, b_N$ such that $p(x_i) = f_i$. That is,
$$\begin{aligned}
p(x_0) = f_0 &\;\Rightarrow\; b_0 T_0(x_0) + b_1 T_1(x_0) + \cdots + b_N T_N(x_0) = f_0\\
p(x_1) = f_1 &\;\Rightarrow\; b_0 T_0(x_1) + b_1 T_1(x_1) + \cdots + b_N T_N(x_1) = f_1\\
&\;\;\vdots\\
p(x_N) = f_N &\;\Rightarrow\; b_0 T_0(x_N) + b_1 T_1(x_N) + \cdots + b_N T_N(x_N) = f_N
\end{aligned}$$
or, more precisely, we need to solve the linear system $Tb = f$:
$$\begin{bmatrix}
T_0(x_0) & T_1(x_0) & \cdots & T_N(x_0)\\
T_0(x_1) & T_1(x_1) & \cdots & T_N(x_1)\\
\vdots & \vdots & & \vdots\\
T_0(x_N) & T_1(x_N) & \cdots & T_N(x_N)
\end{bmatrix}
\begin{bmatrix} b_0\\ b_1\\ \vdots\\ b_N \end{bmatrix} =
\begin{bmatrix} f_0\\ f_1\\ \vdots\\ f_N \end{bmatrix}.$$

If the data $x_i$ are not all contained in the interval $[-1, 1]$, then we must perform a variable transformation. For example,
$$\bar{x}_j = \frac{x_j - x_{mx}}{x_{mx} - x_{mn}} + \frac{x_j - x_{mn}}{x_{mx} - x_{mn}} = \frac{2x_j - x_{mx} - x_{mn}}{x_{mx} - x_{mn}},$$
where $x_{mx} = \max(x_i)$ and $x_{mn} = \min(x_i)$. Notice that when $x_j = x_{mn}$, $\bar{x}_j = -1$, and when $x_j = x_{mx}$, $\bar{x}_j = 1$, and therefore $-1 \le \bar{x}_j \le 1$. The matrix $T$ should then be constructed using $\bar{x}_j$. In this case, we need to use the same variable transformation when we evaluate the resulting Chebyshev form of the interpolating polynomial. For this reason, we write a function (such as spline and pchip) that can be used to construct and evaluate the interpolating polynomial.
The basic steps of our function can be outlined as follows:
• The input should be three vectors,
  x_data = vector containing given data, $x_j$.
  f_data = vector containing given data, $f_j$.
  x = vector containing points at which the polynomial is to be evaluated. Note that the values in this vector should be contained in the interval $[x_{mn}, x_{mx}]$.
• Perform a variable transformation on the entries in x_data to obtain $\bar{x}_j$, $-1 \le \bar{x}_j \le 1$, and define a new vector containing the transformed data:
  xx_data = [x̄_0; x̄_1; · · · x̄_N]
• Let n = length(x_data) = N + 1.
• Construct the n × n matrix, T, one column at a time using the recurrence relation for generating the Chebyshev polynomials and the (transformed) vector xx_data:
  column 1 of T = T(:,1) = ones(n,1)
  column 2 of T = T(:,2) = xx_data
  and for j = 3, 4, . . . , n,
  column j of T = T(:, j) = 2*xx_data .* T(:,j-1) - T(:,j-2)
• Compute the coefficients using Matlab's backslash operator:
  b = T \ f_data
• Perform the variable transformation of the entries in the vector x.
• Evaluate the polynomial at the transformed points.
Putting these steps together, we obtain the following function:
function y = chebfit(x_data, f_data, x)
%
% y = chebfit(x_data, f_data, x);
%
% Construct and evaluate a Chebyshev representation of the
% polynomial that interpolates the data points (x_i, f_i):
%
% p = b(1)*T_0(x) + b(2)*T_1(x) + ... + b(n)*T_N(x)
%
% where n = N+1, and T_j(x) = cos(j*acos(x)) is the jth Chebyshev
% polynomial.
%
n = length(x_data);
xmax = max(x_data);
xmin = min(x_data);
xx_data = (2*x_data - xmax - xmin)/(xmax - xmin);
T = zeros(n, n);
T(:,1) = ones(n,1);
T(:,2) = xx_data;
for j = 3:n
    T(:,j) = 2*xx_data.*T(:,j-1) - T(:,j-2);
end
b = T \ f_data;
xx = (2*x - xmax - xmin)/(xmax - xmin);
y = zeros(size(x));
for j = 1:n
    y = y + b(j)*cos( (j-1)*acos(xx) );
end

Example 4.6.6. Consider again the average high temperatures from Example 4.6.3. Using chebfit,
for example with the following Matlab commands,
x_data = (1:12)’;
f_data = [54.4 54.6 67.1 78.3 85.3 88.7 96.9 97.6 84.1 80.1 68.8 61.1]’;
x = linspace(1, 12, 200);
y = chebfit(x_data, f_data, x);
plot(x_data, f_data, ’ro’)
hold on
plot(x, y)
we obtain the plot shown in Fig. 4.15. Observe the placement of transpose operations on the
statements to construct the vectors x data and f data, which ensure these are column vectors. It
is important to make sure we use consistent (and legal) linear algebra operations, and the code in
chebfit expects x data and f data to be column vectors. Although it is not shown in the above
code, as with previous examples, we also used the Matlab commands axis, legend, xlabel and
ylabel to make the plot look a bit nicer. Because the polynomial that interpolates this data is
unique, it should not be surprising that the plot obtained using chebfit looks identical to that
obtained using polyfit. (When there is a large amount of data, the plots may differ slightly due to
the effects of computational error and the possible difference in conditioning of the linear systems.)
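The remark about conditioning is easy to check numerically. The following short experiment (our
own sketch, not part of chebfit) compares the condition numbers of the power series (Vandermonde)
matrix and the Chebyshev matrix built on 21 equally spaced points in [−1, 1]; the exact values
depend on the points chosen, but cond(V) is typically many orders of magnitude larger than cond(T).

x = linspace(-1, 1, 21)';              % 21 equally spaced points in [-1,1]
V = fliplr(vander(x));                 % power series matrix, V(:,j) = x.^(j-1)
T = ones(21, 21);                      % Chebyshev matrix via the recurrence
T(:,2) = x;
for j = 3:21
    T(:,j) = 2*x.*T(:,j-1) - T(:,j-2);
end
disp([cond(V), cond(T)])               % cond(V) is typically far larger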

Problem 4.6.8. Write a Matlab script M–file that will generate a figure containing the
jth Chebyshev polynomials, j = 1, 3, 5, 7. Use subplot to put all plots in one figure, and
title to document the various plots.

Figure 4.15: Interpolating polynomial for the average high temperature data given in Example 4.6.3,
using the Chebyshev form.

Problem 4.6.9. Consider the data (0, 1), (1, 2), (3, 3) and (5, 4). Construct a plot that
contains the data (as circles) and the polynomial of degree 3 that interpolates this data using
the function chebfit. You should use the axis command to make the plot look sufficiently
nice.

Problem 4.6.10. Consider the data (1, 1), (3, 1), (4, 1), (5, 1) and (7, 1). Construct a
plot that contains the data (as circles) and the polynomial of degree 4 that interpolates this
data using the function chebfit. You should use the axis command to make the plot look
sufficiently nice. Do you believe the curve is a good representation of the data?

Problem 4.6.11. Construct plots that contain the data (as circles) and interpolating poly-
nomials, using the function chebfit, for the following sets of data:

xi    fi           xi    fi
−2     6           −2    −5
−1     3           −1    −3
 0     1    and     0     0
 1     3            1     3
 2     6            2     5

Problem 4.6.12. Construct interpolating polynomials, using the function chebfit, through
all four sets of weather data given in Example 4.6.3. Use subplot to show all four plots in
the same figure, and use the title, xlabel and ylabel commands to document the plots.
Each plot should show the data points as circles on the corresponding curves.

4.6.3 Polynomial Splines


Polynomial splines help to avoid excessive oscillations by fitting the data using a collection of low
degree polynomials. We’ve actually already used linear polynomial splines to connect data via
Matlab’s plot command. But we can create this linear spline more explicitly using the interp1
function. The basic calling syntax is given by:

y = interp1(x_data, f_data, x);



where

• x data and f data are vectors containing the given data points,
• x is a vector containing values at which the linear spline is to be evaluated (e.g., for plotting
the spline), and
• y is a vector containing the values S(x) of the linear spline at the points in x.

The following example illustrates how to use interp1 to construct, evaluate, and plot a linear spline
interpolating function.

Example 4.6.7. Consider the average high temperatures from Example 4.6.3. The following set of
Matlab commands can be used to construct a linear spline interpolating function of this data:

x_data = 1:12;
f_data = [54.4 54.6 67.1 78.3 85.3 88.7 96.9 97.6 84.1 80.1 68.8 61.1];
x = linspace(1, 12, 200);
y = interp1(x_data, f_data, x);
plot(x_data, f_data, 'ro')
hold on
plot(x, y)
The resulting plot is shown in Fig. 4.16. Note that we also used the Matlab commands axis,
legend, xlabel and ylabel to make the plot look a bit nicer.


Figure 4.16: Linear polynomial spline interpolating function for average high temperature data given
in Example 4.6.3

We can obtain a smoother curve by using a higher degree spline. The primary Matlab function
for this purpose, spline, constructs a cubic spline interpolating function. The basic calling syntax,
which uses, by default, not-a-knot end conditions, is as follows:
y = spline(x_data, f_data, x);

where x data, f data, x and y are defined as for interp1. The following example illustrates how to
use spline to construct, evaluate, and plot a cubic spline interpolating function.

Example 4.6.8. Consider again the average high temperatures from Example 4.6.3. The following
set of Matlab commands can be used to construct a cubic spline interpolating function of this
data:
x_data = 1:12;
f_data = [54.4 54.6 67.1 78.3 85.3 88.7 96.9 97.6 84.1 80.1 68.8 61.1];
x = linspace(1, 12, 200);
y = spline(x_data, f_data, x);
plot(x_data, f_data, 'ro')
hold on
plot(x, y)
The resulting plot is shown in Fig. 4.17. Note that we also used the Matlab commands axis,
legend, xlabel, ylabel and title to make the plot look a bit nicer.


Figure 4.17: Cubic polynomial spline interpolating function, using not-a-knot end conditions, for
average high temperature data given in Example 4.6.3

It should be noted that other end conditions can be used, provided they can be specified by the slope of
the spline at the end points. In this case, the desired slope values at the end points are attached to
the beginning and end of the vector f data. This is illustrated in the next example.

Example 4.6.9. Consider again the average high temperatures from Example 4.6.3. The so-called
clamped end conditions assume the slope of the spline is 0 at the end points. The following set of
Matlab commands can be used to construct a cubic spline interpolating function with clamped end
conditions:
x_data = 1:12;
f_data = [54.4 54.6 67.1 78.3 85.3 88.7 96.9 97.6 84.1 80.1 68.8 61.1];
x = linspace(1, 12, 200);
y = spline(x_data, [0, f_data, 0], x);
plot(x_data, f_data, 'ro')
hold on
plot(x, y)
Observe that, when calling spline, we replaced f data with the vector [0, f data, 0]. The first
and last values in this augmented vector specify the desired slope of the spline (in this case zero)
at the end points. The resulting plot is shown in Fig. 4.18. Note that we also used the Matlab
commands axis, legend, xlabel, ylabel and title to make the plot look a bit nicer.

Figure 4.18: Cubic polynomial spline interpolating function, using clamped end conditions, for
average high temperature data given in Example 4.6.3

In all of the previous examples using interp1 and spline, we construct and evaluate the spline
in one command. Recall that for polynomial interpolation we used polyfit and polyval in a two
step process. It is possible to use a two step process with both linear and cubic splines as well. For
example, when using spline, we can execute the following commands:
S = spline(x_data, f_data);
y = ppval(S, x);
where x is a set of values at which the spline is to be evaluated. If we specify only two input
arguments to spline, it returns a structure array that defines the piecewise cubic polynomial, S(x),
and the function ppval is then used to evaluate S(x).
In general, if all we want to do is evaluate and/or plot S(x), it is not necessary to use this two
step process. However, there may be situations for which we must access explicitly the polynomial
pieces. For example, we could find the maximum and minimum (i.e., critical points) of S(x) by
explicitly computing S′(x). Another example is given in Section 5.3 where the cubic spline is used
to integrate tabular data.
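As a sketch of how the pieces might be accessed (our own illustration; the variable names are chosen
for this example), we can extract the polynomial pieces of S(x) with unmkpp, differentiate each cubic
piece coefficient-wise, and rebuild S′(x) in piecewise polynomial form with mkpp:

S = spline(x_data, f_data);            % pp-form of the cubic spline
[breaks, coefs, npieces, order] = unmkpp(S);
% Row i of coefs holds the local coefficients [c3 c2 c1 c0] of piece i;
% the derivative of c3*s^3 + c2*s^2 + c1*s + c0 is 3*c3*s^2 + 2*c2*s + c1.
dcoefs = coefs(:, 1:order-1) .* repmat(order-1:-1:1, npieces, 1);
dS = mkpp(breaks, dcoefs);             % pp-form of S'(x)
dy = ppval(dS, linspace(min(x_data), max(x_data), 1000));
% Sign changes in dy bracket the critical points of S(x).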
We end this subsection by mentioning that Matlab has another function, pchip, that can be
used to construct a piecewise Hermite cubic interpolating polynomial, H(x). In interpolation, the
name Hermite is usually attached to techniques that use specific slope information in the construction
of the polynomial pieces. Although pchip constructs a piecewise cubic polynomial, strictly speaking,
it is not a spline because the second derivative may be discontinuous at the knots. However, the
first derivative is continuous. The slope information used to construct H(x) is determined from the
data. In particular, if there are intervals where the data are monotonic, then so is H(x), and at
points where the data has a local extremum, so does H(x). We can use pchip exactly as we use
spline; that is, either one call to construct and evaluate:
y = pchip(x_data, f_data, x);
or via a two step approach to construct and then evaluate:
H = pchip(x_data, f_data);
y = ppval(H, x);
where x is a set of values at which H(x) is to be evaluated.

Example 4.6.10. Because the piecewise cubic polynomial constructed by pchip will usually have
discontinuous second derivatives at the data points, it will be less smooth and a little less accurate for

smooth functions than the cubic spline interpolating the same data. However, forcing monotonicity
when it is present in the data has the effect of preventing oscillations and overshoot which can occur
with a spline. To illustrate the point we consider a somewhat pathological example of non-smooth
data from the Matlab Help pages. The following Matlab code fits first a cubic spline then a
piecewise cubic monotonic polynomial to the data, which is monotonic.
x_data = -3:3;
f_data = [-1 -1 -1 0 1 1 1];
x = linspace(-3, 3, 200);
s = spline(x_data, f_data, x);
p = pchip(x_data, f_data, x);
plot(x_data, f_data, 'ro')
hold on
plot(x, s, 'k--')
plot(x, p, 'b-')
legend('Data', 'Cubic spline', 'PCHIP interpolation', 'Location', 'NW')
The resulting plot is given in Fig. 4.19. We observe both oscillations and overshoot in the cubic
spline fit and monotonic behavior in the piecewise cubic monotonic polynomial fit.


Figure 4.19: Comparison of a monotonic piecewise cubic polynomial fit (using Matlab's pchip
function) and a cubic spline fit (using Matlab's spline function) to monotonic data.

Example 4.6.11. Consider again the average high temperatures from Example 4.6.3. The follow-
ing set of Matlab commands can be used to construct a piecewise cubic Hermite interpolating
polynomial through the given data:
x_data = 1:12;
f_data = [54.4 54.6 67.1 78.3 85.3 88.7 96.9 97.6 84.1 80.1 68.8 61.1];
x = linspace(1, 12, 200);
y = pchip(x_data, f_data, x);
plot(x_data, f_data, 'ro')
hold on
plot(x, y)
The resulting plot is shown in Fig. 4.20. As with previous examples, we also used the Matlab
commands axis, legend, xlabel and ylabel to make the plot more understandable. Observe
that fitting the data using pchip avoids oscillations and overshoot that can occur with the cubic
spline fit, and that the monotonic behavior of the data is preserved.

Figure 4.20: Piecewise cubic Hermite interpolating polynomial, for average high temperature data
given in Example 4.6.3

Notice that the curve generated by pchip is smoother than the piecewise linear spline, but
because pchip does not guarantee continuity of the second derivative, it is not as smooth as the
cubic spline interpolating function. In addition, the flat part on the left, between the months January
and February, is probably not an accurate representation of the actual temperatures for this time
period. Similarly, the maximum temperature at the August data point (which is enforced by the
pchip construction) may not be accurate; more likely is that the maximum temperature occurs some
time between July and August. We mention these things to emphasize that it is often extremely
difficult to produce an accurate, physically reasonable fit to data.

Problem 4.6.13. Consider the data (0, 1), (1, 2), (3, 3) and (5, 4). Use subplot to make a
figure with 4 plots: a linear spline, a cubic spline with not-a-knot end conditions, a cubic
spline with clamped end conditions, and a piecewise cubic Hermite interpolating polynomial
fit of the data. Each plot should also show the data points as circles on the curve, and should
have a title indicating which of the four methods was used.

Problem 4.6.14. Consider the following sets of data:

xi    fi           xi    fi
−2     6           −2    −5
−1     3           −1    −3
 0     1    and     0     0
 1     3            1     3
 2     6            2     5

Make two figures, each with four plots: a linear spline, a cubic spline with not-a-knot end
conditions, a cubic spline with clamped end conditions, and a piecewise cubic Hermite in-
terpolating polynomial fit of the data. Each plot should also show the data points as circles
on the curve, and should have a title indicating which of the four methods was used.

Problem 4.6.15. Consider the four sets of weather data given in Example 4.6.3. Make four
figures, each with four plots: a linear spline, a cubic spline with not-a-knot end conditions,
a cubic spline with clamped end conditions, and a piecewise cubic Hermite interpolating
polynomial fit of the data. Each plot should also show the data points as circles on the
curve, and should have a title indicating which of the four methods was used.

Problem 4.6.16. Notice that there is an obvious "wobble" at the left end of the plot given
in Fig. 4.17. Repeat the previous problem, but this time shift the data so that the year begins
with April. Does this help eliminate the wobble? Does shifting the data have an effect on
any of the other plots?

4.6.4 Interpolating to Periodic Data


The methods considered thus far, and those to be considered later, are designed to fit general
data. When the data has a specific property, it behoves us to exploit that property. So, in the
frequently occurring case where the data is thought to represent a periodic function, we should fit it
using a periodic function, in particular a trigonometric series whose term frequencies are matched
to the period of the data.
Given a set of data that is supposed to represent a periodic function sampled at equal spacing
across the period of the function, Matlab's function interpft produces interpolated values on a
finer, user-specified, equally spaced mesh. First, interpft internally uses a discrete Fourier transform
to produce a trigonometric series interpolating the data. Then, it uses an inverse discrete Fourier
transform to evaluate the trigonometric series on the finer mesh, over the same interval. (At no
stage does the user see the trigonometric series.)
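To make the mechanism concrete, the following is a minimal sketch of the idea behind interpft
for an even number n of samples (stored in a row vector) interpolated to m > n points; the actual
implementation also handles odd lengths and other details. Zero-pad the middle of the discrete
Fourier transform, splitting the coefficient at the Nyquist frequency, then invert and rescale.

f_data = sin(pi*(0:9)/10);             % n = 10 equally spaced samples
n = length(f_data);   m = 2*n;         % interpolate to m = 20 points
F = fft(f_data);
Fpad = [F(1:n/2), F(n/2+1)/2, zeros(1, m-n-1), F(n/2+1)/2, F(n/2+2:n)];
y = real(ifft(Fpad)) * (m/n);          % should agree with interpft(f_data, m)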

Example 4.6.12. Consider sampling f(x) = sin(πx) at x = i/10, i = 0, 1, . . . , 9. The data is


assumed periodic so we do not evaluate f (x) for i = 10. We use interpft to interpolate to this
data at half the mesh size and plot the results using the commands:
x_data = (0:1:9)/10;
f_data = sin(pi*x_data);
x = (0:1:19)/20;
y = interpft(f_data, 20);
plot(x_data, f_data, 'ro')
hold on
plot(x, y, '+')

The plot is given on the left in Figure 4.21 where the interpolated and data values coincide at the
original mesh points.
If we use instead the nonperiodic function f(x) = sin(πx²) at the same points, as follows:
x_data = (0:1:9)/10;
f_data = sin(pi*(x_data.^2));
x = (0:1:19)/20;
y = interpft(f_data, 20);
plot(x_data, f_data, 'ro')
hold on
plot(x, y, '+')

we obtain the plot on the right in Figure 4.21. Note the behavior near x = 0: the plotted function
is periodic, so the negative interpolated values there arise from fitting a periodic function to data
that comes from a non-periodic function.

4.6.5 Least Squares


Using least squares to compute a polynomial that approximately fits given data is very similar to the
interpolation problem. Matlab implementations can be developed by mimicking what was done in
Section 4.6.1, except we replace the interpolation condition p(x_i) = f_i with p(x_i) ≈ f_i. For example,
suppose we want to find a polynomial of degree M that approximates the N + 1 data points (x_i, f_i),
i = 0, 1, . . . , N.

Figure 4.21: Trigonometric fit to data using Matlab’s interpft function. The left plot fits to
periodic data, and the right plot fits to non-periodic data.

If we use the power series form, then we end up with the (N + 1) × (M + 1) linear system
\[
V a \approx f.
\]
The Normal equations approach to constructing the least squares fit of the data is simply the
(M + 1) × (M + 1) linear system
\[
V^T V a = V^T f.
\]
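As a sketch (with a hypothetical function name; it is not built in), we can build V column by
column and let Matlab's backslash operator, discussed below, compute the least squares solution.
The coefficients are ordered a(1) + a(2)x + · · · + a(M+1)x^M.

function a = lspolyfit(x_data, f_data, M)
%
% a = lspolyfit(x_data, f_data, M);
%
% Least squares fit of the data (x_i, f_i) by a polynomial of
% degree M in power series form. x_data and f_data are assumed
% to be column vectors.
%
n = length(x_data);
V = ones(n, M+1);
for j = 2:M+1
    V(:,j) = V(:,j-1) .* x_data;       % column j holds x.^(j-1)
end
a = V \ f_data;                        % least squares solution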

Example 4.6.13. Consider the data

xi 1 3 4 5
fi 2 4 3 1

Suppose we want to find a least squares fit to this data by a line: p(x) = a0 + a1 x. Then using the
conditions p(x_i) ≈ f_i we obtain
\[
\begin{aligned}
p(x_0) \approx f_0 \;&\Rightarrow\; a_0 + a_1 \approx 2\\
p(x_1) \approx f_1 \;&\Rightarrow\; a_0 + 3a_1 \approx 4\\
p(x_2) \approx f_2 \;&\Rightarrow\; a_0 + 4a_1 \approx 3\\
p(x_3) \approx f_3 \;&\Rightarrow\; a_0 + 5a_1 \approx 1
\end{aligned}
\qquad\Rightarrow\qquad
\begin{bmatrix} 1 & 1\\ 1 & 3\\ 1 & 4\\ 1 & 5 \end{bmatrix}
\begin{bmatrix} a_0\\ a_1 \end{bmatrix}
\approx
\begin{bmatrix} 2\\ 4\\ 3\\ 1 \end{bmatrix}.
\]

To find a least squares solution, we solve the Normal equations
\[
V^T V a = V^T f \;\Rightarrow\;
\begin{bmatrix} 4 & 13\\ 13 & 51 \end{bmatrix}
\begin{bmatrix} a_0\\ a_1 \end{bmatrix}
=
\begin{bmatrix} 10\\ 31 \end{bmatrix}.
\]

Notice that this linear system is equivalent to what was obtained in Example 4.4.1.
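This small example is easy to check directly in Matlab (a quick sketch); the Normal equations
solution should agree, to rounding error, with the QR-based least squares solution computed by the
backslash operator discussed next:

V = [1 1; 1 3; 1 4; 1 5];
f = [2; 4; 3; 1];
a_ne = (V'*V) \ (V'*f);                % solve the Normal equations
a_ls = V \ f;                          % least squares via backslash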

Matlab’s backslash operator can be used to compute least squares approximations of linear
systems V a ⇡ f using the simple command
a = V \ f
In general, when the backslash operator is used, Matlab checks first to see if the matrix is square
(number of rows = number of columns). If the matrix is square, it will use Gaussian elimination

with partial pivoting by rows to solve V a = f . If the matrix is not square, it will compute a least
squares solution, which is equivalent to solving the Normal equations. Matlab does not use the
Normal equations, but instead a generally more accurate approach based on a QR decomposition of
the rectangular matrix V . The details are beyond the scope of this book, but the important point is
that methods used for polynomial interpolation can be used to construct polynomial least squares
fits to data. Thus, the main built-in Matlab functions for polynomial least squares data fitting are
polyfit and polyval. In particular, we can construct the coefficients of the polynomial using
a = polyfit(x_data, f_data, M)
where M is the degree of the polynomial. In the case of interpolation, M = N = length(x data)-1,
but in least squares, M is generally less than N.

Example 4.6.14. Consider the data


xi 1 3 4 5
fi 2 4 3 1
To find a least squares fit of this data by a line (i.e., polynomial of degree M = 1), we use the
Matlab commands:
x_data = [1 3 4 5];
f_data = [2 4 3 1];
a = polyfit(x_data, f_data, 1);
As with interpolating polynomials, we can use polyval to evaluate the polynomial, and plot to
generate a figure:
x = linspace(1, 5, 200);
y = polyval(a, x);
plot(x_data, f_data, 'ro');
hold on
plot(x, y)
If we wanted to construct a least squares fit of the data by a quadratic polynomial, we simply replace
the polyfit statement with:
a = polyfit(x_data, f_data, 2);
Fig. 4.22 shows the data, and the linear and quadratic polynomial least squares fits.

Note that it is possible to modify chebfit so that it can compute a least squares fit of the data;
see Problem 4.6.17.
It may be the case that functions other than polynomials are more appropriate to use in data
fitting applications. The next two examples illustrate how to use least squares to fit data to more
general functions.

Example 4.6.15. Suppose it is known that data collected from an experiment, (xi , fi ), can be
represented well by a sinusoidal function of the form
g(x) = a1 + a2 sin(x) + a3 cos(x).
To find the coefficients, a1 , a2 , a3 , so that g(x) is a least squares fit of the data, we use the criteria
g(x_i) ≈ f_i. That is,
\[
\begin{aligned}
g(x_1) \approx f_1 \;&\Rightarrow\; a_1 + a_2\sin(x_1) + a_3\cos(x_1) \approx f_1\\
g(x_2) \approx f_2 \;&\Rightarrow\; a_1 + a_2\sin(x_2) + a_3\cos(x_2) \approx f_2\\
&\;\vdots\\
g(x_n) \approx f_n \;&\Rightarrow\; a_1 + a_2\sin(x_n) + a_3\cos(x_n) \approx f_n
\end{aligned}
\]

Figure 4.22: Linear and quadratic polynomial least squares fit of data given in Example 4.6.14

which can be written in matrix-vector form as
\[
\begin{bmatrix}
1 & \sin(x_1) & \cos(x_1)\\
1 & \sin(x_2) & \cos(x_2)\\
\vdots & \vdots & \vdots\\
1 & \sin(x_n) & \cos(x_n)
\end{bmatrix}
\begin{bmatrix} a_1\\ a_2\\ a_3 \end{bmatrix}
\approx
\begin{bmatrix} f_1\\ f_2\\ \vdots\\ f_n \end{bmatrix}.
\]

In Matlab, the least squares solution can then be found using the backslash operator which,
assuming more than 3 data points, uses a QR decomposition of the matrix. A Matlab function to
compute the coefficients could be written as follows:

function a = sinfit(x_data, f_data)


%
% a = sinfit(x_data, f_data);
%
% Given a set of data, (x_i, f_i), this function computes the
% coefficients of
% g(x) = a(1) + a(2)*sin(x) + a(3)*cos(x)
% that best fits the data using least squares.
%
n = length(x_data);
W = [ones(n, 1), sin(x_data), cos(x_data)];
a = W \ f_data;
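A short usage sketch (with made-up data; note that sinfit, as written, expects x_data and f_data
to be column vectors):

x_data = linspace(0, 2*pi, 25)';
f_data = 3 + 2*sin(x_data) - cos(x_data) + 0.1*randn(25, 1);
a = sinfit(x_data, f_data);
x = linspace(0, 2*pi, 200);
plot(x_data, f_data, 'ro', x, a(1) + a(2)*sin(x) + a(3)*cos(x))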

Example 4.6.16. Suppose it is known that data collected from an experiment, (xi , fi ), can be
represented well by an exponential function of the form

\[
g(x) = a_1 e^{a_2 x}.
\]

To find the coefficients, a_1, a_2, so that g(x) is a least squares fit of the data, we use the criteria
g(x_i) ≈ f_i. The difficulty in this problem is that g(x) is not linear in its coefficients. But we can

use logarithms, and their properties, to rewrite the problem as follows:

\[
\begin{aligned}
g(x) &= a_1 e^{a_2 x}\\
\ln(g(x)) &= \ln\!\left(a_1 e^{a_2 x}\right)\\
&= \ln(a_1) + a_2 x\\
&= \hat{a}_1 + a_2 x, \quad\text{where } \hat{a}_1 = \ln(a_1), \text{ or } a_1 = e^{\hat{a}_1}.
\end{aligned}
\]

With this transformation, we would like ln(g(x_i)) ≈ ln(f_i), or
\[
\begin{aligned}
\ln(g(x_1)) \approx \ln(f_1) \;&\Rightarrow\; \hat{a}_1 + a_2 x_1 \approx \ln(f_1)\\
\ln(g(x_2)) \approx \ln(f_2) \;&\Rightarrow\; \hat{a}_1 + a_2 x_2 \approx \ln(f_2)\\
&\;\vdots\\
\ln(g(x_n)) \approx \ln(f_n) \;&\Rightarrow\; \hat{a}_1 + a_2 x_n \approx \ln(f_n)
\end{aligned}
\]

which can be written in matrix-vector form as
\[
\begin{bmatrix}
1 & x_1\\
1 & x_2\\
\vdots & \vdots\\
1 & x_n
\end{bmatrix}
\begin{bmatrix} \hat{a}_1\\ a_2 \end{bmatrix}
\approx
\begin{bmatrix} \ln(f_1)\\ \ln(f_2)\\ \vdots\\ \ln(f_n) \end{bmatrix}.
\]

In Matlab, the least squares solution can then be found using the backslash operator. A Matlab
function to compute the coefficients could be written as follows:

function a = expfit(x_data, f_data)


%
% a = expfit(x_data, f_data);
%
% Given a set of data, (x_i, f_i), this function computes the
% coefficients of
% g(x) = a(1)*exp(a(2)*x)
% that best fits the data using least squares.
%
n = length(x_data);
W = [ones(n, 1), x_data];
a = W \ log(f_data);
a(1) = exp(a(1));

Here we use the built-in Matlab functions log and exp to compute the natural logarithm and the
natural exponential, respectively.
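A short usage sketch (with made-up data; as with sinfit, the vectors are assumed to be columns,
and the data values must be positive for the logarithm to be defined):

x_data = (0:0.25:2)';
f_data = 2*exp(0.8*x_data) .* (1 + 0.05*randn(size(x_data)));
a = expfit(x_data, f_data);
x = linspace(0, 2, 200);
plot(x_data, f_data, 'ro', x, a(1)*exp(a(2)*x))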

Problem 4.6.17. Modify the function chebfit so that it computes a least squares fit to the
data. The modified function should have an additional input value specifying the degree of
the polynomial. Use the data in Example 4.6.14 to test your function. In particular, produce
a plot similar to the one given in Fig. 4.22.
