
Lecture notes for TMA4125/4130/4135 Mathematics 4N/D

Polynomial interpolation: Error theory

Anne Kværnø (modified by André Massing)

Sep 3, 2021

The Python codes for this note are given in polynomialinterpolation.py.

1 Theory
Given some function f ∈ C[a, b]. Choose n + 1 distinct nodes in [a, b] and let pn (x) ∈ Pn satisfy the
interpolation condition
pn (xi ) = f (xi ), i = 0, . . . , n.

What can be said about the error e(x) := f (x) − pn (x)?


The goal of this section is to cover a few theoretical aspects and to answer two natural
questions:
• If the polynomial is used to approximate a function, can we find an expression for the error?
• How can the error be made as small as possible?
Let us start with a numerical experiment, to get a feeling for what to expect.
Example 1.1. Interpolation of sin x.
Let f (x) = sin(x), x ∈ [0, 2π]. Choose n + 1 equidistributed nodes, that is xi = ih, i = 0, . . . , n, and
h = 2π/n. Calculate the interpolation polynomial by use of the functions cardinal and lagrange. Plot
the error en (x) = f (x) − pn (x) for different values of n. Choose n = 4, 8, 16 and 32. Notice how the error
is distributed over the interval, and find the maximum error maxx∈[a,b] |en (x)| for each n.

# Define the function; cardinal and lagrange are provided in
# polynomialinterpolation.py
import numpy as np
import matplotlib.pyplot as plt
from numpy import pi
from polynomialinterpolation import cardinal, lagrange


def f(x):
    return np.sin(x)

# Set the interval
a, b = 0, 2*pi                  # The interpolation interval
x = np.linspace(a, b, 101)      # The 'x-axis'

# Set the interpolation points
n = 8                           # Number of interpolation points is n+1
xdata = np.linspace(a, b, n+1)  # Equidistributed nodes (can be changed)
ydata = f(xdata)

# Evaluate the interpolation polynomial in the x-values
l = cardinal(xdata, x)
p = lagrange(ydata, l)

# Plot f(x) and p(x) and the interpolation points
plt.subplot(2, 1, 1)
plt.plot(x, f(x), x, p, xdata, ydata, 'o')
plt.legend(['f(x)', 'p(x)'])
plt.grid(True)

# Plot the interpolation error
plt.subplot(2, 1, 2)
plt.plot(x, f(x) - p)
plt.xlabel('x')
plt.ylabel('Error: f(x)-p(x)')
plt.grid(True)
print("Max error is {:.2e}".format(max(abs(p - f(x)))))

Exercise 1: Interpolation of 1/(1 + x^2)
Repeat the previous experiment with Runge's function

    f(x) = 1/(1 + x^2),   x ∈ [−5, 5].

Solution. See example_runge_interpolation() in polynomialinterpolation.py.
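For a self-contained version of this experiment, the following sketch re-implements cardinal and lagrange along the lines of the functions in polynomialinterpolation.py (the re-implemented bodies here are an assumption, not copied from that file) and measures how the maximum error behaves as n grows. For Runge's function on equidistributed nodes the error grows with n near the boundaries — the famous Runge phenomenon.

```python
import numpy as np

def cardinal(xdata, x):
    """Evaluate the cardinal (Lagrange basis) functions l_i at the points x."""
    n = len(xdata)
    l = []
    for i in range(n):
        li = np.ones(len(x))
        for j in range(n):
            if i != j:
                li = li * (x - xdata[j]) / (xdata[i] - xdata[j])
        l.append(li)
    return l

def lagrange(ydata, l):
    """Evaluate the interpolation polynomial p(x) = sum_i y_i * l_i(x)."""
    p = np.zeros(len(l[0]))
    for i in range(len(ydata)):
        p = p + ydata[i] * l[i]
    return p

def runge(x):
    return 1 / (1 + x**2)

a, b = -5, 5
x = np.linspace(a, b, 101)
for n in [4, 8, 16]:
    xdata = np.linspace(a, b, n + 1)   # equidistributed nodes
    p = lagrange(runge(xdata), cardinal(xdata, x))
    print("n = {:2d}, max error = {:.2e}".format(n, max(abs(runge(x) - p))))
```

Notice that, unlike for sin(x), increasing n makes the error larger here.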

1.1 Error Analysis


Taylor polynomials once more. Before we turn to the analysis of the interpolation error
e(x) = f(x) − pn(x), we quickly recall (once more) Taylor polynomials and their error
representation. For f ∈ C^(n+1)[a, b] and x0 ∈ (a, b), we defined the n-th order Taylor
polynomial T^n_{x0}f(x) of f around x0 by

    T^n_{x0}f(x) := Σ_{k=0}^{n} f^(k)(x0)/k! · (x − x0)^k.

Note that the Taylor polynomial is in fact a polynomial of order n which not only
interpolates f in x0, but also its first, second, and up to its n-th derivative
f′, f′′, . . . , f^(n) in x0!
So the Taylor polynomial is the unique polynomial of order n which interpolates the first
n derivatives of f in a single point x0. In contrast, the interpolation polynomial pn is
the unique polynomial of order n which interpolates only the 0-th derivative (that is, f
itself), but in n + 1 distinct points x0, x1, . . . , xn.
For the Taylor polynomial T^n_{x0}f(x) we have the error representation

    f(x) − T^n_{x0}f(x) = R_{n+1}(x0),   where   R_{n+1}(x0) = f^(n+1)(ξ)/(n+1)! · (x − x0)^(n+1),

with ξ between x and x0. Of course, we usually don't know the exact location of ξ, and
thus not the exact error, but we can at least estimate it and bound it from above:

    |f(x) − T^n_{x0}f(x)| ≤ M/(n+1)! · h^(n+1)

where

    M = max_{x∈[a,b]} |f^(n+1)(x)|   and   h = |x − x0|.
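As a quick sanity check of this bound (a minimal sketch, not part of the original notes), take f(x) = sin(x) and x0 = 0: every derivative of sin is bounded by 1, so M = 1 and the error of the n-th order Taylor polynomial at x is bounded by h^(n+1)/(n+1)! with h = |x − x0|.

```python
import math

def taylor_sin(x, n):
    """n-th order Taylor polynomial of sin around x0 = 0."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n // 2 + 1) if 2*k + 1 <= n)

x0, x = 0.0, 0.5
h = abs(x - x0)
for n in [1, 3, 5]:
    err = abs(math.sin(x) - taylor_sin(x, n))
    bound = h**(n + 1) / math.factorial(n + 1)   # M = 1 for sin
    print("n = {}: error = {:.2e} <= bound = {:.2e}".format(n, err, bound))
```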

The next theorem gives us an expression for the interpolation error e(x) = f(x) − pn(x)
which is similar to what we have just seen for the error between the Taylor polynomial and
the original function f.

Theorem 1.1. Interpolation error.

Given f ∈ C^(n+1)[a, b]. Let pn ∈ Pn interpolate f in n + 1 distinct nodes xi ∈ [a, b].
For each x ∈ [a, b] there is at least one ξ(x) ∈ (a, b) such that

    f(x) − pn(x) = f^(n+1)(ξ(x))/(n+1)! · Π_{i=0}^{n} (x − xi).

Before we turn to the proof of the theorem, it might be useful to recall Rolle's theorem
and the mean value theorem from Calculus 1, see also the Preliminaries.pdf notes:

Rolle's theorem. Let f ∈ C^1[a, b] and f(a) = f(b) = 0. Then there exists at least one
ξ ∈ (a, b) such that f′(ξ) = 0.

Mean value theorem. Let f ∈ C^1[a, b]. Then there exists at least one ξ ∈ (a, b) such that

    f′(ξ) = (f(b) − f(a))/(b − a).

Proof. We start from the Newton polynomial ω_{n+1}(x) =: ω(x),

    ω(x) = Π_{i=0}^{n} (x − xi) = x^(n+1) + · · · .

Clearly, the error vanishes in the nodes, e(xi) = 0. Choose an arbitrary x ∈ [a, b] with
x ≠ xi, i = 0, 1, . . . , n. For this fixed x, define a function in t by

    ϕ(t) = e(t)ω(x) − e(x)ω(t),

where e(t) = f(t) − pn(t). Notice that ϕ(t) is as differentiable with respect to t as
f(t). The function ϕ(t) has n + 2 distinct zeros (the nodes and the fixed x). As a
consequence of Rolle's theorem, the derivative ϕ′(t) has at least n + 1 distinct zeros,
one between each pair of neighbouring zeros of ϕ(t). So ϕ′′(t) has at least n distinct
zeros, etc. By repeating this argument, we see that ϕ^(n+1)(t) has at least one zero in
[a, b]; let us call it ξ(x), as it depends on the fixed x. Since ω^(n+1)(t) = (n + 1)!
and e^(n+1)(t) = f^(n+1)(t), we obtain

    0 = ϕ^(n+1)(ξ) = f^(n+1)(ξ)ω(x) − e(x)(n + 1)!,

which concludes the proof.
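The theorem can be illustrated numerically: rearranging the error formula gives (n+1)! · e(x)/ω(x) = f^(n+1)(ξ(x)), so for f(x) = sin(x) this quotient must stay within [−1, 1]. The sketch below uses np.polyfit as a stand-in for the cardinal/lagrange routines (an assumption: fitting a degree-n polynomial through n + 1 points reproduces the interpolation polynomial).

```python
import math
import numpy as np

a, b = 0, 2*np.pi
n = 6
xdata = np.linspace(a, b, n + 1)           # equidistributed nodes
c = np.polyfit(xdata, np.sin(xdata), n)    # interpolation polynomial

x = np.linspace(a + 0.01, b - 0.01, 200)   # stay away from the endpoints
omega = np.ones(len(x))
for xj in xdata:
    omega = omega * (x - xj)               # omega(x) = (x-x_0)...(x-x_n)

e = np.sin(x) - np.polyval(c, x)           # interpolation error e(x)
quotient = math.factorial(n + 1) * e / omega
print("max of (n+1)!|e(x)/omega(x)|: {:.3f}".format(max(abs(quotient))))
```

Since the 7th derivative of sin is −cos, the printed value cannot exceed 1 (up to roundoff).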

Observation. The interpolation error consists of three elements: The derivative of the function f , the
number of interpolation points n + 1 and the distribution of the nodes xi . We cannot do much with
the first of these, but we can choose the two others. Let us first look at the most obvious choice of
nodes.

Equidistributed nodes. The nodes are equidistributed over the interval [a, b] if
xi = a + ih, h = (b − a)/n. In this case it can be proved that

    |ω(x)| ≤ (n!/4) · h^(n+1),

such that

    |e(x)| ≤ h^(n+1)/(4(n + 1)) · M_{n+1},   M_{n+1} = max_{x∈[a,b]} |f^(n+1)(x)|,

for all x ∈ [a, b].


Let us now see how good this error bound is by an example.

Exercise 2: Interpolation error for sin(x) revisited
Let again f(x) = sin(x) and pn(x) the polynomial interpolating f(x) in n + 1
equidistributed points on [0, 2π]. An upper bound for the error for different values of n
can be found easily. Clearly, max_{x∈[0,2π]} |f^(n+1)(x)| = M_{n+1} = 1 for all n, so

    |en(x)| = |f(x) − pn(x)| ≤ 1/(4(n + 1)) · (2π/n)^(n+1),   x ∈ [0, 2π].

Use the code in the first example of this lecture to verify the result for n = 2, 4, 8, 16.
How close is the bound to the real error?

Solution. See the function example_est_error_interpolation in the file polynomialinterpolation.py.
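A sketch of such a verification, again with np.polyfit as a stand-in for cardinal/lagrange (an assumption; polyfit through n + 1 points with degree n gives the interpolation polynomial, but becomes ill-conditioned for large n, so the loop stops at n = 8):

```python
import numpy as np

a, b = 0, 2*np.pi
x = np.linspace(a, b, 1001)
for n in [2, 4, 8]:
    xdata = np.linspace(a, b, n + 1)            # equidistributed nodes
    coeffs = np.polyfit(xdata, np.sin(xdata), n)
    err = max(abs(np.sin(x) - np.polyval(coeffs, x)))
    bound = (2*np.pi/n)**(n + 1) / (4*(n + 1))  # from the estimate above
    print("n = {:2d}: max error = {:.2e}, bound = {:.2e}".format(n, err, bound))
```

The measured error stays below the bound, but the bound is not sharp.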

1.2 Optimal choice of interpolation points


So how can the error be reduced? For a given n there is only one choice: to distribute the
nodes in order to make |ω(x)| = Π_{j=0}^{n} |x − xj| as small as possible. We will first do
this on the standard interval [−1, 1], and then transfer the results to an arbitrary
interval [a, b].
Let us start by taking a look at ω(x) for equidistributed nodes on the interval [−1, 1],
for different values of n:
newparams = {'figure.figsize': (6, 3)}
plt.rcParams.update(newparams)


def omega(xdata, x):
    # Compute omega(x) = (x-x_0)(x-x_1)...(x-x_n) for the nodes in xdata
    n1 = len(xdata)
    omega_value = np.ones(len(x))
    for j in range(n1):
        omega_value = omega_value*(x - xdata[j])
    return omega_value

# Plot omega(x)
n = 8                           # Number of interpolation points is n+1
a, b = -1, 1                    # The interval
x = np.linspace(a, b, 501)
xdata = np.linspace(a, b, n+1)  # n+1 equidistributed nodes
plt.plot(x, omega(xdata, x))
plt.grid(True)
plt.xlabel('x')
plt.ylabel('omega(x)')
print("n = {:2d}, max|omega(x)| = {:.2e}".format(n, max(abs(omega(xdata, x)))))

Run the code for different values of n. Notice the following:

• max_{x∈[−1,1]} |ω(x)| becomes smaller with increasing n.
• |ω(x)| takes its maximum values near the boundaries of [−1, 1].

As a consequence of the latter, it seems reasonable to move the nodes towards the
boundaries. It can be proved that the optimal choice of nodes is given by the
Chebyshev nodes

    x̃i = cos((2i + 1)π / (2(n + 1))),   i = 0, . . . , n.
Let ωCheb(x) = Π_{j=0}^{n} (x − x̃j). It is then possible to prove that

    1/2^n = max_{x∈[−1,1]} |ωCheb(x)| ≤ max_{x∈[−1,1]} |q(x)|

for all monic polynomials q(x) = x^(n+1) + c_n x^n + · · · + c_1 x + c_0 of degree n + 1.
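This value can be checked numerically (a small sketch, not part of the original notes): build ωCheb from the n + 1 Chebyshev nodes and take the maximum of |ωCheb| over a fine grid on [−1, 1]; it should match 2^(−n) almost exactly, since the maximum is attained at the endpoints ±1, which lie on the grid.

```python
import numpy as np

def omega(xdata, x):
    """Evaluate omega(x) = (x - x_0)...(x - x_n) at the points x."""
    val = np.ones(len(x))
    for xj in xdata:
        val = val * (x - xj)
    return val

x = np.linspace(-1, 1, 2001)
for n in [4, 8, 16]:
    i = np.arange(n + 1)
    xtilde = np.cos((2*i + 1)*np.pi / (2*(n + 1)))   # n+1 Chebyshev nodes
    m = max(abs(omega(xtilde, x)))
    print("n = {:2d}: max|omega_Cheb| = {:.3e}, 2^(-n) = {:.3e}".format(n, m, 2.0**(-n)))
```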

The distribution of nodes can be transferred to an interval [a, b] by the linear
transformation

    x = (b − a)/2 · x̃ + (b + a)/2,

where x ∈ [a, b] and x̃ ∈ [−1, 1]. By doing so we get

    ω(x) = Π_{j=0}^{n} (x − xj) = ((b − a)/2)^(n+1) Π_{j=0}^{n} (x̃ − x̃j) = ((b − a)/2)^(n+1) ωCheb(x̃).

From the theorem on interpolation errors we can conclude:

Theorem (interpolation error for Chebyshev interpolation).

Given f ∈ C^(n+1)[a, b], and let M_{n+1} = max_{x∈[a,b]} |f^(n+1)(x)|. Let pn ∈ Pn
interpolate f in n + 1 Chebyshev nodes xi ∈ [a, b]. Then

    max_{x∈[a,b]} |f(x) − pn(x)| ≤ (b − a)^(n+1) / (2^(2n+1) (n + 1)!) · M_{n+1}.

The Chebyshev nodes over an interval [a, b] are evaluated in the following function:

def chebyshev_nodes(a, b, n):
    # n Chebyshev nodes in the interval [a, b]
    i = np.array(range(n))              # i = [0, 1, 2, ..., n-1]
    x = np.cos((2*i + 1)*pi/(2*n))      # nodes over the interval [-1, 1]
    return 0.5*(b - a)*x + 0.5*(b + a)  # nodes over the interval [a, b]
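The theorem above can be checked with this function (a sketch; chebyshev_nodes is repeated so the snippet is self-contained, and np.polyfit is used as a stand-in for cardinal/lagrange, with M_{n+1} = 1 for f = sin):

```python
import math
import numpy as np

def chebyshev_nodes(a, b, n):
    # n Chebyshev nodes in the interval [a, b]
    i = np.arange(n)
    x = np.cos((2*i + 1)*np.pi/(2*n))   # nodes over [-1, 1]
    return 0.5*(b - a)*x + 0.5*(b + a)  # nodes over [a, b]

a, b = 0, 2*np.pi
x = np.linspace(a, b, 1001)
for n in [4, 8]:
    xdata = chebyshev_nodes(a, b, n + 1)        # n+1 Chebyshev nodes
    c = np.polyfit(xdata, np.sin(xdata), n)     # interpolation polynomial
    err = max(abs(np.sin(x) - np.polyval(c, x)))
    bound = (b - a)**(n + 1) / (2**(2*n + 1) * math.factorial(n + 1))
    print("n = {}: max error = {:.2e}, bound = {:.2e}".format(n, err, bound))
```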

Exercise 3: Chebyshev interpolation

a) Plot ωCheb(x) for 3, 5, 9 and 17 interpolation points on the interval [−1, 1].
b) Repeat the interpolation experiments above using Chebyshev nodes for the functions
below. Compare with the results you got from equidistributed nodes.

    f(x) = sin(x),   x ∈ [0, 2π]

    f(x) = 1/(1 + x^2),   x ∈ [−5, 5].

For information: Chebfun is a software package which makes it possible to manipulate
functions and to solve equations with accuracy close to machine accuracy. The algorithms
are based on polynomial interpolation in Chebyshev nodes.
