1 Motivation For Newton Interpolation
Instructor: Professor Amos Ron Scribes: Yunpeng Li, Mark Cowlishaw, Nathanael Fillmore
Using this representation, the problem is equivalent to solving a system of linear equations

V a = F

where V is the Vandermonde matrix, a is the column vector of (unknown) polynomial coefficients, and F is the column vector of function values at the interpolation points.
As we have seen, solving this system is not a good method for solving the polynomial inter-
polation problem. However, this representation is important, since we were able to use it to
conclude that the solution to a polynomial interpolation problem is unique.
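For concreteness, the Vandermonde system above can be set up and solved directly. The sketch below uses NumPy and the example data X = (1, 3, 0), F = (2, 7, −8) that appears later in these notes; the variable names are ours.

```python
import numpy as np

x = np.array([1.0, 3.0, 0.0])   # interpolation points
f = np.array([2.0, 7.0, -8.0])  # function values at those points

# V[i, j] = x_i ** j, so row i of the system V a = F encodes p(x_i) = f(x_i)
V = np.vander(x, increasing=True)
a = np.linalg.solve(V, f)       # monomial coefficients a_0, a_1, a_2
```

This works for small, well-separated point sets, but the Vandermonde matrix becomes badly conditioned as points cluster, which is one reason this is not the preferred method.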
Using the Lagrange representation, we showed that it is easy to find a solution p, and this led to
the important conclusion that the polynomial interpolation problem always has a solution.
However, a solution written in Lagrange form may be difficult to use.
Evaluation We may want to use p to approximate f at points outside the set of interpolation
points. The Lagrange representation may be problematic for this purpose if we have in-
terpolation points (xa , xb ) that are very close together. Recall the formula for a Lagrange
polynomial `i (t).
ℓi(t) = ∏_{j=0, j≠i}^{n} (t − xj)/(xi − xj)
Those Lagrange polynomials that include (xa − xb ) or (xb − xa ) in the denominator will be
quite large, and may suffer from loss of significance.
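A direct implementation of the product formula makes this concern concrete. A minimal sketch (the function name is ours, not from the notes):

```python
def lagrange_basis(x, i, t):
    """Evaluate the Lagrange basis polynomial l_i at t, straight from the
    product formula; each factor (x[i] - x[j]) sits in the denominator,
    so nearly coincident points produce very large factors."""
    result = 1.0
    for j in range(len(x)):
        if j != i:
            result *= (t - x[j]) / (x[i] - x[j])
    return result
```

By construction ℓi(xk) is 1 when k = i and 0 otherwise, which is easy to check on any set of points.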
Evaluation of Derivative The symbolic differentiation of a Lagrange polynomial, using the product rule, will have n separate terms, each involving n − 1 multiplications. Since the solution to the polynomial interpolation problem p contains n + 1 different Lagrange polynomials,
evaluating the derivative could be very inefficient.
Clearly, the Lagrange representation is not ideal for some uses of the polynomial p.
Additionally, there is another problem with the Lagrange representation: it does not take advantage of the inductive structure of a polynomial interpolation problem. Polynomial interpolation is
often an iterative process where we add points one at a time until we reach an acceptable solution.
Given a particular solution p to a polynomial interpolation problem with n points, we would like
to add an additional point and find a new solution p0 without too much additional cost. In other
words, we would like to reuse p to help us find p0 .
It is easy to see that the Lagrange representation will not meet this requirement. Using the
Lagrange representation, we can find a solution to a polynomial interpolation problem with n points
X = (x0 , x1 , . . . , xn−2 , xn−1 ) easily, but, if we add a single additional point xn , we have to throw
out our previous solution and recalculate the solution to the new problem from scratch. This is
because every interpolation point is used in every term of the Lagrange representation.
We shall now discuss a polynomial representation that makes use of the inductive structure of
the problem. However, note that, despite these shortcomings, the Lagrange representation is not a
bad way to represent polynomials.
2 Newton Polynomials
We introduce Newton polynomials with a series of examples.
Example 2.1. Given a set of points X = (1, 3, 0) and the corresponding function values F =
(2, 7, −8), find a quadratic polynomial that interpolates this data.
Step 1 The order of the points does not matter (as long as each point is matched to the corre-
sponding function value), but we will consider the first point X0 = 1, F0 = 2, and find a zero
order polynomial p0 that interpolates this point.
N0(t) = 1
p0(t) = 2 · N0(t)
Step 2 We use our previous solution p0 and the first order Newton polynomial N1 to interpolate
the first two points (1, 2) and (3, 7).
N1(t) = (t − 1)
p1(t) = p0(t) + a1 · N1(t)

We need to find the value of a1 given the two constraints that p1(1) = 2 and p1(3) = 7.

p1(1) = p0(1) + a1 · (1 − 1) = 2 + 0 = 2 ✓
p1(3) = p0(3) + a1 · (3 − 1)
7 = 2 + a1 · 2
a1 = 5/2
Step 3 We use our previous solution p1 and the second order Newton polynomial N2 to interpolate
all three points.
N2(t) = (t − 1) · (t − 3)
p2(t) = p1(t) + a2 · N2(t)

Solving for the three constraints (1, 2), (3, 7), and (0, −8), we obtain:

p2(1) = p1(1) + a2 · 0 = 2 ✓
p2(3) = p1(3) + a2 · 0 = 7 ✓
p2(0) = p1(0) + a2 · (0 − 1) · (0 − 3) = p1(0) + a2 · 3
−8 = −1/2 + a2 · 3
a2 = −5/2
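The steps above can be carried out mechanically: each new coefficient is whatever value makes the updated interpolant hit the newest data point. A minimal sketch of the incremental construction (the helper name is ours):

```python
def newton_coeffs(x, f):
    """Build the Newton-form coefficients one interpolation point at a
    time, reusing the previous partial interpolant at every step."""
    a = []
    for k in range(len(x)):
        p_val = 0.0   # value of the partial interpolant p_{k-1} at x[k]
        basis = 1.0   # value of N_j at x[k], built up factor by factor
        for j in range(k):
            p_val += a[j] * basis
            basis *= x[k] - x[j]
        # basis now holds N_k(x[k]); choose a_k so that p_k(x[k]) = f[k]
        a.append((f[k] - p_val) / basis)
    return a
```

On the example data this reproduces the coefficients 2, 5/2, and −5/2 found above.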
To solve polynomial interpolation problems using this iterative method, we need the polynomials
Ni (t). We call these Newton polynomials.
Definition 2.1 (Newton Polynomials). Given a set of n + 1 input points X = (X0, X1, . . . , Xn), we define the n + 1 Newton polynomials:
N0(t) = 1
N1(t) = t − X0
  ⋮
Nn(t) = (t − X0) · (t − X1) · · · (t − Xn−1)
Note that the Newton polynomials for a polynomial interpolation problem depend only on the input points X = (X0, X1, . . . , Xn), and not on the associated function values F = (f(X0), f(X1), . . . , f(Xn)).
Given these Newton polynomials, we can define a recurrence for pn(t):

pn(t) = pn−1(t) + an · Nn(t),    with p0(t) = a0 · N0(t)
At first, the basis of Newton polynomials doesn’t look any better than Lagrange in terms of
computation: Nn (t) has n factors, similar to the number of factors in a Lagrange polynomial. This
is a valid concern; however, as we will see in future lectures, it turns out that there are very efficient
linear-time algorithms to both evaluate and find derivatives of polynomials in the Newton basis.
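For instance, evaluation can be done with a nested-multiplication (Horner-like) scheme costing O(n) operations. A sketch, assuming the coefficients a_k and centers x_k are given:

```python
def newton_eval(a, x, t):
    """Evaluate p(t) = a0 + (t - x0)(a1 + (t - x1)(a2 + ...)) from the
    inside out: one multiplication and two additions per coefficient."""
    result = a[-1]
    for k in range(len(a) - 2, -1, -1):
        result = result * (t - x[k]) + a[k]
    return result
```

With the coefficients from Example 2.1, newton_eval([2.0, 2.5, -2.5], [1.0, 3.0], t) reproduces the interpolated values at the data points.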
We can find the coefficient an for each Newton polynomial using the method of divided differences.
3 Divided Differences
Definition 3.1 (Divided Differences). Given a set of n + 1 input points X = (X0, X1, . . . , Xn), the corresponding function values F = (f(X0), f(X1), . . . , f(Xn)), and the Newton representation of the polynomial interpolant pn(t) = a0N0(t) + a1N1(t) + · · · + anNn(t), the coefficient an of the nth Newton polynomial Nn(t) is called the divided difference (or D.D.) of f at (X0, X1, . . . , Xn) and is denoted by f[X0, X1, . . . , Xn].
Example 3.1. Given X = (1, 3, 0) and F = (2, 7, −8), then, from the computation in Example 2.1, f[1] = 2, f[1, 3] = 5/2, and f[1, 3, 0] = −5/2.
Note that a polynomial interpolation problem is invariant over the ordering of the input points.
If we rewrote p2(t) in equation 4 in monomial form, the coefficient of t² would be the divided difference f[1, 3, 0]. If we were to reorder the input points as (3, 0, 1) and solve the problem using Newton polynomials, the coefficient of t² would be f[3, 0, 1]. Since these problems are equivalent, this means that f[1, 3, 0] = f[3, 0, 1]. In general, the final coefficient in the Newton representation of
a polynomial interpolant is always the same under any ordering of the input points. This means that
the divided difference f [x0 , x1 , . . . , xn ] is invariant over the ordering of the points (x0 , x1 , . . . , xn ).
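This invariance is easy to check numerically. The sketch below computes the top-order divided difference via the standard recurrence (a preview of the table in the next section; the helper name is ours) and compares two orderings of the example data:

```python
def divided_difference(x, f):
    """Return f[x0, ..., xn], the leading Newton coefficient, by
    repeatedly combining neighboring differences in place."""
    d = list(f)
    for j in range(1, len(x)):
        for i in range(len(x) - j):
            d[i] = (d[i] - d[i + 1]) / (x[i] - x[i + j])
    return d[0]
```

Both orderings of the points from Example 3.1 give the same value, −5/2.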
Finally, we define the order of a divided difference.
Definition 3.2. The order of a divided difference f[X0, X1, . . . , Xn] is one less than the number
of points in the divided difference.
A divided difference of order k, f[X0, X1, . . . , Xk], is the coefficient of the final (order k) Newton
polynomial in the interpolant of (X0 , X1 , . . . , Xk ).
The divided differences needed for the Newton coefficients can be computed with a table whose rows are labeled by the input points. The entries in the first column (column 0) are simply the function values F. Each successive value D[i, j] (the ith row and jth column of the table) is calculated as a quotient. The numerator is the difference of the value immediately to the left (D[i, j − 1]) and the value on the downward-left diagonal (D[i + 1, j − 1]). The denominator is the difference between the input value xi labeling the ith row and the input value xi+j labeling the (i + j)th row. More formally, given a polynomial interpolation problem with n + 1 input points, we define the following table D:

D[i, 0] = f(xi),    D[i, j] = (D[i, j − 1] − D[i + 1, j − 1]) / (xi − xi+j)

For example, for the input points X = (1, 0, −1) and values F = (2, 4, 8), the table is

 1    2   -2    1
 0    4   -4
-1    8
and we have
p(t) = 2 · 1 − 2 · (t − 1) + 1 · (t − 1)(t − 0)
as the solution.
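The table construction above is short to code. A minimal sketch, where row i, column j holds the divided difference of the points xi through xi+j, and the top row supplies the Newton coefficients:

```python
def dd_table(x, f):
    """Build the divided-difference table D, with D[i][j] equal to
    f[x_i, ..., x_{i+j}]; unused upper-triangle slots stay zero."""
    n = len(x)
    D = [[0.0] * n for _ in range(n)]
    for i in range(n):
        D[i][0] = f[i]
    for j in range(1, n):
        for i in range(n - j):
            D[i][j] = (D[i][j - 1] - D[i + 1][j - 1]) / (x[i] - x[i + j])
    return D
```

Running this on X = (1, 0, −1), F = (2, 4, 8) reproduces the table above, and the top row gives the coefficients of p(t).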
Now when we add one more point and value, say, X = (1, 0, −1, 2) and F = (2, 4, 8, 2), we construct a new array, with the above array as the upper left corner:

 1    2   -2    1          1    2   -2    1    0
 0    4   -4          ⇒    0    4   -4    1
-1    8                   -1    8   -2
                           2    2