
Chapter 4 - Interpolation

This lecture discusses two methods for polynomial interpolation: 1. Lagrange method - Constructs Lagrange basis polynomials Li(x) such that Li(xj) = δij. The interpolating polynomial is a linear combination of the basis polynomials multiplied by the function values. 2. Newton divided difference method - Uses divided differences to recursively construct coefficients of the interpolating polynomial. The Lagrange method has advantages over undetermined coefficients as it does not require solving a system of equations and can handle unevenly spaced data points. The interpolating polynomial from Lagrange interpolation is unique and will match the data function exactly at the sample points.


Lecture 16

Last lecture:
Method of undetermined coefficients for interpolation
This lecture:

Lagrange method
Newton divided difference method
IV. INTERPOLATION

Example:

(Figure: image of a whole alligator; zoom in on the eye region.)

For any relatively large print, notice the jagged pixel edges when the eye region is enlarged w/o interpolation; you will likely see better results if the image (from a digital camera) is interpolated before enlarging.

(Figures: the eye region enlarged w/o interpolation vs. enlarged with interpolation.)
4.1 Method of Undetermined Coefficients

Example: Consider the following discrete data:

  n    0   1   2    3
  xn   1   2   3    5.5
  yn   0   6   20   129.375

* Find y(x=4) using a 3rd order polynomial that fits the above data.
* Start with a 3rd order polynomial: P3(x) = a0 + a1 x + a2 x² + a3 x³
* Set P3(xn) = yn for n = 0, 1, 2, and 3:

  a0 +      a1 +       a2 +        a3 = 0
  a0 +     2a1 +      4a2 +       8a3 = 6
  a0 +     3a1 +      9a2 +      27a3 = 20
  a0 + 5.5 a1 + 30.25 a2 + 166.375 a3 = 129.375

  or [V] a = y, where [V] is the Vandermonde matrix whose nth row is (1, xn, xn², xn³).

* Gaussian elimination => (a0, a1, a2, a3) = (-4, 5, -2, 1)
* Interpolation value: y(x=4) ≈ P3(4) = -4 + 5·4 - 2·4² + 4³ = 48.0
Comments on the method of undetermined coefficients

* The solution for (a0, a1, a2, a3) exists and is unique because

  det(V) = ∏_{0≤i<j≤n} (xj - xi) ≠ 0

* There are two serious problems with this approach:
  i. If the polynomial is of higher order, the system may be ill-conditioned
     (Cond(V) = 1956.45 ≫ 1 in the present example),
     so that the solution for (a0, a1, a2, a3) may be unreliable.
  ii. When a new data point is encountered, say (x4, y4), we need to start all over.

=> Need more efficient and robust methods.
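The observations above are easy to check numerically. A minimal Python sketch (NumPy assumed available; the lecture's own code is MATLAB) that solves the Vandermonde system for the example data:

```python
import numpy as np

# Nodes and values from the example (y3 = 129.375)
xn = np.array([1.0, 2.0, 3.0, 5.5])
yn = np.array([0.0, 6.0, 20.0, 129.375])

# Vandermonde matrix: row n is (1, xn, xn^2, xn^3)
V = np.vander(xn, increasing=True)

# Solve [V] a = y for the polynomial coefficients
a = np.linalg.solve(V, yn)                     # expect (-4, 5, -2, 1)

# Interpolated value at x = 4
P3 = lambda x: a @ np.array([1.0, x, x**2, x**3])
y4 = P3(4.0)                                   # expect 48.0

# A large condition number signals the ill-conditioning noted above
cond = np.linalg.cond(V)
```

For higher orders and wider node ranges, `cond` grows rapidly, which is exactly why the Lagrange and Newton forms below are preferred.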


4.2 Lagrange Interpolation

Consider the data given below:

  xi   x0   x1   x2   x3
  yi   y0   y1   y2   y3

• We can use a 3rd order polynomial to interpolate.
• Express the third order polynomial as

  P3(x) = Σ_{i=0}^{3} Li(x) yi

  with each Li(x) a 3rd order polynomial.


4.2.1 Lagrange interpolating polynomials

Consider fitting the following data using a 3rd order polynomial:

  xi   x0   x1   x2   x3
  yi   y0   0    0    0

How to find a 3rd order polynomial for this set of data?

Note: x1, x2, x3 are roots of y(x)
=> we can express the 3rd order polynomial as

  y = q0(x) = c (x - x1)(x - x2)(x - x3)

Set q0(x0) = y0
=> y0 = c (x0 - x1)(x0 - x2)(x0 - x3)
=> c = y0 / [(x0 - x1)(x0 - x2)(x0 - x3)]

  q0(x) = y0 (x - x1)(x - x2)(x - x3) / [(x0 - x1)(x0 - x2)(x0 - x3)]   ( = y0 L0(x) )

where

  L0(x) = (x - x1)(x - x2)(x - x3) / [(x0 - x1)(x0 - x2)(x0 - x3)]

(Figure: plot of L0(x), which equals 1 at x0 and 0 at x1, x2, x3.)
Lagrange Interpolating Polynomials

Now consider

  xi   x0   x1   x2   x3
  yi   0    y1   0    0

=> q1(x) = y1 (x - x0)(x - x2)(x - x3) / [(x1 - x0)(x1 - x2)(x1 - x3)]   ( = y1 L1(x) )

=> similarly, q2(x) = y2 L2(x), q3(x) = y3 L3(x)

Now consider

  xi   x0   x1   x2   x3
  yi   y0   y1   y2   y3

=> P3(x) = y0 L0(x) + y1 L1(x) + y2 L2(x) + y3 L3(x)
=> a 3rd order polynomial for arbitrary (xi, yi)

(Figures: plots of L1(x), L2(x), L3(x); each Li(x) equals 1 at xi and 0 at the other three nodes.)
4.2.2 Lagrange interpolation and its properties

-- General Result

  Pn(x) = L0(x) f(x0) + L1(x) f(x1) + ... + Ln(x) f(xn) = Σ_{i=0}^{n} Li(x) f(xi)
        = nth order polynomial fit for the data

  Li(x) = ∏_{j=0, j≠i}^{n} (x - xj)/(xi - xj)
        = [(x - x0)(x - x1)···(x - xi-1)(x - xi+1)···(x - xn)] / [(xi - x0)(xi - x1)···(xi - xi-1)(xi - xi+1)···(xi - xn)]

  Li(xj) = ?  For j = i, Li(xi) = 1; for j ≠ i, Li(xj) = 0.  Note: Li(xj) = δij
=> Note:
  i. There is no need to solve an ill-conditioned system.
  ii. (x0, x1, x2, x3, ...) can be unevenly spaced and out of order.
  iii. Error is exactly zero at x = xi, i.e., E(xi) = 0 (i = 0, 1, ..., n).
  iv. If f(x) is a polynomial of order lower than (n+1), then Pn(x) is exactly equal to f(x) since f^(n+1)(x) = 0.
  v. The interpolating polynomial is unique. Why?
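The δij property of the basis polynomials can be verified directly; a small Python sketch (node locations borrowed from the earlier example, otherwise arbitrary):

```python
import numpy as np

def lagrange_basis(x, nodes, i):
    """Evaluate L_i(x) = prod_{j != i} (x - x_j)/(x_i - x_j)."""
    Li = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            Li *= (x - xj) / (nodes[i] - xj)
    return Li

nodes = [1.0, 2.0, 3.0, 5.5]   # unevenly spaced nodes are fine

# Tabulate L_i(x_j): the result should be the identity matrix (delta_ij)
delta = [[lagrange_basis(xj, nodes, i) for j, xj in enumerate(nodes)]
         for i in range(len(nodes))]
```

Because each Li is 1 on its own node and 0 on the others, Pn(x) automatically passes through every data point with no linear system to solve.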
Uniqueness of interpolating polynomial

Let qn(x) be another polynomial of degree at most n with


qn(xi) = yi = Pn(xi) for i = 0 to n
(the fitting curve must go through ALL given data points (xi, yi)).
Then r(x) = Pn(x) - qn(x) is also a polynomial of degree at most n.
A nonzero polynomial of degree at most n has at most n roots!
However, r(xi) = 0 for i = 0, 1, 2, ..., n => it has (n+1) roots.
=> the only possibility to have (n+1) roots is r(x) ≡ 0
=> qn(x) ≡ Pn(x), so you will never find 2 different interpolating polynomials.
4.2.3 Error in polynomial fitting

* The error of Pn(x) in fitting f(x) using the data at (x0, x1, x2, ..., xn) is

  E(x) = f(x) - Pn(x)

It can be shown that

  E(x) = (x - x0)(x - x1)···(x - xn) f^(n+1)(ξ)/(n+1)!

where ξ lies among (x0, x1, x2, ..., xn).

* Derivation of the error formula:

* Let ψ(x) = (x - x0)(x - x1)···(x - xn) = polynomial of (n+1)th degree.
  (Note: ψ(xi) = 0 for i = 0, 1, ..., n, and ψ^(n+1)(x) = (n+1)!)

* For a fixed t distinct from the nodes, let

  G(x) = E(x) - [ψ(x)/ψ(t)] E(t)        (E(t) is still UNKNOWN)

  Then G(xi) = E(xi) - [ψ(xi)/ψ(t)] E(t) = 0,
  because E(xi) = 0 and ψ(xi) = 0 for i = 0, 1, ..., n.

* Furthermore, G(x=t) = E(t) - E(t) = 0.
  Thus G(x) has (n+2) roots
  => G'(x) has (n+1) roots, ..., G^(j)(x) has (n+2-j) roots for j = 0, 1, ..., n+1.

* It can be shown that G^(n+1)(x) has one root, say x = ξ, so that G^(n+1)(ξ) = 0.
  Note: Pn^(n+1)(x) = 0 (since Pn(x) is only nth order) => E^(n+1)(x) = f^(n+1)(x)
  => G^(n+1)(x) = f^(n+1)(x) - [(n+1)!/ψ(t)] E(t)
  At x = ξ, G^(n+1)(ξ) = 0

  => E(t) = [ψ(t)/(n+1)!] f^(n+1)(ξ),  or  E(x) = [ψ(x)/(n+1)!] f^(n+1)(ξ)

(Figure: ψ(x) for n = 8.)
Example

Given the following data,

  n    0    1   2
  xn   1    2   4
  yn   -4   3   59

(data generated using y = f(x) = x³ - 5; f(3) = 22)

Find y = f(x=3) using i) P2(x); and ii) P1(x).

Sol. i)

  P2(x) = [(x - x1)(x - x2)] / [(x0 - x1)(x0 - x2)] y0
        + [(x - x0)(x - x2)] / [(x1 - x0)(x1 - x2)] y1
        + [(x - x0)(x - x1)] / [(x2 - x0)(x2 - x1)] y2

  P2(3) = [(3-2)(3-4)] / [(-1)(-3)] (-4) + [(3-1)(3-4)] / [(1)(-2)] (3) + [(3-1)(3-2)] / [(3)(2)] (59) = 24.

ii) For linear interpolation we need to choose the xi's as close to x=3 as possible (x = 2 and x = 4):

  P1(x) = (x - x2)/(x1 - x2) y1 + (x - x1)/(x2 - x1) y2

  P1(3) = [(3-4)/(-2)]·3 + [(3-2)/2]·59 = 31.

iii) Comparison of interpolations with y = f(x) (see figure):

◊ Further comments:
  • Both P1(x) and P2(x) work fine inside the interval (1, 4).
  • Both are bad outside the interval.
  • Error(x) oscillates between node points.
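Both interpolants above can be checked with a short Python sketch (a generic Lagrange evaluator; helper names are illustrative):

```python
def lagrange_eval(x, xs, ys):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)
        total += yi * Li
    return total

# Data generated from f(x) = x^3 - 5, so the exact value is f(3) = 22
p2 = lagrange_eval(3.0, [1.0, 2.0, 4.0], [-4.0, 3.0, 59.0])   # quadratic fit: 24
# Linear fit through the two nearest points, x = 2 and x = 4
p1 = lagrange_eval(3.0, [2.0, 4.0], [3.0, 59.0])              # linear fit: 31
```

Neither fit recovers f(3) = 22 exactly; the quadratic, using all three points, comes closer.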
Example (contd)

* Error in the fit: Error2(x) = f(x) - P2(x)

(Figure: Error2(x) oscillates between the node points and grows rapidly outside the interval (1, 4).)

• Outside the interval, consider the second order basis functions L1(x) and L2(x) shown in the figure:
  the contributions from the nodes at x = 1 and x = 2 have too much influence at x = -1 (outside the interval).

• Thus extrapolation outside the interval can be dangerous!
Application of error analysis in interpolation

* Truncation error in linear interpolation between (x0, x1):

  Let h = x1 - x0.

  E1(x) = f(x) - P1(x) = f(x) - [(x - x1)/(x0 - x1)] f0 - [(x - x0)/(x1 - x0)] f1
        = (1/2)(x - x0)(x - x1) f''(ξ) for some ξ in [x0, x1].

  Let g(x) = (x - x0)(x - x1). (Notice that g(x) = 0 at x = x0 & x1.)
  => max |g(x)| = (h/2)(h/2) = h²/4 for x in [x0, x1].

  Thus, |E1(x)| ≤ (h²/8) Sup_{x∈[x0,x1]} |f''(x)|.

• This is useful in estimating the dependence of the error in linear interpolation on h
  when one has a general idea about the magnitude of |f''(x)|.

* Effect of round-off error (R) in interpolation:

  Note: the values of f(x) at x0 and x1, f0 & f1, are only approximate due to rounding:
  f0T = f0A + e0, f1T = f1A + e1; let e = max(|e0|, |e1|).

  R = [(x - x1)/(x0 - x1)] e0 + [(x - x0)/(x1 - x0)] e1
    ≤ [ (x1 - x)/(x1 - x0) + (x - x0)/(x1 - x0) ] e = e

* Total interpolation error = E1 + R:

  |E1 + R| ≤ (h²/8) Sup_{x∈[x0,x1]} |f''(x)| + e

Example: If we interpolate f(x) = √x in x = [10, 1000]:

  f''(x) = -1/(4 x^(3/2))
  Thus, max |f''(x)| = 1/(4·10^(3/2)) for x in [10, 1000].
  For h = 1, E1 ≤ (1/8) · 1/(4·10^(3/2)) = 9.88E-4 using linear interpolation.
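The h²/8 bound for f(x) = √x can be checked empirically; a Python sketch that samples interval midpoints, where the linear-interpolation error of a smooth concave function is essentially largest:

```python
import math

h = 1.0
bound = (h**2 / 8) * 1 / (4 * 10**1.5)     # the 9.88E-4 bound derived above

# Worst observed error of piecewise-linear interpolation of sqrt on [10, 1000],
# sampled at the midpoint of each interval [x, x + h]
worst = 0.0
x = 10.0
while x + h <= 1000.0:
    mid = x + h / 2
    interp = (math.sqrt(x) + math.sqrt(x + h)) / 2   # linear interpolant at mid
    worst = max(worst, abs(math.sqrt(mid) - interp))
    x += h
```

The observed worst error (near the left end, where |f''| is largest) stays just under the analytical bound, as the theory predicts.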


Lecture 17
Last lecture:
Lagrange method
This lecture:

Newton divided difference method


Finite difference
4.3 Newton Divided Difference

4.3.1 Motivation and definitions


◊ Lagrangian interpolation is difficult to use when an additional point (xn+1, fn+1) is to be added.
◊ For the discrete data:

  x   x0   x1   x2   ...   xn   xn+1
  f   f0   f1   f2   ...   fn   fn+1

we want to fit the data using Pn(x) in the form

  Pn(x) = a0 + a1(x - x0) + a2(x - x0)(x - x1) + ... + an(x - x0)(x - x1)···(x - xn-1),

so that when we want Pn+1(x), we simply add another term to it: Pn+1(x) = Pn(x) + an+1(x - x0)(x - x1)···(x - xn).

◊ Define the first divided difference between x0 and x1:

  f[x0, x1] = (f1 - f0)/(x1 - x0) = f0^[1]

  first divided difference between xr and xs:  f[xr, xs] = (fs - fr)/(xs - xr) = fr^[1]

  second order divided difference:  f[x0, x1, x2] = { f[x1, x2] - f[x0, x1] } / (x2 - x0) = f0^[2]

  higher order divided difference:  f[x0, x1, x2, ..., xn] = { f[x1, x2, ..., xn] - f[x0, x1, ..., xn-1] } / (xn - x0) = f0^[n]
4.3.2 Construction of interpolating polynomials

◊ Back to fitting by Pn(x):

  At x = x0: Pn(x0) = a0 = f0  =>  a0 = f0
  At x = x1: Pn(x1) = a0 + a1(x1 - x0) = f1  =>  a1 = (f1 - f0)/(x1 - x0) = f0^[1]
  At x = x2: Pn(x2) = a0 + a1(x2 - x0) + a2(x2 - x0)(x2 - x1) = f2

  => a2 = [ f2 - f0 - (x2 - x0)(f1 - f0)/(x1 - x0) ] / [ (x2 - x0)(x2 - x1) ]

  Since f2 - f0 = (f2 - f1) + (f1 - f0), the numerator rearranges to
  (f2 - f1) - (x2 - x1)(f1 - f0)/(x1 - x0), and therefore

  => a2 = [ (f2 - f1)/(x2 - x1) - (f1 - f0)/(x1 - x0) ] / (x2 - x0) = f0^[2]

  Similarly, an = f0^[n]

  => Pn(x) = f0 + (x - x0) f0^[1] + (x - x0)(x - x1) f0^[2] + ... + (x - x0)(x - x1)···(x - xn-1) f0^[n]

◊ Error: same as in Lagrangian interpolation (by uniqueness).

◊ Note: (x0, x1, x2, ..., xn) can be in any order in the table.
4.3.3 Advantage over Lagrangian interpolation

i. It is easy to improve from Pn(x) to Pn+1(x) by just adding f0^[n+1] (x - x0)(x - x1)···(x - xn) to Pn(x).
   In Lagrangian interpolation, we have to start all over with

   Li(x) = ∏_{j=0, j≠i}^{n+1} (x - xj)/(xi - xj), i = 0, ..., n+1.

ii. f0^[n] is easy to construct (see the table below for an example).

iii. It is much easier to compute Pk(x) at different x's.

iv. Lagrangian interpolation is a valuable theoretical tool.

v. Divided difference is a more practical computational tool.


Newton Interpolation

  pn(x) = a0 + a1(x - x0) + a2(x - x0)(x - x1) + ... + an(x - x0)(x - x1)···(x - xn-1)

  an = { f[xn, ..., x1] - f[xn-1, ..., x0] } / (xn - x0)

  i   xi   yi = f(xi)   First        Second          Third              Fourth
  0   x0   f(x0)        f[x1,x0]     f[x2,x1,x0]     f[x3,x2,x1,x0]     f[x4,x3,x2,x1,x0]
  1   x1   f(x1)        f[x2,x1]     f[x3,x2,x1]     f[x4,x3,x2,x1]     f[x5,x4,x3,x2,x1]
  2   x2   f(x2)        f[x3,x2]     f[x4,x3,x2]     f[x5,x4,x3,x2]
  3   x3   f(x3)        f[x4,x3]     f[x5,x4,x3]
  4   x4   f(x4)        f[x5,x4]
  5   x5   f(x5)

Use the top element of each column (= an) to evaluate the interpolated functional value f(x).
function yint = Newtint(x,y,xx)
% Newtint: Newton interpolating polynomial
% yint = Newtint(x,y,xx): Uses an (n-1)-order Newton interpolating polynomial
%   based on n data points (x, y) to determine a value of the dependent
%   variable (yint) at a given value of the independent variable, xx.
% input:
%   x = independent variable
%   y = dependent variable
%   xx = value of independent variable at which interpolation is calculated
% output:
%   yint = interpolated value of dependent variable

% compute the finite divided differences in the form of a difference table
n = length(x);
if length(y)~=n, error('x and y must be same length'); end
b = zeros(n,n);
% assign dependent variables to the first column of b
b(:,1) = y(:);   % the (:) ensures that y is a column vector
for j = 2:n
    for i = 1:n-j+1
        b(i,j) = (b(i+1,j-1)-b(i,j-1))/(x(i+j-1)-x(i));
    end
end
% use the finite divided differences to interpolate
xt = 1;
yint = b(1,1);
for j = 1:n-1
    xt = xt*(xx-x(j));
    yint = yint + b(1,j+1)*xt;
end
Newton Divided Difference

Given: (xi, f(xi)) below.
Find: i) p2(x=3) & the best p1(x=3); ii) p2(x=1.5) & the best p1(x=1.5)

  i   xi   f(xi)   f[xi+1, xi]           f[xi+2, xi+1, xi]
  1   1    -4      7  ( = 7/(2-1) )      7  ( = (28-7)/(4-1) )
  2   2    3       28 ( = 56/(4-2) )
  3   4    59

Sol: i) p1(3) = -4 + 7(x - 1)|_{x=3} = -4 + 7·2 = 10
  p2(3) = p1(3) + 7(x - 1)(x - 2)|_{x=3} = 10 + 7·2·1 = 24

For linear interpolation, only 2 points are needed.
In order to avoid extrapolation, we use x2 & x3 to obtain the BEST interpolation for p1(x=3):

  p1(3) = 3 + 28(x - 2)|_{x=3} = 3 + 28·1 = 31

ii) At x = 1.5, we first compute p1(x=1.5):

  p1(1.5) = -4 + 7(1.5 - 1) = -0.5;  p2(1.5) = p1(1.5) + 7(1.5 - 1)(1.5 - 2) = -0.5 - 1.75 = -2.25

Improvement of P2 over P1 is easily accomplished!
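The divided-difference table and the nested evaluation above can be sketched in Python (helper names are illustrative):

```python
def divided_differences(xs, ys):
    """Return the top row of the divided-difference table: f0, f0^[1], ..., f0^[n]."""
    n = len(xs)
    col = list(ys)
    coeffs = [col[0]]
    for k in range(1, n):
        # each pass replaces the column with the next-order divided differences
        col = [(col[i + 1] - col[i]) / (xs[i + k] - xs[i]) for i in range(n - k)]
        coeffs.append(col[0])
    return coeffs

def newton_eval(x, xs, coeffs):
    """Evaluate Pn(x) = a0 + a1 (x-x0) + a2 (x-x0)(x-x1) + ..."""
    total, prod = 0.0, 1.0
    for k, a in enumerate(coeffs):
        total += a * prod
        prod *= (x - xs[k])
    return total

xs, ys = [1.0, 2.0, 4.0], [-4.0, 3.0, 59.0]
a = divided_differences(xs, ys)        # [-4, 7, 7] as in the table
p2_at_3  = newton_eval(3.0, xs, a)     # 24
p2_at_15 = newton_eval(1.5, xs, a)     # -2.25
```

Adding a fourth data point would only append one coefficient and one product term, which is exactly the advantage over the Lagrange form.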


4.3.4 Interpretation of interpolation based on divided difference

* Note, En(t) ≡ f(t) - Pn(t) = (t - x0)(t - x1)···(t - xn) f^(n+1)(ξ)/(n+1)!   (*)

* Another way to express En(x) in terms of a divided difference instead of f^(n+1)(ξ):

  • Consider the data of f(x) at (x0, x1, x2, ..., xn, t),
    where f(xi) = fi (i = 0, 1, ..., n) and f(t) = f(x=t).
  • Construct Pn(x) using the data at (x0, x1, x2, ..., xn):

    Pn(x) = f0 + (x - x0) f0^[1] + (x - x0)(x - x1) f0^[2] + ... + (x - x0)(x - x1)···(x - xn-1) f0^[n]

  • Substitute x = t with Pn+1(t) = f(t) into the above =>

    f(t) - Pn(t) = (t - x0)(t - x1)···(t - xn) f[x0, x1, x2, ..., xn, t]   (**)

    which is the error En(t) = f(t) - Pn(t) given by Eq. (*).

  • Comparing (*) with (**) =>

    f[x0, x1, x2, ..., xn, t] = f^(n+1)(ξ)/(n+1)!  with ξ in (x0, x1, x2, ..., xn, t).

=> The divided difference form looks like a truncated Taylor series.

4.4 Finite Difference

4.4.1 Forward difference and divided difference

◊ If x1 - x0 = x2 - x1 = x3 - x2 = ... = xn - xn-1 = h ---- equidistant!
  we can use the "forward difference", Δf, without division:

  Δf0 = f1 - f0  &  Δfi = fi+1 - fi

◊ Note: f0^[1] = (f1 - f0)/(x1 - x0) = Δf0/h  =>  f1 - f0 = h f0^[1]

  f0^[2] = [ (f2 - f1)/(x2 - x1) - (f1 - f0)/(x1 - x0) ] / (x2 - x0)
         = (Δf1/h - Δf0/h)/(2h) = (Δf1 - Δf0)/(2h²) = (f2 - 2f1 + f0)/(2h²)

  f0^[3] = (f1^[2] - f0^[2])/(x3 - x0) = (f1^[2] - f0^[2])/(3h)
         = [1/(3h)]·[1/(2h²)]·[(f3 - 2f2 + f1) - (f2 - 2f1 + f0)] = [1/(3! h³)](f3 - 3f2 + 3f1 - f0)

◊ Denote:

  Δf0 = f1 - f0 = h f0^[1], ...,  Δfi = fi+1 - fi = h fi^[1]
  Δ²f0 ≡ Δf1 - Δf0 = f2 - 2f1 + f0 = 2h² f0^[2], ...,  Δ²fi ≡ Δfi+1 - Δfi = fi+2 - 2fi+1 + fi = 2h² fi^[2]
  Δ³f0 ≡ Δ²f1 - Δ²f0 = ... = 3! h³ f0^[3], ...,  Δᵏfi = Δᵏ⁻¹fi+1 - Δᵏ⁻¹fi = ... = k! hᵏ fi^[k]

  =>  fi^[k] = Δᵏfi / (k! hᵏ)

(Figure: equally spaced nodes x0, x1, x2, ..., xn-1, xn with spacing h under the curve y = f(x).)
4.4.2 Interpolation using forward difference

◊ Define s = (x - x0)/h  =>  x - xk = (x - x0) + (x0 - xk) = sh - kh = (s - k)h

  Hence Pn(x) = f0 + (x - x0) f0^[1] + (x - x0)(x - x1) f0^[2] + ... + (x - x0)(x - x1)···(x - xn-1) f0^[n]
             = f0 + s Δf0 + [s(s-1)/2] Δ²f0 + ... + [s(s-1)···(s-n+1)/n!] Δⁿf0

  Note: s(s-1)···(s-k+1)/k! = C(s, k) (combinatoric notation)

  so that Pn(x) = Σ_{k=0}^{n} C(s, k) Δᵏf0 = Newton-Gregory forward polynomial.
Example:

Consider the evenly spaced data (xi, fi) given below (f(x) = x³).

  i   xi   fi   Δf0   Δ²f0   Δ³f0
  0   1    1    7     12     6
  1   2    8    19    18
  2   3    27   37
  3   4    64

Find: P2(1.8), P3(1.8).

Solution:
* Note: h = 1; the values of Δf0, Δ²f0, Δ³f0 are listed in the table.
* For x = 1.8, we use starting point x0 = 1 for both P2 & P3;
  for P2, if we use x1 = 2 as starting point => extrapolation;
  for P3, there is no other choice since only 4 data points are given.
  => s = (x - x0)/h = (1.8 - 1)/1 = 0.8
* P2(x) = 1 + 7s + (12/2) s(s-1) = 1 + 7·0.8 + 6·0.8·(-0.2) = 5.64
  (error = 1.8³ - 5.64 = 0.192)
* P3(x) = P2(x) + (x - x0)(x - x1)(x - x2) f0^[3] = P2(x) + C(s, 3) Δ³f0
        = 5.64 + (6/3!) s(s-1)(s-2) = 5.64 + 0.192 = 5.832 (exact)
  (note: P2(1.8) is used to compute P3(1.8))
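The Newton-Gregory evaluation can be sketched in Python; the generalized binomial C(s, k) is built up with the recurrence C(s, k+1) = C(s, k)(s - k)/(k + 1):

```python
def forward_differences(f_vals):
    """Top of each difference column: [f0, Δf0, Δ²f0, ...]."""
    tops, col = [f_vals[0]], list(f_vals)
    while len(col) > 1:
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
        tops.append(col[0])
    return tops

def newton_gregory(x, x0, h, tops, order):
    """Pn(x) = sum_k C(s, k) Δ^k f0 with s = (x - x0)/h."""
    s = (x - x0) / h
    total, binom = 0.0, 1.0            # C(s, 0) = 1
    for k in range(order + 1):
        total += binom * tops[k]
        binom *= (s - k) / (k + 1)     # C(s, k+1) = C(s, k) (s - k)/(k + 1)
    return total

f_vals = [1.0, 8.0, 27.0, 64.0]        # f(x) = x^3 at x = 1, 2, 3, 4
tops = forward_differences(f_vals)     # [1, 7, 12, 6] as in the table
p2 = newton_gregory(1.8, 1.0, 1.0, tops, 2)   # 5.64
p3 = newton_gregory(1.8, 1.0, 1.0, tops, 3)   # 5.832, exact for a cubic
```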
Example: (SKIP)

For the data (xi, fi) given below (f(x) = x³) with h = 0.5:

  i   xi    fi       Δf0      Δ²f0   Δ³f0
  0   1.5   3.375    4.625    3      0.75
  1   2     8        7.625    3.75
  2   2.5   15.625   11.375
  3   3     27

Find: P2(1.8) and P3(1.8)

Solution:
* The values of Δf0, Δ²f0, Δ³f0 are listed in the table.
* For x = 1.8 we use starting point x0 = 1.5 for both P2 & P3.
  s = (x - 1.5)/h = 0.3/0.5 = 0.6
* P2(x) = 3.375 + 4.625 s + (3/2) s(s-1)
        = 3.375 + 4.625·0.6 + 1.5·0.6·(-0.4) = 5.79
  (Error = 5.832 - 5.79 = 0.042; much smaller for h = 0.5 than 0.192 for h = 1.0)
* P3(x) = P2(x) + C(s, 3) Δ³f0 = 5.79 + (0.75/3!) s(s-1)(s-2)
        = 5.79 + 0.125·0.6·(0.6-1)(0.6-2) = 5.832
4.5 Detecting error/noise in data using finite difference

* Observation on finite difference:

  Δⁿf0 = n! hⁿ f[x0, x1, x2, ..., xn] = n! hⁿ · f^(n)(ξ)/n! = hⁿ f^(n)(ξ)
  for some ξ in (x0, x1, x2, ..., xn).

* If f^(n)(x) is bounded & h < 1, |Δⁿf0| decreases with increasing n.

* Consider error e(xi) in the data fi = f(xi) for i = 0, 1, ..., n,
  i.e. fT(xi) = fA(xi) + e(xi)
  => ΔⁿfT0 = ΔⁿfA0 + Δⁿe0.
  Expect ΔⁿfT0 to be smooth.
  How about ΔⁿfA0? That depends on Δⁿe0.

* Consider e(xi) = ε if i = k; 0 if i ≠ k.
Error detection using finite difference

* Consider e(xi) = ε if i = k; 0 if i ≠ k.
  The forward-difference table of ei (each difference listed on its starting row):

  xi     ei   Δei   Δ²ei   Δ³ei
  ...    ...  ...   ...    ...
  xk-3   0    0     0      ε
  xk-2   0    0     ε      -3ε
  xk-1   0    ε     -2ε    3ε
  xk     ε    -ε    ε      -ε
  xk+1   0    0     0      0
  xk+2   0    0     0
  xk+3   0    0
  ...    ...

* The maximum error in the column Δⁿei increases with n as (1/2) n(n-1) ε.

We can use this behavior to extract an isolated error e(xk) from the original data and make a correction.
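The alternating pattern above can be reproduced with repeated forward differences; a minimal Python sketch (NumPy assumed):

```python
import numpy as np

eps = 1.0
e = np.zeros(9)
e[4] = eps                      # isolated error at the node x_k

d3 = np.diff(e, n=3)            # third forward differences of the error alone
# The single spike spreads into the alternating pattern ε, -3ε, 3ε, -ε,
# whose maximum magnitude 3ε matches (1/2)·n·(n-1)·ε for n = 3.
```

Smooth data contributes almost nothing to high-order differences, so this growing, sign-alternating signature is what flags an isolated bad point.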
Example:

Consider the following discrete data f̃i on an equal interval which contain an isolated error.

  xi     f̃i       Δf̃i      Δ²f̃i      Δ³f̃i      Error guess
  xk-3   0.10396  0.01700  -0.00014   -0.00003    0
  xk-2   0.12096  0.01686  -0.00017   -0.00002    0
  xk-1   0.13782  0.01669  -0.00019    0.00006    ε
  xk     0.15451  0.01650  -0.00013   -0.00025   -3ε
  xk+1   0.17101  0.01637  -0.00038    0.00021    3ε
  xk+2   0.18738  0.01599  -0.00017   -0.00010   -ε
  xk+3   0.20337  0.01582  -0.00027
  xk+4   0.21919  0.01555
  xk+5   0.23474

Solution: Let fi = f̃i + ei.
The kth order forward difference is Δᵏfi = Δᵏf̃i + Δᵏei  =>  Δᵏei = Δᵏfi - Δᵏf̃i.
Expect Δᵏfi to be small for smooth f(x), so that Δᵏei ≈ -Δᵏf̃i.
As a first estimate, we try -3ε ~ -0.00025  =>  ε ~ 0.00008.
If we try 3ε ~ 0.00021  =>  ε ~ 0.00007.
=> The various estimates for the order of magnitude of ε are close.
* Note: Δ³fi will also contribute to Δ³f̃i besides Δ³ei.
Example (contd)

(Figure: plot of Δ³f̃i vs. i, ranging from about -3E-04 to 3E-04.)

Comment: errors of this magnitude (0.043%) are impossible to spot on raw graphs.

It is reasonable to assume that Δ³fi ~ -0.00002 based on the 1st two numbers (i = -3 & -2 in the chart) in the Δ³f̃i column.

Hence,  e ~ -0.00006 - 0.00002 = -0.00008,
or     -e ~  0.00010 - 0.00002 =  0.00008,
or    -3e ~  0.00025 - 0.00002  =>  e ~ -0.000077,
or     3e ~ -0.00021 - 0.00002  =>  e ~ -0.000077.

Using e = -0.00008, we trace the error back to xk+2 and correct the data as
fk+2 = f̃k+2 + e = 0.18738 - 0.00008 = 0.18730.

* Results after correction:

  xi     f̃i       Δf̃i      Δ²f̃i      Δ³f̃i
  xk-3   0.10396  0.01700  -0.00014   -0.00003
  xk-2   0.12096  0.01686  -0.00017   -0.00002
  xk-1   0.13782  0.01669  -0.00019   -0.00002
  xk     0.15451  0.01650  -0.00021   -0.00001
  xk+1   0.17101  0.01629  -0.00022   -0.00003
  xk+2   0.18730  0.01607  -0.00025   -0.00002
  xk+3   0.20337  0.01582  -0.00027
  xk+4   0.21919  0.01555
  xk+5   0.23474

The Δ³f̃i column is smooth now.
Lecture 18
Last lecture:
Newton divided difference method
Finite difference
This lecture:

Interpolation Error in General


Runge’s phenomenon
Hermite Interpolation
4.6 Interpolation Error in General

◊ Truncation error

  En(x) = f(x) - Pn(x) = (x - x0)(x - x1)···(x - xn) f^(n+1)(ξ)/(n+1)!

  => |En(x)| ≤ max{ |(x - x0)(x - x1)···(x - xn)| } · C_{n+1}/(n+1)!

  where C_{n+1} = Sup_{ξ∈[x0, x1, ..., xn]} |f^(n+1)(ξ)| is a constant.

Q: How does En(x) depend on x?

Note C_{n+1}/(n+1)! is independent of x.

Let ψn(x) = (x - x0)(x - x1)···(x - xn).

We consider the equal interval case, h = x1 - x0 = ... = xn - xn-1.

* n=1: ψ1(x) = (x - x0)(x - x1)
  max|ψ1(x)| = h²/4 is reached when x - x0 = h/2 (middle).

* n=2: ψ2(x) = (x - x0)(x - x1)(x - x2)
  max|ψ2(x)| = 0.385 h³ at x - x1 = ±h/√3.

* n=3: ψ3(x) = (x - x0)(x - x1)(x - x2)(x - x3)
  max|ψ3(x)| = 0.5625 h⁴ for x1 < x < x2;
  max|ψ3(x)| = h⁴ for x0 < x < x1 or x2 < x < x3.
  The largest error occurs when x is near the far ends of the interval.
  Choose x in the middle to reduce the interpolation error.

* n=6: max|ψ6(x)| = 12.36 h⁷ for x2 < x < x4 (middle),
  but max|ψ6(x)| = 95.8 h⁷ for x0 < x < x6 (whole interval).

(Figures: plots of ψ2, ψ3, ψ4, ψ5, ψ6 over their node ranges.)

◊ QUESTION: Does increasing n help to reduce the error En(x)?
Answer: it does in many cases. In some cases, it does not: Runge's phenomenon.
Runge's phenomenon

* Consider the following data generated by f(x) = (1 + x²)⁻⁵ for -2 ≤ x ≤ 2:

  x   -2        -1.5       -1        -0.5      0   ...   2
  f   0.00032   0.002758   0.03125   0.32768   1   ...   0.00032

Polynomials of nth degree (n = 2, 4, 8, & 16) fitting the given data in -2 ≤ x ≤ 2 are shown in the figures.

(Figures: P2(x), P4(x), P8(x), P16(x) vs. f(x); P16 oscillates violently near the end points, dipping to about -8.)

=> Runge's phenomenon: the error increases as n increases.

* The same problem appears if f(x) = (1 + x⁸)⁻¹, cos(10x), ..., in the same interval.
* The interpolation error tends toward infinity when the degree of the polynomial increases:

  lim_{n→∞} max_{-2<x<2} |f(x) - pn(x)| = ∞

  => high degree polynomials are generally unsuitable for interpolation.
How to solve the problem of Runge's phenomenon

• The oscillation can be minimized by using Chebyshev nodes instead of equidistant nodes. In this case the maximum error is guaranteed to diminish with increasing polynomial order.

• The problem can also be avoided by using piecewise continuous polynomial interpolations.
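Both claims can be illustrated numerically; a Python sketch comparing equidistant and Chebyshev nodes for the f(x) = (1 + x²)⁻⁵ example (a plain Lagrange evaluator is used for clarity; barycentric forms would be preferred in practice):

```python
import numpy as np

def interp_max_err(nodes, f, grid):
    """Max |f(x) - p(x)| over grid, where p is the Lagrange interpolant on nodes."""
    err = 0.0
    for x in grid:
        p = 0.0
        for i, xi in enumerate(nodes):
            Li = 1.0
            for j, xj in enumerate(nodes):
                if j != i:
                    Li *= (x - xj) / (xi - xj)
            p += f(xi) * Li
        err = max(err, abs(f(x) - p))
    return err

f = lambda x: (1.0 + x * x) ** -5
grid = np.linspace(-2.0, 2.0, 401)

equi4  = interp_max_err(np.linspace(-2.0, 2.0, 5),  f, grid)   # n = 4
equi16 = interp_max_err(np.linspace(-2.0, 2.0, 17), f, grid)   # n = 16: worse!
# Chebyshev nodes of the first kind, mapped to [-2, 2]
cheb16 = interp_max_err(2.0 * np.cos((2 * np.arange(17) + 1) * np.pi / 34), f, grid)
```

With equidistant nodes the maximum error grows as the degree is raised; the same degree on Chebyshev nodes is far more accurate.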


4.7 Hermite Interpolation

4.7.1 Simple illustration using two points (x1, x2)

Given: y1 & y1' at x = x1, and y2 & y2' at x = x2.
Goal: find the interpolating curve that will match the slopes (y1', y2') at the ends of the interval in addition to the values (y1, y2).

• P1: the starting value, y1, of the curve
• T1: the tangent, y1' (e.g. direction and speed), to tell us how the curve enters the starting point into the interval
• P2: the ending value, y2, of the curve
• T2: the tangent, y2', to tell us how the curve meets the ending point

=> Similar to the development of the Lagrangian polynomials Li(x).

◊ Consider the following 4 cubic functions h1(x), h2(x), h3(x), & h4(x) on [x1, x2], defined by:

  h1(x1) = 1, h1(x2) = 0;  h1'(x1) = 0, h1'(x2) = 0.
  h2(x1) = 0, h2(x2) = 1;  h2'(x1) = 0, h2'(x2) = 0.
  h3(x1) = 0, h3(x2) = 0;  h3'(x1) = 1, h3'(x2) = 0.
  h4(x1) = 0, h4(x2) = 0;  h4'(x1) = 0, h4'(x2) = 1.

(Figures: plots of h1, h2, h3, h4 on [x1, x2].)

Thus, to obtain the desired fit, we can simply superimpose:

  "y1·h1 + y2·h2 + y1'·h3 + y2'·h4"

◊ What about n points?
4.7.2 Hermite interpolation using n points

◊ Now given (y1, y2, ..., yn) & (y1', y2', ..., yn') at (x1, x2, ..., xn).
  We want to find a polynomial P(x) so that
  P(xi) = yi and P'(xi) = yi' for i = 1, 2, ..., n.

◊ Given 2n conditions => P(x) is a polynomial of degree (2n-1).

◊ Consider Hn(x) in the form

  Hn(x) = Σ_{i=1}^{n} yi hi(x) + Σ_{i=1}^{n} yi' h̃i(x)

◊ How to determine hi(x) & h̃i(x)? We must require:
  • hi(x) & h̃i(x) be polynomials of degree (2n-1);
  • hi(xk) = δik, h̃i(xk) = 0, so that Hn(xk) = yk;
  • hi'(xk) = 0, h̃i'(xk) = δik, so that Hn'(xk) = yk'.

4.7.2 Hermite interpolation using n points (contd)

Recall the Lagrangian polynomials

  Li(x) = ∏_{j=1, j≠i}^{n} (x - xj)/(xi - xj), i = 1, 2, ..., n.

* Choose h̃i(x) = c Li²(x)(x - xi)  (degree 2n-1),
  so that h̃i(xk) = 0 (because Li(xk) = δik),
  & h̃i'(xk) = c Li²(xk) + c(xk - xi)·2 Li(xk) Li'(xk) = c δik + 0.
  In order for h̃i'(xk) = δik, we simply set c = 1.

* Now choose hi(x) = Li²(x)(ax + b).
  We need: hi(xk) = δik  =>  a xk + b = 1 (for k = i)
  & hi'(xk) = 0  =>  a Li²(xk) + 2 Li(xk) Li'(xk)(a xk + b) = 0.
  The above is automatically satisfied for i ≠ k, since Li(xk) = δik = 0.
  For i = k, Li(xk) = 1, and we need a + 2 Li'(xi)·1 = 0
  =>  a = -2 Li'(xi)  &  b = 1 + 2 xi Li'(xi)
  =>  ax + b = 1 - 2(x - xi) Li'(xi).

◊ Finally,

  Hn(x) = Σ_{i=1}^{n} yi Li²(x)[1 - 2(x - xi) Li'(xi)] + Σ_{i=1}^{n} yi' Li²(x)(x - xi)

◊ Uniqueness of Hn(x):
  Let Gn(x) be another such polynomial of degree (2n-1).
  Then R(x) = Hn(x) - Gn(x) is a polynomial of degree (2n-1).
  Since R(xi) = R'(xi) = 0 for i = 1, 2, ..., n,
  R(x) = A(x - x1)²(x - x2)²···(x - xn)² = A ψn²(x)
  => R(x) would have 2n roots, more than its degree (2n-1) allows
  => R(x) ≡ 0  =>  Gn(x) = Hn(x)
Example:

Find the desired fit for n = 2 using

  Hn(x) = Σ_{i=1}^{n} yi Li²(x)[1 - 2(x - xi) Li'(xi)] + Σ_{i=1}^{n} yi' Li²(x)(x - xi)

Solution:

i) Let x1 = a, x2 = b. Then L1(x) = (x - b)/(a - b) = (b - x)/(b - a), L2(x) = (x - a)/(b - a), and

  H2(x) = [1 + 2(x - a)/(b - a)] [(b - x)/(b - a)]² f(a) + [1 + 2(b - x)/(b - a)] [(x - a)/(b - a)]² f(b)
        + [(x - a)(b - x)²/(b - a)²] f'(a) + [(x - a)²(x - b)/(b - a)²] f'(b)

ii) For a = 0, b = 1, f(a) = 2, f(b) = 3, f'(a) = -2, f'(b) = -3:

  H2(x) = 2(1 + 2x)(1 - x)² + 3[1 + 2(1 - x)]x² - 2x(1 - x)² - 3x²(x - 1)

(Figure: H2(x) and the desired fit on [0, 1].)
4.7.3 More computable form of Hermite interpolation

* Lagrangian interpolation is not easy to compute compared with Newton's divided difference.

* Hermite interpolation in the form

  Hn(x) = Σ_{i=1}^{n} yi Li²(x)[1 - 2(x - xi) Li'(xi)] + Σ_{i=1}^{n} yi' Li²(x)(x - xi)

  is not easy to compute practically.

* Can we convert it to a form similar to the divided difference?

* Let us consider having 2n nodes: z1, z2, ..., z2n.

* Newton's divided difference form gives

  P2n-1(x) = f(z1) + (x - z1) f[z1, z2] + (x - z1)(x - z2) f[z1, z2, z3] + ... + (x - z1)···(x - z2n-1) f[z1, z2, ..., z2n]

  => E(x) = f(x) - P2n-1(x) = (x - z1)···(x - z2n) f[z1, z2, ..., z2n, x]

* Now set z1 = z2 = x1, z3 = z4 = x2, ..., z2n-1 = z2n = xn.

  Note: f[x1, x1] = lim_{ξ→x1} [f(x1) - f(ξ)]/(x1 - ξ) = f'(x1) is well defined/given.
More computable form of Hermite interpolation (contd)

* Thus

  P2n-1(x) = f(x1) + (x - x1) f'(x1) + (x - x1)² f[x1, x1, x2] + (x - x1)²(x - x2) f[x1, x1, x2, x2]
           + (x - x1)²(x - x2)² f[x1, x1, x2, x2, x3] + ... + (x - x1)²···(x - xn-1)²(x - xn) f[x1, x1, x2, x2, ..., xn, xn]

* In the above, f[x1, x1, x2] can be computed as

  f[x1, x1, x2] = [ (f2 - f1)/(x2 - x1) - lim_{ξ→x1} (f(x1) - f(ξ))/(x1 - ξ) ] / (x2 - x1)
                = { f[x1, x2] - f'(x1) } / (x2 - x1)

& f[x1, x1, x2, x2] = { f[x1, x2, x2] - f[x1, x1, x2] } / (x2 - x1),

  where f[x1, x2, x2] = { f'(x2) - f[x1, x2] } / (x2 - x1).


Lecture 19
Last lecture:
Interpolation Error in General
Runge’s phenomenon
Hermite Interpolation
This lecture:

Cubic spline
Example of computable form of Hermite interpolation

Given: a = 0, b = 1, f(a) = 2, f(b) = 3, f'(a) = -2, f'(b) = -3; use divided differences to find H2(x).

Solution: In order to evaluate H2(x), various divided differences are needed:

  f[x1, x2] = (3 - 2)/(1 - 0) = 1
  f[x1, x1, x2] = { f[x1, x2] - f'(x1) }/(x2 - x1) = (1 - (-2))/(1 - 0) = 3
  f[x1, x2, x2] = { f'(x2) - f[x1, x2] }/(x2 - x1) = (-3 - 1)/(1 - 0) = -4
  f[x1, x1, x2, x2] = { f[x1, x2, x2] - f[x1, x1, x2] }/(x2 - x1) = (-4 - 3)/(1 - 0) = -7

  x   f   f'   f[x1,x2]   f[x1,x1,x2]   f[x1,x2,x2]   f[x1,x1,x2,x2]
  0   2   -2   1          3             -4            -7
  1   3   -3

Thus, with P2n-1(x) = f(x1) + (x - x1) f'(x1) + (x - x1)² f[x1, x1, x2] + (x - x1)²(x - x2) f[x1, x1, x2, x2],

  H2(x) = 2 - 2x + 3x² - 7x²(x - 1),

which is much simpler than the 4-term expression

  H2(x) = 2(1 + 2x)(1 - x)² + 3[1 + 2(1 - x)]x² - 2x(1 - x)² - 3x²(x - 1).
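The divided-difference construction on doubled nodes can be sketched in Python (helper names are illustrative; a repeated-node first difference is replaced by the given derivative):

```python
def hermite_newton_coeffs(xs, ys, dys):
    """Top-row divided differences on doubled nodes z = [x1, x1, x2, x2, ...],
    using f[xi, xi] = f'(xi) for the repeated-node first differences."""
    z = [x for x in xs for _ in (0, 1)]
    col = [y for y in ys for _ in (0, 1)]
    coeffs = [col[0]]
    m = len(z)
    for k in range(1, m):
        new = []
        for i in range(m - k):
            if z[i + k] == z[i]:                  # repeated node (k = 1 only here)
                new.append(dys[xs.index(z[i])])
            else:
                new.append((col[i + 1] - col[i]) / (z[i + k] - z[i]))
        col = new
        coeffs.append(col[0])
    return z, coeffs

def newton_eval(x, z, coeffs):
    total, prod = 0.0, 1.0
    for k, a in enumerate(coeffs):
        total += a * prod
        prod *= (x - z[k])
    return total

# a = 0, b = 1, f(a) = 2, f(b) = 3, f'(a) = -2, f'(b) = -3
z, c = hermite_newton_coeffs([0.0, 1.0], [2.0, 3.0], [-2.0, -3.0])
# c = [2, -2, 3, -7]  =>  H2(x) = 2 - 2x + 3x^2 - 7x^2(x - 1)
H2_half = newton_eval(0.5, z, c)
```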
Error in Hermite interpolation:

* Again, consider having 2n nodes: z1, z2, ..., z2n = x1, x1, x2, x2, ..., xn, xn.

* We can use the error formula derived earlier:

  E(x) = (x - x1)²(x - x2)²···(x - xn)² f[x1, x1, x2, x2, ..., xn, xn, x]
       = (x - x1)²(x - x2)²···(x - xn)² f^(2n)(ξ)/(2n)!
       = ψn²(x) f^(2n)(ξ)/(2n)!

◊ Comparison between cubic H- and cubic L-interpolations

* Hermite H2(x): (x1, x2), (y1, y2), and (y1', y2') are given.

  Error: E(x) = [f^(4)(ξ)/4!] (x - x1)²(x - x2)² = [f^(4)(ξ)/4!] ψ2²(x)

  Max{ψ2²(x)} = ψ2²(x = (x1 + x2)/2) = h⁴/16

* Comparing with the error in Lagrange interpolation:

* Lagrangian L3(x): (x0, x1, x2, x3) and (y0, y1, y2, y3) are given.

  Error: E(x) = [f^(4)(ξ)/4!] (x - x0)(x - x1)(x - x2)(x - x3) = [f^(4)(ξ)/4!] ψ4(x)

  Max|ψ4(x)| = |ψ4(x = (x1 + x2)/2 ± 1.118h)| = h⁴ for x0 ≤ x ≤ x3;
  Max|ψ4(x)| = (9/16) h⁴ for x1 ≤ x ≤ x2.

* Even in the middle interval this is at least 9 times larger than the corresponding factor h⁴/16 in the error of H2(x)!

(Figure: ψ2²(x) and ψ4(x) over [x0, x3].)
4.8 Cubic Spline

* Applications: airplane, automobile, ship building, ...
* Traditional practice: use a drafting spline (made of plastic, bamboo, or wood) to draw smooth curves; it has zero curvatures at its end points.

4.8.1 Fitting with a cubic polynomial in ONE interval

(Figure: adjacent cubic pieces gi-1(x), gi(x), gi+1(x) over the left, mid, and right intervals, with nodes xi-1, xi, xi+1, xi+2 and data yi-1, yi, yi+1, yi+2.)

* Fit y = f(x) in (xi, xi+1) using a cubic polynomial gi(x) as

  gi(x) = di + ci(x - xi) + bi(x - xi)² + ai(x - xi)³   (1)

Our task: determine FOUR coefficients (ai, bi, ci, di) for each interval.
4.8 Cubic Spline (contd)

  gi(x) = di + ci(x - xi) + bi(x - xi)² + ai(x - xi)³   (1)

* Two obvious constraints are:

  gi(xi) = yi  &  gi(xi+1) = yi+1   (2)

  => di = yi   (3)
  &  ci hi + bi hi² + ai hi³ = yi+1 - yi   (4)
  with hi = xi+1 - xi.   (5)

=> Need two more conditions.

• These two extra conditions will be obtained by imposing smoothness conditions at the joint xi between the left interval and the mid interval.
4.8.2 Smoothness conditions at joints

* Now we pose "smoothness conditions" between adjacent intervals,
  i.e. the 1st & 2nd derivatives at xi evaluated from the LEFT interval = those at xi from the MID interval:

  gi-1'(xi) = gi'(xi)  &  gi-1"(xi) = gi"(xi)   at x = xi   (6a)
  gi'(xi+1) = gi+1'(xi+1)  &  gi"(xi+1) = gi+1"(xi+1)   at x = xi+1   (6b)

* Differentiate gi(x) to get the 1st order derivative:

  gi'(x) = ci + 2bi(x - xi) + 3ai(x - xi)²

  x = xi    =>  gi'(xi) = ci   (7)
  x = xi+1  =>  gi'(xi+1) = ci + 2bi hi + 3ai hi²   (8)
4.8.2 Smoothness conditions at joints (contd)

* For the 2nd order derivative, let

  Si   ≡ gi"(xi)   = 2bi   (10a)
  Si+1 ≡ gi"(xi+1) = 2bi + 6ai hi   (10b)

* Express (ai, bi) from Eq. (10) in terms of (Si, Si+1):

  bi = Si/2,  ai = (Si+1 - Si)/(6hi)   (11)

* Solve for ci using (ai, bi, di) and Eq. (4):

  gi'(xi) = ci = (yi+1 - yi)/hi - (2hi Si + hi Si+1)/6   (12)

* This gives yi'(xi) = ci (see (7)) based on the right interval (xi, xi+1).
* What about yi'(xi) from the left interval (xi-1, xi)? We have

  yi'(xi) = gi-1'(xi) = ci-1 + 2bi-1 hi-1 + 3ai-1 hi-1²   (13)

* Repeat the same procedure from Eqns. (10)-(12) to obtain (ai-1, bi-1, ci-1) based on (Si-1, Si, yi-1, yi).
* Repeating the procedure of Eqns. (10-12) for interval i-1 gives (ai-1, bi-1, ci-1) in terms of (Si-1, Si, yi-1, yi):

  bi-1 = Si-1/2,   ai-1 = (Si - Si-1)/(6hi-1)               (14)

  ci-1 = (yi - yi-1)/hi-1 - hi-1 (2Si-1 + Si)/6             (15)

=> gi-1'(xi) = (yi - yi-1)/hi-1 - hi-1 (2Si-1 + Si)/6 + hi-1 Si-1 + hi-1 (Si - Si-1)/2

=> gi-1'(xi) = (yi - yi-1)/hi-1 + hi-1 (Si-1 + 2Si)/6       (16)

  Recall  gi'(xi) = ci = (yi+1 - yi)/hi - hi (2Si + Si+1)/6     (12)

* Smoothness at xi requires gi-1'(xi) = gi'(xi) =>

  hi-1 Si-1 + 2(hi-1 + hi) Si + hi Si+1 = 6( (yi+1 - yi)/hi - (yi - yi-1)/hi-1 )
                                        = 6( f[xi+1, xi] - f[xi, xi-1] )      (17)
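Equation (17) can be spot-checked numerically: if the data come from a single cubic, say y = x^3, then Si = y"(xi) = 6xi must satisfy (17) at every internal node, even for uneven spacing. A small pure-Python check (the node locations are arbitrary assumed values):

```python
def dd(xa, ya, xb, yb):
    """First-order divided difference f[xa, xb]."""
    return (yb - ya) / (xb - xa)

x = [0.0, 0.7, 1.1, 2.0, 3.2]        # uneven nodes (assumed example values)
y = [xi**3 for xi in x]
S = [6.0 * xi for xi in x]           # exact y'' for y = x^3

residuals = []
for i in range(1, len(x) - 1):
    hm, hp = x[i] - x[i-1], x[i+1] - x[i]      # h_{i-1}, h_i
    lhs = hm*S[i-1] + 2.0*(hm + hp)*S[i] + hp*S[i+1]
    rhs = 6.0 * (dd(x[i], y[i], x[i+1], y[i+1]) - dd(x[i-1], y[i-1], x[i], y[i]))
    residuals.append(abs(lhs - rhs))

assert max(residuals) < 1e-9         # Eq. (17) holds at every internal node
```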


* Simplification: hi = h = constant (as in many applications). Then Eq. (17) reduces to

  Si-1 + 4 Si + Si+1 = (6/h^2)(yi+1 - 2yi + yi-1),   i = 1, 2, ..., n-1

  i.e. a tridiagonal system with rows [1 4 1] acting on (S1, S2, ..., Sn-1).

* Each equation for Si depends on Si-1 & Si+1.
* Hence a set of coupled equations for yi" = Si must be solved.
  Total number of unknowns involved: (S0, S1, S2, ..., Sn)  =>  n+1
  Total number of equations from (17) = n-1,
  since there are (n-1) internal nodes: (x1, x2, ..., xn-1).

=> Need TWO additional conditions to close the system.
4.8.3 End conditions

* There are several possibilities for providing TWO end conditions:

i. If we know the values of S0 & Sn, use the given values.

ii. Take S0 = Sn = 0 => ends of the curve (at x0 & xn) are straight
    → natural spline;
    then solve (n-1) Eqns. for (S1, S2, ..., Sn-1).

iii. Take S0 = S1 & Sn = Sn-1 => a0 = an-1 = 0
    (i.e. the 1st & last intervals are fit by parabolas, so that y"' = 0 there).
    Setting S0 = S1 & Sn = Sn-1 in (17), the 1st & last Eqns. become
      (3h0 + 2h1) S1 + h1 S2 = 6(f[x2, x1] - f[x1, x0])                        (18)
      hn-2 Sn-2 + (2hn-2 + 3hn-1) Sn-1 = 6(f[xn, xn-1] - f[xn-1, xn-2])        (19)
    The number of unknowns is (n-1) now.

iv. "Not-a-knot" (extrapolation) condition: assume that y"' is continuous
    at x1 (and at xn-1); details below.
Lecture 20
Last lecture:
  Hermite interpolation
  Cubic spline
This lecture:
  Least Square Method for Regression
  Function approximation
Summary from last lecture based on smoothness conditions at joints

  gi(x) = di + ci (x - xi) + bi (x - xi)^2 + ai (x - xi)^3     (1)

  => di = yi                                                   (3)
  &  ci hi + bi hi^2 + ai hi^3 = yi+1 - yi                     (4)

  Let Si ≡ gi"(xi) = 2bi                                       (10a)
  => bi = Si/2,   ai = (Si+1 - Si)/(6hi)                       (11)
     ci = (yi+1 - yi)/hi - hi (2Si + Si+1)/6                   (12)

  hi-1 Si-1 + 2(hi-1 + hi) Si + hi Si+1
      = 6( (yi+1 - yi)/hi - (yi - yi-1)/hi-1 )
      = 6( f[xi+1, xi] - f[xi, xi-1] )                         (17)

  valid at the (n-1) internal nodes (x1, x2, ..., xn-1)  =>  n-1 eqns.,
  involving (n+1) unknowns: (S0, S1, S2, ..., Sn)
  => Need TWO additional conditions to close the system:
     i.   S0 & Sn are given
     ii.  Take S0 = Sn = 0
     iii. Take S0 = S1 & Sn = Sn-1
     iv.  "Not-a-knot"
4.8.3 End conditions (continued)

iv. "Not-a-knot" (extrapolation) condition, for
    gi(x) = di + ci (x - xi) + bi (x - xi)^2 + ai (x - xi)^3, i = 0, 1, ..., n-1.

* Assume that y"' is continuous at x1; it implies that

  y"'(x1) = g0"'(x1) = 6a0 = g1"'(x1) = 6a1
  => 6a0 = 6a1  =>  (S1 - S0)/h0 = (S2 - S1)/h1

  Express S0 in terms of S1 & S2 =>

  S0 = (h0/h1 + 1) S1 - (h0/h1) S2                             (20)

  (This is a linear extrapolation for S0 given S1 & S2.)

* Similarly, at the other end,

  6an-2 = 6an-1  =>  (Sn-1 - Sn-2)/hn-2 = (Sn - Sn-1)/hn-1
  => Sn = (1 + hn-1/hn-2) Sn-1 - (hn-1/hn-2) Sn-2              (21)

* Now substitute (20) into Eq. (17) at i=1,
    h0 S0 + 2(h0 + h1) S1 + h1 S2 = 6(f1[1] - f0[1]),
  and (21) into Eq. (17) at i=n-1,
    hn-2 Sn-2 + 2(hn-2 + hn-1) Sn-1 + hn-1 Sn = 6(fn-1[1] - fn-2[1]),
  so the first and last equations become

  [(h0 + h1)(h0 + 2h1)/h1] S1 + [(h1^2 - h0^2)/h1] S2 = 6(f1[1] - f0[1])       (22)

  [(hn-2^2 - hn-1^2)/hn-2] Sn-2 + [(hn-1 + hn-2)(hn-1 + 2hn-2)/hn-2] Sn-1
      = 6(fn-1[1] - fn-2[1])                                                   (23)

  (Here fi[1] denotes the first divided difference f[xi, xi+1].)

  => an (n-1) x (n-1) tri-diagonal system.
Example

Reconsider the data generated by f(xi) = (1 + xi^2)^-5 for -2 ≤ xi ≤ 2 with Δxi = hi = 0.5.

Sol.: We consider three types of the "end conditions" (types ii-iv). The resulting Si values are:

  x     Si -- (ii)   Si -- (iii)   Si -- (iv)
  -2      0            0.044086     -0.180246
  -1.5    0.055903     0.044086      0.104217
  -1      0.401686     0.404868      0.388679
  -0.5    4.767861     4.766953      4.771578
   0    -10.45177    -10.45132     -10.45363
   0.5    4.767861     4.766953      4.771578
   1      0.401686     0.404868      0.388679
   1.5    0.055903     0.044086      0.104217
   2      0            0.044086     -0.180246

[Figure: S vs. x for the three end conditions: (ii) natural, (iii) parabolas, (iv) not-a-knot.]
Example (continued)

Sol.: The table and graph show how closely the cubic splines (using the three end conditions) agree with the exact curve:

  x     exact      ii          iii         iv
  -2    0.00032    0.00032     0.00032     0.00032
  -1.9  0.00048    0.00036    -7.4E-05     0.002137
  -1.8  0.00073    0.000513   -2.7E-05     0.002720
  -1.7  0.001123   0.000888    0.00046     0.002639
  -1.6  0.001749   0.001600    0.001389    0.002462
  -1.5  0.002758   0.002758    0.002758    0.002758
  -1.4  0.004401   0.004572    0.004688    0.004096
  -1.3  0.007100   0.007637    0.007781    0.007046
  -1.2  0.011562   0.012644    0.012758    0.012175
  -1.1  0.018969   0.020284    0.020340    0.020054
  -1    0.031250   0.031250    0.031250    0.031250
  -0.9  0.051476   0.047573    0.047542    0.047699
  -0.8  0.084291   0.076645    0.076607    0.076801
  -0.7  0.136166   0.127199    0.127169    0.127321
  -0.6  0.214934   0.207966    0.207952    0.208026
  -0.5  0.327680   0.327680    0.327680    0.327680
  -0.4  0.476113   0.488544    0.488551    0.488514
  -0.3  0.649931   0.666647    0.666655    0.666614
  -0.2  0.821927   0.831550    0.831556    0.831528
  -0.1  0.951466   0.952814    0.952816    0.952807

[Figure: the spline fits f-ii (natural), f-iii (parabola), f-iv (not-a-knot) vs. the exact curve.]

4.9 Least Square Method for Regression

4.9.1 Motivation and objective

* Consider a set of data from an experimental measurement.
  The data clearly show a trend but also have random noise in them.

[Figure: noisy measured data y vs. x for 0 ≤ x ≤ 5, with an overall trend.]

* It is impractical or pointless to use a Lagrangian interpolation
  because of the "high frequency" random noise in the data.
* We observe that, on average, the data may be represented by:

  P2(x) = a0 + a1 x + a2 x^2     (1)

* Obviously, there are many more data available than needed to just determine these 3 coefficients.
* How do we "best" choose the coefficients (a0, a1, a2)?
* Define the "best" coefficients (a0, a1, a2) in the sense that the mean square error, E, between the data and
  the fit P2(x) is smallest among all possible choices of (a0, a1, a2).
4.9.2 Minimization of mean square error

◊ Consider the total of the squared errors,

  Error = E(a0, a1, a2) = Σ_{i=1}^{n} [ yi - (a0 + a1 xi + a2 xi^2) ]^2     (2)

  where n = number of data points.

* Taking the partial derivative with respect to each coefficient and setting it to zero:

  ∂E/∂a0 = 0 = -2 Σ_{i=1}^{n} [ yi - a0 - a1 xi - a2 xi^2 ]
  ∂E/∂a1 = 0 = -2 Σ_{i=1}^{n} xi [ yi - a0 - a1 xi - a2 xi^2 ]
  ∂E/∂a2 = 0 = -2 Σ_{i=1}^{n} xi^2 [ yi - a0 - a1 xi - a2 xi^2 ]

  => the normal equations

  a0 n        + a1 Σ xi    + a2 Σ xi^2 = Σ yi
  a0 Σ xi     + a1 Σ xi^2  + a2 Σ xi^3 = Σ xi yi          (3)
  a0 Σ xi^2   + a1 Σ xi^3  + a2 Σ xi^4 = Σ xi^2 yi

  (all sums running over i = 1, ..., n).

◊ Fitting with an nth order polynomial using a = (a0, a1, a2, ..., an)^T:
  Let Pn(x) = a0 + a1 x + a2 x^2 + ... + an x^n,     (4)
  => [X] a = b.
  [X] can be very ill-conditioned for large n.
[X] can be very ill-conditioned for large n
Example

Given the data below, find the least square fit using P1(x) and P2(x).

  xi  0.05   0.11  0.15   0.31   0.46   0.52   0.70   0.74   0.82   0.98   1.17
  yi  0.956  0.89  0.832  0.717  0.571  0.539  0.378  0.370  0.306  0.242  0.104

Solution:
* n = 11,
  Σ xi = 6.01,      Σ xi^2 = 4.6545,
  Σ xi^3 = 4.115,   Σ xi^4 = 3.9161,
  Σ yi = 5.905,     Σ xi yi = 2.1839,   Σ xi^2 yi = 1.3357

  In matrix form, the normal equations (3) read

  [ n        Σ xi      Σ xi^2 ] [a0]   [ Σ yi      ]
  [ Σ xi     Σ xi^2    Σ xi^3 ] [a1] = [ Σ xi yi   ]
  [ Σ xi^2   Σ xi^3    Σ xi^4 ] [a2]   [ Σ xi^2 yi ]

* For P1(x), we solve

  [ 11     6.01   ] [a0]   [ 5.905  ]
  [ 6.01   4.6545 ] [a1] = [ 2.1839 ]

  => a0 = 0.9523,  a1 = -0.7604
  => P1(x) = 0.9523 - 0.7604 x

  Standard error of the estimate for P1(x) (a0 and a1):
  Let Sr = Σ_{i=1}^{n} ( yi - (a0 + a1 xi) )^2 = 0.009146
  S_{y/x} = sqrt( Sr/(n-2) ) = 0.03188   ~ "standard deviation"
𝑛−2
Example (continued)

Solution:
* n = 11,
  Σ xi = 6.01,  Σ xi^2 = 4.6545,  Σ xi^3 = 4.115,
  Σ xi^4 = 3.9161,  Σ yi = 5.905,  Σ xi yi = 2.1839,
  Σ xi^2 yi = 1.3357

* For P2(x), we solve

  [ 11       6.01     4.6545 ] [a0]   [ 5.905  ]
  [ 6.01     4.6545   4.115  ] [a1] = [ 2.1839 ]
  [ 4.6545   4.115    3.9161 ] [a2]   [ 1.3357 ]

  => a0 = 0.9980,  a1 = -1.0180,  a2 = 0.2247
  => P2(x) = 0.9980 - 1.0180 x + 0.2247 x^2

  Standard error of the estimate for P2(x) (a0, a1 and a2):
  Let Sr = Σ_{i=1}^{n} ( yi - (a0 + a1 xi + a2 xi^2) )^2 = 0.001868
  S_{y/x} = sqrt( Sr/(n-3) ) = 0.01528   ~ "standard deviation"
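The normal equations above can be assembled and solved directly; a minimal pure-Python sketch (no libraries) reproducing the quadratic fit of this example, with Gaussian elimination standing in for the hand solution:

```python
# Quadratic least-squares fit via the normal equations (3), built from the
# raw (xi, yi) data of the example and solved by Gaussian elimination.
import math

xs = [0.05, 0.11, 0.15, 0.31, 0.46, 0.52, 0.70, 0.74, 0.82, 0.98, 1.17]
ys = [0.956, 0.89, 0.832, 0.717, 0.571, 0.539, 0.378, 0.370, 0.306, 0.242, 0.104]

def quad_lsq(xs, ys):
    sx = [sum(x**k for x in xs) for k in range(5)]            # sums of x^0..x^4
    b = [sum(y * x**k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[sx[r + c] for c in range(3)] for r in range(3)]     # moment matrix
    # forward elimination
    for k in range(3):
        for r in range(k + 1, 3):
            m = A[r][k] / A[k][k]
            A[r] = [arc - m * akc for arc, akc in zip(A[r], A[k])]
            b[r] -= m * b[k]
    # back-substitution
    a = [0.0, 0.0, 0.0]
    for k in (2, 1, 0):
        a[k] = (b[k] - sum(A[k][j] * a[j] for j in range(k + 1, 3))) / A[k][k]
    return a

a0, a1, a2 = quad_lsq(xs, ys)
Sr = sum((y - (a0 + a1*x + a2*x**2))**2 for x, y in zip(xs, ys))
Syx = math.sqrt(Sr / (len(xs) - 3))
# a0, a1, a2, Syx land close to the hand-computed 0.9980, -1.0180, 0.2247, 0.01528
```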
Example (continued)

The two least square fits of the data are plotted below.

[Figure, left: linear least square fit y = 0.9523 - 0.7604x through the data.]
[Figure, right: quadratic least square fit y = 0.9980 - 1.0180x + 0.2247x^2 through the data.]
Supplemental Reading for Chapter IV Interpolation
Example 1 Polynomial and divided difference interpolation:
If f(x) is a monic quartic polynomial such that
f(-1) = -1, f(2) = -4, f(-3) = -9, and f(4) = -16,
find: f(1).
Note: “quartic” means that the polynomial is of fourth order;
“monic” means that the coefficient of the highest power term is one.
Solution:
Suppose that the 4th order polynomial we are looking for is P4(x). There are
two approaches we can use.
i) Elementary approach (using high school algebra):
We note that the discrete values of the function are related to -x^2, which is
not a 4th order polynomial.
However, if we let P4(x) = –x2 + g4(x), where g4(x) should be a monic quartic
polynomial, then we see that
g4(-3) = g4(-1) = g4(2) = g4(4) = 0
i.e., x=-3, -1, 2 & 4 are roots of the monic quartic function g4(x).
Thus g4(x) can be written as g4(x) = (x+3)(x +1)(x-2)(x-4).
Hence P4(x) = (x+3)(x +1)(x-2)(x-4) - x2
And P4(1) = 24-1=23
ii) Newton’s divided difference method:
  x    f    f[1]  f[2]  f[3]  f[4]
  -3   -9    4    -1     0    B = ?
  -1   -1   -1    -1
   2   -4   -6
   4  -16

P4(x) = -9 + 4(x+3) - 1(x+3)(x+1) + 0*(x+3)(x+1)(x-2)
        + B*(x+3)(x+1)(x-2)(x-4)
Note: the coefficient B of the last term of P4(x) is 1 since P4(x) is monic.
Simplifying the above, we obtain
P4(x) = (x+3)(x+1)(x-2)(x-4) - x^2
Therefore, P4(1) = 24 - 1 = 23
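The divided-difference computation in Example 1 is easy to mirror in code; a sketch that rebuilds the table, evaluates the cubic through the data by nested multiplication, and adds the monic quartic term with B = 1:

```python
def divided_differences(xs, ys):
    """Return the Newton coefficients f0, f[x0,x1], f[x0,x1,x2], ... in place."""
    coeffs = list(ys)
    for order in range(1, len(xs)):
        # sweep from the bottom so lower-order entries are still available
        for i in range(len(xs) - 1, order - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i-1]) / (xs[i] - xs[i-order])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton form by nested multiplication."""
    p = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        p = p * (x - xs[i]) + coeffs[i]
    return p

xs = [-3, -1, 2, 4]
ys = [-9, -1, -4, -16]
c = divided_differences(xs, ys)      # top diagonal of the table: -9, 4, -1, 0

p3 = newton_eval(xs, c, 1.0)         # cubic through the four data points
monic = 1.0
for xi in xs:
    monic *= (1.0 - xi)              # (x+3)(x+1)(x-2)(x-4) at x = 1
p4 = p3 + monic                      # add the B = 1 monic quartic term
assert p4 == 23.0
```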
Example 2 (Polynomials and roots) Let P(x) be a polynomial of degree 2006.
If P(n) = 1/n for n = 1, 2, 3, ..., 2007, compute the value of P(2008).
Solution:
Because P(n) = 1/n, we have nP(n) - 1 = 0 for n = 1, 2, ..., 2007;
we define a polynomial Q(x) such that Q(x) = xP(x) - 1.
Because nP(n) - 1 = 0 for n = 1, 2, ..., 2007, we have
Q(1) = Q(2) = ... = Q(2007) = 0, which gives a total of 2007 real roots.
Since P(x) is a polynomial of order 2006, it implies that Q(x) is a
polynomial of order 2007. Thus
  Q(x) = A(x-1)(x-2)...(x-2007) = xP(x) - 1
  => P(x) = [Q(x) + 1]/x
Since P(x) is a regular polynomial, its value at x = 0 should be well defined,
  => Q(x) + 1 at x = 0 must be 0 in order for P(0) to be well defined.
  => Q(0) = -1
  => A(0-1)(0-2)...(0-2007) = -1  =>  A = 1/2007!
  => P(x) = [ (x-1)(x-2)...(x-2007)/2007! + 1 ] / x
  => P(2008) = [ (2008-1)(2008-2)...(2008-2007)/2007! + 1 ] / 2008
             = (1 + 1)/2008 = 1/1004
Note: in this solution, by introducing Q(x), we take advantage of the
relationship between the roots and the form of the polynomial Q(x).
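The result can be verified exactly on small even-degree analogues of this problem using Python's fractions module: for a degree-d polynomial with P(n) = 1/n at n = 1, ..., d+1, the same argument gives P(d+2) = 2/(d+2), matching 2/2008 = 1/1004 for d = 2006.

```python
# Exact Lagrange interpolation with rational arithmetic, used to check the
# pattern P(d+2) = 2/(d+2) for small even degrees d.
from fractions import Fraction

def lagrange_eval(pts, x):
    """Exact Lagrange interpolation through pts = [(xi, yi)] at integer x."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(pts):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(pts):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

for d in (2, 4, 6):                          # small even-degree analogues
    pts = [(n, Fraction(1, n)) for n in range(1, d + 2)]
    assert lagrange_eval(pts, d + 2) == Fraction(2, d + 2)
```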
Example 3 Interpolation using divided difference in an Excel spreadsheet
Background:
A set of nonlinear differential equations has been numerically solved using a
finite difference method. The value of the function f(x) at x=0.05 needs to be
compared with the ones published in the literature. However, the computational
grids were set up in such a way that x=0.05 does not coincide with the discrete grid
points. In order to obtain f(0.05), interpolation must be performed. Part of the
results from the numerical solution is given in the table below:

  x              f
  0.04702600405  -29.0031775811
  0.04769189351  -28.5987937135
  0.04836156519  -28.2033497952
  0.04903504805  -27.8165543888
  0.04971237131  -27.4381285867
  0.05039356444  -27.0678053440
  0.05107865718  -26.7053288540
  0.05176767950  -26.3504539631
  0.05246066168  -26.0029456212
  0.05315763421  -25.6625783659
  0.05385862789  -25.3291358386
  0.05456367378  -25.0024103288

[Figure: f(x) vs. x for 0.045 ≤ x ≤ 0.055.]

Question: How do we obtain a reliable and accurate value for f(0.05) using
interpolation?
Solution:
* We know that Lagrange interpolation gives the same result as Newton divided
difference; thus we choose to work with the divided difference method. In fact, for
what we are going to do, the divided difference method is ideal.
* In the real world, we do not have the exact answer and we have to be open-minded to
explore various possibilities. We do not know the best order of polynomial
to use. Note that a higher order is not always better than a
lower order, since the data given in the table have round-off errors (due to
the finite number of significant digits). Thus we have to examine the results
from various orders, Pk(0.05).
* The interpolated result for f(0.05) also depends on the choice of x0. We need to
examine the effect of x0 on the behavior of Pk(0.05) as well.
* We first establish the divided difference table based on the given data
x f f[1] f[2] f[3] f[4]
0.047026004 -29.0031775811 607.28378 -12563.716 256416.32 -5158347.79
0.047691894 -28.5987937135 590.50417 -12048.564 242559.10 -4812194.74
0.048361565 -28.2033497952 574.32108 -11558.479 229558.14 -4495011.17
0.049035048 -27.8165543888 558.70782 -11092.017 217344.78 -4195169.24
0.049712371 -27.4381285867 543.63913 -10647.849 205880.93 -3928614.26
0.050393564 -27.0678053440 529.09113 -10224.701 195083.95 -3664603.32
0.051078657 -26.7053288540 515.04121 -9821.4430 184954.73 -3437574.10
0.05176768 -26.3504539631 501.46793 -9436.9264 175398.38 -3213430.89
0.052460662 -26.0029456212 488.35103 -9070.1774 166413.64
0.053157634 -25.6625783659 475.67123 -8720.2075
0.053858628 -25.3291358386 463.41028
0.054563674 -25.0024103288

* In the above f[k] denotes the kth order divided difference.


* Here is the Excel implementation formula that you can copy and paste into your
own spreadsheet (column B holds x, column C holds f, and columns D-G hold f[1]-f[4]):

  x        f        f[1]              f[2]              f[3]              f[4]
  0.04703  -29.003  =(C3-C2)/(B3-B2)  =(D3-D2)/(B4-B2)  =(E3-E2)/(B5-B2)  =(F3-F2)/(B6-B2)
  0.04769  -28.599  =(C4-C3)/(B4-B3)  =(D4-D3)/(B5-B3)  =(E4-E3)/(B6-B3)  =(F4-F3)/(B7-B3)
  0.04836  -28.203  =(C5-C4)/(B5-B4)  =(D5-D4)/(B6-B4)  =(E5-E4)/(B7-B4)  =(F5-F4)/(B8-B4)
  0.04904  -27.817  =(C6-C5)/(B6-B5)  =(D6-D5)/(B7-B5)  =(E6-E5)/(B8-B5)  =(F6-F5)/(B9-B5)

* Let x = 0.05 (stored as xx); we then compute the interpolation using linear (n=1),
quadratic (n=2), cubic, & 4th order polynomials for f(x=0.05) as follows:

  Pn(x) = f0 + (x-x0) f0[1] + (x-x0)(x-x1) f0[2] + ... + (x-x0)(x-x1)...(x-xn-1) f0[n].

The implementation for obtaining Pn(x) for n=1 to 4 is shown below
(columns H-K hold the linear through 4th order results):

  x        f        linear          quadratic               cubic                           fourth
  0.04703  -29.003  =C2+D2*(xx-B2)  =H2+E2*(xx-B2)*(xx-B3)  =I2+F2*(xx-B2)*(xx-B3)*(xx-B4)  =J2+G2*(xx-B2)*(xx-B3)*(xx-B4)*(xx-B5)
  0.04769  -28.599  =C3+D3*(xx-B3)  =H3+E3*(xx-B3)*(xx-B4)  =I3+F3*(xx-B3)*(xx-B4)*(xx-B5)  =J3+G3*(xx-B3)*(xx-B4)*(xx-B5)*(xx-B6)
  0.04836  -28.203  =C4+D4*(xx-B4)  =H4+E4*(xx-B4)*(xx-B5)  =I4+F4*(xx-B4)*(xx-B5)*(xx-B6)  =J4+G4*(xx-B4)*(xx-B5)*(xx-B6)*(xx-B7)
  0.04904  -27.817  =C5+D5*(xx-B5)  =H5+E5*(xx-B5)*(xx-B6)  =I5+F5*(xx-B5)*(xx-B6)*(xx-B7)  =J5+G5*(xx-B5)*(xx-B6)*(xx-B7)*(xx-B8)
  0.04971  -27.438  =C6+D6*(xx-B6)  =H6+E6*(xx-B6)*(xx-B7)  =I6+F6*(xx-B6)*(xx-B7)*(xx-B8)  =J6+G6*(xx-B6)*(xx-B7)*(xx-B8)*(xx-B9)

The results of the interpolations for xx=0.05 from various orders of polynomials
based on the above formulas are (the f[1]-f[4] columns are omitted here):

  x0       f        linear   quadratic  cubic       fourth
  0.04703  -29.003  -27.197  -27.2834   -27.280475  -27.28053133
  0.04769  -28.599  -27.236  -27.2814   -27.280526  -27.28053097
  0.04836  -28.203  -27.262  -27.2806   -27.280532  -27.28053102
  0.04904  -27.817  -27.277  -27.2805   -27.280530  -27.28053099
  0.04971  -27.438  -27.282  -27.2806   -27.280532  -27.28053105
  0.05039  -27.068  -27.276  -27.2804   -27.280524  -27.28053056
  0.05108  -26.705  -27.261  -27.2796   -27.280476  -27.2805272
  0.05177  -26.350  -27.237  -27.2779   -27.280345  -27.28051532

If we use a third order polynomial, the best result (with smallest error) should
come from the group of data (in almost equal interval cases) with two points on
the left of x and two points on the right, so that the error factor
  E3(x) = (x-x0)(x-x1)(x-x2)(x-x3)
will have the smallest magnitude.
Thus using x0 = 0.04904, we obtain P3(x=0.05) = -27.280530.
For P4(0.05), we need 5 points. If we examine the results in the "fourth" order
column, we see that P4(0.05) = -27.28053102 when we choose the beginning
point x0 = 0.04836. If we choose the beginning point to be x0 = 0.04904, then
P4(0.05) = -27.28053099; the difference is 0.00000003. Either is acceptable.
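The spreadsheet computation above can be reproduced in a few lines of Python; a sketch using the five points starting at x0 = 0.04903504805 (two on each side of x = 0.05):

```python
# Build the divided-difference coefficients in place, then accumulate
# P1(0.05), P2(0.05), ... up to P4(0.05) exactly as the spreadsheet does.
xs = [0.04903504805, 0.04971237131, 0.05039356444, 0.05107865718, 0.05176767950]
fs = [-27.8165543888, -27.4381285867, -27.0678053440, -26.7053288540, -26.3504539631]

coeffs = list(fs)
for order in range(1, len(xs)):
    for i in range(len(xs) - 1, order - 1, -1):
        coeffs[i] = (coeffs[i] - coeffs[i-1]) / (xs[i] - xs[i-order])
# coeffs[k] is now f[x0, ..., xk]

xx = 0.05
p, factor = coeffs[0], 1.0
for k in range(1, len(xs)):
    factor *= (xx - xs[k-1])
    p += coeffs[k] * factor          # p is now the order-k interpolant Pk(0.05)
# the final p is P4(0.05), close to the -27.28053099 entry in the table
```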
The graph below shows how the cubic and fourth order interpolation results
depend on the choice of x0; it is the graphical representation of the data in the
respective columns of the above table.

[Figure: P(0.05) vs. x0 for the cubic and fourth order interpolations.]

Clearly the result from the fourth order interpolation, P4, is better than the cubic
polynomial fit, P3, because it is less sensitive to the choice of x0, i.e., it has less
fluctuation.
We can also see that for 0.04769 ≤ x0 ≤ 0.04971, the fourth order polynomial gives
interpolation for x = 0.05. For x0 outside (0.04769, 0.04971), the 4th order
polynomial gives extrapolation, which results in larger error and should
be avoided.

=> What about using 5th and 6th order polynomials?

* Due to the page margin limitation, f[5] and the 5th order interpolation results are
not shown in the above Excel spreadsheet for clarity.
* The result of the 5th order interpolation following a similar procedure for
P5(0.05) as a function of x0 is shown below:

  x0           P5(0.05)
  0.047691894  -27.2805310067
  0.048361565  -27.2805310078
  0.049035048  -27.2805310057
  0.049712371  -27.2805310063
  0.050393564  -27.2805309415

For a 5th order polynomial fit, the smallest error for P5(0.05) will be obtained if
there are 3 data points on the left of x=0.05 and 3 on the right. Thus the best
result should be the one using x0 = 0.048361565, and we estimate that
P5(0.05) should be around -27.2805310078.
* For the data given, a 6th order fit using 7 points gives

  x0           P6(0.05)
  0.047026004  -27.2805310077
  0.047691894  -27.2805310073
  0.048361565  -27.2805310070
  0.049035048  -27.2805310058
  0.049712371  -27.2805310018
  0.050393564  -27.2805309480
  0.051078657  -27.2805312436
  0.051767680  -27.2805310671

If we use x0 = 0.047691894, P6(0.05) = -27.2805310073386. If x0 = 0.048361565,
P6(0.05) = -27.2805310070.
If we conclude using P6(x), then it is estimated that
  f(0.05) ≈ -27.280531007
Example 4 Interpolation using Matlab built-in functions
For the previous problem, we are still interested in finding the value of f(0.05)
for the data given. However, we would like to explore the capability of Matlab
built-in functions and to compare the results for f(0.05) from various methods.
Solution:
i) At the very end of the last problem, we obtained P6(0.05) = -27.2805310073386.
In doing so, we used 7 data points, which places x=0.05 in the middle of the
interval, to obtain a 6th order polynomial. Thus we first want to use a similar
approach on the same set of data. The functions we can use in Matlab are
"polyfit(x, y, n)" and "polyval(p, xnew)".
% Input x(i)data
>> format long
>> x=[0.04769189351
0.04836156519
0.04903504805
0.04971237131
0.05039356444
0.05107865718
0.05176767950
];
>> y=[-28.5987937135
-28.2033497952
-27.8165543888
-27.4381285867
-27.0678053440
-26.7053288540
-26.3504539631
];
>> p=polyfit(x,y,6)
Warning: Polynomial is badly conditioned. Remove repeated data
points
or try centering and scaling as described in HELP
POLYFIT.
> In polyfit at 79
p =
1.0e+009 *
Columns 1 through 5
-1.37967600281235 0.50241468586258 -0.07823040313940
0.00675310209155 -0.00034908333831
Columns 6 through 7
0.00001080647011 -0.00000018554058
* Note: this p vector of dimension 7 contains the coefficients of the 6th order
polynomial in the form P6(x) = p(1)*x^6 + p(2)*x^5 + p(3)*x^4 + p(4)*x^3 +
p(5)*x^2 + p(6)*x + p(7).
% evaluating P6(x) using “polyval” command:
>> xn=0.05;
>> yn=polyval(p, xn)
yn =
-27.28053100733874
% evaluating P6(x) using definition:
>>yfit=p(1)*xn^6+p(2)*xn^5+p(3)*xn^4+p(4)*xn^3+p(5)*
xn^2+p(6)*xn+p(7)
yfit =
-27.28053100733888
% evaluating P6(x) using nested multiplication, which is
% more accurate and involves fewer operations
>>yfit=(((((p(1)*xn+p(2))*xn+p(3))*xn+p(4))*xn+p(5))*xn
+p(6))*xn+p(7)
yfit =
-27.28053100733874
* It is clear that these ways of evaluating P6(x) for the same coefficients yield
basically the same results.
* P6(x=0.05) = -27.28053100733874, which agrees with what we got in the
last problem using divided difference. The difference in the last two digits reflects
the effect of roundoff error in summing the essentially same polynomial using
different expressions.
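Nested multiplication generalizes to any coefficient vector ordered from the highest power down (as polyfit returns); a small Python sketch with an assumed test polynomial:

```python
def horner(p, x):
    """Evaluate p[0]*x^(n-1) + ... + p[n-2]*x + p[n-1] by nested multiplication."""
    result = 0.0
    for coeff in p:
        result = result * x + coeff
    return result

# check against direct power-form evaluation for an assumed small polynomial
p = [2.0, -3.0, 0.5, 7.0]            # 2x^3 - 3x^2 + 0.5x + 7
x = 1.25
direct = sum(c * x**(len(p) - 1 - k) for k, c in enumerate(p))
assert abs(horner(p, x) - direct) < 1e-12
```

Horner's rule uses n-1 multiplications instead of the O(n^2) implied by the power form, which is why the nested Matlab expression above is both cheaper and slightly more accurate.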
ii) Matlab has cubic Hermite interpolating polynomial (PCHIP) and cubic spline
polynomial interpolation. See description of the methods in the next example.
For consistency, we use the same 7 points of (x, y) as in the last example.
x y
0.04769189351 -28.5987937135
0.04836156519 -28.2033497952
0.04903504805 -27.8165543888
0.04971237131 -27.4381285867
0.05039356444 -27.0678053440
0.05107865718 -26.7053288540
0.05176767950 -26.3504539631

* After the input of x and y,…


% use cubic Hermite polynomial:
>> ych=pchip(x,y,xn)
ych =
-27.28053135339867
% use cubic spline (not-a-knot spline end condition):
>> ycs=spline(x,y,xn)
ycs =
-27.28053097433387
% ≈ -27.280531
* Because both methods use third order polynomials, the accuracy is not as good
as that obtained from using the 6th order polynomial.
Example 5 Comparison between piecewise cubic Hermite and cubic spline
interpolations in Matlab.
Solution:
* We have studied piecewise cubic spline fitting in detail.
* Matlab has programs to perform piecewise cubic Hermite interpolating
polynomial (PCHIP) fits in addition to cubic spline interpolation.
* Description of PCHIP:

yi = pchip(x,y,xi) returns vector yi containing elements


corresponding to the elements of xi and determined by piecewise cubic
interpolation within vectors x and y. The vector x specifies the points at
which the data y is given. If y is a matrix, then the interpolation is
performed for each column of y and yi is length(xi)-by-size(y,2).

pp = pchip(x,y) returns a piecewise polynomial structure for use by


ppval. x can be a row or column vector. y is a row or column vector of
the same length as x, or a matrix with length(x) columns.

pchip finds values of an underlying interpolating function p(x) at


intermediate points, such that:

• On each subinterval xkx xk+1, p(x) is the cubic Hermite interpolant to the
given values and certain slopes at the two endpoints.
• p(x) interpolates the data, i.e., p(xj) = yj, and the first derivative p′(x) is continuous.
p″(x) is probably not continuous; there may be jumps at the xj.
• The slopes at the xj are chosen in such a way that p(x) preserves the shape of
the data and respects monotonicity. This means that, on intervals where the
data are monotonic, so is p(x); at points where the data has a local extremum,
so does p(x).

Comments:

spline constructs S(x) in almost the same way pchip constructs p(x).
However, spline chooses the slopes at the xj differently, namely to make
even S″(x) continuous. This has the following effects:

• spline produces a smoother result, i.e. S″(x) is continuous.


• spline produces a more accurate result if the data consists of values of a
smooth function.
• pchip has no overshoots and less oscillation if the data are not smooth.
• pchip is less expensive to set up.
• The two are equally expensive to evaluate.

Illustration:

>>x = -3:3;
>>y = [-1 -1 -1 0 1 1 1];
>>t = -3:.01:3;
>>p = pchip(x,y,t);
>>s = spline(x,y,t);
>>plot(x,y,'o',t,p,'-',t,s,'-.')
>>legend('data','pchip','spline',4)
[Figure: the step-like data with the pchip and spline interpolants; spline overshoots and oscillates near the transition, while pchip preserves the monotonicity of the data.]
Example 6 Interpolating curves with singularity
Consider y = x^(1/2), for x in [0, 2] with h=0.5. Investigate the behavior of the
interpolation using a cubic spline and a 4th order polynomial using Matlab.
Solution:
In solving this problem, we will use Matlab as a tool. The commands are listed
below; hopefully they are self-explanatory.
% define x & y
>> x=0:0.5:2;
>> y=x.^0.5;
% define fine grid x using new_x:
>> new_x=0:0.05:2;
% compute the y value using the cubic spline
>> new_y_spline=interp1(x,y,new_x,'spline');
% compute the exact value of y
>> exact_Y=new_x.^0.5;
% compute coefficients of the 4th order polynomial fit given 5 points
>> p=polyfit(x,y,4)
p =
   -0.2088    1.0878   -2.0947    2.2157   -0.0000
% evaluate the 4th order fit on the fine grid
>> y_fit4=polyval(p,new_x);
% tabulate the results for y's and errors
>> [new_x', new_y_spline', y_fit4', exact_Y', (exact_Y-new_y_spline)', (exact_Y-y_fit4)']

ans =
0 0 -0.0000 0 0.0000 0.0000
0.0500 0.1014 0.1057 0.2236 0.1222 0.1179
0.1000 0.1949 0.2017 0.3162 0.1213 0.1145
0.1500 0.2809 0.2888 0.3873 0.1064 0.0985
0.2000 0.3597 0.3677 0.4472 0.0875 0.0795
0.2500 0.4319 0.4392 0.5000 0.0681 0.0608
0.3000 0.4977 0.5039 0.5477 0.0500 0.0438
0.3500 0.5578 0.5624 0.5916 0.0338 0.0292
0.4000 0.6124 0.6154 0.6325 0.0201 0.0170
0.4500 0.6620 0.6635 0.6708 0.0088 0.0074
0.5000 0.7071 0.7071 0.7071 0 -0.0000
0.5500 0.7480 0.7469 0.7416 -0.0064 -0.0053
0.6000 0.7852 0.7832 0.7746 -0.0106 -0.0086
0.6500 0.8192 0.8167 0.8062 -0.0129 -0.0104
0.7000 0.8502 0.8476 0.8367 -0.0135 -0.0109
0.7500 0.8788 0.8764 0.8660 -0.0128 -0.0103
0.8000 0.9054 0.9034 0.8944 -0.0110 -0.0090
0.8500 0.9304 0.9290 0.9220 -0.0084 -0.0070
0.9000 0.9542 0.9534 0.9487 -0.0055 -0.0048
0.9500 0.9773 0.9770 0.9747 -0.0026 -0.0024
1.0000 1.0000 1.0000 1.0000 0 0.0000
1.0500 1.0228 1.0225 1.0247 0.0019 0.0022
1.1000 1.0456 1.0448 1.0488 0.0032 0.0040
1.1500 1.0684 1.0670 1.0724 0.0040 0.0054
1.2000 1.0912 1.0892 1.0954 0.0042 0.0062
1.2500 1.1139 1.1115 1.1180 0.0041 0.0065
1.3000 1.1365 1.1339 1.1402 0.0036 0.0063
1.3500 1.1590 1.1565 1.1619 0.0029 0.0054
1.4000 1.1812 1.1792 1.1832 0.0021 0.0041
1.4500 1.2031 1.2019 1.2042 0.0011 0.0022
1.5000 1.2247 1.2247 1.2247 0 -0.0000
1.5500 1.2460 1.2474 1.2450 -0.0010 -0.0025
1.6000 1.2669 1.2699 1.2649 -0.0020 -0.0050
1.6500 1.2874 1.2920 1.2845 -0.0028 -0.0075
1.7000 1.3073 1.3134 1.3038 -0.0035 -0.0096
1.7500 1.3267 1.3341 1.3229 -0.0039 -0.0112
1.8000 1.3456 1.3536 1.3416 -0.0039 -0.0119
1.8500 1.3638 1.3717 1.3601 -0.0036 -0.0116
1.9000 1.3813 1.3881 1.3784 -0.0029 -0.0097
1.9500 1.3981 1.4024 1.3964 -0.0017 -0.0060
2.0000 1.4142 1.4142 1.4142 0 0.0000
>> plot(x,y,new_x,new_y_spline,new_x,y_fit4,new_x,exact_Y,'-o')

[Figure: red and blue lines: cubic spline and polyfit; symbols: exact; blue: raw data.]
>> plot(new_x,exact_Y-new_y_spline,new_x,exact_Y-y_fit4,'-o')

[Figure: symbols: error from the 4th order polynomial fit; blue line: error from the cubic spline.]
=> Discussion:
• The function has a singularity at x=0 since f′(0) → ∞.
• Thus it is difficult to interpolate using any method.
• The cubic spline and a 4th order polynomial fit over the whole interval
  perform similarly for this problem.
• The largest errors occur near x=0 (due to the singularity), as expected.

If we extend the interval further to consider x in [0, 3] and use h=0.5 to obtain
the raw data y = x^(1/2), we have 7 data points. Thus with the regular polynomial fit
using "polyfit(x,y,6)", we can obtain a 6th order polynomial fit for the data.
The comparison of the errors from the regular 6th order polynomial fit and from
the cubic spline fit is shown below.
[Figure: symbols: error from polyfit(x, y, 6); blue line: error from the cubic spline.]
Clearly, near x=3, cubic spline performs better despite its lower order.
Example 7 Two-dimensional interpolations using Matlab
Function: zi = interp2 (x, y, z, xi, yi)

ZI = interp2(X,Y,Z,XI,YI) returns matrix ZI containing elements


corresponding to the elements of XI and YI and determined by interpolation
within the two-dimensional function specified by matrices X, Y, and Z=
f(X,Y). X and Y must be monotonic, and have the same format ("plaid") as if
they were produced by meshgrid. Matrices X and Y specify the points at
which the data Z is given. Out of range values are returned as NaNs.

XI and YI can be matrices, in which case interp2 returns the values of Z


corresponding to the points (XI(i,j),YI(i,j)). Alternatively, you can
pass in the row and column vectors xi and yi, respectively. In this case,
interp2 interprets these vectors as if you issued the command
meshgrid(xi,yi)

>> [x,y,z]=peaks(10); [xi,yi]=meshgrid(-3:0.1:3,-3:0.1:3);


>> zi = interp2(x,y,z,xi,yi); surf(xi,yi,zi)
* Graph is shown below.
[Figure: surface plot of the interpolated peaks data on the fine grid.]
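interp2's default is linear (bilinear) interpolation; to show what happens inside, here is a minimal pure-Python bilinear interpolation on a regular grid (the grid and test surface below are illustrative assumptions):

```python
# Bilinear interpolation: locate the cell containing (xi, yi), then blend the
# four surrounding grid values with the fractional offsets tx, ty.
import bisect

def bilinear(xg, yg, z, xi, yi):
    """Interpolate z[j][i] (rows = y, cols = x) at (xi, yi); xg, yg monotonic."""
    i = min(max(bisect.bisect_right(xg, xi) - 1, 0), len(xg) - 2)
    j = min(max(bisect.bisect_right(yg, yi) - 1, 0), len(yg) - 2)
    tx = (xi - xg[i]) / (xg[i+1] - xg[i])
    ty = (yi - yg[j]) / (yg[j+1] - yg[j])
    return ((1-tx)*(1-ty)*z[j][i]   + tx*(1-ty)*z[j][i+1]
          + (1-tx)*ty    *z[j+1][i] + tx*ty    *z[j+1][i+1])

xg = [0.0, 1.0, 2.0, 3.0]
yg = [0.0, 0.5, 1.0]
z = [[x + 2*y for x in xg] for y in yg]   # a linear surface is reproduced exactly
assert abs(bilinear(xg, yg, z, 1.3, 0.7) - (1.3 + 2*0.7)) < 1e-12
```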
Example 8 Cubic spline interpolation - setting up systems for different end
conditions
Consider the following data:

  n  x    y
  0  0.0   0.0000000000
  1  0.5   0.8414709848
  2  1.0   0.9092974268
  3  1.5   0.1411200081
  4  2.0  -0.7568024953
  5  2.5  -0.9589242747
  6  3.0  -0.2794154982
  7  3.5   0.6569865987
  8  4.0   0.9893582466

[Figure: the data points y vs. x for 0 ≤ x ≤ 4.]

Find the cubic spline interpolations using three different end conditions.
Solution:
* We note that h = Δx = 0.5 is a constant. Thus, the system of equations for S (2nd
order derivative) we will be solving is in the form of
... ...   S1   ... 
1 4 1  S  y − 2y + y 
   2  i +1 i i −1 
 1 4 1   S3  6  ... 
  S  = 2  
 1 4 1   4 h ...
 
 1 4 1   S5   ... 
    
 1 4 1  S6   ... 
 ... ...  S 7   ... 

where S0 and S8 either have to be assumed to be 0 or obtained through the other
end conditions. We also note that a different end condition will affect the values
in the first and last rows.
i. In the present problem, we do NOT know the values of S0 & S8, so we do not
pursue the first end condition approach.
ii. Natural spline: Take S0 = S8 = 0; the system of equations becomes
4 1   S1   - 18.56746903 
1 4 1   S   - 20.06409266 
   2  
 1 4 1   S 3  =  - 3.113882030 
     16.69921738 
 1 4 1  S 4   
 1 4 1   S 5   21.15913334 
    
 1 4 1  S 6   6.165439691 
 1 4  S 7  − 14.49673078

After the reduction and back-substitution (see Chap 3 Notes), we obtain,


 S1   - 3.65497414 
 S   - 3.94757246 
 2  
 S 3  = − 0.61882867
   3.30900511 
S 4   
 S 5   4.082025605 
   
 S 6   1.522025808 
 S 7   - 4.00468915 
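The [1 4 1] tridiagonal system of case ii can be solved with the Thomas algorithm (forward elimination followed by back-substitution); a sketch using the natural-spline right-hand side of this example:

```python
def thomas(lower, diag, upper, rhs):
    """Solve a tridiagonal system; diag has n entries, lower/upper have n-1."""
    n = len(diag)
    d, r = list(diag), list(rhs)
    for i in range(1, n):                      # forward elimination
        m = lower[i-1] / d[i-1]
        d[i] -= m * upper[i-1]
        r[i] -= m * r[i-1]
    s = [0.0] * n
    s[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):             # back-substitution
        s[i] = (r[i] - upper[i] * s[i+1]) / d[i]
    return s

rhs = [-18.56746903, -20.06409266, -3.113882030, 16.69921738,
       21.15913334, 6.165439691, -14.49673078]
S = thomas([1.0]*6, [4.0]*7, [1.0]*6, rhs)
# S reproduces (S1, ..., S7) listed above, e.g. S[0] is about -3.65497414
```

For a tridiagonal system this costs O(n) operations, versus O(n^3) for general Gaussian elimination, which is why it is the standard choice for spline systems.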

Thus, for the piecewise polynomial,
  y = gi(x) = di + ci (x - xi) + bi (x - xi)^2 + ai (x - xi)^3,
the coefficients for each interval are:

x y S-(ii) (dy) c b a
0 0 0 0 1.98752315 0 -1.2183247
0.5 0.841471 -3.654974 … 1.07377961 -1.8274871 -0.0975328
1 0.9092974 -3.947572 … -0.8268570 -1.9737862 1.10958126
1.5 0.14112 -0.618829 … -1.9684573 -0.3094143 1.30927793
2 -0.756802 3.3090051 … -1.2959132 1.65450256 0.2576735
2.5 -0.958924 4.0820256 … 0.55184447 2.0410128 -0.8533333
3 -0.279415 1.5220258 … 1.95285732 0.7610129 -1.8422383
3.5 0.6569866 -4.004689 … 1.33219149 -2.0023446 1.33489638
4 0.9893582 0 0
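The natural-spline computation above can be reproduced with a short NumPy sketch (an illustration, not the lecture's spreadsheet approach); variable names here are my own:

```python
import numpy as np

# Knot data: y = sin(2x) sampled on [0, 4] with uniform spacing h = 0.5
h = 0.5
x = np.arange(0.0, 4.0 + h / 2, h)
y = np.sin(2 * x)
m = len(x) - 2                      # number of interior knots (7)

# Natural spline: S0 = S8 = 0, so only S1..S7 are unknown.
# Tridiagonal system: S_{i-1} + 4 S_i + S_{i+1} = (6/h^2)(y_{i+1} - 2 y_i + y_{i-1})
A = 4.0 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)
rhs = 6.0 / h**2 * (y[2:] - 2.0 * y[1:-1] + y[:-2])
S = np.zeros(m + 2)
S[1:-1] = np.linalg.solve(A, rhs)

# Interval coefficients of g_i(x) = d_i + c_i(x - x_i) + b_i(x - x_i)^2 + a_i(x - x_i)^3
a = (S[1:] - S[:-1]) / (6.0 * h)
b = S[:-1] / 2.0
c = (y[1:] - y[:-1]) / h - h * (2.0 * S[:-1] + S[1:]) / 6.0
d = y[:-1]
```

The computed S values and the (a, b, c, d) columns should agree with the table above to the printed precision.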
iii. Parabolic runout ("parabola at the ends"): Take S0 = S1 and S8 = S7, so that
    S0 + 4S1 + S2 = (S1) + 4S1 + S2 = 5S1 + S2,
and S6 + 4S7 + S8 = S6 + 4S7 + (S7) = S6 + 5S7;
hence the system of equations becomes
5 1   S1   - 18.56746903 
1 4 1    
   S 2  =  - 20.06409266 
 1 4 1   S 3   - 3.113882030 
    
 1 4 1   S 4   16.69921738 
 1 4 1   S 5   21.15913334 
    
 1 4 1  S 6   6.165439691 
 1 5  S 7  − 14.49673078

    [S0]   [ -2.88235796 ]
    [S1]   [ -2.88235796 ]
    [S2]   [ -4.15567923 ]
    [S3]   [ -0.55901777 ]
    [S4] = [  3.277868268 ]
    [S5]   [  4.146762072 ]
    [S6]   [  1.294216783 ]
    [S7]   [ -3.15818951 ]
    [S8]   [ -3.15818951 ]

    x     y           S (iii)     d (=y)   c            b            a
    0.0    0          -2.8823580  0         2.40353146  -1.441179     0
    0.5    0.841471   -2.8823580  ...       0.96235248  -1.441179    -0.4244404
    1.0    0.9092974  -4.1556792  ...      -0.79715682  -2.0778396    1.19888716
    1.5    0.14112    -0.5590178  ...      -1.97583107  -0.2795089    1.27896201
    2.0   -0.756802    3.2778683  ...      -1.29611844   1.63893413   0.28963127
    2.5   -0.958924    4.1467621  ...       0.56003914   2.07338104  -0.9508484
    3.0   -0.279415    1.2942168  ...       1.92028386   0.64710839  -1.4841354
    3.5    0.6569866  -3.1581895  ...       1.45429067  -1.5790948    0
    4.0    0.9893582  -3.1581895
iv. Not-a-knot: Take S0 = 2S1 - S2 and S8 = 2S7 - S6; note that
    S0 + 4S1 + S2 = (2S1 - S2) + 4S1 + S2 = 6S1 + 0*S2,
    S6 + 4S7 + S8 = S6 + 4S7 + (2S7 - S6) = 0*S6 + 6S7;
hence the system of equations becomes
6 0   S1   - 18.56746903 
1 4 1   S   - 20.06409266 
   2  
 1 4 1   S3  =  - 3.113882030 
    
1 4 1 S  16.69921738 
   4
 1 4 1   S 5   21.15913334 
    
 1 4 1  S 6   6.165439691 
 0 6  S 7  − 14.49673078

    [S0]   [ -2.08938987 ]
    [S1]   [ -3.09457817 ]
    [S2]   [ -4.09976647 ]
    [S3]   [ -0.57044861 ]
    [S4] = [  3.26767889 ]
    [S5]   [  4.19895042 ]
    [S6]   [  1.09565277 ]
    [S7]   [ -2.41612180 ]
    [S8]   [ -5.92789636 ]

    x     y           S (iv)      d (=y)   c            b             a
    0.0    0          -2.0893899  0         2.28905513  -1.04469494  -0.33506277
    0.5    0.841471   -3.0945782  ...       0.99306312  -1.54728909  -0.33506277
    1.0    0.9092974  -4.0997665  ...      -0.80552304  -2.04988323   1.17643928
    1.5    0.14112    -0.5704486  ...      -1.97307681  -0.28522431   1.27937584
    2.0   -0.756802    3.2676789  ...      -1.29876924   1.63383945   0.31042384
    2.5   -0.958924    4.1989504  ...       0.56788809   2.09947521  -1.03443255
    3.0   -0.279415    1.0956528  ...       1.89153888   0.54782638  -1.17059152
    3.5    0.6569866  -2.4161218  ...       1.56142163  -1.20806090  -1.17059152
    4.0    0.9893582  -5.9278964
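The three end conditions differ only in the first and last rows of the tridiagonal matrix. A rough NumPy sketch of this idea (the helper `solve_interior_S` is my own name, not from the notes):

```python
import numpy as np

# Same data as before: y = sin(2x) on [0, 4], h = 0.5
h = 0.5
x = np.arange(0.0, 4.0 + h / 2, h)
y = np.sin(2 * x)
m = len(x) - 2                      # 7 interior knots

def solve_interior_S(first_row, last_row):
    """Solve for (S1, ..., S7); each end condition is folded into the
    (diagonal, off-diagonal) entries of the first and last rows."""
    A = 4.0 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)
    A[0, 0], A[0, 1] = first_row
    A[-1, -1], A[-1, -2] = last_row
    rhs = 6.0 / h**2 * (y[2:] - 2.0 * y[1:-1] + y[:-2])
    return np.linalg.solve(A, rhs)

S_ii  = solve_interior_S((4.0, 1.0), (4.0, 1.0))   # natural: S0 = S8 = 0
S_iii = solve_interior_S((5.0, 1.0), (5.0, 1.0))   # S0 = S1,        S8 = S7
S_iv  = solve_interior_S((6.0, 0.0), (6.0, 0.0))   # S0 = 2S1 - S2,  S8 = 2S7 - S6
```

Each solution vector should match the corresponding case above; S0 and S8 are then recovered from the stated end conditions.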
v. Comparison of the interpolation results
(Note: the y values are generated using y_exact = sin(2x).)

[Figure: f-exact, f-ii, f-iii, and f-iv versus x, 0 ≤ x ≤ 4]
[Figure: interpolation errors Error-ii, Error-iii, and Error-iv versus x, 0 ≤ x ≤ 4]
Comments:
* At x = 0, y = sin(2x) ⇒ y″(0) = 0. Thus, the natural-spline condition at x = 0
  is actually exact. That is why the error (labeled "Error-ii") has the smallest
  magnitude near x = 0 among the three methods.
* However, at x = 4, y″(4) ≠ 0. Thus "Error-ii" is larger near x = 4,
  indicating that the natural spline usually has larger error when y″ does not
  vanish at an end.
* Method iii (parabola at the ends) and method iv ("not-a-knot") perform
  similarly for this problem.
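As a concrete check of the error magnitudes, one can evaluate the first natural-spline piece (case ii coefficients from the table above) at an off-knot point; x = 0.25 is an arbitrary check point:

```python
import math

# First-interval coefficients of the natural spline (case ii), from the table:
# g0(x) = d + c*x + b*x^2 + a*x^3 on [0, 0.5]   (x0 = 0)
d, c, b, a = 0.0, 1.98752315, 0.0, -1.2183247

xq = 0.25                            # midpoint of the first interval
g = d + c * xq + b * xq**2 + a * xq**3
err = g - math.sin(2 * xq)           # exact curve is y = sin(2x)
```

This gives g ≈ 0.47784 versus sin(0.5) ≈ 0.47943, an error of about −1.6e-3, consistent with the scale of the error plot.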
Example 9: Setting up the matrix for the least-squares fit method
Given the data below, set up the matrix for finding the quadratic least-squares
polynomial fit P2(x).

    xi  0.05   0.11  0.15   0.31   0.46   0.52   0.70   0.74   0.82   0.98   1.17
    yi  0.956  0.89  0.832  0.717  0.571  0.539  0.378  0.370  0.306  0.242  0.104

Solution:
* After we input (xi, yi) into the spreadsheet, we set up columns for xi^k
(k = 2, 3, 4) and xi^k * yi. The result is:

     i   xi    yi     xi^2   xi^3      xi^4      xi*yi    xi^2*yi
     1   0.05  0.956  0.003  0.000125  6.25E-06  0.0478   0.00239
     2   0.11  0.89   0.012  0.001331  0.000146  0.0979   0.010769
     3   0.15  0.832  0.023  0.003375  0.000506  0.1248   0.01872
     4   0.31  0.717  0.096  0.029791  0.009235  0.22227  0.068904
     5   0.46  0.571  0.212  0.097336  0.044775  0.26266  0.120824
     6   0.52  0.539  0.27   0.140608  0.073116  0.28028  0.145746
     7   0.70  0.378  0.49   0.343     0.2401    0.2646   0.18522
     8   0.74  0.37   0.548  0.405224  0.299866  0.2738   0.202612
     9   0.82  0.306  0.672  0.551368  0.452122  0.25092  0.205754
    10   0.98  0.242  0.96   0.941192  0.922368  0.23716  0.232417
    11   1.17  0.104  1.369  1.601613  1.873887  0.12168  0.142366
(In the spreadsheet, column A holds the counter i via =A2+1, column B holds xi,
and column C holds yi; the derived columns use formulas such as =B2^2 for xi^2,
=B2^3 for xi^3, =D2^2 for xi^4 (squaring the xi^2 column), =B2*C2 for xi*yi, and
=D2*C2 for xi^2*yi. The matrix and RHS entries are then column sums, e.g.
=SUM(B2:B12), =SUM(D2:D12), =SUM(E2:E12), =SUM(F2:F12) for the matrix and
=SUM(C2:C12), =SUM(G2:G12), =SUM(H2:H12) for the RHS.)
The matrix coefficients are set up next. The resulting system is

    [ 11       6.01      4.6545    ] [a0]   [ 5.905     ]
    [ 6.01     4.6545    4.114963  ] [a1] = [ 2.18387   ]
    [ 4.6545   4.114963  3.9161277 ] [a2]   [ 1.3357207 ]

The solution for (a0, a1, a2) is

    [a0]   [  0.99796839369068 ]
    [a1] = [ -1.01804251834621 ]
    [a2]   [  0.22468217953879 ]
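The spreadsheet construction above amounts to forming the Σ-sums of the normal equations and solving the 3x3 system; a minimal NumPy sketch of the same computation (not part of the original notes):

```python
import numpy as np

xi = np.array([0.05, 0.11, 0.15, 0.31, 0.46, 0.52, 0.70, 0.74, 0.82, 0.98, 1.17])
yi = np.array([0.956, 0.89, 0.832, 0.717, 0.571, 0.539, 0.378, 0.370, 0.306, 0.242, 0.104])

# Normal equations for P2(x) = a0 + a1*x + a2*x^2:
#   A[k][j] = sum xi^(j+k),  rhs[k] = sum xi^k * yi,   j, k = 0, 1, 2
A = np.array([[np.sum(xi ** (j + k)) for j in range(3)] for k in range(3)])
rhs = np.array([np.sum(xi ** k * yi) for k in range(3)])
a = np.linalg.solve(A, rhs)
```

The first matrix row reproduces [11, 6.01, 4.6545], and the solve reproduces the (a0, a1, a2) values above.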
