Introduction to Numerical Analysis

Numerical Analysis (Numerical Method, Computational Method):
the study of techniques by which mathematical problems are formulated so that they can be solved with arithmetic operations. Although there are many kinds of numerical methods, they share one common characteristic: they invariably involve large numbers of tedious arithmetic calculations. It is little wonder that, with the development of fast, efficient digital computers, the role of numerical methods in engineering problem solving has increased dramatically in recent years.

References:
1. Phạm Kỳ Anh, Giải tích số (Numerical Analysis), Nhà xuất bản Đại học Quốc gia Hà Nội, 2008.
2. J. Stoer and R. Bulirsch, Introduction to Numerical Analysis, Third Edition, Springer, 2002.
3. Endre Süli and David F. Mayers, An Introduction to Numerical Analysis, Cambridge University Press, 2003.
5/19/22 1
Numerical Analysis: Chapter 1
Interpolation

Nguyen Duc Manh
Last update: May 2022
Department of Mathematics and Informatics
Hanoi National University of Education
[email protected]
Interpolation problem
Problem: Consider an unknown function y = f(x) on [a, b]. However, we know the values of this function at a sequence of support points xi, i = 0, 1, ..., n, in the interval [a, b], that is, yi = f(xi) for i = 0, 1, ..., n. How can we construct an approximation of the curve of this function, or find f(x) for an arbitrary x?

[Figure: the data points (xi, yi) and an interpolating curve P approximating f.]

The form of P:
• Polynomial
• Spline
• ...
Polynomial Interpolation
Objective: We will find a general polynomial P of degree m,
Pm(x) = a0 + a1x + ... + am-1x^(m-1) + amx^m,
that interpolates the given data: Pm(xi) = yi for i = 0, 1, ..., n.

[Figure: the interpolating polynomial P passing through the points (xi, yi).]

We see there are m + 1 independent parameters a0, a1, ..., am.
Polynomial Interpolation
Since the problem imposes n + 1 conditions on P(x), it is reasonable to first consider the case m = n. Then we want to find a0, a1, ..., an such that:
a0 + a1x0 + a2x0^2 + ... + anx0^n = y0
a0 + a1x1 + a2x1^2 + ... + anx1^n = y1
...
a0 + a1xn + a2xn^2 + ... + anxn^n = yn

This is a system of n + 1 linear equations in n + 1 unknowns, and solving it is completely equivalent to solving the polynomial interpolation problem. In vector and matrix notation, the system is
Xa = y
with
X = [xi^j], i, j = 0, 1, ..., n,
a = (a0, a1, ..., an)^T, y = (y0, y1, ..., yn)^T.
The matrix X is called a Vandermonde matrix.
Polynomial Interpolation
Theorem 1: Given n + 1 distinct points x0, x1, ..., xn and n + 1 ordinates y0, y1, ..., yn, there is a polynomial P(x) of degree ≤ n that interpolates yi at xi, i = 0, 1, ..., n. This polynomial P(x) is unique among the set of all polynomials of degree at most n.
Proof. Three proofs of this important result are given. Each furnishes some needed information and has important uses in other interpolation problems.

(i) It can be shown that for the matrix X in the above system
det(X) = ∏_{0 ≤ j < i ≤ n} (xi − xj).
This shows that det(X) ≠ 0, since the points xi are distinct. Thus X is nonsingular and the system Xa = y has a unique solution a. This proves the existence and uniqueness of an interpolating polynomial of degree ≤ n.
Polynomial Interpolation
(ii) By a standard theorem of linear algebra, the system Xa = y has a unique solution if and only if the homogeneous system Xb = 0 has only the trivial solution b = 0. Therefore, assume Xb = 0 for some b. Using b, define
P(x) = b0 + b1x + ... + bn-1x^(n-1) + bnx^n.
From the system Xb = 0, we have
P(xi) = 0, i = 0, 1, ..., n.
The polynomial P(x) has n + 1 zeros and degree P(x) ≤ n. This is not possible unless P(x) ≡ 0. But then all coefficients bi = 0, i = 0, 1, ..., n, completing the proof.

(iii) We exhibit the interpolating polynomial explicitly in the following sections (Lagrange's form, Newton's forms).
Solving a system of linear equations for the
interpolating polynomial
Example 1: Find the interpolating polynomial of the function y = 3^x on [-1, 1] with three points x0 = -1, x1 = 0, x2 = 1. Use this interpolating polynomial to compute the value of this function at x = 1/2.
We have y0 = 1/3, y1 = 1, y2 = 3. We construct the system of linear equations with variables a0, a1, a2:
a0 + a1·(-1) + a2·(-1)^2 = 1/3
a0 + a1·0 + a2·0^2 = 1
a0 + a1·1 + a2·1^2 = 3
or, in matrix form, with coefficient rows (1, -1, 1), (1, 0, 0), (1, 1, 1) and right-hand side (1/3, 1, 3)^T.
Solving this system, we obtain a0 = 1, a1 = 4/3, a2 = 2/3.
Thus, we get the interpolating polynomial:
P(x) = 1 + (4/3)x + (2/3)x^2.
With x = 1/2, we have the following approximation:
3^(1/2) ≈ 1 + 2/3 + 1/6 ≈ 1.8333.
(The value 3^(1/2) = 1.7320508...; the error is about 0.1.)
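The steps of Example 1 can be sketched in code. This is a minimal illustration (not part of the lecture): it builds the Vandermonde system Xa = y for the nodes -1, 0, 1 of f(x) = 3^x and solves it with plain Gaussian elimination; the helper name `solve` is mine.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for k in range(n):
        # pivot: bring the largest entry in column k onto the diagonal
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):  # back substitution
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

nodes = [-1.0, 0.0, 1.0]
ys = [3.0 ** t for t in nodes]                    # 1/3, 1, 3
X = [[t ** j for j in range(3)] for t in nodes]   # Vandermonde matrix
a = solve(X, ys)                                  # ≈ [1, 4/3, 2/3]
P = lambda x: sum(c * x ** j for j, c in enumerate(a))
print(a, P(0.5))                                  # P(0.5) ≈ 1.8333
```

The recovered coefficients match the slide's a0 = 1, a1 = 4/3, a2 = 2/3 up to rounding.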
Lagrange's formula
Suppose that xi, i = 0, 1, ..., n, are n + 1 support points in [a, b], and yi = f(xi) is the value of the function f at xi. Then the interpolating polynomial can be written in the following form (called Lagrange's formula for the interpolating polynomial):
P(x) = Σ_{i=0}^{n} yi Pi(x),
where
Pi(x) = [(x − x0)...(x − xi-1)(x − xi+1)...(x − xn)] / [(xi − x0)...(xi − xi-1)(xi − xi+1)...(xi − xn)]
      = ∏_{k≠i} (x − xk) / ∏_{k≠i} (xi − xk).
It is easy to see that:
• Pi is a polynomial of degree at most n,
• and it satisfies the conditions
Pi(xj) = 0 for all j ≠ i, Pi(xi) = 1.
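Lagrange's formula can be evaluated directly, without ever expanding P into standard form. A minimal sketch (the helper name `lagrange_eval` is mine, not from the lecture):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for k, xk in enumerate(xs):
            if k != i:
                basis *= (x - xk) / (xi - xk)  # the factor P_i(x)
        total += yi * basis
    return total

# Nodes and values of f(x) = 3**x from the running example
val = lagrange_eval([-1.0, 0.0, 1.0], [1 / 3, 1.0, 3.0], 0.5)
print(val)  # ≈ 1.8333
```

Each term contributes yi·Pi(x), and by construction Pi vanishes at every node except xi, so the sum interpolates the data.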
Lagrange's formula
Example 1: Find Lagrange's formula for the interpolating polynomial of the function y = 3^x on [-1, 1] with three points x0 = -1, x1 = 0, x2 = 1. Use this interpolating polynomial to compute the value of this function at x = 1/2.
We have y0 = 1/3, y1 = 1, y2 = 3. Lagrange's formula is:
P(x) = (1/3)·(x − 0)(x − 1) / [(−1 − 0)(−1 − 1)] + 1·(x + 1)(x − 1) / [(0 + 1)(0 − 1)] + 3·(x + 1)(x − 0) / [(1 + 1)(1 − 0)]   (1)
     = 1 + (4/3)x + (2/3)x^2   (2)
With x = 1/2, we have
3^(1/2) ≈ 1 + 2/3 + 1/6 ≈ 1.8333.
(The value 3^(1/2) = 1.7320...; the error is about 0.1.)
Remark: we do not need to reduce the polynomial P(x) to the standard form (2). We can simply plug x = x* into the form (1) to obtain the approximation P(x*) of f(x*) for any x*. In other words, in this case we mainly work with the form (1) rather than the form (2).
Lagrange's formula
Example 2: Find Lagrange's formula for the interpolating polynomial of the function y = 2cos(πx/4) on [-2, 2] with the points x0 = -2, x1 = -4/3, x2 = 0, x3 = 4/3, x4 = 2. Use this interpolating polynomial to compute the value of this function at x = 1.
We have:
xi : -2, -4/3, 0, 4/3, 2
yi :  0,  1,   2, 1,   0

Lagrange's formula for the interpolating polynomial is:
P(x) = (9x^4 − 196x^2 + 640) / 320.
With x = 1, we have
2cos(π/4) = √2 ≈ P(1) = 1.415625.
(The value 2^(1/2) = 1.41421...; the error is about 0.001.)
Newton's interpolation formula: Finite differences
Finite difference:
Definition: Given a function f(x) of the variable x and h = Δx = constant > 0, which represents the step in x, the quantity
Δf(x) = f(x + Δx) − f(x)
is called the forward difference of degree 1 at x of the function f(x) corresponding to h.

Generalization to the forward difference of degree n:
Δ^n f(x) = Δ(Δ^(n−1) f(x)), Δ^0 f(x) = f(x).
Properties of the forward difference:
• Δ is a linear operator, that is,
Δ(αf + βg) = αΔf + βΔg for all α, β ∈ ℝ and all f, g.
• If c is a constant then Δc = 0.
• Δ^n(x^n) = n!h^n; Δ^m(x^n) = 0 for all m > n.
• If P(x) is a polynomial of degree n then, by Taylor series expansion,
ΔP(x) = P(x + h) − P(x) = Σ_{i=1}^{n} (h^i / i!) P^(i)(x).
• f(x + nh) = Σ_{i=0}^{n} C(n, i) Δ^i f(x).
• Δ^n f(x) = Σ_{i=0}^{n} (−1)^i C(n, i) f(x + (n − i)h).
Newton's interpolation formula: Finite differences
Example: Consider the function f(x) = x^3 and Δx = h = constant.

We have:
The finite difference of degree 1:
Δf(x) = f(x+h) − f(x) = (x+h)^3 − x^3 = 3x^2h + 3xh^2 + h^3.
The finite difference of degree 2:
Δ^2 f(x) = Δf(x+h) − Δf(x)
= (3(x+h)^2 h + 3(x+h)h^2 + h^3) − (3x^2 h + 3xh^2 + h^3)
= 6xh^2 + 6h^3.
The finite difference of degree 3:
Δ^3 f(x) = Δ^2 f(x+h) − Δ^2 f(x) = (6(x+h)h^2 + 6h^3) − (6xh^2 + 6h^3)
= 6h^3.
Thus
Δ^k f(x) = 0 for all k ≥ 4 and all x ∈ ℝ.
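The example above can be checked numerically. A small sketch (not from the lecture; helper names are mine) that iterates the forward difference operator and confirms that for f(x) = x^3 the degree-3 difference is the constant 6h^3 while the degree-4 difference vanishes:

```python
def forward_diff(f, h):
    """Return the degree-1 forward difference operator applied to f."""
    return lambda x: f(x + h) - f(x)

def nth_diff(f, h, n):
    """Apply the forward difference operator n times."""
    for _ in range(n):
        f = forward_diff(f, h)
    return f

h = 0.5
f = lambda x: x ** 3
d3 = nth_diff(f, h, 3)
d4 = nth_diff(f, h, 4)
print(d3(1.7), 6 * h ** 3)  # both ≈ 0.75, independent of x
print(d4(1.7))              # ≈ 0, up to rounding
```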
Newton's interpolation formula: Finite differences
Constructing forward differences
Suppose that xi+1 − xi = h = const for all i = 0, 1, ..., n − 1. Then the sequence of forward differences of the function can be determined as follows:
Δyi = yi+1 − yi
Δ^2 yi = Δ(Δyi) = Δyi+1 − Δyi
...
Δ^n yi = Δ(Δ^(n−1) yi) = Δ^(n−1) yi+1 − Δ^(n−1) yi

The differences are arranged in a triangular table, each entry being the difference of the two entries to its left:
x     y      Δy      Δ²y      Δ³y      Δ⁴y
...
xi-2  yi-2
             Δyi-2
xi-1  yi-1           Δ²yi-2
             Δyi-1            Δ³yi-2
xi    yi             Δ²yi-1            Δ⁴yi-2
             Δyi              Δ³yi-1
xi+1  yi+1           Δ²yi
             Δyi+1
xi+2  yi+2
...
Newton's interpolation formula: Finite differences
Newton's forward formula:
Suppose that xi+1 − xi = h = const > 0 for all i = 0, 1, ..., n − 1 and the points are ordered as
x0 < x1 < ... < xn-1 < xn.
The Newton forward difference form of the interpolating polynomial is:
Pfw(x) = y0 + [Δy0/(1!h)](x − x0) + [Δ²y0/(2!h²)](x − x0)(x − x1) + ... + [Δⁿy0/(n!hⁿ)](x − x0)(x − x1)...(x − xn-1).

If we change the variable
t = (x − x0)/h ⇔ x = x0 + th,
we can rewrite this interpolating polynomial as
Pfw(x0 + th) = y0 + (Δy0/1!)t + (Δ²y0/2!)t(t − 1) + ... + (Δⁿy0/n!)t(t − 1)...(t − (n − 1)).
Newton's interpolation formula: Finite differences
Newton's backward formula:
Suppose that xi+1 − xi = h = const > 0 for all i = 0, 1, ..., n − 1 and the points are ordered as
x0 < x1 < ... < xn-1 < xn.
The Newton backward difference form of the interpolating polynomial is:
Pbw(x) = yn + [Δyn-1/(1!h)](x − xn) + [Δ²yn-2/(2!h²)](x − xn)(x − xn-1) + ... + [Δⁿy0/(n!hⁿ)](x − xn)(x − xn-1)...(x − x1).

If we change the variable
t = (x − xn)/h ⇔ x = xn + th,
we can rewrite this interpolating polynomial as
Pbw(xn + th) = yn + (Δyn-1/1!)t + (Δ²yn-2/2!)t(t + 1) + ... + (Δⁿy0/n!)t(t + 1)...(t + (n − 1)).
Newton's interpolation formula: Finite differences
Example: Find the interpolating polynomial using Newton's formula for the function y = 3^x on [-1, 1] with points x0 = -1, x1 = 0, x2 = 1.
The table of forward differences of the function is:
x    y     Δy     Δ²y
-1   1/3
           2/3
0    1            4/3
           2
1    3

The Newton forward difference form of the interpolating polynomial:
Pfw(x) = 1/3 + (2/3)·(x + 1)/(1!·1) + (4/3)·(x + 1)x/(2!·1²) = 1 + (4/3)x + (2/3)x².
The Newton backward difference form of the interpolating polynomial:
Pbw(x) = 3 + 2·(x − 1)/(1!·1) + (4/3)·(x − 1)x/(2!·1²) = 1 + (4/3)x + (2/3)x².
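The construction above mechanizes easily. A hedged sketch (helper names are mine, not the lecture's) that builds the top edge of the forward-difference table for equally spaced nodes and evaluates Newton's forward formula Pfw(x0 + th) = y0 + (Δy0/1!)t + (Δ²y0/2!)t(t−1) + ...:

```python
from math import factorial

def forward_diff_table(ys):
    """Return [Δ^0 y0, Δ^1 y0, ..., Δ^n y0], the top edge of the triangle."""
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([b - a for a, b in zip(prev, prev[1:])])
    return [row[0] for row in table]

def newton_forward(xs, ys, x):
    """Evaluate Newton's forward form at x (xs equally spaced)."""
    h = xs[1] - xs[0]
    t = (x - xs[0]) / h
    total, prod = 0.0, 1.0
    for k, d in enumerate(forward_diff_table(ys)):
        total += d / factorial(k) * prod
        prod *= (t - k)          # accumulates t(t-1)...(t-k)
    return total

# y = 3**x at x = -1, 0, 1; must match P(x) = 1 + (4/3)x + (2/3)x²
print(newton_forward([-1.0, 0.0, 1.0], [1 / 3, 1.0, 3.0], 0.5))  # ≈ 1.8333
```

Note that adding one more data point only appends one difference and one term, which is precisely the advantage over the Lagrange form noted in the remarks below.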
Newton's interpolation formula: Finite differences
Remarks:
§ Newton's formula (forward or backward form) for the interpolating polynomial is only a different representation of Lagrange's formula; both describe the same polynomial.
§ When we want to compute the value of the function at x close to the first point x0 (resp. close to the last point xn), we should use Newton's forward (resp. backward) formula, since the numerical error is then smaller, especially when there are many interpolation points.
§ With the Lagrange form, it is inconvenient to pass from one interpolating polynomial to another of degree one greater. Such a comparison of interpolating polynomials of different degrees is a useful technique for deciding what degree of polynomial to use. With Newton's formula, we do not need to recompute everything when interpolation points are added.
Newton's interpolation formula: Finite differences
Example: Given the data of the function y = sin x as follows:
xi : 0.1, 0.2, 0.3, 0.4
yi : 0.09983, 0.19867, 0.29552, 0.38942

Construct Newton's formula in both forms, forward and backward. Use them to compute approximate values of sin(0.14) and sin(0.46).
The table of forward differences of the function is:
x     y        Δy       Δ²y       Δ³y
0.1   0.09983
               0.09884
0.2   0.19867           -0.00199
               0.09685            -0.00096
0.3   0.29552           -0.00295
               0.0939
0.4   0.38942

The Newton forward difference form of the interpolating polynomial:
Pfw(x) = 0.09983 + 0.09884·(x − 0.1)/(1!·0.1) − 0.00199·(x − 0.1)(x − 0.2)/(2!·0.1²) − 0.00096·(x − 0.1)(x − 0.2)(x − 0.3)/(3!·0.1³).
Newton's interpolation formula: Finite differences
If we change the variable
t = (x − x0)/h ⇔ x = 0.1 + 0.1t,
the Newton forward difference form of the interpolating polynomial becomes:
Pfw(0.1 + 0.1t) = 0.09983 + 0.09884·t/1! − 0.00199·t(t − 1)/2! − 0.00096·t(t − 1)(t − 2)/3!.
The Newton backward difference form of the interpolating polynomial:
Pbw(x) = 0.38942 + 0.0939·(x − 0.4)/(1!·0.1) − 0.00295·(x − 0.4)(x − 0.3)/(2!·0.1²) − 0.00096·(x − 0.4)(x − 0.3)(x − 0.2)/(3!·0.1³).
If we change the variable
t = (x − xn)/h ⇔ x = 0.4 + 0.1t,
it can be rewritten as:
Pbw(0.4 + 0.1t) = 0.38942 + 0.0939·t/1! − 0.00295·t(t + 1)/2! − 0.00096·t(t + 1)(t + 2)/3!.
Newton's interpolation formula: Finite differences
Using Newton's forward formula to compute sin(0.14):
sin(0.14) ≈ Pfw(0.14) = 0.1395441.
(The value sin(0.14) = 0.1395431...; the error is about 10⁻⁶.)

Using Newton's backward formula to compute sin(0.46):
sin(0.46) ≈ Pbw(0.46) = 0.4439373.
(The value sin(0.46) = 0.4439481...; the error is about 10⁻⁵.)
Interpolation Error
Theorem: Let P(x) be the Lagrange interpolating polynomial of f(x) with support points xi, i = 0, 1, ..., n, in [a, b], where f is (n+1)-times continuously differentiable. Then the error incurred when using P(x) to approximate f(x) at an arbitrary point x in [a, b] can be estimated by:
|f(x) − P(x)| ≤ M/(n+1)! · |ω(x)| for all x ∈ [a, b],
where
M = sup_{a ≤ x ≤ b} |f^(n+1)(x)|,
ω(x) = (x − x0)(x − x1)...(x − xn).

Example: using this formula, we can estimate the errors of the computations above (here n = 3 and |f^(4)(x)| = |sin x| ≤ 1, so M ≤ 1):
|sin(0.14) − Pfw(0.14)| ≤ (1/4!)|(0.14 − 0.1)(0.14 − 0.2)(0.14 − 0.3)(0.14 − 0.4)| ≈ 4.16·10⁻⁶,
|sin(0.46) − Pbw(0.46)| ≤ (1/4!)|(0.46 − 0.1)(0.46 − 0.2)(0.46 − 0.3)(0.46 − 0.4)| ≈ 3.74·10⁻⁵.
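The bound can be checked against the actual error on the sin-data example. A small sketch (mine, not the lecture's), taking M = 1 since |sin⁽⁴⁾(x)| = |sin x| ≤ 1:

```python
from math import sin, factorial

nodes = [0.1, 0.2, 0.3, 0.4]
ys = [sin(t) for t in nodes]

def lagrange_eval(xs, fs, x):
    """Evaluate the interpolating polynomial through (xs, fs) at x."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        basis = 1.0
        for k, xk in enumerate(xs):
            if k != i:
                basis *= (x - xk) / (xi - xk)
        total += fi * basis
    return total

def omega(xs, x):
    """ω(x) = (x - x0)(x - x1)...(x - xn)."""
    p = 1.0
    for xk in xs:
        p *= (x - xk)
    return p

for x in (0.14, 0.46):
    err = abs(sin(x) - lagrange_eval(nodes, ys, x))
    bound = abs(omega(nodes, x)) / factorial(len(nodes))  # M = 1, (n+1)! = 4!
    print(x, err, bound)  # the actual error stays below the bound
```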
Runge's phenomenon
Runge's phenomenon is a problem of oscillation at the edges of
an interval that occurs when using polynomial interpolation with
polynomials of high degree over a set of equidistant interpolation
points. It was discovered by Carl David Tolmé Runge (1901) when
exploring the behavior of errors when using polynomial
interpolation to approximate certain functions. The discovery was
important because it shows that going to higher degrees does not
always improve accuracy. The phenomenon is similar to the Gibbs
phenomenon in Fourier series approximations.

Consider the case where one desires to interpolate through n+1


equidistant points of a function f(x) using the n-degree polynomial
Pn(x) that passes through those points. Naturally, one might expect
from Weierstrass' theorem that using more points would lead to a
more accurate reconstruction of f(x). However, this particular set of
polynomial functions Pn(x) is not guaranteed to have the property of
uniform convergence; the theorem only states that a set of
polynomial functions exists, without providing a general method of
finding one.
Runge's phenomenon
The Pn(x) produced in this manner may in fact diverge away from f(x) as n increases; this typically occurs in an oscillating pattern that magnifies near the ends of the interpolation points. This phenomenon is attributed to Runge.
Example: Consider the Runge function
f(x) = 1 / (1 + 25x²).
Runge found that if this function is interpolated at equidistant points xi between −1 and 1,
xi = 2i/n − 1, i = 0, 1, ..., n,
with a polynomial Pn(x) of degree ≤ n, the resulting interpolant oscillates toward the ends of the interval, i.e. close to −1 and 1. It can even be proven that the interpolation error increases without bound as the degree of the polynomial is increased:
lim_{n→∞} max_{−1≤x≤1} |f(x) − Pn(x)| = +∞.
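This divergence is easy to observe experimentally. A small sketch (mine, not from the lecture) that interpolates the Runge function at equidistant nodes xi = 2i/n − 1 and measures the maximum error over a fine grid on [−1, 1]:

```python
def lagrange_eval(xs, fs, x):
    """Evaluate the interpolating polynomial through (xs, fs) at x."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        basis = 1.0
        for k, xk in enumerate(xs):
            if k != i:
                basis *= (x - xk) / (xi - xk)
        total += fi * basis
    return total

f = lambda x: 1.0 / (1.0 + 25.0 * x * x)  # the Runge function

def max_error(n, samples=1001):
    """Max |f - P_n| on a grid, P_n interpolating at equidistant nodes."""
    xs = [2.0 * i / n - 1.0 for i in range(n + 1)]
    fs = [f(t) for t in xs]
    grid = [2.0 * j / (samples - 1) - 1.0 for j in range(samples)]
    return max(abs(f(x) - lagrange_eval(xs, fs, x)) for x in grid)

for n in (5, 10, 20):
    print(n, max_error(n))  # the error grows with n instead of shrinking
```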
Runge's phenomenon

[Figures: interpolants of the Runge function at equidistant nodes for increasing degree, oscillating ever more strongly near the interval ends.]
Runge's phenomenon
This shows that high-degree polynomial interpolation at equidistant points can be troublesome.
Why?
Runge's phenomenon is the consequence of two properties of this problem:
- The magnitude of the derivatives of this particular function grows quickly as n increases.
- The equidistance of the points leads to a Lebesgue constant that increases quickly as n increases.

How can this problem be solved?
• Change of interpolation points: use the Chebyshev nodes
xi = cos((2i − 1)π / (2n)), i = 1, ..., n, on [−1, 1].
• Use of piecewise polynomials: spline curves
• Constrained minimization
• Least squares fitting
• Bernstein polynomials
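The effect of the first remedy can be sketched numerically (this comparison is mine, not from the lecture): interpolating the Runge function at the Chebyshev nodes xi = cos((2i − 1)π/(2n)) keeps the maximum error small, while the equidistant choice does not.

```python
from math import cos, pi

def lagrange_eval(xs, fs, x):
    """Evaluate the interpolating polynomial through (xs, fs) at x."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        basis = 1.0
        for k, xk in enumerate(xs):
            if k != i:
                basis *= (x - xk) / (xi - xk)
        total += fi * basis
    return total

f = lambda x: 1.0 / (1.0 + 25.0 * x * x)  # the Runge function

def max_error(xs, samples=1001):
    """Max |f - P| on a grid over [-1, 1] for nodes xs."""
    fs = [f(t) for t in xs]
    grid = [2.0 * j / (samples - 1) - 1.0 for j in range(samples)]
    return max(abs(f(x) - lagrange_eval(xs, fs, x)) for x in grid)

n = 21  # number of nodes
equi = [2.0 * i / (n - 1) - 1.0 for i in range(n)]
cheb = [cos((2 * i - 1) * pi / (2 * n)) for i in range(1, n + 1)]
print(max_error(equi), max_error(cheb))  # Chebyshev error is far smaller
```

The Chebyshev nodes cluster toward ±1, which keeps |ω(x)| (and the Lebesgue constant) under control near the interval ends.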
Runge's phenomenon
Interpolating the Runge function with Chebyshev nodes:
xi = cos((2i − 1)π / (2n)), i = 1, ..., n.

[Figure: the Chebyshev interpolant of the Runge function, free of the edge oscillations.]
Newton's Interpolation Formula: Divided Differences
Divided differences:
Definition: Let f(x) be a function of x and let the support points be ordered as x0 < x1 < ... < xn-1 < xn.

We call:
f(xi, xi+1) = [f(xi+1) − f(xi)] / (xi+1 − xi)
the divided difference of order 1 of the function f at xi, xi+1;

f(xi, xi+1, xi+2) = [f(xi+1, xi+2) − f(xi, xi+1)] / (xi+2 − xi)
the divided difference of order 2 of the function f at xi, xi+1, xi+2;

f(xi, xi+1, ..., xi+k) = [f(xi+1, xi+2, ..., xi+k) − f(xi, xi+1, ..., xi+k-1)] / (xi+k − xi)
the divided difference of order k of the function f at xi, xi+1, ..., xi+k.
Newton's Interpolation Formula: Divided Differences
Properties of divided differences:
§ Property 1:
f(x0, x1, ..., xk) = Σ_{i=0}^{k} f(xi) / ω′(xi),
where
ω(x) = (x − x0)(x − x1)...(x − xk) = ∏_{i=0}^{k} (x − xi).
§ Property 2: The divided difference is a linear operator.

§ Property 3: The divided difference is a symmetric function, i.e., if σ is a permutation of the set {0, 1, ..., k}, then
f(xσ(0), xσ(1), ..., xσ(k)) = f(x0, x1, ..., xk).

§ Property 4: The divided difference of order m + 1 of a polynomial of degree m is identically 0. That is, P(x0, x1, ..., xm, x) = 0 for all x.
Newton's Interpolation Formula: Divided Differences

Format for constructing divided differences of f(x)

x     y     DD order 1      DD order 2          DD order 3                DD order 4
...
xi-2  yi-2
            y(xi-2, xi-1)
xi-1  yi-1                  y(xi-2, xi-1, xi)
            y(xi-1, xi)                         y(xi-2, xi-1, xi, xi+1)
xi    yi                    y(xi-1, xi, xi+1)                             y(xi-2, xi-1, xi, xi+1, xi+2)
            y(xi, xi+1)                         y(xi-1, xi, xi+1, xi+2)
xi+1  yi+1                  y(xi, xi+1, xi+2)
            y(xi+1, xi+2)
xi+2  yi+2
...
Newton's Interpolation Formula: Divided Differences
Based on this table, we can construct Newton's divided difference formula for the interpolating polynomial as follows.

- Newton's forward form:
Pfw(x) = y0 + f(x0, x1)(x − x0) + f(x0, x1, x2)(x − x0)(x − x1) + ...
       + f(x0, x1, ..., xn)(x − x0)(x − x1)...(x − xn-1).

- Newton's backward form:
Pbw(x) = yn + f(xn-1, xn)(x − xn) + f(xn-2, xn-1, xn)(x − xn)(x − xn-1) + ...
       + f(x0, x1, ..., xn)(x − xn)(x − xn-1)...(x − x1).
Newton's Interpolation Formula: Divided Differences
Example: Given the data (xi, yi) of a function as follows:
xi : -4, -1, 0, 2, 5
yi : 1245, 33, 5, 9, 1335

Construct Newton's divided difference formula for the interpolating polynomial.

The table of divided differences of the function is:
x    y      DD order 1   DD order 2   DD order 3   DD order 4
-4   1245
            -404
-1   33                  94
            -28                       -14
0    5                   10                        3
            2                         13
2    9                   88
            442
5    1335

Newton's forward form:
Pfw(x) = 1245 − 404(x + 4) + 94(x + 4)(x + 1) − 14(x + 4)(x + 1)x + 3(x + 4)(x + 1)x(x − 2).
Newton's Interpolation Formula: Divided Differences
Using the same table of divided differences, Newton's backward form is:
Pbw(x) = 1335 + 442(x − 5) + 88(x − 5)(x − 2) + 13(x − 5)(x − 2)x + 3(x − 5)(x − 2)x(x + 1).
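The table construction and the forward form can be sketched in code. This is a hedged illustration (function names are mine): it computes the coefficients f(x0), f(x0, x1), ..., f(x0, ..., xn) in place and evaluates the forward form with nested multiplication.

```python
def divided_differences(xs, ys):
    """Return [f(x0), f(x0,x1), ..., f(x0,...,xn)], the table's top edge."""
    coeffs = list(ys)
    n = len(xs)
    for order in range(1, n):
        # update from the bottom so lower-order entries survive
        for i in range(n - 1, order - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - order])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate c0 + c1(x-x0) + c2(x-x0)(x-x1) + ... by nesting."""
    p = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        p = p * (x - xs[i]) + coeffs[i]
    return p

xs = [-4.0, -1.0, 0.0, 2.0, 5.0]
ys = [1245.0, 33.0, 5.0, 9.0, 1335.0]
c = divided_differences(xs, ys)
print(c)                        # [1245.0, -404.0, 94.0, -14.0, 3.0]
print(newton_eval(xs, c, 2.0))  # reproduces y = 9 at the node x = 2
```

The coefficients match the top edge of the slide's table, and evaluation at any node reproduces the given data.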
Newton's Interpolation Formula: Divided Differences
Algorithm 1: Compute the value of the interpolating polynomial at an arbitrary point x.

[Figure: the algorithm listing and the tableau of values Ti,k. The arrows indicate how the additional upward diagonal Ti,0, Ti,1, ..., Ti,i can be constructed when one more support point (xi, fi) is added.]

After the inner loop has terminated, t[j] = Ti,i-j, 0 ≤ j ≤ i. The desired value Tnn = P(x) of the interpolating polynomial can be found in t[0].
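The algorithm listing itself did not survive extraction; the caption's description of t[j] = Ti,i-j matches Neville's scheme as presented in reference 2 (Stoer & Bulirsch). A sketch under that assumption, with the diagonal stored in a single array t that is updated in place as each support point is processed:

```python
def neville(xs, fs, x):
    """Value of the interpolating polynomial at x, Neville-style.

    After processing node i, t[j] holds T_{i,i-j}; the answer is t[0].
    """
    n = len(xs) - 1
    t = [0.0] * (n + 1)
    for i in range(n + 1):
        t[i] = fs[i]  # T_{i,0} = f_i starts the new upward diagonal
        for j in range(i - 1, -1, -1):
            # T_{i,k} from T_{i,k-1} and T_{i-1,k-1}, with k = i - j
            t[j] = t[j + 1] + (t[j + 1] - t[j]) * (x - xs[i]) / (xs[i] - xs[j])
    return t[0]  # T_{n,n} = P(x)

# y = 3**x at x = -1, 0, 1 from the earlier examples
print(neville([-1.0, 0.0, 1.0], [1 / 3, 1.0, 3.0], 0.5))  # ≈ 1.8333
```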
Newton's Interpolation Formula: Divided Differences
Algorithm 2: Compute the divided differences and evaluate the interpolating polynomial at a point x.

[Figure: the algorithm listing computing the coefficients a[i] = f(x0, ..., xi).]

Afterwards, the interpolation polynomial may be evaluated for any desired argument x:

p := a[n];
for i := n - 1 step -1 until 0 do
  p := p * (x - x[i]) + a[i];

The output p is the value of the interpolating polynomial at x.
Determining Interpolating Cubic Spline Functions
Spline functions yield smooth interpolating curves that are less likely to exhibit the large oscillations characteristic of high-degree polynomials. They find applications in graphics and, increasingly, in numerical methods. For instance, spline functions may be used as trial functions in connection with the Rayleigh-Ritz-Galerkin method for solving boundary value problems of ordinary and partial differential equations. More recently, they have also been used in signal processing.

Spline functions (splines) are associated with a partition
Δ: a = x0 < x1 < ... < xn-1 < xn = b
of an interval [a, b] by support points xi, i = 0, 1, ..., n. They are piecewise polynomial functions S: [a, b] → ℝ with certain smoothness properties: the restrictions S|Ii of S to Ii := [xi-1, xi], i = 1, 2, ..., n, are polynomials.

Definition: A cubic spline (function) SΔ on Δ is a real function SΔ: [a, b] → ℝ with the properties:
(a) SΔ ∈ C²[a, b], that is, SΔ is twice continuously differentiable on [a, b];
(b) SΔ coincides on every subinterval [xi, xi+1], i = 0, 1, ..., n − 1, with a polynomial of degree (at most) three.
Determining Interpolating Cubic Spline Functions
Consider a finite sequence Y := (y0, y1, ..., yn) of n + 1 real numbers. We denote by
SΔ(Y; ·)
an interpolating spline function SΔ with SΔ(Y; xi) = yi for i = 0, 1, ..., n.

Such an interpolating cubic spline function SΔ(Y; ·) is not uniquely determined by the sequence Y of support ordinates. Roughly speaking, there are still two degrees of freedom left, calling for suitable additional requirements. The following three additional requirements are most commonly considered:

(a) S″Δ(Y; a) = S″Δ(Y; b) = 0;
(b) S(k)Δ(Y; a) = S(k)Δ(Y; b) for k = 0, 1, 2, i.e., SΔ(Y; ·) is periodic;     (*)
(c) S′Δ(Y; a) = y′0, S′Δ(Y; b) = y′n for given numbers y′0, y′n.

We will confirm that each of these three sets of conditions by itself ensures uniqueness of the interpolating spline function SΔ(Y; ·). A prerequisite of condition (b) is, of course, that yn = y0.
Determining Interpolating Cubic Spline Functions
In what follows, Δ = {xi, i = 0, 1, ..., n} will be a fixed partition of the interval [a, b] by support points a = x0 < x1 < ... < xn = b, and Y = (yi), i = 0, 1, ..., n, will be a sequence of n + 1 prescribed real numbers. In addition, let Ij+1 be the subinterval Ij+1 := [xj, xj+1], j = 0, 1, ..., n − 1, and hj+1 := xj+1 − xj its length.

We refer to the values of the second derivative at the points xj ∈ Δ,
Mj := S″Δ(Y; xj), j = 0, 1, ..., n,     (1)
of the desired spline function SΔ(Y; ·) as the moments Mj of SΔ(Y; ·). We will show that spline functions are readily characterized by their moments, and that the moments of the interpolating spline function can be calculated as the solution of a system of linear equations.
Note that the second derivative S″Δ(Y; x) of the spline function coincides with a linear function on each interval [xj, xj+1], j = 0, ..., n − 1, and that these linear functions can be described in terms of the moments Mj of SΔ(Y; ·):
S″Δ(Y; x) = Mj (xj+1 − x)/hj+1 + Mj+1 (x − xj)/hj+1 for x ∈ [xj, xj+1].
Determining Interpolating Cubic Spline Functions
By integration,
S′Δ(Y; x) = −Mj (xj+1 − x)²/(2hj+1) + Mj+1 (x − xj)²/(2hj+1) + Aj,     (2)
SΔ(Y; x) = Mj (xj+1 − x)³/(6hj+1) + Mj+1 (x − xj)³/(6hj+1) + Aj(x − xj) + Bj,

for x ∈ [xj, xj+1], j = 0, 1, ..., n − 1, where Aj, Bj are constants of integration. From SΔ(Y; xj) = yj, SΔ(Y; xj+1) = yj+1 we obtain the following equations for these constants Aj and Bj:
Mj hj+1²/6 + Bj = yj,
Mj+1 hj+1²/6 + Aj hj+1 + Bj = yj+1.
Consequently,
Bj = yj − Mj hj+1²/6,     (3)
Aj = (yj+1 − yj)/hj+1 − hj+1(Mj+1 − Mj)/6.
Determining Interpolating Cubic Spline Functions
This yields the following representation of the spline function in terms of its moments:
SΔ(Y; x) = αj + βj(x − xj) + γj(x − xj)² + δj(x − xj)³ for x ∈ [xj, xj+1],     (4)
where
αj := yj,     γj := Mj/2,
βj := S′Δ(Y; xj) = −Mj hj+1/2 + Aj = (yj+1 − yj)/hj+1 − (2Mj + Mj+1)hj+1/6,
δj := S‴Δ(Y; xj⁺)/6 = (Mj+1 − Mj)/(6hj+1).
Thus SΔ(Y; ·) has been characterized by its moments Mj. The task of calculating these moments will now be addressed.
Determining Interpolating Cubic Spline Functions
The continuity of S′Δ(Y; ·) at the interior points x = xj, j = 1, 2, ..., n − 1, yields n − 1 equations for the moments Mj. By (2) and (3),
S′Δ(Y; x) = −Mj (xj+1 − x)²/(2hj+1) + Mj+1 (x − xj)²/(2hj+1) + (yj+1 − yj)/hj+1 − hj+1(Mj+1 − Mj)/6.
For j = 1, 2, ..., n − 1, we therefore have
S′Δ(Y; xj⁻) = (yj − yj-1)/hj + (hj/3)Mj + (hj/6)Mj-1,
S′Δ(Y; xj⁺) = (yj+1 − yj)/hj+1 − (hj+1/3)Mj − (hj+1/6)Mj+1,
and since S′Δ(Y; xj⁺) = S′Δ(Y; xj⁻),
(hj/6)Mj-1 + ((hj + hj+1)/3)Mj + (hj+1/6)Mj+1 = (yj+1 − yj)/hj+1 − (yj − yj-1)/hj     (5)
for j = 1, 2, ..., n − 1.
Determining Interpolating Cubic Spline Functions
These are n − 1 equations for the n + 1 unknown moments. Two further equations can be gained separately from each of the side conditions (a), (b), and (c) listed in (*).
Case (a): S″Δ(Y; a) = M0 = 0 = Mn = S″Δ(Y; b).
Case (b): S″Δ(Y; a) = S″Δ(Y; b) ⇒ M0 = Mn,
S′Δ(Y; a) = S′Δ(Y; b) ⇒ (hn/6)Mn-1 + ((hn + h1)/3)Mn + (h1/6)M1
= (y1 − yn)/h1 − (yn − yn-1)/hn.
The latter condition is identical with (5) for j = n if we put
hn+1 = h1, Mn+1 = M1, yn+1 = y1.
Recall that (b) in the condition (*) requires yn = y0.
Case (c): S′Δ(Y; a) = y′0 ⇒ (h1/3)M0 + (h1/6)M1 = (y1 − y0)/h1 − y′0,
S′Δ(Y; b) = y′n ⇒ (hn/6)Mn-1 + (hn/3)Mn = y′n − (yn − yn-1)/hn.
Determining Interpolating Cubic Spline Functions
The last two equations, as well as those in (5), can be written in a common format:
μj Mj-1 + 2Mj + λj Mj+1 = dj, j = 1, 2, ..., n − 1,
upon introducing the abbreviations (j = 1, 2, ..., n − 1)
λj := hj+1/(hj + hj+1),  μj := 1 − λj = hj/(hj + hj+1),
dj := 6/(hj + hj+1) · [(yj+1 − yj)/hj+1 − (yj − yj-1)/hj].     (6)
In case (a), we define in addition
λ0 := 0, d0 := 0, μn := 0, dn := 0,     (7)
and in case (c)
λ0 := 1, d0 := (6/h1)[(y1 − y0)/h1 − y′0],
μn := 1, dn := (6/hn)[y′n − (yn − yn-1)/hn].     (8)
Determining Interpolating Cubic Spline Functions
This leads in cases (a) and (c) to a system of linear equations for the moments Mi that reads in matrix notation:

| 2    λ0                |   | M0 |   | d0 |
| μ1   2    λ1           |   | M1 |   | d1 |
|      ...  ...  ...     | · | ⋮  | = | ⋮  |     (9)
|      μn-1 2    λn-1    |   |    |   |    |
|           μn   2       |   | Mn |   | dn |

The coefficients λi, μi, di in (9) and (11) are well defined by (6) and the additional definitions (7), (8), and (10), respectively. The periodic case (b) also requires further definitions,
λn := h1/(hn + h1),  μn := 1 − λn = hn/(hn + h1),
dn := 6/(hn + h1) · [(y1 − yn)/h1 − (yn − yn-1)/hn],     (10)
Determining Interpolating Cubic Spline Functions
which then lead to the following linear system of equations for the moments M1, M2, ..., Mn (= M0):

| 2    λ1             μ1 |   | M1 |   | d1 |
| μ2   2    λ2           |   | M2 |   | d2 |
|      μ3   ...  ...     | · | ⋮  | = | ⋮  |     (11)
|           ...  2  λn-1 |   |    |   |    |
| λn             μn  2   |   | Mn |   | dn |

Note in particular that in (9) and (11)
λi ≥ 0, μi ≥ 0, λi + μi = 1,     (12)
for all coefficients λi, μi, and that these coefficients depend only on the location of the knots xj ∈ Δ and not on the prescribed values yi ∈ Y or on y′0, y′n in case (c). We will use this observation when proving the following:
Theorem. The systems (9) and (11) of linear equations are nonsingular for any partition Δ of [a, b].
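Case (a) can be sketched end to end (this code is mine, not the lecture's): assemble system (9) with λ0 = d0 = μn = dn = 0 and solve the tridiagonal system with the Thomas algorithm, yielding the moments Mj of the natural cubic spline.

```python
def natural_spline_moments(xs, ys):
    """Moments M_0..M_n of the natural cubic spline (side condition (a))."""
    n = len(xs) - 1
    h = [xs[j + 1] - xs[j] for j in range(n)]  # h[j] = x_{j+1} - x_j
    # Row j of (9): sub[j]*M_{j-1} + 2*M_j + sup[j]*M_{j+1} = d[j].
    sub = [0.0] * (n + 1)   # μ_j (rows 0 and n stay 0 for case (a))
    sup = [0.0] * (n + 1)   # λ_j
    d = [0.0] * (n + 1)
    for j in range(1, n):
        sub[j] = h[j - 1] / (h[j - 1] + h[j])
        sup[j] = h[j] / (h[j - 1] + h[j])
        d[j] = 6.0 / (h[j - 1] + h[j]) * (
            (ys[j + 1] - ys[j]) / h[j] - (ys[j] - ys[j - 1]) / h[j - 1])
    # Thomas algorithm: forward elimination, then back substitution
    diag = [2.0] * (n + 1)
    for j in range(1, n + 1):
        w = sub[j] / diag[j - 1]
        diag[j] -= w * sup[j - 1]
        d[j] -= w * d[j - 1]
    M = [0.0] * (n + 1)
    M[n] = d[n] / diag[n]
    for j in range(n - 1, -1, -1):
        M[j] = (d[j] - sup[j] * M[j + 1]) / diag[j]
    return M

# For data on a straight line, all second derivatives (moments) must vanish.
print(natural_spline_moments([0.0, 1.0, 2.5, 4.0], [1.0, 3.0, 6.0, 9.0]))
```

Once the moments are known, the spline itself is evaluated piecewise via the coefficients αj, βj, γj, δj of representation (4).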
Convergence Properties of Cubic Spline Functions
Interpolating polynomials may fail to converge to a function f whose values they interpolate, even if the partitions Δ are chosen arbitrarily fine. In contrast, we will show in this section that, under mild conditions on the function f and the partitions Δ, the interpolating spline functions do converge towards f as the fineness of the underlying partitions approaches zero.