Advanced Numerical Analysis: Data Interpolation and Smoothing
List of Symbols
We shall use standard set-theoretic notation
∪, ∩, ⊆, ⊂, ∈
to denote ‘union’, ‘intersection’, ‘subset of’, ‘proper subset of’, ‘belong(s) to’, respectively. For the sets V1 and V2, the set {v ∈ V1 : v ∉ V2} is denoted by V1 \ V2.
The following standard notations and symbols will be used without defining them
explicitly:
N : set of all positive integers
Z : set of all integers
R : field of all real numbers
Q : field of all rational numbers
C : field of all complex numbers
F : field of real or complex numbers
=⇒ : implies
⇐⇒ : if and only if
→ : maps to
The Kronecker symbol is given by
\[
\delta_{ij} =
\begin{cases}
1 & \text{if } i = j,\\
0 & \text{otherwise.}
\end{cases}
\]
The characteristic function χS of a set S is defined as
\[
\chi_S(x) =
\begin{cases}
1 & \text{if } x \in S,\\
0 & \text{otherwise.}
\end{cases}
\]
The absolute value of a number a ∈ F is indicated as |a|. If a ∈ R,
\[
|a| =
\begin{cases}
a & \text{if } a \ge 0,\\
-a & \text{if } a < 0.
\end{cases}
\]
Note: Bold face is used when a term is defined. Italics are used to emphasise a term or statement. The big ‘O’ notation is used to describe the behaviour of a function f near some real number a (most often, a = 0). We define
\[
f(x) = O(g(x)) \quad \text{as } x \to a
\]
if and only if there exist positive numbers δ and M such that
\[
|f(x)| \le M\,|g(x)| \quad \text{for } |x - a| < \delta.
\]
Elementary Definitions
A vector space over a field F is a set V together with two operations, vector addition, denoted v + w ∈ V for v, w ∈ V, and scalar multiplication, denoted av ∈ V for a ∈ F and v ∈ V, such that the following axioms are satisfied:
1. v + w = w + v for all v, w ∈ V.
2. u + (v + w) = (u + v) + w for all u, v, w ∈ V.
The elements of a vector space are called vectors. A subset S of a vector space V
is a subspace of V if it is a vector space with respect to the vector space operations
on V . A subspace which is a proper subset of the whole space is called a proper
subspace.
If v1 , . . . , vn are some elements of a vector space V , by a linear combination of
v1 , . . . , vn we mean an element in V of the form a1 v1 + · · · + an vn , with ai ∈ F, i =
1, . . . , n.
Let S be a subset of V. The set of all linear combinations of elements of S is called the span of S and is denoted by span S.
A subset S = {v_i}_{i=1}^{n} of V is said to be linearly independent if and only if
\[
a_1 v_1 + \cdots + a_n v_n = 0 \implies a_i = 0, \quad i = 1, \ldots, n.
\]
Let {v_i}_{i=1}^{n} be a basis of V. Every v ∈ V has a unique representation v = a_1 v_1 + · · · + a_n v_n with a_i ∈ F; for each i define f_i(v) = a_i. Then f_i is a linear functional for each i. The linear functionals f_1, . . . , f_n are called coordinate functionals on V with respect to the basis {v_i}_{i=1}^{n}.
We denote the space of polynomials of degree at most m ∈ N on R by
\[
P_m(\mathbb{R}) = \Big\{ p : p(x) = \sum_{i=0}^{m} a_i x^i,\; x \in \mathbb{R} \Big\}.
\]
A vector space V together with an inner product ⟨·, ·⟩ is called an inner product space.
Two vectors v and w in an inner product space are said to be orthogonal if ⟨v, w⟩ = 0.
Two subspaces V1 and V2 are orthogonal if ⟨v1, v2⟩ = 0 for all v1 ∈ V1 and v2 ∈ V2. The sum of two orthogonal subspaces V1 and V2 is termed the orthogonal sum and will be indicated as V = V1 ⊕ V2. The subspace V2 is called the orthogonal complement of V1 in V. Equivalently, V1 is the orthogonal complement of V2 in V.
A norm ‖ · ‖ on a vector space V is a function from V to R such that for every v, w ∈ V and a ∈ F the following three properties are fulfilled:
1. ‖v‖ ≥ 0, and ‖v‖ = 0 if and only if v = 0.
2. ‖av‖ = |a| ‖v‖.
3. ‖v + w‖ ≤ ‖v‖ + ‖w‖.
The space Rⁿ, for example, is an inner product space with
\[
\langle x, y \rangle = x_1 y_1 + \cdots + x_n y_n,
\]
with x = (x_1, . . . , x_n) and y = (y_1, . . . , y_n). The norm ‖x‖ induced by the inner product is
\[
\|x\| = \langle x, x \rangle^{1/2} = (x_1 x_1 + \cdots + x_n x_n)^{1/2} = \big(|x_1|^2 + \cdots + |x_n|^2\big)^{1/2}.
\]
The space L2 [a, b] is an inner product space of functions on [a, b] with inner
product defined by
\[
\langle x, y \rangle = \int_a^b x(t)\, y(t)\, dt
\]
and norm
\[
\|x\| = \langle x, x \rangle^{1/2} = \Big( \int_a^b |x(t)|^2\, dt \Big)^{1/2}.
\]
The space C^k(a, b) is the space of functions on [a, b] having continuous derivatives up to order k ∈ N. The space of continuous functions on [a, b] is denoted as C^0(a, b). A natural norm on the space C^0(S) of continuous functions on S is the maximum or infinity norm
\[
\|f\|_\infty = \max_{x \in S} |f(x)|.
\]
Suppose that S ⊂ R. We say that a certain property P holds for almost every x ∈ S, if
there exists a set A ⊂ S of measure zero such that the property P holds for all x ∈ S\A.
Given a function f with values y_i = f(x_i) at a set of distinct points {x_i}_{i=0}^{n}, we seek a simple function p such that
\[
p(x_i) = y_i, \quad i = 0, \ldots, n. \tag{1}
\]
A function p satisfying the properties given in (1) is said to interpolate the function f
at the points {xi }ni=0 and is called an interpolant. In practice, the simple function p
is a polynomial, a piecewise polynomial or a rational function. Different interpola-
tion methods arise from the choice of the interpolating function. If the interpolating
function is chosen to be a polynomial, the interpolation is called polynomial inter-
polation.
A polynomial interpolation problem can be stated as: Given a set of n + 1 pairs of
real numbers {(xi , yi )}ni=0 , where {xi }ni=0 is a set of distinct points, find a polynomial
p_m ∈ P_m(R) such that y_i = p_m(x_i), i = 0, . . . , n. If n ≠ m, the problem is over- or under-determined. The following theorem holds if n = m. We refer to [1, 2] for a
proof.
Theorem 1. If {xi }ni=0 is a set of distinct points, for an arbitrary set {yi }ni=0 of n + 1
numbers there exists a unique polynomial pn ∈ Pn (R) such that pn (xi ) = yi , i =
0, . . . , n.
Assume that we are given a function f in [a, b], and we have an interpolating
polynomial pn of degree n on the set of distinct points {xi }ni=0 of [a, b]. Clearly, the
function f and the polynomial pn have exactly the same values at the interpolation
points {xi }ni=0 . However, if we pick some arbitrary point x ∈ [a, b] which is not
an interpolating point, the function value f (x) may be quite different from pn (x).
Under the assumption that the function f is sufficiently smooth, the interpolation
error is estimated in the following theorem, the proof of which can be found in many
numerical analysis textbooks, e.g., [2, 3].
Theorem 2. Let f ∈ C^{n+1}[a, b] and let p_n ∈ P_n(R) interpolate f at the distinct points {x_i}_{i=0}^{n} ⊂ [a, b]. Then for every x ∈ [a, b] there exists ξ ∈ [a, b] such that
\[
f(x) - p_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\, \pi_{n+1}(x), \tag{2}
\]
where π_{n+1}(x) = ∏_{i=0}^{n} (x − x_i).
The Lagrange form of the interpolating polynomial is obtained by using the Lagrange
basis for the vector space Pn (R).
Explicitly, the Lagrange basis functions with respect to the set of distinct points {x_i}_{i=0}^{n} are
\[
l_i(x) = \prod_{\substack{j=0 \\ j \ne i}}^{n} \frac{x - x_j}{x_i - x_j}, \qquad i = 0, \ldots, n.
\]
In this case the interpolant p_n is given by
\[
p_n(x) = \sum_{i=0}^{n} y_i\, l_i(x).
\]
Example 1. Write out the Lagrange basis appropriate to the problem of interpolating
the following table and give the Lagrange form of interpolating polynomial:
x       1/3    1/4     1
f(x)     2     −1      7
The Lagrange basis functions are
\[
l_0(x) = -18\Big(x - \tfrac{1}{4}\Big)(x - 1), \quad
l_1(x) = 16\Big(x - \tfrac{1}{3}\Big)(x - 1), \quad
l_2(x) = 2\Big(x - \tfrac{1}{3}\Big)\Big(x - \tfrac{1}{4}\Big).
\]
Therefore, the interpolating polynomial in Lagrange form is
\[
p_2(x) = -36\Big(x - \tfrac{1}{4}\Big)(x - 1) - 16\Big(x - \tfrac{1}{3}\Big)(x - 1) + 14\Big(x - \tfrac{1}{3}\Big)\Big(x - \tfrac{1}{4}\Big).
\]
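As a quick check, here is a short MATLAB sketch (not part of the original example) that evaluates this Lagrange form and compares it with MATLAB's polyfit; the variable names xd, yd and the handle p2 are ours.

% Data of Example 1 and the Lagrange form written out above.
xd = [1/3, 1/4, 1];
yd = [2, -1, 7];
p2 = @(x) -36*(x - 1/4).*(x - 1) - 16*(x - 1/3).*(x - 1) ...
          + 14*(x - 1/3).*(x - 1/4);
p2(xd)                              % reproduces the data values 2, -1, 7
% polyfit/polyval give the same polynomial, expressed in the monomial basis.
c = polyfit(xd, yd, numel(xd) - 1);
polyval(c, xd)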
To derive the Newton form, let p_{n-1} ∈ P_{n-1}(R) interpolate f at x_0, . . . , x_{n-1} and write p_n(x) = p_{n-1}(x) + q_n(x). Using the fact that p_{n-1}(x_i) = y_i, we have q_n(x_i) = p_n(x_i) − p_{n-1}(x_i) = 0, i = 0, . . . , n − 1. Hence q_n is a polynomial of degree at most n with zeros at x_0, x_1, . . . , x_{n-1}, and so q_n can be written as q_n(x) = b_n ∏_{i=0}^{n-1}(x − x_i), where b_n is to be determined. Assuming that y_i = f(x_i), i = 0, . . . , n, the coefficient b_n can be found by setting p_n(x_n) = f(x_n). Thereby
\[
b_n = \frac{f(x_n) - p_{n-1}(x_n)}{\pi_n(x_n)}, \tag{3}
\]
where π_n(x) = ∏_{i=0}^{n-1}(x − x_i) and π_0(x) = 1. The coefficient b_n is called the n-th Newton divided difference and depends on the points x_0, x_1, . . . , x_n. So it is denoted by
\[
b_n = f[x_0, x_1, \ldots, x_n], \quad n \ge 1. \tag{4}
\]
The same procedure applied to p_{n-1}, p_{n-2}, . . . gives b_k = f[x_0, . . . , x_k] for each k. Using recursion on n, we obtain the formula for the interpolation polynomial in Newton form
\[
p_n(x) = \sum_{i=0}^{n} b_i \pi_i(x) = \sum_{i=0}^{n} \pi_i(x)\, f[x_0, \ldots, x_i], \tag{5}
\]
where p0 (x) = f (x0 ) = f [x0 ] = y0 and π0 = 1. The Lagrange and Newton forms yield
the same interpolating polynomial due to the uniqueness of the interpolating poly-
nomial. The interpolating polynomial in the form (5) is called the Newton divided
difference formula. There are many properties of the Newton divided differences
which make them computationally efficient [1, 3].
Example 2. Using the Newton form, find the interpolating polynomial of least de-
gree for the table:
x     0     1    −1     2    −2
y    −5    −3   −15    39    −9
The first divided differences are f[x_0] = −5 and f[x_0, x_1] = (−3 − (−5))/(1 − 0) = 2, so
\[
p_1(x) = -5 + 2x.
\]
One big advantage of the Newton form of polynomial interpolation is that the coefficients b_i = f[x_0, x_1, . . . , x_i] can be computed using an efficient algorithm [1, 4].
A way of systematically determining the unknown coefficients b0 , b1 , · · · , bn is
to set x equal to x0 , x1 , · · · , xn in the Newton form (5) and write down the resulting
equations:
f (x0 ) = y0 = b0
f (x1 ) = y1 = b0 + b1 (x1 − x0 )
f (x2 ) = y2 = b0 + b1 (x2 − x0 ) + b2 (x2 − x0 )(x2 − x1 )
f (x3 ) = y3 = b0 + b1 (x3 − x0 ) + b2 (x3 − x0 )(x3 − x1 ) + b3 (x3 − x0 )(x3 − x1 )(x3 − x2 )
⋮
Note that the coefficients can now be evaluated recursively starting from b0 . Solving
these equations, we have
\[
b_0 = f(x_0),
\]
\[
b_1 = \frac{f(x_1) - b_0}{x_1 - x_0} = \frac{f(x_1) - f(x_0)}{x_1 - x_0},
\]
\[
b_2 = \frac{f(x_2) - b_0 - b_1 (x_2 - x_0)}{(x_2 - x_0)(x_2 - x_1)}
    = \frac{\dfrac{f(x_2) - f(x_1)}{x_2 - x_1} - \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0},
\]
and so on.
Example 3. Determine the Newton coefficients b_0, b_1, b_2 for the table:
x       1    −4     0
f(x)    3    13   −23
The interpolating conditions read
f(x_0) = 3 = b_0
f(x_1) = 13 = b_0 + b_1 (x_1 − x_0)
f(x_2) = −23 = b_0 + b_1 (x_2 − x_0) + b_2 (x_2 − x_0)(x_2 − x_1).
Substituting the given points x_0 = 1, x_1 = −4, x_2 = 0, this becomes
f(x_0) = 3 = b_0
f(x_1) = 13 = b_0 − 5 b_1
f(x_2) = −23 = b_0 − b_1 − 4 b_2.
Solving successively gives b_0 = 3, b_1 = −2 and b_2 = 7, so p_2(x) = 3 − 2(x − 1) + 7(x − 1)(x + 4).
This example shows that the coefficients for the Newton form of interpolating
polynomial can be computed recursively. Generally, the interpolating polynomial in
Newton form
\[
p_n(x) = \sum_{i=0}^{n} b_i \pi_i(x)
\]
can be evaluated at x = x_k to give
\[
f(x_k) = p_n(x_k) = b_k \prod_{j=0}^{k-1} (x_k - x_j) + \sum_{i=0}^{k-1} b_i \prod_{j=0}^{i-1} (x_k - x_j),
\]
since π_i(x_k) = 0 for i > k. Note that ∏_{j=0}^{i-1} (x_k − x_j) = 1 for i = 0. Here is an algorithm to compute the coefficients b_k = f[x_0, x_1, . . . , x_k].
1: b_k ← f(x_k), k = 0, . . . , n
2: for j = 1, . . . , n do
3:     b_k ← (b_k − b_{k−1}) / (x_k − x_{k−j}) for k = n, n − 1, . . . , j
4: end for
Algorithm 1: Compute the divided differences of f
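A minimal MATLAB sketch of Algorithm 1; the function name divdiff is ours (save it as divdiff.m), and the vectors use MATLAB's 1-based indexing, so b(k+1) corresponds to f[x_0, . . . , x_k].

function b = divdiff(x, y)
% DIVDIFF  Newton divided differences of the data (x, y).
n = numel(x) - 1;
b = y(:);                              % start with b_k = f(x_k)
for j = 1:n
    for k = n+1:-1:j+1                 % overwrite from the bottom up
        b(k) = (b(k) - b(k-1)) / (x(k) - x(k-j));
    end
end
end

For the table of Example 2, divdiff([0 1 -1 2 -2], [-5 -3 -15 39 -9]) returns the coefficients −5, 2, −4, 8, 3, so the Newton form continues as p_4(x) = −5 + 2x − 4x(x − 1) + 8x(x − 1)(x + 1) + 3x(x − 1)(x + 1)(x − 2).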
Vandermonde Matrix
We now take another point of view to find an interpolating polynomial pn of degree n
for n + 1 pairs of numbers {(xi , yi )}ni=0 . We want to express the interpolating function
pn (x) as a linear combination of a set of basis functions φ0 (x), φ1 (x), · · · φn−1 (x), φn (x)
so that
pn (x) = a0 φ0 (x) + a1 φ1 (x) + a2 φ2 (x) + · · · + an φn (x),
where the coefficients a0 , a1 , · · · , an are to be determined from the interpolating con-
ditions. That means we have a linear system of equations for a0 , a1 , · · · , an in the
form:
Aa = y,
where (i, j)th entry of the matrix A is φ j (xi ) and a and y are two vectors with ith
components ai and yi , respectively. Thus the unknown coefficients ai , i = 0, 1, · · · , n
are obtained by solving this linear system. The monomial basis is the simplest basis
for polynomials. The monomial basis for the polynomial space of degree n is {1, x, x², . . . , xⁿ}, so that
\[
p_n(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n.
\]
With this choice, the interpolating conditions p_n(x_i) = y_i yield the matrix A with entries A_{ij} = x_i^{\,j},
\[
A = \begin{pmatrix}
1 & x_0 & x_0^2 & \cdots & x_0^n \\
1 & x_1 & x_1^2 & \cdots & x_1^n \\
\vdots & \vdots & \vdots &  & \vdots \\
1 & x_n & x_n^2 & \cdots & x_n^n
\end{pmatrix}.
\]
This matrix is called a Vandermonde matrix. It can be shown that this matrix is non-singular if the points x_i, i = 0, 1, . . . , n, are distinct. Thus we can solve this linear system and obtain the interpolating polynomial. However, in practice, the
Vandermonde matrix is nearly singular for large n as monomials are less distinguish-
able from one another for large n. In addition, the columns of Vandermonde matrix
become nearly linearly dependent in this case. Therefore, better basis functions are
chosen for this approach. The most popular choices are Chebyshev or Legendre
polynomials.
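A brief MATLAB sketch of the Vandermonde approach (assuming a MATLAB version with implicit expansion), using the data of Example 1 purely for illustration:

xd = [1/3; 1/4; 1];                        % distinct interpolation points
yd = [2; -1; 7];                           % prescribed values
A  = xd .^ (0:numel(xd)-1);                % Vandermonde matrix, A(i,j) = x_i^(j-1)
a  = A \ yd;                               % monomial coefficients a_0, ..., a_n
pn = @(t) (t(:) .^ (0:numel(xd)-1)) * a;   % evaluate the interpolant
pn(xd)                                     % reproduces 2, -1, 7

For larger n, cond(A) grows rapidly, which is the ill-conditioning mentioned above.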
Remark 1. If the function f to be interpolated is not a polynomial, the quantity M_{n+1} |π_{n+1}(x)| obtained by bounding (2), where M_{n+1} = max_{x∈[a,b]} |f^{(n+1)}(x)|, can be very large when n is large, leading to a severe limitation of higher order polynomial interpolation. This problem is typically known as Runge's phenomenon and is explained by Runge's example [1, 3], which is to interpolate the function f(x) = 1/(1 + 25x²) on [−1, 1], see Figure 2. One can see a strange
oscillation near the boundary. This is also due to the fact that the polynomial πn+1
in the error in Theorem 2 is of high degree. If one has the freedom to choose the
interpolating points, the expression |πn+1 (x)| can be made small by choosing the set
of interpolation points as the zeros or the maxima of a Chebyshev polynomial [1, 2].
However, in many interpolation problems, the set of points is already given and one
cannot use a different set of points.
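A small MATLAB sketch that reproduces the behaviour described in this remark; the degree n = 10 and the equally spaced points are our choices for illustration.

f  = @(x) 1 ./ (1 + 25*x.^2);          % Runge's function
n  = 10;
xi = linspace(-1, 1, n + 1);           % equally spaced interpolation points
c  = polyfit(xi, f(xi), n);            % interpolating polynomial of degree n
                                       % (polyfit may warn about conditioning)
xx = linspace(-1, 1, 400);
plot(xx, f(xx), xx, polyval(c, xx), xi, f(xi), 'o');
legend('f', 'p_{10}', 'interpolation points');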
We have shown in Theorem 2 that the interpolation error at the point x ∈ [a, b] is
given by
\[
f(x) - p_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\, \pi_{n+1}(x),
\]
where ξ ∈ [a, b]. Taking the maximum norm on both sides, we obtain
\[
\|f - p_n\|_\infty \le \frac{1}{(n+1)!} \max_{x \in [a,b]} \big| f^{(n+1)}(x)\, \pi_{n+1}(x) \big|.
\]
Convergence
We now want to consider whether or not a sequence {pn } of interpolating polynomi-
als for a continuous function f converges to f as n → ∞. In particular, if
\[
\lim_{n \to \infty} \frac{1}{(n+1)!} \max_{x \in [a,b]} \big| f^{(n+1)}(x)\, \pi_{n+1}(x) \big| = 0, \tag{6}
\]
then we have
\[
\lim_{n \to \infty} \|f - p_n\|_\infty = 0.
\]
In this case, we say that the sequence {pn } of interpolating polynomials converges
uniformly to f as n → ∞. Unfortunately, even for a very smooth function f, the quantity max_{x∈[a,b]} |f^{(n+1)}(x)| may grow much faster than (n + 1)!, and hence (6) does not hold. This is demonstrated by Runge's example. However, the Weierstrass approximation the-
orem guarantees the existence of a polynomial which can be arbitrarily close to a
given continuous function.
Definition 3. A set S ⊂ Rk , k ∈ N, is convex if for all x, y ∈ S and all t ∈ [0, 1], the
point (1 − t)x + ty ∈ S.
Definition 4. The convex hull of a set of points G is the smallest convex set containing G.
A two-dimensional interpolation problem is then posed as follows: Given a set of points G = {(x_i, y_i)}_{i=0}^{N}, its convex hull Ω, and a function f defined on G with values z_i = f(x_i, y_i), find a function p : Ω̄ → R such that p(x_i, y_i) = z_i, i = 0, . . . , N.
We only consider the situation for which p is a polynomial or a piecewise polyno-
mial. If the set of points G has a tensor product structure, it is easy to extend the idea
of one-dimensional construction to the multi-dimensional case. We consider a tensor
product partition for that purpose, which is based on the one-dimensional partition.
Definition 7. Assume that ∆_x = {x_i}_{i=0}^{n} is a partition of the closed interval [a, b] and ∆_y = {y_j}_{j=0}^{m} is that of [c, d]. Then the set of points ∆_{xy} = {(x_i, y_j) : i = 0, . . . , n, j = 0, . . . , m} is called a tensor product partition of the rectangular region [a, b] × [c, d]. In short, we write ∆_{xy} = ∆_x ⊗ ∆_y.
Let ∆_{xy} = ∆_x ⊗ ∆_y be a tensor product partition of [a, b] × [c, d]. Assume that {l_i^{∆_x}}_{i=0}^{n} is the Lagrange basis of P_n(R) with respect to the partition ∆_x, and {l_j^{∆_y}}_{j=0}^{m} is that of P_m(R) with respect to the partition ∆_y. Then, given the values of the function f(x, y) at the partition ∆_{xy}, the Lagrange interpolating polynomial of f(x, y) with respect to the partition ∆_{xy} is of degree n in x and degree m in y, and is given by
\[
p(x, y) = \sum_{i=0}^{n} \sum_{j=0}^{m} f(x_i, y_j)\, l_i^{\Delta_x}(x)\, l_j^{\Delta_y}(y).
\]
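A short MATLAB sketch of this tensor-product construction on an illustrative 3 × 4 grid; the helper lag for the one-dimensional Lagrange basis and the sample function F are ours, not library routines.

xd = [0, 0.5, 1];  yd = [0, 1/3, 2/3, 1];        % partitions Delta_x and Delta_y
F  = @(x, y) sin(pi*x) .* cos(pi*y);             % sample function
[X, Y] = ndgrid(xd, yd);  Fd = F(X, Y);          % values f(x_i, y_j) on the grid
lag = @(t, td, i) prod((t - td([1:i-1, i+1:end])) ./ ...
                       (td(i) - td([1:i-1, i+1:end])));   % 1D Lagrange basis l_i(t)
x0 = 0.3;  y0 = 0.7;  val = 0;
for i = 1:numel(xd)
    for j = 1:numel(yd)
        val = val + Fd(i, j) * lag(x0, xd, i) * lag(y0, yd, j);
    end
end
val                                              % interpolant value at (0.3, 0.7)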
If G does not have a tensor product structure, we have to solve a global polynomial
interpolation problem in a non-rectangular domain or non-tensor product partition,
which is difficult and often ill-posed [3]. Furthermore, Remark 1 points out another
limitation of the global polynomial interpolation. Piecewise polynomial interpola-
tion to be discussed in the next section provides a flexible and efficient solution to
the above discussed problems.
If the data or the function changes locally, the global polynomial interpolant changes globally. On the contrary, the piecewise polynomial interpolant does not change globally if the definition of the function changes locally.
Assume that the interpolation problem is posed in a domain Ω ⊂ Rk with k =
1, 2. The central idea of piecewise interpolation is to decompose the domain Ω into
non-overlapping subdomains yielding its decomposition and define polynomial basis
functions in each subdomain.
Definition 8. Let Ω ⊂ R^k be a domain with k = 1, 2. A collection T of disjoint subdomains with Ω̄ = ∪_{T ∈ T} T̄ is called a decomposition of Ω.
In the two-dimensional case, if the interpolant is to be at least continuous, the
decomposition should be geometrically conforming.
Definition 9. A decomposition T of Ω ⊂ R2 is called geometrically conforming if
the intersection between the boundaries of any two different subdomains ∂Tl ∩ ∂Tk ,
k 6= l, Tk , Tl ∈ T is either empty, a vertex or a common edge.
Four decompositions of the domain Ω are shown in Figure 3. The two on the
left are geometrically conforming and the two on the right are geometrically non-
conforming. In the one-dimensional case, the subdomains are always intervals. In
the two-dimensional case, only quadrilaterals or triangles are allowed.
The polynomial space P_m(T) will denote three different spaces depending on T. If T is an interval,
\[
P_m(T) = \Big\{ p : p(x) = \sum_{i=0}^{m} a_i x^i \Big\},
\]
if T is a triangle,
\[
P_m(T) = \Big\{ p : p(x, y) = \sum_{\substack{i, j = 0 \\ i + j \le m}}^{m} a_{ij}\, x^i y^j \Big\},
\]
and finally, if T is a quadrilateral,
\[
P_m(T) = \Big\{ p : p(x, y) = \sum_{i, j = 0}^{m} a_{ij}\, x^i y^j \Big\}.
\]
A function f is called a piecewise polynomial of degree m with respect to the decomposition T if f|_T ∈ P_m(T) for every T ∈ T, where f|_T represents the restriction of the function f to the element T. The space of piecewise constant functions with respect to the decomposition T is denoted by S_0(T).
Figure 3: Four decompositions of the domain Ω: geometrically conforming (first
two) and geometrically non-conforming (last two).
One-Dimensional Case
It is easier to introduce some examples in one-dimensional case. Before constructing
some examples, we introduce a decomposition induced by a partition.
Definition 12. Let ∆ = {x_i}_{i=0}^{n} be a partition of the closed interval [a, b], and I_i = (x_i, x_{i+1}) an interval. The decomposition T = {I_i}_{i=0}^{n-1} of the open interval (a, b) is called the decomposition induced by the partition ∆ of [a, b].
Example 4. (linear spline) Assume that T is the decomposition of (a, b) induced by
a partition ∆ = {xi }ni=0 of [a, b]. Then, S1,0 (T ) is the space of linear splines on the
decomposition T. Let
\[
\phi_0(x) = \begin{cases} \dfrac{x - x_1}{x_0 - x_1} & \text{if } x \in I_0, \\ 0 & \text{otherwise,} \end{cases}
\qquad
\phi_n(x) = \begin{cases} \dfrac{x - x_{n-1}}{x_n - x_{n-1}} & \text{if } x \in I_{n-1}, \\ 0 & \text{otherwise,} \end{cases}
\]
and
\[
\phi_i(x) = \begin{cases} \dfrac{x - x_{i-1}}{x_i - x_{i-1}} & \text{if } x \in I_{i-1}, \\[4pt] \dfrac{x - x_{i+1}}{x_i - x_{i+1}} & \text{if } x \in I_i, \\ 0 & \text{otherwise,} \end{cases} \qquad \text{for } i = 1, \ldots, n-1.
\]
The set {φi }ni=0 forms a basis for the space S1,0 (T ). Thus, a function sl ∈ S1,0 (T )
can be written as
\[
s_l(x) = \sum_{i=0}^{n} c_i\, \phi_i(x),
\]
where c_0, . . . , c_n are arbitrary constants. As φ_i(x_j) = δ_{ij}, the basis {φ_i}_{i=0}^{n} is a nodal basis of S_{1,0}(T) with respect to the partition ∆. Therefore, the piecewise linear interpolation of a continuous function f : [a, b] → R on the decomposition T is obtained by setting c_i = f(x_i), i = 0, . . . , n. The basis functions are continuous but not differentiable, and thus the piecewise linear interpolant s_l(x) is continuous but, in general, not differentiable at the partition points.
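Since {φ_i} is a nodal basis, evaluating s_l amounts to joining the data by straight lines; MATLAB's interp1 with the 'linear' method does exactly this. A sketch with made-up data on [0, 1]:

xi = [0, 0.3, 0.7, 1];                 % partition of [0, 1]
fi = [1, 2, 0, 1];                     % data values f(x_i), i.e. the coefficients c_i
xx = linspace(0, 1, 200);
sl = interp1(xi, fi, xx, 'linear');    % piecewise linear interpolant s_l
plot(xi, fi, 'o', xx, sl, '-');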
Example 5. Nearest Neighbour Interpolation in [a, b]: Assume that the values of a function f at the partition ∆ = {x_i}_{i=0}^{n} are given. Associated with the partition ∆, we form a dual partition ∆̃ = {z_i}_{i=0}^{n+1} with z_0 = x_0, z_i = (x_{i-1} + x_i)/2, i = 1, . . . , n, and z_{n+1} = x_n. Let χ_{I_i} be the characteristic function of the interval I_i = [z_i, z_{i+1}), i = 0, . . . , n. If T is the decomposition induced by the partition ∆̃, then S_0(T) is spanned by the basis {χ_{I_i}}_{i=0}^{n}. Then, the nearest neighbour interpolation of the function f at the partition ∆ is given by
\[
N(x) = \sum_{i=0}^{n} f(x_i)\, \chi_{I_i}(x).
\]
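With the same made-up data as above, interp1 with the 'nearest' method gives this piecewise constant interpolant (its breakpoints are the midpoints z_i, up to the convention used on the half-open intervals):

xi = [0, 0.3, 0.7, 1];
fi = [1, 2, 0, 1];
xx = linspace(0, 1, 200);
N  = interp1(xi, fi, xx, 'nearest');   % nearest neighbour interpolant N(x)
plot(xi, fi, 'o', xx, N, '-');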
Definition 13. (Cubic Spline Interpolant) Given n + 1 pairs of numbers (x_0, y_0), . . . , (x_n, y_n) with a = x_0 < · · · < x_n = b, a cubic spline interpolant S is a piecewise cubic function that satisfies the following conditions:
1. on each interval [x_i, x_{i+1}], i = 0, . . . , n − 1, S is a polynomial of degree at most 3;
2. S(x_i) = y_i, i = 0, . . . , n;
3. S, S′ and S″ are continuous on [a, b].
The function S(x) is called a cubic spline interpolant. This is the piecewise poly-
nomial space of degree 3 and smoothness 2, and thus is also denoted by S3,2 (T),
where T is the decomposition of the interval [a, b] given by the above partition. We
first count the number of unknown coefficients and number of conditions in the in-
terpolation problem:
1. A cubic polynomial has 4 coefficients, and thus there are 4n coefficients in total (4 on each interval [x_i, x_{i+1}], i = 0, 1, . . . , n − 1).
2. There are n + 1 interpolating conditions and n − 1 continuity conditions for each of S, S′ and S″. Thus there are 3(n − 1) continuity conditions. Hence, we have a total of 4n − 2 conditions for 4n unknown coefficients.
Depending on the choice of remaining two conditions, we can construct various in-
terpolating cubic splines. The most popular cubic spline is the natural cubic spline,
which is defined as follows.
Definition 14. The natural cubic spline is the cubic spline interpolant satisfying the end conditions
\[
S''(x_0) = S''(x_n) = 0.
\]
The two additional conditions uniquely determine S.
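A word of caution for MATLAB users: the built-in spline function uses not-a-knot end conditions rather than the natural ones; if the Curve Fitting Toolbox is available, csape with the 'variational' end condition produces the natural cubic spline. A minimal sketch with illustrative data:

xi = 0:5;  yi = [0, 1, 0, 2, 1, 0];    % illustrative data
xx = linspace(0, 5, 200);
s_nak = spline(xi, yi, xx);            % cubic spline, not-a-knot end conditions
pp    = csape(xi, yi, 'variational');  % natural spline: S''(x_0) = S''(x_n) = 0
s_nat = fnval(pp, xx);
plot(xi, yi, 'o', xx, s_nak, '--', xx, s_nat, '-');
legend('data', 'not-a-knot', 'natural');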
Example 6. Piecewise Cubic Hermite Interpolating Polynomial: Assume that no
derivatives are provided at the partition, but only function values. A piecewise cubic
polynomial can be constructed also in this case assigning some suitable derivatives
of the function at the partition. The derivatives are assigned in such a way that the
resulting piecewise curve is continuously differentiable. One such example can be
found in [5] and is used in MATLAB's piecewise cubic interpolation (the function pchip). The derivatives
zi are assigned in such a way that the function is continuously differentiable and func-
tion values do not locally overshoot the data values resulting in a shape-preserving,
“visually pleasing” interpolant. Let di be defined as
\[
d_i = \frac{y_{i+1} - y_i}{x_{i+1} - x_i}, \quad i = 0, \ldots, n-1.
\]
For an inner point x_i, if d_i and d_{i-1} are of opposite signs or if either of them is zero, x_i is a local extremum of the data and z_i is set to zero. If d_i and d_{i-1} have the same sign
and the two intervals (xi−1 , xi ) and (xi , xi+1 ) have the same length, then zi is taken to
be the harmonic mean of the two discrete slopes:
\[
\frac{1}{z_i} = \frac{1}{2} \left( \frac{1}{d_{i-1}} + \frac{1}{d_i} \right).
\]
That means at the support node xi , the reciprocal slope of the Hermite interpolant is
the average of the reciprocal slopes of the piecewise linear interpolant on either side.
If di and di−1 have the same sign, but the two intervals have different lengths,
then zi is set to be a weighted harmonic mean of the two discrete slopes
\[
\frac{1}{z_i} = \frac{1}{w_1 + w_2} \left( \frac{w_1}{d_{i-1}} + \frac{w_2}{d_i} \right)
\]
with w_1 = 2h_i + h_{i-1}, w_2 = h_i + 2h_{i-1}, and h_i = x_{i+1} − x_i. Further modification is
necessary at the end points, see [5] for more detail.
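This shape-preserving construction is available in MATLAB as pchip; the following sketch (with monotone data chosen for illustration) contrasts it with the ordinary cubic spline, which oscillates around the flat sections:

xi = 0:5;
yi = [0, 0, 0, 1, 1, 1];               % monotone data with flat sections
xx = linspace(0, 5, 300);
plot(xi, yi, 'o', xx, pchip(xi, yi, xx), '-', xx, spline(xi, yi, xx), '--');
legend('data', 'pchip (shape preserving)', 'spline');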
Although in the one-dimensional case a decomposition can always be induced by
a partition, this is not the case in two dimensions. Therefore, for simplicity, we divide
the two-dimensional case into two parts depending on whether the decomposition has
a tensor product structure or not.
A Delaunay triangulation of a finite set of points in the plane is a triangulation that maximises the minimum angle of the triangles over all triangulations of the point set. In this sense, the Delaunay triangulation is the most equi-angular triangulation. The dual graph of the Delaunay triangulation is the Voronoi diagram for the same set of points.
Definition 17. For a set of points G ⊂ R2 , the Voronoi diagram is the decomposi-
tion of the plane into convex polygons such that each polygon contains exactly one
generating point from G and every point in a given polygon is closer to its gener-
ating point than to any other point in G . A convex polygon Vx associated with the
generating point x ∈ G is called the Voronoi cell for the point x ∈ G .
In other words, the Voronoi cell Vx for the point x ∈ G has the property that
the distance from every y ∈ Vx to x is less than or equal to the distance from y to
any other point in G . The circle circumscribed about a Delaunay triangle has its
centre at the vertex of a Voronoi cell, see the right picture of Figure 4. The idea of
Delaunay triangulation and the Voronoi diagram also extends to higher dimensions. An efficient algorithm for computing Delaunay triangulations and Voronoi diagrams is presented in [8], see also [9, 10].
As an example of a Delaunay triangulation and Voronoi diagram, we define the set G1 = {(0.1, 0.4), (0.5, 0.1), (0.45, 0.5), (0.3, 0.6), (0.3, 0.3), (0.9, 0.8), (0.3, 0.9), (0.2, 0.1), (0.8, 0.9)}, and generate the Delaunay triangulation and the Voronoi
diagram of G1 . The Delaunay triangulation and the Voronoi diagram of G1 are shown
in the left and middle pictures of Figure 4, respectively. The right picture depicts the
circumcircle of a triangle with its centre at a vertex of the Voronoi diagram (filled
circle).
Figure 4: The Delaunay triangulation of the set G1 (left picture). The corresponding
Voronoi diagram (middle picture). The circumcircle of a triangle with Delaunay
triangulation and Voronoi diagram (last picture).
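The Voronoi diagram of the same points can be drawn with the built-in voronoi function; the MATLAB code for the Delaunay triangulation itself is given further below.

x = [0.1, 0.5, 0.45, 0.3, 0.3, 0.9, 0.3, 0.2, 0.8];
y = [0.4, 0.1, 0.5, 0.6, 0.3, 0.8, 0.9, 0.1, 0.9];
voronoi(x, y);                 % Voronoi diagram of the points in G_1
hold on;
plot(x, y, '*');               % mark the generating points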
Figure 5: The reference triangle and an affine map F̂T from the reference triangle to
a generic triangle (first two pictures). A nodal basis function φi (last picture).
A typical nodal basis function φ_i associated with a vertex (x_i, y_i) of the triangulation is illustrated in Figure 5. The basis function φ_i vanishes outside ∪_{j=1}^{5} T̄_j, where T_j ∈ T, j = 1, . . . , 5, are the triangles having the common vertex (x_i, y_i), as shown
in the right picture of Figure 5. The key point for the construction is to define a
reference triangle, where it is easy to compute polynomial basis functions, and map
the triangle to the actual element by using an affine transformation. It is convenient to
choose the right-angled triangle T̃ = {(x̃, ỹ) : x̃ > 0, ỹ > 0, x̃ + ỹ < 1} as the reference
triangle.
The following piece of MATLAB code generates a Delaunay triangulation of the points given by the x and y co-ordinates.
x = [0.1, 0.5, 0.45, 0.3, 0.3, 0.9, 0.3, 0.2, 0.8];
y = [0.4, 0.1, 0.5, 0.6, 0.3, 0.8, 0.9, 0.1, 0.9];
tri = delaunay(x, y);              % triangle list (indices into x, y)
trimesh(tri, x, y);                % plot the triangulation
hold on;
plot(x, y, '*', 'markersize', 18); % mark the data points
The local Lagrange basis functions associated with three vertices of T̃ for the linear
interpolation are given by l1 (x̃, ỹ) = (1 − x̃ − ỹ), l2 (x̃, ỹ) = x̃, and l3 (x̃, ỹ) = ỹ. See the
left picture of Figure 5. If (x̃ j , ỹ j ), j = 1, . . . , 3 are the three vertices of the reference
triangle, the basis functions satisfy li (x̃ j , ỹ j ) = δi j , i, j = 1, . . . , 3. Let T ∈ T have
three vertices (x1 , y1 ), (x2 , y2 ) and (x3 , y3 ) as in the middle picture of Figure 5. The
mapping F̂T transforms a point in the reference triangle T̃ to a point in the actual
triangle T as follows
\[
\begin{pmatrix} x \\ y \end{pmatrix} = \hat F_T \begin{pmatrix} \tilde x \\ \tilde y \end{pmatrix}
\quad \text{with} \quad
\hat F_T \begin{pmatrix} \tilde x \\ \tilde y \end{pmatrix}
= \begin{pmatrix} x_2 - x_1 & x_3 - x_1 \\ y_2 - y_1 & y_3 - y_1 \end{pmatrix}
\begin{pmatrix} \tilde x \\ \tilde y \end{pmatrix}
+ \begin{pmatrix} x_1 \\ y_1 \end{pmatrix}.
\]
The first two pictures of Figure 5 show a reference triangle T̃ and the triangle T .
If the three vertices of the triangle T are not collinear, the determinant of the matrix
\[
\begin{pmatrix} x_2 - x_1 & x_3 - x_1 \\ y_2 - y_1 & y_3 - y_1 \end{pmatrix}
\]
does not vanish and hence F̂T is invertible. The three global Lagrange basis functions
on the triangle T are then given by gi (x, y) = li (x̃, ỹ), i = 1, . . . , 3, with
\[
\begin{pmatrix} \tilde x \\ \tilde y \end{pmatrix} = \hat F_T^{-1} \begin{pmatrix} x \\ y \end{pmatrix}.
\]
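A brief MATLAB sketch of this construction: given the vertices of a generic triangle T, the global basis functions g_i are evaluated at a point by inverting the affine map. The vertices and the query point below are illustrative.

V  = [0, 0; 2, 0.5; 0.5, 1.5];               % vertices (x_k, y_k) of T, one per row
B  = [V(2,:) - V(1,:); V(3,:) - V(1,:)]';    % matrix of the affine map F_T
xy = [0.9; 0.6];                             % a point (x, y) inside T
ref = B \ (xy - V(1,:)');                    % reference coordinates (x~, y~)
g   = [1 - ref(1) - ref(2); ref(1); ref(2)]; % g_1, g_2, g_3 at (x, y)
sum(g)                                       % the basis functions sum to 1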
Let V_i denote the Voronoi cell associated with the data point (x_i, y_i), i = 0, . . . , N, and let z_i be the given value at (x_i, y_i). Then, the nearest neighbour interpolant p of the given data is obtained as
\[
p(x) = \sum_{i=0}^{N} z_i\, \chi_{V_i}(x).
\]
MATLAB's Curve Fitting Toolbox provides the function csaps for computing a cubic smoothing spline, where the argument p plays the role of the smoothing parameter λ in equation (7). This smoothing parameter is to be chosen by the user. The usage is
y = csaps(x, z, p, xx)
This function returns the values of the smoothing spline at the points given by xx.
Here x is the vector of data points, and z is the vector of given values. The following
piece of MATLAB code demonstrates the use of the function.
x = 0:0.2:6;                       % data sites
y = sin(x) + 0.2*randn(1, 31);     % noisy samples of sin
z = csaps(x, y, 0.9, x);           % smoothing spline with p = 0.9, evaluated at x
plot(x, y, '-', x, z, '-*');
legend('given noisy function', 'smoothing spline');
References
[1] W. Cheney and D. Kincaid. Numerical Mathematics and Computing. Brooks
and Cole, sixth edition, 2008.
[4] R.L. Burden and J.D. Faires. Numerical Analysis. Brooks and Cole, eighth
edition, 2016.
[5] F.N. Fritsch and R.E. Carlson. Monotone piecewise cubic interpolation. SIAM
Journal on Numerical Analysis, 17:238–246, 1980.
[6] R. Franke and G.M. Nielson. Scattered data interpolation and applications: A
tutorial and survey. In H. Hagen and D. Roller, editors, Geometric Modelling:
Methods and Their Application, pages 131–160. Springer-Verlag, 1991.
[7] I. Amidror. Scattered data interpolation methods for electronic imaging sys-
tems: a survey. Journal of Electronic Imaging, 11:157–176, 2002.
[8] C. B. Barber, D.P. Dobkin, and H.T. Huhdanpaa. The quickhull algorithm for
convex hulls. ACM Transactions on Mathematical Software, 22:469–483, 1996.
[10] F. Aurenhammer and R. Klein. Voronoi diagrams. In J.-R. Sack and J. Ur-
rutia, editors, Handbook of Computational Geometry, pages 201–290. North-
Holland, Amsterdam, Netherlands, 2000.
[12] G. Wahba. Spline Models for Observational Data, volume 59 of Series in Applied Mathematics. SIAM, Philadelphia, first edition, 1990.
[16] R.H. Chan, C. Ho, and M. Nikolova. Salt-and-pepper noise removal by median-type noise detectors and detail-preserving regularization. IEEE Transactions on Image Processing, 14:1479–1485, 2005.