Lecture Notes1 22
Michael J. Johnson
Spring 2008
1. Notation
2. Piecewise Polynomials
$s\big|_{[\xi_i,\xi_{i+1})} \in \Pi_k\big|_{[\xi_i,\xi_{i+1})}$ for $i = 1, 2, \ldots, N$,
where the above is taken with the closed interval [ξN , ξN +1 ] when i = N . Note that
we have adopted the somewhat arbitrary convention that piecewise polynomials (pp) are
continuous from the right.
Typeset by AMS-TEX
2 ADVANCED NUMERICAL COMPUTING
Is s continuous?
Theorem 2.2. The dimension of Pk,Ξ is N (k + 1).
Knot insertion
If two pp’s have identical knots, then adding them or multiplying them is fairly straight-
forward; however, if they have different knots, then one must first insert knots, as needed,
until both pp’s have been rendered with identical knots.
CS-543 3
Example (knot insertion). Let s be as in the previous example and let Ξ1 = {0, 1, 2, 3, 4}. Note that Ξ1 has been obtained from Ξ by inserting the knot 2. Find the representation of s with respect to the knot sequence Ξ1 (i.e., as an element of P3,Ξ1).
In the above example, we see that the computation required is that of expanding $s_2(x+1)$ in powers of x, where $s_2(x) = 2x^2 + x$. This yields $s_2(x+1) = 2x^2 + 5x + 3$. In general,
the work involved in a knot insertion is simply that of finding, for a given polynomial $p(x) = p_1x^k + \cdots + p_kx + p_{k+1}$, the coefficients $q_1, \ldots, q_{k+1}$ such that the polynomial $q(x) = q_1x^k + \cdots + q_kx + q_{k+1}$ satisfies $q(x) = p(x+\tau)$. In other words, we have to translate the polynomial p by a distance $-\tau$. One can easily write an algorithm for this based on the binomial formula; however, the number of flops needed for execution is about $2k^2$. A better algorithm, which uses only $k^2 + 1$ flops, is the following (assuming that p is the Octave representation of the polynomial p(x) above):
q = p;
s = tau*p(1);            % tau*q(1); q(1) is never modified
for i = k+1:-1:2         % one synthetic-division pass per iteration
  q(2) = q(2) + s;
  for j = 3:i
    q(j) = q(j) + tau*q(j-1);
  end
end
It is easy to repeat the above example using this efficient algorithm and verify that the same result is obtained.
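For readers working outside Octave, the same Taylor-shift computation can be sketched in Python (the name shift_poly is ours, not from the notes):

```python
def shift_poly(p, tau):
    """Given coefficients p = [p1, ..., p_{k+1}] (highest degree first),
    return q with q(x) = p(x + tau), via repeated synthetic division."""
    q = list(p)
    k = len(p) - 1
    for i in range(k, 0, -1):      # one shrinking pass per iteration
        for j in range(1, i + 1):
            q[j] += tau * q[j - 1]
    return q

# the example from the text: s2(x) = 2x^2 + x shifted by tau = 1
print(shift_poly([2, 1, 0], 1))   # → [2, 5, 3], i.e. 2x^2 + 5x + 3
```

The integer arithmetic here is exact, so the result matches the hand computation above.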
The space Pk,Ξ contains functions with various smoothness properties (assuming N ≥
2). The following definition allows us to give a rather precise categorization of the smooth
functions contained in Pk,Ξ .
Definition 3.1. The space of continuous functions on [a, b] is denoted C[a, b] (or $C^0[a,b]$). For a positive integer k, we define $C^k[a,b]$ to be the space of functions $f : [a,b] \to \mathbb{R}$ for which $f, f', \ldots, f^{(k)}$ are continuous on [a, b].
Theorem 3.2. For $\ell = 0, 1, \ldots, k$, the dimension of $P_{k,\Xi} \cap C^\ell[a,b]$ equals $N(k-\ell) + \ell + 1$. Moreover, $P_{k,\Xi} \cap C^k[a,b]$ equals $\Pi_k\big|_{[a,b]}$.
Of particular interest is the subspace Pk,Ξ ∩ C k−1 [a, b] which has dimension N + k.
Definition 3.3. The subspace Pk,Ξ ∩C k−1 [a, b], denoted Sk,Ξ , is called the space of splines
of degree k having knot sequence Ξ.
Example 3.4. Determine whether s belongs to S2,Ξ if Ξ = {0, 1, 2, 4} and
$$s(x) = \begin{cases} x^2 - 2 & \text{if } 0 \le x < 1\\ 2(x-1)^2 + 2(x-1) - 1 & \text{if } 1 \le x < 2\\ -(x-2)^2 + 6(x-1) + 3 & \text{if } 2 \le x \le 4. \end{cases}$$
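Membership in S2,Ξ can be checked numerically by comparing the values and first derivatives of adjacent pieces at the interior knots; a Python sketch (function names ours, each piece expanded in powers of x):

```python
def horner(p, x):
    """Evaluate a polynomial with coefficients p (highest degree first)."""
    r = 0.0
    for c in p:
        r = r * x + c
    return r

def deriv(p):
    """Coefficients of p'(x), highest degree first."""
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])]

def c1_match(p_left, p_right, x0, tol=1e-12):
    """True if the two pieces agree in value and first derivative at x0."""
    return (abs(horner(p_left, x0) - horner(p_right, x0)) < tol and
            abs(horner(deriv(p_left), x0) - horner(deriv(p_right), x0)) < tol)

# first two pieces of s in powers of x: x^2-2 and 2(x-1)^2+2(x-1)-1 = 2x^2-2x-1
print(c1_match([1, 0, -2], [2, -2, -1], 1.0))  # → True: s is C^1 at x = 1
```

The same check at the remaining interior knot settles the example.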
Note that the B-spline is defined on all of R. In the important special case when the
knots are simply ξi = i, the B-splines are called Cardinal B-splines and the above recursion
reduces to
$$B_i^k(x) := \frac{1}{k}\Bigl[(x-i)\,B_i^{k-1}(x) + \bigl((i+k+1) - x\bigr)\,B_{i+1}^{k-1}(x)\Bigr];$$
moreover, we have (in the cardinal case) $B_i^k(x) = B_0^k(x-i)$, so that $B_i^k$ is simply a translate of $B_0^k$.
The Octave script ex_Bspline_recursion.m gives a visual demonstration of this construction.
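The recursion is short enough to implement directly; a Python sketch of the Cox–de Boor recurrence for a general increasing knot sequence (function name ours):

```python
def bspline(i, k, t, x):
    """B-spline B_i^k of degree k on the knot list t (0-based indexing),
    evaluated at x by the Cox-de Boor recursion."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k] > t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline(i, k - 1, t, x)
    if t[i + k + 1] > t[i + 1]:
        right = ((t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1])
                 * bspline(i + 1, k - 1, t, x))
    return left + right

# cardinal knots: the degree-2 B-splines sum to 1 (partition of unity)
t = list(range(10))
print(sum(bspline(i, 2, t, 4.5) for i in range(7)))  # ≈ 1.0
```

For simple knots this reproduces the recursion displayed above in the cardinal case.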
Example 3.7. Find a formula for Bi1 .
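In the cardinal case (ξi = i), the answer is the familiar hat function; a minimal Python sketch (the name B1 is ours):

```python
def B1(i, x):
    """Cardinal B-spline of degree 1 with knots i, i+1, i+2: a hat rising
    from 0 at x = i to 1 at x = i+1 and back down to 0 at x = i+2."""
    if i <= x < i + 1:
        return x - i
    if i + 1 <= x < i + 2:
        return (i + 2) - x
    return 0.0

print(B1(0, 0.25), B1(0, 1.0), B1(0, 1.75))  # → 0.25 1.0 0.25
```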
Partition of Unity Theorem 3.8. Let $k \in \mathbb{N}_0$. If $\lim_{i\to\pm\infty} \xi_i = \pm\infty$, then
$$\sum_{i=-\infty}^{\infty} B_i^k(x) = 1 \qquad\text{for all } x \in \mathbb{R}.$$
(iv) $B_i^k$ is (k − 1)-times continuously differentiable (i.e., $B_i^k \in C^{k-1}(-\infty,\infty)$).
(v) $\displaystyle\int_{-\infty}^{x} B_i^k(t)\,dt = \frac{\xi_{i+k+1} - \xi_i}{k+1}\sum_{j=i}^{\infty} B_j^{k+1}(x)$.

where $d_i = \dfrac{1}{k+1}\displaystyle\sum_{j=-\infty}^{i} c_j\,(\xi_{j+k+1} - \xi_j)$.
As mentioned above, the knots in Ξ are denoted a = ξ1 < ξ2 < · · · < ξN +1 = b, and
we assume that we have chosen additional knots ξi for i < 1 and i > N + 1, maintaining
ξi < ξi+1 for all i ∈ Z.
Theorem 3.10. A basis for the space Sk,Ξ is formed by the functions
$$B_i^k\big|_{[a,b]} \qquad\text{for } i = 1-k,\, 2-k,\, \ldots,\, N.$$
Note that the number of functions in the basis above is N + k, which of course is also
the dimension of Sk,Ξ . We also note that the B-splines Bik , for i = 1 − k, 2 − k, . . . , N ,
are precisely those B-splines whose support has some overlap with the interval (a, b). The Octave script ex_Bspline_basis.m gives a visual demonstration.
A consequence of Theorem 3.10 is that every function s ∈ Sk,Ξ can be written in the
form
$$s(x) = \sum_{j=1-k}^{N} c_j B_j^k(x), \qquad x \in [a,b],$$
for some scalars (numbers) {cj}. This form is known as the bb-form of s, where bb is meant to connote the phrase B-spline basis. Since the B-splines in use are determined by the
knots {ξ1−k , ξ2−k , . . . , ξN +k+1 }, the bb-form of s is determined simply by these knots along
with the coefficients {cj }, j = 1 − k, 2 − k, . . . , N . As illustrated by the following example,
given the bb-form of s one can use Corollary 3.9 to obtain the bb-form of the derivative
of s or of an anti-derivative of s.
Example. Let ξi = i for all i, and define
$$s(x) := 2B_{-1}^2(x) - B_0^2(x) + 3B_1^2(x) + B_2^2(x) - 3B_3^2(x) - B_5^2(x) + B_6^2(x), \qquad x \in [1,7].$$
Find the bb-forms for the derivative $s'(x)$ and the antiderivative $\tilde s(x) = \int_{-1}^{x} s(t)\,dt$, where $x \in [1,7]$.
4. B-Splines in Octave
ξi = Ξext (i + k) for i = 1 − k, 2 − k, . . . , N + k + 1,
or equivalently
Ξext (i) = ξi−k for i = 1, 2, . . . , N + 2k + 1.
By Theorem 3.10, the restriction to [a, b] of the B-splines Bik , i = 1 − k, 2 − k, . . . , N ,
form a basis for Sk,Ξ . In order to construct the B-spline Bik in Octave, one must first form
the vector x which contains the knots of Bik , namely, x = [ξi , ξi+1 , . . . , ξi+k+1 ]. This can
be accomplished with the Octave command
x=Xi_ext(i+k:i+2*k+1);
The B-spline $B_i^k$ can then be constructed using the supplementary Octave function B_spline. The command
C=B_spline(x);
will produce the (k + 1) × (k + 1) matrix C so that the pair (x,C) represents $B_i^k\big|_{[\xi_i,\xi_{i+k+1}]}$ as a pp in Sk,x (keep in mind that $B_i^k = 0$ outside $[\xi_i, \xi_{i+k+1}]$).
Warning!. Mathematically speaking, a pp s ∈ Pk,Ξ (as defined in Section 2) is a function defined on the interval [a, b]. However, in Octave one is allowed to evaluate (using ppval) a pp s at any point x ∈ R: if x < a, then the first piece is used (i.e., $s(x) := s_1(x)$), while if x > b, then the last piece is used (i.e., $s(x) := s_N(x)$). In other words, the first piece of s is
extended all the way down to −∞ and the last piece is extended all the way up to ∞. This
extension is wisely chosen and usually very convenient, except for the case of B-splines.
The problem is that the B-spline Bik should be 0 outside the interval [ξi , ξi+k+1 ], but
the first and last pieces produced by the Octave command C=B_spline(x); are non-zero polynomials, and these extend to the left and to the right as non-zero polynomials. We get around
this difficulty with the command
[X,C1]=B_spline(x);
which returns the pp (X,C1) which has an extra 0 piece at the beginning and end. Specif-
ically, C1 is obtained from C by adding a row of zeros to the top and bottom, while X is
obtained from x by adjoining an extra knot at the beginning and at the end.
The Octave script ex_Bspline_extension.m illustrates this problem.
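The extension behaviour is easy to mimic; here is a toy Python re-implementation of an Octave-style pp evaluator (ppval_toy is our name; it is a sketch, not the Octave builtin):

```python
import bisect

def ppval_toy(breaks, coefs, x):
    """Evaluate a pp at x. Row i of coefs holds the i-th piece, highest
    degree first, in the local variable (x - breaks[i]); outside
    [breaks[0], breaks[-1]] the first/last piece is extended, as Octave does."""
    i = bisect.bisect_right(breaks, x) - 1
    i = max(0, min(i, len(coefs) - 1))   # clamp: extend first/last piece
    t = x - breaks[i]
    r = 0.0
    for c in coefs[i]:                    # Horner evaluation
        r = r * t + c
    return r

# a hat on [0,2]: piece t on [0,1), piece 1-t on [1,2]
breaks, coefs = [0, 1, 2], [[1, 0], [-1, 1]]
print(ppval_toy(breaks, coefs, 3.0))  # → -1.0: the last piece keeps going
```

A B-spline stored this way would likewise go non-zero outside its support, which is exactly the problem the extra zero pieces fix.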
In practice, one usually first constructs the bb-form of a spline s ∈ Sk,Ξ (we discuss
several methods in the next section). In order to use this spline s, it is desirable to have s
in its pp form. The supplemental Octave command
S=bb2pp(Xi_ext,d);
returns the coefficient matrix S of the pp (Xi_ext,S) which corresponds to
$$s(x) = \sum_{j=1-k}^{N} c_j B_j^k(x),$$
where d = [c1−k , c2−k , . . . , cN ]. If one prefers the restriction of this pp to the interval
[a, b], then one should use instead the command
[Xi,S]=bb2pp(Xi_ext,d,k);
5. Spline Interpolation
The function s is said to interpolate the given data. We assume that the nodes $\Xi := \{a = \xi_1, \xi_2, \ldots, \xi_{N+1} = b\}$ are increasing (i.e., $\xi_i < \xi_{i+1}$), and we desire to choose s as an element of the spline space Sk,Ξ, where k ≥ 1 is chosen to reflect the desired degree of smoothness in s. The case k = 1, known as linear spline interpolation, is quite easy: the i-th piece of s is simply
$$s_i(x) = \frac{y_{i+1} - y_i}{\xi_{i+1} - \xi_i}\,(x - \xi_i) + y_i.$$
Example. Find the linear spline s which passes through the points (1, 2), (3, 6), (4, 4),
(7, 1), and write s in both pp-form and bb-form.
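For checking such hand computations, NumPy's np.interp evaluates exactly this piecewise-linear interpolant:

```python
import numpy as np

xi = [1, 3, 4, 7]        # knots from the example
y  = [2, 6, 4, 1]        # data values
# value of the linear spline halfway between the knots 3 and 4
print(np.interp(3.5, xi, y))  # → 5.0
```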
In addition to the nodes in Ξ, we need nodes ξi for i = 1 − k, 2 − k, . . . , 0 and i =
N + 2, N + 3, . . . , N + 1 + k chosen so that ξi < ξi+1 for i = 1 − k, 2 − k, . . . , N + k.
For the sake of brevity, we will write the B-spline Bjk simply as Bj . By Theorem 3.10,
where the coefficients {ci } are unknown at present. The condition s(ξi ) = yi becomes
$$s(\xi_i) = \sum_{j=1-k}^{N} c_j B_j(\xi_i) = y_i, \qquad 1 \le i \le N+1.$$
It is very important to note that since $B_j(x) = 0$ for all $x \notin (\xi_j, \xi_{j+k+1})$, the above $(N+1)\times(N+k)$ matrix has exactly $k(N+1)$ non-zero entries. Indeed, the i-th row reduces to
$$[\,0, 0, \ldots, 0,\; B_{i-k}(\xi_i),\, B_{i-k+1}(\xi_i),\, \ldots,\, B_{i-1}(\xi_i),\; 0, 0, \ldots, 0\,],$$
which has only k non-zero entries.
When k is greater than 1, the dimension of Sk,Ξ (or equivalently, the number of un-
knowns c1−k , c2−k , . . . , cN ) exceeds the number of conditions in (5.1), and it turns out that
there are infinitely many splines s ∈ Sk,Ξ which satisfy the interpolation conditions (5.1).
In order to arrive at a unique spline s ∈ Sk,Ξ it is necessary to impose k − 1 additional
conditions.
In the literature, there are four prominent ways of imposing these additional conditions;
these lead to the natural spline, the complete spline, the not-a-knot spline, and the periodic
spline.
Theorem 5.4. If k > 1 is an odd integer, then there exists a unique spline s ∈ Sk,Ξ which satisfies the interpolation conditions (5.1) along with the additional end conditions
$$s^{(\ell)}(a) = 0 = s^{(\ell)}(b) \qquad\text{for } \ell = \tfrac{k+1}{2}, \tfrac{k+3}{2}, \ldots, k-1.$$
With s written in the form (5.2), these additional end conditions become
$$\sum_{j=1-k}^{N} c_j B_j^{(\ell)}(a) = 0 = \sum_{j=1-k}^{N} c_j B_j^{(\ell)}(b).$$
Adjoining these extra equations to those of (5.3) and setting m = (k + 1)/2, we arrive at the linear system
$$
\begin{bmatrix}
B_{1-k}^{(m)}(a) & B_{2-k}^{(m)}(a) & \cdots & B_N^{(m)}(a)\\
\vdots & \vdots & & \vdots\\
B_{1-k}^{(k-1)}(a) & B_{2-k}^{(k-1)}(a) & \cdots & B_N^{(k-1)}(a)\\
B_{1-k}(\xi_1) & B_{2-k}(\xi_1) & \cdots & B_N(\xi_1)\\
\vdots & \vdots & & \vdots\\
B_{1-k}(\xi_{N+1}) & B_{2-k}(\xi_{N+1}) & \cdots & B_N(\xi_{N+1})\\
B_{1-k}^{(m)}(b) & B_{2-k}^{(m)}(b) & \cdots & B_N^{(m)}(b)\\
\vdots & \vdots & & \vdots\\
B_{1-k}^{(k-1)}(b) & B_{2-k}^{(k-1)}(b) & \cdots & B_N^{(k-1)}(b)
\end{bmatrix}
\begin{bmatrix} c_{1-k}\\ c_{2-k}\\ \vdots\\ c_N \end{bmatrix}
=
\begin{bmatrix} 0\\ \vdots\\ 0\\ y_1\\ \vdots\\ y_{N+1}\\ 0\\ \vdots\\ 0 \end{bmatrix}.
$$
Let us refer to the above (N + k) × (N + k) matrix as A, and note that those additional
equations associated with left end-conditions are placed at the top of A and those associated
with right end-conditions have been placed at the bottom of A. It follows from Theorems 5.4 and 3.10 that A is non-singular. Since $B_j(x) = 0 = B_j^{(\ell)}(x)$ for all $x \notin (\xi_j, \xi_{j+k+1})$, it
turns out that A is a banded matrix, and the above system can be solved very efficiently
using Doolittle’s LU decomposition for banded matrices.
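The banded-elimination idea can be sketched in its simplest form: for a tridiagonal system the forward-elimination/back-substitution pair is the Thomas algorithm. A hedged Python sketch (an illustration of the idea, not the notes' general banded routine):

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system by LU-style elimination in O(n) flops.
    sub/sup have length n-1; diag/rhs have length n."""
    n = len(diag)
    b, c, d = list(diag), list(sup), list(rhs)
    for i in range(1, n):            # forward elimination
        w = sub[i - 1] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n                    # back substitution
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

# [[2,1,0],[1,2,1],[0,1,2]] x = [3,4,3] has solution x = [1,1,1]
print(thomas([1, 1], [2, 2, 2], [1, 1], [3, 4, 3]))  # ≈ [1, 1, 1]
```

For a spline system of bandwidth k the same elimination pattern runs over k sub- and super-diagonals instead of one.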
Example 5.5. Find the natural cubic spline s which passes through the points (1, 2),
(3, 6), (4, 4), (7, 1).
This example is solved by the Octave script ex_natural.m. The natural splines are
famous for the following property:
Theorem 5.6. Let s be the natural spline as described in Theorem 5.4. If f ∈ C k−1 [a, b]
interpolates the given data:
f (ξi ) = yi , i = 1, 2, . . . , N + 1,
then
$$\int_a^b \bigl(s^{(m)}(t)\bigr)^2\,dt \;\le\; \int_a^b \bigl(f^{(m)}(t)\bigr)^2\,dt, \qquad\text{where } m = (k+1)/2.$$
The proof of this lemma proceeds by induction, where the case n = 1 is obtained using integration by parts:
$$\int_a^b p(t)\,g''(t)\,dt = p(t)\,g'(t)\Big]_a^b - \int_a^b p'(t)\,g'(t)\,dt = 0 - \sum_{i=1}^{N} \lambda_i\bigl(g(\xi_{i+1}) - g(\xi_i)\bigr) = 0,$$
since p(a) = p(b) = 0 and g(ξi) = 0. The proof of the theorem is then: let g = f − s and note that f = s + g and g(ξi) = 0. Hence
$$\int_a^b \bigl(f^{(m)}(t)\bigr)^2 dt = \int_a^b \bigl(s^{(m)}(t)\bigr)^2 dt + \int_a^b \bigl(g^{(m)}(t)\bigr)^2 dt + 2\int_a^b s^{(m)}(t)\,g^{(m)}(t)\,dt \;\ge\; \int_a^b \bigl(s^{(m)}(t)\bigr)^2 dt,$$
since the cross term vanishes by the lemma and the middle term is non-negative.
Theorem 5.7. If k > 1 is an odd integer, then there exists a unique spline s ∈ Sk,Ξ which satisfies the interpolation conditions (5.1) along with the additional end conditions
$$s^{(\ell)}(a) = y_{a,\ell} \quad\text{and}\quad s^{(\ell)}(b) = y_{b,\ell} \qquad\text{for } \ell = 1, 2, \ldots, \tfrac{k-1}{2}.$$
With s written in the form (5.2), these additional end conditions become
$$\sum_{j=1-k}^{N} c_j B_j^{(\ell)}(a) = y_{a,\ell} \quad\text{and}\quad \sum_{j=1-k}^{N} c_j B_j^{(\ell)}(b) = y_{b,\ell}.$$
Adjoining these extra equations to those of (5.3) and setting m = (k + 1)/2, we arrive at the linear system
$$
\begin{bmatrix}
B_{1-k}^{(1)}(a) & B_{2-k}^{(1)}(a) & \cdots & B_N^{(1)}(a)\\
\vdots & \vdots & & \vdots\\
B_{1-k}^{(m-1)}(a) & B_{2-k}^{(m-1)}(a) & \cdots & B_N^{(m-1)}(a)\\
B_{1-k}(\xi_1) & B_{2-k}(\xi_1) & \cdots & B_N(\xi_1)\\
\vdots & \vdots & & \vdots\\
B_{1-k}(\xi_{N+1}) & B_{2-k}(\xi_{N+1}) & \cdots & B_N(\xi_{N+1})\\
B_{1-k}^{(1)}(b) & B_{2-k}^{(1)}(b) & \cdots & B_N^{(1)}(b)\\
\vdots & \vdots & & \vdots\\
B_{1-k}^{(m-1)}(b) & B_{2-k}^{(m-1)}(b) & \cdots & B_N^{(m-1)}(b)
\end{bmatrix}
\begin{bmatrix} c_{1-k}\\ c_{2-k}\\ \vdots\\ c_N \end{bmatrix}
=
\begin{bmatrix} y_{a,1}\\ \vdots\\ y_{a,m-1}\\ y_1\\ \vdots\\ y_{N+1}\\ y_{b,1}\\ \vdots\\ y_{b,m-1} \end{bmatrix}.
$$
As with the linear system obtained for the natural spline, the above (N + k) × (N + k)
matrix is nonsingular and banded, and consequently, the above system can be solved very
efficiently using Doolittle’s LU decomposition for banded matrices.
Example 5.8. Find the complete cubic spline s which passes through the points (1, 2),
(3, 6), (4, 4), (7, 1) and additionally satisfies s0 (1) = 0 and s0 (7) = −2.
This example is solved by the Octave script ex_complete.m. The complete splines are
famous for the following property:
Theorem 5.9. Let s be the complete spline as described in Theorem 5.7. If f ∈ C k−1 [a, b]
interpolates the given data:
f (ξi ) = yi , i = 1, 2, . . . , N + 1,
then
$$\int_a^b \bigl(s^{(m)}(t)\bigr)^2\,dt \;\le\; \int_a^b \bigl(f^{(m)}(t)\bigr)^2\,dt, \qquad\text{where } m = (k+1)/2.$$
The complete splines have another property which can be used to find the bb-form of a
spline given in pp form.
Theorem 5.10. Let f ∈ Sk,Ξ and let s be the complete spline (as in Theorem 5.7) deter-
mined by the interpolation conditions
$$s(\xi_i) = f(\xi_i), \qquad i = 1, 2, \ldots, N+1,$$
Then s = f .
Theorem 5.11. There exists a unique spline $s \in S_{k,\widetilde\Xi}$ which satisfies the interpolation conditions (5.1).
Let us write $\widetilde\Xi =: \{\tilde\xi_1, \tilde\xi_2, \ldots, \tilde\xi_{\widetilde N+1}\}$, where $\widetilde N = N - k + 1$, and as usual we suppose we have additional knots $\tilde\xi_i$ so that $\tilde\xi_i < \tilde\xi_{i+1}$ for all $i \in \mathbb{Z}$.
With $\widetilde B_j$, $j = 1-k, 2-k, \ldots, \widetilde N$, denoting our B-spline basis for $S_{k,\widetilde\Xi}$, we write s in the bb-form
$$s(x) = \sum_{j=1-k}^{\widetilde N} c_j \widetilde B_j(x).$$
As with the linear system obtained for the natural and complete spline, the above (N +
1) × (N + 1) matrix is nonsingular and banded, and consequently, the above system can
be solved very efficiently using Doolittle’s LU decomposition for banded matrices.
Example 5.12. Find the not-a-knot cubic spline s which passes through the points (1, 2),
(3, 6), (4, 4), (7, 1), (8, 0), and (9, 2).
Definition 5.13. Let k > 1 be an odd integer. The periodic spline s ∈ Sk,Ξ is obtained
by imposing the interpolation conditions (5.1) along with the additional end conditions
With s written in the form (5.2), these additional end conditions become
$$\sum_{j=1-k}^{N} c_j\bigl(B_j^{(\ell)}(a) - B_j^{(\ell)}(b)\bigr) = 0.$$
Adjoining these extra equations to those of (5.3), we arrive at the linear system
$$
\begin{bmatrix}
B_{1-k}(\xi_1) & \cdots & B_N(\xi_1)\\
\vdots & & \vdots\\
B_{1-k}(\xi_{N+1}) & \cdots & B_N(\xi_{N+1})\\
B_{1-k}^{(1)}(a) - B_{1-k}^{(1)}(b) & \cdots & B_N^{(1)}(a) - B_N^{(1)}(b)\\
\vdots & & \vdots\\
B_{1-k}^{(k-1)}(a) - B_{1-k}^{(k-1)}(b) & \cdots & B_N^{(k-1)}(a) - B_N^{(k-1)}(b)
\end{bmatrix}
\begin{bmatrix} c_{1-k}\\ c_{2-k}\\ \vdots\\ c_N \end{bmatrix}
=
\begin{bmatrix} y_1\\ \vdots\\ y_{N+1}\\ 0\\ \vdots\\ 0 \end{bmatrix}.
$$
The above (N + k) × (N + k) matrix fails to be banded due to the bottom k − 1 rows. How-
ever, it is still possible to efficiently solve this system (assuming it has a unique solution)
using a technique called the shooting method.
Example 5.14. Find the periodic cubic spline s which passes through the points (0, 0),
(1, 1), (2, 0), (3, −1), (4, 0).
This example is solved by the Octave script ex_periodic.m.
$$h := \max_{1 \le i \le N}\,(\xi_{i+1} - \xi_i)$$
denote the length of the longest subinterval cut by the knot sequence $\{a = \xi_1, \xi_2, \ldots, \xi_{N+1} = b\}$. For the natural spline, we have the following error estimate:
Theorem 6.1. Let k ∈ N be an odd integer, set m = (k + 1)/2, and assume that f ∈
C m [a, b]. There exists a constant C (depending only on f and k) such that if s is the
natural spline of degree k which satisfies the interpolation conditions
then
$$\max_{x\in[a,b]} |s(x) - f(x)| \le C\,h^m.$$
then
$$\max_{x\in[a,b]} |s(x) - f(x)| \le C\,h^r.$$
Remark. In case k = 1, the natural linear spline and the complete linear spline are identical to what we have been calling the linear spline interpolant. Both of the above theorems apply to the linear spline interpolant.
Example 6.3. Let f (x) = sin(x), 0 ≤ x ≤ 10. For a given N , let Ξ consist of the N + 1
equi-spaced knots from 0 to 10. Compute the quintic natural, complete, and not-a-knot
spline interpolants to f at Ξ for N = 10, 20, 40, and compare the obtained accuracies.
Solution. See the Octave script ex_splinecompare.m.
$$y'''(t) - \cos(t)\,y(t) = \frac{1}{1+t^2}, \qquad 0 \le t \le 5.$$
Find a spline s which passes through these points and see how well it satisfies the ODE.
Solution. See the Octave script ex_checkode.m.
Example 6.6. Construct a parametric cubic spline curve which passes through the points
(0, 0), (1, 1), (0, 0), (1, −1), (0, 0), (−1, −1), (0, 0), (−1, 1), (0, 0). Try it with both natural
and periodic end conditions and compare the results.
Solution. See the Octave script ex_splinecurve.
The above applications were easily implemented using the following supplementary Oc-
tave commands:
[X,S]=natural_spline(Xi,y,k)
produces the pp (X,S) which represents the natural spline interpolant as described in
Theorem 5.4.
[X,S]=complete_spline(Xi,y,k,y_a,y_b)
produces the pp (X,S) which represents the complete spline interpolant as described in
Theorem 5.7.
[X,S]=notaknot_spline(Xi,y,k)
produces the pp (X,S) which represents the not-a-knot spline interpolant as described in
Theorem 5.11.
[X,S]=periodic_spline(Xi,y,k)
produces the pp (X,S) which represents the periodic spline interpolant as described in
Definition 5.13.
The function s is said to approximate the given data. We assume that the nodes $\Xi := \{a = \xi_1, \xi_2, \ldots, \xi_{N+1} = b\}$ are increasing (i.e., $\xi_i < \xi_{i+1}$), and we desire to choose s as an element of the spline space Sk,Ξ, where k ≥ 1 is an odd integer chosen to reflect the desired degree of smoothness in s.
Definition 7.2. Put m := (k + 1)/2. For λ > 0 and f ∈ C m [a, b] we define
$$J_\lambda(f) := \int_a^b \bigl(f^{(m)}(x)\bigr)^2\,dx + \lambda \sum_{i=1}^{N+1} |f(\xi_i) - y_i|^2.$$
Theorem 7.3. For each λ > 0, there exists a unique spline s ∈ Sk,Ξ such that $J_\lambda(s) \le J_\lambda(f)$ for all $f \in C^m[a,b]$.
The spline s above is called a smoothing spline. The parameter λ controls the trade-off
between the curviness of s and the degree that the given data is approximated. Small
values of λ will produce a gentle curve s which does not follow the given data very closely;
whereas large values of λ will produce a curvy function s which closely follows the given
data.
where the coefficients {cj } are unknown for the moment. Our first task is to express Jλ (s)
in terms of {cj }.
If G is the (N + k) × (N + k) symmetric matrix given by
$$G(i,j) = \int_a^b B_{i-k}^{(m)}(t)\,B_{j-k}^{(m)}(t)\,dt,$$
and X is the column vector $X = [c_{1-k}, c_{2-k}, \ldots, c_N]^T$, then
$$\int_a^b \bigl(s^{(m)}(t)\bigr)^2\,dt = X \cdot GX.$$
If A is the (N + 1) × (N + k) matrix given by $A(i,j) = B_{j-k}(\xi_i)$, then
$$\sum_{i=1}^{N+1} |s(\xi_i) - y_i|^2 = (AX - Y)\cdot(AX - Y),$$
where Y denotes the column vector $Y = [y_1, y_2, \ldots, y_{N+1}]^T$. Thus we can write $J_\lambda(s)$ as
$$J_\lambda(s) = X\cdot GX + \lambda\,(AX - Y)\cdot(AX - Y).$$
The vector X which minimizes $J_\lambda(s)$ is found by solving the equations
$$\frac{\partial}{\partial c_j} J_\lambda(s) = 0 \qquad\text{for } j = 1-k, 2-k, \ldots, N.$$
Using the gradient operator $\nabla_X$, these equations become
$$(7.4)\qquad \nabla_X\bigl[X\cdot GX + \lambda\,(AX - Y)\cdot(AX - Y)\bigr] = 0.$$
Proposition 7.5. If C and D are n × n matrices and b and d are n × 1 column vectors, then
$$\nabla_X\bigl[(CX + d)\cdot(DX + b)\bigr] = C'(DX + b) + D'(CX + d),$$
where $C'$ denotes the transpose of C.
Applying Proposition 7.5 (and the symmetry of G) yields
$$\nabla_X\bigl[X\cdot GX + \lambda\,(AX - Y)\cdot(AX - Y)\bigr] = G'X + GX + 2\lambda A'(AX - Y) = 2\bigl(GX + \lambda A'(AX - Y)\bigr).$$
Substituting this into equation (7.4) and simplifying yields the following linear system:
$$(G + \lambda A'A)\,X = \lambda A'Y.$$
The (N + k) × (N + k) matrix G + λA0 A is nonsingular and banded and consequently
the above system can be efficiently solved using Doolittle’s LU decomposition for banded
matrices. The Octave script ex_smoothing demonstrates the smoothing spline.
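As a sanity check, the final system is easy to solve with NumPy for a small made-up G and A (the matrices below are illustrative stand-ins, not ones assembled from actual B-splines):

```python
import numpy as np

lam = 10.0
G = np.array([[2.0, -1.0, 0.0],          # symmetric stand-in for the Gram matrix
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
A = np.array([[1.0, 0.0, 0.0],           # stand-in for B_{j-k}(xi_i)
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
Y = np.array([1.0, 2.0, 3.0, 6.0])

# solve (G + lam*A'A) X = lam*A'Y
X = np.linalg.solve(G + lam * A.T @ A, lam * A.T @ Y)
print(np.allclose((G + lam * A.T @ A) @ X, lam * A.T @ Y))  # → True
```

A real implementation would build G and A from the B-splines and exploit the band structure rather than call a dense solver.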
Let f ∈ C(R) and h > 0. In this section we consider the problem of approximating f, given the values f(hj), j ∈ Z. In real life, of course, one would actually work on a bounded interval [a, b], but the theory is cleaner when we work on the entire real line; it is also easy to transfer our results and methods to the interval [a, b].
Definition 8.1. A function φ ∈ C(R) is compactly supported if there exists a > 0 such
that φ(x) = 0 whenever |x| ≥ a. The space of all compactly supported functions φ ∈ C(R)
is denoted Cc (R).
Definition 8.2. Let φ, ψ ∈ C(R) and assume that φ or ψ is compactly supported. The convolution of φ and ψ is defined by
$$(\phi * \psi)(x) = \int_{-\infty}^{\infty} \phi(t)\,\psi(x - t)\,dt.$$
In the following theorem, we use the notation $D^k$ to denote the k-th derivative operator; in other words, $D^k\phi = \phi^{(k)}$.
Theorem 8.3. For φ ∈ Cc (R) and ψ, p ∈ C(R), the following hold:
(i) φ ∗ ψ = ψ ∗ φ.
(ii) φ ∗ (αψ + βp) = α(φ ∗ ψ) + β(φ ∗ p), for all α, β ∈ R.
(iii) φ ∗ ψ ∈ C(R).
(iv) If $\phi \in C^k(\mathbb{R})$, then $\phi * \psi \in C^k(\mathbb{R})$ and $D^k(\phi * \psi) = (D^k\phi) * \psi$; similarly, if $\psi \in C^k(\mathbb{R})$, then $D^k(\phi * \psi) = \phi * (D^k\psi)$.
Remark. Item (i) above shows that the convolution operator is commutative. It is also
associative, provided that all three functions are compactly supported. Items (ii) and (iii)
show that φ∗ is a linear operator on C(R).
Definition 8.4. For $\phi \in C_c(\mathbb{R})$ and $p \in C(\mathbb{R})$, the semi-discrete convolution of φ and p is defined by
$$(\phi \ast' p)(x) = \sum_{j\in\mathbb{Z}} p(j)\,\phi(x - j), \qquad x \in \mathbb{R}.$$
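The sum is finite for each x because φ is compactly supported, so it can be computed directly; a Python sketch using the centered hat (a shifted cardinal B-spline of degree 1, which reproduces linear polynomials under ∗′):

```python
def hat(t):
    """Centered hat function: B-spline of degree 1 with knots -1, 0, 1."""
    return max(0.0, 1.0 - abs(t))

def semidiscrete(phi, p, x, M=100):
    """(phi *' p)(x) = sum_j p(j) phi(x - j), truncated to |j| <= M."""
    return sum(p(j) * phi(x - j) for j in range(-M, M + 1))

p = lambda t: 3 * t + 2          # a linear polynomial
print(semidiscrete(hat, p, 2.3))  # ≈ 8.9 = p(2.3): linears are reproduced
```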
(iii) $\phi \ast' p \in C(\mathbb{R})$.
(iv) If $\phi \in C^k(\mathbb{R})$, then $\phi \ast' p \in C^k(\mathbb{R})$ and
$$D^k(\phi \ast' p) = (D^k\phi) \ast' p.$$
Remark. Item (i) shows that $p \mapsto \phi \ast' p$ is a linear operator on C(R), while (ii) shows that $\phi \mapsto \phi \ast' p$ is a linear mapping from $C_c(\mathbb{R})$ into C(R).
Given the values f(j), j ∈ Z, our plan is to approximate f by $s = \phi \ast' f$. Of course one must carefully choose the function φ, but we'll discuss that issue momentarily. In case the given data is f(hj), j ∈ Z, then we could approximate the function $\tilde f(x) := f(hx)$ (note that $\tilde f(j)$, j ∈ Z, is given) by $\tilde s = \phi \ast' \tilde f$. In that case, our approximation to f would be $s(x) = \tilde s(x/h) = \cdots = \sum_{j\in\mathbb{Z}} f(hj)\,\phi(x/h - j)$. With this as motivation, we make the
Definition 8.6. For $\phi \in C_c(\mathbb{R})$ and $p \in C(\mathbb{R})$, we define
$$(\phi \ast'_h p)(x) = \sum_{j\in\mathbb{Z}} p(hj)\,\phi(x/h - j), \qquad x \in \mathbb{R}.$$
Remark 8.7. It is easy to see that Theorem 8.5 remains true when we replace $\ast'$ with $\ast'_h$, provided the last display is written as
$$\|f\|_{\mathbb{R}} := \sup_{x\in\mathbb{R}} |f(x)|.$$
(sup is similar in meaning to max except that the supremum is allowed to equal ∞ and there is no guarantee that the supremum is actually attained.)
Approximation Theorem 8.9. Let $\phi \in C_c(\mathbb{R})$ and $k \in \mathbb{N}_0$. If $\phi \ast' p = p$ for all $p \in \Pi_k$, then there exists a constant C (depending only on φ and k) such that for all h > 0 and $f \in C^{k+1}(\mathbb{R})$,
$$\|f - \phi \ast'_h f\|_{\mathbb{R}} \le C\,h^{k+1}\,\bigl\|D^{k+1}f\bigr\|_{\mathbb{R}}.$$
Our proof of this theorem requires the following lemma and Taylor’s theorem.
Lemma 8.10. Let $\phi \in C_c(\mathbb{R})$ and $k \in \mathbb{N}_0$. If $\phi \ast' p = p$ for all $p \in \Pi_k$, then $\phi \ast'_h p = p$ for all $p \in \Pi_k$ and h > 0.
Definition 8.11. Let $k \in \mathbb{N}_0$, c ∈ R, and let $f \in C^k(\mathbb{R})$. The k-th degree Taylor polynomial to f at c is the polynomial $p \in \Pi_k$ given by
$$p(x) := \sum_{\ell=0}^{k} \frac{f^{(\ell)}(c)}{\ell!}\,(x - c)^\ell.$$
Taylor’s Theorem 8.12. Let $f \in C^{k+1}(\mathbb{R})$ and let p be the k-th degree Taylor polynomial to f at c. For every x ≠ c, there exists z, between x and c, such that
$$f(x) = p(x) + \frac{f^{(k+1)}(z)}{(k+1)!}\,(x - c)^{k+1}.$$
Proof of Theorem 8.9. Since φ is compactly supported, there exists M ∈ N such that φ(x) = 0 whenever |x| ≥ M. It follows that for all y ∈ [−1, 1] and g ∈ C(R),
$$(\phi \ast' g)(y) = \sum_{j=-M}^{M} g(j)\,\phi(y - j)$$
because φ(y − j) will equal 0 whenever |j| > M. Now let c ∈ R, say c ∈ [hℓ, hℓ + h) for some integer ℓ, and let p be the k-th degree Taylor polynomial to f at c. Then
$$\le (2M+1)\,\|\phi\|_{\mathbb{R}}\,\frac{(M+1)^{k+1}}{(k+1)!}\,h^{k+1}\,\bigl\|D^{k+1}f\bigr\|_{\mathbb{R}}.$$
The conclusion of the theorem now follows with $C = (2M+1)\,\frac{(M+1)^{k+1}}{(k+1)!}\,\|\phi\|_{\mathbb{R}}$.
Remark. Approximation Theorem 8.9 shows that if our approximation $f \approx \phi \ast'_h f$ is exact whenever f is a polynomial in $\Pi_k$, then the error estimate $\|f - \phi \ast'_h f\|_{\mathbb{R}} \le C h^{k+1} \|D^{k+1}f\|_{\mathbb{R}}$ holds.
Throughout this section we assume that k ∈ N0 . We see from the above Approximation
Theorem that it is desirable to have a function φ ∈ Cc (R) which satisfies φ ∗0 p = p for all
p ∈ Πk . In this section we discuss the construction of such a function.
Definition 9.1. A function $\phi \in C_c(\mathbb{R})$ satisfies the Strang-Fix conditions of order k + 1 if $\phi \ast' p \in \Pi_k$ for all $p \in \Pi_k$.
The above formulation of the Strang-Fix conditions is equivalent to the usual formu-
lation which is written in terms of the Fourier transform of φ (see G. Strang & G. Fix,
A Fourier analysis of the finite element variational method, in “Constructive Aspects of
Functional Analysis” (G. Geymonat, Ed.), pp. 793–840, C.I.M.E., 1973). We omit the
proof of the following theorem because it requires the theory of tempered distributions and
their Fourier transforms.
Theorem 9.2. If $\phi \in C_c(\mathbb{R})$ satisfies the Strang-Fix conditions of order k + 1, then $\phi \ast' p = \phi * p$ for all $p \in \Pi_k$.
The canonical example of a function which satisfies the Strang-Fix conditions is that of
a cardinal B-spline. For that let us employ the integer knots ξi = i, i ∈ Z, and let Bik ,
i ∈ Z, be the B-splines of degree k as defined in Section 3. These B-splines are called
cardinal B-splines because the knots are the integers. Since the knots are equi-spaced, the
B-splines Bik are all translates of B0k ; indeed,
Bik (x) = B0k (x − i), x ∈ R.
Proposition 9.3. Let k ∈ N. The cardinal B-spline B0k satisfies the Strang-Fix conditions
of order k + 1.
Proof. (by induction on k) The base case k = 1 is covered by our Partition of Unity
theorem and a homework problem. For the induction step, assume the proposition is true
for k − 1 and consider k. Let $q \in \Pi_k$ and note that
$$\frac{d}{dx}\bigl(B_0^k \ast' q\bigr)(x) = \sum_j \bigl(q(j) - q(j-1)\bigr)\,B_j^{k-1}(x) = \bigl(B_0^{k-1} \ast' p\bigr)(x),$$
where p(x) := q(x) − q(x − 1). Since $p \in \Pi_{k-1}$, we have by the induction hypothesis that $B_0^{k-1} \ast' p$ is a polynomial in $\Pi_{k-1}$. Since the derivative of $B_0^k \ast' q$ belongs to $\Pi_{k-1}$, it follows that $B_0^k \ast' q$ belongs to $\Pi_k$.
will satisfy the Strang-Fix conditions of order k + 1, and consequently we will have $\phi \ast' p = \phi * p$ for all $p \in \Pi_k$. We can then focus our efforts on choosing the scalars {λj} and translation points {τj} to achieve $\phi * p = p$ for all $p \in \Pi_k$.
Example 9.5. Show that if φ ∈ Cc (R) and p ∈ Πk , then φ ∗ p is also a polynomial in Πk .
Lemma 9.6. Let $\phi \in C_c(\mathbb{R})$ and let $p \in \Pi_\ell$ be defined by $p(x) = x^\ell$. Then $\phi * p = p$ if and only if
$$\int_{-\infty}^{\infty} t^r \phi(t)\,dt = \delta_{r,0} \qquad\text{for } r = 0, 1, \ldots, \ell,$$
where $\delta_{i,j} := \begin{cases} 1 & \text{if } i = j\\ 0 & \text{if } i \ne j \end{cases}$ denotes the Kronecker δ-function.
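For the centered hat function, these moment conditions with ℓ = 1 can be verified by simple midpoint quadrature (the quadrature helper is ours):

```python
def moment(phi, r, a, b, n=4000):
    """Midpoint-rule approximation of the r-th moment of phi over [a, b]."""
    h = (b - a) / n
    return sum(((a + (i + 0.5) * h) ** r) * phi(a + (i + 0.5) * h)
               for i in range(n)) * h

hat = lambda t: max(0.0, 1.0 - abs(t))   # centered degree-1 B-spline
print(moment(hat, 0, -1, 1), moment(hat, 1, -1, 1))  # ≈ 1 and ≈ 0
```

Since the zeroth moment is 1 and the first moment vanishes, the lemma gives hat ∗ p = p for p ∈ Π1.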
Theorem 9.7. Let $\phi \in C_c(\mathbb{R})$ and let $q \in \Pi_k$ be defined by $q(x) = x^k$. The following are equivalent:
(i) $\phi * p = p$ for all $p \in \Pi_k$.
(ii) $\phi * q = q$.
(iii) $\int_{-\infty}^{\infty} t^\ell \phi(t)\,dt = \delta_{\ell,0}$ for $\ell = 0, 1, \ldots, k$.
Theorem 9.10. Assume $\psi \in C_c(\mathbb{R})$ has nonzero mean. If $\tau_1, \tau_2, \ldots, \tau_{k+1} \in \mathbb{R}$ are distinct, then there exist unique scalars $\lambda_1, \lambda_2, \ldots, \lambda_{k+1}$ such that
$$\phi(x) := \sum_{j=1}^{k+1} \lambda_j\,\psi(x - \tau_j)$$
satisfies $\phi * p = p$ for all $p \in \Pi_k$.
Note that the equation $\int_{-\infty}^{\infty} t^\ell \phi(t)\,dt = \delta_{\ell,0}$ can be written in terms of the unknowns $\{\lambda_j\}$ as
$$\sum_{j=1}^{k+1} \lambda_j \int_{-\infty}^{\infty} t^\ell\,\psi(t - \tau_j)\,dt = \delta_{\ell,0}.$$
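These k + 1 equations form a small linear system for the λj. A NumPy sketch for k = 1 with ψ the centered hat and τ = (−0.5, 0.5) (these particular choices are ours, for illustration):

```python
import numpy as np

hat = lambda t: max(0.0, 1.0 - abs(t))   # centered degree-1 B-spline

def moment(phi, ell, a, b, n=4000):
    """Midpoint-rule approximation of the ell-th moment of phi over [a, b]."""
    h = (b - a) / n
    return sum(((a + (i + 0.5) * h) ** ell) * phi(a + (i + 0.5) * h)
               for i in range(n)) * h

k = 1
tau = [-0.5, 0.5]
# M[ell][j] = integral of t^ell * psi(t - tau_j); right-hand side delta_{ell,0}
M = np.array([[moment(lambda t, tj=tj: hat(t - tj), ell, -2, 2)
               for tj in tau] for ell in range(k + 1)])
lam = np.linalg.solve(M, np.eye(k + 1)[:, 0])
print(lam)  # ≈ [0.5, 0.5]
```

Here the moment matrix is approximately [[1, 1], [−0.5, 0.5]], so the two shifted hats get equal weights 1/2, and the resulting φ reproduces Π1.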