
MATHEMATICS OF COMPUTATION

Volume 66, Number 220, October 1997, Pages 1593–1618


S 0025-5718(97)00876-4

WAVELETS BASED ON ORTHOGONAL POLYNOMIALS

BERND FISCHER AND JÜRGEN PRESTIN

Abstract. We present a unified approach for the construction of polynomial wavelets. Our main tool is orthogonal polynomials. With the help of their properties we devise schemes for the construction of time localized polynomial bases on bounded and unbounded subsets of the real line. Several examples illustrate the new approach.

1. Introduction
In this paper we introduce and discuss a new method for the construction of time
localized bases for polynomial subspaces of an L2 -space with arbitrary weight. Our
analysis is based upon the theory of orthogonal polynomials. Whereas the frequency
localization will be predetermined by the choice of the polynomial spaces, the time
localization will be realized by the choice of special basis functions. More precisely,
such a basis function will be defined as the solution of a constrained approximation
problem which is designed such that its solution is maximally localized around a
specified point.
Starting with the paper of Chui and Mhaskar [2], discussing trigonometric polynomial multiresolution analysis, the theory has been adapted to the algebraic polynomial case; see, e.g., Kilgore and Prestin [6] and Tasche [10]. They investigated the
special case of the Chebyshev weight of the first kind. Their analysis is based on
the properties of ordinary Chebyshev polynomials and does not carry over to other
weight functions. In contrast, our derivations make use of the general theory of
kernel polynomials. This allows us to treat not only weight functions which are
supported on a compact interval (e.g., Jacobi weights) but also weight functions
which are supported on the real line (e.g., Hermite weight) or on the real half line
(e.g., Laguerre weight). Moreover, we relate our approach to the classical concept
of multiresolution analysis due to Mallat and Meyer.
The paper is organized as follows. In Section 2 we collect some basic properties
of orthogonal polynomials. Besides more theoretical results we discuss in particular
computational aspects of orthogonal polynomials. Then we define scaling functions
and wavelets and investigate some of their properties. This includes questions con-
cerning orthogonality, interpolatory properties, time localization, and the construc-
tion of dual functions. In Section 3 we discuss the algorithms for reconstruction
and decomposition. Because all participating spaces are of finite dimension, it is

Received by the editor January 24, 1996 and, in revised form, July 8, 1996.
1991 Mathematics Subject Classification. Primary 42C05, 65D05.
Key words and phrases. Orthogonal polynomials, polynomial wavelets, multiresolution analysis, kernel polynomials.

© 1997 American Mathematical Society


straightforward to devise a compact matrix formulation for these schemes. Section 4 is concerned with a comparison to the ordinary multiresolution analysis and in particular with questions related to the Riesz stability and to the (generalized) translation invariance of the proposed basis functions. Finally in Section 5 we apply the new approach to two different Chebyshev weights.

2. Scaling functions and wavelets


After having collected some auxiliary results for orthogonal polynomials, we will
define in this section scaling functions and wavelets with respect to arbitrary weight
functions.

2.1. Orthogonal polynomials. Let dσ(t) be a nonnegative measure on the real line, with compact or infinite support [a, b], −∞ ≤ a < b ≤ ∞, for which all moments

(2.1)  $\nu_r := \int_a^b t^r \, d\sigma(t), \qquad r = 0, 1, \ldots,$

exist and are finite with ν0 > 0. With dσ(t) there is associated an inner product and a norm

(2.2)  $\langle p, q \rangle := \int_a^b p(t) q(t) \, d\sigma(t), \qquad \|p\| := \sqrt{\langle p, p \rangle},$

on the vector space of all polynomials. It is well known (see, for example, Szegö [8, §2.2]) that there exists a unique system of polynomials that are orthonormal with respect to this inner product, i.e., a set of polynomials {P_r} such that

(2.3)  $\langle P_k, P_l \rangle = \delta_{k,l}.$

In general the system {P_r} consists of infinitely many polynomials, but reduces to a finite number if σ(t) has only finitely many points of increase. Throughout this paper we assume that σ(t) has at least 2n + 1 points of increase and consequently {P_r}_{r=0}^{2n} forms a basis for V_{2n}, where

(2.4)  $V_n := \operatorname{span}\{P_0, P_1, \ldots, P_n\}.$

An important special case is given by distributions of the form w(t) dt. Here we assume that the weight function w(t) is nonnegative with $\int_a^b w(t)\,dt > 0$.
The orthogonal polynomials P_k fulfill the following three-term recurrence relation:

$P_{-1}(t) := 0, \qquad P_0(t) = \nu_0^{-1/2},$

(2.5)  $b_{k+1} P_k(t) = (t - a_k) P_{k-1}(t) - b_k P_{k-2}(t), \qquad k \ge 1.$
Let us collect together the three-term recurrence coefficients of {P_r}_{r=0}^{n} into an unreduced symmetric tridiagonal matrix J_n, the so-called Jacobi matrix,

(2.6)  $J_n := \begin{pmatrix} a_1 & b_2 & 0 & \cdots & 0 \\ b_2 & a_2 & \ddots & \ddots & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & b_n \\ 0 & \cdots & 0 & b_n & a_n \end{pmatrix}.$

With the vector

(2.7)  $v_n(t) := (P_0(t), P_1(t), \ldots, P_n(t))^T$

we can rewrite the three-term recurrence relation of the orthonormal polynomials (2.5) in compact matrix notation as

(2.8)  $t \, v_{n-1}(t) = J_n v_{n-1}(t) + b_{n+1} P_n(t) e_n,$

where $e_n := (0, 0, \ldots, 0, 1)^T$ denotes the nth unit vector.
The next lemma collects some properties of the zeros of orthogonal polynomials. A proof of parts (a) and (b) may be found in Szegö [8, Theorems 3.3.1, 3.3.2], whereas (c) follows directly from (2.8).

Lemma 2.1. Let $y_r^{(n)}$, r = 0, 1, ..., n − 1, denote the zeros of $P_n$.
(a) The zeros of $P_n$ are all real and simple and are located in (a, b):
$a < y_0^{(n)} < y_1^{(n)} < \cdots < y_{n-1}^{(n)} < b.$
(b) The zeros of $P_n$ and $P_{n+1}$ separate each other:
$y_0^{(n+1)} < y_0^{(n)} < y_1^{(n+1)} < y_1^{(n)} < \cdots < y_{n-1}^{(n)} < y_n^{(n+1)}.$
(c) Any zero $y_r^{(n)}$ of $P_n$ is an eigenvalue of $J_n$ with eigenvector $v_{n-1}(y_r^{(n)})$.
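A computational aside (not part of the original paper): part (c) is the basis of the Golub–Welsch approach of computing the zeros of P_n as eigenvalues of J_n. A minimal Python sketch for the Legendre weight w(t) = 1, where the recurrence coefficients a_k = 0 and b_{k+1} = k/√(4k² − 1) are assumed from the standard Legendre recurrence:

    import numpy as np

    n = 8
    # Legendre weight on [-1, 1]: a_k = 0, b_{k+1} = k / sqrt(4k^2 - 1)  (assumed, standard recurrence)
    b = np.array([k / np.sqrt(4.0 * k * k - 1.0) for k in range(1, n)])   # b_2, ..., b_n
    J = np.diag(b, 1) + np.diag(b, -1)                                    # Jacobi matrix J_n (diagonal a_k = 0)
    zeros = np.sort(np.linalg.eigvalsh(J))                                # zeros of P_n, cf. Lemma 2.1(c)
    print(np.allclose(zeros, np.polynomial.legendre.leggauss(n)[0]))      # sanity check: True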
For a given fixed number ξ ∈ R the polynomial

(2.9)  $K_n(t; \xi) := \sum_{k=0}^{n} P_k(t) P_k(\xi)$

is called the kernel polynomial with respect to ⟨·, ·⟩ (and the parameter ξ). Note that

(2.10)  $K_n(\xi; \xi) = \sum_{k=0}^{n} P_k(\xi)^2 > 0.$

The name "kernel" is motivated by the following result, which is also known as the reproducing property of the kernel polynomials (see, e.g., Davis [3, §10.1]),

(2.11)  $\langle K_n(\cdot\,; \xi), p \rangle = \int_a^b K_n(t; \xi) p(t) \, d\sigma(t) = p(\xi), \qquad \text{for all } p \in V_n.$

The nth kernel polynomial $K_n(t; \xi)$ is the unique solution of the following constrained approximation problem (cf. Szegö [8, Theorem 3.1.3])

(2.12)  $\left\| \frac{K_n(\cdot\,; \xi)}{K_n(\xi; \xi)} \right\| = \min\bigl\{ \|p\| : p \in V_n,\ p(\xi) = 1 \bigr\}.$
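To make the reproducing property (2.11) concrete, here is a small sketch (my addition, not from the paper) for the Legendre weight, where the orthonormal polynomials are P_k(t) = √(k + 1/2) L_k(t) with L_k the classical Legendre polynomial; the integral is evaluated exactly by Gauss–Legendre quadrature with n + 1 nodes.

    import numpy as np
    from numpy.polynomial import legendre as leg

    n, xi = 12, 0.5

    def P(k, x):                        # orthonormal Legendre polynomial P_k
        c = np.zeros(k + 1); c[k] = np.sqrt(k + 0.5)
        return leg.legval(x, c)

    def K(t, xi):                       # kernel polynomial (2.9)
        return sum(P(k, t) * P(k, xi) for k in range(n + 1))

    x, w = leg.leggauss(n + 1)          # exact for polynomials of degree <= 2n + 1
    p = lambda t: 0.3 * P(2, t) - 1.7 * P(n, t)        # an arbitrary p in V_n
    print(np.dot(w, K(x, xi) * p(x)), p(xi))           # both numbers agree, cf. (2.11)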
2.2. Scaling functions. Equation (2.12) indicates that the kernel polynomials are localized around ξ. Motivated by this property we define scaling functions as kernel polynomials

(2.13)  $\varphi_{n,r}(t) = \varphi_n(t; x_r^{(n+1)}) := K_n(t; x_r^{(n+1)}), \qquad r = 0, 1, \ldots, n,$

with respect to a suitable set of parameters

(2.14)  $x_0^{(n+1)} < x_1^{(n+1)} < \cdots < x_n^{(n+1)}.$
The next figure displays some typical scaling functions. Note that we plotted in (a) and (b) the "plain" polynomials and in (c) and (d) the polynomials times the underlying weight function. Actually, one may view the scaling functions on the one hand as polynomial basis functions in a weighted L2-space and on the other hand as weighted polynomial basis functions in an unweighted L2-space.

[Figure 2.2. Various scaling functions of degree n = 32. (a) Legendre weight: w(t) = 1; ϕ_32(t; 0.5); (b) Jacobi weight: w(t) = (1 − t)^{−0.5}(1 + t)^{−0.8}; ϕ_32(t; 0.5); (c) Laguerre weight: w(t) = t^{1/2} exp(−t); ϕ_32(t; 1)·w(t); (d) Hermite weight: w(t) = exp(−t²); ϕ_32(t; 1)·w(t).]
Some properties of these polynomial scaling functions are summarized in the next theorem.

Theorem 2.3. Let $\varphi_{n,r}(t) = \varphi_n(t; x_r^{(n+1)})$ denote the scaling functions with respect to a given set of parameters $x_0^{(n+1)} < x_1^{(n+1)} < \cdots < x_n^{(n+1)}$.
(a) The inner product of scaling functions may be evaluated as follows:
$\langle \varphi_{n,r}, \varphi_{n,s} \rangle = \varphi_{n,r}(x_s^{(n+1)}), \qquad r, s = 0, 1, \ldots, n.$
(b) The scaling function $\varphi_{n,r}$ is localized around $x_r^{(n+1)}$. More precisely, we have
$\left\| \frac{\varphi_{n,r}}{\varphi_{n,r}(x_r^{(n+1)})} \right\| = \min\bigl\{ \|p\| : p \in V_n,\ p(x_r^{(n+1)}) = 1 \bigr\}.$
(c) The $\varphi_{n,r}$'s form a basis for $V_n$, i.e.,
$V_n = \operatorname{span}\{\varphi_{n,0}, \varphi_{n,1}, \ldots, \varphi_{n,n}\}.$
(d) The scaling function $\varphi_{n,r}$ is orthogonal with respect to the "modified inner product" $\langle \cdot, \cdot\,(\cdot - x_r^{(n+1)}) \rangle$:
$\langle \varphi_{n,r}(\cdot), q(\cdot)(\cdot - x_r^{(n+1)}) \rangle = 0 \qquad \text{for all } q \in V_{n-1}.$
(e) The scaling function $\varphi_{n,r}$ satisfies the so-called Christoffel–Darboux identity
$\varphi_{n,r}(t) = b_{n+2} \, \frac{P_{n+1}(t) P_n(x_r^{(n+1)}) - P_n(t) P_{n+1}(x_r^{(n+1)})}{t - x_r^{(n+1)}},$
where $b_{n+2}$ is a three-term recurrence coefficient of $P_{n+1}$ (cf. (2.5)).
(f) Let $\{y_k^{(n)}\}_{k=0}^{n-1}$ and $\{y_k^{(n+1)}\}_{k=0}^{n}$ denote the zeros of $P_n$ and $P_{n+1}$, respectively. Moreover, define $y_{-1}^{(n)} := -\infty$ and $y_n^{(n)} := \infty$.
If $x_r^{(n+1)} = y_j^{(n)}$ is a zero of $P_n$, then $\varphi_{n,r}$ has the n − 1 zeros $y_k^{(n)}$, k = 0, 1, ..., j − 1, j + 1, ..., n − 1.
If $x_r^{(n+1)} = y_j^{(n+1)}$ is a zero of $P_{n+1}$, then $\varphi_{n,r}$ has the n zeros $y_k^{(n+1)}$, k = 0, 1, ..., j − 1, j + 1, ..., n.
If $x_r^{(n+1)} \in (y_j^{(n+1)}, y_j^{(n)})$, then $\varphi_{n,r}$ has precisely one zero in each interval $(y_k^{(n+1)}, y_k^{(n)})$, k = 0, 1, ..., j − 1, j + 1, ..., n.
If $x_r^{(n+1)} \in (y_{j-1}^{(n)}, y_j^{(n+1)})$, then $\varphi_{n,r}$ has precisely one zero in each interval $(y_{k-1}^{(n)}, y_k^{(n+1)})$, k = 0, 1, ..., j − 1, j + 1, ..., n.
Proof. Parts (a) and (b) follow immediately from (2.11) and (2.12), respectively.
To verify (c), assume that

(2.15)  $\sum_{r=0}^{n} \tau_r \varphi_{n,r}(t) \equiv 0.$

Furthermore, let $\{\ell_r\}_{r=0}^{n}$ denote the set of fundamental polynomials of Lagrange interpolation with respect to the knots (2.14), i.e.,

(2.16)  $\ell_r \in V_n \quad \text{and} \quad \ell_r(x_s^{(n+1)}) = \delta_{r,s}, \qquad r, s = 0, 1, \ldots, n.$

In view of the assumption (2.15) and the reproducing property (2.11) we deduce

$0 = \Bigl\langle \sum_{r=0}^{n} \tau_r \varphi_{n,r}, \ell_s \Bigr\rangle = \sum_{r=0}^{n} \tau_r \ell_s(x_r^{(n+1)}) = \tau_s,$

for s = 0, 1, ..., n, which shows the linear independence of the $\varphi_{n,r}$'s.
(d) is nothing but the reproducing property (2.11) applied to the polynomial $p(t) = (t - x_r^{(n+1)}) q(t)$.
(e) follows readily from the three-term recurrence relation (2.5) (compare Szegö [8, Theorem 3.2.2]).
(f) is a direct consequence of the Christoffel–Darboux identity (e) (compare Fischer [4, Theorem 2.5.8]).
Note that the interlacing property of the zeros of orthogonal polynomials (cf.
Lemma 2.1(b)) together with part (f) implies that the “zero-free interval” around
the constraint point shrinks with increasing degree, as is apparent from Figure 2.4.
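As an illustration of Theorem 2.3(e) (again my addition, not from the paper), the following sketch evaluates a Legendre scaling function once via the defining sum (2.13) and once via the Christoffel–Darboux form; the value b_{n+2} = (n + 1)/√(4(n + 1)² − 1) used below is the standard Legendre recurrence coefficient and is an assumption, not stated in the text.

    import numpy as np
    from numpy.polynomial import legendre as leg

    n, x_r = 10, 0.3                                        # constraint point x_r^{(n+1)}

    def P(k, x):                                            # orthonormal Legendre P_k
        c = np.zeros(k + 1); c[k] = np.sqrt(k + 0.5)
        return leg.legval(x, c)

    t = np.linspace(-0.9, 0.9, 5)                           # evaluation points (away from x_r)
    phi_sum = sum(P(k, t) * P(k, x_r) for k in range(n + 1))        # definition (2.13)
    b_n2 = (n + 1) / np.sqrt(4.0 * (n + 1) ** 2 - 1.0)              # b_{n+2} for the Legendre weight
    phi_cd = b_n2 * (P(n + 1, t) * P(n, x_r) - P(n, t) * P(n + 1, x_r)) / (t - x_r)
    print(np.allclose(phi_sum, phi_cd))                             # True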
Part (a) of the theorem above implies that the scaling functions (2.13) are orthogonal to each other if, and only if, they fulfill the interpolatory property $\varphi_{n,r}(x_s^{(n+1)}) = d_r^{(n+1)} \delta_{r,s}$, $d_r^{(n+1)} \in R$. This may be seen as a requirement for the parameter set (2.14). The next theorem characterizes the parameter sets which lead to orthogonal scaling functions.

[Figure 2.4. Scaling functions ϕ_n(t; 0.5) of degree n = 8, 16, 32 with respect to the Legendre weight w(t) = 1 and the parameter x_0^{(n+1)} = 0.5.]
Theorem 2.5. Let $\varphi_{n,r}(t) = \varphi_n(t; x_r^{(n+1)})$ denote the scaling functions with respect to a given parameter set $x_0^{(n+1)} < x_1^{(n+1)} < \cdots < x_n^{(n+1)}$. Then the following conditions are equivalent to the orthogonality of the scaling functions.
(a) The scaling functions satisfy an interpolatory condition
$\varphi_{n,r}(x_s^{(n+1)}) = d_r^{(n+1)} \delta_{r,s}, \qquad \text{for } r, s = 0, 1, \ldots, n,$
where $d_r^{(n+1)} \in R$.
(b) The parameters $x_r^{(n+1)}$ define a quadrature rule which is exact for polynomials of degree 2n, i.e.,
$\int_a^b p(t) \, d\sigma(t) = \sum_{r=0}^{n} \bigl(d_r^{(n+1)}\bigr)^{-1} p(x_r^{(n+1)}), \qquad \text{for all } p \in V_{2n},$
where $d_r^{(n+1)} = \varphi_{n,r}(x_r^{(n+1)})$.
(c) The polynomial $q_{n+1}(t) := \prod_{r=0}^{n} (t - x_r^{(n+1)})$ is quasi-orthogonal, i.e.,
$\langle q_{n+1}, t^k \rangle = 0, \qquad \text{for } k = 0, 1, \ldots, n - 1.$
(d) There exists a number $\tau_n$ with
$P_{n+1}(x_r^{(n+1)}) + \tau_n P_n(x_r^{(n+1)}) = 0, \qquad \text{for } r = 0, 1, \ldots, n.$
Proof. For convenience we drop the superscripts, i.e., $x_r = x_r^{(n+1)}$ and $d_r = d_r^{(n+1)}$.
We first show that (b) follows from (a). To this end we assume that

$d_r^{-1} \varphi_{n,r}(x_s) = d_r^{-1} \sum_{k=0}^{n} P_k(x_r) P_k(x_s) = \delta_{r,s},$

and conclude

$P_l(x_s) = \sum_{r=0}^{n} P_l(x_r) \delta_{r,s} = \sum_{r=0}^{n} P_l(x_r) \, d_r^{-1} \sum_{k=0}^{n} P_k(x_r) P_k(x_s) = \sum_{k=0}^{n} P_k(x_s) \sum_{r=0}^{n} d_r^{-1} P_l(x_r) P_k(x_r),$

for s = 0, 1, ..., n. Hence, the polynomial

$P_l(t) - \sum_{k=0}^{n} P_k(t) \sum_{r=0}^{n} d_r^{-1} P_l(x_r) P_k(x_r)$

has n + 1 zeros $x_s$. For l ≤ n this is only possible if

(2.17)  $\sum_{r=0}^{n} d_r^{-1} P_l(x_r) P_k(x_r) = \delta_{l,k}, \qquad \text{for } l, k = 0, 1, \ldots, n.$

On the other hand, the orthonormality of the $P_j$'s,

$\int_a^b P_l(t) P_k(t) \, d\sigma(t) = \delta_{l,k}, \qquad l, k = 0, 1, \ldots, n,$

implies that (2.17) constitutes a quadrature rule for polynomials of the form $P_l P_k$. Finally, observe that the product $P_l P_k$ has exact degree l + k, which clearly shows that

$V_{2n} = \operatorname{span}\{P_l P_k : l, k = 0, 1, \ldots, n\}.$

The proof for the statement that (a) follows from (b) is along the same lines and is therefore omitted here.
To show that (c) follows from (b) observe that

$\langle q_{n+1}, t^k \rangle = \int_a^b q_{n+1}(t) t^k \, d\sigma(t) = \sum_{r=0}^{n} d_r^{-1} q_{n+1}(x_r^{(n+1)}) \bigl(x_r^{(n+1)}\bigr)^k = 0,$

for $t^k q_{n+1} \in V_{2n}$, i.e., for k ≤ n − 1.
Conversely, let $p_{2n} \in V_{2n}$ be given. Then there exist polynomials $p_{n-1} \in V_{n-1}$ and $p_n \in V_n$ with

$p_{2n}(t) = p_{n-1}(t) q_{n+1}(t) + p_n(t).$
Now we make use of the quasi-orthogonality of $q_{n+1}$, and the fact that any polynomial of degree n can be integrated by an interpolatory quadrature rule based on n + 1 given knots, to obtain

(2.18)  $\int_a^b p_{2n}(t) \, d\sigma(t) = \int_a^b p_{n-1}(t) q_{n+1}(t) \, d\sigma(t) + \int_a^b p_n(t) \, d\sigma(t) = \int_a^b p_n(t) \, d\sigma(t) = \sum_{r=0}^{n} e_r^{-1} p_n(x_r^{(n+1)}) = \sum_{r=0}^{n} e_r^{-1} p_{2n}(x_r^{(n+1)}).$

It remains to show that $e_r^{-1} = d_r^{-1}$, r = 0, 1, ..., n. This, however, follows from the implication (b) ⇒ (a).
For the rest of the proof we refer to Chihara [1, Ch. II, Theorems 5.1, 5.3].

In particular, part (d) of the theorem above is quite useful for actually computing parameters $x_r^{(n+1)}$ which correspond to orthogonal scaling functions. Note that the interlacing property (cf. Lemma 2.1(b)) immediately implies that the polynomial
$P_{n+1}(t) + \tau_n P_n(t)$
has n + 1 real and simple zeros, where at most one of these zeros lies outside the "orthogonality interval" [a, b] (compare Chihara [1, Ch. I, Theorem 5.2]).
Probably the most important special case is provided by the choice $\tau_n = 0$.
Corollary 2.6. Let $y_r^{(n+1)}$, r = 0, 1, ..., n, denote the zeros of $P_{n+1}$ and let $\varphi_{n,r}(t) = \varphi_n(t; y_r^{(n+1)})$ denote the associated scaling functions (2.13). Then

$\langle \varphi_{n,r}, \varphi_{n,s} \rangle = \varphi_{n,r}(y_s^{(n+1)}) = c_r^{(n+1)} \delta_{r,s}, \qquad r, s = 0, 1, \ldots, n,$

where the $c_r^{(n+1)}$'s are given by the weights in the classical Gaussian quadrature rule

$\int_a^b p(t) \, d\sigma(t) = \sum_{r=0}^{n} \bigl(c_r^{(n+1)}\bigr)^{-1} p(y_r^{(n+1)}), \qquad \text{for all } p \in V_{2n+1}.$

We remark that the $\varphi_{n,r}$ may be viewed as fundamental polynomials of Lagrange interpolation with respect to the knots $y_r^{(n+1)}$.
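A quick numerical check of Corollary 2.6 (my addition, Legendre case): the Gram matrix of the scaling functions based on the zeros of P_{n+1} is diagonal, with diagonal entries equal to the reciprocals of the Gauss–Legendre weights.

    import numpy as np
    from numpy.polynomial import legendre as leg

    n = 7
    y, w = leg.leggauss(n + 1)                               # zeros of P_{n+1} and Gaussian weights
    A = np.array([[np.sqrt(k + 0.5) * leg.legval(x, np.eye(n + 1)[k]) for x in y]
                  for k in range(n + 1)])                    # A[k, r] = P_k(y_r), orthonormal Legendre
    G = A.T @ A                                              # Gram matrix <phi_{n,r}, phi_{n,s}>
    print(np.allclose(G, np.diag(1.0 / w)))                  # diagonal with entries c_r^{(n+1)} = 1/w_r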

2.3. Wavelets. In this section we define our wavelets and discuss some of their properties. To this end let

(2.19)  $W_n := V_{2n} \ominus V_n = \operatorname{span}\{P_{n+1}, P_{n+2}, \ldots, P_{2n}\}.$

Note that

(2.20)  $\dim W_n = n.$

The goal is to identify functions, our wavelets, which define a localized basis for $W_n$.
In accordance with the definition of the scaling functions (2.13) we define the wavelets, for r = 0, 1, ..., n − 1, in terms of kernel functions

(2.21)  $\psi_{n,r}(t) = \psi_n(t; z_r^{(n)}) := K_{2n}(t; z_r^{(n)}) - K_n(t; z_r^{(n)}) = \sum_{k=n+1}^{2n} P_k(z_r^{(n)}) P_k(t),$

for a suitable set of parameters

(2.22)  $z_0^{(n)} < z_1^{(n)} < \cdots < z_{n-1}^{(n)}.$

Note that the interlacing property implies $\psi_{n,r}(z_r^{(n)}) > 0$, for n > 1.
The next figure shows some typical wavelets. For a plot of the corresponding scaling functions we refer to Figure 2.2.

[Figure 2.7. Various wavelets of degree n = 32. (a) Legendre weight: w(t) = 1; ψ_32(t; 0.5); (b) Jacobi weight: w(t) = (1 − t)^{−0.5}(1 + t)^{−0.8}; ψ_32(t; 0.5); (c) Laguerre weight: w(t) = t^{1/2} exp(−t); ψ_32(t; 1)·w(t); (d) Hermite weight: w(t) = exp(−t²); ψ_32(t; 1)·w(t).]

The next theorem collects some properties of the wavelets ψ_{n,r}. Note that parts (a) and (b) are similar to the ones for the associated scaling functions (cf. Theorem 2.3). We stress that these properties do not depend on the particular choice of the parameter set $\{z_r^{(n)}\}_{r=0}^{n-1}$.

Theorem 2.8. Let $\psi_{n,r}(t) := \psi_n(t; z_r^{(n)})$ denote the wavelets with respect to the parameters $z_0^{(n)} < z_1^{(n)} < \cdots < z_{n-1}^{(n)}$.
(a) The inner product of wavelets may be evaluated as follows:
$\langle \psi_{n,r}, \psi_{n,s} \rangle = \psi_{n,r}(z_s^{(n)}), \qquad r, s = 0, 1, \ldots, n - 1.$
(b) The wavelet $\psi_{n,r}$ is localized around $z_r^{(n)}$:
$\left\| \frac{\psi_{n,r}}{\psi_{n,r}(z_r^{(n)})} \right\| = \min\bigl\{ \|p\| : p \in W_n,\ p(z_r^{(n)}) = 1 \bigr\}.$
(c) Let $\varphi_{n,r}(t) := \varphi_n(t; x_r^{(n+1)})$ (cf. (2.13)) denote the scaling functions with respect to the parameters $x_0^{(n+1)} < x_1^{(n+1)} < \cdots < x_n^{(n+1)}$. The wavelets and the scaling functions are orthogonal to each other:
$\langle \psi_{n,r}, \varphi_{n,s} \rangle = 0, \qquad r, s = 0, 1, \ldots, n - 1.$
Proof. For convenience we drop the superscript, $z_r := z_r^{(n)}$.
To verify (a), we show that the wavelets fulfill a reproducing property with respect to $W_n$. In fact, for $p \in W_n$ we have by (2.21) and (2.11)

(2.23)  $\langle \psi_{n,r}, p \rangle = \langle K_{2n}(\cdot\,; z_r) - K_n(\cdot\,; z_r), p \rangle = \langle K_{2n}(\cdot\,; z_r), p \rangle - \langle K_n(\cdot\,; z_r), p \rangle = p(z_r).$

(b) The proof is along the lines of the proof for the standard case of kernel polynomials (cf. Chihara [1, Ch. I, Theorem 7.3]). Let $p \in W_n$ with $p(z_r) = 1$, i.e.,

(2.24)  $p(t) = \sum_{k=n+1}^{2n} d_k P_k(t), \qquad p(z_r) = \sum_{k=n+1}^{2n} d_k P_k(z_r) = 1.$

The orthonormality of the $P_j$'s implies

$\langle p, p \rangle = \sum_{k=n+1}^{2n} d_k^2.$

This identity together with (2.24) and the Cauchy–Schwarz inequality (applied to the Euclidean inner product) yields

$1 = p^2(z_r) = \Bigl( (d_{n+1}, d_{n+2}, \ldots, d_{2n}) \, (P_{n+1}(z_r), P_{n+2}(z_r), \ldots, P_{2n}(z_r))^T \Bigr)^2 \le \|p\|^2 \sum_{k=n+1}^{2n} P_k^2(z_r).$

On the other hand we have

$\left\| \frac{\psi_{n,r}}{\psi_{n,r}(z_r)} \right\|^2 = \frac{\langle \psi_{n,r}, \psi_{n,r} \rangle}{\bigl( \sum_{k=n+1}^{2n} P_k^2(z_r) \bigr)^2} = \frac{1}{\sum_{k=n+1}^{2n} P_k^2(z_r)},$

which concludes the proof of statement (b). Part (c) follows directly from the definition of the participating functions.

It is worth noticing that, in accordance with the properties of the scaling functions (cf. Theorem 2.3(a)), the wavelets are orthogonal if, and only if, they satisfy an interpolatory condition $\psi_{n,r}(z_s^{(n)}) = 0$ for r ≠ s. We will present in Section 5 an example of orthogonal wavelets. In general, however, it is not clear whether there exist orthogonal wavelets for a given inner product.
Moreover, not every set $\{z_r^{(n)}\}_{r=0}^{n-1}$ leads to linearly independent wavelet functions. For example, let the $z_r^{(n)}$ be zeros of $P_s$, i.e.,

$P_s(z_r^{(n)}) = 0, \qquad r = 0, 1, \ldots, n - 1, \quad n + 1 \le s \le 2n.$

Then the wavelets

$\psi_n(t; z_r^{(n)}) = \sum_{k=n+1,\, k \ne s}^{2n} P_k(z_r^{(n)}) P_k(t), \qquad r = 0, 1, \ldots, n - 1,$

can span at best a space of dimension n − 1. However, we have the following theorem.
Theorem 2.9. Let $z_r^{(n)} = y_r^{(n)}$, r = 0, 1, ..., n − 1, denote the zeros of $P_n$ and let $\psi_{n,r}(t) = \psi_n(t; y_r^{(n)})$ denote the associated wavelets. Then

$W_n = \operatorname{span}\{\psi_{n,0}, \psi_{n,1}, \ldots, \psi_{n,n-1}\}.$

Proof. We show that the $\{\psi_{n,r}\}_{r=0}^{n-1}$ are linearly independent. To this end, assume that

$\sum_{r=0}^{n-1} \sigma_r \psi_{n,r}(t) \equiv 0.$

Since the $P_j$'s are orthogonal we have

$\langle P_n, P_i P_j \rangle \begin{cases} \ne 0 & \text{for } i + j = n, \\ = 0 & \text{for } i + j < n. \end{cases}$

This together with the reproducing property (2.11) implies, for i = 1, 2, ..., n,

$0 = \Bigl\langle \sum_{r=0}^{n-1} \sigma_r \psi_{n,r}, P_n P_i \Bigr\rangle = \sum_{r=0}^{n-1} \sigma_r \langle K_{2n}(\cdot\,; y_r^{(n)}) - K_n(\cdot\,; y_r^{(n)}), P_n P_i \rangle = -\sum_{r=0}^{n-1} \sigma_r \langle K_n(\cdot\,; y_r^{(n)}), P_n P_i \rangle$
$= -\sum_{r=0}^{n-1} \sigma_r \sum_{j=0}^{n-1} P_j(y_r^{(n)}) \langle P_n, P_i P_j \rangle = -\sum_{j=0}^{n-1} \langle P_n, P_i P_j \rangle \sum_{r=0}^{n-1} \sigma_r P_j(y_r^{(n)}) = -\sum_{j=n-i}^{n-1} \langle P_n, P_i P_j \rangle \sum_{r=0}^{n-1} \sigma_r P_j(y_r^{(n)}).$

In other words, we end up with a triangular homogeneous linear system in the unknowns $\sum_{r=0}^{n-1} \sigma_r P_j(y_r^{(n)})$. Since the entries on the main diagonal do not vanish, it has the unique solution

$\sum_{r=0}^{n-1} \sigma_r P_j(y_r^{(n)}) = 0, \qquad j = 0, 1, \ldots, n - 1.$

This, however, is only possible for $\sigma_0 = \sigma_1 = \cdots = \sigma_{n-1} = 0$, because the vectors

$v_{n-1}(y_r^{(n)}) = (P_0(y_r^{(n)}), P_1(y_r^{(n)}), \ldots, P_{n-1}(y_r^{(n)}))^T, \qquad r = 0, 1, \ldots, n - 1,$

are linearly independent as eigenvectors of $J_n$ (cf. Lemma 2.1(c)).
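The statement can also be checked numerically. The following sketch (mine, Legendre weight) builds the coefficient matrix B_n = (P_{k+n+1}(y_r^{(n)})), which will appear again in Section 3, and confirms that it has full rank.

    import numpy as np
    from numpy.polynomial import legendre as leg

    n = 9
    y = leg.leggauss(n)[0]                                   # zeros y_r^{(n)} of P_n

    def P(k, x):                                             # orthonormal Legendre P_k
        c = np.zeros(k + 1); c[k] = np.sqrt(k + 0.5)
        return leg.legval(x, c)

    B = np.array([[P(n + 1 + k, y[r]) for r in range(n)] for k in range(n)])
    print(np.linalg.matrix_rank(B), np.linalg.cond(B))       # rank n and a finite condition number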

2.4. Dual functions. For practical purposes it is important to get a hand on the dual functions $\tilde{\varphi}_{n,r} \in V_n$ and $\tilde{\psi}_{n,r} \in W_n$. They are uniquely determined by the following biorthogonality relations

(2.25)  $\langle \varphi_{n,s}, \tilde{\varphi}_{n,r} \rangle = \delta_{r,s}, \qquad r, s = 0, 1, \ldots, n,$
        $\langle \psi_{n,s}, \tilde{\psi}_{n,r} \rangle = \delta_{r,s}, \qquad r, s = 0, 1, \ldots, n - 1.$

Of course, here we have to assume that the wavelets $\psi_{n,r}$ constitute a basis for $W_n$. The next theorem shows that the dual functions are easy to identify. The proof follows directly from (2.25) and the reproducing properties (2.11) and (2.23), respectively.

Theorem 2.10. Let $V_n$ and $W_n$ be defined as in (2.4) and (2.19), respectively.
(a) The dual scaling functions $\tilde{\varphi}_{n,r} = \ell_r$ (cf. (2.16)) are the fundamental polynomials of Lagrange interpolation with respect to the given parameter set $x_0^{(n+1)}, x_1^{(n+1)}, \ldots, x_n^{(n+1)}$, i.e.,

$\tilde{\varphi}_{n,r} \in V_n \quad \text{and} \quad \tilde{\varphi}_{n,r}(x_s^{(n+1)}) = \delta_{r,s}, \qquad r, s = 0, 1, \ldots, n.$

(b) Let $\{\psi_{n,r}\}_{r=0}^{n-1}$ be a basis for $W_n$. Then the dual wavelet functions $\tilde{\psi}_{n,r} \in W_n$ are the fundamental polynomials of Lagrange interpolation with respect to the given parameter set $z_0^{(n)}, z_1^{(n)}, \ldots, z_{n-1}^{(n)}$, i.e.,

$\tilde{\psi}_{n,r} \in W_n \quad \text{and} \quad \tilde{\psi}_{n,r}(z_s^{(n)}) = \delta_{r,s}, \qquad r, s = 0, 1, \ldots, n - 1.$

For the actual computation of the dual functions we refer to the next section. We would like to point out that the dual functions satisfy a localization property as well, with respect to a discrete measure. More precisely, it holds (compare Theorem 2.3(b)) that

$\|\tilde{\varphi}_{n,r}\|_{n+1} = \min\bigl\{ \|p\|_{n+1} : p \in V_n,\ p(x_r^{(n+1)}) = 1 \bigr\},$

where

$\|p\|_{n+1} := \Bigl( \sum_{s=0}^{n} |p(x_s^{(n+1)})|^2 \Bigr)^{1/2}.$

Analogously, we have for the wavelet space (compare Theorem 2.8(b))

$\|\tilde{\psi}_{n,r}\|_{n} = \min\bigl\{ \|p\|_{n} : p \in W_n,\ p(z_r^{(n)}) = 1 \bigr\},$

where

$\|p\|_{n} := \Bigl( \sum_{s=0}^{n-1} |p(z_s^{(n)})|^2 \Bigr)^{1/2}.$

Finally, let us mention that the dual functions are in general not kernel functions with respect to the set of orthonormal polynomials $P_k$ as in (2.13) and (2.21). On the other hand, however, they do have a representation in terms of kernel polynomials with respect to the orthonormal polynomials defined by the corresponding discrete inner product.
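For a concrete parameter set one can compute the dual scaling functions as rows of A_n^{-1} (cf. Corollary 3.15 below) and confirm the biorthogonality (2.25). A sketch for the Legendre weight (my addition; the equally spaced parameter set is an arbitrary choice for illustration). Since everything is expanded in the orthonormal basis, the inner product reduces to a Euclidean one.

    import numpy as np
    from numpy.polynomial import legendre as leg

    n = 6
    x = np.linspace(-0.9, 0.9, n + 1)                        # an arbitrary parameter set (2.14)
    A = np.array([[np.sqrt(k + 0.5) * leg.legval(x_r, np.eye(n + 1)[k]) for x_r in x]
                  for k in range(n + 1)])                    # A[k, r] = P_k(x_r^{(n+1)})
    C_phi  = A                                               # columns: coefficients of phi_{n,r}
    C_dual = np.linalg.inv(A)                                # rows: coefficients of the dual (Lagrange) polynomials
    # biorthogonality (2.25): <phi_{n,s}, dual_r> = sum_k C_phi[k, s] * C_dual[r, k]
    print(np.allclose(C_dual @ C_phi, np.eye(n + 1)))        # True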

3. Two-scale relations and decomposition

The purpose of this section is to describe reconstruction and decomposition algorithms of given functions. The schemes are based on the space representation $V_{2n} = V_n \oplus W_n$. Clearly, a repeated application of this step would result in a multiresolution of a weighted L2-space.

3.1. Matrix notation. We start by noting that in view of (2.13) any function $f_n \in V_n$, represented by the vector $a^{(n)} := (a_0^{(n)}, \ldots, a_n^{(n)})^T$,

(3.1)  $f_n(t) = \sum_{r=0}^{n} a_r^{(n)} \varphi_n(t; x_r^{(n+1)}) = (P_0(t), \ldots, P_n(t)) \, A_n a^{(n)},$

may be written in terms of the matrix

(3.2)  $A_n := \bigl( P_k(x_r^{(n+1)}) \bigr)_{k,r=0,1,\ldots,n} = \begin{pmatrix} P_0(x_0^{(n+1)}) & \cdots & P_0(x_n^{(n+1)}) \\ \vdots & \ddots & \vdots \\ P_n(x_0^{(n+1)}) & \cdots & P_n(x_n^{(n+1)}) \end{pmatrix} = \bigl( v_n(x_0^{(n+1)}), \ldots, v_n(x_n^{(n+1)}) \bigr).$

Analogously, we obtain for $g_n \in W_n$, with $b^{(n)} := (b_0^{(n)}, \ldots, b_{n-1}^{(n)})^T$, the representation

(3.3)  $g_n(t) = \sum_{r=0}^{n-1} b_r^{(n)} \psi_n(t; z_r^{(n)}) = (P_{n+1}(t), \ldots, P_{2n}(t)) \, B_n b^{(n)},$

where

(3.4)  $B_n := \bigl( P_{k+n+1}(z_r^{(n)}) \bigr)_{k,r=0,1,\ldots,n-1} = \begin{pmatrix} P_{n+1}(z_0^{(n)}) & \cdots & P_{n+1}(z_{n-1}^{(n)}) \\ \vdots & \ddots & \vdots \\ P_{2n}(z_0^{(n)}) & \cdots & P_{2n}(z_{n-1}^{(n)}) \end{pmatrix}.$

It is the purpose of this section to study the matrices $A_n$ and $B_n$, respectively, in more detail.
Recall that by Theorem 2.3(c) the scaling functions are linearly independent, i.e.,

$\sum_{r=0}^{n} \sigma_r \varphi_n(t; x_r^{(n+1)}) = \sum_{r=0}^{n} \sigma_r \sum_{k=0}^{n} P_k(x_r^{(n+1)}) P_k(t) = \sum_{k=0}^{n} P_k(t) \sum_{r=0}^{n} \sigma_r P_k(x_r^{(n+1)}) = 0$

implies $\sigma_0 = \sigma_1 = \cdots = \sigma_n = 0$. We learn from the above equation that the vectors $v_n(x_r^{(n+1)})$, which are just the columns of $A_n$, are linearly independent as well. In fact, $A_n$ has to be regular as the coefficient matrix of the interpolation problem at the knots $x_r^{(n+1)}$ with respect to the space spanned by the $P_j$'s.
Corollary 3.11. Let $x_0^{(n+1)} < x_1^{(n+1)} < \cdots < x_n^{(n+1)}$ be given.
(a) The matrix $A_n$ is regular.
(b) The scaling functions $\varphi_n(t; x_r^{(n+1)})$, r = 0, 1, ..., n, are orthogonal and interpolatory (cf. Theorem 2.3(a)) if, and only if, $A_n^T A_n$ is a diagonal matrix.

In light of Corollary 2.6 it should come as no surprise that the matrix $A_n$ based on the zeros of $P_{n+1}$ is special.

Corollary 3.12. Let $x_r^{(n+1)} = y_r^{(n+1)}$, r = 0, 1, ..., n, denote the zeros of $P_{n+1}$ and let $c_r^{(n+1)}$ denote the weights of the Gaussian quadrature rule (cf. Corollary 2.6). Then the columns of $A_n$ are the eigenvectors of $J_{n+1}$. Moreover,

$A_n^T A_n = \bigl( v_n(y_k^{(n+1)})^T v_n(y_r^{(n+1)}) \bigr)_{k,r=0}^{n} = \operatorname{diag}\bigl( (c_0^{(n+1)})^{-1}, \ldots, (c_n^{(n+1)})^{-1} \bigr) =: D_n,$

and

$A_n^{-1} = D_n^{-1} A_n^T.$

To discuss properties of $B_n$ note that

$\sum_{r=0}^{n-1} \sigma_r \psi_n(t; z_r^{(n)}) = \sum_{r=0}^{n-1} \sigma_r \sum_{k=n+1}^{2n} P_k(z_r^{(n)}) P_k(t) = \sum_{k=n+1}^{2n} P_k(t) \sum_{r=0}^{n-1} \sigma_r P_k(z_r^{(n)}).$

Hence, the wavelets are linearly independent if, and only if, the matrix $B_n$ is regular. The next corollary follows from Section 2.3 and in particular from Theorem 2.9.

Corollary 3.13. Let $z_0^{(n)} < z_1^{(n)} < \cdots < z_{n-1}^{(n)}$ be given.
(a) The matrix $B_n$ is not necessarily regular.
(b) The wavelets $\psi_n(t; z_r^{(n)})$, r = 0, 1, ..., n − 1, are orthogonal and interpolatory (cf. Theorem 2.8(a)) if, and only if, $B_n^T B_n$ is a diagonal matrix.

In Theorem 2.9 we identified a set of parameters which leads to linearly independent wavelets or, equivalently, to a regular $B_n$.

Corollary 3.14. Let $z_r^{(n)} = y_r^{(n)}$, r = 0, 1, ..., n − 1, denote the zeros of $P_n$. Then $B_n$ is regular.
Proof. For later reference we offer a proof which is different from the one of Theorem 2.9. It provides a convenient expression for $B_n^{-1}$. Namely, a straightforward computation shows that

(3.5)  $B_n A_{n-1}^{-1} = B_n D_{n-1}^{-1} A_{n-1}^T = \Bigl( \sum_{r=0}^{n-1} c_r^{(n)} P_{k+n+1}(y_r^{(n)}) P_l(y_r^{(n)}) \Bigr)_{k,l=0,1,\ldots,n-1},$

where the matrix $A_{n-1}$ is based on the parameter set $y_r^{(n)}$. It turns out that this matrix is triangular with nonvanishing anti-main diagonal entries. To justify this statement observe that by Gaussian quadrature

$0 = \int_a^b P_{k+n+1}(t) P_l(t) \, d\sigma(t) = \sum_{r=0}^{n-1} c_r^{(n)} P_{k+n+1}(y_r^{(n)}) P_l(y_r^{(n)}),$

for $k + n + 1 + l \le 2n - 1$. It follows that $\det(B_n D_{n-1}^{-1} A_{n-1}^T) \ne 0$ and consequently $\det B_n \ne 0$.
Finally, let us summarize the relationships between the various introduced bases for $V_n$ and $W_n$, respectively.

Corollary 3.15. Let $A_n$ and $B_n$ be defined by (3.2) and by (3.4), respectively.
(a) For a given arbitrary parameter set $x_r^{(n+1)}$, r = 0, 1, ..., n, we have

$(\varphi_{n,0}, \ldots, \varphi_{n,n})^T = A_n^T (P_0, \ldots, P_n)^T,$
$(\tilde{\varphi}_{n,0}, \ldots, \tilde{\varphi}_{n,n})^T = A_n^{-1} (P_0, \ldots, P_n)^T = (A_n^T A_n)^{-1} (\varphi_{n,0}, \ldots, \varphi_{n,n})^T.$

(b) For a given parameter set $z_r^{(n)}$, r = 0, 1, ..., n − 1, such that $B_n$ is regular, we have

$(\psi_{n,0}, \ldots, \psi_{n,n-1})^T = B_n^T (P_{n+1}, \ldots, P_{2n})^T,$
$(\tilde{\psi}_{n,0}, \ldots, \tilde{\psi}_{n,n-1})^T = B_n^{-1} (P_{n+1}, \ldots, P_{2n})^T = (B_n^T B_n)^{-1} (\psi_{n,0}, \ldots, \psi_{n,n-1})^T.$

Recall that $A_n^T A_n$ and $B_n^T B_n$ are the Gram matrices for our scaling functions and wavelets, respectively.
3.2. Two-scale relations and decomposition. In this section we work out the relationship between the coefficient vectors $a^{(2n)}$, $a^{(n)}$, and $b^{(n)}$ in the so-called two-scale relation

(3.6)  $f_{2n}(t) = \sum_{r=0}^{2n} a_r^{(2n)} \varphi_{2n}(t; x_r^{(2n+1)}) = \sum_{r=0}^{n} a_r^{(n)} \varphi_n(t; x_r^{(n+1)}) + \sum_{r=0}^{n-1} b_r^{(n)} \psi_n(t; z_r^{(n)}) = f_n(t) + g_n(t).$

In view of (3.1) and (3.3) the above equation may be rewritten as follows:

$(P_0(t), \ldots, P_{2n}(t)) \, A_{2n} a^{(2n)} = (P_0(t), \ldots, P_n(t)) \, A_n a^{(n)} + (P_{n+1}(t), \ldots, P_{2n}(t)) \, B_n b^{(n)},$

which then implies

(3.7)  $A_{2n} a^{(2n)} = \begin{pmatrix} A_n & 0 \\ 0 & B_n \end{pmatrix} \begin{pmatrix} a^{(n)} \\ b^{(n)} \end{pmatrix}.$

The next theorem shows how to decompose a function from $V_{2n}$ into wavelets from $W_n$ and scaling functions from $V_n$ and states how to reverse this process. The proof follows directly from (3.7).

Theorem 3.16. Let the scaling functions $\varphi_n(t; x_r^{(n+1)})$, $\varphi_{2n}(t; x_r^{(2n+1)})$, the wavelets $\psi_n(t; z_r^{(n)})$ and the corresponding matrices $A_n$, $A_{2n}$, $B_n$ be based on arbitrary parameter sets.
(a) (Reconstruction) Let the coefficient vectors $a^{(n)}$ and $b^{(n)}$ in (3.6) be given. Then

$a^{(2n)} = A_{2n}^{-1} \begin{pmatrix} A_n & 0 \\ 0 & B_n \end{pmatrix} \begin{pmatrix} a^{(n)} \\ b^{(n)} \end{pmatrix}.$

(b) (Decomposition) Let the coefficient vector $a^{(2n)}$ in (3.6) be given. If $B_n$ is regular, then

$\begin{pmatrix} a^{(n)} \\ b^{(n)} \end{pmatrix} = \begin{pmatrix} A_n^{-1} & 0 \\ 0 & B_n^{-1} \end{pmatrix} A_{2n} a^{(2n)}.$
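A direct implementation of Theorem 3.16 (my sketch, not from the paper) for the Legendre weight; the parameter sets are the zeros of P_{2n+1}, P_{n+1} and P_n, as in the example at the end of this section.

    import numpy as np
    from numpy.polynomial import legendre as leg

    def P(k, x):                                              # orthonormal Legendre P_k
        c = np.zeros(k + 1); c[k] = np.sqrt(k + 0.5)
        return leg.legval(x, c)

    n = 5
    x2n = leg.leggauss(2 * n + 1)[0]                          # x_r^{(2n+1)}: zeros of P_{2n+1}
    xn  = leg.leggauss(n + 1)[0]                              # x_r^{(n+1)} : zeros of P_{n+1}
    zn  = leg.leggauss(n)[0]                                  # z_r^{(n)}   : zeros of P_n

    A2n = np.array([[P(k, x) for x in x2n] for k in range(2 * n + 1)])
    An  = np.array([[P(k, x) for x in xn]  for k in range(n + 1)])
    Bn  = np.array([[P(n + 1 + k, z) for z in zn] for k in range(n)])

    a2n = np.random.default_rng(0).normal(size=2 * n + 1)     # coefficients of some f_{2n} in V_{2n}
    c = A2n @ a2n                                             # coefficients of f_{2n} in the P_k-basis
    an, bn = np.linalg.solve(An, c[:n + 1]), np.linalg.solve(Bn, c[n + 1:])   # decomposition (b)
    a2n_rec = np.linalg.solve(A2n, np.concatenate([An @ an, Bn @ bn]))        # reconstruction (a)
    print(np.allclose(a2n_rec, a2n))                          # True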

Not surprisingly, the above formulae simplify in the orthogonal case. In particular, the inversion of matrices can be avoided. Note, however, that the orthogonality of the wavelets is only known for special cases (see Section 5).

Corollary 3.17. Let the wavelets $\psi_{n,r}(t) = \psi_n(t; z_r^{(n)})$ and the scaling functions $\varphi_{n,r}(t) = \varphi_n(t; x_r^{(n+1)})$, $\varphi_{2n,r}(t) = \varphi_{2n}(t; x_r^{(2n+1)})$ be given.
(a) (Reconstruction) Let the coefficient vectors $a^{(n)}$ and $b^{(n)}$ in (3.6) be given. If the $\varphi_{2n,r}$, r = 0, 1, ..., 2n, are orthogonal, then

$a_r^{(2n)} = \frac{1}{\varphi_{2n,r}(x_r^{(2n+1)})} \Bigl( \sum_{s=0}^{n} a_s^{(n)} \varphi_{n,s}(x_r^{(2n+1)}) + \sum_{s=0}^{n-1} b_s^{(n)} \psi_{n,s}(x_r^{(2n+1)}) \Bigr).$

(b) (Decomposition) Let the coefficient vector $a^{(2n)}$ in (3.6) be given. If the $\varphi_{n,r}$, r = 0, 1, ..., n, and the $\psi_{n,r}$, r = 0, 1, ..., n − 1, are orthogonal, then

$a_r^{(n)} = \frac{1}{\varphi_{n,r}(x_r^{(n+1)})} \sum_{s=0}^{2n} a_s^{(2n)} \varphi_{n,r}(x_s^{(2n+1)}), \qquad b_r^{(n)} = \frac{1}{\psi_{n,r}(z_r^{(n)})} \sum_{s=0}^{2n} a_s^{(2n)} \psi_{n,r}(x_s^{(2n+1)}).$

Proof. (a) The orthogonality and (3.6) imply

$a_r^{(2n)} = \frac{\langle f_n + g_n, \varphi_{2n,r} \rangle}{\langle \varphi_{2n,r}, \varphi_{2n,r} \rangle} = \frac{1}{\langle \varphi_{2n,r}, \varphi_{2n,r} \rangle} \Bigl( \sum_{s=0}^{n} a_s^{(n)} \langle \varphi_{n,s}, \varphi_{2n,r} \rangle + \sum_{s=0}^{n-1} b_s^{(n)} \langle \psi_{n,s}, \varphi_{2n,r} \rangle \Bigr).$

The remaining part follows from the reproducing property of $\varphi_{2n,r}$ (cf. (2.11)). Part (b) is along the same lines. Here we have

$a_r^{(n)} = \frac{\langle f_{2n}, \varphi_{n,r} \rangle}{\langle \varphi_{n,r}, \varphi_{n,r} \rangle} \quad \text{and} \quad b_r^{(n)} = \frac{\langle f_{2n}, \psi_{n,r} \rangle}{\langle \psi_{n,r}, \psi_{n,r} \rangle}.$

To decompose a given function f one first has to approximate f by a suitable function $f_{2n}$ in $V_{2n}$. Let us assume that the scaling functions $\varphi_{2n,r}(t) = \varphi_{2n}(t; y_r^{(2n+1)})$ are based on the zeros $y_r^{(2n+1)}$ of $P_{2n+1}$, i.e., they are orthogonal,

$\langle \varphi_{2n,r}, \varphi_{2n,s} \rangle = \varphi_{2n,r}(y_s^{(2n+1)}) = c_r^{(2n+1)} \delta_{r,s}.$

Then the approximation is typically done by an orthogonal projection

$f(t) \approx \sum_{r=0}^{2n} \langle f, \varphi_{2n,r} \rangle \, \frac{\varphi_{2n,r}(t)}{\langle \varphi_{2n,r}, \varphi_{2n,r} \rangle},$

or by an interpolatory process

$f(t) \approx \sum_{r=0}^{2n} f(y_r^{(2n+1)}) \, \frac{\varphi_{2n,r}(t)}{\langle \varphi_{2n,r}, \varphi_{2n,r} \rangle}.$

Actually, if one computes $\langle f, \varphi_{2n,r} \rangle$ by Gaussian quadrature, both approaches provide the same approximation. The proof of the next lemma follows directly from Corollary 2.6.

Lemma 3.18. Let $y_r^{(2n+1)}$, r = 0, 1, ..., 2n, denote the zeros of $P_{2n+1}$ and let $\varphi_{2n,r}$ denote the associated scaling functions (2.13). Furthermore, let f denote a given smooth function. Then the Gaussian quadrature of $f \varphi_{2n,r}$ simplifies to

$\langle f, \varphi_{2n,r} \rangle \approx \sum_{s=0}^{2n} \bigl( c_s^{(2n+1)} \bigr)^{-1} f(y_s^{(2n+1)}) \varphi_{2n,r}(y_s^{(2n+1)}) = f(y_r^{(2n+1)}).$
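In this Gauss-node setting the interpolatory approximation is trivial to set up: the coefficient of $\varphi_{2n,r}$ is the point value $f(y_r^{(2n+1)})$ divided by $c_r^{(2n+1)}$, i.e., multiplied by the Gaussian weight. A small Legendre sketch (mine; n is kept small here, while the figures below use n = 128):

    import numpy as np
    from numpy.polynomial import legendre as leg

    n = 16
    N = 2 * n                                                 # approximation space V_{2n}
    y, w = leg.leggauss(N + 1)                                # zeros of P_{2n+1} and Gaussian weights
    f = lambda t: np.maximum(0.0, 1.0 - np.abs(t) / 0.01)     # hat function of the example below
    a = w * f(y)                                              # a_r^{(2n)} = f(y_r^{(2n+1)}) / c_r^{(2n+1)},  c_r = 1/w_r

    # V[k, r] = P_k(y_r): columns are the orthonormal-Legendre coefficient vectors of phi_{2n,r}
    V = np.array([[np.sqrt(k + 0.5) * leg.legval(yr, np.eye(N + 1)[k]) for yr in y]
                  for k in range(N + 1)])
    coeff_f2n = V @ a                                         # f_{2n} = sum_r a_r phi_{2n,r} in the P_k-basis
    vals_at_nodes = V.T @ coeff_f2n                           # f_{2n}(y_s) = sum_k P_k(y_s) coeff_k
    print(np.allclose(vals_at_nodes, f(y)))                   # interpolatory property of Lemma 3.18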

Let us finish this section with an example. Here we decompose a piecewise linear "hat function" f which is zero on [−1, 1] \ (−0.01, 0.01) and one at the origin (compare Figure 3.19(a)). The scaling function spaces $V_{2n}$ and $V_n$ were defined by the zeros of $P_{2n+1}$ and $P_{n+1}$, respectively. The wavelet space $W_n$ was defined by the zeros of $P_n$, which ensures that the $\psi_{n,r}$'s constitute a basis. The approximation $f_{2n}$ of f in $V_{2n}$ was computed by the interpolatory process described above.
It is important to note that the underlying numerical computations make use of the properties of orthogonal polynomials. In particular, we computed the corresponding parameter sets as eigenvalues of the associated Jacobi matrix (cf. Lemma 2.1(c)) and the resulting polynomials were evaluated by means of their three-term recurrence relations (2.5).
Figure 3.19 shows the decomposition $f_{2n} = f_n + g_n$ with respect to the Chebyshev weight function of the first kind $w(t) = (1 - t^2)^{-1/2}$, whereas Figure 3.20 shows the same decomposition but with respect to the modified weight function $w(t) = t^2 (1 - t^2)^{-1/2}$ (explicit expressions for the corresponding orthogonal polynomials may be found in Chihara [1, p. 155]). Here, the time localization is considerably improved.
[Figure 3.19. Decomposition with respect to $w(t) = (1 - t^2)^{-1/2}$ for n = 128. (a) Given function: f; (b) projection on $V_{2n}$: $f_{2n}$; decomposition: (c) $f_n \in V_n$; (d) $g_n \in W_n$.]

[Figure 3.20. Decomposition with respect to $w(t) = t^2 (1 - t^2)^{-1/2}$ for n = 128. (a) Given function: f; (b) projection on $V_{2n}$: $f_{2n}$; decomposition: (c) $f_n \in V_n$; (d) $g_n \in W_n$.]

Some comments are in order. The purpose of the example is to show that the choice of the weight function may have quite some effect on the decomposition. Here, we designed the given function f such that in both cases the approximation $f_{2n}$ consists of only one scaling function, that is, $f_{2n}$ is "maximally localized" with respect to the chosen weight function (cf. Theorem 2.3(b)). It is interesting to note that for both weights $g_n$ has a full expansion into wavelets, i.e., $b_r^{(n)} \ne 0$, r = 0, 1, ..., n − 1.

4. Stability and translation invariance

The scaling functions and wavelets introduced above do not provide a multiresolution in the classical sense. However, there are some relationships which will be pointed out in this section. To start with, let us mention that we also have a sequence of successive approximation spaces, i.e.,

$V_0 \subset V_1 \subset \cdots \subset V_{2^j} \subset V_{2^{j+1}} \subset \cdots.$

Furthermore, from the classical theory of orthogonal polynomials we have

$\operatorname{clos}_{L^2(w)} \bigcup_{j=0}^{\infty} V_{2^j} = L^2(w),$

provided that the underlying distribution function has infinitely many points of increase. Because we deal with finite dimensional spaces $V_{2^j}$, j ≥ 0, we omit the axiom

$\bigcap_j V_{2^j} = \{0\}.$

The dilation axiom essentially changes into a condition for the frequencies:

$f \in V_n \iff \langle f, P_k \rangle = 0 \quad \text{for all } k > n.$

Finally, in the next subsection we discuss in greater detail the fourth axiom of a classical multiresolution analysis, namely that the span of all integer translates of a given scaling function yields a Riesz basis for the corresponding space.

4.1. Riesz stability. Here we establish a two-sided estimate between the weighted $L^2$-norm $\|f_n\|$ ($\|g_n\|$) (cf. (2.2)) of an arbitrary function $f_n \in V_n$ ($g_n \in W_n$) and the Euclidean norm of the coefficients of $f_n$ ($g_n$) with respect to the basis of scaling functions (wavelets). The Euclidean norm of a vector $a^{(n)} \in R^{n+1}$ is defined as usual by $\|a^{(n)}\|_2 = \bigl( \sum_{r=0}^{n} (a_r^{(n)})^2 \bigr)^{1/2}$, with corresponding spectral norm $\|A\|_2$.

Theorem 4.21. Let $A_n$ (cf. (3.2)) and $B_n$ (cf. (3.4)) denote the matrices associated with the parameter sets $x_r^{(n+1)}$ and $z_r^{(n)}$, respectively. Furthermore, let $\varphi_{n,r}$ and $\psi_{n,r}$ denote the corresponding scaling functions and wavelets.
(a) For $f_n = \sum_{r=0}^{n} a_r^{(n)} \varphi_{n,r}$, we have

$\frac{1}{\|A_n^{-1}\|_2} \, \|a^{(n)}\|_2 \le \|f_n\| \le \|A_n\|_2 \, \|a^{(n)}\|_2.$

(b) For $g_n = \sum_{r=0}^{n-1} b_r^{(n)} \psi_{n,r}$, we have

$\|g_n\| \le \|B_n\|_2 \, \|b^{(n)}\|_2,$

and if in addition $B_n$ is regular, then also

$\frac{1}{\|B_n^{-1}\|_2} \, \|b^{(n)}\|_2 \le \|g_n\|.$
[Figure 4.22. Scaling functions of degree n = 32 with respect to the Legendre weight w(t) = 1 and the parameters $x_0^{(n+1)} = -0.2$ (solid line) and $x_1^{(n+1)} = 0.5$ (dashed line).]

Proof. By Parseval's equation we obtain for $f_n \in V_n$

$\|f_n\|^2 = \Bigl\| \sum_{r=0}^{n} a_r^{(n)} \sum_{k=0}^{n} P_k(x_r^{(n+1)}) P_k \Bigr\|^2 = \sum_{k=0}^{n} \Bigl( \sum_{r=0}^{n} a_r^{(n)} P_k(x_r^{(n+1)}) \Bigr)^2 = \|A_n a^{(n)}\|_2^2.$

Now (a) and analogously (b) follow by standard arguments.

Hence, not surprisingly, the Riesz stability can be measured by the spectral condition number of $A_n$,

(4.1)  $\|A_n^{-1}\|_2 \, \|A_n\|_2 = \sqrt{\frac{\lambda_{\max}(A_n^T A_n)}{\lambda_{\min}(A_n^T A_n)}},$

and by the spectral condition number of $B_n$,

(4.2)  $\|B_n^{-1}\|_2 \, \|B_n\|_2 = \sqrt{\frac{\lambda_{\max}(B_n^T B_n)}{\lambda_{\min}(B_n^T B_n)}},$

respectively. Here $\lambda_{\max}$ and $\lambda_{\min}$ denote the extreme eigenvalues of the corresponding matrices.
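These condition numbers are easy to inspect numerically; for instance, for the Legendre weight with the Gauss-type parameter sets used throughout (a sketch, not from the paper):

    import numpy as np
    from numpy.polynomial import legendre as leg

    def P(k, x):                                              # orthonormal Legendre P_k
        c = np.zeros(k + 1); c[k] = np.sqrt(k + 0.5)
        return leg.legval(x, c)

    n = 32
    xn = leg.leggauss(n + 1)[0]                               # scaling parameters: zeros of P_{n+1}
    zn = leg.leggauss(n)[0]                                   # wavelet parameters: zeros of P_n
    An = np.array([[P(k, x) for x in xn] for k in range(n + 1)])
    Bn = np.array([[P(n + 1 + k, z) for z in zn] for k in range(n)])
    print(np.linalg.cond(An), np.linalg.cond(Bn))             # spectral condition numbers (4.1), (4.2)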
4.2. Generalized translation. Usually, in a multiresolution analysis time localization is realized by taking shifts of one given function. Also, Euler's functional equation is used to advantage: a shift in the time domain is equivalent to a multiplication by an exponential in the frequency domain. In this section, we briefly outline how to generalize this concept to the present polynomial approach.

[Figure 4.23. Wavelets of degree n = 32 with respect to the Legendre weight w(t) = 1 and the parameters $z_0^{(n)} = -0.2$ (solid line) and $z_1^{(n)} = 0.5$ (dashed line).]

Here, we restrict ourselves to the Jacobi polynomials $P_k^{(\alpha,\beta)}$. These polynomials are orthogonal with respect to the weight $w(t) = (1 - t)^{\alpha} (1 + t)^{\beta}$, $-1 < t < 1$.
For a given f in this weighted $L^2$-space with Fourier–Jacobi coefficients

$\hat{f}(k) = \int_{-1}^{1} f(t) P_k^{(\alpha,\beta)}(t) (1 - t)^{\alpha} (1 + t)^{\beta} \, dt$

we consider the operator $S_\lambda : L^2(w) \to L^2(w)$, $-1 \le \lambda \le 1$, defined by a multiplication in the frequency domain

$(S_\lambda f)^{\wedge}(k) := \hat{f}(k) \, \frac{P_k^{(\alpha,\beta)}(\lambda)}{P_k^{(\alpha,\beta)}(1)}.$

For $-1 < \beta \le \alpha$, $-1 \le \alpha + \beta$, the operator $S_\lambda$ has the properties (see Gasper [5])

$\|S_\lambda f\| \le C \|f\|, \qquad \text{for all } \lambda \in (-1, 1),$

and

$\lim_{\lambda \to 1^-} \|S_\lambda f - f\| = 0.$

Hence $S_\lambda$ may be seen as a generalized translation operator.

In this context, it is possible to recover our scaling functions and wavelets, respectively, as generalized translations of a given function. More precisely, with

$f_V(t) := \varphi_n(t; 1) = \sum_{k=0}^{n} P_k^{(\alpha,\beta)}(1) P_k^{(\alpha,\beta)}(t)$

it is straightforward to verify that

$\varphi_n(\cdot\,; \lambda) = S_\lambda f_V.$

Here, we used that

(4.3)  $\hat{f}_V(k) = P_k^{(\alpha,\beta)}(1) \ \text{ for } 0 \le k \le n, \qquad \hat{f}_V(k) = 0 \ \text{ for } k > n.$

Analogously, we have

$\psi_n(\cdot\,; \lambda) = S_\lambda f_W$

with

(4.4)  $f_W(t) := \psi_n(t; 1) = \sum_{k=n+1}^{2n} P_k^{(\alpha,\beta)}(1) P_k^{(\alpha,\beta)}(t).$

We would like to mention that modifications of (4.3) and of (4.4) which at least preserve $\operatorname{supp} \hat{f}_V = \{0, \ldots, n\}$ and $\operatorname{supp} \hat{f}_W = \{n + 1, \ldots, 2n\}$, respectively, affect the algorithms of Section 3 only by the multiplication of $A_n$ and $B_n$ by certain regular diagonal matrices.
The two figures illustrate that scaling functions (wavelets) with respect to different parameters look almost like a shift of each other.
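For the Legendre case α = β = 0, where $P_k^{(0,0)}(1) = 1$, the generalized translation acts on the orthonormal coefficient sequence simply by multiplication with $L_k(\lambda)$; the sketch below (my addition, not from the paper) applies $S_\lambda$ to $f_V = \varphi_n(\cdot\,; 1)$ and recovers the coefficients of $\varphi_n(\cdot\,; \lambda)$, as claimed above.

    import numpy as np
    from numpy.polynomial import legendre as leg

    n, lam = 10, 0.4
    # orthonormal-Legendre coefficients of f_V = phi_n(.; 1): fhat_V(k) = P_k(1) = sqrt(k + 1/2)
    fhat_V = np.array([np.sqrt(k + 0.5) for k in range(n + 1)])
    # S_lambda multiplies the k-th coefficient by P_k(lam)/P_k(1) = L_k(lam) for the Legendre weight
    mult = np.array([leg.legval(lam, np.eye(n + 1)[k]) for k in range(n + 1)])
    shifted = mult * fhat_V
    # the result coincides with the coefficient sequence of phi_n(.; lam)
    phihat = np.array([np.sqrt(k + 0.5) * leg.legval(lam, np.eye(n + 1)[k]) for k in range(n + 1)])
    print(np.allclose(shifted, phihat))                       # True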

5. Examples

In this section we want to discuss two examples in more detail. Both belong to the class of Chebyshev weights, i.e., Jacobi weights with |α| = |β| = 1/2. These weights are of particular interest, because here one can handle the computations with the help of fast algorithms based on the Discrete Cosine Transform (see, for example, Tasche et al. [9], [10], [7]).
Let us start with the Chebyshev weight of the first kind

$w(t) = \frac{1}{\sqrt{1 - t^2}}, \qquad t \in (-1, 1).$

The corresponding orthonormal polynomials, the Chebyshev polynomials of the first kind, can conveniently be written as

$P_n(t) = \sqrt{\frac{1}{\pi}} \cdot \begin{cases} \sqrt{2} \cos n\theta & \text{if } n > 0, \\ 1 & \text{if } n = 0, \end{cases}$

in terms of $t = \cos\theta$, $0 \le \theta \le \pi$. If we take as parameter set for the scaling functions the zeros of $P_{n+1}$ (cf. Corollary 2.6),

$y_r^{(n+1)} = \cos\frac{(2r + 1)\pi}{2n + 2}, \qquad r = 0, \ldots, n,$

then

$\varphi_{n,r}(t) = \frac{1}{\pi} + \frac{2}{\pi} \sum_{k=1}^{n} \cos\frac{k(2r + 1)\pi}{2n + 2} \cos k\theta$

and

$A_n = \sqrt{\frac{2}{\pi}} \, \operatorname{diag}\Bigl( \frac{\sqrt{2}}{2}, 1, 1, \ldots, 1 \Bigr) \cdot \Bigl( \cos\frac{k(2r + 1)\pi}{2n + 2} \Bigr)_{k,r=0,\ldots,n}.$

With the help of trigonometric identities it is easy to see that

(5.1)  $A_n^{-1} = \frac{\pi}{n + 1} A_n^T.$
Following Corollary 3.12 we notice that this is the only situation where Gaussian quadrature coincides with Chebyshev quadrature (i.e., all weights are equal to $\pi/(n + 1)$).
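Identity (5.1) is easy to confirm numerically; the following sketch (mine, not part of the paper) builds A_n from the explicit cosine entries above and checks that A_n^{-1} = (π/(n + 1)) A_n^T.

    import numpy as np

    n = 16
    k = np.arange(n + 1)[:, None]                             # row index k = 0, ..., n
    r = np.arange(n + 1)[None, :]                             # column index r = 0, ..., n
    A = np.sqrt(2.0 / np.pi) * np.cos(k * (2 * r + 1) * np.pi / (2 * n + 2))
    A[0, :] *= np.sqrt(0.5)                                   # first row: P_0 = 1/sqrt(pi)
    print(np.allclose(np.linalg.inv(A), np.pi / (n + 1) * A.T))   # identity (5.1)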
Analogously, we choose for the wavelets the zeros $y_r^{(n)}$ of $P_n$ as the set of parameters. Then

$\psi_{n,r}(t) = \frac{2}{\pi} \sum_{k=n+1}^{2n} \cos\frac{k(2r + 1)\pi}{2n} \cos k\theta, \qquad r = 0, \ldots, n - 1,$

and

$B_n = \sqrt{\frac{2}{\pi}} \Bigl( \cos\frac{(n + 1 + k)(2r + 1)\pi}{2n} \Bigr)_{k,r=0,\ldots,n-1} = \sqrt{\frac{2}{\pi}} \Bigl( (-1)^{r+1} \sin\frac{(k + 1)(2r + 1)\pi}{2n} \Bigr)_{k,r=0,\ldots,n-1}.$

In this case we know from Corollary 3.14 that the wavelets are linearly independent. However, the following lemma shows that they are not orthogonal to each other (compare Corollary 3.13(b)).
Lemma 5.24. The inverse of $B_n$ is given by

(5.2)  $B_n^{-1} = \frac{\pi}{n} \, B_n^T \cdot \operatorname{diag}\Bigl( 1, 1, \ldots, 1, \frac{1}{2} \Bigr),$

with

$B_n^T B_n = \Bigl( \frac{n}{\pi} \, \delta_{k,r} + \frac{1}{\pi} \Bigr)_{k,r=0,\ldots,n-1}.$

Proof. The first assertion is equivalent to

$\delta_{r,s} = \frac{2}{n} \, (-1)^{r+s} {\sum_{k=0}^{n-1}}' \sin\frac{(k + 1)(2r + 1)\pi}{2n} \sin\frac{(k + 1)(2s + 1)\pi}{2n},$

where the prime indicates that the last term in the sum has to be divided by 2. Having performed an index shift, we obtain for the right-hand side by the addition formula

$\frac{(-1)^{r+s}}{n} {\sum_{k=1}^{n}}' \Bigl( \cos\frac{k(r - s)\pi}{n} - \cos\frac{k(r + s + 1)\pi}{n} \Bigr).$

The rest of the proof is just an application of the well-known summation formula for Dirichlet kernels

(5.3)  $\frac{1}{2} + {\sum_{k=1}^{n}}' \cos\frac{k\ell\pi}{n} = \begin{cases} 0 & \text{if } |\ell| = 1, \ldots, 2n - 1, \\ n & \text{if } \ell = 0. \end{cases}$

The second statement follows directly from the first one.
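Similarly, (5.2) and the Gram matrix formula of Lemma 5.24 can be verified directly (again my sketch, not part of the paper):

    import numpy as np

    n = 16
    k = np.arange(n)[:, None]                                  # k = 0, ..., n-1
    r = np.arange(n)[None, :]                                  # r = 0, ..., n-1
    B = np.sqrt(2.0 / np.pi) * np.cos((n + 1 + k) * (2 * r + 1) * np.pi / (2 * n))
    D = np.diag(np.r_[np.ones(n - 1), 0.5])
    print(np.allclose(np.linalg.inv(B), np.pi / n * B.T @ D))               # identity (5.2)
    print(np.allclose(B.T @ B, (n / np.pi) * np.eye(n) + 1.0 / np.pi))      # Gram matrix of Lemma 5.24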



We conclude this example with the computation of the Riesz stability constants (cf. (4.1), (4.2)). With (5.1) we obtain the best possible bounds for the scaling functions

$\|A_n^{-1}\|_2 \, \|A_n\|_2 = 1,$

whereas for the wavelets we deduce from (5.2) that

$\|B_n^{-1}\|_2 \, \|B_n\|_2 = \sqrt{2}.$
Let us now consider as a second example the Chebyshev weight of the second kind, i.e.,

$w(t) = \sqrt{1 - t^2}, \qquad t \in (-1, 1),$

and the corresponding orthonormal polynomials

$P_n(t) = \sqrt{\frac{2}{\pi}} \, \frac{\sin(n + 1)\theta}{\sin\theta}.$

Again we take as parameter set for the scaling functions the zeros of $P_{n+1}$, i.e.,

$y_r^{(n+1)} = \cos\frac{(r + 1)\pi}{n + 2}, \qquad r = 0, \ldots, n.$

This choice leads to

$\varphi_{n,r}(t) = \frac{2}{\pi} \sum_{k=0}^{n} \frac{\sin\frac{(k+1)(r+1)\pi}{n+2}}{\sin\frac{(r+1)\pi}{n+2}} \, \frac{\sin(k + 1)\theta}{\sin\theta}$

and

$A_n = \sqrt{\frac{2}{\pi}} \Biggl( \frac{\sin\frac{(k+1)(r+1)\pi}{n+2}}{\sin\frac{(r+1)\pi}{n+2}} \Biggr)_{k,r=0,\ldots,n}.$

For completeness we mention the result on the Gaussian quadrature

(5.4)  $A_n^T A_n = \operatorname{diag}\Biggl( \frac{n + 2}{\pi \sin^2\frac{(r+1)\pi}{n+2}} \Biggr)_{r=0,\ldots,n}.$

Choosing the zeros $y_r^{(n)}$ of $P_n$ as the set of parameters for the wavelets, we obtain

$\psi_{n,r}(t) = \frac{2}{\pi} \sum_{k=n+1}^{2n} \frac{\sin\frac{(k+1)(r+1)\pi}{n+1}}{\sin\frac{(r+1)\pi}{n+1}} \, \frac{\sin(k + 1)\theta}{\sin\theta}, \qquad r = 0, \ldots, n - 1,$

and

$B_n = \sqrt{\frac{2}{\pi}} \Biggl( \frac{\sin\frac{(k+n+2)(r+1)\pi}{n+1}}{\sin\frac{(r+1)\pi}{n+1}} \Biggr)_{k,r=0,\ldots,n-1}.$

The next lemma shows that this time we have an orthogonal set of wavelets.

Lemma 5.25. For the above defined matrix $B_n$ we have

(5.5)  $B_n^T B_n = \operatorname{diag}\Biggl( \frac{n + 1}{\pi \sin^2\frac{(r+1)\pi}{n+1}} \Biggr)_{r=0,\ldots,n-1}.$

Proof. The (r, s) element of $B_n^T B_n$ reads

$\frac{2}{\pi} \sum_{k=0}^{n-1} \frac{\sin\frac{(k+n+2)(r+1)\pi}{n+1}}{\sin\frac{(r+1)\pi}{n+1}} \cdot \frac{\sin\frac{(k+n+2)(s+1)\pi}{n+1}}{\sin\frac{(s+1)\pi}{n+1}}.$

It may be simplified to

$\frac{(-1)^{r+s}}{\pi \sin\frac{(r+1)\pi}{n+1} \sin\frac{(s+1)\pi}{n+1}} \sum_{k=1}^{n} \Bigl( \cos\frac{k(r - s)\pi}{n + 1} - \cos\frac{k(r + s + 2)\pi}{n + 1} \Bigr).$

Now the statement follows from (5.3).

Again, we finish by computing the quotient of the Riesz bounds. Here, we obtain from (5.4) that

$\|A_n^{-1}\|_2 \, \|A_n\|_2 = \begin{cases} \bigl( \sin\frac{\pi}{n+2} \bigr)^{-1} & \text{for even } n, \\ \frac{1}{2} \bigl( \sin\frac{\pi}{2n+4} \bigr)^{-1} & \text{for odd } n, \end{cases}$

and from (5.5) that

$\|B_n^{-1}\|_2 \, \|B_n\|_2 = \begin{cases} \bigl( \sin\frac{\pi}{n+1} \bigr)^{-1} & \text{for odd } n, \\ \frac{1}{2} \bigl( \sin\frac{\pi}{2n+2} \bigr)^{-1} & \text{for even } n. \end{cases}$

Let us summarize these two examples. For the Chebyshev weight of the first kind we have constructed orthogonal scaling functions and nonorthogonal wavelets for which the quotient of the associated Riesz bounds is uniformly bounded. On the other hand, for the Chebyshev weight of the second kind we constructed both orthogonal scaling functions and orthogonal wavelets for which the quotient of the Riesz constants grows linearly in n.

Acknowledgment
Part of this work was done while the second author was visiting the Institute of
Mathematics at the Medical University of Lübeck. He is grateful for the support
and the warm hospitality at this institute.

References

1. T. S. Chihara, An introduction to orthogonal polynomials, Gordon and Breach, New York, London, Paris, 1978. MR 58:1979
2. C. K. Chui and H. N. Mhaskar, On trigonometric wavelets, Constr. Approx. 9 (1993), 167–190. MR 94c:42002
3. P. J. Davis, Interpolation & approximation, Blaisdell, Waltham, Massachusetts, 1963. MR 28:393
4. B. Fischer, Polynomial based iteration methods for symmetric linear systems, Wiley–Teubner, Chichester, 1996.
5. G. Gasper, Banach algebras for Jacobi series and positivity of a kernel, Ann. of Math. 95 (1972), 261–280. MR 46:9634
6. T. Kilgore and J. Prestin, Polynomial wavelets on the interval, Constr. Approx. 12 (1996), 95–110. MR 97b:41003
7. G. Plonka, K. Selig, and M. Tasche, On the construction of wavelets on a bounded interval, Adv. Comp. Math. 4 (1995), 357–388. MR 96m:42057
8. G. Szegö, Orthogonal polynomials, revised ed., AMS Colloquium Publications XXIII, American Mathematical Society, New York, 1959. MR 21:5029
9. M. Tasche, Fast algorithms for discrete Chebyshev–Vandermonde transforms and applications, Numer. Alg. 5 (1993), 453–464. CMP 94:07
10. M. Tasche, Polynomial wavelets on [−1, 1], Approximation Theory, Wavelets and Applications (S. P. Singh, ed.), Kluwer Academic Publ., Dordrecht, 1995, pp. 497–512. MR 96c:42073

Institut für Mathematik, Medizinische Universität zu Lübeck, D-23560 Lübeck, Germany
E-mail address: fischer@informatik.mu-luebeck.de

Fachbereich Mathematik, Universität Rostock, D-18051 Rostock, Germany
E-mail address: prestin@mathematik.uni-rostock.d400.de
