Wavelets and Wavelet Transforms
Collection Editor:
C. Sidney Burrus
Authors:
C. Sidney Burrus
Ramesh Gopinath
Haitao Guo
Online:
< http://cnx.org/content/col11454/1.6/ >
OpenStax-CNX
This selection and arrangement of content as a collection is copyrighted by C. Sidney Burrus. It is licensed under the
Creative Commons Attribution License 4.0 (http://creativecommons.org/licenses/by/4.0/).
Collection structure revised: August 6, 2015
PDF generated: September 24, 2015
For copyright and attribution information for the modules contained in this collection, see p. 308.
Table of Contents
1 Preface
2 Introduction to Wavelets
3 A multiresolution formulation of Wavelet Systems
4 Filter Banks and the Discrete Wavelet Transform
5 Bases, Orthogonal Bases, Biorthogonal Bases, Frames, Tight Frames, and Unconditional Bases
6 The Scaling Function and Scaling Coefficients, Wavelet and Wavelet Coefficients
7 Regularity, Moments, and Wavelet System Design
8 Generalizations of the Basic Multiresolution Wavelet System
9 Filter Banks and Transmultiplexers
10 Calculation of the Discrete Wavelet Transform
11 Wavelet-Based Signal Processing and Applications
12 Summary Overview
13 Appendix A
14 Appendix B
15 Appendix C
16 Bibliography
17 References
Bibliography
Index
Attributions
Preface
This book develops the ideas behind and properties of wavelets and shows how they can be used as analytical
tools for signal processing, numerical analysis, and mathematical modeling. We try to present this in a way
that is accessible to the engineer, scientist, and applied mathematician both as a theoretical approach and
as a potentially practical method to solve problems. Although the roots of this subject go back some time,
the modern interest and development have a history of only a few decades.
The early work was in the 1980's by Morlet, Grossmann, Meyer, Mallat, and others, but it was the paper
by Ingrid Daubechies [102] in 1988 that caught the attention of the larger applied mathematics communities
in signal processing, statistics, and numerical analysis. Much of the early work took place in France [97],
[376] and the USA [102], [451], [116], [445]. As in many new disciplines, the first work was closely tied to a
particular application or traditional theoretical framework. Now we are seeing the theory abstracted from
application and developed on its own and seeing it related to other parallel ideas. Our own background and
interests in signal processing certainly influence the presentation of this book.
The goal of most modern wavelet research is to create a set of basis functions (or general expansion
functions) and transforms that will give an informative, efficient, and useful description of a function or signal and allow more effective and efficient processing. If the signal is represented as a function of time, wavelets provide efficient localization in both time and frequency or scale. Another central idea is that of
multiresolution where the decomposition of a signal is in terms of the resolution of detail.
For the Fourier series, sinusoids are chosen as basis functions, then the properties of the resulting expan-
sion are examined. For wavelet analysis, one poses the desired properties and then derives the resulting basis
functions. An important property of the wavelet basis is providing a multiresolution analysis. For several
reasons, it is often desired to have the basis functions orthonormal. Given these goals, you will see aspects
of correlation techniques, Fourier transforms, short-time Fourier transforms, discrete Fourier transforms,
Wigner distributions, filter banks, subband coding, and other signal expansion and processing methods in
the results.
Wavelet-based analysis is an exciting new problem-solving tool for the mathematician, scientist, and
engineer. It fits naturally with the digital computer with its basis functions defined by summations not integrals or derivatives. Unlike most traditional expansion systems, the basis functions of the wavelet analysis are not solutions of differential equations. In some areas, it is the first truly new tool we have had in many
years. Indeed, use of wavelets and wavelet transforms requires a new point of view and a new method of
interpreting representations that we are still learning how to exploit.
Work by Donoho, Johnstone, Coifman, and others has added theoretical reasons for why wavelet analysis is so versatile and powerful, and has given generalizations that are still being worked on. They have shown
that wavelet systems have some inherent generic advantages and are near optimal for a wide class of problems
[141]. They also show that adaptive means can create special wavelet systems for particular signals and classes
of signals.
1 This content is available online at <http://cnx.org/content/m45097/1.15/>.
The multiresolution decomposition seems to separate components of a signal in a way that is superior to
most other methods for analysis, processing, or compression. Because of the ability of the discrete wavelet
transform to decompose a signal at different independent scales and to do it in a very flexible way, Burke calls wavelets "The Mathematical Microscope" [48], [268]. Because of this powerful and flexible decomposition, linear and nonlinear processing of signals in the wavelet transform domain offers new methods for signal detection, filtering, and compression [141], [149], [138], [458], [566], [233]. It also can be used as the basis for
robust numerical algorithms.
You will also see an interesting connection and equivalence to filter bank theory from digital signal processing [524], [12]. Indeed, some of the results obtained with filter banks are the same as with discrete-
time wavelets, and this has been developed in the signal processing community by Vetterli, Vaidyanathan,
Smith and Barnwell, and others. Filter banks, as well as most algorithms for calculating wavelet transforms,
are part of a still more general area of multirate and time-varying systems.
The presentation here will be as a tutorial or primer for people who know little or nothing about wavelets
but do have a technical background. It assumes a knowledge of Fourier series and transforms and of linear
algebra and matrix theory. It also assumes a background equivalent to a B.S. degree in engineering, science,
or applied mathematics. Some knowledge of signal processing is helpful but not essential. We develop the
ideas in terms of one-dimensional signals [445] modeled as real or perhaps complex functions of time, but
the ideas and methods have also proven effective in image representation and processing [472], [336] dealing
with two, three, or even four or more dimensions. Vector spaces have proved to be a natural setting for
developing both the theory and applications of wavelets. Some background in that area is helpful but can
be picked up as needed [55]. The study and understanding of wavelets is greatly assisted by using some sort
of wavelet software system to work out examples and run experiments. Matlab™ programs are included at the end of this book and on our web site (noted at the end of the preface). Several other systems are
mentioned in Chapter: Wavelet-Based Signal Processing and Applications (Chapter 11).
There are several different approaches that one could take in presenting wavelet theory. We have chosen
to start with the representation of a signal or function of continuous time in a series expansion, much as a
Fourier series is used in a Fourier analysis. From this series representation, we can move to the expansion of a
function of a discrete variable (e.g., samples of a signal) and the theory of filter banks to efficiently calculate and interpret the expansion coefficients. This would be analogous to the discrete Fourier transform (DFT) and its efficient implementation, the fast Fourier transform (FFT). We can also go from the series expansion
to an integral transform called the continuous wavelet transform, which is analogous to the Fourier transform
or Fourier integral. We feel starting with the series expansion gives the greatest insight and provides ease in
seeing both the similarities and differences with Fourier analysis.
This book is organized into sections and chapters, each somewhat self-contained. The earlier chapters
give a fairly complete development of the discrete wavelet transform (DWT) as a series expansion of signals
in terms of wavelets and scaling functions. The later chapters are short descriptions of generalizations of the
DWT and of applications. They give references to other works, and serve as a sort of annotated bibliography.
Because we intend this book as an introduction to wavelets which already have an extensive literature, we
have included a rather long bibliography. However, it will soon be incomplete because of the large number
of papers that are currently being published. Nevertheless, a guide to the other literature is essential to our
goal of an introduction.
A good sketch of the philosophy of wavelet analysis and the history of its development can be found in a
book published by the National Academy of Science in the chapter by Barbara Burke [48]. She has written
an excellent expanded version in [268], which should be read by anyone interested in wavelets. Daubechies
gives a brief history of the early research in [127].
Many of the results and relationships presented in this book are in the form of theorems and proofs or
derivations. A real effort has been made to ensure the correctness of the statements of theorems but the
proofs are often only outlines of derivations intended to give insight into the result rather than to be a formal
proof. Indeed, many of the derivations are put in the Appendix in order not to clutter the presentation.
We hope this style will help the reader gain insight into this very interesting but sometimes obscure new
mathematical signal processing tool.
We use a notation that is a mixture of that used in the signal processing literature and that in the
mathematical literature. We hope this will make the ideas and results more accessible, but some uniformity
and cleanness is lost.
The authors acknowledge AFOSR, ARPA, NSF, Nortel, Inc., Texas Instruments, Inc. and Aware, Inc.
for their support of this work. We specifically thank H. L. Resnikoff, who first introduced us to wavelets
and who proved remarkably accurate in predicting their power and success. We also thank W. M. Lawton,
R. O. Wells, Jr., R. G. Baraniuk, J. E. Odegard, I. W. Selesnick, M. Lang, J. Tian, and members of the
Rice Computational Mathematics Laboratory for many of the ideas and results presented in this book. The
first named author would like to thank the Maxfield and Oshman families for their generous support. The
students in EE-531 and EE-696 at Rice University provided valuable feedback as did Bruce Francis, Strela
Vasily, Hans Schüssler, Peter Steffen, Gary Sitton, Jim Lewis, Yves Angel, Curt Michel, J. H. Husoy, Kjersti
Engan, Ken Castleman, Jeff Trinkle, Katherine Jones, and other colleagues at Rice and elsewhere.
We also particularly want to thank Tom Robbins and his colleagues at Prentice Hall for their support
and help. Their reviewers added significantly to the book.
We would appreciate learning of any errors or misleading statements that any readers discover. Indeed,
any suggestions for improvement of the book would be most welcome. Send suggestions or comments via
email to [email protected]. Software, articles, errata for this book, and other information on the wavelet research
at Rice can be found on the world-wide-web URL: http://dsp.rice.edu/ with links to other sites where wavelet
research is being done.
C. Sidney Burrus, Ramesh A. Gopinath, and Haitao Guo
Houston, Texas; Yorktown Heights, New York; and Cupertino, California
Introduction to Wavelets
This chapter will provide an overview of the topics to be developed in the book. Its purpose is to present
the ideas, goals, and outline of properties for an understanding of and ability to use wavelets and wavelet
transforms. The details and more careful definitions are given later in the book.

A wave is usually defined as an oscillating function of time or space, such as a sinusoid. Fourier analysis is wave analysis. It expands signals or functions in terms of sinusoids (or, equivalently, complex exponentials) which has proven to be extremely valuable in mathematics, science, and engineering, especially for periodic, time-invariant, or stationary phenomena. A wavelet is a "small wave," which has its energy concentrated in time to give a tool for the analysis of transient, nonstationary, or time-varying phenomena. It still has the oscillating wave-like characteristic but also has the ability to allow simultaneous time and frequency analysis with a flexible mathematical foundation. This is illustrated in Figure 2.1 with the wave (sinusoid) oscillating with equal amplitude over −∞ ≤ t ≤ ∞ and, therefore, having infinite energy and with the wavelet in Figure 2.2 having its finite energy concentrated around a point in time.
We will take wavelets and use them in a series expansion of signals or functions much the same way a
Fourier series uses the wave or sinusoid to represent a signal or function. The signals are functions of a
continuous variable, which often represents time or distance. From this series expansion, we will develop
a discrete-time version similar to the discrete Fourier transform where the signal is represented by a string
of numbers where the numbers may be samples of a signal, samples of another string of numbers, or inner
products of a signal with some expansion set. Finally, we will briefly describe the continuous wavelet
transform where both the signal and the transform are functions of continuous variables. This is analogous
to the Fourier transform.
A signal or function f (t) can often be better analyzed or processed if expressed as a linear decomposition

f (t) = Σℓ aℓ ψℓ (t)    (2.1)

where ℓ is an integer index for the finite or infinite sum, aℓ are the real-valued expansion coefficients, and ψℓ (t) are a set of real-valued functions of t called the expansion set. If the expansion (2.1) is unique, the set is called a basis for the class of functions that can be so expressed. If the basis is orthonormal, meaning

⟨ψk (t), ψℓ (t)⟩ = ∫ ψk (t) ψℓ (t) dt = 0,   k ≠ ℓ,    (2.2)

then the coefficients can be calculated by the inner product

ak = ⟨f (t), ψk (t)⟩ = ∫ f (t) ψk (t) dt.    (2.3)

One can see that substituting (2.1) into (2.3) and using (2.2) gives the single ak coefficient. If the basis set is not orthogonal, then a dual basis set ψ̃k (t) exists such that using (2.3) with the dual basis gives the desired coefficients. This will be developed in Chapter: A multiresolution formulation of Wavelet Systems (Chapter 3).
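A minimal numerical sketch of (2.2) and (2.3) in Python, using sines over one period as the orthogonal set and a simple Riemann sum for the integral; the normalization constant, grid size, and test coefficients are choices of this sketch, not part of the text:

import numpy as np

# Dense grid over one period; {sin(k t)} is orthogonal on [0, 2*pi].
t = np.linspace(0.0, 2.0 * np.pi, 4096)
dt = t[1] - t[0]

def psi(k):
    # Normalized so that <psi_k, psi_k> is approximately 1.
    return np.sin(k * t) / np.sqrt(np.pi)

def inner(x, y):
    # Numerical version of <x, y> = integral of x(t) y(t) dt (real-valued case).
    return np.sum(x * y) * dt

# (2.2): inner products of distinct members are ~0; of a member with itself, ~1.
print(round(inner(psi(2), psi(3)), 4), round(inner(psi(2), psi(2)), 4))

# (2.3): a signal built with known coefficients a_2 = 1.5 and a_5 = -0.7 ...
f = 1.5 * psi(2) - 0.7 * psi(5)
# ... returns those coefficients as a_k = <psi_k, f>.
print(round(inner(psi(2), f), 4), round(inner(psi(5), f), 4))

The same inner-product recovery is what the wavelet expansion uses, with the ψj,k (t) of (2.4) in place of the sines.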
For a Fourier series, the orthogonal basis functions ψk (t) are sin (kω0 t) and cos (kω0 t) with frequencies
of kω0. For a Taylor's series, the nonorthogonal basis functions are simple monomials t^k, and for many other
expansions they are various polynomials. There are expansions that use splines and even fractals.
For the wavelet expansion, a two-parameter system is constructed such that (2.1) becomes
f (t) = Σk Σj aj,k ψj,k (t)    (2.4)
where both j and k are integer indices and the ψj,k (t) are the wavelet expansion functions that usually
form an orthogonal basis.
The set of expansion coefficients aj,k is called the discrete wavelet transform (DWT) of f (t) and (2.4)
is the inverse transform.
Virtually all wavelet systems have these very general characteristics. Where the Fourier series maps a one-
dimensional function of a continuous variable into a one-dimensional sequence of coefficients, the wavelet
expansion maps it into a two-dimensional array of coecients. We will see that it is this two-dimensional
representation that allows localizing the signal in both time and frequency. A Fourier series expansion
localizes in frequency in that if a Fourier series expansion of a signal has only one large coefficient, then the signal is essentially a single sinusoid at the frequency determined by the index of the coefficient. The simple
time-domain representation of the signal itself gives the localization in time. If the signal is a simple pulse,
the location of that pulse is the localization in time. A wavelet representation will give the location in both
time and frequency simultaneously. Indeed, a wavelet representation is much like a musical score where the
location of the notes tells when the tones occur and what their frequencies are.
1. All so-called first-generation wavelet systems are generated from a single scaling function or wavelet by simple scaling and translation. The two-dimensional parameterization is achieved from the function (sometimes called the generating wavelet or mother wavelet) ψ (t) by

ψj,k (t) = 2^(j/2) ψ (2^j t − k),   j, k ∈ Z    (2.5)

where Z is the set of all integers and the factor 2^(j/2) maintains a constant norm independent of scale j. This parameterization of the time or space location by k and the frequency or scale (actually the logarithm of scale) by j turns out to be extraordinarily effective; a short sketch after this list illustrates the scaling and translation.
2. Almost all useful wavelet systems also satisfy the multiresolution conditions. This means that if a set
of signals can be represented by a weighted sum of ϕ (t − k), then a larger set (including the original)
can be represented by a weighted sum of ϕ (2t − k). In other words, if the basic expansion signals are
made half as wide and translated in steps half as wide, they will represent a larger class of signals
exactly or give a better approximation of any signal.
3. The lower resolution coefficients can be calculated from the higher resolution coefficients by a tree-structured algorithm called a filter bank. This allows a very efficient calculation of the expansion coefficients (also known as the discrete wavelet transform) and relates wavelet transforms to an older
area in digital signal processing.
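A minimal Python sketch of the scaling and translation in (2.5), using the Haar function as the mother wavelet purely for illustration; the grid, its range, and the particular (j, k) pairs are arbitrary choices of the sketch:

import numpy as np

def mother(t):
    # Haar mother wavelet: +1 on [0, 0.5), -1 on [0.5, 1), zero elsewhere.
    return np.where((t >= 0) & (t < 0.5), 1.0, 0.0) - np.where((t >= 0.5) & (t < 1.0), 1.0, 0.0)

def psi(j, k, t):
    # First-generation system of (2.5): psi_{j,k}(t) = 2**(j/2) psi(2**j t - k).
    return 2.0 ** (j / 2) * mother(2.0 ** j * t - k)

t = np.linspace(-2.0, 6.0, 80001)
dt = t[1] - t[0]
for j, k in [(0, 0), (1, 3), (3, 10)]:
    w = psi(j, k, t)
    norm = np.sqrt(np.sum(w * w) * dt)            # stays ~1 at every scale
    support = (t[w != 0].min(), t[w != 0].max())  # width shrinks as 2**(-j); location moves with k
    print(j, k, round(norm, 3), [round(s, 3) for s in support])

Whatever (j, k) is chosen, the 2^(j/2) factor keeps the norm constant while the support narrows with increasing j and shifts with k, which is exactly the two-dimensional parameterization described above.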
The operations of translation and scaling seem to be basic to many practical signals and signal-generating
processes, and their use is one of the reasons that wavelets are efficient expansion functions. Figure 2.3 is a
pictorial representation of the translation and scaling of a single mother wavelet described in (2.5). As the
index k changes, the location of the wavelet moves along the horizontal axis. This allows the expansion to
explicitly represent the location of events in time or space. As the index j changes, the shape of the wavelet
changes in scale. This allows a representation of detail or resolution. Note that as the scale becomes finer (j larger), the steps in time become smaller. It is both the narrower wavelet and the smaller steps that allow representation of greater detail or higher resolution. For clarity, only every fourth term in the translation (k = 1, 5, 9, 13, · · ·) is shown; otherwise, the figure would be cluttered. What is not illustrated here but is important
is that the shape of the basic mother wavelet can also be changed. That is done during the design of the
wavelet system and allows one set to well-represent a particular class of signals.
For the Fourier series and transform and for most signal expansion systems, the expansion functions
(bases) are chosen, then the properties of the resulting transform are derived and
analyzed. For the wavelet system, the desired properties or characteristics are mathematically required,
then the resulting basis functions are derived. Because these constraints do not use all the degrees of
freedom, other properties can be required to customize the wavelet system for a particular application. Once
you decide on a Fourier series, the sinusoidal basis functions are completely set. That is not true for the
wavelet. There are an infinity of very different wavelets that all satisfy the above properties. Indeed, the
understanding and design of the wavelets is an important topic of this book.
Wavelet analysis is well-suited to transient signals. Fourier analysis is appropriate for periodic signals or
for signals whose statistical characteristics do not change with time. It is the localizing property of wavelets
that allows a wavelet expansion of a transient event to be modeled with a small number of coefficients. This
turns out to be very useful in applications.
Haar [242] showed this result in 1910, and we now know that wavelets are a generalization of his work. An
example of a Haar system and expansion is given at the end of Chapter: A multiresolution formulation of
Wavelet Systems (Chapter 3).
It takes a large number of Fourier components to represent a discontinuity or a sharp corner because the sinusoids are smooth functions. In contrast, there are many different wavelets and some have sharp corners themselves.
To appreciate the special character of wavelets you should recognize that it was not until the late 1980's
that some of the most useful basic wavelets were ever seen. Figure 2.5 illustrates four different scaling
functions, each being zero outside of 0 < t < 6 and each generating an orthogonal wavelet basis for all square
integrable functions. This figure is also shown on the cover of this book.
Several more scaling functions and their associated wavelets are illustrated in later chapters, and the
Haar wavelet is shown in Figure 2.4 and in detail at the end of Chapter: A multiresolution formulation of
Wavelet Systems (Chapter 3).
Figure 2.5: Example Scaling Functions (See Section: Further Properties of the Scaling Function and
Wavelet (Section 6.8: Further Properties of the Scaling Function and Wavelet) for the meaning of α and
β)
1. The size of the wavelet expansion coefficients aj,k in (2.4) or dj,k in (2.6) drops off rapidly with j and k for a large class of signals. This property is called being an unconditional basis and it is why wavelets are so effective in signal and image compression, denoising, and detection. Donoho [142], [161] showed
that wavelets are near optimal for a wide class of signals for compression, denoising, and detection.
2. The wavelet expansion allows a more accurate local description and separation of signal characteristics.
A Fourier coecient represents a component that lasts for all time and, therefore, temporary events
must be described by a phase characteristic that allows cancellation or reinforcement over large time
periods. A wavelet expansion coecient represents a component that is itself local and is easier to
interpret. The wavelet expansion may allow a separation of components of a signal whose Fourier
description overlap in both time and frequency.
3. Wavelets are adjustable and adaptable. Because there is not just one wavelet, they can be designed
to fit individual applications. They are ideal for adaptive systems that adjust themselves to suit the
signal.
4. The generation of wavelets and the calculation of the discrete wavelet transform is well matched to the
digital computer. We will later see that the defining equation for a wavelet uses no calculus. There are no derivatives or integrals, just multiplications and additions, operations that are basic to a digital
computer.
While some of these details may not be clear at this point, they should point to the issues that are important
to both theory and application and give reasons for the detailed development that follows in this and other
books.
In the wavelet expansion (2.4), the two-dimensional set of coefficients aj,k is called the discrete wavelet transform (DWT) of f (t). A more specific form indicating how the aj,k's are calculated can be written using inner products as
f (t) = Σj,k ⟨ψj,k (t), f (t)⟩ ψj,k (t)    (2.9)

if the ψj,k (t) form an orthonormal basis for the space of signals of interest [117]. The inner product is usually defined as

⟨x (t), y (t)⟩ = ∫ x* (t) y (t) dt.    (2.10)
The goal of most expansions of a function or signal is to have the coefficients of the expansion aj,k give more useful information about the signal than is directly obvious from the signal itself. A second goal is to have most of the coefficients be zero or very small. This is what is called a sparse representation and
is extremely important in applications for statistical estimation and detection, data compression, nonlinear
noise reduction, and fast algorithms.
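As a rough numerical illustration of such a sparse representation, the Python sketch below uses the simplest (Haar) wavelet system, chosen only because it fits in a few lines; the test signal and the 1% threshold are likewise arbitrary choices of the sketch:

import numpy as np

def haar_dwt(x):
    # Full orthonormal Haar analysis: repeatedly split into averages (scaling
    # coefficients) and differences (wavelet coefficients), each scaled by 1/sqrt(2).
    x = np.asarray(x, dtype=float)
    out = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        dif = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        out.append(dif)          # wavelet coefficients, finest scale first
        x = avg                  # iterate on the scaling coefficients
    out.append(x)                # final coarse scaling coefficient
    return np.concatenate(out[::-1])

n = np.arange(1024)
signal = np.sin(2 * np.pi * n / 1024) + (n > 700) * 0.5   # smooth piece plus one jump
coeffs = haar_dwt(signal)
large = np.sum(np.abs(coeffs) > 0.01 * np.abs(coeffs).max())
print(len(coeffs), int(large))   # only a small fraction of the 1024 values are significant

Discarding or thresholding the many small values is what the compression and denoising applications discussed later rely on.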
Although this expansion is called the discrete wavelet transform (DWT), it probably should be called a
wavelet series since it is a series expansion which maps a function of a continuous variable into a sequence
of coefficients much the same way the Fourier series does. However, that is not the convention.
This wavelet series expansion is in terms of two indices, the time translation k and the scaling index j .
For the Fourier series, there are only two possible values of k , zero and π/2, which give the sine terms and
the cosine terms. The values j give the frequency harmonics. In other words, the Fourier series is also a
two-dimensional expansion, but that is not seen in the exponential form and generally not noticed in the
trigonometric form.
2 Bases and tight frames are defined in Chapter: Bases, Orthogonal Bases, Biorthogonal Bases, Frames, Tight Frames, and Unconditional Bases (Chapter 5)
The DWT of a signal is somewhat difficult to illustrate because it is a function of two variables or indices,
but we will show the DWT of a simple pulse in Figure 2.6 to illustrate the localization of the transform.
Other displays will be developed in the next chapter.
Figure 2.6: Discrete Wavelet Transform of a Pulse, using ψD6 with a Gain of √2 for Each Higher Scale.
The continuous wavelet transform, developed early in the history of wavelets, is briefly described in Section: Discrete Multiresolution Analysis, the Discrete-Time Wavelet
(Section 8.8: Discrete Multiresolution Analysis, the Discrete-Time Wavelet Transform, and the Continuous
Wavelet Transform) of this book. It is analogous to the Fourier transform or Fourier integral.
A multiresolution formulation of Wavelet Systems

Both the mathematics and the practical interpretations of wavelets seem to be best served by using the concept of resolution [380], [338], [343], [118] to define the effects of changing scale. To do this, we will start with a scaling function φ (t) rather than directly with the wavelet ψ (t). After the scaling function is defined from the concept of resolution, the wavelet functions will be derived from it. This chapter will give a rather intuitive development of these ideas, which will be followed by more rigorous arguments in Chapter: The Scaling Function and Scaling Coefficients, Wavelet and Wavelet Coefficients (Chapter 6).
This multiresolution formulation is obviously designed to represent signals where a single event is de-
composed into finer and finer detail, but it turns out also to be valuable in representing signals where a
time-frequency or time-scale description is desired even if no concept of resolution is needed. However, there
are other cases where multiresolution is not appropriate, such as for the short-time Fourier transform or
Gabor transform or for local sine or cosine bases or lapped orthogonal transforms, which are all discussed
briefly later in this book.
The inner product of two signals (vectors) f (t) and g (t) is defined by

⟨f (t), g (t)⟩ = ∫ f (t) g (t) dt    (3.1)

with the range of integration depending on the signal class being considered. This inner product defines a norm or "length" of a vector which is denoted and defined by

‖f‖ = √|⟨f, f⟩|    (3.2)
1 This content is available online at <http://cnx.org/content/m45081/1.4/>.
which is a simple generalization of the geometric operations and definitions in three-dimensional Euclidean
space. Two signals (vectors) with non-zero norms are called orthogonal if their inner product is zero. For
example, with the Fourier series, we see that sin (t) is orthogonal to sin (2t).
A space that is particularly important in signal processing is called L2 (R). This is the space of all
functions f (t) with a well-defined integral of the square of the modulus of the function. The "L" signifies a Lebesgue integral, the "2" denotes the integral of the square of the modulus of the function, and R states
that the independent variable of integration t is a number over the whole real line. For a function g (t) to
be a member of that space is denoted: g ∈ L2 (R) or simply g ∈ L2 .
Although most of the definitions and derivations are in terms of signals that are in L2, many of the results hold for larger classes of signals. For example, polynomials are not in L2 but can be expanded over any finite domain by most wavelet systems.
In order to develop the wavelet expansion described in (2.5), we will need the idea of an expansion set or a basis set. If we start with the vector space of signals, S, then if any f (t) ∈ S can be expressed as f (t) = Σk ak φk (t), the set of functions φk (t) are called an expansion set for the space S. If the representation is unique, the set is a basis. Alternatively, one could start with the expansion set or basis set and define the space S as the set of all functions that can be expressed by f (t) = Σk ak φk (t). This
is called the span of the basis set. In several cases, the signal spaces that we will need are actually the
closure of the space spanned by the basis set. That means the space contains not only all signals that can
be expressed by a linear combination of the basis functions, but also the signals which are the limit of these
infinite expansions. The closure of a space is usually denoted by an over-line.
One can generally increase the size of the subspace spanned by changing the time scale of the scaling
functions. A two-dimensional family of functions is generated from the basic scaling function by scaling and
translation by

φj,k (t) = 2^(j/2) φ (2^j t − k),   j, k ∈ Z.

For j > 0, the span can be larger since φj,k (t) is narrower and is translated in smaller steps. It, therefore, can represent finer detail. For j < 0, φj,k (t) is wider and is translated in larger steps. So these wider scaling functions can represent only coarse information, and the space they span is smaller. Another way to think about the effects of a change of scale is in terms of resolution. If one talks about photographic or optical
resolution, then this idea of scale is the same as resolving power.
This nesting of the spanned spaces, illustrated in Figure 3.1, is achieved by requiring that φ (t) ∈ V1, which means that if φ (t) is in V0, it is also
in V1 , the space spanned by φ (2t). This means φ (t) can be expressed in terms of a weighted sum of shifted
φ (2t) as
φ (t) = Σn h (n) √2 φ (2t − n),   n ∈ Z    (3.13)
where the coefficients h (n) are a sequence of real or perhaps complex numbers called the scaling function coefficients (or the scaling filter or the scaling vector) and the √2 maintains the norm of the scaling function with the scale of two.
This recursive equation is fundamental to the theory of the scaling functions and is, in some ways, analogous to a differential equation with coefficients h (n) and solution φ (t) that may or may not exist or be unique. The equation is referred to by different names to describe different interpretations or points of view. It is called the refinement equation, the multiresolution analysis (MRA) equation, or the dilation equation.
The Haar scaling function is the simple unit-width, unit-height pulse function φ (t) shown in Figure 3.2, and it is obvious that φ (2t) can be used to construct φ (t) by

φ (t) = φ (2t) + φ (2t − 1),

which means (3.13) is satisfied for scaling function coefficients h (0) = h (1) = 1/√2. The Daubechies scaling function shown in Figure: Daubechies Scaling Functions satisfies (3.13) for h = {0.483, 0.8365, 0.2241, −0.1294}, as do all scaling functions for their corresponding scaling coefficients. Indeed, the design of wavelet systems is the choosing of the coefficients h (n), and that is developed later.
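The refinement equation (3.13) can be solved numerically by successive approximation. The Python sketch below iterates it for the length-4 coefficients quoted above; the rescaling so that the coefficients sum exactly to √2, the box-function starting guess, and the grid resolution are all choices of this sketch:

import numpy as np

# Length-4 scaling coefficients as quoted in the text (rounded values), rescaled
# here so that sum(h) = sqrt(2), the normalization consistent with (3.13).
h = np.array([0.483, 0.8365, 0.2241, -0.1294])
h = h * np.sqrt(2.0) / h.sum()

def cascade(h, levels=8):
    # Successive approximation of phi(t) from phi(t) = sum_n h(n) sqrt(2) phi(2t - n),
    # starting from the Haar box on [0, 1).
    N, res = len(h), 2 ** levels              # res = samples per unit of t
    t = np.arange((N - 1) * res + 1) / res    # support of phi is [0, N - 1]
    phi = np.where(t < 1.0, 1.0, 0.0)
    for _ in range(levels):
        new = np.zeros_like(phi)
        for n, hn in enumerate(h):
            idx = 2 * np.arange(len(t)) - n * res   # grid index of 2t - n
            ok = (idx >= 0) & (idx < len(t))
            new[ok] += hn * np.sqrt(2.0) * phi[idx[ok]]
        phi = new
    return t, phi

t, phi = cascade(h)
print(round(np.sum(phi) * (t[1] - t[0]), 2))   # the integral of phi stays ~1

With these coefficients the iteration settles to a close approximation of the Daubechies scaling function referred to above.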
The relationship of the various subspaces can be seen from the following expressions. From (3.9) we see
that we may start at any Vj , say at j = 0, and write
V0 ⊂ V1 ⊂ V2 ⊂ · · · ⊂ L2 . (3.16)
We now define the wavelet spanned subspace W0 such that
V1 = V0 ⊕ W0 (3.17)
which extends to
V2 = V0 ⊕ W0 ⊕ W1 . (3.18)
In general this gives
L2 = V0 ⊕ W0 ⊕ W1 ⊕ · · · (3.19)
where V0 is the initial space spanned by the scaling function ϕ (t − k). Figure 3.3 pictorially shows the nesting of the scaling function spaces Vj for different scales j and how the wavelet spaces are the disjoint differences (except for the zero element) or, equivalently, the orthogonal complements.
The scale of the initial space is arbitrary and could be chosen at a higher resolution of, say, j = 10 to
give

L2 = V10 ⊕ W10 ⊕ W11 ⊕ · · ·    (3.20)
Since the wavelets ψ (t − k) reside in the space spanned by the next narrower scaling function, W0 ⊂ V1, they can be represented by a weighted sum of shifted scaling functions φ (2t) as

ψ (t) = Σn h1 (n) √2 φ (2t − n)    (3.24)

for some set of coefficients h1 (n). From the requirement that the wavelets span the "difference" or orthogonal complement spaces, and the orthogonality of integer translates of the wavelet (or scaling function), it is shown in the Appendix in (13.49) that the wavelet coefficients (modulo translations by integer multiples of two) are required by orthogonality to be related to the scaling function coefficients by
h1 (n) = (−1)^n h (1 − n).    (3.25)

One example for a finite even length-N h (n) could be

h1 (n) = (−1)^n h (N − 1 − n).    (3.26)
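For the length-4 coefficients quoted earlier, (3.26) gives the wavelet coefficients directly, and the even-shift cross products with h (n) vanish, which is the orthogonality the construction is designed to provide. A short Python sketch (the shifts printed are an arbitrary sample):

import numpy as np

# Daubechies length-4 scaling coefficients quoted with (3.13)-style normalization.
h = np.array([0.483, 0.8365, 0.2241, -0.1294])
N = len(h)

# Wavelet (highpass) coefficients from (3.26): h1(n) = (-1)^n h(N - 1 - n).
h1 = np.array([(-1) ** n * h[N - 1 - n] for n in range(N)])
print(np.round(h1, 4))            # [-0.1294 -0.2241  0.8365 -0.483 ]

# Cross products of h with even shifts of h1 vanish, which is what lets the
# wavelet span the orthogonal complement ("difference") space W0.
for k in range(-1, 2):
    s = sum(h[n] * h1[n - 2 * k] for n in range(N) if 0 <= n - 2 * k < N)
    print(k, round(s, 12))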
The function generated by (3.24) gives the prototype or mother wavelet ψ (t) for a class of expansion functions of the form

ψj,k (t) = 2^(j/2) ψ (2^j t − k)    (3.27)

where 2^j is the scaling of t (j is the log2 of the scale), 2^(−j) k is the translation in t, and 2^(j/2) maintains the norm of the wavelet at different scales.

Figure 3.4: (a) Haar (same as ψD2) (b) Triangle (same as ψS1)
We have now constructed a set of functions φk (t) and ψj,k (t) that could span all of L2 (R). According
to (3.19), any function g (t) ∈ L2 (R) could be written
g (t) = Σ_{k=−∞}^{∞} c (k) φk (t) + Σ_{j=0}^{∞} Σ_{k=−∞}^{∞} d (j, k) ψj,k (t)    (3.28)
where the coefficients are calculated by the inner products

c (k) = ⟨g (t), φk (t)⟩ = ∫ g (t) φk (t) dt    (3.29)

and

dj (k) = d (j, k) = ⟨g (t), ψj,k (t)⟩ = ∫ g (t) ψj,k (t) dt.    (3.30)
The coefficient d (j, k) is sometimes written as dj (k) to emphasize the difference between the time translation index k and the scale parameter j. The coefficient c (k) is also sometimes written as cj (k) or c (j, k) if a more general "starting scale" other than j = 0 for the lower limit on the sum in (3.28) is used.
It is important at this point to recognize the relationship of the scaling function part of the expansion
(3.28) to the wavelet part of the expansion. From the representation of the nested spaces in (3.19) we see
that the scaling function can be defined at any scale j. (3.28) uses j = 0 to denote the family of scaling
functions.
You may want to examine the Haar system example at the end of this chapter just now to see these
features illustrated.
More generally, starting the expansion at an arbitrary coarsest scale j0 gives

g (t) = Σk cj0 (k) 2^(j0/2) φ (2^(j0) t − k) + Σk Σ_{j=j0}^{∞} dj (k) 2^(j/2) ψ (2^j t − k)    (3.32)

or

g (t) = Σk cj0 (k) φj0,k (t) + Σk Σ_{j=j0}^{∞} dj (k) ψj,k (t)    (3.33)
where j0 could be zero as in (3.19) and (3.28), it could be ten as in (3.20), or it could be negative innity as
in (2.9) and (3.22) where no scaling functions are used. The choice of j0 sets the coarsest scale whose space
is spanned by φj0 ,k (t). The rest of L2 (R) is spanned by the wavelets which provide the high resolution
details of the signal. In practice where one is given only the samples of a signal, not the signal itself, there
is a highest resolution when the finest scale is the sample level.
The coefficients in this wavelet expansion are called the discrete wavelet transform (DWT) of the signal g (t). If certain conditions described later are satisfied, these wavelet coefficients completely describe the original signal and can be used in a way similar to Fourier series coefficients for analysis, description, approximation, and filtering. If the wavelet system is orthogonal, these coefficients can be calculated by inner products
cj (k) = ⟨g (t), φj,k (t)⟩ = ∫ g (t) φj,k (t) dt    (3.34)

and

dj (k) = ⟨g (t), ψj,k (t)⟩ = ∫ g (t) ψj,k (t) dt.    (3.35)
If the scaling function is well-behaved, then at a high scale, the scaling function is similar to a Dirac delta function and the inner product simply samples the function. In other words, at high enough resolution, samples of the signal are very close to the scaling coefficients. More is said about this later. It has been shown [143] that wavelet systems form an unconditional basis for a large class of signals. That is discussed in Chapter: The Scaling Function and Scaling Coefficients, Wavelet and Wavelet Coefficients (Chapter 6) but means that even for the worst case signal in the class, the wavelet expansion coefficients drop off rapidly as j and k increase. This is why the DWT is efficient for signal and image compression.
The DWT is similar to a Fourier series but, in many ways, is much more flexible and informative. It can be made periodic like a Fourier series to represent periodic signals efficiently. However, unlike a Fourier
series, it can be used directly on non-periodic transient signals with excellent results. An example of the
DWT of a pulse was illustrated in Figure: Two-Stage Two-Band Analysis Tree (Figure 4.3). Other examples
are illustrated just after the next section.
For the wavelet expansion with an orthogonal basis, a form of Parseval's theorem relates the energy of the signal g (t) to the energy in the expansion coefficients:

∫ |g (t)|² dt = Σk |c (k)|² + Σ_{j=0}^{∞} Σk |dj (k)|²    (3.36)

with the energy in the expansion domain partitioned in time by k and in scale by j.
partitioning of the time-scale parameter plane that describes the DWT. If the expansion system is a tight
frame, there is a constant multiplier in (3.36) caused by the redundancy.
Daubechies [103], [118] showed that it is possible for the scaling function and the wavelets to have compact
support (i.e., be nonzero only over a finite region) and to be orthonormal. This makes possible the time
localization that we desire. We now have a framework for describing signals that has features of short-time
Fourier analysis and of Gabor-based analysis but using a new variable, scale. For the short-time Fourier
transform, orthogonality and good time-frequency resolution are incompatible according to the Balian-Low-
Coifman-Semmes theorem [111], [470]. More precisely, if the short-time Fourier transform is orthogonal,
either the time or the frequency resolution is poor and the trade-off is inflexible. This is not the case for the
wavelet transform. Also, note that there is a variety of scaling functions and wavelets that can be obtained
by choosing different coefficients h (n) in (3.13).
2 or a tight frame defined in Chapter: Bases, Orthogonal Bases, Biorthogonal Bases, Frames, Tight Frames, and Unconditional Bases (Chapter 5)
Donoho [143] has noted that wavelets are an unconditional basis for a very wide class of signals. This
means wavelet expansions of signals have coefficients that drop off rapidly and therefore the signal can be efficiently represented by a small number of them.
We have first developed the basic ideas of the discrete wavelet system using a scaling multiplier of 2 in the defining equation (3.13). This is called a two-band wavelet system because of the two channels or bands in the related filter banks discussed in Chapter: Filter Banks and the Discrete Wavelet Transform (Chapter 4) and Chapter: Filter Banks and Transmultiplexers (Chapter 9). It is also possible to define a more general discrete wavelet system using φ (t) = Σn h (n) √M φ (M t − n) where M is an integer [483]. This is discussed
in Section: Multiplicity-M (M-Band) Scaling Functions and Wavelets (Section 8.2: Multiplicity-M (M-Band)
Scaling Functions and Wavelets). The details of numerically calculating the DWT are discussed in Chapter:
Calculation of the Discrete Wavelet Transform (Chapter 10) where special forms for periodic signals are
used.
3.6 Display of the Discrete Wavelet Transform and the Wavelet Expansion
It is important to have an informative way of displaying or visualizing the wavelet expansion and transform.
This is complicated in that the DWT is a real-valued function of two integer indices and, therefore, needs
a two-dimensional display or plot. This problem is somewhat analogous to plotting the Fourier transform,
which is a complex-valued function.
There seem to be five displays that show the various characteristics of the DWT well:
1. The most basic time-domain description of a signal is the signal itself (or, for most cases, samples of
the signal) but it gives no frequency or scale information. A very interesting property of the DWT
(and one different from the Fourier series) is that for a high starting scale j0 in (3.33), samples of the signal are the DWT at that scale. This is an extreme case, but it shows the flexibility of the DWT and will
be explained later.
2. The most basic wavelet-domain description is a three-dimensional plot of the expansion coefficients or DWT values c (k) and dj (k) over the j, k plane. This is difficult to do on a two-dimensional page or
display screen, but we show a form of that in Figure 3.5 and Figure 3.8.
3. A very informative picture of the effects of scale can be shown by generating time functions fj (t) at each scale by summing (3.28) over k so that

f (t) = fj0 + Σj fj (t)    (3.37)

where

fj0 = Σk c (k) φ (t − k)    (3.38)

and

fj (t) = Σk dj (k) 2^(j/2) ψ (2^j t − k).    (3.39)

This illustrates the components of the signal at each scale and is shown in the examples later in this chapter.
4. Another illustration that shows the time localization of the wavelet expansion is obtained by generating
time functions fk (t) at each translation by summing (3.28) over j so that

f (t) = Σk fk (t)    (3.40)

where

fk (t) = c (k) ϕ (t − k) + Σj dj (k) 2^(j/2) ψ (2^j t − k).    (3.41)
5. There is another rather different display based on a partitioning of the time-scale plane as if the time translation index and scale index were continuous variables. This display is called "tiling the time-frequency plane." Because it is a different type of display and is developed and illustrated in Chapter:
Calculation of the Discrete Wavelet Transform (Chapter 10), it will not be illustrated here.
Experimentation with these displays can be very informative in terms of the properties and capabilities of
the wavelet transform, the effects of particular wavelet systems, and the way a wavelet expansion displays
the various attributes or characteristics of a signal.
Figure 3.5: Discrete Wavelet Transform of the Houston Skyline, using ψD8' with a Gain of √2 for Each Higher Scale
Figure 3.6 shows the approximations of the skyline signal in the various scaling function spaces Vj . This
illustrates just how the approximations progress, giving more and more resolution at higher scales. The fact
that the higher scales give more detail is similar to Fourier methods, but the localization is new. Figure 3.7
illustrates the individual wavelet decomposition by showing the components of the signal that exist in the
wavelet spaces Wj at dierent scales j . This shows the same expansion as Figure 3.6, but with the wavelet
components given separately rather than being cumulatively added to the scaling function. Notice how the
large objects show up at the lower resolution. Groups of buildings and individual buildings are resolved
according to their width. The edges, however, are located at the higher resolutions and are located very
accurately.
Figure 3.6: Projection of the Houston Skyline Signal onto V Spaces using ΦD8'
Figure 3.7: Projection of the Houston Skyline Signal onto W Spaces using ψD8'
The second example uses a chirp or doppler signal to illustrate how a time-varying frequency is described
by the scale decomposition. Figure 3.8 gives the coefficients of the DWT directly as a function of j and k.
Notice how the location in k tracks the frequencies in the signal in a way the Fourier transform cannot. Fig-
ure 3.9 and Figure 3.10 show the scaling function approximations and the wavelet decomposition of this
chirp signal. Again, notice in this type of display how the "location" of the frequencies is shown.
Figure 3.8: Discrete Wavelet Transform of a Doppler, using ψD8' with a gain of √2 for each higher scale.
The Haar example uses the simple unit pulse scaling function

φ (t) = 1 if 0 < t < 1, and φ (t) = 0 otherwise.    (3.42)
Figure 3.9: Projection of the Doppler Signal onto V Spaces using ΦD8'
Figure 3.10: Projection of the Doppler Signal onto W Spaces using ψD8'
The Haar functions are illustrated in Figure 3.11 where the first column contains the simple constant basis
function that spans V0 , the second column contains the unit pulse of width one half and the one translate
necessary to span V1 . The third column contains four translations of a pulse of width one fourth and the
fourth contains eight translations of a pulse of width one eighth. This shows clearly how increasing the scale
allows greater and greater detail to be realized. However, using only the scaling function does not allow the
decomposition described in the introduction. For that we need the wavelet. Rather than use the scaling
functions φ (8t − k) in V3 , we will use the orthogonal decomposition
V3 = V2 ⊕ W2    (3.44)

which is the same as

Span_k {φ (8t − k)} = Span_k {φ (4t − k)} ⊕ Span_k {ψ (4t − k)}    (3.45)
which means there are two sets of orthogonal basis functions that span V3 , one in terms of j = 3 scaling
functions, and the other in terms of half as many coarser j = 2 scaling functions plus the details contained
in the j = 2 wavelets. This is illustrated in Figure 3.12.
Similarly, the next coarser scaling function space can be decomposed as

V2 = V1 ⊕ W1    (3.46)

which is the same as

Span_k {φ (4t − k)} = Span_k {φ (2t − k)} ⊕ Span_k {ψ (2t − k)}    (3.47)
and, continuing,

V1 = V0 ⊕ W0    (3.48)
which is shown in Figure 3.13. By continuing to decompose the space spanned by the scaling function
until the space is one constant, the complete decomposition of V3 is obtained. This is symbolically shown in
Figure 3.16.
Finally we look at an approximation to a smooth function constructed from the basis elements in V3 =
V0 ⊕W0 ⊕W1 ⊕W2 . Because the Haar functions form an orthogonal basis in each subspace, they can produce
an optimal least squared error approximation to the smooth function. One can easily imagine the effects of adding a higher resolution "layer" of functions to W3 giving an approximation residing in V4. Notice that
these functions satisfy all of the conditions that we have considered for scaling functions and wavelets. The
basic wavelet is indeed an oscillating function which, in fact, has an average of zero and which will produce
finer and finer detail as it is scaled and translated.
The multiresolution character of the scaling function and wavelet system is easily seen from Figure 3.12
where a signal residing in V3 can be expressed in terms of a sum of eight shifted scaling functions at scale j = 3
or a sum of four shifted scaling functions and four shifted wavelets at a scale of j = 2. In the second case,
the sum of four scaling functions gives a low resolution approximation to the signal with the four wavelets giving the higher resolution "detail". The four shifted scaling functions could be further decomposed into
coarser scaling functions and wavelets as illustrated in Figure 3.14 and still further decomposed as shown in
Figure 3.13.
Figure 3.15 shows the Haar approximations of a test function in various resolutions. The signal is an
example of a mixture of a pure sine wave which would have a perfectly localized Fourier domain representation
and two discontinuities which are completely localized in the time domain. The component at the coarsest scale is simply the average of the signal. As we include more and more wavelet scales, the approximation becomes closer to the original signal.
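The Haar decomposition just described is easy to carry out numerically. In the Python sketch below, eight arbitrary sample values stand in for the coefficients of a signal in V3; each stage of normalized averaging and differencing implements one of the splittings V3 = V2 ⊕ W2, V2 = V1 ⊕ W1, and V1 = V0 ⊕ W0, and the merge step shows that nothing is lost:

import numpy as np

# Eight samples, i.e. the expansion coefficients of a signal in V3 for the Haar system.
c3 = np.array([2.0, 4.0, 6.0, 6.0, 5.0, 3.0, 2.0, 2.0])

def haar_split(c):
    # One stage of V_{j+1} = V_j (+) W_j: scaling coefficients are normalized
    # pairwise averages, wavelet coefficients are normalized pairwise differences.
    return (c[0::2] + c[1::2]) / np.sqrt(2.0), (c[0::2] - c[1::2]) / np.sqrt(2.0)

c2, d2 = haar_split(c3)   # V3 = V2 (+) W2
c1, d1 = haar_split(c2)   # V2 = V1 (+) W1
c0, d0 = haar_split(c1)   # V1 = V0 (+) W0
print(np.round(c0, 3), np.round(d0, 3), np.round(d1, 3), np.round(d2, 3))

def haar_merge(c, d):
    # Invert one stage: recover the finer scaling coefficients from (c, d).
    out = np.empty(2 * len(c))
    out[0::2] = (c + d) / np.sqrt(2.0)
    out[1::2] = (c - d) / np.sqrt(2.0)
    return out

rec = haar_merge(haar_merge(haar_merge(c0, d0), d1), d2)
print(np.allclose(rec, c3))   # True: the decomposition loses nothing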
This chapter has skipped over some details in an attempt to communicate the general idea of the method.
The conditions that can or must be satisfied and the resulting properties, together with examples, are
discussed in the following chapters and/or in the references.
In many applications, one never has to deal directly with the scaling functions or wavelets. Only the
coefficients h (n) and h1 (n) in the defining equations (3.13) and (3.24) and c (k) and dj (k) in the expansions (3.28), (3.29), and (3.30) need be considered, and they can be viewed as digital filters and digital signals respectively [191], [526]. While it is possible to develop most of the results of wavelet theory using only filter banks, we feel that both the signal expansion point of view and the filter bank point of view are necessary
for a real understanding of this new tool.
Filter Banks and the Discrete Wavelet Transform

This chapter derives the relationship between the expansion coefficients at a lower scale in terms of those at a higher scale. Starting with the basic recursion equation from (3.13),

φ (t) = Σn h (n) √2 φ (2t − n),    (4.1)

and assuming a unique solution exists, we scale and translate the time variable to give
φ (2^j t − k) = Σn h (n) √2 φ (2 (2^j t − k) − n) = Σn h (n) √2 φ (2^(j+1) t − 2k − n)    (4.2)

which, with the change of variables m = 2k + n, becomes

φ (2^j t − k) = Σm h (m − 2k) √2 φ (2^(j+1) t − m).    (4.3)
If we denote Vj as

Vj = Span_k {2^(j/2) φ (2^j t − k)}    (4.4)
then

f (t) ∈ Vj+1  ⇒  f (t) = Σk cj+1 (k) 2^((j+1)/2) φ (2^(j+1) t − k)    (4.5)
1 This content is available online at <http://cnx.org/content/m45094/1.4/>.
is expressible at a scale of j + 1 with scaling functions only and no wavelets. At one scale lower resolution,
wavelets are necessary for the "detail" not available at a scale of j. We have
f (t) = Σk cj (k) 2^(j/2) φ (2^j t − k) + Σk dj (k) 2^(j/2) ψ (2^j t − k)    (4.6)

where the 2^(j/2) terms maintain the unity norm of the basis functions at various scales. If φj,k (t) and ψj,k (t) are orthonormal or a tight frame, the j level scaling coefficients are found by taking the inner product

cj (k) = ⟨f (t), φj,k (t)⟩ = ∫ f (t) 2^(j/2) φ (2^j t − k) dt    (4.7)

which, by using (4.3) and interchanging the sum and integral, can be written as

cj (k) = Σm h (m − 2k) ∫ f (t) 2^((j+1)/2) φ (2^(j+1) t − m) dt    (4.8)
but the integral is the inner product with the scaling function at a scale of j + 1, giving

cj (k) = Σm h (m − 2k) cj+1 (m).    (4.9)

The corresponding relationship for the wavelet coefficients is

dj (k) = Σm h1 (m − 2k) cj+1 (m).    (4.10)
There is a large literature on digital filters and how to design them [414], [411]. If the number of filter coefficients N is finite, the filter is called a Finite Impulse Response (FIR) filter. If the number is infinite, it is called an Infinite Impulse Response (IIR) filter. The design problem is the choice of the h (n) to obtain some desired effect, often to remove noise or separate signals [411], [414].

In multirate digital filters, there is an assumed relation between the integer index n in the signal x (n) and time. Often the sequence of numbers is simply evenly spaced samples of a function of time. Two basic operations in multirate filters are the down-sampler and the up-sampler. The down-sampler (sometimes
simply called a sampler or a decimator) takes a signal x (n) as an input and produces an output of y (n) =
x (2n). This is symbolically shown in Figure 4.1. In some cases, the down-sampling is by a factor other than
two and in some cases, the output is the odd index terms y (n) = x (2n + 1), but this will be explicitly stated
if it is important.
In down-sampling, there is clearly the possibility of losing information since half of the data is discarded.
The effect in the frequency domain (Fourier transform) is called aliasing which states that the result of this loss of information is a mixing up of frequency components [414], [411]. Only if the original signal is band-limited (half of the Fourier coefficients are zero) is there no loss of information caused by down-sampling.
We talk about digital filtering and down-sampling because that is exactly what (4.9) and (4.10) do. These equations show that the scaling and wavelet coefficients at different levels of scale can be obtained by convolving the expansion coefficients at scale j by the time-reversed recursion coefficients h (−n) and h1 (−n) then down-sampling or decimating (taking every other term, the even terms) to give the expansion coefficients at the next level of j − 1. In other words, the scale-j coefficients are "filtered" by two FIR digital filters with coefficients h (−n) and h1 (−n) after which down-sampling gives the next coarser scaling and wavelet coefficients. These structures implement Mallat's algorithm [339], [344] and have been developed in the engineering literature on filter banks, quadrature mirror filters (QMF), conjugate filters, and perfect reconstruction filter banks [473], [476], [542], [547], [544], [520], [526] and are expanded somewhat in Chapter: Filter Banks and Transmultiplexers (Chapter 9) of this book. Mallat, Daubechies, and others showed the relation of wavelet coefficient calculation and filter banks. The implementation of (4.9) and (4.10) is illustrated in Figure 4.2 where the down-pointing arrows denote a decimation or down-sampling by two and the other boxes denote FIR filtering or a convolution by h (−n) or h1 (−n). To ease notation, we use both h (n) and h0 (n) to denote the scaling function coefficients for the dilation equation (3.13).
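A direct, if inefficient, way to see (4.9) and (4.10) at work is to evaluate the sums exactly as written. The Python sketch below does that with the Haar filters and an arbitrary block of scale-(j + 1) coefficients; the way the ends of the block are handled is a choice of the sketch, not of the equations:

import numpy as np

def analysis_stage(c_next, h, h1):
    # One stage of Mallat's algorithm, (4.9)-(4.10): "filter" the scale-(j+1)
    # scaling coefficients by h(-n) and h1(-n), then downsample by two:
    #   c_j(k) = sum_m h(m - 2k)  c_{j+1}(m),   d_j(k) = sum_m h1(m - 2k) c_{j+1}(m).
    # Keeping only k = 0 ... M/2 - 1 (simple truncation at the ends) is this
    # sketch's boundary choice.
    M = len(c_next)
    c = [sum(h[m - 2 * k] * c_next[m] for m in range(M) if 0 <= m - 2 * k < len(h))
         for k in range(M // 2)]
    d = [sum(h1[m - 2 * k] * c_next[m] for m in range(M) if 0 <= m - 2 * k < len(h1))
         for k in range(M // 2)]
    return np.array(c), np.array(d)

# Haar lowpass and highpass filters; the input stands in for c_{j+1}(k).
h = np.array([1.0, 1.0]) / np.sqrt(2.0)
h1 = np.array([1.0, -1.0]) / np.sqrt(2.0)
cj, dj = analysis_stage(np.array([4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 7.0, 9.0]), h, h1)
print(np.round(cj, 3))   # lower-scale scaling coefficients (lowpass branch)
print(np.round(dj, 3))   # wavelet coefficients (highpass branch)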
As we will see in Chapter: The Scaling Function and Scaling Coefficients, Wavelet and Wavelet Coefficients (Chapter 6), the FIR filter implemented by h (−n) is a lowpass filter, and the one implemented by h1 (−n) is a highpass filter. Note the average number of data points out of this system is the same as the number in. The number is doubled by having two filters; then it is halved by the decimation back to the original number. This means there is the possibility that no information has been lost and it will be possible to completely recover the original signal. As we shall see, that is indeed the case. The aliasing occurring in the upper bank can be "undone" or cancelled by using the signal from the lower bank. This is the idea behind perfect reconstruction in filter bank theory [526], [174].
This splitting, filtering, and decimation can be repeated on the scaling coefficients to give the two-scale structure in Figure 4.3. Repeating this on the scaling coefficients is called iterating the filter bank. Iterating the filter bank again gives us the three-scale structure in Figure 4.4.
The frequency response of a digital filter is the discrete-time Fourier transform of its impulse response (coefficients) h (n). That is given by

H (ω) = Σ_{n=−∞}^{∞} h (n) e^(iωn).    (4.12)

The magnitude of this complex-valued function gives the ratio of the output to the input of the filter for a sampled sinusoid at a frequency of ω in radians per second. The angle of H (ω) is the phase shift between
the output and input.
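Evaluating (4.12) for the length-4 coefficients quoted earlier, together with the corresponding h1 (n) from (3.26), shows the lowpass/highpass split described next; the three frequencies printed in this Python sketch are arbitrary sample points:

import numpy as np

def H(h, w):
    # Frequency response (4.12): H(w) = sum_n h(n) e^{i w n}; |H(w)| is the gain
    # seen by a sampled sinusoid at frequency w (radians per sample).
    n = np.arange(len(h))
    return np.sum(h * np.exp(1j * w * n))

h  = np.array([0.483, 0.8365, 0.2241, -0.1294])                        # scaling (lowpass) filter
h1 = np.array([(-1) ** n * h[len(h) - 1 - n] for n in range(len(h))])  # wavelet (highpass) filter

for w in (0.0, np.pi / 2, np.pi):
    print(round(w, 3), round(abs(H(h, w)), 3), round(abs(H(h1, w)), 3))
# |H| is near sqrt(2) at w = 0 and near 0 at w = pi; |H1| behaves the opposite way,
# which is why the two branches split the band into low and high halves.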
The first stage of two banks divides the spectrum of cj+1 (k) into a lowpass and highpass band, resulting in the scaling coefficients and wavelet coefficients at lower scale cj (k) and dj (k). The second stage then divides that lowpass band into another lower lowpass band and a bandpass band. The first stage divides the spectrum into two equal parts. The second stage divides the lower half into quarters and so on. This results in a logarithmic set of bandwidths as illustrated in Figure 4.5. These are called "constant-Q" filters in filter bank language because the ratio of the bandwidth to the center frequency of the band is constant. It is also interesting to note that a musical scale defines octaves in a similar way and that the ear responds
to frequencies in a similar logarithmic fashion.
For any practical signal that is bandlimited, there will be an upper scale j = J, above which the wavelet coefficients, dj (k), are negligibly small [206]. By starting with a high resolution description of a signal in terms of the scaling coefficients cJ, the analysis tree calculates the DWT down to as low a resolution, j = j0, as desired by having J − j0 stages. So, for f (t) ∈ VJ, using (3.8) we have

f (t) = Σk cJ (k) φJ,k (t)
     = Σk cJ−1 (k) φJ−1,k (t) + Σk dJ−1 (k) ψJ−1,k (t)
f (t) = Σk cJ−2 (k) φJ−2,k (t) + Σk Σ_{j=J−2}^{J−1} dj (k) ψj,k (t)
f (t) = Σk cj0 (k) φj0,k (t) + Σk Σ_{j=j0}^{J−1} dj (k) ψj,k (t)    (4.13)

which is a finite scale version of (3.33). We will discuss the choice of j0 and J further in Chapter: Calculation of the Discrete Wavelet Transform (Chapter 10).
f (t) = Σ_k c_j (k) Σ_n h (n) 2^{(j+1)/2} φ (2^{j+1} t − 2k − n)
      + Σ_k d_j (k) Σ_n h_1 (n) 2^{(j+1)/2} φ (2^{j+1} t − 2k − n).          (4.16)

Because all of these functions are orthonormal, multiplying (4.14) and (4.16) by φ (2^{j+1} t − k') and integrating gives the synthesis relation c_{j+1} (k) = Σ_m c_j (m) h (k − 2m) + Σ_m d_j (m) h_1 (k − 2m), the inverse transform implemented by up-sampling and filtering. Organized properly, much of the calculation can be done together with a significant savings in arithmetic. This is developed in Chapter: Calculation of the Discrete Wavelet Transform (Chapter 10) [526].
Still another approach to the calculation of discrete wavelet transforms, and to the calculation of the scaling functions and wavelets themselves, is called "lifting" [278], [273]. Although it is related to several other schemes [364], [366], [170], [290], this idea was first explained by Wim Sweldens as a time-domain construction based on interpolation [500]. Lifting does not use Fourier methods and can be applied to more general problems (e.g., nonuniform sampling) than the approach in this chapter. It was first applied to biorthogonal systems [502] and then extended to orthogonal systems [132]. The application of lifting to biorthogonal systems is introduced in Section: Biorthogonal Wavelet Systems (Section 8.4: Biorthogonal Wavelet Systems) later in this book. Implementations based on lifting also achieve the same improvement in arithmetic efficiency as the lattice structure does.
one: one scaling function coefficient and one wavelet coefficient. An example of the periodic DWT of a length-8 signal can be seen in Figure 4.8, where the transform is arranged as

c_j (k), d_j (k), d_{j+1} (k), d_{j+1} (k + 1), d_{j+2} (k), d_{j+2} (k + 1), d_{j+2} (k + 2), d_{j+2} (k + 3).

The details of this periodic approach are developed in Chapter: Calculation of the Discrete Wavelet Transform (Chapter 10), showing the aliasing that takes place in this system because of the cyclic convolution (4.19). This formulation is particularly clean because there are the same number of terms in the transform as in the signal. It can be represented by a square matrix with a simple inverse that has interesting structure. It can be efficiently calculated by an FFT, although that is not needed for most applications.
For most of the theoretical developments or for conceptual purposes, there is little difference between these two formulations. However, for actual calculations and in applications, you should make sure you know which one you want or which one your software package calculates. As in the Fourier case, you can use the periodic form to calculate the nonperiodic transform by padding the signal with zeros, but that wastes some of the efficiency that the periodic formulation was set up to provide.
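Using the one-stage sketch dwt_stage given earlier (which already assumes periodic indexing), a length-8 periodic DWT down to a single scaling coefficient can be assembled and stored in the order shown above. This is only an illustrative arrangement, with the Haar coefficients as a stand-in filter; it is not the Chapter 10 program.

h  = [1  1]/sqrt(2);   h1 = [1 -1]/sqrt(2);   % Haar filters as an example
x  = randn(1, 8);                             % a length-8 signal: the finest scaling coefficients
[c2, d2] = dwt_stage(x,  h, h1);              % 4 scaling + 4 wavelet coefficients
[c1, d1] = dwt_stage(c2, h, h1);              % 2 + 2
[c0, d0] = dwt_stage(c1, h, h1);              % 1 + 1
y = [c0, d0, d1, d2];                         % same layout as Figure 4.8: 8 numbers in, 8 out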
4.5.3 The Discrete Wavelet Transform versus the Discrete-Time Wavelet Transform
Two more points of view concern looking at the signal processing methods in this book as based on an expansion of a signal or on multirate digital filtering. One can look at Mallat's algorithm either as a way of calculating expansion coefficients at various scales or as a filter bank for processing discrete-time signals. The first is analogous to the use of the Fourier series (FS), where a continuous function is transformed into a discrete sequence of coefficients. The second is analogous to the discrete Fourier transform (DFT), where a discrete function is transformed into a discrete function. Indeed, the DFT (through the FFT) is often used to calculate the Fourier series coefficients, but care must be taken to avoid or minimize aliasing. The difference in these views comes partly from the background of the various researchers (i.e., whether they are "wavelet people" or "filter bank people"). However, there are subtle differences between using the series expansion of the signal (using the discrete wavelet transform (DWT)) and using a multirate digital filter bank on samples of the signal (using the discrete-time wavelet transform (DTWT)). Generally, using both views gives more insight into a problem than either achieves alone. The series expansion is the main approach of this book, but filter banks and the DTWT are also developed in Section: Discrete Multiresolution Analysis and the Discrete-Time Wavelet Transform (Section 8.8: Discrete Multiresolution Analysis, the Discrete-Time Wavelet Transform, and the Continuous Wavelet Transform) and Chapter: Filter Banks and Transmultiplexers.
The process is better described as an "organize and share" scheme. The efficiency (in fact, optimal efficiency) is based on organizing the calculations so that redundant operations can be shared. The cascaded filtering (convolution) and down-sampling of Mallat's algorithm do the same thing.
One should not make too much of this difference between the complexity of the FFT and the DTWT. It comes from the DTWT having a logarithmic division of frequency bands and the FFT having a uniform division. This logarithmic scale is appropriate for many signals, but if a uniform division is used for the wavelet system, such as is done for wavelet packets (see Section: Wavelet Packets (Section 8.3: Wavelet Packets)) or the redundant DWT (see Section: Overcomplete Representations, Frames, Redundant Transforms, and Adaptive Bases (Section 8.6: Overcomplete Representations, Frames, Redundant Transforms, and Adaptive Bases)), the complexity of the wavelet system becomes O (N log (N)).
If you are interested in more details of the discrete wavelet transform and the discrete-time wavelet transform, relations between them, methods of calculating them, further properties of them, or examples, see Section: Discrete Multiresolution Analysis, the Discrete-Time Wavelet Transform (Section 8.8: Discrete Multiresolution Analysis, the Discrete-Time Wavelet Transform, and the Continuous Wavelet Transform) and Chapter: Calculation of the Discrete Wavelet Transform.
Most people with technical backgrounds are familiar with the ideas of expansion vectors or basis vectors and of orthogonality; however, the related concepts of biorthogonality or of frames and tight frames are less familiar but also important. In the study of wavelet systems, we find that frames and tight frames are needed and should be understood, at least at a superficial level. One can find details in [65], [587], [119], [112], [248]. Another perhaps unfamiliar concept is that of an unconditional basis used by Donoho, Daubechies, and others [144], [373], [119] to explain why wavelets are good for signal compression, detection, and denoising [227], [225]. In this chapter, we will very briefly define and discuss these ideas. At this point, you may want to skip these sections and perhaps refer to them later when they are specifically needed.
with k ∈ Z and t, a ∈ R. An inner product is usually defined for this space and is denoted < f (t) , g (t) >. A norm is defined and is denoted by ‖f‖ = √< f, f >.
We say that the set f_k (t) is a basis set or a basis for a given space F if the set of {a_k} in (5.1) is unique for any particular g (t) ∈ F. The set is called an orthogonal basis if < f_k (t) , f_ℓ (t) > = 0 for all k ≠ ℓ. If we are in three-dimensional Euclidean space, orthogonal basis vectors are coordinate vectors that are at right (90°) angles to each other. We say the set is an orthonormal basis if < f_k (t) , f_ℓ (t) > = δ (k − ℓ), i.e., if, in addition to being orthogonal, the basis vectors are normalized to unity norm: ‖f_k (t)‖ = 1 for all k.
From these definitions it is clear that if we have an orthonormal basis, we can express any element in the vector space, g (t) ∈ F, written as (5.1), by

g (t) = Σ_k < g (t) , f_k (t) > f_k (t)          (5.2)
since by taking the inner product of f_k (t) with both sides of (5.1), we get

a_k = < g (t) , f_k (t) >          (5.3)

Although a biorthogonal system is more complicated in that it requires, not only the original expansion set, but the finding, calculating, and storage of a dual set of vectors, it is very general and allows a larger class of expansions. There may, however, be greater numerical problems with a biorthogonal system if some of the basis vectors are strongly correlated.
The calculation of the expansion coefficients using an inner product in (5.3) is called the analysis part of the complete process, and the calculation of the signal from the coefficients and expansion vectors in (5.1) is called the synthesis part.
In finite dimensions, analysis and synthesis operations are simply matrix-vector multiplications. If the expansion vectors in (5.1) are a basis, the synthesis matrix has these basis vectors as columns and the matrix is square and nonsingular. If the matrix is orthogonal, its rows and columns are orthogonal, its inverse is its transpose, and the identity operator is simply the matrix multiplied by its transpose. If it is not orthogonal, then the identity is the matrix multiplied by its inverse and the dual basis consists of the rows of the inverse. If the matrix is singular, then its columns are not independent and, therefore, do not form a basis.
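As a small numerical illustration of these matrix-vector operations, the Matlab sketch below (our own example, not one from the book) builds a non-orthogonal 3 × 3 synthesis matrix, takes the dual basis as the rows of its inverse, and verifies that analysis followed by synthesis is the identity.

F = [1 1 0; 0 1 1; 1 0 1];    % columns are three (non-orthogonal) basis vectors
g = [2; -1; 3];               % an arbitrary signal vector
Ftilde = inv(F);              % rows of inv(F) are the dual basis vectors
a = Ftilde * g;               % analysis: expansion coefficients
ghat = F * a;                 % synthesis: reproduces g exactly
norm(g - ghat)                % essentially zero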
g = F a,          (5.8)

with the left-hand column vector g being the signal vector, the matrix F formed with the basis vectors f_k as columns, and the right-hand vector a containing the four expansion coefficients a_k.
The equation for calculating the k-th expansion coefficient in (5.6) is given in matrix form by (5.10), where each a_k is an inner product of the k-th row of F̃^T with g, so the analysis or coefficient equation (5.3) or (5.10) becomes

a = F̃^T g          (5.11)

which together are (5.2), or

g = F F̃^T g.          (5.12)

Therefore,

F F^T = I.          (5.14)

This means the basis and dual basis are the same, and (5.12) and (5.13) become

g = F F^T g          (5.15)

and

F̃^T = F^T          (5.16)

which are both simpler and more numerically stable than (5.13).
The discrete Fourier transform (DFT) is an interesting example of a finite dimensional Fourier transform with orthogonal basis vectors, where matrix and vector techniques can be informative as to the DFT's characteristics and properties. That can be found developed in several signal processing books.

f (t) = Σ_k f (T k) [ sin ((π/T) t − π k) / ((π/T) t − π k) ]          (5.21)
A ‖g‖² ≤ Σ_k | < φ_k , g > |² ≤ B ‖g‖²          (5.22)

for some 0 < A and B < ∞ and for all signals g (t) in the space. Dividing (5.22) by ‖g‖² shows that A and B are bounds on the normalized energy of the inner products. They "frame" the normalized coefficient energy. If
A = B (5.23)
then the expansion set is called a tight frame. This case gives
A ‖g‖² = Σ_k | < φ_k , g > |²          (5.24)
which is a generalized Parseval's theorem for tight frames. If A = B = 1, the tight frame becomes an orthogonal basis. From this, it can be shown that for a tight frame [119]

g (t) = A^{−1} Σ_k < φ_k (t) , g (t) > φ_k (t)          (5.25)

which is the same as the expansion using an orthonormal basis except for the A^{−1} term, which is a measure of the redundancy in the expansion set.
If an expansion set is a non-tight frame, there is no strict Parseval's theorem and the energy in the transform domain cannot be exactly partitioned. However, the closer A and B are, the better an approximate partitioning can be done. If A = B, we have a tight frame and the partitioning can be done exactly with (5.24). Daubechies [119] shows that the tighter the frame bounds in (5.22) are, the better conditioned the analysis and synthesis system is. In other words, if A is near zero and/or B is very large compared to A, there will be numerical problems in the analysis-synthesis calculations.
Frames are an over-complete version of a basis set, and tight frames are an over-complete version of an orthogonal basis set. If one is using a frame that is neither a basis nor a tight frame, a dual frame set can be specified so that analysis and synthesis can be done as for a non-orthogonal basis. If a tight frame is being used, the mathematics is very similar to using an orthogonal basis. The Fourier-type system in (5.25) is essentially the same as (5.2), and (5.24) is essentially a Parseval's theorem.
The use of frames and tight frames rather than bases and orthogonal bases means a certain amount of redundancy exists. In some cases, redundancy is desirable in giving a robustness to the representation so that errors or faults are less destructive. In other cases, redundancy is an inefficiency and, therefore, undesirable. The concept of a frame originates with Duffin and Schaeffer [167] and is discussed in [587], [112], [119]. In finite dimensions, vectors can always be removed from a frame to get a basis, but in infinite dimensions, that is not always possible.
An example of a frame in finite dimensions is a matrix with more columns than rows but with independent rows. An example of a tight frame is a similar matrix with orthogonal rows. An example of a tight frame in infinite dimensions would be an over-sampled Shannon expansion. It is informative to examine this example.
which corresponds to the basis shown in the square matrix in (5.7). The corresponding analysis equation is

[ a_0 ]   [ f̃_0 (0)  f̃_0 (1)  f̃_0 (2) ]
[ a_1 ] = [ f̃_1 (0)  f̃_1 (1)  f̃_1 (2) ]  [ g (0) ]
[ a_2 ]   [ f̃_2 (0)  f̃_2 (1)  f̃_2 (2) ]  [ g (1) ]          (5.27)
[ a_3 ]   [ f̃_3 (0)  f̃_3 (1)  f̃_3 (2) ]  [ g (2) ]

which corresponds to (5.10). One can calculate a set of dual frame vectors by temporarily appending an arbitrary independent row to (5.26), making the matrix square, then using the first three columns of the inverse as the dual frame vectors. This clearly illustrates that the dual frame is not unique. Daubechies [119] shows how to calculate an "economical" unique dual frame.
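The "append an arbitrary row, invert, and keep the first columns" recipe just described is easy to try numerically. The Matlab sketch below uses a 3 × 4 frame matrix of our own choosing; only the procedure, not the particular numbers, comes from the text.

F = [1 0 1 2; 0 1 1 -1; 1 1 0 1];   % 3 x 4 synthesis matrix, columns are the frame vectors
r = [1 0 0 0];                      % an arbitrary row that keeps the square matrix nonsingular
M = [F; r];                         % 4 x 4 augmented matrix
Ftilde_T = M \ eye(4,3);            % first three columns of inv(M): one valid dual frame
g = [3; -2; 5];
a = Ftilde_T * g;                   % analysis with the dual frame
ghat = F * a;                       % synthesis reproduces g, since F * Ftilde_T = eye(3)
norm(g - ghat)                      % essentially zero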
The tight frame system occurs in wavelet infinite expansions as well as other finite and infinite dimensional systems. A numerical example of a frame which is a normalized tight frame with four vectors in three dimensions is

[ g (0) ]                  [ 1    1   −1   −1 ]  [ a_0 ]
[ g (1) ] = (1/A) (1/√3)   [ 1   −1    1   −1 ]  [ a_1 ]          (5.28)
[ g (2) ]                  [ 1    1    1    1 ]  [ a_2 ]
                                                 [ a_3 ]

which includes the redundancy factor A from (5.25). Note the rows are orthogonal and the columns are normalized, which gives

               [ 1    1   −1   −1 ]           [  1    1    1 ]   [ 4/3   0    0  ]
F F^T = (1/√3) [ 1   −1    1   −1 ]  · (1/√3) [  1   −1    1 ] = [  0   4/3   0  ] = (4/3) I          (5.29)
               [ 1    1    1    1 ]           [ −1    1    1 ]   [  0    0   4/3 ]
                                              [ −1   −1    1 ]

or

g = (1/A) F F^T g          (5.30)

which is the matrix form of (5.25). The factor of A = 4/3 is the measure of redundancy in this tight frame using four expansion vectors in a three-dimensional space.
The identity for the expansion coefficients is

a = (1/A) F^T F a          (5.31)

which for the numerical example gives

                [  1    1   1 ]           [ 1    1   −1   −1 ]   [   1    1/3   1/3  −1/3 ]
F^T F = (1/√3)  [  1   −1   1 ]  · (1/√3) [ 1   −1    1   −1 ] = [  1/3    1   −1/3   1/3 ]          (5.32)
                [ −1    1   1 ]           [ 1    1    1    1 ]   [  1/3  −1/3    1    1/3 ]
                [ −1   −1   1 ]                                  [ −1/3   1/3   1/3    1  ]

Although this is not a general identity operator, it is an identity operator over the three-dimensional subspace that a is in, and it illustrates the unity norm of the rows of F^T and columns of F.
If the redundancy measure A in (5.25) and (5.29) is one, the matrices must be square and the system has an orthonormal basis.
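These identities are easy to check numerically; the following short Matlab sketch (our own) verifies the redundancy factor and the projection property for the example above.

B = [1 1 -1 -1; 1 -1 1 -1; 1 1 1 1];
F = B/sqrt(3);                 % normalized tight frame: four vectors in R^3
A = 4/3;                       % redundancy factor
F*F'                           % equals (4/3)*eye(3), as in (5.29)
g = [1; 2; 3];
norm(g - (1/A)*F*(F'*g))       % zero: analysis then scaled synthesis reproduces g, as in (5.30)
P = (1/A)*(F'*F);              % identity on the 3-D subspace containing a, as in (5.31)
norm(P*P - P)                  % zero: P is a projection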
Frames are over-complete versions of non-orthogonal bases, and tight frames are over-complete versions of orthonormal bases. Tight frames are important in wavelet analysis because the restrictions on the scaling function coefficients discussed in Chapter: The Scaling Function and Scaling Coefficients, Wavelet and Wavelet Coefficients guarantee not that the wavelets will be a basis, but only that they will be a tight frame. In practice, however, they are usually a basis.
g (t) = (T W/π) Σ_n g (T n) [ sin ((t − T n) W) / ((t − T n) W) ]          (5.33)

or, using R as the amount of over-sampling,

R W = π/T,  for R ≥ 1          (5.34)

we have

g (t) = (1/R) Σ_n g (T n) [ sin ((π/(R T)) (t − T n)) / ((π/(R T)) (t − T n)) ]          (5.35)

where the sinc functions are no longer orthogonal. In fact, they are no longer a basis as they are not independent. They are, however, a tight frame and, therefore, act as though they were an orthogonal basis, but now there is a "redundancy" factor R as a multiplier in the formula.
Notice that as R is increased from unity, (5.35) starts as (5.21), where each sample occurs where the sinc function is one or zero, but becomes an expansion with the shifts still at t = T n; the sinc functions, however, become wider so that the samples are no longer at the zeros. If the signal is over-sampled, either expression (5.21) or (5.35) could be used. Both are over-sampled, but (5.21) allows the spectrum of the signal to increase up to the limit without distortion while (5.35) does not. The generalized sampling theorem (5.35) has a built-in filtering action, which may or may not be an advantage.
The application of frames and tight frames to what is called a redundant discrete wavelet transform (RDWT) is discussed later in Section: Overcomplete Representations, Frames, Redundant Transforms, and Adaptive Bases (Section 8.6: Overcomplete Representations, Frames, Redundant Transforms, and Adaptive Bases), and their use in Section: Nonlinear Filtering or Denoising with the DWT (Section 11.3: Nonlinear Filtering or Denoising with the DWT). They are also needed for certain adaptive descriptions discussed at the end of Section 8.6, where an independent subset of the expansion vectors in the frame is chosen according to some criterion to give an optimal basis.
If for all g ∈ F, the infinite sum converges for all |m_k| ≤ 1, the basis is called an unconditional basis. This is very similar to unconditional or absolute convergence of a numerical series [144], [587], [373]. If the convergence depends on m_k = 1 for some g (t), the basis is called a conditional basis.
An unconditional basis means all subsequences converge and all sequences of subsequences converge. It means convergence does not depend on the order of the terms in the summation or on the sign of the coefficients. This implies a very robust basis where the coefficients drop off rapidly for all members of the function class. That is indeed the case for wavelets, which are unconditional bases for a very wide set of function classes [119], [381], [219].
Unconditional bases have a special property that makes them near-optimal for signal processing in several situations. This property has to do with the geometry of the space of expansion coefficients of a class of functions in an unconditional basis. This is described in [144].
The fundamental idea of bases or frames is representing a continuous function by a sequence of expansion coefficients. We have seen that Parseval's theorem relates the L² norm of the function to the ℓ² norm of the coefficients for orthogonal bases and tight frames (5.24). Different function spaces are characterized by different norms on the continuous function. If we have an unconditional basis for the function space, the norm of the function in the space not only can be related to some norm of the coefficients in the basis expansion, but the absolute values of the coefficients alone carry sufficient information to establish the relation. So there is no condition on the sign or phase of the expansion coefficients if we only care about the norm of the function; hence the term unconditional.
For this tutorial discussion, it is sufficient to know that there are theoretical reasons why wavelets are an excellent expansion system for a wide set of signal processing problems. Being an unconditional basis also sets the stage for efficient and effective nonlinear processing of the wavelet transform of a signal for compression, denoising, and detection, which are discussed in Chapter: The Scaling Function and Scaling Coefficients, Wavelet and Wavelet Coefficients.
We will now look more closely at the basic scaling function and wavelet to see when they exist and what their properties are [135], [340], [315], [319], [318], [324], [120]. Using the same approach that is used in the theory of differential equations, we will examine the properties of φ (t) by considering the equation of which it is a solution. The basic recursion (3.13) that comes from the multiresolution formulation is

φ (t) = Σ_n h (n) √2 φ (2t − n)          (6.1)

with h (n) being the scaling coefficients and φ (t) being the scaling function which satisfies this equation and which is sometimes called the refinement equation, the dilation equation, or the multiresolution analysis (MRA) equation.
In order to state the properties accurately, some care has to be taken in specifying just what classes of functions are being considered or are allowed. We will attempt to walk a fine line to present enough detail to be correct but not so much as to obscure the main ideas and results. A few of these ideas were presented in Section: Signal Spaces (Section 3.1: Signal Spaces) and a few more will be given in the next section. A more complete discussion can be found in [533], in the introductions to [550], [571], [5], or in any book on functional analysis.
f ∈ L¹ ⇒ ∫ |f (t)| dt = K < ∞. This class is important because one may interchange infinite summations and integrations with these functions, although not necessarily with L² functions. These classes of function spaces can be generalized to those with ∫ |f (t)|^p dt = K < ∞ and designated L^p.
A more general class of signals than any L^p space contains what are called distributions. These are generalized functions which are not defined by their having "values" but by the value of an "inner product" with a normal function. An example of a distribution would be the Dirac delta function δ (t), which is defined by the property f (T) = ∫ f (t) δ (t − T) dt.
Another detail to keep in mind is that the integrals used in these definitions are Lebesgue integrals, which are somewhat more general than the basic Riemann integral. The value of a Lebesgue integral is not affected by values of the function over any countable set of values of its argument (or, more generally, a set of measure zero). A function defined as one on the rationals and zero on the irrationals would have a zero Lebesgue integral. As a result of this, properties derived using measure theory and Lebesgue integrals are sometimes said to be true "almost everywhere," meaning they may not be true over a set of measure zero.
Many of these ideas of function spaces, distributions, Lebesgue measure, etc. came out of the early study of Fourier series and transforms. It is interesting that they are also important in the theory of wavelets. As with Fourier theory, one can often ignore the signal space classes and can use distributions as if they were functions, but there are some cases where these ideas are crucial. For an introductory reading of this book or of the literature, one can usually skip over the signal space designation or assume Riemann integrals. However, when a contradiction or paradox seems to arise, its resolution will probably require these details.
rows removed. Two particular submatrices that are used later in Section 6.10 (Calculating the Basic Scaling Function and Wavelet) to evaluate φ (t) on the dyadic rationals are illustrated for N = 6 by

     [ h0   0    0    0    0    0  ] [ φ0 ]   [ φ0 ]
     [ h2   h1   h0   0    0    0  ] [ φ1 ]   [ φ1 ]
 √2  [ h4   h3   h2   h1   h0   0  ] [ φ2 ] = [ φ2 ]          (6.6)
     [ 0    h5   h4   h3   h2   h1 ] [ φ3 ]   [ φ3 ]
     [ 0    0    0    h5   h4   h3 ] [ φ4 ]   [ φ4 ]
     [ 0    0    0    0    0    h5 ] [ φ5 ]   [ φ5 ]

which we write in matrix form as

M0 φ = φ          (6.7)

with M0 being the 6 × 6 matrix of the h (n) and φ being the 6 × 1 vector of integer samples of φ (t). In other words, the vector φ with entries φ (k) is the eigenvector of M0 for an eigenvalue of unity.
The second submatrix is a shifted version illustrated by

     [ h1   h0   0    0    0    0  ] [ φ0 ]   [ φ1/2  ]
     [ h3   h2   h1   h0   0    0  ] [ φ1 ]   [ φ3/2  ]
 √2  [ h5   h4   h3   h2   h1   h0 ] [ φ2 ] = [ φ5/2  ]          (6.8)
     [ 0    0    h5   h4   h3   h2 ] [ φ3 ]   [ φ7/2  ]
     [ 0    0    0    0    h5   h4 ] [ φ4 ]   [ φ9/2  ]
     [ 0    0    0    0    0    0  ] [ φ5 ]   [ φ11/2 ]

with the matrix being denoted M1. The general refinement matrix M is the infinite matrix of which M0 and M1 are partitions. If the matrix H is the convolution matrix for h (n), we can denote the M matrix by [↓ 2] H to indicate the down-sampled convolution matrix H. Clearly, for φ (t) to be defined on the dyadic rationals, M0 must have a unity eigenvalue.
A third, less obvious but perhaps more important, matrix is called the transition matrix T, and it is built up from the autocorrelation matrix of h (n). The transition matrix is constructed by

T = [↓ 2] H H^T.          (6.9)

This matrix (sometimes called the Lawton matrix) was used by Lawton (who originally called it the Wavelet-Galerkin matrix) [318] to derive necessary and sufficient conditions for an orthogonal wavelet basis. As we will see later in this chapter, its eigenvalues are also important in determining the properties of φ (t) and the associated wavelet system.
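A small Matlab sketch (our own; the index conventions follow (6.6) and (6.8), with M0 (k, m) = √2 h (2k − m) and M1 (k, m) = √2 h (2k + 1 − m)) builds M0 and M1 for a length-N h (n) and checks the unity eigenvalue of M0. The length-4 Daubechies coefficients from Table 7.2 are used for brevity; T can be built analogously from the autocorrelation of h (n).

h = [0.48296291314453 0.83651630373781 0.22414386804201 -0.12940952255126]; % Daubechies N = 4
N = length(h);
M0 = zeros(N);  M1 = zeros(N);
for k = 0:N-1
    for m = 0:N-1
        if 2*k - m >= 0 && 2*k - m <= N-1
            M0(k+1, m+1) = sqrt(2) * h(2*k - m + 1);       % M0(k,m) = sqrt(2) h(2k - m)
        end
        if 2*k + 1 - m >= 0 && 2*k + 1 - m <= N-1
            M1(k+1, m+1) = sqrt(2) * h(2*k + 1 - m + 1);   % M1(k,m) = sqrt(2) h(2k + 1 - m)
        end
    end
end
eig(M0)   % contains an eigenvalue of one when sum h(2n) = sum h(2n+1) = 1/sqrt(2)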
Σ_n h (n) = √2.          (6.10)

The proof of this theorem requires only an interchange in the order of a summation and integration (allowed in L¹) but no assumption of orthogonality of the basis functions or any other properties of φ (t) other than a nonzero integral. The proof of this theorem and several of the others stated here are contained in Appendix A.
This theorem shows that, unlike linear constant-coefficient differential equations, not just any set of coefficients will support a solution. The coefficients must satisfy the linear equation (6.10). This is the weakest condition on the h (n).
Theorem 2 If φ (t) is an L¹ solution to the basic recursion equation (6.1) with ∫ φ (t) dt = 1, and

Σ_ℓ φ (t − ℓ) = Σ_ℓ φ (ℓ) = 1          (6.11)

then

Σ_n h (2n) = Σ_n h (2n + 1) = 1/√2          (6.12)

where (6.11) may have to be a distributional sum. Conversely, if (6.12) is satisfied, then (6.11) is true.
Equation (6.12) is called the fundamental condition, and it is weaker than requiring orthogonality but stronger than (6.10). It is simply a result of requiring that the equations resulting from evaluating (6.1) on the integers be consistent. Equation (6.11) is called a partitioning of unity (or the Strang condition or the Schoenberg condition).
A similar theorem by Cavaretta, Dahmen and Micchelli [56] and by Jia [280] states that if φ ∈ L^p and the integer translates of φ (t) form a Riesz basis for the space they span, then Σ_n h (2n) = Σ_n h (2n + 1).
Theorem 3 If φ (t) is an L² ∩ L¹ solution to (6.1) and if integer translates of φ (t) are orthogonal as defined by

∫ φ (t) φ (t − k) dt = E δ (k) = { E if k = 0;  0 otherwise, }          (6.13)

then

Σ_n h (n) h (n − 2k) = δ (k) = { 1 if k = 0;  0 otherwise. }          (6.14)

Notice that this does not depend on a particular normalization of φ (t).
If φ (t) is normalized by dividing by the square root of its energy E, then integer translates of φ (t) are orthonormal as defined by

∫ φ (t) φ (t − k) dt = δ (k) = { 1 if k = 0;  0 otherwise. }          (6.15)

This theorem shows that in order for the solutions of (6.1) to be orthogonal under integer translation, it is necessary that the coefficients of the recursive equation be orthogonal themselves after decimating or downsampling by two. If φ (t) and/or h (n) are complex functions, complex conjugation must be used in (6.13), (6.14), and (6.15).
Coefficients h (n) that satisfy (6.14) are called a quadrature mirror filter (QMF) or conjugate mirror filter (CMF), and the condition (6.14) is called the quadratic condition for obvious reasons.
Corollary 1 Under the assumptions of the preceding theorem, the norm of h (n) is automatically unity:

Σ_n |h (n)|² = 1          (6.16)

Not only must the sum of h (n) equal √2, but for orthogonality of the solution, the sum of the squares of h (n) must be one, both independent of any normalization of φ (t). This unity normalization of h (n) is the result of the √2 term in (6.1).
This follows directly from (6.3) and states that the basic existence requirement (6.10) is equivalent to requiring that the FIR filter's frequency response at DC (ω = 0) be √2.
Theorem 6 For h (n) ∈ ℓ¹,

Σ_n h (2n) = Σ_n h (2n + 1)  if and only if  H (π) = 0          (6.20)

which says the frequency response of the FIR filter with impulse response h (n) is zero at the so-called Nyquist frequency (ω = π). This follows from (6.4) and (8.7), and supports the fact that h (n) is a lowpass digital filter. This is also equivalent to the M and T matrices having a unity eigenvalue.
Theorem 7 If φ (t) is a solution to (6.1) in L² ∩ L¹ and Φ (ω) is a solution of (6.4) such that Φ (0) ≠ 0, then

∫ φ (t) φ (t − k) dt = δ (k)  if and only if  Σ_ℓ |Φ (ω + 2πℓ)|² = 1          (6.21)

This is a frequency domain equivalent to the time domain definition of orthogonality of the scaling function [340], [345], [120]. It allows applying the orthonormal conditions to frequency domain arguments. It also gives insight into just what time domain orthogonality requires in the frequency domain.
This theorem [315], [245], [120] gives equivalent time and frequency domain conditions on the scaling coefficients and states that the orthogonality requirement (6.14) is equivalent to the FIR filter with h (n) as coefficients being what is called a Quadrature Mirror Filter (QMF) [474]. Note that (6.22), (6.19), and (6.20) require |H (π/2)| = 1 and that the filter is a "half-band" filter.
holds.
This condition, called the fundamental condition [493], [323], gives a slightly tighter result than Theorem p. 60. While the scaling function still may be a distribution not in L¹ or L², it is better behaved than required by Theorem p. 60 in being defined on the dense set of dyadic rationals. This theorem is equivalent to requiring H (π) = 0, which from the product formula (6.5) gives a better behaved Φ (ω). It also guarantees a unity eigenvalue for M and T but not that other eigenvalues do not exist with magnitudes larger than one.
The next several theorems use the transition matrix T defined in (6.9), which is a down-sampled autocorrelation matrix.
Theorem 11 If the transition matrix T has all of its eigenvalues on or inside the unit circle of the complex plane, and if any eigenvalues on the unit circle that are multiple have a complete set of eigenvectors, then φ (t) ∈ L².
If T has unity magnitude eigenvalues, the successive approximation algorithm (cascade algorithm) (6.71) converges weakly to φ (t) ∈ L² [314].
Theorem 12 If the transition matrix T has a simple unity eigenvalue with all other eigenvalues having magnitude less than one, then φ (t) ∈ L².
Here the successive approximation algorithm (cascade algorithm) converges strongly to φ (t) ∈ L². This is developed in [493].
If in addition to requiring (6.10), we require the quadratic coefficient conditions (6.14), a tighter result occurs which gives φ (t) ∈ L² (R) and a multiresolution tight frame system.
Theorem 13 (Lawton) If h (n) has finite support or decays fast enough and if Σ_n h (n) = √2 and if Σ_n h (n) h (n − 2k) = δ (k), then φ (t) ∈ L² (R) exists and generates a wavelet system that is a tight frame in L².
This important result from Lawton [315], [319] gives the sufficient conditions for φ (t) to exist and generate wavelet tight frames. The proof uses an iteration of the basic recursion equation (6.1) as a successive approximation similar to Picard's method for differential equations. Indeed, this method is used to calculate φ (t) in Section 6.10 (Calculating the Basic Scaling Function and Wavelet). It is interesting to note that the scaling function may be very rough, even "fractal" in nature. This may be desirable if the signal being analyzed is also rough.
Although this theorem guarantees that φ (t) generates a tight frame, in most practical situations, the resulting system is an orthonormal basis [319]. The conditions in the following theorems are generally satisfied.
Theorem 14 (Lawton) If h (n) has compact support, Σ_n h (n) = √2, and Σ_n h (n) h (n − 2k) = δ (k), then φ (t − k) forms an orthogonal set if and only if the transition matrix T has a simple unity eigenvalue.
This powerful result allows a simple evaluation of h (n) to see if it can support a wavelet expansion system [315], [319], [318]. An equivalent result using the frequency response of the FIR digital filter formed from h (n) was given by Cohen.
Theorem 15 (Cohen) If H (ω) is the DTFT of h (n) with compact support and Σ_n h (n) = √2 with Σ_n h (n) h (n − 2k) = δ (k), and if H (ω) ≠ 0 for −π/3 ≤ ω ≤ π/3, then the φ (t − k) satisfying (6.1) generate an orthonormal basis in L².
A slightly weaker version of this frequency domain sufficient condition is easier to prove [340], [345] and to extend to the M-band case for the case of no zeros allowed in −π/2 ≤ ω ≤ π/2 [120]. There are other sufficient conditions that, together with those in Theorem p. 60, will guarantee an orthonormal basis. Daubechies' vanishing moments will guarantee an orthogonal basis.
The preceding theorems show that h (n) has the characteristics of a lowpass FIR digital filter. We will later see that the FIR filter made up of the wavelet coefficients is a highpass filter, and the filter bank view developed in Chapter: Filter Banks and the Discrete Wavelet Transform (Chapter 4) and Section: Multiplicity-M (M-Band) Scaling Functions and Wavelets (Section 8.2: Multiplicity-M (M-Band) Scaling Functions and Wavelets) further explains this view.
Theorem 16 If h (n) has finite support and if φ (t) ∈ L¹, then φ (t) has finite support [314].
If φ (t) is not restricted to L¹, it may have infinite support even if h (n) has finite support.
These theorems give a good picture of the relationship between the recursive equation coefficients h (n) and the scaling function φ (t) as a solution of (6.1). More properties and characteristics are presented in Section 6.8 (Further Properties of the Scaling Function and Wavelet).
then

Σ_n h (n) h_1 (n − 2k) = 0          (6.27)

|H (ω)|² + |H_1 (ω)|² = 2,          (6.31)

and

∫ ψ (t) dt = 0.          (6.32)
Their satisfying the multiresolution equation (3.13) is illustrated in Figure: Haar and Triangle Scaling Functions. Haar showed that translates and scalings of these functions form an orthonormal basis for L² (R). We can easily see that the Haar functions are also a compact support orthonormal wavelet system that satisfies Daubechies' conditions [120]. Although they are as regular as can be achieved for N = 2, they are not even continuous. The orthogonality and nesting of spanned subspaces are easily seen because the translates have no overlap in the time domain. It is instructive to apply the various properties of Section 6.5 (The Wavelet) and Section 6.8 (Further Properties of the Scaling Function and Wavelet) to these functions and see how they are satisfied. They are illustrated in the example in Figure: Haar Scaling Functions and Wavelets that Span V_j through Figure: Haar Function Approximation in V_j (Figure 3.15).
sinc (t) = sin (t) / t          (6.38)

where sinc (0) = 1. This is a very versatile and useful function because its Fourier transform is a simple rectangle function and the Fourier transform of a rectangle function is a sinc function. In order to be a scaling function, the sinc must satisfy (3.13) as

sinc (K t) = Σ_n h (n) sinc (2K t − K n)          (6.39)

for the appropriate scaling coefficients h (n) and some K. If we construct the scaling function from the generalized sampling function as presented in (5.35), the sinc function becomes

sinc (K t) = Σ_n sinc (K T n) sinc ( (π/(R T)) t − (π/R) n ).          (6.40)

In order for these two equations to be true, the sampling period must be T = 1/2 and the parameter

K = π/R          (6.41)

which gives the scaling coefficients as

h (n) = sinc ( (π/(2R)) n ).          (6.42)

We see that φ (t) = sinc (K t) is a scaling function with infinite support and its corresponding scaling coefficients are samples of a sinc function. If R = 1, then K = π and the scaling function generates an orthogonal wavelet system. For R > 1, the wavelet system is a tight frame, the expansion set is not orthogonal or a basis, and R is the amount of redundancy in the system as discussed in this chapter. For the orthogonal sinc scaling function, the wavelet is simply expressed by
the spectra. Indeed, the Haar and sinc systems are Fourier duals of each other. The sinc-generating scaling function and wavelet are shown in Figure 6.1.
is homogeneous, so its solution is unique only within a normalization factor. In most cases, both the scaling function and wavelet are normalized to unit energy or unit norm. In the properties discussed here, we normalize the energy as E = ∫ |φ (t)|² dt = 1. Other normalizations can easily be used if desired.
Property 2 Not only can the scaling function be written as a weighted sum of functions in the next higher scale space as stated in the basic recursion equation (6.44), but it can also be expressed in higher resolution spaces:

φ (t) = Σ_n h^{(j)} (n) 2^{j/2} φ (2^j t − n)          (6.46)

where h^{(1)} (n) = h (n) and for j ≥ 1

h^{(j+1)} (n) = Σ_k h^{(j)} (k) h (n − 2k).          (6.47)
The norm of the wavelet is usually normalized to one such that ∫ |ψ (t)|² dt = 1.
Property 10 Not only are integer translates of the wavelet orthogonal; different scales are also orthogonal:

∫ 2^{j/2} ψ (2^j t − k) 2^{i/2} ψ (2^i t − ℓ) dt = δ (k − ℓ) δ (j − i)          (6.55)

Property 13 The scaling coefficients can be calculated from the orthogonal or tight frame scaling functions by

h (n) = √2 ∫ φ (t) φ (2t − n) dt.          (6.58)

Property 14 The wavelet coefficients can be calculated from the orthogonal or tight frame scaling functions by

h_1 (n) = √2 ∫ ψ (t) φ (2t − n) dt.          (6.59)

Derivations of some of these properties can be found in Appendix B (Chapter 14). The properties in equations (6.1), (6.10), (6.14), (6.53), (6.51), (6.52), and (6.57) are independent of any normalization of φ (t). Normalization affects the others. Those in equations (6.1), (6.10), (6.48), (6.49), (6.51), (6.52), and (6.57) do not require orthogonality of integer translates of φ (t). Those in (6.14), (6.16), (6.17), (6.22), (6.20), (6.53), and (6.58) require orthogonality. No properties require compact support. Many of the derivations interchange the order of summations or of summation and integration; conditions for those interchanges must be met.
set of simultaneous equations. From these values, it is possible to then exactly calculate values at the half integers, then at the quarter integers, and so on, giving values of φ (t) on what are called the dyadic rationals.
for the k-th iteration, where an initial φ^{(0)} (t) must be given. Because this can be viewed as applying the same operation over and over to the output of the previous application, it is sometimes called the cascade algorithm.
Using definitions (6.2) and (6.3), the frequency domain form becomes

Φ^{(k+1)} (ω) = (1/√2) H (ω/2) Φ^{(k)} (ω/2)          (6.72)

and the limit can be written as an infinite product in the form

Φ^{(∞)} (ω) = [ Π_{k=1}^{∞} (1/√2) H (ω/2^k) ] Φ^{(∞)} (0).          (6.73)

The limit does not depend on the shape of the initial φ^{(0)} (t), but only on Φ^{(k)} (0) = ∫ φ^{(k)} (t) dt = A_0, which is invariant over the iterations. This only makes sense if the limit of Φ (ω) is well-defined, as when it is continuous at ω = 0.
The Matlab program in Appendix C (Chapter 15) implements the algorithm in (6.71), which converges reliably to φ (t), even when it is very discontinuous. From this scaling function, the wavelet can be generated from (3.24). It is interesting to try this algorithm, plotting the function at each iteration, on both admissible h (n) that satisfy (6.10) and (6.14) and on inadmissible h (n). The calculation of a scaling function for N = 4 is shown at each iteration in Figure 6.3.
Because of the iterative form of this algorithm, applying the same process over and over, it is sometimes called the cascade algorithm [493], [488].
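A minimal time-domain sketch of this successive approximation is given below; it is our own illustration (keeping samples of φ^{(k)} on a dyadic grid and starting from a unit impulse), not the Appendix C program, and it assumes a finitely supported h (n).

% Cascade (successive approximation) algorithm:
% phi^{(k+1)}(t) = sum_n h(n) sqrt(2) phi^{(k)}(2t - n), sampled at spacing 2^{-(k+1)}
h = [0.48296291314453 0.83651630373781 0.22414386804201 -0.12940952255126]; % Daubechies N = 4
K = 8;                                  % number of iterations
p = 1;                                  % samples of phi^{(0)} on the integer grid
for k = 0:K-1
    hu = zeros(1, (length(h)-1)*2^k + 1);
    hu(1:2^k:end) = h;                  % h(n) up-sampled by 2^k
    p  = sqrt(2) * conv(hu, p);         % samples of phi^{(k+1)} at spacing 2^{-(k+1)}
end
t = (0:length(p)-1) / 2^K;              % dyadic grid covering the support of phi
plot(t, p);                             % approximate phi(t) after K iterations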
at ω = 8 (2k + 1) π, etc. Because (6.74) is a product of stretched versions of H (ω), these zeros of H (ω/2^j) are the zeros of the Fourier transform of φ (t). Recall from Theorem 15 (p. 61) that H (ω) has no zeros in −π/3 < ω < π/3. All of this gives a picture of the shape of Φ (ω) and the location of its zeros. From an asymptotic analysis of Φ (ω) as ω → ∞, one can study the smoothness of φ (t).
A Matlab program that calculates Φ (ω) using this frequency domain successive approximation approach suggested by (6.74) is given in Appendix C (Chapter 15). Studying this program gives further insight into the structure of Φ (ω). Rather than starting the calculations given in (6.74) at the index j = 1, they are started at the largest j = J and worked backwards. If we calculate a length-N DFT consistent with j = J using the FFT, then the samples of H (ω/2^j) for j = J − 1 are simply every other sample of the case for j = J. The next stage for j = J − 2 is done likewise, and if the original N is chosen a power of two, the process is continued down to j = 1 without calculating any more FFTs. This results in a very efficient algorithm. The details are in the program itself.
This algorithm is so efficient that using it plus an inverse FFT might be a good way to calculate φ (t) itself. Examples of the algorithm are illustrated in Figure 6.4, where the transform is plotted for each step of the iteration.
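The truncated product Φ (ω) ≈ Π_{j=1}^{J} (1/√2) H (ω/2^j) behind (6.74) can also be evaluated by a direct (and far less efficient) loop on a frequency grid; the sketch below is our own and is meant only to show the structure of the product, not the every-other-sample FFT trick just described.

h = [0.48296291314453 0.83651630373781 0.22414386804201 -0.12940952255126]; % Daubechies N = 4
J = 12;                                    % number of factors kept in the product
w = linspace(-8*pi, 8*pi, 4097);           % frequency grid
Phi = ones(size(w));                       % Phi(0) normalized to one
for j = 1:J
    Hj = zeros(size(w));
    for n = 0:length(h)-1
        Hj = Hj + h(n+1)*exp(1i*(w/2^j)*n);   % H(w/2^j), as in (4.12)
    end
    Phi = Phi .* (Hj/sqrt(2));             % multiply in the j-th stretched factor
end
plot(w, abs(Phi));                         % magnitude of the approximate Phi(w)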
M0 φ = φ.          (6.76)

In other words, the vector of φ (k) is the eigenvector of M0 for an eigenvalue of unity. The simple sum Σ_n h (n) = √2 in (6.10) does not guarantee that M0 always has such an eigenvalue, but Σ_n h (2n) = Σ_n h (2n + 1) in (6.12) does guarantee a unity eigenvalue. This means that if (6.12) is not satisfied, φ (t) is not defined on the dyadic rationals and is, therefore, probably not a very nice signal.
Our problem is now to find that eigenvector. Note from (6.6) that φ (0) = φ (N − 1) = 0 or h (0) = h (N − 1) = 1/√2. For the Haar wavelet system, the second is true, but for longer systems this would mean all the other h (n) would have to be zero because of (6.10), and that is not only uninteresting, it produces a very poorly behaved φ (t). Therefore, the scaling function with N > 2 and compact support will always be zero on the extremes of the support. This means that we can look for the eigenvector of the smaller 4 by 4 matrix obtained by eliminating the first and last rows and columns of M0.
From (6.76) we form [M0 − I] φ = 0, which shows that [M0 − I] is singular, meaning its rows are not independent. We remove the last row and assume the remaining rows are now independent. If that is not true, we remove another row. We next replace that row with a row of ones in order to implement the normalizing equation

Σ_k φ (k) = 1.          (6.77)

This augmented matrix, [M0 − I] with a row replaced by a row of ones, when multiplied by φ gives a vector of all zeros except for a one in the position of the replaced row. This equation should not be singular and is solved for φ, which gives φ (k), the scaling function evaluated at the integers.
From these values of φ (t) on the integers, we can find the values at the half integers using the recursive equation (6.1) or a modified form

φ (k/2) = Σ_n h (n) √2 φ (k − n)          (6.78)

which in matrix form is

M1 φ = φ_2          (6.79)

Here, the first and last columns and the last row are not needed (because φ_0 = φ_5 = φ_{11/2} = 0) and can be eliminated to save some arithmetic.
The procedure described here can be repeated to find a matrix that, when multiplied by a vector of the scaling function evaluated at the odd integers divided by k, will give the values at the odd integers divided by 2k. This modified matrix corresponds to convolving the samples of φ (t) with an up-sampled h (n). Again, convolution combined with up- and down-sampling is the basis of wavelet calculations. It is also the basis of digital filter bank theory. Figure 6.5 illustrates the dyadic expansion calculation of a Daubechies scaling function for N = 4 at each iteration of this method.
Not only does this dyadic expansion give an explicit method for finding the exact values of φ (t) at the dyadic rationals (t = k/2^j), but it shows how the eigenvalues of M say something about φ (t). Clearly, if φ (t) is continuous, it says everything.
Matlab programs are included in Appendix C (Chapter 15) to implement the successive approximation and dyadic expansion approaches to evaluating the scaling function from the scaling coefficients. They were used to generate the figures in this section. It is very illuminating to experiment with different h (n) and observe the effects on φ (t) and ψ (t).
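The integer-sample calculation just described takes only a few lines of Matlab. The sketch below is our own (not the Appendix C program); it reuses the matrices M0 and M1 and the length N from the earlier sketch, replaces the last row of [M0 − I] with ones as in the text (a different row may be needed for some h (n)), and then obtains the half-integer values from (6.78)/(6.79).

% phi(k) on the integers: eigenvector of M0 for eigenvalue one, normalized by (6.77)
A = M0 - eye(N);                 % M0, M1, N from the earlier sketch
A(end, :) = ones(1, N);          % replace the last row to impose sum(phi(k)) = 1
b = zeros(N, 1);  b(end) = 1;
phi_int  = A \ b;                % phi(0), phi(1), ..., phi(N-1); the end values are zero
phi_half = M1 * phi_int;         % phi(1/2), phi(3/2), ..., from (6.78)/(6.79)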
We now look at a particular way to use the remaining N/2 − 1 degrees of freedom to design the N values of h (n) after satisfying (6.10) and (6.14), which ensure the existence and orthogonality (or the property of being a tight frame) of the scaling function and wavelets [121], [113], [126].
One of the interesting characteristics of the scaling functions and wavelets is that while satisfying (6.10) and (6.14) will guarantee the existence of an integrable scaling function, it may be extraordinarily irregular, even fractal in nature. This may be an advantage in analyzing rough or fractal signals, but it is likely to be a disadvantage for most signals and images.
We will see in this section that the number of vanishing moments of h_1 (n) and ψ (t) is related to the smoothness or differentiability of φ (t) and ψ (t). Unfortunately, smoothness is difficult to determine directly because, unlike with differential equations, the defining recursion (6.1) does not involve derivatives.
We also see that the representation and approximation of polynomials are related to the number of vanishing or minimized wavelet moments. Since polynomials are often a good model for certain signals and images, this property is both interesting and important.
The number of zero scaling function moments is related to the "goodness" of the approximation of high-resolution scaling coefficients by samples of the signal. They also affect the symmetry and concentration of the scaling function and wavelets.
This section will consider the basic 2-band or multiplier-2 case defined in (3.13). The more general M-band or multiplier-M case is discussed in Section: Multiplicity-M (M-Band) Scaling Functions and Wavelets (Section 8.2: Multiplicity-M (M-Band) Scaling Functions and Wavelets).
The term "scaling filter" comes from Mallat's algorithm and the relation to filter banks discussed in Chapter: Filter Banks and the Discrete Wavelet Transform. The term "unitary" comes from the orthogonality conditions expressed in filter bank language, which is explained in Chapter: Filter Banks and Transmultiplexers.
A unitary scaling filter is said to be K-regular if its z-transform has K zeros at z = e^{iπ}. This looks like

H (z) = ((1 + z^{−1})/2)^K Q (z)          (7.2)

where H (z) = Σ_n h (n) z^{−n} is the z-transform of the scaling coefficients h (n) and Q (z) has no poles or zeros at z = e^{iπ}. Note that we are presenting a definition of regularity of h (n), not of the scaling function φ (t) or wavelet ψ (t). They are related but not the same. Note also from (6.20) that any unitary scaling filter is at least K = 1 regular.
The length of the scaling filter is N, which means H (z) is an N − 1 degree polynomial. Since the multiple zero at z = −1 is of order K, the polynomial Q (z) is of degree N − 1 − K. The existence of φ (t) requires the zeroth moment be √2, which is the result of the linear condition in (7.1). Satisfying the conditions for orthogonality requires N/2 conditions, which are the quadratic equations in (7.1). This means the degree of regularity is limited by

1 ≤ K ≤ N/2.          (7.3)

Daubechies used the degrees of freedom to obtain maximum regularity for a given N, or to obtain the minimum N for a given regularity. Others have allowed a smaller regularity and used the resulting extra degrees of freedom for other design purposes.
Regularity is defined in terms of zeros of the transfer function or frequency response function of an FIR filter made up from the scaling coefficients. This is related to the fact that the differentiability of a function is tied to how fast its Fourier series coefficients drop off as the index goes to infinity, or how fast the Fourier transform magnitude drops off as frequency goes to infinity. The relation of the Fourier transform of the scaling function to the frequency response of the FIR filter with coefficients h (n) is given by the infinite product (6.74). From these connections, we reason that since H (z) is lowpass and, if it has a high order zero at z = −1 (i.e., ω = π), the Fourier transform of φ (t) should drop off rapidly and, therefore, φ (t) should be smooth. This turns out to be true.
We next define the k-th moments of φ (t) and ψ (t) as

m (k) = ∫ t^k φ (t) dt          (7.4)

and

m_1 (k) = ∫ t^k ψ (t) dt          (7.5)

and the k-th discrete moments of h (n) and h_1 (n) as

µ (k) = Σ_n n^k h (n)          (7.6)

and

µ_1 (k) = Σ_n n^k h_1 (n).          (7.7)

From these equations and the basic recursion (6.1) we obtain [189]

m (k) = [1 / ((2^k − 1) √2)] Σ_{ℓ=1}^{k} \binom{k}{ℓ} µ (ℓ) m (k − ℓ)          (7.9)

which can be derived by substituting (6.1) into (7.4), changing variables, and using (7.6). Similarly, we obtain

m_1 (k) = [1 / (2^k √2)] Σ_{ℓ=0}^{k} \binom{k}{ℓ} µ_1 (ℓ) m (k − ℓ).          (7.10)

These equations exactly calculate the moments defined by the integrals in (7.4) and (7.5) with simple finite convolutions of the discrete moments with the lower order continuous moments. A similar equation also holds for the multiplier-M case described in Section: Multiplicity-M (M-Band) Scaling Functions and Wavelets (Section 8.2: Multiplicity-M (M-Band) Scaling Functions and Wavelets) [189]. A Matlab program that calculates the continuous moments from the discrete moments using (7.9) and (7.10) is given in Appendix C.
This is a very powerful result [484], [253]. It not only ties the number of zero moments to the regularity but also to the degree of polynomials that can be exactly represented by a sum of weighted and shifted scaling functions.
Theorem 21 If ψ (t) is K-times differentiable and decays fast enough, then the first K − 1 wavelet moments vanish [121]; i.e.,

| d^k ψ (t) / dt^k | < ∞,  0 ≤ k ≤ K          (7.11)

implies

m_1 (k) = 0,  0 ≤ k ≤ K − 1.          (7.12)

Theorem 22 There exists a finite positive integer L such that if m_1 (k) = 0 for 0 ≤ k ≤ K − 1, then

| d^P ψ (t) / dt^P | < ∞          (7.13)

for L P > K.
For example, a three-times differentiable ψ (t) must have three vanishing moments, but three vanishing moments result in only one-time differentiability.
These theorems show the close relationship among the moments of h_1 (n) and ψ (t), the smoothness of H (ω) at ω = 0 and π, and the polynomial representation. They also state a loose relationship with the smoothness of φ (t) and ψ (t) themselves.
|H (ω)|² = |(1 + e^{iω})/2|^{2K} |L (ω)|².          (7.18)

If we use the functional notation

M (ω) = |H (ω)|²  and  ℒ (ω) = |L (ω)|²          (7.19)

which gives a complete parameterization of Daubechies' maximum zero wavelet moment design. It also gives a very straightforward procedure for the calculation of the h (n) that satisfy these conditions. Herrmann derived this expression for the design of Butterworth or maximally flat FIR digital filters [262].
If the regularity is K < N/2, P (y) must be of higher degree and the form of the solution is

P (y) = Σ_{k=0}^{K−1} \binom{K−1+k}{k} y^k + y^K R (1/2 − y)          (7.26)

where R (y) is chosen to give the desired filter length N, to achieve some other desired property, and to give P (y) ≥ 0.
The steps in calculating the actual values of h (n) are to first choose the length N (or the desired regularity) for h (n), then factor |H (ω)|², where there will be freedom in choosing which roots to use for H (ω). The calculations are more easily carried out using the z-transform form of the transfer function and using convolution in the time domain rather than multiplication (raising to a power) in the frequency domain. That is done in the Matlab program [hn,h1n] = daub(N) in Appendix C, where the polynomial coefficients in (7.25) are calculated from the binomial coefficient formula. This polynomial is factored with the roots command in Matlab, and the roots are mapped from the polynomial variable y_ℓ to the variable z_ℓ in (7.2) using first cos (ω) = 1 − 2 y_ℓ, then with i sin (ω) = √(cos² (ω) − 1) and e^{iω} = cos (ω) ± i sin (ω) we use z = e^{iω}. These changes of variables are used by Herrmann [262] and Daubechies [121].
Examine the Matlab program to see the details of just how this is carried out. The program uses the sort command to order the roots of H (z) H (1/z), after which it chooses the N − 1 smallest ones to give a minimum-phase H (z) factorization. You could choose a different set of N − 1 roots in an effort to get a more linear phase or even maximum phase. This choice allows some variation in Daubechies wavelets of the same length. The M-band generalization of this is developed by Heller in [484], [253]. In [121], Daubechies also considers an alternation of zeros inside and outside the unit circle which gives a more symmetric h (n). A completely symmetric real h (n) that has compact support and supports orthogonal wavelets is not possible; however, symmetry is possible for complex h (n), biorthogonal systems, infinitely long h (n), and multiwavelets. Use of this zero moment design approach will also assure that the resulting wavelet system is an orthonormal basis.
If all the degrees of freedom are used to set moments to zero, one uses K = N/2 in (7.14) and the above procedure is followed. It is possible to explicitly set a particular pair of zeros somewhere other than at ω = π. In that case, one would use K = (N/2) − 2 in (7.14). Other constraints are developed later in this chapter and in later chapters.
To illustrate some of the characteristics of a Daubechies wavelet system, Table 7.1 shows the scaling function and wavelet coefficients, h (n) and h_1 (n), and the corresponding discrete scaling coefficient moments and wavelet coefficient moments for a length-8 Daubechies system. Note the N/2 = 4 zero moments of the wavelet coefficients and the zeroth scaling coefficient moment of µ (0) = √2.
Table 7.1: Scaling Function and Wavelet Coefficients plus their Discrete Moments for Daubechies-8
Table 7.2 gives the same information for the length-6, 4, and 2 Daubechies scaling coefficients, wavelet coefficients, scaling coefficient moments, and wavelet coefficient moments. Again notice how many discrete wavelet moments are zero.
Table 7.3 shows the continuous moments of the scaling function φ (t) and wavelet ψ (t) for the Daubechies systems with lengths six and four. The discrete moments are the moments of the coefficients defined by (7.6) and (7.7), with the continuous moments defined by (7.4) and (7.5) calculated using (7.9) and (7.10) with the programs listed in Appendix C.
Daubechies N = 6
n h (n) h1 (n) µ (k) µ1 (k) k
0 0.33267055295008 -0.03522629188571 1.414213 0 0
1 0.80689150931109 -0.08544127388203 1.155979 0 1
2 0.45987750211849 0.13501102001025 0.944899 0 2
3 -0.13501102001025 0.45987750211849 -0.224341 3.354101 3
4 -0.08544127388203 -0.80689150931109 -2.627495 40.679682 4
5 0.03522629188571 0.33267055295008 5.305591 329.323717 5
Daubechies N = 4
n h (n) h1 (n) µ (k) µ1 (k) k
0 0.48296291314453 0.12940952255126 1.414213 0 0
1 0.83651630373781 0.22414386804201 0.896575 0 1
2 0.22414386804201 -0.83651630373781 0.568406 1.224744 2
3 -0.12940952255126 0.48296291314453 -0.864390 6.572012 3
Daubechies N = 2
n h (n) h1 (n) µ (k) µ1 (k) k
0 0.70710678118655 0.70710678118655 1.414213 0 0
1 0.70710678118655 -0.70710678118655 0.707107 0.707107 1
Table 7.2: Daubechies Scaling Function and Wavelet Coefficients plus their Moments
N = 6
k µ (k) µ1 (k) m (k) m1 (k)
0 1.4142135 0 1.0000000 0
1 1.1559780 0 0.8174012 0
2 0.9448992 0 0.6681447 0
3 -0.2243420 3.3541019 0.4454669 0.2964635
4 -2.6274948 40.6796819 0.1172263 2.2824642
5 5.3055914 329.3237168 -0.0466511 11.4461157
N = 4
k µ (k) µ1 (k) m (k) m1 (k)
0 1.4142136 0 1.0000000 0
1 0.8965755 0 0.6339746 0
2 0.5684061 1.2247449 0.4019238 0.2165063
3 -0.8643899 6.5720121 0.1310915 0.7867785
4 -6.0593531 25.9598790 -0.3021933 2.0143421
5 -23.4373939 90.8156100 -1.0658728 4.4442798
Table 7.3: Daubechies Scaling Function and Wavelet Continuous and Discrete Moments
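The conditions discussed above are easy to verify numerically against the length-4 entries of Table 7.2; the short Matlab check below (our own) confirms the linear condition (6.10), the quadratic condition (6.14), and the zero discrete wavelet moments µ_1 (0) and µ_1 (1) shown in the table.

h  = [0.48296291314453  0.83651630373781  0.22414386804201 -0.12940952255126];
h1 = [0.12940952255126  0.22414386804201 -0.83651630373781  0.48296291314453];
sum(h) - sqrt(2)                          % linear condition (6.10): essentially zero
[sum(h.*h) - 1, h(1:2)*h(3:4)']           % quadratic condition (6.14): both essentially zero
n = 0:3;
[sum(h1), sum(n.*h1)]                     % mu1(0) and mu1(1): the zero wavelet moments in the table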
These tables are very informative about the characteristics of wavelet systems in general as well as the particularities of the Daubechies system. We see the µ (0) = √2 of (7.1) and (6.10) that is necessary for the existence of a scaling function solution to (6.1) and the µ_1 (0) = m_1 (0) = 0 of (6.32) and (6.29) that is necessary for the orthogonality of the basis functions. Orthonormality requires (3.25), which is seen in comparison of the h (n) and h_1 (n), and it requires m (0) = 1 from (6.53) and (6.45). After those conditions are satisfied, there are N/2 − 1 degrees of freedom left, which Daubechies uses to set wavelet moments m_1 (k) equal to zero. For length-6 we have two zero wavelet moments and for length-4, one. For all longer Daubechies systems we have exactly N/2 − 1 zero wavelet moments in addition to the one m_1 (0) = 0, for a total of N/2 zero wavelet moments. Note m (2) = m (1)², as will be explained in (7.32), and there exist relationships among some of the values of the even-ordered scaling function moments, which will be explained in (7.51).
As stated earlier, these systems have a maximum number of zero moments of the wavelets, which results in a high degree of smoothness for the scaling and wavelet functions. Figure 7.1 and Figure 7.2 show the Daubechies scaling functions and wavelets for N = 4, 6, 8, 10, 12, 16, 20, 40. The coefficients were generated by the techniques described in Section: Parameterization of the Scaling Coefficients (Section 6.9: Parameterization of the Scaling Coefficients) and Chapter: Regularity, Moments, and Wavelet System Design. The Matlab programs are listed in Appendix C and values of h (n) can be found in [121] or generated by the programs. Note the increasing smoothness as N is increased. For N = 2, the scaling function is not continuous; for N = 4, it is continuous but not differentiable; for N = 6, it is barely differentiable once; for N = 14, it is twice differentiable, and similarly for longer h (n). One can obtain any degree of differentiability for sufficiently long h (n).
The Daubechies coefficients are obtained by maximizing the number of moments that are zero. This gives regular scaling functions and wavelets, but it is possible to use the degrees of freedom to maximize the differentiability of φ (t) rather than maximize the zero moments. This is not easily parameterized, and it gives only slightly greater smoothness than the Daubechies system [121]. Examples of Daubechies scaling functions resulting from choosing different factors in the spectral factorization of |H (ω)|² in (7.18) can be found in [121].
Notice that, although Sobolev regularity does not give the explicit order of differentiability, it does yield lower and upper bounds on r, the Hölder regularity, and hence the differentiability of φ if φ ∈ L2. This can be seen from the following inclusions:
H^{s+1/2} ⊂ C^r ⊂ H^s    (7.30)
A very interesting and important result by Volkmer [555] and by Eirola [172] gives an exact asymptotic
formula for the Hölder regularity index (exponent) of the Daubechies scaling function.
Theorem 24 The limit of the Hölder regularity index of a Daubechies scaling function as the length of the scaling filter goes to infinity is [555]
lim_{N→∞} α_N / N = 1 − log(3) / (2 log(2)) = 0.2075 · · ·    (7.31)
This result, which was also proven by A. Cohen and J. P. Conze, together with empirical calculations for shorter lengths, gives a good picture of the smoothness of Daubechies scaling functions. This is illustrated in Figure 7.3 where the Hölder index is plotted versus scaling filter length for both the maximally smooth case and the Daubechies case.
The question of the behavior of maximally smooth scaling functions was empirically addressed by Lang and Heller in [312]. They use an algorithm by Rioul to calculate the Hölder smoothness of scaling functions that have been designed to have maximum Hölder smoothness, and the results are shown in Figure 7.3 together with the smoothness of the Daubechies scaling functions as functions of the length of the scaling filter. For the longer lengths, it is possible to design systems that give a scaling function over twice as smooth as with a Daubechies design. In most applications, however, the greater Hölder smoothness is probably not important.
Figure 7.3: Hölder Smoothness versus Coefficient Length for Daubechies' (+) and Maximally Smooth (o) Wavelets.
Figure 7.4: Number of Zeros at ω = π versus Coefficient Length for Daubechies' (+) and Maximally Smooth (o) Wavelets.
Figure 7.4 shows the number of zero moments (zeros at ω = π) as a function of the number of scaling function coefficients for both the maximally smooth and Daubechies designs.
One case from this figure is for N = 26 where the Daubechies smoothness is SH = 4.005 and the maximum smoothness is SH = 5.06. The maximally smooth scaling function has one more continuous derivative than the Daubechies scaling function.
Recent work by Heller and Wells [256], [257] gives a better connection of properties of the scaling coefficients and the smoothness of the scaling function and wavelets. This is done both for the scale factor or multiplicity of M = 2 and for general integer M.
The usual definition of smoothness in terms of differentiability may not be the best measure for certain signal processing applications. If the signal is given as a sequence of numbers, not as a function of a continuous variable, what does smoothness mean? Perhaps the use of the variation of a signal may be a useful alternative [214], [410], [402].
which gives the component of f (t) which is in Vj and which is the best least squares approximation to f (t)
in Vj .
As given in (7.5), the ℓth moment of ψ (t) is defined as
m1 (ℓ) = ∫ t^ℓ ψ (t) dt.    (7.34)
We can now state an important relation of the projection (7.33) as an approximation to f (t) in terms of
the number of zero wavelet moments and the scale.
Theorem 25 If m1 (ℓ) = 0 for ℓ = 0, 1, · · · , L then the L2 error is
ε1 = ‖ f (t) − P_j {f (t)} ‖_2 ≤ C 2^{−j(L+1)},    (7.35)
where C is a constant independent of j and L but dependent on f (t) and the wavelet system [180], [519].
This states that at any given scale, the projection of the signal on the subspace at that scale approaches the function itself as the number of zero wavelet moments (and the length of the scaling filter) goes to infinity. It also states that for any given length, the projection goes to the function as the scale goes to infinity. These approximations converge exponentially fast. This projection is illustrated in Figure 7.5.
This "vector space" illustration shows the nature and relationships of the two types of approximations. The use of samples as inner products is an approximation within the expansion subspace V_j. The use of a finite expansion to represent a signal f (t) is an approximation from L2 onto the subspace V_j. Theorems p. 94 and p. 95 show the nature of those approximations, which, for wavelets, is very good.
An illustration of the effects of these approximations on a signal is shown in Figure 7.6 where a signal with a very smooth component (a sinusoid) and a discontinuous component (a square wave) is expanded in a wavelet series using samples as the high resolution scaling function coefficients. Notice the effects of projecting onto lower and lower resolution scales.
If we consider a wavelet system where the same number of scaling function and wavelet moments are set
zero and this number is as large as possible, then the following is true [569], [513]:
Theorem 27 If m (ℓ) = m1 (ℓ) = 0 for ℓ = 1, 2, · · · , L and m1 (0) = 0, then the L2 error is
ε3 = ‖ f (t) − S_j {f (t)} ‖_2 ≤ C3 2^{−j(L+1)},    (7.39)
where C3 is a constant independent of j and L, but dependent on f (t) and the wavelet system.
Here we see that for this wavelet system, called a Coifman wavelet system, using samples as the inner product expansion coefficients is an excellent approximation. This justifies using samples of a signal as input to a filter bank for a proper wavelet analysis. This approximation is also illustrated in Figure 7.5 and in [565].
The Coifman wavelet system (Daubechies named the basis functions "coiflets") is an orthonormal multiresolution wavelet system with
∫ t^k φ (t) dt = m (k) = 0,   for k = 1, 2, · · · , L − 1    (7.40)
∫ t^k ψ (t) dt = m1 (k) = 0,   for k = 1, 2, · · · , L − 1.    (7.41)
This definition imposes the requirement that there be at least L − 1 zero scaling function moments and at least L − 1 wavelet moments in addition to the one zero moment of m1 (0) required by orthogonality. This system is said to be of order or degree L and sometimes has the additional requirement that the length of the scaling function filter h (n), which is denoted N, is minimum [121], [126]. In the design of these coiflets, one obtains more total zero moments than N/2 − 1. This was first noted by Beylkin, et al. [36]. The length-4 wavelet system has only one degree of freedom, so it cannot have both a scaling function moment and a wavelet moment of zero (see Table 7.6). Tian [513], [515] has derived formulas for four length-6 coiflets. These are:
" √ √ √ √ √ √ #
−3 + 7 1 − 7 7 − 7 7 + 7 5 + 7 1 − 7
h= √ , √ , √ , √ , √ , √ , (7.42)
16 2 16 2 8 2 8 2 16 2 16 2
or
√ " √ √ √ √ √ #
−3 − 7 1 + 7 7 + 7 7 − 7 5 − 7 1 + 7
h= √ , √ , √ , √ , √ , √ , (7.43)
16 2 16 2 8 2 8 2 16 2 16 2
or
" √ √ √ √ √ √ #
−3 + 15 1 − 15 3 − 15 3 + 15 13 + 15 9 − 15
h= √ , √ , √ , √ , √ , √ , (7.44)
16 2 16 2 8 2 8 2 16 2 16 2
or
" √ √ √ √ √ √ #
−3 − 15 1 + 15 3 + 15 3 − 15 13 − 15 9 + 15
h= √ , √ , √ , √ , √ , √ , (7.45)
16 2 16 2 8 2 8 2 16 2 16 2
with the first formula (7.42) giving the same result as Daubechies [121], [126] (corrected) and that of Odegard [52], and the third giving the same result as Wickerhauser [572]. The results from (7.42) are included in Table 7.4 along with the discrete moments of the scaling function and wavelet, µ (k) and µ1 (k) for k = 0, 1, 2, 3.
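The closed form (7.42) is easy to check numerically. The MATLAB fragment below is our own check; it uses the alternating flip h1(n) = (−1)^n h(N−1−n) as one conventional choice of wavelet filter (an assumption for illustration, not the only possibility) and verifies the linear condition Σ h(n) = √2, the orthonormality Σ h(n)² = 1, and the zero discrete wavelet moments µ1(0) = µ1(1) = 0.

r = sqrt(7);
h = [-3+r, 1-r, 7-r, 7+r, 5+r, 1-r] ./ ([16 16 8 8 16 16]*sqrt(2));  % coiflet of (7.42)
n = 0:5;
h1 = (-1).^n .* h(end:-1:1);      % one common wavelet filter choice (alternating flip)
[sum(h)  sqrt(2)]                 % linear condition: both entries should agree
sum(h.^2)                         % should be 1 (orthonormality)
[sum(h1)  sum(n.*h1)]             % discrete wavelet moments mu1(0), mu1(1): both near zero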
The design of a length-6 Coifman system specifies one zero scaling function moment and one zero wavelet moment (in addition to µ1 (0) = 0), but we, in fact, obtain one extra zero scaling function moment. That is the result of m (2) = m²(1) from [189]. In other words, we get one more zero scaling function moment than the two degrees of freedom would seem to indicate. This is true for all lengths N = 6ℓ for ℓ = 1, 2, 3, · · · and is a result of the interaction between the scaling function moments and the wavelet moments described later.
The property of zero wavelet moments is shift invariant, but the zero scaling function moments are shift dependent [36]. Therefore, a particular shift for the scaling function must be used. This shift is two for the length-6 example in Table 7.4, but is different for the solutions in (7.44) and (7.45). Compare this table to the corresponding one for Daubechies length-6 scaling functions and wavelets given in Table 7.2, where there are two zero discrete wavelet moments, just as many as the degrees of freedom in that design.
The scaling function from (7.42) is fairly symmetric, but not around its center, and the other three designs in (7.43), (7.44), and (7.45) are not symmetric at all. The scaling function from (7.42) is also fairly smooth, and that from (7.44) only slightly less so, but the scaling function from (7.43) is very rough and that from (7.45) seems to be fractal. Examination of the frequency response H (ω) and the zero locations of the FIR filters h (n) shows very similar frequency responses for (7.42) and (7.44), with (7.43) having a somewhat irregular but monotonic frequency response and (7.45) having a zero on the unit circle at ω = π/3, i.e., not satisfying Cohen's condition [78] for an orthogonal basis. It is also worth noticing that the design in (7.42) has the largest Hölder smoothness. These four designs, all satisfying the same necessary conditions, have very different characteristics. This tells us to be very careful in using zero moment methods to design wavelet systems. The designs are not unique and some are much better than others.
Table 7.4 contains the scaling function and wavelet coefficients for the length-6 and 12 designed by Daubechies and the length-8 designed by Tian together with their discrete moments. We see the extra zero scaling function moments for lengths 6 and 12 and also the extra zero for lengths 8 and 12 that occurs after a nonzero one.
The continuous moments can be calculated from the discrete moments and lower order continuous moments [36], [189], [505] using (7.9) and (7.10). An important relationship of the discrete moments for a system with K − 1 zero wavelet moments is found by calculating the derivatives of the magnitude squared of the discrete-time Fourier transform of h (n), which is H (ω) = Σ_n h (n) e^{−iωn} and has 2K − 1 zero derivatives of the magnitude squared at ω = 0. This gives [189] for the kth derivative, for k even and 1 < k < 2K − 1,
Σ_{ℓ=0}^{k} (k choose ℓ) (−1)^ℓ µ (ℓ) µ (k − ℓ) = 0.    (7.46)
Solving for µ (k) in terms of lower order discrete moments and using µ (0) = √2 gives, for k even,
µ (k) = −(1/(2√2)) Σ_{ℓ=1}^{k−1} (k choose ℓ) (−1)^ℓ µ (ℓ) µ (k − ℓ)    (7.47)
which allows calculating the even-order discrete scaling function moments in terms of the lower odd-order
discrete scaling function moments for k = 2, 4, · · · , 2K − 2. For example:
µ (2) = (1/√2) µ²(1)
µ (4) = (1/(2√2)) [ 8 µ(1) µ(3) − 3 µ⁴(1) ]    (7.48)
· · ·
which can be seen from the values in Table 7.2.
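For instance, with the length-4 Daubechies values of Table 7.2, the first relation of (7.48) can be checked in two lines of MATLAB (our own check):

mu1 = 0.8965755;            % mu(1) for the length-4 Daubechies system (Table 7.2)
mu1^2/sqrt(2)               % reproduces mu(2) = 0.5684... from the same table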
Johnson [284] noted from Beylkin [32] and Unser [519] that by using the moments of the autocorrelation
function of the scaling function, a relationship of the continuous scaling function moments can be derived in
the form
Σ_{ℓ=0}^{k} (k choose ℓ) (−1)^{k−ℓ} m (ℓ) m (k − ℓ) = 0    (7.49)
where 0 < k < 2K if K − 1 wavelet moments are zero. Solving for m (k) in terms of lower order moments
gives for k even
m (k) = −(1/2) Σ_{ℓ=1}^{k−1} (k choose ℓ) (−1)^ℓ m (ℓ) m (k − ℓ)    (7.50)
which allows calculating the even-order scaling function moments in terms of the lower odd-order scaling function moments:
m (2) = m²(1)
m (4) = 4 m(3) m(1) − 3 m⁴(1)
m (6) = 6 m(5) m(1) + 10 m²(3) − 60 m(3) m³(1) + 45 m⁶(1)    (7.51)
m (8) = 8 m(7) m(1) + 56 m(5) m(3) − 168 m(5) m³(1) + 2520 m(3) m⁵(1) − 840 m²(3) m²(1) − 1575 m⁸(1)
· · ·
Length-N = 6, Degree L = 2
n h (n) h1 (n) µ (k) µ1 (k) k
-2 -0.07273261951285 0.01565572813546 1.414213 0 0
-1 0.33789766245781 -0.07273261951285 0 0 1
0 0.85257202021226 -0.38486484686420 0 -1.163722 2
1 0.38486484686420 0.85257202021226 -0.375737 -3.866903 3
2 -0.07273261951285 -0.33789766245781 -2.872795 -10.267374 4
3 -0.01565572813546 -0.07273261951285
Length-N = 8, Degree L = 3
n h (n) h1 (n) µ (k) µ1 (k) k
-4 0.04687500000000 0.01565572813546 1.414213 0 0
-3 -0.02116013576461 -0.07273261951285 0 0 1
-2 -0.14062500000000 -0.38486484686420 0 0 2
-1 0.43848040729385 1.38486484686420 -2.994111 0.187868 3
0 1.38486484686420 -0.43848040729385 0 11.976447 4
1 0.38486484686420 -0.14062500000000 -45.851020 -43.972332 5
2 -0.07273261951285 0.02116013576461 63.639610 271.348747 6
3 -0.01565572813546 0.04687500000000
Length-N = 12, Degree L = 4
n h (n) h1 (n) µ (k) µ1 (k) k
-4 0.016387336463 0.000720549446 1.414213 0 0
-3 -0.041464936781 0.001823208870 0 0 1
-2 -0.067372554722 -0.005611434819 0 0 2
-1 0.386110066823 -0.023680171946 0 0 3
0 0.812723635449 0.059434418646 0 11.18525 4
1 0.417005184423 0.076488599078 -5.911352 175.86964 5
2 -0.076488599078 -0.417005184423 0 1795.33634 6
3 -0.059434418646 -0.812723635449 -586.341304 15230.54650 7
4 0.023680171946 -0.386110066823 3096.310009 117752.68833 8
5 0.005611434819 0.067372554722
6 -0.001823208870 0.041464936781
7 -0.000720549446 -0.016387336463
Table 7.4: Coiflet Scaling Function and Wavelet Coefficients plus their Discrete Moments
if the wavelet moments are zero up to k = K − 1. Notice that setting m (1) = m (3) = 0 causes m (2) = m (4) = m (6) = m (8) = 0 if sufficient wavelet moments are zero. This explains the extra zero moments in Table 7.4. It also shows that the traditional specification of zero scaling function moments is redundant. In Table 7.4, m (8) would be zero if more wavelet moments were zero.
N = 6, L=2
k µ (k) µ1 (k) m (k) m1 (k)
0 1.4142135623 0 1.0000000000 0
1 0 0 0 0
2 0 -1.1637219122 0 -0.2057189138
3 -0.3757374752 -3.8669032118 -0.0379552166 -0.3417891854
4 -2.8727952940 -10.2673737288 -0.1354248688 -0.4537580992
5 -3.7573747525 -28.0624304008 -0.0857053279 -0.6103378310
N = 8, L=3
k µ (k) µ1 (k) m (k) m1 (k)
0 1.4142135623 0 1.0000000000 0
1 0 0 0 0
2 0 0 0 0
3 -2.9941117777 0.1878687376 -0.3024509630 0.0166054072
4 0 11.9764471108 0 0.5292891854
5 -45.8510203537 -43.9723329775 -1.0458570134 -0.9716604635
Table 7.5: Discrete and Continuous Moments for the Coiflet Systems
To see the continuous scaling function and wavelet moments for these systems, Table 7.5 shows both the continuous and discrete moments for the length-6 and 8 coiflet systems. Notice the zero moment m (4) = µ (4) = 0 for length-8. The length-14, 20, and 26 systems also have the "extra" zero scaling moment just after the first nonzero moment. This always occurs for length-N = 6ℓ + 2 coiflets.
Figure 7.7 shows the length-6, 8, 10, and 12 coiflet scaling functions φ (t) and wavelets ψ (t). Notice their approximate symmetry and compare this to Daubechies' classical wavelet systems and her more symmetric ones achieved by using the different factorization mentioned in Section 7.3 (Daubechies' Method for Zero Wavelet Moment Design) and shown in [121]. The difference between these systems and truly symmetric ones (which requires giving up orthogonality, realness, or finite support) is probably negligible in many applications.
Figure 7.7: Length-6, 8, 10, and 12 Coiflet Scaling Functions and Wavelets
Length-N = 4, Degree L = 1
n h (n) h1 (n) µ (k) µ1 (k) k
-1 0.224143868042 0.129409522551 1.414213 0 0
0 0.836516303737 0.482962913144 0 -0.517638 1
1 0.482962913144 -0.836516303737 0.189468 0.189468 2
2 -0.129409522551 0.224143868042 -0.776457 0.827225 3
Length-N = 10, Degree L = 3
n h (n) h1 (n) µ (k) µ1 (k) k
-2 0.032128481856 0.000233764788 1.414213 0 0
-1 -0.075539271956 -0.000549618934 0 0 1
0 -0.096935064502 -0.013550370057 0 0 2
1 0.491549094027 0.033777338659 0 3.031570 3
2 0.805141083557 0.304413564385 0 24.674674 4
3 0.304413564385 -0.805141083557 -14.709025 138.980052 5
4 -0.033777338659 0.491549094027 64.986095 710.373341 6
5 -0.013550370057 0.096935064502
6 0.000549618934 -0.075539271956
7 0.000233764788 0.032128481856
Table 7.6: Coiflet Scaling Function and Wavelet Coefficients plus their Discrete Moments
We have designed these "new" coiflet systems (e.g., N = 10, 16, 22, 28) by using the Matlab optimization toolbox constrained optimization function. Wells and Tian [515] used Newton's method to design lengths N = 6ℓ + 2 and N = 6ℓ coiflets up to length 30 [52]. Selesnick [465] has used a filter design approach. Still another approach is given by Wei and Bovik [564].
Table 7.6 also shows the result of designing a length-4 system, using the one degree of freedom to ask for one zero scaling function moment rather than one zero wavelet moment as we did for the Daubechies system. For length-4, we do not get any "extra" zero moments because there are not enough zero wavelet moments. Here we see a direct trade-off between zero scaling function moments and wavelet moments. Adding these new lengths to our traditional coiflets gives Table 7.7.
Table 7.7: Moments for Various Length-N and Degree-L Coiflets, where (*) is the number of zero wavelet moments, excluding the m1 (0) = 0
The fourth and sixth columns in Table 7.7 contain the number of zero wavelet moments, excluding the m1 (0) = 0 which is zero because of orthogonality in all of these systems. The extra zero scaling function moments that occur after a nonzero moment for N = 6ℓ + 2 are also excluded from the count. This table shows coiflets for all even lengths. It shows the extra zero scaling function moments that are sometimes achieved and how the total number of zero moments monotonically increases and how the "smoothness" as measured by the Hölder exponent [440], [312], [257] increases with N and L.
When both scaling function and wavelet moments are set to zero, a larger number can be obtained than is expected from considering the degrees of freedom available. As stated earlier, of the N degrees of freedom available from the N coefficients, h (n), one is used to ensure existence of φ (t) through the linear constraint (7.1), and N/2 are used to ensure orthonormality through the quadratic constraints (7.1). This leaves N/2 − 1 degrees of freedom to achieve other characteristics. Daubechies used these to set the first N/2 − 1 wavelet moments to zero. If setting scaling function moments were independent of setting wavelet moments zero, one would think that the coiflet system would allow (N/2 − 1) /2 wavelet moments to be set zero and the same number of scaling function moments. For the coiflets described in Table 7.7, one always obtains more than this. The structure of this problem allows more zero moments to be both set and achieved than the simple degrees of freedom would predict. In fact, the coiflets achieve approximately 2N/3 total zero moments as compared with the number of degrees of freedom, which is approximately N/2, and which is achieved by the Daubechies wavelet system.
As noted earlier and illustrated in Table 7.8, these coiflets fall into three classes. Those with scaling filter lengths of N = 6ℓ + 2 (due to Tian) have an equal number of zero scaling function and wavelet moments, but always have "extra" zero scaling function moments located after the first nonzero one. Lengths N = 6ℓ (due to Daubechies) always have one more zero scaling function moment than zero wavelet moments, and lengths N = 6ℓ − 2 (new) always have two more zero scaling function moments than zero wavelet moments. These "extra" zero moments are predicted by (7.51), and there will be additional even-order zero moments for longer lengths. We have observed that within each of these classes, the Hölder exponent increases monotonically.
Length N       m = 0 achieved     m1 = 0* achieved     Total zero moments
N = 6ℓ + 2     (N − 2)/3          (N − 2)/3            (2/3)(N − 2)
N = 6ℓ         N/3                (N − 3)/3            (2/3)(N − 3/2)
N = 6ℓ − 2     (N + 2)/3          (N − 4)/3            (2/3)(N − 1)
Table 7.8: Number of Zero Moments for the Three Classes of Coiflets (ℓ = 1, 2, · · ·), *excluding µ1 (0) = 0, excluding non-contiguous zeros
The approach taken in some investigations of coiflets would specify the coiflet degree and then find the shortest filter that would achieve that degree. The lengths N = 6ℓ − 2 were not found by this approach because they have the same coiflet degree as the system just two shorter. However, they achieve two more zero scaling function moments than the shorter length with the same degree. By specifying the number of zero moments and/or the filter length, it is easier to see the complete picture.
Table 7.7 is just part of a large collection of zero moment wavelet system designs with a wide variety of trade-offs that could be tailored to a particular application. In addition to the variety illustrated here, many (perhaps all) of these sets of specified zero moments have multiple solutions. This is certainly true for length-6 as illustrated in (7.42) through (7.45) and for other lengths that we have found experimentally. The variety of solutions for each length can have different shifts, different Hölder exponents, and different degrees of being approximately symmetric.
The results of this chapter and section show the importance of moments to the characteristics of scaling functions and wavelets. It may not, however, be necessary or important to use the exact criteria of Daubechies or Coifman, but understanding the effects of zero moments is very important. It may be that setting a few scaling function moments and a few wavelet moments is sufficient, with the remaining degrees of freedom used for some other optimization, either in the frequency domain or in the time domain. As is noted in the next section, an alternative might be to minimize a larger number of various moments rather than to zero a few [409].
Examples of the Coiflet Systems are shown in Figure 7.7.
Up to this point in the book, we have developed the basic two-band wavelet system in some detail, trying to
provide insight and intuition into this new mathematical tool. We will now develop a variety of interesting
and valuable generalizations and extensions to the basic system, but in much less detail. We hope the detail
of the earlier part of the book can be transferred to these generalizations and, together with the references,
will provide an introduction to the topics.
The wavelet transform allows analysis of a signal or parameterization of a signal that can locate energy
in both the time and scale (or frequency) domain within the constraints of the uncertainty principle. The
spectrogram used in speech analysis is an example of using the short-time Fourier transform to describe
speech simultaneously in the time and frequency domains.
This graphical or visual description of the partitioning of energy in a signal using tiling depends on the
structure of the system, not the parameters of the system. In other words, the tiling partitioning will depend
on whether one uses M = 2 or M = 3, whether one uses wavelet packets or time-varying wavelets, or whether
one uses over-complete frame systems. It does not depend on the particular coefficients h (n) or hi (n), on the number of coefficients N, or the number of zero moments. One should remember that the tiling may
look as if the indices j and k are continuous variables, but they are not. The energy is really a function of
discrete variables in the DWT domain, and the boundaries of the tiles are symbolic of the partitioning. These
tiling boundaries become more literal when the continuous wavelet transform (CWT) is used as described in
Section 8.8 (Discrete Multiresolution Analysis, the Discrete-Time Wavelet Transform, and the Continuous
Wavelet Transform), but even there it does not mean that the partitioned energy is literally conned to the
tiles.
Figure 8.1:(a) Dirac Delta Function or Standard Time Domain Basis (b) Fourier or Standard Frequency
Domain Basis
Figure 8.2: (a) STFT Basis - Narrow Window. (b) STFT Basis - Wide Window.
The DWT coefficients, < x, ψ_{j,k} >, are a measure of the energy of the signal components located at (2^{−j} k, 2^j) in the time-frequency plane, giving yet another tiling of the time-frequency plane. As discussed in Chapter: Filter Banks and the Discrete Wavelet Transform and Chapter: Filter Banks and Transmultiplexers, the DWT (for compactly supported wavelets) can be efficiently computed using two-channel unitary FIR filter banks [105]. Figure 8.3 shows the corresponding tiling description which illustrates the time-frequency resolution properties of a DWT basis. If you look along the frequency (or scale) axis at some particular time (translation), you can imagine seeing the frequency response of the filter bank as shown in (8.7) with the logarithmic bandwidth of each channel. Indeed, each horizontal strip in the tiling of Figure 8.3 corresponds to a channel, which in turn corresponds to a scale j. The location of the tiles corresponding to each coefficient is shown in Figure 8.4. If at a particular scale, you imagine the translations along the k axis, you see the construction of the components of a signal at that scale. This makes it obvious that at lower resolutions (smaller j) the translations are large and at higher resolutions the translations are small.
The tiling of the time-frequency plane is a powerful graphical method for understanding the properties
of the DWT and for analyzing signals. For example, if the signal being analyzed were a single wavelet itself,
of the form
spreads differently in each scale. If the shift is not an integer, the energy spreads in both j and k. There is no such thing as a "scale-limited" signal corresponding to a band-limited (Fourier) signal if arbitrary shifting is allowed. For integer shifts, there is a corresponding concept [208].
An alternative way to obtain orthonormal wavelets ψ (t) is using unitary FIR filter bank (FB) theory. That will be done with M-band DWTs, wavelet packets, and time-varying wavelet transforms addressed in Section: Multiplicity-M (M-Band) Scaling Functions and Wavelets (Section 8.2: Multiplicity-M (M-Band) Scaling Functions and Wavelets) and Section: Wavelet Packets (Section 8.3: Wavelet Packets) and Chapter: Filter Banks and Transmultiplexers, respectively.
Remember that the tiles represent the relative size of the translations and scale change. They do not literally mean the partitioned energy is confined to the tiles. Representations with similar tilings can have very different characteristics.
In some cases, M may be allowed to be a rational number; however, in most cases it must be an integer, and in (6.1) it is required to be 2. In the frequency domain, this relationship becomes
Φ (ω) = (1/√M) H (ω/M) Φ (ω/M)    (8.5)
and the limit after iteration is
Φ (ω) = { ∏_{k=1}^{∞} (1/√M) H (ω/M^k) } Φ (0)    (8.6)
assuming the product converges and Φ (0) is well defined. This is a generalization of (6.52) and is derived in (6.74).
Σ_n h (n) = √M.    (8.7)
This is a generalization of the basic multiplicity-2 result in (6.10) and does not depend on any particular
normalization or orthogonality of φ (t).
Theorem 29 If integer translates of the solution to (8.4) are orthogonal, then
Σ_n h (n + M m) h (n) = δ (m).    (8.8)
This is a generalization of (6.14) and also does not depend on any normalization. An interesting corollary
of this theorem is
Corollary 3 If integer translates of the solution to (8.4) are orthogonal, then
Σ_n |h (n)|² = 1.    (8.9)
This is also true under weaker conditions than orthogonality as was discussed for the M = 2 case.
Using the Fourier transform, the following relations can be derived:
Theorem 30 If φ (t) is an L1 solution to (8.4) and ∫ φ (t) dt ≠ 0, then
H (0) = √M    (8.11)
which is a frequency domain existence condition.
Theorem 31 The integer translates of the solution to (8.4) are orthogonal if and only if
Σ_ℓ |Φ (ω + 2πℓ)|² = 1    (8.12)

Σ_n h (n + M m) h (n) = δ (m)    (8.13)
if and only if
|H (ω)|² + |H (ω + 2π/M)|² + |H (ω + 4π/M)|² + · · · + |H (ω + 2π (M − 1) /M)|² = M.    (8.14)
This is a frequency domain orthogonality condition on h (n).
Corollary 5
H (2πℓ/M) = 0, for ℓ = 1, 2, · · · , M − 1    (8.15)
which is a generalization of (6.20) stating where the zeros of H (ω), the frequency response of the scaling filter, are located. This is an interesting constraint on just where certain zeros of H (z) must be located.
Theorem 33 If Σ_n h (n) = √M, and h (n) has finite support or decays fast enough, then a φ (t) ∈ L2 that satisfies (8.4) exists and is unique.
Theorem 34 If Σ_n h (n) = √M and if Σ_n h (n) h (n − M k) = δ (k), then φ (t) exists, is integrable, and generates a wavelet system that is a tight frame in L2.
These results are a significant generalization of the basic M = 2 wavelet system that we discussed in the earlier chapters. The definitions, properties, and generation of these more general scaling functions have the same form as for M = 2, but there is no longer a single wavelet associated with the scaling function. There are M − 1 wavelets. In addition to (8.4) we now have M − 1 wavelet equations, which we denote as
ψ_ℓ (t) = √M Σ_n h_ℓ (n) φ (M t − n)    (8.16)
for
ℓ = 1, 2, · · · , M − 1.    (8.17)
Some authors use the notation h_0 (n) for h (n) and ψ_0 (t) for φ (t), so that h_ℓ (n) represents the coefficients for the scaling function and all the wavelets, and ψ_ℓ (t) represents the scaling function and all the wavelets. Just as for the M = 2 case, the multiplicity-M scaling function and scaling coefficients are unique and are simply the solution of the basic recursive or refinement equation (8.4). However, the wavelets and wavelet coefficients are no longer unique or easy to design in general.
We now have the possibility of a more general and more flexible multiresolution expansion system with the M-band scaling function and wavelets. There are now M − 1 signal spaces spanned by the M − 1 wavelets at each scale j. They are denoted
Figure 8.5: Vector Space Decomposition for a Four-Band Wavelet System, W`j
The expansion of a signal or function in terms of the M-band wavelets now involves a triple sum over ℓ, j, and k:
f (t) = Σ_k c (k) φ_k (t) + Σ_{k=−∞}^{∞} Σ_{j=0}^{∞} Σ_{ℓ=1}^{M−1} M^{j/2} d_{ℓ,j} (k) ψ_ℓ (M^j t − k)    (8.23)
and
d_{ℓ,j} (k) = ∫ f (t) M^{j/2} ψ_ℓ (M^j t − k) dt.    (8.25)
for ℓ = 1, 2, · · · , M − 1, then
Σ_n h (n) h_ℓ (n − M k) = 0    (8.27)
Figure 8.6: Filter Bank Structure for a Four-Band Wavelet System, W`j
Unlike the M = 2 case, for M > 2 there is no formula for h_ℓ (n) and there are many possible wavelets for a given scaling function.
Mallat's algorithm takes on a more complex form as shown in Figure 8.6. The advantage is a more flexible system that allows a mixture of linear and logarithmic tiling of the time-scale plane. A powerful tool that removes the ambiguity is choosing the wavelets by "modulated cosine" design.
Figure 8.7 shows the frequency response of the filter bank, much as Figure: Frequency Bands for the Analysis Tree (Figure 4.5) did for M = 2. Examples of scaling functions and wavelets are illustrated in , and the tiling of the time-scale plane is shown in Figure 8.9. Figure 8.9 shows the time-frequency resolution characteristics of a four-band DWT basis. Notice how it is different from the Standard, Fourier, DSTFT, and two-band DWT bases shown in earlier chapters. It gives a mixture of a logarithmic and linear frequency resolution.
Figure 8.7: Frequency Responses for the Four-Band Filter Bank, W`j
1. All moments of the wavelet filters are zero, µ_ℓ (k) = 0, for k = 0, 1, · · · , (K − 1) and for ℓ = 1, 2, · · · , (M − 1)
2. All moments of the wavelets are zero, m_ℓ (k) = 0, for k = 0, 1, · · · , (K − 1) and for ℓ = 1, 2, · · · , (M − 1)
3. The partial moments of the scaling filter are equal for k = 0, 1, · · · , (K − 1)
4. The frequency response of the scaling filter has zeros of order K at the Mth roots of unity, ω = 2πℓ/M for ℓ = 1, 2, · · · , M − 1.
5. The magnitude-squared frequency response of the scaling filter is flat to order 2K at ω = 0. This follows from (6.22).
6. All polynomial sequences up to degree (K − 1) can be expressed as a linear combination of integer-shifted scaling filters.
7. All polynomials of degree up to (K − 1) can be expressed as a linear combination of integer-shifted scaling functions for all j.
This powerful result [485], [254] is similar to the M = 2 case presented in Chapter: Regularity, Moments, and Wavelet System Design. It not only ties the number of zero moments to the regularity but also to the degree of polynomials that can be exactly represented by a sum of weighted and shifted scaling functions.
Note the locations of the zeros of H (z) are equally spaced around the unit circle, resulting in a narrower frequency response than for the half-band filters of the M = 2 case. This is consistent with the requirements given in (8.14) and illustrated in Section 8.1.2 (Tiling with the Discrete-Time Short-Time Fourier Transform).
Sketches of some of the derivations in this section are given in the Appendix or are simple extensions of the M = 2 case. More details are given in [192], [485], [254].
Figure 8.10: The full binary tree for the three-scale wavelet packet transform.
Figure 8.11 pictorially shows the signal vector space decomposition for the scaling functions and wavelets.
Figure 8.12 shows the frequency response of the packet filter bank, much as Figure: Frequency Bands for the Analysis Tree (Figure 4.5) did for M = 2 and Figure 8.7 did for the four-band wavelet system.
Figure 8.14 shows the Haar wavelet packets with which we finish the example started in Section: An Example of the Haar Wavelet System (Section 3.8: An Example of the Haar Wavelet System). This is an informative illustration that shows just what "packetizing" does to the regular wavelet system. It should be compared to the example at the end of Chapter: A multiresolution formulation of Wavelet Systems. This is similar to the Walsh-Hadamard decomposition, and Figure 8.13 shows the full wavelet packet system generated from the Daubechies φD8' scaling function. The "prime" indicates this is the Daubechies system with the spectral factorization chosen such that some zeros are inside the unit circle and some outside. This gives the maximum symmetry possible with a Daubechies system. Notice the three wavelets have increasing "frequency." They are somewhat like windowed sinusoids, hence the name, wavelet packet. Compare the wavelets with the M = 2 and M = 4 Daubechies wavelets.
Figure 8.11: Vector Space Decomposition for a M = 2 Full Wavelet Packet System
Figure 8.12: Frequency Responses for the Two-Band Wavelet Packet Filter Bank
Another is the full packet decomposition shown in Figure 8.10. Any pruning of this full tree would generate a valid packet basis system and would allow a very flexible tiling of the time-scale plane.
We can choose a set of basis vectors and form an orthonormal basis such that some cost measure on the transformed coefficients is minimized. Moreover, when the cost is additive, the best orthonormal wavelet packet transform can be found using a binary searching algorithm [93] in O (N log N) time.
Some examples of the resulting time-frequency tilings are shown in Figure 8.15. These plots demonstrate
the frequency adaptation power of the wavelet packet transform.
Figure 8.15: Examples of Time-Frequency Tilings of Different Three-Scale Orthonormal Wavelet Packet Transforms.
There are two approaches to using adaptive wavelet packets. One is to choose a particular decomposition (filter bank pruning) based on the characteristics of the class of signals to be processed, then to use the transform nonadaptively on the individual signals. The other is to adapt the decomposition for each individual signal. The first is a linear process over the class of signals. The second is not and will not obey superposition.
Let P (J) denote the number of different J-scale orthonormal wavelet packet transforms. We can easily see that
P (J) = P (J − 1)² + 1,   P (1) = 1.    (8.31)
So the number of possible choices grows dramatically as the scale increases. This is another reason for the wavelet packets to be a very powerful tool in practice. For example, the FBI standard for fingerprint image compression [42], [45] is based on wavelet packet transforms. The wavelet packets are successfully used for acoustic signal compression [570]. In [427], a rate-distortion measure is used with the wavelet packet transform to improve image compression performance.
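A few lines of MATLAB make the growth implied by (8.31) concrete (our own illustration):

P = 1;                                   % P(1) = 1
for J = 2:6
  P = P^2 + 1;                           % recursion (8.31)
  fprintf('J = %d   P(J) = %d\n', J, P);
end
% prints 2, 5, 26, 677, 458330 different orthonormal wavelet packet transforms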
M-band DWTs give a flexible tiling of the time-frequency plane. They are associated with a particular tree-structured filter bank, where the lowpass channel at any depth is split into M bands. Combining the M-band and wavelet packet structure gives a rather arbitrary tree-structured filter bank, where all channels are split into sub-channels (using filter banks with a potentially different number of bands), and would give a very flexible signal decomposition. The wavelet analog of this is known as the wavelet packet decomposition [93]. For a given signal or class of signals, one can, for a fixed set of filters, obtain the best (in some sense) filter bank tree-topology. For a binary tree, an efficient scheme using entropy as the criterion has been developed: the best wavelet packet basis algorithm [93], [427].
asymmetric analysis and synthesis systems. This section will develop the biorthogonal wavelet system using a nonorthogonal basis and dual basis to allow greater flexibility in achieving other goals at the expense of the energy partitioning property that Parseval's theorem states [75], [563], [514], [567], [401], [17], [74], [292], [435], [503]. Some researchers have considered "almost orthogonal" systems where there is some relaxation of the orthogonal constraints in order to improve other characteristics [404]. Indeed, many image compression schemes (including the fingerprint compression used by the FBI [42], [45]) use biorthogonal systems.
Let c1 (n), n ∈ Z, be the input to the filter banks in Figure 8.16; then the outputs of the analysis filter banks are
c0 (k) = Σ_n h̃ (2k − n) c1 (n),   d0 (k) = Σ_n g̃ (2k − n) c1 (n).    (8.32)
Substituting Equation (8.32) into (8.33) and interchanging the summations gives
c̃1 (m) = Σ_n Σ_k [ h (2k − m) h̃ (2k − n) + g (2k − m) g̃ (2k − n) ] c1 (n).    (8.34)
Fortunately, this condition can be greatly simplified. In order for it to hold, the four filters have to be related as [75]
g̃ (n) = (−1)^n h (1 − n),   g (n) = (−1)^n h̃ (1 − n),    (8.36)
up to some constant factors. Thus they are cross-related by time reversal and flipping signs of every other element. Clearly, when h̃ = h, we get the familiar relation between the scaling coefficients and the wavelet coefficients for orthogonal wavelets, g (n) = (−1)^n h (1 − n). Substituting (8.36) back into (8.35), we get
Σ_n h̃ (n) h (n + 2k) = δ (k).    (8.37)
In the orthogonal case, we have Σ_n h (n) h (n + 2k) = δ (k); i.e., h (n) is orthogonal to its own even translations.
f ∈ L2 (R),
f = Σ_{j,k} < f, ψ_{j,k} > ψ̃_{j,k} = Σ_{j,k} < f, ψ̃_{j,k} > ψ_{j,k}    (8.45)

< ψ_{j,k}, ψ̃_{j′,k′} > = δ (j − j′) δ (k − k′)    (8.46)
if and only if
∫ Φ (x) Φ̃ (x − k) dx = δ (k).    (8.47)
This theorem tells us that under some technical conditions, we can expand functions using the wavelets and
reconstruct using their duals. The multiresolution formulations in Chapter: A multiresolution formulation
of Wavelet Systems (Chapter 3) can be revised as
Although Wj is not the orthogonal complement to Vj in Vj+1 as before, the dual space W̃j plays the much
needed role. Thus we have four sets of spaces that form two hierarchies to span L2 (R).
In Section: Further Properties of the Scaling Function and Wavelet (Section 6.8: Further Properties of
the Scaling Function and Wavelet), we have a list of properties of the scaling function and wavelet that do
not require orthogonality. The results for regularity and moments in Chapter: Regularity, Moments, and
Wavelet System Design can also be generalized to the biorthogonal systems.
• In statistical signal processing, white Gaussian noise remains white after orthogonal transforms. If
the transforms are nonorthogonal, the noise becomes correlated or colored. Thus, when biorthogonal
wavelets are used in estimation and detection, we might need to adjust the algorithm to better address
the colored noise.
h/√2                        h̃/√2
1/2, 1/2                    −1/16, 1/16, 1/2, 1/16, −1/16
1/4, 1/2, 1/4               −1/8, 1/4, 3/4, 1/4, −1/8
1/8, 3/8, 3/8, 1/8          −5/512, 15/512, 19/512, −97/512, −13/256, 175/256, · · ·
Table 8.1: Coefficients for Some Members of the Cohen-Daubechies-Feauveau Family of Biorthogonal Spline Wavelets (for longer filters, we only list half of the coefficients)
M (ω) + M (ω + π) = 2, (8.54)
and the resulting compactly supported orthogonal wavelet has the maximum number of zero moments
possible for its length. In the orthogonal case, we get a scaling filter by factoring M (ω) as H (ω) H* (ω).
Here in the biorthogonal case, we can factor the same M (ω) to get H (ω) and H̃ (ω).
Factorizations that lead to symmetric h and h̃ with similar lengths have been found in [75], and their coefficients are listed in Table 8.2. Plots of the scaling and wavelet functions, which are members of the family used in the FBI fingerprint compression standard, are in Figure 8.17.
h̃ h
0.85269867900889 0.78848561640637
0.37740285561283 0.41809227322204
-0.11062440441844 -0.04068941760920
-0.02384946501956 -0.06453888262876
0.03782845550726
Table 8.2: Coefficients for One of the Cohen-Daubechies-Feauveau Family of Biorthogonal Wavelets that is Used in the FBI Fingerprint Compression Standard (we only list half of the coefficients)
Figure 8.17: Plots of Scaling Function and Wavelet and their Duals for one of the Cohen-Daubechies-
Feauveau Family of Biorthogonal Wavelets that is Used in the FBI Fingerprint Compression Standard
[365], [367], [171], [291], [540], [152], and has been systematically developed recently [503], [501]. The key idea is to build complicated biorthogonal systems using simple and invertible stages. The first stage does nothing but separate even and odd samples, and it is easily invertible. The structure is shown in Figure 8.18, and is called the lazy wavelet transform in [503].
√2 h                                    √2 h̃
1, 1                                    1, 1
1/2, 1, 1/2                             −1/4, 1/2, 3/2, 1/2, −1/4
3/8, 1, 3/4, 0, −1/8                    3/64, 0, −3/16, 3/8, 41/32, 3/4, −3/16, −1/8, 3/64
−1/16, 0, 9/16, 1, 9/16, 0, −1/16       −1/256, 0, 9/128, −1/16, −63/256, 9/16, 87/64, · · ·
Table 8.3: Coefficients for Some Members of the Biorthogonal Coiflets (for longer filters, we only list half of the coefficients)
After splitting the data into two parts, we can predict one part from the other, and keep only the prediction error, as in Figure 8.19. We can reconstruct the data by recomputing the prediction and then adding it back. In Figure 8.19, s and t are prediction filters.
By concatenating simple stages, we can implement the forward and inverse wavelet transforms as in Figure 8.20. This is also called the ladder structure, and the reason for the name is clear from the figure. Clearly, the system is invertible, and thus biorthogonal. Moreover, it has been shown that orthogonal wavelet systems can also be implemented using lifting [133]. The advantages of lifting are numerous (a small sketch of the predict/update idea is given after the following list):
• Lifting steps can be calculated in place. As seen in Figure 8.20, the prediction outputs based on one channel of the data can be added to or subtracted from the data in other channels, and the results can be saved in the same place in the second channel. No auxiliary memory is needed.
• The predictors s and t do not have to be linear. Nonlinear operations like the median filter or rounding can be used, and the system remains invertible. This allows a very simple generalization to nonlinear wavelet transforms or nonlinear multiresolution analysis.
• The design of biorthogonal systems boils down to the design of the predictors. This may lead to simple approaches that do not rely on the Fourier transform [503], and can be generalized to irregular samples or manifolds.
• For biorthogonal systems, the lifting implementations require fewer numerical operations than direct implementations [133]. For orthogonal cases, the lifting schemes have a computational complexity similar to that of the lattice factorizations, which is almost half of the direct implementation.
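A minimal MATLAB sketch of the predict/update idea, using the simplest (Haar-like) predictor as an assumed example rather than the general lifting factorization:

x  = randn(1, 8);                    % any even-length signal
xe = x(1:2:end);  xo = x(2:2:end);   % lazy wavelet transform: split even and odd samples
d  = xo - xe;                        % predict the odd samples from the even ones; keep the error
s  = xe + d/2;                       % update so that s carries the pairwise averages
% inverse transform: undo the steps in the reverse order
xe2 = s - d/2;   xo2 = xe2 + d;
xr = zeros(size(x));  xr(1:2:end) = xe2;  xr(2:2:end) = xo2;
max(abs(x - xr))                     % zero to machine precision, so the structure is invertible

Whatever predictor and update are chosen, the inverse simply subtracts what was added and adds what was subtracted, which is why nonlinear predictors still give an invertible system.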
8.5 Multiwavelets
In Chapter: A multiresolution formulation of Wavelet Systems, we introduced the multiresolution analysis
for the space of L2 functions, where we have a set of nesting subspaces
Vj = Span_k {2^{j/2} φ (2^j t − k)}.    (8.56)
The direct differences between nesting subspaces are spanned by translations of a single wavelet at the corresponding scale; e.g.,
There are several limitations of this construction. For example, nontrivial orthogonal wavelets cannot be symmetric. To avoid this problem, we generalized the basic construction and introduced multiplicity-M (M-band) scaling functions and wavelets in Section 8.2 (Multiplicity-M (M-Band) Scaling Functions and Wavelets), where the difference spaces are spanned by translations of M − 1 wavelets. The scaling is in terms of the power of M; i.e.,
In general, there are more degrees of freedom in designing the M-band wavelets. However, the nested V spaces are still spanned by translations of a single scaling function. It is multiwavelets that remove this restriction, thus allowing multiple scaling functions to span the nested V spaces [184], [183], [497]. Although it is possible to construct M-band multiwavelets, here we only present results on the two-band case, as most of the research in the literature does.
where H (k) is an R×R matrix for each k ∈ Z. This is a matrix version of the scalar recursive equation (3.13). The first and simplest multiscaling functions probably appear in [19], and they are shown in Figure 8.21.
The first scaling function φ1 (t) is nothing but the Haar scaling function, and it is the sum of two time-compressed and shifted versions of itself, as shown in (a). The second scaling function can be easily decomposed into linear combinations of time-compressed and shifted versions of the Haar scaling function and itself, as
φ2 (t) = (√3/2) φ1 (2t) + (1/2) φ2 (2t) − (√3/2) φ1 (2t − 1) + (1/2) φ2 (2t − 1).    (8.63)
This is shown in Figure 8.22.
Since W0 ⊂ V1, for the stacked wavelets Ψ (t) there must exist a sequence of R × R matrices G (k), such that
Ψ (t) = √2 Σ_k G (k) Φ (2t − k).    (8.66)
These are vector versions of the two-scale recursive equations.
We can also define the discrete-time Fourier transforms of H (k) and G (k) as
H (ω) = Σ_k H (k) e^{iωk},   G (ω) = Σ_k G (k) e^{iωk}.    (8.67)
8.5.4 Support
In general, the finite length of H (k) and G (k) ensures the finite support of Φ (t) and Ψ (t). However, there are no straightforward relations between the support length and the number of nonzero coefficients in H (k) and G (k). An explanation is the existence of nilpotent matrices [479]. A method to estimate the support is developed in [479].
8.5.5 Orthogonality
For these scaling functions and wavelets to be orthogonal to each other and orthogonal to their translations,
we need [489]
Dj (k) = [d_{1,j} (k), ..., d_{R,j} (k)]^T.    (8.74)
For f (t) in V0, it can be written as linear combinations of scaling functions and wavelets,
f (t) = Σ_k C_{j0} (k)^T Φ_{j0,k} (t) + Σ_{j=j0}^{∞} Σ_k Dj (k)^T Ψ_{j,k} (t).    (8.75)
and
D_{j−1} (k) = √2 Σ_n G (n) Cj (2k + n).    (8.77)
Moreover,
Cj (k) = √2 Σ_n [ H†(k − 2n) C_{j−1} (n) + G†(k − 2n) D_{j−1} (n) ].    (8.78)
These are the vector forms of (4.9), (4.10), and (4.17). Thus the synthesis and analysis filter banks for multiwavelet transforms have structures similar to the scalar case. The difference is that the filter banks operate on blocks of R inputs and the filtering and rate-changing are all done in terms of blocks of inputs.
To start the multiwavelet transform, we need to get the scaling coefficients at high resolution. Recall that in the scalar case, the scaling functions are close to delta functions at very high resolution, so the samples of the function are used as the scaling coefficients. However, for multiwavelets we need the expansion coefficients for R scaling functions. Simply using nearby samples as the scaling coefficients is a bad choice. Data samples need to be preprocessed (prefiltered) to produce reasonable values of the expansion coefficients for the multiscaling functions at the highest scale. Prefilters have been designed based on interpolation [582], approximation [244], and orthogonal projection [556].
8.5.7 Examples
Because of the larger degree of freedom, many methods for constructing multiwavelets have been developed.
are both symmetric and orthogonal, a combination which is impossible for two-band orthogonal scalar wavelets. They also have short support, and can exactly reproduce the hat function. These interesting properties make multiwavelets a promising expansion system.
8.5.11 Applications
Multiwavelets have been used in data compression [252], [326], [498], noise reduction [166], [498], and solution of integral equations [63]. Because multiwavelets are able to offer a combination of orthogonality, symmetry, higher order of approximation, and short support, methods using multiwavelets frequently outperform those using comparable scalar wavelets. However, it is found that prefiltering is very important, and should be chosen carefully for the application [166], [498], [582]. Also, since discrete multiwavelet transforms operate on size-R blocks of data and generate blocks of wavelet coefficients, the correlation within each block of coefficients needs to be exploited. For image compression, prediction rules are proposed to exploit the correlation in order to reduce the bit rate [326]. For noise reduction, jointly thresholding the coefficients within each block improves the performance [166].
• Sparsity: The expansion should have most of the important information in the smallest number of coefficients so that the others are small enough to be neglected or set equal to zero. This is important for compression and denoising.
• Separation: If the measurement consists of a linear combination of signals with different characteristics, the expansion coefficients should clearly separate those signals. If a single signal has several features of interest, the expansion should clearly separate those features. This is important for filtering and detection.
• Superresolution: The resolution of signals or characteristics of a signal should be much better than with a traditional basis system. This is likewise important for linear filtering, detection, and estimation.
• Stability: The expansions in terms of our new overcomplete systems should not be significantly changed by perturbations or noise. This is important in implementation and data measurement.
• Speed: The numerical calculation of the expansion coefficients in the new overcomplete system should be of order O (N) or O (N log (N)).
These criteria are often in conflict with each other, and various compromises will be made in the algorithms and problem formulations for an acceptable balance.
y = Xα    (8.80)
where y is an N × 1 vector with elements being the signal values y (n), the matrix X is N × K, the columns of which are made up of all the functions in the dictionary, and α is a K × 1 vector of the expansion coefficients α_k. The matrix operator has the basis signals x_k as its columns so that the matrix multiplication (8.80) is simply the signal expansion (8.79).
For a given signal representation problem, one has two decisions: what dictionary to use (i.e., choice of
the X) and how to represent the signal in terms of this dictionary (i.e., choice of α). Since the dictionary is
overcomplete, there are several possible choices of α and typically one uses prior knowledge or one or more
of the desired properties we saw earlier to calculate the α.
Decomposition with this rather trivial operator gives a time-domain description in that the first expansion coefficient α0 is simply the first value of the signal, x (0), and the second coefficient is the second value of the signal. Using a different set of basis vectors might give the operator
X = [ 0.7071  0.7071 ;  0.7071  −0.7071 ]    (8.83)
which has the normalized basis vectors still orthogonal but now at a 45° angle from the basis vectors in (8.82). This decomposition is a sort of frequency domain expansion. The first column vector is simply the constant signal, and its expansion coefficient α (0) will be the average of the signal. The coefficient of the second vector calculates the difference of y (0) and y (1) and, therefore, is a measure of the change.
Notice that y = [1, 0]^T can be represented exactly with only one nonzero coefficient using (8.82) but will require two with (8.83), while for y = [1, 1]^T the opposite is true. This means the signals y = [1, 0]^T and y = [0, 1]^T can be represented sparsely by (8.82) while y = [1, 1]^T and y = [1, −1]^T can be represented sparsely by (8.83).
If we create an overcomplete expansion by a linear combination of the previous orthogonal basis systems,
then it should be possible to have a sparse representation for all four of the previous signals. This is done
by simply adding the columns of (8.83) to those of (8.82) to give
X = [ 1  0  0.7071  0.7071 ;  0  1  0.7071  −0.7071 ]    (8.84)
This is clearly overcomplete, having four expansion vectors in a two-dimensional system. Finding αk requires
solving a set of underdetermined equations, and the solution is not unique.
For example, if the signal is given by
y = [1, 0]^T    (8.85)
there are an infinity of solutions, several of which are listed in the following table.
Case      1        2        3        4        5        6        7
α0        0.5000   1.0000   1.0000   1.0000   0        0        0
α1        0.0000   0.0000   0        0        -1.0000  1.0000   0
α2        0.3536   0        0.0000   0        1.4142   0        0.7071
α3        0.3536   0        0        0.0000   0        1.4142   0.7071
||α||²    0.5000   1.0000   1.0000   1.0000   3.0000   3.0000   1.0000
Table 8.4
Case 1 is the minimum norm solution of y = X α for αk . It is calculated by a pseudo inverse with
the Matlab command a = pinv(X)*y . It is also the redundant DWT discussed in the next section and
calculated by a = X'*y/2. Case 2 is the minimum norm solution, but for no more than two nonzero values
of αk . Case 2 can also be calculated by inverting the matrix (8.84) with columns 3 and 4 deleted. Case 3 is
calculated the same way with columns 2 and 4 deleted, case 4 has columns 2 and 3 deleted, case 5 has 1 and
4 deleted, case 6 has 1 and 3 deleted, and case 7 has 1 and 2 deleted. Cases 3 through 7 are unique since
the reduced matrix is square and nonsingular. The second term of α for case 1 is zero because the signal is
orthogonal to that expansion vector. Notice that the norm of α is minimum for case 1 and is equal to the
norm of y divided by the redundancy, here two. Also notice that the coefficients in cases 2, 3, and 4 are the same even though calculated by different methods.
Because X is not only a frame, but a tight frame with a redundancy of two, the energy (norm squared)
of α is one-half the norm squared of y. The other decompositions (not tight frame or basis) do not preserve
the energy.
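These calculations are easy to reproduce. The MATLAB fragment below builds the frame of (8.84), computes the minimum norm solution of case 1 with the commands quoted above, and checks the tight frame energy relation:

X  = [1 0 0.7071 0.7071; 0 1 0.7071 -0.7071];   % overcomplete system of (8.84)
y  = [1; 0];                                    % signal of (8.85)
a1 = pinv(X)*y;                                 % case 1: minimum norm solution
a2 = X'*y/2;                                    % same result from the tight frame property
[norm(a1)^2  norm(y)^2/2]                       % both approximately 0.5: energy divided by the redundancy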
Next consider a two-dimensional signal that cannot be exactly represented by only one expansion vector.
If the unity norm signal is given by
y = [0.9806, 0.1961]^T    (8.86)
the expansion coefficients are listed next for the same cases described previously.
Case      1        2        3        4        5        6        7
α0        0.4903   0.9806   0.7845   1.1767   0        0        0
α1        0.0981   0.1961   0        0        -0.7845  1.1767   0
α2        0.4160   0        0.2774   0        1.3868   0        0.8321
α3        0.2774   0        0        -0.2774  0        1.3868   0.5547
||α||²    0.5000   1.0000   0.6923   1.4615   2.5385   3.3077   1.0000
Table 8.5
Again, case 1 is the minimum norm solution; however, it has no zero components this time because there are no expansion vectors orthogonal to the signal. Since the signal lies between the 0° and 45° expansion vectors, it is case 3, which uses those two vectors, that has the least two-vector energy representation.
There are an infinite variety of ways to construct the overcomplete frame matrix X. The one in this example is a four-vector tight frame. Each vector is 45° apart from nearby vectors. Thus they are evenly distributed in the upper 180° half plane of the two-dimensional space. The lower half plane is covered by the negatives of these frame vectors. A three-vector tight frame would have three columns, each 60° from the others in the two-dimensional plane. A 36-vector tight frame would have 36 columns spaced 5° from each other. In that system, any signal vector would be very close to an expansion vector.
Still another alternative would be to construct a frame (not tight) with nonorthogonal rows. This would
result in columns that are not evenly spaced but might better describe some particular class of signals.
Indeed, one can imagine constructing a frame operator with closely spaced expansion vectors in the regions
where signals are most likely to occur or where they have the most energy.
We next consider a particular modified tight frame constructed so as to give a shift-invariant DWT. If the signal is
$$y(n) = \varphi\!\left(2^4 n - 10\right), \qquad (8.87)$$
a shift of it produces many more nonzero DWT coefficients because at this shift or translation, the signal is no longer orthogonal to most of the basis functions.
The signal energy would be partitioned over many more coefficients and, therefore, because of Parseval's
theorem, the coefficients would be smaller. This would degrade any denoising or compression using thresholding schemes. The
DWT described in Chapter: Calculation of the Discrete Wavelet Transform is periodic in that at each scale
j the periodized DWT repeats itself after a shift of $n = 2^j$, but the period depends on the scale. This can
also be seen from the filter bank calculation of the DWT, where each scale goes through a different number
of decimators and therefore has a different aliasing.
A method to create a linear, shift-invariant DWT is to construct a frame from the orthogonal DWT
supplemented by shifted orthogonal DWTs using the ideas from the previous section. If you do this, the
result is a frame and, because of the redundancy, is called the redundant DWT or RDWT.
The typical wavelet-based signal processing framework consists of the following three simple steps: 1)
wavelet transform; 2) point-by-point processing of the wavelet coefficients (e.g., thresholding for denoising,
quantization for compression); 3) inverse wavelet transform. The diagram of the framework is shown in
Figure 8.25. As mentioned before, the wavelet transform is not translation-invariant, so if we shift the
signal, perform the above processing, and shift the output back, then the results are different for different
shifts. Since the frame vectors of the RDWT consist of the shifted orthogonal DWT basis, if we replace the
forward/inverse wavelet transform
Figure 8.25: The Typical Wavelet Transform Based Signal Processing Framework (∆ denotes the
pointwise processing)
Figure 8.26: The Typical Redundant Wavelet Transform Based Signal Processing Framework (∆ de-
notes the pointwise processing)
in the above framework by the forward/inverse RDWT, then the result of the scheme in Figure 8.26 is
the same as the average of all the processing results using DWTs with different shifts of the input data. This
is one of the main reasons that RDWT-based signal processing tends to be more robust.
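As a rough illustration of this averaging interpretation, the following Matlab-style sketch (assuming the Wavelet Toolbox functions wavedec and waverec; the wavelet 'db4', depth J, and threshold T are arbitrary choices) thresholds the DWT of every circular shift of the input and averages the shifted-back results.

x = randn(1, 256);                 % stand-in for a noisy signal
J = 4;  T = 0.5;                   % illustrative depth and threshold
N = numel(x);  yhat = zeros(1, N);
for s = 0:N-1                      % process every circular shift and average
    [c, l] = wavedec(circshift(x, [0 s]), J, 'db4');
    c = c .* (abs(c) > T);         % hard-threshold the DWT coefficients
    r = waverec(c, l, 'db4');
    yhat = yhat + circshift(r(:).', [0 -s]);
end
yhat = yhat / N;                   % average over all shifts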
Still another view of this new transform can be had by looking at the Mallat-derived filter bank described
in Chapter: The Scaling Function and Scaling Coefficients, Wavelet and Wavelet Coefficients and Chapter:
Filter Banks and Transmultiplexers. The DWT filter banks illustrated in Figure: Two-Stage Two-Band
Analysis Tree (Figure 4.3) and Figure: Two-Band Synthesis Bank (Figure 4.6) can be modified by removing
the decimators between each stage to give the coefficients of the tight frame expansion (the RDWT) of
the signal. We call this structure the undecimated filter bank. Notice that, without the decimation, the
number of terms in the DWT is larger than N. However, since these are the expansion coefficients in our
new overcomplete frame, that is consistent. Also, notice that this idea can be applied to M-band wavelets
and wavelet packets in the same way.
These RDWTs are not precisely a tight frame because each scale has a different redundancy. However,
except for this factor, the RDWT and the undecimated filter bank have the same characteristics as a tight frame and
they support a form of Parseval's theorem or energy partitioning.
If we use this modified tight frame as a dictionary to choose a particular subset of expansion vectors as
a new frame or basis, we can tailor the system to the signal or signal class. This is discussed in the next
section on adaptive systems.
This idea of the RDWT was suggested by Mallat [348], Beylkin [33], Shensa [467], Dutilleux [169], Nason
[392], Guo [230], [231], Coifman, and others. This redundancy comes at the price of the new RDWT having
O(N log(N)) arithmetic complexity rather than O(N). Liang and Parks [328], [330], Bao and Erdol [24],
[25], Marco and Weiss [361], [359], [360], Daubechies [122], and others [416] have used some form of averaging
or "best basis" transform to obtain shift invariance.
Recent results indicate this nondecimated DWT, together with thresholding, may be the best denoising
strategy [163], [162], [305], [88], [302], [223], [308], [231]. The nondecimated DWT is shift invariant, is less
affected by noise, quantization, and error, and has order N log(N) storage and arithmetic complexity. It
combines with thresholding to give denoising and compression superior to the classical Donoho method for
many examples. Further discussion of the use of the RDWT can be found in Section: Nonlinear Filtering or
Denoising with the DWT (Section 4.3: Input Coefficients).
Since these finite-dimensional overcomplete systems are a frame, a subset of the expansion vectors can
be chosen to be a basis while keeping most of the desirable properties of the frame. This is described well
by Chen and Donoho in [58], [60]. Several of these methods are outlined as follows:
• The method of frames (MOF) was first described by Daubechies [109], [114], [122] and uses the rather
straightforward idea of solving the overcomplete frame (underdetermined set of equations) in (8.84)
by minimizing the L2 norm of α. Indeed, this is one of the classical definitions of solving the normal
equations or the use of a pseudo-inverse. That can easily be done in Matlab by a = pinv(X)*y. This
gives a frame solution, but it is usually not sparse.
• The best orthogonal basis method (BOB) was proposed by Coifman and Wickerhauser [93], [156] to
adaptively choose a best basis from a large collection. The method is fast (order N logN) but not
necessarily sparse.
• Mallat and Zhang [350] proposed a sequential selection scheme called matching pursuit (MP) which
builds a basis, vector by vector. The efficiency of the algorithm depends on the order in which vectors
are added. If poor choices are made early, it takes many terms to correct them. Typically this method
also does not give sparse representations.
• A method called basis pursuit (BP) was proposed by Chen and Donoho [58], [60] which solves (8.84)
while minimizing the L1 norm of α. This is done by linear programming and results in a globally
optimal solution. It is similar in philosophy to the MOF but uses an L1 norm rather than an L2 norm
and uses linear programming to obtain the optimization. Using interior point methods, it is reasonably
efficient and usually gives a fairly sparse solution.
• Krim et al. describe a best basis method in [299]. Tewfik et al. propose a method called optimal
subset selection in [390] and others are [30], [100].
All of these methods are very signal and problem dependent and, in some cases, can give much better results
than the standard M-band or wavelet packet based methods.
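For the small frame of (8.84), the MOF and BP solutions can be compared directly; the following Matlab-style sketch assumes the Optimization Toolbox function linprog and uses the standard split of α into nonnegative parts for the L1 problem.

X = [1 0 0.7071 0.7071; 0 1 0.7071 -0.7071];
y = [0.9806; 0.1961];
n = size(X, 2);

alpha_mof = pinv(X)*y;                     % method of frames: minimum L2 norm

f   = ones(2*n, 1);                        % minimize sum(u)+sum(v) = ||alpha||_1
Aeq = [X, -X];  beq = y;                   % X*(u - v) = y
uv  = linprog(f, [], [], Aeq, beq, zeros(2*n,1));
alpha_bp = uv(1:n) - uv(n+1:end);          % basis pursuit: tends to be sparser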
where the functions χj,k (t) are of the form (for example)
By the orthogonality of these basis functions, the coefficients (the transform) are found by an inner product
$$a_k(n) = \langle f(t), \chi_{k,n}(t)\rangle = \int f(t)\, \chi_{k,n}(t)\, dt. \qquad (8.91)$$
We will now examine how this can be achieved and what the properties of the expansion are.
Fundamentally, the wavelet packet system decomposes L2 (R) into a direct sum of orthogonal spaces,
each typically covering a certain frequency band and spanned by the translates of a particular element of
the wavelet packet system. With wavelet packets, a time-frequency tiling with flexible frequency resolution is
possible. However, the temporal resolution is determined by the frequency band associated with a particular
element in the packet.
Local trigonometric bases [573], [22] are duals of wavelet packets in the sense that these bases give flexible
temporal resolution. In this case, L2 (R) is decomposed into a direct sum of spaces, each typically covering
a particular time interval. The basis functions are all modulates of a fixed window function.
One could argue that an obvious approach is to partition the time axis into disjoint bins and use a
Fourier series expansion in each temporal bin. However, since the basis functions are rectangular-windowed
exponentials, they are discontinuous at the bin boundaries and hence undesirable in the analysis of smooth
signals. If one replaces the rectangular window with a smooth window, then, since products of smooth
functions are smooth, one can generate smooth windowed exponential basis functions. For example, if the
time axis is split uniformly, one is looking at basis functions of the form $\{w(t-k)\, e^{i 2\pi n t}\}$, $k, n \in \mathbf{Z}$, for some
smooth window function w(t). Unfortunately, orthonormality disallows the function w(t) from being well-concentrated
in time or in frequency, which is undesirable for time-frequency analysis. More precisely, the
Balian-Low theorem (see p. 108 in [122]) states that the Heisenberg product of g (the product of the time-spread
and frequency-spread, which is lower bounded by the Heisenberg uncertainty principle) is infinite.
However, it turns out that windowed trigonometric bases (that use cosines and sines but not exponentials)
can be orthonormal, and the window can have a finite Heisenberg product [128]. That is the reason why we
are looking for local trigonometric bases of the form given in (8.90).
Indeed, these orthonormal bases are obtained from the Fourier series on (−2, 2) (the first two) and on
(−1, 1) (the last two) by appropriately imposing symmetries and hence are readily verified to be complete
and orthonormal on (0, 1). If we choose a set of nonoverlapping rectangular window functions $w_k(t)$ such
that $\sum_k w_k(t) = 1$ for all $t \in \mathbf{R}$, and define $\chi_{k,n}(t) = w_k(t)\, \Phi_n(t)$, then $\{\chi_{k,n}(t)\}$ is a local trigonometric
basis for $L^2(\mathbf{R})$ for each of the four choices of $\Phi_n(t)$ above.
functions. However, since unfolding is unitary, the resulting functions still form an orthonormal basis. The
unfolding operator is parameterized by a function r(t) that satisfies an algebraic constraint (which makes
the operator unitary). The smoothness of the resulting basis functions depends on the smoothness of this
underlying function r (t).
The function r(t), referred to as a rising cutoff function, satisfies the following conditions (see Figure 8.27):
$$|r(t)|^2 + |r(-t)|^2 = 1 \;\text{ for all } t \in \mathbf{R}; \qquad r(t) = \begin{cases} 0, & \text{if } t \le -1 \\ 1, & \text{if } t \ge 1. \end{cases} \qquad (8.92)$$
r(t) is called a rising cutoff function because it rises from 0 to 1 on the interval [−1, 1] (note: it does not
necessarily have to be monotone increasing). Multiplying a function by r(t) would localize it to [−1, ∞).
Every real-valued function r(t) satisfying (8.92) is of the form r(t) = sin(θ(t)), where
$$\theta(t) + \theta(-t) = \frac{\pi}{2} \;\text{ for all } t \in \mathbf{R}; \qquad \theta(t) = \begin{cases} 0, & \text{if } t \le -1 \\ \pi/2, & \text{if } t \ge 1. \end{cases} \qquad (8.93)$$
This ensures that $r(-t) = \sin(\theta(-t)) = \sin\!\left(\frac{\pi}{2} - \theta(t)\right) = \cos(\theta(t))$ and therefore $r^2(t) + r^2(-t) = 1$. One
can easily construct arbitrarily smooth rising cutoff functions. We give one such recipe from [573] (p. 105).
Start with the function
$$r^{[0]}(t) = \begin{cases} 0, & \text{if } t \le -1 \\ \sin\!\left(\frac{\pi}{4}(1+t)\right), & \text{if } -1 < t < 1 \\ 1, & \text{if } t \ge 1. \end{cases} \qquad (8.94)$$
It is readily verified to be a rising cutoff function. Now recursively define $r^{[1]}(t), r^{[2]}(t), \dots$ as follows:
$$r^{[n+1]}(t) = r^{[n]}\!\left(\sin\frac{\pi}{2} t\right). \qquad (8.95)$$
Notice that $r^{[n]}(t)$ is a rising cutoff function for every n. Moreover, by induction on n it is easy to show
that $r^{[n]}(t) \in C^{2^n - 1}$ (it suffices to show that the derivatives at t = −1 and t = 1 exist and are zero up to order
$2^n - 1$).
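A small numerical sketch of this recipe (with the recursion applied on the transition interval [−1, 1] and the function held at 0 or 1 outside it) is:

r0 = @(t) (abs(t) < 1).*sin(pi/4*(1 + t)) + (t >= 1);
step = @(r) @(t) (abs(t) < 1).*r(sin(pi/2*t)) + (t >= 1);
r1 = step(r0);                     % r^{[1]}
r2 = step(r1);                     % r^{[2]}, flatter (smoother) at t = -1 and t = 1
t  = linspace(-1.5, 1.5, 301);
max(abs(r2(t).^2 + r2(-t).^2 - 1)) % ~ 0 : checks |r(t)|^2 + |r(-t)|^2 = 1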
one can define $U(r, t_0, \varepsilon)$ and $U^{\star}(r, t_0, \varepsilon)$ that fold and unfold, respectively, about $t = t_0$ with action
region $[t_0 - \varepsilon, t_0 + \varepsilon]$ and action radius $\varepsilon$.
Notice that (8.96) and (8.97) do not define the values $U(r)f(0)$ and $U^{\star}(r)f(0)$ because of the discontinuity
that is potentially introduced. An elementary exercise in calculus divulges the nature of this
discontinuity. If $f \in C^d(\mathbf{R})$, then $U(r)f \in C^d(\mathbf{R} \setminus \{0\})$. At t = 0, left and right derivatives exist, with all
even-order left-derivatives and all odd-order right-derivatives (up to and including d) being zero. Conversely,
given any function $f \in C^d(\mathbf{R} \setminus \{0\})$ which has a discontinuity of the above type, $U^{\star}(r)f$ has a unique
extension across t = 0 (i.e., a choice of value for $U^{\star}(r)f(0)$) that is in $C^d(\mathbf{R})$. One can switch the signs
in (8.96) and (8.97) to obtain another set of folding and unfolding operators. In this case, for $f \in C^d(\mathbf{R})$,
$U(r)f$ will have its even-order right derivatives and odd-order left derivatives equal to zero. We will use $U_+$, $U_+^{\star}$
and $U_-$, $U_-^{\star}$, respectively, to distinguish between the two types of folding/unfolding operators
and call them positive and negative polarity folding/unfolding operators, respectively.
So far we have seen that the folding operator is associated with a rising cutoff function, acts at a certain
point, has a certain action region and radius, and has a certain polarity. To get a qualitative idea of what
these operators do, let us look at some examples.
First, consider a case where f(t) is even- or odd-symmetric about the folding point on the action interval.
Then Uf corresponds to simply windowing f by an appropriate window function. Indeed, if f(t) = f(−t)
on [−1, 1],
We saw that for signals with symmetry in the action region, folding corresponds to windowing. Next we
look at signals that are supported to the right (or left) of the folding point and see what unfolding does to
them. In this case, $U_+^{\star}(r)(f)$ is obtained by windowing the even (or odd) extension of f(t) about the
folding point. Indeed, if f(t) = 0, t < 0,
If the polarity is reversed, the effects on signals on the half-line are switched; the right half-line is associated
with windowed odd extensions and the left half-line with windowed even extensions.
The basis functions have discontinuities at t = 0 and t = 1 because they are restrictions of the cosines
and sines to the unit interval by rectangular windowing. The natural extensions of these basis functions
to $t \in \mathbf{R}$ (i.e., unwindowed cosines and sines) are either even (say +) or odd (say −) symmetric (locally)
about the endpoints t = 0 and t = 1. Indeed, the basis functions for the four cases are (+, −), (−, +),
(+, +) and (−, −) symmetric, respectively, at (0, 1). From the preceding analysis, this means that unfolding
these basis functions corresponds to windowing if the unfolding operator has the right polarity. Also observe
that the basis functions are discontinuous at the endpoints. Moreover, depending on the symmetry at each
endpoint, all odd derivatives (for + symmetry) or even derivatives (for − symmetry) are zero. By choosing
unfolding operators of appropriate polarity at the endpoints (with nonoverlapping action regions) for the
four bases, we get smooth basis functions of compact support. For example, for (+, −) symmetry, the basis
function $U_+(r_0, 0, \varepsilon_0)\, U_+(r_1, 1, \varepsilon_1)\, \psi_n(t)$ is supported in $(-\varepsilon_0,\, 1 + \varepsilon_1)$ and is as many times continuously
differentiable as $r_0$ and $r_1$ are.
Let $\{t_j\}$ be an ordered set of points in $\mathbf{R}$ defining a partition into disjoint intervals $I_j = [t_j, t_{j+1}]$. Now
choose one of the four bases above for each interval such that at $t_j$ the basis functions for $I_{j-1}$ and those for
$I_j$ have opposite symmetries. We say the polarity at $t_j$ is positive if the symmetry is − (+) and negative if it
is + (−). At each $t_j$ choose a smooth cutoff function $r_j(t)$ and action radius $\varepsilon_j$ so that the action intervals
do not overlap. Let $p(j)$ be the polarity of $t_j$ and define the unitary operator
$$U^{\star} = \prod_j U^{\star}_{p(j)}\!\left(r_j, t_j, \varepsilon_j\right). \qquad (8.102)$$
Let $\{\psi_n(t)\}$ denote all the basis functions for all the intervals put together. Then $\{\psi_n(t)\}$ forms a nonsmooth
orthonormal basis for $L^2(\mathbf{R})$. Simultaneously, $\{U^{\star}\psi_n(t)\}$ also forms a smooth
orthonormal basis for $L^2(\mathbf{R})$. To find the expansion coefficients of a function f(t) in this basis we use
All these bases can be constructed in discrete time by sampling the cosine/sine basis functions [573].
Local cosine bases in discrete time were constructed originally by Malvar and are sometimes called lapped
orthogonal transforms [356]. In the discrete case, the efficient implementation of trigonometric transforms
(using DCT I-IV and DST I-IV) can be utilized after folding. In this case, expanding in local trigonometric
bases corresponds to computing a DCT after preprocessing (or folding) the signal.
For a sample basis function in each of the four bases, Figure 8.7 shows the corresponding smooth basis
function after unfolding. Observe that for local cosine and sine bases, the basis functions are not linear
phase; while the window is symmetric, the windowed functions are not. However, for alternating sine/cosine
bases the (unfolded) basis functions are linear phase. There is a link between local sine (or cosine) bases
and modulated filter banks that cannot have linear-phase filters (discussed in Chapter: Filter Banks and
Transmultiplexers). So there is also a link between alternating cosine/sine bases and linear-phase modulated
filter banks (again see Chapter: Filter Banks and Transmultiplexers). This connection is further explored
in [185].
Local trigonometric bases have been applied to several signal processing problems. For example, they
have been used in adaptive spectral analysis and in the segmentation of speech into voiced and unvoiced
regions [573]. They are also used for image compression and are known in the literature as lapped orthogonal
transforms [356].
The DWT is similar to the Fourier series in that both are series expansions that transform continuous-time signals into a discrete
sequence of coefficients. However, unlike the Fourier series, the DWT can be made periodic or nonperiodic
and, therefore, is more versatile and practically useful.
In this chapter we will develop a wavelet method for expanding discrete-time signals in a series expansion
since, in most practical situations, the signals are already in the form of discrete samples. Indeed, we have
already discussed when it is possible to use samples of the signal as scaling function expansion coefficients in
order to use the filter bank implementation of Mallat's algorithm. We find there is an intimate connection
between the DWT and DTWT, much as there is between the Fourier series and the DFT. One expands
signals with the FS but often implements that with the DFT.
To further generalize the DWT, we will also briefly present the continuous wavelet transform which,
similar to the Fourier transform, transforms a function of continuous time to a representation with continuous
scale and translation. In order to develop the characteristics of these various wavelet representations, we will
often call on analogies with corresponding Fourier representations. However, it is important to understand
the differences between Fourier and wavelet methods. Much of that difference is connected to the wavelet
being concentrated in both time and scale or frequency, to the periodic nature of the Fourier basis, and to
the choice of wavelet bases.
where ψ(m) is the basic expansion function of an integer variable m. If these expansion functions are an
orthogonal basis (or form a tight frame), the expansion coefficients (the discrete-time wavelet transform) are
found from an inner product by
$$d_j(k) = \left\langle f(n), \psi\!\left(2^j n - k\right)\right\rangle = \sum_n f(n)\, \psi\!\left(2^j n - k\right) \qquad (8.105)$$
If the expansion functions are not orthogonal or even independent but do span $\ell^2$, a biorthogonal system
or a frame can be formed such that a transform and inverse can be defined.
Because there is no underlying continuous-time scaling function or wavelet, many of the questions, properties,
and characteristics of the analysis using the DWT in Chapter: Introduction to Wavelets, Chapter: A
multiresolution formulation of Wavelet Systems, Chapter: Regularity, Moments, and Wavelet System Design,
etc. do not arise. In fact, because of the filter bank structure for calculating the DTWT, the design is often
done using multirate frequency domain techniques, e.g., the work by Smith and Barnwell and associates [14].
The questions of zero wavelet moments posed by Daubechies, which are related to ideas of convergence for
iterations of filter banks, and Coifman's zero scaling function moments that were shown to help approximate
inner products by samples, seem to have no DTWT interpretation.
The connections between the DTWT and DWT are:
• If the starting sequences are the scaling coefficients for the continuous multiresolution analysis at
very fine scale, then the discrete multiresolution analysis generates the same coefficients as does the
continuous multiresolution analysis on dyadic rationals.
• When the number of scales is large, the basis sequences of the discrete multiresolution analysis converge
in shape to the basis functions of the continuous multiresolution analysis.
The DTWT or DMRA is often described by a matrix operator. This is especially easy if the transform is
made periodic, much as the Fourier series or DFT are. For the discrete time wavelet transform (DTWT),
a matrix operator can give the relationship between a vector of inputs and a vector of outputs. Several
references on this approach are [431], [255], [293], [292], [436], [435], [301], [300].
            DT        CT
     DF     DFT       FS
     CF     DTFT      FT

Table 8.6: Continuous and Discrete Input and Output for Four Fourier Transforms
Because the basis functions of all four Fourier transforms are periodic, the transform of a periodic signal
(CT or DT) is a function of discrete frequency. In other words, it is a sequence of series expansion coefficients.
If the signal is infinitely long and not periodic, the transform is a function of continuous frequency and the
inverse is an integral, not a sum.
Also recall that in most cases, it is the Fourier transform, discrete-time Fourier transform, or Fourier
series that is needed, but it is the DFT that can be calculated by a digital computer, probably using
the FFT algorithm. If the coefficients of a Fourier series drop off fast enough or, even better, are zero after
some harmonic, the DFT of samples of the signal will give the Fourier series coefficients. If a discrete-time
signal has a finite nonzero duration, the DFT of its values will be samples of its DTFT. From this, one sees
the relation of samples of a signal to the signal and the relation of the various Fourier transforms.
Now, what is the case for the various wavelet transforms? Well, it is both similar and different. The
table that relates the continuous and discrete variables is given below, where DW indicates discrete values for
scale and translation given by j and k, and CW denotes continuous values for scale and translation.

            DT        CT
     DW     DTWT      DWT
     CW     DTCWT     CWT

Table 8.7: Continuous and Discrete Input and Output for Four Wavelet Transforms
We have spent most of this book developing the DWT, which is a series expansion of a continuous-time
signal. Because the wavelet basis functions are concentrated in time and not periodic, both the DTWT
and DWT will represent infinitely long signals. In most practical cases, they are made periodic to facilitate
efficient computation. Chapter: Calculation of the Discrete Wavelet Transform gives the details of how the
transform is made periodic. The discrete-time, continuous wavelet transform (DTCWT) is seldom used and
is not discussed here.
The naming of the various transforms has not been consistent in the literature, and this is complicated
by the wavelet transforms having two transform variables, scale and translation. If we could rename all
the transforms, it would be more consistent to use Fourier series (FS) or wavelet series (WS) for a series
expansion that produces discrete expansion coefficients, Fourier transform (FT) or wavelet transform
(WT) for integral expansions that produce functions of a continuous frequency or scale or translation variable,
together with DT (discrete time) or CT (continuous time) to describe the input signal. However, in common
usage, only the DTFT follows this format!
Table 8.8: Continuous and Discrete, Periodic and Nonperiodic Input and Output for Transforms
Recall that the difference between the DWT and DTWT is that the input to the DWT is a sequence
of expansion coefficients or a sequence of inner products, while the input to the DTWT is the signal itself,
probably samples of a continuous-time signal. The Mallat algorithm or filter bank structure is exactly the
same. The approximation is made better by zero moments of the scaling function (see Section: Approximation
of Scaling Coefficients by Samples of the Signal (Section 7.8: Approximation of Scaling Coefficients
by Samples of the Signal)) or by some sort of prefiltering of the samples to make them closer to the inner
products [494].
As mentioned before, both the DWT and DTWT can be formulated as nonperiodic, on-going transforms
for an exact expansion of infinite-duration signals, or they may be made periodic to handle finite-length or
periodic signals. If they are made periodic (as in Chapter: Calculation of the Discrete Wavelet Transform),
then there is an aliasing that takes place in the transform. Indeed, the aliasing has a different period at
the different scales, which may make interpretation difficult. This does not harm the inverse transform, which
uses the wavelet information to "unalias" the scaling function coefficients. Most (but not all) DWT, DTWT,
and matrix operators use a periodized form [480].
9.1 Introduction
In this chapter, we develop the properties of wavelet systems in terms of the underlying filter banks associated
with them. This is an expansion and elaboration of the material in Chapter: Filter Banks and the Discrete
Wavelet Transform, where many of the conditions and properties developed from a signal expansion point
of view in Chapter: The Scaling Function and Scaling Coefficients, Wavelet and Wavelet Coefficients are
now derived from the associated filter bank. The Mallat algorithm uses a special structure of filters and
downsamplers/upsamplers to calculate and invert the discrete wavelet transform. Such filter structures
have been studied for over three decades in digital signal processing in the context of the filter bank and
transmultiplexer problems [478], [521], [539], [536], [545], [541], [357], [528], [425]. Filter bank theory,
besides providing efficient computational schemes for wavelet analysis, also gives valuable insights into the
construction of wavelet bases. Indeed, some of the finer aspects of wavelet theory emanate from filter bank
theory.
In summary, the filter bank problem involves the design of the filters $h_i(n)$ and $g_i(n)$, with the following
goals:
If the signals and filters are multidimensional in Figure 9.1, we have the multidimensional filter bank design
problem.
9.1.2 Transmultiplexer
A transmultiplexer is a structure that combines a collection of signals into a single signal at a higher rate;
i.e., it is the dual of a filter bank. If the combined signal depends linearly on the constituent signals, we have a
linear transmultiplexer. Transmultiplexers were originally studied in the context of converting time-domain-multiplexed
(TDM) signals into frequency-domain-multiplexed (FDM) signals with the goal of converting
back to time-domain-multiplexed signals at some later point. A key point to remember is that the constituent
signals should be recoverable from the combined signal. Figure 9.3 shows the structure of a transmultiplexer.
The input signals $y_i(n)$ are upsampled, filtered, and combined (by a synthesis bank of filters) to give a
composite signal d(n). The signal d(n) can be filtered (by an analysis bank of filters) and downsampled
to give a set of signals $x_i(n)$. The goal in transmultiplexer design is a choice of filters that ensures perfect
reconstruction (i.e., for all i, $x_i(n) = y_i(n)$). This imposes bilinear constraints on the synthesis and analysis
filters. Also, the upsampling factor must be at least the number of constituent input signals, say L. Moreover,
in classical TDM-FDM conversion the analysis and synthesis filters must approximate the ideal frequency
responses in Figure 9.2. If the input signals, analysis filters, and synthesis filters are multidimensional, we
have a multidimensional transmultiplexer.
1. Direct characterization - which is useful in wavelet theory (to characterize orthonormality and frame
properties), in the study of a powerful class of filter banks (modulated filter banks), etc.
2. Matrix characterization - which is useful in the study of time-varying filter banks.
3. z-transform-domain (or polyphase-representation) characterization - which is useful in the design and
implementation of (unitary) filter banks and wavelets.
Moreover, if the number of channels is equal to the downsampling factor (i.e., L = |M|), (9.1) and (9.2)
are equivalent.
Consider a PR filter bank. Since an arbitrary signal is a linear superposition of impulses, it suffices
to consider the input signal $x(n) = \delta(n - n_1)$, for arbitrary integer $n_1$. Then (see Figure 9.1) $d_i(n) =
h_i(Mn - n_1)$ and therefore $y(n_2) = \sum_i \sum_n g_i(n_2 - Mn)\, d_i(n)$. But by PR, $y(n_2) = \delta(n_2 - n_1)$. The
filter bank PR property is precisely a statement of this fact:
$$y(n_2) = \sum_i \sum_n g_i(n_2 - Mn)\, d_i(n) = \sum_i \sum_n g_i(n_2 - Mn)\, h_i(Mn - n_1) = \delta(n_2 - n_1). \qquad (9.3)$$
Consider a PR transmultiplexer. Once again because of linear superposition, it suffices to consider only
the input signals $x_i(n) = \delta(n)\,\delta(i - j)$ for all i and j. Then $d(n) = g_j(n)$ (see Figure 9.3), and $y_i(l) =
\sum_n h_i(n)\, d(Ml - n)$. But by PR, $y_i(l) = \delta(l)\,\delta(i - j)$. The transmultiplexer PR property is precisely a
statement of this fact:
$$y_i(l) = \sum_n h_i(n)\, d(Ml - n) = \sum_n h_i(n)\, g_j(Ml - n) = \delta(l)\,\delta(i - j). \qquad (9.4)$$
Remark: Strictly speaking, in the superposition argument proving (9.2), one has to consider the input
signals $x_i(n) = \delta(n - n_1)\,\delta(i - j)$ for arbitrary $n_1$. One readily verifies that (9.2) has to be
satisfied for all $n_1$.
The equivalence of (9.1) and (9.2) when L = M is not obvious from the direct characterization. However,
the transform-domain characterization that we shall see shortly will make this connection obvious. For a PR
filter bank, the L channels should contain sufficient information to reconstruct the original signal (note the
summation over i in (9.1)), while for a transmultiplexer, the constituent channels should satisfy biorthogonality
constraints so that they can be reconstructed from the composite signal (note the biorthogonality
conditions suggested by (9.2)).
where, for each i, $G_i$ is a matrix with entries appropriately drawn from the filter $g_i$. $G_i$ is also a block Toeplitz
matrix (since it is obtained by retaining every Mth row of the Toeplitz matrix whose transpose represents
convolution by $g_i$) with every row containing $g_i$ in its natural order. Define d to be the vector obtained
by interlacing the entries of each of the vectors $d_i$: $d = [\cdots, d_0(0), d_1(0), \cdots, d_{M-1}(0), d_0(1), d_1(1), \cdots]$.
Also define the matrices H and G (in terms of $H_i$ and $G_i$) so that
$$d = Hx \;\stackrel{\mathrm{def}}{=}\;
\begin{bmatrix}
\vdots & & \vdots & & \vdots & \vdots & \\
h_0(N-1) & \cdots & h_0(N-M-1) & \cdots & h_0(0) & 0 & \cdots \\
\vdots & & \vdots & & \vdots & \vdots & \\
h_{L-1}(N-1) & \cdots & h_{L-1}(N-M-1) & \cdots & h_{L-1}(0) & 0 & \cdots \\
0 & \cdots & 0 & h_0(N-1) & \cdots & \cdots & \\
\vdots & & \vdots & \vdots & & & \\
0 & \cdots & 0 & h_{L-1}(N-1) & \cdots & \cdots & \\
\vdots & & \vdots & & \vdots & \vdots &
\end{bmatrix} x. \qquad (9.8)$$
From this development, we have the following result:
Theorem 39 A filter bank is PR iff
$$G^T H = I. \qquad (9.9)$$
A transmultiplexer is PR iff
$$H G^T = I. \qquad (9.10)$$
Moreover, when L = M, both conditions are equivalent.
One can also write the PR conditions for filter banks and transmultiplexers in the following form, which
explicitly shows the formal relationship between the direct and matrix characterizations. For a PR filter
bank we have
$$\sum_i G_i^T H_i = I. \qquad (9.11)$$
For a PR transmultiplexer we have
$$H_i G_j^T = \delta(i - j)\, I. \qquad (9.12)$$
$$Y(z) = \sum_{k=0}^{M-1} z^{k}\, Y_k\!\left(z^M\right) \qquad (9.14)$$
$$H_i(z) = \sum_{k=0}^{M-1} z^{-k}\, H_{i,k}\!\left(z^M\right) \qquad (9.15)$$
$$G_i(z) = \sum_{k=0}^{M-1} z^{k}\, G_{i,k}\!\left(z^M\right) \qquad (9.16)$$
For $i \in \{0, 1, \dots, L-1\}$ and $k \in \{0, 1, \dots, M-1\}$, define the polyphase component matrices $(H_p(z))_{i,k} =
H_{i,k}(z)$ and $(G_p(z))_{i,k} = G_{i,k}(z)$. Let $X_p(z)$ and $Y_p(z)$ denote the z-transforms of the polyphase signals
$x_p(n)$ and $y_p(n)$, and let $D_p(z)$ be the vector whose components are $D_i(z)$. Equations (9.17) and (9.19)
can be written compactly as
A transmultiplexer is unitary iff
$$\sum_n h_i(n)\, h_j(Ml + n) = \delta(l)\,\delta(i - j). \qquad (9.26)$$
If the number of channels is equal to the downsampling factor, then a filter bank is unitary iff the corresponding
transmultiplexer is unitary.
The matrix characterization of unitary filter banks/transmultiplexers should be clear from the above
discussion:
Theorem 42 A filter bank is unitary iff $H^T H = I$, and a transmultiplexer is unitary iff $H H^T = I$.
The z-transform domain characterization of unitary filter banks and transmultiplexers is given below:
Theorem 43 A filter bank is unitary iff $H_p^T(z^{-1})\, H_p(z) = I$, and a transmultiplexer is unitary iff
$H_p(z)\, H_p^T(z^{-1}) = I$.
In this book (as in most of the work in the literature) one primarily considers the situation where the
number of channels equals the downsampling factor. For such a unitary filter bank (transmultiplexer), (9.11)
and (9.12) become:
$$\sum_i H_i^T H_i = I, \qquad (9.27)$$
and
$$H_i H_j^T = \delta(i - j)\, I. \qquad (9.28)$$
The matrices $H_i$ are pairwise orthogonal and form a resolution of the identity matrix. In other words, for
each i, $H_i^T H_i$ is an orthogonal projection matrix, and the filter bank gives an orthogonal decomposition of a
given signal. Recall that for a matrix P to be an orthogonal projection matrix, $P^2 = P$ and $P \ge 0$; in our
case, indeed, we do have $H_i^T H_i \ge 0$ and $H_i^T H_i H_i^T H_i = H_i^T H_i$.
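These identities are easy to check numerically. The following Matlab-style sketch (not from the original text) builds periodized (circulant) analysis matrices for the two-band Haar filters, an illustrative choice, so that (9.27) and (9.28) hold exactly.

N = 8;  M = 2;
h = [1 1; 1 -1]/sqrt(2);                 % rows: h0 and h1 (Haar)
H = cell(1, 2);
for i = 1:2
    Hi = zeros(N/M, N);
    for n = 0:N/M-1
        for k = 0:N-1
            m = mod(M*n - k, N);         % (H_i)_{n,k} = h_i(Mn - k), periodized
            if m <= 1, Hi(n+1, k+1) = h(i, m+1); end
        end
    end
    H{i} = Hi;
end
norm(H{1}'*H{1} + H{2}'*H{2} - eye(N))   % ~ 0 : resolution of the identity, (9.27)
norm(H{1}*H{2}')                         % ~ 0 : cross terms of (9.28)
norm(H{1}*H{1}' - eye(N/M))              % ~ 0 : (9.28) with i = j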
Unitarity is a very useful constraint since it leads to orthogonal decompositions. Besides, for a unitary
filter bank, one does not have to design both the analysis and synthesis filters since $h_i(n) = g_i(-n)$. But
perhaps the most important property of unitary filter banks and transmultiplexers is that they can be
parameterized. As we have already seen, filter bank design is a nonlinear optimization (of some goodness
criterion) problem subject to PR constraints. If the PR constraints are unitary, then a parameterization
of unitary filters leads to an unconstrained optimization problem. Besides, for designing wavelets with
high-order vanishing moments, nonlinear equations can be formulated and solved in this parameter space.
A similar parameterization of nonunitary PR filter banks and transmultiplexers seems impossible, and it
is not too difficult to see intuitively why. Consider the following analogy: a PR filter bank is akin to a
left-invertible matrix and a PR transmultiplexer to a right-invertible matrix. If L = M, the PR filter
bank is akin to an invertible matrix. A unitary filter bank is akin to a left-unitary matrix, a unitary
transmultiplexer to a right-unitary matrix, and when L = M, either of them to a unitary matrix. Left-unitary,
right-unitary and, in particular, unitary matrices can be parameterized using Givens' rotations or
Householder transformations [182]. However, left-invertible, right-invertible and, in particular, invertible
matrices have no general parameterization. Also, unitariness allows an explicit parameterization of filter banks
and transmultiplexers which PR alone precludes. The analogy is even more appropriate: there are
two parameterizations of unitary filter banks and transmultiplexers that correspond to Givens' rotations and
Householder transformations, respectively. All our discussions on filter banks and transmultiplexers carry
over naturally, with very small notational changes, to the multi-dimensional case where downsampling is by
some integer matrix [197]. However, the parameterization result we now proceed to develop is not known in
the multi-dimensional case. In the two-dimensional case, however, an implicit, and perhaps not too practical
(from a filter-design point of view), parameterization of unitary filter banks is described in [26].
Consider a unitary filter bank with finite impulse response filters (i.e., for all i, $h_i$ is a finite sequence).
Recall that, without loss of generality, the filters can be shifted so that $H_p(z)$ is a polynomial in $z^{-1}$. In this
case $G_p(z) = H_p(z^{-1})$ is a polynomial in z. Let
$$H_p(z) = \sum_{k=0}^{K-1} h_p(k)\, z^{-k}. \qquad (9.29)$$
That is, $H_p(z)$ is a matrix polynomial in $z^{-1}$ with coefficients $h_p(k)$ and degree K − 1. Since
$H_p^T(z^{-1})\, H_p(z) = I$, from (9.29) we must have $h_p^T(0)\, h_p(K-1) = 0$, as it is the coefficient of $z^{K-1}$
in the product $H_p^T(z^{-1})\, H_p(z)$. Therefore $h_p(0)$ is singular. Let $P_{K-1}$ be the unique projection matrix
onto the range of $h_p(K-1)$ (say of dimension $\delta_{K-1}$). Then $h_p^T(0)\, P_{K-1} = 0 = P_{K-1}\, h_p(0)$. Also
$P_{K-1}\, h_p(K-1) = h_p(K-1)$ and hence $(I - P_{K-1})\, h_p(K-1) = 0$. Now $\left[I - P_{K-1} + z P_{K-1}\right] H_p(z)$ is a matrix
polynomial of degree at most K − 2. If $h_p(0)$ and $h_p(K-1)$ are nonzero (an assumption one makes
without loss of generality), the degree is precisely K − 2. Also it is unitary since $I - P_{K-1} + z P_{K-1}$ is unitary.
Repeated application of this procedure (K − 1) times gives a degree zero (constant) unitary matrix $V_0$.
The discussion above shows that an arbitrary unitary polynomial matrix of degree K − 1 can be expressed
algorithmically uniquely as described in the following theorem:
Theorem 44 For a polynomial matrix $H_p(z)$, unitary on the unit circle (i.e., $H_p^T(z^{-1})\, H_p(z) = I$) and
of polynomial degree K − 1, there exists a unique set of projection matrices $P_k$ (each of rank some integer
$\delta_k$), such that
$$H_p(z) = \left\{\prod_{k=K-1}^{1}\left[I - P_k + z^{-1} P_k\right]\right\} V_0. \qquad (9.30)$$
Remark: Since the projection $P_k$ is of rank $\delta_k$, it can be written as $v_1 v_1^T + \dots + v_{\delta_k} v_{\delta_k}^T$ for a nonunique set
of orthonormal vectors $v_i$. Using the fact that
$$I - v_j v_j^T - v_{j-1} v_{j-1}^T + z^{-1}\!\left(v_j v_j^T + v_{j-1} v_{j-1}^T\right) = \prod_{i=j}^{j-1}\left[I - v_i v_i^T + z^{-1} v_i v_i^T\right], \qquad (9.31)$$
defining $\Delta = \sum_k \delta_k$ and collecting all the $v_j$'s that define the $P_k$'s into a single pool (and reindexing), we
get the following factorization:
$$H_p(z) = \left\{\prod_{k=\Delta}^{1}\left[I - v_k v_k^T + z^{-1} v_k v_k^T\right]\right\} V_0. \qquad (9.32)$$
If $H_p(z)$ is the analysis bank of a filter bank, then notice that Δ (from (9.32)) is the number of storage
elements required to implement the analysis bank. The minimum number of storage elements needed to implement
any transfer function is called the McMillan degree, and in this case Δ is indeed the McMillan degree [528].
Recall that $P_{K-1}$ is chosen to be the projection matrix onto the range of $h_p(K-1)$. Instead we could
have chosen $P_{K-1}$ to be the projection onto the nullspace of $h_p(0)$ (which contains the range of $h_p(K-1)$)
or onto any space sandwiched between the two. Each choice leads to a different sequence of factors $P_k$ and
corresponding $\delta_k$ (except when the range and nullspaces in question coincide at some stage during the order-reduction
process). However, Δ, the McMillan degree, is constant.
Equation (9.32) can be used as a starting point for filter bank design. It parameterizes all unitary filter
banks with McMillan degree Δ. If Δ = K, then all unitary filter banks with filters of length N ≤ MK
are parameterized using a collection of K − 1 unitary vectors, $v_k$, and a unitary matrix, $V_0$. Each unitary
vector has (M − 1) free parameters, while the unitary matrix has M(M − 1)/2 free parameters, for a total
of $(K-1)(M-1) + \binom{M}{2}$ free parameters for $H_p(z)$. The filter bank design problem is to choose these
free parameters to optimize the usefulness criterion of the filter bank.
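The factorization (9.32) is easy to exercise numerically. The sketch below (arbitrary M, K, angles, and V0, chosen only for illustration) builds Hp(z) from rank-one projections and checks unitarity at a point on the unit circle.

M = 2;  K = 3;
v  = { [cos(0.3); sin(0.3)], [cos(1.1); sin(1.1)] };   % K-1 unit-norm vectors v_k
V0 = [cos(0.7) sin(0.7); -sin(0.7) cos(0.7)];          % a constant orthogonal V0
z  = exp(1j*0.4);                                      % a point on the unit circle
Hp = eye(M);
for k = K-1:-1:1
    P  = v{k}*v{k}';                                   % rank-one projection P_k
    Hp = Hp * (eye(M) - P + (1/z)*P);
end
Hp = Hp * V0;
% for real coefficients, Hp^T(1/z) equals Hp(z)' on the unit circle:
norm(Hp'*Hp - eye(M))                                  % ~ 0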
If L > M and $H_p(z)$ is left-unitary, a similar analysis leads to exactly the same factorization as
before except that $V_0$ is a left-unitary matrix. In this case, the number of free parameters is given by
$(K-1)(L-1) + \binom{L}{2} - \binom{M}{2}$. For a transmultiplexer with L < M, one can use the same factorization
above for $H_p^T(z)$ (which is left-unitary). Even for a filter bank or transmultiplexer with L = M, factorizations
of left-/right-unitary $H_p(z)$ are useful for the following reason. Let us assume that a subset of the analysis
filters has been predesigned (for example, in wavelet theory one sometimes independently designs $h_0$ to be a
K-regular scaling filter, as in Chapter: Regularity, Moments, and Wavelet System Design). The submatrix
of $H_p(z)$ corresponding to this subset of filters is right-unitary, hence its transpose can be parameterized as
above with a collection of vectors $v_i$ and a left-unitary $V_0$. Each choice for the remaining columns of $V_0$ gives
a choice for the remaining filters in the filter bank. In fact, all possible completions of the original subset
with fixed McMillan degree are given this way.
Orthogonal filter banks are sometimes referred to as lossless filter banks because the collective energy of
the subband signals is the same as that of the original signal. If U is an orthogonal matrix, then the signals
x(n) and Ux(n) have the same energy. If P is an orthogonal projection matrix, then
$$\|x\|^2 = \|Px\|^2 + \|(I - P)x\|^2. \qquad (9.33)$$
For any given X(z), X(z) and $z^{-1}X(z)$ have the same energy. Using the above facts, we find that for any
projection matrix P,
$$D_p(z) \;\stackrel{\mathrm{def}}{=}\; \left[I - P + z^{-1}P\right] X_p(z) = T(z)\, X_p(z) \qquad (9.34)$$
has the same energy as $X_p(z)$. This is equivalent to the fact that T(z) is unitary on the unit circle (one
can directly verify this). Therefore (from (9.30)) it follows that the subband signals have the same energy
as the original signal.
In order to make the free parameters explicit for filter design, we now describe $V_0$ and $\{v_i\}$ using angle
parameters. First consider $v_i$, with $\|v_i\| = 1$. Clearly, $v_i$ has (M − 1) degrees of freedom. One way to
parameterize $v_i$ using (M − 1) angle parameters $\theta_{i,k}$, $k \in \{0, 1, \dots, M-2\}$, would be to define the components
of $v_i$ as follows:
$$\left(v_i\right)_j = \begin{cases} \left\{\prod_{l=0}^{j-1}\sin\left(\theta_{i,l}\right)\right\}\cos\left(\theta_{i,j}\right) & \text{for } j \in \{0, 1, \dots, M-2\} \\[4pt] \prod_{l=0}^{M-2}\sin\left(\theta_{i,l}\right) & \text{for } j = M-1. \end{cases} \qquad (9.35)$$
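A direct Matlab-style transcription of (9.35), with arbitrary example angles, is:

theta = [0.7, 1.9, 0.3];                 % M-1 arbitrary example angles (here M = 4)
M = numel(theta) + 1;
v = zeros(M, 1);
for j = 0:M-2
    v(j+1) = prod(sin(theta(1:j))) * cos(theta(j+1));
end
v(M) = prod(sin(theta));
norm(v)                                  % = 1 for any choice of angles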
As for $V_0$, being an M × M orthogonal matrix, it has $\binom{M}{2}$ degrees of freedom. There are two well-known
parameterizations of constant orthogonal matrices, one based on Givens' rotations (well known in QR
factorization, etc. [134]), and another based on Householder reflections. In the Householder parameterization
$$V_0 = \prod_{i=0}^{M-1}\left[I - 2 v_i v_i^T\right], \qquad (9.36)$$
where $v_i$ are unit-norm vectors with the first i components of $v_i$ being zero. Each matrix factor $I - 2 v_i v_i^T$,
when multiplied by a vector q, reflects q about the plane perpendicular to $v_i$, hence the name Householder
reflections. Since the first i components of $v_i$ are zero and $\|v_i\| = 1$, $v_i$ has M − i − 1 degrees of freedom.
Each being a unit vector, they can be parameterized as before using M − i − 1 angles. Therefore, the total
number of degrees of freedom is
$$\sum_{i=0}^{M-1}(M - 1 - i) = \sum_{i=0}^{M-1} i = \binom{M}{2}. \qquad (9.37)$$
In summary, any orthogonal matrix can be factored into a cascade of M reflections about the planes
perpendicular to the vectors $v_i$.
Notice the similarity between the Householder reflection factors for $V_0$ and the factors of $H_p(z)$ in (9.32).
Based on this similarity, the factorization of unitary matrices and vectors in this section is called the Householder
factorization. Analogous to the Givens' factorization for constant unitary matrices, one can also
obtain a factorization of unitary matrices $H_p(z)$ and unitary vectors V(z) [137]. However, from the points
of view of filter bank theory and wavelet theory, the Householder factorization is simpler to understand and
implement, except when M = 2.
Perhaps the simplest and most popular way to represent a 2 × 2 unitary matrix is by a rotation parameter
(not by a Householder reflection parameter). Therefore, the simplest way to represent a unitary 2 × 2 matrix
$H_p(z)$ is by a lattice parameterization using Givens' rotations. Since two-channel unitary filter banks
play an important role in the theory and design of unitary modulated filter banks (which we will shortly
address), we present the lattice parameterization [537]. The lattice parameterization is also obtained by an
order-reduction procedure like the one we saw while deriving the Householder-type factorization in (9.30).
Theorem 45 Every unitary 2 × 2 matrix $H_p(z)$ (in particular the polyphase matrix of a two-channel
FIR unitary filter bank) is of the form
$$H_p(z) = R\left(\theta_{K-1}\right) Z\, R\left(\theta_{K-2}\right) Z \cdots Z\, R\left(\theta_1\right) Z\, R\left(\theta_0\right) \begin{bmatrix} 1 & 0 \\ 0 & \pm 1 \end{bmatrix}, \qquad (9.38)$$
where
$$R(\theta) = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \quad\text{and}\quad Z = \begin{bmatrix} 1 & 0 \\ 0 & z^{-1} \end{bmatrix}. \qquad (9.39)$$
Equation (9.38) is the unitary lattice parameterization of $H_p(z)$. The filters $H_0(z)$ and $H_1(z)$ are given by
$$\begin{bmatrix} H_0(z) \\ H_1(z) \end{bmatrix} = H_p\!\left(z^2\right)\begin{bmatrix} 1 \\ z^{-1} \end{bmatrix}. \qquad (9.40)$$
By changing the sign of the filter $h_1(n)$, if necessary, one can always write $H_p(z)$ in the form
With these parameterizations, filter banks can be designed via unconstrained optimization.
The parameterizations described are important for another reason. It turns out that the most efficient
(in terms of the number of arithmetic operations) implementation of unitary filter banks uses the Householder
parameterization. With arbitrary filter banks, one can organize the computations so as to capitalize on the
rate-change operations of upsampling and downsampling. For example, one need not compute values that
are thrown away by downsampling. The gain from using the parameterization of unitary filter banks is
over and above this obvious gain (for example, see pages 330-331 and 386-387 in [528]). Besides, with small
modifications these parameterizations allow unitariness to be preserved even under filter coefficient
quantization, with implications for fixed-point implementation of these filter banks in hardware
digital signal processors [528].
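As an illustration of the two-channel lattice, the following sketch builds Hp(z) from three arbitrary angles (K = 3), checks unitarity on the unit circle, and reads off the filters via (9.40); the angle values are illustrative only.

theta = [0.9, -0.2, 0.4];                          % theta_0, theta_1, theta_2 (K = 3)
R  = @(t) [cos(t) sin(t); -sin(t) cos(t)];
Zm = @(z) [1 0; 0 1/z];
Hp = @(z) R(theta(3))*Zm(z)*R(theta(2))*Zm(z)*R(theta(1))*[1 0; 0 1];
z  = exp(1j*0.6);                                  % a point on the unit circle
norm(Hp(z)'*Hp(z) - eye(2))                        % ~ 0 : unitary there
H01 = Hp(z^2)*[1; 1/z];                            % (9.40): [H0(z); H1(z)] at this z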
n          0                1                2                3
h0(n)   (1+√3)/(4√2)    (3+√3)/(4√2)    (3−√3)/(4√2)    (1−√3)/(4√2)

Table 9.1
The highpass filter (wavelet filter) is given by $h_1(n) = (-1)^n h_0(3 - n)$, and both (9.1) and (9.2) are
satisfied with $g_i(n) = h_i(-n)$. The matrix representation of the analysis bank of this filter bank is given by
$$d = Hx = \frac{1}{4\sqrt{2}}
\begin{bmatrix}
\ddots & & & & & & & \\
\cdots & 1-\sqrt{3} & 3-\sqrt{3} & 3+\sqrt{3} & 1+\sqrt{3} & 0 & 0 & \cdots \\
\cdots & -(1+\sqrt{3}) & 3+\sqrt{3} & -(3-\sqrt{3}) & 1-\sqrt{3} & 0 & 0 & \cdots \\
\cdots & 0 & 0 & 1-\sqrt{3} & 3-\sqrt{3} & 3+\sqrt{3} & 1+\sqrt{3} & \cdots \\
\cdots & 0 & 0 & -(1+\sqrt{3}) & 3+\sqrt{3} & -(3-\sqrt{3}) & 1-\sqrt{3} & \cdots \\
& & & & & & & \ddots
\end{bmatrix} x. \qquad (9.43)$$
One readily verifies that $H^T H = I$ and $H H^T = I$. The polyphase representation of this filter bank is given
by
$$H_p(z) = \frac{1}{4\sqrt{2}}\begin{bmatrix} \left(1+\sqrt{3}\right) + z^{-1}\left(3-\sqrt{3}\right) & \left(3+\sqrt{3}\right) + z^{-1}\left(1-\sqrt{3}\right) \\ \left(1-\sqrt{3}\right) + z^{-1}\left(3+\sqrt{3}\right) & -\left(3-\sqrt{3}\right) - z^{-1}\left(1+\sqrt{3}\right) \end{bmatrix}, \qquad (9.44)$$
and one can show that $H_p^T(z^{-1})\, H_p(z) = I$ and $H_p(z)\, H_p^T(z^{-1}) = I$. The Householder factorization of
$H_p(z)$ is given by
$$H_p(z) = \left[I - v_1 v_1^T + z^{-1} v_1 v_1^T\right] V_0, \qquad (9.45)$$
where
$$v_1 = \begin{bmatrix} \sin(\pi/12) \\ \cos(\pi/12) \end{bmatrix} \quad\text{and}\quad V_0 = \begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & -1/\sqrt{2} \end{bmatrix}. \qquad (9.46)$$
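A quick numeric check (not part of the original text) that the factors in (9.45)-(9.46) reproduce the polyphase matrix (9.44):

v1 = [sin(pi/12); cos(pi/12)];
V0 = [1 1; 1 -1]/sqrt(2);
P1 = v1*v1';
z  = exp(1j*1.3);                          % any point on the unit circle
Hp_fact = (eye(2) - P1 + (1/z)*P1) * V0;   % factored form (9.45)
s = sqrt(3);
Hp_dir  = ([1+s, 3+s; 1-s, -(3-s)] + (1/z)*[3-s, 1-s; 3+s, -(1+s)]) / (4*sqrt(2));
norm(Hp_fact - Hp_dir)                     % ~ 0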
Incidentally, all two-band unitary filter banks associated with wavelet tight frames have the same value of
$V_0$. Therefore, all filter banks associated with two-band wavelet tight frames are completely specified by a set
of orthogonal vectors $v_i$, K − 1 of them if $h_0$ is of length 2K. Indeed, for the six-coefficient Daubechies
wavelets (see Section: Parameterization of the Scaling Coefficients (Section 6.9: Parameterization of the
Scaling Coefficients)), the parameterization of $H_p(z)$ is associated with the following two unitary vectors
(since K = 3): $v_1^T = [-0.3842,\; 0.9232]$ and $v_2^T = [-0.1053,\; -0.9944]$.
The Givens' rotation based factorization of $H_p(z)$ for the 4-coefficient Daubechies filters is given by:
$$H_p(z) = \begin{bmatrix} \cos\theta_0 & z^{-1}\sin\theta_0 \\ -\sin\theta_0 & z^{-1}\cos\theta_0 \end{bmatrix}\begin{bmatrix} \cos\theta_1 & \sin\theta_1 \\ -\sin\theta_1 & \cos\theta_1 \end{bmatrix}, \qquad (9.47)$$
(from the Householder factorization) that there are K − 1 parameters $v_k$. Our second example belongs to a
class of unitary filter banks called modulated filter banks, which is described in a following section. A Type
1 modulated filter bank with filters of length N = 2M and associated with a wavelet orthonormal basis is
defined by
$$h_i(n) = \sqrt{\frac{1}{2M}}\left[\sin\!\left(\frac{\pi(i+1)(n+0.5)}{M} - (2i+1)\frac{\pi}{4}\right) - \sin\!\left(\frac{\pi i(n+0.5)}{M} - (2i+1)\frac{\pi}{4}\right)\right], \qquad (9.48)$$
where $i \in \{0, \dots, M-1\}$ and $n \in \{0, \dots, 2M-1\}$ [202], [216]. Consider a three-band example with length-six
filters. In this case, K = 2, and therefore one has one projection $P_1$ and the matrix $V_0$. The projection is
one-dimensional and given by the Householder parameter
$$v_1^T = \frac{1}{\sqrt{6}}\begin{bmatrix} 1 & -2 & 1 \end{bmatrix} \qquad\text{and}\qquad V_0 = \frac{1}{\sqrt{3}}\begin{bmatrix} 1 & 1 & 1 \\[3pt] -\frac{\sqrt{3}+1}{2} & 1 & \frac{\sqrt{3}-1}{2} \\[3pt] \frac{\sqrt{3}-1}{2} & 1 & -\frac{\sqrt{3}+1}{2} \end{bmatrix}. \qquad (9.49)$$
The third example is another Type 1 modulated filter bank with M = 4 and N = 8. The filters are given
in (9.48). $H_p(z)$ has the following factorization:
$$H_p(z) = \left[I - P_1 + z^{-1} P_1\right] V_0, \qquad (9.50)$$
where $P_1$ is a two-dimensional projection $P_1 = v_1 v_1^T + v_2 v_2^T$ (notice the arbitrary choice of $v_1$ and $v_2$) given
by
$$v_1 = \begin{bmatrix} 0.41636433418450 \\ -0.78450701561376 \\ 0.32495344564406 \\ 0.32495344564406 \end{bmatrix}, \qquad v_2 = \begin{bmatrix} 0.00000000000000 \\ -0.14088210492943 \\ 0.50902478635868 \\ -0.84914427477499 \end{bmatrix} \qquad (9.51)$$
and
$$V_0 = \frac{1}{2}\begin{bmatrix} 1 & 1 & 1 & 1 \\ -\sqrt{2} & 0 & \sqrt{2} & 0 \\ 0 & \sqrt{2} & 0 & -\sqrt{2} \\ 1 & -1 & 1 & -1 \end{bmatrix}. \qquad (9.52)$$
Notice that there are infinitely many choices of $v_1$ and $v_2$ that give rise to the same projection $P_1$.
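The filters of (9.48) and their unitarity can be checked numerically; the sketch below generates the length-2M filters for M = 4 and verifies that their even-shift cross-correlations vanish as required of a unitary bank (the parameter choices are only illustrative).

M = 4;  n = 0:2*M-1;
h = zeros(M, 2*M);
for i = 0:M-1                                  % the Type 1 modulated filters of (9.48)
    h(i+1,:) = sqrt(1/(2*M)) * ( sin(pi*(i+1)*(n+0.5)/M - (2*i+1)*pi/4) ...
                               - sin(pi*   i *(n+0.5)/M - (2*i+1)*pi/4) );
end
err = 0;                                       % check sum_n h_i(n) h_j(n - Ml) = delta(i-j)delta(l)
for i = 1:M
  for j = 1:M
    for l = -1:1
      hi = [zeros(1,M) h(i,:) zeros(1,M)];     % zero-pad so shifts stay in range
      hj = [zeros(1,M) h(j,:) zeros(1,M)];
      c  = sum(hi .* circshift(hj, [0, M*l]));
      err = max(err, abs(c - (i==j)*(l==0)));
    end
  end
end
err                                            % ~ 0 for a unitary filter bank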
supported scaling function $\psi_0(t) \in L^2(\mathbf{R})$ (with support in $\left[0, \frac{N-1}{M-1}\right]$, assuming $h_0$ is supported in [0, N − 1]).
Then $\{\psi_{i,j,k}\}$ forms a tight frame for $L^2(\mathbf{R})$. That is, for all $f \in L^2(\mathbf{R})$,
$$f(t) = \sum_{i=1}^{M-1}\;\sum_{j,k=-\infty}^{\infty} \langle f, \psi_{i,j,k}\rangle\, \psi_{i,j,k}(t). \qquad (9.56)$$
Also,
$$f(t) = \sum_k \langle f, \psi_{0,0,k}\rangle\, \psi_{0,0,k}(t) + \sum_{i=1}^{M-1}\sum_{j=1}^{\infty}\sum_{k=-\infty}^{\infty} \langle f, \psi_{i,j,k}\rangle\, \psi_{i,j,k}(t). \qquad (9.57)$$
Remark: A similar result relates general FIR (not necessarily unitary) filter banks and M-band wavelet
frames (p. 257, p. 260, p. 261).
Starting with (9.53), one can calculate the scaling function using either successive approximation or interpolation
on the M-adic rationals, i.e., exactly as in the two-band case in Section 8.2 (Multiplicity-M
(M-Band) Scaling Functions and Wavelets). Equation (9.54) then gives the wavelets in terms of the scaling
function. As in the two-band case, the functions $\psi_i(t)$, so constructed, invariably turn out highly irregular
and sometimes fractal. The solution, once again, is to require that several moments of the scaling function
(or equivalently the moments of the scaling filter $h_0$) are zero. This motivates the definition of K-regular
M-band scaling filters: a unitary scaling filter $h_0$ is said to be K-regular if its Z-transform can be written
in the form
$$H_0(z) = \left[\frac{1 + z^{-1} + \dots + z^{-(M-1)}}{M}\right]^K Q(z), \qquad (9.58)$$
for the maximal possible K. By default, every unitary scaling filter $h_0$ is one-regular (because $\sum_n h_0(n) = \sqrt{M}$;
see Theorem (9.58) for equivalent characterizations of K-regularity). Each of the K identical factors
adds an extra linear constraint on $h_0$ (actually, it is one linear constraint on each of the M polyphase
subsequences of $h_0$).
There is no simple relationship between the smoothness of the scaling function and K-regularity. However,
the smoothness of the maximally regular scaling filter, $h_0$, with fixed filter length N, tends to be an
increasing function of N. Perhaps one can argue that K-regularity is an important concept independent of
the smoothness of the associated wavelet system. K-regularity implies that the moments of the wavelets
vanish up to order K − 1, and therefore, functions can be better approximated by using just the scaling
function and its translates at a given scale. Formulae exist for M-band maximally regular K-regular scaling
filters (i.e., only the sequence $h_0$) [486]. Using the Householder parameterization, one can then design the
remaining filters in the filter bank.
The linear constraints on $h_0$ that constitute K-regularity become nonexplicit nonlinear constraints on
the Householder parameterization of the associated filter bank. However, one-regularity can be explicitly
incorporated, and this gives a parameterization of all M-band compactly supported wavelet tight frames. To
see this, consider the following two factorizations of $H_p(z)$ of a unitary filter bank:
$$H_p(z) = \left\{\prod_{k=K-1}^{1}\left[I - P_k + z^{-1} P_k\right]\right\} V_0, \qquad (9.59)$$
and
$$H_p^T(z) = \left\{\prod_{k=K-1}^{1}\left[I - Q_k + z^{-1} Q_k\right]\right\} W_0. \qquad (9.60)$$
Since $H_p(1) = V_0$ and $H_p^T(1) = W_0$, $V_0 = W_0^T$. The first column of $W_0$ is the unit vector
$[H_{0,0}(1), H_{0,1}(1), \dots, H_{0,M-1}(1)]^T$. Therefore,
$$\sum_{k=0}^{M-1} H_{0,k}^2(1) = 1. \qquad (9.61)$$
But since $\sum_n h_0(n) = H_0(1) = \sqrt{M}$,
$$\sum_{k=0}^{M-1} H_{0,k}(1) = \sqrt{M}. \qquad (9.62)$$
Therefore, for all k, $H_{0,k}(1) = \frac{1}{\sqrt{M}}$. Hence, the first row of $V_0$ is $\left[1/\sqrt{M},\; 1/\sqrt{M},\; \dots,\; 1/\sqrt{M}\right]$. In other
words, a unitary filter bank gives rise to a WTF iff the first row of $V_0$ in the Householder parameterization
is the vector with all entries $1/\sqrt{M}$.
Alternatively, consider the Givens' factorization of $H_p(z)$ for a two-channel unitary filter bank:
$$H_p(z) = \left\{\prod_{i=K-1}^{1}\begin{bmatrix} \cos\theta_i & z^{-1}\sin\theta_i \\ -\sin\theta_i & z^{-1}\cos\theta_i \end{bmatrix}\right\}\begin{bmatrix} \cos\theta_0 & \sin\theta_0 \\ -\sin\theta_0 & \cos\theta_0 \end{bmatrix}. \qquad (9.63)$$
Here α is an integer parameter called the modulation phase. Now one can substitute these forms for the
filters in (9.1) to get explicit PR constraints on the prototype filters h and g. This is a straightforward
algebraic exercise, since the summation over i in (9.1) is a trigonometric sum that can be easily computed.
It turns out that the PR conditions depend only on the parity of the modulation phase α. Hence, without
loss of generality, we choose $\alpha \in \{M-1, M-2\}$, other choices being incorporated as a preshift into the
prototype filters h and g.
Thus there are two types of MFBs depending on the choice of modulation phase:
$$J = \begin{cases} \frac{M}{2} & \text{Type 1, } M \text{ even} \\[3pt] \frac{M-1}{2} & \text{Type 1, } M \text{ odd} \\[3pt] \frac{M-2}{2} & \text{Type 2, } M \text{ even} \\[3pt] \frac{M-1}{2} & \text{Type 2, } M \text{ odd.} \end{cases} \qquad (9.68)$$
In other words, the MFB PR constraint decomposes into a set of J two-channel PR constraints and a few
additional conditions on the unpaired polyphase components of h and g .
We first define the M polyphase components of the analysis and synthesis prototype filters, viz., $P_l(z)$ and
$Q_l(z)$ respectively. We split these sequences further into their even and odd components to give $P_{l,0}(z)$,
$P_{l,1}(z)$, $Q_{l,0}(z)$ and $Q_{l,1}(z)$ respectively. More precisely, let
$$H(z) = \sum_{l=0}^{M-1} z^{-l}\, P_l\!\left(z^M\right) = \sum_{l=0}^{M-1} z^{-l}\left[P_{l,0}\!\left(z^{2M}\right) + z^{-M} P_{l,1}\!\left(z^{2M}\right)\right], \qquad (9.69)$$
$$G(z) = \sum_{l=0}^{M-1} z^{l}\, Q_l\!\left(z^M\right) = \sum_{l=0}^{M-1} z^{l}\left[Q_{l,0}\!\left(z^{2M}\right) + z^{M} Q_{l,1}\!\left(z^{2M}\right)\right], \qquad (9.70)$$
and let
$$P(z) = \begin{bmatrix} P_{l,0}(z) & P_{l,1}(z) \\ P_{\alpha-l,0}(z) & -P_{\alpha-l,1}(z) \end{bmatrix} \qquad (9.71)$$
$$P_{M-1}(z)\, Q_{M-1}(z) = \frac{M}{2}.$$
The result says that $P_l$, $P_{\alpha-l}$, $Q_l$ and $Q_{\alpha-l}$ form the analysis and synthesis filters of a two-channel PR filter
bank ((9.1) in the Z-transform domain).
Modulated filter bank design involves choosing h and g to optimize some goodness criterion subject
to the constraints in the theorem above.
integer k).
Unitary modulated filter bank design entails the choice of h, the analysis prototype filter. There are J
associated two-channel unitary filter banks, each of which can be parameterized using the lattice parameterization.
Besides, depending on whether the filter is Type 2 and/or α is even, one has to choose the
locations of the delays.
For the prototype filter of a unitary MFB to be linear phase, it is necessary that
for some integer k. In this case, the prototype filter (if FIR) is of length 2Mk and symmetric about $Mk - \frac{1}{2}$
in the Type 1 case, and of length 2Mk − 1 and symmetric about (Mk − 1) (for both Class A and Class B
MFBs). In the FIR case, one can obtain linear-phase prototype filters by using the lattice parameterization
[537] of two-channel unitary filter banks. Filter banks with FIR linear-phase prototype filters will be said
to be canonical. In this case, $P_l(z)$ is typically a filter of length 2k for all l. For canonical modulated filter
banks, one has to check power complementarity only for $l \in R(J)$.
filters are obtained by modulation of a prototype filter. We now look at other types of transformations
that relate the filters. Specifically, the ideal frequency response of $h_{M-1-i}$ can be obtained by shifting the
response of $h_i$ by π. This either corresponds to the restriction that
$$h_{M-1-i}(n) = (-1)^n h_i(n); \quad H_{M-1-i}(z) = H_i(-z); \quad H_{M-1-i}(\omega) = H_i(\omega + \pi), \qquad (9.81)$$
or to the restriction that
$$h_{M-1-i}(n) = (-1)^n h_i(N - 1 - n); \quad H_{M-1-i}(z) = H_i^R(-z); \quad H_{M-1-i}(\omega) = H_i^{\star}(\omega + \pi), \qquad (9.82)$$
where N is the filter length and, for a polynomial H(z), $H^R(z)$ denotes its reflection polynomial (i.e., the polynomial
with coefficients in the reversed order). The former will be called pairwise-shift (or PS) symmetry (it
is also known as pairwise-mirror-image symmetry [397]), while the latter will be called pairwise-conjugated-shift
(or PCS) symmetry (also known as pairwise-symmetry [397]). Both these symmetries relate pairs of
filters in the filter bank. Another type of symmetry occurs when the filters themselves are symmetric or
linear-phase. The only type of linear-phase symmetry we will consider is of the form
$$H^R(z) = z^{-Mm+1}\sum_{l=0}^{M-1} z^{l}\, H_l\!\left(z^{-M}\right)
= \sum_{l=0}^{M-1} z^{-(M-1-l)}\, z^{-mM+M}\, H_l\!\left(z^{-M}\right)
= \sum_{l=0}^{M-1} z^{-l}\, H^R_{M-1-l}\!\left(z^{M}\right). \qquad (9.84)$$
Therefore
$$\left(H^R\right)_l(z) = \left(H_{M-1-l}\right)^R(z) \qquad (9.85)$$
$$H_l(z) = \pm\, H^R_{M-1-l}(z). \qquad (9.86)$$
Lemma 2 For even M, $H_p(z)$ is of the form

PS Symmetry:
$$\begin{bmatrix} W_0(z) & W_1(z) \\ J W_0(z) V & (-1)^{M/2} J W_1(z) V \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & J \end{bmatrix}\begin{bmatrix} W_0(z) & W_1(z) \\ W_0(z) V & (-1)^{M/2} W_1(z) V \end{bmatrix} \qquad (9.87)$$

PCS Symmetry:
$$\begin{bmatrix} W_0(z) & W_1(z) J \\ J W_1^R(z) V & (-1)^{M/2} J W_0^R(z) J V \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & J \end{bmatrix}\begin{bmatrix} W_0(z) & W_1(z) J \\ W_1^R(z) V & (-1)^{M/2} W_0^R(z) J V \end{bmatrix} \qquad (9.88)$$

Linear Phase:
$$\begin{bmatrix} W_0(z) & D_0 W_0^R(z) J \\ W_1(z) & D_1 W_1^R(z) J \end{bmatrix} = \begin{bmatrix} W_0(z) & D_0 W_0^R(z) \\ W_1(z) & D_1 W_1^R(z) \end{bmatrix}\begin{bmatrix} I & 0 \\ 0 & J \end{bmatrix} \qquad (9.89)$$
or
$$Q\begin{bmatrix} W_0(z) & W_0^R(z) J \\ W_1(z) & -W_1^R(z) J \end{bmatrix} = Q\begin{bmatrix} W_0(z) & W_0^R(z) \\ W_1(z) & -W_1^R(z) \end{bmatrix}\begin{bmatrix} I & 0 \\ 0 & J \end{bmatrix} \qquad (9.90)$$
where $A_i$ and $B_i$ are constant square matrices of size M/2 × M/2. It is readily verified that X(z) is of the
form
$$X(z) = \begin{bmatrix} Y_0(z) & Y_1(z) \\ -Y_1^R(z) & Y_0^R(z) \end{bmatrix}. \qquad (9.98)$$
Given X(z), its invertibility is equivalent to the invertibility of the constant matrices, since
$$\begin{bmatrix} A_i & B_i \\ -B_i & A_i \end{bmatrix}^{-1}\begin{bmatrix} A_i & z^{-1} B_i \\ -B_i & z^{-1} A_i \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & z^{-1} I \end{bmatrix}, \qquad (9.99)$$
which, in turn, is related to the invertibility of the complex matrices $C_i = (A_i + \imath B_i)$ and $D_i = (A_i - \imath B_i)$,
since
$$\frac{1}{2}\begin{bmatrix} I & I \\ \imath I & -\imath I \end{bmatrix}\begin{bmatrix} C_i & 0 \\ 0 & D_i \end{bmatrix}\begin{bmatrix} I & -\imath I \\ I & \imath I \end{bmatrix} = \begin{bmatrix} A_i & B_i \\ -B_i & A_i \end{bmatrix}. \qquad (9.100)$$
Moreover, the orthogonality of the matrix is equivalent to the unitariness of the complex matrix Ci (since
Di is just its Hermitian
conjugate). Since an arbitrary complex
matrix
of size M/2 × M/2 is determined
M/2 Ai Bi
by precisely 2 parameters, each of the matrices has that many degrees of freedom.
2 −Bi Ai
Clearly when these matrices are orthogonal X (z) is unitary (on the unit circle) and X T z −1 X (z) = I .
For unitary X (z) the converse is also true as will be shortly proved.
The symmetric lattice is defined by the product
$$X(z) \stackrel{\rm def}{=} \left\{\prod_{i=1}^{K}\begin{bmatrix} A_i & z^{-1}B_i\\ B_i & z^{-1}A_i \end{bmatrix}\right\}\begin{bmatrix} A_0 & B_0\\ B_0 & A_0 \end{bmatrix} \quad (9.101)$$
Once again $A_i$ and $B_i$ are constant square matrices, and it is readily verified that $X(z)$ written as a product above is of the form
$$X(z) = \begin{bmatrix} Y_0(z) & Y_1(z)\\ Y_1^R(z) & Y_0^R(z) \end{bmatrix} \quad (9.102)$$
The orthogonality of the constant matrix is equivalent to the orthogonality of the real matrices $C_i$ and $D_i$, and since each real orthogonal matrix of size $M/2 \times M/2$ is determined by $\binom{M/2}{2}$ parameters, the constant orthogonal matrices have $2\binom{M/2}{2}$ degrees of freedom. Clearly, when the matrices are orthogonal, $X^T(z^{-1})X(z) = I$. For the hyperbolic lattice too, the converse is true.
We now give a theorem that leads to a parameterization of unitary filter banks with the symmetries we have considered (for a proof, see [198]).
Theorem 49 Let $X(z)$ be a unitary $M \times M$ polynomial matrix of degree K. Depending on whether $X(z)$ is of the form in (9.98) or (9.102), it is generated by an order-K antisymmetric or symmetric lattice.
The form of $H_p(z)$ for PS symmetry in (9.87) can be simplified by a permutation. Let P be the permutation matrix that exchanges the first column with the last column, the third column with the last but third, etc. That is,
$$P = \begin{bmatrix} 0 & 0 & 0 & \cdots & 0 & 0 & 1\\ 0 & 1 & 0 & \cdots & 0 & 0 & 0\\ 0 & 0 & 0 & \cdots & 1 & 0 & 0\\ \vdots & \vdots & \vdots & & \vdots & \vdots & \vdots\\ 0 & 0 & 1 & \cdots & 0 & 0 & 0\\ 0 & 0 & 0 & \cdots & 0 & 1 & 0\\ 1 & 0 & 0 & \cdots & 0 & 0 & 0 \end{bmatrix}. \quad (9.105)$$
Then the matrix $\begin{bmatrix} W_0(z) & W_1(z)\\ W_0(z)V & (-1)^{M/2}W_1(z)V \end{bmatrix}$ in (9.87) can be rewritten as $\frac{1}{\sqrt 2}\begin{bmatrix} W_0'(z) & W_1'(z)\\ -W_0'(z) & W_1'(z) \end{bmatrix}P$,
and therefore
$$\begin{array}{rl} H_p(z) &= \begin{bmatrix} I & 0\\ 0 & J \end{bmatrix}\begin{bmatrix} W_0(z) & W_1(z)\\ W_0(z)V & (-1)^{M/2}W_1(z)V \end{bmatrix}\\[2mm]
&= \dfrac{1}{\sqrt 2}\begin{bmatrix} I & 0\\ 0 & J \end{bmatrix}\begin{bmatrix} W_0'(z) & W_1'(z)\\ -W_0'(z) & W_1'(z) \end{bmatrix}P\\[2mm]
&= \dfrac{1}{\sqrt 2}\begin{bmatrix} I & 0\\ 0 & J \end{bmatrix}\begin{bmatrix} I & I\\ -I & I \end{bmatrix}\begin{bmatrix} W_0'(z) & 0\\ 0 & W_1'(z) \end{bmatrix}P\\[2mm]
&= \dfrac{1}{\sqrt 2}\begin{bmatrix} I & I\\ -J & J \end{bmatrix}\begin{bmatrix} W_0'(z) & 0\\ 0 & W_1'(z) \end{bmatrix}P. \end{array} \quad (9.106)$$
For PS symmetry, one has the following parameterization of unitary filter banks.
Theorem 50 (Unitary PS Symmetry) $H_p(z)$ of order K forms a unitary PR filter bank with PS symmetry iff there exist unitary, order-K, $M/2 \times M/2$ matrices $W_0'(z)$ and $W_1'(z)$ such that
$$H_p(z) = \frac{1}{\sqrt 2}\begin{bmatrix} I & I\\ -J & J \end{bmatrix}\begin{bmatrix} W_0'(z) & 0\\ 0 & W_1'(z) \end{bmatrix}P. \quad (9.107)$$
A unitary $H_p$ with PS symmetry is determined by precisely $2(M/2-1)(L_0+L_1) + 2\binom{M/2}{2}$ parameters, where $L_0 \ge K$ and $L_1 \ge K$ are the McMillan degrees of $W_0'(z)$ and $W_1'(z)$ respectively.
In this case
$$\begin{array}{rl} H_p(z) &= \begin{bmatrix} I & 0\\ 0 & J \end{bmatrix}\begin{bmatrix} W_0(z) & W_1(z)J\\ W_1^R(z)V & (-1)^{M/2}W_0^R(z)JV \end{bmatrix}\\[2mm]
&\stackrel{\rm def}{=} \begin{bmatrix} I & 0\\ 0 & J \end{bmatrix}\begin{bmatrix} W_0' & W_1'J\\ -\left(W_1'\right)^R & \left(W_0'\right)^R J \end{bmatrix}P\\[2mm]
&= \begin{bmatrix} I & 0\\ 0 & J \end{bmatrix}\begin{bmatrix} W_0' & W_1'\\ -\left(W_1'\right)^R & \left(W_0'\right)^R \end{bmatrix}\begin{bmatrix} I & 0\\ 0 & J \end{bmatrix}P. \end{array} \quad (9.108)$$
Hence, from Lemma "Linear Phase Filter Banks", $H_p(z)$ of unitary filter banks with PCS symmetry can be parameterized as follows:
Theorem 51 $H_p(z)$ forms an order-K, unitary filter bank with PCS symmetry iff
$$H_p(z) = \begin{bmatrix} I & 0\\ 0 & J \end{bmatrix}\left\{\prod_{i=1}^{K}\begin{bmatrix} A_i & z^{-1}B_i\\ -B_i & z^{-1}A_i \end{bmatrix}\right\}\begin{bmatrix} A_0 & B_0\\ -B_0 & A_0 \end{bmatrix}\begin{bmatrix} I & 0\\ 0 & J \end{bmatrix}P \quad (9.109)$$
where $\begin{bmatrix} A_i & B_i\\ -B_i & A_i \end{bmatrix}$ are constant orthogonal matrices. $H_p(z)$ is characterized by $2K\binom{M/2}{2}$ parameters.
We now examine and characterize how $H_p(z)$ for unitary filter banks with symmetries can be constrained to give rise to wavelet tight frames (WTFs). First consider the case of PS symmetry, in which case $H_p(z)$ is parameterized in (9.107). We have a WTF iff
$$\text{first row of } H_p(z)\big|_{z=1} = \begin{bmatrix} 1/\sqrt M & \cdots & 1/\sqrt M \end{bmatrix}. \quad (9.116)$$
In (9.107), since P permutes the columns, the first row is unaffected. Hence (9.116) is equivalent to the first rows of both $W_0'(z)$ and $W_1'(z)$ at z = 1 being given by
$$\begin{bmatrix} \sqrt{2/M} & \cdots & \sqrt{2/M} \end{bmatrix}. \quad (9.117)$$
This is precisely the condition to be satisfied by a WTF of multiplicity M/2. Therefore both $W_0'(z)$ and $W_1'(z)$ give rise to multiplicity-M/2 compactly supported WTFs. If the McMillan degrees of $W_0'(z)$ and $W_1'(z)$ are $L_0$ and $L_1$ respectively, then they are parameterized respectively by $\binom{M/2-1}{2} + (M/2-1)L_0$ and $\binom{M/2-1}{2} + (M/2-1)L_1$ parameters. In summary, a WTF with PS symmetry can be explicitly parameterized by $2\binom{M/2-1}{2} + (M/2-1)(L_0+L_1)$ parameters. Both $L_0$ and $L_1$ are greater than or equal to K.
PS symmetry does not reflect itself as any simple property of the scaling function $\psi_0(t)$ and wavelets $\psi_i(t)$, $i \in \{1, \ldots, M-1\}$, of the WTF. However, from the design and implementation points of view, PS symmetry is useful (because of the reduction in the number of parameters).
Next consider PCS symmetry. From (9.109) one sees that (9.116) is equivalent to the first rows of the matrices A and B, defined by
$$\begin{bmatrix} A & B\\ -B & A \end{bmatrix} = \left\{\prod_{i=K}^{0}\begin{bmatrix} A_i & B_i\\ -B_i & A_i \end{bmatrix}\right\}, \quad (9.118)$$
being of the form $\begin{bmatrix} 1/\sqrt M & \cdots & 1/\sqrt M \end{bmatrix}$. Here we only have an implicit parameterization of WTFs, unlike the case of PS symmetry. As in the case of PS symmetry, there are no simple symmetry relationships between the wavelets.
Now consider the case of linear phase. In this case, it can be seen [481] that the wavelets are also linear phase. If we define
$$\begin{bmatrix} A & B\\ B & A \end{bmatrix} = \left\{\prod_{i=K}^{0}\begin{bmatrix} A_i & B_i\\ B_i & A_i \end{bmatrix}\right\}, \quad (9.119)$$
then it can be verified that one of the rows of the matrix A+B has to be of the form $\begin{bmatrix} \sqrt{2/M} & \cdots & \sqrt{2/M} \end{bmatrix}$. This is an implicit parameterization of the WTF.
Finally consider the case of linear phase with PCS symmetry. In this case also, the wavelets are linear phase. From (9.113) it can be verified that we have a WTF iff the first row of $W_0'(z)$, for z = 1, evaluates to the vector $\begin{bmatrix} \sqrt{2/M} & \cdots & \sqrt{2/M} \end{bmatrix}$. Equivalently, $W_0'(z)$ gives rise to a multiplicity-M/2 WTF. In this case, the WTF is parameterized by precisely $\binom{M/2-1}{2} + (M/2-1)L$ parameters, where $L \ge K$ is the McMillan degree of $W_0'(z)$.
1. have filters with nonoverlapping ideal frequency responses as shown in Figure 9.2,
2. are associated with DCT III/IV (or equivalently DST III/IV) in their implementation,
3. and do not allow for linear-phase filters (even though the prototypes could be linear phase).
In trying to overcome 3, Lin and Vaidyanathan introduced a new class of linear-phase modulated filter banks by giving up 1 and 2 [331]. We now introduce a generalization of their results from a viewpoint that unifies the theory of modulated filter banks as seen earlier with the new class of modulated filter banks we introduce here. For a more detailed exposition of this viewpoint, see [186].
The new class of modulated filter banks has 2M analysis filters, but M bands, each band being shared by two overlapping filters. The M bands are the M-point Discrete Fourier Transform bands, as shown in Figure 9.5.
$$k_n = \begin{cases} \dfrac{1}{\sqrt 2} & n \in \{0, M\}\\ 1 & \text{otherwise.} \end{cases} \quad (9.120)$$
Two broad classes of MFBs (that together are associated with all four DCT/DSTs [430]) can be defined:

α even, DCT/DST I:  S1 = R(M) ∪ {M},  S2 = R(M) \ {0}
α odd, DCT/DST II:  S1 = R(M),  S2 = R(M) \ {0} ∪ {M}
The PR constraints on the prototype filters h and g (for both versions of the filter banks above) are exactly the same as those for the modulated filter bank studied earlier [186]. When the prototype filters are linear phase, these filter banks are also linear phase. An interesting consequence is that if one designs an M-channel Class B modulated filter bank, the prototype filter can also be used for a Class A 2M-channel filter bank.
where
$$T'(\theta) = \begin{bmatrix} \cos\theta_{l,k} & z^{-1}\sin\theta_{l,k}\\ -\sin\theta_{l,k} & z^{-1}\cos\theta_{l,k} \end{bmatrix}. \quad (9.126)$$
$$d^{\Phi} = H^{\Phi}x^{\Phi} \stackrel{\rm def}{=} \begin{bmatrix} \ddots & \vdots & \vdots & & \vdots & & \\ \cdots & h_p(K-1) & h_p(K-2) & \cdots & h_p(0) & 0 & \cdots\\ \cdots & 0 & h_p(K-1) & \cdots & h_p(1) & h_p(0) & \cdots\\ & \vdots & \vdots & & \vdots & \ddots & \end{bmatrix} x^{\Phi}, \quad (9.129)$$
and $H^{\Phi}$ is unitary iff H is unitary. (9.30) induces a factorization of $H^{\Phi}$ (and hence H). If $\mathcal{V}_0 = \mathrm{diag}(V_0)$ and
$$\mathcal{V}_i = \begin{bmatrix} \ddots & & & & \\ & P_i & I-P_i & 0 & \\ & 0 & P_i & I-P_i & \\ & & & & \ddots \end{bmatrix}, \quad \text{for } i \in \{1, \ldots, K-1\}, \quad (9.130)$$
then
$$H^{\Phi} = \prod_{i=K-1}^{0}\mathcal{V}_i. \quad (9.131)$$
The factors $\mathcal{V}_i$, with appropriate modifications, will be used as fundamental building blocks for filter banks for finite-length signals.
Now consider a finite input signal $x = [x(0), x(1), \cdots, x(L-1)]$, where L is a multiple of M, and let $x^{\Phi} = [x(M-1), \cdots, x(0), x(2M-1), \cdots, x(M), \cdots, x(L-1), \cdots, x(L-M)]$. Then the finite vector $d^{\Phi}$ (the output signal) is given by
$$d^{\Phi} = H^{\Phi}x^{\Phi} \stackrel{\rm def}{=} \begin{bmatrix} h_p(K-1) & \cdots & h_p(0) & 0 & \cdots & \cdots & \cdots & 0\\ 0 & h_p(K-1) & \cdots & h_p(0) & 0 & \cdots & \cdots & \vdots\\ \vdots & \ddots & \ddots & & \ddots & \ddots & & \vdots\\ \vdots & \cdots & \cdots & 0 & h_p(K-1) & \cdots & h_p(0) & 0\\ 0 & \cdots & \cdots & \cdots & 0 & h_p(K-1) & \cdots & h_p(0) \end{bmatrix} x^{\Phi}. \quad (9.132)$$
$H^{\Phi}$ is an $(L-N+M)\times L$ matrix, where N = MK is the length of the filters. Now since the rows of $H^{\Phi}$ are mutually orthonormal (i.e., $H^{\Phi}$ has full row rank), one has to append $N-M = M(K-1)$ rows from the orthogonal complement of $H^{\Phi}$ to make the map from x to an augmented d unitary. To get a complete description of these rows, we turn to the factorization of $H_p(z)$. Define the $L\times L$ matrix $\mathcal{V}_0 = \mathrm{diag}(V_0)$ and, for $i \in \{1, \ldots, K-1\}$, the $(L-Mi)\times(L-Mi+M)$ matrices
$$\mathcal{V}_i = \begin{bmatrix} P_i & I-P_i & 0 & \cdots & \cdots & 0\\ 0 & P_i & I-P_i & & & \vdots\\ \vdots & & \ddots & \ddots & & \vdots\\ \vdots & & & P_i & I-P_i & 0\\ 0 & \cdots & \cdots & 0 & P_i & I-P_i \end{bmatrix}. \quad (9.133)$$
Then $H^{\Phi}$ is readily verified by induction to be $\prod_{i=K-1}^{0}\mathcal{V}_i$. Since each of the factors (except $\mathcal{V}_0$) has M more columns than rows, they can be made unitary by appending appropriate rows. Indeed, $\begin{bmatrix} B_i\\ \mathcal{V}_i\\ C_i \end{bmatrix}$ is unitary, where $B_j = \begin{bmatrix} \Upsilon_j(I-P_j) & 0 & \cdots & 0 \end{bmatrix}$ and $C_j = \begin{bmatrix} 0 & \cdots & 0 & \Xi_jP_j \end{bmatrix}$. Here $\Xi_i$ is the $\delta_i\times M$ left unitary matrix that spans the range of $P_i$, i.e., $P_i = \Xi_i\Xi_i^T$, and $\Upsilon_i$ is the $(M-\delta_i)\times M$ left unitary matrix that spans the range of $I-P_i$, i.e., $I-P_i = \Upsilon_i\Upsilon_i^T$. Clearly $[\Upsilon_i\ \Xi_i]$ is unitary. Moreover, if we define $T_0 = \mathcal{V}_0$ and, for $i \in \{1, \ldots, K-1\}$,
and for i ∈ {1, ..., K − 1},
I(M −δi )(i−1) 0 0
0 Bi 0
(9.134)
Ti =
0 Vi 0
0 Ci 0
0 0 Iδi (i−1)
then each of the factors $T_i$ is a square unitary matrix of size $L-N+M$ and
$$\prod_{i=K-1}^{0} T_i = \begin{bmatrix} B_1V_0\\ \vdots\\ H^{\Phi}\\ \vdots\\ C_1V_0 \end{bmatrix}, \quad (9.135)$$
is the unitary matrix that acts on the data. The corresponding unitary matrix that acts on x (rather than $x^{\Phi}$) is of the form $\begin{bmatrix} U\\ H^{\Phi}\\ W \end{bmatrix}$, where U has $MK - M - \Delta$ rows of entry filters in $(K-1)$ sets given by (9.136), while W has $\Delta$ rows of exit filters in $(K-1)$ sets given by (9.137):
$$\Upsilon_j(I-P_j)\begin{bmatrix} h_p^j(j-1)J & h_p^j(j-2)J & \cdots & h_p^j(0)J \end{bmatrix}, \quad (9.136)$$
$$\Xi_jP_j\begin{bmatrix} h_p^j(j-1)J & h_p^j(j-2)J & \cdots & h_p^j(0)J \end{bmatrix}, \quad (9.137)$$
where J is the exchange matrix (i.e., the permutation matrix with ones along the anti-diagonal) and
$$H_p^j(z) = \left\{\prod_{i=j-1}^{1}\left(I - P_i + z^{-1}P_i\right)\right\}V_0 \stackrel{\rm def}{=} \sum_{i=0}^{j-1}h_p^j(i)\,z^{-i}. \quad (9.138)$$
The rows of U and W form the entry and exit filters, respectively. Clearly they are nonunique. The input/output behavior is captured in
$$\begin{bmatrix} u\\ d\\ w \end{bmatrix} = \begin{bmatrix} U\\ H\\ W \end{bmatrix}x. \quad (9.139)$$
For example, in the case of the four-coefficient Daubechies' filters in [106], there is one entry filter and one exit filter:
$$\begin{bmatrix} 0.8660 & 0.5000 & 0 & 0\\ -0.1294 & 0.2241 & 0.8365 & 0.4830\\ -0.4830 & 0.8365 & -0.2241 & -0.1294\\ 0 & 0 & -0.5000 & 0.8660 \end{bmatrix}. \quad (9.140)$$
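As a quick numerical check, the constant matrix printed in (9.140) should have orthonormal rows, consistent with the unitariness of the augmented map discussed above. The short Matlab/Octave sketch below (the variable name is ours, not from the appendices) verifies this to within the four-digit rounding of the printed entries.

    % Check that the matrix printed in (9.140) has (approximately) orthonormal rows.
    % The residual is limited by the four-digit rounding of the printed coefficients.
    T = [ 0.8660  0.5000  0       0     ;
         -0.1294  0.2241  0.8365  0.4830;
         -0.4830  0.8365 -0.2241 -0.1294;
          0       0      -0.5000  0.8660];
    disp(norm(T*T' - eye(4)))   % small, on the order of 1e-4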
If the input signal is right-sided (i.e., supported on {0, 1, ...}), then the corresponding filter bank would have only entry filters. If the filter bank is for left-sided signals, one would have only exit filters. Based on the above, we can consider switching between filter banks (that operate on infinite-extent input signals). Consider switching from a one-channel to an M-channel filter bank. Until instant n = −1, the input is the same as the output. At n = 0, one switches into an M-channel filter bank as quickly as possible. The transition is accomplished by the entry filters (hence the name entry) of the M-channel filter bank. The input/output of this time-varying filter bank is
$$d = \begin{bmatrix} I & 0\\ 0 & U\\ 0 & H \end{bmatrix}x. \quad (9.141)$$
Next consider switching from an M-channel filter bank to a one-channel filter bank. Until n = −1, the M-channel filter bank is operational. From n = 0 onwards the input leaks to the output. In this case, there are exit filters corresponding to flushing the states of the first filter bank implementation at n = 0:
$$d = \begin{bmatrix} H & 0\\ W & 0\\ 0 & I \end{bmatrix}x. \quad (9.142)$$
Finally, switching from an $M_1$-band filter bank to an $M_2$-band filter bank can be accomplished as follows:
$$d = \begin{bmatrix} H_1 & 0\\ W_1 & 0\\ 0 & U_2\\ 0 & H_2 \end{bmatrix}x. \quad (9.143)$$
The transition region is given by the exit filters of the first filter bank and the entry filters of the second. Clearly the transition filters are abrupt (they do not overlap). One can obtain overlapping transition filters as follows: replace them by any orthogonal basis for the row space of the matrix $\begin{bmatrix} W_1 & 0\\ 0 & U_2 \end{bmatrix}$. For example, consider switching between two-channel filter banks with length-4 and length-6 Daubechies' filters. In this case, there is one exit filter ($W_1$) and two entry filters ($U_2$).
tree is grown. Let the mapping from the input to the output growth channel be as shown in Figure 9.6. The transition filters are given by the system in Figure 9.7, which is driven by the entry filters of the newly added filter bank. Every transition filter is obtained by running the corresponding time-reversed entry filter through the synthesis bank of the corresponding branch of the extant tree.
different construction based on the following idea. For $k \in \mathbb{N}$, the support of $\psi_{i,j,k}(t)$ is in $[0, \infty)$. With this restriction (in (9.144)), define the spaces $W_{i,j}^{+}$. As $j \to \infty$ (since $W_{0,j} \to L^2(\mathbb{R})$), $W_{0,j}^{+} \to L^2([0,\infty))$. Hence it suffices to have a multiresolution analysis for these spaces. However, $W_{0,j}^{+}$ is bigger than the direct sum of the constituents at the next coarser scale. Let $U_{j-1}$ be this difference space:
$$W_{0,j}^{+} = W_{0,j-1}^{+} \oplus W_{1,j-1}^{+} \oplus \cdots \oplus W_{M-1,j-1}^{+} \oplus U_{j-1} \quad (9.146)$$
If we can find an orthonormal basis for $U_j$, then we have a multiresolution analysis for $L^2([0,\infty))$.
We proceed as follows. Construct entry filters (for the analysis filters) of the filter bank with synthesis filters $\{h_i\}$. Time-reverse them to obtain entry filters (for the synthesis filters). If $\Delta$ is the McMillan degree of the synthesis bank, there are $\Delta$ entry filters. Let $u_l(n)$ denote the $l$th synthesis entry filter. Define the entry functions
$$\mu_l(t) = \sqrt M\sum_{k=0}^{L_l-1}u_l(k)\,\psi_0(Mt-k), \quad l \in \{0, \ldots, \Delta-1\}. \quad (9.147)$$
$\mu_l(t)$ is compactly supported in $\left[0,\ \frac{L_l-1+\frac{N-1}{M-1}}{M}\right]$. Let $U_j \stackrel{\rm def}{=} \mathrm{Span}\{\mu_{l,j}\} = \mathrm{Span}\{M^{j/2}\mu_l(M^jt)\}$. By considering one stage of the analysis and synthesis stages of this PR filter bank (on right-sided signals), it readily follows that (9.146) holds. Therefore
basis is an ON basis for $L^2([0,\infty))$. Indeed, if $\{\psi_0(t-k)\}$ is an orthonormal system,
$$\int_{t\ge 0}\mu_l(t)\,\psi_i(t-n)\,dt = \sum_k u_l(k)\,h_i(Ml+k) = 0, \quad (9.149)$$
and
$$\int_{t\ge 0}\mu_l(t)\,\mu_m(t)\,dt = \sum_k u_l(k)\,u_m(k) = 0. \quad (9.150)$$
The dimension of $U_j$ is precisely the McMillan degree of the polyphase component matrix of the scaling and wavelet filters considered as the filters of the synthesis bank. There are precisely as many entry functions as there are entry filters, and the supports of these functions are explicitly given in terms of the lengths of the corresponding entry filters. Figure 9.9 shows the scaling function, wavelet, their integer translates, and the single entry function corresponding to the Daubechies four-coefficient wavelets. In this case, $u_0 = \{-\sqrt 3/2,\ 1/2\}$.
One could start with a wavelet basis for $L^2([0,\infty))$ and reflect all the functions about t = 0. This is equivalent to swapping the analysis and synthesis filters of the filter bank. We give an independent development. We start with a WTF for $L^2(\mathbb{R})$ with functions
$$\psi_i(t) = \sqrt M\sum_{k=0}^{N-1}h_i(k)\,\psi_0(Mt+k), \quad (9.151)$$
supported in $\left[-\frac{N-1}{M-1},\ 0\right]$. The scaling and wavelet filters constitute the analysis bank in this case. Let $\Delta$ be the McMillan degree of the analysis bank and let $\{w_i\}$ be the (analysis) exit filters. Define the exit functions
$$\nu_l(t) = \sqrt M\sum_{k=0}^{L_l-1}w_l(k)\,\psi_0(Mt+k), \quad l \in \{0, \ldots, \Delta-1\}. \quad (9.152)$$
Let $W_j^{-} \stackrel{\rm def}{=} \mathrm{Span}\{M^{j/2}\nu_l(M^jt)\}$ and $W_{i,j} = \mathrm{Span}\{\psi_{i,j,k}\} = \mathrm{Span}\{M^{j/2}\psi_i(M^jt+k)\}$ for $k \in \mathbb{N}$. Then, as $j \to \infty$, $W_{0,j}^{-} \to L^2((-\infty,0])$ and
basis on the line. An example with one exit function (corresponding to the M = 3, N = 6, Type 1 modulated WTF obtained earlier) is given in Figure 9.10.
Figure 9.9: (a) Entry function µ0 (t), ψ0 (t) and ψ0 (t − 1) (b) Wavelet ψ1 (t) and ψ1 (t − 1)
for $L^2(\mathbb{R})$, where a 3-band wavelet basis with length-6 filters is used for t < 0 and a 2-band wavelet basis with length-4 filters is used for t > 0. Certainly a degree of overlap between the exit functions on the left of a transition and the entry functions on the right of the transition can be obtained by merely changing coordinates in the finite-dimensional space corresponding to these functions. Extension of these ideas to obtain segmented wavelet packet bases is also immediate.
Figure 9.10: (a) Exit function ν0 (t), ψ0 (t) and ψ0 (t + 1) (b) Wavelet ψ1 (t) and ψ1 (t + 1) (c) Wavelet
ψ2 (t) and ψ2 (t + 1)
parameterization of unitary filter banks, with a minor modification, gives a parameterization of all compactly supported wavelet tight frames. In general, wavelets associated with a unitary filter bank are irregular (i.e., not smooth). By imposing further linear constraints (regularity constraints) on the lowpass filter, one obtains smooth wavelet bases. Structured filter banks give rise to associated structured wavelet bases; modulated filter banks are associated with modulated wavelet bases, and linear-phase filter banks are associated with linear-phase wavelet bases. When filter banks are cascaded so that all the channels are recursively decomposed, they are associated with wavelet packet bases.
From a time-frequency analysis point of view, filter bank trees can be used to give arbitrary resolutions of the frequency. In order to obtain arbitrary temporal resolutions, one has to use local bases or switch between filter bank trees at points in time. Techniques for time-varying filter banks can be used to generate segmented wavelet bases (i.e., a different wavelet basis for disjoint segments of the time axis). Finally, just as unitary filter banks are associated with wavelet tight frames, general PR filter banks, with a few additional constraints, are associated with wavelet frames (or biorthogonal bases).
Although the infinite sum and the continuous description of $t \in \mathbb{R}$ are appropriate when using the wavelet expansion as a tool in an abstract mathematical analysis, as a practical signal processing or numerical analysis tool, the function or signal f(t) in (10.1) is available only in terms of its samples, perhaps with additional information such as its being band-limited. In this chapter, we examine the practical problem of numerically calculating the discrete wavelet transform.
In (10.1), the $\{\psi_{j,k}(t)\}$ form a basis or tight frame for the signal space of interest (e.g., $L^2$). At first glance, this infinite series expansion seems to have the same practical problems in calculation that an infinite Fourier series or the Shannon sampling formula has. In a practical situation, however, this wavelet expansion, where the coefficients are called the discrete wavelet transform (DWT), is often more easily calculated. Both the time summation over the index k and the scale summation over the index j can be made finite with little or no error.
The Shannon sampling expansion [413], [363] of a signal with infinite support in terms of $\mathrm{sinc}(t) = \frac{\sin(t)}{t}$ expansion functions,
$$f(t) = \sum_{n=-\infty}^{\infty}f(Tn)\,\mathrm{sinc}\!\left(\frac{\pi}{T}t - \pi n\right), \quad (10.2)$$
requires an infinite sum to evaluate f(t) at one point because the sinc basis functions have infinite support. This is not necessarily true for a wavelet expansion, where it is possible for the wavelet basis functions to have finite support and, therefore, only require a finite summation over k in (10.1) to evaluate f(t) at any point.
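To make the point concrete, here is a small Matlab/Octave sketch (the test signal, sampling interval, and truncation range are our own illustrative choices) that evaluates (10.2) at a single point by truncating the infinite sum; the slow decay of the sinc terms is what makes this expensive, in contrast to a compactly supported wavelet expansion.

    % Truncated evaluation of the Shannon expansion (10.2) at one point t0,
    % with sinc(t) = sin(t)/t as defined in the text. Illustrative values only.
    T  = 0.01;                                % assumed sampling interval
    n  = -500:500;                            % truncation of the infinite sum
    f  = @(t) cos(2*pi*10*t).*exp(-t.^2);     % an assumed, essentially bandlimited test signal
    t0 = 0.123;                               % evaluation point
    s  = (pi/T)*t0 - pi*n;                    % arguments of the sinc terms
    vals = sin(s)./s;  vals(s == 0) = 1;      % sinc with its removable singularity handled
    fhat = sum(f(T*n).*vals);                 % compare with the true value f(t0)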
The lower limit on scale j in (10.1) can be made finite by adding the scaling function to the basis set as was done in (3.28). By using the scaling function, the expansion in (10.1) becomes
$$f(t) = \sum_{k=-\infty}^{\infty}\langle f,\phi_{J_0,k}\rangle\,\phi_{J_0,k}(t) + \sum_{k=-\infty}^{\infty}\sum_{j=J_0}^{\infty}\langle f,\psi_{j,k}\rangle\,\psi_{j,k}(t), \quad (10.3)$$
where $j = J_0$ is the coarsest scale that is separately represented. The level of resolution or coarseness to start the expansion with is arbitrary, as was shown in Chapter: A multiresolution formulation of Wavelet Systems (Chapter 3) in (3.19), (3.20), and (3.21). The space spanned by the scaling function contains all the spaces spanned by the lower resolution wavelets from $j=-\infty$ up to the arbitrary starting point $j=J_0$. This means $V_{J_0} = W_{-\infty}\oplus\cdots\oplus W_{J_0-1}$. In a practical case, this would be the scale where separating detail becomes important. For a signal with finite support (or one with very concentrated energy), the scaling function might be chosen so that the support of the scaling function and the size of the features of interest in the signal being analyzed were approximately the same.
This choice is similar to the choice of period for the basis sinusoids in a Fourier series expansion. If the period of the basis functions is chosen much larger than the signal, much of the transform is used to describe the zero extensions of the signal or the edge effects.
The choice of a finite upper limit for the scale j in (10.1) is more complicated and usually involves some approximation. Indeed, for samples of f(t) to be an accurate description of the signal, the signal should be essentially bandlimited and the samples taken at least at the Nyquist rate (two times the highest frequency in the signal's Fourier transform).
The question of how one can calculate the Fourier series coefficients of a continuous signal from the discrete Fourier transform of samples of the signal is similar to asking how one calculates the discrete wavelet transform from samples of the signal. And the answer is similar. The samples must be "dense" enough. For the Fourier series, if a frequency can be found above which there is very little energy in the signal (above which the Fourier coefficients are very small), that determines the Nyquist frequency and the necessary sampling rate. For the wavelet expansion, a scale must be found above which there is negligible detail or energy. If this scale is $j = J_1$, the signal can be written
$$f(t) \approx \sum_{k=-\infty}^{\infty}\langle f,\phi_{J_1,k}\rangle\,\phi_{J_1,k}(t) \quad (10.4)$$
This assumes that approximately $f \in V_{J_1}$ or, equivalently, $\|f - P_{J_1}f\| \approx 0$, where $P_{J_1}$ denotes the orthogonal projection of f onto $V_{J_1}$.
Given $f(t) \in V_{J_1}$ so that the expansion in (10.5) is exact, one computes the DWT coefficients in two steps.
For $J_1$ large enough, $\phi_{J_1,k}(t)$ can be approximated by a Dirac impulse at its center of mass, since $\int\phi(t)\,dt = 1$. Here $m_0 = \int t\,\phi(t)\,dt$ is the first moment of $\phi(t)$. Therefore the scaling function coefficients at the $j = J_1$ scale are
$$c_{J_1}(k) = \langle f,\phi_{J_1,k}\rangle = 2^{J_1/2}\int f(t)\,\phi\!\left(2^{J_1}t-k\right)dt = 2^{J_1/2}\int f\!\left(t+2^{-J_1}k\right)\phi\!\left(2^{J_1}t\right)dt \quad (10.7)$$
For all 2-regular wavelets (i.e., wavelets with two vanishing moments, regular wavelets other than the Haar wavelets; even in the M-band case, where one replaces 2 by M in the above equations, $m_0 = 0$), one can show that the samples of the function itself form a third-order approximation to the scaling function coefficients of the signal [190]. That is, if f(t) is a quadratic polynomial, then
$$c_{J_1}(k) = \langle f,\phi_{J_1,k}\rangle = 2^{-J_1/2}f\!\left(2^{-J_1}(m_0+k)\right) \approx 2^{-J_1/2}f\!\left(2^{-J_1}k\right). \quad (10.9)$$
Thus, in practice, the finest scale $J_1$ is determined by the sampling rate. By rescaling the function and amplifying it appropriately, one can assume the samples of f(t) are equal to the scaling function coefficients. These approximations are made better by setting some of the scaling function moments to zero, as in the coiflets. These are discussed in Section: Approximation of Scaling Coefficients by Samples of the Signal (Section 7.8: Approximation of Scaling Coefficients by Samples of the Signal).
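In code, the practical consequence of (10.9) is simply a scaling of the samples. A minimal Matlab/Octave sketch follows; the scale J1 and the test signal are assumptions made only for illustration.

    % Initialize the finest-scale scaling coefficients from samples, as in (10.9):
    % c_{J1}(k) ~ 2^(-J1/2) f(2^(-J1) k) when the wavelet system is at least 2-regular.
    J1  = 10;                                 % finest scale, set by the sampling rate
    k   = 0:2^J1 - 1;                         % one unit interval of samples
    f   = @(t) sin(2*pi*5*t);                 % any test signal (illustrative)
    cJ1 = 2^(-J1/2) * f(2^(-J1) * k);         % approximate < f, phi_{J1,k} >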
Finally, there is one other aspect to consider. If the signal has finite support and L samples are given, then we have L nonzero coefficients $\langle f,\phi_{J_1,k}\rangle$. However, the DWT will typically have more than L coefficients since the wavelet and scaling functions are obtained by convolution and downsampling. In other words, the DWT of an L-point signal will have more than L points. Considered as a finite discrete transform of one vector into another, this situation is undesirable. The reason this "expansion" in dimension occurs is that one is using a basis for $L^2$ to represent a signal that is of finite duration, say in $L^2[0,P]$.
When calculating the DWT of a long signal, $J_0$ is usually chosen to give the wavelet description of the slowly changing or longer-duration features of the signal. When the signal has finite support or is periodic, $J_0$ is generally chosen so there is a single scaling coefficient for the entire signal or for one period of the signal. To reconcile the difference in length between the samples of a finite-support signal and the number of DWT coefficients, zeros can be appended to the samples of f(t), or the signal can be made periodic as is done for the DFT.
where the period P is an integer. In this case, $\langle f,\phi_{j,k}\rangle$ and $\langle f,\psi_{j,k}\rangle$ are periodic sequences in k with period $P2^j$ (if $j \ge 0$, and 1 if $j < 0$) and
$$d(j,k) = 2^{j/2}\int\tilde f(t)\,\psi\!\left(2^jt-k\right)dt \quad (10.11)$$
$$d(j,k) = 2^{j/2}\int\tilde f\!\left(t+2^{-j}k\right)\psi\!\left(2^jt\right)dt = 2^{j/2}\int\tilde f\!\left(t+2^{-j}\ell\right)\psi\!\left(2^jt\right)dt \quad (10.12)$$
where $\ell = \langle k\rangle_{P2^j}$ (k modulo $P2^j$) and $\ell \in \{0, 1, \ldots, P2^j-1\}$. An obvious choice for $J_0$ is 1. Notice that in this case, given $L = 2^{J_1}$ samples of the signal $\langle f,\phi_{J_1,k}\rangle$, the wavelet transform has exactly $1+1+2+2^2+\cdots+2^{J_1-1} = 2^{J_1} = L$ terms. Indeed, this gives a linear, invertible discrete transform which can be considered apart from any underlying continuous process, similar to the discrete Fourier transform existing apart from the Fourier transform or series.
There are at least three ways to calculate this cyclic DWT, and they are based on the equations (10.25), (10.26), and (10.27) later in this chapter. The first method simply convolves the scaling coefficients at one scale with the time-reversed coefficients h(−n) to give a length L+N−1 sequence. This is aliased or "wrapped" as indicated in (10.27) and programmed in dwt5.m in Appendix 3. The second method creates a periodic $\tilde c_j(k)$ by concatenating an appropriate number of $c_j(k)$ sections together and then convolving h(n) with it. That is illustrated in (10.25) and in dwt.m in Appendix 3. The third approach constructs a periodic $\tilde h(n)$ and convolves it with $c_j(k)$ to implement (10.26). The Matlab programs should be studied to understand how these ideas are actually implemented.
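The programs themselves are in the appendix; the sketch below is not one of them, but it shows the second method in its most direct form: it implements (10.25) (and (10.31) for the wavelet coefficients) by indexing the scaling coefficients modulo the period, which is equivalent to concatenating periodic sections before convolving. The function and variable names are our own.

    % One stage of the cyclic (periodic) DWT: c_j(k) = sum_m h(m-2k) c_{j+1}(m),
    % with c_{j+1} extended periodically (period L). h is the scaling filter,
    % h1 the wavelet filter; L is assumed even.
    function [cnext, dnext] = cyclic_dwt_stage(c, h, h1)
      L = length(c);
      N = length(h);
      cnext = zeros(1, L/2);
      dnext = zeros(1, L/2);
      for k = 0:L/2 - 1
        for m = 2*k : 2*k + N - 1            % only N terms of the sum are nonzero
          cm = c(mod(m, L) + 1);             % periodic extension of c_{j+1}
          cnext(k+1) = cnext(k+1) + h(m - 2*k + 1)  * cm;
          dnext(k+1) = dnext(k+1) + h1(m - 2*k + 1) * cm;
        end
      end
    end

Iterating this stage on the scaling output down to the coarsest scale gives the complete cyclic DWT of a length-L block.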
Because the DWT is not shift-invariant, different implementations of the DWT may appear to give different results because of shifts of the signal and/or basis functions. It is interesting to take a test signal and compare the DWT of it with different circular shifts of the signal.
Making f(t) periodic can introduce discontinuities at 0 and P. To avoid this, there are several alternative constructions of orthonormal bases for $L^2[0,P]$ [79], [258], [260], [200]. All of these constructions use (directly or indirectly) the concept of time-varying filter banks. The basic idea in all these constructions is to retain basis functions with support in [0, P], remove ones with support outside [0, P], and replace the basis functions that overlap across the endpoints with special entry/exit functions that ensure completeness. These boundary functions are chosen so that the constructed basis is orthonormal. This is discussed in Section: Time-Varying Filter Bank Trees (Section 9.11: Time-Varying Filter Bank Trees). Another way to deal with edges or boundaries uses "lifting" as mentioned in Section: Lattices and Lifting (Section 4.4: Lattices and Lifting).
10.3 Filter Bank Structures for Calculation of the DWT and Complexity
Given that the wavelet analysis of a signal has been posed in terms of the finite expansion of (10.5), the discrete wavelet transform (expansion coefficients) can be calculated using Mallat's algorithm implemented by a filter bank as described in Chapter: Filter Banks and the Discrete Wavelet Transform and expanded upon in Chapter: Filter Banks and Transmultiplexers. Using the direct calculations described by the one-sided tree structure of filters and down-samplers in Figure: Three-Stage Two-Band Analysis Tree (Figure 4.4) allows a simple determination of the computational complexity.
If we assume the length of the sequence of the signal is L and the length of the sequence of scaling filter coefficients h(n) is N, then the number of multiplications necessary to calculate each scaling function and wavelet expansion coefficient at the next scale, $c(J_1-1,k)$ and $d(J_1-1,k)$, from the samples of the signal, $f(Tk) \approx c(J_1,k)$, is LN. Because of the downsampling, only half as many are needed to calculate the coefficients at the next lower scale, $c(J_1-2,k)$ and $d(J_1-2,k)$, and this halving repeats until there is only one coefficient at the scale $j = J_0$. The total number of multiplications is, therefore, bounded by $LN\left(1+\tfrac12+\tfrac14+\cdots\right) < 2LN$, which is linear in the length of the signal.
Because of the relationship between the scaling function filter h(n) and the wavelet filter $h_1(n)$ at each scale (they are quadrature mirror filters), operations can be shared between them through the use of a lattice filter structure, which will almost halve the computational complexity. That is developed in Chapter: Filter Banks and Transmultiplexers and [529].
$$f(t) = \begin{cases} 0 & t<0\\ f(t) & 0<t<P\\ 0 & t>P \end{cases} \quad (10.16)$$
and then consider its wavelet expansion or DWT. This creation of a meaningful periodic function can still be done, even if f(t) does not have finite support, if its energy is concentrated and some overlap is allowed in (10.17).
Periodic Property 1: If $\tilde f(t)$ is periodic with integer period P such that $\tilde f(t) = \tilde f(t+Pn)$, then the scaling function and wavelet expansion coefficients (DWT terms) at scale J are periodic with period $2^JP$.
$$\tilde c_j(k) = \langle\tilde f(t),\phi(t)\rangle = \langle f(t),\tilde\phi(t)\rangle \quad (10.21)$$
and
$$\tilde d_j(k) = \langle\tilde f(t),\psi(t)\rangle = \langle f(t),\tilde\psi(t)\rangle \quad (10.22)$$
where $\tilde\phi(t) = \sum_n\phi(t+Pn)$ and $\tilde\psi(t) = \sum_n\psi(t+Pn)$.
This is seen from
$$\tilde d_j(k) = \int_{-\infty}^{\infty}\tilde f(t)\,\psi\!\left(2^jt-k\right)dt = \sum_n\int_0^Pf(t)\,\psi\!\left(2^j(t+Pn)-k\right)dt = \int_0^Pf(t)\sum_n\psi\!\left(2^j(t+Pn)-k\right)dt \quad (10.23)$$
$$\tilde d_j(k) = \int_0^Pf(t)\,\tilde\psi\!\left(2^jt-k\right)dt \quad (10.24)$$
Periodic Property 3: If $\tilde f(t)$ is periodic with period P, then Mallat's algorithm for calculating the DWT coefficients in (4.9) becomes
$$\tilde c_j(k) = \sum_m h(m-2k)\,\tilde c_{j+1}(m) \quad (10.25)$$
or
$$\tilde c_j(k) = \sum_m\tilde h(m-2k)\,c_{j+1}(m) \quad (10.26)$$
or
$$\tilde c_j(k) = \sum_n c_j(k+Pn) \quad (10.27)$$
where
$$d_j(k) = \sum_m h_1(m-2k)\,c_{j+1}(m) \quad (10.31)$$
These are very important properties of the DWT of a periodic signal, especially one artificially constructed from a nonperiodic signal in order to use a block algorithm. They explain not only the aliasing effects of having a periodic signal but also how to calculate the DWT of a periodic signal.
For a continuous process, the number of stages and, therefore, the level of resolution at the coarsest scale is arbitrary. It is chosen to suit the nature of the slowest features of the signals being processed. It is important to remember that the lower resolution scales correspond to a slower sampling rate and a larger translation step in the expansion terms at that scale. This is why the wavelet analysis system gives good time localization (but poor frequency localization) at high resolution scales and good frequency localization (but poor time localization) at low or coarse scales.
For finite-length signals or block wavelet processing, the input samples can be considered as a finite-dimensional input vector, the DWT as a square matrix, and the wavelet expansion coefficients as an output vector. The conventional organization of the output of the DWT places the output of the first wavelet filter bank in the lower half of the output vector. The output of the next wavelet filter bank is put just above that block. If the length of the signal is a power of two, the wavelet decomposition can be carried out until there is just one wavelet coefficient and one scaling function coefficient. That scale corresponds to the translation step size being the length of the signal. Remember that the decomposition does not have to be carried to that level. It can be stopped at any scale and is still considered a DWT, and it can be inverted using the appropriate synthesis filter bank (or a matrix inverse).
and the resulting filter bank tree structure has one scaling function branch and M − 1 wavelet branches at each stage, with each followed by a downsampler by M. The resulting structure is called an M-band filter bank, and it too is an alternative to the regular wavelet decomposition. This is developed in Section: Multiplicity-M (M-band) Scaling Functions and Wavelets (Section 8.2: Multiplicity-M (M-Band) Scaling Functions and Wavelets).
In many applications, it is the continuous wavelet transform (CWT) that is wanted. This can be calculated by using numerical integration to evaluate the inner products in (2.11) and (8.106), but that is very slow. An alternative is to use the DWT to approximate samples of the CWT, much as the DFT can be used to approximate the Fourier series or integral [393], [443], [558].
As you can see from this discussion, the ideas behind wavelet analysis and synthesis are basically the same as those behind filter bank theory. Indeed, filter banks can be used to calculate discrete wavelet transforms using Mallat's algorithm, and certain modifications and generalizations can be more easily seen or interpreted in terms of filter banks than in terms of the wavelet expansion. The topic of filter banks is developed in Chapter: Filter Banks and the Discrete Wavelet Transform and in more detail in Chapter: Filter Banks and Transmultiplexers.
This chapter gives a brief discussion of several areas of application. It is intended to show what areas and what tools are being developed and to give some references to books, articles, and conference papers where the topics can be further pursued. In other words, it is a sort of annotated bibliography that does not pretend to be complete. Indeed, it is impossible to be complete or up-to-date in such a rapidly developing new area and in an introductory book.
In this chapter, we briefly consider the application of wavelet systems from two perspectives. First, we look at wavelets as a tool for denoising and compressing a wide variety of signals. Second, we very briefly list several problems where the application of these tools shows promise or has already achieved significant success. References will be given to guide the reader to the details of these applications, which are beyond the scope of this book.
The classical paradigm for transform-based signal processing is illustrated in Figure 11.1, where the center "box" could be either a linear or nonlinear operation. The "dynamics" of the processing are all contained in the transform and inverse transform operations, which are linear. The transform-domain processing operation has no dynamics; it is an algebraic operation. By dynamics, we mean that a process depends on the present and past, and by algebraic, we mean it depends only on the present. An FIR (finite impulse response) filter such as is part of a filter bank is dynamic. Each output depends on the current and a finite number of past inputs (see (4.11)). The process of operating point-wise on the DWT of a signal is static or algebraic. It does not depend on the past (or future) values, only the present. This structure, which separates the linear, dynamic parts from the nonlinear, static parts of the processing, allows practical and theoretical results that are impossible or very difficult using a completely general nonlinear dynamic system.
Linear wavelet-based signal processing consists of the processor block in Figure 11.1 multiplying the DWT of the signal by some set of constants (perhaps by zero). If undesired signals or noise can be separated from the desired signal in the wavelet transform domain, they can be removed by multiplying their coefficients by zero. This allows a more powerful and flexible processing or filtering than can be achieved using Fourier transforms. The result of this total process is a linear, time-varying processing that is far more versatile than linear, time-invariant processing. The next section gives an example of using the concentrating properties of the DWT to allow a faster calculation of the FFT.
11.2.1 Introduction
The DFT is probably the most important computational tool in signal processing. Because of the characteristics of the basis functions, the DFT has enormous capacity for the improvement of its arithmetic efficiency [54]. The classical Cooley-Tukey fast Fourier transform (FFT) algorithm has complexity $O(N\log_2N)$. Thus the Fourier transform and its fast algorithm, the FFT, are widely used in many areas, including signal processing and numerical analysis. Any scheme to speed up the FFT would be very desirable.
Although the FFT has been studied extensively, there are still some desired properties that are not provided by the classical FFT. Here are some of the disadvantages of the FFT algorithm:
1. Pruning is not easy. When the number of input points or output points is small compared to the length of the DFT, a special technique called pruning [482] is often used. However, this often requires that the nonzero input data are grouped together. Classical FFT pruning algorithms do not work well when the few nonzero inputs are randomly located. In other words, a sparse signal may not necessarily give rise to a faster algorithm.
2. No speed versus accuracy tradeoff. It is common to have a situation where some error would be allowed if there could be a significant increase in speed. However, this is not easy with the classical FFT algorithm. One of the main reasons is that the twiddle factors in the butterfly operations are unit-magnitude complex numbers. So all parts of the FFT structure are of equal importance. It is hard to decide which part of the FFT structure to omit when error is allowed and the speed is crucial. In other words, the FFT is a single-speed and single-accuracy algorithm.
3. No built-in noise reduction capacity. Many real-world signals are noisy. What people are really interested in is the DFT of the signals without the noise. The classical FFT algorithm does not have built-in noise reduction capacity. Even if other denoising algorithms are used, the FFT requires the same computational complexity on the denoised signal. Due to the above-mentioned shortcomings, the fact that the signal has been denoised cannot be easily used to speed up the FFT.
The details of the butterfly operations are shown in Figure 11.3, where $W_N^i = e^{-j2\pi i/N}$ is called the twiddle factor. All the twiddle factors are of magnitude one, on the unit circle. This is the main reason that there is no complexity versus accuracy tradeoff for the classical FFT. Suppose some of the twiddle factors had very small magnitude; then the corresponding branches of the butterfly operations could be dropped (pruned) to reduce complexity while minimizing the error introduced. Of course, the error also depends on the values of the data to be multiplied with the twiddle factors. When the values of the data are unknown, the best approach is to cut off the branches with small twiddle factors.
The computational complexity of the FFT algorithm can be easily established. If we let $C_{FFT}(N)$ be the complexity for a length-N FFT, we can show
$$C_{FFT}(N) = O(N\log_2N). \quad (11.3)$$
This is a classical case where the divide-and-conquer approach results in a very effective solution.
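The divide-and-conquer count can be reproduced with a three-line recursion. The sketch below is our own toy accounting (not a real FFT); it counts one update per butterfly output at each stage and reproduces the $N\log_2N$ total of (11.3).

    % Toy operation count for the radix-2 DIT recursion: C(N) = 2 C(N/2) + N.
    % N is assumed to be a power of two.
    function ops = fft_op_count(N)
      if N == 1
        ops = 0;
      else
        ops = 2*fft_op_count(N/2) + N;   % N butterfly updates at this stage
      end
    end
    % Example: fft_op_count(1024) returns 10240 = 1024*log2(1024).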
The matrix point of view gives us additional insight. Let $F_N$ be the $N\times N$ DFT matrix; i.e., $F_N(m,n) = e^{-j2\pi mn/N}$, where $m,n \in \{0, 1, \ldots, N-1\}$. Let $S_N$ be the $N\times N$ even-odd separation matrix; e.g.,
$$S_4 = \begin{bmatrix} 1&0&0&0\\ 0&0&1&0\\ 0&1&0&0\\ 0&0&0&1 \end{bmatrix}. \quad (11.4)$$
Clearly $S_N'S_N = I_N$, where $I_N$ is the $N\times N$ identity matrix. Then the DIT FFT is based on the following matrix factorization:
$$F_N = F_NS_N'S_N = \begin{bmatrix} I_{N/2} & T_{N/2}\\ I_{N/2} & -T_{N/2} \end{bmatrix}\begin{bmatrix} F_{N/2} & 0\\ 0 & F_{N/2} \end{bmatrix}S_N, \quad (11.5)$$
where $T_{N/2}$ is a diagonal matrix with $W_N^i$, $i \in \{0, 1, \ldots, N/2-1\}$, on the diagonal. We can visualize the above factorization as shown in (11.6), where we image the real part of the DFT matrices and the magnitude of the matrices for the butterfly operations and even-odd separations. N is taken to be 128 here.
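The factorization (11.5) is easy to confirm numerically. The short Matlab/Octave check below builds $F_N$, $S_N$, $T_{N/2}$ and the butterfly matrix exactly as defined above (N = 8 is our own choice for brevity) and verifies that the product reproduces the DFT matrix to machine precision.

    % Numerical check of the DIT factorization (11.5) for N = 8.
    N  = 8;
    F  = exp(-1j*2*pi*(0:N-1)'*(0:N-1)/N);          % F_N(m,n) = e^{-j 2 pi m n / N}
    Fh = exp(-1j*2*pi*(0:N/2-1)'*(0:N/2-1)/(N/2));  % F_{N/2}
    I  = eye(N);
    S  = [I(1:2:N, :); I(2:2:N, :)];                % even-odd separation (cf. S_4 in (11.4))
    T  = diag(exp(-1j*2*pi*(0:N/2-1)/N));           % twiddle factors W_N^i on the diagonal
    B  = [eye(N/2)  T; eye(N/2) -T];                % butterfly matrix in (11.5)
    disp(norm(F - B*blkdiag(Fh, Fh)*S))             % ~1e-15: the factorization holds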
At the heart of the discrete wavelet transform are a pair of filters h and g, lowpass and highpass respectively. They have to satisfy a set of constraints (see Figure: Sinc Scaling Function and Wavelet, (6.1)) [552], [495], [530]. The block diagram of the DWT is shown in Figure 11.4. The input data are first filtered by h and g and then downsampled. The same building block is further iterated on the lowpass outputs.
The computational complexity of the DWT algorithm can also be easily established. Let $C_{DWT}(N)$ be the complexity for a length-N DWT. Since after each scale we only further operate on half of the output data, we can show
$$C_{DWT}(N) = O(N). \quad (11.8)$$
The operation in Figure 11.4 can also be expressed in matrix form $W_N$; e.g., for the Haar wavelet,
$$W_4^{Haar} = \frac{\sqrt 2}{2}\begin{bmatrix} 1 & -1 & 0 & 0\\ 0 & 0 & 1 & -1\\ 1 & 1 & 0 & 0\\ 0 & 0 & 1 & 1 \end{bmatrix}. \quad (11.9)$$
where $A_{N/2}$, $B_{N/2}$, $C_{N/2}$, and $D_{N/2}$ are all diagonal matrices. The values on the diagonal of $A_{N/2}$ and $C_{N/2}$ are the length-N DFT (i.e., frequency response) of h, and the values on the diagonal of $B_{N/2}$ and $D_{N/2}$ are the length-N DFT of g. We can visualize the above factorization as shown in (11.12), where we image the real part of the DFT matrices and the magnitude of the matrices for the butterfly operations and the one-scale DWT using length-16 Daubechies' wavelets [107], [124]. Clearly we can see that the new twiddle factors have non-unit magnitudes.
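For a concrete feel for the matrix form, the one-scale Haar matrix of (11.9) can be typed in directly and checked for orthogonality; applying it to a length-4 block returns two difference (wavelet) values followed by two average (scaling) values. A minimal Matlab/Octave sketch, with an arbitrary test vector of our own:

    % The one-scale length-4 Haar DWT matrix of (11.9).
    W4 = sqrt(2)/2 * [1 -1 0 0; 0 0 1 -1; 1 1 0 0; 0 0 1 1];
    disp(norm(W4*W4' - eye(4)))    % ~0: the rows are orthonormal
    x = [4 2 5 5]';                % an arbitrary length-4 block
    d = W4*x                       % first two entries: differences; last two: sums (scaled)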
The above factorization suggests a DWT-based FFT algorithm. The block diagram of the last stage of a length-8 algorithm is shown in Figure 11.5. This scheme is iteratively applied to shorter-length DFTs to get the full DWT-based FFT algorithm. The final system is equivalent to a full binary tree wavelet packet transform [96] followed by classical FFT butterfly operations, where the new twiddle factors are the frequency responses of the wavelet filters.
The detail of the butterfly operation is shown in Figure 11.6, where $i \in \{0, 1, \ldots, N/2-1\}$. Now the twiddle factors are length-N DFTs of h and g. For well-defined wavelet filters, they have well-known properties; e.g., for Daubechies' family of wavelets, their frequency responses are monotone, and nearly half of the values have magnitude close to zero. This fact can be exploited to achieve a speed vs. accuracy tradeoff. The classical radix-2 DIT FFT is a special case of the above algorithm when h = [1, 0] and g = [0, 1]. Although they do not satisfy some of the conditions required for wavelets, they do constitute a legitimate (and trivial) orthogonal filter bank and are often called the lazy wavelets in the context of lifting.
more points are close to zero. It should be noted that those filters are not designed for their frequency responses; they are designed for flatness at 0 and π. Various methods can be used to design wavelets or orthogonal filter banks [403], [462], [530] to achieve better frequency responses. Again, there is a tradeoff between the good frequency response of the longer filters and the higher complexity required by the longer filters.
$$C_{FAFT}(N) = O(N) + C_{FAFT}(N/2), \quad (11.14)$$
which leads to
$$C_{FAFT}(N) = O(N). \quad (11.15)$$
So under the above conditions, we have a linear-complexity approximate FFT. Of course, the complexity depends on the input data, the wavelets we use, the threshold value used to drop insignificant data, and the threshold value used to prune the butterfly operations. It remains to find a good tradeoff. Also, the implementation would be more complicated than the classical FFT.
11.2.9 Summary
In the past, the FFT has been used to calculate the DWT [552], [495], [530], which leads to an efficient algorithm when the filters are infinite impulse response (IIR). In this chapter, we did just the opposite: we used the DWT to calculate the FFT. We have shown that when no intermediate coefficients are dropped and no approximations are made, the proposed algorithm computes the exact result, and its computational complexity is on the same order as the FFT; i.e., $O(N\log_2N)$. The advantage of our algorithm is twofold. From the input data side, the signals are made sparse by the wavelet transform, so approximations can be made to speed up the algorithm by dropping the insignificant data. From the transform side, since the twiddle factors of the new algorithm have decreasing magnitudes, approximations can be made to speed up the algorithm by pruning the section of the algorithm which corresponds to the insignificant twiddle factors. Since wavelets are an unconditional basis for many classes of signals [495], [374], [124], the algorithm is very efficient and has built-in denoising capacity. An alternative approach has been developed by Shentov, Mitra, Heute, and Hossen [469], [267] using subband filter banks.
clipping, thresholding, and shrinking of the amplitude of the transform to separate signals or remove noise. It is the localizing or concentrating properties of the wavelet transform that make it particularly effective when used with these nonlinear methods. Usually the same properties that make a system good for denoising or separation by nonlinear methods make it good for compression, which is also a nonlinear process.
$$Y = X + N, \quad \text{or} \quad Y_i = X_i + N_i, \quad (11.17)$$
where capital letters denote variables in the transform domain, i.e., Y = Wy. Then the inverse transform matrix $W^{-1}$ exists, and we have
$$W^{-1}W = I. \quad (11.18)$$
The following presentation follows Donoho's approach [155], [145], [150], [139], [306], which assumes an orthogonal wavelet transform with a square W; i.e., $W^{-1} = W^T$. We will use the same assumption throughout this section.
Let $\hat X$ denote an estimate of X, based on the observations Y. We consider diagonal linear projections
$$\hat x = W^{-1}\hat X = W^{-1}\Delta Y = W^{-1}\Delta Wy. \quad (11.20)$$
The estimate $\hat X$ is obtained by simply keeping or zeroing the individual wavelet coefficients. Since we are interested in the $l_2$ error, we define the risk measure
$$R\left(\hat X,X\right) = E\left[\|\hat x - x\|_2^2\right] = E\left[\|W^{-1}\left(\hat X - X\right)\|_2^2\right] = E\left[\|\hat X - X\|_2^2\right]. \quad (11.21)$$
Notice that the last equality in (11.21) is a consequence of the orthogonality of W. The optimal coefficients in the diagonal projection scheme are $\delta_i = 1_{|X_i|>\epsilon}$;² i.e., only those values of Y where the corresponding elements of X are larger than $\epsilon$ are kept; all others are set to zero. This leads to the ideal risk
$$R_{id}\left(X,\hat X\right) = \sum_{n=1}^{N}\min\left(X_n^2,\epsilon^2\right). \quad (11.22)$$
The ideal risk cannot be attained in practice, since it requires knowledge of X, the wavelet transform of the unknown vector x. However, it does give us a lower limit for the $l_2$ error.
Donoho proposes the following scheme for denoising:
$$\hat X = T_h(Y,t) = \begin{cases} Y, & |Y|\ge t\\ 0, & |Y|<t \end{cases} \quad (11.23)$$
$$\hat X = T_s(Y,t) = \begin{cases} \mathrm{sgn}(Y)\,(|Y|-t), & |Y|\ge t\\ 0, & |Y|<t \end{cases} \quad (11.24)$$
3. compute the inverse DWT $\hat x = W^{-1}\hat X$.
² It is interesting to note that allowing arbitrary $\delta_i \in \mathbb{R}$ improves the ideal risk by at most a factor of 2 [157].
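A minimal Matlab/Octave implementation of the two thresholding rules, hard thresholding $T_h$ of (11.23) and soft thresholding $T_s$ of (11.24), applied elementwise to a vector of noisy wavelet coefficients; the function name and interface are our own.

    % Hard and soft thresholding of noisy wavelet coefficients Y with threshold t.
    function Xhat = threshold_dwt(Y, t, mode)
      switch mode
        case 'hard'                                  % (11.23): keep only |Y| >= t
          Xhat = Y .* (abs(Y) >= t);
        case 'soft'                                  % (11.24): shrink the survivors by t
          Xhat = sign(Y) .* max(abs(Y) - t, 0);
        otherwise
          error('mode must be ''hard'' or ''soft''');
      end
    end

The denoised estimate then follows from the inverse DWT of Xhat, as in step 3 above.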
This simple scheme has several interesting properties. Its risk is within a logarithmic factor (log N) of the ideal risk for both thresholding schemes and properly chosen thresholds $t(N,\epsilon)$. If one employs soft thresholding, then the estimate is, with high probability, at least as smooth as the original function. The proof of this proposition relies on the fact that wavelets are unconditional bases for a variety of smoothness classes and that soft thresholding guarantees (with high probability) that the shrinkage condition $|\hat X_i| < |X_i|$ holds. The shrinkage condition guarantees that $\hat x$ is in the same smoothness class as x. Moreover, the soft threshold estimate is the optimal estimate that satisfies the shrinkage condition. The smoothness property guarantees an estimate free from spurious oscillations which may result from hard thresholding or Fourier methods. Also, it can be shown that it is not possible to come closer to the ideal risk than within a factor of log N. Not only does Donoho's method have nice theoretical properties, but it also works very well in practice.
Some comments have to be made at this point. Similar to traditional approaches (e.g., lowpass filtering), there is a trade-off between suppression of noise and oversmoothing of image details, although to a smaller extent. Also, hard thresholding yields better results in terms of the $l_2$ error. That is not surprising, since the observed value $y_i$ itself is clearly a better estimate for the real value $x_i$ than a shrunk value in a zero-mean noise scenario. However, the estimated function obtained from hard thresholding typically exhibits undesired, spurious oscillations and does not have the desired smoothness properties.
$$X_s = Wx_s = WS_Rx = WS_RW^{-1}X, \quad (11.25)$$
which establishes the connection between the wavelet transforms of two shifted versions of a signal, x and $x_s$, by the orthogonal matrix $WS_RW^{-1}$. As an illustrative example, consider Figure 11.8.
³ Since we deal with finite-length signals, we really mean circular shift.
Figure 11.8: Shift Variance of the Wavelet Transform: (a) DWT of skyline; (b) SWT of skyline circularly shifted by 1
The first and most obvious way of computing a shift-invariant discrete wavelet transform (SIDWT) is simply computing the wavelet transform of all shifts. Usually the two-band wavelet transform is computed as follows: 1) filter the input signal by a low-pass and a high-pass filter, respectively, 2) downsample each filter output, and 3) iterate on the low-pass output. Because of the downsampling, the number of output values at each stage of the filter bank (corresponding to coarser and coarser scales of the DWT) is equal to the number of input values. Precisely N values have to be stored. The computational complexity is O(N). Directly computing the wavelet transform of all shifts therefore requires the storage of $N^2$ elements and has computational complexity $O(N^2)$.
Beylkin [34], Shensa [468], and the Rice group⁴ independently realized that 1) there are only $N\log N$ different coefficient values among those corresponding to all shifts of the input signal and 2) those can be computed with computational complexity $N\log N$. This can be easily seen by considering one stage of the filter bank. Let
$$y = [y_0\ y_1\ y_2\ \ldots\ y_N]^T = hx \quad (11.26)$$
where y is the output of either the high-pass or the low-pass filter in the analysis filter bank, x is the input, and the matrix h describes the filtering operation. Downsampling of y by a factor of two means keeping the even-indexed elements and discarding the odd ones. Consider the case of an input signal shifted by one. Then the output signal is shifted by one as well, and sampling with the same operator as before corresponds to keeping the odd-indexed coefficients as opposed to the even ones. Thus, the set of data points to be further processed is completely different. However, for a shift of the input signal by two, the downsampled output signal differs from the output of the nonshifted input only by a shift of one. This is easily generalized for any odd and even shift, and we see that the set of wavelet coefficients of the first stage of the filter bank for arbitrary shifts consists of only 2N different values. Considering the fact that only the low-pass component (N values) is iterated, one recognizes that after L stages exactly LN values result. Using the same arguments as in the shift-variant case, one can prove that the computational complexity is $O(N\log N)$. The derivation for the synthesis is analogous.
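The bookkeeping behind this observation is captured by the sketch below (our own illustrative code, not the Beylkin implementation referred to later): one circular filtering of the length-N input produces N values, and keeping both the even-indexed and the odd-indexed halves covers every possible circular shift of the input at this stage, i.e., 2N values per filter.

    % One analysis filtering for the SIDWT bookkeeping: circular correlation of x
    % with the filter h, then both downsampling phases are kept.
    function [y_even, y_odd] = sidwt_stage(x, h)
      N = length(x);
      y = zeros(1, N);
      for n = 0:N-1
        for m = 0:length(h)-1
          y(n+1) = y(n+1) + h(m+1) * x(mod(n+m, N) + 1);   % circular indexing
        end
      end
      y_even = y(1:2:end);      % the phase an even input shift would select
      y_odd  = y(2:2:end);      % the phase an odd input shift would select
    end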
Mallat proposes a scheme for computing an approximation of the continuous wavelet transform [349] that turns out to be equivalent to the method described above. This has been realized and proved by Shensa [468]. Moreover, Shensa shows that Mallat's algorithm exhibits the same structure as the so-called algorithm à trous. Interestingly, Mallat's intention in [349] was not in particular to overcome the shift variance of the DWT but to get an approximation of the continuous wavelet transform.
⁴ Those are the ones we are aware of.
In the following, we shall refer to the algorithm for computing the SIDWT as the Beylkin algorithm⁵ since this is the one we have implemented. Alternative algorithms for computing a shift-invariant wavelet transform [329] are based on the scheme presented in [34]. They explicitly or implicitly try to find an optimal, signal-dependent shift of the input signal. Thus, the transform becomes shift-invariant and orthogonal but signal-dependent and, therefore, nonlinear. We mention that the generalization of the Beylkin algorithm to the multidimensional case, to an M-band multiresolution analysis, and to wavelet packets is straightforward.
$$A = \{i:\ |X_i|\ge\epsilon\}, \qquad B = \{i:\ |X_i|<\epsilon\}, \quad (11.27)$$
and an ideal diagonal projection estimator, or oracle,
$$\tilde X_i = \begin{cases} Y_i = X_i+N_i, & i\in A\\ 0, & i\in B. \end{cases} \quad (11.28)$$
The pointwise estimation error is then
$$\tilde X_i - X_i = \begin{cases} N_i, & i\in A\\ -X_i, & i\in B. \end{cases} \quad (11.29)$$
In the following, a vector or matrix indexed by A (or B) indicates that only those rows are kept that have indices in A (or B). All others are set to zero. With these definitions and (11.21), the ideal risk for the
⁵ However, it should be noted that Mallat published his algorithm earlier.
⁶ A similar remark can be found in [457], p. 53.
It depends on the signal ($\theta$), the estimator $\hat\theta$, the noise level $\epsilon$, and the basis.
For a fixed $\epsilon$, the optimal minmax procedure is the one that minimizes the error for the worst possible signal from the coefficient body $\Theta$:
$$R^{*}(\Theta) = \inf_{\hat\theta}\ \sup_{\theta\in\Theta}R\left(\hat\theta,\theta\right). \quad (11.32)$$
Consider the particular nonlinear procedure $\hat\theta$ that corresponds to soft-thresholding of every noisy coefficient $y_i$:

where
$$\hat c_i = Q(c_i). \quad (11.37)$$
Denote the quantization step size as T. Notice in the figure that the quantizer has a dead zone: if $|c_i|<T$, then $Q(c_i) = 0$. We define an index set for those insignificant coefficients, $I = \{i: |c_i|<T\}$. Let M be the number of coefficients with magnitudes greater than T (significant coefficients). Thus the size of I is N − M. The squared error caused by the quantization is
$$\sum_{i=1}^{N}\left(c_i-\hat c_i\right)^2 = \sum_{i\in I}c_i^2 + \sum_{i\notin I}\left(c_i-\hat c_i\right)^2. \quad (11.38)$$
Since the transform is orthonormal, this is the same as the reconstruction error. Assume T is small enough that the significant coefficients are uniformly distributed within each quantization bin. Then the second term in the error expression is
$$\sum_{i\notin I}\left(c_i-\hat c_i\right)^2 = M\,\frac{T^2}{12}. \quad (11.39)$$
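A sketch of the dead-zone quantizer just described (our own minimal version; reconstructing significant coefficients at bin midpoints is what makes the uniform-within-bin assumption give the $T^2/12$ term of (11.39)):

    % Dead-zone scalar quantization with step size T: coefficients with |c| < T
    % are set to zero; significant coefficients are reconstructed at bin midpoints.
    function chat = deadzone_quantize(c, T)
      chat = zeros(size(c));
      sig  = abs(c) >= T;                                           % significant set
      chat(sig) = sign(c(sig)) .* (floor(abs(c(sig))/T) + 0.5) * T; % mid-bin values
    end

The number of nonzero entries of chat plays the role of M in the analysis above.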
For the first term, we need the following standard approximation theorem [136] that relates it to the $l_p$ norm of the coefficients,
$$\|f\|_p = \left(\sum_{i=1}^{N}|c_i|^p\right)^{1/p}. \quad (11.40)$$
Theorem 56 Let $\lambda = \frac{1}{p} > \frac{1}{2}$; then
$$\sum_{i\in I}c_i^2 \le \frac{\|f\|_p^2}{2\lambda-1}\,M^{1-2\lambda}. \quad (11.41)$$
This theorem can be generalized to infinite-dimensional spaces if $\|f\|_p^2 < +\infty$. It has been shown that for functions in a Besov space, $\|f\|_p^2 < +\infty$ does not depend on the particular choice of the wavelet as long as each wavelet in the basis has $q > \lambda - \frac{1}{2}$ vanishing moments and is q times continuously differentiable [378]. The Besov space includes piecewise regular functions that may include discontinuities. This theorem indicates that the first term of the error expression decreases very fast when the number of significant coefficients increases.
The bit rate of the prototype compression algorithm can also be separated into two parts. For the first part, we need to indicate whether each coefficient is significant; this is also known as the significant map. For example, we could use 1 for significant and 0 for insignificant. We need a total of N such indicators. For the second part, we need to represent the values of the significant coefficients. We only need M values. Because the distributions of the values and of the indicators are not known in general, adaptive entropy coding is often used [576].
Energy concentration is one of the most important properties for low-bitrate transform coding. Suppose that, for the same quantization step size T, we have a second basis that generates fewer significant coefficients. The distribution of the significant-map indicators is then more skewed and thus requires fewer bits to code. Also, fewer significant values need to be coded, which may require fewer bits. At the same time, a smaller M reduces the second error term, as in (11.39). Overall, it is very likely that the new basis improves the rate-distortion performance. Wavelets have a better energy concentration property than the Fourier transform for signals with discontinuities. This is one of the main reasons that wavelet-based compression methods usually outperform DCT-based JPEG, especially at low bitrates.
• Insignificant coefficients are often clustered together. In particular, they often cluster around the same
location across several scales. Since the distance between nearby coefficients doubles at every scale,
the insignificant coefficients often form a tree shape, as we can see from Figure: Discrete Wavelet
Transform of the Houston Skyline, using ψD8' with a Gain of √2 for Each Higher Scale (Figure 3.5).
These so-called zero-trees can be exploited [466], [455] to achieve excellent results.
• The choice of basis is very important. Methods have been developed to adaptively choose the basis for
the signal [428], [584]. Although they can be computationally very intensive, substantial improvement
can be realized.
• Special run-length codes can be used to code the significance map and the values [511], [517].
• Advanced quantization methods can be used to replace the simple scalar quantizer [287].
• Methods based on statistical analysis such as classification, modeling, estimation, and prediction also
produce impressive results [333].
• Instead of using one fixed quantization step size, we can successively refine the quantization by using
smaller and smaller step sizes. These embedded schemes allow both the encoder and the decoder to
stop at any bit rate [466], [455]; a simple illustration follows this list.
• The wavelet transform can be replaced by an integer-to-integer wavelet transform; then no quantization
is necessary and the compression is lossless [455].
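A minimal sketch of the successive-refinement idea in the embedded-scheme bullet above (illustrative only; a real embedded coder such as those in [466], [455] refines the already-transmitted bits rather than re-quantizing from scratch):

% Halving the step size at each pass: the quantization error shrinks monotonically,
% so a decoder that stops after any pass still has a usable approximation.
c = randn(1,1024).^3;                % surrogate transform coefficients
T = 1.0;                             % initial quantization step size
for pass = 1:4
    step = T/2^(pass-1);
    chat = round(c/step)*step;       % finer uniform quantizer at each pass
    fprintf('step %6.3f   squared error %9.5f\n', step, sum((c-chat).^2));
end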
Other references are:[150], [155], [145], [459], [228], [466], [8], [9], [466], [554], [455], [456], [50], [234].
The closer the coecient body is to a solid, orthosymmetric body with varying side lengths, the less the
individual coecients are correlated with each other and the greater the compression in this basis.
In summary, the wavelet bases have a number of useful properties:
5. They are computationally inexpensive, perhaps one of the few really useful linear transforms with a
complexity that is O(N), as compared to a Fourier transform, which is O(N log(N)), or an arbitrary
linear transform, which is O(N²).
11.7 Applications
Listed below are several application areas in which wavelet methods have had some success.
11.7.5 Fractals
Wavelet-based signal processing has been combined with fractals and applied to chaotic systems [6], [368],
[579], [131], [20], [21], [577], [578]. The multiresolution formulation of the wavelet and the self-similar
character of certain fractals make the wavelet a natural tool for this analysis. An application to noise
removal from music is given in [31].
Other applications address the automatic target recognition (ATR) problem, among many other areas.
Summary Overview
Table 12.1: Properties of M = 2 Scaling Functions (SF) and their Fourier Transforms
The different "cases" represent somewhat similar conditions for the stated relationships. For example,
in Case 1, Table 1, the multiresolution conditions are stated in the time and frequency domains, while in
Table 2 the corresponding necessary conditions on h(n) are given for a scaling function in L¹. However,
the conditions are not sufficient unless general distributions are allowed. In Case 1, Table 3, the definition
of a wavelet is given so as to span the appropriate multiresolution signal space, but nothing seems appropriate for
Case 1 in Table 4. Clearly the organization of these tables is somewhat subjective.
If we "tighten" the restrictions by adding one more linear condition, we get Case 2, which has consequences
in Tables 1, 2, and 4 but does not guarantee anything better than a distribution. Case 3 involves orthogonality,
both across scales and across translations, so there are two rows for Case 3 in the tables involving wavelets. Case 4
adds to the orthogonality a condition on the frequency response H(ω) or on the eigenvalues of the transition
matrix to guarantee an L² basis rather than the tight frame guaranteed for Case 3. Cases 5 and 6 concern
zero moments and scaling function smoothness and symmetry.
In some cases, columns 3 and 4 are equivalent, and in others they are not. In some categories, a higher
numbered case assumes a lower numbered case, and in others it does not. These tables try to give a structure
without the details. It is useful to refer to them while reading the earlier chapters and to refer to the earlier
chapters to see the assumptions and conditions behind these tables.
• Signal Expansions
· General Expansion Systems Section 7.6 (Vanishing Scaling Function Moments)
· Multiresolution Systems
• Multiresolution Wavelet Systems
· M = 2 or two-band wavelet systems
· M > 2 or M-band wavelet systems Section 8.2 (Multiplicity-M (M-Band) Scaling Functions and Wavelets)
· Wavelet packet systems Section 8.3 (Wavelet Packets)
· Multiwavelet systems Section 8.5 (Multiwavelets)
• Length of scaling function filter
· Compact support wavelet systems
· Infinite support wavelet systems
• Orthogonality
· Orthogonal or orthonormal wavelet bases
· Semiorthogonal systems
· Biorthogonal systems Section 8.4 (Biorthogonal Wavelet Systems)
• Symmetry
· Symmetric scaling functions and wavelets Section 8.4 (Biorthogonal Wavelet Systems), Section 8.5 (Multiwavelets)
· Approximately symmetric systems Section 7.9 (Coiflets and Related Wavelet Systems)
· Minimum phase spectral factorization systems
· General scaling functions
• Complete and Overcomplete systems Section 8.6 (Overcomplete Representations, Frames, Redundant Transforms, and Adaptive Bases)
· Frames
· Tight frames
· Redundant systems and transforms Section 8.6 (Overcomplete Representations, Frames, Redundant Transforms, and Adaptive Bases), Section 11.3 (Nonlinear Filtering or Denoising with the DWT)
· Adaptive systems and transforms, pursuit methods Section 8.6 (Overcomplete Representations, Frames, Redundant Transforms, and Adaptive Bases)
• Discrete and continuous signals and transforms {analogous Fourier method} Section 8.8 (Discrete Multiresolution Analysis, the Discrete-Time Wavelet Transform, and the Continuous Wavelet Transform)
This appendix contains outline proofs and derivations for the theorems and formulas given in the early part of
Chapter: The Scaling Function and Scaling Coefficients, Wavelet and Wavelet Coefficients. They are not
intended to be complete or formal, but they should be sufficient to understand the ideas behind why a result
is true and to give some insight into its interpretation as well as to indicate assumptions and restrictions.
Proof 1 The conditions given by (6.10) and (8.7) can be derived by integrating both sides of

\[ \phi(x) = \sum_{n} h(n)\,\sqrt{M}\,\phi(Mx - n) \tag{13.1} \]

and making the change of variables y = Mx,

\[ \int \phi(x)\,dx = \sum_{n} h(n)\,\sqrt{M} \int \phi(Mx - n)\,dx \tag{13.2} \]

and noting the integral is independent of translation, which gives

\[ = \sum_{n} h(n)\,\sqrt{M}\,\frac{1}{M} \int \phi(y)\,dy. \tag{13.3} \]

With no further requirements other than $\phi \in L^1$ to allow the interchange of the sum and integral and $\int \phi(x)\,dx \neq 0$,
this gives (8.7) as

\[ \sum_{n} h(n) = \sqrt{M} \tag{13.4} \]

and for M = 2 gives (6.10). Note this does not assume orthogonality nor any specific normalization of φ(t)
and does not even assume M is an integer.
This is the most basic necessary condition for the existence of φ (t) and it has the fewest assumptions or
restrictions.
Proof 2 The conditions in (6.14) and (8.8) are a down-sampled orthogonality of translates by M of the
coefficients which results from the orthogonality of translates of the scaling function given by

\[ \int \phi(x)\,\phi(x-m)\,dx = E\,\delta(m). \tag{13.5} \]

The basic scaling equation (13.1) is substituted for both functions in (13.5), giving

\[ \int \left[ \sum_{n} h(n)\,\sqrt{M}\,\phi(Mx-n) \right] \left[ \sum_{k} h(k)\,\sqrt{M}\,\phi(Mx-Mm-k) \right] dx = E\,\delta(m) \tag{13.6} \]

which, after reordering and a change of variable y = Mx, gives

\[ \sum_{n} \sum_{k} h(n)\,h(k) \int \phi(y-n)\,\phi(y-Mm-k)\,dy = E\,\delta(m). \tag{13.7} \]
Using the orthogonality of the integer translates of φ in (13.5), the integral in (13.7) is E δ(n − Mm − k), so the double sum collapses to

\[ \sum_{n} h(n)\,h(n-Mm) = \delta(m) \tag{13.8} \]

as in (6.13) and (8.8). This result requires the orthogonality condition (13.5), M must be an integer, and any
non-zero normalization E may be used.
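A hedged numerical check of this down-sampled orthogonality for the same length-4 coefficients: the autocorrelation of h(n) sampled at even lags should be δ(m).

% Check sum_n h(n) h(n-2m) = delta(m) for the length-4 Daubechies coefficients
h = [1+sqrt(3), 3+sqrt(3), 3-sqrt(3), 1-sqrt(3)]/(4*sqrt(2));
r = conv(h, fliplr(h));        % autocorrelation sequence of h(n)
r(length(h):2:end)             % even lags: should be 1 followed by zeros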
Proof 3 (Corollary 2) The result that

\[ \sum_{n} h(2n) = \sum_{n} h(2n+1) = \frac{1}{\sqrt{2}} \tag{13.9} \]

is obtained by breaking (13.4) for M = 2 into the sum of the even and odd coefficients,

\[ \sum_{n} h(n) = \sum_{k} h(2k) + \sum_{k} h(2k+1) = K_0 + K_1 = \sqrt{2}. \tag{13.11} \]

Summing the down-sampled orthogonality condition (13.8) over all shifts gives $\sum_n \sum_m h(m+2n)\,h(m) = 1$,
which we then split into even and odd sums and reorder to give

\[ \sum_{n} \left[ \sum_{k} h(2k+2n)\,h(2k) + \sum_{k} h(2k+1+2n)\,h(2k+1) \right]
 = \sum_{k} \left[ \sum_{n} h(2k+2n) \right] h(2k) + \sum_{k} \left[ \sum_{n} h(2k+1+2n) \right] h(2k+1). \tag{13.13} \]

The left-hand side is unity, and each bracketed inner sum on the right is $K_0$ or $K_1$ independent of $k$, so (13.13)
reduces to $K_0^2 + K_1^2 = 1$. Together with $K_0 + K_1 = \sqrt{2}$ from (13.11), this forces $K_0 = K_1 = 1/\sqrt{2}$, which is (13.9).
where the support of the right-hand side of (13.17) is [N₁/2, (N − 1 + N₂)/2). Since the support of both
sides of (13.17) must be the same, the limits on the sum, or the limits on the indices of the nonzero h(n),
are such that N₁ = 0 and N₂ = N; therefore, the support of h(n) is [0, N − 1].
Proof 4 First define the autocorrelation function

\[ a(t) = \int \phi(x)\,\phi(x-t)\,dx \tag{13.18} \]

whose Fourier transform is

\[ A(\omega) = \Phi(\omega)\,\Phi(-\omega) = |\Phi(\omega)|^2. \tag{13.21} \]

If we look at (13.18) as being the inverse Fourier transform of (13.21) and sample a(t) at t = k, we have

\[ a(k) = \frac{1}{2\pi}\int_{-\infty}^{\infty} |\Phi(\omega)|^2\,e^{j\omega k}\,d\omega \tag{13.22} \]

\[ = \frac{1}{2\pi}\sum_{\ell}\int_{0}^{2\pi} |\Phi(\omega+2\pi\ell)|^2\,e^{j\omega k}\,d\omega
 = \frac{1}{2\pi}\int_{0}^{2\pi} \left[ \sum_{\ell} |\Phi(\omega+2\pi\ell)|^2 \right] e^{j\omega k}\,d\omega \tag{13.23} \]

but this integral is in the form of an inverse discrete-time Fourier transform (DTFT), which means

\[ \sum_{k} a(k)\,e^{j\omega k} = \sum_{\ell} |\Phi(\omega+2\pi\ell)|^2. \tag{13.24} \]

If the integer translates of φ(t) are orthogonal, a(k) = δ(k) and we have our result

\[ \sum_{\ell} |\Phi(\omega+2\pi\ell)|^2 = 1, \tag{13.25} \]

which is similar to Parseval's theorem relating the energy in the frequency domain to the energy in the time
domain.
Proof 6 Equation (6.20) states a very interesting property of the frequency response of an FIR filter
with the scaling coefficients as filter coefficients. This result can be derived in the frequency or time domain;
we will show the frequency domain argument. The scaling equation (13.1) becomes (6.51) in the frequency
domain. Taking the squared magnitude of both sides of a scaled version of

\[ \Phi(\omega) = \frac{1}{\sqrt{2}}\,H(\omega/2)\,\Phi(\omega/2) \tag{13.27} \]

gives

\[ |\Phi(2\omega)|^2 = \frac{1}{2}\,|H(\omega)|^2\,|\Phi(\omega)|^2. \tag{13.28} \]

Add kπ to ω and sum over k; the left side of (13.28) becomes

\[ \sum_{k} |\Phi(2\omega + 2\pi k)|^2 = 1, \tag{13.29} \]

which is unity from (6.57). Summing the right side of (13.28) gives

\[ \sum_{k} \frac{1}{2}\,|H(\omega + k\pi)|^2\,|\Phi(\omega + k\pi)|^2. \tag{13.30} \]

Break this sum into a sum of the even and odd indexed terms,

\[ \sum_{k} \frac{1}{2}\,|H(\omega + 2\pi k)|^2\,|\Phi(\omega + 2\pi k)|^2
 + \sum_{k} \frac{1}{2}\,|H(\omega + (2k+1)\pi)|^2\,|\Phi(\omega + (2k+1)\pi)|^2 \tag{13.31} \]

\[ = \frac{1}{2}\,|H(\omega)|^2 \sum_{k} |\Phi(\omega + 2\pi k)|^2
 + \frac{1}{2}\,|H(\omega + \pi)|^2 \sum_{k} |\Phi(\omega + (2k+1)\pi)|^2. \tag{13.32} \]

Each of the remaining sums is unity by (13.25), so equating (13.29) and (13.32) gives
$|H(\omega)|^2 + |H(\omega+\pi)|^2 = 2$, which is (6.20).
Orthogonality of the wavelet to the translates of the scaling function requires

\[ \int \psi(t)\,\phi(t-k)\,dt = 0 \tag{13.36} \]

which must be true for all integer k. Defining $h_e(n) = h(2n)$, $h_o(n) = h(2n+1)$, and $\tilde{g}(n) = g(-n)$ for
any sequence $g$, this becomes
Assuming the sequences are finite length, (13.45) can be used to show that $H_p(z)$ is a
polynomial matrix with a polynomial matrix inverse. Therefore the determinant of $H_p(z)$ is of the form $\pm z^k$
for some integer $k$. This is equivalent to (13.46). Now, convolving both sides of (13.46) by $\tilde{h}_e$ we get
But the integral on the right-hand side is $A_0$, usually normalized to one, and from (6.17) or (13.9) and
(13.49) we know that

\[ \sum_{n} h_1(n) = 0. \tag{13.51} \]
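As a small numerical check (a sketch, not from the text), wavelet coefficients formed by the usual alternating-sign relation h1(n) = (−1)ⁿ h(N − 1 − n) sum to zero, consistent with (13.51).

% Check that the wavelet coefficients sum to zero, as in (13.51)
h  = [1+sqrt(3), 3+sqrt(3), 3-sqrt(3), 1-sqrt(3)]/(4*sqrt(2));
N  = length(h);
h1 = ((-1).^(0:N-1)) .* h(N:-1:1);   % h1(n) = (-1)^n h(N-1-n)
sum(h1)                              % should be zero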
In this appendix we develop most of the results on scaling functions, wavelets, and scaling and wavelet
coefficients presented in Section 6.8 (Further Properties of the Scaling Function and Wavelet) and elsewhere.
For convenience, we repeat (6.1), (6.10), (6.13), and (6.15) here:

\[ \phi(t) = \sum_{n} h(n)\,\sqrt{2}\,\phi(2t-n) \tag{14.1} \]

\[ \sum_{n} h(n) = \sqrt{2} \tag{14.2} \]

\[ \int \phi(t)\,\phi(t-k)\,dt = E\,\delta(k) = \begin{cases} E & \text{if } k = 0 \\ 0 & \text{otherwise} \end{cases} \tag{14.3} \]

If normalized,

\[ \int \phi(t)\,\phi(t-k)\,dt = \delta(k) = \begin{cases} 1 & \text{if } k = 0 \\ 0 & \text{otherwise} \end{cases} \tag{14.4} \]

The results in this appendix refer to equations in the text written in bold face fonts.
Equation (6.45) is the normalization of (6.15) and part of the orthonormal conditions required by (14.3)
for k = 0 and E = 1.
Equation (6.53) If the φ(x − k) are orthogonal, (14.3) states

\[ \int \phi(x+m)\,\phi(x)\,dx = E\,\delta(m) \tag{14.5} \]

\[ A_0^2 = E \tag{14.9} \]
If the scaling function is not normalized to unity, one can show the more general result of (6.53). This is
done by noting that a more general form of (6.50) is

\[ \sum_{m} \phi(x+m) = \int \phi(x)\,dx. \tag{14.10} \]
Applying this j times gives the result in (6.46). A similar result can be derived for the wavelet.
Equation (6.48) is derived by defining the sum

\[ A_J = \sum_{\ell} \phi\!\left(\frac{\ell}{2^J}\right). \tag{14.15} \]

Substituting (14.1) into (14.15), the summation over ℓ is independent of an integer shift, so that using (14.2) and (14.15) gives

\[ A_J = \sqrt{2}\,\sum_{n} h(n) \left\{ \sum_{\ell} \phi\!\left(\frac{\ell}{2^{J-1}}\right) \right\} = 2\,A_{J-1}, \tag{14.18} \]

or

\[ A_J - 2\,A_{J-1} = 0, \tag{14.19} \]

which has as a solution the geometric sequence

\[ A_J = A_0\,2^J. \tag{14.20} \]

If the limit exists, equation (14.15) divided by $2^J$ is the Riemann sum whose limit is the definition of the
Riemann integral of φ(x),

\[ \lim_{J\to\infty} \left\{ A_J\,\frac{1}{2^J} \right\} = \int \phi(x)\,dx = A_0. \tag{14.21} \]

It is stated in (6.57) and shown in (14.6) that if φ(x) is normalized, then $A_0 = 1$ and (14.20) becomes

\[ A_J = 2^J, \tag{14.22} \]

which gives (6.48).
Equation (14.21) shows another remarkable property of φ (x) in that the bracketed term is exactly equal
to the integral, independent of J . No limit need be taken!
Equation (6.49) is the "partitioning of unity" by φ(x). It follows from (6.48) by setting J = 0.
Equation (6.50) is a generalization of (6.49) by noting that the sum in (6.48) is independent of a shift of
the form

\[ \sum_{\ell} \phi\!\left(\frac{\ell}{2^J} - \frac{L}{2^M}\right) = 2^J \tag{14.23} \]

for any integers M ≥ J and L. In the limit as M → ∞, $L/2^M$ can be made arbitrarily close to any x; therefore,
if φ(x) is continuous,

\[ \sum_{\ell} \phi\!\left(\frac{\ell}{2^J} - x\right) = 2^J. \tag{14.24} \]

This gives (6.50) and becomes (6.49) for J = 0. Equation (6.50) is called a "partitioning of unity" for
obvious reasons.
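As a rough numerical illustration of this partitioning of unity (a sketch, not part of the original text), one can sum the integer translates of the samples of φ(t) produced by the psa routine listed in Appendix C, assuming psa is on the Matlab path (it plots and pauses at each iteration).

% Approximate check of sum_k phi(x - k) = 1 using samples from psa (Appendix C)
h = [1+sqrt(3), 3+sqrt(3), 3-sqrt(3), 1-sqrt(3)]/(4*sqrt(2)); % Daubechies length-4
p = psa(h, 11);                 % samples of phi(t); psa uses S = 128 samples per unit
S = 128; K = length(h) - 1;     % support of phi is [0, K]
s = zeros(1, S);
for k = 0:K-1
    s = s + p(k*S + (1:S));     % add the K integer translates that cover [0,1)
end
[min(s) max(s)]                 % both should be close to 1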
The first four relationships for the scaling function hold in a generalized form for the more general defining
equation (8.4). Only (6.48) is different. It becomes

\[ \sum_{k} \phi\!\left(\frac{k}{M^J}\right) = M^J \tag{14.25} \]

for M an integer. It may be possible to show that certain rational M are allowed.
Equations (6.51), (6.72), and (6.52) are the recursive relationship for the Fourier transform of the
scaling function and are obtained by simply taking the transform (6.2) of both sides of (14.1), giving

\[ \Phi(\omega) = \sum_{n} h(n)\,\sqrt{2} \int \phi(2t-n)\,e^{-j\omega t}\,dt. \tag{14.26} \]

The frequency-domain orthogonality condition (6.57) is derived here by taking the definition of the Fourier transform of φ(x), sampling it every 2πk points and multiplying
it times its complex conjugate,

\[ \Phi(\omega+2\pi k)\,\overline{\Phi(\omega+2\pi k)} = \int \phi(x)\,e^{-j(\omega+2\pi k)x}\,dx \int \phi(y)\,e^{j(\omega+2\pi k)y}\,dy \tag{14.29} \]
but

\[ \sum_{k} e^{j2\pi k z} = \sum_{\ell} \delta(z-\ell), \tag{14.33} \]

therefore

\[ \sum_{k} |\Phi(\omega+2\pi k)|^2 = \sum_{\ell} \int \phi(x)\,\phi(x+\ell)\,e^{-j\omega\ell}\,dx, \tag{14.34} \]

which becomes

\[ \sum_{\ell} \left[ \int \phi(x)\,\phi(x+\ell)\,dx \right] e^{-j\omega\ell}. \tag{14.35} \]

Because of the orthogonality of integer translates of φ(x), this is not a function of ω but is $\int |\phi(x)|^2\,dx$,
which, if normalized, is unity as stated in (6.57). This is the frequency-domain equivalent of (6.13).
Equations (6.58) and (6.59) show how the scaling function determines the equation coefficients. This
is derived by multiplying both sides of (14.1) by φ(2x − m) and integrating to give

\[ \int \phi(x)\,\phi(2x-m)\,dx = \sum_{n} h(n)\,\sqrt{2} \int \phi(2x-n)\,\phi(2x-m)\,dx \tag{14.36} \]

\[ = \frac{1}{\sqrt{2}}\,\sum_{n} h(n) \int \phi(x-n)\,\phi(x-m)\,dx. \tag{14.37} \]

Using the orthogonality condition (14.3) gives

\[ \int \phi(x)\,\phi(2x-m)\,dx = h(m)\,\frac{1}{\sqrt{2}} \int |\phi(y)|^2\,dy = \frac{1}{\sqrt{2}}\,h(m), \tag{14.38} \]

which gives (6.58). A similar argument gives (6.59).
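As a hedged numerical illustration of (6.58) (not part of the original text), the scaling coefficients can be approximately recovered from samples of φ(x) by a Riemann-sum version of h(m) = √2 ∫ φ(x) φ(2x − m) dx, again using the psa routine of Appendix C (assumed to be on the Matlab path).

% Approximate recovery of h(m) from samples of phi(x) via (6.58)
h  = [1+sqrt(3), 3+sqrt(3), 3-sqrt(3), 1-sqrt(3)]/(4*sqrt(2));
p  = psa(h, 11);  S = 128;  L = length(p);   % p(i) is approximately phi(i/S)
N  = length(h);   hr = zeros(1, N);
for m = 0:N-1
    for i = 1:L
        j = 2*i - m*S;                       % grid index of phi(2x - m)
        if j >= 1 && j <= L
            hr(m+1) = hr(m+1) + p(i)*p(j);
        end
    end
end
hr = sqrt(2)*hr/S;                           % Riemann sum with spacing 1/S
max(abs(hr - h))                             % should be small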
You are free to use these programs or any derivative of them for any scientific purpose, but please reference
this book. Updated versions of these programs and others can be found on our web page at:
http://www-dsp.rice.edu/
function p = psa(h,kk)
% p = psa(h,kk) calculates samples of the scaling function
% phi(t) = p by kk successive approximations from the
% scaling coefficients h. Initial iteration is a constant.
% phi_k(t) is plotted at each iteration. csb 5/19/93
%
if nargin==1, kk=11; end; % Default number of iterations
h2= h*2/sum(h); % normalize h(n)
K = length(h2)-1; S = 128; % Sets sample density
p = [ones(1,3*S*K),0]/(3*K); % Sets initial iteration
P = p(1:K*S); % Store for later plotting
axis([0 K*S+2 -.5 1.4]);
hu = upsam(h2,S); % upsample h(n) by S
for iter = 0:kk % Successive approx.
p = dnsample(conv(hu,p)); % convolve and down-sample
plot(p); pause; % plot each iteration
% P = [P;p(1:K*S)]; % store each iter. for plotting
end
p = p(1:K*S); % only the supported part
L = length(p);
x = ([1:L])/(S);
axis([0 3 -.5 1.4]);
plot(x,p); % Final plot
title('Scaling Function by Successive Approx.');
ylabel('Scaling Function');
xlabel('x');
function p = pdyad(h,kk)
% p = pdyad(h,kk) calculates approx. (L-1)*2^(kk+2) samples of the
% scaling function phi(t) = p by kk+3 dyadic expansions
% from the scaling coefficient vector h where L=length(h).
% Also plots phi_k(t) at each iteration. csb 5/19/93
%
if nargin==1, kk = 8; end % Default iterations
h2 = h*2/sum(h); % Normalize
function hn = daub(N2)
% hn = daub(N2)
function h = h246(a,b)
% h = h246(a,b) generates orthogonal scaling function
% coefficients h(n) for lengths 2, 4, and 6 using
% Resnikoff's parameterization with angles a and b.
% csb. 4/4/93
if a==b, h = [1,1]/sqrt(2); % Length-2
elseif b==0
h0 = (1 - cos(a) + sin(a))/2; % Length-4
h1 = (1 + cos(a) + sin(a))/2;
h2 = (1 + cos(a) - sin(a))/2;
h3 = (1 - cos(a) - sin(a))/2;
h = [h0 h1 h2 h3]/sqrt(2);
else % Length-6
h0 = ((1+cos(a)+sin(a))*(1-cos(b)-sin(b))+2*sin(b)*cos(a))/4;
h1 = ((1-cos(a)+sin(a))*(1+cos(b)-sin(b))-2*sin(b)*cos(a))/4;
h2 = (1+cos(a-b)+sin(a-b))/2;
h3 = (1+cos(a-b)-sin(a-b))/2;
h4 = (1-h0-h2);
h5 = (1-h1-h3);
h = [h0 h1 h2 h3 h4 h5]/sqrt(2);
end
function y = upsample(x)
% y = upsample(x) inserts a zero between each term in the row vector x.
% for example: [1 0 2 0 3] = upsample([1 2 3]). csb 3/1/93.
L = length(x);
y = [x; zeros(1,L)];   % follow each sample by a zero
y = y(:).';            % flatten column-wise into a row vector
y = y(1:2*L-1);        % drop the trailing zero
function y = upsam(x,S)
% y = upsam(x,S) inserts S-1 zeros between each term in the row vector x.
% for example: [1 0 2 0 3] = upsam([1 2 3],2). csb 3/1/93.
L = length(x);
y = [x; zeros(S-1,L)]; % follow each sample by S-1 zeros
y = y(:).';            % flatten column-wise into a row vector
y = y(1:S*L-1);        % drop one trailing zero (length S*L-1)
function y = dnsample(x)
% y = dnsample(x) samples x by removing the even terms in x.
% for example: [1 3] = dnsample([1 2 3 4]). csb 3/1/93.
L = length(x);
y = x(1:2:L);
function z = merge(x,y)
% z = merge(x,y) interleaves the two vectors x and y.
% Example [1 2 3 4 5] = merge([1 3 5],[2 4]).
% csb 3/1/93.
%
z = [x;y,0];
z = z(:);
z = z(1:length(z)-1).';
function w = wave(p,h)
% w = wave(p,h) calculates and plots the wavelet psi(t)
% from the scaling function p and the scaling function
% coefficient vector h.
% It uses the definition of the wavelet. csb. 5/19/93.
%
h2 = h*2/sum(h);
NN = length(h2); LL = length(p); KK = round((LL)/(NN-1));
h1u = upsam(h2(NN:-1:1).*cos(pi*[0:NN-1]),KK);
w = dnsample(conv(h1u,p)); w = w(1:LL);
xx = [0:LL-1]*(NN-1)/(LL-1);
axis([1 2 3 4]); axis;
plot(xx,w);
function g = dwt(f,h,NJ)
% function g = dwt(f,h,NJ); Calculates the DWT of periodic g
% with scaling filter h and NJ scales. rag & csb 3/17/94.
%
N = length(h); L = length(f);
c = f; t = [];
if nargin==2, NJ = round(log10(L)/log10(2)); end; % Number of scales
h0 = fliplr(h); % Scaling filter
h1 = h; h1(1:2:N) = -h1(1:2:N); % Wavelet filter
for j = 1:NJ % Mallat's algorithm
L = length(c);
c = [c(mod((-(N-1):-1),L)+1) c]; % Make periodic
d = conv(c,h1); d = d(N:2:(N+L-2)); % Convolve & d-sample
c = conv(c,h0); c = c(N:2:(N+L-2)); % Convolve & d-sample
t = [d,t]; % Concatenate wlet coeffs.
end;
g = [c,t]; % The DWT
function f = idwt(g,h,NJ)
% function f = idwt(g,h,NJ); Calculates the IDWT of periodic g
% with scaling filter h and NJ scales. rag & csb 3/17/94.
%
L = length(g); N = length(h);
if nargin==2, NJ = round(log10(L)/log10(2)); end; % Number of scales
h0 = h; % Scaling filter
h1 = fliplr(h); h1(2:2:N) = -h1(2:2:N); % Wavelet filter
LJ = L/(2^NJ); % Number of SF coeffs.
c = g(1:LJ); % Scaling coeffs.
for j = 1:NJ % Mallat's algorithm
w = mod(0:N/2-1,LJ)+1; % Make periodic
d = g(LJ+1:2*LJ); % Wavelet coeffs.
cu(1:2:2*LJ+N) = [c c(1,w)]; % Up-sample & periodic
du(1:2:2*LJ+N) = [d d(1,w)]; % Up-sample & periodic
c = conv(cu,h0) + conv(du,h1); % Convolve & combine
c = c(N:N+2*LJ-1); % Periodic part
LJ = 2*LJ;
end;
f = c; % The inverse DWT
function r = mod(m,n)
% r = mod(m,n) calculates r = m modulo n
%
r = m - n*floor(m/n); % Matrix modulo n
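A short usage sketch (not in the original appendix): the dwt and idwt routines above should reconstruct a power-of-two-length signal to within numerical precision when an orthogonal scaling filter with sum(h) = √2 is used; the Daubechies length-4 coefficients below are assumed for illustration.

% Example: three-scale DWT and inverse DWT of a length-64 signal
h  = [1+sqrt(3), 3+sqrt(3), 3-sqrt(3), 1-sqrt(3)]/(4*sqrt(2)); % Daubechies length-4
f  = randn(1, 64);               % test signal, length a power of two
g  = dwt(f, h, 3);               % wavelet coefficients, coarsest scaling terms first
f2 = idwt(g, h, 3);              % reconstruct
max(abs(f - f2))                 % should be near machine precision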
function g = dwt5(f,h,NJ)
% function g = dwt5(f,h,NJ)
% Program to calculate the DWT from the L samples of f(t) in
% the vector f using the scaling filter h(n).
% csb 3/20/94
%
N = length(h); L = length(f);
c = f; t = [];
if nargin==2
NJ = round(log10(L)/log10(2)); % Number of scales
end;
h1 = h; h1(1:2:N) = -h1(1:2:N); % Wavelet filter
h0 = fliplr(h); % Scaling filter
for j = 1:NJ % Mallat's algorithm
L = length(c);
d = conv(c,h1); % Convolve
c = conv(c,h0); % Convolve
Lc = length(c);
while Lc > 2*L % Multi-wrap?
d = [(d(1:L) + d(L+1:2*L)), d(2*L+1:Lc)]; % Wrap output
c = [(c(1:L) + c(L+1:2*L)), c(2*L+1:Lc)]; % Wrap output
Lc = length(c);
end
d = [(d(1:N-1) + d(L+1:Lc)), d(N:L)]; % Wrap output
d = d(1:2:L); % Down-sample wlets coeffs.
c = [(c(1:N-1) + c(L+1:Lc)), c(N:L)]; % Wrap output
c = c(1:2:L); % Down-sample scaling fn c.
t = [d,t]; % Concatenate wlet coeffs.
end % Finish wavelet part
g = [c,t]; % Add scaling fn coeff.
function a = choose(n,k)
% a = choose(n,k)
% BINOMIAL COEFFICIENTS
% allowable inputs:
% n : integer, k : integer
% n : integer vector, k : integer
% n : integer, k : integer vector
% n : integer vector, k : integer vector (of equal dimension)
nv = n;
kv = k;
if (length(nv) == 1) & (length(kv) > 1)
nv = nv * ones(size(kv));
elseif (length(nv) > 1) & (length(kv) == 1)
kv = kv * ones(size(nv));
end
a = nv;
for i = 1:length(nv)
n = nv(i);
k = kv(i);
if n >= 0
if k >= 0
if n >= k
c = prod(1:n)/(prod(1:k)*prod(1:n-k));
else
c = 0;
end
else
c = 0;
end
else
if k >= 0
c = (-1)^k * prod(1:k-n-1)/(prod(1:k)*prod(1:-n-1));
else
if n >= k
c = (-1)^(n-k)*prod(1:-k-1)/(prod(1:n-k)*prod(1:-n-1));
else
c = 0;
end
end
end
a(i) = c;
end
Bibliography
16.1 Bibliography
In 1998 we especially recommended five books that complement this one. An excellent reference for the
history, philosophy, and overview of wavelet analysis has been written by Barbara Burke Hubbard [269].
The best source for the mathematical details of wavelet theory is by Ingrid Daubechies [125]. Two good
general books which start with the discrete-time wavelet series and filter bank methods are by Martin
Vetterli and Jelena Kovačević [553] and by Gilbert Strang and Truong Nguyen [496]. P. P. Vaidyanathan has
written a good book on general multirate systems as well as filter banks [531].
Much of the recent interest in compactly supported wavelets was stimulated by Daubechies [115], [108],
[125], [80] and S. Mallat [342], [347], [337] and others [317], [320]. A powerful point of view has been recently
presented by D. L. Donoho, I. M. Johnstone, R. R. Coifman, and others [146], [151], [140], [154], [165],
[160], [158], [47], [62], [90]. The development in the DSP community using filters has come from Smith and
Barnwell [475], [477], Vetterli [543], [546], [548], [553], and Vaidyanathan [522], [538], [531]. Some of the
work at Rice is reported in [211], [217], [487], [203], [213], [51], [226], [229], [408], [307], [561], [304], [224],
[310], [562], [560]. Analysis and experimental work was done using the Matlab computer software system
[388], [387]. Overview and introductory articles can be found in [221], [372], [110], [51], [87], [491], [194],
[453], [29], [49], [492], [446], [275], [400], [534], [44]. Two special issues of IEEE Transactions have focused on
wavelet methods [101], [168]. Books on wavelets, some of which are edited conference proceedings, include
[271], [289], [334], [375], [377], [70], [72], [125], [531], [13], [450], [358], [379], [586], [383], [28], [461], [296], [335],
[175], [57], [176], [288], [559], [369], [553], [575], [496], [18], [261], [580], [269], [509], [13], [352], [510], [15], [433],
[73].
In this 2015 revision, we add several new references. An excellent collection of basic wavelet research
papers has been published by Heil and Walnut [251]; a very good modern signal processing book, which
is also available online, is written by Kovačević, Goyal, and Vetterli [297]. Stéphane Mallat has written a
comprehensive third revised edition of his book on wavelets [353]. New work on lifting can be found in [279],
[274], a general guide in [391], a book on frames in [66], and a new book on sampling in [173].
Another way to keep up with current research and results on wavelets is to read the Wavelet Digest on
the world-wide-web at: http://www.wavelet.org/ or the Rice DSP site at http://dsp.rice.edu/software
References
[AAEG89] F. Argoul, A. Arneodo, J. Elezgaray, and G. Grasseau. Wavelet transform of fractal aggregates.
Physics Letters A., 135:327336, March 1989.
[AAU96] Akram Aldroubi, Patrice Abry, and Michael Unser. Construction of biorthogonal wavelets
starting from any two multiresolutions. preprint, 1996.
[AH92] A. N. Akansu and R. A. Haddad. Multiresolution Signal Decomposition, Transforms, Subbands,
and Wavelets. Academic Press, San Diego, CA, 1992.
[AKM90] Louis Auslander, Tom Kailath, and Sanjoy K. Mitter, editors. Signal Processing, Part I: Signal
Processing Theory. Springer-Verlag, New York, 1990. IMA Volume 22, lectures from IMA program, July
1988.
[Ald96] A. Aldroubi. Oblique and Hierarchical Multiwavelet Bases. Technical Report, National Institutes
of Health, December 1996.
[Alp93] B. Alpert. A class of bases in l^2 for the sparse representation of integral operators. SIAM J.
Math. Analysis, 24, 1993.
[AS96] Ali N. Akansu and Mark J. T. Smith. Subband and Wavelet Transforms, Design and Applications.
Kluwer Academic Publishers, Boston, 1996.
[AU96] Akram Aldroubi and Michael Unser, editors. Wavelets in Medicine and Biology. CRC Press,
Boca Raton, 1996.
[Aus89] P. Auscher. Ondelettes fractales et applications. PhD thesis, 1989.
[AWW92] P. Auscher, G. Weiss, and M. V. Wickerhauser. Local sine and cosine bases of Coifman and
Meyer and the construction of smooth wavelets. In C. K. Chui, editor, Wavelets: A Tutorial in Theory
and Applications, pages 1551, Academic Press, 1992. Volume 2 in series on Wavelet Analysis and its
Applications.
[BBH93] J. N. Bradley, C. M. Brislawn, and T. Hopper. The FBI wavelet/scalar quantization standard
for gray-scale ngerprint image compression. In Visual Info. Process. II, SPIE, Orlando, FL, April 1993.
[BBOH96] C. M. Brislawn, J. N. Bradley, R. J. Onyshczak, and T. Hopper. The FBI compression
standard for digitized ngerprint images. In Proceedings of the SPIE Conference 2847, Applications of
Digital Image Processing XIX, 1996.
[BC92] S. Basu and C. Chiang. Complete parameterization of two dimensional orthonormal wavelets. In
Proceedings of IEEE-SP Symposium on Time-Frequency and Time-Scale Methods '92, Victoria, BC, IEEE,
1992.
[BCG94] Jonathan Berger, Ronald R. Coifman, and Maxim J. Goldberg. Removing noise from music
using local trigonometric bases and wavelet packets. Journal of the Audio Engineering Society, 42(10):808
817, October 1994.
[BCR91] G. Beylkin, R. R. Coifman, and V. Rokhlin. Fast wavelet transforms and numerical algorithms
I. Communications on Pure and Applied Mathematics, 44:141183, 1991.
[Coi90] R. R. Coifman. Wavelet analysis and signal processing. In Louis Auslander, Tom Kailath, and
Sanjoy K. Mitter, editors, Signal Processing, Part I: Signal Processing Theory, pages 59 68, Springer-Verlag,
New York, 1990. IMA vol. 22, lectures from IMA Program, summer 1988.
[Cro96] Matthew Crouse. Frame Robustness for De-Noising. Technical Report, EE 696 Course Report,
Rice University, Houston, Tx, May 1996.
[CS93] A. Cohen and Q. Sun. An arithmetic characterization of the conjugate quadrature filters associated
to orthonormal wavelet bases. SIAM Journal of Mathematical Analysis, 24(5):13551360, 1993.
[CT91] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, N.Y., 1991.
[CW90] Ronald R. Coifman and M. V. Wickerhauser. Best-Adapted Wave Packet Bases. Technical
Report, Math Dept., Yale University, New Haven, 1990.
[CW92] R. R. Coifman and M. V. Wickerhauser. Entropy-based algorithms for best basis selection. IEEE
Transaction on Information Theory, 38(2):713718, March 1992.
[Dau88a] Ingrid Daubechies. Orthonormal bases of compactly supported wavelets. Communications on
Pure and Applied Mathematics, 41:909996, November 1988.
[Dau88b] Ingrid Daubechies. Time-frequency localization operators: a geometric phase space approach.
IEEE Transactions on Information Theory, 34(4):605612, July 1988.
[Dau89] Ingrid Daubechies. Orthonormal bases of wavelets with finite support: connection with discrete
filters. In J. M. Combes, A. Grossman, and Ph. Tchamitchian, editors, Wavelets, Time-Frequency Methods
and Phase Space, pages 3866, Springer-Verlag, Berlin, 1989. Proceedings of International Colloquium on
Wavelets and Applications, Marseille, France, Dec. 1987.
[Dau90] Ingrid Daubechies. The wavelet transform, time-frequency localization and signal analysis. IEEE
Transaction on Information Theory, 36(5):9611005, September 1990. Also a Bell Labs Technical Report.
[Dau92] Ingrid Daubechies. Ten Lectures on Wavelets. SIAM, Philadelphia, PA, 1992. Notes from the
1990 CBMS-NSF Conference on Wavelets and Applications at Lowell, MA.
[Dau93] Ingrid Daubechies. Orthonormal bases of compactly supported wavelets II, variations on a theme.
SIAM Journal of Mathematical Analysis, 24(2):499519, March 1993.
[Dau96] Ingrid Daubechies. Where do wavelets come from? a personal point of view. Proceedings of
the IEEE, 84(4):510513, April 1996.
[DD87] G. Deslauriers and S. Dubuc. Interpolation dyadique. In G. Cherbit, editor, Fractals, Dimensions
Non Entières et Applications, pages 4445, Masson, Paris, 1987.
[DDO97] Wolfgang Dahmen, Andrew Kurdila, and Peter Oswald, editors. Multiscale Wavelet Methods
for Partial Differential Equations. Academic Press, San Diego, 1997. Volume 6 in the series: Wavelet
Analysis and its Applications.
[DFN*93] Special issue on wavelets and signal processing. IEEE Transactions on Signal Processing,
41(12):32133600, December 1993.
[DJ94a] David L. Donoho and Iain M. Johnstone. Ideal denoising in an orthonormal basis chosen from a
library of bases. C. R. Acad. Sci. Paris, Ser. I, 319, to appear 1994. Also Stanford Statistics Dept. Report
461, Sept. 1994.
[DJ94b] David L. Donoho and Iain M. Johnstone. Ideal spatial adaptation via wavelet shrinkage.
Biometrika, 81:425455, 1994. Also Stanford Statistics Dept. Report TR-400, July 1992.
[DJ95] David L. Donoho and Iain M. Johnstone. Adapting to unknown smoothness via wavelet shrinkage.
Journal of American Statist. Assn., to appear 1995. Also Stanford Statistics Dept. Report TR-425, June
1993.
[DJJ] Ingrid Daubechies, Stéphane Jaffard, and Jean-Lin Journé. A simple Wilson orthonormal basis
with exponential decay. preprint.
[DJKP95a] David L. Donoho, Iain M. Johnstone, Gérard Kerkyacharian, and Dominique Picard. Dis-
cussion of Wavelet Shrinkage: Asymptopia?. Journal Royal Statist. Soc. Ser B., 57(2):337
[DJKP95b] David L. Donoho, Iain M. Johnstone, Gérard Kerkyacharian, and Dominique Picard. Wavelet
shrinkage: asymptopia? Journal Royal Statistical Society B., 57(2):301337, 1995. Also Stanford Statistics
Dept. Report TR-419, March 1993.
[DL91] Ingrid Daubechies and Jeffrey C. Lagarias. Two-scale difference equations, part I. Existence
and global regularity of solutions. SIAM Journal of Mathematical Analysis, 22:13881410, 1991. From an
internal report, AT&T Bell Labs, Sept. 1988.
[DL92] Ingrid Daubechies and Jeffrey C. Lagarias. Two-scale difference equations, part II. Local regularity,
infinite products of matrices and fractals. SIAM Journal of Mathematical Analysis, 23:10311079, July 1992.
From an internal report, AT&T Bell Labs, Sept. 1988.
[DL93] R. DeVore and G. Lorentz. Constructive Approximation. Springer-Verlag, 1993.
[DM93] R. E. Van Dyck and T. G. Marshall, Jr. Ladder realizations of fast subband/vq coders with
diamond structures. In Proceedings of IEEE International Symposium on Circuits and Systems, pages
III:177180, ISCAS, 1993.
[DMW92] Special issue on wavelet transforms and multiresolution signal analysis. IEEE Transactions on
Information Theory, 38(2, part II):529924, March, part II 1992.
[Don93a] David L. Donoho. Nonlinear wavelet methods for recovery of signals, densities, and spectra from
indirect and noisy data. In Ingrid Daubechies, editor, Different Perspectives on Wavelets, I, pages 173205,
American Mathematical Society, Providence, 1993. Proceedings of Symposia in Applied Mathematics and
Stanford Report 437, July 1993.
[Don93b] David L. Donoho. Unconditional bases are optimal bases for data compression and for statistical
estimation. Applied and Computational Harmonic Analysis, 1(1):100115, December 1993. Also Stanford
Statistics Dept. Report TR-410, Nov. 1992.
[Don93c] David L. Donoho. Wavelet Shrinkage and W. V. D. A Ten Minute Tour. Technical Report
TR-416, Statistics Department, Stanford University, Stanford, CA, January 1993. Preprint.
[Don94] David L. Donoho. On minimum entropy segmentation. In C. K. Chui, L. Montefusco, and L.
Puccio, editors, Wavelets: Theory, Algorithms, and Applications, Academic Press, San Diego, 1994. Also
Stanford Tech Report TR-450, 1994; Volume 5 in the series: Wavelet Analysis and its Applications.
[Don95] David L. Donoho. De-noising by soft-thresholding. IEEE Transactions on Information Theory,
41(3):613627, May 1995. also Stanford Statistics Dept. report TR-409, Nov. 1992.
[Donar] David L. Donoho. Interpolating wavelet transforms. Applied and Computational Harmonic
Analysis, to appear. Also Stanford Statistics Dept. report TR-408, Nov. 1992.
[DS52] R. J. Duffin and R. C. Schaeffer. A class of nonharmonic Fourier series. Transactions of the
American Mathematical Society, 72:341366, 1952.
[DS83] J. E. Dennis and R. B. Schnabel. Numerical Methods for Unconstrained Optimization and
Nonlinear Equations. Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1st edition, 1983.
[DS96a] Ingrid Daubechies and Wim Sweldens. Factoring Wavelet Transforms into Lifting Steps. Tech-
nical Report, Princeton and Lucent Technologies, NJ, September 1996. Preprint.
[DS96b] T. R. Downie and B. W. Silverman. The Discrete Multiple Wavelet Transform and Threshold-
ing Methods. Technical Report, University of Bristol, November 1996. Submitted to IEEE Tran. Signal
Processing.
[Dut89] P. Dutilleux. An implementation of the algorithme à trous to compute the wavelet transform.
In J. M. Combes, A. Grossmann, and Ph. Tchamitchian, editors, Wavelets, Time-Frequency Methods and
Phase Space, pages 220, Springer-Verlag, Berlin, 1989. Proceedings of International Colloquium on Wavelets
and Applications, Marseille, France, Dec. 1987.
[DVN88] Z. Doğanata, P. P. Vaidyanathan, and T. Q. Nguyen. General synthesis procedures for FIR
lossless transfer matrices, for perfect-reconstruction multirate filter bank applications. IEEE Transactions
on Acoustics, Speech, and Signal Processing, 36(10):15611574, October 1988.
[Eir92] T. Eirola. Sobolev characterization of solutions of dilation equations. SIAM Journal of Mathe-
matical Analysis, 23(4):10151030, July 1992.
[FHV93] M. Farge, J. C. R. Hunt, and J. C. Vassilicos, editors. Wavelets, Fractals, and Fourier Transforms.
Clarendon Press, Oxford, 1993. Proceedings of a conference on Wavelets at Newnham College, Cambridge,
Dec. 1990.
[FK94] E. Foufoula-Georgiou and Praveen Kumar, editors. Wavelets in Geophysics. Academic Press, San
Diego, 1994. Volume 4 in the series: Wavelet Analysis and its Applications.
[Fli94] F. J. Fliege. Multirate Digital Signal Processing: Multirate Systems, Filter Banks, and Wavelets.
Wiley & Sons, New York, 1994.
[Gab46] D. Gabor. Theory of communication. Journal of the Institute for Electrical Engineers, 93:429
439, 1946.
[GB00] Haitao Guo and C. Sidney Burrus. Fast approximate Fourier transform via wavelet transforms.
IEEE Transactions, to be submitted 2000.
[GB90] R. A. Gopinath and C. S. Burrus. Efficient computation of the wavelet transforms. In Proceed-
ings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 15991602,
Albuquerque, NM, April 1990.
[GB92a] R. A. Gopinath and C. S. Burrus. Cosine-modulated orthonormal wavelet bases. In Paper
Summaries for the IEEE Signal Processing Society's Fifth DSP Workshop , page 1.10.1, Starved Rock
Lodge, Utica, IL, September 1316, 1992.
[GB92b] R. A. Gopinath and C. S. Burrus. On the moments of the scaling function ψ 0 . In Proceedings
of the IEEE International Symposium on Circuits and Systems, pages 963966, ISCAS-92, San Diego, CA,
May 1992.
[GB92c] R. A. Gopinath and C. S. Burrus. Wavelet transforms and lter banks. In Charles K. Chui,
editor, Wavelets: A Tutorial in Theory and Applications, pages 603655, Academic Press, San Diego, CA,
1992. Volume 2 in the series: Wavelet Analysis and its Applications.
[GB93] R. A. Gopinath and C. S. Burrus. Theory of modulated filter banks and modulated wavelet tight
frames. In Proceedings of the IEEE International Conference on Signal Processing, pages III169172, IEEE
ICASSP-93, Minneapolis, April 1993.
[GB94a] R. A. Gopinath and C. S. Burrus. On upsampling, downsampling and rational sampling rate
filter banks. IEEE Transactions on Signal Processing, April 1994. Also Tech. Report No. CML TR91-25,
1991.
[GB94b] R. A. Gopinath and C. S. Burrus. Unitary FIR filter banks and symmetry. IEEE Transaction
on Circuits and Systems II, 41:695700, October 1994. Also Tech. Report No. CML TR92- 17, August 1992.
[GB95a] R. A. Gopinath and C. S. Burrus. Factorization approach to unitary time-varying filter banks.
IEEE Transactions on Signal Processing, 43(3):666680, March 1995. Also a Tech Report No. CML TR-92-
23, Nov. 1992.
[GB95b] R. A. Gopinath and C. S. Burrus. Theory of modulated filter banks and modulated wavelet
tight frames. Applied and Computational Harmonic Analysis: Wavelets and Signal Processing, 2:303326,
October 1995. Also a Tech. Report No. CML TR-92-10, 1992.
[GB95c] Ramesh A. Gopinath and C. Sidney Burrus. On cosine-modulated wavelet orthonormal bases.
IEEE Transactions on Image Processing, 4(2):162176, February 1995. Also a Tech. Report No. CML
TR-91-27, March 1992.
[GB96a] Haitao Guo and C. Sidney Burrus. Approximate FFT via the discrete wavelet transform. In
Proceedings of SPIE Conference 2825, Denver, August 69 1996.
[GB96b] Haitao Guo and C. Sidney Burrus. Convolution using the discrete wavelet transform. In
Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pages III
12911294, IEEE ICASSP-96, Atlanta, May 710 1996.
[GB96c] Haitao Guo and C. Sidney Burrus. Phase-preserving compression of seismic images using the
self-adjusting wavelet transform. In NASA Combined Industry, Space and Earth Science Data Compression
Workshop (in conjunction with the IEEE Data Compression Conference, DCC-96), JPL Pub. 96-11, pages
101109, Snowbird, Utah, April 4 1996.
[GB97a] Haitao Guo and C. Sidney Burrus. Waveform and image compression with the Burrows Wheeler
transform and the wavelet transform. In Proceedings of the IEEE International Conference on Image Pro-
cessing, pages I:6568, IEEE ICIP-97, Santa Barbara, October 26-29 1997.
[GB97b] Haitao Guo and C. Sidney Burrus. Wavelet transform based fast approximate Fourier transform.
In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pages
III:19731976, IEEE ICASSP-97, Munich, April 2124 1997.
[GGM84a] P. Goupillaud, A. Grossman, and J. Morlet. Cyclo-octave and related transforms in seismic
signal analysis. SIAM J. Math. Anal., 15:723736, 1984.
[GGM84b] P. Groupillaud, A. Grossman, and J. Morlet. Cyclo-octave and related transforms in seismic
signal analysis. Geoexploration, (23), 1984.
[GGT89] S. Ginette, A. Grossmann, and Ph. Tchamitchian. Use of wavelet transforms in the study of
propagation of transient acoustic signals across a plane interface between two homogeneous media. In J.
M. Combes, A. Grossmann, and Ph. Tchamitchian, editors, Wavelets: Time-Frequency Methods and Phase
Space, pages 139146, Springer-Verlag, Berlin, 1989. Proceedings of the International Conference, Marseille,
Dec. 1987.
[GHM94] J. S. Geronimo, D. P. Hardin, and P. R. Massopust. Fractal functions and wavelet expansions
based on several scaling functions. Journal of Approximation Theory, 78:373401, 1994.
[GKM89] A. Grossman, R. KronlandMartinet, and J. Morlet. Reading and understanding continuous
wavelet transforms. In J. M. Combes, A. Grossmann, and Ph. Tchamitchian, editors, Wavelets, Time-
Frequency Methods and Phase Space, pages 220, Springer-Verlag, Berlin, 1989. Proceedings of International
Colloquium on Wavelets and Applications, Marseille, France, Dec. 1987.
[GL93] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, 1993.
[GL94] T. N. T. Goodman and S. L. Lee. Wavelets of multiplicity r. Tran. American Math. Society,
342(1):307324, March 1994.
[GLOB95] H. Guo, M. Lang, J. E. Odegard, and C. S. Burrus. Nonlinear processing of a shift-invariant
DWT for noise reduction and compression. In Proceedings of the International Conference on Digital Signal
Processing, pages 332337, Limassol, Cyprus, June 2628 1995.
[GLRT90] R. Glowinski, W. Lawton, M. Ravachol, and E. Tenenbaum. Wavelet solution of linear and
nonlinear elliptic, parabolic and hyperbolic problems in one dimension. In Proceedings of the Ninth SIAM
International Conference on Computing Methods in Applied Sciences and Engineering, Philadelphia, 1990.
[GLT93] T. N. T. Goodman, S. L. Lee, and W. S. Tang. Wavelets in wandering subspaces. Tran.
American Math. Society, 338(2):639654, August 1993.
[GOB92] R. A. Gopinath, J. E. Odegard, and C. S. Burrus. On the correlation structure of multiplicity
M-scaling functions and wavelets. In Proceedings of the IEEE International Symposium on Circuits and
Systems, pages 959962, ISCAS-92, San Diego, CA, May 1992.
[GOB94] R. A. Gopinath, J. E. Odegard, and C. S. Burrus. Optimal wavelet representation of signals
and the wavelet sampling theorem. IEEE Transactions on Circuits and Systems II, 41(4):262277, April
1994. Also a Tech. Report No. CML TR-92-05, April 1992, revised Aug. 1993.
[GOL*94a] H. Guo, J. E. Odegard, M. Lang, R. A. Gopinath, I. Selesnick, and C. S. Burrus. Speckle
reduction via wavelet soft-thresholding with application to SAR based ATD/R. In Proceedings of SPIE
Conference 2260, San Diego, July 1994.
[GOL*94b] H. Guo, J. E. Odegard, M. Lang, R. A. Gopinath, I. W. Selesnick, and C. S. Burrus. Wavelet
based speckle reduction with application to SAR based ATD/R. In Proceedings of the IEEE International
Conference on Image Processing, pages I:7579, IEEE ICIP-94, Austin, Texas, November 13-16 1994.
[Gop90] Ramesh A. Gopinath. The Wavelet Transforms and Time-Scale Analysis of Signals. Master's
thesis, Rice University, Houston, Tx 77251, 1990.
[Gop92] Ramesh A. Gopinath. Wavelets and Filter Banks New Results and Applications. PhD thesis,
Rice University, Houston, Tx, August 1992.
[Gop96a] R. A. Gopinath. Modulated lter banks and local trigonometric bases - some connections. Oct
1996. in preparation.
[Gop96b] R. A. Gopinath. Modulated lter banks and wavelets, a general unied theory. In Proceedings
of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 15851588, IEEE
ICASSP-96, Atlanta, May 710 1996.
[GORB96] J. Götze, J. E. Odegard, P. Rieder, and C. S. Burrus. Approximate moments and regularity of
efficiently implemented orthogonal wavelet transforms. In Proceedings of the IEEE International Symposium
on Circuits and Systems, pages II405408, IEEE ISCAS-96, Atlanta, May 12-14 1996.
[Gri93] Gustaf Gripenberg. Unconditional bases of wavelets for Sobolev spaces. SIAM Journal of Math-
ematical Analysis, 24(4):10301042, July 1993.
[Guo94] Haitao Guo. Redundant Wavelet Transform and Denoising. Technical Report CML-9417, ECE
Dept and Computational Mathematics Lab, Rice University, Houston, Tx, December 1994.
[Guo95] Haitao Guo. Theory and Applications of the Shift-Invariant, Time-Varying and Undecimated
Wavelet Transform. Master's thesis, ECE Department, Rice University, April 1995.
[Guo97] Haitao Guo. Wavelets for Approximate Fourier Transform and Data Compression. PhD thesis,
ECE Department, Rice University, Houston, Tx, May 1997.
[Haa10] Alfred Haar. Zur Theorie der orthogonalen Funktionensysteme. Mathematische Annalen, 69:331
371, 1910. Also in PhD thesis.
[HB92] F. Hlawatsch and G. F. Boudreaux-Bartels. Linear and quadratic time-frequency signal repre-
sentations. IEEE Signal Processing Magazine, 9(2):2167, April 1992.
[Hei93] Henk J. A. M. Heijmans. Discrete wavelets and multiresolution analysis. In Tom H. Koornwinder,
editor, Wavelets: An Elementary Treatment of Theory and Applications, pages 4980, World Scientic,
Singapore, 1993.
[Hel95] Peter N. Heller. Rank m wavelet matrices with n vanishing moments. SIAM Journal on Matrix
Analysis, 16:502518, 1995. Also as technical report AD940123, Aware, Inc., 1994.
[Her71] O. Herrmann. On the approximation problem in nonrecursive digital filter design. IEEE Trans-
actions on Circuit Theory, 18:411413, May 1971. Reprinted in DSP reprints, IEEE Press, 1972, page
202.
[HHSM95] A. N. Hossen, U. Heute, O. V. Shentov, and S. K. Mitra. Subband DFT Part II: accuracy,
complexity, and applications. Signal Processing, 41:279295, 1995.
[HKRV92] Cormac Herley, Jelena Kovačević, Kannan Ramchandran, and Martin Vetterli. Time-varying
orthonormal tilings of the time-frequency plane. In Proceedings of the IEEE Signal Processing Society's
International Symposium on TimeFrequency and TimeScale Analysis, pages 1114, Victoria, BC, Canada,
October 46, 1992.
[HKRV93] Cormac Herley, Jelena Kovačević, Kannan Ramchandran, and Martin Vetterli. Tilings of
the time-frequency plane: construction of arbitrary orthogonal bases and fast tiling algorithms. IEEE
Transactions on Signal Processing, 41(12):33413359, December 1993. Special issue on wavelets.
[HPW94] Frédéric Heurtaux, Fabrice Planchon, and Mladen V. Wickerhauser. Scale decomposition
in Burgers' equation. In John J. Benedetto and Michael W. Frazier, editors, Wavelets: Mathematics and
Applications, pages 505524, CRC Press, Boca Raton, 1994.
[HR96] D. P. Hardin and D. W. Roach. Multiwavelet Prefilters I: Orthogonal prefilters preserving ap-
proximation order p ≤ 2. Technical Report, Vanderbilt University, 1996.
[HRW92] Peter N. Heller, Howard L. Resnikoff, and Raymond O. Wells, Jr. Wavelet matrices and
the representation of discrete functions. In Charles K. Chui, editor, Wavelets: A Tutorial in Theory and
Applications, pages 1550, Academic Press, Boca Raton, 1992. Volume 2 in the series: Wavelet Analysis
and its Applications.
[HSS*95] P. N. Heller, V. Strela, G. Strang, P. Topiwala, C. Heil, and L. S. Hills. Multiwavelet filter
banks for data compression. In IEEE Proceedings of the International Symposium on Circuits and Systems,
pages 17961799, 1995.
[HSS96] C. Heil, G. Strang, and V. Strela. Approximation by translates of refinable functions. Numerische
Mathematik, 73(1):7594, March 1996.
[Hub96] Barbara Burke Hubbard. The World According to Wavelets. A K Peters, Wellesley, MA, 1996.
Second Edition 1998.
[HW89] C. E. Heil and D. F. Walnut. Continuous and discrete wavelet transforms. SIAM Review,
31(4):628666, December 1989.
[HW94] Peter N. Heller and R. O. Wells. The spectral theory of multiresolution operators and applica-
tions. In C. K. Chui, L. Montefusco, and L. Puccio, editors, Wavelets: Theory, Algorithms, and Applications,
pages 1331, Academic Press, San Diego, 1994. Also as technical report AD930120, Aware, Inc., 1993; Vol-
ume 5 in the series: Wavelet Analysis and its Applications.
[HW96a] Peter N. Heller and R. O. Wells. Sobolev regularity for rank M wavelets. SIAM Journal on
Mathematical Analysis, submitted, Oct. 1996. Also a CML Technical Report TR9608, Rice University,
1994.
[HW96b] Eugenio Hernández and Guido Weiss. A First Course on Wavelets. CRC Press, Boca Raton,
1996.
[HW06] Christopher Heil and David F. Walnut. Fundamental Papers in Wavelet Theory . Princeton
University Press, 2006.
[IRP*96] Plamen Ch. Ivanov, Michael G Rosenblum, C.-K. Peng, Joseph Mietus, Shlomo Havlin, H.
Eugene Stanley, and Ary L. Goldberger. Scaling behaviour of heartbeat intervals obtained by wavelet-based
time-series analysis. Nature, 383:323327, September 26 1996.
[JB82] H. W. Johnson and C. S. Burrus. The design of optimal DFT algorithms using dynamic program-
ming. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing,
pages 2023, Paris, May 1982.
[JCF95] R. L. Joshi, V. J. Crump, and T. R. Fischer. Image subband coding using arithmetic coded
trellis coded quantization. IEEE Transactions on Circuits and Systems, 515523, December 1995.
[Jia95] R. Q. Jia. Subdivision schemes in Lp spaces. Advances in Computational Mathematics, 3:309341,
1995.
[JMNK96] B. R. Johnson, J. P. Modisette, P. A. Nordlander, and J. L. Kinsey. Quadrature integration
for compact support wavelets. Journal of Computational Physics, submitted 1996. Also Rice University
Tech. Report.
[JN84] N. S. Jayant and P. Noll. Digital Coding of Waveforms. Prentice-Hall, Inc., Englewood Cliffs,
NJ, 1st edition, 1984.
[JRZ96a] R. Q. Jia, S. D. Riemenschneider, and D. X. Zhou. Approximation by Multiple Refinable
Functions. Technical Report, University of Alberta, 1996. To appear in: Canadian Journal of Mathematics.
[JRZ96b] R. Q. Jia, S. D. Riemenschneider, and D. X. Zhou. Vector Subdivision Schemes and Multiple
Wavelets. Technical Report, University of Alberta, 1996.
[JRZ97] R. Q. Jia, S. D. Riemenschneider, and D. X. Zhou. Smoothness of Multiple Refinable Functions
and Multiple Wavelets. Technical Report, University of Alberta, 1997.
[JS94a] Björn Jawerth and Wim Sweldens. An overview of wavelet based multiresolution analyses. SIAM
Review, 36:377412, 1994. Also a University of South Carolina Math Dept. Technical Report, Feb. 1993.
[JS94b] I. M. Johnstone and B. W. Silverman. Wavelet Threshold Estimators for Data with Correlated
Noise. Technical Report, Statistics Dept., University of Bristol, September 1994.
[Kai94] G. Kaiser. A Friendly Guide to Wavelets. Birkhäuser, Boston, 1994.
[KMDW95] H. Krim, S. Mallat, D. Donoho, and A. Willsky. Best basis algorithm for signal enhancement.
In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pages
15611564, IEEE ICASSP-95 Detroit, May 1995.
[Koo93] Tom H. Koornwinder, editor. Wavelets: An Elementary Treatment of Theory and Applications.
World Scientic, Singapore, 1993.
[KR90] G. Kaiser and M. B. Ruskai, editors. NSF/CMBS Regional Conference on Wavelets. University
of Lowell, MA, June 11 - 15, 1990. Speakers: I.Daubechies, G.Beylkin, R.Coifman, S.Mallat, M.Vetterli,
Aware, Inc.
[KS92] A. A. A. C. Kalker and Imran Shah. Ladder structures for multidimensional linear phase perfect
reconstruction filter banks and wavelets. In Proceedings of SPIE Conference 1818 on Visual Communications
and Image Processing, 1992.
[KT93] Jaroslav Kautsky and Radko Turcajová. Discrete biorthogonal wavelet transforms as block
circulant matrices. Linear Algebra and its Applications, submitted 1993.
[KT94a] Jaroslav Kautsky and Radko Turcajová. A matrix approach to discrete wavelets. In Charles K.
Chui, Laura Montefusco, and Luigia Puccio, editors, Wavelets: Theory, Algorithms, and Applications, pages
117136, Academic Press, Boca Raton, 1994. Volume 5 in the series: Wavelet Analysis and its Applications.
[KT94b] Man Kam Kwong and P. T. Peter Tang. W-matrices, nonorthogonal multiresolution analysis,
and nite signals of arbitrary length. preprint, 1994.
[KV92] R. D. Koilpillai and P. P. Vaidyanathan. Cosine modulated FIR filter banks satisfying perfect
reconstruction. IEEE Transactions on Signal Processing, 40(4):770783, April 1992.
[Kwo94] Man Kam Kwong. Matlab implementation of W-matrix multiresolution analyses. Preprint
MCS-P462-0894, September 1994.
[Law] W. Lawton. Private communication.
[Law90] Wayne M. Lawton. Tight frames of compactly supported affine wavelets. Journal of Mathemat-
ical Physics, 31(8):18981901, August 1990. Also Aware, Inc. Tech Report AD891012.
[Law91a] Wayne M. Lawton. Multiresolution properties of the wavelet Galerkin operator. Journal of
Mathematical Physics, 32(6):14401443, June 1991.
[Law91b] Wayne M. Lawton. Necessary and sufficient conditions for constructing orthonormal wavelet
bases. Journal of Mathematical Physics, 32(1):5761, January 1991. Also Aware, Inc. Tech. Report
AD900402, April 1990.
[Law97] Wayne M. Lawton. Infinite convolution products & refinable distributions on Lie groups. Trans-
actions of the American Mathematical Society, submitted 1997.
[LGO*95] M. Lang, H. Guo, J. E. Odegard, C. S. Burrus, and R. O. Wells, Jr. Nonlinear processing of
a shift-invariant DWT for noise reduction. In Harold H. Szu, editor, Proceedings of SPIE Conference 2491,
Wavelet Applications II, pages 640651, Orlando, April 1721 1995.
[LGO*96] M. Lang, H. Guo, J. E. Odegard, C. S. Burrus, and R. O. Wells, Jr. Noise reduction using an
undecimated discrete wavelet transform. IEEE Signal Processing Letters, 3(1):1012, January 1996.
[LGOB95] M. Lang, H. Guo, J. E. Odegard, and C. S. Burrus. Nonlinear redundant wavelet methods for
image enhancement. In Proceedings of IEEE Workshop on Nonlinear Signal and Image Processing, pages
754757, Halkidiki, Greece, June 2022 1995.
[LH96] Markus Lang and Peter N. Heller. The design of maximally smooth wavelets. In Proceedings of
the IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 14631466, IEEE
ICASSP-96, Atlanta, May 1996.
[Lin95] A. R. Lindsey. Generalized Orthogonally Multiplexed Communication via Wavelet Packet Bases.
PhD thesis, June 1995.
[LKC*94] R. E. Learned, H. Krim, B. Claus, A. S. Willsky, and W. C. Karl. Wavelet-packet-based
multiple access communications. In Proceedings of SPIE Conference, Wavelet Applications in Signal and
Image Processing, Vol. 2303, pages 246259, San Diego, July 1994.
[LLK96] K.-C. Lian, J. Li, and C.-C. J. Kuo. Image compression with embedded multiwavelet coding.
In Proceedings of SPIE, Wavelet Application III, pages 165176, Orlando, FL, April 1996.
[LLS97a] Wayne M. Lawton, S. L. Lee, and Z. Shen. Convergence of multidimensional cascade algorithm.
Numerische Mathematik, to appear 1997.
[LLS97b] Wayne M. Lawton, S. L. Lee, and Z. Shen. Stability and orthonormality of multivariate renable
functions. SIAM Journal of Mathematical Analysis, to appear 1997.
[LM89] J. L. Larsonneur and J. Morlet. Wavelet and seismic interpretation. In J. M. Combes, A.
Grossmann, and Ph. Tchamitchian, editors, Wavelets: Time-Frequency Methods and Phase Space, pages
126131, Springer-Verlag, Berlin, 1989. Proceedings of the International Conference, Marseille, Dec. 1987.
[LP89] G. Longo and B. Picinbono, editors. Time and Frequency Representation of Signals and Systems.
Springer-Verlag, Wien New York, 1989. CISM Courses and Lectures No. 309.
[LP94] Jie Liang and Thomas W. Parks. A two-dimensional translation invariant wavelet representation
and its applications. In Proceedings of the IEEE International Conference on Image Processing, pages
I:6670, Austin, November 1994.
[LP96] Jie Liang and Thomas W. Parks. A translation invariant wavelet representation algorithm with
applications. IEEE Transactions on Signal Processing, 44(2):225232, 1996.
[LPT92] J. Liandrat, V. Perrier, and Ph. Tchamitchian. Numerical resolution of nonlinear partial
differential equations using the wavelet approach. In M. B. Ruskai, G. Beylkin, I. Daubechies, Y. Meyer,
R. Coifman, S. Mallat, and L Raphael, editors, Wavelets and Their Applications, pages 181210, Jones and
Bartlett, Boston, 1992. Outgrowth of the NSF/CBMS Conference on Wavelets, Lowell, June 1990.
[LR91] Wayne M. Lawton and Howard L. Resnikoff. Multidimensional Wavelet Bases. Aware Report
AD910130, Aware, Inc., February 1991.
[LRO97] S. M. LoPresto, K. Ramchandran, and M. T. Orchard. Image coding based on mixture modeling
of wavelet coefficients and a fast estimation-quantization framework. Proc. DCC, March 1997.
[LSOB94] M. Lang, I. Selesnick, J. E. Odegard, and C. S. Burrus. Constrained FIR filter design for 2-
band filter banks and orthonormal wavelets. In Proceedings of the IEEE Digital Signal Processing Workshop,
pages 211214, Yosemite, October 1994.
[LV95] Yuan-Pei Lin and P. P. Vaidyanathan. Linear phase cosine-modulated filter banks. IEEE Trans-
actions on Signal Processing, 43, 1995.
[Mal89a] S. G. Mallat. Multifrequency channel decomposition of images and wavelet models. IEEE
Transactions on Acoustics, Speech and Signal Processing, 37:20912110, December 1989.
[Mal89b] S. G. Mallat. Multiresolution approximation and wavelet orthonormal bases of L2 . Transactions
of the American Mathematical Society, 315:6987, 1989.
[Mal89c] S. G. Mallat. A theory for multiresolution signal decomposition: the wavelet representation.
IEEE Transactions on Pattern Recognition and Machine Intelligence, 11(7):674693, July 1989.
[Mal91] S. G. Mallat. Zero-crossings of a wavelet transform. IEEE Transactions on Information Theory,
37(4):10191033, July 1991.
[Mal92] Henrique S. Malvar. Signal Processing with Lapped Transforms. Artech House, Boston, MA,
1992.
[Mal98] Stephane Mallat. A Wavelet Tour of Signal Processing. Academic Press, 1998.
[Mal09] Stephane Mallat. A Wavelet Tour of Signal Processing: The Sparse Way. Third Revised Edition,
Academic Press, 2009.
[Mar91] R. J. Marks II. Introduction to Shannon Sampling and Interpolation Theory. Springer-Verlag,
New York, 1991.
[Mar92] T. G. Marshall, Jr. Predictive and ladder realizations of subband coders. In Proceedings of
IEEE Workshop on Visual Signal Processing and Communication, Raleigh, NC, 1992.
[Mar93] T. G. Marshall, Jr. A fast wavelet transform based on the Euclidean algorithm. In Proceedings of Conference on Information Sciences and Systems, Johns Hopkins University, 1993.
[Mas94] Peter R. Massopust. Fractal Functions, Fractal Surfaces, and Wavelets. Academic Press, San
Diego, 1994.
[Mau92] J. Mau. Perfect reconstruction modulated filter banks. In Proc. Int. Conf. Acoust., Speech, Signal Processing, pages IV273, IEEE, San Francisco, CA, 1992.
[Mey87] Y. Meyer. L'analyse par ondelettes. Pour la Science, September 1987.
[Mey89] Y. Meyer. Orthonormal wavelets. In J. M. Combes, A. Grossmann, and Ph. Tchamitchian, editors, Wavelets, Time-Frequency Methods and Phase Space, pages 21–37, Springer-Verlag, Berlin, 1989. Proceedings of International Colloquium on Wavelets and Applications, Marseille, France, Dec. 1987.
[Mey90] Y. Meyer. Ondelettes et opérateurs. Hermann, Paris, 1990.
[Mey92a] Y. Meyer, editor. Wavelets and Applications. Springer-Verlag, Berlin, 1992. Proceedings of
the Marseille Workshop on Wavelets, France, May, 1989; Research Notes in Applied Mathematics, RMA-20.
[Mey92b] Yves Meyer. Wavelets and Operators. Cambridge, Cambridge, 1992. Translated by D. H.
Salinger from the 1990 French edition.
[Mey93] Yves Meyer. Wavelets, Algorithms and Applications. SIAM, Philadelphia, 1993. Translated by
R. D. Ryan based on lectures given for the Spanish Institute in Madrid in Feb. 1991.
[MF97] Stéphane Mallat and Frédéric Falzon. Understanding image transform codes. In Proceedings of SPIE Conference, Aerosense, Orlando, April 1997.
[MLB89] Cleve Moler, John Little, and Steve Bangert. Matlab User's Guide. The MathWorks, Inc.,
South Natick, MA, 1989.
[MMOP96] Michel Misiti, Yves Misiti, Georges Oppenheim, and Jean-Michel Poggi. Wavelet Toolbox
User's Guide. The MathWorks, Inc., Natick, MA, 1996.
[Mou93] Pierre Moulin. A wavelet regularization method for diffuse radar-target imaging and speckle-noise reduction. Journal of Mathematical Imaging and Vision, 3:123–134, 1993.
[MP89] C. A. Micchelli and H. Prautzsch. Uniform refinement of curves. Linear Algebra and its Applications, 114/115:841–870, 1989.
[MW94a] Stephen Del Marco and John Weiss. Improved transient signal detection using a wavepacket-
based detector with an extended translation-invariant wavelet transform. IEEE Transactions on Signal
Processing, 43, submitted 1994.
[MW94b] Stephen Del Marco and John Weiss. M-band wavepacket-based transient signal detector using a translation-invariant wavelet transform. Optical Engineering, 33(7):2175–2182, July 1994.
[MWJ94] Stephen Del Marco, John Weiss, and Karl Jagler. Wavepacket-based transient signal detector using a translation invariant wavelet transform. In Proceedings of Conference on Wavelet Applications, pages 792–802, SPIE, Orlando, FL, April 1994.
[MZ93] S. G. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397–3415, December 1993.
[NAT96] Mohammed Nafie, Murtaza Ali, and Ahmed Tewfik. Optimal subset selection for adaptive signal representation. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 2511–2514, IEEE ICASSP-96, Atlanta, May 1996.
[Naj12] Amir-Homayoon Najmi. Wavelets: A Concise Guide. The Johns Hopkins University Press, 2012.
[Ngu92] T. Q. Nguyen. A class of generalized cosine-modulated filter banks. In Proceedings of ISCAS, San Diego, CA, pages 943–946, IEEE, 1992.
[Ngu94] Trong Q. Nguyen. Near perfect reconstruction pseudo QMF banks. IEEE Transactions on Signal Processing, 42(1):65–76, January 1994.
[Ngu95a] Trong Q. Nguyen. Digital filter banks design quadratic constrained formulation. IEEE Transactions on Signal Processing, 43(9):2103–2108, September 1995.
[Ngu95b] Truong Q. Nguyen. Aliasing-free reconstruction filter banks. In Wai-Kai Chen, editor, The Circuits and Filters Handbook, chapter 85, pages 2682–2717, CRC Press and IEEE Press, Boca Raton, 1995.
[NH96] Truong Q. Nguyen and Peter N. Heller. Biorthogonal cosine-modulated filter bank. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 1471–1474, IEEE ICASSP-96, Atlanta, May 1996.
[NK92] T. Q. Nguyen and R. D. Koilpillai. The design of arbitrary length cosine-modulated filter banks and wavelets satisfying perfect reconstruction. In Proceedings of IEEE-SP Symposium on Time-Frequency and Time-Scale Methods '92, Victoria, BC, pages 299–302, IEEE, 1992.
[nRGB91] D. L. Jones and R. G. Baraniuk. Efficient approximation of continuous wavelet transforms. Electronics Letters, 27(9):748–750, 1991.
[NS95] G. P. Nason and B. W. Silverman. The Stationary Wavelet Transform and some Statistical
Applications. Technical Report, Department of Mathematics, University of Bristol, Bristol, UK, February
1995. preprint obtained via the internet.
[NV88] T. Q. Nguyen and P. P. Vaidyanathan. Maximally decimated perfect-reconstruction FIR filter banks with pairwise mirror-image analysis and synthesis frequency responses. IEEE Transactions on Acoustics, Speech, and Signal Processing, 36(5):693–706, 1988.
[OB95] J. E. Odegard and C. S. Burrus. Design of near-orthogonal filter banks and wavelets by Lagrange multipliers. 1995.
[OB96a] Jan E. Odegard and C. Sidney Burrus. New class of wavelets for signal approximation. In Proceedings of the IEEE International Symposium on Circuits and Systems, pages II:189–192, IEEE ISCAS-96, Atlanta, May 12–15 1996.
[OB96b] Jan E. Odegard and C. Sidney Burrus. Toward a new measure of smoothness for the design of wavelet basis. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pages III:1467–1470, IEEE ICASSP-96, Atlanta, May 7–10 1996.
[OB97] J. E. Odegard and C. S. Burrus. Wavelets with new moment approximation properties. IEEE Transactions on Signal Processing, to be submitted 1997.
[Ode96] J. E. Odegard. Moments, smoothness and optimization of wavelet systems. PhD thesis, Rice University, Houston, TX 77251, USA, May 1996.
[OGB92] J. E. Odegard, R. A. Gopinath, and C. S. Burrus. Optimal wavelets for signal decomposition and the existence of scale limited signals. In Proceedings of the IEEE International Conference on Signal Processing, pages IV:597–600, ICASSP-92, San Francisco, CA, March 1992.
[OGB94] J. E. Odegard, R. A. Gopinath, and C. S. Burrus. Design of Linear Phase Cosine Modulated
Filter Banks for Subband Image Compression. Technical Report CML TR94-06, Computational Mathematics
Laboratory, Rice University, Houston, TX, February 1994.
[OGL*95] J. E. Odegard, H. Guo, M. Lang, C. S. Burrus, R. O. Wells, Jr., L. M. Novak, and M. Hiett. Wavelet based SAR speckle reduction and image compression. In Proceedings of SPIE Conference 2487, Algorithms for SAR Imagery II, Orlando, April 17–21 1995.
[OS89] A. V. Oppenheim and R. W. Schafer. Discrete-Time Signal Processing. Prentice-Hall, Englewood Cliffs, NJ, 1989.
[OY87] Özdoğan Yilmaz. Seismic Data Processing. Society of Exploration Geophysicists, Tulsa, 1987. Stephen M. Doherty, editor.
[P P89] P. P. Vaidyanathan and Z. Doğanata. The role of lossless systems in modern digital signal processing: a tutorial. IEEE Transactions on Education, August 1989.
[Pap77] Athanasios Papoulis. Signal Analysis. McGraw-Hill, New York, 1977.
[PB87] T. W. Parks and C. S. Burrus. Digital Filter Design. John Wiley & Sons, New York, 1987.
[Newer version in OpenStax: https://legacy.cnx.org/content/col10598/latest/]
[PKC96] J. C. Pesquet, H. Krim, and H. Carfantan. Time-invariant orthonormal wavelet representations. IEEE Transactions on Signal Processing, 44(8):1964–1970, August 1996.
[Plo95a] G. Plonka. Approximation order provided by refinable function vectors. Technical Report 95/1, Universität Rostock, 1995. To appear in: Constructive Approximation.
[Plo95b] G. Plonka. Approximation properties of multi-scaling functions: a Fourier approach. 1995. Rostock. Math. Kolloq. 49, 115–126.
[Plo95c] G. Plonka. Factorization of refinement masks of function vectors. In C. K. Chui and L. L. Schumaker, editors, Wavelets and Multilevel Approximation, pages 317–324, World Scientific Publishing Co., Singapore, 1995.
[Plo97a] G. Plonka. Necessary and sufficient conditions for orthonormality of scaling vectors. Technical Report, Universität Rostock, 1997.
[Plo97b] G. Plonka. On stability of scaling vectors. In A. Le Mehaute, C. Rabut, and L. L. Schumaker, editors, Surface Fitting and Multiresolution Methods, Vanderbilt University Press, Nashville, 1997. Also Technical Report 1996/15, Universität Rostock.
[Polar] D. Pollen. Daubechies' scaling function on [0,3]. J. American Math. Soc., to appear. Also Aware,
Inc. tech. report AD891020, 1989.
[PS95] G. Plonka and V. Strela. Construction of multi-scaling functions with approximation and symmetry. Technical Report 95/22, Universität Rostock, 1995. To appear in: SIAM J. Math. Anal.
[PV96] See-May Phoong and P. P. Vaidyanathan. A polyphase approach to time-varying filter banks. In Proc. Int. Conf. Acoust., Speech, Signal Processing, pages 1554–1557, IEEE, Atlanta, GA, 1996.
[RBC*92] M. B. Ruskai, G. Beylkin, R. Coifman, I. Daubechies, S. Mallat, Y. Meyer, and L. Raphael,
editors. Wavelets and their Applications. Jones and Bartlett, Boston, MA, 1992. Outgrowth of NSF/CBMS
conference on Wavelets held at the University of Lowell, June 1990.
[RBSB94] J. O. A. Robertsson, J. O. Blanch, W. W. Symes, and C. S. Burrus. Galerkin–wavelet modeling of wave propagation: optimal finite-difference stencil design. Mathematical and Computer Modelling, 19(1):31–38, January 1994.
[RC83] L. Rabiner and R. Crochiere. Multirate Digital Signal Processing. Prentice-Hall, 1983.
[RD86] E. A. Robinson and T. S. Durrani. Geophysical Signal Processing. Prentice Hall, Englewood Cliffs, NJ, 1986.
[RD92] Olivier Rioul and P. Duhamel. Fast algorithms for discrete and continuous wavelet transforms. IEEE Transactions on Information Theory, 38(2):569–586, March 1992. Special issue on wavelets and multiresolution analysis.
[RD94] Olivier Rioul and Pierre Duhamel. A Remez exchange algorithm for orthonormal wavelets. IEEE Transactions on Circuits and Systems II, 41(8):550–560, August 1994.
[RG95] Peter Rieder and Jürgen Götze. Algebraic optimization of biorthogonal wavelet transforms. preprint, 1995.
[RGN94] Peter Rieder, Jürgen Götze, and Josef A. Nossek. Algebraic Design of Discrete Wavelet Transforms. Technical Report TUM-LNS-TR-94-2, Technical University of Munich, April 1994. Also submitted to IEEE Trans. on Circuits and Systems.
[Rio91] O. Rioul. Fast computation of the continuous wavelet transform. In Proc. Int. Conf. Acoust.,
Speech, Signal Processing, IEEE, Toronto, Canada, March 1991.
[Rio92] Olivier Rioul. Simple regularity criteria for subdivision schemes. SIAM J. Math. Anal., 23(6):1544–1576, November 1992.
[Rio93a] Olivier Rioul. A discrete-time multiresolution theory. IEEE Transactions on Signal Processing, 41(8):2591–2606, August 1993.
[Rio93b] Olivier Rioul. Regular wavelets: a discrete-time approach. IEEE Transactions on Signal Processing, 41(12):3572–3579, December 1993.
[RN96] P. Rieder and J. A. Nossek. Smooth multiwavelets based on two scaling functions. In Proc. IEEE Conf. on Time-Frequency and Time-Scale Analysis, pages 309–312, 1996.
[Ron92] Amos Ron. Characterization of Linear Independence and Stability of the Shifts of a Univariate Refinable Function in Terms of its Refinement Mask. Technical Report CMS TR 93-3, Computer Science Dept., University of Wisconsin, Madison, September 1992.
[RS83] L. Rabiner and R. W. Schaefer. Speech Signal Processing. Prentice-Hall, Englewood Cliffs, NJ, 1983.
[RT91] T. A. Ramstad and J. P. Tanem. Cosine modulated analysis-synthesis filter bank with critical sampling and perfect reconstruction. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 1789–1792, IEEE ICASSP-91, 1991.
[Rus92] Mary Beth Ruskai. Introduction. In M. B. Ruskai, G. Beylkin, R. Coifman, I. Daubechies, S.
Mallat, Y. Meyer, and L Raphael, editors, Wavelets and their Applications, Jones and Bartlett, Boston, MA,
1992.
[RV91] Olivier Rioul and Martin Vetterli. Wavelets and signal processing. IEEE Signal Processing Magazine, 8(4):14–38, October 1991.
[RV93] K. Ramchandran and M. Vetterli. Best wavelet packet bases in a rate-distortion sense. IEEE Transactions on Image Processing, 2(2):160–175, 1993.
[RW98] Howard L. Resnikoff and Raymond O. Wells, Jr. Wavelet Analysis: The Scalable Structure of Information. Springer-Verlag, New York, 1998.
[RY90] K. R. Rao and P. Yip. Discrete Cosine Transform - Algorithms, Advantages and Applications.
Academic Press, 1990.
[SA92] E. P. Simoncelli and E. H. Adelson. Subband transforms. In John W. Woods, editor, Subband
Image Coding, Kluwer, Norwell, MA, to appear 1992. Also, MIT Vision and Modeling Tech. Report No.
137, Sept. 1989.
[Sai94a] Naoki Saito. Local Feature Extraction and Its Applications Using a Library of Bases. PhD thesis, Yale University, New Haven, CT, 1994.
[Sai94b] Naoki Saito. Simultaneous noise suppression and signal compression using a library of orthonormal bases and the minimum description length criterion. In E. Foufoula-Georgiou and P. Kumar, editors, Wavelets in Geophysics, Academic Press, San Diego, 1994.
[SB86a] M. J. Smith and T. P. Barnwell. Exact reconstruction techniques for tree-structured subband coders. IEEE Transactions on Acoustics, Speech, and Signal Processing, 34:434–441, June 1986.
[SB86b] M. J. Smith and T. P. Barnwell III. Exact reconstruction techniques for tree-structured subband coders. IEEE Transactions on Acoustics, Speech, and Signal Processing, 34:434–441, 1986.
[SB87] M. J. Smith and T. P. Barnwell. A new filter bank theory for time-frequency representation. IEEE Transactions on Acoustics, Speech, and Signal Processing, 35:314–327, March 1987.
[SB93] H. V. Sorensen and C. S. Burrus. Efficient computation of the DFT with only a subset of input or output points. IEEE Transactions on Signal Processing, 41(3):1184–1200, March 1993.
[SC95] J. A. Storer and M. Cohn, editors. Proceedings of Data Compression Conference. IEEE Computer
Society Press, Snowbird, Utah, March 1995.
[SC96] J. A. Storer and M. Cohn, editors. Proceedings of Data Compression Conference. IEEE Computer
Society Press, Snowbird, Utah, April 1996.
[Sca94] John A. Scales. Theory of Seismic Imaging. Samizdat Press, Golden, CO, 1994.
[Sel96] I. W. Selesnick. New Techniques for Digital Filter Design. PhD thesis, Rice University, 1996.
[Sel97] Ivan W. Selesnick. Parameterization of Orthogonal Wavelet Systems. Technical Report, ECE Dept. and Computational Mathematics Laboratory, Rice University, Houston, TX, May 1997.
[Sha93] J. M. Shapiro. Embedded image coding using zerotrees of wavelet coefficients. IEEE Transactions on Signal Processing, 41(12):3445–3462, December 1993.
[She92] M. J. Shensa. The discrete wavelet transform: wedding the à trous and Mallat algorithms. IEEE Transactions on Signal Processing, 40(10):2464–2482, October 1992.
[SHGB93] P. Steffen, P. N. Heller, R. A. Gopinath, and C. S. Burrus. Theory of regular M-band wavelet bases. IEEE Transactions on Signal Processing, 41(12):3497–3511, December 1993. Special Transaction issue on wavelets; Rice contribution also in Tech. Report No. CML TR-91-22, Nov. 1991.
[SHS*96] V. Strela, P. N. Heller, G. Strang, P. Topiwala, and C. Heil. The application of multiwavelet filter banks to image processing. Technical Report, MIT, January 1996. Submitted to IEEE Trans. Image Processing.
[Sie86] William M. Siebert. Circuits, Signals, and Systems. MIT Press and McGraw-Hill, Cambridge
and New York, 1986.
[SL96] M. Sablatash and J. H. Lodge. The design of filter banks with specified minimum stopband attenuation for wavelet packet-based multiple access communications. In Proceedings of 18th Biennial Symposium on Communications, Queen's University, Kingston, ON, Canada, June 1996.
[SLB97] Ivan W. Selesnick, Markus Lang, and C. Sidney Burrus. Magnitude squared design of recursive filters with the Chebyshev norm using a constrained rational Remez algorithm. IEEE Transactions on Signal Processing, to appear 1997.
[SMHH95] O. V. Shentov, S. K. Mitra, U. Heute, and A. N. Hossen. Subband DFT Part I: definition, interpretation and extensions. Signal Processing, 41:261–278, 1995.
[SN96] Gilbert Strang and T. Nguyen. Wavelets and Filter Banks. Wellesley–Cambridge Press, Wellesley, MA, 1996.
[SOB96] Ivan W. Selesnick, Jan E. Odegard, and C. Sidney Burrus. Nearly symmetric orthogonal wavelets with non-integer DC group delay. In Proceedings of the IEEE Digital Signal Processing Workshop, pages 431–434, Loen, Norway, September 1996.
[SP93] Wim Sweldens and Robert Piessens. Calculation of the wavelet decomposition using quadrature formulae. In Tom H. Koornwinder, editor, Wavelets: An Elementary Treatment of Theory and Applications, pages 139–160, World Scientific, Singapore, 1993.
[SP96a] A. Said and W. A. Pearlman. A new, fast, and efficient image codec based on set partitioning in hierarchical trees. IEEE Transactions Cir. Syst. Video Tech., 6(3):243–250, June 1996.
[SP96b] A. Said and W. A. Pearlman. An image multiresolution representation for lossless and lossy image compression. IEEE Transactions on Image Processing, 5:1303–1310, September 1996.
[SS94] V. Strela and G. Strang. Finite element multiwavelets. In Proceedings of SPIE, Wavelet Applications in Signal and Image Processing II, pages 202–213, San Diego, CA, July 1994.
[SS95] G. Strang and V. Strela. Short wavelets and matrix dilation equations. IEEE Trans. SP, 43(1):108–115, January 1995.
[Str86] Gilbert Strang. Introduction to Applied Mathematics. Wellesley-Cambridge Press, Wellesley,
MA, 1986.
[Str89] Gilbert Strang. Wavelets and dilation equations: a brief introduction. SIAM Review, 31(4):614–627, 1989. Also, MIT Numerical Analysis Report 89-9, Aug. 1989.
[Str94] Gilbert Strang. Wavelets. American Scientist, 82(3):250–255, May 1994.
[Str96a] G. Strang. Eigenvalues of (↓ 2)H and convergence of the cascade algorithm. IEEE Transactions
on Signal Processing, 44, 1996.
[Str96b] V. Strela. Multiwavelets: Theory and Applications. PhD thesis, Dept. of Mathematics, MIT,
June 1996.
[SV93] A. K. Soman and P. P. Vaidyanathan. On orthonormal wavelets and paraunitary filter banks. IEEE Transactions on Signal Processing, 41(3):1170–1183, March 1993.
[SVN93] A. K. Soman, P. P. Vaidyanathan, and T. Q. Nguyen. Linear phase paraunitary filter banks: theory, factorizations and designs. IEEE Transactions on Signal Processing, 41(12):3480–3496, December 1993.
[SW93] Larry L. Schumaker and Glenn Webb, editors. Recent Advances in Wavelet Analysis. Academic
Press, San Diego, 1993. Volume 3 in the series: Wavelet Analysis and its Applications.
[SW97] W. So and J. Wang. Estimating the support of a scaling vector. SIAM J. Matrix Anal. Appl., 18(1):66–73, January 1997.
[Swe95] Wim Sweldens. The Lifting Scheme: A Construction of Second Generation Wavelets. Technical
Report TR-1995-6, Math Dept. University of South Carolina, May 1995.
[Swe96a] Wim Sweldens. The lifting scheme: a custom-design construction of biorthogonal wavelets. Applied and Computational Harmonic Analysis, 3(2):186–200, 1996. Also a technical report, Math Dept., Univ. SC, April 1995.
[Swe96b] Wim Sweldens. Wavelets: what next? Proceedings of the IEEE, 84(4):680–685, April 1996.
[Tas96] Carl Taswell. Handbook of Wavelet Transform Algorithms. Birkhäuser, Boston, 1996.
[Tew98] Ahmed H. Tewfik. Wavelets and Multiscale Signal Processing Techniques: Theory and Applications. to appear, 1998.
[The89] The UltraWave Explorer User's Manual. Aware, Inc., Cambridge, MA, July 1989.
[Tia96] Jun Tian. The Mathematical Theory and Applications of Biorthogonal Coifman Wavelet Systems.
PhD thesis, Rice University, February 1996.
[TM94a] Hai Tao and R. J. Moorhead. Lossless progressive transmission of scientific data using biorthogonal wavelet transform. In Proceedings of the IEEE Conference on Image Processing, Austin, ICIP-94, November 1994.
[TM94b] Hai Tao and R. J. Moorhead. Progressive transmission of scientific data using biorthogonal wavelet transform. In Proceedings of the IEEE Conference on Visualization, Washington, October 1994.
[Tur86] M. Turner. Texture discrimination by Gabor functions. Biological Cybernetics, 55:71–82, 1986.
[TVC96] M. J. Tsai, J. D. Villasenor, and F. Chen. Stack-run image coding. IEEE Trans. Circ. And Syst. Video Tech., 519–521, October 1996.
[TW95] Jun Tian and Raymond O. Wells, Jr. Vanishing Moments and Wavelet Approximation. Technical
Report CML TR-9501, Computational Mathematics Lab, Rice University, January 1995.
[TW96] J. Tian and R. O. Wells. Image compression by reduction of indices of wavelet transform coefficients. Proc. DCC, April 1996.
[TWBO97] J. Tian, R. O. Wells, C. S. Burrus, and J. E. Odegard. Coifman wavelet systems: approximation, smoothness, and computational algorithms. In Jacques Periaux, editor, Computational Science for the 21st Century, John Wiley and Sons, New York, 1997. In honor of Roland Glowinski's 60th birthday.
[Uns96] Michael Unser. Approximation power of biorthogonal wavelet expansions. IEEE Transactions on Signal Processing, 44(3):519–527, March 1996.
[VA96] M. J. Vrhel and A. Aldroubi. Pre-filtering for the Initialization of Multi-wavelet transforms. Technical Report, National Institutes of Health, 1996.
[Vai87a] P. P. Vaidyanathan. Quadrature mirror filter banks, M-band extensions and perfect-reconstruction techniques. IEEE Acoustics, Speech, and Signal Processing Magazine, 4(3):4–20, July 1987.
[Vai87b] P. P. Vaidyanathan. Theory and design of M-channel maximally decimated quadrature mirror filters with arbitrary M, having perfect reconstruction properties. IEEE Transactions on Acoustics, Speech, and Signal Processing, 35(4):476–492, April 1987.
[Vai92] P. P. Vaidyanathan. Multirate Systems and Filter Banks. Prentice-Hall, Englewood Cliffs, NJ, 1992.
[VBL95] J. D. Villasenor, B. Belzer, and J. Liao. Wavelet filter evaluation for image compression. IEEE Transactions on Image Processing, 4, August 1995.
[VD89] P. P. Vaidyanathan and Z. Doğanata. The role of lossless systems in modern digital signal processing: A tutorial. IEEE Trans. on Education, 32(3):181–197, August 1989.
[VD95] P. P. Vaidyanathan and Igor Djokovic. Wavelet transforms. In Wai-Kai Chen, editor, The Circuits and Filters Handbook, chapter 6, pages 134–219, CRC Press and IEEE Press, Boca Raton, 1995.
[Vet86] Martin Vetterli. Filter banks allowing perfect reconstruction. Signal Processing, 10(3):219–244, April 1986.
[Vet87] Martin Vetterli. A theory of multirate filter banks. IEEE Transactions on Acoustics, Speech, and Signal Processing, 35(3):356–372, March 1987.
[VG89] Martin Vetterli and Didier Le Gall. Perfect reconstruction FIR filter banks: some properties and factorizations. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(7):1057–1071, July 1989.
[VH88] P. P. Vaidyanathan and Phuong-Quan Hoang. Lattice structures for optimal design and robust implementation of two-channel perfect reconstruction QMF banks. IEEE Transactions on Acoustics, Speech, and Signal Processing, 36(1):81–93, January 1988.
[VH92] M. Vetterli and C. Herley. Wavelets and filter banks: theory and design. IEEE Transactions on Acoustics, Speech, and Signal Processing, 2207–2232, September 1992.
[VK95] Martin Vetterli and Jelena Kovačević. Wavelets and Subband Coding. Prentice-Hall, Upper Saddle River, NJ, 1995.
[VL89] M. Vetterli and D. Le Gall. Perfect reconstruction FIR filter banks: some properties and factorizations. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(7):1057–1071, July 1989.
[VLU97] M. J. Vrhel, C. Lee, and M. Unser. Fast continuous wavelet transform: a least-squares formulation. Signal Processing, 57(2):103–120, March 1997.
[VM88] P. P. Vaidyanathan and S. K. Mitra. Polyphase networks, block digital filtering, LPTV systems, and alias-free QMF banks: a unified approach based on pseudocirculants. IEEE Transactions on Acoustics, Speech, and Signal Processing, 36:381–391, March 1988.
[VNDS89] P. P. Vaidyanathan, T. Q. Nguyen, Z. Doğanata, and T. Saramäki. Improved technique for design of perfect reconstruction FIR QMF banks with lossless polyphase matrices. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(7):1042–1056, July 1989.
[Vol92] Hans Volkmer. On the regularity of wavelets. IEEE Transactions on Information Theory, 38(2):872–876, March 1992.
[Wal94] Gilbert G. Walter, editor. Wavelets and Other Orthogonal Systems with Applications. CRC
Press, Boca Raton, FL, 1994.
[WB95] D. Wei and C. S. Burrus. Optimal soft-thresholding for wavelet transform coding. In Proceedings of IEEE International Conference on Image Processing, pages I:610–613, Washington, DC, October 1995.
[WB96] Dong Wei and Alan C. Bovik. On generalized coiflets: construction, near-symmetry, and optimization. IEEE Transactions on Circuits and Systems II, submitted October 1996.
[WB97] Dong Wei and Alan C. Bovik. Sampling approximation by generalized coiflets. IEEE Transactions on Signal Processing, submitted January 1997.
[Wei95] Dong Wei. Investigation of Biorthogonal Wavelets. Technical Report ECE-696, Rice University,
April 1995.
[Wel93] R. O. Wells, Jr. Parameterizing smooth compactly supported wavelets. Transactions of the American Mathematical Society, 338(2):919–931, 1993. Also Aware tech report AD891231, Dec. 1989.
[Wic92] M. V. Wickerhauser. Acoustic signal compression with wavelet packets. In Charles K. Chui, editor, Wavelets: A Tutorial in Theory and Applications, pages 679–700, Academic Press, Boca Raton, 1992. Volume 2 in the series: Wavelet Analysis and its Applications.
[Wic95] Mladen Victor Wickerhauser. Adapted Wavelet Analysis from Theory to Software. A K Peters,
Wellesley, MA, 1995.
[WNC87] I. Witten, R. Neal, and J. Cleary. Arithmetic coding for data compression. Communications of the ACM, 30:520–540, June 1987.
[WO92a] G. Wornell and A. V. Oppenheim. Estimation of fractal signals from noisy measurements using wavelets. IEEE Transactions on Acoustics, Speech, and Signal Processing, 40(3):611–623, March 1992.
[WO92b] G. W. Wornell and A. V. Oppenheim. Wavelet-based representations for a class of self-similar signals with application to fractal modulation. IEEE Transactions on Information Theory, 38(2):785–800, March 1992.
[WOG*95a] D. Wei, J. E. Odegard, H. Guo, M. Lang, and C. S. Burrus. SAR data compression using best-adapted wavelet packet basis and hybrid subband coding. In Harold H. Szu, editor, Proceedings of SPIE Conference 2491, Wavelet Applications II, pages 1131–1141, Orlando, April 17–21 1995.
[WOG*95b] D. Wei, J. E. Odegard, H. Guo, M. Lang, and C. S. Burrus. Simultaneous noise reduction and SAR image data compression using best wavelet packet basis. In Proceedings of IEEE International Conference on Image Processing, pages III:200–203, Washington, DC, October 1995.
[Wor96] Gregory W. Wornell. Signal Processing with Fractals, A Wavelet-Based Approach. Prentice Hall, Upper Saddle River, NJ, 1996.
[WTWB98] Dong Wei, Jun Tian, Raymond O. Wells, Jr., and C. Sidney Burrus. A new class of biorthogonal wavelet systems for image transform coding. IEEE Transactions on Image Processing, 7(7):1000–1013, July 1998.
[WWJ95] J. Wu, K. M. Wong, and Q. Jin. Multiplexing based on wavelet packets. In Proceedings of
SPIE Conference, Aerosense, Orlando, April 1995.
[WZ94] Raymond O. Wells, Jr. and Xiaodong Zhou. Wavelet interpolation and approximate solutions of elliptic partial differential equations. In R. Wilson and E. A. Tanner, editors, Noncompact Lie Groups, Kluwer, 1994. Also in Proceedings of NATO Advanced Research Workshop, 1992, and CML Technical Report TR-9203, Rice University, 1992.
[XGHS96] X.-G. Xia, J. S. Geronimo, D. P. Hardin, and B. W. Suter. Design of prefilters for discrete multiwavelet transforms. IEEE Trans. SP, 44(1):25–35, January 1996.
[XHRO95] Z. Xiong, C. Herley, K. Ramchandran, and M. T. Orchard. Space-frequency quantization for a space-varying wavelet packet image coder. Proc. Int. Conf. Image Processing, 1:614–617, October 1995.
[XS96] X.-G. Xia and B. W. Suter. Vector-valued wavelets and vector filter banks. IEEE Trans. SP, 44(3):508–518, March 1996.
[You80] R. M. Young. An Introduction to Nonharmonic Fourier Series. Academic Press, New York, 1980.
[You93] R. K. Young. Wavelet Theory and Its Applications. Kluwer Academic Publishers, Boston, MA,
1993.
[ZT92a] H. Zou and A. H. Tewfik. Design and parameterization of M-band orthonormal wavelets. In Proceedings of the IEEE International Symposium on Circuits and Systems, pages 983–986, ISCAS-92, San Diego, 1992.
[ZT92b] H. Zou and A. H. Tewfik. Discrete orthogonal M-band wavelet decompositions. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pages IV:605–608, San Francisco, CA, 1992.
[2] Time and Frequency Representation of Signals and Systems. Springer-Verlag, Wien – New York, 1989. CISM Courses and Lectures No. 309.
[3] Time-Frequency Signal Analysis. Wiley, Halsted Press, New York, 1992. Result of 1990 Special Conference on Time-Frequency Analysis, Gold Coast, Australia.
[4] Time-Frequency Signal Analysis. Wiley, Halsted Press, New York, 1992. Result of 1990 Special Conference on Time-Frequency Analysis, Gold Coast, Australia.
[5] Wavelets: An Elementary Treatment of Theory and Applications. World Scientific, Singapore, 1993.
[6] Wavelets, Fractals, and Fourier Transforms. Clarendon Press, Oxford, 1993. Proceedings of a conference on Wavelets at Newnham College, Cambridge, Dec. 1990.
[7] Wavelets in Geophysics. Academic Press, San Diego, 1994. Volume 4 in the series: Wavelet Analysis and its Applications.
[8] Proceedings of Data Compression Conference. IEEE Computer Society Press, Snowbird, Utah, March
1995.
[9] Proceedings of Data Compression Conference. IEEE Computer Society Press, Snowbird, Utah, April
1996.
[10] Wavelets in Medicine and Biology. CRC Press, Boca Raton, 1996.
[11] Multiscale Wavelet Methods for Partial Differential Equations. Academic Press, San Diego, 1997. Volume 6 in the series: Wavelet Analysis and its Applications.
[12] A. N. Akansu and R. A. Haddad. Multiresolution Signal Decomposition, Transforms, Subbands, and
Wavelets. Academic Press, San Diego, CA, 1992.
[13] A. N. Akansu and R. A. Haddad. Multiresolution Signal Decomposition, Transforms, Subbands, and
Wavelets. Academic Press, San Diego, CA, 1992.
[14] Ali N. Akansu and Mark J. T. Smith. Subband and Wavelet Transforms, Design and Applications.
Kluwer Academic Publishers, Boston, 1996.
[15] Ali N. Akansu and Mark J. T. Smith. Subband and Wavelet Transforms, Design and Applications.
Kluwer Academic Publishers, Boston, 1996.
[16] A. Aldroubi. Oblique and hierarchical multiwavelet bases. Technical report, National Institutes of
Health, December 1996.
[17] Akram Aldroubi, Patrice Abry, and Michael Unser. Construction of biorthogonal wavelets starting
from any two multiresolutions. preprint, 1996.
[18] Akram Aldroubi and Michael Unser, editors. Wavelets in Medicine and Biology. CRC Press, Boca
Raton, 1996.
[19] B. Alpert. A class of bases in L2 for the sparse representation of integral operators. SIAM J. Math. Analysis, 24, 1993.
[20] F. Argoul, A. Arneodo, J. Elezgaray, and G. Grasseau. Wavelet transform of fractal aggregates. Physics Letters A., 135:327–336, March 1989.
[21] P. Auscher. Ondelettes fractales et applications. Ph.D. thesis, 1989.
[22] P. Auscher, G. Weiss, and M. V. Wickerhauser. Local sine and cosine bases of Coifman and Meyer and the construction of smooth wavelets. In Wavelets: A Tutorial in Theory and Applications, pages 1551. Academic Press, 1992. Volume 2 in series on Wavelet Analysis and its Applications.
[23] Aware, Inc., Cambridge, MA. The UltraWave Explorer User's Manual, July 1989.
[24] F. Bao and N. Erdol. The optimal wavelet transform and translation invariance. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 3, page III:13–16, ICASSP-94, Adelaide, May 1994.
[25] F. Bao and Nurgun Erdol. On the discrete wavelet transform and shiftability. In Proceedings of the Asilomar Conference on Signals, Systems and Computers, page 1442–1445, Pacific Grove, CA, November 1993.
[26] S. Basu and C. Chiang. Complete parameterization of two dimensional orthonormal wavelets. In
Proceedings of IEEE-SP Symposium on Time-Frequency and Time-Scale Methods '92, Victoria, BC.
IEEE, 1992.
[27] T. C. Bell, J. G. Cleary, and I. H. Witten. Text Compression. Prentice Hall, N.J., 1990.
[28] John J. Benedetto and Michael W. Frazier, editors. Wavelets: Mathematics and Applications. CRC
Press, Boca Raton, FL, 1993.
[29] A. Benveniste, R. Nikoukhah, and A. S. Willsky. Multiscale system theory. IEEE Transactions on Circuits and Systems, I, 41(1):2–15, January 1994.
[30] Albert P. Berg and Wasfy B. Mikhael. An efficient structure and algorithm for the mixed transform representation of signals. In Proceedings of the 29th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, November 1995.
[31] Jonathan Berger, Ronald R. Coifman, and Maxim J. Goldberg. Removing noise from music using local trigonometric bases and wavelet packets. Journal of the Audio Engineering Society, 42(10):808–817, October 1994.
[32] G. Beylkin. On the representation of operators in bases of compactly supported wavelets. SIAM Journal on Numerical Analysis, 29(6):1716–1740, December 1992.
[33] G. Beylkin. On the representation of operators in bases of compactly supported wavelets. SIAM Journal on Numerical Analysis, 29(6):1716–1740, December 1992.
[34] G. Beylkin. On the representation of operators in bases of compactly supported wavelets. SIAM Journal on Numerical Analysis, 29(6):1716–1740, December 1992.
[35] G. Beylkin. On wavelet-based algorithms for solving differential equations. In Wavelets: Mathematics and Applications, page 449–466. CRC Press, Boca Raton, 1994.
[36] G. Beylkin, R. R. Coifman, and V. Rokhlin. Fast wavelet transforms and numerical algorithms I. Communications on Pure and Applied Mathematics, 44:141–183, 1991.
[37] G. Beylkin, R. R. Coifman, and V. Rokhlin. Fast wavelet transforms and numerical algorithms I. Communications on Pure and Applied Mathematics, 44:141–183, 1991.
[38] G. Beylkin, R. R. Coifman, and V. Rokhlin. Wavelets in numerical analysis. In Wavelets and Their Applications, page 181–210. Jones and Bartlett, Boston, 1992. Outgrowth of the NSF/CBMS Conference on Wavelets, Lowell, June 1990.
[39] G. Beylkin, J. M. Keiser, and L. Vozovoi. A new class of stable time discretization schemes for the solution of nonlinear PDEs. Technical report, Applied Mathematics Program, University of Colorado, Boulder, CO, 1993.
[40] Gregory Beylkin. An adaptive pseudo-wavelet approach for solving nonlinear partial differential equations. In Multiscale Wavelet Methods for Partial Differential Equations. Academic Press, San Diego, 1997. Volume 6 in the series: Wavelet Analysis and Applications.
[41] Gregory Beylkin and James M. Keiser. On the adaptive numerical solution of nonlinear partial differential equations in wavelet bases. Journal of Computational Physics, 132:233–259, 1997.
[42] J. N. Bradley, C. M. Brislawn, and T. Hopper. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression. In Visual Info. Process. II, volume 1961, Orlando, FL, April 1993. SPIE.
[43] Andrew Bruce, David Donoho, and Hong-Ye Gao. Wavelet analysis. IEEE Spectrum, 33(10):26–35, October 1996.
[44] Andrew Bruce, David Donoho, and Hong-Ye Gao. Wavelet analysis. IEEE Spectrum, 33(10):26–35, October 1996.
[45] C. M. Brislawn, J. N. Bradley, R. J. Onyshczak, and T. Hopper. The FBI compression standard for digitized fingerprint images. In Proceedings of the SPIE Conference 2847, Applications of Digital Image Processing XIX, volume 2847, 1996.
[46] A. G. Bruce, D. L. Donoho, H.-Y. Gao, and R. D. Martin. Denoising and robust nonlinear wavelet analysis. In Proceedings of Conference on Wavelet Applications, volume 2242, page 325–336, Orlando, FL, April 1994. SPIE.
[47] A. G. Bruce, D. L. Donoho, H.-Y. Gao, and R. D. Martin. Denoising and robust nonlinear wavelet analysis. In Proceedings of Conference on Wavelet Applications, volume 2242, page 325–336, Orlando, FL, April 1994. SPIE.
[48] Barbara Burke. The mathematical microscope: Waves, wavelets, and beyond. In A Positron Named Priscilla, Scientific Discovery at the Frontier, chapter 7, page 196–235. National Academy Press, Washington, DC, 1994.
[49] Barbara Burke. The mathematical microscope: Waves, wavelets, and beyond. In A Positron Named Priscilla, Scientific Discovery at the Frontier, chapter 7, page 196–235. National Academy Press, Washington, DC, 1994.
[50] Michael Burrows and David J. Wheeler. A block-sorting lossless data compression algorithm. Technical
report 124, Digital Systems Research Center, Palo Alto, 1994.
[51] C. S. Burrus. Scaling functions and wavelets. First version written in 1989, The Computational Mathematics Lab. and ECE Dept., Rice University, Houston, TX, 1993.
[52] C. S. Burrus and J. E. Odegard. Coiflet systems and zero moments. IEEE Transactions on Signal Processing, 46(3):761–766, March 1998. Also CML Technical Report, Oct. 1996.
[53] C. S. Burrus and T. W. Parks. DFT/FFT and Convolution Algorithms. John Wiley & Sons, New
York, 1985.
[54] C. S. Burrus and T. W. Parks. DFT/FFT and Convolution Algorithms. John Wiley & Sons, New
York, 1985.
[55] C. Sidney Burrus. Basic vector space methods in signal and systems theory. OpenStax Cnx, 2012.
https://legacy.cnx.org/content/col10636/latest/.
[56] S. Cavaretta, W. Dahmen, and C. A. Micchelli. Stationary Subdivision, volume 93. American Mathe-
matical Society, 1991.
[57] Charles K. Chui, Laura Montefusco, and Luigia Puccio, editors. Wavelets: Theory, Algorithms, and Applications. Academic Press, San Diego, 1995. Volume 5 in the series: Wavelet Analysis and its Applications.
[58] Shaobing Chen and David L. Donoho. Basis pursuit. In Proceedings of the 28th Asilomar Conference on Signals, Systems, and Computers, page 41–44, Pacific Grove, CA, November 1994. Also Stanford Statistics Dept. Report, 1994.
[59] Shaobing Chen and David L. Donoho. Basis pursuit. In Proceedings of the 28th Asilomar Conference on Signals, Systems, and Computers, page 41–44, Pacific Grove, CA, November 1994. Also Stanford Statistics Dept. Report, 1994.
[60] Shaobing Chen and David L. Donoho. Atomic decomposition by basis pursuit. Technical report 479,
Statistics Department, Stanford, May 1995. preprint.
[61] Shaobing Chen and David L. Donoho. Atomic decomposition by basis pursuit. Technical report 479,
Statistics Department, Stanford, May 1995. preprint.
[62] Shaobing Chen and David L. Donoho. Atomic decomposition by basis pursuit. Technical report 479,
Statistics Department, Stanford, May 1995. preprint.
[63] Z. Chen, C. A. Micchelli, and Y. Xu. The Petrov–Galerkin method for second kind integral equations II: multiwavelet schemes. Technical report, Math. Dept., North Dakota State University, November 1996.
[64] Ole Christensen. An Introduction to Frames and Riesz Bases. Birkhäuser, Berlin, 2002.
[65] Ole Christensen. An Introduction to Frames and Riesz Bases. Birkhäuser, Berlin, 2002.
[66] Ole Christensen. An Introduction to Frames and Riesz Bases. Birkhäuser, 2003.
[67] C. K. Chui and J. Lian. A study of orthonormal multi-wavelets. Applied Numerical Mathematics, 20(3):273–298, March 1996.
[68] Charles K. Chui. An Introduction to Wavelets. Academic Press, San Diego, CA, 1992. Volume 1 in
the series: Wavelet Analysis and its Applications.
[69] Charles K. Chui. An Introduction to Wavelets. Academic Press, San Diego, CA, 1992. Volume 1 in
the series: Wavelet Analysis and its Applications.
[70] Charles K. Chui. An Introduction to Wavelets. Academic Press, San Diego, CA, 1992. Volume 1 in
the series: Wavelet Analysis and its Applications.
[71] Charles K. Chui. Wavelets: A Tutorial in Theory and Applications. Academic Press, San Diego, CA,
1992. Volume 2 in the series: Wavelet Analysis and its Applications.
[72] Charles K. Chui. Wavelets: A Tutorial in Theory and Applications. Academic Press, San Diego, CA,
1992. Volume 2 in the series: Wavelet Analysis and its Applications.
[73] Charles K. Chui. Wavelets: A Mathematical Tool for Signal Analysis. SIAM, Philadelphia, 1997.
[74] A. Cohen. Biorthogonal wavelets. In Wavelets: A Tutorial in Theory and Applications. Academic
Press, Boca Raton, 1992. Volume 2 in the series: Wavelet Analysis and its Applications.
[75] A. Cohen, I. Daubechies, and J. C. Feauveau. Biorthogonal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics, 45:485–560, 1992.
[76] A. Cohen, I. Daubechies, and J. C. Feauveau. Biorthogonal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics, 45:485–560, 1992.
[77] A. Cohen, I. Daubechies, and G. Plonka. Regularity of refinable function vectors. Technical report 95/16, Universität Rostock, 1995. To appear in: J. Fourier Anal. Appl.
[78] A. Cohen and Q. Sun. An arithmetic characterization of the conjugate quadrature filters associated to orthonormal wavelet bases. SIAM Journal of Mathematical Analysis, 24(5):1355–1360, 1993.
[79] Albert Cohen, Ingrid Daubechies, and Pierre Vial. Wavelets on the interval and fast wavelet transforms. Applied and Computational Harmonic Analysis, 1(1):54–81, December 1993.
[80] Albert Cohen, Ingrid Daubechies, and Pierre Vial. Wavelets on the interval and fast wavelet transforms. Applied and Computational Harmonic Analysis, 1(1):54–81, December 1993.
[81] L. Cohen. Time-frequency distributions - a review. Proceedings of the IEEE, 77(7):941–981, 1989.
[82] L. Cohen. Time-frequency distributions - a review. Proceedings of the IEEE, 77(7):941–981, 1989.
[83] L. Cohen. Time-frequency distributions - a review. Proceedings of the IEEE, 77(7):941–981, 1989.
[84] Leon Cohen. Time-Frequency Analysis. Prentice Hall, Upper Saddle River, NJ, 1995.
[85] Leon Cohen. Time–Frequency Analysis. Prentice Hall, Upper Saddle River, NJ, 1995.
[86] Leon Cohen. Time–Frequency Analysis. Prentice Hall, Upper Saddle River, NJ, 1995.
[87] R. R. Coifman. Wavelet analysis and signal processing. In Signal Processing, Part I: Signal Processing Theory, page 59–68. Springer-Verlag, New York, 1990. IMA vol. 22, lectures from IMA Program, summer 1988.
[88] R. R. Coifman and D. L. Donoho. Translation-invariant de-noising. In Wavelets and Statistics, page 125–150. Springer-Verlag, 1995. Springer Lecture Notes in Statistics.
[89] R. R. Coifman and D. L. Donoho. Translation-invariant de-noising. In Wavelets and Statistics, page 125–150. Springer-Verlag, 1995. Springer Lecture Notes in Statistics.
[90] R. R. Coifman and D. L. Donoho. Translation-invariant de-noising. In Wavelets and Statistics, page 125–150. Springer-Verlag, 1995. Springer Lecture Notes in Statistics.
[91] R. R. Coifman, Y. Meyer, S. Quake, and M. V. Wickerhauser. Signal processing and compression with
wave packets. In Proceedings of the International Conference on Wavelets, 1989 Marseille. Masson,
Paris, 1992.
[92] R. R. Coifman, Y. Meyer, and M. V. Wickerhauser. Wavelet analysis and signal processing. In Wavelets
and Their Applications. Jones and Bartlett, Boston, 1992.
[93] R. R. Coifman and M. V. Wickerhauser. Entropy-based algorithms for best basis selection. IEEE Transaction on Information Theory, 38(2):713–718, March 1992.
[94] R. R. Coifman and M. V. Wickerhauser. Entropy-based algorithms for best basis selection. IEEE Transaction on Information Theory, 38(2):713–718, March 1992.
[95] Ronald R. Coifman and Fazal Majid. Adapted waveform analysis and denoising. Technical report,
Yale University, New Haven, 1994.
[96] Ronald R. Coifman and M. V. Wickerhauser. Best-adapted wave packet bases. Technical report, Math
Dept., Yale University, New Haven, 1990.
[97] Jean-Michel Combes, Alexander Grossmann, and Philippe Tchamitchian, editors. Wavelets, Time-Frequency Methods and Phase Space. Springer-Verlag, Berlin, 1989. Proceedings of the International Conference, Marseille, France, December 1987.
[98] T. Cooklev, A. Nishihara, M. Kato, and M. Sablatash. Two-channel multifilter banks and multiwavelets. In IEEE Proc. Int. Conf. Acoust., Speech, Signal Processing, volume 5, page 2769–2772, 1996.
[99] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, N.Y., 1991.
[100] Matthew Crouse. Frame robustness for de-noising. Technical report, EE 696 Course Report, Rice
University, Houston, Tx, May 1996.
[101] I. Daubechies, S. Mallat, and A. S. Willsky. Special issue on wavelet transforms and multiresolution signal analysis. IEEE Transactions on Information Theory, 38(2, part II):529–924, March, part II 1992.
[102] Ingrid Daubechies. Orthonormal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics, 41:909–996, November 1988.
[103] Ingrid Daubechies. Orthonormal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics, 41:909–996, November 1988.
[104] Ingrid Daubechies. Orthonormal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics, 41:909–996, November 1988.
[105] Ingrid Daubechies. Orthonormal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics, 41:909–996, November 1988.
[106] Ingrid Daubechies. Orthonormal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics, 41:909–996, November 1988.
[107] Ingrid Daubechies. Orthonormal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics, 41:909–996, November 1988.
[108] Ingrid Daubechies. Orthonormal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics, 41:909–996, November 1988.
[109] Ingrid Daubechies. Time-frequency localization operators: a geometric phase space approach. IEEE Transactions on Information Theory, 34(4):605–612, July 1988.
[110] Ingrid Daubechies. Orthonormal bases of wavelets with finite support – connection with discrete filters. In Wavelets, Time-Frequency Methods and Phase Space, page 38–66, Berlin, 1989. Springer-Verlag. Proceedings of International Colloquium on Wavelets and Applications, Marseille, France, Dec. 1987.
[111] Ingrid Daubechies. The wavelet transform, time-frequency localization and signal analysis. IEEE Transaction on Information Theory, 36(5):961–1005, September 1990. Also a Bell Labs Technical Report.
[112] Ingrid Daubechies. The wavelet transform, time-frequency localization and signal analysis. IEEE Transaction on Information Theory, 36(5):961–1005, September 1990. Also a Bell Labs Technical Report.
[113] Ingrid Daubechies. The wavelet transform, time-frequency localization and signal analysis. IEEE Transaction on Information Theory, 36(5):961–1005, September 1990. Also a Bell Labs Technical Report.
[114] Ingrid Daubechies. The wavelet transform, time-frequency localization and signal analysis. IEEE Transaction on Information Theory, 36(5):961–1005, September 1990. Also a Bell Labs Technical Report.
[115] Ingrid Daubechies. The wavelet transform, time-frequency localization and signal analysis. IEEE Transaction on Information Theory, 36(5):961–1005, September 1990. Also a Bell Labs Technical Report.
[116] Ingrid Daubechies. Ten Lectures on Wavelets. SIAM, Philadelphia, PA, 1992. Notes from the 1990
CBMS-NSF Conference on Wavelets and Applications at Lowell, MA.
[117] Ingrid Daubechies. Ten Lectures on Wavelets. SIAM, Philadelphia, PA, 1992. Notes from the 1990
CBMS-NSF Conference on Wavelets and Applications at Lowell, MA.
[118] Ingrid Daubechies. Ten Lectures on Wavelets. SIAM, Philadelphia, PA, 1992. Notes from the 1990
CBMS-NSF Conference on Wavelets and Applications at Lowell, MA.
[119] Ingrid Daubechies. Ten Lectures on Wavelets. SIAM, Philadelphia, PA, 1992. Notes from the 1990
CBMS-NSF Conference on Wavelets and Applications at Lowell, MA.
[120] Ingrid Daubechies. Ten Lectures on Wavelets. SIAM, Philadelphia, PA, 1992. Notes from the 1990
CBMS-NSF Conference on Wavelets and Applications at Lowell, MA.
[121] Ingrid Daubechies. Ten Lectures on Wavelets. SIAM, Philadelphia, PA, 1992. Notes from the 1990
CBMS-NSF Conference on Wavelets and Applications at Lowell, MA.
[122] Ingrid Daubechies. Ten Lectures on Wavelets. SIAM, Philadelphia, PA, 1992. Notes from the 1990
CBMS-NSF Conference on Wavelets and Applications at Lowell, MA.
[123] Ingrid Daubechies. Ten Lectures on Wavelets. SIAM, Philadelphia, PA, 1992. Notes from the 1990
CBMS-NSF Conference on Wavelets and Applications at Lowell, MA.
[124] Ingrid Daubechies. Ten Lectures on Wavelets. SIAM, Philadelphia, PA, 1992. Notes from the 1990
CBMS-NSF Conference on Wavelets and Applications at Lowell, MA.
[125] Ingrid Daubechies. Ten Lectures on Wavelets. SIAM, Philadelphia, PA, 1992. Notes from the 1990
CBMS-NSF Conference on Wavelets and Applications at Lowell, MA.
[126] Ingrid Daubechies. Orthonormal bases of compactly supported wavelets II: variations on a theme. SIAM Journal of Mathematical Analysis, 24(2):499–519, March 1993.
[127] Ingrid Daubechies. Where do wavelets come from? – a personal point of view. Proceedings of the IEEE, 84(4):510–513, April 1996.
[128] Ingrid Daubechies, Stéphane Jaffard, and Jean-Lin Journé. A simple Wilson orthonormal basis with exponential decay. preprint.
[129] Ingrid Daubechies and Jeffrey C. Lagarias. Two-scale difference equations, part I: existence and global regularity of solutions. SIAM Journal of Mathematical Analysis, 22:1388–1410, 1991. From an internal report, AT&T Bell Labs, Sept. 1988.
[130] Ingrid Daubechies and Jeffrey C. Lagarias. Two-scale difference equations, part II: local regularity, infinite products of matrices and fractals. SIAM Journal of Mathematical Analysis, 23:1031–1079, July 1992. From an internal report, AT&T Bell Labs, Sept. 1988.
[131] Ingrid Daubechies and Jeffrey C. Lagarias. Two-scale difference equations, part II: local regularity, infinite products of matrices and fractals. SIAM Journal of Mathematical Analysis, 23:1031–1079, July 1992. From an internal report, AT&T Bell Labs, Sept. 1988.
[132] Ingrid Daubechies and Wim Sweldens. Factoring wavelet transforms into lifting steps. Technical report,
Princeton and Lucent Technologies, NJ, September 1996. Preprint.
[133] Ingrid Daubechies and Wim Sweldens. Factoring wavelet transforms into lifting steps. Technical report,
Princeton and Lucent Technologies, NJ, September 1996. Preprint.
[134] J. E. Dennis and R. B. Schnabel. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1st edition, 1983.
[135] G. Deslauriers and S. Dubuc. Interpolation dyadique. In Fractals, Dimensions Non Entières et Applications, page 44–45. Masson, Paris, 1987.
[136] R. DeVore and G. Lorentz. Constructive Approximation. Springer-Verlag, 1993.
[137] Z. Doğanata, P. P. Vaidyanathan, and T. Q. Nguyen. General synthesis procedures for FIR lossless transfer matrices, for perfect-reconstruction multirate filter bank applications. IEEE Transactions on Acoustics, Speech, and Signal Processing, 36(10):1561–1574, October 1988.
[138] David L. Donoho. Nonlinear wavelet methods for recovery of signals, densities, and spectra from indirect and noisy data. In Different Perspectives on Wavelets, I, page 173–205, Providence, 1993. American Mathematical Society. Proceedings of Symposia in Applied Mathematics and Stanford Report 437, July 1993.
[139] David L. Donoho. Nonlinear wavelet methods for recovery of signals, densities, and spectra from indirect and noisy data. In Different Perspectives on Wavelets, I, page 173–205, Providence, 1993. American Mathematical Society. Proceedings of Symposia in Applied Mathematics and Stanford Report 437, July 1993.
[140] David L. Donoho. Nonlinear wavelet methods for recovery of signals, densities, and spectra from indirect and noisy data. In Different Perspectives on Wavelets, I, page 173–205, Providence, 1993. American Mathematical Society. Proceedings of Symposia in Applied Mathematics and Stanford Report 437, July 1993.
[141] David L. Donoho. Unconditional bases are optimal bases for data compression and for statistical estimation. Applied and Computational Harmonic Analysis, 1(1):100–115, December 1993. Also Stanford Statistics Dept. Report TR-410, Nov. 1992.
[142] David L. Donoho. Unconditional bases are optimal bases for data compression and for statistical estimation. Applied and Computational Harmonic Analysis, 1(1):100–115, December 1993. Also Stanford Statistics Dept. Report TR-410, Nov. 1992.
[143] David L. Donoho. Unconditional bases are optimal bases for data compression and for statistical estimation. Applied and Computational Harmonic Analysis, 1(1):100–115, December 1993. Also Stanford Statistics Dept. Report TR-410, Nov. 1992.
[144] David L. Donoho. Unconditional bases are optimal bases for data compression and for statistical estimation. Applied and Computational Harmonic Analysis, 1(1):100–115, December 1993. Also Stanford Statistics Dept. Report TR-410, Nov. 1992.
[145] David L. Donoho. Unconditional bases are optimal bases for data compression and for statistical estimation. Applied and Computational Harmonic Analysis, 1(1):100–115, December 1993. Also Stanford Statistics Dept. Report TR-410, Nov. 1992.
[146] David L. Donoho. Unconditional bases are optimal bases for data compression and for statistical estimation. Applied and Computational Harmonic Analysis, 1(1):100–115, December 1993. Also Stanford Statistics Dept. Report TR-410, Nov. 1992.
[147] David L. Donoho. Wavelet shrinkage and w. v. d. – a ten minute tour. Technical report TR-416, Statistics Department, Stanford University, Stanford, CA, January 1993. Preprint.
[148] David L. Donoho. On minimum entropy segmentation. In Wavelets: Theory, Algorithms, and Appli-
cations. Academic Press, San Diego, 1994. Also Stanford Tech Report TR-450, 1994; Volume 5 in the
series: Wavelet Analysis and its Applications.
[152] David L. Donoho. Interpolating wavelet transforms. Applied and Computational Harmonic Analysis,
to appear. Also Stanford Statistics Dept. report TR-408, Nov. 1992.
[153] David L. Donoho. Interpolating wavelet transforms. Applied and Computational Harmonic Analysis,
to appear. Also Stanford Statistics Dept. report TR-408, Nov. 1992.
[154] David L. Donoho. Interpolating wavelet transforms. Applied and Computational Harmonic Analysis,
to appear. Also Stanford Statistics Dept. report TR-408, Nov. 1992.
[155] David L. Donoho and Iain M. Johnstone. Ideal spatial adaptation via wavelet shrinkage. Biometrika, 81:425–455, 1994. Also Stanford Statistics Dept. Report TR-400, July 1992.
[156] David L. Donoho and Iain M. Johnstone. Ideal denoising in an orthonormal basis chosen from a library
of bases. C. R. Acad. Sci. Paris, Ser. I, 319, to appear 1994. Also Stanford Statistics Dept. Report
461, Sept. 1994.
[157] David L. Donoho and Iain M. Johnstone. Ideal denoising in an orthonormal basis chosen from a library
of bases. C. R. Acad. Sci. Paris, Ser. I, 319, to appear 1994. Also Stanford Statistics Dept. Report
461, Sept. 1994.
[158] David L. Donoho and Iain M. Johnstone. Ideal denoising in an orthonormal basis chosen from a library
of bases. C. R. Acad. Sci. Paris, Ser. I, 319, to appear 1994. Also Stanford Statistics Dept. Report
461, Sept. 1994.
[159] David L. Donoho and Iain M. Johnstone. Adapting to unknown smoothness via wavelet shrinkage.
Journal of American Statist. Assn., to appear 1995. Also Stanford Statistics Dept. Report TR-425,
June 1993.
[160] David L. Donoho and Iain M. Johnstone. Adapting to unknown smoothness via wavelet shrinkage.
Journal of American Statist. Assn., to appear 1995. Also Stanford Statistics Dept. Report TR-425,
June 1993.
[161] David L. Donoho, Iain M. Johnstone, Gérard Kerkyacharian, and Dominique Picard. Wavelet shrinkage: Asymptopia? Journal Royal Statistical Society B., 57(2):301–337, 1995. Also Stanford Statistics Dept. Report TR-419, March 1993.
[162] David L. Donoho, Iain M. Johnstone, Gérard Kerkyacharian, and Dominique Picard. Discussion of "wavelet shrinkage: Asymptopia?". Journal Royal Statist. Soc. Ser B., 57(2):337–369, 1995. Discussion of paper by panel and response by authors.
[163] David L. Donoho, Iain M. Johnstone, Gérard Kerkyacharian, and Dominique Picard. Wavelet shrinkage: Asymptopia? Journal Royal Statistical Society B., 57(2):301–337, 1995. Also Stanford Statistics Dept. Report TR-419, March 1993.
[164] David L. Donoho, Iain M. Johnstone, Gérard Kerkyacharian, and Dominique Picard. Wavelet shrinkage: Asymptopia? Journal Royal Statistical Society B., 57(2):301–337, 1995. Also Stanford Statistics Dept. Report TR-419, March 1993.
[165] David L. Donoho, Iain M. Johnstone, Gérard Kerkyacharian, and Dominique Picard. Wavelet shrinkage: Asymptopia? Journal Royal Statistical Society B., 57(2):301–337, 1995. Also Stanford Statistics Dept. Report TR-419, March 1993.
[166] T. R. Downie and B. W. Silverman. The discrete multiple wavelet transform and thresholding methods.
Technical report, University of Bristol, November 1996. Submitted to IEEE Tran. Signal Processing.
[167] R. J. Duffin and R. C. Schaeffer. A class of nonharmonic fourier series. Transactions of the American Mathematical Society, 72:341–366, 1952.
[168] P. Duhamel, P. Flandrin, T. Nishitani, A. H. Tewfik, and M. Vetterli. Special issue on wavelets and signal processing. IEEE Transactions on Signal Processing, 41(12):3213–3600, December 1993.
[169] P. Dutilleux. An implementation of the "algorithme à trous" to compute the wavelet transform. In Wavelets, Time-Frequency Methods and Phase Space, page 2–20, Berlin, 1989. Springer-Verlag. Proceedings of International Colloquium on Wavelets and Applications, Marseille, France, Dec. 1987.
[170] R. E. Van Dyck and T. G. Marshall, Jr. Ladder realizations of fast subband/vq coders with diamond structures. In Proceedings of IEEE International Symposium on Circuits and Systems, page III:177–180, ISCAS, 1993.
[171] R. E. Van Dyck and T. G. Marshall, Jr. Ladder realizations of fast subband/vq coders with diamond structures. In Proceedings of IEEE International Symposium on Circuits and Systems, page III:177–180, ISCAS, 1993.
[172] T. Eirola. Sobolev characterization of solutions of dilation equations. SIAM Journal of Mathematical Analysis, 23(4):1015–1030, July 1992.
[173] Yonina C. Eldar. Sampling Theory, Beyond Bandlimited Systems. Cambridge University Press, 2015.
[174] F. J. Fliege. Multirate Digital Signal Processing: Multirate Systems, Filter Banks, and Wavelets. Wiley & Sons, New York, 1994.
[175] F. J. Fliege. Multirate Digital Signal Processing: Multirate Systems, Filter Banks, and Wavelets. Wiley & Sons, New York, 1994.
[176] E. Foufoula-Georgiou and Praveen Kumar, editors. Wavelets in Geophysics. Academic Press, San Diego, 1994. Volume 4 in the series: Wavelet Analysis and its Applications.
[177] D. Gabor. Theory of communication. Journal of the Institute for Electrical Engineers, 93:429–439, 1946.
[178] J. S. Geronimo, D. P. Hardin, and P. R. Massopust. Fractal functions and wavelet expansions based on several scaling functions. Journal of Approximation Theory, 78:373–401, 1994.
[179] S. Ginette, A. Grossmann, and Ph. Tchamitchian. Use of wavelet transforms in the study of propagation of transient acoustic signals across a plane interface between two homogeneous media. In Wavelets: Time-Frequency Methods and Phase Space, page 139–146. Springer-Verlag, Berlin, 1989. Proceedings of the International Conference, Marseille, Dec. 1987.
[180] R. Glowinski, W. Lawton, M. Ravachol, and E. Tenenbaum. Wavelet solution of linear and nonlinear
elliptic, parabolic and hyperbolic problems in one dimension. In Proceedings of the Ninth SIAM
International Conference on Computing Methods in Applied Sciences and Engineering, Philadelphia,
1990.
[181] R. Glowinski, W. Lawton, M. Ravachol, and E. Tenenbaum. Wavelet solution of linear and nonlinear
elliptic, parabolic and hyperbolic problems in one dimension. In Proceedings of the Ninth SIAM
International Conference on Computing Methods in Applied Sciences and Engineering, Philadelphia,
1990.
[182] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, 1993.
[183] T. N. T. Goodman and S. L. Lee. Wavelets of multiplicity r. Tran. American Math. Society, 342(1):307–324, March 1994.
[184] T. N. T. Goodman, S. L. Lee, and W. S. Tang. Wavelets in wandering subspaces. Tran. American Math. Society, 338(2):639–654, August 1993.
[185] R. A. Gopinath. Modulated filter banks and local trigonometric bases - some connections. In preparation, Oct. 1996.
[186] R. A. Gopinath. Modulated filter banks and wavelets, a general unified theory. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 3, page 1585–1588, IEEE ICASSP-96, Atlanta, May 7–10 1996.
[187] R. A. Gopinath and C. S. Burrus. Efficient computation of the wavelet transforms. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, page 1599–1602, Albuquerque, NM, April 1990.
[188] R. A. Gopinath and C. S. Burrus. Cosine-modulated orthonormal wavelet bases. In Paper Summaries for the IEEE Signal Processing Society's Fifth DSP Workshop, page 1.10.1, Starved Rock Lodge, Utica, IL, September 13–16, 1992.
[189] R. A. Gopinath and C. S. Burrus. On the moments of the scaling function. In Proceedings of the IEEE International Symposium on Circuits and Systems, volume 2, page 963–966, ISCAS-92, San Diego, CA, May 1992.
[190] R. A. Gopinath and C. S. Burrus. On the moments of the scaling function. In Proceedings of the IEEE International Symposium on Circuits and Systems, volume 2, page 963–966, ISCAS-92, San Diego, CA, May 1992.
[191] R. A. Gopinath and C. S. Burrus. Wavelet transforms and filter banks. In Wavelets: A Tutorial in Theory and Applications, page 603–655. Academic Press, San Diego, CA, 1992. Volume 2 in the series: Wavelet Analysis and its Applications.
[192] R. A. Gopinath and C. S. Burrus. Wavelet transforms and filter banks. In Wavelets: A Tutorial in Theory and Applications, page 603–655. Academic Press, San Diego, CA, 1992. Volume 2 in the series: Wavelet Analysis and its Applications.
[193] R. A. Gopinath and C. S. Burrus. Wavelet transforms and filter banks. In Wavelets: A Tutorial in Theory and Applications, page 603–655. Academic Press, San Diego, CA, 1992. Volume 2 in the series: Wavelet Analysis and its Applications.
[194] R. A. Gopinath and C. S. Burrus. Wavelet transforms and filter banks. In Wavelets: A Tutorial in Theory and Applications, page 603–655. Academic Press, San Diego, CA, 1992. Volume 2 in the series: Wavelet Analysis and its Applications.
[195] R. A. Gopinath and C. S. Burrus. Theory of modulated filter banks and modulated wavelet tight frames. In Proceedings of the IEEE International Conference on Signal Processing, volume III, pages III:169–172, IEEE ICASSP-93, Minneapolis, April 1993.
[196] R. A. Gopinath and C. S. Burrus. Theory of modulated filter banks and modulated wavelet tight frames. In Proceedings of the IEEE International Conference on Signal Processing, volume III, pages III:169–172, IEEE ICASSP-93, Minneapolis, April 1993.
[197] R. A. Gopinath and C. S. Burrus. On upsampling, downsampling and rational sampling rate filter banks. IEEE Transactions on Signal Processing, April 1994. Also Tech. Report No. CML TR91-25, 1991.
[198] R. A. Gopinath and C. S. Burrus. Unitary FIR filter banks and symmetry. IEEE Transactions on Circuits and Systems II, 41:695–700, October 1994. Also Tech. Report No. CML TR92-17, August 1992.
[199] R. A. Gopinath and C. S. Burrus. Factorization approach to unitary time-varying filter banks. IEEE Transactions on Signal Processing, 43(3):666–680, March 1995. Also a Tech Report No. CML TR-92-23, Nov. 1992.
[200] R. A. Gopinath and C. S. Burrus. Factorization approach to unitary time-varying filter banks. IEEE Transactions on Signal Processing, 43(3):666–680, March 1995. Also a Tech Report No. CML TR-92-23, Nov. 1992.
[201] R. A. Gopinath and C. S. Burrus. Theory of modulated filter banks and modulated wavelet tight frames. Applied and Computational Harmonic Analysis: Wavelets and Signal Processing, 2:303–326, October 1995. Also a Tech. Report No. CML TR-92-10, 1992.
[202] R. A. Gopinath and C. S. Burrus. Theory of modulated filter banks and modulated wavelet tight frames. Applied and Computational Harmonic Analysis: Wavelets and Signal Processing, 2:303–326, October 1995. Also a Tech. Report No. CML TR-92-10, 1992.
[203] R. A. Gopinath and C. S. Burrus. Theory of modulated filter banks and modulated wavelet tight frames. Applied and Computational Harmonic Analysis: Wavelets and Signal Processing, 2:303–326, October 1995. Also a Tech. Report No. CML TR-92-10, 1992.
[204] R. A. Gopinath, J. E. Odegard, and C. S. Burrus. On the correlation structure of multiplicity M scaling functions and wavelets. In Proceedings of the IEEE International Symposium on Circuits and Systems, volume 2, page 959–962, ISCAS-92, San Diego, CA, May 1992.
[205] R. A. Gopinath, J. E. Odegard, and C. S. Burrus. On the correlation structure of multiplicity M scaling functions and wavelets. In Proceedings of the IEEE International Symposium on Circuits and Systems, volume 2, page 959–962, ISCAS-92, San Diego, CA, May 1992.
[206] R. A. Gopinath, J. E. Odegard, and C. S. Burrus. Optimal wavelet representation of signals and the wavelet sampling theorem. IEEE Transactions on Circuits and Systems II, 41(4):262–277, April 1994. Also a Tech. Report No. CML TR-92-05, April 1992, revised Aug. 1993.
[207] R. A. Gopinath, J. E. Odegard, and C. S. Burrus. Optimal wavelet representation of signals and the wavelet sampling theorem. IEEE Transactions on Circuits and Systems II, 41(4):262–277, April 1994. Also a Tech. Report No. CML TR-92-05, April 1992, revised Aug. 1993.
[208] R. A. Gopinath, J. E. Odegard, and C. S. Burrus. Optimal wavelet representation of signals and the wavelet sampling theorem. IEEE Transactions on Circuits and Systems II, 41(4):262–277, April 1994. Also a Tech. Report No. CML TR-92-05, April 1992, revised Aug. 1993.
[209] Ramesh A. Gopinath. The wavelet transforms and time-scale analysis of signals. Masters thesis, Rice
University, Houston, Tx 77251, 1990.
[210] Ramesh A. Gopinath. The wavelet transforms and time-scale analysis of signals. Masters thesis, Rice
University, Houston, Tx 77251, 1990.
[211] Ramesh A. Gopinath. The wavelet transforms and time-scale analysis of signals. Masters thesis, Rice
University, Houston, Tx 77251, 1990.
[212] Ramesh A. Gopinath. Wavelets and Filter Banks – New Results and Applications. Ph. d. thesis, Rice University, Houston, Tx, August 1992.
[213] Ramesh A. Gopinath. Wavelets and Filter Banks – New Results and Applications. Ph. d. thesis, Rice University, Houston, Tx, August 1992.
[214] Ramesh A. Gopinath and C. Sidney Burrus. On cosine-modulated wavelet orthonormal bases. IEEE Transactions on Image Processing, 4(2):162–176, February 1995. Also a Tech. Report No. CML TR-91-27, March 1992.
[215] Ramesh A. Gopinath and C. Sidney Burrus. On cosine-modulated wavelet orthonormal bases. IEEE Transactions on Image Processing, 4(2):162–176, February 1995. Also a Tech. Report No. CML TR-91-27, March 1992.
[216] Ramesh A. Gopinath and C. Sidney Burrus. On cosine-modulated wavelet orthonormal bases. IEEE Transactions on Image Processing, 4(2):162–176, February 1995. Also a Tech. Report No. CML TR-91-27, March 1992.
[217] Ramesh A. Gopinath and C. Sidney Burrus. On cosine-modulated wavelet orthonormal bases. IEEE Transactions on Image Processing, 4(2):162–176, February 1995. Also a Tech. Report No. CML TR-91-27, March 1992.
[218] P. Goupillaud, A. Grossman, and J. Morlet. Cyclo-octave and related transforms in seismic signal analysis. SIAM J. Math. Anal., 15:723–736, 1984.
[219] Gustaf Gripenberg. Unconditional bases of wavelets for sobolev spaces. SIAM Journal of Mathematical Analysis, 24(4):1030–1042, July 1993.
[220] A. Grossman, R. Kronland-Martinet, and J. Morlet. Reading and understanding continuous wavelet transforms. In Wavelets, Time-Frequency Methods and Phase Space, page 2–20, Berlin, 1989. Springer-Verlag. Proceedings of International Colloquium on Wavelets and Applications, Marseille, France, Dec. 1987.
[222] P. Goupillaud, A. Grossman, and J. Morlet. Cyclo-octave and related transforms in seismic signal analysis. Geoexploration, (23), 1984.
[223] H. Guo, M. Lang, J. E. Odegard, and C. S. Burrus. Nonlinear processing of a shift-invariant dwt for noise reduction and compression. In Proceedings of the International Conference on Digital Signal Processing, page 332–337, Limassol, Cyprus, June 26–28 1995.
[224] H. Guo, M. Lang, J. E. Odegard, and C. S. Burrus. Nonlinear processing of a shift-invariant dwt for noise reduction and compression. In Proceedings of the International Conference on Digital Signal Processing, page 332–337, Limassol, Cyprus, June 26–28 1995.
[225] H. Guo, J. E. Odegard, M. Lang, R. A. Gopinath, I. Selesnick, and C. S. Burrus. Speckle reduction
via wavelet soft-thresholding with application to sar based atd/r. In Proceedings of SPIE Conference
2260, volume 2260, San Diego, July 1994.
[226] H. Guo, J. E. Odegard, M. Lang, R. A. Gopinath, I. Selesnick, and C. S. Burrus. Speckle reduction
via wavelet soft-thresholding with application to sar based atd/r. In Proceedings of SPIE Conference
2260, volume 2260, San Diego, July 1994.
[227] H. Guo, J. E. Odegard, M. Lang, R. A. Gopinath, I. W. Selesnick, and C. S. Burrus. Wavelet based speckle reduction with application to sar based atd/r. In Proceedings of the IEEE International Conference on Image Processing, volume I, page I:75–79, IEEE ICIP-94, Austin, Texas, November 13-16 1994.
[230] Haitao Guo. Redundant wavelet transform and denoising. Technical report CML-9417, ECE Dept and
Computational Mathematics Lab, Rice University, Houston, Tx, December 1994.
[231] Haitao Guo. Theory and applications of the shift-invariant, time-varying and undecimated wavelet
transform. Masters thesis, ECE Department, Rice University, April 1995.
[232] Haitao Guo. Theory and applications of the shift-invariant, time-varying and undecimated wavelet
transform. Masters thesis, ECE Department, Rice University, April 1995.
[233] Haitao Guo. Wavelets for Approximate Fourier Transform and Data Compression. Ph. d. thesis, ECE
Department, Rice University, Houston, Tx, May 1997.
[234] Haitao Guo. Wavelets for Approximate Fourier Transform and Data Compression. Ph. d. thesis, ECE
Department, Rice University, Houston, Tx, May 1997.
[235] Haitao Guo and C. Sidney Burrus. Approximate fft via the discrete wavelet transform. In Proceedings of SPIE Conference 2825, Denver, August 6–9 1996.
[236] Haitao Guo and C. Sidney Burrus. Convolution using the discrete wavelet transform. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 3, pages III:1291–1294, IEEE ICASSP-96, Atlanta, May 7–10 1996.
[237] Haitao Guo and C. Sidney Burrus. Phase-preserving compression of seismic images using the self-adjusting wavelet transform. In NASA Combined Industry, Space and Earth Science Data Compression Workshop (in conjunction with the IEEE Data Compression Conference, DCC-96), JPL Pub. 96-11, page 101–109, Snowbird, Utah, April 4 1996.
[238] Haitao Guo and C. Sidney Burrus. Waveform and image compression with the burrows wheeler transform and the wavelet transform. In Proceedings of the IEEE International Conference on Image Processing, volume I, page I:65–68, IEEE ICIP-97, Santa Barbara, October 26-29 1997.
[239] Haitao Guo and C. Sidney Burrus. Wavelet transform based fast approximate fourier transform. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 3, page III:1973–1976, IEEE ICASSP-97, Munich, April 21–24 1997.
[240] Haitao Guo and C. Sidney Burrus. Fast approximate fourier transform via wavelet transforms. IEEE
Transactions, to be submitted 2000.
[241] J. Götze, J. E. Odegard, P. Rieder, and C. S. Burrus. Approximate moments and regularity of efficiently implemented orthogonal wavelet transforms. In Proceedings of the IEEE International Symposium on Circuits and Systems, volume 2, pages II:405–408, IEEE ISCAS-96, Atlanta, May 12-14 1996.
[242] Alfred Haar. Zur theorie der orthogonalen funktionensysteme. Mathematische Annalen, 69:331–371, 1910. Also in PhD thesis.
[243] Alfred Haar. Zur theorie der orthogonalen funktionensysteme. Mathematische Annalen, 69:331–371, 1910. Also in PhD thesis.
[244] D. P. Hardin and D. W. Roach. Multiwavelet prefilters i: Orthogonal prefilters preserving approximation order. Technical report, Vanderbilt University, 1996.
[245] Henk J. A. M. Heijmans. Discrete wavelets and multiresolution analysis. In Wavelets: An Elementary Treatment of Theory and Applications, page 49–80. World Scientific, Singapore, 1993.
[246] C. Heil, G. Strang, and V. Strela. Approximation by translates of refinable functions. Numerische Mathematik, 73(1):75–94, March 1996.
[247] C. E. Heil and D. F. Walnut. Continuous and discrete wavelet transforms. SIAM Review, 31(4):628–666, December 1989.
[248] C. E. Heil and D. F. Walnut. Continuous and discrete wavelet transforms. SIAM Review, 31(4):628–666, December 1989.
[249] C. E. Heil and D. F. Walnut. Continuous and discrete wavelet transforms. SIAM Review, 31(4):628–666, December 1989.
[250] Christopher Heil and David F. Walnut, editors. Fundamental Papers in Wavelet Theory. Princeton
University Press, 2006.
[251] Christopher Heil and David F. Walnut, editors. Fundamental Papers in Wavelet Theory. Princeton
University Press, 2006. Excellent collection of basic papers.
[252] P. N. Heller, V. Strela, G. Strang, P. Topiwala, C. Heil, and L. S. Hills. Multiwavelet filter banks for data compression. In IEEE Proceedings of the International Symposium on Circuits and Systems, volume 3, page 1796–1799, 1995.
[253] Peter N. Heller. Rank wavelet matrices with vanishing moments. SIAM Journal on Matrix Analysis, 16:502–518, 1995. Also as technical report AD940123, Aware, Inc., 1994.
[254] Peter N. Heller. Rank wavelet matrices with vanishing moments. SIAM Journal on Matrix Analysis, 16:502–518, 1995. Also as technical report AD940123, Aware, Inc., 1994.
[255] Peter N. Heller, Howard L. Resnikoff, and Raymond O. Wells, Jr. Wavelet matrices and the representation of discrete functions. In Wavelets: A Tutorial in Theory and Applications, page 15–50. Academic Press, Boca Raton, 1992. Volume 2 in the series: Wavelet Analysis and its Applications.
[256] Peter N. Heller and R. O. Wells. The spectral theory of multiresolution operators and applications. In Wavelets: Theory, Algorithms, and Applications, volume 5, page 13–31. Academic Press, San Diego, 1994. Also as technical report AD930120, Aware, Inc., 1993; Volume 5 in the series: Wavelet Analysis and its Applications.
[257] Peter N. Heller and R. O. Wells. Sobolev regularity for rank wavelets. SIAM Journal on Mathematical
Analysis, submitted, Oct. 1996. Also a CML Technical Report TR9608, Rice University, 1994.
[258] Cormac Herley, Jelena Kovačević, Kannan Ramchandran, and Martin Vetterli. Time-varying orthonormal tilings of the time-frequency plane. In Proceedings of the IEEE Signal Processing Society's International Symposium on Time-Frequency and Time-Scale Analysis, page 11–14, Victoria, BC, Canada, October 4–6, 1992.
[259] Cormac Herley, Jelena Kovačević, Kannan Ramchandran, and Martin Vetterli. Tilings of the time-frequency plane: Construction of arbitrary orthogonal bases and fast tiling algorithms. IEEE Transactions on Signal Processing, 41(12):3341–3359, December 1993. Special issue on wavelets.
[260] Cormac Herley, Jelena Kovačević, Kannan Ramchandran, and Martin Vetterli. Tilings of the time-frequency plane: Construction of arbitrary orthogonal bases and fast tiling algorithms. IEEE Transactions on Signal Processing, 41(12):3341–3359, December 1993. Special issue on wavelets.
[261] Eugenio Hernández and Guido Weiss. A First Course on Wavelets. CRC Press, Boca Raton, 1996.
[262] O. Herrmann. On the approximation problem in nonrecursive digital filter design. IEEE Transactions on Circuit Theory, 18:411–413, May 1971. Reprinted in DSP reprints, IEEE Press, 1972, page 202.
[263] Frédéric Heurtaux, Fabrice Planchon, and Mladen V. Wickerhauser. Scale decomposition in burgers' equation. In Wavelets: Mathematics and Applications, page 505–524. CRC Press, Boca Raton, 1994.
[264] F. Hlawatsch and G. F. Boudreaux-Bartels. Linear and quadratic time-frequency signal representations. IEEE Signal Processing Magazine, 9(2):21–67, April 1992.
[265] F. Hlawatsch and G. F. Boudreaux-Bartels. Linear and quadratic time-frequency signal representations. IEEE Signal Processing Magazine, 9(2):21–67, April 1992.
[266] F. Hlawatsch and G. F. Boudreaux-Bartels. Linear and quadratic time-frequency signal representations. IEEE Signal Processing Magazine, 9(2):21–67, April 1992.
[267] A. N. Hossen, U. Heute, O. V. Shentov, and S. K. Mitra. Subband dft – part ii: Accuracy, complexity, and applications. Signal Processing, 41:279–295, 1995.
[268] Barbara Burke Hubbard. The World According to Wavelets. AKPeters, Wellesley, MA, 1996. Second
Edition 1998.
[269] Barbara Burke Hubbard. The World According to Wavelets. AKPeters, Wellesley, MA, 1996. Second
Edition 1998.
[270] Plamen Ch. Ivanov, Michael G Rosenblum, C.-K. Peng, Joseph Mietus, Shlomo Havlin, H. Eugene Stanley, and Ary L. Goldberger. Scaling behaviour of heartbeat intervals obtained by wavelet-based time-series analysis. Nature, 383:323–327, September 26 1996.
[271] J. M. Combes, A. Grossmann, and P. Tchamitchian, editors. Wavelets, Time-Frequency Methods and Phase Space. Springer-Verlag, Berlin, 1989. Proceedings of the International Conference, Marseille, France, December 1987.
[272] Maarten Jansen and Patrick Oonincx. Second Generation Wavelets and Applications. Springer-Verlag,
London, 2010.
[273] Maarten Jansen and Patrick Oonincx. Second Generation Wavelets and Applications. Springer-Verlag,
London, 2010.
[274] Maarten H. Jansen and Patrick J. Oonincx. Second Generation Wavelets and Applications. Springer,
2005.
[275] Björn Jawerth and Wim Sweldens. An overview of wavelet based multiresolution analyses. SIAM Review, 36:377–412, 1994. Also a University of South Carolina Math Dept. Technical Report, Feb. 1993.
[276] N. S. Jayant and P. Noll. Digital Coding of Waveforms. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1st edition, 1984.
[277] A. Jensen and A. la Cour-Harbo. Ripples in Mathematics: The Discrete Wavelet Transform. Springer-
Verlag, New York, NY, 2001.
[278] A. Jensen and A. la Cour-Harbo. Ripples in Mathematics: The Discrete Wavelet Transform. Springer-
Verlag, New York, NY, 2001.
[279] Arne Jensen and Anders la Cour-Harbo. Ripples in Mathematics: The Discrete Wavelet Transform. Springer-Verlag, Heidelberg, 2001.
[280] R. Q. Jia. Subdivision schemes in Lp spaces. Advances in Computational Mathematics, 3:309–341, 1995.
[281] R. Q. Jia, S. D. Riemenschneider, and D. X. Zhou. Approximation by multiple refinable functions. Technical report, University of Alberta, 1996. To appear in: Canadian Journal of Mathematics.
[282] R. Q. Jia, S. D. Riemenschneider, and D. X. Zhou. Vector subdivision schemes and multiple wavelets. Technical report, University of Alberta, 1996.
[283] R. Q. Jia, S. D. Riemenschneider, and D. X. Zhou. Smoothness of multiple refinable functions and multiple wavelets. Technical report, University of Alberta, 1997.
[284] B. R. Johnson, J. P. Modisette, P. A. Nordlander, and J. L. Kinsey. Quadrature integration for
compact support wavelets. Journal of Computational Physics, submitted 1996. Also Rice University
Tech. Report.
[285] H. W. Johnson and C. S. Burrus. The design of optimal dft algorithms using dynamic programming. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, page 20–23, Paris, May 1982.
[286] I. M. Johnstone and B. W. Silverman. Wavelet threshold estimators for data with correlated noise.
Technical report, Statistics Dept., University of Bristol, September 1994.
[287] R. L. Joshi, V. J. Crump, and T. R. Fischer. Image subband coding using arithmetic coded trellis coded quantization. IEEE Transactions on Circuits and Systems, page 515–523, December 1995.
[305] M. Lang, H. Guo, J. E. Odegard, C. S. Burrus, and R. O. Wells, Jr. Nonlinear processing of a shift-invariant dwt for noise reduction. In Proceedings of SPIE Conference 2491, Wavelet Applications II, volume 2491–60, page 640–651, Orlando, April 17–21 1995.
[306] M. Lang, H. Guo, J. E. Odegard, C. S. Burrus, and R. O. Wells, Jr. Nonlinear processing of a shift-invariant dwt for noise reduction. In Proceedings of SPIE Conference 2491, Wavelet Applications II, volume 2491–60, page 640–651, Orlando, April 17–21 1995.
[307] M. Lang, H. Guo, J. E. Odegard, C. S. Burrus, and R. O. Wells, Jr. Nonlinear processing of a shift-invariant dwt for noise reduction. In Proceedings of SPIE Conference 2491, Wavelet Applications II, volume 2491–60, page 640–651, Orlando, April 17–21 1995.
[308] M. Lang, H. Guo, J. E. Odegard, C. S. Burrus, and R. O. Wells, Jr. Noise reduction using an undecimated discrete wavelet transform. IEEE Signal Processing Letters, 3(1):10–12, January 1996.
[309] M. Lang, H. Guo, J. E. Odegard, C. S. Burrus, and R. O. Wells, Jr. Noise reduction using an undecimated discrete wavelet transform. IEEE Signal Processing Letters, 3(1):10–12, January 1996.
[310] M. Lang, H. Guo, J. E. Odegard, C. S. Burrus, and R. O. Wells, Jr. Noise reduction using an undecimated discrete wavelet transform. IEEE Signal Processing Letters, 3(1):10–12, January 1996.
[311] M. Lang, I. Selesnick, J. E. Odegard, and C. S. Burrus. Constrained FIR filter design for 2-band filter banks and orthonormal wavelets. In Proceedings of the IEEE Digital Signal Processing Workshop, page 211–214, Yosemite, October 1994.
[312] Markus Lang and Peter N. Heller. The design of maximally smooth wavelets. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 3, page 1463–1466, IEEE ICASSP-96, Atlanta, May 1996.
[313] J. L. Larsonneur and J. Morlet. Wavelet and seismic interpretation. In Wavelets: Time-Frequency Methods and Phase Space, page 126–131. Springer-Verlag, Berlin, 1989. Proceedings of the International Conference, Marseille, Dec. 1987.
[314] W. Lawton. Private communication.
[315] Wayne M. Lawton. Tight frames of compactly supported affine wavelets. Journal of Mathematical Physics, 31(8):1898–1901, August 1990. Also Aware, Inc. Tech Report AD891012.
[316] Wayne M. Lawton. Tight frames of compactly supported affine wavelets. Journal of Mathematical Physics, 31(8):1898–1901, August 1990. Also Aware, Inc. Tech Report AD891012.
[317] Wayne M. Lawton. Tight frames of compactly supported affine wavelets. Journal of Mathematical Physics, 31(8):1898–1901, August 1990. Also Aware, Inc. Tech Report AD891012.
[318] Wayne M. Lawton. Multiresolution properties of the wavelet galerkin operator. Journal of Mathematical Physics, 32(6):1440–1443, June 1991.
[319] Wayne M. Lawton. Necessary and sufficient conditions for constructing orthonormal wavelet bases. Journal of Mathematical Physics, 32(1):57–61, January 1991. Also Aware, Inc. Tech. Report AD900402, April 1990.
[320] Wayne M. Lawton. Necessary and sufficient conditions for constructing orthonormal wavelet bases. Journal of Mathematical Physics, 32(1):57–61, January 1991. Also Aware, Inc. Tech. Report AD900402, April 1990.
[321] Wayne M. Lawton. Infinite convolution products & refinable distributions on lie groups. Transactions of the American Mathematical Society, submitted 1997.
[322] Wayne M. Lawton, S. L. Lee, and Z. Shen. Convergence of multidimensional cascade algorithm. Numerische Mathematik, to appear 1997.
[323] Wayne M. Lawton, S. L. Lee, and Z. Shen. Stability and orthonormality of multivariate refinable functions. SIAM Journal of Mathematical Analysis, to appear 1997.
[324] Wayne M. Lawton and Howard L. Resnikoff. Multidimensional wavelet bases. Aware Report AD910130, Aware, Inc., February 1991.
[327] J. Liandrat, V. Perrier, and Ph. Tchamitchian. Numerical resolution of nonlinear partial differential equations using the wavelet approach. In Wavelets and Their Applications, page 181–210. Jones and Bartlett, Boston, 1992. Outgrowth of the NSF/CBMS Conference on Wavelets, Lowell, June 1990.
[328] Jie Liang and Thomas W. Parks. A two-dimensional translation invariant wavelet representation and its applications. In Proceedings of the IEEE International Conference on Image Processing, volume 1, page I:66–70, Austin, November 1994.
[329] Jie Liang and Thomas W. Parks. A two-dimensional translation invariant wavelet representation and its applications. In Proceedings of the IEEE International Conference on Image Processing, volume 1, page I:66–70, Austin, November 1994.
[330] Jie Liang and Thomas W. Parks. A translation invariant wavelet representation algorithm with applications. IEEE Transactions on Signal Processing, 44(2):225–232, 1996.
[331] Yuan-Pei Lin and P. P. Vaidyanathan. Linear phase cosine-modulated filter banks. IEEE Transactions on Signal Processing, 43, 1995.
[332] A. R. Lindsey. Generalized Orthogonally Multiplexed Communication via Wavelet Packet Bases. Ph. d. thesis, June 1995.
[333] S. M. LoPresto, K. Ramchandran, and M. T. Orchard. Image coding based on mixture modeling of wavelet coefficients and a fast estimation-quantization framework. Proc. DCC, March 1997.
[334] Louis Auslander, Tom Kailath, and Sanjoy K. Mitter, editors. Signal Processing, Part I: Signal Processing Theory. Springer-Verlag, New York, 1990. IMA Volume 22, lectures from IMA program, July 1988.
[335] M. Farge, J. C. R. Hunt, and J. C. Vassilicos, editors. Wavelets, Fractals, and Fourier Transforms. Clarendon Press, Oxford, 1993. Proceedings of a conference on Wavelets at Newnham College, Cambridge, Dec. 1990.
[336] S. G. Mallat. Multifrequency channel decomposition of images and wavelet models. IEEE Transactions on Acoustics, Speech and Signal Processing, 37:2091–2110, December 1989.
[337] S. G. Mallat. Multifrequency channel decomposition of images and wavelet models. IEEE Transactions on Acoustics, Speech and Signal Processing, 37:2091–2110, December 1989.
[338] S. G. Mallat. Multiresolution approximation and wavelet orthonormal bases of L²(R). Transactions of the American Mathematical Society, 315:69–87, 1989.
[339] S. G. Mallat. Multiresolution approximation and wavelet orthonormal bases of L²(R). Transactions of the American Mathematical Society, 315:69–87, 1989.
[340] S. G. Mallat. Multiresolution approximation and wavelet orthonormal bases of L²(R). Transactions of the American Mathematical Society, 315:69–87, 1989.
[341] S. G. Mallat. Multiresolution approximation and wavelet orthonormal bases of L²(R). Transactions of the American Mathematical Society, 315:69–87, 1989.
[342] S. G. Mallat. Multiresolution approximation and wavelet orthonormal bases of L²(R). Transactions of the American Mathematical Society, 315:69–87, 1989.
[343] S. G. Mallat. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Recognition and Machine Intelligence, 11(7):674–693, July 1989.
[344] S. G. Mallat. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Recognition and Machine Intelligence, 11(7):674–693, July 1989.
[345] S. G. Mallat. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Recognition and Machine Intelligence, 11(7):674–693, July 1989.
[346] S. G. Mallat. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Recognition and Machine Intelligence, 11(7):674–693, July 1989.
[347] S. G. Mallat. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Recognition and Machine Intelligence, 11(7):674–693, July 1989.
[348] S. G. Mallat. Zero-crossings of a wavelet transform. IEEE Transactions on Information Theory, 37(4):1019–1033, July 1991.
[350] S. G. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397–3415, December 1993.
[351] S. G. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397–3415, December 1993.
[352] Stéphane Mallat. A Wavelet Tour of Signal Processing. Academic Press, 1998.
[353] Stéphane Mallat. A Wavelet Tour of Signal Processing, Third Revised Edition: The Sparse Way. Academic Press, 3rd edition, 2009; first edition 1998.
[355] Stéphane Mallat and Frédéric Falzon. Understanding image transform codes. In Proceedings of SPIE Conference, Aerosense, Orlando, April 1997.
[356] Henrique S. Malvar. Signal Processing with Lapped Transforms. Artech House, Boston, MA, 1992.
[357] Henrique S. Malvar. Signal Processing with Lapped Transforms. Artech House, Boston, MA, 1992.
[358] Henrique S. Malvar. Signal Processing with Lapped Transforms. Artech House, Boston, MA, 1992.
[359] Stephen Del Marco and John Weiss. M-band wavepacket-based transient signal detector using a translation-invariant wavelet transform. Optical Engineering, 33(7):2175–2182, July 1994.
[360] Stephen Del Marco and John Weiss. Improved transient signal detection using a wavepacket-based detector with an extended translation-invariant wavelet transform. IEEE Transactions on Signal Processing, 43, submitted 1994.
[361] Stephen Del Marco, John Weiss, and Karl Jagler. Wavepacket-based transient signal detector using a translation invariant wavelet transform. In Proceedings of Conference on Wavelet Applications, volume 2242, page 792–802, Orlando, FL, April 1994. SPIE.
[362] R. J. Marks II. Introduction to Shannon Sampling and Interpolation Theory. Springer-Verlag, New
York, 1991.
[363] R. J. Marks II. Introduction to Shannon Sampling and Interpolation Theory. Springer-Verlag, New
York, 1991.
[364] T. G. Marshall, Jr. Predictive and ladder realizations of subband coders. In Proceedings of IEEE
Workshop on Visual Signal Processing and Communication, Raleigh, NC, 1992.
[365] T. G. Marshall, Jr. Predictive and ladder realizations of subband coders. In Proceedings of IEEE
Workshop on Visual Signal Processing and Communication, Raleigh, NC, 1992.
[366] T. G. Marshall, Jr. A fast wavelet transform based on the euclidean algorithm. In Proceedings of Conference on Information Sciences and Systems, Johns Hopkins University, 1993.
[367] T. G. Marshall, Jr. A fast wavelet transform based on the euclidean algorithm. In Proceedings of Conference on Information Sciences and Systems, Johns Hopkins University, 1993.
[368] Peter R. Massopust. Fractal Functions, Fractal Surfaces, and Wavelets. Academic Press, San Diego,
1994.
[369] Peter R. Massopust. Fractal Functions, Fractal Surfaces, and Wavelets. Academic Press, San Diego,
1994.
[370] J. Mau. Perfect reconstruction modulated filter banks. In Proc. Int. Conf. Acoust., Speech, Signal Processing, volume 4, pages IV:273, San Francisco, CA, 1992. IEEE.
[371] Y. Meyer. L'analyses par ondelettes. Pour la Science, September 1987.
[372] Y. Meyer. Orthonormal wavelets. In Wavelets, Time-Frequency Methods and Phase Space, page 21–37, Berlin, 1989. Springer-Verlag. Proceedings of International Colloquium on Wavelets and Applications, Marseille, France, Dec. 1987.
[373] Y. Meyer. Ondelettes et opérateurs. Hermann, Paris, 1990.
[374] Y. Meyer. Ondelettes et opérateurs. Hermann, Paris, 1990.
[375] Y. Meyer. Ondelettes et opérateurs. Hermann, Paris, 1990.
[376] Yves Meyer, editor. Wavelets and Applications. Springer-Verlag, Berlin, 1992. Proceedings of the
Marseille Workshop on Wavelets, France, May, 1989; Research Notes in Applied Mathematics, RMA-
20.
[377] Yves Meyer, editor. Wavelets and Applications. Springer-Verlag, Berlin, 1992. Proceedings of the
Marseille Workshop on Wavelets, France, May, 1989; Research Notes in Applied Mathematics, RMA-
20.
[378] Yves Meyer. Wavelets and Operators. Cambridge University Press, Cambridge, 1992. Translated by D. H. Salinger from the 1990 French edition.
[379] Yves Meyer. Wavelets and Operators. Cambridge University Press, Cambridge, 1992. Translated by D. H. Salinger from the 1990 French edition.
[380] Yves Meyer. Wavelets, Algorithms and Applications. SIAM, Philadelphia, 1993. Translated by R. D.
Ryan based on lectures given for the Spanish Institute in Madrid in Feb. 1991.
[381] Yves Meyer. Wavelets, Algorithms and Applications. SIAM, Philadelphia, 1993. Translated by R. D.
Ryan based on lectures given for the Spanish Institute in Madrid in Feb. 1991.
[382] Yves Meyer. Wavelets, Algorithms and Applications. SIAM, Philadelphia, 1993. Translated by R. D.
Ryan based on lectures given for the Spanish Institute in Madrid in Feb. 1991.
[383] Yves Meyer. Wavelets, Algorithms and Applications. SIAM, Philadelphia, 1993. Translated by R. D.
Ryan based on lectures given for the Spanish Institute in Madrid in Feb. 1991.
[384] C. A. Micchelli and H. Prautzsch. Uniform refinement of curves. Linear Algebra and its Applications, 114/115:841–870, 1989.
[385] Michel Misiti, Yves Misiti, Georges Oppenheim, and Jean-Michel Poggi. Wavelet Toolbox User's Guide.
The MathWorks, Inc., Natick, MA, 1996.
[386] Michel Misiti, Yves Misiti, Georges Oppenheim, and Jean-Michel Poggi. Wavelet Toolbox User's Guide.
The MathWorks, Inc., Natick, MA, 1996.
[387] Michel Misiti, Yves Misiti, Georges Oppenheim, and Jean-Michel Poggi. Wavelet Toolbox User's Guide.
The MathWorks, Inc., Natick, MA, 1996.
[388] Cleve Moler, John Little, and Steve Bangert. Matlab User's Guide. The MathWorks, Inc., South
Natick, MA, 1989.
[389] Pierre Moulin. A wavelet regularization method for diffuse radar-target imaging and speckle-noise reduction. Journal of Mathematical Imaging and Vision, 3:123–134, 1993.
[390] Mohammed Nafie, Murtaza Ali, and Ahmed Tewfik. Optimal subset selection for adaptive signal representation. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 5, page 2511–2514, IEEE ICASSP-96, Atlanta, May 1996.
[391] Amir-Homayoon Najmi. Wavelets, A Concise Guide. Johns Hopkins Press, 2012.
[392] G. P. Nason and B. W. Silverman. The stationary wavelet transform and some statistical applications.
Technical report, Department of Mathematics, University of Bristol, Bristol, UK, February 1995.
preprint obtained via the internet.
[393] D. L. Jones and R. G. Baraniuk. Efficient approximation of continuous wavelet transforms. Electronics Letters, 27(9):748–750, 1991.
[394] T. Q. Nguyen. A class of generalized cosine-modulated filter banks. In Proceedings of ISCAS, San Diego, CA, pages 943–946. IEEE, 1992.
[395] T. Q. Nguyen and R. D. Koilpillai. The design of arbitrary length cosine-modulated filter banks and wavelets satisfying perfect reconstruction. In Proceedings of IEEE-SP Symposium on Time-Frequency and Time-Scale Methods '92, Victoria, BC, pages 299–302. IEEE, 1992.
[396] T. Q. Nguyen and R. D. Koilpillai. The design of arbitrary length cosine-modulated filter banks and wavelets satisfying perfect reconstruction. In Proceedings of IEEE-SP Symposium on Time-Frequency and Time-Scale Methods '92, Victoria, BC, pages 299–302. IEEE, 1992.
[397] T. Q. Nguyen and P. P. Vaidyanathan. Maximally decimated perfect-reconstruction FIR filter banks with pairwise mirror-image analysis and synthesis frequency responses. IEEE Transactions on Acoustics, Speech, and Signal Processing, 36(5):693–706, 1988.
[398] Trong Q. Nguyen. Near perfect reconstruction pseudo qmf banks. IEEE Transactions on Signal Processing, 42(1):65–76, January 1994.
[399] Trong Q. Nguyen. Digital filter banks design quadratic constrained formulation. IEEE Transactions on Signal Processing, 43(9):2103–2108, September 1995.
[400] Truong Q. Nguyen. Aliasing-free reconstruction filter banks. In The Circuits and Filters Handbook, chapter 85, page 2682–2717. CRC Press and IEEE Press, Boca Raton, 1995.
[401] Truong Q. Nguyen and Peter N. Heller. Biorthogonal cosine-modulated filter bank. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 3, page 1471–1474, IEEE ICASSP-96, Atlanta, May 1996.
[402] J. E. Odegard. Moments, smoothness and optimization of wavelet systems. Ph. d. thesis, Rice Univer-
sity, Houston, TX 77251, USA, May 1996.
[403] J. E. Odegard. Moments, smoothness and optimization of wavelet systems. Ph. d. thesis, Rice Univer-
sity, Houston, TX 77251, USA, May 1996.
[404] J. E. Odegard and C. S. Burrus. Design of near-orthogonal filter banks and wavelets by lagrange multipliers, 1995.
[405] J. E. Odegard and C. S. Burrus. Wavelets with new moment approximation properties. IEEE Transactions on Signal Processing, to be submitted 1997.
[406] J. E. Odegard, R. A. Gopinath, and C. S. Burrus. Optimal wavelets for signal decomposition and the existence of scale limited signals. In Proceedings of the IEEE International Conference on Signal Processing, volume 4, page IV:597–600, ICASSP-92, San Francisco, CA, March 1992.
[407] J. E. Odegard, R. A. Gopinath, and C. S. Burrus. Design of linear phase cosine modulated filter banks for subband image compression. Technical report CML TR94-06, Computational Mathematics Laboratory, Rice University, Houston, TX, February 1994.
[408] J. E. Odegard, H. Guo, M. Lang, C. S. Burrus, R. O. Wells, Jr., L. M. Novak, and M. Hiett. Wavelet based sar speckle reduction and image compression. In Proceedings of SPIE Conference 2487, Algorithms for SAR Imagery II, volume 2487–24, Orlando, April 17–21 1995.
[409] Jan E. Odegard and C. Sidney Burrus. New class of wavelets for signal approximation. In Proceedings of the IEEE International Symposium on Circuits and Systems, volume 2, pages II:189–192, IEEE ISCAS-96, Atlanta, May 12-15 1996.
[410] Jan E. Odegard and C. Sidney Burrus. Toward a new measure of smoothness for the design of wavelet basis. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 3, pages III:1467–1470, IEEE ICASSP-96, Atlanta, May 7–10 1996.
[411] A. V. Oppenheim and R. W. Schafer. Discrete-Time Signal Processing. Prentice-Hall, Englewood Cliffs, NJ, 1989.
[412] A. V. Oppenheim and R. W. Schafer. Discrete-Time Signal Processing. Prentice-Hall, Englewood Cliffs, NJ, 1989.
[413] Athanasios Papoulis. Signal Analysis. McGraw-Hill, New York, 1977.
[414] T. W. Parks and C. S. Burrus. Digital Filter Design. John Wiley & Sons, New York, 1987.
[415] T. W. Parks and C. S. Burrus. Digital Filter Design. John Wiley & Sons, New York, 1987.
[416] J. C. Pesquet, H. Krim, and H. Carfantan. Time-invariant orthonormal wavelet representations. IEEE Transactions on Signal Processing, 44(8):1964–1970, August 1996.
[417] See-May Phoong and P. P. Vaidyanathan. A polyphase approach to time-varying filter banks. In Proc. Int. Conf. Acoust., Speech, Signal Processing, pages 1554–1557, Atlanta, GA, 1996. IEEE.
[418] G. Plonka. Approximation order provided by refinable function vectors. Technical report 95/1, Universität Rostock, 1995. To appear in: Constructive Approximation.
[419] G. Plonka. Approximation properties of multi-scaling functions: A fourier approach, 1995. Rostock. Math. Kolloq. 49, 115–126.
[420] G. Plonka. Factorization of refinement masks of function vectors. In Wavelets and Multilevel Approximation, page 317–324. World Scientific Publishing Co., Singapore, 1995.
[421] G. Plonka. Necessary and sufficient conditions for orthonormality of scaling vectors. Technical report, Universität Rostock, 1997.
[422] G. Plonka. On stability of scaling vectors. In Surface Fitting and Multiresolution Methods. Vanderbilt University Press, Nashville, 1997. Also Technical Report 1996/15, Universität Rostock.
[423] G. Plonka and V. Strela. Construction of multi-scaling functions with approximation and symmetry. Technical report 95/22, Universität Rostock, 1995. To appear in: SIAM J. Math. Anal.
[424] D. Pollen. Daubechies' scaling function on [0,3]. J. American Math. Soc., to appear. Also Aware, Inc. tech. report AD891020, 1989.
[425] L. Rabiner and D. Crochiere. Multirate Digital Signal Processing. Prentice-Hall, 1983.
[426] L. Rabiner and R. W. Schaefer. Speech Signal Processing. Prentice-Hall, Englewood Cliffs, NJ, 1983.
[427] K. Ramchandran and M. Vetterli. Best wavelet packet bases in a rate-distortion sense. IEEE Transactions on Image Processing, 2(2):160–175, 1993.
[428] K. Ramchandran and M. Vetterli. Best wavelet packet bases in a rate-distortion sense. IEEE Transactions on Image Processing, 2(2):160–175, 1993.
[429] T. A. Ramstad and J. P. Tanem. Cosine modulated analysis synthesis filter bank with critical sampling and perfect reconstruction. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, page 1789–1792, IEEE ICASSP-91, 1991.
[430] K. R. Rao and P. Yip. Discrete Cosine Transform - Algorithms, Advantages and Applications. Academic Press, 1990.
[431] H. L. Resnikoff and R. O. Wells, Jr. Wavelet Analysis: The Scalable Structure of Information. Springer-Verlag, New York, 1998.
[432] H. L. Resnikoff and R. O. Wells, Jr. Wavelet Analysis: The Scalable Structure of Information. Springer-Verlag, New York, 1998.
[433] H. L. Resnikoff and R. O. Wells, Jr. Wavelet Analysis: The Scalable Structure of Information. Springer-Verlag, New York, 1998.
[434] P. Rieder and J. A. Nossek. Smooth multiwavelets based on two scaling functions. In Proc. IEEE Conf. on Time-Frequency and Time-Scale Analysis, page 309–312, 1996.
[435] Peter Rieder and Jürgen Götze. Algebraic optimization of biorthogonal wavelet transforms. Preprint, 1995.
[436] Peter Rieder, Jürgen Götze, and Josef A. Nossek. Algebraic design of discrete wavelet transforms. Technical report TUM-LNS-TR-94-2, Technical University of Munich, April 1994. Also submitted to IEEE Trans on Circuits and Systems.
[437] O. Rioul. Fast computation of the continuous wavelet transform. In Proc. Int. Conf. Acoust., Speech,
Signal Processing, Toronto, Canada, March 1991. IEEE.
[438] Olivier Rioul. Simple regularity criteria for subdivision schemes. SIAM J. Math. Anal., 23(6):1544–1576, November 1992.
[439] Olivier Rioul. A discrete-time multiresolution theory. IEEE Transactions on Signal Processing, 41(8):2591–2606, August 1993.
[440] Olivier Rioul. Regular wavelets: A discrete-time approach. IEEE Transactions on Signal Processing, 41(12):3572–3579, December 1993.
[441] Olivier Rioul. Regular wavelets: A discrete-time approach. IEEE Transactions on Signal Processing, 41(12):3572–3579, December 1993.
[442] Olivier Rioul and P. Duhamel. Fast algorithms for discrete and continuous wavelet transforms. IEEE Transactions on Information Theory, 38(2):569–586, March 1992. Special issue on wavelets and multiresolution analysis.
[443] Olivier Rioul and P. Duhamel. Fast algorithms for discrete and continuous wavelet transforms. IEEE Transactions on Information Theory, 38(2):569–586, March 1992. Special issue on wavelets and multiresolution analysis.
[444] Olivier Rioul and Pierre Duhamel. A remez exchange algorithm for orthonormal wavelets. IEEE Transactions on Circuits and Systems II, 41(8):550–560, August 1994.
[445] Olivier Rioul and Martin Vetterli. Wavelet and signal processing. IEEE Signal Processing Magazine, 8(4):14–38, October 1991.
[446] Olivier Rioul and Martin Vetterli. Wavelet and signal processing. IEEE Signal Processing Magazine, 8(4):14–38, October 1991.
[448] E. A. Robinson and T. S. Durrani. Geophysical Signal Processing. Prentice Hall, Englewood Cliffs, NJ, 1986.
[449] Amos Ron. Characterization of linear independence and stability of the shifts of a univariate refinable function in terms of its refinement mask. Technical report CMS TR 93-3, Computer Science Dept., University of Wisconsin, Madison, September 1992.
[450] Mary Beth Ruskai, G. Beylkin, R. Coifman, I. Daubechies, S. Mallat, Y. Meyer, and L. Raphael, editors. Wavelets and Their Applications. Jones and Bartlett, Boston, MA, 1992. Outgrowth of NSF/CBMS conference on Wavelets held at the University of Lowell, June 1990.
[451] Mary Beth Ruskai, G. Beylkin, R. Coifman, I. Daubechies, S. Mallat, Y. Meyer, and L. Raphael, editors. Wavelets and their Applications. Jones and Bartlett, Boston, MA, 1992. Outgrowth of NSF/CBMS conference on Wavelets held at the University of Lowell, June 1990.
[452] Mary Beth Ruskai. Introduction. In Wavelets and their Applications. Jones and Bartlett, Boston, MA,
1992.
[453] Mary Beth Ruskai. Introduction. In Wavelets and their Applications. Jones and Bartlett, Boston, MA,
1992.
[454] M. Sablatash and J. H. Lodge. The design of filter banks with specified minimum stopband attenuation for wavelet packet-based multiple access communications. In Proceedings of 18th Biennial Symposium on Communications, Queen's University, Kingston, ON, Canada, June 1996.
[455] A. Said and W. A. Pearlman. A new, fast, and efficient image codec based on set partitioning in hierarchical trees. IEEE Transactions Cir. Syst. Video Tech., 6(3):243–250, June 1996.
[456] A. Said and W. A. Pearlman. An image multiresolution representation for lossless and lossy image compression. IEEE Transactions on Image Processing, 5:1303–1310, September 1996.
[457] Naoki Saito. Local Feature Extraction and Its Applications Using a Library of Bases. Ph. d. thesis, Yale University, New Haven, CT, 1994.
[458] Naoki Saito. Simultaneous noise suppression and signal compression using a library of orthonormal bases and the minimum description length criterion. In Wavelets in Geophysics. Academic Press, San Diego, 1994.
[459] Naoki Saito. Simultaneous noise suppression and signal compression using a library of orthonormal bases and the minimum description length criterion. In Wavelets in Geophysics. Academic Press, San Diego, 1994.
[460] John A. Scales. Theory of Seismic Imaging. Samizdat Press, Golden, CO, 1994.
[461] Larry L. Schumaker and Glenn Webb, editors. Recent Advances in Wavelet Analysis. Academic Press,
San Diego, 1993. Volume 3 in the series: Wavelet Analysis and its Applications.
[462] I. W. Selesnick. New Techniques for Digital Filter Design. Ph. d. thesis, Rice University, 1996.
[463] Ivan W. Selesnick. Parameterization of orthogonal wavelet systems. Technical report, ECE Dept. and
Computational Mathematics Laboratory, Rice University, Houston, Tx., May 1997.
[464] Ivan W. Selesnick, Markus Lang, and C. Sidney Burrus. Magnitude squared design of recursive filters with the chebyshev norm using a constrained rational remez algorithm. IEEE Transactions on Signal Processing, to appear 1997.
[465] Ivan W. Selesnick, Jan E. Odegard, and C. Sidney Burrus. Nearly symmetric orthogonal wavelets with non-integer dc group delay. In Proceedings of the IEEE Digital Signal Processing Workshop, page 431–434, Loen, Norway, September 2–4 1996.
[466] J. M. Shapiro. Embedded image coding using zerotrees of wavelet coefficients. IEEE Transactions on Signal Processing, 41(12):3445–3462, December 1993.
[467] M. J. Shensa. The discrete wavelet transform: Wedding the à trous and mallat algorithms. IEEE Transactions on Signal Processing, 40(10):2464–2482, October 1992.
[468] M. J. Shensa. The discrete wavelet transform: Wedding the à trous and mallat algorithms. IEEE Transactions on Signal Processing, 40(10):2464–2482, October 1992.
[469] O. V. Shentov, S. K. Mitra, U. Heute, and A. N. Hossen. Subband dft – part i: Definition, interpretation and extensions. Signal Processing, 41:261–278, 1995.
[470] William M. Siebert. Circuits, Signals, and Systems. MIT Press and McGraw-Hill, Cambridge and
New York, 1986.
[471] William M. Siebert. Circuits, Signals, and Systems. MIT Press and McGraw-Hill, Cambridge and
New York, 1986.
[472] E. P. Simoncelli and E. H. Adelson. Subband transforms. In Subband Image Coding. Kluwer, Norwell,
MA, 1992. Also, MIT Vision and Modeling Tech. Report No. 137, Sept. 1989.
[473] M. J. Smith and T. P. Barnwell. Exact reconstruction techniques for tree-structured subband coders. IEEE Transactions on Acoustics, Speech, and Signal Processing, 34:434–441, June 1986.
[474] M. J. Smith and T. P. Barnwell. Exact reconstruction techniques for tree-structured subband coders. IEEE Transactions on Acoustics, Speech, and Signal Processing, 34:434–441, June 1986.
[475] M. J. Smith and T. P. Barnwell. Exact reconstruction techniques for tree-structured subband coders. IEEE Transactions on Acoustics, Speech, and Signal Processing, 34:434–441, June 1986.
[476] M. J. Smith and T. P. Barnwell. A new filter bank theory for time-frequency representation. IEEE Transactions on Acoustics, Speech, and Signal Processing, 35:314–327, March 1987.
[477] M. J. Smith and T. P. Barnwell. A new filter bank theory for time-frequency representation. IEEE Transactions on Acoustics, Speech, and Signal Processing, 35:314–327, March 1987.
[478] M. J. Smith and T. P. Barnwell III. Exact reconstruction techniques for tree-structured subband coders. IEEE Transactions on Acoustics, Speech, and Signal Processing, 34:434–441, 1986.
[479] W. So and J. Wang. Estimating the support of a scaling vector. SIAM J. Matrix Anal. Appl., 18(1):66–73, January 1997.
[480] A. K. Soman and P. P. Vaidyanathan. On orthonormal wavelets and paraunitary filter banks. IEEE Transactions on Signal Processing, 41(3):1170–1183, March 1993.
[481] A. K. Soman, P. P. Vaidyanathan, and T. Q. Nguyen. Linear phase paraunitary filter banks: Theory, factorizations and designs. IEEE Transactions on Signal Processing, 41(12):3480–3496, December 1993.
[482] H. V. Sorensen and C. S. Burrus. Efficient computation of the dft with only a subset of input or output points. IEEE Transactions on Signal Processing, 41(3):1184–1200, March 1993.
[483] P. Steffen, P. N. Heller, R. A. Gopinath, and C. S. Burrus. Theory of regular M-band wavelet bases. IEEE Transactions on Signal Processing, 41(12):3497–3511, December 1993. Special Transaction issue on wavelets; Rice contribution also in Tech. Report No. CML TR-91-22, Nov. 1991.
[484] P. Steffen, P. N. Heller, R. A. Gopinath, and C. S. Burrus. Theory of regular M-band wavelet bases. IEEE Transactions on Signal Processing, 41(12):3497–3511, December 1993. Special Transaction issue on wavelets; Rice contribution also in Tech. Report No. CML TR-91-22, Nov. 1991.
[485] P. Steffen, P. N. Heller, R. A. Gopinath, and C. S. Burrus. Theory of regular M-band wavelet bases. IEEE Transactions on Signal Processing, 41(12):3497–3511, December 1993. Special Transaction issue on wavelets; Rice contribution also in Tech. Report No. CML TR-91-22, Nov. 1991.
[486] P. Steffen, P. N. Heller, R. A. Gopinath, and C. S. Burrus. Theory of regular M-band wavelet bases. IEEE Transactions on Signal Processing, 41(12):3497–3511, December 1993. Special Transaction issue on wavelets; Rice contribution also in Tech. Report No. CML TR-91-22, Nov. 1991.
[487] P. Steffen, P. N. Heller, R. A. Gopinath, and C. S. Burrus. Theory of regular M-band wavelet bases. IEEE Transactions on Signal Processing, 41(12):3497–3511, December 1993. Special Transaction issue on wavelets; Rice contribution also in Tech. Report No. CML TR-91-22, Nov. 1991.
[488] G. Strang. Eigenvalues of (↓2)H and convergence of the cascade algorithm. IEEE Transactions on Signal Processing, 44, 1996.
[489] G. Strang and V. Strela. Short wavelets and matrix dilation equations. IEEE Trans. SP, 43(1):108–115, January 1995.
[490] Gilbert Strang. Introduction to Applied Mathematics. Wellesley-Cambridge Press, Wellesley, MA, 1986.
[491] Gilbert Strang. Wavelets and dilation equations: A brief introduction. SIAM Review, 31(4):614–627, 1989. Also, MIT Numerical Analysis Report 89-9, Aug. 1989.
[492] Gilbert Strang. Wavelets. American Scientist, 82(3):250–255, May 1994.
[493] Gilbert Strang and T. Nguyen. Wavelets and Filter Banks. Wellesley-Cambridge Press, Wellesley, MA, 1996.
[494] Gilbert Strang and T. Nguyen. Wavelets and Filter Banks. Wellesley-Cambridge Press, Wellesley, MA, 1996.
[495] Gilbert Strang and T. Nguyen. Wavelets and Filter Banks. Wellesley-Cambridge Press, Wellesley, MA, 1996.
[496] Gilbert Strang and T. Nguyen. Wavelets and Filter Banks. Wellesley-Cambridge Press, Wellesley, MA, 1996.
[497] V. Strela. Multiwavelets: Theory and Applications. Ph. d. thesis, Dept. of Mathematics, MIT, June
1996.
[498] V. Strela, P. N. Heller, G. Strang, P. Topiwala, and C. Heil. The application of multiwavelet lter
banks to image processing. Technical report, MIT, January 1996. Submitted to IEEE Tran. Image
Processing.
[499] V. Strela and G. Strang. Finite element multiwavelets. In Proceedings of SPIE, Wavelet Applications
in Signal and Image Processing II, volume 2303, pages 202–213, San Diego, CA, July 1994.
[500] Wim Sweldens. The lifting scheme: A construction of second generation wavelets. Technical report
TR-1995-6, Math. Dept., University of South Carolina, May 1995.
[501] Wim Sweldens. The lifting scheme: A construction of second generation wavelets. Technical report
TR-1995-6, Math. Dept., University of South Carolina, May 1995.
[502] Wim Sweldens. The lifting scheme: A custom-design construction of biorthogonal wavelets. Applied
and Computational Harmonic Analysis, 3(2):186–200, 1996. Also a technical report, Math. Dept.,
Univ. of South Carolina, April 1995.
[503] Wim Sweldens. The lifting scheme: A custom-design construction of biorthogonal wavelets. Applied
and Computational Harmonic Analysis, 3(2):186–200, 1996. Also a technical report, Math. Dept.,
Univ. of South Carolina, April 1995.
[504] Wim Sweldens. Wavelets: What next? Proceedings of the IEEE, 84(4):680–685, April 1996.
[505] Wim Sweldens and Robert Piessens. Calculation of the wavelet decomposition using quadrature
formulae. In Wavelets: An Elementary Treatment of Theory and Applications, pages 139–160. World
Scientific, Singapore, 1993.
[506] Hai Tao and R. J. Moorhead. Lossless progressive transmission of scientific data using biorthogonal
wavelet transform. In Proceedings of the IEEE Conference on Image Processing, Austin, ICIP-94,
November 1994.
[507] Hai Tao and R. J. Moorhead. Progressive transmission of scientific data using biorthogonal wavelet
transform. In Proceedings of the IEEE Conference on Visualization, Washington, October 1994.
[508] Carl Taswell. Handbook of Wavelet Transform Algorithms. Birkhäuser, Boston, 1996.
[509] Carl Taswell. Handbook of Wavelet Transform Algorithms. Birkhäuser, Boston, 1996.
[510] Ahmed H. Tewfik. Wavelets and Multiscale Signal Processing Techniques: Theory and Applications.
To appear, 1998.
[511] J. Tian and R. O. Wells. Image compression by reduction of indices of wavelet transform coefficients.
Proc. DCC, April 1996.
[512] J. Tian, R. O. Wells, C. S. Burrus, and J. E. Odegard. Coifman wavelet systems: Approximation,
smoothness, and computational algorithms. In Computational Science for the 21st Century. John
Wiley and Sons, New York, 1997. In honor of Roland Glowinski's 60th birthday.
[513] Jun Tian. The Mathematical Theory and Applications of Biorthogonal Coifman Wavelet Systems.
Ph.D. thesis, Rice University, February 1996.
[514] Jun Tian. The Mathematical Theory and Applications of Biorthogonal Coifman Wavelet Systems.
Ph.D. thesis, Rice University, February 1996.
[515] Jun Tian and Raymond O. Wells, Jr. Vanishing moments and wavelet approximation. Technical report
CML TR-9501, Computational Mathematics Lab, Rice University, January 1995.
[516] Jun Tian and Raymond O. Wells, Jr. Vanishing moments and wavelet approximation. Technical report
CML TR-9501, Computational Mathematics Lab, Rice University, January 1995.
[517] M. J. Tsai, J. D. Villasenor, and F. Chen. Stack-run image coding. IEEE Trans. Circ. and Syst. Video
Tech., pages 519–521, October 1996.
[518] M. Turner. Texture discrimination by Gabor functions. Biological Cybernetics, 55:71–82, 1986.
[519] Michael Unser. Approximation power of biorthogonal wavelet expansions. IEEE Transactions on
Signal Processing, 44(3):519–527, March 1996.
[520] P. P. Vaidyanathan. Quadrature mirror filter banks, m-band extensions and perfect-reconstruction
techniques. IEEE Acoustics, Speech, and Signal Processing Magazine, 4(3):4–20, July 1987.
[521] P. P. Vaidyanathan. Quadrature mirror filter banks, m-band extensions and perfect-reconstruction
techniques. IEEE Acoustics, Speech, and Signal Processing Magazine, 4(3):4–20, July 1987.
[522] P. P. Vaidyanathan. Quadrature mirror filter banks, m-band extensions and perfect-reconstruction
techniques. IEEE Acoustics, Speech, and Signal Processing Magazine, 4(3):4–20, July 1987.
[523] P. P. Vaidyanathan. Theory and design of m-channel maximally decimated quadrature mirror filters
with arbitrary m, having perfect reconstruction properties. IEEE Transactions on Acoustics, Speech,
and Signal Processing, 35(4):476–492, April 1987.
[524] P. P. Vaidyanathan. Multirate Systems and Filter Banks. Prentice-Hall, Englewood Cliffs, NJ, 1992.
[525] P. P. Vaidyanathan. Multirate Systems and Filter Banks. Prentice-Hall, Englewood Cliffs, NJ, 1992.
[526] P. P. Vaidyanathan. Multirate Systems and Filter Banks. Prentice-Hall, Englewood Cliffs, NJ, 1992.
[527] P. P. Vaidyanathan. Multirate Systems and Filter Banks. Prentice-Hall, Englewood Cliffs, NJ, 1992.
[528] P. P. Vaidyanathan. Multirate Systems and Filter Banks. Prentice-Hall, Englewood Cliffs, NJ, 1992.
[529] P. P. Vaidyanathan. Multirate Systems and Filter Banks. Prentice-Hall, Englewood Cliffs, NJ, 1992.
[530] P. P. Vaidyanathan. Multirate Systems and Filter Banks. Prentice-Hall, Englewood Cliffs, NJ, 1992.
[531] P. P. Vaidyanathan. Multirate Systems and Filter Banks. Prentice-Hall, Englewood Cliffs, NJ, 1992.
[532] P. P. Vaidyanathan and Igor Djokovic. Wavelet transforms. In The Circuits and Filters Handbook,
chapter 6, pages 134–219. CRC Press and IEEE Press, Boca Raton, 1995.
[533] P. P. Vaidyanathan and Igor Djokovic. Wavelet transforms. In The Circuits and Filters Handbook,
chapter 6, pages 134–219. CRC Press and IEEE Press, Boca Raton, 1995.
[534] P. P. Vaidyanathan and Igor Djokovic. Wavelet transforms. In The Circuits and Filters Handbook,
chapter 6, pages 134–219. CRC Press and IEEE Press, Boca Raton, 1995.
[535] P. P. Vaidyanathan and Z. Doğanata. The role of lossless systems in modern digital signal processing:
A tutorial. IEEE Transactions on Education, August 1989.
[536] P. P. Vaidyanathan and Z. Doğanata. The role of lossless systems in modern digital signal processing:
A tutorial. IEEE Transactions on Education, 32(3):181–197, August 1989.
[537] P. P. Vaidyanathan and Phuong-Quan Hoang. Lattice structures for optimal design and robust
implementation of two-channel perfect reconstruction QMF banks. IEEE Transactions on Acoustics,
Speech, and Signal Processing, 36(1):81–93, January 1988.
[538] P. P. Vaidyanathan and S. K. Mitra. Polyphase networks, block digital filtering, LPTV systems, and
alias-free QMF banks: A unified approach based on pseudocirculants. IEEE Transactions on Acoustics,
Speech, and Signal Processing, 36:381–391, March 1988.
[539] P. P. Vaidyanathan, T. Q. Nguyen, Z. Doğanata, and T. Saramäki. Improved technique for design of
perfect reconstruction FIR QMF banks with lossless polyphase matrices. IEEE Transactions on
Acoustics, Speech, and Signal Processing, 37(7):1042–1056, July 1989.
[540] M. Vetterli and C. Herley. Wavelets and filter banks: Theory and design. IEEE Transactions on
Acoustics, Speech, and Signal Processing, pages 2207–2232, September 1992.
[541] M. Vetterli and D. Le Gall. Perfect reconstruction FIR filter banks: Some properties and factorizations.
IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(7):1057–1071, July 1989.
[542] Martin Vetterli. Filter banks allowing perfect reconstruction. Signal Processing, 10(3):219–244,
April 1986.
[543] Martin Vetterli. Filter banks allowing perfect reconstruction. Signal Processing, 10(3):219–244,
April 1986.
[544] Martin Vetterli. A theory of multirate filter banks. IEEE Transactions on Acoustics, Speech, and
Signal Processing, 35(3):356–372, March 1987.
[545] Martin Vetterli. A theory of multirate filter banks. IEEE Transactions on Acoustics, Speech, and
Signal Processing, 35(3):356–372, March 1987.
[546] Martin Vetterli. A theory of multirate filter banks. IEEE Transactions on Acoustics, Speech, and
Signal Processing, 35(3):356–372, March 1987.
[547] Martin Vetterli and Didier Le Gall. Perfect reconstruction FIR filter banks: Some properties and
factorizations. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(7):1057–1071,
July 1989.
[548] Martin Vetterli and Didier Le Gall. Perfect reconstruction FIR filter banks: Some properties and
factorizations. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(7):1057–1071,
July 1989.
[549] Martin Vetterli and Jelena Kovačević. Wavelets and Subband Coding. Prentice-Hall, Upper
Saddle River, NJ, 1995.
[550] Martin Vetterli and Jelena Kovačević. Wavelets and Subband Coding. Prentice-Hall, Upper
Saddle River, NJ, 1995.
[551] Martin Vetterli and Jelena Kovačević. Wavelets and Subband Coding. Prentice-Hall, Upper
Saddle River, NJ, 1995.
[552] Martin Vetterli and Jelena Kovačević. Wavelets and Subband Coding. Prentice-Hall, Upper
Saddle River, NJ, 1995.
[553] Martin Vetterli and Jelena Kovačević. Wavelets and Subband Coding. Prentice-Hall, Upper
Saddle River, NJ, 1995.
[554] J. D. Villasenor, B. Belzer, and J. Liao. Wavelet filter evaluation for image compression. IEEE
Transactions on Image Processing, 4, August 1995.
[555] Hans Volkmer. On the regularity of wavelets. IEEE Transactions on Information Theory,
38(2):872–876, March 1992.
[556] M. J. Vrhel and A. Aldroubi. Pre-filtering for the initialization of multi-wavelet transforms. Technical
report, National Institutes of Health, 1996.
[557] M. J. Vrhel, C. Lee, and M. Unser. Fast continuous wavelet transform: A least-squares formulation.
Signal Processing, 57(2):103–120, March 1997.
[558] M. J. Vrhel, C. Lee, and M. Unser. Fast continuous wavelet transform: A least-squares formulation.
Signal Processing, 57(2):103–120, March 1997.
[559] Gilbert G. Walter. Wavelets and Other Orthogonal Systems with Applications. CRC Press, Boca Raton,
FL, 1994.
[560] D. Wei and C. S. Burrus. Optimal soft-thresholding for wavelet transform coding. In Proceedings of
the IEEE International Conference on Image Processing, pages I:610–613, Washington, DC, October
1995.
[561] D. Wei, J. E. Odegard, H. Guo, M. Lang, and C. S. Burrus. SAR data compression using best-adapted
wavelet packet basis and hybrid subband coding. In Proceedings of SPIE Conference 2491, Wavelet
Applications II, volume 2491-104, pages 1131–1141, Orlando, April 17–21, 1995.
[562] D. Wei, J. E. Odegard, H. Guo, M. Lang, and C. S. Burrus. Simultaneous noise reduction and SAR
image data compression using best wavelet packet basis. In Proceedings of the IEEE International
Conference on Image Processing, pages III:200–203, Washington, DC, October 1995.
[563] Dong Wei. Investigation of biorthogonal wavelets. Technical report ECE-696, Rice University, April
1995.
[564] Dong Wei and Alan C. Bovik. On generalized coiflets: Construction, near-symmetry, and optimization.
IEEE Transactions on Circuits and Systems II, submitted October 1996.
[565] Dong Wei and Alan C. Bovik. Sampling approximation by generalized coiflets. IEEE Transactions on
Signal Processing, submitted January 1997.
[566] Dong Wei, Jun Tian, Raymond O. Wells, Jr., and C. Sidney Burrus. A new class of biorthogonal
wavelet systems for image transform coding. IEEE Transactions on Image Processing, 7(7):1000–1013,
July 1998.
[567] Dong Wei, Jun Tian, Raymond O. Wells, Jr., and C. Sidney Burrus. A new class of biorthogonal
wavelet systems for image transform coding. IEEE Transactions on Image Processing, 7(7):1000–1013,
July 1998.
[568] R. O. Wells, Jr. Parameterizing smooth compactly supported wavelets. Transactions of the American
Mathematical Society, 338(2):919–931, 1993. Also Aware Tech. Report AD891231, Dec. 1989.
[569] Raymond O. Wells, Jr. and Xiaodong Zhou. Wavelet interpolation and approximate solutions of
elliptic partial differential equations. In Noncompact Lie Groups. Kluwer, 1994. Also in Proceedings of
NATO Advanced Research Workshop, 1992, and CML Technical Report TR-9203, Rice University, 1992.
[570] M. V. Wickerhauser. Acoustic signal compression with wavelet packets. In Wavelets: A Tutorial in
Theory and Applications, pages 679–700. Academic Press, Boca Raton, 1992. Volume 2 in the series:
Wavelet Analysis and its Applications.
[571] Mladen Victor Wickerhauser. Adapted Wavelet Analysis from Theory to Software. A K Peters,
Wellesley, MA, 1995.
[572] Mladen Victor Wickerhauser. Adapted Wavelet Analysis from Theory to Software. A K Peters,
Wellesley, MA, 1995.
[573] Mladen Victor Wickerhauser. Adapted Wavelet Analysis from Theory to Software. A K Peters,
Wellesley, MA, 1995.
[574] Mladen Victor Wickerhauser. Adapted Wavelet Analysis from Theory to Software. A K Peters,
Wellesley, MA, 1995.
[575] Mladen Victor Wickerhauser. Adapted Wavelet Analysis from Theory to Software. A K Peters,
Wellesley, MA, 1995.
[576] I. Witten, R. Neal, and J. Cleary. Arithmetic coding for data compression. Communications of the
ACM, 30:520–540, June 1987.
[577] G. Wornell and A. V. Oppenheim. Estimation of fractal signals from noisy measurements using
wavelets. IEEE Transactions on Acoustics, Speech, and Signal Processing, 40(3):611–623, March 1992.
[578] G. W. Wornell and A. V. Oppenheim. Wavelet-based representations for a class of self-similar signals
with application to fractal modulation. IEEE Transactions on Information Theory, 38(2):785–800,
March 1992.
[579] Gregory W. Wornell. Signal Processing with Fractals: A Wavelet-Based Approach. Prentice Hall,
Upper Saddle River, NJ, 1996.
[580] Gregory W. Wornell. Signal Processing with Fractals: A Wavelet-Based Approach. Prentice Hall,
Upper Saddle River, NJ, 1996.
[581] J. Wu, K. M. Wong, and Q. Jin. Multiplexing based on wavelet packets. In Proceedings of SPIE
Conference, Aerosense, Orlando, April 1995.
[582] X.-G. Xia, J. S. Geronimo, D. P. Hardin, and B. W. Suter. Design of prefilters for discrete multiwavelet
transforms. IEEE Trans. SP, 44(1):25–35, January 1996.
[583] X.-G. Xia and B. W. Suter. Vector-valued wavelets and vector filter banks. IEEE Trans. SP,
44(3):508–518, March 1996.
[584] Z. Xiong, C. Herley, K. Ramchandran, and M. T. Orchard. Space-frequency quantization for a space-
varying wavelet packet image coder. Proc. Int. Conf. Image Processing, 1:614–617, October 1995.
[585] Özdoğan Yilmaz. Seismic Data Processing. Society of Exploration Geophysicists, Tulsa, 1987.
Stephen M. Doherty, editor.
[586] R. K. Young. Wavelet Theory and Its Applications. Kluwer Academic Publishers, Boston, MA, 1993.
[587] R. M. Young. An Introduction to Nonharmonic Fourier Series. Academic Press, New York, 1980.
[588] R. M. Young. An Introduction to Nonharmonic Fourier Series. Academic Press, New York, 1980.
[589] H. Zou and A. H. Tewfik. Design and parameterization of m-band orthonormal wavelets. In Proceedings
of the IEEE International Symposium on Circuits and Systems, pages 983–986, ISCAS-92, San
Diego, 1992.
[590] H. Zou and A. H. Tewfik. Discrete orthogonal m-band wavelet decompositions. In Proceedings of
the IEEE International Conference on Acoustics, Speech, and Signal Processing, volume IV, pages
IV:605–608, San Francisco, CA, 1992.
Attributions
Collection: Wavelets and Wavelet Transforms
Edited by: C. Sidney Burrus
URL: http://cnx.org/content/col11454/1.6/
License: http://creativecommons.org/licenses/by/4.0/
Module: "Preface"
By: C. Sidney Burrus
URL: http://cnx.org/content/m45097/1.15/
Pages: 1-3
Copyright: C. Sidney Burrus
License: http://creativecommons.org/licenses/by/4.0/
Module: "Introduction to Wavelets"
By: C. Sidney Burrus
URL: http://cnx.org/content/m45096/1.5/
Pages: 5-13
Copyright: C. Sidney Burrus
License: http://creativecommons.org/licenses/by/4.0/
Module: "A multiresolution formulation of Wavelet Systems"
By: C. Sidney Burrus
URL: http://cnx.org/content/m45081/1.4/
Pages: 15-36
Copyright: C. Sidney Burrus
License: http://creativecommons.org/licenses/by/4.0/
Module: "Filter Banks and the Discrete Wavelet Transform"
By: Ramesh Gopinath
URL: http://cnx.org/content/m45094/1.4/
Pages: 37-46
Copyright: Ramesh Gopinath
License: http://creativecommons.org/licenses/by/4.0/
Module: "Bases, Orthogonal Bases, Biorthogonal Bases, Frames, Tight Frames, and unconditional Bases"
By: C. Sidney Burrus
URL: http://cnx.org/content/m45090/1.4/
Pages: 47-54
Copyright: C. Sidney Burrus
License: http://creativecommons.org/licenses/by/4.0/
Module: "The Scaling Function and Scaling Coecients, Wavelet and Wavelet Coecients"
By: C. Sidney Burrus
URL: http://cnx.org/content/m45100/1.4/
Pages: 55-77
Copyright: C. Sidney Burrus
License: http://creativecommons.org/licenses/by/4.0/