Fundamentals of Signal Processing
CONNEXIONS
Rice University, Houston, Texas
This selection and arrangement of content as a collection is copyrighted by Minh N. Do. It is licensed under the Creative Commons Attribution 3.0 license (http://creativecommons.org/licenses/by/3.0/). Collection structure revised: November 26, 2012. PDF generated: May 20, 2013. For copyright and attribution information for the modules contained in this collection, see p. 218.
Table of Contents

Introduction to Fundamentals of Signal Processing

1 Foundations
1.1 Signals Represent Information
1.2 Introduction to Systems
1.3 Discrete-Time Signals and Systems
1.4 Linear Time-Invariant Systems
1.5 Discrete Time Convolution
1.6 Review of Linear Algebra
1.7 Hilbert Spaces
1.8 Signal Expansions
1.9 Introduction to Fourier Analysis
1.10 Continuous Time Fourier Transform (CTFT)
1.11 Discrete Time Fourier Transform (DTFT)
1.12 DFT as a Matrix Operation
1.13 The FFT Algorithm
Solutions

2 Sampling and Frequency Analysis
2.1 Introduction
2.2 Proof
2.3 Illustrations
2.4 Sampling and Reconstruction with Matlab
2.5 Systems View of Sampling and Reconstruction
2.6 Sampling CT Signals: A Frequency Domain Perspective
2.7 The DFT: Frequency Domain with a Computer Analysis
2.8 Discrete-Time Processing of CT Signals
2.9 Short Time Fourier Transform
2.10 Spectrograms
2.11 Filtering with the DFT
2.12 Image Restoration Basics
Solutions

3 Digital Filtering
3.1 Difference Equation
3.2 The Z Transform: Definition
3.3 Table of Common z-Transforms
3.4 Understanding Pole/Zero Plots on the Z-Plane
3.5 Filtering in the Frequency Domain
3.6 Linear-Phase FIR Filters
3.7 Filter Structures
3.8 Overview of Digital Filter Design
3.9 Window Design Method
3.10 Frequency Sampling Design Method for FIR filters
3.11 Parks-McClellan FIR Filter Design
3.12 FIR Filter Design using MATLAB
3.13 MATLAB FIR Filter Design Exercise
Solutions

4.2 Downsampling
4.3 Interpolation
4.4 Application of Interpolation - Oversampling in CD Players
4.5 Decimation
4.6 Resampling with Rational Factor
4.7 Digital Filter Design for Interpolation and Decimation
4.8 Noble Identities
4.9 Polyphase Interpolation
4.10 Polyphase Decimation Filter
4.11 Computational Savings of Polyphase Interpolation/Decimation
4.12 Sub-Band Processing
4.13 Discrete Wavelet Transform: Main Concepts
4.14 The Haar System as an Example of DWT
4.15 Filterbanks Interpretation of the Discrete Wavelet Transform
4.16 DWT Application - De-noising

5 Statistical and Adaptive Signal Processing
5.1 Introduction to Random Signals and Processes
5.2 Stationary and Nonstationary Random Processes
5.3 Random Processes: Mean and Variance
5.4 Correlation and Covariance of a Random Signal
5.5 Autocorrelation of Random Processes
5.6 Crosscorrelation of Random Processes
5.7 Introduction to Adaptive Filters
5.8 Discrete-Time, Causal Wiener Filter
5.9 Practical Issues in Wiener Filter Implementation
5.10 Quadratic Minimization and Gradient Descent
5.11 The LMS Adaptive Filter Algorithm
5.12 First Order Convergence Analysis of the LMS Algorithm
5.13 Adaptive Equalization
Solutions
Introduction to Fundamentals of Signal Processing

To understand what Digital Signal Processing (DSP) is, let us examine what each of its words means. A signal is any physical quantity that carries information. Processing is a series of steps or operations to achieve a particular end. It is easy to see that signal processing is used everywhere to extract information from signals or to convert information-carrying signals from one form to another. For example, our brain and ears take input speech signals, and then process and convert them into meaningful words. Finally, the word digital in Digital Signal Processing means that the processing is done by computers, microprocessors, or logic circuits.

The field of DSP has expanded significantly over the last few decades as a result of rapid developments in computer technology and integrated-circuit fabrication. Consequently, DSP has played an increasingly important role in a wide range of disciplines in science and technology. Research and development in DSP are driving advancements in many high-tech areas including telecommunications, multimedia, medical and scientific imaging, and human-computer interaction.

To illustrate the digital revolution and the impact of DSP, consider the development of digital cameras. Traditional film cameras rely mainly on the physical properties of the optical lens, where higher quality requires a bigger and heavier system, to obtain good images. When digital cameras were first introduced, their quality was inferior compared to film cameras. But as microprocessors became more powerful, more sophisticated DSP algorithms were developed for digital cameras to correct optical defects and improve the final image quality. Thanks to these developments, the quality of consumer-grade digital cameras has now surpassed that of film cameras. Further developments came with digital cameras attached to cell phones (cameraphones): because of the small size required of the lenses, these cameras rely on DSP power to provide good images. Essentially, digital camera technology uses computational power to overcome physical limitations. A similar trend can be found in many other applications of DSP such as digital communications, digital imaging, digital television, and so on.

In summary, DSP has its foundations in mathematics, physics, and computer science, and can provide the key enabling technology in numerous applications.
Mathematically, a signal is a function that varies with one or more independent variables such as time (one-dimensional signal) or space (2-D or 3-D signal). Signals exist in several types. In the real world, most signals are analog signals, which have values defined continuously at every value of time. To be handled by a computer, an analog signal must first be sampled, so that only its values at a discrete set of time instants are stored in computer memory locations. Furthermore, in order to be processed by logic circuits, these signal values have to be quantized to a discrete set of levels; the final result is called a digital signal.
A system is defined as a process whose input and output are signals. An important class of systems in signal processing is the class of linear time-invariant (or shift-invariant) systems. These systems have a remarkable property: each of them can be completely characterized by an impulse response function (sometimes also called a point spread function), and the system is defined by a convolution (also referred to as a filtering) operation. Thus, a linear time-invariant system is equivalent to a (linear) filter. Linear time-invariant systems are classified into two types: those that have a finite-duration impulse response (FIR) and those that have an infinite-duration impulse response (IIR).

A signal can be viewed as a vector in a vector space. Thus, linear algebra provides a powerful framework to study signals and linear systems. In particular, a signal can be represented (or expanded) as a linear combination of elementary signals. The most important signal expansions are provided by the Fourier transforms. The Fourier transforms, as with general transforms, are often used effectively to transform a problem from one domain to another domain where it is much easier to solve or analyze. The two domains of a Fourier transform have physical meaning and are called the time domain and the frequency domain.

Sampling, or the conversion of continuous-domain real-life signals to discrete numbers that can be processed by computers, is the essential bridge between the analog and the digital worlds. It is important to understand the connections between signals and systems in the real world and inside a computer. These connections are convenient to analyze in the frequency domain. Moreover, many signals and systems are specified by their frequency characteristics.

Because any linear time-invariant system can be characterized as a filter, the design of such systems boils down to the design of the associated filters. Typically, in the filter design process, we determine the coefficients of an FIR or IIR filter that closely approximates the desired frequency response specifications. Together with Fourier transforms, the z-transform provides an effective tool to analyze and design digital filters.

In many applications, signals are conveniently described via statistical models as random signals. It is remarkable that optimum linear filters (in the sense of minimum mean-square error), so-called Wiener filters, can be determined using only second-order statistics (autocorrelation and crosscorrelation functions) of a stationary process. When these statistics cannot be specified beforehand or change over time, we can employ adaptive filters, where the filter coefficients are adapted to the signal statistics. The most popular algorithm to adaptively adjust the filter coefficients is the least-mean-square (LMS) algorithm.
Chapter 1
Foundations
1.1 Signals Represent Information
Whether analog or digital, information is represented by the fundamental quantity in electrical engineering: the signal. Stated in mathematical terms, a signal is merely a function. Analog signals are continuous-valued; digital signals are discrete-valued. The independent variable of the signal could be time (speech, for example), space (images), or the integers (denoting the sequencing of letters and numbers in the football score).

The speech signal is produced by your vocal cords exciting acoustic resonances in your vocal tract. The result is pressure waves propagating in the air, and the speech signal thus corresponds to a function having independent variables of space and time and a value corresponding to air pressure: s(x, t) (here we use vector notation x to denote spatial coordinates). When you record someone talking, you are evaluating the speech signal at a particular spatial location, x0 say. An example of the resulting waveform s(x0, t) is shown in Figure 1.1 (Speech Example).
1 This content is available online at <http://cnx.org/content/m0001/2.27/>. 2 "Modeling the Speech Signal" <http://cnx.org/content/m0049/latest/>
Speech Example
Figure 1.1: A speech signal's amplitude relates to tiny air pressure variations. Shown is a recording of the vowel "e" (as in "speech"). [Plot of amplitude, ranging from -0.5 to 0.5, versus time.]
Photographs are static, and are continuous-valued signals defined over space. Black-and-white images have only one value at each point in space, which amounts to its optical reflection properties. In Figure 1.2 (Lena), an image is shown, demonstrating that it (and all other images as well) are functions of two independent spatial variables.
Lena
Figure 1.2: (a) On the left is the classic Lena image, which is used ubiquitously as a test image. It contains straight and curved lines, complicated texture, and a face. (b) On the right is a perspective display of the Lena image as a signal: a function of two spatial variables. The colors merely help show what signal values are about the same size. In this image, signal values range between 0 and 255; why is that?
Color images have values that express how reflectivity depends on the optical spectrum. Painters long ago found that mixing together combinations of the so-called primary colors (red, yellow and blue) can produce very realistic color images. Thus, images today are usually thought of as having three values at every point in space, but a different set of colors is used: how much of red, green and blue is present. Mathematically, color pictures are multivalued (vector-valued) signals: s(x) = (r(x), g(x), b(x))^T.
Interesting cases abound where the analog signal depends not on a continuous variable, such as time, but on a discrete variable. For example, temperature readings taken every hour have continuous (analog) values, but the signal's independent variable is (essentially) the integers.
Signals can also be symbolic-valued. For example, computers represent keyboard characters with integers: the ASCII code represents the letter a as the number 97 and the letter A as 65. Table 1.1 (ASCII Table) shows the international convention on associating characters with integers.
ASCII Table
[Table 1.1, in pairs of columns, lists each 7-bit code in hexadecimal (00 through 7F) alongside the character it represents: control characters (such as soh, ht, dc1, eot, enq, cr, and nak) followed by the printable characters, including the digits 0-9, the uppercase letters A-Z, and the lowercase letters a-z.]
Table 1.1: The ASCII translation table shows how standard keyboard characters are represented by integers. In pairs of columns, this table displays first the so-called 7-bit code (how many characters in a seven-bit code?), then the character the number represents. The numeric codes are represented in hexadecimal (base-16) notation. Mnemonic characters correspond to control characters, some of which may be familiar (like cr, the carriage return).
Definition of a system
[Block diagram: x(t) → System → y(t)]
Figure 1.3: The system depicted has input x(t) and output y(t). Mathematically, systems operate on function(s) to produce other function(s). In many ways, systems are like functions, rules that yield a value for the dependent variable (our output signal) for each value of its independent variable (its input signal). The notation y(t) = S(x(t)) corresponds to this block diagram. We term S(·) the input-output relation for the system.
This notation mimics the mathematical symbology of a function: a system's input is analogous to an independent variable and its output the dependent variable. For the mathematically inclined, a system is a functional: a function of a function (signals are functions).
Simple systems can be connected together (one system's output becomes another's input) to accomplish some overall design. Interconnection topologies can be quite complicated, but usually consist of weaves of three basic interconnection forms.
cascade
[Block diagram: x(t) → S1[·] → w(t) → S2[·] → y(t)]
Figure 1.4: The most rudimentary ways of interconnecting systems are shown in the figures in this section. This is the cascade configuration.
The simplest form is when one system's output is connected only to another's input. Mathematically, w(t) = S1(x(t)), and y(t) = S2(w(t)), with the information contained in x(t) processed by the first system, then the second. In some cases, the ordering of the systems matters; in others it does not. For example, in the fundamental model of communication the ordering most certainly matters.
parallel
[Block diagram: x(t) is routed to both S1[·] and S2[·]; the two outputs are summed to form y(t).]
Figure 1.5: The parallel configuration.

A signal x(t) is routed to two (or more) systems, with this signal appearing as the input to all systems simultaneously. Block diagrams have the convention that signals going to more than one system are not split into pieces along the way. Two or more systems operate on x(t) and their outputs are added together to create the output y(t). Thus, y(t) = S1(x(t)) + S2(x(t)), and the information in x(t) is processed separately by both systems.

4 "Structure of Communication Systems", Figure 1: Fundamental model of communication <http://cnx.org/content/m0002/latest/#commsys>
e(t)
S1[]
y(t)
Figure 1.6:
The subtlest interconnection conguration has a system's output also contributing to its input. Engineers would say the output is "fed back" to the input through system 2, hence the terminology. The mathematical statement of the feedback interconnection (Figure 1.6: feedback) is that the feed-forward system produces the output: output to
y (t) = S1 (e (t)). The input e (t) equals the input signal minus the output of some other system's y (t): e (t) = x (t) S2 (y (t)). Feedback systems are omnipresent in control problems, with the x (t)
is a constant representing what speed you want, and
error signal used to adjust the output to achieve some condition dened by the input (controlling) signal. For example, in a car's cruise control system, equals input).
y (t)
is the car's speed as measured by a speedometer. In this application, system 2 is the identity system (output
1.3 Discrete-Time Signals and Systems

Mathematically, analog signals are functions having as their independent variables continuous quantities, such as space and time. Discrete-time signals are functions defined on the integers; they are sequences. As with analog signals, we seek ways of decomposing discrete-time signals into simpler components. Because this approach leads to a better understanding of signal structure, we can exploit that structure to represent information (create ways of representing information with signals) and to extract information (retrieve the information thus represented). For symbolic-valued signals, the approach is different: We develop a common representation of all symbolic-valued signals so that we can embody the information they contain in a unified way. From an information representation perspective, the most important issue becomes, for both real-valued and symbolic-valued signals, efficiency: what is the most parsimonious and compact way to represent information so that it can be extracted later.
1.3.1 Real- and Complex-valued Signals

A discrete-time signal is represented symbolically as s(n), where n = {. . . , −1, 0, 1, . . . }.
Cosine
Figure 1.7: The discrete-time cosine signal is plotted as a stem plot. Can you find the formula for this signal?
We usually draw discrete-time signals as stem plots to emphasize the fact they are functions defined only on the integers. We can delay a discrete-time signal by an integer just as with analog ones. A signal delayed by m samples has the expression s(n − m).

1.3.2 Complex Exponentials

The most important signal is the complex exponential sequence

s(n) = e^(i2πfn)    (1.1)

Adding an integer m to the frequency of the discrete-time complex exponential has no effect on the signal's value:

e^(i2π(f+m)n) = e^(i2πfn) e^(i2πmn) = e^(i2πfn)    (1.2)

This derivation follows because the complex exponential evaluated at an integer multiple of 2π equals one. Thus, we need only consider frequency to have a value in some unit-length interval.
1.3.3 Sinusoids

Discrete-time sinusoids have the obvious form s(n) = A cos(2πfn + φ). As opposed to analog complex exponentials and sinusoids that can have their frequencies be any real value, frequencies of their discrete-time counterparts yield unique waveforms only when f lies in the interval (−1/2, 1/2]. This choice of frequency interval is arbitrary; we can also choose the frequency to lie in the interval [0, 1). How to choose a unit-length interval for a sinusoid's frequency will become evident later.
1.3.4 Unit Sample

The second-most important discrete-time signal is the unit sample, which is defined to be

δ(n) = { 1 if n = 0 ; 0 otherwise }    (1.3)
Unit sample
Figure 1.8: The unit sample δ(n).

Examination of a discrete-time signal's plot, like that of the cosine signal shown in Figure 1.7 (Cosine), reveals that all signals consist of a sequence of delayed and scaled unit samples. Because the value of a sequence at each integer m is denoted by s(m) and the unit sample delayed to occur at m is written δ(n − m), we can decompose any signal as a sum of unit samples delayed to the appropriate location and scaled by the signal value:

s(n) = Σ_{m=−∞}^{∞} s(m) δ(n − m)    (1.4)

This kind of decomposition is unique to discrete-time signals, and will prove useful subsequently.
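A quick numerical check of (1.4) can be written in MATLAB (the tool used later in this collection); the signal and names below are illustrative, not from the text.

    n = 0:7;                      % time indices
    s = cos(2*pi*0.1*n);          % any discrete-time signal
    recon = zeros(size(s));
    for m = n                     % sum of scaled, delayed unit samples
        delta = (n == m);         % unit sample delayed to location m
        recon = recon + s(m+1) * delta;
    end
    max(abs(s - recon))           % ~0: the decomposition reproduces s(n)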
1.3.5 Unit Step

The unit step in discrete-time is well-defined at the origin, as opposed to the situation with analog signals:

u(n) = { 1 if n ≥ 0 ; 0 if n < 0 }    (1.5)
1.3.6 Symbolic Signals

A symbolic-valued signal s(n) takes on one of the values {a1, . . . , aK}, which comprise the alphabet A. This technical terminology does not mean we restrict symbols to being members of the English or Greek alphabet. They could represent keyboard characters, bytes (8-bit quantities), or integers that convey daily temperature. Whether controlled by software or not, discrete-time systems are ultimately constructed from digital circuits, which consist entirely of analog circuit elements. Furthermore, the transmission and reception of discrete-time signals, like e-mail, is accomplished with analog signals and systems. With this technology, analog signals can be converted into discrete-time signals, processed with software, and converted back into analog form. Understanding how discrete-time and analog signals and systems intertwine is perhaps the main goal of this course.
1.4 Linear Time-Invariant Systems

A discrete-time signal s(n) is delayed by n0 samples when we write s(n − n0), with n0 > 0. Choosing n0 to be negative advances the signal along the integers. As opposed to analog delays, discrete-time delays can only be integer valued. In the frequency domain, delaying a signal corresponds to a linear phase shift of the signal's discrete-time Fourier transform:

s(n − n0) ↔ e^(−(i2πf n0)) S(e^(i2πf))    (1.6)
Shift-Invariant:
If S(x(n)) = y(n), then S(x(n − n0)) = y(n − n0)    (1.7)
We use the term shift-invariant to emphasize that delays can only have integer values in discrete-time, while in analog signals, delays can be arbitrarily valued. We want to concentrate on systems that are both linear and shift-invariant. It will be these that allow us the full power of frequency-domain analysis and implementations. Because we have no physical constraints in "constructing" such systems, we need only a mathematical specification. In analog systems, the differential equation specifies the input-output relationship in the time-domain. The corresponding discrete-time specification is the difference equation:

y(n) = a1 y(n−1) + · · · + ap y(n−p) + b0 x(n) + b1 x(n−1) + · · · + bq x(n−q)    (1.8)

Here, the output signal y(n) is related to its p past values y(n−1), . . . , y(n−p) and to the current and q past values of the input signal x(n). The system's characteristics are determined by the choices for the number of coefficients p and q and the coefficients' values {a1, . . . , ap} and {b0, b1, . . . , bq}.

aside: There is an asymmetry in the coefficients: where is a0? This coefficient would multiply the y(n) term in the difference equation (1.8). We have essentially divided the equation by it, which does not change the input-output relationship. We have thus created the convention that a0 is always one.

As opposed to differential equations, which only provide an implicit description of a system (we must somehow solve the differential equation), difference equations provide an explicit way of computing the output for any input. We simply express the difference equation by a program that calculates each output from the previous output values, and the current and previous inputs.
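As a minimal sketch of such a program (MATLAB, with an illustrative first-order difference equation y(n) = 0.5 y(n−1) + x(n); longer coefficient lists extend the same loop pattern):

    a1 = 0.5;                  % feedback coefficient a1 in (1.8)
    b0 = 1;                    % feedforward coefficient b0
    x = [1 zeros(1, 9)];       % unit-sample input
    y = zeros(size(x));
    for n = 1:length(x)
        y(n) = b0 * x(n);              % current input term
        if n > 1
            y(n) = y(n) + a1 * y(n-1); % previous output term
        end
    end
    % idiomatic equivalent: y = filter(b0, [1 -a1], x); here y(n) = 0.5^(n-1)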
1.5 Discrete Time Convolution

Convolution, one of the most important concepts in electrical engineering, can be used to determine the output a system produces for a given input signal. It can be shown that a linear time invariant system is
6 This content is available online at <http://cnx.org/content/m0508/2.7/>. 7 "Simple Systems": Section Delay <http://cnx.org/content/m0006/latest/#delay> 8 This content is available online at <http://cnx.org/content/m10087/2.27/>.
completely characterized by its impulse response. The sifting property of the discrete time impulse function tells us that the input signal to a system can be represented as a sum of scaled and shifted unit impulses. Thus, by linearity, it would seem reasonable to compute the output signal as the sum of scaled and shifted unit impulse responses. That is exactly what the operation of convolution accomplishes. Hence, convolution can be used to determine a linear time invariant system's output from knowledge of the input and the impulse response.
Discrete time convolution is an operation on two discrete time signals defined by

(f ∗ g)(n) = Σ_{k=−∞}^{∞} f(k) g(n − k)    (1.9)

for all signals f, g defined on Z. It is important to note that the operation of convolution is commutative, meaning that

f ∗ g = g ∗ f    (1.10)

for all signals f, g defined on Z. Thus, the convolution operation could have been just as easily stated using the equivalent definition

(f ∗ g)(n) = Σ_{k=−∞}^{∞} f(n − k) g(k)    (1.11)

for all signals f, g defined on Z. Convolution has several other important properties not listed here but explained and derived in a later module.
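As a small illustration, the MATLAB sketch below convolves two short, illustrative sequences using the built-in conv(), which evaluates the sum in (1.9) for finitely supported signals, and checks the commutativity property (1.10).

    f = [1 2 3];                      % f(n), n = 0, 1, 2
    g = [1 1 1 1];                    % g(n), n = 0, 1, 2, 3
    y = conv(f, g)                    % y(n), n = 0..5: [1 3 6 6 5 3]
    isequal(conv(f, g), conv(g, f))   % 1 (true): f*g = g*f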
Let H be a DT LTI system with impulse response h. Given a system input signal x, we would like to compute the system output H(x). First, we note that the input can be written as

x(n) = Σ_{k=−∞}^{∞} x(k) δ(n − k)    (1.12)

by the sifting property of the unit impulse function. By linearity,

Hx(n) = Σ_{k=−∞}^{∞} x(k) Hδ(n − k).    (1.13)

Since Hδ(n − k) is the shifted unit impulse response h(n − k), this gives the result

Hx(n) = Σ_{k=−∞}^{∞} x(k) h(n − k) = (x ∗ h)(n).    (1.14)

Hence, convolution has been defined such that the output of a linear time invariant system is given by the convolution of the system input with the system unit impulse response.
It is often helpful to visualize the computation of a convolution in terms of graphical processes. Consider the convolution of two functions f, g given by

(f ∗ g)(n) = Σ_{k=−∞}^{∞} f(k) g(n − k) = Σ_{k=−∞}^{∞} f(n − k) g(k).    (1.15)

The first step in graphically understanding the operation of convolution is to plot each of the functions. Next, one of the functions must be selected, and its plot reflected across the k = 0 axis. For each n, that same function must be shifted left by n. The product of the two resulting plots is then constructed, and the sum of its values is computed.
Example 1.1
Recall that the impulse response for a discrete time echoing feedback system with gain a is

h(n) = a^n u(n),    (1.16)

and consider the response to an input signal that is another exponential

x(n) = b^n u(n).    (1.17)

We know that the output for this input is given by the convolution of the impulse response with the input signal

y(n) = x(n) ∗ h(n).    (1.18)

We would like to compute this operation by beginning in a way that minimizes the algebraic complexity of the expression. However, in this case, each possible choice is equally simple. Thus, we would like to compute

y(n) = Σ_{k=−∞}^{∞} a^k u(k) b^(n−k) u(n − k).    (1.19)

The step functions can be used to further simplify this sum. Therefore,

y(n) = 0    (1.20)

for n < 0, and

y(n) = b^n Σ_{k=0}^{n} (a b^(−1))^k    (1.21)

for n ≥ 0. Hence, provided a b^(−1) ≠ 1, we have that

y(n) = { 0, n < 0 ;  b^n (1 − (a b^(−1))^(n+1)) / (1 − a b^(−1)), n ≥ 0 }.    (1.22)
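The closed form (1.22) is easy to sanity-check numerically; the MATLAB sketch below uses illustrative gains a = 0.5 and b = 0.8 and compares the output of conv() against (1.22).

    a = 0.5; b = 0.8;
    n = 0:20;
    h = a.^n;  x = b.^n;                 % u(n) handled by starting at n = 0
    y = conv(x, h);  y = y(1:length(n)); % first samples of the full convolution
    yc = b.^n .* (1 - (a/b).^(n+1)) / (1 - a/b);  % closed form (1.22), n >= 0
    max(abs(y - yc))                     % ~0 up to rounding error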
The operation above applies to infinite length signals. For finite length or N-periodic discrete time signals, we use circular convolution, defined by

(f ⊛ g)(n) = Σ_{k=0}^{N−1} f̂(k) ĝ(n − k)    (1.23)

for all signals f, g defined on Z[0, N−1], where f̂, ĝ are periodic extensions of f and g. It is important to note that circular convolution is also commutative:

f ⊛ g = g ⊛ f    (1.24)

for all signals f, g defined on Z[0, N−1]. Thus, the circular convolution operation could have been just as easily stated using the equivalent definition

(f ⊛ g)(n) = Σ_{k=0}^{N−1} f̂(n − k) ĝ(k)    (1.25)

for all signals f, g defined on Z[0, N−1], where f̂, ĝ are periodic extensions of f and g. Circular convolution has several other important properties not listed here but explained and derived in a later module.

Alternatively, discrete time circular convolution can be expressed as the sum of two summations given by

(f ⊛ g)(n) = Σ_{k=0}^{n} f(k) g(n − k) + Σ_{k=n+1}^{N−1} f(k) g(n − k + N)    (1.26)

for all signals f, g defined on Z[0, N−1].
Meaningful examples of computing discrete time circular convolutions in the time domain would involve complicated algebraic manipulations dealing with the wrap-around behavior, which would ultimately be more confusing than helpful. Thus, none will be provided in this section. Of course, example computations in the time domain are easy to program and demonstrate. However, discrete time circular convolutions are more easily computed using frequency domain tools, as will be shown in the discrete time Fourier series section.
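For the curious, here is one such program: a MATLAB sketch (signals illustrative) that computes a circular convolution both by the wrap-around sum (1.23) and with FFTs, anticipating the frequency domain tools mentioned above.

    f = [1 2 3 4];  g = [1 0 1 0];  N = length(f);
    y = zeros(1, N);
    for n = 0:N-1
        for k = 0:N-1
            y(n+1) = y(n+1) + f(k+1) * g(mod(n - k, N) + 1);  % wrap-around index
        end
    end
    yf = ifft(fft(f) .* fft(g));   % circular convolution via the DFT
    max(abs(y - yf))               % ~0: the two computations agree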
Let H be a DT LTI system with impulse response h. Given a finite length or periodic system input signal x, we would like to compute the system output H(x). First, we note that the input can be written as

x(n) = Σ_{k=0}^{N−1} x(k) δ(n − k)    (1.27)

by the sifting property of the unit impulse function. By linearity,

Hx(n) = Σ_{k=0}^{N−1} x(k) Hδ(n − k).    (1.28)

Since Hδ(n − k) is the shifted unit impulse response h(n − k), this gives the result

Hx(n) = Σ_{k=0}^{N−1} x(k) h(n − k) = (x ⊛ h)(n).    (1.29)

Hence, circular convolution has been defined such that the output of a linear time invariant system is given by the circular convolution of the system input with the system unit impulse response.
It is often helpful to visualize the computation of a circular convolution in terms of graphical processes. Consider the circular convolution of two finite length functions f, g given by

(f ⊛ g)(n) = Σ_{k=0}^{N−1} f̂(k) ĝ(n − k) = Σ_{k=0}^{N−1} f̂(n − k) ĝ(k).    (1.30)

The first step in graphically understanding the operation of circular convolution is to plot each of the periodic extensions of the functions. Next, one of the functions must be selected, and its plot reflected across the k = 0 axis. For each k ∈ Z[0, N−1], that same function must be shifted left by k. The product of the two resulting plots is then constructed, and the sum of the resulting values on Z[0, N−1] is computed.
1.6 Review of Linear Algebra

1.6.1 Fields

A field is a set F equipped with two operations, addition and multiplication, and containing two special members 0 and 1 (0 ≠ 1), such that for all {a, b, c} ⊆ F:

1. (a) (a + b) ∈ F
   (b) a + b = b + a
   (c) (a + b) + c = a + (b + c)
   (d) a + 0 = a
   (e) there exists −a such that a + (−a) = 0
2. (a) ab ∈ F
   (b) ab = ba
   (c) (ab)c = a(bc)
   (d) a · 1 = a
   (e) for a ≠ 0 there exists a^(−1) such that a a^(−1) = 1
3. a(b + c) = ab + ac

More concisely:
1. F is an abelian group under addition
2. F \ {0} is an abelian group under multiplication
3. multiplication distributes over addition
1.6.1.1 Examples

Q (the rational numbers), R (the real numbers), C (the complex numbers).
1.6.2 Vector Spaces

Let F be a field, and V a set. We say V is a vector space over F if there exist two operations, defined for all a ∈ F, u ∈ V and v ∈ V:

- vector addition: (u, v) → (u + v) ∈ V
- scalar multiplication: (a, v) → av ∈ V

and if there exists a vector 0 ∈ V such that the following hold for all a ∈ F, b ∈ F, and u ∈ V, v ∈ V, w ∈ V:

1. (a) u + (v + w) = (u + v) + w
   (b) u + v = v + u
   (c) u + 0 = u
   (d) there exists −u such that u + (−u) = 0
2. (a) a(u + v) = au + av
   (b) (a + b)u = au + bu
   (c) (ab)u = a(bu)
   (d) 1 · u = u

More concisely, V is an abelian group under vector addition, and scalar multiplication is compatible with the field structure of F.
1.6.2.1 Examples

- R^N is a vector space over R
- C^N is a vector space over C
- C^N is a vector space over R
- R^N is not a vector space over C (multiplying a real vector by a complex scalar need not give a real vector)

The elements of V are called vectors.

In this course we will often think of a signal as a vector

x = (x1, x2, . . . , xN)^T

The samples {xi} could be samples from a finite duration, continuous time signal, for example. A signal will belong to one of the Euclidean spaces R^N (over R) or C^N (over C).
1.6.4 Subspaces

Let V be a vector space over F. A subset S ⊆ V is called a subspace of V if S is a vector space over F in its own right.

Example 1.2
V = R², F = R, S = any line through the origin.

Figure 1.10: S is any line through the origin.

Theorem 1.1:
S ⊆ V is a subspace if and only if for all a ∈ F and b ∈ F and for all s ∈ S and t ∈ S, (as + bt) ∈ S.
1.6.5 Linear Independence

Let u1, . . . , uk ∈ V. We say that these vectors are linearly dependent if there exist scalars a1, . . . , ak ∈ F such that

Σ_{i=1}^{k} ai ui = 0    (1.31)

and at least one ai ≠ 0. If (1.31) only holds for the case a1 = · · · = ak = 0, we say that the vectors are linearly independent.

Example 1.3
In R³,

1 · (1, 2, 0)^T + 1 · (−2, 3, 2)^T + 1 · (1, −5, −2)^T = (0, 0, 0)^T

so these vectors are linearly dependent in R³.

1.6.6 Span

Consider the subset S = {v1, v2, . . . , vk}. Define the span of S as the set of all linear combinations of the elements of S:

<S> ≜ { Σ_{i=1}^{k} ai vi | ai ∈ F }
1.6.6.1 Aside

If S is infinite, the notions of linear independence and span are easily generalized: We say S is linearly independent if, for every finite collection u1, . . . , uk ∈ S (k arbitrary), we have

( Σ_{i=1}^{k} ai ui = 0 ) ⇒ ( ∀i : ai = 0 )

The span of S is

<S> = { Σ_{i=1}^{k} ai ui | ai ∈ F, ui ∈ S, k < ∞ }

note: In both cases, only finite sums are allowed.
1.6.7 Bases

A set B ⊆ V is called a basis for V over F if and only if

1. B is linearly independent
2. <B> = V

Bases are of fundamental importance in signal processing. They allow us to decompose a signal into building blocks (basis vectors) that are often more easily understood.
Example 1.5
V = (real or complex) Euclidean space, R^N or C^N. B = {e1, . . . , eN} is the standard basis, where

ei = (0, . . . , 1, . . . , 0)^T

with the 1 in the ith position.
Example 1.6
V = C^N over C. B = {u1, . . . , uN} with

uk = (1, e^(−(i2π k/N)), . . . , e^(−(i2π (k/N)(N−1))))^T

where i = √−1.
If B is a basis for V, then every v ∈ V can be written uniquely as

v = Σ_i ai vi

where ai ∈ F and vi ∈ B.
1.6.8 Dimension

Let V be a vector space with basis B. The dimension of V, denoted dim(V), is the cardinality of B. (Every basis of a given space has the same cardinality, so the dimension is well defined.) If dim(V) < ∞, then V is said to be finite dimensional.
vector space | field | dimension
R^N | R | N
C^N | C | N
C^N | R | 2N

Table 1.2
Every subspace is a vector space, and therefore has its own dimension.
Example 1.7
Suppose S = {u1, . . . , uk} ⊆ V is a linearly independent set. Then dim(<S>) = k.

1.6.9 Direct Sums

Let V be a vector space, and let S ⊆ V and T ⊆ V be subspaces. We say V is the direct sum of S and T, written V = S ⊕ T, if and only if for every v ∈ V, there exist unique s ∈ S and t ∈ T such that v = s + t. If V = S ⊕ T, then T is called a complement of S.

Example 1.8
V = C' = { f : R → R | f is continuous }
S = even functions in C'
T = odd functions in C'
Every f decomposes as f(t) = (1/2)(f(t) + f(−t)) + (1/2)(f(t) − f(−t)), an even part plus an odd part. If f = g + h = g' + h', with g ∈ S, g' ∈ S and h ∈ T, h' ∈ T, then g = g' and h = h', so the decomposition is unique and V = S ⊕ T.
1.6.9.1 Facts

1. Every subspace has a complement.
2. V = S ⊕ T if and only if
   a. every v ∈ V can be written v = s + t with s ∈ S, t ∈ T, and
   b. S ∩ T = {0}.
3. If V = S ⊕ T and dim(V) < ∞, then dim(V) = dim(S) + dim(T).
1.6.9.2 Proofs
Invoke a basis.
1.6.10 Norms

Let V be a vector space over F. A norm is a mapping V → R, denoted by ‖·‖, such that for all u ∈ V, v ∈ V, and λ ∈ F:

1. ‖u‖ > 0 if u ≠ 0
2. ‖λu‖ = |λ| ‖u‖
3. ‖u + v‖ ≤ ‖u‖ + ‖v‖

1.6.10.1 Examples

Euclidean norms. For x ∈ R^N:

‖x‖ = ( Σ_{i=1}^{N} xi² )^(1/2)

For x ∈ C^N:

‖x‖ = ( Σ_{i=1}^{N} |xi|² )^(1/2)
F, F = R
or
C.
V V F,
denoted
< , >,
such that 1. 2. 3.
< v, v > 0, and < v, v >= 0 v = 0 < u, v >= < v, u > < au + bv, w >= a < (u, w) > +b < (v, w) >
1.6.11.1 Examples
RN
over
R: < x, y >= xT y =
xi yi
i=1
CN
over
C: < x, y >= x y =
H
xi yi
i=1
If x = (x1, . . . , xN)^T ∈ C^N, then x^H = (x̄1, . . . , x̄N) is called the "Hermitian," or "conjugate transpose," of x.

If we define ‖u‖ = √(<u, u>), then the triangle inequality is satisfied: ‖u + v‖ ≤ ‖u‖ + ‖v‖. Hence, every inner product induces a norm.
In inner product spaces, we have a notion of the angle between two vectors:

∠(u, v) = arccos( <u, v> / (‖u‖ ‖v‖) )

1.6.14 Orthogonality

Vectors u and v are orthogonal if <u, v> = 0. Notation: u ⊥ v. If in addition ‖u‖ = ‖v‖ = 1, we say u and v are orthonormal.

Figure 1.12: Orthogonal vectors in R².

An orthogonal (orthonormal) basis is a basis {vi} such that

<vi, vj> = δij = { 1 if i = j ; 0 if i ≠ j }
Example 1.9
The standard basis for R^N or C^N is an orthonormal basis.

Example 1.10
The normalized DFT basis

uk = (1/√N) (1, e^(−(i2π k/N)), . . . , e^(−(i2π (k/N)(N−1))))^T

is an orthonormal basis for C^N.
If the representation of v with respect to an orthonormal basis {vi} is

v = Σ_i ai vi

then the coefficients are given by

ai = <vi, v>
1.6.17 Gram-Schmidt
Every inner product space has an orthonormal basis. Any (countable) basis can be made orthogonal by the Gram-Schmidt orthogonalization process.
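A minimal MATLAB sketch of the Gram-Schmidt process, assuming the columns of an illustrative matrix B form a linearly independent set:

    B = [1 1; 1 0; 0 1];          % columns are the vectors to orthonormalize
    [N, K] = size(B);
    U = zeros(N, K);
    for k = 1:K
        v = B(:, k);
        for j = 1:k-1             % subtract projections onto earlier vectors
            v = v - (U(:, j)' * v) * U(:, j);
        end
        U(:, k) = v / norm(v);    % normalize to unit length
    end
    U' * U                        % ~identity: the columns of U are orthonormal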
SV
be a subspace. The
orthogonal compliment S is
S = { u | u V (< u, v >= 0) v : (v S ) } S
is easily seen to be a subspace. If
dim (v ) < ,
then
V = S S.
then in order to have
aside: If
dim (v ) = ,
V = S S
we require
to be a
Hilbert Space.
1.6.19 Linear Transformations

Given vector spaces V, W over the same field F, a linear transformation is a mapping T : V → W such that

T(au + bv) = aT(u) + bT(v)

for all a ∈ F, b ∈ F and u ∈ V, v ∈ V. In other words, T preserves superposition. In this class we will be concerned with linear transformations between (real or complex) Euclidean spaces.

1.6.20 Image

image(T) = { w | w ∈ W and T(v) = w for some v ∈ V }
1.6.21 Nullspace
Also known as the kernel:
ker(T) = { v | v ∈ V and T(v) = 0 }
Both the image and the nullspace are easily seen to be subspaces.
1.6.22 Rank
rank (T ) = dim (image (T ))
1.6.23 Nullity
null (T ) = dim (ker (T ))
1.6.25 Matrices

Every linear transformation has a matrix representation. If T : E^N → E^M, E = R or C, then T is represented by an M×N matrix

A = ( a11 . . . a1N ; . . . ; aM1 . . . aMN )

where (a1i, . . . , aMi)^T = T(ei) is the ith column of A, and ei = (0, . . . , 1, . . . , 0)^T is the ith standard basis vector.

aside: A linear transformation can be represented with respect to any bases of E^N and E^M, leading to a different A.
1.6.27 Duality

If A : R^N → R^M, then

ker(A)⊥ = image(A^T)

If A : C^N → C^M, then

ker(A)⊥ = image(A^H)
1.6.28 Inverses

The linear transformation/matrix A is invertible if and only if there exists a matrix B such that AB = BA = I (identity). Only square matrices can be invertible.

Theorem 1.2: Let A : F^N → F^N be linear, F = R or C. The following are equivalent:

1. A is invertible (nonsingular)
2. rank(A) = N
3. null(A) = 0
4. det(A) ≠ 0
5. The columns of A form a basis.

If A^(−1) = A^T (or A^H in the complex case), we say A is orthogonal (or unitary).
1.7 Hilbert Spaces

A vector space with a valid inner product defined on it is called an inner product space, which is also a normed linear space. A Hilbert space is an inner product space that is complete with respect to the norm defined using the inner product. Hilbert spaces are named after David Hilbert, who developed this idea through his studies of integral equations. We define our valid norm using the inner product as:

‖x‖ = √(<x, x>)    (1.32)

Hilbert spaces are useful in studying and generalizing the concepts of Fourier expansion and Fourier transforms, and are very important to the study of quantum mechanics. Hilbert spaces are studied under the functional analysis branch of mathematics.
Some common examples of Hilbert spaces and their inner products:

Finite dimensional complex vectors C^N:

<x, y> = Σ_{i=0}^{N−1} x̄i yi = x^H y

Space of finite energy complex functions L²(R):

<f, g> = ∫_{−∞}^{∞} f̄(t) g(t) dt

Space of square-summable sequences ℓ²(Z):

<x, y> = Σ_{i=−∞}^{∞} x̄[i] y[i]
1.8 Signal Expansions

When working with signals, many times it is helpful to break up a signal into smaller, more manageable parts. Hopefully by now you have been exposed to the concept of eigenvectors and have seen how useful they can be in breaking up a signal into one of its possible bases. By doing this we are able to simplify our calculations of signals and systems through eigenfunctions of LTI systems. Now we would like to look at an alternative way to represent signals, through the use of an orthonormal basis. We can think of an orthonormal basis as a set of building blocks we use to construct functions. We will build up the signal/vector as a weighted sum of basis elements.

Example 1.11
The complex sinusoids (1/√T) e^(iω0 nt) for all −∞ < n < ∞ form an orthonormal basis for L²([0, T]). In our Fourier series equation, f(t) = Σ_{n=−∞}^{∞} cn e^(iω0 nt), the {cn} are just another representation of f(t).

note: For signals/vectors in a Hilbert Space, the expansion coefficients are easy to find.
13 "Hilbert Spaces" <http://cnx.org/content/m10434/latest/> 14 This content is available online at <http://cnx.org/content/m10760/2.6/>. 15 "Eigenvectors and Eigenvalues" <http://cnx.org/content/m10736/latest/> 16 "Eigenfunctions of LTI Systems" <http://cnx.org/content/m10500/latest/> 17 "Fourier Series: Eigenfunction Approach" <http://cnx.org/content/m10496/latest/>
basis: A set of vectors {bi} in a vector space S is a basis if

1. the bi are linearly independent, and
2. the bi span S; that is, we can find scalars {αi}, where αi ∈ C, such that

x = Σ_i αi bi , x ∈ S    (1.33)

where x is a vector in S, αi is a scalar in C, and bi is a vector in S.

note: Condition 2 in the above definition says we can decompose any vector x in terms of the {bi}. Condition 1 ensures that the decomposition is unique (think about this at home).

note: The {αi} provide an alternate representation of x.
Example 1.12
Let us look at a simple example in R², where we have the vector

x = (1, 2)^T

Standard Basis: {e0, e1} = {(1, 0)^T, (0, 1)^T}

x = e0 + 2 e1

Alternate Basis: {h0, h1} = {(1, 1)^T, (1, −1)^T}

x = (3/2) h0 − (1/2) h1

In general, given a basis {b0, b1} and a vector x ∈ R², how do we find the α0 and α1 such that

x = α0 b0 + α1 b1    (1.34)

To find the αi's in general for R², we start by rewriting (1.34) so that the bi's appear as columns in a 2×2 matrix:

x = α0 b0 + α1 b1    (1.35)

x = ( b0 b1 ) (α0 ; α1)    (1.36)

18 "Linear Algebra: The Basics": Section Span <http://cnx.org/content/m10734/latest/#span_sec>
Example 1.13
Here is a simple example, which shows a little more detail about the above equations.

x[0] = α0 b0[0] + α1 b1[0]
x[1] = α0 b0[1] + α1 b1[1]    (1.37)

( x[0] ; x[1] ) = ( b0[0] b1[0] ; b0[1] b1[1] ) (α0 ; α1)    (1.38)

To simplify notation, we define the following two items from the above equations:

Basis Matrix:

B = ( b0 b1 )

where the columns of B are the basis vectors.

Coefficient Vector:

α = (α0, α1)^T

This gives us the concise equation

x = Bα    (1.39)

which is equivalent to x = Σ_{i=0}^{1} αi bi.
Example 1.14
Given the standard basis, {(1, 0)^T, (0, 1)^T}, we have the basis matrix

B = ( 1 0 ; 0 1 )

To get the αi's, we solve for the coefficient vector in (1.39):

α = B^(−1) x    (1.40)

where B^(−1) is the inverse matrix of B.

19 "Matrix Inversion" <http://cnx.org/content/m2113/latest/>
Example 1.15
Let us look at the standard basis first and try to calculate α from it.

B = ( 1 0 ; 0 1 ) = I

where I is the identity matrix. In order to solve for α, let us find the inverse of B first (which is obviously just B itself):

B^(−1) = ( 1 0 ; 0 1 )

Therefore we get

α = B^(−1) x = x
Example 1.16
Let us look at an ever-so-slightly more complicated basis of {(1, 1)^T, (1, −1)^T} = {h0, h1}. Then our basis matrix and inverse basis matrix become:

B = ( 1 1 ; 1 −1 )

B^(−1) = ( 1/2 1/2 ; 1/2 −1/2 )

and for this example it is given that

x = (3, 2)^T

Now we solve for α:

α = B^(−1) x = ( 1/2 1/2 ; 1/2 −1/2 ) (3 ; 2) = (2.5 ; 0.5)

and we get

x = 2.5 h0 + 0.5 h1

Exercise 1.8.1 (Solution on p. 48.)
Now we are given the following basis matrix and x:

{b0, b1} = { (1, 2)^T, (3, 0)^T },  x = (3, 2)^T

For this problem, make a sketch of the bases and then represent x in terms of b0 and b1.
note: A change of basis! We can view the multiplication by B^(−1) as transforming x from the standard basis to the basis {b0, b1}, i.e., looking at x from a "different perspective." Notice that this is a totally mechanical procedure.

We can also extend all these ideas past just R² and look at them in R^n and C^n. This procedure extends naturally to higher (finite) dimensions. Given a basis {b0, b1, . . . , b_(n−1)} for R^n, we want to find {α0, α1, . . . , α_(n−1)} such that

x = α0 b0 + α1 b1 + · · · + α_(n−1) b_(n−1)    (1.41)

Again, we will set up a basis matrix

B = ( b0 b1 b2 . . . b_(n−1) )

where the columns equal the basis vectors and it will always be an n×n matrix (although the above matrix does not appear to be square since we left terms in vector notation). We can then proceed to rewrite (1.39):

x = ( b0 b1 . . . b_(n−1) ) (α0, . . . , α_(n−1))^T = Bα

and

α = B^(−1) x
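In code this is a one-line solve. The MATLAB sketch below computes the coefficients for the basis of Example 1.16; the backslash operator solves Bα = x without explicitly forming B^(−1).

    B = [1 1; 1 -1];       % columns are the basis vectors h0 and h1 (Example 1.16)
    x = [3; 2];
    alpha = B \ x          % returns [2.5; 0.5], matching x = 2.5 h0 + 0.5 h1
    B * alpha              % reconstructs x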
1.9 Introduction to Fourier Analysis

Fourier postulated around 1807 that any periodic signal (equivalently finite length signal) can be built up as an infinite linear combination of harmonic sinusoidal waves. That is, given the collection

B = { e^(j(2π/T)nt) }_{n=−∞}^{∞}    (1.42)

any

f(t) ∈ L²[0, T)    (1.43)

can be approximated arbitrarily closely by

f(t) = Σ_{n=−∞}^{∞} Cn e^(j(2π/T)nt)    (1.44)
Fourier's claim was not accepted by the French Academy of Science (Laplace, Lagrange, Monge and LaCroix comprised the review committee) for several years after its presentation in 1807. It was not resolved for over a century, and its resolution is interesting and important to understand from a practical viewpoint. See more in the section on Gibbs Phenomena.

Fourier analysis is fundamental to understanding the behavior of signals and systems. This is a result of the fact that sinusoids are eigenfunctions of linear, time-invariant (LTI) systems. This is to say that if we pass any particular sinusoid through a LTI system, we get a scaled version of that same sinusoid on the output. Then, since Fourier analysis allows us to redefine the signals in terms of sinusoids, all we need to do is determine how any given system affects all possible sinusoids (its transfer function) and we have a complete understanding of the system. Furthermore, since we are able to define the passage of sinusoids through a system as multiplication of that sinusoid by the transfer function at the same frequency, we can convert the passage of any signal through a system from convolution in the time domain to multiplication in the frequency domain. These ideas are what give Fourier analysis its power.

Now, after hopefully having sold you on the value of this method of analysis, we must examine exactly what we mean by Fourier analysis. The four Fourier transforms that comprise this analysis are the Fourier Series, Continuous-Time Fourier Transform (Section 1.10), Discrete-Time Fourier Transform (Section 1.11) and Discrete Fourier Transform. For this document, we will view the Laplace Transform and Z-Transform (Section 3.3) as simply extensions of the CTFT and DTFT respectively. All of these transforms act essentially the same way, by converting a signal in time to an equivalent signal in frequency (sinusoids). However, depending on the nature of a specific signal (i.e., whether it is finite- or infinite-length and whether it is discrete- or continuous-time) there is an appropriate transform to convert the signal into the frequency domain. Below is a table of the four Fourier transforms and when each is appropriate. It also includes the relevant convolution for the specified space.
Transform | Time Domain | Frequency Domain | Convolution
Continuous-Time Fourier Series | L²([0, T)) | l²(Z) | Continuous-Time Circular
Continuous-Time Fourier Transform | L²(R) | L²(R) | Continuous-Time Linear
Discrete-Time Fourier Transform | l²(Z) | L²([0, 2π)) | Discrete-Time Linear
Discrete Fourier Transform | l²([0, N−1]) | l²([0, N−1]) | Discrete-Time Circular

Table 1.3
1.10 Continuous Time Fourier Transform (CTFT)

In this module, we will derive an expansion for any arbitrary continuous-time function, and in doing so, derive the Continuous Time Fourier Transform (CTFT).
22 "Eigenfunctions of LTI Systems" <http://cnx.org/content/m10500/latest/> 23 "System Classications and Properties" <http://cnx.org/content/m10084/latest/> 24 "Transfer Functions" <http://cnx.org/content/m0028/latest/> 25 "Properties of Continuous Time Convolution" <http://cnx.org/content/m10088/latest/> 26 "Continuous-Time Fourier Series (CTFS)" <http://cnx.org/content/m10097/latest/> 27 "Discrete Fourier Transform" <http://cnx.org/content/m0502/latest/> 28 "The Laplace Transform" <http://cnx.org/content/m10110/latest/> 29 This content is available online at <http://cnx.org/content/m10098/2.16/>.
Since complex exponentials are eigenfunctions of linear time-invariant (LTI) systems, calculating the output of an LTI system H given e^(st) as an input amounts to simple multiplication, where H(s) ∈ C is the eigenvalue corresponding to s. As shown in the figure, a simple exponential input would yield the output

y(t) = H(s) e^(st)    (1.45)

Using this and the fact that H is linear, calculating y(t) for combinations of complex exponentials is also straightforward. The action of H on an input that is a superposition Σ_n cn e^(sn t) is to independently scale each exponential component by the corresponding complex number H(sn) ∈ C, producing the output Σ_n cn H(sn) e^(sn t). As such, if we can write a function f(t) as a combination of complex exponentials it allows us to easily calculate the output of a system.
Now, we will look to use the power of complex exponentials to see how we may represent arbitrary signals in terms of a set of simpler functions by superposition of a number of complex exponentials. Below we will present the Continuous-Time Fourier Transform (CTFT), commonly known as just the Fourier Transform (FT). Because the CTFT deals with nonperiodic signals, we must find a way to include all real frequencies in the general equations. For the CTFT we simply utilize integration over real numbers rather than summation over integers in order to express the aperiodic signals.

Joseph Fourier demonstrated that an arbitrary periodic s(t) can be written as a linear combination of harmonic complex sinusoids

s(t) = Σ_{n=−∞}^{∞} cn e^(jω0 nt)    (1.46)

where ω0 = 2π/T is the fundamental frequency. For almost all s(t) of practical interest, there exists cn to make (1.46) true. If s(t) is finite energy (s(t) ∈ L²[0, T]), then the equality in (1.46) holds in the sense of energy convergence; if s(t) meets some mild conditions (the Dirichlet conditions), then (1.46) holds pointwise everywhere except at points of discontinuity.

The cn, called the Fourier coefficients, tell us "how much" of the sinusoid e^(jω0 nt) is in s(t). The formula shows s(t) as a sum of complex exponentials, each of which is easily processed by an LTI system (since it is an eigenfunction of every LTI system). Mathematically, it tells us that the set of complex exponentials { e^(jω0 nt), n ∈ Z } forms a basis for the space of physically meaningful signals.
1.10.2.1 Equations

Now, in order to take this useful tool and apply it to arbitrary non-periodic signals, we will have to delve deeper into the use of the superposition principle. Let sT(t) be a periodic signal having period T. We want to consider what happens to this signal's spectrum as the period goes to infinity. We denote the spectrum for any assumed value of the period by cn(T). We calculate the spectrum according to the Fourier formula for a periodic signal, known as the Fourier Series (for more on this derivation, see the section on Fourier Series):

cn = (1/T) ∫_0^T s(t) exp(−jω0 nt) dt    (1.47)

where ω0 = 2π/T and where we have used a symmetric placement of the integration interval about the origin for subsequent derivational convenience. We vary the frequency index n proportionally as we increase the period. Define

ST(f) ≜ T cn = ∫_{−T/2}^{T/2} sT(t) exp(−jω0 nt) dt    (1.48)

making the corresponding Fourier Series

sT(t) = Σ_{n=−∞}^{∞} (1/T) ST(f) exp(jω0 nt)    (1.49)

As the period increases, the spectral lines become closer together, becoming a continuum. Therefore,

s(t) = lim_{T→∞} sT(t) = ∫_{−∞}^{∞} S(f) exp(j2πft) df    (1.50)

with

S(f) = ∫_{−∞}^{∞} s(t) exp(−j2πft) dt    (1.51)

Continuous-Time Fourier Transform:

F(Ω) = ∫_{−∞}^{∞} f(t) e^(−(iΩt)) dt    (1.52)

Inverse CTFT:

f(t) = (1/2π) ∫_{−∞}^{∞} F(Ω) e^(iΩt) dΩ    (1.53)
warning: It is not uncommon to see the above formula written slightly differently. One of the most common differences is the way that the exponential is written. The above equations use the radial frequency variable Ω in the exponential, where Ω = 2πf, but it is also common to include the more explicit expression, i2πft, in the exponential. Click here for an overview of the notation used in Connexions' DSP modules.
Example 1.17
We know from Euler's formula that

cos(ωt) + sin(ωt) = ((1 − j)/2) e^(jωt) + ((1 + j)/2) e^(−jωt)
33 "DSP
notation" <http://cnx.org/content/m10161/latest/>
Exercise 1.10.1 (Solution on p. 48.)
Find the Fourier transform of the decaying exponential

x(t) = e^(−t) u(t)    (1.54)

Exercise 1.10.2 (Solution on p. 48.)
Find the inverse Fourier transform of the ideal lowpass filter defined by

X(Ω) = { 1 if |Ω| ≤ M ; 0 otherwise }    (1.55)
Because complex exponentials are eigenfunctions of LTI systems, it is often useful to represent signals using a set of complex exponentials as a basis. The continuous time Fourier series synthesis formula expresses a continuous time, periodic function as the sum of continuous time, discrete frequency complex exponentials:

f(t) = Σ_{n=−∞}^{∞} cn e^(jω0 nt)    (1.56)

The continuous time Fourier series analysis formula gives the coefficients of the Fourier series expansion:

cn = (1/T) ∫_0^T f(t) e^(−jω0 nt) dt    (1.57)

In both of these equations ω0 = 2π/T is the fundamental frequency.
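As a numeric illustration of (1.56) and (1.57), the MATLAB sketch below approximates the Fourier coefficients of an illustrative square wave by a Riemann sum and then evaluates a truncated synthesis sum.

    T = 1;  w0 = 2*pi/T;
    t = linspace(0, T, 1000);  dt = t(2) - t(1);
    s = double(t < T/2);                 % one period of a square wave
    f = zeros(size(t));
    for n = -15:15                       % keep 15 harmonics
        cn = (1/T) * sum(s .* exp(-1j*w0*n*t)) * dt;   % analysis (1.57)
        f = f + cn * exp(1j*w0*n*t);                   % synthesis (1.56)
    end
    plot(t, s, t, real(f))               % truncated series; note the Gibbs ripple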
1.11 Discrete Time Fourier Transform (DTFT)

In this module, we will derive an expansion for arbitrary discrete-time functions, and in doing so, derive the Discrete Time Fourier Transform (DTFT). Since complex exponentials are eigenfunctions of linear time-invariant (LTI) systems, calculating the output of an LTI system H given e^(iωn) as an input amounts to simple multiplication, where ω0 = 2πk/N and where H[k] ∈ C is the eigenvalue corresponding to k. As shown in the figure, a simple exponential input would yield the output

y[n] = H[k] e^(iωn)    (1.58)

Figure 1.15: A simple LTI system.

Using this and the fact that H is linear, calculating y[n] for combinations of complex exponentials is also straightforward. The action of H on a superposition of complex exponentials is to independently scale each exponential component e^(iωl n) by a different complex number H[kl] ∈ C, producing the output Σ_l cl H[kl] e^(iωl n). As such, if we can write a function y[n] as a combination of complex exponentials it allows us to easily calculate the output of a system.
34 This content is available online at <http://cnx.org/content/m10108/2.18/>. 35 "Continuous Time Complex Exponential" <http://cnx.org/content/m10060/latest/> 36 "Eigenfunctions of LTI Systems" <http://cnx.org/content/m10500/latest/>
Now, we will look to use the power of complex exponentials to see how we may represent arbitrary signals in terms of a set of simpler functions by superposition of a number of complex exponentials. Below we will present the Discrete-Time Fourier Transform (DTFT). Because the DTFT deals with nonperiodic signals, we must find a way to include all real frequencies in the general equations. For the DTFT we simply utilize summation over all real numbers rather than summation over integers in order to express the aperiodic signals.

It can be demonstrated that an arbitrary discrete time periodic function f[n] can be written as a linear combination of harmonic complex sinusoids

f[n] = Σ_{k=0}^{N−1} ck e^(iω0 kn)    (1.59)

where ω0 = 2π/N is the fundamental frequency. For almost all f[n] of practical interest, there exists cn to make (1.59) true. If f[n] is finite energy (f[n] ∈ L²[0, N]), then the equality in (1.59) holds in the sense of energy convergence; with discrete-time signals, there are no concerns for divergence as there are with continuous-time signals.

The cn, called the Fourier coefficients, tell us "how much" of the sinusoid e^(jω0 kn) is in f[n]. The formula shows f[n] as a sum of complex exponentials, each of which is easily processed by an LTI system (since it is an eigenfunction of every LTI system). Mathematically, it tells us that the set of complex exponentials { e^(jω0 kn), k ∈ Z } forms a basis for the space of N-periodic discrete time functions.
1.11.2.1 Equations

Now, in order to take this useful tool and apply it to arbitrary non-periodic signals, we will have to delve deeper into the use of the superposition principle. Let sT(t) be a periodic signal having period T. We want to consider what happens to this signal's spectrum as the period goes to infinity. We denote the spectrum for any assumed value of the period by cn(T). We calculate the spectrum according to the Fourier formula for a periodic signal, known as the Fourier Series (for more on this derivation, see the section on Fourier Series):

cn = (1/T) ∫_0^T s(t) exp(−jω0 nt) dt    (1.60)

where ω0 = 2π/T and where we have used a symmetric placement of the integration interval about the origin for subsequent derivational convenience. We vary the frequency index n proportionally as we increase the period. Define

ST(f) ≜ T cn = ∫_{−T/2}^{T/2} sT(t) exp(−jω0 nt) dt    (1.61)

making the corresponding Fourier Series

sT(t) = Σ_{n=−∞}^{∞} (1/T) ST(f) exp(jω0 nt)    (1.62)

As the period increases, the spectral lines become closer together, becoming a continuum. Therefore,

s(t) = lim_{T→∞} sT(t) = ∫_{−∞}^{∞} S(f) exp(j2πft) df    (1.63)

with

S(f) = ∫_{−∞}^{∞} s(t) exp(−j2πft) dt    (1.64)

Discrete-Time Fourier Transform:

F(ω) = Σ_{n=−∞}^{∞} f[n] e^(−(iωn))    (1.65)

Inverse DTFT:

f[n] = (1/2π) ∫_{2π} F(ω) e^(iωn) dω    (1.66)

where the integral is taken over any interval of length 2π.
warning: It is not uncommon to see the above formula written slightly differently. One of the most common differences is the way that the exponential is written. The above equations use the radial frequency variable ω in the exponential, where ω = 2πf, but it is also common to include the more explicit expression, i2πft, in the exponential. Sometimes DTFT notation is expressed as F(e^(iω)), to make it clear that it is not a CTFT (which is denoted as F(Ω)). Click here for an overview of the notation used in Connexions' DSP modules.

37 "DSP notation" <http://cnx.org/content/m10161/latest/>
Because complex exponentials are eigenfunctions of LTI systems, it is often useful to represent signals using a set of complex exponentials as a basis. The discrete time Fourier transform analysis formula takes a discrete time domain signal and represents the signal in the continuous frequency domain:

F(ω) = Σ_{n=−∞}^{∞} f[n] e^(−(iωn))    (1.67)

The discrete time Fourier transform synthesis formula expresses a discrete time, aperiodic function as the infinite sum of continuous frequency complex exponentials:

f[n] = (1/2π) ∫_{2π} F(ω) e^(iωn) dω    (1.68)

where the integral is taken over any interval of length 2π.
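As an illustration, the DTFT analysis formula (1.67) can be evaluated on a dense frequency grid by direct summation; this MATLAB sketch does so for an illustrative length-4 pulse.

    f = [1 1 1 1];                 % f[n] for n = 0..3
    n = 0:length(f)-1;
    w = linspace(-pi, pi, 512);    % frequency grid covering one 2*pi period
    F = zeros(size(w));
    for k = 1:length(w)
        F(k) = sum(f .* exp(-1i*w(k)*n));   % F(w) = sum_n f[n] e^(-iwn)
    end
    plot(w, abs(F))                % magnitude of the DTFT of the pulse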
1.12 DFT as a Matrix Operation
Recall the following notation and operations:

Vectors in R^N: x = (x0, x1, . . . , x_(N−1))^T, xi ∈ R

Vectors in C^N: x = (x0, x1, . . . , x_(N−1))^T, xi ∈ C

Transposition:
a. transpose: x^T = (x0, x1, . . . , x_(N−1))
b. conjugate (Hermitian) transpose: x^H = (x̄0, x̄1, . . . , x̄_(N−1))

Inner product:
a. real: x^T y = Σ_{i=0}^{N−1} xi yi
b. complex: x^H y = Σ_{n=0}^{N−1} x̄n yn

Matrix multiplication: y = Ax, where A is an N×N matrix with entries a_kn and

y_k = Σ_{n=0}^{N−1} a_kn x_n

Matrix transposition: A^T swaps rows and columns, (A^T)_nk = a_kn. The matrix Hermitian transpose combines transposition with complex conjugation: A^H = conj(A^T).
The DFT can be written compactly in vector-matrix notation. Collect the signal and its DFT into vectors

x = (x[0], x[1], ..., x[N−1])ᵀ ∈ C^N,  X = (X[0], X[1], ..., X[N−1])ᵀ ∈ C^N

Here x and X are related by the analysis (DFT) formula

X[k] = Σ_{n=0}^{N−1} a_{kn} x[n]

where

a_{kn} = e^{−i(2π/N)kn} = W_N^{kn}

so

X = W x

where W is the N×N matrix with entries W_N^{kn} and W_N = e^{−i2π/N}. The synthesis (inverse DFT) formula is

x[n] = (1/N) Σ_{k=0}^{N−1} X[k] e^{i(2π/N)nk}

where

e^{i(2π/N)nk} = W̄_N^{nk}

so

x = (1/N) W^H X

where W^H is the matrix Hermitian transpose of W and X is the DFT vector.
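As a quick numerical check of these two matrix relations (a minimal sketch; the variable names are ours), one can build W explicitly in MATLAB and compare against the built-in fft:

% Build the N x N DFT matrix W with entries W_N^(kn) = exp(-i*2*pi*k*n/N).
N = 8;
[k, n] = meshgrid(0:N-1, 0:N-1);   % index grids (W is symmetric in k and n)
W = exp(-1i*2*pi*k.*n/N);
x = randn(N,1);                    % arbitrary test vector
X = W*x;                           % DFT as a matrix-vector product
err_fwd = norm(X - fft(x))         % should be ~1e-15
xr = (1/N)*(W')*X;                 % inverse DFT via (1/N) W^H X
err_inv = norm(xr - x)             % should be ~1e-15

Since W^H W = N I, the DFT matrix is √N times a unitary matrix; this is why the inverse needs only a conjugate transpose and a 1/N scale.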
1.13 The FFT Algorithm

Recall the DFT: for each k, 0 ≤ k ≤ N−1,

X[k] = Σ_{n=0}^{N−1} x[n] e^{−i2π(k/N)n}

For each k we must execute on the order of N complex multiplies and adds, and there are N values of k, so computing an N-point DFT directly is O(N²):

N    1    10     100      1000    10⁶
N²   1    100    10,000   10⁶     10¹²

Table 1.4

The FFT (fast Fourier transform) is an efficient way of computing the DFT. The FFT requires only O(N logN) operations to compute an N-point DFT:

N        10     1000    10⁶
N²       100    10⁶     10¹²
N logN   10     3000    6×10⁶

How long is 10¹² seconds? How long is 6×10⁶ seconds? For large N the difference is dramatic.
The FFT exploits the structure of the complex exponential

W_N^{kn} = e^{−i(2π/N)kn},  where W_N = e^{−i(2π/N)}

Rule 1.1: Complex conjugate symmetry

W_N^{k(N−n)} = W_N^{−kn} = W̄_N^{kn}

Rule 1.2: Periodicity in n and k

W_N^{kn} = W_N^{k(N+n)} = W_N^{(k+N)n}

e^{−i(2π/N)kn} = e^{−i(2π/N)k(N+n)} = e^{−i(2π/N)(k+N)n}

There are many different FFT algorithms; the basic idea of all of them is to build a DFT out of smaller and smaller DFTs by decomposing x[n] into smaller and smaller subsequences. We assume N = 2^m (a power of 2).
1.13.3.1 Derivation

N is even, so we can compute X[k] by separating x[n] into two length-N/2 subsequences: x[n] for n even, and x[n] for n odd. For each k, 0 ≤ k ≤ N−1,

X[k] = Σ_{n=0}^{N−1} x[n] W_N^{kn} = Σ_{n even} x[n] W_N^{kn} + Σ_{n odd} x[n] W_N^{kn}

Writing n = 2r for the even samples and n = 2r + 1 for the odd samples, where 0 ≤ r ≤ N/2 − 1,

X[k] = Σ_{r=0}^{N/2−1} x[2r] W_N^{2kr} + Σ_{r=0}^{N/2−1} x[2r+1] W_N^{k(2r+1)}   (1.69)

where W_N² = e^{−i(2π/N)·2} = e^{−i2π/(N/2)} = W_{N/2}. So

X[k] = Σ_{r=0}^{N/2−1} x[2r] W_{N/2}^{kr} + W_N^k Σ_{r=0}^{N/2−1} x[2r+1] W_{N/2}^{kr}

where Σ_{r=0}^{N/2−1} x[2r] W_{N/2}^{kr} is the N/2-point DFT of the even samples (call it G[k]) and Σ_{r=0}^{N/2−1} x[2r+1] W_{N/2}^{kr} is the N/2-point DFT of the odd samples (call it H[k]). Thus, for each k, 0 ≤ k ≤ N−1:

X[k] = G[k] + W_N^k H[k]

This is the decomposition of an N-point DFT as a sum of two N/2-point DFTs.

Computational cost: a direct N-point DFT costs N² complex mults and adds, while the decomposition costs

(N/2)² + (N/2)² + N = N²/2 + N

where the first part is the number of complex mults and adds for the N/2-point DFT G[k], the second part is the number of complex mults and adds for the N/2-point DFT H[k], and the third part is the number of complex mults and adds for the combination. The total is N²/2 + N complex mults and adds.
So why stop here?! Keep decomposing. Break each of the N/2-point DFTs into two N/4-point DFTs, etc. We can keep decomposing:

N/2, N/4, N/8, ..., N/2^{m−1}, N/2^m = 1

where m = log₂N; that is, the decomposition can be carried out m = log₂N times.

Computational cost: an N-point DFT costs N². Replacing it with two N/2-point DFTs reduces the cost to 2(N/2)² + N = N²/2 + N. Replacing each N/2-point DFT with two N/4-point DFTs reduces the cost further to

4(N/4)² + 2N = N²/4 + 2N

and, in general, after p stages of decomposition the cost is

N²/2^p + pN

As we keep going, with p = m = log₂N stages the cost becomes

N²/N + N log₂N = N + N log₂N

Since N ≪ N log₂N for large N, the total cost is approximately N log₂N; that is, the FFT is "O(N log₂N)".
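The derivation above translates directly into a recursive program. Below is a minimal MATLAB sketch (myfft is our own illustrative name, not a built-in; it assumes the length of x is a power of 2, and should be saved as myfft.m):

function X = myfft(x)
% Radix-2 decimation-in-time FFT, following X[k] = G[k] + W_N^k H[k].
x = x(:);
N = length(x);
if N == 1
    X = x;                       % a 1-point DFT is the sample itself
    return
end
G = myfft(x(1:2:end));           % N/2-point DFT of even-indexed samples
H = myfft(x(2:2:end));           % N/2-point DFT of odd-indexed samples
W = exp(-1i*2*pi*(0:N/2-1).'/N); % twiddle factors W_N^k, k = 0..N/2-1
X = [G + W.*H; G - W.*H];        % second half uses W_N^(k+N/2) = -W_N^k
end

The second line of the combination follows from the N/2-periodicity of G[k] and H[k] together with W_N^{k+N/2} = −W_N^k. One can check the sketch with norm(myfft(x) - fft(x)).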
Solutions to Exercises in Chapter 1

Solution (signal expansion): To express x in terms of b₀ and b₁, stack the basis vectors as the columns of a matrix:

B = [ 1  2 ; 3  0 ],  B⁻¹ = [ 0  1/3 ; 1/2  −1/6 ]

The coefficient vector is then

α = B⁻¹ x = ( 1, 2/3 )ᵀ

And now we can write x in terms of b₀ and b₁:

x = b₀ + (2/3) b₁

Solution (CTFT): In order to calculate the Fourier transform, all we need to use is (1.52) (Continuous-Time Fourier Transform), complex exponentials, and basic calculus. For f(t) = e^{−αt} u(t),

F(Ω) = ∫_{−∞}^{∞} f(t) e^{−iΩt} dt = ∫_{0}^{∞} e^{−(α+iΩ)t} dt   (1.70)

so that

F(Ω) = 1/(α + iΩ)   (1.71)

Solution (inverse CTFT): For the brick-wall spectrum F(Ω) = 1 for |Ω| ≤ M and 0 otherwise,

x(t) = (1/2π) ∫_{−M}^{M} e^{iΩt} dΩ = sin(Mt)/(πt)   (1.72)

that is,

x(t) = (M/π) sinc(Mt)   (1.73)

with sinc(u) = sin(u)/u; the peak value M/π occurs at t = 0.
Chapter 2

Sampling and Frequency Analysis

2.1 Introduction

Why sample analog signals and process them digitally? Digital processing offers, among other advantages:

- robustness towards noise, meaning we can send more bits/s
- use of flexible processing equipment, in particular the computer
- more reliable processing equipment
- easier to adapt complex algorithms

Figure 2.1: Claude Shannon.

Claude Shannon has been called the father of information theory, mainly due to his landmark papers on the mathematical theory of communications. Harry Nyquist discovered the sampling theorem in 1928, but it was not proven until Shannon proved it 21 years later in the paper "Communication in the Presence of Noise".
2.1.3 Notation

In this chapter we will be using the following notation:

- Original analog signal: x(t)
- Sampling frequency: Fs
- Sampling interval: Ts (note that Ts = 1/Fs)
- Sampled signal: xs(n) (note that xs(n) = x(nTs))
- Real angular frequency: Ω
- Digital angular frequency: ω (note that ω = ΩTs)

2.1.4 The Sampling Theorem

note (the sampling theorem): When sampling an analog signal the sampling frequency must be greater than twice the highest frequency component of the analog signal to be able to reconstruct the original signal from the sampled version.
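The theorem is easy to see numerically by sampling a sinusoid above and below its Nyquist rate (a minimal MATLAB sketch; the particular frequencies are arbitrary choices of ours):

% A 100 Hz sinusoid sampled above and below its Nyquist rate of 200 Hz.
f0 = 100;
t  = linspace(0, 0.05, 5000);        % dense grid standing in for "analog" time
x  = cos(2*pi*f0*t);
Fs1 = 1000; n1 = 0:1/Fs1:0.05;       % Fs > 2*f0: samples track the waveform
Fs2 = 150;  n2 = 0:1/Fs2:0.05;       % Fs < 2*f0: aliasing occurs
plot(t, x, n1, cos(2*pi*f0*n1), 'o', n2, cos(2*pi*f0*n2), 's')
% The Fs = 150 Hz samples are indistinguishable from samples of a
% |150 - 100| = 50 Hz sinusoid: the 100 Hz tone aliases down to 50 Hz.
hold on, plot(t, cos(2*pi*50*t), '--'), hold off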
2.1.5

Finished? Have a look at: Proof (Section 2.2); Illustrations (Section 2.3); Matlab Example (Section 2.4).
2.2 Proof

note (the sampling theorem): In order to recover the signal x(t) from its samples exactly, it is necessary to sample x(t) at a rate greater than twice its highest frequency component.

2.2.1 Introduction

As mentioned earlier (p. 49), sampling is the necessary fundament when we want to apply digital signal processing on analog signals. Here we present the proof of the sampling theorem. The proof is divided in two parts. First we find an expression for the spectrum of the signal resulting from sampling the original signal x(t). Next we show that the signal x(t) can be recovered from the samples. Often it is easier using the frequency domain when carrying out a proof, and this is also the case here.

Key points in the proof:

- We find an equation (2.8) for the spectrum of the sampled signal
- We find a simple method to reconstruct (2.14) the original signal
- The sampled signal has a periodic spectrum, and the period is 2πFs

2.2.2 Proof part 1 - Spectral considerations

By sampling x(t) every Ts second we obtain xs(n). The inverse DTFT of the sampled signal is

xs(n) = (1/2π) ∫_{−π}^{π} Xs(e^{iω}) e^{iωn} dω   (2.1)

For convenience we express the equation in terms of the real angular frequency Ω using ω = ΩTs. We then obtain

xs(n) = (Ts/2π) ∫_{−π/Ts}^{π/Ts} Xs(e^{iΩTs}) e^{iΩTs n} dΩ   (2.2)

The inverse CTFT of the original signal is

x(t) = (1/2π) ∫_{−∞}^{∞} X(iΩ) e^{iΩt} dΩ   (2.3)

From this equation we find an expression for x(nTs):

x(nTs) = (1/2π) ∫_{−∞}^{∞} X(iΩ) e^{iΩnTs} dΩ   (2.4)
To account for the difference in region of integration we split the integration in (2.4) into subintervals of length 2π/Ts and then take the sum over the resulting integrals to obtain the complete area:

x(nTs) = (1/2π) Σ_{k=−∞}^{∞} ∫_{(2k−1)π/Ts}^{(2k+1)π/Ts} X(iΩ) e^{iΩnTs} dΩ   (2.5)

Then we change the integration variable, setting Ω = η + 2πk/Ts:

x(nTs) = (1/2π) Σ_{k=−∞}^{∞} ∫_{−π/Ts}^{π/Ts} X(i(η + 2πk/Ts)) e^{i(η + 2πk/Ts)nTs} dη   (2.6)

We obtain the final form by observing that e^{i2πkn} = 1, reinserting η = Ω, and multiplying by Ts/Ts:

x(nTs) = (Ts/2π) ∫_{−π/Ts}^{π/Ts} (1/Ts) Σ_{k=−∞}^{∞} X(i(Ω + 2πk/Ts)) e^{iΩnTs} dΩ   (2.7)

To make xs(n) = x(nTs) for all n, the integrands of (2.2) and (2.7) must agree, that is,

Xs(e^{iΩTs}) = (1/Ts) Σ_{k=−∞}^{∞} X(i(Ω + 2πk/Ts))   (2.8)

This is a central result. We see that the digital spectrum consists of a sum of shifted versions of the original, analog spectrum. Observe the periodicity! We can also express this relation in terms of the digital angular frequency ω = ΩTs:

Xs(e^{iω}) = (1/Ts) Σ_{k=−∞}^{∞} X(i(ω + 2πk)/Ts)   (2.9)

This concludes the first part of the proof. Now we want to find a reconstruction formula, so that we can recover x(t) from xs(n).
2.2.3 Proof part 2 - Signal reconstruction

For a bandlimited signal the inverse CTFT is

x(t) = (1/2π) ∫_{−π/Ts}^{π/Ts} X(iΩ) e^{iΩt} dΩ   (2.10)

In the interval we are integrating over, X(iΩ) = Ts Xs(e^{iΩTs}), so

x(t) = (Ts/2π) ∫_{−π/Ts}^{π/Ts} Xs(e^{iΩTs}) e^{iΩt} dΩ   (2.11)

Using the DTFT relation for Xs(e^{iΩTs}) we have

x(t) = (Ts/2π) ∫_{−π/Ts}^{π/Ts} Σ_{n=−∞}^{∞} xs(n) e^{−iΩnTs} e^{iΩt} dΩ   (2.12)

Interchanging integration and summation,

x(t) = (Ts/2π) Σ_{n=−∞}^{∞} xs(n) ∫_{−π/Ts}^{π/Ts} e^{iΩ(t − nTs)} dΩ   (2.13)

Finally we perform the integration and arrive at the important reconstruction formula

x(t) = Σ_{n=−∞}^{∞} xs(n) · sin((π/Ts)(t − nTs)) / ((π/Ts)(t − nTs))   (2.14)

That is, the original signal is recovered by sinc interpolation of the samples.
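Equation (2.14) can be tried directly in MATLAB. The following is a minimal sketch with a truncated sum (sinc here is MATLAB's sin(πu)/(πu), from the Signal Processing Toolbox; the signal and rates are arbitrary choices of ours):

% Sinc reconstruction (2.14): rebuild x(t) on a fine grid from samples x(nTs).
Fs = 8; Ts = 1/Fs;                 % sampling rate well above 2*f0
f0 = 1;                            % frequency of the bandlimited test signal
n  = -40:40;                       % finite sample window (truncates (2.14))
xs = cos(2*pi*f0*n*Ts);            % samples x(nTs)
t  = linspace(-2, 2, 1000);        % dense "continuous" time grid
x  = zeros(size(t));
for k = 1:length(n)                % sum of shifted sinc interpolation kernels
    x = x + xs(k)*sinc((t - n(k)*Ts)/Ts);
end
plot(t, x, t, cos(2*pi*f0*t), '--')   % reconstruction vs. original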
2.2.4 Summary

note (spectrum of the sampled signal):

Xs(e^{iΩTs}) = (1/Ts) Σ_{k=−∞}^{∞} X(i(Ω + 2πk/Ts))

note (reconstruction formula):

x(t) = Σ_{n=−∞}^{∞} xs(n) · sin((π/Ts)(t − nTs)) / ((π/Ts)(t − nTs))

2.2.5

Go to Introduction (Section 2.1); Illustrations (Section 2.3); Matlab Example (Section 2.4).
2.3 Illustrations

In this module we illustrate the processes involved in sampling and reconstruction. To see how these processes work together as a whole, take a look at the system view (Section 2.5). In Sampling and reconstruction with Matlab (Section 2.4) we provide a Matlab script for download. The Matlab script shows the process of sampling and reconstruction live.

Example 2.2

The sampling theorem can also be applied in two dimensions, i.e. for image analysis. A 2D sampling theorem has a simple physical interpretation in image analysis: Choose the sampling interval such that it is less than or equal to half of the smallest interesting detail in the image.

2.3.1 The process of sampling

We start off with an analog signal, which is sampled uniformly every Ts seconds, that is, at times t = nTs.

Figure 2.2: The analog signal and its uniformly spaced samples at t = nTs.
In signal processing it is often more convenient and easier to work in the frequency domain. So let's look at the signal in the frequency domain, Figure 2.3. For illustration purposes we take the frequency content of the signal as a triangle. (If you Fourier transform the signal in Figure 2.2 you will not get such a nice triangle.)

Figure 2.3: The spectrum X(iΩ) of the analog signal, drawn as a triangle for illustration.

Notice that the signal in Figure 2.3 is bandlimited. We can see that the signal is bandlimited because X(iΩ) is zero outside the interval [−Ωg, Ωg]. Equivalently, the signal contains no frequencies above Fg = Ωg/2π.

Now let's take a look at the sampled signal in the frequency domain. While proving (Section 2.2) the sampling theorem we found that the spectrum of the sampled signal consists of a sum of shifted versions of the analog spectrum. Mathematically this is described by the following equation:

Xs(e^{iΩTs}) = (1/Ts) Σ_{k=−∞}^{∞} X(i(Ω + 2πk/Ts))   (2.15)

2.3.2 Sampling fast enough

In Figure 2.4 we show the result of sampling x(t) according to the sampling theorem (Section 2.1.4: The Sampling Theorem). This means that when sampling the signal in Figure 2.2/Figure 2.3 we use Fs ≥ 2Fg. Observe in Figure 2.4 that we have the same spectrum as in Figure 2.3 for Ω ∈ [−Ωg, Ωg], except for the scaling factor 1/Ts. This is a consequence of the sampling frequency. As mentioned in the proof (Key points in the proof, p. 51), the spectrum of the sampled signal is periodic with period 2πFs = 2π/Ts.

Figure 2.4: The spectrum of the sampled signal when Fs ≥ 2Fg.
So now we are, according to the sampling theorem (Section 2.1.4: The Sampling Theorem), able to reconstruct the original signal exactly. How we can do this will be explored further down under reconstruction (Section 2.3.3: Reconstruction). But first we will take a look at what happens when we sample too slowly.

2.3.2.1 Sampling too slowly

If we sample x(t) too slowly, that is Fs < 2Fg, we will get overlap between the repeated spectra, see Figure 2.5. This overlap gives rise to the concept of aliasing.

note: If the sampling frequency is less than twice the highest frequency component, then frequencies in the original signal that are above half the sampling rate will be "aliased" and will appear in the resulting signal as lower frequencies.

The consequence of aliasing is that we cannot recover the original signal, so aliasing has to be avoided. Sampling too slowly will produce a sequence xs(n) that could have originated from a number of different signals, so there is no chance of recovering the original signal.

Figure 2.5: The spectrum of the sampled signal when Fs < 2Fg; the repeated spectra overlap (aliasing).

To avoid aliasing we have to sample fast enough. But if we can't sample fast enough (possibly due to costs) we can include an anti-aliasing filter. This will not enable us to get an exact reconstruction but can still be a good solution.

note (anti-aliasing filter): Typically a low-pass filter that is applied before sampling to ensure that no components with frequencies greater than half the sample frequency remain.
2.3.3 Reconstruction

Given the signal in Figure 2.4 we want to recover the original signal, but the question is how? When there is no overlapping in the spectrum, the spectral component given by k = 0 (see (2.15)) is equal to the spectrum of the analog signal. This offers an opportunity to use a simple reconstruction process. Remember what you have learned about filtering. What we want is to change the signal in Figure 2.4 into that of Figure 2.3. To achieve this we have to remove all the extra components generated in the sampling process. To remove the extra components we apply an ideal analog low-pass filter as shown in Figure 2.6. As we see, the ideal filter is rectangular in the frequency domain. A rectangle in the frequency domain corresponds to a sinc function in the time domain (and vice versa).

Figure 2.6: The ideal reconstruction (low-pass) filter.

Then we have reconstructed the original spectrum, and as we know, if two signals are identical in the frequency domain, they are also identical in the time domain. End of reconstruction.
2.3.4 Conclusions

The Shannon sampling theorem requires that the input signal prior to sampling is band-limited to at most half the sampling frequency. Under this condition the samples give an exact signal representation. It is truly remarkable that such a broad and useful class of signals can be represented that easily! We also looked into the problem of reconstructing the signal from its samples. Again the simplicity of the principle is striking: linear filtering by an ideal low-pass filter will do the job. However, the ideal low-pass filter is impossible to create, but that is another story.

2.3.5

Go to Introduction (Section 2.1); Proof (Section 2.2); Matlab Example (Section 2.4).

2.4 Sampling and Reconstruction with Matlab

A Matlab script demonstrating sampling and reconstruction, Samprecon.m, is available for download at http://cnx.rice.edu/content/m11549/latest/Samprecon.m.

2.4.2

Introduction (Section 2.1); Proof (Section 2.2); Illustrations (Section 2.3); System view (Section 2.5).

2.5 Systems View of Sampling and Reconstruction

2.5.1 Ideal reconstruction system
Figure 2.7 shows the ideal reconstruction system based on the results of the sampling theorem proof (Section 2.2). Figure 2.7 consists of a sampling device which produces a time-discrete sequence xs(n). The reconstruction filter, h(t), is an ideal analog sinc filter, with h(t) = sinc(t/Ts). We can't apply the time-discrete sequence xs(n) directly to the analog filter h(t); to solve this problem we turn the sequence into an analog signal using delta functions. Thus we write xs(t) = Σ_{n=−∞}^{∞} xs(n) δ(t − nTs).

Figure 2.7: The ideal reconstruction system.

But when will the system produce an output x̂(t) = x(t)? According to the sampling theorem (Section 2.1.4: The Sampling Theorem) we have x̂(t) = x(t) when the sampling frequency, Fs, is at least twice the highest frequency component of x(t).

2.5.2 Ideal system including anti-aliasing

To avoid aliasing it is customary to apply an anti-aliasing (lowpass) filter before the sampler, as shown in Figure 2.8.

Figure 2.8: Ideal reconstruction system with an anti-aliasing filter.

Again we ask the question: when will the system produce an output x̂(t) = s(t)? If the signal is entirely confined within the passband of the lowpass filter, we will get perfect reconstruction if Fs is high enough. But if the anti-aliasing filter removes the "higher" frequencies (which in fact is the job of the anti-aliasing filter), we will never be able to exactly reconstruct the original signal s(t). If we sample fast enough we can reconstruct x(t), the filtered signal, which in most cases is satisfying. The reconstructed signal, x̂(t), will not have aliased frequencies. This is essential for further use of the signal.

2.5.3 Reconstruction with hold operation

To make our reconstruction system realizable there are many things to look into. Among them is the fact that any practical reconstruction system must input finite-length pulses into the reconstruction filter; this can be accomplished by the hold operation. To alleviate the distortion caused by the hold operation, we apply the output from the hold device to a compensator. The compensation can be as accurate as we wish, this is a matter of economy.

Figure 2.9: A more practical reconstruction system with a hold component.

By the use of the hold component the reconstruction will not be exact, but as mentioned above we can get as close as we want.

2.5.4

Introduction (Section 2.1); Proof (Section 2.2); Illustrations (Section 2.3); Matlab example (Section 2.4).
38
xc (t)
directly to
x [n].
xs (t) =
n=
Xs ()
= = = = =
X ( ) x [n]. 1 T
where
and
X ( )
is the DTFT of
note:
Xs () =
Xc ( k s )
k=
33 "Hold operation" <http://cnx.org/content/m11458/latest/> 34 "Hold operation" <http://cnx.org/content/m11458/latest/> 35 "Hold operation" <http://cnx.org/content/m11458/latest/> 36 "Aliasing Applet" <http://cnx.org/content/m11448/latest/> 37 "Exercises" <http://cnx.org/content/m11442/latest/> 38 This content is available online at <http://cnx.org/content/m10994/2.2/>.
61
X ( )
= =
1 T 1 T
k= k=
Xc ( k s ) Xc
2k T
(2.17)
2 -periodic.
2.6.1.1 Sampling
Figure 2.10
62
CHAPTER 2.
Figure 2.11
Figure 2.12:
xs (t) =
nn
x (nT ) (t nT )
63
Figure 2.13
x (t)
where
1 X
(2.18)
1 is a scaling in frequency.
Xs () X (T )
(2.19)
39
2.7 The DFT: Frequency Domain with a Computer Analysis

2.7.1 Introduction

We just covered ideal (and non-ideal) (time) sampling of CT signals (Section 2.6). This enabled DT signal processing solutions for CT applications (Figure 2.14):

Figure 2.14: A discrete-time processing chain for continuous-time signals.

Much of the theoretical analysis of such systems relied on frequency domain representations. How do we carry out these frequency domain analyses on the computer? Recall the following relationships:

x[n] ←DTFT→ X(ω),  x(t) ←CTFT→ X(Ω)

where ω and Ω are continuous frequency variables.

2.7.1.1 Sampling the DTFT

Consider the DTFT of a discrete-time (DT) signal x[n]. Assume x[n] is of finite duration N (i.e., an N-point signal):

X(ω) = Σ_{n=0}^{N−1} x[n] e^{−iωn}   (2.20)

where X(ω) is a continuous function indexed by the real-valued parameter −π ≤ ω ≤ π. The other function, x[n], is a discrete function indexed by integers. We want to work with X(ω) on a computer. Why not just sample X(ω)?

X[k] = X((2π/N)k) = Σ_{n=0}^{N−1} x[n] e^{−i2π(k/N)n}   (2.21)

In (2.21) we sampled at ωk = 2πk/N where k = {0, 1, ..., N−1}. X[k] for k = {0, ..., N−1} is called the Discrete Fourier Transform (DFT) of x[n].

Figure 2.15: Finite Duration DT Signal.

The DTFT of the signal in Figure 2.15 (Finite Duration DT Signal) is written as follows:

X(ω) = Σ_{n=0}^{N−1} x[n] e^{−iωn}   (2.22)

where ω is any 2π-interval, for example −π ≤ ω ≤ π.

Sample X(ω):

Figure 2.16: Sampling X(ω) at ωk = 2πk/M, where k = {0, 1, ..., M−1}, for example with M = 10.

In the following section (Section 2.7.1.1.1: Choosing M) we will discuss in more detail how we should choose M, the number of samples in the 2π interval.

2.7.1.1.1 Choosing M

2.7.1.1.1.1 Case 1

Given N (the length of x[n]), choose M ≫ N to obtain a dense sampling of the DTFT (Figure 2.17). (This is precisely how we would plot X(ω) in Matlab.)

Figure 2.17.

2.7.1.1.1.2 Case 2

Choose M as small as possible (to minimize the amount of computation). In general, we require M ≥ N in order to represent all of the information in x[n], n = {0, ..., N−1}. Let's concentrate on M = N:

x[n] ←DFT→ X[k]  for n = {0, ..., N−1} and k = {0, ..., N−1}
Define

X[k] ≡ X((2π/N)k)   (2.23)

where N = length(x[n]) and k = {0, ..., N−1}. In this case, M = N.

Discrete Fourier Transform (DFT):

X[k] = Σ_{n=0}^{N−1} x[n] e^{−i2π(k/N)n}   (2.24)

Inverse DFT (IDFT):

x[n] = (1/N) Σ_{k=0}^{N−1} X[k] e^{i2π(k/N)n}   (2.25)
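MATLAB's fft and ifft compute (2.24) and (2.25) directly (fft carries no scale factor; ifft carries the 1/N). A minimal numerical check with an arbitrary test signal:

% Verify that fft/ifft implement (2.24) and (2.25).
N = 16;
x = randn(1, N);
n = 0:N-1; k = 0:N-1;
X = x * exp(-1i*2*pi/N).^(n.'*k);          % (2.24) written as an explicit sum
err_dft = norm(X - fft(x))                  % ~1e-14
xr = (1/N) * X * exp(1i*2*pi/N).^(k.'*n);   % (2.25)
err_idft = norm(xr - x)                     % ~1e-14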
2.7.2.1 Interpretation

Represent x[n] in terms of a sum of N complex sinusoids of amplitudes X[k] and frequencies

ωk = 2πk/N,  k ∈ {0, ..., N−1}

note: a Fourier series with fundamental frequency 2π/N.

2.7.2.1.1 Remark 1

The IDFT treats x[n] as though it were N-periodic.

x[n] = (1/N) Σ_{k=0}^{N−1} X[k] e^{i2π(k/N)n}   (2.26)

where n ∈ {0, ..., N−1}.

Exercise 2.7.1 (Solution on p. 107.)
What about other values of n?

2.7.2.1.2 Remark 2

Proof that the IDFT inverts the DFT for n ∈ {0, ..., N−1}:

(1/N) Σ_{k=0}^{N−1} X[k] e^{i2π(k/N)n} = (1/N) Σ_{k=0}^{N−1} Σ_{m=0}^{N−1} x[m] e^{−i2π(k/N)m} e^{i2π(k/N)n}
= Σ_{m=0}^{N−1} x[m] ((1/N) Σ_{k=0}^{N−1} e^{i2π(k/N)(n−m)}) = x[n]   (2.27)

since the inner sum equals 1 when m = n and 0 otherwise.

Example 2.3: Computing the DFT

Consider the length-4 signal x[n] = 1 for n = {0, 1, 2, 3} (Figure 2.18), with N = 4. We compute the DFT in two ways.
67
Figure 2.18
1. DFT Formula
X [k ]
= = =
1+e 1+e
+e
+ e(i)2 4 3
3 (i) 2 k
(2.28)
(i) 2k
+e
(i)k
+e
Using the above equation, we can solve and get the following results:
x [0] = 4
X ( )
= = =
(2.29)
??? 2k = k 4 2
k =
where
k = {0, 1, 2, 3}
(Figure 2.19).
68
CHAPTER 2.
Figure 2.19
2.7.3 Periodicity of the DFT

The DFT X[k] consists of samples of the DTFT, so X(ω), a 2π-periodic DTFT signal, can be converted to X[k], an N-periodic DFT:

X[k] = Σ_{n=0}^{N−1} x[n] e^{−i2π(k/N)n}   (2.30)

where e^{−i2π(k/N)n} is an N-periodic basis function (see Figure 2.20).

Figure 2.20.

Also, recall,

x[n] = (1/N) Σ_{k=0}^{N−1} X[k] e^{i2π(k/N)n} = (1/N) Σ_{k=0}^{N−1} X[k] e^{i2π(k/N)(n+mN)}   (2.31)

so the synthesis formula is itself N-periodic in n (Figure 2.21).

Figure 2.21.

note: When we deal with the DFT, we need to remember that, in effect, it treats the signal as an N-periodic sequence.

2.7.4 A Sampling Perspective

Think of sampling the continuous function X(ω), as depicted in Figure 2.22. S(ω) will represent the sampling function applied to X(ω) and is illustrated in Figure 2.22 as well. This will result in our discrete-frequency sequence, X[k].

Figure 2.22.

note: Remember that multiplication in the frequency domain is equal to convolution in the time domain!

2.7.4.1 Inverse DTFT of S(ω)

S(ω) = Σ_{k=−∞}^{∞} δ(ω − 2πk/N)   (2.32)

Given the above equation, we can take the inverse DTFT and get the following equation:

S[n] = N Σ_{m=−∞}^{∞} δ[n − mN]   (2.33)

Exercise 2.7.2 (Solution on p. 107.)
Why does (2.33) equal S[n]?

So in the time domain we have a periodic impulse train (Figure 2.23); multiplying X(ω) by S(ω) in frequency corresponds to convolving x[n] with S[n] in time, which periodically replicates x[n] with period N.

Figure 2.23.

2.7.5 Connections

Figure 2.24.
Figure 2.25.
2.8 Discrete-Time Processing of CT Signals

2.8.1 DT Processing of CT Signals

Figure 2.26: A DSP system: CT input, sampling, a DT LTI system G(ω), and D/A conversion with a lowpass reconstruction filter HLP(Ω).

2.8.1.1 Analysis

Yc(Ω) = HLP(Ω) Y(ΩT)   (2.34)

where we know that Y(ω) = X(ω) G(ω), with G(ω) the frequency response of the DT LTI system, and remember that ω = ΩT. So,

Yc(Ω) = HLP(Ω) G(ΩT) X(ΩT)   (2.35)

where Yc(Ω) and HLP(Ω) are CTFTs, and G(ΩT) and X(ΩT) are DTFTs.

note (recall):

X(ω) = (1/T) Σ_{k=−∞}^{∞} Xc((ω − 2πk)/T)

OR

X(ΩT) = (1/T) Σ_{k=−∞}^{∞} Xc(Ω − kΩs),  where Ωs = 2π/T

Therefore our final output signal, Yc(Ω), will be:

Yc(Ω) = HLP(Ω) G(ΩT) (1/T) Σ_{k=−∞}^{∞} Xc(Ω − kΩs)   (2.36)

Now, if Xc(Ω) is bandlimited to [−Ωs/2, Ωs/2] and we use the usual lowpass reconstruction filter in the D/A (Figure 2.27):

Figure 2.27: The ideal lowpass reconstruction filter (gain T, cutoff Ωs/2).

Then,

Yc(Ω) = { G(ΩT) Xc(Ω)  if |Ω| < Ωs/2
        { 0             otherwise   (2.37)
2.8.1.2 Summary

For bandlimited signals sampled at or above the Nyquist rate, we can relate the input and output of the DSP system by:

Yc(Ω) = Geff(Ω) Xc(Ω)   (2.38)

where

Geff(Ω) = { G(ΩT)  if |Ω| < Ωs/2
          { 0       otherwise

Figure 2.28: The equivalent CT filter Geff(Ω).

2.8.1.2.1 Note

Geff(Ω) is LTI if and only if the following two conditions are satisfied:

1. The DT system G(ω) is LTI.
2. Xc(t) is bandlimited and the sampling rate is equal to or greater than the Nyquist rate.

For example, suppose we had a simple pulse described by Xc(t) = u(t − T0) − u(t − T1), where T1 > T0. If the sampling period T > T1 − T0, then, depending on when we sample relative to the pulse, the samples may capture it or miss it entirely: time-varying behavior.

Example 2.8 (Figure 2.29)

Figure 2.29.

If 2π/T > 2B and ω1 < BT, determine and sketch Yc(Ω).
2.8.2 Application: 60 Hz Noise Removal

Figure 2.30: An EKG signal, y(t).

Unfortunately, in real-world situations electrodes also pick up ambient 60 Hz signals from lights, computers, etc. In fact, usually this "60 Hz noise" is much greater in amplitude than the EKG signal shown in Figure 2.30. Figure 2.31 shows the EKG signal; it is barely noticeable as it has become overwhelmed by noise.

Figure 2.31: Our EKG signal, overwhelmed by 60 Hz noise.

2.8.2.1 DSP Solution

Figure 2.32: Remove the 60 Hz noise with a DT filter placed inside the sampled system.
Figure 2.33.

2.8.2.2 Sampling Period/Rate

First we must note that |Y(Ω)| is bandlimited to 60 Hz. Therefore we must sample above 120 Hz; we choose

fs = 240 Hz,  i.e. Ωs = 2π(240) rad/s

Figure 2.34.
Figure 2.35.
2.9 Short Time Fourier Transform

2.9.1 Short Time Fourier Transform

The Fourier transforms (FT, DTFT, DFT, etc.) do not clearly indicate how the frequency content of a signal changes over time. That information is hidden in the phase; it is not revealed by the plot of the magnitude of the spectrum.

note: To see how the frequency content of a signal changes over time, we can cut the signal into blocks and compute the spectrum of each block.

To improve the result,

1. blocks are overlapping
2. each block is multiplied by a window that is tapered at its endpoints.

Several parameters must be chosen:

- Block length, R.
- The type of window.
- Amount of overlap between blocks. (Figure 2.36 (STFT: Overlap Parameter))
- Amount of zero padding, if any.

Figure 2.36: STFT: Overlap Parameter.

The short-time Fourier transform is defined as

X(ω, m) = STFT(x(n)) := DTFT(x(n − m) w(n)) = Σ_{n=−∞}^{∞} x(n − m) w(n) e^{−iωn} = Σ_{n=0}^{R−1} x(n − m) w(n) e^{−iωn}   (2.39)

where w(n) is the window function of length R.

1. The STFT of a signal x(n) is a function of two variables: time and frequency.
2. The block length is determined by the support of the window function w(n).
3. A graphical display of the magnitude of the STFT, |X(ω, m)|, is called the spectrogram of the signal. It is often used in speech processing.
4. The STFT of a signal is invertible.
5. One can choose the block length. A long block length will provide higher frequency resolution (because the main-lobe of the window function will be narrow). A short block length will provide higher time resolution because less averaging across samples is performed for each STFT value.
6. A narrow-band spectrogram is one computed using a relatively long block length R (long window function).
7. A wide-band spectrogram is one computed using a relatively short block length R (short window function).

2.9.2 Sampled STFT

To numerically evaluate the STFT, we sample the frequency axis ω in N equally spaced samples from ω = 0 to ω = 2π:

∀k, 0 ≤ k ≤ N−1:  ωk = (2π/N)k   (2.40)

We then have the discrete STFT,

X^d(k, m) := X((2π/N)k, m) = Σ_{n=0}^{R−1} x(n − m) w(n) e^{−i2π(k/N)n} = DFT_N(x(n − m) w(n)|_{n=0}^{R−1}, 0, ..., 0)   (2.41)

where 0, ..., 0 is N − R zeros (we require N ≥ R). In this definition, the overlap between adjacent blocks is R − 1: the signal is shifted along the window one sample at a time. That generates more points than is usually needed, so we also sample the STFT along the time direction. That gives us X^d(k, Lm), where L is the time-skip. The relation between the time-skip, the number of overlapping samples, and the block length is

Overlap = R − L

Exercise 2.9.1 (Solution on p. 107.)
Match each signal to its spectrogram in Figure 2.37.
Figure 2.37: (a), (b).

Figure 2.38.

Figure 2.39.

The Matlab program for producing the figures above (Figure 2.38 and Figure 2.39):

% LOAD DATA
load mtlb;
x = mtlb;

figure(1), clf
plot(0:4000,x)
xlabel('n')
ylabel('x(n)')

% SET PARAMETERS
R = 256;               % R: block length
window = hamming(R);   % window function of length R
N = 512;               % N: frequency discretization
L = 35;                % L: time lapse between blocks
fs = 7418;             % fs: sampling frequency
overlap = R - L;

% COMPUTE SPECTROGRAM
[B,f,t] = specgram(x,N,fs,window,overlap);

% MAKE PLOT
figure(2), clf
imagesc(t,f,log10(abs(B)));
colormap('jet')
axis xy
xlabel('time')
ylabel('frequency')
title('SPECTROGRAM, R = 256')
Figure 2.40: Narrow-band spectrogram: better frequency resolution.

Figure 2.41: Wide-band spectrogram: better time resolution.

Here is another example to illustrate the frequency/time resolution trade-off (see Figure 2.40 (Narrow-band spectrogram: better frequency resolution), Figure 2.41 (Wide-band spectrogram: better time resolution), and Figure 2.42 (Effect of Window Length R)).

Figure 2.42: Effect of Window Length R: (a), (b).

2.9.3 Effect of L and N

A spectrogram is computed with different parameters L and N, where L is the time-skip between blocks and N is the FFT length (each block is zero-padded to length N).

Exercise 2.9.2 (Solution on p. 107.)
For each of the four spectrograms in Figure 2.43 can you tell what L and N are?

Figure 2.43: (a), (b).

L and N do not affect the time resolution or the frequency resolution. They only affect the 'pixelation'.

2.9.4 Effect of R and L

Shown in Figure 2.44 are spectrograms of the same signal computed with different combinations of block length R and time-skip L between blocks.

Exercise 2.9.3 (Solution on p. 107.)
For each of the four spectrograms in Figure 2.44, match the above values of L and R.

Figure 2.44.

If you like, you may listen to this signal with the soundsc command; the data is in the file stft_data.m. Here (Figure 2.45) is a figure of the signal.

Figure 2.45.
2.10 Spectrograms

We know how to acquire analog signals for digital processing (pre-filtering, sampling, and A/D conversion) and to compute spectra of discrete-time signals (using the FFT algorithm). Let's put these various components together to learn how the spectrogram shown in Figure 2.46 (Speech Spectrogram), which is used to analyze speech, is calculated. The speech was sampled at a rate of 11.025 kHz and passed through a 16-bit A/D converter.

Point of interest: Music compact discs (CDs) encode their signals at a sampling rate of 44.1 kHz. We'll learn the rationale for this number later. The 11.025 kHz sampling rate for the speech is 1/4 of the CD sampling rate, and was the lowest available sampling rate commensurate with speech signal bandwidths available on my computer.

Exercise 2.10.1 (Solution on p. 107.)
Looking at Figure 2.46 (Speech Spectrogram) the signal lasted a little over 1.2 seconds. How long was the sampled signal (in terms of samples)? What was the datarate during the sampling process in bps (bits per second)? Assuming the computer storage is organized in terms of bytes (8-bit quantities), how many bytes of computer memory does the speech consume?
Speech Spectrogram

Figure 2.46: Speech Spectrogram. The spectrogram (frequency axis 0 to 5000 Hz) and waveform of the spoken phrase "Rice University"; time runs from 0 to a little over 1.2 seconds.

The resulting discrete-time signal, shown in the bottom of Figure 2.46 (Speech Spectrogram), clearly changes its character with time. To display these spectral changes, the long signal was sectioned into frames: comparatively short, contiguous groups of samples. Conceptually, a Fourier transform of each frame is calculated using the FFT. Each frame is not so long that significant signal variations are retained within a frame, but not so short that we lose the signal's spectral character. Roughly speaking, the speech signal's spectrum is evaluated over successive time segments and stacked side by side so that the x-axis corresponds to time and the y-axis to frequency, with color indicating the spectral amplitude.

An important detail emerges when we examine each framed signal (Figure 2.47 (Spectrogram Hanning vs. Rectangular)).
93
Rectangular Window
Hanning Window
FFT (512)
FFT (512)
f
Figure 2.47:
The top waveform is a segment 1024 samples long taken from the beginning of the "Rice University" phrase. Computing Figure 2.46 (Speech Spectrogram) involved creating frames, here demarked by the vertical lines, that were 256 samples long and nding the spectrum of each. If a rectangular window is applied (corresponding to extracting a frame from the signal), oscillations appear in the spectrum (middle of bottom row). Applying a Hanning window gracefully tapers the signal toward frame edges, thereby yielding a more accurate computation of the signal's spectrum at that moment of time.
At the frame's edges, the signal may change very abruptly, a feature not present in the original signal. A transform of such a segment reveals a curious oscillation in the spectrum, an artifact directly related to this sharp amplitude change. A better way to frame signals for spectrograms is to apply a accomplished by multiplying the framed signal by the sequence applied a rectangular window:
window:
Shape
the signal values within a frame so that the signal decays gracefully as it nears the edges. This shaping is
window;
w (n). In sectioning the signal, we essentially w (n) = 1, 0 n N 1. A much more graceful window is the Hanning 1 2n it has the cosine shape w (n) = . As shown in Figure 2.47 (Spectrogram Hanning 2 1 cos N
vs. Rectangular), this shaping greatly reduces spurious oscillations in each frame's spectrum. Considering the spectrum of the Hanning windowed frame, we nd that the oscillations resulting from applying the rectangular window obscured a formant (the one located at a little more than half the Nyquist frequency).
Exercise 2.10.2
(Solution on p. 107.)
To gain some insight, what is the length-2N
discrete Fourier transform of a length-N pulse? The pulse emulates the rectangular window, and certainly has edges. Compare your answer with the length-2N transform of a length-N Hanning window.
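The exercise can be explored numerically. The following MATLAB sketch (the window length and plotting choices are ours) compares the length-2N DFTs of a length-N pulse and a length-N Hanning window:

% Compare length-2N DFTs of a length-N rectangular pulse and Hanning window.
N = 256;
rect = ones(N,1);                       % rectangular window (a length-N pulse)
hann = 0.5*(1 - cos(2*pi*(0:N-1)'/N));  % Hanning window as defined above
R = fft(rect, 2*N);                     % zero-padded, length-2N transforms
H = fft(hann, 2*N);
f = (0:2*N-1)/(2*N);                    % frequency as a fraction of Fs
plot(f, 20*log10(abs(R)+eps), f, 20*log10(abs(H)+eps))
legend('rectangular','Hanning'), xlabel('f/F_s'), ylabel('dB')
% The rectangular window shows large, slowly decaying sidelobes; the Hanning
% window trades a wider mainlobe for much smaller sidelobes.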
Non-overlapping windows

Figure 2.48: In comparison with the original speech segment shown in the upper plot, the nonoverlapped Hanning windowed version shown below it is very ragged. Clearly, spectral information extracted from the bottom plot could well miss important features present in the original.

If you examine the windowed signal sections in sequence to examine windowing's effect on signal amplitude, we see that we have managed to amplitude-modulate the signal with the periodically repeated window (Figure 2.48 (Non-overlapping windows)). To alleviate this problem, frames are overlapped (typically by half a frame duration). This solution requires more Fourier transform calculations than needed by rectangular windowing, but the spectra are much better behaved and spectral changes are much better captured.

The speech signal, such as shown in the speech spectrogram (Figure 2.46: Speech Spectrogram), is sectioned into overlapping, equal-length frames, with a Hanning window applied to each frame. The spectra of each of these is calculated, and displayed in spectrograms with frequency extending vertically, window time location running horizontally, and spectral magnitude color-coded. Figure 2.49 (Overlapping windows for computing spectrograms) illustrates these computations.

Overlapping windows for computing spectrograms

Figure 2.49: The original speech segment and the sequence of overlapping Hanning windows applied to it are shown in the upper portion. Frames were 256 samples long and a Hanning window was applied with a half-frame overlap. A length-512 FFT of each frame was computed, with the magnitude of the first 257 FFT values displayed vertically, with spectral amplitude values color-coded.

Exercise 2.10.3 (Solution on p. 107.)
Why the specific values of 256 for the frame length and 512 for the FFT length K?
2.11 Filtering with the DFT

2.11.1 Introduction

Figure 2.50: An LTI system: input x[n], unit-sample response h[n], output y[n].

y[n] = x[n] * h[n] = Σ_{k=−∞}^{∞} x[k] h[n − k]   (2.42)

Y(ω) = X(ω) H(ω)   (2.43)

Assume that H(ω) is specified.

Exercise 2.11.1 (Solution on p. 107.)
How can we implement X(ω) H(ω) in a computer?

Recall that the DFT treats N-point sequences as if they were periodically extended (Figure 2.51).

Figure 2.51.

2.11.2 Compute the IDFT of Y[k]

ỹ[n] = (1/N) Σ_{k=0}^{N−1} Y[k] e^{i2π(k/N)n}
     = (1/N) Σ_{k=0}^{N−1} X[k] H[k] e^{i2π(k/N)n}
     = (1/N) Σ_{k=0}^{N−1} (Σ_{m=0}^{N−1} x[m] e^{−i2π(k/N)m}) H[k] e^{i2π(k/N)n}
     = Σ_{m=0}^{N−1} x[m] ((1/N) Σ_{k=0}^{N−1} H[k] e^{i2π(k/N)(n−m)})
     = Σ_{m=0}^{N−1} x[m] h[((n − m))_N]   (2.44)

where h[((n − m))_N] denotes the periodic extension of h[n]:

h̃[n − m] = h[((n − m))_N]
Figure 2.52: The periodic extension of h[n].

The result,

ỹ[n] = Σ_{m=0}^{N−1} x[m] h[((n − m))_N]   (2.45)

is called circular convolution.

Figure 2.53: The above symbol for the circular convolution is for an N-periodic extension.

Figure 2.54.

Example: Regular vs. Circular Convolution

To begin with, we are given the following two length-3 signals:

x[n] = {..., 0, 1, 2, 3, 0, ...},  h[n] = {..., 0, 1, 0, 2, 0, ...}

where x[0] = 1 and h[0] = 1.

Figure 2.55: The signals x[n] and h[n].

Regular Convolution:

y[n] = Σ_{m=0}^{2} x[m] h[n − m]   (2.46)

Using the above convolution formula (refer to the link if you need a review of convolution (Section 1.5)), we can calculate the resulting values y[0] to y[4]:

y[0] = 1·1 = 1   (2.47)
y[1] = 1·0 + 2·1 = 2   (2.48)
y[2] = 1·2 + 2·0 + 3·1 = 5   (2.49)
y[3] = 2·2 + 3·0 = 4   (2.50)
y[4] = 3·2 = 6   (2.51)

Figure 2.56: The regular convolution result, y[n] = {1, 2, 5, 4, 6}.
Circular Convolution:

ỹ[n] = Σ_{m=0}^{2} x[m] h[((n − m))_3]   (2.52)

The 3-periodic extension of h[n] is

h[((n))_3] = {..., 1, 0, 2, 1, 0, 2, 1, 0, 2, ...}

For n = 0, aligning {..., 0, 0, 0, 1, 2, 3, 0, ...} with {..., 1, 2, 0, 1, 2, 0, 1, ...} gives

ỹ[0] = 1·1 + 2·2 + 3·0 = 5   (2.54)

For n = 1, aligning {..., 0, 0, 0, 1, 2, 3, 0, ...} with {..., 0, 1, 2, 0, 1, 2, 0, ...} gives

ỹ[1] = 1·0 + 2·1 + 3·2 = 8   (2.55)

and similarly

ỹ[2] = 1·2 + 2·0 + 3·1 = 5   (2.56)

so that ỹ[n] = {5, 8, 5}.

Figure 2.57: Result is 3-periodic.

Figure 2.58 (Circular Convolution from Regular) illustrates the relationship between circular convolution and regular convolution using the previous two figures:

Figure 2.58: Circular Convolution from Regular. The left plot (the circular convolution result) has a "wrap-around" effect due to the periodic extension.
2.11.2.1 Regular Convolution from Circular Convolution

To obtain the regular convolution result with a circular convolution, we zero-pad x[n] and h[n] to avoid the overlap (wrap-around) effect. Zero-padding the two example signals to length 5 and circularly convolving,

ỹ[n] = (1/5) Σ_{k=0}^{4} X[k] H[k] e^{i2π(k/5)n} = Σ_{m=0}^{4} x[m] h[((n − m))_5]   (2.59)

which now equals the regular convolution y[n] for n = {0, ..., 4}.

Figure 2.59: The sequence from 0 to 4 (the underlined part of the sequence) is the regular convolution result. From this illustration we can see that it is 5-periodic!

note: We can compute the regular convolution of an M-point signal x[n] with an N-point signal h[n] by zero-padding both signals to length M + N − 1 and computing the circular convolution (or equivalently, computing the IDFT of H[k] X[k], the product of the DFTs of the zero-padded signals) (Figure 2.60).

Figure 2.60: Note that the lower two images are simply the top images that have been zero-padded.

2.11.3 DSP System (Figure 2.61)

Figure 2.61.

The steps are (see the sketch after this list):

1. Sample the continuous-time input x(t) to get x[n], where n = {0, ..., M−1}.
2. Zero-pad x[n] and h[n] to length M + N − 1.
3. Compute the DFTs X[k] and H[k].
4. Compute the IDFT of X[k] H[k]: ỹ[n] = y[n], the regular convolution result.
5. Reconstruct the analog output y(t).
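Steps 2 through 4 are a few lines in MATLAB. A minimal sketch using the example signals from above:

% Regular convolution of an M-point x with an N-point h via the DFT:
% zero-pad both to length M+N-1 so circular convolution equals regular.
x = [1 2 3];            % the example signals from above
h = [1 0 2];
M = length(x); N = length(h);
Lfft = M + N - 1;       % required DFT length (5 here)
y = real(ifft(fft(x, Lfft) .* fft(h, Lfft)));   % = [1 2 5 4 6]
y_check = conv(x, h);   % direct convolution for comparison
% Without padding, the length-3 circular convolution wraps around:
y_circ = real(ifft(fft(x) .* fft(h)));          % = [5 8 5]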
2.12 Image Restoration Basics

2.12.1 Image Restoration

In many applications (e.g., satellite imaging, medical imaging, astronomical imaging, poor-quality family portraits) the imaging system introduces a slight distortion. Often images are slightly blurred and image restoration aims at deblurring the image.

The blurring can usually be modeled as an LSI system with a given PSF (point spread function) h[m, n].

Figure 2.62: The blurring modeled as an LSI system with PSF h[m, n].

Figure 2.63: (a) The original image; (b) the PSF.

Once we apply the PSF to the original image, we receive our blurred image that is shown in Figure 2.64:

Figure 2.64: The blurred image.

Figure 2.65: Image restoration attempts to invert the blurring to recover the original image.
Solutions to Exercises in Chapter 2

Solution to Exercise 2.7.1 (p. 66)
x[n] is N-periodic: evaluating (2.26) for any other n just repeats the same N values, since each basis function e^{i2π(k/N)n} is N-periodic in n.

Solution to Exercise 2.7.2 (p. 70)
S[n] is N-periodic, so it has the following Fourier series:

c_k = (1/N) ∫_{−N/2}^{N/2} δ[n] e^{−i2π(n/N)k} dn = 1/N   (2.63)

S[n] = Σ_{k=−∞}^{∞} e^{−i2π(k/N)n}   (2.64)

where the DTFT of each exponential in (2.64) is an impulse at ω = 2πk/N, which recovers S(ω).

Solution to Exercise 2.9.1 (p. 81)

Solution to Exercise 2.9.2 (p. 88)

Solution to Exercise 2.9.3 (p. 89)

Solution to Exercise 2.10.1 (p. 91)
The sampled signal contains 11025 × 1.2 ≈ 13230 samples. The datarate is 11025 × 16 = 176.4 kbps. With two bytes per sample, the speech consumes 26460 bytes.

Solution to Exercise 2.10.2 (p. 93)
The oscillations are due to the boxcar window's Fourier transform, which equals the sinc function.

Solution to Exercise 2.10.3 (p. 95)
These numbers are powers-of-two, and the FFT algorithm can be exploited with these lengths. To compute a longer transform than the input signal's duration, we simply zero-pad the signal.

Solution to Exercise 2.11.1 (p. 96)
Discretize (sample) X(ω) and H(ω). In order to do this, we should take the DFTs of x[n] and h[n] to get X[k] and H[k]. Then we will compute ỹ[n] = IDFT(X[k] H[k]). Does ỹ[n] = y[n]?
Chapter 3

Digital Filtering

3.1 Difference Equation

3.1.1 Introduction

One of the most important concepts of DSP is to be able to properly represent the input/output relationship to a given LTI system. A linear constant-coefficient difference equation (LCCDE) serves as a way to express just this relationship in a discrete-time system. Writing the sequence of inputs and outputs, which represent the characteristics of the LTI system, as a difference equation helps in understanding and manipulating a system.

Example:

y[n] + 7y[n − 1] + 2y[n − 2] = x[n] − 4x[n − 1]   (3.1)

3.1.2 General Formulas for the Difference Equation

3.1.2.1 Difference Equation

The general form of a linear constant-coefficient difference equation is shown below, where the equation must be initialized at some point in time n. Later, in order to find the transfer function H(z), we will look at the general form of the difference equation and the general conversion to a z-transform directly from the difference equation:

Σ_{k=0}^{N} a_k y[n − k] = Σ_{k=0}^{M} b_k x[n − k]   (3.2)

We can also write the general form to easily express a recursive output, which looks like this:

y[n] = −Σ_{k=1}^{N} a_k y[n − k] + Σ_{k=0}^{M} b_k x[n − k]   (3.3)

From this equation, note that the value of N represents the order of the difference equation and corresponds to the memory of the system being represented. Because this equation relies on past values of the output, in order to compute a numerical solution, certain past outputs, referred to as the initial conditions, must be known.

3.1.2.2 Conversion to the Transfer Function

Using the above formula, (3.2), we can easily generalize the transfer function, H(z), for any difference equation. Below are the steps taken to convert any difference equation into its transfer function, i.e. z-transform. The first step involves taking the z-transform of all the terms in (3.2). Then we use the linearity property to pull the transform inside the summation and the time-shifting property of the z-transform to change the time-shifting terms to exponentials. Once this is done, and assuming a₀ = 1, we arrive at the following equation:

Y(z) = −Σ_{k=1}^{N} a_k Y(z) z^{−k} + Σ_{k=0}^{M} b_k X(z) z^{−k}   (3.4)

H(z) = Y(z)/X(z) = (Σ_{k=0}^{M} b_k z^{−k}) / (1 + Σ_{k=1}^{N} a_k z^{−k})   (3.5)
3.1.2.3 Conversion to the Frequency Response

Remember that the reason we are dealing with these formulas is to be able to aid us in filter design. A LCCDE is one of the easiest ways to represent FIR filters. By being able to find the frequency response, we will be able to look at the basic properties of any filter represented by a simple LCCDE. Below is the general formula for the frequency response of a z-transform. The conversion is simply a matter of taking the z-transform formula, H(z), and replacing every instance of z with e^{iw}:

H(w) = H(z)|_{z = e^{iw}} = (Σ_{k=0}^{M} b_k e^{−iwk}) / (Σ_{k=0}^{N} a_k e^{−iwk})   (3.6)

Once you understand the derivation of this formula, look at the module concerning Filter Design from the Z-Transform for a look into how these ideas of the z-transform (Section 3.2) and the difference equation play a role in filter design.
3.1.3 Example

Example 3.1: Finding the Difference Equation

Below is a basic example showing the opposite of the steps above: given a transfer function one can easily calculate the system's difference equation.

H(z) = (z + 1)² / ((z − 1/2)(z + 3/4))   (3.7)

Given this transfer function of a time-domain filter, we want to find the difference equation. To begin with, expand both polynomials and divide them by the highest order z:

H(z) = ((z + 1)(z + 1)) / ((z − 1/2)(z + 3/4))
     = (z² + 2z + 1) / (z² + (1/4)z − 3/8)
     = (1 + 2z^{−1} + z^{−2}) / (1 + (1/4)z^{−1} − (3/8)z^{−2})   (3.8)

From this transfer function, the coefficients of the two polynomials will be our a_k and b_k values found in the general difference equation formula, (3.2). Using these coefficients and the above form of the transfer function, we can easily write the difference equation:

x[n] + 2x[n − 1] + x[n − 2] = y[n] + (1/4)y[n − 1] − (3/8)y[n − 2]   (3.9)

In our final step, we can rewrite the difference equation in its more common form showing the recursive nature of the system:

y[n] = x[n] + 2x[n − 1] + x[n − 2] − (1/4)y[n − 1] + (3/8)y[n − 2]   (3.10)
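The coefficients read off (3.9) plug directly into MATLAB's filter and freqz functions (a quick sketch of (3.5) and (3.6) for this example; freqz is part of the Signal Processing Toolbox):

% Coefficients of (3.9): b from the input side, a from the output side.
b = [1 2 1];                 % 1 + 2 z^-1 + z^-2
a = [1 1/4 -3/8];            % 1 + (1/4) z^-1 - (3/8) z^-2
x = [1 zeros(1,19)];         % unit-sample input
h = filter(b, a, x);         % first 20 samples of the impulse response
[H, w] = freqz(b, a, 512);   % frequency response (3.6) on [0, pi]
plot(w, abs(H)), xlabel('w'), ylabel('|H(w)|')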
3.1.4 Solving a LCCDE

In order for a linear constant-coefficient difference equation to be useful in analyzing an LTI system, we must be able to find the system's output based upon a known input, x(n), and a set of initial conditions. Two common methods exist for solving a LCCDE: the direct method, and the indirect method, the latter being based on the z-transform. Below we will briefly discuss the formulas for solving a LCCDE using each of these methods.

3.1.4.1 Direct Method

The final solution to the output based on the direct method is the sum of two parts, expressed in the following equation:

y(n) = yh(n) + yp(n)   (3.11)

The first part, yh(n), is referred to as the homogeneous solution and the second part, yp(n), is referred to as the particular solution. The following method is very similar to that used to solve many differential equations, so if you have taken a differential calculus course or used differential equations before then this should seem very familiar.

3.1.4.1.1 Homogeneous Solution

We begin by assuming that the input is zero, x(n) = 0. Now we simply need to solve the homogeneous difference equation:

Σ_{k=0}^{N} a_k y[n − k] = 0   (3.12)

In order to solve this, we will make the assumption that the solution is in the form of an exponential. We will use lambda, λ, to represent our exponential terms. We now have to solve the following equation:

Σ_{k=0}^{N} a_k λ^{n−k} = 0   (3.13)

We can expand this equation out and factor out all of the lambda terms. This will give us a large polynomial in parentheses, which is referred to as the characteristic polynomial. The roots of this polynomial will be the key to solving the homogeneous equation. If there are all distinct roots, then the general solution to the equation will be as follows:

yh(n) = C1(λ1)^n + C2(λ2)^n + ... + CN(λN)^n   (3.14)

However, if the characteristic equation contains multiple roots then the above general solution will be slightly different. Below we have the modified version for an equation where λ1 has K multiple roots:

yh(n) = C1(λ1)^n + C1 n(λ1)^n + C1 n²(λ1)^n + ... + C1 n^{K−1}(λ1)^n + C2(λ2)^n + ... + CN(λN)^n   (3.15)

3.1.4.1.2 Particular Solution

The particular solution, yp(n), will be any solution that will solve the general difference equation:

Σ_{k=0}^{N} a_k yp(n − k) = Σ_{k=0}^{M} b_k x(n − k)   (3.16)

In order to solve, our guess for the solution to yp(n) will take on the form of the input, x(n). After guessing at a solution to the above equation involving the particular solution, one only needs to plug the solution into the difference equation and solve it out.
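The homogeneous machinery is easy to check numerically. In the following sketch (the coefficients are an example of our own choosing), the characteristic roots come from roots, and the closed form (3.14) is compared against the recursion itself:

% Homogeneous solution via the characteristic polynomial.
a = [1 -5/6 1/6];             % y[n] - (5/6)y[n-1] + (1/6)y[n-2] = 0
lam = roots(a);               % characteristic roots: 1/2 and 1/3
y0 = 1; y1 = 0;               % initial conditions y(0), y(1)
C = [1 1; lam(1) lam(2)] \ [y0; y1];   % fit C1, C2 to the initial conditions
n = (0:10)';
yh = C(1)*lam(1).^n + C(2)*lam(2).^n;  % closed form (3.14)
yrec = zeros(11,1); yrec(1) = y0; yrec(2) = y1;
for k = 3:11                  % run the recursion directly
    yrec(k) = (5/6)*yrec(k-1) - (1/6)*yrec(k-2);
end
norm(yh - yrec)               % ~1e-15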
3.1.4.2 Indirect Method

The indirect method utilizes the relationship between the difference equation and the z-transform, discussed earlier, to find a solution. The basic idea is to convert the difference equation into a z-transform, as described above, to get the resulting output Y(z), and then inverse transform (usually via partial-fraction expansion) to obtain y(n). The key property is the shift theorem for the unilateral z-transform,

Z{y(n + 1)} = z Y(z) − z y(0)   (3.17)

This can be iteratively extended to an arbitrary order shift, each step bringing in one more initial condition:

Z{y(n + N)} = z^N Y(z) − Σ_{m=0}^{N−1} z^{N−m} y(m)   (3.18)

Taking the z-transform of each side of the difference equation and applying linearity together with the shift property yields an algebraic equation in Y(z). Solving for Y(z) gives an expression of the form

Y(z) = (X(z) + terms determined by the initial conditions) / (Σ_{k=0}^{N} a_k z^{−k})   (3.23)

In order to find the output, it only remains to find the z-transform X(z) of the input, substitute the initial conditions, and compute the inverse z-transform of the result. Partial fraction expansions are often required for this last step. This may sound daunting while looking at (3.23), but it is often easy in practice, especially for low order difference equations. (3.23) can also be used to determine the transfer function and frequency response.

As an example, consider a second-order difference equation with initial conditions y(0) = 1 and y(1) = 0, for which the transformed output takes the form

Y[z] = z / ([z² + 1][z + 1][z + 3]) + 1 / ([z + 1][z + 3])   (3.25)

Performing a partial fraction expansion and inverse transforming each term produces the output y[n]. One can check that the result satisfies both the difference equation and the initial conditions.
3.2 The Z Transform: Definition

3.2.1 Basic Definition of the Z-Transform

The z-transform of a sequence is defined as

X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n}   (3.28)

Sometimes this equation is referred to as the bilateral z-transform. At times the z-transform is defined as

X(z) = Σ_{n=0}^{∞} x[n] z^{−n}   (3.29)

which is known as the unilateral z-transform.

There is a close relationship between the z-transform and the Fourier transform of a discrete time signal, which is defined as

X(e^{iω}) = Σ_{n=−∞}^{∞} x[n] e^{−iωn}   (3.30)

Notice that when z^{−n} is replaced with e^{−iωn}, the z-transform reduces to the Fourier transform. When the Fourier transform exists, z = e^{iω}, which is to have the magnitude of z equal to unity.

3.2.2 The Complex Plane

In order to get further insight into the relationship between the Fourier transform and the z-transform it is useful to look at the complex plane, or z-plane.
Z-Plane

Figure 3.1: The z-plane.

The z-plane is a complex plane with an imaginary and real axis referring to the complex-valued variable z. The position on the complex plane is given by re^{iω}, where r = |z|. X(z) is defined everywhere on this plane, while X(e^{iω}) is defined only where |z| = 1, which is referred to as the unit circle. This is useful because, by representing the Fourier transform as the z-transform evaluated on the unit circle, the periodicity of the Fourier transform is easily seen.

3.2.3 Region of Convergence

The region of convergence, known as the ROC, is important to understand because it defines the region where the z-transform exists. The ROC for a given x[n] is defined as the range of z for which the z-transform converges. Since the z-transform is a power series, it converges when x[n] z^{−n} is absolutely summable. Stated differently,

Σ_{n=−∞}^{∞} |x[n] z^{−n}| < ∞   (3.31)

must be satisfied for convergence. This is best illustrated by looking at the different ROCs of the z-transforms of α^n u[n] and −(α^n) u[−n − 1].

Example 3.2

For

x[n] = α^n u[n]   (3.32)
Figure 3.2: The right-sided sequence x[n] = α^n u[n].

X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n}
     = Σ_{n=−∞}^{∞} α^n u[n] z^{−n}
     = Σ_{n=0}^{∞} α^n z^{−n}
     = Σ_{n=0}^{∞} (αz^{−1})^n   (3.33)

This sequence is an example of a right-sided exponential sequence because it is nonzero for n ≥ 0. It only converges when |αz^{−1}| < 1. When it converges,

X(z) = 1/(1 − αz^{−1}) = z/(z − α)   (3.34)

If |αz^{−1}| ≥ 1, then the series Σ_{n=0}^{∞} (αz^{−1})^n does not converge. Thus the ROC is the range of values where

|αz^{−1}| < 1   (3.35)

or, equivalently,

|z| > |α|   (3.36)

Figure 3.3: The ROC |z| > |α| (the region outside the circle of radius |α|).
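When |α| < 1 the ROC |z| > |α| contains the unit circle, so the DTFT of α^n u[n] exists and equals X(z) evaluated at z = e^{iω}. A quick numerical sketch (α is an arbitrary choice of ours):

% Evaluate X(z) = z/(z - alpha) on the unit circle and compare to the DTFT.
alpha = 0.7;
n = 0:200;                        % long enough that alpha^n has decayed
x = alpha.^n;                     % x[n] = alpha^n u[n]
w = linspace(-pi, pi, 512);
Xdtft = x * exp(-1i*n.'*w);       % truncated DTFT sum
Xz = exp(1i*w)./(exp(1i*w) - alpha);   % X(z) on z = e^{i w}
norm(Xdtft - Xz, inf)             % small (limited only by truncation)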
Example 3.3

For

x[n] = −(α^n) u[−n − 1]   (3.37)

Figure 3.4: The left-sided sequence x[n] = −(α^n) u[−n − 1].

X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n}
     = −Σ_{n=−∞}^{−1} α^n z^{−n}
     = −Σ_{n=1}^{∞} α^{−n} z^{n}
     = −Σ_{n=1}^{∞} (α^{−1}z)^n
     = 1 − Σ_{n=0}^{∞} (α^{−1}z)^n   (3.38)

The ROC in this case is the range of values where

|α^{−1} z| < 1   (3.39)

or, equivalently,

|z| < |α|   (3.40)

If the ROC is satisfied, then

X(z) = 1 − 1/(1 − α^{−1}z) = z/(z − α)   (3.41)

Figure 3.5: The ROC |z| < |α| (the region inside the circle of radius |α|).

3.3 Table of Common Z-Transforms

note: The notation for z found in the table below may differ from that found in other tables. For example, the z-transform of u[n] can be written equivalently in either of two forms:

u[n] ↔ z/(z − 1) = 1/(1 − z^{−1})   (3.42)
Signal                                 Z-Transform                                        ROC
δ[n − k]                               z^{−k}                                             all z
u[n]                                   z/(z − 1)                                          |z| > 1
−u[−n − 1]                             z/(z − 1)                                          |z| < 1
n u[n]                                 z/(z − 1)²                                         |z| > 1
n² u[n]                                z(z + 1)/(z − 1)³                                  |z| > 1
n³ u[n]                                z(z² + 4z + 1)/(z − 1)⁴                            |z| > 1
α^n u[n]                               z/(z − α)                                          |z| > |α|
−(α^n) u[−n − 1]                       z/(z − α)                                          |z| < |α|
n α^n u[n]                             αz/(z − α)²                                        |z| > |α|
n² α^n u[n]                            αz(z + α)/(z − α)³                                 |z| > |α|
(Π_{k=1}^{m}(n − k + 1)/(α^m m!)) α^n u[n]   z/(z − α)^{m+1}                              |z| > |α|
α^n cos(ωn) u[n]                       z(z − α cos(ω))/(z² − 2α cos(ω) z + α²)            |z| > |α|
α^n sin(ωn) u[n]                       α z sin(ω)/(z² − 2α cos(ω) z + α²)                 |z| > |α|

Table 3.1
3.4 Understanding Pole/Zero Plots on the Z-Plane

3.4.1 Introduction to Poles and Zeros of the Z-Transform

It is quite difficult to qualitatively analyze the z-transform directly, since mappings of its magnitude and phase, or real part and imaginary part, result in multiple mappings of 2-dimensional surfaces in 3-dimensional space. For this reason, it is very common to examine a plot of a transfer function's poles and zeros to try to gain a qualitative idea of what a system does.

Once the z-transform of a system has been determined, one can use the information contained in the function's polynomials to graphically represent the function and easily observe many defining characteristics. The z-transform will have the below structure, based on rational functions:

X(z) = P(z)/Q(z)   (3.43)

The two polynomials, P(z) and Q(z), allow us to find the poles and zeros of the z-transform.

zeros: 1. The value(s) for z where P(z) = 0. 2. The complex frequencies that make the overall gain of the filter transfer function zero.

poles: 1. The value(s) for z where Q(z) = 0. 2. The complex frequencies that make the overall gain of the filter transfer function infinite.
Example 3.4

Below is a simple transfer function with the poles and zeros shown below it.

H(z) = (z + 1) / ((z − 1/2)(z + 3/4))

The zeros are: {−1}
The poles are: {1/2, −(3/4)}

3.4.2 The Z-Plane

Once the poles and zeros have been found for a given z-transform, they can be plotted onto the z-plane. The z-plane is a complex plane with an imaginary and real axis referring to the complex-valued variable z. The position on the complex plane is given by re^{iθ}, and the angle from the positive, real axis around the plane is denoted by θ. When mapping poles and zeros onto the plane, poles are denoted by an "x" and zeros by an "o". The below figure shows the z-plane, and examples of plotting zeros and poles onto the plane can be found in the following section.

Z-Plane

Figure 3.6: The z-plane.
3.4.3 Examples of Pole/Zero Plots

Example 3.5: Simple Pole/Zero Plot

H(z) = z / ((z − 1/2)(z + 3/4))

The zeros are: {0}
The poles are: {1/2, −(3/4)}

Pole/Zero Plot

Figure 3.7: Using the zeros and poles found from the transfer function, the one zero is mapped to zero and the two poles are placed at 1/2 and −(3/4).

Example 3.6: Complex Pole/Zero Plot

H(z) = ((z − i)(z + i)) / ((z + 1)(z − (1/2 + (1/2)i))(z − (1/2 − (1/2)i)))

The zeros are: {i, −i}
The poles are: {−1, 1/2 + (1/2)i, 1/2 − (1/2)i}

Pole/Zero Plot

Figure 3.8: Using the zeros and poles found from the transfer function, the zeros are mapped to ±i, and the poles are placed at −1, 1/2 + (1/2)i and 1/2 − (1/2)i.
Example 3.7: Pole-Zero Cancellation

An easy mistake to make with regards to poles and zeros is to think that a function like ((s + 3)(s − 1))/(s − 1) is the same as s + 3. In theory they are equivalent, as the pole and zero at s = 1 cancel each other out in what is known as pole-zero cancellation. However, think about what may happen if this were a transfer function of a system that was created with physical circuits. In this case, it is very unlikely that the pole and zero would remain in exactly the same place. A minor temperature change, for instance, could cause one of them to move just slightly. If this were to occur a tremendous amount of volatility is created in that area, since there is a change from infinity at the pole to zero at the zero in a very small range of signals. This is generally a very bad way to try to eliminate a pole. A much better way is to use control theory to move the pole to a better place.

note: It is possible to have more than one pole or zero at any given point. For instance, the discrete-time transfer function H(z) = z² will have two zeros at the origin, and the continuous-time function H(s) = 1/s²⁵ will have 25 poles at the origin.

MATLAB - If access to MATLAB is readily available, then you can use its functions to easily create pole/zero plots. Below is a short program that plots the poles and zeros from the above example onto the z-plane.

% Set up vector for zeros
z = [j ; -j];

% Set up vector for poles
p = [-1 ; .5+.5j ; .5-.5j];

figure(1);
% Plot the poles and zeros on the z-plane
% (zplane is part of the Signal Processing Toolbox)
zplane(z, p);
title('Pole/Zero Plot for Complex Pole/Zero Plot Example');
3.4.4 Pole/Zero Plot and Region of Convergence

The pole/zero plot gives an understanding of what the system does at various frequencies and is crucial to the discussion of stability. The region of convergence (ROC) for X(z) in the complex z-plane can be determined from the pole/zero plot. Although several regions of convergence may be possible, where each one corresponds to a different impulse response, there are some choices that are more practical. A ROC can be chosen to make the transfer function causal and/or stable depending on the pole/zero plot:

- If the ROC extends outward from the outermost pole, then the system is causal.
- If the ROC includes the unit circle, then the system is stable.

Below is a pole/zero plot with a possible ROC of the z-transform in the Simple Pole/Zero Plot (Example 3.5: Simple Pole/Zero Plot) discussed earlier. The shaded region indicates the ROC chosen for the filter. From this figure, we can see that the filter will be both causal and stable since the above listed conditions are both met.

Example 3.8

H(z) = z / ((z − 1/2)(z + 3/4))

Figure 3.10: The shaded area represents the chosen ROC for the transfer function.
3.5 Filtering in the Frequency Domain

Because we are interested in actual computations rather than analytic calculations, we must consider the details of the discrete Fourier transform. To compute the length-N DFT, we assume that the signal has a duration less than or equal to N. Because frequency responses are specified in terms of filter coefficients, we don't have a direct handle on which signal has a Fourier transform equaling a given frequency response. Finding this signal is quite easy. First of all, note that the discrete-time Fourier transform of a unit sample equals one for all frequencies. Since the input and output of linear, shift-invariant systems are related to each other by Y(e^{i2πf}) = H(e^{i2πf}) X(e^{i2πf}), a unit-sample input, which has X(e^{i2πf}) = 1, results in the output's Fourier transform equaling the system's transfer function.

Exercise 3.5.1 (Solution on p. 147.)
This statement is a very important result. Derive it yourself.

In the time-domain, the output for a unit-sample input is known as the system's unit-sample response, and is denoted by h(n). Combining the time-domain and frequency-domain views, the unit-sample response and the transfer function are a Fourier transform pair:

h(n) ↔ H(e^{i2πf})   (3.44)

Returning to the issue of how to use the DFT to perform filtering, we can analytically specify the frequency response, and derive the corresponding length-N DFT by sampling the frequency response:

∀k, k = {0, ..., N−1}: H(k) = H(e^{i2πk/N})   (3.45)

Computing the inverse DFT yields a length-N signal no matter what the actual duration of the unit-sample response. If the unit-sample response has a duration less than or equal to N (it is an FIR filter), computing the inverse DFT of the sampled frequency response indeed yields the unit-sample response. If, however, the duration exceeds N, errors are encountered. The nature of these errors is easily explained by appealing to the Sampling Theorem. By sampling in the frequency domain, we have the potential for aliasing in the time domain (sampling in one domain, be it time or frequency, can result in aliasing in the other) unless we sample fast enough. Here, the duration of the unit-sample response determines the minimal sampling rate that prevents aliasing. For FIR systems (they by definition have finite-duration unit sample responses), the number of required DFT samples equals the unit-sample response's duration: N ≥ q.

Exercise 3.5.2 (Solution on p. 147.)
Derive the minimal DFT length for a length-q unit-sample response using the Sampling Theorem. Because sampling in the frequency domain causes repetitions of the unit-sample response in the time domain, sketch the time-domain result for various choices of the DFT length N.
Exercise 3.5.3 (Solution on p. 147.)
Express the unit-sample response of a FIR filter in terms of difference equation coefficients. Note that the corresponding question for IIR filters is far more difficult to answer.

For IIR systems, we cannot use the DFT to find the system's unit-sample response: aliasing of the unit-sample response will always occur. Consequently, we can only implement IIR filters accurately in the time domain with the system's difference equation; frequency-domain implementations are restricted to FIR filters.

Another issue arises in frequency-domain filtering that is related to time-domain aliasing, this time when we consider the output. Assume we have an input signal having duration Nx that we pass through a FIR filter having a length-(q + 1) unit-sample response. What is the duration of the output signal? The difference equation for this filter is

y(n) = b0 x(n) + ... + bq x(n − q)   (3.46)

This equation says that the output depends on current and past input values, with the input value q samples previous defining the extent of the filter's memory of past input values. For example, the output at index Nx depends on x(Nx) (which equals zero), on x(Nx − 1), through x(Nx − q). Thus, the output returns to zero only after the last input value passes through the filter's memory. As the input signal's last value occurs at index Nx − 1, the last nonzero output value occurs when n − q = Nx − 1, or n = q + Nx − 1. Thus, the output signal's duration equals q + Nx.

Exercise 3.5.4 (Solution on p. 147.)
In words, we express this result as "The output's duration equals the input's duration plus the filter's duration minus one." Demonstrate the accuracy of this statement.

The main theme of this result is that a filter's output extends longer than either its input or its unit-sample response. Thus, to avoid aliasing when we use DFTs, the dominant factor is not the duration of input or of the unit-sample response, but of the output. Thus, the number of values at which we must evaluate the frequency response's DFT must be at least q + Nx, and we must compute the same length DFT of the input. To accommodate a shorter signal than DFT length, we simply zero-pad the input: ensure that for indices extending beyond the signal's duration the signal is zero. Frequency-domain filtering, diagrammed in Figure 3.11, is accomplished by storing the filter's frequency response as the DFT H(k), computing the input's DFT X(k), multiplying them to create the output's DFT Y(k) = H(k) X(k), and computing the inverse DFT of the product to yield y(n).

Figure 3.11: To filter a signal in the frequency domain, first compute the DFT of the input, multiply the result by the sampled frequency response, and finally compute the inverse DFT of the product. The DFT's length must be at least the sum of the input's and unit-sample response's duration minus one. We calculate these discrete Fourier transforms using the fast Fourier transform algorithm, of course.

Before detailing this procedure, let's clarify why so many new issues arose in trying to develop a frequency-domain implementation of linear filtering. The frequency-domain relationship between a filter's input and output is always true: Y(e^{i2πf}) = H(e^{i2πf}) X(e^{i2πf}). The Fourier transforms in this result are discrete-time Fourier transforms; for example, X(e^{i2πf}) = Σ_n x(n) e^{−i2πfn}. Unfortunately, using this relationship to perform filtering is restricted to the situation when we have analytic formulas for the frequency response and the input signal.
The reason why we had to "invent" the discrete Fourier transform (DFT) has the same origin: the spectrum resulting from the discrete-time Fourier transform depends on the continuous frequency variable f. That's fine for analytic calculation, but computationally we would have to make an uncountably infinite number of computations.

note: Did you know that two kinds of infinities can be meaningfully defined? A countably infinite quantity means that it can be associated with a limiting process associated with integers. An uncountably infinite quantity cannot be so associated. The number of rational numbers is countably infinite (the numerator and denominator correspond to locating the rational by row and column; the total number so-located can be counted, voila!); the number of irrational numbers is uncountably infinite. Guess which is "bigger?"

The DFT computes the Fourier transform at a finite set of frequencies (it samples the true spectrum), which can lead to aliasing in the time-domain unless we sample sufficiently fast. The sampling interval here is 1/K for a length-K DFT: faster sampling to avoid aliasing thus requires a longer transform calculation. Since the longest signal among the input, unit-sample response and output is the output, it is that signal's duration that determines the transform length. We simply extend the other two signals with zeros (zero-pad) to compute their DFTs.
Example 3.9

Suppose we want to average daily stock prices taken over last year to yield a running weekly average (average over five trading sessions). The filter we want is a length-5 averager (as shown in the unit-sample response), and the input's duration is 253 (365 calendar days minus weekend days and holidays). The output duration will be 253 + 5 − 1 = 257, and this determines the transform length we need to use. Because we want to use the FFT, we are restricted to power-of-two transform lengths. We need to choose any FFT length that exceeds the required DFT length. As it turns out, 256 is a power of two (2⁸ = 256) but just undershoots the required length, so we use length-512 FFTs.

Figure 3.12: The blue line shows the Dow Jones Industrial Average from 1997, and the red one the length-5 boxcar-filtered result that provides a running weekly average of this market index. Note the "edge" effects in the filtered output.

Figure 3.12 shows the input and the filtered output. The MATLAB programs that compute the filtered output in the time and frequency domains are

Time Domain
h = [1 1 1 1 1]/5;
y = filter(h,1,[djia zeros(1,4)]);

Frequency Domain
h = [1 1 1 1 1]/5;
DJIA = fft(djia, 512);
H = fft(h, 512);
Y = H.*DJIA;
y = ifft(Y);

note: The filter program has the feature that the length of its output equals the length of its input. To force it to produce a signal having the proper length, the program zero-pads the input appropriately.

MATLAB's fft function automatically zero-pads its input if the specified transform length (its second argument) exceeds the signal's length. The frequency domain result will have a small imaginary component (largest value is 2.2×10⁻¹¹) because of the inherent finite precision nature of computer arithmetic. Because of the mismatch between the signal lengths and favored FFT lengths, the number of arithmetic operations in the time-domain implementation is far less than those required by the frequency domain version: 514 versus 62,271. If the input signal had been one sample shorter, the frequency-domain computations would have been more than a factor of two less (28,696), but still far more than in the time-domain implementation.

An interesting signal processing aspect of this example is demonstrated at the beginning and end of the output. The ramping up and down that occurs can be traced to assuming the input is zero before it begins and after it ends. The filter "sees" these initial and final values as the difference equation passes over the input. These artifacts can be handled in two ways: we can just ignore the edge effects, or the data from the previous and succeeding years' last and first week, respectively, can be placed at the ends.
17
H f ( ) =
the magnitude and phase are dened as
( ) + i ( )
(3.47)
|H f ( ) | =
( ( )) + ( ( )) ( ) ( )
p ( ) = arctan
so that
H f ( ) = |H f ( ) |eip()
With this denition, to write
(3.48)
|H ( ) | is never negative and p ( ) is usually discontinuous, but it can be very helpful H f ( ) = A ( ) ei()
(3.49) is called the
H f ( )
as
where
A ( )
( ) continuous. A ( ) |H f ( ) | and A ( ).
amplitude response.
17 This
131
Figure 3.13
( )
is linear.
H f ( ) = A ( ) ei()
with
( ) = M + B
We assume in the following that the impulse response
h (n)
is real-valued.
x1 (n) = cos (1 n + 1 )
is processed through a discrete-time lter with frequency response
H f ( ) = A ( ) ei()
then the output signal is given by
y1 (n) = A (1 ) cos (1 n + 1 + (1 ))
Available for free at Connexions <http://cnx.org/content/col10360/1.4>
132
CHAPTER 3.
DIGITAL FILTERING
or
y1 (n) = A (1 ) cos 1 n +
(1 ) 1
+ 1
(1 ) 1 .
The LTI system has the eect of scaling the cosine signal and delaying it by
Exercise 3.6.1
(Solution on p. 147.)
When does the system delay cosine signals with dierent frequencies by the same amount? The function
( ) is called the
phase delay.
(3.50)
1 = 0.31 ,
(1 ) 1
= 2.45.
Figure 3.14
133
Notice that the delay is fractional the discrete-time samples are not exactly reproduced in the output. The fractional delay can be interpreted in this case as a delay of the underlying continuous-time cosine signal.
2 = 0.47 ,
(2 ) 2
= 0.14.
Figure 3.15
note:
For this example, the delay depends on the frequency, because this system does not have
linear phase.
134
CHAPTER 3.
DIGITAL FILTERING
In addition, when a narrow band signal (as in AM modulation) goes through a lter, the envelop will be delayed by the
is independent of the carrier frequency only if the lter has linear phase. Also, in applications like image processing, lters with non-linear phase can introduce artifacts that are visually annoying.
18
A realizable lter must require only a nite number of computations per output sample. For linear, causal, time-Invariant lters, this restricts one to rational transfer functions of the form
H (z ) =
Assuming no pole-zero cancellations,
b0 + b1 z 1 + + bm z m 1 + a1 z 1 + a2 z 2 + + an z n
is FIR if
H (z )
usually implement rational transfer functions as dierence equations. Whether FIR or IIR, a given transfer function can be implemented with many dierent lter structures. With innite-precision data, coecients, and arithmetic, all lter structures implementing the same transfer function produce the same output. However, dierent lter strucures may produce very dierent errors with quantized data and nite-precision or xed-point arithmetic. The computational expense and memory usage may also dier greatly. Knowledge of dierent lter structures allows DSP engineers to trade o these factors to create the best implementation.
19
18 This 19 This
135
is probably the second most important technique in "classical" signal processing (after the Cooley-Tukey ) FFT). Most of the time, FIR lters are designed to have linear phase. The most important advantage of FIR lters over IIR lters is that they can have exactly linear phase. There are advanced design techniques for minimum-phase lters, constrained
magnitude of the response is important, IIR lers usually require much fewer operations and are typically
used, so the bulk of FIR lter design work has concentrated on linear phase designs.
L2
21
The truncate-and-delay design procedure is the simplest and most obvious FIR design procedure.
(Solution on p. 147.)
n, 0 n M 1 : (h [n]),
maximizing the energy dierence between the desired response and the
minhn hn,
by Parseval's relationship
22
(|Hd ( ) H ( ) |) d
minhn hn, 2
1 n=
Since
(|Hd ( ) H ( ) |)2 d
M 1 n=0
n=
(3.51)
n, n < 0n M : (h [n])
this becomes
minhn hn,
M 1 n=0
(|Hd ( ) H ( ) |)2 d
n=M
1 h=
note:
Thus
0 n (M 1)
if else
20 "Decimation-in-time (DIT) Radix-2 FFT" <http://cnx.org/content/m12016/latest/> 21 This content is available online at <http://cnx.org/content/m12790/1.2/>. 22 "Parseval's Theorem" <http://cnx.org/content/m0047/latest/>
136
CHAPTER 3.
DIGITAL FILTERING
is
(Solution on p. 147.)
For desired spectra with discontinuities, the least-square designs are poor in a minimax (worst-case, or error sense.
L )
n 0 n M 1 hn = h d nwn : (n 0 n M 1 hn = h d nwn)
note:
H ( ) = Hd ( ) W ( )
The window design procedure (except for the boxcar window) is ad-hoc and not optimal in any usual sense. However, it is very simple, so it is sometimes used for "quick-and-dirty" designs of if the error criterion is itself heurisitic.
24
Given a desired frequency response, the frequency sampling design method designs a lter with a frequency
k, k = [o, 1, . . . , N 1] :
note:
Hd (k ) =
n=0
h (n) e(ik n)
(3.52)
Desired Response must incluce linear phase shift (if linear phase is desired)
Exercise 3.10.1
What is
note:
(Solution on p. 148.)
Hd ( )
c ?
M 1
Hd (k ) =
n=0
(3.53)
or
Hd (0 ) Hd (1 )
. . .
e(i0 0) e(i1 0)
. . .
e(i0 1) e(i1 1)
. . .
h (0) h (1)
. . .
(3.54)
Hd (N 1 )
e(iM 1 0)
e(iM 1 1)
...
e(iM 1 (M 1))
h (M 1)
Hd = W h
So
h = W 1 Hd
note:
(3.55)
N = M,
i = j + 2l, i = j
137
and
2 ,
i.e.
k =
2k M
+
2kn M
M 1
M 1
Hd (k ) =
n=0
so
h (n) e(i
2kn M
) e(in) =
n=0 M 1
) = DFT!
h (n) e(in) =
or
1 M
Hd (k ) ei
k=0
2nk M
ein h [n] = M
M 1
Hd [k ] ei
k=0
2nk M
h [n] = h [M n],
M 2 degrees of
H [k ]
= =
M 1
M 1 1 h [n] e(ik n) + e(ik (M n1)) h M2 e(ik 2 ) 1 e(ik M2 M 1 )2 M 2 1 n if M even n=0 h [n] cos k 2 = 3 M 1 M 1 M 1 2 e(ik 2 ) 2 n + h M2 if M n=0 h [n] cos k 2
odd
2 A (k ) = 2
M 2 1 n=0 M 3 2 n=0
M 1 2 M 1 2
n n
if
M even
+h
M 1 2
if
M odd
Due to symmetry of response for real coecients, only equations to solve for
[0, )
real-valued
h [n].
equally spaced:
k, 0 k M 1 :
h [n]
= = =
IDFT [Hd (k )]
1 M 1 M M 1 k=0 M 1 k=0
2k 1 i 2nk A (k ) e(i M ) M2 e M M 1 2k A (k ) ei( M (n 2 ))
(3.57)
A ( )
mus be symmetric
(A ( ) = A ( )) (A [k ] = A [M k ])
138
CHAPTER 3.
DIGITAL FILTERING
h [n]
= = =
1 M 1 M 1 M
M 1 2
k=1
A [k ] ei
2k M
1 1 (n M2 ) + e(i2k(n M2 ))
M 1 2
k=1
M 1 2
A [k ] cos
k=1
A [k ] (1) cos
2k M k
n
2k M
M 1 2
(3.58)
n+
1 2
1 2 lter forms.
H ( )
= k .
Possible solution to this problem: specify more frequency samples than degrees of freedom, and minimize the total error in the frequency response at all of these samples.
H (k ) where 0 k M 1 and N > M , nd h [n], where 0 n M 1 minimizing Hd (k ) H (k ) For l norm, this becomes a linear programming problem (standard packages availble!) Here we will consider the l 2 norm. N 1 To minimize the l 2 norm; that is, n=0 |Hd (k ) H (k ) |, we have an overdetermined set of linear e(i0 0)
. . .
equations:
...
. . .
e(i0 (M 1))
. . .
Hd (0 ) Hd (1 )
. . .
(iN 1 0)
...
(iN 1 (M 1))
h =
Hd (N 1 ) W h = Hd
or
The minimum error norm solution is well known to be the pseudo-inverse matrix.
h = WW
W Hd ; W W
is well known as
note: Extended frequency sampled design discourages radical behavior of the frequency response
between samples for suciently closely spaced samples. may no longer pass exactly through
any of the Hd (k ).
25
The approximation tolerances for a lter are very often given in terms of the maximum, or worst-case, deviation within frequency bands. For example, we might wish a lowpass lter in a (16-bit) CD player to
1 1 |H ( ) | 1 + 1 if | | p 217 217 H ( ) = 1 17 |H ( ) | if s | |
2
25 This
139
The Parks-McClellan lter design method eciently designs linear-phase FIR lters that are optimal in terms of worst-case (minimax) error. Typically, we would like to have the shortest-length lter achieving these specications. Figure Figure 3.16 illustrates the amplitude frequency response of such a lter.
The black boxes on the left and right are the passbands, the black boxes in the middle represent the stop band, and the space between the boxes are the transition bands. Note that overshoots may be allowed in the transition bands.
Figure 3.16:
Exercise 3.11.1
Must there be a transition band?
(Solution on p. 148.)
W ( ),
argminargmax|E ( ) | = argmin E ( )
h F h
where
E ( ) = W ( ) (Hd ( ) H ( ))
Available for free at Connexions <http://cnx.org/content/col10360/1.4>
140
CHAPTER 3.
DIGITAL FILTERING
and
is a compact subset of
[0, ]
(i.e., all
note:
for a given
E ( ) and minimize over M and h; M . One then repeats the design procedure
Even-length
We will discuss in detail the design only of odd-length symmetric linear-phase FIR lters.
and anti-symmetric linear phase FIR lters are essentially the same except for a slightly dierent implicit weighting function. For arbitrary phase, exactly optimal design procedures have only recently been developed (1990).
2. An iterative method for nding a lter which satises these conditions (and which is thus optimal) is developed. That is, the
indirectly.
x,
and let
P (x)
L
be and
Lth-order
polynomial
P (x) =
k=0
Also, let
ak xk
function on
D (x) be a desired function of x that is continuous on F , and W (x) a positive, continuous weighting F . Dene the error E (x) on F as E (x) = W (x) (D (x) P (x))
and
E (x)
A necessary and sucient condition that that
= argmax|E (x) |
xF
P (x) is the unique Lth-order polynomial minimizing E (x) is E (x) exhibits at least L + 2 "alternations;" that is, there must exist at least L + 2 values of x, xk F , k = [0, 1, . . . , L + 1], such that x0 < x1 < < xL+2 and such that E (xk ) = E (xk+1 ) = ( E )
Exercise 3.11.2
(Solution on p. 148.)
141
even,
A ( ) =
n=0
h (L n) cos n +
1 2
M where L = 2 1 Using the trigonometric identity cos ( + ) = cos ( ) + 2cos () cos ( ) to pull out the term and then using the other trig identities (p. 148), it can be shown that A ( ) can be written as 2
A ( ) = cos 2
Again, this is a polynomial in
k cosk ( )
k=0
x = cos ( ), E ( ) = = =
P ( ) P ( )
(3.59)
Ad ( ) cos( 2)
which implies
(3.60)
cos
1 1 (cos (x)) 2
1
A' d ( x) =
1 1 2 (cos (x))
E ( )
has at least
L+2=
M 2 + 1 alternations, the even-length symmetric lter is optimal in an The prototypical lter design problem:
sense.
1 W = s
See Figure 3.17.
if if
| | p |s | | |
142
CHAPTER 3.
DIGITAL FILTERING
Figure 3.17
(L 1)th-order
implies that
polynomial, it can have at L 1 zeros. A( ) = 0 at = 0 and = , for two more possible alternation points. band edges can also be alternations, for a total of L 1 + 2 + 2 = L + 3 possible alternations.
most
or
However
L + 3: The proof is that the P (x) = 0. Since P (x) is an x , the mapping x = cos ( )
Finally, the
=0
= . =0
or
and
s .
= .
note: The alternation theorem doesn't directly suggest a method for computing the optimal lter.
What we need is an
143
L+2
cos (0 )
1 W (0 ) 1 W (1 ) . . .
. . . . . .
... ...
cos (2L+1 )
cos (LL+1 ) W h
(1) W (L+1 )
Ad (0 ) Ad (1 )
. . . . . . . . .
Ad (L+1 )
= Ad
T
L + 2 extremal frequencies, we can solve for h and via (h, ) = W 1 Ad . Using the can compute A ( ) of h (n), on a dense set of frequencies. If the old k are, in fact the extremal of A ( ), then the alternation theorem is satised and h (n) is optimal. If not, repeat the process
N log2 N
32L,
typically),
per iteration!
This method is expensive computationally due to the matrix inverse. A more ecient variation of this method was developed by Parks and McClellan (1972), and is based on the Remez exchange algorithm. To understand the Remez exchange algorithm, we rst need to understand Lagrange Interpoloation. Now compute
A ( ) is an Lth-order polynomial in x = cos ( ), so Lagrange interpolation can be used to exactly A ( ) from L + 1 samples of A (k ), k = [0, 1, 2, ..., L]. Thus, given a set of extremal frequencies and knowing , samples of the amplitude response A ( ) can
be computed
A (k ) =
(1) + Ad (k ) W (k )
k(1)
(3.61)
L+2
to obtain (Rabiner,
=
where
(3.62)
L+1
k =
i=i=k,0
1 cos (k ) cos (i )
The result is the Parks-McClellan FIR lter design method, which is simply an application of the Remez exchange algorithm to the lter design problem. See Figure 3.18.
144
CHAPTER 3.
DIGITAL FILTERING
The initial guess of extremal frequencies is usually equally ` spaced in the band. Computing ` O 16L2 . Computing h (n) costs O L3 , but it is only done once!
Figure 3.18: ` 2
145
O 16L2
, as opposed to
O L3
L.
Can also
interpolate to DFT sample frequencies, take inverse FFT to get corresponding lter coecients, and zeropad and take longer FFT to eciently interpolate.
26
b = fir1(N,Wn)
N.
Wn
must be
b = fir1(N,Wn,'high')
b = fir1(N,Wn,'stop')
with
Wn
Wn.
a two-element vector designating the stopband designs a is employed in the design. Other windowing For
Hamming window
functions can be used by specifying the windowing function as an extra argument of the function. example, Blackman window can be used instead by the command
b = remez(N,F,A)
algorithm. F is a vector of frequency band edges in ascending order between 0 and 1 with 1 corresponding to the half sampling rate. A is a real vector of the same size as F which species the desired amplitude of the frequency response of the points between
F(k+1)
and
F(k+2)
(F(k),A(k))
(F(k+1),A(k+1))
for odd
k.
For odd
k,
the bands
26 This
146
CHAPTER 3.
DIGITAL FILTERING
27
(Solution on p. 148.)
Assuming sampling rate at 48kHz, design an order-40 low-pass lter having cut-o frequency 10kHz by windowing method. In your design, use Hamming window as the windowing function.
(Solution on p. 148.)
Assuming sampling rate at 48kHz, design an order-40 lowpass lter having transition band 10kHz-
27 This
147
[0, 2 )
S (k )
i=
s (n iL)
(3.63)
To avoid aliasing (in the time domain), the transform length must equal or exceed the signal's duration. The dierence equation for an FIR lter has the form
y (n) =
m=0
The unit-sample response equals
bm x (n m)
(3.64)
h (n) =
m=0
bm (n m)
28
(3.65)
q+1
Nx .
Solution to Exercise 3.9.1 (p. 135) Solution to Exercise 3.9.2 (p. 136): Gibbs Phenomenon
Yes; in fact it's optimal! (in a certain sense)
(a)
Figure 3.19:
(b)
28 "Discrete-Time
Systems in the Time-Domain", Example 2 <http://cnx.org/content/m10251/latest/#ex2001> Available for free at Connexions <http://cnx.org/content/col10360/1.4>
148
CHAPTER 3.
DIGITAL FILTERING
Solution to Exercise 3.10.1 (p. 136) Solution to Exercise 3.11.1 (p. 139) Solution to Exercise 3.11.2 (p. 140)
H ( ) = =
Yes, when the desired response is discontinuous. Since the frequency response of a nite-length lter must be continuous, without a transition band the worst-case error could be no less than half the discontinuity. It's the same problem! To show that, consider an odd-length, symmetric linear phase lter.
+2
L
L n=1
M 1 2
n cos (n)
(3.66)
A ( ) = h (L) + 2
n=1
Where
h (L n) cos (n)
(3.67)
. L=
cos (n) = 2cos ((n 1) ) cos ()cos ((n 2) )), we can rewrite
L L
A ( )
A ( ) = h (L) + 2
n=1
where the
h (L n) cos (n) =
k=0
k cosk ( )
one-to-one
mapping from
x [1, 1]
onto
h (n) by a linear transformation. Now, let x = cos ( ). This is a [0, ]. Thus A ( ) is an Lth-order polynomial in x = cos ( )! L
lter design problem, too!
Therefore, to determine whether or not a length-M , odd-length, symmetric linear-phase lter is optimal in an
If there are
L+2=
M +3 or more alternations, 2
Chapter 4
Figure 4.1
x y [n] = 0
In the
n L
if
n L
otherwise
z -domain, Y (z ) =
n
y [n] z n =
n n, L Z
n n z = L
x [k ] z (k)L = X z L
k
and substituting
z = ei
Y ei = X eiL
As shown in Figure 4.2, upsampling compresses the DTFT by a factor of
(4.1)
axis.
1 This
150
CHAPTER 4.
Figure 4.2
4.2 Downsampling
The operation of
downsampling by factor M
M th
sample and
(M )"
Figure 4.3
y [n] = x [nM ]
In the
domain,
Y (z )
= = =
n n m
m M
2 This
151
where
1 M
M 1 i 2 pm M p=0 e
1 = 0
if
m is
a multiple of M
otherwise
Y (z )
= =
1 M 1 M
M 1 p=0 M 1 p=0
m
(4.3)
Y ei =
1 M
M 1
X ei
p=0
2p M
(4.4)
2 M.
-periodic repetition of If
X ei
by a factor of
x [m]
is not bandlimited to
M , aliasing may
strongly recommended to carry out the analysis in the Working directly in the
z -domain
ei -domain
Figure 4.4
152
CHAPTER 4.
4.3 Interpolation
4.3.1 Interpolation
more specic, say that version of then
Interpolation is the process of upsampling and ltering a signal to increase its eective sampling rate. To be
x [m].
If we lter
y [n]
will be a
x [m] is an (unaliased) T -sampled version of xc (t) and v [n] is an L-upsampled version v [n] with an ideal L -bandwidth lowpass lter (with DC gain L) to obtain y [n], T -sampled version of xc (t). This process is illustrated in Figure 4.5. L
Figure 4.5
We justify our claims about interpolation using frequency-domain arguments. From the sampling theorem, we know that
T-
sampling
xc (t)
to create
x [n] 1 T
yields
X ei =
After upsampling by factor
Xc i
k
2k T
(4.5)
L,
(4.5) implies
V ei =
Lowpass ltering with cuto
1 T
Xc i
k
L 2k T
1 T
Xc
k
2 L k
T L
L and gain
yields
Y ei =
L T
Xc
k L Z
2 L k
T L
L T
Xc
l
2l
T L
k = lL
(for
T L -shaped version of
l Z)
xc (t).
L = 2.
3 This
153
Figure 4.6
20kHz.
With a standard ZOH-DAC, the analog reconstruction lter would have passband edge at
stopband edge at
24.1kHz.
(See Figure 4.7) With such a narrow transition band, this would be a dicult
Figure 4.7
If digital interpolation is used prior to reconstruction, the eective sampling rate can be increased and the reconstruction lter's transition band can be made much wider, resulting in a much simpler (and cheaper) analog lter. Figure 4.8 illustrates the case of interpolation by edge at
4.
20kHz
156.4kHz,
4 This
154
CHAPTER 4.
Figure 4.8
4.5 Decimation
downsampling.
Decimation is the process of ltering and downsampling a signal to decrease its eective sampling rate, as illustrated in Figure 4.9. The ltering is employed to prevent aliasing that might otherwise result from
Figure 4.9
X ei =
5 This
1 T
Xl i
k
2k T
1 T
Xb i
k
2k T
155
Xb (i)
is the removed by
V ei =
Finally, downsampling yields
1 T
Xl i
k
2k T
Y ei
= = =
1 MT 1 MT 1 MT
M 1 p=0 M 1 p=0 l
k k
Xl i Xl
2p 2k M
T (2 )(kM +p) i MT
(4.6)
2l Xl i M T
A frequency-domain illustration for
M T -sampled
version of
xl (t).
M = 2
appears in
Figure 4.10
M can be combined to change the eective sampling rate of a signal L or . Rather than M . This process is called cascading an anti-imaging lter for interpolation with an anti-aliasing lter for decimation, we implement one lter with the minimum of the two cutos L , M and the multiplication of the two DC gains (L and
and decimation by by the rational factor
resampling
sample-rate conversion
1),
6 This
156
CHAPTER 4.
Figure 4.11
x [n]
that is
to get
v [m],
L , L , while the undesired portion is the remainder of zero energy in the regions
[, ).
Noting from
2k + 0 2 (k + 1) 0 , L L
,k Z
the anti-imaging lter can be designed with transition bands in these regions (rather than passbands or stopbands). For a given number of taps, the additional degrees of freedom oered by these transition bands allows for better responses in the passbands and stopbands. shown in the bottom subplot below (Figure 4.12). The resulting lter design specications are
7 This
157
Figure 4.12
spectral component of the input signal and we have decided to downsample by M . The goal is to minimally distort the M 0 0 M , M , i.e., the post-decimation spectrum over [0 , 0 ). Thus, we must not allow any aliased signals to enter [0 , 0 ). To allow for extra degrees of freedom in the lter design, we allow
desired
<
aliasing to enter the post-decimation spectrum outside of regions which alias outside of
[0 , 0 )
within
[, ).
do
[0 , 0 )
are given by
2k + 0 2 (k + 1) 0 , L L
,k Z
(4.8)
(as shown in Figure 4.13), we can treat these regions as transition bands in the lter design. The resulting lter design specications are illustrated in the middle subplot (Figure 4.13).
158
CHAPTER 4.
Figure 4.13
The Noble identities (illustrated in Figure 4.14 and Figure 4.15) describe when it is possible to reverse the order of upsampling/downsampling and ltering. We prove the Noble identities showing the equivalence of each pair of block diagrams. The Noble identity for interpolation can be depicted as in Figure 4.14:
Figure 4.14
Y (z ) = H z L V1 (z )
8 This
159
where
V1 (z ) = X z L Y (z ) = H z L X z L
Y (z ) = V2 z L
where
V2 (z ) = H (z ) X (z ) Y (z ) = H z L X z L
Thus we have established the Noble identity for interpolation. The Noble identity for decimation can be depicted as in Figure 4.15:
Figure 4.15
V1 (z ) =
1 M
M 1
X e(i) M k z M
k=0
Y (z )
= H (z ) V1 (z ) = H (z )
1 M M 1 k=0
X e(i) M k z M
(4.9)
Y (z ) =
where
1 M
M 1
Vz e(i) M k z M
k=0
(4.10)
V2 (z ) = X (z ) H z M Y (z ) =
1 M M 1 k=0
X e(i) M k z M H e(i) M kM z M
M 1 k=0
1 = H (z ) M
X e(i) M k z M
(4.11)
Thus we have established the Noble identity for decimation. Note that the impulse response of the
H zL
is
L-upsampled
impulse response of
H (z ).
9
160
CHAPTER 4.
Figure 4.16
Note that this procedure is computationally inecient because the lowpass lter operates on a sequence that is mostly composed of zeros. Through the use of the Noble identities, it is possible to rearrange the preceding block diagram so that operations on zero-valued samples are avoided. In order to apply the Noble identity for interpolation, we must transform polyphase components
H (z )
Hp z L
p = {0, . . . , L 1}. H (z ) = =
nn kk
h [n] z n
L1 p=0
h [kL + p] z (kL+p)
(4.12)
via
k :=
n L ,
p := nmodL
L1
H (z ) =
p=0 kk
via
hp [k ] z (kL) z p
(4.13)
hp [k ] := h [kL + p]
L 1
H (z ) =
p=0
Above,
Hp z L z p
the modulo-M operator. Note that the
(4.14)
modM
pth
polyphase lter
hp [k ]
h [n]
at oset
p.
161
Figure 4.17
Applying the Noble identity for interpolation to Figure 4.18 yields Figure 4.17. The ladder of upsamplers and delays on the right below (Figure 4.17) accomplishes a form of parallel-to-serial conversion.
162
CHAPTER 4.
Figure 4.18
10
Figure 4.19
Note that this procedure is computationally inecient because it discards the majority of the computed lter outputs. Through the use of the Noble identities, it is possible to rearrange Figure 4.19 so that lter outputs are not discarded.
10 This
163
In order to apply the Noble identity for decimation, we must transform components
Hp z M
p = {0, . . . , M 1}, H (z ) = =
(Section 4.9).
nn kk
h [n] z n
M 1 p=0
(4.15)
via
k :=
n M ,
p := nmodM
M 1
H (z ) =
p=0
via
hp [k ] z (kM ) z p
k
(4.16)
hp [k ] := h [kM + p]
M 1
H (z ) =
p=0
Hp z M z p
(4.17)
Using these unsampled polyphase components, the preceding block diagram can be redrawn as Figure 4.20.
Figure 4.20
Applying the Noble identity for decimation to Figure 4.20 yields Figure 4.21. The ladder of delays and downsamplers on the left below accomplishes a form of serial-to-parallel conversion.
164
CHAPTER 4.
Figure 4.21
11
M,
we have
H (z ) with N taps, requiring N multiplies per output. For standard N multiplies per intermediate sample and M intermediate samples per
NM
N M multiplies per branch and M branches, giving a total of N multiplies N per output. The assumption of M multiplies per branch follows from the fact that h [n] is downsampled by M to create each polyphase lter. Thus, we conclude that the standard implementation requires M times
For polyphase decimation, we have as many operations as its polyphase counterpart. (For decimation, we count multiples per output, rather than per input, to avoid confusion, since only every independent of the decimation rate lowpass FIR lter
M th
From this result, it appears that the number of multiplications required by polyphase decimation is
H (z )
M.
of the
approximation formula
10log (p s ) 13 2.324 ( ) where ( ) in the transition bandwidth in radians, and p and s are the passband and stopband ripple levels. Recall that, to preserve a xed signal bandwidth, the transition bandwidth ( ) will be linearly proportional N
11 This
165
to the cuto
M M.
M , so that N will be linearly proportional to M . In summary, polyphase decimation by factor requires N multiplies per output, where N is the lter length, and where N is linearly proportional to
Using similar arguments for polyphase interpolation, we could nd essentially the same result. Polyphase
interpolation by factor
requires
is
linearly proportional to the interpolation factor than per output, to avoid confusion, since
L.
12
There exist many applications in modern signal processing where it is advantageous to separate a signal into dierent frequency ranges called
sub-bands.
k =
Figure 4.22
Alternatively, the sub-bands might have a logarithmic spacing like that shown in Figure 4.23.
12 This
166
CHAPTER 4.
Figure 4.23
For most of our discussion, we will focus on uniformly spaced sub-bands. The separation into sub-band components is intended to make further processing more convenient. Some of the most popular applications for sub-band decomposition are audio and video source coding (with the goal of ecient storage and/or transmission). Figure 4.24 illustrates the use of sub-band processing in MPEG audio coding. There a psychoacoustic model is used to decide how much quantization error can be tolerated in each sub-band while remaining below the hearing threshold of a human listener. In the sub-bands that can tolerate more error, less bits are used for coding. The quantized sub-band signals can then be decoded and recombined to reconstruct rate while still maintaining "CD quality" audio. The psychoacoustic model takes into account the spectral masking phenomenon of the human ear, which says that high energy in one spectral region will limit the ear's ability to hear details in nearby spectral regions. Therefore, when the energy in one sub-band is high, nearby sub-bands can be coded with less bits without degrading the perceived quality of the audio signal. The MPEG standard species 32-channels of sub-band ltering. Some psychoacoustic models also take into account "temporal masking" properties of the human ear, which say that a loud burst of sound will temporarily overload the ear for short time durations, making it possible to hide quantization noise in the time interval after a loud sound burst. (an approximate version of ) the input signal. Such processing allows, on average, a 12-to-1 reduction in bit
Figure 4.24
In typical applications, non-trivial signal processing takes place between the bank of analysis lters and the bank of synthesis lters, as shown in Figure 4.25. We will focus, however, on lterbank design rather than on the processing that occurs between the lterbanks.
167
Figure 4.25
Our goals in lter design are: 1. Good sub-band frequency separation (i.e., good "frequency selectivity"). 2. Good reconstruction (i.e., lossless. The rst goal is driven by the assumption that the sub-band processing works
y [n]
x [n d]
d)
best
when it is given
access to cleanly separated sub-band signals, while the second goal is motivated by the idea that the subband ltering should not limit the reconstruction performance when the sub-band processing (e.g., the coding/decoding) is lossless or nearly lossless.
13
{ k,n (t) | k Z n Z },
wavelets.
x (t) L2
using an or-
x (t) =
k= n=
(4.18)
dk,n
(4.19)
where
{dk,n }
Note the relationship to Fourier series and to the sampling using a countably-innite
theorem: in both cases we can perfectly describe a continuous-time signal Fourier coecients
{ X [k ] | k Z }, while the sampling theorem enabled us to describe bandlimited signals using signal samples { x [n] | n Z }. In both cases, signals within a limited class are represented using a coecient set with a single countable index. The DWT can describe any signal in L2 using a coecient set parameterized by two countable indices: { dk,n | k Z n Z }.
13 This
168
CHAPTER 4.
(t) L2 .
Wavelets
L2
mother wavelet,
(4.20)
For example,
k, n, k n Z : k,n (t) = 2 2 2k t n
denes a family of wavelets the wavelet stretches by a factor of two; as
note:
stretches. As
increases,
When
(t) = 1,
k,n (t) = 1
for all
k Z, n Z.
Power-of-two stretching is a convenient, though somewhat arbitrary, choice. In our treatment of the discrete wavelet transform, however, we will focus on this choice. Even with power-of two stretches, there are various possibilities for
(t),
{ k,n (t) | n Z }
k ),
In this way, the DWT can give a multiresolution description of a signal, very useful in analyzing "real-world" signals. Essentially, the DWT gives us a discrete multi-resolution description of a continuous-time signal in L2 . become more "ne grained" and the level of detail increases. In the modules that follow, these DWT concepts will be developed "from scratch" using Hilbert space principles. To aid the development, we make use of the so-called used to approximate the signal
),
the wavelets
scaling function (t) L2 , which will be up to a particular level of detail. Like with wavelets, a family of scaling
k
k, n, k n Z : k,n (t) = 2 2 2k t n
given mother scaling function elaborated upon later via theory
note:
(4.21)
(t).
14
dk,n ,
case.
In our treatment of the discrete wavelet transform, however, we will assume real-valued
signals and wavelets. For this reason, we omit the complex conjugations in the remainder of our DWT discussions
15
The Haar basis is perhaps the simplest example of a DWT basis, and we will frequently refer to it in our
0t<1
(4.22)
otherwise
169
Figure 4.26
From the mother scaling function, we dene a family of shifted and stretched scaling functions according to (4.23) and Figure 4.27
{k,n (t)}
k,n (t)
= =
k, n, k Zn Z : 2 2 2k t n 2 2
k
1 2k
t n2k
(4.23)
Figure 4.27
and
n.
{ k,n (t) | n Z }
is
170
CHAPTER 4.
Figure 4.28
16
x (t) L2 .
0th
level of coarseness
x0 (t).
(Recall that
x0 (t)
x (t)
onto
V0
to decompose
x0 (t)
x0 (t) V0
and
V 0 = V 1 W1 ,
and
{d1 [n]}
such that
x0 (t)
16 This
= =
(4.24)
171
{ 1,n (t) | n Z } c1 [ n ]
V1
[m] < (0,m (t) , 1,n (t)) > [m] < ( (t m) , [m] [m] h [m 2n] h [ ] < ( (t m) , (t 2n)) >
where
[t 2n] =< (t m) , (t 2n) >. The previous expression ((4.25)) indicates that {c1 [n]} {c0 [m]} with a time-reversed version of h [m] then downsampling by factor two
Figure 4.29
{ 1,n (t) | n Z }
W1
d1 [n]
= = = = = =
[m] < (0,m (t) , 1,n (t)) > [m] < ( (t m) , [m] [m] g [m 2n] g [ ] < ( (t m) , (t 2n)) >
where
{c0 [m]}
with a time-
g [m]
Figure 4.30
172
CHAPTER 4.
Putting these two operations together, we arrive at what looks like the analysis portion of an FIR lterbank (Figure 4.31):
Figure 4.31
We can repeat this process at the next higher level. Since and
V1 = W 2 V2 ,
{c2 [n]}
{d2 [n]}
such that
x1 (t)
= =
nn c1 nn
(4.27)
c2 [n] =
mm
c1 [m] h [m 2n]
(4.28)
d2 [n] =
mm
c1 [m] g [m 2n]
(4.29)
173
Figure 4.32
If we use
V0 = W1 W2 W3 Wk Vk
k th
Figure 4.33
As we might expect, signal reconstruction can be accomplished using cascaded two-channel synthesis
174
CHAPTER 4.
c0 [m]
= = = =
nn
(4.30)
nn
d1 [n] g [m 2n]
h [m 2n] =< 1,n (t) , 0,m (t) > g [m 2n] =< 1,n (t) , 0,m (t) >
and
Figure 4.34
c1 [m] =
nn
c2 [n] h [m 2n] +
nn
d2 [n] g [m 2n]
(4.31)
175
Figure 4.35
k th
Figure 4.36
The table (Table 4.1) makes a direct comparison between wavelets and the two-channel orthogonal PRFIR lterbanks.
176
CHAPTER 4.
H z 1 H (z ) H z H ( z ) H z 1 = 2 G z 1 P, P is odd G (z ) = z P H z 1 H (z ) G (z )
Table 4.1
H0 (z )
1
H0 (z ) H0 z 1 H0 (z ) H0 z 1 = 1 H1 (z )
N, N is even : H1 (z ) = z (N 1) H0 z 1 G0 (z ) = 2z (N 1) H0 z 1 G1 (z ) = 2z (N 1) H1 z 1
From the table, we see that the discrete wavelet transform that we have been developing is identical to two-channel orthogonal PR-FIR lterbanks in all but a couple details. 1. Orthogonal PR-FIR lterbanks employ synthesis lters with twice the gain of the analysis lters, whereas in the DWT the gains are equal. 2. Orthogonal PR-FIR lterbanks employ causal lters of length constrained to be causal. For convenience, however, the wavelet lters have even impulse response length
N,
H (z )
and
N,
we require that
G (z ) are usually P = N 1.
17
Say that the DWT for a particular choice of wavelet yields an ecient representation of a particular signal class. In other words, signals in the class are well-described using a few large transform coecients.
the DWT. Due to the orthogonality of the DWT, such noise sequences make, on average, equal contributions Any given noise sequence is expected to yield many small-valued transform
de-noising a signal. Say that we perform a DWT on a well-matched signal class that has been corrupted by additive noise. We expect that large
transform coecients are composed mostly of signal content, while small transform coecients should be composed mostly of noise content. Hence, throwing away the transform coecients whose magnitude is less than some small threshold should improve the signal-to-noise ratio. The de-noising procedure is illustrated in Figure 4.37.
Figure 4.37
Now we give an example of denoising a step-like waveform using the Haar DWT. In Figure 4.38, the top right subplot shows the noisy signal and the top left shows it DWT coecients. Note the presence
17 This
177
of a few large DWT coecients, expected to contain mostly signal components, as well as the presence of many small-valued coecients, expected to contain noise. (The bottom left subplot shows the DWT for the original signal before any noise was added, which conrms that all signal energy is contained within a few large coecients.) If we throw away all DWT coecients whose magnitude is less than 0.1, we are left with only the large coecients (shown in the middle left plot) which correspond to the de-noised time-domain signal shown in the middle right plot. The dierence between the de-noised signal and the original noiseless signal is shown in the bottom right. Non-zero error results from noise contributions to the large coecients; there is no way of distinguishing these noise components from signal components.
178
CHAPTER 4.
Figure 4.38
Chapter 5
Before now, you have probably dealt strictly with the theory behind signals and systems, as well as look and systems .
3
foundation; however, most electrical engineers do not get to work in this type of fantasy world. In many cases the signals of interest are very complex due to the randomness of the world around them, which leaves them noisy and often corrupted. necessary information. This often causes the information contained in the signal to be hidden and distorted. For this reason, it is important to understand these random signals and how to recover the
deterministic signals.
signals are xed and can be determined by a mathematical expression, rule, or table. Because of this, future values of any deterministic signal can be calculated from past values. and future behavior. For this reason, these signals are relatively easy to analyze as they do not change, and we can make accurate assumptions about their past
1 This content is available online at <http://cnx.org/content/m10649/2.2/>. 2 "Signal Classications and Properties" <http://cnx.org/content/m10057/latest/> 3 "System Classications and Properties" <http://cnx.org/content/m10084/latest/>
180
CHAPTER 5.
Deterministic Signal
Figure 5.1:
stochastic signals,
or
random signals,
cannot be characterized by a simple, well-dened mathematical equation and their future values cannot Rather, we must use probability and statistics to analyze their behavior. their randomness, average values (Section 5.3) from a collection of signals are usually studied rather than analyzing one individual signal.
Random Signal
We have taken the above sine wave and added random noise to it to come up with a noisy, or random, signal. These are the types of signals that we wish to learn how to deal with so that we can recover the original sine wave.
Figure 5.2:
random process.
A family or ensemble of signals that correspond to every possible outcome of a certain signal measurement. Each signal in this collection is referred to as a
Example
As an example of a random process, let us look at the Random Sinusoidal Process below. We use
f [n] = Asin (n + )
to represent the sinusoid with a given amplitude and phase. Note that the
181
phase and amplitude of each sinusoid is based on a random number, thus making this a random process.
Figure 5.3:
A random sinusoidal process, with the amplitude and phase being random numbers.
X (t)
or
X [n],
with
x (t)
or
x [n]
In many notes and books, you might see the following notation and terms used to describe dierent types of random processes. For a represents time that has a nite number of values. If
random process.
discrete random process, sometimes just called a random sequence, t t can take on any value of time, we have a continuous
Often times discrete and continuous refer to the amplitude of the process, and process or
sequence refer to the nature of the time variable. For this study, we often just use
to a general collection of discrete-time signals, as seen above in Figure 5.3 (Random Sinusoidal Process).
182
CHAPTER 5.
From the denition of a random process (Section 5.1), we know that all random processes are composed of random variables, each at its own unique point in time. Because of this, random processes have all the properties of random variables, such as mean, correlation, variances, etc.. When dealing with groups of signals or sequences it will be important for us to be able to show whether of not these statistical properties hold true for the entire random process. To do this, the concept of The general denition of a stationary process is:
a random process where all of its statistical properties do not vary with time Processes whose statistical properties do change are referred to as
nonstationary.
Understanding the basic idea of stationarity will help you to be able to follow the more concrete and mathematical denition to follow. Also, we will look at various levels of stationarity used to describe the various types of stationarity characteristics a random process can have.
simply tool used to identify the probability that our observed random variable will be less than or equal to
Fx (x) = P r [X x]
situations where we want to look at the probability of event is an example of a second-order
This same idea can be applied to instances where we have multiple random variables as well. There may be
and Y
Fx (x, y ) = P r [X x, Y y ]
(5.2)
While the distribution function provides us with a full view of our variable or processes probability, it is not always the most useful for calculations.
fx (x) =
d Fx ( x ) dx
(5.3)
(5.4)
(5.4) reveals some of the physical signicance of the density function. This equations tells us the probability
fx (x) dx.
can now use our knowledge of integrals to evaluate probabilities from the above approximation. Again we
for the distribution function. The density function is used for a variety of calculations, such as nding the expected value or proving a random variable is stationary, to name a few.
4 This
183
note:
The above examples explain the distribution and density functions in terms of a single
random variable,
X.
When we are dealing with signals and random processes, remember that
we will have a set of random variables where a dierent random variable will occur at each time instance of the random process,
X (tk ).
5.2.3 Stationarity
Below we will now look at a more in depth and mathematical denition of a stationary process. As was mentioned previously, various levels of stationarity exist and we will look at the most common types.
5.2.3.1 First-Order Stationary Process A random process is classied as rst-order stationary if its rst-order probability density function remains
equal regardless of any shift in time to its time origin. If we let
x t1
t1 ,
then
fx (xt1 ) = fx (xt1 + )
The physical signicance of this equation is that our density function, of
(5.5)
fx (xt1 ),
is completely independent
t1
The most important result of this statement, and the identifying characteristic of any rst-order stationary process, is the fact that the mean is a constant, independent of any time shift. Below we show the results for a random process,
X,
x [n].
= = =
5.2.3.2 Second-Order and Strict-Sense Stationary Process A random process is classied as second-order stationary if its second-order probability density function
does not vary over any time shift applied to both values. In other words, for values have the following be equal for an arbitrary time shift
xt1
and
xt2
then we will
.
(5.7)
From this equation we see that the absolute time does not aect our functions, rather it only really depends on the time dierence between the two variables. Looked at another way, this equation can be described as
(5.8)
bution functions of the process are unchanged regardless of the time shift applied to them. For a second-order stationary process, we need to look at the autocorrelation function (Section 5.5) to see its most important property. Since we have already stated that a second-order stationary process depends only on the time dierence, then all of these types of processes have the following property:
Rxx (t, t + )
= E [X (t + )] = Rxx ( )
(5.9)
184
CHAPTER 5.
slightly more relaxed requirements but ones that are still enough to provide us with adequate results. In order to be WSS a random process only needs to meet the following two requirements. 1. 2.
Note that a second-order (or SSS) stationary process will always be WSS; however, the reverse will not always hold true.
In order to study the characteristics of a random process (Section 5.1), let us look at some of the basic properties and operations of a random process. Below we will focus on the operations of the random signals that compose our random processes. We will denote our random process with
x.
The mean of a
must look at a random signal over a range of time (possible values) and determine our average from this set
X E [X ]
(5.10)
xf (x) dx
This equation may seem quite cluttered at rst glance, but we want to introduce you to the various notations used to represent the mean of a random signal or process. Throughout texts and other readings, remember that these will all equal the same thing. The symbol, used is,
x (t), X"
and the
short-hand to represent an average, so you might see it in certain textbooks. The other important notation
E [X ],
is very common and will appear again. If the random variables, which make up our random process, are discrete or quantized values, such as in a binary process, then the integrals become summations over all the possible values of the random variable. In this case, our expected value becomes
E [x [n]] =
x
P r [x [n] = ]
(5.11)
If we have two random signals or variables, their averages can reveal how the two signals interact. If the product of the two individual averages of both signals do signals, then the two signals are said to be
not equal the average of the product of the two linearly independent, also referred to as uncorrelated.
5 This
185
In the case where we have a random process in which only one sample can be viewed at a time, then we will often not have all the information available to calculate the mean using the density function as shown above. In this case we must estimate the mean through the time-average mean (Section 5.3.4: Time Averages), discussed later. For elds such as signal processing that deal mainly with discrete signals and values, then these are the averages most commonly used.
is the constant:
E [] =
Adding a constant,
(5.12)
E [X + ] = E [X ] +
Multiplying the random variable by a constant,
(5.13)
E [X ] = E [X ]
The expected value of the sum of two or more random variables, is the sum of each individual expected value.
E [X + Y ] = E [X ] + E [Y ]
(5.15)
X2
= E X2 =
(5.16)
x2 f (x) dx
5.3.3 Variance
Now that we have an idea about the average value or values that a random process takes, we are often interested in seeing just how spread out the dierent random values might be. To do this, we look at the
2 ,
is written as follows:
Var (X )
2
(5.17)
= E (X E [X ]) =
x X
f (x) dx
Using the rules for the expected value, we can rewrite this formula as the following form, which is commonly seen:
= =
X2 X E X 2 (E [X ])
2
2
(5.18)
186
CHAPTER 5.
Var ()
= () = 0
2
(5.19)
to a random variable does not aect the variance because the mean increases
Var (X + )
= (X + ) = (X )
2
2
(5.20)
Var (X )
= 2 (X )
are
The variance of the sum of two random variables only equals the sum of the variances if the variable
independent.
Var (X + Y )
= =
(X + Y )
2
2 2
(5.22)
(X ) + (Y )
Otherwise, if the random variable are the product of the variables as follows:
the process seems to have the same statistical behavior as the entire process. The time averages will also only be taken over a nite interval since we will only be able to see a nite part of the sample.
x (t),
we will estimate the mean using the time averaging function dened
= E [X ] =
1 T T 0
(5.24)
X (t) dt
However, for most real-world situations we will be dealing with discrete values in our computations and signals. We will represent this mean as
= E [X ] =
1 N N n=1
(5.25)
X [n]
187
x 2 =X 2 X
2
(5.26)
5.3.5 Example
Let us now look at how some of the formulas and concepts above apply to a simple example. We will just look at a single, continuous random variable for this example, but the calculations and methods are the same for a random process. For this example, we will consider a random variable having the probability density function described below and shown in Figure 5.4 (Probability Density Function).
f ( x) =
1 10
if
10 x 20
(5.27)
otherwise
Figure 5.4:
= = = =
(5.28)
15
188
CHAPTER 5.
Using (5.16) we can obtain the mean-square value for the above density function.
X2
= = = =
(5.29)
233.33
When we take the expected value (Section 5.3), or average, of a random process (Section 5.1.2: Random Process), we measure several important characteristics about how the process behaves in general. This proves to be a very important observation. However, suppose we have several random processes measuring dierent aspects of a system. The relationship between these dierent processes will also be an important observation. The covariance and correlation are two important tools in nding these relationships. Below we will go into more details as to what these words mean and how these tools are helpful. Note that much of the following discussions refer to just random variables, but keep in mind that these variables can represent random signals or random processes.
5.4.1 Covariance
To begin with, when dealing with more than one random process, it should be obvious that it would be nice to be able to have a number that could quickly give us an idea of how similar the processes are. To do this, we use the
covariance, which is analogous to the variance of a single variable. Denition 5.3: Covariance
X
and
A measure of how much the deviations of two or more variables or processes match. For two processes,
Y,
if they are
not closely related then the covariance will be small, and if they
are similar then the covariance will be large. Let us clarify this statement by describing what we mean by "related" and "similar." Two processes are "closely related" if their distribution spreads are almost equal and they are around the same, or a very slightly dierent, mean. Mathematically, covariance is often written as
xy
and is dened as
cov (X, Y )
= xy = E X X
YY
(5.31)
This can also be reduced and rewritten in the following two forms:
xy =(xy ) x y
6 This
(5.32)
189
xy =
X X
YY
f (x, y ) dxdy
(5.33)
If
and
are independent and uncorrelated or one of them has zero mean value, then
xy = 0
If
and
xy = (E [X ] E [Y ])
The covariance is symmetric
5.4.2 Correlation
For anyone who has any kind of statistical background, you should be able to see that the idea of dependence/independence among variables and signals plays an important role when dealing with random processes. Because of this, the
correlation
This measure of association between the variables will provide us with a clue as to how well the value of one variable can be predicted from the value of the other. The correlation is equal to the average of the product of two random variables and is dened as
cor (X, Y )
= E [XY ] =
(5.34)
our variables. The correlation coecient of two variables is dened in terms of their covariance and standard deviations (Section 5.3.3.1: Standard Deviation), denoted by
=
where we will always have
cov (X, Y ) x y
(5.35)
1 1
This provides us with a quick and easy way to view the correlation between our variables. If there is no relationship between the variables then the correlation coecient will be zero and if there is a perfect positive
190
CHAPTER 5.
match it will be one. If there is a perfect inverse relationship, where one set of variables increases while the other decreases, then the correlation coecient will be negative one. This type of correlation is often referred to more specically as the
(a)
(b)
(c)
Figure 5.5:
Types of Correlation (a) Positive Correlation (b) Negative Correlation (c) Uncorrelated (No Correlation)
note:
So far we have dealt with correlation simply as a number relating the relationship between However, since our goal will be to relate random processes to each other,
which deals with signals as a function of time, we will want to continue this study by looking at
5.4.3 Example
Now let us take just a second to look at a simple example that involves calculating the covariance and correlation of two sets of random numbers. We are given the following data sets:
x = {3, 1, 6, 3, 4} y = {1, 5, 3, 4, 3}
To begin with, for the covariance we will need to nd the expected value (Section 5.3), or mean, of
and
y.
x= y=
xy =
191
Next we will solve for the standard deviations of our two sets using the formula below (for a review click here (Section 5.3.3: Variance)).
E (X E [X ])
x =
1 (0.16 + 5.76 + 6.76 + 0.16 + 0.36) = 1.625 5 1 (4.84 + 3.24 + 0.04 + 0.64 + 0.04) = 1.327 6
y =
Now we can nally calculate the covariance using one of the two formulas found above. Since we calculated the three means, we will use that formula (5.32) since it will be much simpler.
x = [3 1 6 3 4]; y = [1 5 3 4 3]; mx = mean(x) my = mean(y) mxy = mean(x.*y) % Standard Dev. from built-in Matlab Functions std(x,1) std(y,1) % Standard Dev. from Equation Above (same result as std(?,1)) sqrt( 1/5 * sum((x-mx).^2)) sqrt( 1/5 * sum((y-my).^2)) cov(x,y,1) corrcoef(x,y)
192
CHAPTER 5.
Before diving into a more complex statistical analysis of random signals and processes (Section 5.1), let us quickly review the idea of correlation (Section 5.4). Recall that the correlation of two signals or variables is the expected value of the product of those two variables. Since our focus will be to discover more about a random process, a collection of random signals, then imagine us dealing with two samples of a random process, where each sample is taken at a dierent point in time. Also recall that the key property of these random processes is that they are now functions of time; imagine them as a collection of signals. The expected value (Section 5.3) of the product of these two variables (or samples) will now depend on how quickly they change in regards to
time.
For example, if the two variables are taken from almost the same time period, We will now look at a correlation function that For the correlation of signals from two
relates a pair of random variables from the same process to the time separations between them, where the argument to this correlation function will be the time dierence. dierent random process, look at the crosscorrelation function (Section 5.6).
variables we will deal with come from the same random process.
the expected value of the product of a random variable or signal realization with a time-shifted
With a simple calculation and analysis of the autocorrelation function, we can discover a few important characteristics about our random process. These include: 1. How quickly our random signal or processes changes with respect to the time function 2. Whether our process has a periodic component and what the expected frequency might be As was mentioned above, the autocorrelation function is simply the expected value of a product. Assume we have a pair of random variables from the same process, is often written as
Rxx (t1 , t2 )
= E [X1 X2 ] =
x1 x2 f (x1 , x2 ) dx 2dx 1
The above equation is valid for stationary and nonstationary random processes. For stationary processes (Section 5.2), we can generalize this expression a little further. Given a wide-sense stationary processes, it can be proven that the expected values from our random process will be independent of the origin of our time function. Therefore, we can say that our autocorrelation function will depend on the time dierence and not some absolute time. For this discussion, we will let expression as
= t2 t1 , Rxx ( )
Rxx (t, t + )
= E [X (t) X (t + )]
(5.37)
for the continuous-time case. In most DSP course we will be more interested in dealing with real signal sequences, and thus we will want to look at the discrete-time case of the autocorrelation function. formula below will prove to be more common and useful than (5.36): The
Rxx [n, n + m] =
n=
x [n] x [n + m]
(5.38)
7 This
193
And again we can generalize the notation for our autocorrelation function as
Rxx [n, n + m]
= =
(5.39)
stationary
random
Rxx ( ) = Rxx ( )
= 0,
which gives us
Rxx (0) =X 2
The autocorrelation function will have its largest value when exceeded.
= 0.
for example in a periodic function at the values of the equivalent periodic points, but will never be
Rxx ( )
seconds, of
xx ( ) = R
x (t) x (t + ) dt
0
(5.40)
However, a lot of times we will not have sucient information to build a complete continuous-time function of one of our random signals for the above analysis. If this is the case, we can treat the information we do know about the function as a discrete signal and use the discrete-time formula for estimating the autocorrelation.
xx [m] = R
1 N m
N m1
x [n] x [n + m]
n=0
(5.41)
5.5.2 Examples
Below we will look at a variety of examples that use the autocorrelation function. prove very useful in these and future calculations. We will begin with a simple example dealing with Gaussian White Noise (GWN) and a few basic statistical properties that will
Example 5.1
We will let
x [n]
represent our GWN. For this problem, it is important to remember the following
E [x [n]] = 0
Available for free at Connexions <http://cnx.org/content/col10360/1.4>
194
CHAPTER 5.
Figure 5.6:
Gaussian density function. By examination, can easily see that the above statement is true - the mean equals zero.
we are now ready to do the short calculations required to nd the autocorrelation.
x [n],
and
are equal and one when they are not equal. When they are equal we can combine
the expected values. We are left with the following piecewise function to solve:
if
m=0
We can now solve the two parts of the above equation. The rst equation is easy to solve as we
x [n]
recall from statistics that the expected value of the square of a function is equal to the variance. Thus we get the following results for the autocorrelation:
0 if m = 0 Rxx [n, n + m] = 2 if m = 0
Or in a more concise way, we can represent the results as
Before diving into a more complex statistical analysis of random signals and processes (Section 5.1), let us quickly review the idea of correlation (Section 5.4). Recall that the correlation of two signals or variables is the expected value of the product of those two variables. Since our main focus is to discover more about random processes, a collection of random signals, we will deal with two random processes in this discussion, where in this case we will deal with samples from two
dierent
random processes.
expected value (Section 5.3.1: Mean Value) of the product of these two variables and how they correlate to one another, where the argument to this correlation function will be the time dierence. For the correlation of signals from the same random process, look at the autocorrelation function (Section 5.5).
8 This
195
if two processes are wide sense stationary, the expected value of the product of a random variable from one random process with a time-shifted, random variable from a dierent random process Looking at the generalized formula for the crosscorrelation, we will represent our two random processes by allowing
U = U (t)
and
V = V (t ).
Ruv (t, t )
= E [U V ] =
(5.42)
Just as the case with the autocorrelation function, if our input and output, denoted as
V (t),
are
at least jointly wide sense stationary, then the crosscorrelation does not depend on absolute time; it is just a function of the time dierence. This means we can simplify our writing of the above function as
Ruv ( ) = E [U V ]
or if we deal with two real signal sequences, the convolution (Section 1.5) of two signals:
(5.43)
for the discrete crosscorrelation function. See the formula below and notice the similarities between it and
Rxy (n, n m)
= Rxy (m) =
n=
x [n] y [n m]
(5.44)
5.6.1.1 Properties of Crosscorrelation

Below we will look at several properties of the crosscorrelation function that hold for two wide sense stationary (WSS) random processes.

1. Crosscorrelation is not an even function; however, it does have a unique symmetry property:

R_{xy}(-\tau) = R_{yx}(\tau) \qquad (5.45)

2. The maximum value of the crosscorrelation is not always when the shift equals zero; however, we can prove the following property revealing to us what value the maximum cannot exceed:

|R_{xy}(\tau)| \le \sqrt{R_{xx}(0)\, R_{yy}(0)} \qquad (5.46)

3. When two random processes are statistically independent, then we have

R_{xy}(\tau) = R_{yx}(\tau) \qquad (5.47)
5.6.2 Examples

Let us begin by looking at a simple example showing the relationship between two sequences.

Exercise 5.6.1 (Solution on p. 210.)
Using (5.44), find the crosscorrelation of the sequences

x[n] = \{\dots, 0, 0, 2, 3, 6, 1, 3, 0, 0, \dots\}
y[n] = \{\dots, 0, 0, 1, 2, 4, 1, 3, 0, 0, \dots\}

for each of the following possible time shifts: m = \{0, 3, -1\}.
5.7 Introduction to Adaptive Filters

Adaptive filters are used extensively in a wide variety of applications, particularly in telecommunications. In many applications requiring filtering, the necessary frequency response may not be known beforehand, or it may vary with time (for example, suppression of engine harmonics in a car stereo). In such applications, an adaptive filter which can automatically design itself and which can track system variations in time is needed.

Outline of adaptive filter material:
1. Wiener Filters - L2 optimal (FIR) filter design in a statistical context
2. LMS algorithm - simplest and by-far-the-most-commonly-used adaptive filter algorithm
3. Stability and performance of the LMS algorithm - when and how well it works
4. Applications of adaptive filters - overview of important applications
5. Introduction to advanced adaptive filter algorithms - techniques for special situations or faster convergence
5.8 Discrete-Time, Causal Wiener Filter

Stochastic L2 optimal (least squares) FIR filter design problem: given a wide-sense stationary (WSS) input signal x_k and desired signal d_k (WSS here means the means are constant and the correlation functions depend only on the time difference between samples).
Figure 5.7
The Wiener filter is the linear, time-invariant filter minimizing E[\epsilon^2], the expected squared error between the desired signal d_k and the filter output. As posed, this problem seems slightly silly, since d_k is already available; however, this formulation turns out to be useful in a wide variety of applications.
Example 5.2
active suspension system design
Figure 5.8
note: The optimal system may change with different road conditions or mass in the car, so an adaptive system that can track these changes is desirable.
Example 5.3
System identification (radar, non-destructive testing, adaptive control systems)
Figure 5.9
Exercise 5.8.1
Usually one desires that the input signal x_k ...

For convenience, we will analyze only the causal, real-data case; extensions are straightforward.
y_k = \sum_{l=0}^{M-1} w_l\, x_{k-l}

\begin{aligned}
\operatorname{argmin}_{w_l} E[\epsilon^2] &= E\left[(d_k - y_k)^2\right] = E\left[\left(d_k - \sum_{l=0}^{M-1} w_l x_{k-l}\right)^2\right] \\
&= E[d_k^2] - 2\sum_{l=0}^{M-1} w_l E[d_k x_{k-l}] + E\left[\left(\sum_{l=0}^{M-1} w_l x_{k-l}\right)\left(\sum_{m=0}^{M-1} w_m x_{k-m}\right)\right] \\
&= r_{dd}(0) - 2\sum_{l=0}^{M-1} w_l\, r_{dx}(l) + \sum_{l=0}^{M-1}\sum_{m=0}^{M-1} w_l w_m\, r_{xx}(l-m)
\end{aligned}
This can be written compactly in matrix form as

E[\epsilon^2] = r_{dd}(0) - 2P^T W + W^T R W

where

P = \begin{pmatrix} r_{dx}(0) \\ r_{dx}(1) \\ \vdots \\ r_{dx}(M-1) \end{pmatrix}, \qquad
R = \begin{pmatrix}
r_{xx}(0) & r_{xx}(1) & \cdots & r_{xx}(M-1) \\
r_{xx}(1) & r_{xx}(0) & \cdots & r_{xx}(M-2) \\
\vdots & & \ddots & \vdots \\
r_{xx}(M-1) & r_{xx}(M-2) & \cdots & r_{xx}(0)
\end{pmatrix}
To solve for the optimum filter, compute the gradient with respect to the tap weights vector:

\nabla = \begin{pmatrix} \frac{\partial E[\epsilon^2]}{\partial w_0} \\ \frac{\partial E[\epsilon^2]}{\partial w_1} \\ \vdots \\ \frac{\partial E[\epsilon^2]}{\partial w_{M-1}} \end{pmatrix} = -2P + 2RW

(recall \frac{d}{dW}\left(A^T W\right) = A and \frac{d}{dW}\left(W^T M W\right) = 2MW for symmetric M). Setting the gradient equal to zero gives

R\, W_{opt} = P \;\Rightarrow\; W_{opt} = R^{-1} P

Since R is a correlation matrix, it is positive semi-definite; for positive definite R the solution is unique.
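To make the solution concrete, here is a sketch (our own illustration; the signal model, data length, and filter length are arbitrary assumed values) that estimates the needed correlations from data and solves the normal equations in Matlab:

% Assumed synthetic setup: d is a filtered, noisy version of x,
% so a good FIR approximation of the relationship exists.
N = 10000;
x = randn(N, 1);
d = filter([1 0.5 -0.3], 1, x) + 0.1*randn(N, 1);

M = 8;                                       % filter length (assumption)
rxx = zeros(M, 1);  rdx = zeros(M, 1);
for l = 0:M-1
  rxx(l+1) = (x(1:N-l)' * x(1+l:N)) / N;     % estimate of r_xx(l)
  rdx(l+1) = (d(1+l:N)' * x(1:N-l)) / N;     % estimate of r_dx(l) = E[d_k x_(k-l)]
end
R = toeplitz(rxx);                           % Toeplitz autocorrelation matrix
P = rdx;
Wopt = R \ P;                                % solves R*Wopt = P without forming R^(-1)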
5.9 Practical Issues in Wiener Filter Implementation

The Wiener filter, W_{opt} = R^{-1}P, is ideal for many applications, but several issues must be addressed before it can be used in practice.
Exercise 5.9.1 (Solution on p. 210.)
In practice one usually won't know exactly the statistics of x_k and d_k (i.e. R and P) needed to compute the Wiener filter. How do we surmount this problem?
Exercise 5.9.2 (Solution on p. 210.)
In many applications, the statistics of x_k and d_k vary slowly with time. How does one develop an adaptive system which tracks these changes over time to keep the system near optimal at all times?
Exercise 5.9.3 (Solution on p. 210.)
How can the correlation estimates r_{xx}^k(l) be computed efficiently?
5.9.1 Tradeoffs

Larger N \Rightarrow more accurate estimates of the correlation values \Rightarrow better W_{opt}. However, larger N leads to slower adaptation.

note: The success of adaptive systems depends on x and d being roughly stationary over at least N samples, with N > M. That is, all adaptive filtering algorithms require that the underlying system varies slowly with respect to the sampling rate and the filter length (although they can tolerate occasional step discontinuities in the underlying system).
How much computation is needed? Since R is Toeplitz, the linear system of equations R W_{opt} = P can be solved with O(M^2) computations using Levinson's algorithm, where M is the filter length. However, in many applications this may be too expensive, especially since computing the filter output itself requires only O(M) computations. There are two main approaches to reducing the computation:

1. Take advantage of the fact that R_{k+1} is only slightly changed from R_k to reduce the computation to O(M); these algorithms are called Fast Recursive Least Squares algorithms. However, all methods proposed so far have stability problems and are dangerous to use.
2. Find a different approach to solving the optimization problem that doesn't require explicit inversion of the correlation matrix.

note: Adaptive algorithms involving the correlation matrix are called Recursive Least Squares (RLS) algorithms. Historically, they were developed after the LMS algorithm, which is the simplest and most widely used approach and costs only O(M) per sample. O(M^2) RLS algorithms are used in applications requiring very fast adaptation.
5.10 Quadratic Minimization and Gradient Descent

The least squares optimal filter design problem is quadratic in the filter coefficients:

E[\epsilon^2] = r_{dd}(0) - 2P^T W + W^T R W

If R is positive definite, the error surface E[\epsilon^2](w_0, w_1, \dots, w_{M-1}) is a unimodal "bowl" in \mathbb{R}^M.
Figure 5.10
The problem is to find the bottom of the bowl. In an adaptive filter context, the shape and bottom of the bowl may drift slowly with time; hopefully slowly enough that the adaptive algorithm can track it. For a quadratic error surface, the bottom of the bowl can be found in one step by computing R^{-1}P. Most modern nonlinear optimization methods (which are used, for example, to solve the optimal IIR filter design problem!) locally approximate a nonlinear function with a second-order (quadratic) Taylor series approximation and step to the bottom of this quadratic approximation on each iteration. However, an older and simpler approach to nonlinear optimization exists, based on gradient descent.
Figure 5.11
The idea is to iteratively find the minimizer by computing the gradient of the error function,

\nabla E = \frac{\partial E[\epsilon^2]}{\partial w_i} \quad (i\text{-th component})

which is a vector in \mathbb{R}^M pointing in the steepest uphill direction on the error surface at a given point W^i, with a magnitude proportional to the slope of the error surface in this steepest direction.
By updating the coefficient vector in the direction opposite the gradient,

W^{i+1} = W^i - \mu \nabla_i

we go (locally) "downhill" in the steepest direction, which seems to be a sensible way to iteratively solve a nonlinear optimization problem. The performance obviously depends on \mu: if \mu is too large, the iterations could bounce back and forth up out of the bowl; however, if \mu is too small, it could take many iterations to approach the bottom. We will determine criteria for choosing \mu later.

In summary, the gradient descent algorithm for solving the Wiener filter problem is: guess W^0; then for i = 1, 2, \dots compute \nabla_i = -2P + 2RW^i and update W^{i+1} = W^i - \mu \nabla_i; upon convergence, W_{opt} = W^\infty.
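As a minimal sketch of this iteration (ours; it reuses the R, P, and M from the Wiener design sketch above, and the step size is an arbitrary stable choice):

mu = 0.05;                        % step size; must satisfy the bound derived later
W = zeros(M, 1);                  % initial guess W^0
for i = 1:500
  grad = -2*P + 2*R*W;            % gradient of E[eps^2] at the current W
  W = W - mu*grad;                % step downhill: W^(i+1) = W^i - mu*grad
end
% W is now close to Wopt = R \ P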
The gradient descent idea is used in the LMS adaptive filter algorithm. As presented, this algorithm costs O(M^2) computations per iteration and doesn't appear very attractive, but LMS requires only O(M) computations and is stable, so it is very attractive when computation is an issue, even though it converges more slowly than the RLS algorithms we have discussed so far.
5.11 The LMS Adaptive Filter Algorithm
Figure 5.12
The goal is to find W_k minimizing E[\epsilon_k^2], where x_k and d_k are jointly wide sense stationary and

\epsilon_k = d_k - y_k = d_k - \sum_{i=0}^{M-1} w_i^k x_{k-i} = d_k - X_k^T W_k

with

X_k = \begin{pmatrix} x_k \\ x_{k-1} \\ \vdots \\ x_{k-M+1} \end{pmatrix}, \qquad
W_k = \begin{pmatrix} w_0^k \\ w_1^k \\ \vdots \\ w_{M-1}^k \end{pmatrix}

The superscript denotes absolute time, and the subscript denotes time or a vector index. As before, the solution can be found by setting the gradient equal to zero:

\nabla_k = \frac{\partial E[\epsilon_k^2]}{\partial W} = E\left[2\epsilon_k \left(-X_k\right)\right] = -2E\left[\left(d_k - X_k^T W\right) X_k\right] = -2P + 2RW \qquad (5.48)

so that W_{opt} = R^{-1}P.
Alternatively, W_{opt} can be found iteratively using a gradient descent technique:

W^{k+1} = W^k - \mu \nabla_k

In practice, we don't know R and P exactly, and in an adaptive context they may be slowly varying with time. To find the (approximate) Wiener filter, some approximations are necessary. As always, the key is to make the right approximations!

note: Approximate R and P by ignoring the expectations and using the instantaneous squared error:

\hat{\nabla}_k = \frac{\partial \epsilon_k^2}{\partial W} = 2\epsilon_k \frac{\partial \epsilon_k}{\partial W} = 2\epsilon_k \left(-X_k\right) = -2\epsilon_k X_k

Note that E\left[\hat{\nabla}_k\right] = \nabla_k, so \hat{\nabla}_k is a noisy but unbiased approximation to the true gradient! Widrow and Hoff first published the LMS algorithm, based on this idea:

W^{k+1} = W^k + 2\mu \epsilon_k X_k
The LMS adaptive filter algorithm is then:

1. y_k = W_k^T X_k = \sum_{i=0}^{M-1} w_i^k x_{k-i}
2. \epsilon_k = d_k - y_k
3. W^{k+1} = W^k + 2\mu\epsilon_k X_k

The LMS algorithm is often called a stochastic gradient algorithm, since \hat{\nabla}_k is a noisy gradient. This is by far the most commonly used adaptive filtering algorithm, because

1. it was the first
2. it is very simple
3. in practice it works well (except that sometimes it converges slowly)
4. it requires relatively little computation
5. it updates the tap weights every sample, so it continually adapts the filter
6. it tracks slow changes in the signal statistics well
Computational cost of the LMS algorithm per sample:

             multiplies    adds
y_k          M             M - 1
eps_k        0             1
W^{k+1}      M + 1         M
Total        2M + 1        2M

Table 5.1
So the LMS algorithm requires O(M) computations per sample. Note that the parameter \mu plays a very important role in the LMS algorithm. It can also be varied with time, but usually a constant \mu ("convergence weight factor") is used, chosen after experimentation for a given application.
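The three steps map directly to a few lines of Matlab per sample. This sketch (ours, reusing the synthetic x, d, and N from the Wiener example earlier) makes the O(M) cost per sample visible:

M = 8;  mu = 0.005;               % filter length and convergence weight factor
W = zeros(M, 1);                  % adaptive tap weights W^k
Xbuf = zeros(M, 1);               % X_k = [x_k; x_(k-1); ...; x_(k-M+1)]
for k = 1:N
  Xbuf = [x(k); Xbuf(1:M-1)];     % shift in the newest input sample
  y = W' * Xbuf;                  % step 1: filter output y_k
  e = d(k) - y;                   % step 2: error eps_k
  W = W + 2*mu*e*Xbuf;            % step 3: O(M) tap-weight update
end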
5.11.1.1 Tradeoffs

large \mu: fast convergence, fast adaptivity
small \mu: accurate W (less misadjustment error from the noisy gradient), stability
5.12 First Order Convergence Analysis of the LMS Algorithm

It is important to analyze the LMS algorithm to determine under what conditions it is stable, whether or not it converges to the Wiener solution, to determine how quickly it converges, and how much degradation is suffered due to the noisy gradient, etc. In particular, we need to know how to choose the parameter \mu.

5.12.1.1 Mean of W

Does W^k converge as k \to \infty, and if so, does it converge to the Wiener solution? Since the tap weights of the noisy-gradient-based LMS algorithm are random variables, we ask whether the expected value of the filter coefficients converges to the Wiener solution.
\begin{aligned}
E\left[W^{k+1}\right] = \bar{W}^{k+1} &= E\left[W^k + 2\mu\epsilon_k X_k\right] \\
&= \bar{W}^k + 2\mu E[d_k X_k] - 2\mu E\left[X_k X_k^T W^k\right] \\
&= \bar{W}^k + 2\mu P - 2\mu E\left[X_k X_k^T W^k\right]
\end{aligned} \qquad (5.49)
note: To proceed we make the standard independence assumption: X_{k-i} and d_{k-i}, for i \ge 1, are statistically independent of X_k and d_k. This assumption is obviously false, since X_{k-1} is the same as X_k except for shifting down the vector elements one place and adding one new sample. We make it anyway because otherwise it becomes too difficult to analyze the LMS algorithm. (The first good analysis not making this assumption is Macchi and Eweda [1].) Many simulations and much practical experience have shown that the results one obtains with analyses based on the patently false assumption above are quite accurate in most situations.

With the independence assumption, W^k (which depends only on previous X_{k-i}, d_{k-i}) is statistically independent of X_k, so the expectation factors term by term: the j-th element of E\left[X_k X_k^T W^k\right] is

E\left[\sum_{i=0}^{M-1} w_i^k x_{k-i} x_{k-j}\right] = \sum_{i=0}^{M-1} E\left[w_i^k\right] E[x_{k-i} x_{k-j}] = \sum_{i=0}^{M-1} \bar{w}_i^k\, r_{xx}(i-j)

so that

E\left[X_k X_k^T W^k\right] = R\, \bar{W}^k \qquad (5.50)

where

R = E\left[X_k X_k^T\right] \qquad (5.51)
\bar{W}^{k+1} = \bar{W}^k + 2\mu P - 2\mu R \bar{W}^k = (I - 2\mu R)\,\bar{W}^k + 2\mu P
Now, if \bar{W}^k converges to a vector of finite magnitude ("convergence in the mean"), what does it converge to? If \bar{W}^k converges, then as k \to \infty, \bar{W}^{k+1} \approx \bar{W}^k, and

\bar{W}^\infty = (I - 2\mu R)\,\bar{W}^\infty + 2\mu P \;\Rightarrow\; 2\mu R\, \bar{W}^\infty = 2\mu P \;\Rightarrow\; R\, \bar{W}^\infty = P

or

\bar{W}^\infty = R^{-1} P = W_{opt}

the Wiener solution! So the LMS algorithm, if it converges, gives filter coefficients which on average are the Wiener coefficients!
But does \bar{W}^k converge, and under what conditions? To find out, define the error vector V^k = \bar{W}^k - W_{opt}. Starting from

\bar{W}^{k+1} = \bar{W}^k - 2\mu R \bar{W}^k + 2\mu P

and adding and subtracting 2\mu R W_{opt} on the right-hand side gives

V^{k+1} = V^k - 2\mu R V^k - 2\mu R W_{opt} + 2\mu P

Now W_{opt} = R^{-1}P, so 2\mu R W_{opt} = 2\mu P and the last two terms cancel:

V^{k+1} = (I - 2\mu R)\, V^k

Does V^k \to 0? Since R is positive definite, real, and symmetric, all the eigenvalues are real and positive. Also, we can write R as Q^{-1}\Lambda Q, where \Lambda is a diagonal matrix with diagonal entries \lambda_i equal to the eigenvalues of R, and Q is a unitary matrix with rows equal to the eigenvectors corresponding to the eigenvalues of R. Using this fact,

V^{k+1} = \left(I - 2\mu\, Q^{-1}\Lambda Q\right) V^k

Multiplying both sides through on the left by Q, we get

Q V^{k+1} = (Q - 2\mu\Lambda Q)\, V^k = (I - 2\mu\Lambda)\, Q V^k
Let V'^k = Q V^k. Note that V' is simply V in a rotated coordinate set in \mathbb{R}^M, so convergence of V' implies convergence of V. Since I - 2\mu\Lambda is diagonal, all elements of V' evolve independently of each other:

V_i'^{k+1} = (1 - 2\mu\lambda_i)\, V_i'^k

Convergence (stability) boils down to whether all M of these scalar, first-order difference equations are stable, and thus \to 0. This happens if and only if |1 - 2\mu\lambda_i| < 1 for every i, that is, \forall i: \mu < \frac{1}{\lambda_i}, or

0 < \mu < \frac{1}{\lambda_{max}} \qquad (5.52)
In practice we usually won't know \lambda_{max}, and we certainly won't want to compute it. However, another useful mathematical fact comes to the rescue:

tr(R) = \sum_{i=1}^{M} r_{ii} = \sum_{i=1}^{M} \lambda_i \ge \lambda_{max}

since the eigenvalues are all positive and real. For a correlation matrix, r_{ii} = r(0) for every i, so tr(R) = M\, r(0). We can easily estimate r(0) with O(1) computations per sample, so

0 < \mu < \frac{1}{M\, r(0)}

serves as a conservative bound, and we can perhaps adapt \mu accordingly with time.
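In Matlab, this conservative choice can be computed from an estimate of r(0) (sketch, ours; the back-off factor is an arbitrary safety margin):

r0 = mean(x.^2);                  % estimate of r(0) = E[x_k^2]
mu_max = 1 / (M * r0);            % conservative bound from tr(R) >= lambda_max
mu = 0.1 * mu_max;                % back off well below the bound in practice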
The convergence factor of each mode is 1 - 2\mu\lambda_i.

note: The initial rate of convergence is dominated by the fastest mode, 1 - 2\mu\lambda_{max}. This is not surprising, since a gradient descent method goes "downhill" in the steepest direction (the one associated with the largest eigenvalue of R).

note: The final rate of convergence is dominated by the slowest mode, 1 - 2\mu\lambda_{min}. For small \lambda_{min}, it can take a long time for LMS to converge. LMS converges relatively quickly when the eigenvalues of R are roughly equal (as for a nearly white input).
5.13 Adaptive Equalization
Figure 5.13
In principle, the adaptive filter W should converge to an inverse of the channel, W \approx \frac{z^{-n}}{H(z)}, so that the overall response of the top path is approximately z^{-n} (a pure delay). However, limitations on the form of W (FIR) and the presence of noise cause the equalization to be imperfect.
Figure 5.14
If the channel distorts the pulse shape, the matched filter will no longer be matched, intersymbol interference may increase, and the system performance will degrade. An adaptive filter is often inserted in front of the matched filter to compensate for the channel.
Figure 5.15
This is, of course, unrealizable, since we do not have access to the original transmitted signal, s_k. There are two common solutions to this problem:

1. Periodically broadcast a known training signal. The adaptation is switched on only when the training signal is being broadcast, and thus s_k is known.
2. Decision-directed feedback: if the overall system is working well, then the output \hat{s}_k should almost always equal s_k. We can thus use our received digital communication signal as the desired signal, since it has been cleaned of noise (we hope) by the nonlinear threshold device!
Decision-directed equalizer
Figure 5.16
As long as the error rate in \hat{s}_k is not too high (say, the decisions are correct at least 75% of the time), the equalizer can adapt successfully. Otherwise, d_k is so inaccurate that the adaptive filter can never find the Wiener solution. This method is widely used in the telephone system and other digital communication networks.
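A decision-directed LMS equalizer for binary (+/-1) symbols can be sketched in Matlab as follows. Everything here (channel coefficients, noise level, filter length, pass-through initialization) is our own illustrative assumption, not the module's:

Ns = 20000;
s = sign(randn(Ns, 1));                             % random +/-1 symbol stream
r = filter([1 0.4 0.2], 1, s) + 0.05*randn(Ns, 1);  % mild channel plus noise
M = 11;  mu = 0.005;
W = zeros(M, 1);  W(1) = 1;   % pass-through start so early decisions are mostly correct
Rbuf = zeros(M, 1);
for k = 1:Ns
  Rbuf = [r(k); Rbuf(1:M-1)];     % received-sample vector
  y  = W' * Rbuf;                 % equalizer output
  dk = sign(y);                   % slicer decision, used as the desired signal d_k
  e  = dk - y;                    % decision-directed error
  W  = W + 2*mu*e*Rbuf;           % LMS tap update
end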
Solutions to Exercises in Chapter 5
Solution to Exercise 5.6.1
1. For m = 0, we multiply the two sequences element by element, obtaining the following sequence:

s[n] = \{\dots, 0, 0, 2, 6, 24, 1, 9, 0, 0, \dots\}

and so from the sum in our crosscorrelation function we arrive at the answer of

R_{xy}(0) = 22
2. For m = 3, we will approach it the same way we did above; however, we will now shift y[n] to the right by three, which yields

s[n] = \{\dots, 0, 0, 0, 0, 0, 1, 6, 0, 0, \dots\}

and from the crosscorrelation function we arrive at the answer of

R_{xy}(3) = 6
3. For m = -1, we will again take the same approach; however, we will now shift y[n] to the left by one, which yields

s[n] = \{\dots, 0, 0, 4, 12, 6, 3, 0, 0, 0, \dots\}

and from the crosscorrelation function we arrive at the answer of

R_{xy}(-1) = 13
Solution to Exercise 5.9.1
Estimate the statistics from the available data:

\hat{r}_{xx}(l) = \frac{1}{N}\sum_{k=0}^{N-1} x_k\, x_{k+l}

\hat{r}_{xd}(l) = \frac{1}{N}\sum_{k=0}^{N-1} d_k\, x_{k-l}

then solve \hat{W}_{opt} = \hat{R}^{-1}\hat{P}.
Solution to Exercise 5.9.2
Use short-time windowed estimates of the correlation functions:

\hat{r}_{xx}^k(l) = \frac{1}{N}\sum_{m=0}^{N-1} x_{k-m}\, x_{k-m-l}

\hat{r}_{dx}^k(l) = \frac{1}{N}\sum_{m=0}^{N-1} x_{k-m-l}\, d_{k-m}

and then solve \hat{W}_{opt}^k = \left(\hat{R}^k\right)^{-1}\hat{P}^k.
Glossary
A Autocorrelation
the expected value of the product of a random variable or signal realization with a time-shifted version of itself
C Correlation
A measure of how much one random variable depends upon the other.
Covariance
A measure of how much the deviations of two or more variables or processes match.
Crosscorrelation
if two processes are wide sense stationary, the expected value of the product of a random variable from one random process with a time-shifted, random variable from a different random process
D difference equation
An equation that shows the relationship between consecutive values of a sequence and the differences among them. They are often rearranged as a recursive formula so that a system's output can be computed from the input signal and past outputs.
Example:
y[n] + 7y[n-1] + 2y[n-2] = x[n] - 4x[n-1]
(3.1)
F FFT
(Fast Fourier Transform) An efficient computational algorithm used for computing the DFT.
P poles
1. The value(s) for z where Q(z) = 0.
2. The complex frequencies that make the overall gain of the filter transfer function infinite.
R random process
A family or ensemble of signals that correspond to every possible outcome of a certain signal measurement. Each signal in this collection is referred to as a realization of the process.
Example: f[n] = A\, sin(\omega n + \phi). Note that the phase and amplitude of each sinusoid is based on a random number, thus making this a random process.
S stationary process
a random process where all of its statistical properties do not vary with time
16 "Discrete
Z zeros
1. The value(s) for z where P(z) = 0.
2. The complex frequencies that make the overall gain of the filter transfer function zero.
Bibliography
[1] O. Macchi and E. Eweda. Second-order convergence analysis of stochastic adaptive linear filtering. IEEE Transactions on Automatic Control, AC-28(1):76-85, January 1983.
INDEX

Keywords are listed by the section with that keyword (page numbers are in parentheses). Keywords do not necessarily appear in the text of the page; they are merely associated with that section. Ex. apples, § 1.1 (1). Terms are referenced by the page they appear on. Ex. apples, 1.
A/D, 2.7(63), 2.8(73), 2.11(96) adaptive, 197, 199 Aliasing, 2.3(54), 4.5(154), 4.7(156) alphabet, 10 amplitude response, 3.6(130), 130 analog, 2.7(63), 2.8(73), 3.5(126) analog signal, 1.1(3) analog signals, 3.5(126) analysis, 4.12(165) anti-aliasing, 4.5(154), 4.7(156) Applet, 2.3(54) autocorrelation, 5.2(182), 5.5(192), 192, 192 average, 5.3(184) average power, 185
control theory, 123 convolution, 1.5(11), 2.11(96), 2.12(104) correlation, 189, 189, 5.5(192) correlation coefficient, 189 correlation functions, 190 countably infinite, 128 covariance, 188, 188 critically sampled, 4.12(165) Crosscorrelation, 195 crosscorrelation function, 195 CT, 2.6(60) CTFT, 1.10(34), 1.13(44), 2.8(73)
D/A, 2.7(63), 2.8(73), 2.11(96) de-noising, 4.16(176), 176 deblurring, 2.12(104), 104 decimation, 4.2(150), 4.5(154), 4.6(155), 4.7(156), 4.10(162) decimator, 4.5(154) decompose, 1.8(29), 30 deconvolution, 2.12(104) delayed, 11 density function, 5.2(182) design, 3.11(138) deterministic, 5.1(179) deterministic signals, 179 DFT, 1.12(41), 1.13(44), 2.7(63), 2.10(91), 2.11(96), 3.5(126) difference equation, 1.4(11), 11, 109, 109, 3.5(126) digital, 2.7(63), 2.8(73), 3.11(138) digital audio, 4.3(152) digital filter, 3.5(126) digital signal, 1.1(3) digital signal processing, (1), 2.10(91), 3.5(126) direct method, 111 direct sum, 1.6(18), 23 discrete fourier transform, 2.7(63), 2.10(91), 2.11(96), 3.5(126) Discrete Fourier Transform (DFT), 64 discrete random process, 181
bandlimited, 77, 167 basis, 1.6(18), 21, 1.8(29), 30 basis matrix, 1.8(29), 31 bilateral z-transform, 114 block diagram, 1.2(6) blur, 2.12(104)
cascade, 1.2(6) causal, 125 CD, 4.3(152) cd players, 4.4(153) characteristic polynomial, 112 circular convolution, 2.11(96), 98 coefficient vector, 1.8(29), 31 compact disc, 4.3(152) complement, 1.6(18), 23 complex, 3.4(120) complex exponential sequence, 9 complex exponentials, 38 computational algorithm, 211 constant-Q, 4.12(165) continuous frequency, 1.10(34), 1.11(38) continuous random process, 181 continuous time, 1.9(33), 1.10(34) Continuous Time Fourier Transform, 34 Continuous-Time Fourier Transform, 35
discrete time, 1.5(11), 1.9(33), 1.11(38), 3.3(119) Discrete Time Fourier Transform, 38, 2.7(63) Discrete Wavelet Transform, 4.13(167), 167, 4.14(168) discrete-time, 3.5(126) discrete-time filtering, 3.5(126) Discrete-Time Fourier Transform, 39 distribution function, 5.2(182) downsampler, 4.2(150) downsampling, 4.2(150), 150, 4.5(154), 4.7(156), 4.8(158) DSP, 2.10(91), 2.12(104), 3.3(119), 3.5(126), 3.9(135), 3.10(136), 3.11(138), 5.2(182), 5.3(184), 5.5(192) DT, 1.5(11) DTFT, 1.11(38), 1.13(44), 2.7(63), 2.8(73) DWT, 4.15(170), 4.16(176)
G H
Gibbs Phenomena, 34 gradient descent, 201 group delay, 134 Haar, 4.14(168) Hanning window, 2.10(91), 93 hilbert, 1.7(28), 1.8(29) Hilbert Space, 1.6(18), 26, 28 hilbert spaces, 1.7(28), 1.8(29) Hold, 2.5(59) homogeneous solution, 111
identity matrix, 32 IIR Filter, 3.8(134) Illustrations, 2.3(54) image, 2.12(104) impulse response, 1.5(11) independent, 186 indirect method, 111 information, 1.1(3) initial conditions, 110 inner, 1.7(28) inner product, 1.7(28) inner product space, 28 input, 1.2(6) interpolation, 4.1(149), 4.3(152), 4.4(153), 4.6(155), 4.7(156), 4.9(159), 4.10(162) interpolator, 4.3(152) invertible, 1.6(18), 28
E F
envelope delay, 134 ergodic, 186 Examples, 2.3(54) exercise, 3.13(146) fast fourier transform, 1.13(44) feedback, 1.2(6) FFT, 1.12(41), 1.13(44), 44 filter, 3.10(136), 3.11(138) filter design, 4.7(156) filter structures, 3.7(134) filterbanks, 4.15(170) filtering, 3.5(126) filters, 2.11(96), 3.9(135) finite, 21 finite dimensional, 1.6(18), 22 FIR, 3.9(135), 3.10(136), 3.11(138), 3.12(145), 3.13(146) FIR filter, 3.8(134) first order stationary, 5.2(182) first-order stationary, 183 fourier series, 1.9(33), 36, 39 fourier transform, 1.9(33), 1.10(34), 1.11(38), 2.8(73), 2.10(91), 2.11(96), 3.2(114), 114, 3.5(126) fourier transforms, 2.6(60) frames, 92 frequency, 2.6(60) frequency domain, 1.13(44) FT, 2.6(60), 2.12(104) functional, 7
J K L
Java, 2.3(54) joint density function, 5.2(182), 182 joint distribution function, 182 Kaiser, 4.10(162) key concepts, (1) laplace transform, 1.9(33) linear, 3.5(126) linear algebra, 1.12(41) Linear discrete-time systems, 11 linear transformation, 1.6(18), 26 linear-phase FIR filters, 3.6(130) linearly dependent, 1.6(18), 20 linearly independent, 1.6(18), 20, 184 live, 54 LTI Systems, 2.8(73)
mean-square value, 185 moment, 185 mother wavelet, 168 multi-resolution, 168 multiresolution, 4.13(167)
sample function, 211 sample-rate conversion, 4.6(155), 155 Sampling, 2.1(49), 2.2(51), 2.3(54), 2.4(58), 2.5(59), 2.6(60), 2.7(63) scaling function, 168, 4.14(168), 168 second order stationary, 5.2(182) second-order stationary, 183 Shannon, 2.2(51) shift-invariant, 1.4(11), 11, 3.5(126) short time fourier transform, 2.9(78) signal, 1.1(3), 3, 1.2(6) signals, 1.5(11), 1.9(33) signals and systems, 1.5(11) span, 1.6(18), 20 spectral masking, 166 spectrogram, 81 spectrograms, 2.10(91) SSS, 5.2(182) stable, 125 standard basis, 1.6(18), 27, 1.8(29) stationarity, 5.2(182) stationary, 5.2(182), 5.5(192) stationary process, 182 stationary processes, 182 stft, 2.9(78) stochastic, 5.1(179) stochastic gradient, 204 stochastic signals, 180 strict sense stationary, 5.2(182) strict sense stationary (SSS), 183 sub-band, 4.12(165) sub-bands, 165 subspace, 1.6(18), 19 superposition, 1.4(11) symmetries, 45 synthesis, 4.12(165) System, 2.5(59) system theory, 1.2(6) systems, 1.9(33)
narrow-band spectrogram, 81 noble identities, 4.8(158) nonstationary, 5.2(182), 182 normed linear space, 28 nyquist, 2.6(60)
optimal, 143 order, 110 orthogonal, 1.6(18), 25, 28 orthogonal compliment, 1.6(18), 26 orthonormal, 1.6(18), 25, 1.8(29) orthonormal basis, 1.8(29), 29 output, 1.2(6) oversampling, 4.3(152), 4.4(153) Overview, 2.1(49)
parallel, 1.2(6) particular solution, 111 pdf, 5.2(182) Pearson's Correlation Coefficient, 190 perfect reconstruction, 4.12(165) periodic, 167 phase delay, 132 pole, 3.4(120) pole-zero cancellation, 123 poles, 120 polyphase, 4.9(159), 4.10(162) polyphase decimation, 4.11(164) polyphase interpolation, 4.11(164) power series, 115 probability, 5.2(182) probability density function (pdf ), 182 probability distribution function, 182 probability function, 5.2(182) Proof, 2.2(51)
random, 5.1(179), 5.3(184), 5.5(192) random process, 5.1(179), 180, 180, 181, 5.2(182), 5.3(184), 5.5(192) random sequence, 181 random signal, 5.1(179), 5.3(184) random signals, 5.1(179), 180, 5.3(184), 5.5(192) realization, 211 Reconstruction, 2.2(51), 2.4(58), 2.5(59) Recursive least Squares, 200 resampling, 4.6(155), 155
The stagecoach effect, 57 time, 2.6(60) time-varying behavior, 75 training signal, 209 transfer function, 110, 3.5(126), 3.7(134) transform pairs, 3.3(119) transforms, 33 twiddle factors, 45
uncorrelated, 184 uncountably infinite, 128 unilateral, 3.3(119) unilateral z-transform, 114 unique, 30 unit sample, 9 unit step, 10 unit-sample response, 126 unitary, 1.6(18), 28 upsampler, 4.1(149), 4.3(152) upsampling, 4.1(149), 149, 4.3(152), 4.7(156), 4.8(158)
wavelets, 167, 167 well-defined, 1.6(18), 22 well-matched, 176 wide sense stationary, 5.2(182) wide-band spectrogram, 81 wide-sense stationary (WSS), 184 window, 93 WSS, 5.2(182)
z transform, 1.9(33), 3.3(119) z-plane, 114, 3.4(120) z-transform, 3.2(114), 114, 3.3(119) z-transforms, 119 zero, 3.4(120) zero-order hold, 4.3(152) zero-pad, 127 zeros, 120
Attributions
Collection: Fundamentals of Signal Processing Edited by: Minh N. Do URL: http://cnx.org/content/col10360/1.4/ License: http://creativecommons.org/licenses/by/3.0/ Module: "Introduction to Fundamentals of Signal Processing" By: Minh N. Do URL: http://cnx.org/content/m13673/1.1/ Pages: 1-2 Copyright: Minh N. Do License: http://creativecommons.org/licenses/by/2.0/ Module: "Signals Represent Information" By: Don Johnson URL: http://cnx.org/content/m0001/2.27/ Pages: 3-6 Copyright: Don Johnson License: http://creativecommons.org/licenses/by/1.0 Module: "Introduction to Systems" By: Don Johnson URL: http://cnx.org/content/m0005/2.19/ Pages: 6-8 Copyright: Don Johnson License: http://creativecommons.org/licenses/by/1.0 Module: "Discrete-Time Signals and Systems" By: Don Johnson URL: http://cnx.org/content/m10342/2.16/ Pages: 8-11 Copyright: Don Johnson License: http://creativecommons.org/licenses/by/3.0/ Module: "Systems in the Time-Domain" Used here as: "Linear Time-Invariant Systems" By: Don Johnson URL: http://cnx.org/content/m0508/2.7/ Page: 11 Copyright: Don Johnson License: http://creativecommons.org/licenses/by/1.0 Module: "Discrete Time Convolution" By: Ricardo Radaelli-Sanchez, Richard Baraniuk, Stephen Kruzick, Catherine Elder URL: http://cnx.org/content/m10087/2.27/ Pages: 11-17 Copyright: Ricardo Radaelli-Sanchez, Richard Baraniuk, Stephen Kruzick License: http://creativecommons.org/licenses/by/3.0/
Module: "Review of Linear Algebra" By: Clayton Scott URL: http://cnx.org/content/m11948/1.2/ Pages: 18-28 Copyright: Clayton Scott License: http://creativecommons.org/licenses/by/1.0 Module: "Hilbert Spaces" By: Justin Romberg URL: http://cnx.org/content/m10840/2.6/ Pages: 28-29 Copyright: Justin Romberg License: http://creativecommons.org/licenses/by/1.0 Module: "Orthonormal Basis Expansions" Used here as: "Signal Expansions" By: Michael Haag, Justin Romberg URL: http://cnx.org/content/m10760/2.6/ Pages: 29-33 Copyright: Michael Haag, Justin Romberg License: http://creativecommons.org/licenses/by/1.0 Module: "Introduction to Fourier Analysis" By: Richard Baraniuk URL: http://cnx.org/content/m10096/2.12/ Pages: 33-34 Copyright: Richard Baraniuk License: http://creativecommons.org/licenses/by/1.0 Module: "Continuous Time Fourier Transform (CTFT)" By: Richard Baraniuk, Melissa Selik URL: http://cnx.org/content/m10098/2.16/ Pages: 34-38 Copyright: Richard Baraniuk, Melissa Selik License: http://creativecommons.org/licenses/by/3.0/ Module: "Discrete Time Fourier Transform (DTFT)" By: Richard Baraniuk URL: http://cnx.org/content/m10108/2.18/ Pages: 38-41 Copyright: Richard Baraniuk License: http://creativecommons.org/licenses/by/3.0/ Module: "DFT as a Matrix Operation" By: Robert Nowak URL: http://cnx.org/content/m10962/2.5/ Pages: 41-44 Copyright: Robert Nowak License: http://creativecommons.org/licenses/by/1.0
220 Module: "The FFT Algorithm" By: Robert Nowak URL: http://cnx.org/content/m10964/2.6/ Pages: 44-47 Copyright: Robert Nowak License: http://creativecommons.org/licenses/by/1.0 Module: "Introduction" By: Anders Gjendemsjø URL: http://cnx.org/content/m11419/1.29/ Pages: 49-51 Copyright: Anders Gjendemsjø License: http://creativecommons.org/licenses/by/1.0 Module: "Proof" By: Anders Gjendemsjø URL: http://cnx.org/content/m11423/1.27/ Pages: 51-54 Copyright: Anders Gjendemsjø License: http://creativecommons.org/licenses/by/1.0 Module: "Illustrations" By: Anders Gjendemsjø URL: http://cnx.org/content/m11443/1.33/ Pages: 54-58 Copyright: Anders Gjendemsjø License: http://creativecommons.org/licenses/by/1.0 Module: "Sampling and reconstruction with Matlab" Used here as: "Sampling and Reconstruction with Matlab" By: Anders Gjendemsjø URL: http://cnx.org/content/m11549/1.9/ Page: 58 Copyright: Anders Gjendemsjø License: http://creativecommons.org/licenses/by/1.0 Module: "Systems view of sampling and reconstruction" Used here as: "Systems View of Sampling and Reconstruction" By: Anders Gjendemsjø URL: http://cnx.org/content/m11465/1.20/ Pages: 59-60 Copyright: Anders Gjendemsjø License: http://creativecommons.org/licenses/by/1.0 Module: "Sampling CT Signals: A Frequency Domain Perspective" By: Robert Nowak URL: http://cnx.org/content/m10994/2.2/ Pages: 60-63 Copyright: Robert Nowak License: http://creativecommons.org/licenses/by/1.0
Module: "The DFT: Frequency Domain with a Computer Analysis" By: Robert Nowak URL: http://cnx.org/content/m10992/2.3/ Pages: 63-72 Copyright: Robert Nowak License: http://creativecommons.org/licenses/by/1.0 Module: "Discrete-Time Processing of CT Signals" By: Robert Nowak URL: http://cnx.org/content/m10993/2.2/ Pages: 73-78 Copyright: Robert Nowak License: http://creativecommons.org/licenses/by/1.0 Module: "Short Time Fourier Transform" By: Ivan Selesnick URL: http://cnx.org/content/m10570/2.4/ Pages: 78-91 Copyright: Ivan Selesnick License: http://creativecommons.org/licenses/by/1.0 Module: "Spectrograms" By: Don Johnson URL: http://cnx.org/content/m0505/2.21/ Pages: 91-95 Copyright: Don Johnson License: http://creativecommons.org/licenses/by/3.0/ Module: "Filtering with the DFT" By: Robert Nowak URL: http://cnx.org/content/m11022/2.3/ Pages: 96-103 Copyright: Robert Nowak License: http://creativecommons.org/licenses/by/1.0 Module: "Image Restoration Basics" By: Robert Nowak URL: http://cnx.org/content/m10972/2.2/ Pages: 104-106 Copyright: Robert Nowak License: http://creativecommons.org/licenses/by/1.0 Module: "Dierence Equation" By: Michael Haag URL: http://cnx.org/content/m10595/2.6/ Pages: 109-113 Copyright: Michael Haag License: http://creativecommons.org/licenses/by/1.0 Module: "The Z Transform: Denition" By: Benjamin Fite URL: http://cnx.org/content/m10549/2.10/ Pages: 114-119 Copyright: Benjamin Fite License: http://creativecommons.org/licenses/by/1.0
222 Module: "Table of Common z-Transforms" By: Melissa Selik, Richard Baraniuk URL: http://cnx.org/content/m10119/2.14/ Pages: 119-120 Copyright: Melissa Selik, Richard Baraniuk License: http://creativecommons.org/licenses/by/1.0 Module: "Understanding Pole/Zero Plots on the Z-Plane" By: Michael Haag URL: http://cnx.org/content/m10556/2.12/ Pages: 120-126 Copyright: Michael Haag License: http://creativecommons.org/licenses/by/3.0/ Module: "Filtering in the Frequency Domain" By: Don Johnson URL: http://cnx.org/content/m10257/2.18/ Pages: 126-130 Copyright: Don Johnson License: http://creativecommons.org/licenses/by/3.0/ Module: "Linear-Phase FIR Filters" By: Ivan Selesnick URL: http://cnx.org/content/m10705/2.3/ Pages: 130-134 Copyright: Ivan Selesnick License: http://creativecommons.org/licenses/by/1.0 Module: "Filter Structures" By: Douglas L. Jones URL: http://cnx.org/content/m11917/1.3/ Page: 134 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/1.0 Module: "Overview of Digital Filter Design" By: Douglas L. Jones URL: http://cnx.org/content/m12776/1.2/ Pages: 134-135 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/2.0/ Module: "Window Design Method" By: Douglas L. Jones URL: http://cnx.org/content/m12790/1.2/ Pages: 135-136 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/2.0/
Module: "Frequency Sampling Design Method for FIR lters" By: Douglas L. Jones URL: http://cnx.org/content/m12789/1.2/ Pages: 136-138 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/2.0/ Module: "Parks-McClellan FIR Filter Design" By: Douglas L. Jones URL: http://cnx.org/content/m12799/1.3/ Pages: 138-145 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/2.0/ Module: "FIR Filter Design using MATLAB" By: Hyeokho Choi URL: http://cnx.org/content/m10917/2.2/ Page: 145 Copyright: Hyeokho Choi License: http://creativecommons.org/licenses/by/1.0 Module: "MATLAB FIR Filter Design Exercise" By: Hyeokho Choi URL: http://cnx.org/content/m10918/2.2/ Page: 146 Copyright: Hyeokho Choi License: http://creativecommons.org/licenses/by/1.0 Module: "Upsampling" By: Phil Schniter URL: http://cnx.org/content/m10403/2.15/ Pages: 149-150 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Downsampling" By: Phil Schniter URL: http://cnx.org/content/m10441/2.12/ Pages: 150-151 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/3.0/ Module: "Interpolation" By: Phil Schniter URL: http://cnx.org/content/m10444/2.14/ Pages: 152-153 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Application of Interpolation - Oversampling in CD Players" By: Phil Schniter URL: http://cnx.org/content/m11006/2.3/ Pages: 153-154 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0
224 Module: "Decimation" By: Phil Schniter URL: http://cnx.org/content/m10445/2.11/ Pages: 154-155 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Resampling with Rational Factor" By: Phil Schniter URL: http://cnx.org/content/m10448/2.11/ Pages: 155-156 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Digital Filter Design for Interpolation and Decimation" By: Phil Schniter URL: http://cnx.org/content/m10870/2.6/ Pages: 156-158 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Noble Identities" By: Phil Schniter URL: http://cnx.org/content/m10432/2.12/ Pages: 158-159 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Polyphase Interpolation" By: Phil Schniter URL: http://cnx.org/content/m10431/2.11/ Pages: 159-162 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Polyphase Decimation Filter" By: Phil Schniter URL: http://cnx.org/content/m10433/2.12/ Pages: 162-164 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Computational Savings of Polyphase Interpolation/Decimation" By: Phil Schniter URL: http://cnx.org/content/m11008/2.2/ Pages: 164-165 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0
Module: "Sub-Band Processing" By: Phil Schniter URL: http://cnx.org/content/m10423/2.14/ Pages: 165-167 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Discrete Wavelet Transform: Main Concepts" By: Phil Schniter URL: http://cnx.org/content/m10436/2.12/ Pages: 167-168 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "The Haar System as an Example of DWT" By: Phil Schniter URL: http://cnx.org/content/m10437/2.10/ Pages: 168-170 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Filterbanks Interpretation of the Discrete Wavelet Transform" By: Phil Schniter URL: http://cnx.org/content/m10474/2.6/ Pages: 170-176 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "DWT Application - De-noising" By: Phil Schniter URL: http://cnx.org/content/m11000/2.1/ Pages: 176-178 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Introduction to Random Signals and Processes" By: Michael Haag URL: http://cnx.org/content/m10649/2.2/ Pages: 179-181 Copyright: Michael Haag License: http://creativecommons.org/licenses/by/1.0 Module: "Stationary and Nonstationary Random Processes" By: Michael Haag URL: http://cnx.org/content/m10684/2.2/ Pages: 182-184 Copyright: Michael Haag License: http://creativecommons.org/licenses/by/1.0 Module: "Random Processes: Mean and Variance" By: Michael Haag URL: http://cnx.org/content/m10656/2.3/ Pages: 184-188 Copyright: Michael Haag License: http://creativecommons.org/licenses/by/1.0
226 Module: "Correlation and Covariance of a Random Signal" By: Michael Haag URL: http://cnx.org/content/m10673/2.3/ Pages: 188-191 Copyright: Michael Haag License: http://creativecommons.org/licenses/by/1.0 Module: "Autocorrelation of Random Processes" By: Michael Haag URL: http://cnx.org/content/m10676/2.4/ Pages: 192-194 Copyright: Michael Haag License: http://creativecommons.org/licenses/by/1.0 Module: "Crosscorrelation of Random Processes" By: Michael Haag URL: http://cnx.org/content/m10686/2.2/ Pages: 194-196 Copyright: Michael Haag License: http://creativecommons.org/licenses/by/1.0 Module: "Introduction to Adaptive Filters" By: Douglas L. Jones URL: http://cnx.org/content/m11535/1.3/ Page: 196 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/1.0 Module: "Discrete-Time, Causal Wiener Filter" By: Douglas L. Jones URL: http://cnx.org/content/m11825/1.1/ Pages: 196-199 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/1.0 Module: "Practical Issues in Wiener Filter Implementation" By: Douglas L. Jones URL: http://cnx.org/content/m11824/1.1/ Pages: 199-200 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/1.0 Module: "Quadratic Minimization and Gradient Descent" By: Douglas L. Jones URL: http://cnx.org/content/m11826/1.2/ Pages: 200-202 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/1.0 Module: "The LMS Adaptive Filter Algorithm" By: Douglas L. Jones URL: http://cnx.org/content/m11829/1.1/ Pages: 202-204 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/1.0
Module: "First Order Convergence Analysis of the LMS Algorithm" By: Douglas L. Jones URL: http://cnx.org/content/m11830/1.1/ Pages: 204-207 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/1.0 Module: "Adaptive Equalization" By: Douglas L. Jones URL: http://cnx.org/content/m11907/1.1/ Pages: 207-209 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/1.0
About Connexions
Since 1999, Connexions has been pioneering a global system where anyone can create course materials and make them fully accessible and easily reusable free of charge. We are a Web-based authoring, teaching and learning environment open to anyone interested in education, including students, teachers, professors and lifelong learners. We connect ideas and facilitate educational communities. Connexions's modular, interactive courses are in use worldwide by universities, community colleges, K-12 schools, distance learners, and lifelong learners. Connexions materials are in many languages, including English, Spanish, Chinese, Japanese, Italian, Vietnamese, French, Portuguese, and Thai. Connexions is part of an exciting new information distribution system that allows for Print on Demand Books. Connexions has partnered with innovative on-demand publisher QOOP to accelerate the delivery of printed course materials and textbooks into classrooms worldwide at lower prices than traditional academic publishers.