
Fundamentals of Signal Processing

Collection Editor: Minh N. Do

Authors: Richard Baraniuk, Hyeokho Choi, Minh N. Do, Catherine Elder, Benjamin Fite, Anders Gjendemsjø, Michael Haag, Don Johnson, Douglas L. Jones, Stephen Kruzick, Robert Nowak, Ricardo Radaelli-Sanchez, Justin Romberg, Phil Schniter, Clayton Scott, Ivan Selesnick, Melissa Selik

Online: <http://cnx.org/content/col10360/1.4/>

CONNEXIONS
Rice University, Houston, Texas

This selection and arrangement of content as a collection is copyrighted by Minh N. Do. It is licensed under the Creative Commons Attribution 3.0 license (http://creativecommons.org/licenses/by/3.0/). Collection structure revised: November 26, 2012. PDF generated: May 20, 2013. For copyright and attribution information for the modules contained in this collection, see p. 218.

Table of Contents
Introduction to Fundamentals of Signal Processing ... 1

1 Foundations
  1.1 Signals Represent Information ... 3
  1.2 Introduction to Systems ... 6
  1.3 Discrete-Time Signals and Systems ... 8
  1.4 Linear Time-Invariant Systems ... 11
  1.5 Discrete Time Convolution ... 11
  1.6 Review of Linear Algebra ... 18
  1.7 Hilbert Spaces ... 28
  1.8 Signal Expansions ... 29
  1.9 Introduction to Fourier Analysis ... 33
  1.10 Continuous Time Fourier Transform (CTFT) ... 34
  1.11 Discrete Time Fourier Transform (DTFT) ... 38
  1.12 DFT as a Matrix Operation ... 41
  1.13 The FFT Algorithm ... 44
  Solutions ... 48

2 Sampling and Frequency Analysis
  2.1 Introduction ... 49
  2.2 Proof ... 51
  2.3 Illustrations ... 54
  2.4 Sampling and Reconstruction with Matlab ... 58
  2.5 Systems View of Sampling and Reconstruction ... 59
  2.6 Sampling CT Signals: A Frequency Domain Perspective ... 60
  2.7 The DFT: Frequency Domain with a Computer Analysis ... 63
  2.8 Discrete-Time Processing of CT Signals ... 73
  2.9 Short Time Fourier Transform ... 78
  2.10 Spectrograms ... 91
  2.11 Filtering with the DFT ... 96
  2.12 Image Restoration Basics ... 104
  Solutions ... 107

3 Digital Filtering
  3.1 Difference Equation ... 109
  3.2 The Z Transform: Definition ... 114
  3.3 Table of Common z-Transforms ... 119
  3.4 Understanding Pole/Zero Plots on the Z-Plane ... 120
  3.5 Filtering in the Frequency Domain ... 126
  3.6 Linear-Phase FIR Filters ... 130
  3.7 Filter Structures ... 134
  3.8 Overview of Digital Filter Design ... 134
  3.9 Window Design Method ... 135
  3.10 Frequency Sampling Design Method for FIR filters ... 136
  3.11 Parks-McClellan FIR Filter Design ... 138
  3.12 FIR Filter Design using MATLAB ... 145
  3.13 MATLAB FIR Filter Design Exercise ... 146
  Solutions ... 147

4 Multirate Signal Processing
  4.1 Upsampling ... 149
  4.2 Downsampling ... 150
  4.3 Interpolation ... 152
  4.4 Application of Interpolation - Oversampling in CD Players ... 153
  4.5 Decimation ... 154
  4.6 Resampling with Rational Factor ... 155
  4.7 Digital Filter Design for Interpolation and Decimation ... 156
  4.8 Noble Identities ... 158
  4.9 Polyphase Interpolation ... 159
  4.10 Polyphase Decimation Filter ... 162
  4.11 Computational Savings of Polyphase Interpolation/Decimation ... 164
  4.12 Sub-Band Processing ... 165
  4.13 Discrete Wavelet Transform: Main Concepts ... 167
  4.14 The Haar System as an Example of DWT ... 168
  4.15 Filterbanks Interpretation of the Discrete Wavelet Transform ... 170
  4.16 DWT Application - De-noising ... 176

5 Statistical and Adaptive Signal Processing
  5.1 Introduction to Random Signals and Processes ... 179
  5.2 Stationary and Nonstationary Random Processes ... 182
  5.3 Random Processes: Mean and Variance ... 184
  5.4 Correlation and Covariance of a Random Signal ... 188
  5.5 Autocorrelation of Random Processes ... 192
  5.6 Crosscorrelation of Random Processes ... 194
  5.7 Introduction to Adaptive Filters ... 196
  5.8 Discrete-Time, Causal Wiener Filter ... 196
  5.9 Practical Issues in Wiener Filter Implementation ... 199
  5.10 Quadratic Minimization and Gradient Descent ... 200
  5.11 The LMS Adaptive Filter Algorithm ... 202
  5.12 First Order Convergence Analysis of the LMS Algorithm ... 204
  5.13 Adaptive Equalization ... 207
  Solutions ... 210

Glossary ... 211
Bibliography ... 213
Index ... 214
Attributions ... 218


Introduction to Fundamentals of Signal Processing



What is Digital Signal Processing?


To understand what Digital Signal Processing (DSP) is, let's examine what each of its words means. A signal is any physical quantity that carries information. Processing is a series of steps or operations to achieve a particular end. It is easy to see that signal processing is used everywhere to extract information from signals or to convert information-carrying signals from one form to another. For example, our brain and ears take input speech signals, and then process and convert them into meaningful words. Finally, the word digital in Digital Signal Processing means that the process is done by computers, microprocessors, or logic circuits.

The field of DSP has expanded significantly over the last few decades as a result of rapid developments in computer technology and integrated-circuit fabrication. Consequently, DSP has played an increasingly important role in a wide range of disciplines in science and technology. Research and development in DSP are driving advancements in many high-tech areas including telecommunications, multimedia, medical and scientific imaging, and human-computer interaction.

To illustrate the digital revolution and the impact of DSP, consider the development of digital cameras. Traditional film cameras rely mainly on the physical properties of the optical lens to obtain good images, and higher quality requires bigger lenses and larger systems. When digital cameras were first introduced, their quality was inferior to that of film cameras. But as microprocessors became more powerful, more sophisticated DSP algorithms were developed for digital cameras to correct optical defects and improve the final image quality. Thanks to these developments, the quality of consumer-grade digital cameras has now surpassed that of film cameras. The trend continues with digital cameras attached to cell phones (cameraphones): because of the small size required of their lenses, these cameras rely on DSP power to provide good images. Essentially, digital camera technology uses computational power to overcome physical limitations. A similar trend can be found in many other applications of DSP such as digital communications, digital imaging, digital television, and so on. In summary, DSP has foundations in Mathematics, Physics, and Computer Science, and can provide the key enabling technology in numerous applications.

Overview of Key Concepts in Digital Signal Processing


The two main characters in DSP are signals and systems. A signal is defined as any physical quantity that varies with one or more independent variables such as time (a one-dimensional signal) or space (a 2-D or 3-D signal). Signals exist in several types. In the real world, most signals are continuous-time or analog signals, which have values at every instant of time. To be processed by a computer, a continuous-time signal first has to be sampled in time into a discrete-time signal, so that its values at a discrete set of time instants can be stored in computer memory locations. Furthermore, in order to be processed by logic circuits, these signal values have to be quantized into a set of discrete values, and the final result is called a digital signal. When the quantization effect is ignored, the terms discrete-time signal and digital signal can be used interchangeably.

In signal processing, a system is defined as a process whose input and output are signals. An important class of systems is the class of linear time-invariant (or shift-invariant) systems. These systems have the remarkable property that each of them can be completely characterized by an impulse response function (sometimes also called a point spread function), and the system is defined by a convolution (also referred to as a filtering) operation. Thus, a linear time-invariant system is equivalent to a (linear) filter. Linear time-invariant systems are classified into two types: those that have a finite-duration impulse response (FIR) and those that have an infinite-duration impulse response (IIR).

A signal can be viewed as a vector in a vector space. Thus, linear algebra provides a powerful framework to study signals and linear systems. In particular, given a vector space, each signal can be represented (or expanded) as a linear combination of elementary signals. The most important signal expansions are provided by the Fourier transforms. The Fourier transforms, as with general transforms, are often used effectively to transform a problem from one domain to another domain where it is much easier to solve or analyze. The two domains of a Fourier transform have physical meaning and are called the time domain and the frequency domain.

Sampling, or the conversion of continuous-domain real-life signals to discrete numbers that can be processed by computers, is the essential bridge between the analog and the digital worlds. It is important to understand the connections between signals and systems in the real world and inside a computer. These connections are convenient to analyze in the frequency domain. Moreover, many signals and systems are specified by their frequency characteristics.

Because any linear time-invariant system can be characterized as a filter, the design of such systems boils down to the design of the associated filters. Typically, in the filter design process, we determine the coefficients of an FIR or IIR filter that closely approximates the desired frequency response specifications. Together with the Fourier transforms, the z-transform provides an effective tool to analyze and design digital filters.

In many applications, signals are conveniently described via statistical models as random signals. It is remarkable that optimum linear filters (in the sense of minimum mean-square error), so-called Wiener filters, can be determined using only second-order statistics (autocorrelation and crosscorrelation functions) of a stationary process. When these statistics cannot be specified beforehand or change over time, we can employ adaptive filters, where the filter coefficients are adapted to the signal statistics. The most popular algorithm to adaptively adjust the filter coefficients is the least-mean-square (LMS) algorithm.


Chapter 1

Foundations
1.1 Signals Represent Information
Whether analog or digital, information is represented by the fundamental quantity in electrical engineering: the signal. Stated in mathematical terms, a signal is merely a function. Analog signals are continuous-valued; digital signals are discrete-valued. The independent variable of the signal could be time (speech, for example), space (images), or the integers (denoting the sequencing of letters and numbers in the football score).

1.1.1 Analog Signals


Analog signals are usually signals defined over continuous independent variable(s). Speech is produced by your vocal cords exciting acoustic resonances in your vocal tract. The result is pressure waves propagating in the air, and the speech signal thus corresponds to a function having independent variables of space and time and a value corresponding to air pressure: s(x, t) (here we use vector notation x to denote spatial coordinates). When you record someone talking, you are evaluating the speech signal at a particular spatial location, x0 say. An example of the resulting waveform s(x0, t) is shown in Figure 1.1 (Speech Example).


Speech Example

[waveform plot: signal amplitude (between -0.5 and 0.5) versus time]

Figure 1.1: A speech signal's amplitude relates to tiny air pressure variations. Shown is a recording of the vowel "e" (as in "speech").

Photographs are static, and are continuous-valued signals defined over space. Black-and-white images have only one value at each point in space, which amounts to its optical reflection properties. In Figure 1.2 (Lena), an image is shown, demonstrating that it (and all other images as well) are functions of two independent spatial variables.

Available for free at Connexions <http://cnx.org/content/col10360/1.4>

Lena

[two panels: (a) the Lena image; (b) a perspective display of the image as a function of two spatial variables]

Figure 1.2: On the left is the classic Lena image, which is used ubiquitously as a test image. It contains straight and curved lines, complicated texture, and a face. On the right is a perspective display of the Lena image as a signal: a function of two spatial variables. The colors merely help show what signal values are about the same size. In this image, signal values range between 0 and 255; why is that?

Color images have values that express how reflectivity depends on the optical spectrum. Painters long ago found that mixing together combinations of the so-called primary colors (red, yellow and blue) can produce very realistic color images. Thus, images today are usually thought of as having three values at every point in space, but a different set of colors is used: how much of red, green and blue is present. Mathematically, color pictures are multivalued (vector-valued) signals: s(x) = (r(x), g(x), b(x))^T.

Interesting cases abound where the analog signal depends not on a continuous variable, such as time, but on a discrete variable. For example, temperature readings taken every hour have continuous (analog) values, but the signal's independent variable is (essentially) the integers.

1.1.2 Digital Signals


The word "digital" means discrete-valued and implies the signal has an integer-valued independent variable. Digital information includes numbers and symbols (characters typed on the keyboard, for example). Computers rely on the digital representation of information to manipulate and transform information. Symbols do not have a numeric value, and each is represented by a unique number. The ASCII character code has the upper- and lowercase characters, the numbers, punctuation marks, and various other symbols represented by a seven-bit integer. For example, the ASCII code represents the letter a as the number 97 and the letter A as 65. Table 1.1 (ASCII Table) shows the international convention on associating characters with integers.

ASCII Table


00 nul   01 soh   02 stx   03 etx   04 eot   05 enq   06 ack   07 bel
08 bs    09 ht    0A nl    0B vt    0C np    0D cr    0E so    0F si
10 dle   11 dc1   12 dc2   13 dc3   14 dc4   15 nak   16 syn   17 etb
18 can   19 em    1A sub   1B esc   1C fs    1D gs    1E rs    1F us
20 sp    21 !     22 "     23 #     24 $     25 %     26 &     27 '
28 (     29 )     2A *     2B +     2C ,     2D -     2E .     2F /
30 0     31 1     32 2     33 3     34 4     35 5     36 6     37 7
38 8     39 9     3A :     3B ;     3C <     3D =     3E >     3F ?
40 @     41 A     42 B     43 C     44 D     45 E     46 F     47 G
48 H     49 I     4A J     4B K     4C L     4D M     4E N     4F O
50 P     51 Q     52 R     53 S     54 T     55 U     56 V     57 W
58 X     59 Y     5A Z     5B [     5C \     5D ]     5E ^     5F _
60 `     61 a     62 b     63 c     64 d     65 e     66 f     67 g
68 h     69 i     6A j     6B k     6C l     6D m     6E n     6F o
70 p     71 q     72 r     73 s     74 t     75 u     76 v     77 w
78 x     79 y     7A z     7B {     7C |     7D }     7E ~     7F del

Table 1.1: The ASCII translation table shows how standard keyboard characters are represented by integers. In pairs of columns, this table displays first the so-called 7-bit code (how many characters in a seven-bit code?), then the character the number represents. The numeric codes are represented in hexadecimal (base-16) notation. Mnemonic characters correspond to control characters, some of which may be familiar (like cr for carriage return) and some not (bel means a "bell").
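The table is easy to spot-check on any computer; the short Python snippet below is an illustration added here (not part of the original module) that prints a few code/character pairs in hexadecimal and decimal.

```python
# Spot-check of Table 1.1: characters with their 7-bit codes.
for ch in ["a", "A", "0", " "]:
    print(ch, format(ord(ch), "02x"), ord(ch))   # e.g. a 61 97, A 41 65
print(chr(97), chr(65))                          # recover 'a' and 'A' from the codes
```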



1.2 Introduction to Systems


Signals are manipulated by systems. Mathematically, we represent what a system does by the notation y(t) = S(x(t)), with x(t) representing the input signal and y(t) the output signal.

Definition of a system

[block diagram: x(t) -> System -> y(t)]

Figure 1.3: The system depicted has input x(t) and output y(t). Mathematically, systems operate on function(s) to produce other function(s). In many ways, systems are like functions, rules that yield a value for the dependent variable (our output signal) for each value of its independent variable (its input signal). The notation y(t) = S(x(t)) corresponds to this block diagram. We term S(.) the input-output relation for the system.


This notation mimics the mathematical symbology of a function: a system's input is analogous to an independent variable and its output the dependent variable. For the mathematically inclined, a system is a functional: a function of a function (signals are functions).

Simple systems can be connected together, one system's output becoming another's input, to accomplish some overall design. Interconnection topologies can be quite complicated, but usually consist of weaves of three basic interconnection forms.

1.2.1 Cascade Interconnection


cascade

[block diagram: x(t) -> S1[.] -> w(t) -> S2[.] -> y(t)]

Figure 1.4: The most rudimentary ways of interconnecting systems are shown in the figures in this section. This is the cascade configuration.

The simplest form is when one system's output is connected only to another's input. Mathematically, w(t) = S1(x(t)) and y(t) = S2(w(t)), with the information contained in x(t) processed by the first, then the second system. In some cases, the ordering of the systems matters; in others it does not. For example, in the fundamental model of communication the ordering most certainly matters.

1.2.2 Parallel Interconnection


parallel

[block diagram: x(t) routed to both S1[.] and S2[.]; their outputs are summed to give y(t)]

Figure 1.5: The parallel configuration.

A signal x(t) is routed to two (or more) systems, with this signal appearing as the input to all systems simultaneously and with equal strength. Block diagrams have the convention that signals going to more than one system are not split into pieces along the way. Two or more systems operate on x(t) and their outputs are added together to create the output y(t). Thus, y(t) = S1(x(t)) + S2(x(t)), and the information in x(t) is processed separately by both systems.

1.2.3 Feedback Interconnection


feedback

[block diagram: x(t) and the fed-back signal S2[y(t)] are differenced to form e(t), which drives S1[.] to produce y(t)]

Figure 1.6: The feedback configuration.

The subtlest interconnection configuration has a system's output also contributing to its input. Engineers would say the output is "fed back" to the input through system 2, hence the terminology. The mathematical statement of the feedback interconnection (Figure 1.6: feedback) is that the feed-forward system produces the output: y(t) = S1(e(t)). The input e(t) equals the input signal minus the output of some other system's output to y(t): e(t) = x(t) - S2(y(t)). Feedback systems are omnipresent in control problems, with the error signal used to adjust the output to achieve some condition defined by the input (controlling) signal. For example, in a car's cruise control system, x(t) is a constant representing what speed you want, and y(t) is the car's speed as measured by a speedometer. In this application, system 2 is the identity system (output equals input).

1.3 Discrete-Time Signals and Systems

Mathematically, analog signals are functions having as their independent variables continuous quantities, such as space and time. Discrete-time signals are functions defined on the integers; they are sequences. As with analog signals, we seek ways of decomposing discrete-time signals into simpler components. Because this approach leads to a better understanding of signal structure, we can exploit that structure to represent information (create ways of representing information with signals) and to extract information (retrieve the information thus represented). For symbolic-valued signals, the approach is different: we develop a common representation of all symbolic-valued signals so that we can embody the information they contain in a unified way. From an information representation perspective, the most important issue becomes, for both real-valued and symbolic-valued signals, efficiency: what is the most parsimonious and compact way to represent information so that it can be extracted later.

1.3.1 Real- and Complex-valued Signals


A discrete-time signal is represented symbolically as s(n), where n = {..., -1, 0, 1, ...}.


Cosine

[stem plot of a discrete-time cosine versus n]

Figure 1.7: The discrete-time cosine signal is plotted as a stem plot. Can you find the formula for this signal?

We usually draw discrete-time signals as stem plots to emphasize the fact they are functions defined only on the integers. We can delay a discrete-time signal by an integer just as with analog ones. A signal delayed by m samples has the expression s(n - m).

1.3.2 Complex Exponentials


The most important signal is, of course, the complex exponential sequence:

s(n) = e^{i 2π f n}   (1.1)

Note that the frequency variable f is dimensionless and that adding an integer to the frequency of the discrete-time complex exponential has no effect on the signal's value:

e^{i 2π (f + m) n} = e^{i 2π f n} e^{i 2π m n} = e^{i 2π f n}   (1.2)

This derivation follows because the complex exponential evaluated at an integer multiple of 2π equals one. Thus, we need only consider frequency to have a value in some unit-length interval.
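A quick numerical check makes (1.2) concrete. The numpy sketch below is an added illustration (f = 0.2 and m = 3 are arbitrary choices): shifting the frequency by an integer leaves the sequence untouched.

```python
import numpy as np

n = np.arange(16)
f, m = 0.2, 3                                  # arbitrary frequency and integer shift
s1 = np.exp(1j * 2 * np.pi * f * n)            # s(n) = e^{i 2 pi f n}
s2 = np.exp(1j * 2 * np.pi * (f + m) * n)      # frequency shifted by the integer m
print(np.allclose(s1, s2))                     # True: the two sequences are identical
```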

1.3.3 Sinusoids
Discrete-time sinusoids have the obvious form s(n) = A cos(2π f n + φ). As opposed to analog complex exponentials and sinusoids that can have their frequencies be any real value, frequencies of their discrete-time counterparts yield unique waveforms only when f lies in the interval (-1/2, 1/2]. This choice of frequency interval is arbitrary; we can also choose the frequency to lie in the interval [0, 1). How to choose a unit-length interval for a sinusoid's frequency will become evident later.

1.3.4 Unit Sample


The second-most important discrete-time signal is the unit sample, which is defined to be

δ(n) = 1 if n = 0, and 0 otherwise.   (1.3)


Unit sample

[stem plot: a single unit-height sample at n = 0]

Figure 1.8: The unit sample.

Examination of a discrete-time signal's plot, like that of the cosine signal shown in Figure 1.7 (Cosine), reveals that all signals consist of a sequence of delayed and scaled unit samples. Because the value of a sequence at each integer m is denoted by s(m) and the unit sample delayed to occur at m is written δ(n - m), we can decompose any signal as a sum of unit samples delayed to the appropriate location and scaled by the signal value:

s(n) = Σ_{m=-∞}^{∞} s(m) δ(n - m)   (1.4)

This kind of decomposition is unique to discrete-time signals, and will prove useful subsequently.
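For a finite-length signal, the decomposition (1.4) can be checked directly; the sketch below (an added illustration with arbitrary example values) rebuilds a sequence from delayed, scaled unit samples.

```python
import numpy as np

def delta(n):
    """Unit sample of (1.3), evaluated on an integer array."""
    return (n == 0).astype(float)

s = np.array([0.5, -1.0, 2.0, 0.0, 3.0])   # arbitrary example values
n = np.arange(len(s))
rebuilt = sum(s[m] * delta(n - m) for m in range(len(s)))
print(np.allclose(rebuilt, s))             # True: shifted impulses recover s
```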

1.3.5 Unit Step


The unit step in discrete-time is well-defined at the origin, as opposed to the situation with analog signals:

u(n) = 1 if n ≥ 0, and 0 if n < 0.   (1.5)

1.3.6 Symbolic Signals


An interesting aspect of discrete-time signals is that their values do not need to be real numbers. We do have real-valued discrete-time signals like the sinusoid, but we also have signals that denote the sequence of characters typed on the keyboard. Such characters certainly aren't real numbers, and as a collection of possible signal values, they have little mathematical structure other than that they are members of a set. More formally, each element of the symbolic-valued signal s(n) takes on one of the values {a1, ..., aK}, which comprise the alphabet A. This technical terminology does not mean we restrict symbols to being members of the English or Greek alphabet. They could represent keyboard characters, bytes (8-bit quantities), or integers that convey daily temperature. Whether controlled by software or not, discrete-time systems are ultimately constructed from digital circuits, which consist entirely of analog circuit elements. Furthermore, the transmission and reception of discrete-time signals, like e-mail, is accomplished with analog signals and systems. Understanding how discrete-time and analog signals and systems intertwine is perhaps the main goal of this course.

1.3.7 Discrete-Time Systems


Discrete-time systems can act on discrete-time signals in ways similar to those found in analog signals and systems. Because of the role of software in discrete-time systems, many more different systems can be envisioned and "constructed" with programs than can be with analog signals. In fact, a special class of analog signals can be converted into discrete-time signals, processed with software, and converted back into an analog signal, all without the incursion of error. For such signals, systems can be easily produced in software, with equivalent analog realizations difficult, if not impossible, to design.

1.4 Linear Time-Invariant Systems


A discrete-time signal s(n) is delayed by n0 samples when we write s(n - n0), with n0 > 0. Choosing n0 to be negative advances the signal along the integers. As opposed to analog delays, discrete-time delays can only be integer valued. In the frequency domain, delaying a signal corresponds to a linear phase shift of the signal's discrete-time Fourier transform:

s(n - n0) <-> e^{-i 2π f n0} S(e^{i 2π f})

Linear discrete-time systems have the superposition property:

S(a1 x1(n) + a2 x2(n)) = a1 S(x1(n)) + a2 S(x2(n))   (1.6)

A discrete-time system is called shift-invariant (analogous to time-invariant analog systems) if delaying the input delays the corresponding output:

If S(x(n)) = y(n), then S(x(n - n0)) = y(n - n0)   (1.7)

We use the term shift-invariant to emphasize that delays can only have integer values in discrete-time, while in analog signals, delays can be arbitrarily valued.

We want to concentrate on systems that are both linear and shift-invariant. It will be these that allow us the full power of frequency-domain analysis and implementations. Because we have no physical constraints in "constructing" such systems, we need only a mathematical specification. In analog systems, the differential equation specifies the input-output relationship in the time-domain. The corresponding discrete-time specification is the difference equation.

The Difference Equation

y(n) = a1 y(n - 1) + ... + ap y(n - p) + b0 x(n) + b1 x(n - 1) + ... + bq x(n - q)   (1.8)

Here, the output signal y(n) is related to its p past values y(n - l), l = {1, ..., p}, and to the current and q past values of the input signal x(n). The system's characteristics are determined by the choices for the number of coefficients p and q and the coefficients' values {a1, ..., ap} and {b0, b1, ..., bq}.

aside: There is an asymmetry in the coefficients: where is a0? This coefficient would multiply the y(n) term in the difference equation (1.8: The Difference Equation). We have essentially divided the equation by it, which does not change the input-output relationship. We have thus created the convention that a0 is always one.

As opposed to differential equations, which only provide an implicit description of a system (we must somehow solve the differential equation), difference equations provide an explicit way of computing the output for any input. We simply express the difference equation by a program that calculates each output from the previous output values, and the current and previous inputs.
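That program is only a few lines long. The sketch below is an added illustration (the function name and example coefficients are ours) that evaluates (1.8) sample by sample; scipy.signal.lfilter implements the same recursion under a slightly different sign convention for the a coefficients.

```python
import numpy as np

def difference_equation(a, b, x):
    """Evaluate (1.8): a = [a1, ..., ap], b = [b0, ..., bq], with a0 taken as one."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        for l in range(1, len(a) + 1):         # past outputs y(n - l)
            if n - l >= 0:
                y[n] += a[l - 1] * y[n - l]
        for k in range(len(b)):                # current and past inputs x(n - k)
            if n - k >= 0:
                y[n] += b[k] * x[n - k]
    return y

x = np.zeros(8); x[0] = 1.0                    # unit sample input
print(difference_equation([0.5], [1.0], x))    # 1, 0.5, 0.25, ...: the impulse response
```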

1.5 Discrete Time Convolution


1.5.1 Introduction

Convolution, one of the most important concepts in electrical engineering, can be used to determine the output a system produces for a given input signal. It can be shown that a linear time invariant system is completely characterized by its impulse response. The sifting property of the discrete time impulse function tells us that the input signal to a system can be represented as a sum of scaled and shifted unit impulses. Thus, by linearity, it would seem reasonable to compute the output signal as the sum of scaled and shifted unit impulse responses. That is exactly what the operation of convolution accomplishes. Hence, convolution can be used to determine a linear time invariant system's output from knowledge of the input and the impulse response.

1.5.2 Convolution and Circular Convolution


1.5.2.1 Convolution

1.5.2.1.1 Operation Definition
Discrete time convolution is an operation on two discrete time signals defined by the sum

(f * g)(n) = Σ_{k=-∞}^{∞} f(k) g(n - k)   (1.9)

for all signals f, g defined on Z. It is important to note that the operation of convolution is commutative, meaning that

f * g = g * f   (1.10)

for all signals f, g defined on Z. Thus, the convolution operation could have been just as easily stated using the equivalent definition

(f * g)(n) = Σ_{k=-∞}^{∞} f(n - k) g(k)   (1.11)

for all signals f, g defined on Z. Convolution has several other important properties not listed here but explained and derived in a later module.
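For finite-length sequences, (1.9) is a double loop. The sketch below is an added illustration that implements the sum directly and compares it with numpy's built-in np.convolve, also confirming the commutativity in (1.10).

```python
import numpy as np

def conv(f, g):
    """Direct evaluation of the convolution sum (1.9) for finite sequences."""
    y = np.zeros(len(f) + len(g) - 1)
    for n in range(len(y)):
        for k in range(len(f)):
            if 0 <= n - k < len(g):
                y[n] += f[k] * g[n - k]
    return y

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5])
print(np.allclose(conv(f, g), np.convolve(f, g)))         # True
print(np.allclose(np.convolve(f, g), np.convolve(g, f)))  # commutativity (1.10)
```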

1.5.2.1.2 Definition Motivation


The above operation definition has been chosen to be particularly useful in the study of linear time invariant systems. In order to see this, consider a linear time invariant system H with unit impulse response h. Given a system input signal x, we would like to compute the system output signal H(x). First, we note that the input can be expressed as the convolution

x(n) = Σ_{k=-∞}^{∞} x(k) δ(n - k)   (1.12)

by the sifting property of the unit impulse function. By linearity

Hx(n) = Σ_{k=-∞}^{∞} x(k) Hδ(n - k).   (1.13)

Since Hδ(n - k) is the shifted unit impulse response h(n - k), this gives the result

Hx(n) = Σ_{k=-∞}^{∞} x(k) h(n - k) = (x * h)(n).   (1.14)

Hence, convolution has been defined such that the output of a linear time invariant system is given by the convolution of the system input with the system unit impulse response.


1.5.2.1.3 Graphical Intuition


It is often helpful to be able to visualize the computation of a convolution in terms of graphical processes. Consider the convolution of two functions f, g given by

(f * g)(n) = Σ_{k=-∞}^{∞} f(k) g(n - k) = Σ_{k=-∞}^{∞} f(n - k) g(k).   (1.15)

The first step in graphically understanding the operation of convolution is to plot each of the functions. Next, one of the functions must be selected, and its plot reflected across the k = 0 axis. For each n, that same function must be shifted left by n. The product of the two resulting plots is then constructed. Finally, the sum of the values of the resulting plot is computed.

Example 1.1

Recall that the impulse response for a discrete time echoing feedback system with gain a is

h(n) = a^n u(n),   (1.16)

and consider the response to an input signal that is another exponential

x(n) = b^n u(n).   (1.17)

We know that the output for this input is given by the convolution of the impulse response with the input signal

y(n) = x(n) * h(n).   (1.18)

We would like to compute this operation by beginning in a way that minimizes the algebraic complexity of the expression. However, in this case, each possible choice is equally simple. Thus, we would like to compute

y(n) = Σ_{k=-∞}^{∞} a^k u(k) b^{n-k} u(n - k).   (1.19)

The step functions can be used to further simplify this sum. Therefore,

y(n) = 0 for n < 0   (1.20)

and

y(n) = b^n Σ_{k=0}^{n} (a b^{-1})^k for n ≥ 0.   (1.21)

Hence, provided a b^{-1} ≠ 1, we have that

y(n) = 0 for n < 0, and y(n) = b^n (1 - (a b^{-1})^{n+1}) / (1 - a b^{-1}) for n ≥ 0.   (1.22)
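The closed form (1.22) is easy to verify numerically; the sketch below is an added illustration (the gains a = 0.5 and b = 0.8 are arbitrary) that compares it against a brute-force convolution.

```python
import numpy as np

a, b, N = 0.5, 0.8, 12
n = np.arange(N)
h = a ** n                                 # h(n) = a^n u(n), truncated to N samples
x = b ** n                                 # x(n) = b^n u(n), truncated to N samples
y_conv = np.convolve(x, h)[:N]             # first N output samples are exact
y_formula = b ** n * (1 - (a / b) ** (n + 1)) / (1 - a / b)
print(np.allclose(y_conv, y_formula))      # True
```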


1.5.2.2 Circular Convolution


Discrete time circular convolution is an operation on two finite length or periodic discrete time signals defined by the sum

(f ⊛ g)(n) = Σ_{k=0}^{N-1} f^(k) g^(n - k)   (1.23)

for all signals f, g defined on Z[0, N-1], where f^, g^ are periodic extensions of f and g. It is important to note that the operation of circular convolution is commutative, meaning that

f ⊛ g = g ⊛ f   (1.24)

for all signals f, g defined on Z[0, N-1]. Thus, the circular convolution operation could have been just as easily stated using the equivalent definition

(f ⊛ g)(n) = Σ_{k=0}^{N-1} f^(n - k) g^(k)   (1.25)

for all signals f, g defined on Z[0, N-1], where f^, g^ are periodic extensions of f and g. Circular convolution has several other important properties not listed here but explained and derived in a later module.

Alternatively, discrete time circular convolution can be expressed as the sum of two summations given by

(f ⊛ g)(n) = Σ_{k=0}^{n} f(k) g(n - k) + Σ_{k=n+1}^{N-1} f(k) g(n - k + N)   (1.26)

for all signals f, g defined on Z[0, N-1].

Meaningful examples of computing discrete time circular convolutions in the time domain would involve complicated algebraic manipulations dealing with the wrap around behavior, which would ultimately be more confusing than helpful. Thus, none will be provided in this section. Of course, example computations in the time domain are easy to program and demonstrate. However, discrete time circular convolutions are more easily computed using frequency domain tools as will be shown in the discrete time Fourier series section.
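As a preview of those frequency-domain tools, the sketch below (an added illustration using numpy's FFT) computes a circular convolution both by the wrap-around sum of (1.26) and by multiplying DFTs; the two agree.

```python
import numpy as np

def circ_conv_direct(f, g):
    """Circular convolution via the periodic index (n - k) mod N, as in (1.26)."""
    N = len(f)
    return np.array([sum(f[k] * g[(n - k) % N] for k in range(N))
                     for n in range(N)])

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([1.0, 0.0, 0.5, 0.0])
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real
print(np.allclose(circ_conv_direct(f, g), via_fft))   # True
```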

1.5.2.2.1 Definition Motivation


The above operation definition has been chosen to be particularly useful in the study of linear time invariant systems. In order to see this, consider a linear time invariant system H with unit impulse response h. Given a finite or periodic system input signal x, we would like to compute the system output signal H(x). First, we note that the input can be expressed as the circular convolution

x(n) = Σ_{k=0}^{N-1} x(k) δ(n - k)   (1.27)

by the sifting property of the unit impulse function. By linearity,

Hx(n) = Σ_{k=0}^{N-1} x(k) Hδ(n - k).   (1.28)

Since Hδ(n - k) is the shifted unit impulse response h(n - k), this gives the result

Hx(n) = Σ_{k=0}^{N-1} x(k) h(n - k) = (x ⊛ h)(n).   (1.29)

Hence, circular convolution has been defined such that the output of a linear time invariant system is given by the circular convolution of the system input with the system unit impulse response.

1.5.2.2.2 Graphical Intuition


It is often helpful to be able to visualize the computation of a circular convolution in terms of graphical processes. Consider the circular convolution of two finite length functions f, g given by

(f ⊛ g)(n) = Σ_{k=0}^{N-1} f^(k) g^(n - k) = Σ_{k=0}^{N-1} f^(n - k) g^(k).   (1.30)

The first step in graphically understanding the operation of convolution is to plot each of the periodic extensions of the functions. Next, one of the functions must be selected, and its plot reflected across the k = 0 axis. For each k ∈ Z[0, N-1], that same function must be shifted left by k. The product of the two resulting plots is then constructed. Finally, the area under the resulting curve on Z[0, N-1] is computed.


1.5.3 Interactive Element

Figure 1.9: Interact (when online) with a Mathematica CDF demonstrating Discrete Linear Convolution. To download, right click and save file as .cdf.

1.5.4 Convolution Summary


Convolution, one of the most important concepts in electrical engineering, can be used to determine the output signal of a linear time invariant system for a given input signal with knowledge of the system's unit impulse response. The operation of discrete time convolution is defined such that it performs this function for infinite length discrete time signals and systems. The operation of discrete time circular convolution is defined such that it performs this function for finite length and periodic discrete time signals. In each case, the output of the system is the convolution or circular convolution of the input signal with the unit impulse response.


1.6 Review of Linear Algebra


Vector spaces are the principal object of study in linear algebra. A vector space is always defined with respect to a field of scalars.

1.6.1 Fields
A field is a set F equipped with two operations, addition and multiplication, and containing two special members 0 and 1 (0 ≠ 1), such that for all {a, b, c} ⊆ F:

1. (a) (a + b) ∈ F
   (b) a + b = b + a
   (c) (a + b) + c = a + (b + c)
   (d) a + 0 = a
   (e) there exists -a such that a + (-a) = 0
2. (a) ab ∈ F
   (b) ab = ba
   (c) (ab) c = a (bc)
   (d) a · 1 = a
   (e) for a ≠ 0, there exists a^{-1} such that a a^{-1} = 1
3. a (b + c) = ab + ac

More concisely,

1. F is an abelian group under addition
2. F \ {0} is an abelian group under multiplication
3. multiplication distributes over addition

1.6.1.1 Examples
Q, R, C

1.6.2 Vector Spaces


Let F be a field, and V a set. We say V is a vector space over F if there exist two operations, defined for all a ∈ F, u ∈ V and v ∈ V:

vector addition: (u, v) -> (u + v) ∈ V
scalar multiplication: (a, v) -> av ∈ V

and if there exists an element denoted 0 ∈ V, such that the following hold for all a ∈ F, b ∈ F, and u ∈ V, v ∈ V, and w ∈ V:

1. (a) u + (v + w) = (u + v) + w
   (b) u + v = v + u
   (c) u + 0 = u
   (d) there exists -u such that u + (-u) = 0
2. (a) a (u + v) = au + av
   (b) (a + b) u = au + bu
   (c) (ab) u = a (bu)
   (d) 1 u = u

More concisely,

1. V is an abelian group under plus
2. Natural properties of scalar multiplication

1.6.2.1 Examples
R^N is a vector space over R.
C^N is a vector space over C.
C^N is a vector space over R.
R^N is not a vector space over C.

The elements of V are called vectors.

1.6.3 Euclidean Space


Throughout this course we will think of a signal as a vector

x = (x1, x2, ..., xN)^T

The samples {xi} could be samples from a finite duration, continuous time signal, for example. A signal will belong to one of two vector spaces:

1.6.3.1 Real Euclidean space

x ∈ R^N (over R)

1.6.3.2 Complex Euclidean space

x ∈ C^N (over C)

1.6.4 Subspaces
Let V be a vector space over F. A subset S ⊆ V is called a subspace of V if S is a vector space over F in its own right.

Example 1.2
V = R², F = R, S = any line through the origin.

Figure 1.10: S is any line through the origin.

Are there other subspaces?

Theorem 1.1: S ⊆ V is a subspace if and only if for all a ∈ F and b ∈ F and for all s ∈ S and t ∈ S, (as + bt) ∈ S.


1.6.5 Linear Independence


Let u1, ..., uk ∈ V. We say that these vectors are linearly dependent if there exist scalars a1, ..., ak ∈ F such that

Σ_{i=1}^{k} ai ui = 0   (1.31)

and at least one ai ≠ 0. If (1.31) only holds for the case a1 = ... = ak = 0, we say that the vectors are linearly independent.

Example 1.3

(1, 1, 2)^T + 2 (2, 3, 0)^T - (5, 7, 2)^T = (0, 0, 0)^T

so these vectors are linearly dependent in R³.
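Linear dependence is easy to test numerically: stack the vectors as columns of a matrix and compare its rank with the number of columns. The sketch below is an added illustration checking the combination in Example 1.3.

```python
import numpy as np

v1 = np.array([1.0, 1.0, 2.0])
v2 = np.array([2.0, 3.0, 0.0])
v3 = np.array([5.0, 7.0, 2.0])
A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))   # 2 < 3 columns, so the set is dependent
print(v1 + 2 * v2 - v3)           # [0. 0. 0.], the combination from Example 1.3
```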

1.6.6 Spanning Sets


Consider the subset S = {v1, v2, ..., vk}. Define the span of S:

<S> ≡ span(S) ≡ { Σ_{i=1}^{k} ai vi | ai ∈ F }

Fact: <S> is a subspace of V.

Example 1.4
V = R³, F = R, S = {v1, v2}, v1 = (1, 0, 0)^T, v2 = (0, 1, 0)^T. Then <S> = the xy-plane.

Figure 1.11: <S> is the xy-plane.

1.6.6.1 Aside
If S is infinite, the notions of linear independence and span are easily generalized. We say S is linearly independent if, for every finite collection u1, ..., uk ∈ S (k arbitrary), we have

( Σ_{i=1}^{k} ai ui = 0 ) implies ( ai = 0 for all i )

The span of S is

<S> = { Σ_{i=1}^{k} ai ui | ai ∈ F, ui ∈ S, k < ∞ }

note: In both definitions, we only consider finite sums.

1.6.7 Bases
A set B ⊆ V is called a basis for V over F if and only if

1. B is linearly independent
2. <B> = V

Bases are of fundamental importance in signal processing. They allow us to decompose a signal into building blocks (basis vectors) that are often more easily understood.

Example 1.5
V = (real or complex) Euclidean space, R^N or C^N. B = {e1, ..., eN} is the standard basis, where ei = (0, ..., 0, 1, 0, ..., 0)^T with the 1 in the ith position.

Example 1.6
V = C^N over C. B = {u1, ..., uN}, the DFT basis, where

uk = (1, e^{-i 2π k/N}, ..., e^{-i 2π k (N-1)/N})^T

and i = √-1.

1.6.7.1 Key Fact
If B is a basis for V, then every v ∈ V can be written uniquely (up to order of terms) in the form

v = Σ_{i=1}^{N} ai vi

where ai ∈ F and vi ∈ B.

1.6.7.2 Other Facts
If S is a linearly independent set, then S can be extended to a basis.
If <S> = V, then S contains a basis.

1.6.8 Dimension
Let V be a vector space with basis B. The dimension of V, denoted dim(V), is the cardinality of B.

Theorem 1.2: Every vector space has a basis.
Theorem 1.3: Every basis for a vector space has the same cardinality.

Hence dim(V) is well-defined. If dim(V) < ∞, we say V is finite dimensional.

1.6.8.1 Examples

vector space | field of scalars | dimension
R^N          | R                | N
C^N          | C                | N
C^N          | R                | 2N

Table 1.2

Every subspace is a vector space, and therefore has its own dimension.

Example 1.7
Suppose (S = {u1, ..., uk}) ⊆ V is a linearly independent set. Then dim(<S>) = k.

Facts
If S is a subspace of V, then dim(S) ≤ dim(V).
If dim(S) = dim(V) < ∞, then S = V.

1.6.9 Direct Sums


Let V be a vector space, and let S ⊆ V and T ⊆ V be subspaces. We say V is the direct sum of S and T, written V = S ⊕ T, if and only if for every v ∈ V there exist unique s ∈ S and t ∈ T such that v = s + t. If V = S ⊕ T, then T is called a complement of S.

Example 1.8
V = C = {f : R -> R | f is continuous}, S = even functions in C, T = odd functions in C. Every f decomposes as

f(t) = (1/2)(f(t) + f(-t)) + (1/2)(f(t) - f(-t)),

an even part plus an odd part. For uniqueness: if f = g + h = g' + h', with g ∈ S, g' ∈ S, h ∈ T, h' ∈ T, then g - g' = h' - h is both even and odd, which implies g = g' and h = h'.

1.6.9.1 Facts
1. Every subspace has a complement.
2. V = S ⊕ T if and only if
   (a) S ∩ T = {0}
   (b) <S, T> = V
3. If V = S ⊕ T and dim(V) < ∞, then dim(V) = dim(S) + dim(T).


1.6.9.2 Proofs
Invoke a basis.

1.6.10 Norms
Let V be a vector space over F. A norm is a mapping V -> R, denoted by ‖·‖, such that for all u ∈ V, v ∈ V, and α ∈ F:

1. ‖u‖ > 0 if u ≠ 0
2. ‖αu‖ = |α| ‖u‖
3. ‖u + v‖ ≤ ‖u‖ + ‖v‖

1.6.10.1 Examples

Euclidean norms:

x ∈ R^N: ‖x‖ = ( Σ_{i=1}^{N} xi² )^{1/2}

x ∈ C^N: ‖x‖ = ( Σ_{i=1}^{N} |xi|² )^{1/2}

1.6.10.2 Induced Metric

Every norm induces a metric on V:

d(u, v) ≡ ‖u - v‖

which leads to a notion of "distance" between vectors.

1.6.11 Inner products


Let V be a vector space over F, F = R or C. An inner product is a mapping V × V -> F, denoted <·, ·>, such that

1. <v, v> ≥ 0, and <v, v> = 0 if and only if v = 0
2. <u, v> = <v, u>* (the complex conjugate)
3. <au + bv, w> = a <u, w> + b <v, w>

1.6.11.1 Examples

R^N over R: <x, y> = x^T y = Σ_{i=1}^{N} xi yi

C^N over C: <x, y> = y^H x = Σ_{i=1}^{N} xi yi*

If x = (x1, ..., xN)^T ∈ C^N, then x^H = (x1*, ..., xN*) is called the "Hermitian," or "conjugate transpose," of x.

1.6.12 Triangle Inequality


If we define ‖u‖ = √(<u, u>), then

‖u + v‖ ≤ ‖u‖ + ‖v‖

Hence, every inner product induces a norm.

1.6.13 Cauchy-Schwarz Inequality


For all u ∈ V, v ∈ V,

|<u, v>| ≤ ‖u‖ ‖v‖

In inner product spaces, we have a notion of the angle between two vectors:

∠(u, v) = arccos( <u, v> / (‖u‖ ‖v‖) ) ∈ [0, 2π)

1.6.14 Orthogonality
u and v are orthogonal if <u, v> = 0. Notation: u ⊥ v. If in addition ‖u‖ = ‖v‖ = 1, we say u and v are orthonormal. In an orthogonal (orthonormal) set, each pair of vectors is orthogonal (orthonormal).

Figure 1.12: Orthogonal vectors in R².

1.6.15 Orthonormal Bases


An orthonormal basis is a basis {vi} such that

<vi, vj> = δij = 1 if i = j, and 0 if i ≠ j

Example 1.9
The standard basis for R^N or C^N.

Example 1.10
The normalized DFT basis

uk = (1/√N) (1, e^{-i 2π k/N}, ..., e^{-i 2π k (N-1)/N})^T

1.6.16 Expansion Coefficients


If the representation of v with respect to {vi} is

v = Σ_i ai vi

then

ai = <vi, v>
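With an orthonormal basis such as the normalized DFT basis of Example 1.10, these coefficients are computed by inner products with the basis vectors. The sketch below is an added illustration verifying both the orthonormality of the basis and the reconstruction v = Σ ai vi.

```python
import numpy as np

N = 8
k = np.arange(N)
U = np.exp(-1j * 2 * np.pi * np.outer(k, k) / N) / np.sqrt(N)  # columns are u_k
print(np.allclose(U.conj().T @ U, np.eye(N)))   # True: orthonormal basis

v = np.random.randn(N) + 1j * np.random.randn(N)
a = U.conj().T @ v                # a_k: inner product of v with each u_k
print(np.allclose(U @ a, v))      # True: v = sum_k a_k u_k, recovered exactly
```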

1.6.17 Gram-Schmidt
Every inner product space has an orthonormal basis. Any (countable) basis can be made orthogonal by the Gram-Schmidt orthogonalization process.
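A minimal sketch of that process (an added illustration; the input vectors and tolerance are arbitrary choices): subtract from each vector its projections onto the vectors already orthonormalized, then normalize.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of vectors; linearly dependent ones are skipped."""
    ortho = []
    for v in vectors:
        u = v.astype(complex)
        for q in ortho:
            u = u - np.vdot(q, u) * q        # remove the component along q
        norm = np.linalg.norm(u)
        if norm > 1e-12:
            ortho.append(u / norm)
    return ortho

Q = gram_schmidt([np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])])
print(abs(np.vdot(Q[0], Q[1])) < 1e-12)      # True: the pair is orthonormal
```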

1.6.18 Orthogonal Complements

Let S ⊆ V be a subspace. The orthogonal complement of S is

S⊥ = { u ∈ V | <u, v> = 0 for all v ∈ S }

S⊥ is easily seen to be a subspace. If dim(V) < ∞, then V = S ⊕ S⊥.

aside: If dim(V) = ∞, then in order to have V = S ⊕ S⊥ we require V to be a Hilbert Space.

1.6.19 Linear Transformations


Loosely speaking, a linear transformation is a mapping from one vector space to another that preserves vector space operations. More precisely, let V, W be vector spaces over the same field F. A linear transformation is a mapping T : V -> W such that

T(au + bv) = a T(u) + b T(v)

for all a ∈ F, b ∈ F and u ∈ V, v ∈ V. In this class we will be concerned with linear transformations between (real or complex) Euclidean spaces, or subspaces thereof.

1.6.20 Image
image (T ) = { w | w W T (v ) = wfor
some v }

1.6.21 Nullspace
Also known as the kernel:
ker(T) = { v ∈ V | T(v) = 0 }
Both the image and the nullspace are easily seen to be subspaces.

1.6.22 Rank
rank(T) = dim(image(T))

1.6.23 Nullity
null(T) = dim(ker(T))

1.6.24 Rank plus nullity theorem
rank(T) + null(T) = dim(V)

1.6.25 Matrices
Every linear transformation T has a matrix representation. If T : E^N → E^M, E = R or C, then T is represented by an M×N matrix
A = ( a_11 ... a_1N )
    (  .   .     .  )
    ( a_M1 ... a_MN )
where (a_1i, . . . , a_Mi)^T = T(e_i) and e_i = (0, . . . , 1, . . . , 0)^T is the i-th standard basis vector.
aside: A linear transformation can be represented with respect to any bases of E^N and E^M, leading to a different A. We will always represent a linear transformation using the standard bases.
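To make this concrete, here is a short NumPy sketch (added for illustration) that recovers the matrix of a linear transformation column by column from its action on the standard basis vectors; the transformation used, a rotation of R², is just a hypothetical example:

import numpy as np

def rotate(v, theta=np.pi / 4):
    """A sample linear transformation T : R^2 -> R^2 (rotation by theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

N = 2
# Column i of A is T(e_i)
A = np.column_stack([rotate(e) for e in np.eye(N)])
v = np.array([1.0, 2.0])
assert np.allclose(A @ v, rotate(v))   # matrix and transformation agree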

1.6.26 Column span
colspan(A) = ⟨A⟩ = image(A)

1.6.27 Duality
If A : R^N → R^M, then
ker(A)^⊥ = image(A^T)

Figure 1.13

If A : C^N → C^M, then
ker(A)^⊥ = image(A^H)

1.6.28 Inverses
The linear transformation/matrix A is invertible if and only if there exists a matrix B such that AB = BA = I (identity). Only square matrices can be invertible.

Theorem 1.4:
Let A : F^N → F^N be linear, F = R or C. The following are equivalent:
1. A is invertible (nonsingular)
2. rank(A) = N
3. null(A) = 0
4. det A ≠ 0
5. The columns of A form a basis.

If A^{−1} = A^T (or A^H in the complex case), we say A is orthogonal (or unitary).

1.7 Hilbert Spaces

1.7.1 Hilbert Spaces
A vector space with a valid inner product defined on it is called an inner product space, which is also a normed linear space. A Hilbert space is an inner product space that is complete with respect to the norm defined using the inner product. Hilbert spaces are named after David Hilbert, who developed this idea through his studies of integral equations. We define our valid norm using the inner product as:

‖x‖ = √⟨x, x⟩   (1.32)

10 This content is available online at <http://cnx.org/content/m10840/2.6/>.
11 "Inner Products" <http://cnx.org/content/m10755/latest/>
12 http://www-history.mcs.st-andrews.ac.uk/history/Mathematicians/Hilbert.html

Hilbert spaces are useful in studying and generalizing the concepts of Fourier expansion and Fourier transforms, and are very important to the study of quantum mechanics. Hilbert spaces are studied under the functional analysis branch of mathematics.

1.7.1.1 Examples of Hilbert Spaces
Below we will list a few examples of Hilbert spaces. You can verify that these are valid inner products at home.

For C^n:
⟨x, y⟩ = y^H x = Σ_{i=0}^{n−1} x_i y_i*

Space of finite energy complex functions, L²(R):
⟨f, g⟩ = ∫_{−∞}^{∞} f(t) g(t)* dt

Space of square-summable sequences, ℓ²(Z):
⟨x, y⟩ = Σ_{i=−∞}^{∞} x[i] y[i]*

1.8 Signal Expansions

1.8.1 Main Idea
When working with signals many times it is helpful to break up a signal into smaller, more manageable parts. Hopefully by now you have been exposed to the concept of eigenvectors and their use in decomposing a signal into one of its possible bases. By doing this we are able to simplify our calculations of signals and systems through eigenfunctions of LTI systems. Now we would like to look at an alternative way to represent signals, through the use of an orthonormal basis. We can think of an orthonormal basis as a set of building blocks we use to construct functions. We will build up the signal/vector as a weighted sum of basis elements.

Example 1.11
The complex sinusoids (1/√T) e^{iω₀nt} for all −∞ < n < ∞ form an orthonormal basis for L²([0, T]).
In our Fourier series equation, f(t) = Σ_{n=−∞}^{∞} c_n e^{iω₀nt}, the {c_n} are just another representation of f(t).

note: For signals/vectors in a Hilbert Space, the expansion coefficients are easy to find.

13 "Hilbert Spaces" <http://cnx.org/content/m10434/latest/>
14 This content is available online at <http://cnx.org/content/m10760/2.6/>.
15 "Eigenvectors and Eigenvalues" <http://cnx.org/content/m10736/latest/>
16 "Eigenfunctions of LTI Systems" <http://cnx.org/content/m10500/latest/>
17 "Fourier Series: Eigenfunction Approach" <http://cnx.org/content/m10496/latest/>

1.8.2 Alternate Representation
Recall our definition of a basis: A set of vectors {b_i} in a vector space S is a basis if
1. The b_i are linearly independent.
2. The b_i span S. That is, we can find {α_i}, where α_i ∈ C (scalars), such that
x = Σ_i α_i b_i,  x ∈ S   (1.33)
where x is a vector in S, α is a scalar in C, and b is a vector in S.
Condition 2 in the above definition says we can decompose any vector in terms of the {b_i}. Condition 1 ensures that the decomposition is unique (think about this at home).

note: The {α_i} provide an alternate representation of x.

Example 1.12
Let us look at a simple example in R², where we have the following vector:
x = (1, 2)^T
Standard Basis: {e_0, e_1} = {(1, 0)^T, (0, 1)^T}
x = e_0 + 2e_1
Alternate Basis: {h_0, h_1} = {(1, 1)^T, (1, −1)^T}
x = (3/2) h_0 − (1/2) h_1

In general, given a basis {b_0, b_1} and a vector x ∈ R², how do we find the α_0 and α_1 such that
x = α_0 b_0 + α_1 b_1   (1.34)

1.8.3 Finding the Coefficients
Now let us address the question posed above about finding the α_i's in general for R². We start by rewriting (1.34) so that we can stack our b_i's as columns in a 2×2 matrix.

x = α_0 ( b_0 ) + α_1 ( b_1 )   (1.35)

x = ( b_0  b_1 ) ( α_0 )
                 ( α_1 )   (1.36)

Example 1.13
Here is a simple example, which shows a little more detail about the above equations.

( x[0] ) = α_0 ( b_0[0] ) + α_1 ( b_1[0] ) = ( α_0 b_0[0] + α_1 b_1[0] )
( x[1] )       ( b_0[1] )       ( b_1[1] )   ( α_0 b_0[1] + α_1 b_1[1] )   (1.37)

( x[0] ) = ( b_0[0]  b_1[0] ) ( α_0 )
( x[1] )   ( b_0[1]  b_1[1] ) ( α_1 )   (1.38)

1.8.3.1 Simplifying our Equation
To make notation simpler, we define the following two items from the above equations:

Basis Matrix:
B = ( b_0  b_1 )
where the b_i's are stacked as the columns of B.

Coefficient Vector:
α = (α_0, α_1)^T

This gives us the following, concise equation:
x = Bα   (1.39)
which is equivalent to x = Σ_{i=0}^{1} α_i b_i.

Example 1.14
Given a standard basis, {(1, 0)^T, (0, 1)^T}, we have the following basis matrix:
B = ( 1  0 )
    ( 0  1 )
To get the α_i's, we solve for the coefficient vector in (1.39):
α = B^{−1} x   (1.40)
where B^{−1} is the inverse matrix of B.

19 "Matrix

Inversion" <http://cnx.org/content/m2113/latest/>

Available for free at Connexions <http://cnx.org/content/col10360/1.4>

32

CHAPTER 1.

FOUNDATIONS

1.8.3.2 Examples

Example 1.15
Let us look at the standard basis first and try to calculate α from it.
B = ( 1  0 ) = I
    ( 0  1 )
where I is the identity matrix. In order to solve for α, let us find the inverse of B first (which is obviously very trivial in this case):
B^{−1} = ( 1  0 )
         ( 0  1 )
Therefore we get
α = B^{−1} x = x

Example 1.16
Let us look at a ever-so-slightly more complicated basis of our basis matrix and inverse basis matrix becomes:

1 1 , = {h0 , h1 } 1 1

Then

B= B 1 =
and for this example it is given that

1 1
1 2 1 2

1 1
1 2 1 2

x=
Now we solve for

3 2

= B 1 x =

1 2 1 2

1 2 1 2

3 2

2.5 0.5

and we get

x = 2.5h0 + 0.5h1
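The same computation in a few lines of NumPy (an added sketch, not from the original module), reproducing the coefficients of Example 1.16:

import numpy as np

B = np.array([[1.0, 1.0],
              [1.0, -1.0]])      # columns are h_0 and h_1
x = np.array([3.0, 2.0])

alpha = np.linalg.solve(B, x)    # solves B alpha = x
print(alpha)                     # [2.5  0.5], so x = 2.5 h_0 + 0.5 h_1
assert np.allclose(B @ alpha, x)

Note that using np.linalg.solve, rather than explicitly forming B^{−1}, is the numerically preferred way to evaluate α = B^{−1}x.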

Exercise 1.8.1                                  (Solution on p. 48.)
Now we are given the following basis matrix and x:
{b_0, b_1} = {(1, 2)^T, (3, 0)^T}
x = (3, 2)^T
For this problem, make a sketch of the bases and then represent x in terms of b_0 and b_1.

note: A change of basis simply looks at x from a "different perspective." B^{−1} transforms x from the standard basis to our new basis, {b_0, b_1}. Notice that this is a totally mechanical procedure.

1.8.4 Extending the Dimension and Space
We can also extend all these ideas past just R² and look at them in R^n and C^n. This procedure extends naturally to higher (> 2) dimensions. Given a basis {b_0, b_1, . . . , b_{n−1}} for R^n, we want to find {α_0, α_1, . . . , α_{n−1}} such that
x = α_0 b_0 + α_1 b_1 + . . . + α_{n−1} b_{n−1}   (1.41)
Again, we will set up a basis matrix
B = ( b_0  b_1  b_2  . . .  b_{n−1} )
where the columns equal the basis vectors and it will always be an n×n matrix (although the above matrix does not appear to be square since we left terms in vector notation). We can then proceed to rewrite (1.39):
x = ( b_0  b_1  . . .  b_{n−1} ) (α_0, . . . , α_{n−1})^T = Bα
and
α = B^{−1} x

1.9 Introduction to Fourier Analysis

1.9.1 Fourier's Daring Leap
Fourier postulated around 1807 that any periodic signal (equivalently finite length signal) can be built up as an infinite linear combination of harmonic sinusoidal waves.

1.9.1.1
i.e. Given the collection
B = {e^{j(2π/T)nt}}_{n=−∞}^{∞}   (1.42)
any
f(t) ∈ L²[0, T)   (1.43)
can be approximated arbitrarily closely by
f(t) = Σ_{n=−∞}^{∞} C_n e^{j(2π/T)nt}   (1.44)

Now, the issue of exact convergence did bring Fourier much criticism from the French Academy of Science (Laplace, Lagrange, Monge and LaCroix comprised the review committee) for several years after its presentation in 1807. It was not resolved for almost a century, and its resolution is interesting and important to understand from a practical viewpoint. See more in the section on Gibbs Phenomena.

Fourier analysis is fundamental to understanding the behavior of signals and systems. This is a result of the fact that sinusoids are eigenfunctions of linear, time-invariant (LTI) systems. This is to say that if we pass any particular sinusoid through an LTI system, we get a scaled version of that same sinusoid on the output. Then, since Fourier analysis allows us to redefine the signals in terms of sinusoids, all we need to do is determine how any given system affects all possible sinusoids (its transfer function) and we have a complete understanding of the system. Furthermore, since we are able to define the passage of sinusoids through a system as multiplication of that sinusoid by the transfer function at the same frequency, we can convert the passage of any signal through a system from convolution (in time) to multiplication (in frequency). These ideas are what give Fourier analysis its power.

Now, after hopefully having sold you on the value of this method of analysis, we must examine exactly what we mean by Fourier analysis. The four Fourier transforms that comprise this analysis are the Fourier Series, Continuous-Time Fourier Transform (Section 1.10), Discrete-Time Fourier Transform (Section 1.11) and Discrete Fourier Transform. For this document, we will view the Laplace Transform and Z-Transform (Section 3.3) as simply extensions of the CTFT and DTFT respectively. All of these transforms act essentially the same way, by converting a signal in time to an equivalent signal in frequency (sinusoids). However, depending on the nature of a specific signal (i.e., whether it is finite- or infinite-length and whether it is discrete- or continuous-time) there is an appropriate transform to convert the signal into the frequency domain. Below is a table of the four Fourier transforms and when each is appropriate. It also includes the relevant convolution for the specified space.

20 This content is available online at <http://cnx.org/content/m10096/2.12/>.
21 http://www-groups.dcs.st-and.ac.uk/history/Mathematicians/Fourier.html

Table of Fourier Representations

Transform                           Time Domain     Frequency Domain   Convolution
Continuous-Time Fourier Series      L²([0, T))      ℓ²(Z)              Continuous-Time Circular
Continuous-Time Fourier Transform   L²(R)           L²(R)              Continuous-Time Linear
Discrete-Time Fourier Transform     ℓ²(Z)           L²([0, 2π))        Discrete-Time Linear
Discrete Fourier Transform          ℓ²([0, N−1])    ℓ²([0, N−1])       Discrete-Time Circular

Table 1.3
1.10 Continuous Time Fourier Transform (CTFT)

1.10.1 Introduction
In this module, we will derive an expansion for any arbitrary continuous-time function, and in doing so, derive the Continuous Time Fourier Transform (CTFT).

22 "Eigenfunctions of LTI Systems" <http://cnx.org/content/m10500/latest/>
23 "System Classifications and Properties" <http://cnx.org/content/m10084/latest/>
24 "Transfer Functions" <http://cnx.org/content/m0028/latest/>
25 "Properties of Continuous Time Convolution" <http://cnx.org/content/m10088/latest/>
26 "Continuous-Time Fourier Series (CTFS)" <http://cnx.org/content/m10097/latest/>
27 "Discrete Fourier Transform" <http://cnx.org/content/m0502/latest/>
28 "The Laplace Transform" <http://cnx.org/content/m10110/latest/>
29 This content is available online at <http://cnx.org/content/m10098/2.16/>.

Since complex exponentials are eigenfunctions of linear time-invariant (LTI) systems, calculating the output of an LTI system H given e^{st} as an input amounts to simple multiplication, where H(s) ∈ C is the eigenvalue corresponding to s. As shown in the figure, a simple exponential input would yield the output
y(t) = H(s) e^{st}   (1.45)
Using this and the fact that H is linear, calculating y(t) for combinations of complex exponentials is also straightforward:
c_1 e^{s_1 t} + c_2 e^{s_2 t} → c_1 H(s_1) e^{s_1 t} + c_2 H(s_2) e^{s_2 t}
Σ_n c_n e^{s_n t} → Σ_n c_n H(s_n) e^{s_n t}
The action of H on an input such as those in the two equations above is easy to explain. H independently scales each exponential component e^{s_n t} by a different complex number H(s_n) ∈ C. As such, if we can write a function f(t) as a combination of complex exponentials it allows us to easily calculate the output of a system.
Now, we will look to use the power of complex exponentials to see how we may represent arbitrary signals in terms of a set of simpler functions by superposition of a number of complex exponentials. Below we will present the Continuous-Time Fourier Transform (CTFT), commonly referred to as just the Fourier Transform (FT). Because the CTFT deals with nonperiodic signals, we must find a way to include all real frequencies in the general equations. For the CTFT we simply utilize integration over real numbers rather than summation over integers in order to express the aperiodic signals.

1.10.2 Fourier Transform Synthesis
Joseph Fourier demonstrated that an arbitrary s(t) can be written as a linear combination of harmonic complex sinusoids
s(t) = Σ_{n=−∞}^{∞} c_n e^{jω₀nt}   (1.46)
where ω₀ = 2π/T is the fundamental frequency. For almost all s(t) of practical interest, there exists c_n to make (1.46) true. If s(t) is finite energy (s(t) ∈ L²[0, T]), then the equality in (1.46) holds in the sense of energy convergence; if s(t) is continuous, then (1.46) holds pointwise. Also, if s(t) meets some mild conditions (the Dirichlet conditions), then (1.46) holds pointwise everywhere except at points of discontinuity.
The c_n - called the Fourier coefficients - tell us "how much" of the sinusoid e^{jω₀nt} is in s(t). The formula shows s(t) as a sum of complex exponentials, each of which is easily processed by an LTI system (since it is an eigenfunction of every LTI system). Mathematically, it tells us that the set of complex exponentials {e^{jω₀nt}, n ∈ Z} form a basis for the space of T-periodic continuous time functions.

1.10.2.1 Equations
Now, in order to take this useful tool and apply it to arbitrary non-periodic signals, we will have to delve deeper into the use of the superposition principle. Let s_T(t) be a periodic signal having period T. We want to consider what happens to this signal's spectrum as the period goes to infinity. We denote the spectrum for any assumed value of the period by c_n(T). We calculate the spectrum according to the Fourier formula for a periodic signal, known as the Fourier Series (for more on this derivation, see the section on Fourier Series):
c_n = (1/T) ∫₀^T s(t) exp(−jω₀nt) dt   (1.47)
where ω₀ = 2π/T and where we have used a symmetric placement of the integration interval about the origin for subsequent derivational convenience. We vary the frequency index n proportionally as we increase the period. Define
S_T(f) ≡ T c_n = ∫₀^T s_T(t) exp(−jω₀nt) dt   (1.48)
making the corresponding Fourier Series
s_T(t) = Σ_{n=−∞}^{∞} (1/T) S_T(f) exp(jω₀nt)   (1.49)
As the period increases, the spectral lines become closer together, becoming a continuum. Therefore,
lim_{T→∞} s_T(t) ≡ s(t) = ∫_{−∞}^{∞} S(f) exp(j2πft) df   (1.50)
with
S(f) = ∫_{−∞}^{∞} s(t) exp(−j2πft) dt   (1.51)

30 "Continuous Time Complex Exponential" <http://cnx.org/content/m10060/latest/>
31 "Eigenfunctions of LTI Systems" <http://cnx.org/content/m10500/latest/>
32 http://www-groups.dcs.st-and.ac.uk/history/Mathematicians/Fourier.html

Continuous-Time Fourier Transform
F(Ω) = ∫_{−∞}^{∞} f(t) e^{−iΩt} dt   (1.52)

Inverse CTFT
f(t) = (1/2π) ∫_{−∞}^{∞} F(Ω) e^{iΩt} dΩ   (1.53)

warning: It is not uncommon to see the above formula written slightly differently. One of the most common differences is the way that the exponential is written. The above equations use the radial frequency variable Ω in the exponential, where Ω = 2πf, but it is also common to include the more explicit expression, i2πft, in the exponential. Click here for an overview of the notation used in Connexions' DSP modules.

Example 1.17
We know from Euler's formula that
cos(ωt) + sin(ωt) = ((1 − j)/2) e^{jωt} + ((1 + j)/2) e^{−jωt}

33 "DSP

notation" <http://cnx.org/content/m10161/latest/>

Available for free at Connexions <http://cnx.org/content/col10360/1.4>

37

1.10.3 CTFT Definition Demonstration

Figure 1.14: Interact (when online) with a Mathematica CDF demonstrating the Continuous Time Fourier Transform. To download, right-click and save as .cdf.

1.10.4 Example Problems

Exercise 1.10.1                                  (Solution on p. 48.)
Find the Fourier Transform (CTFT) of the function
f(t) = { e^{−(αt)} if t ≥ 0
       { 0         otherwise   (1.54)

Exercise 1.10.2                                  (Solution on p. 48.)
Find the inverse Fourier transform of the ideal lowpass filter defined by
X(Ω) = { 1 if |Ω| ≤ M
       { 0 otherwise   (1.55)

1.10.5 Fourier Transform Summary
Because complex exponentials are eigenfunctions of LTI systems, it is often useful to represent signals using a set of complex exponentials as a basis. The continuous time Fourier series synthesis formula expresses a continuous time, periodic function as the sum of continuous time, discrete frequency complex exponentials:
f(t) = Σ_{n=−∞}^{∞} c_n e^{jω₀nt}   (1.56)
The continuous time Fourier series analysis formula gives the coefficients of the Fourier series expansion:
c_n = (1/T) ∫₀^T f(t) e^{−jω₀nt} dt   (1.57)
In both of these equations ω₀ = 2π/T is the fundamental frequency.
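As a numerical sanity check (an added sketch, not part of the original module), we can approximate the CTFT integral (1.52) for the one-sided exponential of Exercise 1.10.1 and compare against the closed-form answer 1/(α + iΩ) derived in the solutions; the truncation length and step size below are arbitrary choices:

import numpy as np

alpha, Omega = 2.0, 3.0
dt = 1e-4
t = np.arange(0.0, 50.0, dt)              # truncate the infinite integral
f = np.exp(-alpha * t)

F_num = np.sum(f * np.exp(-1j * Omega * t)) * dt   # Riemann approximation
F_ref = 1.0 / (alpha + 1j * Omega)
assert abs(F_num - F_ref) < 1e-3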

1.11 Discrete Time Fourier Transform (DTFT)

1.11.1 Introduction
In this module, we will derive an expansion for arbitrary discrete-time functions, and in doing so, derive the Discrete Time Fourier Transform (DTFT).
Since complex exponentials are eigenfunctions of linear time-invariant (LTI) systems, calculating the output of an LTI system H given e^{iωn} as an input amounts to simple multiplication, where ω₀ = 2πk/N, and H[k] ∈ C is the eigenvalue corresponding to k. As shown in the figure, a simple exponential input would yield the output
y[n] = H[k] e^{iωn}   (1.58)

Figure 1.15: Simple LTI system.

Using this and the fact that H is linear, calculating y[n] for combinations of complex exponentials is also straightforward:
c_1 e^{iω_1 n} + c_2 e^{iω_2 n} → c_1 H[k_1] e^{iω_1 n} + c_2 H[k_2] e^{iω_2 n}
Σ_l c_l e^{iω_l n} → Σ_l c_l H[k_l] e^{iω_l n}
The action of H on an input such as those in the two equations above is easy to explain. H independently scales each exponential component e^{iω_l n} by a different complex number H[k_l] ∈ C. As such, if we can write a signal y[n] as a combination of complex exponentials it allows us to easily calculate the output of a system.

34 This content is available online at <http://cnx.org/content/m10108/2.18/>.
35 "Continuous Time Complex Exponential" <http://cnx.org/content/m10060/latest/>
36 "Eigenfunctions of LTI Systems" <http://cnx.org/content/m10500/latest/>

Now, we will look to use the power of complex exponentials to see how we may represent arbitrary signals in terms of a set of simpler functions by superposition of a number of complex exponentials. Below we will present the Discrete-Time Fourier Transform (DTFT). Because the DTFT deals with nonperiodic signals, we must find a way to include all real frequencies in the general equations. For the DTFT, frequency becomes a continuous variable, and we use an infinite summation over all integer time indices in order to express the aperiodic signals.

1.11.2 DTFT synthesis
It can be demonstrated that an arbitrary discrete-time N-periodic function f[n] can be written as a linear combination of harmonic complex sinusoids
f[n] = Σ_{k=0}^{N−1} c_k e^{iω₀kn}   (1.59)
where ω₀ = 2π/N is the fundamental frequency. For almost all f[n] of practical interest, there exists c_n to make (1.59) true. If f[n] is finite energy (f[n] ∈ L²[0, N]), then the equality in (1.59) holds in the sense of energy convergence; with discrete-time signals, there are no concerns for divergence as there are with continuous-time signals.
The c_n - called the Fourier coefficients - tell us "how much" of the sinusoid e^{jω₀kn} is in f[n]. The formula shows f[n] as a sum of complex exponentials, each of which is easily processed by an LTI system (since it is an eigenfunction of every LTI system). Mathematically, it tells us that the set of complex exponentials {e^{jω₀kn}, k ∈ Z} form a basis for the space of N-periodic discrete time functions.

1.11.2.1 Equations
Now, in order to take this useful tool and apply it to arbitrary non-periodic signals, we will have to delve deeper into the use of the superposition principle. Let s_T(t) be a periodic signal having period T. We want to consider what happens to this signal's spectrum as the period goes to infinity. We denote the spectrum for any assumed value of the period by c_n(T). We calculate the spectrum according to the Fourier formula for a periodic signal, known as the Fourier Series (for more on this derivation, see the section on Fourier Series):
c_n = (1/T) ∫₀^T s(t) exp(−jω₀nt) dt   (1.60)
where ω₀ = 2π/T and where we have used a symmetric placement of the integration interval about the origin for subsequent derivational convenience. We vary the frequency index n proportionally as we increase the period. Define
S_T(f) ≡ T c_n = ∫₀^T s_T(t) exp(−jω₀nt) dt   (1.61)
making the corresponding Fourier Series
s_T(t) = Σ_{n=−∞}^{∞} (1/T) S_T(f) exp(jω₀nt)   (1.62)
As the period increases, the spectral lines become closer together, becoming a continuum. Therefore,
lim_{T→∞} s_T(t) ≡ s(t) = ∫_{−∞}^{∞} S(f) exp(j2πft) df   (1.63)
with
S(f) = ∫_{−∞}^{∞} s(t) exp(−j2πft) dt   (1.64)

Discrete-Time Fourier Transform
F(ω) = Σ_{n=−∞}^{∞} f[n] e^{−iωn}   (1.65)

Inverse DTFT
f[n] = (1/2π) ∫_{2π} F(ω) e^{iωn} dω   (1.66)

warning: It is not uncommon to see the above formula written slightly differently. One of the most common differences is the way that the exponential is written. The above equations use the radial frequency variable ω in the exponential, where ω = 2πf, but it is also common to include the more explicit expression, i2πft, in the exponential. Sometimes DTFT notation is expressed as F(e^{iω}), to make it clear that it is not a CTFT (which is denoted as F(Ω)). Click here for an overview of the notation used in Connexions' DSP modules.

1.11.3 DTFT Definition Demonstration

Figure 1.16: Click on the above thumbnail image (when online) to download an interactive Mathematica Player demonstrating the Discrete Time Fourier Transform. To download, right-click and save target as .cdf.

37 "DSP

notation" <http://cnx.org/content/m10161/latest/> Available for free at Connexions <http://cnx.org/content/col10360/1.4>

41

1.11.4 DTFT Summary
Because complex exponentials are eigenfunctions of LTI systems, it is often useful to represent signals using a set of complex exponentials as a basis. The discrete time Fourier transform analysis formula takes a discrete time, aperiodic signal in the time domain and represents it in the continuous frequency domain:
F(ω) = Σ_{n=−∞}^{∞} f[n] e^{−iωn}   (1.67)
The discrete time Fourier transform synthesis formula expresses the same discrete time, aperiodic function as an integral of continuous frequency complex exponentials:
f[n] = (1/2π) ∫_{2π} F(ω) e^{iωn} dω   (1.68)
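To illustrate (an added sketch, not in the original module), the DTFT of a finite-length signal can be evaluated on a dense grid of frequencies directly from (1.65); the length-4 pulse used here is just an assumed example signal:

import numpy as np

x = np.ones(4)                          # x[n] = 1 for n = 0, 1, 2, 3
n = np.arange(len(x))
w = np.linspace(-np.pi, np.pi, 1001)    # dense frequency grid over one period

# F(w) = sum_n x[n] e^{-i w n}, evaluated for every w on the grid
F = np.exp(-1j * np.outer(w, n)) @ x
print(np.abs(F).max())                  # peak of 4 at w = 0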

1.12 DFT as a Matrix Operation

38 This content is available online at <http://cnx.org/content/m10962/2.5/>.
1.12.1 Matrix Review
Recall:

Vectors in R^N:
x = (x_0, x_1, . . . , x_{N−1})^T, x_i ∈ R
Vectors in C^N:
x = (x_0, x_1, . . . , x_{N−1})^T, x_i ∈ C

Transposition:
a. transpose:
x^T = (x_0, x_1, . . . , x_{N−1})
b. conjugate transpose:
x^H = (x_0*, x_1*, . . . , x_{N−1}*)

Inner product:
a. real:
x^T y = Σ_{i=0}^{N−1} x_i y_i
b. complex:
x^H y = Σ_{n=0}^{N−1} x_n* y_n

Matrix Multiplication:
Ax = ( a_00      a_01      ...  a_0,N−1   ) ( x_0   )   ( y_0   )
     ( a_10      a_11      ...  a_1,N−1   ) ( x_1   ) = ( y_1   )
     (  .         .         .     .       ) (  ...  )   (  ...  )
     ( a_N−1,0   a_N−1,1   ...  a_N−1,N−1 ) ( x_N−1 )   ( y_N−1 )
y_k = Σ_{n=0}^{N−1} a_kn x_n

Matrix Transposition:
A^T = ( a_00      a_10      ...  a_N−1,0   )
      ( a_01      a_11      ...  a_N−1,1   )
      (  .         .         .     .       )
      ( a_0,N−1   a_1,N−1   ...  a_N−1,N−1 )
Matrix transposition involves simply swapping the rows with columns.
A^H = (A^T)*
The above equation is the Hermitian transpose.
(A^T)_{k,n} = A_{n,k}
(A^H)_{k,n} = (A_{n,k})*

39 "Inner Products" <http://cnx.org/content/m10755/latest/>

1.12.2 Representing DFT as Matrix Operation
Now let's represent the DFT in vector-matrix notation.
x = (x[0], x[1], . . . , x[N−1])^T
X = (X[0], X[1], . . . , X[N−1])^T ∈ C^N
Here x is the vector of time samples and X is the vector of DFT coefficients. How are x and X related?
X[k] = Σ_{n=0}^{N−1} x[n] e^{−i(2π/N)kn}
where
a_kn = e^{−i(2π/N)kn} = W_N^{kn}
so
X = Wx
where X is the DFT vector, W is the matrix, and x the time domain vector, with
W_{k,n} = e^{−i(2π/N)kn}
X = W (x[0], x[1], . . . , x[N−1])^T

IDFT:
x[n] = (1/N) Σ_{k=0}^{N−1} X[k] e^{i(2π/N)nk}
where
e^{i(2π/N)nk} = (W_N^{nk})*
is the matrix Hermitian transpose. So,
x = (1/N) W^H X
where x is the time vector, (1/N) W^H is the inverse DFT matrix, and X is the DFT vector.

40 "Discrete Fourier Transform (DFT)" <http://cnx.org/content/m10249/latest/>
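As a concrete check (an added sketch, not part of the original module), we can build W explicitly in NumPy and verify both X = Wx against a library FFT and the inverse relation x = (1/N) W^H X:

import numpy as np

N = 8
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # W[k, n] = e^{-i 2 pi k n / N}

x = np.random.randn(N)
X = W @ x
assert np.allclose(X, np.fft.fft(x))           # matches the library DFT
assert np.allclose(x, (W.conj().T @ X) / N)    # x = (1/N) W^H X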

1.13 The FFT Algorithm

Definition 1.1: FFT
(Fast Fourier Transform) An efficient computational algorithm for computing the DFT.

1.13.1 The Fast Fourier Transform FFT
The DFT can be expensive to compute directly:
X[k] = Σ_{n=0}^{N−1} x[n] e^{−i2π(k/N)n},  0 ≤ k ≤ N−1
For each k, we must execute:
- N complex multiplies
- N − 1 complex adds
The total cost of direct computation of an N-point DFT is
- N² complex multiplies
- N(N − 1) complex adds
How many adds and mults of real numbers are required?
This "O(N²)" computation rapidly gets out of hand as N gets large:

N:    1   10    100      1000   10^6
N²:   1   100   10,000   10^6   10^12
Table 1.4

Figure 1.17

The FFT provides us with a much more efficient way of computing the DFT. The FFT requires only "O(N log N)" computations to compute the N-point DFT.

N:        10    100      1000   10^6
N²:       100   10,000   10^6   10^12
N log N:  10    200      3000   6 × 10^6
Table 1.5

How long is 10^12 μsec? More than 10 days! How long is 6 × 10^6 μsec?

Figure 1.18

The FFT and digital computers revolutionized DSP (1960 - 1980).

41 This content is available online at <http://cnx.org/content/m10964/2.6/>.
42 "Discrete Fourier Transform (DFT)" <http://cnx.org/content/m10249/latest/>

1.13.2 How does the FFT work?
The FFT exploits the symmetries of the complex exponentials
W_N^{kn} = e^{−i(2π/N)kn}
The W_N^{kn} are called "twiddle factors".

Rule 1.1: Complex Conjugate Symmetry
W_N^{k(N−n)} = W_N^{−kn} = (W_N^{kn})*
i.e., e^{−i2π(k/N)(N−n)} = e^{i2π(k/N)n} = (e^{−i2π(k/N)n})*

Rule 1.2: Periodicity in n and k
W_N^{kn} = W_N^{k(N+n)} = W_N^{(k+N)n}
i.e., e^{−i(2π/N)kn} = e^{−i(2π/N)k(N+n)} = e^{−i(2π/N)(k+N)n}
where W_N = e^{−i(2π/N)}.

1.13.3 Decimation in Time FFT
- Just one of many different FFT algorithms
- The idea is to build a DFT out of smaller and smaller DFTs by decomposing x[n] into smaller and smaller subsequences.
- Assume N = 2^m (a power of 2)

1.13.3.1 Derivation
N is even, so we can compute X[k] by separating x[n] into two subsequences, each of length N/2:
x[n] → { x[n], n even
       { x[n], n odd
X[k] = Σ_{n=0}^{N−1} x[n] W_N^{kn},  0 ≤ k ≤ N−1
X[k] = Σ_{n=2r} x[n] W_N^{kn} + Σ_{n=2r+1} x[n] W_N^{kn}
where 0 ≤ r ≤ N/2 − 1. So
X[k] = Σ_{r=0}^{N/2−1} x[2r] W_N^{2kr} + Σ_{r=0}^{N/2−1} x[2r+1] W_N^{(2r+1)k}
     = Σ_{r=0}^{N/2−1} x[2r] (W_N²)^{kr} + W_N^k Σ_{r=0}^{N/2−1} x[2r+1] (W_N²)^{kr}   (1.69)
where W_N² = e^{−i(2π/N)·2} = e^{−i2π/(N/2)} = W_{N/2}. So
X[k] = Σ_{r=0}^{N/2−1} x[2r] W_{N/2}^{kr} + W_N^k Σ_{r=0}^{N/2−1} x[2r+1] W_{N/2}^{kr}
where Σ_{r=0}^{N/2−1} x[2r] W_{N/2}^{kr} is the N/2-point DFT of the even samples (G[k]) and Σ_{r=0}^{N/2−1} x[2r+1] W_{N/2}^{kr} is the N/2-point DFT of the odd samples (H[k]):
X[k] = G[k] + W_N^k H[k],  0 ≤ k ≤ N−1
Decomposition of an N-point DFT as a sum of two N/2-point DFTs.

Why would we want to do this? Because it is more efficient!

note: Cost to compute an N-point DFT is approximately N² complex mults and adds.

But decomposition into two N/2-point DFTs + combination requires only
(N/2)² + (N/2)² + N = N²/2 + N
where the first part is the number of complex mults and adds for the N/2-point DFT G[k], the second part is the number of complex mults and adds for the N/2-point DFT H[k], and the third part is the number of complex mults and adds for the combination. The total is N²/2 + N complex mults and adds.

Example 1.18: Savings
For N = 1000,
N² = 10^6
N²/2 + N = 10^6/2 + 1000
Because 1000 is small compared to 500,000,
N²/2 + N ≈ 10^6/2

So why stop here?! Keep decomposing. Break each of the N/2-point DFTs into two N/4-point DFTs, etc.
We can keep decomposing:
N/2^1, N/2^2, . . . , N/2^{m−1}, N/2^m = 1
where m = log₂N.
Computational cost: replacing the N-pt DFT with two N/2-pt DFTs reduces the cost to
2 (N/2)² + N = N²/2 + N
So replacing each N/2-pt DFT with two N/4-pt DFTs will reduce the cost to
4 (N/4)² + 2N = N²/2² + 2N
As we keep going, for p = {3, 4, . . . , m}, the cost becomes
N²/2^p + pN
With p = m = log₂N we get the total cost
N²/2^{log₂N} + N log₂N = N + N log₂N
since N²/2^{log₂N} = N²/N = N. So N + N log₂N is the total number of complex adds and mults.
For large N, cost ≈ N log₂N, or "O(N log₂N)", since N ≪ N log₂N for large N.

Figure 1.19: N = 8 point FFT. Summing nodes and W_N^k twiddle multiplication factors.

note: Weird order of time samples

Figure 1.20: This is called "butterflies."
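The following recursive radix-2 decimation-in-time FFT (a teaching sketch added here, not from the original module; it assumes N is a power of 2) implements exactly the split X[k] = G[k] + W_N^k H[k]:

import numpy as np

def fft_dit(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    G = fft_dit(x[0::2])                 # N/2-point DFT of even samples
    H = fft_dit(x[1::2])                 # N/2-point DFT of odd samples
    k = np.arange(N // 2)
    W = np.exp(-2j * np.pi * k / N)      # twiddle factors W_N^k
    # X[k] = G[k] + W_N^k H[k]; the second half follows from periodicity
    return np.concatenate([G + W * H, G - W * H])

x = np.random.randn(8)
assert np.allclose(fft_dit(x), np.fft.fft(x))

The second half of the output uses W_N^{k+N/2} = −W_N^k, which is precisely the twiddle-factor symmetry noted in Rules 1.1 and 1.2 above.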

Solutions to Exercises in Chapter 1

Solution to Exercise 1.8.1 (p. 32)
In order to represent x in terms of b_0 and b_1 we will follow the same steps we used in the above example.
B = ( 1  3 )        B^{−1} = ( 0    1/2  )
    ( 2  0 )                 ( 1/3  −1/6 )
α = B^{−1} x = ( 1   )
               ( 2/3 )
And now we can write x in terms of b_0 and b_1:
x = b_0 + (2/3) b_1
And we can easily substitute in our known values of b_0 and b_1 to verify our results.

Solution to Exercise 1.10.1 (p. 37)
In order to calculate the Fourier transform, all we need to use is (1.52) (Continuous-Time Fourier Transform), complex exponentials, and basic calculus.
F(Ω) = ∫_{−∞}^{∞} f(t) e^{−iΩt} dt
     = ∫₀^{∞} e^{−(αt)} e^{−iΩt} dt
     = ∫₀^{∞} e^{−t(α + iΩ)} dt
     = 0 − (−1/(α + iΩ))   (1.70)

F(Ω) = 1/(α + iΩ)   (1.71)

Solution to Exercise 1.10.2 (p. 37)
Here we will use (1.53) (Inverse CTFT) to find the inverse FT, given that t ≠ 0:
x(t) = (1/2π) ∫_{−M}^{M} e^{iΩt} dΩ
     = (1/2π) (1/(it)) e^{iΩt} |_{Ω=−M}^{M}
     = (1/(πt)) sin(Mt)   (1.72)

x(t) = (M/π) sinc(Mt)   (1.73)

43 "Continuous Time Complex Exponential" <http://cnx.org/content/m10060/latest/>

Chapter 2

Sampling and Frequency Analysis

2.1 Introduction

Contents of the Sampling chapter:
- Introduction (current module)
- Proof (Section 2.2)
- Illustrations (Section 2.3)
- Matlab Example (Section 2.4)
- Hold operation
- System view (Section 2.5)
- Aliasing applet
- Exercises
- Table of formulas

2.1.1 Why sample?
This section introduces sampling. Sampling is the necessary foundation for all digital signal processing and communication. Sampling can be defined as the process of measuring an analog signal at distinct points. Digital representation of analog signals offers advantages in terms of
- robustness towards noise, meaning we can send more bits/s
- use of flexible processing equipment, in particular the computer
- more reliable processing equipment
- easier to adapt complex algorithms

1 This content is available online at <http://cnx.org/content/m11419/1.29/>.
2 "Hold operation" <http://cnx.org/content/m11458/latest/>
3 "Aliasing Applet" <http://cnx.org/content/m11448/latest/>
4 "Exercises" <http://cnx.org/content/m11442/latest/>
5 "Table of Formulas" <http://cnx.org/content/m11450/latest/>

2.1.2 Claude E. Shannon

Figure 2.1: Claude Elwood Shannon (1916-2001)

Claude Shannon has been called the father of information theory, mainly due to his landmark papers on the "Mathematical theory of communication". Harry Nyquist was the first to state the sampling theorem in 1928, but it was not proven until Shannon proved it 21 years later in the paper "Communications in the presence of noise".
2.1.3 Notation
In this chapter we will be using the following notation

Original analog signal Sampling frequency Sampling interval

x (t) Fs

1 Ts (Note that: Fs = T ) s Sampled signal xs (n). (Note that xs (n) = x (nTs )) Real angular frequency Digital angular frequency . (Note that: = Ts )

2.1.4 The Sampling Theorem


note:

When sampling an analog signal the sampling frequency must be greater than twice the

highest frequency component of the analog signal to be able to reconstruct the original signal from the sampled version.

6 http://www.research.att.com/njas/doc/ces5.html
7 http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf
8 http://www.wikipedia.org/wiki/Harry_Nyquist
9 http://www.stanford.edu/class/ee104/shannonpaper.pdf

2.1.5
Finished? Have a look at: Proof (Section 2.2); Illustrations (Section 2.3); Matlab Example (Section 2.4); Aliasing applet; Hold operation; System view (Section 2.5); Exercises

2.2 Proof
note: In order to recover the signal x(t) from its samples exactly, it is necessary to sample x(t) at a rate greater than twice its highest frequency component.

2.2.1 Introduction
As mentioned earlier (p. 49), sampling is the necessary foundation when we want to apply digital signal processing on analog signals. Here we present the proof of the sampling theorem. The proof is divided in two parts. First we find an expression for the spectrum of the signal resulting from sampling the original signal x(t). Next we show that the signal x(t) can be recovered from the samples. Often it is easier using the frequency domain when carrying out a proof, and this is also the case here.

Key points in the proof
- We find an equation (2.8) for the spectrum of the sampled signal
- We find a simple method to reconstruct (2.14) the original signal
- The sampled signal has a periodic spectrum...
- ...and the period is 2πF_s
2.2.2 Proof part 1 - Spectral considerations


By sampling signal
14

x (t)

every

Ts

second we obtain

xs (n). 1 2

The inverse fourier transform of this time discrete

is

xs (n) =

Xs ei ein d

(2.1)

For convenience we express the equation in terms of the real angular frequency obtain

using

= Ts .

We then

xs ( n) =

Ts 2

Ts Ts

Xs eiTs eiTs n d

(2.2)

The inverse fourier transform of a continuous signal is

x (t) =
From this equation we nd an expression for

1 2

X (i) eit d

(2.3)

x (nTs ) 1 2

x (nTs ) =

X (i) einTs d

(2.4)

10 "Aliasing Applet" <http://cnx.org/content/m11448/latest/> 11 "Hold operation" <http://cnx.org/content/m11458/latest/> 12 "Exercises" <http://cnx.org/content/m11442/latest/> 13 This content is available online at <http://cnx.org/content/m11423/1.27/>. 14 "Discrete time signals" <http://cnx.org/content/m11476/latest/>

Available for free at Connexions <http://cnx.org/content/col10360/1.4>

52

CHAPTER 2.

SAMPLING AND FREQUENCY ANALYSIS

To account for the dierence in region of integration we split the integration in (2.4) into subintervals of length

2 Ts and then take the sum over the resulting integrals to obtain the complete area.

1 x (nTs ) = 2

(2k+1) Ts (2k1) Ts

X (i) einTs d

(2.5)

k=

Then we change the integration variable, setting

=+

2k Ts

x (nTs ) =

1 2

Ts Ts

X i +

k=

2 k Ts

ei(+

2k Ts

)nTs d
Ts Ts

(2.6)

We obtain the nal form by observing that

ei2kn = 1,

reinserting

and multiplying by

Ts x (nTs ) = 2
To make

Ts Ts

k=

1 2 k X i + Ts Ts

einTs d

(2.7)

xs (n) = x (nTs )

for all values of

n,

the integrands in (2.2) and (2.7) have to agreee, that is

Xs eiTs =

1 Ts

X i +
k=

2k Ts

(2.8)

This is a central result. We see that the digital spectrum consists of a sum of shifted versions of the original, analog spectrum. Observe the periodicity! We can also express this relation in terms of the digital angular frequency

= Ts
(2.9)

Xs ei =

1 Ts

X i
k=

+ 2 k Ts

This concludes the rst part of the proof. Now we want to nd a reconstruction formula, so that we can recover

x (t)

from

xs (n).

2.2.3 Proof part II - Signal reconstruction
For a bandlimited (Figure 2.3) signal the inverse Fourier transform is
x(t) = (1/2π) ∫_{−π/T_s}^{π/T_s} X(iΩ) e^{iΩt} dΩ   (2.10)
In the interval we are integrating over we have X(iΩ) = T_s X_s(e^{iΩT_s}). Substituting this relation into (2.10) we get
x(t) = (T_s/2π) ∫_{−π/T_s}^{π/T_s} X_s(e^{iΩT_s}) e^{iΩt} dΩ   (2.11)
Using the DTFT relation for X_s(e^{iΩT_s}) we have
x(t) = (T_s/2π) ∫_{−π/T_s}^{π/T_s} Σ_{n=−∞}^{∞} x_s(n) e^{−iΩnT_s} e^{iΩt} dΩ   (2.12)

15 "Table of Formulas" <http://cnx.org/content/m11450/latest/>

Interchanging integration and summation (under the assumption of convergence) leads to
x(t) = (T_s/2π) Σ_{n=−∞}^{∞} x_s(n) ∫_{−π/T_s}^{π/T_s} e^{iΩ(t − nT_s)} dΩ   (2.13)
Finally we perform the integration and arrive at the important reconstruction formula
x(t) = Σ_{n=−∞}^{∞} x_s(n) [ sin((π/T_s)(t − nT_s)) / ((π/T_s)(t − nT_s)) ]   (2.14)
(Thanks to R. Loos for pointing out an error in the proof.)

2.2.4 Summary

note: X_s(e^{iΩT_s}) = (1/T_s) Σ_{k=−∞}^{∞} X(i(Ω + 2πk/T_s))

note: x(t) = Σ_{n=−∞}^{∞} x_s(n) [ sin((π/T_s)(t − nT_s)) / ((π/T_s)(t − nT_s)) ]
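As an added illustration (not part of the original module), the reconstruction formula (2.14) can be tested numerically: sample a bandlimited signal above twice its highest frequency, then rebuild it on a fine grid from the samples alone. The signal, rates, and truncation length below are assumed for the demonstration:

import numpy as np

F0, Fs = 3.0, 10.0                 # 3 Hz sinusoid, sampled at 10 Hz (> 2*F0)
Ts = 1.0 / Fs
n = np.arange(-200, 201)           # truncate the infinite sum in (2.14)
xs = np.sin(2 * np.pi * F0 * n * Ts)

t = np.linspace(-1.0, 1.0, 1001)
# x(t) = sum_n xs(n) sinc((t - n Ts)/Ts); np.sinc(u) = sin(pi u)/(pi u)
x_rec = np.array([np.sum(xs * np.sinc((ti - n * Ts) / Ts)) for ti in t])
assert np.allclose(x_rec, np.sin(2 * np.pi * F0 * t), atol=1e-2)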

2.2.5
Go to Introduction (Section 2.1); Illustrations (Section 2.3); Matlab Example (Section 2.4); Hold operation; Aliasing applet; System view (Section 2.5); Exercises
2.3 Illustrations
In this module we illustrate the processes involved in sampling and reconstruction. To see how all these processes work together as a whole, take a look at the system view (Section 2.5). In Sampling and reconstruction with Matlab (Section 2.4) we provide a Matlab script for download. The Matlab script shows the process of sampling and reconstruction live.

2.3.1 Basic examples


Example 2.1
To sample an analog signal with 3000 Hz as the highest frequency component requires sampling at 6000 Hz or above.

Example 2.2
The sampling theorem can also be applied in two dimensions, i.e. for image analysis. A 2D sampling theorem has a simple physical interpretation in image analysis: Choose the sampling interval such that it is less than or equal to half of the smallest interesting detail in the image.

2.3.2 The process of sampling
We start off with an analog signal. This can for example be the sound coming from your stereo at home or your friend talking. The signal is then sampled uniformly. Uniform sampling implies that we sample every T_s seconds. In Figure 2.2 we see an analog signal. The analog signal has been sampled at times t = nT_s.

16 "Hold operation" <http://cnx.org/content/m11458/latest/>
17 "Aliasing Applet" <http://cnx.org/content/m11448/latest/>
18 "Exercises" <http://cnx.org/content/m11442/latest/>
19 This content is available online at <http://cnx.org/content/m11443/1.33/>.

Figure 2.2: Analog signal, samples are marked with dots.

In signal processing it is often more convenient and easier to work in the frequency domain. So let's look at the signal in the frequency domain, Figure 2.3. For illustration purposes we take the frequency content of the signal as a triangle. (If you Fourier transform the signal in Figure 2.2 you will not get such a nice triangle.)

Figure 2.3: The spectrum X(iΩ).

Notice that the signal in Figure 2.3 is bandlimited. We can see that the signal is bandlimited because X(iΩ) is zero outside the interval [−Ω_g, Ω_g]. Equivalently we can state that the signal has no angular frequencies above Ω_g, corresponding to no frequencies above F_g = Ω_g/2π.
Now let's take a look at the sampled signal in the frequency domain. While proving (Section 2.2) the sampling theorem we found that the spectrum of the sampled signal consists of a sum of shifted versions of the analog spectrum. Mathematically this is described by the following equation:
X_s(e^{iΩT_s}) = (1/T_s) Σ_{k=−∞}^{∞} X(i(Ω + 2πk/T_s))   (2.15)
2.3.2.1 Sampling fast enough
In Figure 2.4 we show the result of sampling x(t) according to the sampling theorem (Section 2.1.4: The Sampling Theorem). This means that when sampling the signal in Figure 2.2/Figure 2.3 we use F_s ≥ 2F_g. Observe in Figure 2.4 that we have the same spectrum as in Figure 2.3 for Ω ∈ [−Ω_g, Ω_g], except for the scaling factor 1/T_s. This is a consequence of the sampling frequency. As mentioned in the proof (Key points in the proof, p. 51) the spectrum of the sampled signal is periodic with period 2πF_s = 2π/T_s.

Figure 2.4: The spectrum X_s. Sampling frequency is OK.

So now we are, according to the sampling theorem (Section 2.1.4: The Sampling Theorem), able to reconstruct the original signal exactly. How we can do this will be explored further down under reconstruction (Section 2.3.3: Reconstruction). But first we will take a look at what happens when we sample too slowly.

2.3.2.2 Sampling too slowly
If we sample x(t) too slowly, that is F_s < 2F_g, we will get overlap between the repeated spectra, see Figure 2.5. According to (2.15) the resulting spectrum is the sum of these. This overlap gives rise to the concept of aliasing.

note: If the sampling frequency is less than twice the highest frequency component, then frequencies in the original signal that are above half the sampling rate will be "aliased" and will appear in the resulting signal as lower frequencies.

The consequence of aliasing is that we cannot recover the original signal, so aliasing has to be avoided. Sampling too slowly will produce a sequence x_s(n) that could have originated from a number of different signals. So there is no chance of recovering the original signal. To learn more about aliasing, take a look at this module. (Includes an applet for demonstration!)

20 "Aliasing Applet" <http://cnx.org/content/m11448/latest/>

Figure 2.5: The spectrum X_s. Sampling frequency is too low.

To avoid aliasing we have to sample fast enough. But if we can't sample fast enough (possibly due to costs) we can include an anti-aliasing filter. This will not enable us to get an exact reconstruction but can still be a good solution.

note: Typically a low-pass filter that is applied before sampling to ensure that no components with frequencies greater than half the sample frequency remain.

Example 2.3 The stagecoach effect
In older western movies you can observe aliasing on a stagecoach when it starts to roll. At first the spokes appear to turn forward, but as the stagecoach increases its speed the spokes appear to turn backward. This comes from the fact that the sampling rate, here the number of frames per second, is too low. We can view each frame as a sample of an image that is changing continuously in time. (Applet illustrating the stagecoach effect)
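As a small numerical illustration of aliasing (added here, not part of the original module): a 7 kHz sinusoid sampled at 10 kHz (below the required 14 kHz) produces exactly the same samples as a 3 kHz sinusoid; the specific frequencies are assumed for the demonstration:

import numpy as np

Fs = 10_000.0                          # sampling rate: 10 kHz
n = np.arange(50)
f_high, f_alias = 7_000.0, 3_000.0     # 7 kHz aliases to |7 - 10| = 3 kHz

s_high  = np.cos(2 * np.pi * f_high  * n / Fs)
s_alias = np.cos(2 * np.pi * f_alias * n / Fs)
assert np.allclose(s_high, s_alias)    # indistinguishable after sampling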

2.3.3 Reconstruction
Given the signal in Figure 2.4 we want to recover the original signal, but the question is how? When there is no overlapping in the spectrum, the spectral component given by k = 0 (see (2.15)) is equal to the spectrum of the analog signal. This offers an opportunity to use a simple reconstruction process. Remember what you have learned about filtering. What we want is to change the signal in Figure 2.4 into that of Figure 2.3. To achieve this we have to remove all the extra components generated in the sampling process. To remove the extra components we apply an ideal analog low-pass filter as shown in Figure 2.6. As we see, the ideal filter is rectangular in the frequency domain. A rectangle in the frequency domain corresponds to a sinc function in the time domain (and vice versa).

21 http://flowers.ofthenight.org/wagonWheel/wagonWheel.html
22 http://ccrma-www.stanford.edu/jos/Interpolation/sinc_function.html

Figure 2.6: H(iΩ) The ideal reconstruction filter.

Then we have reconstructed the original spectrum, and as we know, if two signals are identical in the frequency domain, they are also identical in the time domain. End of reconstruction.

2.3.4 Conclusions
The Shannon sampling theorem requires that the input signal prior to sampling is band-limited to at most half the sampling frequency. Under this condition the samples give an exact signal representation. It is truly remarkable that such a broad and useful class of signals can be represented that easily! We also looked into the problem of reconstructing the signals from their samples. Again the simplicity of the principle is striking: linear filtering by an ideal low-pass filter will do the job. However, the ideal filter is impossible to create, but that is another story...

2.3.5
Go to? Introduction (Section 2.1); Proof (Section 2.2); Illustrations (Section 2.3); Matlab Example (Section 2.4); Aliasing applet
23

; Hold operation

24

; System view (Section 2.5); Exercises

25

2.4 Sampling and Reconstruction with Matlab

2.4.1 Matlab files
Samprecon.m

2.4.2
Introduction (Section 2.1); Proof (Section 2.2); Illustrations (Section 2.3); Aliasing applet; Hold operation; System view (Section 2.5); Exercises

23 "Aliasing Applet" <http://cnx.org/content/m11448/latest/>
24 "Hold operation" <http://cnx.org/content/m11458/latest/>
25 "Exercises" <http://cnx.org/content/m11442/latest/>
26 This content is available online at <http://cnx.org/content/m11549/1.9/>.
27 http://cnx.rice.edu/content/m11549/latest/Samprecon.m
28 "Aliasing Applet" <http://cnx.org/content/m11448/latest/>
29 "Exercises" <http://cnx.org/content/m11442/latest/>

2.5 Systems View of Sampling and Reconstruction

2.5.1 Ideal reconstruction system
Figure 2.7 shows the ideal reconstruction system based on the results of the sampling theorem proof (Section 2.2). Figure 2.7 consists of a sampling device which produces a time-discrete sequence x_s(n). The reconstruction filter, h(t), is an ideal analog sinc filter, with h(t) = sinc(t/T_s). We can't apply the time-discrete sequence x_s(n) directly to the analog filter h(t). To solve this problem we turn the sequence into an analog signal using delta functions. Thus we write x_s(t) = Σ_{n=−∞}^{∞} x_s(n) δ(t − nT_s).

Figure 2.7: Ideal reconstruction system

But when will the system produce an output x̂(t) = x(t)? According to the sampling theorem (Section 2.1.4: The Sampling Theorem) we have x̂(t) = x(t) when the sampling frequency, F_s, is at least twice the highest frequency component of x(t).

x (t).

2.5.2 Ideal system including anti-aliasing
To be sure that the reconstructed signal is free of aliasing it is customary to apply a lowpass filter, an anti-aliasing filter (p. 57), before sampling as shown in Figure 2.8.

Figure 2.8: Ideal reconstruction system with anti-aliasing filter (p. 57)

Again we ask the question of when the system will produce an output x̂(t) = s(t). If the signal is entirely confined within the passband of the lowpass filter, we will get perfect reconstruction if F_s is high enough. But if the anti-aliasing filter removes the "higher" frequencies (which in fact is the job of the anti-aliasing filter), we will never be able to exactly reconstruct the original signal, s(t). If we sample fast enough we can reconstruct x(t), which in most cases is satisfying. The reconstructed signal, x̂(t), will not have aliased frequencies. This is essential for further use of the signal.

30 This content is available online at <http://cnx.org/content/m11465/1.20/>.
31 http://ccrma-www.stanford.edu/jos/Interpolation/sinc_function.html
32 "Table of Formulas" <http://cnx.org/content/m11450/latest/>
2.5.3 Reconstruction with hold operation
To make our reconstruction system realizable there are many things to look into. Among them is the fact that any practical reconstruction system must input finite length pulses into the reconstruction filter. This can be accomplished by the hold operation. To alleviate the distortion caused by the hold operator we apply the output from the hold device to a compensator. The compensation can be as accurate as we wish; this is a cost and application consideration.

Figure 2.9: More practical reconstruction system with a hold component

By the use of the hold component the reconstruction will not be exact, but as mentioned above we can get as close as we want.
2.5.4
Introduction (Section 2.1); Proof (Section 2.2); Illustrations (Section 2.3); Matlab example (Section 2.4); Hold operation; Aliasing applet; Exercises

2.6 Sampling CT Signals: A Frequency Domain Perspective

2.6.1 Understanding Sampling in the Frequency Domain
We want to relate x_c(t) directly to x[n]. Compute the CTFT of
x_s(t) = Σ_{n=−∞}^{∞} x_c(nT) δ(t − nT)   (2.16)

X_s(Ω) = ∫_{−∞}^{∞} Σ_{n=−∞}^{∞} x_c(nT) δ(t − nT) e^{−iΩt} dt
       = Σ_{n=−∞}^{∞} x_c(nT) ∫_{−∞}^{∞} δ(t − nT) e^{−iΩt} dt
       = Σ_{n=−∞}^{∞} x[n] e^{−iΩnT}
       = Σ_{n=−∞}^{∞} x[n] e^{−iωn}
       = X(ω)
where ω ≡ ΩT and X(ω) is the DTFT of x[n].

note: X_s(Ω) = (1/T) Σ_{k=−∞}^{∞} X_c(Ω − kΩ_s)

33 "Hold operation" <http://cnx.org/content/m11458/latest/>
34 "Hold operation" <http://cnx.org/content/m11458/latest/>
35 "Hold operation" <http://cnx.org/content/m11458/latest/>
36 "Aliasing Applet" <http://cnx.org/content/m11448/latest/>
37 "Exercises" <http://cnx.org/content/m11442/latest/>
38 This content is available online at <http://cnx.org/content/m10994/2.2/>.

X(ω) = (1/T) Σ_{k=−∞}^{∞} X_c(Ω − kΩ_s)
     = (1/T) Σ_{k=−∞}^{∞} X_c((ω − 2πk)/T)   (2.17)
where this last part is 2π-periodic.

2.6.1.1 Sampling

Figure 2.10

Example 2.4: Speech
Speech is intelligible if bandlimited by a CT lowpass filter to the band ___ kHz. We can sample speech as slowly as _____?

Figure 2.11

Figure 2.12: Note that there is no mention of T or Ω_s!

2.6.2 Relating x[n] to sampled x(t)
Recall the following equality:
x_s(t) = Σ_n x(nT) δ(t − nT)

Figure 2.13

Recall the CTFT relation:
x(αt) ↔ (1/α) X(Ω/α)   (2.18)
where α is a scaling of time and 1/α is a scaling in frequency.
X_s(Ω) ≡ X(ΩT)   (2.19)

2.7 The DFT: Frequency Domain with a Computer Analysis

2.7.1 Introduction
We just covered ideal (and non-ideal) (time) sampling of CT signals (Section 2.6). This enabled DT signal processing solutions for CT applications (Figure 2.14):

39 This content is available online at <http://cnx.org/content/m10992/2.3/>.

Figure 2.14

Much of the theoretical analysis of such systems relied on frequency domain representations. How do we carry out these frequency domain analyses on the computer? Recall the following relationships:
x[n] ↔ (DTFT) X(ω)
x(t) ↔ (CTFT) X(Ω)
where ω and Ω are continuous frequency variables.

2.7.1.1 Sampling DTFT

Consider the DTFT of a discrete-time (DT) signal x[n]. Assume x[n] is of finite duration N (i.e., an N-point signal).

$$X(\omega) = \sum_{n=0}^{N-1} x[n]\, e^{-i\omega n} \qquad (2.20)$$

where X(ω) is the continuous function that is indexed by the real-valued parameter ω. The other function, x[n], is a discrete function that is indexed by integers.

We want to work with X(ω) on a computer. Why not just sample X(ω)?

$$X[k] = X\!\left(\frac{2\pi k}{N}\right) = \sum_{n=0}^{N-1} x[n]\, e^{-i2\pi\frac{k}{N}n} \qquad (2.21)$$

In (2.21) we sampled at ω = 2πk/N where k = {0, 1, …, N−1}. X[k] for k = {0, …, N−1} is called the Discrete Fourier Transform (DFT) of x[n].

Example 2.5

Finite Duration DT Signal

Figure 2.15

The DTFT of the signal in Figure 2.15 (Finite Duration DT Signal) is written as follows:

$$X(\omega) = \sum_{n=0}^{N-1} x[n]\, e^{-i\omega n} \qquad (2.22)$$

where ω is any 2π-interval, for example 0 ≤ ω < 2π.

Sample X(ω)

Figure 2.16

where again we sampled at ω = 2πk/M where k = {0, 1, …, M−1}. For example, we take M = 10. In the following section (Section 2.7.1.1.1: Choosing M) we will discuss in more detail how we should choose M, the number of samples in the 2π interval. (This is precisely how we would plot X(ω) in Matlab.)
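As a concrete illustration, here is a minimal Matlab sketch of sampling the DTFT at M points; the length-4 all-ones test signal and the choice M = 10 are assumptions made only for the sake of the example:

x = [1 1 1 1];                     % assumed N = 4 example signal
M = 10;                            % number of samples on [0, 2*pi)
w = 2*pi*(0:M-1)/M;                % frequency sample locations
Xw = zeros(1,M);
for n = 0:length(x)-1
    Xw = Xw + x(n+1)*exp(-1j*w*n); % evaluate the DTFT sum (2.20) at each w
end
stem(w, abs(Xw))                   % plot |X(w)| at the sample points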

2.7.1.1.1 Choosing M

2.7.1.1.1.1 Case 1

Given N (the length of x[n]), choose M ≫ N to obtain a dense sampling of the DTFT (Figure 2.17):

Figure 2.17

2.7.1.1.1.2 Case 2

Choose M as small as possible (to minimize the amount of computation).

In general, we require M ≥ N in order to represent all the information in x[n], n = {0, …, N−1}.

Let's concentrate on M = N:

$$x[n] \stackrel{\text{DFT}}{\longleftrightarrow} X[k]$$

for n = {0, …, N−1} and k = {0, …, N−1}: N numbers ↔ N numbers.

2.7.2 Discrete Fourier Transform (DFT)

Define

$$X[k] \equiv X\!\left(\frac{2\pi k}{N}\right) \qquad (2.23)$$

where N = length(x[n]) and k = {0, …, N−1}. In this case, M = N.

DFT:

$$X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-i2\pi\frac{k}{N}n} \qquad (2.24)$$

Inverse DFT (IDFT):

$$x[n] = \frac{1}{N} \sum_{k=0}^{N-1} X[k]\, e^{i2\pi\frac{k}{N}n} \qquad (2.25)$$
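A minimal Matlab sketch of this transform pair, written with an explicit matrix so that (2.24) and (2.25) are visible; the test signal is an assumption:

x = [1 2 3 4].';                   % assumed example signal
N = length(x);
n = 0:N-1;  k = (0:N-1).';
W = exp(-1j*2*pi*k*n/N);           % N x N DFT matrix, W(k+1,n+1) = e^{-i2*pi*k*n/N}
X = W*x;                           % (2.24); matches fft(x)
xr = (1/N)*(W'*X);                 % (2.25); W' is the conjugate transpose, xr = x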

2.7.2.1 Interpretation

Represent x[n] in terms of a sum of N complex sinusoids of amplitudes X[k] and frequencies

$$\omega_k = \frac{2\pi k}{N}, \quad k \in \{0, \dots, N-1\}$$

note: This is a Fourier Series with fundamental frequency 2π/N.

2.7.2.1.1 Remark 1

IDFT treats x[n] as though it were N-periodic.

$$x[n] = \frac{1}{N} \sum_{k=0}^{N-1} X[k]\, e^{i2\pi\frac{k}{N}n} \qquad (2.26)$$

where n ∈ {0, …, N−1}.

Exercise 2.7.1 (Solution on p. 107.)
What about other values of n?

2.7.2.1.2 Remark 2

Proof that the IDFT inverts the DFT for n ∈ {0, …, N−1}:

$$\frac{1}{N} \sum_{k=0}^{N-1} X[k]\, e^{i2\pi\frac{k}{N}n} = \frac{1}{N} \sum_{k=0}^{N-1} \sum_{m=0}^{N-1} x[m]\, e^{-i2\pi\frac{k}{N}m}\, e^{i2\pi\frac{k}{N}n} = \;??? \qquad (2.27)$$

Example 2.6: Computing DFT

Given the following discrete-time signal (Figure 2.18) with N = 4, we will compute the DFT using two different methods (the DFT Formula and Sample DTFT):

Figure 2.18

1. DFT Formula

$$
\begin{aligned}
X[k] &= \sum_{n=0}^{N-1} x[n]\, e^{-i2\pi\frac{k}{N}n} \\
&= 1 + e^{-i2\pi\frac{k}{4}} + e^{-i2\pi\frac{k}{4}2} + e^{-i2\pi\frac{k}{4}3} \\
&= 1 + e^{-i\frac{\pi}{2}k} + e^{-i\pi k} + e^{-i\frac{3\pi}{2}k}
\end{aligned} \qquad (2.28)
$$

Using the above equation, we can solve and get the following results:

X[0] = 4,  X[1] = 0,  X[2] = 0,  X[3] = 0

2. Sample DTFT. Using the same figure, Figure 2.18, we will take the DTFT of the signal and get the following equations:

$$X(\omega) = \sum_{n=0}^{3} e^{-i\omega n} = \frac{1 - e^{-i4\omega}}{1 - e^{-i\omega}} = \;??? \qquad (2.29)$$

Our sample points will be:

$$\omega_k = \frac{2\pi k}{4} = \frac{\pi}{2}k$$

where k = {0, 1, 2, 3} (Figure 2.19).

Figure 2.19
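A one-line Matlab check of this example; the built-in fft computes exactly (2.24):

x = [1 1 1 1];
X = fft(x)     % returns [4 0 0 0], matching X[0] = 4 and X[1] = X[2] = X[3] = 0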

2.7.3 Periodicity of the DFT

DFT X[k] consists of samples of the DTFT, so X(ω), a 2π-periodic DTFT signal, can be converted to X[k], an N-periodic DFT.

$$X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-i2\pi\frac{k}{N}n} \qquad (2.30)$$

where e^{−i2π(k/N)n} is an N-periodic basis function (see Figure 2.20).

Figure 2.20

Also, recall,

$$
\begin{aligned}
x[n] &= \frac{1}{N} \sum_{k=0}^{N-1} X[k]\, e^{i2\pi\frac{k}{N}n} \\
&= \frac{1}{N} \sum_{k=0}^{N-1} X[k]\, e^{i2\pi\frac{k}{N}(n+mN)} \\
&= \;???
\end{aligned} \qquad (2.31)
$$

Example 2.7: Illustration

Figure 2.21

note: When we deal with the DFT, we need to remember that, in effect, this treats the signal as an N-periodic sequence.

2.7.4 A Sampling Perspective

Think of sampling the continuous function X(ω), as depicted in Figure 2.22. S(ω) will represent the sampling function applied to X(ω) and is illustrated in Figure 2.22 as well. This will result in our discrete-time sequence, X[k].

Figure 2.22

note: Remember that multiplication in the frequency domain is equal to convolution in the time domain!

2.7.4.1 Inverse DTFT of S(ω)

$$S(\omega) = \sum_{k=-\infty}^{\infty} \delta\!\left(\omega - \frac{2\pi k}{N}\right) \qquad (2.32)$$

Given the above equation, we can take the inverse DTFT and get the following equation:

$$N \sum_{m=-\infty}^{\infty} \delta[n - mN] = S[n] \qquad (2.33)$$

Exercise 2.7.2 (Solution on p. 107.)
Why does (2.33) equal S[n]?

So, in the time-domain we have (Figure 2.23):

Figure 2.23


2.7.5 Connections

Figure 2.24

Combine signals in Figure 2.24 to get signals in Figure 2.25.

Figure 2.25


2.8 Discrete-Time Processing of CT Signals

2.8.1 DT Processing of CT Signals

DSP System

Figure 2.26

2.8.1.1 Analysis

$$Y_c(\Omega) = H_{LP}(\Omega)\, Y(\Omega T) \qquad (2.34)$$

where we know that Y(ω) = X(ω) G(ω) and G(ω) is the frequency response of the DT LTI system. Also, remember that ω ≡ ΩT. So,

$$Y_c(\Omega) = H_{LP}(\Omega)\, G(\Omega T)\, X(\Omega T) \qquad (2.35)$$

where Y_c(Ω) and H_LP(Ω) are CTFTs and G(ΩT) and X(ΩT) are DTFTs.

note:

$$X(\omega) = \frac{1}{T} \sum_{k=-\infty}^{\infty} X_c\!\left(\frac{\omega - 2\pi k}{T}\right)$$

OR

$$X(\Omega T) = \frac{1}{T} \sum_{k=-\infty}^{\infty} X_c(\Omega - k\Omega_s)$$

Therefore our final output signal, Y_c(Ω), will be:

$$Y_c(\Omega) = H_{LP}(\Omega)\, G(\Omega T) \left( \frac{1}{T} \sum_{k=-\infty}^{\infty} X_c(\Omega - k\Omega_s) \right) \qquad (2.36)$$


Now, if X_c(Ω) is bandlimited to [−Ω_s/2, Ω_s/2] and we use the usual lowpass reconstruction filter in the D/A, Figure 2.27:

Figure 2.27

Then,

$$Y_c(\Omega) = \begin{cases} G(\Omega T)\, X_c(\Omega) & \text{if } |\Omega| < \frac{\Omega_s}{2} \\ 0 & \text{otherwise} \end{cases} \qquad (2.37)$$

2.8.1.2 Summary

For bandlimited signals sampled at or above the Nyquist rate, we can relate the input and output of the DSP system by:

$$Y_c(\Omega) = G_e(\Omega)\, X_c(\Omega) \qquad (2.38)$$

where

$$G_e(\Omega) = \begin{cases} G(\Omega T) & \text{if } |\Omega| < \frac{\Omega_s}{2} \\ 0 & \text{otherwise} \end{cases}$$

Figure 2.28

2.8.1.2.1 Note

G_e(Ω) is LTI if and only if the following two conditions are satisfied:

1. G(ω) is LTI (in DT).
2. X_c(t) is bandlimited and the sampling rate is equal to or greater than the Nyquist rate.

For example, if we had a simple pulse described by X_c(t) = u(t − T_0) − u(t − T_1), where T_1 > T_0, and if the sampling period T > T_1 − T_0, then some samples might "miss" the pulse while others might not "miss" it. This is what we term time-varying behavior.

Example 2.8

Figure 2.29

If 2π/T > 2B and ω_1 < BT, determine and sketch Y_c(Ω) using Figure 2.29.

2.8.2 Application: 60Hz Noise Removal

Figure 2.30

Unfortunately, in real-world situations electrodes also pick up ambient 60 Hz signals from lights, computers, etc. In fact, usually this "60 Hz noise" is much greater in amplitude than the EKG signal shown in Figure 2.30. Figure 2.31 shows the EKG signal; it is barely noticeable as it has become overwhelmed by noise.

Figure 2.31: Our EKG signal, y(t), is overwhelmed by noise.

2.8.2.1 DSP Solution

Figure 2.32

Figure 2.33

2.8.2.2 Sampling Period/Rate

First we must note that |Y(Ω)| is bandlimited to ±60 Hz. Therefore, the minimum sampling rate is 120 Hz. In order to get the best results we should set f_s = 240 Hz:

$$\Omega_s = 2\pi \cdot 240 \;\frac{\text{rad}}{\text{s}}$$

Figure 2.34

2.8.2.3 Digital Filter

Therefore, we want to design a digital filter that will remove the 60 Hz component and preserve the rest.

Figure 2.35
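As a minimal Matlab sketch of one possible filter for this task (the pole radius and this particular notch design are illustrative assumptions, not the filter shown in Figure 2.35): placing a pair of zeros on the unit circle at the 60 Hz frequency removes that component while the nearby poles keep the notch narrow.

fs = 240;                                   % sampling rate chosen above (Hz)
w0 = 2*pi*60/fs;                            % 60 Hz corresponds to pi/2 rad/sample
b = real(poly([exp(1j*w0) exp(-1j*w0)]));   % zeros on the unit circle at +/- w0
r = 0.95;                                   % pole radius (assumed): narrow notch
a = real(poly([r*exp(1j*w0) r*exp(-1j*w0)])); % poles just inside the zeros
% y = filter(b, a, x);                      % x: the sampled noisy EKG (assumed)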

2.9 Short Time Fourier Transform

2.9.1 Short Time Fourier Transform

The Fourier transforms (FT, DTFT, DFT, etc.) do not clearly indicate how the frequency content of a signal changes over time. That information is hidden in the phase - it is not revealed by the plot of the magnitude of the spectrum.

note: To see how the frequency content of a signal changes over time, we can cut the signal into blocks and compute the spectrum of each block. To improve the result,

1. blocks are overlapping
2. each block is multiplied by a window that is tapered at its endpoints.

Several parameters must be chosen:

1. Block length, R.
2. The type of window.
3. Amount of overlap between blocks (Figure 2.36 (STFT: Overlap Parameter)).
4. Amount of zero padding, if any.

STFT: Overlap Parameter

Figure 2.36

The short-time Fourier transform is defined as

$$
\begin{aligned}
X(\omega, m) &= \mathrm{STFT}(x(n)) := \mathrm{DTFT}(x(n-m)\, w(n)) \\
&= \sum_{n=-\infty}^{\infty} x(n-m)\, w(n)\, e^{-i\omega n} \\
&= \sum_{n=0}^{R-1} x(n-m)\, w(n)\, e^{-i\omega n}
\end{aligned} \qquad (2.39)
$$

where w(n) is the window function of length R.

1. The STFT of a signal x(n) is a function of two variables: time and frequency.
2. The block length is determined by the support of the window function w(n).
3. A graphical display of the magnitude of the STFT, |X(ω, m)|, is called the spectrogram of the signal. It is often used in speech processing.
4. The STFT of a signal is invertible.
5. One can choose the block length. A long block length will provide higher frequency resolution (because the main-lobe of the window function will be narrow). A short block length will provide higher time resolution because less averaging across samples is performed for each STFT value.
6. A narrow-band spectrogram is one computed using a relatively long block length R (long window function).
7. A wide-band spectrogram is one computed using a relatively short block length R (short window function).

2.9.1.1 Sampled STFT

To numerically evaluate the STFT, we sample the frequency axis ω in N equally spaced samples from ω = 0 to ω = 2π:

$$\omega_k = \frac{2\pi k}{N}, \quad 0 \le k \le N-1 \qquad (2.40)$$

We then have the discrete STFT,

$$
\begin{aligned}
X^d(k, m) := X\!\left(\frac{2\pi k}{N}, m\right) &= \sum_{n=0}^{R-1} x(n-m)\, w(n)\, e^{-i\frac{2\pi k}{N}n} \\
&= \sum_{n=0}^{R-1} x(n-m)\, w(n)\, W_N^{kn} \\
&= \mathrm{DFT}_N\!\left( x(n-m)\, w(n)\,\big|_{n=0}^{R-1},\; 0, \dots, 0 \right)
\end{aligned} \qquad (2.41)
$$

where W_N = e^{−i2π/N} and the trailing 0,…,0 denotes N − R zeros.

In this definition, the overlap between adjacent blocks is R − 1; the signal is shifted along the window one sample at a time. That generates more points than is usually needed, so we also sample the STFT along the time direction, usually evaluating X^d(k, Lm), where L is the time-skip. The relation between the time-skip, the number of overlapping samples, and the block length is

Overlap = R − L

Exercise 2.9.1 (Solution on p. 107.)
Match each signal to its spectrogram in Figure 2.37.

(a) (b)

Figure 2.37

2.9.1.2 Spectrogram Example

Figure 2.38

Figure 2.39

The matlab program for producing the figures above (Figure 2.38 and Figure 2.39):

% LOAD DATA
load mtlb;
x = mtlb;

figure(1), clf
plot(0:4000,x)
xlabel('n')
ylabel('x(n)')

% SET PARAMETERS
R = 256;               % R: block length
window = hamming(R);   % window function of length R
N = 512;               % N: frequency discretization
L = 35;                % L: time lapse between blocks
fs = 7418;             % fs: sampling frequency
overlap = R - L;

% COMPUTE SPECTROGRAM
[B,f,t] = specgram(x,N,fs,window,overlap);

% MAKE PLOT
figure(2), clf
imagesc(t,f,log10(abs(B)));
colormap('jet')
axis xy
xlabel('time')
ylabel('frequency')
title('SPECTROGRAM, R = 256')

2.9.1.3 Effect of window length R

Narrow-band spectrogram: better frequency resolution

Figure 2.40

Wide-band spectrogram: better time resolution

Figure 2.41

Here is another example to illustrate the frequency/time resolution trade-off (see Figure 2.40 (Narrow-band spectrogram: better frequency resolution), Figure 2.41 (Wide-band spectrogram: better time resolution), and Figure 2.42 (Effect of Window Length R)).

Effect of Window Length R

(a) (b)

Figure 2.42

2.9.1.4 Effect of L and N

A spectrogram is computed with different parameters:

L ∈ {1, 10},  N ∈ {32, 256}

where L = time lapse between blocks and N = FFT length (each block is zero-padded to length N). In each case, the block length is 30 samples.

Exercise 2.9.2 (Solution on p. 107.)
For each of the four spectrograms in Figure 2.43 can you tell what L and N are?

(a) (b)

Figure 2.43

L and N do not affect the time resolution or the frequency resolution. They only affect the 'pixelation'.

2.9.1.5 Effect of R and L

Shown below are four spectrograms of the same signal. Each spectrogram is computed using a different set of parameters:

R ∈ {120, 256, 1024},  L ∈ {35, 250}

where R = block length and L = time lapse between blocks.

Exercise 2.9.3 (Solution on p. 107.)
For each of the four spectrograms in Figure 2.44, match the above values of L and R.

Figure 2.44

If you like, you may listen to this signal with the soundsc command; the data is in the file: stft_data.m. Here (Figure 2.45) is a figure of the signal.

Figure 2.45

2.10 Spectrograms

We know how to acquire analog signals for digital processing (pre-filtering, sampling, and A/D conversion) and to compute spectra of discrete-time signals (using the FFT algorithm); let's put these various components together to learn how the spectrogram shown in Figure 2.46 (Speech Spectrogram), which is used to analyze speech, is calculated. The speech was sampled at a rate of 11.025 kHz and passed through a 16-bit A/D converter.

Point of interest: Music compact discs (CDs) encode their signals at a sampling rate of 44.1 kHz. We'll learn the rationale for this number later. The 11.025 kHz sampling rate for the speech is 1/4 of the CD sampling rate, and was the lowest available sampling rate commensurate with speech signal bandwidths available on my computer.

Exercise 2.10.1 (Solution on p. 107.)
Looking at Figure 2.46 (Speech Spectrogram) the signal lasted a little over 1.2 seconds. How long was the sampled signal (in terms of samples)? What was the datarate during the sampling process in bps (bits per second)? Assuming the computer storage is organized in terms of bytes (8-bit quantities), how many bytes of computer memory does the speech consume?

Speech Spectrogram

Figure 2.46 (top: spectrogram, Frequency (Hz) from 0 to 5000 versus Time (s); bottom: the waveform of the spoken phrase "Rice University")

The resulting discrete-time signal, shown in the bottom of Figure 2.46 (Speech Spectrogram), clearly changes its character with time. To display these spectral changes, the long signal was sectioned into frames: comparatively short, contiguous groups of samples. Conceptually, a Fourier transform of each frame is calculated using the FFT. Each frame is not so long that significant signal variations are retained within a frame, but not so short that we lose the signal's spectral character. Roughly speaking, the speech signal's spectrum is evaluated over successive time segments and stacked side by side so that the x-axis corresponds to time and the y-axis to frequency, with color indicating the spectral amplitude.

An important detail emerges when we examine each framed signal (Figure 2.47 (Spectrogram Hanning vs. Rectangular)).

Spectrogram Hanning vs. Rectangular

Figure 2.47: The top waveform is a segment 1024 samples long taken from the beginning of the "Rice University" phrase. Computing Figure 2.46 (Speech Spectrogram) involved creating frames, here demarked by the vertical lines, that were 256 samples long and finding the spectrum of each. If a rectangular window is applied (corresponding to extracting a frame from the signal), oscillations appear in the spectrum (middle of bottom row). Applying a Hanning window gracefully tapers the signal toward frame edges, thereby yielding a more accurate computation of the signal's spectrum at that moment of time.

At the frame's edges, the signal may change very abruptly, a feature not present in the original signal. A transform of such a segment reveals a curious oscillation in the spectrum, an artifact directly related to this sharp amplitude change. A better way to frame signals for spectrograms is to apply a window: shape the signal values within a frame so that the signal decays gracefully as it nears the edges. This shaping is accomplished by multiplying the framed signal by the sequence w(n). In sectioning the signal, we essentially applied a rectangular window: w(n) = 1, 0 ≤ n ≤ N−1. A much more graceful window is the Hanning window; it has the cosine shape

$$w(n) = \frac{1}{2}\left(1 - \cos\frac{2\pi n}{N}\right)$$

As shown in Figure 2.47 (Spectrogram Hanning vs. Rectangular), this shaping greatly reduces spurious oscillations in each frame's spectrum. Considering the spectrum of the Hanning windowed frame, we find that the oscillations resulting from applying the rectangular window obscured a formant (the one located at a little more than half the Nyquist frequency).

Exercise 2.10.2 (Solution on p. 107.)
What might be the source of these oscillations? To gain some insight, what is the length-2N discrete Fourier transform of a length-N pulse? The pulse emulates the rectangular window, and certainly has edges. Compare your answer with the length-2N transform of a length-N Hanning window.

Non-overlapping windows

Figure 2.48: In comparison with the original speech segment shown in the upper plot, the nonoverlapped Hanning windowed version shown below it is very ragged. Clearly, spectral information extracted from the bottom plot could well miss important features present in the original.

If you examine the windowed signal sections in sequence to examine windowing's effect on signal amplitude, we see that we have managed to amplitude-modulate the signal with the periodically repeated window (Figure 2.48 (Non-overlapping windows)). To alleviate this problem, frames are overlapped (typically by half a frame duration). This solution requires more Fourier transform calculations than needed by rectangular windowing, but the spectra are much better behaved and spectral changes are much better captured. The speech signal, such as shown in the speech spectrogram (Figure 2.46: Speech Spectrogram), is sectioned into overlapping, equal-length frames, with a Hanning window applied to each frame. The spectra of each of these is calculated, and displayed in spectrograms with frequency extending vertically, window time location running horizontally, and spectral magnitude color-coded. Figure 2.49 (Overlapping windows for computing spectrograms) illustrates these computations.

Overlapping windows for computing spectrograms

Figure 2.49: The original speech segment and the sequence of overlapping Hanning windows applied to it are shown in the upper portion. Frames were 256 samples long and a Hanning window was applied with a half-frame overlap. A length-512 FFT of each frame was computed, with the magnitude of the first 257 FFT values displayed vertically, with spectral amplitude values color-coded (Log Spectral Magnitude).

Exercise 2.10.3 (Solution on p. 107.)
Why the specific values of 256 for N and 512 for K? Another issue is how was the length-512 transform of each length-256 windowed frame computed?

2.11 Filtering with the DFT

2.11.1 Introduction

Figure 2.50

$$y[n] = x[n] * h[n] = \sum_{k=-\infty}^{\infty} x[k]\, h[n-k] \qquad (2.42)$$

$$Y(\omega) = X(\omega)\, H(\omega) \qquad (2.43)$$

Assume that H(ω) is specified.

Exercise 2.11.1 (Solution on p. 107.)
How can we implement X(ω) H(ω) in a computer?

Recall that the DFT treats N-point sequences as if they are periodically extended (Figure 2.51):

Figure 2.51

2.11.2 Compute IDFT of Y[k]

$$
\begin{aligned}
\tilde{y}[n] &= \frac{1}{N} \sum_{k=0}^{N-1} Y[k]\, e^{i2\pi\frac{k}{N}n} \\
&= \frac{1}{N} \sum_{k=0}^{N-1} X[k]\, H[k]\, e^{i2\pi\frac{k}{N}n} \\
&= \frac{1}{N} \sum_{k=0}^{N-1} \left( \sum_{m=0}^{N-1} x[m]\, e^{-i2\pi\frac{k}{N}m} \right) H[k]\, e^{i2\pi\frac{k}{N}n} \\
&= \sum_{m=0}^{N-1} x[m] \left( \frac{1}{N} \sum_{k=0}^{N-1} H[k]\, e^{i2\pi\frac{k}{N}(n-m)} \right) \\
&= \sum_{m=0}^{N-1} x[m]\, h[((n-m))_N]
\end{aligned} \qquad (2.44)
$$

And the IDFT periodically extends h[n]:

$$\tilde{h}[n-m] = h[((n-m))_N]$$

This computes as shown in Figure 2.52:

Figure 2.52

$$\tilde{y}[n] = \sum_{m=0}^{N-1} x[m]\, h[((n-m))_N] \qquad (2.45)$$

is called circular convolution and is denoted by the symbol in Figure 2.53.

Figure 2.53: The above symbol for the circular convolution is for an N-periodic extension.

2.11.2.1 DFT Pair

Figure 2.54

Note that in general:

Figure 2.55

Example 2.9: Regular vs. Circular Convolution

To begin with, we are given the following two length-3 signals:

x[n] = {1, 2, 3},  h[n] = {1, 0, 2}

We can zero-pad these signals so that we have the following discrete sequences:

x[n] = {…, 0, 1, 2, 3, 0, …},  h[n] = {…, 0, 1, 0, 2, 0, …}

where x[0] = 1 and h[0] = 1.

Regular Convolution:

$$y[n] = \sum_{m=0}^{2} x[m]\, h[n-m] \qquad (2.46)$$

Using the above convolution formula (refer to the link if you need a review of convolution (Section 1.5)), we can calculate the resulting values for y[0] to y[4]. Recall that because we have two length-3 signals, our convolved signal will be length-5.

n = 0: {…, 0, 0, 0, 1, 2, 3, 0, …} and {…, 0, 2, 0, 1, 0, 0, 0, …}

$$y[0] = 1 \cdot 1 + 2 \cdot 0 + 3 \cdot 0 = 1 \qquad (2.47)$$

n = 1: {…, 0, 0, 1, 2, 3, 0, …} and {…, 0, 2, 0, 1, 0, 0, …}

$$y[1] = 1 \cdot 0 + 2 \cdot 1 + 3 \cdot 0 = 2 \qquad (2.48)$$

n = 2: {…, 0, 1, 2, 3, 0, …} and {…, 0, 2, 0, 1, 0, …}

$$y[2] = 1 \cdot 2 + 2 \cdot 0 + 3 \cdot 1 = 5 \qquad (2.49)$$

n = 3:

$$y[3] = 4 \qquad (2.50)$$

n = 4:

$$y[4] = 6 \qquad (2.51)$$

Regular Convolution Result

Figure 2.56: Result is finite duration, not periodic!

Circular Convolution:

$$\tilde{y}[n] = \sum_{m=0}^{2} x[m]\, h[((n-m))_N] \qquad (2.52)$$

And now with circular convolution our h[n] changes and becomes a periodically extended signal:

$$h[((n))_N] = \{\dots, 1, 0, 2, 1, 0, 2, 1, 0, 2, \dots\} \qquad (2.53)$$

n = 0: {…, 0, 0, 0, 1, 2, 3, 0, …} and {…, 1, 2, 0, 1, 2, 0, 1, …}

$$\tilde{y}[0] = 1 \cdot 1 + 2 \cdot 2 + 3 \cdot 0 = 5 \qquad (2.54)$$

n = 1: {…, 0, 0, 0, 1, 2, 3, 0, …} and {…, 0, 1, 2, 0, 1, 2, 0, …}

$$\tilde{y}[1] = 1 \cdot 0 + 2 \cdot 1 + 3 \cdot 2 = 8 \qquad (2.55)$$

n = 2:

$$\tilde{y}[2] = 5 \qquad (2.56)$$

n = 3:

$$\tilde{y}[3] = 5 \qquad (2.57)$$

n = 4:

$$\tilde{y}[4] = 8 \qquad (2.58)$$

Circular Convolution Result

Figure 2.57: Result is 3-periodic.

Figure 2.58 (Circular Convolution from Regular) illustrates the relationship between circular convolution and regular convolution using the previous two figures:

Circular Convolution from Regular

Figure 2.58: The left plot (the circular convolution results) has a "wrap-around" effect due to periodic extension.

2.11.2.2 Regular Convolution from Periodic Convolution

1. "Zero-pad" x[n] and h[n] to avoid the overlap (wrap-around) effect. We will zero-pad the two signals to a length-5 signal (5 being the duration of the regular convolution result):

x[n] = {1, 2, 3, 0, 0},  h[n] = {1, 0, 2, 0, 0}

2. Now take the DFTs of the zero-padded signals:

$$\tilde{y}[n] = \frac{1}{5} \sum_{k=0}^{4} X[k]\, H[k]\, e^{i2\pi\frac{k}{5}n} = \sum_{m=0}^{4} x[m]\, h[((n-m))_5] \qquad (2.59)$$

Now we can plot this result (Figure 2.59):

Figure 2.59: The sequence from 0 to 4 (the underlined part of the sequence) is the regular convolution result. From this illustration we can see that it is 5-periodic!

note: We can compute the regular convolution result of a convolution of an M-point signal x[n] with an N-point signal h[n] by padding each signal with zeros to obtain two length M+N−1 sequences and computing the circular convolution (or equivalently computing the IDFT of H[k] X[k], the product of the DFTs of the zero-padded signals) (Figure 2.60).

Figure 2.60: Note that the lower two images are simply the top images that have been zero-padded.

2.11.3 DSP System

Figure 2.61: The system has a length-N impulse response, h[n].

1. Sample the finite duration continuous-time input x(t) to get x[n], where n = {0, …, M−1}.
2. Zero-pad x[n] and h[n] to length M+N−1.
3. Compute the DFTs X[k] and H[k].
4. Compute the IDFT of X[k] H[k]: y[n] = ỹ[n] for n = {0, …, M+N−2}.
5. Reconstruct y(t).


2.12 Image Restoration Basics

2.12.1 Image Restoration

In many applications (e.g., satellite imaging, medical imaging, astronomical imaging, poor-quality family portraits) the imaging system introduces a slight distortion. Often images are slightly blurred and image restoration aims at deblurring the image.

The blurring can usually be modeled as an LSI system with a given PSF h[m, n].

Figure 2.62: Fourier Transform (FT) relationship between the two functions.

The observed image is

$$g[m, n] = h[m, n] * f[m, n] \qquad (2.60)$$

$$G(u, v) = H(u, v)\, F(u, v) \qquad (2.61)$$

$$F(u, v) = \frac{G(u, v)}{H(u, v)} \qquad (2.62)$$

Example 2.10: Image Blurring

Above we showed the equations for representing the common model for blurring an image. In Figure 2.63 we have an original image and a PSF function that we wish to apply to the image in order to model a basic blurred image.

(a) (b)

Figure 2.63

Once we apply the PSF to the original image, we receive our blurred image that is shown in Figure 2.64:

Figure 2.64

2.12.1.1 Frequency Domain Analysis

Example 2.10 (Image Blurring) looks at the original images in their typical form; however, it is often useful to look at our images and PSF in the frequency domain. In Figure 2.65, we take another look at the image blurring example above and look at how the images and results would appear in the frequency domain if we applied the Fourier transforms.

Figure 2.65
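A minimal Matlab sketch of (2.60)-(2.62); the test image, the Gaussian PSF, and the imread/fspecial calls (the latter from the Image Processing Toolbox) are assumptions for illustration, not the images shown in the figures:

f = double(imread('cameraman.tif'));   % assumed test image
h = fspecial('gaussian', size(f), 3);  % assumed Gaussian PSF, centered in the array
h = ifftshift(h);                      % move the PSF center to the origin
G = fft2(h) .* fft2(f);                % (2.61): pointwise product of the 2-D FTs
g = real(ifft2(G));                    % (2.60): the observed (blurred) image
Fhat = G ./ fft2(h);                   % (2.62): naive inverse filter
fhat = real(ifft2(Fhat));              % recovers f (noise-free case only)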

Solutions to Exercises in Chapter 2

Solution to Exercise 2.7.1 (p. 66)
x[n + N] = x[n]: the synthesis formula (2.26) is itself N-periodic in n, so for other values of n it produces the periodic extension of x[n].

Solution to Exercise 2.7.2 (p. 70)
S[n] is N-periodic, so it has the following Fourier Series:

$$c_k = \frac{1}{N} \int_{-N/2}^{N/2} \delta[n]\, e^{-i2\pi\frac{k}{N}n}\, dn = \frac{1}{N} \qquad (2.63)$$

$$S[n] = \sum_{k=-\infty}^{\infty} e^{-i2\pi\frac{k}{N}n} \qquad (2.64)$$

where the DTFT of the exponential in the above equation is equal to δ(ω − 2πk/N).

Solution to Exercise 2.9.1 (p. 81)

Solution to Exercise 2.9.2 (p. 88)

Solution to Exercise 2.9.3 (p. 89)

Solution to Exercise 2.10.1 (p. 91)
Number of samples equals 1.2 × 11025 = 13230. The datarate is 11025 × 16 = 176.4 kbps. The storage required would be 26460 bytes.

Solution to Exercise 2.10.2 (p. 93)
The oscillations are due to the boxcar window's Fourier transform, which equals the sinc function.

Solution to Exercise 2.10.3 (p. 95)
These numbers are powers-of-two, and the FFT algorithm can be exploited with these lengths. To compute a longer transform than the input signal's duration, we simply zero-pad the signal.

Solution to Exercise 2.11.1 (p. 96)
Discretize (sample) X(ω) and H(ω). In order to do this, we should take the DFTs of x[n] and h[n] to get X[k] and H[k]. Then we will compute

$$\tilde{y}[n] = \mathrm{IDFT}(X[k]\, H[k])$$

Does ỹ[n] = y[n]?


Chapter 3

Digital Filtering

3.1 Difference Equation

3.1.1 Introduction

One of the most important concepts of DSP is to be able to properly represent the input/output relationship to a given LTI system. A linear constant-coefficient difference equation (LCCDE) serves as a way to express just this relationship in a discrete-time system. Writing the sequence of inputs and outputs, which represent the characteristics of the LTI system, as a difference equation helps in understanding and manipulating a system.

Definition 3.1: difference equation
An equation that shows the relationship between consecutive values of a sequence and the differences among them. They are often rearranged as a recursive formula so that a system's output can be computed from the input signal and past outputs.

Example:

$$y[n] + 7y[n-1] + 2y[n-2] = x[n] - 4x[n-1] \qquad (3.1)$$

3.1.2 General Formulas for the Difference Equation

As stated briefly in the definition above, a difference equation is a very useful tool in describing and calculating the output of the system described by the formula for a given sample n. The key property of the difference equation is its ability to help easily find the transfer function, H(z), of a system. In the following two subsections, we will look at the general form of the difference equation and the general conversion to a z-transform directly from the difference equation.

3.1.2.1 Difference Equation

The general form of a linear, constant-coefficient difference equation (LCCDE) is shown below:

$$\sum_{k=0}^{N} a_k\, y[n-k] = \sum_{k=0}^{M} b_k\, x[n-k] \qquad (3.2)$$

We can also write the general form to easily express a recursive output, which looks like this:

$$y[n] = -\sum_{k=1}^{N} a_k\, y[n-k] + \sum_{k=0}^{M} b_k\, x[n-k] \qquad (3.3)$$

From this equation, note that y[n−k] represents the outputs and x[n−k] represents the inputs. The value of N represents the order of the difference equation and corresponds to the memory of the system being represented. Because this equation relies on past values of the output, in order to compute a numerical solution, certain past outputs, referred to as the initial conditions, must be known.

3.1.2.2 Conversion to Z-Transform

Using the above formula, (3.2), we can easily generalize the transfer function, H(z), for any difference equation. Below are the steps taken to convert any difference equation into its transfer function, i.e. z-transform. The first step involves taking the z-transform of all the terms in (3.2). Then we use the linearity property to pull the transform inside the summation and the time-shifting property of the z-transform to change the time-shifting terms to exponentials. Once this is done, we arrive at the following equation (with a_0 = 1):

$$Y(z) = -\sum_{k=1}^{N} a_k\, Y(z)\, z^{-k} + \sum_{k=0}^{M} b_k\, X(z)\, z^{-k} \qquad (3.4)$$

$$H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{k=0}^{M} b_k z^{-k}}{1 + \sum_{k=1}^{N} a_k z^{-k}} \qquad (3.5)$$
3.1.2.3 Conversion to Frequency Response

Once the z-transform has been calculated from the difference equation, we can go one step further to define the frequency response of the system, or filter, that is being represented by the difference equation.

note: Remember that the reason we are dealing with these formulas is to be able to aid us in filter design. A LCCDE is one of the easiest ways to represent FIR filters. By being able to find the frequency response, we will be able to look at the basic properties of any filter represented by a simple LCCDE.

Below is the general formula for the frequency response of a z-transform. The conversion is simply a matter of taking the z-transform formula, H(z), and replacing every instance of z with e^{iw}.

$$H(w) = H(z)\big|_{z = e^{iw}} = \frac{\sum_{k=0}^{M} b_k\, e^{-iwk}}{\sum_{k=0}^{N} a_k\, e^{-iwk}} \qquad (3.6)$$

Once you understand the derivation of this formula, look at the module concerning Filter Design from the Z-Transform for a look into how all of these ideas of the Z-transform (Section 3.2), Difference Equation, and Pole/Zero Plots (Section 3.4) play a role in filter design.
3.1.3 Example

Example 3.1: Finding Difference Equation

Below is a basic example showing the opposite of the steps above: given a transfer function one can easily calculate the system's difference equation.

$$H(z) = \frac{(z+1)^2}{\left(z - \frac{1}{2}\right)\left(z + \frac{3}{4}\right)} \qquad (3.7)$$

Given this transfer function of a time-domain filter, we want to find the difference equation. To begin with, expand both polynomials and divide them by the highest order z.

$$H(z) = \frac{(z+1)(z+1)}{\left(z - \frac{1}{2}\right)\left(z + \frac{3}{4}\right)} = \frac{z^2 + 2z + 1}{z^2 + \frac{1}{4}z - \frac{3}{8}} = \frac{1 + 2z^{-1} + z^{-2}}{1 + \frac{1}{4}z^{-1} - \frac{3}{8}z^{-2}} \qquad (3.8)$$

From this transfer function, the coefficients of the two polynomials will be our a_k and b_k values found in the general difference equation formula, (3.2). Using these coefficients and the above form of the transfer function, we can easily write the difference equation:

$$x[n] + 2x[n-1] + x[n-2] = y[n] + \frac{1}{4}y[n-1] - \frac{3}{8}y[n-2] \qquad (3.9)$$

In our final step, we can rewrite the difference equation in its more common form showing the recursive nature of the system:

$$y[n] = x[n] + 2x[n-1] + x[n-2] - \frac{1}{4}y[n-1] + \frac{3}{8}y[n-2] \qquad (3.10)$$
3.1.4 Solving a LCCDE

In order for a linear constant-coefficient difference equation to be useful in analyzing a LTI system, we must be able to find the system's output based upon a known input, x(n), and a set of initial conditions. Two common methods exist for solving a LCCDE: the direct method and the indirect method, the latter being based on the z-transform. Below we will briefly discuss the formulas for solving a LCCDE using each of these methods.

3.1.4.1 Direct Method

The final solution to the output based on the direct method is the sum of two parts, expressed in the following equation:

$$y(n) = y_h(n) + y_p(n) \qquad (3.11)$$

The first part, y_h(n), is referred to as the homogeneous solution and the second part, y_p(n), is referred to as the particular solution. The following method is very similar to that used to solve many differential equations, so if you have taken a differential calculus course or used differential equations before then this should seem very familiar.
3.1.4.1.1 Homogeneous Solution

We begin by assuming that the input is zero, x(n) = 0. Now we simply need to solve the homogeneous difference equation:

$$\sum_{k=0}^{N} a_k\, y[n-k] = 0 \qquad (3.12)$$

In order to solve this, we will make the assumption that the solution is in the form of an exponential. We will use lambda, λ, to represent our exponential terms. We now have to solve the following equation:

$$\sum_{k=0}^{N} a_k\, \lambda^{n-k} = 0 \qquad (3.13)$$

We can expand this equation out and factor out all of the lambda terms. This will give us a large polynomial in parentheses, which is referred to as the characteristic polynomial. The roots of this polynomial will be the key to solving the homogeneous equation. If there are all distinct roots, then the general solution to the equation will be as follows:

$$y_h(n) = C_1 (\lambda_1)^n + C_2 (\lambda_2)^n + \dots + C_N (\lambda_N)^n \qquad (3.14)$$

However, if the characteristic equation contains multiple roots then the above general solution will be slightly different. Below we have the modified version for an equation where λ_1 has K multiple roots:

$$y_h(n) = C_1 (\lambda_1)^n + C_1 n (\lambda_1)^n + C_1 n^2 (\lambda_1)^n + \dots + C_1 n^{K-1} (\lambda_1)^n + C_2 (\lambda_2)^n + \dots + C_N (\lambda_N)^n \qquad (3.15)$$
3.1.4.1.2 Particular Solution

The particular solution, y_p(n), will be any solution that will solve the general difference equation:

$$\sum_{k=0}^{N} a_k\, y_p(n-k) = \sum_{k=0}^{M} b_k\, x(n-k) \qquad (3.16)$$

In order to solve, our guess for the solution to y_p(n) will take on the form of the input, x(n). After guessing at a solution to the above equation involving the particular solution, one only needs to plug the solution into the difference equation and solve it out.
3.1.4.2 Indirect Method

The indirect method utilizes the relationship between the difference equation and the z-transform, discussed earlier (Section 3.1.2: General Formulas for the Difference Equation), to find a solution. The basic idea is to convert the difference equation into a z-transform, as described above (Section 3.1.2.2: Conversion to Z-Transform), to get the resulting output, Y(z). Then by inverse transforming this and using partial-fraction expansion, we can arrive at the solution.

$$Z\{y(n+1) - y(n)\} = zY(z) - y(0) \qquad (3.17)$$

This can be iteratively extended to an arbitrary order difference as in Equation (3.18):

$$Z\left\{y^{(N)}(n)\right\} = z^N Y(z) - \sum_{m=0}^{N-1} z^{N-m-1}\, y^{(m)}(0) \qquad (3.18)$$

where y^{(m)}(n) denotes the m-th order difference of y(n). Now, the z-transform of each side of the difference equation can be taken:

$$Z\left\{\sum_{k=0}^{N} a_k\, y^{(k)}(n)\right\} = Z\{x(n)\} \qquad (3.19)$$

which by linearity results in

$$\sum_{k=0}^{N} a_k\, Z\left\{y^{(k)}(n)\right\} = Z\{x(n)\} \qquad (3.20)$$

and by the difference properties in

$$\sum_{k=0}^{N} a_k \left( z^k\, Z\{y(n)\} - \sum_{m=0}^{k-1} z^{k-m-1}\, y^{(m)}(0) \right) = Z\{x(n)\} \qquad (3.21)$$

Rearranging terms to isolate the z-transform of the output,

$$Z\{y(n)\} = \frac{Z\{x(n)\} + \sum_{k=0}^{N} a_k \sum_{m=0}^{k-1} z^{k-m-1}\, y^{(m)}(0)}{\sum_{k=0}^{N} a_k z^k} \qquad (3.22)$$

Thus, it is found that

$$Y(z) = \frac{X(z) + \sum_{k=0}^{N} a_k \sum_{m=0}^{k-1} z^{k-m-1}\, y^{(m)}(0)}{\sum_{k=0}^{N} a_k z^k} \qquad (3.23)$$

In order to find the output, it only remains to find the z-transform X(z) of the input, substitute the initial conditions, and compute the inverse z-transform of the result. Partial fraction expansions are often required for this last step. This may sound daunting while looking at (3.23), but it is often easy in practice, especially for low order difference equations. (3.23) can also be used to determine the transfer function and frequency response.

As an example, consider the difference equation

$$y[n-2] + 4y[n-1] + 3y[n] = \cos(n) \qquad (3.24)$$

with the initial conditions y(0) = 1 and y'(0) = 0. Using the method described above, the z-transform of y[n] is given by

$$Y[z] = \frac{z}{[z^2+1][z+1][z+3]} + \frac{1}{[z+1][z+3]} \qquad (3.25)$$

Performing a partial fraction decomposition, this also equals

$$Y[z] = 0.25\,\frac{1}{z+1} - 0.35\,\frac{1}{z+3} + 0.1\,\frac{z}{z^2+1} + 0.2\,\frac{1}{z^2+1} \qquad (3.26)$$

Computing the inverse z-transform,

$$y(n) = \left(0.25\,(-1)^n - 0.35\,(-3)^n + 0.1\cos(n) + 0.2\sin(n)\right) u(n) \qquad (3.27)$$

One can check that this satisfies both the difference equation and the initial conditions.

3.2 The Z Transform: Definition

3.2.1 Basic Definition of the Z-Transform

The z-transform of a sequence is defined as

$$X(z) = \sum_{n=-\infty}^{\infty} x[n]\, z^{-n} \qquad (3.28)$$

Sometimes this equation is referred to as the bilateral z-transform. At times the z-transform is defined as

$$X(z) = \sum_{n=0}^{\infty} x[n]\, z^{-n} \qquad (3.29)$$

which is known as the unilateral z-transform.

There is a close relationship between the z-transform and the Fourier transform of a discrete time signal, which is defined as

$$X\!\left(e^{i\omega}\right) = \sum_{n=-\infty}^{\infty} x[n]\, e^{-i\omega n} \qquad (3.30)$$

Notice that when z^{−n} is replaced with e^{−iωn} the z-transform reduces to the Fourier Transform. When the Fourier Transform exists, z = e^{iω}, which is to have the magnitude of z equal to unity.

3.2.2 The Complex Plane

In order to get further insight into the relationship between the Fourier Transform and the Z-Transform it is useful to look at the complex plane or z-plane. Take a look at the complex plane:

Z-Plane

Figure 3.1

The Z-plane is a complex plane with an imaginary and real axis referring to the complex-valued variable z. The position on the complex plane is given by re^{iθ}, and the angle from the positive, real axis around the plane is denoted by θ. X(z) is defined everywhere on this plane. X(e^{iω}) on the other hand is defined only where |z| = 1, which is referred to as the unit circle. So for example, ω = 0 at z = 1 and ω = π at z = −1. This is useful because, by representing the Fourier transform as the z-transform on the unit circle, the periodicity of the Fourier transform is easily seen.

3.2.3 Region of Convergence

The region of convergence, known as the ROC, is important to understand because it defines the region where the z-transform exists. The ROC for a given x[n] is defined as the range of z for which the z-transform converges. Since the z-transform is a power series, it converges when x[n] z^{−n} is absolutely summable. Stated differently,

$$\sum_{n=-\infty}^{\infty} \left| x[n]\, z^{-n} \right| < \infty \qquad (3.31)$$

must be satisfied for convergence. This is best illustrated by looking at the different ROC's of the z-transforms of α^n u[n] and (−α^n) u[−n−1].

Example 3.2

For

$$x[n] = \alpha^n\, u[n] \qquad (3.32)$$

Figure 3.2: x[n] = α^n u[n] where α = 0.5.

$$
\begin{aligned}
X(z) &= \sum_{n=-\infty}^{\infty} x[n]\, z^{-n} = \sum_{n=-\infty}^{\infty} \alpha^n\, u[n]\, z^{-n} \\
&= \sum_{n=0}^{\infty} \alpha^n\, z^{-n} = \sum_{n=0}^{\infty} \left(\alpha z^{-1}\right)^n
\end{aligned} \qquad (3.33)
$$

This sequence is an example of a right-sided exponential sequence because it is nonzero for n ≥ 0. It only converges when |αz^{−1}| < 1. When it converges,

$$X(z) = \frac{1}{1 - \alpha z^{-1}} = \frac{z}{z - \alpha} \qquad (3.34)$$

If |αz^{−1}| ≥ 1, then the series Σ_{n=0}^{∞} (αz^{−1})^n does not converge. Thus the ROC is the range of values where

$$\left| \alpha z^{-1} \right| < 1 \qquad (3.35)$$

or, equivalently,

$$|z| > |\alpha| \qquad (3.36)$$

Figure 3.3: ROC for x[n] = α^n u[n] where α = 0.5

Example 3.3

For

$$x[n] = -\left(\alpha^n\right) u[-n-1] \qquad (3.37)$$

Figure 3.4: x[n] = −(α^n) u[−n−1] where α = 0.5.

$$
\begin{aligned}
X(z) &= \sum_{n=-\infty}^{\infty} x[n]\, z^{-n} = \sum_{n=-\infty}^{\infty} -\left(\alpha^n\right) u[-n-1]\, z^{-n} \\
&= -\sum_{n=-\infty}^{-1} \alpha^n\, z^{-n} = -\sum_{n=-\infty}^{-1} \left(\alpha^{-1} z\right)^{-n} \\
&= -\sum_{n=1}^{\infty} \left(\alpha^{-1} z\right)^{n} = 1 - \sum_{n=0}^{\infty} \left(\alpha^{-1} z\right)^{n}
\end{aligned} \qquad (3.38)
$$

The ROC in this case is the range of values where

$$\left| \alpha^{-1} z \right| < 1 \qquad (3.39)$$

or, equivalently,

$$|z| < |\alpha| \qquad (3.40)$$

If the ROC is satisfied, then

$$X(z) = 1 - \frac{1}{1 - \alpha^{-1} z} = \frac{z}{z - \alpha} \qquad (3.41)$$

Figure 3.5: ROC for x[n] = −(α^n) u[−n−1]

3.3 Table of Common z-Transforms

The table below provides a number of unilateral and bilateral z-transforms (Section 3.2). The table also specifies the region of convergence.

note: The notation for z found in the table below may differ from that found in other tables. For example, the basic z-transform of u[n] can be written as either of the following two expressions, which are equivalent:

$$\frac{z}{z-1} = \frac{1}{1 - z^{-1}} \qquad (3.42)$$

Signal                                      Z-Transform                                  ROC
δ[n−k]                                      z^{−k}                                       all z
u[n]                                        z/(z−1)                                      |z| > 1
−u[−n−1]                                    z/(z−1)                                      |z| < 1
n u[n]                                      z/(z−1)^2                                    |z| > 1
n^2 u[n]                                    z(z+1)/(z−1)^3                               |z| > 1
n^3 u[n]                                    z(z^2+4z+1)/(z−1)^4                          |z| > 1
α^n u[n]                                    z/(z−α)                                      |z| > |α|
−(α^n) u[−n−1]                              z/(z−α)                                      |z| < |α|
n α^n u[n]                                  αz/(z−α)^2                                   |z| > |α|
n^2 α^n u[n]                                αz(z+α)/(z−α)^3                              |z| > |α|
[∏_{k=1}^{m} (n−k+1)/(α^m m!)] α^n u[n]     z/(z−α)^{m+1}                                |z| > |α|
γ^n cos(αn) u[n]                            z(z − γ cos α)/(z^2 − (2γ cos α)z + γ^2)     |z| > |γ|
γ^n sin(αn) u[n]                            zγ sin α/(z^2 − (2γ cos α)z + γ^2)           |z| > |γ|

Table 3.1
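A minimal sketch verifying two of the table rows with Matlab's Symbolic Math Toolbox (assuming that toolbox is available; ztrans computes the unilateral z-transform, so the u[n] factor is implicit):

syms n z a
ztrans(a^n, n, z)   % returns -z/(a - z), i.e., z/(z - a)
ztrans(n, n, z)     % returns z/(z - 1)^2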

3.4 Understanding Pole/Zero Plots on the Z-Plane

3.4.1 Introduction to Poles and Zeros of the Z-Transform

It is quite difficult to qualitatively analyze the Laplace transform and Z-transform (Section 3.2), since mappings of their magnitude and phase or real part and imaginary part result in multiple mappings of 2-dimensional surfaces in 3-dimensional space. For this reason, it is very common to examine a plot of a transfer function's poles and zeros to try to gain a qualitative idea of what a system does.

Once the Z-transform of a system has been determined, one can use the information contained in that function's polynomials to graphically represent the function and easily observe many defining characteristics. The Z-transform will have the below structure, based on Rational Functions:

$$X(z) = \frac{P(z)}{Q(z)} \qquad (3.43)$$

The two polynomials, P(z) and Q(z), allow us to find the poles and zeros of the Z-Transform.

Definition 3.2: zeros
1. The value(s) for z where P(z) = 0.
2. The complex frequencies that make the overall gain of the filter transfer function zero.

Definition 3.3: poles
1. The value(s) for z where Q(z) = 0.
2. The complex frequencies that make the overall gain of the filter transfer function infinite.

Example 3.4

Below is a simple transfer function with the poles and zeros shown below it.

$$H(z) = \frac{z+1}{\left(z - \frac{1}{2}\right)\left(z + \frac{3}{4}\right)}$$

The zeros are: {−1}
The poles are: {1/2, −3/4}

3.4.2 The Z-Plane

Once the poles and zeros have been found for a given Z-Transform, they can be plotted onto the Z-Plane. The Z-plane is a complex plane with an imaginary and real axis referring to the complex-valued variable z. The position on the complex plane is given by re^{iθ} and the angle from the positive, real axis around the plane is denoted by θ. When mapping poles and zeros onto the plane, poles are denoted by an "x" and zeros by an "o". The below figure shows the Z-Plane, and examples of plotting zeros and poles onto the plane can be found in the following section.

Z-Plane

Figure 3.6
3.4.3 Examples of Pole/Zero Plots

This section lists several examples of finding the poles and zeros of a transfer function and then plotting them onto the Z-Plane.

Example 3.5: Simple Pole/Zero Plot

$$H(z) = \frac{z}{\left(z - \frac{1}{2}\right)\left(z + \frac{3}{4}\right)}$$

The zeros are: {0}
The poles are: {1/2, −3/4}

Pole/Zero Plot

Figure 3.7: Using the zeros and poles found from the transfer function, the one zero is mapped to zero and the two poles are placed at 1/2 and −3/4.

Example 3.6: Complex Pole/Zero Plot

$$H(z) = \frac{(z - i)(z + i)}{(z + 1)\left(z - \left(\frac{1}{2} + \frac{1}{2}i\right)\right)\left(z - \left(\frac{1}{2} - \frac{1}{2}i\right)\right)}$$

The zeros are: {i, −i}
The poles are: {−1, 1/2 + 1/2 i, 1/2 − 1/2 i}

Pole/Zero Plot

Figure 3.8: Using the zeros and poles found from the transfer function, the zeros are mapped to ±i, and the poles are placed at −1, 1/2 + 1/2 i and 1/2 − 1/2 i.

Example 3.7: Pole-Zero Cancellation

An easy mistake to make with regards to poles and zeros is to think that a function like ((s+3)(s−1))/(s−1) is the same as s+3. In theory they are equivalent, as the pole and zero at s = 1 cancel each other out in what is known as pole-zero cancellation. However, think about what may happen if this were a transfer function of a system that was created with physical circuits. In this case, it is very unlikely that the pole and zero would remain in exactly the same place. A minor temperature change, for instance, could cause one of them to move just slightly. If this were to occur a tremendous amount of volatility is created in that area, since there is a change from infinity at the pole to zero at the zero in a very small range of signals. This is generally a very bad way to try to eliminate a pole. A much better way is to use control theory to move the pole to a better place.

note: It is possible to have more than one pole or zero at any given point. For instance, the discrete-time transfer function H(z) = z^2 will have two zeros at the origin and the continuous-time function H(s) = 1/s^{25} will have 25 poles at the origin.

MATLAB - If access to MATLAB is readily available, then you can use its functions to easily create pole/zero plots. Below is a short program that plots the poles and zeros from the above example onto the Z-Plane.

% Set up vector for zeros
z = [j ; -j];

% Set up vector for poles
p = [-1 ; .5+.5j ; .5-.5j];

figure(1);
zplane(z,p);
title('Pole/Zero Plot for Complex Pole/Zero Plot Example');

3.4.4 Interactive Demonstration of Poles and Zeros

Figure 3.9

3.4.5 Applications for pole-zero plots

3.4.5.1 Stability and Control theory

Now that we have found and plotted the poles and zeros, we must ask what it is that this plot gives us. Basically what we can gather from this is that the magnitude of the transfer function will be larger when it is closer to the poles and smaller when it is closer to the zeros. This provides us with a qualitative understanding of what the system does at various frequencies and is crucial to the discussion of stability.

3.4.5.2 Pole/Zero Plots and the Region of Convergence

The region of convergence (ROC) for X(z) in the complex Z-plane can be determined from the pole/zero plot. Although several regions of convergence may be possible, where each one corresponds to a different impulse response, there are some choices that are more practical. A ROC can be chosen to make the transfer function causal and/or stable depending on the pole/zero plot.

Filter Properties from ROC

1. If the ROC extends outward from the outermost pole, then the system is causal.
2. If the ROC includes the unit circle, then the system is stable.

Below is a pole/zero plot with a possible ROC of the Z-transform in the Simple Pole/Zero Plot (Example 3.5: Simple Pole/Zero Plot) discussed earlier. The shaded region indicates the ROC chosen for the filter. From this figure, we can see that the filter will be both causal and stable since the above listed conditions are both met.

Example 3.8

$$H(z) = \frac{z}{\left(z - \frac{1}{2}\right)\left(z + \frac{3}{4}\right)}$$

Region of Convergence for the Pole/Zero Plot

Figure 3.10: The shaded area represents the chosen ROC for the transfer function.

3.4.5.3 Frequency Response and Pole/Zero Plots

The reason it is helpful to understand and create these pole/zero plots is due to their ability to help us easily design a filter. Based on the location of the poles and zeros, the magnitude response of the filter can be quickly understood. Also, by starting with the pole/zero plot, one can design a filter and obtain its transfer function very easily.
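A minimal Matlab sketch of this idea for the transfer function of Example 3.5; the 512-point frequency grid is an arbitrary choice:

b = [0 1];                 % numerator in powers of z^{-1}: the zero at z = 0
a = [1 1/4 -3/8];          % denominator: (1 - (1/2)z^{-1})(1 + (3/4)z^{-1})
[H, w] = freqz(b, a, 512); % frequency response on the upper unit circle
plot(w, abs(H))            % |H| peaks near the poles and dips near the zeros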

3.5 Filtering in the Frequency Domain


Because we are interested in actual computations rather than analytic calculations, we must consider the details of the discrete Fourier transform. To compute the length-N DFT, we assume that the signal has a duration less than or equal to N.

Because frequency responses have an explicit frequency-domain specification in terms of filter coefficients, we don't have a direct handle on which signal has a Fourier transform equaling a given frequency response. Finding this signal is quite easy. First of all, note that the discrete-time Fourier transform of a unit sample equals one for all frequencies. Since the input and output of linear, shift-invariant systems are related to each other by Y(e^{i2πf}) = H(e^{i2πf}) X(e^{i2πf}), a unit-sample input, which has X(e^{i2πf}) = 1, results in the output's Fourier transform equaling the system's transfer function.

Exercise 3.5.1 (Solution on p. 147.)
This statement is a very important result. Derive it yourself.

In the time domain, the output for a unit-sample input is known as the system's unit-sample response, and is denoted by h(n). Combining the frequency-domain and time-domain interpretations of a linear, shift-invariant system's unit-sample response, we have that h(n) and the transfer function are Fourier transform pairs in terms of the discrete-time Fourier transform:

h(n) ↔ H(e^{i2πf})    (3.44)

Returning to the issue of how to use the DFT to perform filtering, we can analytically specify the frequency response and derive the corresponding length-N DFT by sampling the frequency response:

∀k, k = {0, ..., N−1} : H(k) = H(e^{i2πk/N})    (3.45)

Computing the inverse DFT yields a length-N signal no matter what the actual duration of the unit-sample response might be. If the unit-sample response has a duration less than or equal to N (it's a FIR filter), computing the inverse DFT of the sampled frequency response indeed yields the unit-sample response. If, however, the duration exceeds N, errors are encountered. The nature of these errors is easily explained by appealing to the Sampling Theorem. By sampling in the frequency domain, we have the potential for aliasing in the time domain (sampling in one domain, be it time or frequency, can result in aliasing in the other) unless we sample fast enough. Here, the duration of the unit-sample response determines the minimal sampling rate that prevents aliasing. For FIR systems (they by definition have finite-duration unit sample responses) the number of required DFT samples equals the unit-sample response's duration: N ≥ q.

Exercise 3.5.2 (Solution on p. 147.)
Derive the minimal DFT length for a length-q unit-sample response using the Sampling Theorem. Because sampling in the frequency domain causes repetitions of the unit-sample response in the time domain, sketch the time-domain result for various choices of the DFT length N.

13 This content is available online at <http://cnx.org/content/m10257/2.18/>. 14 "Discrete-Time Systems in the Frequency Domain", (1) <http://cnx.org/content/m0510/latest/#dtsinf>
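The time-domain aliasing described above is easy to demonstrate numerically. In this hypothetical MATLAB sketch, the DTFT of a length-8 unit-sample response is sampled at only N = 4 frequencies; the inverse DFT then returns the wrapped (aliased) response rather than the original:

h = [1 2 3 4 4 3 2 1]/20;             % a length-8 FIR unit-sample response (q = 8)
N = 4;  k = 0:N-1;                    % sample the DTFT at only N = 4 < q points
Hk = zeros(1, N);
for n = 0:length(h)-1
    Hk = Hk + h(n+1)*exp(-1i*2*pi*k*n/N);   % H(e^{i 2 pi k/N})
end
real(ifft(Hk))                        % equals h(1:4) + h(5:8): time-domain aliasing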


Exercise 3.5.3 (Solution on p. 147.)
Express the unit-sample response of a FIR filter in terms of difference equation coefficients. Note that the corresponding question for IIR filters is far more difficult to answer: consider the example15.

For IIR systems, we cannot use the DFT to find the system's unit-sample response: aliasing of the unit-sample response will always occur. Consequently, we can only implement an IIR filter accurately in the time domain with the system's difference equation. Frequency-domain implementations are restricted to FIR filters.


Another issue arises in frequency-domain filtering that is related to time-domain aliasing, this time when we consider the output. Assume we have an input signal having duration Nx that we pass through a FIR filter having a length-(q + 1) unit-sample response. What is the duration of the output signal? The difference equation for this filter is

y(n) = b0 x(n) + ··· + bq x(n − q)    (3.46)

This equation says that the output depends on current and past input values, with the input value q samples previous defining the extent of the filter's memory of past input values. For example, the output at index Nx depends on x(Nx) (which equals zero), x(Nx − 1), through x(Nx − q). Thus, the output returns to zero only after the last input value passes through the filter's memory. As the input signal's last value occurs at index Nx − 1, the last nonzero output value occurs when n − q = Nx − 1, or n = q + Nx − 1. Thus, the output signal's duration equals q + Nx.

Exercise 3.5.4 (Solution on p. 147.)
In words, we express this result as "The output's duration equals the input's duration plus the filter's duration minus one." Demonstrate the accuracy of this statement.

The main theme of this result is that a filter's output extends longer than either its input or its unit-sample response. Thus, to avoid aliasing when we use DFTs, the dominant factor is not the duration of the input or of the unit-sample response, but of the output. Thus, the number of values at which we must evaluate the frequency response's DFT must be at least q + Nx, and we must compute the same length DFT of the input.
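MATLAB's conv makes this duration bookkeeping concrete (a hypothetical check, not part of the original text):

Nx = 10;  q = 4;
x = ones(1, Nx);                 % input of duration Nx
h = ones(1, q+1)/(q+1);          % length-(q+1) unit-sample response
y = conv(x, h);
length(y)                        % q + Nx = 14, matching the rule above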

To accommodate a signal shorter than the DFT length, we simply zero-pad the input: ensure that for indices extending beyond the signal's duration the signal is zero. Frequency-domain filtering, diagrammed in Figure 3.11, is accomplished by storing the filter's frequency response as the DFT H(k), computing the input's DFT X(k), multiplying them to create the output's DFT Y(k) = H(k) X(k), and computing the inverse DFT of the result to yield y(n).

Figure 3.11: [block diagram: x(n) → DFT → X(k); multiplied by H(k) to form Y(k); → IDFT → y(n)] To filter a signal in the frequency domain, first compute the DFT of the input, multiply the result by the sampled frequency response, and finally compute the inverse DFT of the product. The DFT's length must be at least the sum of the input's and unit-sample response's durations minus one. We calculate these discrete Fourier transforms using the fast Fourier transform algorithm, of course.

Before detailing this procedure, let's clarify why so many new issues arose in trying to develop a frequency-domain implementation of linear filtering. The frequency-domain relationship between a filter's input and output is always true: Y(e^{i2πf}) = H(e^{i2πf}) X(e^{i2πf}). The Fourier transforms in this result are discrete-time Fourier transforms; for example, X(e^{i2πf}) = Σ_n x(n) e^{−i2πfn}. Unfortunately, using this relationship to perform filtering is restricted to the situation when we have analytic formulas for the frequency response and the input signal. The reason why we had to "invent" the discrete Fourier transform (DFT) has the same origin: the spectrum resulting from the discrete-time Fourier transform depends on the continuous frequency variable f. That's fine for analytic calculation, but computationally we would have to make an uncountably infinite number of computations.

note: Did you know that two kinds of infinities can be meaningfully defined? A countably infinite quantity means that it can be associated with a limiting process associated with integers. An uncountably infinite quantity cannot be so associated. The number of rational numbers is countably infinite (the numerator and denominator correspond to locating the rational by row and column; the total number so-located can be counted, voila!); the number of irrational numbers is uncountably infinite. Guess which is "bigger?"

The DFT computes the Fourier transform at a finite set of frequencies (samples the true spectrum), which can lead to aliasing in the time domain unless we sample sufficiently fast. The sampling interval here is 1/K for a length-K DFT: faster sampling to avoid aliasing thus requires a longer transform calculation. Since the longest signal among the input, unit-sample response and output is the output, it is that signal's duration that determines the transform length. We simply extend the other two signals with zeros (zero-pad) to compute their DFTs.

15 "Discrete-Time Systems in the Time-Domain", Example 1 <http://cnx.org/content/m10251/latest/#p0>

Example 3.9
Suppose we want to average daily stock prices taken over last year to yield a running weekly average (average over five trading sessions). The filter we want is a length-5 averager (as shown in the unit-sample response16), and the input's duration is 253 (365 calendar days minus weekend days and holidays). The output duration will be 253 + 5 − 1 = 257, and this determines the transform length we need to use. Because we want to use the FFT, we are restricted to power-of-two transform lengths. We need to choose any FFT length that exceeds the required DFT length. As it turns out, 256 is a power of two (2^8 = 256), and this length just undershoots our required length. To use frequency domain techniques, we must use length-512 fast Fourier transforms.

16 "Discrete-Time

Systems in the Time-Domain", Figure 2 <http://cnx.org/content/m10251/latest/#g1002>

Available for free at Connexions <http://cnx.org/content/col10360/1.4>

129

Figure 3.12: [plot of the Dow-Jones Industrial Average vs. Trading Day (1997), showing the daily values and the weekly average] The blue line shows the Dow Jones Industrial Average from 1997, and the red one the length-5 boxcar-filtered result that provides a running weekly average of this market index. Note the "edge" effects in the filtered output.

Figure 3.12 shows the input and the filtered output. The MATLAB programs that compute the filtered output in the time and frequency domains are

Time Domain

h = [1 1 1 1 1]/5;
y = filter(h,1,[djia zeros(1,4)]);

Frequency Domain

h = [1 1 1 1 1]/5;
DJIA = fft(djia, 512);
H = fft(h, 512);
Y = H.*DJIA;
y = ifft(Y);

note: The filter program has the feature that the length of its output equals the length of its input. To force it to produce a signal having the proper length, the program zero-pads the input appropriately. MATLAB's fft function automatically zero-pads its input if the specified transform length (its second argument) exceeds the signal's length. The frequency domain result will have a small imaginary component (largest value is 2.2 × 10^{−11}) because of the inherent finite precision nature of computer arithmetic.

Because of the unfortunate misfit between signal lengths and favored FFT lengths, the number of arithmetic operations in the time-domain implementation is far less than those required by the frequency domain version: 514 versus 62,271. If the input signal had been one sample shorter, the frequency-domain computations would have been more than a factor of two less (28,696), but far more than in the time-domain implementation.


An interesting signal processing aspect of this example is demonstrated at the beginning and end of the output. The ramping up and down that occurs can be traced to assuming the input is zero before it begins and after it ends. The filter "sees" these initial and final values as the difference equation passes over the input. These artifacts can be handled in two ways: we can just ignore the edge effects, or the data from the previous and succeeding years' last and first weeks, respectively, can be placed at the ends.

3.6 Linear-Phase FIR Filters


3.6.1 THE AMPLITUDE RESPONSE

If the real and imaginary parts of H^f(ω) are given by

H^f(ω) = ℜ(ω) + iℑ(ω)    (3.47)

the magnitude and phase are defined as

|H^f(ω)| = sqrt( (ℜ(ω))^2 + (ℑ(ω))^2 ),    p(ω) = arctan( ℑ(ω)/ℜ(ω) )

so that

H^f(ω) = |H^f(ω)| e^{i p(ω)}    (3.48)

With this definition, |H^f(ω)| is never negative and p(ω) is usually discontinuous, but it can be very helpful to write H^f(ω) as

H^f(ω) = A(ω) e^{iθ(ω)}    (3.49)

where A(ω) can be positive and negative, and θ(ω) continuous. A(ω) is called the amplitude response. Figure 3.13 illustrates the difference between |H^f(ω)| and A(ω).

17 This content is available online at <http://cnx.org/content/m10705/2.3/>.


Figure 3.13

A linear-phase filter is one for which the continuous phase θ(ω) is linear:

H^f(ω) = A(ω) e^{iθ(ω)}

with

θ(ω) = −Mω + B

We assume in the following that the impulse response h(n) is real-valued.

3.6.2 WHY LINEAR-PHASE?

If a discrete-time cosine signal

x1(n) = cos(ω1 n + φ1)

is processed through a discrete-time filter with frequency response

H^f(ω) = A(ω) e^{iθ(ω)}

then the output signal is given by

y1(n) = A(ω1) cos(ω1 n + φ1 + θ(ω1))

or

y1(n) = A(ω1) cos( ω1 ( n + θ(ω1)/ω1 ) + φ1 )

The LTI system has the effect of scaling the cosine signal and delaying it by −θ(ω1)/ω1.

Exercise 3.6.1 (Solution on p. 147.)
When does the system delay cosine signals with different frequencies by the same amount?

The function −θ(ω)/ω is called the phase delay. A linear phase filter therefore has constant phase delay.

3.6.3 WHY LINEAR-PHASE: EXAMPLE

Consider a discrete-time filter described by the difference equation

y(n) = −0.1821 x(n) + 0.7865 x(n−1) − 0.6804 x(n−2) + x(n−3) + 0.6804 y(n−1) − 0.7865 y(n−2) + 0.1821 y(n−3)    (3.50)

When ω1 = 0.31π, the delay is −θ(ω1)/ω1 = 2.45. The delay is illustrated in Figure 3.14:

Figure 3.14

Notice that the delay is fractional: the discrete-time samples are not exactly reproduced in the output. The fractional delay can be interpreted in this case as a delay of the underlying continuous-time cosine signal.

3.6.4 WHY LINEAR-PHASE: EXAMPLE (2)

Consider the same system given on the previous slide, but let us change the frequency of the cosine signal. When ω2 = 0.47π, the delay is −θ(ω2)/ω2 = 0.14.

Figure 3.15

note: For this example, the delay depends on the frequency, because this system does not have linear phase.

3.6.5 WHY LINEAR-PHASE: MORE

From the previous slides, we see that a filter will delay different frequency components of a signal by the same amount if the filter has linear phase (constant phase delay).

In addition, when a narrowband signal (as in AM modulation) goes through a filter, the envelope will be delayed by the group delay or envelope delay of the filter. The amount by which the envelope is delayed is independent of the carrier frequency only if the filter has linear phase.

Also, in applications like image processing, filters with non-linear phase can introduce artifacts that are visually annoying.
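The frequency-dependent delay in the examples above can be checked numerically. Here is a minimal MATLAB sketch (hypothetical; assumes the Signal Processing Toolbox's freqz, with the difference-equation signs as reconstructed in (3.50)):

b = [-0.1821 0.7865 -0.6804 1];    % feed-forward coefficients from (3.50)
a = [1 -0.6804 0.7865 -0.1821];    % feedback coefficients from (3.50)
w = [0.31*pi 0.47*pi];
H = freqz(b, a, w);                % frequency response at the two test frequencies
-angle(H)./w                       % phase delays; they differ, so the phase is not linear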

3.7 Filter Structures18

A realizable filter must require only a finite number of computations per output sample. For linear, causal, time-invariant filters, this restricts one to rational transfer functions of the form

H(z) = ( b0 + b1 z^{−1} + ··· + bm z^{−m} ) / ( 1 + a1 z^{−1} + a2 z^{−2} + ··· + an z^{−n} )

Assuming no pole-zero cancellations, H(z) is FIR if ∀i, i > 0 : (ai = 0), and IIR otherwise. Filter structures usually implement rational transfer functions as difference equations. Whether FIR or IIR, a given transfer function can be implemented with many different filter structures. With infinite-precision data, coefficients, and arithmetic, all filter structures implementing the same transfer function produce the same output. However, different filter structures may produce very different errors with quantized data and finite-precision or fixed-point arithmetic. The computational expense and memory usage may also differ greatly. Knowledge of different filter structures allows DSP engineers to trade off these factors to create the best implementation.
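To make the point concrete, this hypothetical MATLAB fragment implements one transfer function two ways (with the built-in filter routine and as a hand-coded difference equation) and confirms they agree when the arithmetic is effectively infinite-precision (double):

b = [0.2 0.4 0.2];  a = [1 -0.5 0.25];   % an example rational H(z)
x = randn(1, 100);
y1 = filter(b, a, x);                    % library direct-form structure
y2 = zeros(1, 100);                      % explicit difference equation
for n = 1:100
    for k = 1:length(b)                  % feed-forward terms
        if n-k+1 >= 1, y2(n) = y2(n) + b(k)*x(n-k+1); end
    end
    for k = 2:length(a)                  % feedback terms
        if n-k+1 >= 1, y2(n) = y2(n) - a(k)*y2(n-k+1); end
    end
end
max(abs(y1 - y2))                        % agreement up to rounding (~1e-16)

With fixed-point coefficients and states, however, the two computations would generally round differently, which is exactly the trade-off discussed above.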

3.8 Overview of Digital Filter Design19

Advantages of FIR filters

1. Straightforward conceptually and simple to implement
2. Can be implemented with fast convolution
3. Always stable
4. Relatively insensitive to quantization
5. Can have linear phase (same time delay of all frequencies)

Advantages of IIR filters

1. Better for approximating analog systems
2. For a given magnitude response specification, IIR filters often require much less computation than an equivalent FIR, particularly for narrow transition bands

Both FIR and IIR filters are very important in applications.

Generic Filter Design Procedure

1. Choose a desired response, based on application requirements
2. Choose a filter class
3. Choose a quality measure
4. Solve for the filter in class 2 optimizing criterion in 3

18 This content is available online at <http://cnx.org/content/m11917/1.3/>.
19 This content is available online at <http://cnx.org/content/m12776/1.2/>.


3.8.1 Perspective on FIR filtering

Most of the time, people do minimax (L∞) optimal design, using the Parks-McClellan algorithm (Section 3.11). This is probably the second most important technique in "classical" signal processing (after the Cooley-Tukey (radix-220) FFT).

Most of the time, FIR filters are designed to have linear phase. The most important advantage of FIR filters over IIR filters is that they can have exactly linear phase. There are advanced design techniques for minimum-phase filters, constrained L2 optimal designs, etc. (see chapter 8 of text). However, if only the magnitude of the response is important, IIR filters usually require many fewer operations and are typically used, so the bulk of FIR filter design work has concentrated on linear phase designs.

3.9 Window Design Method21

The truncate-and-delay design procedure is the simplest and most obvious FIR design procedure.

Exercise 3.9.1 (Solution on p. 147.)
Is it any good?

3.9.1 L2 optimization criterion

Find h[n], 0 ≤ n ≤ M−1, minimizing the energy difference between the desired response and the actual response: i.e., find

min_{h[n]} ∫_{−π}^{π} |Hd(ω) − H(ω)|² dω

By Parseval's relationship22,

min_{h[n]} ∫_{−π}^{π} |Hd(ω) − H(ω)|² dω = 2π min_{h[n]} Σ_{n=−∞}^{∞} |hd[n] − h[n]|²
  = 2π min_{h[n]} ( Σ_{n=−∞}^{−1} |hd[n] − h[n]|² + Σ_{n=0}^{M−1} |hd[n] − h[n]|² + Σ_{n=M}^{∞} |hd[n] − h[n]|² )    (3.51)

Since h[n] = 0 for n < 0 and n ≥ M, h[n] has no influence on the first and last sums; the best we can do is let

h[n] = hd[n] if 0 ≤ n ≤ M−1; 0 else

Thus h[n] = hd[n] w[n], with the boxcar window

w[n] = 1 if 0 ≤ n ≤ M−1; 0 else

is optimal in a least-total-squared-error (L2, or energy) sense!

Exercise 3.9.2 (Solution on p. 147.)
Why, then, is this design often considered undesirable?

For desired spectra with discontinuities, the least-square designs are poor in a minimax (worst-case, or L∞) error sense.

20 "Decimation-in-time (DIT) Radix-2 FFT" <http://cnx.org/content/m12016/latest/>
21 This content is available online at <http://cnx.org/content/m12790/1.2/>.
22 "Parseval's Theorem" <http://cnx.org/content/m0047/latest/>

3.9.2 Window Design Method

Apply a more gradual truncation to reduce the "ringing" (Gibbs phenomenon23):

h[n] = hd[n] w[n],  0 ≤ n ≤ M−1

note: H(ω) = Hd(ω) ∗ W(ω) (periodic convolution)

The window design procedure (except for the boxcar window) is ad hoc and not optimal in any usual sense. However, it is very simple, so it is sometimes used for "quick-and-dirty" designs or if the error criterion is itself heuristic.
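A minimal sketch of the window design method in MATLAB (hypothetical specifications: an ideal lowpass target with cutoff ωc = π/4 and length M = 21; hamming and sinc assume the Signal Processing Toolbox):

M = 21;  n = 0:M-1;  wc = pi/4;
hd = (wc/pi)*sinc((wc/pi)*(n - (M-1)/2));   % ideal lowpass hd[n], delayed by (M-1)/2
h_boxcar = hd;                              % truncate-and-delay (L2-optimal, but rings)
h_window = hd .* hamming(M)';               % gradual truncation tames the ringing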

3.10 Frequency Sampling Design Method for FIR filters24

Given a desired frequency response, the frequency sampling design method designs a filter with a frequency response exactly equal to the desired response at a particular set of frequencies ωk.

Procedure

∀k, k = [0, 1, ..., N−1] : Hd(ωk) = Σ_{n=0}^{M−1} h(n) e^{−iωk n}    (3.52)

note: The desired response must include a linear phase shift (if linear phase is desired).

Exercise 3.10.1 (Solution on p. 148.)
What is Hd(ω) for an ideal lowpass filter, cutoff at ωc?

note: This set of linear equations can be written in matrix form:

Hd(ωk) = Σ_{n=0}^{M−1} h(n) e^{−iωk n}    (3.53)

or, stacking the samples Hd(ω0), Hd(ω1), ..., Hd(ω_{N−1}) into a vector,

[Hd(ω0), Hd(ω1), ..., Hd(ω_{N−1})]^T = W [h(0), h(1), ..., h(M−1)]^T    (3.54)

where W is the N×M matrix with entries W_{k,n} = e^{−iωk n}; compactly,

Hd = W h

So

h = W^{−1} Hd    (3.55)

note: W is a square matrix for N = M, and invertible as long as ωi ≠ ωj + 2πl for i ≠ j.

23 "Gibbs Phenomena" <http://cnx.org/content/m10092/latest/>
24 This content is available online at <http://cnx.org/content/m12789/1.2/>.

3.10.1 Important Special Case

What if the frequencies are equally spaced between 0 and 2π, i.e., ωk = 2πk/M + α? Then

Hd(ωk) = Σ_{n=0}^{M−1} h(n) e^{−i2πkn/M} e^{−iαn} = Σ_{n=0}^{M−1} ( h(n) e^{−iαn} ) e^{−i2πkn/M} = DFT!

so

h(n) e^{−iαn} = (1/M) Σ_{k=0}^{M−1} Hd(ωk) e^{i2πnk/M}

or

h[n] = (e^{iαn}/M) Σ_{k=0}^{M−1} Hd[k] e^{i2πnk/M} = e^{iαn} IDFT[Hd[k]]
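In MATLAB, this special case is essentially one call to ifft. A hypothetical sketch with α = 0, length M = 15, and lowpass amplitude samples chosen symmetric so that h[n] comes out real:

M = 15;  k = 0:M-1;
A = double(k <= 3 | k >= M-3);           % amplitude samples, with A[k] = A[M-k]
Hd = A .* exp(-1i*2*pi*k*(M-1)/2/M);     % include the linear phase shift, delay (M-1)/2
h = real(ifft(Hd));                      % the imaginary part is only rounding error here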

3.10.2 Important Special Case #2

h[n] symmetric, linear phase, and has real coefficients. Since h[n] = h[M−1−n], there are only M/2 degrees of freedom, and only M/2 linear equations are required.

H[k] = Σ_{n=0}^{M−1} h[n] e^{−iωk n}
     = Σ_{n=0}^{M/2−1} h[n] ( e^{−iωk n} + e^{−iωk(M−n−1)} )  if M even
     = Σ_{n=0}^{(M−3)/2} h[n] ( e^{−iωk n} + e^{−iωk(M−n−1)} ) + h((M−1)/2) e^{−iωk(M−1)/2}  if M odd    (3.56)
     = e^{−iωk(M−1)/2} · 2 Σ_{n=0}^{M/2−1} h[n] cos( ωk((M−1)/2 − n) )  if M even
     = e^{−iωk(M−1)/2} · ( 2 Σ_{n=0}^{(M−3)/2} h[n] cos( ωk((M−1)/2 − n) ) + h((M−1)/2) )  if M odd

Removing the linear phase from both sides yields

A(ωk) = 2 Σ_{n=0}^{M/2−1} h[n] cos( ωk((M−1)/2 − n) )  if M even
      = 2 Σ_{n=0}^{(M−3)/2} h[n] cos( ωk((M−1)/2 − n) ) + h((M−1)/2)  if M odd

Due to symmetry of the response for real coefficients, only M/2 frequencies ωk on [0, π) need be specified, with −ωk thereby being implicitly defined also. Thus we have M/2 real-valued simultaneous linear equations to solve for h[n].

3.10.2.1 Special Case 2a

h[n] symmetric, odd length, linear phase, real coefficients, and the ωk equally spaced: ∀k, 0 ≤ k ≤ M−1 : ωk = 2πk/M. Then

h[n] = IDFT[Hd(ωk)]
     = (1/M) Σ_{k=0}^{M−1} A(ωk) e^{−i(2πk/M)(M−1)/2} e^{i2πnk/M}
     = (1/M) Σ_{k=0}^{M−1} A(k) e^{i(2πk/M)(n − (M−1)/2)}    (3.57)

To yield real coefficients, A(ω) must be symmetric: A(ω) = A(−ω), i.e., A[k] = A[M−k].

h[n] = (1/M) ( A(0) + Σ_{k=1}^{(M−1)/2} A[k] ( e^{i(2πk/M)(n − (M−1)/2)} + e^{−i(2πk/M)(n − (M−1)/2)} ) )
     = (1/M) ( A(0) + 2 Σ_{k=1}^{(M−1)/2} A[k] cos( (2πk/M)(n − (M−1)/2) ) )    (3.58)
     = (1/M) ( A(0) + 2 Σ_{k=1}^{(M−1)/2} A[k] (−1)^k cos( (2πk/M)(n + 1/2) ) )

Similar equations exist for even lengths, anti-symmetric, and α = 1/2 filter forms.

3.10.3 Comments on frequency-sampled design

This method is simple conceptually and very efficient for equally spaced samples, since h[n] can be computed using the IDFT.

H(ω) for a frequency-sampled design goes exactly through the sample points, but it may be very far off from the desired response for ω ≠ ωk. This is the main problem with frequency-sampled design.

Possible solution to this problem: specify more frequency samples than degrees of freedom, and minimize the total error in the frequency response at all of these samples.

3.10.4 Extended frequency sample design

For the samples H(ωk), where 0 ≤ k ≤ N−1 and N > M, find h[n], where 0 ≤ n ≤ M−1, minimizing ||Hd(ωk) − H(ωk)||. For the ℓ∞ norm, this becomes a linear programming problem (standard packages available!). Here we will consider the ℓ2 norm. To minimize the ℓ2 norm, that is, Σ_{k=0}^{N−1} |Hd(ωk) − H(ωk)|², we have an overdetermined set of linear equations:

W h = Hd

where W is the N×M matrix with entries W_{k,n} = e^{−iωk n} and Hd = [Hd(ω0), ..., Hd(ω_{N−1})]^T. The minimum error norm solution is well known to be

h = (W* W)^{−1} W* Hd

(W* W)^{−1} W* is well known as the pseudo-inverse matrix.

note: Extended frequency sampled design discourages radical behavior of the frequency response between samples for sufficiently closely spaced samples. However, the actual frequency response may no longer pass exactly through any of the Hd(ωk).

3.11 Parks-McClellan FIR Filter Design25

The approximation tolerances for a filter are very often given in terms of the maximum, or worst-case, deviation within frequency bands. For example, we might wish a lowpass filter in a (16-bit) CD player to have no more than 1/2-bit deviation in the pass and stop bands:

1 − 1/2^{17} ≤ |H(ω)| ≤ 1 + 1/2^{17}  if  |ω| ≤ ωp
|H(ω)| ≤ 1/2^{17}  if  ωs ≤ |ω| ≤ π

The Parks-McClellan filter design method efficiently designs linear-phase FIR filters that are optimal in terms of worst-case (minimax) error. Typically, we would like to have the shortest-length filter achieving these specifications. Figure 3.16 illustrates the amplitude frequency response of such a filter.

25 This content is available online at <http://cnx.org/content/m12799/1.3/>.

Figure 3.16: The black boxes on the left and right are the passbands, the black boxes in the middle represent the stop band, and the space between the boxes are the transition bands. Note that overshoots may be allowed in the transition bands.

Exercise 3.11.1 (Solution on p. 148.)
Must there be a transition band?

3.11.1 Formal Statement of the L-∞ (Minimax) Design Problem

For a given filter length (M) and type (odd length, symmetric, linear phase, for example), and a relative error weighting function W(ω), find the filter coefficients minimizing the maximum error:

argmin_h max_{ω∈F} |E(ω)| = argmin_h ||E(ω)||_∞

where

E(ω) = W(ω) (Hd(ω) − H(ω))

and F is a compact subset of [0, π] (i.e., all ω in the passbands and stop bands).

note: Typically, we would often rather specify ||E(ω)||_∞ ≤ δ and minimize over M and h; however, the design techniques minimize δ for a given M. One then repeats the design procedure for different M until the minimum M satisfying the requirements is found.

We will discuss in detail the design only of odd-length symmetric linear-phase FIR filters. Even-length and anti-symmetric linear phase FIR filters are essentially the same except for a slightly different implicit weighting function. For arbitrary phase, exactly optimal design procedures have only recently been developed (1990).

3.11.2 Outline of L-∞ Filter Design

The Parks-McClellan method adopts an indirect method for finding the minimax-optimal filter coefficients.

1. Using results from Approximation Theory, simple conditions for determining whether a given filter is L∞ (minimax) optimal are found.
2. An iterative method for finding a filter which satisfies these conditions (and which is thus optimal) is developed.

That is, the filter design problem is actually solved indirectly.

3.11.3 Conditions for L-∞ Optimality of a Linear-phase FIR Filter

All conditions are based on Chebyshev's "Alternation Theorem," a mathematical fact from polynomial approximation theory.

3.11.3.1 Alternation Theorem

Let F be a compact subset of the real axis x, and let P(x) be an Lth-order polynomial

P(x) = Σ_{k=0}^{L} a_k x^k

Also, let D(x) be a desired function of x that is continuous on F, and W(x) a positive, continuous weighting function on F. Define the error E(x) on F as

E(x) = W(x) (D(x) − P(x))

and

||E(x)||_∞ = max_{x∈F} |E(x)|

A necessary and sufficient condition that P(x) is the unique Lth-order polynomial minimizing ||E(x)||_∞ is that E(x) exhibits at least L+2 "alternations"; that is, there must exist at least L+2 values of x, xk ∈ F, k = [0, 1, ..., L+1], such that x0 < x1 < ··· < x_{L+1} and such that E(xk) = −E(xk+1) = ±||E||_∞.

Exercise 3.11.2 (Solution on p. 148.)
What does this have to do with linear-phase filter design?


3.11.4 Optimality Conditions for Even-length Symmetric Linear-phase Filters

For M even,

A(ω) = Σ_{n=0}^{L} h(L − n) cos( ω(n + 1/2) )

where L = M/2 − 1. Using the trigonometric identity cos(α + β) = 2cos(α)cos(β) − cos(α − β) to pull out the ω/2 term, and then using the other trig identities (p. 148), it can be shown that A(ω) can be written as

A(ω) = cos(ω/2) Σ_{k=0}^{L} α_k cos^k(ω)

Again, this is a polynomial in x = cos(ω), except for a weighting function out in front:

E(ω) = W(ω) (Ad(ω) − A(ω))
     = W(ω) (Ad(ω) − cos(ω/2) P(ω))
     = W(ω) cos(ω/2) ( Ad(ω)/cos(ω/2) − P(ω) )    (3.59)

which implies

E(x) = W'(x) ( A'_d(x) − P(x) )    (3.60)

where

W'(x) = W(cos^{−1}(x)) cos( (1/2) cos^{−1}(x) )

and

A'_d(x) = Ad(cos^{−1}(x)) / cos( (1/2) cos^{−1}(x) )

Again, this is a polynomial approximation problem, so the alternation theorem holds. If E(ω) has at least L + 2 = M/2 + 1 alternations, the even-length symmetric filter is optimal in an L∞ sense.

The prototypical filter design problem:

W(ω) = 1 if |ω| ≤ ωp;  δp/δs if ωs ≤ |ω| ≤ π

See Figure 3.17.

Figure 3.17

3.11.5 L-∞ Optimal Lowpass Filter Design Lemma

1. The maximum possible number of alternations for a lowpass filter is L+3. The proof is that the extrema of a polynomial occur only where the derivative is zero: P'(x) = 0. Since P'(x) is an (L−1)th-order polynomial, it can have at most L−1 zeros. However, the mapping x = cos(ω) implies that A'(ω) = 0 at ω = 0 and ω = π, for two more possible alternation points. Finally, the band edges can also be alternations, for a total of L−1+2+2 = L+3 possible alternations.
2. There must be an alternation at either ω = 0 or ω = π.
3. Alternations must occur at ωp and ωs. See Figure 3.17.
4. The filter must be equiripple except at possibly ω = 0 or ω = π. Again see Figure 3.17.

note: The alternation theorem doesn't directly suggest a method for computing the optimal filter. It simply tells us how to recognize that a filter is optimal, or isn't optimal. What we need is an intelligent way of guessing the optimal filter coefficients.


In matrix form, these L+2 simultaneous equations become a square system whose kth row is

[ 1, cos(ωk), cos(2ωk), ..., cos(Lωk), (−1)^k/W(ωk) ]

acting on the unknown vector (h(L), h(L−1), ..., h(1), h(0), δ)^T, with right-hand side Ad(ωk); compactly,

W (h, δ)^T = Ad

So, for the given set of L+2 extremal frequencies, we can solve for h and δ via (h, δ)^T = W^{−1} Ad. Using the FFT, we can compute A(ω) of h(n) on a dense set of frequencies. If the old ωk are, in fact, the extremal locations of A(ω), then the alternation theorem is satisfied and h(n) is optimal. If not, repeat the process with the new extremal locations.

3.11.6 Computational Cost

O(L³) for the matrix inverse and N log2(N) for the FFT (N ≈ 32L, typically), per iteration! This method is expensive computationally due to the matrix inverse.

A more efficient variation of this method was developed by Parks and McClellan (1972), and is based on the Remez exchange algorithm. To understand the Remez exchange algorithm, we first need to understand Lagrange interpolation. A(ω) is an Lth-order polynomial in x = cos(ω), so Lagrange interpolation can be used to exactly compute A(ω) from L+1 samples of A(ωk), k = [0, 1, 2, ..., L].

Thus, given a set of extremal frequencies and knowing δ, samples of the amplitude response A(ω) can be computed directly from

A(ωk) = (−1)^{k−1} δ / W(ωk) + Ad(ωk)    (3.61)

without solving for the filter coefficients! This leads to computational savings. Note that (3.61) is a set of L+2 simultaneous equations, which can be solved for δ to obtain (Rabiner, 1975)

δ = ( Σ_{k=0}^{L+1} γk Ad(ωk) ) / ( Σ_{k=0}^{L+1} γk (−1)^{k−1} / W(ωk) )    (3.62)

where

γk = Π_{i=0, i≠k}^{L+1} 1 / ( cos(ωk) − cos(ωi) )

The result is the Parks-McClellan FIR filter design method, which is simply an application of the Remez exchange algorithm to the filter design problem. See Figure 3.18.

Figure 3.18: The initial guess of extremal frequencies is usually equally spaced in the band. Computing δ costs O(L²). Using Lagrange interpolation costs O(16L²). Computing h(n) costs O(L³), but it is only done once!

The cost per iteration is O(16L²), as opposed to O(L³); much more efficient for large L. Can also interpolate to DFT sample frequencies, take the inverse FFT to get corresponding filter coefficients, and zero-pad and take a longer FFT to efficiently interpolate.

3.12 FIR Filter Design using MATLAB26

3.12.1 FIR Filter Design Using MATLAB

3.12.1.1 Design by windowing

The MATLAB function fir1() designs conventional lowpass, highpass, bandpass, and bandstop linear-phase FIR filters based on the windowing method. The command

b = fir1(N,Wn)

returns in vector b the impulse response of a lowpass filter of order N. The cut-off frequency Wn must be between 0 and 1, with 1 corresponding to half the sampling rate. The command

b = fir1(N,Wn,'high')

returns the impulse response of a highpass filter of order N with normalized cutoff frequency Wn. Similarly,

b = fir1(N,Wn,'stop')

with Wn a two-element vector designating the stopband designs a bandstop filter. Without explicit specification, the Hamming window is employed in the design. Other windowing functions can be used by specifying the windowing function as an extra argument of the function. For example, the Blackman window can be used instead by the command

b = fir1(N, Wn, blackman(N+1))

(the window vector must have length N+1 for an order-N filter).

3.12.1.2 Parks-McClellan FIR filter design

The MATLAB command

b = remez(N,F,A)

returns the impulse response of the length-(N+1) linear phase FIR filter of order N designed by the Parks-McClellan algorithm. F is a vector of frequency band edges in ascending order between 0 and 1, with 1 corresponding to half the sampling rate. A is a real vector of the same size as F which specifies the desired amplitude of the frequency response between the points (F(k),A(k)) and (F(k+1),A(k+1)) for odd k. For odd k, the bands between F(k+1) and F(k+2) are considered transition bands.

26 This content is available online at <http://cnx.org/content/m10917/2.2/>.

3.13 MATLAB FIR Filter Design Exercise27

3.13.1 FIR Filter Design MATLAB Exercise

3.13.1.1 Design by windowing

Exercise 3.13.1 (Solution on p. 148.)
Assuming a sampling rate of 48kHz, design an order-40 low-pass filter having cut-off frequency 10kHz by the windowing method. In your design, use the Hamming window as the windowing function.

3.13.1.2 Parks-McClellan Optimal Design

Exercise 3.13.2 (Solution on p. 148.)
Assuming a sampling rate of 48kHz, design an order-40 lowpass filter having transition band 10kHz-11kHz using the Parks-McClellan optimal FIR filter design algorithm.

27 This content is available online at <http://cnx.org/content/m10918/2.2/>.

Solutions to Exercises in Chapter 3

Solution to Exercise 3.5.1 (p. 126)
The DTFT of the unit sample equals a constant (equaling 1). Thus, the Fourier transform of the output equals the transfer function.

Solution to Exercise 3.5.2 (p. 126)
In sampling a discrete-time signal's Fourier transform L times equally over [0, 2π) to form the DFT, the corresponding signal equals the periodic repetition of the original signal:

S(k) ↔ Σ_{i=−∞}^{∞} s(n − iL)    (3.63)

To avoid aliasing (in the time domain), the transform length must equal or exceed the signal's duration.

Solution to Exercise 3.5.3 (p. 127)
The difference equation for an FIR filter has the form

y(n) = Σ_{m=0}^{q} b_m x(n − m)    (3.64)

The unit-sample response equals

h(n) = Σ_{m=0}^{q} b_m δ(n − m)    (3.65)

which corresponds to the representation described in a problem28 of a length-q boxcar filter.

Solution to Exercise 3.5.4 (p. 127)
The unit-sample response's duration is q + 1 and the signal's is Nx. Thus the statement is correct.

Solution to Exercise 3.6.1 (p. 132)
−θ(ω)/ω = constant, i.e., θ(ω) = Kω: the phase is linear.

Solution to Exercise 3.9.1 (p. 135)
Yes; in fact it's optimal! (in a certain sense)

Solution to Exercise 3.9.2 (p. 136): Gibbs Phenomenon

Figure 3.19: (a) A(ω), small M (b) A(ω), large M

28 "Discrete-Time Systems in the Time-Domain", Example 2 <http://cnx.org/content/m10251/latest/#ex2001>


Solution to Exercise 3.10.1 (p. 136)

Hd(ω) = e^{−iω(M−1)/2} if −ωc ≤ ω ≤ ωc;  0 if (−π ≤ ω < −ωc) or (ωc < ω ≤ π)

Solution to Exercise 3.11.1 (p. 139)
Yes, when the desired response is discontinuous. Since the frequency response of a finite-length filter must be continuous, without a transition band the worst-case error could be no less than half the discontinuity.

Solution to Exercise 3.11.2 (p. 140)
It's the same problem! To show that, consider an odd-length, symmetric linear phase filter:

H(ω) = Σ_{n=0}^{M−1} h(n) e^{−iωn} = e^{−iω(M−1)/2} ( h((M−1)/2) + 2 Σ_{n=1}^{L} h((M−1)/2 − n) cos(ωn) )    (3.66)

A(ω) = h(L) + 2 Σ_{n=1}^{L} h(L − n) cos(ωn)    (3.67)

where L = (M−1)/2. Using trigonometric identities (such as cos(nω) = 2cos((n−1)ω)cos(ω) − cos((n−2)ω)), we can rewrite A(ω) as

A(ω) = h(L) + 2 Σ_{n=1}^{L} h(L − n) cos(ωn) = Σ_{k=0}^{L} α_k cos^k(ω)

where the α_k are related to the h(n) by a linear transformation. Now, let x = cos(ω). This is a one-to-one mapping from x ∈ [−1, 1] onto ω ∈ [0, π]. Thus A(ω) is an Lth-order polynomial in x = cos(ω)!

note: The alternation theorem holds for the L∞ filter design problem, too!

Therefore, to determine whether or not a length-M, odd-length, symmetric linear-phase filter is optimal in an L∞ sense, simply count the alternations in E(ω) = W(ω)(Ad(ω) − A(ω)) in the pass and stop bands. If there are L + 2 = (M+3)/2 or more alternations, h(n), 0 ≤ n ≤ M−1, is the optimal filter!

Solution to Exercise 3.13.1 (p. 146)

b = fir1(40, 10/24)

(Since 1 corresponds to half the 48kHz sampling rate, the 10kHz cutoff normalizes to 10/24.)

Solution to Exercise 3.13.2 (p. 146)

b = remez(40, [0 10/24 11/24 1], [1 1 0 0])

Chapter 4

Multirate Signal Processing

4.1 Upsampling1

4.1.1 Upsampling

The operation of upsampling by factor L ∈ N describes the insertion of L − 1 zeros between every sample of the input signal. This is denoted by "↑(L)" in block diagrams, as in Figure 4.1.

Figure 4.1

Formally, upsampling can be expressed in the time domain as

y[n] = x[n/L] if n/L ∈ Z;  0 otherwise

In the z-domain,

Y(z) = Σ_n y[n] z^{−n} = Σ_{n : n/L ∈ Z} x[n/L] z^{−n} = Σ_k x[k] z^{−kL} = X(z^L)

and substituting z = e^{iω} for the DTFT,

Y(e^{iω}) = X(e^{iωL})    (4.1)

As shown in Figure 4.2, upsampling compresses the DTFT by a factor of L along the ω axis.

1 This content is available online at <http://cnx.org/content/m10403/2.15/>.

Figure 4.2
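In MATLAB, the zero insertion is a single indexing assignment (a hypothetical sketch; the toolbox function upsample does the same thing):

L = 3;  x = [1 2 3 4];
y = zeros(1, L*length(x));
y(1:L:end) = x                 % y = 1 0 0 2 0 0 3 0 0 4 0 0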

4.2 Downsampling2

The operation of downsampling by factor M ∈ N describes the process of keeping every Mth sample and discarding the rest. This is denoted by "↓(M)" in block diagrams, as in Figure 4.3.

Figure 4.3

Formally, downsampling can be written as

y[n] = x[nM]

In the z-domain,

Y(z) = Σ_n y[n] z^{−n} = Σ_n x[nM] z^{−n} = Σ_m x[m] ( (1/M) Σ_{p=0}^{M−1} e^{i2πpm/M} ) z^{−m/M}    (4.2)

where

(1/M) Σ_{p=0}^{M−1} e^{i2πpm/M} = 1 if m is a multiple of M;  0 otherwise

so

Y(z) = (1/M) Σ_{p=0}^{M−1} Σ_m x[m] e^{−i2πpm/M} z^{−m/M} = (1/M) Σ_{p=0}^{M−1} X( e^{−i2πp/M} z^{1/M} )    (4.3)

Translating to the frequency domain,

Y(e^{iω}) = (1/M) Σ_{p=0}^{M−1} X( e^{i(ω − 2πp)/M} )    (4.4)

As shown in Figure 4.4, downsampling expands each 2π-periodic repetition of X(e^{iω}) by a factor of M along the ω axis, and reduces the gain by a factor of M. If x[m] is not bandlimited to π/M, aliasing may result from spectral overlap.

note: When performing a frequency-domain analysis of systems with up/downsamplers, it is strongly recommended to carry out the analysis in the z-domain until the last step, as done above. Working directly in the e^{iω}-domain can easily lead to errors.

2 This content is available online at <http://cnx.org/content/m10441/2.12/>.

Figure 4.4


4.3 Interpolation3

4.3.1 Interpolation

Interpolation is the process of upsampling and filtering a signal to increase its effective sampling rate. To be more specific, say that x[m] is an (unaliased) T-sampled version of xc(t) and v[n] is an L-upsampled version of x[m]. If we filter v[n] with an ideal π/L-bandwidth lowpass filter (with DC gain L) to obtain y[n], then y[n] will be a T/L-sampled version of xc(t). This process is illustrated in Figure 4.5.

Figure 4.5

We justify our claims about interpolation using frequency-domain arguments. From the sampling theorem, we know that T-sampling xc(t) to create x[n] yields

X(e^{iω}) = (1/T) Σ_k Xc( i(ω − 2πk)/T )    (4.5)

After upsampling by factor L, (4.5) implies

V(e^{iω}) = (1/T) Σ_k Xc( i(ωL − 2πk)/T ) = (1/T) Σ_k Xc( i(ω − (2π/L)k) / (T/L) )

Lowpass filtering with cutoff π/L and gain L yields

Y(e^{iω}) = (L/T) Σ_{k : k/L ∈ Z} Xc( i(ω − (2π/L)k) / (T/L) ) = (L/T) Σ_l Xc( i(ω − 2πl) / (T/L) )

since the spectral copies with indices other than k = lL (for l ∈ Z) are removed. Clearly, this process yields a T/L-sampled version of xc(t). Figure 4.6 illustrates these frequency-domain arguments for L = 2.

3 This content is available online at <http://cnx.org/content/m10444/2.14/>.

Figure 4.6

4.4 Application of Interpolation - Oversampling in CD Players4

4.4.1 Application of Interpolation - Oversampling in CD Players

The digital audio signal on a CD is a 44.1kHz sampled representation of a continuous signal with bandwidth 20kHz. With a standard ZOH-DAC, the analog reconstruction filter would have passband edge at 20kHz and stopband edge at 24.1kHz. (See Figure 4.7.) With such a narrow transition band, this would be a difficult (and expensive) filter to build.

Figure 4.7

If digital interpolation is used prior to reconstruction, the effective sampling rate can be increased and the reconstruction filter's transition band can be made much wider, resulting in a much simpler (and cheaper) analog filter. Figure 4.8 illustrates the case of interpolation by 4. The reconstruction filter has passband edge at 20kHz and stopband edge at 156.4kHz, resulting in a much wider transition band and therefore an easier filter design.

4 This content is available online at <http://cnx.org/content/m11006/2.3/>.

Figure 4.8

4.5 Decimation5

Decimation is the process of filtering and downsampling a signal to decrease its effective sampling rate, as illustrated in Figure 4.9. The filtering is employed to prevent aliasing that might otherwise result from downsampling.

Figure 4.9

To be more specific, say that

xc(t) = xl(t) + xb(t)

where xl(t) is a lowpass component bandlimited to 1/(2MT) Hz and xb(t) is a bandpass component with energy between 1/(2MT) and 1/(2T) Hz. If sampling xc(t) with interval T yields an unaliased discrete representation x[m], then decimating x[m] by a factor M will yield y[n], an unaliased MT-sampled representation of the lowpass component xl(t).

We offer the following justification of the previously described decimation procedure. From the sampling theorem, we have

X(e^{iω}) = (1/T) Σ_k Xl( i(ω − 2πk)/T ) + (1/T) Σ_k Xb( i(ω − 2πk)/T )

The bandpass component Xb(iΩ) is then removed by π/M-lowpass filtering, giving

V(e^{iω}) = (1/T) Σ_k Xl( i(ω − 2πk)/T )

Finally, downsampling yields

Y(e^{iω}) = (1/MT) Σ_{p=0}^{M−1} Σ_k Xl( i( (ω − 2πp)/M − 2πk ) / T )
          = (1/MT) Σ_{p=0}^{M−1} Σ_k Xl( i(ω − 2π(kM + p)) / (MT) )
          = (1/MT) Σ_l Xl( i(ω − 2πl) / (MT) )    (4.6)

which is clearly an MT-sampled version of xl(t). A frequency-domain illustration for M = 2 appears in Figure 4.10.

5 This content is available online at <http://cnx.org/content/m10445/2.11/>.

Figure 4.10
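A minimal decimation sketch in MATLAB (hypothetical parameters; fir1 assumes the Signal Processing Toolbox):

M = 4;
x = randn(1, 1000);            % stand-in for the T-sampled input x[m]
h = fir1(64, 1/M);             % anti-alias lowpass with cutoff pi/M
v = filter(h, 1, x);           % filter first, ...
y = v(1:M:end);                % ... then keep every Mth sample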

4.6 Resampling with Rational Factor6

Interpolation by L and decimation by M can be combined to change the effective sampling rate of a signal by the rational factor L/M. This process is called resampling or sample-rate conversion. Rather than cascading an anti-imaging filter for interpolation with an anti-aliasing filter for decimation, we implement one filter with the minimum of the two cutoffs (π/L, π/M) and the multiplication of the two DC gains (L and 1), as illustrated in Figure 4.11.

6 This content is available online at <http://cnx.org/content/m10448/2.11/>.

Figure 4.11
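With the Signal Processing Toolbox, the entire Figure 4.11 chain is available as a single call (a hypothetical usage sketch):

L = 3;  M = 2;
x = randn(1, 500);
y = resample(x, L, M);         % upsample by L, one shared lowpass, downsample by M
% with an explicit filter h, the same chain is y = upfirdn(x, h, L, M);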

4.7 Digital Filter Design for Interpolation and Decimation7

First we treat filter design for interpolation. Consider an input signal x[n] that is ω0-bandlimited in the DTFT domain. If we upsample by factor L to get v[m], the desired portion of V(e^{iω}) is the spectrum in [−π/L, π/L], while the undesired portion is the remainder of [−π, π). Noting from Figure 4.12 that V(e^{iω}) has zero energy in the regions

[ (2πk + ω0)/L , (2π(k+1) − ω0)/L ],  k ∈ Z    (4.7)

the anti-imaging filter can be designed with transition bands in these regions (rather than passbands or stopbands). For a given number of taps, the additional degrees of freedom offered by these transition bands allow for better responses in the passbands and stopbands. The resulting filter design specifications are shown in the bottom subplot below (Figure 4.12).

7 This content is available online at <http://cnx.org/content/m10870/2.6/>.

Figure 4.12

Next we treat filter design for decimation. Say that the desired spectral component of the input signal is bandlimited to ω0/M < π/M and we have decided to downsample by M. The goal is to minimally distort the input spectrum over [−ω0/M, ω0/M], i.e., the post-decimation spectrum over [−ω0, ω0). Thus, we must not allow any aliased signals to enter [−ω0, ω0). To allow for extra degrees of freedom in the filter design, we do allow aliasing to enter the post-decimation spectrum outside of [−ω0, ω0) within [−π, π). Since the input spectral regions which alias outside of [−ω0, ω0) are given by

[ (2πk + ω0)/M , (2π(k+1) − ω0)/M ],  k ∈ Z    (4.8)

(as shown in Figure 4.13), we can treat these regions as transition bands in the filter design. The resulting filter design specifications are illustrated in the middle subplot (Figure 4.13).

Figure 4.13

4.8 Noble Identities8

4.8.1

The Noble identities (illustrated in Figure 4.14 and Figure 4.15) describe when it is possible to reverse the order of upsampling/downsampling and filtering. We prove the Noble identities by showing the equivalence of each pair of block diagrams. The Noble identity for interpolation can be depicted as in Figure 4.14:

Figure 4.14

For the left side of the diagram, we have

Y(z) = H(z^L) V1(z),  where  V1(z) = X(z^L)  ⇒  Y(z) = H(z^L) X(z^L)

while for the right side,

Y(z) = V2(z^L),  where  V2(z) = H(z) X(z)  ⇒  Y(z) = H(z^L) X(z^L)

Thus we have established the Noble identity for interpolation. The Noble identity for decimation can be depicted as in Figure 4.15:

8 This content is available online at <http://cnx.org/content/m10432/2.12/>.

Figure 4.15

For the left side of the preceding diagram, we have

V1(z) = (1/M) Σ_{k=0}^{M−1} X( e^{−i2πk/M} z^{1/M} )

Y(z) = H(z) V1(z) = H(z) (1/M) Σ_{k=0}^{M−1} X( e^{−i2πk/M} z^{1/M} )    (4.9)

while for the right side,

Y(z) = (1/M) Σ_{k=0}^{M−1} V2( e^{−i2πk/M} z^{1/M} )    (4.10)

where V2(z) = X(z) H(z^M), so that

Y(z) = (1/M) Σ_{k=0}^{M−1} X( e^{−i2πk/M} z^{1/M} ) H( e^{−i2πkM/M} (z^{1/M})^M ) = H(z) (1/M) Σ_{k=0}^{M−1} X( e^{−i2πk/M} z^{1/M} )    (4.11)

Thus we have established the Noble identity for decimation. Note that the impulse response of H(z^L) is the L-upsampled impulse response of H(z).
4.9 Polyphase Interpolation9

4.9.1 Polyphase Interpolation Filter

Recall the standard interpolation procedure illustrated in Figure 4.16.

9 This content is available online at <http://cnx.org/content/m10431/2.11/>.

Figure 4.16

Note that this procedure is computationally inefficient because the lowpass filter operates on a sequence that is mostly composed of zeros. Through the use of the Noble identities, it is possible to rearrange the preceding block diagram so that operations on zero-valued samples are avoided.

In order to apply the Noble identity for interpolation, we must transform H(z) into its upsampled polyphase components Hp(z^L), p = {0, ..., L−1}:

H(z) = Σ_n h[n] z^{−n} = Σ_{k=−∞}^{∞} Σ_{p=0}^{L−1} h[kL + p] z^{−(kL+p)}    (4.12)

via k := ⌊n/L⌋, p := n mod L;

H(z) = Σ_{p=0}^{L−1} ( Σ_{k=−∞}^{∞} hp[k] z^{−kL} ) z^{−p}    (4.13)

via hp[k] := h[kL + p];

H(z) = Σ_{p=0}^{L−1} Hp(z^L) z^{−p}    (4.14)

Above, ⌊·⌋ denotes the floor operator and "mod" the modulo operator. Note that the pth polyphase filter hp[k] is constructed by downsampling the "master filter" h[n] at offset p. Using the upsampled polyphase components, the Figure 4.16 diagram can be redrawn as in Figure 4.17.


Figure 4.17

Applying the Noble identity for interpolation to Figure 4.17 yields Figure 4.18. The ladder of upsamplers and delays on the right below (Figure 4.18) accomplishes a form of parallel-to-serial conversion.
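The polyphase decomposition itself is a single reshape in MATLAB. This hypothetical sketch splits a master filter into its L branches and runs each branch at the low rate; interleaving the branch outputs matches filtering the zero-stuffed input:

L = 4;
h = fir1(23, 1/L);                   % master lowpass, length 24 (a multiple of L)
hp = reshape(h, L, []);              % row p+1 holds hp[k] = h[kL + p]
x = randn(1, 50);
y = zeros(L, length(x));
for p = 1:L
    y(p,:) = filter(hp(p,:), 1, x);  % each branch filters at the low rate
end
y = y(:).';                          % parallel-to-serial: interleave the branches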


Figure 4.18

4.10 Polyphase Decimation Filter10

4.10.1 Polyphase Decimation

Recall the standard decimation method in Figure 4.19.

Figure 4.19

Note that this procedure is computationally inefficient because it discards the majority of the computed filter outputs. Through the use of the Noble identities, it is possible to rearrange Figure 4.19 so that filter outputs are not discarded.

In order to apply the Noble identity for decimation, we must transform H(z) into its upsampled polyphase components Hp(z^M), p = {0, ..., M−1}, defined previously in the context of polyphase interpolation (Section 4.9):

H(z) = Σ_n h[n] z^{−n} = Σ_{k=−∞}^{∞} Σ_{p=0}^{M−1} h[kM + p] z^{−(kM+p)}    (4.15)

via k := ⌊n/M⌋, p := n mod M;

H(z) = Σ_{p=0}^{M−1} ( Σ_k hp[k] z^{−kM} ) z^{−p}    (4.16)

via hp[k] := h[kM + p];

H(z) = Σ_{p=0}^{M−1} Hp(z^M) z^{−p}    (4.17)

Using these upsampled polyphase components, the preceding block diagram can be redrawn as Figure 4.20.

10 This content is available online at <http://cnx.org/content/m10433/2.12/>.

Figure 4.20

Applying the Noble identity for decimation to Figure 4.20 yields Figure 4.21. The ladder of delays and downsamplers on the left below accomplishes a form of serial-to-parallel conversion.


Figure 4.21

4.11 Computational Savings of Polyphase Interpolation/Decimation11

4.11.1 Computational Savings of Polyphase Interpolation/Decimation

Assume that we design a FIR LPF H(z) with N taps, requiring N multiplies per output. For standard decimation by factor M, we have N multiplies per intermediate sample and M intermediate samples per output, giving NM multiplies per output.

For polyphase decimation, we have N/M multiplies per branch and M branches, giving a total of N multiplies per output. The assumption of N/M multiplies per branch follows from the fact that h[n] is downsampled by M to create each polyphase filter. Thus, we conclude that the standard implementation requires M times as many operations as its polyphase counterpart. (For decimation, we count multiplies per output, rather than per input, to avoid confusion, since only every Mth input produces an output.)

From this result, it appears that the number of multiplications required by polyphase decimation is independent of the decimation rate M. However, it should be remembered that the length N of the lowpass FIR filter H(z) will typically be proportional to M. This is suggested, e.g., by the Kaiser FIR-length approximation formula

N ≈ ( −10 log10(δp δs) − 13 ) / ( 2.324 Δω )

where Δω is the transition bandwidth in radians, and δp and δs are the passband and stopband ripple levels. Recall that, to preserve a fixed signal bandwidth, the transition bandwidth Δω will be linearly proportional to the cutoff π/M, so that N will be linearly proportional to M. In summary, polyphase decimation by factor M requires N multiplies per output, where N is the filter length, and where N is linearly proportional to M.

Using similar arguments for polyphase interpolation, we could find essentially the same result. Polyphase interpolation by factor L requires N multiplies per input, where N is the filter length, and where N is linearly proportional to the interpolation factor L. (For interpolation we count multiplies per input, rather than per output, to avoid confusion, since L outputs are generated in parallel.)

11 This content is available online at <http://cnx.org/content/m11008/2.2/>.

4.12 Sub-Band Processing12

4.12.1 Why Filterbanks?

4.12.1.1 Sub-band Processing

There exist many applications in modern signal processing where it is advantageous to separate a signal into different frequency ranges called sub-bands. The spectrum might be partitioned in the uniform manner illustrated in Figure 4.22, where the sub-band width Δk = 2π/M is identical for each sub-band and the band centers are uniformly spaced at intervals of 2π/M.
Figure 4.22

Alternatively, the sub-bands might have a logarithmic spacing like that shown in Figure 4.23.

12 This content is available online at <http://cnx.org/content/m10423/2.14/>.

Figure 4.23

For most of our discussion, we will focus on uniformly spaced sub-bands. The separation into sub-band components is intended to make further processing more convenient. Some of the most popular applications for sub-band decomposition are audio and video source coding (with the goal of efficient storage and/or transmission).

Figure 4.24 illustrates the use of sub-band processing in MPEG audio coding. There a psychoacoustic model is used to decide how much quantization error can be tolerated in each sub-band while remaining below the hearing threshold of a human listener. In the sub-bands that can tolerate more error, fewer bits are used for coding. The quantized sub-band signals can then be decoded and recombined to reconstruct (an approximate version of) the input signal. Such processing allows, on average, a 12-to-1 reduction in bit rate while still maintaining "CD quality" audio. The psychoacoustic model takes into account the spectral masking phenomenon of the human ear, which says that high energy in one spectral region will limit the ear's ability to hear details in nearby spectral regions. Therefore, when the energy in one sub-band is high, nearby sub-bands can be coded with fewer bits without degrading the perceived quality of the audio signal. The MPEG standard specifies 32 channels of sub-band filtering. Some psychoacoustic models also take into account "temporal masking" properties of the human ear, which say that a loud burst of sound will temporarily overload the ear for short time durations, making it possible to hide quantization noise in the time interval after a loud sound burst.

Figure 4.24

In typical applications, non-trivial signal processing takes place between the bank of analysis filters and the bank of synthesis filters, as shown in Figure 4.25. We will focus, however, on filterbank design rather than on the processing that occurs between the filterbanks.

Figure 4.25

Our goals in filter design are:

1. Good sub-band frequency separation (i.e., good "frequency selectivity").
2. Good reconstruction (i.e., y[n] ≈ x[n − d] for some integer delay d) when the sub-band processing is lossless.

The first goal is driven by the assumption that the sub-band processing works best when it is given access to cleanly separated sub-band signals, while the second goal is motivated by the idea that the sub-band filtering should not limit the reconstruction performance when the sub-band processing (e.g., the coding/decoding) is lossless or nearly lossless.

4.13 Discrete Wavelet Transform: Main Concepts


4.13.1 Main Concepts

The discrete wavelet transform (DWT) is a representation of a signal x(t) ∈ L² using an orthonormal basis consisting of a countably-infinite set of wavelets. Denoting the wavelet basis as {ψ_{k,n}(t) | k ∈ Z, n ∈ Z}, the DWT transform pair is

x(t) = Σ_{k=−∞}^{∞} Σ_{n=−∞}^{∞} d_{k,n} ψ_{k,n}(t)    (4.18)

d_{k,n} = <ψ_{k,n}(t), x(t)> = ∫ ψ*_{k,n}(t) x(t) dt    (4.19)

where {d_{k,n}} are the wavelet coefficients. Note the relationship to Fourier series and to the sampling theorem: in both cases we can perfectly describe a continuous-time signal x(t) using a countably-infinite (i.e., discrete) set of coefficients. Specifically, Fourier series enabled us to describe periodic signals using Fourier coefficients {X[k] | k ∈ Z}, while the sampling theorem enabled us to describe bandlimited signals using signal samples {x[n] | n ∈ Z}. In both cases, signals within a limited class are represented using a coefficient set with a single countable index. The DWT can describe any signal in L² using a coefficient set parameterized by two countable indices: {d_{k,n} | k ∈ Z, n ∈ Z}.

Wavelets are orthonormal functions in L² obtained by shifting and stretching a mother wavelet, ψ(t) ∈ L². For example,

∀k, n ∈ Z : ψ_{k,n}(t) = 2^{−k/2} ψ(2^{−k} t − n)    (4.20)

defines a family of wavelets {ψ_{k,n}(t) | k ∈ Z, n ∈ Z} related by power-of-two stretches. As k increases, the wavelet stretches by a factor of two; as n increases, the wavelet shifts right.

note: When ||ψ(t)|| = 1, the normalization ensures that ||ψ_{k,n}(t)|| = 1 for all k ∈ Z, n ∈ Z.

Power-of-two stretching is a convenient, though somewhat arbitrary, choice. In our treatment of the discrete wavelet transform, however, we will focus on this choice. Even with power-of-two stretches, there are various possibilities for ψ(t), each giving a different flavor of DWT.

Wavelets are constructed so that {ψ_{k,n}(t) | n ∈ Z} (i.e., the set of all shifted wavelets at fixed scale k) describes a particular level of 'detail' in the signal. As k becomes smaller (i.e., closer to −∞), the wavelets become more "fine grained" and the level of detail increases. In this way, the DWT can give a multi-resolution description of a signal, very useful in analyzing "real-world" signals. Essentially, the DWT gives us a discrete multi-resolution description of a continuous-time signal in L².

In the modules that follow, these DWT concepts will be developed "from scratch" using Hilbert space principles. To aid the development, we make use of the so-called scaling function φ(t) ∈ L², which will be used to approximate the signal up to a particular level of detail. Like with wavelets, a family of scaling functions can be constructed via shifts and power-of-two stretches of a given mother scaling function φ(t):

∀k, n ∈ Z : φ_{k,n}(t) = 2^{−k/2} φ(2^{−k} t − n)    (4.21)

The relationships between wavelets and scaling functions will be elaborated upon later via theory14 and example (Section 4.14).

note: The inner-product expression for d_{k,n}, (4.19), is written for the general complex-valued case. In our treatment of the discrete wavelet transform, however, we will assume real-valued signals and wavelets. For this reason, we omit the complex conjugations in the remainder of our DWT discussions.

13 This content is available online at <http://cnx.org/content/m10436/2.12/>.
4.14 The Haar System as an Example of DWT

The Haar basis is perhaps the simplest example of a DWT basis, and we will frequently refer to it in our DWT development. Keep in mind, however, that the Haar basis is only an example; there are many other ways of constructing a DWT decomposition. For the Haar case, the mother scaling function is defined by (4.22) and Figure 4.26.

$$\phi(t) = \begin{cases} 1 & \text{if } 0 \le t < 1 \\ 0 & \text{otherwise} \end{cases} \qquad (4.22)$$

14 "The Scaling Equation" <http://cnx.org/content/m10476/latest/>
15 This content is available online at <http://cnx.org/content/m10437/2.10/>.


Figure 4.26

From the mother scaling function, we define a family of shifted and stretched scaling functions $\{\phi_{k,n}(t)\}$ according to (4.23) and Figure 4.27.

$$\forall k, n \in \mathbb{Z} : \phi_{k,n}(t) = 2^{-k/2}\, \phi\left(2^{-k} t - n\right) = 2^{-k/2}\, \phi\left(\frac{1}{2^k}\left(t - n 2^k\right)\right) \qquad (4.23)$$

Figure 4.27

These scaling functions are illustrated in Figure 4.28 for various $k$ and $n$. (4.23) makes clear the principle that incrementing $n$ by one shifts the pulse one place to the right. Observe from Figure 4.28 that $\{\phi_{k,n}(t) \mid n \in \mathbb{Z}\}$ is orthonormal for each $k$ (i.e., along each row of figures).


Figure 4.28
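The orthonormality observed along each row of Figure 4.28 is easy to check numerically. The following Matlab fragment is a minimal sketch of ours (not part of the original module): it samples the Haar scaling functions of (4.23) on a fine grid and approximates the inner products by Riemann sums, so the accuracy is set by the grid spacing dt.

% Numerical check of orthonormality for the Haar scaling functions (4.23)
dt  = 1e-3;
t   = 0:dt:8-dt;                                  % time grid covering [0,8)
phi = @(k,n) 2^(-k/2) * ((t - n*2^k >= 0) & (t - n*2^k < 2^k));
k   = 1;
sum(phi(k,0).*phi(k,1))*dt                        % ~0: distinct shifts are orthogonal
sum(phi(k,0).^2)*dt                               % ~1: unit norm

The same check at other scales k gives the same result, since the $2^{-k/2}$ normalization compensates for the $2^k$ support length.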

4.15 Filterbanks Interpretation of the Discrete Wavelet Transform

Assume that we start with a signal $x(t) \in \mathcal{L}_2$. Denote the best approximation at the $0$th level of coarseness by $x_0(t)$. (Recall that $x_0(t)$ is the orthogonal projection of $x(t)$ onto $V_0$.) Our goal, for the moment, is to decompose $x_0(t)$ into scaling coefficients and wavelet coefficients at higher levels. Since $x_0(t) \in V_0$ and $V_0 = V_1 \oplus W_1$, there exist coefficients $\{c_0[n]\}$, $\{c_1[n]\}$, and $\{d_1[n]\}$ such that

$$x_0(t) = \sum_{n} c_0[n]\, \phi_{0,n}(t) = \sum_{n} c_1[n]\, \phi_{1,n}(t) + \sum_{n} d_1[n]\, \psi_{1,n}(t) \qquad (4.24)$$

Using the fact that $\{\phi_{1,n}(t) \mid n \in \mathbb{Z}\}$ is an orthonormal basis for $V_1$, in conjunction with the scaling equation,

$$\begin{aligned} c_1[n] &= \langle x_0(t), \phi_{1,n}(t) \rangle \\ &= \Big\langle \sum_{m} c_0[m]\, \phi_{0,m}(t),\ \phi_{1,n}(t) \Big\rangle \\ &= \sum_{m} c_0[m]\, \langle \phi_{0,m}(t), \phi_{1,n}(t) \rangle \\ &= \sum_{m} c_0[m]\, \Big\langle \phi(t-m),\ \sum_{\ell} h[\ell]\, \phi(t - 2n - \ell) \Big\rangle \\ &= \sum_{m} c_0[m] \sum_{\ell} h[\ell]\, \langle \phi(t-m), \phi(t - 2n - \ell) \rangle \\ &= \sum_{m} c_0[m]\, h[m - 2n] \end{aligned} \qquad (4.25)$$

where $h[m-2n] = \langle \phi(t-m), \phi_{1,n}(t) \rangle$. The previous expression (4.25) indicates that $\{c_1[n]\}$ results from convolving $\{c_0[m]\}$ with a time-reversed version of $h[m]$ then downsampling by factor two (Figure 4.29).

16 This content is available online at <http://cnx.org/content/m10474/2.6/>.

Figure 4.29

Using the fact that $\{\psi_{1,n}(t) \mid n \in \mathbb{Z}\}$ is an orthonormal basis for $W_1$, in conjunction with the wavelet scaling equation,

$$\begin{aligned} d_1[n] &= \langle x_0(t), \psi_{1,n}(t) \rangle \\ &= \Big\langle \sum_{m} c_0[m]\, \phi_{0,m}(t),\ \psi_{1,n}(t) \Big\rangle \\ &= \sum_{m} c_0[m]\, \langle \phi_{0,m}(t), \psi_{1,n}(t) \rangle \\ &= \sum_{m} c_0[m]\, \Big\langle \phi(t-m),\ \sum_{\ell} g[\ell]\, \phi(t - 2n - \ell) \Big\rangle \\ &= \sum_{m} c_0[m] \sum_{\ell} g[\ell]\, \langle \phi(t-m), \phi(t - 2n - \ell) \rangle \\ &= \sum_{m} c_0[m]\, g[m - 2n] \end{aligned} \qquad (4.26)$$

where $g[m-2n] = \langle \phi(t-m), \psi_{1,n}(t) \rangle$. The previous expression (4.26) indicates that $\{d_1[n]\}$ results from convolving $\{c_0[m]\}$ with a time-reversed version of $g[m]$ then downsampling by factor two (Figure 4.30).

Figure 4.30


Putting these two operations together, we arrive at what looks like the analysis portion of an FIR filterbank (Figure 4.31):

Figure 4.31
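As a concrete illustration, the following Matlab sketch implements one level of this analysis filterbank for the Haar case, where one common convention takes h = [1 1]/sqrt(2) and g = [1 -1]/sqrt(2). The coefficient sequence c0 is hypothetical example data of ours; with length-2 filters, (4.25) and (4.26) reduce to pairwise sums and differences followed by the factor-of-two downsampling.

% One level of Haar analysis filtering, per (4.25)-(4.26)
c0 = [4 6 10 12 8 6 5 5];                  % example scaling coefficients at level 0
c1 = (c0(1:2:end) + c0(2:2:end))/sqrt(2);  % c1[n] = sum_m c0[m] h[m-2n]
d1 = (c0(1:2:end) - c0(2:2:end))/sqrt(2);  % d1[n] = sum_m c0[m] g[m-2n]

For a general length-N filter, the same computation is conv(c0, fliplr(h)) followed by keeping every second output sample, which is exactly the "time-reverse, convolve, downsample" structure of Figure 4.31.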

We can repeat this process at the next higher level. Since $V_1 = V_2 \oplus W_2$, there exist coefficients $\{c_2[n]\}$ and $\{d_2[n]\}$ such that

$$x_1(t) = \sum_{n} c_1[n]\, \phi_{1,n}(t) = \sum_{n} c_2[n]\, \phi_{2,n}(t) + \sum_{n} d_2[n]\, \psi_{2,n}(t) \qquad (4.27)$$

Using the same steps as before we find that

$$c_2[n] = \sum_{m} c_1[m]\, h[m - 2n] \qquad (4.28)$$

$$d_2[n] = \sum_{m} c_1[m]\, g[m - 2n] \qquad (4.29)$$

which gives a cascaded analysis filterbank (Figure 4.32):


Figure 4.32

If we use $V_0 = W_1 \oplus W_2 \oplus W_3 \oplus \cdots \oplus W_k \oplus V_k$ to repeat this process up to the $k$th level, we get the iterated analysis filterbank (Figure 4.33).

Figure 4.33

As we might expect, signal reconstruction can be accomplished using cascaded two-channel synthesis filterbanks. Using the same assumptions as before, we have:

$$\begin{aligned} c_0[m] &= \langle x_0(t), \phi_{0,m}(t) \rangle \\ &= \Big\langle \sum_{n} c_1[n]\, \phi_{1,n}(t) + \sum_{n} d_1[n]\, \psi_{1,n}(t),\ \phi_{0,m}(t) \Big\rangle \\ &= \sum_{n} c_1[n]\, \langle \phi_{1,n}(t), \phi_{0,m}(t) \rangle + \sum_{n} d_1[n]\, \langle \psi_{1,n}(t), \phi_{0,m}(t) \rangle \\ &= \sum_{n} c_1[n]\, h[m - 2n] + \sum_{n} d_1[n]\, g[m - 2n] \end{aligned} \qquad (4.30)$$

where $h[m-2n] = \langle \phi_{1,n}(t), \phi_{0,m}(t) \rangle$ and $g[m-2n] = \langle \psi_{1,n}(t), \phi_{0,m}(t) \rangle$, which can be implemented using the block diagram in Figure 4.34.

Figure 4.34
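Continuing the Haar sketch from the analysis example above (again an illustration of ours with hypothetical data), the synthesis equation (4.30) reduces to interleaving sums and differences of the level-1 coefficients, and it reproduces c0 exactly:

% Haar synthesis, per (4.30): reconstruct c0 from (c1, d1)
c0 = [4 6 10 12 8 6 5 5];
c1 = (c0(1:2:end) + c0(2:2:end))/sqrt(2);     % analysis, as before
d1 = (c0(1:2:end) - c0(2:2:end))/sqrt(2);
c0r = zeros(1, 2*length(c1));
c0r(1:2:end) = (c1 + d1)/sqrt(2);             % even-indexed samples
c0r(2:2:end) = (c1 - d1)/sqrt(2);             % odd-indexed samples
max(abs(c0r - c0))                            % ~0: perfect reconstruction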

The same procedure can be used to derive

$$c_1[m] = \sum_{n} c_2[n]\, h[m - 2n] + \sum_{n} d_2[n]\, g[m - 2n] \qquad (4.31)$$

from which we get the diagram in Figure 4.35.


Figure 4.35

To reconstruct from the $k$th level, we can use the iterated synthesis filterbank (Figure 4.36).

Figure 4.36

The table (Table 4.1) makes a direct comparison between wavelets and the two-channel orthogonal PR-FIR filterbanks.

                    Discrete Wavelet Transform                          2-Channel Orthogonal PR-FIR Filterbank
Analysis LPF        $H(z^{-1})$                                         $H_0(z)$
Power Symmetry      $H(z)H(z^{-1}) + H(-z)H(-z^{-1}) = 2$               $H_0(z)H_0(z^{-1}) + H_0(-z)H_0(-z^{-1}) = 1$
Analysis HPF        $G(z^{-1})$                                         $H_1(z)$
Spectral Reverse    $\forall P$, $P$ odd: $G(z) = z^{-P} H(-z^{-1})$    $\forall N$, $N$ even: $H_1(z) = z^{-(N-1)} H_0(-z^{-1})$
Synthesis LPF       $H(z)$                                              $G_0(z) = 2 z^{-(N-1)} H_0(z^{-1})$
Synthesis HPF       $G(z)$                                              $G_1(z) = 2 z^{-(N-1)} H_1(z^{-1})$

Table 4.1

From the table, we see that the discrete wavelet transform that we have been developing is identical to two-channel orthogonal PR-FIR filterbanks in all but a couple of details.

1. Orthogonal PR-FIR filterbanks employ synthesis filters with twice the gain of the analysis filters, whereas in the DWT the gains are equal.
2. Orthogonal PR-FIR filterbanks employ causal filters of length $N$, whereas the DWT filters are not constrained to be causal. For convenience, however, the wavelet filters $H(z)$ and $G(z)$ are usually chosen to be causal. For both to have even impulse response length $N$, we require that $P = N - 1$.

4.16 DWT Application - De-noising

Say that the DWT for a particular choice of wavelet yields an efficient representation of a particular signal class. In other words, signals in the class are well-described using a few large transform coefficients.

Now consider unstructured noise, which cannot be efficiently represented by any transform, including the DWT. Due to the orthogonality of the DWT, such noise sequences make, on average, equal contributions to all transform coefficients. Any given noise sequence is expected to yield many small-valued transform coefficients.

Together, these two ideas suggest a means of de-noising a signal. Say that we perform a DWT on a signal from our well-matched signal class that has been corrupted by additive noise. We expect that large transform coefficients are composed mostly of signal content, while small transform coefficients should be composed mostly of noise content. Hence, throwing away the transform coefficients whose magnitude is less than some small threshold should improve the signal-to-noise ratio. The de-noising procedure is illustrated in Figure 4.37.

Figure 4.37
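The thresholding step itself is a one-line operation. The following Matlab fragment is a minimal sketch (the coefficient vector wcoef holds hypothetical DWT coefficients of ours): it zeroes every coefficient whose magnitude falls below the threshold, exactly as in the example that follows.

% Hard thresholding of DWT coefficients (hypothetical values shown)
wcoef    = [5.1 -0.03 0.02 2.2 0.05 -0.9 0.01 0.04];
thresh   = 0.1;                               % small threshold
wcoef_dn = wcoef .* (abs(wcoef) >= thresh);   % keep only the large coefficients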

Now we give an example of de-noising a step-like waveform using the Haar DWT. In Figure 4.38, the top right subplot shows the noisy signal and the top left shows its DWT coefficients. Note the presence of a few large DWT coefficients, expected to contain mostly signal components, as well as the presence of many small-valued coefficients, expected to contain noise. (The bottom left subplot shows the DWT for the original signal before any noise was added, which confirms that all signal energy is contained within a few large coefficients.) If we throw away all DWT coefficients whose magnitude is less than 0.1, we are left with only the large coefficients (shown in the middle left plot) which correspond to the de-noised time-domain signal shown in the middle right plot. The difference between the de-noised signal and the original noiseless signal is shown in the bottom right. Non-zero error results from noise contributions to the large coefficients; there is no way of distinguishing these noise components from signal components.

17 This content is available online at <http://cnx.org/content/m11000/2.1/>.


Figure 4.38


Chapter 5

Statistical and Adaptive Signal Processing


5.1 Introduction to Random Signals and Processes

Before now, you have probably dealt strictly with the theory behind signals and systems, as well as looked at some of the basic characteristics of signals and systems. In doing so you have developed an important foundation; however, most electrical engineers do not get to work in this type of fantasy world. In many cases the signals of interest are very complex due to the randomness of the world around them, which leaves them noisy and often corrupted. This often causes the information contained in the signal to be hidden and distorted. For this reason, it is important to understand these random signals and how to recover the necessary information.

5.1.1 Signals: Deterministic vs. Stochastic

For this study of signals and systems, we will divide signals into two groups: those that have a fixed behavior and those that change randomly. As most of you have probably already dealt with the first type, we will focus on introducing you to random signals. Also, note that we will be dealing strictly with discrete-time signals since they are the signals we deal with in DSP and most real-world computations, but these same ideas apply to continuous-time signals.

5.1.1.1 Deterministic Signals

Most introductions to signals and systems deal strictly with deterministic signals. Each value of these signals is fixed and can be determined by a mathematical expression, rule, or table. Because of this, future values of any deterministic signal can be calculated from past values. For this reason, these signals are relatively easy to analyze as they do not change, and we can make accurate assumptions about their past and future behavior.

1 This content is available online at <http://cnx.org/content/m10649/2.2/>.
2 "Signal Classifications and Properties" <http://cnx.org/content/m10057/latest/>
3 "System Classifications and Properties" <http://cnx.org/content/m10084/latest/>


Deterministic Signal

Figure 5.1: An example of a deterministic signal, the sine wave.

5.1.1.2 Stochastic Signals

Unlike deterministic signals, stochastic signals, or random signals, are not so nice. Random signals cannot be characterized by a simple, well-defined mathematical equation, and their future values cannot be predicted. Rather, we must use probability and statistics to analyze their behavior. Also, because of their randomness, average values (Section 5.3) from a collection of signals are usually studied rather than analyzing one individual signal.

Random Signal

Figure 5.2: We have taken the above sine wave and added random noise to it to come up with a noisy, or random, signal. These are the types of signals that we wish to learn how to deal with so that we can recover the original sine wave.

5.1.2 Random Process

As mentioned above, in order to study random signals, we want to look at a collection of these signals rather than just one instance of that signal. This collection of signals is called a random process.

Definition 5.1: random process
A family or ensemble of signals that correspond to every possible outcome of a certain signal measurement. Each signal in this collection is referred to as a realization or sample function of the process.

Example
As an example of a random process, let us look at the Random Sinusoidal Process below. We use $f[n] = A \sin(\omega n + \phi)$ to represent the sinusoid with a given amplitude and phase. Note that the phase and amplitude of each sinusoid is based on a random number, thus making this a random process.

Random Sinusoidal Process

Figure 5.3: A random sinusoidal process, with the amplitude and phase being random numbers.

A random process is usually denoted by $X(t)$ or $X[n]$, with $x(t)$ or $x[n]$ used to represent an individual signal or waveform from this process.

In many notes and books, you might see the following notation and terms used to describe different types of random processes. For a discrete random process, sometimes just called a random sequence, $t$ represents time that has a finite number of values. If $t$ can take on any value of time, we have a continuous random process. Often times discrete and continuous refer to the amplitude of the process, and process or sequence refer to the nature of the time variable. For this study, we often just use random process to refer to a general collection of discrete-time signals, as seen above in Figure 5.3 (Random Sinusoidal Process).


5.2 Stationary and Nonstationary Random Processes

5.2.1 Introduction

From the definition of a random process (Section 5.1), we know that all random processes are composed of random variables, each at its own unique point in time. Because of this, random processes have all the properties of random variables, such as mean, correlation, and variance. When dealing with groups of signals or sequences it will be important for us to be able to show whether or not these statistical properties hold true for the entire random process. To do this, the concept of stationary processes has been developed. The general definition of a stationary process is:

Definition 5.2: stationary process
a random process where all of its statistical properties do not vary with time

Processes whose statistical properties do change are referred to as nonstationary.

Understanding the basic idea of stationarity will help you to be able to follow the more concrete and mathematical definition to follow. Also, we will look at various levels of stationarity used to describe the various types of stationarity characteristics a random process can have.

5.2.2 Distribution and Density Functions

In order to properly define what it means to be stationary from a mathematical standpoint, one needs to be somewhat familiar with the concepts of distribution and density functions. If you can remember your statistics then feel free to skip this section!

Recall that when dealing with a single random variable, the probability distribution function is simply a tool used to identify the probability that our observed random variable will be less than or equal to a given number. More precisely, let $X$ be our random variable, and let $x$ be our given value; from this we can define the distribution function as

$$F_x(x) = Pr[X \le x] \qquad (5.1)$$

This same idea can be applied to instances where we have multiple random variables as well. There may be situations where we want to look at the probability of events $X$ and $Y$ both occurring. For example, below is an example of a second-order joint distribution function:

$$F_x(x, y) = Pr[X \le x, Y \le y] \qquad (5.2)$$

While the distribution function provides us with a full view of our variable or processes probability, it is not always the most useful for calculations. Often times we will want to look at its derivative, the probability density function (pdf). We define the pdf as

$$f_x(x) = \frac{d}{dx} F_x(x) \qquad (5.3)$$

$$f_x(x)\, dx = Pr[x < X \le x + dx] \qquad (5.4)$$

(5.4) reveals some of the physical significance of the density function: it tells us that the probability that our random variable falls within a given interval can be approximated by $f_x(x)\, dx$. From the pdf, we can now use our knowledge of integrals to evaluate probabilities from the above approximation. Again we can also define a joint density function which will include multiple random variables just as was done for the distribution function. The density function is used for a variety of calculations, such as finding the expected value or proving a random variable is stationary, to name a few.

4 This content is available online at <http://cnx.org/content/m10684/2.2/>.


note: The above examples explain the distribution and density functions in terms of a single random variable, $X$. When we are dealing with signals and random processes, remember that we will have a set of random variables where a different random variable will occur at each time instance of the random process, $X(t_k)$. In other words, the distribution and density function will also need to take into account the choice of time.

5.2.3 Stationarity

Below we will now look at a more in-depth and mathematical definition of a stationary process. As was mentioned previously, various levels of stationarity exist and we will look at the most common types.

5.2.3.1 First-Order Stationary Process

A random process is classified as first-order stationary if its first-order probability density function remains equal regardless of any shift in time to its time origin. If we let $x_{t_1}$ represent a given value at time $t_1$, then we define a first-order stationary process as one that satisfies the following equation:

$$f_x(x_{t_1}) = f_x(x_{t_1 + \tau}) \qquad (5.5)$$

The physical significance of this equation is that our density function, $f_x(x_{t_1})$, is completely independent of $t_1$ and thus any time shift, $\tau$.

The most important result of this statement, and the identifying characteristic of any first-order stationary process, is the fact that the mean is a constant, independent of any time shift. Below we show the results for a random process, $X$, that is a discrete-time signal, $x[n]$:

$$\bar{X} = m_x[n] = E[x[n]] = \text{constant (independent of } n\text{)} \qquad (5.6)$$

5.2.3.2 Second-Order and Strict-Sense Stationary Process

A random process is classified as second-order stationary if its second-order probability density function does not vary over any time shift applied to both values. In other words, for values $x_{t_1}$ and $x_{t_2}$, we will have the following be equal for an arbitrary time shift $\tau$:

$$f_x(x_{t_1}, x_{t_2}) = f_x(x_{t_1 + \tau}, x_{t_2 + \tau}) \qquad (5.7)$$

From this equation we see that the absolute time does not affect our functions; rather, everything only really depends on the time difference between the two variables. Looked at another way, this equation can be described as

$$Pr[X(t_1) \le x_1, X(t_2) \le x_2] = Pr[X(t_1 + \tau) \le x_1, X(t_2 + \tau) \le x_2] \qquad (5.8)$$

These random processes are often referred to as strict sense stationary (SSS) when all of the distribution functions of the process are unchanged regardless of the time shift applied to them.

For a second-order stationary process, we need to look at the autocorrelation function (Section 5.5) to see its most important property. Since we have already stated that a second-order stationary process depends only on the time difference, all of these types of processes have the following property:

$$R_{xx}(t, t + \tau) = E[X(t)\, X(t + \tau)] = R_{xx}(\tau) \qquad (5.9)$$


5.2.3.3 Wide-Sense Stationary Process

As you begin to work with random processes, it will become evident that the strict requirements of an SSS process are often more than is necessary in order to adequately approximate our calculations on random processes. We define a final type of stationarity, referred to as wide-sense stationary (WSS), to have slightly more relaxed requirements but ones that are still enough to provide us with adequate results. In order to be WSS a random process only needs to meet the following two requirements.

1. $\bar{X} = E[x[n]] = \text{constant}$
2. $E[X(t)\, X(t + \tau)] = R_{xx}(\tau)$

Note that a second-order (or SSS) stationary process will always be WSS; however, the reverse will not always hold true.

5.3 Random Processes: Mean and Variance

In order to study the characteristics of a random process (Section 5.1), let us look at some of the basic properties and operations of a random process. Below we will focus on the operations of the random signals that compose our random processes. We will denote our random process with $X$ and a random variable from a random process or signal by $x$.

5.3.1 Mean Value

Finding the average value of a set of random signals or random variables is probably the most fundamental concept we use in evaluating random processes through any sort of statistical method. The mean of a random process is the average of all realizations of that process. In order to find this average, we must look at a random signal over a range of time (possible values) and determine our average from this set of values. The mean, or average, of a random process, $x(t)$, is given by the following equation:

$$m_x(t) = \bar{x}(t) = \bar{X} = E[X] = \int_{-\infty}^{\infty} x f(x)\, dx \qquad (5.10)$$

This equation may seem quite cluttered at first glance, but we want to introduce you to the various notations used to represent the mean of a random signal or process. Throughout texts and other readings, remember that these will all equal the same thing. The symbol $m_x(t)$ and the $X$ with a bar over it are often used as a short-hand to represent an average, so you might see those in certain textbooks. The other important notation used is $E[X]$, which represents the "expected value of $X$" or the mathematical expectation. This notation is very common and will appear again.

If the random variables, which make up our random process, are discrete or quantized values, such as in a binary process, then the integrals become summations over all the possible values of the random variable. In this case, our expected value becomes

$$E[x[n]] = \sum_{\alpha} \alpha\, Pr[x[n] = \alpha] \qquad (5.11)$$

If we have two random signals or variables, their averages can reveal how the two signals interact. If the product of the two individual averages of both signals equals the average of the product of the two signals, then the two signals are said to be linearly independent, also referred to as uncorrelated.

5 This content is available online at <http://cnx.org/content/m10656/2.3/>.

In the case where we have a random process in which only one sample can be viewed at a time, we will often not have all the information available to calculate the mean using the density function as shown above. In this case we must estimate the mean through the time-average mean (Section 5.3.4: Time Averages), discussed later. For fields such as signal processing that deal mainly with discrete signals and values, these are the averages most commonly used.

5.3.1.1 Properties of the Mean

The expected value of a constant, $\alpha$, is the constant:

$$E[\alpha] = \alpha \qquad (5.12)$$

Adding a constant, $\alpha$, to each term increases the expected value by that constant:

$$E[X + \alpha] = E[X] + \alpha \qquad (5.13)$$

Multiplying the random variable by a constant, $\alpha$, multiplies the expected value by that constant:

$$E[\alpha X] = \alpha E[X] \qquad (5.14)$$

The expected value of the sum of two or more random variables is the sum of each individual expected value:

$$E[X + Y] = E[X] + E[Y] \qquad (5.15)$$

5.3.2 Mean-Square Value

If we look at the second moment of the term (we now look at $x^2$ in the integral), then we will have the mean-square value of our random process. As you would expect, this is written as

$$\bar{X^2} = E[X^2] = \int_{-\infty}^{\infty} x^2 f(x)\, dx \qquad (5.16)$$

This equation is also often referred to as the average power of a process or signal.

5.3.3 Variance

Now that we have an idea about the average value or values that a random process takes, we are often interested in seeing just how spread out the different random values might be. To do this, we look at the variance, which is a measure of this spread. The variance, often denoted by $\sigma^2$, is written as follows:

$$\sigma^2 = \text{Var}(X) = E\left[\left(X - E[X]\right)^2\right] = \int_{-\infty}^{\infty} \left(x - \bar{X}\right)^2 f(x)\, dx \qquad (5.17)$$

Using the rules for the expected value, we can rewrite this formula as the following form, which is commonly seen:

$$\sigma^2 = \bar{X^2} - \left(\bar{X}\right)^2 = E[X^2] - (E[X])^2 \qquad (5.18)$$


5.3.3.1 Standard Deviation

Another common statistical tool is the standard deviation. Once you know how to calculate the variance, the standard deviation, $\sigma$, is simply the square root of the variance.

5.3.3.2 Properties of Variance

The variance of a constant, $\alpha$, equals zero:

$$\text{Var}(\alpha) = \sigma(\alpha)^2 = 0 \qquad (5.19)$$

Adding a constant, $\alpha$, to a random variable does not affect the variance because the mean increases by the same value:

$$\text{Var}(X + \alpha) = \sigma(X + \alpha)^2 = \sigma(X)^2 \qquad (5.20)$$

Multiplying the random variable by a constant, $\alpha$, increases the variance by the square of the constant:

$$\text{Var}(\alpha X) = \sigma(\alpha X)^2 = \alpha^2 \sigma(X)^2 \qquad (5.21)$$

The variance of the sum of two random variables only equals the sum of the variances if the variables are independent:

$$\text{Var}(X + Y) = \sigma(X + Y)^2 = \sigma(X)^2 + \sigma(Y)^2 \qquad (5.22)$$

Otherwise, if the random variables are not independent, then we must also include the covariance of the variables as follows:

$$\text{Var}(X + Y) = \sigma(X)^2 + 2\,\text{Cov}(X, Y) + \sigma(Y)^2 \qquad (5.23)$$

5.3.4 Time Averages

In the case where we can not view the entire ensemble of the random process, we must use time averages to estimate the values of the mean and variance for the process. Generally, this will only give us acceptable results for independent and ergodic processes, meaning those processes in which each signal or member of the process seems to have the same statistical behavior as the entire process. The time averages will also only be taken over a finite interval since we will only be able to see a finite part of the sample.

5.3.4.1 Estimating the Mean

For the ergodic random process, $x(t)$, we will estimate the mean using the time averaging function defined as

$$\bar{X} = E[X] = \frac{1}{T} \int_{0}^{T} X(t)\, dt \qquad (5.24)$$

However, for most real-world situations we will be dealing with discrete values in our computations and signals. We will represent this mean as

$$\bar{X} = E[X] = \frac{1}{N} \sum_{n=1}^{N} X[n] \qquad (5.25)$$


5.3.4.2 Estimating the Variance

Once the mean of our random process has been estimated, we can simply use those values in the following variance equation (introduced in one of the above sections):

$$\bar{\sigma_x^2} = \bar{X^2} - \left(\bar{X}\right)^2 \qquad (5.26)$$

5.3.5 Example

Let us now look at how some of the formulas and concepts above apply to a simple example. We will just look at a single, continuous random variable for this example, but the calculations and methods are the same for a random process. For this example, we will consider a random variable having the probability density function described below and shown in Figure 5.4 (Probability Density Function).

$$f(x) = \begin{cases} \frac{1}{10} & \text{if } 10 \le x \le 20 \\ 0 & \text{otherwise} \end{cases} \qquad (5.27)$$

Probability Density Function

Figure 5.4: A uniform probability density function.

First, we will use (5.10) to solve for the mean value:

$$\bar{X} = \int_{10}^{20} x\, \frac{1}{10}\, dx = \frac{1}{10}\left(\frac{x^2}{2}\bigg|_{x=10}^{20}\right) = \frac{1}{10}(200 - 50) = 15 \qquad (5.28)$$

Using (5.16) we can obtain the mean-square value for the above density function:

$$\bar{X^2} = \int_{10}^{20} x^2\, \frac{1}{10}\, dx = \frac{1}{10}\left(\frac{x^3}{3}\bigg|_{x=10}^{20}\right) = \frac{1}{10}\left(\frac{8000}{3} - \frac{1000}{3}\right) = 233.33 \qquad (5.29)$$

And finally, let us solve for the variance of this function:

$$\sigma^2 = \bar{X^2} - \left(\bar{X}\right)^2 = 233.33 - 15^2 = 8.33 \qquad (5.30)$$
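These three results are easy to verify numerically. The following Matlab sketch of ours evaluates the same integrals (it assumes a release providing the integral function; older releases would use quad instead):

% Numerical check of (5.28)-(5.30) for the uniform pdf on [10,20]
f   = @(x) (1/10)*ones(size(x));              % the pdf
mX  = integral(@(x) x.*f(x),    10, 20)       % mean:        15
mX2 = integral(@(x) x.^2.*f(x), 10, 20)       % mean-square: 233.33
vX  = mX2 - mX^2                              % variance:    8.33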

5.4 Correlation and Covariance of a Random Signal

When we take the expected value (Section 5.3), or average, of a random process (Section 5.1.2: Random Process), we measure several important characteristics about how the process behaves in general. This proves to be a very important observation. However, suppose we have several random processes measuring different aspects of a system. The relationship between these different processes will also be an important observation. The covariance and correlation are two important tools in finding these relationships. Below we will go into more details as to what these words mean and how these tools are helpful. Note that much of the following discussion refers to just random variables, but keep in mind that these variables can represent random signals or random processes.

5.4.1 Covariance

To begin with, when dealing with more than one random process, it should be obvious that it would be nice to be able to have a number that could quickly give us an idea of how similar the processes are. To do this, we use the covariance, which is analogous to the variance of a single variable.

Definition 5.3: Covariance
A measure of how much the deviations of two or more variables or processes match.

For two processes, $X$ and $Y$, if they are not closely related then the covariance will be small, and if they are similar then the covariance will be large. Let us clarify this statement by describing what we mean by "related" and "similar." Two processes are "closely related" if their distribution spreads are almost equal and they are around the same, or a very slightly different, mean.

Mathematically, covariance is often written as $\sigma_{xy}$ and is defined as

$$\text{cov}(X, Y) = \sigma_{xy} = E\left[\left(X - \bar{X}\right)\left(Y - \bar{Y}\right)\right] \qquad (5.31)$$

This can also be reduced and rewritten in the following two forms:

$$\sigma_{xy} = \bar{xy} - \bar{x}\,\bar{y} \qquad (5.32)$$

$$\sigma_{xy} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left(X - \bar{X}\right)\left(Y - \bar{Y}\right) f(x, y)\, dx\, dy \qquad (5.33)$$

6 This content is available online at <http://cnx.org/content/m10673/2.3/>.

5.4.1.1 Useful Properties

If $X$ and $Y$ are independent and uncorrelated, or one of them has zero mean value, then
$$\sigma_{xy} = 0$$

If $X$ and $Y$ are orthogonal, then
$$\sigma_{xy} = -\left(E[X]\, E[Y]\right)$$

The covariance is symmetric:
$$\text{cov}(X, Y) = \text{cov}(Y, X)$$

Basic covariance identity:
$$\text{cov}(X + Y, Z) = \text{cov}(X, Z) + \text{cov}(Y, Z)$$

Covariance of equal variables:
$$\text{cov}(X, X) = \text{Var}(X)$$

5.4.2 Correlation

For anyone who has any kind of statistical background, you should be able to see that the idea of dependence/independence among variables and signals plays an important role when dealing with random processes. Because of this, the correlation of two variables provides us with a measure of how the two variables affect one another.

Definition 5.4: Correlation
A measure of how much one random variable depends upon the other.

This measure of association between the variables will provide us with a clue as to how well the value of one variable can be predicted from the value of the other. The correlation is equal to the average of the product of two random variables and is defined as

$$\text{cor}(X, Y) = E[XY] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} xy\, f(x, y)\, dx\, dy \qquad (5.34)$$

5.4.2.1 Correlation Coefficient

It is often useful to express the correlation of random variables with a range of numbers, like a percentage. For a given set of variables, we use the correlation coefficient to give us the linear relationship between our variables. The correlation coefficient of two variables is defined in terms of their covariance and standard deviations (Section 5.3.3.1: Standard Deviation), denoted by $\sigma_x$, as seen below:

$$\rho = \frac{\text{cov}(X, Y)}{\sigma_x \sigma_y} \qquad (5.35)$$

where we will always have $-1 \le \rho \le 1$.

This provides us with a quick and easy way to view the correlation between our variables. If there is no relationship between the variables then the correlation coefficient will be zero, and if there is a perfect positive match it will be one. If there is a perfect inverse relationship, where one set of variables increases while the other decreases, then the correlation coefficient will be negative one. This type of correlation is often referred to more specifically as the Pearson's Correlation Coefficient, or Pearson's Product Moment Correlation.

Figure 5.5: Types of Correlation: (a) Positive Correlation, (b) Negative Correlation, (c) Uncorrelated (No Correlation)

note: So far we have dealt with correlation simply as a number relating the relationship between any two variables. However, since our goal will be to relate random processes to each other, which deals with signals as a function of time, we will want to continue this study by looking at correlation functions (Section 5.5).

5.4.3 Example

Now let us take just a second to look at a simple example that involves calculating the covariance and correlation of two sets of random numbers. We are given the following data sets:

x = {3, 1, 6, 3, 4}
y = {1, 5, 3, 4, 3}

To begin with, for the covariance we will need to find the expected value (Section 5.3), or mean, of $x$ and $y$:

$$\bar{x} = \frac{1}{5}(3 + 1 + 6 + 3 + 4) = 3.4$$

$$\bar{y} = \frac{1}{5}(1 + 5 + 3 + 4 + 3) = 3.2$$

$$\bar{xy} = \frac{1}{5}(3 + 5 + 18 + 12 + 12) = 10$$

Next we will solve for the standard deviations of our two sets using the formula below (for a review see Section 5.3.3: Variance):

$$\sigma = \sqrt{E\left[\left(X - E[X]\right)^2\right]}$$

$$\sigma_x = \sqrt{\frac{1}{5}(0.16 + 5.76 + 6.76 + 0.16 + 0.36)} = 1.625$$

$$\sigma_y = \sqrt{\frac{1}{5}(4.84 + 3.24 + 0.04 + 0.64 + 0.04)} = 1.327$$

Now we can finally calculate the covariance using one of the two formulas found above. Since we calculated the three means, we will use (5.32) since it will be much simpler:

$$\sigma_{xy} = 10 - 3.4 \cdot 3.2 = -0.88$$

And for our last calculation, we will solve for the correlation coefficient, $\rho$:

$$\rho = \frac{-0.88}{1.625 \cdot 1.327} = -0.408$$

5.4.3.1 Matlab Code for Example

The above example can be easily calculated using Matlab. Below I have included the code to find all of the values above.

x = [3 1 6 3 4];
y = [1 5 3 4 3];

mx  = mean(x)
my  = mean(y)
mxy = mean(x.*y)

% Standard Dev. from built-in Matlab Functions
std(x,1)
std(y,1)

% Standard Dev. from Equation Above (same result as std(?,1))
sqrt( 1/5 * sum((x-mx).^2))
sqrt( 1/5 * sum((y-my).^2))

cov(x,y,1)
corrcoef(x,y)


5.5 Autocorrelation of Random Processes

Before diving into a more complex statistical analysis of random signals and processes (Section 5.1), let us quickly review the idea of correlation (Section 5.4). Recall that the correlation of two signals or variables is the expected value of the product of those two variables. Since our focus will be to discover more about a random process, a collection of random signals, imagine dealing with two samples of a random process, where each sample is taken at a different point in time. Also recall that the key property of these random processes is that they are now functions of time; imagine them as a collection of signals. The expected value (Section 5.3) of the product of these two variables (or samples) will now depend on how quickly they change in regards to time. For example, if the two variables are taken from almost the same time period, then we should expect them to have a high correlation. We will now look at a correlation function that relates a pair of random variables from the same process to the time separations between them, where the argument to this correlation function will be the time difference. For the correlation of signals from two different random processes, look at the crosscorrelation function (Section 5.6).

5.5.1 Autocorrelation Function

The first of these correlation functions we will discuss is the autocorrelation, where each of the random variables we will deal with come from the same random process.

Definition 5.5: Autocorrelation
the expected value of the product of a random variable or signal realization with a time-shifted version of itself

With a simple calculation and analysis of the autocorrelation function, we can discover a few important characteristics about our random process. These include:

1. How quickly our random signal or process changes with respect to the time function
2. Whether our process has a periodic component and what the expected frequency might be

As was mentioned above, the autocorrelation function is simply the expected value of a product. Assume we have a pair of random variables from the same process, $X_1 = X(t_1)$ and $X_2 = X(t_2)$; then the autocorrelation is often written as

$$R_{xx}(t_1, t_2) = E[X_1 X_2] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_1 x_2\, f(x_1, x_2)\, dx_2\, dx_1 \qquad (5.36)$$

The above equation is valid for stationary and nonstationary random processes. For stationary processes (Section 5.2), we can generalize this expression a little further. Given a wide-sense stationary process, it can be proven that the expected values from our random process will be independent of the origin of our time function. Therefore, we can say that our autocorrelation function will depend on the time difference and not some absolute time. For this discussion, we will let $\tau = t_2 - t_1$, and thus we generalize our autocorrelation expression as

$$R_{xx}(t, t + \tau) = R_{xx}(\tau) = E[X(t)\, X(t + \tau)] \qquad (5.37)$$

for the continuous-time case. In most DSP courses we will be more interested in dealing with real signal sequences, and thus we will want to look at the discrete-time case of the autocorrelation function. The formula below will prove to be more common and useful than (5.36):

$$R_{xx}[n, n + m] = \sum_{n=-\infty}^{\infty} x[n]\, x[n + m] \qquad (5.38)$$

And again we can generalize the notation for our autocorrelation function as

$$R_{xx}[n, n + m] = R_{xx}[m] = E[X[n]\, X[n + m]] \qquad (5.39)$$

7 This content is available online at <http://cnx.org/content/m10676/2.4/>.

5.5.1.1 Properties of Autocorrelation

Below we will look at several properties of the autocorrelation function that hold for stationary random processes.

Autocorrelation is an even function of $\tau$:
$$R_{xx}(\tau) = R_{xx}(-\tau)$$

The mean-square value can be found by evaluating the autocorrelation where $\tau = 0$, which gives us
$$R_{xx}(0) = \bar{X^2}$$

The autocorrelation function will have its largest value when $\tau = 0$. This value can appear again, for example in a periodic function at the values of the equivalent periodic points, but will never be exceeded:
$$R_{xx}(0) \ge |R_{xx}(\tau)|$$

If we take the autocorrelation of a periodic function, then $R_{xx}(\tau)$ will also be periodic with the same frequency.

5.5.1.2 Estimating the Autocorrelation with Time-Averaging

Sometimes the whole random process is not available to us. In these cases, we would still like to be able to find out some of the characteristics of the stationary random process, even if we just have part of one sample function. In order to do this we can estimate the autocorrelation from a given interval, $0$ to $T$ seconds, of the sample function:

$$\hat{R}_{xx}(\tau) = \frac{1}{T} \int_{0}^{T} x(t)\, x(t + \tau)\, dt \qquad (5.40)$$

However, a lot of times we will not have sufficient information to build a complete continuous-time function of one of our random signals for the above analysis. If this is the case, we can treat the information we do know about the function as a discrete signal and use the discrete-time formula for estimating the autocorrelation:

$$\hat{R}_{xx}[m] = \frac{1}{N - m} \sum_{n=0}^{N - m - 1} x[n]\, x[n + m] \qquad (5.41)$$

5.5.2 Examples

Below we will look at a variety of examples that use the autocorrelation function. We will begin with a simple example dealing with Gaussian White Noise (GWN) and a few basic statistical properties that will prove very useful in these and future calculations.

Example 5.1
We will let $x[n]$ represent our GWN. For this problem, it is important to remember the following fact about the mean of a GWN function:

$$E[x[n]] = 0$$

Figure 5.6: Gaussian density function. By examination, we can easily see that the above statement is true: the mean equals zero.

Along with being zero-mean, recall that GWN is always independent. With these two facts, we are now ready to do the short calculations required to find the autocorrelation:

$$R_{xx}[n, n + m] = E[x[n]\, x[n + m]]$$

Since the function, $x[n]$, is independent, we can take the product of the individual expected values of both functions when the two samples differ. Now, looking at the above equation we see that we can break it up further into two conditions: one when $n$ and $n + m$ are equal and one when they are not equal. When they are equal we can combine the expected values. We are left with the following piecewise function to solve:

$$R_{xx}[n, n + m] = \begin{cases} E[x[n]]\, E[x[n + m]] & \text{if } m \ne 0 \\ E[x^2[n]] & \text{if } m = 0 \end{cases}$$

We can now solve the two parts of the above equation. The first part is easy to solve as we have already stated that the expected value of $x[n]$ will be zero. For the second part, you should recall from statistics that the expected value of the square of a zero-mean function is equal to the variance. Thus we get the following results for the autocorrelation:

$$R_{xx}[n, n + m] = \begin{cases} 0 & \text{if } m \ne 0 \\ \sigma^2 & \text{if } m = 0 \end{cases}$$

Or in a more concise way, we can represent the results as

$$R_{xx}[n, n + m] = \sigma^2 \delta[m]$$
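We can see this result empirically using the time-average estimator (5.41). The following Matlab sketch of ours generates a long unit-variance GWN sequence and estimates the autocorrelation for a few lags; the estimate is approximately 1 at m = 0 and approximately 0 elsewhere, matching $\sigma^2 \delta[m]$:

% Estimate Rxx[m] for unit-variance GWN using (5.41)
N = 1e5;
x = randn(1,N);                               % zero-mean, unit-variance GWN
Rhat = zeros(1,6);
for m = 0:5
    Rhat(m+1) = sum(x(1:N-m).*x(1+m:N))/(N-m);
end
Rhat                                          % ~[1 0 0 0 0 0], i.e. sigma^2*delta[m]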

5.6 Crosscorrelation of Random Processes

Before diving into a more complex statistical analysis of random signals and processes (Section 5.1), let us quickly review the idea of correlation (Section 5.4). Recall that the correlation of two signals or variables is the expected value of the product of those two variables. Since our main focus is to discover more about random processes, a collection of random signals, we will deal with two random processes in this discussion, where in this case we will deal with samples from two different random processes. We will analyze the expected value (Section 5.3.1: Mean Value) of the product of these two variables and how they correlate to one another, where the argument to this correlation function will be the time difference. For the correlation of signals from the same random process, look at the autocorrelation function (Section 5.5).

8 This content is available online at <http://cnx.org/content/m10686/2.2/>.


5.6.1 Crosscorrelation Function

When dealing with multiple random processes, it is also important to be able to describe the relationship, if any, between the processes. For example, this may occur if more than one random signal is applied to a system. In order to do this, we use the crosscorrelation function, where the variables are instances from two different wide sense stationary random processes.

Definition 5.6: Crosscorrelation
if two processes are wide sense stationary, the expected value of the product of a random variable from one random process with a time-shifted random variable from a different random process

Looking at the generalized formula for the crosscorrelation, we will represent our two random processes by allowing $U = U(t)$ and $V = V(t - \tau)$. We will define the crosscorrelation function as

$$R_{uv}(t, t - \tau) = E[UV] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} uv\, f(u, v)\, dv\, du \qquad (5.42)$$

Just as in the case of the autocorrelation function, if our input and output, denoted as $U(t)$ and $V(t)$, are at least jointly wide sense stationary, then the crosscorrelation does not depend on absolute time; it is just a function of the time difference. This means we can simplify our writing of the above function as

$$R_{uv}(\tau) = E[UV] \qquad (5.43)$$

or, if we deal with two real signal sequences, $x[n]$ and $y[n]$, we arrive at a more commonly seen formula for the discrete crosscorrelation function. See the formula below and notice the similarities between it and the convolution (Section 1.5) of two signals:

$$R_{xy}(n, n - m) = R_{xy}(m) = \sum_{n=-\infty}^{\infty} x[n]\, y[n - m] \qquad (5.44)$$
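The noted similarity to convolution means (5.44) can be computed for finite-length sequences by convolving one sequence with a time-reversed copy of the other. Below is a minimal Matlab sketch with hypothetical sequences of ours; the Signal Processing Toolbox function xcorr computes the same lags directly.

% Discrete crosscorrelation (5.44) via convolution with a reversed copy
x = [1 2 3];  y = [2 1 0];                    % hypothetical example sequences
Rxy  = conv(x, fliplr(y));                    % Rxy(m), m = -(length(y)-1) ... length(x)-1
lags = -(length(y)-1):(length(x)-1);
[lags; Rxy]                                   % e.g. Rxy(0) = 1*2 + 2*1 + 3*0 = 4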

5.6.1.1 Properties of Crosscorrelation

Below we will look at several properties of the crosscorrelation function that hold for two wide sense stationary (WSS) random processes.

Crosscorrelation is not an even function; however, it does have a unique symmetry property:

$$R_{xy}(-\tau) = R_{yx}(\tau) \qquad (5.45)$$

The maximum value of the crosscorrelation is not always when the shift equals zero; however, we can prove the following property revealing to us what value the maximum cannot exceed:

$$|R_{xy}(\tau)| \le \sqrt{R_{xx}(0)\, R_{yy}(0)} \qquad (5.46)$$

When two random processes are statistically independent we have

$$R_{xy}(\tau) = R_{yx}(\tau) \qquad (5.47)$$


5.6.2 Examples

Exercise 5.6.1 (Solution on p. 210.)
Let us begin by looking at a simple example showing the relationship between two sequences. Using (5.44), find the crosscorrelation of the sequences

x[n] = {. . . , 0, 0, 2, 3, 6, 1, 3, 0, 0, . . . }
y[n] = {. . . , 0, 0, 1, 2, 4, 1, 3, 0, 0, . . . }

for each of the following possible time shifts: m = {0, 3, 1}.

5.7 Introduction to Adaptive Filters

In many applications requiring filtering, the necessary frequency response may not be known beforehand, or it may vary with time. (An example: suppression of engine harmonics in a car stereo.) In such applications, an adaptive filter which can automatically design itself and which can track system variations in time is extremely useful. Adaptive filters are used extensively in a wide variety of applications, particularly in telecommunications.

Outline of adaptive filter material

1. Wiener Filters - $\mathcal{L}_2$ optimal (FIR) filter design in a statistical context
2. LMS algorithm - simplest and by-far-the-most-commonly-used adaptive filter algorithm
3. Stability and performance of the LMS algorithm - When and how well it works
4. Applications of adaptive filters - Overview of important applications
5. Introduction to advanced adaptive filter algorithms - Techniques for special situations or faster convergence

5.8 Discrete-Time, Causal Wiener Filter

Stochastic $\mathcal{L}_2$ optimal (least squares) FIR filter design problem: Given a wide-sense stationary (WSS) input signal $x_k$ and desired signal $d_k$ (WSS $\Leftrightarrow$ $E[y_k] = E[y_{k+d}]$, $r_{yz}(l) = E[y_k z_{k+l}]$, $\forall k, l$, with $r_{yy}(0) < \infty$).

9 This content is available online at <http://cnx.org/content/m11535/1.3/>.
10 This content is available online at <http://cnx.org/content/m11825/1.1/>.


Figure 5.7

The Wiener filter is the linear, time-invariant filter minimizing $E[\epsilon^2]$, the variance of the error. As posed, this problem seems slightly silly, since $d_k$ is already available! However, this idea is useful in a wide variety of applications.

Example 5.2
Active suspension system design

Figure 5.8

note: The optimal system may change with different road conditions or mass in the car, so an adaptive system might be desirable.

Example 5.3
System identification (radar, non-destructive testing, adaptive control systems)

Figure 5.9

Exercise 5.8.1
Usually one desires that the input signal $x_k$ be "persistently exciting," which, among other things, implies non-zero energy in all frequency bands. Why is this desirable?

5.8.1 Determining the optimal length-M causal FIR Wiener filter

note: For convenience, we will analyze only the causal, real-data case; extensions are straightforward.

$$y_k = \sum_{l=0}^{M-1} w_l\, x_{k-l}$$

$$\begin{aligned} \mathrm{argmin}_{w_l}\, E[\epsilon^2] &= E\left[(d_k - y_k)^2\right] = E\left[\Big(d_k - \sum_{l=0}^{M-1} w_l x_{k-l}\Big)^2\right] \\ &= E[d_k^2] - 2 \sum_{l=0}^{M-1} w_l\, E[d_k x_{k-l}] + \sum_{l=0}^{M-1} \sum_{m=0}^{M-1} w_l w_m\, E[x_{k-l} x_{k-m}] \\ &= r_{dd}(0) - 2 \sum_{l=0}^{M-1} w_l\, r_{dx}(l) + \sum_{l=0}^{M-1} \sum_{m=0}^{M-1} w_l w_m\, r_{xx}(l - m) \end{aligned}$$

where

$$r_{dd}(0) = E[d_k^2], \qquad r_{dx}(l) = E[d_k x_{k-l}], \qquad r_{xx}(l - m) = E[x_k x_{k+l-m}]$$

This can be written in matrix form as

$$E[\epsilon^2] = r_{dd}(0) - 2 P^T W + W^T R W$$

where

$$P = \begin{pmatrix} r_{dx}(0) \\ r_{dx}(1) \\ \vdots \\ r_{dx}(M-1) \end{pmatrix}, \qquad R = \begin{pmatrix} r_{xx}(0) & r_{xx}(1) & \cdots & r_{xx}(M-1) \\ r_{xx}(1) & r_{xx}(0) & \ddots & \vdots \\ \vdots & \ddots & \ddots & r_{xx}(1) \\ r_{xx}(M-1) & \cdots & r_{xx}(1) & r_{xx}(0) \end{pmatrix}$$

To solve for the optimum filter, compute the gradient with respect to the tap weights vector $W$:

$$\nabla = \frac{\partial E[\epsilon^2]}{\partial W} = \begin{pmatrix} \frac{\partial E[\epsilon^2]}{\partial w_0} \\ \vdots \\ \frac{\partial E[\epsilon^2]}{\partial w_{M-1}} \end{pmatrix} = -2P + 2RW$$

(recall $\frac{d}{dW}\left(A^T W\right) = A$ and $\frac{d}{dW}\left(W^T M W\right) = 2MW$ for symmetric $M$). Setting the gradient equal to zero gives

$$R\, W_{opt} = P \quad \Rightarrow \quad W_{opt} = R^{-1} P$$

Since $R$ is a correlation matrix, it must be non-negative definite, so this is a minimizer. For $R$ positive definite, the minimizer is unique.
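To make the result concrete, the following Matlab sketch estimates R and P from data and then solves for the optimal filter. The setup is a hypothetical system-identification example of ours (a known FIR system wtrue driven by white noise, plus a little measurement noise), so the computed filter should approach wtrue:

% Wiener filter from estimated correlations (illustrative sketch)
N = 1e4;  M = 4;
x = randn(1,N);                               % WSS input (white, for simplicity)
wtrue = [0.5 -0.3 0.2 0.1];                   % hypothetical unknown system
d = filter(wtrue, 1, x) + 0.01*randn(1,N);    % desired signal
rxx = zeros(1,M);  rdx = zeros(1,M);
for l = 0:M-1
    rxx(l+1) = sum(x(1:N-l).*x(1+l:N))/(N-l); % rxx(l) = E[x_k x_{k+l}]
    rdx(l+1) = sum(d(1+l:N).*x(1:N-l))/(N-l); % rdx(l) = E[d_k x_{k-l}]
end
R = toeplitz(rxx);  P = rdx';
Wopt = R\P                                    % approaches wtrue'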

5.9 Practical Issues in Wiener Filter Implementation

The Wiener filter, $W_{opt} = R^{-1} P$, is ideal for many applications. But several issues must be addressed to use it in practice.

Exercise 5.9.1 (Solution on p. 210.)
In practice one usually won't know exactly the statistics of $x_k$ and $d_k$ (i.e. $R$ and $P$) needed to compute the Wiener filter. How do we surmount this problem?

Exercise 5.9.2 (Solution on p. 210.)
In many applications, the statistics of $x_k$, $d_k$ vary slowly with time. How does one develop an adaptive system which tracks these changes over time to keep the system near optimal at all times?

11 This content is available online at <http://cnx.org/content/m11824/1.1/>.


Exercise 5.9.3 (Solution on p. 210.)
How can the correlation estimates $\hat{r}_{xx}(l)$ be computed efficiently?

Exercise 5.9.4
How does one choose $N$?

5.9.1 Tradeoffs

Larger $N$ yields more accurate estimates of the correlation values and therefore a better $W_{opt}$. However, larger $N$ also leads to slower adaptation.

note: The success of adaptive systems depends on $x$, $d$ being roughly stationary over at least $N$ samples, $N > M$. That is, all adaptive filtering algorithms require that the underlying system varies slowly with respect to the sampling rate and the filter length (although they can tolerate occasional step discontinuities in the underlying system).

5.9.2 Computational Considerations

As presented here, an adaptive filter requires computing a matrix inverse at each sample. Actually, since the matrix $R$ is Toeplitz, the linear system of equations can be solved with $O(M^2)$ computations using Levinson's algorithm, where $M$ is the filter length. However, in many applications this may be too expensive, especially since computing the filter output itself requires $O(M)$ computations. There are two main approaches to resolving the computation problem:

1. Take advantage of the fact that $R^{k+1}$ is only slightly changed from $R^k$ to reduce the computation to $O(M)$; these algorithms are called Fast Recursive Least Squares algorithms; all methods proposed so far have stability problems and are dangerous to use.
2. Find a different approach to solving the optimization problem that doesn't require explicit inversion of the correlation matrix.

note: Adaptive algorithms involving the correlation matrix are called Recursive Least Squares (RLS) algorithms. Historically, they were developed after the LMS algorithm, which is the simplest and most widely used approach ($O(M)$). $O(M^2)$ RLS algorithms are used in applications requiring very fast adaptation.

5.10 Quadratic Minimization and Gradient Descent

5.10.1 Quadratic minimization problems

The least squares optimal filter design problem is quadratic in the filter coefficients:

$$E[\epsilon^2] = r_{dd}(0) - 2 P^T W + W^T R W$$

If $R$ is positive definite, the error surface $E[\epsilon^2](w_0, w_1, \ldots, w_{M-1})$ is a unimodal "bowl" in $\mathbb{R}^N$.

12 This content is available online at <http://cnx.org/content/m11826/1.2/>.

Figure 5.10

The problem is to find the bottom of the bowl. In an adaptive filter context, the shape and bottom of the bowl may drift slowly with time, hopefully slowly enough that the adaptive algorithm can track it. For a quadratic error surface, the bottom of the bowl can be found in one step by computing $R^{-1} P$. Most modern nonlinear optimization methods (which are used, for example, to solve the optimal IIR filter design problem!) locally approximate a nonlinear function with a second-order (quadratic) Taylor series approximation and step to the bottom of this quadratic approximation on each iteration. However, an older and simpler approach to nonlinear optimization exists, based on gradient descent.

Contour plot of epsilon-squared

Figure 5.11

The idea is to iteratively find the minimizer by computing the gradient of the error function: $\nabla E$, with components $\frac{\partial E[\epsilon^2]}{\partial w_i}$.

The gradient is a vector in

RM

pointing in the steepest uphill direction on the error surface at a given point

W i,

with

having a magnitude proportional to the slope of the error surface in this steepest direction.

By updating the coefficient vector by taking a step opposite the gradient direction,

$$W^{i+1} = W^i - \mu \nabla^i,$$

we go (locally) "downhill" in the steepest direction, which seems to be a sensible way to iteratively solve a nonlinear optimization problem. The performance obviously depends on $\mu$; if $\mu$ is too large, the iterations could bounce back and forth, up and out of the bowl, but if $\mu$ is too small, it could take many iterations to approach the bottom. We will determine criteria for choosing $\mu$ later.

In summary, the gradient descent algorithm for solving the Wiener filter problem is:

Guess $W^0$
do $i = 1, 2, \dots$
    $\nabla^i = -2P + 2RW^i$
    $W^{i+1} = W^i - \mu \nabla^i$
repeat
$W_{opt} = W^{\infty}$

The gradient descent idea is used in the LMS adaptive filter algorithm. As presented, this algorithm costs $O\left(M^2\right)$ computations per iteration and doesn't appear very attractive, but LMS requires only $O(M)$ computations and is stable, so it is very attractive when computation is an issue, even though it converges more slowly than the RLS algorithms we have discussed so far.
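For concreteness, here is a minimal sketch of the gradient descent iteration above; the statistics $R$ and $P$ are made-up example values, not from the text. For this quadratic error surface the iteration converges to the Wiener solution $R^{-1}P$ when $\mu$ is small enough.

import numpy as np

# Made-up example statistics: a positive definite R and a vector P.
R = np.array([[2.0, 0.5],
              [0.5, 1.0]])
P = np.array([1.0, 0.3])
mu = 0.1                          # step size

W = np.zeros(2)                   # initial guess W^0
for i in range(200):
    grad = -2 * P + 2 * R @ W     # gradient of the quadratic error surface
    W = W - mu * grad             # step opposite the gradient direction

print(W, np.linalg.solve(R, P))   # W approaches the Wiener solution R^{-1} P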

5.11 The LMS Adaptive Filter Algorithm


Recall the Wiener filter problem[13]:

Figure 5.12

[13] This content is available online at <http://cnx.org/content/m11829/1.1/>.


Find $W$ minimizing $E\left[\epsilon_k^2\right]$, where $\{x_k\}$ and $\{d_k\}$ are jointly wide sense stationary, and

$$\epsilon_k = d_k - y_k = d_k - \sum_{i=0}^{M-1} w_i x_{k-i} = d_k - X_k{}^T W_k$$

$$X_k = \begin{pmatrix} x_k \\ x_{k-1} \\ \vdots \\ x_{k-M+1} \end{pmatrix}, \qquad W_k = \begin{pmatrix} w_0^k \\ w_1^k \\ \vdots \\ w_{M-1}^k \end{pmatrix}$$

The superscript denotes absolute time, and the subscript denotes time or a vector index. The solution can be found by setting the gradient to zero:

$$\nabla_k = \frac{\partial E\left[\epsilon_k^2\right]}{\partial W} = E\left[2\epsilon_k\left(-X_k\right)\right] = E\left[-2\left(d_k - X_k{}^T W\right)X_k\right] = -2E\left[d_k X_k\right] + 2E\left[X_k X_k{}^T\right]W = -2P + 2RW$$ (5.48)

$$\Rightarrow\; W_{opt} = R^{-1}P$$

Alternatively, $W_{opt}$ can be found iteratively using a gradient descent technique:

$$W_{k+1} = W_k - \mu \nabla_k$$
In practice we don't know $R$ and $P$ exactly, and in an adaptive context they may be slowly varying with time. To find the (approximate) Wiener filter, some approximations are necessary. As always, the key is to make the right approximations!

note: Approximate $R$ and $P$: use the RLS methods, as discussed last time.

note: Approximate the gradient:

$$\nabla_k = \frac{\partial E\left[\epsilon_k^2\right]}{\partial W}$$

Note that $\epsilon_k^2$ itself is a very noisy approximation to $E\left[\epsilon_k^2\right]$. We can get a noisy approximation to the gradient by finding the gradient of $\epsilon_k^2$! Widrow and Hoff first published the LMS algorithm, based on this clever idea, in 1960.

$$\frac{\partial \epsilon_k^2}{\partial W} = 2\epsilon_k \frac{\partial}{\partial W}\left(d_k - W_k{}^T X_k\right) = 2\epsilon_k\left(-X_k\right) = -2\epsilon_k X_k$$

This yields the LMS adaptive filter algorithm.

Example 5.4: The LMS Adaptive Filter Algorithm



1. $y_k = W_k{}^T X_k = \sum_{i=0}^{M-1} w_i^k x_{k-i}$
2. $\epsilon_k = d_k - y_k$
3. $W_{k+1} = W_k - \mu \nabla_k = W_k - \mu\left(-2\epsilon_k X_k\right) = W_k + 2\mu\epsilon_k X_k$ (that is, $w_i^{k+1} = w_i^k + 2\mu\epsilon_k x_{k-i}$)

The LMS algorithm is often called a stochastic gradient algorithm, since $\nabla_k$ is a noisy gradient. This is by far the most commonly used adaptive filtering algorithm, because

1. it was the first
2. it is very simple
3. in practice it works well (except that sometimes it converges slowly)
4. it requires relatively little computation
5. it updates the tap weights every sample, so it continually adapts the filter
6. it tracks slow changes in the signal statistics well
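As a concrete illustration of steps 1-3 above, here is a minimal LMS sketch in a system-identification setting; the "unknown" filter h_true, the signals, and the value of $\mu$ are made-up illustration choices, not part of the original text.

import numpy as np

rng = np.random.default_rng(0)
M = 4                                     # adaptive filter length
h_true = np.array([0.5, -0.3, 0.2, 0.1])  # "unknown" system to identify
x = rng.standard_normal(1000)             # input signal x_k
d = np.convolve(x, h_true)[:len(x)]       # desired signal d_k
mu = 0.01                                 # convergence weight factor

W = np.zeros(M)
for k in range(M - 1, len(x)):
    X_k = x[k - M + 1:k + 1][::-1]        # X_k = (x_k, x_{k-1}, ..., x_{k-M+1})
    y_k = W @ X_k                         # 1. filter output
    e_k = d[k] - y_k                      # 2. error epsilon_k
    W = W + 2 * mu * e_k * X_k            # 3. LMS coefficient update

print(W)                                  # approaches h_true, the Wiener solution here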

5.11.1 Computational Cost of LMS


To compute       multiplies    adds
$y_k$            $M$           $M-1$
$\epsilon_k$     $0$           $1$
$W_{k+1}$        $M+1$         $M$
Total            $2M+1$        $2M$

Table 5.1

So the LMS algorithm is $O(M)$ per sample. In fact, it is nicely balanced in that the filter computation and the adaptation require the same amount of computation.

Note that the parameter $\mu$ plays a very important role in the LMS algorithm. It can also be varied with time, but usually a constant $\mu$ ("convergence weight factor") is used, chosen after experimentation for a given application.

5.11.1.1 Tradeoffs

large $\mu$: fast convergence, fast adaptivity
small $\mu$: accurate $W \Rightarrow$ less misadjustment error, stability

5.12 First Order Convergence Analysis of the LMS Algorithm


5.12.1 Analysis of the LMS algorithm[14]

It is important to analyze the LMS algorithm to determine under what conditions it is stable, whether or not it converges to the Wiener solution, how quickly it converges, how much degradation is suffered due to the noisy gradient, and so on. In particular, we need to know how to choose the parameter $\mu$.

[14] This content is available online at <http://cnx.org/content/m11830/1.1/>.

5.12.1.1 Mean of W
Does $\overline{W_k}$, as $k \rightarrow \infty$, approach the Wiener solution? (Since $W_k$ is always somewhat random in the approximate gradient-based LMS algorithm, we ask whether the expected value of the filter coefficients converges to the Wiener solution.)

$$\overline{W_{k+1}} = E\left[W_{k+1}\right] = E\left[W_k + 2\mu\epsilon_k X_k\right] = \overline{W_k} + 2\mu E\left[d_k X_k\right] - 2\mu E\left[\left(W_k{}^T X_k\right)X_k\right] = \overline{W_k} + 2\mu P - 2\mu E\left[\left(W_k{}^T X_k\right)X_k\right]$$ (5.49)

5.12.1.1.1 Patently False Assumption


$X_k$ and $X_{k-i}$, $X_k$ and $d_{k-i}$, and $d_k$ and $d_{k-i}$ are statistically independent for $i \neq 0$. This assumption is obviously false, since $X_{k-1}$ is the same as $X_k$ except for shifting down the vector elements one place and adding one new sample.

We make this assumption because otherwise it becomes extremely difficult to analyze the LMS algorithm. (The first good analysis not making this assumption is Macchi and Eweda [1].) Many simulations and much practical experience have shown that the results one obtains with analyses based on the patently false assumption above are quite accurate in most situations.

With the independence assumption, $W_k$ (which depends only on previous $X_{k-i}$, $d_{k-i}$) is statistically independent of $X_k$, and we can simplify $E\left[\left(W_k{}^T X_k\right)X_k\right]$. Now $\left(W_k{}^T X_k\right)X_k$ is a vector, and (showing the $j$-th entry)

$$E\left[\left(W_k{}^T X_k\right)X_k\right] = \begin{pmatrix} \vdots \\ E\left[\sum_{i=0}^{M-1} w_i^k x_{k-i} x_{k-j}\right] \\ \vdots \end{pmatrix} = \begin{pmatrix} \vdots \\ \sum_{i=0}^{M-1} E\left[w_i^k\right] E\left[x_{k-i} x_{k-j}\right] \\ \vdots \end{pmatrix} = \begin{pmatrix} \vdots \\ \sum_{i=0}^{M-1} \overline{w_i^k}\; r_{xx}(i-j) \\ \vdots \end{pmatrix} = R\,\overline{W_k}$$ (5.50)

where $R = E\left[X_k X_k{}^T\right]$ is the data correlation matrix. Putting this back into our equation,

$$\overline{W_{k+1}} = \overline{W_k} + 2\mu P - 2\mu R\,\overline{W_k} = \left(I - 2\mu R\right)\overline{W_k} + 2\mu P$$ (5.51)


Now, if $\overline{W_k}$ converges to a vector of finite magnitude ("convergence in the mean"), what does it converge to? If $\overline{W_k}$ converges, then as $k \rightarrow \infty$, $\overline{W_{k+1}} \approx \overline{W_k}$, and

$$\overline{W_\infty} = \left(I - 2\mu R\right)\overline{W_\infty} + 2\mu P$$
$$2\mu R\,\overline{W_\infty} = 2\mu P$$
$$R\,\overline{W_\infty} = P$$

or

$$\overline{W_\infty} = R^{-1}P = W_{opt},$$

the Wiener solution! So the LMS algorithm, if it converges, gives filter coefficients which on average are the Wiener coefficients! This is, of course, a desirable result.

5.12.1.2 First-order stability

But does $\overline{W_k}$ converge, or under what conditions? Let's rewrite the analysis in terms of $\overline{V_k}$, the "mean coefficient error vector," $\overline{V_k} = \overline{W_k} - W_{opt}$, where $W_{opt}$ is the Wiener filter:

$$\overline{W_{k+1}} = \overline{W_k} - 2\mu R\,\overline{W_k} + 2\mu P$$
$$\overline{W_{k+1}} - W_{opt} = \overline{W_k} - W_{opt} - 2\mu R\,\overline{W_k} + 2\mu R W_{opt} - 2\mu R W_{opt} + 2\mu P$$
$$\overline{V_{k+1}} = \overline{V_k} - 2\mu R\,\overline{V_k} + \left(-2\mu R W_{opt}\right) + 2\mu P$$

Now $W_{opt} = R^{-1}P$, so

$$\overline{V_{k+1}} = \overline{V_k} - 2\mu R\,\overline{V_k} - 2\mu R R^{-1} P + 2\mu P = \left(I - 2\mu R\right)\overline{V_k}$$

We wish to know under what conditions $\overline{V_k} \rightarrow 0$.

5.12.1.2.1 Linear Algebra Fact


Since

R
a

R is positive denite, real, and symmetric, all the eigenvalues are real and positive. Also, we can write Q1 Q, where is a diagonal matrix with diagonal entries i equal to the eigenvalues of R, and Q is unitary matrix with rows equal to the eigenvectors corresponding to the eigenvalues of R.
as Using this fact,

V k+1 = I 2 Q1 Q
multiplying both sides through on the left by

Vk

Q:

we get

Q V k+1 = (Q 2Q) V k = (1 2) Q V k


Let $\overline{V'} = Q\overline{V}$:

$$\overline{V'_{k+1}} = \left(I - 2\mu\Lambda\right)\overline{V'_k}$$

Note that $\overline{V'}$ is simply $\overline{V}$ in a rotated coordinate set in $\mathbb{R}^M$, so convergence of $\overline{V'}$ implies convergence of $\overline{V}$. Since $I - 2\mu\Lambda$ is diagonal, all elements of $\overline{V'}$ evolve independently of each other. Convergence (stability) boils down to whether all $M$ of these scalar, first-order difference equations are stable, and thus $\rightarrow 0$:

$$\forall i,\; i \in \{1, 2, \dots, M\} :\; \overline{V'}_i^{\,k+1} = \left(1 - 2\mu\lambda_i\right)\overline{V'}_i^{\,k}$$

These equations converge to zero if $\left|1 - 2\mu\lambda_i\right| < 1$, or $\forall i : \left|\mu\lambda_i\right| < 1$. $\mu$ and $\lambda_i$ are positive, so we require

$$\forall i :\; \mu < \frac{1}{\lambda_i},$$

so for convergence in the mean of the LMS adaptive filter, we require

$$\mu < \frac{1}{\lambda_{max}}$$ (5.52)

This is an elegant theoretical result, but in practice we may not know $\lambda_{max}$, it may be time-varying, and we certainly won't want to compute it. However, another useful mathematical fact comes to the rescue:

$$tr\,(R) = \sum_{i=1}^{M} r_{ii} = \sum_{i=1}^{M} \lambda_i \geq \lambda_{max}$$

since the eigenvalues are all positive and real. For a correlation matrix, $\forall i,\; i \in \{1, \dots, M\} : r_{ii} = r(0)$, so $tr\,(R) = M r(0) = M E\left[x_k x_k\right]$. We can easily estimate $r(0)$ with $O(1)$ computations/sample, so in practice we might require

$$\mu < \frac{1}{M\,\hat{r}(0)}$$

as a conservative bound, and perhaps adapt $\mu$ accordingly with time.
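As a quick illustration (with a made-up signal; not from the original text), the conservative bound can be estimated directly from the data and compared with the exact bound $1/\lambda_{max}$:

import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)          # made-up input signal
M = 8                                  # filter length

r0_hat = np.mean(x * x)                # estimate of r(0) = E[x_k x_k]
mu_bound = 1.0 / (M * r0_hat)          # conservative bound mu < 1/(M r(0))

# Exact bound for comparison: estimate r(0..M-1), form R, find lambda_max.
r = np.array([np.mean(x[l:] * x[:len(x) - l]) for l in range(M)])
lam_max = np.linalg.eigvalsh(toeplitz(r)).max()
print(mu_bound, 1.0 / lam_max)         # the conservative bound is the smaller one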

5.12.1.3 Rate of convergence


Each of the modes decays as $\left(1 - 2\mu\lambda_i\right)^k$.

note: The initial rate of convergence is dominated by the fastest mode, $1 - 2\mu\lambda_{max}$. This is not surprising, since a gradient descent method goes "downhill" in the steepest direction.

note: The final rate of convergence is dominated by the slowest mode, $1 - 2\mu\lambda_{min}$. For small $\lambda_{min}$, it can take a long time for LMS to converge.

Note that the convergence behavior depends on the data (via $R$). LMS converges relatively quickly for roughly equal eigenvalues; unequal eigenvalues slow LMS down a lot.
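A tiny numerical sketch (with made-up eigenvalues, not values from the text) shows how the mode factors $\left(1 - 2\mu\lambda_i\right)^k$ behave when the eigenvalues are unequal:

import numpy as np

lam = np.array([1.0, 0.5, 0.1, 0.02])   # made-up eigenvalues of R
mu = 0.45 / lam.max()                   # satisfies mu < 1/lambda_max
k = np.array([1, 10, 100, 1000])

for l in lam:
    print(l, (1 - 2 * mu * l) ** k)     # small lambda_i -> mode decays very slowly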

5.13 Adaptive Equalization


note: Design an approximate inverse filter to cancel out as much distortion as possible.[15]

[15] This content is available online at <http://cnx.org/content/m11907/1.1/>.


Figure 5.13

In principle, $W(z) \approx H^{-1}(z)$, so that the overall response of the top path, $W(z)H(z)$, is approximately $z^{-\Delta}$, or $\delta(n - \Delta)$. However, limitations on the form of $W$ (FIR) and the presence of noise cause the equalization to be imperfect.
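As a non-adaptive baseline (a sketch only; the channel taps, filter length, and delay below are made-up values, and this least-squares design is not the adaptive method of this section), an FIR approximate inverse can be computed so that the combined response $h * w$ approximates $\delta(n - \Delta)$:

import numpy as np
from scipy.linalg import toeplitz

h = np.array([1.0, 0.5, 0.25])          # made-up channel impulse response
M, delay = 12, 4                        # equalizer length and target delay

# Convolution matrix H so that H @ w equals the convolution h * w.
col = np.r_[h, np.zeros(M - 1)]         # first column, length M + len(h) - 1
row = np.r_[h[0], np.zeros(M - 1)]      # first row
H = toeplitz(col, row)
d = np.zeros(M + len(h) - 1)
d[delay] = 1.0                          # desired overall response delta(n - delay)

w, *_ = np.linalg.lstsq(H, d, rcond=None)
print(np.round(np.convolve(h, w), 3))   # approximately 1 at n = delay, ~0 elsewhere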

5.13.1 Important Application


Channel equalization in a digital communication system.

Figure 5.14

If the channel distorts the pulse shape, the matched filter will no longer be matched, intersymbol interference may increase, and the system performance will degrade. An adaptive filter is often inserted in front of the matched filter to compensate for the channel.


Figure 5.15

This is, of course, unrealizable, since we do not have access to the original transmitted signal, $s_k$. There are two common solutions to this problem:

1. Periodically broadcast a known training signal. The adaptation is switched on only when the training signal is being broadcast, and thus $s_k$ is known.
2. Decision-directed feedback: if the overall system is working well, then the output $\hat{s}_{k-\Delta}$ should almost always equal $s_{k-\Delta}$. We can thus use our received digital communication signal as the desired signal, since it has been cleaned of noise (we hope) by the nonlinear threshold device!

Figure 5.16: Decision-directed equalizer

As long as the error rate in $\hat{s}_k$ is not too high (say $< 75\%$), this method works. Otherwise, $d_k$ is so inaccurate that the adaptive filter can never find the Wiener solution. This method is widely used in the telephone system and other digital communication networks.
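A minimal sketch of decision-directed LMS equalization for binary ($\pm 1$) symbols follows; the channel, noise level, filter length, and step size are all made-up illustration values, not from the text.

import numpy as np

rng = np.random.default_rng(2)
s = rng.choice([-1.0, 1.0], size=5000)         # transmitted symbols s_k
x = np.convolve(s, [1.0, 0.4, 0.2])[:len(s)]   # channel-distorted received signal
x = x + 0.01 * rng.standard_normal(len(s))     # additive noise

M, mu = 7, 0.005
W = np.zeros(M)
W[M // 2] = 1.0                                # center-spike initialization
dec = np.zeros(len(s))
for k in range(M - 1, len(s)):
    X_k = x[k - M + 1:k + 1][::-1]
    y_k = W @ X_k                              # equalizer output
    dec[k] = np.sign(y_k)                      # threshold device supplies d_k
    W = W + 2 * mu * (dec[k] - y_k) * X_k      # decision-directed LMS update

delay = M // 2                                 # combined channel + equalizer delay
print(np.mean(dec[1000:] == s[1000 - delay:len(s) - delay]))  # fraction of correct decisions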


Solutions to Exercises in Chapter 5


Solution to Exercise 5.6.1 (p. 196)

1. For $m = 0$, we should begin by finding the product sequence $s[n] = x[n]y[n]$. Doing this we get the following sequence:
$$s[n] = \{\dots, 0, 0, 2, 6, 24, 1, 9, 0, 0, \dots\}$$
and so from the sum in our crosscorrelation function we arrive at the answer of $R_{xy}(0) = 22$.
2. For $m = 3$, we will approach it the same way we did above; however, we will now shift $y[n]$ to the right. Then we can find the product sequence $s[n] = x[n]y[n-3]$, which yields
$$s[n] = \{\dots, 0, 0, 0, 0, 0, 1, 6, 0, 0, \dots\}$$
and from the crosscorrelation function we arrive at the answer of $R_{xy}(3) = 6$.
3. For $m = -1$, we will again take the same approach; however, we will now shift $y[n]$ to the left. Then we can find the product sequence $s[n] = x[n]y[n+1]$, which yields
$$s[n] = \{\dots, 0, 0, 4, 12, 6, 3, 0, 0, 0, \dots\}$$
and from the crosscorrelation function we arrive at the answer of $R_{xy}(-1) = 13$.
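Sums like these are easy to check numerically. Here is a small sketch using made-up finite-length sequences (not the ones from the exercise) and the same definition $R_{xy}(m) = \sum_n x[n]\, y[n-m]$:

import numpy as np

x = np.array([1.0, 2.0, -1.0, 3.0])    # made-up sequence x[0..3]
y = np.array([2.0, 0.0, 1.0, -2.0])    # made-up sequence y[0..3]

def Rxy(m, x, y):
    # Brute-force crosscorrelation sum over all n where the product is nonzero.
    total = 0.0
    for n in range(len(x)):
        if 0 <= n - m < len(y):
            total += x[n] * y[n - m]
    return total

print([Rxy(m, x, y) for m in (-1, 0, 3)])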

Solution to Exercise 5.9.1 (p. 199)

Estimate the statistics

$$r_{xx}(l) \approx \frac{1}{N}\sum_{k=0}^{N-1} x_k x_{k+l}$$

$$r_{xd}(l) \approx \frac{1}{N}\sum_{k=0}^{N-1} d_k x_{k-l}$$

then solve $W_{opt} = R^{-1}P$.

Solution to Exercise 5.9.2 (p. 199)

Use short-time windowed estimates of the correlation functions:

$$\left(r_{xx}(l)\right)^k = \frac{1}{N}\sum_{m=0}^{N-1} x_{k-m} x_{k-m-l}$$

and

$$\left(r_{dx}(l)\right)^k = \frac{1}{N}\sum_{m=0}^{N-1} x_{k-m-l}\, d_{k-m}$$

note: $W_{opt}^k \approx \left(R^k\right)^{-1} P^k$

Solution to Exercise 5.9.3 (p. 200)

Recursively! With $W_{opt}^k = \left(R^k\right)^{-1} P^k$,

$$r_{xx}^k(l) = r_{xx}^{k-1}(l) + x_k x_{k-l} - x_{k-N} x_{k-N-l}$$

This is critically stable, so people usually do

$$r_{xx}^k(l) = \left(1 - \alpha\right) r_{xx}^{k-1}(l) + \alpha\, x_k x_{k-l}$$
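A minimal sketch of the exponentially weighted recursive estimate (with a made-up white-noise signal, forgetting factor, and lag range, none of which come from the text):

import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(20000)        # made-up input signal
alpha, L = 0.01, 3                    # forgetting factor and maximum lag

r = np.zeros(L + 1)                   # running estimates of r(0), ..., r(L)
for k in range(L, len(x)):
    for l in range(L + 1):
        r[l] = (1 - alpha) * r[l] + alpha * x[k] * x[k - l]

print(np.round(r, 2))                 # approaches [1, 0, 0, 0] for white noise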


Glossary

A

Autocorrelation: the expected value of the product of a random variable or signal realization with a time-shifted version of itself.

C

Correlation: a measure of how much one random variable depends upon the other.

Covariance: a measure of how much the deviations of two or more variables or processes match.

Crosscorrelation: if two processes are wide sense stationary, the expected value of the product of a random variable from one random process with a time-shifted random variable from a different random process.

D

difference equation: an equation that shows the relationship between consecutive values of a sequence and the differences among them. They are often rearranged as a recursive formula so that a system's output can be computed from the input signal and past outputs.
Example: $y[n] + 7y[n-1] + 2y[n-2] = x[n] - 4x[n-1]$ (3.1)

F

FFT (Fast Fourier Transform): an efficient computational algorithm for computing the DFT[16].

P

poles: 1. The value(s) for $z$ where $Q(z) = 0$. 2. The complex frequencies that make the overall gain of the filter transfer function infinite.

R

random process: a family or ensemble of signals that correspond to every possible outcome of a certain signal measurement. Each signal in this collection is referred to as a realization or sample function of the process.
Example: As an example of a random process, let us look at the Random Sinusoidal Process below. We use $f[n] = A\sin(\omega n + \phi)$ to represent the sinusoid with a given amplitude and phase. Note that the phase and amplitude of each sinusoid is based on a random number, thus making this a random process.

S

stationary process: a random process where all of its statistical properties do not vary with time.

Z

zeros: 1. The value(s) for $z$ where $P(z) = 0$. 2. The complex frequencies that make the overall gain of the filter transfer function zero.

[16] "Discrete Fourier Transform (DFT)" <http://cnx.org/content/m10249/latest/>

Bibliography
[1] O. Macchi and E. Eweda. Second-order convergence analysis of stochastic adaptive linear filtering. IEEE Trans. on Automatic Control, AC-28(1):76-85, Jan 1983.


Attributions
Collection: Fundamentals of Signal Processing Edited by: Minh N. Do URL: http://cnx.org/content/col10360/1.4/ License: http://creativecommons.org/licenses/by/3.0/ Module: "Introduction to Fundamentals of Signal Processing" By: Minh N. Do URL: http://cnx.org/content/m13673/1.1/ Pages: 1-2 Copyright: Minh N. Do License: http://creativecommons.org/licenses/by/2.0/ Module: "Signals Represent Information" By: Don Johnson URL: http://cnx.org/content/m0001/2.27/ Pages: 3-6 Copyright: Don Johnson License: http://creativecommons.org/licenses/by/1.0 Module: "Introduction to Systems" By: Don Johnson URL: http://cnx.org/content/m0005/2.19/ Pages: 6-8 Copyright: Don Johnson License: http://creativecommons.org/licenses/by/1.0 Module: "Discrete-Time Signals and Systems" By: Don Johnson URL: http://cnx.org/content/m10342/2.16/ Pages: 8-11 Copyright: Don Johnson License: http://creativecommons.org/licenses/by/3.0/ Module: "Systems in the Time-Domain" Used here as: "Linear Time-Invariant Systems" By: Don Johnson URL: http://cnx.org/content/m0508/2.7/ Page: 11 Copyright: Don Johnson License: http://creativecommons.org/licenses/by/1.0 Module: "Discrete Time Convolution" By: Ricardo Radaelli-Sanchez, Richard Baraniuk, Stephen Kruzick, Catherine Elder URL: http://cnx.org/content/m10087/2.27/ Pages: 11-17 Copyright: Ricardo Radaelli-Sanchez, Richard Baraniuk, Stephen Kruzick License: http://creativecommons.org/licenses/by/3.0/


Module: "Review of Linear Algebra" By: Clayton Scott URL: http://cnx.org/content/m11948/1.2/ Pages: 18-28 Copyright: Clayton Scott License: http://creativecommons.org/licenses/by/1.0 Module: "Hilbert Spaces" By: Justin Romberg URL: http://cnx.org/content/m10840/2.6/ Pages: 28-29 Copyright: Justin Romberg License: http://creativecommons.org/licenses/by/1.0 Module: "Orthonormal Basis Expansions" Used here as: "Signal Expansions" By: Michael Haag, Justin Romberg URL: http://cnx.org/content/m10760/2.6/ Pages: 29-33 Copyright: Michael Haag, Justin Romberg License: http://creativecommons.org/licenses/by/1.0 Module: "Introduction to Fourier Analysis" By: Richard Baraniuk URL: http://cnx.org/content/m10096/2.12/ Pages: 33-34 Copyright: Richard Baraniuk License: http://creativecommons.org/licenses/by/1.0 Module: "Continuous Time Fourier Transform (CTFT)" By: Richard Baraniuk, Melissa Selik URL: http://cnx.org/content/m10098/2.16/ Pages: 34-38 Copyright: Richard Baraniuk, Melissa Selik License: http://creativecommons.org/licenses/by/3.0/ Module: "Discrete Time Fourier Transform (DTFT)" By: Richard Baraniuk URL: http://cnx.org/content/m10108/2.18/ Pages: 38-41 Copyright: Richard Baraniuk License: http://creativecommons.org/licenses/by/3.0/ Module: "DFT as a Matrix Operation" By: Robert Nowak URL: http://cnx.org/content/m10962/2.5/ Pages: 41-44 Copyright: Robert Nowak License: http://creativecommons.org/licenses/by/1.0

Module: "The FFT Algorithm" By: Robert Nowak URL: http://cnx.org/content/m10964/2.6/ Pages: 44-47 Copyright: Robert Nowak License: http://creativecommons.org/licenses/by/1.0 Module: "Introduction" By: Anders Gjendemsjø URL: http://cnx.org/content/m11419/1.29/ Pages: 49-51 Copyright: Anders Gjendemsjø License: http://creativecommons.org/licenses/by/1.0 Module: "Proof" By: Anders Gjendemsjø URL: http://cnx.org/content/m11423/1.27/ Pages: 51-54 Copyright: Anders Gjendemsjø License: http://creativecommons.org/licenses/by/1.0 Module: "Illustrations" By: Anders Gjendemsjø URL: http://cnx.org/content/m11443/1.33/ Pages: 54-58 Copyright: Anders Gjendemsjø License: http://creativecommons.org/licenses/by/1.0 Module: "Sampling and reconstruction with Matlab" Used here as: "Sampling and Reconstruction with Matlab" By: Anders Gjendemsjø URL: http://cnx.org/content/m11549/1.9/ Page: 58 Copyright: Anders Gjendemsjø License: http://creativecommons.org/licenses/by/1.0 Module: "Systems view of sampling and reconstruction" Used here as: "Systems View of Sampling and Reconstruction" By: Anders Gjendemsjø URL: http://cnx.org/content/m11465/1.20/ Pages: 59-60 Copyright: Anders Gjendemsjø License: http://creativecommons.org/licenses/by/1.0 Module: "Sampling CT Signals: A Frequency Domain Perspective" By: Robert Nowak URL: http://cnx.org/content/m10994/2.2/ Pages: 60-63 Copyright: Robert Nowak License: http://creativecommons.org/licenses/by/1.0


Module: "The DFT: Frequency Domain with a Computer Analysis" By: Robert Nowak URL: http://cnx.org/content/m10992/2.3/ Pages: 63-72 Copyright: Robert Nowak License: http://creativecommons.org/licenses/by/1.0 Module: "Discrete-Time Processing of CT Signals" By: Robert Nowak URL: http://cnx.org/content/m10993/2.2/ Pages: 73-78 Copyright: Robert Nowak License: http://creativecommons.org/licenses/by/1.0 Module: "Short Time Fourier Transform" By: Ivan Selesnick URL: http://cnx.org/content/m10570/2.4/ Pages: 78-91 Copyright: Ivan Selesnick License: http://creativecommons.org/licenses/by/1.0 Module: "Spectrograms" By: Don Johnson URL: http://cnx.org/content/m0505/2.21/ Pages: 91-95 Copyright: Don Johnson License: http://creativecommons.org/licenses/by/3.0/ Module: "Filtering with the DFT" By: Robert Nowak URL: http://cnx.org/content/m11022/2.3/ Pages: 96-103 Copyright: Robert Nowak License: http://creativecommons.org/licenses/by/1.0 Module: "Image Restoration Basics" By: Robert Nowak URL: http://cnx.org/content/m10972/2.2/ Pages: 104-106 Copyright: Robert Nowak License: http://creativecommons.org/licenses/by/1.0 Module: "Dierence Equation" By: Michael Haag URL: http://cnx.org/content/m10595/2.6/ Pages: 109-113 Copyright: Michael Haag License: http://creativecommons.org/licenses/by/1.0 Module: "The Z Transform: Denition" By: Benjamin Fite URL: http://cnx.org/content/m10549/2.10/ Pages: 114-119 Copyright: Benjamin Fite License: http://creativecommons.org/licenses/by/1.0

Module: "Table of Common z-Transforms" By: Melissa Selik, Richard Baraniuk URL: http://cnx.org/content/m10119/2.14/ Pages: 119-120 Copyright: Melissa Selik, Richard Baraniuk License: http://creativecommons.org/licenses/by/1.0 Module: "Understanding Pole/Zero Plots on the Z-Plane" By: Michael Haag URL: http://cnx.org/content/m10556/2.12/ Pages: 120-126 Copyright: Michael Haag License: http://creativecommons.org/licenses/by/3.0/ Module: "Filtering in the Frequency Domain" By: Don Johnson URL: http://cnx.org/content/m10257/2.18/ Pages: 126-130 Copyright: Don Johnson License: http://creativecommons.org/licenses/by/3.0/ Module: "Linear-Phase FIR Filters" By: Ivan Selesnick URL: http://cnx.org/content/m10705/2.3/ Pages: 130-134 Copyright: Ivan Selesnick License: http://creativecommons.org/licenses/by/1.0 Module: "Filter Structures" By: Douglas L. Jones URL: http://cnx.org/content/m11917/1.3/ Page: 134 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/1.0 Module: "Overview of Digital Filter Design" By: Douglas L. Jones URL: http://cnx.org/content/m12776/1.2/ Pages: 134-135 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/2.0/ Module: "Window Design Method" By: Douglas L. Jones URL: http://cnx.org/content/m12790/1.2/ Pages: 135-136 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/2.0/


Module: "Frequency Sampling Design Method for FIR lters" By: Douglas L. Jones URL: http://cnx.org/content/m12789/1.2/ Pages: 136-138 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/2.0/ Module: "Parks-McClellan FIR Filter Design" By: Douglas L. Jones URL: http://cnx.org/content/m12799/1.3/ Pages: 138-145 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/2.0/ Module: "FIR Filter Design using MATLAB" By: Hyeokho Choi URL: http://cnx.org/content/m10917/2.2/ Page: 145 Copyright: Hyeokho Choi License: http://creativecommons.org/licenses/by/1.0 Module: "MATLAB FIR Filter Design Exercise" By: Hyeokho Choi URL: http://cnx.org/content/m10918/2.2/ Page: 146 Copyright: Hyeokho Choi License: http://creativecommons.org/licenses/by/1.0 Module: "Upsampling" By: Phil Schniter URL: http://cnx.org/content/m10403/2.15/ Pages: 149-150 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Downsampling" By: Phil Schniter URL: http://cnx.org/content/m10441/2.12/ Pages: 150-151 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/3.0/ Module: "Interpolation" By: Phil Schniter URL: http://cnx.org/content/m10444/2.14/ Pages: 152-153 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Application of Interpolation - Oversampling in CD Players" By: Phil Schniter URL: http://cnx.org/content/m11006/2.3/ Pages: 153-154 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0

Module: "Decimation" By: Phil Schniter URL: http://cnx.org/content/m10445/2.11/ Pages: 154-155 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Resampling with Rational Factor" By: Phil Schniter URL: http://cnx.org/content/m10448/2.11/ Pages: 155-156 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Digital Filter Design for Interpolation and Decimation" By: Phil Schniter URL: http://cnx.org/content/m10870/2.6/ Pages: 156-158 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Noble Identities" By: Phil Schniter URL: http://cnx.org/content/m10432/2.12/ Pages: 158-159 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Polyphase Interpolation" By: Phil Schniter URL: http://cnx.org/content/m10431/2.11/ Pages: 159-162 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Polyphase Decimation Filter" By: Phil Schniter URL: http://cnx.org/content/m10433/2.12/ Pages: 162-164 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Computational Savings of Polyphase Interpolation/Decimation" By: Phil Schniter URL: http://cnx.org/content/m11008/2.2/ Pages: 164-165 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0


Module: "Sub-Band Processing" By: Phil Schniter URL: http://cnx.org/content/m10423/2.14/ Pages: 165-167 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Discrete Wavelet Transform: Main Concepts" By: Phil Schniter URL: http://cnx.org/content/m10436/2.12/ Pages: 167-168 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "The Haar System as an Example of DWT" By: Phil Schniter URL: http://cnx.org/content/m10437/2.10/ Pages: 168-170 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Filterbanks Interpretation of the Discrete Wavelet Transform" By: Phil Schniter URL: http://cnx.org/content/m10474/2.6/ Pages: 170-176 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "DWT Application - De-noising" By: Phil Schniter URL: http://cnx.org/content/m11000/2.1/ Pages: 176-178 Copyright: Phil Schniter License: http://creativecommons.org/licenses/by/1.0 Module: "Introduction to Random Signals and Processes" By: Michael Haag URL: http://cnx.org/content/m10649/2.2/ Pages: 179-181 Copyright: Michael Haag License: http://creativecommons.org/licenses/by/1.0 Module: "Stationary and Nonstationary Random Processes" By: Michael Haag URL: http://cnx.org/content/m10684/2.2/ Pages: 182-184 Copyright: Michael Haag License: http://creativecommons.org/licenses/by/1.0 Module: "Random Processes: Mean and Variance" By: Michael Haag URL: http://cnx.org/content/m10656/2.3/ Pages: 184-188 Copyright: Michael Haag License: http://creativecommons.org/licenses/by/1.0

Module: "Correlation and Covariance of a Random Signal" By: Michael Haag URL: http://cnx.org/content/m10673/2.3/ Pages: 188-191 Copyright: Michael Haag License: http://creativecommons.org/licenses/by/1.0 Module: "Autocorrelation of Random Processes" By: Michael Haag URL: http://cnx.org/content/m10676/2.4/ Pages: 192-194 Copyright: Michael Haag License: http://creativecommons.org/licenses/by/1.0 Module: "Crosscorrelation of Random Processes" By: Michael Haag URL: http://cnx.org/content/m10686/2.2/ Pages: 194-196 Copyright: Michael Haag License: http://creativecommons.org/licenses/by/1.0 Module: "Introduction to Adaptive Filters" By: Douglas L. Jones URL: http://cnx.org/content/m11535/1.3/ Page: 196 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/1.0 Module: "Discrete-Time, Causal Wiener Filter" By: Douglas L. Jones URL: http://cnx.org/content/m11825/1.1/ Pages: 196-199 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/1.0 Module: "Practical Issues in Wiener Filter Implementation" By: Douglas L. Jones URL: http://cnx.org/content/m11824/1.1/ Pages: 199-200 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/1.0 Module: "Quadratic Minimization and Gradient Descent" By: Douglas L. Jones URL: http://cnx.org/content/m11826/1.2/ Pages: 200-202 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/1.0 Module: "The LMS Adaptive Filter Algorithm" By: Douglas L. Jones URL: http://cnx.org/content/m11829/1.1/ Pages: 202-204 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/1.0


Module: "First Order Convergence Analysis of the LMS Algorithm" By: Douglas L. Jones URL: http://cnx.org/content/m11830/1.1/ Pages: 204-207 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/1.0 Module: "Adaptive Equalization" By: Douglas L. Jones URL: http://cnx.org/content/m11907/1.1/ Pages: 207-209 Copyright: Douglas L. Jones License: http://creativecommons.org/licenses/by/1.0


Fundamentals of Signal Processing


Presents fundamental concepts and tools in signal processing including: linear and shift-invariant systems, vector spaces and signal expansions, Fourier transforms, sampling, spectral and time-frequency analyses, digital filtering, z-transform, random signals and processes, Wiener and adaptive filters.

About Connexions
Since 1999, Connexions has been pioneering a global system where anyone can create course materials and make them fully accessible and easily reusable free of charge. We are a Web-based authoring, teaching and learning environment open to anyone interested in education, including students, teachers, professors and lifelong learners. We connect ideas and facilitate educational communities. Connexions's modular, interactive courses are in use worldwide by universities, community colleges, K-12 schools, distance learners, and lifelong learners. Connexions materials are in many languages, including English, Spanish, Chinese, Japanese, Italian, Vietnamese, French, Portuguese, and Thai. Connexions is part of an exciting new information distribution system that allows for Print on Demand Books. Connexions has partnered with innovative on-demand publisher QOOP to accelerate the delivery of printed course materials and textbooks into classrooms worldwide at lower prices than traditional academic publishers.
