Digital Signal Processing
A Modern Introduction
by
Ashok Ambardar
Michigan Technological University
CONTENTS
PREFACE xiii
1 OVERVIEW 2
1.0 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1 Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Digital Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 The z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.2 The Frequency Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.3 Filter Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Signal Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.1 Digital Processing of Analog Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.2 Filter Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.3 The Design of IIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.4 The Design of FIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 The DFT and FFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5 Advantages of DSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5.1 Applications of DSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 DISCRETE SIGNALS 8
2.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.1 Discrete Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.1.1 Signal Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Operations on Discrete Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.1 Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.2 Even and Odd Parts of Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Decimation and Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3.1 Decimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3.2 Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3.3 Fractional Delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4 Common Discrete Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.1 Properties of the Discrete Impulse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.2 Signal Representation by Impulses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
© Ashok Ambardar, September 1, 2003
2.4.3 Discrete Pulse Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.4 The Discrete Sinc Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4.5 Discrete Exponentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.5 Discrete-Time Harmonics and Sinusoids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.5.1 Discrete-Time Harmonics Are Not Always Periodic in Time . . . . . . . . . . . . . . . 20
2.5.2 Discrete-Time Harmonics Are Always Periodic in Frequency . . . . . . . . . . . . . . . 21
2.6 The Sampling Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.6.1 Signal Reconstruction and Aliasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.6.2 Reconstruction at Different Sampling Rates . . . . . . . . . . . . . . . . . . . . . . . . 25
2.7 An Introduction to Random Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.7.1 Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.7.2 Measures for Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.7.3 The Chebyshev Inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.7.4 Probability Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.7.5 The Uniform Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.7.6 The Gaussian or Normal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.7.7 Discrete Probability Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.7.8 Distributions for Deterministic Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.7.9 Stationary, Ergodic, and Pseudorandom Signals . . . . . . . . . . . . . . . . . . . . . . 34
2.7.10 Statistical Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.7.11 Random Signal Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3 TIME-DOMAIN ANALYSIS 47
3.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.1 Discrete-Time Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.1.1 Linearity and Superposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.1.2 Time Invariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.1.3 LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.1.4 Causality and Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.2 Digital Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.2.1 Digital Filter Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.2.2 Digital Filter Realization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.3 Response of Digital Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.3.1 Response of Nonrecursive Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.3.2 Response of Recursive Filters by Recursion . . . . . . . . . . . . . . . . . . . . . . . . 56
3.4 The Natural and Forced Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.4.1 The Single-Input Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.4.2 The Zero-Input Response and Zero-State Response . . . . . . . . . . . . . . . . . . . . 61
3.4.3 Solution of the General Difference Equation . . . . . . . . . . . . . . . . . . . . . . . 64
3.5 The Impulse Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.5.1 Impulse Response of Nonrecursive Filters . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.5.2 Impulse Response by Recursion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.5.3 Impulse Response for the Single-Input Case . . . . . . . . . . . . . . . . . . . . . . . . 66
3.5.4 Impulse Response for the General Case . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.5.5 Recursive Forms for Nonrecursive Digital Filters . . . . . . . . . . . . . . . . . . . . . 68
3.5.6 The Response of Anti-Causal Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.6 System Representation in Various Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.6.1 Difference Equations from the Impulse Response . . . . . . . . . . . . . . . . . . . . . 70
3.6.2 Difference Equations from Input-Output Data . . . . . . . . . . . . . . . . . . . . . . 70
3.7 Application-Oriented Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.7.1 Moving Average Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.7.2 Inverse Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3.7.3 Echo and Reverb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.7.4 Periodic Sequences and Wave-Table Synthesis . . . . . . . . . . . . . . . . . . . . . . . 75
3.7.5 How Difference Equations Arise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.8 Discrete Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.8.1 Analytical Evaluation of Discrete Convolution . . . . . . . . . . . . . . . . . . . . . . . 77
3.9 Convolution Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.10 Convolution of Finite Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.10.1 The Sum-by-Column Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.10.2 The Fold, Shift, Multiply, and Sum Concept . . . . . . . . . . . . . . . . . . . . . . . . 82
3.10.3 Discrete Convolution, Multiplication, and Zero Insertion . . . . . . . . . . . . . . . . . 83
3.10.4 Impulse Response of LTI Systems in Cascade and Parallel . . . . . . . . . . . . . . . . 84
3.11 Stability and Causality of LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.11.1 Stability of FIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.11.2 Stability of LTI Systems Described by Difference Equations . . . . . . . . . . . . . . 86
3.11.3 Stability of LTI Systems Described by the Impulse Response . . . . . . . . . . . . . . 86
3.11.4 Causality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.12 System Response to Periodic Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
3.13 Periodic Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.13.1 Periodic Convolution By the Cyclic Method . . . . . . . . . . . . . . . . . . . . . . . . 92
3.13.2 Periodic Convolution By the Circulant Matrix . . . . . . . . . . . . . . . . . . . . . . 92
3.13.3 Regular Convolution from Periodic Convolution . . . . . . . . . . . . . . . . . . . . . . 94
3.14 Deconvolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.14.1 Deconvolution By Recursion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.15 Discrete Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
3.15.1 Autocorrelation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
3.15.2 Periodic Discrete Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.15.3 Matched Filtering and Target Ranging . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
3.16 Discrete Convolution and Transform Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.16.1 The z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.16.2 The Discrete-Time Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
4 z-TRANSFORM ANALYSIS 124
4.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.1 The Two-Sided z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.1.1 What the z-Transform Reveals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.1.2 Some z-Transform Pairs Using the Defining Relation . . . . . . . . . . . . . . . . . . . 125
4.1.3 More on the ROC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
4.2 Properties of the Two-Sided z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
4.3 Poles, Zeros, and the z-Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
4.4 The Transfer Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.5 Interconnected Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
4.6 Transfer Function Realization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
4.6.1 Transposed Realization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
4.6.2 Cascaded and Parallel Realization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
4.7 Causality and Stability of LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4.7.1 Stability and the ROC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4.7.2 Inverse Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
4.8 The Inverse z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
4.8.1 Inverse z-Transform of Finite Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . 145
4.8.2 Inverse z-Transform by Long Division . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
4.8.3 Inverse z-Transform from Partial Fractions . . . . . . . . . . . . . . . . . . . . . . . . 147
4.8.4 The ROC and Inversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
4.9 The One-Sided z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
4.9.1 The Right-Shift Property of the One-Sided z-Transform . . . . . . . . . . . . . . . . . 154
4.9.2 The Left-Shift Property of the One-Sided z-Transform . . . . . . . . . . . . . . . . . . 155
4.9.3 The Initial Value Theorem and Final Value Theorem . . . . . . . . . . . . . . . . . . . 156
4.9.4 The z-Transform of Switched Periodic Signals . . . . . . . . . . . . . . . . . . . . . . . 157
4.10 The z-Transform and System Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
4.10.1 Systems Described by Difference Equations . . . . . . . . . . . . . . . . . . . . . . . 158
4.10.2 Systems Described by the Transfer Function . . . . . . . . . . . . . . . . . . . . . . . . 159
4.10.3 Forced and Steady-State Response from the Transfer Function . . . . . . . . . . . . . 161
5 FREQUENCY DOMAIN ANALYSIS 176
5.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
5.1 The DTFT from the z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
5.1.1 Symmetry of the Spectrum for a Real Signal . . . . . . . . . . . . . . . . . . . . . . . 177
5.1.2 Some DTFT Pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
5.1.3 Relating the z-Transform and DTFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
5.2 Properties of the DTFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5.2.1 Folding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5.2.2 Time Shift of x[n] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
5.2.3 Frequency Shift of X(F) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
5.2.4 Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
5.2.5 Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
5.2.6 The Times-n Property . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
5.2.7 Parseval's Relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
5.2.8 Central Ordinate Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
5.3 The DTFT of Discrete-Time Periodic Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5.3.1 The DFS and DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.4 The Inverse DTFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
5.5 The Frequency Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
5.6 System Analysis Using the DTFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
5.6.1 The Steady-State Response to Discrete-Time Harmonics . . . . . . . . . . . . . . . . . 195
5.7 Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
5.8 Ideal Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
5.8.1 Frequency Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
5.8.2 Truncation and Windowing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
5.8.3 The Rectangular Window and its Spectrum . . . . . . . . . . . . . . . . . . . . . . . . 202
5.8.4 The Triangular Window and its Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . 202
5.8.5 The Consequences of Windowing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
6 FILTER CONCEPTS 215
6.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
6.1 Frequency Response and Filter Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . 215
6.1.1 Phase Delay and Group Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
6.1.2 Minimum-Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
6.1.3 Minimum-Phase Filters from the Magnitude Spectrum . . . . . . . . . . . . . . . . . . 216
6.1.4 The Frequency Response: A Graphical View . . . . . . . . . . . . . . . . . . . . . . . 217
6.1.5 The Rubber Sheet Analogy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
6.2 FIR Filters and Linear-Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
6.2.1 Pole-Zero Patterns of Linear-Phase Filters . . . . . . . . . . . . . . . . . . . . . . . . . 220
6.2.2 Types of Linear-Phase Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
6.2.3 Averaging Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
6.2.4 Zeros of Averaging Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
6.2.5 FIR Comb Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
6.3 IIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
6.3.1 First-Order Highpass Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
6.3.2 Pole-Zero Placement and Filter Design . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
6.3.3 Second-Order IIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
6.3.4 Digital Resonators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
6.3.5 Periodic Notch Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
6.4 Allpass Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
6.4.1 Transfer Function of Allpass Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
6.4.2 Stabilization of Unstable Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
6.4.3 Minimum-Phase Filters Using Allpass Filters . . . . . . . . . . . . . . . . . . . . . . . 238
6.4.4 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
7 DIGITAL PROCESSING OF ANALOG SIGNALS 251
7.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
7.1 Ideal Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
7.1.1 Sampling of Sinusoids and Periodic Signals . . . . . . . . . . . . . . . . . . . . . . . . 254
7.1.2 Application Example: The Sampling Oscilloscope . . . . . . . . . . . . . . . . . . . . . 256
7.1.3 Sampling of Bandpass Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
7.1.4 Natural Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
7.1.5 Zero-Order-Hold Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
7.2 Sampling, Interpolation, and Signal Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
7.2.1 Ideal Recovery and the Sinc Interpolating Function . . . . . . . . . . . . . . . . . . . . 262
7.2.2 Interpolating Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
7.2.3 Interpolation in Practice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
7.3 Sampling Rate Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
7.3.1 Zero Interpolation and Spectrum Compression . . . . . . . . . . . . . . . . . . . . . . 266
7.3.2 Sampling Rate Increase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
7.3.3 Sampling Rate Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
7.4 Quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
7.4.1 Uniform Quantizers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
7.4.2 Quantization Error and Quantization Noise . . . . . . . . . . . . . . . . . . . . . . . . 271
7.4.3 Quantization and Oversampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
7.5 Digital Processing of Analog Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
7.5.1 Multirate Signal Processing and Oversampling . . . . . . . . . . . . . . . . . . . . . . 276
7.5.2 Practical ADC Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
7.5.3 Anti-Aliasing Filter Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
7.5.4 Anti-Imaging Filter Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
7.6 Compact Disc Digital Audio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
7.6.1 Recording . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
7.6.2 Playback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
7.7 Dynamic Range Processors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
7.7.1 Companders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
7.8 Audio Equalizers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
7.8.1 Shelving Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
7.8.2 Graphic Equalizers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
7.8.3 Parametric Equalizers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
7.9 Digital Audio Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
7.9.1 Gated Reverb and Reverse Reverb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
7.9.2 Chorusing, Flanging, and Phasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
7.9.3 Plucked-String Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
7.10 Digital Oscillators and DTMF Receivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
7.10.1 DTMF Receivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
8 DESIGN OF FIR FILTERS 311
8.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
8.1.1 The Design Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
8.1.2 Techniques of Digital Filter Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
8.2 Symmetric Sequences and Linear Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
8.2.1 Classification of Linear-Phase Sequences . . . . . . . . . . . . . . . . . . . . . . . . . 313
8.2.2 Applications of Linear-Phase Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . 315
8.2.3 FIR Filter Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
8.3 Window-Based Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
8.3.1 Characteristics of Window Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
8.3.2 Some Other Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
8.3.3 What Windowing Means . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
8.3.4 Some Design Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
8.3.5 Characteristics of the Windowed Spectrum . . . . . . . . . . . . . . . . . . . . . . . . 322
8.3.6 Selection of Window and Design Parameters . . . . . . . . . . . . . . . . . . . . . . . 323
8.3.7 Spectral Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
8.4 Half-Band FIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
8.5 FIR Filter Design by Frequency Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
8.5.1 Frequency Sampling and Windowing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
8.5.2 Implementing Frequency-Sampling FIR Filters . . . . . . . . . . . . . . . . . . . . . . 337
8.6 Design of Optimal Linear-Phase FIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
8.6.1 The Alternation Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
8.6.2 Optimal Half-Band Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
8.7 Application: Multistage Interpolation and Decimation . . . . . . . . . . . . . . . . . . . . . . 342
8.7.1 Multistage Decimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
8.8 Maximally Flat FIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
8.9 FIR Differentiators and Hilbert Transformers . . . . . . . . . . . . . . . . . . . . . . . . . . 347
8.9.1 Hilbert Transformers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
8.9.2 Design of FIR Differentiators and Hilbert Transformers . . . . . . . . . . . . . . . . . 348
8.10 Least Squares and Adaptive Signal Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
8.10.1 Adaptive Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
8.10.2 Applications of Adaptive Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
9 DESIGN OF IIR FILTERS 361
9.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
9.2 IIR Filter Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
9.2.1 Equivalence of Analog and Digital Systems . . . . . . . . . . . . . . . . . . . . . . . . 361
9.2.2 The Effects of Aliasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
9.2.3 Practical Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
9.3 Response Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
9.3.1 The Impulse-Invariant Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
9.3.2 Modifications to Impulse-Invariant Design . . . . . . . . . . . . . . . . . . . . . . . . 368
9.4 The Matched z-Transform for Factored Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
9.4.1 Modifications to Matched z-Transform Design . . . . . . . . . . . . . . . . . . . . . . 372
9.5 Mappings from Discrete Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
9.5.1 Mappings from Difference Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
9.5.2 Stability Properties of the Backward-Difference Algorithm . . . . . . . . . . . . . . . 374
9.5.3 The Forward-Difference Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
9.5.4 Mappings from Integration Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
9.5.5 Stability Properties of Integration-Algorithm Mappings . . . . . . . . . . . . . . . . . 376
9.5.6 Frequency Response of Discrete Algorithms . . . . . . . . . . . . . . . . . . . . . . . . 378
9.5.7 Mappings from Rational Approximations . . . . . . . . . . . . . . . . . . . . . . . . . 381
9.6 The Bilinear Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
9.6.1 Using the Bilinear Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
9.7 Spectral Transformations for IIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
9.7.1 Digital-to-Digital Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
9.7.2 Direct (A2D) Transformations for Bilinear Design . . . . . . . . . . . . . . . . . . . . 386
9.7.3 Bilinear Transformation for Peaking and Notch Filters . . . . . . . . . . . . . . . . . . 386
9.8 Design Recipe for IIR Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
9.8.1 Finite-Word-Length Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
9.8.2 Effects of Coefficient Quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
9.8.3 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
10 THE DISCRETE FOURIER TRANSFORM AND ITS APPLICATIONS 405
10.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
10.1.1 Connections Between Frequency-Domain Transforms . . . . . . . . . . . . . . . . . . . 405
10.2 The DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
10.3 Properties of the DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
10.3.1 Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
10.3.2 Central Ordinates and Special DFT Values . . . . . . . . . . . . . . . . . . . . . . . . 409
10.3.3 Circular Shift and Circular Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
10.3.4 Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
10.3.5 The FFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
10.3.6 Signal Replication and Spectrum Zero Interpolation . . . . . . . . . . . . . . . . . . . 413
10.3.7 Some Useful DFT Pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
10.4 Some Practical Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
10.5 Approximating the DTFT by the DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
10.6 The DFT of Periodic Signals and the DFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
10.6.1 The Inverse DFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
10.6.2 Understanding the DFS Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
10.6.3 The DFS and DFT of Sinusoids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
10.6.4 The DFT and DFS of Sampled Periodic Signals . . . . . . . . . . . . . . . . . . . . . . 420
10.6.5 The Effects of Leakage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
10.7 The DFT of Nonperiodic Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
10.7.1 Spectral Spacing and Zero-Padding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
10.8 Spectral Smoothing by Time Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
10.8.1 Performance Characteristics of Windows . . . . . . . . . . . . . . . . . . . . . . . . . . 426
10.8.2 The Spectrum of Windowed Sinusoids . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
10.8.3 Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
© Ashok Ambardar, September 1, 2003
10.8.4 Detecting Hidden Periodicity Using the DFT . . . . . . . . . . . . . . . . . . . . . . . 432
10.9 Applications in Signal Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
10.9.1 Convolution of Long Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
10.9.2 Deconvolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
10.9.3 Band-Limited Signal Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
10.9.4 The Discrete Hilbert Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
10.10 Spectrum Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
10.10.1 The Periodogram Estimate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
10.10.2 PSD Estimation by the Welch Method . . . . . . . . . . . . . . . . . . . . . . . . . . 438
10.10.3 PSD Estimation by the Blackman-Tukey Method . . . . . . . . . . . . . . . . . . . . 438
10.10.4 Non-Parametric System Identification . . . . . . . . . . . . . . . . . . . . . . . . . . 439
10.10.5 Time-Frequency Plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
10.11 The Cepstrum and Homomorphic Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
10.11.1 Homomorphic Filters and Deconvolution . . . . . . . . . . . . . . . . . . . . . . . . . 441
10.11.2 Echo Detection and Cancellation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
10.12 Optimal Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
10.13 Matrix Formulation of the DFT and IDFT . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
10.13.1 The IDFT from the Matrix Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
10.13.2 Using the DFT to Find the IDFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
10.14 The FFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
10.14.1 Some Fundamental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
10.14.2 The Decimation-in-Frequency FFT Algorithm . . . . . . . . . . . . . . . . . . . . . . 449
10.14.3 The Decimation-in-Time FFT Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 451
10.14.4 Computational Cost . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
10.15 Why Equal Lengths for the DFT and IDFT? . . . . . . . . . . . . . . . . . . . . . . . . . . 454
10.15.1 The Inverse DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
10.15.2 How Unequal Lengths Affect the DFT Results . . . . . . . . . . . . . . . . . . . . . . 456
A USEFUL CONCEPTS FROM ANALOG THEORY 470
A.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
A.1 Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
A.2 System Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
A.2.1 The Zero-State Response and Zero-Input Response . . . . . . . . . . . . . . . . . . . . 476
A.2.2 Step Response and Impulse Response . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
A.3 Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
A.3.1 Useful Convolution Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
A.4 The Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
A.4.1 The Inverse Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
A.4.2 Interconnected Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
A.4.3 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
A.4.4 The Laplace Transform and System Analysis . . . . . . . . . . . . . . . . . . . . . . . 482
A.4.5 The Steady-State Response to Harmonic Inputs . . . . . . . . . . . . . . . . . . . . . . 483
A.5 Fourier Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
A.5.1 Some Useful Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
A.6 The Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
A.6.1 Connections between Laplace and Fourier Transforms . . . . . . . . . . . . . . . . . . 486
A.6.2 Amplitude Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
A.6.3 Fourier Transform of Periodic Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
A.6.4 Spectral Density . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
A.6.5 Ideal Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
A.6.6 Measures for Real Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
A.6.7 A First-Order Lowpass Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
A.6.8 A Second-Order Lowpass Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
A.7 Bode Plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
A.8 Classical Analog Filter Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
PREFACE
This book provides a modern and self-contained introduction to digital signal processing (DSP) and is written
with several audiences in mind. First and foremost, it is intended to serve as a textbook suitable for a one-
semester junior or senior level undergraduate course. To this extent, it includes the relevant topics covered
in a typical undergraduate curriculum and is supplemented by a vast number of worked examples, drill
exercises and problems. It also attempts to provide a broader perspective by introducing useful applications
and additional special topics in each chapter. These form the background for more advanced graduate courses
in this area and also allow the book to be used as a source of basic reference for professionals across various
disciplines interested in DSP.
Scope
The text stresses the fundamental principles and applications of digital signal processing. The relevant con-
cepts are explained and illustrated by worked examples and applications are introduced where appropriate.
Since many applications of DSP relate to the processing of analog signals, some familiarity with basic analog
theory, at the level taught in a typical undergraduate signals and systems course, is assumed and expected.
In order to make the book self-contained, the key concepts and results from analog theory that are relevant
to a study of DSP are outlined and included in an appendix. The topics covered in this book may be grouped
into the following broad areas:
1. The first chapter starts with a brief overview. An introduction to discrete signals, their representation,
and their classification is provided in Chapter 2.
2. Chapter 3 details the analysis of digital filters in the time domain using the solution of difference
equations or the process of convolution, which also serves to link the time domain and the frequency
domain.
3. Chapter 4 covers the analysis in the transformed domain using the z-transform that forms a powerful
tool for studying discrete-time signals and systems.
4. Chapter 5 describes the analysis of discrete signals and digital lters in the frequency domain using
the discrete-time Fourier transform (DTFT) that arises as a special case of the z-transform.
5. Chapter 6 introduces the jargon, terminology, and variety of digital filters, and compares the various
methods of studying them.
6. Chapter 7 discusses the digital processing of analog signals based on the concepts of sampling and
quantization and the spectral representation of sampled signals.
7. Chapter 8 and Chapter 9 describe the design of FIR and IIR filters for various applications using
well-established techniques.
8. Chapter 10 provides an introduction to the spectral analysis of both analog and discrete signals based
on numerical computation of the DFT and the FFT and its applications.
One of the concerns often voiced about undergraduate textbooks is the level of mathematical detail. This
book takes the approach that even though mathematical rigor need not be sacrificed, it does not have to get
in the way of understanding and applying useful DSP concepts. To this extent, the book attempts to preserve
a rational approach and include all the necessary mathematical details. However, whenever possible, the
results are also described and then applied to problem solving on the basis of simple heuristic explanations.
In each chapter, a short opening section outlines the objectives and topical coverage. Central concepts
are highlighted in review panels, illustrated by worked examples and followed by drill exercises with answers.
Many figures have been included to help the student grasp and visualize critical concepts. Results are
tabulated and summarized for easy reference and access. End-of-chapter problems include a variety of drills
and exercises. Application-oriented problems require the use of computational resources such as Matlab.
Since our primary intent is to present the principles of digital signal processing, not software, we have made
no attempt to integrate Matlab into the text. This approach maintains the continuity and logical ow of
the textual material. However, for those interested, a suite of Matlab-based routines that may be used
to illustrate the principles and concepts presented in the book is available on the author's website. We
hasten to add two disclaimers. First, the choice of Matlab is not to be construed as an endorsement of
this product. Second, the routines are supplied in good faith and the author is not responsible for any
consequences arising from their use! A solutions manual for instructors is available from the publisher.
Acknowledgments
This book has gained immensely from the incisive, sometimes provoking, but always constructive, criticism
of the following reviewers:
Many other individuals have also contributed in various ways to this effort. Special thanks are due, in
particular, to
If you come across any errors in the text or discover any bugs in the software, we would appreciate hearing
from you. Any errata will be posted on the author's website.
Ashok Ambardar Michigan Technological University
Internet: http://www.ee.mtu.edu/faculty/akambard.html
e-mail: [email protected]
Chapter 1
OVERVIEW
1.0 Introduction
Few other technologies have revolutionized the world as profoundly as those based on digital signal processing.
For example, the technology of recorded music was, until recently, completely analog from end to end, and
the most important commercial source of recorded music used to be the LP (long-playing) record. The
advent of the digital compact disc changed all that in the span of just a few short years and made the
long-playing record practically obsolete. With the advent and proliferation of high-speed, low-cost computers
and powerful, user-friendly software packages, digital signal processing (DSP) has truly come of age. This
chapter provides an overview of the terminology of digital signal processing and of the connections between
the various topics and concepts covered in the text.
1.1 Signals
Our world is full of signals, both natural and man-made. Examples are the variation in air pressure when we
speak, the daily highs and lows in temperature, and the periodic electrical signals generated by the heart.
Signals represent information. Often, signals may not convey the required information directly and may
not be free from disturbances. It is in this context that signal processing forms the basis for enhancing,
extracting, storing, or transmitting useful information. Electrical signals perhaps oer the widest scope for
such manipulations. In fact, it is commonplace to convert signals to electrical form for processing.
The signals we encounter in practice are often very difficult to characterize, so we choose simple
mathematical models to approximate their behavior. Such models also give us the ability to make predictions
about future signal behavior. Of course, an added advantage of using models is that they are much easier
to generate and manipulate. What is more, we can gradually increase the complexity of our model to obtain
better approximations, if needed. The simplest signal models are a constant variation, an exponential decay
and a sinusoidal or periodic variation. Such signals form the building blocks from which we can develop
representations for more complex forms.
This book starts with a quick overview of discrete signals, how they arise and how they are modeled.
We review some typical measures (such as power and energy) used to characterize discrete signals and the
operations of interpolation and decimation which are often used to change the sampling rate of an already
sampled signal.
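The rate-changing operations mentioned above can be sketched in a few lines of index manipulation; this is a minimal illustration (numpy assumed, with an arbitrary stand-in signal), not a routine from the text:

```python
import numpy as np

# Decimation by M keeps every M-th sample; zero interpolation by L inserts
# L-1 zeros between successive samples.  Both change the sampling rate of an
# already sampled signal.
x = np.arange(8.0)            # a stand-in discrete signal x[n]
decimated = x[::2]            # decimation by 2 keeps x[0], x[2], x[4], ...
upsampled = np.zeros(2 * len(x))
upsampled[::2] = x            # zero interpolation by 2
print(decimated, upsampled[:6])
```

In practice, zero interpolation is followed by lowpass filtering to fill in the inserted zeros, a point Chapter 2 and later chapters take up.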
1.2 Digital Filters
The processing of discrete signals is accomplished by discrete-time systems, also called digital filters. In
the time domain, such systems may be modeled by difference equations in much the same way that analog
systems are modeled by differential equations. We concentrate on models of linear time-invariant (LTI)
systems whose difference equations have constant coefficients. The processing of discrete signals by such
systems can be achieved by resorting to well-known mathematical techniques. For input signals that can
be described as a sum of simpler forms, linearity allows us to find the response as the sum of the responses
to each of the simpler forms. This is superposition. Many systems are actually nonlinear. The study of
nonlinear systems often involves making simplifying assumptions, such as linearity. The system response can
also be obtained using convolution, a method based on superposition: if the response of a system to a unit
sample (or impulse) input is known, then its response is also known for any arbitrary input, because any
input can be expressed as a sum of such impulses.
Two important classes of digital filters are FIR (finite impulse response) filters, whose impulse response
(response to an impulse input) is a finite sequence (lasts only for finite time), and IIR (infinite impulse
response) filters, whose response to an impulse input lasts forever.
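The convolution idea above can be sketched numerically. This is a minimal illustration assuming numpy; the 3-point moving average is an invented impulse response, not one from the text:

```python
import numpy as np

# Impulse response of a 3-point moving-average filter: a finite sequence,
# so this is an FIR filter by definition.
h = np.array([1/3, 1/3, 1/3])

# Any input can be viewed as a sum of scaled, shifted impulses; by
# superposition, the output is the convolution of the input with h.
x = np.array([3.0, 6.0, 9.0, 6.0, 3.0])
y = np.convolve(x, h)

print(y)  # len(x) + len(h) - 1 = 7 output samples
```

Each output sample is the average of three consecutive input samples, exactly what superposition of the shifted, scaled impulse responses predicts.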
1.2.1 The z-Transform
The z-transform is a powerful method of analysis for discrete signals and systems. It is analogous to the
Laplace transform used to study analog systems. The transfer function of an LTI system is a ratio of
polynomials in the complex variable z. The roots of the numerator polynomial are called zeros, and the roots
of the denominator polynomial are called poles. The pole-zero description of a transfer function is quite useful if
we want a qualitative picture of the frequency response. For example, the frequency response goes to zero if
z equals one of the zero locations and becomes unbounded if z equals one of the pole locations.
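A rough numeric check of this pole-zero picture (a sketch assuming numpy; the first-order transfer function below is invented purely for illustration):

```python
import numpy as np

# H(z) = (z - 1)/(z - 0.5): a zero at z = 1 and a pole at z = 0.5.
zero, pole = 1.0, 0.5

# The frequency response is H(z) evaluated on the unit circle z = e^{jw}.
w = np.linspace(0, np.pi, 201)
z = np.exp(1j * w)
H = (z - zero) / (z - pole)

# At w = 0, z = 1 coincides with the zero, so the gain drops to zero.
print(abs(H[0]))
```

Since the pole at z = 0.5 lies inside the unit circle, the response stays bounded on the unit circle itself; a pole on the circle would make the gain blow up at the corresponding frequency.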
1.2.2 The Frequency Domain
It turns out that discrete sinusoids and harmonic signals differ from their analog cousins in some striking
ways. A discrete sinusoid is not periodic for every choice of frequency. Yet it always has a periodic spectrum. An
important consequence of this result is that if the spectrum is periodic for a sampled sinusoid, it should also
be periodic for a sampled combination of sinusoids. This concept forms the basis for the frequency domain
description of discrete signals called the Discrete-Time Fourier Transform (DTFT). And since analog
signals can be described as a combination of sinusoids (periodic ones by their Fourier series and others by
their Fourier transform), their sampled combinations (and consequently any sampled signal) have a periodic
spectrum in the frequency domain. The central period corresponds to the true spectrum of the analog signal
if the sampling rate exceeds the Nyquist rate.
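The 2π-periodicity of the spectrum of any sampled signal can be checked directly from the DTFT sum; a small numpy sketch (the test signal is arbitrary):

```python
import numpy as np

def dtft(x, w):
    """DTFT of a finite sequence x[n] at digital frequency w (rad/sample)."""
    n = np.arange(len(x))
    return np.sum(x * np.exp(-1j * w * n))

x = np.cos(0.3 * np.arange(8))  # samples of a sinusoid (any sequence works)

# Because exp(-1j*(w + 2*pi)*n) = exp(-1j*w*n) for integer n, the DTFT
# repeats with period 2*pi.
print(np.isclose(dtft(x, 1.0), dtft(x, 1.0 + 2 * np.pi)))
```

The periodicity is forced purely by the integer index n, which is why it holds for every sampled signal, not just sinusoids.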
1.2.3 Filter Concepts
The term filter is often used to denote systems that process the input in a specified way. In this context,
filtering describes a signal-processing operation that allows signal enhancement, noise reduction, or increased
signal-to-noise ratio. Systems for the processing of discrete-time signals are also called digital filters.
Depending on the requirements and application, the analysis of a digital filter may be carried out in the time
domain, the z-domain, or the frequency domain. A common application of digital filters is to modify the
frequency response in some specified way. An ideal lowpass filter passes frequencies up to a specified value
and totally blocks all others. Its spectrum shows an abrupt transition from unity (perfect transmission) in
the passband to zero (perfect suppression) in the stopband. An important consideration is that a symmetric
impulse response sequence possesses linear phase (in its frequency response), which results only in a constant
delay with no amplitude distortion. An ideal lowpass filter possesses linear phase because its impulse response
happens to be a symmetric sequence, but unfortunately, it cannot be realized in practice.
One way to approximate an ideal lowpass filter is by symmetric truncation of its impulse response (which
ensures linear phase). Truncation is equivalent to multiplying (windowing) the impulse response by a finite-
duration sequence (window) of unit samples. The abrupt truncation imposed by such a window results in
overshoot and oscillation in the frequency response that persist no matter how large the truncation
index. To eliminate the overshoot and reduce the oscillations, we use tapered windows. The impulse response
and frequency response of highpass, bandpass, and bandstop filters may be related to those of a lowpass filter
using frequency transformations based on the properties of the DTFT.
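A minimal sketch of this truncation-and-windowing idea (numpy assumed; the cutoff and length are arbitrary illustrative values, not a design from the text):

```python
import numpy as np

# Ideal lowpass impulse response h[n] = sin(wc*n)/(pi*n), written via np.sinc
# so the n = 0 sample (wc/pi) needs no special case.
wc, M = 0.4 * np.pi, 25
n = np.arange(-M, M + 1)
h = (wc / np.pi) * np.sinc(wc * n / np.pi)   # symmetric truncation: 2M+1 taps

# A tapered window (Hamming here) reduces the overshoot and oscillation that
# abrupt (rectangular) truncation produces in the frequency response.
h_win = h * np.hamming(2 * M + 1)

# Symmetric truncation preserves symmetry, hence linear phase.
print(np.allclose(h_win, h_win[::-1]))
```

Plotting the magnitude responses of h and h_win would show the rectangular version's ripple near the cutoff and the smoother transition of the windowed version.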
Filters that possess constant gain but whose phase varies with frequency are called allpass filters and
may be used to modify the phase characteristics of a system. A filter whose gain is zero at a selected
frequency is called a notch filter and may be used to remove the unwanted frequency from a signal. A filter
whose gain is zero at multiples of a selected frequency is called a comb filter and may be used to remove an
unwanted frequency and its harmonics from a signal.
1.3 Signal Processing
Two conceptual schemes for the processing of signals are illustrated in Figure 1.1. The digital processing
of analog signals requires that we use an analog-to-digital converter (ADC) for sampling the analog signal
prior to processing and a digital-to-analog converter (DAC) to convert the processed digital signal back to
analog form.
[Block diagrams: analog signal processing (analog signal → analog signal processor → analog signal) and digital signal processing of analog signals (analog signal → ADC → digital signal → digital signal processor → digital signal → DAC → analog signal)]
Figure 1.1 Analog and digital signal processing
1.3.1 Digital Processing of Analog Signals
Many DSP applications involve the processing of digital signals obtained by sampling analog signals and
the subsequent reconstruction of analog signals from their samples. For example, the music you hear from
your compact disc (CD) player is due to changes in the air pressure caused by the vibration of the speaker
diaphragm. It is an analog signal because the pressure variation is a continuous function of time. However,
the information stored on the compact disc is in digital form. It must be processed and converted to analog
form before you can hear the music. A record of the yearly increase in the world population describes time
measured in increments of one (year) while the population increase is measured in increments of one (person).
It is a digital signal with discrete values for both time and population.
For digital signal processing we need digital signals. To process an analog signal by digital means, we
must convert it to a digital signal in two steps. First, we must sample it, typically at uniform intervals t_s
(every 2 ms, for example). The discrete quantity nt_s is related to the integer index n. Next, we must
quantize the sample values (amplitudes) (by rounding to the nearest millivolt, for example). The central
concept in the digital processing of analog signals is that the sampled signal must be a unique representation
of the underlying analog signal. Even though sampling leads to a potential loss of information, all is not lost!
Often, it turns out that if we choose the sampling interval wisely, the processing of an analog signal is entirely
equivalent to the processing of the corresponding digital signal; there is no loss of information! This is one of
the wonders of the sampling theorem that makes digital signal processing such an attractive option. For a
unique correspondence between an analog signal and the version reconstructed from its samples, the sampling
rate S must exceed twice the highest signal frequency f_0. The value S = 2f_0 is called the Nyquist sampling
rate. If the sampling rate is less than the Nyquist rate, a phenomenon known as aliasing manifests itself.
Components of the analog signal at high frequencies appear at (alias to) lower frequencies in the sampled
signal. This results in a sampled signal with a smaller highest frequency. Aliasing effects are impossible to
undo once the samples are acquired. It is thus commonplace to band-limit the signal before sampling (using
lowpass filters).
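Aliasing is easy to demonstrate numerically. In this sketch (numpy assumed; the frequencies are invented for illustration), a sinusoid sampled below its Nyquist rate becomes indistinguishable from a lower-frequency one:

```python
import numpy as np

# Sample a 7 Hz sinusoid at S = 10 Hz, below its Nyquist rate of 2*7 = 14 Hz.
S = 10.0
n = np.arange(20)
x7 = np.cos(2 * np.pi * 7 * n / S)

# The samples are identical to those of a 3 Hz sinusoid:
# 7 Hz has aliased to |7 - S| = 3 Hz.
x3 = np.cos(2 * np.pi * 3 * n / S)
print(np.allclose(x7, x3))
```

Once only the samples remain, no processing can tell the two sinusoids apart, which is why the band-limiting must happen before the sampler.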
Numerical processing using digital computers requires finite data with finite precision. We must limit
signal amplitudes to a finite number of levels. This process, called quantization, produces nonlinear effects
that can be described only in statistical terms. Quantization also leads to an irreversible loss of information
and is typically considered only in the final stage of any design.
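The irreversibility of quantization shows up immediately in a toy rounding quantizer (a sketch assuming numpy; the bit width and sample values are invented):

```python
import numpy as np

# Uniform quantizer over [-1, 1) with 2**bits levels: rounding each sample to
# the nearest level is irreversible -- distinct inputs collapse to one output.
def quantize(x, bits):
    step = 2.0 / (2 ** bits)
    return step * np.round(np.asarray(x) / step)

x = np.array([0.30, 0.31, 0.32])   # three distinct samples
xq = quantize(x, 3)                # step = 0.25: all three collapse to 0.25
print(xq)
```

Because many inputs map to the same output level, the mapping has no inverse; the quantization error can only be characterized statistically, as the text notes.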
A typical system for the digital processing of analog signals consists of the following:
An analog lowpass pre-filter or anti-aliasing filter which limits the highest signal frequency to ensure
freedom from aliasing.
A sampler which operates above the Nyquist sampling rate.
A quantizer which quantizes the sampled signal values to a finite number of levels. Currently, 16-bit
quantizers are quite commonplace.
An encoder which converts the quantized signal values to a string of binary bits or zeros and ones (words)
whose length is determined by the number of quantization levels of the quantizer.
The digital processing system itself (hardware or software) which processes the encoded digital signal (or
bit stream) in a desired fashion.
A decoder which converts the processed bit stream to a discrete-time signal with quantized signal values.
A reconstruction filter which reconstructs a staircase approximation of the discrete-time signal.
A lowpass analog anti-imaging filter which extracts the central period from the periodic spectrum, removes
the unwanted replicas, and results in a smoothed reconstructed signal.
1.3.2 Filter Design
The design of filters is typically based on a set of specifications in the frequency domain corresponding to
the magnitude spectrum or filter gain. The design of IIR filters typically starts with a lowpass prototype
from which other forms may be readily developed using frequency transformations.
1.3.3 The Design of IIR Filters
The design of IIR filters starts with an analog lowpass prototype based on the given specifications. Classical
analog filters include Butterworth (maximally flat passband), Chebyshev I (rippled passband), Chebyshev
II (rippled stopband), and elliptic (rippled passband and stopband). The analog lowpass prototype is then
converted to a lowpass digital filter using an appropriate mapping, and finally to the required form using
an appropriate spectral transformation. Practical mappings are based on response invariance or equivalence
of ideal operations, such as integration, and their numerical counterparts. Not all of these avoid the effects
of aliasing. The most commonly used mapping is based on the trapezoidal rule for numerical integration
and is called the bilinear transformation. It compresses the entire infinite analog frequency range into a
finite range and thus avoids aliasing, at the expense of warping (distorting) the analog frequencies. We can
compensate for this warping if we prewarp (stretch) the analog frequency specifications before designing
the analog filter.
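The warping that the bilinear transformation introduces follows the standard relation wa = (2/T) tan(wd/2) between the digital frequency wd (rad/sample) and the analog frequency wa; a quick numeric sketch (numpy assumed, T and the frequencies chosen arbitrarily):

```python
import numpy as np

# Bilinear-transform frequency warping: wa = (2/T) * tan(wd/2).
# Prewarping the analog specifications with this relation compensates
# for the distortion the mapping introduces.
T = 1.0                                  # sampling interval (illustrative)
wd = np.array([0.1, 0.25, 0.5]) * np.pi  # digital frequencies, rad/sample
wa = (2.0 / T) * np.tan(wd / 2.0)

# For small wd the mapping is nearly linear (wa ~ wd/T); the warping grows
# as wd approaches pi.
print(wa / wd)
```

Designing the analog prototype at the prewarped frequencies wa guarantees that, after the bilinear mapping, the digital filter's band edges land exactly at the specified wd.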
1.3.4 The Design of FIR Filters
FIR filters are inherently stable and can be designed with linear phase, leading to no phase distortion, but
their realization often involves a large filter length to meet given requirements. Their design is typically
based on selecting a symmetric (linear-phase) impulse response sequence of the smallest length that meets
design specifications, and involves iterative techniques. Even though the spectrum of the truncated ideal
filter is, in fact, the best approximation (in the mean square sense) compared to the spectrum of any other
filter of the same length, it shows undesirable oscillations and overshoot, which can be eliminated by
modifying (windowing) the impulse response sequence using tapered windows. The smallest length that
meets specifications depends on the choice of window and is often estimated by empirical means.
1.4 The DFT and FFT
The periodicity of the DTFT is a consequence of the fundamental result that sampling a signal in one
domain leads to periodicity in the other. Just as a periodic signal has a discrete spectrum, a discrete-time
signal has a periodic spectrum. This duality also characterizes several other transforms. If the time signal
is both discrete and periodic, its spectrum is also discrete and periodic and describes the discrete Fourier
transform (DFT). The DFT is essentially the DTFT evaluated at a finite number of frequencies and is
also periodic. The DFT can be used to approximate the spectrum of analog signals from their samples,
provided the relations are understood in their proper context using the notion of implied periodicity. The
Fast Fourier Transform (FFT) is a set of fast practical algorithms for computing the DFT. The DFT and
FFT find extensive applications in fast convolution, signal interpolation, spectrum estimation, and transfer
function estimation.
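The fast-convolution application mentioned above rests on a single fact: multiplying DFTs corresponds to circular convolution, so zero-padding makes it match ordinary convolution. A minimal numpy sketch with invented sequences:

```python
import numpy as np

# Fast convolution: multiply the DFTs, then invert.  Zero-pad both sequences
# to N >= len(x) + len(h) - 1 so the circular convolution implied by the DFT
# equals the ordinary (linear) convolution.
x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, -1.0, 0.5])
N = len(x) + len(h) - 1

y_fft = np.real(np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)))
y_direct = np.convolve(x, h)
print(np.allclose(y_fft, y_direct))
```

For long sequences the FFT route costs on the order of N log N operations versus N^2 for direct convolution, which is what makes it "fast."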
1.5 Advantages of DSP
In situations where signals are encountered in digital form, their processing is performed digitally. In other
situations that relate to the processing of analog signals, DSP offers many advantages.
Processing
DSP offers a wide variety of processing techniques that can be implemented easily and efficiently. Some
techniques (such as processing by linear-phase filters) have no counterpart in the analog domain.
Storage
Digital data can be stored and later retrieved with no degradation or loss of information. Data recorded by
analog devices is subject to the noise inherent in the recording media (such as tape) and degradation due to
aging and environmental effects.
Transmission
Digital signals are more robust and offer much better noise immunity during transmission as compared to
analog signals.
Implementation
A circuit for processing analog signals is typically designed for a specic application. It is sensitive to
component tolerances, aging, and environmental effects (such as changes in temperature and humidity)
and not easily reproducible. A digital filter, on the other hand, is extremely easy to implement and highly
reproducible. It may be designed to perform a variety of tasks without replacing or modifying any hardware,
but simply by changing the filter coefficients on the fly.
Cost
With the proliferation of low-cost, high-speed digital computers, DSP offers effective alternatives for a wide
variety of applications. High-frequency analog applications may still require analog signal processing but their
number continues to shrink. As long as the criteria of the sampling theorem are satised and quantization
is carried out to the desired precision (using the devices available), the digital processing of analog signals
has become the method of choice unless compelling reasons dictate otherwise.
In the early days of the digital revolution, DSP did suffer from disadvantages such as speed, cost, and
quantization effects, but these continue to pale into insignificance with advances in semiconductor technology
and processing and computing power.
1.5.1 Applications of DSP
Digital signal processing finds applications in almost every conceivable field. Its impact on consumer
electronics is evidenced by the proliferation of digital communication, digital audio, digital (high-definition)
television, and digital imaging (cameras). Its applications to biomedical signal processing include the
enhancement and interpretation of tomographic images and analysis of ECG and EEG signals. Space applications include
satellite navigation and guidance systems and analysis of satellite imagery obtained by various means. And,
the list goes on and continues to grow.
Chapter 2
DISCRETE SIGNALS
2.0 Scope and Objectives
This chapter begins with an overview of discrete signals. It starts with various ways of signal classification,
shows how discrete signals can be manipulated by various operations, and quantifies the measures used to
characterize such signals. It introduces the concept of sampling and describes the sampling theorem as the
basis for sampling analog signals without loss of information. It concludes with an introduction to random
signals.
2.1 Discrete Signals
Discrete signals (such as the annual population) may arise naturally or as a consequence of sampling
continuous signals (typically at a uniform sampling interval t_s). A sampled or discrete signal x[n] is just an
ordered sequence of values corresponding to the integer index n that embodies the time history of the signal.
It contains no direct information about the sampling interval t_s, except through the index n of the sample
locations. A discrete signal x[n] is plotted as lines against the index n. When comparing analog signals with
their sampled versions, we shall assume that the origin t = 0 also corresponds to the sampling instant n = 0.
We need information about t_s only in a few situations, such as plotting the signal explicitly against time t
(at t = nt_s) or approximating the area of the underlying analog signal from its samples (as t_s Σ x[n]).
REVIEW PANEL 2.1
Notation for a Numeric Sequence x[n]
A marker (⇓) indicates the origin n = 0. Example: x[n] = 1, 2, ⇓4, 8, . . .
Ellipses (. . .) denote infinite extent on either side. Example: x[n] = . . . , 2, 4, ⇓6, 8, . . .
A discrete signal x[n] is called right-sided if it is zero for n < N (where N is finite), causal if it is zero
for n < 0, left-sided if it is zero for n > N, and anti-causal if it is zero for n ≥ 0.
REVIEW PANEL 2.2
Discrete Signals Can Be Left-Sided, Right-Sided, Causal, or Anti-Causal
[Sketches: an anti-causal signal (zero for n ≥ 0), a left-sided signal (zero for n > N), a right-sided signal (zero for n < N), and a causal signal (zero for n < 0).]
A discrete periodic signal repeats every N samples and is described by

x[n] = x[n ± kN], k = 0, 1, 2, 3, . . . (2.1)

The period N is the smallest number of samples that repeats. Unlike its analog counterpart, the period N of
discrete signals is always an integer. The common period of a linear combination of periodic discrete signals
is given by the least common multiple (LCM) of the individual periods.
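As a quick numerical illustration (a sketch of ours, not part of the text; the periods N1 and N2 are made-up values), the LCM rule for the common period takes only a few lines of Python:

```python
from math import lcm  # available in Python 3.9+

# Hypothetical individual periods of two discrete periodic signals
N1, N2 = 4, 6

# The common period of their linear combination is the LCM of the periods
N = lcm(N1, N2)
print(N)  # 12
```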
REVIEW PANEL 2.3
The Period of a Discrete Periodic Signal Is the Number of Samples per Period
The period N is always an integer. For combinations, N is the LCM of the individual periods.
DRILL PROBLEM 2.1
Let x[n] = . . . , 1, 2, 1, 2, 1, 2, ⇓1, 2, 1, 2, 1, 2, . . . and y[n] = . . . , 1, 2, 3, 1, 2, 3, ⇓1, 2, 3, 1, 2, 3, . . .
Let g[n] = x[n] + y[n]. What is the period of g[n]? What are the sample values in one period of g[n]?
Answers: 6; one period is g[n] = ⇓2, 4, 4, 3, 3, 5
2.1.1 Signal Measures
Signal measures for discrete signals are often based on summations. Summation is the discrete-time equivalent
of integration. The discrete sum S_D, the absolute sum S_A, and the cumulative sum (running sum)
s_C[n] of a signal x[n] are defined by

S_D = Σ_{n=−∞}^{∞} x[n]    S_A = Σ_{n=−∞}^{∞} |x[n]|    s_C[n] = Σ_{k=−∞}^{n} x[k] (2.2)

Signals for which the absolute sum Σ |x[n]| is finite are called absolutely summable. For nonperiodic signals,
the signal energy E is a useful measure. It is defined as the sum of the squares of the signal values

E = Σ_{n=−∞}^{∞} |x[n]|^2 (2.3)
The absolute value allows us to extend this relation to complex-valued signals. Measures for periodic signals
are based on averages, since their signal energy is infinite. The average value x_av and signal power P of a
periodic signal x[n] with period N are defined as the average sum per period and average energy per period,
respectively:

x_av = (1/N) Σ_{n=0}^{N−1} x[n]    P = (1/N) Σ_{n=0}^{N−1} |x[n]|^2 (2.4)

Note that the index runs from n = 0 to n = N − 1 and includes all N samples in one period. Only for
nonperiodic signals is it useful to use the limiting forms

x_av = lim_{M→∞} (1/(2M+1)) Σ_{n=−M}^{M} x[n]    P = lim_{M→∞} (1/(2M+1)) Σ_{n=−M}^{M} |x[n]|^2 (2.5)
Signals with finite energy are called energy signals (or square summable). Signals with finite power are called
power signals. All periodic signals are power signals.
REVIEW PANEL 2.4
Energy and Power in Discrete Signals
Energy: E = Σ_{n=−∞}^{∞} |x[n]|^2    Power (if periodic with period N): P = (1/N) Σ_{n=0}^{N−1} |x[n]|^2
EXAMPLE 2.1 (Signal Energy and Power)
(a) Find the energy in the signal x[n] = 3(0.5)^n, n ≥ 0.
This describes a one-sided decaying exponential. Its signal energy is

E = Σ_{n=−∞}^{∞} x^2[n] = Σ_{n=0}^{∞} |3(0.5)^n|^2 = Σ_{n=0}^{∞} 9(0.25)^n = 9/(1 − 0.25) = 12 J

Note: Σ_{n=0}^{∞} α^n = 1/(1 − α), |α| < 1.
(b) Consider a periodic signal x[n] with period N = 4 whose samples in one period sum to zero, so that its average value is

x_av = (1/4) Σ_{n=0}^{3} x[n] = 0

Its signal power is

P = (1/4) Σ_{n=0}^{3} x^2[n] = (1/4)(36 + 36) = 18 W
(c) Consider the periodic signal x[n] = 6e^{j2πn/4} whose period is N = 4.
This signal is complex-valued, with |x[n]| = 6. One period of this signal is x_1[n] = ⇓6, 6j, −6, −6j.
The signal power of x[n] is

P = (1/4) Σ_{n=0}^{3} |x[n]|^2 = (1/4)(36 + 36 + 36 + 36) = 36 W
DRILL PROBLEM 2.2
Let x[n] = . . . , 1, 2, ⇓1, 2, 1, 2, 1, 2, . . . and y[n] = . . . , 1, 2, 2, ⇓1, 2, 2, 1, 2, 2, . . .
Let g[n] = x[n] + y[n]. What is the power in x[n], y[n], and g[n]? What is the average value of g[n]?
Answers: 2.5, 3, 10, 3
2.2 Operations on Discrete Signals
Common operations on discrete signals include element-wise addition and multiplication. Two other useful
operations are shifting and folding (or time reversal).
Time Shift: The signal y[n] = x[n − α] describes a delayed version of x[n] for α > 0. In other words, if x[n]
starts at n = N, then its shifted version y[n] = x[n − α] starts at n = N + α. Thus, the signal y[n] = x[n − 2]
is a delayed (shifted right by 2) version of x[n], and the signal g[n] = x[n + 2] is an advanced (shifted left
by 2) version of x[n]. A useful consistency check for sketching shifted signals is based on the fact that if
y[n] = x[n − α], a sample of x[n] at the original index n gets relocated to the new index n_N based on the
operation n = n_N − α.
Folding: The signal y[n] = x[−n] represents a folded version of x[n], a mirror image of the signal x[n] about
the origin n = 0. The signal y[n] = x[−n − α] may be obtained from x[n] in one of two ways:
(a) x[n] → delay (shift right) by α → x[n − α] → fold → x[−n − α].
(b) x[n] → fold → x[−n] → advance (shift left) by α → x[−n − α].
In either case, a sample of x[n] at the original index n will be plotted at a new index n_N given by
n = −n_N − α, and this can serve as a consistency check in sketches.
REVIEW PANEL 2.5
Time delay means x[n] → x[n − M], M > 0, and folding means x[n] → x[−n]
You can generate x[−n − M] from x[n] in one of two ways:
1. Shift right M units: x[n] → x[n − M]. Then fold: x[n − M] → x[−n − M].
2. Fold: x[n] → x[−n]. Then shift left M units: x[−n] → x[−(n + M)] = x[−n − M].
Check: Use n = −n_N − M to confirm the new locations n_N for the origin n = 0 and the end points of x[n].
EXAMPLE 2.2 (Operations on Discrete Signals)
Let x[n] = 2, 3, ⇓2, 4, 1, y[n] = 3, 1, 4, ⇓0, and f[n] = 4, 1, ⇓3.
2.2.1 Symmetry
If a signal x[n] is identical to its mirror image x[−n], it is called an even symmetric signal. If x[n]
differs from its mirror image x[−n] only in sign, it is called an odd symmetric or antisymmetric signal.
Mathematically,

x_e[n] = x_e[−n]    x_o[n] = −x_o[−n] (2.6)

In either case, the signal extends over symmetric limits −N ≤ n ≤ N. For an odd symmetric signal, note
that x_o[0] = 0 and the sum of samples in x_o[n] over symmetric limits (−M, M) equals zero:

Σ_{k=−M}^{M} x_o[k] = 0 (2.7)
REVIEW PANEL 2.6
Characteristics of Symmetric Signals
Even symmetry: x_e[n] = x_e[−n]    Odd symmetry: x_o[0] = 0 and x_o[n] = −x_o[−n]
[Sketches: an even symmetric signal and an odd symmetric signal plotted against n.]
2.2.2 Even and Odd Parts of Signals
Even symmetry and odd symmetry are mutually exclusive. Consequently, if a signal x[n] is formed by
summing an even symmetric signal x_e[n] and an odd symmetric signal x_o[n], it will be devoid of either
symmetry. Turning things around, any signal x[n] may be expressed as the sum of an even symmetric part
x_e[n] and an odd symmetric part x_o[n]:

x[n] = x_e[n] + x_o[n] (2.8)

To find x_e[n] and x_o[n] from x[n], we fold x[n] and invoke symmetry to get

x[−n] = x_e[−n] + x_o[−n] = x_e[n] − x_o[n] (2.9)
Adding and subtracting the two preceding equations, we obtain

x_e[n] = 0.5x[n] + 0.5x[−n]    x_o[n] = 0.5x[n] − 0.5x[−n] (2.10)

This means that the even part x_e[n] equals half the sum of the original and the folded version, and the odd part
x_o[n] equals half the difference between the original and the folded version. Naturally, if a signal x[n] has even
symmetry, its odd part x_o[n] will equal zero, and if x[n] has odd symmetry, its even part x_e[n] will equal
zero.
REVIEW PANEL 2.7
Any Discrete Signal Is the Sum of an Even Symmetric and an Odd Symmetric Part
x[n] = x_e[n] + x_o[n] where x_e[n] = 0.5x[n] + 0.5x[−n] and x_o[n] = 0.5x[n] − 0.5x[−n]
How to implement: Graphically, if possible. How to check: Does x_e[n] + x_o[n] give x[n]?
EXAMPLE 2.3 (Signal Symmetry)
(a) Let x[n] = 2, ⇓4, 6, 4. Folding gives x[−n] = 4, 6, ⇓4, 2.
Zero-padding, though not essential, allows us to perform element-wise addition or subtraction with
ease. With 0.5x[n] = 0, 1, ⇓2, 3, 2 and 0.5x[−n] = 2, 3, ⇓2, 1, 0, we obtain

x_e[n] = 0.5x[n] + 0.5x[−n] = 2, 4, ⇓4, 4, 2
x_o[n] = 0.5x[n] − 0.5x[−n] = −2, −2, ⇓0, 2, 2

The various signals are sketched in Figure E2.3A. As a consistency check, you should confirm that
x_o[0] = 0, Σ x_o[n] = 0, and that the sum x_e[n] + x_o[n] recovers x[n].
[Sketches of x[n], 0.5x[n], 0.5x[−n], x_e[n], and x_o[n].]
Figure E2.3A The signal x[n] and its odd and even parts for Example 2.3(a)
(b) Let x[n] = u[n] − u[n − 5]. Find and sketch its odd and even parts.
The signal x[n] and the genesis of its odd and even parts are shown in Figure E2.3B. Note the value
of x_e[n] at n = 0 in the sketch.
[Sketches of x[n], 0.5x[n], 0.5x[−n], x_e[n], and x_o[n].]
Figure E2.3B The signal x[n] and its odd and even parts for Example 2.3(b)
DRILL PROBLEM 2.4
(a) Find and sketch the even and odd parts of x[n] = ⇓8, 4, 2.
(b) Find and sketch the odd part of y[n] = 8, ⇓4, 2.
Answers: (a) x_e[n] = 1, 2, ⇓8, 2, 1    x_o[n] = −1, −2, ⇓0, 2, 1 (b) y_o[n] = 3, ⇓0, −3.
2.3 Decimation and Interpolation
The time scaling of discrete-time signals must be viewed with care. For discrete-time signals, time scaling is
equivalent to decreasing or increasing the signal length. The problems that arise in time scaling are not in
what happens but how it happens.
2.3.1 Decimation
Decimation refers to a process of reducing the signal length by discarding signal samples. Suppose x[n]
corresponds to an analog signal x(t) sampled at intervals t_s. The signal y[n] = x[2n] then corresponds to
the compressed signal x(2t) sampled at t_s and contains only alternate samples of x[n] (corresponding to
x[0], x[2], x[4], . . .). We can also obtain y[n] directly from x(t) (not its compressed version) if we sample it
at intervals 2t_s (or at a sampling rate S = 1/(2t_s)). This means a twofold reduction in the sampling rate.
Decimation by a factor of N is equivalent to sampling x(t) at intervals Nt_s and implies an N-fold reduction
in the sampling rate. The decimated signal x[Nn] is generated from x[n] by retaining every Nth sample
(corresponding to the indices k = Nn) and discarding all others.
2.3.2 Interpolation
If x[n] corresponds to x(t) sampled at intervals t_s, then y[n] = x[n/2] corresponds to x(t) sampled at t_s/2
and has twice the length of x[n], with one new sample between adjacent samples of x[n]. If an expression for
x[n] (or the underlying analog signal) were known, it would be no problem to determine these new sample
values. If we are only given the sample values of x[n] (without its analytical form), the best we can do is
interpolate between samples. For example, we may choose each new sample value as zero (zero interpolation),
a constant equal to the previous sample value (step interpolation), or the average of adjacent sample values
(linear interpolation). Zero interpolation is referred to as up-sampling and plays an important role in
practical interpolation schemes. Interpolation by a factor of N is equivalent to sampling x(t) at intervals
t_s/N and implies an N-fold increase in both the sampling rate and the signal length.
Some Caveats
It may appear that decimation (discarding signal samples) and interpolation (inserting signal samples) are
inverse operations but this is not always the case. Consider the two sets of operations shown below:
x[n] → decimate by 2 → x[2n] → interpolate by 2 → x[n]?
x[n] → interpolate by 2 → x[n/2] → decimate by 2 → x[n]

On the face of it, both sets of operations start with x[n] and appear to recover x[n], suggesting that interpolation
and decimation are inverse operations. In fact, only the second sequence of operations (interpolation
followed by decimation) recovers x[n] exactly. To see why, let x[n] = ⇓1, 2, 6, 4, 8. Using step interpolation, we find

⇓1, 2, 6, 4, 8 → decimate (n → 2n) → ⇓1, 6, 8 → interpolate (n → n/2) → ⇓1, 1, 6, 6, 8, 8

⇓1, 2, 6, 4, 8 → interpolate (n → n/2) → ⇓1, 1, 2, 2, 6, 6, 4, 4, 8, 8 → decimate (n → 2n) → ⇓1, 2, 6, 4, 8
We see that decimation is indeed the inverse of interpolation, but the converse is not necessarily true.
After all, it is highly unlikely for any interpolation scheme to recover or predict the exact value of the
samples that were discarded during decimation. In situations where both interpolation and decimation are
to be performed in succession, it is therefore best to interpolate rst. In practice, of course, interpolation or
decimation should preserve the information content of the original signal, and this imposes constraints on
the rate at which the original samples were acquired.
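The caveat can be reproduced in a few lines (a sketch of ours, using step interpolation and a short test signal):

```python
# Sketch: step-interpolation after decimation does NOT recover x[n],
# but decimation after interpolation does.

def decimate(x, N=2):
    # keep every Nth sample
    return x[::N]

def step_interp(x, N=2):
    # repeat each sample N times (step interpolation)
    return [v for v in x for _ in range(N)]

x = [1, 2, 6, 4, 8]
print(step_interp(decimate(x)))   # [1, 1, 6, 6, 8, 8]  -- not x
print(decimate(step_interp(x)))   # [1, 2, 6, 4, 8]     -- exactly x
```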
2.3.3 Fractional Delays
Fractional (typically half-sample) delays are sometimes required in practice and can be implemented using
interpolation and decimation. If we require that only integer shifts, interpolation, and decimation be used,
the correct result is obtained by using interpolation, followed by an integer shift, followed by decimation. To generate
the signal y[n] = x[n − M/N] = x[(Nn − M)/N] from x[n], we use the following sequence of operations:

x[n] → interpolate by N → x[n/N] → delay by M → x[(n − M)/N] → decimate by N → x[(Nn − M)/N] = y[n]

The idea is to ensure that each operation (interpolation, shift, and decimation) involves integers.
REVIEW PANEL 2.8
Decimation by N, Interpolation by N, and Fractional Delay by M/N
Decimation: Keep only every Nth sample (at n = kN). This leads to potential loss of information.
Interpolation: Insert N − 1 new values after each sample. The new sample values may equal
zero (zero interpolation), the previous value (step interpolation), or linearly interpolated values.
Fractional Delay (y[n] = x[n − M/N]): Interpolate x[n] by N, then delay by M, then decimate by N.
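A sketch of the half-sample delay y[n] = x[n − 1/2] for a causal list of samples, using linear interpolation with the end sample interpolated toward zero (the function names are our own):

```python
# Sketch: half-sample delay (M = 1, N = 2) via interpolate -> shift -> decimate.

def linear_interp2(x):
    """Insert one linearly interpolated value after each sample (N = 2)."""
    y = []
    for k, v in enumerate(x):
        nxt = x[k + 1] if k + 1 < len(x) else 0  # interpolate last value toward 0
        y += [v, (v + nxt) / 2]
    return y

def half_sample_delay(x):
    g = linear_interp2(x)      # x[n/2]
    g = [0] + g                # delay by one sample: x[(n - 1)/2]
    return g[::2]              # decimate by 2: x[(2n - 1)/2] = x[n - 1/2]

print(half_sample_delay([2, 4, 6, 8]))  # [0, 3.0, 5.0, 7.0, 4.0]
```

Each intermediate step involves only integer indices, which is the whole point of the interpolate-shift-decimate ordering.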
EXAMPLE 2.4 (Decimation and Interpolation)
(a) Let x[n] = 1, ⇓2, 5, −1.
The zero-interpolated signal is g[n] = x[n/3] = 1, 0, 0, ⇓2, 0, 0, 5, 0, 0, −1, 0, 0.
The step-interpolated signal is h[n] = x[n/3] = 1, 1, 1, ⇓2, 2, 2, 5, 5, 5, −1, −1, −1.
The linearly interpolated signal is s[n] = x[n/3] = 1, 4/3, 5/3, ⇓2, 3, 4, 5, 3, 1, −1, −2/3, −1/3.
In linear interpolation, note that we interpolated the last two values toward zero.
(b) Let x[n] = 3, 4, ⇓5, 6. Find g[n] = x[2n − 1] and the step-interpolated signal h[n] = x[0.5n − 1].
In either case, we first find y[n] = x[n − 1] = 3, ⇓4, 5, 6. Then
g[n] = y[2n] = x[2n − 1] = ⇓4, 6.
h[n] = y[n/2] = x[0.5n − 1] = 3, 3, ⇓4, 4, 5, 5, 6, 6.
(c) Let x[n] = 3, 4, ⇓5, 6. Find y[n] = x[2n/3], assuming step interpolation where needed.
We first interpolate by 3 and then decimate by 2.
After interpolation: g[n] = x[n/3] = 3, 3, 3, 4, 4, 4, ⇓5, 5, 5, 6, 6, 6.
After decimation: y[n] = g[2n] = x[2n/3] = 3, 3, 4, ⇓5, 5, 6.
(d) Let x[n] = 2, 4, ⇓6, 8. Find the signal y[n] = x[n − 0.5], assuming linear interpolation where needed.
We first interpolate by 2, then delay by 1, and then decimate by 2:
After interpolation: g[n] = x[n/2] = 2, 3, 4, 5, ⇓6, 7, 8, 4.
After delay: h[n] = g[n − 1] = 2, 3, 4, ⇓5, 6, 7, 8, 4.
After decimation: y[n] = h[2n] = x[(2n − 1)/2] = x[n − 0.5] = 3, ⇓5, 7, 4.
DRILL PROBLEM 2.5
Let x[n] = ⇓8, 4, 2, 6. Find y[n] = x[2n], g[n] = x[2n + 1], h[n] = x[0.5n], and f[n] = x[n + 0.5].
Assume linear interpolation where required.
Answers: y[n] = ⇓8, 2    g[n] = ⇓4, 6    h[n] = ⇓8, 6, 4, 3, 2, 4, 6, 3    f[n] = ⇓6, 3, 4, 3.
2.4 Common Discrete Signals
The unit impulse (or unit sample) δ[n], the unit step u[n], and the unit ramp are defined as

δ[n] = { 0, n ≠ 0; 1, n = 0 }    u[n] = { 0, n < 0; 1, n ≥ 0 }    r[n] = nu[n] = { 0, n < 0; n, n ≥ 0 } (2.11)

The discrete impulse is just a unit sample at n = 0. It is completely free of the kind of ambiguities associated
with the analog impulse δ(t) at t = 0. The discrete unit step u[n] also has a well-defined, unique value of
u[0] = 1 (unlike its analog counterpart u(t)). The signal x[n] = Anu[n] = Ar[n] describes a discrete ramp
whose slope A is given by x[k] − x[k − 1], the difference between adjacent sample values.
REVIEW PANEL 2.9
The Discrete Impulse, Step, and Ramp Are Well Defined at n = 0
[Sketches of δ[n], u[n], and r[n] plotted against n.]
2.4.1 Properties of the Discrete Impulse
The product of a signal x[n] with the impulse δ[n − k] results in

x[n]δ[n − k] = x[k]δ[n − k] (2.12)

This is because δ[n − k] is nonzero only at n = k, where the value of x[n] corresponds to x[k]. The result is
an impulse with strength x[k]. The product property leads directly to

Σ_{n=−∞}^{∞} x[n]δ[n − k] = x[k] (2.13)

This is the sifting property. The impulse extracts (sifts out) the value x[k] from x[n] at the impulse
location n = k.
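The product and sifting properties are easy to verify numerically. In this sketch (our own; the test signal and summation range are arbitrary choices), the infinite sum collapses to a finite one because the impulse is nonzero at a single index:

```python
# Sketch: verify the sifting property of the discrete impulse.

def delta(n):
    """Unit impulse: 1 at n = 0, else 0."""
    return 1 if n == 0 else 0

def x(n):
    """An arbitrary test signal."""
    return (-2) ** n

# Summing x[n] * delta[n - k] over n extracts the single value x[k]
k = 3
sifted = sum(x(n) * delta(n - k) for n in range(-10, 11))
print(sifted == x(k))  # True
```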
2.4.2 Signal Representation by Impulses
A discrete signal x[n] may be expressed as a sum of shifted impulses δ[n − k] whose strengths correspond
to x[k], the values of x[n] at n = k. Thus,

x[n] = Σ_{k=−∞}^{∞} x[k]δ[n − k] (2.14)

For example, the signals u[n] and r[n] may be expressed as a train of shifted impulses:

u[n] = Σ_{k=0}^{∞} δ[n − k]    r[n] = Σ_{k=0}^{∞} kδ[n − k] (2.15)

The signal u[n] may also be expressed as the cumulative sum of δ[n], and the signal r[n] may be described
as the cumulative sum of u[n]:

u[n] = Σ_{k=−∞}^{n} δ[k]    r[n] = Σ_{k=−∞}^{n−1} u[k] (2.16)
2.4.3 Discrete Pulse Signals
The discrete rectangular pulse rect(n/2N) and the discrete triangular pulse tri(n/N) are defined by

rect(n/2N) = { 1, |n| ≤ N; 0, elsewhere }    tri(n/N) = { 1 − |n|/N, |n| ≤ N; 0, elsewhere } (2.17)
The signal rect(n/2N) has 2N + 1 unit samples over −N ≤ n ≤ N. The factor 2N in rect(n/2N) gets around
the problem of having to deal with half-integer values of n when N is odd. The signal x[n] = tri(n/N) also
has 2N + 1 samples over −N ≤ n ≤ N, with its end samples x[N] and x[−N] being zero. It is sometimes
convenient to express pulse-like signals in terms of these standard forms.
REVIEW PANEL 2.10
The Discrete rect and tri Functions
[Sketches of rect(n/2N) and tri(n/N) for N = 5, each nonzero over −N ≤ n ≤ N.]
EXAMPLE 2.5 (Describing Sequences and Signals)
(a) Let x[n] = (−2)^n and y[n] = δ[n − 3]. Find z[n] = x[n]y[n] and evaluate the sum A = Σ z[n].
The product, z[n] = x[n]y[n] = (−2)^3 δ[n − 3] = −8δ[n − 3], is an impulse.
The sum, A = Σ z[n], is given by Σ (−2)^n δ[n − 3] = (−2)^3 = −8.
(b) Mathematically describe the signals of Figure E2.5B in at least two different ways.
[Sketches of the three signals x[n], y[n], and h[n].]
Figure E2.5B The signals for Example 2.5(b)
1. The signal x[n] may be described as the sequence x[n] = 4, ⇓2, −1, 3.
It may also be written as x[n] = 4δ[n + 1] + 2δ[n] − δ[n − 1] + 3δ[n − 2].
2. The signal y[n] may be represented variously as
A numeric sequence: y[n] = ⇓0, 0, 2, 4, 6, 6, 6.
A sum of shifted impulses: y[n] = 2δ[n − 2] + 4δ[n − 3] + 6δ[n − 4] + 6δ[n − 5] + 6δ[n − 6].
A sum of steps and ramps: y[n] = 2r[n − 1] − 2r[n − 4] − 6u[n − 7].
Note carefully that the argument of the step function is [n − 7] (and not [n − 6]).
3. The signal h[n] may be described as h[n] = 6 tri(n/3) or variously as
A numeric sequence: h[n] = 0, 2, 4, ⇓6, 4, 2, 0.
A sum of impulses: h[n] = 2δ[n + 2] + 4δ[n + 1] + 6δ[n] + 4δ[n − 1] + 2δ[n − 2].
A sum of steps and ramps: h[n] = 2r[n + 3] − 4r[n] + 2r[n − 3].
DRILL PROBLEM 2.6
(a) Sketch the signals x[n] = δ[n + 2] + 2δ[n − 1] and y[n] = 2u[n + 1] − u[n − 3].
(b) Express the signal h[n] = 3, 3, . . .

2.4.4 The Discrete Sinc Function
The discrete sinc function is defined by

sinc(n/N) = sin(nπ/N) / (nπ/N), sinc(0) = 1 (2.18)

The signal sinc(n/N) equals zero at n = ±kN, k = 1, 2, . . . . At n = 0, sinc(0) = 0/0 and cannot be
evaluated in the limit since n can take on only integer values. We therefore define sinc(0) = 1. The envelope
of the sinc function shows a mainlobe and gradually decaying sidelobes. The definition of sinc(n/N) also
implies that sinc(n) = δ[n].
2.4.5 Discrete Exponentials
Discrete exponentials are often described using a rational base. For example, the signal x[n] = 2^n u[n] shows
exponential growth, while y[n] = (0.5)^n u[n] is a decaying exponential. The signal f[n] = (−0.5)^n u[n] shows
values that alternate in sign. The exponential x[n] = α^n u[n], where α = re^{jΩ} is complex, may be described
using the various formulations of a complex number as

x[n] = α^n u[n] = (re^{jΩ})^n u[n] = r^n e^{jnΩ} u[n] = r^n [cos(nΩ) + j sin(nΩ)] u[n] (2.19)

This complex-valued signal requires two separate plots (the real and imaginary parts, for example) for a
graphical description. If 0 < r < 1, x[n] describes a signal whose real and imaginary parts are exponentially
decaying cosines and sines. If r = 1, the real and imaginary parts are pure cosines and sines with a peak
value of unity. If r > 1, we obtain exponentially growing sinusoids.
2.5 Discrete-Time Harmonics and Sinusoids
If we sample an analog sinusoid x(t) = cos(2πf_0 t + θ) at intervals of t_s corresponding to a sampling rate of
S = 1/t_s samples/s (or S Hz), we obtain the sampled sinusoid

x[n] = cos(2πf nt_s + θ) = cos(2πn f/S + θ) = cos(2πnF + θ) (2.20)

The quantities f and ω = 2πf describe analog frequencies. The normalized frequency F = f/S is called the
digital frequency and has units of cycles/sample. The frequency Ω = 2πF is the digital radian frequency
with units of radians/sample. The various analog and digital frequencies are compared in Figure 2.1. Note
that the analog frequency f = S (or ω = 2πS) corresponds to the digital frequency F = 1 (or Ω = 2π).
More generally, we find it useful to deal with complex-valued signals of the form

x[n] = e^{j(2πnF + θ)} = cos(2πnF + θ) + j sin(2πnF + θ) (2.21)

This allows us to regard the real sinusoid x[n] = cos(2πnF + θ) as the real part of the complex-valued signal
x[n].
[Figure: the analog frequency f (Hz) maps to the digital frequency F = f/S, and ω = 2πf maps to Ω = 2πF; f = ±S corresponds to F = ±1 and Ω = ±2π.]
Figure 2.1 Comparison of analog and digital frequencies
REVIEW PANEL 2.11
The Digital Frequency Is the Analog Frequency Normalized by Sampling Rate S
F (cycles/sample) = f (cycles/sec) / S (samples/sec)    Ω (radians/sample) = ω (radians/sec) / S (samples/sec) = 2πF
2.5.1 Discrete-Time Harmonics Are Not Always Periodic in Time
An analog sinusoid x(t) = cos(2πf_0 t + θ) has two remarkable properties. It is unique for every frequency.
And it is periodic in time for every choice of the frequency f_0. Its sampled version, however, is a beast of a
different kind.
Are all discrete-time sinusoids and harmonics periodic in time? Not always! To understand this idea,
suppose x[n] is periodic with period N such that x[n] = x[n + N]. This leads to

cos(2πnF_0 + θ) = cos[2π(n + N)F_0 + θ] = cos(2πnF_0 + θ + 2πNF_0) (2.22)

The two sides are equal provided NF_0 equals an integer k. In other words, F_0 must be a rational fraction
(ratio of integers) of the form k/N. What we are really saying is that a DT sinusoid is periodic only
if its digital frequency is a ratio of integers or a rational fraction. The period N equals the denominator
of k/N, provided common factors have been canceled from its numerator and denominator. The significance
of k is that it takes k full periods of the analog sinusoid to yield one full period of the sampled sinusoid.
The common period of a combination of periodic DT sinusoids equals the least common multiple (LCM)
of their individual periods. If F_0 is not a rational fraction, there is no periodicity, and the DT sinusoid is
classified as nonperiodic or almost periodic. Examples of periodic and nonperiodic DT sinusoids appear in
Figure 2.2. Even though a DT sinusoid may not always be periodic, it will always have a periodic envelope.
This discussion also applies to complex-valued harmonics of the type e^{j(2πnF_0 + θ)}.
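The rationality test is easy to make concrete. In this sketch (our own; it assumes F_0 is supplied as an exact fraction), the period is simply the denominator of F_0 in lowest terms, and a combination of harmonics has the LCM of the individual periods:

```python
from fractions import Fraction
from math import lcm

def dt_period(F0):
    """Period N of cos(2*pi*F0*n) for an exactly rational digital frequency F0."""
    return Fraction(F0).denominator  # Fraction is kept in lowest terms

# F1 = 0.1 = 1/10 and F2 = 0.15 = 3/20, as in Example 2.6(b)
N1 = dt_period(Fraction(1, 10))   # 10
N2 = dt_period(Fraction(3, 20))   # 20
print(lcm(N1, N2))                # 20
```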
REVIEW PANEL 2.12
The Discrete Harmonic cos(2πnF_0 + θ) or e^{j(2πnF_0 + θ)} Is Not Always Periodic in Time
It is periodic only if its digital frequency F_0 = k/N can be expressed as a ratio of integers.
Its period equals N if common factors have been canceled in k/N.
One period of the sampled sinusoid is obtained from k full periods of the analog sinusoid.
[Plots: (a) cos(0.125πn) is periodic with period N = 16; (b) cos(0.5n) is not periodic (check peaks or zeros). In both cases the envelope is periodic.]
Figure 2.2 Discrete-time sinusoids are not always periodic
EXAMPLE 2.6 (Discrete-Time Harmonics and Periodicity)
(a) Is x[n] = cos(2πFn) periodic if F = 0.32? If F = √3?
If F = 0.32 = 8/25, x[n] is periodic with period N = 25. If F = √3, x[n] is not periodic because F is
irrational and cannot be expressed as a ratio of integers.
(b) What is the period of the harmonic signal x[n] = e^{j0.2πn} + e^{j0.3πn}?
The digital frequencies in x[n] are F_1 = 0.1 = 1/10 = k_1/N_1 and F_2 = 0.15 = 3/20 = k_2/N_2.
Their periods are N_1 = 10 and N_2 = 20.
The common period is thus N = LCM(N_1, N_2) = LCM(10, 20) = 20.
(c) The signal x(t) = 2 cos(40πt) + sin(60πt) is sampled at 75 Hz. What is the common period of the
sampled signal x[n], and how many full periods of x(t) does it take to obtain one period of x[n]?
The frequencies in x(t) are f_1 = 20 Hz and f_2 = 30 Hz. The digital frequencies of the individual
components are F_1 = 20/75 = 4/15 = k_1/N_1 and F_2 = 30/75 = 2/5 = k_2/N_2. Their periods are N_1 = 15 and N_2 = 5.
The common period is thus N = LCM(N_1, N_2) = LCM(15, 5) = 15.
The fundamental frequency of x(t) is f_0 = GCD(20, 30) = 10 Hz. One period of x(t) is T = 1/f_0 = 0.1 s.
Since N = 15 corresponds to a duration of Nt_s = N/S = 0.2 s, it takes two full periods of x(t) to obtain
one period of x[n]. We also get the same result by computing GCD(k_1, k_2) = GCD(4, 2) = 2.
DRILL PROBLEM 2.7
(a) What is the digital frequency of x[n] = 2e^{j(0.25πn + 30°)}? Is x[n] periodic?
(b) What is the digital frequency of y[n] = cos(0.5n + 30°)? Is y[n] periodic?
(c) What is the common period N of the signal f[n] = cos(0.4πn) + sin(0.5πn + 30°)?
Answers: (a) F_0 = 0.125, yes (b) F_0 = 0.25/π, no (c) N = 20
2.5.2 Discrete-Time Harmonics Are Always Periodic in Frequency
Unlike analog sinusoids, discrete-time sinusoids and harmonics are always periodic in frequency. If we start
with the sinusoid x[n] = cos(2πnF_0 + θ) and add an integer m to F_0, we get

cos[2πn(F_0 + m) + θ] = cos(2πnF_0 + θ + 2πnm) = cos(2πnF_0 + θ) = x[n]

This result says that discrete-time (DT) sinusoids at the frequencies F_0 ± m are identical. Put another way,
a DT sinusoid is periodic in frequency (has a periodic spectrum) with unit period. The range −0.5 ≤ F ≤ 0.5
defines the principal period or principal range. A DT sinusoid can be uniquely identified only if its
frequency falls in the principal range. A DT sinusoid with a frequency F_0 outside this range can always be
expressed as a DT sinusoid with a frequency that falls in the principal period by subtracting out an integer
M from F_0 such that the new frequency F_a = F_0 − M satisfies −0.5 ≤ F_a ≤ 0.5. The frequency F_a is called
the aliased digital frequency, and it is always smaller than the original frequency F_0. This discussion also applies
to complex-valued harmonics of the type e^{j(2πnF_0 + θ)}.
To summarize, a discrete-time sinusoid or harmonic is periodic in time only if its digital frequency F_0 is
a rational fraction, but it is always periodic in frequency (with unit period).
REVIEW PANEL 2.13
The Discrete Harmonic cos(2πnF_0 + θ) or e^{j(2πnF_0 + θ)} Is Always Periodic in Frequency
Its frequency period is unity (harmonics at F_0 and F_0 ± K are identical for integer K).
It is unique only if F_0 lies in the principal period −0.5 < F_0 ≤ 0.5.
If F_0 > 0.5, the unique frequency is F_a = F_0 − M, where the integer M is chosen to ensure −0.5 < F_a ≤ 0.5.
DRILL PROBLEM 2.8
(a) Let x[n] = e^{j1.4πn} = e^{j2πF_u n}, where F_u is in the principal range. What is the value of F_u?
(b) Let y[n] = cos(2.4πn + 30°). What is its frequency in the principal range?
(c) Let f[n] = cos(0.6πn − 20°) + cos(0.4πn + 30°). What are the frequencies of its components in the principal range?
2.6 The Sampling Theorem
The central concept in the digital processing of analog signals is that the sampled signal must be a unique
representation of the underlying analog signal. When the sinusoid x(t) = cos(2πf_0 t + θ) is sampled at the
sampling rate S, the digital frequency of the sampled signal is F_0 = f_0/S. In order for the sampled sinusoid
to permit a unique correspondence with the underlying analog sinusoid, the digital frequency F_0 must lie in
the principal range, i.e., |F_0| < 0.5. This implies S > 2|f_0| and suggests that we must choose a sampling
rate S that exceeds 2|f_0|. More generally, the sampling theorem says that for a unique correspondence
between an analog signal and the version reconstructed from its samples (using the same sampling rate),
the sampling rate must exceed twice the highest signal frequency f_max. The value S = 2f_max is called the
critical sampling rate or Nyquist rate or Nyquist frequency. The time interval t_s = 1/(2f_max) is called
the Nyquist interval. For the sinusoid x(t) = cos(2πf_0 t + θ), the Nyquist rate is S_N = 2f_0 = 2/T, and this
rate is equivalent to taking exactly two samples per period (because the sampling interval is t_s = T/2). In
order to exceed the Nyquist rate, we should obtain more than two signal samples per period.
REVIEW PANEL 2.14
The Sampling Theorem: How to Sample an Analog Signal Without Loss of Information
For an analog signal band-limited to f_max Hz, the sampling rate S must exceed 2f_max.
S = 2f_max defines the Nyquist rate. t_s = 1/(2f_max) defines the Nyquist interval.
For an analog sinusoid: The Nyquist rate corresponds to taking two samples per period.
DRILL PROBLEM 2.9
(a) What is the critical sampling rate in Hz for the following signals?
x(t) = cos(10πt)    y(t) = cos(10πt) + sin(15πt)    f(t) = cos(10πt)sin(15πt)    g(t) = cos^2(10πt)
(b) A 50-Hz sinusoid is sampled at twice the Nyquist rate. How many samples are obtained in 3 s?
Answers: (a) 10, 15, 25, 20 (b) 600
2.6.1 Signal Reconstruction and Aliasing
Consider an analog signal x(t) = cos(2πf_0 t + θ) and its sampled version x[n] = cos(2πnF_0 + θ), where
F_0 = f_0/S. If x[n] is to be a unique representation of x(t), we must be able to reconstruct x(t) from x[n]. In
practice, reconstruction uses only the central copy or image of the periodic spectrum of x[n] in the principal
period −0.5 ≤ F ≤ 0.5, which corresponds to the analog frequency range −0.5S ≤ f ≤ 0.5S. We use a
lowpass filter to remove all other replicas or images, and the output of the lowpass filter corresponds to the
reconstructed analog signal. As a result, the highest frequency f_H we can identify in the signal reconstructed
from its samples is f_H = 0.5S.
Whether the reconstructed analog signal matches x(t) or not depends on the sampling rate S. If we
exceed the Nyquist rate (i.e., S > 2f_0), the digital frequency F_0 = f_0/S is always in the principal range
−0.5 ≤ F ≤ 0.5, and the reconstructed analog signal is identical to x(t). If the sampling rate is below the
Nyquist rate (i.e., S < 2f_0), the digital frequency exceeds 0.5. Its image in the principal range appears at
the lower digital frequency F_a = F_0 − M (corresponding to the lower analog frequency f_a = f_0 − MS), where
M is an integer that places the aliased digital frequency F_a between −0.5 and 0.5 (or the aliased analog
frequency f_a between −0.5S and 0.5S). The reconstructed aliased signal x_a(t) = cos(2πf_a t + θ) is at a lower
frequency f_a = SF_a than f_0 and is no longer a replica of x(t). This phenomenon, where a reconstructed
sinusoid appears at a lower frequency than the original, is what aliasing is all about. The real problem
is that the original signal x(t) and the aliased signal x_a(t) yield identical sampled representations at the
sampling frequency S and prevent unique identification of the original signal x(t) from its samples!
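The aliased frequency f_a = f_0 − MS is easy to compute directly. This sketch (the function is our own) evaluates it for a 100-Hz sinusoid at several sampling rates:

```python
# Sketch: find the aliased frequency f_a = f0 - M*S that lies in the
# principal period (-S/2, S/2].

def aliased_freq(f0, S):
    M = round(f0 / S)            # nearest integer multiple of S
    fa = f0 - M * S
    if fa <= -S / 2:             # push boundary cases into (-S/2, S/2]
        fa += S
    return fa

for S in (240, 140, 90, 35):
    print(S, aliased_freq(100, S))
# 240 -> 100 (no aliasing), 140 -> -40, 90 -> 10, 35 -> -5
```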
REVIEW PANEL 2.15
Aliasing Occurs if the Analog Signal cos(2πf_0 t + θ) Is Sampled Below the Nyquist Rate
If S < 2f_0, the reconstructed analog signal is aliased to a lower frequency |f_a| < 0.5S. We find f_a as
f_a = f_0 − MS, where M is an integer that places f_a in the principal period (−0.5S < f_a ≤ 0.5S).
Before reconstruction, all frequencies must be brought into the principal period.
The highest frequency of the reconstructed signal cannot exceed half the reconstruction rate.
EXAMPLE 2.7 (Aliasing and Its Effects)
(a) A 100-Hz sinusoid x(t) is sampled at 240 Hz. Has aliasing occurred? How many full periods of x(t)
are required to obtain one period of the sampled signal?
© Ashok Ambardar, September 1, 2003
The sampling rate exceeds 200 Hz, so there is no aliasing. The digital frequency is F = 100/240 = 5/12.
Thus, five periods of x(t) yield 12 samples (one period) of the sampled signal.
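The ratio F = f0/S, reduced to lowest terms k/N, gives both counts at once: k full analog periods span N samples. As a minimal sketch (the function name is ours, not from the text):

```python
from fractions import Fraction

def digital_frequency(f0, S):
    """Digital frequency F = f0/S as a reduced fraction k/N:
    k full periods of the analog sinusoid yield N samples (one digital period)."""
    return Fraction(f0, S)

F = digital_frequency(100, 240)
k, N = F.numerator, F.denominator   # 5 analog periods -> 12 samples
```

Using exact rationals avoids floating-point surprises when reducing f0/S.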
(b) A 100-Hz sinusoid is sampled at rates of 240 Hz, 140 Hz, 90 Hz, and 35 Hz. In each case, has aliasing
occurred, and if so, what is the aliased frequency?
To avoid aliasing, the sampling rate must exceed 200 Hz. If S = 240 Hz, there is no aliasing, and
the reconstructed signal (from its samples) appears at the original frequency of 100 Hz. For all other
choices of S, the sampling rate is too low and leads to aliasing. The aliased signal shows up at a lower
frequency. The aliased frequencies corresponding to each sampling rate S are found by subtracting out
multiples of S from 100 Hz to place the result in the range −0.5S ≤ f ≤ 0.5S. If the original signal
has the form x(t) = cos(200πt + θ), we obtain the following aliased frequencies and aliased signals:
1. S = 140 Hz, fa = 100 − 140 = −40 Hz, xa(t) = cos(−80πt + θ) = cos(80πt − θ)
2. S = 90 Hz, fa = 100 − 90 = 10 Hz, xa(t) = cos(20πt + θ)
3. S = 35 Hz, fa = 100 − 3(35) = −5 Hz, xa(t) = cos(−10πt + θ) = cos(10πt − θ)
We thus obtain a 40-Hz sinusoid (with reversed phase), a 10-Hz sinusoid, and a 5-Hz sinusoid (with
reversed phase), respectively. Notice that negative aliased frequencies simply lead to a phase reversal
and do not represent any new information. Finally, had we used a sampling rate exceeding the Nyquist
rate of 200 Hz, we would have recovered the original 100-Hz signal every time. Yes, it pays to play by
the rules of the sampling theorem!
(c) Two analog sinusoids x1(t) (shown light) and x2(t) (shown dark) lead to an identical sampled version as
illustrated in Figure E2.7C. Has aliasing occurred? Identify the original and aliased signal. Identify the
digital frequency of the sampled signal corresponding to each sinusoid. What is the analog frequency
of each sinusoid if S = 50 Hz? Can you provide exact expressions for each sinusoid?
Figure E2.7C The sinusoids for Example 2.7(c): two analog signals and their sampled version
(amplitude vs. time t in seconds, over 0 to 0.3 s)
Examine the interval (0, 0.1) s. The sampled signal shows five samples per period. This covers three
full periods of x1(t), and so F1 = 3/5. It also covers two full periods of x2(t), and so F2 = 2/5. Clearly,
x1(t) (with |F1| > 0.5) is the original signal that is aliased to x2(t). The sampling interval is 0.02 s,
so the sampling rate is S = 50 Hz. The original and aliased frequencies are f1 = SF1 = 30 Hz and
f2 = SF2 = 20 Hz.
From the figure, we can identify exact expressions for x1(t) and x2(t) as follows. Since x1(t) is a delayed
cosine with x1(0) = 0.5, we have x1(t) = cos(60πt − π/3). With S = 50 Hz, the frequency f1 = 30 Hz
actually aliases to f2 = −20 Hz, and thus x2(t) = cos(−40πt − π/3) = cos(40πt + π/3). With F = 30/50 = 0.6
(or F = −0.4), the expression for the sampled signal is x[n] = cos(2πnF − π/3).
(d) A 100-Hz sinusoid is sampled, and the reconstructed signal (from its samples) shows up at 10 Hz.
What was the sampling rate S?
One choice is to set 100 − S = 10 and obtain S = 90 Hz. Another possibility is to set 100 − S = −10 to
give S = 110 Hz. In fact, we can also subtract out integer multiples of S from 100 Hz, set 100 − MS = 10,
and compute S for various choices of M. For example, if M = 2, we get S = 45 Hz, and if M = 3,
we get S = 30 Hz. We can also set 100 − NS = −10 and get S = 55 Hz for N = 2. Which of these
sampling rates was actually used? We have no way of knowing!
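Part (d) can also be turned around: enumerate every integer sampling rate that folds 100 Hz to ±10 Hz. A small search sketch (the helper name and rate range are our choices):

```python
def folds_to(f0, S, f_seen):
    """True if sampling at S Hz aliases f0 to +/- f_seen Hz."""
    M = round(f0 / S)
    return abs(f0 - M * S) == f_seen

# integer rates above 2 * 10 Hz, so that +/-10 Hz lies inside the principal period
rates = [S for S in range(21, 121) if folds_to(100, S, 10)]
```

The search recovers all the rates found by hand in the example (30, 45, 55, 90, and 110 Hz), underscoring that the samples alone cannot tell us which one was used.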
DRILL PROBLEM 2.10
(a) A 60-Hz sinusoid x(t) is sampled at 200 Hz. What is the period N of the sampled signal? How many
full periods of x(t) are required to obtain these N samples? What is the frequency (in Hz) of the analog
signal reconstructed from the samples?
(b) A 160-Hz sinusoid x(t) is sampled at 200 Hz. What is the period N of the sampled signal? How many
full periods of x(t) are required to obtain these N samples? What is the frequency (in Hz) of the analog
signal reconstructed from the samples?
(c) The signal x(t) = cos(60πt + 30°) ...

F(x) = ∫_{−∞}^{x} f(λ) dλ   (2.23)
The probability F(x1) = Pr[X ≤ x1] that X is less than x1 is given by

Pr[X ≤ x1] = ∫_{−∞}^{x1} f(x) dx   (2.24)

The probability that X lies between x1 and x2 is Pr[x1 < X ≤ x2] = F(x2) − F(x1). The area of f(x) is 1.
2.7.2 Measures for Random Variables
Measures or features of a random variable X are based on its distribution. The mean, or expectation, is a
measure of where the distribution is centered and is defined by

E(x) = m_x = ∫_{−∞}^{∞} x f(x) dx   (mean)   (2.25)

E(x²) = ∫_{−∞}^{∞} x² f(x) dx   (mean square value)   (2.26)
Many of the features of deterministic or random signals are based on moments. The nth moment m_n is
defined by

m_n = ∫_{−∞}^{∞} xⁿ f(x) dx   (nth moment)   (2.27)

We see that the zeroth moment m_0 gives the signal area, the first moment m_1 corresponds to the mean,
and the second moment m_2 defines the mean square value. Moments about the mean are called central
moments and also find widespread use. The nth central moment µ_n is defined by

µ_n = ∫_{−∞}^{∞} (x − m_x)ⁿ f(x) dx   (nth central moment)   (2.28)
A very commonly used feature is the second central moment µ_2. It is also called the variance, denoted σ_x²,
and defined by

σ_x² = µ_2 = E[(x − m_x)²] = ∫_{−∞}^{∞} (x − m_x)² f(x) dx   (variance)   (2.29)
The variance may be expressed in terms of the mean and the mean square values as

σ_x² = E[(x − m_x)²] = E(x²) − m_x² = ∫_{−∞}^{∞} x² f(x) dx − m_x²   (2.30)
The variance measures the spread (or dispersion) of the distribution about its mean. The less the spread,
the smaller is the variance. The quantity σ is known as the standard deviation and provides a measure
of the uncertainty in a physical measurement. The variance is also a measure of the ac power in a signal.
For a periodic deterministic signal x(t) with period T, the variance can be readily found by evaluating the
signal power (and subtracting the power due to the dc component, if present):
σ_x² = (1/T)∫_0^T x²(t) dt − [(1/T)∫_0^T x(t) dt]²   (2.31)

where the first term is the total signal power and the second term is the dc power.
This equation can be used to obtain the results listed in the following review panel.
REVIEW PANEL 2.17
The Variance of Some Useful Periodic Signals with Period T
Sinusoid: If x(t) = A cos(2πt/T + θ), then σ² = A²/2.
Triangular Wave: If x(t) = At/T, 0 ≤ t < T, or x(t) = A(1 − |t|/0.5T), |t| ≤ 0.5T, then σ² = A²/12.
Square Wave: If x(t) = A, 0 ≤ t < 0.5T and x(t) = 0, 0.5T ≤ t < T, then σ² = A²/4.
DRILL PROBLEM 2.12
(a) Find the variance of the periodic signal x(t) = A, 0 ≤ t < 0.25T and x(t) = 0, 0.25T ≤ t < T.
(b) Find the variance of the raised cosine signal x(t) = A[1 + cos(2πt/T + θ)].
Answers: (a) (3/16)A²  (b) A²/2
2.7.3 The Chebyshev Inequality
The measurement of the variance or standard deviation gives us some idea of how much the actual values
will deviate from the mean but provides no indication of how often we might encounter large deviations from
the mean. The Chebyshev inequality allows us to estimate the probability for a deviation to be within
certain bounds (given by ε) as

Pr(|x − m_x| > ε) ≤ (σ_x/ε)²   or   Pr(|x − m_x| ≤ ε) > 1 − σ_x²/ε²   (Chebyshev inequality)   (2.32)

It assumes that the variance or standard deviation is known. To find the probability for the deviation to be
within k standard deviations, we set ε = kσ_x to give

Pr(|x − m_x| ≤ kσ_x) > 1 − 1/k²
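The inequality is a guarantee, and often a loose one. For a Gaussian sample, the actual fraction within two standard deviations (about 95%) comfortably beats the Chebyshev floor of 75%. A quick seeded check (the sample size and seed are our choices):

```python
import random

random.seed(1)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]

m = sum(xs) / len(xs)
sd = (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

k = 2.0
within = sum(abs(x - m) <= k * sd for x in xs) / len(xs)
chebyshev_floor = 1 - 1 / k**2   # Pr(|x - m| <= k*sigma) > 1 - 1/k^2
```

The empirical fraction `within` lands near 0.95, well above the 0.75 floor, which is exactly what "distribution-free bound" implies.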
The Law of Large Numbers
Chebyshev's inequality, in turn, leads to the so-called law of large numbers, which, in essence, states that
while an individual random variable may take on values quite far from its mean (show a large spread), the
arithmetic mean of a large number of random values shows little spread, taking on values very close to the
common mean with a very high probability.
2.7.4 Probability Distributions
Two of the most commonly encountered probability distributions are the uniform distribution and the
normal (or Gaussian) distribution. These are illustrated in Figure 2.4.
Figure 2.4 The uniform and normal probability distributions (the density f(x) and distribution F(x) for each)
2.7.5 The Uniform Distribution
In a uniform distribution, every value is equally likely, since the random variable shows no preference for a
particular value. The density function f(x) of a typical uniform distribution is just a rectangular pulse with
unit area, defined by

f(x) = 1/(b − a), a ≤ x ≤ b;  0, otherwise   (uniform density function)   (2.33)

Its mean and variance are given by

m_x = 0.5(a + b)        σ_x² = (b − a)²/12   (2.34)
The distribution function F(x) is given by

F(x) = { 0,                x < a
       { (x − a)/(b − a),  a ≤ x ≤ b   (uniform distribution function)   (2.35)
       { 1,                x > b

This is a finite ramp that rises from a value of zero at x = a to a value of unity at x = b and equals unity
for x ≥ b. For a uniform distribution in which values are equally likely to fall between −0.5 and 0.5, the
density function is f(x) = 1, −0.5 ≤ x < 0.5, with a mean of zero and a variance of 1/12.
Uniform distributions occur frequently in practice. When quantizing signals in uniform steps, the error
in representing a signal value is assumed to be uniformly distributed between −0.5Δ and 0.5Δ, where Δ is
the quantization step. The density function of the phase of a sinusoid with random phase is also uniformly
distributed between −π and π.
DRILL PROBLEM 2.13
(a) If the quantization error ε is assumed to be uniformly distributed between −0.5Δ and 0.5Δ, sketch
the density function f(ε) and compute the variance.
(b) The phase of a sinusoid with random phase is uniformly distributed between −π and π. Compute the
variance.
Answers: (a) f(ε) = 1/Δ for −Δ/2 ≤ ε ≤ Δ/2, and σ² = Δ²/12  (b) π²/3
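The Δ²/12 result for quantization noise is simple to verify by simulation, drawing errors uniformly on (−Δ/2, Δ/2). A sketch (step size, sample count, and tolerance are our own choices):

```python
import random

random.seed(7)
delta = 0.1
# quantization error modeled as uniform on (-delta/2, delta/2)
errors = [random.uniform(-delta / 2, delta / 2) for _ in range(200_000)]

mean = sum(errors) / len(errors)                 # should be near 0
var = sum(e * e for e in errors) / len(errors)   # should be near delta^2 / 12
```

With 200,000 draws, the empirical variance matches Δ²/12 to within about 1%.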
2.7.6 The Gaussian Distribution
A Gaussian (or normal) random variable with mean m_x and variance σ_x² has the bell-shaped density

f(x) = [1/√(2πσ_x²)] exp[−(x − m_x)²/(2σ_x²)]   (Gaussian density)

Its distribution function is the running integral of the density,

F(x) = [1/√(2πσ_x²)] ∫_{−∞}^{x} exp[−(x − m_x)²/(2σ_x²)] dx   (Gaussian distribution)

and cannot be evaluated in closed form. For the standard Gaussian, with zero mean and unit variance, the
distribution is denoted P(x) and given by

P(x) = [1/√(2π)] ∫_{−∞}^{x} e^{−x²/2} dx   (standard Gaussian distribution)   (2.39)
The Error Function
Another function that is used extensively is the error function, defined by

erf(x) = (2/√π) ∫_0^x e^{−t²} dt   (error function)   (2.40)

The Gaussian distribution F(x) may be expressed in terms of the standard form or the error function as

F(x) = P[(x − m_x)/σ_x] = 0.5 + 0.5 erf[(x − m_x)/(σ_x√2)]   (2.41)
The probability that x lies between x1 and x2 may be expressed in terms of the error function as

Pr[x1 ≤ x ≤ x2] = F(x2) − F(x1) = 0.5 erf[(x2 − m_x)/(σ_x√2)] − 0.5 erf[(x1 − m_x)/(σ_x√2)]   (2.42)

This is a particularly useful form, since tables of error functions are widely available. A note of caution to the
unwary, however: several different, though functionally equivalent, definitions of P(x), erf(x), and related
functions are also prevalent in the literature.
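Equation (2.42) maps directly onto `math.erf` from the Python standard library. A sketch (the helper name is ours):

```python
import math

def gauss_prob(x1, x2, m=0.0, sigma=1.0):
    """Pr[x1 <= X <= x2] for a Gaussian with mean m and std sigma, Eq. (2.42)."""
    z1 = (x1 - m) / (sigma * math.sqrt(2))
    z2 = (x2 - m) / (sigma * math.sqrt(2))
    return 0.5 * math.erf(z2) - 0.5 * math.erf(z1)

p_one_sigma = gauss_prob(-1.0, 1.0)   # the familiar one-sigma figure, ~0.6827
```

By the change of variable noted below Eq. (2.44), gauss_prob(90, 110, m=100, sigma=10) returns the same value as gauss_prob(−1, 1).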
The Q-Function
The Q-function describes the area of the tail of a Gaussian distribution. For the standard Gaussian, the
area Q(x) of the tail is given by

Q(x) = 1 − P(x) = [1/√(2π)] ∫_x^{∞} e^{−x²/2} dx

The Q-function may also be expressed in terms of the error function as

Q(x) = 0.5 − 0.5 erf(x/√2)   (2.43)

The probability that x lies between x1 and x2 may also be expressed in terms of the Q-function as

Pr[x1 ≤ x ≤ x2] = Q[(x1 − m_x)/σ_x] − Q[(x2 − m_x)/σ_x]   (2.44)

The results for the standard distribution may be carried over to a distribution with arbitrary mean m_x and
arbitrary standard deviation σ_x via the simple change of variable x → (x − m_x)/σ_x.
The Central Limit Theorem
The central limit theorem asserts that the probability density function of the sum of many random signals
approaches a Gaussian, as long as their means are finite and their individual variances are small compared
to the total variance (but nonzero). The individual processes need not even be Gaussian.
2.7.7 Discrete Probability Distributions
The central limit theorem is even useful for discrete variables. When the variables that make up a given
process s are discrete and the number n of such variables is large, that is,

s = Σ_{i=1}^{n} x_i,  n ≫ 1

we may approximate s by a Gaussian whose mean equals n·m_x and whose variance equals n·σ_x². Thus,
f_n(s) ≈ [1/√(2πnσ_x²)] exp[−(s − n·m_x)²/(2nσ_x²)],  n ≫ 1   (2.45)

Its distribution allows us to compute the probability Pr[s1 ≤ s ≤ s2] in terms of the error function or
Q-function as

Pr[s1 ≤ s ≤ s2] ≈ 0.5 erf[(s2 − n·m_x)/(σ_x√(2n))] − 0.5 erf[(s1 − n·m_x)/(σ_x√(2n))]
              = Q[(s1 − n·m_x)/(σ_x√n)] − Q[(s2 − n·m_x)/(σ_x√n)]   (2.46)
This relation forms the basis for numerical approximations involving discrete probabilities.
The Binomial Distribution
Consider an experiment with two outcomes which result in mutually independent and complementary events.
If the probability of a success is p, and the probability of a failure is q = 1 p, the probability of exactly k
successes in n trials follows the binomial distribution and is given by
Pr[s = k] = p
n
(k) = C
n
k
(p)
k
(1 p)
nk
(binomial probability) (2.47)
Here, C
n
k
represents the binomial coecient and may be expressed in terms of factorials or gamma
functions as
C
n
k
=
n!
k! (n k)!
=
(n + 1)
(k + 1) (n k + 1)
The probability of at least k successes in n trials is given by

Pr[s ≥ k] = Σ_{i=k}^{n} p_n(i) = I_p(k, n − k + 1)

where I_x(a, b) is the incomplete beta function, defined by

I_x(a, b) = [1/B(a, b)] ∫_0^x t^(a−1) (1 − t)^(b−1) dt,    B(a, b) = Γ(a)Γ(b)/Γ(a + b),  a > 0, b > 0
The probability of getting between k1 and k2 successes in n trials describes a cumulative probability,
because we must sum the probabilities of all possible outcomes for the event (the probability of exactly k1,
then k1 + 1, then k1 + 2 successes, and so on to k2 successes), and is given by

Pr[k1 ≤ s ≤ k2] = Σ_{i=k1}^{k2} p_n(i)

For large n, its evaluation can become a computational nightmare.
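For moderate n, though, the direct summation is perfectly practical with exact integer binomial coefficients (`math.comb`). A sketch of Eq. (2.47) and the cumulative sum (helper names are ours):

```python
from math import comb

def binom_pmf(n, k, p):
    """p_n(k) = C(n, k) p^k (1 - p)^(n - k), Eq. (2.47)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def binom_range(n, k1, k2, p):
    """Pr[k1 <= s <= k2] by direct summation of the pmf."""
    return sum(binom_pmf(n, k, p) for k in range(k1, k2 + 1))
```

For example, the probability of one or two heads in two fair tosses is binom_range(2, 1, 2, 0.5) = 0.75.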
Some Useful Approximations
When n is large and neither p → 0 nor q → 0, the probability p_n(k) of exactly k successes in n trials may
be approximated by the Gaussian

p_n(k) = C(n, k) p^k (1 − p)^(n−k) ≈ [1/√(2πσ²)] exp[−(k − m)²/(2σ²)],   m = np,  σ² = np(1 − p)

This result is based on the central limit theorem and is called the de Moivre-Laplace approximation. It
assumes that σ² ≫ 1 and that |k − m| is of the same order as σ or less. Using this result, the probability of at
least k successes in n trials may be written in terms of the error function or Q-function as

Pr[s ≥ k] ≈ 0.5 − 0.5 erf[(k − m)/(σ√2)] = Q[(k − m)/σ]

and the probability of between k1 and k2 successes as

Pr[k1 ≤ s ≤ k2] ≈ 0.5 erf[(k2 − m)/(σ√2)] − 0.5 erf[(k1 − m)/(σ√2)] = Q[(k1 − m)/σ] − Q[(k2 − m)/σ]

When n is large and p is small, the binomial probability of exactly k successes may instead be approximated by

p_n(k) ≈ (m^k e^{−m})/k! = p_m(k)   (Poisson probability)   (2.48)
where n ≫ 1, p ≪ 1, and m = np. This is known as the Poisson probability. The mean and variance of this
distribution are both equal to m. In practical situations, the Poisson approximation is often used whenever
n is large and p is small, and not just under the stringent limiting conditions imposed in its derivation. Unlike
the binomial distribution, which requires probabilities for both success and failure, the Poisson distribution
requires only the probability of a success p (through the parameter m = np that describes the expected
number of successes) and may thus be used even when the number of unsuccessful outcomes is unknown.
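How good is the approximation? With n = 1000 and p = 0.002 (so m = 2, values we chose for illustration), the Poisson pmf tracks the exact binomial to about three decimal places:

```python
from math import comb, exp, factorial

def binom_pmf(n, k, p):
    """Exact binomial probability of k successes in n trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(k, m):
    """p_m(k) = m^k e^(-m) / k!, Eq. (2.48)."""
    return m**k * exp(-m) / factorial(k)

n, p = 1000, 0.002
m = n * p     # expected number of successes = 2
worst = max(abs(binom_pmf(n, k, p) - poisson_pmf(k, m)) for k in range(10))
```

Note that the Poisson side needs only m, not n and p separately, which is exactly the practical advantage described above.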
The probability that the number of successes will lie between 0 and k inclusive, if the expected number is
m, is given by the summation

Pr[s ≤ k] = Σ_{i=0}^{k} p_m(i) = Σ_{i=0}^{k} (m^i e^{−m})/i! = 1 − P(k + 1, m),  k ≥ 1

where P(a, x) is the so-called incomplete gamma function, defined by

P(a, x) = [1/Γ(a)] ∫_0^x t^(a−1) e^{−t} dt,  a > 0
Note that Pr[s ≤ 0] = e^{−m}. For large m, the Poisson probability of exactly k successes may also be
approximated using the central limit theorem to give

p_m(k) ≈ [1/√(2πm)] exp[−(k − m)²/(2m)]

and the probability of between k1 and k2 successes as

Pr[k1 ≤ s ≤ k2] ≈ 0.5 erf[(k2 − m)/√(2m)] − 0.5 erf[(k1 − m)/√(2m)] = Q[(k1 − m)/√m] − Q[(k2 − m)/√m]

2.7.8 The Density and Distribution of Periodic Signals
The concepts of density and distribution also apply to deterministic periodic signals, as illustrated in
Figure 2.5. The variance of such a signal may be computed either directly from the signal (total power
minus dc power) or from its amplitude density f(x):

σ_x² = (1/T)∫_0^T x²(t) dt − [(1/T)∫_0^T x(t) dt]²   or   σ_x² = ∫_{−∞}^{∞} x² f(x) dx − m_x²   (2.49)
Figure 2.5 A periodic signal and its density and distribution functions
DRILL PROBLEM 2.14
(a) Refer to Figure 2.5. Compute the variance from x(t) itself.
(b) Refer to Figure 2.5. Compute the variance from its density function f(x).
Answers: (a) 0.75 (b) 0.75
2.7.9 Stationary, Ergodic, and Pseudorandom Signals
A random signal is called stationary if its statistical features do not change over time. Thus, different (non-
overlapping) segments of a single realization are more or less identical in the statistical sense. Signals that are
non-stationary do not possess this property and may indeed exhibit a trend (a linear trend, for example)
with time. Stationarity suggests a state of statistical equilibrium, akin to the steady state for deterministic
situations. A stationary process is typically characterized by a constant mean and constant variance. The
statistical properties of a stationary random signal may be found as ensemble averages across the process,
by averaging over all realizations at one specific time instant, or as time averages along the process, by
averaging a single realization over time. The two are not always equal. If they are, the stationary process is
said to be ergodic. The biggest advantage of ergodicity is that we can use features from a single realization
to describe the whole process. It is very difficult to establish whether a stationary process is ergodic, but,
because of the advantages it offers, ergodicity is assumed in most practical situations! For an ergodic
signal, the mean equals the time average, and the variance equals the ac power (the power in the signal with
its dc component removed).
2.7.10 Statistical Estimates
Probability theory allows us to fully characterize a random signal from a priori knowledge of its probability
distribution. This yields features like the mean and variance of the random variable. In practice, we are
faced with exactly the opposite problem of finding such features from a set of discrete data, often in the
absence of a probability distribution. The best we can do is get an estimate of such features, and perhaps
even of the distribution itself. This is what statistical estimation achieves. The mean and variance are typically
estimated directly from the observations x_k, k = 0, 1, 2, . . . , N − 1 as

m_x = (1/N) Σ_{k=0}^{N−1} x_k        σ_x² = [1/(N − 1)] Σ_{k=0}^{N−1} (x_k − m_x)²   (2.50)
Histograms
The estimates f_k of a probability distribution are obtained by constructing a histogram from a large number
of observations. A histogram is a bar graph of the number of observations falling within specified amplitude
levels, or bins, as illustrated in Figure 2.6.
Figure 2.6 Histograms of a uniformly distributed and a normally distributed random signal
Pseudorandom Signals
In many situations, we use artificially generated signals (which can never be truly random) with prescribed
statistical features, called pseudorandom signals. Such signals are actually periodic (with a very long
period), but over one period their statistical features approximate those of random signals.
2.7.11 Random Signal Analysis
If a random signal forms the input to a system, the best we can do is to develop features that describe the
output on the average and estimate the response of the system under the influence of random signals. Such
estimates may be developed either in the time domain or in the frequency domain.
Signal-to-Noise Ratio
For a noisy signal x(t) = s(t) + A·n(t), with a signal component s(t) and a noise component A·n(t) (with noise
amplitude A), the signal-to-noise ratio (SNR) is the ratio of the signal power σ_s² and the noise power A²σ_n²,
usually defined in decibels (dB) as

SNR = 10 log[σ_s²/(A²σ_n²)] dB   (2.51)

The decibel value of an amplitude ratio α is defined as 20 log α. We can adjust the SNR by varying the
noise amplitude A.
Application: Coherent Signal Averaging
Coherent signal averaging is a method of extracting signals from noise. It assumes that the experiment can
be repeated and that the noise corrupting the signal is random (and uncorrelated). Averaging the results of
many runs tends to average out the noise to zero, and the signal quality (or signal-to-noise ratio) improves.
The greater the number of runs, the smoother and less noisy the averaged signal. We often remove the mean
or any linear trend before averaging. Figure 2.7 shows one realization of a noisy sine wave and the much
smoother results of averaging 8 and 48 such realizations. This method is called coherent because it requires
time coherence (time alignment of the signal for each run). It relies, for its success, on perfect synchronization
of each run and on the statistical independence of the contaminating noise.
Figure 2.7 Coherent averaging of a noisy sine wave: (a) one realization of the noisy sine,
(b) average of 8 realizations, (c) average of 48 realizations (amplitude vs. time)
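The effect shown in Figure 2.7 can be reproduced with a seeded simulation: averaging 48 noisy realizations should cut the rms error by roughly √48. A sketch with assumed parameters (signal length, noise level, and seed are our choices):

```python
import math
import random

random.seed(42)
N, runs = 200, 48

def clean(k):
    return math.sin(2 * math.pi * k / N)

def noisy_run():
    """One realization: the sine plus unit-variance Gaussian noise."""
    return [clean(k) + random.gauss(0.0, 1.0) for k in range(N)]

def rms_error(sig):
    return math.sqrt(sum((s - clean(k)) ** 2 for k, s in enumerate(sig)) / N)

one = noisy_run()
avg = [sum(col) / runs for col in zip(*(noisy_run() for _ in range(runs)))]
```

Here rms_error(one) is near 1 while rms_error(avg) drops to roughly 1/√48 ≈ 0.14, mirroring panels (a) and (c) of the figure.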
CHAPTER 2 PROBLEMS
2.1 (Discrete Signals) Sketch each signal and find its energy or power as appropriate.
(a) x[n] = {⇓6, 4, 2, 2}  (b) y[n] = {3, 2, ⇓1, 0, 1}
(c) f[n] = ( )ⁿ u[n − 1]
(g) r[n] = (1/n²) u[n − 1]  (h) s[n] = e^{jnπ}  (i) d[n] = e^{jnπ/2}
(j) t[n] = e^{(j+1)nπ/4}  (k) v[n] = j^{n/4}  (l) w[n] = (√j)ⁿ + (√(−j))ⁿ
[Hints and Suggestions: For x[n] and y[n], 2^{−2n} = 4^{−n} = (0.25)ⁿ. Sum this from n = −∞ to n = 0
(or n = −1) using a change of variable (n → −n) in the summation. For p[n], sum 1/n² over n = 1
to n = ∞ using tables. For q[n], the sum of 1/n from n = 1 to n = ∞ does not converge! For t[n],
separate the exponentials. To compute the power for s[n] and d[n], note that |s[n]| = |d[n]| = 1. For
v[n], use j = e^{jπ/2}. For w[n], set √j = e^{jπ/4} and use Euler's relation to convert to a sinusoid.]
2.6 (Energy and Power) Sketch each of the following signals, classify as an energy signal or power
signal, and find the energy or power as appropriate.
(a) x[n] = Σ_{k=−∞}^{∞} y[n − kN], where y[n] = u[n] − u[n − 3] and N = 6
(b) f[n] = Σ_{k=−∞}^{∞} 2^{n−5k} (u[n − 5k] − u[n − 5k − 4])
[Hints and Suggestions: The period of x[n] is N = 6. With y[n] = u[n] − u[n − 3], one period of
x[n] (starting at n = 0) is {1, 1, 1, 0, 0, 0}. The period of f[n] is N = 5. Its one period (starting at
n = 0) contains four samples from 2ⁿ(u[n] − u[n − 4]) and one trailing zero.]
2.7 (Decimation and Interpolation) Let x[n] = {4, 0, ⇓4, 2, 2} ...
[Hints and Suggestions: Confirm the appropriate symmetry for each even part and each odd part.
For each even part, the sample at n = 0 must equal the original sample value. For each odd part, the
sample at n = 0 must equal zero.]
2.11 (Sketching Discrete Signals) Sketch each of the following signals:
(a) x[n] = r[n + 2] − r[n − 2] − 4u[n − 6]  (b) y[n] = rect(n/6)
(c) f[n] = rect((n − 2)/4)  (d) g[n] = 6 tri((n − 4)/3)
[Hints and Suggestions: Note that f[n] is a rectangular pulse centered at n = 2 with 5 samples. Also,
g[n] is a triangular pulse centered at n = 4 with 7 samples (including the zero-valued end samples).]
2.12 (Sketching Signals) Sketch the following signals and describe how they are related.
(a) x[n] = [n] (b) f[n] = rect(n) (c) g[n] = tri(n) (d) h[n] = sinc(n)
2.13 (Signal Description) For each signal shown in Figure P2.13,
(a) Write out the numeric sequence, and mark the index n = 0 by an arrow.
(b) Write an expression for each signal using impulse functions.
(c) Write an expression for each signal using steps and/or ramps.
(d) Find the signal energy.
(e) Find the signal power, assuming that the sequence shown repeats itself.
Figure P2.13 Signals for Problem 2.13
[Hints and Suggestions: In part (c), all signals must be turned off (by step functions) and any
ramps must first be flattened out (by other ramps). For example, signal 3 = r[n] − r[n − 5] − 5u[n − 6].
The second term flattens out the first ramp and the last term turns the signal off after n = 5.]
2.14 (Discrete Exponentials) A causal discrete exponential has the form x[n] = αⁿ u[n].
(a) Assume that α is real and positive. Pick convenient values for α > 1, α = 1, and α < 1; sketch
x[n]; and describe the nature of the sketch for each choice of α.
(b) Assume that α is real and negative. Pick convenient values for |α| < 1, |α| = 1, and |α| > 1;
sketch x[n]; and describe the nature of the sketch for each choice of α.
(c) Assume that α is complex and of the form α = Ae^{jθ}, where A is a positive constant. Pick
convenient values for θ and for A < 1, A = 1, and A > 1; sketch the real part and imaginary
part of x[n] for each choice of A; and describe the nature of each sketch.
(d) Assume that α is complex and of the form α = Ae^{jθ}, where A is a positive constant. Pick
convenient values for θ and for A < 1, A = 1, and A > 1; sketch the magnitude and
phase of x[n] for each choice of A; and describe the nature of each sketch.
2.15 (Signal Representation) The two signals shown in Figure P2.15 may be expressed as
(a) x[n] = Aαⁿ (u[n] − u[n − N])  (b) y[n] = A cos(2πFn + θ)
Find the constants in each expression, and then find the signal energy or power as appropriate.
Figure P2.15 Signals for Problem 2.15
[Hints and Suggestions: For y[n], first find the period to compute F. Then, evaluate y[n] at two
values of n to get two equations for, say, y[0] and y[1]. These will yield (from their ratio) θ and A.]
2.16 (Discrete-Time Harmonics) Check for the periodicity of the following signals, and compute the
common period N if periodic.
(a) x[n] = cos(nπ/2)  (b) y[n] = cos(n/2)
(c) f[n] = sin(nπ/4) − 2 cos(nπ/6)  (d) g[n] = 2 cos(nπ/4) + cos²(nπ/4)
(e) p[n] = 4 − 3 sin(7nπ/4)  (f) q[n] = cos(5nπ/12) + cos(4nπ/9)
(g) r[n] = cos(8πn/3) + cos(8n/3)  (h) s[n] = cos(8nπ/3) cos(nπ/2)
(i) d[n] = e^{j0.3nπ}  (j) e[n] = 2e^{j0.3nπ} + 3e^{j0.4nπ}
(k) v[n] = e^{j0.3n}  (l) w[n] = (−j)^{n/2}
[Hints and Suggestions: There is no periodicity if F is not a rational fraction for any component.
Otherwise, work with the periods and find their LCM. For w[n], note that −j = e^{−jπ/2}.]
2.17 (The Roots of Unity) The N roots of the equation z^N = 1 can be found by writing it as z^N = e^{j2πk}
to give z = e^{j2πk/N}, k = 0, 1, . . . , N − 1. What is the magnitude and angle of each root? The roots
can be displayed as vectors directed from the origin whose tips lie on a circle.
(a) What is the length of each vector and the angular spacing between adjacent vectors? Sketch for
N = 5 and N = 6.
(b) Extend this concept to find the roots of z^N = −1 and sketch for N = 5 and N = 6.
[Hints and Suggestions: In part (b), note that z^N = −1 = e^{jπ} e^{j2πk} = e^{j(2k+1)π}.]
2.18 (Digital Frequency) Set up an expression for each signal, using a digital frequency |F| < 0.5, and
another expression using a digital frequency in the range 4 < F < 5.
(a) x[n] = cos(4nπ/3)  (b) x[n] = sin(4nπ/3) + 3 sin(8nπ/3)
[Hints and Suggestions: First find the digital frequency of each component in the principal range
(−0.5 < F ≤ 0.5). Then, add 4 or 5 as appropriate to bring each frequency into the required range.]
2.19 (Digital Sinusoids) Find the period N of each signal if periodic. Express each signal using a digital
frequency in the principal range (|F| < 0.5) and in the range 3 ≤ F ≤ 4.
(a) x[n] = cos(7nπ/3)  (b) x[n] = cos(7nπ/3) + sin(0.5nπ)  (c) x[n] = cos(nπ)
2.20 (Sampling and Aliasing) Each of the following sinusoids is sampled at S = 100 Hz. Determine if
aliasing has occurred, and set up an expression for each sampled signal using a digital frequency in the
principal range (|F| < 0.5).
(a) x(t) = cos(320πt + π/4)  (b) x(t) = cos(140πt − π/4)  (c) x(t) = sin(60πt)
[Hints and Suggestions: Find the frequency f0. If S > 2f0, there is no aliasing and F < 0.5.
Otherwise, bring F into the principal range to write the expression for the sampled signal.]
2.21 (Aliasing and Signal Reconstruction) The signal x(t) = cos(320πt + π/4) is sampled at 100 Hz,
and the sampled signal x[n] is reconstructed at 200 Hz to recover the analog signal x_r(t).
(a) Has aliasing occurred? What is the period N and the digital frequency F of x[n]?
(b) How many full periods of x(t) are required to generate one period of x[n]?
(c) What is the analog frequency of the recovered signal x_r(t)?
(d) Write expressions for x[n] (using |F| < 0.5) and for x_r(t).
[Hints and Suggestions: For part (b), if the digital frequency is expressed as F = k/N, where N
is the period and k is an integer, it takes k full cycles of the analog sinusoid to get N samples of
the sampled signal. In part (c), the frequency of the reconstructed signal is found from the aliased
frequency in the principal range.]
2.22 (Digital Pitch Shifting) One way to accomplish pitch shifting is to play back (or reconstruct) a
sampled signal at a different sampling rate. Let the analog signal x(t) = sin(15800πt + 0.25π) be
sampled at a sampling rate of 8 kHz.
(a) Find its sampled representation with digital frequency |F| < 0.5.
(b) What frequencies are heard if the signal is reconstructed at a rate of 4 kHz?
(c) What frequencies are heard if the signal is reconstructed at a rate of 8 kHz?
(d) What frequencies are heard if the signal is reconstructed at a rate of 20 kHz?
[Hints and Suggestions: The frequency of the reconstructed signal is found from the aliased digital
frequency in the principal range and the appropriate reconstruction rate.]
2.23 (Discrete-Time Chirp Signals) Consider the signal x(t) = cos[πφ(t)], where φ(t) = αt². Show that
its instantaneous frequency f_i(t) = ...

D = [Σ_{k=−∞}^{∞} k x²[k]] / [Σ_{k=−∞}^{∞} x²[k]]

(a) Verify that the delay of the symmetric sequence x[n] = {4, 3, 2, 1, ⇓0, 1, 2, 3, 4} is zero.
(b) Compute the delay of the signals g[n] = x[n − 1] and h[n] = x[n − 2].
(c) What is the delay of the signal y[n] = 1.5(0.5)ⁿ u[n] − 2δ[n]?
[Hints and Suggestions: For part (c), compute the summations required in the expression for the
delay by using tables and the fact that y[n] = −0.5 for n = 0 and y[n] = 1.5(0.5)ⁿ for n ≥ 1.]
2.26 (Periodicity) It is claimed that the sum of an absolutely summable signal x[n] and its shifted (by
multiples of N) replicas is a periodic signal x_p[n] with period N. Verify this claim by sketching the
following and, for each case, compute the power in the resulting periodic signal x_p[n] and compare the
sum and energy of one period of x_p[n] with the sum and energy of x[n].
(a) The sum of x[n] = tri(n/3) and its replicas shifted by N = 7
(b) The sum of x[n] = tri(n/3) and its replicas shifted by N = 6
(c) The sum of x[n] = tri(n/3) and its replicas shifted by N = 5
(d) The sum of x[n] = tri(n/3) and its replicas shifted by N = 4
(e) The sum of x[n] = tri(n/3) and its replicas shifted by N = 3
2.27 (Periodic Extension) The sum of an absolutely summable signal x[n] and its shifted (by multiples
of N) replicas is called the periodic extension of x[n] with period N.
(a) Show that one period of the periodic extension of the signal x[n] = αⁿ u[n] with period N is
y[n] = αⁿ/(1 − α^N),  0 ≤ n ≤ N − 1
(b) How does the sum of one period of the periodic extension y[n] compare with the sum of x[n]?
(c) With α = 0.5 and N = 3, compute the signal energy in x[n] and the signal power in y[n].
[Hints and Suggestions: For one period (n = 0 to n = N − 1), only x[n] and the tails of the replicas
to its left contribute. So, find the sum of x[n + kN] = α^{n+kN} only from k = 0 to k = ∞.]
2.28 (Signal Norms) Norms provide a measure of the size of a signal. The p-norm, or Hölder norm,
‖x‖_p for discrete signals is defined by ‖x‖_p = (Σ |x|^p)^{1/p}, where 0 < p < ∞ is a positive integer. For
p = ∞, we also define ‖x‖_∞ as the peak absolute value |x|_max.
(b) What is the significance of each of these norms?
COMPUTATION AND DESIGN
2.29 (Discrete Signals) For each part, plot the signals x[n] and y[n] over −10 ≤ n ≤ 10 and compare.
(a) x[n] = u[n + 4] − u[n − 4] + 2δ[n + 6] − δ[n − 3]   y[n] = x[n − 4]
(b) x[n] = r[n + 6] − r[n + 3] − r[n − 3] + r[n − 6]   y[n] = x[n − 4]
(c) x[n] = rect(n/10) − rect((n − 3)/6)   y[n] = x[n + 4]
(d) x[n] = 6 tri(n/6) − 3 tri(n/3)   y[n] = x[n + 4]
2.30 (Signal Interpolation) Let h[n] = sin(nπ/3), 0 ≤ n ≤ 10. Plot the signal h[n]. Use this to generate and plot the zero-interpolated, step-interpolated, and linearly interpolated signals, assuming interpolation by 3.
2.31 (Discrete Exponentials) A causal discrete exponential may be expressed as x[n] = α^n u[n], where the nature of α dictates the form of x[n]. Plot the following over 0 ≤ n ≤ 40 and comment on the nature of each plot.
(a) The signal x[n] for α = 1.2, α = 1, and α = 0.8
(b) The signal x[n] for α = -1.2, α = -1, and α = -0.8
(c) The real part and imaginary part of x[n] for α = Ae^(jπ/4), with A = 1.2, A = 1, and A = 0.8
(d) The magnitude and phase of x[n] for α = Ae^(jπ/4), with A = 1.2, A = 1, and A = 0.8
2.32 (Discrete-Time Sinusoids) Which of the following signals are periodic and with what period? Plot each signal over -10 ≤ n ≤ 30. Do the plots confirm your expectations?
(a) x[n] = 2 cos(nπ/2) + 5 sin(nπ/5)  (b) x[n] = 2 cos(nπ/2) sin(nπ/3)
(c) x[n] = cos(0.5n)  (d) x[n] = 5 sin(nπ/8 + π/4) - 5 cos(nπ/8 - π/4)
2.33 (Complex-Valued Signals) A complex-valued signal x[n] requires two plots for a complete description, in one of two forms: the magnitude and phase vs. n, or the real part vs. n and the imaginary part vs. n.
2.34 (Complex-Valued Signals) Let x[n] = 2e^(j(nπ/9 - π/4)). Plot the following signals and, for each case, derive analytic expressions for the signals plotted and compare with your plots. Is the signal x[n] periodic? What is the period N? Which plots allow you to determine the period of x[n]?
(a) The real part and imaginary part of x[n] over -20 ≤ n ≤ 20
(b) The magnitude and phase of x[n] over -20 ≤ n ≤ 20
(c) The sum of the real and imaginary parts over -20 ≤ n ≤ 20
(d) The difference of the real and imaginary parts over -20 ≤ n ≤ 20
2.35 (Complex Exponentials) Let x[n] = (j)^n + (-j)^n. Plot the following signals and, for each case, derive analytic expressions for the sequences plotted and compare with your plots. Is the signal x[n] periodic? What is the period N? Which plots allow you to determine the period of x[n]?
(a) The real part and imaginary part of x[n] over -20 ≤ n ≤ 20
(b) The magnitude and phase of x[n] over -20 ≤ n ≤ 20
2.36 (Discrete-Time Chirp Signals) An N-sample chirp signal x[n] whose digital frequency varies linearly from F_0 to F_1 is described by
x[n] = cos[2π(F_0 n + ((F_1 - F_0)/(2N)) n^2)],  n = 0, 1, ..., N - 1
(a) Generate and plot 800 samples of a chirp signal x whose digital frequency varies from F = 0 to F = 0.5. Using the Matlab-based routine timefreq (from the author's website), observe how the frequency of x varies linearly with time.
(b) Generate and plot 800 samples of a chirp signal x whose digital frequency varies from F = 0 to F = 1. Is the frequency always increasing? If not, what is the likely explanation?
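For readers working outside Matlab, a minimal Python/NumPy sketch of the chirp generator (the timefreq routine from the author's website is not reproduced here; the function name is mine):

```python
import numpy as np

def chirp(F0, F1, N):
    """N-sample chirp whose digital frequency sweeps linearly from F0 to F1."""
    n = np.arange(N)
    # Instantaneous frequency is the derivative of the phase/(2*pi):
    # F0 + (F1 - F0)*n/N, which runs from F0 at n = 0 toward F1 at n = N.
    return np.cos(2 * np.pi * (F0 * n + (F1 - F0) * n**2 / (2 * N)))

x = chirp(0.0, 0.5, 800)        # part (a): F sweeps from 0 to 0.5
assert len(x) == 800
assert abs(float(x[0]) - 1.0) < 1e-12   # cos(0) = 1 at n = 0
```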
2.37 (Chirp Signals) It is claimed that the chirp signal x[n] = cos(πn^2/6) is periodic (unlike the analog chirp signal x(t) = cos(πt^2/6)). Plot x[n] over 0 ≤ n ≤ 20. Does x[n] appear periodic? If so, can you identify the period N? Justify your results by trying to find an integer N such that x[n] = x[n + N] (the basis for periodicity).
2.38 (Signal Averaging) Extraction of signals from noise is an important signal-processing application.
Signal averaging relies on averaging the results of many runs. The noise tends to average out to zero,
and the signal quality or signal-to-noise ratio (SNR) improves.
(a) Generate samples of the sinusoid x(t) = sin(800πt), sampled at S = 8192 Hz for 2 seconds. The sampling rate is chosen so that you may also listen to the signal if your machine allows.
(b) Create a noisy signal s[n] by adding x[n] to samples of uniformly distributed noise such that s[n] has an SNR of 10 dB. Compare the noisy signal with the original and compute the actual SNR of the noisy signal.
(c) Sum the signal s[n] 64 times and average the result to obtain the signal x_a[n]. Compare the averaged signal x_a[n], the noisy signal s[n], and the original signal x[n]. Compute the SNR of the averaged signal x_a[n]. Is there an improvement in the SNR? Do you notice any (visual and audible) improvement? Should you?
(d) Create the averaged result x_b[n] of 64 different noisy signals and compare the averaged signal x_b[n] with the original signal x[n]. Compute the SNR of the averaged signal x_b[n]. Is there an improvement in the SNR? Do you notice any (visual and/or audible) improvement? Explain how the signal x_b[n] differs from x_a[n].
(e) The improvement in SNR is a function of the noise distribution. Generate averaged signals using different noise distributions (such as Gaussian noise) and comment on the results.
2.39 (The Central Limit Theorem) The central limit theorem asserts that the sum of independent noise
distributions tends to a Gaussian distribution as the number N of distributions in the sum increases.
In fact, one way to generate a random signal with a Gaussian distribution is to add many (typically 6
to 12) uniformly distributed signals.
(a) Generate the sum of uniformly distributed random signals using N = 2, N = 6, and N = 12 and
plot the histograms of each sum. Does the histogram begin to take on a Gaussian shape as N
increases? Comment on the shape of the histogram for N = 2.
(b) Generate the sum of random signals with different distributions using N = 6 and N = 12. Does the central limit theorem appear to hold even when the distributions are not identical (as long as you select a large enough N)? Comment on the physical significance of this result.
2.40 (Music Synthesis I) A musical composition is a combination of notes, or signals, at various frequencies. An octave covers a range of frequencies from f_0 to 2f_0. In the western musical scale, there are 12 notes per octave, logarithmically equispaced. The frequencies of the notes from f_0 to 2f_0 correspond to
f = 2^(k/12) f_0,  k = 0, 1, 2, ..., 11
The 12 notes are as follows (the ♯ and ♭ stand for sharp and flat, and each pair of notes in parentheses has the same frequency):
A (A♯ or B♭) B C (C♯ or D♭) D (D♯ or E♭) E F (F♯ or G♭) G (G♯ or A♭)
An Example: Raga Malkauns: In Indian classical music, a raga is a musical composition based on an ascending and descending scale. The notes and their order form the musical alphabet and grammar from which the performer constructs musical passages, using only the notes allowed. The performance of a raga can last from a few minutes to an hour or more! Raga malkauns is a pentatonic raga (with five notes) and the following scales:
Ascending: D F G B♭ C D  Descending: C B♭ G F D
The final note in each scale is held twice as long as the rest. To synthesize this scale in Matlab, we start with a frequency f_0 corresponding to the first note D and go up in frequency to get the notes in the ascending scale; when we reach the note D, which is an octave higher, we go down in frequency to get the notes in the descending scale. Here is a Matlab code fragment.
f0=340; d=f0; % Pick a frequency and the note D
f=f0*(2^(3/12)); g=f0*(2^(5/12)); % The notes F and G
bf=f0*(2^(8/12)); c=f0*(2^(10/12)); % The notes B(flat) and C
d2=2*d; % The note D (an octave higher)
Generate sampled sinusoids at these frequencies, using an appropriate sampling rate (say, 8192 Hz);
concatenate them, assuming silent passages between each note; and play the resulting signal, using the
Matlab command sound. Use the following Matlab code fragment as a guide:
ts=1/8192; % Sampling interval
t=0:ts:0.4; % Time for each note (0.4 s)
s1=0*(0:ts:0.1); % Silent period (0.1 s)
s2=0*(0:ts:0.05); % Shorter silent period (0.05 s)
tl=0:ts:1; % Time for last note of each scale
d1=sin(2*pi*d*t); % Start generating the notes
f1=sin(2*pi*f*t); g1=sin(2*pi*g*t);
bf1=sin(2*pi*bf*t); c1=sin(2*pi*c*t);
dl1=sin(2*pi*d2*tl); dl2=sin(2*pi*d*tl);
asc=[d1 s1 f1 s1 g1 s1 bf1 s1 c1 s2 dl1]; % Create ascending scale
dsc=[c1 s1 bf1 s1 g1 s1 f1 s1 dl2]; % Create descending scale
y=[asc s1 dsc s1]; sound(y) % Malkauns scale (y)
2.41 (Music Synthesis II) The raw scale of raga malkauns will sound pretty dry! The reason for
this is the manner in which the sound from a musical instrument is generated. Musical instruments
produce sounds by the vibrations of a string (in string instruments) or a column of air (in woodwind
instruments). Each instrument has its characteristic sound. In a guitar, for example, the strings are
plucked, held, and then released to sound the notes. Once plucked, the sound dies out and decays.
Furthermore, the notes are never pure but contain overtones (harmonics). For a realistic sound, we
must include the overtones and the attack, sustain, and release (decay) characteristics. The sound signal may be considered to have the form x(t) = α(t)cos(2πf_0 t + θ), where f_0 is the pitch and α(t)
is the envelope that describes the attack-sustain-release characteristics of the instrument played. A
crude representation of some envelopes is shown in Figure P2.41 (the piecewise linear approximations
will work just as well for our purposes). Woodwind instruments have a much longer sustain time and
a much shorter release time than do plucked string and keyboard instruments.
[Figure omitted: envelopes α(t) for woodwind instruments (left) and for string and keyboard instruments (right)]
Figure P2.41 Envelopes and their piecewise linear approximations (dark) for Problem 2.41
Experiment with the scale of raga malkauns and try to produce a guitar-like sound, using the appro-
priate envelope form. You should be able to discern an audible improvement.
2.42 (Music Synthesis III) Synthesize the following notes, using a woodwind envelope, and synthesize
the same notes using a plucked string envelope.
F♯(0.3) D(1)
All the notes cover one octave, and the numbers in parentheses give a rough indication of their relative
duration. Can you identify the music? (It is Big Ben.)
2.43 (Music Synthesis IV) Synthesize the first bar of Pictures at an Exhibition by Mussorgsky, which has the following notes:
A(3) G(3) C(3) D(2) G 4 1/6 1/6
[Figure omitted: block diagrams built from gains B_0, ..., B_M and A_1, ..., A_N, delay elements z^-1, and summing junctions]
Figure 3.2 Realization of a nonrecursive (left) and recursive (right) digital filter
The general form described by
y[n] = -A_1 y[n-1] - ··· - A_N y[n-N] + B_0 x[n] + B_1 x[n-1] + ··· + B_N x[n-N]    (3.13)
requires both feed-forward and feedback and may be realized using 2N delay elements, as shown in Figure 3.3. This describes a direct form I realization.
However, since LTI systems may be cascaded in any order (as we shall learn soon), we can switch the feedback and feedforward sections to obtain a canonical realization with only N delays, as also shown in Figure 3.3. This is also called a direct form II realization. Other forms that use only N delay elements are possible. We discuss various aspects of digital filter realization in more detail in subsequent chapters.
[Figure omitted: the direct form I structure uses 2N delay elements; the canonical (direct form II) structure uses N, with gains B_0, ..., B_N and A_1, ..., A_N]
Figure 3.3 Direct (left) and canonical (right) realization of a digital filter
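As a sketch of how the canonical structure operates, the following Python fragment (function name and state convention are my own, not from the text) runs a single delay line of intermediate values shared by the feedback and feed-forward sections:

```python
def direct_form_2(x, B, A):
    """Filter x with y[n] + A[0]y[n-1] + ... = B[0]x[n] + B[1]x[n-1] + ...
    using one shared delay line of intermediate values (direct form II)."""
    N = max(len(A), len(B) - 1)
    w = [0.0] * N                 # the N delay-line values w[n-1], ..., w[n-N]
    y = []
    for xn in x:
        # Feedback section first: w[n] = x[n] - A1*w[n-1] - ... - AN*w[n-N]
        wn = xn - sum(a * wi for a, wi in zip(A, w))
        # Feed-forward section: y[n] = B0*w[n] + B1*w[n-1] + ...
        yn = B[0] * wn + sum(b * wi for b, wi in zip(B[1:], w))
        y.append(yn)
        w = [wn] + w[:-1]         # shift the delay line
    return y

# Sanity check: y[n] - 0.5y[n-1] = x[n] driven by an impulse gives (0.5)^n
h = direct_form_2([1.0] + [0.0] * 5, B=[1.0], A=[-0.5])
assert all(abs(h[n] - 0.5 ** n) < 1e-12 for n in range(6))
```

The key point the code illustrates is that the feedback and feed-forward sections read the same N stored values, which is why only N delays are needed.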
DRILL PROBLEM 3.7
What is the difference equation of the digital filter whose realization is shown?
[Figure omitted: a recursive realization with two delay elements and feedback and feed-forward paths]
Answer: y[n] - y[n-1] - 2y[n-2] = 2x[n] - x[n-1]
3.3 Response of Digital Filters
A digital filter processes discrete signals and yields a discrete output in response to a discrete input. Its response depends not only on the applied input but also on the initial conditions that describe its state just prior to the application of the input. Systems with zero initial conditions are said to be relaxed. Digital filters may be analyzed in the time domain using any of the following models:
The difference equation representation applies to linear, nonlinear, and time-varying systems. For LTI systems, it allows computation of the response using superposition even if initial conditions are present.
The impulse response representation describes a relaxed LTI system by its impulse response h[n]. The output y[n] appears explicitly in the governing relation called the convolution sum. It also allows us to relate time-domain and transformed-domain methods of system analysis.
The state variable representation describes an nth-order system by n simultaneous first-order difference equations, called state equations, in terms of n state variables. It is useful for complex or nonlinear systems and those with multiple inputs and outputs. For LTI systems, state equations can be solved using matrix methods. The state variable form is also readily amenable to numerical solution. We do not pursue this method in this book.
3.3.1 Response of Nonrecursive Filters
The system equation of a nonrecursive filter is
y[n] = B_0 x[n] + B_1 x[n-1] + ··· + B_M x[n-M]    (FIR filter)
Since the output y[n] depends only on the input x[n] and its shifted versions, the response is simply a weighted sum of the input terms, exactly as described by its system equation.
EXAMPLE 3.5 (Response of Nonrecursive Filters)
Consider an FIR filter described by y[n] = 2x[n] - 3x[n-2]. Find its output y[n] if the input to the system is x[n] = (0.5)^n u[n], and compute y[n] for n = 1 and n = 2.
We find that y[n] = 2x[n] - 3x[n-2] = 2(0.5)^n u[n] - 3(0.5)^(n-2) u[n-2].
We get y[1] = 2(0.5)^1 = 1 and y[2] = 2(0.5)^2 - 3(0.5)^0 = 0.5 - 3 = -2.5.
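The values above are easy to confirm by brute force; a short Python sketch (function names are mine):

```python
def x(n):
    return 0.5 ** n if n >= 0 else 0.0   # x[n] = (0.5)^n u[n]

def y(n):
    return 2 * x(n) - 3 * x(n - 2)       # y[n] = 2x[n] - 3x[n-2]

print(y(1), y(2))   # 1.0 -2.5
```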
3.3.2 Response of Recursive Filters by Recursion
The difference equation of a recursive digital filter is
y[n] + A_1 y[n-1] + ··· + A_N y[n-N] = B_0 x[n] + B_1 x[n-1] + ··· + B_M x[n-M]
The response shows dependence on its past values as well as on values of the input. Computing the output y[n] requires the prior values y[n-1], y[n-2], ..., y[n-N]. Once y[n] is known, we can use it and the other previously known values to compute y[n+1], and continue the recursion to successively compute values of the output as far as desired. Consider the second-order difference equation
y[n] + A_1 y[n-1] + A_2 y[n-2] = B_0 x[n] + B_1 x[n-1]
To find y[n], we rewrite this equation as follows:
y[n] = -A_1 y[n-1] - A_2 y[n-2] + B_0 x[n] + B_1 x[n-1]
We see that y[n] can be found from its past values y[n-1] and y[n-2]. To start the recursion at n = 0, we must be given the values of y[-1] and y[-2]; once known, the values of y[n], n ≥ 0, may be computed successively as far as desired. This method thus requires initial conditions to get the recursion started. In general, the response y[n], n ≥ 0, of the Nth-order difference equation requires the N consecutive initial conditions y[-1], y[-2], ..., y[-N]. The recursive approach is effective and can be used even for nonlinear or time-varying systems. Its main disadvantage is that a general closed-form solution for the output is not always easy to discern.
EXAMPLE 3.6 (System Response Using Recursion)
(a) Consider a system described by y[n] = a_1 y[n-1] + b_0 u[n]. Let the initial condition be y[-1] = 0. We then successively compute
y[0] = a_1 y[-1] + b_0 u[0] = b_0
y[1] = a_1 y[0] + b_0 u[1] = a_1 b_0 + b_0 = b_0[1 + a_1]
y[2] = a_1 y[1] + b_0 u[2] = a_1[a_1 b_0 + b_0] + b_0 = b_0[1 + a_1 + a_1^2]
The form of y[n] may be discerned as
y[n] = b_0[1 + a_1 + a_1^2 + ··· + a_1^(n-1) + a_1^n]
Using the closed form for the geometric sequence results in
y[n] = b_0 (1 - a_1^(n+1)) / (1 - a_1)
If the coefficients appear as numerical values, the general form may not be easy to discern.
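The closed form in part (a) can be confirmed against direct recursion; a Python sketch using the illustrative values a_1 = 0.5 and b_0 = 2 (my choice, not from the text):

```python
a1, b0 = 0.5, 2.0

# Recurse y[n] = a1*y[n-1] + b0*u[n] starting from y[-1] = 0.
y_prev, y_rec = 0.0, []
for n in range(10):
    y_prev = a1 * y_prev + b0
    y_rec.append(y_prev)

# Closed form: y[n] = b0*(1 - a1**(n+1)) / (1 - a1)
for n, yn in enumerate(y_rec):
    assert abs(yn - b0 * (1 - a1 ** (n + 1)) / (1 - a1)) < 1e-12
```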
(b) Consider a system described by y[n] = a_1 y[n-1] + b_0 n u[n]. Let the initial condition be y[-1] = 0. We then successively compute
y[0] = a_1 y[-1] = 0
y[1] = a_1 y[0] + b_0 u[1] = b_0
y[2] = a_1 y[1] + 2b_0 u[2] = a_1 b_0 + 2b_0
y[3] = a_1 y[2] + 3b_0 u[3] = a_1[a_1 b_0 + 2b_0] + 3b_0 = a_1^2 b_0 + 2a_1 b_0 + 3b_0
The general form is thus y[n] = b_0 a_1^(n-1) + 2b_0 a_1^(n-2) + 3b_0 a_1^(n-3) + ··· + (n-1)b_0 a_1 + n b_0.
We can find a more compact form for this, but not without some effort. Factoring out a_1^n, we obtain
y[n] = b_0 a_1^n [a_1^-1 + 2a_1^-2 + 3a_1^-3 + ··· + n a_1^-n]
Using the closed form for the sum Σ kx^k from k = 1 to k = N (with x = a_1^-1 and N = n), we get
y[n] = b_0 a_1^n · a_1^-1 [1 - (n+1)a_1^-n + n a_1^-(n+1)] / (1 - a_1^-1)^2
What a chore! More elegant ways of solving difference equations are described later in this chapter.
(c) Consider the recursive system y[n] = y[n-1] + x[n] - x[n-3]. If x[n] equals δ[n] and y[-1] = 0, we successively obtain
y[0] = y[-1] + δ[0] - δ[-3] = 1    y[3] = y[2] + δ[3] - δ[0] = 1 - 1 = 0
y[1] = y[0] + δ[1] - δ[-2] = 1    y[4] = y[3] + δ[4] - δ[1] = 0
y[2] = y[1] + δ[2] - δ[-1] = 1    y[5] = y[4] + δ[5] - δ[2] = 0
The impulse response of this recursive filter is zero after the first three values and has a finite length. It is actually a nonrecursive (FIR) filter in disguise!
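Part (c)'s conclusion is simple to confirm by recursion; a short Python sketch:

```python
def delta(n):
    return 1 if n == 0 else 0

# Recurse y[n] = y[n-1] + x[n] - x[n-3] with x[n] = delta[n] and y[-1] = 0.
y_prev, h = 0, []
for n in range(8):
    y_prev = y_prev + delta(n) - delta(n - 3)
    h.append(y_prev)
print(h)   # [1, 1, 1, 0, 0, 0, 0, 0] -- a finite (FIR) impulse response
```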
DRILL PROBLEM 3.8
(a) Let y[n] - y[n-1] - 2y[n-2] = u[n]. Use recursion to compute y[3] if y[-1] = 2, y[-2] = 0.
(b) Let y[n] - 0.8y[n-1] = x[n]. Use recursion to find the general form of y[n] if x[n] = δ[n] and y[-1] = 0.
Answers: (a) 32  (b) (0.8)^n, n ≥ 0, or (0.8)^n u[n]
3.4 The Natural and Forced Response
For an LTI system governed by a linear constant-coefficient difference equation, a formal way of computing the output is by the method of undetermined coefficients. This method yields the total response y[n] as the sum of the forced response y_F[n] and the natural response y_N[n]. The form of the natural response depends only on the system details and is independent of the nature of the input. The forced response arises due to the interaction of the system with the input and thus depends on both the input and the system details.
3.4.1 The Single-Input Case
Consider the Nth-order difference equation with the single unscaled input x[n]:
y[n] + A_1 y[n-1] + A_2 y[n-2] + ··· + A_N y[n-N] = x[n]    (3.14)
with initial conditions y[-1], y[-2], y[-3], ..., y[-N].
Its forced response arises due to the interaction of the system with the input and thus depends on both the input and the system details. It satisfies the given difference equation and has the same form as the input. Table 3.1 summarizes these forms for various types of inputs. The constants in the forced response can be found uniquely, and independently of the natural response or initial conditions, simply by satisfying the given difference equation.
The characteristic equation is defined by the polynomial equation
1 + A_1 z^-1 + A_2 z^-2 + ··· + A_N z^-N = 0,  or  z^N + A_1 z^(N-1) + ··· + A_N = 0    (3.15)
This equation has N roots, z_1, z_2, ..., z_N. The natural response is a linear combination of N discrete-time exponentials of the form
y_N[n] = K_1 z_1^n + K_2 z_2^n + ··· + K_N z_N^n    (3.16)
This form must be modified for multiple roots. Since complex roots occur in conjugate pairs, their associated constants also form conjugate pairs to ensure that y_N[n] is real. Algebraic details lead to the preferred form with two real constants. Table 3.2 summarizes the preferred forms for multiple or complex roots.
The total response is found by first adding the forced and natural response and then evaluating the undetermined constants (in the natural component) using the prescribed initial conditions. For stable systems, the natural response is also called the transient response, since it decays to zero with time. For systems with harmonic or switched harmonic inputs, the forced response is a harmonic at the input frequency and is termed the steady-state response.
Table 3.1 Form of the Forced Response for Discrete LTI Systems
Note: If the right-hand side (RHS) is α^n, where α is also a root of the characteristic equation repeated p times, the forced response form must be multiplied by n^p.
Entry | Forcing Function (RHS) | Form of Forced Response
1 | C_0 (constant) | C_1 (another constant)
2 | α^n (see note above) | Cα^n
3 | cos(nΩ + θ) | C_1 cos(nΩ) + C_2 sin(nΩ) or C cos(nΩ + φ)
4 | α^n cos(nΩ + θ) (see note above) | α^n [C_1 cos(nΩ) + C_2 sin(nΩ)]
5 | n | C_0 + C_1 n
6 | n^p | C_0 + C_1 n + C_2 n^2 + ··· + C_p n^p
7 | n α^n (see note above) | α^n (C_0 + C_1 n)
8 | n^p α^n (see note above) | α^n (C_0 + C_1 n + C_2 n^2 + ··· + C_p n^p)
9 | n cos(nΩ + θ) | (C_1 + C_2 n)cos(nΩ) + (C_3 + C_4 n)sin(nΩ)
Table 3.2 Form of the Natural Response for Discrete LTI Systems
Entry | Root of Characteristic Equation | Form of Natural Response
1 | Real and distinct: r | K r^n
2 | Complex conjugate: r e^(±jΩ) | r^n [K_1 cos(nΩ) + K_2 sin(nΩ)]
3 | Real, repeated: r^(p+1) | r^n (K_0 + K_1 n + K_2 n^2 + ··· + K_p n^p)
4 | Complex, repeated: (r e^(±jΩ))^(p+1) | r^n cos(nΩ)(A_0 + A_1 n + A_2 n^2 + ··· + A_p n^p) + r^n sin(nΩ)(B_0 + B_1 n + B_2 n^2 + ··· + B_p n^p)
REVIEW PANEL 3.6
Response of LTI Systems Described by Difference Equations
Total Response = Natural Response + Forced Response
The roots of the characteristic equation determine only the form of the natural response.
The input terms (RHS) of the dierence equation completely determine the forced response.
Initial conditions satisfy the total response to yield the constants in the natural response.
EXAMPLE 3.7 (Forced and Natural Response)
(a) Consider the system shown in Figure E3.7A. Find its response if x[n] = (0.4)^n, n ≥ 0, and the initial condition is y[-1] = 10.
[Figure omitted: a first-order recursive filter with one delay element and feedback gain 0.6]
Figure E3.7A The system for Example 3.7(a)
The difference equation describing this system is y[n] - 0.6y[n-1] = x[n] = (0.4)^n, n ≥ 0.
Its characteristic equation is 1 - 0.6z^-1 = 0 or z - 0.6 = 0.
Its root z = 0.6 gives the form of the natural response y_N[n] = K(0.6)^n.
Since x[n] = (0.4)^n, the forced response is y_F[n] = C(0.4)^n.
We find C by substituting for y_F[n] into the difference equation:
y_F[n] - 0.6y_F[n-1] = (0.4)^n = C(0.4)^n - 0.6C(0.4)^(n-1)
Cancel out (0.4)^n from both sides and solve for C to get C - 1.5C = 1, or C = -2.
Thus, y_F[n] = -2(0.4)^n. The total response is y[n] = y_N[n] + y_F[n] = -2(0.4)^n + K(0.6)^n.
We use the initial condition y[-1] = 10 on the total response to find K:
y[-1] = 10 = -5 + K/0.6, and K = 9.
Thus, y[n] = -2(0.4)^n + 9(0.6)^n, n ≥ 0.
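A quick numerical check of this result, comparing direct recursion against the closed form (Python sketch):

```python
# y[n] - 0.6*y[n-1] = (0.4)**n, n >= 0, with y[-1] = 10
y_prev, y_rec = 10.0, []
for n in range(12):
    y_prev = 0.6 * y_prev + 0.4 ** n
    y_rec.append(y_prev)

# Closed form: y[n] = -2*(0.4)**n + 9*(0.6)**n
for n, yn in enumerate(y_rec):
    assert abs(yn - (-2 * 0.4 ** n + 9 * 0.6 ** n)) < 1e-12
```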
(b) Consider the difference equation y[n] - 0.5y[n-1] = 5 cos(0.5nπ), n ≥ 0, with y[-1] = 4.
Its characteristic equation is 1 - 0.5z^-1 = 0 or z - 0.5 = 0.
Its root z = 0.5 gives the form of the natural response y_N[n] = K(0.5)^n.
Since x[n] = 5 cos(0.5nπ), the forced response is y_F[n] = A cos(0.5nπ) + B sin(0.5nπ).
We find y_F[n-1] = A cos[0.5(n-1)π] + B sin[0.5(n-1)π] = A sin(0.5nπ) - B cos(0.5nπ). Then
y_F[n] - 0.5y_F[n-1] = (A + 0.5B)cos(0.5nπ) - (0.5A - B)sin(0.5nπ) = 5 cos(0.5nπ)
Equate the coefficients of the cosine and sine terms to get
(A + 0.5B) = 5, (0.5A - B) = 0, or A = 4, B = 2, and y_F[n] = 4 cos(0.5nπ) + 2 sin(0.5nπ).
The total response is y[n] = K(0.5)^n + 4 cos(0.5nπ) + 2 sin(0.5nπ). With y[-1] = 4, we find
y[-1] = 4 = 2K - 2, or K = 3, and thus y[n] = 3(0.5)^n + 4 cos(0.5nπ) + 2 sin(0.5nπ), n ≥ 0.
The steady-state response is 4 cos(0.5nπ) + 2 sin(0.5nπ), and the transient response is 3(0.5)^n.
(c) Consider the difference equation y[n] - 0.5y[n-1] = 3(0.5)^n, n ≥ 0, with y[-1] = 2.
Its characteristic equation is 1 - 0.5z^-1 = 0 or z - 0.5 = 0.
Its root, z = 0.5, gives the form of the natural response y_N[n] = K(0.5)^n.
Since x[n] = (0.5)^n has the same form as the natural response, the forced response is y_F[n] = Cn(0.5)^n.
We find C by substituting for y_F[n] into the difference equation:
y_F[n] - 0.5y_F[n-1] = 3(0.5)^n = Cn(0.5)^n - 0.5C(n-1)(0.5)^(n-1)
Cancel out (0.5)^n from both sides and solve for C to get Cn - C(n-1) = 3, or C = 3.
Thus, y_F[n] = 3n(0.5)^n. The total response is y[n] = y_N[n] + y_F[n] = K(0.5)^n + 3n(0.5)^n.
We use the initial condition y[-1] = 2 on the total response to find K:
y[-1] = 2 = 2K - 6, and K = 4.
Thus, y[n] = 4(0.5)^n + 3n(0.5)^n = (4 + 3n)(0.5)^n, n ≥ 0.
(d) (A Second-Order System) Consider the system shown in Figure E3.7D. Find the forced and natural response of this system if x[n] = u[n] and y[-1] = 0, y[-2] = 12.
[Figure omitted: a second-order recursive filter with input gain 4, two delay elements, and feedback gains 1/6 and 1/6]
Figure E3.7D The system for Example 3.7(d)
Comparison with the generic realization reveals that the system difference equation is
y[n] - (1/6)y[n-1] - (1/6)y[n-2] = 4x[n] = 4u[n]
Its characteristic equation is 1 - (1/6)z^-1 - (1/6)z^-2 = 0 or z^2 - (1/6)z - 1/6 = 0.
Its roots are z_1 = 1/2 and z_2 = -1/3.
The natural response is thus y_N[n] = K_1(z_1)^n + K_2(z_2)^n = K_1(1/2)^n + K_2(-1/3)^n.
Since the forcing function is 4u[n] (a constant for n ≥ 0), the forced response y_F[n] is constant.
Let y_F[n] = C. Then y_F[n-1] = C, y_F[n-2] = C, and
y_F[n] - (1/6)y_F[n-1] - (1/6)y_F[n-2] = C - (1/6)C - (1/6)C = 4. This yields C = 6.
Thus, y_F[n] = 6. The total response y[n] is y[n] = y_N[n] + y_F[n] = K_1(1/2)^n + K_2(-1/3)^n + 6.
To find the constants K_1 and K_2, we use the initial conditions on the total response to obtain
y[-1] = 0 = 2K_1 - 3K_2 + 6, and y[-2] = 12 = 4K_1 + 9K_2 + 6. We find K_1 = -1.2 and K_2 = 1.2.
Thus, y[n] = -1.2(1/2)^n + 1.2(-1/3)^n + 6, n ≥ 0.
Its transient response is -1.2(1/2)^n + 1.2(-1/3)^n. Its steady-state response is a constant that equals 6.
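This second-order result can also be confirmed by recursion (Python sketch):

```python
# y[n] - (1/6)y[n-1] - (1/6)y[n-2] = 4, n >= 0, with y[-1] = 0, y[-2] = 12
y1, y2 = 0.0, 12.0      # y[n-1] and y[n-2], seeded with the initial conditions
vals = []
for n in range(15):
    yn = y1 / 6 + y2 / 6 + 4                            # the recursion
    closed = -1.2 * 0.5 ** n + 1.2 * (-1 / 3) ** n + 6  # the claimed closed form
    assert abs(yn - closed) < 1e-9
    vals.append(yn)
    y1, y2 = yn, y1
```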
DRILL PROBLEM 3.9
(a) Let y[n] - 0.8y[n-1] = 2 with y[-1] = 5. Solve for y[n], n ≥ 0.
(b) Let y[n] + 0.8y[n-1] = 2(0.8)^n with y[-1] = 10. Solve for y[n], n ≥ 0.
(c) Let y[n] - 0.8y[n-1] = 2(0.8)^n with y[-1] = 5. Solve for y[n], n ≥ 0.
(d) Let y[n] - 0.8y[n-1] = 2(-0.8)^n + 2(0.4)^n with y[-1] = -5. Solve for y[n], n ≥ 0.
Answers: (a) 10 - 4(0.8)^n  (b) (0.8)^n - 7(-0.8)^n  (c) (2n + 6)(0.8)^n  (d) (0.8)^n + (-0.8)^n - 2(0.4)^n
3.4.2 The Zero-Input Response and Zero-State Response
A linear system is one for which superposition applies and implies that the system is relaxed (with zero initial
conditions) and that the system equation involves only linear operators. However, we can use superposition
even for a system with nonzero initial conditions that is otherwise linear. We treat it as a multiple-input
system by including the initial conditions as additional inputs. The output then equals the superposition
of the outputs due to each input acting alone, and any changes in the input are related linearly to changes
in the response. As a result, its response can be written as the sum of a zero-input response (due to the
initial conditions alone) and the zero-state response (due to the input alone). The zero-input response obeys
superposition, as does the zero-state response.
It is often more convenient to describe the response y[n] of an LTI system as the sum of its zero-state response (ZSR) y_zs[n] (assuming zero initial conditions) and zero-input response (ZIR) y_zi[n] (assuming zero input). Each component is found using the method of undetermined coefficients. Note that the natural and forced components y_N[n] and y_F[n] do not, in general, correspond to the zero-input and zero-state response, respectively, even though each pair adds up to the total response.
REVIEW PANEL 3.7
The ZSR and ZIR for y[n] + A_1 y[n-1] + ··· + A_N y[n-N] = x[n]
1. Find the ZSR from y_zs[n] + A_1 y_zs[n-1] + ··· + A_N y_zs[n-N] = x[n], assuming zero initial conditions.
2. Find the ZIR from y_zi[n] + A_1 y_zi[n-1] + ··· + A_N y_zi[n-N] = 0, using the given initial conditions.
3. Find the complete response as y[n] = y_zs[n] + y_zi[n].
The ZSR obeys superposition. The ZIR obeys superposition.
EXAMPLE 3.8 (Zero-Input and Zero-State Response for the Single-Input Case)
(a) (A First-Order System) Consider the difference equation y[n] - 0.6y[n-1] = (0.4)^n, n ≥ 0, with y[-1] = 10.
Its characteristic equation is 1 - 0.6z^-1 = 0 or z - 0.6 = 0.
Its root z = 0.6 gives the form of the natural response y_N[n] = K(0.6)^n.
Since x[n] = (0.4)^n, the forced response is y_F[n] = C(0.4)^n.
We find C by substituting for y_F[n] into the difference equation:
y_F[n] - 0.6y_F[n-1] = (0.4)^n = C(0.4)^n - 0.6C(0.4)^(n-1)
Cancel out (0.4)^n from both sides and solve for C to get C - 1.5C = 1, or C = -2.
Thus, y_F[n] = -2(0.4)^n.
The total response (subject to initial conditions) is y[n] = y_F[n] + y_N[n] = -2(0.4)^n + K(0.6)^n.
1. Its ZSR is found from the form of the total response, y_zs[n] = -2(0.4)^n + K(0.6)^n, assuming zero initial conditions:
y_zs[-1] = 0 = -5 + K/0.6, so K = 3 and y_zs[n] = -2(0.4)^n + 3(0.6)^n, n ≥ 0
2. Its ZIR is found from the natural response, y_zi[n] = K(0.6)^n, with the given initial conditions:
y_zi[-1] = 10 = K/0.6, so K = 6 and y_zi[n] = 6(0.6)^n, n ≥ 0
3. The total response is y[n] = y_zi[n] + y_zs[n] = -2(0.4)^n + 9(0.6)^n, n ≥ 0.
(b) (A Second-Order System)
Let y[n] - (1/6)y[n-1] - (1/6)y[n-2] = 4, n ≥ 0, with y[-1] = 0 and y[-2] = 12.
Its characteristic equation is 1 - (1/6)z^-1 - (1/6)z^-2 = 0 or z^2 - (1/6)z - 1/6 = 0.
Its roots are z_1 = 1/2 and z_2 = -1/3.
Since the forcing function is a constant for n ≥ 0, the forced response y_F[n] is constant.
Let y_F[n] = C. Then y_F[n-1] = C, y_F[n-2] = C, and y_F[n] - (1/6)y_F[n-1] - (1/6)y_F[n-2] = C - (1/6)C - (1/6)C = 4.
This yields C = 6, to give the forced response y_F[n] = 6.
1. The ZIR has the form of the natural response, y_zi[n] = K_1(1/2)^n + K_2(-1/3)^n.
To find the constants, we use the given initial conditions y[-1] = 0 and y[-2] = 12:
0 = K_1(1/2)^-1 + K_2(-1/3)^-1 = 2K_1 - 3K_2
12 = K_1(1/2)^-2 + K_2(-1/3)^-2 = 4K_1 + 9K_2
Thus, K_1 = 1.2, K_2 = 0.8, and
y_zi[n] = 1.2(1/2)^n + 0.8(-1/3)^n, n ≥ 0
2. The ZSR has the same form as the total response. Since the forced response is y_F[n] = 6, we have
y_zs[n] = K_1(1/2)^n + K_2(-1/3)^n + 6
To find the constants, we assume zero initial conditions, y[-1] = 0 and y[-2] = 0, to get
y[-1] = 0 = 2K_1 - 3K_2 + 6    y[-2] = 0 = 4K_1 + 9K_2 + 6
We find K_1 = -2.4 and K_2 = 0.4, and thus
y_zs[n] = -2.4(1/2)^n + 0.4(-1/3)^n + 6, n ≥ 0
3. The total response is y[n] = y_zi[n] + y_zs[n] = -1.2(1/2)^n + 1.2(-1/3)^n + 6, n ≥ 0.
(c) (Linearity and Superposition of the ZSR and ZIR) An IIR filter is described by y[n] - y[n-1] - 2y[n-2] = x[n], with x[n] = 6u[n] and initial conditions y[-1] = -1, y[-2] = 4.
1. Find the zero-input response, zero-state response, and total response.
2. How does the total response change if y[-1] = -1, y[-2] = 4 as given, but x[n] = 12u[n]?
3. How does the total response change if x[n] = 6u[n] as given, but y[-1] = -2, y[-2] = 8?
1. We find the characteristic equation as (1 - z^-1 - 2z^-2) = 0 or (z^2 - z - 2) = 0.
The roots of the characteristic equation are z_1 = -1 and z_2 = 2.
The form of the natural response is y_N[n] = A(-1)^n + B(2)^n.
Since the input x[n] is constant for n ≥ 0, the form of the forced response is also constant.
So, choose y_F[n] = C in the system equation and evaluate C:
y_F[n] - y_F[n-1] - 2y_F[n-2] = C - C - 2C = 6, so C = -3 and y_F[n] = -3
For the ZSR, we use the form of the total response and zero initial conditions:
y_zs[n] = y_F[n] + y_N[n] = -3 + A(-1)^n + B(2)^n,  y[-1] = y[-2] = 0
We obtain y_zs[-1] = 0 = -3 - A + 0.5B and y_zs[-2] = 0 = -3 + A + 0.25B.
Thus, A = 1, B = 8, and y_zs[n] = -3 + (-1)^n + 8(2)^n, n ≥ 0.
For the ZIR, we use the form of the natural response and the given initial conditions:
y_zi[n] = y_N[n] = A(-1)^n + B(2)^n,  y[-1] = -1, y[-2] = 4
This gives y_zi[-1] = -1 = -A + 0.5B, and y_zi[-2] = 4 = A + 0.25B.
Thus, A = 3, B = 4, and y_zi[n] = 3(-1)^n + 4(2)^n, n ≥ 0.
The total response is the sum of the zero-input and zero-state response:
y[n] = y_zi[n] + y_zs[n] = -3 + 4(-1)^n + 12(2)^n, n ≥ 0
2. If x[n] = 12u[n], the zero-state response doubles to y_zs[n] = -6 + 2(-1)^n + 16(2)^n, n ≥ 0.
3. If y[-1] = -2 and y[-2] = 8, the zero-input response doubles to y_zi[n] = 6(-1)^n + 8(2)^n, n ≥ 0.
DRILL PROBLEM 3.10
(a) Let y[n] - 0.8y[n-1] = 2. Find its zero-state response.
(b) Let y[n] + 0.8y[n-1] = x[n] with y[-1] = 5. Find its zero-input response.
(c) Let y[n] - 0.4y[n-1] = (0.8)^n with y[-1] = 10. Find its zero-state and zero-input response.
Answers: (a) 10 - 8(0.8)^n  (b) -4(-0.8)^n  (c) 2(0.8)^n - (0.4)^n, 4(0.4)^n
3.4.3 Solution of the General Difference Equation
The solution of the general difference equation described by
y[n] + A₁y[n−1] + ··· + A_N y[n−N] = B₀x[n] + B₁x[n−1] + ··· + B_M x[n−M]    (3.17)
is found by invoking linearity and superposition as follows:
1. Compute the zero-state response y₀[n] of the single-input system
y₀[n] + A₁y₀[n−1] + A₂y₀[n−2] + ··· + A_N y₀[n−N] = x[n]    (3.18)
2. Use linearity and superposition to find y_zs[n] as
y_zs[n] = B₀y₀[n] + B₁y₀[n−1] + ··· + B_M y₀[n−M]    (3.19)
3. Find the zero-input response y_zi[n] using initial conditions.
4. Find the total response as y[n] = y_zs[n] + y_zi[n].
Note that the zero-input response is computed and included just once.
REVIEW PANEL 3.8
Solving y[n] + A₁y[n−1] + ··· + A_N y[n−N] = B₀x[n] + B₁x[n−1] + ··· + B_M x[n−M]
1. Find y₀[n] from y₀[n] + A₁y₀[n−1] + ··· + A_N y₀[n−N] = x[n], assuming zero initial conditions.
2. Find the ZSR (using superposition) as y_zs[n] = B₀y₀[n] + B₁y₀[n−1] + ··· + B_M y₀[n−M].
3. Find the ZIR from y_zi[n] + A₁y_zi[n−1] + ··· + A_N y_zi[n−N] = 0, using the given initial conditions.
4. Find the complete response as y[n] = y_zs[n] + y_zi[n].
EXAMPLE 3.9 (Response of a General System)
Consider the recursive digital filter whose realization is shown in Figure E3.9.
What is the response of this system if x[n] = 6u[n] and y[−1] = −1, y[−2] = 4?
Figure E3.9 The digital filter for Example 3.9
Comparison with the generic realization reveals that the system difference equation is
y[n] − y[n−1] − 2y[n−2] = 2x[n] − x[n−1]
From the previous example, the ZSR of y[n] − y[n−1] − 2y[n−2] = x[n] is y₀[n] = [−3 + (−1)^n + 8(2)^n]u[n].
The ZSR for the input 2x[n] − x[n−1] is thus
y_zs[n] = 2y₀[n] − y₀[n−1] = [−6 + 2(−1)^n + 16(2)^n]u[n] − [−3 + (−1)^{n−1} + 8(2)^{n−1}]u[n−1]
From the previous example, the ZIR of y[n] − y[n−1] − 2y[n−2] = x[n] is y_zi[n] = [3(−1)^n + 4(2)^n]u[n].
The total response is y[n] = y_zi[n] + y_zs[n]:
y[n] = [3(−1)^n + 4(2)^n]u[n] + [−6 + 2(−1)^n + 16(2)^n]u[n] − [−3 + (−1)^{n−1} + 8(2)^{n−1}]u[n−1]
DRILL PROBLEM 3.11
(a) Let y[n] − 0.8y[n−1] = x[n] with x[n] = 2(0.4)^n and y[−1] = −10. Find its ZSR and ZIR.
(b) Let y[n] − 0.8y[n−1] = 2x[n] with x[n] = 2(0.4)^n and y[−1] = −10. Find y[n].
(c) Let y[n] − 0.8y[n−1] = 2x[n] − x[n−1] with x[n] = 2(0.4)^n and y[−1] = −10. Find y[n].
Answers: (a) 4(0.8)^n − 2(0.4)^n, −8(0.8)^n (b) −4(0.4)^n (c) (0.4)^n − 5(0.8)^n (simplified)
3.5 The Impulse Response
The impulse response h[n] of a relaxed LTI system is simply the response to a unit impulse input δ[n]. The impulse response provides us with a powerful method for finding the zero-state response of LTI systems to arbitrary inputs using superposition (as described in the next chapter). The impulse response and the step response are often used to assess the time-domain performance of digital filters.
REVIEW PANEL 3.9
The Impulse Response and Step Response Are Defined Only for Relaxed LTI Systems
Impulse input δ[n] → Relaxed LTI system → Impulse response h[n]
Impulse response h[n]: The output of a relaxed LTI system if the input is a unit impulse δ[n]
Step response s[n]: The output of a relaxed LTI system if the input is a unit step u[n]
3.5.1 Impulse Response of Nonrecursive Filters
For a nonrecursive (FIR) filter of length M + 1 described by
y[n] = B₀x[n] + B₁x[n−1] + ··· + B_M x[n−M]    (3.20)
the impulse response h[n] (with x[n] = δ[n]) is an (M + 1)-term sequence of the input terms, which may be written as
h[n] = B₀δ[n] + B₁δ[n−1] + ··· + B_M δ[n−M]   or   h[n] = {B₀, B₁, . . . , B_M}    (3.21)
The sequence h[n] represents the FIR filter coefficients.
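The fact that h[n] simply reproduces the coefficients B₀, …, B_M can be verified by feeding a unit impulse through a direct implementation of Eq. (3.20). The sketch below is illustrative (the coefficient values are arbitrary, not from the text):

```python
# Sketch: for a nonrecursive (FIR) filter, the impulse response is just
# the coefficient sequence B0, B1, ..., BM.

def fir(x, b):
    """y[n] = b[0] x[n] + b[1] x[n-1] + ... + b[M] x[n-M] for a finite input list."""
    return [sum(b[k] * x[n-k] for k in range(len(b)) if 0 <= n-k < len(x))
            for n in range(len(x) + len(b) - 1)]

b = [1, 2, 2, 3]              # illustrative coefficients B0..B3
delta = [1, 0, 0, 0]          # unit impulse
h = fir(delta, b)[:len(b)]
assert h == b                 # h[n] reproduces the coefficients
```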
DRILL PROBLEM 3.12
(a) Let y[n] = 2x[n+1] + 3x[n] − x[n−2]. Write its impulse response as a sequence.
(b) Let y[n] = x[n] − 2x[n−1] + 4x[n−3]. Write its impulse response as a sum of impulses.
Answers: (a) h[n] = {2, 3, 0, −1}, starting at n = −1 (b) h[n] = δ[n] − 2δ[n−1] + 4δ[n−3]

…

The characteristic equation is 1 − (1/6)z^−1 − (1/6)z^−2 = 0 or z² − (1/6)z − 1/6 = 0.
Its roots, z₁ = 1/2 and z₂ = −1/3, give the natural response h[n] = K₁(1/2)^n + K₂(−1/3)^n.
With h[0] = 1 and h[−1] = 0, we find 1 = K₁ + K₂ and 0 = 2K₁ − 3K₂.
Solving for the constants, we obtain K₁ = 0.6 and K₂ = 0.4.
Thus, h[n] = [0.6(1/2)^n + 0.4(−1/3)^n]u[n].
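A quick numerical sanity check (a sketch, not part of the text): the closed form can be compared against a direct recursion of y[n] = (1/6)y[n−1] + (1/6)y[n−2] + δ[n]:

```python
# Sketch: check h[n] = 0.6(1/2)^n + 0.4(-1/3)^n against direct recursion
# of y[n] - (1/6) y[n-1] - (1/6) y[n-2] = x[n] with x[n] = delta[n].

h = []
for n in range(12):
    prev1 = h[n-1] if n >= 1 else 0.0        # relaxed system: h[-1] = 0
    prev2 = h[n-2] if n >= 2 else 0.0        # h[-2] = 0
    h.append(prev1/6 + prev2/6 + (1.0 if n == 0 else 0.0))

for n in range(12):
    closed = 0.6*(0.5)**n + 0.4*(-1/3)**n
    assert abs(h[n] - closed) < 1e-12        # recursion matches the closed form
```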
DRILL PROBLEM 3.14
(a) Let y[n] − 0.9y[n−1] = x[n]. Find its impulse response h[n].
(b) Let y[n] − 1.2y[n−1] + 0.32y[n−2] = x[n]. Find its impulse response h[n].
Answers: (a) (0.9)^n (b) 2(0.8)^n − (0.4)^n
3.5.4 Impulse Response for the General Case
To find the impulse response of the general system described by
y[n] + A₁y[n−1] + A₂y[n−2] + ··· + A_N y[n−N] = B₀x[n] + B₁x[n−1] + ··· + B_M x[n−M]    (3.25)
we use linearity and superposition as follows:
1. Find the impulse response h₀[n] of the single-input system
y₀[n] + A₁y₀[n−1] + A₂y₀[n−2] + ··· + A_N y₀[n−N] = x[n]    (3.26)
by solving the homogeneous equation
h₀[n] + A₁h₀[n−1] + ··· + A_N h₀[n−N] = 0,   h₀[0] = 1 (all other conditions zero)    (3.27)
2. Then, invoke superposition to find the actual impulse response h[n] as
h[n] = B₀h₀[n] + B₁h₀[n−1] + ··· + B_M h₀[n−M]    (3.28)
REVIEW PANEL 3.11
Impulse Response of y[n] + A₁y[n−1] + ··· + A_N y[n−N] = B₀x[n] + B₁x[n−1] + ··· + B_M x[n−M]
1. Find h₀[n] from h₀[n] + A₁h₀[n−1] + ··· + A_N h₀[n−N] = 0 with just h₀[0] = 1 (all others zero).
2. Find h[n] (using superposition) as h[n] = B₀h₀[n] + B₁h₀[n−1] + ··· + B_M h₀[n−M].
EXAMPLE 3.12 (Impulse Response for the General Case)
(a) Find the impulse response of y[n] − 0.6y[n−1] = 4x[n] and of y[n] − 0.6y[n−1] = 3x[n+1] − x[n].
We start with the single-input system y₀[n] − 0.6y₀[n−1] = x[n].
Its impulse response h₀[n] was found in the previous example as h₀[n] = (0.6)^n u[n].
Then, for the first system, h[n] = 4h₀[n] = 4(0.6)^n u[n].
For the second system, h[n] = 3h₀[n+1] − h₀[n] = 3(0.6)^{n+1}u[n+1] − (0.6)^n u[n].
This may also be expressed as h[n] = 3δ[n+1] + 0.8(0.6)^n u[n].
Comment: The general approach can be used for causal or noncausal systems.
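The superposition step for the noncausal second system can be checked numerically. This sketch (not from the text) builds h[n] from h₀[n] and confirms the simplified form:

```python
# Sketch: build the impulse response of y[n] - 0.6 y[n-1] = 3x[n+1] - x[n]
# from the single-input response h0[n] = (0.6)^n u[n] by superposition.

def h0(n):
    return 0.6**n if n >= 0 else 0.0

def h(n):
    return 3*h0(n+1) - h0(n)            # superposition of shifted h0 terms

assert h(-1) == 3.0                     # the 3*delta[n+1] term
for n in range(10):
    assert abs(h(n) - 0.8*0.6**n) < 1e-12   # matches 0.8 (0.6)^n u[n]
```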
(b) Let y[n] − (1/6)y[n−1] − (1/6)y[n−2] = 2x[n] − 6x[n−1].
To find h[n], start with the single-input system y[n] − (1/6)y[n−1] − (1/6)y[n−2] = x[n].
Its impulse response h₀[n] was found in the previous example as
h₀[n] = [0.6(1/2)^n + 0.4(−1/3)^n]u[n]
The impulse response of the given system is h[n] = 2h₀[n] − 6h₀[n−1]. This gives
h[n] = [1.2(1/2)^n + 0.8(−1/3)^n]u[n] − [3.6(1/2)^{n−1} + 2.4(−1/3)^{n−1}]u[n−1]
Comment: This may be simplified to h[n] = [−6(1/2)^n + 8(−1/3)^n]u[n].
DRILL PROBLEM 3.15
(a) Let y[n] − 0.5y[n−1] = 2x[n] + x[n−1]. Find its impulse response h[n].
(b) Let y[n] − 1.2y[n−1] + 0.32y[n−2] = x[n] + 2x[n−1]. Find its impulse response h[n].
Answers: (a) 2(0.5)^n u[n] + (0.5)^{n−1}u[n−1] = 4(0.5)^n u[n] − 2δ[n] (b) 7(0.8)^n − 6(0.4)^n (simplified)
3.5.5 Recursive Forms for Nonrecursive Digital Filters
The terms FIR and nonrecursive are synonymous. A nonrecursive filter always has a finite impulse response. The terms IIR and recursive are often, but not always, synonymous. Not all recursive filters have an infinite impulse response. In fact, nonrecursive filters can always be implemented in recursive form if desired. A recursive filter may also be approximated by a nonrecursive filter of the form y[n] = B₀x[n] + B₁x[n−1] + ··· + B_M x[n−M] if we know all the past inputs. In general, this implies M → ∞.
EXAMPLE 3.13 (Recursive Forms for Nonrecursive Filters)
Consider the nonrecursive filter y[n] = x[n] + x[n−1] + x[n−2].
Its impulse response is h[n] = δ[n] + δ[n−1] + δ[n−2].
To cast this filter in recursive form, we compute y[n−1] = x[n−1] + x[n−2] + x[n−3].
Upon subtraction from the original equation, we obtain the recursive form
y[n] − y[n−1] = x[n] − x[n−3]
This describes a recursive formulation for the given nonrecursive, FIR filter.
DRILL PROBLEM 3.16
Consider the nonrecursive filter y[n] = x[n] + x[n−1] + x[n−2]. What recursive filter do you obtain by computing y[n] − y[n−2]? Does the impulse response of the recursive filter match the impulse response of the nonrecursive filter?
Answers: y[n] − y[n−2] = x[n] + x[n−1] − x[n−3] − x[n−4]; the impulse responses match.
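The drill's claim that the impulse responses match can be confirmed by running the recursive form with an impulse input. A short Python sketch (not from the text):

```python
# Sketch: the recursive form y[n] = y[n-2] + x[n] + x[n-1] - x[n-3] - x[n-4]
# should reproduce the FIR impulse response {1, 1, 1}.

N = 10
x = [1.0] + [0.0]*(N-1)            # unit impulse
def xs(n):                         # x[n], zero outside the stored range
    return x[n] if 0 <= n < N else 0.0

y = []
for n in range(N):
    prev2 = y[n-2] if n >= 2 else 0.0   # relaxed: y[-1] = y[-2] = 0
    y.append(prev2 + xs(n) + xs(n-1) - xs(n-3) - xs(n-4))

assert y == [1.0, 1.0, 1.0] + [0.0]*(N-3)   # finite impulse response, as claimed
```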
3.5.6 The Response of Anti-Causal Systems
So far, we have focused on the response of systems described by difference equations to causal inputs. However, by specifying an anti-causal input and appropriate initial conditions, the same difference equation can be solved backward in time for n < 0 to generate an anti-causal response. For example, to find the causal impulse response h[n], we assume that h[n] = 0, n < 0; but to find the anti-causal impulse response h_A[n] of the same system, we would assume that h[n] = 0, n > 0. This means that the same system can be described by two different impulse response functions. How we distinguish between them is easily handled using the z-transform (described in the next chapter).
EXAMPLE 3.14 (Causal and Anti-Causal Impulse Response)
(a) Find the causal impulse response of the first-order system y[n] − 0.4y[n−1] = x[n].
For the causal impulse response, we assume h[n] = 0, n < 0, and solve for h[n], n > 0, by recursion from h[n] = 0.4h[n−1] + δ[n]. With h[0] = 0.4h[−1] + δ[0] = 1 and δ[n] = 0, n ≠ 0, we find
h[1] = 0.4h[0] = 0.4   h[2] = 0.4h[1] = (0.4)²   h[3] = 0.4h[2] = (0.4)³   etc.
The general form is easily discerned as h[n] = (0.4)^n and is valid for n ≥ 0.
Comment: The causal impulse response of y[n] − αy[n−1] = x[n] is h[n] = α^n u[n].
(b) Find the anti-causal impulse response of the first-order system y[n] − 0.4y[n−1] = x[n].
For the anti-causal impulse response, we assume h[n] = 0, n ≥ 0, and solve for h[n], n < 0, by recursion from h[n−1] = 2.5(h[n] − δ[n]). With h[−1] = 2.5(h[0] − δ[0]) = −2.5, and δ[n] = 0, n ≠ 0, we find
h[−2] = 2.5h[−1] = −(2.5)²   h[−3] = 2.5h[−2] = −(2.5)³   h[−4] = 2.5h[−3] = −(2.5)⁴   etc.
The general form is easily discerned as h[n] = −(2.5)^{−n} = −(0.4)^n and is valid for n ≤ −1.
Comment: The anti-causal impulse response of y[n] − αy[n−1] = x[n] is h[n] = −α^n u[−n−1].
DRILL PROBLEM 3.17
Let y[n] − 0.5y[n−1] = 2x[n]. Find its causal impulse response and anti-causal impulse response.
Answers: h_c[n] = 2(0.5)^n u[n], h_ac[n] = −2(0.5)^n u[−n−1]
3.6 System Representation in Various Forms
An LTI system may be described by a difference equation, impulse response, or input-output data. All three are related and, given one form, we should be able to access the others. We have already studied how to obtain the impulse response from a difference equation. Here we shall describe how to obtain the system difference equation from its impulse response or from input-output data.
3.6.1 Difference Equations from the Impulse Response
In the time domain, the process of finding the difference equation from the impulse response is tedious. It is much more easily implemented by other methods (such as the z-transform). The central idea is that the terms in the impulse response indicate the natural response (and hence the roots of the characteristic equation), from which the difference equation may be reconstructed if we can describe the combination of the impulse response and its delayed versions by a sum of impulses. The process is best illustrated by some examples.
EXAMPLE 3.15 (Difference Equations from the Impulse Response)
(a) Let h[n] = u[n]. Then h[n−1] = u[n−1], and h[n] − h[n−1] = u[n] − u[n−1] = δ[n].
The difference equation corresponding to h[n] − h[n−1] = δ[n] is simply y[n] − y[n−1] = x[n].
(b) Let h[n] = 3(0.6)^n u[n]. This suggests a difference equation whose left-hand side is y[n] − 0.6y[n−1]. We then set up h[n] − 0.6h[n−1] = 3(0.6)^n u[n] − 1.8(0.6)^{n−1}u[n−1]. This simplifies to
h[n] − 0.6h[n−1] = 3(0.6)^n u[n] − 3(0.6)^n u[n−1] = 3(0.6)^n (u[n] − u[n−1]) = 3(0.6)^n δ[n] = 3δ[n]
The difference equation corresponding to h[n] − 0.6h[n−1] = 3δ[n] is y[n] − 0.6y[n−1] = 3x[n].
(c) Let h[n] = 2(0.5)^n u[n] + (−0.5)^n u[n]. This suggests a characteristic equation (z − 0.5)(z + 0.5).
The left-hand side of the difference equation is thus y[n] − 0.25y[n−2]. We now compute
h[n] − 0.25h[n−2] = 2(0.5)^n u[n] + (−0.5)^n u[n] − 0.25(2(0.5)^{n−2}u[n−2] + (−0.5)^{n−2}u[n−2])
This simplifies to
h[n] − 0.25h[n−2] = [2(0.5)^n + (−0.5)^n](u[n] − u[n−2])
Since u[n] − u[n−2] has just two samples (at n = 0 and n = 1), it equals δ[n] + δ[n−1], and we get
h[n] − 0.25h[n−2] = [2(0.5)^n + (−0.5)^n](δ[n] + δ[n−1])
This simplifies further to h[n] − 0.25h[n−2] = 3δ[n] + 0.5δ[n−1].
From this result, the difference equation is y[n] − 0.25y[n−2] = 3x[n] + 0.5x[n−1].
DRILL PROBLEM 3.18
(a) Set up the difference equation corresponding to the impulse response h[n] = 2(−0.5)^n u[n].
(b) Set up the difference equation corresponding to the impulse response h[n] = (0.5)^n u[n] + δ[n].
Answers: (a) y[n] + 0.5y[n−1] = 2x[n] (b) y[n] − 0.5y[n−1] = 2x[n] − 0.5x[n−1]
3.6.2 Difference Equations from Input-Output Data
The difference equation of LTI systems may also be obtained from input-output data. The response of the system described by y[n] = 3x[n] + 2x[n−1] to x[n] = δ[n] is y[n] = 3δ[n] + 2δ[n−1]. Turning things around, the input δ[n] and output 3δ[n] + 2δ[n−1] then correspond to the difference equation y[n] = 3x[n] + 2x[n−1]. Note how the coefficients of the input match the output data (and vice versa).
REVIEW PANEL 3.12
Difference Equation of an LTI System from Input-Output Information

…

y[n] = (1/L) Σ_{k=0}^{L−1} x[n−k]    (3.29)
This is an L-point FIR filter whose impulse response is
h[n] = (1/L){1, 1, 1, . . . , 1, 1}   (L samples)    (3.30)
Figure 3.4 shows a noisy sinusoid and the output when it is passed through two moving average filters of different lengths. We see that the output of the 10-point filter is indeed a smoother version of the noisy input. Note that the output is a delayed version of the input and shows a start-up transient for about 10 samples before the filter output settles down. This is typical of all filters. A filter of longer length will result in a longer transient. It should also produce better smoothing. While that may be generally true, we see that the 50-point averaging filter produces an output that is essentially zero after the initial transient. The reason is that the sinusoid also has a period of 50 samples, and its 50-point running average is thus zero! This suggests that while averaging filters of longer lengths usually do a better job of signal smoothing, the peculiarities of a given signal may not always result in a useful output.
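The behavior described here is easy to reproduce. This sketch (not from the text) applies 10-point and 50-point moving averages to a clean period-50 sinusoid:

```python
# Sketch: an L-point moving average of a sinusoid whose period equals L
# averages out to (nearly) zero, as the text describes for the 50-point filter.
import math

def moving_average(x, L):
    """y[n] = (1/L) * sum of the current and previous L-1 input samples."""
    return [sum(x[max(0, n-L+1):n+1]) / L for n in range(len(x))]

x = [math.sin(2*math.pi*n/50) for n in range(150)]   # period-50 sinusoid
y10 = moving_average(x, 10)   # smooths but preserves the sinusoid
y50 = moving_average(x, 50)   # one full period per average: output ~ 0

assert all(abs(v) < 1e-9 for v in y50[49:])   # zero after the start-up transient
assert max(abs(v) for v in y10[9:]) > 0.5     # 10-point output still oscillates
```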
Figure 3.4 The response of two averaging filters to a noisy sinusoid: (a) periodic signal, (b) noisy signal, (c) output of 10-point averaging filter, (d) output of 50-point averaging filter
3.7.2 Inverse Systems
Inverse systems are quite important in practical applications. For example, a measurement system (such as a transducer) invariably affects (distorts) the signal being measured. To undo the effects of the distortion requires a system that acts as the inverse of the measurement system. If an input x[n] to a system results in an output y[n], then its inverse is a system that recovers the signal x[n] in response to the input y[n], as illustrated in Figure 3.5. For invertible LTI systems described by difference equations, finding the inverse system is as easy as switching the input and output variables, as illustrated in the following example.
Figure 3.5 A system and its inverse
REVIEW PANEL 3.13
Finding the Inverse of an LTI System? Try Switching the Input and Output
System: y[n] + 2y[n−1] = 3x[n] + 4x[n−1]   Inverse system: 3y[n] + 4y[n−1] = x[n] + 2x[n−1]
How to find the inverse from the impulse response h[n]? Find the difference equation first.
EXAMPLE 3.16 (Inverse Systems)
(a) Refer to the interconnected system shown in Figure E3.16A(1). Find the difference equation of the inverse system, sketch a realization of each system, and find the output of each system.
Figure E3.16A(1) The interconnected system for Example 3.16(a)
The original system is described by y[n] = x[n] − 0.5x[n−1]. By switching the input and output, the inverse system is described by y[n] − 0.5y[n−1] = x[n]. The realization of each system is shown in Figure E3.16A(2). Are they related? Yes. If you flip the realization of the echo system end-on-end and change the sign of the feedback signal, you get the inverse realization.
Figure E3.16A(2) Realization of the system and its inverse for Example 3.16(a)
The response g[n] of the first system is simply
g[n] = (4δ[n] + 4δ[n−1]) − (2δ[n−1] + 2δ[n−2]) = 4δ[n] + 2δ[n−1] − 2δ[n−2]
If we let the output of the second system be y₀[n], we have
y₀[n] = 0.5y₀[n−1] + 4δ[n] + 2δ[n−1] − 2δ[n−2]
Recursive solution gives
y₀[0] = 0.5y₀[−1] + 4δ[0] = 4   y₀[1] = 0.5y₀[0] + 2δ[0] = 4   y₀[2] = 0.5y₀[1] − 2δ[0] = 0
All subsequent values of y₀[n] are zero since the input terms are zero for n > 2. The output is thus y₀[n] = 4δ[n] + 4δ[n−1], which is identical to the input of the first system, as required of an inverse system.

…

(b) Find the response of the system y[n] − y[n−3] = 2δ[n] + 3δ[n−2].
Answers: (a) y[n] − y[n−2] = x[n] + 2x[n−1] (b) Periodic (N = 3) with period y₁[n] = {2, 0, 3}
3.7.5 How Difference Equations Arise
We conclude with some examples of difference equations, which arise in many ways in various fields ranging from mathematics and engineering to economics and biology.
1. y[n] = y[n−1] + n, y[−1] = 1
This difference equation describes the number of regions y[n] into which n lines divide a plane if no two lines are parallel and no three lines intersect.
2. y[n+1] = (n+1)(y[n] + 1), y[0] = 0
This difference equation describes the number of multiplications y[n] required to compute the determinant of an n × n matrix using cofactors.
3. y[n+2] = y[n+1] + y[n], y[0] = 0, y[1] = 1
This difference equation generates the Fibonacci sequence y[n] = 0, 1, 1, 2, 3, 5, . . ., where each number is the sum of the previous two.
4. y[n+2] − 2xy[n+1] + y[n] = 0, y[0] = 1, y[1] = x
This difference equation generates the Chebyshev polynomials T_n(x) = y[n] of the first kind. We find that T₂(x) = y[2] = 2x² − 1, T₃(x) = y[3] = 4x³ − 3x, etc. Similar difference equations called recurrence relations form the basis for generating other polynomial sets.
5. y[n+1] = αy[n](1 − y[n])
This difference equation, called a logistic equation in biology, is used to model the growth of populations that reproduce at discrete intervals.
6. y[n+1] = (1 + α)y[n] + d[n]
This difference equation describes the bank balance y[n] at the beginning of the nth compounding period (day, month, etc.) if the interest rate is α percent per compounding period and d[n] is the amount deposited in that period.
3.8 Discrete Convolution
Discrete-time convolution is a method of finding the zero-state response of relaxed linear time-invariant (LTI) systems. It is based on the concepts of linearity and time invariance and assumes that the system information is known in terms of its impulse response h[n]. In other words, if the input is δ[n], a unit sample at the origin n = 0, the system response is h[n]. Now, if the input is x[0]δ[n], a scaled impulse at the origin, the response is x[0]h[n] (by linearity). Similarly, if the input is the shifted impulse x[1]δ[n−1] at n = 1, the response is x[1]h[n−1] (by time invariance). The response to the shifted impulse x[k]δ[n−k] at n = k is x[k]h[n−k] (by linearity and time invariance). Since an arbitrary input x[n] is simply a sequence of samples, it can be described by a sum of scaled and shifted impulses:
x[n] = Σ_{k=−∞}^{∞} x[k]δ[n−k]    (3.36)
By superposition, the response to x[n] is the sum of scaled and shifted versions of the impulse response:
y[n] = Σ_{k=−∞}^{∞} x[k]h[n−k] = x[n] ⋆ h[n]    (3.37)
This defines the convolution operation, which is also called linear convolution or the convolution sum, and is denoted by y[n] = x[n] ⋆ h[n] (or by x[n] * h[n] in the figures) in this book. The order in which we perform the operation does not matter, and we can interchange the arguments of x and h without affecting the result.
Σ_{k=−∞}^{∞} x[k]h[n−k] = Σ_{k=−∞}^{∞} x[n−k]h[k]   or   x[n] ⋆ h[n] = h[n] ⋆ x[n]    (3.38)
REVIEW PANEL 3.14
Convolution Yields the Zero-State Response of an LTI System
Output y[n] = convolution of the input x[n] with the impulse response h[n]
Notation: We use x[n] ⋆ h[n] to denote Σ_{k=−∞}^{∞} x[k]h[n−k]
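Written directly from the defining sum, a convolution routine takes only a few lines. This is an illustrative Python sketch, not an implementation from the text:

```python
# Sketch: the convolution sum y[n] = sum_k x[k] h[n-k] for finite
# sequences, both assumed to start at n = 0.

def convolve(x, h):
    """Linear convolution of two lists."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):       # only overlapping samples contribute
                y[n] += x[k] * h[n - k]
    return y

# u[n] * u[n] = r[n+1]: the ramp 1, 2, 3, ... (over the fully overlapped range)
u = [1.0] * 6
assert convolve(u, u)[:6] == [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```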
3.8.1 Analytical Evaluation of Discrete Convolution
If x[n] and h[n] are described by simple enough analytical expressions, the convolution sum can be implemented quite readily to obtain closed-form results. While evaluating the convolution sum, it is useful to
keep in mind that x[k] and h[n−k] are functions of the summation variable k. For causal signals of the form x[n]u[n] and h[n]u[n], the summation involves step functions of the form u[k] and u[n−k]. Since u[k] = 0, k < 0 and u[n−k] = 0, k > n, these can be used to simplify the lower and upper summation limits to k = 0 and k = n, respectively.
EXAMPLE 3.19 (Analytical Evaluation of Discrete Convolution)
(a) Let x[n] = h[n] = u[n]. Then x[k] = u[k] and h[n−k] = u[n−k]. The lower limit on the convolution sum simplifies to k = 0 (because u[k] = 0, k < 0), the upper limit to k = n (because u[n−k] = 0, k > n), and we get
y[n] = Σ_{k=−∞}^{∞} u[k]u[n−k] = Σ_{k=0}^{n} 1 = (n+1)u[n] = r[n+1]
Note that (n+1)u[n] also equals r[n+1], and thus u[n] ⋆ u[n] = r[n+1].
(b) Let x[n] = h[n] = a^n u[n]. Then x[k] = a^k u[k] and h[n−k] = a^{n−k}u[n−k]. The lower limit on the convolution sum simplifies to k = 0 (because u[k] = 0, k < 0), the upper limit to k = n (because u[n−k] = 0, k > n), and we get
y[n] = Σ_{k=−∞}^{∞} a^k a^{n−k} u[k]u[n−k] = Σ_{k=0}^{n} a^k a^{n−k} = a^n Σ_{k=0}^{n} 1 = (n+1)a^n u[n]
The argument of the step function u[n] is based on the fact that the upper limit on the summation must exceed or equal the lower limit (i.e., n ≥ 0).
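The closed form in part (b) can be spot-checked numerically for a particular value of a. A sketch (not from the text):

```python
# Sketch: check the closed form a^n u[n] * a^n u[n] = (n+1) a^n u[n]
# numerically for a = 0.8.

a = 0.8
x = [a**n for n in range(20)]
# Convolution sum with causal limits k = 0 ... n:
y = [sum(x[k] * x[n-k] for k in range(n+1)) for n in range(20)]

for n in range(20):
    assert abs(y[n] - (n+1)*a**n) < 1e-12
```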
(c) Let x[n] = u[n−1] and h[n] = α^n u[n−1]. Then
u[n−1] ⋆ α^n u[n−1] = Σ_{k=−∞}^{∞} α^k u[k−1]u[n−1−k] = Σ_{k=1}^{n−1} α^k = [(α − α^n)/(1 − α)]u[n−2]
Here, we used the closed-form result for the finite summation. The argument of the step function u[n−2] is dictated by the fact that the upper limit on the summation must exceed or equal the lower limit (i.e., n−1 ≥ 1 or n ≥ 2).
(d) Let x[n] = (0.8)^n u[n] and h[n] = (0.4)^n u[n]. Then
y[n] = Σ_{k=−∞}^{∞} (0.8)^k u[k](0.4)^{n−k}u[n−k] = Σ_{k=0}^{n} (0.8)^k (0.4)^{n−k} = (0.4)^n Σ_{k=0}^{n} 2^k
Using the closed-form result for the sum, we get y[n] = (0.4)^n [(1 − 2^{n+1})/(1 − 2)] = (0.4)^n (2^{n+1} − 1)u[n].
This may also be expressed as y[n] = [2(0.8)^n − (0.4)^n]u[n].
(e) Let x[n] = nu[n] and h[n] = a^{−n}u[n−1], a < 1. With h[n−k] = a^{−(n−k)}u[n−1−k] and x[k] = ku[k], the lower and upper limits on the convolution sum become k = 0 and k = n−1. Then
y[n] = Σ_{k=0}^{n−1} k a^{−(n−k)} = a^{−n} Σ_{k=0}^{n−1} k a^k = [a^{−n+1}/(1 − a)²][1 − na^{n−1} + (n−1)a^n]u[n−1]
Here, we used known results for the finite summation to generate the closed-form solution.
DRILL PROBLEM 3.23
(a) Let x[n] = (0.8)^n u[n] and h[n] = (0.4)^n u[n−1]. Find their convolution.
(b) Let x[n] = (0.8)^n u[n−1] and h[n] = (0.4)^n u[n]. Find their convolution.
(c) Let x[n] = (0.8)^n u[n−1] and h[n] = (0.4)^n u[n−1]. Find their convolution.
Answers: (a) [(0.8)^n − (0.4)^n]u[n−1] (b) 2[(0.8)^n − (0.4)^n]u[n−1] (c) [(0.8)^n − 2(0.4)^n]u[n−2]
3.9 Convolution Properties
Many of the properties of discrete convolution are based on linearity and time invariance. For example, if x[n] (or h[n]) is shifted by n₀, so is y[n]. Thus, if y[n] = x[n] ⋆ h[n], then
x[n−n₀] ⋆ h[n] = x[n] ⋆ h[n−n₀] = y[n−n₀]    (3.39)
The sums of the samples in x[n], h[n], and y[n] are related by
Σ_{n=−∞}^{∞} y[n] = [Σ_{n=−∞}^{∞} x[n]][Σ_{n=−∞}^{∞} h[n]]    (3.40)
For causal systems (h[n] = 0, n < 0) and causal signals (x[n] = 0, n < 0), y[n] is also causal. Thus,
y[n] = x[n] ⋆ h[n] = h[n] ⋆ x[n] = Σ_{k=0}^{n} x[k]h[n−k] = Σ_{k=0}^{n} h[k]x[n−k]    (3.41)
An extension of this result is that the convolution of two left-sided signals is also left-sided and the convolution of two right-sided signals is also right-sided.
EXAMPLE 3.20 (Properties of Convolution)
(a) Here are two useful convolution results that are readily found from the defining relation:
δ[n] ⋆ x[n] = x[n]    δ[n] ⋆ δ[n] = δ[n]
(b) We find y[n] = u[n] ⋆ x[n]. Since the step response is the running sum of the impulse response, the convolution of a signal x[n] with a unit step is the running sum of the signal x[n]:
x[n] ⋆ u[n] = Σ_{k=−∞}^{n} x[k]
(c) We find y[n] = rect(n/2N) ⋆ rect(n/2N), where rect(n/2N) = u[n+N] − u[n−N−1].
The convolution contains four terms:
y[n] = u[n+N] ⋆ u[n+N] − u[n+N] ⋆ u[n−N−1] − u[n−N−1] ⋆ u[n+N] + u[n−N−1] ⋆ u[n−N−1]
Using the result u[n] ⋆ u[n] = r[n+1] and the shifting property, we obtain
y[n] = r[n+2N+1] − 2r[n] + r[n−2N−1] = (2N+1)tri(n/(2N+1))
The convolution of two rect functions (with identical arguments) is thus a tri function.
DRILL PROBLEM 3.24
(a) Let x[n] = (0.8)^{n+1}u[n+1] and h[n] = (0.4)^{n−1}u[n−2]. Find their convolution.
(b) Let x[n] = (0.8)^n u[n] and h[n] = (0.4)^{n−1}u[n−2]. Find their convolution.
Answers: (a) [(0.8)^n − (0.4)^n]u[n−1] (b) [(0.8)^{n−1} − (0.4)^{n−1}]u[n−2]
3.10 Convolution of Finite Sequences
In practice, we often deal with sequences of finite length, and their convolution may be found by several methods. The convolution y[n] of two finite-length sequences x[n] and h[n] is also of finite length and is subject to the following rules, which serve as useful consistency checks:
1. The starting index of y[n] equals the sum of the starting indices of x[n] and h[n].
2. The ending index of y[n] equals the sum of the ending indices of x[n] and h[n].
3. The length L_y of y[n] is related to the lengths L_x and L_h of x[n] and h[n] by L_y = L_x + L_h − 1.
3.10.1 The Sum-by-Column Method
This method is based on the idea that the convolution y[n] equals the sum of the (shifted) impulse responses due to each of the impulses that make up the input x[n]. To find the convolution, we set up a row of index values beginning with the starting index of the convolution, with h[n] and x[n] below it. We regard x[n] as a sequence of weighted shifted impulses. Each element (impulse) of x[n] generates a shifted impulse response (its product with h[n]) starting at its index (to indicate the shift). Summing the responses (by columns) gives the discrete convolution. Note that neither sequence is folded. It is better (if only to save paper) to let x[n] be the shorter sequence. The starting index (and the marker location corresponding to n = 0) for the convolution y[n] is found from the starting indices of x[n] and h[n].
REVIEW PANEL 3.15
Discrete Convolution Using the Sum-by-Column Method
1. Line up the sequence x[n] below the sequence h[n].
2. Line up with each sample of x[n] the product of the entire array h[n] with that sample of x[n].
3. Sum the columns of the (successively shifted) arrays to generate the convolution sequence.
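The sum-by-column procedure translates directly into code. This sketch (not from the text) checks it on the sequences h[n] = {2, 5, 0, 4} and x[n] = {4, 1, 3} used in the worked example:

```python
# Sketch: the sum-by-column method -- each sample of x launches a shifted,
# scaled copy of h, and the columns are summed.

def conv_by_columns(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):        # each input sample ...
        for j, hj in enumerate(h):    # ... generates a shifted copy of h
            y[i + j] += xi * hj       # summed column by column
    return y

assert conv_by_columns([4, 1, 3], [2, 5, 0, 4]) == [8, 22, 11, 31, 4, 12]
```

Note that, as the panel says, neither sequence is folded; the shift is carried entirely by the index i + j.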
EXAMPLE 3.21 (Convolution of Finite-Length Signals)
(a) An FIR (finite impulse response) filter has an impulse response given by h[n] = {1, 2, 2, 3}. Find its response y[n] to the inputs x[n] = {0, 4} and x[n] = {4, 1, 3}.

…

We note that the convolution starts at n = −3 and use this to set up the index array and generate the convolution as follows:
n        −3   −2   −1    0    1    2
h[n]:     2    5    0    4
x[n]:     4    1    3
          8   20    0   16
               2    5    0    4
                    6   15    0   12
y[n]:     8   22   11   31    4   12
The marker is placed by noting that the convolution starts at n = −3, and we get
y[n] = {8, 22, 11, 31, 4, 12}, with the sample 31 falling at n = 0.
(c) (Response of a Moving-Average Filter) Let x[n] = {2, 4, 6, 8, 10, 12, . . .}.
What system will result in the response y[n] = {1, 3, 5, 7, 9, 11, . . .}?
At each instant, the response is the average of the input and its previous value. This system describes an averaging or moving-average filter. Its difference equation is simply y[n] = 0.5(x[n] + x[n−1]).
Its impulse response is thus h[n] = 0.5(δ[n] + δ[n−1]), or h[n] = {0.5, 0.5}.
Using discrete convolution, we find the response as follows:
x:   2   4   6   8   10   12   . . .
h:  1/2  1/2
     1   2   3   4    5    6   . . .
         1   2   3    4    5    6   . . .
y:   1   3   5   7    9   11   . . .
This result is indeed the averaging operation we expected.
DRILL PROBLEM 3.25
(a) Let x[n] = {1, 4, 0, 2} and h[n] = {…}. Find their convolution.
(b) Let x[n] = {1, 3} and h[n] = {2, …}. Find their convolution.
Answers: (a) {1, 6, 9, 6, 4, 2} (b) {2, 9, 5, 3, 2, 3}
3.10.2 The Fold, Shift, Multiply, and Sum Concept
The convolution sum may also be interpreted as follows. We fold x[n] and shift the folded sequence to line up its last element with the first element of h[n]. We then successively shift it (to the right) past h[n], one index at a time, and find the convolution at each index as the sum of the pointwise products of the overlapping samples. One method of computing y[n] is to list the values of the folded function on a strip of paper and slide it along the stationary function, to better visualize the process. This technique has prompted the name sliding strip method. We simulate this method by showing the successive positions of the stationary and folded sequences along with the resulting products, the convolution sum, and the actual convolution.
EXAMPLE 3.22 (Convolution by the Sliding Strip Method)
Find the convolution of h[n] = {2, 5, 0, 4} and x[n] = {4, 1, 3}.
Since both sequences start at n = 0, the folded sequence is x[−k] = {3, 1, 4}.
We line up the folded sequence below h[n] to begin overlap and shift it successively, summing the product sequence as we go, to obtain the discrete convolution. The results are computed in Figure E3.22.
The discrete convolution is y[n] = {8, 22, 11, 31, 4, 12}.

…

Convolution Corresponds to Polynomial Multiplication
Example: {1, 1, 3} ⋆ {1, 0, 2} = {1, 1, 5, 2, 6}   (x² + x + 3)(x² + 2) = x⁴ + x³ + 5x² + 2x + 6
Zero Insertion of Both Sequences Leads to Zero Insertion of the Convolution
Example: If {1, 2} ⋆ {3, 1, 4} = {3, 7, 6, 8},
then {1, 0, 0, 2} ⋆ {3, 0, 0, 1, 0, 0, 4} = {3, 0, 0, 7, 0, 0, 6, 0, 0, 8}.
Zero-Padding of One Sequence Leads to Zero-Padding of the Convolution
Example: If x[n] ⋆ h[n] = y[n], then {0, 0, x[n], 0, 0} ⋆ {h[n], 0} = {0, 0, y[n], 0, 0, 0}.
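The correspondence between convolution and polynomial multiplication can be checked directly; convolving coefficient sequences is exactly how polynomial products are computed. A sketch (not from the text):

```python
# Sketch: convolution of coefficient sequences is polynomial multiplication,
# e.g. (x + 2)(3x^2 + x + 4) has coefficients {1,2} conv {3,1,4}.

def conv(a, b):
    y = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            y[i + j] += ai * bj
    return y

# Coefficients in descending powers:
assert conv([1, 2], [3, 1, 4]) == [3, 7, 6, 8]   # 3x^3 + 7x^2 + 6x + 8
```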
EXAMPLE 3.23 (Polynomial Multiplication, Zero Insertion, Zero-Padding)
(a) Let h[n] = {2, 5, 0, 4} and x[n] = {4, 1, 3}, with zero-interpolated versions h₁[n] = {2, 0, 5, 0, 0, 0, 4} and x₁[n] = {4, 0, 1, 0, 3}.
To find their convolution, we set up the polynomials
h₁(z) = 2z⁶ + 5z⁴ + 0z² + 4   x₁(z) = 4z⁴ + 1z² + 3
Their product is y₁(z) = 8z¹⁰ + 22z⁸ + 11z⁶ + 31z⁴ + 4z² + 12.
The convolution is then y₁[n] = {8, 0, 22, 0, 11, 0, 31, 0, 4, 0, 12}, a zero-interpolated version of the convolution of the original sequences.
(b) Let h₂[n] = {2, 5, 0, 4, 0, 0} and x₂[n] = {4, 1, 3, 0} be zero-padded versions of h[n] and x[n].
To find their convolution, we set up the polynomials
h₂(z) = 2z⁵ + 5z⁴ + 0z³ + 4z² + 0z + 0   x₂(z) = 4z³ + 1z² + 3z + 0
Their product is y₂(z) = 8z⁸ + 22z⁷ + 11z⁶ + 31z⁵ + 4z⁴ + 12z³.
The convolution is then y₂[n] = {8, 22, 11, 31, 4, 12, 0, 0, 0}, a zero-padded version of the convolution of the original sequences.
DRILL PROBLEM 3.26
Given that {1, 0, 2} ⋆ {1, 3} = {1, 3, 2, 6}, find the convolution of each of the following pairs.
(a) x[n] = {1, 0, 0, 0, 2} and h[n] = {1, 0, 3}
(b) x[n] = {1, 0, 2, 0} and h[n] = {1, 3, 0, 0}
(c) x[n] = {0, 0, 1, 0, 2} and h[n] = {1, 3, 0}
Answers: (a) {1, 0, 3, 0, 2, 0, 6} (b) {1, 3, 2, 6, 0, 0, 0} (c) {0, 0, 1, 3, 2, 6, 0}
3.10.4 Impulse Response of LTI Systems in Cascade and Parallel
Consider the ideal cascade of two LTI systems shown in Figure 3.7. The response of the first system is y₁[n] = x[n] ⋆ h₁[n]. The response y[n] of the second system is
y[n] = y₁[n] ⋆ h₂[n] = (x[n] ⋆ h₁[n]) ⋆ h₂[n] = x[n] ⋆ (h₁[n] ⋆ h₂[n])    (3.42)
If we wish to replace the cascaded system by an equivalent LTI system with impulse response h[n] such that y[n] = x[n] ⋆ h[n], it follows that h[n] = h₁[n] ⋆ h₂[n]. Generalizing this result, the impulse response h[n] of N ideally cascaded LTI systems is simply the convolution of the N individual impulse responses:
h[n] = h₁[n] ⋆ h₂[n] ⋆ ··· ⋆ h_N[n]   (for a cascade combination)    (3.43)
If the h_k[n] are energy signals, the order of cascading is unimportant.
Figure 3.7 Cascaded and parallel systems and their equivalents
The overall impulse response of LTI systems in parallel equals the sum of the individual impulse responses,
as shown in Figure 3.7:
h_P[n] = h1[n] + h2[n] + ⋯ + hN[n]   (for a parallel combination)   (3.44)
REVIEW PANEL 3.17
Impulse Response of N Interconnected Discrete LTI Systems
In cascade: Convolve the impulse responses: h_C[n] = h1[n] ⋆ h2[n] ⋆ ⋯ ⋆ hN[n]
In parallel: Add the impulse responses: h_P[n] = h1[n] + h2[n] + ⋯ + hN[n]
EXAMPLE 3.24 (Interconnected Systems)
Consider the interconnected system of Figure E3.24. Find its overall impulse response and the output.
Comment on the results.
[Figure E3.24 The interconnected system of Example 3.24: the input x[n] feeds a first system whose output is x[n] − 0.5x[n − 1], in cascade with a second system whose impulse response is h[n] = (0.5)^n u[n], producing the output y[n].]
The impulse response of the first system is h1[n] = δ[n] − 0.5δ[n − 1]. The overall impulse response h_C[n] is given by the convolution
h_C[n] = (δ[n] − 0.5δ[n − 1]) ⋆ (0.5)^n u[n] = (0.5)^n u[n] − 0.5(0.5)^{n−1} u[n − 1]
This simplifies to
h_C[n] = (0.5)^n (u[n] − u[n − 1]) = (0.5)^n δ[n] = δ[n]
What this means is that the overall system output equals the applied input. The second system thus acts as the inverse of the first.
DRILL PROBLEM 3.27
(a) Find the impulse response of the cascade of two identical filters, each with h[n] = {1, 1, 3}.
(b) The impulse response of two filters is h1[n] = {1, 0, 2} and h2[n] = {4, 1, 3} (starting at n = −1). Find the impulse response of their parallel combination.
(c) The impulse response of two filters is h1[n] = (0.4)^n u[n] and h2[n] = 2(0.4)^n u[n]. Find the impulse response of their parallel and cascade combinations.
Answers: (a) {1, 2, 7, 6, 9} (b) {4, 2, 3, 2} (starting at n = −1) (c) h_P[n] = 3(0.4)^n u[n], h_C[n] = 2(n + 1)(0.4)^n u[n]
3.11 Stability and Causality of LTI Systems
System stability is an important practical constraint in filter design and is defined in various ways. Here we introduce the concept of bounded-input, bounded-output (BIBO) stability, which requires every bounded input to produce a bounded output.
3.11.1 Stability of FIR Filters
The system equation of an FIR filter describes the output as a weighted sum of shifted inputs. If the input remains bounded, the weighted sum of the inputs is also bounded. In other words, FIR filters are always stable. This can be a huge design advantage.
3.11.2 Stability of LTI Systems Described by Difference Equations
For an LTI system described by the difference equation
y[n] + A1 y[n − 1] + ⋯ + AN y[n − N] = B0 x[n] + B1 x[n − 1] + ⋯ + BM x[n − M]   (3.45)
the conditions for BIBO stability involve the roots of the characteristic equation. A necessary and sufficient
condition for BIBO stability of such an LTI system is that every root of its characteristic equation must have
a magnitude less than unity. This criterion is based on the results of Tables 3.1 and 3.2. Root magnitudes
less than unity ensure that the natural (and zero-input) response always decays with time (see Table 3.2),
and the forced (and zero-state) response always remains bounded for every bounded input. Roots with
magnitudes that equal unity make the system unstable. Simple (non-repeated) roots with unit magnitude
produce a constant (or sinusoidal) natural response that is bounded; but if the input is also a constant
(or sinusoid at the same frequency), the forced response is a ramp or growing sinusoid (see Table 3.1) and
hence unbounded. Repeated roots with unit magnitude result in a natural response that is itself a growing
sinusoid or polynomial and thus unbounded. In the next chapter, we shall see that the stability condition
is equivalent to having an LTI system whose impulse response h[n] is absolutely summable. The stability of
nonlinear or time-varying systems usually must be checked by other means.
3.11.3 Stability of LTI Systems Described by the Impulse Response
For systems described by their impulse response, it turns out that BIBO stability requires that the impulse response h[n] be absolutely summable. Here is why. If x[n] is bounded such that |x[n]| < M, so too is its shifted version x[n − k]. The convolution sum then yields the following inequality:
|y[n]| < Σ_{k=−∞}^{∞} |h[k]||x[n − k]| < M Σ_{k=−∞}^{∞} |h[k]|   (3.46)
If the output is to remain bounded (|y[n]| < ∞), then
Σ_{k=−∞}^{∞} |h[k]| < ∞   (for a stable LTI system)   (3.47)
In other words, h[n] must be absolutely summable. This is both a necessary and sufficient condition. The stability of nonlinear systems must be investigated by other means.
3.11.4 Causality
In analogy with analog systems, causality of discrete-time systems implies a non-anticipating system with an impulse response h[n] = 0 for n < 0. This ensures that an input x[n]u[n − n0] starting at n = n0 results in a response y[n] also starting at n = n0 (and not earlier). This follows from the convolution sum:
y[n] = Σ_{k=−∞}^{∞} x[k]u[k − n0] h[n − k]u[n − k] = Σ_{k=n0}^{n} x[k]h[n − k]   (3.48)
REVIEW PANEL 3.18
Stability and Causality of Discrete LTI Systems
Stability: Every root r of the characteristic equation must have magnitude |r| less than unity.
The impulse response h[n] must be absolutely summable, with Σ_{k=−∞}^{∞} |h[k]| < ∞.
Note: FIR filters are always stable.
Causality: The impulse response h[n] must be zero for negative indices (h[n] = 0, n < 0).
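Both tests in the panel can be checked numerically. The sketch below (plain Python; the variable names are ours) applies the root test to the system y[n] − (1/6)y[n − 1] − (1/6)y[n − 2] = x[n] via the quadratic formula, and checks absolute summability of h[n] = (0.5)^n u[n] with a partial sum:

```python
import cmath

# Root test: characteristic equation z^2 - (1/6)z - (1/6) = 0
a, b, c = 1.0, -1 / 6, -1 / 6
disc = cmath.sqrt(b * b - 4 * a * c)
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
print([abs(r) for r in roots])     # both magnitudes below 1, so the system is stable

# Summability test: partial sums of |h[n]| = (0.5)^n approach 2
S = sum(abs(0.5 ** n) for n in range(100))
print(S)                           # close to 2, so this filter is stable too
```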
EXAMPLE 3.25 (Concepts Based on Stability and Causality)
(a) The system y[n] − (1/6)y[n − 1] − (1/6)y[n − 2] = x[n] is stable since the roots of its characteristic equation z^2 − (1/6)z − (1/6) = 0 are z1 = 1/2 and z2 = −1/3, and their magnitudes are less than 1.

(b) The system y[n] − y[n − 1] = x[n] is unstable. The root of its characteristic equation z − 1 = 0 is z = 1, which gives the natural response y_N[n] = Ku[n]; this is actually bounded. However, for an input x[n] = u[n], the forced response will have the form Cnu[n], which becomes unbounded.
(c) The system y[n] − 2y[n − 1] + y[n − 2] = x[n] is unstable. The roots of its characteristic equation z^2 − 2z + 1 = 0 are equal (z1 = z2 = 1) and produce the unbounded natural response y_N[n] = Au[n] + Bnu[n].

(d) The system y[n] − (1/2)y[n − 1] = nx[n] is linear, time varying, and unstable. The (bounded) step input x[n] = u[n] results in a response that includes the ramp nu[n], which becomes unbounded.

(e) The system y[n] = x[n] − 2x[n − 1] is stable because it describes an FIR filter.

(f) The FIR filter described by y[n] = x[n + 1] − x[n] has the impulse response h[n] = {1, −1} (starting at n = −1). It is a stable system, since Σ|h[n]| = |1| + |−1| = 2. It is also noncausal because h[n] = δ[n + 1] − δ[n] is not zero for n < 0. We emphasize that FIR filters are always stable because Σ|h[n]| is the absolute sum of a finite sequence and is thus always finite.

(g) A filter described by h[n] = (0.5)^n u[n] is causal. It describes a system with the difference equation y[n] = x[n] + 0.5y[n − 1]. It is also stable because Σ|h[n]| is finite. In fact, we find that
Σ_{n=−∞}^{∞} |h[n]| = Σ_{n=0}^{∞} (0.5)^n = 1/(1 − 0.5) = 2

(h) A filter described by the difference equation y[n] − 0.5y[n − 1] = nx[n] is causal but time varying. It is also unstable. If we apply a step input u[n] (bounded input), then y[n] = nu[n] + 0.5y[n − 1]. The term nu[n] grows without bound and makes this system unstable. We caution you that this approach is not a formal way of checking for the stability of time-varying systems.
DRILL PROBLEM 3.28
(a) Is the filter described by h[n] = {2, 1, 1, 3} (starting at n = −1) causal? Is it stable?
(b) Is the filter described by h[n] = 2^n u[n + 2] causal? Is it stable?
(c) Is the filter described by y[n] + 0.5y[n − 1] = 4u[n] causal? Is it stable?
(d) Is the filter described by y[n] + 1.5y[n − 1] + 0.5y[n − 2] = u[n] causal? Is it stable?
Answers: (a) Noncausal, stable (b) Noncausal, unstable (c) Causal, stable (d) Causal, unstable
3.12 System Response to Periodic Inputs
In analogy with analog systems, the response of a discrete-time system to a periodic input with period N is
also periodic with the same period N. A simple example demonstrates this concept.
EXAMPLE 3.26 (Response to Periodic Inputs)
(a) Let x[n] = {1, 2, −3, 1, 2, −3, 1, 2, −3, . . .} and h[n] = {1, 1}.
The convolution y[n] = x[n] ⋆ h[n], using the sum-by-column method, is

Index n   0   1   2   3   4   5   6   7   8   9   10
x[n]      1   2  −3   1   2  −3   1   2  −3   1   ...
h[n]      1   1
          1   2  −3   1   2  −3   1   2  −3   1   ...
              1   2  −3   1   2  −3   1   2  −3   ...
y[n]      1   3  −1  −2   3  −1  −2   3  −1  −2   ...

The convolution y[n] is periodic with period N = 3, except for start-up effects (which last for one period). One period of the convolution is y[n] = {−2, 3, −1}.

(b) Let x[n] = {1, 2, −3, 1, 2, −3, 1, 2, −3, . . .} and h[n] = {1, 1, 1}.
The convolution y[n] = x[n] ⋆ h[n], using the sum-by-column method, is found as follows:

Index n   0   1   2   3   4   5   6   7   8   9   10
x[n]      1   2  −3   1   2  −3   1   2  −3   1   ...
h[n]      1   1   1
          1   2  −3   1   2  −3   1   2  −3   1   ...
              1   2  −3   1   2  −3   1   2  −3   ...
                  1   2  −3   1   2  −3   1   2   ...
y[n]      1   3   0   0   0   0   0   0   0   0   ...

Except for start-up effects, the convolution is zero. The system h[n] = {1, 1, 1} is a moving average filter. It extracts the 3-point running sum, which is always zero for the given periodic signal x[n].
REVIEW PANEL 3.19
The Response of LTI Systems to Periodic Signals Is Also Periodic with Identical Period
[A periodic input with period N applied to a relaxed LTI system yields a periodic output with the same period N.]
One way to find the system response to periodic inputs is to find the response to one period of the input and then use superposition. In analogy with analog signals, if we add an absolutely summable signal (or energy signal) x[n] and its infinitely many replicas shifted by multiples of N, we obtain a periodic signal with period N, which is called the periodic extension of x[n]:
x_pe[n] = Σ_{k=−∞}^{∞} x[n + kN]   (3.49)
An equivalent way of finding one period of the periodic extension is to wrap around N-sample sections of x[n] and add them all up. If x[n] is shorter than N, we obtain one period of its periodic extension simply by padding x[n] with zeros (to increase its length to N).
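The wraparound recipe is a one-loop operation. A minimal sketch in plain Python (`periodic_extension` is our own name):

```python
def periodic_extension(x, N):
    # Wrap successive N-sample sections of x around and add them up
    xpe = [0] * N
    for i, v in enumerate(x):
        xpe[i % N] += v
    return xpe

print(periodic_extension([1, 5, 2, 0, 4, 3, 6, 7], 3))   # → [7, 16, 5]
print(periodic_extension([1, 2], 4))                     # → [1, 2, 0, 0] (zero-padding)
```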
EXAMPLE 3.27 (Periodic Extension)
(a) To find the periodic extension of x[n] = {1, 5, 2, 0, 4, 3, 6, 7} with period N = 3, we wrap around successive 3-sample sections and sum:
1 5 2
0 4 3
6 7
⇒ sum ⇒ {7, 16, 5}
In other words, if we add x[n] to its shifted versions x[n + kN], where N = 3 and k = ±1, ±2, ±3, . . ., we get a periodic signal whose first period is {7, 16, 5}.

(b) The periodic extension of the signal x[n] = α^n u[n] with period N is given by
x_pe[n] = Σ_{k=−∞}^{∞} x[n + kN] = Σ_{k=0}^{∞} α^{n+kN} = α^n Σ_{k=0}^{∞} (α^N)^k = α^n / (1 − α^N),   0 ≤ n ≤ N − 1
The methods for finding the response of a discrete-time system to periodic inputs rely on the concepts of periodic extension and wraparound. One approach is to find the output for one period of the input (using regular convolution) and find one period of the periodic output by superposition (using periodic extension). Another approach is to first find one period of the periodic extension of the impulse response, then find its regular convolution with one period of the input, and finally wrap around the regular convolution to generate one period of the periodic output.
EXAMPLE 3.28 (System Response to Periodic Inputs)
(a) Let h[n] = {1, 1}, and let the input be periodic with first period x1[n] = {1, 2, −3}.
The regular convolution of one period of the input with h[n] is
y1[n] = x1[n] ⋆ h[n] = {1, 2, −3} ⋆ {1, 1} = {1, 3, −1, −3}
We then wrap around y1[n] past three samples to obtain one period of y[n] as
{1, 3, −1, −3} ⇒ wrap around ⇒
1 3 −1
−3
⇒ sum ⇒ {−2, 3, −1}
This is identical to the result obtained in the previous example.

(b) We find the response y_p[n] of a moving average filter described by h[n] = {2, 1, 1, 3, 1} to a periodic signal whose one period is x_p[n] = {2, 1, 3}.
1. (Method 1) The regular convolution of one period of the input with h[n] is
{2, 1, 3} ⋆ {2, 1, 1, 3, 1} = {4, 4, 9, 10, 8, 10, 3}
To find y_p[n], values past N = 3 are wrapped around and summed to give
4 4 9
10 8 10
3
⇒ sum ⇒ {17, 12, 19}
2. (Method 2) We first create the periodic extension of h[n], with N = 3, (by wraparound) to get
h_p[n] = {5, 2, 1}
Its regular convolution with one period of the input is
{2, 1, 3} ⋆ {5, 2, 1} = {10, 9, 19, 7, 3}
This result is wrapped around past N = 3 to give y_p[n] = {17, 12, 19}, as before.

(c) The impulse response of the system is h[n] = (0.5)^n u[n]. The input is periodic with period N = 3 and first period x1[n] = {8, 4, 2}. One period of the periodic extension of h[n] is
h_pe[n] = (0.5)^n / (1 − (0.5)^3) = {8/7, 4/7, 2/7},   0 ≤ n ≤ 2
The periodic convolution of x1[n] and h_pe[n] then gives one period of the output as
y_p[n] = {80/7, 68/7, 48/7}
DRILL PROBLEM 3.29
(a) A filter is described by h[n] = {1, 0, 0, 2}. Find one period of its periodic output if the input is periodic with first period x1[n] = {4, 1, 1, 3}.
(b) A filter is described by y[n] − 0.5y[n − 1] = 7x[n]. Find one period of its periodic output if the input is periodic with first period x1[n] = {1, 0, 0}.
(c) A filter is described by y[n] − 0.5y[n − 1] = 7x[n]. Find one period of its periodic output if the input is periodic with first period x1[n] = {1, 0, 1}.
Answers: (a) {6, 3, 7, 11} (b) {8, 4, 2} (c) {12, 6, 10}
3.13 Periodic Convolution
The regular convolution of two signals, both of which are periodic, does not exist. For such signals, we resort to periodic convolution instead. If both x_p[n] and h_p[n] are periodic with identical period N, their periodic convolution generates a convolution result y_p[n] that is also periodic with the same period N. The periodic convolution or circular convolution y_p[n] of x_p[n] and h_p[n] is denoted y_p[n] = x_p[n] ⊛ h_p[n] and, over one period (n = 0, 1, . . . , N − 1), it is defined by

y_p[n] = x_p[n] ⊛ h_p[n] = h_p[n] ⊛ x_p[n] = Σ_{k=0}^{N−1} x_p[k]h_p[n − k] = Σ_{k=0}^{N−1} h_p[k]x_p[n − k]   (3.50)

An averaging factor of 1/N is sometimes included with the summation. Periodic convolution can be implemented using wraparound. We find the linear convolution of one period of x_p[n] and h_p[n], which will have (2N − 1) samples. We then extend its length to 2N (by appending a zero), slice it in two halves (of length N each), line up the second half with the first, and add the two halves to get the periodic convolution.
REVIEW PANEL 3.20
Periodic Convolution of Periodic Discrete-Time Signals with Identical Period N
1. Find the regular convolution of their one-period segments (this will have length 2N − 1).
2. Append a trailing zero. Wrap around the last N samples and add to the first N samples.
EXAMPLE 3.29 (Periodic Convolution)
(a) Find the periodic convolution of x_p[n] = {1, 0, 1, 1} and h_p[n] = {1, 2, 3, 1}.
The period is N = 4. First, we find the linear convolution y[n].

Index n   0   1   2   3   4   5   6
h_p[n]    1   2   3   1
x_p[n]    1   0   1   1
          1   2   3   1
              0   0   0   0
                  1   2   3   1
                      1   2   3   1
y[n]      1   2   4   4   5   4   1

Then, we append a zero, wrap around the last four samples, and add.

Index n                        0   1   2   3
First half of y[n]             1   2   4   4
Wrapped-around half of y[n]    5   4   1   0
Periodic convolution y_p[n]    6   6   5   4
(b) Find the periodic convolution of x_p[n] = {1, 2, 3} and h_p[n] = {1, 0, 2}, with period N = 3.
The regular convolution is easily found to be y_R[n] = {1, 2, 5, 4, 6}.
Appending a zero and wrapping around the last three samples gives y_p[n] = {5, 8, 5}.
DRILL PROBLEM 3.30
Find the periodic convolution of two identical signals whose first period is given by x1[n] = {1, 2, 0, 2}.
Answer: {9, 4, 8, 4}
3.13.1 Periodic Convolution By the Cyclic Method
To find the periodic convolution, we shift the folded signal x_p[−n] past h_p[n], one index at a time, and find the convolution at each index as the sum of the pointwise product of their samples, but only over a one-period window (0, N − 1). Values of x_p[n] and h_p[n] outside the range (0, N − 1) are generated by periodic extension. One way to visualize the process is to line up x[k] clockwise around a circle and the folded h[k] counterclockwise, on a concentric circle positioned to start the convolution, as shown in Figure 3.8.
[Figure 3.8 The cyclic method of circular (periodic) convolution. The samples of x[k] = {1, 2, 3} line up clockwise on one circle and those of the folded h[k] = {1, 0, 2} counterclockwise on a concentric circle. Rotating the outer sequence clockwise one position at a time gives
y[0] = (1)(1) + (2)(2) + (3)(0) = 5
y[1] = (1)(0) + (2)(1) + (3)(2) = 8
y[2] = (1)(2) + (2)(0) + (3)(1) = 5]
Shifting the folded sequence turns it clockwise. At each turn, the convolution equals the sum of the
pairwise products. This approach clearly brings out the cyclic nature of periodic convolution.
3.13.2 Periodic Convolution By the Circulant Matrix
Periodic convolution may also be expressed as a matrix multiplication. We set up an N × N matrix whose columns equal x[n] and its cyclically shifted versions (or whose rows equal successively shifted versions of the first period of the folded signal x[−n]). This is called the circulant matrix or convolution matrix.
An N × N circulant matrix C_x for x[n] has the general form

      | x[0]    x[N−1]  x[N−2]  ...  x[1] |
C_x = | x[1]    x[0]    x[N−1]  ...  x[2] |   (3.51)
      | x[2]    x[1]    x[0]    ...  x[3] |
      | ...     ...     ...     ...  ...  |
      | x[N−1]  x[N−2]  x[N−3]  ...  x[0] |

Note that each diagonal of the circulant matrix has equal values. Such a constant-diagonal matrix is also called a Toeplitz matrix. Its matrix product with an N × 1 column matrix h describing h[n] yields the periodic convolution y = C_x h as an N × 1 column matrix.
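Building the circulant matrix is a matter of cyclic indexing: entry (i, j) holds x[(i − j) mod N]. A minimal sketch in plain Python, using lists of lists instead of a matrix library (the helper names are ours):

```python
def circulant(x):
    # C[i][j] = x[(i - j) mod N]: each column is a cyclic shift of x
    N = len(x)
    return [[x[(i - j) % N] for j in range(N)] for i in range(N)]

def matvec(C, h):
    return [sum(cij * hj for cij, hj in zip(row, h)) for row in C]

C = circulant([1, 0, 2])
print(C)                       # → [[1, 2, 0], [0, 1, 2], [2, 0, 1]]
print(matvec(C, [1, 2, 3]))    # → [5, 8, 5], the periodic convolution
```

Note that every diagonal of C holds a single value, as expected of a Toeplitz matrix.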
EXAMPLE 3.30 (Periodic Convolution By the Circulant Matrix)
Consider x[n] = {1, 0, 2} and h[n] = {1, 2, 3}.
(a) The periodic convolution y1[n] of x[n] and h[n] over a one-period window is given by

      | 1 2 0 |        | 1 |                    | 5 |
C_x = | 0 1 2 |    h = | 2 |    y1[n] = C_x h = | 8 |
      | 2 0 1 |        | 3 |                    | 5 |

With the averaging factor of 1/N (here N = 3) included, the result is {5/3, 8/3, 5/3}.
(b) The periodic convolution y2[n] of x[n] and h[n] over a two-period window yields

      | 1 2 0 1 2 0 |          | 1 |                      | 10 |
      | 0 1 2 0 1 2 |          | 2 |                      | 16 |
C_2 = | 2 0 1 2 0 1 |    h_2 = | 3 |    y2[n] = C_2 h_2 = | 10 |
      | 1 2 0 1 2 0 |          | 1 |                      | 10 |
      | 0 1 2 0 1 2 |          | 2 |                      | 16 |
      | 2 0 1 2 0 1 |          | 3 |                      | 10 |

We see that y2[n] has double the length (and values) of y1[n], but it is still periodic with N = 3.
Comment: Normalization by a two-period window width (6 samples) gives
y_p2[n] = y2[n]/6 = {5/3, 8/3, 5/3, 5/3, 8/3, 5/3}
EXAMPLE 3.31 (Regular Convolution By the Circulant Matrix)
Regular convolution can also be found from the periodic convolution of zero-padded sequences. Consider x[n] = {2, 5, 0, 4} and h[n] = {4, 1, 3}. Zero-padding both to the length of their regular convolution (4 + 3 − 1 = 6) gives
x_zp[n] = {2, 5, 0, 4, 0, 0}      h_zp[n] = {4, 1, 3, 0, 0, 0}
The periodic convolution x_zp[n] ⊛ h_zp[n], using the circulant matrix, equals

        | 2 0 0 4 0 5 |           | 4 |                            | 8  |
        | 5 2 0 0 4 0 |           | 1 |                            | 22 |
C_xzp = | 0 5 2 0 0 4 |    h_zp = | 3 |    y_p[n] = C_xzp h_zp =   | 11 |
        | 4 0 5 2 0 0 |           | 0 |                            | 31 |
        | 0 4 0 5 2 0 |           | 0 |                            | 4  |
        | 0 0 4 0 5 2 |           | 0 |                            | 12 |

This is identical to the regular convolution y[n] = x[n] ⋆ h[n] obtained by several other methods in previous examples.
DRILL PROBLEM 3.31
Let x[n] = {1, 2, 0, 2, 2} and h[n] = {3, 2}.
(a) How many zeros must be appended to x[n] and h[n] in order to generate their regular convolution from the periodic convolution of the zero-padded sequences?
(b) What is the regular convolution of the zero-padded sequences?
(c) What is the regular convolution of the original sequences?
Answers: (a) 1, 4 (b) {3, 8, 4, 6, 10, 4, 0, 0, 0, 0, 0} (c) {3, 8, 4, 6, 10, 4}
3.14 Deconvolution
Given the system impulse response h[n], the response y[n] of the system to an input x[n] is simply the convolution of x[n] and h[n]. Given x[n] and y[n] instead, how do we find h[n]? This situation arises very often in practice and is referred to as deconvolution or system identification.
For discrete-time systems, we have a partial solution to this problem. Since discrete convolution may be thought of as polynomial multiplication, discrete deconvolution may be regarded as polynomial division. One approach to discrete deconvolution is to use the idea of long division, a familiar process, illustrated in the following example.
3.14.1 Deconvolution By Recursion
Deconvolution may also be recast as a recursive algorithm. The convolution
y[n] = x[n] ⋆ h[n] = Σ_{k=0}^{n} h[k]x[n − k]   (3.52)
when evaluated at n = 0, provides the seed value h[0] as
y[0] = x[0]h[0]  or  h[0] = y[0]/x[0]   (3.53)
We now separate the term containing h[n] in the convolution relation:
y[n] = Σ_{k=0}^{n} h[k]x[n − k] = h[n]x[0] + Σ_{k=0}^{n−1} h[k]x[n − k]   (3.54)
and evaluate h[n] for successive values of n > 0 from
h[n] = (1/x[0]) [ y[n] − Σ_{k=0}^{n−1} h[k]x[n − k] ],   n > 0   (3.55)
If all goes well, we need to evaluate h[n] only at M − N + 1 points, where M and N are the lengths of y[n] and x[n], respectively.
Naturally, problems arise if a remainder is involved. This may well happen in the presence of noise, which could modify the values in the output sequence even slightly. In other words, the approach is quite susceptible to noise or roundoff error and not very practical.
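The recursion of Eq. (3.55) is a few lines of code. A minimal sketch in plain Python (`deconv` is our own name), recovering h[n] = {4, 1, 3} from x[n] = {2, 5, 0, 4} and y[n] = {8, 22, 11, 31, 4, 12}:

```python
def deconv(y, x):
    # Recursive deconvolution: h[0] = y[0]/x[0], then Eq. (3.55) for n > 0
    M, N = len(y), len(x)
    h = [0.0] * (M - N + 1)
    h[0] = y[0] / x[0]
    for n in range(1, len(h)):
        acc = sum(h[k] * x[n - k] for k in range(n) if n - k < N)
        h[n] = (y[n] - acc) / x[0]
    return h

print(deconv([8, 22, 11, 31, 4, 12], [2, 5, 0, 4]))   # → [4.0, 1.0, 3.0]
```

As noted above, even slight noise in y[n] propagates through the recursion, so this is a demonstration rather than a robust identification method.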
REVIEW PANEL 3.22
Deconvolution May Be Regarded as Polynomial Division or Matrix Inversion
EXAMPLE 3.32 (Deconvolution)
(a) (Deconvolution by Polynomial Division)
Consider x[n] = {2, 5, 0, 4} and y[n] = {8, 22, 11, 31, 4, 12}. Setting up the polynomials and using long division, we find
(8z^5 + 22z^4 + 11z^3 + 31z^2 + 4z + 12) ÷ (2z^3 + 5z^2 + 0z + 4) = 4z^2 + z + 3
so h[n] = {4, 1, 3}.

(b) (Deconvolution by Recursion)
Let x[n] = {2, 5, 0, 4} and y[n] = {8, 22, 11, 31, 4, 12}. The seed value is h[0] = y[0]/x[0] = 8/2 = 4. Then
h[1] = (1/x[0]) [ y[1] − h[0]x[1] ] = (22 − 20)/2 = 1
h[2] = (1/x[0]) [ y[2] − h[0]x[2] − h[1]x[1] ] = (11 − 0 − 5)/2 = 3
As before, h[n] = {4, 1, 3}.
DRILL PROBLEM 3.32
The input x[n] = {2, 3, 1, 6} to a filter results in the output y[n] = {4, 8, 11, 22, 9, 18}. Use deconvolution to find the impulse response h[n].
Answer: {2, 1, 3}
3.15 Discrete Correlation
Correlation is a measure of similarity between two signals and is found using a process similar to convolution.
The discrete cross-correlation (denoted ⋆⋆) of x[n] and h[n] is defined by
r_xh[n] = x[n] ⋆⋆ h[n] = Σ_{k=−∞}^{∞} x[k]h[k − n] = Σ_{k=−∞}^{∞} x[k + n]h[k]   (3.56)
r_hx[n] = h[n] ⋆⋆ x[n] = Σ_{k=−∞}^{∞} h[k]x[k − n] = Σ_{k=−∞}^{∞} h[k + n]x[k]   (3.57)
Some authors prefer to switch the definitions of r_xh[n] and r_hx[n].
To find r_xh[n], we line up the last element of h[n] with the first element of x[n] and start shifting h[n] past x[n], one index at a time. We sum the pointwise product of the overlapping values to generate the correlation at each index. This is equivalent to performing the convolution of x[n] and the folded signal h[−n]. The starting index of the correlation equals the sum of the starting indices of x[n] and h[−n].
Similarly, r_hx[n] equals the convolution of x[−n] and h[n], and its starting index equals the sum of the starting indices of x[−n] and h[n]. However, r_xh[n] does not equal r_hx[n]. The two are folded versions of each other and related by r_xh[n] = r_hx[−n].
REVIEW PANEL 3.23
Correlation Is the Convolution of One Signal with a Folded Version of the Other
r_xh[n] = x[n] ⋆⋆ h[n] = x[n] ⋆ h[−n]      r_hx[n] = h[n] ⋆⋆ x[n] = h[n] ⋆ x[−n]
Correlation length: Nx + Nh − 1      Correlation sum: Σ r[n] = (Σ x[n])(Σ h[n])
EXAMPLE 3.33 (Discrete Autocorrelation and Cross-Correlation)
(a) Let x[n] = {2, 5, 0, 4} and h[n] = {3, 1, 4}.
To find r_xh[n], we compute the convolution of x[n] and the folded signal h[−n] = {4, 1, 3}:
r_xh[n] = {2, 5, 0, 4} ⋆ {4, 1, 3} = {8, 22, 11, 31, 4, 12}

(b) Let x[n] = {2, 5, 0, 4} and h[n] = {3, 1, 4}.
To find r_hx[n], we compute the convolution of h[n] and the folded signal x[−n] = {4, 0, 5, 2}:
r_hx[n] = {3, 1, 4} ⋆ {4, 0, 5, 2} = {12, 4, 31, 11, 22, 8}
Note that r_hx[n] is the folded version of r_xh[n].

(c) To find the autocorrelation r_xx[n], we compute the convolution of x[n] and its folded version x[−n] = {4, 0, 5, 2}:
r_xx[n] = {2, 5, 0, 4} ⋆ {4, 0, 5, 2} = {8, 20, 10, 45, 10, 20, 8}
This result is even symmetric about its center value, with a maximum of r_xx[0] = 45.

DRILL PROBLEM 3.33
Let x[n] = {2, 1, 3} and h[n] = {3, −2, 1}. Find r_xh[n], r_hx[n], r_xx[n], and r_hh[n].
Answers: r_xh[n] = {2, −3, 7, −3, 9}  r_hx[n] = {9, −3, 7, −3, 2}  r_xx[n] = {6, 5, 14, 5, 6}  r_hh[n] = {3, −8, 14, −8, 3}
3.15.1 Autocorrelation
The correlation r_xx[n] of a signal x[n] with itself is called the autocorrelation. It is an even symmetric function (r_xx[n] = r_xx[−n]) with a maximum at n = 0 and satisfies the inequality |r_xx[n]| ≤ r_xx[0].
Correlation is an effective method of detecting signals buried in noise. Noise is essentially uncorrelated with the signal. This means that if we correlate a noisy signal with itself, the correlation will be due only to the signal (if present) and will exhibit a sharp peak at n = 0.
REVIEW PANEL 3.24
The Autocorrelation Is Always Even Symmetric with a Maximum at the Origin
r_xx[n] = x[n] ⋆⋆ x[n] = x[n] ⋆ x[−n]      r_xx[n] = r_xx[−n]      r_xx[n] ≤ r_xx[0]
EXAMPLE 3.34 (Discrete Autocorrelation and Cross-Correlation)
(a) Let x[n] = (0.5)^n u[n] and h[n] = (0.4)^n u[n]. We compute the cross-correlation r_xh[n] as follows:
r_xh[n] = Σ_{k=−∞}^{∞} x[k]h[k − n] = Σ_{k=−∞}^{∞} (0.5)^k (0.4)^{k−n} u[k]u[k − n]
This summation requires evaluation over two ranges of n. If n < 0, the shifted step u[k − n] is nonzero for some k < 0. But since u[k] = 0, k < 0, the lower limit on the summation reduces to k = 0 and we get
(n < 0)   r_xh[n] = Σ_{k=0}^{∞} (0.5)^k (0.4)^{k−n} = (0.4)^{−n} Σ_{k=0}^{∞} (0.2)^k = (0.4)^{−n}/(1 − 0.2) = 1.25(0.4)^{−n} u[−n − 1]
If n ≥ 0, the shifted step u[k − n] is zero for k < n, the lower limit on the summation reduces to k = n, and we obtain
(n ≥ 0)   r_xh[n] = Σ_{k=n}^{∞} (0.5)^k (0.4)^{k−n}
With the change of variable m = k − n, we get
(n ≥ 0)   r_xh[n] = Σ_{m=0}^{∞} (0.5)^{m+n} (0.4)^m = (0.5)^n Σ_{m=0}^{∞} (0.2)^m = (0.5)^n/(1 − 0.2) = 1.25(0.5)^n u[n]
So, r_xh[n] = 1.25(0.4)^{−n} u[−n − 1] + 1.25(0.5)^n u[n].
(b) Let x[n] = a^n u[n], |a| < 1. To compute r_xx[n], which is even symmetric, we need compute the result only for n ≥ 0 and create its even extension. Following the previous part, we have
(n ≥ 0)   r_xx[n] = Σ_{k=−∞}^{∞} x[k]x[k − n] = Σ_{k=n}^{∞} a^k a^{k−n} = Σ_{m=0}^{∞} a^{m+n} a^m = a^n Σ_{m=0}^{∞} a^{2m} = a^n/(1 − a^2)
The even extension of this result gives r_xx[n] = a^{|n|}/(1 − a^2).
(c) Let x[n] = a^n u[n], |a| < 1, and y[n] = rect(n/2N) (a unit rectangular pulse over −N ≤ n ≤ N). To find r_xy[n], we shift y[k] and sum the products over different ranges. Since y[k − n] shifts the pulse to the right over the limits (−N + n, N + n), the correlation r_xy[n] equals zero until n = −N. We then obtain
−N ≤ n ≤ N − 1 (partial overlap):   r_xy[n] = Σ_{k=−∞}^{∞} x[k]y[k − n] = Σ_{k=0}^{N+n} a^k = (1 − a^{N+n+1})/(1 − a)
n ≥ N (total overlap):   r_xy[n] = Σ_{k=−N+n}^{N+n} a^k = Σ_{m=0}^{2N} a^{m−N+n} = a^{n−N} (1 − a^{2N+1})/(1 − a)
DRILL PROBLEM 3.34
(a) Let x[n] = (0.5)^n u[n] and h[n] = u[n]. Find their correlation.
(b) Let x[n] = (0.5)^n u[n] and h[n] = (0.5)^{−n} u[−n]. Find their correlation.
Answers: (a) 2u[−n − 1] + 2(0.5)^n u[n] (b) (n + 1)(0.5)^n u[n]
3.15.2 Periodic Discrete Correlation
For periodic sequences with identical period N, the periodic discrete correlation is defined as
r_xhp[n] = x[n] ⊛⊛ h[n] = Σ_{k=0}^{N−1} x[k]h[k − n]      r_hxp[n] = h[n] ⊛⊛ x[n] = Σ_{k=0}^{N−1} h[k]x[k − n]   (3.58)
As with discrete periodic convolution, an averaging factor of 1/N is sometimes included in the summation. We can find one period of the periodic correlation r_xhp[n] by first computing the linear correlation of one-period segments and then wrapping around the result. We find that r_hxp[n] is a circularly folded version of r_xhp[n], with r_hxp[n] = r_xhp[−n]. We also find that the periodic autocorrelation r_xxp[n] or r_hhp[n] always displays circular even symmetry. This means that the periodic extension of r_xxp[n] or r_hhp[n] is even symmetric about the origin n = 0. The periodic autocorrelation function also attains a maximum at n = 0.
EXAMPLE 3.35 (Discrete Periodic Autocorrelation and Cross-Correlation)
Consider two periodic signals whose first period is given by x1[n] = {2, 5, 0, 4} and h1[n] = {3, 1, −1, 2}.

(a) To find the periodic cross-correlation r_xhp[n], we first evaluate the linear cross-correlation
r_xh[n] = x1[n] ⋆⋆ h1[n] = {4, 8, −3, 19, 11, 4, 12}
Wraparound gives the periodic cross-correlation as r_xhp = {15, 12, 9, 19}.
We invoke periodicity and describe the result in terms of its first period as r_xhp = {19, 15, 12, 9}.

(b) To find r_hxp[n], we first evaluate the linear cross-correlation
r_hx[n] = h1[n] ⋆⋆ x1[n] = {12, 4, 11, 19, −3, 8, 4}
Wraparound gives the periodic cross-correlation as r_hxp = {9, 12, 15, 19}.
We rewrite the result in terms of its first period as r_hxp = {19, 9, 12, 15}, a circularly folded version of r_xhp[n].

(c) To find the periodic autocorrelation r_hhp[n], we first evaluate the linear autocorrelation
r_hh[n] = h1[n] ⋆⋆ h1[n] = {6, −1, 0, 15, 0, −1, 6}
Wraparound gives the periodic autocorrelation as r_hhp = {6, −2, 6, 15}.
We rewrite the result in terms of its first period as r_hhp = {15, 6, −2, 6}.
This displays circular even symmetry (its periodic extension is even symmetric about n = 0).
DRILL PROBLEM 3.35
Let x1[n] = {2, 1, 0, 3} and h1[n] = {1, −2, −3, 0} describe one period of two periodic signals. Find the periodic correlations r_hhp[n], r_xxp[n], r_xhp[n], and r_hxp[n].
Answers: r_hhp[n] = {14, 4, −6, 4}  r_xxp[n] = {14, 8, 6, 8}  r_xhp[n] = {0, −8, −12, −4}  r_hxp[n] = {0, −4, −12, −8}
3.15.3 Matched Filtering and Target Ranging
Correlation finds widespread use in applications such as target ranging and estimation of periodic signals buried in noise. For target ranging, a sampled interrogating signal x[n] is transmitted toward the target. The signal reflected from the target is s[n] = αx[n − D] + p[n], a delayed (by D) and attenuated (by α) version of x[n], contaminated by noise p[n]. The reflected signal s[n] is correlated with the interrogating signal x[n]. If the noise is uncorrelated with the signal x[n], its correlation with x[n] is essentially zero. The correlation of x[n] and its delayed version x[n − D] yields a result that attains a peak at n = D. It is thus quite easy to identify the index D from the correlation peak (rather than from the reflected signal directly), even in the presence of noise. The (round-trip) delay index D may then be related to the target distance d by d = 0.5vD/S, where v is the propagation velocity and S is the rate at which the signal is sampled. The device that performs the correlation of the received signal s[n] and x[n] is called a correlation receiver.
The correlation of s[n] with x[n] is equivalent to the convolution of s[n] with x[−n], a folded version of the interrogating signal. This means that the impulse response of the correlation receiver is h[n] = x[−n] and is matched to the transmitted signal. For this reason, such a receiver is also called a matched filter.
Figure 3.9 illustrates the concept of matched filtering. The transmitted signal is a rectangular pulse. The impulse response of the matched filter is its folded version and is noncausal. In an ideal situation, the received signal is simply a delayed version of the transmitted signal, and the output of the matched filter yields a peak whose location gives the delay. This can also be identified from the ideal received signal itself. In practice, the received signal is contaminated by noise and it is difficult to identify where the delayed pulse is located. The output of the matched filter, however, provides a clear indication of the delay even for low signal-to-noise ratios.
Correlation also finds application in pattern recognition. For example, if we need to establish whether an unknown pattern belongs to one of several known patterns or templates, it can be compared (correlated) with each template in turn. A match occurs if the autocorrelation of the template matches (or resembles) the cross-correlation of the template and the unknown pattern.
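A toy version of the ranging idea fits in a few lines. In this plain-Python sketch, the pulse shape, delay, attenuation, and noise level are all made-up values for illustration; the delay estimate is simply the lag of the correlation peak:

```python
import random

random.seed(1)
pulse = [1.0] * 8                                   # transmitted rectangular pulse x[n]
D = 37                                              # round-trip delay (to be estimated)
s = [0.0] * D + [0.5 * p for p in pulse]            # attenuated, delayed echo
s += [0.0] * (100 - len(s))
s = [v + random.gauss(0.0, 0.05) for v in s]        # received signal with noise

# Correlate s with x: slide the pulse along s and sum the pointwise products
r = [sum(s[n + k] * pulse[k] for k in range(len(pulse)))
     for n in range(len(s) - len(pulse) + 1)]
print(r.index(max(r)))                              # the peak marks the delay D
```

Equivalently, filtering s[n] with h[n] = x[−n] (the matched filter) produces the same peak.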
Identifying Periodic Signals in Noise
Correlation methods may also be used to identify the presence of a periodic signal x[n] buried in the noisy signal s[n] = x[n] + p[n], where p[n] is the noise component (presumably uncorrelated with the signal), and to extract the signal itself. The idea is to first identify the period of the signal from the periodic autocorrelation of the noisy signal. If the noisy signal contains a periodic component, the autocorrelation will show peaks at multiples of the period N. Once the period N is established, we can recover x[n] as the periodic cross-correlation of an impulse train i[n] = Σ δ[n − kN], with period N, and the noisy signal s[n]. Since i[n] is uncorrelated with the noise, the periodic cross-correlation of i[n] and s[n] yields (an amplitude-scaled version of) the periodic signal x[n].
Figure 3.10 illustrates the concept of identifying a periodic signal hidden in noise. A periodic sawtooth signal is contaminated by noise to yield a noisy signal. The peaks in the periodic autocorrelation of the noisy signal allow us to identify the period as N = 20. The periodic cross-correlation of the noisy signal with the impulse train δ[n − 20k] extracts the signal from noise. Note that longer lengths of the noisy signal
(compared to the period N) will improve the match between the recovered and buried periodic signal.
[Figure 3.9 The concept of matched filtering and target ranging. The panels show (a) the transmitted signal, (b) the matched filter, (c) the ideal received signal, (d) the ideal matched filter output, (e) the noisy received signal, and (f) the actual matched filter output, SNR = 40 dB.]
REVIEW PANEL 3.25
How to Identify a Periodic Signal x[n] Buried in a Noisy Signal s[n]
Find the period N of x[n] from the periodic autocorrelation of the noisy signal s[n].
Find the signal x[n] as the periodic cross-correlation of s[n] and an impulse train with period N.
3.16 Discrete Convolution and Transform Methods
Discrete-time convolution provides a connection between the time-domain and frequency-domain methods of system analysis for discrete-time signals. It forms the basis for every transform method described in this text, and its role in linking the time domain and the transformed domain is intimately tied to the concept of discrete eigensignals and eigenvalues. The everlasting exponential z^n is an eigensignal of discrete-time linear systems. In this complex exponential z^n, the quantity z has the general form z = re^{j2πF}. If the input to an LTI system is z^n, the output has the same form and is given by Cz^n, where C is a (possibly complex) constant. Similarly, the everlasting discrete-time harmonic e^{j2πnF} (a special case with r = 1) is also an
[Figure 3.10 Extraction of a periodic signal buried in noise. The panels show (a) a periodic signal, (b) the periodic signal with added noise, (c) the correlation of the noisy signal, whose peaks give the period, and (d) the extracted signal.]
eigensignal of discrete-time systems.
3.16.1 The z-Transform
For an input x[n] = r^n e^{j2πnF} = (re^{j2πF})^n = z^n, where z is complex with magnitude |z| = r, the response
may be written as

y[n] = x[n] ∗ h[n] = Σ_{k=−∞}^{∞} z^{n−k} h[k] = z^n Σ_{k=−∞}^{∞} h[k] z^{−k} = x[n] H(z)    (3.59)

The response equals the input (eigensignal) modified by the system function H(z), where

H(z) = Σ_{k=−∞}^{∞} h[k] z^{−k}    (two-sided z-transform)    (3.60)

The complex quantity H(z) describes the z-transform of h[n] and is not, in general, periodic in z. Denoting
the z-transforms of x[n] and y[n] by X(z) and Y(z), we write

Y(z) = Σ_{k=−∞}^{∞} y[k] z^{−k} = Σ_{k=−∞}^{∞} x[k] H(z) z^{−k} = H(z) X(z)    (3.61)

Convolution in the time domain thus corresponds to multiplication in the z-domain.
3.16.2 The Discrete-Time Fourier Transform
For the harmonic input x[n] = e^{j2πnF}, the response y[n] equals

y[n] = Σ_{k=−∞}^{∞} e^{j2π(n−k)F} h[k] = e^{j2πnF} Σ_{k=−∞}^{∞} h[k] e^{−j2πkF} = x[n] H(F)    (3.62)

This is just the input modified by the system function H(F), where

H(F) = Σ_{k=−∞}^{∞} h[k] e^{−j2πkF}    (3.63)

The quantity H(F) describes the discrete-time Fourier transform (DTFT), or discrete-time frequency
response, or spectrum of h[n]. Any signal x[n] may similarly be described by its DTFT X(F). The response
y[n] = x[n]H(F) may then be transformed to its DTFT Y(F) to give

Y(F) = Σ_{k=−∞}^{∞} y[k] e^{−j2πkF} = Σ_{k=−∞}^{∞} x[k] H(F) e^{−j2πkF} = H(F) X(F)    (3.64)

Once again, convolution in the time domain corresponds to multiplication in the frequency domain. Note
that we obtain the DTFT of h[n] from its z-transform H(z) by letting z = e^{j2πF}, or |z| = 1, to give

H(F) = H(z)|_{z = e^{j2πF}} = H(z)|_{|z| = 1}    (3.65)

The DTFT is thus the z-transform evaluated on the unit circle |z| = 1. The system function H(F) is also
periodic in F with a period of unity because e^{−j2πkF} = e^{−j2πk(F+1)}. This periodicity is a direct consequence
of the discrete nature of h[n].
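Equation (3.63) can be evaluated by summing the series directly. The sketch below (our own check, assuming a simple first-order h[n] = (0.5)^n u[n], truncated for computation) compares the sum against the geometric-series closed form H(z) = 1/(1 − 0.5z⁻¹) evaluated on the unit circle, and confirms the unit periodicity in F:

```python
import numpy as np

def dtft(x, F):
    """H(F) = sum_k x[k] e^{-j 2 pi k F} for a causal finite sequence x."""
    k = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * k * F))

h = 0.5 ** np.arange(50)   # h[n] = (0.5)^n u[n], truncated at n = 49
F = 0.2
H_sum = dtft(h, F)
# Closed form H(z) = 1/(1 - 0.5 z^-1) evaluated at z = e^{j 2 pi F}
H_exact = 1.0 / (1.0 - 0.5 * np.exp(-2j * np.pi * F))
print(abs(H_sum - H_exact) < 1e-10)        # truncation error is ~0.5^50
print(abs(dtft(h, F) - dtft(h, F + 1)))    # periodicity: essentially zero
```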
CHAPTER 3 PROBLEMS
3.1 (Operators) Which of the following describe linear operators?
(a) O = 4    (b) O = 4 + 3    (c) O = { }
3.2 (System Classication) In each of the systems below, x[n] is the input and y[n] is the output.
Check each system for linearity, shift invariance, memory, and causality.
(a) y[n] − y[n−1] = x[n]    (b) y[n] + y[n+1] = nx[n]
(c) y[n] − y[n+1] = x[n+2]    (d) y[n+2] − y[n+1] = x[n]
(e) y[n+1] − x[n]y[n] = nx[n+2]    (f) y[n] + y[n−3] = x²[n] + x[n+6]
(g) y[n] − 2^n y[n] = x[n]    (h) y[n] = x[n] + x[n−1] + x[n−2]
3.3 (System Classication) Classify the following systems in terms of their linearity, time invariance,
memory, causality, and stability.
(a) y[n] = 3^n x[n]    (b) y[n] = e^{jnπ} x[n]
(c) y[n] = cos(0.5nπ)x[n]    (d) y[n] = [1 + cos(0.5nπ)]x[n]
(e) y[n] = e^{x[n]}    (f) y[n] = x[n] + cos[0.5π(n+1)]
3.4 (System Classication) Classify the following systems in terms of their linearity, time invariance,
memory, causality, and stability.
(a) y[n] = x[n/3] (zero interpolation)
(b) y[n] = cos(nπ)x[n] (modulation)
(c) y[n] = [1 + cos(nπ)]x[n] (modulation)
(d) y[n] = cos(nπx[n]) (frequency modulation)
(e) y[n] = cos(nπ + x[n]) (phase modulation)
(f) y[n] = x[n] − x[n−1] (differencing operation)
(g) y[n] = 0.5x[n] + 0.5x[n−1] (averaging operation)
(h) y[n] = (1/N) Σ_{k=0}^{N−1} x[n−k] (moving average)
(i) y[n] − αy[n−1] = x[n], 0 < α < 1 (exponential averaging)
(j) y[n] = 0.4(y[n−1] + 2) + x[n]
3.5 (Classication) Classify each system in terms of its linearity, time invariance, memory, causality,
and stability.
(a) The folding system y[n] = x[−n].
(b) The decimating system y[n] = x[2n].
(c) The zero-interpolating system y[n] = x[n/2].
(d) The sign-inversion system y[n] = sgn{x[n]}.
(e) The rectifying system y[n] = |x[n]|.
3.6 (Classification) Classify each system in terms of its linearity, time invariance, causality, and stability.
(a) y[n] = round{x[n]}    (b) y[n] = median{x[n+1], x[n], x[n−1]}
(c) y[n] = x[n] sgn(n)    (d) y[n] = x[n] sgn{x[n]}
c Ashok Ambardar, September 1, 2003
106 Chapter 3 Time-Domain Analysis
3.7 (Realization) Find the dierence equation for each system realization shown in Figure P3.7.
Figure P3.7 System realizations for Problem 3.7 (System 1 and System 2: adders, unit delays z⁻¹, and gain blocks of 2, 3, and 4)
… (f) x[n] = (j)^n u[n] + (−j)^n u[n]
[Hints and Suggestions: For part (e), pick the forced response as y_F[n] = C(j)^n. This will give
a complex response because the input is complex. For part (f), x[n] simplifies to a sinusoid by using
j = e^{jπ/2} and Euler's relation.]
3.13 (Zero-State Response) Find the zero-state response of the following systems.
(a) y[n] − 1.1y[n−1] + 0.3y[n−2] = 2u[n]    (b) y[n] + 0.7y[n−1] + 0.1y[n−2] = (0.5)^n
(c) y[n] − 0.9y[n−1] + 0.2y[n−2] = (0.5)^n    (d) y[n] − 0.25y[n−2] = cos(nπ/2)
[Hints and Suggestions: Zero-state implies y[−1] = y[−2] = 0. For part (b), use y_F[n] = C(0.5)^n,
but for part (c), pick y_F[n] = Cn(0.5)^n because one root of the characteristic equation is 0.5.]
3.14 (System Response) Let y[n] − 0.5y[n−1] = x[n], with y[−1] = 1. Find the response of this system
due to the following inputs for n ≥ 0.
(a) x[n] = 2u[n]    (b) x[n] = (0.25)^n u[n]    (c) x[n] = n(0.25)^n u[n]
(d) x[n] = (0.5)^n u[n]    (e) x[n] = n(0.5)^n u[n]    (f) x[n] = (0.5)^n cos(0.5nπ)u[n]
[Hints and Suggestions: For part (c), pick y_F[n] = (C + Dn)(0.25)^n (and compare coefficients of
like powers of n to solve for C and D). For part (d), pick y_F[n] = Cn(0.5)^n because the root of the
characteristic equation is 0.5. Part (e) requires y_F[n] = n(C + Dn)(0.5)^n for the same reason.]
3.15 (System Response) For the system realization shown in Figure P3.15, find the response to the
following inputs and initial conditions.
(a) x[n] = u[n], y[−1] = 0    (b) x[n] = u[n], y[−1] = 4
(c) x[n] = (0.5)^n u[n], y[−1] = 0    (d) x[n] = (0.5)^n u[n], y[−1] = 6
(e) x[n] = (−0.5)^n u[n], y[−1] = 0    (f) x[n] = (−0.5)^n u[n], y[−1] = 2
Figure P3.15 System realization for Problem 3.15 (an adder and a unit delay z⁻¹ with feedback gain 0.5)
… Σ_{k=0}^{N−1} x[n−k], N = 3 (moving average)
(d) y[n] = (2/(N(N+1))) Σ_{k=0}^{N−1} (N−k) x[n−k], N = 3 (weighted moving average)
(e) y[n] − αy[n−1] = (1−α)x[n], N = 3, α = (N−1)/(N+1) (exponential averaging)
3.18 (System Response) It is known that the response of the system y[n] + αy[n−1] = x[n], α ≠ 0, is
given by y[n] = [5 + 3(−0.5)^n]u[n].
(a) Identify the natural response and forced response.
(b) Identify the values of α and y[−1].
(c) Identify the zero-input response and zero-state response.
(d) Identify the input x[n].
3.19 (System Response) It is known that the response of the system y[n] + 0.5y[n−1] = x[n] is described
by y[n] = [5(0.5)^n + 3(−0.5)^n]u[n].
(a) Identify the zero-input response and zero-state response.
(b) What is the zero-input response of the system y[n] + 0.5y[n−1] = x[n] if y[−1] = 10?
(c) What is the response of the relaxed system y[n] + 0.5y[n−1] = x[n−2]?
(d) What is the response of the relaxed system y[n] + 0.5y[n−1] = x[n−1] + 2x[n]?
3.20 (System Response) It is known that the response of the system y[n] + αy[n−1] = x[n] is described
by y[n] = (5 + 2n)(0.5)^n u[n].
(a) Identify the zero-input response and zero-state response.
(b) What is the zero-input response of the system y[n] + αy[n−1] = x[n] if y[−1] = 10?
(c) What is the response of the relaxed system y[n] + αy[n−1] = x[n−1]?
(d) What is the response of the relaxed system y[n] + αy[n−1] = 2x[n−1] + x[n]?
(e) What is the complete response of the system y[n] + αy[n−1] = x[n] + 2x[n−1] if y[−1] = 4?
3.21 (System Response) Find the response of the following systems.
(a) y[n] + 0.1y[n−1] − 0.3y[n−2] = 2u[n], y[−1] = 0, y[−2] = 0
(b) y[n] − 0.9y[n−1] + 0.2y[n−2] = (0.5)^n, y[−1] = 1, y[−2] = 4
(c) y[n] + 0.7y[n−1] + 0.1y[n−2] = (0.5)^n, y[−1] = 0, y[−2] = 3
(d) y[n] − 0.25y[n−2] = (0.4)^n, y[−1] = 0, y[−2] = 3
(e) y[n] − 0.25y[n−2] = (0.5)^n, y[−1] = 0, y[−2] = 0
[Hints and Suggestions: For parts (b) and (e), pick y_F[n] = Cn(0.5)^n because one root of the
characteristic equation is 0.5.]
3.22 (System Response) Sketch a realization for each system, assuming zero initial conditions. Then
evaluate the complete response from the information given. Check your answer by computing the first
few values by recursion.
(a) y[n] − 0.4y[n−1] = x[n], x[n] = (0.5)^n u[n], y[−1] = 0
(b) y[n] − 0.4y[n−1] = 2x[n] + x[n−1], x[n] = (0.5)^n u[n], y[−1] = 0
(c) y[n] − 0.4y[n−1] = 2x[n] + x[n−1], x[n] = (0.5)^n u[n], y[−1] = 5
(d) y[n] + 0.5y[n−1] = x[n] − x[n−1], x[n] = (0.5)^n u[n], y[−1] = 2
(e) y[n] + 0.5y[n−1] = x[n] − x[n−1], x[n] = (0.5)^n u[n], y[−1] = 0
[Hints and Suggestions: For parts (b)–(c), use the results of part (a) plus linearity (superposition)
and time invariance.]
3.23 (System Response) For each system, evaluate the natural, forced, and total response. Assume that
y[−1] = 0, y[−2] = 1. Check your answer for the total response by computing its first few values by
recursion.
(a) y[n] + 4y[n−1] + 3y[n−2] = u[n]    (b) {1 − 0.5z⁻¹}y[n] = (0.5)^n cos(0.5nπ)u[n]
(c) y[n] + 4y[n−1] + 8y[n−2] = cos(nπ)u[n]    (d) {(1 + 2z⁻¹)²}y[n] = n(2)^n u[n]
(e) {1 + (3/4)z⁻¹ + (1/8)z⁻²}y[n] = (1/3)^n u[n]    (f) {1 + 0.5z⁻¹ + 0.25z⁻²}y[n] = cos(0.5nπ)u[n]
[Hints and Suggestions: For part (b), pick y_F[n] = (0.5)^n [A cos(0.5nπ) + B sin(0.5nπ)], expand
terms like cos[0.5(n−1)π] using trigonometric identities, and compare the coefficients of cos(0.5nπ)
and sin(0.5nπ) to generate two equations to solve for A and B. For part (d), pick y_F[n] = (C + Dn)(2)^n
and compare like powers of n to solve for C and D.]
3.24 (System Response) For each system, evaluate the zero-state, zero-input, and total response. Assume
that y[−1] = 0, y[−2] = 1.
(a) y[n] + 4y[n−1] + 4y[n−2] = 2^n u[n]    (b) {z² + 4z + 4}y[n] = 2^n u[n]
[Hints and Suggestions: In part (b), y[n+2] + 4y[n+1] + 4y[n] = (2)^n u[n]. By time invariance,
y[n] + 4y[n−1] + 4y[n−2] = (2)^{n−2} u[n−2], and we shift the zero-state response of part (a) by 2 units
(n → n−2) and add it to the zero-input response to get the result.]
3.25 (System Response) For each system, set up a difference equation and compute the zero-state,
zero-input, and total response, assuming x[n] = u[n] and y[−1] = y[−2] = 1.
(a) {1 − z⁻¹ − 2z⁻²}y[n] = x[n]    (b) {z² − z − 2}y[n] = x[n]
(c) {1 − (3/4)z⁻¹ + (1/8)z⁻²}y[n] = x[n]    (d) {1 − (3/4)z⁻¹ + (1/8)z⁻²}y[n] = {1 + z⁻¹}x[n]
(e) {1 − 0.25z⁻²}y[n] = x[n]    (f) {z² − 0.25}y[n] = {2z² + 1}x[n]
[Hints and Suggestions: For part (b), use the result of part (a) and time invariance to get the answer
as y_zs[n−2] + y_zi[n]. For part (d), use the result of part (c) to get the answer as y_zi[n] + y_zs[n] + y_zs[n−1].
The answer for part (f) may be similarly obtained from part (e).]
3.26 (Impulse Response by Recursion) Find the impulse response h[n] by recursion up to n = 4 for
each of the following systems.
(a) y[n] − y[n−1] = 2x[n]    (b) y[n] − 3y[n−1] + 6y[n−2] = x[n−1]
(c) y[n] − 2y[n−3] = x[n−1]    (d) y[n] − y[n−1] + 6y[n−2] = nx[n−1] + 2x[n−3]
[Hints and Suggestions: For the impulse response, x[0] = 1 and x[n] = 0 for n ≠ 0.]
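The recursion suggested in the hint can be sketched in code. The helper below is a generic recursive evaluator of our own (not from the text) for equations of the form Σ_k a[k]y[n−k] = Σ_k b[k]x[n−k] with a[0] = 1:

```python
import numpy as np

def impulse_response(a, b, N):
    """First N samples of h[n] by recursion, with x[n] = delta[n]."""
    x = np.zeros(N); x[0] = 1.0          # unit impulse input
    y = np.zeros(N)
    for n in range(N):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc / a[0]
    return y

# e.g. y[n] - 0.5 y[n-1] = x[n]  -->  h[n] = (0.5)^n
print(impulse_response([1, -0.5], [1], 5))  # [1.  0.5  0.25  0.125  0.0625]
```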
3.27 (Analytical Form for Impulse Response) Classify each filter as recursive or FIR (nonrecursive),
and causal or noncausal, and find an expression for its impulse response h[n].
(a) y[n] = x[n] + x[n−1] + x[n−2]    (b) y[n] = x[n+1] + x[n] + x[n−1]
(c) y[n] + 2y[n−1] = x[n]    (d) y[n] + 2y[n−1] = x[n−1]
(e) y[n] + 2y[n−1] = 2x[n] + 6x[n−1]    (f) y[n] + 2y[n−1] = x[n+1] + 4x[n] + 6x[n−1]
(g) {1 + 4z⁻¹ + 3z⁻²}y[n] = {z⁻²}x[n]    (h) {z² + 4z + 4}y[n] = {z + 3}x[n]
(i) {z² + 4z + 8}y[n] = x[n]    (j) y[n] + 4y[n−1] + 4y[n−2] = x[n] − x[n+2]
[Hints and Suggestions: To find the impulse response of the recursive filters, assume y[0] = 1 and
(if required) y[−1] = y[−2] = ⋯ = 0. If the right-hand side of the recursive filter equation is anything
but x[n], start with the single input x[n] and then use superposition and time invariance to get the
result for the required input. The results for (d)–(f) can be found from the results of (c) in this way.]
3.28 (Stability) Investigate the causality and stability of the following right-sided systems.
(a) y[n] = x[n−1] + x[n] + x[n+1]    (b) y[n] = x[n] + x[n−1] + x[n−2]
(c) y[n] − 2y[n−1] = x[n]    (d) y[n] − 0.2y[n−1] = x[n] − 2x[n+2]
(e) y[n] + y[n−1] + 0.5y[n−2] = x[n]    (f) y[n] − y[n−1] + y[n−2] = x[n] − x[n+1]
(g) y[n] − 2y[n−1] + y[n−2] = x[n] − x[n−3]    (h) y[n] − 3y[n−1] + 2y[n−2] = 2x[n+3]
[Hints and Suggestions: Remember that FIR filters are always stable, and for right-sided systems,
every root of the characteristic equation must have a magnitude (absolute value) less than 1.]
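The root-magnitude test in the hint is easy to automate. A sketch (the coefficient lists below are illustrative examples, not the answers to this problem):

```python
import numpy as np

def is_stable(a):
    """Right-sided system sum_k a[k] y[n-k] = ... is stable iff every root
    of the characteristic polynomial has magnitude less than 1."""
    return bool(np.all(np.abs(np.roots(a)) < 1))

print(is_stable([1, -0.2]))    # root at 0.2           -> True
print(is_stable([1, -2]))      # root at 2             -> False
print(is_stable([1, 1, 0.5]))  # roots at -0.5 +- j0.5 -> True
```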
3.29 (System Interconnections) Two systems are said to be in cascade if the output of the first system
acts as the input to the second. Find the response of the following cascaded systems if the input is a
unit step and the systems are described as follows. In which instances does the response differ when the
order of cascading is reversed? Can you use this result to justify that the order in which the systems
are cascaded does not matter in finding the overall response if both systems are LTI?
(a) System 1: y[n] = x[n] − x[n−1]    System 2: y[n] = 0.5y[n−1] + x[n]
(b) System 1: y[n] = 0.5y[n−1] + x[n]    System 2: y[n] = x[n] − x[n−1]
(c) System 1: y[n] = x²[n]    System 2: y[n] = 0.5y[n−1] + x[n]
(d) System 1: y[n] = 0.5y[n−1] + x[n]    System 2: y[n] = x²[n]
3.30 (Systems in Cascade and Parallel) Consider the realization of Figure P3.30.
Figure P3.30 System realization for Problem 3.30 (adders, unit delays z⁻¹, and gains α and β)
(a) Find its impulse response if α ≠ β. Is the overall system FIR or IIR?
(b) Find its difference equation and impulse response if α = β. Is the overall system FIR or IIR?
(c) Find its difference equation and impulse response if α = β = 1. What is the function of the
overall system?
3.31 (Difference Equations from Impulse Response) Find the difference equations describing the
following systems.
(a) h[n] = δ[n] + 2δ[n−1]    (b) h[n] = {2, ⇓3, 1}
(c) h[n] = (0.3)^n u[n]    (d) h[n] = (0.5)^n u[n] − (−0.5)^n u[n]
[Hints and Suggestions: For part (c), the left-hand side of the difference equation is y[n] − 0.3y[n−1].
So, h[n] − 0.3h[n−1], simplified to get impulses, leads to the right-hand side. For part (d), start with
the left-hand side as y[n] − 0.25y[n−2].]
3.32 (Difference Equations from Impulse Response) A system is described by the impulse response
h[n] = (−1)^n u[n]. Find the difference equation of this system. Then find the difference equation of
the inverse system. Does the inverse system describe an FIR filter or IIR filter? What function does
it perform?
3.33 (Difference Equations) For the filter realization shown in Figure P3.33, find the difference equation
relating y[n] and x[n] if the impulse response of the filter is given by
(a) h[n] = δ[n] − δ[n−1]    (b) h[n] = 0.5δ[n] + 0.5δ[n−1]
Figure P3.33 Filter realization for Problem 3.33
3.34 (Difference Equations from Differential Equations) This problem assumes some familiarity
with analog theory. Consider an analog system described by y′(t) + 3y(t) = …
3.41 (Nonrecursive Forms of IIR Filters) An FIR filter may always be exactly represented in recursive
form, but we can only approximately represent an IIR filter by an FIR filter by truncating its impulse
response to N terms. The larger the truncation index N, the better the approximation. Consider the
IIR filter described by y[n] − 0.8y[n−1] = x[n]. Find its impulse response h[n] and truncate it to three
terms to obtain h₃[n], the impulse response of the approximate FIR equivalent. Would you expect the
greatest mismatch in the response of the two filters to identical inputs to occur for lower or higher
values of n? Compare the step response of the two filters up to n = 6 to justify your expectations.
3.42 (Nonlinear Systems) One way to solve nonlinear difference equations is by recursion. Consider the
nonlinear difference equation y[n]y[n−1] − 0.5y²[n−1] = 0.5Au[n].
(a) What makes this system nonlinear?
(b) Using y[−1] = 2, recursively obtain y[0], y[1], and y[2].
(c) Use A = 2, A = 4, and A = 9 in the results of part (b) to confirm that this system finds the
square root of A.
(d) Repeat parts (b) and (c) with y[−1] = 1 to check whether the choice of the initial condition
affects system operation.
3.43 (LTI Concepts and Stability) Argue that neither of the following describes an LTI system. Then,
explain how you might check for their stability and determine which of the systems are stable.
(a) y[n] + 2y[n−1] = x[n] + x²[n]    (b) y[n] − 0.5y[n−1] = nx[n] + x²[n]
3.44 (Response of Causal and Noncausal Systems) A difference equation may describe a causal or
noncausal system depending on how the initial conditions are prescribed. Consider a first-order system
governed by y[n] + αy[n−1] = x[n].
(a) With y[n] = 0, n < 0, this describes a causal system. Assume y[−1] = 0 and find the first few
terms y[0], y[1], . . . of the impulse response and step response, using recursion, and establish the
general form for y[n].
(b) With y[n] = 0, n > 0, we have a noncausal system. Assume y[0] = 0 and rewrite the difference
equation as y[n−1] = (x[n] − y[n])/α to find the first few terms y[0], y[−1], y[−2], . . . of the
impulse response and step response, using recursion, and establish the general form for y[n].
3.45 (Folding) For each signal x[n], sketch g[k] = x[3−k] vs. k and h[k] = x[2+k] vs. k.
(a) x[n] = {⇓1, 2, 3, 4}    (b) x[n] = {3, 3, ⇓3, 2, 2, 2}
[Hints and Suggestions: Note that g[k] and h[k] will be plotted against the index k.]
3.46 (Closed-Form Convolution) Find the convolution y[n] = x[n] ∗ h[n] for the following:
(a) x[n] = u[n], h[n] = u[n]
(b) x[n] = (0.8)^n u[n], h[n] = (0.4)^n u[n]
(c) x[n] = (0.5)^n u[n], h[n] = (0.5)^n {u[n+3] − u[n−4]}
(d) x[n] = α^n u[n], h[n] = α^n u[n]
(e) x[n] = α^n u[n], h[n] = β^n u[n]
(f) x[n] = α^n u[n], h[n] = rect(n/2N)
[Hints and Suggestions: The summations will be over the index k, and functions of n should be
pulled out before evaluating them using tables. For (a), (b), (d), and (e), the summations will run from
k = 0 to k = n. For parts (c) and (f), use superposition. For (a) and (d), the sum Σ_{k=0}^{n} 1 equals n + 1.]
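Closed forms like these can be spot-checked with a finite convolution. For example, u[n] ∗ u[n] = (n+1)u[n], and for distinct bases a and b, a^n u[n] ∗ b^n u[n] = [(a^{n+1} − b^{n+1})/(a − b)] u[n]:

```python
import numpy as np

n_max = 8
n = np.arange(n_max + 1)
u = np.ones(n_max + 1)
y = np.convolve(u, u)[: n_max + 1]         # u * u: should equal n + 1
a, b = 0.8, 0.4
y2 = np.convolve(a**n, b**n)[: n_max + 1]  # the part (b) pair
closed = (a ** (n + 1) - b ** (n + 1)) / (a - b)
print(np.allclose(y, n + 1), np.allclose(y2, closed))  # True True
```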
3.47 (Convolution with Impulses) Find the convolution y[n] = x[n] ∗ h[n] of the following signals.
(a) x[n] = δ[n−1], h[n] = δ[n−1]
(b) x[n] = cos(0.25nπ), h[n] = δ[n] − δ[n−1]
(c) x[n] = cos(0.25nπ), h[n] = δ[n] − 2δ[n−1] + δ[n−2]
(d) x[n] = (−1)^n, h[n] = δ[n] + δ[n−1]
[Hints and Suggestions: Start with δ[n] ∗ g[n] = g[n] and use linearity and time invariance.]
3.48 (Convolution) Find the convolution y[n] = x[n] ∗ h[n] for each pair of signals.
(a) x[n] = (0.4)^n u[n], h[n] = (0.5)^n u[n]
(b) x[n] = α^n u[n], h[n] = β^n u[n]
(c) x[n] = α^n u[−n], h[n] = β^n u[−n]
(d) x[n] = α^{−n} u[−n], h[n] = β^{−n} u[−n]
[Hints and Suggestions: For parts (a) and (b), write the exponentials in the form r^n. For parts (c)
and (d), find the convolution of x[−n] and h[−n] and fold the result to get y[n].]
3.49 (Convolution of Finite Sequences) Find the convolution y[n] = x[n] ∗ h[n] for each of the following
signal pairs. Use a marker to indicate the origin n = 0.
(a) x[n] = {⇓1, 2, 0, 1}, h[n] = {⇓2, 2, 3}
(b) x[n] = {⇓0, 2, 4, 6}, h[n] = {⇓6, 4, 2, 0}
(c) x[n] = {3, 2, ⇓1, 0, 1}, h[n] = {⇓4, 3, 2}
(d) x[n] = {3, 2, ⇓1, 1, 2}, h[n] = {4, ⇓2, 3, 2}
(e) x[n] = {3, 0, 2, 0, ⇓1, 0, 1, 0, 2}, h[n] = {4, 0, ⇓2, 0, 3, 0, 2}
(f) x[n] = {⇓0, 0, 0, 3, 1, 2}, h[n] = {4, ⇓2, 3, 2}
[Hints and Suggestions: Since the starting index of the convolution equals the sum of the starting
indices of the sequences convolved, ignore the markers during convolution and assign the marker as the
last step.]
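The marker bookkeeping in the hint can be expressed as a tiny helper (our own illustration, using the part (c) sequences as printed):

```python
import numpy as np

def conv_with_origin(x, x_start, h, h_start):
    """Convolve finite sequences whose first samples sit at indices x_start
    and h_start; the result's first sample sits at x_start + h_start."""
    return np.convolve(x, h), x_start + h_start

# Part (c): x starts at n = -2 (marker on the third sample), h starts at n = 0
y, y_start = conv_with_origin([3, 2, 1, 0, 1], -2, [4, 3, 2], 0)
print(y, "first sample at n =", y_start)
```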
3.50 (Convolution of Symmetric Sequences) The convolution of sequences that are symmetric about
their midpoint is also endowed with symmetry (about its midpoint). Compute y[n] = x[n] ∗ h[n] for
each pair of signals and use the results to establish the type of symmetry (about the midpoint) in
the convolution if the convolved signals are both even symmetric (about their midpoint), both odd
symmetric (about their midpoint), or one of each type.
(a) x[n] = {2, 1, 2}, h[n] = {1, 0, 1}
(b) x[n] = {2, 1, 2}, h[n] = {1, 1}
(c) x[n] = {2, 2}, h[n] = {1, 1}
(d) x[n] = {2, 0, −2}, h[n] = {1, 0, −1}
(e) x[n] = {2, 0, −2}, h[n] = {1, −1}
(f) x[n] = {2, −2}, h[n] = {1, −1}
(g) x[n] = {2, 1, 2}, h[n] = {1, 0, −1}
(h) x[n] = {2, 1, 2}, h[n] = {1, −1}
(i) x[n] = {2, 2}, h[n] = {1, −1}
3.51 (Properties) Let x[n] = h[n] = {⇓1, 2, 3, 4, 5}. …
…
(d) Use convolution to show that the system performs the required averaging operation.
3.54 (Step Response) Given the impulse response h[n], find the step response s[n] of each system.
(a) h[n] = (0.5)^n u[n]    (b) h[n] = (0.5)^n cos(nπ)u[n]
(c) h[n] = (0.5)^n cos(nπ + 0.5π)u[n]    (d) h[n] = (0.5)^n cos(nπ + 0.25π)u[n]
(e) h[n] = n(0.5)^n u[n]    (f) h[n] = n(0.5)^n cos(nπ)u[n]
[Hints and Suggestions: Note that s[n] = x[n] ∗ h[n], where x[n] = u[n]. In parts (b) and (f), note
that cos(nπ) = (−1)^n. In part (d), expand cos(nπ + 0.25π) and use the results of part (b).]
3.55 (Convolution and System Response) Consider the system y[n] − 0.5y[n−1] = x[n].
(a) What is the impulse response h[n] of this system?
(b) Find its output if x[n] = (0.5)^n u[n] by convolution.
(c) Find its output if x[n] = (0.5)^n u[n] and y[−1] = 0 by solving the difference equation.
(d) Find its output if x[n] = (0.5)^n u[n] and y[−1] = 2 by solving the difference equation.
(e) Are any of the outputs identical? Should they be? Explain.
[Hints and Suggestions: For part (e), remember that convolution finds the zero-state response.]
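The point of part (e) can be checked numerically: convolution with the impulse response (h[n] = (0.5)^n u[n] for this system) reproduces the zero-state response obtained by recursion with y[−1] = 0:

```python
import numpy as np

N = 20
n = np.arange(N)
x = 0.5 ** n                        # x[n] = (0.5)^n u[n]
h = 0.5 ** n                        # h[n] of y[n] - 0.5 y[n-1] = x[n]

y_conv = np.convolve(x, h)[:N]      # zero-state response by convolution

y_rec = np.zeros(N)                 # zero-state response by recursion (y[-1] = 0)
for k in range(N):
    y_rec[k] = (0.5 * y_rec[k - 1] if k > 0 else 0.0) + x[k]

print(np.allclose(y_conv, y_rec))   # True
```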
3.56 (Convolution and Interpolation) Let x[n] = {⇓2, 4, 6, 8}.
(a) Find the convolution y[n] = x[n] ∗ x[n].
(b) Find the convolution y₁[n] = x[2n] ∗ x[2n]. Is y₁[n] related to y[n]? Should it be? Explain.
(c) Find the convolution y₂[n] = x[n/2] ∗ x[n/2], assuming zero interpolation. Is y₂[n] related to
y[n]? Should it be? Explain.
(d) Find the convolution y₃[n] = x[n/2] ∗ x[n/2], assuming step interpolation. Is y₃[n] related to
y[n]? Should it be? Explain.
(e) Find the convolution y₄[n] = x[n/2] ∗ x[n/2], assuming linear interpolation. Is y₄[n] related to
y[n]? Should it be? Explain.
3.57 (Linear Interpolation) Consider a system that performs linear interpolation by a factor of N. One
way to construct such a system, as shown, is to perform up-sampling by N (zero interpolation between
signal samples) and pass the up-sampled signal through a filter with impulse response h[n] whose
output y[n] is the linearly interpolated signal.
x[n] → up-sample (zero interpolate) by N → filter → y[n]
(a) What should h[n] be for linear interpolation by a factor of N?
(b) Let x[n] = 4 tri(0.25n). Find y₁[n] = x[n/2] by linear interpolation.
(c) Find the system output y[n] for N = 2. Does y[n] equal y₁[n]?
3.58 (Causality) Argue that the impulse response h[n] of a causal system must be zero for n < 0. Based
on this result, if the input to a causal system starts at n = n₀, when does the response start?
3.59 (Stability) Investigate the causality and stability of the following systems.
(a) h[n] = (2)^n u[n−1]    (b) y[n] = 2x[n+1] + 3x[n] − x[n−1]
(c) h[n] = (0.5)^n u[n]    (d) h[n] = {3, 2, ⇓1, 1, 2}
(e) h[n] = (0.5)^{−n} u[−n]    (f) h[n] = (0.5)^{|n|}
[Hints and Suggestions: Only one of these is unstable. For part (e), note that summing |h[n]| is
equivalent to summing its folded version.]
3.60 (Numerical Convolution) The convolution y(t) of two analog signals x(t) and h(t) may be approximated
by sampling each signal at intervals t_s to obtain the signals x[n] and h[n], and folding and
shifting the samples of one function past the other in steps of t_s (to line up the samples). At each
instant kt_s, the convolution equals the sum of the product samples multiplied by t_s. This is equivalent
to using the rectangular rule to approximate the area. If x[n] and h[n] are convolved using the sum-by-column
method, the columns make up the product, and their sum multiplied by t_s approximates
y(t) at t = kt_s.
(a) Let x(t) = rect(t/2) and h(t) = rect(t/2). Find y(t) = x(t) ∗ h(t) and compute y(t) at intervals
of t_s = 0.5 s.
(b) Sample x(t) and h(t) at intervals of t_s = 0.5 s to obtain x[n] and h[n]. Compute y[n] = x[n] ∗ h[n]
and the convolution estimate y_R(nt_s) = t_s y[n]. Do the values of y_R(nt_s) match the exact result
y(t) at t = nt_s? If not, what are the likely sources of error?
(c) Argue that the trapezoidal rule for approximating the convolution is equivalent to subtracting
half the sum of the two end samples of each column from the discrete convolution result and
then multiplying by t_s. Use this rule to obtain the convolution estimate y_T(nt_s). Do the values
of y_T(nt_s) match the exact result y(t) at t = nt_s? If not, what are the likely sources of error?
(d) Obtain estimates based on the rectangular rule and trapezoidal rule for the convolution y(t) of
x(t) = 2 tri(t) and h(t) = rect(t/2) by sampling the signals at intervals of t_s = 0.5 s. Which rule
would you expect to yield a better approximation, and why?
3.61 (Convolution) Let x[n] = rect(n/2) and h[n] = rect(n/4).
(a) Find f[n] = x[n] ∗ x[n] and g[n] = h[n] ∗ h[n].
(b) Express these results as f[n] = A tri(n/M) and g[n] = B tri(n/K) by selecting appropriate values
for the constants A, M, B, and K.
(c) Generalize the above results to show that rect(n/2N) ∗ rect(n/2N) = (2N + 1) tri(n/(2N+1)).
3.62 (Impulse Response of Difference Algorithms) Two systems to compute the forward difference
and backward difference are described by
Forward difference: y_F[n] = x[n+1] − x[n]    Backward difference: y_B[n] = x[n] − x[n−1]
(a) What is the impulse response of each system?
(b) Which of these systems is stable? Which of these systems is causal?
(c) Find the impulse response of their parallel connection. Is the parallel system stable? Is it causal?
(d) What is the impulse response of their cascade? Is the cascaded system stable? Is it causal?
3.63 (System Response) Find the response of the following filters to the unit step x[n] = u[n], and to
the alternating unit step x[n] = (−1)^n u[n], using convolution concepts.
(a) h[n] = δ[n] − δ[n−1] (differencing operation)
(b) h[n] = Σ_{k=0}^{N−1} δ[n−k], N = 3 (moving average)
(d) h[n] = (2/(N(N+1))) Σ_{k=0}^{N−1} (N−k) δ[n−k], N = 3 (weighted moving average)
(e) y[n] − ((N−1)/(N+1)) y[n−1] = (2/(N+1)) x[n], N = 3 (exponential average)
3.64 (Convolution and Interpolation) Consider the following system with x[n] = {⇓1, 1}. Show that,
except for end effects, the output describes a step interpolation between the samples of x[n].
(b) Find the response y[n] if N = 3 and the filter impulse response is h[n] = {⇓1, 1, 1}. Does the
output describe a step interpolation between the samples of x[n]?
(c) Pick N and h[n] if the system is to perform step interpolation by 4.
3.65 (Convolution and Interpolation) Consider the following system with x[n] = …
Figure P3.67 System realization for Problem 3.67
(a) Find its impulse response if α ≠ β. Is the overall system FIR or IIR?
(b) Find its impulse response if α = β. Is the overall system FIR or IIR?
(c) Find its impulse response if α = β = 1. What does the overall system represent?
3.68 (Cascading) The impulse response of two cascaded systems equals the convolution of their impulse
responses. Does the step response s_C[n] of two cascaded systems equal s₁[n] ∗ s₂[n], the convolution of
their step responses? If not, how is s_C[n] related to s₁[n] and s₂[n]?
3.69 (Cascading) System 1 is a squaring circuit, and system 2 is an exponential averager described by
h[n] = (0.5)^n u[n]. Find the output of each cascaded combination. Will their outputs be identical?
Should they be? Explain.
(a) 2(0.5)^n u[n] → system 1 → system 2 → y[n]
(b) 2(0.5)^n u[n] → system 2 → system 1 → y[n]
3.70 (Cascading) System 1 is an IIR filter with the difference equation y[n] = 0.5y[n−1] + x[n], and
system 2 is a filter with impulse response h[n] = δ[n] − δ[n−1]. Find the output of each cascaded
combination. Will their outputs be identical? Should they be? Explain.
(a) 2(0.5)^n u[n] → system 1 → system 2 → y[n]
(b) 2(0.5)^n u[n] → system 2 → system 1 → y[n]
3.71 (Cascading) System 1 is an IIR filter with the difference equation y[n] = 0.5y[n−1] + x[n], and
system 2 is a filter with impulse response h[n] = δ[n] − (0.5)^n u[n].
(a) Find the impulse response h_P[n] of their parallel connection.
(b) Find the impulse response h₁₂[n] of the cascade of system 1 and system 2.
(c) Find the impulse response h₂₁[n] of the cascade of system 2 and system 1.
(d) Are h₁₂[n] and h₂₁[n] identical? Should they be? Explain.
(e) Find the impulse response h_I[n] of a system whose parallel connection with h₁₂[n] yields h_P[n].
3.72 (Cascading) System 1 is a lowpass filter described by y[n] = 0.5y[n−1] + x[n], and system 2 is
described by h[n] = δ[n] − 0.5δ[n−1].
(a) What is the output of the cascaded system to the input x[n] = 2(0.5)^n u[n]?
(b) What is the output of the cascaded system to the input x[n] = δ[n]?
(c) How are the two systems related?
3.73 (Convolution in Practice) Often, the convolution of a long sequence x[n] and a short sequence h[n]
is performed by breaking the long signal into shorter pieces, finding the convolution of each short piece
with h[n], and gluing the results together. Let x[n] = {1, 1, 2, 3, 5, 4, 3, 1} and h[n] = {4, 3, 2, 1}.
(a) Split x[n] into two equal sequences x₁[n] = {1, 1, 2, 3} and x₂[n] = {5, 4, 3, 1}.
(b) Find the convolution y₁[n] = h[n] ∗ x₁[n].
(c) Find the convolution y₂[n] = h[n] ∗ x₂[n].
(d) Find the convolution y[n] = h[n] ∗ x[n].
(e) How can you find y[n] from y₁[n] and y₂[n]?
[Hints and Suggestions: For part (e), use superposition and add the shifted version of y₂[n] to y₁[n]
to get y[n]. This forms the basis for the overlap-add method of convolution.]
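The overlap-add idea in the hint, sketched with the problem's sequences:

```python
import numpy as np

x = np.array([1, 1, 2, 3, 5, 4, 3, 1])
h = np.array([4, 3, 2, 1])

y_full = np.convolve(x, h)            # direct convolution, length 11

# Overlap-add: convolve each half, then add the second result
# shifted by the length of the first piece (4 samples).
y1 = np.convolve(x[:4], h)            # covers n = 0..6
y2 = np.convolve(x[4:], h)            # covers n = 4..10 after shifting
y = np.zeros(len(x) + len(h) - 1)
y[: len(y1)] += y1
y[4 : 4 + len(y2)] += y2

print(np.array_equal(y, y_full))      # True
```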
3.74 (Periodic Convolution) Find the regular convolution y[n] = x[n] ∗ h[n] of one period of each pair
of periodic signals. Then, use wraparound to compute the periodic convolution y_p[n] = x[n] ⊛ h[n]. In
each case, specify the minimum number of padding zeros we must use if we wish to find the regular
convolution from the periodic convolution of the zero-padded signals.
(a) x[n] = {⇓1, 2, 0, 1}, h[n] = {⇓2, 2, 3, 0}
(b) x[n] = {⇓0, 2, 4, 6}, h[n] = {⇓6, 4, 2, 0}
(c) x[n] = {3, 2, ⇓1, 0, 1}, h[n] = {⇓4, 3, 2, 0, 0}
(d) x[n] = {3, 2, 1, ⇓1, 2}, h[n] = {4, 2, 3, ⇓2, 0}
[Hints and Suggestions: First assign the marker for the regular convolution. After wraparound,
this also corresponds to the marker for the periodic convolution.]
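Wraparound can be scripted directly. The sketch below uses the part (a) sequences and cross-checks against the DFT identity that periodic convolution corresponds to a product of DFTs:

```python
import numpy as np

def periodic_convolve(x, h):
    """Regular convolution followed by wraparound modulo the period N."""
    N = len(x)                    # both sequences span one period N
    y = np.convolve(x, h)         # regular convolution, length 2N - 1
    yp = np.zeros(N)
    for k, v in enumerate(y):
        yp[k % N] += v            # wrap samples beyond n = N - 1 around
    return yp

x = np.array([1.0, 2.0, 0.0, 1.0])
h = np.array([2.0, 2.0, 3.0, 0.0])
yp = periodic_convolve(x, h)
print(yp)                         # [4. 9. 7. 8.]
print(np.allclose(yp, np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))))  # True
```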
3.75 (Periodic Convolution) Find the periodic convolution y_p[n] = x[n] ⊛ h[n] for each pair of signals
using the circulant matrix for x[n].
(a) x[n] = {⇓1, 2, 0, 1}, h[n] = {⇓2, 2, 3, 0}
(b) x[n] = {⇓0, 2, 4, 6}, h[n] = {⇓6, 4, 2, 0}
3.76 (Periodic Convolution) Consider a system whose impulse response is h[n] = (0.5)^n u[n]. Show that
one period of its periodic extension with period N is given by h_pe[n] = (0.5)^n / [1 − (0.5)^N], 0 ≤ n ≤ N−1.
Use this result to find the response of this system to the following periodic inputs.
(a) x[n] = cos(nπ)    (b) x[n] = {⇓1, 1, 0, 0}, with N = 4
(c) x[n] = cos(0.5nπ)    (d) x[n] = (0.5)^n, 0 ≤ n ≤ 3, with N = 4
[Hints and Suggestions: In each case, compute N samples of h_pe[n] and then get the periodic
convolution. For example, the period of x[n] in part (a) is N = 2.]
3.77 (Correlation) For each pair of signals, compute the autocorrelation r_xx[n], the autocorrelation r_hh[n],
the cross-correlation r_xh[n], and the cross-correlation r_hx[n]. For each result, indicate the location of
the origin n = 0 by a marker.
(a) x[n] = {⇓1, 2, 0, 1}, h[n] = {⇓2, 2, 3}
(b) x[n] = {⇓0, 2, 4, 6}, h[n] = {⇓6, 4, 2}
(c) x[n] = {3, 2, ⇓1, 2}, h[n] = {⇓4, 3, 2}
(d) x[n] = {3, 2, ⇓1, 1, 2}, h[n] = {4, ⇓2, 3, 2}
[Hints and Suggestions: Use convolution to get the correlation results. For example, r_xh[n] =
x[n] ∗ h[−n], and the marker for the result is based on x[n] and h[−n] (the sequences convolved).]
3.78 (Correlation) Let x[n] = rect[(n − 4)/2] and h[n] = rect[n/4].
(a) Find the autocorrelation r_xx[n].
(b) Find the autocorrelation r_hh[n].
(c) Find the cross-correlation r_xh[n].
(d) Find the cross-correlation r_hx[n].
(e) How are the results of parts (c) and (d) related?
3.79 (Correlation) Find the correlation r_xh[n] of the following signals.
(a) x[n] = α^n u[n]   h[n] = α^n u[n]
(b) x[n] = nα^n u[n]   h[n] = α^n u[n]
(c) x[n] = rect(n/2N)   h[n] = rect(n/2N)
[Hints and Suggestions: In parts (a) and (b), each correlation will cover two ranges. For n ≥ 0, the signals overlap over n ≤ k < ∞, and for n < 0, the overlap is for 0 ≤ k < ∞. For part (c), x[n] and h[n] are identical and even symmetric, and their correlation equals their convolution.]
3.80 (Periodic Correlation) For each pair of periodic signals described for one period, compute the periodic autocorrelations r_pxx[n] and r_phh[n], and the periodic cross-correlations r_pxh[n] and r_phx[n]. For each result, indicate the location of the origin n = 0 by a marker.
(a) x[n] = {⇓1, 2, 0, 1}   h[n] = {⇓2, 2, 3, 0}
(b) x[n] = {⇓0, 2, 4, 6}   h[n] = {⇓6, 4, 2, 0}
(c) x[n] = {3, 2, ⇓1, 2}   h[n] = {0, ⇓4, 3, 2}
(d) x[n] = {3, 2, ⇓1, 1, 2}   h[n] = {4, ⇓2, 3, 2, 0}
[Hints and Suggestions: First get the regular correlation (by regular convolution) and then use wraparound. For example, r_xh[n] = x[n] ⋆ h[−n], and the marker for the result is based on x[n] and h[−n] (the sequences convolved). Then, use wraparound to get r_pxh[n] (the marker may get wrapped around in some cases).]
3.81 (Mean and Variance from Autocorrelation) The mean value m_x of a random signal x[n] (with nonzero mean value) may be computed from its autocorrelation function r_xx[n] as
m_x² = lim_{|n|→∞} r_xx[n]
The variance of x[n] is then given by σ_x² = r_xx[0] − m_x². Find the mean, variance, and average power of a random signal whose autocorrelation function is
r_xx[n] = 10 (1 + 2n²)/(2 + 5n²)
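The numbers here fall out directly from the two formulas; a quick check (my own sketch, using exact rational arithmetic so the limit can be read off the n² coefficients):

```python
from fractions import Fraction

def rxx(n):
    # autocorrelation of Problem 3.81: r_xx[n] = 10(1 + 2n^2)/(2 + 5n^2)
    return Fraction(10 * (1 + 2 * n * n), 2 + 5 * n * n)

m_sq = Fraction(10 * 2, 5)     # limit as |n| -> inf: 10 * (2/5) = 4, so m_x = +/-2
power = rxx(0)                 # average power = r_xx[0] = 5
var = power - m_sq             # variance = r_xx[0] - m_x^2 = 1
print(m_sq, power, var)        # -> 4 5 1
```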
COMPUTATION AND DESIGN
3.82 (Numerical Integration Algorithms) Numerical integration algorithms approximate the area y[n] from y[n−1] or y[n−2] (one or more time steps away). Consider the following integration algorithms.
(a) y[n] = y[n−1] + t_s x[n]   (rectangular rule)
(b) y[n] = y[n−1] + (t_s/2)(x[n] + x[n−1])   (trapezoidal rule)
(c) y[n] = y[n−1] + (t_s/12)(5x[n] + 8x[n−1] − x[n−2])   (Adams-Moulton rule)
(d) y[n] = y[n−2] + (t_s/3)(x[n] + 4x[n−1] + x[n−2])   (Simpson's rule)
(e) y[n] = y[n−3] + (3t_s/8)(x[n] + 3x[n−1] + 3x[n−2] + x[n−3])   (Simpson's three-eighths rule)
Use each of the rules to approximate the area of x(t) = sinc(t), 0 ≤ t ≤ 3, with t_s = 0.1 s and t_s = 0.3 s, and compare with the expected result of 0.53309323761827. How does the choice of the time step t_s affect the results? Which algorithm yields the most accurate results?
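As a rough preview of what to expect, here is a minimal Python sketch (mine, not the text's) of the first two rules applied to sinc(t) on [0, 3]; the trapezoidal sum already lands well within 10⁻³ of the expected area, while the rectangular rule is noticeably cruder, and halving the step shrinks the trapezoidal error roughly fourfold:

```python
import math

def sinc(t):
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def integrate(rule, ts, T=3.0):
    n_max = round(T / ts)
    x = [sinc(n * ts) for n in range(n_max + 1)]
    y = 0.0
    for n in range(1, n_max + 1):
        if rule == "rect":
            y += ts * x[n]                       # y[n] = y[n-1] + ts x[n]
        else:
            y += 0.5 * ts * (x[n] + x[n - 1])    # y[n] = y[n-1] + (ts/2)(x[n] + x[n-1])
    return y

exact = 0.53309323761827
print(abs(integrate("trap", 0.1) - exact))   # small (well under 1e-3)
print(abs(integrate("rect", 0.1) - exact))   # roughly 0.05
```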
3.83 (System Response) Use the Matlab routine filter to obtain and plot the response of the filter described by y[n] = 0.25(x[n] + x[n−1] + x[n−2] + x[n−3]) to the following inputs and comment on your results.
(a) x[n] = 1, 0 ≤ n ≤ 60
(b) x[n] = 0.1n, 0 ≤ n ≤ 60
(c) x[n] = sin(0.1nπ), 0 ≤ n ≤ 60
(d) x[n] = 0.1n + sin(0.5nπ), 0 ≤ n ≤ 60
(e) x[n] = Σ_{k=−∞}^{∞} δ[n − 5k], 0 ≤ n ≤ 60
(f) x[n] = Σ_{k=−∞}^{∞} δ[n − 4k], 0 ≤ n ≤ 60
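Outside Matlab, the FIR case of filter is a one-line sum; a minimal stand-in (my own sketch) applied to input (a) shows the averager ramping up over four samples and then holding at the input level:

```python
def fir_filter(b, x):
    # y[n] = sum_k b[k] x[n-k]; a minimal stand-in for Matlab's filter(b, 1, x)
    return [sum(bk * x[n - k] for k, bk in enumerate(b) if n - k >= 0)
            for n in range(len(x))]

b = [0.25] * 4                  # y[n] = 0.25(x[n] + x[n-1] + x[n-2] + x[n-3])
step = [1.0] * 20               # input (a): x[n] = 1
y = fir_filter(b, step)
print(y[:5])                    # -> [0.25, 0.5, 0.75, 1.0, 1.0]
```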
3.84 (System Response) Use the Matlab routine filter to obtain and plot the response of the filter described by y[n] − y[n−4] = 0.25(x[n] + x[n−1] + x[n−2] + x[n−3]) to the following inputs and comment on your results.
(a) x[n] = 1, 0 ≤ n ≤ 60
(b) x[n] = 0.1n, 0 ≤ n ≤ 60
(c) x[n] = sin(0.1nπ), 0 ≤ n ≤ 60
(d) x[n] = 0.1n + sin(0.5nπ), 0 ≤ n ≤ 60
(e) x[n] = Σ_{k=−∞}^{∞} δ[n − 5k], 0 ≤ n ≤ 60
(f) x[n] = Σ_{k=−∞}^{∞} δ[n − 4k], 0 ≤ n ≤ 60
3.85 (System Response) Use Matlab to obtain and plot the response of the following systems over the range 0 ≤ n ≤ 199.
(a) y[n] = x[n/3], x[n] = (0.9)^n u[n] (assume zero interpolation)
(b) y[n] = cos(0.2nπ)x[n], x[n] = cos(0.04nπ) (modulation)
(c) y[n] = [1 + cos(0.2nπ)]x[n], x[n] = cos(0.04nπ) (modulation)
3.86 (System Response) Use Matlab to obtain and plot the response of the following filters, using direct commands (where possible) and also using the routine filter, and compare your results. Assume that the input is given by x[n] = 0.1n + sin(0.1nπ), 0 ≤ n ≤ 60. Comment on your results.
(a) y[n] = (1/N) Σ_{k=0}^{N−1} x[n−k], N = 4 (moving average)
(b) y[n] = [2/(N(N+1))] Σ_{k=0}^{N−1} (N−k)x[n−k], N = 4 (weighted moving average)
(c) y[n] − αy[n−1] = (1 − α)x[n], N = 4, α = (N−1)/(N+1) (exponential average)
3.87 (System Response) Use Matlab to obtain and plot the response of the following filters, using direct commands and using the routine filter, and compare your results. Use an input that consists of the sum of the signal x[n] = 0.1n + sin(0.1nπ), 0 ≤ n ≤ 60, and uniformly distributed random noise with a mean of 0. Comment on your results.
(a) y[n] = (1/N) Σ_{k=0}^{N−1} x[n−k], N = 4 (moving average)
(b) y[n] = [2/(N(N+1))] Σ_{k=0}^{N−1} (N−k)x[n−k], N = 4 (weighted moving average)
(c) y[n] − αy[n−1] = (1 − α)x[n], N = 4, α = (N−1)/(N+1) (exponential averaging)
3.88 (System Response) Use the Matlab routine filter to obtain and plot the response of the following FIR filters. Assume that x[n] = sin(nπ/8), 0 ≤ n ≤ 60. Comment on your results. From the results, can you describe the function of these filters?
(a) y[n] = x[n] − x[n−1] (first difference)
(b) y[n] = x[n] − 2x[n−1] + x[n−2] (second difference)
(c) y[n] = (1/3)(x[n] + x[n−1] + x[n−2]) (moving average)
(d) y[n] = 0.5x[n] + x[n−1] + 0.5x[n−2] (weighted average)
3.89 (System Response in Symbolic Form) Determine the response y[n] of the following filters and plot over 0 ≤ n ≤ 30.
(a) The step response of y[n] − 0.5y[n−1] = x[n]
(b) The impulse response of y[n] − 0.5y[n−1] = x[n]
(c) The zero-state response of y[n] − 0.5y[n−1] = (0.5)^n u[n]
(d) The complete response of y[n] − 0.5y[n−1] = (0.5)^n u[n], y[−1] = 4
(e) The complete response of y[n] + y[n−1] + 0.5y[n−2] = (0.5)^n u[n], y[−1] = 4, y[−2] = 3
3.90 (Inverse Systems and Echo Cancellation) A signal x(t) is passed through the echo-generating system y(t) = x(t) + 0.9x(t − τ) + 0.8x(t − 2τ), with τ = 93.75 ms. The resulting echo signal y(t) is sampled at S = 8192 Hz to obtain the sampled signal y[n].
(a) The difference equation of a digital filter that generates the output y[n] from x[n] may be written as y[n] = x[n] + 0.9x[n − N] + 0.8x[n − 2N]. What is the value of the index N?
(b) What is the difference equation of an echo-canceling filter (inverse filter) that could be used to recover the input signal x[n]?
(c) The echo signal is supplied on the author's website as echosig.mat. Load this signal into Matlab (using the command load echosig). Listen to this signal using the Matlab command sound. Can you hear the echoes? Can you make out what is being said?
(d) Filter the echo signal using your inverse filter and listen to the filtered signal. Have you removed the echoes? Can you make out what is being said? Do you agree with what is being said?
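The inverse filter of part (b) is recursive, since the echo terms must be subtracted using already-recovered samples. A hedged Python sketch (my own; the delay N = 0.09375 × 8192 = 768 follows from part (a), and a synthetic sinusoid stands in for the speech file) showing that the round trip recovers the input:

```python
import math

N = int(0.09375 * 8192)        # echo delay in samples: 768

def add_echo(x):
    # y[n] = x[n] + 0.9 x[n-N] + 0.8 x[n-2N]
    return [x[n]
            + (0.9 * x[n - N] if n >= N else 0.0)
            + (0.8 * x[n - 2 * N] if n >= 2 * N else 0.0)
            for n in range(len(x))]

def remove_echo(y):
    # inverse filter: x[n] = y[n] - 0.9 x[n-N] - 0.8 x[n-2N], computed recursively
    x = []
    for n in range(len(y)):
        v = y[n]
        if n >= N:
            v -= 0.9 * x[n - N]
        if n >= 2 * N:
            v -= 0.8 * x[n - 2 * N]
        x.append(v)
    return x

x = [math.sin(0.01 * n) for n in range(4000)]
err = max(abs(a - b) for a, b in zip(remove_echo(add_echo(x)), x))
print(err)    # essentially zero (round-off only)
```

The recursion is stable here because the roots of w² + 0.9w + 0.8 = 0 (with w = z^{−N}) have magnitude √0.8 < 1.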
3.91 (Nonrecursive Forms of IIR Filters) An FIR filter may always be exactly represented in recursive form, but we can only approximately represent an IIR filter by an FIR filter by truncating its impulse response to N terms. The larger the truncation index N, the better is the approximation. Consider the IIR filter described by y[n] − 0.8y[n−1] = x[n]. Find its impulse response h[n] and truncate it to 20 terms to obtain h_A[n], the impulse response of the approximate FIR equivalent. Would you expect the greatest mismatch in the response of the two filters to identical inputs to occur for lower or higher values of n?
(a) Use the Matlab routine filter to find and compare the step response of each filter up to n = 15. Are there any differences? Should there be? Repeat by extending the response to n = 30. Are there any differences? For how many terms does the response of the two systems stay identical, and why?
(b) Use the Matlab routine filter to find and compare the response to x[n] = 1, 0 ≤ n ≤ 10, for each filter up to n = 15. Are there any differences? Should there be? Repeat by extending the response to n = 30. Are there any differences? For how many terms does the response of the two systems stay identical, and why?
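A quick numerical sketch (mine, not the text's) of part (a) confirms the expected behavior: with h[n] = (0.8)^n u[n] truncated at 20 terms, the two step responses agree exactly up to n = 19 and then diverge, since the truncated tail only matters once it is reached:

```python
def iir_step(n_max):
    # y[n] - 0.8 y[n-1] = x[n] with x[n] = u[n]; h[n] = (0.8)^n u[n]
    y = []
    for n in range(n_max + 1):
        y.append(1.0 + (0.8 * y[n - 1] if n > 0 else 0.0))
    return y

def fir_step(n_max, terms=20):
    # truncated FIR equivalent: h_A[n] = (0.8)^n for 0 <= n <= 19
    h = [0.8 ** k for k in range(terms)]
    return [sum(h[k] for k in range(min(terms, n + 1))) for n in range(n_max + 1)]

y1, y2 = iir_step(30), fir_step(30)
same = max(abs(a - b) for a, b in zip(y1[:20], y2[:20]))   # zero up to n = 19
gap = abs(y1[30] - y2[30])                                  # nonzero from n = 20 on
print(same, gap)
```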
3.92 (Convolution of Symmetric Sequences) The convolution of sequences that are symmetric about their midpoint is also endowed with symmetry (about its midpoint). Use the Matlab command conv to find the convolution of the following sequences and establish the type of symmetry (about the midpoint) in the convolution.
(a) x[n] = sin(0.2nπ), −10 ≤ n ≤ 10   h[n] = sin(0.2nπ), −10 ≤ n ≤ 10
(b) x[n] = sin(0.2nπ), −10 ≤ n ≤ 10   h[n] = cos(0.2nπ), −10 ≤ n ≤ 10
(c) x[n] = cos(0.2nπ), −10 ≤ n ≤ 10   h[n] = cos(0.2nπ), −10 ≤ n ≤ 10
(d) x[n] = sinc(0.2n), −10 ≤ n ≤ 10   h[n] = sinc(0.2n), −10 ≤ n ≤ 10
3.93 (Extracting Periodic Signals Buried in Noise) Extraction of periodic signals buried in noise requires autocorrelation (to identify the period) and cross-correlation (to recover the signal itself).
(a) Generate the signal x[n] = sin(0.1nπ), 0 ≤ n ≤ 499. Add some uniform random noise (with a noise amplitude of 2 and a mean of 0) to obtain the noisy signal s[n]. Plot each signal. Can you identify any periodicity from the plot of x[n]? If so, what is the period N? Can you identify any periodicity from the plot of s[n]?
(b) Obtain the periodic autocorrelation r_px[n] of x[n] and plot. Can you identify any periodicity from the plot of r_px[n]? If so, what is the period N? Is it the same as the period of x[n]?
(c) Use the value of N found above (or identify N from x[n] if not) to generate the 500-sample impulse train i[n] = Σ_k δ[n − kN], 0 ≤ n ≤ 499. Find the periodic cross-correlation y[n] of s[n] and i[n]. Choose a normalizing factor that makes the peak value of y[n] unity. How is the normalizing factor related to the signal length and the period N?
(d) Plot y[n] and x[n] on the same plot. Is y[n] a close match to x[n]? Explain how you might improve the results.
Chapter 4
z-TRANSFORM ANALYSIS
4.0 Scope and Objectives
This chapter deals with the z-transform as a method of system analysis in a transformed domain. Even
though its genesis was outlined in the previous chapter, we develop the z-transform as an independent
transformation method in order to keep the discussion self-contained. We concentrate on the operational
properties of the z-transform and its applications in systems analysis. Connections with other transform
methods and system analysis methods are explored in later chapters.
4.1 The Two-Sided z-Transform
The two-sided z-transform X(z) of a discrete signal x[n] is defined as
X(z) = Σ_{k=−∞}^{∞} x[k] z^{−k}   (two-sided z-transform)   (4.1)
The relation between x[n] and X(z) is denoted symbolically by
x[n] ⇔ X(z)   (4.2)
Here, x[n] and X(z) form a transform pair, and the double arrow implies a one-to-one correspondence between the two.
4.1.1 What the z-Transform Reveals
The complex quantity z generalizes the concept of digital frequency F or Ω to the complex domain and is usually described in polar form as
z = |r|e^{j2πF} = |r|e^{jΩ}   (4.3)
Values of the complex quantity z may be displayed graphically in the z-plane in terms of its real and imaginary parts or in terms of its magnitude and angle.
The defining relation for the z-transform is a power series (Laurent series) in z. The term for each index k is the product of the sample value x[k] and z^{−k}. For the sequence x[n] = {7, 3, ⇓0, 3, 5}, for example, the defining relation gives X(z) = 7z² + 3z + 3z^{−1} + 5z^{−2}.
Since the defining relation for X(z) describes a power series, it may not converge for all z. The values of z for which it does converge define the region of convergence (ROC) for X(z). Two completely different sequences may produce the same two-sided z-transform X(z), but with different regions of convergence. It is important that we specify the ROC associated with each X(z), especially when dealing with the two-sided z-transform.
4.1.2 Some z-Transform Pairs Using the Defining Relation
Table 4.1 lists the z-transforms of some useful signals. We provide some examples using the defining relation to find z-transforms. For finite-length sequences, the z-transform may be written as a polynomial in z. For sequences with a large number of terms, the polynomial form can get to be unwieldy unless we can find closed-form solutions.
EXAMPLE 4.1 (The z-Transform from the Defining Relation)
(a) Let x[n] = δ[n]. Its z-transform is X(z) = 1. The ROC is the entire z-plane.

(b) Let x[n] = 2δ[n+1] + δ[n] − 5δ[n−1] + 4δ[n−2]. This describes the sequence x[n] = {2, ⇓1, −5, 4}. Its z-transform is evaluated as X(z) = 2z + 1 − 5z^{−1} + 4z^{−2}. No simplifications are possible. The ROC is the entire z-plane, except z = 0 and z = ∞ (or 0 < |z| < ∞).

(c) Let x[n] = u[n] − u[n − N]. This represents a sequence of N samples, and its z-transform may be written as
X(z) = 1 + z^{−1} + z^{−2} + ··· + z^{−(N−1)}
Its ROC is |z| > 0 (the entire z-plane except z = 0). A closed-form result for X(z) may be found using the defining relation as follows:
X(z) = Σ_{k=0}^{N−1} z^{−k} = (1 − z^{−N})/(1 − z^{−1}),   z ≠ 1
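The closed form in part (c) is easy to sanity-check numerically at an arbitrary complex test point (a quick sketch of my own, not part of the text):

```python
# Compare the direct power-series sum for x[n] = u[n] - u[n-N] against the
# closed form (1 - z^-N)/(1 - z^-1) at an arbitrary point z (z != 0, 1).
N = 8
z = 1.3 - 0.4j
direct = sum(z ** (-k) for k in range(N))
closed = (1 - z ** (-N)) / (1 - z ** (-1))
print(abs(direct - closed))    # essentially zero (round-off only)
```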
Table 4.1 A Short Table of z-Transform Pairs

Entry   Signal                 z-Transform                               ROC
Finite Sequences
1       δ[n]                   1                                         all z
2       u[n] − u[n−N]          (1 − z^{−N})/(1 − z^{−1})                 z ≠ 0
Causal Signals
3       u[n]                   z/(z − 1)                                 |z| > 1
4       α^n u[n]               z/(z − α)                                 |z| > |α|
5       (−α)^n u[n]            z/(z + α)                                 |z| > |α|
6       n u[n]                 z/(z − 1)²                                |z| > 1
7       n α^n u[n]             αz/(z − α)²                               |z| > |α|
8       cos(nΩ)u[n]            (z² − z cos Ω)/(z² − 2z cos Ω + 1)        |z| > 1
9       sin(nΩ)u[n]            (z sin Ω)/(z² − 2z cos Ω + 1)             |z| > 1
10      α^n cos(nΩ)u[n]        (z² − αz cos Ω)/(z² − 2αz cos Ω + α²)     |z| > |α|
11      α^n sin(nΩ)u[n]        (αz sin Ω)/(z² − 2αz cos Ω + α²)          |z| > |α|
Anti-Causal Signals
12      −u[−n−1]               z/(z − 1)                                 |z| < 1
13      −n u[−n−1]             z/(z − 1)²                                |z| < 1
14      −α^n u[−n−1]           z/(z − α)                                 |z| < |α|
15      −n α^n u[−n−1]         αz/(z − α)²                               |z| < |α|
(d) Let x[n] = u[n]. We evaluate its z-transform using the defining relation as follows:
X(z) = Σ_{k=0}^{∞} z^{−k} = Σ_{k=0}^{∞} (z^{−1})^k = 1/(1 − z^{−1}) = z/(z − 1),   ROC: |z| > 1
Its ROC is |z| > 1 and is based on the fact that the geometric series Σ_{k=0}^{∞} r^k converges only if |r| < 1.

(e) Let x[n] = α^n u[n]. Using the defining relation, its z-transform and ROC are
X(z) = Σ_{k=0}^{∞} α^k z^{−k} = Σ_{k=0}^{∞} (α/z)^k = 1/[1 − (α/z)] = z/(z − α),   ROC: |z| > |α|
Its ROC (|z| > |α|) is also based on the fact that the geometric series Σ_{k=0}^{∞} r^k converges only if |r| < 1.
DRILL PROBLEM 4.2
(a) Let x[n] = (0.5)^n u[n]. Find its z-transform X(z) and ROC.
(b) Let y[n] = (−0.5)^n u[n]. Find its z-transform Y(z) and ROC.
(c) Let g[n] = −(0.5)^n u[−n−1]. Find its z-transform G(z) and ROC.
Answers: (a) X(z) = z/(z − 0.5), |z| > 0.5   (b) Y(z) = z/(z + 0.5), |z| > 0.5   (c) G(z) = z/(z − 0.5), |z| < 0.5
4.1.3 More on the ROC
For a finite sequence x[n], the z-transform X(z) is a polynomial in z or z^{−1} and converges (is finite) for all z, except z = 0 if X(z) contains terms of the form z^{−k} (or x[n] is nonzero for n > 0), and/or z = ∞ if X(z) contains terms of the form z^k (or x[n] is nonzero for n < 0). Thus, the ROC for finite sequences is the entire z-plane, except perhaps for z = 0 and/or z = ∞, as applicable.
In general, if X(z) is a rational function in z, as is often the case, its ROC actually depends on the one- or two-sidedness of x[n], as illustrated in Figure 4.1.
The ROC excludes all pole locations (denominator roots) where X(z) becomes infinite. As a result, the ROC of right-sided signals is |z| > |p|_max and lies exterior to a circle of radius |p|_max, the magnitude of the largest pole. The ROC of causal signals, with x[n] = 0, n < 0, also includes z = ∞ and is given by |z| > |p|_max. Similarly, the ROC of a left-sided signal x[n] is |z| < |p|_min and lies interior to a circle of radius |p|_min, the smallest pole magnitude of X(z). Finally, the ROC of a two-sided signal x[n] is |p|_min < |z| < |p|_max, an annulus whose radii correspond to the smallest and largest pole magnitudes in X(z).
We use inequalities of the form |z| < |α| (and not |z| ≤ |α|), for example, because X(z) may not converge at the boundary |z| = |α|.
Figure 4.1 The ROC (shown shaded) of the z-transform for various sequences: right-sided, left-sided, and two-sided
REVIEW PANEL 4.2
The ROC of the z-Transform X(z) Determines the Nature of the Signal x[n]
Finite-length x[n]: ROC of X(z) is all the z-plane, except perhaps for z = 0 and/or z = ∞.
Right-sided x[n]: ROC of X(z) is outside a circle whose radius is the largest pole magnitude.
Left-sided x[n]: ROC of X(z) is inside a circle whose radius is the smallest pole magnitude.
Two-sided x[n]: ROC of X(z) is an annulus bounded by the largest and smallest pole radii.
Why We Must Specify the ROC
Consider the signal y[n] = −α^n u[−n−1], which equals −α^n for n = −1, −2, . . . . The two-sided z-transform of y[n], using a change of variables, can be written as
Y(z) = −Σ_{k=−∞}^{−1} α^k z^{−k} = −Σ_{m=1}^{∞} (z/α)^m = −(z/α)/[1 − (z/α)] = z/(z − α),   ROC: |z| < |α|
The ROC of Y(z) is |z| < |α|. Recall that the z-transform of x[n] = α^n u[n] is X(z) = z/(z − α). This is identical to Y(z), but the ROC of X(z) is |z| > |α|. So, we have a situation where two entirely different signals may possess an identical z-transform, and the only way to distinguish between them is by their ROC. In other words, we cannot uniquely identify a signal from its transform alone. We must also specify the ROC. In this book, we shall assume a right-sided signal if no ROC is specified.
EXAMPLE 4.2 (Identifying the ROC)
(a) Let x[n] = {4, 3, ⇓2, 6}. The ROC of X(z) is 0 < |z| < ∞ and excludes z = 0 and z = ∞ because x[n] is nonzero for n < 0 and n > 0.

(b) Let X(z) = z/(z − 2) + z/(z + 3). Its ROC depends on the nature of x[n].
If x[n] is assumed right-sided, the ROC is |z| > 3 (because |p|_max = 3).
If x[n] is assumed left-sided, the ROC is |z| < 2 (because |p|_min = 2).
If x[n] is assumed two-sided, the ROC is 2 < |z| < 3.
The region |z| < 2 and |z| > 3 does not correspond to a valid region of convergence because we must find a region that is common to both terms.
DRILL PROBLEM 4.3
(a) Let X(z) = (z + 0.5)/z. What is its ROC?
(b) Let Y(z) = (z + 1)/[(z − 0.1)(z + 0.5)]. What is its ROC if y[n] is right-sided?
(c) Let G(z) = z/(z + 2) + z/(z − 1). What is its ROC if g[n] is two-sided?
(d) Let H(z) = (z + 3)/[(z − 2)(z + 1)]. What is its ROC if h[n] is left-sided?
Answers: (a) |z| > 0 (all z except z = 0)   (b) |z| > 0.5   (c) 1 < |z| < 2   (d) |z| < 1
4.2 Properties of the Two-Sided z-Transform
The z-transform is a linear operation and obeys superposition. The properties of the z-transform, listed in Table 4.2, are based on the linear nature of the z-transform operation.

Table 4.2 Properties of the Two-Sided z-Transform

Entry   Property       Signal               z-Transform
1       Shifting       x[n − N]             z^{−N} X(z)
2       Reflection     x[−n]                X(1/z)
3       Anti-causal    x[−n]u[−n−1]         X(1/z) − x[0]
4       Scaling        α^n x[n]             X(z/α)
5       Times-n        n x[n]               −z dX(z)/dz
6       Times-cos      cos(nΩ)x[n]          0.5[X(ze^{jΩ}) + X(ze^{−jΩ})]
7       Times-sin      sin(nΩ)x[n]          j0.5[X(ze^{jΩ}) − X(ze^{−jΩ})]

Shifting: The shifting property follows from the transform of the shifted signal y[n] = x[n − N]:
Y(z) = Σ_{k=−∞}^{∞} x[k − N] z^{−k}   (4.4)
With the change of variable m = k − N, the new summation index m still ranges from −∞ to ∞ (since N is finite), and we obtain
Y(z) = Σ_{m=−∞}^{∞} x[m] z^{−(m+N)} = z^{−N} Σ_{m=−∞}^{∞} x[m] z^{−m} = z^{−N} X(z)   (4.5)
The factor z^{−N} with X(z) induces a right shift of N in x[n].
REVIEW PANEL 4.3
Time-Shift Property for the Two-Sided z-Transform: x[n − N] ⇔ z^{−N} X(z)
DRILL PROBLEM 4.4
(a) Let X(z) = 2 + 5z^{−1}. Find the z-transform of y[n] = x[n − 3].
(b) Use the result (0.5)^n u[n] ⇔ z/(z − 0.5) to find g[n] if G(z) = 1/(z − 0.5).
Answers: (a) Y(z) = 2z^{−3} + 5z^{−4}   (b) g[n] = (0.5)^{n−1} u[n − 1]
Times-n: The times-n property is established by taking derivatives, to yield
X(z) = Σ_{k=−∞}^{∞} x[k] z^{−k}   ⟹   dX(z)/dz = Σ_{k=−∞}^{∞} d/dz {x[k] z^{−k}} = −Σ_{k=−∞}^{∞} k x[k] z^{−(k+1)}   (4.6)
Multiplying both sides by −z, we obtain
−z dX(z)/dz = Σ_{k=−∞}^{∞} k x[k] z^{−k}   (4.7)
This represents the transform of n x[n].
REVIEW PANEL 4.4
The Times-n Property: n x[n] ⇔ −z dX(z)/dz
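For a finite causal sequence, the times-n property simply scales the kth coefficient of X(z) by k, and the derivative form can be checked numerically. A small sketch of my own, using the sequence of Drill 4.5(a) below:

```python
x = [2.0, 5.0, -4.0]                    # X(z) = 2 + 5z^-1 - 4z^-2
nx = [k * c for k, c in enumerate(x)]   # n x[n]  <->  5z^-1 - 8z^-2

def X(coeffs, z):
    # evaluate X(z) = sum_k coeffs[k] z^-k
    return sum(c * z ** (-k) for k, c in enumerate(coeffs))

z, h = 0.9 + 0.3j, 1e-6
dX = (X(x, z + h) - X(x, z - h)) / (2 * h)    # numerical derivative dX/dz
print(abs(-z * dX - X(nx, z)))                # ~0 (central-difference error only)
```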
DRILL PROBLEM 4.5
(a) Let X(z) = 2 + 5z^{−1} − 4z^{−2}. Find the z-transform of y[n] = n x[n].
(b) Let G(z) = z/(z − 0.5), ROC: |z| > 0.5. Find the z-transform of h[n] = n g[n] and its ROC.
Answers: (a) Y(z) = 5z^{−1} − 8z^{−2}   (b) H(z) = 0.5z/(z − 0.5)², |z| > 0.5
Scaling: The scaling property follows from the transform of y[n] = α^n x[n], to yield
Y(z) = Σ_{k=−∞}^{∞} α^k x[k] z^{−k} = Σ_{k=−∞}^{∞} x[k] (z/α)^{−k} = X(z/α)   (4.8)
If the ROC of X(z) is |z| > |K|, the scaling property changes the ROC of Y(z) to |z| > |αK|. In particular, if α = −1, we obtain the useful result (−1)^n x[n] ⇔ X(−z). This result says that if we change the sign of alternating (odd-indexed) samples of x[n] to get y[n], its z-transform Y(z) is given by Y(z) = X(−z) and has the same ROC.
REVIEW PANEL 4.5
The Scaling Property: α^n x[n] ⇔ X(z/α) and (−1)^n x[n] ⇔ X(−z)
DRILL PROBLEM 4.6
(a) Let X(z) = 2 − 3z^{−2}. Find the z-transform of y[n] = (−2)^n x[n] and its ROC.
(b) Let G(z) = z/(z − 0.5), |z| > 0.5. Find the z-transform of h[n] = (−0.5)^n g[n] and its ROC.
(c) Let F(z) = 2 + 5z^{−1} − 4z^{−2}. Find the z-transform of p[n] = (−1)^n f[n] and its ROC.
Answers: (a) 2 − 12z^{−2}, z ≠ 0   (b) z/(z + 0.25), |z| > 0.25   (c) 2 − 5z^{−1} − 4z^{−2}, z ≠ 0
If x[n] is multiplied by e^{jnΩ} or (e^{jΩ})^n, we then obtain the pair e^{jnΩ} x[n] ⇔ X(ze^{−jΩ}). An extension of this result, using Euler's relation, leads to the times-cos and times-sin properties:
cos(nΩ)x[n] = 0.5x[n][e^{jnΩ} + e^{−jnΩ}] ⇔ 0.5[X(ze^{jΩ}) + X(ze^{−jΩ})]   (4.9)
sin(nΩ)x[n] = −j0.5x[n][e^{jnΩ} − e^{−jnΩ}] ⇔ j0.5[X(ze^{jΩ}) − X(ze^{−jΩ})]   (4.10)
The ROC is not affected by the times-cos and times-sin properties.
DRILL PROBLEM 4.7
(a) Let X(z) = z/(z − 1), |z| > 1. Find the z-transform of y[n] = cos(0.5nπ)x[n] and its ROC.
(b) Let G(z) = z/(z − 0.5), |z| > 0.5. Find the z-transform of h[n] = sin(0.5nπ)g[n] and its ROC.
Answers: (a) Y(z) = z²/(z² + 1), |z| > 1   (b) H(z) = 0.5z/(z² + 0.25), |z| > 0.5
Convolution: The convolution property is based on the fact that convolution in the time domain corresponds to multiplication in the transformed domain. The z-transforms of sequences are polynomials, and multiplication of two polynomials corresponds to the convolution of their coefficient sequences. This property finds extensive use in the analysis of systems in the transformed domain.

REVIEW PANEL 4.6
The Convolution Property: x[n] ⋆ h[n] ⇔ X(z)H(z)

Folding: With x[n] ⇔ X(z) and y[n] = x[−n], we use k → −k in the defining relation to give
Y(z) = Σ_{k=−∞}^{∞} x[−k] z^{−k} = Σ_{k=−∞}^{∞} x[k] z^{k} = Σ_{k=−∞}^{∞} x[k] (1/z)^{−k} = X(1/z)   (4.11)
If the ROC of x[n] is |z| > |α|, the ROC of the folded signal x[−n] becomes |1/z| > |α| or |z| < 1/|α|.

REVIEW PANEL 4.7
The Folding Property of the Two-Sided z-Transform
x[−n] ⇔ X(1/z)   (the ROC changes from |z| > |α| to |z| < 1/|α|)
DRILL PROBLEM 4.8
(a) Let X(z) = 2 + 3z^{−1}, z ≠ 0. Find the z-transform of y[n] = x[−n] and its ROC.
(b) Let G(z) = z/(z − 0.5), |z| > 0.5. Find the z-transform of h[n] = g[−n] and its ROC.
Answers: (a) Y(z) = 3z + 2, |z| < ∞   (b) H(z) = 1/(1 − 0.5z), |z| < 2
The Folding Property and Symmetric Signals
The folding property is useful in checking for signal symmetry from its z-transform. For a signal x[n] that has even symmetry about the origin n = 0, we have x[n] = x[−n], and thus X(z) = X(1/z). Similarly, if x[n] has odd symmetry about n = 0, we have x[n] = −x[−n], and thus X(z) = −X(1/z). If x[n] is symmetric about its midpoint and the center of symmetry is not n = 0, we observe that X(z) = z^{−M} X(1/z) for even symmetry and X(z) = −z^{−M} X(1/z) for odd symmetry. The factor z^{−M}, where M is an integer, accounts for the shift of the center of symmetry from the origin.
REVIEW PANEL 4.8
A Property of the z-Transform of Symmetric Sequences
Even symmetry: x[n] = x[−n] ⟹ X(z) = X(1/z)   Odd symmetry: x[n] = −x[−n] ⟹ X(z) = −X(1/z)
(For a center of symmetry away from the origin, X(z) = z^{−M} X(1/z) or X(z) = −z^{−M} X(1/z), respectively.)
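For finite causal sequences, this symmetry check reduces to a palindrome test on the coefficients, and the transform identity can be verified at any test point. A quick illustration (my own sketch; the sequence is an arbitrary palindrome):

```python
def X(coeffs, z):
    # X(z) = sum_k coeffs[k] z^-k for a finite causal sequence
    return sum(c * z ** (-k) for k, c in enumerate(coeffs))

x = [1, 2, 3, 2, 1]       # even symmetric about its midpoint n = 2 (so M = 4)
M = len(x) - 1
z = 1.1 + 0.2j
print(abs(X(x, z) - z ** (-M) * X(x, 1 / z)))   # ~0: confirms X(z) = z^-M X(1/z)
```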
EXAMPLE 4.3 (Using the Properties of the z-Transform)
(a) With x[n] = n u[n], the times-n property and the pair u[n] ⇔ z/(z − 1) give the z-transform:
X(z) = −z d/dz [z/(z − 1)] = −z [1/(z − 1) − z/(z − 1)²] = z/(z − 1)²
(b) With x[n] = n α^n u[n], we use scaling to obtain the z-transform:
X(z) = (z/α)/[(z/α) − 1]² = αz/(z − α)²

(c) We find the transform of the N-sample exponential pulse x[n] = α^n (u[n] − u[n − N]). We let y[n] = u[n] − u[n − N]. Its z-transform is
Y(z) = (1 − z^{−N})/(1 − z^{−1}),   z ≠ 1
Then, the z-transform of x[n] = α^n y[n] becomes
X(z) = [1 − (z/α)^{−N}]/[1 − (z/α)^{−1}],   z ≠ α
(d) The z-transforms of x[n] = cos(nΩ)u[n] and y[n] = sin(nΩ)u[n] are found using the times-cos and times-sin properties with the pair u[n] ⇔ z/(z − 1):
X(z) = 0.5 [ ze^{jΩ}/(ze^{jΩ} − 1) + ze^{−jΩ}/(ze^{−jΩ} − 1) ] = (z² − z cos Ω)/(z² − 2z cos Ω + 1)
Y(z) = j0.5 [ ze^{jΩ}/(ze^{jΩ} − 1) − ze^{−jΩ}/(ze^{−jΩ} − 1) ] = (z sin Ω)/(z² − 2z cos Ω + 1)
(e) The z-transforms of f[n] = α^n cos(nΩ)u[n] and g[n] = α^n sin(nΩ)u[n] follow from the results of part (d) and the scaling property:
F(z) = [(z/α)² − (z/α)cos Ω]/[(z/α)² − 2(z/α)cos Ω + 1] = (z² − αz cos Ω)/(z² − 2αz cos Ω + α²)
G(z) = [(z/α)sin Ω]/[(z/α)² − 2(z/α)cos Ω + 1] = (αz sin Ω)/(z² − 2αz cos Ω + α²)

(f) We use the folding property to find the transform of y[n] = α^{−n} u[−n − 1]. We start with the transform pair x[n] = α^n u[n] ⇔ z/(z − α), ROC: |z| > |α|. With x[0] = 1, we find
y[n] = α^{−n} u[−n − 1] ⇔ X(1/z) − x[0] = (1/z)/[(1/z) − α] − 1 = αz/(1 − αz),   ROC: |z| < 1/|α|
If we replace α by 1/α, and change the sign of the result, we get
−α^n u[−n − 1] ⇔ z/(z − α),   ROC: |z| < |α|
This is listed as a standard transform pair in tables of z-transforms.

(g) We use the folding property to find the transform of x[n] = α^{|n|}, |α| < 1 (a two-sided decaying exponential). We write this as x[n] = α^n u[n] + α^{−n} u[−n] − δ[n] (a one-sided decaying exponential and its folded version, less the extra sample included at the origin), as illustrated in Figure E4.3G.

Figure E4.3G The signal for Example 4.3(g): α^{|n|} as the sum of α^n u[n] and its folded version α^{−n} u[−n], less δ[n]

Its z-transform then becomes
X(z) = z/(z − α) + (1/z)/[(1/z) − α] − 1 = z/(z − α) − z/[z − (1/α)],   ROC: |α| < |z| < 1/|α|
Note that the ROC is an annulus that corresponds to a two-sided sequence, and describes a valid region only if |α| < 1.
4.3 Poles, Zeros, and the z-Plane
The z-transform of many signals is a rational function of the form
X(z) = N(z)/D(z) = [B_M z^M + B_{M−1} z^{M−1} + ··· + B_2 z² + B_1 z + B_0] / [A_N z^N + A_{N−1} z^{N−1} + ··· + A_2 z² + A_1 z + A_0]   (4.13)
Denoting the roots of N(z) by z_i, i = 1, 2, . . . , M, and the roots of D(z) by p_k, k = 1, 2, . . . , N, we may also express X(z) in factored form as
X(z) = K N(z)/D(z) = K [(z − z_1)(z − z_2) ··· (z − z_M)] / [(z − p_1)(z − p_2) ··· (z − p_N)]   (4.14)
The roots of N(z) are termed zeros and the roots of D(z) are termed poles. A plot of the poles (shown as ×) and zeros (shown as o) in the z-plane constitutes a pole-zero plot, and provides a visual picture of the root locations. For multiple roots, we indicate their multiplicity next to the root location on the plot. Clearly, we can also find X(z) in its entirety from a pole-zero plot of the root locations, but only if the value of the multiplicative constant K is also displayed on the plot.
EXAMPLE 4.4 (Pole-Zero Plots)
(a) Let H(z) = 2z(z + 1) / [(z − 1/3)(z² + 1/4)(z² + 4z + 5)].
The numerator degree is 2. The two zeros are z = 0 and z = −1.
The denominator degree is 5. The five finite poles are at z = 1/3, z = ±j/2, and z = −2 ± j.
The multiplicative factor is K = 2. The pole-zero plot is shown in Figure E4.4(a).

Figure E4.4 Pole-zero plots for Example 4.4(a and b)
(b) What is the z-transform corresponding to the pole-zero pattern of Figure E4.4(b)? Does it represent a symmetric signal?
If we let X(z) = K N(z)/D(z), the four zeros correspond to the numerator N(z) given by
N(z) = (z − j0.5)(z + j2)(z + j0.5)(z − j2) = z⁴ + 4.25z² + 1
The two poles at the origin correspond to the denominator D(z) = z². With K = 1, the z-transform is given by
X(z) = K N(z)/D(z) = (z⁴ + 4.25z² + 1)/z² = z² + 4.25 + z^{−2}
Checking for symmetry, we find that X(z) = X(1/z), and thus x[n] is even symmetric. In fact,
x[n] = δ[n + 2] + 4.25δ[n] + δ[n − 2] = {1, 0, ⇓4.25, 0, 1}
Figure 4.4 Cascade and parallel systems and their equivalents
The overall transfer function of a cascaded system is the product of the individual transfer functions. For n systems in cascade, the overall impulse response h_C[n] is the convolution of the individual impulse responses h_1[n], h_2[n], . . . . Since the convolution operation transforms to a product, we have
H_C(z) = H_1(z)H_2(z) ··· H_n(z)   (for n systems in cascade)   (4.18)
We can also factor a given transfer function H(z) into the product of first-order and second-order transfer functions and realize H(z) in cascaded form.
For systems in parallel, the overall transfer function is the sum of the individual transfer functions. For n systems in parallel,
H_P(z) = H_1(z) + H_2(z) + ··· + H_n(z)   (for n systems in parallel)   (4.19)
We can also use partial fractions to express a given transfer function H(z) as the sum of first-order and/or second-order subsystems, and realize H(z) as a parallel combination.
REVIEW PANEL 4.11
Overall Impulse Response and Transfer Function of Systems in Cascade and Parallel
Cascade: Convolve individual impulse responses. Multiply individual transfer functions.
Parallel: Add individual impulse responses. Add individual transfer functions.
EXAMPLE 4.5 (Systems in Cascade and Parallel)
(a) Two digital filters are described by h_1[n] = α^n u[n] and h_2[n] = (−α)^n u[n]. The transfer function of their cascade is H_C(z) and of their parallel combination is H_P(z). How are H_C(z) and H_P(z) related?
The transfer functions of the two filters are H_1(z) = z/(z − α) and H_2(z) = z/(z + α). Thus,
H_C(z) = H_1(z)H_2(z) = z²/(z² − α²)    H_P(z) = H_1(z) + H_2(z) = 2z²/(z² − α²)
So, H_P(z) = 2H_C(z).
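The relation H_P(z) = 2H_C(z) is easy to confirm numerically at an arbitrary point (a quick sketch of my own; α = 0.5 and the test point are arbitrary choices, away from the poles):

```python
a = 0.5                       # an arbitrary value of alpha
z = 0.9 + 0.6j                # an arbitrary test point (not a pole)
H1 = z / (z - a)              # transfer function of alpha^n u[n]
H2 = z / (z + a)              # transfer function of (-alpha)^n u[n]
HC = H1 * H2                  # cascade:  z^2/(z^2 - a^2)
HP = H1 + H2                  # parallel: 2z^2/(z^2 - a^2)
print(abs(HP - 2 * HC))       # ~0 (up to round-off)
```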
(b) Is the cascade or parallel combination of two linear-phase filters also linear phase? Explain.
Linear-phase filters are described by symmetric impulse response sequences.
The impulse response of their cascade is also symmetric because it is the convolution of two symmetric sequences. So, the cascade of two linear-phase filters is always linear phase.
The impulse response of their parallel combination is the sum of their impulse responses. Since the sum of symmetric sequences is not always symmetric (unless both are odd symmetric or both are even symmetric), the parallel combination of two linear-phase filters is not always linear phase.
DRILL PROBLEM 4.12
(a) Find the transfer function of the parallel connection of two filters described by h_1[n] = {⇓1, 3} and h_2[n] = {2, ⇓−1}.
(b) The transfer function of the cascade of two filters is H_C(z) = 1. If the impulse response of one filter is h_1[n] = 2(0.5)^n u[n], find the impulse response of the second.
(c) Two filters are described by y[n] − 0.4y[n − 1] = x[n] and h_2[n] = 2(0.4)^n u[n]. Find the transfer function of their parallel combination and cascaded combination.
Answers: (a) 2z + 3z^{−1}   (b) h_2[n] = 0.5δ[n] − 0.25δ[n − 1]   (c) H_P(z) = 3z/(z − 0.4), H_C(z) = 2z²/(z − 0.4)²
4.6 Transfer Function Realization

Figure 4.5 Realization of a nonrecursive (left) and recursive (right) digital filter
We choose M = N with no loss of generality, since some of the coefficients B_k may always be set to zero. The transfer function (with M = N) then becomes
H(z) = [B_0 + B_1 z^{−1} + ··· + B_N z^{−N}] / [1 + A_1 z^{−1} + A_2 z^{−2} + ··· + A_N z^{−N}] = H_N(z)H_R(z)   (4.23)
The transfer function H(z) = H_N(z)H_R(z) is the product of the transfer functions of a recursive and a nonrecursive system. Its realization is thus a cascade of the realizations for the recursive and nonrecursive portions, as shown in Figure 4.6(a). This form describes a direct form I realization. It uses 2N delay elements to realize an Nth-order difference equation and is therefore not very efficient.
Figure 4.6 Direct form I (left) and canonical, or direct form II (right), realization of a digital filter
Since LTI systems can be cascaded in any order, we can switch the recursive and nonrecursive parts to get
the structure of Figure 4.6(b). This structure suggests that each pair of feed-forward and feedback signals
can be obtained from a single delay element instead of two. This allows us to use only N delay elements
and results in the direct form II, or canonic, realization. The term canonic implies a realization with the
minimum number of delay elements.
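The canonic structure translates almost line-for-line into code: a single shared delay line feeds both the feedback sum and the feed-forward sum. A hedged Python sketch (my own, for the difference equation y[n] + A_1 y[n−1] + ··· = B_0 x[n] + B_1 x[n−1] + ···, with A_0 = 1):

```python
def direct_form_2(b, a, x):
    # direct form II (canonic): N delay elements for an Nth-order filter
    N = max(len(a), len(b)) - 1                # filter order
    a = a + [0.0] * (N + 1 - len(a))           # pad missing coefficients with zeros
    b = b + [0.0] * (N + 1 - len(b))
    w = [0.0] * (N + 1)                        # the single, shared delay line
    y = []
    for xn in x:
        w[0] = xn - sum(a[k] * w[k] for k in range(1, N + 1))   # feedback (recursive) part
        y.append(sum(b[k] * w[k] for k in range(N + 1)))        # feed-forward part
        for k in range(N, 0, -1):              # advance the delay line
            w[k] = w[k - 1]
    return y

# step response of y[n] - 0.5y[n-1] = x[n]
print(direct_form_2([1.0], [1.0, -0.5], [1.0] * 5))
# -> [1.0, 1.5, 1.75, 1.875, 1.9375]
```

Note that the same w[k] states serve both sums, which is exactly why only N delays are needed.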
If M and N are not equal, some of the coefficients (A_k or B_k) will equal zero and will result in missing signal paths corresponding to these coefficients in the filter realization.
REVIEW PANEL 4.12
Digital Filter Realization
FIR: No feedback paths IIR: Both feed-forward and feedback paths
4.6.1 Transposed Realization
The direct form II also yields a transposed realization if we turn the realization around (and interchange the
input and output), replace summing junctions by nodes (and vice versa), and reverse the direction of signal
flow. Such a realization is developed in Figure 4.7.
Figure 4.7 Direct form II (left) and transposed (right) realization of a digital filter
EXAMPLE 4.6 (Direct Form II and Transposed Realizations)
Consider a system described by 2y[n] - y[n-2] - 4y[n-3] = 3x[n-2]. Its transfer function is

H(z) = 3z^-2/(2 - z^-2 - 4z^-3) = 1.5z/(z^3 - 0.5z - 2)

This is a third-order system. To sketch its direct form II and transposed realizations, we compare H(z) with
the generic third-order transfer function

H(z) = (B_0 z^3 + B_1 z^2 + B_2 z + B_3)/(z^3 + A_1 z^2 + A_2 z + A_3)

The nonzero constants are B_2 = 1.5, A_2 = -0.5, and A_3 = -2. Using these, we obtain the direct form II
and transposed realizations shown in Figure E4.6.
Figure E4.6 Direct form II (left) and transposed (right) realization of the system for Example 4.6
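As a numerical cross-check of these coefficients, the impulse response computed from H(z) (here via `scipy.signal.lfilter`, which divides through by the leading denominator coefficient a[0] = 2) should match direct recursion of the original difference equation:

```python
import numpy as np
from scipy import signal

# H(z) = 3z^-2/(2 - z^-2 - 4z^-3); lfilter normalizes by the leading a[0] = 2
b = [0.0, 0.0, 3.0]
a = [2.0, 0.0, -1.0, -4.0]

imp = np.zeros(20); imp[0] = 1.0
h = signal.lfilter(b, a, imp)             # impulse response from H(z)

# Direct recursion of 2y[n] - y[n-2] - 4y[n-3] = 3x[n-2] with x[n] = δ[n]
y = np.zeros(20)
for n in range(20):
    y[n] = 0.5 * ((y[n - 2] if n >= 2 else 0.0)
                  + 4.0 * (y[n - 3] if n >= 3 else 0.0)
                  + 3.0 * (imp[n - 2] if n >= 2 else 0.0))

print(np.allclose(h, y), h[2])            # → True 1.5
```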
4.6.2 Cascaded and Parallel Realization
The overall transfer function of a cascaded system is the product of the individual transfer functions:

H_C(z) = H_1(z)H_2(z) ··· H_n(z)    (for n systems in cascade)    (4.24)

As a consequence, we can factor a given transfer function H(z) into the product of several transfer functions
to realize H(z) in cascaded form. Typically, an Nth-order transfer function is realized as a cascade of
second-order sections (with an additional first-order section if N is odd).
For systems in parallel, the overall transfer function is the sum of the individual transfer functions:

H_P(z) = H_1(z) + H_2(z) + ··· + H_n(z)    (for n systems in parallel)    (4.25)

As a result, we may use partial fractions to express a given transfer function H(z) as the sum of first-order
and/or second-order subsystems, and realize H(z) as a parallel combination.
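Both decompositions are available in SciPy: `tf2sos` produces the cascade of second-order sections, and `residuez` produces the partial fraction (parallel) form in powers of z^-1. A sketch with an illustrative 4th-order denominator (the coefficients are ours, not from the text):

```python
import numpy as np
from scipy import signal

# An example 4th-order H(z) = 1/A(z), with A given as two quadratic factors
a = np.convolve([1, -0.9, 0.81], [1, 0.5, 0.25])
b = [1.0]

# Cascade form: factor into second-order sections (biquads)
sos = signal.tf2sos(b, a)

# Parallel form: partial fraction expansion in powers of z^-1
r, p, k = signal.residuez(b, a)           # residues, poles, direct terms

# Check both forms against the original at one test frequency (0.3 rad/sample)
w, h_tf = signal.freqz(b, a, worN=[0.3])
_, h_sos = signal.sosfreqz(sos, worN=[0.3])
zi = np.exp(-1j * 0.3)                    # value of z^-1 on the unit circle
h_par = np.sum(r / (1 - p * zi))          # sum of the parallel first-order terms

print(np.allclose(h_tf, h_sos), np.isclose(h_par, h_tf[0]))
```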
EXAMPLE 4.7 (System Realization)
(a) Find a cascaded realization for H(z) = z^2 (6z - 2)/[(z - 1)(z^2 - (1/6)z - 1/6)].

This system may be realized as a cascade H(z) = H_1(z)H_2(z), as shown in Figure E4.7A, where

H_1(z) = z^2/(z^2 - (1/6)z - 1/6)        H_2(z) = (6z - 2)/(z - 1)
Figure E4.7B Parallel realization of the system for Example 4.7(b)
4.7 Causality and Stability of LTI Systems
In the time domain, a causal system requires a causal impulse response h[n] with h[n] = 0, n < 0. If H(z)
describes the transfer function of this system, the number of zeros cannot exceed the number of poles. In
other words, the degree of the numerator polynomial in H(z) cannot exceed the degree of the denominator
polynomial. This means that the transfer function of a causal system must be proper and its ROC must be
the exterior of a circle of finite radius.
For an LTI system to be BIBO (bounded-input, bounded-output) stable, every bounded input must
result in a bounded output. In the time domain, BIBO stability of an LTI system requires an absolutely
summable impulse response h[n]. For a causal system, this is equivalent to requiring the poles of the transfer
function H(z) to lie entirely within the unit circle in the z-plane. This equivalence stems from the following
observations:
Poles outside the unit circle (|z| > 1) lead to exponential growth even if the input is bounded.
Example: H(z) = z/(z - 3) results in the growing exponential (3)^n u[n].
Multiple poles on the unit circle always result in polynomial growth.
Example: H(z) = 1/[z(z - 1)^2] produces a ramp function in h[n].
Simple (non-repeated) poles on the unit circle can also lead to an unbounded response.
Example: A simple pole at z = 1 leads to H(z) with a factor z/(z - 1). If X(z) also contains a pole at z = 1,
the response Y(z) will contain the term z/(z - 1)^2 and exhibit polynomial growth.
None of these types of time-domain terms is absolutely summable, and their presence leads to system
instability. Formally, for BIBO stability, all the poles of H(z) must lie inside (and exclude) the unit circle
|z| = 1. This is both a necessary and sufficient condition for the stability of causal systems.
If a system has simple (non-repeated) poles on the unit circle, it is sometimes called marginally stable.
If a system has all its poles and zeros inside the unit circle, it is called a minimum-phase system.
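The causal stability test reduces to a one-line check on pole magnitudes. A minimal sketch (the helper name is ours):

```python
import numpy as np

def is_stable_causal(a):
    """Causal LTI stability: all poles strictly inside the unit circle.
    `a` holds denominator coefficients in descending powers of z."""
    poles = np.roots(a)
    return bool(np.all(np.abs(poles) < 1))

print(is_stable_causal([1, -0.5]))        # pole at 0.5       → True
print(is_stable_causal([1, -3.0]))        # pole at 3         → False (h[n] = 3^n u[n] grows)
print(is_stable_causal([1, -2.0, 1.0]))   # double pole z = 1 → False (polynomial growth)
```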
4.7.1 Stability and the ROC
For any LTI system, causal or otherwise, to be stable, the ROC must include the unit circle. The various
situations are illustrated in Figure 4.8.
Figure 4.8 The ROC of stable systems (shown shaded) always includes the unit circle
The stability of a causal system requires all the poles to lie inside the unit circle. Thus, the ROC includes
the unit circle. The stability of an anti-causal system requires all the poles to lie outside the unit circle.
Thus, the ROC once again includes the unit circle. Similarly, the ROC of a stable system with a two-sided
impulse response is an annulus that includes the unit circle and all its poles lie outside this annulus. The
poles inside the inner circle of the annulus contribute to the causal portion of the impulse response while
poles outside the outer circle of the annulus make up the anti-causal portion of the impulse response.
REVIEW PANEL 4.13
The ROC of Stable LTI Systems Always Includes the Unit Circle
Stable, causal system: All the poles must lie inside the unit circle.
Stable, anti-causal system: All the poles must lie outside the unit circle.
Stability from impulse response: h[n] must be absolutely summable (Σ|h[k]| < ∞).
EXAMPLE 4.8 (Stability of a Recursive Filter)
(a) Let H(z) = z/(z - α).
If the ROC is |z| > |α|, its impulse response is h[n] = α^n u[n], and the system is causal.
For stability, we require |α| < 1 (for the ROC to include the unit circle).
(b) Let H(z) = z/(z - α), as before.
If the ROC is |z| < |α|, its impulse response is h[n] = -α^n u[-n-1], and the system is anti-causal.
For stability, we require |α| > 1 (for the ROC to include the unit circle).
DRILL PROBLEM 4.13
(a) Is the filter described by H(z) = (2z + 1)/[(z - 0.5)(z + 0.5)], |z| > 0.5 stable? Causal?
(b) Is the filter described by H(z) = (2z + 1)/[(z - 1.5)(z + 0.5)], |z| > 1.5 stable? Causal?
(c) Is the filter described by H(z) = (2z + 1)/[(z - 1.5)(z + 0.5)], 0.5 < |z| < 1.5 stable? Causal?
Answers: (a) Stable, causal (b) Unstable, causal (c) Stable, two-sided
4.7.2 Inverse Systems
The inverse system corresponding to a transfer function H(z) is denoted by H^-1(z), and defined as

H^-1(z) = H_I(z) = 1/H(z)    (4.26)

The cascade of a system and its inverse has a transfer function of unity:

H_C(z) = H(z)H^-1(z) = 1        h_C[n] = δ[n]    (4.27)

This cascaded system is called an identity system, and its impulse response equals h_C[n] = δ[n]. The
inverse system can be used to undo the effect of the original system. We can also describe h_C[n] by the
convolution h[n] * h_I[n]. It is far easier to find the inverse of a system in the transformed domain.
REVIEW PANEL 4.14
Relating the Impulse Response and Transfer Function of a System and Its Inverse
If the system is described by H(z) and h[n] and its inverse by H_I(z) and h_I[n], then
H(z)H_I(z) = 1        h[n] * h_I[n] = δ[n]
EXAMPLE 4.9 (Inverse Systems)
(a) Consider a system with the difference equation y[n] + αy[n-1] = x[n] + βx[n-1].
To find the inverse system, we evaluate H(z) and take its reciprocal. Thus,

H(z) = (1 + βz^-1)/(1 + αz^-1)        H_I(z) = 1/H(z) = (1 + αz^-1)/(1 + βz^-1)

The difference equation of the inverse system is y[n] + βy[n-1] = x[n] + αx[n-1]. Note that, in
general, the inverse of an IIR filter is also an IIR filter.
(b) Consider an FIR filter whose system equation is y[n] = x[n] + 2x[n-1] + 3x[n-2].
To find the inverse system, we evaluate H(z) and take its reciprocal. Thus,

H(z) = 1 + 2z^-1 + 3z^-2        H_I(z) = 1/H(z) = 1/(1 + 2z^-1 + 3z^-2)

The difference equation of the inverse system is y[n] + 2y[n-1] + 3y[n-2] = x[n]. Note that the
inverse of an FIR filter is an IIR filter.
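The identity-system property h[n] * h_I[n] = δ[n] is easy to demonstrate numerically: filtering with the FIR system of part (b) and then with its reciprocal (numerator and denominator roles swapped in `scipy.signal.lfilter`) recovers the input. This particular inverse has poles at |z| = sqrt(3) > 1, so round-off error would eventually grow; a short record keeps that harmless.

```python
import numpy as np
from scipy import signal

# FIR filter of Example 4.9(b): y[n] = x[n] + 2x[n-1] + 3x[n-2]
b_fir = [1.0, 2.0, 3.0]

rng = np.random.default_rng(1)
x = rng.standard_normal(30)

y = signal.lfilter(b_fir, [1.0], x)       # forward system (FIR)
x_rec = signal.lfilter([1.0], b_fir, y)   # inverse system (IIR): reciprocal H(z)

print(np.allclose(x_rec, x))              # → True: the cascade is the identity system
```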
DRILL PROBLEM 4.14
(a) The inverse of an FIR filter is always an IIR filter. True or false? If false, give a counterexample.
(b) The inverse of an IIR filter is always an FIR filter. True or false? If false, give a counterexample.
Answers: (a) True (b) False. For example, the inverse of H(z) = (z - 0.4)/(z - 0.5) is also IIR.
4.8 The Inverse z-Transform
The formal inversion relation that yields x[n] from X(z) actually involves complex integration and is described
by

x[n] = (1/(2πj)) ∮_Γ X(z) z^(n-1) dz    (4.28)

Here, Γ describes a counterclockwise contour of integration (such as the unit circle) that encloses the origin.
Evaluation of this integral requires knowledge of complex variable theory. In this text, we pursue simpler
alternatives, which include long division and partial fraction expansion.
4.8.1 Inverse z-Transform of Finite Sequences
For finite-length sequences, X(z) has a polynomial form that immediately reveals the required sequence x[n].
The ROC can also be discerned from the polynomial form of X(z).
EXAMPLE 4.10 (Inverse Transform of Sequences)
(a) Let X(z) = 3z^-1 + 5z^-3 + 2z^-4. This transform corresponds to a causal sequence. We recognize x[n]
as a sum of shifted impulses given by

x[n] = 3δ[n-1] + 5δ[n-3] + 2δ[n-4]

This sequence can also be written as x[n] = {0, 3, 0, 5, 2}, starting at n = 0.

(b) Let X(z) = 2z^2 - 5z + 5z^-1 - 2z^-2. This transform corresponds to a noncausal sequence. Its inverse
transform is written, by inspection, as

x[n] = {2, -5, 0, 5, -2}    (the middle sample corresponds to n = 0)

Comment: Since X(z) = -X(1/z), x[n] should possess odd symmetry. It does.
4.8.2 Inverse z-Transform by Long Division
A second method requires X(z) as a rational function (a ratio of polynomials) along with its ROC. For a
right-sided signal (whose ROC is |z| > |α|), we arrange the numerator and denominator in descending powers
of z, and use long division to obtain a power series in decreasing powers of z, whose inverse transform
corresponds to the right-sided sequence.
For a left-sided signal (whose ROC is |z| < |α|), we arrange the numerator and denominator in ascending
powers of z, and use long division to obtain a power series in increasing powers of z, whose inverse transform
corresponds to the left-sided sequence.
This approach, however, becomes cumbersome if more than just the first few terms of x[n] are required.
It is not often that the first few terms of the resulting sequence allow its general nature or form to be
discerned. If we regard the rational z-transform as a transfer function H(z) = P(z)/Q(z), the method of
long division is simply equivalent to finding the first few terms of its impulse response recursively from its
difference equation.
REVIEW PANEL 4.15
Finding Inverse Transforms of X(z) = N(z)/D(z) by Long Division
Right-sided: Put N(z), D(z) in descending powers of z. Obtain a power series in decreasing powers of z.
Left-sided: Put N(z), D(z) in ascending powers of z. Obtain a power series in increasing powers of z.
EXAMPLE 4.11 (Inverse Transforms by Long Division)
(a) We find the right-sided inverse of H(z) = (z - 4)/(z^2 - z + 1).
We arrange the polynomials in descending powers of z and use long division to get

                 z^-1 - 3z^-2 - 4z^-3 + ···
z^2 - z + 1 ) z - 4
              z - 1 + z^-1
              ---------------------
                 -3 - z^-1
                 -3 + 3z^-1 - 3z^-2
                 ---------------------
                     -4z^-1 + 3z^-2
                     -4z^-1 + 4z^-2 - 4z^-3
                     ---------------------
                         -z^-2 + 4z^-3

This leads to H(z) = z^-1 - 3z^-2 - 4z^-3 + ···. The sequence h[n] can be written as

h[n] = δ[n-1] - 3δ[n-2] - 4δ[n-3] + ···    or    h[n] = {0, 1, -3, -4, . . .}
(b) We could also have found the inverse by setting up the difference equation corresponding to
H(z) = Y(z)/X(z), to give

y[n] - y[n-1] + y[n-2] = x[n-1] - 4x[n-2]
With x[n] = δ[n], its impulse response h[n] satisfies

h[n] - h[n-1] + h[n-2] = δ[n-1] - 4δ[n-2]

With h[-1] = h[-2] = 0 (a relaxed system), we recursively obtain the first few values of h[n] as

n = 0:  h[0] = h[-1] - h[-2] + δ[-1] - 4δ[-2] = 0 - 0 + 0 - 0 = 0
n = 1:  h[1] = h[0] - h[-1] + δ[0] - 4δ[-1] = 0 - 0 + 1 - 0 = 1
n = 2:  h[2] = h[1] - h[0] + δ[1] - 4δ[0] = 1 - 0 + 0 - 4 = -3
n = 3:  h[3] = h[2] - h[1] + δ[2] - 4δ[1] = -3 - 1 + 0 - 0 = -4

These are identical to the values obtained using long division in part (a).
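This recursion is exactly what `scipy.signal.lfilter` performs when driven by a unit impulse, so the long-division terms can be checked in a few lines:

```python
import numpy as np
from scipy import signal

# H(z) = (z - 4)/(z^2 - z + 1) = (z^-1 - 4z^-2)/(1 - z^-1 + z^-2)
b = [0.0, 1.0, -4.0]
a = [1.0, -1.0, 1.0]

imp = np.zeros(8); imp[0] = 1.0
h = signal.lfilter(b, a, imp)   # impulse response = right-sided inverse transform

print(h[:4])                    # first four samples: 0, 1, -3, -4
```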
(c) We find the left-sided inverse of H(z) = (z - 4)/(z^2 - z + 1).
We arrange the polynomials in ascending powers of z and use long division to obtain

                -4 - 3z + z^2 + ···
1 - z + z^2 ) -4 + z
              -4 + 4z - 4z^2
              --------------------
                  -3z + 4z^2
                  -3z + 3z^2 - 3z^3
                  --------------------
                       z^2 + 3z^3
                       z^2 - z^3 + z^4
                  --------------------
                            4z^3 - z^4

Thus, H(z) = -4 - 3z + z^2 + ···. The sequence h[n] can then be written as

h[n] = -4δ[n] - 3δ[n+1] + δ[n+2] + ···    or    h[n] = {. . . , 1, -3, -4}    (ending at n = 0)
(d) We could also have found the inverse by setting up the difference equation in the form

h[n-2] = h[n-1] - h[n] + δ[n-1] - 4δ[n-2]

With h[1] = h[2] = 0, we can generate h[0], h[-1], h[-2], . . . , recursively, to obtain the same result as
in part (c).
DRILL PROBLEM 4.15
(a) Let H(z) = (z - 4)/(z^2 + 1), |z| > 1. Find the first few terms of h[n] by long division.
(b) Let H(z) = (z - 4)/(z^2 + 4), |z| < 2. Find the first few terms of h[n] by long division.
Answers: (a) h[n] = {0, 1, -4, -1, . . .} (b) h[n] = {. . . , 1/4, 1/4, -1}    (ending at n = 0)
4.8.3 Inverse z-Transform from Partial Fractions
A much more useful method for inversion of the z-transform relies on its partial fraction expansion into terms
whose inverse transform can be identified using a table of transform pairs. This is analogous to finding inverse
Laplace transforms, but with one major difference. Since the z-transform of standard sequences in Table 4.1
involves the factor z in the numerator, it is more convenient to perform the partial fraction expansion for
Y(z) = X(z)/z rather than for X(z). We then multiply through by z to obtain terms describing X(z) in a
form ready for inversion. This also implies that, for partial fraction expansion, it is Y(z) = X(z)/z and not
X(z) that must be a proper rational function. The form of the expansion depends on the nature of the poles
(denominator roots) of Y(z). The constants in the partial fraction expansion are often called residues.
Distinct Linear Factors
If Y(z) contains only distinct poles, we express it in the form:

Y(z) = P(z)/[(z + p_1)(z + p_2) ··· (z + p_N)] = K_1/(z + p_1) + K_2/(z + p_2) + ··· + K_N/(z + p_N)    (4.29)
To find the mth coefficient K_m, we multiply both sides by (z + p_m) to get

(z + p_m)Y(z) = K_1 (z + p_m)/(z + p_1) + ··· + K_m + ··· + K_N (z + p_m)/(z + p_N)    (4.30)

With both sides evaluated at z = -p_m, we obtain K_m as

K_m = (z + p_m)Y(z) |_{z = -p_m}    (4.31)
In general, Y(z) will contain terms with real constants and terms with complex conjugate residues, and may
be written as

Y(z) = K_1/(z + p_1) + K_2/(z + p_2) + ··· + A_1/(z + r_1) + A_1*/(z + r_1*) + A_2/(z + r_2) + A_2*/(z + r_2*) + ···    (4.32)

For a real root, the residue (coefficient) will also be real. For each pair of complex conjugate roots, the
residues will also be complex conjugates, and we thus need compute only one of these.
Repeated Factors
If the denominator of Y(z) contains the repeated term (z + r)^k, the partial fraction expansion corresponding
to the repeated terms has the form

Y(z) = (other terms) + A_0/(z + r)^k + A_1/(z + r)^(k-1) + ··· + A_(k-1)/(z + r)    (4.33)

Observe that the constants A_j ascend in index j from 0 to k-1, whereas the denominators (z + r)^n descend
in power n from k to 1. Their evaluation requires (z + r)^k Y(z) and its derivatives. We successively find
A_0 = (z + r)^k Y(z) |_{z = -r}        A_1 = (d/dz)[(z + r)^k Y(z)] |_{z = -r}

A_2 = (1/2!)(d^2/dz^2)[(z + r)^k Y(z)] |_{z = -r}        A_n = (1/n!)(d^n/dz^n)[(z + r)^k Y(z)] |_{z = -r}    (4.34)
Even though this process allows us to find the coefficients independently of each other, the algebra in finding
the derivatives can become tedious if the multiplicity k of the roots exceeds 2 or 3. Table 4.3 lists some
transform pairs useful for inversion of the z-transform of causal signals by partial fractions.
Table 4.3 Inverse z-Transform of Partial Fraction Expansion (PFE) Terms

Note 1: Where applicable, K̄ = Ke^(jθ) = C + jD
Note 2: For anti-causal sequences, we get the signal -x[n]u[-n-1], where x[n] is as listed.

Entry  PFE Term X(z)                                         Causal Signal x[n], n ≥ 0
1      z/(z - α)                                             α^n
2      z/(z - α)^2                                           n α^(n-1)
3      z/(z - α)^(N+1)   (N > 1)                             [n(n-1)···(n-N+1)/N!] α^(n-N)
4      K̄z/(z - αe^(jΩ)) + K̄*z/(z - αe^(-jΩ))                 2Kα^n cos(nΩ + θ) = 2α^n [C cos(nΩ) - D sin(nΩ)]
5      K̄z/(z - αe^(jΩ))^2 + K̄*z/(z - αe^(-jΩ))^2             2Kn α^(n-1) cos[(n-1)Ω + θ]
6      K̄z/(z - αe^(jΩ))^(N+1) + K̄*z/(z - αe^(-jΩ))^(N+1)     2K [n(n-1)···(n-N+1)/N!] α^(n-N) cos[(n-N)Ω + θ]
REVIEW PANEL 4.16
Partial Fraction Expansion of Y(z) = X(z)/z Depends on Its Poles (Denominator Roots)
Distinct roots: Y(z) = P(z)/∏(z + p_m) = Σ_{m=1}^{N} K_m/(z + p_m), where K_m = (z + p_m)Y(z)|_{z = -p_m}
Repeated: Y(z) = P(z)/[(z + r)^k ∏(z + p_m)] = Σ_{m=1}^{N} K_m/(z + p_m) + Σ_{n=0}^{k-1} A_n/(z + r)^(k-n),
where A_n = (1/n!)(d^n/dz^n)[(z + r)^k Y(z)]|_{z = -r}
EXAMPLE 4.12 (Inverse Transform of Right-Sided Signals)
(a) (Non-Repeated Roots) We find the causal inverse of X(z) = 1/[(z - 0.25)(z - 0.5)].
We first form Y(z) = X(z)/z, and expand Y(z) into partial fractions, to obtain

Y(z) = X(z)/z = 1/[z(z - 0.25)(z - 0.5)] = 8/z - 16/(z - 0.25) + 8/(z - 0.5)

Multiplying through by z, we get

X(z) = 8 - 16z/(z - 0.25) + 8z/(z - 0.5)        x[n] = 8δ[n] - 16(0.25)^n u[n] + 8(0.5)^n u[n]

Its first few samples, x[0] = 0, x[1] = 0, x[2] = 1, and x[3] = 0.75, can be checked by long division.
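The same residues drop out of `scipy.signal.residuez`, which expands a rational X(z) written in powers of z^-1 into a direct (impulse) term plus first-order sections:

```python
import numpy as np
from scipy import signal

# X(z) = 1/[(z - 0.25)(z - 0.5)] = z^-2/(1 - 0.75 z^-1 + 0.125 z^-2)
b = [0.0, 0.0, 1.0]
a = np.convolve([1, -0.25], [1, -0.5])

r, p, k = signal.residuez(b, a)     # residues, poles, direct terms
order = np.argsort(p.real)          # sort poles as 0.25, then 0.5
print(p[order])                     # poles 0.25 and 0.5
print(r[order])                     # residues -16 and 8
print(k)                            # direct term 8, i.e. the 8δ[n] part

# First few samples of x[n] = 8δ[n] - 16(0.25)^n u[n] + 8(0.5)^n u[n]
n = np.arange(4)
x = 8 * (n == 0) - 16 * 0.25**n + 8 * 0.5**n
print(x)                            # samples 0, 0, 1, 0.75
```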
Comment: An alternate approach (not recommended) is to expand X(z) itself as

X(z) = -4/(z - 0.25) + 4/(z - 0.5)        x[n] = -4(0.25)^(n-1) u[n-1] + 4(0.5)^(n-1) u[n-1]

Its inverse requires the shifting property. This form is functionally equivalent to the previous case. For
example, we find that x[0] = 0, x[1] = 0, x[2] = 1, and x[3] = 0.75, as before.
(b) (Repeated Roots) We find the inverse of X(z) = z/[(z - 1)^2 (z - 2)].
We obtain Y(z) = X(z)/z, and set up its partial fraction expansion as

Y(z) = X(z)/z = 1/[(z - 1)^2 (z - 2)] = A/(z - 2) + K_0/(z - 1)^2 + K_1/(z - 1)

The constants in the partial fraction expansion are

A = 1/(z - 1)^2 |_{z=2} = 1        K_0 = 1/(z - 2) |_{z=1} = -1        K_1 = (d/dz)[1/(z - 2)] |_{z=1} = -1

Substituting into Y(z) and multiplying through by z, we get

X(z) = z/(z - 2) - z/(z - 1)^2 - z/(z - 1)        x[n] = (2)^n u[n] - nu[n] - u[n] = (2^n - n - 1)u[n]

The first few values x[0] = 0, x[1] = 0, x[2] = 1, x[3] = 4, and x[4] = 11 can be easily checked by long
division.
(c) (Complex Roots) We find the causal inverse of X(z) = (z^2 - 3z)/[(z - 2)(z^2 - 2z + 2)].
We set up the partial fraction expansion for Y(z) = X(z)/z as

Y(z) = X(z)/z = (z - 3)/[(z - 2)(z - 1 - j)(z - 1 + j)] = A/(z - 2) + K̄/(z - √2 e^(jπ/4)) + K̄*/(z - √2 e^(-jπ/4))

We evaluate the constants A and K̄, to give

A = (z - 3)/(z^2 - 2z + 2) |_{z=2} = -0.5        K̄ = (z - 3)/[(z - 2)(z - 1 + j)] |_{z=1+j} = 0.7906e^(-j71.56°) = 0.25 - j0.75

Multiplying through by z, we get

X(z) = -0.5z/(z - 2) + K̄z/(z - √2 e^(jπ/4)) + K̄*z/(z - √2 e^(-jπ/4))

The inverse of the first term is easy. For the remaining pair, we use entry 4 of the table for inversion
of partial fraction forms with α = √2, Ω = π/4, K = 0.7906, θ = -71.56°, to give

x[n] = -0.5(2)^n u[n] + 2(0.7906)(√2)^n cos(nπ/4 - 71.56°)u[n]
With C = 0.25, D = -0.75, this may also be expressed in the alternate form

x[n] = -0.5(2)^n u[n] + 2(√2)^n [0.25 cos(nπ/4) + 0.75 sin(nπ/4)]u[n]
(d) (Inverse Transform of Quadratic Forms) We find the causal inverse of X(z) = z/(z^2 + 4).
The numerator suggests the generic form x[n] = Bα^n sin(nΩ)u[n] because

Bα^n sin(nΩ)u[n] ↔ Bαz sin Ω/(z^2 - 2αz cos Ω + α^2)

Comparing denominators, we find that α^2 = 4 and 2α cos Ω = 0. Thus, α = ±2. If we pick α = 2, we
get 4 cos Ω = 0 or cos Ω = 0, and thus Ω = π/2.
Finally, comparing numerators, Bαz sin Ω = 2Bz = z, or B = 0.5. Thus,

x[n] = Bα^n sin(nΩ)u[n] = 0.5(2)^n sin(nπ/2)u[n]
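A quick numerical check of this match: the impulse response of X(z) = z/(z^2 + 4), generated by recursion, should reproduce the closed form 0.5(2)^n sin(nπ/2):

```python
import numpy as np
from scipy import signal

# X(z) = z/(z^2 + 4) = z^-1/(1 + 4 z^-2)
b = [0.0, 1.0]
a = [1.0, 0.0, 4.0]

imp = np.zeros(10); imp[0] = 1.0
h = signal.lfilter(b, a, imp)             # impulse response by recursion

n = np.arange(10)
x = 0.5 * 2.0**n * np.sin(n * np.pi / 2)  # closed form from the quadratic match

print(np.allclose(h, x))                  # → True
```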
(e) (Inverse Transform of Quadratic Forms) Let X(z) = (z^2 + z)/(z^2 - 2z + 4).
The quadratic numerator suggests the form x[n] = Aα^n cos(nΩ)u[n] + Bα^n sin(nΩ)u[n] because

Aα^n cos(nΩ)u[n] ↔ A(z^2 - αz cos Ω)/(z^2 - 2αz cos Ω + α^2)        Bα^n sin(nΩ)u[n] ↔ Bαz sin Ω/(z^2 - 2αz cos Ω + α^2)

Comparing denominators, we find α^2 = 4 and 2α cos Ω = 2. Thus, α = ±2. If we pick α = 2, we get
cos Ω = 0.5 or Ω = π/3.
Now, A(z^2 - αz cos Ω) = A(z^2 - z) and Bαz sin Ω = Bz√3. Comparing with the numerator z^2 + z
requires A = 1 and -A + B√3 = 1 (with A = 1 and B = 2/√3 = 1.1547). Thus,

x[n] = Aα^n cos(nΩ)u[n] + Bα^n sin(nΩ)u[n] = (2)^n cos(nπ/3)u[n] + 1.1547(2)^n sin(nπ/3)u[n]

The formal approach is to use partial fractions. With z^2 - 2z + 4 = (z - 2e^(jπ/3))(z - 2e^(-jπ/3)), we find

X(z)/z = (z + 1)/(z^2 - 2z + 4) = K̄/(z - 2e^(jπ/3)) + K̄*/(z - 2e^(-jπ/3))

We find K̄ = 0.7638e^(-j49.11°) = 0.5 - j0.5774, and entry 4 of the table for inversion of partial fraction
forms (with α = 2, Ω = π/3) gives

x[n] = 1.5275(2)^n cos(nπ/3 - 49.11°)u[n] = (2)^n cos(nπ/3)u[n] + 1.1547(2)^n sin(nπ/3)u[n]

The second form of this result is identical to what was found earlier.
4.8.4 The ROC and Inversion
We have so far been assuming right-sided sequences when no ROC is given. Only when the ROC is specied
do we obtain a unique sequence from X(z). Sometimes, the ROC may be specied indirectly by requiring
the system to be stable, for example. Since the ROC of a stable system includes the unit circle, this gives
us a clue to the type of inverse we require.
EXAMPLE 4.13 (Inversion and the ROC)
(a) Find all possible inverse transforms of X(z) = z/[(z - 0.25)(z - 0.5)].
The partial fraction expansion of Y(z) = X(z)/z leads to X(z) = -4z/(z - 0.25) + 4z/(z - 0.5).

1. If the ROC is |z| > 0.5, x[n] is causal and stable, and we obtain

x[n] = -4(0.25)^n u[n] + 4(0.5)^n u[n]

2. If the ROC is |z| < 0.25, x[n] is anti-causal and unstable, and we obtain

x[n] = 4(0.25)^n u[-n-1] - 4(0.5)^n u[-n-1]

3. If the ROC is 0.25 < |z| < 0.5, x[n] is two-sided and unstable. This ROC is valid only if -4z/(z - 0.25)
describes a causal sequence (ROC |z| > 0.25), and 4z/(z - 0.5) describes an anti-causal sequence (ROC
|z| < 0.5). With this in mind, we obtain

x[n] = -4(0.25)^n u[n] - 4(0.5)^n u[-n-1]    (first term: ROC |z| > 0.25; second term: ROC |z| < 0.5)
(b) Find the unique inverse transforms of the following, assuming each system is stable:

H_1(z) = z/[(z - 0.4)(z + 0.6)]        H_2(z) = 2.5z/[(z - 0.5)(z + 2)]        H_3(z) = 5z/[(z - 2)(z + 3)]

Partial fraction expansion leads to

H_1(z) = z/(z - 0.4) - z/(z + 0.6)        H_2(z) = z/(z - 0.5) - z/(z + 2)        H_3(z) = z/(z - 2) - z/(z + 3)

To find the appropriate inverse, the key is to recognize that the ROC must include the unit circle.
Looking at the pole locations, we see that

H_1(z) is stable if its ROC is |z| > 0.6.
Its inverse is causal, with h_1[n] = (0.4)^n u[n] - (-0.6)^n u[n].

H_2(z) is stable if its ROC is 0.5 < |z| < 2.
Its inverse is two-sided, with h_2[n] = (0.5)^n u[n] + (-2)^n u[-n-1].

H_3(z) is stable if its ROC is |z| < 2.
Its inverse is anti-causal, with h_3[n] = -(2)^n u[-n-1] + (-3)^n u[-n-1].
4.9 The One-Sided z-Transform
The one-sided z-transform is particularly useful in the analysis of causal LTI systems. It is defined by

X(z) = Σ_{k=0}^{∞} x[k] z^-k    (one-sided z-transform)    (4.35)

The lower limit of zero in the summation implies that the one-sided z-transform of an arbitrary signal x[n]
and its causal version x[n]u[n] are identical. Most of the properties of the two-sided z-transform also apply
to the one-sided version.
REVIEW PANEL 4.17
The Scaling, Times-n, and Convolution Properties of the One-Sided z-Transform
α^n x[n] ↔ X(z/α)        n x[n] ↔ -z dX(z)/dz        x[n] * h[n] ↔ X(z)H(z)
However, the shifting property of the two-sided z-transform must be modified for use with right-sided
signals that are nonzero for n < 0. We also develop new properties, such as the initial value theorem
and the final value theorem, that are unique to the one-sided z-transform. These properties are summarized
in Table 4.4.
Table 4.4 Properties Unique to the One-Sided z-Transform

Property             Signal         One-Sided z-Transform
Right shift          x[n-1]         z^-1 X(z) + x[-1]
                     x[n-2]         z^-2 X(z) + z^-1 x[-1] + x[-2]
                     x[n-N]         z^-N X(z) + z^-(N-1) x[-1] + z^-(N-2) x[-2] + ··· + x[-N]
Left shift           x[n+1]         zX(z) - zx[0]
                     x[n+2]         z^2 X(z) - z^2 x[0] - zx[1]
                     x[n+N]         z^N X(z) - z^N x[0] - z^(N-1) x[1] - ··· - zx[N-1]
Switched periodic    x_p[n]u[n]     X_1(z)/(1 - z^-N)    (x_1[n] is the first period of x_p[n])

Initial value theorem: x[0] = lim_{z→∞} X(z)
Final value theorem: lim_{n→∞} x[n] = lim_{z→1} (z - 1)X(z)
EXAMPLE 4.14 (Properties of the One-Sided z-Transform)
(a) Find the z-transform of x[n] = n(4)^(0.5n) u[n].
We rewrite this as x[n] = n(2)^n u[n] to get X(z) = 2z/(z - 2)^2.
(b) Find the z-transform of x[n] = (2)^(n+1) u[n-1].
We rewrite this as x[n] = (2)^2 (2)^(n-1) u[n-1] to get X(z) = z^-1 (4z)/(z - 2) = 4/(z - 2).

(c) Let x[n] ↔ 4z/(z + 0.5)^2 = X(z), with ROC: |z| > 0.5. Find the z-transform of the signals
h[n] = nx[n] and y[n] = x[n] * x[n].

By the times-n property, we have

H(z) = -z dX(z)/dz = -z [4/(z + 0.5)^2 - 8z/(z + 0.5)^3] = (4z^2 - 2z)/(z + 0.5)^3

By the convolution property, Y(z) = X^2(z) = 16z^2/(z + 0.5)^4.

(d) Let (4)^n u[n] ↔ X(z). Find the signals corresponding to F(z) = X^2(z) and G(z) = X(2z).
By the convolution property, f[n] = (4)^n u[n] * (4)^n u[n] = (n + 1)(4)^n u[n].
By the scaling property, G(z) = X(2z) = X(z/0.5) corresponds to the signal g[n] = (0.5)^n x[n].
Thus, we have g[n] = (2)^n u[n].
4.9.1 The Right-Shift Property of the One-Sided z-Transform
The one-sided z-transform of a sequence x[n] and its causal version x[n]u[n] are identical. A right shift of
x[n] brings samples for n < 0 into the range n ≥ 0, as illustrated in Figure 4.9, and leads to the z-transforms

x[n-1] ↔ z^-1 X(z) + x[-1]        x[n-2] ↔ z^-2 X(z) + z^-1 x[-1] + x[-2]    (4.36)
Figure 4.9 Illustrating the right-shift property of the one-sided z-transform
These results generalize to

x[n-N] ↔ z^-N X(z) + z^-(N-1) x[-1] + z^-(N-2) x[-2] + ··· + x[-N]    (4.37)

For causal signals (for which x[n] = 0, n < 0), this result reduces to x[n-N] ↔ z^-N X(z).
REVIEW PANEL 4.18
The Right-Shift Property of the One-Sided z-Transform
x[n-1] ↔ z^-1 X(z) + x[-1]        x[n-2] ↔ z^-2 X(z) + z^-1 x[-1] + x[-2]
4.9.2 The Left-Shift Property of the One-Sided z-Transform
A left shift of x[n]u[n] moves samples for n ≥ 0 into the range n < 0, and these samples no longer contribute
to the z-transform of the causal portion, as illustrated in Figure 4.10.
Figure 4.10 Illustrating the left-shift property of the one-sided z-transform
This leads to the z-transforms

x[n+1] ↔ zX(z) - zx[0]        x[n+2] ↔ z^2 X(z) - z^2 x[0] - zx[1]    (4.38)

By successively shifting x[n]u[n] to the left, we obtain the general relation

x[n+N] ↔ z^N X(z) - z^N x[0] - z^(N-1) x[1] - ··· - zx[N-1]    (4.39)
The right-shift and left-shift properties of the one-sided z-transform form the basis for finding the response
of causal LTI systems with nonzero initial conditions.
REVIEW PANEL 4.19
The Left-Shift Property of the One-Sided z-Transform
x[n+1] ↔ zX(z) - zx[0]        x[n+2] ↔ z^2 X(z) - z^2 x[0] - zx[1]
EXAMPLE 4.15 (Using the Shift Properties)
(a) Using the right-shift property and superposition, we obtain the z-transform of the first difference of
x[n] as

y[n] = x[n] - x[n-1] ↔ X(z) - [z^-1 X(z) + x[-1]] = (1 - z^-1)X(z) - x[-1]

(b) Consider the signal x[n] = α^n. Its one-sided z-transform is identical to that of α^n u[n] and equals
X(z) = z/(z - α). If y[n] = x[n-1], the right-shift property, with N = 1, yields

Y(z) = z^-1 X(z) + x[-1] = 1/(z - α) + α^-1

The additional term α^-1 arises because x[n] is not causal.
(c) (The Left-Shift Property)
Consider the shifted step u[n+1]. Its one-sided z-transform should be identical to that of u[n], since
u[n] and u[n+1] are identical for n ≥ 0.
With u[n] ↔ z/(z - 1) and u[0] = 1, the left-shift property gives

u[n+1] ↔ zU(z) - zu[0] = z^2/(z - 1) - z = z/(z - 1)

(d) With y[n] = α^n u[n] ↔ z/(z - α) and y[0] = 1, the left-shift property gives

y[n+1] = α^(n+1) u[n+1] ↔ z [z/(z - α)] - z = αz/(z - α)
4.9.3 The Initial Value Theorem and Final Value Theorem
The initial value theorem and final value theorem apply only to the one-sided z-transform and the proper
part X(z) of a rational z-transform.
REVIEW PANEL 4.20
The Initial Value Theorem and Final Value Theorem for the One-Sided z-Transform
Initial value: x[0] = lim_{z→∞} X(z)        Final value: x[∞] = lim_{z→1} (z - 1)X(z)
The final value theorem is meaningful only if the poles of (z - 1)X(z) lie inside the unit circle.
With X(z) described by x[0] + x[1]z^-1 + x[2]z^-2 + ···, it should be obvious that only x[0] survives as
z → ∞, and the initial value equals x[0] = lim_{z→∞} X(z).
To find the final value, we evaluate (z - 1)X(z) at z = 1. It yields meaningful results only when the poles
of (z - 1)X(z) have magnitudes smaller than unity (lie within the unit circle in the z-plane). As a result:

1. x[∞] = 0 if all poles of X(z) lie within the unit circle (since x[n] will then contain only exponentially
damped terms).
2. x[∞] is constant if there is a single pole at z = 1 (since x[n] will then include a step).
3. x[∞] is indeterminate if there are complex conjugate poles on the unit circle (since x[n] will then
include sinusoids). The final value theorem can yield absurd results if used in this case.
EXAMPLE 4.16 (Initial and Final Value Theorems)
Let X(z) = z(z - 2)/[(z - 1)(z - 0.5)]. We then find

The initial value: x[0] = lim_{z→∞} X(z) = lim_{z→∞} (1 - 2z^-1)/[(1 - z^-1)(1 - 0.5z^-1)] = 1

The final value: lim_{n→∞} x[n] = lim_{z→1} (z - 1)X(z) = lim_{z→1} z(z - 2)/(z - 0.5) = -2
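Both limits can be sanity-checked by generating x[n] itself. Writing X(z) in powers of z^-1 and driving the corresponding recursion with an impulse gives a sequence that starts at the initial value and settles toward the final value:

```python
import numpy as np
from scipy import signal

# X(z) = z(z - 2)/[(z - 1)(z - 0.5)] = (1 - 2 z^-1)/(1 - 1.5 z^-1 + 0.5 z^-2)
b = [1.0, -2.0]
a = np.convolve([1, -1], [1, -0.5])

imp = np.zeros(60); imp[0] = 1.0
x = signal.lfilter(b, a, imp)   # the sequence x[n] itself

print(x[0])                     # initial value 1, matching lim X(z) as z → ∞
print(x[-1])                    # settles near -2, matching lim (z-1)X(z) as z → 1
```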
4.9.4 The z-Transform of Switched Periodic Signals
Consider a causal signal x[n] = x_p[n]u[n], where x_p[n] is periodic with period N. If x_1[n] describes the first
period of x[n] and has the z-transform X_1(z), then the z-transform of x[n] can be found as the superposition
of the z-transforms of the shifted versions of x_1[n]:

X(z) = X_1(z) + z^-N X_1(z) + z^-2N X_1(z) + ··· = X_1(z)[1 + z^-N + z^-2N + ···]    (4.40)

Expressing the geometric series in closed form, we obtain

X(z) = X_1(z)/(1 - z^-N) = z^N X_1(z)/(z^N - 1)    (4.41)
REVIEW PANEL 4.21
The z-Transform of a Switched Periodic Signal x_p[n]u[n] with Period N
X(z) = X_1(z)/(1 - z^-N)    (X_1(z) is the z-transform of the first period x_1[n])
EXAMPLE 4.17 (z-Transform of Switched Periodic Signals)
(a) Find the z-transform of a periodic signal whose first period is x_1[n] = {0, 1, 2}.
The period of x[n] is N = 3. We then find the z-transform of x[n] as

X(z) = X_1(z)/(1 - z^-N) = (z^-1 + 2z^-2)/(1 - z^-3)

(b) Find the z-transform of x[n] = sin(0.5nπ)u[n].
The digital frequency of x[n] is F = 1/4. So N = 4. The first period of x[n] is x_1[n] = {0, 1, 0, -1}.
The z-transform of x[n] is thus

X(z) = X_1(z)/(1 - z^-N) = (z^-1 - z^-3)/(1 - z^-4) = z^-1/(1 + z^-2)

(c) Find the causal signal corresponding to X(z) = (2 + z^-1)/(1 - z^-3).
Comparing with the z-transform for a switched periodic signal, we recognize N = 3 and X_1(z) = 2 + z^-1.
Thus, the first period of x[n] is {2, 1, 0}.

(d) Find the causal signal corresponding to X(z) = z^-1/(1 + z^-3).
We first rewrite X(z) as

X(z) = z^-1 (1 - z^-3)/[(1 + z^-3)(1 - z^-3)] = (z^-1 - z^-4)/(1 - z^-6)

Comparing with the z-transform for a switched periodic signal, we recognize the period as N = 6, and
X_1(z) = z^-1 - z^-4. Thus, the first period of x[n] is {0, 1, 0, 0, -1, 0}.
4.10 The z-Transform and System Analysis
The one-sided z-transform serves as a useful tool for analyzing LTI systems described by difference equations or transfer functions. The key, of course, is that the solution methods are much simpler in the transformed domain, because convolution transforms to multiplication. Naturally, the time-domain response requires an inverse transformation, a penalty exacted by all transformed-domain methods.
4.10.1 Systems Described by Difference Equations
For a system described by a difference equation, the solution is based on transforming the difference equation using the shift property (incorporating the effect of initial conditions, if present), followed by inverse transformation using partial fractions to obtain the time-domain response. The response may be separated into its zero-state component (due only to the input) and its zero-input component (due only to the initial conditions) in the z-domain itself.
REVIEW PANEL 4.22
Solving Difference Equations Using the z-Transform
Relaxed system: Transform the difference equation, then find Y(z) and its inverse.
Not relaxed: Transform using the shift property and initial conditions. Find Y(z) and its inverse.
EXAMPLE 4.18 (Solution of Difference Equations)
(a) Solve the difference equation y[n] - 0.5y[n-1] = 2(0.25)^n u[n] with y[-1] = -2.
Transformation using the right-shift property yields

Y(z) - 0.5(z^{-1}Y(z) + y[-1]) = 2z/(z - 0.25)   or   Y(z) = z(z + 0.25)/[(z - 0.25)(z - 0.5)]

We use partial fractions to get

Y(z)/z = (z + 0.25)/[(z - 0.25)(z - 0.5)] = -2/(z - 0.25) + 3/(z - 0.5)

Multiplying through by z and taking inverse transforms, we obtain

Y(z) = -2z/(z - 0.25) + 3z/(z - 0.5)   ⇒   y[n] = [-2(0.25)^n + 3(0.5)^n]u[n]
(b) Let y[n+1] - 0.5y[n] = 2(0.25)^{n+1} u[n+1] with y[-1] = -2.
We transform the difference equation using the left-shift property. The solution will require y[0].
By recursion, with n = -1, we obtain y[0] - 0.5y[-1] = 2, or y[0] = 2 + 0.5y[-1] = 2 - 1 = 1.
Let x[n] = (0.25)^n u[n]. Then, by the left-shift property x[n+1] ⇔ zX(z) - zx[0] (with x[0] = 1),

(0.25)^{n+1} u[n+1] ⇔ z·z/(z - 0.25) - z = 0.25z/(z - 0.25)

We now transform the difference equation using the left-shift property:

zY(z) - zy[0] - 0.5Y(z) = 0.5z/(z - 0.25)   ⇒   Y(z) = z(z + 0.25)/[(z - 0.25)(z - 0.5)]

This is identical to the result of part (a), and thus y[n] = [-2(0.25)^n + 3(0.5)^n]u[n], as before.
Comment: By time invariance, this represents the same system as in part (a).
(c) (Zero-Input and Zero-State Response) Let y[n] - 0.5y[n-1] = 2(0.25)^n u[n], with y[-1] = -2.
Upon transformation using the right-shift property, we obtain

Y(z) - 0.5(z^{-1}Y(z) + y[-1]) = 2z/(z - 0.25)   ⇒   (1 - 0.5z^{-1})Y(z) = 2z/(z - 0.25) - 1
1. Zero-state response: For the zero-state response, we assume zero initial conditions to obtain

(1 - 0.5z^{-1})Y_zs(z) = 2z/(z - 0.25)   ⇒   Y_zs(z) = 2z²/[(z - 0.25)(z - 0.5)]

Upon partial fraction expansion, we obtain

Y_zs(z)/z = 2z/[(z - 0.25)(z - 0.5)] = -2/(z - 0.25) + 4/(z - 0.5)

Multiplying through by z and inverse transforming the result, we get

Y_zs(z) = -2z/(z - 0.25) + 4z/(z - 0.5)   ⇒   y_zs[n] = -2(0.25)^n u[n] + 4(0.5)^n u[n]
2. Zero-input response: For the zero-input response, we set the input (the right-hand side) to zero and use the right-shift property to get

Y_zi(z) - 0.5(z^{-1}Y_zi(z) + y[-1]) = 0   ⇒   Y_zi(z) = -z/(z - 0.5)

This is easily inverted to give y_zi[n] = -(1/2)^n u[n].
3. Total response: We find the total response as

y[n] = y_zs[n] + y_zi[n] = -2(0.25)^n u[n] + 3(0.5)^n u[n]
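The closed form obtained in this example can be checked by brute-force recursion of the original difference equation, a quick numerical sanity check rather than part of the book's development:

```python
# y[n] - 0.5 y[n-1] = 2(0.25)^n u[n], with y[-1] = -2 (Example 4.18)
y_prev = -2.0
for n in range(10):
    y = 0.5 * y_prev + 2 * 0.25 ** n           # recurse the difference equation
    closed = -2 * 0.25 ** n + 3 * 0.5 ** n     # claimed closed-form solution
    assert abs(y - closed) < 1e-12
    y_prev = y
print("closed form matches recursion")
```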
4.10.2 Systems Described by the Transfer Function
The response Y(z) of a relaxed LTI system equals the product X(z)H(z) of the transformed input and the transfer function. It is often much easier to work with the transfer function description of a linear system. If we let H(z) = N(z)/D(z), the zero-state response Y(z) of a relaxed system to an input X(z) may be expressed as Y(z) = X(z)H(z) = X(z)N(z)/D(z). If the system is not relaxed, the initial conditions result in an additional contribution, the zero-input response Y_zi(z), which may be written as Y_zi(z) = N_zi(z)/D(z). To evaluate Y_zi(z), we first set up the system difference equation and then use the shift property to transform it in the presence of initial conditions.
REVIEW PANEL 4.23
System Analysis Using the Transfer Function
Zero-state response: Evaluate Y(z) = X(z)H(z) and take the inverse transform.
Zero-input response: Find the difference equation. Transform it using the shift property and initial conditions. Find the response in the z-domain and take the inverse transform.
EXAMPLE 4.19 (System Response from the Transfer Function)
(a) (A Relaxed System) Let H(z) = 3z/(z - 0.4). To find the zero-state response of this system to x[n] = (0.4)^n u[n], we first transform the input to X(z) = z/(z - 0.4). Then,

Y(z) = H(z)X(z) = 3z²/(z - 0.4)²   ⇒   y[n] = 3(n + 1)(0.4)^n u[n]
(b) (Step Response) Let H(z) = 4z/(z - 0.5).
To find its step response, we let x[n] = u[n]. Then X(z) = z/(z - 1), and the output equals

Y(z) = H(z)X(z) = [4z/(z - 0.5)][z/(z - 1)] = 4z²/[(z - 1)(z - 0.5)]

Using partial fraction expansion of Y(z)/z, we obtain

Y(z)/z = 4z/[(z - 1)(z - 0.5)] = 8/(z - 1) - 4/(z - 0.5)

Thus,

Y(z) = 8z/(z - 1) - 4z/(z - 0.5)   ⇒   y[n] = 8u[n] - 4(0.5)^n u[n]

The first term in y[n] is the steady-state response, which can be found much more easily as described shortly.
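The step response just found can be confirmed the same way, by recursing y[n] = 0.5y[n-1] + 4x[n] (the difference equation implied by H(z) = 4z/(z - 0.5)) on a unit-step input:

```python
# H(z) = 4z/(z - 0.5)  =>  y[n] - 0.5 y[n-1] = 4 x[n]; step input x[n] = 1 for n >= 0
y_prev = 0.0                                   # relaxed (zero-state) system
for n in range(10):
    y = 0.5 * y_prev + 4.0
    assert abs(y - (8 - 4 * 0.5 ** n)) < 1e-12   # claimed y[n] = 8u[n] - 4(0.5)^n u[n]
    y_prev = y
print("step response verified")
```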
(c) (A Second-Order System) Let H(z) = z²/(z² - (1/6)z - 1/6). Let the input be x[n] = 4u[n] and the initial conditions be y[-1] = 0, y[-2] = 12.

1. Zero-state and zero-input response: The zero-state response is found directly from H(z) as

Y_zs(z) = X(z)H(z) = 4z³/[(z² - (1/6)z - 1/6)(z - 1)] = 4z³/[(z - 1/2)(z + 1/3)(z - 1)]

Partial fractions of Y_zs(z)/z and inverse transformation give

Y_zs(z) = -2.4z/(z - 1/2) + 0.4z/(z + 1/3) + 6z/(z - 1)   ⇒   y_zs[n] = -2.4(1/2)^n u[n] + 0.4(-1/3)^n u[n] + 6u[n]
To find the zero-input response, we first set up the difference equation. We start with
H(z) = Y(z)/X(z) = z²/(z² - (1/6)z - 1/6), or (z² - (1/6)z - 1/6)Y(z) = z²X(z). This gives

(1 - (1/6)z^{-1} - (1/6)z^{-2})Y(z) = X(z)   ⇒   y[n] - (1/6)y[n-1] - (1/6)y[n-2] = x[n]
We now set the right-hand side to zero (for zero input) and transform this equation, using the right-shift property, to obtain the zero-input response from

Y_zi(z) - (1/6)(z^{-1}Y_zi(z) + y[-1]) - (1/6)(z^{-2}Y_zi(z) + z^{-1}y[-1] + y[-2]) = 0

With y[-1] = 0 and y[-2] = 12, this simplifies to

Y_zi(z) = 2z²/(z² - (1/6)z - 1/6) = 2z²/[(z - 1/2)(z + 1/3)]
Partial fraction expansion of Y_zi(z)/z and inverse transformation lead to

Y_zi(z) = 1.2z/(z - 1/2) + 0.8z/(z + 1/3)   ⇒   y_zi[n] = 1.2(1/2)^n u[n] + 0.8(-1/3)^n u[n]
Finally, we find the total response as

y[n] = y_zs[n] + y_zi[n] = -1.2(1/2)^n u[n] + 1.2(-1/3)^n u[n] + 6u[n]
2. Natural and forced response: By inspection, the natural and forced components of y[n] are

y_N[n] = -1.2(1/2)^n u[n] + 1.2(-1/3)^n u[n]      y_F[n] = 6u[n]
Comment: Alternatively, we could transform the system difference equation to obtain

Y(z) - (1/6)(z^{-1}Y(z) + y[-1]) - (1/6)(z^{-2}Y(z) + z^{-1}y[-1] + y[-2]) = 4z/(z - 1)

This simplifies to

Y(z) = z²(6z - 2)/[(z - 1)(z² - (1/6)z - 1/6)] = -1.2z/(z - 1/2) + 1.2z/(z + 1/3) + 6z/(z - 1)
The steady-state response corresponds to terms of the form z/(z - 1) (step functions). For this example, Y_F(z) = 6z/(z - 1) and y_F[n] = 6u[n].
Since the poles of (z - 1)Y(z) lie within the unit circle, y_F[n] can also be found by the final value theorem:

y_F[n] = lim_{z→1} (z - 1)Y(z) = lim_{z→1} z²(6z - 2)/(z² - (1/6)z - 1/6) = 6
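As a sanity check on the partial fractions, the total response of this second-order example can be recomputed by direct recursion of y[n] = (1/6)y[n-1] + (1/6)y[n-2] + x[n] with x[n] = 4u[n] and the given initial conditions (a throwaway numerical check):

```python
# y[n] - (1/6)y[n-1] - (1/6)y[n-2] = 4 u[n], with y[-1] = 0, y[-2] = 12
y1, y2 = 0.0, 12.0                 # y[n-1] and y[n-2]
for n in range(12):
    y = y1 / 6 + y2 / 6 + 4.0
    closed = -1.2 * 0.5 ** n + 1.2 * (-1 / 3) ** n + 6.0   # claimed total response
    assert abs(y - closed) < 1e-9
    y1, y2 = y, y1
print("total response verified")
```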
4.10.3 Forced and Steady-State Response from the Transfer Function
In the time domain, the forced response is found by assuming that it has the same form as the input and then satisfying the difference equation describing the LTI system. If the LTI system is described by its transfer function H(z), the forced response may also be found by evaluating H(z) at the complex frequency z_0 of the input. For example, the input x[n] = Kα^n cos(nΩ + θ) has the complex frequency z_0 = αe^{jΩ}. Once H(z_0) = H_0 e^{jφ_0} is evaluated as a complex quantity, the forced response equals y_F[n] = K H_0 α^n cos(nΩ + θ + φ_0). For multiple inputs, we simply add the forced response due to each input. For dc and sinusoidal inputs (for which α = 1), the forced response is also called the steady-state response.
EXAMPLE 4.20 (Finding the Forced Response)
(a) Find the steady-state response of a filter described by H(z) = z/(z - 0.4) to the input x[n] = cos(0.6nπ).
The complex input frequency is z_0 = e^{j0.6π}. We evaluate H(z) at z = z_0 to give

H(z)|_{z=e^{j0.6π}} = e^{j0.6π}/(e^{j0.6π} - 0.4) = 0.843e^{-j18.7°}

The steady-state response is thus y_F[n] = 0.843cos(0.6nπ - 18.7°).
(b) Find the forced response of the system H(z) = z/(z - 0.4) to the input x[n] = 5(0.6)^n.
The complex input frequency is z_0 = 0.6. We evaluate H(z) at z = 0.6 to give

H(z)|_{z=0.6} = 0.6/(0.6 - 0.4) = 3

The forced response is thus y_F[n] = (3)(5)(0.6)^n = 15(0.6)^n.
(c) Find the forced response of the system H(z) = 3z²/(z² - z + 1). The input contains two components and is given by x[n] = x_1[n] + x_2[n] = (0.6)^n + 2(0.4)^n cos(0.5nπ - 100°).
The forced response will be the sum of the forced component y_1[n] due to x_1[n] and y_2[n] due to x_2[n].
The complex input frequency of x_1[n] is z_0 = 0.6. We evaluate H(z) at z = 0.6 to give

H(z)|_{z=0.6} = 3(0.36)/(0.36 - 0.6 + 1) = 1.4211

So, y_1[n] = 1.4211(0.6)^n.
The complex input frequency of x_2[n] is z_0 = 0.4e^{j0.5π} = j0.4. We evaluate H(z) at z = j0.4 to give

H(z)|_{z=j0.4} = 3(-0.16)/(-0.16 - j0.4 + 1) = 0.5159e^{-j154.54°}

So, y_2[n] = (0.5159)(2)(0.4)^n cos(0.5nπ - 100° - 154.54°) = 1.0318(0.4)^n cos(0.5nπ + 105.46°).
Finally, by superposition, the complete forced response is

y_F[n] = y_1[n] + y_2[n] = 1.4211(0.6)^n + 1.0318(0.4)^n cos(0.5nπ + 105.46°)
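All of these forced-response evaluations reduce to computing H(z_0) as a complex number, which Python's complex arithmetic handles directly (a quick check of the values used in part (c), not part of the text):

```python
import cmath

H = lambda z: 3 * z**2 / (z**2 - z + 1)     # transfer function of part (c)

# x1[n] = (0.6)^n has the real complex frequency z0 = 0.6:
assert abs(H(0.6) - 1.4211) < 1e-3

# x2[n] ~ (0.4)^n cos(0.5*pi*n + ...) has z0 = 0.4 e^{j pi/2} = j0.4:
H0 = H(0.4j)
print(abs(H0), cmath.phase(H0) * 180 / cmath.pi)   # magnitude ~0.5159, phase ~-154.54 deg
```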
CHAPTER 4 PROBLEMS
4.1 (The z-Transform of Sequences) Use the defining relation to find the z-transform and its region of convergence for the following:
(a) x[n] = {1, 2, ⇓3, 2, 1}   (b) y[n] = {1, 2, ⇓0, 2, 1}
(c) f[n] = {⇓1, 1, 1, 1}   (d) g[n] = {1, 1, 1, ⇓1}
(The marker ⇓ indicates the sample at n = 0.)
4.2 (z-Transforms) Find the z-transforms and specify the ROC for the following:
(a) x[n] = (2)^{n+2} u[n]   (b) y[n] = n(2)^{2n} u[n]
(c) f[n] = (2)^{n+2} u[n-1]   (d) g[n] = n(2)^{n+2} u[n-1]
(e) p[n] = (n+1)(2)^n u[n]   (f) q[n] = (n-1)(2)^{n+2} u[n]
[Hints and Suggestions: For (a), (2)^{n+2} = (2)²(2)^n = 4(2)^n. For (b), (2)^{2n} = (2²)^n = (4)^n. For (e), use superposition with (n+1)(2)^n = n(2)^n + (2)^n.]
4.3 (The z-Transform of Sequences) Find the z-transform and its ROC for the following:
(a) x[n] = u[n+2] - u[n-2]
(b) y[n] = (0.5)^n (u[n+2] - u[n-2])
(c) f[n] = (0.5)^{|n|} (u[n+2] - u[n-2])
[Hints and Suggestions: First write each signal as a sequence of sample values.]
4.4 (z-Transforms) Find the z-transforms and specify the ROC for the following:
(a) x[n] = (0.5)^{2n} u[n]   (b) x[n] = n(0.5)^{2n} u[n]   (c) (0.5)^{-n} u[n]
(d) (0.5)^n u[-n]   (e) (0.5)^n u[-n-1]   (f) (0.5)^{-n} u[-n-1]
[Hints and Suggestions: For (a)-(b), (0.5)^{2n} = (0.5²)^n = (0.25)^n. For (c) and (f), (0.5)^{-n} = (2)^n. For (d), fold the results of (c).]
4.5 (z-Transforms) Find the z-transforms and specify the ROC for the following:
(a) x[n] = cos(nπ/4 - π/4)u[n]   (b) y[n] = (0.5)^n cos(nπ/4)u[n]
(c) f[n] = (0.5)^n cos(nπ/4 - π/4)u[n]   (d) g[n] = (1/3)^n (u[n] - u[n-4])
(e) p[n] = n(0.5)^n cos(nπ/4)u[n]   (f) q[n] = [(0.5)^n - (-0.5)^n]nu[n]
[Hints and Suggestions: For (a), cos(0.25nπ - 0.25π) = 0.7071cos(0.25nπ) + 0.7071sin(0.25nπ). For (b), start with the transform of cos(0.25nπ) and use properties. For (c), start with the result for (a) and use the times-α^n property. For (e), start with the result for (b) and use the times-n property.]
4.6 (Two-Sided z-Transform) Find the z-transform X(z) and its ROC for the following:
(a) x[n] = u[-n-1]   (b) y[n] = (0.5)^n u[-n-1]
(c) f[n] = (0.5)^{|n|}   (d) g[n] = u[-n-1] + (1/3)^n u[n]
(e) p[n] = (0.5)^n u[-n-1] + (1/3)^n u[n]   (f) q[n] = (0.5)^{|n|} + (-0.5)^{|n|}
[Hints and Suggestions: For (a), start with the transform of u[n-1] and use the folding property. For (c), note that (0.5)^{|n|} = (0.5)^n u[n] + (0.5)^{-n} u[-n] - δ[n]. For the ROC, note that (c)-(f) are two-sided signals.]
4.7 (The ROC) The transfer function of a system is H(z). What can you say about the ROC of H(z)
for the following cases?
(a) h[n] is a causal signal.
(b) The system is stable.
(c) The system is stable, and h[n] is a causal signal.
4.8 (Poles, Zeros, and the ROC) The transfer function of a system is H(z). What can you say about
the poles and zeros of H(z) for the following cases?
(a) The system is stable.
(b) The system is causal and stable.
(c) The system is an FIR filter with real coefficients.
(d) The system is a linear-phase FIR filter with real coefficients.
(e) The system is a causal, linear-phase FIR filter with real coefficients.
4.9 (z-Transforms and ROC) Consider the signal x[n] = α^n u[n] + β^n u[-n-1]. Find its z-transform X(z). Will X(z) represent a valid transform for the following cases?
(a) |α| > |β|   (b) |α| < |β|   (c) α = β
4.10 (z-Transforms) Find the z-transforms (if they exist) and specify their ROC.
(a) x[n] = (2)^n u[-n] + (2)^{-n} u[n]   (b) y[n] = (0.25)^n u[n] + 3^n u[-n]
(c) f[n] = (0.5)^n u[n] + 2^n u[-n-1]   (d) g[n] = (2)^n u[n] + (0.5)^n u[-n-1]
(e) p[n] = cos(0.5nπ)u[n]   (f) q[n] = cos(0.5nπ + 0.25π)u[n]
(g) s[n] = e^{jnπ} u[n]   (h) t[n] = e^{jnπ/2} u[n]
(i) v[n] = e^{jnπ/4} u[n]   (j) w[n] = (j)^n u[n] + (-j)^n u[n]
[Hints and Suggestions: Parts (a)-(c) are two-sided signals. Part (d) does not have a valid transform. For (f), cos(0.5nπ + 0.25π) = 0.7071[cos(0.5nπ) - sin(0.5nπ)]. For part (j), use j = e^{jπ/2} and Euler's relation to express w[n] as a sinusoid.]
4.11 (z-Transforms and ROC) The causal signal x[n] = α^n u[n] has the transform X(z), whose ROC is |z| > |α|. Find the ROC of the z-transform of the following:
(a) y[n] = x[n-5]
(b) p[n] = x[n+5]
(c) g[n] = x[-n]
(d) h[n] = (-1)^n x[n]
(e) p[n] = nx[n]
4.12 (z-Transforms and ROC) The anti-causal signal x[n] = α^n u[-n-1] has the transform X(z), whose ROC is |z| < |α|. Find the ROC of the z-transform of the following:
(a) y[n] = x[n-5]
(b) p[n] = x[n+5]
(c) g[n] = x[-n]
(d) h[n] = (-1)^n x[n]
(e) r[n] = nx[n]
4.13 (z-Transforms) Find the z-transform X(z) of x[n] = α^{|n|} and specify the region of convergence of X(z). Consider the cases |α| < 1 and |α| > 1 separately.
4.14 (Properties) Let x[n] = nu[n]. Find X(z), using the following:
(a) The defining relation for the z-transform
(b) The times-n property
(c) The convolution result u[n] ⋆ u[n] = (n+1)u[n+1] and the shifting property
(d) The convolution result u[n] ⋆ u[n] = (n+1)u[n] and superposition
4.15 (Properties) The z-transform of x[n] is X(z) = 4z/(z + 0.5)², |z| > 0.5. Find the z-transform of the following using properties, and specify the region of convergence.
(a) y[n] = x[n-2]   (b) d[n] = (2)^n x[n]   (c) f[n] = nx[n]
(d) g[n] = (2)^n nx[n]   (e) h[n] = n²x[n]   (f) p[n] = δ[n-2]x[n]
(g) q[n] = x[-n]   (h) r[n] = x[n] - x[n-1]   (i) s[n] = x[n] ⋆ x[n]
[Hints and Suggestions: For (d)-(f), use the results of (c).]
4.16 (Properties) The z-transform of x[n] = (2)^n u[n] is X(z). Use properties to find the time signal corresponding to the following:
(a) Y(z) = X(2z)   (b) F(z) = X(1/z)   (c) G(z) = zX'(z)
(d) H(z) = zX(z)/(z-1)   (e) D(z) = zX(2z)/(z-1)   (f) P(z) = z^{-1}X(z)
(g) Q(z) = z^{-2}X(2z)   (h) R(z) = X²(z)   (i) S(z) = X(-z)
[Hints and Suggestions: Parts (d)-(e) require the summation property.]
4.17 (Properties) The z-transform of a signal x[n] is X(z) = 4z/(z + 0.5)², |z| > 0.5. Find the z-transform and its ROC for the following.
(a) y[n] = (-1)^n x[n]   (b) f[n] = x[2n]
(c) g[n] = (j)^n x[n]   (d) h[n] = x[n+1] + x[n-1]
[Hints and Suggestions: For part (b), find x[n] first and use it to get f[n] = x[2n] and F(z). For the rest, use properties.]
4.18 (Properties) The z-transform of the signal x[n] = (2)^n u[n] is X(z). Use properties to find the time signal corresponding to the following.
(a) F(z) = X(-z)   (b) G(z) = X(1/z)   (c) H(z) = zX'(z)
4.19 (Properties) The z-transform of a causal signal x[n] is X(z) = z/(z - 0.4).
(a) Let x_e[n] be the even part of x[n]. Without computing x[n] or x_e[n], find X_e(z) and its ROC.
(b) Confirm your answer by first computing x_e[n] from x[n] and then finding its z-transform.
(c) Can you find X_e(z) if x[n] represents an anti-causal signal? Explain.
4.20 (Properties) Find the z-transform of x[n] = rect(n/2N) = u[n+N] - u[n-N-1]. Use this result to evaluate the z-transform of y[n] = tri(n/M), where M = 2N + 1.
[Hints and Suggestions: Recall that rect(n/2N) ⋆ rect(n/2N) = M tri(n/M), where M = 2N + 1, and use the convolution property of z-transforms.]
4.21 (Poles and Zeros) Make a rough sketch of the pole and zero locations of the z-transform of each of the signals shown in Figure P4.21.
[Figure P4.21: five sample plots against n, labeled Signal 1 through Signal 5.]
[Hints and Suggestions: Signal 1 has only three samples. Signals 2 and 3 appear to be exponentials. Signal 4 is a ramp. Signal 5 appears to be a sinusoid.]
4.22 (Pole-Zero Patterns and Symmetry) Plot the pole-zero patterns for each X(z). Which of these correspond to symmetric sequences?
(a) X(z) = (z² + z - 1)/z   (b) X(z) = (z⁴ + 2z³ + 3z² + 2z + 1)/z²
(c) X(z) = (z⁴ - z³ + z - 1)/z²   (d) X(z) = (z² - 1)(z² + 1)/z²
[Hints and Suggestions: If a sequence is symmetric about its midpoint, the zeros exhibit conjugate reciprocal symmetry. Also, X(z) = X(1/z) for symmetry about the origin.]
4.23 (Realization) Sketch the direct form I, direct form II, and transposed realization for each filter.
(a) y[n] - (1/6)y[n-1] - (1/2)y[n-2] = 3x[n]   (b) H(z) = (z - 2)/(z² - 0.25)
(c) y[n] - 3y[n-1] + 2y[n-2] = 2x[n-2]   (d) H(z) = (2z² + z - 2)/(z² - 1)
[Hints and Suggestions: For each part, start with the generic second-order realization and delete any signal paths corresponding to missing coefficients.]
4.24 (Realization) Find the transfer function and difference equation for each system realization shown in Figure P4.24.
[Figure P4.24: System 1 and System 2 realizations, built from gains (2, 3, 4), adders, and z^{-1} delay elements. Figure P4.25: filter realizations for Problem 4.25.]
[Hints and Suggestions: For (a), compare with a generic third-order direct form II realization. For (b), compare with a generic second-order transposed realization.]
4.26 (Inverse Systems) Find the transfer function of the inverse system for each of the following. Which inverse systems are causal? Which inverse systems are stable?
(a) H(z) = (z² + 0.1)/(z² - 0.2)   (b) H(z) = (z + 2)/(z² + 0.25)
(c) y[n] - 0.5y[n-1] = x[n] + 2x[n-1]   (d) h[n] = n(2)^n u[n]
[Hints and Suggestions: For parts (c) and (d), set up H(z) and find H_I(z) = 1/H(z).]
4.27 (Causality and Stability) How can you identify whether a system is a causal and/or stable system
from the following information?
(a) Its impulse response h[n]
(b) Its transfer function H(z) and its region of convergence
(c) Its system difference equation
(d) Its pole-zero plot
4.28 (Switched Periodic Signals) Find the z-transform of each switched periodic signal.
(a) x[n] = {⇓1, 2, 0, 3, 1, 2, 0, 3, ...}   (b) y[n] = {⇓1, 2, 0, 2, 1, 1, 2, 0, 2, 1, ...}
(c) y[n] = (2)^n u[n] ⋆ (2)^n u[n]   (d) y[n] = (2)^n u[n] ⋆ (3)^n u[n]
4.42 (Periodic Signal Generators) Find the transfer function H(z) of a filter whose impulse response is a periodic sequence with first period x[n] = (j)^n u[n] + (-j)^n u[n].
[Hints and Suggestions: In part (e), y[n] will be complex because the input is complex. In part (f), using j = e^{jπ/2} and Euler's relation, x[n] simplifies to a sinusoid. Therefore, the forced response y_F[n] is easy to find. Then, y[n] = K(0.5)^n + y_F[n] with y[-1] = 0.]
4.50 (Zero-State Response) Find the zero-state response of the following systems, using the z-transform.
(a) y[n] - 1.1y[n-1] + 0.3y[n-2] = 2u[n]   (b) y[n] - 0.9y[n-1] + 0.2y[n-2] = (0.5)^n
(c) y[n] + 0.7y[n-1] + 0.1y[n-2] = (0.5)^n   (d) y[n] - 0.25y[n-2] = cos(nπ/2)
4.51 (System Response) Let y[n] - 0.5y[n-1] = x[n], with y[-1] = 1. Find the response y[n] of this system for the following inputs, using the z-transform.
(a) x[n] = 2u[n]   (b) x[n] = (0.25)^n u[n]   (c) x[n] = n(0.25)^n u[n]
(d) x[n] = (0.5)^n u[n]   (e) x[n] = n(0.5)^n   (f) x[n] = (0.5)^n cos(0.5nπ)
4.52 (System Response) Find the response y[n] of the following systems, using the z-transform.
(a) y[n] + 0.1y[n-1] - 0.3y[n-2] = 2u[n],   y[-1] = 0, y[-2] = 0
(b) y[n] - 0.9y[n-1] + 0.2y[n-2] = (0.5)^n,   y[-1] = 1, y[-2] = 4
(c) y[n] + 0.7y[n-1] + 0.1y[n-2] = (0.5)^n,   y[-1] = 0, y[-2] = 3
(d) y[n] - 0.25y[n-2] = (0.4)^n,   y[-1] = 0, y[-2] = 3
(e) y[n] - 0.25y[n-2] = (0.5)^n,   y[-1] = 0, y[-2] = 0
4.53 (System Response) For each system, evaluate the response y[n], using the z-transform.
(a) y[n] - 0.4y[n-1] = x[n],   x[n] = (0.5)^n u[n],   y[-1] = 0
(b) y[n] - 0.4y[n-1] = 2x[n] + x[n-1],   x[n] = (0.5)^n u[n],   y[-1] = 0
(c) y[n] - 0.4y[n-1] = 2x[n] + x[n-1],   x[n] = (0.5)^n u[n],   y[-1] = 5
(d) y[n] + 0.5y[n-1] = x[n] - x[n-1],   x[n] = (0.5)^n u[n],   y[-1] = 2
(e) y[n] + 0.5y[n-1] = x[n] - x[n-1],   x[n] = (0.5)^n u[n],   y[-1] = 0
4.54 (System Response) Find the response y[n] of the following systems, using the z-transform.
(a) y[n] - 0.4y[n-1] = 2(0.5)^{n-1} u[n-1],   y[-1] = 2
(b) y[n] - 0.4y[n-1] = (0.4)^n u[n] + 2(0.5)^{n-1} u[n-1],   y[-1] = 2.5
(c) y[n] - 0.4y[n-1] = n(0.5)^n u[n] + 2(0.5)^{n-1} u[n-1],   y[-1] = 2.5
4.55 (System Response) The transfer function of a system is H(z) = 2z(z - 1)/(z² + 4z + 4). Find its response y[n] for the following inputs.
(a) x[n] = δ[n]   (b) x[n] = 2δ[n] + δ[n+1]   (c) x[n] = u[n]
(d) x[n] = (2)^n u[n]   (e) x[n] = nu[n]   (f) x[n] = cos(nπ/2)u[n]
4.56 (System Analysis) Find the impulse response h[n] and the step response s[n] of the causal digital filters described by
(a) H(z) = 4z/(z - 0.5)   (b) y[n] + 0.5y[n-1] = 6x[n]
[Hints and Suggestions: Note that y[-1] = 0. Choose x[n] = u[n] to compute the step response.]
4.57 (System Analysis) Find the zero-state response, zero-input response, and total response for each of the following systems, using the z-transform.
(a) y[n] - (1/4)y[n-1] = (1/3)^n u[n],   y[-1] = 8
(b) y[n] + 1.5y[n-1] + 0.5y[n-2] = (0.5)^n u[n],   y[-1] = 2, y[-2] = 4
(c) y[n] + y[n-1] + 0.25y[n-2] = 4(0.5)^n u[n],   y[-1] = 6, y[-2] = 12
(d) y[n] - y[n-1] + 0.5y[n-2] = (0.5)^n u[n],   y[-1] = 1, y[-2] = 2
4.58 (Steady-State Response) The transfer function of a system is H(z) = 2z(z - 1)/(z² + 0.25). Find its steady-state response for the following inputs.
(a) x[n] = 4u[n]   (b) x[n] = 4cos(nπ/2 + π/4)u[n]
(c) x[n] = cos(nπ/2) + sin(nπ/2)   (d) x[n] = 4cos(nπ/4) + 4sin(nπ/2)
[Hints and Suggestions: For parts (c)-(d), add the forced response due to each component.]
4.59 (Steady-State Response) The filter H(z) = A(z - α)/(z - 0.5) is designed to have a steady-state response of unity if the input is u[n] and a steady-state response of zero if the input is cos(nπ). What are the values of A and α?
4.60 (Steady-State Response) The filter H(z) = A(z - α)/(z - 0.5) is designed to have a steady-state response of zero if the input is u[n] and a steady-state response of unity if the input is cos(nπ). What are the values of A and α?
4.61 (System Response) Find the response of the following filters to the unit step x[n] = u[n], and to the alternating unit step x[n] = (-1)^n u[n].
(a) h[n] = δ[n] - δ[n-1] (differencing operation)
(b) h[n] = Σ_{k=0}^{N-1} δ[n-k], N = 3 (moving average)
(d) h[n] = [2/(N(N+1))] Σ_{k=0}^{N-1} (N-k)δ[n-k], N = 3 (weighted moving average)
(e) y[n] - αy[n-1] = (1-α)x[n], α = (N-1)/(N+1), N = 3 (exponential average)
4.62 (Steady-State Response) Consider the following DSP system:
x(t) → sampler → digital filter H(z) → ideal LPF → y(t)
The input is x(t) = 2 + cos(10πt) + cos(20πt). The sampler is ideal and operates at a sampling rate of S Hz. The digital filter is described by H(z) = 0.1S(z - 1)/(z - 0.5). The ideal lowpass filter has a cutoff frequency of 0.5S Hz.
(a) What is the smallest value of S that will prevent aliasing?
(b) Let S = 40 Hz and H(z) = 1 + z^{-2} + z^{-4}. What is the steady-state output y(t)?
(c) Let S = 40 Hz and H(z) = (z² + 1)/(z⁴ + 0.5). What is the steady-state output y(t)?
4.63 (Response of Digital Filters) Consider the averaging filter y[n] = 0.5x[n] + x[n-1] + 0.5x[n-2].
(a) Find its impulse response h[n] and its transfer function H(z).
(b) Find its response y[n] to the input x[n] = {⇓2, 4, 6, 8}.
(c) Find its response y[n] to the input x[n] = cos(nπ/3).
(d) Find its response y[n] to the input x[n] = cos(nπ/3) + sin(2nπ/3) + cos(nπ/2).
[Hints and Suggestions: For part (b), use convolution. For part (c), find the steady-state response. For part (d), add the steady-state response due to each component.]
4.64 (Transfer Function) The input to a digital filter is x[n] = ...
[Figure P4.71: filter realization for Problem 4.71.]
4.72 (Systems in Cascade) Consider the following system:
x[n] → H_1(z) → H_2(z) → H_3(z) → y[n]
It is known that h_1[n] = 0.5(0.4)^n u[n], H_2(z) = A(z + α)/(z + β), and h_3[n] = δ[n] + 0.5δ[n-1]. Choose A, α, and β such that the overall system represents an identity system.
[Hints and Suggestions: Set up H_1(z)H_2(z)H_3(z) = 1 to find the constants.]
4.73 (Recursive and Non-Recursive Filters) Consider the filters described by
(a) h[n] = {⇓2, 1}   (b) H(z) = (z² - 2z + 1)/z²   (c) y[n] = x[n] - x[n-2]
[Hints and Suggestions: Set up H(z) and multiply both its numerator and denominator by identical polynomials in z (linear, quadratic, etc.). Use this to find the recursive difference equation.]
COMPUTATION AND DESIGN
4.76 (System Response in Symbolic Form) The Matlab-based routine sysresp2 (on the author's website) returns the system response in symbolic form. Obtain the response of the following filters and plot the response for 0 ≤ n ≤ 30.
(a) The step response of y[n] - 0.5y[n-1] = x[n]
(b) The impulse response of y[n] - 0.5y[n-1] = x[n]
(c) The zero-state response of y[n] - 0.5y[n-1] = (0.5)^n u[n]
(d) The complete response of y[n] - 0.5y[n-1] = (0.5)^n u[n], y[-1] = 4
(e) The complete response of y[n] + y[n-1] + 0.5y[n-2] = (0.5)^n u[n], y[-1] = 4, y[-2] = 3
4.77 (Steady-State Response in Symbolic Form) The Matlab-based routine ssresp (on the author's website) yields a symbolic expression for the steady-state response to sinusoidal inputs. Find the steady-state response to the input x[n] = 2cos(0.2nπ - π/3) for each of the following systems and plot the results over 0 ≤ n ≤ 50.
(a) y[n] - 0.5y[n-1] = x[n]
(b) y[n] + y[n-1] + 0.5y[n-2] = 3x[n]
Chapter 5
FREQUENCY DOMAIN ANALYSIS
5.0 Scope and Objectives
This chapter develops the discrete-time Fourier transform (DTFT) as an analysis tool in the frequency
domain description of discrete-time signals and systems. It introduces the DTFT as a special case of the
z-transform, develops the properties of the DTFT, and concludes with applications of the DTFT to system
analysis and signal processing.
5.1 The DTFT from the z-Transform
The z-transform describes a discrete-time signal as a sum of weighted harmonics z^{-k}:

X(z) = Σ_{k=-∞}^{∞} x[k]z^{-k} = Σ_{k=-∞}^{∞} x[k](re^{j2πF})^{-k}   (5.1)

where the complex exponential z = re^{j2πF} = re^{jΩ} includes a real weighting factor r. If we let r = 1, we obtain z = e^{j2πF} = e^{jΩ} and z^{-k} = e^{-j2πkF} = e^{-jkΩ}. The expression for the z-transform then reduces to

X(F) = Σ_{k=-∞}^{∞} x[k]e^{-j2πkF}      X(Ω) = Σ_{k=-∞}^{∞} x[k]e^{-jkΩ}   (5.2)
The quantity X(F) (or X(Ω)) is now a function of the frequency F (or Ω) alone and describes the discrete-time Fourier transform (DTFT) of x[n] as a sum of weighted harmonics e^{j2πkF} = e^{jkΩ}. The DTFT is a frequency-domain description of a discrete-time signal. The DTFT of x[n] may be viewed as its z-transform X(z) evaluated for r = 1 (along the unit circle in the z-plane). The DTFT is also called the spectrum, and the DTFT H(F) of the system impulse response is also referred to as the frequency response or the frequency-domain transfer function.
Note that X(F) is periodic in F with unit period, because X(F) = X(F + 1):

X(F + 1) = Σ_{k=-∞}^{∞} x[k]e^{-j2πk(F+1)} = Σ_{k=-∞}^{∞} x[k]e^{-j2πk}e^{-j2πkF} = Σ_{k=-∞}^{∞} x[k]e^{-j2πkF} = X(F)

The unit interval -0.5 ≤ F ≤ 0.5 (or 0 ≤ F ≤ 1) defines the principal period or central period. Similarly, X(Ω) is periodic in Ω with period 2π and represents a scaled (stretched by 2π) version of X(F). The principal period of X(Ω) corresponds to the interval -π ≤ Ω ≤ π or 0 ≤ Ω ≤ 2π.
The inverse DTFT allows us to recover x[n] from one period of its DTFT and is defined by

x[n] = ∫_{-1/2}^{1/2} X(F)e^{j2πnF} dF (the F-form)      x[n] = (1/2π)∫_{-π}^{π} X(Ω)e^{jnΩ} dΩ (the Ω-form)   (5.3)

We will find it convenient to work with the F-form, especially while using the inverse transform relation, because it rids us of factors of 2π in many situations. The discrete signal x[n] and its discrete-time Fourier transform X(F) or X(Ω) form a unique transform pair, and their relationship is shown symbolically using a double arrow:

x[n] ⇔ X(F)   or   x[n] ⇔ X(Ω)   (5.4)
REVIEW PANEL 5.1
The DTFT Is a Frequency-Domain Representation of Discrete-Time Signals
F-form:  X(F) = Σ_{k=-∞}^{∞} x[k]e^{-j2πkF}      x[n] = ∫_{-1/2}^{1/2} X(F)e^{j2πnF} dF
Ω-form:  X(Ω) = Σ_{k=-∞}^{∞} x[k]e^{-jkΩ}        x[n] = (1/2π)∫_{-π}^{π} X(Ω)e^{jnΩ} dΩ
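The transform pair above can be exercised numerically: approximate the inverse-DTFT integral by a midpoint Riemann sum over one period and check that it returns the original samples of a finite sequence (the helper names below are ours, chosen for this throwaway check):

```python
import cmath

def dtft(x, F):
    # X(F) = sum_k x[k] e^{-j 2 pi k F}, for x given over k = 0, 1, ...
    return sum(xk * cmath.exp(-2j * cmath.pi * k * F) for k, xk in enumerate(x))

def inverse_dtft(x, n, M=512):
    # x[n] = integral of X(F) e^{j 2 pi n F} dF over -1/2..1/2 (midpoint rule, M points)
    total = 0
    for m in range(M):
        F = -0.5 + (m + 0.5) / M
        total += dtft(x, F) * cmath.exp(2j * cmath.pi * n * F)
    return total / M

x = [1, 0, 3, -2]
recovered = [inverse_dtft(x, n).real for n in range(4)]
print(recovered)   # ~ [1.0, 0.0, 3.0, -2.0]
```

For a finite sequence the midpoint rule is exact (to roundoff), since the integrand is a finite sum of complex exponentials.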
5.1.1 Symmetry of the Spectrum for a Real Signal
The DTFT of a real signal is, in general, complex. A plot of the magnitude of the DTFT against frequency is called the magnitude spectrum. The magnitude of the frequency response H(F) is also called the gain. A plot of the phase of the DTFT against frequency is called the phase spectrum. The phase spectrum may be restricted to a 360° range, such as (-180°, 180°].
[Figure 5.2: various ways of plotting the DTFT spectrum; the principal period is 0 ≤ Ω ≤ 2π in the Ω-form and -0.5 ≤ F ≤ 0.5 in the F-form.]
REVIEW PANEL 5.2
The DTFT Is Always Periodic and Shows Conjugate Symmetry for Real Signals
F-form: X(F) is periodic with unit period and conjugate symmetric about F = 0 and F = 0.5.
Ω-form: X(Ω) is periodic with period 2π and conjugate symmetric about Ω = 0 and Ω = π.
Plotting: It is sufficient to plot the DTFT over one period (-0.5 ≤ F ≤ 0.5 or -π ≤ Ω ≤ π).
DRILL PROBLEM 5.1
(a) If X(F)|_{F=0.2} = 2e^{jπ/3}, find X(F)|_{F=-0.2} and X(F)|_{F=0.8}.
(b) If X(F)|_{F=0.2} = 2e^{jπ/3}, find X(F)|_{F=3.2} and X(F)|_{F=5.8}.
Answers: (a) 2e^{-jπ/3}, 2e^{-jπ/3}   (b) 2e^{jπ/3}, 2e^{-jπ/3}
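Both properties in the review panel are easy to confirm numerically for any real sequence by evaluating the defining sum directly (a throwaway check using our own helper, not a library routine):

```python
import cmath

def dtft(x, F):
    """X(F) = sum_k x[k] e^{-j 2 pi k F}, for x given over k = 0, 1, ..."""
    return sum(xk * cmath.exp(-2j * cmath.pi * k * F) for k, xk in enumerate(x))

x = [1, 0, 3, -2]                                    # any real sequence
X = dtft(x, 0.2)
assert abs(dtft(x, 1.2) - X) < 1e-12                 # unit period: X(F + 1) = X(F)
assert abs(dtft(x, -0.2) - X.conjugate()) < 1e-12    # conjugate symmetry: X(-F) = X*(F)
print("periodicity and conjugate symmetry hold")
```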
If we know the spectrum of a real signal over the half-period 0 ≤ F ≤ 0.5 (or 0 ≤ Ω ≤ π), we can use conjugate symmetry about the origin to obtain the spectrum for one full period and replicate this to generate the periodic spectrum. For this reason, the highest useful frequency present in the spectrum is F = 0.5 or Ω = π. For sampled signals, this also corresponds to an analog frequency of 0.5S Hz (half the sampling frequency).
If a real signal x[n] is even symmetric about n = 0, its DTFT X(F) is always real and even symmetric in F, and has the form X(F) = A(F). If a real signal x[n] is odd symmetric, X(F) is always imaginary and odd symmetric in F, and has the form X(F) = jA(F). A real symmetric signal is called a linear-phase signal. The real quantity A(F) (which may not always be positive for all frequencies) is called the amplitude spectrum. For a linear-phase signal, it is much more convenient to plot the amplitude (not magnitude) spectrum, because its phase is then just zero or just ±90°.
EXAMPLE 5.1
(a) The DTFT of x[n] = δ[n] follows from the defining relation as

X(F) = Σ_{k=-∞}^{∞} x[k]e^{-j2πkF} = Σ_{k=-∞}^{∞} δ[k]e^{-j2πkF} = 1
(b) The DTFT of the sequence x[n] = {⇓1, 0, 3, -2} also follows from the definition as

X(F) = Σ_{k=-∞}^{∞} x[k]e^{-j2πkF} = 1 + 3e^{-j4πF} - 2e^{-j6πF}
Table 5.1 Some Useful DTFT Pairs
Note: In all cases, we assume |α| < 1.
Entry 1:  x[n] = δ[n];  X(F) = 1;  X(Ω) = 1
Entry 2:  x[n] = α^n u[n];  X(F) = 1/(1 - αe^{-j2πF});  X(Ω) = 1/(1 - αe^{-jΩ})
Entry 3:  x[n] = nα^n u[n];  X(F) = αe^{-j2πF}/(1 - αe^{-j2πF})²;  X(Ω) = αe^{-jΩ}/(1 - αe^{-jΩ})²
Entry 4:  x[n] = (n+1)α^n u[n];  X(F) = 1/(1 - αe^{-j2πF})²;  X(Ω) = 1/(1 - αe^{-jΩ})²
Entry 5:  x[n] = α^{|n|};  X(F) = (1 - α²)/(1 - 2αcos(2πF) + α²);  X(Ω) = (1 - α²)/(1 - 2αcosΩ + α²)
Entry 6:  x[n] = 1;  X(F) = δ(F);  X(Ω) = 2πδ(Ω)
Entry 7:  x[n] = cos(2πnF_0) = cos(nΩ_0);  X(F) = 0.5[δ(F + F_0) + δ(F - F_0)];  X(Ω) = π[δ(Ω + Ω_0) + δ(Ω - Ω_0)]
Entry 8:  x[n] = sin(2πnF_0) = sin(nΩ_0);  X(F) = j0.5[δ(F + F_0) - δ(F - F_0)];  X(Ω) = jπ[δ(Ω + Ω_0) - δ(Ω - Ω_0)]
Entry 9:  x[n] = 2F_C sinc(2nF_C) = sin(nΩ_C)/(nπ);  X(F) = rect(F/2F_C);  X(Ω) = rect(Ω/2Ω_C)
Entry 10: x[n] = u[n];  X(F) = 0.5δ(F) + 1/(1 - e^{-j2πF});  X(Ω) = πδ(Ω) + 1/(1 - e^{-jΩ})
In the Ω-form, we have

X(Ω) = Σ_{k=-∞}^{∞} x[k]e^{-jkΩ} = 1 + 3e^{-j2Ω} - 2e^{-j3Ω}

For finite sequences, the DTFT can be written just by inspection. Each term is the product of a sample value at index n and the exponential e^{-j2πnF} (or e^{-jnΩ}).
(c) The DTFT of the exponential signal x[n] = α^n u[n] follows from the definition and the closed form for the resulting geometric series:

X(F) = Σ_{k=0}^{∞} α^k e^{-j2πkF} = Σ_{k=0}^{∞} (αe^{-j2πF})^k = 1/(1 - αe^{-j2πF}),   |α| < 1

The sum converges only if |αe^{-j2πF}| < 1, or |α| < 1 (since |e^{-j2πF}| = 1). In the Ω-form,
X(Ω) = Σ_{k=0}^{∞} α^k e^{-jkΩ} = Σ_{k=0}^{∞} (αe^{-jΩ})^k = 1/(1 - αe^{-jΩ}),   |α| < 1
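The closed form of this geometric series can be spot-checked by truncating the defining sum at an arbitrary α and F (a numerical sanity check, not part of the derivation):

```python
import cmath

alpha, F = 0.5, 0.3
w = alpha * cmath.exp(-2j * cmath.pi * F)    # common ratio alpha e^{-j 2 pi F}

partial = sum(w ** k for k in range(200))    # truncated geometric series
closed = 1 / (1 - w)                         # closed form 1/(1 - alpha e^{-j 2 pi F})
assert abs(partial - closed) < 1e-12
print("geometric-series DTFT confirmed")
```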
(d) The signal x[n] = u[n] is a limiting form of α^n u[n] as α → 1 but must be handled with care, since u[n] is not absolutely summable. In fact, X(F) also includes an impulse (now an impulse train, due to the periodic spectrum). Over the principal period,

X(F) = 1/(1 - e^{-j2πF}) + 0.5δ(F) (F-form)      X(Ω) = 1/(1 - e^{-jΩ}) + πδ(Ω) (Ω-form)
DRILL PROBLEM 5.3
(a) Let x[n] = …
5.1.3 Relating the z-Transform and DTFT
The DTFT describes a signal as a sum of weighted harmonics, or complex exponentials. However, it cannot
handle exponentially growing signals. The z-transform overcomes this shortcoming by using exponentially
weighted harmonics in its definition. The z-transform may thus be viewed as a generalization of the DTFT to
complex frequencies.
For absolutely summable signals, the DTFT is simply the one-sided z-transform with z = e^{j2πF}. The
DTFT of signals that are not absolutely summable almost invariably contains impulses. However, for such
signals, the z-transform equals just the non-impulsive portion of the DTFT, with z = e^{j2πF}. In other words,
we can always find the z-transform from the DTFT, but we cannot always find the DTFT from the
z-transform.
REVIEW PANEL 5.5
Relating the z-Transform and the DTFT
From X(z) to DTFT: If x[n] is absolutely summable, simply replace z by e^{j2πF} (or e^{jΩ}).
From DTFT to X(z): Delete impulsive terms in the DTFT and replace e^{j2πF} (or e^{jΩ}) by z.
EXAMPLE 5.2 (The z-Transform and DTFT)
(a) The signal x[n] = αⁿu[n], |α| < 1, is absolutely summable. Its DTFT equals X_p(F) = 1/(1 − αe^{−j2πF}).
We can find the z-transform of x[n] from its DTFT as X(z) = 1/(1 − αz⁻¹) = z/(z − α). We can also find
the DTFT from the z-transform by reversing the steps.

(b) The signal x[n] = u[n] is not absolutely summable. Its DTFT is X_p(F) = 1/(1 − e^{−j2πF}) + 0.5δ(F).
We can find the z-transform of u[n] from the non-impulsive part of the DTFT, with e^{j2πF} = z, to give
X(z) = 1/(1 − z⁻¹) = z/(z − 1). However, we cannot recover the DTFT from the z-transform in this case.
5.2 Properties of the DTFT
The properties of the DTFT are summarized in Table 5.2. The proofs of most of the properties follow from
the defining relations if we start with the basic transform pair x[n] ⇔ X(F).
Table 5.2 Properties of the DTFT

Property            DT Signal            Result (F-Form)                    Result (Ω-Form)
Folding             x[−n]                X(−F) = X*(F)                      X(−Ω) = X*(Ω)
Time shift          x[n − m]             e^{−j2πmF}X(F)                     e^{−jmΩ}X(Ω)
Frequency shift     e^{j2πnF₀}x[n]       X(F − F₀)                          X(Ω − Ω₀)
Half-period shift   (−1)ⁿx[n]            X(F − 0.5)                         X(Ω − π)
Modulation          cos(2πnF₀)x[n]       0.5[X(F + F₀) + X(F − F₀)]         0.5[X(Ω + Ω₀) + X(Ω − Ω₀)]
Convolution         x[n] ∗ y[n]          X(F)Y(F)                           X(Ω)Y(Ω)
Product             x[n]y[n]             X(F) ⊛ Y(F)                        (1/2π)[X(Ω) ⊛ Y(Ω)]
Times-n             nx[n]                (j/2π) dX(F)/dF                    j dX(Ω)/dΩ
Parseval's relation      Σ_k x²[k] = ∫_{−1/2}^{1/2} |X(F)|² dF = (1/2π)∫_{2π} |X(Ω)|² dΩ
Central ordinates        x[0] = ∫_{−1/2}^{1/2} X(F) dF = (1/2π)∫_{2π} X(Ω) dΩ    X(0) = Σ_n x[n]
                         X(F)|_{F=0.5} = X(Ω)|_{Ω=π} = Σ_n (−1)ⁿx[n]
5.2.1 Folding
With x[n] ⇔ X(F), the DTFT of the signal y[n] = x[−n] may be written (using the change of
variable m = −k) as

Y(F) = Σ_k x[−k]e^{−j2πkF} = Σ_m x[m]e^{j2πmF} = X(−F)    (5.9)

A folding of x[n] to x[−n] results in a folding of X(F) to X(−F). For real signals, X(−F) = X*(F), implying
an identical magnitude spectrum and a reversed phase.

REVIEW PANEL 5.6
Folding x[n] to x[−n] Results in Folding X(F) to X(−F)
The magnitude spectrum stays the same, and only the phase is reversed (changes sign).
DRILL PROBLEM 5.4
(a) Let X(F) = 4 − 2e^{−j4πF}. Find the DTFT of y[n] = x[−n] and compute it at F = 0.2.
(b) For a real signal x[n], we find X(F)|_{F=0.2} = 2e^{jπ/3}. Compute the DTFT of y[n] = x[−n] at F = 0.2, 1.8.
Answers: (a) 4 − 2e^{j4πF}, 5.74e^{−j12°}  (b) 2e^{−jπ/3}, 2e^{jπ/3}
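The folding property is easy to confirm numerically for a real sequence such as x[n] = {4, 0, −2}: the folded signal keeps the magnitude spectrum and reverses the phase. A sketch (the `dtft` helper is ours, not the text's):

```python
import numpy as np

def dtft(x, F, n0=0):
    """DTFT of a finite sequence x whose first sample sits at index n0."""
    n = n0 + np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * F * n))

x = np.array([4.0, 0.0, -2.0])            # x[n], starting at n = 0
y = x[::-1]                               # y[n] = x[-n], starting at n = -2
F = 0.2
Xf, Yf = dtft(x, F), dtft(y, F, n0=-2)
print(abs(Xf), abs(Yf))                   # equal magnitudes (about 5.74)
print(np.angle(Xf), np.angle(Yf))         # equal and opposite phases
```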
5.2.2 Time Shift of x[n]
With x[n] ⇔ X(F), the DTFT of the signal y[n] = x[n − m] may be written (using the change of
variable ℓ = k − m) as

Y(F) = Σ_k x[k − m]e^{−j2πkF} = Σ_ℓ x[ℓ]e^{−j2π(ℓ+m)F} = X(F)e^{−j2πmF}    (5.10)

A time shift of x[n] to x[n − m] does not affect the magnitude spectrum. It augments the phase spectrum
by φ(F) = −2πmF (or φ(Ω) = −mΩ), which varies linearly with frequency.

REVIEW PANEL 5.7
Time Shift Property of the DTFT
x[n − m] ⇔ X(F)e^{−j2πmF} or X(Ω)e^{−jmΩ}
A time delay by m adds a linear-phase component (−2πmF or −mΩ) to the phase.
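A quick numerical check of the time-shift property: delaying x[n] = {4, 0, −2} by m = 2 leaves the magnitude untouched and multiplies the spectrum by e^{−j2πmF}. A sketch:

```python
import numpy as np

def dtft(x, F, n0=0):
    n = n0 + np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * F * n))

x = np.array([4.0, 0.0, -2.0])
m, F = 2, 0.2
X  = dtft(x, F)                 # spectrum of x[n]
Xd = dtft(x, F, n0=m)           # spectrum of y[n] = x[n - m]
print(abs(X), abs(Xd))          # identical magnitudes
print(abs(Xd - X * np.exp(-2j * np.pi * m * F)))   # ~ 0: only linear phase was added
```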
DRILL PROBLEM 5.5
(a) Let X(F) = 4 − 2e^{−j4πF}. If y[n] = x[n − 2], compute Y(F) at F = 0.2, 0.4.
(b) If g[n] = h[n − 2], find the phase difference between G(F) and H(F) at F = 0.2, 0.3.
Answers: (a) 5.74e^{−j132°}, 3.88e^{j43°}  (b) −144°, −216°
5.2.3 Frequency Shift of X(F)
By duality, a frequency shift of X(F) to X(F − F₀) yields the signal x[n]e^{j2πnF₀}.

Half-Period Frequency Shift
If X(F) is shifted by 0.5 to X(F − 0.5), then x[n] changes to x[n]e^{jπn} = (−1)ⁿx[n]. Thus, samples of x[n] at
odd index values (n = ±1, ±3, ±5, . . .) change sign.

REVIEW PANEL 5.8
Frequency Shift Property of the DTFT
x[n]e^{j2πnF₀} ⇔ X(F − F₀) or X(Ω − Ω₀)  (Ω₀ = 2πF₀)
(−1)ⁿx[n] ⇔ X(F − 0.5) or X(Ω − π)
DRILL PROBLEM 5.6
(a) Let X(F) = 4 − 2e^{−j4πF}. If y[n] = (−1)ⁿx[n], compute Y(F) at F = 0.2, 0.4.
(b) Let X(F) = 4 − 2e^{−j4πF}. If y[n] = (j)ⁿx[n], compute Y(F) at F = 0.2, 0.4. [Hint: j = e^{jπ/2}]
Answers: (a) 5.74e^{j12°}, 3.88e^{−j29°}  (b) 2.66e^{−j26°}, 4.99e^{j22°}
5.2.4 Modulation
With x[n] ⇔ X(F), the modulation property follows from the frequency shift property and superposition:

[(e^{j2πnF₀} + e^{−j2πnF₀})/2] x[n] = cos(2πnF₀)x[n] ⇔ [X(F + F₀) + X(F − F₀)]/2    (5.11)

Modulation results in a spreading of the original spectrum.
REVIEW PANEL 5.9
Modulation by cos(2πnF₀): The DTFT Gets Halved, Centered at ±F₀, and Added
cos(2πnF₀)x[n] ⇔ [X(F + F₀) + X(F − F₀)]/2 or [X(Ω + Ω₀) + X(Ω − Ω₀)]/2  (Ω₀ = 2πF₀)

DRILL PROBLEM 5.7
(a) The central period of X(F) is defined by X(F) = 1, |F| < 0.1 and zero elsewhere. Consider the signal
y[n] = x[n]cos(0.2nπ). Sketch Y(F) and evaluate it at F = 0, 0.1, 0.3, 0.4.
(b) The central period of X(F) is defined by X(F) = 1, |F| < 0.1 and zero elsewhere. Consider the signal
y[n] = x[n]cos(0.5nπ). Sketch Y(F) and evaluate it at F = 0, 0.1, 0.3, 0.4.
(c) The central period of X(F) is defined by X(F) = 1, |F| < 0.25 and zero elsewhere. Consider the
signal y[n] = x[n]cos(0.2nπ). Sketch Y(F) and evaluate it at F = 0, 0.1, 0.3, 0.4.
Answers: (a) 0.5, 0.5, 0, 0  (b) 0, 0.5, 0.5, 0  (c) 1, 1, 0.5, 0
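The modulation property holds term by term for any sequence, so it can be confirmed on a truncated exponential. A sketch with x[n] = (0.5)ⁿu[n] and F₀ = 0.1 (our own choices):

```python
import numpy as np

def dtft(x, F):
    n = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * F * n))

n = np.arange(200)
x = 0.5**n
F0, F = 0.1, 0.22
lhs = dtft(np.cos(2 * np.pi * n * F0) * x, F)       # DTFT of the modulated signal
rhs = 0.5 * (dtft(x, F + F0) + dtft(x, F - F0))     # halved, shifted, and added
print(abs(lhs - rhs))                               # ~ 0
```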
5.2.5 Convolution
The regular convolution of discrete-time signals results in the product of their DTFTs. This result follows
from the fact that the DTFT may be regarded as a polynomial in powers of e^{−j2πF}, and discrete convolution
corresponds to polynomial multiplication. If two discrete signals are multiplied together, the DTFT of the
product corresponds to the periodic convolution of the individual DTFTs. In other words, multiplication in
one domain corresponds to convolution in the other.

REVIEW PANEL 5.10
Multiplication in One Domain Corresponds to Convolution in the Other
x[n] ∗ h[n] ⇔ X(F)H(F)    x[n]h[n] ⇔ X(F) ⊛ H(F)  (the F-form)
x[n] ∗ h[n] ⇔ X(Ω)H(Ω)    x[n]h[n] ⇔ (1/2π)[X(Ω) ⊛ H(Ω)]  (the Ω-form)
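Since discrete convolution is polynomial multiplication, `np.convolve` followed by a DTFT must agree with the product of the individual DTFTs. A sketch with two short, arbitrary sequences:

```python
import numpy as np

def dtft(x, F):
    n = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * F * n))

x = np.array([1.0, 0.0, 3.0, -2.0])
y = np.array([2.0, -1.0, 0.5])
F = 0.37
conv = np.convolve(x, y)                  # regular (linear) convolution
print(abs(dtft(conv, F) - dtft(x, F) * dtft(y, F)))   # ~ 0
```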
DRILL PROBLEM 5.8
(a) Let X(F) = 4 − 2e^{−j4πF}. If y[n] = x[n] ∗ x[n], compute Y(F) at F = 0.2, 0.4.
(b) Let X(F) = 4 − 2e^{−j4πF}. If y[n] = x[n − 2] ∗ x[−n], compute Y(F) at F = 0.2, 0.4.
(c) The central period of X(F) is defined by X(F) = 1, |F| < 0.2 and zero elsewhere. Consider the signal
y[n] = x²[n]. Sketch Y(F) and evaluate it at F = 0, 0.1, 0.2, 0.3.
Answers: (a) 32.94e^{j24°}, 15.06e^{−j59°}  (b) 32.94e^{−j144°}, 15.06e^{j72°}
5.2.6 The Times-n Property
Differentiating the defining relation of the DTFT with respect to F, we obtain

dX(F)/dF = Σ_k (−j2πk)x[k]e^{−j2πkF}    (5.12)

The corresponding signal is (−j2πn)x[n], and thus the DTFT of y[n] = nx[n] is Y(F) = (j/2π) dX(F)/dF.
REVIEW PANEL 5.11
The Times-n Property: Multiply x[n] by n ⇔ Differentiate the DTFT
nx[n] ⇔ (j/2π) dX(F)/dF or j dX(Ω)/dΩ

DRILL PROBLEM 5.9
(a) Let X(F) = 4 − 2e^{−j4πF}. If y[n] = nx[n], find Y(F).
(b) Let X(F) = 4 − 2e^{−j4πF}. If y[n] = nx[n − 2], find Y(F).
(c) Let X(F) = 4 − 2e^{−j4πF}. If y[n] = (n − 2)x[n], find Y(F).
(d) Let X(F) = 1/(4 − 2e^{−j4πF}). If y[n] = nx[n], find Y(F).
Answers: (a) −4e^{−j4πF}  (b) 8e^{−j4πF} − 8e^{−j8πF}  (c) −8  (d) 4e^{−j4πF}/(4 − 2e^{−j4πF})²
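The times-n property can be checked against a numerical derivative of the spectrum. For x[n] = {4, 0, −2}, the DTFT of nx[n] should match (j/2π) dX(F)/dF, here approximated by a central difference:

```python
import numpy as np

def dtft(x, F):
    n = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * F * n))

x = np.array([4.0, 0.0, -2.0])
n = np.arange(len(x))
F, h = 0.2, 1e-6
lhs = dtft(n * x, F)                                           # DTFT of n x[n]
rhs = (1j / (2 * np.pi)) * (dtft(x, F + h) - dtft(x, F - h)) / (2 * h)
print(abs(lhs - rhs))            # tiny (finite-difference error only)
```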
5.2.7 Parseval's Relation
The DTFT is an energy-conserving transform, and the signal energy may be found either from the signal
x[n] or from one period of its periodic magnitude spectrum |X(F)|, using Parseval's theorem:

Σ_k x²[k] = ∫_{−1/2}^{1/2} |X(F)|² dF = (1/2π)∫_{2π} |X(Ω)|² dΩ  (Parseval's relation)    (5.13)

REVIEW PANEL 5.12
Parseval's Theorem: We Can Find the Signal Energy from x[n] or Its Magnitude Spectrum
Σ_k x²[k] = ∫_{−1/2}^{1/2} |X(F)|² dF = (1/2π)∫_{2π} |X(Ω)|² dΩ
DRILL PROBLEM 5.10
(a) Let X(F) = 5, |F| < 0.2 and zero elsewhere in the central period. Find its total signal energy and its
signal energy in the frequency range |F| ≤ 0.15.
(b) Let X(F) = 6/(4 − 2e^{−j2πF}). Find its signal energy.
Answers: (a) 10, 7.5  (b) 3
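Parseval's relation for part (b) of the drill can be confirmed by integrating |X(F)|² over one period numerically. With X(F) = 6/(4 − 2e^{−j2πF}) = 1.5/(1 − 0.5e^{−j2πF}), the signal is x[n] = 1.5(0.5)ⁿu[n], whose energy is 1.5²/(1 − 0.25) = 3. A midpoint-rule sketch:

```python
import numpy as np

N = 4096                                   # integration grid over one period
F = (np.arange(N) + 0.5) / N - 0.5         # midpoints spanning (-1/2, 1/2)
X = 1.5 / (1.0 - 0.5 * np.exp(-2j * np.pi * F))
energy_time = 1.5**2 / (1.0 - 0.25)        # sum of x^2[n] = 3
energy_freq = np.sum(np.abs(X)**2) / N     # integral of |X(F)|^2 over one period
print(energy_time, energy_freq)            # both equal 3 to numerical precision
```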
5.2.8 Central Ordinate Theorems
The DTFT obeys the central ordinate relations, found by substituting F = 0 (or Ω = 0) in the DTFT or
n = 0 in the IDTFT:

x[0] = ∫_{−1/2}^{1/2} X(F) dF = (1/2π)∫_{2π} X(Ω) dΩ    X(0) = Σ_n x[n]  (central ordinates)    (5.14)

With F = 0.5 (or Ω = π), we also have the useful result

X(F)|_{F=0.5} = X(Ω)|_{Ω=π} = Σ_n (−1)ⁿx[n]    (5.15)

The central ordinate theorems allow us to find the dc gain (at F = 0) and the high-frequency gain (at F = 0.5)
without having to formally evaluate the DTFT.

REVIEW PANEL 5.13
Central Ordinate Theorems
x[0] = ∫_{−1/2}^{1/2} X(F) dF = (1/2π)∫_{2π} X(Ω) dΩ    X(0) = Σ_n x[n]    X(F)|_{F=0.5} = X(Ω)|_{Ω=π} = Σ_n (−1)ⁿx[n]
DRILL PROBLEM 5.11
(a) Let X(F) = 5, |F| < 0.2 and zero elsewhere in the central period. Find the value of x[n] at n = 0.
(b) Let x[n] = 9(0.8)ⁿu[n]. What is the value of X(F) at F = 0 and F = 0.5?
(c) What are the dc gain and high-frequency gain of the filter described by h[n] = {1, 2, 3, 4}?
Answers: (a) 2  (b) 45, 5  (c) 10, 2
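The central-ordinate shortcut for part (c) of the drill amounts to two sums:

```python
import numpy as np

# For h[n] = {1, 2, 3, 4}: dc gain = plain sum; high-frequency gain (F = 0.5)
# = magnitude of the alternating sum.
h = np.array([1.0, 2.0, 3.0, 4.0])
n = np.arange(len(h))
dc = np.sum(h)                     # 1 + 2 + 3 + 4 = 10
hf = abs(np.sum((-1.0)**n * h))    # |1 - 2 + 3 - 4| = 2
print(dc, hf)
```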
EXAMPLE 5.3 (Some DTFT Pairs Using the Properties)
(a) The DTFT of x[n] = nαⁿu[n], |α| < 1, may be found using the times-n property as

X(F) = (j/2π) d/dF [1/(1 − αe^{−j2πF})] = αe^{−j2πF}/(1 − αe^{−j2πF})²

In the Ω-form,

X(Ω) = j d/dΩ [1/(1 − αe^{−jΩ})] = αe^{−jΩ}/(1 − αe^{−jΩ})²

(b) The DTFT of the signal x[n] = (n + 1)αⁿu[n] may be found if we write x[n] = nαⁿu[n] + αⁿu[n] and
use superposition to give

X(F) = αe^{−j2πF}/(1 − αe^{−j2πF})² + 1/(1 − αe^{−j2πF}) = 1/(1 − αe^{−j2πF})²

In the Ω-form,

X(Ω) = αe^{−jΩ}/(1 − αe^{−jΩ})² + 1/(1 − αe^{−jΩ}) = 1/(1 − αe^{−jΩ})²
By the way, if we recognize that x[n] = αⁿu[n] ∗ αⁿu[n], we can also use the convolution property to
obtain the same result.
(c) To find the DTFT of the N-sample exponential pulse x[n] = αⁿ, 0 ≤ n < N, express it as
x[n] = αⁿ(u[n] − u[n − N]) = αⁿu[n] − α^N α^{n−N}u[n − N] and use the shifting property to get

X(F) = 1/(1 − αe^{−j2πF}) − α^N e^{−j2πNF}/(1 − αe^{−j2πF}) = [1 − (αe^{−j2πF})^N]/(1 − αe^{−j2πF})

In the Ω-form,

X(Ω) = 1/(1 − αe^{−jΩ}) − α^N e^{−jNΩ}/(1 − αe^{−jΩ}) = [1 − (αe^{−jΩ})^N]/(1 − αe^{−jΩ})
(d) The DTFT of the two-sided decaying exponential x[n] = α^{|n|}, |α| < 1, may be found by rewriting this
signal as x[n] = αⁿu[n] + α^{−n}u[−n] − δ[n] and using the folding property to give

X(F) = 1/(1 − αe^{−j2πF}) + 1/(1 − αe^{j2πF}) − 1

Simplification leads to the result

X(F) = (1 − α²)/[1 − 2α cos(2πF) + α²]  or  X(Ω) = (1 − α²)/(1 − 2α cos Ω + α²)
(e) (Properties of the DTFT)
Find the DTFT of x[n] = 4(0.5)^{n+3}u[n] and y[n] = n(0.4)^{2n}u[n].

1. For x[n], we rewrite it as x[n] = 4(0.5)³(0.5)ⁿu[n] = 0.5(0.5)ⁿu[n] to get

X(F) = 0.5/(1 − 0.5e^{−j2πF})  or  X(Ω) = 0.5/(1 − 0.5e^{−jΩ})

2. For y[n], we rewrite it as y[n] = n(0.16)ⁿu[n] to get

Y(F) = 0.16e^{−j2πF}/(1 − 0.16e^{−j2πF})²  or  Y(Ω) = 0.16e^{−jΩ}/(1 − 0.16e^{−jΩ})²
(f) (Properties of the DTFT)
Let x[n] ⇔ X(F) = 4/(2 − e^{−j2πF}).
Find the DTFT of y[n] = nx[n], c[n] = x[−n], g[n] = x[n] ∗ x[n], and h[n] = (−1)ⁿx[n].
1. By the times-n property,

Y(F) = (j/2π) dX(F)/dF = (j/2π)[−4(j2π)e^{−j2πF}]/(2 − e^{−j2πF})² = 4e^{−j2πF}/(2 − e^{−j2πF})²

In the Ω-form,

Y(Ω) = j dX(Ω)/dΩ = j[−4(j)e^{−jΩ}]/(2 − e^{−jΩ})² = 4e^{−jΩ}/(2 − e^{−jΩ})²

2. By the folding property,

C(F) = X(−F) = 4/(2 − e^{j2πF})  or  C(Ω) = X(−Ω) = 4/(2 − e^{jΩ})

3. By the convolution property,

G(F) = X²(F) = 16/(2 − e^{−j2πF})²  or  G(Ω) = X²(Ω) = 16/(2 − e^{−jΩ})²

4. By the half-period shift property,

H(F) = X(F − 0.5) = 4/(2 − e^{−j2π(F−0.5)}) = 4/(2 + e^{−j2πF})

In the Ω-form,

H(Ω) = X(Ω − π) = 4/(2 − e^{−j(Ω−π)}) = 4/(2 + e^{−jΩ})
(g) (Properties of the DTFT)
Let x[n] = (0.5)ⁿu[n] ⇔ X(F). Find the time signals corresponding to

Y(F) = X(F) ⊛ X(F)    H(F) = X(F + 0.4) + X(F − 0.4)    G(F) = X²(F)

1. By the product property (periodic convolution in frequency corresponds to multiplication in time),
y[n] = x²[n] = (0.25)ⁿu[n].
2. By the modulation property, h[n] = 2cos(2πnF₀)x[n] = 2(0.5)ⁿcos(0.8nπ)u[n] (where F₀ = 0.4).
3. By the convolution property, g[n] = x[n] ∗ x[n] = (0.5)ⁿu[n] ∗ (0.5)ⁿu[n] = (n + 1)(0.5)ⁿu[n].
5.3 The DTFT of Discrete-Time Periodic Signals
There is a unique relationship between the description of signals in the time domain and their spectra in
the frequency domain. One useful result is that sampling in one domain results in a periodic extension
in the other, and vice versa. If a time-domain signal is made periodic by replication, the transform of the
periodic signal is an impulse-sampled version of the original transform, divided by the replication period.
The frequency spacing of the impulses equals the reciprocal of the period. For example, consider a periodic
analog signal x(t) with period T whose one period is x₁(t) with Fourier transform X₁(f). When x₁(t) is
replicated every T units to generate the periodic signal x(t), the Fourier transform X(f) of the periodic
signal x(t) becomes an impulse train of the form Σ_k (1/T)X₁(kf₀)δ(f − kf₀). The frequency spacing f₀ of the
impulses is given by f₀ = 1/T, the reciprocal of the period. The impulse strengths are found by
sampling X₁(f) at intervals of f₀ = 1/T and dividing by the period T, to give (1/T)X₁(kf₀). These impulse
strengths (1/T)X₁(kf₀) also define the Fourier series coefficients of the periodic signal x(t). Similarly, consider
a discrete periodic signal x[n] with period N whose one period is x₁[n], 0 ≤ n ≤ N − 1, with DTFT X₁(F).
When x₁[n] is replicated every N samples to generate the discrete periodic signal x[n], the DTFT X(F) of
the periodic signal x[n] becomes an impulse train of the form Σ_k (1/N)X₁(kF₀)δ(F − kF₀). The frequency spacing F₀
of the impulses is given by F₀ = 1/N, the reciprocal of the period. The impulse strengths are found
by sampling X₁(F) at intervals of F₀ = 1/N and dividing by the period N, to give (1/N)X₁(kF₀). Since X₁(F) is
periodic, so too is X(F), and one period of X(F) contains N impulses described by

X(F) = (1/N) Σ_{k=0}^{N−1} X₁(kF₀)δ(F − kF₀)  (over one period 0 ≤ F < 1)    (5.16)

By convention, one period is chosen to cover the range 0 ≤ F < 1 (and not the central period) in order
to correspond to the summation index range 0 ≤ n ≤ N − 1. Note that X(F) exhibits conjugate symmetry
about k = 0 (corresponding to F = 0 or Ω = 0) and k = N/2 (corresponding to F = 0.5 or Ω = π).
REVIEW PANEL 5.14
The DTFT of x[n] (Period N) Is a Periodic Impulse Train (N Impulses per Period)
If x[n] is periodic with period N and its one-period DTFT is x₁[n] ⇔ X₁(F), then

x[n] ⇔ X(F) = (1/N) Σ_{k=0}^{N−1} X₁(kF₀)δ(F − kF₀)  (N impulses per period 0 ≤ F < 1)
EXAMPLE 5.4 (DTFT of Periodic Signals)
Let one period of x_p[n] be given by x₁[n] = {3, 2, 1, 2}, with N = 4.
Then, X₁(F) = 3 + 2e^{−j2πF} + e^{−j4πF} + 2e^{−j6πF}.
The four samples of X₁(kF₀) over 0 ≤ k ≤ 3 are

X₁(kF₀) = 3 + 2e^{−j2πk/4} + e^{−j4πk/4} + 2e^{−j6πk/4} = {8, 2, 0, 2}

The DTFT of the periodic signal x_p[n] for one period 0 ≤ F < 1 is thus

X(F) = (1/4) Σ_{k=0}^{3} X₁(kF₀)δ(F − k/4) = 2δ(F) + 0.5δ(F − 1/4) + 0.5δ(F − 3/4)  (over one period 0 ≤ F < 1)

The signal x_p[n] and its DTFT X(F) are shown in Figure E5.4. Note that the DTFT is conjugate symmetric
about F = 0 (or k = 0) and F = 0.5 (or k = 0.5N = 2).
Figure E5.4 Periodic signal for Example 5.4 and its DTFT
DRILL PROBLEM 5.12
Find the DTFT of a periodic signal x[n] over 0 ≤ F < 1 if its one period is given by
(a) x₁[n] = {4, 0, 0, 0}  (b) x₁[n] = {4, 4, 4, 4}  (c) x₁[n] = {4, 0, 4, 0}
Answers: (a) Σ_{k=0}^{3} δ(F − 0.25k)  (b) 4δ(F)  (c) 2δ(F) + 2δ(F − 0.5)
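The impulse strengths X₁(kF₀)/N are exactly what `np.fft.fft` delivers after division by N, since the FFT samples the DTFT at F = k/N. A sketch, checked on x₁[n] = {3, 2, 1, 2} from Example 5.4:

```python
import numpy as np

# Strengths of the spectral impulses for the periodic extension of
# x1[n] = {3, 2, 1, 2}: X1(k/N)/N = fft(x1)/N.
x1 = np.array([3.0, 2.0, 1.0, 2.0])
strengths = np.fft.fft(x1) / len(x1)
print(np.round(strengths.real, 6))   # 2, 0.5, 0, 0.5, matching Example 5.4
```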
5.3.1 The DFS and DFT
The discrete Fourier transform (DFT) of the signal x₁[n] is defined as the sampled version of its DTFT:

X_DFT[k] = X₁(F)|_{F=kF₀=k/N} = Σ_{n=0}^{N−1} x₁[n]e^{−j2πnk/N},  k = 0, 1, 2, . . . , N − 1    (5.17)

The N-sample sequence that results when we divide the DFT by N defines the discrete Fourier series
(DFS) coefficients of the periodic signal x[n] whose one period is x₁[n]:

X_DFS[k] = (1/N) X₁(F)|_{F=kF₀=k/N} = (1/N) Σ_{n=0}^{N−1} x₁[n]e^{−j2πnk/N},  k = 0, 1, 2, . . . , N − 1    (5.18)
Note that the DFT and DFS differ only by a factor of N, with X_DFT[k] = N X_DFS[k]. The DFS result may
be linked to the Fourier series coefficients of a periodic signal with period T:

X[k] = (1/T) ∫₀ᵀ x₁(t)e^{−j2πkt/T} dt

Here, x₁(t) describes one period of the periodic signal. The discrete version of this result using a sampling
interval of t_s allows us to set t = nt_s, dt = t_s, and x₁(t) = x₁(nt_s) = x₁[n]. Assuming N samples per period
(T = Nt_s), we replace the integral over one period (from t = 0 to t = T) by a summation over N samples
(from n = 0 to n = N − 1) to get the required expression for the DFS.
REVIEW PANEL 5.15
The DFT Is a Sampled Version of the DTFT
If x₁[n] is an N-sample sequence, its N-sample DFT is X_DFT[k] = X₁(F)|_{F=k/N}, k = 0, 1, 2, . . . , N − 1.
The DFS coefficients of a periodic signal x[n] whose one period is x₁[n] are X_DFS[k] = (1/N) X_DFT[k].
EXAMPLE 5.5 (The DFT, DFS, and DTFT)
Let x₁[n] = {1, 0, 2, 0, 3} describe one period of a periodic signal x[n].
The DTFT of x₁[n] is X₁(F) = 1 + 2e^{−j4πF} + 3e^{−j8πF}.
The period of x[n] is N = 5. The discrete Fourier transform (DFT) of x₁[n] is

X_DFT[k] = X₁(F)|_{F=k/N} = [1 + 2e^{−j4πF} + 3e^{−j8πF}]|_{F=k/5},  k = 0, 1, . . . , 4

We find that

X_DFT[k] = {6, 0.3090 + j1.6776, −0.8090 + j3.6655, −0.8090 − j3.6655, 0.3090 − j1.6776}

The discrete Fourier series (DFS) coefficients of x[n] are given by X_DFS[k] = (1/N) X_DFT[k]. We get

X_DFS[k] = {1.2, 0.0618 + j0.3355, −0.1618 + j0.7331, −0.1618 − j0.7331, 0.0618 − j0.3355}

The DTFT X(F) of the periodic signal x[n], for one period 0 ≤ F < 1, is then

X(F) = (1/5) Σ_{k=0}^{4} X₁(k/5)δ(F − k/5)  (over one period 0 ≤ F < 1)

Note that each of the transforms X_DFS[k], X_DFT[k], and X(F) is conjugate symmetric about both k = 0
and k = 0.5N = 2.5.
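Because the DFT is just the DTFT sampled at F = k/N, `np.fft.fft` reproduces the numbers of this example directly:

```python
import numpy as np

x1 = np.array([1.0, 0.0, 2.0, 0.0, 3.0])
Xdft = np.fft.fft(x1)            # samples of X1(F) at F = k/5
Xdfs = Xdft / len(x1)            # DFS coefficients
print(np.round(Xdft, 4))         # matches the X_DFT values above
print(np.round(Xdfs, 4))         # matches the X_DFS values above
```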
DRILL PROBLEM 5.13
Find the DFT of the following signals.
(a) x[n] = {4, 0, 0, 0}  (b) x[n] = {4, 4, 4, 4}  (c) x[n] = {4, 4, 0, 0}
Answers: (a) {4, 4, 4, 4}  (b) {16, 0, 0, 0}  (c) {8, 4 − j4, 0, 4 + j4}
5.4 The Inverse DTFT
For a finite sequence whose DTFT X(F) is a polynomial in e^{−j2πF} (or e^{−jΩ}), the inverse DTFT x[n] corresponds
to the sequence of the polynomial coefficients. In many other situations, X(F) can be expressed as a ratio
of polynomials in e^{−j2πF} (or e^{−jΩ}). This allows us to split X(F) into a sum of simpler terms (using partial
fraction expansion) and find the inverse transform of these simpler terms through a table look-up. Only in
special cases, or as a last resort if all else fails, do we need the brute-force method of finding the
inverse DTFT from the defining relation. Some examples follow.

EXAMPLE 5.6 (The Inverse DTFT)
(a) Let X(F) = 1 + 3e^{−j4πF} − 2e^{−j6πF}.
Its inverse DTFT is simply x[n] = δ[n] + 3δ[n − 2] − 2δ[n − 3], or x[n] = {1, 0, 3, −2}.
(b) Let X(Ω) = 2e^{−jΩ}/(1 − 0.25e^{−j2Ω}). We factor the denominator and use partial fractions to get

X(Ω) = 2e^{−jΩ}/[(1 − 0.5e^{−jΩ})(1 + 0.5e^{−jΩ})] = 2/(1 − 0.5e^{−jΩ}) − 2/(1 + 0.5e^{−jΩ})

We then find x[n] = 2(0.5)ⁿu[n] − 2(−0.5)ⁿu[n].

(c) An ideal differentiator is described by H(F) = j2πF, |F| < 0.5. Its magnitude and phase spectrum
are shown in Figure E5.6C.
Figure E5.6C DTFT of the ideal dierentiator for Example 5.6(c)
To find its inverse h[n], we note that h[0] = 0 since H(F) is odd. For n ≠ 0, we also use the odd
symmetry of H(F) in the IDTFT to obtain

h[n] = ∫_{−1/2}^{1/2} j2πF[cos(2πnF) + j sin(2πnF)] dF = −4π ∫₀^{1/2} F sin(2πnF) dF

Using tables and simplifying the result, we get

h[n] = −4π [sin(2πnF) − 2πnF cos(2πnF)]/(2πn)² |₀^{1/2} = cos(nπ)/n

Since H(F) is odd and imaginary, h[n] is odd symmetric, as expected.
(d) A Hilbert transformer shifts the phase of a signal by −90° and is described by H(F) = −j sgn(F) over
its principal period. Since H(F) is odd, h[0] = 0, and for n ≠ 0,

h[n] = ∫_{−1/2}^{1/2} −j sgn(F)[cos(2πnF) + j sin(2πnF)] dF = 2 ∫₀^{1/2} sin(2πnF) dF = [1 − cos(nπ)]/(nπ)
DRILL PROBLEM 5.14
(a) Let X(F) = 4 − 2e^{−j4πF}. Find x[n].
(b) Let X(F) = (4 − 2e^{−j4πF})². Find x[n].
(c) Let X(F) = 2, |F| < 0.2. Find x[n].
Answers: (a) {4, 0, −2}  (b) {16, 0, −16, 0, 4}  (c) 0.8 sinc(0.8n)

For the first-order system of Example 5.7, with H_p(F) = 1/(1 − αe^{−j2πF}), the magnitude and phase are

|H_p(F)| = 1/[1 − 2α cos(2πF) + α²]^{1/2}    θ(F) = −tan⁻¹[α sin(2πF)/(1 − α cos(2πF))]
Typical plots of the magnitude and phase are shown in Figure E5.7 over the principal period (−0.5, 0.5).
Note the conjugate symmetry (even symmetry of |H(F)| and odd symmetry of θ(F)).

Figure E5.7 Magnitude and phase of H_p(F) for Example 5.7
DRILL PROBLEM 5.15
(a) Find the frequency response H(F) of the filter described by h[n] = {4, 0, −2}.
(b) Find the frequency response H(F) of the filter described by h[n] = 2(0.5)ⁿu[n] − δ[n].
(c) Find the frequency response H(F) of the filter described by y[n] − y[n − 2] = 2x[n] − 4x[n − 1].
Answers: (a) 4 − 2e^{−j4πF}  (b) (1 + 0.5e^{−j2πF})/(1 − 0.5e^{−j2πF})  (c) (2 − 4e^{−j2πF})/(1 − e^{−j4πF})
5.6 System Analysis Using the DTFT
In concept, the DTFT may be used to find the zero-state response (ZSR) of relaxed LTI systems to arbitrary
inputs. All it requires is the system transfer function H(F) and the DTFT X(F) of the input x[n]. We first
find the response as Y(F) = H(F)X(F) in the frequency domain, and then obtain the time-domain response y[n]
by using the inverse DTFT. We emphasize, once again, that the DTFT cannot handle the effect of initial
conditions.
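The frequency-domain route Y(F) = H(F)X(F) can be cross-checked against direct recursion in the time domain. A sketch (the system and input are our own choices) for y[n] = 0.5y[n − 1] + x[n] driven by x[n] = (0.4)ⁿu[n]:

```python
import numpy as np

N = 60
x = 0.4**np.arange(N)
y = np.zeros(N)
for n in range(N):               # zero-state recursion y[n] = 0.5 y[n-1] + x[n]
    y[n] = (0.5 * y[n - 1] if n else 0.0) + x[n]

F = 0.3
H = 1.0 / (1.0 - 0.5 * np.exp(-2j * np.pi * F))   # transfer function at F
X = 1.0 / (1.0 - 0.4 * np.exp(-2j * np.pi * F))   # DTFT of the input at F
Ydtft = np.sum(y * np.exp(-2j * np.pi * F * np.arange(N)))
print(abs(Ydtft - H * X))        # ~ 0 (only truncation error remains)
```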
EXAMPLE 5.8 (The DTFT in System Analysis)
(a) Consider a system described by y[n] = αy[n − 1] + x[n]. To find the response of this system to the
input αⁿu[n], we first set up the transfer function as H(Ω) = 1/(1 − αe^{−jΩ}). Next, we find the DTFT of
x[n] as X(Ω) = 1/(1 − αe^{−jΩ}) and multiply the two to obtain

Y(Ω) = H(Ω)X(Ω) = 1/(1 − αe^{−jΩ})²

Its inverse transform gives the response as y[n] = (n + 1)αⁿu[n]. We could, of course, also use
convolution to obtain y[n] = h[n] ∗ x[n] directly in the time domain.

(b) Consider the system described by y[n] = 0.5y[n − 1] + x[n]. Its response to the step x[n] = 4u[n] is
found using Y(F) = H(F)X(F):

Y(F) = H(F)X(F) = [1/(1 − 0.5e^{−j2πF})][4/(1 − e^{−j2πF}) + 2δ(F)]

Separating terms (by partial fractions, and noting that H(F)δ(F) = H(0)δ(F)), this becomes

Y(F) = 8/(1 − e^{−j2πF}) + 4δ(F) − 4/(1 − 0.5e^{−j2πF})

Its inverse transform gives the response y[n] = 8u[n] − 4(0.5)ⁿu[n].
For a sinusoidal input x[n] = A cos(2πnF₀ + θ), the steady-state output follows from the frequency
response evaluated at the input frequency, H(F₀) = H₀e^{jφ₀}:

Steady-state output: y_ss[n] = AH₀ cos(2πnF₀ + θ + φ₀)
EXAMPLE 5.9 (The DTFT and Steady-State Response)
(a) Consider a system described by y[n] = 0.5y[n − 1] + x[n]. We find its steady-state response to the
sinusoidal input x[n] = 10 cos(0.5nπ + 60°). The digital frequency of the input is F = 0.25, and

H(F)|_{F=0.25} = 1/(1 − 0.5e^{−jπ/2}) = 1/(1 + 0.5j) = 0.8944∠−26.6°

The steady-state response is then y_ss[n] = (10)(0.8944)cos(0.5nπ + 60° − 26.6°) = 8.944 cos(0.5nπ + 33.4°).

(b) Consider the system described by y[n] = 0.8y[n − 1] + x[n]. For the constant input x[n] = 4, the input
frequency is F = 0, and

H(F)|_{F=0} = 1/(1 − 0.8) = 5

The steady-state response is then y_ss[n] = (5)(4) = 20.
(c) Let H(z) = (2z − 1)/(z² + 0.5z + 0.5). We find its steady-state response to x[n] = 6u[n].
With z = e^{j2πF}, we obtain the frequency response H(F) as

H(F) = (2e^{j2πF} − 1)/(e^{j4πF} + 0.5e^{j2πF} + 0.5)

Since the input is a constant for n ≥ 0, the input frequency is F = 0.
At this frequency, H(F)|_{F=0} = 0.5. Then, y_ss[n] = (6)(0.5) = 3.
(d) Design a 3-point FIR filter with impulse response h[n] = {α, β, α} such that …
This gives α = β = 0.4142 and h[n] = {0.4142, 0.4142, 0.4142}.
The dc gain of this filter is H(0) = Σ h[n] = 3(0.4142) = 1.2426.
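Steady-state gains like these come straight from evaluating H(z) on the unit circle. A sketch for part (c):

```python
import numpy as np

def H(z):
    """H(z) = (2z - 1)/(z^2 + 0.5z + 0.5) from part (c)."""
    return (2 * z - 1) / (z**2 + 0.5 * z + 0.5)

z = np.exp(2j * np.pi * 0.0)     # dc input: F = 0, so z = 1
print(H(z).real)                 # 0.5
print(6 * H(z).real)             # steady-state output level for x[n] = 6u[n]: 3.0
```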
DRILL PROBLEM 5.17
(a) Let H(F) = 4 − 2e^{−j4πF}. If the input is x[n] = 4 cos(0.4nπ), what is the response y[n]?
(b) A filter is described by h[n] = 2(0.5)ⁿu[n] − δ[n]. The input is x[n] = 1 + cos(0.5nπ). Find y[n].
(c) A filter is described by y[n] = αx[n] + βx[n − 1] + αx[n − 2]. Choose the values of α and β such that
the input x[n] = 1 + 4 cos(nπ) results in the output y[n] = 4.
Answers: (a) 22.96 cos(0.4nπ + 12°)  (b) 3 + cos(0.5nπ − 53°)  (c) α = 1, β = 2
5.7 Connections
A relaxed LTI system may be described by its difference equation, its impulse response, its transfer function,
its frequency response, its pole-zero plot, or even its realization. Depending on what is required, one form may
be better suited than the others and, given one form, we should be able to obtain the others using time-domain
methods and/or frequency-domain transformations. The connections are summarized below:

1. Given the transfer function H(z), we can use it directly to generate a pole-zero plot. We can also use
it to find the frequency response H(F) by the substitution z = e^{j2πF}. The frequency response
allows us to sketch the gain and phase. The inverse z-transform of H(z) leads directly to the impulse
response h[n]. Finally, if we express the transfer function as H(z) = Y(z)/X(z), where X(z) and Y(z)
are polynomials in z, cross-multiplication and inverse transformation give us the system difference
equation. The frequency response H(F) may be used to find the system difference equation in a similar
manner.

2. Given the system difference equation, we can use the z-transform to find the transfer function H(z)
and use H(z) to find the remaining forms.

3. Given the impulse response h[n], we can use the z-transform to find the transfer function H(z) and
use it to develop the remaining forms.

4. Given the pole-zero plot, we can obtain the transfer function H(z) and use it to find the remaining forms.
Note that we can find H(z) directly from a pole-zero plot of the root locations, but only to within a
multiplicative gain factor K. If the value of K is also shown on the plot, H(z) is known in its entirety.

For quick computations of the dc gain and the high-frequency gain (at F = 0.5), we need not even convert
between forms. To find the dc gain:

1. If H(F) is given, evaluate its absolute value at F = 0.
2. If H(z) is given, evaluate its absolute value at z = 1 (because if F = 0, we have z = e^{j2πF} = 1).
3. If h[n] is given, evaluate |Σ h[n]|.
4. If the difference equation is given in the form Σ A_k y[n − k] = Σ B_m x[n − m], evaluate the absolute
value of the ratio Σ B_m / Σ A_k (after summing the coefficients on each side).

Similarly, to find the high-frequency gain (at F = 0.5):

1. If H(F) is given, evaluate its absolute value at F = 0.5.
2. If H(z) is given, evaluate its absolute value at z = −1 (because if F = 0.5, we have z = e^{j2πF} = −1).
3. If h[n] is given, evaluate |Σ (−1)ⁿh[n]| (by reversing the sign of alternate samples before summing
them).
4. If the difference equation is given in the form Σ A_k y[n − k] = Σ B_m x[n − m], evaluate the absolute
value of the ratio Σ (−1)ᵐ B_m / Σ (−1)ᵏ A_k.
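Rules 1–3 can be exercised on a single system. For y[n] = 0.6y[n − 1] + x[n], whose impulse response is h[n] = (0.6)ⁿu[n], summing h[n] (and its sign-alternated version) reproduces |H(z)| at z = 1 and z = −1:

```python
import numpy as np

n = np.arange(400)                        # 0.6**400 is negligible
h = 0.6**n
dc_from_h = abs(np.sum(h))                # ~ 1/(1 - 0.6) = 2.5
hf_from_h = abs(np.sum((-1.0)**n * h))    # ~ 1/(1 + 0.6) = 0.625
print(dc_from_h, hf_from_h)
```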
EXAMPLE 5.10 (System Representation in Various Forms)
(a) Let y[n] = 0.8y[n − 1] + 2x[n]. We obtain its transfer function and impulse response as follows:

Y(z) = 0.8z⁻¹Y(z) + 2X(z)    H(z) = Y(z)/X(z) = 2/(1 − 0.8z⁻¹) = 2z/(z − 0.8)    h[n] = 2(0.8)ⁿu[n]

(b) Let h[n] = δ[n] − 0.4(0.5)ⁿu[n]. We obtain its transfer function as

H(z) = 1 − 0.4z/(z − 0.5) = (0.6z − 0.5)/(z − 0.5)

We also obtain the difference equation by expressing the transfer function as

H(z) = Y(z)/X(z) = (0.6z − 0.5)/(z − 0.5)  or  (0.6 − 0.5z⁻¹)/(1 − 0.5z⁻¹)

and using cross-multiplication to give

(z − 0.5)Y(z) = (0.6z − 0.5)X(z)  or  (1 − 0.5z⁻¹)Y(z) = (0.6 − 0.5z⁻¹)X(z)

The difference equation may then be found using forward differences or backward differences as

y[n + 1] − 0.5y[n] = 0.6x[n + 1] − 0.5x[n]  or  y[n] − 0.5y[n − 1] = 0.6x[n] − 0.5x[n − 1]

The impulse response, the transfer function, and the difference equation describe the same system.
(c) Let y[n] − 0.6y[n − 1] = x[n].
Its dc gain is 1/(1 − 0.6) = 2.5.
Its high-frequency gain is 1/(1 + 0.6) = 0.625.
Its DTFT gives

Y(F) − 0.6e^{−j2πF}Y(F) = X(F)  or  Y(Ω) − 0.6e^{−jΩ}Y(Ω) = X(Ω)

We thus get the transfer function as

H(F) = Y(F)/X(F) = 1/(1 − 0.6e^{−j2πF})  or  H(Ω) = Y(Ω)/X(Ω) = 1/(1 − 0.6e^{−jΩ})

The system impulse response is thus h[n] = (0.6)ⁿu[n].
(d) Let h[n] = δ[n] − (0.5)ⁿu[n].
Since Σ_{n=0}^∞ (0.5)ⁿ = 1/(1 − 0.5) = 2, the dc gain is |1 − 2| = 1.
Similarly, since Σ_{n=0}^∞ (−1)ⁿ(0.5)ⁿ = Σ_{n=0}^∞ (−0.5)ⁿ = 1/(1 + 0.5) = 2/3, the high-frequency gain is |1 − 2/3| = 1/3.
Of course, it is probably easier to find its dc gain and high-frequency gain from H(F). Its DTFT gives

H(F) = 1 − 1/(1 − 0.5e^{−j2πF}) = −0.5e^{−j2πF}/(1 − 0.5e^{−j2πF})

In the Ω-form,

H(Ω) = 1 − 1/(1 − 0.5e^{−jΩ}) = −0.5e^{−jΩ}/(1 − 0.5e^{−jΩ})

From this we find

H(F) = Y(F)/X(F)  or  Y(F)(1 − 0.5e^{−j2πF}) = X(F)(−0.5e^{−j2πF})

In the Ω-form,

H(Ω) = Y(Ω)/X(Ω)  or  Y(Ω)(1 − 0.5e^{−jΩ}) = X(Ω)(−0.5e^{−jΩ})

Inverse transformation gives y[n] − 0.5y[n − 1] = −0.5x[n − 1].
(e) Let H(F) = 1 + 2e^{−j2πF} + 3e^{−j4πF}.
Its dc gain is |1 + 2 + 3| = 6.
Its high-frequency gain is |1 − 2 + 3| = 2.
Its impulse response is h[n] = δ[n] + 2δ[n − 1] + 3δ[n − 2] = {1, 2, 3}.
Since H(F) = Y(F)/X(F) = 1 + 2e^{−j2πF} + 3e^{−j4πF}, we find

Y(F) = (1 + 2e^{−j2πF} + 3e^{−j4πF})X(F)    y[n] = x[n] + 2x[n − 1] + 3x[n − 2]
5.8 Ideal Filters
An ideal lowpass filter is described by H_LP(F) = 1, |F| < F_C over its principal period |F| ≤ 0.5, as shown
in Figure 5.4.

Figure 5.4 Spectrum of an ideal lowpass filter
Its impulse response h_LP[n] is found using the IDTFT to give

h_LP[n] = ∫_{−0.5}^{0.5} H_LP(F)e^{j2πnF} dF = ∫_{−F_C}^{F_C} e^{j2πnF} dF = ∫_{−F_C}^{F_C} cos(2πnF) dF = sin(2πnF)/(2πn) |_{−F_C}^{F_C}    (5.25)

Simplifying this result, we obtain

h_LP[n] = [sin(2πnF_C) − sin(−2πnF_C)]/(2πn) = 2 sin(2πnF_C)/(2πn) = 2F_C sin(2πnF_C)/(2πnF_C) = 2F_C sinc(2nF_C)    (5.26)
REVIEW PANEL 5.18
The Impulse Response of an Ideal Lowpass Filter Is a Sinc Function
H(F) = rect(F/2F_C), |F| ≤ 0.5  ⇔  h[n] = 2F_C sinc(2nF_C)  (peak value 2F_C at n = 0)
5.8.1 Frequency Transformations
The impulse response and transfer function of highpass, bandpass, and bandstop filters may be related to
those of a lowpass filter using frequency transformations based on the properties of the DTFT. The impulse
response of an ideal lowpass filter with a cutoff frequency of F_C and unit passband gain is given by

h[n] = 2F_C sinc(2nF_C)    (5.27)

Figure 5.5 shows two ways to obtain a highpass transfer function from this lowpass filter.
The first way is to shift H(F) by 0.5 to obtain H_H1(F) = H(F − 0.5), a highpass filter whose cutoff frequency
is given by F_H1 = 0.5 − F_C. This leads to

h_H1[n] = (−1)ⁿh[n] = 2(−1)ⁿF_C sinc(2nF_C)  (with cutoff frequency 0.5 − F_C)    (5.28)

Alternatively, we see that H_H2(F) = 1 − H(F) also describes a highpass filter, but with a cutoff frequency
given by F_H2 = F_C, and this leads to

h_H2[n] = δ[n] − 2F_C sinc(2nF_C)  (with cutoff frequency F_C)    (5.29)
Figure 5.5 Two lowpass-to-highpass transformations
Figure 5.6 Transforming a lowpass filter to a bandpass or bandstop filter
Figure 5.6 shows how to transform a lowpass filter to a bandpass filter or to a bandstop filter.
To obtain a bandpass filter with a center frequency of F₀ and band edges [F₀ − F_C, F₀ + F_C], we simply shift
H(F) by ±F₀ to get H_BP(F) = H(F + F₀) + H(F − F₀), and obtain

h_BP[n] = 2h[n]cos(2πnF₀) = 4F_C cos(2πnF₀)sinc(2nF_C)    (5.30)

A bandstop filter with center frequency F₀ and band edges [F₀ − F_C, F₀ + F_C] can be obtained from the
bandpass filter using H_BS(F) = 1 − H_BP(F), to give

h_BS[n] = δ[n] − h_BP[n] = δ[n] − 4F_C cos(2πnF₀)sinc(2nF_C)    (5.31)
REVIEW PANEL 5.19
Transformation of a Lowpass Filter h[n] and H(F) to Other Forms
h[n] = 2F_C sinc(2nF_C); H(F) = rect(F/2F_C)    h_HP[n] = δ[n] − h[n]; H_HP(F) = 1 − H(F)
h_BP[n] = 2h[n]cos(2πnF₀); H_BP(F) by modulation    h_BS[n] = δ[n] − h_BP[n]; H_BS(F) = 1 − H_BP(F)
Note: F_C = cutoff frequency of lowpass prototype; F₀ = center frequency (for BPF and BSF)
EXAMPLE 5.11 (Frequency Transformations)
Use frequency transformations to nd the transfer function and impulse response of
1. An ideal lowpass lter with a digital cuto frequency of 0.25.
2. An ideal highpass lter with a digital cuto frequency of 0.3.
3. An ideal bandpass lter with a passband between F = 0.1 and F = 0.3.
4. An ideal bandstop lter with a stopband between F = 0.2 and F = 0.4.
(a) For the lowpass lter, pick the LPF cuto F
C
= 0.25. Then, h[n] = 0.5 sinc(0.5n).
c Ashok Ambardar, September 1, 2003
202 Chapter 5 Frequency Domain Analysis
(b) For a highpass filter whose cutoff is F_HP = 0.3:
Pick an LPF with F_C = 0.5 − 0.3 = 0.2. Then, h_HP[n] = (−1)^n (0.4)sinc(0.4n).
Alternatively, pick F_C = 0.3. Then, h_HP[n] = δ[n] − (0.6)sinc(0.6n).
(c) For a bandpass filter with band edges [F_1, F_2] = [0.1, 0.3]:
Pick the LPF cutoff as F_C = F_2 − F_0 = 0.1, with F_0 = 0.2. Then, h_BP[n] = 0.4 cos(0.4πn)sinc(0.2n).
(d) For a bandstop filter with band edges [F_1, F_2] = [0.2, 0.4]:
Pick the LPF cutoff as F_C = F_2 − F_0 = 0.1, with F_0 = 0.3. Then, h_BS[n] = δ[n] − 0.4 cos(0.6πn)sinc(0.2n).
5.8.2 Truncation and Windowing
An ideal filter possesses linear phase because its impulse response h[n] is a symmetric sequence. It can
never be realized in practice because h[n] is a sinc function that goes on forever, making the filter noncausal.
Moreover, the ideal filter is unstable because h[n] (a sinc function) is not absolutely summable. One way
to approximate an ideal lowpass filter is by symmetric truncation (about n = 0) of its impulse response
h[n] (which ensures linear phase). Truncation of h[n] to |n| ≤ N is equivalent to multiplying h[n] by a
rectangular window w_D[n] = 1, |n| ≤ N. To obtain a causal filter, we must also delay the impulse response
by N samples so that its first sample appears at the origin.
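As a quick illustration of this truncation-and-delay step, the sketch below (not from the text; F_C = 0.25 and N = 10 are assumed values) builds the causal FIR filter and checks that its coefficients are symmetric about the midpoint, which is what guarantees linear phase.

```python
import numpy as np

FC, N = 0.25, 10                          # assumed cutoff and truncation index
n = np.arange(-N, N + 1)
h_trunc = 2 * FC * np.sinc(2 * FC * n)    # symmetric truncation to |n| <= N

# delaying by N samples re-indexes the same 2N+1 values to n = 0, ..., 2N,
# giving a causal FIR filter symmetric about its midpoint n = N
h_causal = h_trunc.copy()
print(np.allclose(h_causal, h_causal[::-1]))   # True: even symmetry, linear phase
print(h_causal[N])                             # midpoint sample 2*FC = 0.5
```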
5.8.3 The Rectangular Window and its Spectrum
Consider the rectangular window w_D[n] = rect(n/2N) with M = 2N + 1 samples. We have

    w_D[n] = rect(n/2N) = {1, 1, ..., 1, 1, 1, ..., 1, 1}   (N ones on either side of the center sample)        (5.32)

Its DTFT has been computed earlier as

    W_D(F) = sin(MπF)/sin(πF) = M sinc(MF)/sinc(F),   M = 2N + 1        (5.33)

The quantity W_D(F) describes the Dirichlet kernel or the aliased sinc function. It also equals the periodic
extension of M sinc(MF) with period F = 1. Figure 5.7 shows the Dirichlet kernel for N = 3, 6, and 9.
The Dirichlet kernel has some very interesting properties. Over one period, we observe the following:
1. It has N maxima. The width of the main lobe is 2/M. The width of the decaying positive and negative
sidelobes is 1/M. There are 2N zeros located at the frequencies F = 1/M, 2/M, ..., 2N/M.
2. Its area equals unity, and it attains a maximum peak value of M at the origin. Its value at |F| = 0.5
is +1 (for even N) or −1 (for odd N). Increasing M increases the mainlobe height and compresses
the sidelobes. The ratio R of the main lobe magnitude and peak sidelobe magnitude, however, stays
more or less constant, varying between 4 for small M and 1.5π = 4.71 (or 13.5 dB) for very large M.
As M → ∞, the spectrum approaches a unit impulse.
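The stated properties of the Dirichlet kernel are easy to verify numerically. The sketch below (not from the text) evaluates W_D(F) for N = 3 and checks the peak value M at the origin, the first zero at F = 1/M, the value −1 at |F| = 0.5 for odd N, and the unit area over one period.

```python
import numpy as np

def dirichlet(F, M):
    # W_D(F) = sin(M*pi*F)/sin(pi*F), taking the 0/0 limit M at F = 0, +/-1, ...
    F = np.atleast_1d(np.asarray(F, dtype=float))
    num, den = np.sin(M * np.pi * F), np.sin(np.pi * F)
    out = np.full_like(F, float(M))
    np.divide(num, den, out=out, where=np.abs(den) > 1e-12)
    return out

N = 3
M = 2 * N + 1
print(dirichlet(0.0, M)[0])           # peak value M = 7 at the origin
print(dirichlet(1.0 / M, M)[0])       # first zero at F = 1/M (numerically ~0)
print(dirichlet(0.5, M)[0])           # value at |F| = 0.5 is -1 for odd N

# unit area: the average of a trig polynomial over one period equals its
# dc term, so a uniform sampling average recovers the integral exactly
F = np.linspace(-0.5, 0.5, 1000, endpoint=False)
print(np.mean(dirichlet(F, M)))       # ~1
```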
Figure 5.7 The Dirichlet kernel is the spectrum of a rectangular window
(Panels: (a) N = 3, peak value 7; (b) N = 6, peak value 13; (c) N = 9, peak value 19; amplitude versus digital frequency F over −0.5 ≤ F ≤ 0.5.)
5.8.4 The Triangular Window and its Spectrum
A triangular window may be regarded as the convolution of two rectangular windows. For a rectangular
window w_D[n] = rect(n/2N), we find

    w_D[n] * w_D[n] = rect(n/2N) * rect(n/2N) = (2N + 1)tri(n/(2N + 1)) = M tri(n/M),   M = 2N + 1

We may thus express a triangular window w_F[n] = tri(n/M) in terms of the convolution

    tri(n/M) = (1/M) w_D[n] * w_D[n],   M = 2N + 1        (5.34)
Since convolution in the time domain corresponds to multiplication in the frequency domain, the spectrum
of the triangular window may be written as

    W_F(F) = (1/M) W_D^2(F) = sin^2(MπF)/(M sin^2(πF)) = M sinc^2(MF)/sinc^2(F)

The quantity W_F(F) is called the Fejer kernel. Figure 5.8 shows the Fejer kernel for M = 4, 5, and 8.
Note that the spectrum is always positive. Its area equals unity and it attains a peak value of M at the
origin. Its value at |F| = 0.5 is 0 (for even M) or 1/M (for odd M). Increasing M increases the mainlobe
height and compresses the sidelobes. There are M maxima over one period. As M → ∞, the spectrum
approaches a unit impulse. For a given finite length M, the mainlobe of the Fejer kernel is twice as wide as
that of the Dirichlet kernel while the sidelobe magnitudes of the Fejer kernel show a faster decay and are
much smaller.
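A quick numerical check of these Fejer kernel properties (a sketch, not from the text): it is never negative, peaks at M at the origin, and has unit area over one period.

```python
import numpy as np

def fejer(F, M):
    # W_F(F) = sin^2(M*pi*F) / (M*sin^2(pi*F)), taking the limit M at F = 0
    F = np.atleast_1d(np.asarray(F, dtype=float))
    num = np.sin(M * np.pi * F) ** 2
    den = M * np.sin(np.pi * F) ** 2
    out = np.full_like(F, float(M))
    np.divide(num, den, out=out, where=den > 1e-12)
    return out

M = 8                                          # assumed example length
F = np.linspace(-0.5, 0.5, 1000, endpoint=False)
W = fejer(F, M)
print(W.min() >= 0.0)      # always nonnegative
print(W.max())             # peak value M at the origin
print(np.mean(W))          # unit area over one period (~1)
```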
5.8.5 The Consequences of Windowing
Windowing (or multiplication) of the filter impulse response h[n] by a window w[n] in the time domain
is equivalent to the periodic convolution of the filter spectrum H(F) and the window spectrum W(F) in
the frequency domain. If the impulse response of an ideal filter, given by h[n] = 2F_C sinc(2nF_C), is truncated
(multiplied) by a rectangular window w_D[n], the spectrum of the windowed filter exhibits overshoot and
oscillations, as illustrated in Figure 5.9. This is reminiscent of the Gibbs effect that occurs in the Fourier
series reconstruction of a periodic signal containing sudden jumps (discontinuities).
What causes the overshoot and oscillation is the slowly decaying sidelobes of W_D(F) (the spectrum of
the rectangular window). The overshoot and oscillation persist even if we increase the window length.
The only way to reduce or eliminate the overshoot and oscillation is to use a tapered window, such as
Figure 5.8 The Fejer kernel is the spectrum of a triangular window
(Panels: (a) M = 4; (b) M = 5; (c) M = 8; amplitude versus digital frequency F over −0.5 ≤ F ≤ 0.5.)
Figure 5.9 The spectrum of an ideal filter windowed by a rectangular window
(Panels: (a) Dirichlet kernel, N = 10; (b) ideal LPF with F_C = 0.25; (c) spectrum of the windowed LPF; amplitude versus digital frequency F.)
the triangular window, whose spectral sidelobes decay much faster. If the impulse response h[n] of an ideal
filter is truncated (multiplied) by a triangular window w_F[n], the spectrum of the windowed filter exhibits a
monotonic response and a complete absence of overshoot, as illustrated in Figure 5.10. All useful windows are
tapered in some fashion to minimize (or even eliminate) the overshoot in the spectrum of the windowed filter
while maintaining as sharp a cutoff as possible. The spectrum of any window should have a narrow mainlobe
(to ensure a sharp cutoff in the windowed spectrum) and small sidelobe levels (to minimize the overshoot
and oscillation in the windowed spectrum). For a given window length, it is not possible to minimize both
the mainlobe width and the sidelobe levels simultaneously. Consequently, the design of any window entails
a compromise between these inherently conflicting requirements.
REVIEW PANEL 5.20
Windowing of h[n] in Time Domain Means Periodic Convolution in Frequency Domain
Windowing makes the transition in the spectrum more gradual (less abrupt).
A rectangular window (truncation) leads to overshoot and oscillations in the spectrum (Gibbs effect).
Any other window leads to reduced (or no) overshoot in the spectrum.
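This contrast can be demonstrated numerically. The sketch below (not from the text; N = 25, F_C = 0.25, and the simple triangular taper are assumed choices) evaluates the spectrum of the truncated ideal lowpass response with and without the taper; only the rectangular (truncation-only) case overshoots 1.

```python
import numpy as np

N = 25
n = np.arange(-N, N + 1)
h = 0.5 * np.sinc(0.5 * n)                   # ideal LPF with FC = 0.25
w_tri = 1 - np.abs(n) / (N + 1)              # triangular taper (assumed form)

F = np.linspace(0, 0.5, 2001)
E = np.exp(-2j * np.pi * np.outer(F, n))     # DTFT evaluation matrix
H_rect = np.abs(E @ h)                       # rectangular window (pure truncation)
H_tri = np.abs(E @ (h * w_tri))              # triangular window

print(H_rect.max())     # exceeds 1: overshoot near the cutoff (Gibbs effect)
print(H_tri.max())      # stays below 1: no overshoot, monotonic transition
```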
Figure 5.10 The spectrum of an ideal filter windowed by a triangular window
(Panels: (a) Fejer kernel, M = 10; (b) ideal LPF with F_C = 0.25; (c) spectrum of the windowed LPF; amplitude versus digital frequency F.)
CHAPTER 5 PROBLEMS
5.1 (DTFT of Sequences) Find and simplify the DTFT of the following signals and evaluate at F = 0,
F = 0.5, and F = 1.
(a) x[n] = {1, 2, 3, 2, 1}   (b) y[n] = {1, 2, 0, 2, 1}
(c) g[n] = {1, 2, 2, 1}   (d) h[n] = {1, 2, 2, 1}
[Hints and Suggestions: Use e^{jθ} + e^{−jθ} = 2 cos θ and e^{jθ} − e^{−jθ} = j2 sin θ to simplify the results.
In (c)–(d), extract the factor e^{−j3πF} before simplifying.]
5.2 (DTFT from Definition) Use the defining relation to find the DTFT X(F) of the following signals.
(a) x[n] = (0.5)^{n+2} u[n]   (b) x[n] = n(0.5)^{2n} u[n]   (c) x[n] = (0.5)^{n+2} u[n − 1]
(d) x[n] = n(0.5)^{n+2} u[n − 1]   (e) x[n] = (n + 1)(0.5)^n u[n]   (f) x[n] = (0.5)^{−n} u[−n]
[Hints and Suggestions: Pick appropriate limits in the defining summation and simplify using tables.
For (a), (c) and (d), (0.5)^{n+2} = (0.25)(0.5)^n. For (b), (0.5)^{2n} = (0.25)^n. For (e), use superposition.
For (f), use the change of variable n → −n.]
5.3 (Properties) The DTFT of x[n] is X(F) = 4/(2 − e^{−j2πF}). Find the DTFT of the following signals
without first computing x[n].
(a) y[n] = x[n − 2]   (b) d[n] = nx[n]
(c) p[n] = x[−n]   (d) g[n] = x[n] − x[n − 1]
(e) h[n] = x[n] * x[n]   (f) r[n] = x[n]e^{jnπ}
(g) s[n] = x[n]cos(nπ)   (h) v[n] = x[n − 1] + x[n + 1]
[Hints and Suggestions: Use properties such as shifting for (a), times-n for (b), folding for (c),
superposition for (d) and (h), convolution for (e), frequency shift for (f), and modulation for (g).]
5.4 (Properties) The DTFT of the signal x[n] = (0.5)^n u[n] is X(F). Find the time signal corresponding
to the following transforms without first computing X(F).
(a) Y(F) = X(−F)   (b) G(F) = X(F − 0.25)
(c) H(F) = X(F + 0.5) + X(F − 0.5)   (d) P(F) = X′(F)
(e) R(F) = X^2(F)   (f) S(F) = X(F) ⊛ X(F)
(g) D(F) = X(F)cos(4πF)   (h) T(F) = X(F + 0.25) − X(F − 0.25)
[Hints and Suggestions: Use properties such as folding for (a), frequency shift for (b), superposition
for (c) and (h), times-n for (d), convolution for (e), multiplication for (f), and modulation for (g).]
5.5 (DTFT) Compute the DTFT of the following signals.
(a) x[n] = 2^{−n} u[n]   (b) y[n] = 2^n u[−n]
(c) g[n] = 0.5^{|n|}   (d) h[n] = 0.5^{|n|}(u[n + 1] − u[n − 2])
(e) p[n] = (0.5)^n cos(0.5nπ)u[n]   (f) q[n] = cos(0.5nπ)(u[n + 5] − u[n − 6])
[Hints and Suggestions: In (a), 2^{−n} = (0.5)^n. In (b), use the folding property on x[n]. In (c),
g[n] = (0.5)^n u[n] + (0.5)^{−n} u[−n] − δ[n]. In (d), find the 3-sample sequence first. In (e), use modulation.
In (f), use modulation on u[n + 5] − u[n − 6] = rect(0.1n).]
5.6 (DTFT) Compute the DTFT of the following signals.
(a) x[n] = sinc(0.2n)   (b) h[n] = sin(0.2nπ)   (c) g[n] = sinc^2(0.2n)
[Hints and Suggestions: In (a), X(F) is a rect function. In (b), X(F) is an impulse pair. In (c),
G(F) = X(F) ⊛ X(F) is a triangular pulse (the periodic convolution requires no wraparound).]
5.7 (DTFT) Compute the DTFT of the following signals in the form A(F)e^{jθ(F)} and plot the amplitude
spectrum A(F).
(a) x[n] = δ[n + 1] + δ[n − 1]   (b) x[n] = δ[n + 1] − δ[n − 1]
(c) x[n] = δ[n + 1] + δ[n] + δ[n − 1]   (d) x[n] = u[n + 1] − u[n − 1]
(e) x[n] = u[n + 1] − u[n − 2]   (f) x[n] = u[n] − u[n − 4]
[Hints and Suggestions: Use e^{jθ} + e^{−jθ} = 2 cos θ and e^{jθ} − e^{−jθ} = j2 sin θ to simplify the results.
In (d), use x[n] = {1, 1} and extract e^{jπF} before simplifying. In (f), use x[n] = {1, 1, 1, 1} and extract
e^{−j3πF} before simplifying.]
5.8 (Properties) The DTFT of a real signal x[n] is X(F). How is the DTFT of the following signals
related to X(F)?
(a) y[n] = x[−n]   (b) g[n] = x[n] − x[−n]
(c) r[n] = x[n/4] (zero interpolation)   (d) s[n] = (−1)^n x[n]
(e) h[n] = (j)^n x[n]   (f) v[n] = cos(2πnF_0)x[n]
(g) w[n] = cos(nπ)x[n]   (h) z[n] = [1 + cos(nπ)]x[n]
(i) b[n] = (−1)^{n/2} x[n]   (j) p[n] = e^{jnπ} x[n − 1]
[Hints and Suggestions: In (c), R(F) is a compressed version. In (d), use frequency shifting. In (e)
and (i), use frequency shifting with j^n = (−1)^{n/2} = e^{jnπ/2}. In (h), use superposition and modulation.
In (j), use a time shift followed by a frequency shift.]
5.9 (Properties) Let x[n] = tri(0.2n) and let X(F) be its DTFT. Compute the following without
evaluating X(F).
(a) The DTFT of the odd part of x[n]
(b) The value of X(F) at F = 0 and F = 0.5
(c) The phase of X(F)
(d) The phase of the DTFT of x[−n]
(e) The integral ∫_{−0.5}^{0.5} X(F) dF
(f) The integral ∫_{−0.5}^{0.5} X(F − 0.5) dF
(g) The integral ∫_{−0.5}^{0.5} |X(F)|^2 dF
(h) The derivative dX(F)/dF
(i) The integral ∫_{−0.5}^{0.5} |dX(F)/dF|^2 dF
[Hints and Suggestions: For (a), (c) and (d), note that x[n] has even symmetry. In (e), find x[0].
In (f), find (−1)^n x[n] at n = 0. In (g), use Parseval's theorem. In (h), use −j2πnx[n] ↔ X′(F).]
5.10 (DTFT of Periodic Signals) Find the DTFT of the following periodic signals with N samples per
period.
(a) x[n] = {1, 1, 1, 1, 1}, N = 5   (b) x[n] = {1, 1, 1, 1}, N = 4
(c) x[n] = (−1)^n   (d) x[n] = 1 (n even) and x[n] = 0 (n odd)
[Hints and Suggestions: Find the N-sample sequence X[k] = (1/N) X(F)|_{F=k/N}, k = 0, 1, ..., N − 1, to set up
X_p(F) = Σ X[k]δ(F − k/N). In parts (c) and (d), one period is {1, −1} and {1, 0}, respectively, with N = 2.]
5.11 (IDTFT) Compute the IDTFT x[n] of the following X(F) described over |F| ≤ 0.5.
(a) X(F) = rect(2F)   (b) X(F) = cos(πF)   (c) X(F) = tri(2F)
[Hints and Suggestions: In (a), use rect(F/2F_C) ↔ 2F_C sinc(2nF_C). In (b), simplify the defining
IDTFT relation. In (c), use rect(2F) ⊛ rect(2F) = 0.5 tri(2F) and the results of (a).]
5.12 (Properties) Confirm that the DTFT of x[n] = nα^n u[n], using the following methods, gives identical
results.
(a) From the defining relation for the DTFT
(b) From the DTFT of y[n] = α^n u[n] and the times-n property
(c) From the convolution result α^n u[n] * α^n u[n] = (n + 1)α^n u[n + 1] and the shifting property
(d) From the convolution result α^n u[n] * α^n u[n] = (n + 1)α^n u[n + 1] and superposition
5.13 (Properties) Find the DTFT of the following signals using the approach suggested.
(a) Starting with the DTFT of u[n], show that the DTFT of x[n] = sgn[n] is X(F) = −j cot(πF).
(b) Starting with rect(n/2N) ↔ (2N + 1) sinc[(2N+1)F]/sinc(F), use the convolution property to find the
DTFT of x[n] = tri(n/N).
(c) Starting with rect(n/2N) ↔ (2N + 1) sinc[(2N+1)F]/sinc(F), use the modulation property to find the
DTFT of x[n] = cos(nπ/2N)rect(n/2N).
[Hints and Suggestions: In part (a), note that sgn[n] = u[n] − u[−n] and simplify the DTFT result.
In part (b), start with rect(n/2N) * rect(n/2N) = M tri(n/M) where M = 2N + 1.]
5.14 (Spectrum of Discrete Periodic Signals) Sketch the DTFT magnitude spectrum and phase
spectrum of the following signals over |F| ≤ 0.5.
(a) x[n] = cos(0.5nπ)
(b) y[n] = cos(0.5nπ) + sin(0.25nπ)
(c) h[n] = cos(0.5nπ)cos(0.25nπ)
[Hints and Suggestions: Over the principal period, the magnitude spectrum of each sinusoid is an
impulse pair (with strengths that equal half the peak value) and the phase shows odd symmetry. For
part (c), use cos α cos β = 0.5 cos(α + β) + 0.5 cos(α − β) to simplify h[n].]
5.15 (DTFT) Compute the DTFT of the following signals and sketch the magnitude and phase spectrum
over −0.5 ≤ F ≤ 0.5.
(a) x[n] = cos(0.4nπ)   (b) x[n] = cos(0.2nπ + π/4)
(c) x[n] = cos(nπ)   (d) x[n] = cos(1.2nπ + π/4)
(e) x[n] = cos(2.4nπ)   (f) x[n] = cos^2(2.4nπ)
[Hints and Suggestions: Over the principal period, the magnitude spectrum of each sinusoid is an
impulse pair (with strengths that equal half the peak value) and the phase shows odd symmetry. For
part (f), use cos^2 α = 0.5 + 0.5 cos 2α and use its frequency in the principal period.]
5.16 (DTFT) Compute the DTFT of the following signals and sketch the magnitude and phase spectrum
over −0.5 ≤ F ≤ 0.5.
(a) x[n] = sinc(0.2n)   (b) y[n] = sinc(0.2n)cos(0.4nπ)
(c) g[n] = sinc^2(0.2n)   (d) h[n] = sinc(0.2n)cos(0.1nπ)
(e) p[n] = sinc^2(0.2n)cos(0.4nπ)   (f) q[n] = sinc^2(0.2n)cos(0.2nπ)
[Hints and Suggestions: In (a), X(F) is a rect function. In (c), G(F) = X(F) ⊛ X(F) is a triangular
pulse (the periodic convolution requires no wraparound). In (b), (d), (e), and (f), use modulation (and
add the images if there is overlap).]
5.17 (System Representation) Find the transfer function H(F) and the system difference equation for
the following systems described by their impulse response h[n].
(a) h[n] = (1/3)^n u[n]   (b) h[n] = [1 − (1/3)^n]u[n]
(c) h[n] = n(1/3)^n u[n]   (d) h[n] = 0.5δ[n]
(e) h[n] = δ[n] − (1/3)^n u[n]   (f) h[n] = [(1/3)^n + (1/2)^n]u[n]
[Hints and Suggestions: For the difference equation, set up H(F) as the ratio of polynomials in
e^{−j2πF}, equate with Y(F)/X(F), cross-multiply and use the shifting property to convert to the time domain.]
5.18 (System Representation) Find the transfer function and impulse response of the following systems
described by their difference equation.
(a) y[n] + 0.4y[n − 1] = 3x[n]   (b) y[n] − (1/6)y[n − 1] − (1/6)y[n − 2] = 2x[n] + x[n − 1]
(c) y[n] = 0.2x[n]   (d) y[n] = x[n] + x[n − 1] + x[n − 2]
[Hints and Suggestions: Find H(F) = Y(F)/X(F) after taking the DTFT and collecting terms in X(F)
and Y(F). In (b), evaluate h[n] after expanding H(F) by partial fractions.]
5.19 (System Representation) Set up the system difference equation for the following systems described
by their transfer function.
(a) H(F) = 6e^{j2πF}/(3e^{j2πF} + 1)
(b) H(F) = 3/(e^{j2πF} + 2) − e^{j2πF}/(e^{j2πF} + 3)
(c) H(F) = 6/(1 − 0.3e^{−j2πF})
(d) H(F) = (6e^{−j2πF} + 4e^{−j4πF}) / [(1 + 2e^{−j2πF})(1 + 4e^{−j2πF})]
[Hints and Suggestions: Set up H(F) as the ratio of polynomials in e^{−j2πF}, equate with Y(F)/X(F),
cross-multiply and use the shifting property to convert to the time domain.]
5.20 (Steady-State Response) Consider the filter y[n] + 0.25y[n − 2] = 2x[n] + 2x[n − 1]. Find the
transfer function H(F) of this filter and use it to compute the steady-state response to the following
inputs.
(a) x[n] = 5   (b) x[n] = 3 cos(0.5nπ + π/4) − 6 sin(0.5nπ − π/4)
(c) x[n] = 3 cos(0.5nπ)u[n]   (d) x[n] = 2 cos(0.25nπ) + 3 sin(0.5nπ)
[Hints and Suggestions: Set up H(F). For each sinusoidal component, find the frequency F_0,
evaluate the gain and phase of H(F_0) and obtain the output. In (b) and (d), use superposition.]
5.21 (Response of Digital Filters) Consider the 3-point averaging filter described by the difference
equation y[n] = (1/3)(x[n] + x[n − 1] + x[n − 2]).
(a) Find its impulse response h[n].
(b) Find and sketch its frequency response H(F).
(c) Find its response to x[n] = cos(nπ/3 + π/4).
(d) Find its response to x[n] = cos(nπ/3 + π/4) + sin(nπ/3 + π/4).
(e) Find its response to x[n] = cos(nπ/3 + π/4) + sin(2nπ/3 + π/4).
[Hints and Suggestions: For (c)–(e), for each sinusoidal component, find the frequency F_0, evaluate
the gain and phase of H(F_0) and obtain the output. In (d) and (e), use superposition.]
5.22 (Frequency Response) Consider a system whose frequency response H(F) in magnitude/phase
form is H(F) = A(F)e^{jθ(F)}. Find the response y[n] of this system for the following inputs.
(a) x[n] = δ[n]   (b) x[n] = 1   (c) x[n] = cos(2πnF_0)   (d) x[n] = (−1)^n
[Hints and Suggestions: For (d), note that (−1)^n = cos(nπ).]
5.23 (Frequency Response) Consider a system whose frequency response is H(F) = 2 cos(πF)e^{−jπF}.
Find the response y[n] of this system for the following inputs.
(a) x[n] = δ[n]   (b) x[n] = cos(0.5nπ)
(c) x[n] = cos(nπ)   (d) x[n] = 1
(e) x[n] = e^{j0.4nπ}   (f) x[n] = (j)^n
[Hints and Suggestions: For (a), use 2 cos θ = e^{jθ} + e^{−jθ} and find h[n]. For the rest, evaluate H(F)
at the frequency of each input to find the output. For (f), note that j = e^{jπ/2}.]
5.24 (Frequency Response) The signal x[n] =
0.5).
(f) What is the half-power frequency of a cascade of N such 2-point averagers?
[Hints and Suggestions: For (a), extract the factor e^{−jπF} in H(F) to get H(F) = A(F)e^{jθ(F)} and
sketch A(F). For (b), find h[n]. For (c), evaluate H(F) at the frequency of x[n] to find the output.
For (d), use superposition. For (e), use |A(F)| =
Figure P5.26 Filter realization for Problem 5.26
[Hints and Suggestions: Note that y[n] = (x[n] y[n 1]) h
1
[n]. Use this to set up the dierence
equation, check stablity and nd H(F).]
5.27 (Interconnected Systems) Consider two systems with impulse response h_1[n] = αδ[n] + δ[n − 1]
and h_2[n] = (0.5)^n u[n]. Find the frequency response and impulse response of the combination if
(a) The two systems are connected in parallel with α = 0.5.
(b) The two systems are connected in parallel with α = −0.5.
(c) The two systems are connected in cascade with α = 0.5.
(d) The two systems are connected in cascade with α = −0.5.
[Hints and Suggestions: In each case, find h[n] for the combination to find H(F). You may also
work directly in the frequency domain.]
5.28 (Interconnected Systems) Consider two systems with impulse response h_1[n] = αδ[n] + δ[n − 1]
and h_2[n] = (0.5)^n u[n]. Find the response of the overall system to the input x[n] = cos(0.5nπ) if
(a) The two systems are connected in parallel with α = 0.5.
(b) The two systems are connected in parallel with α = −0.5.
(c) The two systems are connected in cascade with α = 0.5.
(d) The two systems are connected in cascade with α = −0.5.
[Hints and Suggestions: In each case, find H(F) for the combination and evaluate at the frequency
of the input to set up the output.]
5.29 (DTFT of Periodic Signals) Find the DTFT of the following periodic signals described over one
period, with N samples per period.
(a) x[n] = {1, 0, 0, 0, 0}, N = 5
(b) x[n] = {1, 0, 1, 0}, N = 4
(c) x[n] = {3, 2, 1, 2}, N = 4
(d) x[n] = {1, 2, 3}, N = 3
[Hints and Suggestions: Find the N-sample sequence X[k] = (1/N) X(F)|_{F=k/N}, k = 0, 1, ..., N − 1, to set up
X_p(F) = Σ X[k]δ(F − k/N).]
5.30 (System Response to Periodic Signals) Consider the 3-point moving average filter described by
y[n] = x[n] + x[n − 1] + x[n − 2]. Find one period of its periodic response to the following periodic
inputs.
(a) x[n] = {1, 0, 0, 0, 0}, N = 5
(b) x[n] = {1, 0, 1, 0}, N = 4
(c) x[n] = {3, 2, 1}, N = 3
(d) x[n] = {1, 2}, N = 2
[Hints and Suggestions: Find h[n], create its N-sample periodic extension, and find its periodic
convolution with the input (you may use regular convolution and wraparound).]
5.31 (DTFT of Periodic Signals) One period of a periodic signal is given by x[n] = {1, 2, 0, 1}.
(a) Find the DTFT of this periodic signal.
(b) The signal is passed through a filter whose impulse response is h[n] = sinc(0.8n). What is the
filter output?
[Hints and Suggestions: In (a), find the N-sample sequence X[k] = (1/N) X(F)|_{F=k/N}, k = 0, 1, ..., N − 1, to
set up X_p(F) = Σ X[k]δ(F − k/N). In (b), h[n] is an ideal filter that passes frequencies in the range
−0.4 ≤ F < 0.4 with a gain of 1.25. So, extend X_p(F) to the principal range −0.5 ≤ F < 0.5. Each
impulse pair in X_p(F) (in the principal range) passed by the filter is a sinusoid.]
5.32 (System Response to Periodic Signals) Consider the filter y[n] − 0.5y[n − 1] = 3x[n]. Find its
response to the following periodic inputs.
(a) x[n] = {4, 0, 2, 1}, N = 4
(b) x[n] = (−1)^n
(c) x[n] = 1 (n even) and x[n] = 0 (n odd)
(d) x[n] = 14(0.5)^n (u[n] − u[n − (N + 1)]), N = 3
[Hints and Suggestions: h[n] = 3(0.5)^n u[n]. Its periodic extension is h_p[n] = 3(0.5)^n/(1 − 0.5^N)
with period N. Find the periodic convolution of h_p[n] and one period of x[n] (by regular convolution
and wraparound). For (b), one period is x[n] = {1, −1} (N = 2).]
5.33 (System Response to Periodic Signals) Find the response if a periodic signal whose one period
is x[n] =
x_i(t) = Σ_{k=−∞}^{∞} x(t)δ(t − kt_s), where t_s = 1/S
(a) Find the Fourier transform X(f).
(b) Find the Fourier transform X_i(f).
(c) If the sampled version of x(t) is x[n], find the DTFT X(F).
(d) How is X(F) related to X_i(f)?
(e) Under what conditions is X(F) related to X(f)?
[Hints and Suggestions: In (b), x_i(t) = δ(t) + e^{−αt_s}δ(t − t_s) + e^{−2αt_s}δ(t − 2t_s) + ···. So, find X_i(f)
and simplify the geometric series. In (c), use x[n] = e^{−αnt_s} u[n] = (e^{−αt_s})^n u[n]. In (d), use F = f t_s.]
5.36 (Ideal Filter Concepts) Let H_1(F) be an ideal lowpass filter with a cutoff frequency of F_1 = 0.2
and let H_2(F) be an ideal lowpass filter with a cutoff frequency of F_2 = 0.4. Make a block diagram of
how you could use these filters to implement the following and express their transfer function in terms
of H_1(F) and/or H_2(F).
(a) An ideal highpass filter with a cutoff frequency of F_C = 0.2
(b) An ideal highpass filter with a cutoff frequency of F_C = 0.4
(c) An ideal bandpass filter with a passband covering 0.2 ≤ F ≤ 0.4
(d) An ideal bandstop filter with a stopband covering 0.2 ≤ F ≤ 0.4
[Hints and Suggestions: In (a), use H(F) = 1 − H_1(F) to set up the block diagram.]
5.37 (Echo Cancellation) A microphone, whose frequency response is limited to 5 kHz, picks up not only
a desired signal x(t) but also its echoes. However, only the first echo, arriving after t_d = 1.25 ms, has
a significant amplitude (of α = 0.5) relative to the desired signal x(t).
(a) Set up an equation relating the analog output and analog input of the microphone.
(b) The microphone signal is to be processed digitally in an effort to remove the echo. Set up a
difference equation relating the sampled output and sampled input of the microphone using an
arbitrary sampling rate S.
(c) Argue that if S equals the Nyquist rate, the difference equation for the microphone will contain
fractional delays and be difficult to implement.
(d) If the sampling rate S can be varied only in steps of 1 kHz, choose the smallest S that will ensure
integer delays and use it to find the difference equation of the microphone and describe the nature
of this filter.
(e) Set up the difference equation of an echo-canceling system that can recover the original signal
using the sampling rate of the previous part. Find its frequency response.
(f) If the microphone is cascaded with the echo-canceling filter, what is the impulse response of the
cascaded system?
[Hints and Suggestions: In (a), y(t) = x(t) + αx(t − t_d). In (b), y[n] = x[n] + αx[n − N] with
N = St_d. In (e), choose the inverse system y[n] + αy[n − N] = x[n]. In (f), note that H_C(F) = 1.]
5.38 (Modulation) A signal x[n] is modulated by cos(0.5nπ) to obtain the signal x_1[n]. The modulated
signal x_1[n] is filtered by a filter whose transfer function is H(F) to obtain the signal y_1[n].
(a) Sketch the spectra X(F), X_1(F), and Y_1(F) if X(F) = tri(4F) and H(F) = rect(2F).
(b) The signal y_1[n] is modulated again by cos(0.5nπ) to obtain the signal y_2[n], and filtered by H(F)
to obtain y[n]. Sketch Y_2(F) and Y(F).
(c) Are the signals x[n] and y[n] related in any way?
[Hints and Suggestions: In (a), the output is a version of X(F) truncated in frequency. In (b), the
output will show sinc distortion (due to the zero-order-hold).]
5.39 (Group Delay) Show that the group delay t_g of a filter described by its transfer function H(F) may
be expressed as

    t_g = [H_R′(F)H_I(F) − H_I′(F)H_R(F)] / (2π|H(F)|^2)

Here, H(F) = H_R(F) + jH_I(F) and the primed quantities describe derivatives with respect to F. This
result may be used to find the group delay of both FIR and IIR filters.
(a) Find the group delay of the FIR filter described by h[n] = {α, 1}.
(b) Find the group delay of the FIR filter described by H(F) = 1 + αe^{−j2πF}.
(c) For an IIR filter described by H(F) = N(F)/D(F), the overall group delay may be found as the
difference of the group delays of N(F) and D(F). Use this concept to find the group delay of
the filter given by H(F) = (α + e^{−j2πF})/(1 + αe^{−j2πF}).
[Hints and Suggestions: Start with t_g = −(1/2π) dθ(F)/dF = −(1/2π) d/dF [tan^{−1}(H_I(F)/H_R(F))].]
h_1[n] = {1, 2, 1}   h_2[n] = {2, 0, −2}
(a) Plot the frequency response of each filter and identify the filter type.
(b) The frequency response of the parallel connection of h_1[n] and h_2[n] is H_P1(F). If the second filter
is delayed by one sample and then connected in parallel with the first, the frequency response
changes to H_P2(F). It is claimed that H_P1(F) and H_P2(F) have the same magnitude and differ
only in phase. Use Matlab to argue for or against this claim.
(c) Obtain the impulse responses h_P1[n] and h_P2[n] and plot their frequency response. Use Matlab
to compare their magnitude and phase. Do the results justify your argument? What type of
filters do h_P1[n] and h_P2[n] describe?
(d) The frequency response of the cascade of h_1[n] and h_2[n] is H_C1(F). If the second filter is delayed
by one sample and then cascaded with the first, the frequency response changes to H_C2(F). It
is claimed that H_C1(F) and H_C2(F) have the same magnitude and differ only in phase. Use
Matlab to argue for or against this claim.
(e) Obtain the impulse responses h_C1[n] and h_C2[n] and plot their frequency response. Use Matlab
to compare their magnitude and phase. Do the results justify your argument? What type of
filters do h_C1[n] and h_C2[n] represent?
5.42 (Nonrecursive Forms of IIR Filters) We can only approximately represent an IIR filter by an FIR
filter by truncating its impulse response to N terms. The larger the truncation index N, the better is
the approximation. Consider the IIR filter described by y[n] − 0.8y[n − 1] = x[n].
(a) Find its impulse response h[n].
(b) Truncate h[n] to three terms to obtain h_N[n]. Plot the frequency responses H(F) and H_N(F).
What differences do you observe?
(c) Truncate h[n] to ten terms to obtain h_N[n]. Plot the frequency responses H(F) and H_N(F).
What differences do you observe?
(d) If the input to the original filter and truncated filter is x[n], will the greatest mismatch in the
response y[n] of the two filters occur at earlier or later time instants n?
5.43 (Allpass Filters) Consider a lowpass filter with impulse response h[n] = (0.5)^n u[n]. The input to
this filter is x[n] = cos(0.2nπ). We expect the output to be of the form y[n] = A cos(0.2nπ + θ).
(a) Find the values of A and θ.
(b) What should be the transfer function H_1(F) of a first-order allpass filter that can be cascaded with
the lowpass filter to correct for the phase distortion and produce the signal z[n] = B cos(0.2nπ)
at its output?
(c) What should be the gain of the allpass filter in order that z[n] = x[n]?
5.44 (Frequency Response of Averaging Filters) The averaging of data uses both FIR and IIR filters.
Consider the following averaging filters:
Filter 1: y[n] = (1/N) Σ_{k=0}^{N−1} x[n − k]   (N-point moving average)
Filter 2: y[n] = [2/(N(N + 1))] Σ_{k=0}^{N−1} (N − k)x[n − k]   (N-point weighted moving average)
Filter 3: y[n] − αy[n − 1] = (1 − α)x[n],  α = (N − 1)/(N + 1)   (first-order exponential average)
(a) Confirm that the dc gain of each filter is unity. Which of these are FIR filters?
(b) Sketch the frequency response magnitude of each filter with N = 4 and N = 9. How will the
choice of N affect the averaging?
(c) To test your filters, generate the signal x[n] = 1 − (0.6)^n, 0 ≤ n ≤ 300, add some noise, and apply
the noisy signal to each filter and compare the results. Which filter would you recommend?
Chapter 6
FILTER CONCEPTS
6.0 Scope and Objectives
This chapter introduces the terminology of filters and the time-domain and frequency-domain measures that
characterize them. It deals with the connections between various techniques of digital filter analysis. It
discusses the graphical interpretation of the frequency response and introduces the concept of filter design
by pole-zero placement. It concludes by describing a variety of filters (such as minimum-phase, allpass, comb
and notch) that find use in digital signal processing.
6.1 Frequency Response and Filter Characteristics
Plots of the magnitude and phase of the transfer function against frequency are referred to collectively as the
frequency response. The frequency response is a very useful way of describing digital filters. We may often
get a fair idea of the filter behaviour in the frequency domain even without detailed plots of the frequency
response. The dc gain (at F = 0 or Ω = 0) and the high-frequency gain (at F = 0.5 or Ω = π) serve as a
useful starting point. For a lowpass filter, for example, whose gain typically decreases with frequency, we
should expect the dc gain to exceed the high-frequency gain. Other clues are provided by the coefficients
(and form) of the difference equation, the transfer function and the impulse response. The more complex
the filter (expression), the less likely it is for us to discern its nature by using only a few simple measures.
REVIEW PANEL 6.1
Finding the DC Gain and High-Frequency Gain
From H(z): Evaluate H(z) at z = 1 (for dc gain) and z = −1 (for high-frequency gain).
From H(F) or H(Ω): Evaluate H(F) at F = 0 and F = 0.5, or evaluate H(Ω) at Ω = 0 and Ω = π.
From the impulse response: Evaluate Σ h[n] = H(0) and Σ (−1)^n h[n] = H(F)|_{F=0.5} = H(Ω)|_{Ω=π}.
From the difference equation: For dc gain, take the ratio of the sum of the RHS and LHS coefficients.
For high-frequency gain, reverse the sign of alternate coefficients and then take the ratio of the sums.
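The difference-equation rule in this panel is easy to apply in code. The sketch below (not from the text) uses the filter y[n] + 0.25y[n−2] = 2x[n] + 2x[n−1] of Problem 5.20 as an assumed example and computes both gains from the coefficients alone.

```python
# LHS (output) and RHS (input) coefficients of
# y[n] + 0*y[n-1] + 0.25*y[n-2] = 2*x[n] + 2*x[n-1]
a = [1, 0, 0.25]
b = [2, 2]

# dc gain: ratio of the coefficient sums, i.e. H(F) at F = 0
dc_gain = sum(b) / sum(a)

# high-frequency gain: reverse the sign of alternate coefficients,
# then take the ratio of the sums, i.e. H(F) at F = 0.5
hf_gain = (sum(c * (-1) ** k for k, c in enumerate(b))
           / sum(c * (-1) ** k for k, c in enumerate(a)))

print(dc_gain)   # 3.2
print(hf_gain)   # 0.0
```

The zero high-frequency gain immediately suggests lowpass-like behaviour, without plotting H(F) at all.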
6.1.1 Phase Delay and Group Delay
In the frequency domain, performance measures of digital filters are based on the magnitude (gain), phase
and delay. The phase delay and group delay of a digital filter with transfer function H(F) are defined by

    t_p(F_0) = −∠H(F_0)/(2πF_0)   (phase delay)        t_g(F) = −(1/2π) d∠H(F)/dF   (group delay)        (6.1)
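The group delay definition in Eq. (6.1) can be applied numerically by unwrapping the phase and differentiating. The sketch below (not from the text) checks it on a pure delay of D samples, H(F) = e^{−j2πFD}, whose group delay is D at every frequency; D and the frequency grid are assumed values.

```python
import numpy as np

D = 3                                        # a pure delay of D samples
F = np.linspace(0.01, 0.49, 200)
H = np.exp(-2j * np.pi * F * D)              # H(F) = e^{-j*2*pi*F*D}
phase = np.unwrap(np.angle(H))               # continuous (unwrapped) phase
tg = -np.gradient(phase, F) / (2 * np.pi)    # t_g = -(1/2pi) d(phase)/dF
print(np.allclose(tg, D))                    # True: group delay is D everywhere
```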
6.1.2 Minimum-Phase
Consider a system described in factored form by

    H(z) = K (z − z_1)(z − z_2)(z − z_3) ··· (z − z_m) / [(z − p_1)(z − p_2)(z − p_3) ··· (z − p_n)]        (6.2)

If we replace K by −K, or a factor (z − α) by (1/z − α), or by (1 − αz)/z, or by (1 − αz), the magnitude
|H(F)| remains unchanged, and only the phase is affected. If K > 0 and all the poles and zeros of H(z)
lie inside the unit circle, H(z) is stable and defines a minimum-phase system. It shows the smallest
group delay (and the smallest deviation from zero phase) at every frequency among all systems with the
same magnitude response. A stable system is called mixed phase if some of its zeros lie outside the unit
circle and maximum phase if all its zeros lie outside the unit circle. Of all stable systems with the same
magnitude response, there is only one minimum-phase system.
REVIEW PANEL 6.2
All Poles and Zeros of a Minimum-Phase System Lie Inside the Unit Circle
EXAMPLE 6.1 (The Minimum-Phase Concept)
Consider the transfer functions of the following systems:

H₁(z) = (z − 1/2)(z − 1/4) / [(z − 1/3)(z − 1/5)]        H₂(z) = (1 − z/2)(z − 1/4) / [(z − 1/3)(z − 1/5)]        H₃(z) = (1 − z/2)(1 − z/4) / [(z − 1/3)(z − 1/5)]

Each system has poles inside the unit circle and is thus stable. All systems have the same magnitude. Their phase and delay are different, as shown in Figure E6.1.
Figure E6.1 Response of the systems for Example 6.1: (a) the phase of the three filters H₁(z), H₂(z), and H₃(z), (b) their unwrapped phase, and (c) their delay, each plotted against the digital frequency F
The phase response confirms that H₁(z) is a minimum-phase system with no zeros outside the unit circle, H₂(z) is a mixed-phase system with one zero outside the unit circle, and H₃(z) is a maximum-phase system with all its zeros outside the unit circle.
6.1.3 Minimum-Phase Filters from the Magnitude Spectrum
The design of many digital filters is often based on a specified magnitude response |H_p(F)|. The phase response is then selected to ensure a causal, stable system. The transfer function of such a system is unique and may be found by writing its magnitude squared function |H_p(F)|² as

|H_p(F)|² = H_p(F)H_p(−F) = H(z)H(1/z)|z=exp(j2πF)        (6.3)
From |H_p(F)|², we can reconstruct H_T(z) = H(z)H(1/z), which displays conjugate reciprocal symmetry. For every root r_k, there is a root at 1/r_k. We thus select only the roots lying inside the unit circle to extract the minimum-phase transfer function H(z). The following example illustrates the process.
EXAMPLE 6.2 (Finding the Minimum-Phase Filter)
Find the minimum-phase transfer function H(z) corresponding to |H_p(Ω)|² = (5 + 4 cos Ω)/(17 + 8 cos Ω).
We use Euler's relation to give |H_p(Ω)|² = (5 + 2e^{jΩ} + 2e^{−jΩ})/(17 + 4e^{jΩ} + 4e^{−jΩ}). Upon substituting e^{jΩ} → z, we obtain H_T(z) as

H_T(z) = H(z)H(1/z) = (5 + 2z + 2/z)/(17 + 4z + 4/z) = (2z² + 5z + 2)/(4z² + 17z + 4) = (2z + 1)(z + 2) / [(4z + 1)(z + 4)]

To extract H(z), we pick the roots of H_T(z) that correspond to |z| < 1. This yields

H(z) = (2z + 1)/(4z + 1) = 0.5 (z + 0.5)/(z + 0.25)

We find that H(z)|z=1 = |H_p(Ω)|Ω=0 = 0.6, implying identical dc gains.
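The root-selection step of this example is easy to automate. A sketch (illustrative, using numpy): factor the numerator and denominator of H_T(z), keep the roots inside the unit circle, and fix the gain K by matching the dc gain of 0.6.

```python
import numpy as np

# Sketch of Example 6.2: factor H_T(z) = H(z)H(1/z) and keep only the
# roots inside the unit circle to recover the minimum-phase H(z).
num_T = np.array([2.0, 5.0, 2.0])    # 2z^2 + 5z + 2 = (2z + 1)(z + 2)
den_T = np.array([4.0, 17.0, 4.0])   # 4z^2 + 17z + 4 = (4z + 1)(z + 4)

zeros = np.roots(num_T)              # -2.0 and -0.5
poles = np.roots(den_T)              # -4.0 and -0.25
z_min = zeros[np.abs(zeros) < 1]     # keep z = -0.5
p_min = poles[np.abs(poles) < 1]     # keep z = -0.25

# H(z) = K (z + 0.5)/(z + 0.25); fix K so the dc gains match: H(1) = 0.6
K = 0.6 * np.prod(1 - p_min) / np.prod(1 - z_min)
print(K)                             # 0.5, as found analytically
```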
6.1.4 The Frequency Response: A Graphical View
The frequency response of an LTI system may also be found by evaluating its transfer function H(z) for values of z = e^{j2πF} = e^{jΩ} (on the unit circle). The quantity Ω = 2πF describes the angular orientation. Figure 6.1 shows how the values of z, Ω, and F are related on the unit circle.
Figure 6.1 Relating the variables z, F, and Ω through the unit circle |z| = 1: z = e^{jΩ} = e^{j2πF}, with z = 1 (Ω = 0, F = 0), z = j (Ω = π/2, F = 0.25), and z = −1 (Ω = π, F = 0.5)
REVIEW PANEL 6.3
The Frequency Response Is the z-Transform Evaluated on the Unit Circle
It corresponds to the DTFT: H(F) = H(z)|z=exp(j2πF) or H(Ω) = H(z)|z=exp(jΩ)
The factored form or pole-zero plot of H(z) is quite useful if we want a qualitative picture of its frequency response H(F) or H(Ω). Consider the stable transfer function H(z) given in factored form by

H(z) = 8(z − 1) / [(z − 0.6 − j0.6)(z − 0.6 + j0.6)]        (6.4)

Its frequency response H(Ω₀) and magnitude |H(Ω₀)| at Ω = Ω₀ are given by

H(Ω₀) = 8(e^{jΩ₀} − 1) / [(e^{jΩ₀} − 0.6 − j0.6)(e^{jΩ₀} − 0.6 + j0.6)] = 8N₁/(D₁D₂)        (6.5)

|H(Ω₀)| = 8|N₁| / (|D₁||D₂|) = (gain factor) × (product of distances from zeros)/(product of distances from poles)        (6.6)

Analytically, the magnitude |H(Ω₀)| is the ratio of the magnitudes of each term. Graphically, the complex terms may be viewed in the z-plane as vectors N₁, D₁, and D₂, directed from each pole or zero location to the location Ω = Ω₀ on the unit circle corresponding to z = e^{jΩ₀}. The gain factor times the ratio of the vector magnitudes (the product of distances from the zeros divided by the product of distances from the poles) yields |H(Ω₀)|, the magnitude at Ω = Ω₀. The difference in the angles yields the phase at Ω = Ω₀. The vectors and the corresponding magnitude spectrum are sketched for several values of Ω in Figure 6.2.
Figure 6.2 Graphical interpretation of the frequency response: the vectors N₁, D₁, and D₂ drawn from the zero and pole locations to points A, B, and C on the unit circle, and the resulting magnitude spectrum |H(F)|
A graphical evaluation can yield exact results but is much more suited to obtaining a qualitative estimate of the magnitude response. We observe how the vector ratio N₁/(D₁D₂) influences the magnitude as Ω is increased from Ω = 0 to Ω = π. For our example, at Ω = 0, the vector N₁ is zero, and the magnitude is zero. For 0 < Ω < π/4 (point A), both |N₁| and |D₂| increase, but |D₁| decreases. Overall, the response is small but increasing. At Ω = π/4, the vector D₁ attains its smallest length, and we obtain a peak in the response. For π/4 < Ω < π (points B and C), |N₁| and |D₁| are of nearly equal length, while |D₂| is increasing. The magnitude is thus decreasing. The form of this response is typical of a bandpass filter.
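The vector picture translates directly into a few lines of code. The sketch below (illustrative) evaluates |H(Ω)| for the example of Eq. (6.4) as the gain factor times the product of distances from the zero, divided by the product of distances from the poles, and locates the bandpass peak near Ω = π/4.

```python
import numpy as np

# Sketch: |H(Omega)| of H(z) = 8(z - 1)/((z - 0.6 - j0.6)(z - 0.6 + j0.6))
# as (gain) * (product of zero distances)/(product of pole distances).
zeros = np.array([1.0])
poles = np.array([0.6 + 0.6j, 0.6 - 0.6j])
K = 8.0

def mag(omega):
    z = np.exp(1j * omega)
    return K * np.prod(np.abs(z - zeros)) / np.prod(np.abs(z - poles))

w = np.linspace(0.01, np.pi, 2000)
mags = np.array([mag(x) for x in w])
w_peak = w[np.argmax(mags)]
print(mag(0.0))   # 0: the zero at z = 1 nulls the response at dc
print(w_peak)     # peak near pi/4, where the pole is closest to the circle
```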
6.1.5 The Rubber Sheet Analogy
If we imagine the z-plane as a rubber sheet, tacked down at the zeros of H(z) and poked up to an infinite height at the pole locations, the curved surface of the rubber sheet approximates the magnitude of H(z) for any value of z, as illustrated in Figure 6.3. The poles tend to poke up the surface, and the zeros try to pull it down. The slice around the unit circle (|z| = 1) approximates the frequency response H(Ω).
EXAMPLE 6.3 (Filters and Pole-Zero Plots)
Identify the filter types corresponding to the pole-zero plots of Figure E6.3.
Figure 6.3 A plot of the magnitude of H(z) = 8(z − 1)/(z² − 1.2z + 0.72) in the z-plane: (a) the magnitude surface, and (b) a blowup of the magnitude inside and around the unit circle
Figure E6.3 Pole-zero plots of filters 1 through 4 for Example 6.3, with test points A and B marked on the unit circle
(a) For filter 1, the vector length of the numerator is always unity, but the vector length of the denominator keeps increasing as we increase the frequency (the points A and B, for example). The magnitude (the ratio of the numerator and denominator lengths) thus decreases with frequency and corresponds to a lowpass filter.
(b) For filter 2, the vector length of the numerator is always unity, but the vector length of the denominator keeps decreasing as we increase the frequency (the points A and B, for example). The magnitude increases with frequency and corresponds to a highpass filter.
(c) For filter 3, the magnitude is zero at Ω = 0. As we increase the frequency (the points A and B, for example), the ratio of the vector lengths of the numerator and denominator increases, and this also corresponds to a highpass filter.
(d) For filter 4, the magnitude is zero at the zero location Ω = Ω₀. At any other frequency (the point B, for example), the vector lengths of the numerator and denominator are almost equal, resulting in an almost constant gain. This describes a bandstop filter.
6.2 FIR Filters and Linear-Phase
The DTFT of a filter whose impulse response is symmetric about the origin is purely real or purely imaginary. The phase of such a filter is thus piecewise constant. If the symmetric sequence is shifted (to make it causal, for example), its phase is augmented by a linear-phase term and becomes piecewise linear. A filter whose impulse response is symmetric about its midpoint is termed a (generalized) linear-phase filter. Linear phase is important in filter design because it results in a pure time delay with no amplitude distortion. The transfer function of a causal linear-phase filter may be written as H(F) = A(F)e^{−j2παF} (for even symmetry) or H(F) = jA(F)e^{−j2παF} = A(F)e^{j(π/2 − 2παF)} (for odd symmetry), where the real quantity A(F) is the amplitude spectrum and α is the (integer or half-integer) index corresponding to the midpoint of its impulse response h[n]. The easiest way to obtain this form is to first set up an expression for H(F), then extract the factor e^{−j2παF} from H(F), and finally simplify using Euler's relation.
REVIEW PANEL 6.4
A Linear-Phase Sequence Is Symmetric About Its Midpoint
If h[n] is a linear-phase sequence with its midpoint at the (integer or half-integer) index α, then
H(F) = A(F)e^{−j2παF} (for even symmetric h[n]) and H(F) = jA(F)e^{−j2παF} (for odd symmetric h[n]).
To find A(F): Set up H(F), extract e^{−j2παF} from H(F), and simplify using Euler's relation.
6.2.1 Pole-Zero Patterns of Linear-Phase Filters
Linear phase plays an important role in the design of digital filters because it results in a constant delay with no amplitude distortion. An FIR filter whose impulse response sequence is symmetric about its midpoint is endowed with linear phase and constant delay. The pole and zero locations of such a sequence cannot be arbitrary. The poles must lie at the origin if the sequence h[n] is to be of finite length. Sequences that are symmetric about the origin also require h[n] = ±h[−n] and thus H(z) = ±H(1/z). The zeros of a linear-phase sequence must occur in reciprocal pairs (and conjugate pairs if complex, to ensure real coefficients) and exhibit what is called conjugate reciprocal symmetry. This is illustrated in Figure 6.4.

Figure 6.4 Conjugate reciprocal symmetry: the zeros of a linear-phase sequence display conjugate reciprocal symmetry (r, r*, 1/r, 1/r* for a complex zero; r, 1/r for a real zero)

Each complex zero forms part of a quadruple because it is paired with its conjugate and its reciprocal, with the following exceptions. Zeros at z = 1 or at z = −1 can occur singly because they form their own reciprocal and their own conjugate. Zeros on the real axis must occur in pairs with their reciprocals (with no conjugation required). Zeros on the unit circle (except at z = ±1) must occur in pairs with their conjugates (which also form their reciprocals). If there are no zeros at z = 1, a linear-phase sequence is always even symmetric about its midpoint. For odd symmetry about the midpoint, there must be an odd number of zeros at z = 1.
The frequency response of a linear-phase filter may be written as H(F) = A(F)e^{−j2παF} (for even symmetry) or H(F) = jA(F)e^{−j2παF} = A(F)e^{j(π/2 − 2παF)} (for odd symmetry), where A(F) is real.
REVIEW PANEL 6.5
Characteristics of a Linear-Phase Filter
1. The impulse response h[n] is symmetric about its midpoint.
2. H(F) = A(F)e^{−j2παF} (for even symmetric h[n]) and H(F) = jA(F)e^{−j2παF} (for odd symmetric h[n]).
3. All poles are at z = 0 (for finite length).
4. Zeros occur in conjugate reciprocal quadruples, in general. Zeros on the unit circle occur in conjugate pairs. Zeros on the real axis occur in reciprocal pairs. Zeros at z = 1 or z = −1 can occur singly.
5. Odd symmetry about the midpoint if there is an odd number of zeros at z = 1 (even symmetry otherwise).
6. If h[n] = ±h[−n], then H(z) = ±H(1/z) (symmetry about the origin n = 0).
EXAMPLE 6.4 (Linear-Phase Filters)
(a) Does H(z) = 1 + 2z⁻¹ + 2z⁻² + z⁻³ describe a linear-phase filter?
We express H(z) as a ratio of polynomials in z to get

H(z) = (z³ + 2z² + 2z + 1)/z³

All its poles are at z = 0. Its zeros at z = −1 and z = −0.5 ± j0.866 are consistent with a linear-phase filter because the real zero at z = −1 can occur singly and the complex conjugate pair of zeros lies on the unit circle.
Since H(z) ≠ H(1/z), the impulse response h[n] cannot be symmetric about the origin n = 0 (even though it must be symmetric about its midpoint).
We could reach the same conclusions by recognizing that h[n] = {1, 2, 2, 1} describes a linear-phase sequence with even symmetry about its midpoint n = 1.5.
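A quick numerical check of part (a) (illustrative): the zeros of z³ + 2z² + 2z + 1 should all lie on the unit circle and display conjugate reciprocal symmetry.

```python
import numpy as np

# Zeros of H(z) = (z^3 + 2z^2 + 2z + 1)/z^3 from Example 6.4(a).
zeros = np.roots([1.0, 2.0, 2.0, 1.0])
print(np.sort_complex(zeros))            # z = -1 and z = -0.5 +/- j0.866

# Every zero r is matched by its conjugate reciprocal 1/r*.
for r in zeros:
    assert np.min(np.abs(zeros - 1.0 / np.conj(r))) < 1e-9
print(np.allclose(np.abs(zeros), 1.0))   # True: all zeros on the unit circle
```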
(b) Let h[n] = δ[n + 2] + 4.25δ[n] + δ[n − 2]. Sketch the pole-zero plot of H(z).
We find H(z) = z² + 4.25 + z⁻². Since h[n] is even symmetric about n = 0 with h[n] = h[−n], we must have H(z) = H(1/z). This is, in fact, the case.
We express H(z) as a ratio of polynomials in factored form to get

H(z) = (z⁴ + 4.25z² + 1)/z² = (z + j0.5)(z − j0.5)(z + j2)(z − j2)/z²

The pole-zero plot is shown in Figure E6.4 (left panel). All its poles are at z = 0. The four zeros at z = j0.5, z = j2, z = −j0.5, and z = −j2 display conjugate reciprocal symmetry.
Figure E6.4 Pole-zero plots of the filters for Example 6.4(b and c): (b) K = 1, with zeros at z = ±j0.5 and z = ±j2; (c) K = 1, with zeros at z = ±1, z = −0.5, and z = −2
(c) Sketch the pole-zero plot for H(z) = z² + 2.5z − 2.5z⁻¹ − z⁻². Is this a linear-phase filter?
We note that H(z) = −H(1/z). This means that h[n] = −h[−n]. In fact, h[n] = {1, 2.5, 0, −2.5, −1}. This describes a linear-phase filter.
With H(z) described as a ratio of polynomials in factored form, we get

H(z) = (z − 1)(z + 1)(z + 0.5)(z + 2)/z²

The pole-zero plot is shown in Figure E6.4 (right panel). All its poles are at z = 0. There is a pair of reciprocal zeros at z = −0.5 and z = −2. The two zeros at z = 1 and z = −1 occur singly.
6.2.2 Types of Linear-Phase Sequences
Linear-phase sequences fall into four types. A type 1 sequence has even symmetry and odd length. A type 2 sequence has even symmetry and even length. A type 3 sequence has odd symmetry and odd length. A type 4 sequence has odd symmetry and even length.
To identify the type of sequence from its pole-zero plot, all we need to do is check for the presence of zeros at z = ±1 and count their number. A type 2 sequence must have an odd number of zeros at z = −1, a type 3 sequence must have an odd number of zeros at z = −1 and z = 1, and a type 4 sequence must have an odd number of zeros at z = 1. The number of other zeros, if present (at z = 1 for type 1 and type 2, or z = −1 for type 1 and type 4), must be even. The reason is simple. If we exclude z = ±1, the remaining zeros occur in pairs or quadruples and always yield a type 1 sequence with odd length and even symmetry. Including a zero at z = −1 increases the length by one without changing the symmetry. Including a zero at z = 1 increases the length by one and also changes the symmetry. Only an odd number of zeros at z = 1 results in an odd symmetric sequence. These ideas are illustrated in Figure 6.5.
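The classification by symmetry and length can be sketched as a small helper (illustrative, not from the text):

```python
import numpy as np

# Sketch: classify a linear-phase sequence as type 1-4 from its symmetry
# (even/odd about the midpoint) and its length (odd/even).
def sequence_type(h):
    h = np.asarray(h, dtype=float)
    odd_len = (len(h) % 2 == 1)
    if np.allclose(h, h[::-1]):        # even symmetric
        return 1 if odd_len else 2
    if np.allclose(h, -h[::-1]):       # odd symmetric
        return 3 if odd_len else 4
    raise ValueError("not a linear-phase sequence")

print(sequence_type([1, 2, 1]))                  # 1: even symmetry, odd length
print(sequence_type([1, 2, 2, 1]))               # 2: even symmetry, even length
print(sequence_type([1, 0, -1]))                 # 3: odd symmetry, odd length
print(sequence_type([1, -1]))                    # 4: odd symmetry, even length
print(sequence_type([1, -2, 1, 0, -1, 2, -1]))   # 3 (the sequence of Example 6.5c)
```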
Figure 6.5 Identifying the sequence type from its zeros at z = ±1: the zeros at z = −1 (type 2), at z = −1 and z = 1 (type 3), or at z = 1 (type 4) must occur in odd numbers; any other zeros at z = ±1 must occur in even numbers (if present); all other zeros must show conjugate reciprocal symmetry
REVIEW PANEL 6.6
How to Identify the Type of a Linear-Phase Sequence
From the sequence x[n]: Check the length and the symmetry to identify the type.
From the pole-zero plot: Count the number of zeros at z = ±1.
Type 1: Even number of zeros at z = 1 (if present) and at z = −1 (if present)
Type 2: Odd number of zeros at z = −1 (and even number of zeros at z = 1, if present)
Type 3: Odd number of zeros at z = −1 and odd number of zeros at z = 1
Type 4: Odd number of zeros at z = 1 (and even number of zeros at z = −1, if present)
EXAMPLE 6.5 (Identifying Linear-Phase Sequences)
(a) Find all the zero locations of a type 1 sequence, assuming the smallest length, if it is known that there is a zero at z = 0.5e^{jπ/3} and a zero at z = 1.
Due to conjugate reciprocal symmetry, the zero at z = 0.5e^{jπ/3} implies the quadruple of zeros

z = 0.5e^{jπ/3}        z = 0.5e^{−jπ/3}        z = 2e^{jπ/3}        z = 2e^{−jπ/3}

For a type 1 sequence, the number of zeros at z = 1 must be even. So, there must be another zero at z = 1. Thus, we have a total of six zeros.
(b) Find all the zero locations of a type 2 sequence, assuming the smallest length, if it is known that there is a zero at z = 0.5e^{jπ/3} and a zero at z = 1.
Due to conjugate reciprocal symmetry, the zero at z = 0.5e^{jπ/3} implies the quadruple of zeros

z = 0.5e^{jπ/3}        z = 0.5e^{−jπ/3}        z = 2e^{jπ/3}        z = 2e^{−jπ/3}

A type 2 sequence is even symmetric and requires an even number of zeros at z = 1. So, there must be a second zero at z = 1 to give six zeros. But this results in a sequence of odd length. For a type 2 sequence, the length is even. This requires a zero at z = −1. Thus, we have a total of seven zeros.
(c) Find the transfer function and impulse response of a causal type 3 linear-phase filter, assuming the smallest length and smallest delay, if it is known that there is a zero at z = j and two zeros at z = 1.
The zero at z = j must be paired with its conjugate (and reciprocal) z = −j.
A type 3 sequence requires an odd number of zeros at z = 1 and z = −1. So, the minimum number of zeros required is one zero at z = −1 and three zeros at z = 1 (with two already present).
The transfer function has the form

H(z) = (z + j)(z − j)(z + 1)(z − 1)³ = z⁶ − 2z⁵ + z⁴ − z² + 2z − 1

The transfer function of the causal filter with the minimum delay is

H_C(z) = 1 − 2z⁻¹ + z⁻² − z⁻⁴ + 2z⁻⁵ − z⁻⁶        h_C[n] = {1, −2, 1, 0, −1, 2, −1}
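A one-line check of the expansion (illustrative): multiplying out the six zeros with numpy should reproduce the coefficients {1, −2, 1, 0, −1, 2, −1}.

```python
import numpy as np

# Sketch of Example 6.5(c): multiply out the factored zeros
# (z + j)(z - j)(z + 1)(z - 1)^3 and confirm the coefficients of
# H(z) = z^6 - 2z^5 + z^4 - z^2 + 2z - 1.
coeffs = np.poly([1j, -1j, -1.0, 1.0, 1.0, 1.0]).real
print(coeffs)   # [ 1. -2.  1.  0. -1.  2. -1.]
```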
6.2.3 Averaging Filters
As an example of a linear-phase filter, we consider a causal N-point averaging filter whose system equation and impulse response are given by

y[n] = (1/N)(x[n] + x[n − 1] + ··· + x[n − (N − 1)])        h[n] = (1/N){1, 1, . . . , 1, 1}  (N ones)        (6.7)

Its transfer function then becomes

H(F) = Σ (k = 0 to N − 1) h[k]e^{−j2πkF} = (1/N) Σ (k = 0 to N − 1) (e^{−j2πF})ᵏ = (1/N) (1 − e^{−j2πNF})/(1 − e^{−j2πF}) = (1/N) (e^{−jπNF}/e^{−jπF}) (e^{jπNF} − e^{−jπNF})/(e^{jπF} − e^{−jπF}) = (1/N) e^{−jπ(N−1)F} [2j sin(NπF)]/[2j sin(πF)]

This simplifies to

H(F) = e^{−jπ(N−1)F} sin(NπF)/[N sin(πF)] = e^{−jπ(N−1)F} sinc(NF)/sinc(F) = e^{−jπ(N−1)F} A(F)        (6.8)
The term A(F) represents the amplitude, and the term e^{−jπ(N−1)F} describes a phase that varies linearly with frequency. The value of H(F) equals unity at F = 0 for any length N. Also, H(F) = 0 when F is a multiple of 1/N, and there are a total of N − 1 zeros in the principal range −0.5 < F ≤ 0.5. At F = 0.5, the value of H(F) equals zero if N is even. For odd N, however, the value of the amplitude A(F) at F = 0.5 equals (−1)^{(N−1)/2}/N (i.e., +1/N if (N − 1)/2 is even or −1/N if (N − 1)/2 is odd). As |F| increases from 0 to 0.5, the value of |sin(πF)| increases from 0 to 1. Consequently, the magnitude spectrum shows a peak of unity at F = 0 and decreasing sidelobes of width 1/N on either side of the origin. If we look at the amplitude term A(F) (not its absolute value), we find that it is even symmetric about F = 0.5 if N is odd but odd symmetric about F = 0.5 if N is even.
For an even symmetric sequence w_d[n] of length M = 2N + 1, centered at the origin, whose coefficients are all unity, we have

w_d[n] = {1, 1, . . . , 1 | 1 | 1, . . . , 1, 1}, with N ones on either side of the unit sample at the origin        (6.9)

Its frequency response may be written as

W_D(F) = sin(MπF)/sin(πF) = M sinc(MF)/sinc(F),        M = 2N + 1        (6.10)

The quantity W_D(F) is called the Dirichlet kernel, and its properties will be described shortly.
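The closed form of Eq. (6.8) is easy to confirm directly from h[n]. A sketch for N = 5 (illustrative): the dc gain is unity, the response is zero at multiples of F = 1/N, and |H(0.5)| = 1/N for odd N.

```python
import numpy as np

# Sketch: DTFT of the N-point averager h[n] = {1/N, ..., 1/N} for N = 5.
N = 5
n = np.arange(N)

def H(F):
    return np.sum(np.exp(-2j * np.pi * F * n)) / N

print(abs(H(0.0)))                      # 1.0: unit dc gain for any N
print(abs(H(1 / N)), abs(H(2 / N)))     # ~0: zeros at multiples of 1/N
print(abs(H(0.5)))                      # 1/N = 0.2 for odd N
```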
6.2.4 Zeros of Averaging Filters
Consider a causal N-point averaging filter whose system equation and impulse response are given by

y[n] = (1/N)(x[n] + x[n − 1] + ··· + x[n − (N − 1)])        h[n] = (1/N){1, 1, . . . , 1, 1}  (N ones)        (6.11)

Its transfer function H(z) then becomes

H(z) = Σ (k = 0 to N − 1) h[k]z⁻ᵏ = (1/N) Σ (k = 0 to N − 1) (z⁻¹)ᵏ = (1/N) (1 − z⁻ᴺ)/(1 − z⁻¹) = (1/N) (zᴺ − 1)/[z^{N−1}(z − 1)]

The N roots of zᴺ − 1 = 0 lie on the unit circle with an angular spacing of 2π/N radians, starting at Ω = 0 (corresponding to z = 1). However, the zero at z = 1 is cancelled by the pole at z = 1. So, we have N − 1 zeros on the unit circle and N − 1 poles at the origin (z = 0). The zeros occur in conjugate pairs, and if the length N is even, there is also a zero at Ω = π (corresponding to z = −1).
EXAMPLE 6.6 (Frequency Response and Filter Characteristics)
(a) Identify the pole and zero locations of a causal 10-point averaging filter. What are the dc gain and high-frequency gain of this filter? At what frequencies does the gain go to zero?
The impulse response and transfer function of a 10-point averaging filter are

h[n] = {0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1}        H(z) = 0.1(z¹⁰ − 1)/[z⁹(z − 1)]

There are nine poles at the origin z = 0. There are nine zeros with an angular spacing of 2π/10 rad, located on the unit circle at z = e^{jkπ/5}, k = 1, 2, . . . , 9. The dc gain of the filter is |H(0)| = |Σ h[n]| = 1. The high-frequency gain is |H(F)|F=0.5 = |Σ (−1)ⁿh[n]| = 0. Over the principal range, the gain is zero at the frequencies F_k = ±0.1k, k = 1, 2, . . . , 5, and the magnitude response shows a central lobe with four successively decaying sidelobes on either side.
(b) Consider the 3-point averaging filter, y[n] = (1/3){x[n − 1] + x[n] + x[n + 1]}. The filter replaces each input value x[n] by an average of itself and its two neighbors. Its impulse response h₁[n] is simply h₁[n] = {1/3, 1/3, 1/3}. The frequency response is given by

H₁(F) = Σ (n = −1 to 1) h[n]e^{−j2πFn} = (1/3)[e^{j2πF} + 1 + e^{−j2πF}] = (1/3)[1 + 2 cos(2πF)]

The magnitude |H₁(F)| decreases until F = 1/3 (where H₁(F) = 0) and then increases to 1/3 at F = 1/2, as shown in Figure E6.6B. This filter thus does a poor job of smoothing past F = 1/3.
Figure E6.6B Frequency response of the filters for Example 6.6(b and c): |H₁(F)| drops to zero at F = 1/3 and rises back to 1/3 at F = 0.5, while |H₂(F)| decays monotonically to zero at F = 0.5
(c) Consider the tapered 3-point moving-average filter h₂[n] = {1/4, 1/2, 1/4}. Its frequency response is given by

H₂(F) = Σ (n = −1 to 1) h[n]e^{−j2πFn} = (1/4)e^{j2πF} + 1/2 + (1/4)e^{−j2πF} = 1/2 + (1/2)cos(2πF)

Figure E6.6B shows that |H₂(F)| decreases monotonically to zero at F = 0.5 and shows a much better smoothing performance. This lowpass filter actually describes a 3-point von Hann (or Hanning) smoothing window.
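A short numerical comparison of parts (b) and (c) (illustrative):

```python
import numpy as np

# Sketch: the plain 3-point averager h1 bounces back up after its null at
# F = 1/3, while the tapered (von Hann) filter h2 decays to zero at F = 0.5.
h1 = np.array([1/3, 1/3, 1/3])
h2 = np.array([1/4, 1/2, 1/4])
n = np.arange(-1, 2)        # both filters are centered at n = 0

def mag(h, F):
    return abs(np.sum(h * np.exp(-2j * np.pi * F * n)))

print(mag(h1, 1/3))   # ~0 : null of the plain averager
print(mag(h1, 0.5))   # 1/3: the plain averager rises back up
print(mag(h2, 0.5))   # ~0 : the tapered filter keeps decaying
```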
(d) Consider the differencing filter described by y[n] = x[n] − x[n − 1]. Its impulse response may be written as h[n] = δ[n] − δ[n − 1] = {1, −1}. This is actually odd symmetric about its midpoint (n = 0.5). Its frequency response (in the Ω-form) is given by

H(Ω) = 1 − e^{−jΩ} = e^{−jΩ/2}(e^{jΩ/2} − e^{−jΩ/2}) = 2j sin(0.5Ω)e^{−jΩ/2}

Its phase is θ(Ω) = π/2 − Ω/2 and shows a linear variation with Ω. Its amplitude A(Ω) = 2 sin(Ω/2) increases from zero at Ω = 0 to two at Ω = π. In other words, the difference operator enhances high frequencies and acts as a highpass filter.
6.2.5 FIR Comb Filters
A comb filter has a magnitude response that looks much like the rounded teeth of a comb. One such comb filter is described by the transfer function

H(z) = 1 − z⁻ᴺ        h[n] = {1, 0, 0, . . . , 0, 0 (N − 1 zeros), −1}        (6.12)

This corresponds to the system difference equation y[n] = x[n] − x[n − N], and represents a linear-phase FIR filter whose impulse response h[n] (which is odd symmetric about its midpoint) has N + 1 samples with h[0] = 1, h[N] = −1, and all other coefficients zero. There is a pole of multiplicity N at the origin, and the zeros lie on the unit circle with locations specified by

zᴺ = e^{jNΩ} = 1 = e^{j2πk}        Ω_k = 2πk/N,    k = 0, 1, . . . , N − 1        (6.13)

The pole-zero pattern and magnitude spectrum of this filter are shown for two values of N in Figure 6.6. The zeros of H(z) are uniformly spaced 2π/N radians apart around the unit circle, starting at Ω = 0. For even N, there is also a zero at Ω = π. Being an FIR filter, it is always stable for any N. Its frequency response is given by

H(F) = 1 − e^{−j2πFN}        (6.14)

Note that H(0) always equals 0, but H(0.5) = 0 for even N and H(0.5) = 2 for odd N. The frequency response H(F) looks like a comb with N rounded teeth over its principal period −0.5 ≤ F ≤ 0.5.
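A sketch confirming these values from Eq. (6.14) for N = 4 (illustrative):

```python
import numpy as np

# Sketch: H(F) = 1 - e^{-j 2 pi F N} for the comb filter 1 - z^-N, N = 4.
N = 4

def H(F):
    return 1 - np.exp(-2j * np.pi * F * N)

print(abs(H(0.0)))      # 0: zero at dc for any N
print(abs(H(0.25)))     # 0: zero at F = k/N with k = 1
print(abs(H(0.5)))      # 0 for even N
print(abs(H(0.125)))    # 2: midway between zeros, the teeth peak at 2
```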
A more general form of this comb filter is described by

H(z) = 1 − εz⁻ᴺ        h[n] = {1, 0, 0, . . . , 0, 0 (N − 1 zeros), −ε}        (6.15)
Figure 6.6 Pole-zero plot and frequency response of the comb filter H(z) = 1 − z⁻ᴺ for N = 5 (five poles) and N = 4 (four poles); the magnitude reaches a peak value of 2 between the nulls
It has N poles at the origin, and its zeros are uniformly spaced 2π/N radians apart around a circle of radius R = ε^{1/N}, starting at Ω = 0. Note that this filter is no longer a linear-phase filter because its impulse response is not symmetric about its midpoint. Its frequency response H(F) = 1 − εe^{−j2πFN} suggests that H(0) = 1 − ε for any N, and H(0.5) = 1 − ε for even N and H(0.5) = 1 + ε for odd N. Thus, its magnitude varies between 1 − ε and 1 + ε, as illustrated in Figure 6.7.
Figure 6.7 Frequency response of the comb filter H(z) = 1 − εz⁻ᴺ for N = 4 and N = 5; the magnitude varies between 1 − ε and 1 + ε
Another FIR comb filter is described by the transfer function

H(z) = 1 + z⁻ᴺ        h[n] = {1, 0, 0, . . . , 0, 0 (N − 1 zeros), 1}        (6.16)

This corresponds to the system difference equation y[n] = x[n] + x[n − N], and represents a linear-phase FIR filter whose impulse response h[n] (which is even symmetric about its midpoint) has N + 1 samples with h[0] = h[N] = 1, and all other coefficients zero. There is a pole of multiplicity N at the origin, and the zero locations are specified by

zᴺ = e^{jNΩ} = −1 = e^{j(2k+1)π}        Ω_k = (2k + 1)π/N,    k = 0, 1, . . . , N − 1        (6.17)

The pole-zero pattern and magnitude spectrum of this filter are shown for two values of N in Figure 6.8. The zeros of H(z) are uniformly spaced 2π/N radians apart around the unit circle, starting at Ω = π/N. For odd N, there is also a zero at Ω = π. Its frequency response is given by

H(F) = 1 + e^{−j2πFN}        (6.18)
Figure 6.8 Pole-zero plot and frequency response of the comb filter H(z) = 1 + z⁻ᴺ for N = 5 (five poles) and N = 4 (four poles)
Note that H(0) = 2 for any N, but H(0.5) = 2 for even N and H(0.5) = 0 for odd N.
A more general form of this comb filter is described by

H(z) = 1 + εz⁻ᴺ        h[n] = {1, 0, 0, . . . , 0, 0 (N − 1 zeros), ε}        (6.19)

It has N poles at the origin, and its zeros are uniformly spaced 2π/N radians apart around a circle of radius R = ε^{1/N}, starting at Ω = π/N. Note that this filter is no longer a linear-phase filter because its impulse response is not symmetric about its midpoint. Its frequency response H(F) = 1 + εe^{−j2πFN} suggests that H(0) = 1 + ε for any N, and H(0.5) = 1 + ε for even N and H(0.5) = 1 − ε for odd N. Thus, its magnitude varies between 1 − ε and 1 + ε, as illustrated in Figure 6.9.
Figure 6.9 Frequency response of the comb filter H(z) = 1 + εz⁻ᴺ for N = 4 and N = 5; the magnitude varies between 1 − ε and 1 + ε
6.3 IIR Filters
Consider the first-order filter described by

H_LP(F) = 1/(1 − αe^{−j2πF})        H_LP(0) = 1/(1 − α),    H_LP(F)|F=0.5 = 1/(1 + α),    0 < α < 1        (6.20)

The filter H_LP(F) describes a lowpass filter because its dc gain (at F = 0) exceeds the high-frequency gain (at F = 0.5) and the gain shows a decrease with increasing frequency (from F = 0 to F = 0.5). The filter gain and impulse response are shown in Figure 6.10.
Figure 6.10 Spectrum and impulse response of first-order lowpass and highpass filters: H_LP(F) peaks at F = 0 with h_LP[n] = αⁿu[n], while H_HP(F) peaks at F = ±0.5 with h_HP[n] = (−α)ⁿu[n]
We can find the half-power frequency of the lowpass filter from its magnitude squared function:

|H_LP(F)|² = 1/|1 − α cos(2πF) + jα sin(2πF)|² = 1/[1 − 2α cos(2πF) + α²]        (6.21)
At the half-power frequency, we have |H_LP(F)|² = 0.5, and this gives

1/[1 − 2α cos(2πF) + α²] = 0.5        or        F = (1/2π) cos⁻¹[(α² − 1)/(2α)]        (6.22)

The phase θ(F) and group delay t_g(F) of this filter are given by

θ(F) = −tan⁻¹[α sin(2πF)/(1 − α cos(2πF))]        t_g(F) = −(1/2π) dθ(F)/dF = [α cos(2πF) − α²]/[1 − 2α cos(2πF) + α²]        (6.23)
For low frequencies, the phase is nearly linear and the group delay is nearly constant, and they can be approximated by

θ(F) ≈ −2παF/(1 − α),  |F| ≪ 1        t_g(F) = −(1/2π) dθ(F)/dF ≈ α/(1 − α),  |F| ≪ 1        (6.24)

For filters of higher order, we may obtain the group delay by expressing the filter transfer function as the sum of first-order sections.
Time Constant and Reverberation Time
In the time domain, performance measures for a filter are typically based on features of the impulse response and step response. The impulse response of the first-order lowpass filter is given by

h_LP[n] = αⁿu[n]        (6.25)

The time constant of the lowpass filter describes how fast the impulse response decays to a specified fraction of its initial value. For a specified fraction ε, we obtain the effective time constant τ in samples from α^τ = ε as

τ = ln ε / ln α        (6.26)

This value is rounded up to an integer, if necessary. We obtain the commonly measured 1% or 40-dB time constant if ε = 0.01 (corresponding to an attenuation of 40 dB) and the 0.1% or 60-dB time constant if ε = 0.001. If the sampling interval t_s is known, the time constant in seconds can be computed from τt_s. The 60-dB time constant is also called the reverberation time. For higher-order filters whose impulse response contains several exponential terms, the effective time constant is dictated by the term with the slowest decay (the term that dominates the response).
REVIEW PANEL 6.7
The Time Constant of a Lowpass Filter
For y[n] − αy[n − 1] = x[n], the ε% time constant is τ = ln ε / ln α (samples) or τt_s (seconds).
For the 1% or 40-dB time constant, ε = 0.01.
For the 0.1% or 60-dB time constant (called the reverberation time), ε = 0.001.
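A sketch of the time-constant computation (illustrative; α = 0.9 is an assumed value):

```python
import math

# Sketch: effective time constant (in samples) of h[n] = alpha^n u[n],
# from alpha^tau = eps  =>  tau = ln(eps)/ln(alpha), rounded up.
def time_constant(alpha, eps):
    return math.ceil(math.log(eps) / math.log(alpha))

alpha = 0.9
print(time_constant(alpha, 0.01))    # 40-dB time constant: 44 samples
print(time_constant(alpha, 0.001))   # 60-dB (reverberation time): 66 samples
```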
6.3.1 First-Order Highpass Filters
Consider the first-order filter described by

H_HP(F) = 1/(1 + αe^{−j2πF})        H_HP(0) = 1/(1 + α),    H_HP(F)|F=0.5 = 1/(1 − α),    0 < α < 1        (6.27)

The filter H_HP(F) describes a highpass filter because its high-frequency gain (at F = 0.5) exceeds the dc gain (at F = 0) and the gain shows an increase with increasing frequency (from F = 0 to F = 0.5). The filter gain and impulse response are shown in Figure 6.10. The impulse response of this filter is given by

h_HP[n] = (−α)ⁿu[n] = (−1)ⁿαⁿu[n]        (6.28)

It is the alternating sign changes in the samples of h_HP[n] that cause rapid time variations and lead to its highpass behavior. Its spectrum H_HP(F) is related to H_LP(F) by H_HP(F) = H_LP(F − 0.5). This means that a lowpass cutoff frequency F₀ corresponds to the frequency 0.5 − F₀ of the highpass filter.
6.3.2 Pole-Zero Placement and Filter Design
The qualitative eect of poles and zeros on the magnitude response can be used to advantage in understanding
the pole-zero patterns of real lters. The basic strategy for pole and zero placement is based on mapping
the passband and stopband frequencies on the unit circle and then positioning the poles and zeros based on
the following reasoning:
1. Conjugate symmetry: All complex poles and zeros must be paired with their complex conjugates.
2. Causality: To ensure a causal system, the total number of zeros must be less than or equal to the
total number of poles.
3. Origin: Poles or zeros at the origin do not aect the magnitude response.
4. Stability: For a stable system, the poles must be placed inside (not just on) the unit circle. The pole
radius is proportional to the gain and inversely proportional to the bandwidth. Poles closer to the unit
circle produce a large gain over a narrower bandwidth. Clearly, the passband should contain poles near
the unit circle for large passband gains.
A rule of thumb: For narrow-band lters with bandwidth 0.2 centered about
0
, we place
conjugate poles at z = Rexp(j
0
), where R 1 0.5 is the pole radius.
5. Minimum phase: Zeros can be placed anywhere in the z-plane. To ensure minimum phase, the zeros
must lie within the unit circle. Zeros on the unit circle result in a null in the response. Thus, the
stopband should contain zeros on or near the unit circle. A good starting choice is to place a zero (on
the unit circle) in the middle of the stopband or two zeros at the edges of the stopband.
6. Transition band: To achieve a steep transition from the stopband to the passband, a good choice is
to pair each stopband zero with a pole along (or near) the same radial line and close to the unit circle.
7. Pole-zero interaction: Poles and zeros interact to produce a composite response that may not match
qualitative predictions. Their placement may have to be changed, or other poles and zeros may have
© Ashok Ambardar, September 1, 2003
232 Chapter 6 Filter Concepts
to be added, to tweak the response. Poles closer to the unit circle, or farther away from each other,
produce small interaction. Poles closer to each other, or farther from the unit circle, produce more
interaction. The closer we wish to approximate a given response, the more poles and zeros we require,
and the higher is the filter order.
Bandstop and bandpass filters with real coefficients must have an order of at least two. Filter design
using pole-zero placement involves trial and error. Formal design methods are presented in the next chapter.
It is also possible to identify traditional filter types from their pole-zero patterns. The idea is to use the
graphical approach to qualitatively observe how the magnitude changes from the lowest frequency F = 0 (or
Ω = 0) to the highest frequency F = 0.5 (or Ω = π) and use this to establish the filter type.
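The graphical approach amounts to evaluating the gain at a point on the unit circle as a ratio of products of distances to the zeros and poles. A minimal sketch of this idea (the pole and zero locations below are our own illustrative choices, not from the text):

```python
import numpy as np

def gain_from_pz(zeros, poles, F, K=1.0):
    """|H(F)| from pole-zero geometry: K * prod(dist to zeros) / prod(dist to poles),
    with distances measured from z = exp(j*2*pi*F) on the unit circle."""
    z = np.exp(2j * np.pi * F)
    num = np.prod([abs(z - zk) for zk in zeros]) if zeros else 1.0
    den = np.prod([abs(z - pk) for pk in poles]) if poles else 1.0
    return K * num / den

# A zero at z = -1 and a pole at z = 0.9 suggest a lowpass filter:
zeros, poles = [-1.0], [0.9]
dc_gain = gain_from_pz(zeros, poles, 0.0)   # distances measured from z = 1
hf_gain = gain_from_pz(zeros, poles, 0.5)   # distances measured from z = -1
print(dc_gain, hf_gain)
```

The pole near z = 1 makes the dc distance ratio large (gain 2/0.1 = 20 here), while the zero at z = −1 forces the high-frequency gain to zero, identifying a lowpass response.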
6.3.3 Second-Order IIR Filters
The general form for the transfer function of a second-order filter is

    H(z) = K (z² + A_1 z + A_2)/(z² + B_1 z + B_2) = K (z − R_z e^{jθ_z})(z − R_z e^{−jθ_z}) / [(z − R_p e^{jθ_p})(z − R_p e^{−jθ_p})]
This filter can describe a wide variety of responses. One interesting situation occurs when the poles and
zeros lie along the same angular orientation with θ_p = θ_z = Ω_0. The gain of such a filter shows a peak or
a dip around Ω = Ω_0, depending on the values of R_p and R_z. If we assume a causal, minimum-phase filter
with R_p < 1 and R_z < 1, we observe the following:

(a) If R_z > R_p, the filter shows a dip in the gain around Ω = Ω_0.
(b) If R_z < R_p, the filter gain peaks around Ω = Ω_0.
(c) If R_z = 0, the filter gain peaks around Ω = Ω_0.
(d) If R_p = 0, we have an FIR filter with a dip or notch around Ω = Ω_0.

If we permit zeros to lie outside the unit circle (R_z ≥ 1 but R_p < 1), we observe the following:

(a) If R_z = 1, we have a notch filter with a gain of zero at Ω = Ω_0.
(b) If 1 < R_z < 1/R_p, there is a dip in the gain around Ω = Ω_0.
(c) If R_z = 1/R_p, we have an allpass filter with constant gain.
(d) If R_z > 1/R_p, the filter gain peaks around Ω = Ω_0.
EXAMPLE 6.7 (Filters and Pole-Zero Plots)
Design a bandpass filter with center frequency f_0 = 100 Hz, passband ∆f = 10 Hz, stopband edges at 50 Hz
and 150 Hz, and sampling frequency 400 Hz.

We find Ω_0 = π/2, ∆Ω = π/20, Ω_s = [π/4, 3π/4], and R = 1 − 0.5∆Ω = 0.9215.
Passband: Place poles at p_{1,2} = Re^{±jΩ_0} = 0.9215e^{±jπ/2} = ±j0.9215.
Stopband: Place conjugate zeros at z_{1,2} = e^{±jπ/4} and z_{3,4} = e^{±j3π/4}.
We then obtain the transfer function as

    H(z) = (z − e^{jπ/4})(z − e^{−jπ/4})(z − e^{j3π/4})(z − e^{−j3π/4}) / [(z − j0.9215)(z + j0.9215)] = (z⁴ + 1)/(z² + 0.8492)
Note that this filter is noncausal. To obtain a causal filter H_1(z), we could, for example, use double
poles at each pole location to get

    H_1(z) = (z⁴ + 1)/(z² + 0.8492)² = (z⁴ + 1)/(z⁴ + 1.6982z² + 0.7210)

The pole-zero pattern and gain of the modified filter H_1(z) are shown in Figure E6.7.
Figure E6.7 Pole-zero plot (with double poles) and magnitude spectrum of the bandpass filter for Example 6.7
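As a numerical check of this design (our own verification, not part of the original example), we can evaluate H_1(z) on the unit circle and confirm the stopband nulls at F = 0.125 and F = 0.375 and the large passband gain near F = 0.25:

```python
import numpy as np

num = [1, 0, 0, 0, 1]               # z^4 + 1
den = [1, 0, 1.6982, 0, 0.7210]     # (z^2 + 0.8492)^2

def mag(F):
    """Gain |H1(F)| evaluated at z = exp(j*2*pi*F) on the unit circle."""
    z = np.exp(2j * np.pi * F)
    return abs(np.polyval(num, z) / np.polyval(den, z))

print(mag(0.125), mag(0.375))   # nulls: the stopband zeros lie on the unit circle
print(mag(0.25))                # large gain near the center frequency (F = 0.25)
```

The double poles close to z = ±j make the denominator very small near F = 0.25, which produces the tall, narrow passband peak seen in the figure.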
(a) Design a notch filter with notch frequency 60 Hz, stopband ∆f = 5 Hz, and sampling frequency 300 Hz.

We compute Ω_0 = 2π/5, ∆Ω = π/30, and R = 1 − 0.5∆Ω = 0.9476.
Stopband: We place zeros at the notch frequency to get z_{1,2} = e^{±jΩ_0} = e^{±j2π/5}.
Passband: We place poles along the orientation of the zeros at p_{1,2} = Re^{±jΩ_0} = 0.9476e^{±j2π/5}.
We then obtain H(z) as

    H(z) = (z − e^{j2π/5})(z − e^{−j2π/5}) / [(z − 0.9476e^{j2π/5})(z − 0.9476e^{−j2π/5})] = (z² − 0.618z + 1)/(z² − 0.5857z + 0.898)

The pole-zero pattern and magnitude spectrum of this filter are shown in Figure E6.7A.
Figure E6.7A Pole-zero plot and magnitude spectrum of the bandstop filter for Example 6.7(a)
6.3.4 Digital Resonators
A digital resonator is essentially a narrow-band bandpass filter. One way to realize a second-order resonator
with a peak at Ω_0 is to place a pair of poles, with angular orientation ±Ω_0 (at z = Re^{jΩ_0} and z = Re^{−jΩ_0}),
and a pair of zeros at z = 0, as shown in Figure 6.11. To ensure stability, we choose the pole radius R to be
less than (but close to) unity.
Figure 6.11 A second-order digital resonator: pole-zero plot, magnitude spectrum, and filter realization
The transfer function of such a digital resonator is

    H(z) = z² / [(z − Re^{jΩ_0})(z − Re^{−jΩ_0})] = z² / (z² − 2zR cos Ω_0 + R²)        (6.29)
Its magnitude spectrum |H(Ω)| and realization are also shown in Figure 6.11. The magnitude squared
function |H(Ω)|² is given by

    |H(Ω)|² = | e^{j2Ω} / [(e^{jΩ} − Re^{jΩ_0})(e^{jΩ} − Re^{−jΩ_0})] |²        (6.30)

This simplifies to

    |H(Ω)|² = 1 / {[1 − 2R cos(Ω − Ω_0) + R²][1 − 2R cos(Ω + Ω_0) + R²]}        (6.31)
Its peak value A occurs at (or very close to) the resonant frequency Ω_0, and is given by

    A² = |H(Ω_0)|² = 1 / [(1 − R)²(1 − 2R cos 2Ω_0 + R²)]        (6.32)
The half-power bandwidth ∆Ω is found by locating the frequencies at which |H(Ω)|² = 0.5|H(Ω_0)|² = 0.5A²,
to give

    1 / {[1 − 2R cos(Ω − Ω_0) + R²][1 − 2R cos(Ω + Ω_0) + R²]} = 0.5 / [(1 − R)²(1 − 2R cos 2Ω_0 + R²)]        (6.33)
Since the half-power frequencies are very close to Ω_0, we have Ω + Ω_0 ≈ 2Ω_0 and Ω − Ω_0 ≈ ±0.5∆Ω, and the
above relation simplifies to

    1 − 2R cos(0.5∆Ω) + R² = 2(1 − R)²        (6.34)

This relation between the pole radius R and bandwidth ∆Ω can be simplified when the poles are very close
to the unit circle (with R > 0.9 or so). The values of R and the peak magnitude A are then reasonably well
approximated by

    R ≈ 1 − 0.5∆Ω        A ≈ 1 / [(1 − R²) sin Ω_0]        (6.35)
The peak gain can be normalized to unity by dividing the transfer function by A. The impulse response of
this filter can be found by partial fraction expansion of H(z)/z, which gives

    H(z)/z = z / [(z − Re^{jΩ_0})(z − Re^{−jΩ_0})] = K/(z − Re^{jΩ_0}) + K*/(z − Re^{−jΩ_0})        (6.36)
where

    K = z/(z − Re^{−jΩ_0}) |_{z = Re^{jΩ_0}} = Re^{jΩ_0} / (Re^{jΩ_0} − Re^{−jΩ_0}) = Re^{jΩ_0} / (2jR sin Ω_0) = e^{j(Ω_0 − π/2)} / (2 sin Ω_0)        (6.37)
Then, from lookup tables, we obtain

    h[n] = (1/sin Ω_0) Rⁿ cos(nΩ_0 + Ω_0 − π/2) u[n] = (1/sin Ω_0) Rⁿ sin[(n + 1)Ω_0] u[n]        (6.38)
To null the response at low and high frequencies (Ω = 0 and Ω = π), the two zeros in H(z) may be relocated
from the origin to z = 1 and z = −1, and this leads to the modified transfer function H_1(z):

    H_1(z) = (z² − 1) / (z² − 2zR cos Ω_0 + R²)        (6.39)
EXAMPLE 6.8 (Digital Resonator Design)
We design a digital resonator with a peak gain of unity at 50 Hz, and a 3-dB bandwidth of 6 Hz, assuming
a sampling frequency of 300 Hz.

The digital resonant frequency is Ω_0 = 2π(50)/300 = π/3. The 3-dB bandwidth is ∆Ω = 2π(6)/300 = 0.04π. We
compute the pole radius as R = 1 − 0.5∆Ω = 1 − 0.02π = 0.9372. The transfer function of the digital
resonator is thus

    H(z) = Gz² / [(z − Re^{jΩ_0})(z − Re^{−jΩ_0})] = Gz² / (z² − 2zR cos Ω_0 + R²) = Gz² / (z² − 0.9372z + 0.8783)

For a peak gain of unity, we choose G = 1/A = (1 − R²) sin Ω_0 = 0.1054. Thus,

    H(z) = 0.1054z² / (z² − 0.9372z + 0.8783)

The magnitude spectrum and passband detail of this filter are shown in Figure E6.8.
Figure E6.8 Frequency response of the digital resonator for Example 6.8: (a) magnitude spectrum with peak at 50 Hz; (b) passband detail
The passband detail reveals that the half-power frequencies are located at 46.74 Hz and 52.96 Hz, a close
match to the bandwidth requirement (of 6 Hz).
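This design is easy to verify numerically (our own check, not part of the example): the gain should be unity at 50 Hz and close to 1/√2 ≈ 0.707 at the half-power frequencies near 46.74 Hz and 52.96 Hz.

```python
import numpy as np

S = 300.0                          # sampling frequency (Hz)
num = [0.1054, 0, 0]               # 0.1054 z^2
den = [1, -0.9372, 0.8783]         # z^2 - 0.9372 z + 0.8783

def gain(f_hz):
    """Gain at analog frequency f_hz, evaluated on the unit circle."""
    z = np.exp(2j * np.pi * f_hz / S)
    return abs(np.polyval(num, z) / np.polyval(den, z))

print(gain(50.0))                  # peak gain, close to unity
print(gain(46.74), gain(52.96))    # half-power points, close to 0.707
```

The small discrepancies (the peak gain comes out as about 1.001) reflect the approximations in (6.35), which assume R is very close to unity.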
6.3.5 Periodic Notch Filters
A notch filter with notches at periodic intervals may be constructed by placing zeros on the unit circle at
the notch frequencies, and poles in close proximity along the same angular orientations. One such form is

    H(z) = N(z)/N(z/R) = (1 − z^{−N}) / (1 − (z/R)^{−N})        (6.40)

Its zeros are uniformly spaced 2π/N radians apart around the unit circle, starting at Ω = 0, and its poles
are along the same angular orientations but on a circle of radius R. For stability, we require R < 1. Of
course, the closer R is to unity, the sharper are the notches and the more constant is the response at the
other frequencies. Periodic notch filters are often used to remove unwanted components at the power-line
frequency and its harmonics.
EXAMPLE 6.9 (Periodic Notch Filter Design)
We design a notch filter to filter out 60 Hz and its harmonics from a signal sampled at S = 300 Hz.

The digital frequency corresponding to 60 Hz is F = 60/300 = 0.2, or Ω = 0.4π. There are thus
N = 2π/0.4π = 5 notches in the principal range (around the unit circle), and the transfer
function of the notch filter is

    H(z) = N(z)/N(z/R) = (1 − z^{−5}) / (1 − (z/R)^{−5}) = (z⁵ − 1) / (z⁵ − R⁵)

Figure E6.9(a) shows the response of this filter for R = 0.9 and R = 0.99. The choice of R, the pole radius,
is arbitrary (as long as R < 1), but should be close to unity for sharp notches.
Figure E6.9 Frequency response of the notch filters for Example 6.9: (a) periodic notch filter (N = 5) for R = 0.9 and R = 0.99; (b) notch filter that also passes dc, for R = 0.9 and R = 0.99
Comment: This notch filter also removes the dc component. If we want to preserve the dc component, we
must extract the zero at z = 1 (corresponding to F = 0) from N(z) (by long division), to give

    N_1(z) = (1 − z^{−5}) / (1 − z^{−1}) = 1 + z^{−1} + z^{−2} + z^{−3} + z^{−4}

and use N_1(z) to compute the new transfer function H_1(z) = N_1(z)/N_1(z/R) as

    H_1(z) = (1 + z^{−1} + z^{−2} + z^{−3} + z^{−4}) / [1 + (z/R)^{−1} + (z/R)^{−2} + (z/R)^{−3} + (z/R)^{−4}] = (z⁴ + z³ + z² + z + 1) / (z⁴ + Rz³ + R²z² + R³z + R⁴)

Figure E6.9(b) compares the response of this filter for R = 0.9 and R = 0.99, and reveals that the dc
component is indeed preserved by this filter.
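A numerical check of the dc-preserving filter (our own sketch): H_1(z) should vanish at the harmonic frequencies F = 0.2 and F = 0.4, while the dc gain stays nonzero.

```python
import numpy as np

R = 0.99
num = [1, 1, 1, 1, 1]                # z^4 + z^3 + z^2 + z + 1
den = [1, R, R**2, R**3, R**4]       # z^4 + R z^3 + R^2 z^2 + R^3 z + R^4

def gain(F):
    z = np.exp(2j * np.pi * F)
    return abs(np.polyval(num, z) / np.polyval(den, z))

print(gain(0.0))               # dc is preserved (nonzero gain, close to unity)
print(gain(0.2), gain(0.4))    # notches at 60 Hz and 120 Hz for S = 300 Hz
```

The numerator is zero at F = 0.2 and 0.4 because its roots are the four fifth roots of unity other than z = 1, which is exactly the factor removed by the long division above.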
6.4 Allpass Filters
Consider the first-order filter described by

    H_A(F) = (A + Be^{−j2πF}) / (B + Ae^{−j2πF})    |H_A(0)| = |A + B| / |B + A| = 1    |H_A(0.5)| = |A − B| / |B − A| = 1        (6.41)
Its magnitude at F = 0 and F = 0.5 is equal to unity. In fact, its magnitude is unity for all frequencies, and
this describes an allpass filter. The fact that its magnitude is constant for all frequencies can be explained
by a geometric argument, as illustrated in Figure 6.12.
The numerator and denominator may be regarded as the sum of two vectors of length A and B that
subtend the same angle θ = 2πF. The two vector sums thus have equal lengths for any θ. An interesting
characteristic that allows us to readily identify an allpass filter is that its numerator and denominator
coefficients (associated with powers of e^{−j2πF} or e^{−jΩ}) appear in reversed order. Allpass filters are often used
to shape the delay characteristics of digital filters.
REVIEW PANEL 6.8
Identifying an Allpass Filter by Checking Coefficients
From transfer function: The coefficients of numerator and denominator appear in reversed order.
From difference equation: The coefficients of RHS and LHS appear in reversed order.
The magnitude of H(F) is thus unity.
Figure 6.12 The magnitude spectrum of an allpass filter is constant: with θ = 2πF, the numerator X = A∠0 + B∠θ and denominator Y = B∠0 + A∠θ of H(F) = X/Y are vectors of equal length for any value of F (or Ω)
6.4.1 Transfer Function of Allpass Filters
An allpass filter is characterized by a magnitude response that is constant for all frequencies. For an allpass
filter with unit gain, |H(F)| = 1. Its transfer function H(z) also satisfies the relationship

    H(z)H(1/z) = 1        (6.42)

This implies that each pole of an allpass filter is paired with a conjugate reciprocal zero. As a result, allpass
filters cannot be minimum-phase. Note that the cascade of allpass filters is also an allpass filter. An allpass
filter of order N has a numerator and denominator of equal order N, with coefficients in reversed order:

    H_AP(z) = N(z)/D(z) = (C_N + C_{N−1}z^{−1} + ··· + C_1 z^{−(N−1)} + z^{−N}) / (1 + C_1 z^{−1} + C_2 z^{−2} + ··· + C_N z^{−N})        (6.43)

Note that D(z) = z^{−N}N(1/z), and as a result, if the roots of N(z) (the zeros) are at r_k, the roots of D(z)
(the poles) are at 1/r_k, the reciprocal locations.
REVIEW PANEL 6.9
How to Identify an Allpass Filter
From H(z) = N(z)/D(z): The coefficients of N(z) and D(z) appear in reversed order.
From pole-zero plot: Each pole is paired with a conjugate reciprocal zero.
Consider a stable, first-order allpass filter whose transfer function H(z) and frequency response H(F)
are described by

    H(z) = (1 + αz)/(z + α)    H_A(F) = (1 + αe^{j2πF}) / (e^{j2πF} + α),    |α| < 1        (6.44)

If we factor out e^{jπF} from the numerator and denominator of H(F), we obtain the form

    H(F) = (1 + αe^{j2πF}) / (e^{j2πF} + α) = (e^{−jπF} + αe^{jπF}) / (e^{jπF} + αe^{−jπF}),    |α| < 1        (6.45)
The numerator and denominator are complex conjugates. This implies that their magnitudes are equal (an
allpass characteristic) and that the phase of H(F) equals twice the numerator phase. Now, the numerator
may be simplified to

    e^{−jπF} + αe^{jπF} = cos(πF) − j sin(πF) + α cos(πF) + jα sin(πF) = (1 + α)cos(πF) − j(1 − α)sin(πF)
The phase φ(F) of the allpass filter H(F) equals twice the numerator phase. The phase φ(F) and phase
delay t_p(F) may then be written as

    φ(F) = −2 tan^{−1}[ (1 − α)/(1 + α) tan(πF) ]    t_p(F) = −φ(F)/(2πF) = (1/πF) tan^{−1}[ (1 − α)/(1 + α) tan(πF) ]        (6.46)

A low-frequency approximation for the phase delay is given by

    t_p(F) ≈ (1 − α)/(1 + α)    (low-frequency approximation)        (6.47)
The group delay t_g(F) of this allpass filter equals

    t_g(F) = −(1/2π) dφ(F)/dF = (1 − α²) / (1 + 2α cos(2πF) + α²)        (6.48)

At F = 0 and F = 0.5, the group delay is given by

    t_g(0) = (1 − α²)/(1 + 2α + α²) = (1 − α)/(1 + α)    t_g(0.5) = (1 − α²)/(1 − 2α + α²) = (1 + α)/(1 − α)        (6.49)
At low frequencies, cos(2πF) ≈ 1, and the group delay and phase delay are approximately equal. In
particular, for 0 < α < 1, the delay is less than unity, and this allows us to use the allpass filter to generate
fractional delays by appropriate choice of α.
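These delay results are easy to confirm numerically. The sketch below (with our own choice α = 0.4) verifies the unit magnitude and compares a numerically differentiated group delay against the closed form (6.48):

```python
import numpy as np

alpha = 0.4

def H(F):
    """First-order allpass (6.44) evaluated at z = exp(j*2*pi*F)."""
    z = np.exp(2j * np.pi * F)
    return (1 + alpha * z) / (z + alpha)

# Unit magnitude at all frequencies
for F in np.linspace(0, 0.5, 11):
    assert abs(abs(H(F)) - 1.0) < 1e-12

# Group delay t_g(F) = -(1/2pi) d(phase)/dF, by central difference
def tg_numeric(F, dF=1e-6):
    dphi = np.angle(H(F + dF) / H(F - dF))   # phase difference, wrap-safe
    return -dphi / (2 * np.pi * 2 * dF)

def tg_closed(F):
    return (1 - alpha**2) / (1 + 2 * alpha * np.cos(2 * np.pi * F) + alpha**2)

print(tg_closed(0.0))                        # (1 - alpha)/(1 + alpha) = 3/7 here
print(abs(tg_numeric(0.2) - tg_closed(0.2))) # numerical and closed forms agree
```

For α = 0.4 the low-frequency delay is 3/7 of a sample, illustrating how the allpass section realizes a fractional delay.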
6.4.2 Stabilization of Unstable Filters
Allpass filters are often used to stabilize unstable digital filters, while preserving their magnitude response.
Consider the unstable filter

    H_u(z) = z / (1 + αz),    |α| < 1        (6.50)
It has a pole at z = −1/α and is thus unstable. If we cascade H_u(z) with a first-order allpass filter whose
transfer function is H_1(z) = (1 + αz)/(z + α), we obtain

    H(z) = H_u(z)H_1(z) = z / (z + α)        (6.51)

The filter H(z) has a pole at z = −α (inside the unit circle) and is thus stable, while its magnitude response
matches that of H_u(z). More generally, an unstable filter whose P poles lie outside the unit circle may be
written as

    H_u(z) = H_s(z) ∏_{m=1}^{P} z/(1 + α_m z),    |α_m| < 1        (6.52)

where H_s(z) describes the stable portion.
To stabilize H_u(z), we use an allpass filter H_AP(z) that is a cascade of P allpass sections. Its form is

    H_AP(z) = ∏_{m=1}^{P} (1 + α_m z)/(z + α_m),    |α_m| < 1        (6.53)

The stabilized filter H(z) is then described by the cascade H_u(z)H_AP(z) as

    H(z) = H_u(z)H_AP(z) = H_s(z) ∏_{m=1}^{P} z/(z + α_m),    |α_m| < 1        (6.54)

There are two advantages to this method. First, the magnitude response of the original filter is unchanged.
And second, the order of the new filter is the same as the original. The reason for the inequality |α_m| < 1
(rather than |α_m| ≤ 1) is that if H_u(z) has a pole on the unit circle, its conjugate reciprocal will also lie on
the unit circle and no stabilization is possible.
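The sketch below (with our own value α = 0.5) stabilizes H_u(z) = z/(1 + 0.5z), whose pole lies at z = −2, and confirms both that the cascade equals z/(z + 0.5) and that the magnitude response is preserved:

```python
import numpy as np

alpha = 0.5

def H_u(z):
    return z / (1 + alpha * z)            # pole at z = -1/alpha = -2: unstable

def H_ap(z):
    return (1 + alpha * z) / (z + alpha)  # first-order allpass section (6.53)

def H_stab(z):
    return z / (z + alpha)                # pole moved to z = -alpha: stable

for F in np.linspace(0.01, 0.49, 9):
    z = np.exp(2j * np.pi * F)
    # the cascade equals the stabilized filter exactly ...
    assert abs(H_u(z) * H_ap(z) - H_stab(z)) < 1e-12
    # ... and the magnitude response is unchanged
    assert abs(abs(H_u(z)) - abs(H_stab(z))) < 1e-12

print("magnitude preserved")
```

The magnitudes agree because |1 + αe^{jΩ}| and |e^{jΩ} + α| are complex conjugate magnitudes for real α, which is the same coefficient-reversal property that makes the section allpass.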
6.4.3 Minimum-Phase Filters Using Allpass Filters
Even though allpass filters cannot be minimum-phase, they can be used to convert stable nonminimum-phase
filters to stable minimum-phase filters. We describe a nonminimum-phase transfer function by a cascade of a
minimum-phase part H_M(z) (with all poles and zeros inside the unit circle) and a portion with zeros outside
the unit circle, such that

    H_NM(z) = H_M(z) ∏_{m=1}^{P} (z + α_m),    |α_m| > 1        (6.55)
We now seek an unstable allpass filter with

    H_AP(z) = ∏_{m=1}^{P} (1 + α_m z)/(z + α_m),    |α_m| > 1        (6.56)
The cascade of H_NM(z) and H_AP(z) yields a minimum-phase filter with

    H(z) = H_NM(z)H_AP(z) = H_M(z) ∏_{m=1}^{P} (1 + α_m z)        (6.57)

Once again, H(z) has the same order as the original filter.
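A one-zero check of this conversion (our own example): the nonminimum-phase factor (z + 2) has a zero at z = −2; multiplying by the allpass section (1 + 2z)/(z + 2) replaces it by (1 + 2z) = 2(z + 1/2), whose zero lies inside the unit circle, without changing the magnitude.

```python
import numpy as np

a = 2.0    # |a| > 1: the zero of (z + a) lies outside the unit circle

for F in np.linspace(0, 0.5, 11):
    z = np.exp(2j * np.pi * F)
    H_nm = z + a          # nonminimum-phase factor, zero at z = -a
    H_min = 1 + a * z     # = a*(z + 1/a): zero reflected to z = -1/a, inside
    # Equal magnitudes everywhere on the unit circle
    assert abs(abs(H_nm) - abs(H_min)) < 1e-12

print("minimum-phase factor has the same magnitude response")
```

This is the reflection 1/r_k of reciprocal root locations noted after (6.43), applied one zero at a time.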
6.4.4 Concluding Remarks
We have studied a variety of digital filters in previous sections. As mentioned before, the dc gain (at F = 0
or Ω = 0) and the high-frequency gain (at F = 0.5 or Ω = π) serve as useful measures to identify filters.
The coefficients (and form) of the difference equation, the transfer function, and the impulse response reveal
other interesting and useful characteristics such as linear phase, allpass behavior, and the like. The more
complex the filter (expression), the less likely it is that we can discern its nature by using only a few simple
measures.
EXAMPLE 6.10 (Traditional and Non-Traditional Filters)
(a) Let h[n] = {−1, −2, 2, 1}. Identify the filter type and establish whether the impulse response is a
linear-phase sequence.

The sequence h[n] is odd symmetric about its midpoint and is thus a linear-phase sequence.

We find that H(0) = Σ h[n] = 0 and H(0.5) = Σ_{n=0}^{3} (−1)^n h[n] = −1 + 2 + 2 − 1 = 2.

Since the gain is zero at dc and nonzero at high frequencies, this appears to be a highpass filter.
(b) Let h[n] = (−0.8)^n u[n]. Identify the filter type and establish whether the impulse response is a linear-phase sequence.

The sequence h[n] is not linear phase because it shows no symmetry.

We have H(F) = 1 / (1 + 0.8e^{−j2πF}).

We find that H(0) = 1/(1 + 0.8) = 0.556 and H(0.5) = 1/(1 + 0.8e^{−jπ}) = 1/(1 − 0.8) = 5.

Since the magnitude at high frequencies increases, this appears to be a highpass filter.
(c) Consider a system described by y[n] = αy[n − 1] + x[n], 0 < α < 1. This is an example of a reverb
filter whose response equals the input plus a delayed version of the output. Its frequency response may
be found by taking the DTFT of both sides to give Y(F) = αY(F)e^{−j2πF} + X(F). Rearranging this
equation, we obtain

    H(F) = Y(F)/X(F) = 1 / (1 − αe^{−j2πF})

Using Euler's relation, we rewrite this as

    H(F) = 1 / (1 − αe^{−j2πF}) = 1 / [1 − α cos(2πF) + jα sin(2πF)]
Its magnitude and phase are given by

    |H(F)| = 1 / [1 − 2α cos(2πF) + α²]^{1/2}    φ(F) = −tan^{−1}[ α sin(2πF) / (1 − α cos(2πF)) ]

A typical magnitude and phase plot for this system (for 0 < α < 1) is shown in Figure E6.10C. The
impulse response of this system equals h[n] = α^n u[n].

Figure E6.10C Frequency response of the system for Example 6.10(c)
(d) Let h[n] = {5, −4, 3, −2}. Identify the filter type and establish whether the impulse response is a
linear-phase sequence.

The sequence h[n] is not linear phase because it shows no symmetry.

We have H(F) = 5 − 4e^{−j2πF} + 3e^{−j4πF} − 2e^{−j6πF}.

We find H(0) = Σ h[n] = 2 and H(0.5) = Σ (−1)^n h[n] = 5 + 4 + 3 + 2 = 14.

Since the magnitude at high frequencies increases, this appears to be a highpass filter.
(e) Let h[n] = (0.8)^n u[n]. Identify the filter type, establish whether the impulse response is a linear-phase
sequence, and find its 60-dB time constant.

We have H(F) = 1/(1 − 0.8e^{−j2πF}), H(0) = 1/(1 − 0.8) = 5, and H(0.5) = 1/(1 − 0.8e^{−jπ}) = 1/(1 + 0.8) = 0.556.

The sequence h[n] is not a linear-phase sequence because it shows no symmetry.

Since the magnitude at high frequencies decreases, this appears to be a lowpass filter. The 60-dB time
constant satisfies (0.8)^n = 0.001 and is found as

    n = ln 0.001 / ln 0.8 = 30.96 ⇒ 31 samples

For a sampling frequency of S = 100 Hz, this corresponds to τ = n t_s = n/S = 0.31 s.
(f) Let h[n] = 0.8δ[n] + 0.36(−0.8)^{n−1} u[n − 1]. Identify the filter type and establish whether the impulse
response is a linear-phase sequence.

We find H(F) = 0.8 + 0.36e^{−j2πF}/(1 + 0.8e^{−j2πF}) = (0.8 + e^{−j2πF})/(1 + 0.8e^{−j2πF}). So, H(0) = 1 and H(0.5) = (0.8 − 1)/(1 − 0.8) = −1.

The sequence h[n] is not a linear-phase sequence because it shows no symmetry.

Since the magnitude is identical at low and high frequencies, this could be a bandstop or an allpass
filter. Since the numerator and denominator coefficients in H(F) appear in reversed order, it is an
allpass filter.
(g) Let h[n] = δ[n] − δ[n − 8]. Identify the filter type and establish whether the impulse response is a
linear-phase sequence.

We find H(F) = 1 − e^{−j16πF}. So, H(0) = 0 and H(0.5) = 0.

This suggests a bandpass filter with a maximum between F = 0 and F = 0.5. On closer examination,
we find that H(F) = 0 at F = 0, 0.125, 0.25, 0.375, 0.5 (when e^{−j16πF} = 1) and |H(F)| = 2 at four
frequencies halfway between these. This multi-humped response is sketched in Figure E6.10G and
describes a comb filter.

6.2 (Frequency Response) Consider a filter described by h[n] = {1, 1}.
(a) Sketch its amplitude spectrum.
(b) Find its phase delay and group delay. Is this a linear-phase filter?
(c) Find its response to the input x[n] = cos(nπ/3).
(d) Find its response to the input x[n] = δ[n].
(e) Find its response to the input x[n] = (−1)^n.
(f) Find its response to the input x[n] = 3 + 3δ[n] − 6 cos(nπ/3).
[Hints and Suggestions: For (a), use e^{jθ} + e^{−jθ} = 2 cos θ to set up H(F) = A(F)e^{jφ(F)} and sketch
A(F). For (b), use the symmetry in h[n]. For (c), evaluate H(F) at the frequency of x[n] to find the
output. For (d), find h[n]. For (e), use (−1)^n = cos(nπ). For (f), use superposition.]
6.3 (Frequency Response) Consider a filter described by h[n] = (1/3){1, 1, 1}.
(a) Sketch its amplitude spectrum.
(b) Find its phase delay and group delay. Is this a linear-phase filter?
(c) Find its response to the input x[n] = cos(nπ/3).
(d) Find its response to the input x[n] = δ[n].
(e) Find its response to the input x[n] = (−1)^n.
(f) Find its response to the input x[n] = 3 + 3δ[n] − 3 cos(2nπ/3).
[Hints and Suggestions: For (a), use e^{jθ} + e^{−jθ} = 2 cos θ to set up H(F) = A(F)e^{jφ(F)} and sketch
A(F). For (b), use the symmetry in h[n]. For (c), evaluate H(F) at the frequency of x[n] to find the
output. For (d), find h[n]. For (e), use (−1)^n = cos(nπ). For (f), use superposition.]
6.4 (Frequency Response) Consider the tapered 3-point averaging filter h[n] = {0.5, 1, 0.5}.
(a) Sketch its amplitude spectrum.
(b) Find its phase delay and group delay. Is this a linear-phase filter?
(c) Find its response to the input x[n] = cos(nπ/2).
(d) Find its response to the input x[n] = δ[n − 1].
(e) Find its response to the input x[n] = 1 + (−1)^n.
(f) Find its response to the input x[n] = 3 + 2δ[n] − 4 cos(nπ/2).
[Hints and Suggestions: For (a), extract e^{−j2πF} in H(F) to set up H(F) = A(F)e^{jφ(F)} and sketch
A(F). For (b), use φ(F). For (c), evaluate H(F) at the frequency of x[n] to find the output. For (d),
find h[n] and shift. For (e), use (−1)^n = cos(nπ) and superposition. For (f), use superposition.]
6.5 (Frequency Response) Consider the 2-point differencing filter h[n] = δ[n] − δ[n − 1].
(a) Sketch its amplitude spectrum.
(b) Find its phase and group delay. Is this a linear-phase filter?
(c) Find its response to the input x[n] = cos(nπ/2).
(d) Find its response to the input x[n] = u[n].
(e) Find its response to the input x[n] = (−1)^n.
(f) Find its response to the input x[n] = 3 + 2u[n] − 4 cos(nπ/2).
[Hints and Suggestions: For (a), extract e^{−jπF} in H(F) to set up H(F) = A(F)e^{jφ(F)} and sketch
A(F). For (b), use φ(F). For (c), evaluate H(F) at the frequency of x[n] to find the output. For (d),
find y[n] = u[n] − u[n − 1] and simplify. For (e), use (−1)^n = cos(nπ). For (f), use superposition.]
6.6 (Frequency Response) Consider the 5-point tapered averaging filter h[n] = (1/9){1, 2, 3, 2, 1}.
(a) Sketch its amplitude spectrum.
(b) Find its phase delay and group delay. Is this a linear-phase filter?
(c) Find its response to the input x[n] = cos(nπ/4).
(d) Find its response to the input x[n] = δ[n].
(e) Find its response to the input x[n] = (−1)^n.
(f) Find its response to the input x[n] = 9 + 9δ[n] − 9 cos(nπ/4).
[Hints and Suggestions: For (a), extract e^{−j4πF} in H(F) to set up H(F) = A(F)e^{jφ(F)} and sketch
A(F). For (b), use φ(F). For (c), evaluate H(F) at the frequency of x[n] to find the output. For (d),
find h[n]. For (e), use (−1)^n = cos(nπ). For (f), use superposition.]
6.7 (Frequency Response) Sketch the gain |H(F)| of the following digital filters and identify the filter
type.
(a) y[n] + 0.9y[n − 1] = x[n]    (b) y[n] − 0.9y[n − 1] = x[n]
(c) y[n] + 0.9y[n − 1] = x[n − 1]    (d) y[n] = x[n] − x[n − 4]
[Hints and Suggestions: To sketch, start with the gain |H(F)| at F = 0 and F = 0.5 and add a few
others such as F = 0.25.]
6.8 (Frequency Response) Consider a system whose frequency response H(F) in magnitude/phase
form is H(F) = A(F)e^{jφ(F)}. Find the response y[n] of this system for the following inputs.
(a) x[n] = δ[n]    (b) x[n] = 1    (c) x[n] = cos(2πnF_0)    (d) x[n] = (−1)^n
[Hints and Suggestions: For (d), note that (−1)^n = cos(nπ).]
6.9 (Poles and Zeros) Find the transfer function corresponding to each pole-zero pattern shown in
Figure P6.9 and identify the filter type.
Figure P6.9 Pole-zero patterns for Problem 6.9 (Filter 1: 0.7; Filter 2: 0.6; Filter 3: 0.4; Filter 4: 0.7 at 50°)
[Hints and Suggestions: To identify the filter type, evaluate |H(z)| at z = 1 (dc gain) and at z = −1
(high-frequency gain).]
6.10 (Inverse Systems) Find the transfer function of the inverse systems for each of the following. Which
inverse systems are causal? Which inverse systems are stable?
(a) H(z) = (z² + 0.1)/(z² − 0.2)    (b) H(z) = (z + 2)/(z² + 0.25)
(c) y[n] − 0.5y[n − 1] = x[n] + 2x[n − 1]    (d) h[n] = n(2)^n u[n]
[Hints and Suggestions: For parts (c) and (d), set up H(z) and find H_I(z) = 1/H(z).]
6.11 (Minimum-Phase Systems) Classify each causal system as minimum phase, mixed phase, or maximum phase. Which of the systems are stable?
(a) H(z) = (z² + 0.16)/(z² − 0.25)    (b) H(z) = (z² − 4)/(z² + 9)
(c) h[n] = n(2)^n u[n]    (d) y[n] + y[n − 1] + 0.25y[n − 2] = x[n] − 2x[n − 1]
6.12 (Minimum-Phase Systems) Find the minimum-phase transfer function corresponding to the systems described by the following:
(a) H(z)H(1/z) = 1/(3z² − 10z + 3)    (b) H(z)H(1/z) = (3z² + 10z + 3)/(5z² + 26z + 5)
(c) |H(F)|² = [1.25 + cos(2πF)] / [8.5 + 4 cos(2πF)]
[Hints and Suggestions: Pick H(z) to have poles and zeros inside the unit circle. For part (c), set
cos(2πF) = 0.5e^{j2πF} + 0.5e^{−j2πF} = 0.5z + 0.5z^{−1} and compare with H(z)H(1/z).]
6.13 (System Characteristics) Consider the system y[n] − αy[n − 1] = x[n] − βx[n − 1].
(a) For what values of α and β will the system be stable?
(b) For what values of α and β will the system be minimum-phase?
(c) For what values of α and β will the system be allpass?
(d) For what values of α and β will the system be linear phase?
6.14 (Frequency Response) Consider the recursive filter y[n] + 0.5y[n − 1] = 0.5x[n] + x[n − 1].
(a) Sketch the filter gain. What type of filter is this?
(b) Find its phase delay and group delay. Is this a linear-phase filter?
(c) Find its response to the input x[n] = cos(nπ/2).
(d) Find its response to the input x[n] = δ[n].
(e) Find its response to the input x[n] = (−1)^n.
(f) Find its response to the input x[n] = 4 + 2δ[n] − 4 cos(nπ/2).
[Hints and Suggestions: For (a), start with |H(F)| at F = 0 and F = 0.5 to sketch the gain. For
(b), use the equations for the first-order allpass filter. For (c), evaluate H(F) at the frequency of x[n] to
find the output. For (d), find h[n]. For (e), use (−1)^n = cos(nπ). For (f), use superposition.]
6.15 (Linear-Phase Filters) Consider a filter whose impulse response is h[n] = {α, β, α}.
(a) Find its frequency response in amplitude/phase form H(F) = A(F)e^{jφ(F)} and confirm that the
phase φ(F) varies linearly with F.
(b) Determine the values of α and β if this filter is to completely block a signal at the frequency
F = 1/3 while passing a signal at the frequency F = 1/8 with unit gain.
(c) What will be the output of the filter in part (b) if the input is x[n] = cos(0.5nπ)?
[Hints and Suggestions: In (a), extract e^{−j2πF} in H(F) to set up the required form. In (b), set up
A(F) at the two frequencies and solve for α and β. In (c), evaluate H(F) at the frequency of the input
to set up the output.]
6.16 (First-Order Filters) For each filter, sketch the pole-zero plot, sketch the frequency response to
establish the filter type, and evaluate the phase delay at low frequencies. Assume that α = 0.5.
(a) H(z) = (z − α)/(z + α)    (b) H(z) = (z − 1/α)/(z + α)    (c) H(z) = (z + 1/α)/(z + α)
6.17 (Filter Concepts) Argue for or against the following. Use examples to justify your arguments.
(a) All the finite poles of an FIR filter must lie at z = 0.
(b) An FIR filter is always linear phase.
(c) An FIR filter is always stable.
(d) A causal IIR filter can never display linear phase.
(e) A linear-phase sequence is always symmetric about its midpoint.
(f) A minimum-phase filter can never display linear phase.
(g) An allpass filter can never display linear phase.
(h) An allpass filter can never be minimum phase.
(i) The inverse of a minimum-phase filter is also minimum phase.
(j) The inverse of a causal filter is also causal.
(k) For a causal minimum-phase filter to have a causal inverse, the filter must have as many finite
poles as finite zeros.
6.18 (Filter Concepts) Let H(z) = 0.2(z + 1)/(z − 0.6). What type of filter does H(z) describe? Sketch its
pole-zero plot.
(a) What filter does H_1(z) = 1 − H(z) describe? Sketch its pole-zero plot.
(b) What filter does H_2(z) = H(−z) describe? Sketch its pole-zero plot.
(c) What type of filter does H_3(z) = 1 − H(z) − H(−z) describe? Sketch its pole-zero plot.
(d) Use a combination of the above filters to implement a bandstop filter.
6.19 (Filter Design by Pole-Zero Placement) Design the following filters by pole-zero placement.
(a) A bandpass filter with a center frequency of f_0 = 200 Hz, a 3-dB bandwidth of ∆f = 20 Hz, zero
gain at f = 0 and f = 400 Hz, and a sampling frequency of 800 Hz.
(b) A notch filter with a notch frequency of 1 kHz, a 3-dB stopband of 10 Hz, and sampling frequency
8 kHz.
[Hints and Suggestions: Convert to digital frequencies and put the poles at z = Re^{±j2πF_0} where
R ≈ 1 − π∆F. For (a), put zeros at z = ±1. For (b), put zeros at z = e^{±j2πF_0}.]
6.20 (Recursive and IIR Filters) The terms recursive and IIR are not always synonymous. A recursive
filter could in fact have a finite impulse response and even linear phase. For each of the following
recursive filters, find the transfer function H(z) and the impulse response h[n]. Which filters (if any)
describe IIR filters? Which filters (if any) are linear phase?
(a) y[n] − y[n − 1] = x[n] − x[n − 2]
(b) y[n] − y[n − 1] = x[n] − x[n − 1] − 2x[n − 2] + 2x[n − 3]
[Hints and Suggestions: Set up H(z) and cancel common factors to generate h[n].]
6.21 (Systems in Cascade and Parallel) Consider the filter realization of Figure P6.21.
(a) Find its transfer function and impulse response if α = β. Is the overall system FIR or IIR?
(b) Find its transfer function and impulse response if α = −β. Is the overall system FIR or IIR?
(c) Find its transfer function and impulse response if α = β = 1. What does the overall system
represent?
Figure P6.21 Filter realization for Problem 6.21
6.22 (Poles and Zeros) It is known that the transfer function H(z) of a filter has two poles at z = 0, two
zeros at z = −1, and a dc gain of 8.
(a) Find the filter transfer function H(z) and impulse response h[n].
(b) Is this an IIR or FIR filter?
(c) Is this a causal or noncausal filter?
(d) Is this a linear-phase filter? If so, what is the symmetry in h[n]?
(e) Repeat parts (a)–(d) if another zero is added at z = −1.
6.23 (Frequency Response) Sketch the pole-zero plot and frequency response of the following systems
and describe the function of each system.
(a) y[n] = 0.5x[n] + 0.5x[n − 1]    (b) y[n] = 0.5x[n] − 0.5x[n − 1]
(c) h[n] = (1/3){1, 1, 1}    (d) h[n] = (1/3){1, −1, 1}
(e) h[n] = {0.5, 1, 0.5}
(g) H(z) = (z − 0.5)/(z + 0.5)    (h) H(z) = (z − 2)/(z + 0.5)    (i) H(z) = (z + 2)/(z + 0.5)
(j) H(z) = (z² + 2z + 3)/(3z² + 2z + 1)
6.24 (Inverse Systems) Consider a system described by h[n] = 0.5δ[n] + 0.5δ[n − 1].
(a) Sketch the frequency response H(F) of this filter.
(b) In an effort to recover the input x[n], it is proposed to cascade this filter with another filter whose
impulse response is h_1[n] = 0.5δ[n] − 0.5δ[n − 1], as shown:
x[n] → h[n] → h_1[n] → y[n]
What is the output of the cascaded filter to the input x[n]? Sketch the frequency response H_1(F)
and the frequency response of the cascaded filter.
(c) What must be the impulse response h_2[n] of a filter connected in cascade with the original filter
such that the output of the cascaded filter equals the input x[n], as shown?
x[n] → h[n] → h_2[n] → x[n]
(d) Are H_2(F) and H_1(F) related in any way?
6.25 (Linear Phase and Symmetry) Assume a sequence x[n] with real coefficients with all its poles at
z = 0. Argue for or against the following statements. You may want to exploit two useful facts. First,
each pair of terms with reciprocal roots, such as (z − α) and (z − 1/α), yields an even symmetric impulse
response sequence. Second, the convolution of symmetric sequences is also endowed with symmetry.
(a) If all the zeros lie on the unit circle, x[n] must be linear phase.
(b) If x[n] is linear phase, its zeros must always lie on the unit circle.
(c) If there are no zeros at z = 1 and x[n] is linear phase, it is also even symmetric.
(d) If there is one zero at z = 1 and x[n] is linear phase, it is also odd symmetric.
(e) If x[n] is even symmetric, there can be no zeros at z = 1.
(f) If x[n] is odd symmetric, there must be an odd number of zeros at z = 1.
6.26 (Comb Filters) For each comb filter, identify the pole and zero locations and determine whether it
is a notch filter or a peaking filter.
(a) H(z) = (z^4 − 0.4096)/(z^4 − 0.6561)
(b) H(z) = (z^4 − 1)/(z^4 − 0.6561)
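As a numerical aside (a Python sketch, assuming the two filters are H(z) = (z^4 − 0.4096)/(z^4 − 0.6561) and H(z) = (z^4 − 1)/(z^4 − 0.6561); the helper name is mine), the zeros and poles of such comb filters are fourth roots of the constants, so their radii follow directly from 0.8^4 = 0.4096 and 0.9^4 = 0.6561:

```python
import cmath

def quartic_roots(c):
    # Roots of z**4 = c for real c > 0: radius c**0.25 at angles 0, 90, 180, 270 degrees.
    r = c ** 0.25
    return [r * cmath.exp(1j * cmath.pi * k / 2) for k in range(4)]

# Filter (a): zeros from z^4 = 0.4096 (radius 0.8), poles from z^4 = 0.6561 (radius 0.9).
zeros = quartic_roots(0.4096)
poles = quartic_roots(0.6561)
print([round(abs(z), 4) for z in zeros])  # [0.8, 0.8, 0.8, 0.8]
print([round(abs(p), 4) for p in poles])  # [0.9, 0.9, 0.9, 0.9]
```

Since the poles and zeros share the same angles, sketching |H(F)| at those four frequencies shows whether each filter dips (notch) or peaks there.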
6.27 (Linear-Phase Filters) Argue for or against the following:
(a) The cascade connection of two linear-phase filters is also a linear-phase filter.
(b) The parallel connection of two linear-phase filters is also a linear-phase filter.
6.28 (Minimum-Phase Filters) Argue for or against the following:
(a) The cascade connection of two minimum-phase filters is also a minimum-phase filter.
(b) The parallel connection of two minimum-phase filters is also a minimum-phase filter.
6.29 (System Analysis) The impulse response of a system is h[n] = δ[n] + αδ[n − 1]. Determine and
make a sketch of the pole-zero plot for this system to act as
(a) A lowpass filter. (b) A highpass filter. (c) An allpass filter.
6.30 (System Analysis) The impulse response of a system is h[n] = α^n u[n], α ≠ 0. Determine and
make a sketch of the pole-zero plot for this system to act as
(a) A stable lowpass filter. (b) A stable highpass filter. (c) An allpass filter.
6.31 (Allpass Filters) Argue for or against the following:
(a) The cascade connection of two allpass filters is also an allpass filter.
(b) The parallel connection of two allpass filters is also an allpass filter.
6.32 (Minimum-Phase Systems) Consider the filter y[n] = x[n] − 0.65x[n − 1] + 0.1x[n − 2].
(a) Find its transfer function H(z) and verify that it is minimum phase.
(b) Find an allpass filter A(z) with the same denominator as H(z).
(c) Is the cascade H(z)A(z) minimum phase? Is it causal? Is it stable?
6.33 (Causality, Stability, and Minimum Phase) Consider two causal, stable, minimum-phase digital
filters whose transfer functions are given by

F(z) = z/(z − 0.5)        G(z) = (z − 0.5)/(z + 0.5)

Argue that the following filters are also causal, stable, and minimum phase.
(a) The inverse filter M(z) = 1/F(z)
(b) The inverse filter P(z) = 1/G(z)
(c) The cascade H(z) = F(z)G(z)
(d) The inverse of the cascade R(z) = 1/H(z)
(e) The parallel connection N(z) = F(z) + G(z)
6.34 (Allpass Filters) Consider the filter H(z) = (z + 2)/(z + 0.5). The input to this filter is
x[n] = cos(2πnF_0).
(a) Is H(z) an allpass filter? If so, what is its gain?
(b) What is the response y[n] and the phase delay if F_0 = 0?
(c) What is the response y[n] and the phase delay if F_0 = 0.25?
(d) What is the response y[n] and the phase delay if F_0 = 0.5?
6.35 (Allpass Filters) Consider two causal, stable, allpass digital filters whose transfer functions are
described by

F(z) = (0.5z − 1)/(0.5 − z)        G(z) = (0.5z + 1)/(0.5 + z)

(a) Is the filter L(z) = 1/F(z) causal? Stable? Allpass?
(b) Is the filter H(z) = F(z)G(z) causal? Stable? Allpass?
(c) Is the filter M(z) = 1/H(z) causal? Stable? Allpass?
(d) Is the filter N(z) = F(z) + G(z) causal? Stable? Allpass?
6.36 (Stabilization by Allpass Filters) The transfer function of a filter is H(z) = (z + 3)/(z − 2).
(a) Is this filter stable?
(b) What is the transfer function A_1(z) of a first-order allpass filter that can stabilize this filter?
What is the transfer function H_S(z) of the stabilized filter?
(c) If H_S(z) is not minimum phase, pick an allpass filter A_2(z) that converts H_S(z) to a
minimum-phase filter H_M(z).
(d) Verify that |H(F)| = |H_S(F)| = |H_M(F)|.
6.37 (Allpass Filters) Consider a lowpass filter with impulse response h[n] = (0.5)^n u[n]. If its input is
x[n] = cos(0.5nπ), the output will have the form y[n] = A cos(0.5nπ + θ).
(a) Find the values of A and θ.
(b) What should be the transfer function H_1(z) of a first-order allpass filter that can be cascaded with
the lowpass filter to correct for the phase distortion and produce the signal z[n] = B cos(0.5nπ)
at its output?
(c) What should be the gain of the allpass filter in order that z[n] = x[n]?
6.38 (Allpass Filters) An unstable digital filter whose transfer function is
H(z) = [(z + 0.5)(2z + 0.5)]/[(z + 5)(2z + 5)]
is to be stabilized in a way that does not affect its magnitude spectrum.
(a) What must be the transfer function H_1(z) of a filter such that the cascaded filter described by
H_S(z) = H(z)H_1(z) is stable?
(b) What is the transfer function H_S(z) of the stabilized filter?
(c) Is H_S(z) causal? Minimum phase? Allpass?
6.39 (Signal Delay) The delay D of a discrete-time energy signal x[n] is defined by

D = [ Σ_{k=−∞}^{∞} k x^2[k] ] / [ Σ_{k=−∞}^{∞} x^2[k] ]

(a) Verify that the delay of the linear-phase sequence x[n] = {4, 3, 2, 1, 0, 1, 2, 3, 4} (with the zero
sample at n = 0) is zero.
(b) Compute the delay of the signal f[n] = x[n − 1].
(c) Compute the delay of the signal g[n] = x[n − 2].
(d) What is the delay for the impulse response of the filter described by H(z) = (1 − 0.5z)/(z − 0.5)?
(e) Compute the delay for the impulse response of the first-order allpass filter H(z) = (1 + αz)/(z + α).
[Hints and Suggestions: For part (d), h[n] = −2δ[n] + 1.5(0.5)^n u[n]. So, h[0] = −0.5 and h[n] =
1.5(0.5)^n, n > 0. Use this with a table of summations to find the delay.]
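The delay D defined above is straightforward to evaluate numerically for a finite-length sequence; here is a minimal Python sketch (the function name and the n0 index convention are mine):

```python
def signal_delay(x, n0=0):
    # D = sum(n * x[n]**2) / sum(x[n]**2), where list index k maps to time n = k - n0.
    num = sum((k - n0) * v * v for k, v in enumerate(x))
    den = sum(v * v for v in x)
    return num / den

# The even symmetric sequence of part (a), centered at n = 0, has zero delay:
x = [4, 3, 2, 1, 0, 1, 2, 3, 4]
print(signal_delay(x, n0=4))        # 0.0
# Delaying it by one sample moves the delay to 1, as expected:
print(signal_delay([0] + x, n0=4))  # 1.0
```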
COMPUTATION AND DESIGN
6.40 (FIR Filter Design) A 22.5-Hz signal is corrupted by 60-Hz hum. It is required to sample this
signal at 180 Hz and filter out the interference from the sampled signal.
(a) Design a minimum-length, linear-phase filter that passes the desired signal with unit gain and
completely rejects the interference signal.
(b) Test your design by applying a sampled version of the desired signal, adding 60-Hz interference,
filtering the noisy signal, and comparing the desired signal and the filtered signal.
6.41 (Comb Filters) Plot the frequency response of the following filters over 0 ≤ F ≤ 0.5 and describe
the action of each filter.
(a) y[n] = x[n] + αx[n − 4], α = 0.5
(b) y[n] = x[n] + αx[n − 4] + α^2 x[n − 8], α = 0.5
(c) y[n] = x[n] + αx[n − 4] + α^2 x[n − 8] + α^3 x[n − 12], α = 0.5
(d) y[n] = αy[n − 4] + x[n], α = 0.5
6.42 (Filter Design) An ECG signal sampled at 300 Hz is contaminated by interference due to 60-Hz
hum. It is required to design a digital filter to remove the interference and provide a dc gain of unity.
(a) Design a 3-point FIR filter (using zero placement) that completely blocks the interfering signal.
Plot its frequency response. Does the filter provide adequate gain at other frequencies in the
passband? Is this a good design?
(b) Design an IIR filter (using pole-zero placement) that completely blocks the interfering signal.
Plot its frequency response. Does the filter provide adequate gain at other frequencies in the
passband? Is this a good design?
(c) The Matlab-based routine ecgsim (on the author's website) simulates one period of an ECG
signal. Use the command yecg=ecgsim(3,9) to generate one period (300 samples) of the ECG
signal. Generate a noisy ECG signal by adding 300 samples of a 60-Hz sinusoid to yecg. Obtain
filtered signals, using each filter, and compare plots of the filtered signal with the original signal
yecg. Do the results support your conclusions of parts (a) and (b)? Explain.
6.43 (Nonrecursive Forms of IIR Filters) If we truncate the impulse response of an IIR filter to
N terms, we obtain an FIR filter. The larger the truncation index N, the better the FIR filter
approximates the underlying IIR filter. Consider the IIR filter described by y[n] − 0.8y[n − 1] = x[n].
(a) Find its impulse response h[n] and truncate it to N terms to obtain h_N[n], the impulse response
of the approximate FIR equivalent. Would you expect the greatest mismatch in the response of
the two filters to identical inputs to occur for lower or higher values of n?
(b) Plot the frequency response H(F) and H_N(F) for N = 3. Plot the poles and zeros of the two
filters. What differences do you observe?
(c) Plot the frequency response H(F) and H_N(F) for N = 10. Plot the poles and zeros of the two
filters. Does the response of H_N(F) show a better match to H(F)? How do the pole-zero plots
compare? What would you expect to see in the pole-zero plot if you increase N to 50? What
would you expect to see in the pole-zero plot as N → ∞?
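The truncation idea in this problem is easy to probe numerically (the book intends Matlab, but the idea is the same); a Python sketch with my own helper names compares the IIR response with the truncated FIR response at one frequency:

```python
import cmath
import math

def H(F):
    # IIR filter y[n] - 0.8*y[n-1] = x[n]: H(z) = z/(z - 0.8) at z = exp(j*2*pi*F).
    z = cmath.exp(2j * math.pi * F)
    return z / (z - 0.8)

def HN(F, N):
    # FIR filter obtained by truncating h[n] = (0.8)**n u[n] to N terms.
    z = cmath.exp(2j * math.pi * F)
    return sum(0.8 ** n * z ** (-n) for n in range(N))

# The approximation error at F = 0.1 shrinks as the truncation index N grows:
for N in (3, 10, 50):
    print(N, abs(H(0.1) - HN(0.1, N)))
```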
6.44 (Phase Delay and Group Delay of Allpass Filters) Consider the filter H(z) = (z + 1/α)/(z + α).
(a) Verify that this is an allpass filter.
(b) Pick values of α that correspond to a phase delay of t_p = 0.1, 0.5, 0.9. For each value of α, plot
the unwrapped phase, phase delay, and group delay of the filter.
(c) Over what range of digital frequencies is the phase delay a good match to the value of t_p
computed in part (b)?
(d) How does the group delay vary with frequency as α is changed?
(e) For each value of α, compute the minimum and maximum values of the phase delay and the
group delay and the frequencies at which they occur.
6.45 (Decoding a Mystery Message) During transmission, a message signal gets contaminated by a
low-frequency signal and high-frequency noise. The message can be decoded only by displaying it in
the time domain. The contaminated signal is provided on the author's website as mystery1.mat. Load
this signal into Matlab (using the command load mystery1). In an effort to decode the message,
try the following steps and determine what the decoded message says.
(a) Display the contaminated signal. Can you read the message?
(b) Display the DFT of the signal to identify the range of the message spectrum.
(c) Design a peaking filter (with unit gain) centered about the message spectrum.
(d) Filter the contaminated signal and display the filtered signal to decode the message.
Chapter 7
DIGITAL PROCESSING OF
ANALOG SIGNALS
7.0 Scope and Objectives
This chapter begins with a discussion of sampling and quantization that form the critical link between analog
and digital signals. It explains how the sampling theorem forms the basis for sampling signals with little
or no loss of information, and describes schemes for the recovery of analog signals from their samples using
idealized and practical methods. It introduces the concept of signal quantization and its effects, and concludes
with applications of the digital processing of audio signals.
7.1 Ideal Sampling
Ideal sampling describes a sampled signal as a weighted sum of impulses, the weights being equal to the
values of the analog signal at the impulse locations. An ideally sampled signal x_I(t) may be regarded as the
product of an analog signal x(t) and a periodic impulse train i(t), as illustrated in Figure 7.1.
Figure 7.1 The ideal sampling operation
The ideally sampled signal may be mathematically described as

x_I(t) = x(t)i(t) = x(t) Σ_{n=−∞}^{∞} δ(t − nt_s) = Σ_{n=−∞}^{∞} x(nt_s) δ(t − nt_s) = Σ_{n=−∞}^{∞} x[n] δ(t − nt_s)    (7.1)
Here, the discrete signal x[n] simply represents the sequence of sample values x(nt_s). Clearly, the sampling
operation leads to a potential loss of information in the ideally sampled signal x_I(t), when compared with its
underlying analog counterpart x(t). The smaller the sampling interval t_s, the less is this loss of information.
Intuitively, there must always be some loss of information, no matter how small an interval we use.
Fortunately, our intuition notwithstanding, it is indeed possible to sample signals without any loss of
information. The catch is that the signal x(t) must be band-limited to some finite frequency B.
REVIEW PANEL 7.1
The Ideally Sampled Signal Is a Train of Impulses
x_I(t) = x(t) Σ_{n=−∞}^{∞} δ(t − nt_s) = Σ_{n=−∞}^{∞} x(nt_s) δ(t − nt_s) = Σ_{n=−∞}^{∞} x[n] δ(t − nt_s)
The discrete signal x[n] corresponds to the sequence of the sample values x(nt_s).
The spectra associated with the various signals in ideal sampling are illustrated in Figure 7.2. The
impulse train i(t) is a periodic signal with period T = t_s = 1/S and Fourier series coefficients I[k] = S. Its
Fourier transform is a train of impulses (at f = kS) whose strengths equal I[k]:

I(f) = Σ_{k=−∞}^{∞} I[k] δ(f − kS) = S Σ_{k=−∞}^{∞} δ(f − kS)    (7.2)
The ideally sampled signal x_I(t) is the product of x(t) and i(t). Its spectrum X_I(f) is thus described by
the convolution

X_I(f) = X(f) * I(f) = X(f) * S Σ_{k=−∞}^{∞} δ(f − kS) = S Σ_{k=−∞}^{∞} X(f − kS)    (7.3)

The spectrum X_I(f) consists of X(f) and its shifted replicas or images. It is periodic in frequency, with a
period that equals the sampling rate S.
Figure 7.2 Spectra of the signals for ideal sampling
REVIEW PANEL 7.2
The Spectrum of an Ideally Sampled Signal Is Periodic with Period S
X_I(f) = X(f) * S Σ_{k=−∞}^{∞} δ(f − kS) = S Σ_{k=−∞}^{∞} X(f − kS)
The spectrum is the periodic extension of SX(f) with period S.
Since the spectral image at the origin extends over (−B, B), and the next image (centered at S) extends
over (S − B, S + B), the images will not overlap if

S − B > B    or    S > 2B    (7.4)

Figure 7.3 illustrates the spectra of an ideally sampled band-limited signal for three choices of the sampling
frequency S. As long as the images do not overlap, each period is a replica of the scaled analog signal
spectrum SX(f). We can thus extract X(f) (and hence x(t)) as the principal period of X_I(f) (between
−0.5S and 0.5S) by passing the ideally sampled signal through an ideal lowpass filter with a cutoff frequency
of 0.5S and a gain of 1/S over the frequency range −0.5S ≤ f ≤ 0.5S.
Figure 7.3 Spectrum of an ideally sampled signal for three choices of the sampling frequency
This is the celebrated sampling theorem, which tells us that an analog signal band-limited to a frequency B can be sampled without loss of information if the sampling rate S exceeds 2B (or the sampling
interval t_s is smaller than 1/(2B)). The critical sampling rate S_N = 2B is often called the Nyquist rate or
Nyquist frequency, and the critical sampling interval t_N = 1/S_N = 1/(2B) is called the Nyquist interval.
REVIEW PANEL 7.3
Band-Limited Analog Signals Can Be Sampled Without Loss of Information
If x(t) is band-limited to B, it must be sampled at S > 2B to prevent loss of information.
The images of X(f) do not overlap in the periodic spectrum X_I(f) of the ideally sampled signal.
We can recover x(t) exactly from the principal period (−0.5S, 0.5S), using an ideal lowpass filter.
If the sampling rate S is less than 2B, the spectral images overlap and the principal period (−0.5S, 0.5S)
of X_I(f) is no longer an exact replica of X(f). In this case, we cannot exactly recover x(t), and there is loss
of information due to undersampling. Undersampling results in spectral overlap. Components of X(f)
outside the principal range (−0.5S, 0.5S) fold back into this range (due to the spectral overlap from adjacent
images). Thus, frequencies higher than 0.5S appear as lower frequencies in the principal period. This is
aliasing. The frequency 0.5S is also called the folding frequency.
REVIEW PANEL 7.4
Undersampling Causes Spectral Overlap, Aliasing, and Irreversible Loss of Information
If S < 2B, the images of X(f) overlap in the periodic spectrum and we cannot recover x(t).
Aliasing: A frequency |f_0| > 0.5S gets aliased to a lower frequency f_a in the range (−0.5S, 0.5S).
Sampling is a band-limiting operation in the sense that in practice we typically extract only the principal
period of the spectrum, which is band-limited to the frequency range (−0.5S, 0.5S). Thus, the highest
frequency we can recover or identify is 0.5S and depends only on the sampling rate S.
EXAMPLE 7.1 (The Nyquist Rate)
Let the signal x_1(t) be band-limited to 2 kHz and x_2(t) be band-limited to 3 kHz. Using properties of the
Fourier transform, we find the Nyquist rate for the following signals.
The spectrum of x_1(2t) (time compression) stretches to 4 kHz. Thus, S_N = 8 kHz.
The spectrum of x_2(t − 3) extends to 3 kHz (a time shift changes only the phase). Thus, S_N = 6 kHz.
The spectrum of x_1(t) + x_2(t) (sum of the spectra) extends to 3 kHz. Thus, S_N = 6 kHz.
The spectrum of x_1(t)x_2(t) (convolution in the frequency domain) extends to 5 kHz. Thus, S_N = 10 kHz.
The spectrum of x_1(t) * x_2(t) (product of the spectra) extends only to 2 kHz. Thus, S_N = 4 kHz.
The spectrum of x_1(t)cos(1000πt) (modulation) is stretched by 500 Hz to 2.5 kHz. Thus, S_N = 5 kHz.
7.1.1 Sampling of Sinusoids and Periodic Signals
The Nyquist frequency for a sinusoid x(t) = cos(2πf_0 t + θ) is S_N = 2f_0. The Nyquist interval is
t_N = 1/(2f_0), or t_N = T/2. Sampling above the Nyquist rate amounts to taking more than two samples
per period. If, for example, we acquire just two samples per period, starting at a zero crossing, all sample
values will be zero, and will yield no information.
If a signal x(t) = cos(2πf_0 t + θ) is sampled at S, the sampled signal is x[n] = cos(2πnf_0/S + θ). Its
spectrum is periodic, with principal period (−0.5S, 0.5S). If f_0 < 0.5S, there is no aliasing, and the principal
period shows a pair of impulses at ±f_0 (with strength 0.5). If f_0 > 0.5S, we have aliasing. The components
at ±f_0 are aliased to a lower frequency ±f_a in the principal range. To find the aliased frequency f_a,
we subtract integer multiples of the sampling frequency from f_0 until the result f_a = f_0 − NS lies in the
principal range (−0.5S, 0.5S). The spectrum then describes a sampled version of the lower-frequency aliased
signal x_a(t) = cos(2πf_a t + θ). The relation between the aliased frequency, original frequency, and sampling
frequency is illustrated in Figure 7.4. The aliased frequency always lies in the principal range.
Figure 7.4 Relation between the actual and aliased frequency
REVIEW PANEL 7.5
Finding Aliased Frequencies and Aliased Signals
The signal x(t) = cos(2πf_0 t + θ) is recovered as x_a(t) = cos(2πf_a t + θ) if S < 2f_0.
f_a = f_0 − NS, where N is an integer that places f_a in the principal period (−0.5S, 0.5S).
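This rule translates directly into a few lines of Python (a sketch; the function name is mine, and N is chosen by rounding f_0/S to the nearest integer):

```python
def aliased_frequency(f0, S):
    # Subtract the multiple N*S (N = nearest integer to f0/S) so that
    # fa = f0 - N*S lands in the principal range (-0.5*S, 0.5*S).
    N = round(f0 / S)
    return f0 - N * S

# A 100-Hz sinusoid sampled at 80 Hz aliases to 20 Hz:
print(aliased_frequency(100, 80))   # 20
# Sampled at 60 Hz, it aliases to -20 Hz (a phase-reversed 20-Hz sinusoid):
print(aliased_frequency(100, 60))   # -20
```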
A periodic signal x_p(t) with period T can be described by a sum of sinusoids at the fundamental frequency
f_0 = 1/T and its harmonics kf_0. In general, such a signal may not be band-limited, and it then cannot be
sampled without aliasing for any choice of sampling rate.
EXAMPLE 7.2 (Aliasing and Sampled Sinusoids)
(a) Consider the sinusoid x(t) = A cos(2πf_0 t + θ) with f_0 = 100 Hz.
1. If x(t) is sampled at S = 300 Hz, no aliasing occurs, since S > 2f_0.
2. If x(t) is sampled at S = 80 Hz, we obtain f_a = f_0 − S = 20 Hz. The sampled signal describes a
sampled version of the aliased signal A cos[2π(20)t + θ].
3. If S = 60 Hz, we obtain f_0 − 2S = −20 Hz. The aliased sinusoid corresponds to
A cos[2π(−20)t + θ] = A cos[2π(20)t − θ]. Note the phase reversal.
(b) Let x_p(t) = 8 cos(2πt) + 6 cos(8πt) + 4 cos(22πt) + 6 sin(32πt) + cos(58πt) + sin(66πt).
If it is sampled at S = 10 Hz, the last four terms will be aliased. The reconstruction of the sampled
signal will describe an analog signal whose first two terms are identical to x_p(t), but whose other
components are at the aliased frequencies. The following table shows what to expect.

f_0 (Hz)   Aliasing?          Aliased Frequency f_a   Analog Equivalent
1          No (f_0 < 0.5S)    No aliasing             8 cos(2πt)
4          No (f_0 < 0.5S)    No aliasing             6 cos(8πt)
11         Yes                11 − S = 1              4 cos(2πt)
16         Yes                16 − 2S = −4            6 sin(−8πt) = −6 sin(8πt)
29         Yes                29 − 3S = −1            cos(−2πt) = cos(2πt)
33         Yes                33 − 3S = 3             sin(6πt)

The reconstructed signal corresponds to x_S(t) = 13 cos(2πt) + sin(6πt) + 6 cos(8πt) − 6 sin(8πt), which
cannot be distinguished from x_p(t) at the sampling instants t = nt_s, where t_s = 0.1 s. To avoid aliasing
and recover x_p(t), we must choose S > 2B = 66 Hz.
(c) Suppose we sample a sinusoid x(t) at 30 Hz and obtain the periodic spectrum of the sampled signal,
as shown in Figure E7.2C. Is it possible to uniquely identify x(t)?
Figure E7.2C Spectrum of sampled sinusoid for Example 7.2(c)
We can certainly identify the period as 30 Hz, and thus S = 30 Hz. But we cannot uniquely identify
x(t) because it could be a sinusoid at 10 Hz (with no aliasing) or a sinusoid at 20 Hz, 50 Hz, 80 Hz, etc.
(all aliased to ±10 Hz). However, the analog signal y(t) reconstructed from the samples will describe a
10-Hz sinusoid because reconstruction extracts only the principal period, (−15, 15) Hz, of the periodic
spectrum. In the absence of any a priori information, we almost invariably use the principal period as
a means to uniquely identify the underlying signal from its periodic spectrum, for better or worse.
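The claim in part (b), that the reconstructed signal and the original agree at every sampling instant, can be checked numerically (a Python sketch with the two signals written out explicitly):

```python
import math

def xp(t):  # original periodic signal of part (b)
    return (8 * math.cos(2 * math.pi * t) + 6 * math.cos(8 * math.pi * t)
            + 4 * math.cos(22 * math.pi * t) + 6 * math.sin(32 * math.pi * t)
            + math.cos(58 * math.pi * t) + math.sin(66 * math.pi * t))

def xs(t):  # signal reconstructed after sampling at S = 10 Hz
    return (13 * math.cos(2 * math.pi * t) + math.sin(6 * math.pi * t)
            + 6 * math.cos(8 * math.pi * t) - 6 * math.sin(8 * math.pi * t))

# The two agree at the sampling instants t = n*ts, ts = 0.1 s, but differ in between:
print(all(abs(xp(0.1 * n) - xs(0.1 * n)) < 1e-9 for n in range(100)))  # True
print(abs(xp(0.05) - xs(0.05)) > 0.1)                                  # True
```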
7.1.2 Application Example: The Sampling Oscilloscope
The sampling theorem tells us that in order to sample an analog signal without aliasing and loss of
information, we must sample above the Nyquist rate. However, some applications depend on the very act
of aliasing for their success. One example is the sampling oscilloscope. A conventional oscilloscope cannot
directly display a waveform whose bandwidth exceeds the bandwidth of the oscilloscope. However, if the
signal is periodic (or can be repeated periodically) and band-limited, a new, time-expanded waveform can
be built up and displayed by sampling the periodic signal at successively later instants in successive periods.
The idea is illustrated in Figure 7.5 for a periodic signal x(t) = 1 + cos(2πf_0 t) with fundamental frequency f_0.
Figure 7.5 The principle of the sampling oscilloscope
If the sampling rate is chosen to be less than f_0 (i.e., S < f_0), the spectral component at f_0 will alias
to the smaller frequency f_a = f_0 − S. To ensure no phase reversal, the aliased frequency must be less than
0.5S. Subsequent recovery by a lowpass filter with a cutoff frequency of f_C = 0.5S will yield the signal
y(t) = 1 + cos[2π(f_0 − S)t], which represents a time-stretched version of x(t). With y(t) = x(t/τ), the
stretching factor τ and highest sampling rate S are related by

τ = f_0/f_a = f_0/(f_0 − S)    or    S = (τ − 1)f_0/τ    (7.5)
The closer S is to the fundamental frequency f_0, the larger is the stretching factor τ and the more slowed
down the signal y(t). This reconstructed signal y(t) is what is displayed on the oscilloscope (with appropriate
scale factors to reflect the parameters of the original signal). For a given stretching factor τ, the sampling
rate may also be reduced to S_m = S/m (where m is an integer) as long as it exceeds 2f_0/τ and ensures that
the aliased component appears at f_0/τ with no phase reversal. The only disadvantage of choosing smaller
sampling rates (corresponding to fewer than one sample per period) is that we must wait much longer in
order to acquire enough samples to build up a one-period display.
More generally, if a periodic signal x(t) with a fundamental frequency of f_0 is band-limited to the Nth
harmonic frequency f_max = Nf_0, the stretching factor and sampling rate are still governed by the same
equations. As before, the sampling rate may also be reduced to S_m = S/m (where m is an integer) as long
as it exceeds 2f_max/τ and ensures that each harmonic kf_0 is aliased to kf_0/τ with no phase reversal.
EXAMPLE 7.3 (Sampling Oscilloscope Concepts)
(a) We wish to see a 100-MHz sinusoid slowed down by a factor of 50. The highest sampling rate we can
choose is

S = (τ − 1)f_0/τ = 98 MHz

This causes the slowed down (aliased) signal to appear at 2 MHz. We may even choose a lower
sampling rate (as long as it exceeds 4 MHz) such as S_2 = S/2 = 49 MHz or S_7 = S/7 = 14 MHz
or S_20 = S/20 = 4.9 MHz, and the original signal will still alias to 2 MHz with no phase reversal.
However, if we choose S_28 = S/28 = 3.5 MHz (which is less than 4 MHz), the aliased signal will appear
at 1.5 MHz and no longer reflect the correct stretching factor.
(b) We wish to slow down a signal made up of components at 100 MHz and 400 MHz by a factor of 50.
The fundamental frequency is f_0 = 100 MHz and the highest sampling rate we can choose is

S = (τ − 1)f_0/τ = 98 MHz

The frequency of 100 MHz slows down to 2 MHz and the highest frequency of 400 MHz slows down
(aliases) to 8 MHz. We may also choose a lower sampling rate (as long as it exceeds 16 MHz) such as
S_2 = S/2 = 49 MHz or S_5 = S/5 = 19.6 MHz, and the frequencies will still alias to 2 MHz and
8 MHz with no phase reversal. However, if we choose S_7 = S/7 = 14 MHz (which is less than 16 MHz),
the 400-MHz component will alias to 6 MHz and no longer reflect the correct stretching factor.
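The rate selections of part (a) can be automated; a Python sketch (the function name is mine; aliasing is computed as in Review Panel 7.5) flags which reduced rates S/m still alias a single tone f_0 to +f_0/τ, i.e., with no phase reversal:

```python
def valid_reduced_rates(f0, stretch, m_max):
    # S = (stretch - 1)*f0/stretch; keep S/m if f0 aliases to +f0/stretch
    # (a positive alias in the principal range means no phase reversal).
    S = (stretch - 1) * f0 / stretch
    good = []
    for m in range(1, m_max + 1):
        Sm = S / m
        fa = f0 - round(f0 / Sm) * Sm   # alias into (-0.5*Sm, 0.5*Sm)
        if fa > 0 and abs(fa - f0 / stretch) < 1e-6:
            good.append(Sm)
    return good

# For the 100-MHz sinusoid slowed by 50 (frequencies in MHz): S = 98, and
# S/2 = 49 and S/7 = 14 still work, but S/28 = 3.5 does not:
rates = valid_reduced_rates(100.0, 50, 28)
print(49.0 in rates, 14.0 in rates, 3.5 in rates)  # True True False
```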
7.1.3 Sampling of Bandpass Signals
The spectrum of baseband signals includes the origin, whereas the spectrum of bandpass signals occupies
a range of frequencies between f_L and f_H, where f_L is greater than zero. The quantity B = f_H − f_L is a
measure of the bandwidth of the bandpass signal. Even though a Nyquist rate of 2f_H can be used to recover
such signals, we can often get by with a lower sampling rate. This is especially useful for narrow-band signals
(or modulated signals) centered about a very high frequency. To retain all the information in a bandpass
signal, we actually need a sampling rate that aliases the entire spectrum to a lower frequency range without
overlap. The smallest such frequency is S = 2f_H/N, where N = int(f_H/B) is the integer part of f_H/B.
Other choices are also possible and result in the following bounds on the sampling frequency S:

2f_H/k ≤ S ≤ 2f_L/(k − 1),    k = 1, 2, . . . , N    (7.6)

The integer k can range from 1 to N. The value k = N yields the smallest sampling rate S = 2f_H/N, and
k = 1 yields the highest sampling rate corresponding to the Nyquist rate S ≥ 2f_H. If k is even, the spectrum
of the sampled signal shows reversal in the baseband.
EXAMPLE 7.4 (Sampling of Bandpass Signals)
Consider a bandpass signal x_C(t) with f_L = 4 kHz and f_H = 6 kHz. Then B = f_H − f_L = 2 kHz, and we
compute N = int(f_H/B) = 3. Possible choices for the sampling rate S (in kHz) are given by

12/k ≤ S ≤ 8/(k − 1),    k = 1, 2, 3

For k = 3, we have S = 4 kHz. This represents the smallest sampling rate.
For k = 2, we have 6 kHz ≤ S ≤ 8 kHz. Since k is even, the spectrum shows reversal in the baseband.
For k = 1, we have S ≥ 12 kHz, and this corresponds to exceeding the Nyquist rate.
Figure E7.4 shows the spectra of the analog signal and its sampled versions for S = 4, 7, 14 kHz.
Figure E7.4 Spectra of bandpass signal and its sampled versions for Example 7.4
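The bounds of Eq. (7.6) are easy to tabulate; a Python sketch (the function name is mine) reproduces the ranges of this example:

```python
def bandpass_sampling_ranges(fL, fH):
    # Valid ranges 2*fH/k <= S <= 2*fL/(k - 1) for k = 1..N, with N = int(fH/B).
    B = fH - fL
    N = int(fH / B)
    ranges = []
    for k in range(1, N + 1):
        upper = 2 * fL / (k - 1) if k > 1 else float("inf")
        ranges.append((k, 2 * fH / k, upper))
    return ranges

# fL = 4 kHz, fH = 6 kHz: k=1 gives S >= 12 kHz, k=2 gives 6-8 kHz, k=3 gives S = 4 kHz.
for k, lo, hi in bandpass_sampling_ranges(4000, 6000):
    print(k, lo, hi)
```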
7.1.4 Natural Sampling
Conceptually, natural sampling is equivalent to passing the signal x(t) through a switch that opens and
closes every t_s seconds. The action of the switch can be modeled as a periodic pulse train p(t) of unit height,
with period t_s and pulse width t_d. The sampled signal x_N(t) equals x(t) for the t_d seconds that the switch
remains closed and is zero when the switch is open, as illustrated in Figure 7.6.
Figure 7.6 Natural sampling
The naturally sampled signal is described by x_N(t) = x(t)p(t). The Fourier transform P(f) is a train of
impulses whose strengths P[k] equal the Fourier series coefficients of p(t):

P(f) = Σ_{k=−∞}^{∞} P[k] δ(f − kS)    (7.7)

The spectrum X_N(f) of the sampled signal x_N(t) = x(t)p(t) is described by the convolution

X_N(f) = X(f) * P(f) = X(f) * Σ_{k=−∞}^{∞} P[k] δ(f − kS) = Σ_{k=−∞}^{∞} P[k] X(f − kS)    (7.8)
The various spectra for natural sampling are illustrated in Figure 7.7. Again, X_N(f) is a superposition of
X(f) and its amplitude-scaled (by P[k]), shifted replicas. Since the P[k] decay as 1/k, the spectral images
get smaller in height as we move away from the origin. In other words, X_N(f) does not describe a periodic
spectrum. However, if there is no spectral overlap, the image centered at the origin equals P[0]X(f), and
X(f) can still be recovered by passing the sampled signal through an ideal lowpass filter with a cutoff
frequency of 0.5S and a gain of 1/P[0] over −0.5S ≤ f ≤ 0.5S, where P[0] is the dc offset in p(t). In theory,
natural sampling can be performed by any periodic signal with a nonzero dc offset.
Figure 7.7 Spectra of the signals for natural sampling
REVIEW PANEL 7.6
Natural Sampling Uses a Periodic Pulse Train p(t)
x_N(t) = x(t)p(t)    X_N(f) = Σ_{k=−∞}^{∞} P[k] X(f − kS)    P[k] = (1/t_s) ∫_{t_s} p(t) e^{−j2πkSt} dt
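For a rectangular pulse train of unit height and duty ratio d = t_d/t_s, with each pulse centered on a sampling instant, the integral in the panel evaluates to P[k] = d sinc(kd). A Python sketch under that centering assumption (helper names mine):

```python
import math

def sinc(x):
    # Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1.
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def P(k, d):
    # Fourier series coefficients of a centered, unit-height pulse train with duty ratio d.
    return d * sinc(k * d)

# Duty ratio 0.5: P[0] = 0.5 (the dc offset) and the even harmonics vanish.
print([P(k, 0.5) for k in range(3)])
```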
7.1.5 Zero-Order-Hold Sampling
In practice, analog signals are sampled using zero-order-hold (ZOH) devices that hold a sample value
constant until the next sample is acquired. This is also called flat-top sampling. This operation is
equivalent to ideal sampling followed by a system whose impulse response h(t) = rect[(t − 0.5t_s)/t_s] is a
pulse of unit height and duration t_s (to stretch the incoming impulses). This is illustrated in Figure 7.8.
Figure 7.8 Zero-order-hold sampling
The sampled signal x_ZOH(t) can be regarded as the convolution of h(t) and an ideally sampled signal:

x_ZOH(t) = h(t) * x_I(t) = h(t) * [ Σ_{n=−∞}^{∞} x(nt_s) δ(t − nt_s) ]    (7.9)
The transfer function H(f) of the zero-order-hold circuit is the sinc function

H(f) = t_s sinc(f t_s) e^{−jπf t_s} = (1/S) sinc(f/S) e^{−jπf/S}    (7.10)

Since the spectrum of the ideally sampled signal is S Σ_{k=−∞}^{∞} X(f − kS), the spectrum of the ZOH
sampled signal is

X_ZOH(f) = sinc(f/S) e^{−jπf/S} Σ_{k=−∞}^{∞} X(f − kS)    (7.11)

This spectrum is illustrated in Figure 7.9. The term sinc(f/S) attenuates the spectral images X(f − kS)
and causes sinc distortion. The higher the sampling rate S, the less is the distortion in the spectral image
X(f) centered at the origin.
Figure 7.9 Spectrum of a zero-order-hold sampled signal
REVIEW PANEL 7.7
The Zero-Order-Hold (ZOH) Sampled Signal Is a Staircase Approximation of x(t)
ZOH sampling: Ideal sampling followed by a hold system with h(t) = rect[(t − 0.5t_s)/t_s]
x_ZOH(t) = h(t) * Σ_{n=−∞}^{∞} x(nt_s) δ(t − nt_s)
X_ZOH(f) = sinc(f/S) e^{−jπf/S} Σ_{k=−∞}^{∞} X(f − kS)
An ideal lowpass filter with unity gain over −0.5S ≤ f ≤ 0.5S recovers the distorted signal

X̃(f) = X(f) sinc(f/S) e^{−jπf/S},   −0.5S ≤ f ≤ 0.5S    (7.12)
To recover X(f) with no amplitude distortion, we must use a compensating filter that negates the effects of the sinc distortion by providing a concave-shaped magnitude spectrum corresponding to the reciprocal of the sinc function over the principal period |f| ≤ 0.5S, as shown in Figure 7.10.
The magnitude spectrum of the compensating filter is given by

|H_r(f)| = 1/sinc(f/S),   |f| ≤ 0.5S    (7.13)
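As a quick numerical check (a sketch, not part of the text): at the band edge f = 0.5S the ZOH droop falls to sinc(0.5) ≈ 0.64, so the compensating filter of Eq. (7.13) must boost the band edge by a factor of π/2, or about 3.9 dB.

```python
import numpy as np

S = 1.0                                # normalized sampling rate
f = np.linspace(-0.5 * S, 0.5 * S, 501)

# ZOH sinc droop over the principal period and its reciprocal (the compensator)
droop = np.sinc(f / S)                 # np.sinc(x) = sin(pi x)/(pi x), as in the text
Hr = 1.0 / droop                       # Eq. (7.13)

edge_boost_db = 20 * np.log10(Hr[-1])  # boost needed at f = 0.5S
print(f"band-edge droop = {droop[-1]:.4f}, compensator boost = {edge_boost_db:.2f} dB")
```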
[Figure: the compensating filter's magnitude 1/sinc(f/S), rising toward the band edges over −0.5S ≤ f ≤ 0.5S, and its linear phase spanning −π/2 to π/2 radians.]
Figure 7.10 Spectrum of a filter that compensates for sinc distortion
EXAMPLE 7.5 (Sampling Operations)
(a) The signal x(t) = sinc(4000t) is sampled at a sampling frequency S = 5 kHz. The Fourier transform of x(t) = sinc(4000t) is X(f) = (1/4000)rect(f/4000) (a rectangular pulse over ±2 kHz). The signal x(t) is band-limited to f_B = 2 kHz. The spectrum of the ideally sampled signal is periodic with replication every 5 kHz, and may be written as

X_I(f) = Σ_{k=−∞}^{∞} SX(f − kS) = 1.25 Σ_{k=−∞}^{∞} rect[(f − 5000k)/4000]
(b) The magnitude spectrum of the zero-order-hold sampled signal is a version of the ideally sampled signal distorted (multiplied) by sinc(f/S), and described by

X_ZOH(f) = sinc(f/S) Σ_{k=−∞}^{∞} SX(f − kS) = 1.25 sinc(0.0002f) Σ_{k=−∞}^{∞} rect[(f − 5000k)/4000]
(c) The spectrum of the naturally sampled signal, assuming a rectangular sampling pulse train p(t) with unit height and a duty ratio of 0.5, is given by

X_N(f) = Σ_{k=−∞}^{∞} P[k]X(f − kS) = (1/4000) Σ_{k=−∞}^{∞} P[k] rect[(f − 5000k)/4000]

Here, P[k] are the Fourier series coefficients of p(t), with P[k] = 0.5 sinc(0.5k).
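As a numerical cross-check (a sketch, not part of the text), the Fourier series coefficients of a unit-height, 50% duty-ratio pulse train centered at the origin can be computed by direct integration over one period and compared against the closed form P[k] = 0.5 sinc(0.5k):

```python
import numpy as np

ts = 1.0                                     # one period of p(t), normalized
t = np.linspace(-0.5 * ts, 0.5 * ts, 100001)
dt = t[1] - t[0]
p = (np.abs(t) <= 0.25 * ts).astype(float)   # unit-height pulse, duty ratio 0.5

for k in range(4):
    # P[k] = (1/ts) * integral over one period of p(t) e^{-j 2 pi k S t}, S = 1/ts
    Pk = np.sum(p * np.exp(-2j * np.pi * k * t / ts)) * dt / ts
    closed = 0.5 * np.sinc(0.5 * k)          # closed form quoted in the text
    print(f"P[{k}]: numeric = {Pk.real:+.4f}, closed form = {closed:+.4f}")
```

The even-indexed coefficients (other than P[0] = 0.5) vanish, as expected for a 50% duty ratio.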
7.2 Sampling, Interpolation, and Signal Recovery
For a sampled sequence obtained from an analog signal x(t), an important aspect is the recovery of the original signal from its samples. This requires filling in the missing details, or interpolating between the sampled values. The nature of the interpolation that recovers x(t) may be discerned by considering the sampling operation in the time domain. Of the three sampling operations considered earlier, only ideal (or impulse) sampling leads to a truly discrete signal x[n] whose samples equal the strengths of the impulses x(nt_s)δ(t − nt_s) at the sampling instants nt_s. This is the only case we pursue.
7.2.1 Ideal Recovery and the Sinc Interpolating Function
The ideally sampled signal x_I(t) is the product of the impulse train i(t) = Σ δ(t − nt_s) and the analog signal x(t) and may be written as

x_I(t) = x(t) Σ_{n=−∞}^{∞} δ(t − nt_s) = Σ_{n=−∞}^{∞} x(nt_s)δ(t − nt_s) = Σ_{n=−∞}^{∞} x[n]δ(t − nt_s)    (7.14)
The discrete signal x[n] is just the sequence of samples x(nt_s). We can recover x(t) by passing x_I(t) through an ideal lowpass filter with a gain of t_s and a cutoff frequency of 0.5S. The frequency-domain and time-domain equivalents are illustrated in Figure 7.11.
[Figure: in the frequency domain, the periodic spectrum X_I(f) multiplied by an ideal lowpass filter H(f) of gain t_s and cutoff 0.5S recovers X(f); in the time domain, x_I(t) convolved with h(t) = sinc(t/t_s) recovers x(t).]
Figure 7.11 Recovery of an analog signal from its sampled version
The impulse response of the ideal lowpass filter is a sinc function given by h(t) = sinc(t/t_s). The recovered signal x(t) may therefore be described as the convolution

x(t) = x_I(t) ⋆ h(t) = [Σ_{n=−∞}^{∞} x(nt_s)δ(t − nt_s)] ⋆ h(t) = Σ_{n=−∞}^{∞} x[n]h(t − nt_s)    (7.15)
This describes the superposition of shifted versions of h(t) weighted by the sample values x[n]. Substituting for h(t), we obtain the following result that allows us to recover x(t) exactly from its samples x[n] as a sum of scaled, shifted versions of sinc functions:

x(t) = Σ_{n=−∞}^{∞} x[n] sinc[(t − nt_s)/t_s]    (7.16)
The signal x(t) equals the superposition of shifted versions of h(t) weighted by the sample values x[n]. At
each sampling instant, we replace the sample value x[n] by a sinc function whose peak value equals x[n] and
whose zero crossings occur at all the other sampling instants. The sum of these sinc functions yields the
analog signal x(t), as illustrated in Figure 7.12.
[Figure: ideal recovery of the samples x[n] = [1, 2, 3, 4] by sinc interpolation; the scaled, shifted sinc functions sum to the recovered signal, plotted against the index n and time t = nt_s.]
Figure 7.12 Ideal recovery of an analog signal by sinc interpolation
If we use a lowpass filter whose impulse response is h_f(t) = 2t_sB sinc(2Bt) (i.e., whose cutoff frequency is B instead of 0.5S), the recovered signal x(t) may be described by the convolution

x(t) = x_I(t) ⋆ h_f(t) = Σ_{n=−∞}^{∞} 2t_sB x[n] sinc[2B(t − nt_s)]    (7.17)
This general result is valid for any oversampled signal with t_s ≤ 0.5/B and reduces to the previously obtained result if the sampling rate S equals the Nyquist rate (i.e., S = 2B).
Sinc interpolation is unrealistic from a practical viewpoint. The infinite extent of the sinc means that it cannot be implemented on-line, and perfect reconstruction requires all its past and future values. We could truncate it on either side after its magnitude becomes small enough. Unfortunately, it decays very slowly and must be preserved for a fairly large duration (covering many past and future sampling instants) in order to provide a reasonably accurate reconstruction. Since the sinc function is also smoothly varying, it cannot properly reconstruct a discontinuous signal at the discontinuities, even with a large number of values. Sinc interpolation is also referred to as band-limited interpolation and forms the yardstick by which all other schemes are measured in their ability to reconstruct band-limited signals.
7.2.2 Interpolating Functions
Since the sinc interpolating function is a poor choice in practice, we must look to other interpolating signals. If an analog signal x(t) is to be recovered from its sampled version x[n] using an interpolating function h_i(t), what must we require of h_i(t) to obtain a good approximation to x(t)? At the very least, the interpolated approximation x̃(t) should match x(t) at the sampling instants nt_s. This suggests that h_i(t) should equal zero at all sampling instants, except the origin where it must equal unity, such that

h_i(t) = 1 for t = 0, and h_i(t) = 0 for t = nt_s (n = ±1, ±2, ±3, . . .)    (7.18)
In addition, we also require h_i(t) to be absolutely integrable to ensure that it stays finite between the sampling instants. The interpolated signal x̃(t) is simply the convolution of h_i(t) with the ideally sampled signal x_I(t), or a summation of shifted versions of the interpolating function

x̃(t) = h_i(t) ⋆ x_I(t) = Σ_{n=−∞}^{∞} x[n]h_i(t − nt_s)    (7.19)
At each instant nt_s, we erect the interpolating function h_i(t − nt_s), scale it by x[n], and sum to obtain x̃(t). At a sampling instant t = kt_s, the interpolating function h_i(kt_s − nt_s) equals zero, unless n = k, when it equals unity. As a result, x̃(t) exactly equals x(t) at each sampling instant. At all other times, the interpolated signal x̃(t) is only an approximation to the actual signal x(t).
7.2.3 Interpolation in Practice
The nature of h_i(t) dictates the nature of the interpolating system in terms of its causality, stability, and physical realizability. It also determines how good the reconstructed approximation is. There is no best interpolating signal. Some are better in terms of their accuracy, others are better in terms of their cost effectiveness, and still others are better in terms of their numerical implementation.
Step Interpolation
Step interpolation is illustrated in Figure 7.13 and uses a rectangular interpolating function of width t_s, given by h(t) = rect[(t − 0.5t_s)/t_s], to produce a stepwise or staircase approximation to x(t). Even though it appears crude, it is quite useful in practice.
[Figure: the samples x[n] and the rectangular interpolating function h(t) of unit height and width t_s, producing a staircase approximation.]
Figure 7.13 Step interpolation
At any time between two sampling instants, the reconstructed signal equals the previously sampled value and does not depend on any future values. This is useful for on-line or real-time processing, where the output is produced at the same rate as the incoming data. Step interpolation results in exact reconstruction of signals that are piecewise constant.
A system that performs step interpolation is just a zero-order-hold. A practical digital-to-analog converter (DAC) for sampled signals uses a zero-order-hold for a staircase approximation (step interpolation) followed by a lowpass (anti-imaging) filter (for smoothing the steps).
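A minimal sketch of step interpolation (illustrative code, not from the text; the function name and sample values are assumptions): each output instant simply takes the most recent sample.

```python
import numpy as np

def step_interpolate(x, t, ts=1.0):
    """Zero-order-hold reconstruction: hold each sample x[n]
    over the interval [n*ts, (n+1)*ts)."""
    n = np.floor(np.asarray(t, dtype=float) / ts).astype(int)
    n = np.clip(n, 0, len(x) - 1)       # hold the last sample past the end
    return np.asarray(x, dtype=float)[n]

x = [1, 2, 3, 4]                        # sample values, ts = 1
t = np.array([0.0, 0.5, 1.9, 2.5, 3.0])
print(step_interpolate(x, t))           # each value equals the previous sample
```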
Linear Interpolation
Linear interpolation is illustrated in Figure 7.14 and uses the interpolating function h(t) = tri(t/t_s) to produce a piecewise-linear approximation to x(t) between the sample values.
[Figure: the samples x[n] and the triangular interpolating function h(t) of unit height and width 2t_s, producing a piecewise-linear approximation.]
Figure 7.14 Linear interpolation
At any instant t between adjacent sampling instants nt_s and (n + 1)t_s, the reconstructed signal equals x[n] plus an increment that depends on the slope of the line joining x[n] and x[n + 1]. We have

x̃(t) = x[n] + (t − nt_s)(x[n + 1] − x[n])/t_s,   nt_s ≤ t < (n + 1)t_s    (7.20)
This operation requires one future value of the input and cannot actually be implemented on-line. It can, however, be realized with a delay of one sampling interval t_s, which is tolerable in many situations. Systems performing linear interpolation are also called first-order-hold systems. They yield exact reconstructions of piecewise linear signals.
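Equation (7.20) can be sketched directly (illustrative code, not from the text); for uniformly spaced samples it agrees with NumPy's np.interp:

```python
import numpy as np

def linear_interpolate(x, t, ts=1.0):
    """First-order-hold reconstruction following Eq. (7.20):
    x~(t) = x[n] + (t - n*ts) * (x[n+1] - x[n]) / ts."""
    x = np.asarray(x, dtype=float)
    n = np.clip(np.floor(np.asarray(t, dtype=float) / ts).astype(int), 0, len(x) - 2)
    return x[n] + (t - n * ts) * (x[n + 1] - x[n]) / ts

x = [1, 2, 3, 4]                               # sample values, ts = 1
t = np.array([0.5, 1.25, 2.5])
print(linear_interpolate(x, t))                # piecewise-linear values
print(np.interp(t, np.arange(len(x)), x))      # same result from np.interp
```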
Raised Cosine Interpolating Function
The sinc interpolating function forms the basis for several others described by the generic relation

h(t) = g(t) sinc(t/t_s),   where g(0) = 1    (7.21)

One of the more commonly used of these is the raised cosine interpolating function described by

h_rc(t) = [cos(πRt/t_s)/(1 − (2Rt/t_s)²)] sinc(t/t_s),   0 ≤ R ≤ 1    (7.22)
Here, R is called the roll-off factor. Like the sinc interpolating function, h_rc(t) equals 1 at t = 0 and 0 at the other sampling instants. It exhibits faster-decaying oscillations on either side of the origin for R > 0 as compared to the sinc function. This faster decay results in improved reconstruction if the samples are not acquired at exactly the sampling instants (in the presence of jitter, that is). It also allows fewer past and future values to be used in the reconstruction as compared with the sinc interpolating function. The terminology raised cosine is actually based on the shape of its spectrum. For R = 0, the raised cosine interpolating function reduces to the sinc interpolating function.
EXAMPLE 7.6 (Signal Reconstruction from Samples)
Let x[n] = {1, 2, 3, 4}, with t_s = 1. What is the value of the reconstructed signal x̃(t) at 2.5 s that results from step, linear, sinc, and raised cosine (with R = 0.5) interpolation?
(a) For step interpolation, the signal value at t = 2.5 s is simply the value at t = 2. Thus, x̃(2.5) = 3.
(b) If we use linear interpolation, the signal value at t = 2.5 s is simply the average of the values at t = 2 and t = 3. Thus, x̃(2.5) = 0.5(3 + 4) = 3.5.
(c) If we use sinc interpolation, we obtain

x̃(t) = Σ_{k=0}^{3} x[k] sinc(t − kt_s) = sinc(t) + 2 sinc(t − 1) + 3 sinc(t − 2) + 4 sinc(t − 3)

So, x̃(2.5) = 0.1273 − 0.4244 + 1.9099 + 2.5465 = 4.1592.
(d) If we use raised cosine interpolation (with R = 0.5), we obtain

x̃(t) = Σ_{k=0}^{3} x[k] sinc(t − k) cos[0.5π(t − k)]/[1 − (t − k)²]

We use this equation to compute

x̃(t) = sinc(t)cos(0.5πt)/(1 − t²) + 2 sinc(t − 1)cos[0.5π(t − 1)]/[1 − (t − 1)²] + 3 sinc(t − 2)cos[0.5π(t − 2)]/[1 − (t − 2)²] + 4 sinc(t − 3)cos[0.5π(t − 3)]/[1 − (t − 3)²]

Thus, x̃(2.5) = 0.0171 − 0.2401 + 1.8006 + 2.4008 = 3.9785.
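The two sums above are easy to verify numerically (a sketch, not part of the text; np.sinc uses the same sin(πx)/(πx) convention as the book):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])     # samples from Example 7.6, ts = 1
k = np.arange(4)
t = 2.5

# (c) sinc interpolation, Eq. (7.16)
x_sinc = np.sum(x * np.sinc(t - k))

# (d) raised cosine interpolation with roll-off R = 0.5, Eq. (7.22)
R = 0.5
tau = t - k
h_rc = np.sinc(tau) * np.cos(np.pi * R * tau) / (1 - (2 * R * tau) ** 2)
x_rc = np.sum(x * h_rc)

print(f"sinc: {x_sinc:.4f}, raised cosine: {x_rc:.4f}")
```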
7.3 Sampling Rate Conversion
In practice, different parts of a DSP system are often designed to operate at different sampling rates because of the advantages this offers. Sampling rate conversion can be performed directly on an already sampled signal, without having to re-sample the original analog signal, and requires concepts based on interpolation and decimation. Figure 7.15 shows the spectra of sampled signals obtained by sampling a band-limited analog signal x(t) whose spectrum is X(f) at a sampling rate S, a higher rate NS, and a lower rate S/M. All three rates exceed the Nyquist rate to prevent aliasing.
[Figure: the spectrum SX(f) for sampling rate S, NSX(f) for rate NS, and SX(f)/M for rate S/M, each periodic and band-limited to B.]
Figure 7.15 Spectra of a signal sampled at three sampling rates
The spectrum of the oversampled signal shows a gain of N but covers a smaller fraction of the principal period. The spectrum of a signal sampled at the lower rate S/M is a stretched version with a gain of 1/M. In terms of the digital frequency F, the period of all three sampled versions is unity. One period of the spectrum of the signal sampled at S Hz extends to B/S, whereas the spectrum of the same signal sampled at NS Hz extends only to B/NS, and the spectrum of the same signal sampled at S/M Hz extends farther out to BM/S. After an analog signal is first sampled, all subsequent sampling-rate changes are typically made by manipulating the signal samples (and not by resampling the analog signal). The key to the process lies in interpolation and decimation.
7.3.1 Zero Interpolation and Spectrum Compression
A property of the DTFT that forms the basis for signal interpolation and sampling rate conversion is that N-fold zero interpolation of a discrete-time signal x[n] leads to an N-fold spectrum compression and replication, as illustrated in Figure 7.16.
[Figure: a discrete-time signal x[n] and its periodic spectrum X(F); fourfold zero interpolation of x[n] produces the signal y[n], whose spectrum Y(F) shows fourfold compression and replication.]
Figure 7.16 Zero interpolation of a signal leads to spectrum compression
The zero-interpolated signal y[n] = x[n/N] is nonzero only if n = kN, with k = 0, ±1, ±2, . . . (i.e., only if n is an integer multiple of N). The DTFT Y_p(F) of y[n] may be expressed as

Y_p(F) = Σ_{n=−∞}^{∞} y[n]e^{−j2πnF} = Σ_{k=−∞}^{∞} y[kN]e^{−j2πkNF} = Σ_{k=−∞}^{∞} x[k]e^{−j2πkNF} = X_p(NF)    (7.23)
This describes Y_p(F) as a compressed version of the periodic spectrum X_p(F) and leads to N-fold spectrum replication within each unit period. This is exactly analogous to the Fourier series result for analog periodic signals, where zero interpolation of the spectrum produces replication (compression) of the periodic signal. The spectrum of the interpolated signal y[n] shows N compressed images per period, centered at F = k/N. The image centered at F = 0 occupies the frequency range |F| ≤ 0.5/N.
Similarly, the spectrum of a decimated signal y[n] = x[nM] is a stretched version of the original spectrum and is described by

Y_p(F) = (1/M) X_p(F/M)    (7.24)
The factor 1/M ensures that we satisfy Parseval's relation. If the spectrum of the original signal x[n] is confined to |F| ≤ 0.5/M in the central period, the spectrum of the decimated signal extends over |F| ≤ 0.5, and there is no aliasing or overlap between the spectral images. As a result, the central image represents a stretched version of the spectrum of the original signal x[n]. Otherwise, there is aliasing and the stretched images overlap. Because the images are added where overlap exists, the central image gets distorted and no longer represents a stretched version of the spectrum of the original signal. This is a situation that is best avoided.
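For a finite-length sequence, the DFT makes the replication in Eq. (7.23) easy to see (a quick sketch, not from the text): zero-interpolating x[n] by N leaves the DFT values unchanged but tiles them N times.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(16)        # arbitrary test sequence
N = 4                              # zero-interpolation factor

y = np.zeros(N * len(x))           # y[n] = x[n/N] when n is a multiple of N
y[::N] = x

# Y_p(F) = X_p(N F): the DFT of y is the DFT of x tiled N times
Y = np.fft.fft(y)
X_tiled = np.tile(np.fft.fft(x), N)
print(np.allclose(Y, X_tiled))
```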
REVIEW PANEL 7.8
The Spectra of Zero-Interpolated and Decimated Signals
Zero interpolation by N gives N compressed images per period, centered at F = k/N.
In decimation by M, the gain is reduced by M and the images are stretched by M (and added if they overlap).
EXAMPLE 7.7 (Zero Interpolation and Spectrum Replication)
The spectrum of a signal x[n] is X(F) = 2 tri(5F). Sketch X(F) and the spectra of the following signals
and explain how they are related to X(F).
1. The zero interpolated signal y[n] = x[n/2]
2. The decimated signal d[n] = x[2n]
3. The signal g[n] that equals x[n] for even n, and 0 for odd n
Refer to Figure E7.7 for the spectra.
(a) The spectrum Y(F) is a compressed version of X(F) with Y(F) = X(2F).
(b) The spectrum D(F) = 0.5X(0.5F) is a stretched version of X(F) with a gain factor of 0.5.
(c) The signal g[n] may be expressed as g[n] = 0.5(x[n] + (−1)ⁿx[n]). Its spectrum (or DTFT) is described by G(F) = 0.5[X(F) + X(F − 0.5)]. We may also obtain g[n] by first decimating x[n] by 2 (to get d[n]), and then zero-interpolating d[n] by 2.
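The DTFT relation in part (c) can be checked with the DFT (a sketch, not from the text; a shift of 0.5 in F corresponds to a circular shift of half the DFT length):

```python
import numpy as np

rng = np.random.default_rng(1)
L = 32
x = rng.standard_normal(L)

g = x.copy()
g[1::2] = 0.0                      # keep x[n] for even n, zero for odd n

# G(F) = 0.5 [X(F) + X(F - 0.5)]  ->  G[k] = 0.5 (X[k] + X[(k - L/2) mod L])
X = np.fft.fft(x)
G = np.fft.fft(g)
print(np.allclose(G, 0.5 * (X + np.roll(X, L // 2))))
```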
[Figure: the spectra X(F), Y(F), D(F), and G(F) for Example 7.7.]
Figure E7.7 The spectra of the signals for Example 7.7
7.3.2 Sampling Rate Increase
A sampling rate increase by an integer factor N involves zero interpolation followed by digital lowpass filtering, as shown in Figure 7.17. The signals and their spectra at various points in the system are illustrated in Figure 7.18 for N = 2.
[Figure: x[n] at rate S → up-sample (zero-interpolate) by N → i[n] at rate NS → anti-imaging digital filter (gain = N, F_C = 0.5/N) → w[n] at rate NS.]
Figure 7.17 Sampling rate increase by an integer factor
An up-sampler inserts N − 1 zeros between signal samples and results in an N-fold zero-interpolated signal corresponding to the higher sampling rate NS. Zero interpolation by N results in N-fold replication of the spectrum, whose central image over |F| ≤ 0.5/N corresponds to the spectrum of the oversampled signal (except for a gain of N). The spurious images are removed by passing the zero-interpolated signal through a lowpass filter with a gain of N and a cutoff frequency of F_C = 0.5/N to obtain the required oversampled signal. If the original samples were acquired at a rate that exceeds the Nyquist rate, the cutoff frequency of the digital lowpass filter can be made smaller than 0.5/N. This process yields exact results as long as the underlying analog signal has been sampled above the Nyquist rate and the filtering operation is assumed ideal.
[Figure: the input signal x[n] and its spectrum X(F); the zero-interpolated (up-sampled) signal i[n], whose spectrum I(F) shows compressed images; and the filtered signal w[n], whose spectrum W(F) retains only the central image after the gain-2 lowpass filter with F_C = 0.25.]
Figure 7.18 Spectra of signals when increasing the sampling rate
REVIEW PANEL 7.9
To Increase the Sampling Rate by N, Up-Sample by N and Lowpass Filter
Up-sampling compresses (and replicates) the spectrum. The LPF (F_C = 0.5/N, gain = N) removes the images.
EXAMPLE 7.8 (Up-Sampling and Filtering)
In the system of Figure E7.8, the spectrum of the input signal is given by X(F) = tri(2F), the up-sampling is by a factor of 2, and the impulse response of the digital filter is given by h[n] = 0.25 sinc(0.25n). Sketch X(F), W(F), H(F), and Y(F) over −0.5 ≤ F ≤ 0.5.
[Figure: x[n] with spectrum X(F) → up-sample (zero-interpolate) by N → w[n] with spectrum W(F) → digital filter H(F) → y[n] with spectrum Y(F).]
Figure E7.8 The system for Example 7.8
The various spectra are shown in Figure E7.8A. The spectrum X(F) is a triangular pulse of unit width. Zero interpolation by two results in the compressed spectrum W(F). Now, h[n] = 2F_C sinc(2nF_C) corresponds to an ideal filter with a cutoff frequency of F_C. Thus, F_C = 0.125, and the filter passes only the frequency range |F| ≤ 0.125.
[Figure: X(F) is a unit-height triangle over |F| ≤ 0.5; W(F) is its compressed version over |F| ≤ 0.25; H(F) is an ideal lowpass filter over |F| ≤ 0.125; Y(F) is the filtered result, confined to |F| ≤ 0.125.]
Figure E7.8A The spectra at various points for the system of Example 7.8
7.3.3 Sampling Rate Reduction
A reduction in the sampling rate by an integer factor M involves digital lowpass filtering followed by decimation, as shown in Figure 7.19. The signals and their spectra at various points in the system for M = 2 are shown in Figure 7.20.
[Figure: x[n] at rate S → anti-aliasing digital filter (gain = 1, F_C = 0.5/M) → v[n] at rate S → down-sample (keep every Mth sample) → d[n] at rate S/M.]
Figure 7.19 Sampling rate reduction by an integer factor
First, the sampled signal is passed through a lowpass filter (with unit gain) to ensure that it is band-limited to |F| ≤ 0.5/M in the principal period and to prevent aliasing during the decimation stage. It is then decimated by a down-sampler that retains every Mth sample. The spectrum of the decimated signal is a stretched version that extends to |F| ≤ 0.5 in the principal period. Note that the spectrum of the decimated signal has a gain of 1/M, as required, and it is for this reason that we use a lowpass filter with unit gain. This process yields exact results as long as the underlying analog signal has been sampled at a rate that is M times the Nyquist rate (or higher) and the filtering operation is ideal.
[Figure: the input signal x[n] and its spectrum X(F); the filtered signal v[n], whose spectrum V(F) is band-limited to |F| ≤ 0.25 by the unit-gain filter with F_C = 0.25; and the decimated signal d[n], whose spectrum D(F) is stretched by 2 with a gain of 0.5.]
Figure 7.20 The spectra of various signals during sampling rate reduction
REVIEW PANEL 7.10
To Decrease the Sampling Rate by M: Unity-Gain LPF (F_C = 0.5/M) and Down-Sampling by M
The LPF (F_C = 0.5/M) band-limits the input. Down-sampling stretches the spectrum.
Fractional sampling-rate changes by a factor N/M can be implemented by cascading a system that increases the sampling rate by N (interpolation) and a system that reduces the sampling rate by M (decimation). In fact, we can replace the two lowpass filters that are required in the cascade (both of which operate at the sampling rate NS) by a single lowpass filter whose gain is N and whose cutoff frequency is the smaller of 0.5/M and 0.5/N, as shown in Figure 7.21.
[Figure: x[n] at rate S → up-sample by N → w[n] at rate NS → digital lowpass filter (gain = N, F_C = min(0.5/N, 0.5/M)) → v[n] at rate NS → down-sample by M → y[n] at rate S(N/M).]
Figure 7.21 Illustrating fractional sampling-rate change
REVIEW PANEL 7.11
How to Implement a Fractional Sampling-Rate Change by N/M
Interpolate by N ⇒ lowpass filter (using F_C = 0.5/max(M, N) and gain = N) ⇒ decimate by M.
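A minimal sketch of this cascade, using a windowed-sinc FIR filter in place of the ideal lowpass filter (all parameter values and the function name are illustrative, not from the text): the rate is raised by N = 2 and lowered by M = 3, so the output rate is 2S/3.

```python
import numpy as np

def resample_by(x, N, M, taps=101):
    """Fractional rate change by N/M: zero-interpolate by N,
    lowpass filter (gain N, cutoff 0.5/max(M, N)), down-sample by M."""
    w = np.zeros(N * len(x))
    w[::N] = x                                   # up-sample (zero-interpolate)
    k = np.arange(taps) - (taps - 1) // 2
    Fc = 0.5 / max(M, N)
    h = 2 * Fc * np.sinc(2 * Fc * k) * np.hamming(taps)
    h *= N / h.sum()                             # set the DC gain to N
    v = np.convolve(w, h, mode="same")           # anti-imaging / anti-aliasing filter
    return v[::M]                                # down-sample

n = np.arange(400)
x = np.sin(2 * np.pi * 0.02 * n)                 # slowly varying test input
y = resample_by(x, N=2, M=3)

# interior samples of y should track the underlying sine at the new rate
m = np.arange(60, len(y) - 60)
err = np.max(np.abs(y[m] - np.sin(2 * np.pi * 0.02 * 1.5 * m)))
print(f"peak interior error = {err:.4f}")
```

The edges are excluded from the error check because the finite-length filter cannot be ideal near the data boundaries.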
7.4 Quantization
The importance of digital signals stems from the proliferation of high-speed digital computers for signal processing. Due to the finite memory limitations of such machines, we can process only finite data sequences. We must not only sample an analog signal in time but also quantize (round or truncate) the signal amplitudes to a finite set of values. Since quantization affects only the signal amplitude, both analog and discrete-time signals can be quantized. Quantized discrete-time signals are called digital signals.
Each quantized sample is represented as a group (word) of zeros and ones (bits) that can be processed digitally. The finer the quantization, the longer the word. Like sampling, improper quantization leads to loss of information. But unlike sampling, no matter how fine the quantization, its effects are irreversible, since word lengths must necessarily be finite. The systematic treatment of quantization theory is very difficult because finite word lengths appear as nonlinear effects. Quantization always introduces some noise, whose effects can be described only in statistical terms, and is usually considered only in the final stages of any design; many of its effects (such as overflow and limit cycles) are beyond the realm of this text.
7.4.1 Uniform Quantizers
Quantizers are devices that operate on a signal to produce a finite number of amplitude levels, or quantization levels. It is common practice to use uniform quantizers with equally spaced quantization levels.
The number of levels L in most quantizers used in an analog-to-digital converter (ADC) is invariably a power of 2. If L = 2^B, each of the L levels is coded to a binary number, and each signal value is represented in binary form as a B-bit word corresponding to its quantized value. A 4-bit quantizer is thus capable of 2^4 (or 16) levels, and a 12-bit quantizer yields 2^12 (or 4096) levels.
A signal may be quantized by rounding to the nearest quantization level, by truncation to a level smaller than the next higher one, or by sign-magnitude truncation, which is rather like truncating absolute values and then using the appropriate sign. These operations are illustrated in Figure 7.22.
For a quantizer with a full-scale amplitude of ±X, input values outside the range |x| ≤ X will get clipped and result in overflow. The observed value may be set to the full-scale value (saturation) or to zero (zeroing), leading to the overflow characteristics shown in Figure 7.23.
The quantized signal value is usually represented as a group (word) with a specified number of bits called the word length. Several number representations are in use and are illustrated in Table 7.1 for B = 3. Some forms of number representation are better suited to some combinations of quantization and overflow characteristics and, in practice, certain combinations are preferred. In any case, the finite length of binary words leads to undesirable, and often irreversible, effects collectively known as finite-word-length effects.
[Figure: the original signal and its quantized versions for rounding, truncation, and sign-magnitude truncation, each with step size Δ.]
Figure 7.22 Various ways of quantizing a signal
[Figure: observed value versus actual value for the saturation characteristic (clipping at ±V) and the zeroing characteristic (output set to zero beyond ±V).]
Figure 7.23 Two overflow characteristics
Table 7.1 Various Number Representations for B = 3 Bits

Decimal   Sign and     One's        Two's        Offset
Value     Magnitude    Complement   Complement   Binary
+4        —            —            —            111
+3        011          011          011          110
+2        010          010          010          101
+1        001          001          001          100
+0        000          000          000          011
−0        100          111          —            —
−1        101          110          111          010
−2        110          101          110          001
−3        111          100          101          000
−4        —            —            100          —
7.4.2 Quantization Error and Quantization Noise
The quantization error, naturally enough, depends on the number of levels. If the quantized signal corresponding to a discrete signal x[n] is denoted by x_Q[n], the quantization error ε[n] equals

ε[n] = x[n] − x_Q[n]    (7.25)
It is customary to define the quantization signal-to-noise ratio (SNR_Q) as the ratio of the power P_S in the signal to the power P_N in the error ε[n] (or noise). This is usually measured in decibels, and we get

P_S = (1/N) Σ_{n=0}^{N−1} x²[n]    P_N = (1/N) Σ_{n=0}^{N−1} ε²[n]    SNR_Q (dB) = 10 log(P_S/P_N) = 10 log(Σ x²[n]/Σ ε²[n])    (7.26)
The effect of quantization errors due to rounding or truncation is quite difficult to quantify analytically unless statistical estimates are used. The dynamic range, or full-scale range, of a signal x(t) is defined as its maximum variation D = x_max − x_min. If x(t) is sampled and quantized to L levels using a quantizer with a full-scale range of D, the quantization step size, or resolution, Δ, is defined as

Δ = D/L    (7.27)

This step size also corresponds to the least significant bit (LSB). The dynamic range of a quantizer is often expressed in decibels. For a 16-bit quantizer, the dynamic range is 20 log 2^16 ≈ 96 dB.
For quantization by rounding, the quantization error must lie between −Δ/2 and Δ/2. If L is large, the error is equally likely to take on any value between −Δ/2 and Δ/2 and is thus uniformly distributed. Its probability density function f(ε) equals 1/Δ over (−Δ/2, Δ/2), as shown in Figure 7.24. The noise power (variance) equals

σ² = ∫_{−Δ/2}^{Δ/2} ε²f(ε) dε = (1/Δ) ∫_{−Δ/2}^{Δ/2} ε² dε = Δ²/12    (7.28)
(a) The quantity σ = Δ/√12 is the rms quantization error. To ensure an rms error of no more than 0.005 for a signal with a full-scale range D = 4, we require L = D/Δ = 4/(0.005√12) = 230.94 levels, and B = log₂ 230.94 = 7.85. Rounding up this result, we get B = 8 bits.
(b) Consider the ramp x(t) = 2t over (0, 1). For a sampling interval of 0.1 s and L = 4 (so Δ = 0.5), we obtain the sampled signal, quantized (by rounding) signal, and error signal as
x[n] = {0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0}
x_Q[n] = {0, 0.0, 0.5, 0.5, 1.0, 1.0, 1.0, 1.5, 1.5, 2.0, 2.0}
e[n] = {0, 0.2, −0.1, 0.1, −0.2, 0.0, 0.2, −0.1, 0.1, −0.2, 0.0}
We can now compute the SNR in several ways.
1. SNR_Q = 10 log(P_S/P_N) = 10 log(Σ x²[n]/Σ e²[n]) = 10 log(15.4/0.2) = 18.9 dB
2. We could also use SNR_Q = 10 log P_S + 10.8 + 20 log L − 20 log D. With D = 2, L = 4, and N = 11, we find P_S = (1/N) Σ x²[n] = 1.4, and
SNR_Q = 10 log 1.4 + 10.8 + 20 log 4 − 20 log 2 = 18.28 dB
3. If we use the statistical estimate P_S = ∫₀¹ x²(t) dt = 4/3, we obtain
SNR_S = 10 log(4/3) + 10 log 12 + 20 log 4 − 20 log 2 = 18.062 dB
Why the differences between the various results? Because SNR_S is a statistical estimate. The larger the number of samples N, the less SNR_Q and SNR_S differ. For N = 500, for example, we find that SNR_Q = 18.0751 dB and SNR_S = 18.0748 dB are very close indeed.
(c) Consider the sinusoid x(t) = A cos(2πft). The power in x(t) is P_S = 0.5A². The dynamic range of x(t) is D = 2A (the peak-to-peak value). For a B-bit quantizer, we obtain the widely used result

SNR_S = 10 log P_S + 10.8 + 6B − 20 log D = 6B + 1.76 dB
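The numbers in part (b) are easy to reproduce (a sketch, not part of the text):

```python
import numpy as np

delta = 0.5                               # step size: D = 2, L = 4
x = 0.2 * np.arange(11)                   # samples of x(t) = 2t at ts = 0.1 s
xq = np.round(x / delta) * delta          # quantization by rounding
e = x - xq                                # quantization error

snr = 10 * np.log10(np.sum(x**2) / np.sum(e**2))
print(f"sum x^2 = {np.sum(x**2):.1f}, sum e^2 = {np.sum(e**2):.1f}, SNR = {snr:.2f} dB")
```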
c Ashok Ambardar, September 1, 2003
7.4.3 Quantization and Oversampling
Even though sampling and quantization are independent operations, it turns out that oversampling allows us to use quantizers with fewer bits per sample. The idea is that the loss of accuracy in the sample values (from using fewer bits) is offset by the larger number of samples (from using a higher sampling rate). This is a consequence of how the quantization noise power is distributed.
The quantized signal is x_Q[n] = x[n] + ε[n]. The quantization error ε[n] may be assumed to be a white-noise sequence that is uncorrelated with the signal x[n], with a uniform probability density over the range (−0.5Δ, 0.5Δ). The variance (average power) of ε[n] is σ² = Δ²/12. The spectrum of ε[n] is flat over the principal range (−0.5S, 0.5S), and the average power σ² is equally distributed over this range. The power spectral density of ε[n] is thus P_ee(f) = σ²/S. If a signal is oversampled at the rate NS, the noise spectrum is spread over the wider frequency range (−0.5NS, 0.5NS). The power spectral density is thus N times smaller for the same total quantization noise power. In fact, the in-band quantization noise in the region of interest, (−0.5S, 0.5S), is also decreased by a factor of N. This is illustrated in Figure 7.25.
[Figure: the flat noise power spectral density σ²/S over (−S/2, S/2); the lower density σ₂²/NS spread over (−NS/2, NS/2) after oversampling; and the noise-shaped density |H_n(f)|²σ₂²/NS, pushed out of the in-band region.]
Figure 7.25 Quantization noise spectrum of an oversampled signal
If we were to keep the in-band noise power the same as before, we can recover the signal using fewer bits. For such a quantizer with B₂ = (B − ΔB) bits per sample, an average noise power of σ₂², and a sampling rate of S₂ = NS Hz, we obtain the same in-band quantization noise power when its power spectral density is also σ²/S, as illustrated in Figure 7.25. Thus,

σ²/S = σ₂²/(NS)   or   σ² = σ₂²/N    (7.32)
Assuming the same full-scale range D for each quantizer, we obtain

D²/[(12)2^{2B}] = D²/[(12N)2^{2(B−ΔB)}]   or   N = 2^{2ΔB}   or   ΔB = 0.5 log₂ N    (7.33)
This result suggests that we gain 0.5 bits for every doubling of the sampling rate. For example, N = 4 (four-times oversampling) leads to a gain of 1 bit. In practice, a better trade-off between bits and samples is provided by quantizers that not only use oversampling but also shape the noise spectrum (using filters) to further reduce the in-band noise, as shown in Figure 7.25. A typical noise shape is the sine function, and a pth-order noise-shaping filter has the form H_NS(f) = |2 sin(πf/NS)|^p, −0.5NS ≤ f ≤ 0.5NS, where N is the oversampling factor. Equating the filtered in-band noise power to σ², we obtain

σ² = (σ₂²/N)(1/S) ∫_{−S/2}^{S/2} |H_NS(f)|² df    (7.34)
If N is large, H_NS(f) ≈ |2πf/NS|^p over the much smaller in-band range (−0.5S, 0.5S), and we get

σ² = (σ₂²/N)(1/S) ∫_{−S/2}^{S/2} (2πf/NS)^{2p} df = σ₂² π^{2p}/[(2p + 1)N^{2p+1}]    (7.35)
Simplication, with
2
2
/
2
= 2
2B
, gives
B = (p + 0.5)log
2
N 0.5 log
2
2p
2p + 1
(7.36)
This shows that noise shaping (or error-spectrum shaping) results in a savings of p log₂ N additional bits.
With p = 1 and N = 4, for example, ΔB ≈ 2 bits. This means that we can make do with a (B − 2)-bit DAC
during reconstruction if we use noise shaping with four-times oversampling. In practical implementation,
noise shaping is achieved by using an oversampling sigma-delta ADC. State-of-the-art CD players make use
of this technology. What does it take to achieve 16-bit quality using a 1-bit quantizer (which is just a sign
detector)? Since ΔB = 15, we could, for example, use oversampling by N = 64 (to 2.8 MHz for audio signals
sampled at 44.1 kHz) and p = 3 (third-order noise shaping).
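The bit-savings formulas (7.33) and (7.36) are easy to check numerically. The short sketch below is illustrative only; the function names are ours, not from the text:

```python
import math

def bits_saved_oversampling(N: int) -> float:
    """Bit savings from plain N-times oversampling, Eq. (7.33)."""
    return 0.5 * math.log2(N)

def bits_saved_noise_shaping(N: int, p: int) -> float:
    """Bit savings with pth-order noise shaping, Eq. (7.36)."""
    return (p + 0.5) * math.log2(N) - 0.5 * math.log2(math.pi**(2 * p) / (2 * p + 1))

# Plain four-times oversampling buys half a bit per doubling, i.e. one bit:
print(bits_saved_oversampling(4))                  # 1.0
# First-order shaping with N = 4 buys roughly two bits:
print(round(bits_saved_noise_shaping(4, 1), 2))    # about 2.14
# The 16-bit-from-1-bit example: N = 64, p = 3 exceeds the required 15 bits:
print(round(bits_saved_noise_shaping(64, 3), 2))   # about 17.45
```

The N = 64, p = 3 case confirms the design chosen in the text for 16-bit quality from a 1-bit quantizer.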
7.5 Digital Processing of Analog Signals
The crux of the sampling theorem is not just the choice of an appropriate sampling rate. More important,
the processing of an analog signal is equivalent to the processing of its Nyquist sampled version, because it
retains the same information content as the original. This is how the sampling theorem is used in practice.
It forms the link between analog and digital signal processing, and allows us to use digital techniques to
manipulate analog signals. When we sample a signal x(t) at the instants nt_s, we imply that the spectrum
of the sampled signal is periodic with period S = 1/t_s and band-limited to a highest frequency B = 0.5S.
Figure 7.26 illustrates a typical system for analog-to-digital conversion.
Analog signal → Sampler → DT signal → Hold circuit → Staircase signal → Quantizer and encoder → Digital signal (e.g., 01101001110)
Figure 7.26 Block diagram of a system for analog-to-digital conversion
An analog lowpass pre-filter or anti-aliasing filter (not shown) limits the highest analog signal frequency
to allow a suitable choice of the sampling rate and ensure freedom from aliasing. The sampler operates
above the Nyquist sampling rate and is usually a zero-order-hold device. The quantizer limits the sampled
signal values to a finite number of levels (16-bit quantizers allow a signal-to-noise ratio close to 100 dB). The
encoder converts the quantized signal values to a string of binary bits, or zeros and ones (words), whose
length is determined by the number of quantization levels of the quantizer.
A digital signal processing system, in hardware or software (consisting of digital filters), processes the
encoded digital signal (or bit stream) in a desired fashion. Digital-to-analog conversion essentially reverses
the process and is accomplished by the system shown in Figure 7.27.
A decoder converts the processed bit stream to a discrete signal with quantized signal values. The zero-
order-hold device reconstructs a staircase approximation of the discrete signal. The lowpass analog post-
filter, or anti-imaging filter, extracts the central period from the periodic spectrum, removes the unwanted
replicas (images), and results in a smoothed reconstructed signal.
Digital signal (e.g., 01101001110) → Decoder → DT signal → Hold circuit → Staircase signal → Analog post-filter → Analog signal
Figure 7.27 Block diagram of a system for digital-to-analog conversion
7.5.1 Multirate Signal Processing and Oversampling
In practice, different parts of a DSP system are often designed to operate at different sampling rates because
of the advantages it offers. Since real-time digital filters must complete all algorithmic operations in one sam-
pling interval, a smaller sampling interval (i.e., a higher sampling rate) can impose an added computational
burden during the digital processing stage. It is for this reason that the sampling rate is often reduced (by
decimation) before performing DSP operations and increased (by interpolation) before reconstruction. This
leads to the concept of multirate signal processing, where different subsystems operate at different sampling
rates best suited for the given task.
Oversampling does offer several advantages. At the input end, oversampling an analog signal prior to
quantization allows the use of simple anti-aliasing filters. It also allows the use of quantizers with lower
resolution (fewer bits) to achieve a given SNR, because oversampling reduces the quantization noise level by
spreading the quantization noise over a wider bandwidth. At the output end, oversampling of the processed
digital signal (by sampling-rate conversion) prior to reconstruction can reduce the errors (sinc distortion)
caused by zero-order-hold devices. It also allows us to use a DAC with lower resolution and a lower-order
anti-imaging filter for final analog reconstruction.
7.5.2 Practical ADC Considerations
An analog-to-digital converter (ADC) for converting analog to digital signals consists of a sample-and-hold
circuit followed by a quantizer and encoder. A block diagram of a typical sample-and-hold circuit and its
practical realization is shown in Figure 7.28.
Figure 7.28 Block diagram of a sample-and-hold system and a practical realization
A clock signal controls a switch (an FET, for example) that allows the hold capacitor C to charge
very rapidly to the sampled value (when the switch is closed) and to discharge very slowly through the
high input resistance of the buffer amplifier (when the switch is open). Ideally, the sampling should be as
instantaneous as possible and, once acquired, the level should be held constant until it can be quantized and
encoded. Practical circuits also include an operational amplifier at the input to isolate the source from the
hold capacitor and provide better tracking of the input. In practice, the finite aperture time T_A (during
which the signal is being measured), the finite acquisition time T_H (to switch from the hold mode to the
sampling mode), the droop in the capacitor voltage (due to the leakage of the hold capacitor), and the finite
conversion time T_C (of the quantizer) are all responsible for less than perfect performance.
A finite aperture time limits both the accuracy with which a signal can be measured and the highest
frequency that can be handled by the ADC. Consider the sinusoid x(t) = A sin(2πf₀t). Its derivative
x′(t) = 2πAf₀ cos(2πf₀t) describes the rate at which the signal changes. The fastest rate of change equals
2πAf₀ and occurs at the zero crossings of x(t). If we assume that the signal level can change by no more than ΔX
during the aperture time T_A, we must satisfy

    2πAf₀ ≤ ΔX/T_A   or   f₀ ≤ ΔX/(πD T_A)    (7.37)

where D = 2A corresponds to the full-scale range of the quantizer. Typically, ΔX may be chosen to equal the
rms quantization error (which equals Δ/√12).
During the hold time, the capacitor voltage discharges through the input resistance R of the buffer amplifier
as v(t) = V₀e^{−t/RC}, with an initial rate of decay

    v′(t)|_{t=0} = −V₀/RC    (7.38)

A proper choice of the holding capacitor C can minimize the droop. If the maximum
droop is restricted to ΔV during the hold time T_H, we must satisfy

    V₀/RC ≤ ΔV/T_H   or   C ≥ V₀T_H/(R ΔV)    (7.39)
To impose a lower bound, V₀ is typically chosen to equal the full-scale range, and ΔV may be chosen to
equal the rms quantization error (which equals Δ/√12).
At the output end, a digital-to-analog converter (DAC) such as the weighted-resistor circuit of Figure 7.29
reconstructs an analog value whose level is proportional to the weighted sum

    b_{N−1}2^{N−1} + b_{N−2}2^{N−2} + ··· + b₁2¹ + b₀2⁰    (7.41)
Figure 7.29 A system for digital-to-analog conversion
where the coefficients b_k correspond to the positions of the switches and equal 1 (if connected to the input)
or 0 (if connected to the ground). Practical circuits are based on modifications that use only a few different
resistor values (such as R and 2R) to overcome the problem of choosing a wide range of resistor values
(especially for high-bit converters). Of course, a 1-bit DAC simply corresponds to a constant gain.
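Since the weighted sum in Eq. (7.41) is simply the binary value of the digital word, it can be sketched in a few lines (the helper name is ours):

```python
def dac_value(bits):
    """Weighted sum b_(N-1)*2^(N-1) + ... + b_0*2^0 of Eq. (7.41).
    The switch positions b_k are given MSB first, each 0 or 1."""
    value = 0
    for b in bits:
        value = (value << 1) | b  # Horner evaluation of the weighted sum
    return value

print(dac_value([1, 0, 1, 1]))    # 1*8 + 0*4 + 1*2 + 1*1 = 11
print(dac_value([1] * 8))         # full-scale 8-bit word: 255
```

Scaling this integer by the resistor-network gain gives the reconstructed analog level.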
EXAMPLE 7.10 (ADC Considerations)
(a) Suppose we wish to digitize a signal band-limited to 15 kHz using only a 12-bit quantizer. If we
assume that the signal level can change by no more than the rms quantization error during capture,
the aperture time T_A must satisfy

    T_A ≤ ΔX/(πDf₀) = (D/2^{12})/(πDf₀) = 1/[π 2^{12}(15)(10³)] ≈ 5.18 ns

The conversion time of most practical quantizers is much larger (in the microsecond range), and as a
result, we can use such a quantizer only if it is preceded by a sample-and-hold circuit whose aperture
time is less than 5 ns.
(b) If we digitize a signal band-limited to 15 kHz, using a sample-and-hold circuit (with a capture time of
4 ns and a hold time of 10 μs) followed by a 12-bit quantizer, the conversion time of the quantizer can
be computed from

    S ≤ 1/(T_A + T_H + T_C)   or   T_A + T_H + T_C ≤ 1/S = 1/[(30)10³]   ⇒   T_C = 23.3 μs

The value of T_C is well within the capability of practical quantizers.
(c) Suppose the sample-and-hold circuit is buffered by an amplifier with an input resistance of 1 MΩ. To
ensure a droop of no more than 0.5 LSB during the hold phase, we require

    C ≥ V₀T_H/(R ΔV)

Now, if V₀ corresponds to the full-scale value, ΔV = 0.5 LSB = V₀/2^{B+1}, and

    C ≥ V₀T_H/[R(V₀/2^{B+1})] = (10)10^{−6} 2^{13}/10⁶ ≈ 81.9 nF
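The three numerical results of Example 7.10 can be reproduced directly. The variable names below are ours, and ΔX in part (a) is taken as one quantization step D/2^B, the value consistent with the 5.18 ns figure:

```python
import math

B = 12         # quantizer bits
f0 = 15e3      # highest signal frequency (Hz)

# (a) Aperture time, Eq. (7.37): T_A <= dX/(pi*D*f0), with dX = D/2^B
TA_max = 1 / (math.pi * 2**B * f0)
print(f"T_A <= {TA_max * 1e9:.2f} ns")    # about 5.18 ns

# (b) Conversion time from S <= 1/(T_A + T_H + T_C), with S = 30 kHz
TA, TH, S = 4e-9, 10e-6, 30e3
TC_max = 1 / S - TA - TH
print(f"T_C <= {TC_max * 1e6:.1f} us")    # about 23.3 us

# (c) Hold capacitor, Eq. (7.39): C >= V0*TH/(R*dV), with dV = V0/2^(B+1)
R = 1e6                                   # buffer input resistance (ohms)
C_min = TH * 2**(B + 1) / R
print(f"C >= {C_min * 1e9:.1f} nF")       # about 81.9 nF
```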
7.5.3 Anti-Aliasing Filter Considerations
The purpose of the anti-aliasing filter is to band-limit the input signal. In practice, however, we cannot
design brick-wall filters, and some degree of aliasing is inevitable. The design of anti-aliasing filters must
ensure that the effects of aliasing are kept small. One way to do this is to attenuate components above
the Nyquist frequency to a level that cannot be detected by the ADC. The choice of sampling frequency
S is thus dictated not only by the highest frequency of interest but also by the resolution of the ADC. If the
aliasing level is ΔV and the maximum passband level is V, we require a filter with a stopband attenuation
of A_s > 20 log(V/ΔV) dB. If the peak signal level equals A, the passband edge is defined as the half-power
(or 3-dB) frequency, and ΔV is chosen as the rms quantization error for a B-bit quantizer, we have

    A_s > 20 log [maximum rms passband level / minimum rms stopband level] = 20 log [(A/√2)/(A/(2^B√12))] = 20 log(2^B√6) dB    (7.42)
EXAMPLE 7.11 (Anti-Aliasing Filter Considerations)
Suppose we wish to process a noisy speech signal with a bandwidth of 4 kHz using an 8-bit ADC.
We require a minimum stopband attenuation of A_s = 20 log(2^B√6) ≈ 56 dB.
If we use a fourth-order Butterworth anti-aliasing filter with a 3-dB passband edge of f_p, the normalized
frequency ν_s = f_s/f_p at which the stopband attenuation of 56 dB occurs is computed from

    A_s = 10 log(1 + ν_s^{2n})   or   ν_s = [10^{0.1A_s} − 1]^{1/2n} ≈ 5

If the passband edge f_p is chosen as 4 kHz, the actual stopband frequency is f_s = ν_s f_p = 20 kHz.
If the stopband edge also corresponds to the highest frequency of interest, then S = 40 kHz.
The frequency f_a that gets aliased to the passband edge f_p corresponds to f_a = S − f_p = 36 kHz.
The attenuation A_a (in dB) at this frequency f_a is

    A_a = 10 log(1 + ν_a^{2n}) = 10 log(1 + 9⁸) ≈ 76.3 dB

This corresponds to a signal level (relative to unity) of (1.5)10^{−4}, well below the rms quantization error
(which equals 1/(2^B√12) = 0.0011).
7.5.4 Anti-Imaging Filter Considerations
The design of reconstruction filters requires that we extract only the central image and minimize the sinc
distortion caused by practical zero-order-hold sampling. The design of such filters must meet stringent spec-
ifications unless oversampling is employed during the digital-to-analog conversion. In theory, oversampling
(at a rate much higher than the Nyquist rate) may appear wasteful, because it can lead to the manipulation
of a large number of samples. In practice, however, it offers several advantages. First, it provides ade-
quate separation between spectral images and thus allows filters with less stringent cutoff requirements to
be used for signal reconstruction. Second, it minimizes the effects of the sinc distortion caused by practical
zero-order-hold sampling. Figure 7.30 shows staircase reconstructions at two sampling rates.
The smaller steps in the staircase reconstruction for higher sampling rates lead to a better approximation
of the analog signal, and these smaller steps are much easier to smooth out using lower-order filters with less
stringent specifications.
Figure 7.30 Staircase reconstruction of a signal at low (left) and high (right) sampling rates
EXAMPLE 7.12 (Anti-Imaging Filter Considerations)
(a) Consider the reconstruction of an audio signal band-limited to 20 kHz, assuming ideal sampling. The
reconstruction filter is required to have a maximum signal attenuation of 0.5 dB and to attenuate all
images by at least 60 dB.
The passband and stopband attenuations are A_p = 0.5 dB at the passband edge f_p = 20 kHz and
A_s = 60 dB at the stopband edge f_s = S − f_p = S − 20 kHz, where S is the sampling rate in kilohertz.
This is illustrated in Figure E7.12A.
Figure E7.12A Spectra for Example 7.12(a), showing the sinc(f/S) envelope and the spectral images beyond S − 20 kHz
If we design a Butterworth filter, its order n is given by

    n = log{[(10^{0.1A_s} − 1)/ε²]^{1/2}} / log(f_s/f_p),   where ε² = 10^{0.1A_p} − 1

1. If we choose S = 44.1 kHz, we compute f_s = 22.1 kHz and n = 80.
2. If we choose S = 176.4 kHz (four-times oversampling), we compute f_s = 176.4 − 20 = 156.4 kHz
and n = 4. This is an astounding reduction in the filter order!
(b) If we assume zero-order-hold sampling and S = 176.4 kHz, the signal spectrum is multiplied by
sinc(f/S). This already provides a signal attenuation of −20 log[sinc(20/176.4)] = 0.184 dB at the
passband edge of 20 kHz and −20 log[sinc(156.4/176.4)] = 18.05 dB at the stopband edge of 156.4 kHz.
The new filter attenuation specifications are thus A_p = 0.5 − 0.184 = 0.316 dB and A_s = 60 − 18.05 =
41.95 dB, and we require a Butterworth filter whose order is given by

    n = log{[(10^{0.1A_s} − 1)/ε²]^{1/2}} / log(f_s/f_p) = 2.98   ⇒   n = 3
We see that oversampling allows us to use reconstruction filters of much lower order. In fact, the
earliest commercial CD players used four-times oversampling (during the DSP stage) and second-order
or third-order Bessel reconstruction filters (for linear phase).
The effects of sinc distortion may also be minimized by using digital filters with a 1/sinc response
during the DSP phase itself (prior to reconstruction).
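The order calculations in Example 7.12 can be reproduced as follows (a sketch; the helper names are ours):

```python
import math

def sinc_atten_db(x: float) -> float:
    """Attenuation -20*log10(sinc(x)) in dB, with sinc(x) = sin(pi*x)/(pi*x)."""
    return -20 * math.log10(math.sin(math.pi * x) / (math.pi * x))

def butterworth_order(Ap: float, As: float, fp: float, fs: float) -> int:
    """n from log10(sqrt((10^(0.1*As) - 1)/eps^2)) / log10(fs/fp),
    where eps^2 = 10^(0.1*Ap) - 1, rounded up to an integer order."""
    eps2 = 10**(0.1 * Ap) - 1
    n = math.log10(math.sqrt((10**(0.1 * As) - 1) / eps2)) / math.log10(fs / fp)
    return math.ceil(n)

S, fp = 176.4, 20.0           # kHz, four-times oversampling
fs = S - fp                   # stopband (image) edge, 156.4 kHz
# Ideal sampling: the full 0.5 dB / 60 dB specifications
print(butterworth_order(0.5, 60.0, fp, fs))    # 4
# Zero-order hold: the sinc envelope already attenuates both band edges
Ap = 0.5 - sinc_atten_db(fp / S)    # about 0.316 dB
As = 60.0 - sinc_atten_db(fs / S)   # about 41.95 dB
print(butterworth_order(Ap, As, fp, fs))       # 3
```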
7.6 Compact Disc Digital Audio
One application of digital signal processing that has had a profound effect on the consumer market is in
compact disc (CD) digital audio systems. The human ear is an incredibly sensitive listening device that
depends on the pressure of the air molecules on the eardrum for the perception of sound. It covers a
whopping dynamic range of 120 dB, from the Brownian motion of air molecules (the threshold of hearing)
all the way up to the threshold of pain. The technology of recorded sound has come a long way since
Edison's invention of the phonograph in 1877. The analog recording of audio signals on long-playing (LP)
records suffers from poor signal-to-noise ratio (about 60 dB), inadequate separation between stereo channels
(about 30 dB), wow and flutter, and wear due to mechanical tracking of the grooves. The CD overcomes
the inherent limitations of LP records and cassette tapes and yields a signal-to-noise ratio, dynamic range,
and stereo separation all in excess of 90 dB. It makes full use of digital signal processing techniques during
both recording and playback.
7.6.1 Recording
A typical CD recording system is illustrated in Figure 7.31. The analog signal recorded from each microphone
is passed through an anti-aliasing filter, sampled at the industry standard of 44.1 kHz, and quantized to 16
bits in each channel. The two signal channels are then multiplexed, the multiplexed signal is encoded, and
parity bits are added for later error correction and detection. Additional bits are also added to provide
information for the listener (such as playing time and track number). The encoded data is then modulated
for efficient storage, and more bits (synchronization bits) are added for subsequent recovery of the sampling
frequency. The modulated signal is used to control a laser beam that illuminates the photosensitive layer
of a rotating glass disc. As the laser turns on or off, the digital information is etched on the photosensitive
layer as a pattern of pits and lands in a spiral track. This master disc forms the basis for mass production
of the commercial CD from thermoplastic material.
Left/right mic → Analog anti-aliasing filter → 16-bit ADC → Multiplex → Encoding, modulation, and synchronization → Optics and recording
Figure 7.31 Components of a compact disc recording system
How much information can a CD store? With a sampling rate of 44.1 kHz and 32 bits per sample (in
stereo), the bit rate is (44.1)(32)10³ = (1.41)10⁶ audio bits per second. After encoding, modulation, and
synchronization, the number of bits roughly triples to give a bit rate of (4.23)10⁶ channel bits per second.
For a recording time of an hour, the audio data alone translates to about 600 megabytes (with 8 bits
corresponding to a byte and 1024 bytes corresponding to a kilobyte).
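The bit-rate arithmetic above can be checked in a few lines (a sketch; variable names are ours):

```python
S = 44.1e3                         # sampling rate (Hz)
bits_per_sample = 32               # 16 bits x 2 stereo channels
audio_rate = S * bits_per_sample   # audio bits per second
channel_rate = 3 * audio_rate      # after encoding/modulation/synchronization
hour_bytes = audio_rate * 3600 / 8 # bytes of audio data in one hour
print(f"{audio_rate:.3g} audio bits/s, {channel_rate:.3g} channel bits/s")
print(f"{hour_bytes / 2**20:.0f} MiB of audio per hour")   # about 600
```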
7.6.2 Playback
The CD player reconstructs the audio signal from the information stored on the compact disc in a series
of steps that essentially reverses the process at the recording end. A typical CD player is illustrated in
Figure 7.32. During playback, the tracks on a CD are optically scanned by a laser to produce a digital
signal. This digital signal is demodulated, and the parity bits are used to detect any errors (due
to manufacturing defects or dust, for example) and to correct the errors by interpolation between samples
(if possible) or to mute the signal (if correction is not possible). The demodulated signal is now ready for
reconstruction using a DAC. However, the analog reconstruction filter following the DAC must meet tight
specifications in order to remove the images that occur at multiples of 44.1 kHz. Even though the images are
well above the audible range, they must be filtered out to prevent overloading of the amplifier and speakers.
What is done in practice is to digitally oversample the signal (by a factor of 4) to a rate of 176.4 kHz and
pass it through the DAC. A digital filter that compensates for the sinc distortion of the hold operation is
also used prior to digital-to-analog conversion. Oversampling relaxes the requirements of the analog filter,
which must now smooth out much smaller steps. The sinc-compensating filter also provides an additional
attenuation of 18 dB for the spectral images and further relaxes the stopband specifications of the analog
reconstruction filter. The earliest systems used a third-order Bessel filter with a 3-dB passband of 30 kHz.
Another advantage of oversampling is that it reduces the noise floor and spreads the quantization noise over
a wider bandwidth. This allows us to round the oversampled signal to 14 bits and use a 14-bit DAC to
provide the same level of performance as a 16-bit DAC.
CD → Optical pickup → Demodulation and error correction → 4× oversampling → 14-bit DAC → Analog lowpass filter → Amplifier and speakers
Figure 7.32 Components of a compact disc playback system
7.7 Dynamic Range Processors
There are many instances in which an adjustment of the dynamic range of a signal is desirable. Sound
levels above 100 dB appear uncomfortably loud, and the dynamic range should be matched to the listening
environment for a pleasant listening experience. Typically, compression is desirable when listening to music
with a large dynamic range in small enclosed spaces (unlike a concert hall) such as a living room or an
automobile, where the ambient (background) noise is not very low. For example, if the music has a dynamic
range of 80 dB and the background noise is 40 dB above the threshold of hearing, it can lead to sound
levels as high as 120 dB (close to the threshold of pain). Dynamic range compression is also desirable for
background music (in stores or elevators). It is also used to prevent distortion when recording on magnetic
tape, and in studios to adjust the dynamic range of the individual tracks in a piece of recorded music.
There are also situations where we may like to expand the dynamic range of a signal. For example, the
dynamic range of LP records and cassette tapes is not very high (typically, between 50 dB and 70 dB) and
can benefit from dynamic range expansion, which in effect makes loud passages louder and soft passages
softer, and in so doing also reduces the record or tape hiss.
A compressor is a variable-gain device whose gain is unity for low signal levels and decreases at higher
signal levels. An expander is an amplifier with variable gain (which never exceeds unity). For high signal
levels it provides unity gain, whereas for low signal levels it decreases the gain and makes the signal level
even lower. Typical compression and expansion characteristics are shown in Figure 7.33 and are usually
expressed as a ratio, such as a 2:1 compression or a 1:4 expansion. A 10:1 compression ratio (or higher)
describes a limiter and represents an extreme form of compression. A 1:10 expansion ratio describes a noise
gate whose output for low signal levels may be almost zero. Noise gating is one way to eliminate noise or
hiss during moments of silence in a recording.
Figure 7.33 Input-output characteristics of dynamic range processors
Figure 7.34 Block diagram of a dynamic range processor, consisting of a variable-gain amplifier driven by a level detector that generates the control signal c(t)
Control of the dynamic range requires variable-gain devices whose gain is controlled by the signal level,
as shown in Figure 7.34. Dynamic range processing uses a control signal c(t) to adjust the gain. If the
gain decreases with increasing c(t), we obtain compression; if the gain increases with increasing c(t), we
obtain expansion. Following the signal too closely is undesirable because it would eliminate the dynamic
variation altogether. Once the signal level exceeds the threshold, a typical compressor takes up to 0.1 s
(called the attack time) to respond, and once the level drops below threshold, it takes another second or
two (called the release time) to restore the gain to unity. Analog circuits for dynamic range processing may
use a peak detector (much like the one used for the detection of AM signals) that provides a control signal.
Digital circuits replace the rectifier by simple binary operations and simulate the control signal and the
attack and release characteristics by using digital filters. In concept, the compression ratio (or gain),
the delay, and the attack and release characteristics may be adjusted independently.
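The attack/release behavior described above can be sketched as a toy digital compressor. Everything below (the one-pole level detector, the dB-domain gain computer, the parameter names) is an illustrative assumption, not a design from the text:

```python
import math

def compress(x, fs, ratio=2.0, threshold_db=-20.0, attack=0.1, release=1.0):
    """Minimal digital compressor sketch: a one-pole level detector with
    separate attack/release time constants drives a gain computer that
    applies `ratio`:1 compression above `threshold_db`."""
    a_att = math.exp(-1.0 / (attack * fs))
    a_rel = math.exp(-1.0 / (release * fs))
    env, out = 0.0, []
    for s in x:
        level = abs(s)
        # rectified level, smoothed faster on attack than on release
        a = a_att if level > env else a_rel
        env = a * env + (1 - a) * level
        level_db = 20 * math.log10(max(env, 1e-12))
        if level_db > threshold_db:
            # gain reduction above threshold (the c(t) control signal)
            gain_db = (threshold_db - level_db) * (1 - 1 / ratio)
        else:
            gain_db = 0.0
        out.append(s * 10**(gain_db / 20))
    return out

# A sustained 0 dB level is pulled toward -10 dB by 2:1 compression
# above a -20 dB threshold:
y = compress([1.0] * 48000, fs=48000)
print(round(20 * math.log10(abs(y[-1])), 1))    # about -10 dB
```

Below the threshold the gain stays at unity, so quiet passages pass through unchanged.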
7.7.1 Companders
Dynamic range expanders and compressors are often used to combat the effects of noise during transmission
of signals, especially if the dynamic range of the channel is limited. A compander is a combination of a
compressor and an expander. Compression allows us to increase the signal level relative to the noise level. An
expander at the receiving end returns the signal to its original dynamic range. This is the principle behind
noise reduction systems for both professional and consumer use. An example is the Dolby noise reduction
system.
In the professional Dolby A system, the input signal is split into four bands by a lowpass filter with a
cutoff frequency of 80 Hz, a bandpass filter with band edges at [80, 3000] Hz, and two highpass filters with
cutoff frequencies of 3 kHz and 8 kHz. Each band is compressed separately before being mixed and recorded.
During playback, the process is reversed. The characteristics of the compression and expansion are shown
in Figure 7.35.
Figure 7.35 Compression and expansion characteristics for Dolby A noise reduction
During compression, signal levels below −40 dB are boosted by a constant 10 dB, signal levels between
−40 dB and −20 dB are compressed by 2:1, and signal levels above −20 dB are not affected. During playback
(expansion), the process is reversed: signal levels below −30 dB are cut by 10 dB, signal levels between −30 dB
and −20 dB face a 1:2 expansion, and signal levels above −20 dB are not affected. In the immensely popular
Dolby B system found in consumer products (and also used by some FM stations), the input signal is not
split, but a pre-emphasis circuit is used to provide a high-frequency boost above 600 Hz. Another popular
system is dbx, which uses pre-emphasis above 1.6 kHz with a maximum high-frequency boost of 20 dB.
Voice-grade telephone communication systems also make use of dynamic range compression, because the
distortion caused is not significant enough to affect speech intelligibility. Two commonly used compressors
are the μ-law compander (used in North America and Japan) and the A-law compander (used in Europe).
For a signal x(t) whose peak level is normalized to unity, the two compression schemes are defined by

    y_μ(x) = [ln(1 + μ|x|)/ln(1 + μ)] sgn(x)

    y_A(x) = { [A|x|/(1 + ln A)] sgn(x),            0 ≤ |x| ≤ 1/A
             { {[1 + ln(A|x|)]/(1 + ln A)} sgn(x),  1/A ≤ |x| ≤ 1    (7.43)
The characteristics of these compressors are illustrated in Figure 7.36. The value μ = 255 has become
the standard in North America, and A = 100 is typically used in Europe. For μ = 0 (and A = 1), there is no
compression or expansion. The μ-law compander is nearly linear for μ|x| ≪ 1. In practice, compression is
based on a piecewise linear approximation of the theoretical μ-law characteristic and allows us to use fewer
bits to digitize the signal. At the receiving end, an expander (ideally, a true inverse of the compression law)
restores the dynamic range of the original signal (except for the effects of quantization). The inverse for
μ-law compression is

    |x| = [(1 + μ)^{|y|} − 1]/μ    (7.44)
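A sketch of Eq. (7.43) and its inverse Eq. (7.44) for the μ-law case (function names are ours):

```python
import math

MU = 255  # the North American standard value

def mu_compress(x: float) -> float:
    """Eq. (7.43): y = [ln(1 + mu*|x|)/ln(1 + mu)] * sgn(x), |x| <= 1."""
    return math.copysign(math.log(1 + MU * abs(x)) / math.log(1 + MU), x)

def mu_expand(y: float) -> float:
    """Eq. (7.44): |x| = [(1 + mu)^|y| - 1]/mu, with the sign of y."""
    return math.copysign(((1 + MU)**abs(y) - 1) / MU, y)

# Small signals are boosted well above the noise level before quantization...
print(round(mu_compress(0.01), 3))    # about 0.228
# ...and the expander is an exact inverse (up to floating-point error):
for x in (-0.9, -0.01, 0.0, 0.25, 1.0):
    assert abs(mu_expand(mu_compress(x)) - x) < 1e-9
```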
Figure 7.36 Characteristics of μ-law (μ = 0, 4, 100) and A-law (A = 1, 2, 100) compressors
The quantization of the compressed signal using the same number of bits as the uncompressed signal results
in a higher quantization SNR. For example, the value of μ = 255 increases the SNR by about 24 dB. Since
the SNR improves by 6 dB per bit, we can use a quantizer with fewer (only B − 4) bits to achieve the same
performance as a B-bit quantizer with no compression.
7.8 Audio Equalizers
Audio equalizers are typically used to tailor the sound to suit the taste of the listener. The most common
form of equalization is the tone controls (for bass and treble, for example) found on most low-cost audio
systems. Tone controls employ shelving filters that boost or cut the response over a selected frequency band
while leaving the rest of the spectrum unaffected (with unity gain). As a result, the filters for the various
controls are typically connected in cascade. Graphic equalizers offer the next level in sophistication and
employ a bank of (typically, second-order) bandpass filters covering a fixed number of frequency bands,
with a fixed bandwidth and center frequency for each range. Only the gain of each filter can be adjusted by
the user. Each filter isolates a selected frequency range and provides almost zero gain elsewhere. As a result,
the individual sections are connected in parallel. Parametric equalizers offer the ultimate in versatility
and comprise filters that allow the user to vary not only the gain but also the filter parameters (such as
the cutoff frequency, center frequency, and bandwidth). Each filter in a parametric equalizer affects only a
selected portion of the spectrum (providing unity gain elsewhere), and as a result, the individual sections
are connected in cascade.
7.8.1 Shelving Filters
To vary the bass and treble content, it is common to use first-order shelving filters with adjustable gain
and cutoff frequency. The transfer function of a first-order lowpass filter whose dc gain is unity and whose
response goes to zero at F = 0.5 (or z = −1) is described by

    H_LP(z) = [(1 − α)/2] (z + 1)/(z − α)    (7.45)

The half-power or 3-dB cutoff frequency Ω_C of this filter is given by

    Ω_C = cos⁻¹[2α/(1 + α²)]    (7.46)
The transfer function of a first-order highpass filter with the same cutoff frequency Ω_C is simply
H_HP(z) = 1 − H_LP(z), and gives

    H_HP(z) = [(1 + α)/2] (z − 1)/(z − α)    (7.47)

Its gain equals zero at F = 0 and unity at F = 0.5.
A lowpass shelving filter consists of a first-order lowpass filter with adjustable gain G in parallel with a
highpass filter. With H_LP(z) + H_HP(z) = 1, its transfer function may be written as

    H_SL(z) = G H_LP(z) + H_HP(z) = 1 + (G − 1)H_LP(z)    (7.48)

A highpass shelving filter consists of a first-order highpass filter with adjustable gain G in parallel with a
lowpass filter. With H_LP(z) + H_HP(z) = 1, its transfer function may be written as

    H_SH(z) = G H_HP(z) + H_LP(z) = 1 + (G − 1)H_HP(z)    (7.49)

The realizations of these lowpass and highpass shelving filters are shown in Figure 7.37.
Figure 7.37 Realizations of lowpass and highpass shelving filters
The response of a lowpass and a highpass shelving filter is shown for various values of gain (and a fixed
α) in Figure 7.38. For G > 1, the lowpass shelving filter provides a low-frequency boost, and for 0 < G < 1
it provides a low-frequency cut. For G = 1, we have H_SL(z) = 1, and the gain is unity for all frequencies.
Similarly, for G > 1, the highpass shelving filter provides a high-frequency boost, and for 0 < G < 1 it
provides a high-frequency cut. In either case, the parameter α allows us to adjust the cutoff frequency.
Practical realizations of shelving filters and parametric equalizers typically employ allpass structures.
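The shelving responses of Eqs. (7.45)-(7.49) can be evaluated directly on the unit circle (a sketch; the function name is ours):

```python
import math, cmath

def shelving_response(F, G, alpha, kind="low"):
    """Magnitude response of the first-order shelving filters of
    Eqs. (7.45)-(7.49) at the digital frequency F (cycles/sample)."""
    z = cmath.exp(2j * math.pi * F)
    H_lp = 0.5 * (1 - alpha) * (z + 1) / (z - alpha)  # Eq. (7.45)
    H_hp = 0.5 * (1 + alpha) * (z - 1) / (z - alpha)  # Eq. (7.47)
    H = 1 + (G - 1) * (H_lp if kind == "low" else H_hp)
    return abs(H)

alpha, G = 0.85, 4.0
# Lowpass shelf with G = 4: about 12 dB of boost at dc...
print(round(20 * math.log10(shelving_response(0.0, G, alpha, "low")), 2))
# ...and unity gain at the highest digital frequency F = 0.5:
print(round(shelving_response(0.5, G, alpha, "low"), 6))    # 1.0
```

Sweeping F from 0 to 0.5 reproduces the boost/cut curves of Figure 7.38.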
7.8.2 Graphic Equalizers
Graphic equalizers permit adjustment of the tonal quality of the sound (in each channel of a stereo system)
to suit the personal preference of a listener and represent the next step in sophistication, after tone controls.
They employ a bank of (typically, second-order) bandpass filters covering the audio frequency spectrum,
with a fixed bandwidth and center frequency for each range. Only the gain of each filter can be adjusted
by the user. Each filter isolates a selected frequency range and provides almost zero gain elsewhere. As a
result, the individual sections are connected in parallel, as shown in Figure 7.39.
The input signal is split into as many channels as there are frequency bands, and the weighted sum of
the outputs of each filter yields the equalized signal. A control panel, usually calibrated in decibels, allows
for gain adjustment by sliders, as illustrated in Figure 7.39. The slider settings provide a rough visual
indication of the equalized response, hence the name graphic equalizer. The design of each second-order
bandpass section is based on its center frequency and its bandwidth or quality factor Q. A typical set of
center frequencies for a ten-band equalizer is [31.5, 63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000] Hz. A
typical range for the gain of each filter is ±12 dB (or 0.25 times to 4 times the nominal gain).
Figure 7.38 Frequency response of lowpass and highpass shelving filters (α = 0.85; G = 0.25, 0.5, 2, 4)
Figure 7.39 A graphic equalizer (left) and the display panel (right)
EXAMPLE 7.13 (A Digital Graphic Equalizer)
We design a five-band audio equalizer operating at a sampling frequency of 8192 Hz, with center frequencies
of [120, 240, 480, 960, 1920] Hz. We select the 3-dB bandwidth of each section as 0.75 times its center
frequency to cover the whole frequency range without overlap. This is equivalent to choosing each section
with Q = 4/3. Each section is designed as a second-order bandpass IIR filter. Figure E7.13 shows the
spectrum of each section and the overall response. The constant-Q design becomes much more apparent
when the gain is plotted in dB against log(f), as shown in Figure E7.13(b). A change in the gain of any
section results in a change in the equalized response.
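The constant-Q sections of this example can be sketched numerically using the second-order bandpass form of Eq. (7.50) from the next subsection. This is an illustrative sketch, not the book's code: the function names are mine, and each section is evaluated directly on the unit circle.

```python
import math, cmath

def bandpass_response(F, F0, dF):
    """Second-order bandpass section (Eq. 7.50): C = tan(pi*dF), beta = cos(2*pi*F0)."""
    C = math.tan(math.pi * dF)
    beta = math.cos(2 * math.pi * F0)
    z = cmath.exp(2j * math.pi * F)
    num = (C / (1 + C)) * (z * z - 1)
    den = z * z - (2 * beta / (1 + C)) * z + (1 - C) / (1 + C)
    return num / den

S = 8192.0                              # sampling rate (Hz)
centers = [120, 240, 480, 960, 1920]    # section center frequencies (Hz)
gains = [1.0] * 5                       # slider gains (linear), all flat here

def equalizer_gain(f):
    """Parallel bank: magnitude of the weighted sum of the section outputs."""
    return abs(sum(g * bandpass_response(f / S, fc / S, 0.75 * fc / S)
                   for g, fc in zip(gains, centers)))
```

Each section has exactly unity gain at its own center frequency, so with flat sliders the equalized response hovers near unity across the covered bands.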
Figure E7.13 Frequency response of the audio equalizer for Example 7.13: (a) linear gain of the five-band equalizer versus frequency f; (b) dB gain versus f on a log scale
7.8.3 Parametric Equalizers
Parametric equalizers offer the ultimate in versatility and allow the user to vary not only the gain but
also the filter parameters (such as the cutoff frequency, center frequency, and bandwidth). Each filter in a
parametric equalizer affects only a selected portion of the spectrum (while providing unity gain elsewhere),
and as a result, the individual sections are connected in cascade. Most parametric equalizers are based on
second-order IIR peaking or notch filters whose transfer functions we reproduce here:
H_BP(z) = [C/(1+C)] (z^2 − 1) / [z^2 − (2β/(1+C))z + (1−C)/(1+C)]        H_BS(z) = [1/(1+C)] (z^2 − 2βz + 1) / [z^2 − (2β/(1+C))z + (1−C)/(1+C)]        (7.50)
The center frequency and bandwidth for the filters are given by
Ω_0 = cos^-1(β)        ΔΩ = cos^-1[2α/(1 + α^2)],  where α = (1 − C)/(1 + C)        (7.51)
It is interesting to note that H_BS(z) = 1 − H_BP(z). A tunable second-order equalizer stage H_PAR(z) consists
of the bandpass filter H_BP(z), with a peak gain of G, in parallel with the bandstop filter H_BS(z):
H_PAR(z) = G H_BP(z) + H_BS(z) = 1 + (G − 1)H_BP(z)        (7.52)
A realization of this filter is shown in Figure 7.40.
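Equation (7.52) is easy to check numerically. The sketch below (helper names are mine; the peaking form of Eq. (7.50) is assumed) shows that at the center frequency, where H_BP = 1, the stage gain is exactly G, while far from the center H_BP ≈ 0 and the gain returns to roughly unity.

```python
import math, cmath

def hbp(z, F0, dF):
    # Peaking (bandpass) filter of Eq. (7.50): C = tan(pi*dF), beta = cos(2*pi*F0)
    C = math.tan(math.pi * dF)
    beta = math.cos(2 * math.pi * F0)
    num = (C / (1 + C)) * (z * z - 1)
    den = z * z - (2 * beta / (1 + C)) * z + (1 - C) / (1 + C)
    return num / den

def hpar(F, F0, dF, G):
    # Tunable equalizer stage of Eq. (7.52): H_PAR = 1 + (G - 1) H_BP
    z = cmath.exp(2j * math.pi * F)
    return 1 + (G - 1) * hbp(z, F0, dF)
```

For example, a stage tuned to F0 = 0.2 with a boost of G = 4 gives |H_PAR| = 4 at F = 0.2 but stays close to 1 near F = 0.45.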
[Figure 7.40: realization of the equalizer stage, with the input passing through H_BP(z) scaled by G − 1 and added to the direct path.]
Figure 7.42 Illustrating echo and reverb
The direct sound provides clues to the location of the source, the early echoes provide an indication of
the physical size of the listening space, and the reverberation characterizes the warmth and liveliness that
we usually associate with sounds. The amplitude of the echoes and reverberation decays exponentially with
time. Together, these characteristics determine the psycho-acoustic qualities we associate with any perceived
sound. Typical 60-dB reverberation times (for the impulse response to decay to 0.001 of its peak value) for
concert halls are fairly long, up to two seconds. A conceptual model of a listening environment, also shown
in Figure 7.42, consists of echo filters and reverb filters.
A single echo can be modeled by a feed-forward system of the form
y[n] = x[n] + αx[n − D]        H(z) = 1 + αz^-D        h[n] = δ[n] + αδ[n − D]        (7.53)
This is just a comb filter in disguise. The zeros of this filter lie on a circle of radius R = α^(1/D), with angular
orientations of θ = (2k + 1)π/D. Its comb-like magnitude spectrum |H(F)| shows minima of 1 − α at the
frequencies F = (2k + 1)/2D, and peaks of 1 + α midway between the dips. To perceive an echo, the index
D must correspond to a delay of at least about 50 ms.
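A minimal sketch of the echo (comb) filter of Eq. (7.53), assuming a gain 0 < α < 1 (the function names are illustrative):

```python
import cmath, math

def echo(x, alpha, D):
    # Feed-forward echo filter: y[n] = x[n] + alpha * x[n - D]
    return [xn + (alpha * x[n - D] if n >= D else 0.0)
            for n, xn in enumerate(x)]

def comb_mag(F, alpha, D):
    # |H(F)| for H(z) = 1 + alpha * z^-D evaluated on the unit circle
    return abs(1 + alpha * cmath.exp(-2j * math.pi * F * D))
```

At F = 1/2D the response dips to 1 − α and at F = 0 it peaks at 1 + α, matching the comb shape described above.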
A reverb filter that describes multiple reflections has a feedback structure of the form
y[n] = αy[n − D] + x[n]        H(z) = 1/(1 − αz^-D)        (7.54)
This filter has an inverse-comb structure, and its poles lie on a circle of radius R = α^(1/D), with an angular
separation of θ = 2π/D. The magnitude spectrum shows peaks of 1/(1 − α) at the pole frequencies F = k/D,
and minima of 1/(1 + α) midway between the peaks.
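A direct-form sketch of the reverb filter of Eq. (7.54); its impulse response is a train of echoes of height α^k spaced D samples apart (the function name is mine):

```python
def reverb(x, alpha, D):
    # Feedback reverb filter: y[n] = alpha * y[n - D] + x[n]
    y = []
    for n, xn in enumerate(x):
        y.append(xn + (alpha * y[n - D] if n >= D else 0.0))
    return y
```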
Conceptually, the two systems just described can form the building blocks for simulating the acoustics
of a listening space. Many reverb filters actually use a combination of reverb filters and allpass filters. A
typical structure is shown in Figure 7.43. In practice, however, it is more of an art than a science to create
realistic effects, and many of the commercial designs are proprietary information.
Figure 7.43 Echo and reverb filters for simulating acoustic effects (the direct sound and a parallel bank of reverb filters feed an allpass filter and are summed)
The reverb filters in Figure 7.43 typically incorporate irregularly spaced delays to allow the blending of
echoes, and the allpass filter serves to create the effect of early echoes. Some structures for the reverb filter
and allpass filter are shown in Figure 7.44. The first structure is the plain reverb. In the second structure,
the feedback path incorporates a first-order lowpass filter that accounts for the dependence (increase) of
sound absorption with frequency. The allpass filter has the form
H(z) = (−α + z^-L)/(1 − αz^-L) = −(1/α) + [(1 − α^2)/α] · 1/(1 − αz^-L)        (7.55)
The second form of this expression (obtained by long division) shows that, except for the constant term, the
allpass filter has the same form as a reverb filter.
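The allpass property of Eq. (7.55) is easy to verify numerically; the sketch below (with a function name of my choosing) evaluates the reconstructed form on the unit circle, where the magnitude is 1 at every frequency.

```python
import cmath, math

def allpass_response(F, alpha, L):
    # H(z) = (-alpha + z^-L) / (1 - alpha * z^-L) on the unit circle z = e^{j 2 pi F}
    zL = cmath.exp(-2j * math.pi * F * L)
    return (-alpha + zL) / (1 - alpha * zL)
```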
7.9.1 Gated Reverb and Reverse Reverb
Two other types of audio effects can also be generated by reverb filters. A gated reverb results from the
truncation (abrupt or gradual) of the impulse response of a reverb filter, resulting in an FIR filter. A reverse
Figure 7.44 Realization of typical reverb filters (plain reverb, reverb with a lowpass filter in the feedback path, and an allpass filter)
reverb is essentially a folded version of the impulse response of a gated reverb filter. The impulse response
increases with time, leading to a sound that first gets louder, and then dies out abruptly.
7.9.2 Chorusing, Flanging, and Phasing
The echo filter also serves as the basis for several other audio special effects. Two of these effects, chorusing
and phasing, are illustrated in Figure 7.45.
Figure 7.45 Illustrating special audio effects (chorusing: the direct sound is summed with variable-gain, variable-delay branches; phasing: the direct sound is summed with the output of a tunable notch filter)
Chorusing mimics a chorus (or group) singing (or playing) in unison. In practice, of course, the voices
(or instruments) are not in perfect synchronization, nor identical in pitch. The chorusing effect can be
implemented by a weighted combination of echo filters, each with a time-varying delay d_n, of the form
y[n] = x[n] + αx[n − d_n]        (7.56)
Typical delay times used in chorusing are between 20 ms and 30 ms. If the delays are less than 10 ms (but
still variable), the resulting whooshing sound is known as flanging.
Phase shifting or phasing also creates many interesting effects, and may be achieved by passing the
signal through a notch filter whose frequency can be tuned by the user. It is the sudden phase jumps at
the notch frequency that are responsible for the phasing effect. The effects may also be enhanced by the
addition of feedback.
7.9.3 Plucked-String Filters
Comb filters, reverb filters, and allpass filters have also been used to synthesize the sounds of plucked
instruments, such as the guitar. What we require is a filter that has a comb-like response, with resonances
at multiples of the fundamental frequency of the note. We also require that the harmonics decay in time,
with higher frequencies decaying at a faster rate. A typical structure, first described by Karplus and Strong,
is illustrated in Figure 7.46.
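A minimal sketch in the spirit of the Karplus-Strong structure (this is the widely known noise-burst version, not necessarily the exact structure of Figure 7.46): a delay line of length D is recirculated through a two-point average, a simple lowpass in the feedback loop that makes higher harmonics decay faster than lower ones.

```python
import random

def pluck(D, num_samples, seed=1):
    # Fill the delay line (D >= 2) with a noise burst, then circulate it
    # through a two-point averaging lowpass in the feedback loop.
    rng = random.Random(seed)
    y = [rng.uniform(-1.0, 1.0) for _ in range(D)]
    for n in range(D, num_samples):
        y.append(0.5 * (y[n - D] + y[n - D + 1]))
    return y
```

The loop gain at harmonic k is |cos(kπ/D)| < 1 (except at dc), so the waveform rings at the loop frequency and gradually dies away, with the high harmonics vanishing first.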
[Figure 7.46: plucked-string filter structures built around a loop delay z^-D with a loop filter G_LP(z) or G_AP(z) and a loop gain A.]
Figure 7.48 Realization of cosine and sine digital oscillators
7.10.1 DTMF Receivers
A touch-tone phone or dual-tone multi-frequency (DTMF) transmitter/receiver makes use of digital oscilla-
tors to generate audible tones by pressing buttons on a keypad, as shown in Figure 7.49. Pressing a button
produces a two-tone signal containing a high- and a low-frequency tone. Each button is associated with a
unique pair of low-frequency and high-frequency tones. For example, pressing the button marked 5 would
generate a combination of 770-Hz and 1336-Hz tones. There are four low frequencies and four high frequen-
cies. The low- and high-frequency groups have been chosen to ensure that the paired combinations do not
interfere with speech. The highest frequency (1633 Hz) is not currently in commercial use. The tones can be
generated by using a parallel combination of two programmable digital oscillators, as shown in Figure 7.50.
The sampling rate typically used is S = 8 kHz. The digital frequency corresponding to a typical high-
frequency tone f_H is Ω_H = 2πf_H/S. The code for each button selects the appropriate filter coefficients.
The keys pressed are identified at the receiver by first separating the low- and high-frequency groups using
a lowpass filter (with a cutoff frequency of around 1000 Hz) and a highpass filter (with a cutoff frequency
of around 1200 Hz) in parallel, and then isolating each tone, using a parallel bank of narrow-band bandpass
filters tuned to the (eight) individual frequencies. The (eight) outputs are fed to a level detector and decision
logic that establishes the presence or absence of a tone. The keys may also be identified by computing the
FFT of the tone signal, followed by threshold detection.
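The tone-detection step can be sketched with the Goertzel algorithm, a standard way to measure signal power at a handful of known frequencies; the bandpass-bank and FFT approaches described above work equally well, and the code and names below are illustrative only.

```python
import math

DTMF_FREQS = [697, 770, 852, 941, 1209, 1336, 1477, 1633]  # Hz

def goertzel_power(x, f, S):
    # Goertzel recurrence: spectral power of x at analysis frequency f (sampling rate S)
    coeff = 2.0 * math.cos(2.0 * math.pi * f / S)
    s1 = s2 = 0.0
    for sample in x:
        s0 = sample + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def detect_pair(x, S=8000.0):
    # The strongest low-group and high-group tones identify the key pressed.
    low = max(DTMF_FREQS[:4], key=lambda f: goertzel_power(x, f, S))
    high = max(DTMF_FREQS[4:], key=lambda f: goertzel_power(x, f, S))
    return low, high

# Pressing "5" produces 770 Hz + 1336 Hz; simulate 205 samples at 8 kHz
S, N = 8000.0, 205
key5 = [math.sin(2 * math.pi * 770 * n / S) + math.sin(2 * math.pi * 1336 * n / S)
        for n in range(N)]
```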
                High-frequency group (Hz)
Low-frequency   1209    1336    1477    1633
group (Hz)
     697          1       2       3       A
     770          4       5       6       B
     852          7       8       9       C
     941          *       0       #       D
Figure 7.49 Layout of a DTMF touch-tone keypad
[Figure 7.50: two programmable digital oscillators (a low-frequency and a high-frequency generator, each built from delay elements and cosine coefficient multipliers) whose outputs are summed to form y[n].]
[Hints and Suggestions: … ∑ X[k]X(f − k/t_s). In parts (a)-(c), X[k] = 1/t_s
and the pulses are replicated every S = 1/t_s Hz and added (where they overlap). For (d), X[k] = 0.5 sinc(0.5k) and the replicated pulses decrease
in height. For (e), the spectrum of (d) is multiplied by sinc(f/S).]
7.3 (Digital Frequency) Express the following signals using a digital frequency |F| < 0.5.
(a) x[n] = cos(4nπ/3)
(b) x[n] = cos(4nπ/7) + sin(8nπ/7)
7.4 (Sampling Theorem) Establish the Nyquist sampling rate for the following signals.
(a) x(t) = 5 sin(300πt + π/3)    (b) x(t) = cos(300πt) − sin(300πt + 51°)
(c) x(t) = 3 cos(300πt) + 5 sin(500πt)    (d) x(t) = 3 cos(300πt) sin(500πt)
(e) x(t) = 4 cos^2(100πt)    (f) x(t) = 6 sinc(100t)
(g) x(t) = 10 sinc^2(100t)    (h) x(t) = 6 sinc(100t) cos(200πt)
[Hints and Suggestions: For the product signals, time-domain multiplication means convolution of
their spectra. The width of this convolution gives the highest frequency. For example, in (d), the
spectra extend to 150 Hz and 250 Hz, their convolution covers 400 Hz, and f_max = 400 Hz.]
7.5 (Sampling Theorem) A sinusoid x(t) = A cos(2πf_0 t) is sampled at three times the Nyquist rate for
six periods. How many samples are acquired?
[Hints and Suggestions: For sinusoids, the Nyquist rate means taking two samples per period.]
7.6 (Sampling Theorem) The sinusoid x(t) = A cos(2πf_0 t) is sampled at twice the Nyquist rate for 1 s.
A total of 100 samples is acquired. What is f_0 and the digital frequency of the sampled signal?
[Hints and Suggestions: For sinusoids, the Nyquist rate means taking two samples per period.]
7.7 (Sampling Theorem) A sinusoid x(t) = sin(150πt) is sampled at a rate of five samples per three
periods. What fraction of the Nyquist sampling rate does this correspond to? What is the digital
frequency of the sampled signal?
[Hints and Suggestions: For sinusoids, the Nyquist rate means taking two samples per period.]
7.8 (Sampling Theorem) A periodic square wave with period T = 1 ms whose value alternates between
+1 and −1 for each half-period is passed through an ideal lowpass filter with a cutoff frequency of
4 kHz. The filter output is to be sampled. What is the smallest sampling rate we can choose? Consider
both the symmetry of the signal as well as the Nyquist criterion.
[Hints and Suggestions: Even-indexed harmonics are absent if the signal is half-wave symmetric.]
7.9 (Spectrum of Sampled Signals) Given the spectrum X(f) of an analog signal x(t), sketch the
spectrum of its ideally sampled version x[n], assuming a sampling rate of 50, 40, and 30 Hz.
(a) X(f) = rect(f/40) (b) X(f) = tri(f/20)
[Hints and Suggestions: The spectrum of the sampled signals is S ∑ X(f − kS). The images of
S X(f) are replicated every S Hz and added (where they overlap).]
7.10 (Spectrum of Sampled Signals) Sketch the spectrum of the following signals against the digital
frequency F.
(a) x(t) = cos(200πt), ideally sampled at 450 Hz
(b) x(t) = sin(400πt − π/4), ideally sampled at 300 Hz
(c) x(t) = cos(200πt) + sin(350πt), ideally sampled at 300 Hz
(d) x(t) = cos(200πt + π/4) + sin(250πt − π/4), ideally sampled at 120 Hz
[Hints and Suggestions: The spectrum X(f) contains impulse pairs. The spectrum of the ideally
sampled signals is S ∑ X(f − kS). The images of S X(f) are replicated every S Hz.]
7.11 (Sampling and Aliasing) A signal x(t) is made up of the sum of pure sines with unit peak value at
the frequencies 10, 40, 200, 220, 240, 260, 300, 320, 340, 360, 380, and 400 Hz.
(a) Sketch the magnitude and phase spectra of x(t).
(b) If x(t) is sampled at S = 140 Hz, which components, if any, will show aliasing?
(c) The sampled signal is passed through an ideal reconstruction filter whose cutoff frequency is
f_C = 0.5S. Write an expression for the reconstructed signal y(t) and sketch its magnitude
spectrum and phase spectrum. Is y(t) identical to x(t)? Should it be? Explain.
(d) What minimum sampling rate S will allow ideal reconstruction of x(t) from its samples?
7.12 (Sampling and Aliasing) The signal x(t) = cos(100πt) is applied to the following systems. Is it
possible to find a minimum sampling rate required to sample the system output y(t)? If so, find the
Nyquist sampling rate.
(a) y(t) = x^2(t)    (b) y(t) = x^3(t)
(c) y(t) = |x(t)|    (d) h(t) = sinc(200t)
(e) h(t) = sinc(500t)    (f) h(t) = δ(t − 1)
(g) y(t) = x(t) cos(400πt)    (h) y(t) = u[x(t)]
[Hints and Suggestions: The Nyquist rate applies only if y(t) is bandlimited. In (c), y(t) is a
periodic full-rectified cosine. In (h), y(t) is a periodic square wave.]
7.13 (Sampling and Aliasing) The sinusoid x(t) = cos(2πf_0 t + θ) is sampled at 400 Hz and shows up as
a 150-Hz sinusoid upon reconstruction. When the signal x(t) is sampled at 500 Hz, it again shows up
as a 150-Hz sinusoid upon reconstruction. It is known that f_0 < 2.5 kHz.
(a) If each sampling rate exceeds the Nyquist rate, what is f_0?
(b) By sampling x(t) again at a different sampling rate, explain how you might establish whether
aliasing has occurred.
(c) If aliasing occurs and the reconstructed phase is θ, find all possible values of f_0.
(d) If aliasing occurs but the reconstructed phase is −θ, find all possible values of f_0.
[Hints and Suggestions: In (b), S > 2f_0 for no aliasing. In (c), the aliased frequency is positive
and so, 150 = f_0 − 500k = f_0 − 400m, giving k/m = 4/5, where k and m are chosen as integers to ensure
f_0 < 2.5 kHz. In (d), start with −150 = f_0 − 500k = f_0 − 400m.]
7.14 (Sampling and Aliasing) One period of a periodic signal with period T = 4 is given by x(t) =
2 tri(0.5t) − 1. The periodic signal is ideally sampled by the impulse train i(t) = ∑_{k=−∞}^{∞} δ(t − k).
(a) Sketch the signals x(t) and x_s(t) and confirm that x_s(t) is a periodic signal with T = 4 whose
one period is given by x_s(t) = δ(t) − δ(t − 2), 0 ≤ t < 4.
(b) If x_s(t) is passed through an ideal lowpass filter with a cutoff frequency of 0.6 Hz, what is the
filter output y(t)?
(c) How do x_s(t) and y(t) change if the sampling function is i(t) = ∑_{k=−∞}^{∞} (−1)^k δ(t − k)?
[Hints and Suggestions: For (b), x_s(t) is half-wave symmetric with T = 4 and Fourier series
coefficients X[k] = 0.5(1 − e^{−jkπ}). For (c), x_s(t) is periodic with T = 2 and x_s(t) = δ(t), 0 ≤ t < 2.
Its Fourier series coefficients are X[k] = 0.5.]
7.15 (Bandpass Sampling) The signal x(t) is band-limited to 500 Hz. The smallest frequency present in
x(t) is f_0. Find the minimum rate S at which we can sample x(t) without aliasing if
(a) f_0 = 0.
(b) f_0 = 300 Hz.
(c) f_0 = 400 Hz.
7.16 (Bandpass Sampling) A signal x(t) is band-limited to 40 Hz and modulated by a 320-Hz carrier
to generate the modulated signal y(t). The modulated signal is processed by a square law device that
produces g(t) = y^2(t).
(a) What is the minimum sampling rate for x(t) to prevent aliasing?
(b) What is the minimum sampling rate for y(t) to prevent aliasing?
(c) What is the minimum sampling rate for g(t) to prevent aliasing?
7.17 (Sampling Oscilloscopes) It is required to ideally sample a signal x(t) at S Hz and pass the
sampled signal s(t) through an ideal lowpass filter with a cutoff frequency of 0.5S Hz such that its
output y(t) = x(t/α) is a stretched-by-α version of x(t).
(a) Suppose x(t) = 1 + cos(20πt). What values of S will ensure that the output of the lowpass filter
is y(t) = x(0.1t)? Sketch the spectra of x(t), s(t), and y(t) for the chosen value of S.
(b) Suppose x(t) = 2 cos(80πt) + cos(160πt) and the sampling rate is chosen as S = 48 Hz. Sketch
the spectra of x(t), s(t), and y(t). Will the output y(t) be a stretched version of x(t) with
y(t) = x(t/α)? If so, what will be the value of α?
(c) Suppose x(t) = 2 cos(80πt) + cos(100πt). What values of S will ensure that the output of the
lowpass filter is y(t) = x(t/20)? Sketch the spectra of x(t), s(t), and y(t) for the chosen value of
S to confirm your results.
[Hints and Suggestions: We require α = f_0/(f_0 − S), where f_0 is the fundamental frequency. In part (a),
try S_m = S/m (with integer m) for other choices (as long as S_m > 2f_max/α).]
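The hint's relation can be spot-checked for part (a) with a short numeric sketch (the variable names are mine): for f_0 = 10 Hz and a desired stretch α = 10, solving α = f_0/(f_0 − S) gives S = 9 Hz, and the reconstructed (aliased) tone is f_0 − S = 1 Hz = f_0/α.

```python
f0 = 10.0                      # fundamental of x(t) = 1 + cos(20*pi*t)
alpha = 10.0                   # desired stretch: y(t) = x(t/10) = x(0.1t)
S = f0 * (alpha - 1) / alpha   # solve alpha = f0 / (f0 - S) for S
aliased = f0 - S               # tone recovered by the 0.5*S lowpass filter
```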
7.18 (Sampling and Reconstruction) A periodic signal whose one full period is x(t) = tri(20t) is band-
limited by an ideal analog lowpass filter whose cutoff frequency is f_C. It is then ideally sampled at
80 Hz. The sampled signal is reconstructed using an ideal lowpass filter whose cutoff frequency is 40
Hz to obtain the signal y(t). Find y(t) if f_C = 20, 40, and 60 Hz.
7.19 (Sampling and Reconstruction) Sketch the spectra at the intermediate points and at the output
of the following cascaded systems. Assume that the input is x(t) = 5 sinc(5t), the sampler operates at
S = 10 Hz and performs ideal sampling, and the cutoff frequency of the ideal lowpass filter is 5 Hz.
(a) x(t) → sampler → ideal LPF → y(t)
(b) x(t) → sampler → h(t) = u(t) − u(t − 0.1) → ideal LPF → y(t)
(c) x(t) → sampler → h(t) = u(t) − u(t − 0.1) → ideal LPF → |H(f)| = 1/|sinc(0.1f)| → y(t)
[Hints and Suggestions: X(f) is a rectangular pulse. In part (a), replicate X(f). In part (b), the
lowpass filter preserves only the central image, distorted (multiplied) by sinc(f/S). In (c), the
compensating filter removes the sinc distortion.]
7.20 (Sampling and Aliasing) The signal x(t) = e^{−t}u(t) is sampled at a rate S such that the maximum
aliased magnitude is less than 5% of the peak magnitude of the un-aliased image. Estimate the sampling
rate S.
[Hints and Suggestions: Sketch the replicated spectra and observe that the maximum aliasing occurs
at f = 0.5S. So, we require |X(0.5S)| ≤ 0.05|X(0)|.]
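Following the hint, the estimate has a closed form (a sketch; X(f) = 1/(1 + j2πf) for this x(t)): setting |X(0.5S)| = 0.05|X(0)| gives 1 + (πS)^2 = 400, or S = √399/π ≈ 6.36 Hz.

```python
import math

def Xmag(f):
    # |X(f)| for x(t) = e^(-t) u(t), since X(f) = 1 / (1 + j*2*pi*f)
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * f) ** 2)

# Worst-case aliasing is at f = 0.5*S; require |X(0.5S)| = 0.05 |X(0)|
S = math.sqrt(400.0 - 1.0) / math.pi
```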
7.21 (Signal Recovery) A sinusoid x(t) = sin(150πt) is ideally sampled at 80 Hz. Describe the signal
y(t) that is recovered if the sampled signal is passed through the following filters.
(a) An ideal lowpass filter with cutoff frequency f_C = 10 Hz
(b) An ideal lowpass filter with cutoff frequency f_C = 100 Hz
(c) An ideal bandpass filter with a passband between 60 Hz and 80 Hz
(d) An ideal bandpass filter with a passband between 60 Hz and 100 Hz
[Hints and Suggestions: The spectrum of the sampled signal contains impulse pairs at ±75 Hz
replicated every 80 Hz. Sketch this with the two-sided spectrum of each filter to obtain each output.]
7.22 (Sampling and Aliasing) A speech signal x(t) band-limited to 4 kHz is sampled at 10 kHz to
obtain x[n]. The sampled signal x[n] is filtered by an ideal bandpass filter whose passband extends
over 0.03 ≤ F ≤ 0.3 to obtain y[n]. Will the sampled output y[n] be contaminated if x(t) also includes
an undesired signal at the following frequencies?
(a) 60 Hz    (b) 360 Hz    (c) 8.8 kHz    (d) 9.8 kHz
[Hints and Suggestions: Use the digital frequency of x[n] in the principal range.]
7.23 (DTFT and Sampling) The transfer function of a digital filter is H(F) = rect(2F)e^{−j0.5πF}.
(a) What is the impulse response h[n] of this filter?
(b) The analog signal x(t) = cos(0.25πt) is applied to the system shown.
x(t) → sampler → H(F) → ideal LPF → y(t)
The sampler is ideal and operates at S = 1 Hz. The cutoff frequency of the ideal lowpass filter
is f_C = 0.5 Hz. How are X(f) and Y(f) related? What is y(t)?
[Hints and Suggestions: In (b), find the digital frequency of x[n] in the principal range and use it
to find y[n] and y(t). Then, find the Fourier transform of y(t).]
7.24 (Sampling and Filtering) The analog signal x(t) = cos(2πf_0 t) is applied to the following system:
x(t) → sampler → H(F) → ideal LPF → y(t)
The sampler is ideal and operates at S = 60 Hz. The filter H(F) describes the frequency response of
a three-point averaging filter whose impulse response is h[n] = (1/3){1, …

… {1, 2, 3, 2} (with t_s = 1 s) is passed through an interpolating
filter.
(a) Sketch the output if the filter performs step interpolation.
(b) Sketch the output if the filter performs linear interpolation.
(c) What is the interpolated value at t = 2.5 s if the filter performs sinc interpolation?
(d) What is the interpolated value at t = 2.5 s if the filter performs raised cosine interpolation
(assume that R = 0.5)?
[Hints and Suggestions: For (a), sketch the staircase signal. In (b), simply connect the dots. For
(c), evaluate x(t) = ∑ x[k] sinc(t − k) at t = 2.5. For (d), use the raised-cosine formula.]
7.31 (DTFT and Sampling) A signal is reconstructed using a filter that performs step interpolation
between samples. The reconstruction sampling interval is t_s.
(a) What is the impulse response h(t) and transfer function H(f) of the interpolating filter?
(b) What is the transfer function H_C(f) of a filter that can compensate for the non-ideal reconstruction?
[Hints and Suggestions: For the step interpolation filter, h(t) = (1/t_s)[u(t) − u(t − t_s)].]
7.32 (DTFT and Sampling) A signal is reconstructed using a filter that performs linear interpolation
between samples. The reconstruction sampling interval is t_s.
(a) What is the impulse response h(t) and transfer function H(f) of the interpolating filter?
(b) What is the transfer function H_C(f) of a filter that can compensate for the non-ideal reconstruction?
[Hints and Suggestions: For the linear interpolation filter, h(t) = (1/t_s) tri(t/t_s).]
7.33 (Interpolation) We wish to sample a speech signal band-limited to B = 4 kHz using zero-order-hold
sampling.
(a) Select the sampling frequency S if the spectral magnitude of the sampled signal at 4 kHz is to
be within 90% of its peak magnitude.
(b) On recovery, the signal is filtered using a Butterworth filter with an attenuation of less than 1 dB
in the passband and more than 30 dB for all image frequencies. Compute the total attenuation
in decibels due to both the sampling and filtering operations at 4 kHz and 12 kHz.
(c) What is the order of the Butterworth filter?
[Hints and Suggestions: The zero-order-hold causes sinc distortion of the form sinc(f/S). For (a),
we require sinc(B/S) ≥ 0.9. For (b), it gives an additional attenuation of −20 log |sinc(f/S)|.]
7.34 (Up-Sampling) The analog signal x(t) = 4000 sinc^2(4000t) is sampled at 12 kHz to obtain the signal
x[n]. The sampled signal is up-sampled (zero interpolated) by N to obtain the signal y[n] as follows:
x(t) → sampler → x[n] → up-sample N → y[n]
Sketch the spectra of X(F) and Y(F) over −1 ≤ F ≤ 1 for N = 2 and N = 3.
[Hints and Suggestions: After up-sampling by N, there are N compressed images per period (centered at F = k/N).]
7.35 (Up-Sampling) The signal x[n] is up-sampled (zero interpolated) by N to obtain the signal y[n].
Sketch X(F) and Y(F) over −1 ≤ F ≤ 1 for the following cases.
(a) x[n] = sinc(0.4n), N = 2
(b) X(F) = tri(4F), N = 2
(c) X(F) = tri(6F), N = 3
[Hints and Suggestions: After up-sampling by N, there are N compressed images per period (centered at F = k/N).]
7.36 (Linear Interpolation) Consider a system that performs linear interpolation by a factor of N. One
way to construct such a system is to perform up-sampling by N (zero interpolation between signal
samples) and pass the up-sampled signal through an interpolating filter with impulse response h[n]
whose output is the linearly interpolated signal y[n] as shown.
x[n] → up-sample N → filter h[n] → y[n]
(a) What impulse response h[n] will result in linear interpolation by a factor of N = 2?
(b) Sketch the frequency response H(F) of the interpolating filter for N = 2.
(c) Let the input to the system be X(F) = rect(2F). Sketch the spectrum at the output of the
up-sampler and the interpolating filter.
[Hints and Suggestions: In (a), linear interpolation requires h[n] = tri(n/N). In (c), there are N
compressed images per period after up-sampling (centered at F = k/N). The digital filter removes any
images outside |F| < F_C.]
7.37 (Interpolation) The input x[n] is applied to a system that up-samples by N followed by an ideal
lowpass filter with a cutoff frequency of F_C to generate the output y[n]. Draw a block diagram of the
system. Sketch the spectra at various points of this system and find y[n] and Y(F) for the following
cases.
(a) x[n] = sinc(0.4n), N = 2, F_C = 0.4
(b) X(F) = tri(4F), N = 2, F_C = 0.375
[Hints and Suggestions: After up-sampling by N, there are N compressed images per period (centered at F = k/N). The digital filter removes any images outside |F| < F_C.]
7.38 (Decimation) The signal x(t) = 2 cos(100πt) is ideally sampled at 400 Hz to obtain the signal x[n],
and the sampled signal is decimated by N to obtain the signal y[n]. Sketch the spectra X(F) and
Y(F) over −1 ≤ F ≤ 1 for the cases N = 2 and N = 3.
[Hints and Suggestions: In decimation by N, the images are stretched by N, amplitude scaled by
1/N, and added (where overlap exists).]
7.39 (Decimation) The signal x[n] is decimated by N to obtain the decimated signal y[n]. Sketch X(F)
and Y(F) over −1 ≤ F ≤ 1 for the following cases.
(a) x[n] = sinc(0.4n), N = 2
(b) X(F) = tri(4F), N = 2
(c) X(F) = tri(3F), N = 2
[Hints and Suggestions: In decimation by N, the images are stretched by N, amplitude scaled by
1/N, and added (where overlap exists).]
7.40 (Interpolation and Decimation) Consider the following system:
x[n] → up-sample N → digital LPF → down-sample M → y[n]
The signal x[n] is up-sampled (zero-interpolated) by N = 2, the digital lowpass filter is ideal and has
a cutoff frequency of F_C, and down-sampling is by M = 3. Sketch X(F) and Y(F) and explain how
they are related for the following cases.
(a) X(F) = tri(4F) and F_C = 0.125    (b) X(F) = tri(2F) and F_C = 0.25
[Hints and Suggestions: After up-sampling by N, there are N compressed images per period (centered at F = k/N). The digital filter removes any images outside |F| < F_C. In down-sampling by M, the
images are stretched by M, amplitude scaled by 1/M, and added (where overlap exists).]
7.41 (Interpolation and Decimation) For each of the following systems, X(F) = tri(4F). The digital
lowpass filter is ideal and has a cutoff frequency of F_C = 0.25 and a gain of 2. Sketch the spectra at
the various points over −1 ≤ F ≤ 1 and determine whether any systems produce identical outputs.
(a) x[n] → up-sample N = 2 → digital LPF → down-sample M = 2 → y[n]
(b) x[n] → down-sample M = 2 → digital LPF → up-sample N = 2 → y[n]
(c) x[n] → down-sample M = 2 → up-sample N = 2 → digital LPF → y[n]
[Hints and Suggestions: After up-sampling by N, there are N compressed images per period (centered at F = k/N). The digital filter removes any images outside |F| < F_C. In down-sampling by M, the
images are stretched by M, amplitude scaled by 1/M, and added (where overlap exists).]
7.42 (Interpolation and Decimation) For each of the following systems, X(F) = tri(3F). The digital
lowpass filter is ideal and has a cutoff frequency of F_C = 1/3 and a gain of 2. Sketch the spectra at the
various points over −1 ≤ F ≤ 1 and determine whether any systems produce identical outputs.
(a) x[n] → up-sample N = 2 → digital LPF → down-sample M = 2 → y[n]
(b) x[n] → down-sample M = 2 → digital LPF → up-sample N = 2 → y[n]
(c) x[n] → down-sample M = 2 → up-sample N = 2 → digital LPF → y[n]
[Hints and Suggestions: After up-sampling by N, there are N compressed images per period (centered at F = k/N). The digital filter removes any images outside |F| < F_C. In down-sampling by M, the
images are stretched by M, amplitude scaled by 1/M, and added (where overlap exists).]
7.43 (Interpolation and Decimation) You are asked to investigate the claim that interpolation by N
and decimation by N performed in any order, as shown, will recover the original signal.
Method 1: x[n] → up-sample N → digital LPF → down-sample N → y[n]
Method 2: x[n] → down-sample N → up-sample N → digital LPF → y[n]
(a) Let X(F) = tri(4F) and N = 2. Let the lowpass filter have a cutoff frequency of F_C = 0.25 and
a gain of 2. Sketch the spectra over −1 ≤ F ≤ 1 at the various points. For which method does
y[n] equal x[n]? Do the results justify the claim?
(b) Let X(F) = tri(3F) and N = 2. Let the lowpass filter have a cutoff frequency of F_C = 1/3 and
a gain of 2. Sketch the spectra over −1 ≤ F ≤ 1 at the various points. For which method does
y[n] equal x[n]? Do the results justify the claim?
(c) Are any restrictions necessary on the input for x[n] to equal y[n] in each method? Explain.
[Hints and Suggestions: After up-sampling by N, there are N compressed images per period (centered at F = k/N). The digital filter removes any images outside |F| < F_C. In down-sampling by M, the
images are stretched by M, amplitude scaled by 1/M, and added (where overlap exists).]
7.44 (Fractional Delay) The following system is claimed to implement a half-sample delay:
x(t) → sampler → H(F) → ideal LPF → y(t)
The signal x(t) is band-limited to f_C, and the sampler is ideal and operates at the Nyquist rate. The
digital filter is described by H_1(F) = e^{−jπF}, |F| ≤ F_C, and the cutoff frequency of the ideal lowpass
filter is f_C.
(a) Sketch the magnitude and phase spectra at the various points in this system.
(b) Show that y(t) = x(t − 0.5t_s) (corresponding to a half-sample delay).
7.45 (Fractional Delay) In practice, the signal y[n] = x[n − 0.5] may be generated from x[n] using
interpolation by 2 (to give x[0.5n]) followed by a one-sample delay (to give x[0.5(n − 1)]) and decimation
by 2 (to give x[n − 0.5]). This is implemented as follows:
x[n] → up-sample 2 → ideal LPF → 1-sample delay → down-sample 2 → y[n]
(a) Let X(F) = tri(4F). Sketch the magnitude and phase spectra at the various points.
(b) What should be the gain and the cutoff frequency of the ideal lowpass filter if y[n] = x[n − 0.5]
or Y(F) = X(F)e^{−jπF} (implying a half-sample delay)?
[Hints and Suggestions: Up-sampling by N gives N compressed images per period (centered at F = k/N).
The digital filter removes images outside |F| < F_C. The delay adds the linear phase −2πF, |F| < F_C.
In down-sampling by M, the images are stretched by M, amplitude scaled by 1/M.]
7.46 (Quantization SNR) Consider the signal x(t) = t^2, 0 ≤ t < 2. Choose t_s = 0.1 s, four quantization
levels, and rounding to find the following:
(a) The sampled signal x[n]
(b) The quantized signal x_Q[n]
(c) The actual quantization signal-to-noise ratio SNR_Q
(d) The statistical estimate of the quantization SNR_S
(e) An estimate of the SNR, assuming x(t) to be periodic, with period T = 2 s
[Hints and Suggestions: For (e), use P_S = (1/T) ∫_0^2 t^4 dt to estimate the SNR.]
7.47 (Quantization SNR) A sinusoid with a peak value of 4 V is sampled and then quantized by a 12-bit
quantizer whose full-scale range is ±5 V. What is the quantization SNR of the quantized signal?
[Hints and Suggestions: The dynamic range is D = 10 V and the signal power is P_S = 0.5(4)^2.]
7.48 (Quantization Noise Power) The quantization noise power based on quantization by rounding is σ² = Δ²/12, where Δ is the quantization step size. Find an expression for the quantization noise power based on
(a) Quantization by truncation.
(b) Quantization by sign-magnitude truncation.
[Hints and Suggestions: For part (a), the error is equally distributed between −Δ and 0, and so f(ε) = 1/Δ, −Δ < ε < 0, with mean m = −0.5Δ. For part (b), the error is equally distributed between −Δ and Δ, and so f(ε) = 1/(2Δ), −Δ < ε < Δ, with mean m = 0.]
7.49 (Sampling and Quantization) A sinusoid with amplitude levels of ±1 V is quantized by rounding, using a 12-bit quantizer. What is the rms quantization error and the quantization SNR?
[Hints and Suggestions: With Δ = V_FS/2^B, the rms quantization error is σ = Δ/√12.]
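A quick numerical check of Problem 7.49's hint, assuming a full-scale range V_FS = 2 for the ±1-V sinusoid:

```python
import math

B = 12
Vfs = 2.0                         # assumed full-scale range of a ±1 V sinusoid
delta = Vfs/2**B                  # quantization step
rms_err = delta/math.sqrt(12)     # rms quantization error
Ps = 0.5                          # power of a unit-amplitude sinusoid
Pn = delta**2/12                  # quantization noise power
snr = 10*math.log10(Ps/Pn)        # ≈ 74.0 dB
```

The result matches the familiar rule of thumb SNR ≈ 6.02B + 1.76 dB for a full-scale sinusoid.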
7.50 (Anti-Aliasing Filters) A speech signal is to be band-limited using an anti-aliasing third-order Butterworth filter with a half-power frequency of 4 kHz, and then sampled and quantized by an 8-bit quantizer. Determine the minimum stopband attenuation and the corresponding stopband frequency to ensure that the maximum stopband aliasing level (relative to the passband edge) is less than the rms quantization error.
[Hints and Suggestions: The attenuation is A_s = 20 log(2^B√6) = 10 log(1 + ν_s^{2n}), with ν_s = f_s/f_p.]
7.51 (Anti-Aliasing Filters) A noise-like signal with a flat magnitude spectrum is filtered using a third-order Butterworth filter with a half-power frequency of 3 kHz, and the filtered signal is sampled at 10 kHz. What is the aliasing level (relative to the signal level) at the half-power frequency in the sampled signal?
[Hints and Suggestions: The signal level at the passband edge (half-power frequency) is 0.707. The aliasing level at the passband edge is (1 + ν^{2n})^{−1/2}, with ν = (S − f_p)/f_p.]
7.52 (Anti-Aliasing Filters) A speech signal with amplitude levels of ±1 V is to be band-limited using an anti-aliasing second-order Butterworth filter with a half-power frequency of 4 kHz, and then sampled and quantized by an 8-bit quantizer. What minimum sampling rate S will ensure that the maximum aliasing error at the passband edge is less than the rms quantization level?
[Hints and Suggestions: With V_FS = 2, find Δ = V_FS/2^B and the quantization noise level Δ/√12. The aliasing level at the passband edge is (1 + ν^{2n})^{−1/2}, with ν = (S − f_p)/f_p.]
7.53 (Sampling and Quantization) The signal x(t) = 2 cos(2000πt) − 4 sin(4000πt) is quantized by rounding, using a 12-bit quantizer. What is the rms quantization error and the quantization SNR?
[Hints and Suggestions: The signal power is P_S = 0.5(2² + 4²). The peak value cannot exceed 6, so choose D = 12 (or twice the actual peak value).]
7.54 (Sampling and Quantization) A speech signal, band-limited to 4 kHz, is to be sampled and quantized by rounding, using an 8-bit quantizer. What is the conversion time of the quantizer if it is preceded by a sample-and-hold circuit with an aperture time of 20 ns and an acquisition time of 2 μs? Assume sampling at the Nyquist rate.
[Hints and Suggestions: Use S ≤ 1/(T_A + T_H + T_C).]
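The hint's bound can be evaluated directly. Assuming Nyquist-rate sampling at S = 8 kHz (twice the 4-kHz band limit):

```python
S = 8000                 # assumed Nyquist rate for a 4-kHz signal, Hz
Ta = 20e-9               # aperture time, s
Th = 2e-6                # acquisition time, s
# From S <= 1/(Ta + Th + Tc), the conversion time is bounded by
Tc_max = 1/S - Ta - Th   # ≈ 122.98 µs
```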
7.55 (Sampling and Quantization) A 10-kHz sinusoid with amplitude levels of ±1 V is to be sampled and quantized by rounding. How many bits are required to ensure a quantization SNR of 45 dB? What is the bit rate (number of bits per second) of the digitized signal if the sampling rate is chosen as twice the Nyquist rate?
[Hints and Suggestions: The signal power is P_S = 0.5. The noise power is σ² = Δ²/12, where Δ = 2/2^B. The SNR equals 10 log(P_S/P_N). The bit rate equals SB bits/second.]
7.56 (Anti-Imaging Filters) A digitized speech signal, band-limited to 4 kHz, is to be reconstructed using a zero-order-hold. What minimum reconstruction sampling rate will ensure that the signal level in the passband is attenuated by less than 1.2 dB due to the sinc distortion of the zero-order-hold? What will be the image rejection at the stopband edge?
[Hints and Suggestions: The sinc distortion is sinc(f/S). So, −20 log|sinc(f_p/S)| = 1.2. To find S, you will need to find a way to compute the inverse sinc!]
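The "inverse sinc" the hint calls for can be computed numerically. The sketch below (an assumed approach, not prescribed by the text) bisects on (0, 1), where sinc decreases monotonically from 1 to 0, using this problem's 4-kHz passband edge:

```python
import math

def sinc(x):
    # the book's convention: sinc(x) = sin(pi x)/(pi x)
    return 1.0 if x == 0 else math.sin(math.pi*x)/(math.pi*x)

def inv_sinc(target):
    # sinc(x) decreases monotonically on (0, 1), so bisect
    lo, hi = 1e-12, 1.0
    for _ in range(80):
        mid = 0.5*(lo + hi)
        if sinc(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

fp = 4000.0                     # passband edge (4 kHz)
target = 10**(-1.2/20)          # from -20 log|sinc(fp/S)| = 1.2 dB
x = inv_sinc(target)            # x = fp/S
S = fp/x                        # minimum reconstruction rate, ≈ 14 kHz
```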
7.57 (Anti-Imaging Filters) A digitized speech signal, band-limited to 4 kHz, is to be reconstructed using a zero-order-hold followed by an analog Butterworth lowpass filter. The signal level in the passband should be attenuated less than 1.5 dB, and an image rejection of better than 45 dB in the stopband is required. What are the specifications for the Butterworth filter if the reconstruction sampling rate is 16 kHz? What is the order of the Butterworth filter?
[Hints and Suggestions: Assume f_s = S − f_p and subtract the sinc-distortion attenuations −20 log|sinc(f_p/S)| and −20 log|sinc(f_s/S)| from the given values to find the filter specifications.]
7.58 (Anti-Aliasing and Anti-Imaging Filters) A speech signal is to be band-limited using an anti-aliasing Butterworth filter with a half-power frequency of 4 kHz. The sampling frequency is 20 kHz. What filter order will ensure that the in-band aliasing level is less than 1% of the signal level? The processed signal is to be reconstructed using a zero-order-hold. What is the stopband attenuation required of an anti-imaging filter to ensure image rejection of better than 50 dB?
[Hints and Suggestions: The signal level at the half-power frequency is 0.707. The aliasing level at the passband edge is (1 + ν^{2n})^{−1/2}, with ν = (S − f_p)/f_p. Find the filter order n and subtract the attenuation due to sinc distortion from 50 dB to set the filter stopband attenuation.]
COMPUTATION AND DESIGN
7.59 (Interpolating Functions) Consider the signal x(t) = cos(0.5πt), sampled at S = 1 Hz to generate the sampled signal x[n]. We wish to compute the value of x(t) at t = 0.5 by interpolating between its samples.
(a) Superimpose a plot of x(t) and its samples x[n] over one period. What is the value of x(t)
predicted by step interpolation and linear interpolation of x[n]? How good are these estimates?
Can these estimates be improved by taking more signal samples (using the same interpolation
schemes)?
(b) Use the sinc interpolation formula

x(t) = Σ_{n=−∞}^{∞} x[n] sinc[(t − nt_s)/t_s]

to obtain an estimate of x(0.5). With t = 0.5 and t_s = 1/S = 1, compute the summation for |n| ≤ 10, 20, 50 to generate three estimates of x(0.5). How good are these estimates? Would you expect the estimate to converge to the actual value as more terms are included in the summation (i.e., as more signal samples are included)? Compare the advantages and disadvantages of sinc interpolation with the schemes in part (a).
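The truncated sums of part (b) are easy to tabulate. The sketch below assumes the signal x(t) = cos(0.5πt) of this problem and uses NumPy's sinc, which matches the book's convention sinc(u) = sin(πu)/(πu):

```python
import numpy as np

ts = 1.0                           # ts = 1/S = 1 s
t = 0.5
true_val = float(np.cos(0.5*np.pi*t))   # exact x(0.5) ≈ 0.7071
est = {}
for Nmax in (10, 20, 50):
    n = np.arange(-Nmax, Nmax + 1)
    xn = np.cos(0.5*np.pi*n*ts)    # samples of x(t) = cos(0.5πt)
    est[Nmax] = float(np.sum(xn*np.sinc((t - n*ts)/ts)))
```

The truncated estimates converge (slowly) toward the exact value as more samples are included, since the terms decay only as 1/n.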
7.60 (Interpolating Functions) To interpolate a signal x[n] by N, we use an up-sampler (that places N − 1 zeros after each sample) followed by a filter that performs the appropriate interpolation, as shown:

x[n] → up-sample ↑N → interpolating filter → y[n]

The filter impulse response for step interpolation, linear interpolation, and ideal (sinc) interpolation is

h_S[n] = u[n] − u[n − (N − 1)]    h_L[n] = tri(n/N)    h_I[n] = sinc(n/N), |n| ≤ M

Note that the ideal interpolating function is actually of infinite length but must be truncated in practice. Generate the test signal x[n] = cos(0.5nπ), 0 ≤ n ≤ 3. Up-sample this by N = 8 (seven zeros after each sample) to obtain the signal x_U[n]. Use the Matlab routine filter to filter x_U[n], using
(a) The step-interpolation filter, to obtain the filtered signal x_S[n]. Plot x_U[n] and x_S[n] on the same plot. Does the system perform the required interpolation? Does the result look like a sine wave?
(b) The linear-interpolation filter, to obtain the filtered signal x_L[n]. Plot x_U[n] and a delayed (by 8) version of x_L[n] (to account for the noncausal nature of h_L[n]) on the same plot. Does the system perform the required interpolation? Does the result look like a sine wave?
(c) The ideal interpolation filter (with M = 4, 8, 16), to obtain the filtered signal x_I[n]. Plot x_U[n] and a delayed (by M) version of x_I[n] (to account for the noncausal nature of h_I[n]) on the same plot. Does the system perform the required interpolation? Does the result look like a sine wave? What is the effect of increasing M on the interpolated signal? What is the effect of increasing both M and the signal length? Explain.
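The linear-interpolation branch of this problem can be sketched without Matlab; the Python version below (using convolution in place of the `filter` routine) up-samples by N = 8 and applies h_L[n] = tri(n/N), so the output, delayed by N − 1, passes through the original samples and ramps linearly between them:

```python
import numpy as np

N = 8
x = np.cos(0.5*np.pi*np.arange(4))     # test signal: 1, 0, -1, 0
xu = np.zeros(len(x)*N)
xu[::N] = x                            # up-sample: 7 zeros after each sample
n = np.arange(-(N - 1), N)             # support of tri(n/N)
hL = 1 - np.abs(n)/N                   # linear-interpolation filter
xl = np.convolve(xu, hL)               # output, delayed by N-1 samples
```

Sample xl[N−1+k] linearly interpolates between x[0] and x[1] for 0 ≤ k ≤ N, and similarly for later segments.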
7.61 (Compensating Filters) Digital filters are often used to compensate for the sinc distortion of a zero-order-hold DAC by providing a 1/sinc(F) boost. Two such filters are described by

Compensating Filter 1:  y[n] = (1/16)(−x[n] + 18x[n − 1] − x[n − 2])
Compensating Filter 2:  y[n] + (1/8)y[n − 1] = (9/8)x[n]

(a) For each filter, state whether it is FIR (and if so, linear phase) or IIR.
(b) Plot the frequency response of each filter and compare with |1/sinc(F)|.
(c) Over what digital frequency range does each filter provide the required sinc boost? Which of these filters provides better compensation?
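The comparison in part (b) can be sketched numerically (the sign conventions of the two difference equations are as reconstructed above). Both filters have unit gain at dc and a rising magnitude that tracks 1/|sinc(F)| at low frequencies:

```python
import numpy as np

F = np.linspace(0, 0.5, 101)
z1 = np.exp(-2j*np.pi*F)                 # e^{-j2πF}
H1 = (-1 + 18*z1 - z1**2)/16             # Filter 1 (FIR)
H2 = (9/8)/(1 + z1/8)                    # Filter 2 (IIR)
boost = 1/np.abs(np.sinc(F))             # desired 1/|sinc(F)| boost
```

Plotting |H1|, |H2|, and the boost against F shows over what range each filter provides adequate compensation.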
7.62 (Up-Sampling and Decimation) Let x[n] = cos(0.2nπ) + 0.5 cos(0.4nπ), 0 ≤ n ≤ 100.
(a) Plot the spectrum of this signal.
(b) Generate the zero-interpolated signal y[n] = x[n/2] and plot its spectrum. Can you observe the
spectrum replication? Is there a correspondence between the frequencies in y[n] and x[n]? Should
there be? Explain.
(c) Generate the decimated signal d[n] = x[2n] and plot its spectrum. Can you observe the stretching effect in the spectrum? Is there a correspondence between the frequencies in d[n] and x[n]? Should there be? Explain.
(d) Generate the decimated signal g[n] = x[3n] and plot its spectrum. Can you observe the stretching effect in the spectrum? Is there a correspondence between the frequencies in g[n] and x[n]? Should there be? Explain.
7.63 (Frequency Response of Interpolating Functions) The impulse responses of filters for step interpolation, linear interpolation, and ideal (sinc) interpolation by N are given by

h_S[n] = u[n] − u[n − (N − 1)]    h_L[n] = tri(n/N)    h_I[n] = sinc(n/N)

Note that the ideal interpolating function is of infinite length.
(a) Plot the frequency response of each interpolating function for N = 4 and N = 8.
(b) How does the response of the step-interpolation and linear-interpolation schemes compare with ideal interpolation?
7.64 (Interpolating Functions) To interpolate a signal x[n] by N, we use an up-sampler (that places N − 1 zeros after each sample) followed by a filter that performs the appropriate interpolation. The filter impulse response for step interpolation, linear interpolation, and ideal (sinc) interpolation is chosen as

h_S[n] = u[n] − u[n − (N − 1)]    h_L[n] = tri(n/N)    h_I[n] = sinc(n/N), |n| ≤ M

Note that the ideal interpolating function is actually of infinite length and must be truncated in practice. Generate the test signal x[n] = cos(0.5nπ), 0 ≤ n ≤ 3. Up-sample this by N = 8 (seven zeros after each sample) to obtain the signal x_U[n]. Use the Matlab routine filter to filter x_U[n] as follows:
(a) Use the step-interpolation filter to obtain the filtered signal x_S[n]. Plot x_U[n] and x_S[n] on the same plot. Does the system perform the required interpolation? Does the result look like a sine wave?
(b) Use the step-interpolation filter followed by the compensating filter y[n] = (−x[n] + 18x[n − 1] − x[n − 2])/16 to obtain the filtered signal x_C[n]. Plot x_U[n] and x_C[n] on the same plot. Does the system perform the required interpolation? Does the result look like a sine wave? Is there an improvement compared to part (a)?
(c) Use the linear-interpolation filter to obtain the filtered signal x_L[n]. Plot x_U[n] and a delayed (by 8) version of x_L[n] (to account for the noncausal nature of h_L[n]) on the same plot. Does the system perform the required interpolation? Does the result look like a sine wave?
(d) Use the ideal interpolation filter (with M = 4, 8, 16) to obtain the filtered signal x_I[n]. Plot x_U[n] and a delayed (by M) version of x_I[n] (to account for the noncausal nature of h_I[n]) on the same plot. Does the system perform the required interpolation? Does the result look like a sine wave? What is the effect of increasing M on the interpolated signal? Explain.
7.65 (FIR Filter Design) A 22.5-Hz sinusoid is contaminated by 60-Hz interference. We wish to sample this signal and design a causal 3-point linear-phase FIR digital filter, operating at a sampling frequency of S = 180 Hz, to eliminate the interference and pass the desired signal with unit gain.
(a) Argue that an impulse response of the form h[n] = …

Figure 8.1 The features of a typical lowpass digital filter (the magnitude |H(F)| and the dB magnitude 20 log|H(F)| over |F| ≤ 0.5, showing the passband with edge F_p and attenuation A_p, the transition band, and the stopband with edge F_s and attenuation A_s)
If analog frequencies are specified in the design, or if the digital filter is to be designed for processing sampled analog signals, the sampling rate must also be given. The design process essentially requires three steps: specification of the design parameters, design of the least complex transfer function that meets or beats the design specifications, and realization of the transfer function in software or hardware. The design based on any set of performance specifications is, at best, a compromise at all three levels. At the first level, the actual filter may never meet performance specifications if they are too stringent; at the second, the same set of specifications may lead to several possible realizations; and at the third, quantization and roundoff errors may render the design useless if it is based on too critical a set of design values. The fewer or less stringent the specifications, the better the chances of achieving both the design objectives and a practical implementation.
8.1.2 Techniques of Digital Filter Design
Digital filter design revolves around two distinctly different approaches. If linear phase is not critical, IIR filters yield a much smaller filter order for a given application. Only FIR filters can be designed with linear phase (no phase distortion). Their design is typically based on selecting a symmetric impulse response sequence whose length is chosen to meet design specifications. This choice is often based on iterative techniques or on trial and error. For given specifications, FIR filters require many more elements in their realization than do IIR filters.
8.2 Symmetric Sequences and Linear Phase
Symmetric sequences possess linear phase and result in a constant delay with no amplitude distortion. This is an important consideration in filter design. The DTFT of a real, even symmetric sequence x[n] is of the form H(F) = A(F) and is always real, and the DTFT of a real, odd symmetric sequence is of the form H(F) = jA(F) and is purely imaginary. Symmetric sequences also imply noncausal filters. However, such filters can be made causal if the impulse response is suitably delayed. A time shift of x[n] to x[n − M] introduces only a linear phase of φ(F) = −2πMF. The DTFT of sequences that are symmetric about their midpoint is said to possess generalized linear phase. Generalized linear phase is illustrated in Figure 8.2. The term generalized means that φ(F) may include a jump (of π at F = 0, if H(F) is imaginary). There may also be phase jumps of 2π (if the phase is restricted to the principal range −π < φ(F) ≤ π). If we plot the magnitude |H(F)|, there will also be phase jumps of π (where the amplitude A(F) changes sign).
REVIEW PANEL 8.2
Linear-Phase Filters Provide a Constant Group Delay
Figure 8.2 Examples of linear phase and generalized linear phase (a jump of π at F = 0 if H(F) is imaginary; jumps of π where A(F) changes sign)
8.2.1 Classification of Linear-Phase Sequences
The length N of finite symmetric sequences can be odd or even, since the center of symmetry may fall on a sample point (for odd N) or midway between samples (for even N). This results in four possible types of symmetric sequences.
Type 1 Sequences

A type 1 sequence h_1[n] and its amplitude spectrum A_1(F) are illustrated in Figure 8.3.

Figure 8.3 Features of a type 1 symmetric sequence (odd length; even symmetry about the center of symmetry; A(F) even symmetric about both F = 0 and F = 0.5)

This sequence is even symmetric with odd length N, and a center of symmetry at the integer value M = (N − 1)/2. Using Euler's relation, its frequency response H_1(F) may be expressed as

H_1(F) = { h[M] + 2 Σ_{k=0}^{M−1} h[k] cos[2π(M − k)F] } e^{−j2πMF} = A_1(F)e^{−j2πMF}    (8.1)

Thus, H_1(F) shows a linear phase of −2πMF, and a constant group delay of M. The amplitude spectrum A_1(F) is even symmetric about both F = 0 and F = 0.5, and both |H_1(0)| and |H_1(0.5)| can be nonzero.
Type 2 Sequences

A type 2 sequence h_2[n] and its amplitude spectrum A_2(F) are illustrated in Figure 8.4.
This sequence is also even symmetric but of even length N, and a center of symmetry at the half-integer value M = (N − 1)/2. Using Euler's relation, its frequency response H_2(F) may be expressed as

H_2(F) = { 2 Σ_{k=0}^{M−1/2} h[k] cos[2π(M − k)F] } e^{−j2πMF} = A_2(F)e^{−j2πMF}    (8.2)
Figure 8.4 Features of a type 2 symmetric sequence (even length; even symmetry about the half-integer center of symmetry; A(F) odd symmetric about F = 0.5, so that A(0.5) = 0)
Thus, H_2(F) also shows a linear phase of −2πMF, and a constant group delay of M. The amplitude spectrum A_2(F) is even symmetric about F = 0 and odd symmetric about F = 0.5, and as a result, |H_2(0.5)| is always zero.
Type 3 Sequences
A type 3 sequence h_3[n] and its amplitude spectrum A_3(F) are illustrated in Figure 8.5.
Figure 8.5 Features of a type 3 symmetric sequence (odd length; odd symmetry about the center of symmetry; A(F) odd symmetric about both F = 0 and F = 0.5, so that A(0) = A(0.5) = 0)
This sequence is odd symmetric with odd length N, and a center of symmetry at the integer value M = (N − 1)/2. Using Euler's relation, its frequency response H_3(F) may be expressed as

H_3(F) = j{ 2 Σ_{k=0}^{M−1} h[k] sin[2π(M − k)F] } e^{−j2πMF} = A_3(F)e^{j(0.5π−2πMF)}    (8.3)
Thus, H_3(F) shows a generalized linear phase of 0.5π − 2πMF, and a constant group delay of M. The amplitude spectrum A_3(F) is odd symmetric about both F = 0 and F = 0.5, and as a result, |H_3(0)| and |H_3(0.5)| are always zero.
Type 4 Sequences
A type 4 sequence h_4[n] and its amplitude spectrum A_4(F) are illustrated in Figure 8.6.
This sequence is odd symmetric with even length N, and a center of symmetry at the half-integer value M = (N − 1)/2. Using Euler's relation, its frequency response H_4(F) may be expressed as

H_4(F) = j{ 2 Σ_{k=0}^{M−1/2} h[k] sin[2π(M − k)F] } e^{−j2πMF} = A_4(F)e^{j(0.5π−2πMF)}    (8.4)
Figure 8.6 Features of a type 4 symmetric sequence (even length; odd symmetry about the half-integer center of symmetry; A(F) odd symmetric about F = 0, so that A(0) = 0)
Thus, H_4(F) also shows a generalized linear phase of 0.5π − 2πMF, and a constant group delay of M. The amplitude spectrum A_4(F) is odd symmetric about F = 0 and even symmetric about F = 0.5, and as a result, |H_4(0)| is always zero.
8.2.2 Applications of Linear-Phase Sequences
Table 8.1 summarizes the amplitude response characteristics of the four types of linear-phase sequences and their use in FIR digital filter design. Type 1 sequences are by far the most widely used because they allow us to design any filter type by an appropriate choice of filter coefficients. Type 2 sequences can be used for lowpass and bandpass filters, but not for bandstop or highpass filters (whose response is not zero at F = 0.5). Type 3 sequences are useful primarily for bandpass filters, differentiators, and Hilbert transformers. Type 4 sequences are suitable for highpass or bandpass filters, and for differentiators and Hilbert transformers. Bandstop filters (whose response is nonzero at F = 0 and F = 0.5) can be designed only with type 1 sequences. Only antisymmetric sequences (whose transfer function is imaginary) or their causal versions (which correspond to type 3 and type 4 sequences) can be used to design digital differentiators and Hilbert transformers.

Table 8.1 Applications of Symmetric Sequences

Type   H(F) = 0 (or H(Ω) = 0) at                 Application
1      —                                         All filter types. Only sequence for BSF
2      F = 0.5 (Ω = π)                           Only LPF and BPF
3      F = 0 (Ω = 0), F = 0.5 (Ω = π)            BPF, differentiators, Hilbert transformers
4      F = 0 (Ω = 0)                             HPF, BPF, differentiators, Hilbert transformers
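The zero-frequency constraints of Table 8.1 can be verified numerically. The sketch below uses hypothetical example sequences of each type (not from the text) and evaluates the DTFT directly at F = 0 and F = 0.5:

```python
import numpy as np

def H(h, F):
    # DTFT of a causal sequence h[0..N-1] at digital frequency F
    n = np.arange(len(h))
    return np.sum(h*np.exp(-2j*np.pi*F*n))

# hypothetical example sequences of each type
h1 = np.array([1., 2., 3., 2., 1.])    # type 1: even symmetry, odd length
h2 = np.array([1., 2., 2., 1.])        # type 2: even symmetry, even length
h3 = np.array([1., 2., 0., -2., -1.])  # type 3: odd symmetry, odd length
h4 = np.array([1., 2., -2., -1.])      # type 4: odd symmetry, even length
```

Evaluating H at F = 0 and F = 0.5 confirms the forced zeros: H(h2, 0.5) = 0, H(h3, 0) = H(h3, 0.5) = 0, and H(h4, 0) = 0, while the type 1 sequence is unconstrained at both frequencies.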
8.2.3 FIR Filter Design
The design of FIR filters involves the selection of a finite sequence that best represents the impulse response of an ideal filter. FIR filters are always stable. Even more important, FIR filters are capable of perfectly linear phase (a pure time delay), meaning total freedom from phase distortion. For given specifications, however, FIR filters typically require a much higher filter order or length than do IIR filters. And sometimes we must go to great lengths to ensure linear phase! The three most commonly used methods for FIR filter design are window-based design using the impulse response of ideal filters, frequency sampling, and iterative design based on optimal constraints.
8.3 Window-Based Design
The window method starts by selecting the impulse response h_N[n] as a symmetrically truncated version of the impulse response h[n] of an ideal filter with frequency response H(F). The impulse response of an ideal lowpass filter with a cutoff frequency F_C is h[n] = 2F_C sinc(2nF_C). Its symmetric truncation yields

h_N[n] = 2F_C sinc(2nF_C),  |n| ≤ 0.5(N − 1)    (8.5)

Note that for an even length N, the index n is not an integer. Even though the designed filter is an approximation to the ideal filter, it is the best approximation (in the mean square sense) compared to any other filter of the same length. The problem is that it shows certain undesirable characteristics. Truncation of the ideal impulse response h[n] is equivalent to multiplication of h[n] by a rectangular window w[n] of length N. The spectrum of the windowed impulse response h_W[n] = h[n]w[n] is the (periodic) convolution of H(F) and W(F). Since W(F) has the form of a Dirichlet kernel, this spectrum shows overshoot and ripples (the Gibbs effect). It is the abrupt truncation of the rectangular window that leads to overshoot and ripples in the magnitude spectrum. To reduce or eliminate the Gibbs effect, we use tapered windows.
REVIEW PANEL 8.3
The Spectrum of the Truncated Ideal Impulse Response Shows the Gibbs Effect
h_N[n] = 2F_C sinc(2nF_C),  |n| ≤ 0.5(N − 1)
The Gibbs effect (spectral overshoot and oscillation) is due to abrupt truncation of h[n].
Tapered windows reduce (or eliminate) the spectral overshoot and oscillation.
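The Gibbs effect and the benefit of tapering are easy to see numerically. The sketch below (assumptions: N = 51, F_C = 0.25, and the von Hann taper from Table 8.2) compares the stopband ripple of the rectangular and von Hann windowed designs:

```python
import numpy as np

N, Fc = 51, 0.25
M = (N - 1)//2
n = np.arange(-M, M + 1)
h = 2*Fc*np.sinc(2*Fc*n)                    # truncated ideal lowpass
w = 0.5 + 0.5*np.cos(2*np.pi*n/(N - 1))     # von Hann window (Table 8.2)
F = np.linspace(0, 0.5, 501)
E = np.exp(-2j*np.pi*np.outer(F, n))        # DTFT evaluation matrix
Hr = np.abs(E @ h)                          # rectangular (abrupt truncation)
Hw = np.abs(E @ (h*w))                      # von Hann windowed
sb = F > 0.35                               # well inside the stopband
peak_rect, peak_hann = Hr[sb].max(), Hw[sb].max()
```

The tapered design shows a far smaller stopband ripple, and its magnitude at the cutoff F_C is close to 0.5, as discussed in Section 8.3.5.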
8.3.1 Characteristics of Window Functions
The amplitude response of symmetric, finite-duration windows invariably shows a mainlobe and decaying sidelobes that may be entirely positive or that may alternate in sign. The spectral measures for a typical window are illustrated in Figure 8.7.
Figure 8.7 Spectrum of a typical window (DTFT magnitude spectrum showing the peak P, the 0.707P and 0.5P levels, the peak sidelobe level P_SL, the widths W_3, W_6, W_M, and W_S, and the high-frequency decay)
Amplitude-based measures for a window include the peak sidelobe level (PSL), usually in decibels, and the decay rate D_S in dB/dec. Frequency-based measures include the mainlobe width W_M, the 3-dB and 6-dB widths (W_3 and W_6), and the width W_S to reach the peak sidelobe level. The windows commonly used in FIR filter design and their spectral features are listed in Table 8.2 and illustrated in Figure 8.8. As the window length N increases, the width parameters decrease, but the peak sidelobe level remains more or less constant. Ideally, the spectrum of a window should approximate an impulse and be confined to as narrow a mainlobe as possible, with as little energy in the sidelobes as possible.
Table 8.2 Some Windows for FIR Filter Design
Note: I_0(x) is the modified Bessel function of order zero.

Window                 Expression w[n],  −0.5(N − 1) ≤ n ≤ 0.5(N − 1)
Boxcar                 1
Cosine                 cos[πn/(N − 1)]
Riemann                sinc^L[2n/(N − 1)],  L > 0
Bartlett               1 − 2|n|/(N − 1)
von Hann (Hanning)     0.5 + 0.5 cos[2πn/(N − 1)]
Hamming                0.54 + 0.46 cos[2πn/(N − 1)]
Blackman               0.42 + 0.5 cos[2πn/(N − 1)] + 0.08 cos[4πn/(N − 1)]
Kaiser                 I_0(πβ√(1 − 4[n/(N − 1)]²)) / I_0(πβ)

Spectral Characteristics of Window Functions

Window               G_P      G_S/G_P   A_SL (dB)   W_M    W_S    W_6    W_3    D_S
Boxcar               1        0.2172    13.3        1      0.81   0.6    0.44   20
Cosine               0.6366   0.0708    23          1.5    1.35   0.81   0.59   40
Riemann              0.5895   0.0478    26.4        1.64   1.5    0.86   0.62   40
Bartlett             0.5      0.0472    26.5        2      1.62   0.88   0.63   40
von Hann (Hanning)   0.5      0.0267    31.5        2      1.87   1.0    0.72   60
Hamming              0.54     0.0073    42.7        2      1.91   0.9    0.65   20
Blackman             0.42     0.0012    58.1        3      2.82   1.14   0.82   60
Kaiser (β = 2.6)     0.4314   0.0010    60          2.98   2.72   1.11   0.80   20

NOTATION:
G_P: peak gain of mainlobe    G_S: peak sidelobe gain    D_S: high-frequency attenuation (dB/decade)
A_SL: sidelobe attenuation (G_P/G_S, in dB)    W_M: half-width of mainlobe    W_S: half-width of mainlobe to reach the peak sidelobe level
W_6: 6-dB half-width    W_3: 3-dB half-width

Notes:
1. All widths (W_M, W_S, W_6, W_3) must be normalized (divided) by the window length N.
2. Values for the Kaiser window depend on the parameter β. Empirically determined relations are

G_P = |sinc(jβ)|/I_0(πβ),    G_S/G_P = 0.22πβ/sinh(πβ),    W_M = (1 + β²)^{1/2},    W_S = (0.661 + β²)^{1/2}
Figure 8.8 Commonly used DTFT windows (N = 21) and their spectra (dB magnitude spectra with peak sidelobe levels: rectangular −13.3 dB, von Hann −31.5 dB, Hamming −42.7 dB, Blackman −58.1 dB, Kaiser with β = 2.6 −60 dB)
Most windows have been developed with some optimality criterion in mind. Ultimately, the trade-off is a compromise between the conflicting requirements of a narrow mainlobe (or a small transition width) and small sidelobe levels. Some windows are based on combinations of simpler windows. For example, the von Hann (or Hanning) window is the sum of a rectangular and a cosine window, the Bartlett window is the convolution of two rectangular windows, and the cos^α window is the product of a von Hann and a cosine window. Other windows are designed to emphasize certain desirable features. The von Hann window improves the high-frequency decay (at the expense of a larger peak sidelobe level). The Hamming window minimizes the sidelobe level (at the expense of a slower high-frequency decay). The Kaiser window has a variable parameter β that controls the peak sidelobe level. Still other windows are based on simple mathematical forms or easy application. For example, the cos^α windows have easily recognizable transforms, and the von Hann window is easy to apply as a convolution in the frequency domain. An optimal time-limited window should maximize the energy in its spectrum over a given frequency band. In the continuous-time domain, this constraint leads to a window based on prolate spheroidal wave functions of the first order. The Kaiser window best approximates such an optimal window in the discrete domain.
8.3.2 Some Other Windows
The Dolph (also called Chebyshev) window corresponds to the inverse DTFT of a spectrum whose sidelobes
remain constant (without decay) at a specied level and whose mainlobe width is the smallest for a given
length. The time-domain expression for this window is cumbersome. This window can be computed from
w[n] = IDFT
T
N1
cos
n
N 1
= cosh
cosh
1
(10
A/20
)
N 1
In terms of the passband ripple δ_p and the stopband ripple δ_s, the passband and stopband attenuation are

A_p (dB) = 20 log[(1 + δ_p)/(1 − δ_p)]        A_s (dB) = 20 log[(1 + δ_p)/δ_s] ≈ −20 log δ_s,  δ_p ≪ 1    (8.9)
To convert attenuation specifications (in decibels) to values for the ripple parameters, we use

δ_p = (10^{A_p/20} − 1)/(10^{A_p/20} + 1)        δ_s = (1 + δ_p)10^{−A_s/20} ≈ 10^{−A_s/20},  δ_p ≪ 1    (8.10)
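Before the δ_p ≪ 1 approximation, equations (8.9) and (8.10) are exact inverses of each other, as a short sketch confirms:

```python
import math

def atten_to_ripple(Ap, As):
    # equation (8.10), without the small-ripple approximation
    dp = (10**(Ap/20) - 1)/(10**(Ap/20) + 1)
    ds = (1 + dp)*10**(-As/20)
    return dp, ds

def ripple_to_atten(dp, ds):
    # equation (8.9), without the small-ripple approximation
    Ap = 20*math.log10((1 + dp)/(1 - dp))
    As = 20*math.log10((1 + dp)/ds)
    return Ap, As

dp, ds = atten_to_ripple(1.0, 50.0)    # Ap = 1 dB, As = 50 dB
Ap, As = ripple_to_atten(dp, ds)       # round trip recovers 1 dB and 50 dB
```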
Figure 8.10 The features of a typical filter (the magnitude |H(F)| lies between 1 − δ_p and 1 + δ_p in the passband and below δ_s in the stopband; the dB magnitude 20 log|H(F)| shows the attenuation A_p at the passband edge F_p and A_s at the stopband edge F_s, with the transition band between them)
Window-based design calls for normalization of the design frequencies by the sampling frequency, development of the lowpass filter by windowing the impulse response of an ideal filter, and conversion to the required filter type using spectral transformations. The method may appear deceptively simple (and it is), but some issues can be addressed only qualitatively. For example, the choice of cutoff frequency is affected by the window length N. The smallest length N that meets specifications depends on the choice of window, and the choice of window, in turn, depends on the (stopband) attenuation specifications.
8.3.5 Characteristics of the Windowed Spectrum
When we multiply the ideal impulse response h[n] = 2F_C sinc(2nF_C) by a window w[n] of length N in the time domain, the spectrum of the windowed impulse response h_W[n] = h[n]w[n] is the (periodic) convolution of H(F) and W(F), as illustrated in Figure 8.11.
Figure 8.11 The spectrum of a windowed ideal filter (not to scale): the periodic convolution of the ideal filter spectrum with the window spectrum produces a transition width F_T (from F_p to F_s) governed by the window mainlobe width, and a peak stopband ripple governed by the peak sidelobe level of the window
The ideal spectrum has a jump discontinuity at F = F_C. The windowed spectrum shows overshoot and ripples, and a finite transition width but no abrupt jump. Its magnitude at F = F_C equals 0.5 (corresponding to an attenuation of 6 dB).
Table 8.4 lists the characteristics of the windowed spectrum for various windows. It excludes windows (such as the Bartlett window) whose amplitude spectrum is entirely positive (because they result in a complete elimination of overshoot and the Gibbs effect). Since both the window function and the impulse response are symmetric sequences, the spectrum of the windowed filter is also endowed with symmetry. Here are some general observations about the windowed spectrum:

1. Even though the peak passband ripple equals the peak stopband ripple (δ_p = δ_s), the passband (or stopband) ripples are not of equal magnitude.

2. The peak stopband level of the windowed spectrum is typically slightly less than the peak sidelobe level of the window itself. In other words, the filter stopband attenuation (listed as A_WS in Table 8.4) is typically greater (by a few decibels) than the peak sidelobe attenuation of the window (listed as A_SL in Tables 8.2 and 8.3). The peak sidelobe level, the peak passband ripple, and the passband attenuation (listed as A_WP in Table 8.4) remain more or less constant with N.

3. The peak-to-peak width across the transition band is roughly equal to the mainlobe width of the window (listed as W_M in Tables 8.2 and 8.3). The actual transition width (listed as F_WS in Table 8.4) of the windowed spectrum (where the response first reaches 1 − δ_p and δ_s) is less than this width. The transition width F_WS is inversely related to the window length N (with F_WS ≈ C/N, where C is more or less a constant for each window).
The numbers vary in the literature, and the values in Table 8.4 were found here by using an ideal impulse response h[n] = 0.5 sinc(0.5n), with F_C = 0.25, windowed by a 51-point window. The magnitude specifications are normalized with respect to the peak magnitude. The passband and stopband attenuation are computed from the passband and stopband ripple (with δ_p = δ_s), using the relations already given.
© Ashok Ambardar, September 1, 2003
324 Chapter 8 Design of FIR Filters
Table 8.4 Characteristics of the Windowed Spectrum

Window                 Peak Ripple    Passband Attenuation    Peak Sidelobe Attenuation    Transition Width
                       δp = δs        A_WP (dB)               A_WS (dB)                    ΔF_WS ≈ C/N
Boxcar                 0.0897         1.5618                  21.7                         C = 0.92
Cosine                 0.0207         0.36                    33.8                         C = 2.1
Riemann                0.0120         0.2087                  38.5                         C = 2.5
von Hann (Hanning)     0.0063         0.1103                  44                           C = 3.21
Hamming                0.0022         0.0384                  53                           C = 3.47
Blackman               1.71×10^-4     2.97×10^-3              75.3                         C = 5.71
Dolph (R = 40 dB)      0.0036         0.0620                  49                           C = 3.16
Dolph (R = 50 dB)      9.54×10^-4     0.0166                  60.4                         C = 3.88
Dolph (R = 60 dB)      2.50×10^-4     0.0043                  72                           C = 4.6
Harris (0)             8.55×10^-4     0.0148                  61.4                         C = 5.36
Harris (1)             1.41×10^-4     2.44×10^-3              77                           C = 7.45
Harris (2)             1.18×10^-4     2.06×10^-3              78.5                         C = 5.6
Harris (3)             8.97×10^-5     1.56×10^-3              81                           C = 5.6
Harris (4)             9.24×10^-5     1.61×10^-3              81                           C = 5.6
Harris (5)             9.96×10^-6     1.73×10^-4              100                          C = 7.75
Harris (6)             1.94×10^-6     3.38×10^-5              114                          C = 7.96
Harris (7)             5.26×10^-6     9.15×10^-5              106                          C = 7.85
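The entries in Table 8.4 can be spot-checked numerically. The sketch below (Python with NumPy; the language is an assumption, since the text prescribes none) windows the same ideal impulse response h[n] = 0.5 sinc(0.5n) used to generate the table with a 51-point Hamming window and measures the peak stopband attenuation, which should land near the tabulated A_WS = 53 dB.

```python
import numpy as np

N = 51                    # window length used for Table 8.4
FC = 0.25                 # cutoff of the ideal lowpass prototype
n = np.arange(N) - (N - 1) // 2
h = 2 * FC * np.sinc(2 * FC * n) * np.hamming(N)   # windowed ideal filter

# Evaluate |H(F)| on a dense grid of digital frequencies 0 <= F < 0.5
F = np.linspace(0, 0.5, 4096)
H = np.abs(np.exp(-2j * np.pi * np.outer(F, n)) @ h)
H /= H.max()              # normalize to the peak magnitude, as in the text

# Peak stopband level, measured beyond the transition band (~C/N wide)
A_WS = -20 * np.log10(H[F > 0.30].max())
print(f"peak stopband attenuation = {A_WS:.1f} dB")
```

The measured value depends slightly on the grid density and on where the stopband is taken to begin, which is one reason such numbers vary in the literature.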
8.3.6 Selection of Window and Design Parameters
The choice of a window is based primarily on the design stopband specification A_s. The peak sidelobe attenuation A_WS of the windowed spectrum (listed in Table 8.4) should match (or exceed) the specified stopband attenuation A_s. Similarly, the peak passband attenuation A_WP of the windowed spectrum should not exceed the specified passband attenuation A_p, a condition that is often ignored because it is usually satisfied for most practical specifications.
The windowed spectrum is the convolution of the spectra of the impulse response and the window function, and this spectrum changes as we change the filter length. An optimal (in the mean square sense) impulse response and an optimal (in any sense) window may not together yield a windowed response with optimal features. Window selection is at best an art, and at worst a matter of trial and error. In practice, the three most commonly used windows are the von Hann (Hanning), Hamming, and Kaiser windows.
Choosing the Filter Length
The transition width of the windowed spectrum decreases with the length N. There is no accurate way to establish the minimum filter length N that meets design specifications. However, empirical estimates are
based on matching the given transition width specification ΔF_T to the transition width ΔF_WS = C/N of the windowed spectrum (as listed in Table 8.4):

ΔF_T = F_s − F_p = ΔF_WS ≈ C/N    ⟹    N ≈ C/(F_s − F_p)    (8.11)
Here, F_p and F_s are the digital passband and stopband frequencies. The window length depends on the choice of window (which dictates the choice of C). The closer the match between the stopband attenuation A_s and the stopband attenuation A_WS of the windowed spectrum, the smaller is the window length N. In any case, for a given window, this relation typically overestimates the smallest filter length, and we can often decrease this length and still meet design specifications.
The Kaiser Window
Empirical relations have been developed to estimate the filter length N of FIR filters based on the Kaiser window. We first compute the peak passband ripple δ_p and the peak stopband ripple δ_s, and choose the smallest of these as the ripple parameter δ:

δ_p = [10^(A_p/20) − 1] / [10^(A_p/20) + 1]        δ_s = 10^(−A_s/20)        δ = min(δ_p, δ_s)    (8.12)
The ripple parameter δ is used to recompute the actual stopband attenuation A_s0 in decibels:

A_s0 = −20 log δ dB    (8.13)
Finally, the length N is well approximated by

N ≈ (A_s0 − 7.95) / [14.36(F_s − F_p)] + 1,    A_s0 ≥ 21 dB
N ≈ 0.9222/(F_s − F_p) + 1,    A_s0 < 21 dB    (8.14)
The Kaiser window parameter β is estimated from the actual stopband attenuation A_s0, as follows:

β = 0.0351(A_s0 − 8.7),    A_s0 > 50 dB
β = 0.186(A_s0 − 21)^0.4 + 0.0251(A_s0 − 21),    21 dB ≤ A_s0 ≤ 50 dB
β = 0,    A_s0 < 21 dB    (8.15)
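Equations (8.12)–(8.15) translate directly into code. A minimal sketch (Python; the language is an assumption), applied to the specifications later used in Example 8.3(a); note that A_s0 = 50 dB falls in the middle branch of Eq. (8.15), which yields the β = 1.4431 quoted there:

```python
import math

def kaiser_estimates(Ap, As, Fp, Fs):
    """Kaiser-window length N and shape parameter beta (Eqs. 8.12-8.15).
    Ap, As are attenuations in dB; Fp, Fs are digital band edges."""
    dp = (10**(Ap / 20) - 1) / (10**(Ap / 20) + 1)  # peak passband ripple
    ds = 10**(-As / 20)                             # peak stopband ripple
    d = min(dp, ds)                                 # ripple parameter
    As0 = -20 * math.log10(d)                       # actual stopband attenuation
    if As0 >= 21:
        N = (As0 - 7.95) / (14.36 * (Fs - Fp)) + 1
    else:
        N = 0.9222 / (Fs - Fp) + 1
    if As0 > 50:
        beta = 0.0351 * (As0 - 8.7)
    elif As0 >= 21:
        beta = 0.186 * (As0 - 21)**0.4 + 0.0251 * (As0 - 21)
    else:
        beta = 0.0
    return math.ceil(N), beta, As0

# Specs of Example 8.3(a): Ap = 1 dB, As = 50 dB, Fp = 1/6, Fs = 1/3
N, beta, As0 = kaiser_estimates(1, 50, 1/6, 1/3)
print(N, round(beta, 4))   # N = 19, beta = 1.4431
```

Rounding N up (and up to the next odd value for half-band designs) matches the worked examples.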
Choosing the Cutoff Frequency
A common choice for the cutoff frequency (used in the expression for h[n]) is F_C = 0.5(F_p + F_s). The actual frequency that meets specifications for the smallest length is often less than this value. The cutoff frequency is affected by the filter length N. A design that ensures the minimum length N is based on starting with the above value of F_C, and then changing (typically reducing) the length and/or tweaking (typically decreasing) F_C until we just meet specifications (typically at the passband edge).
8.3.7 Spectral Transformations
A useful approach to the design of FIR filters other than lowpass starts with a lowpass prototype developed from given specifications. This is followed by appropriate spectral transformations to convert the lowpass prototype to the required filter type. Finally, the impulse response may be windowed by appropriate windows.
The spectral transformations are developed from the shifting and modulation properties of the DTFT. Unlike analog and IIR filters, these transformations do not change the filter order (or length). The starting point is an ideal lowpass filter, with unit passband gain and a cutoff frequency of F_C = 0.5(F_p + F_s), whose noncausal impulse response h_LP[n] is symmetrically truncated to length N and given by

h_LP[n] = 2F_C sinc(2nF_C),    −0.5(N − 1) ≤ n ≤ 0.5(N − 1)    (8.16)
If N is even, then n takes on non-integer values and it is more useful to work with the causal version that has the form h_LP[k], 0 ≤ k ≤ N − 1, where k = n + 0.5(N − 1) is always an integer. The sample values of the causal and noncausal versions are identical. Generating the causal version simply amounts to re-indexing the impulse response:

h_LP[k] = 2F_C sinc{2[k − 0.5(N − 1)]F_C},    0 ≤ k ≤ N − 1
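The re-indexing is only a relabeling, which a short check confirms (Python/NumPy; the language is an assumption):

```python
import numpy as np

N, FC = 9, 0.25
n = np.arange(N) - 0.5 * (N - 1)     # noncausal index -(N-1)/2 ... (N-1)/2
k = np.arange(N)                     # causal index 0 ... N-1
h_noncausal = 2 * FC * np.sinc(2 * FC * n)
h_causal = 2 * FC * np.sinc(2 * FC * (k - 0.5 * (N - 1)))
print(np.allclose(h_noncausal, h_causal))   # True: identical sample values
```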
The lowpass-to-highpass (LP2HP) transformation may be achieved in two ways, as illustrated in Figure 8.12.
[Figure: two panels. One panel windows the lowpass prototype with F_C = 0.5(F_p + F_s) and subtracts it from an impulse, h_HP[n] = δ[n] − w[n]·2F_C sinc(2nF_C). The other windows the prototype with F_C = 0.5 − 0.5(F_p + F_s) and alternates its signs, h_HP[n] = (−1)^n w[n]·2F_C sinc(2nF_C). Both yield the highpass response H(F).]
Figure 8.12 The lowpass-to-highpass transformation
The first form of the LP2HP transformation of Figure 8.12 is based on the result

H_HP(F) = 1 − H_LP(F)

The noncausal impulse response has the form

h_HP[n] = δ[n] − h_LP[n]
Note that this transformation is valid only if the filter length N is odd. It also assumes unit passband gain. For a passband gain of G, it modifies to h_HP[n] = Gδ[n] − h_LP[n]. If h_LP[n] describes the (truncated) impulse response of an ideal lowpass filter with F_C = 0.5(F_p + F_s), the cutoff frequency F_H of the highpass filter also equals F_C. Upon delaying the sequence by 0.5(N − 1) or re-indexing with n = k − 0.5(N − 1), we get the causal version of the highpass filter as

h_HP[k] = δ[k − 0.5(N − 1)] − 2F_C sinc{2[k − 0.5(N − 1)]F_C},    0 ≤ k ≤ N − 1
As an example, if we wish to design a highpass filter of length N = 15 with a cutoff frequency of F_H = 0.3, we start with a lowpass prototype whose cutoff frequency is also F_C = 0.3 and whose (noncausal) impulse response is h_LP[n] = 0.6 sinc(0.6n). This gives the noncausal impulse response of the highpass filter as

h_HP[n] = δ[n] − h_LP[n] = δ[n] − 0.6 sinc(0.6n),    −7 ≤ n ≤ 7
The causal impulse response has the form

h_HP[k] = δ[k − 7] − 0.6 sinc[0.6(k − 7)],    0 ≤ k ≤ 14
The second form of the LP2HP transformation uses the shifting property of the DTFT. Shifting the spectrum by F = 0.5 results in multiplication of the corresponding time signal by (−1)^n (a change in the sign of every other sample value). Shifting the spectrum of a lowpass filter by F = 0.5 results in the highpass form

H_HP(F) = H_LP(F − 0.5)
Note that this transformation is valid for any lowpass filter (FIR or IIR) with any length N (even or odd). If the lowpass filter has a cutoff frequency of F_C, a consequence of the frequency shift is that the cutoff frequency of the resulting highpass filter is F_H = 0.5 − F_C, regardless of the filter type. To design a highpass filter with a cutoff frequency of F_H, we must start with a lowpass prototype whose cutoff frequency is F_C = 0.5 − F_H. For an ideal causal filter, we start with the lowpass prototype

h_LP[k] = 2F_C sinc{2[k − 0.5(N − 1)]F_C},    0 ≤ k ≤ N − 1,    where F_C = 0.5 − F_H
The causal impulse response of the highpass filter is then

h_HP[k] = (−1)^k h_LP[k],    0 ≤ k ≤ N − 1    (8.17)
As an example, if we wish to design a highpass filter of length N = 12 with a cutoff frequency of F_H = 0.3, we start with a causal lowpass prototype whose cutoff frequency is F_C = 0.2 and whose impulse response is

h_LP[k] = 0.4 sinc[0.4(k − 5.5)],    0 ≤ k ≤ 11

The causal impulse response of the highpass filter is then given by

h_HP[k] = (−1)^k h_LP[k],    0 ≤ k ≤ 11
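Both LP2HP forms are easy to verify numerically. This sketch (Python/NumPy; the language is an assumption) builds a highpass filter with F_H = 0.3 each way, using unwindowed (rectangular) prototypes, and checks the gain at F = 0 and F = 0.5:

```python
import numpy as np

def lowpass_proto(N, FC):
    """Truncated ideal lowpass, causal indexing (Eq. 8.16)."""
    k = np.arange(N)
    return 2 * FC * np.sinc(2 * FC * (k - 0.5 * (N - 1)))

N, FH = 15, 0.3

# Form 1 (odd N only): h_HP[k] = delta[k - (N-1)/2] - h_LP[k], prototype FC = FH
h_hp1 = -lowpass_proto(N, FH)
h_hp1[(N - 1) // 2] += 1.0

# Form 2 (any N): h_HP[k] = (-1)^k h_LP[k], prototype FC = 0.5 - FH
h_hp2 = (-1.0) ** np.arange(N) * lowpass_proto(N, 0.5 - FH)

def gain(h, F):
    """Magnitude of the DTFT at digital frequency F."""
    return abs(np.sum(h * np.exp(-2j * np.pi * F * np.arange(len(h)))))

for h in (h_hp1, h_hp2):
    print(gain(h, 0.5), gain(h, 0.0))   # near 1 at F = 0.5, near 0 at F = 0
```

Rectangular truncation leaves visible Gibbs ripple; in practice the prototype would also be windowed as described above.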
The LP2BP and LP2BS transformations are based on arithmetic symmetry about the center frequency F_0. If the band edges are [F_1, F_2, F_3, F_4] in increasing order, arithmetic symmetry means that F_1 + F_4 = F_2 + F_3 = 2F_0 and implies equal transition widths. If the transition widths are not equal, we must relocate a band edge to make both transition widths equal to the smaller transition width, as shown in Figure 8.13.
[Figure: bandpass specifications H(F) with band edges F_1, F_2, F_3, F_4. When the transition widths differ (no arithmetic symmetry), relocating the edge F_4 makes both transition widths equal (arithmetic symmetry).]
Figure 8.13 How to ensure arithmetic symmetry of the band edges
The LP2BP and LP2BS transformations are illustrated in Figure 8.14. In each transformation, the center frequency F_0 is given by

F_0 = 0.5(F_2 + F_3) = 0.5(F_1 + F_4)    (8.18)

The lowpass-to-bandpass (LP2BP) transformation results by shifting the spectrum of a lowpass filter by ±F_0 to give

H_BP(F) = H_LP(F + F_0) + H_LP(F − F_0)
From Figure 8.14, the cutoff frequency F_C of the lowpass prototype is

F_C = 0.5(F_3 + F_4) − F_0    (8.19)
From the modulation property of the DTFT, shifting the spectrum H_LP(F) of the lowpass filter by ±F_0 results in multiplication of its impulse response h_LP[n] by 2 cos(2πnF_0), and we obtain the impulse response of the bandpass filter as

h_BP[n] = 2 cos(2πnF_0)h_LP[n] = 4F_C sinc(2nF_C) cos(2πnF_0),    −0.5(N − 1) ≤ n ≤ 0.5(N − 1)    (8.20)
Upon re-indexing, its causal version assumes the form h_BP[k], 0 ≤ k ≤ N − 1, where k = n + 0.5(N − 1).
A bandstop filter requires a type 4 sequence (with even symmetry and odd length N). The lowpass-to-bandstop (LP2BS) transformation of Figure 8.14 is described by H_BS(F) = 1 − H_BP(F) and leads to the noncausal impulse response

h_BS[n] = δ[n] − h_BP[n] = δ[n] − 4F_C sinc(2nF_C) cos(2πnF_0),    −0.5(N − 1) ≤ n ≤ 0.5(N − 1)    (8.21)
Upon re-indexing, its causal version has the form h_BS[k], 0 ≤ k ≤ N − 1, where k = n + 0.5(N − 1).
A second form of the LP2BS transformation results by describing the bandstop filter as the sum of a lowpass filter with a cutoff frequency of F_L = 0.5(F_1 + F_2) and a highpass filter with a cutoff frequency of F_H = 0.5(F_3 + F_4). The noncausal impulse response of the bandstop filter is thus given by

h_BS[n] = 2F_L sinc(2nF_L) + 2(−1)^n (0.5 − F_H) sinc[2n(0.5 − F_H)],    −0.5(N − 1) ≤ n ≤ 0.5(N − 1)    (8.22)

Upon re-indexing, its causal version has the form h_BS[k], 0 ≤ k ≤ N − 1, where k = n + 0.5(N − 1).
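A numerical sketch of the LP2BP and LP2BS transformations (Python/NumPy; the language is an assumption), using a rectangular-truncated prototype with F_C = 0.08 and center frequency F_0 = 0.24:

```python
import numpy as np

N, FC, F0 = 51, 0.08, 0.24                    # odd length, prototype cutoff, center
n = np.arange(N) - (N - 1) // 2
h_lp = 2 * FC * np.sinc(2 * FC * n)           # truncated ideal lowpass prototype
h_bp = 2 * np.cos(2 * np.pi * F0 * n) * h_lp  # LP2BP, Eq. (8.20)
h_bs = -h_bp.copy()                           # LP2BS, Eq. (8.21): delta[n] - h_bp[n]
h_bs[(N - 1) // 2] += 1.0

def gain(h, F):
    """Magnitude of the DTFT at digital frequency F."""
    m = np.arange(len(h))
    return abs(np.sum(h * np.exp(-2j * np.pi * F * m)))

print(gain(h_bp, F0), gain(h_bp, 0.0))        # bandpass: near 1 at F0, small at F = 0
print(gain(h_bs, 0.0), gain(h_bs, F0))        # bandstop: near 1 at F = 0, small at F0
```

With no window applied, the stopband gain is limited by the rectangular window's sidelobes; windowing the prototype before transforming improves it.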
[Figure: the LP2BP transformation windows the lowpass prototype and modulates it, h_BP[n] = w[n]·4F_C sinc(2nF_C) cos(2πnF_0); the LP2BS transformation subtracts the bandpass result from an impulse, h_BS[n] = δ[n] − w[n]·4F_C sinc(2nF_C) cos(2πnF_0). In both, F_0 = 0.5(F_2 + F_3) and F_C = 0.5(F_3 + F_4) − F_0, with band edges F_1 < F_2 < F_3 < F_4.]
Figure 8.14 The lowpass-to-bandpass and lowpass-to-bandstop transformations
REVIEW PANEL 8.4
Impulse Response of Windowed Filters Over |n| ≤ 0.5(N − 1)
h_P[n] = 2F_C sinc(2nF_C)    h_HP[n] = δ[n] − h_P[n]    F_C = cutoff frequency of lowpass prototype
h_BP[n] = 2h_P[n] cos(2πnF_0)    h_BS[n] = δ[n] − h_BP[n]    F_0 = center frequency (for BPF and BSF)
REVIEW PANEL 8.5
Recipe for Window-Based FIR Filter Design
Normalize the analog design frequencies by the sampling frequency S.
Obtain the band edges F_p and F_s of the lowpass prototype.
Choose the lowpass prototype cutoff as F_C = 0.5(F_p + F_s).
Choose a window (from Table 8.4) that satisfies A_WS ≥ A_s and A_WP ≤ A_p.
Compute the window length N from ΔF_T = F_s − F_p = ΔF_WS = C/N (with C as in Table 8.4).
Compute the prototype impulse response h[n] = 2F_C sinc(2nF_C), |n| ≤ 0.5(N − 1).
Window h[n] and apply spectral transformations (if needed) to convert to the required filter type.
Minimum-length design: Adjust N and/or F_C until the design specifications are just met.
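The recipe above can be sketched in code. A minimal sketch (Python/NumPy; the language is an assumption) applies it to the lowpass specifications of Example 8.2(a) with a von Hann window:

```python
import numpy as np

# Specs of Example 8.2(a): fp = 2 kHz, fs = 4 kHz, Ap = 2 dB, As = 40 dB, S = 20 kHz
S = 20e3
Fp, Fs = 2e3 / S, 4e3 / S                # digital band edges 0.1 and 0.2
FC = 0.5 * (Fp + Fs)                     # prototype cutoff 0.15
C = 3.21                                 # von Hann entry in Table 8.4 (A_WS = 44 dB > 40 dB)
N = int(np.ceil(C / (Fs - Fp)))          # length estimate -> 33
n = np.arange(N) - (N - 1) / 2
h = 2 * FC * np.sinc(2 * FC * n) * np.hanning(N)   # windowed impulse response

def att_db(h, F):
    """Attenuation (dB) of the filter at digital frequency F."""
    mag = abs(np.sum(h * np.exp(-2j * np.pi * F * np.arange(len(h)))))
    return -20 * np.log10(mag)

print(N, att_db(h, Fp), att_db(h, Fs))   # N = 33; passband under 2 dB, stopband over 40 dB
```

As the example notes, this length is an overestimate; trial-and-error reduction of N and F_C gives the minimum-length design.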
EXAMPLE 8.2 (FIR Filter Design Using Windows)
(a) Design an FIR filter to meet the following specifications:
f_p = 2 kHz, f_s = 4 kHz, A_p = 2 dB, A_s = 40 dB, and sampling frequency S = 20 kHz.
This describes a lowpass filter. The digital frequencies are F_p = f_p/S = 0.1 and F_s = f_s/S = 0.2. With A_s = 40 dB, possible choices for a window are (from Table 8.4) von Hann (with A_WS = 44 dB) and Blackman (with A_WS = 75.3 dB). Using ΔF_WS = F_s − F_p = C/N, the approximate filter lengths for these windows (using the values of C from Table 8.4) are:

von Hann: N ≈ 3.21/0.1 ≈ 33        Blackman: N ≈ 5.71/0.1 ≈ 58
We choose the cutoff frequency as F_C = 0.5(F_p + F_s) = 0.15. The impulse response then equals

h_N[n] = 2F_C sinc(2nF_C) = 0.3 sinc(0.3n)

Windowing gives the impulse response of the required lowpass filter:

h_LP[n] = w[n]h_N[n] = 0.3w[n] sinc(0.3n)

As Figure E8.2A(a) shows, the design specifications are indeed met by each filter (but the lengths are actually overestimated). The Blackman window requires a larger length because of the larger difference between A_s and A_WS. It also has the larger transition width.
[Figure: magnitude responses in dB versus digital frequency F. (a) Lowpass filter using von Hann and Blackman windows (F_C = 0.15, von Hann N = 33, Blackman N = 58). (b) Minimum-length designs: von Hann F_C = 0.1313, N = 23; Blackman F_C = 0.1278, N = 29.]
Figure E8.2A Lowpass FIR filters for Example 8.2(a and b)
(b) (Minimum-Length Design) By trial and error, the cutoff frequency and the smallest length that just meet specifications turn out to be F_C = 0.1313, N = 23, for the von Hann window, and F_C = 0.1278, N = 29, for the Blackman window. Figure E8.2A(b) shows the response of these minimum-length filters. The passband and stopband attenuation are [1.9, 40.5] dB for the von Hann window, and [1.98, 40.1] dB for the Blackman window. Even though the filter lengths are much smaller, each filter does meet the design specifications.
(c) Design an FIR filter to meet the following specifications:
f_p = 4 kHz, f_s = 2 kHz, A_p = 2 dB, A_s = 40 dB, and sampling frequency S = 20 kHz.
The specifications describe a highpass filter. The digital frequencies are F_p = f_p/S = 0.2 and F_s = f_s/S = 0.1. The transition width is ΔF_T = 0.2 − 0.1 = 0.1.
With A_s = 40 dB, possible choices for a window (see Table 8.4) are Hamming and Blackman. Using ΔF_T = ΔF_WS = C/N, the approximate filter lengths for these windows are

Hamming: N ≈ 3.47/0.1 ≈ 35        Blackman: N ≈ 5.71/0.1 ≈ 58
We can now design the highpass filter in one of two ways:
1. Choose the cutoff frequency of the lowpass filter as F_C = 0.5(F_p + F_s) = 0.15. The impulse response h_N[n] then equals

h_N[n] = 2F_C sinc(2nF_C) = 0.3 sinc(0.3n)

The windowed response is thus h_W[n] = 0.3w[n] sinc(0.3n). The impulse response of the required highpass filter is then

h_HP[n] = δ[n] − h_W[n] = δ[n] − 0.3w[n] sinc(0.3n)

2. Choose the cutoff frequency of the lowpass filter as F_C = 0.5 − 0.5(F_p + F_s) = 0.35. Then, the impulse response equals h_N[n] = 2F_C sinc(2nF_C) = 0.7 sinc(0.7n). The windowed response is thus h_W[n] = 0.7w[n] sinc(0.7n). The impulse response of the required highpass filter is then

h_HP[n] = (−1)^n h_W[n] = 0.7(−1)^n w[n] sinc(0.7n)

The two methods yield identical results. As Figure E8.2B(a) shows, the design specifications are indeed met by each window, but the lengths are actually overestimated.
[Figure: magnitude responses in dB versus digital frequency F. (a) Highpass filter using Hamming and Blackman windows (lowpass prototype F_C = 0.35, Hamming N = 35, Blackman N = 58). (b) Minimum-length designs: Hamming prototype F_C = 0.3293, N = 22; Blackman prototype F_C = 0.3277, N = 29.]
Figure E8.2B Highpass FIR filters for Example 8.2(c and d)
(d) (Minimum-Length Design) By trial and error, the cutoff frequency and the smallest length that just meet specifications turn out to be F_C = 0.3293, N = 22, for the Hamming window, and F_C = 0.3277, N = 29, for the Blackman window. Figure E8.2B(b) shows the response of the minimum-length filters. The passband and stopband attenuation are [1.94, 40.01] dB for the Hamming window, and [1.99, 40.18] dB for the Blackman window. Each filter meets the design specifications, even though the filter lengths are much smaller than the values computed from the design relations.
(e) Design an FIR filter to meet the following specifications:
A_p = 3 dB, A_s = 45 dB, passband: [4, 8] kHz, stopband: [2, 12] kHz, and S = 25 kHz.
The specifications describe a bandpass filter. If we assume a fixed passband, the center frequency lies at the center of the passband and is given by f_0 = 0.5(4 + 8) = 6 kHz.
The specifications do not show arithmetic symmetry. The smaller transition width is 2 kHz. For arithmetic symmetry, we therefore choose the band edges as [2, 4, 8, 10] kHz.
The digital frequencies are: passband [0.16, 0.32], stopband [0.08, 0.4], and F_0 = 0.24.
The lowpass band edges become F_p = 0.5(F_p2 − F_p1) = 0.08 and F_s = 0.5(F_s2 − F_s1) = 0.16.
With A_s = 45 dB, one of the windows we can use (from Table 8.4) is the Hamming window. For this window, we estimate ΔF_WS = F_s − F_p = C/N to obtain N = 3.47/0.08 ≈ 44.
We choose the cutoff frequency as F_C = 0.5(F_p + F_s) = 0.12.
The lowpass impulse response is h_N[n] = 2F_C sinc(2nF_C) = 0.24 sinc(0.24n), −21.5 ≤ n ≤ 21.5.
Windowing this gives h_W[n] = h_N[n]w[n], and the LP2BP transformation gives

h_BP[n] = 2 cos(2πnF_0)h_W[n] = 2 cos(0.48πn)h_W[n]

Its frequency response is shown in Figure E8.2C(a) and confirms that the specifications are met.
[Figure: magnitude responses in dB versus digital frequency F. (a) Hamming bandpass filter (N = 44, F_0 = 0.24) from a lowpass prototype with F_C = 0.12. (b) Minimum-length Hamming bandpass filter (N = 27, F_0 = 0.24) from a prototype with F_C = 0.0956.]
Figure E8.2C Bandpass FIR filters for Example 8.2(e)
The cutoff frequency and smallest filter length that meet specifications turn out to be smaller. By decreasing N and F_C, we find that the specifications are just met with N = 27 and F_C = 0.0956. For these values, the lowpass filter is h_N[n] = 2F_C sinc(2nF_C) = 0.1912 sinc(0.1912n), −13 ≤ n ≤ 13. Windowing and bandpass transformation yields the filter whose spectrum is shown in Figure E8.2C(b). The attenuation is 3.01 dB at 4 kHz and 8 kHz, 45.01 dB at 2 kHz, and 73.47 dB at 12 kHz.
8.4 Half-Band FIR Filters
A half-band FIR filter has an odd-length impulse response h[n] whose alternate samples are zero. The main advantage of half-band filters is that their realization requires only about half the number of multipliers. The impulse response of an ideal lowpass filter is h[n] = 2F_C sinc(2nF_C). If we choose F_C = 0.25, we obtain

h[n] = 2F_C sinc(2nF_C) = 0.5 sinc(0.5n),    |n| ≤ 0.5(N − 1)    (8.23)

Thus, h[n] = 0 for even n (except n = 0), and the filter length N is always odd. Being a type 1 sequence, its transfer function H(F) displays even symmetry about F = 0. It is also antisymmetric about F = 0.25, with

H(F) = 1 − H(0.5 − F)    (8.24)
A highpass half-band filter also requires F_C = 0.25. If we choose F_C = 0.5(F_p + F_s), the sampling frequency S must equal 2(f_p + f_s) to ensure F_C = 0.25, and cannot be selected arbitrarily. Examples of lowpass and highpass half-band filters are shown in Figure 8.15. Note that the peak passband ripple and the peak stopband ripple are of equal magnitude, with δ_p = δ_s = δ (as they are for any symmetric window).
Since the impulse response of bandstop and bandpass filters contains the term 2 cos(2πnF_0)h_LP[n], a choice of F_0 = 0.25 (for the center frequency) ensures that the odd-indexed terms vanish. Once again, the sampling frequency S cannot be arbitrarily chosen, and must equal 4f_0 to ensure F_0 = 0.25. Even though the choice of sampling rate may cause aliasing, the aliasing will be restricted primarily to the transition band between f_p and f_s, where its effects are not critical.
Except for the restrictions in the choice of sampling rate S (which dictates the choice of F_C for lowpass and highpass filters, or F_0 for bandpass and bandstop filters) and an odd-length sequence, the design of half-band filters follows the same steps as window-based design.
[Figure: amplitude versus digital frequency F for (a) a half-band lowpass filter and (b) a half-band highpass filter. Both show ripple between 1 − δ and 1 + δ in the passband and ±δ in the stopband, with amplitude 0.5 at F = 0.25.]
Figure 8.15 Characteristics of lowpass and highpass half-band filters
REVIEW PANEL 8.6
Recipe for Design of Half-Band FIR Filters
Fix S = 2(f_p + f_s) (for LPF and HPF) or S = 4f_0 (for BPF and BSF).
Find F_p and F_s for the lowpass prototype and pick F_C = 0.5(F_p + F_s).
Choose an odd filter length N.
Compute the prototype impulse response h[n] = 2F_C sinc(2nF_C), |n| ≤ 0.5(N − 1).
Window the impulse response and apply spectral transformations to convert to the required filter.
If the specifications are exceeded, decrease N (in steps of 2) until specifications are just met.
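The half-band property is easy to confirm numerically. This sketch (Python/NumPy; the language is an assumption) builds the Kaiser-windowed half-band lowpass of Example 8.3(a) and checks that the alternate samples vanish; note that np.kaiser takes the standard Kaiser β, which appears to be π times the parameter used in this text (an assumption made here):

```python
import numpy as np

# Half-band lowpass of Example 8.3(a): N = 19, FC = 0.25, Kaiser window
N = 19
n = np.arange(N) - (N - 1) // 2              # -9 ... 9
h = 0.5 * np.sinc(0.5 * n) * np.kaiser(N, 1.4431 * np.pi)

# Alternate (even-indexed, n != 0) samples vanish, halving the multipliers
even_nonzero = h[(n % 2 == 0) & (n != 0)]
print(np.abs(even_nonzero).max())            # ~0 (zero up to rounding)
print(h[(N - 1) // 2])                       # center sample = 2*FC = 0.5
```

The zeros come from the sinc term alone, so they survive any choice of window.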
EXAMPLE 8.3 (Half-Band FIR Filter Design)
(a) Design a lowpass half-band filter to meet the following specifications:
Passband edge: 8 kHz, stopband edge: 16 kHz, A_p = 1 dB, and A_s = 50 dB.
We choose S = 2(f_p + f_s) = 48 kHz. The digital band edges are F_p = 1/6, F_s = 1/3, and F_C = 0.25. The impulse response of the filter is h[n] = 2F_C sinc(2nF_C) = 0.5 sinc(0.5n).
1. If we use the Kaiser window, we compute the filter length N and the Kaiser parameter β as follows:

δ_p = [10^(A_p/20) − 1] / [10^(A_p/20) + 1] = 0.0575        δ_s = 10^(−A_s/20) = 0.00316
δ = 0.00316        A_s0 = −20 log δ = 50

N = (A_s0 − 7.95) / [14.36(F_s − F_p)] + 1 = 18.57 → 19        β = 1.4431

The impulse response is therefore h_N[n] = 0.5 sinc(0.5n), −9 ≤ n ≤ 9. Windowing h_N[n] gives the required impulse response h_W[n]. Figure E8.3A(a) shows that this filter does meet specifications, with an attenuation of 0.045 dB at 8 kHz and 52.06 dB at 16 kHz.
[Figure: magnitude responses in dB versus digital frequency F. (a) Kaiser half-band lowpass filter (β = 1.44, N = 19, F_C = 0.25). (b) Hamming half-band lowpass filter (N = 21, F_C = 0.25).]
Figure E8.3A Lowpass half-band filters for Example 8.3(a)
2. If we choose a Hamming window, we use Table 8.4 to approximate the odd filter length as

ΔF_WS = F_s − F_p = C/N        N = C/(F_s − F_p) = 3.47/(1/6) ≈ 21

This value of N meets specifications and also turns out to be the smallest length that does. Its response is plotted in Figure E8.3A(b). This filter shows an attenuation of 0.033 dB at 8 kHz and an attenuation of 53.9 dB at 16 kHz.
(b) Design a bandstop half-band filter to meet the following specifications:
Stopband edges: [2, 3] kHz, passband edges: [1, 4] kHz, A_p = 1 dB, and A_s = 50 dB.
Since both the passband and the stopband are symmetric, we have f_0 = 0.5(2 + 3) = 2.5 kHz. We then choose the sampling frequency as S = 4f_0 = 10 kHz. The digital frequencies are stopband edges = [0.2, 0.3], passband edges = [0.1, 0.4], F_0 = 0.25. The specifications for the lowpass prototype are

F_p = 0.5(F_s2 − F_s1) = 0.05,    F_s = 0.5(F_p2 − F_p1) = 0.15,    F_C = 0.5(F_p + F_s) = 0.1

The impulse response of the prototype is h[n] = 2F_C sinc(2nF_C) = 0.2 sinc(0.2n).
1. If we use the Kaiser window, we must compute the filter length N and the Kaiser parameter β as follows:

δ_p = [10^(A_p/20) − 1] / [10^(A_p/20) + 1] = 0.0575        δ_s = 10^(−A_s/20) = 0.00316
δ = 0.00316        A_s0 = −20 log δ = 50

N = (A_s0 − 7.95) / [14.36(F_s − F_p)] + 1 = 30.28 → 31        β = 1.4431

The prototype impulse response is h_N[n] = 0.2 sinc(0.2n), −15 ≤ n ≤ 15. Windowing h_N[n] gives h_W[n]. We transform h_W[n] to the bandstop form h_BS[n], using F_0 = 0.25, to give

h_BS[n] = δ[n] − 2 cos(2πnF_0)h_W[n] = δ[n] − 2 cos(0.5πn)h_W[n]

Figure E8.3B(a) shows that this filter does meet specifications, with an attenuation of 0.046 dB at 2 kHz and 3 kHz, and 53.02 dB at 1 kHz and 4 kHz.
[Figure: magnitude responses in dB versus digital frequency F. (a) Kaiser half-band bandstop filter (N = 31, F_0 = 0.25, β = 1.44). (b) Hamming half-band bandstop filter (N = 35, F_0 = 0.25).]
Figure E8.3B Bandstop half-band filters for Example 8.3(b)
2. For a Hamming window, we use Table 8.4 to approximate the odd filter length as

ΔF_WS = F_s − F_p = C/N        N = C/(F_s − F_p) = 3.47/0.1 ≈ 35

This is also the smallest filter length that meets specifications. The magnitude response is shown in Figure E8.3B(b). We see an attenuation of 0.033 dB at [2, 3] kHz and 69.22 dB at [1, 4] kHz.
8.5 FIR Filter Design by Frequency Sampling
In window-based design, we start with the impulse response h[n] of an ideal filter, and use truncation and windowing to obtain an acceptable frequency response H(F). The design of FIR filters by frequency sampling starts with the required form for H(F), and uses interpolation and the DFT to obtain h[n]. In this sense, it is more versatile, since arbitrary frequency-response forms can be handled with ease.
Recall that a continuous (but band-limited) signal h(t) can be perfectly reconstructed from its samples (taken above the Nyquist rate), using a sinc interpolating function (that equals zero at the sampling instants). If h(t) is not band-limited, we get a perfect match only at the sampling instants.
By analogy, the continuous (but periodic) spectrum H(F) can also be recovered from its frequency samples, using a periodic extension of the sinc interpolating function (that equals zero at the sampling points). The reconstructed spectrum H_N(F) will show an exact match to a desired H(F) at the sampling instants, even though H_N(F) could vary wildly at other frequencies. This is the basis for FIR filter design by frequency sampling. Given the desired form for H(F), we sample it at N frequencies, and find the IDFT of the N-point sequence H[k], k = 0, 1, . . . , N − 1. The following design guidelines stem both from design aspects as well as computational aspects of the IDFT itself.
1. The N samples of H(F) must correspond to the digital frequency range 0 ≤ F < 1, with

H[k] = H(F)|_(F = k/N),    k = 0, 1, 2, . . . , N − 1    (8.25)

The reason is that most DFT and IDFT algorithms require samples in the range 0 ≤ k ≤ N − 1.
2. Since h[n] must be real, its DFT H[k] must possess conjugate symmetry about k = 0.5N (this is a DFT requirement). Note that conjugate symmetry will always leave H[0] unpaired. It can be set to
any real value, in keeping with the required filter type (this is a design requirement). For example, we must choose H[0] = 0 for bandpass or highpass filters.
3. For even length N, the computed end samples of h[n] may not turn out to be symmetric. To ensure symmetry, we must force h[0] to equal h[N] (setting both to 0.5h[0], for example).
4. For h[n] to be causal, we must delay it (this is a design requirement). This translates to a linear phase shift that produces the sequence |H[k]|e^(jφ[k]). In keeping with conjugate symmetry about the index k = 0.5N, the phase for the first N/2 samples of H[k] will be given by

φ[k] = −πk(N − 1)/N,    k = 0, 1, 2, . . . , 0.5(N − 1)    (8.26)

Note that for type 3 and type 4 (antisymmetric) sequences, we must also add a constant phase of 0.5π to φ[k] (up to k = 0.5N). The remaining samples of H[k] are found by conjugate symmetry.
5. To minimize the Gibbs effect near discontinuities in H(F), we may allow the sample values to vary gradually between jumps (this is a design guideline). This is equivalent to introducing a finite transition width. The choice of the sample values in the transition band can affect the response dramatically.
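The guidelines above amount to a few lines of code. This sketch (Python/NumPy; the language is an assumption) carries out the lowpass design of Example 8.4(a): sample the ideal magnitude, attach the linear phase of Eq. (8.26), enforce conjugate symmetry, and take the IDFT:

```python
import numpy as np

# Example 8.4(a): N = 10 samples of an ideal lowpass H(F) over 0 <= F < 1
N = 10
mag = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 1], dtype=float)
k = np.arange(N)
phase = -np.pi * k * (N - 1) / N             # linear phase, Eq. (8.26)
H = mag * np.exp(1j * phase)
H[N // 2 + 1:] = np.conj(H[1:N // 2][::-1])  # conjugate symmetry about k = 5
h = np.fft.ifft(H).real                      # real, symmetric impulse response
print(np.round(h, 4))                        # matches h1[n] of Example 8.4(a)
```

Replacing a transition sample (e.g., H[2]) by a value between 0 and 1, as done in the example, trades overshoot for transition width.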
EXAMPLE 8.4 (FIR Filter Design by Frequency Sampling)
(a) Consider the design of a lowpass filter shown in Figure E8.4(a). Let us sample the ideal H(F) (shown dark) over 0 ≤ F < 1, with N = 10 samples. The magnitude of the sampled sequence H[k] is

|H[k]| = {1, 1, 1, 0, 0, 0 (k = 5), 0, 0, 1, 1}

The actual (phase-shifted) sequence is H[k] = |H[k]|e^(jφ[k]), where

φ[k] = −πk(N − 1)/N = −0.9πk,    k ≤ 5

Note that H[k] must be conjugate symmetric about k = 0.5N = 5, with H[k] = H*[N − k]. Now, H[k] = 0, k = 3, 4, 5, 6, 7, and the remaining samples are H[0] = 1e^(j0) = 1 and

H[1] = 1e^(jφ[1]) = e^(−j0.9π)        H[9] = H*[1] = e^(j0.9π)
H[2] = 1e^(jφ[2]) = e^(−j1.8π)        H[8] = H*[2] = e^(j1.8π)

The inverse DFT of H[k] yields the symmetric real impulse response sequence h_1[n], with

h_1[n] = {0.0716, −0.0794, −0.1, 0.1558, 0.452, 0.452, 0.1558, −0.1, −0.0794, 0.0716}
Its DTFT magnitude H_1(F), shown light in Figure E8.4(a), reveals a perfect match at the sampling points but has a large overshoot near the cutoff frequency. To reduce the overshoot, let us pick

H[2] = 0.5e^(jφ[2]) = 0.5e^(−j1.8π)        H[8] = H*[2] = 0.5e^(j1.8π)

The inverse DFT of this new set of samples yields the new impulse response sequence h_2[n]:

h_2[n] = {−0.0093, −0.0485, 0, 0.1867, 0.3711, 0.3711, 0.1867, 0, −0.0485, −0.0093}

Its frequency response H_2(F), in Figure E8.4(a), not only shows a perfect match at the sampling points but also a reduced overshoot, which we obtain at the expense of a broader transition width.
[Figure: magnitude versus digital frequency F over 0 ≤ F < 1. (a) Lowpass filter designed by frequency sampling. (b) Highpass filter designed by frequency sampling. Each plot shows the ideal samples and the interpolated responses H_1(F) and H_2(F).]
Figure E8.4 Lowpass and highpass filters for Example 8.4 (a and b)
(b) Consider the design of a highpass filter shown in Figure E8.4(b). Let us sample the ideal H(F) (shown dark) over 0 ≤ F < 1, with N = 10 samples. The magnitude of the sampled sequence H[k] is

|H[k]| = {0, 0, 0, 1, 1, 1 (k = 5), 1, 1, 0, 0}

The actual (phase-shifted) sequence is H[k] = |H[k]|e^(jφ[k]). Since the impulse response h[n] must be antisymmetric (for a highpass filter), φ[k] includes an additional phase of 0.5π and is given by

φ[k] = −πk(N − 1)/N + 0.5π = 0.5π − 0.9πk,    k ≤ 5

Note that H[k] is conjugate symmetric about k = 0.5N = 5, with H[k] = H*[N − k]. Now, H[k] = 0, k = 0, 1, 2, 8, 9, and H[5] = 1e^(jφ[5]) = 1. The remaining samples are

H[3] = 1e^(jφ[3]) = e^(−j2.2π)        H[7] = H*[3] = e^(j2.2π)
H[4] = 1e^(jφ[4]) = e^(−j3.1π)        H[6] = H*[4] = e^(j3.1π)

The inverse DFT of H[k] yields the antisymmetric real impulse response sequence h_1[n], with

h_1[n] = {0.0716, 0.0794, −0.1, −0.1558, 0.452, −0.452, 0.1558, 0.1, −0.0794, −0.0716}
Its DTFT magnitude H_1(F), shown light in Figure E8.4(b), reveals a perfect match at the sampling points but a large overshoot near the cutoff frequency. To reduce the overshoot, let us choose

H[2] = 0.5e^(jφ[2]) = 0.5e^(−j1.3π)        H[8] = H*[2] = 0.5e^(j1.3π)

The inverse DFT of this new set of samples yields the new impulse response sequence h_2[n]:

h_2[n] = {0.0128, −0.0157, −0.1, −0.0606, 0.5108, −0.5108, 0.0606, 0.1, 0.0157, −0.0128}

Its frequency response H_2(F), in Figure E8.4(b), not only shows a perfect match at the sampling points but also a reduced overshoot, which we obtain at the expense of a broader transition width.
8.5.1 Frequency Sampling and Windowing
The frequency-sampling method can be used to design filters with arbitrary frequency-response shapes. We can even combine this versatility with the advantages of window-based design. Given the response specification H(F), we sample it at a large number of points M and find the IDFT to obtain the M-point impulse response h[n]. The choice M = 512 is not unusual. Since h[n] is unacceptably long, we truncate it to a smaller length N by windowing h[n]. The choice of window is based on the same considerations that apply to window-based design. If the design does not meet specifications, we can change N and/or adjust the sample values in the transition band and repeat the process. Naturally, this design is best carried out on a computer. The sample values around the transitions are adjusted to minimize the approximation error. This idea forms a special case of the more general optimization method called linear programming. But when it comes to choosing between the various computer-aided optimization methods, by far the most widely used is the equiripple optimal approximation method, which we describe in the next section.
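The procedure above can be sketched in a few lines of NumPy. This is a minimal illustration, not the book's program: the helper name freq_sample_window, the odd grid size M = 511 (an odd M keeps the sampled linear phase exactly conjugate symmetric, so the IDFT is real), and the Hamming window are our own choices.

```python
import numpy as np

def freq_sample_window(mag, N, window=np.hamming):
    # Frequency sampling + windowing: `mag` holds M samples of the desired
    # magnitude over F = k/M; a linear phase makes the long impulse
    # response symmetric, and the central N points are kept and windowed.
    M = len(mag)
    k = np.arange(M)
    H = mag * np.exp(-1j * np.pi * k * (M - 1) / M)   # linear-phase samples
    h = np.fft.ifft(H).real                           # M-point impulse response
    mid = (M - 1) // 2                                # h is symmetric about mid
    lo = mid - (N - 1) // 2
    return h[lo:lo + N] * window(N)                   # central truncation

# ideal lowpass with cutoff F_C = 0.25, sampled at M = 511 points over 0 <= F < 1
M = 511
F = np.arange(M) / M
mag = np.where((F <= 0.25) | (F >= 0.75), 1.0, 0.0)
h = freq_sample_window(mag, 31)
```

Shortening N widens the transition band; adjusting the transition-band samples of `mag` before the IDFT is the tweak described in the text.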
8.5.2 Implementing Frequency-Sampling FIR Filters
We can readily implement frequency-sampling FIR filters by a nonrecursive structure, once we know the impulse response h[n]. We can even implement a recursive realization without the need for finding h[n], as follows. Suppose the frequency samples H_N[k] of H(F) are approximated by an N-point DFT of the filter impulse response h_N[n]. We may then write

H_N[k] = Σ_{n=0}^{N−1} h_N[n] e^{−j2πnk/N}        (8.27)

Its impulse response h_N[n] may be found using the inverse DFT as

h_N[n] = (1/N) Σ_{k=0}^{N−1} H_N[k] e^{j2πnk/N}        (8.28)

The filter transfer function H(z) is the z-transform of h_N[n]:

H(z) = Σ_{n=0}^{N−1} h_N[n] z^{−n} = Σ_{n=0}^{N−1} z^{−n} [(1/N) Σ_{k=0}^{N−1} H_N[k] e^{j2πnk/N}]        (8.29)

Interchanging summations, setting z^{−n} e^{j2πnk/N} = [z^{−1} e^{j2πk/N}]^n, and using the closed form for the finite geometric sum, we obtain

H(z) = (1/N) Σ_{k=0}^{N−1} H_N[k] (1 − z^{−N}) / (1 − z^{−1} e^{j2πk/N})        (8.30)
The frequency response corresponding to H(z) is

H(F) = (1/N) Σ_{k=0}^{N−1} H_N[k] (1 − e^{−j2πFN}) / (1 − e^{−j2π(F−k/N)})        (8.31)

If we factor out exp(−jπFN) from the numerator, exp[−jπ(F − k/N)] from the denominator, and use Euler's relation, we can simplify this result to

H(F) = Σ_{k=0}^{N−1} H_N[k] {sinc[N(F − k/N)] / sinc[(F − k/N)]} e^{−jπ(N−1)(F−k/N)} = Σ_{k=0}^{N−1} H_N[k] W[F − k/N]        (8.32)
Here, W[F − k/N] describes a sinc interpolating function, defined by

W[F − k/N] = {sinc[N(F − k/N)] / sinc[(F − k/N)]} e^{−jπ(N−1)(F−k/N)}        (8.33)

It reconstructs H(F) from its samples H_N[k] taken at intervals 1/N. It equals 1 when F = k/N, and zero at the other sampling instants. As a result, H_N(F) equals the desired H(F) at the sampling instants, even though H_N(F) could vary wildly at other frequencies. This is the concept behind frequency sampling.
The transfer function H(z) may also be written as the product of two transfer functions:

H(z) = H1(z)H2(z) = [(1 − z^{−N})/N] Σ_{k=0}^{N−1} H_N[k] / (1 − z^{−1} e^{j2πk/N})        (8.34)

This form of H(z) suggests a method of recursive implementation of FIR filters. We cascade a comb filter, described by H1(z), with a parallel combination of N first-order resonators, described by H2(z). Note that each resonator has a complex pole on the unit circle, and the resonator poles actually lie at the same locations as the zeros of the comb filter. Each pair of terms corresponding to complex conjugate poles may be combined to form a second-order system with real coefficients for easier implementation.
Why implement FIR filters recursively? There are several reasons. In some cases, this may reduce the number of arithmetic operations. In other cases, it may reduce the number of delay elements required. Since the pole and zero locations depend only on N, the same structure can be used for all FIR filters of length N by changing only the multiplicative coefficients H_N[k].
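The equivalence of the recursive form (8.30) and the direct FIR sum is easy to check numerically. The sketch below uses an arbitrary (random) impulse response purely as a test signal; nothing here is specific to any particular design.

```python
import numpy as np

# Verify that the comb-plus-resonators form (8.30) reproduces the
# direct FIR transfer function for an arbitrary set of frequency samples.
N = 8
rng = np.random.default_rng(0)
h = rng.standard_normal(N)                 # any length-N impulse response
Hk = np.fft.fft(h)                         # its frequency samples H_N[k]

F = np.linspace(0.01, 0.49, 50)            # evaluation grid (off the poles)
z = np.exp(2j * np.pi * F)

# direct form: H(z) = sum_n h[n] z^{-n}
H_direct = np.array([np.sum(h * z0 ** -np.arange(N)) for z0 in z])

# recursive form: comb filter (1 - z^{-N})/N times N first-order resonators
poles = np.exp(2j * np.pi * np.arange(N) / N)
H_rec = np.array([(1 - z0 ** -N) / N * np.sum(Hk / (1 - poles / z0))
                  for z0 in z])

assert np.allclose(H_direct, H_rec)
```

Analytically the pole of each resonator is cancelled by a comb-filter zero, which is exactly the balance that finite-precision coefficients can upset.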
Even with these advantages, things can go wrong. In theory, the poles and zeros balance each other. In practice, quantization errors may move some poles outside the unit circle and lead to system instability. One remedy is to multiply the poles and zeros by a real number ρ slightly smaller than unity, to relocate them just inside the unit circle. The transfer function then becomes

H(z) = {[1 − (ρz^{−1})^N]/N} Σ_{k=0}^{N−1} H_N[k] / (1 − ρz^{−1} e^{j2πk/N})        (8.35)

With ρ = 1 − ε, typically used values for ε range from 2^{−12} to 2^{−27} (roughly 10^{−4} to 10^{−9}) and have been shown to improve stability with little change in the frequency response.
8.6 Design of Optimal Linear-Phase FIR Filters
Quite like analog filters, the design of optimal linear-phase FIR filters requires that we minimize the maximum error in the approximation. Optimal design of FIR filters is also based on a Chebyshev approximation. We should therefore expect such a design to yield the smallest filter length, and a response that is equiripple in both the passband and the stopband. A typical spectrum is shown in Figure 8.16.
There are three important concepts relevant to optimal design:
1. The error between the approximation H(F) and the desired response D(F) must be equiripple. The error curve must show equal maxima and minima with alternating zero crossings. The more points at which the error goes to zero (the zero crossings), the higher the order of the approximating polynomial and the longer the filter length.
Figure 8.16 An optimal filter has ripples of equal magnitude in the passband and stopband. The magnitude ripples between 1 + δp and 1 − δp in the passband and does not exceed δs in the stopband; on a dB scale (20 log|H(F)|), these ripples correspond to the attenuations Ap and As at the band edges Fp and Fs.
2. The frequency response H(F) of a filter whose impulse response h[n] is a symmetric sequence can always be put in the form

H(F) = Q(F) Σ_{n=0}^{M} α_n cos(2πnF) = Q(F)P(F)        (8.36)

Here, Q(F) equals 1 (type 1), cos(πF) (type 2), sin(2πF) (type 3), or sin(πF) (type 4); M is related to the filter length N by M = int[(N−1)/2] (types 1, 2, 4) or M = int[(N−3)/2] (type 3); and the α_n are related to the impulse response coefficients h[n]. The quantity P(F) may also be expressed as a power series in cos(2πF) (or as a sum of Chebyshev polynomials). If we can select the α_n to best meet optimal constraints, we can design H(F) as an optimal approximation to D(F).
3. The alternation theorem offers the clue to selecting the α_n.
8.6.1 The Alternation Theorem
We start by approximating D(F) by the Chebyshev polynomial form for H(F) and define the weighted approximation error ε(F) as

ε(F) = W(F)[D(F) − H(F)]        (8.37)

Here, W(F) represents a set of weight factors that can be used to select different error bounds in the passband and stopband. The nature of D(F) and W(F) depends on the type of filter required. The idea is to select the α_k (in the expression for H(F)) so as to minimize the maximum absolute error |ε|_max. The alternation theorem points the way (though it does not tell us how). In essence, it says that we must be able to find at least M + 2 frequencies F_k, k = 1, 2, . . . , M + 2, called the extremal frequencies or alternations, where
1. The error alternates between two equal maxima and minima (extrema):

ε(F_k) = −ε(F_{k+1}),  k = 1, 2, . . . , M + 1        (8.38)

2. The error at the frequencies F_k equals the maximum absolute error:

|ε(F_k)| = |ε(F)|_max,  k = 1, 2, . . . , M + 2        (8.39)

In other words, we require M + 2 extrema (including the band edges) where the error attains its maximum absolute value. These frequencies yield the smallest filter length (number of coefficients α_k) for optimal design. In some instances, we may get M + 3 extremal frequencies, leading to so-called extra-ripple filters.
The design strategy to find the extremal frequencies invariably requires iterative methods. The most popular is the algorithm of Parks and McClellan, which in turn relies on the so-called Remez exchange algorithm.
The Parks-McClellan algorithm requires the band edge frequencies F_p and F_s, the ratio K = δ_p/δ_s of the passband and stopband ripple, and the filter length N. It returns the coefficients α_k and the actual design values of δ_p and δ_s for the given filter length N. If these values of δ_p and δ_s are not acceptable (or do not meet requirements), we can increase N (or change the ratio K) and repeat the design.
A good starting estimate for the filter length N is given by a relation similar to the Kaiser relation for half-band filters, and reads

N = 1 + [−10 log(δ_p δ_s) − 13] / (14.6 F_T)        δ_p = (10^{A_p/20} − 1)/(10^{A_p/20} + 1)        δ_s = 10^{−A_s/20}        (8.40)

Here, F_T is the digital transition width. More accurate (but more involved) design relations are also available.
To explain the algorithm, consider a lowpass filter design. To approximate an ideal lowpass filter, we choose D(F) and W(F) as

D(F) = 1 for 0 ≤ F ≤ F_p, and 0 for F_s ≤ F ≤ 0.5
W(F) = 1 for 0 ≤ F ≤ F_p, and K = δ_p/δ_s for F_s ≤ F ≤ 0.5        (8.41)
To find the α_k in H(F), we use the Remez exchange algorithm. Here is how it works. We start with a trial set of M + 2 frequencies F_k, k = 1, 2, . . . , M + 2. To force the alternation condition, we must satisfy ε(F_k) = −ε(F_{k+1}), k = 1, 2, . . . , M + 1. Since the maximum error is as yet unknown, we let ε = ε(F_k) = −ε(F_{k+1}). We now have M + 1 unknown coefficients α_k and the unknown ε, a total of M + 2 unknowns. We solve for these by using the M + 2 frequencies to generate the M + 2 equations:

(−1)^k ε = W(F_k)[D(F_k) − H(F_k)],  k = 1, 2, . . . , M + 2        (8.42)

Here, the quantity (−1)^k brings out the alternating nature of the error.
Once the α_k are found, the right-hand side of this equation is known in its entirety and is used to compute the extremal frequencies. The problem is that these frequencies may no longer satisfy the alternation condition. So we must go back and evaluate a new set of α_k and ε, using the computed frequencies. We continue this process until the computed frequencies also turn out to be the actual extremal frequencies (to within a given tolerance, of course).
Do you see why it is called the exchange algorithm? First, we exchange an old set of frequencies F_k for a new one. Then we exchange an old set of α_k for a new one. Since the α_k and F_k actually describe the impulse response and frequency response of the filter, we are in essence going back and forth between the two domains until the coefficients α_k yield a spectrum with the desired optimal characteristics.
Many time-saving steps have been suggested to speed the computation of the extremal frequencies and the α_k at each iteration. The Parks-McClellan algorithm is arguably one of the most popular methods of filter design in the industry, and many of the better commercial software packages on signal processing include it in their stock list of programs. Having said that, we must also point out two disadvantages of this method. First, the filter length must still be estimated by empirical means. Second, we have no control over the actual ripple that the design yields. The only remedy, if this ripple is unacceptable, is to start afresh with a different set of weight functions or with a different filter length.
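In practice one rarely codes the exchange loop by hand; SciPy, for instance, exposes the Parks-McClellan algorithm as scipy.signal.remez. The sketch below applies it to the lowpass template of (8.41); the band edges Fp = 0.2, Fs = 0.3 and the length N = 25 are illustrative choices, not values from the text.

```python
import numpy as np
from scipy.signal import remez, freqz

# Equiripple lowpass via the Parks-McClellan algorithm. The stopband is
# weighted by K = delta_p/delta_s, which fixes the ratio of passband to
# stopband ripple, as in (8.41).
delta_p, delta_s = 0.0575, 0.00316
N = 25
h = remez(N, [0, 0.2, 0.3, 0.5], [1, 0], weight=[1, delta_p / delta_s])

w, H = freqz(h, worN=2048)
F = w / (2 * np.pi)
passband = np.abs(H[F <= 0.2])
stopband = np.abs(H[F >= 0.3])
```

The weight fixes only the ripple ratio; the length N sets the absolute ripple levels, which is why the length must still be estimated empirically and increased if the achieved ripple is too large.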
8.6.2 Optimal Half-Band Filters
Since about half the coefficients of a half-band filter are zero, the computational burden can be reduced by developing a filter that contains only the nonzero coefficients, followed by zero interpolation. To design a half-band lowpass filter h_HB[n] with band edges F_p and F_s, and ripple δ, an estimate of the filter length N = 4k − 1 is first found from the given design specifications. Next, we design a lowpass prototype h_P[n] of even length 0.5(N + 1), with band edges of 2F_p and 0.5 (in effect, with no stopband). This filter describes a type 2 sequence whose response is zero at F = 0.5 and whose ripple is twice the design ripple. Finally, we insert zeros between adjacent samples of 0.5h_P[n] and set the (zero-valued) center coefficient to 0.5 to obtain the required half-band filter h_HB[n] with the required ripple.
EXAMPLE 8.5 (Optimal FIR Filter Design)
(a) We design an optimal bandstop filter with stopband edges of [2, 3] kHz, passband edges of [1, 4] kHz, A_p = 1 dB, A_s = 50 dB, and a sampling frequency of S = 10 kHz.
We find the digital passband edges as [0.1, 0.4] and the stopband edges as [0.2, 0.3]. The transition width is F_T = 0.1. We find the approximate filter length N as follows:

δ_p = (10^{A_p/20} − 1)/(10^{A_p/20} + 1) = 0.0575        δ_s = 10^{−A_s/20} = 0.00316        N = 1 + [−10 log(δ_p δ_s) − 13]/(14.6 F_T) ≈ 17.7

Choosing the next odd length gives N = 19. The bandstop specifications are actually met by a filter with N = 21. The response of the designed filter is shown in Figure E8.5(a).
Figure E8.5 Optimal filters for Example 8.5: (a) optimal bandstop filter with N = 21, A_p = 0.2225 dB, A_s = 56.79 dB; (b) optimal half-band lowpass filter (magnitude in dB versus digital frequency F)
(b) (Optimal Half-Band Filter Design) We design an optimal half-band filter to meet the following specifications:
Passband edge: 8 kHz, stopband edge: 16 kHz, A_p = 1 dB, and A_s = 50 dB.
We choose S = 2(f_p + f_s) = 48 kHz. The digital band edges are F_p = 1/6, F_s = 1/3, and F_C = 0.25. Next, we find the minimum ripple δ and the approximate filter length N as

δ_p = (10^{A_p/20} − 1)/(10^{A_p/20} + 1) = 0.0575        δ_s = 10^{−A_s/20} = 0.00316        δ = min(δ_p, δ_s) = 0.00316

N = 1 + [−10 log δ² − 13]/[14.6(F_s − F_p)] = 1 + [−20 log(0.00316) − 13]/[14.6(1/6)] ≈ 16.2  ⇒  N = 19
The choice N = 19 is based on the form N = 4k − 1. Next, we design the lowpass prototype h_P[n] of even length M = 0.5(N + 1) = 10 and band edges at 2F_p = 2(1/6) = 1/3 and 0.5. The result is

h_P[n] = {0.0074, −0.0267, 0.0708, −0.173, 0.6226, 0.6226, −0.173, 0.0708, −0.0267, 0.0074}

Finally, we zero-interpolate 0.5h_P[n] and choose the center coefficient as 0.5 to obtain h_HB[n]:

h_HB[n] = {0.0037, 0, −0.0133, 0, 0.0354, 0, −0.0865, 0, 0.3113, 0.5, 0.3113, 0, −0.0865, 0, 0.0354, 0, −0.0133, 0, 0.0037}

Its length is N = 19, as required. Figure E8.5(b) shows that this filter does meet specifications, with an attenuation of 0.02 dB at 8 kHz and 59.5 dB at 16 kHz.
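The zero-interpolation step of the half-band recipe is easy to mechanize. The sketch below uses the prototype coefficients of this example; the signs (which scanned copies often drop) are restored so that the prototype has unit dc gain.

```python
import numpy as np

# Zero-interpolate 0.5*h_P[n] and set the center coefficient to 0.5,
# as in the half-band recipe of Section 8.6.2.
hp = np.array([0.0074, -0.0267, 0.0708, -0.173, 0.6226,
               0.6226, -0.173, 0.0708, -0.0267, 0.0074])

M = len(hp)                      # 10, so the half-band length is N = 2M - 1 = 19
hhb = np.zeros(2 * M - 1)
hhb[::2] = 0.5 * hp              # prototype samples at even indices
hhb[M - 1] = 0.5                 # the (zero-valued) center coefficient -> 0.5
```

Every other coefficient away from the center is zero, which is what halves the multiplication count of a half-band filter.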
8.7 Application: Multistage Interpolation and Decimation
Recall that a sampling rate increase, or interpolation, by N involves up-sampling (inserting N − 1 zeros between samples), followed by lowpass filtering with a gain of N. If the interpolation factor N is large, it is much more economical to carry out the interpolation in stages, because this results in a smaller overall filter length or order. For example, if N can be factored as N = I1 I2 I3, then interpolation by N can be accomplished in three stages with individual interpolation factors of I1, I2, and I3. At a typical stage with interpolation factor I, the output sampling rate is given by S_out = I S_in, as shown in Figure 8.17.
Figure 8.17 One stage of a typical multistage interpolating filter: an up-sampler (by I) followed by a digital lowpass filter with gain I, passband edge f_p, and stopband edge f_s = S_in − f_p, operating at S_out = I S_in
The filter serves to remove the spectral replications due to the zero interpolation by the up-sampler. These replicas occur at multiples of the input sampling rate. As a result, the filter stopband edge is computed from the input sampling rate S_in as f_s = S_in − f_p, while the filter passband edge remains fixed (by the given specifications). The filter sampling rate corresponds to the output sampling rate S_out, and the filter gain equals the interpolation factor I_k. At each successive stage (except the first), the spectral images occur at higher and higher frequencies, and their removal requires filters whose transition bands get wider with each stage, leading to less complex filters with smaller filter lengths. Although it is not easy to establish the optimum values of the interpolating factors and their order for the smallest overall filter length, it turns out that interpolating factors in increasing order generally yield smaller overall lengths, and any multistage design results in a substantial reduction in the filter length as compared to a single-stage design.
EXAMPLE 8.6 (The Concept of Multistage Interpolation)
(a) Consider a signal band-limited to 1.8 kHz and sampled at 4 kHz. It is required to raise the sampling rate to 48 kHz. This requires interpolation by 12. The value of the passband edge is f_p = 1.8 kHz for either a single-stage or multistage design.
For a single-stage interpolator, the output sampling rate is S_out = 48 kHz, and we thus require a filter with a stopband edge of f_s = S_in − f_p = 4 − 1.8 = 2.2 kHz, a sampling rate of S = S_out = 48 kHz, and a gain of 12. If we use a crude approximation for the filter length as L ≈ 4/F_T, where F_T = (f_s − f_p)/S is the digital transition width, we obtain L = 4(48/0.4) = 480.
(b) If we choose two-stage interpolation with I1 = 3 and I2 = 4, at each stage we compute the important parameters as follows:

Stage | S_in (kHz) | Interpolation Factor | S_out = S (kHz) | f_p (kHz) | f_s = S_in − f_p (kHz) | Filter Length L ≈ 4S/(f_s − f_p)
  1   |     4      |       I1 = 3         |       12        |    1.8    |          2.2           | 48/0.4 = 120
  2   |    12      |       I2 = 4         |       48        |    1.8    |         10.2           | 192/8.4 ≈ 23

The total filter length is thus 143.
(c) If we choose three-stage interpolation with I1 = 2, I2 = 3, and I3 = 2, at each stage we compute the important parameters as follows:

Stage | S_in (kHz) | Interpolation Factor | S_out = S (kHz) | f_p (kHz) | f_s = S_in − f_p (kHz) | Filter Length L ≈ 4S/(f_s − f_p)
  1   |     4      |       I1 = 2         |        8        |    1.8    |          2.2           | 32/0.4 = 80
  2   |     8      |       I2 = 3         |       24        |    1.8    |          6.2           | 96/4.4 ≈ 22
  3   |    24      |       I3 = 2         |       48        |    1.8    |         22.2           | 192/20.4 ≈ 10

The total filter length is thus 112.
(d) If we choose three-stage interpolation but with the different order I1 = 3, I2 = 2, and I3 = 2, at each stage we compute the important parameters as follows:

Stage | S_in (kHz) | Interpolation Factor | S_out = S (kHz) | f_p (kHz) | f_s = S_in − f_p (kHz) | Filter Length L ≈ 4S/(f_s − f_p)
  1   |     4      |       I1 = 3         |       12        |    1.8    |          2.2           | 48/0.4 = 120
  2   |    12      |       I2 = 2         |       24        |    1.8    |         10.2           | 96/8.4 ≈ 12
  3   |    24      |       I3 = 2         |       48        |    1.8    |         22.2           | 192/20.4 ≈ 10

The total filter length is thus 142.
Any multistage design results in a substantial reduction in the filter length as compared to a single-stage design, and a smaller interpolation factor in the first stage of a multistage design does seem to yield smaller overall lengths. Also remember that these filter lengths are only a crude approximation meant to illustrate the relative merits of each design; the actual filter lengths will depend on the attenuation specifications.
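The bookkeeping in this example can be sketched in a few lines. The helper name interp_lengths and the ceiling rounding are our own conventions; the length estimate L ≈ 4/F_T is the crude one used above.

```python
import math

def interp_lengths(factors, s_in=4.0, fp=1.8):
    # Per-stage length estimates L ~ 4/F_T for a multistage interpolator.
    lengths = []
    for i in factors:
        s_out = i * s_in                 # filter runs at the output rate
        fs = s_in - fp                   # image edge set by the input rate
        lengths.append(math.ceil(4 * s_out / (fs - fp)))
        s_in = s_out
    return lengths

print(interp_lengths([12]))              # single stage
print(interp_lengths([3, 4]))            # two stages
print(interp_lengths([2, 3, 2]))         # three stages
```

The totals (480 for one stage, 143, 112, and 142 for the multistage splits above) reproduce the tables in this example.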
For multistage operations, the actual filter lengths depend not only on the order of the interpolating factors, but also on the given attenuation specifications A_p and A_s. Since attenuations in decibels add in a cascaded system, the passband attenuation A_p is usually distributed among the various stages to ensure an overall attenuation that matches specifications. The stopband specification needs no such adjustment. Since the signal is attenuated even further at each stage, the overall stopband attenuation always exceeds specifications. For interpolation, we require a filter whose gain is scaled (multiplied) by the interpolation factor of the stage.
EXAMPLE 8.7 (Design of Interpolating Filters)
(a) Consider a signal band-limited to 1.8 kHz and sampled at 4 kHz. It is required to raise the sampling rate to 48 kHz. The passband attenuation should not exceed 0.6 dB, and the minimum stopband attenuation should be 50 dB. Design an interpolation filter using a single-stage interpolator.
The single stage requires interpolation by 12. The value of the passband edge is f_p = 1.8 kHz. The output sampling rate is S_out = 48 kHz, and we thus require a filter with a stopband edge of f_s = S_in − f_p = 4 − 1.8 = 2.2 kHz and a sampling rate of S = S_out = 48 kHz. We ignore the filter gain of 12 in the following computations. To compute the filter length, we first find the ripple parameters

δ_p = (10^{A_p/20} − 1)/(10^{A_p/20} + 1) = (10^{0.6/20} − 1)/(10^{0.6/20} + 1) = 0.0345        δ_s = 10^{−A_s/20} = 10^{−50/20} = 0.00316

and then approximate the length N by

N ≈ [−10 log(δ_p δ_s) − 13]/[14.6(F_s − F_p)] + 1 = S[−10 log(δ_p δ_s) − 13]/[14.6(f_s − f_p)] + 1 ≈ 230

The actual filter length of the optimal filter turns out to be N = 233, and the filter shows a passband attenuation of 0.597 dB and a stopband attenuation of 50.05 dB.
(b) Repeat the design using a three-stage interpolator with I1 = 2, I2 = 3, and I3 = 2. How do the results compare with those of the single-stage design?
We distribute the passband attenuation (in decibels) equally among the three stages. Thus, A_p = 0.2 dB for each stage, and the ripple parameters for each stage are

δ_p = (10^{0.2/20} − 1)/(10^{0.2/20} + 1) = 0.0115        δ_s = 10^{−50/20} = 0.00316
For each stage, the important parameters and the filter length are listed in the following table:

Stage | S_in (kHz) | Interpolation Factor | S_out = S (kHz) | f_p (kHz) | f_s = S_in − f_p (kHz) | Filter Length L
  1   |     4      |       I1 = 2         |        8        |    1.8    |          2.2           | 44
  2   |     8      |       I2 = 3         |       24        |    1.8    |          6.2           | 13
  3   |    24      |       I3 = 2         |       48        |    1.8    |         22.2           | 7
In this table, for example, we compute the filter length for the first stage as

L ≈ [−10 log(δ_p δ_s) − 13]/[14.6(F_s − F_p)] + 1 = S[−10 log(δ_p δ_s) − 13]/[14.6(f_s − f_p)] + 1 ≈ 44

The actual filter lengths of the optimal filters turn out to be 47 (with design attenuations of 0.19 dB and 50.31 dB), 13 (with design attenuations of 0.18 dB and 51.09 dB), and 4 (with design attenuations of 0.18 dB and 50.91 dB). The overall filter length is only 64. This is about four times less than the filter length for the single-stage design.
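The per-stage estimates of this example can be sketched as follows. The helper name stage_lengths is our own; the length relation is the Kaiser-like estimate (8.40), with the passband ripple budget split equally across stages as described above.

```python
import math

def stage_lengths(factors, s_in=4.0, fp=1.8, ap_total=0.6, as_db=50.0):
    # Estimate each stage's length for a multistage interpolator,
    # distributing the passband attenuation Ap equally among stages.
    ap = ap_total / len(factors)                      # dB per stage
    dp = (10**(ap / 20) - 1) / (10**(ap / 20) + 1)
    ds = 10**(-as_db / 20)
    lengths = []
    for i in factors:
        s_out = i * s_in
        fs = s_in - fp                                # stopband edge
        ft = (fs - fp) / s_out                        # digital transition width
        n = 1 + (-10 * math.log10(dp * ds) - 13) / (14.6 * ft)
        lengths.append(math.ceil(n))
        s_in = s_out
    return lengths

print(stage_lengths([2, 3, 2]))
```

This reproduces the table's estimates of 44, 13, and 7 for the three stages.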
8.7.1 Multistage Decimation
Decimation by M involves lowpass filtering (with a gain of unity), followed by down-sampling (discarding M − 1 samples and retaining every Mth sample). The process is essentially the inverse of interpolation. If the decimation factor M is large, decimation in stages results in a smaller overall filter length or order. If M can be factored as M = D1 D2 D3, then decimation by M can be accomplished in three stages with individual decimation factors of D1, D2, and D3. At a typical stage, the output sampling rate is given by S_out = S_in/D, where D is the decimation factor for that stage. This is illustrated in Figure 8.18.
Figure 8.18 One stage of a typical multistage decimating filter: a digital lowpass filter with unity gain, passband edge f_p, and stopband edge f_s = S_out − f_p, followed by a down-sampler (by D) with S_out = S_in/D
The decimation filter has a gain of unity and operates at the input sampling rate S_in, and its stopband edge is computed from the output sampling rate as f_s = S_out − f_p. At each successive stage (except the first), the transition bands get narrower.
The overall filter length does depend on the order in which the decimating factors are used. Although it is not easy to establish the optimum values of the decimation factors and their order for the smallest overall filter length, it turns out that decimation factors in decreasing order generally yield smaller overall lengths, and any multistage design results in a substantial reduction in the filter length as compared to a single-stage design.
The actual filter lengths also depend on the given attenuation specifications. Since attenuations in decibels add in a cascaded system, the passband attenuation A_p is usually distributed among the various stages to ensure an overall value that matches specifications.
EXAMPLE 8.8 (The Concept of Multistage Decimation)
(a) Consider a signal band-limited to 1.8 kHz and sampled at 48 kHz. It is required to reduce the sampling rate to 4 kHz. This requires decimation by 12. The passband edge is f_p = 1.8 kHz and remains unchanged for a single-stage or multistage design.
For a single-stage decimator, the output sampling rate is S_out = 4 kHz, and we thus require a filter with a stopband edge of f_s = S_out − f_p = 4 − 1.8 = 2.2 kHz, a sampling rate of S = S_in = 48 kHz, and a gain of unity. If we use a crude approximation for the filter length as L ≈ 4/F_T, where F_T = (f_s − f_p)/S is the digital transition width, we obtain L = 4(48/0.4) = 480.
(b) If we choose two-stage decimation with D1 = 4 and D2 = 3, at each stage we compute the important parameters as follows:

Stage | S_in = S (kHz) | Decimation Factor | S_out (kHz) | f_p (kHz) | f_s = S_out − f_p (kHz) | Filter Length L ≈ 4S/(f_s − f_p)
  1   |       48       |      D1 = 4       |     12      |    1.8    |          10.2           | 192/8.4 ≈ 23
  2   |       12       |      D2 = 3       |      4      |    1.8    |           2.2           | 48/0.4 = 120

The total filter length is thus 143.
(c) If we choose three-stage decimation with D1 = 2, D2 = 3, and D3 = 2, at each stage we compute the important parameters as follows:

Stage | S_in = S (kHz) | Decimation Factor | S_out (kHz) | f_p (kHz) | f_s = S_out − f_p (kHz) | Filter Length L ≈ 4S/(f_s − f_p)
  1   |       48       |      D1 = 2       |     24      |    1.8    |          22.2           | 192/20.4 ≈ 10
  2   |       24       |      D2 = 3       |      8      |    1.8    |           6.2           | 96/4.4 ≈ 22
  3   |        8       |      D3 = 2       |      4      |    1.8    |           2.2           | 32/0.4 = 80

The total filter length is thus 112.
(d) If we choose three-stage decimation but with the different order D1 = 2, D2 = 2, and D3 = 3, at each stage we compute the important parameters as follows:

Stage | S_in = S (kHz) | Decimation Factor | S_out (kHz) | f_p (kHz) | f_s = S_out − f_p (kHz) | Filter Length L ≈ 4S/(f_s − f_p)
  1   |       48       |      D1 = 2       |     24      |    1.8    |          22.2           | 192/20.4 ≈ 10
  2   |       24       |      D2 = 2       |     12      |    1.8    |          10.2           | 96/8.4 ≈ 12
  3   |       12       |      D3 = 3       |      4      |    1.8    |           2.2           | 48/0.4 = 120

The total filter length is thus 142.
Note how decimation uses the same filter frequency specifications as interpolation for a given split, except in reversed order. Any multistage design results in a substantial reduction in the filter length as compared to a single-stage design. Also remember that the filter lengths chosen here are only a crude approximation (in order to illustrate the relative merits of each design), and the actual filter lengths will depend on the attenuation specifications.
8.8 Maximally Flat FIR Filters
Linear-phase FIR filters can also be designed with a maximally flat frequency response. Such filters are usually used in situations where accurate filtering is needed at low frequencies (near dc). The design of lowpass maximally flat filters uses a closed form for the transfer function H(F) given by

H(F) = cos^{2K}(πF) Σ_{n=0}^{L−1} d_n sin^{2n}(πF)        d_n = (K + n − 1)!/[(K − 1)! n!] = C(K + n − 1, n)        (8.43)

Here, the d_n have the form of binomial coefficients, as indicated. Note that 2L − 1 derivatives of |H(F)|² are zero at F = 0, and 2K − 1 derivatives are zero at F = 0.5. This is the basis for the maximally flat response of the filter. The filter length equals N = 2(K + L) − 1, and is thus odd. The integers K and L are determined from the passband and stopband frequencies F_p and F_s that correspond to gains of 0.95 and 0.05 (or attenuations of about 0.5 dB and 26 dB), respectively. Here is an empirical design method:
1. Define the cutoff frequency as F_C = 0.5(F_p + F_s) and let F_T = F_s − F_p.
2. Obtain a first estimate of the odd filter length as N_0 = 1 + 0.5/F_T².
3. Define the parameter β as β = cos²(πF_C).
4. Find the best rational approximation β ≈ K/M_min, with 0.5(N_0 − 1) ≤ M_min ≤ (N_0 − 1).
5. Evaluate L and the true filter length N from L = M_min − K and N = 2M_min − 1.
6. Find h[n] as the N-point inverse DFT of H(F), F = 0, 1/N, . . . , (N − 1)/N.
EXAMPLE 8.9 (Maximally Flat FIR Filter Design)
Consider the design of a maximally flat lowpass FIR filter with normalized frequencies F_p = 0.2 and F_s = 0.4.
We have F_C = 0.3 and F_T = 0.2. We compute N_0 = 1 + 0.5/F_T² = 13.5 → 15, and β = cos²(πF_C) = 0.3455. The best rational approximation to β works out to be 5/14 = K/M_min.
With K = 5 and M_min = 14, we get L = M_min − K = 9, and the filter length N = 2M_min − 1 = 27. Figure E8.9 shows the impulse response and |H(F)| of the designed filter. The response is maximally flat, with no ripples in the passband.
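The empirical recipe above is easy to automate; a sketch for this example follows. The brute-force search over M is one way (our own) to implement the rational-approximation step 4.

```python
import math
import numpy as np

Fp, Fs = 0.2, 0.4
FC, FT = 0.5 * (Fp + Fs), Fs - Fp
N0 = math.ceil(1 + 0.5 / FT**2)            # 13.5 rounds up to 14 ...
if N0 % 2 == 0:
    N0 += 1                                # ... then to the odd length 15
beta = math.cos(math.pi * FC) ** 2         # 0.3455

# step 4: best rational approximation K/M to beta, 0.5(N0-1) <= M <= N0-1
Mlo, Mhi = math.ceil(0.5 * (N0 - 1)), N0 - 1
M, K = min(((m, round(beta * m)) for m in range(Mlo, Mhi + 1)),
           key=lambda mk: abs(mk[1] / mk[0] - beta))
L, N = M - K, 2 * M - 1                    # L = 9, N = 27

# step 6: sample H(F) of (8.43) at F = k/N and take the inverse DFT
F = np.arange(N) / N
d = [math.comb(K + n - 1, n) for n in range(L)]
H = np.cos(np.pi * F) ** (2 * K) * sum(dn * np.sin(np.pi * F) ** (2 * n)
                                       for n, dn in enumerate(d))
h = np.roll(np.fft.ifft(H).real, (N - 1) // 2)   # center the response
```

The inverse DFT of the real, even samples H(k/N) is real and symmetric; the final roll merely delays it so the filter is causal.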
Figure E8.9 Features of the maximally flat lowpass filter for Example 8.9: (a) impulse response h[n]; (b) magnitude spectrum in dB; (c) passband detail in dB
8.9 FIR Differentiators and Hilbert Transformers
An ideal digital differentiator is described by H(F) = j2πF, |F| ≤ 0.5. In practical situations, we seldom require filters that differentiate over the full frequency range up to |F| = 0.5. If we require differentiation only up to a cutoff frequency of F_C, then

H(F) = j2πF,  |F| ≤ F_C        (8.44)

The magnitude and phase spectrum of such a differentiator are shown in Figure 8.19. Since H(F) is odd, h[0] = 0. To find h[n], n ≠ 0, we use the inverse DTFT to obtain

h[n] = j ∫_{−F_C}^{F_C} 2πF e^{j2πnF} dF        (8.45)
Figure 8.19 Magnitude and phase spectrum of an ideal differentiator (magnitude 2π|F| for |F| ≤ F_C; phase ±π/2)
Invoking Euler's relation and symmetry, this simplifies to

h[n] = −4π ∫_0^{F_C} F sin(2πnF) dF = [2πnF_C cos(2πnF_C) − sin(2πnF_C)]/(πn²)        (8.46)

For F_C = 0.5, this yields h[0] = 0 and h[n] = cos(πn)/n, n ≠ 0.
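A windowed FIR differentiator built from this full-band result can be sketched as follows. The length N = 25 and the Hamming window are illustrative choices; the truncation, windowing, and symmetry steps anticipate Section 8.9.2.

```python
import numpy as np

N = 25                                   # odd length -> type 3 sequence
M = (N - 1) // 2
n = np.arange(1, M + 1)
half = np.cos(np.pi * n) / n             # h[n] = cos(pi*n)/n for F_C = 0.5
h = np.concatenate([-half[::-1], [0.0], half])   # odd symmetry, h[0] = 0
h = h * np.hamming(N)                    # taper to reduce Gibbs ripple

# spot-check the response at F = 0.1 against the ideal |H(F)| = 2*pi*F
F = 0.1
H = np.sum(h * np.exp(-2j * np.pi * F * np.arange(-M, M + 1)))
```

Because h[n] is real and odd, H(F) is purely imaginary, as the ideal j2πF requires; a causal version simply delays h[n] by (N − 1)/2 samples.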
8.9.1 Hilbert Transformers
A Hilbert transformer is used to shift the phase of a signal by −π/2 (or −90°). Over the frequency range |F| ≤ F_C, it is described by

H(F) = −j sgn(F),  |F| ≤ F_C        (8.47)

Its magnitude and phase spectrum are shown in Figure 8.20.
Figure 8.20 Magnitude and phase spectrum of an ideal Hilbert transformer (unit magnitude for |F| ≤ F_C; phase ±π/2)
With h[0] = 0, we use the inverse DTFT to find its impulse response h[n], n ≠ 0, as

h[n] = ∫_{−F_C}^{F_C} −j sgn(F) e^{j2πnF} dF = 2 ∫_0^{F_C} sin(2πnF) dF = [1 − cos(2πnF_C)]/(πn)        (8.48)

For F_C = 0.5, this reduces to h[n] = [1 − cos(πn)]/(πn), n ≠ 0.
8.9.2 Design of FIR Differentiators and Hilbert Transformers
To design an FIR differentiator, we must truncate h[n] to h_N[n] and choose a type 3 or type 4 sequence, since H(F) is purely imaginary. To ensure odd symmetry, the filter coefficients may be computed only for n > 0, and the same values (negated, in reversed order) are used for n < 0. If N is odd, we must also include the sample h[0] = 0. We may window h_N[n] to minimize the overshoot and ripple in the spectrum H_N(F). And, finally, to ensure causality, we must introduce a delay of (N − 1)/2 samples. Figure 8.21(a) shows the magnitude response of Hamming-windowed FIR differentiators for both even and odd lengths N. Note that H(0) is always zero, but H(0.5) = 0 only for type 3 (odd-length) sequences.
Figure 8.21 Magnitude spectra of FIR differentiators (a) and Hilbert transformers (b) designed with a Hamming window (F_C = 0.5; lengths N = 10, 15, 25)
The design of Hilbert transformers closely parallels the design of FIR differentiators. The sequence h[n] is truncated to h_N[n]. The chosen filter must correspond to a type 3 or type 4 sequence, since H(F) is imaginary. To ensure odd symmetry, the filter coefficients may be computed only for n > 0, with the same values (negated, in reversed order) used for n < 0. If N is odd, we must also include the sample h[0] = 0. We may window h_N[n] to minimize the ripples (due to the Gibbs effect) in the spectrum H_N(F). The Hamming window is a common choice, but others may also be used. To make the filter causal, we introduce a delay of (N − 1)/2 samples. The magnitude spectrum of a Hilbert transformer becomes flatter with increasing filter length N. Figure 8.21(b) shows the magnitude response of Hilbert transformers for both even and odd lengths N. Note that H(0) is always zero, but H(0.5) = 0 only for type 3 (odd-length) sequences.
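The same steps, applied to the Hilbert transformer of (8.48), can be sketched as follows (again, N = 25 and the Hamming window are illustrative choices):

```python
import numpy as np

N = 25                                   # odd length -> type 3 sequence
M = (N - 1) // 2
n = np.arange(1, M + 1)
half = (1 - np.cos(np.pi * n)) / (np.pi * n)     # h[n] from (8.48), F_C = 0.5
h = np.concatenate([-half[::-1], [0.0], half]) * np.hamming(N)

# mid-band spot-check: the response should be close to -j (phase -pi/2)
F = 0.25
H = np.sum(h * np.exp(-2j * np.pi * F * np.arange(-M, M + 1)))
```

Away from the band edges at F = 0 and F = 0.5, the windowed response stays close to the ideal unit magnitude and −90° phase shift.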
8.10 Least Squares and Adaptive Signal Processing
The concept of least squares can be formulated as the solution to a set of algebraic equations expressed in
matrix form as
Xb = Y (8.49)
If the number of unknowns equals the number of equations, and the matrix X is non-singular so that its inverse X⁻¹ exists (the unique case), the solution is simply b = X⁻¹Y. However, if the number of equations exceeds the number of unknowns (the over-determined case), no exact solution is possible, and only an approximate result may be obtained using, for example, least squares minimization. If the number of unknowns exceeds the number of equations (the under-determined case), many possible solutions exist, including the one for which the mean square error is minimized.
In many practical situations, the set of equations is over-determined and thus amenable to a least-squares solution. To solve for b, we simply premultiply both sides by Xᵀ to obtain

XᵀXb = XᵀY   (8.50)

If the inverse of XᵀX exists, the solution is given by

b = (XᵀX)⁻¹XᵀY   (8.51)
© Ashok Ambardar, September 1, 2003
The matrix XᵀX is called the covariance matrix.
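A minimal NumPy illustration of (8.49)-(8.51), with randomly generated data standing in for a real problem, solves an over-determined system through the normal equations and confirms that the result matches a library least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(20, 3))                   # 20 equations, 3 unknowns: over-determined
b_true = np.array([1.0, -2.0, 0.5])
Y = X @ b_true + 0.01 * rng.normal(size=20)    # slightly noisy observations

# Least squares via the normal equations (8.50)-(8.51)
b = np.linalg.solve(X.T @ X, X.T @ Y)

# Same answer as a dedicated least-squares solver
b_ref = np.linalg.lstsq(X, Y, rcond=None)[0]
print(np.allclose(b, b_ref))
```

In practice, solvers based on QR or SVD factorizations (as in `lstsq`) are preferred over forming XᵀX explicitly, since the normal equations square the condition number of X.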
8.10.1 Adaptive Filtering
The idea of least squares finds widespread application in adaptive signal processing, a field that includes adaptive filtering, deconvolution, and system identification, and encompasses many disciplines. Typically, the goal is to devise a digital filter whose coefficients b[k] can be adjusted to optimize its performance (in the face of changing signal characteristics or to combat the effects of noise, for example). A representative system for adaptive filtering is shown in Figure 8.22. The measured signal x[n] (which may be contaminated by noise) is fed to an FIR filter whose output ŷ[n] is compared with the desired signal y[n] to generate the error signal e[n]. The error signal is used to update the filter coefficients b[k] through an adaptation algorithm in a way that minimizes the error e[n], and thus provides an optimal estimate of the desired signal y[n].
[Figure 8.22: A typical adaptive filtering system. The input x[n] drives an adaptive FIR filter whose output ŷ[n] is subtracted from the desired signal y[n] to form the error e[n], which drives the adaptation algorithm that updates the filter coefficients.]
b[k]x[n k], n = 0, 1, . . . , M +N + 1 (8.52)
This set of M +N + 1 equations may be cast in matrix form as
y[0]
.
.
.
y[N]
x[0] 0
x[1] x[0] 0
x[2] x[1] x[0] 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
x[N] x[N 1] x[N 2] x[N M]
0
0 0
0 0 0
b[0]
.
.
.
b[M]
(8.53)
In vector notation, it has the form Ŷ = Xb, where b is an (M + 1) × 1 column matrix of the filter coefficients b[0] through b[M], Ŷ is an (M + N + 1) × 1 matrix of the output samples, and X is an (M + N + 1) × (M + 1) Toeplitz (constant-diagonal) matrix whose columns are successively shifted replicas of the input sequence. The (M + 1) filter coefficients in b are chosen to ensure that the output Ŷ of the model provides an optimal estimate of the desired output Y. Clearly, this problem is amenable to a least squares solution.
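The structure of X in (8.53) is easy to verify numerically. This hedged sketch (the sample values are arbitrary) builds the Toeplitz convolution matrix column by column and checks that Xb reproduces the convolution of (8.52):

```python
import numpy as np

def convolution_matrix(x, M):
    """(M+N+1) x (M+1) Toeplitz matrix whose columns are shifted copies of x."""
    N = len(x) - 1
    X = np.zeros((M + N + 1, M + 1))
    for k in range(M + 1):
        X[k:k + N + 1, k] = x          # column k is x delayed by k samples
    return X

x = np.array([1.0, 2.0, -1.0, 0.5])    # N + 1 = 4 input samples
b = np.array([0.5, 0.25, 0.1])         # M + 1 = 3 filter coefficients
X = convolution_matrix(x, M=2)
print(np.allclose(X @ b, np.convolve(x, b)))   # Xb equals the convolution
```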
Since the data is being acquired continuously in many practical situations, the least squares solution is implemented on-line or in real time using iterative or recursive numerical algorithms. Two common approaches are the recursive least squares (RLS) algorithm and the least mean squares (LMS) algorithm. Both start with an assumed set of filter coefficients and update this set as each new input sample arrives.
The RLS algorithm: In the RLS algorithm, the filter coefficients are updated by weighting the input samples (typically in an exponential manner) so as to emphasize more recent inputs.
The LMS algorithm: The LMS algorithm, though not directly related to least squares, uses the so-called method of steepest descent to generate updates that converge about the least squares solution. Although it may not converge as rapidly as the RLS algorithm, it is far more popular due to its ease of implementation. In fact, the updating equation (for the nth update) has the simple form

b_n[k] = b_{n−1}[k] + 2μ(y[n] − ŷ[n])x[n − k] = b_{n−1}[k] + 2μe[n]x[n − k],  0 ≤ k ≤ M   (8.54)

The parameter μ governs both the rate of convergence and the stability of the algorithm. Larger values result in faster convergence, but the filter coefficients tend to oscillate about the optimum values. Typically, μ is restricted to the range 0 < μ < 1/σx², where σx², the variance of x[n], provides a measure of the power in the input signal x[n].
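As a concrete (and simplified) illustration of the update (8.54), the sketch below uses LMS to identify a short "unknown" FIR system from its input and output; the system coefficients and step size are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
b_unknown = np.array([0.4, -0.2, 0.1])   # "unknown" system to identify
M = len(b_unknown) - 1
x = rng.normal(size=4000)                # input with unit variance
y = np.convolve(x, b_unknown)[:len(x)]   # desired signal: the system's output

mu = 0.01                                # step size, well inside 0 < mu < 1/var(x)
b = np.zeros(M + 1)                      # initial coefficient guess
for n in range(M, len(x)):
    xv = x[n - np.arange(M + 1)]         # x[n], x[n-1], ..., x[n-M]
    e = y[n] - b @ xv                    # error e[n] = y[n] - yhat[n]
    b = b + 2 * mu * e * xv              # LMS update, Eq. (8.54)

print(np.round(b, 3))                    # converges toward b_unknown
```

Since the data here is noise-free, the coefficients settle close to the true values; with noisy data they would fluctuate about them, with the fluctuation growing as μ increases.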
8.10.2 Applications of Adaptive Filtering
Adaptive filtering forms the basis for many signal-processing applications including system identification, noise cancellation, and channel equalization. We conclude with a brief introduction.
System Identification
In system identification, the goal is to identify the transfer function (or impulse response) of an unknown system. Both the adaptive filter and the unknown system are excited by the same input, and the signal y[n] represents the output of the unknown system. Minimizing e[n] implies that the outputs of the unknown system and the adaptive filter are very close, and the adaptive filter coefficients describe an FIR approximation to the unknown system.
Noise Cancellation
In adaptive noise-cancellation systems, the goal is to improve the quality of a desired signal y[n] that may be contaminated by noise. The signal x[n] is a noise signal, and the adaptive filter minimizes the power in e[n]. Since the noise power and signal power add (if they are uncorrelated), the signal e[n] (with its power minimized) also represents a cleaner estimate of the desired signal y[n].
Channel Equalization
In adaptive channel equalization, the goal is to allow a modem to adapt to different telephone lines (so as to prevent distortion and inter-symbol interference). A known training signal y[n] is transmitted at the start of each call, and x[n] is the output of the telephone channel. The error signal e[n] is used to generate an FIR filter (an inverse system) that cancels out the effects of the telephone channel. Once found, the filter coefficients are fixed, and the modem operates with the fixed filter.
CHAPTER 8 PROBLEMS
8.1 (Symmetric Sequences) Find H(z) and H(F) for each sequence and establish the type of FIR filter it describes by checking the values of H(F) at F = 0 and F = 0.5.
(a) h[n] = {1, 0, 1}   (b) h[n] = {1, 2, 2, 1}
(c) h[n] = {1, 0, −1}   (d) h[n] = {1, 2, −2, −1}
8.2 (Symmetric Sequences) What types of sequences can we use to design the following filters?
(a) Lowpass (b) Highpass (c) Bandpass (d) Bandstop
8.3 (Linear-Phase Sequences) The first few values of the impulse response sequence of a linear-phase filter are h[n] = {2, 3, 4, 1, . . .}. Determine the complete sequence, assuming the smallest length for h[n], if the sequence is to be:
(a) Type 1 (b) Type 2 (c) Type 3 (d) Type 4
[Hints and Suggestions: For part (a), the even symmetry will be about the fourth sample. Part (c) requires odd length, odd symmetry, and a zero-valued sample at the midpoint.]
8.4 (Linear Phase and Symmetry) Assume a finite length impulse response sequence h[n] with real coefficients and argue for or against the following statements.
(a) If all the zeros lie on the unit circle, h[n] must be linear phase.
(b) If h[n] is linear phase, its zeros must always lie on the unit circle.
(c) If h[n] is odd symmetric, there must be an odd number of zeros at z = 1.
[Hints and Suggestions: Use the following facts. Each pair of reciprocal zeros, such as (z − α) and (z − 1/α), yields an even symmetric impulse response of the form {1, −(α + 1/α), 1}. Multiplication in the z-domain means convolution in the time domain. The convolution of symmetric sequences is also symmetric.]
8.5 (Linear Phase and Symmetry) Assume a linear-phase sequence h[n] with real coefficients and refute the following statements by providing simple examples using zero locations only at z = ±1.
(a) If h[n] has zeros at z = −1, it must be a type 2 sequence.
(b) If h[n] has zeros at z = 1 and z = −1, it must be a type 3 sequence.
(c) If h[n] has zeros at z = 1, it must be a type 4 sequence.
8.6 (Linear Phase and Symmetry) The locations of the zeros at z = ±1 and their number provide useful clues about the type of a linear-phase sequence. What is the sequence type for the following zero locations at z = ±1? Other zero locations are in keeping with linear phase and real coefficients.
(a) No zeros at z = ±1
(b) One zero at z = −1, none at z = 1
(c) Two zeros at z = −1, one zero at z = 1
(d) One zero at z = 1, none at z = −1
(e) Two zeros at z = 1, none at z = −1
(f) One zero at z = −1, one zero at z = 1
(g) Two zeros at z = 1, one zero at z = −1
[Hints and Suggestions: For (a), the length is odd and the symmetry is even (so, type 1). For the
rest, each zero increases the length by one and each zero at z = 1 toggles the symmetry.]
8.7 (Linear-Phase Sequences) What is the smallest length linear-phase sequence with real coefficients that meets the requirements listed? Identify all the zero locations and the type of linear-phase sequence.
(a) Zero location: z = e^{j0.25π}; even symmetry; odd length
(b) Zero location: z = 0.5e^{j0.25π}; even symmetry; even length
(c) Zero location: z = e^{j0.25π}; odd symmetry; even length
(d) Zero location: z = 0.5e^{j0.25π}; odd symmetry; odd length
[Hints and Suggestions: For (a) and (c), the given zero will be paired with its conjugate. For (b) and (d), the given zero will be part of a conjugate reciprocal quadruple. For all parts, no additional zeros give an odd length and even symmetry. Each additional zero at z = 1 or z = −1 will increase the length by one. Each zero at z = 1 will toggle the symmetry.]
8.8 (Linear-Phase Sequences) Partial details of various filters are listed. Zero locations are in keeping with linear phase and real coefficients. Assuming the smallest length, identify the sequence type and find the transfer function of each filter.
(a) Zero location: z = 0.5e^{j0.25π}
(b) Zero location: z = e^{j0.25π}
(c) Zero locations: z = 1, z = e^{j0.25π}
(d) Zero locations: z = 0.5, z = 1; odd symmetry
(e) Zero locations: z = 0.5, z = 1, z = −1; even symmetry
[Hints and Suggestions: Linear phase requires conjugate reciprocal zeros. No zeros at z = 1 yield even symmetry. Each additional zero at z = 1 toggles the symmetry.]
8.9 (Truncation and Windowing) Consider a windowed lowpass FIR filter with cutoff frequency 5 kHz and sampling frequency S = 20 kHz. Find the truncated, windowed sequence, the minimum delay (in samples and in seconds) to make the filter causal, and the transfer function H(z) of the causal filter if
(a) N = 7, and we use a Bartlett window.
(b) N = 8, and we use a von Hann (Hanning) window.
(c) N = 9, and we use a Hamming window.
[Hints and Suggestions: For (a)-(b), find the results after discarding any zero-valued end-samples of the windowed sequence.]
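As an aside, the truncation-and-windowing recipe of this problem can be sketched in a few lines of NumPy (shown here for part (c); the text itself uses MATLAB-based routines). The ideal lowpass response 2FC sinc(2nFC) is centered, windowed, and then delayed by (N − 1)/2 samples to become causal:

```python
import numpy as np

S, fc = 20e3, 5e3                 # sampling rate and cutoff from the problem
FC = fc / S                       # digital cutoff FC = 0.25
N = 9                             # filter length (Hamming case, part (c))
n = np.arange(N) - (N - 1) // 2   # center the ideal response at n = 0
h = 2 * FC * np.sinc(2 * FC * n)  # ideal lowpass: h[n] = 2*FC*sinc(2n*FC)
hw = h * np.hamming(N)            # windowed sequence (delay makes it causal)

# DC gain stays close to 1, and the sequence is even symmetric (type 1):
print(hw.sum(), np.allclose(hw, hw[::-1]))
```

Note that `np.sinc` uses the normalized convention sinc(x) = sin(πx)/(πx), matching the text's definition.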
8.10 (Spectral Transformations) Assuming a sampling frequency of 40 kHz and a fixed passband, find the specifications for a digital lowpass FIR prototype and the subsequent spectral transformation to convert to the required filter type for the following filters.
(a) Highpass: passband edge at 10 kHz, stopband edge at 4 kHz
(b) Bandpass: passband edges at 6 kHz and 10 kHz, stopband edges at 2 kHz and 12 kHz
(c) Bandstop: passband edges 8 kHz and 16 kHz, stopband edges 12 kHz and 14 kHz
c Ashok Ambardar, September 1, 2003
Chapter 8 Problems 355
[Hints and Suggestions: For (b)-(c), ensure arithmetic symmetry by relocating some band edges (assuming a fixed passband, for example) to maintain the smallest transition width.]
8.11 (Window-Based FIR Filter Design) We wish to design a window-based linear-phase FIR filter. What is the approximate filter length N required if the filter to be designed is
(a) Lowpass: fp = 1 kHz, fs = 2 kHz, S = 10 kHz, using a von Hann (Hanning) window?
(b) Highpass: fp = 2 kHz, fs = 1 kHz, S = 8 kHz, using a Blackman window?
(c) Bandpass: fp = [4, 8] kHz, fs = [2, 12] kHz, S = 25 kHz, using a Hamming window?
(d) Bandstop: fp = [2, 12] kHz, fs = [4, 8] kHz, S = 25 kHz, using a Hamming window?
[Hints and Suggestions: This requires table lookup and the digital transition width. Round up the length to the next highest integer (an odd integer for bandstop filters).]
8.12 (Half-Band FIR Filter Design) A lowpass half-band FIR filter is to be designed using a von Hann window. Assume a filter length N = 11 and find its windowed, causal impulse response sequence and the transfer function H(z) of the causal filter.
8.13 (Half-Band FIR Filter Design) Design the following half-band FIR filters, using a Kaiser window.
(a) Lowpass filter: 3-dB frequency 4 kHz, stopband edge 8 kHz, and As = 40 dB.
(b) Highpass filter: 3-dB frequency 6 kHz, stopband edge 3 kHz, and As = 50 dB.
(c) Bandpass filter: passband edges at [2, 3] kHz, stopband edges at [1, 4] kHz, Ap = 1 dB, and As = 35 dB.
(d) Bandstop filter: stopband edges at [2, 3] kHz, passband edges at [1, 4] kHz, Ap = 1 dB, and As = 35 dB.
[Hints and Suggestions: For (a)-(b), pick S = 2(fp + fs) and FC = 0.25. For (b), design a Kaiser lowpass prototype h[n] to get hHP[n] = (−1)ⁿh[n]. For (c)-(d), design a lowpass prototype h[n] with S = 4f0, FC = 0.5(Fp + Fs). Then, hBP[n] = 2 cos(0.5nπ)h[n] and hBS[n] = δ[n] − 2 cos(0.5nπ)h[n]. You will need to compute the filter length N (an odd integer), the Kaiser parameter β, and values of the N-sample Kaiser window for each part.]
8.14 (Frequency-Sampling FIR Filter Design) Consider the frequency-sampling design of a lowpass FIR filter with FC = 0.25.
(a) Sketch the gain of an ideal filter with FC = 0.25. Pick eight samples over the range 0 ≤ F < 1 and set up the sampled frequency response H[k] of the filter assuming a real, causal h[n].
(b) Compute the impulse response h[n] for the filter.
(c) To reduce the overshoot, modify H[3] and recompute h[n].
[Hints and Suggestions: For (a), set |H[k]| = {1, 1, 1, 0, 0, 0, 1, 1} and find H[k] = |H[k]|e^{jφ[k]}, where k = 0, 1, 2, . . . , 7, H[k] = H*[N − k], and φ[k] = −πk(N − 1)/N for a real and causal h[n]. For (b), find h[n] as the IDFT. For (c), pick |H[3]| = 0.5 and H[3] = 0.5e^{jφ[3]} and H[5] = H*[3].]
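The recipe in the hints can be sketched directly in NumPy (a sketch of parts (a)-(b), not a substitute for working the problem): build the eight frequency samples with the linear-phase term, enforce conjugate symmetry on the upper half, and take the IDFT:

```python
import numpy as np

N = 8
Hmag = np.array([1, 1, 1, 0, 0, 0, 1, 1], dtype=float)
H = np.zeros(N, dtype=complex)
for k in range(N // 2 + 1):               # k = 0..4: phase phi[k] = -pi*k*(N-1)/N
    H[k] = Hmag[k] * np.exp(-1j * np.pi * k * (N - 1) / N)
for k in range(N // 2 + 1, N):            # k = 5..7: H[k] = conj(H[N-k]) for real h[n]
    H[k] = np.conj(H[N - k])

h = np.fft.ifft(H)                        # impulse response as the IDFT
print(np.allclose(h.imag, 0))             # h[n] is real, as required
h = h.real
```

The resulting h[n] is real, symmetric about (N − 1)/2, and has unit dc gain (its samples sum to H[0] = 1).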
8.15 (Maximally Flat FIR Filter Design) Design a maximally flat lowpass FIR filter with normalized frequencies Fp = 0.1 and Fs = 0.4, and find its frequency response H(F).
8.16 (FIR Differentiators) Find the impulse response of a digital FIR differentiator with
(a) N = 6, cutoff frequency FC = 0.4, and no window.
(b) N = 6, cutoff frequency FC = 0.4, and a Hamming window.
(c) N = 5, cutoff frequency FC = 0.5, and a Hamming window.
[Hints and Suggestions: For (a)-(b), N = 6 and so n = −2.5, −1.5, −0.5, 0.5, 1.5, 2.5.]
8.17 (FIR Hilbert Transformers) Find the impulse response of an FIR Hilbert transformer with
(a) N = 6, cutoff frequency FC = 0.4, and no window.
(b) N = 6, cutoff frequency FC = 0.4, and a von Hann window.
(c) N = 7, cutoff frequency FC = 0.5, and a von Hann window.
[Hints and Suggestions: For (a)-(b), N = 6 and so n = −2.5, −1.5, −0.5, 0.5, 1.5, 2.5.]
8.18 (IIR Filters and Linear Phase) Even though IIR filters cannot be designed with linear phase, it is possible to implement systems containing IIR filters to eliminate phase distortion. A signal x[n] is folded and passed through a filter H(F), and the filter output is then folded to obtain the signal y1[n]. The signal x[n] is also passed directly through the filter H(F) to get the signal y2[n]. The signals y1[n] and y2[n] are summed to obtain the overall output y[n].
(a) Sketch a block diagram of this system.
(b) How is Y(F) related to X(F)?
(c) Does the system provide freedom from phase distortion?
[Hints and Suggestions: If h[n] ⇔ H(F), then h[−n] ⇔ H(−F). Also, H(F) + H(−F) is purely real.]
8.19 (Filter Specifications) A hi-fi audio signal band-limited to 20 kHz is contaminated by high-frequency noise between 70 kHz and 110 kHz. We wish to design a digital filter that reduces the noise by a factor of 100 while causing no appreciable signal loss. One way is to design the filter at a sampling rate that exceeds the Nyquist rate. However, we can also make do with a smaller sampling rate that avoids aliasing of the noise spectrum into the signal spectrum. Pick such a sampling rate and develop the frequency and attenuation specifications for the digital filter.
[Hints and Suggestions: The image of the aliased noise spectrum does not overlap the signal spectrum as long as it lies between 20 kHz and 70 kHz.]
8.20 (FIR Filter Specifications) Figure P8.20 shows the magnitude and phase characteristics of a causal FIR filter designed at a sampling frequency of 10 kHz.
[Figure P8.20: three panels versus digital frequency F, showing the magnitude in dB, the linear magnitude, and the phase in radians.]
Figure P8.20 Filter characteristics for Problem 8.20
(a) What are the values of the passband ripple δp and stopband ripple δs?
(b) What are the values of the attenuation As and Ap in decibels?
(c) What are the frequencies (in Hz) of the passband edge and stopband edge?
(d) Does this filter show linear phase? What is the group delay?
(e) What is the filter length N?
(f) Could this filter have been designed using the window method? Explain.
(g) Could this filter have been designed using the optimal method? Explain.
[Hints and Suggestions: The dB magnitude gives δs and Fs. The linear magnitude gives δp and Fp. The slope of the phase plot gives the delay as D = −Δφ/(2πΔF) (an integer). For part (e), N = 2D + 1. The size of the stopband and passband ripples provides a clue to parts (f)-(g).]
8.21 (Multistage Interpolation) It is required to design a three-stage interpolator. The interpolating filters are to be designed with identical passband and stopband attenuation and are required to provide an overall attenuation of no more than 3 dB in the passband and at least 50 dB in the stopband. Specify the passband and stopband attenuation of each filter.
COMPUTATION AND DESIGN
8.22 (FIR Filter Design) It is desired to reduce the frequency content of a hi-fi audio signal band-limited to 20 kHz and sampled at 44.1 kHz for purposes of AM transmission. Only frequencies up to 10 kHz are of interest. Frequencies past 15 kHz are to be attenuated by at least 55 dB, and the passband loss is to be less than 10%. Design a digital filter using the Kaiser window that meets these specifications.
8.23 (FIR Filter Design) It is desired to eliminate 60-Hz interference from an ECG signal whose significant frequencies extend to 35 Hz.
(a) What is the minimum sampling frequency we can use to avoid in-band aliasing?
(b) If the 60-Hz interference is to be suppressed by a factor of at least 100, with no appreciable signal loss, what should be the filter specifications?
(c) Design the filter using a Hamming window and plot its frequency response.
(d) Test your filter on the signal x(t) = cos(40πt) + cos(70πt) + cos(120πt). Plot and compare the frequency response of the sampled test signal and the filtered signal to confirm that your design objectives are met.
8.24 (Digital Filter Design) We wish to design a lowpass filter for processing speech signals. The specifications call for a passband of 4 kHz and a stopband of 5 kHz. The passband attenuation is to be less than 1 dB, and the stopband gain is to be less than 0.01. The sampling frequency is 40 kHz.
(a) Design FIR filters, using the window method (with Hamming and Kaiser windows) and using optimal design. Which of these filters has the minimum length?
(b) Design IIR Butterworth and elliptic filters, using the bilinear transformation, to meet the same set of specifications. Which of these filters has the minimum order? Which has the best delay characteristics?
(c) How does the complexity of the IIR filters compare with that of the FIR filters designed with the same specifications? What are the trade-offs in using an IIR filter over an FIR filter?
8.25 (The Effect of Group Delay) The nonlinear phase of IIR filters is responsible for signal distortion. Consider a lowpass filter with a 1-dB passband edge at fp = 1 kHz, a 50-dB stopband edge at fs = 2 kHz, and a sampling frequency of S = 10 kHz.
(a) Design a Butterworth filter HB(z), an elliptic filter HE(z), and an optimal FIR filter HO(z) to meet these specifications. Using the Matlab routine grpdelay (or otherwise), compute and plot the group delay of each filter. Which filter has the best (most nearly constant) group delay in the passband? Which filter would cause the least phase distortion in the passband? What are the group delays NB, NE, and NO (expressed as the number of samples) of the three filters?
(b) Generate the signal x[n] = 3 sin(0.03πn) + sin(0.09πn) + 0.6 sin(0.15πn) over 0 ≤ n ≤ 100. Use the ADSP routine filter to compute the responses yB[n], yE[n], and yO[n] of each filter. Plot the filter outputs yB[n], yE[n], and yO[n] (delayed by NB, NE, and NO, respectively) and the input x[n] on the same plot to compare results. Which filter results in the smallest signal distortion?
(c) Are all the frequency components of the input signal in the filter passband? If so, how can you justify that what you observe as distortion is actually the result of the non-constant group delay and not the filter attenuation in the passband?
8.26 (Raised Cosine Filters) The impulse response of a raised cosine filter has the form

hR[n] = h[n] cos(2πnRFC) / [1 − (4nRFC)²]

where the roll-off factor R satisfies 0 < R < 1 and h[n] = 2FC sinc(2nFC) is the impulse response of an ideal lowpass filter.
(a) Let FC = 0.2. Generate the impulse response of an ideal lowpass filter with length 21 and the impulse response of the corresponding raised cosine filter with R = 0.2, 0.5, 0.9. Plot the magnitude spectra of each filter over 0 ≤ F ≤ 1 on the same plot, using linear and decibel scales. How does the response in the passband and stopband of the raised cosine filter differ from that of the ideal filter? How does the transition width and peak sidelobe attenuation of the raised cosine filter compare with that of the ideal filter for different values of R? What is the effect of increasing R on the frequency response?
(b) Compare the frequency response of hR[n] with that of an ideal lowpass filter with FC = 0.25. Is the raised cosine filter related to this ideal filter?
8.27 (Interpolation) The signal x[n] = cos(2πF0n) is to be interpolated by 5, using up-sampling followed by lowpass filtering. Let F0 = 0.4.
(a) Generate and plot 20 samples of x[n] and up-sample by 5.
(b) What must be the cutoff frequency FC and gain A of a lowpass filter that follows the up-sampler to produce the interpolated output?
(c) Design an FIR filter (using the window method or optimal design) to meet these specifications.
(d) Filter the up-sampled signal through this filter and plot the result.
(e) Is the filter output an interpolated version of the input signal? Do the peak amplitude of the interpolated signal and original signal match? Should they? Explain.
8.28 (Multistage Interpolation) To relax the design requirements for the analog reconstruction filter, many compact disc systems employ oversampling during the DSP stages. Assume that audio signals are band-limited to 20 kHz and sampled at 44.1 kHz. Assume a maximum passband attenuation of 1 dB and a minimum stopband attenuation of 50 dB.
(a) Design a single-stage optimal interpolating filter that increases the sampling rate to 176.4 kHz.
(b) Design multistage optimal interpolating filters that increase the sampling rate to 176.4 kHz.
(c) Which of the two designs would you recommend?
(d) For each design, explain how you might incorporate compensating filters during the DSP stage to offset the effects of the sinc distortion caused by the zero-order-hold reconstruction device.
8.29 (Multistage Interpolation) The sampling rate of a speech signal band-limited to 3.4 kHz and sampled at 8 kHz is to be increased to 48 kHz. Design three different schemes that will achieve this rate increase and compare their performance. Use optimal FIR filter design where required and assume a maximum passband attenuation of 1 dB and a minimum stopband attenuation of 45 dB.
8.30 (Multistage Decimation) The sampling rate of a speech signal band-limited to 3.4 kHz and sampled at 48 kHz is to be decreased to 8 kHz. Design three different schemes that will achieve this rate decrease and compare their performance. Use optimal FIR filter design where required and assume a maximum passband attenuation of 1 dB and a minimum stopband attenuation of 45 dB. How do these filters compare with the filters designed for multistage interpolation in Problem 8.29?
8.31 (Filtering Concepts) This problem deals with time-frequency plots of a combination of sinusoids and their filtered versions.
(a) Generate 600 samples of the signal x[n] = cos(0.1πn) + cos(0.4πn) + cos(0.7πn) comprising the sum of three pure cosines at F = 0.05, 0.2, 0.35. Use the Matlab command fft to plot its DFT magnitude. Use the routine timefreq (from the author's website) to display its time-frequency plot. What do the plots reveal? Now design an optimal lowpass filter with a 1-dB passband edge at F = 0.1 and a 50-dB stopband edge at F = 0.15 and filter x[n] through this filter to obtain the filtered signal xf[n]. Plot its DFT magnitude and display its time-frequency plot. What do the plots reveal? Does the filter perform its function? Plot xf[n] over a length that enables you to identify its period. Does the period of xf[n] match your expectations?
(b) Generate 200 samples each of the three signals y1[n] = cos(0.1πn), y2[n] = cos(0.4πn), and y3[n] = cos(0.7πn). Concatenate them to form the 600-sample signal y[n] = {y1[n], y2[n], y3[n]}. Plot its DFT magnitude and display its time-frequency plot. What do the plots reveal? In what way does the DFT magnitude plot differ from part (a)? In what way does the time-frequency plot differ from part (a)? Use the optimal lowpass filter designed in part (a) to filter y[n], obtain the filtered signal yf[n], plot its DFT magnitude, and display its time-frequency plot. What do the plots reveal? In what way does the DFT magnitude plot differ from part (a)? In what way does the time-frequency plot differ from part (a)? Does the filter perform its function? Plot yf[n] over a length that enables you to identify its period. Does the period of yf[n] match your expectations?
8.32 (Decoding a Mystery Message) During transmission, a message signal gets contaminated by a low-frequency signal and high-frequency noise. The message can be decoded only by displaying it in the time domain. The contaminated signal x[n] is provided on the author's website as mystery1.mat. Load this signal into Matlab (use the command load mystery1). In an effort to decode the message, try the following methods and determine what the decoded message says.
(a) Display the contaminated signal. Can you read the message? Display the DFT of the signal to identify the range of the message spectrum.
(b) Design an optimal FIR bandpass filter capable of extracting the message spectrum. Filter the contaminated signal and display the filtered signal to decode the message. Use both the filter (linear-phase filtering) and filtfilt (zero-phase filtering) commands.
(c) As an alternative method, first zero out the DFT component corresponding to the low-frequency contamination and obtain its IDFT y[n]. Next, design an optimal lowpass FIR filter to reject the high-frequency noise. Filter the signal y[n] and display the filtered signal to decode the message. Use both the filter and filtfilt commands.
c Ashok Ambardar, September 1, 2003
360 Chapter 8 Design of FIR Filters
(d) Which of the two methods allows better visual detection of the message? Which of the two filtering routines (in each method) allows better visual detection of the message?
8.33 (Filtering of a Chirp Signal) This problem deals with time-frequency plots of a chirp signal and its filtered versions.
(a) Generate 500 samples of a chirp signal x[n] whose frequency varies from F = 0 to F = 0.12. Then, compute its DFT and plot the DFT magnitude. Use the routine timefreq (from the author's website) to display its time-frequency plot. What do the plots reveal? Plot x[n] and confirm that its frequency is increasing with time.
(b) Design an optimal lowpass filter with a 1-dB passband edge at F = 0.04 and a 40-dB stopband edge at F = 0.1 and use the Matlab command filtfilt to obtain the zero-phase filtered signal y1[n]. Plot its DFT magnitude and display its time-frequency plot. What do the plots reveal? Plot y1[n] and x[n] on the same plot and compare. Does the filter perform its function?
(c) Design an optimal highpass filter with a 1-dB passband edge at F = 0.06 and a 40-dB stopband edge at F = 0.01 and use the Matlab command filtfilt to obtain the zero-phase filtered signal y2[n]. Plot its DFT magnitude and display its time-frequency plot. What do the plots reveal? Plot y2[n] and x[n] on the same plot and compare. Does the filter perform its function?
8.34 (Filtering of a Chirp Signal) This problem deals with time-frequency plots of a chirp signal plus a sinusoid and their filtered versions.
(a) Generate 500 samples of a signal x[n] that consists of the sum of cos(0.6πn) and a chirp whose frequency varies from F = 0 to F = 0.05. Then, compute its DFT and plot the DFT magnitude. Use the routine psdwelch (from the author's website) to display its power spectral density plot. Use the routine timefreq (from the author's website) to display its time-frequency plot. What do the plots reveal?
(b) Design an optimal lowpass filter with a 1-dB passband edge at F = 0.08 and a 40-dB stopband edge at F = 0.25 and use the Matlab command filtfilt to obtain the zero-phase filtered signal y1[n]. Plot its DFT magnitude and display its PSD and time-frequency plot. What do the plots reveal? Plot y1[n]. Does it look like a signal whose frequency is increasing with time? Do the results confirm that the filter performs its function?
(c) Design an optimal highpass filter with a 1-dB passband edge at F = 0.25 and a 40-dB stopband edge at F = 0.08 and use the Matlab command filtfilt to obtain the zero-phase filtered signal y2[n]. Plot its DFT magnitude and display its PSD and time-frequency plot. What do the plots reveal? Plot y2[n]. Does it look like a sinusoid? Can you identify its period from the plot? Do the results confirm that the filter performs its function?
8.35 (A Multi-Band Filter) A requirement exists for a multi-band digital FIR filter operating at 140 Hz with the following specifications:
Passband 1: from dc to 5 Hz
Maximum passband attenuation = 2 dB (from peak)
Minimum attenuation at 10 Hz = 40 dB (from peak)
Passband 2: from 30 Hz to 40 Hz
Maximum passband attenuation = 2 dB (from peak)
Minimum attenuation at 20 Hz and 50 Hz = 40 dB (from peak)
(a) Design the first stage as an odd-length optimal filter, using the routine firpm (from the author's website).
(b) Design the second stage as an odd-length half-band filter, using the routine firhb (from the author's website) with a Kaiser window.
(c) Combine the two stages to obtain the impulse response of the overall filter.
(d) Plot the overall response of the filter. Verify that the attenuation specifications are met at each design frequency.
Chapter 9
DESIGN OF IIR FILTERS
9.0 Scope and Objectives
This chapter begins with an introduction to IIR filters, and the various mappings that are used to convert analog filters to digital filters. It then describes the design of IIR digital filters based on an analog lowpass prototype that meets the given specifications, followed by an appropriate mapping and spectral transformation. The bilinear transformation and its applications are discussed in detail.
9.1 Introduction
Typical magnitude and phase specifications for IIR filters are identical to those for FIR filters. Digital filter design revolves around two distinctly different approaches. If linear phase is not critical, IIR filters yield a much smaller filter order for a given application. The design starts with an analog lowpass prototype based on the given specifications. It is then converted to the required digital filter, using an appropriate mapping and an appropriate spectral transformation. A causal, stable IIR filter can never display linear phase, for several reasons. The transfer function of a linear-phase filter must correspond to a symmetric sequence and ensure that H(z) = H(1/z). For every pole inside the unit circle, there is a reciprocal pole outside the unit circle. This makes the system unstable (if causal) or noncausal (if stable). To make the infinitely long symmetric impulse response sequence of an IIR filter causal, we need an infinite delay, which is not practical, and symmetric truncation (to preserve linear phase) simply transforms the IIR filter into an FIR filter.
9.2 IIR Filter Design
There are two related approaches to the design of IIR digital filters. A popular method is based on using well-established methods of analog filter design, followed by a mapping that converts the analog filter to the digital filter. An alternative method is based on designing the digital filter directly, using digital equivalents of analog (or other) approximations. Any transformation of an analog filter to a digital filter should ideally preserve both the response and the stability of the analog filter. In practice, this is seldom possible because of the effects of sampling.
9.2.1 Equivalence of Analog and Digital Systems
The impulse response h(t) of an analog system may be approximated by

h(t) ≈ h_a(t) = t_s Σ_{n=−∞}^{∞} h(t)δ(t − nt_s) = t_s Σ_{n=−∞}^{∞} h(nt_s)δ(t − nt_s)    (9.1)
Here, t_s is the sampling interval corresponding to the sampling rate S = 1/t_s. The discrete-time impulse response h_s[n] describes the samples h(nt_s) of h(t) and may be written as

h_s[n] = h(nt_s) = Σ_{k=−∞}^{∞} h_s[k]δ[n − k]    (9.2)

The Laplace transform H_a(s) of h_a(t) and the z-transform H_d(z) of h_s[n] are

H(s) ≈ H_a(s) = t_s Σ_{k=−∞}^{∞} h(kt_s)e^{−skt_s}        H_d(z) = Σ_{k=−∞}^{∞} h_s[k]z^{−k}    (9.3)
Comparison suggests the equivalence H_a(s) = t_s H_d(z), provided z^{−k} = e^{−skt_s}, or

z → e^{st_s}        s → ln(z)/t_s    (9.4)
These relations describe a mapping between the variables z and s. Since s = σ + jω, where ω is the continuous frequency, we can express the complex variable z as

z = e^{(σ+jω)t_s} = e^{σt_s} e^{jωt_s} = e^{σt_s} e^{jΩ}    (9.5)

Here, Ω = ωt_s = 2πf/S = 2πF is the digital frequency in radians/sample.
The sampled signal h_s[n] has a periodic spectrum given by its DTFT:

H_p(f) = S Σ_{k=−∞}^{∞} H(f − kS)    (9.6)
If the analog signal h(t) is band-limited to B and sampled above the Nyquist rate (S > 2B), the principal period (−0.5 ≤ F ≤ 0.5) of H_p(f) equals SH(f), a scaled version of the true spectrum H(f). We may thus relate the analog and digital systems by

H(f) = t_s H_p(f)    or    H_a(s)|_{s=j2πf} ≈ t_s H_d(z)|_{z=e^{j2πf/S}},    |f| < 0.5S    (9.7)
If S < 2B, we have aliasing, and this relationship no longer holds.
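The mapping z = e^{st_s} described above is easy to probe numerically. The sketch below (my own illustration, not from the text, assuming a hypothetical sampling rate S = 100 Hz) confirms three facts used throughout this section: the jω-axis lands on the unit circle, left-half-plane points land inside it, and points separated by ω_s = 2πS alias to the same z.

```python
import cmath

ts = 0.01  # sampling interval; assumed S = 1/ts = 100 Hz

def s_to_z(s):
    """Map an s-plane point to the z-plane via z = exp(s*ts)."""
    return cmath.exp(s * ts)

# The origin s = 0 maps to z = 1.
assert abs(s_to_z(0) - 1) < 1e-12

# A point on the j*omega axis (sigma = 0) lands on the unit circle.
assert abs(abs(s_to_z(1j * 2 * cmath.pi * 10)) - 1) < 1e-12

# A left-half-plane point (sigma < 0) lands inside the unit circle.
assert abs(s_to_z(-50 + 100j)) < 1

# Aliasing: s and s + j*omega_s (omega_s = 2*pi*S) map to the SAME z.
ws = 2 * cmath.pi / ts
assert abs(s_to_z(2j) - s_to_z(2j + 1j * ws)) < 1e-9
```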
9.2.2 The Effects of Aliasing
The relations z → e^{st_s} and s → ln(z)/t_s do not describe a one-to-one mapping between the s-plane and the z-plane. Since e^{jΩ} is periodic with period 2π, all frequencies Ω_0 ± 2kπ (corresponding to ω_0 ± kω_s, where ω_s = 2πS) are mapped to the same point in the z-plane. A one-to-one mapping is thus possible only if Ω lies in the principal range −π ≤ Ω ≤ π (or 0 ≤ Ω ≤ 2π), corresponding to the analog frequency range −0.5ω_s ≤ ω ≤ 0.5ω_s (or 0 ≤ ω ≤ ω_s). Figure 9.1 illustrates how the mapping z → e^{st_s} translates points in the s-domain to corresponding points in the z-domain.
The origin: The origin s = 0 is mapped to z = 1, as are all other points corresponding to s = 0 ± jkω_s, for which z = e^{±jkω_s t_s} = e^{±jk2π} = 1.
The jω-axis: For points on the jω-axis, σ = 0, z = e^{jΩ}, and |z| = 1. As ω increases from ω_0 to ω_0 + ω_s, the digital frequency Ω increases from Ω_0 to Ω_0 + 2π, and segments of the jω-axis of length ω_s = 2πS thus map to the unit circle, over and over.
Figure 9.1 Characteristics of the mapping z → exp(st_s): strips of width ω_s in the left half of the s-plane are mapped to the interior of the unit circle, segments of the jω-axis of length ω_s are mapped to the unit circle, and the origin is mapped to z = 1.
The left half-plane: In the left half-plane, σ < 0. Thus, z = e^{σt_s} e^{jΩ}, or |z| = e^{σt_s} < 1. This describes the interior of the unit circle in the z-plane. In other words, strips of width ω_s in the left half of the s-plane are mapped to the interior of the unit circle, over and over.
The right half-plane: In the right half-plane, σ > 0, and we see that |z| = e^{σt_s} > 1. Thus, strips of width ω_s in the right half of the s-plane are repeatedly mapped to the exterior of the unit circle.
REVIEW PANEL 9.1
The Mapping z → e^{st_s} Is Not a Unique One-to-One Mapping
Strips of width ω_s = 2πS (along the jω-axis) in the left half of the s-plane map to the interior of the unit circle, over and over.
9.2.3 Practical Mappings
The transcendental nature of the transformation s → ln(z)/t_s does not permit direct conversion of a rational transfer function H(s) to a rational transfer function H(z). Nor does it permit a one-to-one correspondence for frequencies higher than 0.5S Hz. A unique representation in the z-plane is possible only for band-limited signals sampled above the Nyquist rate. Practical mappings are based on one of the following methods:
1. Matching the time response (the response-invariant transformation)
2. Matching terms in a factored H(s) (the matched z-transform)
3. Conversion of system differential equations to difference equations
4. Rational approximations for z → e^{st_s} or s → ln(z)/t_s
In general, each method results in different mapping rules, leads to a different form of the digital filter H(z) for a given analog filter H(s), and not all methods preserve stability. When comparing the frequency response, it is helpful to remember that the analog frequency range 0 ≤ f ≤ 0.5S for the frequency response of H(s) corresponds to the digital frequency range 0 ≤ F ≤ 0.5 for the frequency response of H(z). The time-domain response can be compared only at the sampling instants t = nt_s.
9.3 Response Matching
The idea behind response matching is to match the time-domain analog and digital response for a given input, typically the impulse response or step response. Given the analog filter H(s) and the input x(t) whose invariance we seek, we first find the analog response y(t) as the inverse transform of H(s)X(s). We then sample x(t) and y(t) at intervals t_s to obtain their sampled versions x[n] and y[n]. Finally, we compute H(z) = Y(z)/X(z) to obtain the digital filter. The process is illustrated in Figure 9.2.
Figure 9.2 The concept of response invariance: the input x(t) and the response y(t) of H(s), where Y(s) = H(s)X(s), are sampled at t = nt_s to give x[n] and y[n], and the digital filter is H(z) = Y(z)/X(z).
Response-invariant matching yields a transfer function that is a good match only for the time-domain response to the input for which it was designed. It may not provide a good match for the response to other inputs. The quality of the approximation depends on the choice of the sampling interval t_s, and a unique correspondence is possible only if the sampling rate S = 1/t_s is above the Nyquist rate (to avoid aliasing). This mapping is thus useful only for analog systems such as lowpass and bandpass filters, whose frequency response is essentially band-limited. This also implies that the analog transfer function H(s) must be strictly proper (with numerator degree less than the denominator degree).
REVIEW PANEL 9.2
Response-Invariant Mappings Match the Time Response of the Analog and Digital Filter
The response y(t) of H(s) matches the response y[n] of H(z) at the sampling instants t = nt_s.
EXAMPLE 9.1 (Response-Invariant Mappings)
(a) Convert H(s) = 1/(s + 1) to a digital filter H(z), using impulse invariance, with t_s = 1 s.
For impulse invariance, we select the input as x(t) = δ(t). We then find

X(s) = 1        Y(s) = H(s)X(s) = 1/(s + 1)        y(t) = e^{−t}u(t)

The sampled versions of the input and output are

x[n] = δ[n]        y[n] = e^{−nt_s}u[n]

Taking the ratio of their z-transforms and using t_s = 1, we obtain

H(z) = Y(z)/X(z) = Y(z) = z/(z − e^{−t_s}) = z/(z − e^{−1}) = z/(z − 0.3679)
The frequency response of H(s) and H(z) is compared in Figure E9.1(a).
Figure E9.1 Frequency response of H(s) and H(z) for Example 9.1(a), plotted against the analog frequency f (Hz): (a) magnitude of H(s) = 1/(s + 1) and H(z) for t_s = 1 s; (b) magnitude after gain matching at dc.
The dc gain of H(s) (at s = 0) is unity, but that of H(z) (at z = 1) is 1.582. Even if we normalize the dc gain of H(z) to unity, as in Figure E9.1(b), we see that the frequency response of the analog and digital filters is different. However, the analog impulse response h(t) = e^{−t}u(t) matches h[n] = e^{−n}u[n] at the sampling instants t = nt_s = n. A perfect match for the time-domain response for which the filter was designed lies at the heart of response-invariant mappings. The time-domain response to any other input will be different. For example, the step response of the analog filter is

S(s) = 1/[s(s + 1)] = 1/s − 1/(s + 1)        s(t) = (1 − e^{−t})u(t)
To find the step response S(z) of the digital filter whose input is u[n] ⇔ z/(z − 1), we use partial fractions on S(z)/z to obtain

S(z) = z²/[(z − 1)(z − e^{−1})] = [e/(e − 1)]·z/(z − 1) + [1/(1 − e)]·z/(z − e^{−1})        s[n] = [e/(e − 1)]u[n] + [1/(1 − e)]e^{−n}u[n]

The sampled version of s(t) is quite different from s[n]. Figure E9.1A reveals that, at the sampling instants t = nt_s, the impulse response of the two filters shows a perfect match, but the step response does not, and neither will the time-domain response to any other input.
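The impulse/step comparison above is easy to reproduce. The following sketch (my own check, assuming t_s = 1 s as in the example) implements H(z) = z/(z − e^{−1}) as the difference equation y[n] = e^{−1}y[n−1] + x[n] and verifies that the impulse response matches e^{−nt_s} at the samples while the step response does not match (1 − e^{−t}) there.

```python
import numpy as np

ts = 1.0
a = np.exp(-1.0)  # pole of H(z) = z/(z - e^{-1}), so y[n] = a*y[n-1] + x[n]
n = np.arange(8)

def filt(x):
    """Run the first-order recursion y[n] = a*y[n-1] + x[n]."""
    y, acc = np.zeros(len(x)), 0.0
    for k, xk in enumerate(x):
        acc = a * acc + xk
        y[k] = acc
    return y

# Impulse response: matches h(t) = e^{-t} exactly at t = n*ts ...
h_digital = filt(np.r_[1.0, np.zeros(7)])
assert np.allclose(h_digital, np.exp(-n * ts))

# ... but the step response does NOT match s(t) = 1 - e^{-t} at the samples.
s_digital = filt(np.ones(8))
assert not np.allclose(s_digital, 1 - np.exp(-n * ts))
```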
Figure E9.1A Impulse response and step response of H(s) and H(z) for Example 9.1(a), plotted against the DT index n and time t = nt_s: (a) impulse response of the analog and digital filter; (b) step response of the analog and digital filter.
(b) Convert H(s) = 4/[(s + 1)(s + 2)] to H(z), using various response-invariant transformations.
1. Impulse invariance: We choose x(t) = δ(t). Then, X(s) = 1, and

Y(s) = H(s)X(s) = 4/[(s + 1)(s + 2)] = 4/(s + 1) − 4/(s + 2)        y(t) = 4e^{−t}u(t) − 4e^{−2t}u(t)
The sampled input and output are then

x[n] = δ[n]        y[n] = 4e^{−nt_s}u[n] − 4e^{−2nt_s}u[n]

The ratio of their z-transforms yields the transfer function of the digital filter as

H_I(z) = Y(z)/X(z) = Y(z) = 4z/(z − e^{−t_s}) − 4z/(z − e^{−2t_s})
2. Step invariance: We choose x(t) = u(t). Then, X(s) = 1/s, and

Y(s) = H(s)X(s) = 4/[s(s + 1)(s + 2)] = 2/s − 4/(s + 1) + 2/(s + 2)        y(t) = (2 − 4e^{−t} + 2e^{−2t})u(t)

The sampled input and output are then

x[n] = u[n]        y[n] = (2 − 4e^{−nt_s} + 2e^{−2nt_s})u[n]

Their z-transforms give

X(z) = z/(z − 1)        Y(z) = 2z/(z − 1) − 4z/(z − e^{−t_s}) + 2z/(z − e^{−2t_s})

The ratio of their z-transforms yields the transfer function of the digital filter as

H_S(z) = Y(z)/X(z) = 2 − 4(z − 1)/(z − e^{−t_s}) + 2(z − 1)/(z − e^{−2t_s})
3. Ramp invariance: We choose x(t) = r(t) = tu(t). Then, X(s) = 1/s², and

Y(s) = 4/[s²(s + 1)(s + 2)] = −3/s + 2/s² + 4/(s + 1) − 1/(s + 2)        y(t) = (−3 + 2t + 4e^{−t} − e^{−2t})u(t)

The sampled input and output are then

x[n] = nt_s u[n]        y[n] = (−3 + 2nt_s + 4e^{−nt_s} − e^{−2nt_s})u[n]

Their z-transforms give

X(z) = zt_s/(z − 1)²        Y(z) = −3z/(z − 1) + 2zt_s/(z − 1)² + 4z/(z − e^{−t_s}) − z/(z − e^{−2t_s})

The ratio of their z-transforms yields the transfer function of the digital filter as

H_R(z) = −3(z − 1)/t_s + 2 + 4(z − 1)²/[t_s(z − e^{−t_s})] − (z − 1)²/[t_s(z − e^{−2t_s})]
9.3.1 The Impulse-Invariant Transformation
The impulse-invariant mapping yields some useful design relations. We start with a first-order analog filter described by H(s) = 1/(s + p). The impulse response h(t), and its sampled version h[n], are

h(t) = e^{−pt}u(t)        h[n] = e^{−pnt_s}u[n] = (e^{−pt_s})^n u[n]    (9.8)
The z-transform of h[n] (which has the form α^n u[n], where α = e^{−pt_s}) yields the transfer function H(z) of the digital filter as

H(z) = z/(z − e^{−pt_s}),        |z| > e^{−pt_s}    (9.9)

This relation suggests that we can go directly from H(s) to H(z), using the mapping

1/(s + p) ⇒ z/(z − e^{−pt_s})    (9.10)
We can now extend this result to filters of higher order. If H(s) is in partial fraction form, we can obtain simple expressions for impulse-invariant mapping. If H(s) has no repeated roots, it can be described as a sum of first-order terms, using partial fraction expansion, and each term can be converted by the impulse-invariant mapping to give

H(s) = Σ_{k=1}^{N} A_k/(s + p_k)        H(z) = Σ_{k=1}^{N} A_k z/(z − e^{−p_k t_s}),        ROC: |z| > e^{−|p|_max t_s}    (9.11)

Here, the region of convergence of H(z) is in terms of the largest pole magnitude |p|_max of H(s).
REVIEW PANEL 9.3
Impulse-Invariant Design Requires H(s) in Partial Fraction Form
1/(s + p_k) ⇒ z/(z − e^{−p_k t_s})    (for each term in the partial fraction expansion)
If the denominator of H(s) also contains repeated roots, we start with a typical kth term H_k(s) with a root of multiplicity M, and find

H_k(s) = A_k/(s + p_k)^M        h_k(t) = [A_k/(M − 1)!] t^{M−1} e^{−p_k t} u(t)    (9.12)
The sampled version h_k[n], and its z-transform, can then be found by the times-n property of the z-transform. Similarly, quadratic terms corresponding to complex conjugate poles in H(s) may also be simplified to obtain a real form. These results are listed in Table 9.1. Note that impulse-invariant design requires H(s) in partial fraction form and yields a digital filter H(z) in the same form, which must be reassembled if we need a composite rational-function form. The left half-plane poles of H(s) (corresponding to p_k > 0) map into poles of H(z) that lie inside the unit circle (corresponding to |z| = e^{−p_k t_s} < 1). Thus, a stable analog filter H(s) is transformed into a stable digital filter H(z).
REVIEW PANEL 9.4
Impulse-Invariant Mappings Are Prone to Aliasing but Preserve Stability
The analog impulse response h(t) matches h[n] at the sampling instants.
Impulse-invariant mappings are not suitable for highpass or bandstop filter design.
Table 9.1 Impulse-Invariant Transformations (with α = e^{−pt_s})

Term                Form of H(s)                                     H(z)
Distinct            A/(s + p)                                        Az/(z − α)
Complex conjugate   Ae^{jθ}/(s + p + jq) + Ae^{−jθ}/(s + p − jq)     [2Az² cos(θ) − 2Aαz cos(θ + qt_s)]/[z² − 2αz cos(qt_s) + α²]
Repeated twice      A/(s + p)²                                       Aαt_s z/(z − α)²
Repeated thrice     A/(s + p)³                                       0.5At_s² αz(z + α)/(z − α)³
EXAMPLE 9.2 (Impulse-Invariant Mappings)
(a) Convert H(s) = (4s + 7)/(s² + 5s + 4) to H(z), using impulse invariance at S = 2 Hz.
First, by partial fractions, we obtain

H(s) = (4s + 7)/(s² + 5s + 4) = (4s + 7)/[(s + 1)(s + 4)] = 3/(s + 4) + 1/(s + 1)

The impulse-invariant transformation, with t_s = 1/S = 0.5 s, gives

H(z) = 3z/(z − e^{−4t_s}) + z/(z − e^{−t_s}) = 3z/(z − e^{−2}) + z/(z − e^{−0.5}) = (4z² − 1.9549z)/(z² − 0.7419z + 0.0821)
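The partial-fraction step in part (a) can be automated. A sketch using scipy.signal.residue (an assumed availability; the text itself refers to MATLAB-style routines) that reproduces the coefficients of H(z) above:

```python
import numpy as np
from scipy.signal import residue

ts = 0.5  # S = 2 Hz

# Partial fractions of H(s) = (4s + 7)/(s^2 + 5s + 4): residues r_k, poles p_k
r, p, _ = residue([4, 7], [1, 5, 4])

# Impulse-invariant mapping: A_k/(s - p_k) -> A_k*z/(z - e^{p_k*ts}).
# Combine the two first-order digital sections over a common denominator.
a1, a2 = np.exp(p * ts)
num = np.polyadd(r[0] * np.array([1, -a2]), r[1] * np.array([1, -a1]))
num = np.r_[num, 0.0]          # the common factor z in each numerator
den = np.poly([a1, a2])

assert np.allclose(num.real, [4, -1.9549, 0], atol=1e-4)
assert np.allclose(den.real, [1, -0.7419, 0.0821], atol=1e-4)
```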
(b) Convert H(s) = 4/[(s + 1)(s² + 4s + 5)] to H(z), using impulse invariance, with t_s = 0.5 s.
The partial fraction form for H(s) is

H(s) = 2/(s + 1) + (−1 − j)/(s + 2 + j) + (−1 + j)/(s + 2 − j)

For the second term, we write K = (−1 − j) = √2 e^{−j3π/4} = Ae^{jθ}. Thus, A = √2 and θ = −3π/4. We also have p = 2, q = 1, and α = e^{−pt_s} = 1/e. With these values, Table 9.1 gives

H(z) = 2z/(z − 1/√e) + [2√2 z² cos(−3π/4) − (2√2/e)z cos(0.5 − 3π/4)]/[z² − (2/e)z cos(0.5) + e^{−2}]

This result simplifies to

H(z) = (0.2146z² + 0.0930z)/(z³ − 1.2522z² + 0.5270z − 0.0821)

Comment: The first step involved partial fractions. Note that we cannot compute H(z) as the cascade of the impulse-invariant digital filters for H_1(s) = 4/(s + 1) and H_2(s) = 1/(s² + 4s + 5).
9.3.2 Modifications to Impulse-Invariant Design
Gain Matching
The mapping H(s) = 1/(s + p_k) ⇒ H(z) = z/(z − e^{−p_k t_s}) reveals that the dc gain of the analog term H(s) (with s = 0) equals 1/p_k, but the dc gain of the digital term H(z) (with z = 1) is 1/(1 − e^{−p_k t_s}). If the sampling interval t_s is small enough that p_k t_s ≪ 1, we can use the approximation e^{−p_k t_s} ≈ 1 − p_k t_s to give the dc gain of H(z) as 1/(p_k t_s). This suggests that the transfer function H(z) must be multiplied by t_s in order for its dc gain to closely match the dc gain of the analog filter H(s). This scaling is not needed if we normalize (divide) the analog frequency specifications by the sampling frequency S before designing the digital filter, because normalization is equivalent to choosing t_s = 1 during the mapping. In practice, regardless of normalization, it is customary to scale H(z) to KH(z), where the constant K is chosen to match the gain of H(s) and KH(z) at a convenient frequency (typically, dc). Since the scale factor also changes the impulse response of the digital filter from h[n] to Kh[n], the design is then no longer strictly impulse invariant.
REVIEW PANEL 9.5
Gain Matching at dc in Impulse-Invariant Design
Design H(z) from H(s), compute K = H(s)|_{s=0} / H(z)|_{z=1}, and multiply H(z) by K.
Accounting for Sampling Errors
The impulse-invariant method suffers from errors in sampling h(t) if it shows a jump at t = 0. If h(0) is not zero, the sampled value at the origin should be chosen as 0.5h(0). As a result, the impulse response of the digital filter must be modified to h_M[n] = h[n] − 0.5h(0)δ[n]. This leads to the modified transfer function H_M(z) = H(z) − 0.5h(0). The simplest way to find h(0) is to use the initial value theorem h(0) = lim_{s→∞} sH(s). Since h(0) is nonzero only if the degree N of the denominator of H(s) exceeds the degree M of its numerator by exactly 1, we need this modification only if N − M = 1.
EXAMPLE 9.3 (Modified Impulse-Invariant Design)
(a) (Impulse-Invariant Design)
Convert the analog filter H(s) = 1/(s + 1), with a cutoff frequency of 1 rad/s, to a digital filter with a cutoff frequency of f_C = 10 Hz and S = 60 Hz, using impulse invariance and gain matching.
There are actually two ways to do this:
1. We normalize by the sampling frequency S, which allows us to use t_s = 1 in all subsequent computations. Normalization gives Ω_C = 2πf_C/S = π/3. We denormalize H(s) to Ω_C to get

H_1(s) = H(s/(π/3)) = (π/3)/(s + π/3)

Finally, with t_s = 1, impulse invariance gives

H_1(z) = (π/3)z/(z − e^{−π/3}) = 1.0472z/(z − 0.3509)
2. We first denormalize H(s) to f_C to get

H_2(s) = H(s/(2πf_C)) = 20π/(s + 20π)

We use impulse invariance and multiply the resulting digital filter transfer function by t_s to get

H_2(z) = t_s · 20πz/(z − e^{−20πt_s}) = (π/3)z/(z − e^{−π/3}) = 1.0472z/(z − 0.3509)

Comment: Both approaches yield identical results. The first method automatically accounts for the gain matching. For a perfect gain match at dc, we should multiply H(z) by the gain factor

K = 1/H(1) = (1 − 0.3509)/1.0472 = 0.6198
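A quick numeric check (my own, not from the text) confirms that the two routes in part (a) coincide and reproduces the gain factor K:

```python
import math

fc, S = 10.0, 60.0
ts = 1.0 / S

# Route 1: normalize frequencies by S, then design with ts = 1.
wc = 2 * math.pi * fc / S                 # digital cutoff = pi/3
gain1, pole1 = wc, math.exp(-wc)

# Route 2: denormalize H(s) to 2*pi*fc, map with the true ts, scale by ts.
wa = 2 * math.pi * fc
gain2, pole2 = ts * wa, math.exp(-wa * ts)

assert math.isclose(gain1, gain2) and math.isclose(pole1, pole2)
assert abs(gain1 - 1.0472) < 1e-4 and abs(pole1 - 0.3509) < 1e-4

# Gain factor for a perfect dc match: K = 1/H(1) = (1 - pole)/gain
K = (1 - pole1) / gain1
assert abs(K - 0.6198) < 1e-4
```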
(b) (Modified Impulse-Invariant Design)
Convert H(s) = 1/(s + 1) to a digital filter, with t_s = 1 s, using modified impulse invariance to account for sampling errors and gain matching at dc.
Using impulse invariance, the transfer function of the digital filter is

H(z) = z/(z − e^{−1}) = z/(z − 0.3679)

Since h(t) = e^{−t}u(t), h(t) has a jump of 1 unit at t = 0, and thus h(0) = 1. The modified impulse-invariant mapping thus gives

H_M(z) = H(z) − 0.5h(0) = z/(z − e^{−1}) − 0.5 = 0.5(z + e^{−1})/(z − e^{−1}) = 0.5(z + 0.3679)/(z − 0.3679)
The dc gain of H(s) is unity. We compute the dc gain of H(z) and H_M(z) (with z = 1) as

H(z)|_{z=1} = 1/(1 − e^{−1}) = 1.582        H_M(z)|_{z=1} = 0.5(1 + e^{−1})/(1 − e^{−1}) = 1.082
For unit dc gain, the transfer functions of the original and modified digital filter become

H_1(z) = z/[1.582(z − e^{−1})] = 0.6321z/(z − 0.3679)        H_M1(z) = 0.5(z + e^{−1})/[1.082(z − e^{−1})] = 0.4621(z + 0.3679)/(z − 0.3679)
Figure E9.3B compares the response of H(s), H(z), H_1(z), H_M(z), and H_M1(z). It clearly reveals the improvement due to each modification.
Figure E9.3B Response of the various filters for Example 9.3(b): magnitude versus digital frequency F for the impulse-invariant designs based on H(s) = 1/(s + 1) (dashed), showing H(z), H_1(z), H_M(z), and H_M1(z).
(c) (Modified Impulse-Invariant Design)
Convert the analog filter H(s) = (4s + 7)/(s² + 5s + 4) to a digital filter, with t_s = 0.5 s, using modified impulse invariance to account for sampling errors.
Since the numerator degree is M = 1 and the denominator degree is N = 2, we have N − M = 1, and a modification is needed. The initial value theorem gives

h(0) = lim_{s→∞} sH(s) = lim_{s→∞} (4s² + 7s)/(s² + 5s + 4) = lim_{s→∞} (4 + 7/s)/(1 + 5/s + 4/s²) = 4

The transfer function of the digital filter using impulse-invariant mapping was found in Example 9.2(a) as

H(z) = 3z/(z − e^{−2}) + z/(z − e^{−0.5})

The modified transfer function is thus

H_M(z) = H(z) − 0.5h(0) = 3z/(z − e^{−2}) + z/(z − e^{−0.5}) − 2 = (2z² − 0.4712z − 0.1642)/(z² − 0.7419z + 0.0821)
(d) (Modified Impulse-Invariant Design)
Convert H(s) = 4/[(s + 1)(s² + 4s + 5)] to a digital filter, with t_s = 0.5 s, using modified impulse invariance (if required) to account for sampling errors.
Since the numerator degree is M = 0 and the denominator degree is N = 3, we have N − M = 3. Since this does not equal 1, the initial value h(0) is zero, and no modification is required. The transfer function of the digital filter using impulse-invariant mapping is thus

H(z) = (0.2146z² + 0.0930z)/(z³ − 1.2522z² + 0.5270z − 0.0821)
9.4 The Matched z-Transform for Factored Forms
The impulse-invariant mapping may be expressed as

1/(s + α) = z/(z − e^{−αt_s})    or    s + α ⇒ (z − e^{−αt_s})/z    (9.13)
The matched z-transform uses this mapping to convert each numerator and denominator term of a factored H(s) to yield the digital filter H(z), also in factored form, as

H(s) = C Π_{i=1}^{M} (s − z_i) / Π_{k=1}^{N} (s − p_k)        H(z) = Kz^P Π_{i=1}^{M} (z − e^{z_i t_s}) / Π_{k=1}^{N} (z − e^{p_k t_s})    (9.14)

The power P of z^P in H(z) is P = N − M, the difference in the degrees of the denominator and numerator polynomials of H(s). The constant K is chosen to match the gains of H(s) and H(z) at some convenient frequency (typically, dc).
For complex roots, we can replace each conjugate pair using the mapping

(s + p − jq)(s + p + jq) ⇒ [z² − 2ze^{−pt_s} cos(qt_s) + e^{−2pt_s}]/z²    (9.15)

Since poles in the left half of the s-plane are mapped inside the unit circle in the z-plane, the matched z-transform preserves stability. It converts an all-pole analog system to an all-pole digital system but may not preserve the frequency response of the analog system. As with the impulse-invariant mapping, the matched z-transform also suffers from aliasing errors.
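The conjugate-pair mapping (9.15) can be spot-checked: the quadratic z² − 2ze^{−pt_s}cos(qt_s) + e^{−2pt_s} should have roots at exactly z = e^{(−p ± jq)t_s}. A minimal sketch, with p = 2, q = 1, and t_s = 0.5 chosen arbitrarily:

```python
import cmath, math

p, q, ts = 2.0, 1.0, 0.5  # arbitrary conjugate analog poles at s = -p +/- jq

# Quadratic produced by Eq. (9.15): z^2 - 2z e^{-p*ts} cos(q*ts) + e^{-2p*ts}
b = -2 * math.exp(-p * ts) * math.cos(q * ts)
c = math.exp(-2 * p * ts)

# Its roots should be exactly z = e^{(-p + jq)*ts} and its conjugate.
z1 = cmath.exp((-p + 1j * q) * ts)
assert abs(z1 * z1 + b * z1 + c) < 1e-12
assert abs(z1) < 1  # left-half-plane poles land inside the unit circle
```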
REVIEW PANEL 9.6
The Matched z-Transform Requires H(s) in Factored Form
(s + p_k) ⇒ (z − e^{−p_k t_s})/z    (for each factor of H(s) in factored form)
The matched z-transform is not suitable for highpass or bandpass filter design.
9.4.1 Modifications to Matched z-Transform Design
The matched z-transform maps the zeros of H(s) at s = ∞ (corresponding to the highest analog frequency f = ∞) to z = 0. This yields the term z^P in the design relation. In the modified matched z-transform, some or all of these zeros are mapped to z = −1 (corresponding to the highest digital frequency F = 0.5, or the analog frequency f = 0.5S). Two modifications to H(z) are in common use:
1. Move all zeros from z = 0 to z = −1 (that is, replace z^P by (z + 1)^P in H(z)).
2. Move all but one of the zeros at z = 0 to z = −1 (that is, replace z^P by z(z + 1)^{P−1} in H(z)).
These modifications allow us to use the matched z-transform even for highpass and bandstop filters.
REVIEW PANEL 9.7
Two Modifications to Matched z-Transform Design
1. Move all zeros from z = 0 to z = −1.    2. Move all but one of the zeros at z = 0 to z = −1.
EXAMPLE 9.4 (The Matched z-Transform)
Convert H(s) = 4/[(s + 1)(s + 2)] to H(z) by the matched z-transform and its modifications, with t_s = 0.5 s.
(a) No modification: The matched z-transform yields

H(z) = Kz²/[(z − e^{−t_s})(z − e^{−2t_s})] = Kz²/[(z − e^{−0.5})(z − e^{−1})] = Kz²/(z² − 0.9744z + 0.2231)
(b) First modification: Replace both zeros in H(z) (the term z²) by (z + 1)²:

H_1(z) = K_1(z + 1)²/[(z − e^{−t_s})(z − e^{−2t_s})] = K_1(z + 1)²/[(z − e^{−0.5})(z − e^{−1})] = K_1(z + 1)²/(z² − 0.9744z + 0.2231)
(c) Second modification: Replace only one zero in H(z) (the term z²) by z + 1:

H_2(z) = K_2 z(z + 1)/[(z − e^{−t_s})(z − e^{−2t_s})] = K_2 z(z + 1)/[(z − e^{−0.5})(z − e^{−1})] = K_2 z(z + 1)/(z² − 0.9744z + 0.2231)

Comment: The constants K, K_1, and K_2 may be chosen for a desired gain.
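A sketch (using numpy, with the values from this example) that rebuilds the common denominator and picks K_1 in the first modification for unit dc gain:

```python
import numpy as np

ts = 0.5
poles_s = np.array([-1.0, -2.0])       # poles of H(s) = 4/((s+1)(s+2))
poles_z = np.exp(poles_s * ts)         # matched z-transform poles e^{p_k*ts}

den = np.poly(poles_z)                 # (z - e^{-0.5})(z - e^{-1})
assert np.allclose(den, [1, -0.9744, 0.2231], atol=1e-4)

# First modification: H1(z) = K1*(z+1)^2/den; pick K1 for unit dc gain.
K1 = np.polyval(den, 1.0) / 4.0        # (z+1)^2 evaluates to 4 at z = 1
num = K1 * np.array([1.0, 2.0, 1.0])   # K1*(z+1)^2 expanded
assert abs(np.polyval(num, 1.0) / np.polyval(den, 1.0) - 1.0) < 1e-12
```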
9.5 Mappings from Discrete Algorithms
Discrete-time algorithms are often used to develop mappings that convert analog filters to digital filters. In the discrete domain, the operations of integration and differentiation are replaced by numerical integration and numerical differences, respectively. The mappings are derived by equating the transfer function H(s) of the ideal analog operation with the transfer function H(z) of the corresponding discrete operation.
9.5.1 Mappings from Difference Algorithms
Numerical differences are often used to convert differential equations to difference equations. Three such difference algorithms are listed in Table 9.2.
Table 9.2 Numerical Difference Algorithms

Difference    Numerical Algorithm                    Mapping for s
Backward      y[n] = (x[n] − x[n−1])/t_s             s = (z − 1)/(zt_s)
Forward       y[n] = (x[n+1] − x[n])/t_s             s = (z − 1)/t_s
Central       y[n] = (x[n+1] − x[n−1])/(2t_s)        s = (z² − 1)/(2zt_s)
Figure: The backward Euler algorithm, y[n] = (x[n] − x[n−1])/t_s, and the forward Euler algorithm, y[n] = (x[n+1] − x[n])/t_s.
Figure 9.4 Mapping region for the mapping based on the backward difference: the left half of the s-plane is mapped to the interior of a circle of radius 1/2, centered at z = 1/2, inside the unit circle.
9.5.3 The Forward-Difference Algorithm
The mapping for the forward difference is s → (z − 1)/t_s. With z = u + jv, we obtain

z = u + jv = 1 + t_s(σ + jω)    (9.21)

If σ = 0, then u = 1, and the jω-axis maps to the vertical line Re[z] = 1. If σ > 0, then u > 1, and the right half of the s-plane maps to the region to the right of z = 1. If σ < 0, then u < 1, and the left half of the s-plane maps to the region to the left of z = 1, as shown in Figure 9.5. This region (in the z-plane) includes not only the unit circle but also a vast region outside it. Thus, a stable analog filter with poles in the left half of the s-plane may well result in an unstable digital filter H(z), since its poles can lie anywhere to the left of z = 1 (and not necessarily inside the unit circle)!
Figure 9.5 Mapping region for the mapping based on the forward difference: the left half of the s-plane is mapped to the region of the z-plane to the left of z = 1, which extends well beyond the unit circle.
REVIEW PANEL 9.8
Mappings Based on the Backward Difference and Forward Difference
Backward difference: s = (z − 1)/(zt_s) (stable)        Forward difference: s = (z − 1)/t_s (not always stable)
EXAMPLE 9.5 (Mappings from Difference Algorithms)
(a) We convert the stable analog filter H(s) = 1/(s + α), α > 0, to a digital filter, using the backward-difference mapping s = (z − 1)/(zt_s), to obtain

H(z) = 1/[α + (z − 1)/(zt_s)] = zt_s/[(1 + αt_s)z − 1]
The digital filter has a pole at z = 1/(1 + αt_s). Since this is always less than unity if α > 0 (for a stable H(s)) and t_s > 0, we have a stable H(z).
(b) We convert the stable analog filter H(s) = 1/(s + α), α > 0, to a digital filter, using the forward-difference mapping s = (z − 1)/t_s, to obtain

H(z) = 1/[α + (z − 1)/t_s] = t_s/[z − (1 − αt_s)]

The digital filter has a pole at z = 1 − αt_s and is thus stable only if 0 < αt_s < 2 (to ensure |z| < 1). Since α > 0 and t_s > 0, we are assured a stable system only if α < 2/t_s. This implies that the sampling rate S must be chosen to ensure that S > 0.5α.
(c) We convert the stable analog filter H(s) = 1/(s + α), α > 0, to a digital filter, using the central-difference mapping s = (z² − 1)/(2zt_s), to obtain

H(z) = 1/[α + (z² − 1)/(2zt_s)] = 2zt_s/(z² + 2αt_s z − 1)

The digital filter has a pair of poles at z = −αt_s ± √[(αt_s)² + 1]. The magnitude of one of these poles is always greater than unity, and the digital filter is thus unstable for any α > 0.
Comment: Clearly, from a stability viewpoint, only the mapping based on the backward difference is useful for the filter H(s) = 1/(s + α). In fact, this mapping preserves stability for any stable analog filter.
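The three stability conclusions of this example can be verified directly from the pole locations. A sketch, with α = 3 and a few sampling intervals chosen arbitrarily:

```python
import numpy as np

alpha = 3.0  # stable analog filter H(s) = 1/(s + alpha); value assumed

for ts in (0.1, 0.5, 1.0):
    pole_bwd = 1 / (1 + alpha * ts)                 # backward difference
    pole_fwd = 1 - alpha * ts                       # forward difference
    assert abs(pole_bwd) < 1                        # always stable
    assert (abs(pole_fwd) < 1) == (alpha * ts < 2)  # conditionally stable

# Central difference: poles are the roots of z^2 + 2*alpha*ts*z - 1 = 0;
# their product is -1, so one pole always lies outside the unit circle.
r = np.roots([1, 2 * alpha * 0.1, -1])
assert max(abs(r)) > 1                              # always unstable
```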
9.5.4 Mappings from Integration Algorithms
Two commonly used algorithms for numerical integration are based on the rectangular rule and the trapezoidal rule. These integration algorithms, listed in Table 9.3, estimate the area y[n] from y[n−1] by using step interpolation (for the rectangular rule) or linear interpolation (for the trapezoidal rule) between the samples of x[n], as illustrated in Figure 9.6.
Table 9.3 Numerical Integration Algorithms

Rule           Numerical Algorithm                       Mapping for s
Rectangular    y[n] = y[n−1] + t_s x[n]                  s = (z − 1)/(zt_s)
Trapezoidal    y[n] = y[n−1] + 0.5t_s(x[n] + x[n−1])     s = (2/t_s)(z − 1)/(z + 1)
The mappings resulting from these operators are also listed in Table 9.3 and are based on comparing
the transfer function H(s) = 1/s of the ideal integrator with the transfer function H(z) of each integration
algorithm, as follows:
Figure 9.6 Illustrating the numerical integration algorithms: the rectangular rule, y[n] = y[n−1] + t_s x[n], and the trapezoidal rule, y[n] = y[n−1] + 0.5t_s{x[n] + x[n−1]}.
Rectangular rule: y[n] = y[n−1] + t_s x[n]

Y(z) = z^{−1}Y(z) + t_s X(z)        H(z) = Y(z)/X(z) = zt_s/(z − 1)        s = (z − 1)/(zt_s)    (9.22)
We remark that the rectangular algorithm for integration and the backward difference for the derivative generate identical mappings.
Trapezoidal rule: y[n] = y[n−1] + 0.5t_s(x[n] + x[n−1])

Y(z) = z^{−1}Y(z) + 0.5t_s[X(z) + z^{−1}X(z)]        H(z) = Y(z)/X(z) = 0.5t_s(z + 1)/(z − 1)        s = (2/t_s)(z − 1)/(z + 1)    (9.23)
The mapping based on the trapezoidal rule is also called Tustin's rule.
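The two integration algorithms in Table 9.3 can be run side by side as difference equations. A sketch (my own test signal, a cosine integrated over a quarter period) showing that the trapezoidal rule is the more accurate integrator:

```python
import math

ts = 0.01
N = 25  # integrate x(t) = cos(2*pi*t) over 0 <= t <= 0.25 s
x = [math.cos(2 * math.pi * k * ts) for k in range(N + 1)]

y_rect, y_trap = 0.0, 0.0
for n in range(1, N + 1):
    y_rect += ts * x[n]                      # y[n] = y[n-1] + ts*x[n]
    y_trap += 0.5 * ts * (x[n] + x[n - 1])   # y[n] = y[n-1] + 0.5*ts*(x[n]+x[n-1])

exact = math.sin(2 * math.pi * 0.25) / (2 * math.pi)  # true integral, 1/(2*pi)
assert abs(y_trap - exact) < abs(y_rect - exact)      # trapezoidal is closer
assert abs(y_trap - exact) < 1e-3
```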
9.5.5 Stability Properties of Integration-Algorithm Mappings
Mappings based on the rectangular and trapezoidal algorithms always yield a stable H(z) for any stable H(s) and any choice of t_s. The rectangular rule is equivalent to the backward-difference algorithm. It thus maps the left half of the s-plane to the interior of a circle of radius 0.5 in the z-plane, centered at z = 0.5, and is always stable. For the trapezoidal rule, we express z in terms of s = σ + jω to get

z = (2 + st_s)/(2 − st_s) = (2 + σt_s + jωt_s)/(2 − σt_s − jωt_s)    (9.24)

If σ = 0, we get |z| = 1, and for σ < 0, |z| < 1. Thus, the jω-axis is mapped to the unit circle, and the left half of the s-plane is mapped into the interior of the unit circle, as shown in Figure 9.7. This means that a stable analog system will always yield a stable digital system under this transformation. If s = 0, we have z = 1, and the dc gain of both the analog and digital filters is identical.
Discrete difference and integration algorithms are good approximations only for small digital frequencies (F < 0.1, say) or high sampling rates S (small t_s) that may be well in excess of the Nyquist rate. This is why the sampling rate is a critical factor in the frequency-domain performance of these algorithms. Another factor is stability. For example, the mapping based on the central-difference algorithm is not very useful because it always produces an unstable digital filter. Algorithms based on trapezoidal integration and the backward difference are popular because they always produce stable digital filters.
Figure 9.7 Mapping region for the trapezoidal integration algorithm (the bilinear transform): the left half of the s-plane is mapped to the interior of the unit circle in the z-plane.
REVIEW PANEL 9.9
Mappings Based on Numerical Integration Algorithms
Rectangular rule: s = (z − 1)/(zt_s) (always stable)        Trapezoidal rule: s = (2/t_s)(z − 1)/(z + 1) (always stable)
EXAMPLE 9.6 (Mappings from Integration Algorithms)
(a) Convert H(s) = 1/(s + α), α > 0, to a digital filter H(z), using the trapezoidal numerical integration algorithm, and comment on the stability of H(z).
Using the mapping based on the trapezoidal rule, we obtain

H(z) = H(s)|_{s = 2(z−1)/[t_s(z+1)]} = t_s(z + 1)/[(2 + αt_s)z − (2 − αt_s)]

The pole of H(z) is at z = (2 − αt_s)/(2 + αt_s). Since its magnitude is always less than unity (if α > 0 and t_s > 0), we have a stable H(z).
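The hand-applied Tustin mapping in part (a) agrees with scipy.signal.bilinear (an assumed availability), which performs the same trapezoidal substitution s = (2/t_s)(z − 1)/(z + 1):

```python
import numpy as np
from scipy.signal import bilinear

alpha, ts = 1.0, 0.5

# Hand-applied trapezoidal (Tustin) mapping for H(s) = 1/(s + alpha):
# H(z) = ts*(z + 1) / ((2 + alpha*ts)*z - (2 - alpha*ts))
num_hand = np.array([ts, ts])
den_hand = np.array([2 + alpha * ts, -(2 - alpha * ts)])

# scipy's bilinear transform with fs = 1/ts performs the same substitution.
bz, az = bilinear([1.0], [1.0, alpha], fs=1.0 / ts)
assert np.allclose(bz / bz[0], num_hand / num_hand[0])
assert np.allclose(az / az[0], den_hand / den_hand[0])

# The pole (2 - alpha*ts)/(2 + alpha*ts) is inside the unit circle: stable.
assert abs((2 - alpha * ts) / (2 + alpha * ts)) < 1
```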
(b) Simpson's algorithm for numerical integration finds y[n] over two time steps from y[n − 2] and is given by

y[n] = y[n − 2] + (t_s/3)(x[n] + 4x[n − 1] + x[n − 2])

Derive a mapping based on Simpson's rule, use it to convert H(s) = 1/(s + α), α > 0, to a digital filter H(z), and comment on the stability of H(z).
The transfer function H_S(z) of this algorithm is found as follows:

Y(z) = z^{−2}Y(z) + (t_s/3)(1 + 4z^{−1} + z^{−2})X(z)        H_S(z) = t_s(z² + 4z + 1)/[3(z² − 1)]
Comparison with the transfer function of the ideal integrator H(s) = 1/s gives the mapping

s = (3/t_s)(z² − 1)/(z² + 4z + 1)
For the trapezoidal rule (with t_s = 1), the frequency response is

H_T(F) = 0.5(1 + e^{−j2πF})/(1 − e^{−j2πF}) = 1/[j2 tan(πF)]

Normalizing this by the ideal integrator response H_I(F) = 1/(j2πF) and expanding the result, we obtain

H_T(F)/H_I(F) = j2πF/[j2 tan(πF)] = πF/tan(πF) ≈ 1 − (πF)²/3 − (πF)⁴/45

Figure E9.7 shows the magnitude and phase error by plotting the ratio H_T(F)/H_I(F). For an ideal algorithm, this ratio should equal unity at all frequencies. At low frequencies (F ≪ 1), we have tan(πF) ≈ πF and H_T(F)/H_I(F) ≈ 1, and the trapezoidal rule is a valid approximation to integration. The phase response matches the ideal phase at all frequencies.
[Figure E9.7 Frequency response of the numerical algorithms for Example 9.7: (a) magnitude spectrum of the integration algorithms (rectangular, trapezoidal, Simpson); (b) phase spectrum of the integration algorithms; (c) magnitude spectrum of the difference algorithms (backward, forward, central); (d) phase spectrum of the difference algorithms. All panels are plotted against the digital frequency F.]
(b) Simpson's numerical integration algorithm yields the following normalized result:

H_S(F)/H_I(F) = 2πF(2 + cos 2πF)/(3 sin 2πF)

It displays a perfect phase match for all frequencies, but has an overshoot in its magnitude response past F = 0.25 and thus amplifies high frequencies. Note that Simpson's algorithm yields a mapping rule that results in an unstable filter.
(c) For the forward difference operator, the DTFT yields

Y_p(F) = X_p(F)e^{j2πF} − X_p(F) = X_p(F)[e^{j2πF} − 1]  ⇒  H_F(F) = Y_p(F)/X_p(F) = e^{j2πF} − 1

The ratio H_F(F)/H_D(F) may be expanded as

H_F(F)/H_D(F) = 1 + j(1/2!)(2πF) − (1/3!)(2πF)² + ⋯

Again, we observe correspondence only at low digital frequencies (or high sampling rates). The high frequencies are amplified, making the algorithm susceptible to high-frequency noise. The phase response also deviates from the true phase, especially at high frequencies.
(d) For the central difference algorithm, we find H_C(F)/H_D(F) as

H_C(F)/H_D(F) = sin(2πF)/(2πF) = 1 − (1/3!)(2πF)² + (1/5!)(2πF)⁴ − ⋯

We see a perfect match only for the phase at all frequencies.
9.5.7 Mappings from Rational Approximations

Some of the mappings that we have derived from numerical algorithms may also be viewed as rational approximations of the transformations z → e^{st_s} and s → ln(z)/t_s. The forward-difference mapping is based on a first-order approximation for z = e^{st_s} and yields

z = e^{st_s} ≈ 1 + st_s,  |st_s| ≪ 1  ⇒  s ≈ (z − 1)/t_s    (9.25)

The backward-difference mapping is based on a first-order approximation for z⁻¹ = e^{−st_s} and yields

1/z = e^{−st_s} ≈ 1 − st_s,  |st_s| ≪ 1  ⇒  s ≈ (z − 1)/(z t_s)    (9.26)
The trapezoidal mapping is based on a first-order rational-function approximation of s = ln(z)/t_s, with ln(z) described by a power series, and yields

s = ln(z)/t_s = (2/t_s)[(z − 1)/(z + 1)] + (2/(3t_s))[(z − 1)/(z + 1)]³ + (2/(5t_s))[(z − 1)/(z + 1)]⁵ + ⋯ ≈ (2/t_s)(z − 1)/(z + 1)    (9.27)
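The approximation in (9.27) can be checked numerically. The sketch below (illustrative values only) compares the three rational mappings against the exact s = ln(z)/t_s at a low digital frequency; the trapezoidal (bilinear) form, which matches the series to third order, has by far the smallest error:

```python
# Sketch: compare s = ln(z)/ts with the rational approximations of (9.25)-(9.27)
# near z = 1 (low frequencies). Illustrative values only.
import cmath

ts = 1.0
z = cmath.exp(1j * 0.1)                 # a unit-circle point near z = 1
s_exact = cmath.log(z) / ts
s_fwd = (z - 1) / ts                    # forward difference (9.25)
s_bwd = (z - 1) / (z * ts)              # backward difference (9.26)
s_bil = (2 / ts) * (z - 1) / (z + 1)    # trapezoidal / bilinear (9.27)

# The bilinear mapping agrees with ln(z)/ts to third order in (z - 1),
# so its error is much smaller than that of either difference mapping.
assert abs(s_bil - s_exact) < abs(s_fwd - s_exact)
assert abs(s_bil - s_exact) < abs(s_bwd - s_exact)
```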
9.6 The Bilinear Transformation

If we generalize the mapping based on the trapezoidal rule by letting C = 2/t_s, we obtain the bilinear transformation, defined by

s = C(z − 1)/(z + 1)    z = (C + s)/(C − s)    (9.28)
If we let s = jω, we obtain the complex variable z in the form

z = (C + jω)/(C − jω) = e^{j2 tan⁻¹(ω/C)}    (9.29)

Since z = e^{jΩ}, where Ω = 2πF is the digital frequency, we find

Ω = 2 tan⁻¹(ω/C)  or  ω = C tan(0.5Ω)    (9.30)

This is a nonlinear relation between the analog frequency ω and the digital frequency Ω. When ω = 0, Ω = 0, and as ω → ∞, Ω → π. It is thus a one-to-one mapping that nonlinearly compresses the analog frequency range −∞ < f < ∞ to the digital frequency range −π < Ω < π. It avoids the effects of aliasing at the expense of distorting, compressing, or warping the analog frequencies, as shown in Figure 9.8.
The higher the frequency, the more severe is the warping. We can compensate for this warping (but not eliminate it) if we prewarp the frequency specifications before designing the analog system H(s) or applying the bilinear transformation. Prewarping of the frequencies prior to analog design is just a scaling (stretching) operation based on the inverse of the warping relation, and is given by

ω = C tan(0.5Ω)    (9.31)

Figure 9.9 shows a plot of ω versus Ω for various values of C, compared with the linear relation ω = Ω. The analog and digital frequencies always show a match at the origin (ω = Ω = 0), and at one other value dictated by the choice of C.

We point out that the nonlinear stretching effect of the prewarping often results in a filter of lower order, especially if the sampling frequency is not high enough. For high sampling rates, it turns out that the prewarping has little effect and may even be redundant.
[Figure 9.8 The warping effect of the bilinear transformation: equal-width intervals of the digital frequency Ω correspond to nonlinearly compressed (warped) intervals of the analog frequency ω = C tan(Ω/2).]
The popularity of the bilinear transformation stems from its simple, stable, one-to-one mapping. It avoids problems caused by aliasing and can thus be used even for highpass and bandstop filters. Though it does suffer from warping effects, these can be compensated for by a simple relation.
REVIEW PANEL 9.10
The Bilinear Transformation Avoids Aliasing at the Expense of Nonlinear Warping
Bilinear transformation: s = C(z − 1)/(z + 1)    Warping relation: ω = C tan(0.5Ω)
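The warping and gain-matching relations of the review panel can be sketched in a few lines. The numbers below are illustrative; the matching pair ω_A = 4 rad/s, Ω_D = 0.5π anticipates Example 9.8:

```python
# Sketch: the bilinear warping relation omega = C*tan(0.5*Omega) and the
# matching choice C = omega_A/tan(0.5*Omega_D). Illustrative values only.
import math

def warp(Omega, C):
    """Analog frequency that maps to the digital frequency Omega."""
    return C * math.tan(0.5 * Omega)

# Matching an analog frequency omega_A to a digital frequency Omega_D:
omega_A, Omega_D = 4.0, 0.5 * math.pi
C = omega_A / math.tan(0.5 * Omega_D)     # = 4, as in Example 9.8(a)
assert abs(C - 4.0) < 1e-12
assert abs(warp(Omega_D, C) - omega_A) < 1e-12

# Warping is mild at low frequencies and severe near Omega = pi:
assert abs(warp(0.05 * math.pi, 2.0) - 0.05 * math.pi) < 0.01   # nearly linear
assert warp(0.98 * math.pi, 2.0) > 10 * 0.98 * math.pi          # strongly stretched
```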
[Figure 9.9 The warping relation ω = C tan(0.5Ω) for various choices of C, compared with the linear relation ω = Ω. For all values of C, the curves match at the origin; for some values of C, they match at one more point.]
9.6.1 Using the Bilinear Transformation

Given an analog transfer function H(s) whose response at the analog frequency ω_A is to be matched to H(z) at the digital frequency Ω_D, we may design H(z) in one of two ways:

1. We pick C by matching ω_A and the prewarped frequency Ω_D, and obtain H(z) from H(s), using the transformation s = C(z − 1)/(z + 1). This process may be summarized as follows:

ω_A = C tan(0.5Ω_D)  ⇒  C = ω_A/tan(0.5Ω_D)    H(z) = H(s)|_{s = C(z−1)/(z+1)}    (9.32)
2. We pick a convenient value for C (say, C = 1). This actually matches the response at an arbitrary prewarped frequency ω_x given by

ω_x = tan(0.5Ω_D)    (9.33)

Next, we frequency scale H(s) to H₁(s) = H(sω_A/ω_x), and obtain H(z) from H₁(s), using the transformation s = (z − 1)/(z + 1) (with C = 1). This process may be summarized as follows (for C = 1):

ω_x = tan(0.5Ω_D)    H₁(s) = H(s)|_{s → sω_A/ω_x}    H(z) = H₁(s)|_{s = (z−1)/(z+1)}    (9.34)

The two methods yield an identical digital filter H(z). The first method does away with the scaling of H(s), and the second method allows a convenient choice for C.
REVIEW PANEL 9.11
The Bilinear Transformation Allows Gain Matching by Appropriate Choice of C
Choose C = ω_A/tan(0.5Ω_D) to match the gain of H(s) at ω_A to the gain of H(z) at Ω_D.
EXAMPLE 9.8 (Using the Bilinear Transformation)
(a) Consider a Bessel filter described by H(s) = 3/(s² + 3s + 3). Design a digital filter whose magnitude at f₀ = 3 kHz equals the magnitude of H(s) at ω_A = 4 rad/s if the sampling rate is S = 12 kHz.

The digital frequency is Ω = 2πf₀/S = 0.5π. We can now proceed in one of two ways:
1. Method 1: We select C by choosing the prewarped frequency to equal ω_A = 4:

ω_A = 4 = C tan(0.5Ω) = C tan(0.25π)  or  C = 4/tan(0.25π) = 4

We transform H(s) to H(z), using s = C(z − 1)/(z + 1) = 4(z − 1)/(z + 1), to obtain

H(z) = H(s)|_{s = 4(z−1)/(z+1)} = 3(z + 1)²/(31z² − 26z + 7)
2. Method 2: We choose C = 1, say, and evaluate ω_x = tan(0.5Ω) = tan(0.25π) = 1.

Next, we frequency scale H(s) to

H₁(s) = H(sω_A/ω_x) = H(4s) = 3/(16s² + 12s + 3)

Finally, we transform H₁(s) to H(z), using s = (z − 1)/(z + 1), to obtain

H(z) = H₁(s)|_{s = (z−1)/(z+1)} = 3(z + 1)²/(31z² − 26z + 7)
The magnitude |H(s)| at s = jω = j4 matches the magnitude |H(z)| at z = e^{jΩ} = e^{jπ/2} = j. We find that

|H(s)|_{s=j4} = |3/(−13 + j12)| = 0.1696    |H(z)|_{z=j} = |3(j + 1)²/(−24 − j26)| = 0.1696

Figure E9.8(a) compares the magnitude of H(s) and H(z). The linear phase of the Bessel filter is not preserved during the transformation (unless the sampling frequency is very high).
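The gain match claimed above can be confirmed numerically by evaluating both transfer functions directly (a sketch using the coefficients derived in this example):

```python
# Sketch: verify the gain match of Example 9.8(a) by evaluating the analog
# Bessel filter at omega_A = 4 rad/s and the digital filter at Omega = pi/2
# (i.e., f0 = 3 kHz with S = 12 kHz).
import cmath

def H_analog(s):
    return 3 / (s * s + 3 * s + 3)

def H_digital(z):
    return 3 * (z + 1) ** 2 / (31 * z * z - 26 * z + 7)

mag_s = abs(H_analog(4j))                             # |H(s)| at s = j4
mag_z = abs(H_digital(cmath.exp(1j * cmath.pi / 2)))  # |H(z)| at z = j
assert abs(mag_s - 0.1696) < 5e-4
assert abs(mag_s - mag_z) < 1e-12                     # exact match by design
```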
[Figure E9.8 Magnitude of the analog and digital filters for Example 9.8(a and b): (a) Bessel filter H(s) and digital filter H(z) versus analog frequency (0 to 6 kHz); (b) notch filter H(s) and digital filter H(z) versus analog frequency (0 to 120 Hz).]
(b) The twin-T notch filter H(s) = (s² + 1)/(s² + 4s + 1) has a notch frequency ω₀ = 1 rad/s. Design a digital notch filter with S = 240 Hz and a notch frequency f = 60 Hz.

The digital notch frequency is Ω = 2πf/S = 0.5π. We pick C by matching the analog notch frequency ω₀ and the prewarped digital notch frequency to get

ω₀ = C tan(0.5Ω)  ⇒  1 = C tan(0.25π)  ⇒  C = 1
Finally, we convert H(s) to H(z), using s = C(z − 1)/(z + 1) = (z − 1)/(z + 1), to get

H(z) = H(s)|_{s = (z−1)/(z+1)} = [(z − 1)² + (z + 1)²]/[(z − 1)² + 4(z² − 1) + (z + 1)²] = (z² + 1)/(3z² − 1)

We confirm that H(s) = 0 at s = jω₀ = j and H(z) = 0 at z = e^{jΩ} = e^{jπ/2} = j. Figure E9.8(b) shows the magnitude of the two filters and the perfect match at f = 60 Hz (or F = 0.25).
9.7 Spectral Transformations for IIR Filters

The design of IIR filters usually starts with an analog lowpass prototype, which is converted to a digital lowpass prototype by an appropriate mapping and transformed to the required filter type by an appropriate spectral transformation. For the bilinear mapping, we may even perform the mapping and the spectral transformation (in a single step) on the analog prototype itself.

9.7.1 Digital-to-Digital Transformations

If a digital lowpass prototype has been designed, the digital-to-digital (D2D) transformations of Table 9.5 can be used to convert it to the required filter type. These transformations preserve stability by mapping the unit circle (and all points within it) into itself.

As with analog transformations, the lowpass-to-bandpass (LP2BP) and lowpass-to-bandstop (LP2BS) transformations yield a digital filter with twice the order of the lowpass prototype. The lowpass-to-lowpass (LP2LP) transformation is actually a special case of the more general allpass transformation:

z → (z − α)/(1 − αz),  |α| < 1 (and real)    (9.35)
EXAMPLE 9.9 (Using D2D Transformations)
A lowpass filter H(z) = 3(z + 1)²/(31z² − 26z + 7) operates at S = 8 kHz, and its cutoff frequency is f_C = 2 kHz.

(a) Use H(z) to design a highpass filter with a cutoff frequency of 1 kHz.

We find Ω_D = 2πf_C/S = 0.5π and Ω_C = 0.25π. The LP2HP transformation (Table 9.5) requires

α = −cos[0.5(Ω_D + Ω_C)]/cos[0.5(Ω_D − Ω_C)] = −cos(3π/8)/cos(π/8) = −0.4142

The LP2HP spectral transformation is thus z → −(z + α)/(1 + αz) = −(z − 0.4142)/(1 − 0.4142z) and yields

H_HP(z) = 0.28(z − 1)²/(z² − 0.0476z + 0.0723)
(b) Use H(z) to design a bandpass filter with band edges of 1 kHz and 3 kHz.

The various digital frequencies are Ω₁ = 0.25π, Ω₂ = 0.75π, Ω₂ − Ω₁ = 0.5π, and Ω₂ + Ω₁ = π.
Table 9.5 Digital-to-Digital (D2D) Frequency Transformations

Note: The digital lowpass prototype cutoff frequency is Ω_D.
All digital frequencies are normalized to Ω = 2πf/S.

Form    Band Edge(s)   Mapping z →                               Mapping Parameters
LP2LP   Ω_C            (z − α)/(1 − αz)                          α = sin[0.5(Ω_D − Ω_C)]/sin[0.5(Ω_D + Ω_C)]
LP2HP   Ω_C            −(z + α)/(1 + αz)                         α = −cos[0.5(Ω_D + Ω_C)]/cos[0.5(Ω_D − Ω_C)]
LP2BP   [Ω₁, Ω₂]       −(z² + A₁z + A₂)/(A₂z² + A₁z + 1)         K = tan(0.5Ω_D)/tan[0.5(Ω₂ − Ω₁)]
                                                                 α = cos[0.5(Ω₂ + Ω₁)]/cos[0.5(Ω₂ − Ω₁)]
                                                                 A₁ = −2αK/(K + 1)    A₂ = (K − 1)/(K + 1)
LP2BS   [Ω₁, Ω₂]       (z² + A₁z + A₂)/(A₂z² + A₁z + 1)          K = tan(0.5Ω_D) tan[0.5(Ω₂ − Ω₁)]
                                                                 α = cos[0.5(Ω₂ + Ω₁)]/cos[0.5(Ω₂ − Ω₁)]
                                                                 A₁ = −2α/(K + 1)    A₂ = (1 − K)/(K + 1)
From Table 9.5, the parameters needed for the LP2BP transformation are

K = tan(π/4)/tan(π/4) = 1    α = cos(π/2)/cos(π/4) = 0    A₁ = 0    A₂ = 0

The LP2BP transformation is thus z → −z² and yields

H_BP(z) = 3(z² − 1)²/(31z⁴ + 26z² + 7)
(c) Use H(z) to design a bandstop filter with band edges of 1.5 kHz and 2.5 kHz.

Once again, we need Ω₁ = 3π/8, Ω₂ = 5π/8, Ω₂ − Ω₁ = π/4, and Ω₂ + Ω₁ = π.

From Table 9.5, the LP2BS transformation requires the parameters

K = tan(π/4) tan(π/8) = 0.4142    α = cos(π/2)/cos(π/8) = 0    A₁ = 0    A₂ = 0.4142

The LP2BS transformation is thus z → (z² + 0.4142)/(0.4142z² + 1) and yields

H_BS(z) = 0.28(z² + 1)²/(z⁴ + 0.0476z² + 0.0723)

Figure E9.9 compares the magnitudes of each filter designed in this example.
[Figure E9.9 The digital filters for Example 9.9: magnitude responses of the LP, HP, BP, and BS filters obtained by digital-to-digital transformations of the lowpass digital filter, versus analog frequency (0 to 4 kHz).]
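Because a D2D transformation simply relocates points on the unit circle, the new filter's gain at its band edge must equal the prototype's gain at Ω_D. The sketch below (an illustrative check) verifies this for the highpass design of part (a), using the rounded coefficients from the example:

```python
# Sketch: a D2D transformation preserves response values. For Example 9.9(a),
# the highpass filter's gain at 1 kHz (Omega = pi/4) should equal the lowpass
# filter's gain at its cutoff 2 kHz (Omega = pi/2), with S = 8 kHz.
import cmath

def H_lp(z):
    return 3 * (z + 1) ** 2 / (31 * z * z - 26 * z + 7)

def H_hp(z):
    return 0.28 * (z - 1) ** 2 / (z * z - 0.0476 * z + 0.0723)

g_lp = abs(H_lp(cmath.exp(1j * cmath.pi / 2)))   # lowpass at 2 kHz
g_hp = abs(H_hp(cmath.exp(1j * cmath.pi / 4)))   # highpass at 1 kHz
assert abs(g_lp - g_hp) < 1e-3    # equal up to the rounded coefficients
```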
9.7.2 Direct (A2D) Transformations for Bilinear Design

All stable transformations that are also free of aliasing introduce warping effects. Only the bilinear mapping offers a simple relation to compensate for the warping. Combining the bilinear mapping with the D2D transformations yields the analog-to-digital (A2D) transformations of Table 9.6 for bilinear design. These can be used to convert a prewarped analog lowpass prototype (with a cutoff frequency of 1 rad/s) directly to the required digital filter.
Table 9.6 Direct Analog-to-Digital (A2D) Transformations for Bilinear Design

Note: The analog lowpass prototype prewarped cutoff frequency is 1 rad/s.
The digital frequencies are normalized (Ω = 2πf/S) but are not prewarped.

Form    Band Edge(s)      Mapping s →                     Mapping Parameters
LP2LP   Ω_C               (z − 1)/[C(z + 1)]              C = tan(0.5Ω_C)
LP2HP   Ω_C               C(z + 1)/(z − 1)                C = tan(0.5Ω_C)
LP2BP   Ω₁ < Ω₀ < Ω₂      (z² − 2βz + 1)/[C(z² − 1)]      C = tan[0.5(Ω₂ − Ω₁)], β = cos Ω₀ or
                                                          β = cos[0.5(Ω₂ + Ω₁)]/cos[0.5(Ω₂ − Ω₁)]
LP2BS   Ω₁ < Ω₀ < Ω₂      C(z² − 1)/(z² − 2βz + 1)        C = tan[0.5(Ω₂ − Ω₁)], β = cos Ω₀ or
                                                          β = cos[0.5(Ω₂ + Ω₁)]/cos[0.5(Ω₂ − Ω₁)]
9.7.3 Bilinear Transformation for Peaking and Notch Filters

If we wish to use the bilinear transformation to design a second-order digital peaking (bandpass) filter with a 3-dB bandwidth of ΔΩ and a center frequency of Ω₀, we start with the lowpass analog prototype H(s) = 1/(s + 1) (whose cutoff frequency is 1 rad/s), and apply the A2D LP2BP transformation, to obtain

H_BP(z) = [C/(1 + C)] (z² − 1)/[z² − (2β/(1 + C))z + (1 − C)/(1 + C)]    β = cos Ω₀    C = tan(0.5ΔΩ)    (9.36)
Similarly, if we wish to use the bilinear transformation to design a second-order digital notch (bandstop) filter with a 3-dB notch bandwidth of ΔΩ and a notch frequency of Ω₀, we once again start with the lowpass analog prototype H(s) = 1/(s + 1), and apply the A2D LP2BS transformation, to obtain

H_BS(z) = [1/(1 + C)] (z² − 2βz + 1)/[z² − (2β/(1 + C))z + (1 − C)/(1 + C)]    β = cos Ω₀    C = tan(0.5ΔΩ)    (9.37)

If either design calls for an A-dB bandwidth of ΔΩ, the constant C is replaced by KC, where

K = 1/(10^{0.1A} − 1)^{1/2}    (A in dB)    (9.38)

This is equivalent to denormalizing the lowpass prototype such that its gain corresponds to an attenuation of A decibels at unit radian frequency. For a 3-dB bandwidth, we obtain K = 1, as expected. These design relations prove quite helpful in the quick design of notch and peaking filters.
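These relations translate directly into a short design routine. The sketch below (a hypothetical helper following (9.36) and (9.38), with K taken as 1 for the 3-dB case as in the text) reproduces the coefficients of Example 9.10(a):

```python
# Sketch: closed-form second-order peaking design from (9.36) and (9.38).
# Reproduces Example 9.10(a): dOmega = 0.4*pi, Omega0 = 0.48*pi.
import math

def peaking(Omega0, dOmega, A_dB=3.0):
    """Coefficients (b, a) of a peaking filter with an A-dB bandwidth dOmega."""
    # The text takes K = 1 for the 3-dB case; otherwise use (9.38).
    K = 1.0 if A_dB == 3.0 else 1 / math.sqrt(10 ** (0.1 * A_dB) - 1)
    C = K * math.tan(0.5 * dOmega)
    beta = math.cos(Omega0)
    b = [C / (1 + C), 0.0, -C / (1 + C)]               # gain * (z^2 - 1)
    a = [1.0, -2 * beta / (1 + C), (1 - C) / (1 + C)]
    return b, a

b, a = peaking(0.48 * math.pi, 0.4 * math.pi)
assert abs(b[0] - 0.4208) < 1e-3      # gain factor C/(1 + C)
assert abs(a[1] + 0.0727) < 1e-3      # -2*beta/(1 + C)
assert abs(a[2] - 0.1584) < 1e-3      # (1 - C)/(1 + C)
```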
The center frequency Ω₀ is used to determine the parameter β = cos Ω₀. If only the band edges Ω₁ and Ω₂ are specified, but the center frequency Ω₀ is not, β may also be found from the alternative relation of Table 9.6 in terms of Ω₁ and Ω₂. The center frequency of the designed filter will then be based on the geometric symmetry of the prewarped frequencies and can be computed from

tan(0.5Ω₀) = [tan(0.5Ω₁) tan(0.5Ω₂)]^{1/2}    (9.39)
In fact, the digital band edges Ω₁ and Ω₂ do not show geometric symmetry with respect to the center frequency Ω₀. We can find Ω₁ and Ω₂ in terms of ΔΩ and Ω₀ by equating the two expressions for finding β (in Table 9.6) to obtain

β = cos Ω₀ = cos[0.5(Ω₂ + Ω₁)]/cos[0.5(Ω₂ − Ω₁)]    (9.40)

With ΔΩ = Ω₂ − Ω₁, we get

Ω₂ = 0.5ΔΩ + cos⁻¹[cos Ω₀ cos(0.5ΔΩ)]    Ω₁ = Ω₂ − ΔΩ    (9.41)
EXAMPLE 9.10 (Bilinear Design of Second-Order Filters)
(a) Let us design a peaking filter with a 3-dB bandwidth of 5 kHz and a center frequency of 6 kHz. The sampling frequency is 25 kHz.

The digital frequencies are ΔΩ = 2π(5/25) = 0.4π and Ω₀ = 2π(6/25) = 0.48π.

We compute C = tan(0.5ΔΩ) = 0.7265 and β = cos Ω₀ = 0.0628. Substituting these into the form for the required filter, we obtain

H(z) = 0.4208(z² − 1)/(z² − 0.0727z + 0.1584)

Figure E9.10(a) shows the magnitude spectrum. The center frequency is 6 kHz, as expected. The band edges Ω₁ and Ω₂ may be computed from

Ω₂ = 0.5ΔΩ + cos⁻¹[cos Ω₀ cos(0.5ΔΩ)] = 2.1483    Ω₁ = Ω₂ − ΔΩ = 0.8917
These correspond to the frequencies f₁ = SΩ₁/(2π) = 3.55 kHz and f₂ = SΩ₂/(2π) = 8.55 kHz.
[Figure E9.10 Response of the bandpass filters for Example 9.10(a and b): (a) bandpass filter with f₀ = 6 kHz and Δf = 5 kHz (band edges at 3.55 and 8.55 kHz); (b) bandpass filter with f₁ = 4 kHz and f₂ = 9 kHz (center frequency at 6.56 kHz).]
(b) Let us design a peaking (bandpass) filter with 3-dB band edges of 4 kHz and 9 kHz. The sampling frequency is 25 kHz.

The digital frequencies are Ω₁ = 2π(4/25) = 0.32π, Ω₂ = 2π(9/25) = 0.72π, and ΔΩ = 0.4π.

We find C = tan(0.5ΔΩ) = 0.7265 and β = cos[0.5(Ω₂ + Ω₁)]/cos[0.5(Ω₂ − Ω₁)] = −0.0776. Substituting these into the form for the required filter, we obtain

H(z) = 0.4208(z² − 1)/(z² + 0.0899z + 0.1584)
Figure E9.10(b) shows the magnitude spectrum. The band edges are at 4 kHz and 9 kHz, as expected. The center frequency, however, is at 6.56 kHz. This is because the digital center frequency Ω₀ must be computed from β = cos Ω₀ = −0.0776. This gives Ω₀ = cos⁻¹(−0.0776) = 1.6485, which corresponds to f₀ = SΩ₀/(2π) = 6.5591 kHz.

Comment: We could also have computed Ω₀ from

tan(0.5Ω₀) = [tan(0.5Ω₁) tan(0.5Ω₂)]^{1/2} = 1.0809

Then, Ω₀ = 2 tan⁻¹(1.0809) = 1.6485, as before.
(c) We design a peaking filter with a center frequency of 40 Hz and a 6-dB bandwidth of 2 Hz, operating at a sampling rate of 200 Hz.

We compute ΔΩ = 2π(2/200) = 0.02π, Ω₀ = 2π(40/200) = 0.4π, and β = cos Ω₀ = 0.3090.

Since we are given the 6-dB bandwidth, we compute K and C as follows:

K = 1/(10^{0.1A} − 1)^{1/2} = 1/(10^{0.6} − 1)^{1/2} = 0.577    C = K tan(0.5ΔΩ) = 0.0182

Substituting these into the form for the required filter, we obtain

H(z) = 0.0179(z² − 1)/(z² − 0.6070z + 0.9642)
Figure E9.10C shows the magnitude spectrum. The blowup reveals that the 6-dB bandwidth (where the gain is 0.5) equals 2 Hz, as required.
[Figure E9.10C Response of the peaking filter for Example 9.10(c): (a) peaking filter with f₀ = 40 Hz and a 6-dB bandwidth of 2 Hz; (b) blowup of the response from 35 Hz to 45 Hz.]
EXAMPLE 9.11 (Interference Rejection)
We wish to design a filter to remove 60-Hz interference in an ECG signal sampled at 300 Hz. A 2-s recording of the noisy signal is shown in Figure E9.11A.
[Figure E9.11A Simulated ECG signal with 60-Hz interference for Example 9.11: two beats (600 samples) of the noisy ECG signal over 0 to 2 seconds.]
If we design a high-Q notch filter with Q = 50 and a notch at f₀ = 60 Hz, we have a notch bandwidth of Δf = f₀/Q = 1.2 Hz. The digital notch frequency is Ω₀ = 2πf₀/S = 2π(60/300) = 0.4π, and the digital bandwidth is ΔΩ = 2πΔf/S = 2π(1.2/300) = 0.008π.

We find C = tan(0.5ΔΩ) = 0.0126 and β = cos Ω₀ = 0.3090. Substituting these into the form for the notch filter, we obtain

H₁(z) = 0.9876(z² − 0.6180z + 1)/(z² − 0.6104z + 0.9752)
A low-Q design with Q = 5 gives Δf = f₀/Q = 12 Hz, and ΔΩ = 2π(12/300) = 0.08π.

We find C = tan(0.5ΔΩ) = 0.1263 and, with β = cos Ω₀ = 0.3090 as before, we obtain

H₂(z) = 0.8878(z² − 0.6180z + 1)/(z² − 0.5487z + 0.7757)
Figure E9.11B shows the magnitude spectrum of the two filters. Naturally, the filter H₁(z) (with the higher Q) exhibits the sharper notch.
[Figure E9.11B Response of the notch filters for Example 9.11: (a) 60-Hz notch filter with Q = 50; (b) 60-Hz notch filter with Q = 5. Magnitude versus analog frequency (0 to 150 Hz).]
The filtered ECG signal corresponding to these two notch filters is shown in Figure E9.11C. Although both filters are effective in removing the 60-Hz noise, the filter H₂(z) (with the lower Q) shows a much shorter start-up transient (because the highly oscillatory transient response of the high-Q filter H₁(z) takes much longer to reach steady state).
[Figure E9.11C Output of the notch filters for Example 9.11: (a) filtered ECG signal using the 60-Hz notch filter with Q = 50; (b) filtered ECG signal using the 60-Hz notch filter with Q = 5, over 0 to 2 seconds.]
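The closed-form notch relations make designs like this a few lines of code. The sketch below (illustrative) recomputes the Q = 50 notch filter H₁(z) of this example from (9.37):

```python
# Sketch: the Q = 50 notch of Example 9.11 from the closed-form relations
# of (9.37): Omega0 = 0.4*pi, dOmega = 2*pi*(f0/Q)/S = 0.008*pi.
import math

S, f0, Q = 300.0, 60.0, 50.0
Omega0 = 2 * math.pi * f0 / S
dOmega = 2 * math.pi * (f0 / Q) / S
C = math.tan(0.5 * dOmega)
beta = math.cos(Omega0)

g = 1 / (1 + C)                        # notch gain factor
b = [g, -2 * beta * g, g]              # g * (z^2 - 2*beta*z + 1)
a = [1.0, -2 * beta * g, (1 - C) / (1 + C)]

assert abs(g - 0.9876) < 1e-4          # matches H1(z) above
assert abs(a[1] + 0.6104) < 1e-3
assert abs(a[2] - 0.9752) < 1e-4
```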
9.8 Design Recipe for IIR Filters

There are several approaches to the design of IIR filters, using the mappings and spectral transformations described in this chapter. The first approach, illustrated in Figure 9.10, is based on developing the analog filter H(s), followed by the required mapping to convert H(s) to H(z).

A major disadvantage of this approach is that it cannot be used with mappings that suffer from aliasing problems (such as the impulse-invariant mapping) to design highpass or bandstop filters.
The second, indirect approach, illustrated in Figure 9.11, overcomes this problem by designing only the lowpass prototype H_P(s) in the analog domain. This is followed by the required mapping to obtain a digital lowpass prototype H_P(z). The final step is the spectral (D2D) transformation of H_P(z) to the required digital filter H(z).
[Figure 9.10 Converting an analog filter to a digital filter: the lowpass analog prototype H_P(s) (with ω_C = 1 rad/s) is converted by an analog transformation to the analog filter H(s), and then by a mapping s → z to the digital filter H(z).]
[Figure 9.11 Indirect conversion of an analog filter to a digital filter: the lowpass analog prototype H_P(s) (with ω_C = 1 rad/s) is converted by a mapping s → z to a lowpass digital prototype H_P(z) (with Ω_D = 1), and then by a D2D transformation to the digital filter H(z).]
This approach allows us to use any mapping, including those (such as response invariance) that may otherwise lead to excessive aliasing for highpass and bandstop filters. Designing H_P(z) also allows us to match its dc magnitude with H_P(s) for subsequent comparison.
A third approach, which applies only to the bilinear transformation, is illustrated in Figure 9.12. We prewarp the frequencies, design an analog lowpass prototype (from the prewarped specifications), and apply A2D transformations to obtain the required digital filter H(z).
[Figure 9.12 Conversion of an analog lowpass prototype directly to a digital filter: the lowpass analog prototype H_P(s) (with ω_C = 1 rad/s) is converted by an A2D transformation to the digital filter H(z).]
A Step-by-Step Approach

Given the passband and stopband edges, the passband and stopband attenuation, and the sampling frequency S, a standard recipe for the design of IIR filters is as follows:

1. Normalize (divide) the design band edges by S. This allows us to use a sampling interval t_s = 1 in subsequent design. For bilinear design, we also prewarp the normalized band edges.

2. Use the normalized band edges and attenuation specifications to design an analog lowpass prototype H_P(s) whose cutoff frequency is ω_C = 1 rad/s.

3. Apply the chosen mapping (with t_s = 1) to convert H_P(s) to a digital lowpass prototype filter H_P(z) with Ω_D = 1.

4. Use D2D transformations (with Ω_D = 1) to convert H_P(z) to H(z).

5. For bilinear design, we can also convert H_P(s) to H(z) directly (using A2D transformations).
REVIEW PANEL 9.12
Design Recipe for IIR Digital Filters
Normalize (divide) the band edges by S (and prewarp if using bilinear design).
Use the normalized band edges to design an analog lowpass prototype H_P(s) with ω_C = 1 rad/s.
Apply the chosen mapping (with t_s = 1) to convert H_P(s) to H_P(z) with Ω_D = 1.
Use D2D transformations to convert H_P(z) to H(z).
For bilinear design, convert H_P(s) to H(z) directly (using A2D transformations).
EXAMPLE 9.12 (IIR Filter Design)
Design a Chebyshev IIR filter to meet the following specifications: passband edges at [1.8, 3.2] kHz, stopband edges at [1.6, 4.8] kHz, A_p = 2 dB, A_s = 20 dB, and sampling frequency S = 12 kHz.
(a) (Indirect Bilinear Design)
The normalized band edges [Ω₁, Ω₂, Ω₃, Ω₄], in increasing order, are

[Ω₁, Ω₂, Ω₃, Ω₄] = 2π[1.6, 1.8, 3.2, 4.8]/12 = [0.84, 0.94, 1.68, 2.51]

The passband edges are [0.94, 1.68]. We choose C = 2 and prewarp each band-edge frequency using ω = 2 tan(0.5Ω) to give the prewarped values [0.89, 1.019, 2.221, 6.155].

The prewarped passband edges are [ω_p1, ω_p2] = [1.019, 2.221].
We design an analog filter meeting the prewarped specifications (as described in the appendix). This yields the lowpass prototype and the actual transfer function as

H_P(s) = 0.1634/(s⁴ + 0.7162s³ + 1.2565s² + 0.5168s + 0.2058)

H_BP(s) = 0.34s⁴/(s⁸ + 0.86s⁷ + 10.87s⁶ + 6.75s⁵ + 39.39s⁴ + 15.27s³ + 55.69s² + 9.99s + 26.25)

Finally, we transform the bandpass filter H_BP(s) to H(z), using s → 2(z − 1)/(z + 1), to obtain

H(z) = (0.0026z⁸ − 0.0095z⁶ + 0.0142z⁴ − 0.0095z² + 0.0026)/(z⁸ − 1.94z⁷ + 4.44z⁶ − 5.08z⁵ + 6.24z⁴ − 4.47z³ + 3.44z² − 1.305z + 0.59)

Figure E9.12A compares the response of the digital filter H(z) with the analog filter H_BP(s), and with a digital filter designed from the unwarped frequencies.
[Figure E9.12A Bandpass filter for Example 9.12 designed by the bilinear transformation: magnitude responses of the analog filter and of digital filters designed with and without prewarping, versus analog frequency (0 to 6 kHz).]
(b) (Direct A2D Design)
For bilinear design, we can also convert H_P(s) directly. We use the A2D LP2BP transformation with the unwarped passband edges [Ω₁, Ω₂] = [0.94, 1.68]. The constants C and β (from Table 9.6) are found as

C = tan[0.5(Ω₂ − Ω₁)] = 0.3839    β = cos[0.5(Ω₂ + Ω₁)]/cos[0.5(Ω₂ − Ω₁)] = 0.277

We transform the prototype analog filter H_P(s) to obtain

H(z) = (0.0026z⁸ − 0.0095z⁶ + 0.0142z⁴ − 0.0095z² + 0.0026)/(z⁸ − 1.94z⁷ + 4.44z⁶ − 5.08z⁵ + 6.24z⁴ − 4.47z³ + 3.44z² − 1.305z + 0.59)

This expression is identical to the transfer function H(z) of part (a).
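One convenient way to check such a design without expanding any polynomials is to apply the A2D substitution pointwise: evaluate the LP2BP mapping at z = e^{jΩ} and feed the resulting s into the lowpass prototype. The sketch below (an illustrative check, not the design procedure itself) verifies the attenuation specifications of this example:

```python
# Sketch: evaluate the Example 9.12 design pointwise by substituting the A2D
# LP2BP mapping s = (z^2 - 2*beta*z + 1)/(C*(z^2 - 1)) into the prototype.
import cmath, math

def Hp(s):  # lowpass prototype from the example
    return 0.1634 / (s**4 + 0.7162*s**3 + 1.2565*s**2 + 0.5168*s + 0.2058)

C, beta = 0.3839, 0.277

def H_bp(Omega):
    z = cmath.exp(1j * Omega)
    s = (z*z - 2*beta*z + 1) / (C * (z*z - 1))
    return Hp(s)

S = 12.0  # kHz
for f_edge in (1.8, 3.2):   # passband edges: gain near -2 dB (Ap = 2 dB)
    assert abs(H_bp(2 * math.pi * f_edge / S)) > 10 ** (-2 / 20) - 0.02
for f_edge in (1.6, 4.8):   # stopband edges: gain below -20 dB (As = 20 dB)
    assert abs(H_bp(2 * math.pi * f_edge / S)) < 10 ** (-20 / 20) + 0.02
```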
(c) (Design Using Other Mappings)
We can also design the digital filter based on other mappings by using the following steps:

1. Use the normalized (but unwarped) band edges [0.84, 0.94, 1.68, 2.51] to design an analog lowpass prototype H_P(s) with ω_C = 1 rad/s (fortunately, the unwarped and prewarped specifications yield the same H_P(s) for the specifications of this problem).

2. Convert H_P(s) to H_P(z) with Ω_D = 1, using the chosen mapping with t_s = 1. For the backward difference, for example, we would use s = (z − 1)/(z t_s) = (z − 1)/z. To use the impulse-invariant mapping, we would have to first convert H_P(s) to partial fraction form.

3. Convert H_P(z) to H(z), using the D2D LP2BP transformation with Ω_D = 1 and the unwarped passband edges [Ω₁, Ω₂] = [0.94, 1.68].

Figure E9.12C compares the response of two such designs, using the impulse-invariant mapping and the backward-difference mapping (both with gain matching at dc). The design based on the backward-difference mapping shows a poor match to the analog filter.
[Figure E9.12C Bandpass filter for Example 9.12 designed by impulse invariance and backward difference: magnitude responses of the analog filter (dashed) and the digital designs using impulse invariance (solid) and the backward difference, versus analog frequency (0 to 6 kHz).]
9.8.1 Finite-Word-Length Effects

The effects of quantization must be considered in the design and implementation of both IIR and FIR filters. Quantization implies that we choose a finite number of bits, and this less-than-ideal representation leads to problems collectively referred to as finite-word-length effects.

Quantization noise: Quantization noise limits the signal-to-noise ratio. One way to improve the SNR is to increase the number of bits. Another is to use oversampling (as discussed earlier).

Coefficient quantization: This refers to the representation of the filter coefficients by a limited number of bits. Its effects can be benign, such as a slight change in the frequency response of the resulting filter (typically, a larger passband ripple and/or a smaller stopband attenuation), or disastrous (for IIR filters), leading to instability.

Roundoff errors: When lower-order bits are discarded before results are stored (of a multiplication, say), there is roundoff error. The amount of error depends on the type of arithmetic used and on the filter structure. The effects of roundoff errors are similar to those of quantization noise and lead to a reduction in the signal-to-noise ratio.

Overflow errors: Overflow errors occur when the filter output or the result of an arithmetic operation (such as the sum of two large numbers with the same sign) exceeds the permissible word length. Such errors are avoided in practice by scaling the filter coefficients and/or the input in a manner that keeps the output within the permissible word length.
9.8.2 Effects of Coefficient Quantization

For IIR filters, the effects of rounding or truncating the filter coefficients can range from minor changes in the frequency response to serious problems, including instability. Consider the stable analog filter

H(s) = (s + 0.5)(s + 1.5)/[(s + 1)(s + 2)(s + 4.5)(s + 8)(s + 12)]    (9.42)

Bilinear transformation of H(s) with t_s = 0.01 s yields the digital transfer function H(z) = B(z)/A(z), whose denominator coefficients to double precision (A_k) and truncated to seven significant digits (A_tk) are given by

Filter Coefficients A_k      Truncated A_tk
 1.144168420199997e+0         1.144168e+0
−5.418904483999996e+0        −5.418904e+0
 1.026166736200000e+1         1.026166e+1
−9.712186808000000e+0        −9.712186e+0
 4.594164261000004e+0         4.594164e+0
−8.689086648000011e−1        −8.689086e−1

The poles of H(z) all lie within the unit circle, and the designed filter is thus stable. However, if we use the truncated coefficients A_tk to compute the roots, the filter becomes unstable, because one pole moves out of the unit circle! The bottom line is that stability is an important issue in the design of IIR digital filters.
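The mechanism is easy to reproduce on a small scale. The sketch below (an illustrative second-order case, not the fifth-order filter above) shows two closely spaced poles whose direct-form denominator becomes marginally unstable after its constant coefficient is rounded to two decimals:

```python
# Sketch (illustrative, not the text's filter): closely spaced poles make a
# direct-form denominator extremely sensitive to coefficient quantization.
import math

def quad_roots(b, c):
    """Roots of z^2 + b*z + c (real-root case)."""
    d = math.sqrt(b * b - 4 * c)
    return ((-b + d) / 2, (-b - d) / 2)

# Exact denominator (z - 0.97)(z - 0.98) = z^2 - 1.95z + 0.9506
r1, r2 = quad_roots(-1.95, 0.9506)
assert max(abs(r1), abs(r2)) < 1.0                 # stable as designed

# Round the constant coefficient to two decimals: 0.9506 -> 0.95.
# The discriminant jumps from 0.0001 to 0.0025, and a pole lands at z = 1.
q1, q2 = quad_roots(-1.95, 0.95)
assert max(abs(q1), abs(q2)) >= 1.0 - 1e-9         # marginally unstable
```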
9.8.3 Concluding Remarks

IIR filters are well suited to applications requiring frequency-selective filters with sharp cutoffs or where linear phase is relatively unimportant. Examples include graphic equalizers for digital audio, tone generators for digital touch-tone receivers, and filters for digital telephones. The main advantages are standardized, easy design and low filter order. On the other hand, IIR filters cannot exhibit linear phase and are quite susceptible to the effects of coefficient quantization. If linear phase is important, as in biomedical signal processing, or stability is paramount, as in many adaptive filtering schemes, it is best to use FIR filters.
CHAPTER 9 PROBLEMS
9.1 (Response Invariance) Consider the analog filter H(s) = 1/(s + 2).

(a) Convert H(s) to a digital filter H(z), using impulse invariance. Assume that the sampling frequency is S = 2 Hz.
(b) Will the impulse response h[n] match the impulse response h(t) of the analog filter at the sampling instants? Should it? Explain.
(c) Will the step response s[n] match the step response s(t) of the analog filter at the sampling instants? Should it? Explain.

[Hints and Suggestions: For part (a), find h(t), sample it (t → nt_s) to get h[n], and find its z-transform to obtain H(z).]
9.2 (Response Invariance) Consider the analog filter H(s) = 1/(s + 2).

(a) Convert H(s) to a digital filter H(z), using step invariance at a sampling frequency of S = 2 Hz.
(b) Will the impulse response h[n] match the impulse response h(t) of the analog filter at the sampling instants? Should it? Explain.
(c) Will the step response s[n] match the step response s(t) of the analog filter at the sampling instants? Should it? Explain.

[Hints and Suggestions: For part (a), find y(t) from Y(s) = H(s)X(s) = H(s)/s, sample it (t → nt_s) to get y[n], and find the ratio H(z) = Y(z)/X(z), where x[n] = u[n].]
9.3 (Response Invariance) Consider the analog filter H(s) = 1/(s + 2).
(a) Convert H(s) to a digital filter H(z), using ramp invariance at a sampling frequency of S = 2 Hz.
(b) Will the impulse response h[n] match the impulse response h(t) of the analog filter at the sampling instants? Should it? Explain.
(c) Will the step response s[n] match the step response s(t) of the analog filter at the sampling instants? Should it? Explain.
(d) Will the response v[n] to a unit ramp match the unit-ramp response v(t) of the analog filter at the sampling instants? Should it? Explain.
[Hints and Suggestions: For part (a), find y(t) from Y(s) = H(s)X(s) = H(s)/s², sample it (t → nt_s) to get y[n], and find the ratio H(z) = Y(z)/X(z), where x[n] = nt_s u[n].]
9.4 (Response Invariance) Consider the analog filter H(s) = (s + 1)/[(s + 1)² + π²].
(a) Convert H(s) to a digital filter H(z), using impulse invariance. Assume that the sampling frequency is S = 2 Hz.
(b) Convert H(s) to a digital filter H(z), using invariance to the input x(t) = e^{-t}u(t) at a sampling frequency of S = 2 Hz.
[Hints and Suggestions: For part (b), find y(t) from Y(s) = H(s)X(s) = H(s)/(s + 1), sample it (t → nt_s) to get y[n], and find the ratio H(z) = Y(z)/X(z), where x[n] = e^{-nt_s}u[n] = (e^{-t_s})^n u[n].]
9.5 (Impulse Invariance) Use the impulse-invariant transformation with t_s = 1 s to transform the following analog filters to digital filters.
(a) H(s) = 1/(s + 2)
(b) H(s) = 2/(s + 1) + 2/(s + 2)
(c) H(s) = 1/[(s + 1)(s + 2)]
[Hints and Suggestions: For (c), find the partial fractions for H(s). For all, use the transformation 1/(s + α) → z/(z - e^{-αt_s}) for each partial fraction term.]
9.6 (Impulse-Invariant Design) We are given the analog lowpass filter H(s) = 1/(s + 1) whose cutoff frequency is known to be 1 rad/s. It is required to use this filter as the basis for designing a digital filter by the impulse-invariant transformation. The digital filter is to have a cutoff frequency of 50 Hz and operate at a sampling frequency of 200 Hz.
(a) What is the transfer function H(z) of the digital filter if no gain matching is used?
(b) What is the transfer function H(z) of the digital filter if the gain of the analog filter and digital filter are matched at dc? Does the gain of the two filters match at their respective cutoff frequencies?
(c) What is the transfer function H(z) of the digital filter if the gain of the analog filter at its cutoff frequency (1 rad/s) is matched to the gain of the digital filter at its cutoff frequency (50 Hz)? Does the gain of the two filters match at dc?
[Hints and Suggestions: For (a), pick Ω_C = 2πf_C/S, obtain H_A(s) = H(s/Ω_C), and convert to H(z) using 1/(s + α) → z/(z - e^{-αt_s}). For (b), find G_A = |H(s)| at s = 0 and G_D = |H(z)| at z = 1, and multiply H(z) by G_A/G_D. For (c), find G_A = |H(s)| at s = j1 and G_D = |H(z)| at z = e^{j2π(50)/S}, and multiply H(z) by G_A/G_D.]
9.7 (Impulse Invariance) The impulse-invariant method allows us to take a digital filter described by H₁(z) = z/(z - α₁) at a sampling interval of t₁ and convert this to a new digital filter H₂(z) = z/(z - α₂) at a different sampling interval t₂.
(a) Using the fact that s = ln(α₁)/t₁ = ln(α₂)/t₂, show that α₂ = (α₁)^M, where M = t₂/t₁.
(b) Use the result of part (a) to convert the digital filter H₁(z) = z/(z - 0.5) + z/(z - 0.25), with t_s = 1 s, to a digital filter H₂(z), with t_s = 0.5 s.
9.8 (Matched z-Transform) Use the matched z-transform s + α → (z - e^{-αt_s})/z, with t_s = 0.5 s and gain matching at dc, to transform each analog filter H(s) to a digital filter H(z).
(a) H(s) = 1/(s + 2)
(b) H(s) = 1/[(s + 1)(s + 2)]
(c) H(s) = 1/(s + 1) + 1/(s + 2)
(d) H(s) = (s + 1)/[(s + 1)² + π²]
[Hints and Suggestions: Set up H(s) in factored form and use (s + α) → (z - e^{-αt_s})/z for each factor to obtain H(z). Then, find G_A = |H(s)| at s = 0 and G_D = |H(z)| at z = 1, and multiply H(z) by G_A/G_D.]
9.9 (Matched z-Transform) The analog filter H(s) = 4s(s + 1)/[(s + 2)(s + 3)] is to be converted to a digital filter H(z) at a sampling rate of S = 4 Hz.
(a) Convert H(s) to a digital filter, using the matched z-transform s + α → (z - e^{-αt_s})/z.
(b) Convert H(s) to a digital filter, using the modified matched z-transform by moving all zeros at the origin (z = 0) to z = -1.
(c) Convert H(s) to a digital filter, using the modified matched z-transform by moving all but one zero at the origin (z = 0) to z = -1.
9.10 (Backward Euler Algorithm) The backward Euler algorithm for numerical integration is given by y[n] = y[n - 1] + t_s x[n].
(a) Derive a mapping rule for converting an analog filter to a digital filter, based on this algorithm.
(b) Apply the mapping to convert the analog filter H(s) = 4/(s + 4) to a digital filter H(z), using a sampling interval of t_s = 0.5 s.
[Hints and Suggestions: For (a), find H(z) and compare with H(s) = 1/s (the ideal integrator).]
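Following the hint, the algorithm's transfer function is H(z) = t_s z/(z - 1); equating this to the ideal integrator 1/s gives the mapping s → (z - 1)/(t_s z). A quick numerical sketch (an illustration, not the book's worked solution) checks the hand-derived answer 2z/(3z - 1) for part (b):

```python
import numpy as np

ts = 0.5
s_map = lambda z: (z - 1) / (ts * z)       # backward Euler mapping s -> (z - 1)/(ts z)

H_analog = lambda s: 4 / (s + 4)
H_mapped = lambda z: H_analog(s_map(z))    # H(z) by direct substitution
H_closed = lambda z: 2 * z / (3 * z - 1)   # hand-derived closed form

# The substitution and the closed form agree at arbitrary points on the unit circle
z = np.exp(1j * np.linspace(0.1, 3.0, 50))
print(np.allclose(H_mapped(z), H_closed(z)))   # True
```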
9.11 (Mapping from Difference Algorithms) Consider the analog filter H(s) = 1/(s + α).
(a) For what values of α is this filter stable?
(b) Convert H(s) to a digital filter H(z), using the mapping based on the forward difference at a sampling rate S. Is H(z) always stable if H(s) is stable?
(c) Convert H(s) to a digital filter H(z), using the mapping based on the backward difference at a sampling rate S. Is H(z) always stable if H(s) is stable?
[Hints and Suggestions: For (b)-(c), find H(z) from the difference equations for the forward and backward differences, respectively, and compare with H(s) = s (the ideal differentiator).]
9.12 (Simpson's Algorithm) Simpson's numerical integration algorithm is described by
y[n] = y[n - 2] + (t_s/3)(x[n] + 4x[n - 1] + x[n - 2])
(a) Derive a mapping rule to convert an analog filter H(s) to a digital filter H(z), based on this algorithm.
(b) Let H(s) = 1/(s + 1). Convert H(s) to H(z), using the mapping derived in part (a).
(c) Is the filter H(z) designed in part (b) stable for any choice of t_s > 0?
9.13 (Response of Numerical Algorithms) Simpson's and Tick's rules for numerical integration find y[n] (the approximation to the area) over two time steps from y[n - 2] and are described by
Simpson's rule: y[n] = y[n - 2] + (x[n] + 4x[n - 1] + x[n - 2])/3
Tick's rule: y[n] = y[n - 2] + 0.3584x[n] + 1.2832x[n - 1] + 0.3584x[n - 2]
(a) Find the transfer function H(F) corresponding to each rule.
(b) For each rule, sketch |H(F)| over 0 ≤ F ≤ 0.5 and compare with the spectrum of an ideal integrator.
(c) It is claimed that the coefficients in Tick's rule optimize H(F) in the range 0 < F < 0.25. Does your comparison support this claim?
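The comparison in part (b) can be sketched directly from the difference equations. Writing E = e^{-j2πF} for a one-sample delay gives H(F) = (1 + 4E + E²)/[3(1 - E²)] for Simpson's rule and H(F) = (0.3584 + 1.2832E + 0.3584E²)/(1 - E²) for Tick's rule; both should track the ideal integrator 1/(j2πF) at low frequencies. A sketch (illustrative, assuming a unit time step):

```python
import numpy as np

def H_simpson(F):
    E = np.exp(-2j * np.pi * F)      # one-sample delay at digital frequency F
    return (1 + 4 * E + E**2) / (3 * (1 - E**2))

def H_tick(F):
    E = np.exp(-2j * np.pi * F)
    return (0.3584 + 1.2832 * E + 0.3584 * E**2) / (1 - E**2)

F = 0.01
ideal = 1 / (2 * np.pi * F)          # |H| of an ideal integrator at frequency F
print(abs(H_simpson(F)), abs(H_tick(F)), ideal)
# At low F, both rules stay within a fraction of a percent of the ideal integrator.
```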
9.14 (Bilinear Transformation) Consider the lowpass analog Bessel filter H(s) = 3/(s² + 3s + 3).
(a) Use the bilinear transformation to convert this analog filter H(s) to a digital filter H(z) at a sampling rate of S = 2 Hz.
(b) Use H(s) and the bilinear transformation to design a digital lowpass filter H(z) whose gain at f₀ = 20 kHz matches the gain of H(s) at Ω_a = 3 rad/s. The sampling frequency is S = 80 kHz.
[Hints and Suggestions: For (b), use s → C(z - 1)/(z + 1) with C tan(πf₀/S) = Ω_a.]
9.15 (Bilinear Transformation) Consider the analog filter H(s) = s/(s² + s + 1).
(a) What type of filter does H(s) describe?
(b) Use H(s) and the bilinear transformation to design a digital filter H(z) operating at S = 1 kHz such that its gain at f₀ = 250 Hz matches the gain of H(s) at Ω_a = 1 rad/s. What type of filter does H(z) describe?
(c) Use H(s) and the bilinear transformation to design a digital filter H(z) operating at S = 10 Hz such that the gains of H(z) and H(s) match at f_m = 1 Hz. What type of filter does H(z) describe?
[Hints and Suggestions: For (b), use s → C(z - 1)/(z + 1) with C tan(πf₀/S) = Ω_a. For (c), use s → C(z - 1)/(z + 1) with C tan(πf_m/S) = 2πf_m.]
9.16 (Bilinear Transformation) A second-order Butterworth lowpass analog filter with a half-power frequency of 1 rad/s is converted to a digital filter H(z), using the bilinear transformation at a sampling rate of S = 1 Hz.
(a) What is the transfer function H(s) of the analog filter?
(b) What is the transfer function H(z) of the digital filter?
(c) Are the dc gains of H(z) and H(s) identical? Should they be? Explain.
(d) Are the gains of H(z) and H(s) at their respective half-power frequencies identical? Explain.
9.17 (IIR Filter Design) Lead-lag systems are often used in control systems and have the generic form H(s) = (1 + sτ₁)/(1 + sτ₂). Use a sampling frequency of S = 10 Hz and the bilinear transformation to design IIR filters from this lead-lag compensator if
(a) τ₁ = 1 s, τ₂ = 10 s.
(b) τ₁ = 10 s, τ₂ = 1 s.
9.18 (Spectral Transformation of Digital Filters) The digital lowpass filter H(z) = (z + 1)/(z² - z + 0.2) has a cutoff frequency f = 0.5 kHz and operates at a sampling frequency S = 10 kHz. Use this filter to design the following:
(a) A lowpass digital filter with a cutoff frequency of 2 kHz
(b) A highpass digital filter with a cutoff frequency of 1 kHz
(c) A bandpass digital filter with band edges of 1 kHz and 3 kHz
(d) A bandstop digital filter with band edges of 1.5 kHz and 3.5 kHz
[Hints and Suggestions: Use the tables for digital-to-digital (D2D) transformations.]
9.19 (Spectral Transformation of Analog Prototypes) The analog lowpass filter H(s) = 2/(s² + 2s + 2) has a cutoff frequency of 1 rad/s. Use this prototype to design the following digital filters.
(a) A lowpass filter with a passband edge of 100 Hz and S = 1 kHz
(b) A highpass filter with a cutoff frequency of 500 Hz and S = 2 kHz
(c) A bandpass filter with band edges at 400 Hz and 800 Hz, and S = 3 kHz
(d) A bandstop filter with band edges at 1 kHz and 1200 Hz, and S = 4 kHz
[Hints and Suggestions: Use the tables for analog-to-digital (A2D) transformations.]
9.20 (Notch Filters) A notch filter is required to remove 50-Hz interference. Assuming a bandwidth of 4 Hz and a sampling rate of 300 Hz, design the simplest such filter using the bilinear transformation. Compute the filter gain at 40 Hz, 50 Hz, and 60 Hz.
[Hints and Suggestions: Use the standard form of the second-order notch filter.]
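The expected gains can be sanity-checked with scipy's `iirnotch` (a standard second-order notch with Q = f₀/BW; an illustrative stand-in, not necessarily identical to the book's bilinear derivation):

```python
import numpy as np
from scipy import signal

fs, f0, bw = 300.0, 50.0, 4.0
b, a = signal.iirnotch(w0=f0, Q=f0 / bw, fs=fs)   # second-order notch at 50 Hz

freqs = np.array([40.0, 50.0, 60.0])
_, H = signal.freqz(b, a, worN=2 * np.pi * freqs / fs)
for f, g in zip(freqs, np.abs(H)):
    print(f"|H| at {f:g} Hz = {g:.4f}")
# The gain is essentially zero at 50 Hz and close to unity at 40 Hz and 60 Hz,
# since those frequencies lie well outside the 4-Hz notch bandwidth.
```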
9.21 (Peaking Filters) A peaking filter is required to isolate a 100-Hz signal with unit gain. Assuming a bandwidth of 5 Hz and a sampling rate of 500 Hz, design the simplest such filter using the bilinear transformation. Compute the filter gain at 90 Hz, 100 Hz, and 110 Hz.
[Hints and Suggestions: Use the standard form of the second-order peaking (bandpass) filter.]
9.22 (IIR Filter Design) A fourth-order digital filter operating at a sampling frequency of 40 kHz is required to have a passband between 8 kHz and 12 kHz and a maximum passband ripple that equals 5% of the peak magnitude. Design the digital filter using the bilinear transformation.
[Hints and Suggestions: You need a Chebyshev bandpass filter of order 4. So, start with a Chebyshev analog lowpass prototype of order 2 (not 4) and apply A2D transformations.]
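As a sketch of the expected result (using scipy's `cheby1`, which designs from a lowpass prototype and applies the bilinear transformation internally when a sampling rate is given; the ripple of 5% of peak corresponds to -20 log₁₀(0.95) ≈ 0.446 dB):

```python
import numpy as np
from scipy import signal

S = 40e3
rp = -20 * np.log10(0.95)     # 5% ripple of the peak magnitude, ~0.446 dB

# Order-2 lowpass prototype -> order-4 bandpass with edges at 8 and 12 kHz
b, a = signal.cheby1(2, rp, [8e3, 12e3], btype="bandpass", fs=S)

freqs = np.array([4e3, 8e3, 12e3])
_, H = signal.freqz(b, a, worN=2 * np.pi * freqs / S)
print(np.abs(H))   # ~0.95 at both band edges, well attenuated at 4 kHz
```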
9.23 (IIR Filter Design) Design IIR filters that meet each of the following sets of specifications. Assume a passband attenuation of A_p = 2 dB and a stopband attenuation of A_s = 30 dB.
(a) A Butterworth lowpass filter with passband edge at 1 kHz, stopband edge at 3 kHz, and S = 10 kHz, using the backward Euler transformation.
(b) A Butterworth highpass filter with passband edge at 400 Hz, stopband edge at 100 Hz, and S = 2 kHz, using the impulse-invariant transformation.
(c) A Chebyshev bandpass filter with passband edges at 800 Hz and 1600 Hz, stopband edges at 400 Hz and 2 kHz, and S = 5 kHz, using the bilinear transformation.
(d) An inverse Chebyshev bandstop filter with passband edges at 200 Hz and 1.2 kHz, stopband edges at 500 Hz and 700 Hz, and S = 4 kHz, using the bilinear transformation.
[Hints and Suggestions: Normalize all frequencies. For (a)-(b), design an analog lowpass prototype at Ω_C = 1 rad/s, convert to a digital prototype at S = 1 Hz, and use digital-to-digital (D2D) transformations with Ω_C = 1. For (c)-(d), design the analog lowpass prototype using prewarped frequencies. Then, apply A2D transformations using unwarped digital band edges.]
9.24 (Digital-to-Analog Mappings) In addition to the bilinear transformation, the backward Euler method also allows a linear mapping to transform a digital filter H(z) to an analog equivalent H(s).
(a) Develop such a mapping based on the backward Euler algorithm.
(b) Use this mapping to convert the digital filter H(z) = z/(z - 0.5) operating at S = 2 Hz to its analog equivalent H(s).
9.25 (Digital-to-Analog Mappings) The forward Euler method also allows a linear mapping to transform a digital filter H(z) to an analog equivalent H(s).
(a) Develop such a mapping based on the forward Euler algorithm.
(b) Use this mapping to convert the digital filter H(z) = z/(z - 0.5) operating at S = 2 Hz to its analog equivalent H(s).
9.26 (Digital-to-Analog Mappings) Two other methods that allow us to convert a digital filter H(z) to an analog equivalent H(s) are impulse invariance and the matched z-transform s + α → (z - e^{-αt_s})/z.
Let H(z) = z(z + 1)/[(z - 0.25)(z - 0.5)]. Find the analog filter H(s) from which H(z) was developed, assuming a sampling frequency of S = 2 Hz and
(a) Impulse invariance. (b) Matched z-transform. (c) Bilinear transformation.
9.27 (Digital-to-Analog Mappings) The bilinear transformation allows us to use a linear mapping to transform a digital filter H(z) to an analog equivalent H(s).
(a) Develop such a mapping based on the bilinear transformation.
(b) Use this mapping to convert the digital filter H(z) = z/(z - 0.5) operating at S = 2 Hz to its analog equivalent H(s).
9.28 (Group Delay) A digital filter H(z) is designed from an analog filter H(s), using the bilinear transformation Ω_A = C tan(Ω_D/2).
(a) Show that the group delays T_g(Ω_A) and T_g(Ω_D) of H(s) and H(z), respectively, are related by
T_g(Ω_D) = 0.5C(1 + Ω_A²)T_g(Ω_A)
(b) Design a digital filter H(z) from the analog filter H(s) = 5/(s + 5) at a sampling frequency of S = 4 Hz such that the gain of H(s) at Ω = 2 rad/s matches the gain of H(z) at 1 Hz.
(c) What is the group delay T_g(Ω_A) of the analog filter H(s)?
(d) Use the results to find the group delay T_g(Ω_D) of the digital filter H(z) designed in part (b).
9.29 (Padé Approximations) A delay of t_s may be approximated by e^{-st_s} ≈ 1 - st_s + (st_s)²/2! - ... . An nth-order Padé approximation is based on a rational function of order n that minimizes the truncation error of this approximation. The first-order and second-order Padé approximations are
P₁(s) = [1 - (1/2)st_s] / [1 + (1/2)st_s]
P₂(s) = [1 - (1/2)st_s + (1/12)(st_s)²] / [1 + (1/2)st_s + (1/12)(st_s)²]
Since e^{-st_s} describes a delay of one sample (or z^{-1}), Padé approximations can be used to generate inverse mappings for converting a digital filter H(z) to an analog filter H(s).
(a) Generate mappings for converting a digital filter H(z) to an analog filter H(s) based on the first-order and second-order Padé approximations.
(b) Use each mapping to convert H(z) = z/(z - 0.5) to H(s), assuming t_s = 0.5 s.
(c) Show that the first-order mapping is bilinear. Is this mapping related in any way to the bilinear transformation?
COMPUTATION AND DESIGN
9.30 (IIR Filter Design) It is required to design a lowpass digital filter H(z) from the analog filter H(s) = 1/(s + 1). The sampling rate is S = 1 kHz. The half-power frequency of H(z) is to be Ω_C = π/4.
(a) Use impulse invariance to design H(z) such that the gain of the two filters matches at dc. Compare the frequency response of both filters (after appropriate frequency scaling). Which filter would you expect to yield better performance? To confirm your expectations, define the (in-band) signal to (out-of-band) noise ratio (SNR) in dB as
SNR = 20 log(signal level/noise level) dB
(b) What is the SNR at the input and output of each filter if the input is
x(t) = cos(0.2Ω_C t) + cos(1.2Ω_C t) for H(s) and x[n] = cos(0.2nΩ_C) + cos(1.2nΩ_C) for H(z)?
(c) What is the SNR at the input and output of each filter if the input is
x(t) = cos(0.2Ω_C t) + cos(3Ω_C t) for H(s) and x[n] = cos(0.2nΩ_C) + cos(3nΩ_C) for H(z)?
(d) Use the bilinear transformation to design another filter H₁(z) such that the gain of the two filters matches at dc. Repeat the computations of parts (a) and (b) for this filter. Of the two digital filters H(z) and H₁(z), which one would you recommend using, and why?
9.31 (IIR Filter Design) A digital filter is required to have a monotonic response in the passband and stopband. The half-power frequency is to be 4 kHz, and the attenuation past 5 kHz is to exceed 20 dB. Design the digital filter, using impulse invariance and a sampling frequency of 15 kHz.
9.32 (The Effect of Group Delay) The nonlinear phase of IIR filters is responsible for signal distortion. Consider a lowpass filter with a 1-dB passband edge at f = 1 kHz, a 50-dB stopband edge at f = 2 kHz, and a sampling frequency of S = 10 kHz.
(a) Design a Butterworth filter H_B(z) and an elliptic filter H_E(z) to meet these specifications. Using the Matlab routine grpdelay (or otherwise), compute and plot the group delay of each filter. Which filter has the lower order? Which filter has a more nearly constant group delay in the passband? Which filter would cause the least phase distortion in the passband? What are the group delays N_B and N_E (expressed as the number of samples) of the two filters?
(b) Generate the signal x[n] = 3 sin(0.03πn) + sin(0.09πn) + 0.6 sin(0.15πn) over 0 ≤ n ≤ 100. Use the ADSP routine filter to compute the responses y_B[n] and y_E[n] of each filter. Plot the filter outputs y_B[n] and y_E[n] (delayed by N_B and N_E, respectively) and the input x[n] on the same plot to compare results. Does the filter with the more nearly constant group delay also result in smaller signal distortion?
(c) Are all the frequency components of the input signal in the filter passband? If so, how can you justify that the distortion is caused by the nonconstant group delay and not by the filter attenuation in the passband?
9.33 (LORAN) A LORAN (long-range radio and navigation) system for establishing positions of marine craft uses three transmitters that send out short bursts (10 cycles) of 100-kHz signals in a precise phase relationship. Using phase comparison, a receiver (on the craft) can establish the position (latitude and longitude) of the craft to within a few hundred meters. Suppose the LORAN signal is to be digitally processed by first sampling it at 500 kHz and filtering the sampled signal using a second-order peaking filter with a half-power bandwidth of 100 Hz. Use the bilinear transformation to design the filter from an analog filter with unit half-power frequency. Compare your design with the digital filter designed in Problem 18.34 (to meet the same specifications).
9.34 (Decoding a Mystery Message) During transmission, a message signal gets contaminated by a low-frequency signal and high-frequency noise. The message can be decoded only by displaying it in the time domain. The contaminated signal x[n] is provided on the author's website as mystery1.mat. Load this signal into Matlab (use the command load mystery1). In an effort to decode the message, try the following methods and determine what the decoded message says.
(a) Display the contaminated signal. Can you read the message? Display the DFT of the signal to identify the range of the message spectrum.
(b) Use the bilinear transformation to design a second-order IIR bandpass filter capable of extracting the message spectrum. Filter the contaminated signal and display the filtered signal to decode the message. Use both the filter (filtering) and filtfilt (zero-phase filtering) commands.
(c) As an alternative method, first zero out the DFT component corresponding to the low-frequency contamination and obtain the IDFT y[n]. Next, design a lowpass IIR filter (using impulse invariance) to reject the high-frequency noise. Filter the signal y[n] and display the filtered signal to decode the message. Use both the filter and filtfilt commands.
(d) Which of the two methods allows better visual detection of the message? Which of the two filtering routines (in each method) allows better visual detection of the message?
9.35 (Interpolation) The signal x[n] = cos(2πF₀n) is to be interpolated by 5 using up-sampling followed by lowpass filtering. Let F₀ = 0.4.
(a) Generate and plot 20 samples of x[n] and up-sample by 5.
(b) What must be the cutoff frequency F_C and gain A of a lowpass filter that follows the up-sampler to produce the interpolated output?
(c) Design a fifth-order digital Butterworth filter (using the bilinear transformation) whose half-power frequency equals F_C and whose peak gain equals A.
(d) Filter the up-sampled signal through this filter and plot the result. Is the result an interpolated version of the input signal? Do the peak amplitudes of the interpolated signal and original signal match? Should they? Explain.
9.36 (Coefficient Quantization) Consider the analog filter described by
H(s) = (s + 0.5)(s + 1.5) / [(s + 1)(s + 2)(s + 4.5)(s + 8)(s + 12)]
(a) Is this filter stable? Why?
(b) Use the bilinear transformation with S = 100 Hz to convert this to a digital filter H(z).
(c) Truncate the filter coefficients to seven significant digits to generate the filter H₂(z).
(d) Compare the frequency response of H(z) and H₂(z). Are there any significant differences?
(e) Is the filter H₂(z) stable? Should it be? Explain.
(f) Suppose the coefficients are to be quantized to B bits by rounding. What is the smallest number of bits B required in order to preserve the stability of the quantized filter?
9.37 (Numerical Integration Algorithms) It is claimed that mapping rules to convert an analog filter to a digital filter, based on numerical integration algorithms that approximate the area y[n] from y[n - 2] or y[n - 3] (two or more time steps away), do not usually preserve stability. Consider the following integration algorithms.
(1) y[n] = y[n - 1] + (t_s/12)(5x[n] + 8x[n - 1] - x[n - 2]) (Adams-Moulton rule)
(2) y[n] = y[n - 2] + (t_s/3)(x[n] + 4x[n - 1] + x[n - 2]) (Simpson's rule)
(3) y[n] = y[n - 3] + (3t_s/8)(x[n] + 3x[n - 1] + 3x[n - 2] + x[n - 3]) (Simpson's three-eighths rule)
Derive mapping rules for each algorithm, convert the analog filter H(s) = 1/(s + 1) to a digital filter using each mapping with S = 5 Hz, and use Matlab to compare their frequency response. Which of these mappings (if any) allow us to convert a stable analog filter to a stable digital filter? Is the claim justified?
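The claim is easy to probe for Simpson's rule. Its difference equation gives the mapping s → (3/t_s)(z² - 1)/(z² + 4z + 1), so for H(s) = 1/(s + 1) the digital denominator is (3/t_s)(z² - 1) + (z² + 4z + 1). A minimal check of the resulting pole locations (illustrative, using numpy):

```python
import numpy as np

ts = 1 / 5                                  # S = 5 Hz
# Simpson's rule: s -> (3/ts)(z^2 - 1)/(z^2 + 4z + 1); for H(s) = 1/(s + 1)
# the denominator polynomial of H(z) in z is (3/ts)(z^2 - 1) + (z^2 + 4z + 1).
den = (3 / ts) * np.array([1, 0, -1]) + np.array([1, 4, 1])
poles = np.roots(den)
print(poles, np.max(np.abs(poles)))
# One pole lies outside the unit circle, so the stable analog filter maps
# to an unstable digital filter, consistent with the claim.
```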
9.38 (RIAA Equalization) Audio signals usually undergo a high-frequency boost (and low-frequency cut) before being used to make the master for commercial production of phonograph records. During playback, the signal from the phono cartridge is fed to a preamplifier (equalizer) that restores the original signal. The frequency response of the preamplifier is based on the RIAA (Recording Industry Association of America) equalization curve whose Bode plot is shown in Figure P9.38, with break frequencies at 50, 500, and 2122 Hz.
[Figure P9.38 (Figure for Problem 9.38): Bode plot of the RIAA equalization curve, gain |H| in dB versus f in Hz (log scale), falling at -20 dB/dec between the break frequencies at 50, 500, and 2122 Hz.]
(a) What is the transfer function H(s) of the RIAA equalizer?
(b) It is required to implement RIAA equalization using a digital filter. Assume that the signal from the cartridge is band-limited to 15 kHz. Design an IIR filter H_I(z), using impulse invariance, that implements the equalization characteristic.
(c) Use the bilinear transformation to design an IIR filter H_B(z) that implements the equalization characteristic. Assume that the gains of H(s) and H_B(z) are to match at 1 kHz.
(d) Compare the performance of H(s) with H_B(z) and H_I(z) and comment on the results. Which IIR design method results in the better implementation?
9.39 (Audio Equalizers) Many hi-fi systems are equipped with graphic equalizers to tailor the frequency response. Consider the design of a four-band graphic equalizer. The first section is to be a lowpass filter with a passband edge of 300 Hz. The next two sections are bandpass filters with passband edges of [300, 1000] Hz and [1000, 3000] Hz, respectively. The fourth section is a highpass filter with a passband edge at 3 kHz. The sampling rate is to be 20 kHz. Implement this equalizer, using FIR filters based on window design. Repeat the design, using an optimal FIR filter. Repeat the design, using an IIR filter based on the bilinear transformation. For each design, superimpose plots of the frequency response of each section and their parallel combination. What are the differences between the IIR and FIR designs? Which design would you recommend?
Chapter 10
THE DISCRETE FOURIER TRANSFORM AND ITS APPLICATIONS
10.0 Scope and Objectives
This chapter introduces the discrete Fourier transform (DFT) as a means of examining sampled signals in
the frequency domain. It describes the DFT, its properties and its relationship to other frequency domain
transforms. It discusses efficient algorithms for implementing the DFT that go by the generic name of fast
Fourier transforms (FFTs). It concludes with applications of the DFT and FFT to digital signal processing.
10.1 Introduction
The processing of analog signals using digital methods continues to gain widespread popularity. The Fourier
series of analog periodic signals and the DTFT of discrete-time signals are duals of each other and are similar
in many respects. In theory, both offer great insight into the spectral description of signals. In practice, both
suffer from (similar) problems in their implementation. The finite memory limitations and finite precision of
digital computers constrain us to work with a finite set of quantized numbers for describing signals in both
time and frequency. This brings out two major problems inherent in the Fourier series and the DTFT as
tools for digital signal processing. Both typically require an infinite number of samples (the Fourier series
for its spectrum and the DTFT for its time signal). Both deal with one continuous variable (time t or digital
frequency F). A numerical approximation that can be implemented using digital computers requires that
we replace the continuous variable with a discrete one and limit the number of samples to a nite value in
both domains.
10.1.1 Connections Between Frequency-Domain Transforms
Sampling and duality provide the basis for the connection between the various frequency-domain transforms
and the concepts are worth repeating. Sampling in one domain induces a periodic extension in the other.
The sample spacing in one domain is the reciprocal of the period in the other. Periodic analog signals have
discrete spectra, and discrete-time signals have continuous periodic spectra. A consequence of these concepts
is that a sequence that is both discrete and periodic in one domain is also discrete and periodic in the other.
This leads to the development of the discrete Fourier transform (DFT) and discrete Fourier series
(DFS), allowing us a practical means of arriving at the sampled spectrum of sampled signals using digital
computers. The connections are illustrated in Figure 10.1 and summarized in Table 10.1.
[Figure 10.1: Features of the various transforms. The Fourier transform pairs a nonperiodic analog signal with a nonperiodic spectrum; the Fourier series pairs a periodic signal with a discrete spectrum; the DTFT pairs a discrete signal with a periodic spectrum; and the DFT pairs a signal that is both discrete and periodic with a spectrum that is also discrete and periodic.]
Table 10.1 Connections Between Various Transforms

Operation in the Time Domain                Result in the Frequency Domain              Transform
Aperiodic, continuous x(t)                  Aperiodic, continuous X(f)                  FT
Periodic extension of x(t) → x_p(t)         Sampling of X(f) → X[k]                     FS
  (Period = T)                                (Sampling interval = 1/T = f₀)
Sampling of x_p(t) → x_p[n]                 Periodic extension of X[k] → X_DFS[k]       DFS
  (Sampling interval = t_s)                   (Period = S = 1/t_s)
Sampling of x(t) → x[n]                     Periodic extension of X(f) → X(F)           DTFT
  (Sampling interval = 1)                     (Period = 1)
Periodic extension of x[n] → x_p[n]         Sampling of X(F) → X_DFT[k]                 DFT
  (Period = N)                                (Sampling interval = 1/N)
Since sampling in one domain leads to a periodic extension in the other, a sampled representation in
both domains also forces periodicity in both domains. This leads to two slightly different but functionally
equivalent sets of relations, depending on the order in which we sample time and frequency, as listed in
Table 10.2. If we first sample an analog signal x(t), the sampled signal has a periodic spectrum X(F) (the
DTFT), and sampling of X(F) leads to the DFT representation. If we first sample the Fourier transform
X(f) in the frequency domain, the samples represent the Fourier series coefficients of a periodic time signal
x_p(t), and sampling of x_p(t) leads to the discrete Fourier series (DFS) as the periodic extension of the
frequency-domain samples. The DFS differs from the DFT only by a constant scale factor.
Table 10.2 Relating Frequency-Domain Transforms

Fourier transform
  Aperiodic/continuous signal: x(t) = ∫ X(f)e^{j2πft} df
  Aperiodic/continuous spectrum: X(f) = ∫ x(t)e^{-j2πft} dt

Sampling x(t) (DTFT)
  Sampled time signal: x[n] = ∫₁ X(F)e^{j2πnF} dF
  Periodic spectrum (period = 1): X(F) = Σ_{n=-∞}^{∞} x[n]e^{-j2πnF}

Sampling X(f) (Fourier series)
  Sampled spectrum: X[k] = (1/T) ∫_T x_p(t)e^{-j2πkf₀t} dt
  Periodic time signal (period = T): x_p(t) = Σ_{k=-∞}^{∞} X[k]e^{j2πkf₀t}

Sampling X(F) (DFT)
  Sampled/periodic spectrum: X_DFT[k] = Σ_{n=0}^{N-1} x[n]e^{-j2πnk/N}
  Sampled/periodic time signal: x[n] = (1/N) Σ_{k=0}^{N-1} X_DFT[k]e^{j2πnk/N}

Sampling x_p(t) (DFS)
  Sampled/periodic time signal: x[n] = Σ_{k=0}^{N-1} X_DFS[k]e^{j2πnk/N}
  Sampled/periodic spectrum: X_DFS[k] = (1/N) Σ_{n=0}^{N-1} x[n]e^{-j2πnk/N}
10.2 The DFT
The N-point discrete Fourier transform (DFT) X_DFT[k] of an N-sample signal x[n] and the inverse
discrete Fourier transform (IDFT), which transforms X_DFT[k] to x[n], are defined by

X_DFT[k] = Σ_{n=0}^{N-1} x[n] e^{-j2πnk/N},  k = 0, 1, 2, . . . , N - 1   (10.1)

x[n] = (1/N) Σ_{k=0}^{N-1} X_DFT[k] e^{j2πnk/N},  n = 0, 1, 2, . . . , N - 1   (10.2)
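The defining relations translate directly into code. A minimal (and deliberately slow, O(N²)) sketch of Eqs. (10.1) and (10.2), assuming numpy:

```python
import numpy as np

def dft(x):
    """Direct N-point DFT of Eq. (10.1): X[k] = sum_n x[n] e^{-j 2 pi n k / N}."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * n * k / N)) for k in range(N)])

def idft(X):
    """Inverse DFT of Eq. (10.2), including the 1/N factor."""
    N = len(X)
    k = np.arange(N)
    return np.array([np.sum(X * np.exp(2j * np.pi * n * k / N)) for n in range(N)]) / N

x = np.array([1.0, 2.0, 1.0, 0.0])
X = dft(x)
print(np.allclose(idft(X), x))        # the IDFT recovers x[n] exactly
```

This direct form makes each point of the text visible: every X[k] is a weighted sum of all N samples, and the pair inverts exactly because of the orthogonality of the exponentials.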
Each relation is a set of N equations. Each DFT sample is found as a weighted sum of all the samples in x[n].
One of the most important properties of the DFT and its inverse is implied periodicity. The exponential
e^{j2πnk/N} in the defining relations is periodic in both n and k with period N:

e^{j2πnk/N} = e^{j2π(n+N)k/N} = e^{j2πn(k+N)/N}   (10.3)

As a result, the DFT and its inverse are also periodic with period N, and it is sufficient to compute the
results for only one period (0 to N - 1). Both x[n] and X_DFT[k] have a starting index of zero.
REVIEW PANEL 10.1
The N-Point DFT and N-Point IDFT Are Periodic with Period N
X_DFT[k] = Σ_{n=0}^{N-1} x[n]e^{-j2πnk/N}, k = 0, 1, 2, . . . , N-1
x[n] = (1/N) Σ_{k=0}^{N-1} X_DFT[k]e^{j2πnk/N}, n = 0, 1, 2, . . . , N-1
EXAMPLE 10.1 (DFT from the Defining Relation)
Let x[n] = {1, 2, 1, 0}. With N = 4 and e^{-j2πnk/N} = e^{-jπnk/2}, we successively compute

k = 0: X_DFT[0] = Σ_{n=0}^{3} x[n]e^{0} = 1 + 2 + 1 + 0 = 4
k = 1: X_DFT[1] = Σ_{n=0}^{3} x[n]e^{-jπn/2} = 1 + 2e^{-jπ/2} + e^{-jπ} + 0 = -j2
k = 2: X_DFT[2] = Σ_{n=0}^{3} x[n]e^{-jπn} = 1 + 2e^{-jπ} + e^{-j2π} + 0 = 0
k = 3: X_DFT[3] = Σ_{n=0}^{3} x[n]e^{-j3πn/2} = 1 + 2e^{-j3π/2} + e^{-j3π} + 0 = j2

The DFT is thus X_DFT[k] = {4, -j2, 0, j2}.
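The hand computation is easy to cross-check, since numpy's FFT uses the same e^{-j2πnk/N} sign convention as Eq. (10.1):

```python
import numpy as np

x = [1, 2, 1, 0]
X = np.fft.fft(x)     # same convention as Eq. (10.1)
print(X)              # matches {4, -j2, 0, j2} from the example (up to float roundoff)
```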
10.3 Properties of the DFT
The properties of the DFT are summarized in Table 10.3. They are strikingly similar to other frequency-
domain transforms, but must always be used in keeping with implied periodicity (of the DFT and IDFT) in
both domains.
10.3.1 Symmetry
In analogy with all other frequency-domain transforms, the DFT of a real sequence possesses conjugate
symmetry about the origin with X_DFT[-k] = X*_DFT[k]. Since the DFT is periodic, X_DFT[-k] = X_DFT[N - k].
This also implies conjugate symmetry about the index k = 0.5N, and thus

X*_DFT[k] = X_DFT[-k] = X_DFT[N - k]   (10.4)
Table 10.3 Properties of the N-Sample DFT

Property             Signal                      DFT                              Remarks
Shift                x[n - n₀]                   X_DFT[k] e^{-j2πkn₀/N}           No change in magnitude
Shift                x[n - 0.5N]                 (-1)^k X_DFT[k]                  Half-period shift for even N
Modulation           x[n] e^{j2πnk₀/N}           X_DFT[k - k₀]
Modulation           (-1)^n x[n]                 X_DFT[k - 0.5N]                  Half-period shift for even N
Folding              x[-n]                       X_DFT[-k]                        This is circular folding.
Product              x[n] y[n]                   (1/N) X_DFT[k] ⊛ Y_DFT[k]        The convolution is periodic.
Convolution          x[n] ⊛ y[n]                 X_DFT[k] Y_DFT[k]                The convolution is periodic.
Correlation          x[n] ⊛⊛ y[n]                X_DFT[k] Y*_DFT[k]               The correlation is periodic.
Central ordinates    x[0] = (1/N) Σ_{k=0}^{N-1} X_DFT[k]        X_DFT[0] = Σ_{n=0}^{N-1} x[n]
Central ordinates    x[N/2] = (1/N) Σ_{k=0}^{N-1} (-1)^k X_DFT[k] (N even)
                     X_DFT[N/2] = Σ_{n=0}^{N-1} (-1)^n x[n] (N even)
Parseval's relation  Σ_{n=0}^{N-1} |x[n]|² = (1/N) Σ_{k=0}^{N-1} |X_DFT[k]|²
[Figure: DFT magnitude for an odd-length (N = 7) and an even-length (N = 8) real signal, each showing conjugate symmetry X[k] = X*[N − k] about the folding index k = N/2]
Figure 10.2 Symmetry of the DFT for real signals
If N is odd, the conjugate symmetry is about the half-integer value 0.5N. The index k = 0.5N is called the
folding index. This is illustrated in Figure 10.2.
Conjugate symmetry suggests that we need compute only half the DFT values to find the entire DFT
sequence, another labor-saving concept! A similar result applies to the IDFT.
REVIEW PANEL 10.2
The DFT Shows Conjugate Symmetry for Real Signals
X_DFT[−k] = X*_DFT[k]        X_DFT[k] = X*_DFT[N − k]
10.3.2 Central Ordinates and Special DFT Values
The computation of the DFT at the indices k = 0 and (for even N) at k = N/2 can be simplified using the
central ordinate theorems that arise as a direct consequence of the defining relations. In particular, we find
that X_DFT[0] equals the sum of the N signal samples x[n], and X_DFT[N/2] equals the sum of (−1)^n x[n] (with
alternating sign changes). This also implies that if x[n] is real valued, so are X_DFT[0] and X_DFT[N/2]. Similar
results hold for the IDFT.
REVIEW PANEL 10.3
The DFT Is Easy to Compute at k = 0 and (for Even N) at k = N/2

X_DFT[0] = Σ_{n=0}^{N−1} x[n]        X_DFT[N/2] = Σ_{n=0}^{N−1} (−1)^n x[n]
x[0] = (1/N) Σ_{k=0}^{N−1} X_DFT[k]        x[N/2] = (1/N) Σ_{k=0}^{N−1} (−1)^k X_DFT[k]
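The central ordinate theorems are quick to spot-check numerically; here is a sketch using the sequence of Example 10.1:

```python
# Spot-check the central ordinate theorems on x[n] = {1, 2, 1, 0} (N = 4, even).
import numpy as np

x = np.array([1.0, 2.0, 1.0, 0.0])
N = len(x)
n = np.arange(N)
X = np.fft.fft(x)

print(X[0].real, x.sum())                       # X[0] equals the sum of samples
print(X[N // 2].real, ((-1.0)**n * x).sum())    # X[N/2] equals the alternating sum
```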
10.3.3 Circular Shift and Circular Symmetry
The defining relation for the DFT requires signal values for 0 ≤ n ≤ N − 1. By implied periodicity, these
values correspond to one period of a periodic signal. If we wish to find the DFT of a time-shifted signal
x[n − n₀], its values must also be selected over (0, N − 1) from its periodic extension. This concept is called
circular shifting. To generate x[n − n₀], we delay x[n] by n₀, create the periodic extension of the shifted
signal, and pick N samples over (0, N − 1). This is equivalent to moving the last n₀ samples of x[n] to
the beginning of the sequence. Similarly, to generate x[n + n₀], we move the first n₀ samples to the end
of the sequence. Circular folding generates the signal x[−n] from x[n]. We fold x[n], create the periodic
extension of the folded signal, and pick N samples of the periodic extension over (0, N − 1).
REVIEW PANEL 10.4
Generating One Period (0 ≤ n < N) of a Circularly Shifted Periodic Signal
To generate x[n − n₀]: Move the last n₀ samples of x[n] to the beginning.
To generate x[n + n₀]: Move the first n₀ samples of x[n] to the end.
Even symmetry of x[n] requires that x[n] = x[−n]. Its implied periodicity also means x[n] = x[N − n],
and the periodic signal x[n] is said to possess circular even symmetry. Similarly, for circular odd
symmetry, we have x[n] = −x[N − n].

REVIEW PANEL 10.5
Circular Symmetry for Real Periodic Signals with Period N
Circular even symmetry: x[n] = x[N − n]        Circular odd symmetry: x[n] = −x[N − n]
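These sample rotations amount to index arithmetic modulo N. A sketch using NumPy (`np.roll` performs exactly the rotation described above), applied to the sequence that appears later in Example 10.2(a):

```python
# Circular shift and circular folding of an N-point sequence.
import numpy as np

y = np.array([1, 2, 3, 4, 5, 0, 0, 0])
N = len(y)

f = np.roll(y, 2)                    # y[n - 2]: last 2 samples move to the front
g = np.roll(y, -2)                   # y[n + 2]: first 2 samples move to the end
h = y[(-np.arange(N)) % N]           # y[-n]: circular folding

print(f)   # [0 0 1 2 3 4 5 0]
print(g)   # [3 4 5 0 0 0 1 2]
print(h)   # [1 0 0 0 5 4 3 2]
```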
10.3.4 Convolution
Convolution in one domain transforms to multiplication in the other. Due to the implied periodicity in both
domains, the convolution operation describes periodic, not regular, convolution. This also applies to the
correlation operation.
REVIEW PANEL 10.6
Multiplication in One Domain Corresponds to Discrete Periodic Convolution in the Other
Periodic Convolution
The DFT offers an indirect means of finding the periodic convolution y[n] = x[n] ⊛ h[n] of two sequences
x[n] and h[n] of equal length N. We compute their N-sample DFTs X_DFT[k] and H_DFT[k], multiply them
to obtain Y_DFT[k] = X_DFT[k]H_DFT[k], and find the inverse of Y_DFT[k] to obtain the periodic convolution y[n]:

x[n] ⊛ h[n] ⟺ X_DFT[k]H_DFT[k]        (10.5)
Periodic Correlation
Periodic correlation can be implemented using the DFT in almost exactly the same way as periodic convo-
lution, except for an extra conjugation step prior to taking the inverse DFT. The periodic correlation of two
sequences x[n] and h[n] of equal length N gives

r_xh[n] = x[n] ⊛⊛ h[n] ⟺ X_DFT[k]H*_DFT[k]        (10.6)

If x[n] and h[n] are real, the final result r_xh[n] must also be real (to within machine roundoff).
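Both operations are one-liners once the DFTs are available. A minimal sketch (the function names are ours, not the book's):

```python
# Periodic convolution and periodic correlation of equal-length sequences via the DFT.
import numpy as np

def periodic_convolve(x, h):
    # y[n] <-> X[k] H[k]
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

def periodic_correlate(x, h):
    # r_xh[n] <-> X[k] H*[k]: note the extra conjugation step
    return np.real(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(h))))

x = np.array([1.0, 2.0, 1.0, 0.0])
print(periodic_convolve(x, x))    # [2. 4. 6. 4.]
```

The printed result matches the periodic convolution worked out by hand in Example 10.2, part 6.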
Regular Convolution and Correlation
We can also find regular convolution (or correlation) using the DFT. For two sequences of length M and N,
the regular convolution (or correlation) contains M + N − 1 samples. We must thus pad each sequence with
enough zeros to make each sequence of length M + N − 1 before finding the DFT.

REVIEW PANEL 10.7
Regular Convolution by the DFT Requires Zero-Padding
If x[n] and h[n] are of length M and N, create x_z[n] and h_z[n], each zero-padded to length M + N − 1.
Find the DFT of the zero-padded signals, multiply the DFT sequences, and find y[n] as the inverse.
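The zero-padding recipe of the review panel translates directly into code; `np.fft.fft(x, n=L)` zero-pads to length L before transforming (a sketch, checked against direct convolution):

```python
# Regular (linear) convolution via zero-padded DFTs.
import numpy as np

def regular_convolve_dft(x, h):
    L = len(x) + len(h) - 1              # length of the regular convolution
    return np.real(np.fft.ifft(np.fft.fft(x, n=L) * np.fft.fft(h, n=L)))

x = np.array([1.0, 2.0, 1.0, 0.0])
y = regular_convolve_dft(x, x)
print(np.round(y, 10))                   # [1. 4. 6. 4. 1. 0. 0.]
```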
10.3.5 The FFT
The DFT describes a set of N equations, each with N product terms, and thus requires a total of N²
multiplications for its computation. Computationally efficient algorithms to obtain the DFT go by the
generic name FFT (fast Fourier transform) and need far fewer multiplications. In particular, radix-2 FFT
algorithms require the number of samples N to be a power of 2 and compute the DFT using only N log₂ N
multiplications. We discuss such algorithms in a later section.

REVIEW PANEL 10.8
The FFT Describes Computationally Efficient Algorithms for Finding the DFT
Radix-2 FFT algorithms require the number of samples N to be a power of 2 (N = 2^m, integer m).
EXAMPLE 10.2 (DFT Computations and Properties)
(a) Let y[n] = {1, 2, 3, 4, 5, 0, 0, 0}, n = 0, 1, 2, . . . , 7. Find one period of the circularly shifted signals
f[n] = y[n − 2], g[n] = y[n + 2], and the circularly folded signal h[n] = y[−n] over 0 ≤ n ≤ 7.

1. To create f[n] = y[n − 2], we move the last two samples to the beginning. So,
f[n] = y[n − 2] = {0, 0, 1, 2, 3, 4, 5, 0}, n = 0, 1, . . . , 7.

2. To create g[n] = y[n + 2], we move the first two samples to the end. So,
g[n] = y[n + 2] = {3, 4, 5, 0, 0, 0, 1, 2}, n = 0, 1, . . . , 7.

3. To create h[n] = y[−n], we fold y[n] to {0, 0, 0, 5, 4, 3, 2, 1}, n = −7, −6, −5, . . . , 0, and create
its periodic extension by moving all samples preceding y[0] past y[0] to get
h[n] = y[−n] = {1, 0, 0, 0, 5, 4, 3, 2}, n = 0, 1, 2, . . . , 7.
(b) Let us find the DFT of x[n] = {1, 1, 0, 0, 0, 0, 0, 0}, n = 0, 1, 2, . . . , 7.
Since only x[0] and x[1] are nonzero, the upper index in the DFT summation will be n = 1 and the
DFT reduces to

X_DFT[k] = Σ_{n=0}^{1} x[n] e^{−j2πnk/8} = 1 + e^{−jπk/4},  k = 0, 1, 2, . . . , 7

Since N = 8, we need compute X_DFT[k] only for k ≤ 0.5N = 4. Now, X_DFT[0] = 1 + 1 = 2 and
X_DFT[4] = 1 − 1 = 0. For the rest (k = 1, 2, 3), we compute X_DFT[1] = 1 + e^{−jπ/4} = 1.707 − j0.707,
X_DFT[2] = 1 + e^{−jπ/2} = 1 − j, and X_DFT[3] = 1 + e^{−j3π/4} = 0.293 − j0.707.
By conjugate symmetry, X_DFT[k] = X*_DFT[N − k] = X*_DFT[8 − k]. This gives
X_DFT[5] = X*_DFT[3] = 0.293 + j0.707,  X_DFT[6] = X*_DFT[2] = 1 + j,  X_DFT[7] = X*_DFT[1] = 1.707 + j0.707.
Thus, X_DFT[k] = {2, 1.707 − j0.707, 1 − j, 0.293 − j0.707, 0, 0.293 + j0.707, 1 + j, 1.707 + j0.707}.
(c) Consider the DFT pair x[n] = {1, 2, 1, 0} ⟺ X_DFT[k] = {4, −j2, 0, j2} with N = 4.

1. (Time Shift) To find y[n] = x[n − 2], we move the last two samples to the beginning to get
y[n] = x[n − 2] = {1, 0, 1, 2}, n = 0, 1, 2, 3.
To find the DFT of y[n] = x[n − 2], we use the time-shift property (with n₀ = 2) to give
Y_DFT[k] = X_DFT[k] e^{−j2πkn₀/4} = X_DFT[k] e^{−jπk} = {4, j2, 0, −j2}.

2. (Modulation) The sequence Z_DFT[k] = X_DFT[k − 1] equals {j2, 4, −j2, 0}. Its IDFT is
z[n] = x[n] e^{j2πn/4} = x[n] e^{jπn/2} = {1, j2, −1, 0}.

3. (Folding) The sequence g[n] = x[−n] is g[n] = {x[0], x[3], x[2], x[1]} = {1, 0, 1, 2}.
Its DFT equals G_DFT[k] = X_DFT[−k] = X*_DFT[k] = {4, j2, 0, −j2}.

4. (Conjugation) The sequence p[n] = x*[n] equals x[n] = {1, 2, 1, 0}, since x[n] is real. Its DFT is
P_DFT[k] = X*_DFT[N − k] = {4, −j2, 0, j2} = X_DFT[k], as expected for a real signal.

5. (Product) The sequence h[n] = x[n]x[n] is the pointwise product. So, h[n] = {1, 4, 1, 0}.
Its DFT is H_DFT[k] = (1/4) X_DFT[k] ⊛ X_DFT[k] = (1/4) {4, −j2, 0, j2} ⊛ {4, −j2, 0, j2}.
Keep in mind that this is a periodic convolution.
The result is H_DFT[k] = (1/4){24, −j16, −8, j16} = {6, −j4, −2, j4}.

6. (Periodic Convolution) The periodic convolution c[n] = x[n] ⊛ x[n] gives
c[n] = {1, 2, 1, 0} ⊛ {1, 2, 1, 0} = {2, 4, 6, 4}.
Its DFT is given by the pointwise product
C_DFT[k] = X_DFT[k]X_DFT[k] = {16, −4, 0, −4}.

7. (Regular Convolution) The regular convolution s[n] = x[n] ∗ x[n] gives
s[n] = {1, 2, 1, 0} ∗ {1, 2, 1, 0} = {1, 4, 6, 4, 1, 0, 0}.
Since x[n] has 4 samples (N = 4), the DFT S_DFT[k] of s[n] is the pointwise product with itself of the DFT of the
zero-padded (to length N + N − 1 = 7) signal x_z[n] = {1, 2, 1, 0, 0, 0, 0}, and equals
{16, −2.35 − j10.28, −2.18 + j1.05, 0.02 + j0.03, 0.02 − j0.03, −2.18 − j1.05, −2.35 + j10.28}.
8. (Central Ordinates) It is easy to check that x[0] = (1/4) Σ X_DFT[k] and X_DFT[0] = Σ x[n].

9. (Parseval's Relation) We have Σ |x[n]|² = 1 + 4 + 1 + 0 = 6.
Since |X_DFT[k]|² = {16, 4, 0, 4}, we also have (1/4) Σ |X_DFT[k]|² = (1/4)(16 + 4 + 4) = 6.
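Several of the properties exercised above are easy to confirm numerically. Here is a sketch checking the time-shift result of part 1:

```python
# Check the time-shift property: y[n] = x[n - n0] has DFT X[k] e^{-j 2 pi k n0 / N}.
import numpy as np

x = np.array([1.0, 2.0, 1.0, 0.0])
N, n0 = len(x), 2
k = np.arange(N)

y = np.roll(x, n0)                                   # circular shift: {1, 0, 1, 2}
lhs = np.fft.fft(y)
rhs = np.fft.fft(x) * np.exp(-2j * np.pi * k * n0 / N)
print(np.allclose(lhs, rhs))   # True
```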
10.3.6 Signal Replication and Spectrum Zero Interpolation
In analogy with the DTFT and Fourier series, two useful DFT results are that replication in one domain
leads to zero interpolation in the other. Formally, if x[n] ⟺ X_DFT[k] form a DFT pair, M-fold replication
of x[n] to y[n] = {x[n], x[n], . . . , x[n]} leads to zero interpolation of the DFT to Y_DFT[k] = MX_DFT[k/M].
The multiplying factor M ensures that we satisfy Parseval's theorem and the central ordinate relations. Its
dual is the result that zero interpolation of x[n] to z[n] = x[n/M] leads to M-fold replication of the DFT to
Z_DFT[k] = {X_DFT[k], X_DFT[k], . . . , X_DFT[k]}.
REVIEW PANEL 10.9
Replication in One Domain Corresponds to Zero Interpolation in the Other
If a signal is replicated by M, its DFT is zero interpolated and scaled by M.
If x[n] ⟺ X_DFT[k], then {x[n], x[n], . . . , x[n]} (M-fold replication) ⟺ MX_DFT[k/M].
If a signal is zero-interpolated by M, its DFT shows M-fold replication.
If x[n] ⟺ X_DFT[k], then x[n/M] ⟺ {X_DFT[k], X_DFT[k], . . . , X_DFT[k]} (M-fold replication).
EXAMPLE 10.3 (Signal and Spectrum Replication)
Let x[n] = {2, 3, 2, 1} and X_DFT[k] = {8, −j2, 0, j2}. Find the DFT of the 12-point signal described by
y[n] = {x[n], x[n], x[n]} and the 12-point zero-interpolated signal h[n] = x[n/3].

(a) Signal replication by 3 leads to spectrum zero interpolation and multiplication by 3. Thus,

Y_DFT[k] = 3X_DFT[k/3] = {24, 0, 0, −j6, 0, 0, 0, 0, 0, j6, 0, 0}

(b) Signal zero interpolation by 3 leads to spectrum replication by 3. Thus,

H_DFT[k] = {X_DFT[k], X_DFT[k], X_DFT[k]} = {8, −j2, 0, j2, 8, −j2, 0, j2, 8, −j2, 0, j2}
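Both halves of this example can be verified in a few lines (a verification sketch):

```python
# Check Example 10.3: replication <-> zero interpolation, with M = 3.
import numpy as np

x = np.array([2.0, 3.0, 2.0, 1.0])
M = 3
X = np.fft.fft(x)

y = np.tile(x, M)                        # 3-fold replication {x[n], x[n], x[n]}
h = np.zeros(M * len(x)); h[::M] = x     # zero interpolation x[n/3]

Y = np.fft.fft(y)
H = np.fft.fft(h)

print(np.allclose(Y[::M], M * X))        # Y holds M X[k] at multiples of M ...
print(np.allclose(np.delete(Y, np.arange(0, len(Y), M)), 0))  # ... zeros elsewhere
print(np.allclose(H, np.tile(X, M)))     # H is M copies of X
```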
10.3.7 Some Useful DFT Pairs
The DFT of finite sequences defined mathematically often results in very unwieldy expressions, which explains
the lack of many standard DFT pairs. However, the following DFT pairs are quite useful and easy to
obtain from the defining relation and properties:

{1, 0, 0, . . . , 0} (impulse) ⟺ {1, 1, 1, . . . , 1} (constant)        (10.7)
{1, 1, 1, . . . , 1} (constant) ⟺ {N, 0, 0, . . . , 0} (impulse)        (10.8)
α^n (exponential) ⟺ (1 − α^N)/(1 − α e^{−j2πk/N})        (10.9)
cos(2πnk₀/N) (sinusoid) ⟺ 0.5Nδ[k − k₀] + 0.5Nδ[k − (N − k₀)] (impulse pair)        (10.10)
The first result is a direct consequence of the defining relation. For the second result, the DFT is
Σ_{n=0}^{N−1} e^{−j2πnk/N}, the sum of N equally spaced vectors of unit length, and equals zero (except when k = 0). For the third
result, we use the defining relation with x[n] = α^n and the fact that e^{−j2πk} = 1 to obtain

X_DFT[k] = Σ_{n=0}^{N−1} (α e^{−j2πk/N})^n = [1 − (α e^{−j2πk/N})^N] / (1 − α e^{−j2πk/N}) = (1 − α^N)/(1 − α e^{−j2πk/N})        (10.11)
Finally, the transform pair for the sinusoid says that for a periodic sinusoid x[n] = cos(2πnF) whose digital
frequency is F = k₀/N, the DFT is a pair of impulses at k = k₀ and k = N − k₀. By Euler's relation,
x[n] = 0.5e^{j2πnk₀/N} + 0.5e^{−j2πnk₀/N} and, by periodicity, 0.5e^{−j2πnk₀/N} = 0.5e^{j2πn(N−k₀)/N}. Then, with the
DFT pair 1 ⟺ Nδ[k] and the modulation property, we get the required result.
REVIEW PANEL 10.10
The N-Point DFT of a Sinusoid with Period N and F = k₀/N Has Two Nonzero Samples

cos(2πnk₀/N) ⟺ (N/2)δ[k − k₀] + (N/2)δ[k − (N − k₀)] = {0, . . . , 0, N/2 (at k = k₀), 0, . . . , 0, N/2 (at k = N − k₀), 0, . . . , 0} (DFT)
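This two-sample spectrum is easy to demonstrate (N = 16 and k₀ = 3 below are arbitrary illustrative values):

```python
# The N-point DFT of cos(2 pi n k0 / N): two samples of height N/2, at k0 and N - k0.
import numpy as np

N, k0 = 16, 3                      # illustrative values
n = np.arange(N)
x = np.cos(2 * np.pi * n * k0 / N)

X = np.fft.fft(x)
expected = np.zeros(N)
expected[k0] = expected[N - k0] = N / 2
print(np.allclose(X, expected, atol=1e-9))   # True
```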
10.4 Some Practical Guidelines
From a purely mathematical or computational standpoint, the DFT simply tells us how to transform a set
of N numbers into another set of N numbers. Its physical significance (what the numbers mean), however,
stems from its ties to the spectra of both analog and discrete signals. In general, the DFT is only an
approximation to the actual (Fourier series or transform) spectrum of the underlying analog signal. The
DFT spectral spacing and DFT magnitude are affected by the choice of sampling rate and how the sample
values are chosen. The DFT phase is affected by the location of the sampling instants. The DFT spectral
spacing is also affected by the sampling duration. Here are some practical guidelines on how to obtain samples
of an analog signal x(t) for spectrum analysis and interpret the DFT (or DFS) results.

Choice of sampling instants: The defining relation for the DFT (or DFS) mandates that samples of x[n]
be chosen over the range 0 ≤ n ≤ N − 1 (through periodic extension, if necessary). Otherwise, the DFT (or
DFS) phase will not match the expected phase.

Choice of samples: If a sampling instant corresponds to a jump discontinuity, the sample value should be
chosen as the midpoint of the discontinuity. The reason is that the Fourier series (or transform) converges
to the midpoint of any discontinuity.

Choice of frequency axis: The computation of the DFT (or DFS) is independent of the sampling frequency
S or sampling interval t_s = 1/S. However, if an analog signal is sampled at a sampling rate S, its spectrum
is periodic with period S. The DFT spectrum describes one period (N samples) of this spectrum, starting
at the origin. For sampled signals, it is useful to plot the DFT (or DFS) magnitude and phase against the
analog frequency f = kS/N Hz, k = 0, 1, . . . , N − 1 (with spacing S/N). For discrete-time signals, we can
plot the DFT against the digital frequency F = k/N, k = 0, 1, . . . , N − 1 (with spacing 1/N). These choices
are illustrated in Figure 10.3.
Choice of frequency range: To compare the DFT results with conventional two-sided spectra, just
remember that by periodicity, a negative frequency −f₀ (at the index −k₀) in the two-sided spectrum
corresponds to the frequency S − f₀ (at the index N − k₀) in the (one-sided) DFT spectrum.

Identifying the highest frequency: The highest frequency in the DFT spectrum corresponds to the
folding index k = 0.5N and equals F = 0.5 for discrete signals or f = 0.5S Hz for sampled analog signals.
This highest frequency is also called the folding frequency. For purposes of comparison, it is sufficient to
plot the DFT spectra only over 0 ≤ k < 0.5N (or 0 ≤ F < 0.5 for discrete-time signals, or 0 ≤ f < 0.5S Hz
for sampled analog signals).
[Figure: a DFT magnitude plot whose horizontal axis may be labeled in several ways: the index k = 0, 1, 2, . . . , N − 1; the digital frequency F = k/N; the analog frequency f = kS/N Hz (spacing S/N); or the radian frequencies Ω = 2πF (digital) and ω = 2πf (analog)]
Figure 10.3 Various ways of plotting the DFT
Plotting reordered spectra: The DFT (or DFS) may also be plotted as a two-sided spectrum to reveal
conjugate symmetry about the origin by creating its periodic extension. This is equivalent to creating a
reordered spectrum by relocating the DFT samples at indices past the folding index k = 0.5N to the left of
the origin (because X[−k] = X[N − k]). This process is illustrated in Figure 10.4.

[Figure: the samples of an original DFT spectrum at indices k > N/2 are relocated to negative indices, using X[−k] = X[N − k], to give the reordered (two-sided) spectrum]
Figure 10.4 Plotting the DFT or its reordered samples
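In NumPy this relocation is exactly what `np.fft.fftshift` does, and `np.fft.fftfreq` generates the matching two-sided frequency axis (the values N = 8, S = 16 Hz are illustrative):

```python
# Reordering a DFT into a two-sided spectrum: samples past the folding index N/2
# are relocated to negative frequencies.
import numpy as np

N, S = 8, 16.0                                    # illustrative: 8 samples at 16 Hz
f_onesided = np.arange(N) * S / N                 # 0, 2, ..., 14 Hz
f_twosided = np.fft.fftshift(np.fft.fftfreq(N, d=1 / S))

print(f_onesided)   # [ 0.  2.  4.  6.  8. 10. 12. 14.]
print(f_twosided)   # [-8. -6. -4. -2.  0.  2.  4.  6.]
```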
REVIEW PANEL 10.11
Practical Guidelines for Sampling a Signal and Interpreting the DFT Results
Sampling: Start at t = 0. Choose the midpoint value at jumps. Sample above the Nyquist rate.
Plotting: Plot the DFT against the index k = 0, 1, . . . , N − 1, or F = k/N, or f = kS/N Hz.
Frequency spacing of DFT samples: Δf = S/N Hz (analog) or ΔF = 1/N (digital frequency).
Highest frequency: This equals F = 0.5 or f = 0.5S, corresponding to the index k = N/2.
For long sequences: The DFT magnitude/phase are usually plotted as continuous functions.
10.5 Approximating the DTFT by the DFT
The DTFT relation and its inverse are

X_p(F) = Σ_{n=−∞}^{∞} x[n] e^{−j2πnF}        x[n] = ∫₁ X_p(F) e^{j2πnF} dF  (integral over one unit period)        (10.12)

where X_p(F) is periodic with unit period. If x[n] is a finite N-point sequence with n = 0, 1, . . . , N − 1, we
obtain N samples of the DTFT over one period at intervals of 1/N as

X_DFT[k] = Σ_{n=0}^{N−1} x[n] e^{−j2πnk/N},  k = 0, 1, . . . , N − 1        (10.13)

This describes the discrete Fourier transform (DFT) of x[n] as a sampled version of its DTFT evaluated
at the frequencies F = k/N, k = 0, 1, . . . , N − 1. The DFT spectrum thus corresponds to the frequency
range 0 ≤ F < 1 and is plotted at the frequencies F = k/N, k = 0, 1, . . . , N − 1.
To recover the finite sequence x[n] from N samples of X_DFT[k], we use dF ≈ 1/N and F ≈ k/N to
approximate the integral expression in the inversion relation by

x[n] = (1/N) Σ_{k=0}^{N−1} X_DFT[k] e^{j2πnk/N},  n = 0, 1, . . . , N − 1        (10.14)

This is the inverse discrete Fourier transform (IDFT). The periodicity of the IDFT implies that x[n]
actually corresponds to one period of a periodic signal.

If x[n] is a finite N-point signal with n = 0, 1, . . . , N − 1, the DFT is an exact match to its DTFT X_p(F)
at F = k/N, k = 0, 1, . . . , N − 1, and the IDFT results in perfect recovery of x[n].
If x[n] is not time-limited, its N-point DFT is only an approximation to its DTFT X_p(F) evaluated at
F = k/N, k = 0, 1, . . . , N − 1. Due to implied periodicity, the DFT, in fact, exactly matches the DTFT of
the periodic extension of x[n] with period N at these frequencies.

If x[n] is a discrete periodic signal with period N, its scaled DFT ((1/N)X_DFT[k]) is an exact match to the
impulse strengths in its DTFT X_p(F) at F = k/N, k = 0, 1, . . . , N − 1. In this case also, the IDFT results
in perfect recovery of one period of x[n] over 0 ≤ n ≤ N − 1.

REVIEW PANEL 10.12
The DFT X_DFT[k] of an N-Sample Sequence x[n] Is an Exact Match to Its Sampled DTFT
If x[n] is also periodic, the scaled DFT (1/N)X_DFT[k] is an exact match to its sampled DTFT.
EXAMPLE 10.4 (Relating the DFT and the DTFT)
(a) Let x[n] = 1, 2, 1, 0. If we use the DTFT, we rst nd
X
p
(F) = 1 + 2e
j2F
+e
j4F
+ 0 = [2 + 2 cos(2F)]e
j2F
With N = 4, we have F = k/4, k = 0, 1, 2, 3. We then obtain the DFT as
X
DFT
[k] = [2 + 2 cos(2k/4)]e
j2k/4
, k = 0, 1, 2, 3, or X
DFT
[k] = 4, j2, 0, j2
Since x[n] is a nite sequence, the DFT and DTFT show an exact match at F = k/N, k = 0, 1, 2, 3.
With N = 4, and e^{j2πnk/N} = e^{jπnk/2}, we compute the IDFT of X_DFT[k] = {4, −j2, 0, j2} to give

n = 0:  x[0] = 0.25 Σ_{k=0}^{3} X_DFT[k] e^{0} = 0.25(4 − j2 + 0 + j2) = 1
n = 1:  x[1] = 0.25 Σ_{k=0}^{3} X_DFT[k] e^{jπk/2} = 0.25(4 − j2e^{jπ/2} + 0 + j2e^{j3π/2}) = 2
n = 2:  x[2] = 0.25 Σ_{k=0}^{3} X_DFT[k] e^{jπk} = 0.25(4 − j2e^{jπ} + 0 + j2e^{j3π}) = 1
n = 3:  x[3] = 0.25 Σ_{k=0}^{3} X_DFT[k] e^{j3πk/2} = 0.25(4 − j2e^{j3π/2} + 0 + j2e^{j9π/2}) = 0

The IDFT is thus {1, 2, 1, 0} and recovers x[n] exactly.
(b) Let x[n] = α^n u[n]. Its DTFT is X_p(F) = 1/(1 − αe^{−j2πF}). Sampling X_p(F) at intervals F = k/N gives

X_p(F)|_{F=k/N} = 1/(1 − αe^{−j2πk/N})

The N-point DFT of x[n] is

α^n (n = 0, 1, . . . , N − 1) ⟺ (1 − α^N)/(1 − αe^{−j2πk/N})

Clearly, the N-sample DFT of x[n] does not match the DTFT of x[n] (unless N → ∞).

Comment: What does match, however, is the DFT of the N-sample periodic extension x_pe[n] and the
DTFT of x[n]. We obtain one period of the periodic extension by wrapping around N-sample sections
of x[n] = α^n and adding them to give

x_pe[n] = α^n + α^{n+N} + α^{n+2N} + · · · = α^n(1 + α^N + α^{2N} + · · ·) = α^n/(1 − α^N) = x[n]/(1 − α^N)

Its DFT is thus 1/(1 − αe^{−j2πk/N}) and matches the DTFT of x[n] at F = k/N, k = 0, 1, . . . , N − 1.
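This comment is easy to confirm numerically (α = 0.5 and N = 8 below are arbitrary illustrative values):

```python
# Example 10.4(b): the DFT of one period of the periodic extension of a^n u[n]
# matches the sampled DTFT 1/(1 - a e^{-j 2 pi k / N}); the truncated signal does not.
import numpy as np

a, N = 0.5, 8                               # illustrative values
n = np.arange(N)
k = np.arange(N)

x = a**n                                    # truncated exponential
x_pe = x / (1 - a**N)                       # one period of the periodic extension
dtft_samples = 1 / (1 - a * np.exp(-2j * np.pi * k / N))

print(np.allclose(np.fft.fft(x), dtft_samples))     # False: truncated DFT differs
print(np.allclose(np.fft.fft(x_pe), dtft_samples))  # True: periodic extension matches
```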
10.6 The DFT of Periodic Signals and the DFS
The Fourier series relations for a periodic signal x_p(t) are

x_p(t) = Σ_{k=−∞}^{∞} X[k] e^{j2πkf₀t}        X[k] = (1/T) ∫_T x_p(t) e^{−j2πkf₀t} dt        (10.15)

If we acquire x[n], n = 0, 1, . . . , N − 1, as N samples of x_p(t) over one period using a sampling rate of S Hz
(corresponding to a sampling interval of t_s) and approximate the integral expression for X[k] by a summation
using dt ≈ t_s, t ≈ nt_s, T = Nt_s, and f₀ = 1/T = 1/(Nt_s), we obtain

X_DFS[k] = (1/(Nt_s)) Σ_{n=0}^{N−1} x[n] e^{−j2πkf₀nt_s} t_s = (1/N) Σ_{n=0}^{N−1} x[n] e^{−j2πnk/N},  k = 0, 1, . . . , N − 1        (10.16)
The quantity X_DFS[k] defines the discrete Fourier series (DFS) as an approximation to the Fourier series
coefficients of a periodic signal and equals 1/N times the DFT.
10.6.1 The Inverse DFS
To recover x[n] from one period of X_DFS[k], we use the Fourier series reconstruction relation whose summation
index covers one period (from k = 0 to k = N − 1) to obtain

x[n] = Σ_{k=0}^{N−1} X_DFS[k] e^{j2πkf₀nt_s} = Σ_{k=0}^{N−1} X_DFS[k] e^{j2πnk/N},  n = 0, 1, 2, . . . , N − 1        (10.17)

This relation describes the inverse discrete Fourier series (IDFS). The sampling interval t_s does not
enter into the computation of the DFS or its inverse. Except for a scale factor, the DFS and DFT relations
are identical.
10.6.2 Understanding the DFS Results
The quantity X_DFS[k] describes the spectrum of a sampled signal and is thus periodic with period S (starting at the
origin). Its N samples are spaced S/N Hz apart and plotted at the frequencies f = kS/N, k = 0, 1, . . . , N − 1.
However, the highest frequency we can identify in the DFS spectrum is 0.5S (corresponding to the folding
index k = 0.5N).

The N-sample DFS X_DFS[k] = (1/N)X_DFT[k] shows an exact match to the Fourier series coefficients X[k]
of a periodic signal x(t) only if all of the following conditions are satisfied:

1. The signal samples must be acquired from x(t) starting at t = 0 (using the periodic extension of the
signal, if necessary). Otherwise, the phase of the DFS coefficients will not match the phase of the
corresponding Fourier series coefficients.

2. The periodic signal must contain a finite number of sinusoids (to ensure a band-limited signal with a
finite highest frequency) and be sampled above the Nyquist rate. Otherwise, there will be aliasing,
whose effects become more pronounced near the folding frequency 0.5S. If the periodic signal is not
band-limited (contains an infinite number of harmonics), we cannot sample at a rate high enough to
prevent aliasing. For a pure sinusoid, the Nyquist rate corresponds to two samples per period.

3. The signal x(t) must be sampled for an integer number of periods (to ensure a match between the
periodic extension of x(t) and the implied periodic extension of the sampled signal). Otherwise, the
periodic extension of its samples will not match that of x(t), and the DFS samples will describe the
Fourier series coefficients of a different periodic signal whose harmonic frequencies do not match those
of x(t). This phenomenon is called leakage and results in nonzero spectral components at frequencies
other than the harmonic frequencies of the original signal x(t).

If we sample a periodic signal for an integer number of periods, the DFS (or DFT) also preserves the effects
of symmetry. The DFS of an even symmetric signal is real, the DFS of an odd symmetric signal is imaginary,
and the DFS of a half-wave symmetric signal is zero at even values of the index k.
REVIEW PANEL 10.13
The DFT of a Sampled Periodic Signal x(t) Is Related to Its Fourier Series Coefficients
If x(t) is band-limited and sampled for an integer number of periods, the DFT is an exact match to the
Fourier series coefficients X[k], with X_DFT[k] = NX[k].
If x(t) is not band-limited, there is aliasing.
If x(t) is not sampled for an integer number of periods, there is also leakage.
10.6.3 The DFS and DFT of Sinusoids
Consider the sinusoid x(t) = cos(2πf₀t + θ), whose Fourier series coefficients we know to be 0.5∠θ = 0.5e^{jθ}
at f = f₀ and 0.5∠−θ = 0.5e^{−jθ} at f = −f₀. If x(t) is sampled at the rate S, starting at t = 0, the
sampled signal is x[n] = cos(2πnF + θ), where F = f₀/S is the digital frequency. As long as F is a rational
fraction of the form F = k₀/N, we obtain N samples of x[n] from k₀ full periods of x(t). In this case, there
is no leakage, and the N-point DFS will match the expected results and show only two nonzero DFS values,
X_DFS[k₀] = 0.5e^{jθ} and X_DFS[N − k₀] = 0.5e^{−jθ}. The DFT is obtained by multiplying the DFS by N.

cos(2πnk₀/N + θ) ⟺ {0, . . . , 0, 0.5e^{jθ} (at k = k₀), 0, . . . , 0, 0.5e^{−jθ} (at k = N − k₀), 0, . . . , 0} (DFS)        (10.18)
For the index k₀ to lie in the range 0 ≤ k₀ ≤ N − 1, we must ensure that 0 ≤ F < 1 (rather than the customary
−0.5 ≤ F < 0.5). The frequency corresponding to k₀ will then be k₀S/N and will equal f₀ (if S > 2f₀) or its
alias (if S < 2f₀). The nonzero DFT values will equal X_DFT[k₀] = 0.5Ne^{jθ} and X_DFT[N − k₀] = 0.5Ne^{−jθ}.
These results are straightforward to obtain and can be easily extended, by superposition, to the DFT of a
combination of sinusoids sampled over an integer number of periods.
REVIEW PANEL 10.14
The N-Point DFT of the Sinusoid x[n] = cos(2πnk₀/N + θ) Contains Only Two Nonzero Samples

cos(2πnk₀/N + θ) ⟺ {0, . . . , 0, (N/2)e^{jθ} (at k = k₀), 0, . . . , 0, (N/2)e^{−jθ} (at k = N − k₀), 0, . . . , 0} (DFT)
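A quick numerical check of this panel (N = 32, k₀ = 5, θ = π/3 are illustrative values):

```python
# The N-point DFT of cos(2 pi n k0 / N + theta): (N/2) e^{j theta} at k = k0 and
# its conjugate at k = N - k0.
import numpy as np

N, k0, theta = 32, 5, np.pi / 3    # illustrative values
n = np.arange(N)
x = np.cos(2 * np.pi * n * k0 / N + theta)

X = np.fft.fft(x)
print(np.allclose(X[k0], (N / 2) * np.exp(1j * theta)))       # True
print(np.allclose(X[N - k0], (N / 2) * np.exp(-1j * theta)))  # True
```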
EXAMPLE 10.5 (The DFT and DFS of Sinusoids)
(a) The signal x(t) = 4 cos(100πt) is sampled at twice the Nyquist rate for three full periods. Find and
sketch its DFT.

The frequency of x(t) is 50 Hz, the Nyquist rate is 100 Hz, and the sampling frequency is S = 200 Hz.
The digital frequency is F = 50/200 = 1/4 = 3/12 = k/N. This means N = 12 for three full periods.
The two nonzero DFT values will appear at k = 3 and k = N − 3 = 9. The nonzero DFT values will
be X[3] = X[9] = (0.5)(4)(N) = 24. The signal and its DFT are sketched in Figure E10.5A.

[Figure: three periods of the sampled cosine, and its 12-point DFT with samples of height 24 at k = 3 and k = 9]
Figure E10.5A The signal and DFT for Example 10.5(a)
(b) Let x(t) = 4 sin(72πt) be sampled at S = 128 Hz. Choose the minimum number of samples necessary
to prevent leakage and find the DFS and DFT of the sampled signal.

The frequency of x(t) is 36 Hz, so F = 36/128 = 9/32 = k₀/N. Thus, N = 32, k₀ = 9, and
the frequency spacing is S/N = 4 Hz. The DFS components will appear at k₀ = 9 (36 Hz) and
N − k₀ = 23 (92 Hz). The Fourier series coefficients of x(t) are −j2 (at 36 Hz) and j2 (at −36 Hz).
The DFS samples will be X_DFS[9] = −j2 and X_DFS[23] = j2. Since X_DFT[k] = NX_DFS[k], we get
X_DFT[9] = −j64, X_DFT[23] = j64, and thus

X_DFT[k] = {0, . . . , 0, −j64 (at k = 9), 0, . . . , 0, j64 (at k = 23), 0, . . . , 0}
(c) Let x(t) = 4 sin(72πt) − 6 cos(12πt) be sampled at S = 21 Hz. Choose the minimum number of samples
necessary to prevent leakage and find the DFS and DFT of the sampled signal.

Clearly, the 36-Hz term will be aliased. The digital frequencies (between 0 and 1) of the two terms are
F₁ = 36/21 = 12/7 → 5/7 = k₀/N and F₂ = 6/21 = 2/7. Thus, N = 7 and the frequency spacing is
S/N = 3 Hz. The DFS components of the first term will be −j2 at k = 5 (15 Hz) and j2 at N − k = 2
(6 Hz). The DFS components of the second term will be −3 at k = 2 and −3 at k = 5. The DFS
values will add up at the appropriate indices to give X_DFS[5] = −3 − j2, X_DFS[2] = −3 + j2, and

X_DFS[k] = {0, 0, −3 + j2 (at k = 2), 0, 0, −3 − j2 (at k = 5), 0}        X_DFT[k] = NX_DFS[k] = 7X_DFS[k]

Note how the 36-Hz component was aliased to 6 Hz (the frequency of the second component).
(d) The signal x(t) = 1 + 8 sin(80πt) cos(40πt) is sampled at twice the Nyquist rate for two full periods. Is
leakage present? If not, find the DFS of the sampled signal.

First, note that x(t) = 1 + 4 sin(120πt) + 4 sin(40πt). The frequencies are f₁ = 60 Hz and f₂ = 20 Hz.
The Nyquist rate is thus 120 Hz, and hence S = 240 Hz. The digital frequencies are F₁ = 60/240 = 1/4
and F₂ = 20/240 = 1/12. The fundamental frequency is f₀ = GCD(f₁, f₂) = 20 Hz. Thus, two
full periods correspond to 0.1 s or N = 24 samples. There is no leakage because we acquire the
samples over two full periods. The index k = 0 corresponds to the constant (dc value). To find
the indices of the other nonzero DFS samples, we compute the digital frequencies (in the form k/N)
as F₁ = 60/240 = 1/4 = 6/24 and F₂ = 20/240 = 1/12 = 2/24. The nonzero DFS samples are
thus X_DFS[0] = 1, X_DFS[6] = −j2, and X_DFS[2] = −j2, and the conjugates X_DFS[18] = j2 and
X_DFS[22] = j2. Thus,

X_DFS[k] = {1, 0, −j2 (at k = 2), 0, 0, 0, −j2 (at k = 6), 0, . . . , 0, j2 (at k = 18), 0, 0, 0, j2 (at k = 22), 0}
10.6.4 The DFT and DFS of Sampled Periodic Signals
For periodic signals that are not band-limited, leakage can be prevented by sampling for an integer number of
periods, but aliasing is unavoidable for any sampling rate. The aliasing error increases at higher frequencies.
The effects of aliasing can be minimized (but not eliminated) by choosing a high enough sampling rate. A
useful rule of thumb is that for the DFT to produce an error of about 5% or less up to the Mth harmonic
frequency f_M = Mf₀, we must choose S ≥ 8f_M (corresponding to N ≥ 8M samples per period).
EXAMPLE 10.6 (DFS of Sampled Periodic Signals)
Consider a square wave x(t) that equals 1 for the first half-period and −1 for the next half-period, as
illustrated in Figure E10.6.

[Figure: one period of the square wave, sampled at 32 samples per period]
Figure E10.6 One period of the square wave periodic signal for Example 10.6

Its Fourier series coefficients are X[k] = −j2/kπ (k odd). If we require the first four harmonics to be in
error by no more than about 5%, we choose a sampling rate S = 32f₀, where f₀ is the fundamental frequency.
This means that we acquire N = 32 samples for one period. The samples and their DFS up to k = 8 are
listed below, along with the error in the nonzero DFS values compared with the Fourier series coefficients.

x[n] = {0, 1, 1, . . . , 1, 1 (15 samples), 0, −1, −1, . . . , −1, −1 (15 samples)}

X_DFS[k] = {0, −j0.6346 (off by 0.3%), 0, −j0.2060 (off by 2.9%), 0, −j0.1169 (off by 8.2%), 0, −j0.0762 (off by 16.3%), 0, . . .}

Note how we picked zero as the sample value at the discontinuities. As expected, the DFS coefficients are
zero for even k (due to the half-wave symmetry in x(t)) and purely imaginary (due to the odd symmetry in
x(t)). The error in the nonzero harmonics up to k = 4 is less than 5%.
10.6.5 The Eects of Leakage
If we do not sample x(t) for an integer number of periods, the DFT will show the effects of leakage, with
nonzero components appearing at frequencies other than the harmonics of f₀, because the periodic extension of the signal
with a non-integer number of periods will not match the original signal, as illustrated in Figure 10.5.

[Figure: a sinusoid sampled for one full period and its periodic extension (which matches the original), contrasted with a sinusoid sampled for half a period and its periodic extension (which does not)]
Figure 10.5 Illustrating the concept of leakage

The DFT results will also show aliasing because the periodic extension of the signal with a non-integer number of
periods will not, in general, be band-limited. As a result, we must resort to the full force of the defining
relation to compute the DFT.
Suppose we sample the 1-Hz sine x(t) = sin(2πt) at S = 16 Hz. Then, F_0 = 1/16. If we choose N = 8,
the DFT spectral spacing equals S/N = 2 Hz. In other words, there is no DFT component at 1 Hz, the
frequency of the sine wave! Where should we expect to see the DFT components? If we express the digital
frequency F_0 = 1/16 as F_0 = k_F/N, we obtain k_F = NF_0 = 0.5. Thus, F_0 corresponds to the fractional
index k_F = 0.5, and the largest DFT components should appear at the integer index nearest to k_F, at k = 0
(or dc) and k = 1 (or 2 Hz). In fact, the signal and its DFS are given by
x[n] = {0, 0.3827, 0.7071, 0.9239, 1, 0.9239, 0.7071, 0.3827}

X_DFS[k] = {0.6284, −0.2207, −0.0518, −0.0293, −0.0249, −0.0293, −0.0518, −0.2207}
(the values at k = 0, 1, 2, 3 are off by 1.3%, 4%, 23%, and 61%, respectively; the entry −0.0249 is at k = N/2)
As expected, the largest components appear at k = 0 and k = 1. Since X_DFS[k] is real, the DFS results
describe an even symmetric signal with nonzero average value. In fact, the periodic extension of the sampled
signal over half a period actually describes a full-rectified sine with even symmetry and a fundamental
frequency of 2 Hz (see Figure 10.5). The Fourier series coefficients of this full-rectified sine wave (with unit
peak value) are given by

X[k] = 2/[π(1 − 4k²)]
We confirm that X_DFS[0] and X_DFS[1] show an error of less than 5%. But X_DFS[2], X_DFS[3], and X_DFS[4]
deviate significantly. Since the new periodic signal is no longer band-limited, the sampling rate is not high
enough, and we have aliasing. The value X_DFS[3], for example, equals the sum of the Fourier series coefficient
X[3] and all other Fourier series coefficients X[−5], X[11], X[−13], X[19], . . . that alias to k = 3. In other words,

X_DFS[3] = Σ_{m=−∞}^{∞} X[3 + 8m] = (2/π) Σ_{m=−∞}^{∞} 1/[1 − 4(3 + 8m)²]

Although this sum is not easily amenable to a closed-form solution, it can be computed numerically and
does in fact approach X_DFS[3] = −0.0293 (for a large but finite number of coefficients).
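Both the leakage pattern and the aliasing sum can be checked numerically. The sketch below (Python with NumPy, assumed only for this illustration) computes the eight-point DFS of the half-period sine and evaluates the aliasing sum with a large but finite number of terms:

```python
import numpy as np

# Sample the 1-Hz sine x(t) = sin(2*pi*t) at S = 16 Hz for only N = 8 samples
# (half a period); the DFS is the DFT divided by N
S, N = 16, 8
x = np.sin(2 * np.pi * np.arange(N) / S)
X_dfs = np.fft.fft(x) / N
print(np.round(X_dfs.real, 4))   # largest values at k = 0 (dc) and k = 1 (2 Hz)

# Aliasing sum for X_DFS[3]: add the full-rectified-sine Fourier coefficients
# X[3 + 8m] = 2 / (pi * (1 - 4*(3 + 8m)**2)) over a large range of m
m = np.arange(-5000, 5001)
alias_sum = np.sum(2 / (np.pi * (1 - 4 * (3 + 8 * m) ** 2)))
print(round(alias_sum, 4))       # approaches X_DFS[3] = -0.0293
```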
Minimizing Leakage
Ideally, we should sample periodic signals over an integer number of periods to prevent leakage. In practice,
it may not be easy to identify the period of a signal in advance. In such cases, it is best to sample over
as long a signal duration as possible (to reduce the mismatch between the periodic extension of the analog
and sampled signal). Sampling for a larger time (duration) not only reduces the effects of leakage but also
yields a more closely spaced spectrum, and a more accurate estimate of the spectrum of the original signal.
Another way to reduce leakage is to multiply the signal samples by a window function (as described later).
REVIEW PANEL 10.15
The DFT of a Sinusoid at f_0 Hz Sampled for Non-Integer Periods Shows Leakage
The largest DFT component appears at the integer index closest to k_F = F_0 N = f_0 N/S.
To minimize leakage: Sample for the longest duration possible or (better still) for an integer number of periods.
EXAMPLE 10.7 (The Effects of Leakage)
The signal x(t) = 2 cos(20πt) + 5 cos(100πt) is sampled at intervals of t_s = 0.005 s for three different
durations, 0.1 s, 0.125 s, and 1.125 s. Explain the DFT spectrum for each duration.
The sampling frequency is S = 1/t_s = 200 Hz. The frequencies in x(t) are f_1 = 10 Hz and f_2 = 50 Hz.
The two-sided spectrum of x(t) will show a magnitude of 1 at 10 Hz and 2.5 at 50 Hz. The fundamental
frequency is 10 Hz, and the common period of x(t) is 0.1 s. We have the following results with reference to
Figure E10.7, which shows the DFS magnitude (|X_DFT|/N) up to the folding index (or 100 Hz).
Figure E10.7 DFT results for Example 10.7, plotting the magnitude against the analog frequency f [Hz] from 0 to 100 Hz: (a) length = 0.1 s, N = 20; (b) length = 0.125 s, N = 25; (c) length = 1.125 s, N = 225
(a) The duration of 0.1 s corresponds to one full period, and N = 20. No leakage is present, and the DFS
results reveal an exact match to the spectrum of x(t). The nonzero components appear at the integer indices
k_1 = NF_1 = Nf_1/S = 1 and k_2 = NF_2 = Nf_2/S = 5 (corresponding to 10 Hz and 50 Hz, respectively).
(b) The duration of 0.125 s does not correspond to an integer number of periods. The number of samples
over 0.125 s is N = 25. Leakage is present. The largest components appear at the integer indices closest to
k_1 = Nf_1/S = 1.25 (i.e., k = 1 or 8 Hz) and k_2 = Nf_2/S = 6.25 (i.e., k = 6 or 48 Hz).
(c) The duration of 1.125 s does not correspond to an integer number of periods. The number of samples
over 1.125 s is N = 225. Leakage is present. The largest components appear at the integer indices closest to
k_1 = Nf_1/S = 11.25 (i.e., k = 11 or 9.78 Hz) and k_2 = Nf_2/S = 56.25 (i.e., k = 56 or 49.78 Hz).
Comment: The spectra reveal that the longest duration (1.125 s) also produces the smallest leakage.
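The peak locations in all three cases can be confirmed directly. This sketch (Python with NumPy, assumed for illustration only) finds the two largest DFT magnitudes below the folding index for each duration:

```python
import numpy as np

# Leakage in Example 10.7: x(t) = 2cos(20*pi*t) + 5cos(100*pi*t), S = 200 Hz
S, f1, f2 = 200, 10, 50
peaks = {}
for T in (0.1, 0.125, 1.125):
    N = round(S * T)
    t = np.arange(N) / S
    x = 2 * np.cos(2 * np.pi * f1 * t) + 5 * np.cos(2 * np.pi * f2 * t)
    X = np.abs(np.fft.fft(x)) / N
    # Indices of the two largest components below the folding index
    k = sorted(np.argsort(X[: N // 2])[-2:].tolist())
    peaks[T] = k
    print(f"T={T} s, N={N}: peaks at k={k}, f={[round(ki * S / N, 2) for ki in k]} Hz")
```

For 0.1 s the peaks land exactly at k = 1 and k = 5; the other two durations give k = 1, 6 and k = 11, 56, matching the indices found above.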
10.7 The DFT of Nonperiodic Signals
The Fourier transform X(f) of a nonperiodic signal x(t) is continuous. To find the DFT, x(t) must be
sampled over a finite duration (N samples). The spectrum of a sampled signal over its principal period
(−0.5S, 0.5S) corresponds to the spectrum of the analog signal SX(f), provided x(t) is band-limited to
B < 0.5S. In practice, no analog signal is truly band-limited. As a result, if x[n] corresponds to N samples
of the analog signal x(t), obtained at the sampling rate S, the DFT X_DFT[k] of x[n] yields essentially the
Fourier series of its periodic extension, and is only approximately related to X(f) by

X_DFT[k] ≈ SX(f)|_{f=kS/N},   0 ≤ k < 0.5N   (0 ≤ f < 0.5S)   (10.19)
REVIEW PANEL 10.16
The DFT of a Sampled Aperiodic Signal x(t) Approximates Its Scaled Transform SX(f)
To find the DFT of an arbitrary signal with some confidence, we must decide on the number of samples N
and the sampling rate S, based on both theoretical considerations and practical compromises. For example,
one way to choose a sampling rate is based on energy considerations. We pick the sampling rate as 2B Hz,
where the frequency range up to B Hz contains a significant fraction P of the signal energy. The number of
samples should cover a large enough duration to include significant signal values.
10.7.1 Spectral Spacing and Zero-Padding
Often, the spectral spacing S/N is not small enough to make appropriate visual comparisons with the analog
spectrum X(f), and we need a denser or interpolated version of the DFT spectrum. To decrease the spectral
spacing, we must choose a larger number of samples N. This increase in N cannot come about by increasing
the sampling rate S (which would leave the spectral spacing S/N unchanged), but by increasing the duration
over which we sample the signal. In other words, to reduce the frequency spacing, we must sample the signal
for a longer duration at the given sampling rate. However, if the original signal is of finite duration, we can
still increase N by appending zeros (zero-padding). Appending zeros does not improve accuracy because it
adds no new signal information. It only decreases the spectral spacing and thus interpolates the DFT at
a denser set of frequencies. To improve the accuracy of the DFT results, we must increase the number of
signal samples by sampling the signal for a longer time (and not just zero-padding).
REVIEW PANEL 10.17
Signal Zero-Padding Corresponds to an Interpolated Spectrum
Zero-padding reduces the spectral spacing. It does not improve the accuracy of the DFT results.
To improve accuracy, we need more signal samples (not zeros).
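The interpolation property of zero-padding is easy to demonstrate: the DFT of a zero-padded signal, read at the original bin positions, is identical to the unpadded DFT, because both sample the same underlying DTFT. A minimal sketch (Python with NumPy, assumed for illustration):

```python
import numpy as np

# A 16-sample segment (the exact signal is immaterial for this property)
N, Npad = 16, 64
x = np.cos(2 * np.pi * 0.11 * np.arange(N))

X16 = np.fft.fft(x)          # 16-point DFT
X64 = np.fft.fft(x, Npad)    # 64-point DFT of the zero-padded signal

# Every 4th bin of the padded DFT coincides with the unpadded DFT: the padded
# result is a denser sampling of the same DTFT, not a more accurate one
print(np.allclose(X64[::4], X16))   # True
```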
EXAMPLE 10.8 (DFT of Finite-Duration Signals)
(a) (Spectral Spacing) A 3-s signal is sampled at S = 100 Hz. The maximum spectral spacing is to be
Δf = 0.25 Hz. How many samples are needed for the DFT and FFT?
If we use the DFT, the number of samples is N = S/Δf = 400. Since the 3-s signal gives only 300
signal samples, we must add 100 padded zeros. The spectral spacing is S/N = 0.25 Hz, as required.
If we use the FFT, we need N_FFT = 512 samples (the next higher power of 2). There are now 212
padded zeros, and the spectral spacing is S/N_FFT = 0.1953 Hz, better (i.e., less) than required.
(b) Let x(t) = tri(t). Its Fourier transform is X(f) = sinc²(f). Let us choose S = 4 Hz and N = 8.
To obtain samples of x(t) starting at t = 0, we sample the periodic extension of x(t), as illustrated in
Figure E10.8B, and compare X(f) with t_s X_DFT[k]:
x[n] = {1, 0.75, 0.5, 0.25, 0, 0.25, 0.5, 0.75}

t_s X_DFT[k] = {1, 0.4268, 0, 0.0732, 0, 0.0732, 0, 0.4268}
(off by 5.3% at k = 1 and by 62.6% at k = 3; the entry at k = N/2 is 0)
Since the highest frequency present in the DFT spectrum is 0.5S = 2 Hz, the DFT results are listed only
up to k = 4. Since the frequency spacing is S/N = 0.5 Hz, we compare t_s X_DFT[k] with X(kS/N) =
sinc²(0.5k). At k = 0 (dc) and k = 2 (1 Hz), we see a perfect match. At k = 1 (0.5 Hz), t_s X_DFT[k] is
in error by about 5.3%, but at k = 3 (1.5 Hz), the error is a whopping 62.6%.
Figure E10.8B The triangular pulse x(t) = tri(t) and the sampled periodic extensions for Example 10.8(b): S = 4 Hz with N = 8; S = 4 Hz with N = 16 (zero-padded); and S = 8 Hz with N = 16
(Reducing Spectral Spacing) Let us decrease the spectral spacing by zero-padding to increase
the number of samples to N = 16. We must sample the periodic extension of the zero-padded
signal, as shown in Figure E10.8B, to give

x[n] = {1, 0.75, 0.5, 0.25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.25, 0.5, 0.75}   (8 of the middle zeros are padding)

Note how the padded zeros appear in the middle. Since the highest frequency present in the DFT
spectrum is 2 Hz, the DFT results are listed only up to the folding index k = 8.
t_s X_DFT[k] = {1, 0.8211, 0.4268, 0.1012, 0, 0.0452, 0.0732, 0.0325, 0, . . .}
(still off by 5.3% at k = 2 and by 62.6% at k = 6; the last entry shown is at k = N/2)
The frequency separation is reduced to S/N = 0.25 Hz. Compared with X(kf_0) = sinc²(0.25k),
the DFT results for k = 2 (0.5 Hz) and k = 6 (1.5 Hz) are still off by 5.3% and 62.6%, respectively.
In other words, zero-padding reduces the spectral spacing, but the DFT results are no more
accurate. To improve the accuracy, we must pick more signal samples.
(Improving Accuracy) If we choose S = 8 Hz and N = 16, we obtain the 16 samples shown in
Figure E10.8B. We list the 16-sample DFT up to k = 8 (corresponding to the highest frequency
of 4 Hz present in the DFT):
t_s X_DFT[k] = {1, 0.4105, 0, 0.0506, 0, 0.0226, 0, 0.0162, 0, . . .}
(the nonzero values at k = 1, 3, 5, 7 are off by 1.3%, 12.4%, 39.4%, and 96.4%, respectively; the last entry shown is at k = N/2)
The DFT spectral spacing is still S/N = 0.5 Hz. In comparison with X(k/2) = sinc²(0.5k), the
error in the DFT results for the 0.5-Hz component (k = 1) and the 1.5-Hz component (k = 3)
is now only about 1.3% and 12.4%, respectively. In other words, increasing the number of signal
samples improves the accuracy of the DFT results. However, the error at 2.5 Hz (k = 5) and
3.5 Hz (k = 7) is 39.4% and 96.4%, respectively, which implies that the effects of aliasing are more
predominant at frequencies closer to the folding frequency.
(c) Consider the signal x(t) = e^{−t}u(t), whose Fourier transform is X(f) = 1/(1 + j2πf). Since the energy
E in x(t) equals 1/2, we use Parseval's relation to estimate the bandwidth B that contains the fraction
P of this energy:

∫_{−B}^{B} df/(1 + 4π²f²) = PE = 0.5P   or   B = tan(0.5πP)/(2π)
1. If we choose B to contain 95% of the signal energy (P = 0.95), we find B = 12.71/(2π) = 2.02 Hz.
Then, S > 4.04 Hz. Let us choose S = 5 Hz. For a spectral spacing of 1 Hz, we have S/N = 1 Hz
and N = 5. So, we sample x(t) at intervals of t_s = 1/S = 0.2 s, starting at t = 0, to obtain x[n].
Since x(t) is discontinuous at t = 0, we pick x[0] = 0.5, not 1. The DFT results based on this set
of choices will not be very good because with N = 5 we sample only a 1-s segment of x(t).
2. A better choice is N = 15, a 3-s duration over which x(t) decays to 0.05. A more practical choice is
N = 16 (a power of 2, which allows efficient computation of the DFT using the FFT algorithm).
This gives a spectral spacing of S/N = 5/16 Hz. Our rule of thumb (N > 8M) suggests that
with N = 16, the DFT values X_DFT[1] and X_DFT[2] should show an error of only about 5%. We
see that t_s X_DFT[k] does compare well with X(f) (see Figure E10.8C), even though the effects of
aliasing are still evident.
Figure E10.8C DFT results for Example 10.8(c), plotting the magnitude of t_s X_DFT against the frequency f [Hz]: (a) N = 16, t_s = 0.2; (b) N = 128, t_s = 0.04; (c) N = 50, t_s = 0.2
3. To improve the results (and minimize aliasing), we must increase S. For example, if we require
the highest frequency based on 99% of the signal energy, we obtain B = 63.6567/(2π) = 10.13 Hz.
Based on this, let us choose S = 25 Hz. If we sample x(t) over 5 s (by which time it decays to
less than 0.01), we compute N = (25)(5) = 125. Choosing N = 128 (the next higher power of 2),
we find that the 128-point DFT result t_s X_DFT[k] is almost identical to the true spectrum X(f).
4. What would happen if we chose S = 5 Hz and five signal samples, but reduced the spectral spacing
by zero-padding to give N = 50? The 50-point DFT clearly shows the effects of truncation (as
wiggles) and is a poor match to the true spectrum. This confirms that improved accuracy does
not come from zero-padding but from including more signal samples.
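The energy-bandwidth formula B = tan(0.5πP)/(2π) used above is simple to evaluate. A small sketch (Python with NumPy, assumed for illustration) checks the two energy fractions from this example:

```python
import numpy as np

def energy_bandwidth(P):
    """Bandwidth B (Hz) containing the fraction P of the energy of exp(-t)u(t)."""
    return np.tan(0.5 * np.pi * P) / (2 * np.pi)

print(round(energy_bandwidth(0.95), 2))   # 2.02 Hz, so S > 4.04 Hz
print(round(energy_bandwidth(0.99), 2))   # 10.13 Hz, so S = 25 Hz is reasonable
```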
10.8 Spectral Smoothing by Time Windows
Sampling an analog signal for a finite number of samples N is equivalent to multiplying the samples by a
rectangular N-point window. Due to the abrupt truncation of this rectangular window, the spectrum of the
windowed signal shows a mainlobe and sidelobes that do not decay rapidly enough. This phenomenon is
similar to the Gibbs effect, which arises during the reconstruction of periodic signals from a finite number of
harmonics (an abrupt truncation of its spectrum). Just as Fourier series reconstruction uses tapered spectral
windows to smooth the time signal, the DFT uses tapered time-domain windows to smooth the spectrum,
but at the expense of making it broader. This is another manifestation of leakage, in that the spectral energy
is distributed (leaked) over a wider frequency range.
Unlike windows for Fourier series smoothing, we are not constrained to odd-length windows. As shown
in Figure 10.6, an N-point DFT window is actually generated from a symmetric (N + 1)-point window
(sampled over N intervals and symmetric about its midpoint) with its last sample discarded (in keeping
with the implied periodicity of the signal and the window itself). To apply a window, we position it over the
signal samples and create their pointwise product.
Figure 10.6 Features of a DFT window: a Bartlett window sampled over 12 intervals (N = 12) and over 9 intervals (N = 9), with no sample at the last point
10.8.1 Performance Characteristics of Windows
The spectrum of all windows shows a mainlobe and decaying sidelobes, as illustrated in Figure 10.7. Measures
of magnitude are often normalized by the peak magnitude P and expressed in decibels (dB) (as a gain or
attenuation). These measures include the peak value P, the peak sidelobe level, the 3-dB level and the 6-dB
level, and the high-frequency decay rate (in dB/decade or dB/octave). Measures of spectral width include
the 3-dB width W_3, the 6-dB width W_6, the width W_S to reach the peak sidelobe level (PSL), and the
mainlobe width W_M. These measures are illustrated in Figure 10.7.
Figure 10.7 DTFT magnitude spectrum of a typical window, showing the peak P, the 3-dB level (0.707P) with width W_3, the 6-dB level (0.5P) with width W_6, the width W_S to the peak sidelobe level (PSL), the mainlobe width W_M, and the high-frequency decay
Other measures of window performance include the coherent gain (CG), the equivalent noise bandwidth
(ENBW), and the scallop loss (SL). For an N-point window w[n], these measures are defined by

CG = (1/N) Σ |w[k]|    ENBW = N Σ |w[k]|² / [Σ w[k]]²    SL = 20 log₁₀ ( |Σ w[k] e^{−jπk/N}| / Σ w[k] ) dB   (10.20)

where all sums run over k = 0, 1, . . . , N − 1.
The reciprocal of the equivalent noise bandwidth is also called the processing gain. The larger the processing gain, the easier it is to reliably detect a signal in the presence of noise.
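The three measures in Eq. (10.20) translate directly into code. The sketch below (Python with NumPy, assumed for illustration) evaluates them for a rectangular window, for which CG = 1, ENBW = 1, and the scallop loss approaches 20 log₁₀(2/π) ≈ −3.9 dB:

```python
import numpy as np

def window_measures(w):
    """Coherent gain, equivalent noise bandwidth, and scallop loss (dB), Eq. (10.20)."""
    N = len(w)
    cg = np.abs(w).sum() / N
    enbw = N * (np.abs(w) ** 2).sum() / w.sum() ** 2
    half_bin = np.exp(-1j * np.pi * np.arange(N) / N)   # DTFT offset of half a bin
    sl = 20 * np.log10(np.abs((w * half_bin).sum()) / w.sum())
    return cg, enbw, sl

cg, enbw, sl = window_measures(np.ones(64))   # rectangular (boxcar) window
print(cg, enbw, round(sl, 2))
```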
As we increase the window length, the mainlobe width of all windows decreases, but the peak sidelobe
level remains more or less constant. Ideally, for a given window length, the spectrum of a window should
approach an impulse with as narrow (and tall) a mainlobe as possible, and as small a peak sidelobe level as
possible. The aim is to pack as much energy in a narrow mainlobe as possible and make the sidelobe level
as small as possible. These are conflicting requirements, in that a narrow mainlobe width also translates
to a higher peak sidelobe level. Some DFT windows and their spectral characteristics are illustrated in
Figure 10.8 and summarized in Table 10.4.
REVIEW PANEL 10.18
A Window Should Maximize the Mainlobe Energy and Minimize the Sidelobe Energy
A narrow mainlobe and small sidelobes are conflicting requirements for the spectrum of a window.
Table 10.4 Some Commonly Used N-Point DFT Windows

Entry  Window     Expression for w[n]                           W_M = K/N          Normalized Peak Sidelobe
1      Boxcar     1                                             2/N                0.2172  (-13.3 dB)
2      Bartlett   1 - 2|k|/N                                    4/N                0.0472  (-26.5 dB)
3      von Hann   0.5 + 0.5 cos(2πk/N)                          4/N                0.0267  (-31.5 dB)
4      Hamming    0.54 + 0.46 cos(2πk/N)                        4/N                0.0073  (-42.7 dB)
5      Blackman   0.42 + 0.5 cos(2πk/N) + 0.08 cos(4πk/N)       6/N                0.0012  (-58.1 dB)
6      Kaiser     I_0(πβ √(1 - (2k/N)²)) / I_0(πβ)              (2/N)√(1 + β²)     0.22πβ/sinh(πβ)  (-45.7 dB for β = 2)

NOTES: k = 0.5N − n, where n = 0, 1, . . . , N − 1. W_M is the mainlobe width.
For the Kaiser window, I_0(·) is the modified Bessel function of order zero.
For the Kaiser window, the parameter β controls the peak sidelobe level.
The von Hann window is also known as the Hanning window.
10.8.2 The Spectrum of Windowed Sinusoids
Consider the signal x[n] = cos(2πnF_0), where F_0 = k_0/M. Its DTFT is X_p(F) = 0.5δ(F − F_0) + 0.5δ(F + F_0). If
this sinusoid is windowed by an N-point window w[n] whose spectrum is W(F), the DTFT of the windowed
signal is given by the periodic convolution

X_w(F) = X_p(F) ⊛ W(F) = 0.5W(F − F_0) + 0.5W(F + F_0)   (10.21)
The window thus smears out the true spectrum. To obtain a windowed spectrum that is a replica of the
spectrum of the sinusoid, we require W(F) = δ(F), which corresponds to the impractical infinite-length
time window. The more the spectrum W(F) of an N-point window resembles an impulse, the better the
windowed spectrum matches the original.
The N-point DFT of the signal x[n] may be regarded as the DTFT of the product of the infinite-length
x[n] and an N-point rectangular window, evaluated at F = k/N, k = 0, 1, . . . , N − 1. Since the spectrum of the
N-point rectangular window is W(F) = N sinc(NF)/sinc(F), the DTFT spectrum of the windowed signal is

X_w(F) = 0.5N sinc[N(F − F_0)]/sinc(F − F_0) + 0.5N sinc[N(F + F_0)]/sinc(F + F_0)   (10.22)
Figure 10.8 Commonly used DFT windows (N = 20) and their magnitude spectra in dB against the digital frequency F: Bartlett (peak sidelobe at −26.5 dB), von Hann (−31.5 dB), Hamming (−42 dB), Blackman (−58 dB), and Kaiser with β = 2 (−45.7 dB)
The N-point DFT of the windowed sinusoid is given by X_DFT[k] = X_w(F)|_{F=k/N}. If the DFT length N
equals M, the number of samples over k_0 full periods of x[n], we see that sinc[N(F − F_0)] = sinc(k − k_0),
and this equals zero unless k = k_0. Similarly, sinc[N(F + F_0)] = sinc(k + k_0) is nonzero only if k = −k_0
(that is, at the index N − k_0). The DFT thus contains only two nonzero terms and equals

X_DFT[k] = X_w(F)|_{F=k/N} = 0.5Nδ[k − k_0] + 0.5Nδ[k + k_0]   (if N = M)   (10.23)

In other words, using an N-point rectangular window that covers an integer number of periods (M samples)
of a sinusoid (i.e., with N = M) gives us exact results. The reason, of course, is that the DTFT sampling
instants fall on the nulls of the sinc spectrum. If the window length N does not equal M (an integer number
of periods), the sampling instants will fall between the nulls, and since the sidelobes of the sinc function are
large, the DFT results will show considerable leakage. To reduce the effects of leakage, we must use windows
whose spectrum shows small sidelobe levels.
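Equation (10.23) predicts a two-line DFT when the rectangular window spans an integer number of periods, and leakage otherwise. A quick numerical check (Python with NumPy, assumed for illustration):

```python
import numpy as np

# Eq. (10.23): a rectangular window spanning an integer number of periods of a
# sinusoid gives a two-line DFT; a non-integer span leaks into the other bins
F0 = 0.1                                   # 10 samples per period
results = {}
for N in (40, 44):                         # 4 periods vs. 4.4 periods
    X = np.abs(np.fft.fft(np.cos(2 * np.pi * F0 * np.arange(N))))
    results[N] = (int(np.sum(X > 1e-9)), X.max())
    print(f"N={N}: {results[N][0]} nonzero bins, peak {results[N][1]:.2f} (0.5N = {0.5 * N})")
```

With N = 40 only the bins k = 4 and k = 36 survive, each equal to 0.5N = 20; with N = 44 the spectrum leaks into essentially every bin.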
10.8.3 Resolution
Windows are often used to reduce the effects of leakage and improve resolution. Frequency resolution refers
to our ability to clearly distinguish between two closely spaced sinusoids of similar amplitudes. Dynamic-range
resolution refers to our ability to resolve large differences in signal amplitudes. The spectrum of every
window reveals a mainlobe and smaller sidelobes. It smears out the true spectrum and makes components
separated by less than the mainlobe width indistinguishable. The rectangular window yields the best frequency
resolution for a given length N, since it has the smallest mainlobe. However, it also has the largest
peak sidelobe level of any window. This leads to significant leakage and the worst dynamic-range resolution,
because small-amplitude signals can get masked by the sidelobes of the window.

Tapered windows with less abrupt truncation show reduced sidelobe levels and lead to reduced leakage
and improved dynamic-range resolution. They also show increased mainlobe widths W_M, leading to poorer
frequency resolution. The choice of a window is based on a compromise between the two conflicting
requirements of minimizing the mainlobe width (improving frequency resolution) and minimizing the sidelobe
magnitude (improving dynamic-range resolution).
REVIEW PANEL 10.19
Good Frequency Resolution and Dynamic-Range Resolution Are Conflicting Requirements
To improve frequency resolution, use windows with narrow mainlobes.
To improve dynamic-range resolution, use windows with small sidelobes.
The mainlobe width of all windows decreases as we increase the window length. However, the peak
sidelobe level remains more or less constant. To achieve a frequency resolution of Δf, the digital frequency
ΔF = Δf/S must equal or exceed the mainlobe width W_M of the window. This yields the window length N.
For a given window to achieve the same frequency resolution as the rectangular window, we require a larger
window length (for a smaller mainlobe width) and hence a larger signal length. The increase in signal length
must come from choosing more signal samples (and not by zero-padding). To achieve a given dynamic-range
resolution, however, we must select a window with small sidelobes, regardless of the window length.
REVIEW PANEL 10.20
The Smallest Frequency We Can Resolve Depends on the Mainlobe Width of the Window
To resolve frequencies separated by Δf, we require Δf/S ≥ W_M = K/N (the window mainlobe width).
K depends on the window. To decrease Δf, increase N (more signal samples, not zero-padding).
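Review Panel 10.20 gives a direct recipe for the window length. A tiny helper (plain Python; the K values are taken from Table 10.4) computes the smallest N for a desired resolution:

```python
from math import ceil

# df/S >= W_M = K/N  =>  N >= K*S/df, with K = 2 (rectangular), K = 4 (von Hann)
def min_window_length(df, S, K):
    """Smallest window length N that resolves tones separated by df Hz."""
    return ceil(K * S / df)

S = 128
print(min_window_length(1.0, S, 2))   # 256 samples with a rectangular window
print(min_window_length(1.0, S, 4))   # 512 samples with a von Hann window
```

These lengths match the N values used in Example 10.9 below: doubling K (a tapered window) doubles the required signal length for the same resolution.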
EXAMPLE 10.9 (Frequency Resolution)
The signal x(t) = A_1 cos(2πf_0 t) + A_2 cos[2π(f_0 + Δf)t], where A_1 = A_2 = 1 and f_0 = 30 Hz, is sampled at the
rate S = 128 Hz. We acquire N samples, zero-pad them to length N_FFT, and obtain the N_FFT-point FFT.
1. What is the smallest Δf that can be resolved for
N = 256, N_FFT = 2048, using a rectangular and a von Hann (Hanning) window?
N = 512, N_FFT = 2048, using a rectangular and a von Hann window?
N = 256, N_FFT = 4096, using a rectangular and a von Hann window?
2. How do the results change if A_2 = 0.05?
(a) (Frequency Resolution) Since ΔF = Δf/S = W_M, we have Δf = SW_M. We compute:
Rectangular window: Δf = SW_M = 2S/N = 1 Hz for N = 256 and 0.5 Hz for N = 512
von Hann window: Δf = SW_M = 4S/N = 2 Hz for N = 256 and 1 Hz for N = 512
Note that N_FFT governs only the FFT spacing S/N_FFT, whereas N governs only the frequency resolution
(which does not depend on the zero-padded length). Figure E10.9A shows the FFT spectra,
plotted as continuous curves, over a selected frequency range. We make the following remarks:
1. For a given signal length N, the rectangular window resolves a smaller Δf but also has the largest
sidelobes (panels a and b). This means that the effects of leakage are more severe for a rectangular
window than for any other.
2. We can resolve a smaller Δf by increasing the signal length N alone (panel c). To resolve the
same Δf with a von Hann window, we must double the signal length N (panel d). This means that
we can improve resolution only by increasing the number of signal samples (adding more signal
information). How many more signal samples we require will depend on the desired resolution
and the type of window used.
3. We cannot resolve a smaller Δf by increasing the zero-padded length N_FFT alone (panels e and
f). This means that increasing the number of samples by zero-padding cannot improve resolution.
Zero-padding simply interpolates the DFT at a denser set of frequencies. It cannot improve
the accuracy of the DFT results because adding more zeros does not add more signal information.
Figure E10.9A DFT spectra for Example 10.9(a), plotting the magnitude against the analog frequency f [Hz]: (a) N = 256, N_FFT = 2048, no window; (b) N = 256, N_FFT = 2048, von Hann window; (c) N = 512, N_FFT = 2048, no window; (d) N = 512, N_FFT = 2048, von Hann window; (e) N = 256, N_FFT = 4096, no window; (f) N = 256, N_FFT = 4096, von Hann window
(b) (Dynamic-Range Resolution) If A_2 = 0.05 (26 dB below A_1), the large sidelobes of the rectangular
window (13 dB below the peak) will mask the second peak at 31 Hz, even if we increase N and N_FFT.
This is illustrated in Figure E10.9B(a) (where the peak magnitude is normalized to unity, or 0 dB) for
N = 512 and N_FFT = 4096. For the same values of N and N_FFT, however, the smaller sidelobes of the
von Hann window (31.5 dB below the peak) do allow us to resolve two distinct peaks in the windowed
spectrum, as shown in Figure E10.9B(b).
Figure E10.9B DFT spectra for Example 10.9(b), plotting the normalized magnitude [dB] against the analog frequency f [Hz]: (a) N = 512, N_FFT = 4096, no window; (b) N = 512, N_FFT = 4096, von Hann window
10.8.4 Detecting Hidden Periodicity Using the DFT
Given an analog signal x(t) known to contain periodic components, how do we estimate their frequencies
and magnitudes? There are several ways. Most rely on statistical estimates, especially if the signal x(t) is
also corrupted by noise. Here is a simpler, perhaps more intuitive (but by no means the best) approach
based on the effects of aliasing. The location and magnitude of the components in the DFT spectrum can
change with the sampling rate if this rate is below the Nyquist rate. Due to aliasing, the spectrum may not
drop to zero at 0.5S and may even show increased magnitudes as we move toward the folding frequency. If
we try to minimize the effects of noise by using a lowpass filter, we must ensure that its cutoff frequency
exceeds the frequency of all the components of interest present in x(t). We have no a priori way of doing
this. A better way, if the data can be acquired repeatedly, is to use the average of many runs. Averaging
minimizes noise while preserving the integrity of the signal.

A crude estimate of the sampling frequency may be obtained by observing the most rapidly varying
portions of the signal. Failing this, we choose an arbitrary but small sampling frequency, sample x(t), and
observe the DFT spectrum. We repeat the process with increasing sampling rates and observe how the DFT
spectrum changes. When the spectrum shows little change in the location of its spectral components,
we have the right spectrum and the right sampling frequency. This trial-and-error method is illustrated in
Figure 10.9 and actually depends on aliasing for its success.
Figure 10.9 Trial-and-error method for obtaining the DFT spectrum, plotting the magnitude against the analog frequency f [Hz] from 0 to 250 Hz: (a) DFT spectrum for S = 100 Hz; (b) DFT spectrum for S = 500 Hz; (c) DFT spectrum for S = 200 Hz, showing aliasing; (d) S = 1 kHz, with no change in the spectral locations
If x(t) is a sinusoid, its magnitude A is computed from X_DFT[k_0] = 0.5NA, where the index k_0 corresponds
to the peak in the N-point DFT. However, if other nonperiodic signals are also present in x(t), this may not
yield a correct result. A better estimate of the magnitude of the sinusoid may be obtained by comparing
two DFT results of different lengths, say, N_1 = N and N_2 = 2N. The N_1-point DFT at the index k_1 of
the peak will equal 0.5N_1 A plus a contribution due to the nonperiodic signals. Similarly, the N_2-point DFT
will show a peak at k_2 (where k_2 = 2k_1 if N_2 = 2N_1), and its value will equal 0.5N_2 A plus a contribution
due to the nonperiodic signals. If the nonperiodic components do not affect the spectrum significantly, the
difference in these two values will cancel out the contribution due to the nonperiodic components and yield
an estimate for the magnitude of the sinusoid from

X_DFT2[k_2] − X_DFT1[k_1] = 0.5N_2 A − 0.5N_1 A   (10.24)
EXAMPLE 10.10 (Detecting Hidden Periodicity)
A signal x(t) is known to contain a sinusoidal component. The 80-point DFT result and a comparison of
the 80-point and 160-point DFT are shown in Figure E10.10. Estimate the frequency and magnitude of the
sinusoid and its DFT index. The sampling rate is S = 10 Hz.
Figure E10.10 DFT spectra for Example 10.10, plotting the magnitude against the digital frequency F: (a) FFT of the signal computed with N_1 = 80, with peak value 47.12 at F = 0.05; (b) FFT with N_1 = 80 and N_2 = 160 (light), with peak values 47.12 and 86.48
The comparison of the two DFT results suggests a peak at F = 0.05 and the presence of a sinusoid.
Since the sampling rate is S = 10 Hz, the frequency of the sinusoid is f = FS = 0.5 Hz. Let N_1 = 80 and
N_2 = 160. The peak in the N_1-point DFT occurs at the index k_1 = 4 because F = 0.05 = k_1/N_1 = 4/80.
Similarly, the peak in the N_2-point DFT occurs at the index k_2 = 8 because F = 0.05 = k_2/N_2 = 8/160.
Since the two spectra do not differ much, except near the peak, the difference in the peak values allows us
to compute the peak value A of the sinusoid from

X_DFT2[k_2] − X_DFT1[k_1] = 86.48 − 47.12 = 0.5N_2 A − 0.5N_1 A = 40A

Thus, A = 0.984, which implies the presence of the 0.5-Hz sinusoidal component 0.984 cos(πt + θ).
Comment: The DFT results shown in Figure E10.10 are actually for the signal x(t) = cos(t) + e
t
,
sampled at S = 10 Hz. The sinusoidal component has unit peak value, and the DFT estimate (A = 0.984)
diers from this value by less than 2%. Choosing larger DFT lengths would improve the accuracy of the
estimate. However, the 80-point DFT alone yields the estimate A = 47.12/40 = 1.178 (an 18% dierence),
whereas the 160-point DFT alone yields A = 86.48/80 = 1.081 (an 8% dierence).
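The two-length estimate is easy to reproduce numerically. The sketch below is a minimal illustration (not the book's code); the signal follows the Comment above, x(t) = cos(πt) + e^(−t) sampled at S = 10 Hz:

```python
import numpy as np

S = 10.0                        # sampling rate (Hz)
N1, N2 = 80, 160                # the two DFT lengths
n = np.arange(N2)
x = np.cos(np.pi * n / S) + np.exp(-n / S)   # cos(pi*t) + exp(-t) sampled at S

X1 = np.abs(np.fft.fft(x[:N1]))
X2 = np.abs(np.fft.fft(x))
k1, k2 = 4, 8                   # peak indices: F = 0.05 = k1/N1 = k2/N2

A_two = (X2[k2] - X1[k1]) / (0.5 * N2 - 0.5 * N1)  # two-length estimate
A_one = X1[k1] / (0.5 * N1)                        # single-DFT estimate
```

The two-length estimate `A_two` lands much closer to the true unit amplitude than the single-DFT estimate `A_one`, in line with the example.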
10.9 Applications in Signal Processing
The applications of the DFT and FFT span a wide variety of disciplines. Here, we briefly describe some
applications directly related to digital signal processing.
10.9.1 Convolution of Long Sequences
A situation that often arises in practice is the processing of a long stream of incoming data by a filter whose
impulse response is much shorter than the incoming data itself. The convolution of a short sequence
h[n] of length N (such as an averaging filter) with a very long sequence x[n] of length L ≫ N (such as an
incoming stream of data) can involve large amounts of computation and memory. There are two preferred
alternatives, both of which are based on sectioning the long sequence x[n] into shorter ones. The DFT offers
a useful means of finding such a convolution. It even allows on-line implementation if we can tolerate a small
processing delay.
The Overlap-Add Method
Suppose h[n] is of length N, and the length of x[n] is L = mN (if not, we can always zero-pad it to this
length). We partition x[n] into m segments x0[n], x1[n], . . . , x_(m−1)[n], each of length N. We find the regular
convolution of each section with h[n] to give the partial results y0[n], y1[n], . . . , y_(m−1)[n]. Using superposition,
the total convolution is the sum of their shifted (by multiples of N) versions

y[n] = y0[n] + y1[n − N] + y2[n − 2N] + · · · + y_(m−1)[n − (m−1)N]        (10.25)

Since each regular convolution contains 2N − 1 samples, we zero-pad h[n] and each section xk[n] with N − 1
zeros before finding yk[n] using the FFT. Splitting x[n] into equal-length segments is not a strict requirement.
We may use sections of different lengths, provided we keep track of how much each partial convolution must
be shifted before adding the results.
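The overlap-add recipe can be sketched in a few lines. This is a minimal illustration, not the book's code; the section length N and the FFT size 2N − 1 are the choices described above:

```python
import numpy as np

def overlap_add(x, h):
    """Convolve a long x with a short h by sectioning x into blocks of
    length N = len(h), convolving each block via the FFT, and adding the
    partial results shifted by multiples of N."""
    N = len(h)
    L = len(x)
    y = np.zeros(L + N - 1)
    Nfft = 2 * N - 1                   # each regular convolution has 2N - 1 samples
    H = np.fft.rfft(h, Nfft)           # FFT of the short sequence, found only once
    for start in range(0, L, N):
        seg = x[start:start + N]       # the last segment may be shorter
        yk = np.fft.irfft(np.fft.rfft(seg, Nfft) * H, Nfft)
        y[start:start + len(seg) + N - 1] += yk[:len(seg) + N - 1]
    return y
```

With x[n] = {1, 2, 3, 3, 4, 5} and h[n] = {1, 1, 1}, this reproduces the result of Example 10.11 below.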
The Overlap-Save Method
The regular convolution of sequences of length L and N has L + N − 1 samples. If L > N and we zero-pad
the second sequence to length L, their regular convolution has 2L − 1 samples, while their periodic (length-L)
convolution wraps the excess samples around. The first N − 1 samples of the periodic convolution are
contaminated by wraparound, and the rest correspond to the regular convolution. To understand this, let
L = 16 and N = 7. If we pad the shorter sequence with nine zeros, the regular convolution has 31 (or 2L − 1)
samples with nine trailing zeros (L − N = 9). For the periodic convolution, 15 samples (L − 1 = 15) are
wrapped around. Since the last nine (or L − N) are zeros, only the first six samples ((L − 1) − (L − N) =
N − 1 = 6) of the periodic convolution are contaminated by wraparound. This idea is the basis for the
overlap-save method. First, we add N − 1 leading zeros to the longer sequence x[n] and section it into k
overlapping (by N − 1) segments of length M. Typically, we choose M ≈ 2N. Next, we zero-pad h[n] (with
trailing zeros) to length M, and find the periodic convolution of h[n] with each section of x[n]. Finally, we
discard the first N − 1 (contaminated) samples from each convolution and glue (concatenate) the results to
give the required convolution.
In either method, the FFT of the shorter sequence need be found only once, stored, and reused for
all subsequent partial convolutions. Both methods allow on-line implementation if we can tolerate a small
processing delay that equals the time required for each section of the long sequence to arrive at the processor
(assuming the time taken for finding the partial convolutions is less than this processing delay). The
correlation of two sequences may also be found in exactly the same manner, using either method, provided
we use a folded version of one sequence.
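A matching sketch of the overlap-save method (again a minimal illustration; M = 2N is taken as the default section length, as suggested above):

```python
import numpy as np

def overlap_save(x, h, M=None):
    """Convolve x with a short h using overlapping sections of length M.
    The first N - 1 samples of each periodic convolution are contaminated
    by wraparound and are discarded."""
    N = len(h)
    M = M if M is not None else 2 * N
    step = M - (N - 1)                      # fresh samples consumed per section
    Ly = len(x) + N - 1                     # length of the required convolution
    xp = np.concatenate([np.zeros(N - 1), x, np.zeros(M)])  # N-1 leading zeros
    H = np.fft.rfft(h, M)                   # h zero-padded (trailing zeros) to M
    out = []
    for start in range(0, Ly, step):
        seg = xp[start:start + M]
        yk = np.fft.irfft(np.fft.rfft(seg, M) * H, M)  # periodic convolution
        out.append(yk[N - 1:])              # discard the contaminated samples
    return np.concatenate(out)[:Ly]
```

With x[n] = {1, 2, 3, 3, 4, 5}, h[n] = {1, 1, 1}, and M = 5, the intermediate periodic convolutions match those worked out in Example 10.11 below.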
EXAMPLE 10.11 (Overlap-Add and Overlap-Save Methods of Convolution)
Let x[n] = {1, 2, 3, 3, 4, 5} and h[n] = {1, 1, 1}. Here L = 6 and N = 3.
(a) To find their convolution using the overlap-add method, we section x[n] into two sequences given by
x0[n] = {1, 2, 3} and x1[n] = {3, 4, 5}, and obtain the two convolution results:

y0[n] = x0[n] ⋆ h[n] = {1, 3, 6, 5, 3}        y1[n] = x1[n] ⋆ h[n] = {3, 7, 12, 9, 5}

Shifting and superposition results in the required convolution y[n] as

y[n] = y0[n] + y1[n − 3] = {1, 3, 6, 5, 3} + {0, 0, 0, 3, 7, 12, 9, 5} = {1, 3, 6, 8, 10, 12, 9, 5}

This result can be confirmed using any of the convolution algorithms described in the text.
(b) To find their convolution using the overlap-save method, we start by creating the zero-padded sequence
x[n] = {0, 0, 1, 2, 3, 3, 4, 5}. If we choose M = 5, we get three overlapping sections of x[n] (we need to
zero-pad the last one) described by

x0[n] = {0, 0, 1, 2, 3}        x1[n] = {2, 3, 3, 4, 5}        x2[n] = {4, 5, 0, 0, 0}

The zero-padded h[n] becomes h[n] = {1, 1, 1, 0, 0}. Periodic convolution gives

x0[n] ⊛ h[n] = {5, 3, 1, 3, 6}
x1[n] ⊛ h[n] = {11, 10, 8, 10, 12}
x2[n] ⊛ h[n] = {4, 9, 9, 5, 0}

We discard the first two samples from each convolution and glue the results to obtain

y[n] = x[n] ⋆ h[n] = {1, 3, 6, 8, 10, 12, 9, 5, 0}

Note that the last sample (due to the zero-padding) is redundant, and may be discarded.
10.9.2 Deconvolution
Given a signal y[n] that represents the output of some system with impulse response h[n], how do we recover
the input x[n], where y[n] = x[n] ⋆ h[n]? One method is to undo the effects of convolution using deconvolution.
The time-domain approach to deconvolution was studied earlier. Here, we examine a frequency-domain
alternative based on the DFT (or FFT).
The idea is to transform the convolution relation using the FFT to obtain Y_FFT[k] = X_FFT[k]H_FFT[k],
compute X_FFT[k] = Y_FFT[k]/H_FFT[k] by pointwise division, and then find x[n] as the IFFT of X_FFT[k].
This process does work in many cases, but it has two disadvantages. First, it fails if H_FFT[k] equals zero at
some index, because we get division by zero. Second, it is quite sensitive to noise in the input x[n] and to
the accuracy with which y[n] is known.
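A minimal sketch of this FFT-based deconvolution (illustrative only; it assumes, as the text warns, that the FFT of h[n] has no zeros):

```python
import numpy as np

def fft_deconvolve(y, h, n_in):
    """Recover the first n_in samples of x from y = x (conv) h by pointwise
    spectral division. Fails if H_FFT[k] = 0 at any index, and is very
    sensitive to noise in y."""
    Nfft = len(y)
    H = np.fft.fft(h, Nfft)
    X = np.fft.fft(y) / H              # X_FFT[k] = Y_FFT[k] / H_FFT[k]
    return np.fft.ifft(X).real[:n_in]

# recover x = {1, 2, 3} from its convolution with h = {1, 0.5}
x_rec = fft_deconvolve(np.convolve([1.0, 2.0, 3.0], [1.0, 0.5]), np.array([1.0, 0.5]), 3)
```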
10.9.3 Band-Limited Signal Interpolation
Interpolation of x[n] by M to a new signal x_I[n] is equivalent to a sampling rate increase by M. If the
signal has been sampled above the Nyquist rate, signal interpolation should add no new information to the
spectrum. The idea of zero-padding forms the basis for an interpolation method using the DFT, in the sense
that the (MN)-sample DFT of the interpolated signal should contain N samples corresponding to the DFT
of x[n], while the rest should be zero. Thus, if we find the DFT of x[n], zero-pad it (by inserting zeros about
the folding index) to increase its length to MN, and find the inverse DFT of the zero-padded sequence, we
should obtain the interpolated signal x_I[n]. This approach works well for band-limited signals (such as pure
sinusoids sampled above the Nyquist rate over an integer number of periods). To implement this process,
we split the N-point DFT X_DFT[k] of x[n] about the folding index N/2. If N is even, the folding index falls
on the sample value X[N/2], and it must also be split in half. We then insert enough zeros in the middle to
create a padded sequence X_zp[k] with MN samples. The interpolated signal then follows as the scaled
inverse DFT

x_I[n] = M IDFT{X_zp[k]}        (10.27)

where the factor M compensates for the 1/(MN) factor in the longer inverse DFT.
This method is entirely equivalent to creating a zero-interpolated signal (which produces spectrum replication)
and filtering the replicated spectrum (by zeroing out the spurious images). For periodic band-limited
signals sampled above the Nyquist rate for an integer number of periods, the interpolation is exact. For all
others, imperfections show up as a poor match, especially near the ends, since we are actually interpolating
to zero outside the signal duration.
EXAMPLE 10.12 (Signal Interpolation Using the FFT)
(a) For a sinusoid sampled over one period with four samples, we obtain the signal x[n] = {0, 1, 0, −1}.
Its DFT is X_DFT[k] = {0, −j2, 0, j2}. To interpolate this by M = 8, we generate the 32-sample zero-padded
sequence X_zp[k] = {0, −j2, 0, (27 zeros), 0, j2}. The interpolated sequence (the scaled IDFT of
X_zp[k]) shows an exact match to the sinusoid, as illustrated in Figure E10.12(a).
Figure E10.12 Interpolated sinusoids for Example 10.12
[Figure: amplitude versus time t. Panel (a): interpolated sinusoid, 4 samples over one period. Panel (b): interpolated sinusoid, 4 samples over a half-period.]
(b) For a sinusoid sampled over a half-period with four samples, interpolation does not yield exact results,
as shown in Figure E10.12(b). Since we are actually sampling one period of a full-rectified sine (the
periodic extension), the signal is not band-limited, and the chosen sampling frequency is too low. This
shows up as a poor match, especially near the ends of the sequence.
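The zero-padding procedure for even N can be sketched as follows (a minimal illustration; the splitting of the folding-index sample and the scale factor M follow the discussion above):

```python
import numpy as np

def dft_interpolate(x, M):
    """Band-limited interpolation by M: zero-pad the DFT about the folding
    index N/2 (splitting that sample in half) and take M times the IDFT.
    Assumes len(x) is even and at least 4."""
    N = len(x)
    X = np.fft.fft(x)
    Xzp = np.zeros(M * N, dtype=complex)
    Xzp[:N // 2] = X[:N // 2]               # samples below the folding index
    Xzp[N // 2] = 0.5 * X[N // 2]           # folding-index sample, split in half
    Xzp[-(N // 2)] = 0.5 * X[N // 2]
    Xzp[-(N // 2) + 1:] = X[N // 2 + 1:]    # samples above the folding index
    return M * np.fft.ifft(Xzp).real        # factor M restores the amplitude

# Example 10.12(a): four samples of a sine over one period, interpolated by 8
xi = dft_interpolate(np.array([0.0, 1.0, 0.0, -1.0]), 8)
```

The 32 interpolated samples match sin(2πn/32) exactly, as the example claims.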
10.9.4 The Discrete Hilbert Transform
The Hilbert transform describes an operation that shifts the phase of a signal x(t) by −π/2. In the analog
domain, the phase shift can be achieved by passing x(t) through a filter whose transfer function H(f) is

H(f) = −j sgn(f) = −j (f > 0),  j (f < 0)        (10.28)

In the time domain, the phase-shifted signal x̂(t) is given by the convolution

x̂(t) = (1/πt) ⋆ x(t)        (10.29)

The phase-shifted signal x̂(t) defines the Hilbert transform of x(t). The spectrum X̂(f) of the Hilbert-transformed
signal equals the product of X(f) with the transform of 1/πt. In other words,

X̂(f) = −j sgn(f) X(f)        (10.30)

A system that shifts the phase of a signal by −π/2 is called a Hilbert transformer or a quadrature filter.
Such a system can be used to generate a single-sideband amplitude-modulated (SSB AM) signal.
Unlike most other transforms, the Hilbert transform belongs to the same domain as the signal transformed.
Due to the singularity in 1/πt at t = 0, a formal evaluation of this convolution requires complex
variable theory.
The discrete Hilbert transform of a sequence may be obtained by using the FFT. The Hilbert
transform of x[n] involves the convolution of x[n] with the impulse response h[n] of the Hilbert transformer,
whose spectrum is H(F) = −j sgn(F). The easiest way to perform this convolution is by FFT methods.
We find the N-point FFT of x[n] and multiply by the periodic extension (from k = 0 to k = N − 1) of N
samples of sgn(F), which may be written as

sgn[k] = {0, 1, 1, . . . , 1, 0, −1, −1, . . . , −1}        (10.31)

where the run of 1s (indices k = 1 to k = N/2 − 1) and the run of −1s (indices k = N/2 + 1 to k = N − 1)
each contain N/2 − 1 samples.
The inverse FFT of the product, multiplied by the omitted factor −j, yields the Hilbert transform.
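The FFT recipe above can be written directly (a minimal sketch for even N; the sgn sequence is zero at k = 0 and at the folding index N/2):

```python
import numpy as np

def hilbert_fft(x):
    """Discrete Hilbert transform: multiply the FFT of x by -j*sgn[k]
    and invert. Assumes len(x) is even."""
    N = len(x)
    sgn = np.zeros(N)
    sgn[1:N // 2] = 1.0                  # "positive" frequencies
    sgn[N // 2 + 1:] = -1.0              # "negative" frequencies
    return np.fft.ifft(-1j * sgn * np.fft.fft(x)).real

# the Hilbert transform of a cosine is the corresponding sine (-90 degree shift)
n = np.arange(64)
xh = hilbert_fft(np.cos(2 * np.pi * 4 * n / 64))
```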
10.10 Spectrum Estimation
The power spectral density (PSD) R_xx(f) of an analog power signal or random signal x(t) is the Fourier
transform of its autocorrelation function r_xx(t) and is a real, non-negative, even function, with R_xx(0) equal
to the average power in x(t). If a signal x(t) is sampled and available only over a finite duration, the best
we can do is estimate the PSD of the underlying signal x(t) from the given finite record. This is because the
spectrum of finite sequences suffers from leakage and poor resolution. The PSD estimate of a noisy analog
signal x(t) from a finite number of its samples is based on two fundamentally different approaches. The first,
non-parametric spectrum estimation, makes no assumptions about the data. The second, parametric
spectrum estimation, models the data as the output of a digital filter excited by a noise input with a
constant power spectral density, estimates the filter coefficients, and in so doing arrives at an estimate of the
true PSD. Both rely on statistical measures to establish the quality of the estimate.
10.10.1 The Periodogram Estimate
The simplest non-parametric estimate is the periodogram P[k], which is based on the DFT of an N-sample
sequence x[n]. It is defined as

P[k] = (1/N)|X_DFT[k]|²        (10.32)

Although P[k] provides a good estimate for deterministic, band-limited, power signals sampled above the
Nyquist rate, it yields poor estimates for noisy signals because the quality of the estimate does not improve
with increasing record length N (even though the spectral spacing decreases).
EXAMPLE 10.13 (The Concept of the Periodogram)
The sequence x[n] = {0, 1, 0, −1} is obtained by sampling a sinusoid for one period at twice the Nyquist
rate. Find its periodogram estimate.
The DFT of x[n] is X_DFT[k] = {0, −j2, 0, j2}. Thus, P[k] = (1/N)|X_DFT[k]|² = {0, 1, 0, 1}.
This is usually plotted as a bar graph, with each sample occupying a bin width ΔF = 1/N = 0.25.
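The computation in Example 10.13 is a one-liner with the FFT (a minimal sketch; summing P[k] over the bin width ΔF = 1/N recovers the total power):

```python
import numpy as np

x = np.array([0.0, 1.0, 0.0, -1.0])      # one period of a sampled sinusoid
N = len(x)
P = np.abs(np.fft.fft(x)) ** 2 / N       # periodogram P[k] = |X_DFT[k]|^2 / N
total_power = (1.0 / N) * P.sum()        # bin width times the sum of P[k]
```

Here `P` equals {0, 1, 0, 1} and `total_power` equals 0.5, the average power of a unit-amplitude sinusoid.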
The total power thus equals ΔF Σ|W_DFT[k]|² (or (1/M) Σ|w[n]|²). Sections of larger length result in a
smaller spectral spacing, whereas more sections result in a smoother estimate (at the risk of masking sharp
details). The number of sections chosen is a trade-off between decreasing the spectral spacing and smoothing
the estimate. Averaging the results of the (presumably uncorrelated) segments reduces the statistical
fluctuations (but at the expense of a larger spectral spacing). In the related Bartlett method, the segments
are neither overlapped nor windowed. Figure 10.10(a) shows the Welch PSD of a 400-point chirp signal,
whose digital frequency varies from F = 0.2 to F = 0.4, using a 45% overlap and a 64-point von Hann
window (shown dark), and a rectangular window (no window). Note how the windowing results in
significant smoothing.
Figure 10.10 Welch and Tukey PSD of a chirp signal
[Figure: magnitude versus digital frequency F (−0.5 to 0.5). Panel (a): Welch PSD of the chirp (0.2 ≤ F ≤ 0.4), N = 400, with no window and with a von Hann window. Panel (b): Tukey PSD of the chirp (0.2 ≤ F ≤ 0.4).]
10.10.3 PSD Estimation by the Blackman-Tukey Method
The Blackman-Tukey method relies on finding the PSD from the windowed autocorrelation, which is
assumed to be zero past the window length. The N-sample signal x[n] is zero-padded to get the 2N-sample
signal y[n]. The linear autocorrelation of x[n] equals the periodic autocorrelation of y[n] and is evaluated by
finding the FFT Y_FFT[k] and taking the IFFT of Y_FFT[k]Y*_FFT[k] = |Y_FFT[k]|² to obtain the 2N-sample
autocorrelation estimate r_xx[n]. This autocorrelation is then windowed by an M-point window (to smooth
the spectrum and reduce the effects of poor autocorrelation estimates due to the finite data length). The FFT
of the M-sample windowed autocorrelation yields the smoothed periodogram. Using a smaller M (narrower
windows) for the autocorrelation provides greater smoothing but may also mask any peaks or obscure the
sharp details. Typical values of the window length M range from M = 0.1N to M = 0.5N. Only windows
whose transform is entirely positive should be used. Of the few that meet this constraint, the most commonly
used is the Bartlett (triangular) window. Figure 10.10(b) shows the Tukey PSD of a 1000-point chirp signal,
whose digital frequency varies from F = 0.2 to F = 0.4, using a 64-point Bartlett window.
The Welch method is more useful for detecting closely spaced peaks of similar magnitudes (frequency
resolution). The Blackman-Tukey method is better for detecting well-separated peaks with different magnitudes
(dynamic-range resolution). Neither is very effective for short data lengths.
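The core identity used by the Blackman-Tukey method, that the linear autocorrelation of x[n] equals the periodic autocorrelation of the zero-padded signal y[n], is easy to check numerically (a minimal sketch, not the full estimator):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
N = len(x)
Y = np.fft.fft(x, 2 * N)                   # FFT of the zero-padded signal y[n]
r = np.fft.ifft(Y * np.conj(Y)).real       # periodic autocorrelation of y[n]
# rearrange the circular result into lags -(N-1) .. (N-1)
r_lags = np.concatenate([r[-(N - 1):], r[:N]])
r_linear = np.correlate(x, x, mode="full") # direct linear autocorrelation
```

The two results agree sample for sample; in the Blackman-Tukey method, this FFT-based autocorrelation is then windowed and transformed again.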
10.10.4 Non-Parametric System Identification
The Fourier transform R_yx(f) of the cross-correlation r_yx(t) = y(t) ⋆ x(−t) of two random
signals x(t) and y(t) is called the cross-spectral density. For a system with impulse response h(t), the
input x(t) yields the output y(t) = x(t) ⋆ h(t), and we have

r_yx(t) = y(t) ⋆ x(−t) = h(t) ⋆ x(t) ⋆ x(−t) = h(t) ⋆ r_xx(t)        (10.33)

The Fourier transform of both sides yields

R_yx(f) = H(f)R_xx(f)        H(f) = R_yx(f)/R_xx(f)        (10.34)

This relation allows us to identify an unknown transfer function H(f) as the ratio of the cross-spectral
density R_yx(f) and the PSD R_xx(f) of the input x(t). If x(t) is a noise signal with a constant power spectral
density, its PSD is a constant of the form R_xx(f) = K. Then, H(f) = R_yx(f)/K is directly proportional to
the cross-spectral density.
Figure 10.11 Spectrum of an IIR filter and its non-parametric FIR filter estimate
[Figure: magnitude versus digital frequency F (0 to 0.5), showing the spectrum of an IIR filter and its 20-point FIR approximation (dark).]
This approach is termed non-parametric because it presupposes no model for the system. In practice,
we use the FFT to approximate R_xx(f) and R_yx(f) (using the Welch method, for example) by the finite
N-sample sequences R_xx[k] and R_yx[k]. As a consequence, the transfer function H[k] = R_yx[k]/R_xx[k] also
has N samples and describes an FIR filter. Its inverse FFT yields the N-sample impulse response h[n].
Figure 10.11 shows the spectrum of an IIR filter defined by y[n] − 0.5y[n − 1] = x[n] (or h[n] = (0.5)^n u[n]),
and its 20-point FIR estimate obtained by using a 400-sample noise sequence and the Welch method.
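A minimal numerical sketch of this identification (illustrative, and not the book's exact setup: it uses a single full-length FFT rather than Welch-averaged segment estimates, and a 20-sample truncation of h[n] = (0.5)^n u[n]):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)              # white-noise input (flat PSD)
h_true = 0.5 ** np.arange(20)              # h[n] = (0.5)^n, truncated to 20 samples
y = np.convolve(x, h_true)                 # system output

Nfft = len(y)
X = np.fft.fft(x, Nfft)
Y = np.fft.fft(y)
Ryx = Y * np.conj(X)                       # cross-spectral density estimate
Rxx = X * np.conj(X)                       # input PSD estimate
h_est = np.fft.ifft(Ryx / Rxx).real[:20]   # IFFT of H[k] = Ryx[k]/Rxx[k]
```

The recovered `h_est` matches the truncated (0.5)^n impulse response; with real (noisy, finite) measurements, the segment-averaged Welch estimates of Ryx and Rxx take the place of the single-FFT ratios.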
10.10.5 Time-Frequency Plots
In practical situations, we are often faced with the task of finding how the spectrum of signals varies with
time. A simple approach is to section the signal into overlapping segments, window each section to reduce
leakage, and find the PSD (using the Welch method, for example). The PSD for each section is then staggered
and stacked to generate what is called a waterfall plot or time-frequency plot. Figure 10.12 shows such
a waterfall plot for the sum of a single-frequency sinusoid and a chirp signal whose frequency varies linearly
with time. It provides a much better visual indication of how the spectrum evolves with time.
Figure 10.12 Time-frequency plot for the sum of a sinusoid and a chirp signal
[Figure: panel (a) shows a 20-Hz sine plus a 60-100 Hz chirp signal (amplitude versus time in seconds); panel (b) shows its time-frequency (waterfall) plot (time in seconds versus analog frequency f [Hz]).]
The fundamental restriction in obtaining the time-frequency information, especially for short time records,
is that we cannot localize both time and frequency to arbitrary precision. Recent approaches are based on
expressing signals in terms of wavelets (much like the Fourier series). Wavelets are functions that show the
best possible localization characteristics in both the time-domain and the frequency-domain, and are an
area of intense ongoing research.
10.11 The Cepstrum and Homomorphic Filtering
Consider a signal x[n]. If we take the complex logarithm of its DTFT X(F) to get X_K(F) = ln X(F), the
inverse transform x_K[n] is called the cepstrum (pronounced kepstrum) of x[n]. We have

X_K(F) = ln X(F)        x_K[n] = ∫ from −1/2 to 1/2 [ln X(F)] e^(j2πnF) dF        (10.35)

We can recover X(F) from X_K(F) by taking complex exponentials to give

X(F) = |X(F)| e^(jφ(F)) = e^(X_K(F))

This allows us to obtain X_K(F) from the magnitude and phase of X(F) as follows:

X_K(F) = ln X(F) = ln|X(F)| + jφ(F)        (10.36)

The cepstrum x_K[n] may then be expressed as

x_K[n] = ∫ from −1/2 to 1/2 (ln|X(F)|) e^(j2πnF) dF + j ∫ from −1/2 to 1/2 φ(F) e^(j2πnF) dF        (10.37)

In other words, the cepstrum is a complex quantity. Its real part corresponds to the IDTFT of ln|X(F)|
and is called the real cepstrum. The existence of the cepstrum is restricted by the nature of X(F).
The cepstrum does not exist if X(F) = 0 over a range of frequencies, because ln|X(F)| is undefined. The
cepstrum does not exist if X(F) is bipolar (zero at isolated frequencies where it changes sign) and exhibits
phase jumps of π. In fact, the cepstrum does not exist unless the phase φ(F) is a continuous single-valued
function of frequency (with no phase jumps of any kind) and equals zero at F = 0 and F = 0.5. We can get
around these problems by working with the unwrapped phase (to eliminate any phase jumps of 2π) and,
when necessary, adding a large enough positive constant to X(F) to ensure X(F) > 0 for all frequencies
(and eliminate any phase jumps of π).
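The real cepstrum (the IDTFT of ln|X(F)|, approximated via the FFT) takes only a couple of lines. A minimal sketch, tried on a simple sequence whose cepstrum is known in closed form (for x[n] = δ[n] + a δ[n − 1] with |a| < 1, the real cepstrum at n = 1 equals a/2):

```python
import numpy as np

def real_cepstrum(x, Nfft=1024):
    """Real cepstrum: inverse FFT of the log-magnitude spectrum.
    Requires |X(F)| > 0; a large Nfft reduces cepstral aliasing."""
    X = np.fft.fft(x, Nfft)
    return np.fft.ifft(np.log(np.abs(X))).real

c = real_cepstrum(np.array([1.0, 0.5]))   # x[n] = delta[n] + 0.5*delta[n-1]
```

Here c[1] is close to 0.25 and c[2] to −0.0625, matching the series expansion of ln(1 + 0.5e^(−j2πF)).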
10.11.1 Homomorphic Filters and Deconvolution
Deconvolution is a process of extracting the input signal from a filter output and its transfer function. Since
the convolution y[n] = x[n] ⋆ h[n] transforms to the product Y(F) = X(F)H(F), we can extract x[n] from
the inverse transform of the ratio X(F) = Y(F)/H(F). We may also perform deconvolution using cepstral
transformations. The idea is to first find the cepstrum of y[n] by starting with Y(F) = X(F)H(F) to obtain

Y_K(F) = ln Y(F) = ln X(F) + ln H(F) = X_K(F) + H_K(F)        (10.38)

Inverse transformation gives

y_K[n] = x_K[n] + h_K[n]        (10.39)

Notice how the convolution operation transforms to a summation in the cepstral domain:

y[n] = x[n] ⋆ h[n]  ⟺  y_K[n] = x_K[n] + h_K[n]        (10.40)

Systems capable of this transformation are called homomorphic systems.
Systems capable of this transformation are called homomorphic systems.
If x
K
[n] and h
K
[n] lie over dierent ranges, we may extract x
K
[n] by using a time window w[n] whose
samples equal 1 over the extent of x
K
[n] and zero over the extent of h
K
[n]. This operation is analogous
to ltering (albeit in the time domain) and is often referred to as homomorphic ltering. Finally, we
can use x
K
[n] to recover x[n] by rst nding X
K
(F), then take its exponential to give X(F) and nally
evaluating its inverse transform. A homomorphic lter performs deconvolution in several steps as outlined
in the following review panel.
REVIEW PANEL 10.21
Homomorphic Filtering and Cepstral Analysis
Transform of Cepstrum: Y_K(F) = ln Y(F) = ln[X(F)H(F)] = ln X(F) + ln H(F) = X_K(F) + H_K(F)
Cepstrum: y_K[n] = x_K[n] + h_K[n]
Windowed Cepstrum: y_K[n]w[n] = x_K[n]
Signal Recovery: x_K[n] (DTFT) X_K(F)        e^(X_K(F)) = X(F)

Cepstral methods and homomorphic filtering have found practical applications in various fields, including
deconvolution, image processing (restoring degraded images), communications (echo cancellation), speech
processing (pitch detection, dynamic range expansion, digital restoration of old audio recordings), and seismic
signal processing.
10.11.2 Echo Detection and Cancellation
Cepstral methods may be used to detect and remove unwanted echoes from a signal. We illustrate echo
detection and cancellation by considering a simple first-order echo system described by

y[n] = x[n] + αx[n − D]        (10.41)

Here, αx[n − D] describes the echo of strength α delayed by D samples. It is reasonable to assume that
α < 1, since the echo magnitude will be less than that of the original signal x[n]. In the frequency domain,
we get

Y(F) = X(F) + αX(F)e^(−j2πFD)

The transfer function of the system that produces the echo may be written as

H(F) = Y(F)/X(F) = 1 + αe^(−j2πFD)        (10.42)

Now, we find H_K(F) = ln H(F) to yield the transform of the cepstrum h_K[n] as

H_K(F) = ln H(F) = ln[1 + αe^(−j2πFD)]        (10.43)

Note that H_K(F) is periodic in F with period 1/D and, as a consequence, the cepstrum h_K[n] describes a
signal with sample spacing D. To formalize this result, we invoke the series expansion

ln(1 + r) = Σ from k=1 to ∞ of (−1)^(k+1) r^k / k,        |r| < 1

With r = αe^(−j2πFD) in the above result, we can express H_K(F) as

H_K(F) = ln[1 + αe^(−j2πFD)] = Σ from k=1 to ∞ of (−1)^(k+1) (α^k / k) e^(−j2πkFD)        (10.44)

Its inverse transform leads to the cepstrum h_K[n] of the echo-producing system as

h_K[n] = Σ from k=1 to ∞ of (−1)^(k+1) (α^k / k) δ[n − kD]        (10.45)

Note that this cepstrum h_K[n] is an impulse train with impulses located at integer multiples kD of the
delay D. The impulses alternate in sign, and their amplitude decreases with the index k.
Since y[n] = x[n] ⋆ h[n], its cepstral transformation leads to

y_K[n] = x_K[n] + h_K[n]  ⟺  Y_K(F) = X_K(F) + H_K(F)        (10.46)

We see that the cepstrum y_K[n] of the contaminated signal y[n] equals the sum of the cepstrum x_K[n] of the
original signal x[n] and an impulse train h_K[n] whose decaying impulses alternate in sign and are located at
multiples of the delay D. If the spectra X_K(F) and H_K(F) occupy different frequency bands, H_K(F) may
be eliminated by ordinary frequency-domain filtering, leading to the recovery of x[n]. Since h_K[n] contains
impulses at multiples of D, we may also eliminate the unwanted echoes in the cepstral domain itself by using
a comb filter whose weights are zero at integer multiples of D and unity elsewhere. Naturally, the comb
filter can be designed only if the delay D is known a priori. Even if D is not known, its value may still be
estimated from the location of the impulses in the cepstrum.
As an example of echo cancellation by homomorphic signal processing, consider the clean signal x[n] =
(0.9)^n u[n] contaminated by its echo delayed by 20 samples and with a strength of 0.8. The contaminated
signal y[n] is then given by

y[n] = x[n] + 0.8x[n − 20]

Our objective is to recover the clean signal x[n] using cepstral techniques. A 64-sample portion of x[n] and
y[n] is shown in Figure 10.13(b). The complex cepstrum of the contaminated signal y[n] is displayed in
Figure 10.13(a) and is a practical approximation based on the 64-point FFT (to approximate the DTFT).
Figure 10.13 Complex cepstrum (a) of the echo signal in (b)
[Figure: panel (a) shows the complex cepstrum of the echo signal; panel (b) shows the original and echo signal, N = 64.]
The appearance of the cepstral spikes (deltas) at n = 20, 40, 60 indicates a delay of D = 20 as expected.
Once these spikes are removed, we get the cepstrum shown in Figure 10.14(a). The signal corresponding to
this cepstrum is shown in Figure 10.14(b) and clearly reveals the absence of the echo signal.
Figure 10.14 Complex cepstrum (a) with spikes removed and its corresponding signal (b)
[Figure: panel (a) shows the cepstrum with the deltas removed; panel (b) shows the original and echo-cancelled signal.]
However, the correspondence with the clean signal is not perfect. The reason is that the complex cepstrum
still contains spikes at the indices n = 80, 100, 120 . . . that get aliased to n = 16, 36, 56, . . .. These aliases are
visible in Figure 10.14(a) and are the consequence of using the FFT to approximate the true cepstrum. Once
these aliased spikes are also removed, we get the cepstrum of Figure 10.15(a). The signal corresponding to
this cepstrum, shown in Figure 10.15(b), is now an almost exact replica of the original clean signal.
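The whole procedure can be sketched numerically. This is a minimal illustration of the example above, with N = 256 rather than 64 so that cepstral aliasing is negligible; the phase-unwrapping step is the practical route to a single-valued phase:

```python
import numpy as np

N, D, alpha = 256, 20, 0.8
n = np.arange(N)
x = 0.9 ** n                                   # clean signal x[n] = (0.9)^n u[n]
y = x + alpha * np.concatenate([np.zeros(D), x[:-D]])  # y[n] = x[n] + 0.8 x[n-20]

# complex cepstrum via the FFT (log magnitude plus unwrapped phase)
Y = np.fft.fft(y)
cep = np.fft.ifft(np.log(np.abs(Y)) + 1j * np.unwrap(np.angle(Y))).real

# the echo shows up as spikes at multiples of D; search beyond the low-time
# region, which is dominated by the signal's own (smoothly decaying) cepstrum
D_est = np.argmax(np.abs(cep[10:N // 2])) + 10

# comb-filter the cepstrum (zero it at multiples of D) and transform back
cep_f = cep.copy()
cep_f[D_est::D_est] = 0.0
x_rec = np.fft.ifft(np.exp(np.fft.fft(cep_f))).real
```

The detected delay is 20 samples, and the recovered signal is close to the clean x[n]; the small residual comes from the aliased spikes discussed above and from the signal's own cepstral samples at multiples of D.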
Figure 10.15 Complex cepstrum (a) with aliased spikes removed and its corresponding signal (b)
[Figure: panel (a) shows the cepstrum with the aliased deltas removed; panel (b) shows the original and echo-cancelled signal.]
10.12 Optimal Filtering
For a filter with impulse response h[n], the output y[n] is described by the convolution y[n] = x[n] ⋆ h[n].
Ideally, it is possible to recover x[n] by deconvolution. However, if the output is contaminated by noise or
interference, an optimal filter provides a means for recovering the input signal. Such a filter is optimized in
the sense that its output, when deconvolved by h[n], results in a signal x̂[n] that is the best (in some sense)
approximation to x[n]. Suppose the contaminated response w[n] equals the sum of the ideal response y[n]
and a noise component s[n], such that

w[n] = y[n] + s[n]        (ideal output plus noise)        (10.47)

If we pass this signal through an optimal filter with a transfer function Φ(F), the filter output in the frequency
domain equals W(F)Φ(F). When we deconvolve this by h[n], we obtain x̂[n]. In the frequency domain, this
is equivalent to finding X̂(F) as the ratio

X̂(F) = W(F)Φ(F)/H(F) = Φ(F)[Y(F) + S(F)]/H(F)

If x̂[n] is to be the best approximation to x[n] in the least squares sense, we must minimize the mean (or
integral) square error. Since the error equals x[n] − x̂[n], the mean square error may be written as

ε = Σ over n of |x[n] − x̂[n]|² = ∫ from −1/2 to 1/2 |X(F) − X̂(F)|² dF

Substituting for X(F) and X̂(F), we get

ε = ∫ from −1/2 to 1/2 |Y(F) − Φ(F)[Y(F) + S(F)]|² / |H(F)|² dF = ∫ from −1/2 to 1/2 |Y(F)[1 − Φ(F)] − S(F)Φ(F)|² / |H(F)|² dF

If the noise s[n] and signal y[n] are uncorrelated (as they usually are), the integral of the product Y(F)S(F)
equals zero, and we obtain

ε = ∫ from −1/2 to 1/2 { |Y(F)|² [1 − Φ(F)]² + |S(F)|² Φ²(F) } / |H(F)|² dF = ∫ from −1/2 to 1/2 K(F) dF
To ensure that ε is a minimum, the kernel K(F) must be minimized with respect to Φ(F). So, we set
dK(F)/dΦ(F) = 0, and (assuming that Φ(F) is real) this leads to the result

Φ(F) = |Y(F)|² / ( |Y(F)|² + |S(F)|² )

This result describes a Wiener (or Wiener-Hopf) optimal filter. Note that Φ(F) = 1 in the absence of
noise, and Φ(F) → 0 when noise dominates. In other words, it combats the effects of noise only when required.
Interestingly, the result for Φ(F) does not depend on X(F), even though it requires that we obtain estimates
of |Y(F)|² and |S(F)|² separately. In theory, this may indeed be difficult without additional information. In
practice, however, the power spectral density (PSD) |W(F)|² may be approximated by

|W(F)|² ≈ |Y(F)|² + |S(F)|²

In addition, the noise spectral density |S(F)|² often has a form that can be deduced from its high-frequency
behaviour. Once this form is known, we estimate |Y(F)|² = |W(F)|² − |S(F)|² (by graphical extrapolation,
for example). Once known, we generate the optimal filter transfer function and the spectrum of the estimated
input as

Φ(F) = |Y(F)|² / ( |Y(F)|² + |S(F)|² ) ≈ |Y(F)|² / |W(F)|²        X̂(F) = W(F)Φ(F)/H(F)

The inverse transform of X̂(F) leads to the desired time-domain signal x̂[n]. We remark that the transfer
function Φ(F) is even symmetric. In turn, the impulse response is also even symmetric about n = 0. This
means that the optimal filter is noncausal. The design of causal optimal filters is much more involved and
has led to important developments such as the Kalman filter.
In practice, the DFT (or its FFT implementation) is often used as a tool in the optimal filtering of
discrete-time signals. The idea is to implement the optimal filter transfer function and signal estimate by
the approximate relations

Φ[k] = |Y[k]|² / ( |Y[k]|² + |S[k]|² ) ≈ |Y[k]|² / |W[k]|²        X̂[k] = W[k]Φ[k]/H[k]

Here, W[k] is the DFT of w[n], which allows us to generate |W[k]|², estimate |Y[k]|², and then compute the
transfer function Φ[k] of the optimal filter. We also compute H[k], the DFT of h[n], which leads to the
spectrum of the estimated input as X̂[k] = W[k]Φ[k]/H[k]. The IDFT of this result recovers the signal x̂[n].
This approach has its pitfalls, however, and may fail, for example, if some samples of H[k] are zero.
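A minimal DFT-based sketch of this procedure (illustrative only: the test signal, the filter, and the flat noise PSD are assumptions made for the demonstration, and the noise level is taken as known rather than extrapolated):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 512
n = np.arange(N)
x = np.sin(2 * np.pi * 8 * n / N)            # input signal
h = 0.5 ** np.arange(8)                      # known filter (its H[k] has no zeros)
y = np.convolve(x, h)[:N]                    # ideal output
w = y + 0.3 * rng.standard_normal(N)         # contaminated response

W = np.fft.fft(w)
H = np.fft.fft(h, N)
S2 = N * 0.3 ** 2                            # flat noise PSD |S[k]|^2 (assumed known)
Y2 = np.maximum(np.abs(W) ** 2 - S2, 0.0)    # |Y[k]|^2 estimated as |W[k]|^2 - |S[k]|^2
Phi = Y2 / (Y2 + S2)                         # Wiener filter Phi[k]
x_hat = np.fft.ifft(W * Phi / H).real        # optimal estimate of the input
x_naive = np.fft.ifft(W / H).real            # plain deconvolution, no Phi[k]
```

The mean squared error of `x_hat` is substantially smaller than that of the plain deconvolution `x_naive`, which passes the noise unattenuated.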
10.13 Matrix Formulation of the DFT and IDFT
If we let W_N = exp(−j2π/N), the defining relations for the DFT and IDFT may be written as

X_DFT[k] = Σ from n=0 to N−1 of x[n] W_N^(nk),        k = 0, 1, . . . , N − 1        (10.48)

x[n] = (1/N) Σ from k=0 to N−1 of X_DFT[k] [W_N^(nk)]*,        n = 0, 1, . . . , N − 1        (10.49)

The first set of N DFT equations in N unknowns may be expressed in matrix form as

X = W_N x        (10.50)
Here, X and x are (N × 1) matrices, and W_N is an (N × N) square matrix called the DFT matrix. The
full matrix form is described by

[ X[0]   ]   [ W_N^0   W_N^0       W_N^0        . . .  W_N^0            ] [ x[0]   ]
[ X[1]   ]   [ W_N^0   W_N^1       W_N^2        . . .  W_N^(N-1)        ] [ x[1]   ]
[ X[2]   ] = [ W_N^0   W_N^2       W_N^4        . . .  W_N^(2(N-1))     ] [ x[2]   ]        (10.51)
[  ...   ]   [  ...     ...         ...                 ...             ] [  ...   ]
[ X[N-1] ]   [ W_N^0   W_N^(N-1)   W_N^(2(N-1)) . . .  W_N^((N-1)(N-1)) ] [ x[N-1] ]

The exponents t in the elements W_N^t of W_N are called twiddle factors.
EXAMPLE 10.14 (The DFT from the Matrix Formulation)
Let x[n] = {1, 2, 1, 0}. Then, with N = 4 and \(W_N = e^{-j2\pi/4} = -j\), the DFT may be obtained by solving the matrix product:
\[
\begin{bmatrix} X[0]\\ X[1]\\ X[2]\\ X[3] \end{bmatrix}
=
\begin{bmatrix}
W_N^0 & W_N^0 & W_N^0 & W_N^0\\
W_N^0 & W_N^1 & W_N^2 & W_N^3\\
W_N^0 & W_N^2 & W_N^4 & W_N^6\\
W_N^0 & W_N^3 & W_N^6 & W_N^9
\end{bmatrix}
\begin{bmatrix} x[0]\\ x[1]\\ x[2]\\ x[3] \end{bmatrix}
=
\begin{bmatrix}
1 & 1 & 1 & 1\\
1 & -j & -1 & j\\
1 & -1 & 1 & -1\\
1 & j & -1 & -j
\end{bmatrix}
\begin{bmatrix} 1\\ 2\\ 1\\ 0 \end{bmatrix}
=
\begin{bmatrix} 4\\ -j2\\ 0\\ j2 \end{bmatrix}
\]
The result is X_DFT[k] = {4, −j2, 0, j2}.
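The matrix product above is easy to verify numerically; a short NumPy check (illustrative, not from the text):

```python
import numpy as np

N = 4
n = np.arange(N)
W = np.exp(-2j * np.pi / N) ** np.outer(n, n)   # DFT matrix with entries W_N^(nk); W_4 = -j
x = np.array([1, 2, 1, 0])
X = W @ x                                       # matches X_DFT[k] = {4, -j2, 0, j2}
```

The same product for any N reproduces Eq. (10.51), though for large N a direct matrix multiply costs N² operations, which is exactly what the FFT of Section 10.14 avoids.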
10.13.1 The IDFT from the Matrix Form
The matrix x may be expressed in terms of the inverse of W_N as
\[ \mathbf{x} = \mathbf{W}_N^{-1}\,\mathbf{X} \tag{10.52} \]
The matrix \(W_N^{-1}\) is called the IDFT matrix. We may also obtain x directly from the IDFT relation in matrix form, where the change of index from n to k, and the change in the sign of the exponent in \(e^{-j2\pi nk/N}\), lead to a conjugate transpose of W_N. We then have
\[ \mathbf{x} = \frac{1}{N}\,[\mathbf{W}_N^*]^T\,\mathbf{X} \tag{10.53} \]
Comparison of the two forms suggests that
\[ \mathbf{W}_N^{-1} = \frac{1}{N}\,[\mathbf{W}_N^*]^T \tag{10.54} \]
This very important result shows that \(W_N^{-1}\) requires only conjugation and transposition of W_N, an obvious computational advantage.
The elements of the DFT and IDFT matrices satisfy \(A_{ij} = A^{(i-1)(j-1)}\). Such matrices are known as Vandermonde matrices. They are notoriously ill conditioned insofar as their numerical inversion is concerned. This is not the case for W_N, however. The product of the DFT matrix W_N with its conjugate transpose equals N times the identity matrix I, so the normalized matrix \(W_N/\sqrt{N}\) is unitary. For this reason, the DFT and IDFT, which are based on unitary operators, are also called unitary transforms.
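A quick numerical check of these two facts, that no explicit matrix inversion is needed and that the scaled DFT matrix is unitary (sketch):

```python
import numpy as np

N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)     # DFT matrix W_N
# W_N times its conjugate transpose gives N*I, so W_N / sqrt(N) is unitary
assert np.allclose(F @ F.conj().T, N * np.eye(N))
# the IDFT matrix is simply (1/N) [W_N*]^T -- no numerical inversion required
assert np.allclose(np.linalg.inv(F), F.conj().T / N)
```

The second assertion is the practical payoff: inverting a general Vandermonde matrix is numerically delicate, but for W_N the inverse is available in closed form.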
450 Chapter 10 The Discrete Fourier Transform and its Applications
10.13.2 Using the DFT to Find the IDFT
Both the DFT and IDFT are matrix operations, and there is an inherent symmetry in the DFT and IDFT relations. In fact, we can obtain the IDFT by finding the DFT of the conjugate sequence and then conjugating the result and dividing by N. Mathematically,
\[ x[n] = {\rm IDFT}\{X_{\rm DFT}[k]\} = \frac{1}{N}\Bigl({\rm DFT}\{X_{\rm DFT}^*[k]\}\Bigr)^* \tag{10.55} \]
This result invokes the conjugate symmetry and duality of the DFT and IDFT, and suggests that the DFT algorithm itself can also be used to find the IDFT. In practice, this is indeed what is done.
EXAMPLE 10.15 (Using the DFT to Find the IDFT)
Let us find the IDFT of X_DFT[k] = {4, −j2, 0, j2} using the DFT. First, we conjugate the sequence to get X*_DFT[k] = {4, j2, 0, −j2}. Next, we find the DFT of X*_DFT[k], using the 4 × 4 DFT matrix of the previous example, to give
\[
{\rm DFT}\{X_{\rm DFT}^*[k]\} =
\begin{bmatrix}
1 & 1 & 1 & 1\\
1 & -j & -1 & j\\
1 & -1 & 1 & -1\\
1 & j & -1 & -j
\end{bmatrix}
\begin{bmatrix} 4\\ j2\\ 0\\ -j2 \end{bmatrix}
=
\begin{bmatrix} 4\\ 8\\ 4\\ 0 \end{bmatrix}
\]
Finally, we conjugate this result (if complex) and divide by N = 4 to get the IDFT of X_DFT[k] as
\[ {\rm IDFT}\{X_{\rm DFT}[k]\} = \tfrac{1}{4}\{4, 8, 4, 0\} = \{1, 2, 1, 0\} \]
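The same steps, expressed as a short NumPy function (the function name is illustrative):

```python
import numpy as np

def idft_via_dft(X):
    # conjugate, take the forward DFT, conjugate again, and divide by N  (Eq. 10.55)
    return np.conj(np.fft.fft(np.conj(X))) / len(X)

X = np.array([4, -2j, 0, 2j])
x = idft_via_dft(X)
print(x.real)   # [1. 2. 1. 0.]
```

This is how many FFT libraries implement the inverse transform internally: one forward-FFT routine serves both directions.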
10.14 The FFT
The importance of the DFT stems from the fact that it is amenable to fast and efficient computation using algorithms called fast Fourier transform, or FFT, algorithms. Fast algorithms reduce the problem of calculating an N-point DFT to that of calculating many smaller-size DFTs. The optimization of the computation rests on the following ideas.

Symmetry and Periodicity
All FFT algorithms take advantage of the symmetry and periodicity of the exponential \(W_N^n = e^{-j2\pi n/N}\), as listed in Table 10.5. The last entry in this table, for example, suggests that \(W_N^2 = W_{N/2} = e^{-j2\pi n/(N/2)}\) is periodic with period N/2.
Table 10.5 Symmetry and Periodicity of \(W_N = e^{-j2\pi/N}\)

Entry   Exponential Form                                  Symbolic Form
1       \(e^{-j2\pi n/N} = e^{-j2\pi(n+N)/N}\)            \(W_N^{n+N} = W_N^n\)
2       \(e^{-j2\pi(n+N/2)/N} = -e^{-j2\pi n/N}\)         \(W_N^{n+N/2} = -W_N^n\)
3       \(e^{-j2\pi K} = e^{-j2\pi NK/N} = 1\)            \(W_N^{NK} = 1\)
4       \(e^{-j2\pi(2/N)} = e^{-j2\pi/(N/2)}\)            \(W_N^2 = W_{N/2}\)
Choice of Signal Length
We choose the signal length N as a number that is the product of many smaller numbers r_k, such that N = r₁r₂ · · · r_m. A more useful choice results when the factors are equal, such that N = r^m. The factor r is called the radix. By far the most practical choice for r is 2, such that N = 2^m; it leads to the radix-2 FFT algorithms.

Index Separation and Storage
The computation is carried out separately on even-indexed and odd-indexed samples to reduce the computational effort. All algorithms allocate storage for computed results. The less storage required, the more efficient the algorithm. Many FFT algorithms reduce storage requirements by performing computations in place, storing results in the same memory locations that previously held the data.
10.14.1 Some Fundamental Results
We begin by considering two trivial, but extremely important, results.
1-point transform: The DFT of a single number A is the number A itself.
2-point transform: The DFT of a 2-point sequence is easily found to be
\[ X_{\rm DFT}[0] = x[0] + x[1] \qquad X_{\rm DFT}[1] = x[0] - x[1] \tag{10.56} \]
The single most important result in the development of a radix-2 FFT algorithm is that an N-sample DFT can be written as the sum of two (N/2)-sample DFTs formed from the even- and odd-indexed samples of the original sequence. Here is the development:
\[
X_{\rm DFT}[k] = \sum_{n=0}^{N-1} x[n]W_N^{nk}
= \sum_{n=0}^{N/2-1} x[2n]W_N^{2nk} + \sum_{n=0}^{N/2-1} x[2n+1]W_N^{(2n+1)k}
\]
\[
X_{\rm DFT}[k] = \sum_{n=0}^{N/2-1} x[2n]W_N^{2nk} + W_N^{k}\sum_{n=0}^{N/2-1} x[2n+1]W_N^{2nk}
= \sum_{n=0}^{N/2-1} x[2n]W_{N/2}^{nk} + W_N^{k}\sum_{n=0}^{N/2-1} x[2n+1]W_{N/2}^{nk}
\]
If X_e[k] and X_o[k] denote the DFT of the even- and odd-indexed sequences of length N/2, we can rewrite this result as
\[ X_{\rm DFT}[k] = X_e[k] + W_N^k X_o[k], \quad k = 0, 1, 2, \ldots, N-1 \tag{10.57} \]
Note carefully that the index k in this expression varies from 0 to N − 1 and that X_e[k] and X_o[k] are both periodic in k with period N/2; we thus have two periods of each to yield X_DFT[k]. Due to periodicity, we can split X_DFT[k] and compute the first half and next half of the values as
\[ X_{\rm DFT}[k] = X_e[k] + W_N^k X_o[k], \quad k = 0, 1, 2, \ldots, \tfrac{N}{2}-1 \]
\[ X_{\rm DFT}[k + \tfrac{N}{2}] = X_e[k + \tfrac{N}{2}] + W_N^{k+N/2} X_o[k + \tfrac{N}{2}] = X_e[k] - W_N^k X_o[k], \quad k = 0, 1, 2, \ldots, \tfrac{N}{2}-1 \]
Figure 10.16 A typical butterfly (inputs A = X_e[k] and B = X_o[k]; outputs A + W^t B and A − W^t B)
This result is known as the Danielson-Lanczos lemma. Its signal-flow graph is shown in Figure 10.16 and is called a butterfly due to its characteristic shape. The inputs X_e and X_o are transformed into \(X_e + W_N^k X_o\) and \(X_e - W_N^k X_o\). A butterfly operates on one pair of samples and involves two complex additions and one complex multiplication. For N samples, there are N/2 butterflies in all. Starting with N samples, this lemma reduces the computational complexity by evaluating the DFT of two (N/2)-point sequences. The DFT of each of these can once again be reduced to the computation of sequences of length N/4 to yield
\[ X_e[k] = X_{ee}[k] + W_{N/2}^k X_{eo}[k] \qquad X_o[k] = X_{oe}[k] + W_{N/2}^k X_{oo}[k] \tag{10.58} \]
Since \(W_{N/2}^k = W_N^{2k}\), we can rewrite this expression as
\[ X_e[k] = X_{ee}[k] + W_N^{2k} X_{eo}[k] \qquad X_o[k] = X_{oe}[k] + W_N^{2k} X_{oo}[k] \tag{10.59} \]
Carrying this process to its logical extreme, if we choose the number N of samples as N = 2^m, we can reduce the computation of an N-point DFT to the computation of 1-point DFTs in m stages. And the 1-point DFT is just the sample value itself (repeated with period 1). This process is called decimation. The FFT results so obtained are actually in bit-reversed order. If we let e = 0 and o = 1 and then reverse the order, we have the sample number in binary representation. The reason is that splitting the sequence into even and odd indices is equivalent to testing each index for the least significant bit (0 for even, 1 for odd). We describe two common in-place FFT algorithms based on decimation. A summary appears in Table 10.6.
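The repeated even/odd splitting can be captured by a short recursive sketch (illustrative; a textbook in-place implementation would instead use bit reversal and butterflies, as described next):

```python
import numpy as np

def fft_radix2(x):
    """Radix-2 FFT by the Danielson-Lanczos lemma (len(x) must be a power of 2)."""
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)          # the 1-point DFT is the sample itself
    Xe = fft_radix2(x[0::2])                         # N/2-point DFT of even-indexed samples
    Xo = fft_radix2(x[1::2])                         # N/2-point DFT of odd-indexed samples
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)  # twiddle factors W_N^k, k = 0..N/2-1
    return np.concatenate([Xe + W * Xo, Xe - W * Xo])
```

Each level of recursion performs N/2 butterflies, and there are m = log₂N levels, which is the source of the (N/2)log₂N multiplication count in Table 10.6.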
10.14.2 The Decimation-in-Frequency FFT Algorithm
The decimation-in-frequency (DIF) FFT algorithm starts by reducing the single N-point transform at each successive stage to two (N/2)-point transforms, then four (N/4)-point transforms, and so on, until we arrive at N 1-point transforms that correspond to the actual DFT. With the input sequence in natural order, computations can be done in place, but the DFT result is in bit-reversed order and must be reordered.
The algorithm slices the input sequence x[n] into two halves and leads to
\[
X_{\rm DFT}[k] = \sum_{n=0}^{N-1} x[n]W_N^{nk}
= \sum_{n=0}^{N/2-1} x[n]W_N^{nk} + \sum_{n=N/2}^{N-1} x[n]W_N^{nk}
= \sum_{n=0}^{N/2-1} x[n]W_N^{nk} + \sum_{n=0}^{N/2-1} x[n+\tfrac{N}{2}]W_N^{(n+N/2)k}
\]
This may be rewritten as
\[
X_{\rm DFT}[k] = \sum_{n=0}^{N/2-1} x[n]W_N^{nk} + W_N^{Nk/2}\sum_{n=0}^{N/2-1} x[n+\tfrac{N}{2}]W_N^{nk}
= \sum_{n=0}^{N/2-1} x[n]W_N^{nk} + (-1)^k\sum_{n=0}^{N/2-1} x[n+\tfrac{N}{2}]W_N^{nk}
\]
Table 10.6 FFT Algorithms for Computing the DFT

Entry  Characteristic        Decimation in Frequency              Decimation in Time
1      Number of samples     N = 2^m                              N = 2^m
2      Input sequence        Natural order                        Bit-reversed order
3      DFT result            Bit-reversed order                   Natural order
4      Computations          In place                             In place
5      Number of stages      m = log₂N                            m = log₂N
6      Multiplications       (N/2)log₂N (complex)                 (N/2)log₂N (complex)
7      Additions             N log₂N (complex)                    N log₂N (complex)

       Structure of the ith Stage
8      No. of butterflies    N/2                                  N/2
9      Butterfly input       A (top) and B (bottom)               A (top) and B (bottom)
10     Butterfly output      (A+B) and (A−B)W_N^t                 (A+B W_N^t) and (A−B W_N^t)
11     Twiddle factors t     2^{i−1}Q, Q = 0, 1, …, P−1           2^{m−i}Q, Q = 0, 1, …, P−1
12     Values of P           P = 2^{m−i}                          P = 2^{i−1}
Separating even and odd indices, and letting x[n] = x_a and x[n + N/2] = x_b,
\[ X_{\rm DFT}[2k] = \sum_{n=0}^{N/2-1} [x_a + x_b]\,W_N^{2nk}, \quad k = 0, 1, 2, \ldots, \tfrac{N}{2}-1 \tag{10.60} \]
\[ X_{\rm DFT}[2k+1] = \sum_{n=0}^{N/2-1} [x_a - x_b]\,W_N^{(2k+1)n} = \sum_{n=0}^{N/2-1} [x_a - x_b]\,W_N^{n}W_N^{2nk}, \quad k = 0, 1, \ldots, \tfrac{N}{2}-1 \tag{10.61} \]
Since \(W_N^{2nk} = W_{N/2}^{nk}\), the even-indexed and odd-indexed terms describe an (N/2)-point DFT. The computations result in a butterfly structure with inputs x[n] and x[n + N/2], whose outputs x[n] + x[n + N/2] and (x[n] − x[n + N/2])W_N^n feed the even-indexed and odd-indexed DFT samples, respectively. Its butterfly structure is shown in Figure 10.17.
Figure 10.17 A typical butterfly for the decimation-in-frequency FFT algorithm (inputs A and B; outputs A + B and (A − B)W^t)
The factors W^t, called twiddle factors, appear only in the lower corners of the butterfly wings at each stage. Their exponents t have a definite order, described as follows for an N = 2^m-point FFT algorithm with m stages:
1. Number P of distinct twiddle factors W^t at the ith stage: P = 2^{m−i}.
2. Values of t in the twiddle factors W^t: t = 2^{i−1}Q, with Q = 0, 1, 2, …, P − 1.
The DIF algorithm is illustrated in Figure 10.18 for N = 2, N = 4, and N = 8.

Figure 10.18 The decimation-in-frequency FFT algorithm for N = 2, 4, 8 (inputs x[n] in natural order; outputs X[k] in bit-reversed order)
EXAMPLE 10.16 (A 4-Point Decimation-in-Frequency FFT Algorithm)
For a 4-point DFT, we use the above equations to obtain
\[ X_{\rm DFT}[2k] = \sum_{n=0}^{1}\bigl(x[n] + x[n+2]\bigr)W_4^{2nk} \qquad X_{\rm DFT}[2k+1] = \sum_{n=0}^{1}\bigl(x[n] - x[n+2]\bigr)W_4^{n}W_4^{2nk}, \quad k = 0, 1 \]
Since \(W_4^0 = 1\) and \(W_4^2 = -1\), we arrive at the following result:
\[ X_{\rm DFT}[0] = x[0] + x[2] + x[1] + x[3] \qquad X_{\rm DFT}[2] = x[0] + x[2] - \bigl(x[1] + x[3]\bigr) \]
\[ X_{\rm DFT}[1] = x[0] - x[2] + W_4\bigl(x[1] - x[3]\bigr) \qquad X_{\rm DFT}[3] = x[0] - x[2] - W_4\bigl(x[1] - x[3]\bigr) \]
We do not reorder the input sequence before using it.
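The four equations of this example can be checked directly against a library FFT (sketch; the function name is illustrative, and the results are assembled here in natural order rather than bit-reversed order):

```python
import numpy as np

def dif_fft4(x):
    W4 = np.exp(-2j * np.pi / 4)                  # W_4 = -j
    # stage 1: sums feed the even-indexed outputs, twiddled differences the odd ones
    a0, a1 = x[0] + x[2], x[1] + x[3]
    b0, b1 = x[0] - x[2], (x[1] - x[3]) * W4
    # stage 2: two 2-point DFTs, assembled as X[0], X[1], X[2], X[3]
    return np.array([a0 + a1, b0 + b1, a0 - a1, b0 - b1])

x = np.array([1.0, 2.0, 1.0, 0.0])
print(dif_fft4(x))     # {4, -j2, 0, j2}, matching Example 10.14
```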
10.14.3 The Decimation-in-Time FFT Algorithm
In the decimation-in-time (DIT) FFT algorithm, we start with N 1-point transforms, combine adjacent pairs at each successive stage into 2-point transforms, then 4-point transforms, and so on, until we get a single N-point DFT result. With the input sequence in bit-reversed order, the computations can be done in place, and the DFT is obtained in natural order. Thus, for a 4-point input, the binary indices {00, 01, 10, 11} reverse to {00, 10, 01, 11}, and we use the bit-reversed order x[0], x[2], x[1], x[3].
For an 8-point input sequence, {000, 001, 010, 011, 100, 101, 110, 111}, the reversed sequence corresponds to {000, 100, 010, 110, 001, 101, 011, 111} or x[0], x[4], x[2], x[6], x[1], x[5], x[3], x[7], and we use this sequence to perform the computations.
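The bit-reversed ordering can be generated programmatically; a small sketch (the function name is illustrative):

```python
def bit_reversed_indices(N):
    """Input ordering for the DIT FFT: index i moves to position bit-reverse(i)."""
    m = N.bit_length() - 1                              # number of stages, N = 2**m
    return [int(format(i, f"0{m}b")[::-1], 2) for i in range(N)]

print(bit_reversed_indices(8))   # [0, 4, 2, 6, 1, 5, 3, 7]
```

Note that the permutation is its own inverse, which is why the same reordering step serves both the DIF output and the DIT input.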
At a typical stage, we obtain
\[ X_{\rm DFT}[k] = X_e[k] + W_N^k X_o[k] \qquad X_{\rm DFT}[k + \tfrac{N}{2}] = X_e[k] - W_N^k X_o[k] \tag{10.62} \]
Its butterfly structure is shown in Figure 10.19.

Figure 10.19 A typical butterfly for the decimation-in-time FFT algorithm (inputs A = X_e[k] and B = X_o[k]; outputs A + W^t B and A − W^t B)
As with the decimation-in-frequency algorithm, the twiddle factors W^t at each stage appear only in the bottom wing of each butterfly. The exponents t also have a definite (and almost similar) order, described by
1. Number P of distinct twiddle factors W^t at the ith stage: P = 2^{i−1}.
2. Values of t in the twiddle factors W^t: t = 2^{m−i}Q, with Q = 0, 1, 2, …, P − 1.
The DIT algorithm is illustrated in Figure 10.20 for N = 2, N = 4, and N = 8.

Figure 10.20 The decimation-in-time FFT algorithm for N = 2, 4, 8 (inputs x[n] in bit-reversed order; outputs X[k] in natural order)
In both the DIF and DIT algorithms, it is possible to use a sequence in natural order and get DFT results in natural order. This, however, requires more storage, since the computations cannot then be done in place.
EXAMPLE 10.17 (A 4-Point Decimation-in-Time FFT Algorithm)
For a 4-point DFT, with \(W_4 = e^{-j\pi/2} = -j\), we have
\[ X_{\rm DFT}[k] = \sum_{n=0}^{3} x[n]W_4^{nk}, \quad k = 0, 1, 2, 3 \]
We group by even-indexed and odd-indexed samples of x[n] to obtain
\[ X_{\rm DFT}[k] = X_e[k] + W_4^k X_o[k] \qquad X_e[k] = x[0] + x[2]W_4^{2k} \qquad X_o[k] = x[1] + x[3]W_4^{2k} \qquad k = 0, 1, 2, 3 \]
Using periodicity, we simplify this result to
\[ X_{\rm DFT}[k] = X_e[k] + W_4^k X_o[k] \qquad X_{\rm DFT}[k+2] = X_e[k] - W_4^k X_o[k] \qquad k = 0, 1 \]
with \(X_e[k] = x[0] + x[2]W_4^{2k}\) and \(X_o[k] = x[1] + x[3]W_4^{2k}\). These equations yield X_DFT[0] through X_DFT[3] as
\[ X_{\rm DFT}[0] = X_e[0] + W_4^0 X_o[0] = x[0] + x[2]W_4^0 + W_4^0\bigl(x[1] + x[3]W_4^0\bigr) \]
\[ X_{\rm DFT}[1] = X_e[1] + W_4^1 X_o[1] = x[0] + x[2]W_4^2 + W_4^1\bigl(x[1] + x[3]W_4^2\bigr) \]
\[ X_{\rm DFT}[2] = X_e[0] - W_4^0 X_o[0] = x[0] + x[2]W_4^0 - W_4^0\bigl(x[1] + x[3]W_4^0\bigr) \]
\[ X_{\rm DFT}[3] = X_e[1] - W_4^1 X_o[1] = x[0] + x[2]W_4^2 - W_4^1\bigl(x[1] + x[3]W_4^2\bigr) \]
10.15 Why Equal Lengths for the DFT and IDFT?
The DTFT of an N-sample sequence x[n] is
\[ X_p(F) = \sum_{n=0}^{N-1} x[n]e^{-j2\pi nF} \tag{10.63} \]
If we sample F at M intervals over one period, the frequency interval F₀ equals 1/M and F → kF₀ = k/M, k = 0, 1, …, M − 1, and we get
\[ X_{\rm DFT}[k] = \sum_{n=0}^{N-1} x[n]e^{-j2\pi nk/M}, \quad k = 0, 1, \ldots, M-1 \tag{10.64} \]
With \(W_M = e^{-j2\pi/M}\), this equation can be written as
\[ X_{\rm DFT}[k] = \sum_{n=0}^{N-1} x[n]W_M^{nk}, \quad k = 0, 1, \ldots, M-1 \tag{10.65} \]
This is a set of M equations in N unknowns and describes the M-point DFT of the N-sample sequence x[n]. It may be written in matrix form as
\[ \mathbf{X} = \mathbf{W}_M\,\mathbf{x} \tag{10.66} \]
Here, X is an (M × 1) matrix, x is an (N × 1) matrix, and W_M is an (M × N) matrix. In full form,
\[
\begin{bmatrix} X[0]\\ X[1]\\ X[2]\\ \vdots\\ X[M-1] \end{bmatrix}
=
\begin{bmatrix}
W_M^0 & W_M^0 & W_M^0 & \cdots & W_M^0\\
W_M^0 & W_M^1 & W_M^2 & \cdots & W_M^{N-1}\\
W_M^0 & W_M^2 & W_M^4 & \cdots & W_M^{2(N-1)}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
W_M^0 & W_M^{M-1} & W_M^{2(M-1)} & \cdots & W_M^{(N-1)(M-1)}
\end{bmatrix}
\begin{bmatrix} x[0]\\ x[1]\\ x[2]\\ \vdots\\ x[N-1] \end{bmatrix}
\tag{10.67}
\]
EXAMPLE 10.18 (A 4-Point DFT from a 3-Point Sequence)
Let x[n] = {1, 2, 1}. We have N = 3. The DTFT of this signal is
\[ X_p(F) = 1 + 2e^{-j2\pi F} + e^{-j4\pi F} = [2 + 2\cos(2\pi F)]e^{-j2\pi F} \]
If we pick M = 4, we have F → k/4, k = 0, 1, 2, 3, and obtain the DFT as
\[ X_{\rm DFT}[k] = [2 + 2\cos(2\pi k/4)]e^{-j2\pi k/4}, \quad k = 0, 1, 2, 3 \qquad \text{or} \qquad X_{\rm DFT}[k] = \{4, -j2, 0, j2\} \]
Using matrix notation with \(W_M = e^{-j2\pi/4} = -j\), we can also find X_DFT[k] as
\[
\begin{bmatrix} X[0]\\ X[1]\\ X[2]\\ X[3] \end{bmatrix}
=
\begin{bmatrix}
W_M^0 & W_M^0 & W_M^0\\
W_M^0 & W_M^1 & W_M^2\\
W_M^0 & W_M^2 & W_M^4\\
W_M^0 & W_M^3 & W_M^6
\end{bmatrix}
\begin{bmatrix} x[0]\\ x[1]\\ x[2] \end{bmatrix}
=
\begin{bmatrix}
1 & 1 & 1\\
1 & -j & -1\\
1 & -1 & 1\\
1 & j & -1
\end{bmatrix}
\begin{bmatrix} 1\\ 2\\ 1 \end{bmatrix}
=
\begin{bmatrix} 4\\ -j2\\ 0\\ j2 \end{bmatrix}
\]
Thus, X_DFT[k] = {4, −j2, 0, j2}, as before.
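In NumPy, the M-point DFT of an N-point sequence is obtained by zero-padding, which np.fft.fft performs automatically when given a length argument (sketch):

```python
import numpy as np

x = np.array([1.0, 2.0, 1.0])     # N = 3 samples
X = np.fft.fft(x, 4)              # M = 4 point DFT: fft zero-pads x to length 4
print(X)                          # {4, -j2, 0, j2}, as in the example
```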
10.15.1 The Inverse DFT
How do we obtain the N-sample sequence x[n] from the M-sample DFT? It would seem that we require the product of X, an (M × 1) matrix, with an (N × M) matrix to give x as an (N × 1) matrix. But what is this (N × M) matrix, and how is it related to the (M × N) matrix W_M? To find out, recall that
\[ x[n] = \int_{1} X_p(F)e^{j2\pi nF}\,dF \tag{10.68} \]
Converting this to discrete form with F → kF₀ = k/M results in periodicity of x[n] with period M, and we obtain N samples of x[n] using
\[ x[n] = \frac{1}{M}\sum_{k=0}^{M-1} X_{\rm DFT}[k]e^{j2\pi nk/M}, \quad n = 0, 1, \ldots, N-1 \tag{10.69} \]
For N < M, one period of x[n] is a zero-padded M-sample sequence. For N > M, however, one period of x[n] is the periodic extension of the N-sample sequence with period M.
The sign of the exponent and the interchange of the indices n and k allow us to set up the matrix formulation for obtaining x[n] using an (N × M) inversion matrix W_I that just equals 1/M times \([W_M^*]^T\), the conjugate transpose of the (M × N) DFT matrix W_M. Its product with the (M × 1) matrix corresponding to X[k] yields the (N × 1) matrix for x[n]. We thus have the forward and inverse matrix relations:
\[ \mathbf{X} = \mathbf{W}_M\,\mathbf{x} \ \ \text{(DFT)} \qquad \mathbf{x} = \mathbf{W}_I\,\mathbf{X} = \frac{1}{M}[\mathbf{W}_M^*]^T\,\mathbf{X} \ \ \text{(IDFT)} \tag{10.70} \]
These results are valid for any choice of M and N. An interesting result is that the product \(\mathbf{W}_I\mathbf{W}_M\) is the (N × N) identity matrix (for N ≤ M).
EXAMPLE 10.19 (A 3-Point IDFT from a 4-Point DFT)
Let X_DFT[k] = {4, −j2, 0, j2} and M = 4. With N = 3, the IDFT matrix equals
\[
\mathbf{W}_I = \frac{1}{4}
\begin{bmatrix}
1 & 1 & 1\\
1 & -j & -1\\
1 & -1 & 1\\
1 & j & -1
\end{bmatrix}^{*T}
= \frac{1}{4}
\begin{bmatrix}
1 & 1 & 1 & 1\\
1 & j & -1 & -j\\
1 & -1 & 1 & -1
\end{bmatrix}
\]
\[
\mathbf{x} = \mathbf{W}_I\,\mathbf{X} = \frac{1}{4}
\begin{bmatrix}
1 & 1 & 1 & 1\\
1 & j & -1 & -j\\
1 & -1 & 1 & -1
\end{bmatrix}
\begin{bmatrix} 4\\ -j2\\ 0\\ j2 \end{bmatrix}
=
\begin{bmatrix} 1\\ 2\\ 1 \end{bmatrix}
\]
The important thing to realize is that x[n] is actually periodic with M = 4, and one period of x[n] is the zero-padded sequence {1, 2, 1, 0}.
10.15.2 How Unequal Lengths Affect the DFT Results
Even though the M-point IDFT of an N-point sequence is valid for any M, the choice of M affects the nature of x[n] through the IDFT and its inherent periodic extension.
1. If M = N, the IDFT is periodic, with period M, and one period equals the N-sample x[n]. Both the DFT matrix and the IDFT matrix are square (M × M), and allow a simple inversion relation to go back and forth between the two.
2. If M > N, the IDFT is periodic with period M. One period is the original N-sample x[n] with M − N padded zeros. The choice M > N is equivalent to using a zero-padded version of x[n] with a total of M samples and (M × M) square matrices for both the DFT matrix and the IDFT matrix.
3. If M < N, the IDFT is periodic with period M < N. One period is the periodic extension of the N-sample x[n] with period M. It thus yields a signal that corresponds to x[n] wrapped around after M samples and does not recover the original x[n].
EXAMPLE 10.20 (The Importance of Periodic Extension)
Let x[n] = {1, 2, 1}. We have N = 3. The DTFT of this signal is
\[ X_p(F) = 1 + 2e^{-j2\pi F} + e^{-j4\pi F} = [2 + 2\cos(2\pi F)]e^{-j2\pi F} \]
We sample X_p(F) at M intervals and find the IDFT as y[n]. What do we get?

(a) For M = 3, we should get y[n] = {1, 2, 1} = x[n]. Let us find out. With M = 3, we have F → k/3 for k = 0, 1, 2, and X_DFT[k] becomes
\[ X_{\rm DFT}[k] = [2 + 2\cos(2\pi k/3)]e^{-j2\pi k/3} = \Bigl\{4,\ -\tfrac{1}{2} - j\tfrac{\sqrt{3}}{2},\ -\tfrac{1}{2} + j\tfrac{\sqrt{3}}{2}\Bigr\} \]
The IDFT matrix is the conjugate transpose of the DFT matrix, scaled by 1/3:
\[
\mathbf{W}_I = \frac{1}{3}
\begin{bmatrix}
W_M^0 & W_M^0 & W_M^0\\
W_M^0 & W_M^1 & W_M^2\\
W_M^0 & W_M^2 & W_M^4
\end{bmatrix}^{*T}
= \frac{1}{3}
\begin{bmatrix}
1 & 1 & 1\\
1 & -\tfrac{1}{2}+j\tfrac{\sqrt{3}}{2} & -\tfrac{1}{2}-j\tfrac{\sqrt{3}}{2}\\
1 & -\tfrac{1}{2}-j\tfrac{\sqrt{3}}{2} & -\tfrac{1}{2}+j\tfrac{\sqrt{3}}{2}
\end{bmatrix}
\]
\[
\mathbf{x} = \mathbf{W}_I\,\mathbf{X} = \frac{1}{3}
\begin{bmatrix}
1 & 1 & 1\\
1 & -\tfrac{1}{2}+j\tfrac{\sqrt{3}}{2} & -\tfrac{1}{2}-j\tfrac{\sqrt{3}}{2}\\
1 & -\tfrac{1}{2}-j\tfrac{\sqrt{3}}{2} & -\tfrac{1}{2}+j\tfrac{\sqrt{3}}{2}
\end{bmatrix}
\begin{bmatrix} 4\\ -\tfrac{1}{2}-j\tfrac{\sqrt{3}}{2}\\ -\tfrac{1}{2}+j\tfrac{\sqrt{3}}{2} \end{bmatrix}
=
\begin{bmatrix} 1\\ 2\\ 1 \end{bmatrix}
\]
This result is periodic with M = 3, and one period of this equals x[n].

(b) For M = 4, we should get a new sequence y[n] = {1, 2, 1, 0} that corresponds to a zero-padded version of x[n].

(c) For M = 2, we should get a new sequence z[n] = {2, 2} that corresponds to the periodic extension of x[n] with period 2. With M = 2 and k = 0, 1, we have
\[ Z_{\rm DFT}[k] = [2 + 2\cos(\pi k)]e^{-j\pi k} = \{4, 0\} \]
Since \(e^{j2\pi/M} = e^{j\pi} = -1\), we can find the IDFT z[n] directly from the definition as
\[ z[0] = 0.5\bigl(Z_{\rm DFT}[0] + Z_{\rm DFT}[1]\bigr) = 2 \qquad z[1] = 0.5\bigl(Z_{\rm DFT}[0] - Z_{\rm DFT}[1]\bigr) = 2 \]
The sequence z[n] = {2, 2} is periodic with M = 2. As expected, this equals one period of the periodic extension of x[n] = {1, 2, 1} (with wraparound past two samples).
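The three cases can be checked numerically by sampling the DTFT at M points and taking the M-point IDFT (sketch; the helper name is illustrative):

```python
import numpy as np

x = np.array([1.0, 2.0, 1.0])                 # N = 3 samples
n = np.arange(len(x))

def sampled_dtft(x, M):
    # sample the DTFT at F = k/M, k = 0..M-1 (the M-point DFT of the N-point x)
    return np.array([np.sum(x * np.exp(-2j * np.pi * n * k / M)) for k in range(M)])

for M in (4, 3, 2):
    y = np.fft.ifft(sampled_dtft(x, M)).real
    print(M, np.round(y, 6))
# M = 4: [1, 2, 1, 0]   (zero padding)
# M = 3: [1, 2, 1]      (exact recovery)
# M = 2: [2, 2]         (wraparound: x[n] folded with period 2)
```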
CHAPTER 10 PROBLEMS
10.1 (DFT from Definition) Compute the DFT and DFS of the following signals.
(a) x[n] = {1, 2, 1, 2} (b) x[n] = {2, 1, 3, 0, 4}
(c) x[n] = {2, 2, 2, 2} (d) x[n] = {1, 0, 0, 0, 0, 0, 0, 0}
[Hints and Suggestions: Compute the DFT only for the indices k ≤ N/2 and use conjugate symmetry about k = N/2, X_DFT[N − k] = X*_DFT[k], to find the rest.]
10.2 (DFT from Definition) Use the defining relation to compute the N-point DFT of the following:
(a) x[n] = δ[n], 0 ≤ n ≤ N − 1
(b) x[n] = α^n, 0 ≤ n ≤ N − 1
(c) x[n] = e^{jπn/N}, 0 ≤ n ≤ N − 1
10.3 (IDFT from Definition) Compute the IDFT of the following.
(a) X_DFT[k] = {2, j, 0, −j} (b) X_DFT[k] = {4, 1, 1, 1, 1}
(c) X_DFT[k] = {1, 2, 1, 2} (d) X_DFT[k] = {1, 0, 0, j, 0, −j, 0, 0}
[Hints and Suggestions: Each DFT has conjugate symmetry about k = N/2, so x[n] should be real. For (b)–(c), the DFT is also real, so x[n] has conjugate symmetry about n = N/2.]
10.4 (Symmetry) For the DFT of each real sequence, compute the boxed quantities.
(a) X_DFT[k] = {0, X₁, 2 + j, 1, X₄, j}
(b) X_DFT[k] = {1, 2, X₂, X₃, 0, 1 − j, 2, X₇}
[Hints and Suggestions: The DFT of a real signal shows conjugate symmetry about k = N/2.]
10.5 (Properties) The DFT of x[n] is X_DFT[k] = {1, 2, 3, 4}. Find the DFT of each of the following sequences, using properties of the DFT.
(a) y[n] = x[n − 2] (b) f[n] = x[n + 6] (c) g[n] = x[n + 1]
(d) h[n] = e^{jπn/2}x[n] (e) p[n] = x[n] ⊛ x[n] (f) q[n] = x²[n]
(g) r[n] = x[−n] (h) s[n] = x*[n]
(Symmetry) For each real signal and its DFT, find the missing quantities.
(a) x[n] = {x₀, 3, 4, 0, 2} with X_DFT[k] = {5, X₁, 1.28 − j4.39, X₃, 8.78 − j1.4}
(b) x[n] = {x₀, 3, 4, 2, 0, 1} with X_DFT[k] = {4, X₁, 4 − j5.2, X₃, X₄, 4 − j1.73}
[Hints and Suggestions: Use conjugate symmetry. In (a), also use x[0] = (1/N)Σ X_DFT[k] to find x₀. In (b), also use X_DFT[0] = Σ x[n] (or Parseval's relation) to find X₃.]
10.9 (Properties) Let x[n] = … with a 6-point DFT X[k]. Compute the following without evaluating the DFT:
(a) X[0] (b) Σ_{k=0}^{5} X[k] (c) X[3] (d) Σ_{k=0}^{5} |X[k]|² (e) Σ_{k=0}^{5} (−1)^k X[k]
[Hints and Suggestions: In (a), use X_DFT[0] = Σ x[n]. In (b), use Σ X_DFT[k] = N x[0]. In (c), use X_DFT[N/2] = Σ(−1)^n x[n]. In (d), use Parseval's relation. In (e), use Σ(−1)^k X_DFT[k] = N x[N/2].]
10.10 (DFT Computation) Find the N-point DFT of each of the following signals.
(a) x[n] = δ[n] (b) x[n] = δ[n − K], K < N
(c) x[n] = δ[n − 0.5N] (N even) (d) x[n] = δ[n − 0.5(N − 1)] (N odd)
(e) x[n] = 1 (f) x[n] = δ[n − 0.5(N − 1)] + δ[n − 0.5(N + 1)] (N odd)
(g) x[n] = (−1)^n (N even) (h) x[n] = e^{j4πn/N}
(i) x[n] = cos(4πn/N) (j) x[n] = cos(4πn/N + 0.25π)
[Hints and Suggestions: In (a), the DFT has N unit samples. Use this result with the shifting property in (b), (c), (d), and (f). In (e), only X_DFT[0] is nonzero. In (g), (−1)^n = cos(nπ). In (h), use the frequency shift on the result of (e). In (i) and (j), F₀ = 2/N, and the DFT is a pair of impulses with amplitude 0.5Ne^{±jθ} at k = 2 and k = N − 2.]
10.11 (Properties) The DFT of a signal x[n] is X_DFT[k]. If we use its conjugate Y_DFT[k] = X*_DFT[k] and obtain its DFT as y[n], how is y[n] related to x[n]?
[Hints and Suggestions: A typical DFT term is Ae^{jθ}. Its IDFT gives terms of the form (1/N)Ae^{jθ}e^{jφ}, where φ = 2πnk/N. Similarly, examine the DFT of the conjugated DFT sequence and compare.]
10.12 (Properties) Let X[k] = …

10.34 (Convolution) Find the regular convolution of x[n] = {1, 2, 1} and h[n] = {1, 2, 3}, using
(a) The time-domain convolution operation.
(b) The DFT and zero-padding.
(c) The radix-2 FFT and zero-padding.
Which of these methods yield identical results, and why?
[Hints and Suggestions: In (a), the convolution length is N = 5. In (b), zero-pad each signal to N = 5, multiply the DFTs (at each index), and find the 5-point IDFT. In (c), zero-pad to N = 8.]
10.35 (Convolution) Find the periodic convolution of x[n] = {1, 2, 1} and h[n] = {1, 2, 3}, using
(a) The time-domain convolution operation.
(b) The DFT operation. Is this result identical to that of part (a)?
(c) The radix-2 FFT and zero-padding. Is this result identical to that of part (a)? Should it be?
Which of these methods yield identical results, and why?
[Hints and Suggestions: In (a), use regular convolution and wraparound. In (b), multiply the DFTs (at each index) and find the 3-point IDFT. In (c), zero-pad each signal to N = 4, multiply the DFTs (at each index), and find the 4-point IDFT.]
10.36 (Correlation) Find one period (starting at n = 0) of the periodic correlation r_xh of x[n] = {1, 2, 1} and h[n] = {1, 2, 3}, using
(a) The time-domain correlation operation.
(b) The DFT.
[Hints and Suggestions: In (a), use circular folding and replication to get samples of h[−n] starting at the origin, compute their periodic convolution with x[n], and wrap around. In (b), multiply X_DFT[k] and H*_DFT[k] (at each index) and find the 3-point IDFT.]
10.37 (Convolution of Long Signals) Let x[n] = {1, 2, 1} and h[n] = {1, 2, 1, 3, 2, 2, 3, 0, 1, 0, 2, 2}.
(a) Find their convolution using the overlap-add method.
(b) Find their convolution using the overlap-save method.
(c) Are the results identical to the time-domain convolution of x[n] and h[n]?
[Hints and Suggestions: In (a), split h[n] into 3-sample segments, find their convolution with x[n], shift (by 0, 3, 6, and 9 samples), and add. In (b), generate 5-sample segments from h[n], with the first as {0, 0, 1, 2, 1}, the second as {2, 1, 3, 2, 2}, etc., with a 2-sample overlap in each segment (zero-pad the last segment to five samples, if required). Find their periodic convolutions with {x[n], 0, 0}, discard the first 2 samples of each convolution, and concatenate (string together).]
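For reference, the overlap-add idea in part (a) can be sketched as follows (illustrative, not a solution template from the text):

```python
import numpy as np

def overlap_add(x, h, block=3):
    # convolve a short x with a long h one block of h at a time,
    # then shift each partial convolution to its block position and add
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(h), block):
        seg = np.convolve(x, h[start:start + block])   # partial convolution
        y[start:start + len(seg)] += seg               # overlap and add
    return y

x = np.array([1.0, 2.0, 1.0])
h = np.array([1, 2, 1, 3, 2, 2, 3, 0, 1, 0, 2, 2], dtype=float)
assert np.allclose(overlap_add(x, h), np.convolve(x, h))
```

By linearity, the sum of the shifted partial convolutions equals the full convolution, which is exactly what the hint for part (a) describes.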
COMPUTATION AND DESIGN
10.38 (DFT Properties) Consider the signal x[n] = n + 1, 0 ≤ n ≤ 7. Use Matlab to compute its DFT. Confirm the following properties by computing the DFT.
(a) The DFT of y[n] = x[−n], to confirm the (circular) folding property
(b) The DFT of f[n] = x[n − 2], to confirm the (circular) shift property
(c) The DFT of g[n] = x[n/2], to confirm the zero-interpolation property
(d) The DFT of h[n] = {x[n], x[n]}, to confirm the signal-replication property
(e) The DFT of p[n] = x[n] cos(0.5nπ), to confirm the modulation property
(f) The DFT of r[n] = x²[n], to confirm the multiplication property
(g) The DFT of s[n] = x[n] ⊛ x[n], to confirm the periodic convolution property
10.39 (IDFT from DFT) Consider the signal x[n] = (1 + j)n, 0 ≤ n ≤ 9.
(a) Find its DFT X[k]. Find the DFT of the sequence 0.1X[k]. Does this appear to be related to the signal x[n]?
(b) Find the DFT of the sequence 0.1X*_DFT[N − k]. …

… X_DFT[k] = 0.5(G*_DFT[N − k] + G_DFT[k]) and Y_DFT[k] = j0.5(G*_DFT[N − k] − G_DFT[k]). Use this result to find the FFT of x[n] = {1, 2, 3, 4} and y[n] = {5, 6, 7, 8} and compare the results with their FFTs computed individually.
10.55 (Quantization Error) Quantization leads to noisy spectra, whose effects can be studied only in statistical terms. Let x(t) = cos(20πt) be sampled at 50 Hz to obtain the 256-point sampled signal x[n].
(a) Plot the linear and decibel magnitude of the DFT of x[n].
(b) Quantize x[n] by rounding to B bits to generate the quantized signal y[n]. Plot the linear and decibel magnitude of the DFT of y[n]. Compare the DFT spectra of x[n] and y[n] for B = 8, 4, 2, and 1. What is the effect of decreasing the number of bits on the DFT spectrum of y[n]?
(c) Repeat parts (a) and (b), using quantization by truncation. How do the spectra differ in this case?
(d) Repeat parts (a)–(c) after windowing x[n] by a von Hann window. What is the effect of windowing?
10.56 (Sampling Jitter) During the sampling operation, phase noise on the sampling clock can result in jitter, or random variations in the time of occurrence of the true sampling instant. Jitter leads to a noisy spectrum, and its effects can be studied only in statistical terms. Consider the analog signal x(t) = cos(2πf₀t) sampled at a rate S that equals three times the Nyquist rate.
(a) Generate a time array t_n of 256 samples at intervals of t_s = 1/S. Generate the sampled signal x[n] from values of x(t) at the time instants in t_n. Plot the DFT magnitude of x[n].
(b) Add some uniformly distributed random noise with a mean of zero and a noise amplitude of At_s to t_n to form the new time array t_nn. Generate the sampled signal y[n] from values of x(t) at the time instants in t_nn. Plot the DFT magnitude of y[n] and compare with the DFT magnitude of x[n] for A = 0.01, 0.1, 1, 10. What is the effect of increasing the noise amplitude on the DFT spectrum of y[n]? What is the largest value of A for which you can still identify the signal frequency from the DFT of y[n]?
(c) Repeat parts (a) and (b) after windowing x[n] and y[n] by a von Hann window. What is the effect of windowing?
Appendix A
USEFUL CONCEPTS FROM ANALOG THEORY

A.0 Scope and Objectives
This appendix collects useful concepts and results from the area of analog signals and systems that are relevant to the study of digital signal processing. It includes short descriptions of signals and systems, convolution, Fourier series, Fourier transforms, Laplace transforms, and analog filters. The material presented in this appendix should lead to a better understanding of some of the techniques of digital signal processing described in the text.
A.1 Signals
An analog signal x(t) is a continuous function of the time variable t. The signal energy is defined as
\[ E = \int p_i(t)\,dt = \int |x(t)|^2\,dt \tag{A.1} \]
The absolute value |x(t)| is required only for complex-valued signals. Signals of finite duration and amplitude have finite energy.
A periodic signal x_p(t) is characterized by several measures. Its duty ratio equals the ratio of its pulse width and period. Its average value x_av equals the average area per period. Its signal power P equals the average energy per period. Its rms value x_rms equals √P.
\[ x_{\rm av} = \frac{1}{T}\int_T x(t)\,dt \qquad P = \frac{1}{T}\int_T |x(t)|^2\,dt \qquad x_{\rm rms} = \sqrt{P} \tag{A.2} \]
Signal Operations and Symmetry
A time shift displaces a signal x(t) in time without changing its shape. The signal y(t) = x(t − α) is a delayed (shifted right by α) replica of x(t). A time scaling results in signal compression or stretching. The signal f(t) = x(2t) describes a two-fold compression, g(t) = x(t/2) describes a two-fold stretching, and p(t) = x(−t) describes a reflection about the vertical axis. Shifting or folding a signal x(t) will not change its area or energy, but time scaling x(t) to x(αt) will reduce both its area and energy by |α|.
A signal possesses even symmetry if x(t) = x(−t), and odd symmetry if x(t) = −x(−t). The area of an odd symmetric signal is always zero.
Sinusoids and Complex Harmonics
An analog sinusoid or harmonic signal is always periodic and unique for any choice of period or frequency. For the sinusoid \(x_p(t) = A\cos(\omega_0 t + \theta) = A\cos[\omega_0(t - t_p)]\), the quantity \(t_p = -\theta/\omega_0\) is called the phase delay and describes the time delay in the signal caused by a phase shift of θ. The various time and frequency measures are related by
\[ f_0 = \frac{1}{T} \qquad \omega_0 = \frac{2\pi}{T} = 2\pi f_0 \qquad \theta = -\omega_0 t_p = -2\pi f_0 t_p = -2\pi\frac{t_p}{T} \tag{A.3} \]
If \(x(t) = A\cos(2\pi f_0 t + \theta)\), then P = 0.5A² and \(x_{\rm rms} = A/\sqrt{2} = 0.707A\). If \(x(t) = Ae^{j(2\pi f_0 t + \theta)}\), then P = A².
The common period or time period T of a combination of sinusoids is given by the LCM (least common multiple) of the individual periods. The fundamental frequency f₀ is the reciprocal of T and also equals the GCD (greatest common divisor) of the individual frequencies. For a combination of sinusoids at different frequencies, say y(t) = x₁(t) + x₂(t) + ⋯, the signal power P_y equals the sum of the individual powers, and the rms value equals √P_y.
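These power relations are easy to confirm numerically (sketch):

```python
import numpy as np

# check P = 0.5 A^2 for each sinusoid and that powers add for different frequencies
t = np.linspace(0, 1, 100000, endpoint=False)      # one common period, T = 1 s
y = 3*np.cos(2*np.pi*4*t) + 4*np.cos(2*np.pi*6*t + 0.3)
P = np.mean(y**2)                                  # average power over one period
print(round(P, 6))    # 12.5  (= 0.5*3**2 + 0.5*4**2); rms = sqrt(12.5)
```

The cross terms average to zero over the common period, which is why the individual powers simply add.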
Useful Signals
The unit step u(t), unit ramp r(t), and signum function sgn(t) are piecewise linear. The unit step is discontinuous at t = 0, where its value is undefined (sometimes chosen as 0.5). The value of the signum function is also undefined at t = 0 and is chosen as zero. The unit ramp may also be written as r(t) = tu(t).
REVIEW PANEL A.1
The Step, Ramp, and Signum Functions Are Piecewise Linear
[Sketches of u(t), r(t), and sgn(t).]
The rectangular pulse rect(t) and triangular pulse tri(t) are even symmetric and possess unit area and unit height as shown in the following review panel. The signal f(t) = rect((t − β)/α) is a rectangular pulse of width α, centered at t = β. The signal g(t) = tri((t − β)/α) is a triangular pulse of width 2α, centered at t = β.

The unit impulse δ(t) is defined by its unit area and its value:

∫δ(λ) dλ = 1    δ(t) = 0, t ≠ 0 and δ(t) → ∞, t = 0
The function Aδ(t) is shown as an arrow with its area (or strength) A labeled next to the tip. For visual appeal, we make its height proportional to A. Signals such as the rect pulse (1/τ)rect(t/τ), the triangular pulse (1/τ)tri(t/τ), the exponential (1/τ)e^{−t/τ}u(t), and the sinc (1/τ)sinc(t/τ) all possess unit area, and give rise to the unit impulse δ(t) as τ → 0. The unit impulse δ(t) may also be regarded as the derivative of u(t).

δ(t) = du(t)/dt    u(t) = ∫_{−∞}^{t} δ(λ) dλ
Three useful properties of the impulse function relate to scaling, products, and sifting.

Scaling: δ(αt) = (1/|α|)δ(t)    Product: x(t)δ(t − α) = x(α)δ(t − α)    Sifting: ∫ x(t)δ(t − α) dt = x(α)

Time scaling implies that since δ(t) has unit area, its compressed version δ(αt) has an area of 1/|α|. The product property says that an arbitrary signal x(t) multiplied by the impulse δ(t − α) is still an impulse (whose strength equals x(α)). The sifting property says that the area of the product x(t)δ(t − α) is just x(α).
EXAMPLE A.2 (Properties of the Impulse Function)
(a) Consider the signal x(t) = 2r(t) − 2r(t − 2) − 4u(t − 3). Sketch x(t), f(t) = x(t)δ(t − 1), and g(t) = x′(t). Also evaluate I = ∫ x(t)δ(t − 2) dt.
Refer to the figure for the sketches.
[Figure EA.2A The signals for Example A.2(a): x(t), its derivative x′(t) (with an impulse of strength −4 at t = 3), and the impulse x(1)δ(t − 1) of strength 2.]
From the product property, f(t) = x(t)δ(t − 1) = x(1)δ(t − 1). This is an impulse function with strength x(1) = 2.
The derivative g(t) = x′(t) includes the ordinary derivative (slopes) of x(t) and an impulse function of strength −4 at t = 3.
By the sifting property, I = ∫ x(t)δ(t − 2) dt = x(2) = 4.
(b) Evaluate I = ∫₀^∞ 4t² δ(t − 3) dt.
Using the sifting property, we get I = 4(3)² = 36.
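The sifting property can also be checked numerically by approximating the impulse δ(t − 3) with a narrow unit-area rectangular pulse (a sketch of the limiting argument, not the book's method):

```python
import numpy as np

# Approximate delta(t - 3) by a rectangular pulse of width eps and height
# 1/eps (unit area), then integrate x(t) times the pulse over its support.
def sift(x, t0, eps=1e-4, n=10001):
    t = np.linspace(t0 - eps / 2, t0 + eps / 2, n)
    dt = t[1] - t[0]
    pulse = np.full(n, 1.0 / eps)           # unit-area approximation of the impulse
    return np.sum(x(t) * pulse) * dt

I = sift(lambda t: 4 * t**2, 3.0)
print(I)    # close to 4 * 3**2 = 36
```

As eps shrinks, the result approaches the exact sifted value x(3) = 36.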
Signal Approximation by Impulses
A signal x(t) multiplied by a periodic unit impulse train with period t_s yields the ideally sampled signal x_I(t), as shown in Figure A.2. The ideally sampled signal is an impulse train described by

x_I(t) = x(t) Σ_{k=−∞}^{∞} δ(t − kt_s) = Σ_{k=−∞}^{∞} x(kt_s)δ(t − kt_s)    (A.5)

Note that x_I(t) is non-periodic and the strength of each impulse equals the signal value x(kt_s). This form actually provides a link between analog and digital signals.
[Figure A.2 Signal approximation by impulses: section the signal into narrow rectangular strips of width t_s, then replace each strip by an impulse.]
Moments
Moments are general measures of signal size based on area and are defined as shown.

nth moment: m_n = ∫ tⁿ x(t) dt    Mean: m_x = m₁/m₀    Central moment: μ_n = ∫ (t − m_x)ⁿ x(t) dt

The zeroth moment m₀ = ∫ x(t) dt is the area of x(t). The normalized first moment m_x = m₁/m₀ is the mean. Moments about the mean are called central moments. The second central moment is called the variance. It is denoted σ² and its square root σ is called the standard deviation.

σ² = μ₂ = (m₂/m₀) − m_x²    (A.6)
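These measures are easy to evaluate numerically. As a sketch (the triangular pulse is my own choice of test signal, not from the text), for x(t) = tri(t) the exact values are area 1, mean 0, and variance 1/6:

```python
import numpy as np

t = np.linspace(-1, 1, 200001)
dt = t[1] - t[0]
x = 1 - np.abs(t)                            # tri(t) on its support [-1, 1]

m0 = np.sum(x) * dt                          # zeroth moment (area)
mx = np.sum(t * x) * dt / m0                 # mean m1/m0
var = np.sum(t**2 * x) * dt / m0 - mx**2     # sigma^2 = m2/m0 - mx^2, as in (A.6)
print(m0, mx, var)                           # ~1, ~0, ~1/6
```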
A.2 System Analysis
Analog LTI (linear, time-invariant) systems may be described by differential equations with constant coefficients. An nth-order differential equation has the general form

y⁽ⁿ⁾(t) + a₁y⁽ⁿ⁻¹⁾(t) + ··· + a_{n−1}y⁽¹⁾(t) + a_n y(t) = b₀x⁽ᵐ⁾(t) + b₁x⁽ᵐ⁻¹⁾(t) + ··· + b_{m−1}x⁽¹⁾(t) + b_m x(t)    (A.7)
To solve for the output y(t) for t > 0, we require the n initial conditions y(0), y⁽¹⁾(0), . . . , y⁽ⁿ⁻¹⁾(0) (the response and its n − 1 successive derivatives at t = 0). A convenient technique for solving a linear constant-coefficient differential equation (LCCDE) is the method of undetermined coefficients, which yields the total response as the sum of the forced response y_F(t) and the natural response y_N(t).
The forced response is determined by the input terms (the right-hand side of the differential equation) and has the same form as the input, as summarized in Table A.1. The constants are found by satisfying the given differential equation.
Table A.1 Form of the Forced Response for Analog LTI Systems
Note: If the right-hand side (RHS) is e^{−αt}, where −α is also a root of the characteristic equation repeated r times, the forced response form must be multiplied by t^r.

Entry  Forcing Function (RHS)                 Form of Forced Response
1      C₀ (constant)                          C₁ (another constant)
2      e^{−αt} (see note above)               Ce^{−αt}
3      cos(ωt + β)                            C₁cos(ωt) + C₂sin(ωt) or C cos(ωt + φ)
4      e^{−αt}cos(ωt + β) (see note above)    e^{−αt}[C₁cos(ωt) + C₂sin(ωt)]
5      t                                      C₀ + C₁t
6      te^{−αt} (see note above)              e^{−αt}(C₀ + C₁t)
Table A.2 Form of the Natural Response for Analog LTI Systems

Entry  Root of Characteristic Equation           Form of Natural Response
1      Real and distinct: r                      Ke^{rt}
2      Complex conjugate: −α ± jω                e^{−αt}[K₁cos(ωt) + K₂sin(ωt)]
3      Real, repeated: r (p + 1 times)           e^{rt}(K₀ + K₁t + K₂t² + ··· + K_p t^p)
4      Complex, repeated: −α ± jω (p + 1 times)  e^{−αt}cos(ωt)(A₀ + A₁t + A₂t² + ··· + A_p t^p) + e^{−αt}sin(ωt)(B₀ + B₁t + B₂t² + ··· + B_p t^p)
The form of the natural response depends only on the system details and is independent of the nature of the input. It is a sum of exponentials whose exponents are the roots (real or complex) of the so-called characteristic equation or characteristic polynomial defined by

a₀sⁿ + a₁sⁿ⁻¹ + a₂sⁿ⁻² + ··· + a_{n−2}s² + a_{n−1}s + a_n = 0    (A.8)

Its n roots, s₁, s₂, . . . , s_n, define the form of the natural response as summarized in Table A.2. The constants are evaluated (after setting up the total response) using the specified initial conditions.
EXAMPLE A.3 (Natural and Forced Response)
Consider the first-order system y′(t) + 2y(t) = x(t). Its characteristic equation is s + 2 = 0, so its natural response is y_N(t) = Ke^{−2t}.

(a) Let x(t) = 6. We choose the forced response as y_F(t) = C. Then y_F′(t) = 0 and y_F′(t) + 2y_F(t) = 2C = 6, and thus y_F(t) = C = 3.
The total response is y(t) = y_N(t) + y_F(t) = Ke^{−2t} + 3.
With y(0) = 8, we find 8 = K + 3 (or K = 5) and y(t) = 5e^{−2t} + 3, t ≥ 0, or y(t) = (5e^{−2t} + 3)u(t).
(b) Since x(t) = cos(2t), we choose y_F(t) = A cos(2t) + B sin(2t).
Then y_F′(t) = −2A sin(2t) + 2B cos(2t), and
y_F′(t) + 2y_F(t) = (2A + 2B)cos(2t) + (2B − 2A)sin(2t) = cos(2t).
Comparing the coefficients of the sine and cosine terms on either side, we obtain
2A + 2B = 1 and 2B − 2A = 0, or A = 0.25, B = 0.25. This gives
y_F(t) = 0.25 cos(2t) + 0.25 sin(2t).
The total response is y(t) = y_N(t) + y_F(t) = Ke^{−2t} + 0.25 cos(2t) + 0.25 sin(2t).
With y(0) = 2, we find 2 = K + 0.25 (or K = 1.75) and

y(t) = [1.75e^{−2t} + 0.25 cos(2t) + 0.25 sin(2t)]u(t)

The steady-state response is y_ss(t) = 0.25 cos(2t) + 0.25 sin(2t), a sinusoid at the input frequency.
(c) Since x(t) = e^{−2t} has the same form as y_N(t), we must choose y_F(t) = Cte^{−2t}.
Then y_F′(t) = Ce^{−2t} − 2Cte^{−2t}, and y_F′(t) + 2y_F(t) = Ce^{−2t} − 2Cte^{−2t} + 2Cte^{−2t} = e^{−2t}.
This gives C = 1, and thus y_F(t) = te^{−2t} and y(t) = y_N(t) + y_F(t) = Ke^{−2t} + te^{−2t}.
With y(0) = 3, we find 3 = K + 0 and y(t) = (3e^{−2t} + te^{−2t})u(t).
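The result of part (c) can be spot-checked by integrating the differential equation numerically; this is an independent sketch using a classical fourth-order Runge-Kutta step, not part of the text:

```python
import math

# Integrate y'(t) = -2 y(t) + x(t) with x(t) = exp(-2t), y(0) = 3, and
# compare against the analytic total response y(t) = (3 + t) exp(-2t).
def rk4(f, y0, t0, t1, n=20000):
    h = (t1 - t0) / n
    t, y = t0, y0
    for i in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t = t0 + (i + 1) * h
    return y

y_num = rk4(lambda t, y: -2 * y + math.exp(-2 * t), 3.0, 0.0, 2.0)
y_exact = (3 + 2.0) * math.exp(-2 * 2.0)    # evaluate y(t) = (3 + t) e^{-2t} at t = 2
print(abs(y_num - y_exact) < 1e-9)          # True
```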
A.2.1 The Zero-State Response and Zero-Input Response
It is often more convenient to describe the response y(t) of an LTI system as the sum of its zero-state response (ZSR) y_zs(t) (assuming zero initial conditions) and zero-input response (ZIR) y_zi(t) (assuming zero input). Each component is found using the method of undetermined coefficients. Note that the natural and forced components y_N(t) and y_F(t) do not, in general, correspond to the zero-input and zero-state response, respectively, even though each pair adds up to the total response.
EXAMPLE A.4 (Zero-Input and Zero-State Response for the Single-Input Case)
Let y′′(t) + 3y′(t) + 2y(t) = 4e^{−3t}, with y(0) = 3 and y′(0) = 4.
Find its zero-input response and zero-state response.
The characteristic equation is s² + 3s + 2 = 0 with roots s₁ = −1 and s₂ = −2.
Its natural response is y_N(t) = K₁e^{s₁t} + K₂e^{s₂t} = K₁e^{−t} + K₂e^{−2t}.
1. The zero-input response is found from y_N(t) and the prescribed initial conditions:

y_zi(t) = K₁e^{−t} + K₂e^{−2t}    y_zi(0) = K₁ + K₂ = 3    y_zi′(0) = −K₁ − 2K₂ = 4

This yields K₂ = −7, K₁ = 10, and y_zi(t) = 10e^{−t} − 7e^{−2t}.
2. Similarly, y_zs(t) is found from the general form of y(t) but with zero initial conditions.
Since x(t) = 4e^{−3t}, we select the forced response as y_F(t) = Ce^{−3t}.
Then, y_F′(t) = −3Ce^{−3t}, y_F′′(t) = 9Ce^{−3t}, and y_F′′(t) + 3y_F′(t) + 2y_F(t) = (9C − 9C + 2C)e^{−3t} = 4e^{−3t}.
Thus, C = 2, y_F(t) = 2e^{−3t}, and y_zs(t) = K₁e^{−t} + K₂e^{−2t} + 2e^{−3t}.
With zero initial conditions, we obtain

y_zs(0) = K₁ + K₂ + 2 = 0    y_zs′(0) = −K₁ − 2K₂ − 6 = 0

This yields K₂ = −4, K₁ = 2, and y_zs(t) = 2e^{−t} − 4e^{−2t} + 2e^{−3t}.

3. The total response is the sum of y_zs(t) and y_zi(t):

y(t) = y_zi(t) + y_zs(t) = 12e^{−t} − 11e^{−2t} + 2e^{−3t}, t ≥ 0
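A quick consistency check of this total response, assuming the system of Example A.4 is y′′(t) + 3y′(t) + 2y(t) = 4e^{−3t} with y(0) = 3, y′(0) = 4 (an assumption consistent with the worked numbers):

```python
import math

# Verify that y(t) = 12 e^{-t} - 11 e^{-2t} + 2 e^{-3t} satisfies the ODE
# y'' + 3y' + 2y = 4 e^{-3t} and the initial conditions y(0) = 3, y'(0) = 4.
y   = lambda t: 12 * math.exp(-t) - 11 * math.exp(-2 * t) + 2 * math.exp(-3 * t)
yp  = lambda t: -12 * math.exp(-t) + 22 * math.exp(-2 * t) - 6 * math.exp(-3 * t)
ypp = lambda t: 12 * math.exp(-t) - 44 * math.exp(-2 * t) + 18 * math.exp(-3 * t)

print(y(0.0), yp(0.0))                       # 3.0 4.0
residual = max(abs(ypp(t) + 3 * yp(t) + 2 * y(t) - 4 * math.exp(-3 * t))
               for t in (0.0, 0.5, 1.0, 2.0, 5.0))
print(residual < 1e-12)                      # True
```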
A.2.2 Step Response and Impulse Response
The step response s(t) is the response of a relaxed LTI system to a unit step u(t). The impulse response h(t) is the response to a unit impulse δ(t) and also equals s′(t).
Consider an RC lowpass filter whose output y(t) is the capacitor voltage:

y′(t) + (1/τ)y(t) = (1/τ)x(t)    (A.9)

where τ = RC defines the time constant. The characteristic equation s + 1/τ = 0 gives the natural response y_N(t) = Ke^{−t/τ}. For the step input x(t) = u(t), choosing y_F(t) = B, we obtain

y_F′(t) + (1/τ)y_F(t) = 1/τ    or    0 + B/τ = 1/τ    or    B = 1

Thus, y(t) = y_F(t) + y_N(t) = 1 + Ke^{−t/τ}. With y(0) = 0, we get 0 = 1 + K and

s(t) = y(t) = (1 − e^{−t/τ})u(t)    (step response)    (A.10)
The impulse response h(t) equals the derivative of the step response. Thus,

h(t) = s′(t) = (1/τ)e^{−t/τ}u(t)    (impulse response)    (A.11)
REVIEW PANEL A.3
Unit Step Response and Unit Impulse Response of an RC Lowpass Filter
The output is the capacitor voltage. The time constant is τ = RC.
Step response: s(t) = (1 − e^{−t/τ})u(t)    Impulse response: h(t) = s′(t) = (1/τ)e^{−t/τ}u(t)
[Sketches of the RC circuit, s(t), and h(t).]
A.3 Convolution
Convolution finds the zero-state response y(t) of an LTI system to an input x(t) and is defined by

y(t) = x(t) ⋆ h(t) = ∫_{−∞}^{∞} x(λ)h(t − λ) dλ    (A.12)

The shorthand notation x(t) ⋆ h(t) describes the convolution of the signals x(t) and h(t).
Useful Convolution Properties
The starting time of the convolution equals the sum of the starting times of x(t) and h(t). The ending time
of the convolution equals the sum of the ending times of x(t) and h(t). The convolution duration equals the
sum of the durations of x(t) and h(t). The area of the convolution equals the product of the areas of x(t)
and h(t). The convolution of an odd symmetric and an even symmetric signal is odd symmetric, whereas
the convolution of two even symmetric (or two odd symmetric) signals is even symmetric. Interestingly,
the convolution of x(t) with its folded version x(−t) is also even symmetric, with a maximum at t = 0. The convolution x(t) ⋆ x(−t) is called the autocorrelation of x(t). The convolution of a large number of functions approaches a Gaussian form. This is the central limit theorem.
A.3.1 Useful Convolution Results
The convolution of any signal x(t) with the impulse δ(t) reproduces the signal x(t). If the impulse is shifted, so is the convolution.

x(t) ⋆ δ(t) = x(t)    x(t) ⋆ δ(t − α) = x(t − α)

Other useful convolution results are illustrated in Figure A.3. By way of an example,

e^{−t}u(t) ⋆ e^{−t}u(t) = ∫ e^{−λ}u(λ)e^{−(t−λ)}u(t − λ) dλ = e^{−t}∫₀ᵗ dλ = te^{−t}u(t)
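Both this result and the area property from the previous section can be checked numerically with sampled signals (a sketch; the step size and tolerances are my own choices):

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
x = np.exp(-t)                                   # samples of e^{-t} u(t)

# Discrete approximation of continuous convolution: scale the sum by dt.
y = np.convolve(x, x)[: t.size] * dt
exact = t * np.exp(-t)                           # the analytic result t e^{-t} u(t)
print(np.max(np.abs(y - exact)) < 2e-3)          # matches the analytic result

# Area property: the area of x * x equals the product of the areas.
area_y = np.sum(np.convolve(x, x)) * dt * dt
print(abs(area_y - (np.sum(x) * dt) ** 2) < 1e-9)
```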
A.4 The Laplace Transform
The Laplace transform X(s) of a causal signal x(t) is defined as

X(s) = ∫₀^∞ x(t)e^{−(σ+jω)t} dt = ∫₀^∞ x(t)e^{−st} dt    (A.13)

The complex quantity s = σ + jω generalizes the concept of frequency to the complex domain. Some useful transform pairs and properties are listed in Table A.3 and Table A.4.
A.4.1 The Inverse Laplace Transform
The inverse transform of H(s) may be found by resorting to partial fraction expansion and a table look-up. If H(s) = P(s)/Q(s), the form of the expansion depends on the nature of the factors in Q(s) and is summarized below. Once the partial fraction expansion is established, the inverse transform for each term can be found with the help of Table A.5.
Table A.3 A Short Table of Laplace Transforms

Entry  x(t)                  X(s)
1      δ(t)                  1
2      u(t)                  1/s
3      e^{−αt}u(t)           1/(s + α)
4      e^{−αt}sin(ωt)u(t)    ω/[(s + α)² + ω²]
5      te^{−αt}u(t)          1/(s + α)²
6      e^{−αt}cos(ωt)u(t)    (s + α)/[(s + α)² + ω²]
7      cos(ωt)u(t)           s/(s² + ω²)
8      sin(ωt)u(t)           ω/(s² + ω²)
Table A.4 Operational Properties of the Laplace Transform
Note: x(t) is to be regarded as the causal signal x(t)u(t).

Entry  Property       x(t)               X(s)
1      Superposition  x₁(t) + x₂(t)      X₁(s) + X₂(s)
2      Times-exp      e^{−αt}x(t)        X(s + α)
3      Time Scaling   x(αt), α > 0       (1/α)X(s/α)

The form of the partial fraction expansion depends on the poles of X(s):

Distinct poles: X(s) = Σ_{m=1}^{N} K_m/(s + p_m), where K_m = (s + p_m)X(s)|_{s=−p_m}

Repeated pole of order k at s = −r: the expansion also includes the terms Σ_{n=0}^{k−1} A_n/(s + r)^{k−n}, where A_n = (1/n!) dⁿ/dsⁿ [(s + r)^k X(s)]|_{s=−r}
Table A.5 Inverse Laplace Transforms of Partial Fraction Expansion Terms

Entry  Partial Fraction Expansion Term               Inverse Transform
1      K/(s + α)                                     Ke^{−αt}u(t)
2      K/(s + α)ⁿ                                    [K/(n − 1)!] t^{n−1}e^{−αt}u(t)
3      (Cs + D)/[(s + α)² + β²]                      e^{−αt}[C cos(βt) + ((D − αC)/β)sin(βt)]u(t)
4      (M∠θ)/(s + α + jβ) + (M∠−θ)/(s + α − jβ)      2Me^{−αt}cos(βt − θ)u(t)
EXAMPLE A.5 (Partial Fraction Expansion)
(a) (Non-Repeated Poles) Let X(s) = (2s³ + 8s² + 4s + 8)/[s(s + 1)(s² + 4s + 8)]. This can be factored as

X(s) = K₁/s + K₂/(s + 1) + A/(s + 2 + j2) + A*/(s + 2 − j2)

We successively evaluate

K₁ = sX(s)|_{s=0} = (2s³ + 8s² + 4s + 8)/[(s + 1)(s² + 4s + 8)] |_{s=0} = 8/8 = 1
K₂ = (s + 1)X(s)|_{s=−1} = (2s³ + 8s² + 4s + 8)/[s(s² + 4s + 8)] |_{s=−1} = 10/(−5) = −2
A = (s + 2 + j2)X(s)|_{s=−2−j2} = (2s³ + 8s² + 4s + 8)/[s(s + 1)(s + 2 − j2)] |_{s=−2−j2} = 1.5 + j0.5

The partial fraction expansion thus becomes

X(s) = 1/s − 2/(s + 1) + (1.5 + j0.5)/(s + 2 + j2) + (1.5 − j0.5)/(s + 2 − j2)

With 1.5 + j0.5 = 1.581∠18.4° = 1.581∠0.1024π = M∠θ, we find x(t) as

x(t) = u(t) − 2e^{−t}u(t) + 3.162e^{−2t}cos(2t − 0.1024π)u(t)
(b) (Repeated Poles) Let X(s) = 4/[(s + 1)(s + 2)³]. Its partial fraction expansion is

X(s) = K₁/(s + 1) + A₀/(s + 2)³ + A₁/(s + 2)² + A₂/(s + 2)

We compute K₁ = (s + 1)X(s)|_{s=−1} = 4/(s + 2)³ |_{s=−1} = 4.
Since (s + 2)³X(s) = 4/(s + 1), we also successively compute

A₀ = 4/(s + 1) |_{s=−2} = −4
A₁ = d/ds [4/(s + 1)] |_{s=−2} = −4/(s + 1)² |_{s=−2} = −4
A₂ = (1/2) d²/ds² [4/(s + 1)] |_{s=−2} = 4/(s + 1)³ |_{s=−2} = −4

This gives the result

X(s) = 4/(s + 1) − 4/(s + 2)³ − 4/(s + 2)² − 4/(s + 2)

We then find x(t) = 4e^{−t}u(t) − 2t²e^{−2t}u(t) − 4te^{−2t}u(t) − 4e^{−2t}u(t).
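A partial fraction expansion is easy to sanity-check numerically: the original X(s) and its expansion must agree at any s away from the poles (the test points here are my own choices):

```python
import numpy as np

# Check the expansion of Example A.5(b) at a few arbitrary complex points.
X = lambda s: 4 / ((s + 1) * (s + 2) ** 3)
X_pfe = lambda s: 4 / (s + 1) - 4 / (s + 2) ** 3 - 4 / (s + 2) ** 2 - 4 / (s + 2)

s_test = np.array([0.5, 3.0, 1 + 2j, -0.3 + 1j])
print(np.max(np.abs(X(s_test) - X_pfe(s_test))) < 1e-12)   # True
```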
A.4.2 Interconnected Systems
The impulse response h(t) of cascaded LTI systems is the convolution of the individual impulse responses. The impulse response of systems in parallel equals the sum of the individual impulse responses.
h_C(t) = h₁(t) ⋆ h₂(t) ⋆ ··· ⋆ h_N(t)  (cascade)    h_P(t) = h₁(t) + h₂(t) + ··· + h_N(t)  (parallel)

The overall transfer function of cascaded systems is the product of the individual transfer functions (assuming ideal cascading and no loading effects). The overall transfer function of systems in parallel is the algebraic sum of the individual transfer functions.

H_C(s) = H₁(s)H₂(s)···H_N(s)  (cascade)    H_P(s) = H₁(s) + H₂(s) + ··· + H_N(s)  (parallel)
A.4.3 Stability
In the time domain, BIBO (bounded-input, bounded-output) stability of an LTI system requires a differential equation in which the highest derivative of the input never exceeds the highest derivative of the output, and a characteristic equation whose roots have negative real parts. Equivalently, we require the impulse response h(t) to be absolutely integrable. In the s-domain, we require a proper transfer function H(s) (with common factors canceled) whose poles lie in the left half of the s-plane (excluding the jω-axis).

REVIEW PANEL A.5
BIBO Stability from the Transfer Function or Impulse Response
From transfer function: H(s) must be strictly proper and its poles must lie inside the LHP.
From impulse response: h(t) must be absolutely integrable (∫|h(λ)| dλ < ∞).

Minimum-Phase Filters
A minimum-phase system has all its poles and zeros in the left half of the s-plane. It has the smallest group delay and smallest deviation from zero phase, at every frequency, among all transfer functions with the same magnitude spectrum |H(ω)|.
A.4.4 The Laplace Transform and System Analysis
The Laplace transform is a useful tool for the analysis of LTI systems. To find the zero-state response of an electric circuit, we transform the circuit to the s-domain by replacing the elements R, L, and C by their impedances Z_R, Z_L, and Z_C and replacing sources by their Laplace transforms. We may also transform the system differential equation. For a relaxed LTI system, the zero-state response Y(s) to an input X(s) is H(s)X(s). If the system is not relaxed, the effect of initial conditions is easy to include.
EXAMPLE A.6 (Solving Differential Equations)
Let y′′(t) + 3y′(t) + 2y(t) = 4e^{−2t}, with y(0) = 3 and y′(0) = 4.

(a) (Total Response) Transformation to the s-domain, using the derivative property, yields

s²Y(s) − sy(0) − y′(0) + 3[sY(s) − y(0)] + 2Y(s) = 4/(s + 2)

With the initial conditions, (s² + 3s + 2)Y(s) = 3s + 13 + 4/(s + 2), and

Y(s) = (3s² + 19s + 30)/[(s + 1)(s + 2)²] = K₁/(s + 1) + A₀/(s + 2)² + A₁/(s + 2)

We successively compute

K₁ = (3s² + 19s + 30)/(s + 2)² |_{s=−1} = 14
A₀ = (3s² + 19s + 30)/(s + 1) |_{s=−2} = −4
A₁ = d/ds [(3s² + 19s + 30)/(s + 1)] |_{s=−2} = −11

Upon inverse transformation, y(t) = (14e^{−t} − 4te^{−2t} − 11e^{−2t})u(t).
As a check, we confirm that y(0) = 3 and y′(0) = −14 + 22 − 4 = 4.
(b) (Zero-State Response) For the zero-state response, we assume zero initial conditions to obtain

(s² + 3s + 2)Y_zs(s) = 4/(s + 2)

This gives

Y_zs(s) = 4/[(s + 2)(s² + 3s + 2)] = 4/(s + 1) − 4/(s + 2)² − 4/(s + 2)

Inverse transformation gives y_zs(t) = (4e^{−t} − 4te^{−2t} − 4e^{−2t})u(t).
(c) (Zero-Input Response) For the zero-input response, we assume zero input to obtain

(s² + 3s + 2)Y_zi(s) = 3s + 13    Y_zi(s) = (3s + 13)/(s² + 3s + 2) = 10/(s + 1) − 7/(s + 2)

Upon inverse transformation, y_zi(t) = (10e^{−t} − 7e^{−2t})u(t). The total response equals

y(t) = y_zs(t) + y_zi(t) = (14e^{−t} − 4te^{−2t} − 11e^{−2t})u(t)

This matches the result found from the direct solution.
A.4.5 The Steady-State Response to Harmonic Inputs
The steady-state response of an LTI system to a sinusoid or harmonic input is also a harmonic at the input frequency. To find the steady-state response y_ss(t) to the sinusoidal input x(t) = A cos(ω₀t + θ), we first evaluate the transfer function at the input frequency ω₀ to obtain H(ω₀) = K∠φ. The steady-state output is then y_ss(t) = KA cos(ω₀t + θ + φ).

As an example, let x₁(t) = 8 cos(2t + 15°) be applied to a system with H(2) = 0.3536∠45°. Its steady-state output is y₁(t) = 8(0.3536)cos(2t + 15° + 45°) = 2.8284 cos(2t + 60°).
Let x₂(t) = −4. Then ω = 0 and H(0) = 0.5. So, its steady-state output is y₂(t) = (−4)(0.5) = −2.
By superposition, y_ss(t) = y₁(t) + y₂(t) = 2.8284 cos(2t + 60°) − 2.
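The recipe above — evaluate H at jω₀, then scale and phase-shift the input — is easy to script. This sketch uses an assumed first-order system H(s) = 1/(s + 2) and the assumed input 8 cos(2t), not the system of the original (partly illegible) example:

```python
import cmath, math

H = lambda s: 1 / (s + 2)                    # assumed transfer function

w0, A = 2.0, 8.0                             # input 8 cos(2t)
Hw = H(1j * w0)                              # evaluate H at the input frequency
K, phi = abs(Hw), cmath.phase(Hw)

# Steady-state output: y_ss(t) = A K cos(w0 t + phi)
print(round(A * K, 4), round(math.degrees(phi), 1))   # 2.8284 -45.0
```

For this assumed system the output amplitude is 8/|2 + j2| = 2.8284 with a phase lag of 45°.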
A.5 Fourier Series
The Fourier series is the best least squares fit to a periodic signal x_p(t). It describes x_p(t) as a sum of sinusoids at its fundamental frequency f₀ = 1/T and multiples kf₀ whose weights (magnitude and phase) are selected to minimize the mean square error. The trigonometric, polar, and exponential forms of the Fourier series are listed in the following review panel.
REVIEW PANEL A.6
Three Forms of the Fourier Series for a Periodic Signal x_p(t)

Trigonometric: a₀ + Σ_{k=1}^{∞} [a_k cos(2πkf₀t) + b_k sin(2πkf₀t)]
Polar: c₀ + Σ_{k=1}^{∞} c_k cos(2πkf₀t + θ_k)
Exponential: Σ_{k=−∞}^{∞} X[k]e^{j2πkf₀t}
REVIEW PANEL A.7
The Exponential Fourier Series Coefficients Display Conjugate Symmetry

X[0] = (1/T)∫_T x(t) dt    X[k] = (1/T)∫_T x(t)e^{−j2πkf₀t} dt    X[−k] = X*[k]

The connection between the three forms of the coefficients is given by

X[0] = a₀ = c₀    and    X[k] = 0.5(a_k − jb_k) = 0.5c_k∠θ_k,  k ≥ 1    (A.14)
The magnitude spectrum and phase spectrum describe plots of the magnitude and phase of each
harmonic. They are plotted as discrete signals and sometimes called line spectra. For real periodic signals,
the X[k] display conjugate symmetry.
EXAMPLE A.8 (Some Fourier Series Results)
(a) (A Pure Sinusoid) If x(t) = cos(2πf₀t), its exponential Fourier series coefficients are X[1] = 0.5, X[−1] = 0.5. Its two-sided magnitude spectrum shows sample values of 0.5 at f = ±f₀.
If x(t) = cos(2πf₀t + θ), its exponential coefficients are X[1] = 0.5e^{jθ}, X[−1] = 0.5e^{−jθ}. So, its two-sided magnitude spectrum has sample values of 0.5 at f = ±f₀, and its phase spectrum shows a phase of θ at f₀ and −θ at −f₀.
(b) (An Impulse Train) From Figure EA.8B, and by the sifting property of impulses, we get

X[k] = (1/T)∫_{−T/2}^{T/2} δ(t)e^{−j2πkf₀t} dt = 1/T

All the coefficients of an impulse train are constant!
[Figure EA.8B Impulse train (unit impulses spaced T apart) and its Fourier series coefficients X[k] = 1/T (all k) for Example A.8(b).]
(c) (A Rectangular Pulse Train) The coefficients X[k] of the rectangular pulse train shown in Figure EA.8C are

X[k] = (At₀/T) sinc(kf₀t₀)

[Figure EA.8C Rectangular pulse train (height A, width t₀, period T) and one period for Example A.8(c).]
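The sinc formula of part (c) can be verified numerically; the values A = 1, t₀ = 0.25, T = 1 below are illustrative choices of this sketch, not from the text:

```python
import numpy as np

A, t0, T = 1.0, 0.25, 1.0
f0 = 1.0 / T
t = np.linspace(-T / 2, T / 2, 200001)
dt = t[1] - t[0]
x = A * (np.abs(t) <= t0 / 2)                # one period: centered pulse of width t0

for k in range(5):
    # X[k] = (1/T) * integral of x(t) exp(-j 2 pi k f0 t) over one period
    Xk = np.sum(x * np.exp(-2j * np.pi * k * f0 * t)) * dt / T
    formula = (A * t0 / T) * np.sinc(k * f0 * t0)   # np.sinc(v) = sin(pi v)/(pi v)
    assert abs(Xk - formula) < 1e-3
print("coefficients match the sinc formula")
```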
A.5.1 Some Useful Results
A time shift that changes x_p(t) to y_p(t) = x_p(t − α) changes the Fourier series coefficients from X[k] to Y[k] = X[k]e^{−j2πkf₀α}, leaving the magnitude spectrum unchanged. Parseval's relation equates the signal power to the sum of the powers in the harmonics:

P = (1/T)∫_T x_p²(t) dt = Σ_{k=−∞}^{∞} |X[k]|²    (A.15)
The Gibbs Effect
The Gibbs effect states that perfect reconstruction from harmonics is impossible for signals with jumps. The reconstructed signal shows overshoot and undershoot (about 9% of the jump) near each jump location; the reconstructed value equals the midpoint value of the jump at each jump location.
A.6 The Fourier Transform
The Fourier transform provides a frequency-domain representation of a signal x(t) and is defined by

X(f) = ∫_{−∞}^{∞} x(t)e^{−j2πft} dt  (the f-form)    X(ω) = ∫_{−∞}^{∞} x(t)e^{−jωt} dt  (the ω-form)

The inverse Fourier transform allows us to obtain x(t) from its spectrum and is defined by

x(t) = ∫_{−∞}^{∞} X(f)e^{j2πft} df  (the f-form)    x(t) = (1/2π)∫_{−∞}^{∞} X(ω)e^{jωt} dω  (the ω-form)

The Fourier transform is, in general, complex. For real signals, X(f) is conjugate symmetric with X(−f) = X*(f). This means that the magnitude |X(f)| or Re{X(f)} displays even symmetry, and the phase θ(f) or Im{X(f)} displays odd symmetry. It is customary to plot the magnitude and phase of X(f) as two-sided functions. The effects of signal symmetry on the Fourier transform are summarized.
REVIEW PANEL A.8
Effect of Signal Symmetry on the Fourier Transform of Real-Valued Signals
Even symmetry in x(t): The Fourier transform X(f) is real and even symmetric.
Odd symmetry in x(t): The Fourier transform X(f) is imaginary and odd symmetric.
No symmetry in x(t): ReX(f) is even symmetric, and ImX(f) is odd symmetric.
The three most useful transform pairs are listed in the following review panel.
REVIEW PANEL A.9
Three Basic Fourier Transform Pairs
δ(t) ↔ 1    rect(t) ↔ sinc(f)    e^{−αt}u(t) ↔ 1/(α + j2πf)
[Sketches of the three signal-spectrum pairs.]
Table A.6 lists the Fourier transforms of various signals, while Table A.7 lists various properties and theorems useful in problem solving.

A.6.1 Connections between Laplace and Fourier Transforms
The following review panel shows the connection between the Fourier and Laplace transforms. We can always find the Laplace transform of causal signals from their Fourier transform, but not the other way around.

REVIEW PANEL A.10
Relating the Laplace Transform and the Fourier Transform
From X(s) to X(f): If x(t) is causal and absolutely integrable, simply replace s by j2πf.
From X(f) to X(s): If x(t) is causal, delete impulsive terms in X(f) and replace j2πf by s.
Some Useful Properties
Scaling of x(t) to x(αt) leads to a stretching of X(f) by α and an amplitude reduction by |α|. If a signal x(t) is modulated (multiplied) by the high-frequency sinusoid cos(2πf₀t), its spectrum X(f) gets halved and centered at f = ±f₀. This modulation property shifts the spectrum of x(t) to higher frequencies.

x(t)cos(2πf₀t) ↔ 0.5[X(f + f₀) + X(f − f₀)]    (A.16)

Parseval's theorem says that the Fourier transform is an energy-conserving relation, and the energy may be found from the time signal x(t) or its magnitude spectrum |X(f)|.

E = ∫_{−∞}^{∞} x²(t) dt = ∫_{−∞}^{∞} |X(f)|² df    (A.17)
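Parseval's theorem can be spot-checked numerically for x(t) = e^{−t}u(t), whose energy is 1/2 (the grids and truncation limits below are choices of this sketch):

```python
import numpy as np

dt = 1e-4
t = np.arange(0.0, 20.0, dt)
E_time = np.sum(np.exp(-2 * t)) * dt          # integral of x^2(t) dt    (~0.5)

df = 0.001
f = np.arange(-200.0, 200.0, df)
X = 1.0 / (1.0 + 2j * np.pi * f)              # Fourier transform of e^{-t} u(t)
E_freq = np.sum(np.abs(X) ** 2) * df          # integral of |X(f)|^2 df  (~0.5)

print(round(E_time, 3), round(E_freq, 3))     # 0.5 0.5
```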
EXAMPLE A.9 (Some Transform Pairs Using Properties)
(a) For x(t) = tri(t) = rect(t) ⋆ rect(t), use rect(t) ↔ sinc(f) and the convolution property (see Figure EA.9A):

tri(t) = rect(t) ⋆ rect(t) ↔ sinc(f)sinc(f) = sinc²(f)

[Figure EA.9A The convolutions rect(t) ⋆ rect(t) = tri(t) and e^{−t}u(t) ⋆ e^{−t}u(t) = te^{−t}u(t) for Example A.9(a and b).]
Table A.6 Some Useful Fourier Transform Pairs

Entry  x(t)                          X(f)                                  X(ω)
1      δ(t)                          1                                     1
2      rect(t)                       sinc(f)                               sinc(ω/2π)
3      tri(t)                        sinc²(f)                              sinc²(ω/2π)
–      e^{−α|t|}                     2α/(α² + 4π²f²)                       2α/(α² + ω²)
10     e^{−πt²}                      e^{−πf²}                              e^{−ω²/4π}
11     sgn(t)                        1/(jπf)                               2/(jω)
12     u(t)                          0.5δ(f) + 1/(j2πf)                    πδ(ω) + 1/(jω)
13     e^{−αt}cos(2πβt)u(t)          (α + j2πf)/[(α + j2πf)² + (2πβ)²]     (α + jω)/[(α + jω)² + (2πβ)²]
14     e^{−αt}sin(2πβt)u(t)          2πβ/[(α + j2πf)² + (2πβ)²]            2πβ/[(α + jω)² + (2πβ)²]
15     Σ_{n=−∞}^{∞} δ(t − nT)        (1/T)Σ_{k=−∞}^{∞} δ(f − k/T)          (2π/T)Σ_{k=−∞}^{∞} δ(ω − 2πk/T)
16     x_p(t) = Σ_k X[k]e^{j2πkf₀t}  Σ_k X[k]δ(f − kf₀)                    Σ_k 2πX[k]δ(ω − kω₀)
Table A.7 Operational Properties of the Fourier Transform

Property         x(t)            X(f)                   X(ω)
Similarity       X(t)            x(−f)                  2πx(−ω)
Time Scaling     x(αt)           (1/|α|)X(f/α)          (1/|α|)X(ω/α)
Conjugation      x*(t)           X*(−f)                 X*(−ω)
Correlation      x(t) ⋆ y(−t)    X(f)Y*(f)              X(ω)Y*(ω)
Autocorrelation  x(t) ⋆ x(−t)    X(f)X*(f) = |X(f)|²    X(ω)X*(ω) = |X(ω)|²

Fourier Transform Theorems
Central ordinates:  x(0) = ∫X(f) df = (1/2π)∫X(ω) dω    X(0) = ∫x(t) dt
Parseval's theorem:  E = ∫x²(t) dt = ∫|X(f)|² df = (1/2π)∫|X(ω)|² dω
Plancherel's theorem:  ∫x(t)y*(t) dt = ∫X(f)Y*(f) df = (1/2π)∫X(ω)Y*(ω) dω
(b) For x(t) = te^{−αt}u(t), start with e^{−αt}u(t) ↔ 1/(α + j2πf) and use convolution (see Figure EA.9A):

te^{−αt}u(t) = e^{−αt}u(t) ⋆ e^{−αt}u(t) ↔ 1/(α + j2πf)²
(c) For x(t) = e^{−α|t|} = e^{−αt}u(t) + e^{αt}u(−t), start with e^{−αt}u(t) ↔ 1/(α + j2πf) and use the folding property and superposition:

e^{−α|t|} ↔ 1/(α + j2πf) + 1/(α − j2πf) = 2α/(α² + 4π²f²)
(d) For x(t) = sgn(t), use the limiting form of y(t) = e^{−αt}u(t) − e^{αt}u(−t) as α → 0 to give

e^{−αt}u(t) − e^{αt}u(−t) ↔ 1/(α + j2πf) − 1/(α − j2πf) = −j4πf/(α² + 4π²f²)    sgn(t) ↔ 1/(jπf)
(e) For x(t) = cos(2παt) = 0.5e^{j2παt} + 0.5e^{−j2παt}, start with 1 ↔ δ(f) and use the dual of the time-shift property:

cos(2παt) = 0.5e^{j2παt} + 0.5e^{−j2παt} ↔ 0.5δ(f − α) + 0.5δ(f + α)

Its magnitude spectrum (see Figure EA.9H(a)) is an impulse pair at f = ±α, with strengths of 0.5.

(f) For x(t) = cos(2παt + θ), start with cos(2παt) ↔ 0.5δ(f − α) + 0.5δ(f + α), and use the shifting property with t → t + θ/2πα (and the product property of impulses) to get

cos(2παt + θ) ↔ 0.5e^{jθf/α}[δ(f − α) + δ(f + α)] = 0.5e^{jθ}δ(f − α) + 0.5e^{−jθ}δ(f + α)

Its magnitude spectrum is an impulse pair at f = ±α with strengths of 0.5. Its phase spectrum shows a phase of θ at f = α and −θ at f = −α. The spectra are shown in Figure EA.9H(b).
[Figure EA.9H (a) Transform of cos(2παt): impulse pair of strength 0.5 at f = ±α. (b) Transform of cos(2παt + θ): magnitude and phase spectra.]
For a periodic signal described by its exponential Fourier series, the Fourier transform is a train of impulses:

x_p(t) = Σ_{k=−∞}^{∞} X[k]e^{j2πkf₀t} ↔ Σ_{k=−∞}^{∞} X[k]δ(f − kf₀)    (A.21)

The impulses are located at the harmonic frequencies kf₀, and the impulse strengths are given by the Fourier series coefficients X[k]. The impulse train is periodic only if X[k] = C (i.e., has the same value for every k).
EXAMPLE A.10 (Fourier Transform of an Impulse Train)
For a unit impulse train with period T, as shown in Figure EA.10, we get

X[k] = (1/T)∫_{−T/2}^{T/2} δ(t)e^{−j2πkf₀t} dt = 1/T

[Figure EA.10 Impulse train (unit impulses spaced T apart) and its Fourier series coefficients X[k] = 1/T (all k) for Example A.10.]

The Fourier transform of this function is

X(f) = Σ_{k=−∞}^{∞} X[k]δ(f − kf₀) = (1/T)Σ_{k=−∞}^{∞} δ(f − kf₀),    f₀ = 1/T

The Fourier transform of a periodic impulse train is also a periodic impulse train. For other periodic signals, the Fourier transform is not periodic (even though it is an impulse train).
A.6.4 Spectral Density
The spectral density is the Fourier transform of the autocorrelation function.

(autocorrelation) r_xx(t) ↔ R_xx(f) (spectral density)    (A.22)

This is the celebrated Wiener-Khintchine theorem. For power signals (and periodic signals), we must use averaged measures consistent with power (and not energy). This leads to the concept of power spectral density (PSD). The PSD of a periodic signal is a train of impulses at f = kf₀ with strengths |X[k]|² whose sum equals the total signal power.

R_xx(f) = Σ_{k=−∞}^{∞} |X[k]|² δ(f − kf₀)    (A.23)
White Noise and Colored Noise
A signal with zero mean whose PSD is constant (with frequency) is called white noise. The autocorrelation function of such a signal is an impulse. A signal whose PSD is constant only over a finite frequency range is called band-limited white noise.
A.6.5 Ideal Filters
The transfer function H_LP(f) and impulse response h_LP(t) of an ideal lowpass filter (LPF) with unit gain, zero phase, and cutoff frequency f_C may be written as

H_LP(f) = rect(f/2f_C)    h_LP(t) = 2f_C sinc(2f_C t)  (ideal LPF)    (A.24)

Ideal filters are impractical for the reasons summarized in the following review panel. Even though ideal filters are unrealizable, they form the yardstick by which the design of practical filters is measured.

REVIEW PANEL A.11
Ideal Filters Are Noncausal, Unstable, and Physically Unrealizable
1. Ideal filters possess constant gain and linear phase in the passband.
2. Their impulse response (with a sinc form) makes them noncausal and unstable.
3. The step response of ideal filters cannot be monotonic and shows overshoot and ringing.
A.6.6 Measures for Real Filters
The phase delay and group delay of a system whose transfer function is H(ω) = |H(ω)|∠θ(ω) are defined as

t_p = −θ(ω)/ω  (phase delay)    t_g = −dθ(ω)/dω  (group delay)    (A.25)

If θ(ω) varies linearly with frequency, t_p and t_g are not only constant but also equal. For LTI systems (with rational transfer functions), the phase θ(ω) is a transcendental function, but the group delay is always a rational function of ω² and is much easier to work with in many filter applications.

The time-limited/band-limited theorem asserts that no signal can be both time-limited and band-limited simultaneously. In other words, the spectrum of a finite-duration signal is always of infinite extent. The narrower a time signal, the more spread out its spectrum. Measures of duration in the time domain are inversely related to measures of bandwidth in the frequency domain, and their product is a constant. A sharper frequency response |H(ω)| can be achieved only at the expense of a slower time response. Typical time-domain and frequency-domain measures are listed in Table A.8.
A.6.7 A First Order Lowpass Filter
Consider the RC circuit shown in Figure A.7. If we assume that the output is the capacitor voltage, its transfer function may be written as

H(f) = Y(f)/X(f) = (1/j2πfC)/[R + (1/j2πfC)] = 1/(1 + j2πfRC) = 1/(1 + j2πfτ)    (A.26)
Table A.8 Typical Measures for Real Filters

Measure            Explanation
Time delay         Time between application of input and appearance of response
                   Typical measure: Time to reach 50% of final value
Rise time          Measure of the steepness of the initial slope of the response
                   Typical measure: Time to rise from 10% to 90% of final value
Overshoot          Deviation (if any) beyond the final value
                   Typical measure: Peak overshoot
Settling time      Time for oscillations to settle to within a specified value
                   Typical measure: Time to settle to within 5% or 2% of final value
Speed of response  Depends on the largest time constant τ_max in h(t)
                   Typical measure: Steady state is reached in about 5τ_max
Damping            Rate of change toward final value
                   Typical measure: Damping factor ζ or quality factor Q
Bandwidth          Frequency range over which the gain exceeds a given value
                   Typical measure: B_3dB, for which |H(f)| ≥ 0.707|H(f)|_max
[Figure A.7 An RC lowpass filter, shown in the time domain (input x(t), output y(t)) and in the frequency domain, with H(ω) = 1/(1 + jωRC).]
The quantity τ = RC is the circuit time constant. The magnitude |H(f)| and phase θ(f) of the transfer function are sketched in Figure A.8 and given by

|H(f)| = 1/√(1 + 4π²f²τ²)    θ(f) = −tan⁻¹(2πfτ)    (A.27)
[Figure A.8 Frequency response of the RC lowpass filter: magnitude and phase of H(f), with magnitude 1/√2 at the half-power frequency.]
The system is called a lowpass filter because |H(f)| decays monotonically with positive f, leading to a reduced output amplitude at higher frequencies. At f = 1/(2πτ), the magnitude equals 1/√2. The frequency f = 1/(2πτ) is called the half-power frequency because the output power of a sinusoid at this frequency is only half the input power. The frequency range 0 ≤ f ≤ 1/(2πτ) defines the half-power bandwidth over which the gain exceeds 0.707 times the peak gain.
The time-domain performance of this system is measured by its impulse response h(t), or by its step response s(t), plotted in Figure A.9 and described by

h(t) = (1/τ)e^{−t/τ}u(t)    s(t) = (1 − e^{−t/τ})u(t)    (A.28)
The step response rises smoothly to unity and is within 1% of the final value in about 5τ. A smaller τ implies a faster response and a shorter time to reach steady state. Other performance measures include the rise time t_r (the time taken to rise from 10% to 90% of the final value). The 10%-90% rise time T_r is computed by finding

s(t_10) = 1 - e^{-t_10/τ} = 0.1        s(t_90) = 1 - e^{-t_90/τ} = 0.9

We obtain t_10 = τ ln(10/9), t_90 = τ ln 10, and T_r = t_90 - t_10 = τ ln 9.
With B_3dB = 1/(2πτ), the time-bandwidth product gives

T_r B_3dB = (ln 9)/(2π) ≈ 0.35
This result is often used as an approximation for higher-order systems also.
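As a quick numerical check of this product (a sketch; the choice τ = 2 is arbitrary, since the product is independent of the time constant):

```python
import math

tau = 2.0                        # any time constant; the product does not depend on it
t10 = tau * math.log(10 / 9)     # s(t10) = 0.1
t90 = tau * math.log(10)         # s(t90) = 0.9
Tr = t90 - t10                   # 10%-90% rise time = tau * ln 9
B3dB = 1 / (2 * math.pi * tau)   # half-power bandwidth in Hz

product = Tr * B3dB              # = ln(9)/(2*pi), about 0.35
```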
Figure A.9 Impulse response h(t) and step response s(t) of the RC lowpass lter
Other useful measures include the delay time t_d (the time taken to rise to 50% of the final value) and the settling time t_P% (the time taken to settle to within P% of the final value). Commonly used measures are the 5% settling time and the 2% settling time. Exact expressions for these measures are found to be

t_d = τ ln 2        t_5% = τ ln 20        t_2% = τ ln 50    (A.29)
A smaller time constant implies a faster rise time and a larger bandwidth. The phase delay and group delay of the RC lowpass filter are given by

t_p = -φ(f)/(2πf) = (1/(2πf)) tan⁻¹(2πfτ)        t_g = -(1/(2π)) dφ(f)/df = τ / (1 + 4π²f²τ²)    (A.30)
REVIEW PANEL A.12
Frequency Response of an RC Lowpass Filter with Time Constant τ = RC
f_3dB = 1/(2πτ) Hz        10%-90% Rise Time: τ ln 9        Time-Bandwidth Product: ≈ 0.35
A.6.8 A Second-Order Lowpass Filter
A second-order lowpass filter with unit dc gain may be described by

H(s) = ω_p² / (s² + (ω_p/Q)s + ω_p²)    (A.31)
Here, ω_p is called the undamped natural frequency, or pole frequency, and Q represents the quality factor and is a measure of the losses in the circuit. The quantity ζ = 1/(2Q) is called the damping factor.
c Ashok Ambardar, September 1, 2003
A.6 The Fourier Transform 499
Frequency Domain Performance
If Q > 0.5, the poles are complex conjugates and lie on a circle of radius ω_p in the s-plane. The higher the Q, the closer the poles are to the jω-axis. For Q < 1/√2 ≈ 0.707, the magnitude spectrum decreases monotonically with frequency; for Q > 1/√2, it exhibits a peak before falling off, as illustrated in Figure A.10. The phase φ(ω) and group delay t_g(ω) are given by

φ(ω) = -tan⁻¹[ ((ω/ω_p)/Q) / (1 - (ω/ω_p)²) ]        t_g(ω) = (ω_p/Q)(ω_p² + ω²) / [ (ω_p² - ω²)² + (ωω_p/Q)² ]    (A.32)
Figure A.10 Frequency response of a second-order system: monotonic spectrum for Q < 0.707, peaked spectrum (with peak H_pk at ω_pk) for Q > 0.707
Time-Domain Performance
The step response of second-order lowpass filters also depends on Q, as shown in Figure A.11. If Q < 0.5, the poles are real and distinct, and the step response shows a smooth, monotonic rise to the final value (overdamped) with a large time constant. If Q > 0.5, the poles are complex conjugates, and the step response is underdamped with overshoot and decaying oscillations (ringing) about the final value. This results in a small rise time but a large settling time. The frequency of oscillations increases with Q. For Q = 0.5, the poles are real and equal, and the response is critically damped and yields the fastest monotonic approach to the steady-state value with no overshoot.
Figure A.11 Step response of a second-order system
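The growth of overshoot with Q can be quantified using a standard second-order result that the text does not derive, so treat this as a supplementary sketch: with damping factor ζ = 1/(2Q), the first peak of the underdamped step response exceeds the final value by the fraction exp(-πζ/√(1 - ζ²)).

```python
import math

def peak_overshoot(Q):
    """Fractional peak overshoot of the underdamped (Q > 0.5) second-order
    step response: exp(-pi*zeta/sqrt(1 - zeta^2)) with zeta = 1/(2Q)."""
    zeta = 1.0 / (2.0 * Q)
    assert zeta < 1.0, "Q must exceed 0.5 for an underdamped response"
    return math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta ** 2))

# Overshoot grows with Q: more ringing for a higher quality factor
os_low, os_high = peak_overshoot(0.6), peak_overshoot(5.0)
```

At Q = 1/√2 (ζ = 0.707) the overshoot is exp(-π), about 4.3%, which is why this value of Q marks a common compromise between speed and ringing.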
EXAMPLE A.11 (Results for Some Analog Filters)
The step response and impulse response of the filters shown in the following figure are summarized below.
[Figure: circuit diagrams for a second-order Bessel filter, a second-order Butterworth filter, and a third-order Butterworth filter]

(a) (A Second-Order Bessel Filter)

H(s) = 3 / (s² + 3s + 3)

Differential equation: y''(t) + 3y'(t) + 3y(t) = 3x(t).
Step response: s(t) = u(t) - e^{-3t/2}[cos(√3t/2) + √3 sin(√3t/2)]u(t).
Impulse response: h(t) = 2√3 e^{-3t/2} sin(√3t/2)u(t).
(b) (A Second-Order Butterworth Filter)

H(s) = 1 / (s² + √2 s + 1)

Differential equation: y''(t) + √2 y'(t) + y(t) = x(t).
Step response: s(t) = u(t) - e^{-t/√2}[cos(t/√2) + sin(t/√2)]u(t).
Impulse response: h(t) = s'(t) = √2 e^{-t/√2} sin(t/√2)u(t).
(c) (A Third-Order Butterworth Filter)

H(s) = 0.5 / (s³ + 2s² + 2s + 1)

Differential equation: y'''(t) + 2y''(t) + 2y'(t) + y(t) = (1/2)x(t).
Step response: s(t) = (1/2)u(t) - (1/√3)e^{-t/2} sin(√3t/2)u(t) - (1/2)e^{-t}u(t).
Impulse response: h(t) = e^{-t/2}[(1/(2√3)) sin(√3t/2) - (1/2) cos(√3t/2)]u(t) + (1/2)e^{-t}u(t).
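These closed forms are easy to validate numerically. The sketch below checks part (b): the analytic impulse response must equal the derivative of the analytic step response. Nothing beyond the formulas above is assumed; the comparison uses a central difference.

```python
import math

SQ2 = math.sqrt(2)

def s_butter2(t):
    """Analytic step response of H(s) = 1/(s^2 + sqrt(2) s + 1), t >= 0."""
    return 1 - math.exp(-t / SQ2) * (math.cos(t / SQ2) + math.sin(t / SQ2))

def h_butter2(t):
    """Analytic impulse response h(t) = sqrt(2) e^{-t/sqrt(2)} sin(t/sqrt(2))."""
    return SQ2 * math.exp(-t / SQ2) * math.sin(t / SQ2)

# h(t) should equal ds/dt: compare with a central difference at a few points
dt = 1e-6
errors = [abs(h_butter2(t) - (s_butter2(t + dt) - s_butter2(t - dt)) / (2 * dt))
          for t in (0.5, 1.0, 3.0)]
max_err = max(errors)
```

The step response starts at zero and settles at the unit dc gain, and the derivative check passes to numerical precision.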
A.7 Bode Plots
Bode plots allow us to plot the frequency response over a wide frequency and amplitude range by using logarithmic scale compression. The magnitude or gain is plotted in decibels (with H_dB = 20 log|H(f)|) against log(f). For LTI systems whose transfer function is a ratio of polynomials in jω, a rough sketch can be quickly generated using linear approximations called asymptotes over different frequency ranges to obtain asymptotic Bode plots. The numerator and denominator are factored into linear and quadratic factors in jω with real coefficients. A standard form is obtained by setting the real part of each factored term to unity. A summary appears in the following review panel.
REVIEW PANEL A.13
Some Asymptotic Plots. Note: 20 dB/dec = 6 dB/oct
Term = jω: straight line with slope 20 dB/dec for all ω, with H_dB = 0 at ω = 1 rad/s.
Term = 1 + jω/α: for ω ≪ α, H_dB ≈ 0; for ω ≫ α, a straight line with slope 20 dB/dec.
If repeated k times: multiply slopes by k. If in denominator: slopes are negative.
The frequency where the slope changes is called the break frequency. For a first-order filter, the break frequency is also called the 3-dB frequency (or the half-power frequency) because the true value differs from the asymptotic value by 3 dB at this frequency.
Figure A.12 Asymptotic Bode magnitude plots for some standard forms: H(ω) = jω and H(ω) = 1 + jω/α and their reciprocals, with slopes of ±20 dB/dec
Figure A.12 shows asymptotic magnitude plots of various first-order terms. The decibel value of a constant H(ω) = K is H_dB = 20 log|K| dB, a constant for all ω. The Bode magnitude plot for a transfer function with several terms is the sum of similar plots for each of its individual terms. The composite plot can be sketched directly using the guidelines of the following review panel.
REVIEW PANEL A.14
Guidelines for Sketching a Composite Asymptotic Magnitude Plot for H(ω)
Initial slope: ±20 dB/dec (with H_dB = 0 at ω = 1 rad/s) if a term jω or 1/jω is present; 0 dB/dec if absent.
At a break frequency: the slope increases by 20k dB/dec due to a numerator term (repeated k times)
and decreases by 20k dB/dec due to a denominator term.
EXAMPLE A.12 (Bode Magnitude Plots)
(a) (Linear Factors) Let H(ω) = 40(0.25 + jω)(10 + jω) / [jω(20 + jω)].

We write this in the standard form

H(ω) = 5(1 + jω/0.25)(1 + jω/10) / [jω(1 + jω/20)]

The break frequencies, in ascending order, are:
ω_1 = 0.25 rad/s (numerator)        ω_2 = 10 rad/s (numerator)        ω_3 = 20 rad/s (denominator)

The term 1/jω provides a starting asymptote of -20 dB/dec whose value is 0 dB at ω = 1 rad/s. We can now sketch a composite plot by including the other terms:

At ω_1 = 0.25 rad/s (numerator), the slope increases (by +20 dB/dec) to 0 dB/dec. At ω_2 = 10 rad/s (numerator), the slope increases to 20 dB/dec. Finally, at ω_3 = 20 rad/s (denominator), the slope decreases to 0 dB/dec.

The constant 5 shifts the plot by 20 log 5 ≈ 14 dB. The Bode plot is shown in Figure EA.12(a).
Figure EA.12 Bode magnitude plots for Example A.12(a and b): asymptotic (dark) and exact magnitude in dB versus frequency in rad/s
(b) (Repeated Factors) Let H(ω) = (1 + jω)(1 + jω/100) / [(1 + jω/10)²(1 + jω/300)].

Its Bode plot is sketched in Figure EA.12(b). We make the following remarks:
The starting slope is 0 dB/dec since a term of the form (jω)^k is absent.
The slope changes by -40 dB/dec at ω_B = 10 rad/s due to the repeated factor.
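The asymptotic slopes of Example A.12(a) can be verified against the exact magnitude. In the sketch below (plain Python; nothing beyond the transfer function of part (a) is assumed), the gain falls 20 dB per decade well below the first break frequency and flattens out above the last one:

```python
import math

def H_dB(w):
    """Exact decibel magnitude of H(w) = 40(0.25+jw)(10+jw) / [jw(20+jw)]."""
    num = 40 * abs(0.25 + 1j * w) * abs(10 + 1j * w)
    den = abs(1j * w) * abs(20 + 1j * w)
    return 20 * math.log10(num / den)

# Well below the first break (w << 0.25) the 1/jw term dominates:
# slope is -20 dB/dec, so the gain rises 20 dB when w drops by a decade
low_slope = H_dB(1e-4) - H_dB(1e-3)

# Well above the last break (w >> 20) the slopes cancel to 0 dB/dec,
# and the gain approaches the constant 40 (about 32 dB)
high_slope = H_dB(1e5) - H_dB(1e4)
```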
A.8 Classical Analog Filter Design
Classical analog filters include Butterworth (maximally flat passband), Chebyshev I (rippled passband), Chebyshev II (rippled stopband), and elliptic (rippled passband and stopband), as shown in Figure A.13. The design of these classical analog filters typically relies on frequency specifications (passband and stopband edge(s)) and magnitude specifications (maximum passband attenuation and minimum stopband attenuation) to generate a minimum-phase filter transfer function with the smallest order that meets or exceeds specifications.

Most design strategies are based on converting the given frequency specifications to those applicable to a lowpass prototype with unit radian cutoff frequency, designing the lowpass prototype, and converting to the required filter type using frequency transformations. We concentrate only on the design of the lowpass prototype. Table A.9 describes how to obtain the prototype specifications from given filter specifications, the design recipe for Butterworth and Chebyshev filters, and frequency transformations to convert the lowpass prototype back to the required form.

For bandpass and bandstop filters, if the given specifications [f_1, f_2, f_3, f_4] are not geometrically symmetric, one of the stopband edges must be relocated (increased or decreased) in a way that the new transition widths do not exceed the original. The quadratic transformations to bandpass and bandstop filters yield transfer functions with twice the order of the lowpass prototype.

The poles of a Butterworth lowpass prototype lie equispaced on a circle of radius R = (1/ε)^{1/n} in the s-plane, while the poles of a Chebyshev prototype lie on an ellipse. The high-frequency attenuation of an nth-order Butterworth or Chebyshev lowpass filter is 20n dB/dec.
Table A.9 Analog Filter Design

NOTES:
For LP filters: ν_s = (stopband edge)/(passband edge).    For HP filters: ν_s = (passband edge)/(stopband edge).
For BP and BS filters, with band edges [f_1, f_2, f_3, f_4] in increasing order: ν_s = (f_4 - f_1)/(f_3 - f_2).
If f_0 is the center frequency, we require geometric symmetry with f_1 f_4 = f_2 f_3 = f_0².

                      Butterworth                                    Chebyshev
Ripple                ε² = 10^{0.1A_p} - 1                           ε² = 10^{0.1A_p} - 1
Order                 n = log{[(10^{0.1A_s} - 1)/ε²]^{1/2}}          n = cosh⁻¹{[(10^{0.1A_s} - 1)/ε²]^{1/2}}
                          / log(ν_s)                                     / cosh⁻¹(ν_s)
3-dB frequency        ν₃ = (1/ε)^{1/n} = R                           ν₃ = cosh[(1/n) cosh⁻¹(1/ε)]
Poles of H(s)         p_k = -R sin θ_k + jR cos θ_k                  p_k = -sin θ_k sinh α + j cos θ_k cosh α
                      θ_k = (2k - 1)π/2n                             θ_k = (2k - 1)π/2n,  α = (1/n) sinh⁻¹(1/ε)
                      k = 1, 2, …, n                                 k = 1, 2, …, n
Denominator Q_P(s)    (s - p_1)(s - p_2)…(s - p_n)                   (s - p_1)(s - p_2)…(s - p_n)
K for unit peak gain  K = Q_P(0)                                     K = Q_P(0)/√(1 + ε²)  (n even)
                                                                     K = Q_P(0)            (n odd)

For transformation of the LPP transfer function H(s) to other forms, use:
To LPF: s → s/ω_p    To HPF: s → ω_p/s    To BPF: s → (s² + ω₀²)/(sB_BP)    To BSF: s → sB_BS/(s² + ω₀²)
where ω₀ = 2πf_0 and B_BP = 2π(f_3 - f_2) for bandpass, B_BS = 2π(f_4 - f_1) for bandstop.

NOTE: For numerical computation, cosh⁻¹(x) = ln[x + (x² - 1)^{1/2}], x ≥ 1.
EXAMPLE A.13 (Analog Filter Design)
(a) (Butterworth Lowpass Filter) Design a Butterworth filter to meet the following specifications:

A_p ≤ 1 dB for f ≤ 4 kHz        A_s ≥ 20 dB for f ≥ 8 kHz

From the design equations, ν_s = f_s/f_p = 2 and ε² = 10^{0.1A_p} - 1 = 0.2589.

n = log{[(10^{0.1A_s} - 1)/ε²]^{1/2}} / log(ν_s) = log{[(10² - 1)/ε²]^{1/2}} / log(2) = 4.289  ⇒  n = 5

ν₃ = (1/ε)^{1/n} = 1.1447 = R

We have θ_k = (2k - 1)π/2n = 0.1π(2k - 1), k = 1, 2, …, 5. This gives

θ_k = 0.1π, 0.3π, 0.5π, 0.7π, 0.9π rad

The pole locations p_k = -R sin(θ_k) + jR cos(θ_k) are then
p_k = -1.1447, -0.3537 ± j1.0887, -0.9261 ± j0.6728

The denominator Q(s) may be written as

Q(s) = s⁵ + 3.7042s⁴ + 6.8607s³ + 7.8533s² + 5.5558s + 1.9652

The numerator is given by K = Q(0) = 1.9652. The transfer function of the analog lowpass prototype is then

H(s) = K/Q(s) = 1.9652 / (s⁵ + 3.7042s⁴ + 6.8607s³ + 7.8533s² + 5.5558s + 1.9652)
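The Butterworth recipe above is mechanical enough to script. A sketch in plain Python (the polynomial is multiplied out by hand rather than with a library routine; the specification values are those of this example):

```python
import math

# Butterworth lowpass prototype for Ap = 1 dB, As = 20 dB, nu_s = 2
Ap, As, nu_s = 1.0, 20.0, 2.0

eps2 = 10 ** (0.1 * Ap) - 1                    # ripple parameter epsilon^2
n = math.ceil(0.5 * math.log10((10 ** (0.1 * As) - 1) / eps2)
              / math.log10(nu_s))
R = (1 / math.sqrt(eps2)) ** (1 / n)           # pole radius = 3-dB frequency

# Poles: p_k = -R sin(th_k) + jR cos(th_k), th_k = (2k-1)pi/(2n)
poles = [-R * math.sin(th) + 1j * R * math.cos(th)
         for th in ((2 * k - 1) * math.pi / (2 * n) for k in range(1, n + 1))]

# Multiply out (s - p1)...(s - pn) to get Q(s), highest degree first
Q = [1.0 + 0j]
for p in poles:
    Q = [a + b for a, b in zip(Q + [0j], [0j] + [-p * c for c in Q])]
Q = [c.real for c in Q]   # imaginary parts cancel for conjugate pole pairs
K = Q[-1]                 # unit dc gain: K = Q(0)
```

Running this reproduces n = 5, R ≈ 1.1447, and the coefficients of Q(s) quoted above.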
(b) (Butterworth Bandstop Filter) Design a Butterworth bandstop filter with 2-dB passband edges of 30 Hz and 100 Hz, and 40-dB stopband edges of 50 Hz and 70 Hz.

The band edges are [f_1, f_2, f_3, f_4] = [30, 50, 70, 100] Hz. Since f_1f_4 = 3000 and f_2f_3 = 3500, the specifications are not geometrically symmetric. Assuming a fixed passband, we relocate the upper stopband edge f_3 to ensure geometric symmetry f_2f_3 = f_1f_4. This gives f_3 = (30)(100)/50 = 60 Hz.
The lowpass prototype band edges are ν_p = 1 rad/s and ν_s = (f_4 - f_1)/(f_3 - f_2) = 7 rad/s.
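The symmetry adjustment amounts to two lines of arithmetic; a minimal sketch using the band edges of this example:

```python
# Geometric-symmetry adjustment for the bandstop specs of Example A.13(b)
f1, f2, f3, f4 = 30.0, 50.0, 70.0, 100.0   # band edges in Hz

# Hold the passband edges (f1, f4) fixed and relocate the upper stopband
# edge f3 so that f2*f3 = f1*f4 (geometric symmetry about the center)
if f2 * f3 != f1 * f4:
    f3 = f1 * f4 / f2

nu_s = (f4 - f1) / (f3 - f2)   # lowpass prototype stopband edge
```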
We compute ε² = 10^{0.1A_p} - 1 = 0.5849 and the lowpass prototype order as

n = log{[(10^{0.1A_s} - 1)/ε²]^{1/2}} / log(ν_s)  ⇒  n = 3

The pole radius is R = (1/ε)^{1/n} = 1.0935. The pole angles are θ_k = [π/6, π/2, 5π/6] rad. Thus, p_k = -R sin θ_k + jR cos θ_k = -1.0935, -0.5468 ± j0.9470, and the lowpass prototype becomes

H_P(s) = 1.3076 / (s³ + 2.1870s² + 2.3915s + 1.3076)
With ω₀² = ω_1ω_4 = 4π²(3000) and B = 2π(f_4 - f_1) = 2π(70) rad/s, the LP2BS transformation s → sB/(s² + ω₀²) gives

H(s) = [s⁶ + 3.55(10)⁵s⁴ + 4.21(10)¹⁰s² + 1.66(10)¹⁵] / [s⁶ + 8.04(10)²s⁵ + 6.79(10)⁵s⁴ + 2.56(10)⁸s³ + 8.04(10)¹⁰s² + 1.13(10)¹³s + 1.66(10)¹⁵]
The linear and decibel magnitudes of this filter are sketched in Figure EA.13B.
Figure EA.13B Butterworth bandstop filter of Example A.13(b): (a) linear magnitude, meeting the passband specs; (b) dB magnitude of the filter in (a)
(c) (Chebyshev Lowpass Filter) Design a Chebyshev filter to meet the following specifications:

A_p ≤ 1 dB for f ≤ 4 kHz        A_s ≥ 20 dB for f ≥ 8 kHz
From the design equations, ε² = 10^{0.1A_p} - 1 = 0.2589 and

n = cosh⁻¹{[(10^{0.1A_s} - 1)/ε²]^{1/2}} / cosh⁻¹(f_s/f_p) = cosh⁻¹{[(10² - 1)/ε²]^{1/2}} / cosh⁻¹(2) = 2.783  ⇒  n = 3

ν₃ = cosh[(1/n) cosh⁻¹(1/ε)] = 1.0949

To find the poles, we first compute

α = (1/n) sinh⁻¹(1/ε) = 0.4760        θ_k = (2k - 1)π/2n = (2k - 1)π/6, k = 1, 2, 3

The poles p_k = -sinh(α) sin(θ_k) + j cosh(α) cos(θ_k) then yield

p_k = -0.4942, -0.2471 ± j0.9660

The denominator Q(s) equals

Q(s) = s³ + 0.9883s² + 1.2384s + 0.4913

Since n is odd, the numerator is K = Q(0) = 0.4913 for unit peak gain. The transfer function of the analog lowpass prototype is then

H(s) = K/Q(s) = 0.4913 / (s³ + 0.9883s² + 1.2384s + 0.4913)
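The Chebyshev recipe differs from the Butterworth one only in the order formula and the pole geometry. A sketch for this example in pure standard-library Python (math.acosh and math.asinh supply the inverse hyperbolic functions):

```python
import math

# Chebyshev I lowpass prototype for Ap = 1 dB, As = 20 dB, nu_s = 2
Ap, As, nu_s = 1.0, 20.0, 2.0

eps = math.sqrt(10 ** (0.1 * Ap) - 1)
n = math.ceil(math.acosh(math.sqrt(10 ** (0.1 * As) - 1) / eps)
              / math.acosh(nu_s))
alpha = math.asinh(1 / eps) / n
nu3 = math.cosh(math.acosh(1 / eps) / n)     # half-power frequency

# LHP poles on an ellipse: p_k = -sin(th) sinh(a) + j cos(th) cosh(a)
poles = [-math.sin(th) * math.sinh(alpha) + 1j * math.cos(th) * math.cosh(alpha)
         for th in ((2 * k - 1) * math.pi / (2 * n) for k in range(1, n + 1))]

# Multiply out (s - p1)...(s - pn) to get Q(s), highest degree first
Q = [1.0 + 0j]
for p in poles:
    Q = [a + b for a, b in zip(Q + [0j], [0j] + [-p * c for c in Q])]
Q = [c.real for c in Q]
K = Q[-1]   # n is odd, so K = Q(0) gives unit peak gain
```

Running this reproduces n = 3, α ≈ 0.4760, ν₃ ≈ 1.0949, and the Q(s) coefficients above.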
(d) (Chebyshev Bandpass Filter) Let us design a Chebyshev bandpass filter for which we are given:

Band edges: [ω_1, ω_2, ω_3, ω_4] = [0.89, 1.019, 2.221, 6.155] rad/s
Maximum passband attenuation: A_p = 2 dB        Minimum stopband attenuation: A_s = 20 dB
The frequencies are not geometrically symmetric. So, we assume fixed passband edges and compute ω₀² = ω_2ω_3 = 2.2632 (ω₀ = 1.5045). Since ω_1ω_4 > ω₀², we decrease ω_4 to ω_4 = ω₀²/ω_1 = 2.54 rad/s.
Then, B = ω_3 - ω_2 = 1.202 rad/s, ν_p = 1 rad/s, and ν_s = (ω_4 - ω_1)/B = (2.54 - 0.89)/1.202 = 1.3738 rad/s.

The value of ε² and the order n are given by

ε² = 10^{0.1A_p} - 1 = 0.5849        n = cosh⁻¹{[(10^{0.1A_s} - 1)/ε²]^{1/2}} / cosh⁻¹(ν_s) = 3.879  ⇒  n = 4

The half-power frequency is ν₃ = cosh[(1/n) cosh⁻¹(1/ε)] = 1.018. To find the LHP poles of the prototype filter, we need

α = (1/n) sinh⁻¹(1/ε) = 0.2708        θ_k = (2k - 1)π/2n = (2k - 1)π/8, k = 1, 2, 3, 4
From the LHP poles p_k = -sinh(α) sin(θ_k) + j cosh(α) cos(θ_k), we compute p_1, p_3 = -0.1049 ± j0.9580 and p_2, p_4 = -0.2532 ± j0.3968. The denominator Q_P(s) of the prototype H_P(s) = K/Q_P(s) is thus

Q_P(s) = (s - p_1)(s - p_2)(s - p_3)(s - p_4) = s⁴ + 0.7162s³ + 1.2565s² + 0.5168s + 0.2058

Since n is even, we choose K = Q_P(0)/√(1 + ε²) = 0.1634 for unit peak gain, and thus

H_P(s) = 0.1634 / (s⁴ + 0.7162s³ + 1.2565s² + 0.5168s + 0.2058)
We transform this using the LP2BP transformation s → (s² + ω₀²)/(sB) to give the eighth-order analog bandpass filter H(s) as

H(s) = 0.34s⁴ / (s⁸ + 0.86s⁷ + 10.87s⁶ + 6.75s⁵ + 39.39s⁴ + 15.27s³ + 55.69s² + 9.99s + 26.25)
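The bandpass transformation itself is just polynomial arithmetic. The sketch below carries it out in pure Python with highest-degree-first coefficient lists; only the prototype and the example's rounded values (ω₀ = 1.5045, B = 1.202, K = 0.1634) are assumed, so the result matches the quoted coefficients only to their printed precision.

```python
# LP2BP transformation s -> (s^2 + w0^2)/(sB) applied to the fourth-order
# Chebyshev prototype of Example A.13(d)
w0_sq = 1.5045 ** 2                         # w0^2 = w2 * w3
B = 1.202                                   # prototype bandwidth w3 - w2
K = 0.1634
QP = [1.0, 0.7162, 1.2565, 0.5168, 0.2058]  # prototype denominator

def polymul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def polyadd(a, b):
    if len(a) < len(b):
        a, b = b, a
    b = [0.0] * (len(a) - len(b)) + b
    return [x + y for x, y in zip(a, b)]

n = len(QP) - 1
# Clearing denominators: den(s) = sum_k QP[k] * (s^2 + w0^2)^(n-k) * (sB)^k
den = [0.0]
for k, c in enumerate(QP):
    term = [c]
    for _ in range(n - k):
        term = polymul(term, [1.0, 0.0, w0_sq])   # (s^2 + w0^2)
    for _ in range(k):
        term = polymul(term, [B, 0.0])            # (s * B)
    den = polyadd(den, term)

num_coeff = K * B ** n   # numerator K * (sB)^n, i.e., about 0.34 s^4
```

The resulting denominator has degree 8 (twice the prototype order), as stated earlier for quadratic transformations.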
The linear and decibel magnitudes of this filter are shown in Figure EA.13D.
Figure EA.13D Chebyshev I bandpass filter for Example A.13(d): (a) linear magnitude, meeting the passband specs; (b) dB magnitude of the filter in (a)