EC6502 Principles of Digital Signal Processing 1
TEXT BOOKS
1. John G. Proakis and D. G. Manolakis, "Digital Signal Processing: Principles, Algorithms and Applications", Pearson, Fourth Edition, 2007.
2. S. Salivahanan, A. Vallavaraj and C. Gnanapriya, "Digital Signal Processing", TMH/McGraw-Hill International, 2007.
REFERENCES
1. E. C. Ifeachor and B. W. Jervis, "Digital Signal Processing: A Practical Approach", Second Edition, Pearson, 2002.
2. S. K. Mitra, "Digital Signal Processing: A Computer Based Approach", Tata McGraw-Hill, 1998.
3. P. P. Vaidyanathan, "Multirate Systems and Filter Banks", Prentice Hall, Englewood Cliffs, NJ, 1993.
4. Johny R. Johnson, "Introduction to Digital Signal Processing", PHI, 2006.
UNIT I
FREQUENCY TRANSFORMATIONS
1.1 INTRODUCTION
Time domain analysis provides some information, such as the signal amplitude at each sampling instant, but it does not convey the frequency content, power or energy spectrum of the signal; hence frequency domain analysis is used.
For a discrete-time signal x(n), the Fourier Transform is denoted X(ω) and given by

X(ω) = Σ_{n=-∞}^{∞} x(n) e^(-jωn)                FT ……(1)

The DFT is denoted X(k) and given by (with ω = 2πk/N)

X(k) = Σ_{n=0}^{N-1} x(n) e^(-j2πkn/N)           DFT ……(2)

The IDFT is given as

x(n) = (1/N) Σ_{k=0}^{N-1} X(k) e^(j2πkn/N)      IDFT ……(3)
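As a quick numerical check of equations (1) and (2), the DFT and IDFT sums can be evaluated directly and compared with NumPy's built-in FFT. This is a small illustrative sketch; the test sequence x is an arbitrary choice, not one from the text.

```python
import numpy as np

# Direct evaluation of the analysis equation X(k) = sum_n x(n) e^(-j2*pi*k*n/N),
# checked against NumPy's FFT.
def dft(x):
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)                 # column of frequency indices
    return np.exp(-2j * np.pi * k * n / N) @ x

def idft(X):
    # The IDFT differs only by the factor 1/N and the sign of the exponent.
    return np.conj(dft(np.conj(X))) / len(X)

x = np.array([0.0, 1.0, 2.0, 3.0])       # arbitrary test sequence
X = dft(x)
```

Here `dft(x)` agrees with `np.fft.fft(x)`, and `idft(dft(x))` recovers x.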
For calculation of the DFT and IDFT two different methods can be used: direct evaluation of the defining equation, or the matrix (4- or 8-point) method. If x(n) is a sequence of N samples, consider WN = e^(-j2π/N) (the twiddle factor). For N = 4 the DFT matrix, with row index k and column index n, is

         k=0 | W4^0  W4^0  W4^0  W4^0
[W4] =   k=1 | W4^0  W4^1  W4^2  W4^3
         k=2 | W4^0  W4^2  W4^4  W4^6
         k=3 | W4^0  W4^3  W4^6  W4^9

         1   1   1   1
[W4] =   1  -j  -1   j
         1  -1   1  -1
         1   j  -1  -j
Examples:
Q) Compute the DFT of x(n) = {0, 1, 2, 3}.   Ans: X(k) = {6, -2+2j, -2, -2-2j}
Q) Compute the DFT of x(n) = {1, 0, 0, 1}.   Ans: X(k) = {2, 1+j, 0, 1-j}
Q) Compute the DFT of x(n) = {1, 0, 1, 0}.   Ans: X(k) = {2, 0, 2, 0}
Q) Compute the IDFT of X(k) = {2, 1+j, 0, 1-j}.   Ans: x(n) = {1, 0, 0, 1}
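The worked answers above can be reproduced with the 4-point twiddle-factor matrix; a minimal sketch:

```python
import numpy as np

# Build [W4] with entries W4^(kn), where W4 = e^(-j2*pi/4) = -j, and apply it
# to the example sequences: X(k) = [W4][x(n)].
N = 4
n = np.arange(N)
W4 = np.exp(-2j * np.pi * np.outer(n, n) / N)   # row index k, column index n

X1 = W4 @ np.array([0, 1, 2, 3])                # -> {6, -2+2j, -2, -2-2j}
X2 = W4 @ np.array([1, 0, 0, 1])                # -> {2, 1+j, 0, 1-j}
```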
Thus the DFT in matrix form is given by
X(k) = [WN][x(n)]
and the IDFT by
x(n) = (1/N) [WN]^(-1) [X(k)]
The DFT and IDFT differ only in the factor 1/N and the sign of the exponent of the twiddle factor.
2. Linearity
The linearity property states that if x1(n) ↔ X1(k) and x2(n) ↔ X2(k) are N-point DFT pairs, then
a1 x1(n) + a2 x2(n)  ↔  a1 X1(k) + a2 X2(k)
The DFT of a linear combination of two or more signals is equal to the same linear combination of the DFTs of the individual signals.
A) A sequence is said to be circularly even if it is symmetric about the point zero on the circle, i.e. x(N-n) = x(n).
B) A sequence is said to be circularly odd if it is antisymmetric about the point zero on the circle, i.e. x(N-n) = -x(n).
C) The anticlockwise direction gives the delayed sequence and the clockwise direction gives the advanced sequence. Thus a delayed or advanced sequence x'(n) is related to x(n) by a circular shift.
This property states that if the sequence is real and even, x(n) = x(N-n), then the DFT becomes

X(k) = Σ_{n=0}^{N-1} x(n) cos(2πkn/N)

This property states that if the sequence is real and odd, x(n) = -x(N-n), then the DFT becomes

X(k) = -j Σ_{n=0}^{N-1} x(n) sin(2πkn/N)

This property states that if the sequence is purely imaginary, x(n) = j xI(n), then the DFT becomes

XR(k) = Σ_{n=0}^{N-1} xI(n) sin(2πkn/N)
XI(k) = Σ_{n=0}^{N-1} xI(n) cos(2πkn/N)
5. Circular Convolution
The circular convolution property states that if x1(n) ↔ X1(k) and x2(n) ↔ X2(k) are N-point DFT pairs, then

x1(n) ⓃN x2(n)  ↔  X1(k) X2(k)

It means that the circular convolution of x1(n) and x2(n) is equal to the multiplication of their DFTs. Thus the circular convolution of two periodic discrete signals with period N is given by

y(m) = Σ_{n=0}^{N-1} x1(n) x2((m-n))N        ……(4)
Multiplication of two DFTs in the frequency domain corresponds to circular convolution of the sequences in the time domain, whereas ordinary (linear) convolution in the time domain corresponds to multiplication of the continuous spectra. The results of linear and circular convolution are in general different, but they are related to each other.
There are two different methods used to calculate circular convolution:
1) Graphical representation form
2) Matrix approach
Linear convolution of two sequences of lengths L and M returns L+M-1 elements, whereas circular convolution returns the same number of elements as the (equal-length) input sequences.
Q) The two sequences are x1(n) = {2, 1, 2, 1} and x2(n) = {1, 2, 3, 4}. Find the sequence x3(m) equal to the circular convolution of the two sequences.   Ans: x3(m) = {14, 16, 14, 16}
Q) Perform linear convolution of x(n) = {1, 2} and h(n) = {2, 1} using DFT and IDFT.
Q) Perform linear convolution of x(n) = {1, 2, 2, 1} and h(n) = {1, 2, 3} using 8-point DFT and IDFT.
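The first example above can be checked both by the defining sum and by the DFT property; a short sketch:

```python
import numpy as np

# Circular convolution y(m) = sum_n x1(n) x2((m-n) mod N), computed directly
# and via the DFT property IDFT{X1(k)*X2(k)}.
def circ_conv(x1, x2):
    N = len(x1)
    return np.array([sum(x1[n] * x2[(m - n) % N] for n in range(N))
                     for m in range(N)])

x1 = np.array([2, 1, 2, 1])
x2 = np.array([1, 2, 3, 4])
y_direct = circ_conv(x1, x2)                               # {14, 16, 14, 16}
y_dft = np.fft.ifft(np.fft.fft(x1) * np.fft.fft(x2)).real  # same via DFTs
```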
6. Multiplication
The multiplication property states that if x1(n) ↔ X1(k) and x2(n) ↔ X2(k) are N-point DFT pairs, then

x1(n) x2(n)  ↔  (1/N) X1(k) ⓃN X2(k)

i.e. multiplication of two sequences in the time domain corresponds to 1/N times the circular convolution of their DFTs.
rxy(l) = Σ_{n=0}^{N-1} x(n) y*((n-l))N

This means that multiplication of the DFT of one sequence with the conjugate DFT of another sequence is equivalent to the circular cross-correlation of these sequences in the time domain.
12. Parseval's Theorem

Σ_{n=0}^{N-1} x(n) y*(n) = (1/N) Σ_{k=0}^{N-1} X(k) Y*(k)

This equation gives the energy of a finite-duration sequence in terms of its frequency components.
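Parseval's relation is easy to verify numerically; a minimal sketch using an arbitrary pair of test sequences:

```python
import numpy as np

# Numerical check of Parseval's relation:
# sum_n x(n) y*(n) = (1/N) sum_k X(k) Y*(k), for length N = 8.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
y = rng.standard_normal(8)

lhs = np.sum(x * np.conj(y))
rhs = np.sum(np.fft.fft(x) * np.conj(np.fft.fft(y))) / 8
```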
Consider an input sequence x(n) of length L and an impulse response h(n) of the same system having M samples. The output y(n) of the system then contains N samples, where N = L+M-1. The DFT of y(n) uniquely represents y(n) in the time domain only if it also contains N samples. Multiplication of two DFTs is equivalent to circular convolution of the corresponding time-domain sequences, but the lengths of x(n) and h(n) are less than N. Hence these sequences are appended with zeros to make their length N; this is called "zero padding". The N-point circular convolution and the linear convolution then produce the same sequence, so linear convolution can be obtained by circular convolution. Thus linear filtering can be performed using the DFT.
When the input data sequence is long, it takes a long time to compute the output, so other techniques are used to filter long data sequences. Instead of computing the output of the complete input sequence, it is broken into small-length sequences. The outputs due to these small-length sequences are computed fast, and are then fitted one after another to get the final output response.
Overlap-Save Method:
Step 1> In this method L samples of the current segment and M-1 samples of the previous segment form the input data block.
Step 2> The unit sample response h(n) contains M samples; its length is made N by padding zeros, so h(n) also contains N samples.
Step 3> The N-point DFT of h(n) is H(k). Let the DFT of the m-th data block be Xm(k); then the corresponding DFT of the output is Y'm(k) = H(k) Xm(k).
Step 4> The sequence ym(n) is obtained by taking the N-point IDFT of Y'm(k). The initial M-1 samples in each output block must be discarded; the last L samples are the correct output samples. Such blocks are fitted one after another to get the final output.
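The steps above can be sketched as follows. This is a simplified illustration; the sequences x and h and the block length L are arbitrary choices, not values from the text.

```python
import numpy as np

# Overlap-save: each N-point block holds M-1 samples of the previous segment
# plus L new samples; the first M-1 IDFT outputs are discarded, the last L kept.
def overlap_save(x, h, L):
    M = len(h)
    N = L + M - 1
    H = np.fft.fft(h, N)
    # prepend M-1 zeros and pad the tail so the data divides into L-blocks
    tail = (L - len(x) % L) if len(x) % L else 0
    xp = np.concatenate([np.zeros(M - 1), x, np.zeros(tail)])
    y = []
    for start in range(0, len(xp) - (M - 1), L):
        block = xp[start:start + N]
        if len(block) < N:
            block = np.concatenate([block, np.zeros(N - len(block))])
        ym = np.fft.ifft(np.fft.fft(block) * H).real
        y.extend(ym[M - 1:])              # discard the first M-1 samples
    return np.array(y[:len(x) + M - 1])

x = np.array([1.0, 2.0, 2.0, 1.0, 0.5, -1.0])
h = np.array([1.0, 2.0, 3.0])
y = overlap_save(x, h, L=4)               # equals np.convolve(x, h)
```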
[Figure: x(n) of size N segmented into blocks x1(n), x2(n), x3(n), each of size L and overlapping the previous block by M-1 samples; the output blocks y1(n), y2(n), y3(n) are fitted one after another to give y(n) of size N.]
Overlap-Add Method:
Step 1> In this method the input sequence is divided into blocks of L samples, and each block is padded with M-1 zeros to form an N-point data block:
x1(n) = {x(0), x(1), …, x(L-1), 0, 0, …, 0}
x2(n) = {x(L), x(L+1), …, x(2L-1), 0, 0, …, 0}
x3(n) = {x(2L), x(2L+1), …, x(3L-1), 0, 0, …, 0}
Step 2> The unit sample response h(n) contains M samples; its length is made N by padding zeros, so h(n) also contains N samples.
Step 3> The N-point DFT of h(n) is H(k). Let the DFT of the m-th data block be Xm(k); then the corresponding DFT of the output is Y'm(k) = H(k) Xm(k).
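These steps can be sketched as follows (a simplified illustration with hypothetical x, h and block length L): each L-sample block is zero-padded to N = L+M-1, filtered via an N-point DFT/IDFT, and the trailing M-1 samples of each output block are added to the head of the next.

```python
import numpy as np

# Overlap-add block filtering; overlapping M-1 output samples are summed.
def overlap_add(x, h, L):
    M = len(h)
    N = L + M - 1
    H = np.fft.fft(h, N)
    y = np.zeros(len(x) + M - 1)
    for start in range(0, len(x), L):
        block = x[start:start + L]
        ym = np.fft.ifft(np.fft.fft(block, N) * H).real
        y[start:start + len(block) + M - 1] += ym[:len(block) + M - 1]
    return y

x = np.array([1.0, 2.0, 2.0, 1.0, 0.5, -1.0])
h = np.array([1.0, 2.0, 3.0])
y = overlap_add(x, h, L=4)                # equals np.convolve(x, h)
```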
[Figure: x(n) of size N divided into non-overlapping blocks x1(n), x2(n), x3(n) of size L, each padded with M-1 zeros; the last M-1 points of each output block y1(n), y2(n), … are added to the first M-1 points of the next block to give y(n) of size N.]
The DFT of a signal is used for spectrum analysis. The DFT can be computed on a digital computer or a digital signal processor. The signal to be analyzed is passed through an anti-aliasing filter and sampled at a rate Fs ≥ 2 Fmax; hence the highest frequency component is Fs/2. The frequency spectrum is plotted by taking N DFT points from L samples of the waveform; the total frequency range 2π is divided into N points. The spectrum improves as N and L are increased, but this increases the processing time. The DFT can be computed quickly using FFT algorithms, so fast processing can be done. Thus better frequency resolution is obtained by increasing the number of samples.
1. A large number of applications such as filtering, correlation analysis and spectrum analysis require calculation of the DFT. Direct computation of the DFT, however, requires a large number of computations and keeps the processor busy. Hence special algorithms, called Fast Fourier Transform (FFT) algorithms, were developed to compute the DFT quickly.
2. The radix-2 FFT algorithms are based on the divide-and-conquer approach: the N-point DFT is successively decomposed into smaller DFTs, and because of this decomposition the number of computations is reduced.
Let the N-point sequence x(n) be split into two N/2-point sequences f1(n) and f2(n), where f1(n) contains the even-numbered samples of x(n) and f2(n) contains the odd-numbered samples. This splitting operation is called decimation; since it is done on the time-domain sequence it is called "Decimation in Time" (DIT).
Thus

X(k) = Σ_{n=0}^{N-1} x(n) WN^(kn)                                   (1)

Since the sequence x(n) is split into even-numbered and odd-numbered samples,

X(k) = Σ_{m=0}^{N/2-1} x(2m) WN^(2mk) + Σ_{m=0}^{N/2-1} x(2m+1) WN^((2m+1)k)   (2)

X(k) = F1(k) + WN^k F2(k)                                           (3)

X(k+N/2) = F1(k) - WN^k F2(k)        (symmetry property)            (4)
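Equations (3) and (4) lead directly to a recursive radix-2 DIT FFT; a minimal sketch, checked against NumPy (the input sequence is an arbitrary power-of-two-length example):

```python
import numpy as np

# Recursive radix-2 DIT FFT: split x(n) into even/odd samples, combine the
# half-size DFTs F1, F2 with the twiddle factors WN^k per equations (3), (4).
def fft_dit(x):
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    F1 = fft_dit(x[0::2])            # DFT of even-numbered samples
    F2 = fft_dit(x[1::2])            # DFT of odd-numbered samples
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    return np.concatenate([F1 + W * F2, F1 - W * F2])

x = np.array([1.0, 0.0, 1.0, 0.0, 2.0, 3.0, 0.0, 1.0])
X = fft_dit(x)
```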
Fig 1 shows that the 8-point DFT computed directly (inputs x(0), x(1), …, x(7) into a single 8-point DFT block producing X(0), X(1), …, X(7)) gives no reduction in computation.
[Figure 1: direct computation of the 8-point DFT.]
Fig 3 shows each N/2-point DFT further separated into N/4-point boxes. In this case the equations become

g1(k) = P1(k) + WN^(2k) P2(k)        (5)
g1(k+N/4) = P1(k) - WN^(2k) P2(k)    (6)

[Figure: N/4-point DFT stage (e.g. x(0), x(4) → F(0), F(1)); basic butterfly: A = a + WN^r b, B = a - WN^r b.]
[Figure: complete 8-point DIT FFT signal flow graph.]
Each butterfly operation takes one complex multiplication and two complex additions. Direct computation of the DFT requires N² multiplication operations and N²-N addition operations.
From the values a and b the new values A = a + WN^r b and B = a - WN^r b are computed. Once A and B are computed, there is no need to store a and b. Thus the same memory locations can be used to store A
and B where a and b were stored; hence this is called in-place computation. The advantage of in-place computation is that it reduces the memory requirement.
Thus for the computation of one butterfly, four memory locations are required for storing the two complex numbers A and B. In every stage there are N/2 butterflies, hence a total of 2N memory locations are required per stage. Since the stages are computed successively, these memory locations can be shared. In every stage N/2 twiddle factors are required, hence the maximum storage requirement of an N-point DFT is (2N + N/2) locations.
For the 8-point DIT FFT the input data sequence is written in the order x(0), x(4), x(2), x(6), x(1), x(5), x(3), x(7), and the DFT sequence X(k) is in proper order X(0), X(1), X(2), X(3), X(4), X(5), X(6), X(7). In the DIF FFT it is exactly the opposite. The reordering is obtained by the bit reversal method.
The table shows the memory address in decimal in the first column, its binary form in the second column, and the bit-reversed value in the third column. As the FFT is implemented on a digital computer, a simple integer-division-by-2 method is used to implement the bit reversal algorithm. The flow chart for the bit reversal algorithm is as follows.
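The division-by-2 procedure can be sketched directly in code: on each pass the least significant bit of B is peeled off (integer division by 2) and shifted into the result BR.

```python
# Bit reversal by repeated integer division by 2, as in the flow chart.
def bit_reverse(b, nbits):
    br = 0
    for _ in range(nbits):
        br = (br << 1) | (b & 1)   # shift the extracted bit into the result
        b >>= 1                    # integer division by 2
    return br

# Bit-reversed input ordering for an 8-point DIT FFT (3 address bits):
order = [bit_reverse(n, 3) for n in range(8)]   # [0, 4, 2, 6, 1, 5, 3, 7]
```

This reproduces the input ordering x(0), x(4), x(2), x(6), x(1), x(5), x(3), x(7) given above.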
[Flow chart: read the decimal number B to be reversed; set I = 1, B1 = B, BR = 0; on each pass extract the least significant bit of B1 by integer division by 2 and shift it into BR; when I > log2 N, store BR as the bit reversal of B.]
In DIF, the N-point DFT is split into two N/2-point DFTs, with X(k) separated into even-indexed and odd-indexed values of k; this is called decimation in frequency (DIF FFT).

X(k) = Σ_{n=0}^{N/2-1} x(n) WN^(kn) + Σ_{n=0}^{N/2-1} x(n+N/2) WN^(k(n+N/2))      (2)

X(k) = Σ_{n=0}^{N/2-1} x(n) WN^(kn) + WN^(kN/2) Σ_{n=0}^{N/2-1} x(n+N/2) WN^(kn)

X(k) = Σ_{n=0}^{N/2-1} x(n) WN^(kn) + (-1)^k Σ_{n=0}^{N/2-1} x(n+N/2) WN^(kn)

X(k) = Σ_{n=0}^{N/2-1} [x(n) + (-1)^k x(n+N/2)] WN^(kn)                           (3)

X(2k) = Σ_{n=0}^{N/2-1} [x(n) + x(n+N/2)] WN^(2kn)                                (4)

X(2k+1) = Σ_{n=0}^{N/2-1} [x(n) - x(n+N/2)] WN^((2k+1)n)                          (5)
The DIF butterfly: from a and b compute A = a + b and B = (a - b) WN^r.
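Equations (4) and (5) similarly yield a recursive radix-2 DIF FFT; a minimal sketch, checked against NumPy (arbitrary test input):

```python
import numpy as np

# Recursive radix-2 DIF FFT: even-indexed outputs come from x(n) + x(n+N/2),
# odd-indexed outputs from (x(n) - x(n+N/2)) * WN^n, per equations (4), (5).
def fft_dif(x):
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    a = x[:N // 2] + x[N // 2:]                          # feeds X(2k)
    b = (x[:N // 2] - x[N // 2:]) * np.exp(-2j * np.pi * np.arange(N // 2) / N)
    X = np.empty(N, dtype=complex)
    X[0::2] = fft_dif(a)
    X[1::2] = fft_dif(b)
    return X

x = np.array([1.0, 2.0, 2.0, 1.0, 0.0, 1.0, 3.0, 0.0])
X = fft_dif(x)
```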
Fig 2 shows signal flow graph and stages for computation of radix-2 DIF FFT algorithm of
N=4
Fig 3 shows signal flow graph and stages for computation of radix-2 DIF FFT algorithm of
N=8
FFT algorithms compute the N-point DFT from N samples of the sequence x(n) using (N/2) log2 N complex multiplications and N log2 N complex additions. In some applications the DFT is to be computed only at selected frequencies, and when the number of required values is less than log2 N, direct computation of the DFT becomes more efficient than the FFT. This direct computation can be realized through linear filtering of x(n); such a linear-filtering computation of the DFT can be implemented using the Goertzel algorithm.
yk(n) = Σ_{m=-∞}^{∞} x(m) WN^(-k(n-m)) u(n-m)        (3)

X(k) = Σ_{m=0}^{N-1} x(m) WN^(-k(N-m)) = yk(n)|n=N   (4)

Thus the DFT can be obtained as the output of an LSI system at n = N. Such a system can give X(k) at selected values of k. Thus the DFT is computed as a linear filtering operation by the Goertzel algorithm.
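The text presents Goertzel via the first-order complex recursion; in practice the same idea is usually implemented as a second-order recursion with real coefficients, s(n) = x(n) + 2cos(2πk/N)s(n-1) - s(n-2), with X(k) recovered from the final two states. A sketch of that common form (not derived in the text):

```python
import numpy as np

# Second-order Goertzel recursion for a single DFT bin X(k).
def goertzel(x, k):
    N = len(x)
    w = 2 * np.pi * k / N
    coeff = 2 * np.cos(w)
    s1 = s2 = 0.0
    for sample in x:
        s0 = sample + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * np.exp(1j * w) - s2    # X(k) = y_k(N)

x = np.array([0.0, 1.0, 2.0, 3.0])
Xk = goertzel(x, 1)                    # -> -2 + 2j, matching the DFT examples
```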
GLOSSARY:
Fourier Transform:
The transform used to analyze signal or system characteristics in the frequency domain, which is difficult to do in the time domain.
Laplace Transform:
The Laplace transform is the basic continuous-time transform, developed to represent continuous signals in the frequency domain.
DTFT and DFT:
For analyzing discrete signals, the DTFT (Discrete Time Fourier Transform) is used, but in the DTFT the frequency variable is continuous. Since digital signal processors cannot work with continuous-frequency signals, the DFT was developed to represent discrete signals in the discrete frequency domain. The Discrete Fourier Transform transforms a discrete-time sequence of finite length N into a discrete frequency sequence of the same finite length N.
Periodicity:
If a discrete-time signal is periodic then its DFT is also periodic; i.e. if a signal or sequence repeats after N samples, it is called a periodic signal.
Symmetry:
If a signal or sequence repeats its waveform with opposite sign after N/2 samples, it is called a symmetric sequence or signal.
Linearity:
A system which satisfies the superposition principle is said to be a linear system. The DFT has the linearity property, since the DFT of a linear combination of inputs is equal to the same combination of the DFTs of the individual inputs.
Fast Fourier Transform:
The Fast Fourier Transform is an algorithm that efficiently computes the discrete Fourier transform of a sequence x(n). Direct computation of the DFT requires 2N² evaluations of trigonometric functions, 4N² real multiplications and 4N(N-1) real additions.
UNIT II
IIR FILTER DESIGN
PREREQUISITE DISCUSSION:
Basically a digital filter is a linear time-invariant discrete-time system. The terms Finite Impulse Response (FIR) and Infinite Impulse Response (IIR) are used to distinguish filter types: FIR filters are of the non-recursive type, whereas IIR filters are of the recursive type.
2.1 INTRODUCTION
Filtering is used to remove or reduce the strength of unwanted signals such as noise and to improve the quality of the required signal. To use the full channel bandwidth, two or more signals are mixed on the transmission side, and on the receiver side we would like to separate them out efficiently; hence filters are used. Digital filters are thus used very widely in signal processing.
In signal processing, the function of a filter is to remove unwanted parts of the signal, such as
random noise, or to extract useful parts of the signal, such as the components lying within a certain
frequency range.
There are two main kinds of filter, analog and digital. They are quite different in their physical
makeup and in how they work.
An analog filter uses analog electronic circuits made up from components such as resistors,
capacitors and op amps to produce the required filtering effect. Such filter circuits are widely used in
such applications as noise reduction, video signal enhancement, graphic equalizers in hi-fi systems, and
many other areas.
In analog filters the signal being filtered is an electrical voltage or current which is the direct
analogue of the physical quantity (e.g. a sound or video signal or transducer output) involved.
A digital filter uses a digital processor to perform numerical calculations on sampled values of the
signal. The processor may be a general-purpose computer such as a PC, or a specialized DSP (Digital
Signal Processor) chip.
The analog input signal must first be sampled and digitized using an ADC (analog to digital
converter). The resulting binary numbers, representing successive sampled values of the input signal, are
transferred to the processor, which carries out numerical calculations on them. These calculations
typically involve multiplying the input values by constants and adding the products together. If
necessary, the results of these
calculations, which now represent sampled values of the filtered signal, are output through a DAC
(digital to analog converter) to convert the signal back to analog form.
In a digital filter, the signal is represented by a sequence of numbers, rather than a voltage or
current.
[Block diagram: analog signal xa(t) → Sampler → Quantizer & Encoder → Digital Filter.]
1. Samplers are used for converting continuous time signal into a discrete time signal by taking
samples of the continuous time signal at discrete time instants.
2. The quantizer is used for converting a discrete-time, continuous-amplitude signal into a digital signal by expressing each sample value as a finite number of digits.
3. In the encoding operation, each quantized sample value is converted to the binary equivalent of its quantization level.
4. Digital filters are discrete-time systems used for filtering of sequences. These digital filters perform frequency-selective operations such as low pass, high pass, band pass and band reject. They are designed with digital hardware and software and are represented by difference equations.
Filters are usually classified according to their frequency-domain characteristic as lowpass, highpass,
bandpass and bandstop filters.
1. Lowpass Filter
A lowpass filter is made up of a passband and a stopband: the lower frequencies of the input signal are passed through while the higher frequencies are attenuated.
[Figure: ideal lowpass response |H(ω)| with cutoff at ±ωc.]
2. Highpass Filter
A highpass filter is made up of a stopband and a passband: the lower frequencies of the input signal are attenuated while the higher frequencies are passed.
[Figure: ideal highpass response |H(ω)| with cutoff at ±ωc.]
3. Bandpass Filter
A bandpass filter is made up of two stopbands and one passband, so that the lower and higher frequencies of the input signal are attenuated while the intervening frequencies are passed.
[Figure: ideal bandpass response |H(ω)| with band edges at ±ω1 and ±ω2.]
4. Bandstop Filter
A bandstop filter is made up of two passbands and one stopband, so that the lower and higher frequencies of the input signal are passed while the intervening frequencies are attenuated. An idealized bandstop filter frequency response has the complementary shape.
[Figure: ideal bandstop response |H(ω)|.]
5. Multipass Filter
A multipass filter begins with a stopband followed by more than one passband. By default, a
multipass filter in Digital Filter Designer consists of three passbands and
four stopbands. The frequencies of the input signal at the stopbands are attenuated
while those at the passbands are passed.
6. Multistop Filter
A multistop filter begins with a passband followed by more than one stopband. By default, a
multistop filter in Digital Filter Designer consists of three passbands and two stopbands.
1. Ideal filters have a constant gain (usually taken as unity gain) passband
characteristic and zero gain in their stop band.
2. Ideal filters have a linear phase characteristic within their passband.
3. Ideal filters also have constant magnitude characteristic.
4. Ideal filters are physically unrealizable.
FIR systems have limited (finite) memory requirements, whereas IIR systems require infinite memory.
The convolution of h(n) and x(n) for FIR systems can be written as

y(n) = Σ_{k=0}^{M-1} h(k) x(n-k)        (1)

Implementation of the direct form structure of the FIR filter is based upon this equation.
[Figure: direct form FIR structure — a tapped delay line whose taps are weighted by h(0), h(1), …, with the partial products h(0)x(n), h(0)x(n)+h(1)x(n-1), … accumulated to give y(n).]
1) There are M-1 unit delay blocks. One unit delay block requires one memory location, hence the direct form structure requires M-1 memory locations.
2) The multiplication of h(k) and x(n-k) is performed for k = 0 to M-1, hence M multiplications and M-1 additions are required per output sample.
3) The direct form structure is often called a transversal or tapped delay line filter.
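The tapped-delay-line operation of equation (1) can be sketched directly; the 3-tap impulse response h is a hypothetical example, and the output is checked against direct convolution.

```python
import numpy as np

# Direct-form (transversal) FIR filter: y(n) = sum_k h(k) x(n-k), using a
# delay line of the M most recent inputs (delay[k] holds x(n-k)).
def fir_direct(h, x):
    M = len(h)
    delay = np.zeros(M)
    y = np.zeros(len(x))
    for n, sample in enumerate(x):
        delay = np.roll(delay, 1)      # shift the tapped delay line
        delay[0] = sample
        y[n] = np.dot(h, delay)        # M multiplications, M-1 additions
    return y

h = np.array([0.5, 0.3, 0.2])          # hypothetical 3-tap impulse response
x = np.array([1.0, 0.0, 0.0, 0.0, 1.0])
y = fir_direct(h, x)
```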
In the cascade form, stages are cascaded (connected) in series: the output of one system is the input to the next. With K stages cascaded in total, the overall system function H is given by

H(z) = Π_{k=1}^{K} Hk(z)        (3)

Each H1(z), H2(z), … is a second-order section realized in direct form as shown in the figure below.
[Figure: second-order FIR section with delays z^-1, coefficients bk0, bk1, bk2 and adders producing y(n).]
The system function (z-transform) of an IIR filter is given as

H(z) = Σ_{k=0}^{M} bk z^(-k) / (1 + Σ_{k=1}^{N} ak z^(-k))        (2)

Here H1(z) = Σ_{k=0}^{M} bk z^(-k) and H2(z) = 1 / (1 + Σ_{k=1}^{N} ak z^(-k)).
The overall IIR system can be realized as a cascade of the two functions H1(z) and H2(z), where H1(z) represents the zeros of H(z) and H2(z) represents all the poles of H(z).
DIRECT FORM - I
[Figure: Direct Form I structure — input x(n) through feedforward coefficients b0, b1, …, bM and output feedback coefficients -a1, …, -aN, each branch with unit delays z^-1 and adders, producing y(n).]
Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
2. There are M+N-1 unit delay blocks. One unit delay block requires one memory location.
Hence direct form structure requires M+N-1 memory locations.
DIRECT FORM - II
2. The two delay lines of the all-pole and all-zero systems can be merged into a single delay line.
3. The Direct Form II structure has a reduced memory requirement compared to the Direct Form I structure; hence it is called the canonic form.
[Figure: Direct Form II structure — a single delay line z^-1 with feedback coefficients -a1, …, -aN and feedforward coefficients b0, b1, …, bN, with adders producing y(n) from x(n).]
In the cascade form, stages are cascaded (connected) in series: the output of one system is the input to the next. With K stages cascaded in total, the overall system function H is given by

H(z) = Π_{k=1}^{K} Hk(z)        (3)

where

Hk(z) = (bk0 + bk1 z^-1 + bk2 z^-2) / (1 + ak1 z^-1 + ak2 z^-2)        (2)

Each H1(z), H2(z), … is a second-order section realized in direct form as shown below.
[Figure: second-order IIR section in Direct Form II with feedforward coefficients bk0, bk1, bk2 and feedback coefficients -ak1, -ak2.]
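The cascade idea can be sketched numerically: filtering through second-order sections one after another gives the same output as filtering once with the multiplied-out overall polynomials. The section coefficients below are hypothetical examples, and `filt` is a simple helper implementing the direct-form difference equation.

```python
import numpy as np

# Direct-form difference equation: y(n) = sum_k b[k]x(n-k) - sum_k a[k]y(n-k).
def filt(b, a, x):
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = sum(b[k] * x[n - k] for k in range(len(b)) if n >= k)
        y[n] -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n >= k)
    return y

# Two hypothetical second-order sections (b, a), each with a[0] = 1.
sections = [([1.0, 0.5, 0.2], [1.0, -0.3, 0.1]),
            ([0.8, 0.1, 0.0], [1.0, 0.2, 0.05])]
x = np.sin(0.3 * np.arange(32))

y_cascade = x
for b, a in sections:                 # output of each stage feeds the next
    y_cascade = filt(b, a, y_cascade)

# Overall H(z): multiply the numerator and denominator polynomials.
b_tot = np.polymul(sections[0][0], sections[1][0])
a_tot = np.polymul(sections[0][1], sections[1][1])
y_direct = filt(b_tot, a_tot, x)
```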
[Figure: parallel form — x(n) applied simultaneously to sections H1(z), H2(z), …, HK(z); the section outputs are added to give y(n).]
The IIR filter design methods are:
1. IMPULSE INVARIANCE
2. BILINEAR TRANSFORMATION
3. BUTTERWORTH APPROXIMATION
The impulse invariance method is the simplest method for designing IIR filters. Important features of this method are:
1. In the impulse invariance method, an analog filter is converted into a digital filter simply by taking the unit sample response of the digital filter to be the sampled version of the impulse response of the analog filter. The sampled signal is obtained by putting t = nT, hence
h(n) = ha(nT),  n = 0, 1, 2, …
where h(n) is the unit sample response of the digital filter and T is the sampling interval.
2. The main disadvantage of this method is that it does not correspond to a simple algebraic mapping of the s-plane to the z-plane: the mapping from analog frequency to digital frequency is many-to-one. The segments (2k-1)π/T ≤ Ω ≤ (2k+1)π/T of the jΩ axis are all mapped onto the unit circle -π ≤ ω ≤ π. This takes place because of sampling.
3. Frequency aliasing is a second disadvantage of this method: because of aliasing, the frequency response of the resulting digital filter is not identical to the original analog frequency response.
4. Because of these factors, its application is limited to the design of low-frequency filters such as LPFs or a limited class of bandpass filters.
z is represented as re^(jω) in polar form, and the relationship between the z-plane and the s-plane is given by z = e^(sT), where s = σ + jΩ:

z = e^(sT) = e^((σ + jΩ)T) = e^(σT) · e^(jΩT)

Comparing this with the polar form we have

r = e^(σT) and ω = ΩT

Here we have three conditions:
1) If σ = 0 then r = 1
2) If σ < 0 then 0 < r < 1
3) If σ > 0 then r > 1
[Figure: mapping of the s-plane (σ, jΩ axes) onto the z-plane — the jΩ axis maps to the unit circle |z| = 1, the left half-plane maps inside the unit circle, and the right half-plane maps outside it.]
Let the analog filter have the partial fraction expansion

Ha(s) = Σ_{k=1}^{n} ck / (s - pk)        (1)

where pk are the poles of the analog filter and ck are the coefficients of the partial fraction expansion. The impulse response ha(t) of the analog filter is obtained by the inverse Laplace transform:

ha(t) = Σ_{k=1}^{n} ck e^(pk t)          (2)

The unit sample response of the digital filter is obtained by uniform sampling of ha(t): h(n) = ha(nT), n = 0, 1, 2, …

h(n) = Σ_{k=1}^{n} ck e^(pk nT)          (3)

H(z) = Σ_{k=1}^{N} ck Σ_{n=0}^{∞} (e^(pk T) z^(-1))^n        (4)
Using the standard geometric-series relation and comparing equations (1) and (4), the system function of the digital filter follows from the s-domain to z-domain transformation pairs:

1.  1/(s - pk)            →  1/(1 - e^(pk T) z^-1)
2.  (s+a)/((s+a)² + b²)   →  (1 - e^(-aT) cos(bT) z^-1) / (1 - 2e^(-aT) cos(bT) z^-1 + e^(-2aT) z^-2)
3.  b/((s+a)² + b²)       →  (e^(-aT) sin(bT) z^-1) / (1 - 2e^(-aT) cos(bT) z^-1 + e^(-2aT) z^-2)
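The first transform pair can be checked numerically: for Ha(s) = 1/(s + a), whose impulse response is ha(t) = e^(-at), the digital filter H(z) = 1/(1 - e^(-aT) z^-1) has exactly the sampled impulse response h(n) = ha(nT). The values of a and T below are arbitrary choices.

```python
import numpy as np

a, T, L = 2.0, 0.1, 50
n = np.arange(L)
h_sampled = np.exp(-a * n * T)           # h(n) = ha(nT), ha(t) = e^(-at)

# Impulse response of H(z) = 1/(1 - e^(-aT) z^-1) via its difference equation
# y(n) = x(n) + e^(-aT) y(n-1), driven by a unit impulse.
pole = np.exp(-a * T)                    # analog pole -a maps to e^(-aT)
h_digital = np.zeros(L)
y_prev = 0.0
for i in range(L):
    x_i = 1.0 if i == 0 else 0.0
    y_prev = x_i + pole * y_prev
    h_digital[i] = y_prev
```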
The method of filter design by impulse invariance suffers from aliasing; the bilinear transformation method was developed to overcome this drawback. In the analog domain the frequency axis is an infinitely long straight line, while in the sampled-data z-plane it is the unit circle. The bilinear transformation is a method of squashing the infinite straight analog frequency axis so that it becomes finite.
1. The bilinear transformation (BZT) is a mapping from the analog s-plane to the digital z-plane. This conversion maps analog poles to digital poles and analog zeros to digital zeros; thus all poles and zeros are mapped.
3. There is a one-to-one correspondence between continuous-time and discrete-time frequency points: the entire range of Ω is mapped only once into the range -π ≤ ω ≤ π.
5. The main disadvantage is frequency warping: the mapping changes the shape of the desired filter frequency response, in particular the shape of the transition bands.
z is represented as re^(jω) in polar form, and the relationship between the z-plane and the s-plane in the BZT method is given as

s = (2/T) · (z - 1)/(z + 1)

Substituting z = re^(jω):

s = (2/T) · (re^(jω) - 1)/(re^(jω) + 1)
  = (2/T) · (r(cos ω + j sin ω) - 1)/(r(cos ω + j sin ω) + 1)

Separating real and imaginary parts (s = σ + jΩ):

σ = (2/T) · (r² - 1)/(1 + r² + 2r cos ω)

Ω = (2/T) · (2r sin ω)/(1 + r² + 2r cos ω)

When r = 1, σ = 0 and

Ω = (2/T) · sin ω/(1 + cos ω) = (2/T) tan(ω/2)

ω = 2 tan^(-1)(ΩT/2)
The above equations show that in the BZT the frequency relationship is non-linear.
[Figure: plot of ω = 2 tan^(-1)(ΩT/2) versus ΩT, showing the compression (warping) of high analog frequencies.]
3. Impulse invariance is generally used for low-frequency designs, such as IIR LPFs and a limited class of bandpass filters; the BZT method is used for designing LPF, HPF and almost all types of bandpass and bandstop filters.
4. In impulse invariance the frequency relationship is linear; in the BZT it is non-linear, and frequency warping (frequency compression) results from this non-linearity.
5. In impulse invariance all poles are mapped from the s-plane to the z-plane by the relationship zk = e^(pk T), but the zeros in the two domains do not satisfy the same relationship; in the BZT all poles and zeros are mapped.
[Table fragment, N = 3 normalized Butterworth functions — LPF: 1/(s³ + 2s² + 2s + 1); HPF: s³/(s³ + 2s² + 2s + 1).]
Step 1. Prewarp the cutoff frequency: Ω*c = (2/T) tan(ωc T/2).
Step 2. Find the frequency-scaled analog transfer function.
Step 3. Apply the BZT, i.e. replace s by (2/T)·((z-1)/(z+1)), and find the desired transfer function of the digital filter. (The factor 2/T may equivalently be dropped from both steps, since it cancels.)
Example:
Q) Design a first-order highpass Butterworth filter with cutoff frequency 1 kHz at a sampling frequency of 10^4 samples/sec, using the BZT method.
Step 4. Find the digital filter transfer function by replacing s with (z-1)/(z+1):

H(z) = (z - 1)/(1.325z - 0.675)
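The designed H(z) can be sanity-checked by evaluating its frequency response on the unit circle: a first-order Butterworth HPF should have zero gain at dc, -3 dB gain at the 1 kHz cutoff (ω = 0.2π at fs = 10^4 sps), and gain approaching 1 at fs/2.

```python
import numpy as np

# Evaluate H(z) = (z-1)/(1.3249 z - 0.6751) at z = e^(j*omega).
def H(w):
    z = np.exp(1j * w)
    return (z - 1) / (1.3249 * z - 0.6751)

gain_dc = abs(H(0.0))                 # HPF: should be 0 at dc
gain_cutoff = abs(H(0.2 * np.pi))     # should be 1/sqrt(2) (-3 dB)
gain_high = abs(H(np.pi))             # should be 1 at fs/2
```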
Q) Design a second-order lowpass Butterworth filter with cutoff frequency 1 kHz at a sampling frequency of 10^4 samples/sec.
Q) A first-order lowpass Butterworth filter has a bandwidth of 1 rad/sec. Use the BZT method to design a digital filter of 20 Hz bandwidth at a sampling frequency of 60 samples/sec.
Q) A second-order lowpass Butterworth filter has a bandwidth of 1 rad/sec. Use the BZT method to obtain the transfer function H(z) of a digital filter with a 3 dB cutoff frequency of 150 Hz and a sampling frequency of 1.28 kHz.
Q) The transfer function (s² + 1)/(s² + s + 1) describes a notch filter with notch frequency 1 rad/sec. Design a digital notch filter with the following specifications:
(1) Notch frequency = 60 Hz
(2) Sampling frequency = 960 samples/sec.
The filter passes all frequencies below Ωc; this is called the passband of the filter. The filter blocks all frequencies above Ωc; this is called the stopband. Ωc is called the cutoff or critical frequency.
No practical filter can provide the ideal characteristic, hence approximations of the ideal characteristic are used. Such approximations are standard and are used for filter design. Three approximations are regularly used:
a) Butterworth filter approximation
b) Chebyshev filter approximation
c) Elliptic filter approximation
Butterworth filters are defined by the property that the magnitude response is maximally flat in the passband.
[Figure: Butterworth magnitude-squared response |Ha(Ω)|² versus Ω with cutoff Ωc.]
1
|Ha(Ω)|2=
1 + (Ω/Ωc)2N
The squared magnitude function for an analog butterworth filter is of the form.
1
|Ha(Ω)|2= 1 + (Ω/Ωc)2N
N indicates order of the filter and Ωc is the cutoff frequency (-3DB frequency).
At s = jΩ the magnitudes of H(s) and H(−s) are the same, hence

Ha(s) Ha(−s) = 1 / [1 + (−s²/Ωc²)^N]

To find the poles of Ha(s)Ha(−s), find the roots of the denominator in the above equation:

−s²/Ωc² = (−1)^(1/N) = e^(j(2k+1)π/N),  k = 0, 1, …, N−1

s² = −Ωc² e^(j(2k+1)π/N)

Pk = ± jΩc e^(j(2k+1)π/2N)

Since e^(jπ/2) = j,

Pk = ± Ωc e^(jπ/2) e^(j(2k+1)π/2N)

Pk = ± Ωc e^(j(N+2k+1)π/2N)   (1)

This equation gives the pole positions of H(s) and H(−s).
The frequency response characteristic |Ha(Ω)|² is as shown. As the order N of the filter increases, the
Butterworth characteristic comes closer to the ideal characteristic. At higher orders, such as N = 16,
the Butterworth characteristic closely approximates the ideal filter characteristic; an infinite order
filter (N → ∞) would be required to obtain the ideal characteristic exactly.
(Figure: |Ha(Ω)|² for increasing orders N = 2, 6, 18, approaching the ideal response.)
(Figure: low pass specification |Ha(Ω)| showing passband level Ap, stopband level As, and the frequencies Ωp, Ωc, Ωs.)
At the stopband edge Ωs the specification requires

1 / [1 + (Ωs/Ωc)^2N] ≤ As²

and at the passband edge Ωp it requires

1 / [1 + (Ωp/Ωc)^2N] ≥ Ap²

Combining the two conditions gives

(Ωs/Ωp)^2N = [(1/As²) − 1] / [(1/Ap²) − 1]

so

N = 0.5 · log{ [(1/As²) − 1] / [(1/Ap²) − 1] } / log(Ωs/Ωp)   (2)

If the attenuations are given in dB, As(dB) = −20 log As, so

As = 10^(−As/20)
(As)^(−2) = 10^(As/10) = 10^(0.1 As dB)

Substituting into (2),

N = 0.5 · log[ (10^(0.1 As) − 1) / (10^(0.1 Ap) − 1) ] / log(Ωs/Ωp)   (4)
Example
(Figure: desired low pass response |Ha(Ω)| with passband level 0.89125 up to 0.2π and stopband level 0.17783 beyond 0.3π.)
Filter type - low pass filter
Ap = 0.89125
As = 0.17783
Ωp = 0.2π
Ωs = 0.3π

N = 0.5 · log{ [(1/As²) − 1] / [(1/Ap²) − 1] } / log(Ωs/Ωp)

N = 5.88

Hence N = 6.

Ωc = Ωp / [(1/Ap²) − 1]^(1/2N)
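The worked numbers above can be verified directly. The sketch below (plain Python; names ours) evaluates equation (2) with the given specifications, rounds the order up, and computes the cutoff from the passband edge:

```python
import math

Ap, As = 0.89125, 0.17783            # passband and stopband magnitude levels
wp, ws = 0.2 * math.pi, 0.3 * math.pi  # band edge frequencies, rad

# Equation (2): required filter order
N = 0.5 * math.log10((1 / As**2 - 1) / (1 / Ap**2 - 1)) / math.log10(ws / wp)
order = math.ceil(N)                 # round up to the next integer order

# Cutoff frequency from the passband edge condition
Wc = wp / (1 / Ap**2 - 1) ** (1 / (2 * order))
```

Running this gives N ≈ 5.89 so order = 6 and Wc ≈ 0.7032, matching the pole computation that follows.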
Pk = ± Ωc e^(j(N+2k+1)π/2N), with N = 6 and Ωc = 0.7032:

k = 0: P0 = ± 0.7032 e^(j7π/12)  → −0.182 + j0.679, 0.182 − j0.679
k = 1: P1 = ± 0.7032 e^(j9π/12)  → −0.497 + j0.497, 0.497 − j0.497
k = 2: P2 = ± 0.7032 e^(j11π/12) → −0.679 + j0.182, 0.679 − j0.182

For a stable filter, only the poles lying in the left half of the s plane are selected. Hence

Ha(s) = Ωc⁶ / [(s − s1)(s − s1*)(s − s2)(s − s2*)(s − s3)(s − s3*)]

Hence

Ha(s) = (0.7032)⁶ / [(s + 0.182 − j0.679)(s + 0.182 + j0.679)(s + 0.497 − j0.497)(s + 0.497 + j0.497)(s + 0.679 − j0.182)(s + 0.679 + j0.182)]

Ha(s) = 0.1209 / { [(s + 0.182)² + (0.679)²] [(s + 0.497)² + (0.497)²] [(s + 0.679)² + (0.182)²] }
Q) Design a second order low pass Butterworth filter whose cutoff frequency is 1 kHz at a
sampling frequency of 10^4 samples/s. Use the BZT and the Butterworth approximation.
When the cutoff frequency Ωc of the low pass filter is equal to 1 it is called a normalized filter.
Frequency transformation techniques are used to generate high pass, band pass and band stop
filters from the low pass filter system function:

1. Low Pass → Low Pass: s → s/ωlp, where ωlp is the passband edge frequency of the new LPF.
2. Low Pass → High Pass: s → ωhp/s, where ωhp is the passband edge frequency of the HPF.
3. Low Pass → Band Pass: s → (s² + ωl ωh) / [s(ωh − ωl)], where ωh and ωl are the higher and lower band edge frequencies.
4. Low Pass → Band Stop: s → s(ωh − ωl) / (s² + ωh ωl), where ωh and ωl are the higher and lower band edge frequencies.
The corresponding digital-domain frequency transformation for the band stop case is:
4. Band Stop: z^−1 → (z^−2 − a1 z^−1 + a2) / (a2 z^−2 − a1 z^−1 + 1)
Example:
Q) Design a high pass Butterworth filter whose cutoff frequency is 30 Hz at a sampling frequency
of 150 Hz. Use the BZT and frequency transformation.
Step 4. Find the digital filter transfer function. Replace s by (z−1)/(z+1):

H(z) = (z − 1) / (1.7265z − 0.2735)
Q) Design a second order band pass Butterworth filter with a passband from 200 Hz to 300 Hz
and a sampling frequency of 2000 Hz. Use the BZT and frequency transformation.
Q) Design a second order band pass Butterworth filter which meets the following specifications:
Lower cutoff frequency = 210 Hz
Upper cutoff frequency = 330 Hz
Sampling frequency = 960 samples/s
Use the BZT and frequency transformation.
GLOSSARY:
System Design:
Usually, in IIR filter design, an analog filter is designed first and then transformed to a digital
filter; the conversion involves mapping the desired digital filter specifications into an equivalent
analog filter.
Warping Effect:
At low frequencies the digital frequency ω and the analog frequency Ω are nearly proportional, but at
high frequencies the relation between ω and Ω becomes non-linear, so the digital frequency response
deviates from the analog one. Both the amplitude and phase responses are affected by this warping effect.
Prewarping:
The Warping Effect is eliminated by prewarping of the analog filter. The analog frequencies are
prewarped and then applied to the transformation.
Infinite Impulse Response (IIR) filters are a type of digital filter that has an infinite impulse response.
These filters are designed from analog filters, which are then transformed to the digital domain.
In the bilinear transformation method the transformation from analog to digital maps every analog
frequency to a unique digital frequency, although the mapping itself is non-linear, which is why
prewarping is needed.
Filter:
A filter is one which passes the required band of signals and stops the other unwanted band of frequencies.
Pass band:
The Band of frequencies which is passed through the filter is termed as passband.
Stopband:
The band of frequencies which are stopped are termed as stop band.
UNIT III
FIR FILTER DESIGN
PREREQUISITE DISCUSSION:
FIR filters can easily be designed to have exactly linear phase. They can be realized both
recursively and non-recursively, and there is greater flexibility to control the shape of their
magnitude response. Errors due to roundoff noise are less severe in FIR filters, mainly because
feedback is not used.
1. An FIR filter can always provide a linear phase response, which means signals in the passband
suffer no dispersion. Hence when no phase distortion is wanted, FIR filters are preferable over
IIR. Phase distortion always degrades system performance, so in applications like speech
processing and data transmission over long distances FIR filters are preferred for this
characteristic.
2. FIR filters are more stable than IIR filters because of their non-feedback nature.
3. Quantization noise can be made negligible in FIR filters, and sharp cutoff FIR filters
can be designed easily.
4. A disadvantage of FIR filters is that they need a higher order than IIR filters to achieve a
similar magnitude response.
A system is stable only if it produces a bounded output for every bounded input; this is the
stability definition for any system.
The impulse response h(n) = {b0, b1, b2, …} of an FIR filter has only finitely many nonzero terms,
so y(n) is bounded whenever the input x(n) is bounded. This means an FIR system produces a bounded
output for every bounded input; hence FIR systems are always stable.
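The boundedness argument can be illustrated with a short sketch (the coefficients and input below are our own illustrative choices): the output never exceeds the sum of the coefficient magnitudes times the input bound.

```python
def fir(h, x):
    """Direct-form FIR convolution: y(n) = sum_k h(k) x(n - k)."""
    return [sum(hk * x[n - k] for k, hk in enumerate(h) if n - k >= 0)
            for n in range(len(x))]

h = [0.25, 0.5, 0.25]                              # example coefficients b0, b1, b2
x = [1.0 if n % 3 else -1.0 for n in range(50)]    # bounded input, |x(n)| <= 1

y = fir(h, x)
bound = sum(abs(c) for c in h) * max(abs(v) for v in x)
assert all(abs(v) <= bound + 1e-12 for v in y)     # |y(n)| <= (sum |h(k)|) * max |x(n)|
```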
The various methods used for FIR filter design are as follows:
1. Fourier series method
2. Windowing method
3. DFT method
4. Frequency sampling method (IFT method)
Consider the ideal LPF frequency response, as shown in Fig. 1, with a normalized angular cutoff
frequency Ωc.
1. In the Fourier series method the limits of the summation index run from −∞ to ∞, but the filter
must have finitely many terms, so the limits are changed to −Q to Q where Q is some finite integer.
This truncation may result in poor convergence of the series: abrupt truncation of an infinite series
is equivalent to multiplying it by a rectangular sequence, and at points of discontinuity oscillations
appear in the resulting response.
2. Consider the example of an LPF with desired frequency response Hd(ω) as shown in the figure.
The oscillations or ringing take place near the band edge of the filter.
3. This ringing is caused by the side lobes in the frequency response W(ω) of the window function.
The oscillatory behavior is called the "Gibbs phenomenon".
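The Gibbs overshoot is easy to observe numerically. The sketch below (plain Python; the cutoff and truncation limit Q are our own example values) abruptly truncates the ideal LPF coefficients and evaluates the resulting response, whose peak overshoots 1 by roughly 9% near the band edge:

```python
import math

wc = 0.4 * math.pi        # example cutoff frequency
Q = 25                    # truncation limit of the Fourier series (example)

def h(n):
    # Ideal LPF coefficients: sin(wc n)/(pi n), with wc/pi at n = 0.
    return wc / math.pi if n == 0 else math.sin(wc * n) / (math.pi * n)

def H_trunc(w):
    # Response of the abruptly truncated (rectangular-windowed) series;
    # real-valued because the truncated h(n) is symmetric.
    return h(0) + 2 * sum(h(n) * math.cos(w * n) for n in range(1, Q + 1))

peak = max(H_trunc(0.001 * k * math.pi) for k in range(1001))
```

The peak stays near 1.09 no matter how large Q is made; only the ripple gets squeezed closer to the band edge, which is why windowing (next section) is used instead of plain truncation.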
Rectangular Window: This is the most basic windowing method. It requires no operations
because its values are either 1 or 0, and it creates an abrupt discontinuity that results in sharp
roll-offs but large ripples:
w(n) = 1 for 0 ≤ n ≤ N
     = 0 otherwise
Triangular Window: The computational simplicity of this window, a simple convolution of two
rectangle windows, and the lower sidelobes make it a viable alternative to the rectangular window.
Kaiser Window: This windowing method is designed to generate a sharp central peak. It has reduced
side lobes and a narrow transition band, and is thus commonly used in FIR filter design.
Hamming Window: This windowing method generates a moderately sharp central peak. Its ability to
generate a maximally flat response makes it convenient for speech processing filtering.
Hanning Window: This windowing method generates a maximally flat filter design.
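The three simplest windows can be generated from their standard closed forms, as sketched below (the length N is an example choice). Note how the raised-cosine windows taper smoothly to small endpoint values while the rectangular window ends abruptly:

```python
import math

N = 21              # window length (example)
M = N - 1

rectangular = [1.0] * N
# Hamming: 0.54 - 0.46 cos(2 pi n / (N - 1))
hamming = [0.54 - 0.46 * math.cos(2 * math.pi * n / M) for n in range(N)]
# Hanning: 0.5 - 0.5 cos(2 pi n / (N - 1))
hanning = [0.5 - 0.5 * math.cos(2 * math.pi * n / M) for n in range(N)]
```

Multiplying the truncated ideal-LPF coefficients by one of these sequences, instead of truncating abruptly, is exactly the windowing method listed above.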
Filters can be designed from its pole zero plot. Following two constraints should be imposed
while designing the filters.
1. All poles should be placed inside the unit circle in order for the filter to be stable. However,
zeros can be placed anywhere in the z plane. FIR filters are all-zero filters and hence always
stable; IIR filters are stable only when all poles of the filter are inside the unit circle.
2. All complex poles and zeros occur in complex conjugate pairs in order for the filter coefficients
to be real.
In the design of low pass filters, the poles should be placed near the unit circle at points corresponding
to low frequencies (near ω = 0) and zeros should be placed near or on the unit circle at points
corresponding to high frequencies (near ω = π). The opposite is true for high pass filters.
A notch filter is a filter that contains one or more deep notches, or ideally perfect nulls, in its
frequency response characteristic. Notch filters are useful in many applications where specific
frequency components must be eliminated; for example, instrumentation and recording systems
require that the power-line frequency of 60 Hz and its harmonics be eliminated.
To create a null in the frequency response of a filter at a frequency ω0, simply introduce a
pair of complex-conjugate zeros on the unit circle at angle ω0.
Comb filters are similar to notch filters except that the nulls occur periodically across the
frequency band, like periodically spaced teeth. The frequency response characteristic |H(ω)| of a
notch filter is as shown.
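Placing the conjugate zero pair on the unit circle gives an exact null, which a few lines of Python confirm. Here ω0 = π/8, which corresponds to the 60 Hz notch at 960 samples/s from the earlier design question:

```python
import cmath
import math

w0 = math.pi / 8    # notch frequency in rad/sample (60 Hz at 960 samples/s)

# Zeros at e^{+j w0} and e^{-j w0}:  H(z) = 1 - 2 cos(w0) z^-1 + z^-2
b = [1.0, -2.0 * math.cos(w0), 1.0]

def H(w):
    z = cmath.exp(1j * w)
    return sum(c * z ** (-k) for k, c in enumerate(b))

assert abs(H(w0)) < 1e-12          # perfect null at the notch frequency
assert abs(H(math.pi)) > 1.0       # frequencies far from w0 pass through
```

In practice a pair of poles is usually added just inside the unit circle at the same angle to narrow the notch; the all-zero version above shows only the null-placement idea.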
(Figure: notch filter magnitude response |H(ω)| with nulls at ω0 and ω1.)
A digital resonator is a special two-pole bandpass filter with a pair of complex-conjugate poles
located near the unit circle. The name resonator refers to the fact that the filter has a large
magnitude response in the vicinity of the pole locations. Digital resonators are useful in many
applications, including simple bandpass filtering and speech generation.
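A minimal resonator sketch follows (the pole radius r and angle theta are example values). The gain b0 is chosen so the response is exactly 1 at ω = θ, and the magnitude falls off away from the pole angle:

```python
import cmath
import math

r, theta = 0.95, math.pi / 3   # pole radius and angle (example values)

# Two-pole resonator: H(z) = b0 / (1 - 2 r cos(theta) z^-1 + r^2 z^-2)
# b0 chosen so that |H(e^{j theta})| = 1 exactly.
b0 = (1 - r) * math.sqrt(1 + r * r - 2 * r * math.cos(2 * theta))

def Hmag(w):
    z = cmath.exp(1j * w)
    return abs(b0 / (1 - 2 * r * math.cos(theta) / z + (r * r) / z ** 2))
```

The closer r is to 1, the sharper the resonance peak near θ becomes.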
Ideal filters are not physically realizable because they are anti-causal, and only causal
systems are physically realizable.
Proof:
Take the example of an ideal lowpass filter:

H(ω) = 1 for −ωc ≤ ω ≤ ωc
H(ω) = 0 elsewhere

The unit sample response of this ideal LPF can be obtained by taking the IFT of H(ω):

h(n) = (1/2π) ∫[−π, π] H(ω) e^(jωn) dω   (1)

h(n) = (1/2π) ∫[−ωc, ωc] e^(jωn) dω   (2)

For n ≠ 0,

h(n) = (1/2π) [e^(jωn)/(jn)] evaluated from −ωc to ωc
     = [1/(2πjn)] (e^(jωc n) − e^(−jωc n))
     = sin(ωc n)/(πn)

For n = 0,

h(0) = (1/2π) [ω] evaluated from −ωc to ωc = ωc/π

i.e.

h(n) = sin(ωc n)/(πn) for n ≠ 0
h(n) = ωc/π for n = 0
An LSI system is causal if its unit sample response satisfies h(n) = 0 for n < 0.
Here h(n) extends from −∞ to ∞, so h(n) ≠ 0 for n < 0. This means the causality condition is not
satisfied by the ideal low pass filter. Hence the ideal low pass filter is non-causal and not
physically realizable.
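A two-line check of the formula derived above (the cutoff is an example choice) makes the non-causality concrete: the impulse response is nonzero at negative n and symmetric about n = 0.

```python
import math

wc = 0.25 * math.pi          # example cutoff

def h(n):
    # Impulse response of the ideal LPF derived above.
    return wc / math.pi if n == 0 else math.sin(wc * n) / (math.pi * n)

assert abs(h(-3)) > 1e-3     # nonzero for n < 0, so the filter is non-causal
assert h(-3) == h(3)         # and symmetric about n = 0
```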
The order of a digital filter can be defined as the number of previous inputs (stored in the processor's
memory) used to calculate the current output.
This is illustrated by the filters given as examples in the previous section.
Example (1): yn = xn
This is a zero order filter, since the current output yn depends only on the current
input xn and not on any previous inputs.
Q) For each of the following filters, state the order of the filter and identify the values of its
coefficients:
(a) yn = 2xn - xn-1 A) Order = 1: a0 = 2, a1 = -1
(b) yn = xn-2 B) Order = 2: a0 = 0, a1 = 0, a2 = 1
(c) yn = xn - 2xn-1 + 2xn-2 + xn-3 C) Order = 3: a0 = 1, a1 = -2, a2 = 2, a3 = 1
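The three answers above can be checked by implementing the general difference equation and feeding it a unit impulse, whose output is simply the coefficient sequence (a small sketch; the function name is ours):

```python
def apply_fir(a, x):
    # y_n = a[0] x_n + a[1] x_{n-1} + ...  (order = len(a) - 1)
    return [sum(a[k] * x[n - k] for k in range(len(a)) if n - k >= 0)
            for n in range(len(x))]

impulse = [1.0, 0.0, 0.0, 0.0]
print(apply_fir([2, -1], impulse))      # [2.0, -1.0, 0.0, 0.0]  -> case (a)
print(apply_fir([0, 0, 1], impulse))    # [0.0, 0.0, 1.0, 0.0]   -> case (b)
```

For case (c) the impulse response is likewise [1, -2, 2, 1], i.e. the coefficients a0 … a3.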
GLOSSARY:
FIR Filters:
Symmetric FIR filters have impulse responses that occur as mirror images in the first and
second quadrants, or in the third and fourth quadrants, or both. Antisymmetric FIR filters
have impulse responses that occur as mirror images in the first and third quadrants, or in
the second and fourth quadrants, or both.
Linear Phase:
An FIR filter is said to have linear phase if its phase response is a linear function of
frequency, which holds when the impulse response is symmetric or antisymmetric.
Frequency Response:
The Frequency response of the Filter is the relationship between the angular
frequency and the Gain of the Filter.
Gibbs Phenomenon:
The abrupt truncation of a Fourier series results in oscillations in both the passband
and the stopband. These oscillations are due to the slow convergence of the Fourier
series and are termed the Gibbs phenomenon.
Windowing Technique:
To avoid these oscillations, instead of truncating the Fourier coefficients we multiply
the Fourier series by a finite weighting sequence called a window, which has non-zero
values over the required interval and zero values elsewhere.
UNIT – IV
FINITE WORDLENGTH EFFECTS
4.1 Number Representation
In digital signal processing, (B + 1)-bit fixed-point numbers are usually
represented as two's-complement signed fractions in the format

b0 . b−1 b−2 … b−B

The number represented is then

X = −b0 + b−1 2^−1 + b−2 2^−2 + ⋯ + b−B 2^−B   (3.1)

where b0 is the sign bit and the number range is −1 ≤ X < 1. The advantage of this
representation is that the product of two numbers in the range from −1 to 1 is
another number in the same range. Floating-point numbers are represented as

X = (−1)^s m 2^c   (3.2)

where s is the sign bit, m is the mantissa, and c is the characteristic or
exponent. To make the representation of a number unique, the mantissa is
normalized so that 0.5 ≤ m < 1.
Although floating-point numbers are always represented in the form of (3.2), the
way in which this representation is actually stored in a machine may differ. Since
m ≥ 0.5, it is not necessary to store the 2^−1-weight bit of m, which is always set.
Therefore, in practice numbers are usually stored as

X = (−1)^s (0.5 + f) 2^c   (3.3)

where f is an unsigned fraction, 0 ≤ f < 0.5.
Most floating-point processors now use the IEEE Standard 754 32-bit floating-point
format for storing numbers. According to this standard the exponent is stored
as an unsigned integer p where

p = c + 126   (3.4)

Therefore, a number is stored as

X = (−1)^s (0.5 + f) 2^(p−126)   (3.5)

where s is the sign bit, f is a 23-bit unsigned fraction in the range 0 ≤ f < 0.5, and p
is an 8-bit unsigned integer in the range 0 ≤ p ≤ 255. The total number of bits is 1 +
23 + 8 = 32. For example, in IEEE format 3/4 is written (−1)^0 (0.5 + 0.25) 2^0, so s =
0, p = 126, and f = 0.25. The value X = 0 is a unique case and is represented by
all bits zero (i.e., s = 0, f = 0, and p = 0). Although the 2^−1-weight mantissa bit is
not actually stored, it does exist, so the mantissa has 24 bits plus a sign bit.
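The (s, p, f) fields of (3.5) can be pulled out of a stored 32-bit float with the standard library, as sketched below (the helper name is ours). For 3/4 this reproduces s = 0, p = 126, f = 0.25 exactly as in the example above:

```python
import struct

def ieee_fields(x):
    """Unpack a 32-bit IEEE float into (s, p, f) as used in (3.5)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    s = bits >> 31
    p = (bits >> 23) & 0xFF              # stored exponent, p = c + 126
    f = (bits & 0x7FFFFF) / 2.0 ** 24    # fraction f, 0 <= f < 0.5
    return s, p, f

s, p, f = ieee_fields(0.75)
print(s, p, f)                           # 0 126 0.25
x = (-1.0) ** s * (0.5 + f) * 2.0 ** (p - 126)   # reconstruct via (3.5)
```

The reconstruction applies to normalized numbers; the special cases (zero, subnormals, infinities) use reserved p values.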
In fixed-point arithmetic, quantization can be performed either after each multiply or after
all products have been summed with double-length precision.
We will examine three types of fixed-point quantization: rounding, truncation,
and magnitude truncation. If X is an exact value, then the rounded value will be
denoted Qr(X), the truncated value Qt(X), and the magnitude-truncated value
Qmt(X). If the quantized value has B bits to the right of the decimal point, the
quantization step size is

Δ = 2^−B   (3.6)

Since rounding selects the quantized value nearest the unquantized value, it gives a
value which is never more than ±Δ/2 away from the exact value. If we denote the
rounding error by

εr = Qr(X) − X   (3.7)

then

−Δ/2 ≤ εr ≤ Δ/2   (3.8)

Truncation simply discards the low-order bits, giving a quantized value that is
always less than or equal to the exact value, so

−Δ < εt ≤ 0   (3.9)

Magnitude truncation chooses the nearest quantized value that has a magnitude less
than or equal to the exact value, so

−Δ < εmt < Δ   (3.10)

The error resulting from quantization can be modeled as a random variable
uniformly distributed over the appropriate error range. Therefore, calculations with
roundoff error can be considered error-free calculations that have been corrupted
by additive white noise. The mean of this noise for rounding is

m_εr = E{εr} = (1/Δ) ∫[−Δ/2, Δ/2] εr dεr = 0   (3.11)

where E{·} represents the operation of taking the expected value of a random
variable. Similarly, the variance of the noise for rounding is

σ²εr = E{(εr − m_εr)²} = (1/Δ) ∫[−Δ/2, Δ/2] (εr − m_εr)² dεr = Δ²/12   (3.12)

Likewise, for truncation,

m_εt = E{εt} = −Δ/2
σ²εt = E{(εt − m_εt)²} = Δ²/12   (3.13)

and, for magnitude truncation,

m_εmt = E{εmt} = 0
σ²εmt = E{(εmt − m_εmt)²} = Δ²/3   (3.14)
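The statistics (3.11)-(3.13) can be verified empirically with a simple Monte Carlo sketch (plain Python; B and the sample count are our own choices): rounding errors average to zero with variance Δ²/12, while truncation errors are biased by −Δ/2.

```python
import math
import random

B = 8
delta = 2.0 ** (-B)                        # step size, eq (3.6)

def q_round(x):                            # round to the nearest multiple of delta
    return math.floor(x / delta + 0.5) * delta

def q_trunc(x):                            # two's-complement truncation (floor)
    return math.floor(x / delta) * delta

random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(200000)]

er = [q_round(x) - x for x in xs]
et = [q_trunc(x) - x for x in xs]

mean_r = sum(er) / len(er)                           # ~ 0, eq (3.11)
var_r = sum(e * e for e in er) / len(er) - mean_r**2 # ~ delta^2 / 12, eq (3.12)
mean_t = sum(et) / len(et)                           # ~ -delta/2, eq (3.13)
```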
Notice that if the order of summation of the product terms in the convolution
summation is changed, then the order in which the h(k)'s appear in (3.32)
changes. If the order is changed so that the h(k) with the smallest magnitude comes first,
followed by the next smallest, etc., then the roundoff noise variance is
minimized. However, performing the convolution summation in nonsequential
order greatly complicates data indexing and so may not be worth the reduction
obtained in roundoff noise.
where e(n) is a random roundoff noise sequence. Since e(n) is injected at the
same point as the input, it propagates through a system with impulse
response h(n). Therefore, for fixed-point arithmetic with rounding, the output
roundoff noise variance from (3.6), (3.12), (3.25), and (3.33) is

σ²o = (Δ²/12) Σ(n = −∞ to ∞) h²(n) = (Δ²/12) Σ(n = 0 to ∞) a^2n = (2^−2B/12) · 1/(1 − a²)   (3.36)
With fixed-point arithmetic there is the possibility of overflow following
addition. To avoid overflow it is necessary to restrict the input signal amplitude.
This can be accomplished either by placing a scaling multiplier at the filter
input or by simply limiting the maximum input signal amplitude. Consider the
case of the first-order filter of (3.34). The transfer function of this filter is

H(e^jω) = Y(e^jω)/X(e^jω) = 1/(e^jω − a)   (3.37)

so

|H(e^jω)|² = 1/(1 + a² − 2a cos ω)   (3.38)

and

|H(e^jω)|max = 1/(1 − |a|)   (3.39)

The peak gain of the filter is 1/(1 − |a|), so limiting input signal amplitudes to
|x(n)| ≤ 1 − |a| will make overflows unlikely.
An expression for the output roundoff noise-to-signal ratio can easily be
obtained for the case where the filter input is white noise, uniformly distributed
over the interval from −(1 − |a|) to (1 − |a|) [4,5]. In this case

σ²x = [1/(2(1 − |a|))] ∫[−(1−|a|), (1−|a|)] x² dx = (1/3)(1 − |a|)²   (3.40)

so, from (3.25),

σ²y = (1/3) · (1 − |a|)²/(1 − a²)   (3.41)

Combining (3.36) and (3.41) then gives

σ²o/σ²y = [ (2^−2B/12) · 1/(1 − a²) ] · [ 3(1 − a²)/(1 − |a|)² ] = (3 · 2^−2B) / [12(1 − |a|)²]   (3.42)

Notice that the noise-to-signal ratio increases without bound as |a| → 1.
Similar results can be obtained for the case of the causal second-order filter.
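Before moving to the second-order case, equation (3.36) can be checked by simulation. The sketch below (plain Python; B, a, and the sample count are our own choices) runs an exact and a rounded first-order filter side by side on a scaled white input and compares the measured error power against the prediction:

```python
import math
import random

B = 10
delta = 2.0 ** (-B)
a = 0.9

def q_round(x):
    return math.floor(x / delta + 0.5) * delta

random.seed(1)
y_exact = y_quant = 0.0
acc = 0.0
n_samp = 200000
for _ in range(n_samp):
    x = random.uniform(-(1 - a), 1 - a)     # scaled input, as in (3.40)
    y_exact = a * y_exact + x               # infinite-precision filter
    y_quant = q_round(a * y_quant + x)      # output rounded each step
    acc += (y_quant - y_exact) ** 2

measured = acc / n_samp
predicted = (delta ** 2 / 12.0) / (1.0 - a * a)   # eq (3.36)
```

The measured value agrees with (3.36) to within the tolerance expected of the white-noise error model.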
There are two noise sources contributing to e(n) if quantization is performed after
each multiply, and there is one noise source if quantization is performed after
summation. Since

Σ(n = −∞ to ∞) h²(n) = [(1 + r²)/(1 − r²)] · 1/[(1 + r²)² − 4r² cos²θ]   (3.46)

the output roundoff noise is

σ²o = V · (2^−2B/12) · [(1 + r²)/(1 − r²)] · 1/[(1 + r²)² − 4r² cos²θ]   (3.47)

where V = 1 for quantization after summation, and V = 2 for quantization
after each multiply. To obtain an output noise-to-signal ratio we note that

H(e^jω) = 1/(1 − 2r cos θ e^−jω + r² e^−j2ω)   (3.48)

and, using the approach of [6],

|H(e^jω)|²max = 1 / { 4r² [sat(u) − u]² + (1 − r²)² sin²θ },  u = [(1 + r²)/(2r)] cos θ   (3.49)

where

sat(μ) = 1 for μ > 1
sat(μ) = μ for −1 ≤ μ ≤ 1
sat(μ) = −1 for μ < −1   (3.50)

Following the same approach as for the first-order case then gives

σ²o/σ²y = 3V · (2^−2B/12) · 1 / { 4r² [sat(u) − u]² + (1 − r²)² sin²θ }   (3.51)

Figure 3.1 is a contour plot showing the noise-to-signal ratio of (3.51) for V = 1
in units of the noise variance of a single quantization, 2^−2B/12. The plot is
symmetrical about θ = 90°, so only the range from 0° to 90° is shown. Notice
that as r → 1, the roundoff noise increases without bound. Also notice that the
noise increases as θ → 0°.
It is possible to design state-space filter realizations that minimize fixed-point
roundoff noise [7]-[10]. Depending on the transfer function being realized, these
structures may provide a roundoff noise level that is orders of magnitude lower
than for a nonoptimal realization. The price paid for this reduction in roundoff
noise is an increase in the number of computations required to implement the
filter. For an Nth-order filter the increase is from roughly 2N multiplies for a
direct form realization to roughly (N + 1)² for an optimal realization. However,
if the filter is realized by the parallel or cascade connection of first- and second-
order optimal subfilters, the increase is only to about 4N multiplies. Furthermore,
near-optimal realizations exist that increase the number of multiplies to only
about 3N [10].
FIGURE 3.1: Normalized fixed-point roundoff noise variance (contours over pole radius r = 0.01 to 0.99 and pole angle θ).
Notice that while the input is zero except for the first sample, the output oscillates
with amplitude 1/8 and period 6.
Limit cycles are primarily of concern in fixed-point recursive filters. As long as
floating-point filters are realized as the parallel or cascade connection of first- and
second-order subfilters, limit cycles will generally not be a problem, since limit
cycles are practically not observable in first- and second-order systems
implemented with 32-bit floating-point arithmetic [12]. It has been shown that such
systems must have an extremely small margin of stability for limit cycles to exist
at anything other than underflow levels, which are at an amplitude of less than
10^−38 [12]. There are at least three ways of dealing with limit cycles when fixed-point
arithmetic is used. One is to determine a bound on the maximum limit cycle
amplitude, expressed as an integral number of quantization steps [13]. It is then
possible to choose a word length that makes the limit cycle amplitude acceptably
low. Alternatively, limit cycles can be prevented by randomly rounding calculations
up or down [14]; however, this approach is complicated to implement. The third
approach is to properly choose the filter realization structure and then quantize
the filter calculations using magnitude truncation [15,16]. This approach has the
disadvantage of producing more roundoff noise than truncation or rounding [see
(3.12)-(3.14)].
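A zero-input limit cycle is easy to provoke with a coarse quantizer. In the sketch below (our own illustrative example, not taken from the text) a first-order filter with a = 7/8 and a 3-bit fractional word length gets stuck inside the dead band instead of decaying to zero:

```python
import math

B = 3
delta = 2.0 ** (-B)          # coarse step: 1/8
a = 0.875

def q_round(x):              # round-half-up to a multiple of delta
    return math.floor(x / delta + 0.5) * delta

y = 0.75                     # initial condition; the input is zero from here on
for _ in range(100):
    y = q_round(a * y)

# Infinite precision would give y -> 0, but the quantized filter sticks at 0.5,
# inside the dead band |y| <= (delta/2) / (1 - |a|) = 0.5.
print(y)                     # 0.5
```

Raising B shrinks delta and hence the dead band, which is exactly the word-length remedy described above.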
y(n) = Qr[ (7/8) y(n − 1) − (5/8) y(n − 2) + x(n) ]   (3.72)

In this case we apply the input

x(n) = (3/4) δ(n) − (5/8) δ(n − 1) = { 3/4, −5/8, 0, 0, … }   (3.73)
One way to avoid overflow oscillations is to scale the filter calculations so as to render
overflow impossible. However, this may unacceptably restrict the filter dynamic range.
Another method is to force completed sums-of-products to saturate at ±1, rather than
overflowing [18,19]. It is important to saturate only the completed sum, since intermediate
overflows in two's-complement arithmetic do not affect the accuracy of the final
result. Most fixed-point digital signal processors provide for automatic saturation
of completed sums if their saturation arithmetic feature is enabled. Yet
another way to avoid overflow oscillations is to use a filter structure for which
any internal filter transient is guaranteed to decay to zero [20]. Such structures are
desirable anyway, since they tend to have low roundoff noise and to be insensitive
to coefficient quantization [21].
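The difference between wraparound and saturation on overflow can be sketched for fractions in [−1, 1) (a simplified model; the 16-bit clamp value is an example choice):

```python
def wrap_add(a, b):
    """Two's-complement-style wraparound addition on the interval [-1, 1)."""
    s = a + b
    return ((s + 1.0) % 2.0) - 1.0

def sat_add(a, b):
    """Saturating addition: clamp the completed sum at the ends of the range."""
    return max(-1.0, min(a + b, 1 - 2.0 ** -15))   # 1 - 2^-15 for a 16-bit word

print(wrap_add(0.75, 0.75))   # -0.5  (overflow wraps to a large negative value)
print(sat_add(0.75, 0.75))    # 0.999969482421875  (clamped near +1)
```

The sign flip produced by wraparound is what sustains overflow oscillations; saturation removes it at the cost of a small clipping error.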
FIGURE: Realizable pole locations for the difference equation of (3.76).
The sparseness of realizable pole locations near z = ± 1 will result in a large coefficient
quantization error for poles in this region.
Figure 3.4 gives an alternative structure to (3.77) for realizing the transfer function of (3.76).
Notice that quantizing the coefficients of this structure corresponds to quantizing Xr and Xi.
As shown in Fig. 3.5 from [5], this results in a uniform grid of realizable pole locations.
Therefore, large coefficient quantization errors are avoided for all pole locations.
It is well established that filter structures with low roundoff noise tend to be robust to
coefficient quantization, and vice versa [22]-[24]. For this reason, the uniform grid
structure of Fig. 3.4 is popular also because of its low roundoff noise. Likewise, the low-
noise realizations of [7]-[10] can be expected to be relatively insensitive to coefficient
quantization, and digital wave filters and lattice filters that are derived from low-sensitivity
analog structures tend to have not only low coefficient sensitivity but also low roundoff
noise [25,26].
It is well known that in a high-order polynomial with clustered roots, the root location is a
very sensitive function of the polynomial coefficients. Therefore, filter poles and zeros can
be much more accurately controlled if higher order filters are realized by breaking them up
into the parallel or cascade connection of first- and second-order subfilters. One exception
to this rule is the case of linear-phase FIR filters in which the symmetry of the polynomial
coefficients and the spacing of the filter zeros around the unit circle usually permits an
acceptable direct realization using the convolution summation.
Given a filter structure it is necessary to assign the ideal pole and zero locations to the
realizable locations. This is generally done by simply rounding or truncating the filter
coefficients to the available number of bits, or by assigning the ideal pole and zero locations
to the nearest realizable locations. A more complicated alternative is to consider the original
filter design problem as a problem in discrete
optimization, and choose the realizable pole and zero locations that give the best
approximation to the desired filter response [27]- [30].
the other hand, requires a quantization after every multiply and after every add in the
convolution summation. With 32-bit floating-point arithmetic these quantizations introduce a
small enough error to be insignificant for many applications.
When realizing IIR filters, either a parallel or cascade connection of first- and second-order
subfilters is almost always preferable to a high-order direct-form realization. With the
availability of very low-cost floating-point digital signal processors, like the Texas
Instruments TMS320C32, it is highly recommended that floating-point arithmetic be used
for IIR filters. Floating-point arithmetic simultaneously eliminates most concerns regarding
scaling, limit cycles, and overflow oscillations. Regardless of the arithmetic employed, a
low roundoff noise structure should be used for the second- order sections. Good choices
are given in [2] and [10]. Recall that realizations with low fixed-point roundoff noise also
have low floating-point roundoff noise. The use of a low roundoff noise structure for the
second-order sections also tends to give a realization with low coefficient quantization
sensitivity. First-order sections are not as critical in determining the roundoff noise and
coefficient sensitivity of a realization, and so can generally be implemented with a simple
direct form structure.
GLOSSARY:
Quantization:
The total number of bits in x is reduced by one of two methods, truncation or rounding. These are
known as quantization processes.
Input Quantization Error:
The quantized signal is stored in a b-bit register, so nearby input values map to the same digital
representation. The resulting error is termed the input quantization error.
Product Quantization Error:
The multiplication of a b-bit number with another b-bit number results in a 2b-bit number, but it
must be stored in a b-bit register. The resulting error is termed the product quantization error.
Co-efficient Quantization Error:
When analog filter coefficients are quantized for digital implementation, stable poles located near
the edge of the jΩ axis may map to unstable poles in the digital domain. The resulting error is
termed the coefficient quantization error.
Limit Cycle Oscillations:
If the input is made zero the output should decay to zero, but because of quantization effects the
system may instead oscillate within a certain band of values.
Overflow Limit Cycle Oscillations:
Overflow error occurs in addition when the sum of two numbers exceeds the representable range. To
avoid overflow error, saturation arithmetic is used.
Dead band:
The range of amplitudes within which the output oscillates or sticks under zero input is termed the
dead band of the filter. The output may hold a fixed positive value or oscillate between a positive
and a negative value.
Signal Scaling:
The inputs of a summer are scaled before the addition is executed, to rule out any possibility of
overflow after the addition. The scaling factor s0 is applied to the inputs to avoid overflow.
UNIT V
APPLICATIONS OF DSP
PRE REQUISITE DISCUSSION:
The time domain waveform is transformed to the frequency domain using a filter bank. The strength of
each frequency band is analyzed and quantized based on how much effect they have on the perceived
decompressed signal.
1. In a speech recognition system one can input speech or voice using a microphone. The analog
speech signal is converted to a digital speech signal by a speech digitizer; such a digital signal
is called digitized speech.
2. The digitized speech is processed by the DSP system. Significant features of the speech, such
as its formants, energy, and linear prediction coefficients, are extracted. The template of these
extracted features is compared with standard reference templates, and the closest matching
template is taken as the recognized word.
3. Voice-operated consumer products like TVs, VCRs, radios, lights and fans, and voice-operated
telephone dialing, are examples of DSP-based speech recognition devices.
Figure: Speech synthesis model. An impulse train generator (voiced source) or a random number generator (unvoiced source) excites a time-varying digital filter, which produces the synthetic speech.
1. For voiced sounds the pulse generator is selected as the signal source, while for unvoiced sounds the noise generator is selected.
2. The linear prediction coefficients are used as the coefficients of the digital filter. Depending upon these coefficients, the signal is passed through and shaped by the digital filter.
3. A low-pass filter removes any high-frequency noise from the synthesized speech. Because of their linear phase characteristic, FIR filters are mostly used as the digital filters.
Figure: Speech synthesis using LPC. A pulse generator (controlled by the pitch period, for voiced speech) or a white noise generator (for unvoiced speech) excites a time-varying digital filter, whose filter coefficients shape the synthetic speech.
2. The time domain waveform is transformed to the frequency domain using a filter bank. The strength of each frequency band is analyzed and quantized based on how much effect it has on the perceived decompressed signal.
3. The DSP processor is also used in the digital video disk (DVD), which uses MPEG-2 compression, and in Web video content applications such as Intel Indeo and RealAudio.
4. Sound synthesis and manipulation, filtering, distortion and stretching effects are also performed by DSP processors. ADCs and DACs are used in signal generation and recording.
5.4 ECHO CANCELLATION
In the telephone network, subscribers are connected to the telephone exchange by a two-wire circuit, while exchanges are connected to each other by four-wire circuits. The two-wire circuit is bidirectional and carries signals in both directions; the four-wire circuit has separate paths for transmission and reception. A hybrid coil at the exchange provides the interface between the two-wire and four-wire circuits and also provides impedance matching between them, so that ideally there are no echoes or reflections on the lines. In practice this impedance matching is not perfect because it is length dependent. Hence, for echo cancellation, DSP techniques are used as follows.
1. A DSP-based acoustic echo canceller works in the following fashion: it records the sound going to the loudspeaker and subtracts it from the signal coming from the microphone. The sound going through the echo loop is transformed and delayed, and noise is added, all of which complicate the subtraction process.
2. Let x(n) be the input signal going to the loudspeaker and let d(n) be the signal picked up by the microphone, which will be called the desired signal. The signal after
subtraction will be called the error signal and will be denoted by e(n). The adaptive filter will try to identify the equivalent filter seen by the system from the loudspeaker to the microphone, which is the transfer function of the room the loudspeaker and microphone are in.
3. This transfer function will depend heavily on the physical characteristics of the environment. In broad terms, a small room with absorbing walls will originate just a few first-order reflections, so its transfer function will have a short impulse response. On the other hand, large rooms with reflecting walls will have a transfer function whose impulse response decays slowly in time, so echo cancellation will be much more difficult.
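The scheme described above can be sketched in a few lines. This is an illustrative simulation, not production echo-canceller code: the room's echo path is modelled as a hypothetical 3-tap FIR response, and a normalized LMS adaptive filter learns it so the echo estimate can be subtracted from the microphone signal:

```python
# NLMS acoustic echo cancellation sketch (noiseless, toy-sized room response).
import random

random.seed(1)
room = [0.6, 0.3, 0.1]                 # hypothetical echo path (loudspeaker -> mic)
L = len(room)
w = [0.0] * L                          # adaptive filter's estimate of the room
x = [random.uniform(-1, 1) for _ in range(2000)]   # far-end loudspeaker signal

mu = 0.5                               # NLMS step size
for n in range(L, len(x)):
    frame = x[n - L + 1:n + 1][::-1]   # x(n), x(n-1), x(n-2)
    d = sum(h * s for h, s in zip(room, frame))    # echo picked up by the mic
    y = sum(c * s for c, s in zip(w, frame))       # canceller's echo estimate
    e = d - y                                      # residual after subtraction
    power = sum(s * s for s in frame) + 1e-8
    w = [c + mu * e * s / power for c, s in zip(w, frame)]   # NLMS update

# after adaptation the estimated path is close to the true room response
assert all(abs(c - h) < 0.01 for c, h in zip(w, room))
```

In a real canceller the filter is much longer (hundreds of taps) and the microphone signal also contains near-end speech and noise, which is why the adaptation must be controlled carefully.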
5.5 VIBRATION ANALYSIS
1. Machines such as motors and ball-bearing systems vibrate depending upon the speed of their movement.
2. In order to detect faults in the system, spectrum analysis can be performed. A healthy machine shows a fixed frequency pattern corresponding to its normal vibrations. If there is a fault in the machine, this predetermined spectrum changes: new frequencies representing the fault appear in the spectrum.
3. This spectrum analysis can be performed by a DSP system, which can also monitor other parameters of the machine simultaneously.
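The idea can be sketched with a plain DFT. The signals and frequencies below are purely illustrative (a 10-cycle "healthy" vibration plus a fault line appearing at bin 33):

```python
# Vibration monitoring sketch: compare a machine's magnitude spectrum
# against a healthy baseline and flag bins where new energy appears.
import cmath, math

def dft_magnitude(x):
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) / N for k in range(N // 2)]

N = 128
healthy = [math.sin(2 * math.pi * 10 * n / N) for n in range(N)]
faulty = [healthy[n] + 0.5 * math.sin(2 * math.pi * 33 * n / N) for n in range(N)]

baseline = dft_magnitude(healthy)
spectrum = dft_magnitude(faulty)
new_peaks = [k for k in range(N // 2) if spectrum[k] - baseline[k] > 0.1]
assert new_peaks == [33]     # the fault introduces a new spectral line at bin 33
```

A real monitoring system would use an FFT with windowing and average many frames, but the detection principle is the same.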
Speech coding: APC and SBC
Adaptive predictive coding (APC) is a technique used for speech coding, that is, data compression of speech signals. APC assumes that the input speech signal is repetitive with a period significantly longer than the average frequency content. Two predictors are used in APC. The high-frequency components (up to 4 kHz) are estimated using a 'spectral' or 'formant' predictor and the low-frequency components (50-200 Hz) by a 'pitch' or 'fine structure' predictor (see Figure 7.4). The spectral estimator may be of order 1-4 and the pitch estimator about order 10. The low-frequency components of the speech signal are due to the movement of the tongue and chin; the high-frequency components originate from the vocal chords and the noise-like sounds (like in 's') produced in the front of the mouth.
The output signal y(n), together with the predictor parameters obtained adaptively in the encoder, is transmitted to the decoder, where the speech signal is reconstructed. The decoder has the same structure as the encoder, but the predictors are not adaptive and are invoked in the reverse order. The prediction parameters are adapted for blocks of data corresponding to, for instance, 20 ms time periods.
APC is used for coding speech at 9.6 and 16 kbit/s. The algorithm works well in noisy environments, but unfortunately the quality of the processed speech is not as good as for other methods like CELP, described below.
A sub-band coder (SBC) splits the speech signal into N sub-band channels using a bank of band-pass filters, followed by decimators, encoders (for instance ADPCM) and a multiplexer combining the data bits coming from the sub-band channels. The output of the multiplexer is then transmitted to the sub-band decoder, which has a demultiplexer splitting the multiplexed data stream back into N sub-band channels. Every sub-band channel has a decoder (for instance ADPCM), followed by an interpolator and a band-pass filter. Finally, the outputs of the band-pass filters are summed and a reconstructed output signal results.
Sub-band coding is commonly used at bit rates between 9.6 kbit/s and 32 kbit/s and performs quite well. The complexity of the system may however be considerable if the number of sub-bands is large. The design of the band-pass filters is also a critical topic when working with sub-band coding systems.
Figure 7.6 The LPC model: a periodic pitch excitation (voiced) or a noise source (unvoiced), selected by a voiced/unvoiced switch, drives the synthesis filter to produce the synthetic speech.
The first vocoder was designed by H. Dudley in the 1930s and demonstrated at the New York Fair in 1939. Vocoders have become popular as they achieve reasonably good speech quality at low data rates, from 2.4 kbit/s to 9.6 kbit/s. There are many types of vocoders (Marven and Ewers, 1993); some of the most common techniques will be briefly presented below.
Most vocoders rely on a few basic principles. Firstly, the characteristics of the speech signal are assumed to be fairly constant over a time of approximately 20 ms, hence most signal processing is performed on (overlapping) data blocks of 20-40 ms length. Secondly, the speech model consists of a time-varying filter, corresponding to the acoustic properties of the mouth, and an excitation signal. The excitation signal is either a periodic waveform, as created by the vocal chords, or a random noise signal for production of 'unvoiced' sounds, for example 's' and 'f'. The filter parameters and excitation parameters are assumed to be independent of each other and are commonly coded separately.
Linear predictive coding (LPC) is a popular method, which has however been replaced by newer approaches in many applications. LPC works exceedingly well at low bit rates, and the LPC parameters contain sufficient information about the speech signal to be used in speech recognition applications. The LPC model is shown in Figure 7.6.
LPC is basically an auto-regressive model (see Chapter 5) and the vocal tract is modelled as a time-varying all-pole filter (IIR filter) having the transfer function
H(z) = 1 / (1 + a1 z^-1 + a2 z^-2 + ... + ap z^-p)   (7.17)
where p is the order of the filter. The excitation signal e(n), being either noise or a periodic waveform, is fed to the filter via a variable gain factor G. The output signal can be expressed in the time domain as
y(n) = -a1 y(n-1) - a2 y(n-2) - ... - ap y(n-p) + G e(n)   (7.18)
that is, a linear combination of previous output samples and the excitation signal (linear predictive coding). The filter coefficients a_k are time varying.
The model above describes how to synthesize the speech given the pitch information (whether noise or periodic excitation should be used), the gain and the filter parameters. These parameters must be determined by the encoder or analyser, taking the original speech signal x(n) as input.
The analyser windows the speech signal in blocks of 20-40 ms, usually with a Hamming window (see Chapter 5). These blocks or 'frames' are repeated every 10-30 ms, hence there is a certain overlap in time. Every frame is then analysed with respect to the parameters mentioned above.
Firstly, the pitch frequency is determined. This also tells whether we are dealing with a voiced or unvoiced speech signal. This is a crucial part of the system, and many pitch detection algorithms have been proposed. If the segment of the speech signal is voiced and has a clear periodicity, or if it is unvoiced and not periodic, things are quite easy. Segments having properties in between these two extremes are difficult to analyse; no algorithm has been found so far that is 'perfect' for all listeners.
Now, the second step of the analyser is to determine the gain and the filter parameters. This is done by estimating the speech signal using an adaptive predictor. The predictor has the same structure and order as the filter in the synthesizer. Hence, the output of the predictor is
x̂(n) = -a1 x(n-1) - a2 x(n-2) - ... - ap x(n-p)   (7.19)
where x̂(n) is the predicted input speech signal and x(n) is the actual input signal. The filter coefficients a_k are determined by minimizing the square error summed over the frame (equation (7.20)). This can be done in different ways, either by calculating the auto-correlation coefficients and solving the Yule-Walker equations (see Chapter 5) or by using some recursive, adaptive filter approach (see Chapter 3).
So, for every frame, all the parameters above are determined and transmitted to the synthesiser, where a synthetic copy of the speech is generated.
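The autocorrelation route can be sketched as follows. This is a toy illustration with an assumed second-order all-pole model (coefficients chosen for the example, not from any real speech frame); the Levinson-Durbin recursion solves the Yule-Walker equations for the a_k of equation (7.19):

```python
# LPC analysis sketch: autocorrelation + Levinson-Durbin recursion.
import random

def autocorr(x, lag):
    """Autocorrelation estimate r(lag) of a frame."""
    return sum(x[n] * x[n - lag] for n in range(lag, len(x)))

def levinson_durbin(r, p):
    """Solve the order-p Yule-Walker equations for the coefficients a_1..a_p
    of the predictor x_hat(n) = -a_1 x(n-1) - ... - a_p x(n-p)."""
    a, err = [], r[0]
    for i in range(1, p + 1):
        k = -(r[i] + sum(a[j] * r[i - 1 - j] for j in range(i - 1))) / err
        a = [a[j] + k * a[i - 2 - j] for j in range(i - 1)] + [k]
        err *= 1 - k * k               # prediction error power shrinks each order
    return a

# synthesize a "frame" from a known model: x(n) = 0.9 x(n-1) - 0.4 x(n-2) + e(n)
random.seed(7)
x = [0.0, 0.0]
for _ in range(5000):
    x.append(0.9 * x[-1] - 0.4 * x[-2] + random.gauss(0, 1))
x = x[500:]                            # discard the start-up transient

r = [autocorr(x, i) for i in range(3)]
a = levinson_durbin(r, 2)              # estimated a_1, a_2
assert abs(a[0] - (-0.9)) < 0.1 and abs(a[1] - 0.4) < 0.1
```

The recursion recovers a_1 ≈ -0.9 and a_2 ≈ 0.4, the inverse-filter coefficients of the generating model, up to estimation noise.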
An improved version of LPC is residual excited linear prediction (RELP). Let us take a closer look at the error or residual signal r(n) resulting from the prediction in the analyser (equation (7.19)). The residual signal (which we are trying to minimize) can be expressed as
r(n) = x(n) - x̂(n)
     = x(n) + a1 x(n-1) + a2 x(n-2) + ... + ap x(n-p)   (7.21)
From this it is straightforward to see that the corresponding expression using z-transforms is
R(z) = X(z) H^-1(z)   (7.22)
Hence, the predictor can be regarded as an 'inverse' filter to the LPC model filter. If we now pass this residual signal to the synthesizer and use it to excite the LPC filter, that is E(z) = R(z), instead of using the noise or periodic waveform sources, we get
Y(z) = E(z)H(z) = R(z)H(z) = X(z)H^-1(z)H(z) = X(z)   (7.23)
In the ideal case, we would hence get the original speech signal back. When minimizing the variance of the residual signal (equation (7.20)), we gathered as much information about the speech signal as possible, using this model, in the filter coefficients a_k. The residual signal contains the remaining information. If
the model is well suited for the signal type (speech signals), the residual signal is close to white noise, having a flat spectrum. In such a case we can get away with coding only a small range of frequencies, for instance 0-1 kHz, of the residual signal. At the synthesizer, this baseband is then repeated to generate higher frequencies, and the resulting signal is used to excite the LPC filter.
Vocoders using RELP are used at transmission rates of 9.6 kbit/s. The advantage of RELP is a better speech quality compared to LPC for the same bit rate. However, the implementation is more computationally demanding.
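Equations (7.21) and (7.23) can be checked numerically. This is a small self-contained sketch with illustrative coefficients (not real LPC parameters): inverse-filtering x(n) yields the residual, and exciting the synthesis filter with that residual reconstructs x(n) exactly:

```python
# Numeric check of the RELP identity Y(z) = R(z)H(z) = X(z).
import random

random.seed(3)
a = [-0.9, 0.4]                        # hypothetical LPC coefficients a_1, a_2
x = [random.uniform(-1, 1) for _ in range(200)]

# analysis (inverse) filter, eq. (7.21): r(n) = x(n) + a1 x(n-1) + ... + ap x(n-p)
r = [x[n] + sum(a[k] * x[n - 1 - k] for k in range(len(a)) if n - 1 - k >= 0)
     for n in range(len(x))]

# synthesis filter excited by the residual: y(n) = r(n) - a1 y(n-1) - ... - ap y(n-p)
y = []
for n in range(len(r)):
    y.append(r[n] - sum(a[k] * y[n - 1 - k] for k in range(len(a)) if n - 1 - k >= 0))

# the original signal is recovered (up to floating-point roundoff)
assert all(abs(xv - yv) < 1e-9 for xv, yv in zip(x, y))
```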
Another possible extension of the original LPC approach is multipulse excited linear predictive coding (MLPC). This extension is an attempt to make the synthesized speech less 'mechanical' by using a number of different pitches for the excitation pulses, rather than only the two sources (periodic and noise) used by standard LPC.
The MLPC algorithm sequentially detects k pitches in a speech signal. As soon as one pitch is found, it is subtracted from the signal and detection starts over again, looking for the next pitch. Pitch detection is a hard task and the complexity of the required algorithms is often considerable. MLPC however offers a better speech quality than LPC for a given bit rate and is used in systems working at 4.8-9.6 kbit/s.
Yet another extension of LPC is code excited linear prediction (CELP). The main feature of CELP compared to LPC is the way in which the filter coefficients are handled. Assume that we have a standard LPC system with a filter of order p. If every coefficient a_k requires N bits, we need to transmit N·p bits per frame for the filter parameters alone. This approach would be fine if all combinations of filter coefficients were equally probable, but that is not the case: some combinations of coefficients are very probable, while others may never occur. In CELP, the coefficient combinations are represented by p-dimensional vectors. Using vector quantization techniques, the most probable vectors are determined, and each of these vectors is assigned an index and stored in a codebook. Both the analyser and the synthesizer of course have identical copies of the codebook, typically containing 256-512 vectors. Hence, instead of transmitting N·p bits per frame for the filter parameters, only 8-9 bits are needed.
This method offers high-quality speech at low bit rates but requires considerable computing power to be able to store and match the incoming speech to the 'standard' sounds stored in the codebook. This is of course especially true if the codebook is large. Speech quality degrades as the codebook size decreases.
Most CELP systems do not perform well with respect to the higher frequency components of the speech signal at low bit rates. This is counteracted in
There is also a variant of CELP called vector sum excited linear prediction (VSELP). The main difference between CELP and VSELP is the way the codebook is organized. Further, since VSELP uses fixed-point arithmetic algorithms, it is possible to implement it using cheaper DSP chips than CELP requires.
Adaptive Filters
The signal degradation in some physical systems is time varying, unknown, or possibly both. For example, consider a high-speed modem for transmitting and receiving data over telephone channels. It employs a filter called a channel equalizer to compensate for the channel distortion. Since dial-up communication channels have different and time-varying characteristics on each connection, the equalizer must be an adaptive filter.
An adaptive IIR filter involves both zeros and poles. Unless they are properly controlled, the poles of the filter may move outside the unit circle during the adaptation of the coefficients and result in an unstable system. Thus, the adaptive FIR filter is widely used for practical real-time applications, and this chapter focuses on the class of adaptive FIR filters.
The most widely used adaptive FIR filter is depicted in Figure 7.2. The filter output signal is computed as
y(n) = w0(n)x(n) + w1(n)x(n-1) + ... + wL-1(n)x(n-L+1)   (7.13)
where the filter coefficients wl(n) are time varying and updated by the adaptive algorithms that will be discussed next.
We define the input vector at time n as
x(n) = [x(n), x(n-1), ..., x(n-L+1)]T,   (7.14)
and the weight vector at time n as
w(n) = [w0(n), w1(n), ..., wL-1(n)]T.   (7.15)
Equation (7.13) can be expressed in vector form as
y(n) = wT(n)x(n) = xT(n)w(n).   (7.16)
The filter output y(n) is compared with the desired response d(n) to obtain the error signal
e(n) = d(n) - y(n) = d(n) - wT(n)x(n).   (7.17)
Our objective is to determine the weight vector w(n) that minimizes a predetermined performance (or cost) function.
Performance Function:
The adaptive filter shown in Figure 7.1 updates the coefficients of the digital filter to optimize some predetermined performance criterion. The most commonly used performance function is based on the mean-square error (MSE), the expected value of e2(n).
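The standard gradient method for minimizing the MSE is the LMS algorithm, with the weight update w(n+1) = w(n) + μ e(n) x(n). A minimal system-identification sketch, with a hypothetical 4-tap unknown system and illustrative step size, looks like this:

```python
# LMS adaptive FIR filter sketch: identify an unknown FIR system by
# minimizing the mean-square error between d(n) and y(n) = wT(n)x(n).
import random

random.seed(5)
L, mu = 4, 0.05
unknown = [0.5, -0.3, 0.2, 0.1]        # system to identify (hypothetical)
w = [0.0] * L                          # adaptive weight vector w(n)
x = [random.uniform(-1, 1) for _ in range(5000)]

for n in range(L - 1, len(x)):
    xv = x[n - L + 1:n + 1][::-1]                     # x(n), x(n-1), ..., x(n-L+1)
    d = sum(h * s for h, s in zip(unknown, xv))       # desired response d(n)
    e = d - sum(c * s for c, s in zip(w, xv))         # error e(n) = d(n) - y(n)
    w = [c + mu * e * s for c, s in zip(w, xv)]       # LMS weight update

# the weights converge to the unknown system's coefficients
assert all(abs(c - h) < 0.01 for c, h in zip(w, unknown))
```

With white input and a small enough step size μ, the weight vector converges to the MSE minimum; too large a μ makes the adaptation unstable.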
The path leading from the musician's microphone to the audiophile's speaker is remarkably long. Digital
data representation is important to prevent the degradation commonly associated with analog storage and
manipulation. This is very familiar to anyone who has compared the musical quality of cassette tapes with
compact disks. In a typical scenario, a musical piece is recorded in a sound studio on multiple channels or
tracks. In some cases, this even involves recording individual instruments and singers separately. This is
done to give the sound engineer greater flexibility in creating the final product. The complex process of
combining the individual tracks into a final product is called mix down. DSP can provide several
important functions during mix down, including: filtering, signal addition and subtraction, signal editing,
etc. One of the most interesting DSP applications in music preparation is artificial reverberation. If
the individual channels are simply added together, the resulting piece sounds frail and diluted, much as if
the musicians were playing outdoors. This is because listeners are greatly influenced by the echo or
reverberation content of the music, which is usually minimized in the sound studio. DSP allows artificial
echoes and reverberation to be added during mix down to simulate various ideal listening environments.
Echoes with delays of a few hundred milliseconds give the impression of cathedral-like locations, while echoes with delays of 10-20 milliseconds provide the perception of more modest-sized listening rooms.
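A basic artificial-echo effect of the kind described above can be sketched with a feedback comb filter; the delay and gain values here are purely illustrative:

```python
# Feedback comb filter: each output sample picks up an attenuated copy of
# the output `delay` samples earlier, producing a train of decaying echoes.

def add_echo(signal, delay, gain):
    out = list(signal)
    for n in range(delay, len(out)):
        out[n] += gain * out[n - delay]
    return out

dry = [1.0] + [0.0] * 9                # a single impulse ("dry" signal)
wet = add_echo(dry, delay=3, gain=0.5)
# the impulse now repeats every 3 samples, each echo half the previous one
assert wet[0] == 1.0 and wet[3] == 0.5 and wet[6] == 0.25 and wet[9] == 0.125
```

Real reverberators combine several such combs (and all-pass filters) with different delays to avoid the metallic sound of a single echo path.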
Speech generation and recognition are used to communicate between humans and machines. Rather than
using your hands and eyes, you use your mouth and ears. This is very convenient when your hands and
eyes should be doing something else, such as: driving a car, performing surgery, or (unfortunately) firing
your weapons at the enemy. Two approaches are used for computer generated speech: digital
recording and vocal tract simulation. In digital recording, the voice of a human speaker is digitized
and stored, usually in a compressed form. During playback, the stored data are uncompressed and
converted back into an analog signal. An entire hour of recorded speech requires only about three megabytes of storage, well within the capabilities of even small computer systems. This is the most common
method of digital speech generation used today. Vocal tract simulators are more complicated, trying to mimic the physical mechanisms by which humans create speech. The human vocal tract is an acoustic cavity with resonant frequencies determined by the size and shape of the chambers. Sound originates in the vocal tract in one of two basic ways, called voiced and fricative sounds. With voiced sounds, vocal cord vibration produces near-periodic pulses of air into the vocal cavities. In comparison, fricative sounds originate from the noisy air turbulence at narrow constrictions, such as the teeth and lips. Vocal tract simulators operate by generating digital signals that resemble these two types of excitation. The characteristics of the resonant chamber are simulated by passing the excitation signal through a digital filter with similar resonances. This approach was used in one of the very early DSP success stories, the Speak & Spell, a widely sold electronic learning aid for children.
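The two excitation types can be sketched as follows. This is a toy model, not a real vocal-tract simulator: a single two-pole resonator stands in for one vocal-tract resonance, and all frequencies, the sample rate and the pole radius are illustrative:

```python
# Voiced vs. fricative excitation through a two-pole digital resonator.
import math, random

def resonator(excitation, freq, fs=8000.0, r=0.97):
    """Two-pole resonator with a spectral peak near `freq` Hz."""
    b1 = 2 * r * math.cos(2 * math.pi * freq / fs)
    b2 = -r * r
    y = [0.0, 0.0]
    for e in excitation:
        y.append(e + b1 * y[-1] + b2 * y[-2])
    return y[2:]

random.seed(2)
N = 800
voiced = [1.0 if n % 80 == 0 else 0.0 for n in range(N)]   # 100 Hz pulse train
fricative = [random.gauss(0, 1) for _ in range(N)]         # noisy turbulence

voiced_out = resonator(voiced, freq=700)         # vowel-like resonance
fricative_out = resonator(fricative, freq=2500)  # hiss shaped toward 2.5 kHz
assert len(voiced_out) == N and len(fricative_out) == N
assert all(abs(v) < 1e3 for v in voiced_out)     # stable filter, bounded output
```

A full simulator cascades several such resonances (one per formant) and varies them over time, which is exactly the role of the LPC synthesis filter discussed earlier.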
The automated recognition of human speech is immensely more difficult than speech generation. Speech
recognition is a classic example of things that the human brain does well, but digital computers do poorly.
Digital computers can store and recall vast amounts of data, perform mathematical calculations at blazing
speeds, and do repetitive tasks without becoming bored or inefficient. Unfortunately, present day
computers perform very poorly when faced with raw sensory data. Teaching a computer to send you a
monthly electric bill is easy. Teaching the same computer to understand your voice is a major
undertaking. Digital Signal Processing generally approaches the problem of voice recognition in two
steps: feature extraction followed by feature matching. Each word in the incoming audio signal is isolated and then analyzed to identify the type of excitation and resonant frequencies. These parameters are then compared with previous examples of spoken words to identify the closest match. Often, these systems are limited to only a few hundred words; can only accept speech with distinct pauses between words; and must be retrained for each individual speaker. While this is adequate for many commercial applications, these limitations are humbling when compared to the abilities of human hearing. There is a great deal of work to be done in this area, with tremendous financial rewards for those who produce successful commercial products.
Example 2
The result of the transformation s = T(r) shown in the figure below is to produce a binary image.
Frequency domain methods
Let g(x,y) be a desired image formed by the convolution of an image f(x,y) and a linear, position-invariant operator h(x,y), that is:
g(x,y) = h(x,y) * f(x,y)
The following frequency-domain relationship holds:
G(u,v) = H(u,v)F(u,v)
We can select H(u,v) so that the desired image
g(x,y) = F^-1{H(u,v)F(u,v)}
exhibits some highlighted features of f(x,y). For instance, edges in f(x,y) can be accentuated by using a function H(u,v) that emphasises the high-frequency components of F(u,v).
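The relationship G = H·F can be checked numerically. For brevity this sketch works in one dimension with tiny illustrative sequences: circular convolution in the "spatial" domain equals the inverse DFT of the product of the DFTs:

```python
# Convolution theorem check: IDFT( DFT(h) * DFT(f) ) == circular conv(h, f).
import cmath, math

def dft(x, inverse=False):
    N, s = len(x), (1 if inverse else -1)
    out = [sum(x[n] * cmath.exp(s * 2j * math.pi * k * n / N) for n in range(N))
           for k in range(N)]
    return [v / N for v in out] if inverse else out

f = [1.0, 2.0, 3.0, 4.0]
h = [0.25, 0.25, 0.25, 0.25]           # simple averaging (low-pass) kernel
G = [H * F for H, F in zip(dft(h), dft(f))]    # G(k) = H(k) F(k)
g = [v.real for v in dft(G, inverse=True)]     # back to the spatial domain

# circular convolution computed directly
direct = [sum(h[m] * f[(n - m) % 4] for m in range(4)) for n in range(4)]
assert all(abs(a - b) < 1e-9 for a, b in zip(g, direct))
```

The same identity, applied with 2-D transforms, is what makes frequency-domain image filtering with a chosen H(u,v) practical.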
Glossary:
Sampling Rate:
The number of samples taken per second from a continuous signal is termed the sampling rate of the signal. The samples occur at equal intervals of time T.
Sampling Theorem:
The sampling theorem states that the sampling rate should be greater than or equal to twice the highest frequency of the input message signal.
Sampling Rate Conversion:
The sampling rate of a signal may be increased or decreased as the requirement and application demand. This is termed sampling rate conversion.
Decimation:
A decrease in the sampling rate by a factor M is termed decimation or downsampling: only every Mth sample is retained.
Interpolation:
An increase in the sampling rate by a factor L is termed interpolation or upsampling: L-1 zero-valued samples are inserted between successive input samples and then smoothed by a filter.
Polyphase Implementation:
The FIR filter is decomposed into a set of smaller filters of length K. Since the usual upsampling process inserts L-1 zeros between successive values of x(n), most of the filter inputs are zero and only a fraction of the products are non-zero; the polyphase structure computes only these non-zero products, reducing the computation.
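The basic rate-conversion operations can be sketched directly; the anti-aliasing and anti-imaging filters that a complete converter needs are omitted here for brevity:

```python
# Decimation keeps every Mth sample; upsampling inserts L-1 zeros between
# successive samples (to be smoothed afterwards by an interpolation filter).

def decimate(x, M):
    return x[::M]                      # keep one sample out of every M

def upsample(x, L):
    y = []
    for v in x:
        y.append(v)
        y.extend([0.0] * (L - 1))      # L-1 zeros after each input sample
    return y

x = [1, 2, 3, 4, 5, 6]
assert decimate(x, 2) == [1, 3, 5]
assert upsample([1, 2], 3) == [1, 0.0, 0.0, 2, 0.0, 0.0]
```

In a polyphase realization, the filter following the upsampler is split so that the inserted zeros are never multiplied at all, which is the computational saving described above.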