EC6502 DIGITAL SIGNAL PROCESSING
III YEAR / V SEM ECE

UNIT – I DISCRETE FOURIER TRANSFORM

1. Define DTFT. APRIL/MAY 2008
The discrete-time Fourier transform (DTFT) of a sequence x(n) is usually written as
X(e^(jω)) = Σ (n = −∞ to ∞) x(n) e^(−jωn)
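As a quick illustration (my own sketch, not part of the original answer), the DTFT of a finite-length sequence can be evaluated numerically on a grid of frequencies; the sequence and the frequency grid below are arbitrary choices.

```python
import numpy as np

def dtft(x, omegas):
    """Evaluate X(e^{jw}) = sum_n x[n] e^{-jwn} for a finite-length x
    (assumed to start at n = 0) at the frequencies in `omegas`."""
    n = np.arange(len(x))
    return np.array([np.sum(x * np.exp(-1j * w * n)) for w in omegas])

x = np.array([1.0, 2.0, 3.0, 4.0])      # example sequence (assumed)
w = np.linspace(-np.pi, np.pi, 5)       # a few frequency points
print(np.round(dtft(x, w), 3))          # periodic in w with period 2*pi
```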

2. Define Periodicity of DTFT.
Sampling causes the spectrum (DTFT) of a signal to become periodic. In terms of ordinary frequency f (cycles per second), the period is the sample rate fs. In terms of normalized frequency (cycles per sample), the period is 1. And in terms of ω (radians per sample), the period is 2π, which also follows directly from the periodicity of e^(−jωn). That is,
e^(−j(ω + 2πk)n) = e^(−jωn)
where both n and k are arbitrary integers. Therefore,
X(e^(j(ω + 2πk))) = X(e^(jω))

3. Difference between DTFT and other transforms. NOV/DEC 2010
The DFT and the DTFT can be viewed as the logical result of applying the standard continuous Fourier transform to discrete data. From that perspective, we have the satisfying result that it is not the transform that varies; it is just the form of the input:
If the input is discrete, the Fourier transform becomes a DTFT.
If the input is periodic, the Fourier transform becomes a Fourier series.
If the input is both, the Fourier transform becomes a DFT.

4. Write about the symmetry property of DTFT. APRIL/MAY 2009
The Fourier transform can be decomposed into real and imaginary parts, or into even and odd parts:
X(e^(jω)) = XR(e^(jω)) + jXI(e^(jω))   or   X(e^(jω)) = Xe(e^(jω)) + Xo(e^(jω))
For a real sequence x(n) the DTFT is conjugate symmetric, X(e^(−jω)) = X*(e^(jω)), so the real part and the magnitude are even functions of ω while the imaginary part and the phase are odd functions of ω.


5. Define DFT pair.
The sequence of N complex numbers x0, ..., xN−1 is transformed into the sequence of N complex numbers X0, ..., XN−1 by the DFT according to the formula
X(k) = Σ (n = 0 to N−1) x(n) e^(−j2πkn/N),  k = 0, 1, ..., N−1
where e^(−j2π/N) is a primitive N-th root of unity.
The inverse discrete Fourier transform (IDFT) is given by
x(n) = (1/N) Σ (k = 0 to N−1) X(k) e^(j2πkn/N),  n = 0, 1, ..., N−1

6. How will you express the IDFT in terms of the DFT? MAY/JUNE 2010
A useful property of the DFT is that the inverse DFT can be easily expressed in terms of the (forward) DFT, via several well-known "tricks". (For example, in computations, it is often convenient to only implement a fast Fourier transform corresponding to one transform direction and then to get the other transform direction from the first.) First, we can compute the inverse DFT by reversing the inputs:
IDFT{X(k)} = (1/N) DFT{X(N − k)}
(As usual, the indices are interpreted modulo N; thus, for k = 0, we have X(N − 0) = X(0).)
Second, one can also conjugate the inputs and outputs:
IDFT{X(k)} = (1/N) [ DFT{X*(k)} ]*
Third, a variant of this conjugation trick, which is sometimes preferable because it requires no modification of the data values, involves swapping real and imaginary parts (which can be done on a computer simply by modifying pointers). Define swap(x) as x with its real and imaginary parts swapped; that is, if x = a + bi then swap(x) is b + ai. Equivalently, swap(x) equals i·x*. Then
IDFT{X(k)} = (1/N) swap( DFT{ swap(X(k)) } )
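A minimal sketch (my own, not from the original notes) of the conjugation trick, using only numpy's forward FFT; the spectrum X below is an arbitrary example:

```python
import numpy as np

def idft_via_dft(X):
    """Inverse DFT computed with a forward transform only:
    x = conj(DFT(conj(X))) / N  (the 'conjugation trick')."""
    N = len(X)
    return np.conj(np.fft.fft(np.conj(X))) / N

X = np.fft.fft([1.0, 2.0, 3.0, 4.0])                  # example spectrum (assumed)
print(np.allclose(idft_via_dft(X), np.fft.ifft(X)))   # True
```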

7. Write about the Bilateral Z transform. APRIL/MAY 2009
The bilateral or two-sided Z-transform of a discrete-time signal x[n] is the function X(z) defined as
X(z) = Σ (n = −∞ to ∞) x[n] z^(−n)
where n is an integer and z is, in general, a complex number:
z = Ae^(jφ)   (or)   z = A(cos φ + j sin φ)
where A is the magnitude of z, and φ is the complex argument (also referred to as angle or phase) in radians.

8. Write about the Unilateral Z transform.
Alternatively, in cases where x[n] is defined only for n ≥ 0, the single-sided or unilateral Z-transform is defined as
X(z) = Σ (n = 0 to ∞) x[n] z^(−n)
In signal processing, this definition is used when the signal is causal.
9. Define Region Of Convergence.
The region of convergence (ROC) is the set of points in the complex plane for which the Z-transform summation converges.

10. Write about the output response using the Z transform. APRIL/MAY 2008
If such a system is driven by a signal X(z), then the output is Y(z) = H(z)X(z). By performing partial fraction decomposition on Y(z) and then taking the inverse Z-transform, the output y(n) can be found. In practice, it is often useful to fractionally decompose Y(z)/z before multiplying that quantity by z, to generate a form of Y(z) which has terms with easily computable inverse Z-transforms.
11. Define Twiddle Factor. MAY/JUNE 2010
A twiddle factor, in fast Fourier transform (FFT) algorithms, is any of the trigonometric constant coefficients that are multiplied by the data in the course of the algorithm.
12. State the condition for existence of the DTFT. MAY/JUNE 2007
The conditions are: if x(n) is absolutely summable, i.e. Σ|x(n)| < ∞, then its DTFT exists. If x(n) is not absolutely summable, then it should have finite energy for the DTFT to exist.
13. List the properties of DTFT.

Periodicity, Linearity, Time shift, Frequency shift, Scaling, Differentiation in frequency domain,
Time reversal, Convolution, Multiplication in time domain, Parseval’s theorem
14. What is the DTFT of unit sample? NOV/DEC 2010
The DTFT of the unit sample is 1 for all values of ω.
15. Define Zero padding.
The method of appending zero in the given sequence is called as Zero padding.
16. Define circularly even sequence.
A Sequence is said to be circularly even if it is symmetric about the point zero on
the circle. x(N-n)=x(n),1<=n<=N-1.
17. Define circularly odd sequence.
A Sequence is said to be circularly odd if it is anti symmetric about point x(0) on the circle

18. Define circularly folded sequences.


A circularly folded sequence is represented as x((-n))N. It is obtained by plotting x(n) in clockwise
direction along the circle.
19. State circular convolution. NOV/DEC 2009
This property states that multiplication of two DFT is equal to circular convolution of their
sequence in time domain.
20. State parseval’s theorem. NOV/DEC 2009
Consider the complex valued sequences x(n) and y(n). If x(n) → X(k) and y(n) → Y(k), then
Σ (n = 0 to N−1) x(n) y*(n) = (1/N) Σ (k = 0 to N−1) X(k) Y*(k)
21. Define Z transform.
The Z transform of a discrete time signal x(n) is denoted by X(z) and is given by X(z) = Σ x(n) z^(−n).
22. Define ROC.
The value of Z for which the Z transform converged is called region of convergence.
23. Find the Z transform of x(n) = {1,2,3,4}
x(n) = {1,2,3,4}
X(z) = Σ x(n) z^(−n)
     = 1 + 2z^(−1) + 3z^(−2) + 4z^(−3)
     = 1 + 2/z + 3/z^2 + 4/z^3
24. State the convolution property of Z transforms. APRIL/MAY2010
The convolution property states that the convolution of two sequences in time domain is
equivalent to multiplication of their Z transforms.
25. What is the Z transform of δ(n−m)?
By the time shifting property,
Z[A δ(n−m)] = A z^(−m), since Z[δ(n)] = 1
26. State initial value theorem.
If x(n) is a causal sequence, then its initial value is given by x(0) = lim (z→∞) X(z).
27. List the methods of obtaining inverse Z transform.
Partial fraction expansion.


Contour integration
Power series expansion
Convolution.
28. Obtain the inverse Z transform of X(z) = 1/(z−a), |z| > |a|. APRIL/MAY 2010
Given X(z) = 1/(z−a) = z^(−1) / (1 − a z^(−1))
By the time shifting property, x(n) = a^(n−1) u(n−1)

16-MARKS

1. Determine the 8-Point DFT of the sequence x(n) = {1,1,1,1,1,1,0,0}   (Nov / 2013)

Soln:
X(k) = Σ (n = 0 to N−1) x(n) e^(−j2πkn/N),  k = 0, 1, ..., N−1.  Here N = 8:
X(k) = Σ (n = 0 to 7) x(n) e^(−j2πkn/8),  k = 0, 1, ..., 7.
X(0) = 6
X(1) = −0.707 − j1.707
X(2) = 1 − j
X(3) = 0.707 + j0.293
X(4) = 0
X(5) = 0.707 − j0.293
X(6) = 1 + j
X(7) = −0.707 + j1.707
X(k) = {6, −0.707−j1.707, 1−j, 0.707+j0.293, 0, 0.707−j0.293, 1+j, −0.707+j1.707}
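A quick numerical check of these values (mine, not part of the original solution) using numpy's FFT:

```python
import numpy as np

x = np.array([1, 1, 1, 1, 1, 1, 0, 0], dtype=float)
X = np.fft.fft(x)        # 8-point DFT
print(np.round(X, 3))    # matches {6, -0.707-j1.707, 1-j, 0.707+j0.293, 0, 0.707-j0.293, 1+j, -0.707+j1.707}
```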

2. Determine the 8-Point DFT of the sequence x(n) = {1,1,1,1,1,1,1,1}   (April 2014)

Soln:
X(k) = Σ (n = 0 to 7) x(n) e^(−j2πkn/8),  k = 0, 1, ..., 7.
X(0) = 8
X(1) = X(2) = X(3) = X(4) = X(5) = X(6) = X(7) = 0
X(k) = {8, 0, 0, 0, 0, 0, 0, 0}

3. Determine the 8-Point DFT of the sequence x(n) = {1,2,3,4,4,3,2,1}   (Nov / 2014)

Soln:
X(k) = Σ (n = 0 to 7) x(n) e^(−j2πkn/8),  k = 0, 1, ..., 7.
X(0) = 20
X(1) = −5.828 − j2.414
X(2) = 0
X(3) = −0.172 − j0.414
X(4) = 0
X(5) = −0.172 + j0.414
X(6) = 0
X(7) = −5.828 + j2.414
X(k) = {20, −5.828−j2.414, 0, −0.172−j0.414, 0, −0.172+j0.414, 0, −5.828+j2.414}

4. Determine the 8-Point IDFT of the sequence X(k) = {5, 0, 1−j, 0, 1, 0, 1+j, 0}   (Nov / 2014)

Soln:
x(n) = (1/N) Σ (k = 0 to N−1) X(k) e^(j2πkn/N),  n = 0, 1, ..., N−1.  Here N = 8:
x(n) = (1/8) Σ (k = 0 to 7) X(k) e^(j2πkn/8),  n = 0, 1, ..., 7.
x(0) = 1
x(1) = 0.75
x(2) = 0.5
x(3) = 0.25
x(4) = 1
x(5) = 0.75
x(6) = 0.5
x(7) = 0.25
x(n) = {1, 0.75, 0.5, 0.25, 1, 0.75, 0.5, 0.25}
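A quick numerical check (mine, not from the original solution) with numpy's inverse FFT:

```python
import numpy as np

X = np.array([5, 0, 1 - 1j, 0, 1, 0, 1 + 1j, 0])
x = np.fft.ifft(X)           # 8-point IDFT, includes the 1/N factor
print(np.round(x.real, 3))   # [1.  0.75 0.5  0.25 1.  0.75 0.5  0.25]
```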

5. Derive and draw the radix-2 DIT algorithm for an FFT of 8 points. (16) DEC 2009

RADIX-2 FFT ALGORITHMS
The N-point DFT of an N-point sequence x(n) is
X(k) = Σ (n = 0 to N−1) x(n) W_N^(nk),   where W_N = e^(−j2π/N)
Because x(n) may be either real or complex, evaluating X(k) requires on the order of N complex multiplications and N complex additions for each value of k. Therefore, because there are N values of X(k), computing an N-point DFT requires N^2 complex multiplications and additions.
The basic strategy that is used in the FFT algorithm is one of "divide and conquer," which involves decomposing an N-point DFT into successively smaller DFTs. To see how this works, suppose that the length of x(n) is even (i.e., N is divisible by 2). If x(n) is decimated into two sequences of length N/2, computing the N/2-point DFT of each of these sequences requires approximately (N/2)^2 multiplications and the same number of additions. Thus, the two DFTs require 2(N/2)^2 = N^2/2 multiplies and adds. Therefore, if it is possible to find the N-point DFT of x(n) from these two N/2-point DFTs in fewer than N^2/2 operations, a savings has been realized.

Decimation-in-Time FFT
The decimation-in-time FFT algorithm is based on splitting (decimating) x(n) into smaller sequences and finding X(k) from the DFTs of these decimated sequences. This section describes how this decimation leads to an efficient algorithm when the sequence length is a power of 2.
Let x(n) be a sequence of length N = 2^v, and suppose that x(n) is split (decimated) into two subsequences, each of length N/2. As illustrated in Fig., the first sequence, g(n), is formed from the even-index terms,

g(n) = x(2n),   n = 0, 1, ..., N/2 − 1
and the second, h(n), is formed from the odd-index terms,
h(n) = x(2n + 1),   n = 0, 1, ..., N/2 − 1
In terms of these sequences, the N-point DFT of x(n) is
X(k) = Σ (n even) x(n) W_N^(nk) + Σ (n odd) x(n) W_N^(nk)
     = Σ (l = 0 to N/2−1) g(l) W_N^(2lk) + W_N^k Σ (l = 0 to N/2−1) h(l) W_N^(2lk)
Because W_N^(2lk) = W_(N/2)^(lk),
X(k) = Σ (l = 0 to N/2−1) g(l) W_(N/2)^(lk) + W_N^k Σ (l = 0 to N/2−1) h(l) W_(N/2)^(lk)
Note that the first term is the N/2-point DFT of g(n), and the second is the N/2-point DFT of h(n):
X(k) = G(k) + W_N^k H(k),   k = 0, 1, ..., N − 1
Although the N/2-point DFTs of g(n) and h(n) are sequences of length N/2, the periodicity of the complex exponentials allows us to write
G(k) = G(k + N/2),   H(k) = H(k + N/2)
Therefore, X(k) may be computed from the N/2-point DFTs G(k) and H(k). Note that because
W_N^(k + N/2) = −W_N^k
then
W_N^(k + N/2) H(k + N/2) = −W_N^k H(k)
and it is only necessary to form the products W_N^k H(k) for k = 0, 1, ..., N/2 − 1. The complex exponentials multiplying H(k) are called twiddle factors. A block diagram showing the computations that are necessary for the first stage of an eight-point decimation-in-time FFT is shown in Fig.
If N/2 is even, g(n) and h(n) may again be decimated. For example, G(k) may be evaluated as follows:

G(k) = Σ (n = 0 to N/2−1) g(n) W_(N/2)^(nk) = Σ (n even) g(n) W_(N/2)^(nk) + Σ (n odd) g(n) W_(N/2)^(nk)

[Figure: A complete eight-point radix-2 decimation-in-time FFT, producing outputs X(0) through X(7).]

Computing an N-point DFT using a radix-2 decimation-in-time FFT is much more efficient than calculating the DFT directly. For example, if N = 2^v, there are log2(N) = v stages of computation. Because each stage requires N/2 complex multiplies by the twiddle factors W_N^r and N complex additions, there are a total of (N/2)·log2(N) complex multiplications and N·log2(N) complex additions.
From the structure of the decimation-in-time FFT algorithm, note that once a butterfly operation has been performed on a pair of complex numbers, there is no need to save the input pair. Therefore, the output pair may be stored in the same registers as the input. Thus, only one array of size N is required, and it is said that the computations may be performed in place. To perform the computations in place, however, the input sequence x(n) must be stored (or accessed) in nonsequential order as seen in Fig. The shuffling of the input sequence that takes place is due to the successive decimations of x(n). The ordering that results corresponds to a bit-reversed indexing of the original sequence. In other words, if the index n is written in binary form, the order in which the input sequence must be accessed is found by reading the binary representation of n in reverse order, as illustrated in the table below for N = 8:

n   Binary   Bit-Reversed Binary   Bit-Reversed Index
0   000      000                   0
1   001      100                   4
2   010      010                   2
3   011      110                   6
4   100      001                   1
5   101      101                   5
6   110      011                   3
7   111      111                   7

Alternate forms of FFT algorithms may be derived from the decimation-in-time FFT by manipulating the flowgraph and rearranging the order in which the results of each stage of the computation are stored. For example, the nodes of the flowgraph may be rearranged so that the input sequence x(n) is in normal order. What is lost with this reordering, however, is the ability to perform the computations in place.
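The derivation above maps directly onto a recursive implementation. The sketch below is my own illustration (not from the notes): it computes an N-point DFT, for N a power of 2, by splitting into even- and odd-indexed halves and combining them with twiddle factors.

```python
import numpy as np

def fft_dit(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    G = fft_dit(x[0::2])                  # N/2-point DFT of even-index terms g(n)
    H = fft_dit(x[1::2])                  # N/2-point DFT of odd-index terms h(n)
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors W_N^k
    return np.concatenate([G + W * H,     # X(k)       = G(k) + W_N^k H(k)
                           G - W * H])    # X(k + N/2) = G(k) - W_N^k H(k)

x = [1, 1, 1, 1, 1, 1, 0, 0]              # the sequence from problem 1
print(np.allclose(fft_dit(x), np.fft.fft(x)))   # True
```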
6. Derive the DIF radix-2 FFT algorithm.

Decimation-in-Frequency FFT
Another class of FFT algorithms may be derived by decimating the output sequence X(k) into smaller and smaller subsequences. These algorithms are called decimation-in-frequency FFTs and may be derived as follows. Let N be a power of 2, N = 2^v, and consider separately evaluating the even-index and odd-index samples of X(k). The even samples are
X(2k) = Σ (n = 0 to N−1) x(n) W_N^(2nk)
Separating this sum into the first N/2 points and the last N/2 points, and using the fact that W_N^(2nk) = W_(N/2)^(nk), this becomes
X(2k) = Σ (n = 0 to N/2−1) x(n) W_(N/2)^(nk) + Σ (n = N/2 to N−1) x(n) W_N^(2nk)
With a change in the indexing on the second sum we have
X(2k) = Σ (n = 0 to N/2−1) x(n) W_(N/2)^(nk) + Σ (n = 0 to N/2−1) x(n + N/2) W_N^(2k(n + N/2))

Finally, because W_N^(2k(n + N/2)) = W_N^(2kn) W_N^(kN) = W_(N/2)^(kn),

X(2k) = Σ (n = 0 to N/2−1) [ x(n) + x(n + N/2) ] W_(N/2)^(nk)
which is the N/2-point DFT of the sequence that is formed by adding the first N/2 points of x(n) to the last N/2. Proceeding in the same way for the odd samples of X(k) leads to
X(2k + 1) = Σ (n = 0 to N/2−1) [ x(n) − x(n + N/2) ] W_N^n W_(N/2)^(nk)
A flowgraph illustrating this first stage of decimation is shown in Fig. 7-7. As with the decimation-in-time FFT, the decimation may be continued until only two-point DFTs remain. A complete eight-point decimation-in-frequency FFT is shown in Fig. 7-8. The complexity of the decimation-in-frequency FFT is the same as that of the decimation-in-time FFT, and the computations may be performed in place. Finally, note that although the input sequence x(n) is in normal order, the frequency samples X(k) are in bit-reversed order.


7. State and prove the properties of DFT.

PROPERTIES OF DFT

Let x(n) ↔ X(k) denote an N-point DFT pair.

1. Periodicity
Let x(n) and X(k) be a DFT pair. Then if
x(n + N) = x(n) for all n, then
X(k + N) = X(k) for all k.
Thus the periodic sequence xp(n) can be given as
xp(n) = Σ (l = −∞ to ∞) x(n − lN)

2. Linearity
If x1(n) ↔ X1(k) and x2(n) ↔ X2(k), then
a1 x1(n) + a2 x2(n) ↔ a1 X1(k) + a2 X2(k)
The DFT of a linear combination of two or more signals is equal to the same linear combination of the DFTs of the individual signals.

3. Circular Symmetries of a sequence
A) A sequence is said to be circularly even if it is symmetric about the point zero on the circle. Thus x(N−n) = x(n).
B) A sequence is said to be circularly odd if it is antisymmetric about the point zero on the circle. Thus x(N−n) = −x(n).
C) A circularly folded sequence is represented as x((−n))N and is given by x((−n))N = x(N−n).
D) The anticlockwise direction gives a delayed sequence and the clockwise direction gives an advanced sequence. Thus a delayed or advanced sequence x'(n) is related to x(n) by a circular shift.

4. Symmetry Properties of the DFT
A) Symmetry property for real valued x(n), i.e. xI(n) = 0:
This property states that if x(n) is real then X(N−k) = X(−k) = X*(k).
B) Real and even sequence x(n), i.e. xI(n) = 0 and x(n) = x(N−n):
This property states that if the sequence is real and even, then the DFT becomes
X(k) = Σ (n = 0 to N−1) x(n) cos(2πkn/N)
C) Real and odd sequence x(n), i.e. xI(n) = 0 and x(n) = −x(N−n):
This property states that if the sequence is real and odd, then the DFT becomes
X(k) = −j Σ (n = 0 to N−1) x(n) sin(2πkn/N)
D) Purely imaginary x(n), i.e. xR(n) = 0:
This property states that if the sequence is purely imaginary, x(n) = j xI(n), then the DFT becomes
XR(k) = Σ (n = 0 to N−1) xI(n) sin(2πkn/N)
XI(k) = Σ (n = 0 to N−1) xI(n) cos(2πkn/N)

5. Circular Convolution
The circular convolution property states that if
x1(n) ↔ X1(k) and x2(n) ↔ X2(k), then
x1(n) (N) x2(n) ↔ X1(k) X2(k)
where (N) denotes N-point circular convolution. It means that the circular convolution of x1(n) and x2(n) is equal to the multiplication of their DFTs. The circular convolution of two periodic discrete signals with period N is given by
y(m) = Σ (n = 0 to N−1) x1(n) x2((m − n))N          (4)
Multiplication of the DFTs of two sequences therefore corresponds to the circular convolution of the sequences in the time domain, not their linear convolution. The results of circular and linear convolution are in general different, but are related to each other.
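A short sketch (my own, not from the notes) verifying the circular convolution property numerically; the two sequences are arbitrary examples:

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])      # example sequences (assumed)
x2 = np.array([1.0, 1.0, 0.0, 0.0])
N = len(x1)

# Direct N-point circular convolution: y(m) = sum_n x1(n) x2((m-n) mod N)
y_direct = np.array([sum(x1[n] * x2[(m - n) % N] for n in range(N)) for m in range(N)])

# Via the DFT property: DFT(y) = X1(k) X2(k)
y_dft = np.fft.ifft(np.fft.fft(x1) * np.fft.fft(x2)).real

print(np.allclose(y_direct, y_dft))      # True
```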


UNIT-II
INFINITE IMPULSE RESPONSE DIGITAL FILTERS
1. Give the expression for the location of poles of a normalized Butterworth filter. (May 07, Nov 10)

The normalized Butterworth filter of order N has its poles on the unit circle of the s-plane at
s_k = e^(jπ(2k + N − 1)/(2N)),   k = 1, 2, ..., N
so the denominator of the transfer function is (s − s1)(s − s2) ... (s − sN), where N is the order of the filter.

2.What are the parameters(specifications) of a Chebyshev filter?(May/june-07)

From the given chebyshev filter specifications we can obtain the parameters
like the order of the filter N, ε, transition ratio k, and the poles of the filter.
3. Find the digital transfer function H(z) by using the impulse invariant method for the analog transfer function H(s) = 1/(s+2). Assume T = 0.5 sec. (May/June-07)

H(z) = 1 / (1 − e^(−1) z^(−1))

4. What is the warping effect? (Nov-07, May-09)

The relation between the analog and digital frequencies in the bilinear transformation is given by
Ω = (2/T) tan(ω/2)
For smaller values of ω there exists a linear relationship between ω and Ω. But for large values of ω the relationship is non-linear. This non-linearity introduces distortion in the frequency axis. This is known as the warping effect. This effect compresses the magnitude and phase response at high frequencies.

5. Compare Butterworth and Chebyshev filters. (Nov-07, May-09)

S.No | Butterworth | Chebyshev Type-1
1 | All pole design | All pole design
2 | The poles lie on a circle in the S-plane | The poles lie on an ellipse in the S-plane
3 | The magnitude response is maximally flat at the origin and is a monotonically decreasing function of Ω | The magnitude response is equiripple in the passband and monotonically decreasing in the stopband
4 | The normalized magnitude response has a value of 1/√2 at the cutoff frequency Ωc | The normalized magnitude response has a value of 1/√(1 + ε^2) at the cutoff frequency Ωc
5 | Only a few parameters have to be calculated to determine the transfer function | A large number of parameters have to be calculated to determine the transfer function

6. What is the relationship between analog and digital frequencies in the impulse invariant transformation? (Apr/May-08)

The frequencies are related by ω = ΩT, and each analog pole pk maps to a digital pole at e^(pkT):
if H(s) = Σ Ck / (s − pk), then H(z) = Σ Ck / (1 − e^(pkT) z^(−1)).

7. What is the bilinear transformation? (Nov/Dec-08)

The bilinear transformation is a mapping that transforms the left half of the s-plane into the interior of the unit circle in the z-plane only once, thus avoiding aliasing of frequency components. The mapping from the s-plane to the z-plane in the bilinear transformation is
s = (2/T) [ (1 − z^(−1)) / (1 + z^(−1)) ]
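As an illustration (my own, not part of the original answer), scipy can apply the bilinear transformation to an analog prototype; the prototype H(s) = 1/(s+1) and the sampling rate fs = 2 Hz (T = 0.5 s) below are arbitrary choices:

```python
from scipy import signal

# Analog prototype H(s) = 1 / (s + 1), chosen only as an example.
b_s, a_s = [1.0], [1.0, 1.0]

# Bilinear transformation s -> 2*fs*(1 - z^-1)/(1 + z^-1), here fs = 2.
b_z, a_z = signal.bilinear(b_s, a_s, fs=2.0)
print(b_z, a_z)   # digital numerator and denominator coefficients in z^-1
```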

8.What is the main disadvantage of direct form-I realization?(Nov/Dec-08)

The direct form realization is extremely sensitive to parameter quantization. When


the order of the system N is large, a small change in a filter coefficient due to
parameter quantization, results in a large change in the location of the pole and
zeros of the system.

9.What is Prewarping?(May-09,Apr-10,May-11)


The effect of the non-linear compression at high frequencies can be


compensated. When the desired magnitude response is piece-wise constant over
frequency, this compression can be compensated by introducing a suitable
prescaling, or prewarping, of the critical frequencies by using the formula Ω = (2/T) tan(ω/2).
10.List the features that make an analog to digital mapping for IIR filter
design coefficient.(May/june-2010)

• The bilinear transformation provides one-to-one mapping.


• Stable continuous systems can be mapped into realizable, stable digital
systems.
• There is no aliasing.
11.State the limitations of impulse invariance mapping technique.(Nov-09)

In impulse invariant method, the mapping from s-plane to z-plane is many


to one i.e., all the poles in the s-plane between the intervals [(2k-1)π]/T to
[(2k+1)π]/T ( for k=0,1,2……) map into the entire z-plane. Thus, there are an
infinite number of poles that map to the same location in the z-plane, producing
aliasing effect. Due to spectrum aliasing the impulse invariance method is
inappropriate for designing high pass filters. That is why the impulse invariance
method is not preferred in the design of IIR filter other than low pass filters.

12. Find the digital transfer function using the approximate derivative technique for the analog transfer function H(s) = 1/(s+3). Assume T = 0.1 sec. (May-10)

Using the backward-difference mapping s = (1 − z^(−1))/T:
H(z) = 1 / [ (1 − z^(−1))/0.1 + 3 ] = 0.1 / (1.3 − z^(−1))
13. Give the square magnitude function of the Butterworth filter. (Nov-2010)

The magnitude function of the Butterworth filter is given by
|H(jΩ)| = 1 / [ 1 + (Ω/Ωc)^(2N) ]^(1/2),   N = 1, 2, 3, ...
where N is the order of the filter and Ωc is the cutoff frequency. The magnitude response of the Butterworth filter closely approximates the ideal response as the order N increases. The phase response becomes more non-linear as N increases.

14. Find the digital transfer function H(z) by using the impulse invariant method for the analog transfer function H(s) = 1/(s+1). Assume T = 1 sec. (Apr-11)

H(z) = 1 / (1 − e^(−1) z^(−1))

15. Give the equations for the order N and the cutoff frequency Ωc of the Butterworth filter. (Nov-06)

The order of the filter is
N ≥ log√[ (10^(0.1αs) − 1) / (10^(0.1αp) − 1) ] / log(Ωs/Ωp)
where αs = stop band attenuation at the stop band frequency Ωs and αp = pass band attenuation at the pass band frequency Ωp.
The cutoff frequency is
Ωc = Ωp / (10^(0.1αp) − 1)^(1/(2N))
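A small numeric sketch (mine, with arbitrarily chosen specifications) applying these formulas, cross-checked against scipy's buttord:

```python
import numpy as np
from scipy import signal

# Example analog specifications (assumed): passband edge 1000 rad/s with
# alpha_p = 3 dB, stopband edge 2000 rad/s with alpha_s = 40 dB.
wp, ws, ap, as_ = 1000.0, 2000.0, 3.0, 40.0

N = np.log10((10**(0.1*as_) - 1) / (10**(0.1*ap) - 1)) / (2*np.log10(ws/wp))
N = int(np.ceil(N))                                  # round up to an integer order
wc = wp / (10**(0.1*ap) - 1)**(1.0/(2*N))            # cutoff frequency formula above
print(N, wc)

print(signal.buttord(wp, ws, ap, as_, analog=True))  # (order, natural frequency) for comparison
```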

16.What are the properties of the bilinear transformation?(Apr-06)


• The mapping for the bilinear transformation is a one-to-one mapping;
that is for every point z, there is exactly one corresponding point s, and
vice versa.
• The jΩ-axis maps on to the unit circle |z|=1, the left half of the s-plane
maps to the interior of the unit circle |z|=1 and the right half of the s-
plane maps on to the exterior of the unit circle |z|=1.
17.Write a short note on pre-warping.(Apr-06)
The effect of the non-linear compression at high frequencies can be
compensated. When the desired magnitude response is piece-wise constant


over frequency, this compression can be compensated by introducing a suitable pre-scaling, or pre-warping, of the critical frequencies by using the formula Ω = (2/T) tan(ω/2).
18.What are the different types of structure for realization of IIR
systems?(Nov-05)
The different types of structures for realization of IIR system are

i. Direct-form-I structure
ii. Direct-form-II structure
iii. Transposed direct-form II structure
iv. Cascade form structure
v. Parallel form structure
vi. Lattice-Ladder structure
19.Draw the general realization structure in direct-form I of IIR system.(May-
04)

20.Give direct form II structure.(Apr-05)


21.Draw the parallel form structure of IIR filter.(May-06)

22.Mention any two techniques for digitizing the transfer function of an analog
filter. (Nov11)
The two techniques available for digitizing the analog filter transfer function
are Impulse invariant transformation and Bilinear transformation.

23.Write a brief notes on the design of IIR filter. (Or how a digital IIR filter is
designed?)
For designing a digital IIR filter, first an equivalent analog filter is designed
using any one of the approximation technique for the given specifications. The
result of the analog filter design will be an analog filter transfer function Ha(s).
The analog filter transfer function is transformed to digital filter transfer function
H(z) using either Bilinear or Impulse invariant transformation.

24.Define an IIR filter


The filters designed by considering all the infinite samples of impulse


response are called IIR filers. The impulse response is obtained by taking
inverse Fourier transform of ideal frequency response.

25. Compare IIR and FIR filters.

S.No | IIR Filter | FIR Filter
i.   | All the infinite samples of the impulse response are considered. | Only N samples of the impulse response are considered.
ii.  | The impulse response cannot be directly converted to a digital filter transfer function. | The impulse response can be directly converted to a digital filter transfer function.
iii. | The design involves the design of an analog filter and then transforming the analog filter to a digital filter. | The digital filter can be directly designed to achieve the desired specifications.
iv.  | The specifications include the desired characteristics for the magnitude response only. | The specifications include the desired characteristics for both magnitude and phase response.
v.   | Linear phase characteristics cannot be achieved. | Linear phase filters can be easily designed.

PART- B(16 MARKS)

1. Explain frequency transformation in the analog domain and frequency transformation in the digital domain. (Nov/Dec-07) (12)

i. Frequency transformation in analog domain

In this transformation technique normalized Low Pass filter with


cutoff freq of Ωp= 1 rad /sec is designed and all other types of filters are
obtained from this prototype. For example, normalized LPF is transformed
to LPF of specific cutoff freq by the following transformation formula,

Normalized LPF to LPF of specific cutoff:

s → s/Ω'p,   i.e.   H1(s) = Hp(s)|s→s·Ωp/Ω'p
where
Ωp = normalized cutoff frequency = 1 rad/sec
Ω'p = desired LP cutoff frequency
so that the transformed response at Ω = Ω'p equals the prototype response H(j1).
The other transformations are shown in the table below.

ii. Frequency Transformation in the Digital Domain:

This transformation involves replacing the variable z^(−1) by a rational function g(z^(−1)). While doing this, the following properties need to be satisfied:
1. The mapping z^(−1) → g(z^(−1)) must map points inside the unit circle in the Z-plane onto points inside the unit circle of the z-plane, to preserve causality of the filter.
2. For a stable filter, the inside of the unit circle of the Z-plane must map onto the inside of the unit circle of the z-plane.
The general form of the function g(.) that satisfies the above requirements is of the "all-pass" type.
The different transformations are shown in the table below.


2. Let
H(s) = 1 / (s^2 + s + 1)
represent the transfer function of a low-pass filter (not Butterworth) with a passband of 1 rad/sec. Use frequency transformations to find the transfer functions of the following filters: (Apr/May-08) (12)
1. A LP filter with a passband of 10 rad/sec
2. A HP filter with a cutoff frequency of 1 rad/sec
3. A HP filter with a cutoff frequency of 10 rad/sec
4. A BP filter with a passband of 10 rad/sec and a center frequency of 100 rad/sec
5. A BS filter with a stopband of 2 rad/sec and a center frequency of 10 rad/sec

Solution:

Given
H(s) = 1 / (s^2 + s + 1)

a. LP – LP Transformation
Replace s → s/Ω'p = s/10:
Ha(s) = H(s)|s→s/10 = 1 / [ (s/10)^2 + (s/10) + 1 ]
      = 100 / (s^2 + 10s + 100)

b. LP – HP (normalized) Transformation
Replace s → Ωu/s = 1/s:
Ha(s) = H(s)|s→1/s = 1 / [ (1/s)^2 + (1/s) + 1 ]
      = s^2 / (s^2 + s + 1)

c. LP – HP (specified cutoff) Transformation
Replace s → Ωu/s = 10/s:
Ha(s) = H(s)|s→10/s = 1 / [ (10/s)^2 + (10/s) + 1 ]
      = s^2 / (s^2 + 10s + 100)

d. LP – BP Transformation
Replace s → (s^2 + ΩuΩl) / [ s(Ωu − Ωl) ] = (s^2 + Ωo^2) / (s·Bo), where Ωo = √(ΩuΩl) and Bo = Ωu − Ωl.
Here Ωo = 100 and Bo = 10, so s → (s^2 + 10^4)/(10s):
Ha(s) = H(s)|s→(s^2+10^4)/(10s)
      = 100 s^2 / (s^4 + 10 s^3 + 20100 s^2 + 10^5 s + 10^8)

e. LP – BS Transformation
Replace s → s(Ωu − Ωl) / (s^2 + ΩuΩl) = s·Bo / (s^2 + Ωo^2), where Ωo = √(ΩuΩl) and Bo = Ωu − Ωl.
Here Ωo = 10 and Bo = 2, so s → 2s/(s^2 + 100):
Ha(s) = H(s)|s→2s/(s^2+100)
      = (s^2 + 100)^2 / (s^4 + 2 s^3 + 204 s^2 + 200 s + 10^4)
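These analog transformations can be cross-checked numerically with scipy's low-pass prototype conversion helpers (a sketch of mine, not part of the original solution):

```python
from scipy import signal

b, a = [1.0], [1.0, 1.0, 1.0]            # prototype H(s) = 1 / (s^2 + s + 1)

print(signal.lp2lp(b, a, wo=10))         # (a) -> 100 / (s^2 + 10 s + 100)
print(signal.lp2hp(b, a, wo=1))          # (b) -> s^2 / (s^2 + s + 1)
print(signal.lp2hp(b, a, wo=10))         # (c) -> s^2 / (s^2 + 10 s + 100)
print(signal.lp2bp(b, a, wo=100, bw=10)) # (d) band-pass, center 100 rad/s, bandwidth 10 rad/s
print(signal.lp2bs(b, a, wo=10, bw=2))   # (e) band-stop, center 10 rad/s, bandwidth 2 rad/s
```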

3. Convert the single-pole LP Butterworth filter with system function
H(z) = 0.245 (1 + z^(−1)) / (1 − 0.509 z^(−1))
into a BPF with upper and lower cutoff frequencies ωu and ωl respectively. The LPF has a 3-dB bandwidth ωp = 0.2π. (Nov-07) (8)

Solution:

We have the standard low-pass to band-pass all-pass transformation formula given by
z^(−1) → − (z^(−2) − a1 z^(−1) + a2) / (a2 z^(−2) − a1 z^(−1) + 1)
whose coefficients a1 and a2 are fixed by ωl, ωu and ωp.
Applying this to the given transfer function, note that the resulting filter has zeros at z = ±1 and a pair of poles that depend on the choice of ωl and ωu.
For the choice made here, the filter has poles at z = ±j0.713 and hence resonates at ω = π/2.

The following observations are made:
• It is shown here how easy it is to convert one form of filter design to another form.
• All we require is a prototype low-pass filter design; frequency transformation steps convert it to any other form.


4. Explain the different structures for IIR filters. (May-09) (16)

IIR filters are represented by the system function
H(z) = [ Σ (k = 0 to M) bk z^(−k) ] / [ 1 + Σ (k = 1 to N) ak z^(−k) ]
and the corresponding difference equation is given by
y(n) = −Σ (k = 1 to N) ak y(n − k) + Σ (k = 0 to M) bk x(n − k)

Different realizations for IIR filters are,

1. Direct form-I

2. Direct form-II

3. Cascade form

4. Parallel form

5. Lattice form

• Direct form-I
This is a straight forward implementation of difference equation which
is very simple. Typical Direct form – I realization is shown below. The
upper branch is forward path and lower branch is feedback path. The
number of delays depends on presence of most previous input and output
samples in the difference equation.


• Direct form-II

The given transfer function H(z) can be expressed as
H(z) = Y(z)/X(z) = [ V(z)/X(z) ] · [ Y(z)/V(z) ]
where V(z) is an intermediate term. We identify
V(z)/X(z) = 1 / [ 1 + Σ (k = 1 to N) ak z^(−k) ]      (all poles)
Y(z)/V(z) = Σ (k = 0 to M) bk z^(−k)                  (all zeros)
The corresponding difference equations are
v(n) = x(n) − Σ (k = 1 to N) ak v(n − k)
y(n) = Σ (k = 0 to M) bk v(n − k)


This realization requires M+N+1 multiplications, M+N additions, and a maximum of {M, N} memory locations.
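A minimal sample-by-sample Direct Form II sketch (my own illustration, not from the notes): the single delay line w holds the intermediate signal v(n), which is shared by the all-pole and all-zero parts. The example coefficients are borrowed from problem 6 later in this unit.

```python
import numpy as np
from scipy import signal

def direct_form_ii(b, a, x):
    """Filter x with H(z) = B(z)/A(z) (a[0] assumed to be 1) in Direct Form II."""
    M, N = len(b) - 1, len(a) - 1
    w = np.zeros(max(M, N) + 1)                  # delay line for the intermediate v(n)
    y = np.zeros(len(x))
    for n, xn in enumerate(x):
        w[0] = xn - np.dot(a[1:], w[1:N + 1])    # v(n) = x(n) - sum a_k v(n-k)
        y[n] = np.dot(b, w[:M + 1])              # y(n) = sum b_k v(n-k)
        w[1:] = w[:-1]                           # shift the delay line
    return y

# y(n) = -0.1 y(n-1) + 0.2 y(n-2) + 3x(n) + 3.6x(n-1) + 0.6x(n-2)
b, a = np.array([3.0, 3.6, 0.6]), np.array([1.0, 0.1, -0.2])
x = np.ones(5)
print(direct_form_ii(b, a, x))
print(signal.lfilter(b, a, x))   # same result from scipy's reference implementation
```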

• Cascade Form
The transfer function of a system can be expressed as
H(z) = H1(z) H2(z) .... HK(z)
where each Hk(z) could be a first order or second order section realized in Direct form-II, i.e.
Hk(z) = (bk0 + bk1 z^(−1) + bk2 z^(−2)) / (1 + ak1 z^(−1) + ak2 z^(−2))
where K is the integer part of (N+1)/2.
Similar to the FIR cascade realization, the parameter b0 can be distributed equally among the K filter sections so that b0 = b10 b20 ... bK0. Second order sections are required to realize sections which have complex-conjugate poles with real coefficients. Pairing the two complex-conjugate poles with a pair of complex-conjugate zeros or real-valued zeros to form a subsystem of the type shown above is done arbitrarily. There is no specific rule used in the combination. Although all cascade realizations are equivalent for infinite precision arithmetic, the various realizations may differ significantly when implemented with finite precision arithmetic.

• Parallel form structure

In the expression for the transfer function, if N ≥ M we can express the system function as
H(z) = C + Σ (k = 1 to N) Ak / (1 − pk z^(−1)) = C + Σ (k = 1 to N) Hk(z)
where {pk} are the poles, {Ak} are the coefficients in the partial fraction expansion, and the constant C is defined as C = bN/aN. The system realization of the above form is shown below, where
Hk(z) = (bk0 + bk1 z^(−1)) / (1 + ak1 z^(−1) + ak2 z^(−2))
Once again the choice of using first-order or second-order sections depends on the poles of the denominator polynomial. If there are complex poles, which occur in conjugate pairs, then a second order section is a must in order to have real coefficients.

• Lattice Structure for IIR Systems:

Consider an all-pole system with system function
H(z) = 1 / AN(z) = 1 / [ 1 + Σ (k = 1 to N) aN(k) z^(−k) ]
The corresponding difference equation for this IIR system is
y(n) = −Σ (k = 1 to N) aN(k) y(n − k) + x(n)
or
x(n) = y(n) + Σ (k = 1 to N) aN(k) y(n − k)
For N = 1,
x(n) = y(n) + a1(1) y(n − 1)
which can be realized with a single lattice stage. We observe
f1(n) = x(n)
y(n) = f0(n) = f1(n) − k1 g0(n − 1) = x(n) − k1 y(n − 1),   with k1 = a1(1)
g1(n) = k1 f0(n) + g0(n − 1) = k1 y(n) + y(n − 1)
For N = 2,
y(n) = x(n) − a2(1) y(n − 1) − a2(2) y(n − 2)


This output can be obtained from a two-stage lattice filter as shown in the figure below:
f2(n) = x(n)
f1(n) = f2(n) − k2 g1(n − 1)
g2(n) = k2 f1(n) + g1(n − 1)
f0(n) = f1(n) − k1 g0(n − 1)
g1(n) = k1 f0(n) + g0(n − 1)
y(n) = f0(n) = g0(n)
     = f1(n) − k1 g0(n − 1)
     = f2(n) − k2 g1(n − 1) − k1 g0(n − 1)
     = f2(n) − k2 [ k1 f0(n − 1) + g0(n − 2) ] − k1 g0(n − 1)
     = x(n) − k2 [ k1 y(n − 1) + y(n − 2) ] − k1 y(n − 1)
     = x(n) − k1 (1 + k2) y(n − 1) − k2 y(n − 2)
Similarly,
g2(n) = k2 y(n) + k1 (1 + k2) y(n − 1) + y(n − 2)
We observe
a2(0) = 1;   a2(1) = k1 (1 + k2);   a2(2) = k2
An N-stage IIR filter realized in lattice structure is
fN(n) = x(n)
fm−1(n) = fm(n) − km gm−1(n − 1),   m = N, N−1, ..., 1
gm(n) = km fm−1(n) + gm−1(n − 1),   m = N, N−1, ..., 1
y(n) = f0(n) = g0(n)

5. Realize the following system function (Dec-08) (12)

H(Z) = 10 (1 − (1/2)Z^(−1)) (1 − (2/3)Z^(−1)) (1 + 2Z^(−1)) / [ (1 − (3/4)Z^(−1)) (1 − (1/8)Z^(−1)) (1 − (1/2 + j1/2)Z^(−1)) (1 − (1/2 − j1/2)Z^(−1)) ]

a). Direct form-I
b). Direct form-II
c). Cascade
d). Parallel

Solution:

H(Z) = 10 (1 − (7/6)Z^(−1) + (1/3)Z^(−2)) (1 + 2Z^(−1)) / [ (1 − (7/8)Z^(−1) + (3/32)Z^(−2)) (1 − Z^(−1) + (1/2)Z^(−2)) ]

H(Z) = 10 (1 + (5/6)Z^(−1) − 2Z^(−2) + (2/3)Z^(−3)) / (1 − (15/8)Z^(−1) + (47/32)Z^(−2) − (17/32)Z^(−3) + (3/64)Z^(−4))

a). Direct form-I


c). Cascade Form

H(z) = H1(z) H2(z)

where
H1(z) = (1 − (7/6)z^(−1) + (1/3)z^(−2)) / (1 − (7/8)z^(−1) + (3/32)z^(−2))
H2(z) = 10 (1 + 2z^(−1)) / (1 − z^(−1) + (1/2)z^(−2))


d). Parallel Form

H(z) = H1(z) + H2(z)
     = (−14.75 − 12.90 z^(−1)) / (1 − (7/8)z^(−1) + (3/32)z^(−2)) + (24.50 + 26.82 z^(−1)) / (1 − z^(−1) + (1/2)z^(−2))

6. Obtain the direct form – I, direct form-II, Cascade and parallel form
realization for the following system, y(n)=-0.1 y(n-1)+0.2y(n-
2)+3x(n)+3.6 x(n-1)+0.6 x(n-2)
(12). (May/june-07)
Solution:

The Direct form-I realization is done directly from the given input-output equation, as shown in the diagram below.


Direct form-II realization

Taking the Z-transform on both sides and finding H(z):
H(z) = Y(z)/X(z) = (3 + 3.6 z^(−1) + 0.6 z^(−2)) / (1 + 0.1 z^(−1) − 0.2 z^(−2))

Cascade form realization

The transfer function can be expressed as
H(z) = (3 + 0.6 z^(−1)) (1 + z^(−1)) / [ (1 + 0.5 z^(−1)) (1 − 0.4 z^(−1)) ]

which can be rewritten as H(z) = H1(z) H2(z), where
H1(z) = (3 + 0.6 z^(−1)) / (1 + 0.5 z^(−1))   and   H2(z) = (1 + z^(−1)) / (1 − 0.4 z^(−1))

Parallel form realization

The transfer function can be expressed as H(z) = C + H1(z) + H2(z), where
H(z) = −3 + 7/(1 − 0.4 z^(−1)) − 1/(1 + 0.5 z^(−1))

7. Convert the following pole-zero IIR filter into a lattice-ladder structure. (Apr-10)
H(Z) = (1 + 2Z^(−1) + 2Z^(−2) + Z^(−3)) / (1 + (13/24)Z^(−1) + (5/8)Z^(−2) + (1/3)Z^(−3))

Solution:

Given BM(Z) = 1 + 2Z^(−1) + 2Z^(−2) + Z^(−3)
and AN(Z) = 1 + (13/24)Z^(−1) + (5/8)Z^(−2) + (1/3)Z^(−3)
so that
a3(0) = 1;   a3(1) = 13/24;   a3(2) = 5/8;   a3(3) = 1/3
k3 = a3(3) = 1/3
Using the step-down recursion
a(m−1)(k) = [ am(k) − am(m) am(m − k) ] / [ 1 − am(m)^2 ]
For m = 3, k = 1:
a2(1) = [ a3(1) − a3(3) a3(2) ] / [ 1 − a3(3)^2 ] = (13/24 − (1/3)(5/8)) / (1 − (1/3)^2) = 3/8
For m = 3, k = 2:
a2(2) = k2 = [ a3(2) − a3(3) a3(1) ] / [ 1 − a3(3)^2 ] = (5/8 − (1/3)(13/24)) / (1 − 1/9) = ((45 − 13)/72) / (8/9) = 1/2
For m = 2, k = 1:
a1(1) = k1 = [ a2(1) − a2(2) a2(1) ] / [ 1 − a2(2)^2 ] = (3/8 − (1/2)(3/8)) / (1 − (1/2)^2) = (3/16) / (3/4) = 1/4
For the lattice structure: k1 = 1/4, k2 = 1/2, k3 = 1/3.

For ladder structure


Cm = bm − Σ (i = m+1 to M) Ci ai(i − m),   m = M, M−1, ..., 1, 0

C3 = b3 = 1
C2 = b2 − C3 a3(1) = 2 − 1·(13/24) = 1.4583
C1 = b1 − [ C2 a2(1) + C3 a3(2) ] = 2 − [ (1.4583)(3/8) + 5/8 ] = 0.8281
C0 = b0 − [ C1 a1(1) + C2 a2(2) + C3 a3(3) ] = 1 − [ 0.8281(1/4) + 1.4583(1/2) + 1/3 ] = −0.2695

To convert a lattice-ladder form into a direct form, we use an equation to obtain aN(k) from km (m = 1, 2, ..., N); then the equation for Cm is recursively used to compute bm (m = 0, 1, 2, ..., M).
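The k-coefficient (step-down) recursion used above is easy to automate; the sketch below (mine, not from the notes) reproduces k1 = 1/4, k2 = 1/2, k3 = 1/3 for this example:

```python
import numpy as np

def reflection_coeffs(a):
    """Step-down recursion: a = [1, a_N(1), ..., a_N(N)] -> [k_1, ..., k_N]."""
    a = np.array(a, dtype=float)
    N = len(a) - 1
    k = np.zeros(N)
    for m in range(N, 0, -1):
        k[m - 1] = a[m]                                  # k_m = a_m(m)
        if m > 1:
            # a_{m-1}(i) = (a_m(i) - k_m a_m(m-i)) / (1 - k_m^2), i = 1..m-1
            prev = (a[1:m] - k[m - 1] * a[m - 1:0:-1]) / (1 - k[m - 1] ** 2)
            a = np.concatenate(([1.0], prev))
    return k

print(reflection_coeffs([1, 13/24, 5/8, 1/3]))   # [0.25 0.5  0.33333333]
```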


UNIT III

FINITE IMPULSE RESPONSE DIGITAL FILTERS

1.What is the condition satisfied by linear phase FIR filter?(May/june-09)

The condition for constant phase delay are

Phase delay, α = (N-1)/2 (i.e., phase delay is constant)

Impulse response, h(n) = h(N-1-n) (i.e., impulse response is symmetric)


2.What are the desirable characteristics of the frequency response of window
function?(Nov-07,08)(nov-10)

Advantages:
1. FIR filters have exact linear phase.
2. FIR filters are always stable.
3. FIR filters can be realized in both recursive and non recursive structure.


4. Filters with any arbitrary magnitude response can be tackled using an FIR sequence.

Disadvantages:
1. For the same filter specifications, the order of an FIR filter design can be as high as 5 to 10 times that of an IIR design.
2. Large storage requirements needed.
3.Powerful computational facilities required for the implementation.

3.What is meant by optimum equiripple design criterion.(Nov-07)

The Optimum Equiripple design Criterion is used for designing FIR Filters with
Equal level filteration throughout the Design.

4.What are the merits and demerits of FIR filters?(Ap/may-08)

Merits: 1. FIR Filter is always stable.


2. FIR Filter with exactly linear phase can easily be designed.
Demerits: 1. High Cost.
2.Require more Memory.

5.For what type of filters frequency sampling method is suitable?(nov/dec-08)

FIR filters.

6.What are the properties of FIR filters?(Nov/Dec-09)

1. FIR Filter is always stable.


2. A Realizable filter can always be obtained.

7.What is known as Gibbs phenomenon?(ap/may-10,11)

In the filter design by Fourier series method the infinite duration impulse response
is truncated to finite duration impulse response at n=± (N-1/2). The abrupt
truncation of impulse introduces oscillations in the pass band and stop band. This
effect is known as Gibb’s phenomenon.


8.Mention various methods available for the design of FIR filter.Also list a few
window for the design of FIR filters.(May/june-2010)

There are three well known method of design technique for linear phase FIR filter.
They are

1. Fourier series method and window method

2. Frequency sampling method

3. Optimal filter design methods.

Windows: i.Rectangular ii.Hamming iii.Hanning iv.Blackman v.Kaiser

9.List any two advantages of FIR filters.(Nov/dec-10)

1. FIR filters have exact linear phase.


2. FIR filters are always stable.
3. FIR filters can be realized in both recursive and non recursive structure.

10.Mention some realization methods available to realize FIR filter(Nov/dec-


10)

i.Direct form. ii.Cascade form iii.Linear phase realization.

11.Mention some design methods available to design FIR filter.(Nov/dec-10)

There are three well known method of design technique for linear phase FIR filter.
They are

1. Fourier series method and window method

2. Frequency sampling method

3. Optimal filter design methods.

Windows: i.Rectangular ii.Hamming iii.Hanning iv.Blackman v.Kaiser

12.What are FIR filters?(nov/dec-10)

The specifications of the desired filter will be given in terms of ideal


frequency response Hd(ω). The impulse response hd(n) of desired filter can be

obtained by the inverse Fourier transform of Hd(ω), which consists of infinite


samples. The filters designed by selecting finite number of samples of impulse
response are called FIR filters.
13.What are the conditions to be satisfied for constant phase delay in linear
phase FIR filter?(Apr-06)

The condition for constant phase delay are

Phase delay, α = (N-1)/2 (i.e., phase delay is constant)

Impulse response, h(n) = h(N-1-n) (i.e., impulse response is symmetric)


14.What is the reason that FIR filter is always stable?(nov-05)

FIR filter is always stable because all its poles are at the origin.
15.What are the possible types of impulse response for linear phase FIR
filter?(Nov-11)

There are four types of impulse response for linear phase FIR filters

1. Symmetric impulse response when N is odd.

2. Symmetric impulse response when N is even.

3. Antisymmetric impulse response when N is odd

4. Antisymmetric impulse response when N is even.

16. Write the procedure for designing an FIR filter using windows. (May-06)

1. Choose the desired frequency response of the filter Hd(ω).
2. Take the inverse Fourier transform of Hd(ω) to obtain the desired impulse response hd(n).
3. Choose a window sequence w(n) and multiply hd(n) by w(n) to convert the infinite duration impulse response to a finite duration impulse response h(n).
4. The transfer function H(z) of the filter is obtained by taking the z-transform of h(n).
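A small sketch of this procedure (my own, with an assumed specification: low-pass, cutoff ωc = π/4, length M = 11, Hamming window):

```python
import numpy as np

M, wc = 11, np.pi / 4                    # assumed length and cutoff
alpha = (M - 1) / 2                      # delay that makes the filter causal
n = np.arange(M)

# Steps 1-2: ideal low-pass impulse response hd(n) = sin(wc(n - alpha)) / (pi(n - alpha))
hd = (wc / np.pi) * np.sinc(wc * (n - alpha) / np.pi)
# Step 3: multiply by a window sequence w(n) (Hamming here)
w = 0.54 - 0.46 * np.cos(2 * np.pi * n / (M - 1))
h = hd * w                               # finite-duration impulse response h(n)
print(np.round(h, 4))                    # Step 4: these are the coefficients of H(z)
```

scipy.signal.firwin implements the same window-method idea.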
17. Write the procedure for FIR filter design by the frequency sampling method. (May-05)

1. Choose the desired frequency response Hd(ω).
2. Take N samples of Hd(ω) to generate the sequence of frequency samples H(k).
3. Take the inverse DFT of H(k) to get the impulse response h(n).
4. The transfer function H(z) of the filter is obtained by taking the z-transform of the impulse response.

18.List the characteristic of FIR filter designed using window.(nov-04)

1. The width of the transition band depends on the type of window.

2. The width of the transition band can be made narrow by increasing the
value of N where N is the length of the window sequence.

3. The attenuation in the stop band is fixed for a given window, except in
case of Kaiser Window where it is variable.

19. List the features of the Hanning window spectrum. (Nov-04)

1. The main lobe width is equal to 8π/N.
2. The maximum side lobe magnitude is −31 dB.
3. The side lobe magnitude decreases with increasing ω.

20. Compare the rectangular window and the Hamming window. (Dec-07)

Rectangular window | Hamming window
1. The width of the main lobe in the window spectrum is 4π/N. | 1. The width of the main lobe in the window spectrum is 8π/N.
2. The maximum side lobe magnitude in the window spectrum is −13 dB. | 2. The maximum side lobe magnitude in the window spectrum is −41 dB.
3. In the window spectrum the side lobe magnitude slightly decreases with increasing ω. | 3. In the window spectrum the side lobe magnitude remains constant.
4. In an FIR filter designed using the rectangular window the minimum stopband attenuation is 22 dB. | 4. In an FIR filter designed using the Hamming window the minimum stopband attenuation is 51 dB.

21. Compare the Hamming window with the Kaiser window. (Nov-06)

Hamming window | Kaiser window
1. The main lobe width is equal to 8π/N and the peak side lobe level is −41 dB. | 1. The main lobe width and the peak side lobe level can be varied by varying the parameters α and N.
2. The low pass FIR filter designed will have a first side lobe peak of −53 dB. | 2. The side lobe peak can be varied by varying the parameter α.

Part –B(16 MARKS)

1. Design an ideal high-pass filter with frequency response
Hd(e^(jω)) = 1   for π/4 ≤ |ω| ≤ π
           = 0   for |ω| < π/4
using a Hanning window with M = 11, and plot the frequency response. (12) (Nov-07,09)
Solution:

hd(n) = (1/2π) [ ∫ (−π to −π/4) e^(jωn) dω + ∫ (π/4 to π) e^(jωn) dω ]
hd(n) = (1/πn) [ sin(πn) − sin(πn/4) ]   for −∞ ≤ n ≤ ∞ and n ≠ 0
hd(0) = (1/2π) [ ∫ (−π to −π/4) dω + ∫ (π/4 to π) dω ] = 3/4 = 0.75
hd(1) = hd(−1) = −0.225
hd(2) = hd(−2) = −0.159
hd(3) = hd(−3) = −0.075
hd(4) = hd(−4) = 0
hd(5) = hd(−5) = 0.045
The Hanning window function is given by

whn(n) = 0.5 + 0.5 cos( 2πn/(M − 1) ),   −(M − 1)/2 ≤ n ≤ (M − 1)/2
       = 0   otherwise
For M = 11:
whn(n) = 0.5 + 0.5 cos(πn/5),   −5 ≤ n ≤ 5

whn(0) = 1

whn(1) = whn(-1)=0.9045

whn(2)= whn(-2)=0.655

whn(3)= whn(-3)= 0.345

whn(4)= whn(-4)=0.0945

whn(5)= whn(-5)=0

h(n)= whn(n)hd(n)

h(n)=[0 0 -0.026 -0.104 -0.204 0.75 -0.204 -0.104 -0.026 0 0]

2. Design a filter with frequency response
Hd(e^(jω)) = e^(−j3ω)   for −π/4 ≤ ω ≤ π/4
           = 0          for π/4 < |ω| ≤ π
using a Hanning window with M = 7. (8) (Apr/08)

Solution:
The frequency response has a term e^(−jω(M−1)/2), which gives h(n) symmetrical about n = (M − 1)/2 = 3, i.e. we get a causal sequence.
hd(n) = (1/2π) ∫ (−π/4 to π/4) e^(−j3ω) e^(jωn) dω = sin( π(n − 3)/4 ) / ( π(n − 3) )
This gives
hd(0) = hd(6) = 0.075
hd(1) = hd(5) = 0.159
hd(2) = hd(4) = 0.22
hd(3) = 0.25


The Hanning window function values are given by


whn(0) = whn(6) =0
whn(1)= whn(5) =0.25
whn(2)= whn(4) =0.75
whn(3)=1
h(n)=hd(n) whn(n)

h(n) = [0  0.03975  0.165  0.25  0.165  0.03975  0]

3. Design a LP FIR filter using the frequency sampling technique, having a cutoff frequency of π/2 rad/sample. The filter should have linear phase and length 17. (12) (May-07)

The desired response can be expressed as
Hd(e^(jω)) = e^(−jω(M−1)/2)   for |ω| ≤ ωc
           = 0                otherwise
with M = 17 and ωc = π/2:
Hd(e^(jω)) = e^(−j8ω)   for 0 ≤ ω ≤ π/2
           = 0          for π/2 < ω ≤ π
Selecting ωk = 2πk/M = 2πk/17 for k = 0, 1, ..., 16:
H(k) = Hd(e^(jω)) | ω = 2πk/17
H(k) = e^(−j(2πk/17)·8)   for 0 ≤ 2πk/17 ≤ π/2
     = 0                  for π/2 < 2πk/17 ≤ π
i.e.
H(k) = e^(−j16πk/17)   for 0 ≤ k ≤ 17/4
     = 0               for 17/4 < k ≤ 17/2
The range for k can be adjusted to integers such that
0 ≤ k ≤ 4   and   5 ≤ k ≤ 8
The frequency samples are therefore
H(k) = e^(−j16πk/17)   for 0 ≤ k ≤ 4
     = 0               for 5 ≤ k ≤ 8
Using these values of H(k) we obtain h(n) from the equation
h(n) = (1/M) ( H(0) + 2 Σ (k = 1 to (M−1)/2) Re( H(k) e^(j2πkn/M) ) )
i.e.
h(n) = (1/17) ( 1 + 2 Σ (k = 1 to 4) Re( e^(−j16πk/17) e^(j2πkn/17) ) )
h(n) = (1/17) ( 1 + 2 Σ (k = 1 to 4) cos( 2πk(n − 8)/17 ) )   for n = 0, 1, ..., 16
• Even though k varies from 0 to 16, since we considered ω varying between 0 and π/2, only k values from 0 to 8 are considered.
• While finding h(n) we observe symmetry in h(n), such that n varying from 0 to 7 and from 9 to 16 gives the same set of h(n) values.
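A numerical sketch of this frequency-sampling design (my own check, not part of the original solution): build the 17 samples H(k) with the e^(−j16πk/17) phase from the passband, take the inverse DFT, and confirm that h(n) is real and symmetric.

```python
import numpy as np

M = 17
H = np.zeros(M, dtype=complex)
passband = np.arange(5)                           # k = 0..4
H[passband] = np.exp(-1j * 16 * np.pi * passband / 17)
H[M - passband[1:]] = np.conj(H[passband[1:]])    # conjugate symmetry so h(n) is real

h = np.fft.ifft(H).real                           # impulse response h(n), n = 0..16
print(np.round(h, 4))
print(np.allclose(h, h[::-1]))                    # linear phase: h(n) = h(M-1-n) -> True
```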


4.Design an Ideal Differentiator using


a) Rectangular window
(6)

b) Hamming window
(6)

with length of the system = 7.(Nov-09)

Solution:

From differentiator frequency characteristics

Hd(e jω) = jω   for −π ≤ ω ≤ π

hd(n) = (1/2π) ∫ from −π to π of jω e^(jωn) dω = cos(πn)/n   for −∞ ≤ n ≤ ∞ and n ≠ 0

hd(n) is an odd function, with hd(n) = −hd(−n) and hd(0) = 0.

a) rectangular window

h(n)=hd(n)wr(n)

h(1)=-h(-1)=hd(1)=-1

h(2)=-h(-2)=hd(2)=0.5

h(3)=-h(-3)=hd(3)=-0.33

h’(n)=h(n-3) for causal system

thus,

H ' ( z ) = 0.33 − 0.5 z −1 + z −2 − z −4 + 0.5 z −5 − 0.33z −6

Also, from the equation

Hr(e jω) = 2 Σ from n=0 to (M−3)/2 of h(n) sin( ω((M−1)/2 − n) )

For M=7 and h’(n) as found above we obtain this as



Hr(e jω) = 0.66 sin 3ω − sin 2ω + 2 sin ω

H(e jω) = jHr(e jω) = j(0.66 sin 3ω − sin 2ω + 2 sin ω)

b) Hamming window

h(n) = hd(n) wh(n)

where wh(n) is given by

wh(n) = 0.54 + 0.46 cos(2πn/(M−1))   for −(M−1)/2 ≤ n ≤ (M−1)/2
      = 0                             otherwise

For the present problem

wh(n) = 0.54 + 0.46 cos(πn/3)   for −3 ≤ n ≤ 3

The window function coefficients are given by, for n = −3 to +3:

wh(n) = [0.08  0.31  0.77  1  0.77  0.31  0.08]

Thus h’(n) = h(n−3) = [0.0267, −0.155, 0.77, 0, −0.77, 0.155, −0.0267]

Similar to the earlier case of the rectangular window, we can write the frequency response of the differentiator as

H(e jω) = jHr(e jω) = j(0.0534 sin 3ω − 0.31 sin 2ω + 1.54 sin ω)


• With the rectangular window, the effect of ripple is greater and the transition band width is smaller, compared with the Hamming window.
• With the Hamming window, the effect of ripple is smaller, whereas the transition band is wider. (A short numerical sketch of both designs follows.)
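The following NumPy sketch (illustrative only; variable names are not from the original) reproduces the two length-7 differentiator designs by windowing hd(n) = cos(πn)/n with a rectangular and a Hamming window.

import numpy as np

M = 7
n = np.arange(-(M - 1) // 2, (M - 1) // 2 + 1)           # -3 ... 3

# Ideal differentiator: hd(n) = cos(pi*n)/n for n != 0, hd(0) = 0
hd = np.where(n == 0, 0.0, np.cos(np.pi * n) / np.where(n == 0, 1, n))

wr = np.ones(M)                                           # rectangular window
wh = 0.54 + 0.46 * np.cos(2 * np.pi * n / (M - 1))        # Hamming window

h_rect = hd * wr      # [0.333 -0.5 1 0 -1 0.5 -0.333] -> h'(n) after delaying by 3
h_hamm = hd * wh      # [0.027 -0.155 0.77 0 -0.77 0.155 -0.027]

print(np.round(h_rect, 3))
print(np.round(h_hamm, 3))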

5.Justify Symmetric and Anti-symmetric FIR filters giving out Linear Phase
characteristics.(Apr-08)
(10)
Symmetry in filter impulse response will ensure linear phase
An FIR filter of length M with i/p x(n) & o/p y(n) is described by the
difference equation:
y(n) = b0 x(n) + b1 x(n−1) + ……. + b(M−1) x(n−(M−1)) = Σ from k=0 to M−1 of bk x(n−k)    -(1)

Alternatively, it can be expressed in convolution form


y(n) = Σ from k=0 to M−1 of h(k) x(n−k)    -(2)

i.e bk= h(k), k=0,1,…..M-1

Filter is also characterized by


H(z) = Σ from k=0 to M−1 of h(k) z^(−k)    -(3)

which is a polynomial of degree M−1 in the variable z^(−1). The roots of this polynomial constitute the zeros of the filter.

An FIR filter has linear phase if its unit sample response satisfies the condition

h(n)= ± h(M-1-n) n=0,1,…….M-1 -(4)


Incorporating this symmetry & anti symmetry condition in eq 3 we can


show linear phase characteristics of FIR filters

H ( z ) = h(0) + h(1) z −1 + h(2) z −2 + ........... + h( M − 2) z − ( M − 2 ) + h( M − 1) z − ( M −1)

If M is odd:

H(z) = h(0) + h(1)z^(−1) + ………. + h((M−1)/2) z^(−(M−1)/2) + h((M+1)/2) z^(−(M+1)/2)
       + h((M+3)/2) z^(−(M+3)/2) + ……….. + h(M−2) z^(−(M−2)) + h(M−1) z^(−(M−1))


= z^(−(M−1)/2) [ h(0) z^((M−1)/2) + h(1) z^((M−3)/2) + ……… + h((M−1)/2) + h((M+1)/2) z^(−1)
                 + h((M+3)/2) z^(−2) + ….. + h(M−1) z^(−(M−1)/2) ]
Applying symmetry conditions for M odd

h(0) = ± h(M−1)
h(1) = ± h(M−2)
.
.
h((M−1)/2) = ± h((M−1)/2)
h((M+1)/2) = ± h((M−3)/2)
.
.
h(M−1) = ± h(0)


H(z) = z^(−(M−1)/2) [ h((M−1)/2) + Σ from n=0 to (M−3)/2 of h(n){ z^((M−1−2n)/2) ± z^(−(M−1−2n)/2) } ]

Similarly, for M even:

H(z) = z^(−(M−1)/2) [ Σ from n=0 to M/2−1 of h(n){ z^((M−1−2n)/2) ± z^(−(M−1−2n)/2) } ]

6. What are the structures of FIR filter systems? Explain. (Dec-10)   (12)

An FIR system is described by,
y(n) = Σ from k=0 to M−1 of bk x(n−k)

Or equivalently, by the system function

H(Z) = Σ from k=0 to M−1 of bk Z^(−k)

where we can identify h(n) = bn for 0 ≤ n ≤ M−1, and h(n) = 0 otherwise.

Different FIR structures used in practice are,

1. Direct form
2. Cascade form
3. Frequency-sampling realization
4. Lattice realization

• Direct form
• It is non-recursive in structure


• As can be seen from the above implementation, it requires M−1 memory locations for storing the M−1 previous inputs
• It requires computationally M multiplications and M−1 additions per output point
• It is more popularly referred to as a tapped delay line or transversal system
• Efficient structures with linear phase characteristics are possible where h(n) = ± h(M−1−n)

• Cascade form
The system function H(Z) is factored into a product of second-order FIR systems

H(Z) = Π from k=1 to K of Hk(Z)

where Hk(Z) = bk0 + bk1 Z^(−1) + bk2 Z^(−2),   k = 1, 2 ….. K

and K = integer part of (M+1)/2

The filter parameter b0 may be equally distributed among the K filter sections, such that b0 = b10 b20 …. bK0, or it may be assigned to a single filter section. The zeros of H(z) are grouped in pairs to produce the second-order FIR sections. Pairs of complex-conjugate roots are grouped together so that the coefficients {bki} are real valued.
66
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIG
D GNAL PRO
OCESSING III YEAR / V SEM
E
ECE

In case of
o linear –phhase FIR fillter, the symmmetry in h(n)
h implies
that the zeros of H(z) alsoo exhibit a foorm of symm
metry. If zk and
a zk* are
paair of compllex – conjuggate zeros thhen 1/zk andd 1/zk* are also a pair
co
omplex –connjugate zeroos. Thus sim mplified fouurth order sections
s are
fo
ormed. This is
i shown bellow,

H k ( z ) = C k 0 (1 − z k z −1 )(1 − z k * z −1 )(1 − z −1 / z k )(1 − z −1 / z k * )


= C k 0 + C k1 z −1 + C k 2 z − 2 + C k1 z −3 + z − 4

• Frrequency saampling metthod


We
W can expresss system fuunction H(z) in terms of DFT
D samplees H(k)
whhich is givenn by

7
67
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIG
D GNAL PRO
OCESSING III YEAR / V SEM
E
ECE

N −1
1 H (k )
H ( z ) = (1 − z − N )
N
∑1−W − k −1
k =0 N z

Thhis form cann be realized with cascadde of FIR andd IIR structuures. The
1 N −1 H (k )
terrm (1-z-N) iss realized as FIR and thee term ∑
N k =0 1 − WN−k z −1
as IIR

strructure.

Th
he realizationn of the abovve freq samppling form shhows necesssity of
co
omplex arithm metic.

ncorporating symmetry inn h (n) and symmetry


In s prroperties of DFT
D of real
seequences the realization can
c be modiified to have only real cooefficients.

• Laattice realizzation
• Lattice realization
Lattice structures offer many interesting features:

1. Upgrading filter orders is simple. Only additional stages need to be added, instead of redesigning the whole filter and recalculating the filter coefficients.
2. These filters are computationally more efficient than other filter structures in filter bank applications (e.g. Wavelet Transform).
8
68
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIG
D GNAL PRO
OCESSING III YEAR / V SEM
E
ECE

3. Lattice filtters are less sensitive to finite word length


l effectts.

onsider
Co

Y ( z) m
H ( z) = = 1 + ∑ a m (i ) z −i
X ( z) i =1

m is the order of the FIR filter


f and am(0)=1
(

wh
hen m = 1 Y(z)/
Y X(z) = 1+ a1(1) z-1

y((n)= x(n)+ a1(1)x(n-1)

f1(n) is knownn as upper chhannel outpuut and r1(n)aas lower channel


ou
utput.

f0(n)=
( r0(n)=x(n)

Th
he outputs arre

f 1 (n) = f 0 (n) + k1 r0 (n − 1) 1a
r1 (n) = k1 f 0 (n) + r0 (n − 1) 1b
iff k1 = a1 (1),
) then f 1 ( n) = y ( n)

If m=2

9
69
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIGNAL PROCESSING III YEAR / V SEM
ECE

Y ( z)
= 1 + a 2 (1) z −1 + a 2 (2) z − 2
X ( z)
y (n) = x(n) + a 2 (1) x(n − 1) + a 2 (2) x(n − 2)
y (n) = f1 (n) + k 2 r1 (n − 1) (2)

Substituting 1a and 1b in (2)

y ( n ) = f 0 ( n ) + k1 r0 ( n − 1) + k 2 [ k1 f 0 ( n − 1) + r0 ( n − 2 )]
= f 0 ( n ) + k1 r0 ( n − 1) + k 2 k1 f 0 ( n − 1) + k 2 r0 ( n − 2 )]
sin ce f 0 ( n ) = r0 ( n ) = x ( n )
y ( n ) = x ( n ) + k1 x ( n − 1) + k 2 k1 x ( n − 1) + k 2 x ( n − 2 )]
= x ( n ) + ( k1 + k1 k 2 ) x ( n − 1) + k 2 x ( n − 2 )

We recognize

a 2 (1) = k1 + k1k 2
a 2 (1) = k 2

Solving the above equation we get

a 2 (1)
k1 = and k 2 = a 2 (2) (4)
1 + a 2 (2)

Equation (3) means that, the lattice structure for a second-order filter is
simply a cascade of two first-order filters with k1 and k2 as defined in
eq (4)

70
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIG
D GNAL PRO
OCESSING III YEAR / V SEM
E
ECE

Siimilar to aboove, an Mth order


o FIR filtter can be im
mplemented by lattice
strructures withh

M – Stages

he followingg system fun


7.Realize th nction usingg minimum number
n of
m
multiplicatio
on
1 1 1 1
H = 1 + Z −1 + Z −2 + Z −3 + Z −4 + Z −5
H(Z) (6)(Dec-10)
3 4 4 3
Solutiion:

⎡ 1 1 1 1 ⎤
We reecognize h(nn) = ⎢1, , , , , 1⎥
⎣ 3 4 4 3 ⎦

M is even
e = 6, andd we observe h (n) = h (M
M-1-n) h (n) = h (5-nn)

i.e h (0)
( = h (5) h (1) = h (4) h (2) = h (3)

71
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIGNAL PROCESSING III YEAR / V SEM
ECE

Direct form structure for linear phase FIR can be realized

8. Realize the following using system function using minimum number of


multiplication. (8)(May-09)
1 1 1 1 1 1
H(Z) = 1 + Z −1 + Z −2 + Z −3 − Z −5 − Z −6 − Z −7 − Z −8
4 3 2 2 3 4

Solution:

m=9

⎡ 1 1 1 1 1 1 ⎤
h(n) = ⎢1, , , , − , − , − , − 1⎥
⎣ 4 3 2 2 3 4 ⎦

Odd symmetry

h (n) = -h(M-1-n); h (n) = -h (8-n); h (m-1/2) = h (4) = 0

h (0) = -h (8); h (1) = -h (7); h (2) = -h (6); h (3) = -h (5)

72
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIG
D GNAL PRO
OCESSING III YEAR / V SEM
E
ECE

9. Realiize the diffeerence equattion in casccade form (May-07)


(6)
y (n) = x ( n) + 0.25
2 x ( n − 1) + 0.5 x ( n − 2) + 0.75 x ( n − 3) + x ( n − 4)

Solutiion:

Y ( z ) = X ( z ){1 + 0.25 z −1 + 0.5 z −2 + 0.75 z −3 + z −4 )


H ( z ) = 1 + 0.25 z −1 + 0.5 z −2 + 0.75 z _ 3 + z −4
H ( z ) = (1 − 1.12119 z −1 + 1.21881z −2 )(1 + 1.3719 z −1 + 0.821z −2 )
H ( z) = H1 ( z) H 2 ( z)

10.Given FIR Z ) = 1 + 2 Z −1 + 13 Z −2 obttain lattice structure


R filter H (Z s foor the same
(44)(Apr-08)
Solutiion:

73
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIGNAL PROCESSING III YEAR / V SEM
ECE

Given a1 (1) = 2 , a 2 ( 2) = 1
3

Using the recursive equation for

m = M, M-1… 2, 1

Here M=2 therefore m = 2, 1

If m=2 k 2 = a 2 ( 2) = 1
3

If m=1 k1 = a1 (1)

Also, when m=2 and i=1

a 2 (1) 2 3
a1 (1) = = =
1 + a 2 (2) 1 + 3 2
1

Hence k1 = a1 (1) = 3
2

UNIT IV

FINITE WORDLENGTH EFFECTS

1.Determine ‘Dead band’ of the filter.(May-07,Dec-07,Dec-09)

The limit cycle occur as a result of quantization effects in multiplications. The


amplitudes of output during a limit cycle are confined to a range of values that is
called the dead band of the filter.

2.Why rounding is preferred to truncation in realizing digital filter?(May-07)

74
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIGNAL PROCESSING III YEAR / V SEM
ECE

In digital system the product quantization is performed by rounding due to the


following desirable characteristics of rounding.

1. The rounding error is independent of the type of arithmetic.

2. The mean value of rounding error signal is zero.

3. The variance of the rounding error signal is least.

3.What is Sub band coding?(May-07)

Sub band coding is a method by which the signal (speech signal) is sub divided in
to several frequency bands and each band is digitally encoded separately.

4.Identify the various factors which degrade the performance of the digital
filter implementation when finite word length is used.(May-07,May-2010)

5.What is meant by truncation & rounding?(Nov/Dec-07)

The truncation is the process of reducing the size of binary number by discarding all
bits less significant than the least significant bit that is retained.

Rounding is the process of reducing the size of a binary number to finite word
sizes of b-bits such that, the rounded b-bit number is closest to the original
unquantized number.

6.What is meant by limit cycle oscillation in digital filters?(May-07,08,Nov-10)

In recursive system when the input is zero or some nonzero constant value, the
nonlinearities due to finite precision arithmetic operation may cause periodic
oscillations in the output. These oscillations are called limit cycles.

7.What are the three types of quantization error occurred in digital


systems?(Apr-08,Nov-10)

i. Input quantization error ii. Product quantization error iii. Coefficient quantization
error.

75
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIGNAL PROCESSING III YEAR / V SEM
ECE

8.What are the advantages of floating point arithmetic(Nov-08)

i. Larger dynamic range.

ii. Overflow in floating point representation is unlikely.

9.Give the IEEE754 standard for the representation of floating point


numbers.(May-09)

The IEEE-754 standard for 32-bit single precision floating point number is given by
Floating point number,

N f = (-1)* (2^E-127)*M

0 1 8 9 31

S E M

S = 1-bit field for sign of numbers

E = 8-bit field for exponent

M = 23-bit field for mantissa

10.Compare the fixed point & floating point arithmetic.(May/june-09)

Fixed point arithmetic Floating point arithmetic

1. The accuracy of the result is less due to 1. The accuracy of the result will be
smaller dynamic range. higher due to larger dynamic range.

2. Speed of processing is high. 2. Speed of processing is low.

3. Hardware implementation is cheaper. 3. Hardware implementation is costlier.

4. Fixed point arithmetic can be used for 4. Floating point arithmetic cannot be
real; time computations. used for real time computations

5. quantization error occurs only in 5. Quantization error occurs in both


multiplication multiplication and addition.

76
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIGNAL PROCESSING III YEAR / V SEM
ECE

11.Define Zero input limit cycle oscillation.(Dec-09)

In recursive system, the product quantization may create periodic oscillations


in the output. These oscillations are called limit cycles. If the system output enters a
limit cycle, it will continue to remain in limit cycle even when the input is made
zero. Hence these limit cycles are called zero input limit cycles.

12.What is the effect of quantization on pole locations?(Apr-2010)

Quantization of coefficients in digital filters lead to slight changes in their


value. These changes in value of filter coefficients modify the pole-zero locations.
Some times the pole locations will be changed in such a way that the system may
drive into instability.

13.What is the need for signal scaling?(Apr-2010)

To prevent overflow, the signal level at certain points in the digital filters
must be scaled so that no overflow occurs in the adder.

14.What are the results of truncation for positive & negative numbers?(Nov-
06)

consider the real numbers 5.6341432543653654 ,32.438191288


,−6.3444444444444

To truncate these numbers to 4 decimal digits, we only consider the 4 digits to the
right of the decimal point.

The result would be: 5.6341 ,32.4381 ,−6.3444 Note that in some cases,
truncating would yield the same result as rounding, but truncation does not round
up or round down the digits; it merely cuts off at the specified digit. The truncation
error can be twice the maximum error in rounding.

15.What are the different quantization methods?(Nov-2011)

The two types of quantization are: i. Truncation and ii. Rounding.

16. List out some of the finite word length effects in digital filter.(Apr-06)

1. Errors due to quantization of input data.

2. Errors due to quantization of filter coefficients.

77
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIGNAL PROCESSING III YEAR / V SEM
ECE

3. Errors due to rounding the product in multiplications.

4. Limit cycles due to product quantization and overflow in addition.

17. What are the different formats of fixed point’s representation?(May-05)

In fixed point representation, there are three different formats for


representing binary numbers.

1. Signed-magnitude format

2. One’s-complement format

3. Two’s-complement format.

In all the three formats, the positive number is same but they differ only in
representing negative numbers.

18. Explain the floating point representation of binary number.(Dec-06)

The floating numbers will have a mantissa part and exponent part. In a given
word size the bits allotted for mantissa and exponent are fixed. The mantissa is used
to represent a binary fraction number and the exponent is a positive or negative
binary integer. The value of the exponent can be adjusted to move the position of
binary point in mantissa. Hence this representation is called floating point. The
floating point number can be expressed as,

Floating point number, N f = M*2^E

Where M = Mantissa and E=Exponent.

19. What is quantization step size?(Apr-07,11)

In digital system, the number is represented in binary. With b-bit binary we


can generate 2^b different binary codes. Any range of analog value to be
represented in binary should be divided into 2^b levels with equal increment. The
2^b level are called quantization levels and the increment in each level is called
quantization step size. It R is the range of analog signal then.

78
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIG
D GNAL PRO
OCESSING III YEAR / V SEM
E
ECE

Quantizaation step sizze, q = R/2^


^b

20. What is meant


m by prroduct quan
ntization errror?(Nov-111)

In dig
gital computaation, the ouutput of multtipliers i.e., thhe products is
quuantized to finite
f word length
l in ordder to store thhem in regisster and to bee used in
suubsequent caalculation. The
T error duee to the quanntization of thhe output off
m
multipliers is referred to as
a product quantization
q e
error.

22. What is overflow


o lim
mit cycle?(M
May/june-100)

In fixeed point adddition the oveerflow occurrs when the sum exceedss the finite
w
word o the registeer used to stoore the sum. The overfloow in additioon may be
length of
leead to oscillaation in the output
o whichh is called ovverflow limiit cycle.

23. What is input


i quanttization erroor?(Nov-04))

The filter coeefficients aree computed to


T t infinite prrecision in thheory. But inn digital
coomputation thet filter coeefficients aree representedd in binary anda are stored in
reegisters. If a b bit registeer is used thee filter coeffi
ficients must be rounded or
trruncated to b bits, whichh produces ann error.

24.What are the differen


nt types of number
n rep
presentation
n?(Apr-11)

There are threee forms to represent


T r nuumbers in diggital computter or any othher digital
hardware.

i. Fixed poiint representation

ii. Floating point


p represeentation

iii. Block flo


oating point representation.

255. Define wh
hite noise?(Deec-06)

A statiionary random
m process is said
s to be whiite noise if itss power densitty spectrum
iss constant. Heence the whitee noise has flaat frequency response
r specctrum.

28. What aree the methods used to prevent


p overrflow?(Mayy-05)

There are two methods to prevent overflow

9
79
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIGNAL PROCESSING III YEAR / V SEM
ECE

i) saturation arithmetic
ii) scaling
29. What is meant by A/D conversion noise?(MU-04)

A DSP contains a device, A/D converter that operates on the analog input
x(t) to

produce xq(t) which is binary sequence of 0s and 1s. At first the signal x(t) is
sampled at regular intervals to produce a sequence x(n) is of infinite precision. Each
sample x(n) is expressed in terms of a finite number of bits given the sequence
xq(n). The difference signal e(n) =xq (n)-x(n) is called A/D conversion noise.

16 MARKS
1, Explain in detail about Number Representation.
Number Representation
In digital signal processing, (B + 1)-bit fixed-point numbers are usually
represented as two’s- complement signed fractions in the format

bo b-ib-2 …… b-B
The number represented is then
X = -bo + b-i2- 1 + b-22 - 2 + ••• + b-B 2-B (3.1)
where bo is the sign bit and the number range is —1 <X < 1. The advantage of this
representation is that the product of two numbers in the range from — 1 to 1 is
another number in the same range. Floating-point numbers are represented as
X = (-1) s m2 c (3.2)
where s is the sign bit, m is the mantissa, and c is the characteristic or
exponent. To make the representation of a number unique, the mantissa is
normalized so that 0.5 <m < 1.
Although floating-point numbers are always represented in the form of (3.2), the
way in which this representation is actually stored in a machine may differ. Since
m > 0.5, it is not necessary to store the 2- 1 -weight bit of m, which is always set.
Therefore, in practice numbers are usually stored as
X = (-1)s(0.5 + f)2 c (3.3)

where f is an unsigned fraction, 0 <f < 0.5.


Most floating-point processors now use the IEEE Standard 754 32-bit floating-
point format for storing numbers. According to this standard the exponent is stored
as an unsigned integer p where
p = c + 126 (3.4)
Therefore, a number is stored as

80
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIGNAL PROCESSING III YEAR / V SEM
ECE

X = (-1)s(0.5 + f )2p - 1 2 6 (3.5)


where s is the sign bit, f is a 23-b unsigned fraction in the range 0 <f < 0.5, and p
is an 8-b unsigned integer in the range 0 <p < 255. The total number of bits is 1 +
23 + 8 = 32. For example, in IEEE format 3/4 is written (-1)0 (0.5 + 0.25)2° so s =
0, p = 126, and f = 0.25. The value X = 0 is a unique case and is represented by
all bits zero (i.e., s = 0, f = 0, and p = 0). Although the 2-1-weight mantissa bit is
not actually stored, it does exist so the mantissa has 24 b plus a sign bit.

2, Explain about the Fixed-Point Quantization Errors and Floating Point


Quantization Errors
Fixed-Point Quantization Errors
In fixed-point arithmetic, a multiply doubles the number of significant bits. For
example, the product of the two 5-b numbers 0.0011 and 0.1001 is the 10-b number
00.000 110 11. The extra bit to the left of the decimal point can be discarded
without introducing any error. However, the least significant four of the remaining
bits must ultimately be discarded by some form of quantization so that the result
can be stored to 5 b for use in other calculations. In the example above this results
in 0.0010 (quantization by rounding) or 0.0001 (quantization by truncating). When
a sum of products calculation is performed, the quantization can be performed
either after each multiply or after all products have been summed with double-
length precision.
We will examine three types of fixed-point quantization—rounding, truncation,
and magnitude truncation. If X is an exact value, then the rounded value will be
denoted Q r (X), the truncated value Q t (X), and the magnitude truncated value
Q m t (X). If the quantized value has B bits to the right of the decimal point, the
quantization step size is
A = 2-B (3.6)
Since rounding selects the quantized value nearest the unquantized value, it gives a
value which is never more than ± A /2 away from the exact value. If we denote the
rounding error by
fr = Qr(X) - X (3.7)
then
AA
<f r < — (3.8)
2 - 2
Truncation simply discards the low-order bits, giving a quantized value that is
always less than or equal to the exact value so
- A < ft< 0 (3.9)
Magnitude truncation chooses the nearest quantized value that has a magnitude less
than or equal to the exact value so — A
<fmt <A (3 10)
.
The error resulting from quantization can be modeled as a random variable
uniformly distributed over the appropriate error range. Therefore, calculations with
roundoff error can be considered error-free calculations that have been corrupted

81
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIGNAL PROCESSING III YEAR / V SEM
ECE

by additive white noise. The mean of this A/2noise for rounding is


1f
m ( r — E{fr }A = x/ frdfr — 0 (3.11)
J-A/2
where E{} represents the operation of taking the expected value of a random
variable. Similarly, the variance of the noise for rounding is

1 (A/2 A2
a€2 — E{(fr - m€r2) } — — (fr - m€r2) dfr = — (3.12)
A 12
-A/2
Likewise, for truncation,
E{f } A
m
ft = t = -y
22 A2 2
a = E{(ft - mft)} = — (3.13)
m
fmt — E { f mt }— 0
and, for magnitude truncation
a 2 E{(f - m 2)A2 (3 14)
f-mt = mt m }=— .

Floating-Point Quantization Errors


With floating-point arithmetic it is necessary to quantize after both multiplications
and additions. The addition quantization arises because, prior to addition, the
mantissa of the smaller number in the sum is shifted right until the exponent of
both numbers is the same. In general, this gives a sum mantissa that is too long and
so must be quantized.
We will assume that quantization in floating-point arithmetic is performed by
rounding. Because of the exponent in floating-point arithmetic, it is the relative
error that is important. The relative error is defined as
Qr(X) - X e
e rr = ----------- =—r (3.15)
XX
Since X = (-1)sm2c, Qr(X) = (-1)s Qr(m)2c and
Q (m) - m e
r
er = ------------ =— (3.16)
mm
If the quantized mantissa has B bits to the right of the decimal point, | e| < A/2
where, as before, A = 2-B. Therefore, since 0.5 <m < 1,
|er I < A (3.17)
If we assume that e is uniformly distributed over the range from - A/2 to A/2 and
m is uniformly distributed over 0.5 to 1,
mSr = E \ — } = 0
m
a 2 = E {(—)2| = 2 f' A 4 dedm
er
I ' m/ J A J1/2J-a/2 m 2
A2

82
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIGNAL PROCESSING III YEAR / V SEM
ECE

= — = (0.167)2-2B (3.18)
6
In practice, the distribution of m is not exactly uniform. Actual measurements of
roundoff noise in [1] suggested that
al r « 0.23A2 (3.19)
while a detailed theoretical and experimental analysis in [2] determined
a2 « 0.18A2 (3.20)
From (3.15) we can represent a quantized floating-point value in terms of the
unquantized value and the random variable er using
Qr(X) = X(1 + er) (3.21)
Therefore, the finite-precision product X1X2 and the sum X1 + X2 can be written
f IX1X2) = X1X2U + e r ) (3.22)
and
fl(X1 + X2) = (X1 + X2 )(1 + er) (3.23)
where e r is zero-mean with the variance of (3.20).

4, Explain about Roundoff Noise.


Roundoff Noise:
To determine the roundoff noise at the output of a digital filter we will assume that
the noise due to a quantization is stationary, white, and uncorrelated with the filter
input, output, and internal variables. This assumption is good if the filter input
changes from sample to sample in a sufficiently complex manner. It is not valid for
zero or constant inputs for which the effects of rounding are analyzed from a limit
cycle perspective.
To satisfy the assumption of a sufficiently complex input, roundoff noise in
digital filters is often calculated for the case of a zero-mean white noise filter input
signal x(n) of variance a 1. This simplifies calculation of the output roundoff noise
because expected values of the form E{x(n)x(n — k)} are zero for k = 0 and
give a2 when k = 0. This approach to analysis has been found to give estimates of
the output roundoff noise that are close to the noise actually observed for other
input signals.
Another assumption that will be made in calculating roundoff noise is that the
product of two quantization errors is zero. To justify this assumption, consider the
case of a 16-b fixed-point processor. In this case a quantization error is of the order
2—1 5 , while the product of two quantization errors is of the order 2 —3 0 , which is
negligible by comparison.
If a linear system with impulse response g(n) is excited by white noise with
mean m x and variance a2 , the output is noise of mean [3, pp.788-790]

—2B
1 2

al = (3.28)

83
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIGNAL PROCESSING III YEAR / V SEM
ECE

TO
my = mx ^2g(n) (3.24)
n=—TO
and variance
TO
ay = al ^ g2 (n) (3.25)
n=—TO
Therefore, if g(n) is the impulse response from the point where a roundoff takes
place to the filter output, the contribution of that roundoff to the variance (mean-
square value) of the output roundoff noise is given by (3.25) with a 2 replaced with
the variance of the roundoff. If there is more than one source of roundoff error in
the filter, it is assumed that the errors are uncorrelated so the output noise variance
is simply the sum of the contributions from each source.

Roundoff Noise in FIR Filters


The simplest case to analyze is a finite impulse response (FIR) filter realized via
the convolution summation
N—1
y(n) =Eh(k)x(n — k)
(3.26)
k=0
When fixed-point arithmetic is used and quantization is performed after each
multiply, the result of the N multiplies is N-times the quantization noise of a single
multiply. For example, rounding after each multiply gives, from (3.6) and (3.12),
an output noise variance of
22 2 —2B
a = N— (3.27)
Virtually all digital signal processor integrated circuits contain one or more double-
length accumulator registers which permit the sum-of-products in (3.26) to be
accumulated without quantization. In this case only a single quantization is
necessary following the summation and
For the floating-point roundoff noise case we will consider (3.26) for N = 4 and
then generalize the result to other values of N. The finite-precision output can be
written as the exact output plus an error term e(n). Thus,
y(n) + e(n) = ({[h(0)x(n)[1 + E1(n)]
+ h(1)x(n -1)[1 + £2(n)]][1 + S3(n)]
+ h(2)x(n -2)[1 + £4(n)]}{1 + s5(n)}
+ h(3)x(n - 3)[1 + £6(n)])[1 + £j(n)] (3.29)
In (3.29), £1(n) represents the errorin the first product, £2(n) the
error in the second product, £3(n)
the error in the firstaddition, etc.Notice that it has been assumed that the products
are summed in
the order implied by the summation of (3.26).
Expanding (3.29), ignoring products of error terms, and recognizing y(n) gives
e(n) = h(0)x(n)[£1 (n) + £3(n) + £$(n) + £i(n)]

84
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIGNAL PROCESSING III YEAR / V SEM
ECE

+ h(1)x(n-1)[£2(n) + £3(n) + £5(n) + £j(n)]


+ h(2)x(n-2)[£4(n) + £5(n) + £i(n)]
+ h(3)x(n- 3)[£6(n)+ £j(n)]
(3.30)
Assuming that the input is white noise of variance a^ so that E{x(n)x(n - k)} is
zero for k = 0, and assuming that the errors are uncorrelated,
E{e2(n)} = [4h2(0)+ 4h2(1) + 3h2(2) + 2h2(3)]a2a2 (3.31)
In general, for any N,
N-1
aO = E{e 2 (n)} = Nh22(0) +J2 (N + 1 - a°a° r (3.32)
k)h (k)
k=1
Notice that if the order of summation of the product terms in the convolution
summation is changed, then the order in which the h(k)’s appear in (3.32)
changes. If the order is changed so that the h(k)with smallest magnitude is first,
followed by the next smallest, etc., then the roundoff noise variance is
minimized. However, performing the convolution summation in nonsequential
order greatly complicates data indexing and so may not be worth the reduction
obtained in roundoff noise.

Roundoff Noise in Fixed-Point IIR Filters


To determine the roundoff noise of a fixed-point infinite impulse response (IIR)
filter realization, consider a causal first-order filter with impulse response
h(n) = anu(n) (3.33)
realized by the difference equation
y(n) = ay(n - 1) + x(n) (3.34)
Due to roundoff error, the output actually obtained is
y(n) = Q{ay(n - 1) + x(n)} = ay(n - 1) + x(n) + e(n) (3.35)

85
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIGNAL PROCESSING III YEAR / V SEM
ECE

where e(n) is a random roundoff noise sequence. Since e(n) is injected at the
same point as the input, it propagates througha system with impulse
response h(n). Therefore, forfixed-point arithmetic
with rounding,the outputroundoff noise variance from (3.6), (3.12), (3.25),
and (3.33) is
A2 “ A2 “ 2-2B 1
a 22o = — > h 2 (n)()2 = — > a 22nn = ---------------------------------- (3.36)
12 ^ 12 ^ 12 1 - a 2
n=—<x n=0
With fixed-point arithmetic there is the possibility of overflow following
addition. To avoid overflow it is necessary to restrict the input signal amplitude.
This can be accomplished by either placing a scaling multiplier at the filter
input or by simply limiting the maximum input signal amplitude. Consider the
case of the first-order filter of (3.34). The transfer function of this filter is
,v, Y(e j m ) 1
H(e j m ) = .m = m (3.37)
X(eJ ) eJ — a
so
\H(e j m )\ 2 = --- -2—1 ------------------------------- (3.38)
1 + a — 2a cos,)
and
,7, 1
|H(e^)|max = ---- — (3.39)
1 — \a\
The peak gain of the filter is 1/(1 — \a\) so limiting input signal amplitudes to
\x(n)\ < 1 — \ a | will make overflows unlikely.
An expression for the output roundoff noise-to-signal ratio can easily be
obtained for the case where the filter input is white noise, uniformly distributed
over the interval from —(1 — \ a \) to (1 — \ a \) [4,5]. In this case

a
? 1 f 1 —\ a x\ ?d x 1 ( 1 — a ? ) (3 40)
x = 21—a)
2(1 — a ) =3 \ \3 .
\ \ J — (1 — \a \)
so, from (3.25),
2 1 (1 — \a \)2
ayy 2 = 3 , 2 (3'41)
31— a2
Combining (3.36) and (3.41) then gives

0^ = ( 2 _—l^\ (3^0L^ = 2—B (3


42)
a2 \12 1 — a 2 )\ (1 — \a \) 2 ) 12 (1— \ a \ ) 2 (
. )

Notice that the noise-to-signal ratio increases without bound as \ a \ ^ 1.


Similar results can be obtained for the case of the causal second-order filter
realized by the difference equation
y(n) = 2r cos(0)y(n — 1) — r2y(n — 2) + x(n)
(3.43)
j0
This filter has complex-conjugate poles at re± and impulse response

86
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIGNAL PROCESSING III YEAR / V SEM
ECE

h(n) =r n sin[(n + 1)0] u(n)


--------------------------- (3.44)
sin (0)
Due to roundoff error, the output actually obtained is
y(n) = 2r cos(0)y(n — 1) — r2y(n — 2) + x(n) + e(n) (3.45)

87
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIGNAL PROCESSING III YEAR / V SEM
ECE

There are two noise sources contributing to e(n) if quantization is performed after
each multiply, and there is one noise source if quantization is performed after
summation. Since
2 1
1 — 2 2 2 2 (3.46
n= 2 r (1 + r ) — 4r )
— oo cos (9)
the output roundoff
noise is
2 2B1 + 1
a2 = V r 2 (3.47
12 21—r 2 (1 + r 2 )2 — 4r2 )
cos (9)
where V = 1 for quantization after summation, and V = 2 for quantization
after each multiply. To obtain an output noise-to-signal ratio we note that

H(e j w ) 1
jm (3.48
= 12——2r j 2 mcos(9)e + )
r e
and, using the approach of
[6],
iH(emmax = 1
(3.49
2 sat ( c o s ( 9 ) ^ — cos(9) 2 2
sin 2 )
4r + (9)

wher
e I >1
sat (i) — 1<I<1 (3.50
= )
Following the same approach as for the first-order
case then gives
2 2 2 B 1+r 2 3
V 12 1 — r 2 (1 + r 2 )2 — 4r2
y cos2 (9)
1
X 2 1— 2 (3.51
2
4r sat (^cos (9) cos( + sin(9) )
) 9) r 2
2r
Figure3.1 is a contour plot showing the noise-to-signal ratio of (3.51) for v = 1
in units of the noise variance of a single quantization, 2—2 B /12. The plot is
symmetrical about 9 = 90°, so only the range from 0° to 90° is shown. Notice
that as r ^ 1, the roundoff noise increases without bound. Also notice that the
noise increases as 9 ^ 0°.
It is possible to design state-space filter realizations that minimize fixed-point
roundoff noise [7] - [10]. Depending on the transfer function being realized, these
structures may provide a roundoff noise level that is orders-of-magnitude lower
than for a nonoptimal realization. The price paid for this reduction in roundoff
noise is an increase in the number of computations required to implement the
filter. For an Nth-order filter the increase is from roughly 2N multiplies for a
direct form realization to roughly (N + 1)2 for an optimal realization. However,
if the filter is realized by the parallel or cascade connection of first- and second-
order optimal subfilters, the increase is only to about 4N multiplies. Furthermore,
near-optimal realizations exist that increase the number of multiplies to only
about 3N [10].

88
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net

0.010.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0 0.99


0.9
Normaliized fixed-pooint roundofff noise variaance.
5. Explain ab bout Limit Cycle
C Oscilllations.
L
Limit Cycle Oscillationss:
A limit cyclee, sometimess referred to as a multipliier roundofff limit cycle, is a low-
leevel oscillattion that cann exist in an
a otherwisee stable filteer as a resuult of the
n
nonlinearity associated with
w roundinng (or truncating) internnal filter callculations
[11]. Limit cy ycles requiree recursion to
t exist and do
d not occurr in nonrecurrsive FIR
fi
filters. As an example of a limit cyclee, consider thhe second-order filter reealized by
7 5
y = Qr{ ^y
y(n) y(n —1) — 8y(n
8 — 2) + x(n)
where Q r {} represents quantization
w q n by rounding. This is sttable filter with
w poles
mplementatioon of this fillter with 4-b (3-b and
att 0.4375 ± j 0.6585. Connsider the im
a sign bit) tw
wo’s complem ment fixed-ppoint arithmetic, zero innitial conditioons (y(—
1 ) = y(—2) = 0), and ann input sequuence x(n) = |S(n), whhere S(n) iss the unit
mpulse orr unit saample. Thhe followiing sequeence is obtained;
im


Notice that while the input is zero except for the first sample, the output oscillates
with amplitude 1/8 and period 6.
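The limit cycle can be reproduced with a few lines of Python. The sketch below assumes the 7/8 and -5/8 coefficients quoted above and a 4-bit (1/8-step) rounding quantizer; the impulse amplitude 3/8 is an assumption, since that value is garbled in the source.

import numpy as np

A = 1 / 8                                    # 4-bit (3 fractional bits) step size
def Qr(v):                                   # quantization by rounding
    return np.round(v / A) * A

y1 = y2 = 0.0                                # zero initial conditions
x = [3 / 8] + [0.0] * 19                     # impulse input (amplitude assumed 3/8)
out = []
for xn in x:
    y = Qr(7 / 8 * y1 - 5 / 8 * y2 + xn)     # y(n) = Qr{7/8 y(n-1) - 5/8 y(n-2) + x(n)}
    out.append(y)
    y1, y2 = y, y1

print(out)   # after the transient the output repeats with amplitude 1/8 and period 6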
Limit cycles are primarily of concern in fixed-point recursive filters. As long as
floating-point filters are realized as the parallel or cascade connection of first- and
second-order subfilters, limit cycles will generally not be a problem since limit
cycles are practically not observable in first- and second-order systems
implemented with 32-b floating-point arithmetic [12]. It has been shown that such
systems must have an extremely small margin of stability for limit cycles to exist
at anything other than underflow levels, which are at an amplitude of less than
10 — 3 8 [12]. There are at least three ways of dealing with limit cycles when fixed-
point arithmetic is used. One is to determine a bound on the maximum limit cycle
amplitude, expressed as an integral number of quantization steps [13]. It is then
possible to choose a word length that makes the limit cycle amplitude acceptably
low. Alternately, limit cycles can be prevented by randomly rounding calculations
up or down [14]. However, this approach is complicated to implement. The third
approach is to properly choose the filter realization structure and then quantize
the filter calculations using magnitude truncation [15,16]. This approach has the
disadvantage of producing more roundoff noise than truncation or rounding [see
(3.12)—(3.14)].

6, Explain about Overflow Oscillations.


With fixed-point arithmetic it is possible for filter calculations to overflow. This
happens when two numbers of the same sign add to give a value having
magnitude greater than one. Since numbers with magnitude greater than one are
not representable, the result overflows. For example, the two’s complement
numbers 0.101 (5/8) and 0.100 (4/8) add to give 1.001 which is the two’s
complement representation of -7/8.
The overflow characteristic of two’s complement arithmetic can be represented
as R{} where
R{X} = X − 2    if X ≥ 1
     = X         if −1 ≤ X < 1    (3.71)
     = X + 2     if X < −1
For the example just considered, R{9/8} = —7/8.
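The wraparound of completed two's-complement sums can be expressed in one line; the sketch below (illustrative helper name) implements R{·} of (3.71) and reproduces the 5/8 + 4/8 example:

def wrap_twos_complement(x):
    # Two's-complement overflow characteristic R{x}: wrap into [-1, 1).
    return ((x + 1.0) % 2.0) - 1.0

print(wrap_twos_complement(5/8 + 4/8))   # 9/8 wraps to -7/8
print(wrap_twos_complement(-9/8))        #      wraps to  7/8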
An overflow oscillation, sometimes also referred to as an adder overflow
limit cycle, is a high- level oscillation that can exist in an otherwise stable
fixed-point filter due to the gross nonlinearity associated with the overflow of
internal filter calculations [17]. Like limit cycles, overflow oscillations require
recursion to exist and do not occur in nonrecursive FIR filters. Overflow
oscillations also do not occur with floating-point arithmetic due to the virtual
impossibility of overflow.
As an example of an overflow oscillation, once again consider the filter of
(3.69) with 4-b fixed-point two’s complement arithmetic and with the two’s
complement overflow characteristic of (3.71):

75
y(n) = Qr\R 8y(n -1) - 8y(n - 2) + x(n) (3.72)

In this case we apply the


input

[Type text] Visit : www.EasyEngineeering.net


Visit : www.EasyEngineeering.net

35
x(n) = -4&(n) - ^&(n -1)
y ( 4) = Qr R
3 = Qr
5 ,, (3.74)
0, 0, ■ (3.73)
4 8
One way to avoid overflow oscillations is to scale the filter calculations so as to render overflow impossible. However,
this may unacceptably restrict the filter dynamic range. Another method is to
force completed sums-of- products to saturate at ±1, rather than overflowing
[18,19]. It is important to saturate only the completed sum, since intermediate
overflows in two’s complement arithmetic do not affect the accuracy of the final
result. Most fixed-point digital signal processors provide for automatic saturation
of completed sums if their saturation arithmetic feature is enabled. Yet
another way to avoid overflow oscillations is to use a filter structure for which
any internal filter transient is guaranteed to decay to zero [20]. Such structures are
desirable anyway, since they tend to have low roundoff noise and be insensitive
to coefficient quantization [21].

7, Explain about Coefficient Quantization Error.

Coefficient Quantization Error:


FIGURE: Realizable pole locations for the difference equation of (3.76).

The sparseness of o realizablee pole locations near z = ± 1 will reesult in a larrge coefficieent quantizattion error for
poles in this regiion.
Figuure3.4 gives an alternattive structurre to (3.77) for realizinng the transsfer functionn of (3.76). Notice thaat
quanntizing the cooefficients of o this structuure correspoonds to quanntizing X r annd Xi. As shhown in Fig.33.5 from [5]],
this results in a uniform griid of realizaable pole loccations. Theerefore, largee coefficientt quantizatioon errors aree
avoidded for all pole
p locations.
It is well established that fillter structurees with low roundoff
r noise tend to be
b robust to coefficient quantization
q n,
and visa
v versa [222]- [24]. Fo or this reasonn, the uniformm grid struccture of Fig.33.4 is also poopular becauuse of its low
w
rounndoff noise. Likewise,
L th
he low-noisee realizationss of [7]- [100] can be exppected to bee relatively insensitive too
coefffficient quanntization, and d digital wavve filters and lattice filtters that are derived from m low-sensitivity analogg
strucctures tend too have not only low coeffficient sensitivity, but alsoa low rounndoff noise [25,26].
[
It is well knownn that in a high-order polynomial with clustered roots, thhe root locaation is a veery sensitivee
function of the polynomial coefficiennts. Thereforre, filter pooles and zerros can be much morre accuratelyy
contrrolled if highher order filtters are realiized by breakking them upp into the paarallel or casscade connecction of firstt-
and second-ordeer subfilters. One excepption to thiss rule is thee case of linnear-phase FIR F filters in i which thee
symm metry of the polynomiaal coefficiennts and the spacing of the filter zeros z aroundd the unit circle usuallyy
permmits an accepptable direct realization usingu the connvolution suummation.
Giveen a filter strructure it is necessary
n too assign the ideal
i pole annd zero locattions to the realizable
r locations. This
is geenerally donne by simply yrounding orr truncatingtthe filter coeefficients to the available number of o bits, or byy
assiggning the ideeal pole and zero locatioons to the neearest realizaable locationns. A more complicated alternative
a is
to coonsider the original
o filterr design probblem as a prooblem in disscrete


FIGURE 3.4: Alternate realization structure.

FIGURE 3.5: Realizable pole locations for the alternate realization structure.

optimmization, annd choose the realizable pole and zeero locationss that give thhe best approximation too the desiredd
filterr response [227]- [30].

1.6 Realizatiion Consideerations

Lineear-phase FIIR digital filters


f can generally be b implemennted with acceptable coefficient quantizationn
sensiitivity usingg the directt convolutioon sum methhod. When implementeed in this way w on a digital
d signaal
processor, fixedd-point arith hmetic is noot only accceptable but may actuaally be prefferable to floating-poin
fl nt
arithhmetic. Virtuually all fixeed-point digiital signal processors acccumulate a sum of products in a double-length
d h
accuumulator. Thhis means th hat only a single
s quanttization is necessary
n to compute ann output. Flloating-poinnt
arithhmetic, on thhe other hand d, requires a quantizationn after everyy multiply annd after everry add in thee convolutionn
summ mation. Witth 32-b floaating-point arithmetic theset quantiizations intrroduce a smmall enough error to bee
insiggnificant for many appliccations.

Visit : www.EasyEngineeering.net
When realizing IIR filters, either a parallel or :cascade
Visit connection of first- and second-order subfilters is almost
www.EasyEngineeering.net
always preferable to a high-order direct-form realization. With the availability of very low-cost floating-point
digital signal processors, like the Texas Instruments TMS320C32, it is highly recommended that floating-point
arithmetic be used for IIR filters. Floating-point arithmetic simultaneously eliminates most concerns regarding
scaling, limit cycles, and overflow oscillations. Regardless of the arithmetic employed, a low roundoff noise
structure should be used for the second- order sections. Good choices are given in [2] and [10]. Recall that
realizations with low fixed-point roundoff noise also have low floating-point roundoff noise. The use of a low
roundoff noise structure for the second-order sections also tends to give a realization with low coefficient
quantization sensitivity. First-order sections are not as critical in determining the roundoff noise and coefficient
sensitivity of a realization, and so can generally be implemented with a simple direct form structure.

UNIT V MULTIRATE SIGNAL PROCESSING

2 MARKS

1. What is the need for multirate signal processing?


In real time data communication we may require more than one sampling rate for processing data in such a
cases we go for multi-rate signal processing which increase and/or decrease the sampling rate.

2. Give some examples of multirate digital systems.


Decimator and interpolator

3. Write the input output relationship for a decimator.


Fy = Fx/D

4. Write the input output relationship for an interpolator.


Fy = IFx

5. What is meant by aliasing?


The original shape of the signal is lost due to under sampling. This is called aliasing.

6. How can aliasing be avoided?


Placing a LPF before down sampling.

7. How can sampling rate be converted by a factor I/D.


Cascade connection of interpolator and decimator.

8. What is meant by sub-band coding?


It is an efficient coding technique by allocating lesser bits for high frequency signals and more bits for low
frequency signals.

9. What is meant by up sampling?


Increasing the sampling rate.

10. What is meant by down sampling?


Decreasing the sampling rate.

11. What is meant by decimator?


Down sampling and a anti-aliasing filter.

12. What is meant by interpolator?


An anti-imaging filters and Up sampling.

13. What is meant by sampling rate conversion? Visit : www.EasyEngineeering.net


Changing one sampling rate : www.EasyEngineeering.net
Visit to other sampling rate is called sampling rate
conversion.

14. What are the sections of QMF.


Analysis section and synthesis section.

15. Define mean.


mxn = E[xn] = ∫ x pxn(x, n) dx

16. Define variance.


σxn² = E[{xn − mxn}²]

17. Define cross correlation of random process.


Rxy(n, m) = ∫∫ x y* pxn,ym(x, n, y, m) dx dy
18. Define DTFT of cross correlation
Γxy(e^jω) = Σ over l of rxy(l) e^(−jωl)

19. What is the cutoff frequency of Decimator?


Pi/M where M is the down sampling factor

20. What is the cutoff frequency of Interpolator?


Pi/L where L is the UP sampling factor.

21. What is the difference in efficient transversal structure?


Number of delayed multiplications are reduced.

22. What is the shape of the white noise spectrum?


Flat frequency spectrum.

16 MARKS

1, Explain about Decimation and Interpolation.

Definition.
Given an integer D, we define the downsampling operator S↓D, shown in the figure, by the following relationship:

y[n] = x[nD]

The operator S↓D decreases the sampling frequency by a factor of D, by keeping one sample out of every D samples.

Figure: Upsampling

Figure: Downsampling


Example
Let x = [ …, 1, 2, 3, 4, 5, 6, … ], then y[n] = S↓2 x[n] is given by
y[n] = [ …, 1, 3, 5, 7, … ]
Sampling Rate Conversion by a Rational Factor
Consider the problem of designing an algorithm for resampling a
digital signal x[n] from the original rate Fx (in Hz) into a rate Fy = (L/D)Fx, with L
and D integers. For example, we have a signal at telephone quality, F x = 8 kHz, and
we want to resample it at radio quality, F y = 22 kHz. In this case, clearly L = 11 and
D = 4.
First consider two particular cases, interpolation and decimation, where we
upsample and downsample by an integer factor without creating aliasing or image
frequencies.
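A minimal sketch of rational-rate conversion by L/D (upsample by L, low-pass filter, downsample by D), using scipy; the cutoff π/max(L, D) follows the decimator and interpolator rules quoted in the two-mark answers. The function and parameter names, and the use of scipy.signal, are assumptions for illustration, not from the original text.

import numpy as np
from scipy import signal

def resample_rational(x, L, D, ntaps=121):
    # Resample x by L/D: upsample by L, low-pass at pi/max(L, D), downsample by D.
    up = np.zeros(len(x) * L)
    up[::L] = x                                   # insert L-1 zeros between samples
    h = signal.firwin(ntaps, 1.0 / max(L, D))     # cutoff = pi/max(L, D) rad/sample
    y = signal.lfilter(L * h, 1.0, up)            # gain L restores the amplitude
    return y[::D]                                 # keep every D-th sample

# Telephone quality (8 kHz) to 22 kHz: L = 11, D = 4
x = np.sin(2 * np.pi * 440 * np.arange(0, 0.01, 1 / 8000))
y = resample_rational(x, 11, 4)
print(len(x), len(y))                             # length ratio ~ 11/4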

Decimation by an Integer Factor D


We have seen in the previous section that simple downsampling decreases the sampling frequency by a factor D. In this operation, all frequencies of X(ω) = DTFT{x[n]} above π/D cause aliasing, and therefore they have to be filtered out before downsampling the signal. Thus we have the scheme in the figure, where the signal is downsampled by a factor D without aliasing.

Interpolation by an Integer Factor L


As in the case of decimation, upsampling by a factor L alone introduces artifacts in
the frequency domain as image frequencies. Fortunately, as seen in the previous
section, all image frequencies are outside the interval [-ttIL, +ttIL], and thev can
be eliminated by filtering the signal after downsampling. This is shown in Figure i


2, Explain about Multistage Implementation of Digital Filters.

I Multistage Implementation of Digital filters


In some applications we want to design filters
where the bandwidth is just a small fraction of the
overall sampling rate. For example, suppose we
want to design a lowpass filter with bandwidth of
the order of a few hertz and a sampling frequency
of the order of several kilohertz. This filter would
require a very sharp transition region in the digital
frequency a>, thus requiring a high-complcxity
filter.

Example <
As an example of application, suppose you want
to design a Filter with the fallowing
specifications:
Passband Fp = 450 Hz

[Type text] Visit : www.EasyEngineeering.net


Visit : www.EasyEngineeering.net

Stopband F s =500 Hz
Sampling frequency F s ~96 kHz
Notice that the stopband is several orders of
magnitude smaller than the sampling frequency.
This leads to a filter with a very short transition
region of high complexity. In
Speech signals
© From prehistory to the new media of the future, speech has
been and will be a primary form of communication between
humans.
© Nevertheless, there often occur conditions under which we
measure and then transform the speech to another form, speech
signal, in order to enhance our ability to communicate.
© The speech signal is extended, through technological media such
as telephony, movies, radio, television, and now Internet. This
trend reflects the primacy of speech communication in human
psychology.
© “Speech will become the next major trend in the personal
computer market in the near future.”
Speech signal processing
© The topic of speech signal processing can be loosely defined as
the manipulation of sampled speech signals by a digital processor
to obtain a new signal with some desired properties.
Speech signal processing is a diverse field that relies on knowledge of
language at the levels of Signal processing
Acoustics (P¥)
Phonetics
( ^ ^ ^ ) Language-
independent Phonology
(^^)
Morphology ( i ^ ^ ^ )
Syntax
( ^ , £ ) Languag
e-dependent
Semantics
(\%X¥)
Pragmatics ( i f , f f l ^ )
7 layers for describing speech From Speech to Speech
Signal, in terms of Digital Signal Processing
At-*
■ Acoustic (and
perceptual) features
{traits)
- fundamental
freouency (FO)
(pitch)
- amplitude (loudness)
- spectrum (timber)

Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net

It is based on the fact that


- Most of energy between 20 Hz to about 7KHz ,
- Human ear sensitive to energy between 50 Hz and 4KHz
© In terms of acoustic or perceptual, above features are considered.
© From Speech to Speech Signal, in terms of Phonetics (Speech
production), the digital model of Speech Signal will be
discussed in Chapter 2.
© Motivation of converting speech to digital signals:

Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net

Speech coding, A PC and SBC pitch,


Adaptive predictive coding (APC) is a fine structure
technique used for speech coding, that is data compression of spccch

Typic
al
Voice
d
speec
h
signals APC assumes that the input speech signal is repetitive with a
period significantly longer than the average frequency content. Two
predictors arc used in APC. The high frequency components (up to 4 kHz)
are estimated using a 'spectral’ or 'formant’ prcdictor and the low
frequency components (50‐200 Hz) by a ‘pitch’ or ‘fine structure’
prcdictor (see figure 7.4). The spcctral estimator may he of order 1‐ 4 and
the pitch estimator about order 10. The low‐frequency components of
the spccch signal are due to the movement of the tongue, chin and
spectral
envelope, formants

Figure 7.4 Encoder for adaptive, predictive coding of speech signals.


The decoder is mainly a mirrored version of the encoder

The high-frequency components originate from the vocal chords and the
noise-like sounds (like in ‘s’) produced in the front of the mouth.
The output signal y(n)together with the predictor parameters, obtained
adaptively in the encoder, are transmitted to the decoder, where the spcech
signal is reconstructed. The decoder has the same structure as the encoder
but the predictors arc not adaptive and arc invoked in the reverse order.
The prediction parameters are adapted for blocks of data corresponding to
for instance 20 ms time periods.
A PC' is used for coding spcech at 9.6 and 16 kbits/s. The algorithm
works well in noisy environments, but unfortunately the quality of the
processed speech is not as good as for other methods like CELP described
below.

3, Explain about Subband Coding.


Another coding method is sub-band coding (SBC) (see Figure 7.5)

Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net

which belongs to the class of waveform coding methods, in which the


frequency domain properties of the input signal arc utilized to achieve data
compression.
The basic idea is that the input speech signal is split into sub-bands using band-pass filters. The sub-band signals are then encoded using ADPCM or other techniques. In this way, the available data transmission capacity can be allocated between bands according to perceptual criteria, enhancing the speech quality as perceived by listeners. A sub-band that is more 'important' from the human listening point of view can be allocated more bits in the data stream, while less important sub-bands will use fewer bits.
A typical setup for a sub-band coder would be a bank of N (digital) band-pass filters followed by decimators, encoders (for instance ADPCM) and a multiplexer combining the data bits coming from the sub-band channels. The output of the multiplexer is then transmitted to the sub-band decoder, which has a demultiplexer splitting the multiplexed data stream back into N sub-band channels. Every sub-band channel has a decoder (for instance ADPCM), followed by an interpolator and a band-pass filter. Finally, the outputs of the band-pass filters are summed and a reconstructed output signal results.
Sub-band coding is commonly used at bit rates between 9.6 kbit/s and 32 kbit/s and performs quite well. The complexity of the system may, however, be considerable if the number of sub-bands is large. The design of the band-pass filters is also a critical topic when working with sub-band coding systems.
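As a rough illustration of the analysis side of such a system, the sketch below splits a signal into uniform sub-bands with Butterworth band-pass filters and decimates each band. The number of bands, filter order, band edges and decimation-by-slicing are illustrative assumptions, not values prescribed by the text; the per-band encoders and the multiplexer are omitted.

# Rough sketch of the analysis side of a sub-band coder (band split + decimation).
import numpy as np
from scipy import signal

def subband_analysis(x, fs, n_bands=4):
    """Split x into n_bands uniform sub-bands and decimate each by n_bands."""
    edges = np.linspace(0, fs / 2, n_bands + 1)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        lo = max(lo, 1.0)                 # avoid exactly 0 Hz
        hi = min(hi, fs / 2 - 1.0)        # avoid exactly the Nyquist frequency
        sos = signal.butter(6, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = signal.sosfilt(sos, x)
        bands.append(band[::n_bands])     # each band needs fewer samples per second
    return bands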

Figure 7.5 An example of a sub-band coding system

Vocoders and LPC


In the methods described above (APC, SBC and ADPCM), speech signal applications have been used as examples. By modifying the structure and parameters of the predictors and filters, the algorithms may also be used for other signal types. The main objective was to achieve a reproduction that was as faithful as possible to the original signal. Data compression was possible by removing redundancy in the time or frequency domain.
The class of vocoders (also called source coders) is a special class of data compression devices aimed only at speech signals. The input signal is analysed and described in terms of speech model parameters. These parameters are then used to synthesize a voice pattern having an acceptable level of perceptual quality. Hence, waveform accuracy is not the main goal, as it was in the methods discussed previously.

Figure 7.6 The LPC model: a pitch-controlled periodic pulse source or a noise source, selected by a voiced/unvoiced switch, drives the synthesis filter via a gain to produce synthetic speech.

The first vocoder was designed by H. Dudley in the 1930s and demonstrated at the 'New York Fair' in 1939. Vocoders have become popular as they achieve reasonably good speech quality at low data rates, from 2.4 kbit/s to 9.6 kbit/s. There are many types of vocoders (Marven and Ewers, 1993); some of the most common techniques will be briefly presented below.
Most vocoders rely on a few basic principles. Firstly, the characteristics of the speech signal are assumed to be fairly constant over a time of approximately 20 ms, hence most signal processing is performed on (overlapping) data blocks of 20-40 ms length. Secondly, the speech model consists of a time-varying filter corresponding to the acoustic properties of the mouth and an excitation signal. The excitation signal is either a periodic waveform, as created by the vocal cords, or a random noise signal for production of 'unvoiced' sounds, for example 's' and 'f'. The filter parameters and excitation parameters are assumed to be independent of each other and are commonly coded separately.
Linear predictive coding (LPC) is a popular method, which has however been replaced by newer approaches in many applications. LPC works exceedingly well at low bit rates, and the LPC parameters contain sufficient information about the speech signal to be used in speech recognition applications. The LPC model is shown in Figure 7.6.
LPC is basically an auto-regressive model (see Chapter 5) and the vocal tract is modelled as a time-varying all-pole filter (IIR filter) having the transfer function

H(z) = 1 / (1 + a_1 z^-1 + a_2 z^-2 + ... + a_p z^-p)     (7.17)

where p is the order of the filter. The excitation signal e(n), being either noise or a periodic waveform, is fed to the filter via a variable gain factor G. The output signal can be expressed in the time domain as

y(n) = G e(n) - a_1 y(n - 1) - a_2 y(n - 2) - ... - a_p y(n - p)     (7.18)

The output sample at time n is a linear combination of p previous samples and the excitation signal (hence 'linear predictive coding'). The filter coefficients a_k are time-varying.
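As a small illustration of equation (7.18), the sketch below runs the all-pole LPC synthesis filter sample by sample. The coefficient values and the white-noise excitation in the usage lines are made-up assumptions for demonstration only.

# Minimal sketch of LPC synthesis, equation (7.18):
# y(n) = G*e(n) - a_1*y(n-1) - ... - a_p*y(n-p)
import numpy as np

def lpc_synthesize(e, a, G=1.0):
    """Run excitation e through the all-pole LPC filter with coefficients a."""
    p = len(a)
    y = np.zeros(len(e))
    for n in range(len(e)):
        acc = G * e[n]
        for k in range(1, p + 1):
            if n - k >= 0:
                acc -= a[k - 1] * y[n - k]
        y[n] = acc
    return y

# Example (illustrative values only): unvoiced excitation = white noise.
rng = np.random.default_rng(0)
e = rng.standard_normal(160)             # one 20 ms frame at 8 kHz
a = [-0.9, 0.2]                          # hypothetical 2nd-order, stable coefficients
frame = lpc_synthesize(e, a, G=0.5)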
The model above describes how to synthesize the speech given the pitch information (whether noise or periodic excitation should be used), the gain and the filter parameters. These parameters must be determined by the encoder or the analyser, taking the original speech signal x(n) as input.
The analyser windows the speech signal in blocks of 20-40 ms, usually with a Hamming window (see Chapter 5). These blocks or 'frames' are repeated every 10-30 ms, hence there is a certain overlap in time. Every frame is then analysed with respect to the parameters mentioned above.
Firstly, the pitch frequency is determined. This also tells whether we are dealing with a voiced or unvoiced speech signal. This is a crucial part of the system and many pitch detection algorithms have been proposed. If the segment of the speech signal is voiced and has a clear periodicity, or if it is unvoiced and not periodic, things are quite easy. Segments having properties in between these two extremes are difficult to analyse. No algorithm has been found so far that is 'perfect' for all listeners.
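One common family of pitch detectors, sketched below purely as an illustration (the text does not prescribe a particular algorithm), uses the autocorrelation of a frame: a strong peak at a plausible lag suggests a voiced frame with that pitch period. The search range and voicing threshold are assumed values.

# Sketch of a simple autocorrelation-based pitch detector (illustrative only;
# assumes the frame is longer than the longest expected pitch period).
import numpy as np

def detect_pitch(frame, fs, fmin=50.0, fmax=400.0, voiced_threshold=0.3):
    """Return (is_voiced, pitch_hz) estimated from one windowed frame."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac /= ac[0] + 1e-12                      # normalize so ac[0] == 1
    lag_min = int(fs / fmax)                 # shortest plausible pitch period
    lag_max = min(int(fs / fmin), len(ac) - 1)
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    is_voiced = ac[lag] > voiced_threshold   # weak peak -> treat as unvoiced
    return is_voiced, (fs / lag if is_voiced else 0.0)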
Now, the second step of the analyser is to determine the gain and the filter parameters. This is done by estimating the speech signal using an adaptive predictor. The predictor has the same structure and order as the filter in the synthesizer. Hence, the output of the predictor is

x̂(n) = -a_1 x(n - 1) - a_2 x(n - 2) - ... - a_p x(n - p)     (7.19)

where x̂(n) is the predicted input speech signal and x(n) is the actual input signal. The filter coefficients a_k are determined by minimizing the square error over the frame

E = Σ_n [x(n) - x̂(n)]^2     (7.20)

This can be done in different ways, either by calculating the auto-correlation coefficients and solving the Yule-Walker equations (see Chapter 5) or by using some recursive, adaptive filter approach (see Chapter 3).
So, for every frame, all the parameters above are determined and transmitted to the synthesizer, where a synthetic copy of the speech is generated.
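A compact way to carry out the auto-correlation/Yule-Walker step mentioned above is sketched here; the Hamming window, the plain linear solve and the tiny regularization term are implementation choices assumed for the example, not the book's prescription.

# Sketch of LPC analysis by the autocorrelation (Yule-Walker) method.
import numpy as np

def lpc_analyze(frame, p):
    """Return coefficients a_1..a_p minimizing the squared prediction error."""
    frame = frame * np.hamming(len(frame))               # window the frame
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + p]
    # Toeplitz system R a = -r[1..p]  (the Yule-Walker equations)
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    R = R + 1e-9 * r[0] * np.eye(p)                      # numerical safety
    a = np.linalg.solve(R, -r[1:p + 1])
    return a                                             # use in (7.18)/(7.19)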
An improved version of LPC is residual excited linear prediction (RELP). Let us take a closer look at the error or residual signal r(n) resulting from the prediction in the analyser (equation (7.19)). The residual signal (which we are trying to minimize) can be expressed as

r(n) = x(n) - x̂(n)
     = x(n) + a_1 x(n - 1) + a_2 x(n - 2) + ... + a_p x(n - p)     (7.21)

From this it is straightforward to see that the corresponding expression using z-transforms is

R(z) = X(z) (1 + a_1 z^-1 + ... + a_p z^-p) = X(z) H^-1(z)     (7.22)

Hence, the predictor can be regarded as an 'inverse' filter to the LPC model filter. If we now pass this residual signal to the synthesizer and use it to excite the LPC filter, that is E(z) = R(z), instead of using the noise or periodic waveform sources, we get

Y(z) = E(z) H(z) = R(z) H(z) = X(z) H^-1(z) H(z) = X(z)     (7.23)
In the ideal case, we would hence get the original speech signal back. When minimizing the variance of the residual signal (equation (7.20)), we gathered as much information about the speech signal as possible, using this model, into the filter coefficients a_k. The residual signal contains the remaining information. If the model is well suited to the signal type (speech signal), the residual signal is close to white noise, having a flat spectrum. In such a case we can get away with coding only a small range of frequencies, for instance 0-1 kHz, of the residual signal. At the synthesizer, this baseband is then repeated to generate higher frequencies, and this signal is used to excite the LPC filter.
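To make the inverse-filter view of (7.21)/(7.22) concrete, the sketch below computes the residual with the analysis coefficients and keeps only a low-frequency band of it, roughly in the spirit of RELP; the band edge and the Butterworth low-pass design are assumptions for illustration.

# Sketch of the RELP idea: residual by inverse (FIR) filtering, then keep
# only a baseband of the residual for transmission.
import numpy as np
from scipy import signal

def relp_residual_baseband(x, a, fs, band_hz=1000.0):
    """Inverse-filter x with A(z) = 1 + a_1 z^-1 + ... + a_p z^-p (eq. 7.21),
    then low-pass the residual to roughly 0..band_hz."""
    A = np.concatenate(([1.0], np.asarray(a)))       # FIR inverse filter A(z)
    residual = signal.lfilter(A, [1.0], x)           # r(n) = x(n) + sum a_k x(n-k)
    sos = signal.butter(8, band_hz, btype="lowpass", fs=fs, output="sos")
    return signal.sosfilt(sos, residual)             # baseband residual to code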
Vocoders using RELP are used with transmission rates of 9.6 kbit/s. The advantage of RELP is a better speech quality compared to LPC for the same bit rate. However, the implementation is more computationally demanding.
Another possible extension of the original LPC approach is to use multipulse excited linear predictive coding (MLPC). This extension is an attempt to make the synthesized speech less 'mechanical', by using a number of different pitches for the excitation pulses rather than only the two excitation types (periodic and noise) used by standard LPC.
The MLPC algorithm sequentially detects k pitches in a speech signal. As soon as one pitch is found it is subtracted from the signal and detection starts over again, looking for the next pitch. Pitch detection is a hard task and the complexity of the required algorithms is often considerable. MLPC however offers a better speech quality than LPC for a given bit rate and is used in systems working with 4.8-9.6 kbit/s.
Yet another extension of LPC is code excited linear prediction (CELP). The main feature of CELP compared to LPC is the way in which the filter coefficients are handled. Assume that we have a standard LPC system with a filter of order p. If every coefficient a_k requires N bits, we need to transmit N·p bits per frame for the filter parameters only. This approach is all right if all combinations of filter coefficients are equally probable. This is however not the case. Some combinations of coefficients are very probable, while others may never occur. In CELP, the coefficient combinations are represented by p-dimensional vectors. Using vector quantization techniques, the most probable vectors are determined. Each of these vectors is assigned an index and stored in a codebook. Both the analyser and synthesizer of course have identical copies of the codebook, typically containing 256-512 vectors. Hence, instead of transmitting N·p bits per frame for the filter parameters, only 8-9 bits are needed.
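The codebook lookup described above amounts to a nearest-neighbour search among stored coefficient vectors. The sketch below shows that step only, with a randomly generated codebook standing in for a real, trained one (an assumption for illustration).

# Sketch of the CELP-style codebook step: transmit only the index of the stored
# coefficient vector closest to the analysed one (vector quantization).
import numpy as np

def nearest_codeword(a, codebook):
    """Return (index, quantized vector) of the codebook entry closest to a."""
    dist = np.sum((codebook - a) ** 2, axis=1)   # squared Euclidean distances
    idx = int(np.argmin(dist))
    return idx, codebook[idx]

# Illustrative use: a 256-entry codebook of order-10 coefficient vectors,
# so the index fits in 8 bits (log2(256)) instead of N*p bits.
rng = np.random.default_rng(1)
codebook = rng.uniform(-1.0, 1.0, size=(256, 10))   # stand-in, not a trained codebook
a = rng.uniform(-1.0, 1.0, size=10)                 # analysed LPC coefficients
index, a_q = nearest_codeword(a, codebook)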
This method offers high-quality speech at low bit rates but requires considerable computing power to be able to store and match the incoming speech to the 'standard' sounds stored in the codebook. This is of course especially true if the codebook is large. Speech quality degrades as the codebook size decreases. Most CELP systems do not perform well with respect to the higher-frequency components of the speech signal at low bit rates.
There is also a variant of CELP called vector sum excited linear prediction (VSELP). The main difference between CELP and VSELP is the way the codebook is organized. Further, since VSELP uses fixed-point arithmetic algorithms, it is possible to implement it using cheaper DSP chips than CELP.

Adaptive Filters
The signal degradation in some physical systems is time varying, unknown, or possibly
both. For example, consider a high-speed modem for transmitting and receiving data over
telephone channels. It employs a filter called a channel equalizer to compensate for the channel
distortion. Since the dial-up communication channels have different and time-varying
characteristics on each connection, the equalizer must be an adaptive filter.
4. Explain about Adaptive Filters.
Adaptive filters modify their characteristics to achieve certain objectives by
automatically updating their coefficients. Many adaptive filter structures and adaptation
algorithms have been developed for different applications. This chapter presents the most widely
used adaptive filters based on the FIR filter with the least-mean-square (LMS) algorithm. These
adaptive filters are relatively simple to design and implement. They are well understood with
regard to stability, convergence speed, steady-state performance, and finite-precision effects.
Introduction to Adaptive Filtering
An adaptive filter consists of two distinct parts - a digital filter to perform the desired filtering,
and an adaptive algorithm to adjust the coefficients (or weights) of the filter. A general form of
adaptive filter is illustrated in Figure 7.1, where d(n) is a desired (or primary input) signal, y(n)
is the output of a digital filter driven by a reference input signal x(n), and an error signal e(n) is
the difference between d(n) and y(n). The adaptive algorithm adjusts the filter coefficients to
minimize the mean-square value of e(n). Therefore, the filter weights are updated so that the
error is progressively minimized on a sample-by-sample basis.
In general, there are two types of digital filters that can be used for adaptive filtering: FIR and IIR filters. The FIR filter is always stable and can provide a linear-phase response. On the other hand, the IIR filter involves both zeros and poles. Unless they are properly controlled, the poles in the filter
may move outside the unit circle and result in an unstable system during the adaptation of
coefficients. Thus, the adaptive FIR filter is widely used for practical real-time applications.
This chapter focuses on the class of adaptive FIR filters.
The most widely used adaptive FIR filter is depicted in Figure 7.2. The filter output signal is computed as

y(n) = w_0(n)x(n) + w_1(n)x(n - 1) + ... + w_{L-1}(n)x(n - L + 1),     (7.13)

where the filter coefficients w_l(n) are time-varying and updated by the adaptive algorithms that will be discussed next.
We define the input vector at time n as
x(n) = [x(n), x(n - 1), ..., x(n - L + 1)]^T,     (7.14)
and the weight vector at time n as
w(n) = [w_0(n), w_1(n), ..., w_{L-1}(n)]^T.     (7.15)
Equation (7.13) can be expressed in vector form as
y(n) = w^T(n) x(n) = x^T(n) w(n).     (7.16)
The filter output y(n) is compared with the desired response d(n) to obtain the error signal
e(n) = d(n) - y(n) = d(n) - w^T(n) x(n).     (7.17)
Our objective is to determine the weight vector w(n) that minimizes the predetermined performance (or cost) function.
Performance Function:
The adaptive filter shown in Figure 7.1 updates the coefficients of the digital filter to optimize some predetermined performance criterion. The most commonly used performance function is based on the mean-square error (MSE), defined as the expectation of the squared error, E[e^2(n)].
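Putting equations (7.13)-(7.17) together with the LMS coefficient update mentioned at the start of this section, a minimal sketch might look as follows; the step size, filter length and the system-identification test setup are illustrative assumptions, not a reference implementation.

# Minimal sketch of an adaptive FIR filter with the LMS algorithm:
# y(n) = w^T(n) x(n), e(n) = d(n) - y(n), w(n+1) = w(n) + mu * e(n) * x(n).
import numpy as np

def lms_filter(x, d, L=8, mu=0.01):
    """Adapt an L-tap FIR filter so its output tracks the desired signal d."""
    w = np.zeros(L)                      # weight vector w(n)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(len(x)):
        xn = np.zeros(L)                 # input vector [x(n), ..., x(n-L+1)]^T
        m = min(L, n + 1)
        xn[:m] = x[n::-1][:m]
        y[n] = w @ xn                    # filter output, eq. (7.16)
        e[n] = d[n] - y[n]               # error signal, eq. (7.17)
        w = w + mu * e[n] * xn           # LMS update, driving E[e^2(n)] down
    return y, e, w

# Illustrative use: identify an unknown FIR system h from its input/output.
rng = np.random.default_rng(2)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.1])           # hypothetical unknown system
d = np.convolve(x, h, mode="full")[:len(x)]
_, e, w = lms_filter(x, d, L=8, mu=0.01)  # w should approach h (zero-padded)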
5. Explain about Audio Processing.


The two principal human senses are vision and hearing. Correspondingly, much of DSP is related to image and audio processing. People listen to both music and speech. DSP has made revolutionary changes in both these areas.
Music Sound processing
The path leading from the musician's microphone to the audiophile's speaker is remarkably long.
Digital data representation is important to prevent the degradation commonly associated with
analog storage and manipulation. This is very familiar to anyone who has compared the musical
quality of cassette tapes with compact disks. In a typical scenario, a musical piece is recorded in
a sound studio on multiple channels or tracks. In some cases, this even involves recording
individual instruments and singers separately. This is done to give the sound engineer greater
flexibility in creating the final product. The complex process of combining the individual tracks
into a final product is called mix down. DSP can provide several important functions during
mix down, including: filtering, signal addition and subtraction, signal editing, etc. One of the
most interesting DSP applications in music preparation is artificial reverberation. If the
individual channels are simply added together, the resulting piece sounds frail and diluted, much
as if the musicians were playing outdoors. This is because listeners are greatly influenced by the
echo or reverberation content of the music, which is usually minimized in the sound studio. DSP
allows artificial echoes and reverberation to be added during mix down to simulate various ideal
listening environments. Echoes with delays of a few hundred milliseconds give the impression
of cathedral-like locations. Adding echoes with delays of 10-20 milliseconds provides the
perception of more modest-sized listening rooms.
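As a toy illustration of the echo effect described above (not the algorithm of any particular studio tool), a single artificial echo can be added by mixing in a delayed, attenuated copy of the signal; the delay and gain values here are arbitrary examples.

# Toy sketch: add one artificial echo, y(n) = x(n) + g * x(n - D).
# A few hundred ms of delay sounds 'cathedral-like'; 10-20 ms suggests a room.
import numpy as np

def add_echo(x, fs, delay_s=0.25, gain=0.4):
    """Mix a delayed, attenuated copy of x back into itself."""
    D = max(1, int(delay_s * fs))        # delay in samples
    y = np.copy(x).astype(float)
    y[D:] += gain * x[:-D]               # delayed copy scaled by the echo gain
    return y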
Speech generation
Speech generation and recognition are used to communicate between humans and machines.
Rather than using your hands and eyes, you use your mouth and ears. This is very convenient
when your hands and eyes should be doing something else, such as: driving a car, performing
surgery, or (unfortunately) firing your weapons at the enemy. Two approaches are used for

computer generated speech: digital recording and vocal tract simulation. In digital
recording, the voice of a human speaker is digitized and stored, usually in a compressed form.
During playback, the stored data are uncompressed and converted back into an analog signal.
An entire hour of recorded speech requires only about three megabytes of storage, well within
the capabilities of even small computer systems. This is the most common method of digital
speech generation used today. Vocal tract simulators are more complicated, trying to mimic the
physical mechanisms by which humans create speech. The human vocal tract is an acoustic
cavity with resonant frequencies determined by the size and shape of the chambers. Sound
originates in the vocal tract in one of two basic ways, called voiced and fricative sounds.
With voiced sounds, vocal cord vibration produces near periodic pulses of air into the vocal
cavities. In comparison, fricative sounds originate from the noisy air turbulence at narrow
constrictions, such as the teeth and lips. Vocal tract simulators operate by generating digital
signals that resemble these two types of excitation. The characteristics of the resonant chamber
are simulated by passing the excitation signal through a digital filter with similar resonances.
This approach was used in one of the very early DSP success stories, the Speak & Spell, a
widely sold electronic learning aid for children.
Speech recognition
The automated recognition of human speech is immensely more difficult than speech
generation. Speech recognition is a classic example of things that the human brain does well,
but digital computers do poorly. Digital computers can store and recall vast amounts of data,
perform mathematical calculations at blazing speeds, and do repetitive tasks without becoming
bored or inefficient. Unfortunately, present day computers perform very poorly when faced with
raw sensory data. Teaching a computer to send you a monthly electric bill is easy. Teaching the
same computer to understand your voice is a major undertaking. Digital Signal Processing
generally approaches the problem of voice recognition in two steps: feature extraction
followed by feature matching. Each word in the incoming audio signal is isolated and then
analyzed to identify the type of excitation and resonant frequencies. These parameters are then
compared with previous examples of spoken words to identify the closest match. Often, these
systems are limited to only a few hundred words; can only accept speech with distinct pauses
between words; and must be retrained for each individual speaker. While this is adequate for
many commercial applications, these limitations are humbling when compared to the abilities of
human hearing. There is a great deal of work to be done in this area, with tremendous financial
rewards for those that produce successful commercial products.
