EC6502 PRINCIPLES OF DIGITAL SIGNAL PROCESSING
DIGITAL SIGNAL PROCESSING - III YEAR / V SEM ECE
1. Define DTFT. (APRIL/MAY 2008)
The discrete-time Fourier transform (DTFT) of a sequence x(n) is usually written
X(e^{jω}) = Σ_{n=-∞}^{∞} x(n) e^{-jωn}
and the inverse DTFT is
x(n) = (1/2π) ∫_{-π}^{π} X(e^{jω}) e^{jωn} dω
where n is an integer and ω is a continuous (real) frequency variable.
Time domain: x(n)        Frequency domain: X(e^{jω})
(As usual, the subscripts are interpreted modulo N; thus, for n = 0, we have x_{N-0} = x_0.)
Second, one can also conjugate the inputs and outputs: IDFT(X) = (1/N) conj( DFT( conj(X) ) ).
Third, a variant of this conjugation trick, which is sometimes preferable because it requires no
modification of the data values, involves swapping real and imaginary parts (which can be done on a
computer simply by modifying pointers). Define swap(x_n) as x_n with its real and imaginary parts
swapped - that is, if x_n = a + bi then swap(x_n) is b + ai. Equivalently, swap(x_n) equals i·conj(x_n).
Then IDFT(X) = (1/N) swap( DFT( swap(X) ) ).
where n is an integer and z is, in general, a complex number:
z = A e^{jφ}   or equivalently   z = A (cos φ + j sin φ)
where A is the magnitude of z, and φ is the complex argument (also referred to as angle or phase) in
radians.
In signal processing, this definition (the one-sided, or unilateral, Z-transform) is used when the signal
is causal.
9. Define Region Of Convergence.
The region of convergence (ROC) is the set of points in the complex plane for which the Z-transform
summation converges.
The inverse Z-transform can be found by several methods. In practice, it is often useful to fractionally
decompose X(z)/z before multiplying that quantity by z to generate a form of X(z) which has terms
with easily computable inverse Z-transforms.
Periodicity, Linearity, Time shift, Frequency shift, Scaling, Differentiation in frequency domain,
Time reversal, Convolution, Multiplication in time domain, Parseval’s theorem
14. What is the DTFT of unit sample? NOV/DEC 2010
The DTFT of unit sample is 1 for all values of w.
15. Define Zero padding.
The method of appending zero in the given sequence is called as Zero padding.
16. Define circularly even sequence.
A Sequence is said to be circularly even if it is symmetric about the point zero on
the circle. x(N-n)=x(n),1<=n<=N-1.
17. Define circularly odd sequence.
A sequence is said to be circularly odd if it is antisymmetric about the point zero on the circle:
x(N-n) = -x(n), 1 <= n <= N-1.
Contour integration
Power series expansion
Convolution.
28. Obtain the inverse z-transform of X(z) = 1/(z-a), |z| > |a|. (APRIL/MAY 2010)
Given X(z) = 1/(z-a) = z^{-1} / (1 - a z^{-1}).
Since Z^{-1}{ 1/(1 - a z^{-1}) } = a^n u(n), the time-shifting property gives x(n) = a^{n-1} u(n-1).
16-MARKS
Soln:
X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πkn/N},  k = 0, 1, ..., N-1.
For N = 8, X(k) = Σ_{n=0}^{7} x(n) e^{-j2πkn/8}, k = 0, 1, ..., 7, which gives:
X(0) = 6
X(1) = -0.707 - j1.707
X(2) = 1 - j
X(3) = 0.707 + j0.293
X(4) = 0
X(5) = 0.707 - j0.293
X(6) = 1 + j
X(7) = -0.707 + j1.707
X(k) = {6, -0.707-j1.707, 1-j, 0.707+j0.293, 0, 0.707-j0.293, 1+j, -0.707+j1.707}
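The following short NumPy sketch (not part of the original question bank) evaluates the same DFT sum numerically. The problem's input sequence did not survive extraction; the sequence x(n) = {1,1,1,1,1,1,0,0} used below is an assumed input that reproduces the X(k) values listed above, and np.fft.fft is used only as a cross-check.

import numpy as np

def dft(x):
    # direct evaluation of X(k) = sum_n x(n) exp(-j 2 pi k n / N)
    N = len(x)
    n = np.arange(N)
    k = n.reshape((N, 1))
    return np.exp(-2j * np.pi * k * n / N) @ np.asarray(x, dtype=complex)

x = [1, 1, 1, 1, 1, 1, 0, 0]                 # assumed input consistent with the listed X(k)
X = dft(x)
print(np.round(X, 3))                        # 6, -0.707-1.707j, 1-1j, 0.707+0.293j, 0, ...
print(np.allclose(X, np.fft.fft(x)))         # True: matches NumPy's FFT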
Soln:
X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πkn/N},  k = 0, 1, ..., N-1.
For N = 8, X(k) = Σ_{n=0}^{7} x(n) e^{-j2πkn/8}, k = 0, 1, ..., 7, which gives:
X(0) = 8
X(1) = 0
X(2) = 0
X(3) = 0
X(4) = 0
X(5) = 0
X(6) = 0
X(7) = 0
X(k) = {8, 0, 0, 0, 0, 0, 0, 0}
Soln:
X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πkn/N},  k = 0, 1, ..., N-1.
For N = 8, X(k) = Σ_{n=0}^{7} x(n) e^{-j2πkn/8}, k = 0, 1, ..., 7, which gives:
X(0) = 20
X(1) = -5.828 - j2.414
X(2) = 0
X(3) = -0.172 - j0.414
X(4) = 0
X(5) = -0.172 + j0.414
X(6) = 0
X(7) = -5.828 + j2.414
X(k) = {20, -5.828-j2.414, 0, -0.172-j0.414, 0, -0.172+j0.414, 0, -5.828+j2.414}
Soln:
x(n) = (1/N) Σ_{k=0}^{N-1} X(k) e^{+j2πkn/N},  n = 0, 1, ..., N-1.
For N = 8, x(n) = (1/8) Σ_{k=0}^{7} X(k) e^{j2πkn/8}, n = 0, 1, ..., 7, which gives:
x(0) = 1
x(1) = 0.75
x(2) = 0.5
x(3) = 0.25
x(4) = 1
x(5) = 0.75
x(6) = 0.5
x(7) = 0.25
x(n) = {1, 0.75, 0.5, 0.25, 1, 0.75, 0.5, 0.25}
5. Derive and draw the radix -2 DIT algorithms for FFT of 8 points. (16) DEC 2009
RADIX-2 FFT ALGORITHMS
The N-point DFT of an N-point sequence x(n) is
X(k) = Σ_{n=0}^{N-1} x(n) W_N^{nk},   where W_N = e^{-j2π/N}.
Because x(n) may be either real or complex, evaluating X(k) requires on the order of N complex
multiplications and N complex additions for each value of k. Therefore, because there are N values of
X(k), computing an N-point DFT directly requires N^2 complex multiplications and additions.
The basic strategy used in the FFT algorithm is one of "divide and conquer," which involves
decomposing an N-point DFT into successively smaller DFTs. To see how this works, suppose that the
length of x(n) is even (i.e., N is divisible by 2). If x(n) is decimated into two sequences of length N/2,
computing the N/2-point DFT of each of these sequences requires approximately (N/2)^2 multiplications
and the same number of additions. Thus, the two DFTs require 2(N/2)^2 = N^2/2 multiplies and adds.
Therefore, if it is possible to find the N-point DFT of x(n) from these two N/2-point DFTs in fewer
than N^2/2 operations, a savings has been realized.
Decimation-in-Time FFT
The decimation-in-time FFT algorithm is based on splitting (decimating) x(n) into smaller sequences
and finding X(k) from the DFTs of these decimated sequences. This section describes how this
decimation leads to an efficient algorithm when the sequence length is a power of 2.
Let x(n) be a sequence of length N = 2^v, and suppose that x(n) is split (decimated) into two
subsequences, each of length N/2. As illustrated in the figure, the first sequence, g(n), is formed from
the even-index terms,
g(n) = x(2n),   n = 0, 1, ..., N/2 - 1
and the second, h(n), is formed from the odd-index terms,
h(n) = x(2n + 1),   n = 0, 1, ..., N/2 - 1.
In terms of these sequences, the N-point DFT of x(n) is
X(k) = Σ_{n=0}^{N-1} x(n) W_N^{nk}
     = Σ_{n even} x(n) W_N^{nk} + Σ_{n odd} x(n) W_N^{nk}
     = Σ_{l=0}^{N/2-1} g(l) W_N^{2lk} + Σ_{l=0}^{N/2-1} h(l) W_N^{(2l+1)k}.
Because W_N^{2lk} = W_{N/2}^{lk},
X(k) = Σ_{l=0}^{N/2-1} g(l) W_{N/2}^{lk} + W_N^k Σ_{l=0}^{N/2-1} h(l) W_{N/2}^{lk}.
Note that the first term is the N/2-point DFT of g(n) and the second is the N/2-point DFT of h(n):
X(k) = G(k) + W_N^k H(k),   k = 0, 1, ..., N - 1.
Although the N/2-point DFTs of g(n) and h(n) are sequences of length N/2, the periodicity of the
complex exponentials allows us to write
G(k) = G(k + N/2),   H(k) = H(k + N/2).
Therefore, X(k) may be computed from the N/2-point DFTs G(k) and H(k). Note that because
W_N^{k+N/2} = -W_N^k,
X(k + N/2) = G(k) - W_N^k H(k),
and it is only necessary to form the products W_N^k H(k) for k = 0, 1, ..., N/2 - 1. The complex
exponentials multiplying H(k) are called twiddle factors. A block diagram showing the computations
that are necessary for the first stage of an eight-point decimation-in-time FFT is shown in the figure.
If N/2 is even, g(n) and h(n) may again be decimated. For example, G(k) may be evaluated as
G(k) = Σ_{n=0}^{N/2-1} g(n) W_{N/2}^{nk} = Σ_{n even} g(n) W_{N/2}^{nk} + Σ_{n odd} g(n) W_{N/2}^{nk}.
[Figure: A complete eight-point radix-2 decimation-in-time FFT, with outputs X(0), X(1), ..., X(7).]
Computing an N-point DFT using a radix-2 decimation-in-time FFT is much more efficient than
calculating the DFT directly. For example, if N = 2^v, there are log2 N = v stages of computation.
Because each stage requires N/2 complex multiplies by the twiddle factors W_N^r and N complex
additions, there are a total of (N/2) log2 N complex multiplications and N log2 N complex additions.
From the structure of the decimation-in-time FFT algorithm, note that once a butterfly operation has
been performed on a pair of complex numbers, there is no need to save the input pair. Therefore, the
output pair may be stored in the same registers as the input. Thus, only one array of size N is required,
and it is said that the computations may be performed in place. To perform the computations in place,
however, the input sequence x(n) must be stored (or accessed) in nonsequential order as seen in the
figure. The shuffling of the input sequence that takes place is due to the successive decimations of
x(n). The ordering that results corresponds to a bit-reversed indexing of the original sequence. In other
words, if the index n is written in binary form, the order in which the input sequence must be accessed
is found by reading the binary representation for n in reverse order, as illustrated in the table below
for N = 8:
n    Binary    Bit-Reversed Binary    Bit-Reversed n
0    000       000                    0
1    001       100                    4
2    010       010                    2
3    011       110                    6
4    100       001                    1
5    101       101                    5
6    110       011                    3
7    111       111                    7
Alternate forms of FFT algorithms may be derived from the decimation-in-time FFT by manipulating
the flowgraph and rearranging the order in which the results of each stage of the computation are
stored. For example, the nodes of the flowgraph may be rearranged so that the input sequence x(n) is
in normal order. What is lost with this reordering, however, is the ability to perform
the computations in place.
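As an illustration of the derivation above, here is a minimal radix-2 decimation-in-time FFT sketch in Python (an assumed, illustrative implementation, not the course's reference code): the input is placed in bit-reversed order and log2(N) butterfly stages of the form G(k) ± W_N^k H(k) are applied in place.

import numpy as np

def dit_fft(x):
    x = np.asarray(x, dtype=complex)
    N = x.size                                   # N must be a power of 2
    stages = int(np.log2(N))
    # bit-reversed ordering of the input (the table above for N = 8)
    idx = [int(format(n, f'0{stages}b')[::-1], 2) for n in range(N)]
    X = x[idx].copy()
    size = 2
    while size <= N:                             # one pass per stage
        half = size // 2
        W = np.exp(-2j * np.pi * np.arange(half) / size)   # twiddle factors
        for start in range(0, N, size):
            for k in range(half):
                a = X[start + k]
                b = W[k] * X[start + k + half]
                X[start + k] = a + b             # butterfly: G(k) + W^k H(k)
                X[start + k + half] = a - b      # and       G(k) - W^k H(k)
        size *= 2
    return X

x = np.arange(8.0)
print(np.allclose(dit_fft(x), np.fft.fft(x)))    # True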
6. Derive DIF radix 2 FFT algorithm
Decimation-in-Frequency FFT
Another class of FFT algorithms may be derived by decimating the output sequence X(k) into smaller
and smaller subsequences. These algorithms are called decimation-in-frequency FFTs and may be
derived as follows. Let N be a power of 2, N = 2^v, and consider separately evaluating the even-index
and odd-index samples of X(k). The even samples are
X(2k) = Σ_{n=0}^{N-1} x(n) W_N^{2nk}.
Separating this sum into the first N/2 points and the last N/2 points, and using the fact that
W_N^{2nk} = W_{N/2}^{nk}, this becomes
X(2k) = Σ_{n=0}^{N/2-1} x(n) W_{N/2}^{nk} + Σ_{n=N/2}^{N-1} x(n) W_N^{2nk}
      = Σ_{n=0}^{N/2-1} [ x(n) + x(n + N/2) ] W_{N/2}^{nk},
which is the N/2-point DFT of the sequence that is formed by adding the first N/2 points of x(n) to the
last N/2. Proceeding in the same way for the odd samples of X(k) leads to
X(2k + 1) = Σ_{n=0}^{N/2-1} [ x(n) - x(n + N/2) ] W_N^n W_{N/2}^{nk}.
A flowgraph illustrating this first stage of decimation is shown in Fig. 7-7.
As with the decimation-in-time FFT, the decimation may be continued until only two-point DFTs
remain. A complete eight-point decimation-in-frequency FFT is shown in Fig. 7-8. The complexity of
the decimation-in-frequency FFT is the same as the decimation-in-time, and the computations may be
performed in place. Finally, note that although the input sequence x(n) is in normal order, the
frequency samples X(k) are in bit-reversed order.
PROPERTIES OF DFT
1. Periodicity
Let x(n) and X(k) be an N-point DFT pair. Then if x(n + N) = x(n) for all n, then X(k + N) = X(k) for
all k.
2. Linearity
If x1(n) ↔ X1(k) and x2(n) ↔ X2(k), then a1 x1(n) + a2 x2(n) ↔ a1 X1(k) + a2 X2(k).
The DFT of a linear combination of two or more signals is equal to the same linear combination of the
DFTs of the individual signals.
A) A sequence is said to be circularly even if it is symmetric about the point zero on the circle. Thus
x(N - n) = x(n).
B) A sequence is said to be circularly odd if it is antisymmetric about the point zero on the circle.
Thus x(N - n) = -x(n).
D) The anticlockwise direction gives the delayed sequence and the clockwise direction gives the
advanced sequence. Thus a delayed or advanced sequence x'(n) is related to x(n) by a circular shift.
This property states that if the sequence is real and odd, x(n) = -x(N - n), then the DFT becomes
X(k) = -j Σ_{n=0}^{N-1} x(n) sin(2πkn/N),
i.e. X(k) is purely imaginary and odd.
This property states that if the sequence is purely imaginary, x(n) = j xI(n), then the DFT becomes
X(k) = XR(k) + j XI(k) with
XR(k) = Σ_{n=0}^{N-1} xI(n) sin(2πkn/N)   and   XI(k) = Σ_{n=0}^{N-1} xI(n) cos(2πkn/N).
3. Circular Convolution
The circular convolution property states that if x1(n) ↔ X1(k) and x2(n) ↔ X2(k) (N-point DFTs),
then x1(n) (N) x2(n) ↔ X1(k) X2(k), where (N) denotes N-point circular convolution, defined by
y(m) = Σ_{n=0}^{N-1} x1(n) x2((m - n))_N,   m = 0, 1, ..., N-1        ...........(4)
Multiplication of the DFTs of two sequences in the frequency domain corresponds to circular
convolution of the sequences in the time domain, not linear convolution. The results of linear and
circular convolution are totally different but are related to each other.
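A small Python sketch of circular convolution and of property (4): the same result is obtained either from the defining sum with indices taken modulo N or from the inverse DFT of the product of the two DFTs. The sequences below are arbitrary examples.

import numpy as np

def circular_convolution(x1, x2):
    N = len(x1)                                   # both sequences of the same length N
    return np.array([sum(x1[n] * x2[(m - n) % N] for n in range(N)) for m in range(N)])

x1 = np.array([1.0, 2.0, 3.0, 4.0])               # arbitrary example sequences
x2 = np.array([1.0, 1.0, 0.0, 0.0])
direct = circular_convolution(x1, x2)
via_dft = np.real(np.fft.ifft(np.fft.fft(x1) * np.fft.fft(x2)))
print(direct, np.allclose(direct, via_dft))       # same result both ways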
UNIT-II
INFINITE IMPULSE RESPONSE DIGITAL FILTERS
1. Give the expression for the location of poles of a normalized Butterworth filter. (May-07, Nov-10)
The poles of the normalized Butterworth filter lie equally spaced on the unit circle in the left half of
the s-plane:
s_k = e^{jπ(2k + N - 1)/(2N)},   k = 1, 2, ..., N.
From the given Chebyshev filter specifications we can obtain parameters such as the order of the
filter N, ε, the transition ratio k, and the poles of the filter.
3. Find the digital transfer function H(z) by using the impulse invariant method for the analog transfer
function H(s) = 1/(s+2). Assume T = 0.5 sec. (May/June-07)
H(z) = 1 / (1 - e^{-1} z^{-1})
S-plane.
6. What is the relationship between analog & digital frequencies in the impulse invariant
transformation? (Apr/May-08)
In the impulse invariant transformation the digital frequency is related to the analog frequency by
ω = ΩT, where T is the sampling period. (In the bilinear transformation, by contrast, the mapping is
s = (2/T)·(1 - z^{-1})/(1 + z^{-1}).)
9. What is prewarping? (May-09, Apr-10, May-11)
In the bilinear transformation the relation between analog and digital frequencies is nonlinear
(Ω = (2/T) tan(ω/2)), which warps (compresses) the high-frequency response. To compensate, the
critical analog frequencies are prewarped using Ω = (2/T) tan(ω/2) before the analog filter is designed.
H(z) = 1 / (Z + e^{-0.3})
13. Give the square magnitude function of the Butterworth filter. (Nov-2010)
|H(jΩ)|^2 = 1 / [ 1 + (Ω/Ωc)^{2N} ],   N = 1, 2, 3, ...
where N is the order of the filter and Ωc is the cutoff frequency. The magnitude response of the
Butterworth filter closely approximates the ideal response as the order N increases. The phase
response becomes more nonlinear as N increases.
14. Find the digital transfer function H(z) by using the impulse invariant method for the analog
transfer function H(s) = 1/(s+1). Assume T = 1 sec. (Apr-11)
H(z) = 1 / (1 - e^{-1} z^{-1})
15. Give the equations for the order N and cut-off frequency Ωc of the Butterworth filter. (Nov-06)
The order of the filter:
N >= log[ (10^{0.1αs} - 1) / (10^{0.1αp} - 1) ] / [ 2 log(Ωs/Ωp) ]
The cutoff frequency:
Ωc = Ωp / (10^{0.1αp} - 1)^{1/(2N)}
where αp and αs are the passband and stopband attenuations (in dB) at the edge frequencies Ωp and Ωs.
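A minimal Python sketch of the two formulas above, using assumed example specifications (3 dB at 1000 rad/sec, 40 dB at 3000 rad/sec); scipy.signal.buttord could be used as an independent check.

import numpy as np

alpha_p, alpha_s = 3.0, 40.0          # passband / stopband attenuation in dB (assumed example)
omega_p, omega_s = 1000.0, 3000.0     # passband / stopband edge frequencies in rad/sec

N = np.log10((10**(0.1 * alpha_s) - 1) / (10**(0.1 * alpha_p) - 1)) / (2 * np.log10(omega_s / omega_p))
N = int(np.ceil(N))                                              # round up to the next integer order
omega_c = omega_p / (10**(0.1 * alpha_p) - 1) ** (1.0 / (2 * N))
print(N, omega_c)
# cross-check: scipy.signal.buttord(1000, 3000, 3, 40, analog=True)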
i. Direct-form-I structure
ii. Direct-form-II structure
iii. Transposed direct-form II structure
iv. Cascade form structure
v. Parallel form structure
vi. Lattice-Ladder structure
19.Draw the general realization structure in direct-form I of IIR system.(May-
04)
22.Mention any two techniques for digitizing the transfer function of an analog
filter. (Nov11)
The two techniques available for digitizing the analog filter transfer function
are Impulse invariant transformation and Bilinear transformation.
23. Write brief notes on the design of IIR filters. (Or: how is a digital IIR filter designed?)
For designing a digital IIR filter, first an equivalent analog filter is designed using any one of the
approximation techniques for the given specifications. The result of the analog filter design is an
analog filter transfer function Ha(s). The analog transfer function is then transformed to the digital
filter transfer function H(z) using either the bilinear or the impulse invariant transformation.
iii. (IIR) The design involves the design of an analog filter and then transforming the analog filter to a
digital filter. / (FIR) The digital filter can be directly designed to achieve the desired specifications.
Lowpass-to-lowpass analog frequency transformation: replace
s → (Ωp / Ω'p) s
so that
H1(s) = Hp(s) |_{s → (Ωp/Ω'p)s}
where
Ωp  = normalized cutoff frequency = 1 rad/sec
Ω'p = desired LP cutoff frequency,
so that the response of H1 at Ω = Ω'p equals the response of Hp at Ω = 1, i.e. H1(jΩ'p) = Hp(j1).
Digital frequency transformation: this transformation involves replacing the variable z^{-1} by a
rational function g(z^{-1}). While doing this, the following properties need to be satisfied:
1. The mapping z^{-1} → g(z^{-1}) must map points inside the unit circle in the z-plane onto points
inside the unit circle of the z-plane, and the unit circle onto the unit circle, to preserve the stability
and causality of the filter.
The general form of the function g(·) that satisfies the above requirements is the "all-pass" type:
g(z^{-1}) = ± Π_{k=1}^{n} (z^{-1} - a_k) / (1 - a_k z^{-1}).
The different transformations are shown in the table below.
Let
H(s) = 1 / (s^2 + s + 1)
represent the transfer function of a low pass filter (not Butterworth) with a passband of 1 rad/sec.
Use frequency transformation to find the transfer functions of the following filters: (Apr/May-08) (12)
1. A LP filter with a passband of 10 rad/sec
2. A HP filter with a cutoff frequency of 1 rad/sec
3. A HP filter with a cutoff frequency of 10 rad/sec
4. A BP filter with a passband of 10 rad/sec and a corner frequency of 100 rad/sec
5. A BS filter with a stopband of 2 rad/sec and a corner frequency of 10 rad/sec
Solution:
Given H(s) = 1 / (s^2 + s + 1)

a. LP - LP transformation: replace s → s/Ω'p = s/10
   Ha(s) = H(s)|_{s → s/10} = 1 / [ (s/10)^2 + (s/10) + 1 ]
         = 100 / (s^2 + 10s + 100)

b. LP - HP (normalized) transformation: replace s → Ωu/s = 1/s
   Ha(s) = H(s)|_{s → 1/s} = 1 / [ (1/s)^2 + (1/s) + 1 ]
         = s^2 / (s^2 + s + 1)

d. LP - BP transformation: replace
   s → (s^2 + ΩuΩl) / [ s(Ωu - Ωl) ] = (s^2 + Ωo^2) / (s·Bo),  where Ωo = sqrt(ΩuΩl) and Bo = Ωu - Ωl.
   Here Ωo = 100 and Bo = 10, so
   Ha(s) = H(s)|_{s → (s^2 + 10^4)/(10s)}
         = 100 s^2 / (s^4 + 10 s^3 + 20100 s^2 + 10^5 s + 10^8)

e. LP - BS transformation: replace
   s → s(Ωu - Ωl) / (s^2 + ΩuΩl) = s·Bo / (s^2 + Ωo^2),  where Ωo = sqrt(ΩuΩl) and Bo = Ωu - Ωl.
   Here Ωo = 10 and Bo = 2, so
   Ha(s) = H(s)|_{s → 2s/(s^2 + 100)}
         = (s^2 + 100)^2 / (s^4 + 2 s^3 + 204 s^2 + 200 s + 10^4)
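The LP-BP result above can be cross-checked numerically with SciPy's analog prototype transformation. This is only a verification sketch; wo and bw below correspond to the Ωo = 100 rad/sec and Bo = 10 rad/sec used in part (d).

from scipy import signal

b, a = [1.0], [1.0, 1.0, 1.0]                      # lowpass prototype 1/(s^2 + s + 1)
b_bp, a_bp = signal.lp2bp(b, a, wo=100.0, bw=10.0) # LP -> BP with center 100 rad/s, bandwidth 10 rad/s
print(b_bp)   # expected ~ [100, 0, 0]              (numerator 100 s^2)
print(a_bp)   # expected ~ [1, 10, 20100, 1e5, 1e8] (denominator derived above)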
Solution:
Note that the resulting filter has zeros at z = ±1 and a pair of poles that depend on the choice of ωl
and ωu. This filter has poles at z = ±j0.713 and hence resonates at ω = π/2.

The general IIR system function is
H(Z) = [ Σ_{k=0}^{M} b_k z^{-k} ] / [ 1 + Σ_{k=1}^{N} a_k z^{-k} ]
It can be realized in the following forms:
1. Direct form-I
2. Direct form-II
3. Cascade form
4. Parallel form
5. Lattice form
• Direct form-I
This is a straight forward implementation of difference equation which
is very simple. Typical Direct form – I realization is shown below. The
upper branch is forward path and lower branch is feedback path. The
number of delays depends on presence of most previous input and output
samples in the difference equation.
• Direct form-II
H(z) = Y(z)/X(z) = [ V(z)/X(z) ] · [ Y(z)/V(z) ]
V(z)/X(z) = 1 / [ 1 + Σ_{k=1}^{N} a_k z^{-k} ]        ------------------- all poles
Y(z)/V(z) = 1 + Σ_{k=1}^{M} b_k z^{-k}                 ------------------- all zeros
In the time domain,
v(n) = x(n) - Σ_{k=1}^{N} a_k v(n-k)
y(n) = v(n) + Σ_{k=1}^{M} b_k v(n-k)
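A minimal Direct Form-II sketch in Python based on the two time-domain equations above (general b_0 is allowed). The example coefficients are taken from problem 6 below, y(n) = -0.1y(n-1) + 0.2y(n-2) + 3x(n) + 3.6x(n-1) + 0.6x(n-2); scipy.signal.lfilter, which uses the same canonic structure, serves as a cross-check.

import numpy as np
from scipy import signal

def direct_form_ii(b, a, x):
    M, N = len(b) - 1, len(a) - 1
    v = np.zeros(max(M, N) + 1)                       # delay line: v[0]=v(n), v[1]=v(n-1), ...
    y = np.zeros(len(x))
    for n, xn in enumerate(x):
        v[0] = xn - np.dot(a[1:], v[1:N + 1])         # feedback part (poles)
        y[n] = np.dot(b, v[:M + 1])                   # feedforward part (zeros)
        v[1:] = v[:-1]                                # shift the single delay line
    return y

b, a = np.array([3.0, 3.6, 0.6]), np.array([1.0, 0.1, -0.2])
x = np.zeros(8); x[0] = 1.0                           # unit impulse
print(direct_form_ii(b, a, x))
print(signal.lfilter(b, a, x))                        # cross-check: same impulse response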
• Cascade Form
The transfer function of a system can be expressed as
H(z) = H1(z) H2(z) .... Hk(z)
where each second-order section is of the form
Hk(Z) = ( b_{k0} + b_{k1} Z^{-1} + b_{k2} Z^{-2} ) / ( 1 + a_{k1} Z^{-1} + a_{k2} Z^{-2} )
• Parallel Form
H(z) = C + Σ_k A_k / ( 1 - p_k z^{-1} )
where {p_k} are the poles, {A_k} are the coefficients in the partial fraction expansion, and the constant
C is defined as C = b_N / a_N. The system realization of the above form is shown below, where
Hk(Z) = ( b_{k0} + b_{k1} Z^{-1} ) / ( 1 + a_{k1} Z^{-1} + a_{k2} Z^{-2} )
• Lattice structure (all-pole system)
H(Z) = 1 / A_N(Z) = 1 / [ 1 + Σ_{k=1}^{N} a_N(k) Z^{-k} ]
or, in the time domain,
x(n) = y(n) + Σ_{k=1}^{N} a_N(k) y(n-k)
For N = 1:
x(n) = y(n) + a_1(1) y(n-1),  i.e.  y(n) = x(n) - a_1(1) y(n-1).
We observe that a single lattice stage with f_1(n) = x(n) and k_1 = a_1(1) produces exactly this output,
y(n) = x(n) - k_1 y(n-1).
For N = 2:
y(n) = x(n) - a_2(1) y(n-1) - a_2(2) y(n-2)
This output can be obtained from a two-stage lattice filter as shown in the figure below:
f_2(n) = x(n)
f_1(n) = f_2(n) - k_2 g_1(n-1)
g_2(n) = k_2 f_1(n) + g_1(n-1)
f_0(n) = f_1(n) - k_1 g_0(n-1)
g_1(n) = k_1 f_0(n) + g_0(n-1)
y(n) = f_0(n) = g_0(n)
so that
y(n) = x(n) - k_2 [ k_1 y(n-1) + y(n-2) ] - k_1 y(n-1)
     = x(n) - k_1 (1 + k_2) y(n-1) - k_2 y(n-2)
Similarly,
g_2(n) = k_2 y(n) + k_1 (1 + k_2) y(n-1) + y(n-2)
We observe
a_2(0) = 1;   a_2(1) = k_1 (1 + k_2);   a_2(2) = k_2
and, in general, for an N-stage all-pole lattice, f_N(n) = x(n) and y(n) = f_0(n) = g_0(n).
H(Z) = 10 (1 - (1/2)Z^{-1}) (1 - (2/3)Z^{-1}) (1 + 2Z^{-1})
       / [ (1 - (3/4)Z^{-1}) (1 - (1/8)Z^{-1}) (1 - ((1/2) + j(1/2))Z^{-1}) (1 - ((1/2) - j(1/2))Z^{-1}) ]
a). Direct form - I
c). Cascade
d). Parallel
Solution:
H(Z) = 10 (1 - (1/2)Z^{-1}) (1 - (2/3)Z^{-1}) (1 + 2Z^{-1})
       / [ (1 - (3/4)Z^{-1}) (1 - (1/8)Z^{-1}) (1 - ((1/2) + j(1/2))Z^{-1}) (1 - ((1/2) - j(1/2))Z^{-1}) ]
     = 10 (1 - (7/6)Z^{-1} + (1/3)Z^{-2}) (1 + 2Z^{-1})
       / [ (1 - (7/8)Z^{-1} + (3/32)Z^{-2}) (1 - Z^{-1} + (1/2)Z^{-2}) ]
For the direct form, multiplying out numerator and denominator gives
H(Z) = 10 (1 + (5/6)Z^{-1} - 2Z^{-2} + (2/3)Z^{-3})
       / (1 - (15/8)Z^{-1} + (47/32)Z^{-2} - (17/32)Z^{-3} + (3/64)Z^{-4})
For the cascade form, H(Z) = H1(z) H2(z), where
H1(z) = ( 1 - (7/6) z^{-1} + (1/3) z^{-2} ) / ( 1 - (7/8) z^{-1} + (3/32) z^{-2} )
H2(z) = 10 ( 1 + 2 z^{-1} ) / ( 1 - z^{-1} + (1/2) z^{-2} )
Parallel Form
6. Obtain the direct form – I, direct form-II, Cascade and parallel form
realization for the following system, y(n)=-0.1 y(n-1)+0.2y(n-
2)+3x(n)+3.6 x(n-1)+0.6 x(n-2)
(12). (May/june-07)
Solution:
The Direct form realization is done directly from the given i/p – o/p
equation, show in below diagram
H(z) = Y(z)/X(z) = ( 3 + 3.6 z^{-1} + 0.6 z^{-2} ) / ( 1 + 0.1 z^{-1} - 0.2 z^{-2} )
For the cascade form, factor numerator and denominator:
H(z) = [ (3 + 0.6 z^{-1}) (1 + z^{-1}) ] / [ (1 + 0.5 z^{-1}) (1 - 0.4 z^{-1}) ]
where H1(z) = (3 + 0.6 z^{-1}) / (1 + 0.5 z^{-1}) and H2(z) = (1 + z^{-1}) / (1 - 0.4 z^{-1}).
For the parallel form, partial fraction expansion gives
H(z) = -3 + 7 / (1 - 0.4 z^{-1}) - 1 / (1 + 0.5 z^{-1})
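The partial fraction (parallel form) coefficients above can be checked with scipy.signal.residuez, which expands b(z)/a(z) in powers of z^-1; the printed residues, poles and direct term should match 7 and -1, poles 0.4 and -0.5, and the constant -3 (up to ordering).

from scipy import signal

b = [3.0, 3.6, 0.6]
a = [1.0, 0.1, -0.2]
r, p, k = signal.residuez(b, a)       # b(z)/a(z) = sum r_i/(1 - p_i z^-1) + k_0
print(r)                              # residues,    ~ [7, -1] (up to ordering)
print(p)                              # poles,       ~ [0.4, -0.5]
print(k)                              # direct term, ~ [-3]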
Solution:
Given B_M(Z) = 1 + 2Z^{-1} + 2Z^{-2} + Z^{-3}
and   A_N(Z) = 1 + (13/24)Z^{-1} + (5/8)Z^{-2} + (1/3)Z^{-3}
so a_3(0) = 1; a_3(1) = 13/24; a_3(2) = 5/8; a_3(3) = 1/3.
Lattice (reflection) coefficients:
k_3 = a_3(3) = 1/3
The lower-order coefficients follow from the recursion
a_{m-1}(k) = [ a_m(k) - a_m(m) a_m(m-k) ] / [ 1 - a_m^2(m) ]
For m = 3:
a_2(1) = [ 13/24 - (1/3)(5/8) ] / ( 1 - (1/3)^2 ) = (8/24) / (8/9) = 3/8
a_2(2) = [ 5/8 - (1/3)(13/24) ] / ( 8/9 ) = [ (45 - 13)/72 ] / (8/9) = 1/2,  so k_2 = a_2(2) = 1/2.
For m = 2:
a_1(1) = [ 3/8 - (1/2)(3/8) ] / ( 1 - (1/2)^2 ) = (3/16) / (3/4) = 1/4,  so k_1 = a_1(1) = 1/4.
Ladder coefficients:
C_m = b_m - Σ_{i=m+1}^{M} C_i a_i(i - m),   m = M, M-1, ..., 1, 0
For M = 3:
C_3 = b_3 = 1
C_2 = b_2 - C_3 a_3(1) = 2 - 1·(13/24) = 1.4583
C_1 = b_1 - [ C_2 a_2(1) + C_3 a_3(2) ] = 2 - [ (1.4583)(3/8) + 5/8 ] = 0.8281
C_0 = b_0 - [ C_1 a_1(1) + C_2 a_2(2) + C_3 a_3(3) ]
    = 1 - [ (0.8281)(1/4) + (1.4583)(1/2) + (1)(1/3) ] = -0.2695
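A short Python sketch of the step-down recursion used above to recover the reflection coefficients k_m from A_N(z); it is an illustrative implementation (not from the question bank), and the ladder coefficients C_m would be accumulated from the same intermediate arrays.

import numpy as np

def step_down(a):
    # a = [1, a_N(1), ..., a_N(N)]  ->  reflection coefficients [k_1, ..., k_N]
    a = np.array(a, dtype=float)
    k = []
    while len(a) > 1:
        km = a[-1]                                        # k_m = a_m(m)
        k.append(km)
        a = (a[:-1] - km * a[1:][::-1]) / (1.0 - km**2)   # a_{m-1}(i) recursion
    return k[::-1]

print(np.round(step_down([1, 13/24, 5/8, 1/3]), 4))       # [0.25  0.5  0.3333] -> k1, k2, k3
# The ladder coefficients C_m = b_m - sum_{i>m} C_i a_i(i-m) computed from the same
# intermediate arrays give C3 = 1, C2 = 1.4583, C1 = 0.8281, C0 = -0.2695 as above.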
UNIT III
Advantages:
1. FIR filters have exact linear phase.
2. FIR filters are always stable.
3. FIR filters can be realized in both recursive and non recursive structure.
4. Filters with any arbitrary magnitude response can be realized using FIR sequences.
Disadvantages:
1. For the same filter specifications, the order of an FIR filter design can be as high as 5 to 10 times
that of an IIR design.
2. Large storage requirements are needed.
3. Powerful computational facilities are required for the implementation.
The optimum equiripple design criterion is used for designing FIR filters whose approximation error
is spread as equal-level ripple throughout the band.
FIR Filters
In the filter design by the Fourier series method, the infinite duration impulse response is truncated to
a finite duration impulse response at n = ±(N-1)/2. The abrupt truncation of the impulse response
introduces oscillations in the pass band and stop band. This effect is known as Gibb's phenomenon.
8. Mention various methods available for the design of FIR filters. Also list a few windows used for
the design of FIR filters. (May/June-2010)
There are three well known design techniques for linear phase FIR filters: the Fourier series (window)
method, the frequency sampling method, and the optimal (equiripple) design method. Commonly used
windows include the rectangular, Bartlett (triangular), Hanning, Hamming, Blackman and Kaiser
windows.
FIR filter is always stable because all its poles are at the origin.
15. What are the possible types of impulse response for linear phase FIR filters? (Nov-11)
There are four types of impulse response for linear phase FIR filters:
1. Symmetric impulse response, h(n) = h(M-1-n), with M odd.
2. Symmetric impulse response with M even.
3. Antisymmetric impulse response, h(n) = -h(M-1-n), with M odd.
4. Antisymmetric impulse response with M even.
2. The width of the transition band can be made narrow by increasing the
value of N where N is the length of the window sequence.
3. The attenuation in the stop band is fixed for a given window, except in
case of Kaiser Window where it is variable.
Hamming window:
1. The main lobe width is equal to 8π/N and the peak side lobe level is -41 dB.
2. The FIR filter designed with it has a first side lobe peak of -53 dB.
Kaiser window:
1. The main lobe width and the peak side lobe level can be varied by varying the parameters α and N.
2. The side lobe peak can be varied by varying the parameter α.
1. Design an ideal highpass filter with frequency response
Hd(e^{jω}) = 1   for π/4 <= |ω| <= π
           = 0   for |ω| < π/4
using a Hanning window with N = 11.
Solution:
hd(n) = (1/2π) [ ∫_{-π}^{-π/4} e^{jωn} dω + ∫_{π/4}^{π} e^{jωn} dω ]
      = (1/πn) [ sin πn - sin(πn/4) ]   for -∞ <= n <= ∞, n ≠ 0
hd(0) = (1/2π) [ ∫_{-π}^{-π/4} dω + ∫_{π/4}^{π} dω ] = 3/4 = 0.75
hd(1) = hd(-1) = -0.225
hd(4) = hd(-4) = 0
The Hanning window is
w_hn(n) = 0.5 + 0.5 cos( 2πn/(M-1) ),   -(M-1)/2 <= n <= (M-1)/2
        = 0 otherwise
For N = 11:
w_hn(n) = 0.5 + 0.5 cos(πn/5),   -5 <= n <= 5
w_hn(0) = 1
w_hn(1) = w_hn(-1) = 0.9045
w_hn(2) = w_hn(-2) = 0.655
w_hn(4) = w_hn(-4) = 0.0945
w_hn(5) = w_hn(-5) = 0
h(n) = w_hn(n) hd(n)
(For a lowpass design with cutoff π/4 and M = 7, the ideal impulse response is)
hd(n) = sin( (π/4)(n - 3) ) / ( π(n - 3) )
which gives hd(0) = hd(6) = 0.075,  hd(1) = hd(5) = 0.159,  hd(2) = hd(4) = 0.22,  hd(3) = 0.25.
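A small NumPy sketch of the window-design steps for the highpass example above (cutoff π/4, Hanning window, N = 11); it simply evaluates hd(n) and the window on the index grid -5..5 and multiplies them.

import numpy as np

N = 11
n = np.arange(-(N - 1) // 2, (N - 1) // 2 + 1)          # -5 ... 5
hd = np.zeros(N)
nz = n != 0
hd[nz] = (np.sin(np.pi * n[nz]) - np.sin(np.pi * n[nz] / 4)) / (np.pi * n[nz])
hd[~nz] = 0.75                                          # hd(0) = 3/4
w_hn = 0.5 + 0.5 * np.cos(2 * np.pi * n / (N - 1))      # Hanning window, 0.5 + 0.5 cos(pi n / 5)
h = hd * w_hn                                           # windowed (non-causal) impulse response
print(np.round(hd, 3))    # hd(0) = 0.75, hd(+-1) = -0.225, hd(+-4) = 0, ...
print(np.round(h, 4))     # shift by (N-1)/2 = 5 samples to obtain a causal filter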
3. Design a LP FIR filter using Freq sampling technique having cutoff freq
of π/2 rad / sample. The filter should have linear phase and length of 17.
(12)(May-07)
Hd(e^{jω}) = e^{-jω8}   for 0 <= ω <= π/2
           = 0          for π/2 < ω <= π
Selecting ω_k = 2πk/M = 2πk/17 for k = 0, 1, ..., 16:
H(k) = Hd(e^{jω}) |_{ω = 2πk/17}
     = e^{-j(2πk/17)·8}   for 0 <= 2πk/17 <= π/2
     = 0                  for π/2 < 2πk/17 <= π
i.e.
H(k) = e^{-j16πk/17}   for 0 <= k <= 4
     = 0               for 5 <= k <= 8
(the remaining samples follow from conjugate symmetry). The impulse response is then
h(n) = (1/17) [ 1 + 2 Σ_{k=1}^{4} Re( e^{-j16πk/17} e^{j2πkn/17} ) ]
     = (1/17) [ H(0) + 2 Σ_{k=1}^{4} cos( 2πk(8 - n)/17 ) ]   for n = 0, 1, ..., 16.
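The closed-form h(n) derived above can be evaluated and checked numerically; the sketch below (illustrative, not from the question bank) also verifies that the DFT of the designed h(n) reproduces the specified frequency samples (magnitude 1 for k = 0..4 and 13..16, and 0 for k = 5..12).

import numpy as np

M = 17
n = np.arange(M)
h = (1 + 2 * sum(np.cos(2 * np.pi * k * (n - 8) / M) for k in range(1, 5))) / M
print(np.round(h, 4))
H = np.fft.fft(h)                      # the 17 frequency samples of the designed filter
print(np.round(np.abs(H), 3))          # ~1 for k = 0..4 and 13..16, ~0 for k = 5..12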
60
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIGNAL PROCESSING III YEAR / V SEM
ECE
b) Hamming window
(6)
Solution:
Hd(e^{jω}) = jω,   -π <= ω <= π
hd(n) = (1/2π) ∫_{-π}^{π} jω e^{jωn} dω = cos(πn)/n,   -∞ <= n <= ∞, n ≠ 0
a) Rectangular window
h(n) = hd(n) w_r(n)
h(1) = -h(-1) = hd(1) = -1
h(2) = -h(-2) = hd(2) = 0.5
h(3) = -h(-3) = hd(3) = -0.33
Thus, from the equation
H_r(e^{jω}) = 2 Σ_{n=0}^{(M-3)/2} h(n) sin( ω((M-1)/2 - n) )
we obtain, for M = 7,
H_r(e^{jω}) = 0.666 sin 3ω - sin 2ω + 2 sin ω
H(e^{jω}) = j H_r(e^{jω}) = j ( 0.666 sin 3ω - sin 2ω + 2 sin ω )
b) Hamming window
h(n) = hd(n) w_h(n)
w_h(n) = 0.54 + 0.46 cos( 2πn/(M-1) ),   -(M-1)/2 <= n <= (M-1)/2
       = 0 otherwise
For M = 7:
w_h(n) = 0.54 + 0.46 cos(πn/3),   -3 <= n <= 3
w_h(n) = [0.08, 0.31, 0.77, 1, 0.77, 0.31, 0.08]
Thus h'(n) = h(n - 3) = [0.0267, -0.155, 0.77, 0, -0.77, 0.155, -0.0267]
Similar to the earlier case of the rectangular window we can write the frequency response of the
differentiator as
H(e^{jω}) = j H_r(e^{jω}) = j ( 0.0534 sin 3ω - 0.31 sin 2ω + 1.54 sin ω )
5.Justify Symmetric and Anti-symmetric FIR filters giving out Linear Phase
characteristics.(Apr-08)
(10)
Symmetry in filter impulse response will ensure linear phase
An FIR filter of length M with i/p x(n) & o/p y(n) is described by the
difference equation:
y(n) = b_0 x(n) + b_1 x(n-1) + ... + b_{M-1} x(n-(M-1)) = Σ_{k=0}^{M-1} b_k x(n-k)        - (1)
An FIR filter has linear phase if its unit sample response satisfies the condition
h(n) = ± h(M-1-n),   n = 0, 1, ..., M-1.
If M is odd:
H(z) = h(0) + h(1) z^{-1} + ... + h((M-1)/2) z^{-(M-1)/2} + h((M+1)/2) z^{-(M+1)/2} + ...
       + h(M-2) z^{-(M-2)} + h(M-1) z^{-(M-1)}
     = z^{-(M-1)/2} [ h(0) z^{(M-1)/2} + h(1) z^{(M-3)/2} + ... + h((M-1)/2)
       + h((M+1)/2) z^{-1} + h((M+3)/2) z^{-2} + ... + h(M-1) z^{-(M-1)/2} ]
Applying the symmetry conditions for M odd:
h(0) = ± h(M-1)
h(1) = ± h(M-2)
.
.
h((M-1)/2) = ± h((M-1)/2)
h((M+1)/2) = ± h((M-3)/2)
.
.
h(M-1) = ± h(0)
so that
H(z) = z^{-(M-1)/2} [ h((M-1)/2) + Σ_{n=0}^{(M-3)/2} h(n) { z^{(M-1-2n)/2} ± z^{-(M-1-2n)/2} } ]
and similarly, for M even,
H(z) = z^{-(M-1)/2} [ Σ_{n=0}^{M/2-1} h(n) { z^{(M-1-2n)/2} ± z^{-(M-1-2n)/2} } ]
Or equivalently, the system function
H(Z) = Σ_{k=0}^{M-1} b_k Z^{-k}
where we can identify h(n) = b_n for 0 <= n <= M-1 and h(n) = 0 otherwise.
FIR filters can be realized in the following forms:
1. Direct form
2. Cascade form
3. Frequency-sampling realization
4. Lattice realization
• Direct form
• It is non-recursive in structure.
• Cascade form: H(Z) = Π_{k=1}^{K} H_k(Z), where
H_k(Z) = b_{k0} + b_{k1} Z^{-1} + b_{k2} Z^{-2},   k = 1, 2, ..., K
and K = integer part of (M+1)/2.
In the case of a linear-phase FIR filter, the symmetry in h(n) implies that the zeros of H(z) also
exhibit a form of symmetry. If z_k and z_k* are a pair of complex-conjugate zeros, then 1/z_k and
1/z_k* are also a pair of complex-conjugate zeros. Thus simplified fourth-order sections are formed.
This is shown below.
• Frequency-sampling realization
H(z) = (1 - z^{-N}) · (1/N) Σ_{k=0}^{N-1} H(k) / (1 - W_N^{-k} z^{-1})
This form can be realized as a cascade of FIR and IIR structures: the term (1 - z^{-N}) is realized as
FIR and the term (1/N) Σ_{k=0}^{N-1} H(k)/(1 - W_N^{-k} z^{-1}) as an IIR structure.
The realization of the above frequency-sampling form shows the necessity of complex arithmetic.
• Lattice realization
Lattice structures offer many interesting features:
Consider
H(z) = Y(z)/X(z) = 1 + Σ_{i=1}^{m} a_m(i) z^{-i}
When m = 1:  Y(z)/X(z) = 1 + a_1(1) z^{-1}
f_0(n) = r_0(n) = x(n)
The outputs are
f_1(n) = f_0(n) + k_1 r_0(n-1)        (1a)
r_1(n) = k_1 f_0(n) + r_0(n-1)        (1b)
If k_1 = a_1(1), then f_1(n) = y(n).
If m = 2:
Y(z)/X(z) = 1 + a_2(1) z^{-1} + a_2(2) z^{-2}
y(n) = x(n) + a_2(1) x(n-1) + a_2(2) x(n-2)
y(n) = f_1(n) + k_2 r_1(n-1)                                             (2)
     = f_0(n) + k_1 r_0(n-1) + k_2 [ k_1 f_0(n-1) + r_0(n-2) ]
     = f_0(n) + k_1 r_0(n-1) + k_2 k_1 f_0(n-1) + k_2 r_0(n-2)
Since f_0(n) = r_0(n) = x(n),
y(n) = x(n) + k_1 x(n-1) + k_2 k_1 x(n-1) + k_2 x(n-2)
     = x(n) + (k_1 + k_1 k_2) x(n-1) + k_2 x(n-2)                        (3)
We recognize
a_2(1) = k_1 + k_1 k_2
a_2(2) = k_2
so that
k_1 = a_2(1) / (1 + a_2(2))   and   k_2 = a_2(2)                          (4)
Equation (3) means that the lattice structure for a second-order filter is simply a cascade of two
first-order lattices with k_1 and k_2 as defined in (4).
70
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIG
D GNAL PRO
OCESSING III YEAR / V SEM
E
ECE
M – Stages
⎡ 1 1 1 1 ⎤
We reecognize h(nn) = ⎢1, , , , , 1⎥
⎣ 3 4 4 3 ⎦
M is even
e = 6, andd we observe h (n) = h (M
M-1-n) h (n) = h (5-nn)
i.e h (0)
( = h (5) h (1) = h (4) h (2) = h (3)
71
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIGNAL PROCESSING III YEAR / V SEM
ECE
Solution:
m=9
⎡ 1 1 1 1 1 1 ⎤
h(n) = ⎢1, , , , − , − , − , − 1⎥
⎣ 4 3 2 2 3 4 ⎦
Odd symmetry
72
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIG
D GNAL PRO
OCESSING III YEAR / V SEM
E
ECE
Solutiion:
73
2015‐2016 Visit : www.EasyEngineeering.net
Visit : www.EasyEngineeering.net
DIGITAL SIGNAL PROCESSING III YEAR / V SEM
ECE
Given a1 (1) = 2 , a 2 ( 2) = 1
3
m = M, M-1… 2, 1
If m=2 k 2 = a 2 ( 2) = 1
3
If m=1 k1 = a1 (1)
a 2 (1) 2 3
a1 (1) = = =
1 + a 2 (2) 1 + 3 2
1
Hence k1 = a1 (1) = 3
2
UNIT IV
Sub band coding is a method by which the signal (speech signal) is sub divided in
to several frequency bands and each band is digitally encoded separately.
4.Identify the various factors which degrade the performance of the digital
filter implementation when finite word length is used.(May-07,May-2010)
The truncation is the process of reducing the size of binary number by discarding all
bits less significant than the least significant bit that is retained.
Rounding is the process of reducing the size of a binary number to finite word
sizes of b-bits such that, the rounded b-bit number is closest to the original
unquantized number.
In recursive system when the input is zero or some nonzero constant value, the
nonlinearities due to finite precision arithmetic operation may cause periodic
oscillations in the output. These oscillations are called limit cycles.
i. Input quantization error ii. Product quantization error iii. Coefficient quantization
error.
The IEEE-754 standard for a 32-bit single precision floating point number is
N_f = (-1)^S × 2^(E - 127) × M
with the bit fields
bit 0: S (sign)    bits 1-8: E (exponent)    bits 9-31: M (mantissa)
Fixed point vs. floating point arithmetic:
1. The accuracy of the result is less due to the smaller dynamic range. / The accuracy of the result
will be higher due to the larger dynamic range.
4. Fixed point arithmetic can be used for real-time computations. / Floating point arithmetic cannot
be used for real-time computations.
To prevent overflow, the signal level at certain points in the digital filters
must be scaled so that no overflow occurs in the adder.
14.What are the results of truncation for positive & negative numbers?(Nov-
06)
To truncate these numbers to 4 decimal digits, we only consider the 4 digits to the
right of the decimal point.
The result would be: 5.6341 ,32.4381 ,−6.3444 Note that in some cases,
truncating would yield the same result as rounding, but truncation does not round
up or round down the digits; it merely cuts off at the specified digit. The truncation
error can be twice the maximum error in rounding.
16. List out some of the finite word length effects in digital filter.(Apr-06)
1. Signed-magnitude format
2. One’s-complement format
3. Two’s-complement format.
In all the three formats, the positive number is same but they differ only in
representing negative numbers.
The floating numbers will have a mantissa part and exponent part. In a given
word size the bits allotted for mantissa and exponent are fixed. The mantissa is used
to represent a binary fraction number and the exponent is a positive or negative
binary integer. The value of the exponent can be adjusted to move the position of
binary point in mantissa. Hence this representation is called floating point. The
floating point number can be expressed as,
F = M · 2^E, where M is the mantissa and E is the exponent.
In digital computation, the output of multipliers, i.e., the products, is quantized to finite word length
in order to store them in registers and to be used in subsequent calculations. The error due to the
quantization of the output of multipliers is referred to as product quantization error.
In fixed point addition, overflow occurs when the sum exceeds the finite word length of the register
used to store the sum. The overflow in addition may lead to oscillation in the output, which is called
overflow limit cycle.
25. Define white noise. (Dec-06)
A stationary random process is said to be white noise if its power density spectrum is constant. Hence
white noise has a flat frequency spectrum.
i) saturation arithmetic
ii) scaling
29. What is meant by A/D conversion noise?(MU-04)
A DSP contains a device, an A/D converter, that operates on the analog input x(t) to produce xq(n),
which is a binary sequence of 0s and 1s. At first the signal x(t) is sampled at regular intervals to
produce a sequence x(n) of infinite precision. Each sample x(n) is then expressed with a finite number
of bits, giving the sequence xq(n). The difference signal e(n) = xq(n) - x(n) is called A/D conversion
noise.
16 MARKS
1. Explain in detail about Number Representation.
Number Representation
In digital signal processing, (B + 1)-bit fixed-point numbers are usually represented as two's-
complement signed fractions in the format
b_0 . b_{-1} b_{-2} ... b_{-B}
The number represented is then
X = -b_0 + b_{-1} 2^{-1} + b_{-2} 2^{-2} + ... + b_{-B} 2^{-B}                            (3.1)
where b_0 is the sign bit and the number range is -1 <= X < 1. The advantage of this representation is
that the product of two numbers in the range from -1 to 1 is another number in the same range.
Floating-point numbers are represented as
X = (-1)^s m 2^c                                                                          (3.2)
where s is the sign bit, m is the mantissa, and c is the characteristic or exponent. To make the
representation of a number unique, the mantissa is normalized so that 0.5 <= m < 1.
Although floating-point numbers are always represented in the form of (3.2), the way in which this
representation is actually stored in a machine may differ. Since m >= 0.5, it is not necessary to store
the 2^{-1}-weight bit of m, which is always set. Therefore, in practice numbers are usually stored as
X = (-1)^s (0.5 + f) 2^c                                                                  (3.3)
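A small Python sketch (illustrative, with an assumed word length B = 8) of quantization of two's-complement fractions by rounding, truncation and magnitude truncation; the measured error mean and variance can be compared with expressions (3.12)-(3.14) given below.

import numpy as np

def quantize(x, B, mode="round"):
    delta = 2.0 ** (-B)                          # quantization step
    if mode == "round":
        q = np.round(x / delta) * delta          # nearest multiple of delta
    elif mode == "truncate":
        q = np.floor(x / delta) * delta          # two's-complement truncation (toward -inf)
    else:
        q = np.trunc(x / delta) * delta          # magnitude truncation (toward zero)
    return np.clip(q, -1.0, 1.0 - delta)         # keep the result a (B+1)-bit fraction

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100_000)
for mode in ("round", "truncate", "magnitude"):
    e = quantize(x, B=8, mode=mode) - x
    print(mode, e.mean(), e.var())               # compare with 0 or -delta/2, and delta^2/12 or delta^2/3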
For rounding, the quantization error ε_r is uniformly distributed over (-Δ/2, Δ/2), so its mean is zero
and its variance is
σ²_εr = E{ (ε_r - m_εr)² } = (1/Δ) ∫_{-Δ/2}^{Δ/2} ε_r² dε_r = Δ²/12                       (3.12)
Likewise, for truncation,
m_εt = E{ ε_t } = -Δ/2,      σ²_εt = E{ (ε_t - m_εt)² } = Δ²/12                            (3.13)
and, for magnitude truncation,
m_εmt = E{ ε_mt } = 0,       σ²_εmt = E{ (ε_mt - m_εmt)² } = Δ²/3                           (3.14)
For floating-point rounding the relative error ε_r is zero mean and, assuming a uniform distribution,
σ²_εr = (1/6) 2^{-2B} = (0.167) 2^{-2B}                                                    (3.18)
In practice, the distribution of ε_r is not exactly uniform. Actual measurements of roundoff noise in [1]
suggested that
σ²_εr ≈ 0.23 Δ²                                                                            (3.19)
while a detailed theoretical and experimental analysis in [2] determined
σ²_εr ≈ 0.18 Δ²                                                                            (3.20)
From (3.15) we can represent a quantized floating-point value in terms of the unquantized value and
the random variable ε_r using
Q_r(X) = X (1 + ε_r)                                                                       (3.21)
Therefore, the finite-precision product X1 X2 and the sum X1 + X2 can be written as
fl(X1 X2) = X1 X2 (1 + ε_r)                                                                (3.22)
and
fl(X1 + X2) = (X1 + X2)(1 + ε_r)                                                           (3.23)
where ε_r is zero-mean with the variance of (3.20).
For fixed-point rounding, the quantization noise injected at each rounding point has variance
σ² = 2^{-2B} / 12                                                                          (3.28)
If quantization noise with mean m_x and variance σ²_x is injected at a point whose impulse response
to the filter output is g(n), the output noise has mean
m_y = m_x Σ_{n=-∞}^{∞} g(n)                                                                (3.24)
and variance
σ²_y = σ²_x Σ_{n=-∞}^{∞} g²(n)                                                             (3.25)
Therefore, if g(n) is the impulse response from the point where a roundoff takes place to the filter
output, the contribution of that roundoff to the variance (mean-square value) of the output roundoff
noise is given by (3.25) with σ²_x replaced with the variance of the roundoff. If there is more than one
source of roundoff error in the filter, it is assumed that the errors are uncorrelated so the output noise
variance is simply the sum of the contributions from each source.
where e(n) is a random roundoff noise sequence. Since e(n) is injected at the same point as the input,
it propagates through a system with impulse response h(n). Therefore, for fixed-point arithmetic with
rounding, the output roundoff noise variance from (3.6), (3.12), (3.25), and (3.33) is
σ²_o = (Δ²/12) Σ_{n=0}^{∞} h²(n) = (Δ²/12) Σ_{n=0}^{∞} a^{2n} = (2^{-2B}/12) · 1/(1 - a²)  (3.36)
With fixed-point arithmetic there is the possibility of overflow following addition. To avoid overflow
it is necessary to restrict the input signal amplitude. This can be accomplished by either placing a
scaling multiplier at the filter input or by simply limiting the maximum input signal amplitude.
Consider the case of the first-order filter of (3.34). The transfer function of this filter is
H(e^{jω}) = Y(e^{jω}) / X(e^{jω}) = 1 / (e^{jω} - a)                                       (3.37)
so
|H(e^{jω})|² = 1 / (1 + a² - 2a cos ω)                                                     (3.38)
and
|H(e^{jω})|_max = 1 / (1 - |a|)                                                            (3.39)
The peak gain of the filter is 1/(1 - |a|), so limiting input signal amplitudes to |x(n)| <= 1 - |a| will
make overflows unlikely.
An expression for the output roundoff noise-to-signal ratio can easily be obtained for the case where
the filter input is white noise, uniformly distributed over the interval from -(1 - |a|) to (1 - |a|) [4,5].
In this case
σ²_x = ( 1 / (2(1 - |a|)) ) ∫_{-(1-|a|)}^{(1-|a|)} x² dx = (1/3)(1 - |a|)²                  (3.40)
so, from (3.25),
σ²_y = (1/3) · (1 - |a|)² / (1 - a²)                                                       (3.41)
Combining (3.36) and (3.41) then gives the output noise-to-signal ratio
σ²_o / σ²_y = 3 · (2^{-2B}/12) · 1/(1 - |a|)²                                              (3.42)
There are two noise sources contributing to e(n) if quantization is performed after each multiply, and
there is one noise source if quantization is performed after summation. For the second-order filter with
complex-conjugate poles at r e^{±jθ}, since
Σ_{n=-∞}^{∞} h²(n) = [ (1 + r²) / (1 - r²) ] · 1 / [ (1 + r²)² - 4r² cos²θ ]               (3.46)
the output roundoff noise is
σ²_o = ν · (2^{-2B}/12) · [ (1 + r²) / (1 - r²) ] · 1 / [ (1 + r²)² - 4r² cos²θ ]          (3.47)
where ν = 1 for quantization after summation, and ν = 2 for quantization after each multiply.
To obtain an output noise-to-signal ratio we note that
H(e^{jω}) = 1 / (1 - 2r cosθ e^{-jω} + r² e^{-j2ω})                                        (3.48)
and, using the approach of [6], the peak gain |H(e^{jω})|_max of (3.49) can be expressed in terms of
the saturation function
sat(μ) = 1      for μ > 1
       = μ      for -1 <= μ <= 1                                                           (3.50)
       = -1     for μ < -1
Following the same approach as for the first-order case then gives the output noise-to-signal ratio of
(3.51).
Figure 3.1 is a contour plot showing the noise-to-signal ratio of (3.51) for ν = 1 in units of the noise
variance of a single quantization, 2^{-2B}/12. The plot is symmetrical about θ = 90°, so only the range
from 0° to 90° is shown. Notice that as r → 1, the roundoff noise increases without bound. Also notice
that the noise increases as θ → 0°.
It is possible to design state-space filter realizations that minimize fixed-point
roundoff noise [7] - [10]. Depending on the transfer function being realized, these
structures may provide a roundoff noise level that is orders-of-magnitude lower
than for a nonoptimal realization. The price paid for this reduction in roundoff
noise is an increase in the number of computations required to implement the
filter. For an Nth-order filter the increase is from roughly 2N multiplies for a
direct form realization to roughly (N + 1)2 for an optimal realization. However,
if the filter is realized by the parallel or cascade connection of first- and second-
order optimal subfilters, the increase is only to about 4N multiplies. Furthermore,
near-optimal realizations exist that increase the number of multiplies to only
about 3N [10].
Notice that while the input is zero except for the first sample, the output oscillates
with amplitude 1/8 and period 6.
Limit cycles are primarily of concern in fixed-point recursive filters. As long as
floating-point filters are realized as the parallel or cascade connection of first- and
second-order subfilters, limit cycles will generally not be a problem since limit
cycles are practically not observable in first- and second-order systems
implemented with 32-b floating-point arithmetic [12]. It has been shown that such
systems must have an extremely small margin of stability for limit cycles to exist
at anything other than underflow levels, which are at an amplitude of less than
10^{-38} [12]. There are at least three ways of dealing with limit cycles when fixed-
point arithmetic is used. One is to determine a bound on the maximum limit cycle
amplitude, expressed as an integral number of quantization steps [13]. It is then
possible to choose a word length that makes the limit cycle amplitude acceptably
low. Alternately, limit cycles can be prevented by randomly rounding calculations
up or down [14]. However, this approach is complicated to implement. The third
approach is to properly choose the filter realization structure and then quantize
the filter calculations using magnitude truncation [15,16]. This approach has the
disadvantage of producing more roundoff noise than truncation or rounding [see
(3.12)—(3.14)].
y(n) = Qr[ (7/8) y(n-1) - (5/8) y(n-2) + x(n) ]                                            (3.72)
x(n) = (3/4) δ(n) - (5/8) δ(n-1)                                                            (3.73)
y(4) = Qr[ (7/8) y(3) - (5/8) y(2) ] = ...                                                  (3.74)
One way to avoid overflow oscillations is to scale the filter calculations so as to render overflow
impossible. However,
this may unacceptably restrict the filter dynamic range. Another method is to
force completed sums-of- products to saturate at ±1, rather than overflowing
[18,19]. It is important to saturate only the completed sum, since intermediate
overflows in two’s complement arithmetic do not affect the accuracy of the final
result. Most fixed-point digital signal processors provide for automatic saturation
of completed sums if their saturation arithmetic feature is enabled. Yet
another way to avoid overflow oscillations is to use a filter structure for which
any internal filter transient is guaranteed to decay to zero [20]. Such structures are
desirable anyway, since they tend to have low roundoff noise and be insensitive
to coefficient quantization [21].
[Figure: Realizable pole locations for the difference equation of (3.76).]
The sparseness of realizable pole locations near z = ±1 will result in a large coefficient quantization
error for poles in this region.
Figure 3.4 gives an alternative structure to (3.77) for realizing the transfer function of (3.76). Notice
that quantizing the coefficients of this structure corresponds to quantizing Xr and Xi. As shown in
Fig. 3.5 from [5], this results in a uniform grid of realizable pole locations. Therefore, large coefficient
quantization errors are avoided for all pole locations.
It is well established that filter structures with low roundoff noise tend to be robust to coefficient
quantization, and vice versa [22]-[24]. For this reason, the uniform grid structure of Fig. 3.4 is also
popular because of its low roundoff noise. Likewise, the low-noise realizations of [7]-[10] can be
expected to be relatively insensitive to coefficient quantization, and digital wave filters and lattice
filters that are derived from low-sensitivity analog structures tend to have not only low coefficient
sensitivity, but also low roundoff noise [25,26].
It is well known that in a high-order polynomial with clustered roots, the root location is a very
sensitive function of the polynomial coefficients. Therefore, filter poles and zeros can be much more
accurately controlled if higher order filters are realized by breaking them up into the parallel or
cascade connection of first- and second-order subfilters. One exception to this rule is the case of
linear-phase FIR filters, in which the symmetry of the polynomial coefficients and the spacing of the
filter zeros around the unit circle usually permits an acceptable direct realization using the convolution
summation.
Given a filter structure it is necessary to assign the ideal pole and zero locations to the realizable
locations. This is generally done by simply rounding or truncating the filter coefficients to the
available number of bits, or by assigning the ideal pole and zero locations to the nearest realizable
locations. A more complicated alternative is to consider the original filter design problem as a
problem in discrete
optimization, and choose the realizable pole and zero locations that give the best approximation to the
desired filter response [27]-[30].
FIGURE 3.4: Alternate realization structure.
FIGURE 3.5: Realizable pole locations for the alternate realization structure.
When realizing IIR filters, either a parallel or cascade connection of first- and second-order subfilters
is almost
always preferable to a high-order direct-form realization. With the availability of very low-cost floating-point
digital signal processors, like the Texas Instruments TMS320C32, it is highly recommended that floating-point
arithmetic be used for IIR filters. Floating-point arithmetic simultaneously eliminates most concerns regarding
scaling, limit cycles, and overflow oscillations. Regardless of the arithmetic employed, a low roundoff noise
structure should be used for the second- order sections. Good choices are given in [2] and [10]. Recall that
realizations with low fixed-point roundoff noise also have low floating-point roundoff noise. The use of a low
roundoff noise structure for the second-order sections also tends to give a realization with low coefficient
quantization sensitivity. First-order sections are not as critical in determining the roundoff noise and coefficient
sensitivity of a realization, and so can generally be implemented with a simple direct form structure.
2 MARKS
16 MARKS
Definition.
Given an integer D, we define the downsampling operator S↓D, shown in the figure, by the following
relationship:
y[n] = S↓D x[n] = x[nD]
The operator S↓D decreases the sampling frequency by a factor of D, by keeping one sample out of
every D samples.
[Figure: Upsampling, x[n] → S↑L → y[n]; Downsampling, x[n] → S↓D → y[n].]
Example
Let x = [ ..., 1, 2, 3, 4, 5, 6, ... ]; then y[n] = S↓2 x[n] is given by
y[n] = [ ..., 1, 3, 5, 7, ... ]
Sampling Rate Conversion by a Rational Factor
Consider the problem of designing an algorithm for resampling a digital signal x[n] from the original rate Fx (in Hz) to a rate Fy = (L/D)Fx, with L and D integers. For example, we have a signal at telephone quality, Fx = 8 kHz, and we want to resample it at radio quality, Fy = 22 kHz. In this case, clearly L = 11 and D = 4.
First consider two particular cases, interpolation and decimation, where we
upsample and downsample by an integer factor without creating aliasing or image
frequencies.
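One common way to carry out the L/D conversion in practice is polyphase resampling (upsample by L, low-pass filter, downsample by D). The sketch below uses scipy.signal.resample_poly, which performs exactly this operation; the 440 Hz test tone and signal length are invented for illustration, while the 8 kHz to 22 kHz figures follow the example above:

```python
import numpy as np
from scipy.signal import resample_poly

Fx = 8000                      # original rate (Hz), telephone quality
L, D = 11, 4                   # Fy = (L/D)*Fx = 22 kHz, radio quality

t = np.arange(0, 0.01, 1/Fx)
x = np.sin(2*np.pi*440*t)      # illustrative 440 Hz tone sampled at Fx

# Upsample by L, apply the anti-image/anti-alias filter, downsample by D
y = resample_poly(x, up=L, down=D)
print(len(x), len(y))          # the length grows by roughly a factor L/D
```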
Example
As an example of application, suppose you want to design a filter with the following specifications:
Passband edge Fp = 450 Hz
Stopband edge Fstop = 500 Hz
Sampling frequency Fs = 96 kHz
Notice that the stopband edge is several orders of magnitude smaller than the sampling frequency. This leads to a filter with a very narrow transition region and therefore very high complexity.
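To see why such a narrow transition band (450-500 Hz at a 96 kHz sampling rate) is costly, here is a rough sketch using the Kaiser window order estimate; the 60 dB stopband attenuation is an assumed figure, not taken from the text:

```python
import numpy as np

Fs_samp = 96000.0              # sampling frequency (Hz)
Fpass, Fstop = 450.0, 500.0    # band edges (Hz)
A = 60.0                       # assumed stopband attenuation in dB

delta_w = 2*np.pi*(Fstop - Fpass)/Fs_samp          # transition width (rad/sample)
numtaps = int(np.ceil((A - 7.95)/(2.285*delta_w))) # Kaiser order estimate
print(numtaps)                 # on the order of 7000 taps for a single-stage FIR
```

An estimate of several thousand taps for a single-stage design is what motivates multirate (multistage) implementations of such filters.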
Speech signals
• From prehistory to the new media of the future, speech has been and will be a primary form of communication between humans.
• Nevertheless, conditions often arise under which we measure and then transform speech into another form, the speech signal, in order to enhance our ability to communicate.
• The speech signal is extended through technological media such as telephony, movies, radio, television, and now the Internet. This trend reflects the primacy of speech communication in human psychology.
• "Speech will become the next major trend in the personal computer market in the near future."
Speech signal processing
• The topic of speech signal processing can be loosely defined as the manipulation of sampled speech signals by a digital processor to obtain a new signal with some desired properties.
Speech signal processing is a diverse field that relies on knowledge of language at several levels:
- Acoustics
- Phonetics
- Phonology
- Morphology
- Syntax
- Semantics
- Pragmatics
These seven layers for describing speech range from the largely language-independent (acoustics, phonetics) to the strongly language-dependent (semantics, pragmatics).
From Speech to Speech Signal, in terms of Digital Signal Processing
Acoustic (and perceptual) features:
- fundamental frequency (F0) (pitch)
- amplitude (loudness)
- spectrum (timbre)
Figure: Typical voiced speech signals.
APC assumes that the input speech signal is repetitive with a period significantly longer than the average frequency content. Two predictors are used in APC. The high-frequency components (up to 4 kHz) are estimated using a 'spectral' or 'formant' predictor and the low-frequency components (50-200 Hz) by a 'pitch' or 'fine structure' predictor (see Figure 7.4). The spectral estimator may be of order 1-4 and the pitch estimator about order 10. The low-frequency components of the speech signal are due to the movement of the tongue, chin and lips (the spectral envelope, or formants).
The high-frequency components originate from the vocal cords and the noise-like sounds (like in 's') produced in the front of the mouth.
The output signal y(n), together with the predictor parameters obtained adaptively in the encoder, is transmitted to the decoder, where the speech signal is reconstructed. The decoder has the same structure as the encoder, but the predictors are not adaptive and are invoked in the reverse order. The prediction parameters are adapted for blocks of data corresponding to, for instance, 20 ms time periods.
APC is used for coding speech at 9.6 and 16 kbit/s. The algorithm works well in noisy environments, but unfortunately the quality of the processed speech is not as good as for other methods like CELP, described below.
Figure 7.6: The LPC model. The excitation (periodic pitch excitation for voiced speech or noise for unvoiced speech, selected by a voiced/unvoiced switch) drives the synthesis filter to produce synthetic speech.
The first vocoder was designed by H. Dudley in the 1930s and demonstrated at the New York World's Fair in 1939. Vocoders have become popular as they achieve reasonably good speech quality at low data rates, from 2.4 kbit/s to 9.6 kbit/s. There are many types of vocoders (Marven and Ewers, 1993); some of the most common techniques will be briefly presented below.
Most vocoders rely on a few basic principles. Firstly, the characteristics of the speech signal are assumed to be fairly constant over a time of approximately 20 ms; hence most signal processing is performed on (overlapping) data blocks of 20-40 ms length. Secondly, the speech model consists of a time-varying filter, corresponding to the acoustic properties of the mouth, and an excitation signal. The excitation signal is either a periodic waveform, as created by the vocal cords, or a random noise signal for production of 'unvoiced' sounds, for example 's' and 'f'. The filter parameters and excitation parameters are assumed to be independent of each other and are commonly coded separately.
Linear predictive coding (LPC) is a popular method which has, however, been replaced by newer approaches in many applications. LPC works exceedingly well at low bit rates, and the LPC parameters contain sufficient information about the speech signal to be used in speech recognition applications. The LPC model is shown in Figure 7.6.
LPC is basically an auto-regressive model (see Chapter 5) and the vocal tract is modelled as a time-varying all-pole filter (IIR filter) having the transfer function
H(z) = G / (1 + Σ ak z^-k), with the sum running over k = 1, 2, ..., p    (7.17)
where p is the order of the filter. The excitation signal e(n), being either noise or a periodic waveform, is fed to the filter via a variable gain factor G. The output signal can be expressed in the time domain as
y(n) = G e(n) - a1 y(n-1) - a2 y(n-2) - ... - ap y(n-p)    (7.18)
The output is thus predicted as a linear combination of earlier output samples and the excitation signal (hence linear predictive coding). The filter coefficients ak are time varying.
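A minimal sketch of the all-pole synthesis of equation (7.18); the gain, the second-order coefficient values and the noise excitation are illustrative assumptions, and scipy.signal.lfilter realizes H(z) = G / (1 + a1 z^-1 + a2 z^-2) directly:

```python
import numpy as np
from scipy.signal import lfilter

G = 0.5
a = np.array([-1.2, 0.8])          # assumed a1, a2 for illustration (stable filter)
e = np.random.randn(160)           # excitation: noise, as for an unvoiced frame

# y(n) = G*e(n) - a1*y(n-1) - a2*y(n-2)  <=>  H(z) = G / (1 + a1 z^-1 + a2 z^-2)
y = lfilter([G], np.concatenate(([1.0], a)), e)
```

For a voiced frame, the same filter would instead be excited by a periodic pulse train at the pitch period.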
The model above describes how to synthesize the speech given the pitch information (whether noise or periodic excitation should be used), the gain and the filter parameters. These parameters must be determined by the encoder or analyser, taking the original speech signal x(n) as input.
The analyser windows the speech signal in blocks of 20-40 ms, usually with a Hamming window (see Chapter 5). These blocks or 'frames' are repeated every 10-30 ms, hence there is a certain overlap in time. Every frame is then analysed with respect to the parameters mentioned above.
Firstly, the pitch frequency is determined. This also tells whether we are dealing with a voiced or unvoiced speech signal. This is a crucial part of the system and many pitch detection algorithms have been proposed. If the segment of the speech signal is voiced and has a clear periodicity, or if it is unvoiced and not periodic, things are quite easy. Segments having properties in between these two extremes are difficult to analyse. No algorithm has been found so far that is 'perfect' for all listeners.
Now, the second step of the analyser is to determine the gain and the filter parameters. This is done by estimating the speech signal using an adaptive predictor. The predictor has the same structure and order as the filter in the synthesizer. Hence, the output of the predictor is
x̂(n) = -a1 x(n-1) - a2 x(n-2) - ... - ap x(n-p)    (7.19)
where x̂(n) is the predicted input speech signal and x(n) is the actual input signal. The filter coefficients ak are determined by minimizing the square error
E = Σ (x(n) - x̂(n))^2, summed over the frame    (7.20)
This can be done in different ways, either by calculating the autocorrelation coefficients and solving the Yule-Walker equations (see Chapter 5) or by using some recursive, adaptive filter approach (see Chapter 3).
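One possible sketch of the autocorrelation route: estimate the autocorrelation of the windowed frame and solve the Yule-Walker (normal) equations for the ak that minimize the squared prediction error. The frame length, the order p and the use of a direct linear solve rather than the Levinson-Durbin recursion are illustrative choices:

```python
import numpy as np

def lpc_coefficients(frame, p):
    """Return a1..ap minimizing sum over n of (x(n) + a1 x(n-1) + ... + ap x(n-p))^2."""
    x = frame * np.hamming(len(frame))                 # Hamming-windowed frame
    r = np.correlate(x, x, mode='full')[len(x)-1:]     # autocorrelation r[0], r[1], ...
    R = np.array([[r[abs(i-j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, -r[1:p+1])                  # Yule-Walker / normal equations
    return a

frame = np.random.randn(240)      # stand-in for a 20-40 ms speech frame
a = lpc_coefficients(frame, p=10)
```

The sign convention matches equation (7.19): the predictor output is formed with -ak, so the residual is x(n) + a1 x(n-1) + ... + ap x(n-p).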
So, for every frame, all the parameters above are determined and transmitted to the synthesizer, where a synthetic copy of the speech is generated.
An improved version of LPC is residual excited linear prediction (RELP). Let us take a closer look at the error or residual signal r(n) resulting from the prediction in the analyser (equation (7.19)). The residual signal (which we are trying to minimize) can be expressed as
r(n) = x(n) - x̂(n) = x(n) + a1 x(n-1) + a2 x(n-2) + ... + ap x(n-p)    (7.21)
From this it is straightforward to find that the corresponding expression using z-transforms is
R(z) = X(z) H^-1(z)    (7.22)
Hence, the predictor can be regarded as an 'inverse' filter to the LPC model filter. If we now pass this residual signal to the synthesizer and use it to excite the LPC filter, that is E(z) = R(z), instead of using the noise or periodic waveform sources, we get
Y(z) = E(z) H(z) = R(z) H(z) = X(z) H^-1(z) H(z) = X(z)    (7.23)
In the ideal case, we would hence get the original speech signal back. When minimizing the variance of the residual signal (equation (7.20)), we gathered as much information about the speech signal as possible, given this model, into the filter coefficients ak. The residual signal contains the remaining information. If the model is well suited to the signal type (speech signals), the residual signal is close to white noise, having a flat spectrum. In such a case we can get away with coding only a small range of frequencies, for instance 0-1 kHz, of the residual signal. At the synthesizer, this baseband is then repeated to generate higher frequencies. This signal is used to excite the LPC filter.
Vocoders using RELP are used at transmission rates of 9.6 kbit/s. The advantage of RELP is a better speech quality compared to LPC for the same bit rate. However, the implementation is more computationally demanding.
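A brief sketch of the inverse-filtering idea behind equations (7.21)-(7.23): the analysis filter A(z) = 1 + a1 z^-1 + ... + ap z^-p produces the residual, and feeding that residual back through the synthesis filter 1/A(z) reconstructs the frame. The frame, the coefficient values and the unit gain are assumed purely for illustration:

```python
import numpy as np
from scipy.signal import lfilter

frame = np.random.randn(240)            # stand-in speech frame
a = np.array([-1.2, 0.8])               # assumed LPC coefficients a1, a2
A = np.concatenate(([1.0], a))          # A(z) = 1 + a1 z^-1 + a2 z^-2

residual = lfilter(A, [1.0], frame)          # r(n) = x(n) + a1 x(n-1) + a2 x(n-2)
reconstructed = lfilter([1.0], A, residual)  # excite 1/A(z) with the residual
print(np.allclose(reconstructed, frame))     # True: the original frame is recovered
```

In a RELP coder only a low-frequency band of this residual would actually be coded and transmitted.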
Another possible extension of the original LPC approach is multipulse excited linear predictive coding (MLPC). This extension is an attempt to make the synthesized speech less 'mechanical', by using a number of different pitches of the excitation pulses rather than only the two (periodic and noise) used by standard LPC.
The MLPC algorithm sequentially detects k pitches in a speech signal. As soon as one pitch is found, it is subtracted from the signal and detection starts over again, looking for the next pitch. Pitch detection is a hard task and the complexity of the required algorithms is often considerable. MLPC, however, offers a better speech quality than LPC for a given bit rate and is used in systems working at 4.8-9.6 kbit/s.
Yet another extension of LPC is code excited linear prediction (CELP). The main feature of CELP compared to LPC is the way in which the filter coefficients are handled. Assume that we have a standard LPC system with a filter of order p. If every coefficient ak requires N bits, we need to transmit N·p bits per frame for the filter parameters alone. This approach is all right if all combinations of filter coefficients are equally probable. This is, however, not the case. Some combinations of coefficients are very probable, while others may never occur. In CELP, the coefficient combinations are represented by p-dimensional vectors. Using vector quantization techniques, the most probable vectors are determined. Each of these vectors is assigned an index and stored in a codebook. Both the analyser and the synthesizer of course have identical copies of the codebook, typically containing 256-512 vectors. Hence, instead of transmitting N·p bits per frame for the filter parameters, only 8-9 bits are needed.
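A toy sketch of the codebook idea: with a codebook of 256-512 coefficient vectors, the analyser only transmits the index of the nearest vector (8-9 bits) instead of N·p bits. The random codebook and coefficient vector below are purely illustrative; a real codebook would come from vector quantization of training speech:

```python
import numpy as np

p, N_bits = 10, 16                          # filter order and assumed bits per coefficient
codebook = np.random.randn(512, p)          # illustrative 512-entry codebook

a = np.random.randn(p)                      # coefficient vector for the current frame
index = np.argmin(np.sum((codebook - a)**2, axis=1))   # nearest-neighbour search

print(N_bits * p, "bits/frame without codebook")                 # 160 bits
print(int(np.log2(len(codebook))), "bits/frame with codebook")   # 9 bits
```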
This method offers high-quality speech at low bit rates but requires considerable computing power to be able to store and match the incoming speech to the 'standard' sounds stored in the codebook. This is of course especially true if the codebook is large. Speech quality degrades as the codebook size decreases.
Most CELP systems do not perform well with respect to the higher-frequency components of the speech signal at low bit rates. This is counteracted in
There is also a variant of CELP called vector sum excited linear prediction (VSELP). The main difference between CELP and VSELP is the way the codebook is organized. Further, since VSELP uses fixed-point arithmetic algorithms, it is possible to implement it using cheaper DSP chips than CELP.
Adaptive Filters
The signal degradation in some physical systems is time varying, unknown, or possibly
both. For example, consider a high-speed modem for transmitting and receiving data over
telephone channels. It employs a filter called a channel equalizer to compensate for the channel
distortion. Since the dial-up communication channels have different and time-varying
characteristics on each connection, the equalizer must be an adaptive filter.
4. Explain about Adaptive Filter.
Adaptive filters modify their characteristics to achieve certain objectives by
automatically updating their coefficients. Many adaptive filter structures and adaptation
algorithms have been developed for different applications. This chapter presents the most widely
used adaptive filters based on the FIR filter with the least-mean-square (LMS) algorithm. These
adaptive filters are relatively simple to design and implement. They are well understood with
regard to stability, convergence speed, steady-state performance, and finite-precision effects.
Introduction to Adaptive Filtering
An adaptive filter consists of two distinct parts - a digital filter to perform the desired filtering,
and an adaptive algorithm to adjust the coefficients (or weights) of the filter. A general form of
adaptive filter is illustrated in Figure 7.1, where d(n) is a desired (or primary input) signal, y(n)
is the output of a digital filter driven by a reference input signal x(n), and an error signal e(n) is
the difference between d(n) and y(n). The adaptive algorithm adjusts the filter coefficients to
minimize the mean-square value of e(n). Therefore, the filter weights are updated so that the
error is progressively minimized on a sample-by-sample basis.
In general, there are two types of digital filters that can be used for adaptive filtering: FIR and
IIR filters. The FIR filter is always stable and can provide a linear-phase response. On the other
hand, the IIR
filter involves both zeros and poles. Unless they are properly controlled, the poles in the filter
may move outside the unit circle and result in an unstable system during the adaptation of
coefficients. Thus, the adaptive FIR filter is widely used for practical real-time applications.
This chapter focuses on the class of adaptive FIR filters.
The most widely used adaptive FIR filter is depicted in Figure 7.2. The filter output signal is computed as
y(n) = Σ wl(n) x(n - l), with the sum running over l = 0, 1, ..., L-1,    (7.13)
where the filter coefficients wl(n) are time varying and updated by the adaptive algorithms
that will be discussed next.
We define the input vector at time n as
x(n) = [x(n), x(n-1), ..., x(n-L+1)]T,    (7.14)
and the weight vector at time n as
w(n) = [w0(n), w1(n), ..., wL-1(n)]T.    (7.15)
Equation (7.13) can be expressed in vector form as
y(n) = wT(n) x(n) = xT(n) w(n).    (7.16)
The filter output y(n) is compared with the desired response d(n) to obtain the error signal
e(n) = d(n) - y(n) = d(n) - wT(n) x(n).    (7.17)
Our objective is to determine the weight vector w(n) to minimize the predetermined
performance (or cost) function.
Performance Function:
The adaptive filter shown in Figure 7.1 updates the coefficients of the digital filter to
optimize some predetermined performance criterion. The most commonly used performance
function is
based on the mean-square error (MSE), defined as ξ(n) = E[e^2(n)], the expected value of the squared error signal.
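The text presents the adaptive FIR filter with the LMS algorithm; a minimal sketch follows, in which the step size mu, the filter length and the system-identification scenario (the "unknown" system h) are illustrative assumptions, and the coefficient update w(n+1) = w(n) + mu·e(n)·x(n) is the standard LMS rule:

```python
import numpy as np

L, mu = 8, 0.01                        # filter length and step size (assumed values)
h = np.array([0.8, -0.4, 0.2])         # invented unknown system to be identified

x = np.random.randn(2000)                      # reference input x(n)
d = np.convolve(x, h, mode='full')[:len(x)]    # desired signal d(n)

w = np.zeros(L)                        # weight vector w(n), eq. (7.15)
for n in range(L, len(x)):
    xn = x[n:n-L:-1]                   # input vector [x(n) ... x(n-L+1)], eq. (7.14)
    y = np.dot(w, xn)                  # filter output y(n) = wT(n) x(n), eq. (7.16)
    e = d[n] - y                       # error e(n) = d(n) - y(n), eq. (7.17)
    w = w + mu * e * xn                # standard LMS coefficient update

print(np.round(w[:3], 2))              # approaches [0.8, -0.4, 0.2]
```

Each iteration moves the weights a small step in the direction that reduces the instantaneous squared error, so the MSE is progressively minimized on a sample-by-sample basis, as described above.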
There are two approaches to computer-generated speech: digital recording and vocal tract simulation. In digital
recording, the voice of a human speaker is digitized and stored, usually in a compressed form.
During playback, the stored data are uncompressed and converted back into an analog signal.
An entire hour of recorded speech requires only about three megabytes of storage, well within
the capabilities of even small computer systems. This is the most common method of digital
speech generation used today. Vocal tract simulators are more complicated, trying to mimic the
physical mechanisms by which humans create speech. The human vocal tract is an acoustic
cavity with resonant frequencies determined by the size and shape of the chambers. Sound
originates in the vocal tract in one of two basic ways, called voiced and fricative sounds.
With voiced sounds, vocal cord vibration produces near periodic pulses of air into the vocal
cavities. In comparison, fricative sounds originate from the noisy air turbulence at narrow
constrictions, such as the teeth and lips. Vocal tract simulators operate by generating digital
signals that resemble these two types of excitation. The characteristics of the resonant chamber
are simulated by passing the excitation signal through a digital filter with similar resonances.
This approach was used in one of the very early DSP success stories, the Speak & Spell, a
widely sold electronic learning aid for children.
Speech recognition
The automated recognition of human speech is immensely more difficult than speech
generation. Speech recognition is a classic example of things that the human brain does well,
but digital computers do poorly. Digital computers can store and recall vast amounts of data,
perform mathematical calculations at blazing speeds, and do repetitive tasks without becoming
bored or inefficient. Unfortunately, present day computers perform very poorly when faced with
raw sensory data. Teaching a computer to send you a monthly electric bill is easy. Teaching the
same computer to understand your voice is a major undertaking. Digital Signal Processing
generally approaches the problem of voice recognition in two steps: feature extraction
followed by feature matching. Each word in the incoming audio signal is isolated and then
analyzed to identify the type of excitation and resonant frequencies. These parameters are then
compared with previous examples of spoken words to identify the closest match. Often, these
systems are limited to only a few hundred words; can only accept speech with distinct pauses
between words; and must be retrained for each individual speaker. While this is adequate for
many commercial applications, these limitations are humbling when compared to the abilities of
human hearing. There is a great deal of work to be done in this area, with tremendous financial
rewards for those that produce successful commercial products.