
University of Manchester

CS3291: Digital Signal Processing 2003/2004

Solutions to selected problems in Notes


Section 1
1.1 Electrical waveform analogous to continuous variation in other quantity such as air pressure.

1.2. Never!

1.3. x(t) = 0 for t < 0;  x(t) = sin(100πt) for t ≥ 0.
1.4. Zero for t<0.

1.5. x[n] = 0 for n < 0;  x[n] = sin(0.5πn) for n ≥ 0.  fs = 10 Hz.

1.6. See notes.

1.7. (i) Multiplies by constant


(ii) Any linear time-invariant signal processing system of finite order.
(iii) Output is modulus of input (full-wave rectification).
(iv) Multiplies given signal by another signal.

1.8. Filtering music to increase or decrease bass or treble power: HI-FI tone control.

1.9. Permanent storage of data: no ageing as with magnetic tape.

1.10. Loss of data can be catastrophic and total, e.g. due to a virus, rather than partial. Also data can be
easily copied by unauthorized people.

1.11. Fixed point: only integers possible; fractions require a decimal or binary point to be assumed.
Floating point: fractions with wide dynamic range possible using mantissa and exponent.

1.12. The term "filter" is often taken to mean a device for removing or "filtering off" unwanted frequency
components of a signal. However a filter may be more generally defined as any finite order LTI
system. An analogue filter acts directly upon an analogue signal, for example using capacitors,
inductors, and operational amplifiers. A digital filter operates to similar effect on digital signals.

1.13. The input signal is processed as it is being received, sample by sample or block by block. Any output
will be produced with a fixed delay with respect to the input, as would be required, for example, for
processing speech in a telephone conversation. The processor must be fast enough to keep up with the
incoming data. The opposite is batch mode processing, where the incoming data may be captured and
stored, for example, on a CD.

1.14. Restoring historical music recordings.



1.15. Analysing or predicting the behaviour of the stock exchange.

1.16. Using the "rectangular-to-polar" function on a calculator: Modulus(3+4j) = 5, arg = 0.927 radians.
Enter 3, shift or inv R->P, 4, "=", gives the modulus (5); "X↔Y" gives the argument (0.927).
Mod(−3e^{4j}) = 3; Arg = 4 + π, i.e. 4 − π ≈ 0.858 when reduced to the principal range (−π, π].

1.17. y(t)=-4ωsin(ωt) . Amplitude = 4ω for any ω.

1.18. y(t)=(4/ω)sin(ωt) . Amplitude = 4/ω for any ω>0

1.19. (1 − e^{−(N+1)jω})/(1 − e^{−jω}) when ω ≠ 0 (more precisely when e^{−jω} ≠ 1); equal to N+1 when ω = 0.

1.20. 1/(1-0.9e-jω)

1.21. 1+e-jω = e-jω/2(ejω/2 + e-jω/2) = e-jω/2(2cos(ω/2))

1.22. x² − 0.9x + 0.81 = (x − Re^{jθ})(x − Re^{−jθ}) = x² − 2Rcos(θ)x + R². Therefore R = 0.9 and 2Rcos(θ) = 0.9; θ = arccos(1/2) = 1.047 (= π/3).

1.23. x(t) = cos(ωt+π/2)+(1/2)cos(2ωt+π)+(1/3)cos(3ωt+3π/2)+(1/4)cos(4ωt+2π)+…


= 0.5ej(ωt+π/2) + 0.5e-j(ωt+π/2) + 0.25ej(2ωt+π) + 0.25e-j(2ωt+π) + …

1.24. Lecture notes.

1.25. Just look at formulae and observe what happens when ω is replaced by -ω.

1.26. G(0) = 1, i.e. 0 dB.  G(ωC) = 1/√2, i.e. −3 dB.

G(10ωC) = 1/√(1 + 10^{2n}) ≈ 1/√(10^{2n}) = 10^{−n}, i.e. −20n dB.
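
As a quick numerical cross-check, the short C program below evaluates the Butterworth gain 1/√(1 + (ω/ωC)^{2n}) at ω = ωC and ω = 10ωC for a few orders n; the values come out near −3 dB and approximately −20n dB respectively.

#include <stdio.h>
#include <math.h>

/* Gain in dB of an nth-order Butterworth low-pass filter at the ratio omega/omega_c */
static double butterworth_db(double ratio, int n)
{
    return 20.0 * log10(1.0 / sqrt(1.0 + pow(ratio, 2 * n)));
}

int main(void)
{
    for (int n = 1; n <= 4; n++)
        printf("n=%d:  G(wc)=%6.2f dB   G(10wc)=%7.2f dB\n",
               n, butterworth_db(1.0, n), butterworth_db(10.0, n));
    return 0;        /* expect about -3.01 dB and approximately -20n dB */
}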

Section 2
2.1. Yes
2.2 If input to L1 in top diagram is δ(t) output from L1 is h1(t) , i.e. its impulse-response,
and this forms the input to L2 to produce an overall output h1(t)⊗h2(t) as given by the
convolution formula in notes. If input to L2 in bottom diagram is δ(t) output from L2 is
h2(t) and this forms the input to L1 to produce an overall output h2(t)⊗h1(t). Hence the
impulse-response of the top arrangement is h1(t)⊗h2(t) and the impulse-response of the
bottom arrangement is h2(t)⊗h1(t) and we know that h1(t)⊗h2(t)= h2(t)⊗h1(t). So the two
arrangements have exactly the same impulse-response and this means that their responses
to any other input signal will also be identical. A similar argument may be used when
L1 and L2 are discrete time LTI systems.

2.3.
H(jω) = ∫_{−∞}^{∞} h(t) e^{−jωt} dt = ∫_0^1 e^{−jωt} dt = [ e^{−jωt} / (−jω) ]_0^1 when ω ≠ 0, and = 1 when ω = 0

= (e^{−jω/2} / (−jω)) [ e^{−jω/2} − e^{jω/2} ] = (2/ω) e^{−jω/2} sin(ω/2)   when ω ≠ 0

= e^{−jω/2} sinc(ω/(2π))   for any ω
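
If you want to verify the closed form, the following short C sketch approximates H(jω) = ∫_0^1 e^{−jωt} dt by a Riemann sum and compares it with e^{−jω/2} sinc(ω/(2π)):

#include <stdio.h>
#include <math.h>
#include <complex.h>

int main(void)
{
    double omegas[3] = {0.5, 2.0, 5.0};
    for (int k = 0; k < 3; k++) {
        double w = omegas[k];

        /* Riemann-sum approximation of the integral of e^{-jwt} over 0 <= t <= 1 */
        double complex num = 0.0;
        int N = 100000;
        for (int i = 0; i < N; i++)
            num += cexp(-I * w * ((i + 0.5) / N)) / N;

        /* Closed form: e^{-jw/2} sinc(w/(2 pi)) = e^{-jw/2} sin(w/2)/(w/2) */
        double complex formula = cexp(-I * w / 2.0) * sin(w / 2.0) / (w / 2.0);

        printf("w=%4.1f   numeric=%9.6f%+9.6fj   formula=%9.6f%+9.6fj\n",
               w, creal(num), cimag(num), creal(formula), cimag(formula));
    }
    return 0;
}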



2.4. H(jω) = e^{−3jω/2} sinc(ω/(2π)) for any ω.


Same gain-response, but different phase-response.

2.5. System would be stable and causal.


Calculate Fourier transform of h(t).

2.6.
∞ ∞ ∞
| H ( jω ) | = | ∫ h(t )e − jωt dt | ≤ ∫ | h(t )e − jωt | dt = ∫ | h(t ) | dt = finite
−∞ −∞ −∞

2.7. By applying the inverse Fourier transform and performing a calculation similar to
that in 2.3, we get h(t) = (1/π) sinc(t/π). This is a 'sinc' function of time that remains
non-zero for all t from -∞ to +∞ and hence is the impulse-response of a non-causal filter.

Section 3
3.1. We can find two simple signals for which the rule for linearity does not apply.
Let x1[n]=1 for all n and x2[n]=-1 for all n.
Response to {x1[n]} is {y1[n]} with y1[n]=1 for all n.
Response to {x2[n]} is {y2[n]} with y2[n] =1 for all n.
As x1[n]+x2[n] = 0 for all n, the response to {x1[n]+x2[n]} will be {0}, i.e. zero for all n.
But this is not {y1[n]+y2[n]} which would be 2 for all n.

3.2. { …,0,… 1, 1, 1, 1, -4, 0, …, 0, … }

3.3. Signal flow graph (i) is non-recursive like fig 3.3 whereas (ii) is recursive like fig 3.4.

3.4. (i) { …,0,…, 1, 1, 0, …, 0, …} stable & causal. It is a finite impulse response.


(ii) { …,0,…, 1, -1, 1, -1, 1, -1, …, 1, -1, …} causal but unstable. It is an infinite impulse response.
Note that not all filters with infinite impulse responses are unstable, but this one is.

3.5. { …, 0, …, 0, 1, 0, 0, …, 0, …}

3.6. 800 Hz.

3.7. H(ejΩ) = 1 – e-jΩ = e-jΩ/2 ( ejΩ/2 - e-jΩ/2) = 2 j sin (Ω/2) e-jΩ/2 = 2 sin (Ω/2) e-j(Ω/2 – π/2)
G(Ω) = |2sin (Ω/2)|
When Ω>0 then φ(Ω) = -(Ω/2 – π/2); when Ω<0 then φ(Ω) = π -(Ω/2 – π/2) = -(Ω/2 + π/2).
When Ω=0 phase is arbitrary as G(Ω)=0, call it zero.
This is not linear phase.
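
A small numerical check of 3.7: the C fragment below evaluates H(e^{jΩ}) = 1 − e^{−jΩ} directly and confirms the gain 2|sin(Ω/2)| and, for 0 < Ω < π, the phase π/2 − Ω/2.

#include <stdio.h>
#include <math.h>
#include <complex.h>

int main(void)
{
    const double PI = acos(-1.0);
    for (double W = 0.5; W < 3.0; W += 0.5) {                 /* Omega in radians/sample */
        double complex H = 1.0 - cexp(-I * W);
        printf("Omega=%4.2f  |H|=%7.4f  2|sin(W/2)|=%7.4f  arg(H)=%8.4f  pi/2 - W/2=%8.4f\n",
               W, cabs(H), 2.0 * fabs(sin(W / 2.0)), carg(H), PI / 2.0 - W / 2.0);
    }
    return 0;
}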

3.8. { …, 0, …, 0, 4, -8, 16, -32, …} Unstable.

3.9. We know that {ejΩn } produces {H(ejΩ) ejΩn } and therefore {e-jΩn } produces {H(e-jΩ) e-jΩn }
We also know that H(e-jΩ) is the complex conjugate of H(ejΩ).

So that since H(ejΩ) = G(Ω)ejφ(Ω) it follows that H(e-jΩ) = G(Ω)e-jφ(Ω) .


Now {cos(Ωn)} = { 0.5 ( ejΩn + e-jΩn )} = 0.5{ ejΩn } + 0.5 {e-jΩn }.
Therefore the response to {cos(Ωn)} is 0.5{ G(Ω)ejφ(Ω)ejΩn } + 0.5 {G(Ω)e-jφ(Ω)e-jΩn }
= 0.5 G(Ω){e j(φ(Ω)+Ωn) + e -j(φ(Ω)+Ωn) } = {G(Ω) cos(Ωn + φ(Ω))}

3.10. H(ejΩ) = 1 + 2e-jΩ + 3 e-2jΩ + 2 e-3jΩ +e-4jΩ = e-2jΩ ( e2jΩ + 2ejΩ + 3 + 2e-jΩ + e-2jΩ )
= e-2jΩ ( 3 + 4cos (Ω ) + 2 cos (2Ω) )
It may be shown by various means that 3 + 4cos (Ω ) + 2 cos (2Ω) ≥ 0 for all Ω.

(For example, show that 3 + 4cos(Ω) + 2cos(2Ω) = 1 + 4cos(Ω) + 4cos²(Ω) = (1 + 2cos(Ω))².)
Therefore the phase response φ(Ω) is –2Ω for all Ω.
This is linear phase with a phase delay of 2 samples.

Note that the impulse response { …, 0, …, 0, 1, 2, 3, 2, 1, 0, …, 0, … } is symmetric about n=2.


A symmetric impulse response gives a linear phase response.
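
To see the linear phase numerically, the following short C check evaluates H(e^{jΩ}) = Σ h[n]e^{−jΩn} for h = {1, 2, 3, 2, 1} and prints arg(H) alongside −2Ω (they agree while 1 + 2cos(Ω) remains positive, i.e. for Ω < 2π/3):

#include <stdio.h>
#include <math.h>
#include <complex.h>

int main(void)
{
    double h[5] = {1.0, 2.0, 3.0, 2.0, 1.0};         /* impulse response, symmetric about n = 2 */
    for (double W = 0.2; W < 1.55; W += 0.3) {
        double complex H = 0.0;
        for (int n = 0; n < 5; n++)
            H += h[n] * cexp(-I * W * n);            /* H(e^{jW}) = sum of h[n] e^{-jWn} */
        printf("Omega=%4.2f   arg(H)=%8.4f   -2*Omega=%8.4f\n", W, carg(H), -2.0 * W);
    }
    return 0;
}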

Section 4

4.1. Cut-off frequency = π/2 radians/sample.

G(Ω) = 1 for −π/2 ≤ Ω ≤ π/2;  G(Ω) = 0 for π/2 < |Ω| < π.

Taking φ(Ω) to be 0, H(e^{jΩ}) = G(Ω) and the ideal impulse response is, by the inverse DTFT:

h[n] = (1/(2π)) ∫_{−π}^{π} H(e^{jΩ}) e^{jΩn} dΩ = (1/(2π)) ∫_{−π/2}^{π/2} e^{jΩn} dΩ = sin(nπ/2)/(nπ)   when n ≠ 0
It may be checked that h[n] = 0.5 when n=0

Therefore {h[n]} is the following infinite sequence:


{ …-1/(7π), 0, 1/(5π), 0, -1/(3π), 0, 1/π, 0.5, 1/π, 0, -1/(3π), 0, 1/(5π), 0, -1/(7π), … }

Rectangularly windowing for –5 ≤ n ≤ 5 gives the following sequence:


{ …, 0, …0, 1/(5π), 0, -1/(3π), 0, 1/π, 0.5, 1/π, 0, -1/(3π), 0, 1/(5π), 0, …, 0 … }

Delaying by 5 samples to make the impulse response causal gives:


{ …, 0, …, 0, 1/(5π), 0, -1/(3π), 0, 1/π, 0.5, 1/π, 0, -1/(3π), 0, 1/(5π),0, …, 0 … }

We can now draw the signal-flow graph of the FIR filter.

H(z) = 1/(5π) − (1/(3π))z^{-2} + (1/π)z^{-4} + 0.5z^{-5} + (1/π)z^{-6} − (1/(3π))z^{-8} + (1/(5π))z^{-10}

y[n] = (1/(5π))x[n] -(1/(3π))x[n-2] + (1/π) x[n-4] + 0.5 x[n-5] + (1/π)x[n-6] - ( 1/(3π) )x[n-8]
+(1/(5π)) x[n-10]
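
The coefficients above can also be generated programmatically; here is a short C sketch, assuming the ideal response h[n] = sin(nπ/2)/(nπ) with h[0] = 0.5 derived earlier, windowed to −5 ≤ n ≤ 5 and delayed by 5 samples:

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = acos(-1.0);
    double a[11];                                   /* causal coefficients a[0]..a[10] */
    for (int m = 0; m <= 10; m++) {
        int n = m - 5;                              /* undelayed index, -5..5 */
        a[m] = (n == 0) ? 0.5 : sin(n * PI / 2.0) / (n * PI);
        printf("a[%2d] = %9.5f\n", m, a[m]);
    }
    return 0;      /* expect 0.06366, 0, -0.10610, 0, 0.31831, 0.5, ... (symmetric) */
}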

4.2.
G(Ω) = 1 for π/4 ≤ |Ω| ≤ π/2;  G(Ω) = 0 for |Ω| < π/4 and for π/2 < |Ω| < π.

h[n] = (1/(2π)) ∫_{−π}^{π} H(e^{jΩ}) e^{jΩn} dΩ = (1/(2π)) ∫_{−π/2}^{−π/4} e^{jΩn} dΩ + (1/(2π)) ∫_{π/4}^{π/2} e^{jΩn} dΩ

= (1/nπ)(sin(nπ/2) – sin(nπ/4) ) when n≠0 and 0.25 when n=0. Hence etc.

4.3.
y[n] = (1/(5π))x[n] -(1/(3π))x[n-2] + (1/π) x[n-4] + 0.5 x[n-5] + (1/π)x[n-6] - ( 1/(3π) )x[n-8]
+(1/(5π)) x[n-10]

= 0.06366x[n] -0.1061x[n-2] + 0.3183 x[n-4] + 0.5 x[n-5] + 0.3183x[n-6] - 0.1061x[n-8]


+0.06366 x[n-10]

= ( 637x[n] − 1061x[n-2] + 3183x[n-4] + 5000x[n-5] + 3183x[n-6] − 1061x[n-8] + 637x[n-10] ) / 10000

The following is the “bare bones” of a C program to read 1000 16-bit integer samples of a signal from a
binary file, pass these through the 10th order filter above implemented using integer arithmetic only, and
store the output samples generated in a binary output file.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    FILE *fpin, *fpout;
    long n, y;
    short m, ix, iy, x[11], a[11];                  /* 16-bit integers */

    fpin  = fopen("c:\\..\\infilename.dat", "rb");
    fpout = fopen("c:\\..\\outfilename.dat", "wb");
    a[0]=637;  a[1]=0; a[2]=-1061; a[3]=0; a[4]=3183; a[5]=5000;
    a[6]=3183; a[7]=0; a[8]=-1061; a[9]=0; a[10]=637;
    for (m=1; m<11; m++) x[m] = 0;                  /* clear the delay line */
    for (n=0; n<1000; n++)
    {   fread(&ix, sizeof(short), 1, fpin);  x[0] = ix;
        y = (long)x[0] * a[0];
        for (m=10; m>0; m--) { y = y + (long)x[m]*a[m]; x[m] = x[m-1]; }
        iy = (short)(y / 10000);                    /* scale back by the factor 10000 */
        fwrite(&iy, sizeof(short), 1, fpout);
    }
    fclose(fpin); fclose(fpout);
    return 0;
}

4.4.
G(Ω) = 1 for π/3 < |Ω| < π;  G(Ω) = 0 for |Ω| ≤ π/3.

h[n] = (1/(2π)) ∫_{−π}^{π} H(e^{jΩ}) e^{jΩn} dΩ = (1/(2π)) ∫_{−π}^{−π/3} e^{jΩn} dΩ + (1/(2π)) ∫_{π/3}^{π} e^{jΩn} dΩ

= (1/(nπ))( sin(nπ) − sin(nπ/3) ) = −sin(nπ/3)/(nπ) when n ≠ 0, and 2/3 when n = 0. Hence etc.

4.5. no
4.6. Straightforward calculation.
4.7. If filter is linear phase with phase-delay N, then −φ(Ω)/Ω = N sampling intervals. Therefore:
EE3271(CS3291) DSP Solutions 6 BMGC

h[n] = (1/π) ∫_0^π |H(e^{jΩ})| cos(Ω(n − N)) dΩ

Now

h[N + n] = (1/π) ∫_0^π |H(e^{jΩ})| cos(Ωn) dΩ

h[N − n] = (1/π) ∫_0^π |H(e^{jΩ})| cos(Ω(−n)) dΩ = (1/π) ∫_0^π |H(e^{jΩ})| cos(Ωn) dΩ = h[N + n]

since cos(-θ) = cos(θ) for any θ.

[Sketch: impulse response h[n] symmetric about n = N]

So h[n] must be symmetric about n=N as sketched above. If {h[n]} is an infinite impulse-response, we will
clearly have a problem with causality since if it goes forward for all time, it must also go backward for all
time and give us non-zero values of h[n] for n<0. We cannot have an IIR digital filter which is exactly
linear phase.
But we can have an FIR digital filter which is exactly linear phase. Can you see why?

Note: By a similar argument, a linear phase analogue filter must have an impulse-response which is
symmetric about some point in time, t=D say. This means that for all t, h(D+t) = h(D-t), and if h(t) remains
non-zero as t → ∞, h(t) must also remain non-zero as t tends to −∞. But analogue filters have infinite
impulse-responses which means that an exactly linear phase analogue filter must be non-causal and
therefore unrealisable.
The argument is the same for discrete time filters where h[n] is symmetric about n=N where N is an
integer; i.e. h[N-n]=h[N+n] for all n.
It is also easily shown to be true where 2N is an integer and h[N+0.5+n]=h[N-0.5-n] for all n.
The argument for discrete time filters is a little more complicated where the symmetry is about n=M and
2M is not an integer. Fortunately we rarely encounter this case.

Examples of symmetric impulse-responses corresponding to linear phase FIR digital filters are:

{…,0, …, 0, 1, -2, 3, 7, 3, -2, 1, 0, …, 0, …} N = 3 ; x[2] = x[4], etc.

{ …, 0, …0, 1, -2, 5, 5, -2, 1, 0, …, 0, …} N=2.5: x[2]=x[3], x[1]=x[4], x[0]=x[5]

Section 5
5.1. (a) H(z) = 2 - 3z-1 + 6 z-4
(b) H(z) = z-1 / (1 + z-1 + 0.5 z-2)

5.2. Corresponding difference equation is y[n] = x[n-1]



5.3. (i) When the input is x[n] = z^n the output is y[n] = H(z)z^n.
Substituting: H(z)z^n = z^n − 0.9H(z)z^{n-1}. Therefore H(z)[ z^n + 0.9z^{n-1} ] = z^n, and
H(z) = 1 / (1 + 0.9z^{-1})

(ii) {h[n]} = { …, 0, …, 0, 1, −0.9, (−0.9)², (−0.9)³, (−0.9)⁴, … }


H(z) = 1 + (−0.9)z^{-1} + (−0.9)²z^{-2} + (−0.9)³z^{-3} + …
= 1 + (−0.9z^{-1}) + (−0.9z^{-1})² + (−0.9z^{-1})³ + …
= 1 / (1 + 0.9z^{-1}) assuming |0.9z^{-1}| < 1, i.e. |z| > 0.9.
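
A quick check of (i) and (ii): the C fragment below runs the difference equation y[n] = x[n] − 0.9y[n−1] with a unit impulse input and compares the output with (−0.9)^n.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double yprev = 0.0;
    for (int n = 0; n < 6; n++) {
        double x = (n == 0) ? 1.0 : 0.0;            /* unit impulse */
        double y = x - 0.9 * yprev;                 /* y[n] = x[n] - 0.9 y[n-1] */
        printf("h[%d] = %8.5f    (-0.9)^%d = %8.5f\n", n, y, n, pow(-0.9, (double)n));
        yprev = y;
    }
    return 0;
}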

5.4. y[n] = x[n] + 3x[n-1] + 2x[n-2] – 0.9 y[n-1]


Zeros: z = -2 & z = -1; poles at z = 0 & z = - 0.9.

5.5. Difference equation is: y[n] = x[n] + 2 y[n-1].


{h[n]} = { …, 0, …, 0, 1, 2, 4, 8, 16, 32, …} Unstable! (NB not all IIR filters are unstable)

5.6.
H(z) = (1 − 0.9z^{-1} + 0.81z^{-2}) / (1 − 0.95z^{-1} + 0.9025z^{-2}) = (z² − 0.9z + 0.81) / (z² − 0.95z + 0.9025)
     = (z − 0.9e^{jπ/3})(z − 0.9e^{-jπ/3}) / [ (z − 0.95e^{jπ/3})(z − 0.95e^{-jπ/3}) ]

The gain response will have a peak of amplitude 2 (6dB) at Ω = π/3. The gain at frequencies not
close to π/3 will be approximately one. To find out how sharp the peak is we can do various easy
things.
You may choose to estimate the gain at π/3 ± 0.05. A bit of geometry (right-angled triangles as usual)
tells us that the gain at π/3 ± 0.05 is √(0.1² + 0.05²) / √(0.05² + 0.05²) = √5/√2 ≈ 1.58, i.e. about 4 dB. The distance
to the pole has increased by a factor √2 (decreasing the gain by 3 dB) but the distance to the zero has
increased from 0.1 to 0.112, i.e. a factor 1.12 (corresponding to an increase in gain of about 1 dB).
Overall the gain decreases by 2 dB from its value of 6 dB at π/3. This information will allow a
reasonable sketch (showing 4 dB points rather than 3 dB points), but if you insist on finding the 3 dB
points, you can do it quite easily by finding the increase θ in relative frequency such that
(distance to zero) / (distance to pole)
reduces from 2 to approximately √2. As usual we neglect changes to the distances to the complex
conjugates of this pole and zero as they are far away.

Distance to zero ≈ √(θ² + 0.1²) and distance to pole ≈ √(θ² + 0.05²).

Solving √(θ² + 0.1²) / √(θ² + 0.05²) = √2 gives θ = ±0.071 radians/sample.


Hence 3dB points are at π/3±0.071.

Follow-up exercise: repeat this problem with the poles at 0.99exp(±jπ/3) and the zeros unchanged.
The peak at π/3 now becomes much higher, i.e. 20 dB, and you will find that the points where the
gain drops by 3 dB from 20 dB to 17 dB occur very close to π/3 ± 0.01. But where does the gain
become approximately 3 dB? Solving √(θ² + 0.1²) / √(θ² + 0.01²) = √2 gives θ = ±0.1
radians/sample; i.e. the gain drops to 3 dB at Ω = π/3 ± 0.1. A gain response sketch for this follow-
up exercise can therefore be drawn with minimal calculation.
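
The geometric estimates above are easy to confirm numerically. The following short C check evaluates the gain of the original H(z) in dB at Ω = π/3, at the estimated 3 dB points π/3 ± 0.071, and at a frequency well away from the peak:

#include <stdio.h>
#include <math.h>
#include <complex.h>

static double gain_db(double W)                     /* gain of H(z) in dB at frequency W (rad/sample) */
{
    double complex zi = cexp(-I * W);               /* z^{-1} evaluated on the unit circle */
    double complex H = (1.0 - 0.9 * zi + 0.81 * zi * zi) /
                       (1.0 - 0.95 * zi + 0.9025 * zi * zi);
    return 20.0 * log10(cabs(H));
}

int main(void)
{
    const double PI = acos(-1.0);
    double freqs[4] = {PI / 3.0, PI / 3.0 - 0.071, PI / 3.0 + 0.071, PI / 6.0};
    for (int k = 0; k < 4; k++)
        printf("Omega = %6.4f   gain = %6.2f dB\n", freqs[k], gain_db(freqs[k]));
    return 0;           /* about 6 dB at pi/3, roughly 3 dB less at pi/3 +/- 0.071 */
}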

5.7.
H(e^{jΩ}) = (r − e^{-jΩ}) / (1 − re^{-jΩ}) = ( r − (cos(Ω) − j sin(Ω)) ) / ( 1 − r(cos(Ω) − j sin(Ω)) ) = ( (r − cos(Ω)) + j sin(Ω) ) / ( (1 − r cos(Ω)) + j r sin(Ω) )

|H(e^{jΩ})|² = ( (r − cos(Ω))² + sin²(Ω) ) / ( (1 − r cos(Ω))² + r² sin²(Ω) ) = ( r² − 2r cos(Ω) + 1 ) / ( 1 − 2r cos(Ω) + r²(cos²(Ω) + sin²(Ω)) ) = 1

5.8. See Example 5.4.

5.9. The coefficients a0, a1, a2, b1, & b2 are clearly not integers, and if we just round each to the nearest
integer or take its integer part, the result will be rather silly. So we must choose a scaling factor K
which is a large integer, say 100 or 1000 or 1024 or maybe 100000. We multiply each coefficient by
K and then round to the nearest integer. The effect of rounding is now less drastic, and we can
compensate later for the scaling up of the coefficients by dividing by K. Clearly the larger K, the less
serious will be the effect of rounding. However the integers produced must not be too large as
overflow may occur. In a 16-bit microprocessor like the TMS32010, stored integers representing
filter coefficients are limited to the range –32768 to 32767. Similarly signal values from the ADC lie
between –32768 and 32767. The processor can multiply together two 16-bit words (e.g. a signal
sample and an integerised filter coefficient) to produce a 32-bit result, and can add 32-bit numbers
together. But any resulting 32-bit number must be scaled back to 16-bits before it can be stored as a
signal and subjected to further multiplication processes, or output to a DAC. The scaling back to 16
bits is achieved by dividing by K. It is also much easier to divide by a constant K which is a power
of two, such as 1024 than to divide by say 100 or 1000. Since second order IIR section filter
coefficients normally lie between –2 and +2, choosing K=1024 is fairly safe, though not necessarily
optimal. Having chosen K, we then calculate the integerised coefficients as follows:
IB1 = int (K*b1); IB2 = int (K*b2); IA0 = int(K*a0); IA1 = int(K*a1); IA2 =int(K*a2);

Now we can write the program:-


IW1:=0; IW2:=0; (these are 16-bit integers)
L: Input IX; (16-bit integer from ADC)
P := IX*K - IB1*IW1 – IB2*IW2; (32-bit result in P)
IW := P / K; (integer divide by shifting to produce the 16-bit IW)
P := IW*IA0 + IW1*IA1 + IW2*IA2; (32-bit result in P)
IY := P / K; (integer divide by shifting to produce the 16-bit IY)
Output IY; (send 16-bit result to DAC)
IW2 := IW1;
IW1 := IW;
Goto L (Go back for next sample)
These program steps are easily understood and converted to a different language such as C or
assembly language. Apologies for the “goto” statement.
The program given above may be understood in a more professional way by defining IB1, IB2, etc. to be
“Q-12” fixed-point numbers; i.e. a decimal point (strictly a “binary” point) would be assumed to exist after
the most significant 4 bits. If the programmer chooses to define IX as a Q-12 number also, P becomes a
Q-24 number which is scaled back to a Q-12 number by the statement IY=P/K. The programmer must
remember the Q-formats assumed for each word and keep track of what happens at each stage of the
calculation. Thinking about Q-factors rather than scaling by K constants is ultimately more elegant and
flexible, but students tend to prefer K constants to begin with. You can generate the same code by either
mode of thinking. Clearly good documentation is going to be very important as calculations get
complicated. Fixed point programming is important for mobile communications since it simplifies the
hardware and computational complexity (though not the programming effort) and this leads to power
savings and longer battery life.
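
For readers who want it in C, here is one possible rendering of the pseudocode above as a sketch; K = 1024, and the integerised coefficients IA0…IB2 are placeholder values for illustration only (a real design would supply its own):

#include <stdio.h>

#define K 1024          /* scaling factor: a power of two, so dividing by K is a cheap shift */

/* Placeholder integerised coefficients for illustration only:
   IAx = round(K*ax), IBx = round(K*bx) would come from a real second-order design. */
static const long IA0 = 1024, IA1 = -512, IA2 = 256, IB1 = -900, IB2 = 410;

int main(void)
{
    short iw1 = 0, iw2 = 0;                                   /* 16-bit state words */
    short test_input[8] = {1000, 0, 0, 0, 0, 0, 0, 0};        /* stand-in for samples from the ADC */

    for (int n = 0; n < 8; n++) {
        short ix = test_input[n];
        long p = (long)ix * K - IB1 * iw1 - IB2 * iw2;        /* 32-bit accumulator */
        short iw = (short)(p / K);                            /* scale back to 16 bits */
        p = IA0 * iw + IA1 * iw1 + IA2 * iw2;
        short iy = (short)(p / K);                            /* would be sent to the DAC */
        iw2 = iw1;
        iw1 = iw;
        printf("n=%d  iy=%d\n", n, iy);
    }
    return 0;
}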

5.10. If the order is as in the question, the impulse response of the combination is {h1[n]}⊗{h2[n]}.
If the order is reversed, the impulse response becomes: {h2[n]}⊗{h1[n]}. This is equal to
{h1[n]}⊗{h2[n]} as we know.
EE3271(CS3291) DSP Solutions 9 BMGC

5.11. Notch frequency is π/2. Place zero on unit circle at z = exp(jπ/2) and its complex conjugate at z =
exp(-jπ/2). Place poles at z = (1-α) exp(jπ/2) and z = (1-α) exp(-jπ/2). If α is small, the 3dB
bandwidth is 2α. Therefore 2α = 3.2*2π/200 and α = 0.05.

H(z) = ( (z − e^{jπ/2})(z − e^{-jπ/2}) ) / ( (z − 0.95e^{jπ/2})(z − 0.95e^{-jπ/2}) ) = (z² + 1) / (z² + 0.95²) = (1 + z^{-2}) / (1 + 0.9025z^{-2})

Hence etc.
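
A short numerical check of the notch design: the C fragment below evaluates |H(e^{jΩ})| for the transfer function just derived at a few frequencies; the gain should be close to 1 away from π/2 and exactly 0 at π/2.

#include <stdio.h>
#include <math.h>
#include <complex.h>

int main(void)
{
    const double PI = acos(-1.0);
    double freqs[5] = {0.0, PI / 4.0, PI / 2.0, 3.0 * PI / 4.0, PI};
    for (int k = 0; k < 5; k++) {
        double complex z2 = cexp(-2.0 * I * freqs[k]);        /* z^{-2} on the unit circle */
        double complex H  = (1.0 + z2) / (1.0 + 0.9025 * z2);
        printf("Omega = %6.4f   |H| = %7.4f\n", freqs[k], cabs(H));
    }
    return 0;                                                 /* about 1.05 away from pi/2, 0 at pi/2 */
}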

Section 6
6.1. Refer to general formula.

6.2. Multiply out the denominator in 6.1, then to scale the cut-off frequency from 1 radian/second to ωC,
replace s by s/ωC. The corresponding differential equation is:
x(t) = y(t) + (2/ωC) dy(t)/dt + (2/ωC²) d²y(t)/dt² + (1/ωC³) d³y(t)/dt³
ωC = 1000π. Sampling interval T = 0.0001 seconds. TωC = π/10.

H(z) = 1 / [ 1 + 20(1 − z^{-1})/π + 200(1 − 2z^{-1} + z^{-2})/π² + 1000(1 − 3z^{-1} + 3z^{-2} − z^{-3})/π³ ]

     = 1 / [ 1 + 6.366(1 − z^{-1}) + 20.264(1 − 2z^{-1} + z^{-2}) + 32.2515(1 − 3z^{-1} + 3z^{-2} − z^{-3}) ]

     = 1 / [ 59.8815 − 143.649z^{-1} + 117.019z^{-2} − 32.2515z^{-3} ]

I have not checked this yet.
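
Since the result above is flagged as unchecked, here is a short C fragment that recomputes the coefficients 20/π, 200/π² and 1000/π³ and collects powers of z^{-1} in the denominator; the printed values should match those given:

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = acos(-1.0);
    double c1 = 20.0 / PI;                 /* multiplies (1 - z^-1)   */
    double c2 = 200.0 / (PI * PI);         /* multiplies (1 - z^-1)^2 */
    double c3 = 1000.0 / (PI * PI * PI);   /* multiplies (1 - z^-1)^3 */

    /* Collect powers of z^-1 in 1 + c1(1-z^-1) + c2(1-2z^-1+z^-2) + c3(1-3z^-1+3z^-2-z^-3) */
    double d0 = 1.0 + c1 + c2 + c3;
    double d1 = -c1 - 2.0 * c2 - 3.0 * c3;
    double d2 = c2 + 3.0 * c3;
    double d3 = -c3;

    printf("c1=%.4f  c2=%.4f  c3=%.4f\n", c1, c2, c3);
    printf("denominator: %.4f  %+.4f z^-1  %+.4f z^-2  %+.4f z^-3\n", d0, d1, d2, d3);
    return 0;    /* expect roughly 59.88, -143.65, +117.02, -32.25 */
}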

6.3.
ΩC = π/2 radians/sample.
Pre-warped analogue frequency: 2 tan(ΩC/2) = 2 radians/second.

Required analogue prototype transfer function is :

H(s) = 1 / [ (1 + s/2)(1 + s/2 + s²/4) ]

Replacing s by 2(z−1)/(z+1) and rearranging, we obtain:

H(z) = (1/6)(1 + z^{-1}) · (1 + 2z^{-1} + z^{-2}) / (1 + (1/3)z^{-2})

6.5. Cut-off frequency is ΩC = π/2.



Must be –20 dB or less at Ω = 3π/4.


Analogue prototype transfer function must have cut-off ωC = 2tan(π/4) = 2 radians/second.
Gain of analogue prototype must be –20dB or less at ω = 2tan(3π/8) = 4.828 radians/second.
Gain of an nth order Butterworth low-pass analogue filter with cut-off frequency ωC = 2 radians/second
is:

G(ω) = 1 / √(1 + (ω/2)^{2n})

So we need to find the smallest integer value of n such that 20 log10(G(4.828)) < -20 dB.

This means that we must have log10(1 + (2.414)^{2n}) > 2

i.e. 1 + (5.827)^n > 10² = 100
5.827^n > 99. The smallest possible integer value of n is three.

We can now design the filter by applying the bilinear transformation to 3rd order H(s) with cut-off
ωC = 2.
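
A quick numerical confirmation of the order calculation: this added C fragment evaluates the Butterworth gain at the pre-warped stopband frequency 4.828 rad/s (with ωC = 2) for increasing n.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double ratio = 4.828 / 2.0;                       /* omega / omega_c = 2.414 */
    for (int n = 1; n <= 5; n++) {
        double gain_db = 20.0 * log10(1.0 / sqrt(1.0 + pow(ratio, 2 * n)));
        printf("n=%d   gain at 4.828 rad/s = %6.2f dB\n", n, gain_db);
    }
    return 0;                                         /* the gain first drops below -20 dB at n = 3 */
}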

Section 7
7.1. See recommended textbook.

7.2. Yes it is possible. We can "over-sample" to simplify the analogue filters required to avoid aliasing.
Then we can digitally filter the resulting over-sampled signal so that we can then reduce the sampling rate.
Lowering the sampling rate allows more efficient processing, storage and transmission. The digital low-
pass filter operating on the over-sampled signal can be much sharper and more reliable (no variations with
temperature, manufacturing tolerance or aging, for example) than an equivalent analogue filter.

7.3. "Images" (see lecture notes) are increased in frequency. They are further away in frequency from the
signal itself, and are hence easier to filter out without significantly affecting the signal.

7.4. Quantisation-noise power in the range −10 kHz to 10 kHz: Δ²/12.

Max sine-wave amplitude: 2^8 Δ/2 = 2^7 Δ.  Max power: (2^7 Δ)²/2 = 2^13 Δ².
Max SQNR: 2^13 Δ² / (Δ²/12) = 12 × 2^13 in the range −10 kHz to 10 kHz.

Signal range is −4 kHz to 4 kHz, so we can filter off the quantisation noise above 4 kHz.

SQNR in the range −4 kHz to 4 kHz: 12 × 2^13 / 0.4, i.e. 53.9 dB.

Replacing the 8-bit ADC by a 10-bit ADC adds 12 dB. Decreasing the sampling rate to 10 kHz means that we
can no longer divide by 0.4. The answer is then 61 dB. We need a more expensive ADC and a sharper analogue
anti-aliasing filter.
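
The arithmetic above can be reproduced in a few lines of C; this sketch assumes a full-scale sine wave and quantisation noise of power Δ²/12 spread uniformly over ±fs/2 (Δ cancels in the ratios, so it is set to 1):

#include <stdio.h>
#include <math.h>

static double to_db(double ratio) { return 10.0 * log10(ratio); }

int main(void)
{
    /* 8-bit ADC: full-scale sine amplitude 2^7 * Delta, noise power Delta^2 / 12 */
    double sig_power   = pow(2.0, 7) * pow(2.0, 7) / 2.0;    /* = 2^13 */
    double noise_power = 1.0 / 12.0;

    double sqnr_full = sig_power / noise_power;              /* noise spread over -10..10 kHz */
    double sqnr_band = sqnr_full / 0.4;                      /* after filtering noise to -4..4 kHz */

    printf("SQNR over the full band : %5.1f dB\n", to_db(sqnr_full));
    printf("SQNR over 0 to 4 kHz    : %5.1f dB\n", to_db(sqnr_band));
    printf("Going to a 10-bit ADC adds %.1f dB\n", to_db(pow(2.0, 4)));
    return 0;
}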

7.5. By the argument in the notes (p. 2.6), |H(jω)| = |sinc(ωT/(8π))| × 0.25.


When ω = π/T (= half the sampling frequency), |H(jω)| = |sinc(1/8)| × 0.25 = 0.24.

When ω = 0, |H(jω)| = 0.25.


Therefore the gain only falls from 0.25 to 0.24, i.e. from −12 dB to −12.4 dB.
We only drop by 0.4 dB as ω goes from zero to half the sampling frequency.

Disadvantage: reduced signal-to-noise ratio. This can be corrected by increasing the pulse height, but this
means that we have very high voltages.

Bookwork. See notes.

7.6. See notes for explanation for zero order hold of "sample & hold" reconstruction.
H(jω) = e-jωT/2 sinc(ωT/(2π))

7.7. In notes

7.8. Ideally we need a digital filter with |H(e^{jΩ})| = 1 / sinc(Ω/(2π))

Can we design an FIR filter by windowing method ? The inverse DTFT gives us a formula that is
probably too hard to integrate. Various alternative approaches exist such as sampling 1/sinc(Ω/2) in
the frequency domain and using the DFT or FFT in place of the inverse DTFT to perform the inverse
transform.

Alternatively we can approximate |H(e^{jΩ})| = 1 / sinc(Ω/(2π)) by a simpler function such as the linear
function:
|H(e^{jΩ})| = 1 + Ω/(2π)

7.9. Must have 20 log10( 1 / √(1 + (ωa/ωc)²) ) ≤ −37 dB. Therefore ωa ≥ 889.5 × 10³ radians/second.
If aliasing is going to affect our 0-2kHz useful signal, its frequency must be greater than 141.6 kHz,
otherwise it would not be attenuated sufficiently by the filter.

If the sampling rate is fs any noise above fs/2 will be aliased, but it will not affect the signal unless the
result lies between 0 and 2kHz. The lowest frequency that will cause problems is fs - 2 kHz.

Therefore we must ensure that fs - 2 kHz ≥ 141.6 kHz.


Therefore, minimum sampling freq is 143.6kHz for original question.

Section 8
For problems on Section 8 and their solutions, refer to past examination papers.
