
E4702 HW#4-5 solutions

by Anmo Kim ([email protected])

1. (P3.17) Midtread type uniform quantizer (figure 3.10(a) in Haykin)


A Gaussian-distributed random variable with zero mean and unit variance is applied to the quantizer input.

(a) the probability that the amplitude of the input lies outside the range -4 to 4

(Solution.)
This problem can be described more clearly with the complementary error function (see eqn. (4.29) in Haykin):

$$\mathrm{erfc}(u) = \frac{2}{\sqrt{\pi}} \int_u^{\infty} e^{-z^2}\, dz$$

For the quantizer input $m$ with the standard normal distribution,

$$P[|m| > 4] = 2 \int_4^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\, dt = \mathrm{erfc}(2\sqrt{2}) = 6.334 \times 10^{-5}$$
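As a quick numerical sanity check, both the closed form and the direct tail integral can be evaluated in Matlab (erfc and integral are built-in functions; this is only a verification sketch, not part of the required solution):

>> erfc(2*sqrt(2))                                   % closed form, ~6.334e-05
>> 2*integral(@(t) exp(-t.^2/2)/sqrt(2*pi), 4, Inf)  % direct tail integral, same value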
(b) output signal-to-noise ratio of the quantizer (input confined to the range [-4, 4])

(Solution.)
The average power of the message signal is

$$P = \int_{-4}^{4} x^2\, \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\, dx \approx \int_{-\infty}^{\infty} x^2\, \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\, dx = 1$$
We can evaluate the truncated integral with Matlab (the commands below require the Symbolic Math Toolbox):
>> syms x
>> eval(int(x^2*exp(-x^2/2)/sqrt(2*pi),-4,4))
This returns 0.9989, so we will use the approximate average power P = 1.
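If the Symbolic Math Toolbox is unavailable, base Matlab's numerical integrator gives the same value (a minimal alternative sketch):

>> integral(@(x) x.^2.*exp(-x.^2/2)/sqrt(2*pi), -4, 4)   % returns ~0.9989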
Putting this and $m_{\max} = 4$ into equation (3.32) in Haykin,

$$\sigma_Q^2 = \frac{1}{3}\, 4^2\, 2^{-2R} = \frac{16}{3}\, 2^{-2R}$$

The signal-to-noise ratio is

$$(\mathrm{SNR})_O = \frac{1}{\frac{16}{3}\, 2^{-2R}} = \frac{3 \cdot 2^{2R}}{16}$$

In dB scale,

$$(\mathrm{SNR})_{O,\mathrm{dB}} = 10 \log_{10}(3/16) + 2R \cdot 10 \log_{10} 2 = -7.27 + 6.02R$$
We can compare this result with that of a sinusoidal modulating signal, given in Example 3.1 of Haykin: its $(\mathrm{SNR})_{O,\mathrm{dB}}$ is $1.8 + 6R$, about 9 dB greater than for the Gaussian input.
The reason can be read directly from the average powers: $P_{\text{sinusoidal}} = A_m^2/2 = 8$ with $A_m = 4$, while $P_{\text{Gaussian}} = 1$. That is, far more samples of the Gaussian input arrive at low amplitude than for the sinusoid, as the bell-shaped Gaussian density suggests.
Consequently, for a Gaussian input, a nonuniform quantizer that divides the low-amplitude region on a finer scale would outperform the uniform quantizer; a numerical comparison of the two SNR formulas follows below.
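A short Matlab sketch evaluating both dB formulas for a few word lengths R (using the loading factor of 4 assumed above) makes the roughly 9 dB gap explicit:

R = 1:8;
snr_gauss = -7.27 + 6.02*R;   % Gaussian input, m_max = 4 (derived above)
snr_sine  = 1.8 + 6*R;        % sinusoidal input, Example 3.1 in Haykin
disp([R.' snr_gauss.' snr_sine.' (snr_sine - snr_gauss).'])   % last column: ~9 dB gap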

2. (P3.19) mean-square quantization error for a nonuniform quantizer
- ∆i : ith step size of the given nonuniform quantizer
- pi : probability that the input signal amplitude lies within the ith interval
- ∆i is small compared with the excursion of the signal

(Solution.)
The expected mean-square quantization error can be represented in terms of the quantization error in each interval:

$$\sigma_Q^2 = \sum_{i=1}^{L} \sigma_{Q,i}^2\, p_i$$
where L is the number of quantization levels.
In the ith interval, the quantization error for the input signal, $q_i$, and the mean-square quantization error, $\sigma_{Q,i}^2$, are defined as follows:

$$q_i = m - \frac{m_i + m_{i+1}}{2}$$

$$\sigma_{Q,i}^2 = E_m\!\left[(q_i - E[q_i])^2\right]$$
From the condition that the quantization interval is small compared with the excursion of the signal, we can
assume that the input signal is uniformly distributed in an interval. Then,

$$E[q_i] = 0$$

$$\sigma_{Q,i}^2 = \int_{-\Delta_i/2}^{\Delta_i/2} q_i^2\, \frac{1}{\Delta_i}\, dq_i = \frac{\Delta_i^2}{12}$$
Thus,
$$\sigma_Q^2 = \frac{1}{12} \sum_{i=1}^{L} \Delta_i^2\, p_i$$
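As a numerical check, the following Matlab sketch quantizes a clipped Gaussian input with a hypothetical nonuniform boundary set (finer steps near zero) and compares the measured error power with the formula. The agreement is only approximate here, since the formula becomes exact in the limit of small steps:

edges  = [-3 -2.2 -1.6 -1.1 -0.7 -0.4 -0.2 0 0.2 0.4 0.7 1.1 1.6 2.2 3];
levels = (edges(1:end-1) + edges(2:end))/2;  % midpoint reconstruction levels
x   = max(min(randn(1,1e6), 3), -3);         % Gaussian input clipped to [-3, 3]
idx = discretize(x, edges);                  % interval index of each sample
measured  = mean((x - levels(idx)).^2);      % measured quantization error power
p_i       = histcounts(x, edges)/numel(x);   % interval probabilities p_i
predicted = sum(diff(edges).^2 .* p_i)/12;   % (1/12) * sum of Delta_i^2 * p_i
disp([measured predicted])                   % the two values nearly agree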

3. (P3.11(d)) Power spectral density of bipolar return-to-zero signals

(Solution.)
From the lecture, the power spectral density is given by

$$S_s(f) = \frac{|G(f)|^2}{T_b} \sum_{k=-\infty}^{\infty} R(k)\, e^{j2\pi k f T_b} \qquad (1)$$

where $s_N(t) = \sum_{n=-N}^{N} a_n\, g(t - nT_b)$.
For bipolar RZ, $g(t) = \mathrm{rect}\!\left(\frac{t}{T_b/2}\right)$ and $G(f) = \frac{T_b}{2}\,\mathrm{sinc}(f T_b/2)$.
The probabilistic description of $a_n$ is:

$$a_n = \begin{cases} 0, & P(a_n = 0) = 1/2 \\ A, & P(a_n = A) = 1/4 \\ -A, & P(a_n = -A) = 1/4 \end{cases}$$

The autocorrelation function of $a_n$ is:

$$R(k) = E[a_n a_{n+k}]$$

• R(0):

$$a_n a_n = \begin{cases} 0, & p = 1/2 \quad (a_n = 0) \\ A^2, & p = 1/2 \quad (a_n = A \text{ or } -A) \end{cases}$$

Thus, $R(0) = 0 \cdot \frac{1}{2} + A^2 \cdot \frac{1}{2} = \frac{A^2}{2}$.
• R(1):
Here, notice that

$$P(a_{n+1} = m \mid a_n = 0) = P(a_{n+1} = m), \quad m \in \{0, A, -A\}$$

$$P(a_{n+1} = m \mid a_n = A \text{ or } -A) = \begin{cases} 1/2, & m = 0 \\ 1/2, & m = -a_n \\ 0, & m = a_n \end{cases}$$

Thus,

$$a_n a_{n+1} = \begin{cases} 0, & p = 3/4 \quad (a_n = 0 \text{ or } a_{n+1} = 0) \\ -A^2, & p = 1/4 \quad (\text{otherwise}) \end{cases}$$

$$R(1) = 0 \cdot \frac{3}{4} + (-A^2) \cdot \frac{1}{4} = -\frac{A^2}{4}$$
• R(k), k > 1:
$a_n a_{n+k}$ ($k > 1$) is non-zero, with probability $\frac{1}{4}$, only when both symbols are non-zero, and given that it is non-zero it equals $A^2$ or $-A^2$. Define $b_{n,k} = \sum_{j=n+1}^{n+k-1} a_j$, which is zero exactly when the number of non-zero symbols between $a_{n+1}$ and $a_{n+k-1}$ is even, since successive non-zero symbols alternate in sign in bipolar signaling. Then

$$P(a_n a_{n+k} = A^2) = \frac{1}{4} P(b_{n,k} = 0), \qquad P(a_n a_{n+k} = -A^2) = \frac{1}{4} P(b_{n,k} \neq 0)$$

Each of the $k-1$ intermediate symbols is non-zero with probability $\frac{1}{2}$, independently, so applying equation (2) to be derived in (P3.23) with $p_1 = 1/2$, the count of non-zero intermediate symbols is odd with probability $\frac{1}{2}$. Hence

$$P(b_{n,k} = 0) = P(b_{n,k} \neq 0) = \frac{1}{2}$$

and therefore $P(a_n a_{n+k} = A^2) = P(a_n a_{n+k} = -A^2)$. Thus,

$$R(k) = E[a_n a_{n+k}] = A^2\, P(a_n a_{n+k} = A^2) - A^2\, P(a_n a_{n+k} = -A^2) + 0 \cdot P(a_n a_{n+k} = 0) = 0$$

For negative k, recall that R(k) is an even function, so $R(-1) = R(1) = -A^2/4$ and $R(k) = 0$ for $k < -1$.
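These values can be confirmed with a short Monte Carlo sketch in Matlab (assuming A = 1 and an i.i.d. equiprobable source; the alternating-sign mapping below is the standard bipolar/AMI rule):

bits = rand(1,1e6) < 0.5;            % i.i.d. source bits, P(1) = 1/2
a = zeros(1,numel(bits)); s = 1;     % s: sign of the next non-zero pulse
for n = find(bits)                   % bipolar mapping: 1 -> +A, -A, +A, ...
    a(n) = s; s = -s;
end
for k = 0:3                          % estimate R(k) = E[a_n a_{n+k}]
    fprintf('R(%d) = %+.4f\n', k, mean(a(1:end-k).*a(1+k:end)));
end
% Expected: R(0) ~ 0.5, R(1) ~ -0.25, R(2) and R(3) ~ 0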


By putting G(f) and R(k) into (1),

$$S_s(f) = \frac{\left|\frac{T_b}{2}\,\mathrm{sinc}(f T_b/2)\right|^2}{T_b}\, A^2 \left[\frac{1}{2} - \frac{1}{4}\left(e^{-j2\pi f T_b} + e^{j2\pi f T_b}\right)\right]$$

$$= \frac{A^2 T_b}{4}\,\mathrm{sinc}^2(f T_b/2) \left[\frac{1}{2} - \frac{1}{2}\cos(2\pi f T_b)\right]$$

$$= \frac{A^2 T_b}{4}\,\mathrm{sinc}^2(f T_b/2)\, \frac{1 - \cos(2\pi f T_b)}{2}$$

$$= \frac{A^2 T_b}{4}\,\mathrm{sinc}^2\!\left(\frac{f T_b}{2}\right) \sin^2(\pi f T_b)$$

4. (P3.23) a chain of (n-1) regenerative repeaters with n sequential decisions made on a binary PCM wave

[Figure: a binary PCM wave travels from the source (stage 0) through a chain of regenerative repeaters to the destination (stage n-1); at each stage the decision is correct with probability 1-p_1 and in error with probability p_1.]

p_1: probability of error on each decision

(Solution.)

(a) The error probability after the nth stage can be written recursively: the bit is in error at stage n+1 exactly when it was in error at stage n and the new decision is correct, or it was correct and the new decision errs:

$$p_{n+1} = p_n(1 - p_1) + (1 - p_n)\, p_1 = p_1 + (1 - 2p_1)\, p_n$$


This equation can be expanded as

$$p_{n+1} = p_1 + (1 - 2p_1)\, p_n$$
$$= p_1 + p_1(1 - 2p_1) + (1 - 2p_1)^2\, p_{n-1}$$
$$= p_1 + p_1(1 - 2p_1) + p_1(1 - 2p_1)^2 + (1 - 2p_1)^3\, p_{n-2}$$
$$= p_1 \sum_{k=0}^{n-1} (1 - 2p_1)^k + (1 - 2p_1)^n\, p_1$$
$$= p_1\, \frac{1 - (1 - 2p_1)^n}{1 - (1 - 2p_1)} + (1 - 2p_1)^n\, p_1$$
$$= \frac{1}{2}\left[1 - (1 - 2p_1)^{n+1}\right]$$
Thus,
$$p_n = \frac{1}{2}\left[1 - (1 - 2p_1)^n\right] \qquad (2)$$
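A brief Matlab sketch can verify the closed form (2) against the recursion (the values of p1 and N below are arbitrary choices for the check):

p1 = 0.05; N = 20;
p = zeros(1,N); p(1) = p1;
for n = 1:N-1
    p(n+1) = p1 + (1 - 2*p1)*p(n);     % recursion derived above
end
closed = (1 - (1 - 2*p1).^(1:N))/2;    % closed form (2)
max(abs(p - closed))                   % ~0, at machine precision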
(b)
$$(1 - 2p_1)^n = \sum_{k=0}^{n} \binom{n}{k} (-2p_1)^k$$

If p1 is very small, we can approximate this sum by keeping only the constant and linear terms:
$$(1 - 2p_1)^n \approx 1 - 2np_1$$

$$p_n \approx \frac{1}{2}\left[1 - (1 - 2np_1)\right] = np_1$$
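A one-line numerical check (n and p1 are arbitrary small values chosen for illustration):

>> n = 10; p1 = 1e-4;
>> [0.5*(1 - (1 - 2*p1)^n), n*p1]   % exact vs. approximation: both ~1.0e-3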
5. (P3.31) A one-step linear predictor
- input: $x[n] = A \sin\!\left(2\pi\, \frac{f_0}{10 f_0}\, n\right) = A \sin(2\pi \cdot 0.1\, n)$

[Figure: one-step linear predictor. The input x[n] = A sin(2π·0.1 n) passes through a unit delay z^{-1} and a weight w_1, producing the prediction x̂[n] = w_1 A sin(2π·0.1 (n-1)).]

(a) the optimum value of w1 minimizing the prediction error variance

(Solution.)
The prediction error is defined as

$$e[n] = x[n] - \hat{x}[n]$$

and the index of performance (the prediction error variance) is defined as

$$J = E[e^2[n]]$$

Then,

$$J = E[x^2[n]] + E[\hat{x}^2[n]] - 2E[x[n]\hat{x}[n]]$$
$$= A^2 E[\sin^2(2\pi \cdot 0.1\, n)] + w_1^2 A^2 E[\sin^2(2\pi \cdot 0.1\, (n-1))] - 2A^2 w_1 E[\sin(2\pi \cdot 0.1\, n)\sin(2\pi \cdot 0.1\, (n-1))]$$
$$= \frac{1}{2}A^2 + \frac{1}{2} w_1^2 A^2 + w_1 A^2\, E[\cos(2\pi \cdot 0.1\, (2n-1))] - w_1 A^2 \cos(2\pi \cdot 0.1)$$
$$= A^2 \left[\frac{1}{2} w_1^2 - \cos(2\pi \cdot 0.1)\, w_1 + \frac{1}{2}\right]$$

where the product-to-sum identity $\sin\alpha\sin\beta = \frac{1}{2}[\cos(\alpha-\beta) - \cos(\alpha+\beta)]$ was used, and the time average of the oscillating term $\cos(2\pi \cdot 0.1\,(2n-1))$ is zero.

Taking the derivative of J with respect to w_1,

$$\frac{dJ}{dw_1} = A^2 w_1 - A^2 \cos(2\pi \cdot 0.1)$$
J has its minimum when this derivative is zero. Thus,

$$w_1 = \cos(2\pi \cdot 0.1) = 0.809$$

(b) the minimum value of the prediction error variance

(Solution.)
When $w_1 = \cos(2\pi \cdot 0.1) = 0.809$, the quadratic expression for J above gives

$$J_{\min} = A^2\left[\frac{1}{2} - \frac{1}{2}\cos^2(2\pi \cdot 0.1)\right] = \frac{A^2}{2}\sin^2(2\pi \cdot 0.1) = 0.1727\, A^2$$
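As an empirical check (a sketch; A = 2 is an arbitrary choice), the least-squares one-step weight estimated from data should match cos(2π·0.1), and the residual power should match 0.1727 A²:

A = 2; n = 0:1e5;
x = A*sin(2*pi*0.1*n);                                    % predictor input
w1 = (x(2:end)*x(1:end-1).')/(x(1:end-1)*x(1:end-1).');   % least-squares weight
e  = x(2:end) - w1*x(1:end-1);                            % one-step prediction error
[w1, cos(2*pi*0.1)]                                       % both ~0.809
[mean(e.^2), 0.1727*A^2]                                  % both ~0.691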
