
Digital Communication: Final Exam B

Prof. Ofer Shayevitz, Shahar Stein Ioushua, Valentin Krasny


March 1, 2021

• The duration of the exam is 3 hours

• The exam contains three questions having equal weights

• The section weights are indicated in percentage of the respective question

• Any written material is allowed

• Any other material is forbidden, including (but not limited to): web searches, chats, phones.

• No electronic equipment is allowed, with the exception of the computer with Zoom super-
vision used to view the exam form on Moodle

• Any communication with others regarding the exam or course material is not allowed
during the exam, with the exception of sending questions to the course’s staff via the
designated Moodle Forum.

• The exam submission is via Moodle. Please submit a single PDF file of your scanned
solutions.

• You can answer the exam either in English or Hebrew, as you prefer.

• Please carefully follow any announcements on Moodle during the exam.

• Good Luck!

Notation used in the exam

• Optimality is in the minimum error probability sense.

• 1(·) is the indicator function of the event in the parentheses, returning 1 if the event
occurs, and 0 otherwise.

• Z ∼ Bern(p) means that Pr(Z = 0) = 1 − p and Pr(Z = 1) = p.

• A BSC(p) is a binary symmetric channel with crossover probability p, i.e., a binary-input
binary-output channel that flips its input with probability p.

• The symbol ⊕ stands for the binary XOR operation.

Question 1
In this question, we consider binary-input channels called state-controlled BSC (S-BSC). The
state of an S-BSC is a r.v. S taking values in the unit interval [0, 1]. S has a fixed distribution
that does not depend on the channel input. The channel flips its binary input with probability
S, and the state S is also observed at the output (i.e., by the receiver). Formally, an S-BSC
has a binary input x ∈ {0, 1}, and an output pair (y, s) ∈ {0, 1} × [0, 1], where pY,S|X(y, s|x) =
pS(s)·pY|X,S(y|x, s), such that Pr(Y ≠ x | S = s, x) = s. In other words, Y is obtained from x
through a BSC(S), and the receiver observes the pair (Y, S).
Below, a uniform binary message W ∼ Bern(1/2) is transmitted over the channel, using the
trivial modulation X = W (unless otherwise stated).

(a) (5%) Suppose that Pr(S = δ) = 1 for some 0 < δ < 1/2. Find the optimal decision rule,
and the associated error probability as a function of δ.
Solution: In this case, the channel is simply a BSC with crossover probability δ < 1/2. As we
have seen, the optimal decision rule in this case is ŵMAP (y) = y, and the error probability is
Pr(EMAP ) = δ. Note that the receiver can ignore S, since it is deterministically equal to δ.

(b) Suppose that Pr(S = 0) = Pr(S = 1) = 1/2.

(1) (5%) Find the optimal decision rule and the associated error probability.
Solution: In this case, the channel is either clean when S = 0, or inverting when S = 1.
Therefore, the optimal decision rule is clearly given by ŵMAP (y, s) = y ⊕ s. The error
probability is clearly Pr(EMAP ) = 0.
(2) (5%) In this subsection only, assume S is not available to the receiver. Find the
optimal decision rule (based on Y only) and the associated error probability.
Solution: Since S is not available, we need to find the channel law from x → y.
p(y|x) = (1/2)·(p(y|x, s = 0) + p(y|x, s = 1)) = (1/2)·(0 + 1) = 1/2.

Hence the channel p(y|x) is completely noisy – its output distribution does not depend
on the input. This makes sense, since we took the equiprobable convex combination of a
clean channel and an inverting channel. We conclude that any decision rule is optimal,
and Pr(EMAP) = 1/2.

(c) (15%) What is the optimal decision rule for an S-BSC? Give an expression for the asso-
ciated error probability, as a function of the distribution of S.
Solution: The MAP rule (which in this case is ML) is given by (assuming here that S is discrete,
we can do a similar thing when S is continuous)

ŵMAP(y, s) = argmax_{w∈{0,1}} Pr(Y = y, S = s | X = w)
           = argmax_{w∈{0,1}} Pr(S = s) · Pr(Y = y | S = s, X = w)
           = argmax_{w∈{0,1}} Pr(Y = y | S = s, X = w)
           = argmax_{w∈{0,1}} s^{1(y≠w)} · (1 − s)^{1(y=w)}
           = 1(s > 1/2) ⊕ y,
where we have used the fact that S is independent of W and that Pr(S = s) > 0. This of course
makes sense – the only thing the state determines is whether the channel is more likely to flip
the input than to retain its value, or the other way around. The error probability is simply given
by averaging the error probability given S:

Pr(EMAP) = E[min(S, 1 − S)].
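As a sanity check (not part of the original solution), the following minimal Python sketch simulates an S-BSC with an assumed discrete state distribution, applies the rule ŵ(y, s) = 1(s > 1/2) ⊕ y, and compares the empirical error rate with E[min(S, 1 − S)]; the particular support and probabilities of S below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Assumed illustrative discrete state distribution on [0, 1] (not from the exam).
s_vals = np.array([0.05, 0.3, 0.7])
s_prob = np.array([0.5, 0.3, 0.2])

w = rng.integers(0, 2, n)                     # uniform message, trivial modulation X = W
s = rng.choice(s_vals, size=n, p=s_prob)      # channel state, independent of the input
flip = (rng.random(n) < s).astype(int)        # the BSC(S) flip events
y = w ^ flip                                  # binary channel output

w_hat = (s > 0.5).astype(int) ^ y             # optimal rule: 1(s > 1/2) XOR y
empirical = np.mean(w_hat != w)
theory = np.sum(s_prob * np.minimum(s_vals, 1 - s_vals))
print(f"empirical error {empirical:.4f}  vs  E[min(S, 1-S)] = {theory:.4f}")
```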

(d) In this section only, assume the transmitter also knows S, and can use it in the modulation
function, i.e., can transmit X = x(W, S) ∈ {0, 1}.

(1) (10%) Can the transmitter use S to improve the error probability? If so, show how.
Otherwise, show it is impossible.
Solution: No, it cannot. To see this, note first that without loss of generality, x(w, s) ∈
{w, w̄}. This is since mapping the message to a constant value for some values of s will
result in an error probability of 1/2 for these values of s, which is the worst possible. But
since flipping the input and flipping the output have the same effect here, and since the
receiver knows S, the receiver can emulate anything the transmitter can do. Hence, there
can be no gain from knowing S at the transmitter too.
(2) (10%) In this subsection only, assume in addition that the receiver does not observe
S (i.e., sees only Y ). Can the transmitter use S to improve the error probability?
If so, show how. Otherwise, show it is impossible.
Solution: Yes, it can. The transmitter can use the modulation function x(w, s) = 1(s > 1/2) ⊕ w,
which will result in a BSC(min(S, 1 − S)) channel. This can clearly improve the outcome for the
receiver when it does not know S, and in fact results in the error probability attained when the
receiver knows S. For example, taking the case of Section (b), we saw that without knowing S the
receiver has error probability 1/2, but if the transmitter flips the input whenever S = 1,
then the channel is completely clean and the error probability will be zero.

Consider a general channel with a binary input x ∈ {0, 1}, and output Y ∈ Y. Let ŵML : Y →
{0, 1} be an ML decision rule for this channel (where again X = W ∼ Bern(1/2)). Define

Ỹ ≜ ŵML(Y ),    S ≜ Pr(ŵML(Y ) ≠ W | Y ),

and note that both the r.v.s Ỹ and S are functions of Y . Consider the channel with input x
and output (Ỹ , S). If this channel is an S-BSC with state S, and if Y can be uniquely recovered
from (Ỹ , S), then we call this an S-BSC representation of the original channel from x to Y .

(e) (5%) What is Pr(S > 1/2) for the above S? Explain.
Solution: An ML rule decides on the most likely value of the message W given the channel output
Y (for a uniform prior). Therefore, for a binary message it must be that Pr(ŵML(Y ) ≠ W | Y = y) ≤ 1/2
for any y, as otherwise the ML rule would have flipped its decision. Thus, Pr(S > 1/2) = 0.

(f) For each of the following channels with a binary input x ∈ {0, 1}, find (Ỹ , S) defined above
as a function of Y , and rigorously determine if this constitutes an S-BSC representation.

(1) (15%) Y = x · U , where U ∼ Bern(1/2). (multiplication over the reals.)


Solution: The output distributions here are p(y|x = 0) = [1, 0] and p(y|x = 1) = [1/2, 1/2]
(over y ∈ {0, 1}). Hence the ML decision rule here is clearly ŵML(y) = y, therefore Ỹ = Y . The
conditional error probability is Pr(W ≠ Y | Y = 0) = Pr(W = 1 ∧ Y = 0)/Pr(Y = 0) =
Pr(W = 1 ∧ U = 0)/(1 − Pr(W = 1 ∧ U = 1)) = (1/2)·(1/2)/(1 − (1/2)·(1/2)) = 1/3,
and Pr(W ≠ Y | Y = 1) = 0. Hence we have that S = (1/3)·1(Y = 0). This is
not an S-BSC since the state depends on the input: Pr(S = 0 | x = 0) = 0 while
Pr(S = 0 | x = 1) = 1/2.
(2) (15%) The channel given by

Y = x with probability 1 − δ,    Y = 2 with probability δ.

(Hint: randomization is required.)


Solution: The conditional distributions here are p(y|x = 0) = [1 − δ, 0, δ] and p(y|x = 1) = [0, 1 − δ, δ].
Hence the ML decoding rule decides ŵ = y whenever y ∈ {0, 1}, and is
free to decide anything for y = 2. To make things symmetric, let us decide uniformly at
random in this case. Namely, Ỹ = Y when Y ∈ {0, 1}, and Ỹ ∼ Bern(1/2) when Y = 2.
The error probability is clearly zero when y ∈ {0, 1}, and is 1/2 when y = 2, hence
S = (1/2)·1(Y = 2). It is clear that Y can be uniquely recovered from the pair (Ỹ , S). Let
us now show that the induced channel is an S-BSC. First, the distribution of S does not
depend on the input x, as required, since the probability that Y = 2 does not depend on
x. Then, note that Pr(Ỹ ≠ x | S = 0, x) = Pr(Ỹ ≠ x | Y ≠ 2, x) = 0, and Pr(Ỹ ≠ x | S = 1/2, x) =
Pr(Ỹ ≠ x | Y = 2, x) = 1/2. The intuition behind this representation is that
this is an erasure channel, erasing the input with probability δ, which is reflected in the
fact that Pr(S = 0) = 1 − δ and Pr(S = 1/2) = δ.
(3) (15%) Y = (−1)x + Z, where Z ∼ N (0, 1).
Solution: This is standard BPSK in Gaussian noise, and so clearly Ỹ = 1(Y < 0). The
error probability conditioned on the output Y = y > 0 is

Pr(1(Y < 0) ≠ W | Y = y) = Pr(W = 1 | Y = y)
                         = Pr(W = 1) · fY|X(y|1) / fY (y)
                         = fZ(y + 1) / (fZ(y + 1) + fZ(y − 1))
                         = exp(−(y + 1)²/2) / (exp(−(y + 1)²/2) + exp(−(y − 1)²/2))
                         = 1 / (1 + exp(2y)).

Similarly, for Y = y < 0,

Pr(1(Y < 0) ≠ W | Y = y) = Pr(W = 0 | Y = y)
                         = Pr(W = 0) · fY|X(y|0) / fY (y)
                         = fZ(y − 1) / (fZ(y + 1) + fZ(y − 1))
                         = exp(−(y − 1)²/2) / (exp(−(y + 1)²/2) + exp(−(y − 1)²/2))
                         = 1 / (1 + exp(−2y)).
Therefore, we conclude that S = 1/(1 + e^{2|Y|}). It is easy to see that (Ỹ , S) uniquely recover Y ,
since S gives |Y | and Ỹ gives sign(Y ). Moreover, the distribution of S does not depend
on the input, since the distribution of |Y | does not. Finally,

Pr(Ỹ ≠ x | S = s, x = 0) = Pr(Y < 0 | x = 0) · Pr(S = s | Y < 0, x = 0) / Pr(S = s | x = 0)
                         = Pr(Y > 0 | x = 1) · Pr(S = s | Y > 0, x = 1) / Pr(S = s | x = 1)
                         = Pr(Ỹ ≠ x | S = s, x = 1),

where we have used the fact that S does not depend on the input, and the symmetry of the
modulation scheme. So, given S = s the channel from x to Ỹ is a BSC. The crossover
probability of this BSC is clearly s, since (trivially)

s = Pr(Ỹ ≠ W | |Y | = (1/2) ln((1 − s)/s)) = Pr(Ỹ ≠ W | S = s).
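The following sketch (an illustration added here, not from the exam) simulates the BPSK channel of part (f)(3), computes the reliability S = 1/(1 + e^{2|Y|}) for each output, and checks empirically that, conditioned on S falling in a narrow bin around a value s, the hard decision Ỹ = 1(Y < 0) errs with probability roughly s; the bin width and test values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

w = rng.integers(0, 2, n)
y = (-1.0) ** w + rng.standard_normal(n)      # Y = (-1)^X + Z with Z ~ N(0, 1), X = W

y_tilde = (y < 0).astype(int)                 # hard ML decision
s = 1.0 / (1.0 + np.exp(2.0 * np.abs(y)))     # reliability S as derived above

# Conditioned on S being close to s0, the hard decision should err with probability about s0.
for s0 in (0.05, 0.2, 0.4):                   # arbitrary test values
    mask = np.abs(s - s0) < 0.01
    if mask.any():
        print(f"S ≈ {s0}: empirical Pr(Ỹ != W | S) = {np.mean(y_tilde[mask] != w[mask]):.3f}")
```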

Question 2
A uniformly distributed binary message W ∼ Bern(1/2) is modulated into the waveform xW (t).
The modulated signal is then multiplied by the (random) function gA(t), where A ∼ Bern(p) is
independent of W , p ≤ 1/2, and

g0(t) = 1 for 0 ≤ t ≤ T/2 (and 0 otherwise),    g1(t) = 1 for T/2 ≤ t ≤ T (and 0 otherwise).

Then, the resulting signal x̃W,A(t) ≜ xW (t) · gA(t) is transmitted over an AWGN channel with
power spectral density N0/2 (the noise process is independent of both W and A).
In Sections (a) and (b), assume that A is known to the receiver.
(a) (20%) In this section only, assume that

x0(t) = 1 for 0 ≤ t ≤ T (and 0 otherwise),
x1(t) = 1 for T/2 ≤ t ≤ 3T/4, −1 for 3T/4 ≤ t ≤ T (and 0 otherwise).

Find the optimal decision rule and the associated error probability.
Solution: Since the receiver knows A, the decision rule can depend on it. In the case A = 0 we
have a detection problem (OOK constellation), i.e., a one-dimensional signal space spanned by

ϕ(t) = √(2/T) for 0 ≤ t ≤ T/2 (and 0 otherwise).

The constellation points in this case are x1 = 0 and x0 = √(T/2), hence we decide ŵ = 0 if
Y > √(T/8) and ŵ = 1 otherwise, where Y = ∫₀ᵀ Y(t)ϕ(t)dt is the projection of the channel
output onto the signal space. The respective error probability (given A = 0) is

Pr(E|A = 0) = Q(√(T/(4N0))).

In the case A = 1 we get two orthogonal signals; the two-dimensional signal space is spanned by

ϕ0(t) = x0(t) · 1(t ∈ [T/2, T]) / √(T/2),    ϕ1(t) = x1(t) · 1(t ∈ [T/2, T]) / √(T/2),

and the corresponding constellation points are x0 = [√(T/2), 0] and x1 = [0, √(T/2)]. The decision
rule is then: decide ŵ = 0 if y0 > y1 and ŵ = 1 otherwise. The distance in this case is
d = ‖x0 − x1‖ = √T, hence the corresponding error probability is

Pr(E|A = 1) = Q(√(T/(2N0))).

Overall, we get

Pr(E) = Σ_a Pr(E|A = a) pA(a) = p · Q(√(T/(4N0))) + (1 − p) · Q(√(T/(2N0))).
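A quick numerical check of these two conditional error terms (added for illustration): it simulates the A = 0 (OOK) and A = 1 (orthogonal) detection problems directly in the projected signal space and compares with the Q-function expressions; T, N0 and p are assumed values, and the final line weights the two terms exactly as in the solution above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
Q = norm.sf                                    # the Gaussian Q-function

T, N0, p = 4.0, 1.0, 0.3                       # assumed illustrative values
sigma = np.sqrt(N0 / 2)                        # noise std per dimension
n = 500_000

# A = 0: OOK with points sqrt(T/2) and 0, threshold sqrt(T/8).
w = rng.integers(0, 2, n)
y = np.where(w == 0, np.sqrt(T / 2), 0.0) + sigma * rng.standard_normal(n)
pe0 = np.mean((y < np.sqrt(T / 8)).astype(int) != w)

# A = 1: orthogonal points [sqrt(T/2), 0] and [0, sqrt(T/2)], decide by comparing y0 and y1.
w = rng.integers(0, 2, n)
y0 = np.where(w == 0, np.sqrt(T / 2), 0.0) + sigma * rng.standard_normal(n)
y1 = np.where(w == 1, np.sqrt(T / 2), 0.0) + sigma * rng.standard_normal(n)
pe1 = np.mean((y1 > y0).astype(int) != w)

print("simulated :", pe0, pe1)
print("Q-formula :", Q(np.sqrt(T / (4 * N0))), Q(np.sqrt(T / (2 * N0))))
# Overall error, weighting the two conditional terms as in the solution above:
print("Pr(E) =", p * Q(np.sqrt(T / (4 * N0))) + (1 - p) * Q(np.sqrt(T / (2 * N0))))
```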

(b) In this section, assume an energy constraint ∫₀ᵀ x²w(t)dt ≤ E for both w ∈ {0, 1}.

(1) (20%) Suggest waveforms x0(t) and x1(t) (as a function of p) that minimize the error
probability of the optimal decision rule. You can leave your answer as a suitable
optimization problem, simplifying as much as possible.
Solution: Note that

Pr(E) = Σ_a Pr(E|A = a) pA(a).

Given some split (E1, E2) of the energy E between the two intervals (0, T/2) and
(T/2, T) respectively, such that E1 + E2 = E, we get that for each value a of A the
corresponding error probability Pr(E|A = a) is minimized by choosing xW (t) as antipodal
signals over the corresponding interval. Hence we get that the optimal waveforms must be
antipodal, and only the energy split (E1, E2) needs to be determined. This can be
done by solving the following problem:

argmin_{E1∈[0,E]}  p · Q(√(2E1/N0)) + (1 − p) · Q(√(2(E − E1)/N0)).

(2) (10%) Use the approximation Q(x) ≈ (1/2)e^{−x²/2}. Under this approximation (and the
energy constraint), explicitly find waveforms x0(t), x1(t) that minimize the error
probability of the optimal decision rule. What is the resulting error probability?
Solution: Under this approximation, the error probability is

Pr(E) = (p/2) e^{−E1/N0} + ((1 − p)/2) e^{−(E−E1)/N0}.

Taking the derivative w.r.t. E1, we get

d Pr(E)/dE1 = (1/(2N0)) · (−p e^{−E1/N0} + (1 − p) e^{−(E−E1)/N0}).

The derivative vanishes at a unique point E1*, given by

E1* = (1/2) · (E + N0 ln(p/(1 − p))).

It is easy to check that for p < 1/2 the derivative is nonnegative for any E1 > E1*, hence
the solution is E1 = max(E1*, 0). When E1* ≥ 0, substituting E1* back gives the resulting
error probability Pr(E) = √(p(1 − p)) · e^{−E/(2N0)}; otherwise we take E1 = 0, giving
Pr(E) = p/2 + ((1 − p)/2) e^{−E/N0}.
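The sketch below (added here) evaluates the closed-form split E1* under the Q(x) ≈ (1/2)e^{−x²/2} approximation and compares it with a brute-force grid minimization of the exact Q-based objective from part (1); E, N0 and p are assumed illustrative values, and the energy split is weighted as in the solution above.

```python
import numpy as np
from scipy.stats import norm

Q = norm.sf

E, N0, p = 6.0, 1.0, 0.2                       # assumed illustrative parameters

# Closed-form split under the approximation Q(x) ~ (1/2)exp(-x^2/2).
E1_star = max(0.0, 0.5 * (E + N0 * np.log(p / (1 - p))))

# Exact objective from part (1), minimized by a fine grid search for comparison.
E1_grid = np.linspace(0.0, E, 100_001)
obj = p * Q(np.sqrt(2 * E1_grid / N0)) + (1 - p) * Q(np.sqrt(2 * (E - E1_grid) / N0))
E1_exact = E1_grid[np.argmin(obj)]

print(f"approximate optimum E1* = {E1_star:.3f}, exact-objective optimum = {E1_exact:.3f}")
print(f"exact Pr(E) at grid optimum = {obj.min():.4e}")
print(f"value under the Q-approximation: {np.sqrt(p * (1 - p)) * np.exp(-E / (2 * N0)):.4e}")
```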

From this point on, we assume that A is not known to the receiver, and that

x0(t) = −x1(t) = √(E/T) · 1(t ∈ [0, T]).
The following strategy for decoding W is suggested. First, the receiver treats the pair (W, A)
as a two-bit message, and applies the optimal decision rule for this pair, namely, uses a decision
rule (Ŵ , Â) that minimizes the error probability Pr(E) = Pr(Ŵ ≠ W or Â ≠ A). Then, it uses
Ŵ as the decision for W .

7
(c) (20%) Assume that p = 1/2. Find (Ŵ , Â) and the associated minimal error probability
Pr(E).
Solution: When both W and A are the message we want to decode, and p = 1/2, this is equivalent
to a problem with 4 equiprobable messages mapped to the signals ±√(E/T) · 1(t ∈ [0, T/2])
and ±√(E/T) · 1(t ∈ [T/2, T]). This is clearly a QPSK constellation, which we bring to “standard”
form by picking the basis functions

ϕi(t) = √(1/T) · 1(t ∈ [0, T/2]) + (−1)^i · √(1/T) · 1(t ∈ [T/2, T]),    i ∈ {0, 1}.

This results in the four constellation points ((−1)^W √E/2, (−1)^{W⊕A} √E/2). The optimal
decision rule now decides according to the quadrant, which (denoting the projection onto this
basis of the signal space by y = (y0, y1)) yields the rule

(Ŵ , Â) = (ŵ, â)MAP(Y ) = (1(Y0 < 0), 1(Y0 < 0) ⊕ 1(Y1 < 0)).

The error probability is the standard error probability for QPSK, taking into account that the
signal energy here is E/2, which gives

Pr(E) = 1 − (1 − Q(√(E/(2N0))))².

(d) (10%) What is the error probability in decoding W using the rule above, i.e.,
Pr(Ŵ ≠ W )? (for p = 1/2 again.)
Solution: The decision rule for W from the previous subsection is simply ŵ(y) = 1(y0 < 0).
The probability of error is

Pr(Ŵ ≠ W ) = E[ Pr(1(Y0 < 0) ≠ W | A) ]
           = (1/2) · [ (1/2) Pr(Y0 < 0 | W = 0, A = 0) + (1/2) Pr(Y0 < 0 | W = 0, A = 1) ]
           + (1/2) · [ (1/2) Pr(Y0 > 0 | W = 1, A = 0) + (1/2) Pr(Y0 > 0 | W = 1, A = 1) ].

It is easy to see that all the probabilities above are the same, and all equal Q(√(E/(2N0))), hence
this is the error probability.
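The following simulation sketch (added for illustration, p = 1/2) draws (W, A) uniformly, generates the two projections (Y0, Y1), applies the quadrant rule from part (c), and compares the empirical pair-error and bit-error rates with 1 − (1 − Q(√(E/2N0)))² and Q(√(E/2N0)); E and N0 are assumed values.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
Q = norm.sf

E, N0 = 4.0, 1.0                               # assumed illustrative values
sigma = np.sqrt(N0 / 2)                        # noise std per dimension
amp = np.sqrt(E) / 2                           # coordinate of each constellation point
n = 1_000_000

w = rng.integers(0, 2, n)
a = rng.integers(0, 2, n)                      # p = 1/2
y0 = (-1.0) ** w * amp + sigma * rng.standard_normal(n)
y1 = (-1.0) ** (w ^ a) * amp + sigma * rng.standard_normal(n)

w_hat = (y0 < 0).astype(int)                   # quadrant rule from part (c)
a_hat = w_hat ^ (y1 < 0).astype(int)

q = Q(np.sqrt(E / (2 * N0)))
print(f"pair error: sim {np.mean((w_hat != w) | (a_hat != a)):.4f}  theory {1 - (1 - q) ** 2:.4f}")
print(f"bit  error: sim {np.mean(w_hat != w):.4f}  theory {q:.4f}")
```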

(e) (20%) Is the suggested decision strategy optimal for decoding W for any value of p?
Prove your claim.
Solution: This strategy is of course not optimal in general. To see this in the extreme, think about
the special case where p = 0. In this case we know that A = 1, and the question is reduced
to the equal-prior binary decision problem of separating between (√E/2, −√E/2) and
(−√E/2, √E/2) in our signal space. The optimal decision boundary in this case is clearly
the 45-degree diagonal line, which does not coincide with the rule 1(y0 < 0). More generally,
when p ≠ 0, we have a problem (in our signal space) of separating between two Gaussian
mixtures with mixture weights (1 − p, p): under W = 0 the component centers are (√E/2, −√E/2)
and (√E/2, √E/2), and under W = 1 they are (−√E/2, √E/2) and (−√E/2, −√E/2). It should be
noted that by symmetry, for the special case where p = 1/2, the suggested decoding strategy
happens to be optimal.
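To make the sub-optimality concrete, the sketch below (an addition, not from the exam) compares the suggested rule 1(y0 < 0) with the true MAP rule for W, which weighs the two Gaussian-mixture likelihoods described above; the parameter values are assumptions, and A is drawn with Pr(A = 0) = p, matching the convention used in this solution.

```python
import numpy as np

rng = np.random.default_rng(4)

E, N0, p = 2.0, 1.0, 0.1                       # assumed values; Pr(A = 0) = p, as in this solution
sigma2 = N0 / 2
amp = np.sqrt(E) / 2
n = 500_000

w = rng.integers(0, 2, n)
A = (rng.random(n) > p).astype(int)            # Pr(A = 0) = p
y0 = (-1.0) ** w * amp + np.sqrt(sigma2) * rng.standard_normal(n)
y1 = (-1.0) ** (w ^ A) * amp + np.sqrt(sigma2) * rng.standard_normal(n)

def mixture_likelihood(y0, y1, wval):
    # Likelihood of hypothesis W = wval, mixing over A (centers as described above).
    c = (-1.0) ** wval * amp                   # common y0-coordinate of both components
    l_a0 = p       * np.exp(-((y0 - c) ** 2 + (y1 - c) ** 2) / (2 * sigma2))
    l_a1 = (1 - p) * np.exp(-((y0 - c) ** 2 + (y1 + c) ** 2) / (2 * sigma2))
    return l_a0 + l_a1

w_suggested = (y0 < 0).astype(int)
w_map = (mixture_likelihood(y0, y1, 1) > mixture_likelihood(y0, y1, 0)).astype(int)

print("suggested rule 1(y0 < 0):", np.mean(w_suggested != w))
print("true MAP rule for W     :", np.mean(w_map != w))
```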

Question 3
Let C ⊆ {0, 1}n be a binary block code with rate Rc and minimum Hamming distance d. A
transmitter wants to send a codeword, chosen uniformly at random from C, to a receiver. To
that end, the transmitter can use a binary input channel described below. Using this channel
costs money, and the more money the transmitter pays, the better the quality of the channel.
Specifically, if the transmitter pays s dollars to send the ith coded bit ci , then the corresponding
channel output is

Yi = (−1)^{ci} · (ψ(s) − (1 + ψ(s)) Ui),    i ∈ [n],

where {Ui ∼ Uniform([0, 1])}i∈[n] are i.i.d. noises, and ψ(s) is a given payment function ψ :
R+ → [1, ∞). You can assume that ψ is monotonically increasing and unbounded.

(a) (10%) Suppose the transmitter pays s dollars for each code bit it sends. Find sb , the
amount the transmitter pays per information bit.
Solution: Each code bit is worth Rc information bits. Hence the cost per information bit is
sb = s/Rc dollars.
(b) (10%) For an uncoded transmission with n = 1 and Rc = 1, find the optimal decision rule
and the associated error probability. Express the error probability in terms of sb and ψ.
Solution: In this case s = sb. Write a = ψ(sb). Note that Yi ∼ Uniform([−1, a]) given ci = 0,
and Yi ∼ Uniform([−a, 1]) given ci = 1. Therefore, the ML rule decides 0 for Yi > 1,
decides 1 for Yi < −1, and is arbitrary otherwise. So for simplicity and symmetry, we can
choose ŵMAP(y) = 1(y < 0). The error probability in this case is the same conditioned on both
inputs, and is therefore on average

Pr(EMAP) = Pr(Uniform([−1, a]) < 0) = 1/(a + 1) = 1/(1 + ψ(sb)).
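A minimal sketch (added here) of this uncoded setting: it simulates Yi = (−1)^{ci}(ψ(s) − (1 + ψ(s))Ui) with the assumed payment function ψ(s) = e^s (the one suggested later in part (d)) and an assumed payment sb, and checks that the rule 1(y < 0) errs with probability 1/(1 + ψ(sb)).

```python
import numpy as np

rng = np.random.default_rng(5)

psi = np.exp                                   # assumed payment function psi(s) = e^s (see part (d))
s_b = 2.0                                      # assumed payment per bit; here s = s_b since Rc = 1
a = psi(s_b)
n = 1_000_000

c = rng.integers(0, 2, n)
u = rng.random(n)
y = (-1.0) ** c * (a - (1 + a) * u)            # Y in [-1, a] for c = 0 and in [-a, 1] for c = 1

c_hat = (y < 0).astype(int)                    # the simple symmetric ML rule 1(y < 0)
print("empirical error  :", np.mean(c_hat != c))
print("1/(1 + psi(s_b)) :", 1 / (1 + a))
```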

(c) To decode, the receiver first decides on each code bit separately and optimally, resulting
in a binary vector U ∈ {0, 1}n (you can assume that the prior probability of each code
bit is uniform). Then, it finds the most likely codeword given U . Let us denote the
overall resulting decision rule by ĉ(y) ∈ C.

(1) (10%) Find ĉ(y) explicitly, simplifying as much as possible.


Solution: The hard decision per coded bit is the same as in the previous section, namely
Ui = 1(Yi < 0). Then, the induced end-to-end channel seen by the receiver after taking
this hard decision is a BSC with crossover probability 1/(1 + a) = 1/(1 + ψ(sb Rc)), and hence at this
point the receiver will use minimum Hamming distance decoding. Thus, overall

ĉ(y) = argmin_{c∈C} Σ_{i∈[n]} 1(1(Yi < 0) ≠ ci).

(2) (10%) Give an (asymptotically tight) upper bound on the error probability incurred
by the rule ĉ(y), as a function of n, d, Rc and sb. How does the bound behave when
the transmitter pays “a lot” per information bit?
Solution: Just as we saw in class, the error probability is upper bounded by the probability
that the number of flips is larger than t = ⌊(d − 1)/2⌋. Hence, writing p = 1/(1 + ψ(sb Rc)),

Pr(EMAP) ≤ Σ_{j=t+1}^{n} (n choose j) p^j (1 − p)^{n−j} ≤ 2^n · p^{t+1}.

For large sb and fixed n, we see that Pr(EMAP) = O(ψ(sb Rc)^{−(t+1)}).
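The sketch below (added for illustration) implements the hard-decision, minimum-Hamming-distance decoder of part (c)(1) and compares its simulated block error rate with the bound 2^n p^{t+1} of part (c)(2); the code used (a length-5 repetition code), the payment sb and the payment function ψ(s) = e^s are all assumed illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed illustrative setup: length-5 repetition code (d = 5, t = 2, Rc = 1/5),
# payment function psi(s) = e^s, payment s_b per information bit.
C = np.array([[0, 0, 0, 0, 0],
              [1, 1, 1, 1, 1]])
n_code, d, Rc = 5, 5, 1 / 5
t = (d - 1) // 2
s_b = 10.0
a = np.exp(s_b * Rc)                           # psi(s) for the per-code-bit payment s = s_b * Rc
p_flip = 1 / (1 + a)                           # hard-decision crossover probability

trials = 200_000
idx = rng.integers(0, 2, trials)
c = C[idx]
y = (-1.0) ** c * (a - (1 + a) * rng.random(c.shape))

hard = (y < 0).astype(int)                     # per-bit hard decisions
dist = np.array([np.sum(hard != cw, axis=1) for cw in C])
c_hat = np.argmin(dist, axis=0)                # minimum Hamming distance decoding

print("simulated block error rate   :", np.mean(c_hat != idx))
print("bound 2^n p^(t+1) from (c)(2):", 2 ** n_code * p_flip ** (t + 1))
```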

We wish to know how much the transmitter would save in payment per information bit, by using
the code instead of uncoded transmission. So, we define the discount factor γ = sb^uncoded / sb^code.

(d) (10%) Suggest a payment function ψ(s) such that, asymptotically, γ will be constant.
Express the resulting asymptotic γ as a function of Rc and d.
Solution: Since the error probability decays like 1/ψ(sb^uncoded) for uncoded transmission, and
like 1/ψ(sb^code Rc)^{t+1} when using the code, we can set ψ(s) = e^s. Comparing the error
probabilities (i.e., equating the decay exponents sb^uncoded and (t + 1) sb^code Rc)
asymptotically yields a discount factor γ ≈ Rc (t + 1).

To try and improve performance, the receiver now uses the following decoding strategy. First,
it maps each channel output Yi to Zi = φ(Yi), where φ : R → {0, 1, 2} is a function to be
chosen later. Then, it uses the optimal decision rule given the random vector Z ∈ {0, 1, 2}^n.

(e) (10%) Find a function φ such that the decoding strategy above is truly optimal, namely,
attains the same error probability as the MAP rule applied to Y . Prove your claim.
Solution: The idea is that when |Yi | > 1 then the bit is decoded without error, and when |Yi | ≤ 1
both values of the bit are equally likely, so we can think of this as an erasure and map it to 2.
So, we are suggesting the function φ(y) = 1 (y ≤ 1) + 1 (|y| ≤ 1). To make this rigorous, let
us look at the likelihood induced by a codeword c ∈ C:

f(y|c) = (1 + ψ(s))^{−n} ∏_{i∈[n]} 1((ci = 0 ∧ yi > −1) ∨ (ci = 1 ∧ yi < 1))
       = (1 + ψ(s))^{−n} ∏_{i∈[n]} 1(φ(yi) = ci ∨ φ(yi) = 2)
       = (1 + ψ(s))^{−n} ∏_{i∈[n]} 1(zi = ci ∨ zi = 2).

Hence, the ML decision rule given Y can be computed from the vector Z, and since ML is
optimal here, the MAP (ML) decoding rule applied to Z must also be optimal.

(f) (10%) For the above choice of φ, what is the decision rule applied to Z? Prove your
claim.
Solution: From the above it is easy to see that the likelihood depends only on the “non-erased”
indices where Zi ≠ 2, and that the likelihood is zero when ci ≠ zi for such an index. In fact,
the likelihood is either zero or a fixed number:

f(y|c) = 0 if zi ≠ 2 ∧ ci ≠ zi for some i ∈ [n], and f(y|c) = (1 + ψ(s))^{−n} otherwise.

Therefore, an optimal decision rule returns any codeword c ∈ C that agrees with Z on all
non-erased indices (there could be multiple such codewords).
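A small sketch (added here) of the erasure-based decoder of parts (e)-(f), using the same assumed repetition code and payment as before: it applies φ(y) = 1(y ≤ 1) + 1(|y| ≤ 1) and returns any codeword that agrees with Z on the non-erased positions.

```python
import numpy as np

rng = np.random.default_rng(7)

C = np.array([[0, 0, 0, 0, 0],                 # same assumed repetition code (d = 5)
              [1, 1, 1, 1, 1]])
a = np.exp(2.0)                                # psi(s) for an assumed per-code-bit payment s = 2

def phi(y):
    # phi(y) = 1(y <= 1) + 1(|y| <= 1): gives 0 or 1 when the bit is certain, 2 for an erasure.
    return (y <= 1).astype(int) + (np.abs(y) <= 1).astype(int)

def decode(z, codebook):
    # Return (the index of) any codeword agreeing with z on all non-erased positions.
    known = z != 2
    for i, cw in enumerate(codebook):
        if np.all(cw[known] == z[known]):
            return i
    return 0

trials = 100_000
errors = 0
for _ in range(trials):
    idx = rng.integers(0, 2)
    y = (-1.0) ** C[idx] * (a - (1 + a) * rng.random(C.shape[1]))
    errors += decode(phi(y), C) != idx

print("block error rate with erasure decoding:", errors / trials)
```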

(g) For the decoding strategy you found:

(1) (10%) Give a general condition on the vector Z such that the error probability
(conditioned on Z) would be the minimal possible.
Solution: If fewer than d bits have been erased, there can be only one codeword (the correct
one) that agrees with Z on all the non-erased coordinates. Therefore, if Σ_{i∈[n]} 1(Zi = 2) < d,
the conditional error probability is zero.

(2) (10%) Give an (asymptotically tight) upper bound on the error probability incurred
by the above decoding strategy, as a function of n, d, Rc and sb. How does the
bound behave when the transmitter pays “a lot” per information bit?
Solution: To get an upper bound, we can assume that when there are d or more erasures,
the decision rule makes an error. Each bit is erased (i.e., Zi = 2, equivalently |Yi| ≤ 1) with
probability ε = 2/(1 + ψ(s)), independently across i. Thus, the error probability is at most

Pr(EMAP) ≤ Pr( Σ_{i∈[n]} 1(Zi = 2) ≥ d )
         = Σ_{j=d}^{n} (n choose j) ε^j (1 − ε)^{n−j}
         ≤ 2^n · (2/(1 + ψ(sb Rc)))^d.

For large sb, the bound behaves like Pr(EMAP) = O(ψ(sb Rc)^{−d}).


(3) (10%) For the payment function ψ(s) you suggested in section (d), find the asymp-
totic discount factor γ attained, as a function of Rc and d.
Solution: Similarly to the discussion before, we get that γ = Rc · d for this choice of
payment function.
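As a rough numerical illustration (added here) of the two discount factors, the sketch below takes ψ(s) = e^s, fixes a target error probability, solves the uncoded expression and the two bounds above for the required payments per information bit, and prints the resulting ratios next to the asymptotic values Rc(t + 1) and Rc·d; the code parameters (n, d, Rc) are assumed, and convergence to the limits is slow because of the 2^n constants in the bounds.

```python
import numpy as np

# Assumed illustrative code parameters; psi(s) = e^s as in part (d).
n, d, Rc = 15, 7, 1 / 3
t = (d - 1) // 2

for target in (1e-6, 1e-12, 1e-24, 1e-48):
    s_unc = np.log(1 / target - 1)                         # uncoded: 1/(1 + e^s) = target
    # Hard decisions: 2^n * p^(t+1) = target with p = 1/(1 + e^(s_b * Rc)).
    p_hard = (target / 2 ** n) ** (1 / (t + 1))
    sb_hard = np.log(1 / p_hard - 1) / Rc
    # Erasures: 2^n * (2/(1 + e^(s_b * Rc)))^d = target.
    sb_eras = np.log(2 * (2 ** n / target) ** (1 / d) - 1) / Rc
    print(f"target {target:.0e}: gamma_hard = {s_unc / sb_hard:.2f} (limit {Rc * (t + 1):.2f}), "
          f"gamma_erasure = {s_unc / sb_eras:.2f} (limit {Rc * d:.2f})")
```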

