
Indian Institute of Technology Kharagpur

Mid-semester Exam
Subject: Communication - II (EC31204), Digital Comm. (EC31002) [Backlog]
Date: 20 February 2023, Session: AN, Time: 14:00–16:00 (2 hours)
Department of Electronics and Electrical Communication Engineering
Instructor: Amitalok J. Budkuley
Total marks: 30 (*up to 3 bonus marks)

Instructions: There are 5 questions. The answer to a single question should be contiguous and not
scattered across the answer script. Bonus marks (optional, indicated by *) for a specific question
will be awarded only if the answers to the rest of the said question are fully correct.

[1] A “two-look” channel: One of two equally likely signals, s0 = −1 or s1 = +1, is sent over a noisy
channel (see figure below). The two noise variables N1, N2 are independent (of each other and of the signal) with
identical density function f_N(n) = (1/2) e^{−|n|}, n ∈ R. Analyse the maximum a posteriori (MAP) detector
and determine (and draw) the optimum decision/decoding regions for each signal.
Hint: There is a “minimum distance decoder”-like interpretation here but unlike the Gaussian case done
in class, the “distance” notion is different. Draw carefully. 3+3=6 Marks
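
A minimal Monte Carlo sketch (assuming Python with numpy is available; the sample size and the ℓ1-style candidate rule below are illustrative assumptions, not a prescribed solution) that one can use to sanity-check a candidate decision rule for this channel:

import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000
s = rng.choice([-1.0, 1.0], size=n_trials)                    # equally likely signals s0 = -1, s1 = +1
noise = rng.laplace(loc=0.0, scale=1.0, size=(n_trials, 2))   # f_N(n) = (1/2) e^{-|n|} per look
y = s[:, None] + noise                                        # two independent looks y1, y2

# Candidate decoder: pick the signal minimising a sum-of-absolute-differences "distance" over both looks.
d0 = np.abs(y + 1.0).sum(axis=1)                              # "distance" to s0 = -1
d1 = np.abs(y - 1.0).sum(axis=1)                              # "distance" to s1 = +1
s_hat = np.where(d1 <= d0, 1.0, -1.0)

print("empirical error rate:", np.mean(s_hat != s))

Comparing such an empirical error rate against other rules (e.g. a squared-Euclidean one) illustrates why the hint stresses that the relevant "distance" differs from the Gaussian case.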

[2] “Hyping” the Cubes: Consider a centered two-dimensional cube with vertices at s0 = (−a, −a), s1 =
(a, −a), s2 = (−a, a), and s3 = (a, a) (draw this and see why it is ‘centered’). We transmit M = 2² = 4
equally likely messages with this constellation over an additive Gaussian noise channel where the two
noise components, say N1, N2, are independent and identically distributed (i.i.d.) with distribution
N1, N2 ∼ N(0, σ²), where σ² is finite.

(a) Determine the average probability of error for the 2-D constellation.

(b) Extend the probability of error calculation in part (a) to a centered 3-dimensional hypercube with
vertices similarly placed (draw it).

(c) Generalize the calculation to a centered k-dimensional constellation, where k is any finite positive
integer.

(d) (Bonus) What happens to the probability of error if one of the messages is removed from the constel-
lation? Argue clearly.

2+2+2+1*=6+1* Marks
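
A small numerical companion (assuming Python with numpy and scipy; the values of a, σ and the trial count are illustrative) that simulates minimum-distance decoding on the 2-D constellation, against which a closed-form answer to part (a) can be checked:

import numpy as np
from scipy.stats import norm

a, sigma = 1.0, 0.7                                       # illustrative constellation/noise parameters
verts = np.array([[-a, -a], [a, -a], [-a, a], [a, a]])    # the four constellation points

rng = np.random.default_rng(1)
n_trials = 200_000
idx = rng.integers(0, 4, size=n_trials)                   # equally likely messages
y = verts[idx] + rng.normal(0.0, sigma, size=(n_trials, 2))

# Minimum-distance (ML) decoding: decide the nearest vertex in Euclidean distance.
dists = ((y[:, None, :] - verts[None, :, :]) ** 2).sum(axis=2)
idx_hat = dists.argmin(axis=1)

print("simulated P(error):", np.mean(idx_hat != idx))
# One candidate closed form to compare against, with Q(x) = norm.sf(x):
print("candidate closed form:", 1 - (1 - norm.sf(a / sigma)) ** 2)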

[3] Thinking Gaussians: Let X ∼ N(0, σ_X²) and Z ∼ N(0, σ_Z²) be two independent Gaussian random
variables. Let Y = X + Z be a function of the two Gaussian random variables. Answer the following
questions (note: if any distribution is Gaussian in nature, you may express it in the form N(µ, σ²), where
µ and σ² are the mean and variance).

(a) What is the distribution of Y ?

(b) What is the conditional distribution of Y when X = x?

(c) What is the conditional distribution of X when Y = y?

(d) (Bonus) For this part (note the variables carefully), assume that Y1 = X + Z1 and Y2 = Y1 + Z2 , where
Z1, Z2 are i.i.d. with distribution identical to Z. Is the pair (Y1, Y2) jointly Gaussian? Argue clearly
with analysis (and not just words).
Hint: In the standard approach, it will help to recall the linear algebraic definition of a jointly Gaussian
pair of random variables.

1+2+3+2*=6+2* Marks
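
For reference alongside the hint in part (d), one standard linear-algebraic characterisation of joint Gaussianity (stated in general form; the dimension k of the underlying standard Gaussian vector is arbitrary):

\[
(Y_1, Y_2) \text{ is jointly Gaussian} \iff
\begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix} = A\,W + \mu
\ \text{ for some } A \in \mathbb{R}^{2\times k},\ \mu \in \mathbb{R}^{2},\ W \sim \mathcal{N}(0, I_k),
\]

equivalently, every linear combination aY_1 + bY_2 is (univariate) Gaussian.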

[4] The “centrality” of E[·]: Consider a non-negative random variable X with finite E[X] = µ. Show that, for any a > 0,

\[
P(X \ge a) \le \frac{E[X]}{a}.
\]
Using this bound, what can you say about the probability that X takes a value of at least k · E[X], where k is a positive finite integer?
4+2=6 marks
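
A quick numerical sanity check of the bound (assuming Python with numpy; the exponential distribution, its mean, and the threshold a are illustrative choices):

import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=2.0, size=1_000_000)   # a non-negative X with E[X] = 2
a = 5.0

lhs = np.mean(x >= a)                            # empirical P(X >= a)
rhs = x.mean() / a                               # empirical E[X] / a
print(f"P(X >= a) ~= {lhs:.4f}  <=  E[X]/a ~= {rhs:.4f}")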

[5] Consider a random source which outputs i.i.d. symbols from the set X = {α1, α2, · · · , αm}, where the
probability of symbol αi is pi ∈ [0, 1]. Note that Σ_{i=1}^{m} pi = 1.

(a) Devise (i) a prefix-free code and (ii) a uniquely decodable code (which is not prefix-free) for the source.
(b) Let E[L] := Σ_{i=1}^{m} pi li. Inspired by Kraft’s inequality on lengths, we seek to solve the following constrained
optimization problem using the basic Lagrange multiplier approach:

\[
\min_{(l_1, l_2, \cdots, l_m)\,:\ \sum_{i=1}^{m} 2^{-l_i} \,\le\, 1} \ E[L]
\]

where the minimization is over all lengths l1, l2, · · · , lm satisfying Kraft’s inequality. Solve it to determine
the optimum E[L] and an optimum collection of lengths (l1*, l2*, · · · , lm*). (Relax the requirement that the
lengths be integers when solving the optimization problem.) Comment meaningfully (1 out of 4
marks reserved) on the purpose of the problem, what it formalizes, and the result (esp. the optimal lengths).
Hint: For the given objective function f(l1, l2, · · · , lm), construct the Lagrangian-augmented function
F(l1, l2, · · · , lm, λ) = f(l1, l2, · · · , lm) + λ · (constraint), where λ is the Lagrange multiplier.
Then follow standard calculus.

2+4=6 marks
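
A numerical companion to part (b) (assuming Python with numpy and scipy; the probability vector p below is only an example): it minimises E[L] over real-valued lengths subject to Kraft's inequality, so the analytical Lagrange-multiplier solution can be checked against it.

import numpy as np
from scipy.optimize import minimize

p = np.array([0.5, 0.25, 0.125, 0.125])              # example source probabilities (sum to 1)

objective = lambda l: float(np.dot(p, l))             # E[L] = sum_i p_i * l_i
kraft = {"type": "ineq",                              # SLSQP convention: fun(l) >= 0
         "fun": lambda l: 1.0 - np.sum(2.0 ** (-l))}  # i.e. sum_i 2^{-l_i} <= 1

res = minimize(objective, x0=np.full(len(p), 2.0), method="SLSQP",
               bounds=[(0.0, None)] * len(p), constraints=[kraft])

print("numerical optimum lengths:", np.round(res.x, 3))
print("numerical optimum E[L]:   ", round(float(res.fun), 4))
# A natural benchmark to compare with is the entropy of p:
print("entropy of p:             ", round(float(-np.sum(p * np.log2(p))), 4))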

Wish you the best of luck!
