
Dept. of Electronics and Electrical Communications Engineering
Indian Institute of Technology, Kharagpur

Communication-II [EC31204]

Assignment-1

Group no.: 17

Members:

1) Venu Gopal [22EC30032]


2) Shreyans Sharma [22EC10072]
3) Harsh Raj [22EC30061]
4) Aditya Agarwal [22EC10004]

Date: 24 Jan 2025


1 Question 1
(Written by Harsh Raj)
Let G: Event that the selected student is a genius,
C: Event that the selected student loves chocolates.

Given:
P(G) = 0.6, P(C) = 0.7, P(G ∩ C) = 0.4
Now, the probability that the selected student belongs neither to G nor to C is:

P(selected student belongs neither to G nor to C) = P((G ∪ C)′ )

P((G ∪ C)′ ) = 1 − P(G ∪ C)


Now,
P(G ∪ C) = P(G) + P(C) − P(G ∩ C)

=⇒ P((G ∪ C)′ ) = 1 − P(G ∪ C)


= 1 − [P(G) + P(C) − P(G ∩ C)]
= 1 − [0.6 + 0.7 − 0.4]
= 1 − 0.9
= 0.1

Thus, the probability that a randomly selected student is neither a genius nor a chocolate lover is:

P(neither G nor C) = 0.1
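A quick numerical check of the inclusion-exclusion computation (a minimal Python sketch using the values given above):

```python
# Sanity check of the inclusion-exclusion computation with the given values.
p_g, p_c, p_gc = 0.6, 0.7, 0.4          # P(G), P(C), P(G ∩ C)

p_union = p_g + p_c - p_gc              # P(G ∪ C)
p_neither = 1 - p_union                 # P((G ∪ C)')
print(round(p_neither, 10))             # 0.1
```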

2 Question 2 [solved by Shreyans]
For the experiment of rolling a die, we have:

Ω = {1, 2, 3, 4, 5, 6}

Let X be the random variable denoting the outcome of the roll. Let us consider the σ-algebra to be
the power set of Ω.
Let p and q denote the probability of getting an even face and an odd face respectively. Given that:

p = 2q (1)

From the normalization axiom, we have:


P(Ω) = 1
Thus:
∑_{i=1}^{6} P(X = i) = 1

Substituting:
3p + 3q = 1 (2)
Solving equations (1) and (2), we get:
q = 1/9,  p = 2/9
Thus, the probability model for this die is as follows:

x:        1    2    3    4    5    6
P(X = x): 1/9  2/9  1/9  2/9  1/9  2/9

Let A denote the event of getting an outcome less than 4, then:


A = ⋃_{i=1}^{3} Ai

where Ai denotes the event of getting the number i as the outcome.


Since all Ai are disjoint, we can use the additivity axiom. Thus:
P(A) = ∑_{i=1}^{3} P(Ai) = P(A1) + P(A2) + P(A3)

Substituting the values:


P(A) = 1/9 + 2/9 + 1/9 = 4/9
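As a sanity check of the model and of P(A) = 4/9, here is a minimal Python sketch (the face probabilities are exactly those in the table above):

```python
import random

# Die model from Question 2: even faces have probability 2/9, odd faces 1/9.
probs = {1: 1/9, 2: 2/9, 3: 1/9, 4: 2/9, 5: 1/9, 6: 2/9}

# Exact value of P(outcome < 4).
p_A = sum(p for face, p in probs.items() if face < 4)
print(p_A)                                            # 0.444... = 4/9

# Monte Carlo cross-check.
faces, weights = zip(*probs.items())
samples = random.choices(faces, weights=weights, k=100_000)
print(sum(s < 4 for s in samples) / len(samples))     # ≈ 4/9
```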

3 Question 3 [solved by Shreyans]
For the given experiment, let U denote the set of all possible outcomes from the die, then:

U = {1, 2, 3, 4}

Any roll of the die can result in any number x ∈ U . As per the question, the experiment is finished
when the roll results in an even number. Hence, the last roll of the die must result in an even number
in U , and all the throws before that must result in an odd number in U .

Let Ao denote the event of getting an odd number, that is, some y ∈ {1, 3}. Let Ae denote the
event of getting an even number, that is, some w ∈ {2, 4}.
Thus, the sample space Ω can be represented as:

Ω = {(Ae ), (Ao Ae ), (Ao Ao Ae ), . . . }

Formally, this can also be written as:

Ω = {(x1 , x2 , . . . , xn ) | x1 , x2 , . . . , xn−1 ∈ {1, 3}, xn ∈ {2, 4}, n ∈ N}
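To make the sample space concrete, the following minimal Python sketch simulates the experiment (assuming a fair 4-sided die, as in the question); every generated outcome is a finite sequence of odd rolls terminated by an even roll, matching Ω above:

```python
import random

def run_experiment(rng=random):
    """Roll a fair 4-sided die until an even number appears;
    return the whole sequence of rolls (one outcome from Ω)."""
    rolls = []
    while True:
        x = rng.randint(1, 4)
        rolls.append(x)
        if x % 2 == 0:          # the experiment ends at the first even roll
            return tuple(rolls)

# Sampled outcomes: each sequence ends in 2 or 4, and all earlier rolls are 1 or 3.
for _ in range(5):
    print(run_experiment())
```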

4 Question 4 [solved by Shreyans]
Let A, B, and C be the opponents, and let a, b, and c be the corresponding probabilities of winning
against each opponent. Without loss of generality, assume a > b > c. Hence, A is the weakest
opponent and C is the strongest.

Let E denote the event of a win against opponent E, and Ē denote a loss against E. For any
permutation P1 P2 P3 of ABC, let:
- The first game be played against P1 (with win probability p1 ),
- The second against P2 (with win probability p2 ),
- The last against P3 (with win probability p3 ).

Let W denote the event of winning the tournament. To win the tournament, we must win at least two
games in a row. Hence:
W = (P̄1 P2 P3) ∪ (P1 P2 P̄3) ∪ (P1 P2 P3)
Clearly, all of the above events are disjoint. Thus, using the additivity axiom, we get:

P(W) = P(P̄1 P2 P3) + P(P1 P2 P̄3) + P(P1 P2 P3)

Substituting the probabilities:

P(W ) = (1 − p1 )p2 p3 + p1 p2 (1 − p3 ) + p1 p2 p3

Simplifying:
P(W ) = p2 (p1 + p3 − p1 p3 ) = p2 (p1 + p3 ) − p1 p2 p3
Notice that the term p1 p2 p3 appears in every permutation as it is (since the order of multiplication
does not matter). Hence, to maximize the probability of winning the tournament, we need to maximize
the term p2 (p1 + p3 ).

Again, since the order of p1 and p3 does not matter (they only appear as a sum), the possible options
are:
a(b + c) if A plays second,
b(a + c) if B plays second,
c(a + b) if C plays second.
Clearly:
a(b + c) > b(a + c) because a > b,
a(b + c) > c(a + b) because a > c.
Hence, playing A second turns out to be optimal, and the order of B and C does not matter, as shown.
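A small enumeration makes the conclusion easy to check numerically. The win probabilities below are illustrative placeholders satisfying a > b > c, not values given in the problem:

```python
from itertools import permutations

# Hypothetical win probabilities against A, B, C (a > b > c); values are illustrative.
win_prob = {"A": 0.9, "B": 0.5, "C": 0.2}

def p_win_tournament(order):
    """P(winning at least two games in a row) for a given playing order."""
    p1, p2, p3 = (win_prob[o] for o in order)
    return (1 - p1) * p2 * p3 + p1 * p2 * (1 - p3) + p1 * p2 * p3

for order in permutations("ABC"):
    print("".join(order), round(p_win_tournament(order), 4))
# The two orders with A in the middle give the largest probability,
# and swapping B and C around it does not change the value.
```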

5 Question 5
(Written by Harsh Raj)
Let Xi be the indicator that the i-th toss is a head (Xi = 1 for a head, Xi = 0 for a tail).

Claim:
P(X1 = 1, X2 = 1 | X1 = 1) ≥ P(X1 = 1, X2 = 1 | X1 = 1 OR X2 = 1)

Assumptions:
1. The Xi are i.i.d.

2. P(Xi = 1) = p (kept general, not necessarily 1/2)

Solution:
Since X1 and X2 are independent, we can say:

P(X1 = 1, X2 = 1 | X1 = 1) = P(X2 = 1) = p
and:
P(X1 = 1, X2 = 1) = P(X1 = 1) · P(X2 = 1) = p²
The probability of getting at least one head:

P(at least one head) = P(X1 = 1, X2 = 1) + P(X1 = 1, X2 = 0) + P(X1 = 0, X2 = 1)


= p² + 2 · p · (1 − p)
= p² + 2p − 2p²
= 2p − p²

The conditional probability given X1 = 1 OR X2 = 1 is:

P(X1 = 1, X2 = 1 | X1 = 1 OR X2 = 1) = p² / P(at least one head) = p² / (2p − p²)

Verifying Alice's claim:

P(X1 = 1, X2 = 1 | X1 = 1) ≥ P(X1 = 1, X2 = 1 | X1 = 1 OR X2 = 1)

p ≥ p² / (2p − p²)

=⇒ 2p − p² ≥ p²
=⇒ 2p ≥ 2p²
=⇒ 1 ≥ p
This inequality holds for all valid probabilities (0 < p ≤ 1; p > 0 is needed so that the conditioning events have positive probability).

Conclusion:
Alice is right about her claim. Moreover, the fairness of the toss (the value of p) does not affect its
correctness, since the inequality holds for every valid value of p.
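A quick numerical check of the claim over a few bias values (a minimal Python sketch; the grid of p values is arbitrary):

```python
# Numerical check of Alice's claim over a grid of bias values p.
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    lhs = p                         # P(X1 = 1, X2 = 1 | X1 = 1)
    rhs = p**2 / (2*p - p**2)       # P(X1 = 1, X2 = 1 | X1 = 1 OR X2 = 1)
    print(p, round(lhs, 4), round(rhs, 4), lhs >= rhs)   # last column is always True
```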

6 Question 6
(solved by: Venu Gopal)
Let xi be the event that the i-th piece picked up is not defective, where i = 1, 2, 3, 4.
Given:
• Number of defective items, D = 5

• Total number of pieces, N = 100

X : All four pieces picked up are not defective.

P (X) = P (x1 ∩ x2 ∩ x3 ∩ x4 )
Using the multiplication rule:

P (X) = P (x1 ) · P (x2 | x1 ) · P (x3 | x1 ∩ x2 ) · P (x4 | x1 ∩ x2 ∩ x3 )

Substituting probabilities:
P(x1) = 95/100

P(x2 | x1) = 94/99

P(x3 | x1 ∩ x2) = 93/98

P(x4 | x1 ∩ x2 ∩ x3) = 92/97

P(X) = (95/100) · (94/99) · (93/98) · (92/97)
Simplify:
P(X) ≈ 0.812
Thus, the probability of accepting the batch is:

P(X) ≈ 0.812
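The same number can be checked in Python, both by the multiplication rule and by the equivalent counting (hypergeometric) argument:

```python
from math import comb

# All four sampled pieces non-defective (N = 100 pieces, 5 defective, 4 drawn).
p_sequential = (95/100) * (94/99) * (93/98) * (92/97)   # multiplication rule
p_counting   = comb(95, 4) / comb(100, 4)               # equivalent counting argument
print(round(p_sequential, 4), round(p_counting, 4))     # both ≈ 0.8119
```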

7 Question 7
(solved by: Venu Gopal)
Let XB : Number of heads Bob gets.
XA : Number of heads Alice gets.

Given:

• Total coins: 2n + 1.
• Bob tosses n + 1 coins.
• Alice tosses n coins.
• Each coin has probability of heads: p = 1/2.

• Coin tosses are independent.


Consider the situation after Bob's first n tosses (i.e., before he tosses his last coin) and after all n of Alice's tosses.

Let,

• XB1 : Number of heads Bob has after his first n tosses.

• XA1 : Number of heads Alice has after her n tosses.
By symmetry,

P (XB1 > XA1 ) = P (XB1 < XA1 ) = p1


which implies,
P (XB1 = XA1 ) = 1 − 2 · p1
Now, on the (n + 1)-th throw, the probability of Bob getting a head is 1/2. The probability that Bob
gets more heads than Alice is:

P(XB > XA) = P(XB1 > XA1) · 1 + P(XB1 = XA1) · (1/2) + P(XB1 < XA1) · 0
Substituting:
P(XB > XA) = p1 + (1 − 2 · p1) · (1/2)
Simplify:
P(XB > XA) = 1/2
Hence, the probability of Bob getting more heads than Alice is 1/2.
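A Monte Carlo check of the result (a minimal sketch; n = 10 is an arbitrary choice, and the seed is fixed only for reproducibility):

```python
import random

# Monte Carlo check that P(Bob's heads > Alice's heads) = 1/2 with fair coins.
def bob_beats_alice(n, rng):
    bob = sum(rng.random() < 0.5 for _ in range(n + 1))    # Bob tosses n + 1 coins
    alice = sum(rng.random() < 0.5 for _ in range(n))      # Alice tosses n coins
    return bob > alice

rng = random.Random(0)
n, trials = 10, 200_000
print(sum(bob_beats_alice(n, rng) for _ in range(trials)) / trials)   # ≈ 0.5
```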

8 Question 8
(solved by: Venu Gopal)
Let x1 be the event that the correct path is chosen when one dog decides, and
x2 be the event that the correct path is chosen when two dogs decide.

Given:
• P (x1 ) = p1
• Both dogs choose the path independently.

P(both dogs agree on the same path) = P(both choose the correct path) + P(both choose the wrong path)

P(both dogs agree on the same path) = p1 · p1 + (1 − p1) · (1 − p1) = p1² + (1 − p1)²

P(the dogs do not agree) = 1 − P(both dogs agree on the same path)

Substituting and simplifying:

P(the dogs do not agree) = 1 − [p1² + (1 − 2p1 + p1²)]
                         = 1 − [2p1² − 2p1 + 1]
                         = 2p1 − 2p1²
                         = 2p1(1 − p1)


That is, if the dogs do not agree, we choose a path at random, which turns out to be correct
with probability 1/2.

The probability P(x2) is given by:

P(x2) = P(both dogs agree on the correct path) + P(the dogs do not agree) · (1/2)

Substituting the values:

P(x2) = p1² + 2p1(1 − p1) · (1/2)

Simplify:
P(x2) = p1² + p1(1 − p1)

Further simplification:
P(x2) = p1² + p1 − p1² = p1

We can see that P(x1) = P(x2); hence, both strategies are equally likely to choose the correct path.
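A simulation of both strategies confirms the equality; p1 = 0.7 below is an illustrative value, not one given in the problem:

```python
import random

# Monte Carlo comparison of the two strategies in Question 8.
def one_dog(p1, rng):
    return rng.random() < p1                       # the single dog picks the correct path w.p. p1

def two_dogs(p1, rng):
    a = rng.random() < p1                          # first dog chooses the correct path?
    b = rng.random() < p1                          # second dog chooses the correct path?
    return a if a == b else rng.random() < 0.5     # agree: follow them; disagree: flip a coin

rng, p1, trials = random.Random(0), 0.7, 200_000
print(sum(one_dog(p1, rng) for _ in range(trials)) / trials)    # ≈ p1
print(sum(two_dogs(p1, rng) for _ in range(trials)) / trials)   # ≈ p1 as well
```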

9 Question 9
(Written by Harsh Raj)
Let Xi : the i-th sent signal, Yi : the i-th received signal, Zi : the indicator that the i-th signal is received correctly.

Xi ∈ {0, 1}, Yi ∈ {0, 1}, Zi ∈ {0, 1}

Given:
P(X = 0) = p, P(X = 1) = 1 − p,
P(Y = 0 | X = 0) = 1 − ϵ0 , P(Y = 0 | X = 1) = ϵ1 ,
P(Y = 1 | X = 0) = ϵ0 , P(Y = 1 | X = 1) = 1 − ϵ1 .

Part A
The probability that the k-th symbol is received correctly is:

P(k-th symbol is received correctly) = P(Xk = 1) · P(Yk = 1 | Xk = 1) + P(Xk = 0) · P(Yk = 0 | Xk = 0)

=⇒ P(k-th symbol is received correctly) = (1 − p) · (1 − ϵ1) + p · (1 − ϵ0)


Part B
The probability of receiving the sequence 1011 correctly is:

P(1011 is received correctly) = P(Y1 = 1 | X1 = 1)·P(Y2 = 0 | X2 = 0)·P(Y3 = 1 | X3 = 1)·P(Y4 = 1 | X4 = 1)

(Since the transmissions are independent)

P(1011 is received correctly) = (1 − ϵ1)³ · (1 − ϵ0)

Part C
If the sent signal is 000, the message is decoded correctly if at least two 0s are correctly detected. The
probability is:

P(At least two 0s are detected correctly) = P(Z1 = 1, Z2 = 1, Z3 = 0) + P(Z1 = 1, Z2 = 0, Z3 = 1) + P(Z1 = 0, Z2 = 1, Z3 = 1) + P(Z1 = 1, Z2 = 1, Z3 = 1)

=⇒ P(At least two 0s are detected correctly) = (1 − ϵ0)² · ϵ0 + (1 − ϵ0)² · ϵ0 + (1 − ϵ0)² · ϵ0 + (1 − ϵ0)³

=⇒ P(0 is decoded correctly) = 3 · (1 − ϵ0)² · ϵ0 + (1 − ϵ0)³


Part D
As the probability of toggling X = 0 (i.e., ϵ0) decreases, the probability of correctly decoding "000"
increases.

Part E
Using Bayes’ rule:

P(X = 0 | received = 101) = [P(received = 101 | X = 0) · P(X = 0)] / P(received = 101)

Using the LOTP :

P(received = 101) = P(received = 101 | X = 0) · P(X = 0) + P(received = 101 | X = 1) · P(X = 1)

P(received = 101) = p · ϵ0 · (1 − ϵ0) · ϵ0 + (1 − p) · (1 − ϵ1)² · ϵ1

=⇒ P(X = 0 | received = 101) = [p · ϵ0² · (1 − ϵ0)] / [p · ϵ0² · (1 − ϵ0) + (1 − p) · (1 − ϵ1)² · ϵ1]

That is, the probability that the symbol sent was 0, given that 101 was received, is the expression above.
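For concreteness, the expressions from Parts A, C, and E can be evaluated numerically. The sketch below uses placeholder values for p, ϵ0, and ϵ1 (the problem leaves them symbolic):

```python
# Illustrative evaluation of the Question 9 expressions; p, eps0, eps1 are
# placeholder values, not numbers given in the problem.
p, eps0, eps1 = 0.4, 0.1, 0.2

# Part A: probability that a single symbol is received correctly.
p_correct = (1 - p) * (1 - eps1) + p * (1 - eps0)

# Part C: probability that "000" is decoded correctly (at least two 0s detected).
p_000 = 3 * (1 - eps0)**2 * eps0 + (1 - eps0)**3

# Part E: posterior probability that 0 was sent given "101" was received.
num = p * eps0**2 * (1 - eps0)
post = num / (num + (1 - p) * (1 - eps1)**2 * eps1)

print(round(p_correct, 4), round(p_000, 4), round(post, 4))
```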

10 Question 10 [Solved by Aditya]
We are working with two types of users:
• n1 voice users: These users occasionally need a voice connection.
• n2 data users: These users occasionally need a data connection.
Each user connects independently:
• A voice user connects with probability p1 and requires r1 bits/sec when active.
• A data user connects with probability p2 and requires r2 bits/sec when active.
The system has a total capacity of C bits/sec. If the total demand exceeds C, the system becomes
overwhelmed. Our goal is to calculate the probability of this happening.
Let’s define the key random variables:
• V : The number of active voice users. Since each voice user connects independently, V follows a
binomial distribution:
V ∼ Binomial(n1 , p1 ).

• D: The number of active data users. Similarly, D follows:

D ∼ Binomial(n2 , p2 ).

• T : The total demand on the system, which is the sum of the contributions from voice and data
users:
T = r1 · V + r2 · D.

The system is considered overloaded when the total demand exceeds its capacity:

P (Overload) = P (T > C).

This is the probability we want to calculate.


Calculating P (T > C) exactly can be tricky because T depends on the sum of two binomial random
variables. However, when the number of users is large, we can use the Central Limit Theorem (CLT)
to approximate T as a Gaussian (normal) random variable.
Using the CLT:
T ∼ N(µT , σT²),
where:
• µT : The expected total demand, which is:

µT = r1 · n1 · p1 + r2 · n2 · p2 .

• σT² : The variance of the total demand, which is:

σT² = r1² · n1 · p1 · (1 − p1) + r2² · n2 · p2 · (1 − p2).

Now that we know T is approximately normal, the probability of overload can be expressed as:

P (Overload) = P (T > C).

Standardizing T (to convert it into a standard normal random variable Z), we have:

P(Overload) = P(Z > (C − µT)/σT),

where Z ∼ N (0, 1) (a standard normal random variable).
Finally, we use the cumulative distribution function (CDF) of the standard normal distribution,
Φ(z), to express the probability:

P(Overload) = 1 − Φ((C − µT)/σT).

Here’s what everything means:


• µT is the average total demand, based on the number of users and their probabilities of connecting.

• σT is the standard deviation of the total demand, which reflects how much variation we expect.
• (C − µT)/σT is the "z-score" that measures how far the system capacity C is from the average demand
µT, in units of standard deviation.

• The overload probability, P(Overload), gives us the likelihood that the demand will exceed capacity,
causing the system to be overwhelmed.
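A minimal numerical sketch of the approximation, using only the standard library; every parameter value below (user counts, rates, capacity) is an assumed illustration, not data from the problem:

```python
from math import erf, sqrt

# Gaussian-approximation overload probability; all numbers below are assumed,
# purely for illustration.
n1, p1, r1 = 100, 0.1, 64_000        # voice users, activity probability, bits/sec
n2, p2, r2 = 50, 0.2, 200_000        # data users, activity probability, bits/sec
C = 3_000_000                        # assumed link capacity in bits/sec

mu    = r1 * n1 * p1 + r2 * n2 * p2
sigma = sqrt(r1**2 * n1 * p1 * (1 - p1) + r2**2 * n2 * p2 * (1 - p2))

def phi(z):
    """Standard normal CDF Φ(z) via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

print(1 - phi((C - mu) / sigma))     # P(Overload) ≈ 1 − Φ((C − µT)/σT)
```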

11 Question 11 [Solved by Aditya]
(a) Binomial Random Variable
The binomial random variable represents the number of successes in n independent trials of a
Bernoulli process, where each trial has the same probability p of success.
Event Description: Tossing a biased coin n times, where the probability of landing heads (success)
is p. The binomial random variable counts how many times the coin lands on heads out of the n tosses.
 
X ∼ Binomial(n, p),    P(X = k) = (n choose k) · p^k · (1 − p)^(n−k),    k = 0, 1, . . . , n

(b) Geometric Random Variable


The geometric random variable represents the number of trials needed to get the first success in a
sequence of independent Bernoulli trials with probability p of success.
Event Description: Rolling a fair die repeatedly, where getting a 6 is considered a success. The
geometric random variable measures how many rolls you make until you roll the first 6 (success).

X ∼ Geometric(p),    P(X = k) = (1 − p)^(k−1) · p,    k = 1, 2, . . .

(c) Negative Binomial Random Variable


The negative binomial random variable represents the number of trials required to achieve r
successes in a Bernoulli process with success probability p.
Event Description: Flipping a coin and aiming to get exactly 5 heads. The negative binomial
random variable counts how many flips are needed to achieve the 5th head.
 
X ∼ Negative Binomial(r, p),    P(X = k) = (k−1 choose r−1) · p^r · (1 − p)^(k−r),    k = r, r + 1, . . .

(d) Hypergeometric Random Variable


The hypergeometric random variable describes the number of successes in n draws from a finite
population of size N , containing K successes, where the draws are made without replacement.
Event Description: A jar containing N marbles, of which K are red and the rest are blue. If you
draw n marbles from the jar without putting any back, the hypergeometric random variable counts
how many of the drawn marbles are red.
X ∼ Hypergeometric(N, K, n),    P(X = k) = [(K choose k) · (N−K choose n−k)] / (N choose n),    k = 0, 1, . . . , min(n, K)

(e) Pascal Random Variable


The Pascal random variable is another name for the negative binomial random variable, but it
emphasizes counting failures before achieving r successes.
Event Description: Consider you’re playing a game where you aim to win 4 rounds. The Pascal
random variable measures how many losses you incur before you finally win the 4th round.
 
X ∼ Pascal(r, p),    P(X = k) = (k+r−1 choose r−1) · p^r · (1 − p)^k,    k = 0, 1, 2, . . .
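Assuming SciPy is available, the PMFs above can be evaluated directly; note that SciPy's nbinom is parameterized by the number of failures (the Pascal form), so the trial-count negative binomial is obtained by shifting. All parameter values below are illustrative:

```python
from scipy.stats import binom, geom, nbinom, hypergeom

n, p, r = 10, 0.3, 4                      # illustrative parameters
N_pop, K_red, n_draw = 20, 7, 5

print(binom.pmf(3, n, p))                 # Binomial: P(X = 3) successes in n trials
print(geom.pmf(5, p))                     # Geometric: first success on trial 5
# SciPy's nbinom counts failures before the r-th success (the "Pascal" form above);
# shifting by r gives the trial-count negative binomial.
print(nbinom.pmf(6, r, p))                # Pascal: 6 failures before the 4th success
print(nbinom.pmf(10 - r, r, p))           # Negative binomial: r-th success on trial 10
print(hypergeom.pmf(2, N_pop, K_red, n_draw))   # 2 red marbles in 5 draws without replacement
```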

(f) Events and Their Probabilities
1. The first success does not occur till the m-th trial (m is a constant)
This means that the first m−1 trials are failures, and the m-th trial is the first success. The probability
of this event is:

P(First success at m) = (1 − p)^(m−1) · p


Here:
• (1 − p)^(m−1): Probability of m − 1 consecutive failures.
• p: Probability of a success on the m-th trial.

2. The third success occurs between the m-th and m + k-th trial (m and k are constants)
This means:
• Exactly two successes occur in the first m − 1 trials.
• Exactly one success occurs in the k trials between the m-th and m + k − 1-th trials.
The probability of this event is:
   
P(Third success between m and m + k) = (m−1 choose 2) · p² · (1 − p)^(m−3) · (k choose 1) · p · (1 − p)^(k−1)
Here:

• (m−1 choose 2) · p² · (1 − p)^(m−3): Probability of exactly two successes in the first m − 1 trials.

• (k choose 1) · p · (1 − p)^(k−1): Probability of exactly one success in the k trials between the m-th and
(m + k − 1)-th trials.

(g) Range and Distribution of Y


We are given:
• m1 , m2 , . . . , mk are fixed and denote the trial numbers at which the 1st , 2nd , . . . , k th successes
occur.
• Y is the random variable denoting the number of trials required to achieve the (k + 1)th success.

Range of Y
After the k th success occurs at trial mk , the (k + 1)th success must occur on a subsequent trial.
Therefore, the range of Y is:
Y ∈ {mk + 1, mk + 2, . . . }

Distribution of Y
The (k + 1)th success occurs only after mk trials, so the distribution of Y is a shifted geometric
distribution. The probability that the (k + 1)th success occurs at the n-th trial (n > mk ) is:
P(Y = n) = (1 − p)^(n − mk − 1) · p
Here:
• (1 − p)^(n − mk − 1): Probability of n − mk − 1 failures after the k-th success.
• p: Probability of success on the n-th trial.

(h) Distribution of Z
The random variable Z represents the number of trials required to get the first occurrence of three
successive 1s. For Z = n, trials n − 2, n − 1, and n must all be successes and, for n > 3, trial n − 3 must be
a failure, with no run of three successes occurring earlier. This gives P(Z = 3) = p³ and
P(Z = n) = (1 − p) · p³ for n = 4, 5, 6; for larger n, the no-earlier-run condition leads to a recursion rather
than a simple closed form.

(i) Create an Interesting Problem

Problem
You have a biased coin with P (Heads) = 0.6. You flip the coin until you see two heads in a row. What
is the expected number of flips?

Solution
This problem is modeled as a Markov process with three states:
• State 0: No heads have been observed.
• State 1: One head has been observed.
• State 2: Two consecutive heads (HH) have been observed, and the process stops.
Let E0 and E1 represent the expected number of flips starting from State 0 and State 1, respectively:
E0 = 1 + pE1 + (1 − p)E0
E1 = 1 + p · 0 + (1 − p)E0
Rearranging:
E0 = (1 + pE1)/p
E1 = 1 + (1 − p)E0
Substituting E1 = 1 + (1 − p)E0 into E0:
E0 = [1 + p(1 + (1 − p)E0)]/p
Simplifying:
E0 = (1 + p)/p + (1 − p)E0
Rearranging:
E0 − (1 − p)E0 = (1 + p)/p
E0 · p = (1 + p)/p
E0 = (1 + p)/p²

Final Result
For p = 0.6, the expected number of flips is:
E[Number of flips] = (1 + 0.6)/(0.6)² = 1.6/0.36 ≈ 4.44
Thus, on average, 4.44 flips are needed to observe two consecutive heads.
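A Monte Carlo check of this expected value (a minimal sketch; the seed and number of trials are arbitrary):

```python
import random

# Monte Carlo check of E[number of flips until two heads in a row] for p = 0.6.
def flips_until_hh(p, rng):
    run, n = 0, 0
    while run < 2:                          # stop at the first occurrence of "HH"
        n += 1
        run = run + 1 if rng.random() < p else 0
    return n

rng, trials = random.Random(0), 200_000
print(sum(flips_until_hh(0.6, rng) for _ in range(trials)) / trials)   # ≈ (1 + p)/p² ≈ 4.44
```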
