
MATHEMATICS FOR COMPUTER SCIENCE (BCS301) 2024

MODULE-2
Joint probability distribution: Joint Probability distribution for two discrete random variables,
expectation, covariance and correlation. Markov Chain: Introduction to Stochastic Process, Probability
Vectors, Stochastic matrices, Regular stochastic matrices, Markov chains, higher transition probabilities,
Stationary distribution of Regular Markov chains and absorbing states.

Lecture-1 Joint probability distribution for two discrete random variables


Joint probability distribution: 𝑃(𝑥, 𝑦) is called the joint probability function for two discrete random variables
𝑋 and 𝑌 if i) 𝑃(𝑥, 𝑦) ≥ 0 for all 𝑥, 𝑦, and ii) ∑𝑥 ∑𝑦 𝑃(𝑥, 𝑦) = 1.
The collection [(𝑥, 𝑦), 𝑃(𝑥, 𝑦)] is called the joint probability distribution.

Marginal distribution of 𝑋 : [𝑥, 𝑓(𝑥)] where 𝑓(𝑥) = ∑𝑦 𝑃(𝑥, 𝑦).


Marginal distribution of 𝑌 : [𝑦, 𝑔(𝑦)] where 𝑔(𝑦) = ∑𝑥 𝑃(𝑥, 𝑦).

𝐸(𝑋) = 𝜇𝑋 = ∑ 𝑥𝑓(𝑥),
𝑉(𝑋) = ∑ 𝑥²𝑓(𝑥) − (𝜇𝑋)², and 𝜎𝑋 = √𝑉(𝑋) .

Similarly,

𝐸(𝑌) = 𝜇𝑌 = ∑ 𝑦𝑔(𝑦),
𝑉(𝑌) = ∑ 𝑦²𝑔(𝑦) − (𝜇𝑌)², and 𝜎𝑌 = √𝑉(𝑌) .

Expectation of a function ℎ(𝑥, 𝑦): 𝐸[ℎ(𝑥, 𝑦)] = ∑𝑥 ∑𝑦 ℎ(𝑥, 𝑦)𝑃(𝑥, 𝑦) .
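These definitions translate directly into code. The following Python sketch (illustrative only; the joint table is a made-up example, not one from these notes) validates a joint pmf and computes a marginal, an expectation, a variance, and 𝐸[ℎ(𝑋, 𝑌)]:

```python
from fractions import Fraction as F

# A made-up joint pmf for illustration (not an example from these notes)
P = {(0, 0): F(1, 4), (0, 1): F(1, 4), (1, 0): F(1, 4), (1, 1): F(1, 4)}

# i) P(x, y) >= 0 for all x, y and ii) the probabilities sum to 1
assert all(p >= 0 for p in P.values()) and sum(P.values()) == 1

# Marginal distribution of X: f(x) = sum over y of P(x, y)
f = {}
for (x, y), p in P.items():
    f[x] = f.get(x, 0) + p

mu_X = sum(x * f[x] for x in f)                   # E(X)
var_X = sum(x**2 * f[x] for x in f) - mu_X**2     # V(X)

# E[h(X, Y)] = sum over x, y of h(x, y) P(x, y), here with h(x, y) = x + y
E_h = sum((x + y) * p for (x, y), p in P.items())
print(mu_X, var_X, E_h)   # 1/2 1/4 1
```

Using exact fractions avoids floating-point rounding when checking that probabilities sum to 1.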


Examples:
1. Two pens are selected at random from a box that contains 3 blue pens, 2 red pens and 3 green pens. If X is
the number of blue pens selected, and Y is the number of red pens selected,
then find i) Joint probability distribution ii) 𝑃(𝑥 + 𝑦 ≤ 1) .

Solution: There are 8 pens, and ⁸C₂ = 28 ways to select two of them. The two selected pens may be both
blue, one blue and one red, one blue and one green, both red, one red and one green, or both green.
Following table gives the number of ways, probability, 𝑥 and 𝑦 values.
Selected pens   Number of ways     𝑥   𝑦   Probability
BB              ³C₂ = 3            2   0   3/28
BR              ³C₁ × ²C₁ = 6      1   1   6/28
BG              ³C₁ × ³C₁ = 9      1   0   9/28
RR              ²C₂ = 1            0   2   1/28
RG              ²C₁ × ³C₁ = 6      0   1   6/28
GG              ³C₂ = 3            0   0   3/28

DEPARTMENT OF SCIENCE & HUMANITIES /C.E.C.



Joint distribution is given in the following table.

 𝑋\𝑌     0      1      2
  0     3/28   6/28   1/28
  1     9/28   6/28    0
  2     3/28    0      0

𝑃(𝑥 + 𝑦 ≤ 1) = 𝑃(0, 0) + 𝑃(0, 1) + 𝑃(1, 0) = 3/28 + 6/28 + 9/28 = 0.6429.
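The same joint distribution can be reproduced by brute-force enumeration of all 28 selections. A Python sketch (illustrative; this is a check, not part of the solution method):

```python
from itertools import combinations
from fractions import Fraction as F

# 3 blue, 2 red, 3 green pens; all C(8, 2) = 28 selections equally likely
pens = ['B'] * 3 + ['R'] * 2 + ['G'] * 3
picks = list(combinations(range(8), 2))
P = {}
for i, j in picks:
    x = (pens[i] == 'B') + (pens[j] == 'B')   # number of blue pens selected
    y = (pens[i] == 'R') + (pens[j] == 'R')   # number of red pens selected
    P[(x, y)] = P.get((x, y), 0) + F(1, len(picks))

assert P[(1, 0)] == F(9, 28)                  # agrees with the table
prob = P[(0, 0)] + P[(0, 1)] + P[(1, 0)]      # P(X + Y <= 1)
print(prob, float(prob))                      # 9/14 0.6428571428571429
```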

2. From 5 boys and 3 girls a committee of 4 is to be formed. Let X be the number of boys and Y the number of
girls in the committee. Find the joint distribution, 𝐸(𝑋) and 𝐸(𝑌).
Solution: Possible number of committees = ⁸C₄ = 70.
A committee may contain one, two, three or four boys.

Number of boys = 𝑥   Number of girls = 𝑦   Possible number of ways   Probability
4                    0                     ⁵C₄ = 5                   5/70
3                    1                     ⁵C₃ × ³C₁ = 30            30/70
2                    2                     ⁵C₂ × ³C₂ = 30            30/70
1                    3                     ⁵C₁ × ³C₃ = 5             5/70
Joint distribution is given in the following table.

 𝑋\𝑌     0      1     2     3
  1      0      0     0    1/14
  2      0      0    3/7    0
  3      0     3/7    0     0
  4     1/14    0     0     0

Marginal distribution of 𝑋:
𝑥      1      2     3     4
𝑓(𝑥)   1/14   3/7   3/7   1/14

Marginal distribution of 𝑌:
𝑦      0      1     2     3
𝑔(𝑦)   1/14   3/7   3/7   1/14


𝐸(𝑋) = 𝜇𝑋 = ∑ 𝑥𝑓(𝑥) = 1/14 + 6/7 + 9/7 + 4/14 = 5/2.
𝐸(𝑌) = 𝜇𝑌 = ∑ 𝑦𝑔(𝑦) = 3/7 + 6/7 + 3/14 = 3/2.

3. The joint distribution of 𝑋 and 𝑌 is given below. Find 𝜇𝑋 , 𝜇𝑌 and 𝐸(𝑋𝑌²).

 𝑋\𝑌     1      3      4
  2     0.1    0.2    0.1
  4     0.15   0.3    0.15

Marginal distribution of 𝑋:
𝑥      2     4
𝑓(𝑥)   0.4   0.6

Marginal distribution of 𝑌:
𝑦      1      3     4
𝑔(𝑦)   0.25   0.5   0.25

𝐸(𝑋) = 𝜇𝑋 = ∑ 𝑥𝑓(𝑥) = 2 × 0.4 + 4 × 0.6 = 3.2.
𝐸(𝑌) = 𝜇𝑌 = ∑ 𝑦𝑔(𝑦) = 1 × 0.25 + 3 × 0.5 + 4 × 0.25 = 2.75.

𝐸(𝑋𝑌²) = ∑𝑥 ∑𝑦 𝑥𝑦²𝑃(𝑥, 𝑦) = 28.
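As a quick numerical check of Example 3 (a Python sketch, with the joint table entered as a dictionary):

```python
# Joint table of Example 3 entered as {(x, y): probability}
P = {(2, 1): 0.1, (2, 3): 0.2, (2, 4): 0.1,
     (4, 1): 0.15, (4, 3): 0.3, (4, 4): 0.15}

E_X = sum(x * p for (x, y), p in P.items())           # E(X)
E_Y = sum(y * p for (x, y), p in P.items())           # E(Y)
E_XY2 = sum(x * y**2 * p for (x, y), p in P.items())  # E(X Y^2)
print(round(E_X, 4), round(E_Y, 4), round(E_XY2, 4))  # 3.2 2.75 28.0
```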

Review questions:

1. 𝜇𝑋 in terms of 𝑃(𝑥, 𝑦)?

2. 𝜇𝑦 in terms of 𝑃(𝑥, 𝑦)?

3. 𝐸(𝑋 + 𝑌)?

4. If a coin is tossed 2 times, 𝑥 denotes the number of heads and 𝑦 the number of tails. Then 𝑃(1, 1) = ?

5. Expected value of X and Y in question 4?

6. Expected value of XY in question 4?


Lecture- 2 Expectation, covariance and correlation of two discrete random variables.


𝐸[ℎ(𝑥, 𝑦)] = ∑𝑥 ∑𝑦 ℎ(𝑥, 𝑦)𝑃(𝑥, 𝑦).

𝐸(𝑋𝑌) = ∑𝑥 ∑𝑦 𝑥𝑦𝑃(𝑥, 𝑦).

Covariance of 𝑋 and 𝑌: 𝐶𝑜𝑣 (𝑋, 𝑌) = 𝐸(𝑋𝑌) − 𝐸(𝑋)𝐸(𝑌)


Correlation: the coefficient of correlation is 𝜌(𝑋, 𝑌) = 𝐶𝑜𝑣(𝑋, 𝑌)/(𝜎𝑋 𝜎𝑌).

Independent random variable:

If 𝑃(𝑥, 𝑦) = 𝑓(𝑥)𝑔(𝑦) for all 𝑥, 𝑦 then 𝑋 and 𝑌 are independent.
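The covariance and correlation formulas translate directly into code. A Python sketch (the helper name `cov_corr` is ours; the sample table is the coin-toss joint distribution worked out in Problem 1 below):

```python
from math import sqrt

def cov_corr(P):
    """Covariance and correlation coefficient from a joint pmf {(x, y): p}."""
    EX = sum(x * p for (x, y), p in P.items())
    EY = sum(y * p for (x, y), p in P.items())
    EXY = sum(x * y * p for (x, y), p in P.items())
    VX = sum(x * x * p for (x, y), p in P.items()) - EX ** 2
    VY = sum(y * y * p for (x, y), p in P.items()) - EY ** 2
    cov = EXY - EX * EY
    return cov, cov / sqrt(VX * VY)

# Joint table of the coin-toss distribution from Problem 1 below
P = {(0, 1): 1/8, (0, 2): 2/8, (0, 3): 1/8,
     (1, 0): 1/8, (1, 1): 2/8, (1, 2): 1/8}
cov, rho = cov_corr(P)
print(cov, rho)   # -0.25 and about -0.577
```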

1. A fair coin is tossed 3 times. Let 𝑋 denote 0 or 1 according as a head or tail occurs on the
first toss. Let 𝑌 denote the number of heads which occur. Find the joint distribution and
marginal distribution of 𝑋 and 𝑌 . Also find 𝐶𝑜𝑣 (𝑋, 𝑌).
Solution:
S HHH HHT HTH HTT THH THT TTH TTT
𝑥 0 0 0 0 1 1 1 1
𝑦 3 2 2 1 2 1 1 0

Joint distribution is

 𝑋\𝑌     0     1     2     3    Sum
  0      0    1/8   2/8   1/8   1/2
  1     1/8   2/8   1/8    0    1/2
 Sum    1/8   3/8   3/8   1/8

Marginal distribution of 𝑋:
𝑥      0     1
𝑓(𝑥)   1/2   1/2

Marginal distribution of 𝑌:
𝑦      0     1     2     3
𝑔(𝑦)   1/8   3/8   3/8   1/8

𝐸(𝑋𝑌) = ∑𝑥 ∑𝑦 𝑥𝑦𝑃(𝑥, 𝑦) = 2/8 + 2/8 = 1/2,


𝐸(𝑋) = ∑ 𝑥𝑓(𝑥) = 1/2.

𝐸(𝑌) = ∑ 𝑦𝑔(𝑦) = 3/8 + 6/8 + 3/8 = 3/2.

𝐶𝑜𝑣 (𝑋, 𝑌) = 𝐸(𝑋𝑌) − 𝐸(𝑋)𝐸(𝑌) = 1/2 − 3/4 = −1/4.

2. The joint probability distribution for two random variables 𝑋 and 𝑌 is as follows

 𝑌\𝑋    −2     −1     4      5
  1     0.1    0.2    0      0.3
  2     0.2    0.1    0.1    0

Determine i) marginal distribution of 𝑋 and 𝑌 ii) 𝐶𝑜𝑣 (𝑋, 𝑌) iii) Correlation of 𝑋 and 𝑌

Solution: Append row and column sums to the given table:

 𝑌\𝑋    −2     −1     4      5     Sum
  1     0.1    0.2    0      0.3   0.6
  2     0.2    0.1    0.1    0     0.4
 Sum    0.3    0.3    0.1    0.3

Marginal distribution of 𝑋 Marginal distribution of 𝑌

𝑥 −2 −1 4 5 𝑦 1 2
𝑓(𝑥) 0.3 0.3 0.1 0.3 𝑔(𝑦) 0.6 0.4

𝐸(𝑋) = 𝜇𝑋 = ∑ 𝑥𝑓(𝑥) = 1, 𝐸(𝑌) = 𝜇𝑌 = ∑ 𝑦𝑔(𝑦) = 1.4.

𝑉(𝑋) = ∑ 𝑥²𝑓(𝑥) − (𝜇𝑋)² = 9.6, 𝑉(𝑌) = ∑ 𝑦²𝑔(𝑦) − (𝜇𝑌)² = 0.24.

𝜎𝑋 = √𝑉(𝑋) = 3.098 , 𝜎𝑌 = √𝑉(𝑌) = 0.4899 .

𝐸(𝑋𝑌) = ∑𝑥 ∑𝑦 𝑥𝑦𝑃(𝑥, 𝑦) = 0.9

𝐶𝑜𝑣 (𝑋, 𝑌) = 𝐸(𝑋𝑌) − 𝐸(𝑋)𝐸(𝑌) = −0.5.


Correlation 𝜌(𝑋, 𝑌) = 𝐶𝑜𝑣(𝑋, 𝑌)/(𝜎𝑋 𝜎𝑌) = −0.5/(3.098 × 0.4899) = −0.3294.

Review questions:

1. Is coefficient of correlation bounded?

2. Explain covariance of two random variables.

3. For continuous random variables define joint distribution.

4. For continuous random variables define covariance.

5. Is 𝜌(𝑋, 𝑌) = 𝜌(𝑌, 𝑋)?

6. 𝜌(𝑋, 𝑋) =?

7. 𝜌(𝑋, −𝑋) =?


Lecture-3 Joint Probability distribution-problems

Conditional Probability distribution:


The conditional probability distribution of the random variable 𝑌, given that 𝑋 = 𝑥, is 𝑃(𝑦|𝑥) = 𝑃(𝑥, 𝑦)/𝑓(𝑥).

The conditional probability distribution of the random variable 𝑋, given that 𝑌 = 𝑦, is 𝑃(𝑥|𝑦) = 𝑃(𝑥, 𝑦)/𝑔(𝑦).

Statistical independence:
Two random variables X and Y are said to be independent if 𝑃(𝑥, 𝑦) = 𝑓(𝑥). 𝑔(𝑦) for all (𝑥, 𝑦).
Problems:
1. If the joint distribution of 𝑋 and 𝑌 is given by 𝑃(𝑥, 𝑦) = (𝑥 + 𝑦)/𝑘, for 𝑥 = 1, 2 and 𝑦 = 0, 1, 2, find the
value of 𝑘, 𝑃(𝑋 + 𝑌 = 3), the probability distribution of 𝑍 = 𝑋 + 2𝑌, and the conditional probabilities 𝑃(𝑦|1),
𝑃(𝑥|1).
Solution: ∑ 𝑃(𝑥, 𝑦) = 1 ⟹ ∑𝑥 ∑𝑦 (𝑥 + 𝑦)/𝑘 = 1
⟹ (1/𝑘)(1 + 2 + 3 + 2 + 3 + 4) = 1
⟹ 𝑘 = 15.
𝑃(𝑋 + 𝑌 = 3) = 𝑃(1, 2) + 𝑃(2, 1) = 3/15 + 3/15 = 2/5.
𝑍 = 𝑋 + 2𝑌 ⟹ 𝑍 takes the values 1, 2, 3, 4, 5 and 6.
Probability distribution of 𝑍 is

𝑧      1      2      3      4      5      6
𝑓(𝑧)   1/15   2/15   2/15   3/15   3/15   4/15

Conditional probability 𝑃(𝑦|1) = 𝑃(1, 𝑦)/𝑓(1) = 𝑃(1, 𝑦)/∑𝑦 𝑃(1, 𝑦) = (5/2) 𝑃(1, 𝑦):

𝑦|1      0     1     2
𝑃(𝑦|1)   1/6   1/3   1/2

Conditional probability 𝑃(𝑥|1) = 𝑃(𝑥, 1)/𝑔(1) = 𝑃(𝑥, 1)/∑𝑥 𝑃(𝑥, 1) = 3𝑃(𝑥, 1):

𝑥|1      1      2
𝑃(𝑥|1)   6/15   9/15
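The conditional distribution 𝑃(𝑦|1) can be verified with exact fractions. A short Python sketch (illustrative):

```python
from fractions import Fraction as F

# Joint pmf P(x, y) = (x + y)/15 for x = 1, 2 and y = 0, 1, 2
P = {(x, y): F(x + y, 15) for x in (1, 2) for y in (0, 1, 2)}
assert sum(P.values()) == 1              # k = 15 makes this a valid pmf

f1 = sum(P[(1, y)] for y in (0, 1, 2))   # marginal f(1) = 6/15 = 2/5
cond = {y: P[(1, y)] / f1 for y in (0, 1, 2)}   # P(y | X = 1)
print(cond)   # {0: Fraction(1, 6), 1: Fraction(1, 3), 2: Fraction(1, 2)}
```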

2. The distribution of two independent variables 𝑋 and 𝑌 are as follows

𝑥 0 1 2 𝑦 1 2
𝑓(𝑥) 0.3 0.3 0.4 𝑔(𝑦) 0.3 0.7


Determine the joint distribution, verify that 𝐶𝑜𝑣 (𝑋, 𝑌) = 0.

Solution: Since 𝑋 and 𝑌 are independent, 𝑃(𝑥, 𝑦) = 𝑓(𝑥)𝑔(𝑦) for all 𝑥, 𝑦 .

The joint distribution is

𝑋 0 1 2 Sum
𝑌
1 0.09 0.09 0.12 0.3

2 0.21 0.21 0.28 0.7

Sum 0.3 0.3 0.4

𝐸(𝑋) = 𝜇𝑋 = ∑ 𝑥𝑓(𝑥) = 1.1, 𝐸(𝑌) = 𝜇𝑌 = ∑ 𝑦𝑔(𝑦) = 1.7.


𝐸(𝑋𝑌) = ∑𝑥 ∑𝑦 𝑥𝑦𝑃(𝑥, 𝑦) = 1.87
𝐶𝑜𝑣 (𝑋, 𝑌) = 𝐸(𝑋𝑌) − 𝐸(𝑋)𝐸(𝑌) = 1.87 − 1.1 × 1.7 = 0 .
3. 𝑋 and 𝑌 are independent random variables. 𝑋 takes the values 2, 5 and 7 with probabilities 1/2, 1/4 and 1/4
respectively, and 𝑌 takes the values 3, 4 and 5 with probabilities 1/3, 1/3 and 1/3 respectively.
a) Find the joint probability distribution of 𝑋 and 𝑌.
b) Show that the covariance of 𝑋 and 𝑌 is equal to zero.
Solution:
Given that 𝑋 and 𝑌 are independent random variables, and

𝑥      2     5     7
𝑓(𝑥)   1/2   1/4   1/4

𝐸(𝑋) = ∑ 𝑥𝑓(𝑥) = 1 + 5/4 + 7/4 = 4

𝑦      3     4     5
𝑔(𝑦)   1/3   1/3   1/3

𝐸(𝑌) = ∑ 𝑦𝑔(𝑦) = 1 + 4/3 + 5/3 = 4

Joint distribution of 𝑋 and 𝑌:

 𝑋\𝑌     3      4      5
  2     1/6    1/6    1/6
  5     1/12   1/12   1/12
  7     1/12   1/12   1/12


𝐸(𝑋𝑌) = ∑𝑥 ∑𝑦 𝑥𝑦𝑃(𝑥, 𝑦) = 1 + 8/6 + 10/6 + 15/12 + 20/12 + 25/12 + 21/12 + 28/12 + 35/12 = 16.

𝐶𝑜𝑣 (𝑋, 𝑌) = 𝐸(𝑋𝑌) − 𝐸(𝑋)𝐸(𝑌) = 16 − 4 × 4 = 0

Review questions:

1. For independent random variables, is the covariance zero?

2. Does zero covariance imply that X and Y are independent?
3. What is the correlation of independent random variables?
4. For independent random variables, verify that 𝑃(𝑦|𝑥) = 𝑔(𝑦) and 𝑃(𝑥|𝑦) = 𝑓(𝑥).
5. If the correlation is zero, are the variables independent?
6. 𝐸(𝑎𝑋 + 𝑏𝑌) = ?


Lecture-4 Problems on Joint distribution of dependent random variables with covariance zero

1. If 𝑋 and 𝑌 are independent random variables, prove that

i) 𝐸(𝑋𝑌) = 𝐸(𝑋)𝐸(𝑌) ii) 𝐶𝑜𝑣(𝑋, 𝑌) = 0 .
Solution: If 𝑋 and 𝑌 are independent, then 𝑃(𝑥, 𝑦) = 𝑓(𝑥). 𝑔(𝑦) for all (𝑥, 𝑦).
𝐸(𝑋𝑌) = ∑𝑥 ∑𝑦 𝑥𝑦𝑃(𝑥, 𝑦)
= ∑𝑥 ∑𝑦 𝑥𝑦𝑓(𝑥). 𝑔(𝑦)
= ∑𝑥 𝑥𝑓(𝑥). ∑𝑦 𝑦𝑔(𝑦) = 𝐸(𝑋)𝐸(𝑌).
𝐶𝑜𝑣(𝑋, 𝑌) = 𝐸(𝑋𝑌) − 𝐸(𝑋)𝐸(𝑌) = 𝐸(𝑋)𝐸(𝑌) − 𝐸(𝑋)𝐸(𝑌) = 0 .
2. Give an example to show that 𝐶𝑜𝑣(𝑋, 𝑌) = 0 is not sufficient for 𝑋 and 𝑌 to be independent.
Solution: Consider the following joint distribution of 𝑋 and 𝑌.

𝑌
−1 0 1
𝑋
−1 0 0.1 0.1
0 0.2 0.2 0.2
1 0 0.1 0.1

Marginal distributions of 𝑋 and 𝑌


𝑥 −1 0 1

𝑓(𝑥) 0.2 0.6 0. 2

𝐸(𝑋) = ∑ 𝑥𝑓(𝑥) = −0.2 + 0 + 0.2 = 0

𝑦 −1 0 1
𝑔(𝑦) 0.2 0.4 0.4

𝐸(𝑌) = ∑ 𝑦𝑔(𝑦) = −0.2 + 0 + 0.4 = 0.2


𝐸(𝑋𝑌) = ∑𝑥 ∑𝑦 𝑥𝑦𝑃(𝑥, 𝑦) = −0.1 + 0.1 = 0.
∴ 𝐶𝑜𝑣(𝑋, 𝑌) = 0 − 0 × 0.2 = 0.
But 𝑃(−1, −1) = 0, while 𝑓(−1) = 0.2 and 𝑔(−1) = 0.2.
𝑃(−1, −1) ≠ 𝑓(−1)𝑔(−1), and hence X and Y are dependent.
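The counterexample can be checked numerically. A Python sketch (illustrative):

```python
# Joint table of the counterexample, as {(x, y): probability}
P = {(-1, 0): 0.1, (-1, 1): 0.1,
     (0, -1): 0.2, (0, 0): 0.2, (0, 1): 0.2,
     (1, 0): 0.1, (1, 1): 0.1}

EX = sum(x * p for (x, y), p in P.items())
EY = sum(y * p for (x, y), p in P.items())
EXY = sum(x * y * p for (x, y), p in P.items())
cov = EXY - EX * EY

# Marginals f(x) and g(y)
f = {x: sum(p for (a, y), p in P.items() if a == x) for x in (-1, 0, 1)}
g = {y: sum(p for (x, b), p in P.items() if b == y) for y in (-1, 0, 1)}

print(abs(cov) < 1e-12)                     # True: covariance is zero
print(P.get((-1, -1), 0) == f[-1] * g[-1])  # False: not independent
```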


3. Consider the following joint distribution of 𝑋 and 𝑌.

 𝑋\𝑌    −1     0      1
  0      0    1/6    1/12
  1     1/4    0     1/2

Marginal distributions of 𝑋 and 𝑌:

𝑥      0     1
𝑓(𝑥)   1/4   3/4

𝐸(𝑋) = ∑ 𝑥𝑓(𝑥) = 0 + 3/4 = 3/4

𝑦      −1    0     1
𝑔(𝑦)   1/4   1/6   7/12

𝐸(𝑌) = ∑ 𝑦𝑔(𝑦) = −1/4 + 0 + 7/12 = 1/3
𝐸(𝑋𝑌) = ∑𝑥 ∑𝑦 𝑥𝑦𝑃(𝑥, 𝑦) = −1/4 + 1/2 = 1/4.
∴ 𝐶𝑜𝑣(𝑋, 𝑌) = 1/4 − (3/4) × (1/3) = 0.
But 𝑃(0, −1) = 0, while 𝑓(0) = 1/4 and 𝑔(−1) = 1/4.

𝑃(0, −1) ≠ 𝑓(0)𝑔(−1), and hence X and Y are dependent.

Review questions:
1. For independent random variables, is the covariance zero?
2. Does zero covariance imply that X and Y are independent?
3. What is the correlation of independent random variables?
4. If X and Y are independent, then 𝜎²(𝑋 + 𝑌) = ?
5. If the correlation is zero, are the variables independent?
6. When are X and Y dependent random variables?


Lecture-5 Stochastic process: Markov-chain

Stochastic process:
A stochastic process is a mathematical model that describes how random quantities change over time or
space. It's a collection of random variables that are indexed by a parameter, often time, where each variable
represents a different outcome.
More simply, a stochastic process can be defined as follows.
Stochastic process: The collection {𝑡, 𝑥𝑡 } is called a stochastic process, where 𝑡 is the parameter and the
values 𝑥𝑡 of the random variables are called states.
Stochastic processes are used in many fields, including:
• Mathematical finance
• Queuing processes
• Computer algorithm analysis
• Economic time series
• Image analysis
• Social networks
• Biomedical phenomena modeling
Classification of stochastic process.
1. Discrete state discrete parameter process.
Example: Number of telephone calls in different days in a telephone booth.
2. Discrete state continuous parameter process.
Example: Number of telephone calls in different time intervals in a telephone booth.
3. Continuous state discrete parameter process.
Example: Average duration of a telephone calls in different days in a telephone booth.
4. Continuous state continuous parameter process.
Example: Average duration of telephone calls in different time intervals in a telephone booth.

Markov-chain: A Markov-chain is a discrete state, discrete parameter process in which the state space
is finite and the probability of any state depends at most on the immediately preceding state.

Examples:
1. Three boys A, B, C are throwing a ball to each other. A always throws ball to B, B always
throws the ball to C, but C is just as likely to throw the ball to B as to A.
Clearly the state space is finite: at any stage the ball is with A, B or C, so 𝑆 = {𝐴, 𝐵, 𝐶},
and the parameter is the throw number 1, 2, 3, ⋯ .
Therefore the process (game) is a discrete state, discrete parameter process, and the probability that the ball is
with A, B or C on the 𝑛th throw depends only on the probabilities of the same on the (𝑛 − 1)th throw.
Hence this game is a Markov-chain.


Review questions: Identify the process:


1. The length of a queue waiting for a cashier over time in a Mall.

2. Suppose there are three brands on sale, say A, B, C. The consumers either buy the same brand for a few
months or change their brands every now and then. Consumer preferences are observed on a monthly basis.

3. Number of students waiting for a bus at any time of a day.

4. Size of a population at a given time.

5. The values of the Dow-Jones Index at the end of the nth week.

6. Waiting time of the n-th student arriving at a bus stop.

7. Waiting time of an arriving job until it gets into service.

8. Number of jobs waiting at any time and the time a job has to spend in the system.


Lecture-6 (Tutorial) Construction of the probability distribution of 𝒛 = 𝒉(𝒙, 𝒚) from the joint probability distribution.

1. The joint probability distribution for two random variables 𝑋 and 𝑌 is as follows

𝑌 −2 −1 0 1
𝑋
1 0.1 0.2 0 0.3

2 0.2 0.1 0.1 0

Determine i) 𝐸(2𝑋 + 𝑌) ii) 𝑉(2𝑋 + 𝑌)

Solution:

Method 1: 𝐸(2𝑋 + 𝑌) = ∑𝑥 ∑𝑦(2𝑥 + 𝑦)𝑃(𝑥, 𝑦)


= 0 + 0.2 + 0 + 0.9 + 0.4 + 0.3 + 0.4 + 0 = 2.2.
𝑉(2𝑋 + 𝑌) = 𝐸[(2𝑋 + 𝑌)²] − [𝐸(2𝑋 + 𝑌)]²
= ∑𝑥 ∑𝑦 (2𝑥 + 𝑦)²𝑃(𝑥, 𝑦) − [𝐸(2𝑋 + 𝑌)]²
= 0 + 0.2 + 0 + 2.7 + 0.8 + 0.9 + 1.6 + 0 − 2.2² = 1.36 .
Method 2: Let 𝑍 = 2𝑋 + 𝑌; then 𝑍 takes the values 0, 1, 2, 3 (for 𝑥 = 1) and 2, 3, 4, 5 (for 𝑥 = 2).
Probability distribution is
𝑧 0 1 2 3 4 5
𝑓(𝑧) 0.1 0.2 0.2 0.4 0.1 0

𝐸(2𝑋 + 𝑌) = 𝐸(𝑍) = ∑ 𝑧𝑓(𝑧) = 0 + 0.2 + 0.4 + 1.2 + 0.4 + 0 = 2.2.


𝑉(2𝑋 + 𝑌) = 𝑉(𝑍) = 𝐸(𝑍²) − [𝐸(𝑍)]²
= ∑ 𝑧²𝑓(𝑧) − [𝐸(𝑍)]² = 0 + 0.2 + 0.8 + 3.6 + 1.6 − 2.2² = 1.36.

Method 3: Marginal distribution of X and Y are


Marginal distribution of 𝑋 Marginal distribution of 𝑌

𝑥 1 2 𝑦 −2 −1 0 1
𝑓(𝑥) 0.6 0.4 𝑔(𝑦) 0.3 0.3 0.1 0.3

𝐸(𝑋) = 𝜇𝑋 = ∑ 𝑥𝑓(𝑥) = 1.4, 𝐸(𝑌) = 𝜇𝑌 = ∑ 𝑦𝑔(𝑦) = −0.6.

𝑉(𝑋) = ∑ 𝑥²𝑓(𝑥) − (𝜇𝑋)² = 0.24, 𝑉(𝑌) = ∑ 𝑦²𝑔(𝑦) − (𝜇𝑌)² = 1.44.

𝐸(𝑋𝑌) = ∑𝑥 ∑𝑦 𝑥𝑦𝑃(𝑥, 𝑦) = −1.1 𝐶𝑜𝑣(𝑋, 𝑌) = −1.1 + 1.4 × 0.6 = −0.26


𝐸(2𝑋 + 𝑌) = 2𝐸(𝑋) + 𝐸(𝑌) = 2.2.


𝑉(2𝑋 + 𝑌) = 𝐸[(2𝑋 + 𝑌)²] − [𝐸(2𝑋 + 𝑌)]²
= 4𝐸[𝑋²] + 𝐸[𝑌²] + 4𝐸(𝑋𝑌) − [2𝐸(𝑋) + 𝐸(𝑌)]²
= 4𝑉(𝑋) + 𝑉(𝑌) + 4𝐶𝑜𝑣(𝑋, 𝑌) = 4 × 0.24 + 1.44 + 4 × (−0.26) = 1.36.
2. If 𝐸(𝑋) = 1, 𝐸(𝑌) = 1.4, 𝑉(𝑋) = 9.6, 𝑉(𝑌) = 0.24, 𝐸(𝑋𝑌) = 0.9 and 𝐶𝑜𝑣 (𝑋, 𝑌) = −0.5, then find
i) 𝐸(2𝑋 + 3𝑌) ii) 𝑉(𝑋 + 2𝑌).
Solution: 𝐸(2𝑋 + 3𝑌) = 2𝐸(𝑋) + 3𝐸(𝑌) = 6.2 .
𝑉(𝑋 + 2𝑌) = 𝑉(𝑋) + 4𝑉(𝑌) + 4𝐶𝑜𝑣(𝑋, 𝑌) = 9.6 + 4 × 0.24 + 4 × (−0.5) = 8.56.
Note: i) 𝐸(𝑎𝑋 + 𝑏𝑌) = 𝑎𝐸(𝑋) + 𝑏𝐸(𝑌) .
ii) 𝑉(𝑎𝑋 + 𝑏𝑌) = 𝑎²𝑉(𝑋) + 𝑏²𝑉(𝑌) + 2𝑎𝑏 𝐶𝑜𝑣(𝑋, 𝑌)
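Both identities in the Note can be sanity-checked against the joint table of this tutorial's Problem 1. A Python sketch (illustrative):

```python
# Joint table from Problem 1 of this tutorial, as {(x, y): probability}
P = {(1, -2): 0.1, (1, -1): 0.2, (1, 0): 0.0, (1, 1): 0.3,
     (2, -2): 0.2, (2, -1): 0.1, (2, 0): 0.1, (2, 1): 0.0}
a, b = 2, 1

E = lambda h: sum(h(x, y) * p for (x, y), p in P.items())
EX, EY, EXY = E(lambda x, y: x), E(lambda x, y: y), E(lambda x, y: x * y)
VX = E(lambda x, y: x * x) - EX ** 2
VY = E(lambda x, y: y * y) - EY ** 2
cov = EXY - EX * EY

# V(aX + bY) computed directly, and via identity ii) of the Note
lhs = E(lambda x, y: (a * x + b * y) ** 2) - E(lambda x, y: a * x + b * y) ** 2
rhs = a * a * VX + b * b * VY + 2 * a * b * cov
print(round(lhs, 10), round(rhs, 10))   # both 1.36
```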


Lecture-7 Probability vector, Stochastic matrices. Examples.

Probability vector: A vector 𝑣 = (𝑣1 , 𝑣2 , 𝑣3 , ⋯, 𝑣𝑛 ) is said to be a probability vector if

i) 𝑣𝑖 ≥ 0 for all 𝑖, and ii) ∑ 𝑣𝑖 = 1.
Examples: (0.6, 0.4), (0.1, 0.3, 0.4, 0.2), (1, 0, 0), (1/2, 0, 1/2).

Stochastic matrix: A square matrix 𝑃 is said to be a stochastic matrix if each row of 𝑃 is a probability
vector.
Examples:
[0.3  0.2  0.5]
[ 0    1    0 ]     and     [0.3  0.7]
[0.2  0.2  0.6]             [ 1    0 ]
Regular stochastic matrices: A stochastic matrix 𝑃 is said to be regular if all the entries of
𝑃𝑘 are nonzero for some positive integer 𝑘.

Fixed point or unique fixed probability vector: Let 𝑃 be a regular stochastic matrix, and 𝑣 be a probability
vector such that 𝑣𝑃 = 𝑣 , then 𝑣 is called unique fixed probability vector for 𝑃.

Note:
1. If 𝑣 = (𝑣1 , 𝑣2 , 𝑣3 , ⋯ ⋯ 𝑣𝑛 ) is the probability vector, and 𝑃𝑛×𝑛 be a stochastic matrix then 𝑣𝑃 is also
probability vector.

2. If 𝐴 and 𝐵 are stochastic matrices of same order, then 𝐴𝐵 is also stochastic matrix.

3. If the principal diagonal of a stochastic matrix contains an entry 1, then the matrix is not regular.
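These definitions are easy to test in code. A Python sketch (the helper names are ours; `is_regular` simply checks powers of 𝑃 up to a cutoff, which suffices for small examples like those below):

```python
def is_prob_vector(v, tol=1e-12):
    # nonnegative entries summing to 1
    return all(x >= 0 for x in v) and abs(sum(v) - 1) < tol

def is_stochastic(P):
    # every row of a stochastic matrix is a probability vector
    return all(is_prob_vector(row) for row in P)

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def is_regular(P, max_power=20):
    # regular: some power P^k has all entries nonzero
    Q = P
    for _ in range(max_power):
        if all(x > 0 for row in Q for x in row):
            return True
        Q = mat_mul(Q, P)
    return False

P = [[0, 1, 0], [1/6, 1/2, 1/3], [0, 2/3, 1/3]]
print(is_stochastic(P), is_regular(P))       # True True
print(is_regular([[1, 0], [0.3, 0.7]]))      # False: entry 1 on the diagonal
```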

Problems:

1. Show that
        [ 0    1    0 ]
    𝑃 = [1/6  1/2  1/3]
        [ 0   2/3  1/3]
is a regular stochastic matrix, and find the associated unique fixed probability vector.

Solution: Since
         [1/6   1/2    1/3 ]
    𝑃² = [1/12  23/36  5/18] ,
         [1/9   5/9    1/3 ]
all the entries of 𝑃² are nonzero; hence 𝑃 is regular.

Let 𝑣 = (𝑥, 𝑦, 𝑧) be the fixed probability vector; then 𝑣𝑃 = 𝑣, that is
(𝑥, 𝑦, 𝑧)𝑃 = (𝑥, 𝑦, 𝑧), together with 𝑥 + 𝑦 + 𝑧 = 1:

𝑥 + 𝑦 + 𝑧 = 1,  𝑦/6 = 𝑥,  𝑥 + 𝑦/2 + 2𝑧/3 = 𝑦  ⟹  𝑥 = 0.1, 𝑦 = 0.6, 𝑧 = 0.3.

∴ 𝑣 = (0.1, 0.6, 0.3) .
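The fixed probability vector can also be approximated numerically, since for a regular 𝑃 the iterates 𝑣, 𝑣𝑃, 𝑣𝑃², ⋯ converge to it from any starting probability vector. A Python sketch (illustrative):

```python
# Power iteration: repeatedly apply v -> vP to approximate the
# unique fixed probability vector of a regular stochastic matrix.
def fixed_vector(P, steps=200):
    n = len(P)
    v = [1.0 / n] * n                  # start from the uniform vector
    for _ in range(steps):
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
    return v

P = [[0, 1, 0],
     [1/6, 1/2, 1/3],
     [0, 2/3, 1/3]]
v = fixed_vector(P)
print([round(t, 4) for t in v])        # [0.1, 0.6, 0.3]
```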


2. Show that
    𝑃 = [0.25  0.75]
        [ 0     1  ]
is not regular.
Solution: Since
[𝑎  𝑏][𝑎′  𝑏′]   [𝑎″  𝑏″]
[0  1][0   1 ] = [0    1 ] ,
the second row of 𝑃ᵏ is always (0, 1).
Hence 𝑃ᵏ contains a zero entry for every 𝑘.
Therefore 𝑃 is not regular.
3. Find the fixed probability vector for
    𝑃 = [0.5  0.5]
        [0.4  0.6] .
Solution: Let 𝑣 = (𝑥, 𝑦) be the fixed probability vector; then 𝑣𝑃 = 𝑣, that is
0.5𝑥 + 0.4𝑦 = 𝑥 and 0.5𝑥 + 0.6𝑦 = 𝑦,
together with 𝑥 + 𝑦 = 1:
𝑥 + 𝑦 = 1, −0.5𝑥 + 0.4𝑦 = 0 ⟹ 𝑥 = 4/9, 𝑦 = 5/9.
∴ 𝑣 = (4/9, 5/9) .

Review questions:
1. Determine whether the following vectors are probability vectors. Give reasons.
i) (0.6, 0.3)   ii) (0, 1, 0, 1)   iii) (2, −1, 0)   iv) (1/2, 0, 1/2, 0)

2. Determine whether the following matrices are stochastic. Give reasons.

i) [0  0  1]    ii) [0.5  0.25  0.25]    iii) [ 0    1 ]    iv) [0   1   0]
   [1  0  0]        [0.3  0.2   0.5 ]         [ 1    0 ]        [2   0  −1]
   [0  1  0]                                  [0.5  0.5]        [3  −2   0]


Lecture-8 Regular stochastic matrices. Problems

Problems:
1. Show that
    𝑃 = [ 0    1 ]
        [0.3  0.7]
is regular, and find the associated unique fixed probability vector.
Solution:
    𝑃² = [0.3   0.7 ]
         [0.21  0.79]
All the entries of 𝑃² are nonzero, hence 𝑃 is regular.
Let 𝑣 = (𝑥, 𝑦) be the fixed probability vector; then 𝑣𝑃 = 𝑣, that is
0.3𝑦 = 𝑥 and 𝑥 + 0.7𝑦 = 𝑦,
together with 𝑥 + 𝑦 = 1:
𝑥 + 𝑦 = 1, −𝑥 + 0.3𝑦 = 0 ⟹ 𝑥 = 3/13, 𝑦 = 10/13.
∴ 𝑣 = (3/13, 10/13) .

2. Show that
    𝐴 = [ 0    1    0 ]
        [ 0    0    1 ]
        [1/2  1/2   0 ]
is regular.
Solution:
    𝐴² = [ 0    0    1 ]     𝐴³ = [0.5   0.5    0 ]
         [0.5  0.5   0 ] ,        [ 0    0.5   0.5] ,
         [ 0   0.5  0.5]          [0.25  0.25  0.5]

    𝐴⁴ = [ 0    0.5   0.5 ]     𝐴⁵ = [0.25   0.25   0.5]
         [0.25  0.25  0.5 ] ,        [0.25   0.5   0.25] .
         [0.25  0.5   0.25]          [0.125  0.375  0.5]
Since the entries of 𝐴⁵ are all nonzero, 𝐴 is regular.
3. Show that the following stochastic matrices are not regular.

i) [1/2  1/4  1/4]    ii) [1/2  1/2   0 ]
   [ 0    1    0 ]        [1/2  1/2   0 ]
   [1/2   0   1/2]        [1/4  1/4  1/2]

i. Let
    𝑃 = [1/2  1/4  1/4]
        [ 0    1    0 ]
        [1/2   0   1/2]
Since
[𝑎  𝑏  𝑐][𝑎′  𝑏′  𝑐′]   [𝑎″  𝑏″  𝑐″]
[0  1  0][0   1   0 ] = [0    1    0 ] ,
[𝑑  𝑒  𝑓][𝑑′  𝑒′  𝑓′]   [𝑑″  𝑒″  𝑓″]
the second row of 𝑃ᵏ is always (0, 1, 0).
Hence 𝑃ᵏ contains a zero entry for every 𝑘, and therefore 𝑃 is not regular.

ii. Let
    𝑃 = [1/2  1/2   0 ]
        [1/2  1/2   0 ]
        [1/4  1/4  1/2]
Since
[𝑎  𝑏  0][𝑎′  𝑏′  0 ]   [𝑎″  𝑏″  0 ]
[𝑐  𝑑  0][𝑐′  𝑑′  0 ] = [𝑐″  𝑑″  0 ] ,
[𝑒  𝑓  𝑔][𝑒′  𝑓′  𝑔′]   [𝑒″  𝑓″  𝑔″]
the first two entries in the third column of 𝑃ᵏ are always 0.
Hence 𝑃ᵏ contains a zero entry for every 𝑘, and therefore 𝑃 is not regular.
4. Show that 𝑣 = (𝑝, 𝑞) is the fixed probability vector of
    𝑃 = [1 − 𝑞    𝑞  ]
        [  𝑝    1 − 𝑝] .
Solution: 𝑣𝑃 = (𝑝(1 − 𝑞) + 𝑞𝑝, 𝑝𝑞 + 𝑞(1 − 𝑝)) = (𝑝 − 𝑝𝑞 + 𝑝𝑞, 𝑝𝑞 + 𝑞 − 𝑝𝑞) = (𝑝, 𝑞) = 𝑣.
Since 𝑣𝑃 = 𝑣, 𝑣 is the fixed probability vector of 𝑃.
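A quick numeric spot-check of Problem 4, with sample values 𝑝, 𝑞 chosen so that 𝑝 + 𝑞 = 1 (a Python sketch; the specific values are ours):

```python
# Spot check with sample values p, q such that p + q = 1 (illustrative).
p, q = 0.3, 0.7
P = [[1 - q, q], [p, 1 - p]]           # rows sum to 1, so P is stochastic
v = (p, q)
vP = (v[0] * P[0][0] + v[1] * P[1][0],
      v[0] * P[0][1] + v[1] * P[1][1])
print(vP)                              # approximately (0.3, 0.7)
```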

Review questions:
1. Determine whether the following vectors are probability vectors. Give reasons.
i) (0.5, 0.4)   ii) (0, 1, 0, 1)   iii) (2, 0, −1, 0)   iv) (1/2, 1/3, 0, −1/5)

2. Determine whether the following matrices are stochastic. Give reasons.

i) [1/2  1/4  1/4]    ii) [1/3  2/3]    iii) [ 0    1 ]    iv) [0   1   0]
   [ 0    1    0 ]        [1/4  3/4]         [ 1    0 ]        [1   0  −1]
   [ 0   1/2  1/2]                           [0.5  0.5]        [1  −1   0]

3. Which of the following matrices are regular?

𝐴 = [1/2  1/2]    𝐵 = [0  1]    𝐶 = [0  0  1]    𝐷 = [1/2  1/4  1/4]
    [ 0    1 ]        [1  0]        [0  1  0]        [1/2  1/4  1/4]
                                    [0  0  1]        [ 0   1/2  1/2]


Lecture-9 Markov chain: Higher transition probabilities.

Markov-chain: A Markov-chain is a discrete state, discrete parameter process in which the state space
is finite and the probability of any state depends at most on the immediately preceding state.
The transition probability matrix of a Markov-chain is 𝑃 = ⟨𝑝𝑖𝑗 ⟩,
where 𝑝𝑖𝑗 is the probability of a transition from the 𝑖th state to the 𝑗th state.

Higher transition probability: Let 𝑃 be the transition probability matrix of a Markov-chain, and 𝑣0 be the initial
probability vector of the chain.
Then after the first step the probability vector is 𝑣1 = 𝑣0 𝑃 ,
after the second step the probability vector is 𝑣2 = 𝑣1 𝑃 = 𝑣0 𝑃2 ,
after the third step the probability vector is 𝑣3 = 𝑣2 𝑃 = 𝑣0 𝑃3 , and so on.
A Markov-chain is irreducible iff the transition probability matrix is regular.
Therefore, irreducible Markov-chain is also called as regular Markov-chain.
Stationary distribution of regular Markov-chain: In the long run, probability of each state is obtained from
unique fixed probability vector of its regular transition probability matrix.
Problems
1. Three boys A, B, C are throwing a ball to each other. A always throws ball to B, B always
throws the ball to C, but C is just as likely to throw the ball to B as to A. If C was the first
person to throw the ball, find the probability that after three throws i) A has the ball,
ii) B has the ball, iii) C has the ball.

Solution: Let a, b, c indicate the states ball is with A, B or C respectively.


Transition probability matrix is
         𝑎    𝑏    𝑐
    𝑎 [  0    1    0 ]
𝑃 = 𝑏 [  0    0    1 ] .
    𝑐 [ 1/2  1/2   0 ]
Since C was the first person to throw the ball, 𝑣0 = (0, 0, 1)
Then after the first throw the probability vector is 𝑣1 = 𝑣0 𝑃 = (1⁄2 , 1⁄2 , 0) ,
after the second throw the probability vector is 𝑣2 = 𝑣1 𝑃 = (0, 1⁄2 , 1⁄2) ,
after the third throw the probability vector is 𝑣3 = 𝑣2 𝑃 = (1⁄4 , 1⁄4 , 1⁄2).
Therefore after third throw probabilities that A has the ball, B has the ball, and C has the ball are
respectively 1⁄4 , 1⁄4 , 1⁄2 .
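The three throws can be computed step by step in code. A Python sketch (illustrative):

```python
P = [[0, 1, 0],       # A always throws to B
     [0, 0, 1],       # B always throws to C
     [0.5, 0.5, 0]]   # C throws to A or B with equal probability

def step(v):
    # one throw: v -> vP
    return [sum(v[i] * P[i][j] for i in range(3)) for j in range(3)]

v = [0, 0, 1]         # C throws first
for _ in range(3):
    v = step(v)
print(v)              # [0.25, 0.25, 0.5]
```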
2. Every year, a man trades his car for a new car. If he has a Maruti, he trades it for a Ford, If he has a Ford, he
trades it for Santro. However, if he has a Santro, he is just as likely to trade it for a new Santro as to trade it for a
Maruti or a Ford. In 2014 he bought his first car, which was a Santro.
Find the probability that he has a i) 2015 Santro, ii) 2016 Maruti, iii) 2017 Ford.
Solution: Let 𝑀, 𝐹, 𝑆 be the states of having Maruti car, Ford car and Santro car respectively.
Then the transition probability matrix is
         𝑀    𝐹    𝑆
    𝑀 [  0    1    0 ]
𝑃 = 𝐹 [  0    0    1 ]
    𝑆 [ 1/3  1/3  1/3]

In 2014: the initial probability vector is 𝑣0 = (0, 0, 1), since he has a Santro in 2014.
In 2015: 𝑣1 = 𝑣0 𝑃 = (1⁄3 , 1⁄3 , 1⁄3)
In 2016: 𝑣2 = 𝑣1 𝑃 = (1⁄9 , 4⁄9 , 4⁄9)
In 2017: 𝑣3 = 𝑣2 𝑃 = (4⁄27 , 7⁄27 , 16⁄27).
Therefore the probability that he has a 2015 Santro is 1/3, a 2016 Maruti is 1/9, and a 2017 Ford is 7/27.
3. There are 2 white marbles in box A and 3 red marbles in box B. At each step of the process a marble is
selected from each box and the two marbles selected are interchanged. Let the state 𝑎𝑖 of the system is number
of 𝑖 red marbles in box A. a) Find the transition probability matrix. b) What is the probability that there are 2
red marbles in box A after 3 steps? c) In long run what is the probability that there are 2 red marbles in box A?
Solution: There are three states 𝑎0 , 𝑎1 and 𝑎2 :

State 𝑎0 : Box A = 2W, Box B = 3R
State 𝑎1 : Box A = 1W 1R, Box B = 1W 2R
State 𝑎2 : Box A = 2R, Box B = 2W 1R

If the system is in state 𝑎0 , then a white marble from box A and a red marble from box B must be selected,
so the system moves to state 𝑎1 .
Let the system be in state 𝑎1 . It moves to state 𝑎0 if a red marble is selected from box A and a white marble
from box B; the probability of such a selection is 1/6.
It remains in state 𝑎1 if a white marble is selected from each box, or a red marble from each box; the
probability of such a selection is 1/6 + 2/6 = 1/2.
It moves to state 𝑎2 if a white marble from box A and a red marble from box B are selected; the probability
of such a selection is 2/6 = 1/3.
Let the system be in state 𝑎2 . It moves to state 𝑎1 if a red marble from box A and a white marble from box B
are selected; the probability of such a selection is 4/6 = 2/3.
It remains in state 𝑎2 if a red marble from box A and a red marble from box B are selected; the probability
of such a selection is 2/6 = 1/3.
There is no transition from state 𝑎2 to state 𝑎0 .
There is no chance of state 𝑎0 from state 𝑎2 .
Therefore the transition probability matrix is
          𝑎0   𝑎1   𝑎2
    𝑎0 [  0    1    0 ]
𝑃 = 𝑎1 [ 1/6  1/2  1/3]
    𝑎2 [  0   2/3  1/3]
Initial probability vector is 𝑣0 = (1, 0, 0). (Because there is no red marble in box A initially)
After 1st step: 𝑣1 = 𝑣0 𝑃 = (0, 1, 0)


After the 2nd step: 𝑣2 = 𝑣1 𝑃 = (1/6, 1/2, 1/3)
After the 3rd step: 𝑣3 = 𝑣2 𝑃 = (1/12, 23/36, 5/18)
Probability that there are 2 red marbles in box A after 3 steps = probability of 𝑎2 after 3 steps = 5/18.

Let 𝑣 = (𝑥, 𝑦, 𝑧) be the fixed probability vector; then 𝑣𝑃 = 𝑣, together with 𝑥 + 𝑦 + 𝑧 = 1:

𝑥 + 𝑦 + 𝑧 = 1,  𝑦/6 = 𝑥,  𝑥 + 𝑦/2 + 2𝑧/3 = 𝑦  ⟹  𝑥 = 0.1, 𝑦 = 0.6, 𝑧 = 0.3.

∴ 𝑣 = (0.1, 0.6, 0.3) .


In the long run, the probability that there are 2 red marbles in box A is 0.3.
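Both answers for the marble problem (the 3-step vector and the long-run limit) can be reproduced with exact fractions. A Python sketch (illustrative):

```python
from fractions import Fraction as F

P = [[F(0, 1), F(1, 1), F(0, 1)],
     [F(1, 6), F(1, 2), F(1, 3)],
     [F(0, 1), F(2, 3), F(1, 3)]]

def step(v):
    # one application of v -> vP
    return [sum(v[i] * P[i][j] for i in range(3)) for j in range(3)]

v = [F(1), F(0), F(0)]            # initially no red marbles in box A
for _ in range(3):
    v = step(v)
print(v)                          # -> 1/12, 23/36, 5/18 (as Fractions)

# Long run: keep iterating; for a regular chain this converges to the
# fixed probability vector (0.1, 0.6, 0.3)
w = [F(1, 3)] * 3
for _ in range(100):
    w = step(w)
print([float(t) for t in w])      # -> approximately [0.1, 0.6, 0.3]
```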
Exercise: The following figure shows four compartments with doors leading from one to another. A mouse in
any compartment is equally likely to pass through each of the doors of that compartment. Every day the mouse
moves to a different room. Find the transition probability matrix. If the mouse is in the 2nd room on one day,
what is the probability that it is in the 3rd room after 3 days?

[Figure: four compartments in a 2 × 2 grid, numbered 1 and 4 on top, 2 and 3 below, with doors between
adjacent compartments.]

Review questions:
1. Is the state space of a Markov-chain discrete?
2. Can the state space of a Markov-chain be infinite?
3. In the transition probability matrix of a Markov-chain, what does the probability 𝑝2,3 indicate?
4. Is the transition probability matrix of a Markov-chain a stochastic matrix?
5. If a Markov-chain is irreducible, then its transition probability matrix is?
6. When do we say a Markov-chain is irreducible?


Lecture-10 Stationary distribution of regular Markov chain:

Note: In the long run, probability of each state is obtained from unique fixed probability vector of its regular
transition probability matrix.

1. A student’s study habits are as follows. If he studies one night, he is 70% sure not to study
the next night. On the other hand if he does not study one night, he is 60% sure not to study
the next night. In long run how often does he study?
Solution: Let S denote that he studies, and N that he does not study.
Transition probability matrix is
        𝑆    𝑁
𝑃 = 𝑆 [0.3  0.7]
    𝑁 [0.4  0.6]

Let 𝑣 = (𝑥, 𝑦) be the fixed probability vector; then 𝑣𝑃 = 𝑣, that is
0.3𝑥 + 0.4𝑦 = 𝑥, together with 𝑥 + 𝑦 = 1:
𝑥 + 𝑦 = 1, −0.7𝑥 + 0.4𝑦 = 0 ⟹ 𝑥 = 4/11, 𝑦 = 7/11.
In the long run he studies with probability 4/11.
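The stationary vector (4/11, 7/11) can be verified exactly. A Python sketch (illustrative; the matrix entries are entered as fractions):

```python
from fractions import Fraction as F

# Transition matrix from Problem 1, with exact entries
P = [[F(3, 10), F(7, 10)],
     [F(2, 5), F(3, 5)]]
x, y = F(4, 11), F(7, 11)              # candidate fixed vector, x + y = 1
vP = (x * P[0][0] + y * P[1][0],
      x * P[0][1] + y * P[1][1])
assert vP == (x, y)                    # vP = v, so v is indeed fixed
print(x, y)   # 4/11 7/11
```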

2. A salesman’s territory consists of 3 cities A, B and C. he never sells in the same city on successive days.
If he sells in city A, then the next day he sells in city B. However if he sells in either B or C, then the next
day he is twice as likely to sell in city A as in another city. In long run, how often does he sell in each of the
cities?
Solution: Let the states 𝑎, 𝑏, 𝑐 indicate that the salesman sells in cities A, B and C respectively.
Then the transition probability matrix is
         𝑎    𝑏    𝑐
    𝑎 [  0    1    0 ]
𝑃 = 𝑏 [ 2/3   0   1/3]
    𝑐 [ 2/3  1/3   0 ]
Let 𝑣 = (𝑥, 𝑦, 𝑧) be the fixed probability vector; then 𝑣𝑃 = 𝑣, that is
(2/3)𝑦 + (2/3)𝑧 = 𝑥,  𝑥 + 𝑧/3 = 𝑦,  𝑦/3 = 𝑧,
together with 𝑥 + 𝑦 + 𝑧 = 1 ⟹ 𝑥 = 2/5, 𝑦 = 9/20, 𝑧 = 3/20.

Therefore, in the long run he sells in city A with probability 2/5, in city B with probability 9/20, and in city C
with probability 3/20.


3. A man’s smoking habits are as follows. If he smokes filter cigarettes one week, he switches to nonfilter
cigarettes the next week with probability 0.2. On the other hand, if he smokes nonfilter cigarettes one week,
there is a probability of 0.7 that he will smoke nonfilter cigarettes the next week as well. In long run how often
does he smokes filter cigarettes.
Solution: Let the states 𝑠, 𝑛 indicate that he smokes filter cigarettes and nonfilter cigarettes respectively.
Transition probability matrix is
        𝑠    𝑛
𝑃 = 𝑠 [0.8  0.2]
    𝑛 [0.3  0.7]
Let 𝑣 = (𝑥, 𝑦) be the fixed probability vector; then 𝑣𝑃 = 𝑣, that is
0.8𝑥 + 0.3𝑦 = 𝑥, together with 𝑥 + 𝑦 = 1:
𝑥 + 𝑦 = 1, −0.2𝑥 + 0.3𝑦 = 0 ⟹ 𝑥 = 3/5, 𝑦 = 2/5.
Hence, in the long run he smokes filter cigarettes with probability 3/5.

Review questions:
1. What is the stationary distribution of a regular Markov-chain?
2. If 𝑃 is the transition probability matrix of a regular Markov-chain, what does each row of 𝑃ⁿ approach as 𝑛 ⟶ ∞?
3. What vector does 𝑣𝑃 approach when 𝑣 is the fixed probability vector of the transition probability matrix 𝑃 of a Markov-chain?
4. What vector does 𝑣𝑃³ approach when 𝑣 is the fixed probability vector of the transition probability matrix 𝑃 of a Markov-chain?
5. Can the transition probability matrix of an irreducible Markov-chain be non-regular?
6. Can the transition probability matrix of an irreducible Markov-chain contain an entry 1 on the principal diagonal?


Lecture-11 Transient state, Absorbing state and Recurrent state of a Markov-chain:


Transient state of a Markov-chain: A state 𝑎𝑖 is said to be transient if the chain may be in the state 𝑎𝑖 at some
step, but after a finite number of steps the chain never comes back to the state 𝑎𝑖 .
Example: In modeling a computer program as a Markov-chain by considering every step as a
state, every step except the last is a transient state.

Absorbing state: A state 𝑎𝑖 is said to be absorbing if the chain, once it reaches the state 𝑎𝑖, remains in that state.
Example: Consider the transition probability matrix of a Markov-chain with state space (𝑎1 , 𝑎2 , 𝑎3 )

      0.2   0.5   0.3
𝑃 = [  0     1     0  ] .   Clearly the state 𝑎2 is absorbing.
      0.5   0.5    0
Recurrent state: A state 𝑎𝑖 is said to be recurrent if, starting from 𝑎𝑖, the chain returns to 𝑎𝑖 in a finite number of
steps with probability 1. If returns to 𝑎𝑖 are possible only in a multiple of 𝑑 > 1 steps, the state is called periodic
with period 𝑑.

Example: Consider the transition probability matrix of a Markov-chain with state space (𝑎1 , 𝑎2 , 𝑎3 )

      0  0  1
𝑃 = [ 1  0  0 ] .   Clearly each state is recurrent with period 3.
      0  1  0
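The period-3 behaviour of this cyclic chain can be checked with matrix powers; here is a minimal sketch (Python with numpy assumed, not part of the original notes). 𝑃³ is the identity matrix, so from any state the chain is certainly back after 3 steps, while 𝑃 and 𝑃² have zero diagonals, so no earlier return is possible.

```python
# Sketch: confirm each state of the cyclic chain has period 3.
import numpy as np

P = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])

# P^3 = I: return after exactly 3 steps with probability 1.
print(np.linalg.matrix_power(P, 3))

# Zero diagonals of P and P^2: no return in 1 or 2 steps.
print(np.diag(P), np.diag(P @ P))
```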
1. A player has Rs. 300. At each play of a game, he loses Rs. 100 with probability 3/4, but wins Rs. 200 with
probability 1/4. He stops playing if he loses his Rs. 300 or wins Rs. 300. Find the transition probability
matrix and identify the absorbing states.
Solution: Since he stops playing if he loses his Rs. 300 or wins Rs. 300, at any stage he may have 0, 100,
200, 300, 400, 500, or at least 600 Rs.
Let the states 𝑎0 , 𝑎1 , 𝑎2 , 𝑎3 , 𝑎4 , 𝑎5 indicate that he has 0, 100, 200, 300, 400, 500 Rs respectively, and 𝑎6 indicate
that he has 600 or more Rs.
If he is in any state 𝑎𝑖 for 𝑖 = 1, 2, 3, 4, 5, then at each play of the game he moves to state 𝑎𝑖−1 with probability 3/4
or to state 𝑎𝑖+2 (pooled into 𝑎6 when 𝑖 + 2 > 6) with probability 1/4. If he is in state 𝑎0 or 𝑎6 he stops playing,
that is, he remains in the same state.
Transition probability matrix is

           𝒂𝟎    𝒂𝟏    𝒂𝟐    𝒂𝟑    𝒂𝟒    𝒂𝟓    𝒂𝟔
     𝒂𝟎 [  1     0     0     0     0     0     0  ]
     𝒂𝟏 [ 3/4    0     0    1/4    0     0     0  ]
𝑃 =  𝒂𝟐 [  0    3/4    0     0    1/4    0     0  ]
     𝒂𝟑 [  0     0    3/4    0     0    1/4    0  ]
     𝒂𝟒 [  0     0     0    3/4    0     0    1/4 ]
     𝒂𝟓 [  0     0     0     0    3/4    0    1/4 ]
     𝒂𝟔 [  0     0     0     0     0     0     1  ]

Clearly the states 𝑎0 and 𝑎6 are absorbing states.
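The matrix above can also be built programmatically, with absorbing states detected as the rows having a 1 on the principal diagonal. This is a minimal sketch (Python with numpy assumed, not part of the original notes):

```python
# Sketch: build the 7-state transition matrix and flag absorbing states.
import numpy as np

p, q = 3/4, 1/4                    # lose Rs. 100 / win Rs. 200
P = np.zeros((7, 7))
P[0, 0] = 1.0                      # a0: lost everything, absorbing
for i in range(1, 6):              # a1..a5: down 1 state or up 2 states
    P[i, i - 1] = p
    P[i, min(i + 2, 6)] = q        # winnings of Rs. 600 or more pool in a6
P[6, 6] = 1.0                      # a6: reached the goal, absorbing

assert np.allclose(P.sum(axis=1), 1)   # every row is a probability vector
absorbing = [i for i in range(7) if P[i, i] == 1.0]
print(absorbing)  # [0, 6]
```

The check agrees with the notes: 𝑎0 and 𝑎6 are the absorbing states.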

Review questions:
1. Can a state be both transient and absorbing?
2. Can a state be both transient and recurrent?
3. Can a state be both recurrent and absorbing?
4. Find the period of each state if the T.P.M is
        𝑃 = [ 0  1 ] .
            [ 1  0 ]
5. If a Markov-chain has an absorbing state, can it be irreducible?
6. If a Markov-chain has a transient state, can it be regular?
7. If a Markov-chain has a recurrent state, is it necessarily irreducible?


Lecture-12 (Tutorial) Challenging problems on Markov-chains:


1. Two boys 𝐵1 , 𝐵2 and two girls 𝐺1 , 𝐺2 are throwing a ball from one to another. Each boy throws the ball to the
other boy with probability 1/2, and to each girl with probability 1/4. On the other hand, each girl throws
the ball to each boy with probability 1/2, and never to the other girl. If the ball is with 𝐺1, find, for each
person, the probability of having the ball after three throws.

Solution: Clearly there are 4 states 𝐵1 , 𝐵2 , 𝐺1 , 𝐺2, meaning that at any throw the ball is with 𝐵1 , 𝐵2 , 𝐺1 or 𝐺2 respectively.

Transition probability matrix is

           𝑩𝟏    𝑩𝟐    𝑮𝟏    𝑮𝟐
     𝑩𝟏 [  0    1/2   1/4   1/4 ]
𝑃 =  𝑩𝟐 [ 1/2    0    1/4   1/4 ]
     𝑮𝟏 [ 1/2   1/2    0     0  ]
     𝑮𝟐 [ 1/2   1/2    0     0  ]

Since the ball is with 𝐺1, the initial probability vector is 𝑣0 = (0, 0, 1, 0).

Then after the first throw the probability vector is 𝑣1 = 𝑣0 𝑃 = (1⁄2 , 1⁄2 , 0, 0) ,

after the second throw the probability vector is 𝑣2 = 𝑣1 𝑃 = (1⁄4 , 1⁄4 , 1⁄4 , 1⁄4) ,

after the third throw the probability vector is 𝑣3 = 𝑣2 𝑃 = (3⁄8 , 3⁄8 , 1⁄8 , 1⁄8).

Therefore, after the third throw the probability that each boy has the ball is 3⁄8 and that each girl has the ball is 1⁄8 .
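The three-throw calculation above reduces to one matrix power, 𝑣3 = 𝑣0𝑃³. A minimal numerical sketch (Python with numpy assumed, not part of the original notes):

```python
# Sketch: reproduce v3 = v0 * P^3 for the ball-throwing chain.
# State order is (B1, B2, G1, G2).
import numpy as np

P = np.array([[0,   1/2, 1/4, 1/4],
              [1/2, 0,   1/4, 1/4],
              [1/2, 1/2, 0,   0  ],
              [1/2, 1/2, 0,   0  ]])

v0 = np.array([0.0, 0.0, 1.0, 0.0])      # ball starts with G1
v3 = v0 @ np.linalg.matrix_power(P, 3)   # distribution after 3 throws
print(v3)  # [0.375, 0.375, 0.125, 0.125] = (3/8, 3/8, 1/8, 1/8)
```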

2. Find the unique fixed probability vector of

      0     0.5   0.5    0
𝑃 = [ 0.5   0.25  0.25   0  ] .
      0     0      0     1
      0     0.5    0    0.5

Solution: Let 𝑣 = (𝑥, 𝑦, 𝑧, 𝑤) be the unique fixed probability vector of 𝑃.

Then 𝑣𝑃 = 𝑣.
                 0     0.5   0.5    0
⟹ (𝑥, 𝑦, 𝑧, 𝑤) [ 0.5   0.25  0.25   0  ] = (𝑥, 𝑦, 𝑧, 𝑤)
                 0     0      0     1
                 0     0.5    0    0.5

⟹   0.5𝑦 = 𝑥
     0.5𝑥 + 0.25𝑦 + 0.5𝑤 = 𝑦
     0.5𝑥 + 0.25𝑦 = 𝑧
     𝑧 + 0.5𝑤 = 𝑤

Since 𝑥 + 𝑦 + 𝑧 + 𝑤 = 1, substituting 𝑤 = 1 − 𝑥 − 𝑦 − 𝑧 in the 4th equation we get
𝑧 = 0.5(1 − 𝑥 − 𝑦 − 𝑧) ⟹ 0.5𝑥 + 0.5𝑦 + 1.5𝑧 = 0.5.


Consider the first, third and fourth equations, that is,

−𝑥 + 0.5𝑦 + 0𝑧 = 0
0.5𝑥 + 0.25𝑦 − 𝑧 = 0
0.5𝑥 + 0.5𝑦 + 1.5𝑧 = 0.5 ,

and solve for 𝑥, 𝑦, 𝑧. We get 𝑥 = 1/6, 𝑦 = 1/3, 𝑧 = 1/6. Therefore 𝑤 = 1 − 𝑥 − 𝑦 − 𝑧 = 1/3.

∴ 𝑣 = (1/6, 1/3, 1/6, 1/3)
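Instead of eliminating by hand, the system 𝑣𝑃 = 𝑣 with sum(𝑣) = 1 can be solved directly as a linear system. A minimal sketch (Python with numpy assumed, not part of the original notes):

```python
# Sketch: solve v P = v, sum(v) = 1 for the 4-state chain.
import numpy as np

P = np.array([[0,   0.5,  0.5,  0  ],
              [0.5, 0.25, 0.25, 0  ],
              [0,   0,    0,    1  ],
              [0,   0.5,  0,    0.5]])

# v(P - I) = 0 stacked with the normalisation row sum(v) = 1.
A = np.vstack([P.T - np.eye(4), np.ones(4)])
b = np.array([0, 0, 0, 0, 1.0])
v, *_ = np.linalg.lstsq(A, b, rcond=None)
print(v)  # ~ [1/6, 1/3, 1/6, 1/3]
```

The numerical answer matches the hand computation 𝑣 = (1/6, 1/3, 1/6, 1/3).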

3. A player has Rs. 200. He bets Rs. 100 at a time and wins Rs. 100 with probability 1/2. He stops playing if he
loses the Rs. 200 or wins Rs. 400. Find the probability that the game lasts more than 4 plays.
Solution: Since he stops playing if he loses his Rs. 200 or wins Rs. 400, at any stage he may have 0, 100,
200, 300, 400, 500 or 600 Rs.
Let the states 𝑎0 , 𝑎1 , 𝑎2 , 𝑎3 , 𝑎4 , 𝑎5 and 𝑎6 indicate that he has 0, 100, 200, 300, 400, 500 or 600 Rs respectively.
If he is in any state 𝑎𝑖 for 𝑖 = 1, 2, 3, 4, 5, then at each bet he moves to state 𝑎𝑖−1 with probability 1/2 or to
state 𝑎𝑖+1 with probability 1/2. If he is in state 𝑎0 or 𝑎6 he stops playing, that is, he remains in the same
state.

Transition probability matrix is

           𝒂𝟎    𝒂𝟏    𝒂𝟐    𝒂𝟑    𝒂𝟒    𝒂𝟓    𝒂𝟔
     𝒂𝟎 [  1     0     0     0     0     0     0  ]
     𝒂𝟏 [ 1/2    0    1/2    0     0     0     0  ]
𝑃 =  𝒂𝟐 [  0    1/2    0    1/2    0     0     0  ]
     𝒂𝟑 [  0     0    1/2    0    1/2    0     0  ]
     𝒂𝟒 [  0     0     0    1/2    0    1/2    0  ]
     𝒂𝟓 [  0     0     0     0    1/2    0    1/2 ]
     𝒂𝟔 [  0     0     0     0     0     0     1  ]

Since he has Rs. 200, the initial probability vector is 𝑣0 = (0, 0, 1, 0, 0, 0, 0).


After the first play: 𝑣1 = 𝑣0 𝑃 = (0, 1/2, 0, 1/2, 0, 0, 0).
After the second play: 𝑣2 = 𝑣1 𝑃 = (1/4, 0, 1/2, 0, 1/4, 0, 0).
After the third play: 𝑣3 = 𝑣2 𝑃 = (1/4, 1/4, 0, 3/8, 0, 1/8, 0).
After the fourth play: 𝑣4 = 𝑣3 𝑃 = (3/8, 0, 5/16, 0, 1/4, 0, 1/16).

If he is in state 𝑎0 or 𝑎6 after the 4th play, the game stops.

Therefore, the game lasting more than 4 plays means he must be in one of the states 𝑎1 , 𝑎2 , 𝑎3 , 𝑎4 or 𝑎5 after
the fourth play.
Hence the required probability is 5/16 + 1/4 = 9/16.
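The four-step calculation can be condensed into a single matrix power: the probability that the game is still running after 4 plays is the mass of 𝑣0𝑃⁴ outside the absorbing states 𝑎0 and 𝑎6. A minimal sketch (Python with numpy assumed, not part of the original notes):

```python
# Sketch: P(game lasts more than 4 plays) for the gambler's-ruin chain.
import numpy as np

P = np.zeros((7, 7))
P[0, 0] = P[6, 6] = 1.0            # a0 and a6 are absorbing
for i in range(1, 6):              # a1..a5: down or up one state
    P[i, i - 1] = P[i, i + 1] = 0.5

v0 = np.zeros(7); v0[2] = 1.0      # starts with Rs. 200, i.e. state a2
v4 = v0 @ np.linalg.matrix_power(P, 4)

# Mass remaining in the non-absorbing states a1..a5 after 4 plays.
prob = v4[1:6].sum()
print(prob)  # 0.5625 = 9/16
```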
