MODULE-2
Joint probability distribution: Joint Probability distribution for two discrete random variables,
expectation, covariance and correlation. Markov Chain: Introduction to Stochastic Process, Probability
Vectors, Stochastic matrices, Regular stochastic matrices, Markov chains, higher transition probabilities,
Stationary distribution of Regular Markov chains and absorbing states.
For a joint probability distribution P(x, y) with marginal distributions f(x) of X and g(y) of Y:
E(X) = μ_X = ∑ x f(x),
V(X) = ∑ x² f(x) − (μ_X)², and σ_X = √V(X).
Similarly,
E(Y) = μ_Y = ∑ y g(y),
V(Y) = ∑ y² g(y) − (μ_Y)², and σ_Y = √V(Y).
Problems:
1. A box contains 3 blue, 2 red and 3 green pens. Two pens are selected at random. Let X be the number of blue pens and Y the number of red pens selected. Find the joint distribution of X and Y, and P(X + Y ≤ 1).
Solution: There are 8 pens, and 8C2 = 28 ways to select two pens. The two selected pens may be both blue, one blue and one red, one blue and one green, both red, one red and one green, or both green.
The following table gives the number of ways, the x and y values, and the probability for each case.
Selected pens   Number of ways     x   y   Probability
BB              3C2 = 3            2   0   3/28
BR              3C1 × 2C1 = 6      1   1   6/28
BG              3C1 × 3C1 = 9      1   0   9/28
RR              2C2 = 1            0   2   1/28
RG              2C1 × 3C1 = 6      0   1   6/28
GG              3C2 = 3            0   0   3/28
The joint distribution of X and Y is:

X \ Y    0       1       2
0        3/28    6/28    1/28
1        9/28    6/28    0
2        3/28    0       0
P(X + Y ≤ 1) = P(0, 0) + P(0, 1) + P(1, 0) = 3/28 + 6/28 + 9/28 = 18/28 = 0.6429.
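The joint table can be checked in code. The following is a minimal Python sketch (variable names are illustrative, not from the notes) that rebuilds the joint pmf from the same combinatorial counts and recomputes P(X + Y ≤ 1):

from math import comb

# Counts of blue, red, green pens (from the problem above).
blue, red, green = 3, 2, 3
total_ways = comb(blue + red + green, 2)          # 8C2 = 28

# Joint pmf P(x, y): x blue pens and y red pens among the 2 selected.
pmf = {}
for x in range(3):
    for y in range(3 - x):
        g = 2 - x - y                             # green pens selected
        ways = comb(blue, x) * comb(red, y) * comb(green, g)
        pmf[(x, y)] = ways / total_ways

print(pmf[(0, 0)], pmf[(1, 1)])                   # 3/28, 6/28
print(sum(p for (x, y), p in pmf.items() if x + y <= 1))   # 18/28 = 0.6429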
2. From 5 boys and 3 girls a committee of 4 is to be formed. Let X be the number of boys, Y be the number of
girls in the committee, find the joint distribution and 𝐸(𝑋) and 𝐸(𝑌).
Solution: Possible number of committees = 8C4 = 70.
A committee may contain one, two, three or four boys.
Number of boys x   Number of girls y   Possible number of ways   Probability
4                  0                   5C4 = 5                   5/70
3                  1                   5C3 × 3C1 = 30            30/70
2                  2                   5C2 × 3C2 = 30            30/70
1                  3                   5C1 × 3C3 = 5             5/70
The joint distribution is given in the following table.

X \ Y    0       1       2       3
1        0       0       0       1/14
2        0       0       3/7     0
3        0       3/7     0       0
4        1/14    0       0       0

E(X) = ∑ x f(x) = 1(1/14) + 2(3/7) + 3(3/7) + 4(1/14) = 5/2, and
E(Y) = ∑ y g(y) = 0(1/14) + 1(3/7) + 2(3/7) + 3(1/14) = 3/2.
Review questions:
3. E(X + Y) = ?
4. If a coin is tossed 2 times, x denotes the number of heads and y the number of tails, then P(1, 1) = ?
1. A fair coin is tossed 3 times. Let 𝑋 denote 0 or 1 according as a head or tail occurs on the
first toss. Let 𝑌 denote the number of heads which occur. Find the joint distribution and
marginal distribution of 𝑋 and 𝑌 . Also find 𝐶𝑜𝑣 (𝑋, 𝑌).
Solution:
S    HHH   HHT   HTH   HTT   THH   THT   TTH   TTT
x    0     0     0     0     1     1     1     1
y    3     2     2     1     2     1     1     0

The joint distribution and its marginals are:

X \ Y    0      1      2      3      Sum
0        0      1/8    2/8    1/8    1/2
1        1/8    2/8    1/8    0      1/2
Sum      1/8    3/8    3/8    1/8    1

Marginal distribution of X:   x: 0, 1;          f(x): 1/2, 1/2
Marginal distribution of Y:   y: 0, 1, 2, 3;    g(y): 1/8, 3/8, 3/8, 1/8
E(XY) = ∑_x ∑_y x y P(x, y) = 2/8 + 2/8 = 1/2,
E(X) = ∑ x f(x) = 1/2,
E(Y) = ∑ y g(y) = 3/8 + 6/8 + 3/8 = 3/2,
Cov(X, Y) = E(XY) − E(X)E(Y) = 1/2 − 3/4 = −1/4.
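As a check, the covariance can be recomputed by enumerating the eight equally likely outcomes directly; a short Python sketch under the same definitions of X and Y:

from itertools import product

# Enumerate the 8 equally likely outcomes of three coin tosses.
outcomes = list(product("HT", repeat=3))
p = 1 / len(outcomes)

# X = 0 if the first toss is a head, 1 if it is a tail; Y = number of heads.
exy = ex = ey = 0.0
for w in outcomes:
    x = 0 if w[0] == "H" else 1
    y = w.count("H")
    ex += x * p
    ey += y * p
    exy += x * y * p

print(exy - ex * ey)   # Cov(X, Y) = -0.25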
2. The joint probability distribution for two random variables X and Y is as follows; find the correlation ρ(X, Y).

Y \ X    −2     −1     4      5
1        0.1    0.2    0      0.3
2        0.2    0.1    0.1    0

Solution: The marginal distributions are

x:     −2    −1    4     5          y:     1     2
f(x):  0.3   0.3   0.1   0.3        g(y):  0.6   0.4

E(X) = −0.6 − 0.3 + 0.4 + 1.5 = 1,  E(Y) = 0.6 + 0.8 = 1.4,  E(XY) = 0.9,
so Cov(X, Y) = E(XY) − E(X)E(Y) = 0.9 − 1.4 = −0.5.
V(X) = E(X²) − E(X)² = 10.6 − 1 = 9.6 and V(Y) = 2.2 − 1.96 = 0.24, so σ_X = √9.6 and σ_Y = √0.24.
Correlation ρ(X, Y) = Cov(X, Y)/(σ_X σ_Y) = −0.5/(√9.6 √0.24) = −0.3294.
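A minimal Python sketch of the same computation (the joint table is keyed as (y, x) pairs; names are illustrative):

from math import sqrt

# Joint pmf P(y, x) from the table above (rows y, columns x).
pmf = {(1, -2): 0.1, (1, -1): 0.2, (1, 4): 0.0, (1, 5): 0.3,
       (2, -2): 0.2, (2, -1): 0.1, (2, 4): 0.1, (2, 5): 0.0}

ex  = sum(x * p for (y, x), p in pmf.items())
ey  = sum(y * p for (y, x), p in pmf.items())
exy = sum(x * y * p for (y, x), p in pmf.items())
ex2 = sum(x * x * p for (y, x), p in pmf.items())
ey2 = sum(y * y * p for (y, x), p in pmf.items())

cov = exy - ex * ey
rho = cov / (sqrt(ex2 - ex**2) * sqrt(ey2 - ey**2))
print(round(rho, 4))   # -0.3294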
Review questions:
6. 𝜌(𝑋, 𝑋) =?
7. 𝜌(𝑋, −𝑋) =?
Conditional probability distribution of the random variable X given that Y = y is P(x|y) = P(x, y)/g(y).
Statistical independence:
Two random variables X and Y are said to be independent if P(x, y) = f(x)·g(y) for all (x, y).
Problems:
1. If the joint distribution of X and Y is given by P(x, y) = (x + y)/k, for x = 1, 2 and y = 0, 1, 2, find the value of k, P(X + Y = 3), the probability distribution of Z = X + 2Y, and the conditional probabilities P(y|1) and P(x|1).
Solution: ∑ P(x, y) = 1 ⟹ ∑_x ∑_y (x + y)/k = 1
⟹ (1/k)(1 + 2 + 3 + 2 + 3 + 4) = 1
⟹ k = 15.
P(X + Y = 3) = P(1, 2) + P(2, 1) = 3/15 + 3/15 = 2/5.
Z = X + 2Y ⟹ Z takes the values 1, 2, 3, 4, 5 and 6. Its probability distribution is

z:      1      2      3      4      5      6
f(z):   1/15   2/15   2/15   3/15   3/15   4/15
Conditional probability P(y|1) = P(1, y)/f(1) = P(1, y)/∑_y P(1, y) = (5/2) P(1, y):

y|1:      0     1     2
P(y|1):   1/6   1/3   1/2
Conditional probability P(x|1) = P(x, 1)/g(1) = P(x, 1)/∑_x P(x, 1) = 3 P(x, 1):

x|1:      1      2
P(x|1):   6/15   9/15
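All three answers can be verified with exact fractions; a small Python sketch (helper names are illustrative):

from collections import defaultdict
from fractions import Fraction as F

# P(x, y) = (x + y)/k for x in {1, 2}, y in {0, 1, 2}; k makes it sum to 1.
k = sum(x + y for x in (1, 2) for y in (0, 1, 2))          # k = 15
pmf = {(x, y): F(x + y, k) for x in (1, 2) for y in (0, 1, 2)}

# Distribution of Z = X + 2Y.
fz = defaultdict(F)
for (x, y), p in pmf.items():
    fz[x + 2 * y] += p
print({z: fz[z] for z in sorted(fz)})       # {1: 1/15, 2: 2/15, 3: 2/15, 4: 1/5, 5: 1/5, 6: 4/15}

# Conditional distribution P(y | X = 1).
f1 = sum(pmf[(1, y)] for y in (0, 1, 2))                   # f(1) = 6/15
print({y: pmf[(1, y)] / f1 for y in (0, 1, 2)})            # {0: 1/6, 1: 1/3, 2: 1/2}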
If X and Y are independent random variables with marginal distributions

x:     0     1     2          y:     1     2
f(x):  0.3   0.3   0.4        g(y):  0.3   0.7

then the joint distribution is P(x, y) = f(x)·g(y):

Y \ X    0       1       2       Sum
1        0.09    0.09    0.12    0.3
2        0.21    0.21    0.28    0.7
Similarly, if X and Y are independent with distributions

x:     2     5     7          y:     3     4     5
f(x):  1/2   1/4   1/4        g(y):  1/3   1/3   1/3

then
E(X) = ∑ x f(x) = 1 + 5/4 + 7/4 = 4,
E(Y) = ∑ y g(y) = 1 + 4/3 + 5/3 = 4, and
E(XY) = ∑_x ∑_y x y P(x, y) = 1 + 8/6 + 10/6 + 15/12 + 20/12 + 25/12 + 21/12 + 28/12 + 35/12 = 16 = E(X)E(Y).
Lecture-4: Problems on joint distributions of dependent random variables with covariance zero

Consider the joint distribution

X \ Y    −1     0      1
−1       0      0.1    0.1
0        0.2    0.2    0.2
1        0      0.1    0.1

with marginal distributions f(x): 0.2, 0.6, 0.2 (row sums) and

y:      −1     0      1
g(y):   0.2    0.4    0.4

Here E(X) = 0 and E(XY) = −0.1 + 0.1 = 0, so Cov(X, Y) = 0; but P(−1, −1) = 0 ≠ f(−1)·g(−1) = 0.04, hence X and Y are dependent although the covariance is zero.
Next, consider the joint distribution

X \ Y    −1     0      1
0        0      1/6    1/12
1        1/4    0      1/2

with f(0) = 1/4, f(1) = 3/4, and

y:      −1     0      1
g(y):   1/4    1/6    7/12

E(X) = ∑ x f(x) = 0 + 3/4 = 3/4,
E(Y) = ∑ y g(y) = −1/4 + 0 + 7/12 = 1/3,
E(XY) = ∑_x ∑_y x y P(x, y) = −1/4 + 1/2 = 1/4.
∴ Cov(X, Y) = 1/4 − (3/4)(1/3) = 0.
But P(0, −1) = 0, while f(0) = 1/4 and g(−1) = 1/4, so P(0, −1) ≠ f(0)·g(−1); hence X and Y are dependent even though Cov(X, Y) = 0.
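A short Python check of this example, confirming that the covariance vanishes while P(0, −1) ≠ f(0)·g(−1):

from fractions import Fraction as F

# Joint pmf of the second example above: (x, y) -> P(x, y).
pmf = {(0, -1): F(0), (0, 0): F(1, 6), (0, 1): F(1, 12),
       (1, -1): F(1, 4), (1, 0): F(0), (1, 1): F(1, 2)}

f = {x: sum(p for (a, y), p in pmf.items() if a == x) for x in (0, 1)}
g = {y: sum(p for (x, b), p in pmf.items() if b == y) for y in (-1, 0, 1)}

ex  = sum(x * p for (x, y), p in pmf.items())
ey  = sum(y * p for (x, y), p in pmf.items())
exy = sum(x * y * p for (x, y), p in pmf.items())

print(exy - ex * ey)                     # Cov(X, Y) = 0
print(pmf[(0, -1)], f[0] * g[-1])        # 0 vs 1/16 -> dependent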
Review questions:
1. For independent random variables, is the covariance zero?
2. Does zero covariance imply that X and Y are independent?
3. What is the correlation of independent random variables?
4. If X and Y are independent, then σ²_{X+Y} = ?
5. If the correlation is zero, are the random variables independent?
6. When are X and Y dependent random variables?
Stochastic process:
A stochastic process is a mathematical model that describes how random quantities change over time or space. It is a collection of random variables indexed by a parameter, often time, where each variable represents a different outcome.
More simply, we can define a stochastic process as follows.
Stochastic Process: The collection {x_t : t ∈ T} is called a stochastic process, where t is the parameter and the values x_t of the random variables are called states.
Stochastic processes are used in many fields, including:
• Mathematical finance
• Queuing processes
• Computer algorithm analysis
• Economic time series
• Image analysis
• Social networks
• Biomedical phenomena modeling
Classification of stochastic processes:
1. Discrete state discrete parameter process.
Example: Number of telephone calls on different days in a telephone booth.
2. Discrete state continuous parameter process.
Example: Number of telephone calls in different time intervals in a telephone booth.
3. Continuous state discrete parameter process.
Example: Average duration of telephone calls on different days in a telephone booth.
4. Continuous state continuous parameter process.
Example: Average duration of telephone calls in different time intervals in a telephone booth.
Markov-chain: A Markov-chain is a discrete state, discrete parameter process in which the state space is finite and the probability of any state depends on at most the immediately preceding state.
Examples:
1. Three boys A, B, C are throwing a ball to each other. A always throws the ball to B, and B always throws the ball to C, but C is just as likely to throw the ball to B as to A.
Clearly the state space is finite: at any stage the ball is with A, B or C, so S = {A, B, C}. The parameter is the throw number 1, 2, 3, ⋯.
Therefore the process (game) is a discrete state, discrete parameter process, and the probability that the ball is with A, B or C on the nth throw depends only on the probabilities of the same on the (n − 1)th throw.
Hence this game is a Markov-chain.
2. Suppose there are three brands on sale, say A, B, C. The consumers either buy the same brand for a few
months or change their brands every now and then. Consumer preferences are observed on a monthly basis.
5. The values of the Dow-Jones Index at the end of the nth week.
8. Number of jobs waiting at any time and the time a job has to spend in the system.
1. The joint probability distribution for two random variables X and Y is as follows; find the marginal distributions of X and Y.

X \ Y    −2     −1     0      1
1        0.1    0.2    0      0.3
2        0.2    0.1    0.1    0

Solution:
x:     1     2          y:     −2    −1    0     1
f(x):  0.6   0.4        g(y):  0.3   0.3   0.1   0.3
Probability vector: A row vector v = (v1, v2, ⋯, vn) is called a probability vector if every entry vi ≥ 0 and v1 + v2 + ⋯ + vn = 1.
Stochastic matrices: A square matrix P is said to be a stochastic matrix if each row of P is a probability vector.
Examples:
[0.3  0.2  0.5]
[ 0    1    0 ]    and    [0.3  0.7]
[0.2  0.2  0.6]           [ 1    0 ]
Regular stochastic matrices: A stochastic matrix P is said to be regular if all the entries of Pᵏ are nonzero for some positive integer k.
Fixed point or unique fixed probability vector: Let P be a regular stochastic matrix and v a probability vector such that vP = v; then v is called the unique fixed probability vector of P.
Note:
1. If v = (v1, v2, ⋯, vn) is a probability vector and P is an n×n stochastic matrix, then vP is also a probability vector.
2. If A and B are stochastic matrices of the same order, then AB is also a stochastic matrix.
3. If the principal diagonal of a stochastic matrix contains an entry 1, then the matrix is not regular.
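These definitions translate directly into small checks. Below is an illustrative Python sketch (the helper names is_probability_vector, is_stochastic and is_regular are assumptions of this sketch, not library functions); the regularity test simply looks for a power of P with all entries positive:

# Minimal checks for the notions above (illustrative helper names).
def is_probability_vector(v, tol=1e-9):
    return all(x >= -tol for x in v) and abs(sum(v) - 1) < tol

def is_stochastic(P):
    return len(P) == len(P[0]) and all(is_probability_vector(row) for row in P)

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def is_regular(P, max_power=20):
    # P is regular if some power P^k has all entries nonzero.
    Q = P
    for _ in range(max_power):
        if all(x > 0 for row in Q for x in row):
            return True
        Q = mat_mul(Q, P)
    return False

P = [[0, 1, 0], [1/6, 1/2, 1/3], [0, 2/3, 1/3]]
print(is_stochastic(P), is_regular(P))   # True True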
Problems:
1. Show that P =
[ 0    1    0 ]
[1/6  1/2  1/3]
[ 0   2/3  1/3]
is a regular stochastic matrix and find the associated unique fixed probability vector.
Solution: Computing P² gives
[1/6   1/2    1/3 ]
[1/12  23/36  5/18]
[1/9   5/9    1/3 ]
whose entries are all nonzero, hence P is regular.
Let v = (x, y, z) be the fixed probability vector; then vP = v together with x + y + z = 1 gives
x + y + z = 1
y/6 = x              ⟹ −x + y/6 = 0
x + y/2 + 2z/3 = y   ⟹ x − y/2 + 2z/3 = 0
y/3 + z/3 = z        ⟹ y/3 − 2z/3 = 0
Solving these, x = 0.1, y = 0.6, z = 0.3; therefore v = (0.1, 0.6, 0.3).
Review questions:
1. Determine whether the following vectors are probability vectors or not. Give reasons.
i) (0.6, 0.3),  ii) (0, 1, 0, 1),  iii) (2, −1, 0),  iv) (1/2, 0, 1/2, 0).
2. Determine whether the following matrices are stochastic matrices or not. Give reasons.
i)
[0  0  1]
[1  0  0]
[0  1  0]
ii)
[0.5  0.25  0.25]
[0.3  0.2   0.5 ]
iii)
[ 0    1 ]
[ 1    0 ]
[0.5  0.5]
iv)
[0   1   0]
[2   0  −1]
[3  −2   0]
Problems:
1. Show that P =
[ 0    1 ]
[0.3  0.7]
is regular and find the associated unique fixed probability vector.
Solution:
P² =
[ 0    1 ] [ 0    1 ]   [0.3   0.7 ]
[0.3  0.7] [0.3  0.7] = [0.21  0.79]
All the entries of P² are nonzero, hence P is regular.
Let v = (x, y) be the fixed probability vector; then vP = v, that is,
(x, y) P = (x, y) ⟹ 0.3y = x and x + 0.7y = y
⟹ x + y = 1 and −x + 0.3y = 0 ⟹ x = 3/13, y = 10/13.
∴ v = (3/13, 10/13).
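Since vPⁿ approaches the fixed vector for a regular P, the same answer can be found numerically by repeated multiplication; a minimal Python sketch:

# Power iteration sketch: for a regular P, v P^n approaches the fixed vector.
P = [[0.0, 1.0], [0.3, 0.7]]

v = [1.0, 0.0]                      # any starting probability vector works
for _ in range(100):
    v = [sum(v[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

print(v)                            # ~ [0.2308, 0.7692] = (3/13, 10/13)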
2. Show that A =
[ 0    1    0 ]
[ 0    0    1 ]
[1/2  1/2   0 ]
is regular.
Solution:
A² =
[ 0    0    1 ]
[0.5  0.5   0 ]
[ 0   0.5  0.5]
A³ =
[0.5   0.5   0  ]
[ 0    0.5   0.5]
[0.25  0.25  0.5]
A⁴ =
[ 0    0.5   0.5 ]
[0.25  0.25  0.5 ]
[0.25  0.5   0.25]
A⁵ =
[0.25   0.25   0.5 ]
[0.25   0.5    0.25]
[0.125  0.375  0.5 ]
Since the entries of A⁵ are all nonzero, A is regular.
3. Show that the following stochastic matrices are not regular.
i.
[1/2  1/4  1/4]
[ 0    1    0 ]
[1/2   0   1/2]
ii.
[1/2  1/2   0 ]
[1/2  1/2   0 ]
[1/4  1/4  1/2]
i. Here the principal diagonal contains the entry 1 (second row), so the second row of every power Pᵏ remains (0, 1, 0); no power of P has all entries nonzero, and P is not regular.
ii. Since
[a  b  0]   [a′  b′  0 ]   [a″  b″  0 ]
[c  d  0] × [c′  d′  0 ] = [c″  d″  0 ]
[e  f  g]   [e′  f′  g′]   [e″  f″  g″]
the zeros in the third column of P persist in every power Pᵏ; hence P is not regular.
4. Show that v = (p, q), where p + q = 1, is the fixed probability vector of the stochastic matrix P =
[1−q   q ]
[ p   1−p]
Solution: vP = (p(1−q) + qp, pq + q(1−p)) = (p − pq + pq, pq + q − pq) = (p, q) = v.
Since vP = v, v is the fixed probability vector of P.
Review questions:
1. Determine whether the following vectors are probability vectors or not. Give reasons.
i) (0.5, 0.4),  ii) (0, 1, 0, 1),  iii) (2, 0, −1, 0),  iv) (1/2, 1/3, 0, −1/5).
2. Determine whether the following matrices are stochastic matrices or not. Give reasons.
i)
[1/2  1/4  1/4]
[ 0    1    0 ]
[ 0   1/2  1/2]
ii)
[1/3  2/3]
[1/4  3/4]
iii)
[ 0    1 ]
[ 1    0 ]
[0.5  0.5]
iv)
[0   1   0]
[1   0  −1]
[1  −1   0]
Markov-chain: A Markov-chain is a discrete state, discrete parameter process in which the state space is finite and the probability of any state depends on at most the immediately preceding state.
The transition probability matrix of a Markov-chain is P = ⟨p_ij⟩, where p_ij is the probability of transition from the ith state to the jth state.
Higher transition probabilities: Let P be the transition probability matrix of a Markov-chain, and v0 be the initial probability vector of the chain.
Then after the first step the probability vector is v1 = v0P,
after the second step the probability vector is v2 = v1P = v0P²,
after the third step the probability vector is v3 = v2P = v0P³, and so on.
A Markov-chain is irreducible iff its transition probability matrix is regular.
Therefore, an irreducible Markov-chain is also called a regular Markov-chain.
Stationary distribution of a regular Markov-chain: In the long run, the probability of each state is given by the unique fixed probability vector of the regular transition probability matrix.
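A small numpy sketch of both ideas, using the ball-throwing chain of Problem 1 below as the example matrix: the probability vector is stepped forward as vn = v0 Pⁿ, and for large n it settles at the stationary distribution:

import numpy as np

# Transition matrix of the ball-throwing chain below (states A, B, C).
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0]])

v0 = np.array([0.0, 0.0, 1.0])         # C throws first

for n in range(1, 4):                  # v1, v2, v3 = v0 P^n
    print(n, v0 @ np.linalg.matrix_power(P, n))

# Long-run behaviour: v0 P^n approaches the unique fixed probability vector.
print(v0 @ np.linalg.matrix_power(P, 50))   # ~ (0.2, 0.4, 0.4)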
Problems
1. Three boys A, B, C are throwing a ball to each other. A always throws the ball to B, and B always throws the ball to C, but C is just as likely to throw the ball to B as to A. If C was the first person to throw the ball, find the probability that after three throws i) A has the ball, ii) B has the ball, iii) C has the ball.
Solution: The transition probability matrix is
        A    B    C
   A [  0    1    0 ]
P = B [  0    0    1 ]
   C [ 1/2  1/2   0 ]
Initial probability vector is v0 = (0, 0, 1), since C throws first.
v1 = v0P = (1/2, 1/2, 0),  v2 = v1P = (0, 1/2, 1/2),  v3 = v2P = (1/4, 1/4, 1/2).
Therefore, after three throws, A has the ball with probability 1/4, B with probability 1/4, and C with probability 1/2.
In 2014: Initial probability vector is v0 = (0, 0, 1), since he has a Santro car in 2014.
In 2015: v1 = v0P = (1/3, 1/3, 1/3)
In 2016: v2 = v1P = (1/9, 4/9, 4/9)
In 2017: v3 = v2P = (4/27, 7/27, 16/27).
Therefore the probability that he has i) a Santro in 2015 is 1/3, ii) a Maruti in 2016 is 1/9, iii) a Ford in 2017 is 7/27.
3. There are 2 white marbles in box A and 3 red marbles in box B. At each step of the process a marble is selected from each box and the two marbles selected are interchanged. Let the state a_i of the system denote that there are i red marbles in box A. a) Find the transition probability matrix. b) What is the probability that there are 2 red marbles in box A after 3 steps? c) In the long run, what is the probability that there are 2 red marbles in box A?
Solution: There are three states a0, a1 and a2:
a0: box A = {2W}, box B = {3R};   a1: box A = {1W, 1R}, box B = {1W, 2R};   a2: box A = {2R}, box B = {2W, 1R}.
If the system is in the state a0, then a white marble from box A and a red marble from box B must be selected, so the system moves to state a1.
If the system is in the state a1, it moves to state a0 if a red marble from box A and a white marble from box B are selected; the probability of such a selection is 1/6.
The system remains in state a1 if a white marble is selected from each box or a red marble is selected from each box; the probability of this is 1/6 + 2/6 = 1/2.
The system moves to state a2 if a white marble from box A and a red marble from box B are selected; the probability of this is 2/6 = 1/3.
If the system is in the state a2, it moves to state a1 if a red marble from box A and a white marble from box B are selected; the probability of this is 4/6 = 2/3.
The system remains in state a2 if a red marble is selected from each box; the probability of this is 2/6 = 1/3.
There is no chance of reaching state a0 from state a2 in one step.
Therefore the transition probability matrix is

         a0    a1    a2
   a0 [  0     1     0  ]
P = a1 [ 1/6   1/2   1/3]
   a2 [  0    2/3    1/3]
Initial probability vector is v0 = (1, 0, 0), because there is no red marble in box A initially.
After the 1st step: v1 = v0P = (0, 1, 0)
After the 2nd step: v2 = v1P = (1/6, 1/2, 1/3)
After the 3rd step: v3 = v2P = (1/12, 23/36, 5/18)
b) Probability that there are 2 red marbles in box A after 3 steps = probability of a2 after 3 steps = 5/18.
c) This P is the regular matrix of Problem 1 above, whose unique fixed probability vector is (0.1, 0.6, 0.3); hence in the long run the probability that there are 2 red marbles in box A is 0.3.
Review questions:
1. Is the state space of a Markov-chain discrete?
2. Can the state space of a Markov-chain be infinite?
3. In the transition probability matrix of a Markov-chain, what does the probability p2,3 indicate?
4. Is the transition probability matrix of a Markov-chain a stochastic matrix?
5. If a Markov-chain is irreducible, then what is its transition probability matrix?
6. When do we say a Markov-chain is irreducible?
Note: In the long run, the probability of each state is obtained from the unique fixed probability vector of the regular transition probability matrix.
1. A student's study habits are as follows. If he studies one night, he is 70% sure not to study the next night. On the other hand, if he does not study one night, he is 60% sure not to study the next night. In the long run, how often does he study?
Solution: Let S denote that he studies and N that he does not study. The transition probability matrix is
       S    N
P = S [0.3  0.7]
    N [0.4  0.6]
Let v = (x, y) be the fixed probability vector; then vP = v gives 0.3x + 0.4y = x, and with x + y = 1 this yields 0.4y = 0.7x ⟹ x = 4/11, y = 7/11.
Hence, in the long run he studies 4/11 of the nights.
2. A salesman's territory consists of 3 cities A, B and C. He never sells in the same city on successive days. If he sells in city A, then the next day he sells in city B. However, if he sells in either B or C, then the next day he is twice as likely to sell in city A as in the other city. In the long run, how often does he sell in each of the cities?
Solution: Let the states a, b, c indicate that the salesman sells in cities A, B and C respectively.
Then the transition probability matrix is
        a    b    c
   a [  0    1    0 ]
P = b [ 2/3   0   1/3]
   c [ 2/3  1/3   0 ]
Let v = (x, y, z) be the fixed probability vector; then vP = v, that is,
(x, y, z) P = (x, y, z)
⟹ (2/3)y + (2/3)z = x,   x + (1/3)z = y,   (1/3)y = z,   with x + y + z = 1
⟹ x = 2/5, y = 9/20, z = 3/20.
Therefore, in the long run he sells in city A with probability 2/5, in city B with probability 9/20 and in city C with probability 3/20.
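The stationary distribution can also be obtained numerically by solving vP = v together with the normalisation ∑v = 1; a minimal numpy sketch, set up as a least-squares system:

import numpy as np

# Stationary distribution: solve v P = v with sum(v) = 1.
P = np.array([[0.0, 1.0, 0.0],
              [2/3, 0.0, 1/3],
              [2/3, 1/3, 0.0]])

n = P.shape[0]
# (P^T - I) v = 0 plus the normalisation row sum(v) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
v, *_ = np.linalg.lstsq(A, b, rcond=None)
print(v)   # ~ [0.4, 0.45, 0.15] = (2/5, 9/20, 3/20)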
3. A man's smoking habits are as follows. If he smokes filter cigarettes one week, he switches to nonfilter cigarettes the next week with probability 0.2. On the other hand, if he smokes nonfilter cigarettes one week, there is a probability of 0.7 that he will smoke nonfilter cigarettes the next week as well. In the long run, how often does he smoke filter cigarettes?
Solution: Let the states s, n indicate that he smokes filter cigarettes and nonfilter cigarettes respectively.
The transition probability matrix is
       s    n
P = s [0.8  0.2]
    n [0.3  0.7]
Let v = (x, y) be the fixed probability vector; then vP = v, that is,
(x, y) P = (x, y) ⟹ 0.8x + 0.3y = x, with x + y = 1
⟹ x + y = 1 and −0.2x + 0.3y = 0 ⟹ x = 3/5, y = 2/5.
Hence, in the long run he smokes filter cigarettes with probability 3/5.
Review questions:
1. What is the stationary distribution of a regular Markov-chain?
2. If P is the transition probability matrix of a Markov-chain, what does each row of Pⁿ approach as n ⟶ ∞?
3. What vector does vP approach for any transition probability matrix P of a Markov-chain?
4. What vector does vP³ approach for any transition probability matrix P of a Markov-chain?
5. Can the transition probability matrix of an irreducible Markov-chain be nonregular?
6. Can the transition probability matrix of an irreducible Markov-chain contain an entry 1 in the principal diagonal?
Absorbing state: A state a_i is said to be absorbing if, once the chain reaches the state a_i, it remains in that state.
Example: Consider the transition probability matrix of a Markov-chain with state space (a1, a2, a3):
    [0.2  0.5  0.3]
P = [ 0    1    0 ]
    [0.5  0.5   0 ]
Clearly the state a2 is absorbing.
Recurrent state: A state a_i is said to be recurrent (or periodic) if the chain returns to the state a_i from the state a_i in a finite number of steps with probability 1. The minimum number of steps required to return is called the period.
Example: Consider the transition probability matrix of a Markov-chain with state space (a1, a2, a3):
    [0  0  1]
P = [1  0  0]
    [0  1  0]
Clearly each state is recurrent with period 3.
1. A player has Rs. 300. At each play of a game he loses Rs. 100 with probability 3/4 but wins Rs. 200 with probability 1/4. He stops playing if he has lost his Rs. 300 or has won Rs. 300. Find the transition probability matrix and identify the absorbing states.
Solution: Since he stops playing when he has lost his Rs. 300 or won Rs. 300, at any stage he may have 0, 100, 200, 300, 400, 500, or 600 or more rupees.
Let the states a0, a1, a2, a3, a4, a5 indicate that he has Rs. 0, 100, 200, 300, 400, 500 respectively, and a6 indicate that he has Rs. 600 or more.
If he is in any state a_i for i = 1, 2, 3, 4, 5, then at each play of the game he moves to state a_{i−1} with probability 3/4 or to state a_{i+2} with probability 1/4 (from a5 a win leads to a6, since Rs. 700 ≥ Rs. 600). But if he is in state a0 or a6 he stops playing, that is, he remains in the same state.
Transition probability matrix:

         a0    a1    a2    a3    a4    a5    a6
   a0 [  1     0     0     0     0     0     0 ]
   a1 [ 3/4    0     0    1/4    0     0     0 ]
   a2 [  0    3/4    0     0    1/4    0     0 ]
P = a3 [  0     0    3/4   0     0    1/4    0 ]
   a4 [  0     0     0    3/4    0     0    1/4]
   a5 [  0     0     0     0    3/4    0    1/4]
   a6 [  0     0     0     0     0     0     1 ]

The absorbing states are a0 and a6.
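The matrix and the absorbing states can be generated programmatically; a short Python sketch using exact fractions (state indices 0–6 mirror a0–a6):

from fractions import Fraction as F

# Build the gambler's transition matrix programmatically (7 states: Rs 0..600+).
n = 7
P = [[F(0)] * n for _ in range(n)]
P[0][0] = P[6][6] = F(1)                     # absorbing states a0 and a6
for i in range(1, 6):
    P[i][i - 1] += F(3, 4)                   # lose Rs. 100
    P[i][min(i + 2, 6)] += F(1, 4)           # win Rs. 200 (capped at a6)

absorbing = [i for i in range(n) if P[i][i] == 1]
print(absorbing)                             # [0, 6]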
Review questions:
1. Can a state be both transient and absorbing?
2. Can a state be both transient and recurrent?
3. Can a state be both recurrent and absorbing?
4. Find the period of each state if the T.P.M. is P =
[0  1]
[1  0]
5. If a Markov-chain has an absorbing state, is it irreducible?
6. If a Markov-chain has a transient state, is it regular?
7. If a Markov-chain has a recurrent state, is it irreducible?
1. Two boys B1, B2 and two girls G1, G2 are throwing a ball to each other. Each boy throws the ball to the other boy with probability 1/2 and to each girl with probability 1/4. Each girl throws the ball to each boy with probability 1/2 and never to the other girl. If a girl has the ball first, find the probability that after three throws each child has the ball.
Solution: Clearly there are 4 states B1, B2, G1, G2; that is, in any throw the ball is with B1, B2, G1 or G2 respectively.
Transition probability matrix:

          B1    B2    G1    G2
   B1 [   0    1/2   1/4   1/4]
P = B2 [  1/2   0    1/4   1/4]
   G1 [  1/2  1/2    0     0 ]
   G2 [  1/2  1/2    0     0 ]

Initial probability vector is v0 = (0, 0, 1, 0), taking G1 to throw first.
Then after the first throw the probability vector is v1 = v0P = (1/2, 1/2, 0, 0),
after the second throw the probability vector is v2 = v1P = (1/4, 1/4, 1/4, 1/4),
after the third throw the probability vector is v3 = v2P = (3/8, 3/8, 1/8, 1/8).
Therefore, after the third throw the probability that each boy has the ball is 3/8 and that each girl has the ball is 1/8.
2. Find the unique fixed probability vector of
    [ 0    0.5   0.5   0  ]
P = [ 0.5  0.25  0.25  0  ]
    [ 0     0     0    1  ]
    [ 0    0.5    0    0.5]
Solution: Let v = (x, y, z, w) be the fixed probability vector. Then vP = v gives
0.5y = x
0.5x + 0.25y + 0.5w = y
0.5x + 0.25y = z
z + 0.5w = w
Since x + y + z + w = 1, substituting w = 1 − x − y − z in the 4th equation gives
z = 0.5(1 − x − y − z) ⟹ 0.5x + 0.5y + 1.5z = 0.5.
From the 1st and 3rd equations, x = 0.5y and z = 0.5x + 0.25y = 0.5y; substituting these,
0.25y + 0.5y + 0.75y = 0.5 ⟹ 1.5y = 0.5 ⟹ y = 1/3,
so x = 1/6, z = 1/6 and w = 1 − 1/6 − 1/3 − 1/6 = 1/3.
∴ v = (1/6, 1/3, 1/6, 1/3).
3. A player has Rs. 200. He bets Rs. 100 at a time and wins Rs. 100 with probability 1/2. He stops playing if he loses the Rs. 200 or wins Rs. 400. Find the probability that the game lasts more than 4 plays.
Solution: Since he stops playing if he has lost his Rs. 200 or won Rs. 400, at any stage he may have 0, 100, 200, 300, 400, 500 or 600 rupees.
Let the states a0, a1, a2, a3, a4, a5 and a6 indicate that he has Rs. 0, 100, 200, 300, 400, 500 or 600 respectively.
If he is in any state a_i for i = 1, 2, 3, 4, 5, then at each bet he moves to state a_{i−1} with probability 1/2 or to state a_{i+1} with probability 1/2. But if he is in state a0 or a6 he stops playing, that is, he remains in the same state.
Transition probability matrix:

         a0    a1    a2    a3    a4    a5    a6
   a0 [  1     0     0     0     0     0     0 ]
   a1 [ 1/2    0    1/2    0     0     0     0 ]
   a2 [  0    1/2    0    1/2    0     0     0 ]
P = a3 [  0     0    1/2   0    1/2    0     0 ]
   a4 [  0     0     0    1/2    0    1/2    0 ]
   a5 [  0     0     0     0    1/2    0    1/2]
   a6 [  0     0     0     0     0     0     1 ]

The initial probability vector is v0 = (0, 0, 1, 0, 0, 0, 0), since he starts with Rs. 200. Then
v1 = v0P = (0, 1/2, 0, 1/2, 0, 0, 0),
v2 = v1P = (1/4, 0, 1/2, 0, 1/4, 0, 0),
v3 = v2P = (1/4, 1/4, 0, 3/8, 0, 1/8, 0),
v4 = v3P = (3/8, 0, 5/16, 0, 1/4, 0, 1/16).
The game lasts more than 4 plays exactly when he is not in an absorbing state after 4 plays, so the required probability is 1 − (3/8 + 1/16) = 9/16.
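Finally, a small numpy check of the answer: step v0 through four plays and subtract the probability already absorbed (a sketch, with the same state numbering as above):

import numpy as np

# Gambler's chain above: states a0..a6 (Rs 0..600), a0 and a6 absorbing.
P = np.zeros((7, 7))
P[0, 0] = P[6, 6] = 1.0
for i in range(1, 6):
    P[i, i - 1] = P[i, i + 1] = 0.5

v = np.zeros(7)
v[2] = 1.0                                   # start with Rs. 200 (state a2)

v4 = v @ np.linalg.matrix_power(P, 4)
print(1 - (v4[0] + v4[6]))                   # P(game lasts > 4 plays) = 0.5625 = 9/16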