PQT Unit 5: Markov Chains
Markov Process:
A random process {𝑋(𝑡)} is called a Markov process if the future behaviour of the process depends only on the current state, and not on the states in the past; i.e.,
𝑃[𝑋(𝑡𝑛+1) ≤ 𝑥𝑛+1 / 𝑋(𝑡𝑛) = 𝑥𝑛 , 𝑋(𝑡𝑛−1) = 𝑥𝑛−1 , … , 𝑋(𝑡0) = 𝑥0] = 𝑃[𝑋(𝑡𝑛+1) ≤ 𝑥𝑛+1 / 𝑋(𝑡𝑛) = 𝑥𝑛]
whenever 𝑡0 < 𝑡1 < … < 𝑡𝑛 < 𝑡𝑛+1 . The values 𝑋0 , 𝑋1 , 𝑋2 , … , 𝑋𝑛 , … are called the states of the process.
Example: board games played with dice, such as Snakes and Ladders.
Markov Chain:
A discrete-state Markov process is called a Markov chain. It satisfies the Markov property that, given the present state, the future and past states are independent:
𝑃[𝑋𝑛 = 𝑎𝑛 / 𝑋𝑛−1 = 𝑎𝑛−1 , 𝑋𝑛−2 = 𝑎𝑛−2 , … , 𝑋0 = 𝑎0] = 𝑃[𝑋𝑛 = 𝑎𝑛 / 𝑋𝑛−1 = 𝑎𝑛−1].
A process {𝑋𝑛 , 𝑛 ≥ 0} with this property is called a Markov chain.
The conditional probability 𝑃[𝑋𝑛 = 𝑗 / 𝑋𝑛−1 = 𝑖] = 𝑝𝑖𝑗 (𝑛 − 1, 𝑛) is called the one-step transition probability from state 𝑖 to state 𝑗 at the 𝑛th step. If this probability is the same at every step, i.e., 𝑝𝑖𝑗 (𝑛 − 1, 𝑛) does not depend on 𝑛, then the Markov chain is said to have stationary transition probabilities and the chain is called homogeneous.
Let {𝑋𝑛 , 𝑛 ≥ 0} be a homogeneous Markov chain with states 1, 2, … , 𝑚. Then the one-step transition probabilities form the transition probability matrix (TPM) 𝑃 = (𝑝𝑖𝑗 ), which satisfies
(i) 𝑝𝑖𝑗 ≥ 0
(ii) ∑𝑗=1..𝑚 𝑝𝑖𝑗 = 1 for 𝑖 = 1, 2, … , 𝑚 (i.e., each row total = 1)
Regular Matrix:
A stochastic matrix 𝑃 is said to be regular if all the entries of some power 𝑃𝑚 are strictly positive. For a regular homogeneous Markov chain:
a) lim𝑛→∞ 𝑃𝑛 exists, and each of its rows equals the steady-state vector 𝜋;
b) 𝑃(𝑛) = 𝑃𝑛 , i.e., the 𝑛-step TPM is the 𝑛th power of the one-step TPM.
Type: 1
Keywords: steady state, invariant, stationary, limiting distribution, long run, or a far-off trial such as "the 1000th trial". In all these cases solve 𝜋𝑃 = 𝜋 together with the condition that the components of 𝜋 sum to 1:
If the given TPM is a 2 × 2 matrix:
𝜋 = (𝜋1 , 𝜋2 )
𝜋𝑃 = 𝜋 . . . (1)
𝜋1 + 𝜋2 = 1 . . . (2)
If the given TPM is a 3 × 3 matrix:
𝜋 = (𝜋1 , 𝜋2 , 𝜋3 )
𝜋𝑃 = 𝜋 . . . (1)
𝜋1 + 𝜋2 + 𝜋3 = 1 . . . (2)
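These systems can also be solved mechanically. Below is a minimal sketch of the Type 1 recipe in Python (assuming numpy is available): since one equation of 𝜋𝑃 = 𝜋 is redundant, replace it by the normalisation ∑𝜋𝑖 = 1 and solve the resulting linear system. The function name steady_state is illustrative, not from the notes; the sample matrix anticipates the study-habits problem below.

```python
# Sketch of the Type 1 procedure: solve pi P = pi together with sum(pi) = 1.
import numpy as np

def steady_state(P):
    """Stationary distribution pi of a regular transition probability matrix P."""
    n = P.shape[0]
    # pi P = pi  <=>  (P.T - I) pi = 0; one equation is redundant,
    # so overwrite the last row with the normalisation sum(pi) = 1.
    A = P.T - np.eye(n)
    A[-1, :] = 1.0
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.3, 0.7],
              [0.4, 0.6]])          # TPM of the study-habits problem below
print(steady_state(P))              # [0.3636... 0.6363...] = (4/11, 7/11)
```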
A student's study habits are as follows. If he studies one night, he is 70% sure not to study the next night. If he does not study one night, he is only 60% sure not to study the next night either. (i) Find the TPM. (ii) How often does he study in the long run?
Solution:
Taking the states in the order 0 (studies) and 1 (does not study), the transition probability matrix (TPM) is
P = [0.3  0.7]
    [0.4  0.6]
Let the steady-state distribution be 𝜋 = (𝜋0 , 𝜋1 ), where
𝜋0 + 𝜋1 = 1 . . . (1)
𝜋𝑃 = 𝜋 gives
0.3 𝜋0 + 0.4 𝜋1 = 𝜋0 . . . (2)
0.7 𝜋0 + 0.6 𝜋1 = 𝜋1 . . . (3)
(2) ⇒ 0.3 𝜋0 − 𝜋0 = −0.4 𝜋1
⇒ −0.7 𝜋0 = −0.4 𝜋1
⇒ 𝜋0 = (0.4/0.7) 𝜋1 . . . (4)
Substituting (4) in (1):
(0.4/0.7) 𝜋1 + 𝜋1 = 1
⇒ 𝜋1 (0.4/0.7 + 1) = 1
⇒ 𝜋1 (1.1/0.7) = 1
⇒ 𝜋1 = 7/11
(4) ⇒ 𝜋0 = (0.4/0.7) × (7/11) = 4/11
The steady-state distribution is (𝜋0 , 𝜋1 ) = (4/11 , 7/11).
∴ The probability that he studies in the long run is 𝜋0 = 4/11.
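As a quick numerical cross-check (a sketch, assuming numpy): for a regular chain every row of 𝑃𝑛 approaches the steady-state vector, so a moderately high power of 𝑃 should reproduce (4/11, 7/11) in both rows.

```python
import numpy as np

P = np.array([[0.3, 0.7],
              [0.4, 0.6]])
# Both rows of P^50 converge to the steady-state vector (4/11, 7/11).
print(np.linalg.matrix_power(P, 50))   # [[0.3636 0.6364] [0.3636 0.6364]]
```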
A salesman's territory consists of three cities, A, B and C. He never sells in the same city on successive days. If he sells in city A, then the next day he sells in city B. However, if he sells in either B or C, then the next day he is twice as likely to sell in city A as in the other city. How often does he sell in each of the cities in the long run?
Solution:
Taking the states in the order (A, B, C), the transition probability matrix (TPM) is
P = [ 0    1    0 ]
    [2/3   0   1/3]
    [2/3  1/3   0 ]
Let 𝜋 = (𝜋1 , 𝜋2 , 𝜋3 ) with
𝜋1 + 𝜋2 + 𝜋3 = 1 . . . (1)
𝜋𝑃 = 𝜋 gives
(2/3) 𝜋2 + (2/3) 𝜋3 = 𝜋1 . . . (2)
𝜋1 + (1/3) 𝜋3 = 𝜋2 . . . (3)
(1/3) 𝜋2 = 𝜋3 . . . (4) ⇒ 𝜋2 = 3𝜋3
(2) ⇒ (2/3)(𝜋2 + 𝜋3 ) = 𝜋1
⇒ 𝜋2 + 𝜋3 = (3/2) 𝜋1
(1) ⇒ 𝜋1 + (3/2) 𝜋1 = 1
⇒ (5/2) 𝜋1 = 1
⇒ 𝜋1 = 2/5
(3) ⇒ 2/5 + (1/3) 𝜋3 = 3𝜋3
⇒ 2/5 = 3𝜋3 − (1/3) 𝜋3 = (8/3) 𝜋3
⇒ 𝜋3 = (2/5) × (3/8) = 3/20
(4) ⇒ 𝜋2 = 3 × 3/20 = 9/20
The steady-state distribution is (𝜋1 , 𝜋2 , 𝜋3 ) = (2/5 , 9/20 , 3/20); in the long run he sells in city A on 40% of the days, in city B on 45% and in city C on 15%.
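An alternative to the hand elimination above (a sketch, not the method of the notes): 𝜋 is the left eigenvector of 𝑃 for eigenvalue 1, i.e. an eigenvector of the transpose of 𝑃, rescaled so its components sum to 1. Assumes numpy.

```python
import numpy as np

P = np.array([[0.0, 1.0, 0.0],
              [2/3, 0.0, 1/3],
              [2/3, 1/3, 0.0]])
w, v = np.linalg.eig(P.T)                       # eigenvectors of P.T = left eigenvectors of P
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])  # eigenvector for eigenvalue 1
pi /= pi.sum()                                  # normalise so the components sum to 1
print(pi)                                       # [0.40 0.45 0.15] = (2/5, 9/20, 3/20)
```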
Given that any highly distorted signal is not recognizable, find the TPM and the long-run fraction of signals that are highly distorted.
Solution:
The transition probability matrix (TPM) is
P = [20/23   3/23]
    [14/15   1/15]
Let 𝜋 = (𝜋1 , 𝜋2 ) with
𝜋1 + 𝜋2 = 1 . . . (1)
𝜋𝑃 = 𝜋 gives
(20/23) 𝜋1 + (14/15) 𝜋2 = 𝜋1 . . . (2)
(3/23) 𝜋1 + (1/15) 𝜋2 = 𝜋2 . . . (3)
(3) ⇒ (3/23) 𝜋1 = 𝜋2 − (1/15) 𝜋2
⇒ (3/23) 𝜋1 = (14/15) 𝜋2
⇒ 𝜋1 = (14/15) × (23/3) 𝜋2
⇒ 𝜋1 = (322/45) 𝜋2 . . . (4)
(1) ⇒ (322/45) 𝜋2 + 𝜋2 = 1
⇒ (322/45 + 1) 𝜋2 = 1
⇒ (367/45) 𝜋2 = 1
⇒ 𝜋2 = 45/367
(4) ⇒ 𝜋1 = (322/45) × (45/367) = 322/367
The fraction of signals that are highly distorted is 𝜋2 = 45/367.
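The fractions here are awkward by hand; exact rational arithmetic with Python's fractions module reproduces the answer. This is a sketch; the state labels (1 = recognizable, 2 = highly distorted) are an assumption read off from the final answer.

```python
from fractions import Fraction as F

p12 = F(3, 23)            # one-step prob. state 1 -> state 2 (from the TPM)
p22 = F(1, 15)            # one-step prob. state 2 -> state 2
ratio = (1 - p22) / p12   # pi1/pi2 = (14/15)/(3/23) = 322/45, as in step (4)
pi2 = 1 / (ratio + 1)     # normalisation pi1 + pi2 = 1
pi1 = ratio * pi2
print(pi1, pi2)           # 322/367 45/367
```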
Type: 2
Distribution after a given number of trials: compute 𝑃(𝑛) = 𝑃(𝑛−1) 𝑃 step by step, starting from the given initial distribution.
1. Suppose that the probability of a dry day following a rainy day is 1/3 and that the probability of a rainy day following a dry day is 1/2. Given that May 1 is a dry day, find the probability that (i) May 3 is also a dry day (ii) May 5 is also a dry day.
Solution:
Taking the states in the order (dry, rainy), the transition probability matrix (TPM) is
P = [1/2  1/2]
    [1/3  2/3]
Since May 1 is a dry day, the state distribution on May 1 is
𝑃(1) = (1  0)
𝑃(2) = 𝑃(1) 𝑃 = (1  0) 𝑃 = (1/2  1/2)
𝑃(3) = 𝑃(2) 𝑃 = (1/2  1/2) 𝑃 = (5/12  7/12)
∴ P(May 3 is a dry day) = 5/12
𝑃(4) = 𝑃(3) 𝑃 = (5/12  7/12) 𝑃 = (29/72  43/72)
𝑃(5) = 𝑃(4) 𝑃 = (29/72  43/72) 𝑃 = (173/432  259/432)
∴ P(May 5 is a dry day) = 173/432
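A quick numerical check of the recursion 𝑃(𝑛) = 𝑃(𝑛−1) 𝑃 (a sketch, assuming numpy; state order (dry, rainy) as above):

```python
import numpy as np

P = np.array([[1/2, 1/2],
              [1/3, 2/3]])
p = np.array([1.0, 0.0])        # May 1 is dry with certainty
for day in range(2, 6):         # propagate the distribution to May 2, 3, 4, 5
    p = p @ P
    if day in (3, 5):
        print(day, p[0])        # May 3: 0.4166... = 5/12; May 5: 0.40046... = 173/432
```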
2. A welding process in a manufacturing firm is observed daily; state 0 denotes that the process is running in the firm and state 1 denotes that the process is not running. Suppose that the TPM is
P = [0.8  0.2]
    [0.3  0.7]
(i) Find the probability that the welding process will run on the third day from today, given that the welding process is running today. (ii) Find the probability that the welding process will run on the third day from today if the initial probabilities of states 0 and 1 are equally likely.
Solution:
The transition probability matrix (TPM) is
P = [0.8  0.2]
    [0.3  0.7]
(i) Since the process is running today,
𝑃(0) = (1  0)
𝑃(1) = 𝑃(0) 𝑃 = (0.8  0.2)
𝑃(2) = 𝑃(1) 𝑃 = (0.7  0.3)
𝑃(3) = 𝑃(2) 𝑃 = (0.65  0.35)
∴ P(the process runs on the third day) = 0.65
(ii) With states 0 and 1 equally likely initially,
𝑃(0) = (0.5  0.5)
𝑃(1) = 𝑃(0) 𝑃 = (0.55  0.45)
𝑃(2) = 𝑃(1) 𝑃 = (0.575  0.425)
𝑃(3) = 𝑃(2) 𝑃 = (0.5875  0.4125)
∴ P(the process runs on the third day) = 0.5875
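Both parts differ only in the initial vector, which suggests a small helper (a sketch assuming numpy; after_n_days is an illustrative name, not from the notes):

```python
import numpy as np

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def after_n_days(p0, n):
    """State distribution n days after starting from distribution p0."""
    return p0 @ np.linalg.matrix_power(P, n)

print(after_n_days(np.array([1.0, 0.0]), 3))   # (i)  -> [0.65   0.35  ]
print(after_n_days(np.array([0.5, 0.5]), 3))   # (ii) -> [0.5875 0.4125]
```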
A man either drives a car or catches a train to go to his office each day. He never goes two days in a row by train; but if he drives one day, then the next day he is just as likely to drive again as to travel by train. Suppose that on the first day of the week the man tossed a fair die and drove to work if and only if a 6 appeared. Find (i) the probability that he takes the train on the third day, and (ii) the probability that he drives to work in the long run.
Solution:
Taking the states in the order (train, drive), the transition probability matrix (TPM) is
P = [ 0    1 ]
    [1/2  1/2]
(i) P(he drives on the first day) = P(getting a 6 on the die) = 1/6 (given)
P(he catches the train on the first day) = 1 − 1/6 = 5/6
𝑃(1) = (5/6  1/6)
𝑃(2) = 𝑃(1) 𝑃 = (5/6  1/6) 𝑃 = (1/12  11/12)
𝑃(3) = 𝑃(2) 𝑃 = (1/12  11/12) 𝑃 = (11/24  13/24)
∴ P(he catches the train on the third day) = 11/24
(ii) For the long run, let 𝜋 = (𝜋1 , 𝜋2 ) with
𝜋1 + 𝜋2 = 1 . . . (1)
𝜋𝑃 = 𝜋 gives
(1/2) 𝜋2 = 𝜋1 . . . (2) ⇒ 𝜋2 = 2𝜋1
𝜋1 + (1/2) 𝜋2 = 𝜋2 . . . (3)
(1) ⇒ 𝜋1 + 2𝜋1 = 1
⇒ 3𝜋1 = 1
⇒ 𝜋1 = 1/3
(2) ⇒ 𝜋2 = 2 × 1/3 = 2/3
∴ P(he drives to work in the long run) = 2/3
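The long-run answer can also be sanity-checked by simulation (a Monte Carlo sketch; states 0 = train, 1 = drive as above). The fraction of simulated days on which he drives should approach 𝜋2 = 2/3.

```python
import random

# Rows of the TPM: P[state] = (prob. of train next, prob. of drive next).
P = [(0.0, 1.0),     # after a train day he always drives
     (0.5, 0.5)]     # after a drive day: train or drive equally likely

state, drive_days, N = 0, 0, 100_000
for _ in range(N):
    state = 0 if random.random() < P[state][0] else 1
    drive_days += (state == 1)
print(drive_days / N)   # ~ 0.667, i.e. close to 2/3
```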
A fair die is tossed repeatedly. If 𝑋𝑛 denotes the maximum of the numbers occurring in the first 𝑛 tosses, find the TPM of the Markov chain. Find also 𝑃2 and 𝑃[𝑋2 = 6].
Solution:
From state 𝑖 the maximum stays at 𝑖 if the next toss shows 𝑖 or less (probability 𝑖/6), and moves to each 𝑗 > 𝑖 with probability 1/6. Hence

          [1  1  1  1  1  1]
          [0  2  1  1  1  1]
P = (1/6) [0  0  3  1  1  1]
          [0  0  0  4  1  1]
          [0  0  0  0  5  1]
          [0  0  0  0  0  6]

𝑃2 = 𝑃 ⋅ 𝑃

            [1  3  5   7   9  11]
            [0  4  5   7   9  11]
P² = (1/36) [0  0  9   7   9  11]
            [0  0  0  16   9  11]
            [0  0  0   0  25  11]
            [0  0  0   0   0  36]

Since the initial probabilities are not given, assume all six states are equally likely:
𝑃[𝑋2 = 6] = 𝑝16^(2) 𝑃(𝑋0 = 1) + 𝑝26^(2) 𝑃(𝑋0 = 2) + 𝑝36^(2) 𝑃(𝑋0 = 3) + 𝑝46^(2) 𝑃(𝑋0 = 4) + 𝑝56^(2) 𝑃(𝑋0 = 5) + 𝑝66^(2) 𝑃(𝑋0 = 6)
= (11/36)(1/6) + (11/36)(1/6) + (11/36)(1/6) + (11/36)(1/6) + (11/36)(1/6) + (36/36)(1/6)
∴ 𝑃[𝑋2 = 6] = 91/216
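A sketch (assuming numpy) that rebuilds this TPM from the rule "stay at 𝑖 with probability 𝑖/6, jump to each 𝑗 > 𝑖 with probability 1/6" and confirms 𝑃[𝑋2 = 6] = 91/216 under the uniform initial distribution:

```python
import numpy as np

P = np.zeros((6, 6))
for i in range(6):              # states 1..6 stored at indices 0..5
    P[i, i] = (i + 1) / 6       # next toss does not exceed the current maximum
    P[i, i + 1:] = 1 / 6        # next toss exceeds it: 1/6 for each larger value
P2 = P @ P                      # two-step transition probabilities
p0 = np.full(6, 1 / 6)          # X0 equally likely to be any of 1..6
print((p0 @ P2)[5])             # 0.42129... = 91/216
```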
3. A gambler has Rs. 2/-. He bets Rs. 1/- at a time and wins Rs. 1/- with probability 1/2. He stops playing if he loses Rs. 2/- or wins Rs. 4/-. Write down the TPM of the associated Markov chain. What is the probability that he has lost his money at the end of his 5th play? What is the probability that the game lasts more than 7 plays?
Solution:
Let 𝑋𝑛 represent the amount with the player at the end of the 𝑛th round of the play. Initially he has Rs. 2, he bets Rs. 1 at a time, and the game ends when he reaches Rs. 0 (loses Rs. 2) or Rs. 6 (wins Rs. 4). So the states are 0, 1, 2, 3, 4, 5, 6, with 0 and 6 absorbing. If he has Rs. 1, then with probability 1/2 he loses the bet and has Rs. 0, i.e., 𝑝10 = 1/2, and so on.

    [ 1    0    0    0    0    0    0 ]
    [1/2   0   1/2   0    0    0    0 ]
    [ 0   1/2   0   1/2   0    0    0 ]
P = [ 0    0   1/2   0   1/2   0    0 ]
    [ 0    0    0   1/2   0   1/2   0 ]
    [ 0    0    0    0   1/2   0   1/2]
    [ 0    0    0    0    0    0    1 ]

Since he starts with Rs. 2, 𝑃(0) = (0, 0, 1, 0, 0, 0, 0).
𝑃(1) = 𝑃(0) 𝑃 = (0, 1/2, 0, 1/2, 0, 0, 0)
𝑃(2) = 𝑃(1) 𝑃 = (1/4, 0, 1/2, 0, 1/4, 0, 0)
𝑃(3) = 𝑃(2) 𝑃 = (1/4, 1/4, 0, 3/8, 0, 1/8, 0)
𝑃(4) = 𝑃(3) 𝑃 = (3/8, 0, 5/16, 0, 1/4, 0, 1/16)
𝑃(5) = 𝑃(4) 𝑃 = (3/8, 5/32, 0, 9/32, 0, 1/8, 1/16)
∴ P(the man has lost his money at the end of his 5th play) = 3/8.
𝑃(6) = 𝑃(5) 𝑃 = (29/64, 0, 7/32, 0, 13/64, 0, 1/8)
𝑃(7) = 𝑃(6) 𝑃 = (29/64, 7/64, 0, 27/128, 0, 13/128, 1/8)
P(the game lasts more than 7 plays)
= P(the system is neither in state 0 nor in state 6 at the end of the 7th round)
= 7/64 + 0 + 27/128 + 0 + 13/128
= 27/64
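A sketch (assuming numpy) of the same computation: states 0 to 6 are the rupees in hand, 0 and 6 are absorbing, and each play moves one rupee down or up with probability 1/2 each.

```python
import numpy as np

P = np.zeros((7, 7))
P[0, 0] = P[6, 6] = 1.0              # absorbing: ruined, or won Rs. 4 (holds Rs. 6)
for i in range(1, 6):
    P[i, i - 1] = P[i, i + 1] = 0.5  # lose or win Rs. 1 with equal probability
p = np.zeros(7); p[2] = 1.0          # he starts with Rs. 2
for n in range(1, 8):
    p = p @ P
    if n == 5:
        print(p[0])                  # 0.375 = 3/8, ruined by the 5th play
print(p[1:6].sum())                  # 0.421875 = 27/64, game still on after 7 plays
```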
Note: 𝑃𝑗 (𝑛) = 𝑃[𝑋𝑛 = 𝑗] = ∑𝑖=0..𝑘 𝑃[𝑋𝑛 = 𝑗 / 𝑋0 = 𝑖] 𝑃[𝑋0 = 𝑖] (total probability over the initial state).
The TPM of a Markov chain with three states 1, 2 and 3 is
P = [0.1  0.5  0.4]
    [0.6  0.2  0.2]
    [0.3  0.4  0.3]
and the initial distribution is 𝑃(0) = (0.7  0.2  0.1). Find (i) 𝑃[𝑋3 = 2, 𝑋2 = 3, 𝑋1 = 3, 𝑋0 = 2] (ii) 𝑃[𝑋3 = 1 / 𝑋2 = 3] (iii) 𝑃[𝑋2 = 3]
Solution:
(i) 𝑃[𝑋3 = 2, 𝑋2 = 3, 𝑋1 = 3, 𝑋0 = 2]
= 𝑝32 𝑝33 𝑝23 𝑃[𝑋0 = 2] = (0.4)(0.3)(0.2)(0.2)
= 0.0048
(ii) 𝑃[𝑋3 = 1 / 𝑋2 = 3] = 𝑝31^(3−2) = 𝑝31^(1) = 𝑝31 = 0.3
(iii) 𝑃(𝑋2 = 3) = 𝑝13^(2) 𝑃(𝑋0 = 1) + 𝑝23^(2) 𝑃(𝑋0 = 2) + 𝑝33^(2) 𝑃(𝑋0 = 3) … (1)
To compute 𝑝13^(2), 𝑝23^(2), 𝑝33^(2), find 𝑃2 :
P² = [0.43  0.31  0.26]
     [0.24  0.42  0.34]
     [0.36  0.35  0.29]
(1) ⇒ 𝑃[𝑋2 = 3] = (0.26)(0.7) + (0.34)(0.2) + (0.29)(0.1) = 0.182 + 0.068 + 0.029
∴ 𝑃[𝑋2 = 3] = 0.279
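A short check of (iii) (a sketch assuming numpy): form 𝑃2 and combine its third column with the initial distribution, exactly as in (1).

```python
import numpy as np

P = np.array([[0.1, 0.5, 0.4],
              [0.6, 0.2, 0.2],
              [0.3, 0.4, 0.3]])
p0 = np.array([0.7, 0.2, 0.1])   # initial distribution P(0)
P2 = P @ P
print(P2[:, 2])                  # two-step probs into state 3: [0.26 0.34 0.29]
print((p0 @ P2)[2])              # P[X2 = 3] = 0.279
```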
The TPM of a Markov chain {𝑋𝑛 , 𝑛 ≥ 0} with three states 0, 1 and 2 is
P = [3/4  1/4   0 ]
    [1/4  1/2  1/4]
    [ 0   3/4  1/4]
and the initial state distribution of the chain is 𝑃(𝑋0 = 𝑖) = 1/3 ; 𝑖 = 0, 1, 2. Find (i) 𝑃[𝑋2 = 2] (ii) 𝑃[𝑋3 = 1, 𝑋2 = 2, 𝑋1 = 1, 𝑋0 = 2]
Solution:
P = [3/4  1/4   0 ]
    [1/4  1/2  1/4]
    [ 0   3/4  1/4]
𝑃(𝑋0 = 𝑖) = 1/3 ; 𝑖 = 0, 1, 2
∴ 𝑃(𝑋0 = 0) = 1/3 , 𝑃(𝑋0 = 1) = 1/3 , 𝑃(𝑋0 = 2) = 1/3
(ii) 𝑃[𝑋3 = 1, 𝑋2 = 2, 𝑋1 = 1, 𝑋0 = 2]
= 𝑃(𝑋3 = 1 / 𝑋2 = 2) 𝑃(𝑋2 = 2 / 𝑋1 = 1) 𝑃(𝑋1 = 1 / 𝑋0 = 2) 𝑃(𝑋0 = 2)
= (3/4)(1/4)(3/4)(1/3)
∴ 𝑃(𝑋3 = 1, 𝑋2 = 2, 𝑋1 = 1, 𝑋0 = 2) = 3/64
(i) 𝑃[𝑋2 = 2] = 𝑝02^(2) 𝑃(𝑋0 = 0) + 𝑝12^(2) 𝑃(𝑋0 = 1) + 𝑝22^(2) 𝑃(𝑋0 = 2) … (1)
Find 𝑃2 :
            [10  5  1]
P² = (1/16) [ 5  8  3]
            [ 3  9  4]
(1) ⇒ 𝑃(𝑋2 = 2) = (1/16)(1/3) + (3/16)(1/3) + (4/16)(1/3)
= (1/3)(8/16) = 1/6
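And a corresponding check of (i) (a sketch assuming numpy):

```python
import numpy as np

P = np.array([[3/4, 1/4, 0.0],
              [1/4, 1/2, 1/4],
              [0.0, 3/4, 1/4]])
P2 = P @ P
print(P2 * 16)          # matches the hand-computed table (1/16)[10 5 1; 5 8 3; 3 9 4]
p0 = np.full(3, 1/3)    # uniform initial distribution
print((p0 @ P2)[2])     # P[X2 = 2] = 0.1666... = 1/6
```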
State diagram: a directed graph whose nodes are the states and whose arrows carry the one-step transition probabilities.
Accessibility:
A state 𝑗 is said to be accessible from a state 𝑖 if the chain starting in 𝑖 can reach 𝑗 in some finite number of steps, i.e., 𝑝𝑖𝑗^(𝑛) > 0 for some 𝑛 ≥ 0.
Communication:
If two states i and j are accessible from each other, then they are said to be
irreducible.
Transient state:
A state 𝑖 is transient if and only if, starting from 𝑖, there is a positive probability that the process may never return to 𝑖.
Absorbing state:
A state 𝑖 is called absorbing if 𝑝𝑖𝑖 = 1, so that once the chain enters 𝑖 it never leaves it.
Ergodic state:
A state is said to be ergodic if it is positive recurrent and aperiodic. A Markov chain is said to be ergodic if all its states are ergodic, i.e., all its states are positive recurrent and aperiodic.