Unit 5
Markov Chain:
If P{Xn = an / Xn−1 = an−1, Xn−2 = an−2, …, X0 = a0} = P{Xn = an / Xn−1 = an−1} for all n, then the process {Xn}, n = 0, 1, 2, … is called a Markov chain. Here a1, a2, …, an are called the states of the Markov chain.
One-Step Transition Probability:
The conditional probability P[Xn = aj / Xn−1 = ai] is called the one-step transition probability from state ai to state aj at the nth step.
Homogeneous Markov Chain:
If the one-step transition probabilities do not depend on the step, i.e., Pij(n−1, n) = Pij(m−1, m), the Markov chain is called a homogeneous Markov chain.
n-Step Transition Probability:
The conditional probability that the process is in state aj at step n, given that it was in state ai at step 0, that is, P[Xn = aj / X0 = ai], is called the n-step transition probability and is denoted by Pij(n). For a homogeneous chain this does not depend on the starting step: Pij(n) = P[Xn+k = j / Xk = i] for every k.
Chapman–Kolmogorov Theorem:
If P is the t.p.m. of a homogeneous Markov chain, then the n-step t.p.m. P(n) is equal to P^n, i.e., [Pij(n)] = [Pij]^n.
Regular Matrix and Regular Markov Chain:
A stochastic matrix P is said to be a regular matrix if all the entries of P^m (for some positive integer m) are positive. A homogeneous Markov chain is said to be regular if its t.p.m. is regular.
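The theorem is easy to check numerically. Below is a minimal Python sketch (assuming NumPy is available; the 2-state matrix is an illustrative example, not one taken from these notes):

```python
import numpy as np

# Illustrative t.p.m. of a homogeneous 2-state chain (values assumed for the demo)
P = np.array([[0.6, 0.4],
              [0.2, 0.8]])

# Chapman-Kolmogorov: the n-step t.p.m. P(n) is the ordinary matrix power P^n
P3 = np.linalg.matrix_power(P, 3)
print(P3)

# Regularity: some power P^m has all entries positive (here already m = 1)
print((P > 0).all())
```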
If a homogeneous Markov chain is regular, then every sequence of state probability distributions approaches a unique fixed distribution, called the steady-state distribution of the Markov chain.
i.e., lim(n→∞) {P(n)} = π, where the state probability distribution at step n, P(n) = {p1(n), p2(n), …, pk(n)}, and the stationary distribution π = (π1, π2, …, πk) are row vectors.
If P is the t.p.m. of a regular Markov chain and π = (π1, π2, …, πk) is the steady-state distribution, then πP = π and π1 + π2 + ⋯ + πk = 1.
Classification of States of a Markov Chain:
(i) Irreducible Markov Chain:
A Markov chain is said to be irreducible if Pij(n) > 0 for some n and for all i and j; then every state can be reached from every other state. The t.p.m. of an irreducible chain is an irreducible matrix. Otherwise the chain is said to be reducible.
(ii) Return State:
State i of a Markov chain is called a return state if Pii(n) > 0 for some n > 1, i.e., the system comes back to i starting from i.
(iii) Period:
The period di of a return state i is defined as the greatest common divisor of all integers m such that Pii(m) > 0, i.e., di = G.C.D.{m : Pii(m) > 0}. State i is said to be periodic with period di if di > 1 and aperiodic if di = 1.
(iv) Persistent, Transient, Non-null Persistent and Null Persistent:
A state i is said to be persistent or recurrent if the eventual return to state i is certain, i.e., if Fii = 1. The state i is said to be transient if the return to state i is uncertain, i.e., if Fii < 1.
The state i is said to be non-null persistent if its mean recurrence time μii is finite, i.e., μii < ∞, and null persistent if μii = ∞.
NOTE:
If the state i is persistent non-null, then lim(n→∞) Pii(n) > 0, and if the state i is persistent null or transient, then lim(n→∞) Pii(n) = 0.
Ergodic State:
A persistent, non-null and aperiodic state i is called an ergodic state.
Absorbing State:
A state i is called an absorbing state if and only if Pii = 1 and Pij = 0 if i ≠ j.
Problem 1: The transition probability matrix of a Markov chain {Xn}, n = 1, 2, … having 3 states 1, 2 and 3 is

P = [ 0.1 0.5 0.4
      0.6 0.2 0.2
      0.3 0.4 0.3 ]

and the initial distribution is P(0) = (0.7, 0.2, 0.1). Find (i) P(X2 = 3), (ii) P(X3 = 2, X2 = 3, X1 = 3, X0 = 2).
Solution:
Given P as above,
P(2) = P² = P · P
   = [ 0.43 0.31 0.26
       0.24 0.42 0.34
       0.36 0.35 0.29 ]

The following multiplication rules are used (by the Markov property, conditioning on the whole past reduces to conditioning on the most recent state):
(iii) P(A ∩ B) = P(A/B) · P(B)
(iv) P(A ∩ B ∩ C) = P(A/B) · P(B/C) · P(C)
(i) P(X2 = 3) = P(X2 = 3/X0 = 1) · P(X0 = 1) + P(X2 = 3/X0 = 2) · P(X0 = 2) + P(X2 = 3/X0 = 3) · P(X0 = 3)
= P13(2)(0.7) + P23(2)(0.2) + P33(2)(0.1)
= (0.26)(0.7) + (0.34)(0.2) + (0.29)(0.1)
= 0.182 + 0.068 + 0.029 = 0.279
(ii) P(X3 = 2, X2 = 3, X1 = 3, X0 = 2)
= P(X3 = 2/X2 = 3) · P(X2 = 3/X1 = 3) · P(X1 = 3/X0 = 2) · P(X0 = 2)
= P32 · P33 · P23 · P(X0 = 2)
= (0.4)(0.3)(0.2)(0.2) = 0.0048
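Both answers can be verified with a short NumPy sketch (0-based indices stand for the states 1, 2, 3; the variable names are mine):

```python
import numpy as np

# t.p.m. and initial distribution from Problem 1
P = np.array([[0.1, 0.5, 0.4],
              [0.6, 0.2, 0.2],
              [0.3, 0.4, 0.3]])
p0 = np.array([0.7, 0.2, 0.1])

# (i) the distribution after two steps is p0 . P^2; take the component for state 3
print((p0 @ np.linalg.matrix_power(P, 2))[2])    # 0.279

# (ii) chain rule: P(X0=2) * P(2->3) * P(3->3) * P(3->2)
print(p0[1] * P[1, 2] * P[2, 2] * P[2, 1])       # 0.0048
```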
Problem 2: The t.p.m. of a Markov process {Xn}, n = 1, 2, 3, … having 3 states 0, 1 and 2 is given, with initial distribution P(X0 = i) = 1/3 for i = 0, 1, 2. Find (i) P(X2 = 2) and (ii) P(X2 = 2, X1 = 1, X0 = 2).
Solution:
(i) P{X2 = 2} = P(X2 = 2/X0 = 0) · P(X0 = 0) + P(X2 = 2/X0 = 1) · P(X0 = 1) + P(X2 = 2/X0 = 2) · P(X0 = 2)
= P02(2) · P(X0 = 0) + P12(2) · P(X0 = 1) + P22(2) · P(X0 = 2)
= (1/16)(1/3) + (3/16)(1/3) + (4/16)(1/3)
= 1/48 + 3/48 + 4/48 = 8/48 = 1/6
(ii) P(X2 = 2, X1 = 1, X0 = 2) = P(X2 = 2/X1 = 1) · P(X1 = 1/X0 = 2) · P(X0 = 2) = P12 · P21 · P(X0 = 2)
Problem 3: A student's study habits are as follows: if he studies one night, he is 70% sure not to study the next night; if he does not study one night, he is 60% sure not to study the next night either. In the long run, how often does he study?
Solution: The states are S (study) and N (no study).
If he studies one night, the next night he is 70% likely not to be studying.
∴ P(S → N) = 0.7, P(S → S) = 0.3, P(N → N) = 0.6, ∴ P(N → S) = 0.4
so the t.p.m. (rows and columns ordered S, N) is

P = [ 0.3 0.7
      0.4 0.6 ]

Let π = [π1, π2] be the long run (or limiting) form of the state distribution; then
πP = π, where π1 + π2 = 1 ⟶ (1)
⟹ 0.3π1 + 0.4π2 = π1 ⟹ 0.4π2 = 0.7π1 ⟹ π1 = (4/7)π2
Substituting in (1): (4/7)π2 + π2 = 1 ⟹ (11/7)π2 = 1 ⟹ π2 = 7/11, π1 = 4/11
∴ In the long run he studies 4/11 of the nights.
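One way to obtain such a limiting distribution numerically is as the left eigenvector of the t.p.m. for eigenvalue 1. A sketch using the study-habit chain just derived (NumPy assumed; names are mine):

```python
import numpy as np

# Study-habit chain from Problem 3: states ordered (S, N)
P = np.array([[0.3, 0.7],
              [0.4, 0.6]])

# Solve pi P = pi with sum(pi) = 1: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.isclose(w, 1)]).flatten()
pi /= pi.sum()
print(pi)   # approx [0.3636, 0.6364] = [4/11, 7/11]
```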
Problem 4: A man either drives a car or catches a train to go to his office. He never goes by train two days in a row, but if he drives one day, then the next day he is just as likely to drive again as he is to travel by train. On the first day of the week the man tossed a fair die and drove to work if and only if a 6 appeared. Find (i) the probability that he takes a train on the third day and (ii) the probability that he drives in the long run.
Solution:
Since the mode of transport of the next day is decided on the basis of today's, the travel pattern is a Markov chain whose states are train (T) and car (C).
If today he goes by train, the next day he will not go by train.
∴ P(T → T) = 0, P(T → C) = 1, P(C → T) = 1/2, P(C → C) = 1/2
The t.p.m. (rows: state of Xn−1, columns: state of Xn) is

        T    C
P = [   0    1
       1/2  1/2 ]

The initial state probability distribution is obtained by throwing the die:
P(going by car) = P(getting 6) = 1/6
∴ P(going by train) = 1 − 1/6 = 5/6
∴ The first day state distribution is P(1) = [5/6, 1/6]
Second day state probability distribution: P(2) = P(1) · P = [0 + 1/12, 5/6 + 1/12] = [1/12, 11/12]
Third day state probability distribution: P(3) = P(2) · P = [0 + 11/24, 1/12 + 11/24] = [11/24, 13/24]
(i) P(he travels by train on the third day) = 11/24
(ii) Let π = [π1, π2] be the limiting form of the long run probability distribution.
We know that πP = π, where π1 + π2 = 1
⟹ [0 + π2/2, π1 + π2/2] = [π1, π2]
⟹ π2/2 = π1 ⟹ (1 − π1)/2 = π1 ⟹ 1 − π1 = 2π1 ⟹ π1 = 1/3
Put π1 = 1/3 in π1 + π2 = 1: π2 = 1 − 1/3 = 2/3
∴ P(driving in the long run) = 2/3
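The day-by-day propagation P(n) = P(n−1) · P is easy to reproduce exactly with Python's fractions module (a minimal sketch; variable names are mine):

```python
from fractions import Fraction as F

# Train/car chain from Problem 4: rows/cols ordered (T, C)
P = [[F(0), F(1)],
     [F(1, 2), F(1, 2)]]
p = [F(5, 6), F(1, 6)]             # day-1 distribution from the die toss

for day in (2, 3):
    # P(n) = P(n-1) . P  (row vector times matrix)
    p = [sum(p[i] * P[i][j] for i in range(2)) for j in range(2)]
    print(day, p)                  # day 3 -> [11/24, 13/24]
```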
Problem 5: An engineer analyzing a series of digital signals generated by a testing system observes that only 1 out of 15 highly distorted signals followed a highly distorted signal, with no recognizable signal between, whereas 20 out of 23 recognizable signals follow recognizable signals, with no highly distorted signals between. Given that only highly distorted signals are not recognizable, find the fraction of signals that are highly distorted.
Solution: The state space is {highly distorted, recognizable}; we denote these states by 0 and 1, so the state space is {0, 1}.
Let Xn = 0 if the nth signal generated is highly distorted, and Xn = 1 if the nth signal generated is recognizable.
∴ {Xn ; n = 1, 2, …} is a Markov chain with state space {0, 1}.
The t.p.m. is

        0      1
P = [ 1/15  14/15
      3/23  20/23 ]

Let π = [π0, π1] be the limiting form of the long run probability distribution, with
π0 + π1 = 1 ……….. [1]
We know that πP = π
⟹ [π0/15 + 3π1/23, 14π0/15 + 20π1/23] = [π0, π1]
⟹ π0/15 + 3π1/23 = π0
⟹ 3π1/23 = π0 − π0/15 = 14π0/15
⟹ π1 = (23 × 14)π0 / (3 × 15) = 322π0/45
Substituting in [1], we get
π0 + (322/45)π0 = 1 ⟹ (367/45)π0 = 1
⟹ π0 = 45/367 = 0.123
∴ π1 = 1 − π0 = 1 − 0.123 = 0.877
∴ The fraction of signals that are highly distorted is π0 = 0.123; in other words, 12.3% of the signals generated by the testing system are highly distorted.
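For a two-state chain the stationary equations collapse to a single closed form, obtained from π0 = π0·P00 + π1·P10 with π0 + π1 = 1. A sketch with exact fractions (names are mine):

```python
from fractions import Fraction as F

# Signal chain from Problem 5: states 0 (highly distorted), 1 (recognizable)
P = [[F(1, 15), F(14, 15)],
     [F(3, 23), F(20, 23)]]

# pi0 = P10 / (1 - P00 + P10) for any 2-state chain
pi0 = P[1][0] / (1 - P[0][0] + P[1][0])
print(pi0, float(pi0))   # 45/367, approx 0.123
```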
Problem 6: Find the limiting state probabilities associated with the following probability matrix:

P = [ 0.4 0.5 0.1
      0.3 0.3 0.4
      0.3 0.2 0.5 ]

Solution: Let [π1, π2, π3] be the limiting probability distribution, with
π1 + π2 + π3 = 1 ………….. [1]
We know that πP = π ⟹ [π1, π2, π3] P = [π1, π2, π3]
⟹ [0.4π1 + 0.3π2 + 0.3π3, 0.5π1 + 0.3π2 + 0.2π3, 0.1π1 + 0.4π2 + 0.5π3] = [π1, π2, π3]
∴ 0.4π1 + 0.3π2 + 0.3π3 = π1 ⟹ −0.6π1 + 0.3π2 + 0.3π3 = 0 ……….. [2]
0.5π1 + 0.3π2 + 0.2π3 = π2 ⟹ 0.5π1 − 0.7π2 + 0.2π3 = 0 ………….. [3]
0.1π1 + 0.4π2 + 0.5π3 = π3 ⟹ 0.1π1 + 0.4π2 − 0.5π3 = 0 ……………[4]
[4] × 5 ⟹ 0.5π1 + 2π2 − 2.5π3 = 0
[3] ⟹ 0.5π1 − 0.7π2 + 0.2π3 = 0
Subtracting, we get 2.7π2 − 2.7π3 = 0 ⟹ π2 = π3 ……………… [5]
Substituting π2 = π3 in [2], we get
−0.6π1 + 0.3π2 + 0.3π2 = 0 ⟹ −0.6π1 + 0.6π2 = 0 ⟹ 0.6π1 = 0.6π2
⟹ π1 = π2 ………………….. [6]
Substituting in [1], we get
π2 + π2 + π2 = 1 ⟹ 3π2 = 1 ⟹ π2 = 1/3
∴ π1 = 1/3, π3 = 1/3
∴ The limiting probabilities are (1/3, 1/3, 1/3).
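The same answer can be obtained by solving the balance equations πP = π together with the normalisation as one linear system (a NumPy sketch; names are mine):

```python
import numpy as np

# t.p.m. from Problem 6
P = np.array([[0.4, 0.5, 0.1],
              [0.3, 0.3, 0.4],
              [0.3, 0.2, 0.5]])

# Stack pi (P - I) = 0 (transposed) with the normalisation sum(pi) = 1
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)   # [1/3, 1/3, 1/3]
```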
Problem 7: A fair die is tossed repeatedly. If Xn denotes the maximum of the numbers occurring in the first n tosses, find the transition probability matrix P of the Markov chain {Xn}. Find also P² and P(X2 = 6).
Solution: The state space is {1, 2, 3, 4, 5, 6}.
The t.p.m. is formed using the following analysis. Let Xn be the maximum of the numbers occurring in the first n trials, and suppose this maximum is 3. Then
P{Xn+1 = 3/Xn = 3} = 1/6 + 1/6 + 1/6 = 3/6 (the next toss may show any of the 3 possibilities 1, 2, 3), and
P{Xn+1 = i/Xn = 3} = 1/6 when i = 4, 5, 6.
The transition probability matrix of the chain is

P = [ 1/6 1/6 1/6 1/6 1/6 1/6
       0  2/6 1/6 1/6 1/6 1/6
       0   0  3/6 1/6 1/6 1/6
       0   0   0  4/6 1/6 1/6
       0   0   0   0  5/6 1/6
       0   0   0   0   0   1  ]

and P² = (1/36) [ 1 3 5 7  9 11
                  0 4 5 7  9 11
                  0 0 9 7  9 11
                  0 0 0 16 9 11
                  0 0 0 0 25 11
                  0 0 0 0  0 36 ]

Since all the faces are equally likely, the initial state probability distribution is P(0) = (1/6, 1/6, 1/6, 1/6, 1/6, 1/6).
Now, P{X2 = 6} = Σ(i=1..6) P(X2 = 6/X0 = i) · P(X0 = i)
= (1/6) [P16(2) + P26(2) + P36(2) + P46(2) + P56(2) + P66(2)]
= (1/6) [11/36 + 11/36 + 11/36 + 11/36 + 11/36 + 36/36]
= (1/6) · (5 × 11 + 36)/36 = 91/216
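A sketch reproducing the t.p.m., P² and P(X2 = 6) with exact fractions (NumPy with Python Fractions; the construction of P is mine, following the rule P(i → i) = i/6 and P(i → j) = 1/6 for j > i):

```python
import numpy as np
from fractions import Fraction as F

# Die-maximum chain from Problem 7
P = np.array([[F(i, 6) if j == i else (F(1, 6) if j > i else F(0))
               for j in range(1, 7)] for i in range(1, 7)])

P2 = P.dot(P)                      # matches the 1/36-scaled table above
p0 = np.array([F(1, 6)] * 6)       # all faces equally likely at the start
print((p0.dot(P2))[5])             # P(X2 = 6) = 91/216
```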
Problem 8: Suppose that the probability of a dry day following a rainy day is 1/3 and that the probability of a rainy day following a dry day is 1/2. Given that May 1 is a dry day, find the probability that (i) May 3 is a dry day and (ii) May 5 is a dry day.
Solution:
To find the t.p.m.: the state space is {D, R}, where D denotes a dry day and R a rainy day.
∴ The t.p.m. of the Markov chain is

        D    R
P = [ 1/2  1/2
      1/3  2/3 ]

Since May 1 is a dry day, the initial probability distribution is P(1) = [1, 0].
Now, P(2) = P(1) · P = [1/2 + 0, 1/2 + 0] = [1/2, 1/2]
P(3) = P(2) · P = [1/4 + 1/6, 1/4 + 1/3] = [5/12, 7/12]
∴ The probability that May 3 is a dry day = 5/12
P(4) = P(3) · P = [5/24 + 7/36, 5/24 + 14/36] = [29/72, 43/72]
P(5) = P(4) · P = [29/144 + 43/216, 29/144 + 86/216] = [173/432, 259/432]
∴ P(May 5 is a dry day) = 173/432
Problem 9: Three boys A, B and C are throwing a ball to each other. A always throws the ball to B, B always throws the ball to C, and C is just as likely to throw the ball to B as to A. Show that the process is Markovian. Find the t.p.m. and classify the states.
Solution:
The state of the system is the boy holding the ball. The state of Xn depends only on the state of Xn−1, not on the states of Xn−2, Xn−3, …
∴ The process {Xn} is a Markov chain, with t.p.m. (rows and columns ordered A, B, C)

P = [  0    1    0
       0    0    1
      1/2  1/2   0  ]

Now P² = P · P = [  0    0    1
                   1/2  1/2   0
                    0   1/2  1/2 ]

P³ = P² · P = [ 1/2  1/2   0
                 0   1/2  1/2
                1/4  1/4  1/2 ]

We observe that (numbering the states A = 1, B = 2, C = 3):
P11(3) = 1/2 > 0, P12(3) = 1/2 > 0, P13(2) = 1 > 0
P21(2) = 1/2 > 0, P22(2) = 1/2 > 0, P23(3) = 1/2 > 0
P31(3) = 1/4 > 0, P32(3) = 1/4 > 0, P33(3) = 1/2 > 0
∴ The chain is irreducible.

P⁴ = P³ · P = [  0   1/2  1/2
                1/4  1/4  1/2
                1/4  1/2  1/4 ]

P⁵ = P⁴ · P = [ 1/4  1/4  1/2
                1/4  1/2  1/4
                1/8  3/8  1/2 ]
P⁶ = P⁵ · P = [ 1/4  1/2  1/4
                1/8  3/8  1/2
                1/4  3/8  3/8 ]

Here Pii(2), Pii(3), Pii(4), Pii(5), Pii(6), etc. are greater than 0 for i = 2, 3, and the G.C.D. of {2, 3, 4, 5, 6, …} is 1.
∴ The states 2 and 3 (i.e., B and C) have period 1, i.e., they are aperiodic.
Also P11(3), P11(5), P11(6), … are > 0, and the G.C.D. of {3, 5, 6, …} is 1.
∴ State 1 (i.e., state A) is also aperiodic.
Since the chain is finite and irreducible, all its states are non-null persistent. Moreover, all the states are ergodic (persistent, non-null and aperiodic).
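Periodicity can be checked mechanically by collecting the powers n for which (P^n)ii > 0 (a NumPy sketch; the chain is small, so the first few powers suffice):

```python
import numpy as np
from math import gcd
from functools import reduce

# Ball-throwing chain from Problem 9, states ordered (A, B, C)
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0]])

# Collect the return times {n : (P^n)_ii > 0} for each state
returns = {i: [] for i in range(3)}
Pn = np.eye(3)
for n in range(1, 9):
    Pn = Pn @ P
    for i in range(3):
        if Pn[i, i] > 0:
            returns[i].append(n)

# Period of each state = gcd of its return times (1 means aperiodic)
print({i: reduce(gcd, ns) for i, ns in returns.items()})
```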
Problem 10: Find the nature of the states of the Markov chain with the t.p.m.

        0    1    2
P = [   0    1    0
       1/2   0   1/2
        0    1    0  ]

Solution: Given the t.p.m. above,

P² = P · P = [ 1/2   0   1/2
                0    1    0
               1/2   0   1/2 ]

We observe that
P00(2) > 0, P01(1) > 0, P02(2) > 0
P10(1) > 0, P11(2) > 0, P12(1) > 0
P20(2) > 0, P21(1) > 0, P22(2) > 0
∴ The chain is irreducible.
P³ = P² · P = P
P⁴ = P³ · P = P · P = P²
P⁵ = P⁴ · P = P² · P = P
P⁶ = P⁵ · P = P · P = P², and so on.
Here Pii(2) = Pii(4) = Pii(6) = ⋯ > 0 for all i, while Pii(n) = 0 for every odd n; hence all the states of the chain are periodic with period 2.
Since the chain is finite and irreducible, all its states are non-null persistent.
However, since every state has period 2 > 1, no state is aperiodic; hence all the states are not ergodic.
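The period-2 pattern (odd powers equal P, even powers equal P²) can be confirmed directly (NumPy sketch):

```python
import numpy as np

# Chain from Problem 10
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])

P2 = P @ P
P3 = P2 @ P
print(np.allclose(P3, P))   # True: odd powers repeat P, even powers repeat P^2
print(np.diag(P2))          # diagonal of P^2 is positive -> every state has period 2
```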
Problem 11: Consider a Markov chain with state space {0, 1} and the t.p.m.

P = [  1    0
      1/2  1/2 ]

Since P00 = 1 and P01 = 0, state 0 is an absorbing state. The Markov chain is therefore not irreducible, and its states are not all non-null persistent; hence the chain is not ergodic.
SAMPLE UNIVERSITY QUESTIONS
1.(i) Check whether the Markov chain with transition probability matrix

P = [  0    1    0
      1/2   0   1/2
       0    1    0  ]

is irreducible or not.
(ii) An engineer analyzing a series of digital signals generated by a testing system observes that 1 out of 15 highly distorted signals follows a highly distorted signal, with no recognizable signal between, whereas 20 out of 23 recognizable signals follow recognizable signals, with no highly distorted signal between. Given that only highly distorted signals are not recognizable, find the fraction of signals that are highly distorted.
2. A gambler has Rs. 2. He bets Re. 1 at a time and wins Re. 1 with probability 1/2. He stops playing if he loses Rs. 2 or wins Rs. 4. What is the t.p.m. of the related Markov chain?
PART-B
1.(i) The t.p.m. of a Markov chain Xn, n = 1, 2, 3, … having three states 1, 2 and 3 is

P = [ 0.1 0.5 0.4
      0.6 0.2 0.2
      0.3 0.4 0.3 ]

and the initial distribution is P(0) = (0.7, 0.2, 0.1). Find (1) P(X2 = 3), (2) P(X3 = 2, X2 = 3, X1 = 1, X0 = 2).
1.(i) A fair die is tossed repeatedly. The maximum of the first n outcomes is denoted by Xn. Is {Xn : n = 1, 2, …} a Markov chain? Why or why not? If it is a Markov chain, calculate its transition probability matrix and specify the classes.
(ii) An observer at a lake notices that when fish are caught, only 1 out of 9 trout is caught after another trout, with no other fish between, whereas 10 out of 11 non-trout are caught following non-trout, with no trout between. Assuming that all fish are equally likely to be caught, what fraction of fish in the lake is trout?
2.(i) The following is the transition probability matrix of a Markov chain with state space {1, 2, 3, 4, 5}. Specify the classes and determine which classes are transient and which are recurrent.

P = [ 2/5 3/5  0   0   0
      1/3 1/3 1/3  0   0
       0   0  1/2 1/2  0
       0   0   0  1/4 3/4
       0   0  1/3  0  2/3 ]
(ii) For an English course, there are four popular textbooks dominating the market. The English department of an institution allows its faculty to teach only from these 4 textbooks. Each year, Prof. Rose Mary O'Donohue adopts the same book she was using the previous year with probability 0.64. The probabilities of her changing to any of the other 3 books are equal. Find the proportion of years Prof. O'Donohue uses each book.
(ii) A salesman's territory consists of three cities A, B and C. He never sells in the same city on successive days. If he sells in city A, then the next day he sells in city B. However, if he sells in either city B or city C, the next day he is twice as likely to sell in city A as in the other city. In the long run, how often does he sell in each of these cities?
(ii) A man either drives a car or catches a train to go to the office. He never goes two days in a row by train, but if he drives one day, then the next day he is just as likely to drive again as he is to travel by train. Now suppose that on the first day of the week, the man tossed a fair die and drove to work if and only if a 6 appeared. Find (a) the probability that he takes a train on the third day.
2.(i) Find the limiting state probabilities associated with the following transition probability matrix:

P = [ 0.4 0.5 0.1
      0.3 0.3 0.4
      0.3 0.2 0.5 ]

1.(i) An engineer analyzing a series of digital signals generated by a testing system observes that 1 out of 15 highly distorted signals follows a highly distorted signal, with no recognizable signal between, whereas 20 out of 23 recognizable signals follow recognizable signals, with no highly distorted signal between. Given that only highly distorted signals are not recognizable, find the fraction of signals that are highly distorted.
(i) … will visit after having changed cells n times. Is {Xn ; n = 0, 1, 2, …} a Markov chain? If so, write its state space and transition probability matrix.
(ii) The following is the transition probability matrix of a Markov chain with state space {0, 1, 2, 3, 4}. Specify the classes and determine which classes are transient and which are recurrent. Give reasons.

P = [ 2/5 3/5  0   0   0
      1/3 1/3 1/3  0   0
       0   0  1/2 1/2  0
       0   0   0  1/4 3/4
       0   0  1/3  0  2/3 ]

2. Consider a Markov chain consisting of three states 0, 1, 2 and the transition probability matrix

P = [ 1/2 1/2  0
      1/2 1/4 1/4
       0  1/3 2/3 ]

(i) Let the Markov chain consisting of the states 0, 1, 2, 3 have the transition probability matrix

P = [ 0 0 1/2 1/2
      1 0  0   0
      0 1  0   0
      0 1  0   0 ]