PQT Unit 5 Markov

A Markov process is a random process where the future state depends only on the current state, not past states. Some examples of Markov processes include processes with independent increments, board games using dice, and weather prediction models. A Markov chain is a discrete-state Markov process. The transition probability matrix (TPM) describes the one-step transition probabilities between states. For a regular, homogeneous Markov chain, there exists a unique steady state or stationary distribution that the state probabilities approach as time increases. The Chapman-Kolmogorov equations relate the n-step transition probabilities to the TPM.


ROHINI COLLEGE OF ENGINEERING AND TECHNOLOGY
MA8451-PROBABILITY AND RANDOM PROCESSES

3.3 Markov Process

A Markov process is a random process in which the future behaviour of the process depends only on the current state, and not on the states in the past.

Mathematically, a random process {X(t), t ∈ T} is called a Markov process if

P[X(t_{n+1}) = x_{n+1} | X(t_n) = x_n, X(t_{n-1}) = x_{n-1}, ..., X(t_0) = x_0]
= P[X(t_{n+1}) = x_{n+1} | X(t_n) = x_n] . . . (1)

whenever t_0 < t_1 < ... < t_n < t_{n+1}. The values x_0, x_1, x_2, ..., x_n, ... are called the states of the process.

Give some examples of Markov processes:

Some examples of Markov processes are described below.

(i) Any random process with independent increments.

(ii) Board games played with dice, such as Snakes and Ladders.

(iii) Weather prediction models.

Markov Chain:

A discrete-state Markov process is called a Markov chain.

A Markov chain is defined as a sequence of random variables {Xn, n ≥ 0} with the Markov property: given the present state, the future and past states are independent. That is, if for all values of n


P[Xn = an | Xn−1 = an−1, Xn−2 = an−2, ..., X0 = a0]
= P[Xn = an | Xn−1 = an−1] . . . (2)

then the process {Xn} is called a Markov chain, and a0, a1, a2, ..., an are called the states of the Markov chain.
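The defining property can be exercised directly in a short simulation: generating the next state needs only the current state and its row of transition probabilities. Below is a minimal sketch in Python/NumPy (my own illustration, not part of the original notes), using the 2 × 2 study-habits TPM that is solved later in this section.

import numpy as np

rng = np.random.default_rng(seed=1)

# One-step transition probability matrix (each row sums to 1).
P = np.array([[0.3, 0.7],
              [0.4, 0.6]])

def simulate_chain(P, start, steps, rng):
    """Sample a path; each step looks only at the current state's row of P."""
    path = [start]
    for _ in range(steps):
        path.append(int(rng.choice(len(P), p=P[path[-1]])))
    return path

print(simulate_chain(P, start=0, steps=10, rng=rng))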

One step Transition Probability:

The conditional probability P(Xn = aj | Xn−1 = ai) is called the one-step transition probability from state ai to state aj at the nth step, and is denoted by pij(n−1, n).

n-step Transition Probability:

The conditional probability P(Xn = aj | X0 = ai) is called the n-step transition probability and is denoted by pij^(n).

Homogeneous Markov Chain:

If the one-step transition probabilities are independent of the step number n,

i.e., pij(n, n+1) = pij(m, m+1) for all m and n,

then the Markov chain is said to have stationary transition probabilities and the process is called a homogeneous Markov chain. Otherwise, the process is known as a non-homogeneous Markov chain.


Transition Probability Matrix (TPM):

Let {Xn, n ≥ 0} be a homogeneous Markov chain. The one-step transition probability from state i to state j is denoted by

pij = P[Xn+1 = j | Xn = i], 1 ≤ i ≤ m, 1 ≤ j ≤ m.

The matrix P = [pij] is called the transition probability matrix (TPM) of the process if

(i) pij ≥ 0

(ii) ∑_{j=1}^{m} pij = 1 for i = 1, 2, ..., m (i.e., each row total is 1)
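Both conditions can be checked mechanically. A minimal validation sketch (my own addition, not from the original notes):

import numpy as np

def is_tpm(P, tol=1e-9):
    """A TPM has nonnegative entries and every row summing to 1."""
    P = np.asarray(P, dtype=float)
    return bool((P >= 0).all() and np.allclose(P.sum(axis=1), 1.0, atol=tol))

print(is_tpm([[0.3, 0.7], [0.4, 0.6]]))   # True
print(is_tpm([[0.5, 0.6], [0.4, 0.6]]))   # False: first row sums to 1.1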

Regular Matrix:

A stochastic matrix P is said to be a regular matrix if all the entries of P^m are positive for some positive integer m.

A homogeneous Markov chain is said to be regular if its TPM is regular.
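A direct test of regularity (my own sketch; the cutoff m_max is an assumption that is ample for the small chains in these notes):

import numpy as np

def is_regular(P, m_max=50):
    """True if some power P^m, 1 <= m <= m_max, has all entries positive."""
    P = np.asarray(P, dtype=float)
    Q = P.copy()
    for _ in range(m_max):
        if (Q > 0).all():
            return True
        Q = Q @ P
    return False

# The salesman TPM from Problem 2 below has zero entries,
# but a small power of it (P^4) is strictly positive, so it is regular.
print(is_regular([[0, 1, 0], [2/3, 0, 1/3], [2/3, 1/3, 0]]))   # True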

Steady State Distribution:

If a homogeneous Markov chain is regular, then every sequence of state probability distributions approaches a unique fixed distribution, called the steady-state distribution of the Markov chain:

lim (n→∞) P(n) = π

where P(n) = (p1(n), p2(n), ..., pk(n)) and π = (π1, π2, ..., πk).


Condition for Steady State Distribution:

If P is the TPM of the regular Markov chain and 𝜋 = (𝜋1 , 𝜋2 , … , 𝜋𝑘 ) is the

steady state distribution, then 𝜋𝑃 = 𝜋 and 𝜋1 + 𝜋2 + ⋯ + 𝜋𝑘 = 1
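These two conditions form a linear system: πP = π contributes k − 1 independent equations and the normalization supplies the last one. A solver sketch (my own addition; it assumes the chain is regular, so the solution is unique):

import numpy as np

def steady_state(P):
    """Solve pi P = pi together with sum(pi) = 1 for a regular chain."""
    P = np.asarray(P, dtype=float)
    k = P.shape[0]
    A = P.T - np.eye(k)   # the equations (P^T - I) pi = 0 are linearly dependent,
    A[-1, :] = 1.0        # so replace one of them with the normalization row.
    b = np.zeros(k)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

print(steady_state([[0.3, 0.7], [0.4, 0.6]]))   # [0.3636..., 0.6363...] = (4/11, 7/11)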

Chapman-Kolmogorov Theorem:

Let {Xn, n ≥ 0} be a homogeneous Markov chain with transition probability matrix P = [pij] and n-step transition probability matrix P^(n) = [pij^(n)], where

pij^(n) = P[Xn = j | X0 = i] and pij^(1) = pij.

Then the following properties hold:

a) P^(n+m) = P^(n) P^(m)

b) P^(n) = P^n
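Property (b) is what makes the problems below mechanical: the n-step matrix is simply the nth matrix power. A quick numerical check of both properties (my own addition):

import numpy as np

P = np.array([[0.3, 0.7],
              [0.4, 0.6]])

# (a) P^(n+m) = P^(n) P^(m), here with n = 2 and m = 1
print(np.allclose(np.linalg.matrix_power(P, 3),
                  np.linalg.matrix_power(P, 2) @ P))         # True
# (b) P^(n) = P^n: the n-step TPM equals the nth power of P
print(np.allclose(np.linalg.matrix_power(P, 3), P @ P @ P))  # True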

Type: 1

To find the steady state distribution of the chain

"Steady state", "invariant", "stationary", "limiting state", "long run", "the 1000th trial", and lim (n→∞) P(n) all refer to the same distribution π.

If the given TPM is a 2 × 2 matrix:

𝜋 = (𝜋1 , 𝜋2 )


𝜋𝑃 = 𝜋 . . . (1)

𝜋1 + 𝜋2 = 1 . . . (2)

If the given TPM is a 3 × 3 matrix:

𝜋 = (𝜋1 , 𝜋2 , 𝜋3 )

𝜋𝑃 = 𝜋 . . . (1)

𝜋1 + 𝜋2 + 𝜋3 = 1 . . . (2)

Problems Under Steady State:

1. A college student has the following study habits. If he studies one night, then he is 70% sure not to study the next night. If he does not study one night, then he is only 60% sure not to study the next night either. (i) Find the TPM. (ii) How often does he study in the long run?

Solution:

Let the state space = {𝑠𝑡𝑢𝑑𝑦, 𝑛𝑜𝑡 𝑠𝑡𝑢𝑑𝑦}

The transition probability matrix (TPM) is

P = [0.3  0.7]
    [0.4  0.6]

Let the steady-state distribution be π = (π0, π1)

𝜋0 + 𝜋1 = 1 . . . (1)

Condition for steady state 𝜋𝑃 = 𝜋


(π0, π1) [0.3  0.7] = (π0, π1)
         [0.4  0.6]

0.3 𝜋0 + 0.4 𝜋1 = 𝜋0 . . . (2)

0.7 𝜋0 + 0.6 𝜋1 = 𝜋1 . . . (3)

(2) ⇒ 0.3 π0 − π0 = −0.4 π1

⇒ −0.7 π0 = −0.4 π1

⇒ π0 = (0.4/0.7) π1 . . . (4)

Substituting (4) in equation (1):

(0.4/0.7) π1 + π1 = 1

⇒ π1 (0.4/0.7 + 1) = 1

⇒ π1 = 7/11

(4) ⇒ π0 = (0.4/0.7) × (7/11) = 4/11

The steady-state distribution is (π0, π1) = (4/11, 7/11)

The probability that he studies in the long run is π0 = 4/11.
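A quick numerical check of this answer (my own addition): for a regular chain, any initial distribution iterated through P converges to π.

import numpy as np

P = np.array([[0.3, 0.7],
              [0.4, 0.6]])
pi = np.array([1.0, 0.0])       # start in state "study"
for _ in range(100):
    pi = pi @ P
print(pi, (4/11, 7/11))          # both ~ (0.3636, 0.6364)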

2. A salesman's territory consists of three cities, A, B and C. He never sells in the same city on successive days. If he sells in city A, then the next day he sells in city B. However, if he sells in either B or C, then the next day he is twice as likely to sell in city A as in the other city. How often does he sell in each of the cities in the steady state?

Solution:

Let the state space = {𝐴, 𝐵, 𝐶 }

The transition probability matrix (TPM) is

P = [ 0    1    0 ]
    [2/3   0   1/3]
    [2/3  1/3   0 ]

Let the steady-state distribution be π = (π1, π2, π3)

𝜋1 + 𝜋2 + 𝜋3 = 1 . . . (1)

Condition for steady state 𝜋𝑃 = 𝜋

(π1, π2, π3) [ 0    1    0 ] = (π1, π2, π3)
             [2/3   0   1/3]
             [2/3  1/3   0 ]

(2/3) π2 + (2/3) π3 = π1 . . . (2)

π1 + (1/3) π3 = π2 . . . (3)

(1/3) π2 = π3 . . . (4) ⇒ π2 = 3 π3

(2) ⇒ (2/3) π2 + (2/3) π3 = π1

⇒ (2/3)(π2 + π3) = π1


⇒ π2 + π3 = (3/2) π1

(1) ⇒ π1 + (3/2) π1 = 1

⇒ (5/2) π1 = 1

⇒ π1 = 2/5

(3) ⇒ 2/5 + (1/3) π3 = 3 π3

⇒ 2/5 = 3 π3 − (1/3) π3

⇒ 2/5 = (8/3) π3 ⇒ π3 = (2/5) × (3/8)

⇒ π3 = 3/20

(4) ⇒ π2 = 3 × (3/20) = 9/20

The steady-state distribution is (π1, π2, π3) = (2/5, 9/20, 3/20): in the long run he sells in city A on 2/5 of the days, in city B on 9/20, and in city C on 3/20 of the days.
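A numerical check (my own addition): every row of a high power of P converges to the steady-state distribution.

import numpy as np

P = np.array([[0,   1,   0  ],
              [2/3, 0,   1/3],
              [2/3, 1/3, 0  ]])
print(np.linalg.matrix_power(P, 200)[0])   # ~ [0.40, 0.45, 0.15]
print(2/5, 9/20, 3/20)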

3. An engineer analysing a series of digital signals generated by a testing system observes that only 1 out of 15 highly distorted signals follows a highly distorted signal, with no recognizable signal between, whereas 20 out of 23 recognizable signals follow recognizable signals, with no highly distorted signal between. Given that highly distorted signals are not recognizable, find the TPM and the fraction of signals that are highly distorted.

Solution:


Let the state space = {𝑟𝑒𝑐𝑜𝑔𝑛𝑖𝑧𝑎𝑏𝑙𝑒 𝑠𝑖𝑔𝑛𝑎𝑙, 𝑑𝑖𝑠𝑡𝑜𝑟𝑡𝑒𝑑 𝑠𝑖𝑔𝑛𝑎𝑙 }

The transition probability matrix (TPM) is

P = [20/23  3/23]
    [14/15  1/15]

Let the steady-state distribution be π = (π1, π2)

𝜋1 + 𝜋2 = 1 . . . (1)

Condition for steady state 𝜋𝑃 = 𝜋

(π1, π2) [20/23  3/23] = (π1, π2)
         [14/15  1/15]

(20/23) π1 + (14/15) π2 = π1 . . . (2)

(3/23) π1 + (1/15) π2 = π2 . . . (3)

(3) ⇒ (3/23) π1 = π2 − (1/15) π2

⇒ (3/23) π1 = (14/15) π2

⇒ π1 = (14/15) × (23/3) π2

⇒ π1 = (322/45) π2 . . . (4)

(1) ⇒ π1 + π2 = 1

⇒ (322/45) π2 + π2 = 1


⇒ (322/45 + 1) π2 = 1

⇒ (367/45) π2 = 1

⇒ π2 = 45/367

(4) ⇒ π1 = (322/45) × (45/367)

⇒ π1 = 322/367

The fraction of signals that are highly distorted is π2 = 45/367.
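A check via the left eigenvector of P for eigenvalue 1 (my own addition; this is an equivalent way to solve πP = π):

import numpy as np

P = np.array([[20/23, 3/23],
              [14/15, 1/15]])
w, V = np.linalg.eig(P.T)                  # left eigenvectors of P
pi = np.real(V[:, np.argmax(np.real(w))])  # eigenvector for eigenvalue 1
pi /= pi.sum()                             # normalize to a distribution
print(pi)                                  # ~ [0.8774, 0.1226]
print(322/367, 45/367)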

Type: 2

To find the probability distribution based on the initial distribution

1. Suppose that the probability of a dry day following a rainy day is 1/3 and that the probability of a rainy day following a dry day is 1/2. Given that May 1 is a dry day, find the probability that (i) May 3 is also a dry day, (ii) May 5 is also a dry day.

Solution:

Let the state space = {𝑑𝑟𝑦 𝑑𝑎𝑦, 𝑟𝑎𝑖𝑛𝑦 𝑑𝑎𝑦}

The transition probability matrix (TPM) is

P = [1/2  1/2]
    [1/3  2/3]


Since May 1 is a dry day, the initial probability distribution is

P(1) = (1, 0)

The probability distribution for May 2 is given by

P(2) = P(1) P = (1, 0) P = (1/2, 1/2)

The probability distribution for May 3 is given by

P(3) = P(2) P = (1/2, 1/2) P = (5/12, 7/12)

P(May 3 is a dry day) = 5/12

The probability distribution for May 4 is given by

P(4) = P(3) P = (5/12, 7/12) P = (29/72, 43/72)

The probability distribution for May 5 is given by

P(5) = P(4) P = (29/72, 43/72) P = (173/432, 259/432)


P(May 5 is a dry day) = 173/432
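The whole calculation is one loop; exact fractions via Python's fractions module reproduce the hand computation (my own sketch):

from fractions import Fraction as F

P = [[F(1, 2), F(1, 2)],
     [F(1, 3), F(2, 3)]]

p = [F(1), F(0)]                  # May 1 is a dry day
for day in range(2, 6):
    # one day of evolution: p <- p P
    p = [sum(p[i] * P[i][j] for i in range(2)) for j in range(2)]
    print("May", day, p)          # May 3 -> (5/12, 7/12); May 5 -> (173/432, 259/432)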

2. A welding process is considered to be a two state Markov chain, where

the state 0 denotes the process is running in the manufacturing firm and

the state 1 denotes the process is not running in the firm. Suppose that the

transition probability matrix for this Markov chain is given by

P = [0.8  0.2]
    [0.3  0.7]

(i) Find the probability that the welding process will run on

the third day from today given that the welding process is running today.

(ii) Find the probability that the welding process will run on the third day

from today if the initial probability of state 0 and 1 are equally likely.

Solution:

Let the state space = {𝑟𝑢𝑛, 𝑛𝑜𝑡 𝑟𝑢𝑛}

The transition probability matrix (TPM) is

P = [0.8  0.2]
    [0.3  0.7]

(i) Since the welding process is running today (state 0), the initial probability distribution is

P(0) = (1, 0)

The probability distribution of the first day is given by

P(1) = P(0) P = (1, 0) P = (0.8, 0.2)


The probability distribution on the second day is given by

P(2) = P(1) P = (0.8, 0.2) P = (0.7, 0.3)

The probability distribution on the third day is given by

P(3) = P(2) P = (0.7, 0.3) P = (0.65, 0.35)

P(welding process is running on the third day) = 0.65

(ii) Given that states 0 and 1 are initially equally likely,

P(running today) = P(not running today) = 1/2

so the initial probability distribution is P(0) = (1/2, 1/2)

The probability distribution of the first day is given by

P(1) = P(0) P = (1/2, 1/2) P = (0.55, 0.45)

The probability distribution on the second day is given by


P(2) = P(1) P = (0.55, 0.45) P = (0.575, 0.425)

The probability distribution on the third day is given by

P(3) = P(2) P = (0.575, 0.425) P = (0.5875, 0.4125)

P(welding process is running on the third day) = 0.5875
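Both parts are the same computation with different initial vectors; Chapman-Kolmogorov property (b) lets us jump straight to P³ (my own addition):

import numpy as np

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
P3 = np.linalg.matrix_power(P, 3)
print(np.array([1.0, 0.0]) @ P3)   # (i)  [0.65, 0.35]
print(np.array([0.5, 0.5]) @ P3)   # (ii) [0.5875, 0.4125]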

Problems based on Type 1 and Type 2:

1. A man either drives a car or catches a train to go to the office each day. He never goes two days in a row by train; but if he drives one day, then the next day he is just as likely to drive again as to travel by train. Suppose that on the first day of the week the man tosses a fair die and drives to work if and only if a 6 appears. Find (i) the probability that he takes the train on the third day, and (ii) the probability that he drives to work in the long run.

Solution:

Let the state space = {𝑡𝑟𝑎𝑖𝑛, 𝑐𝑎𝑟}

The transition probability matrix (TPM) is

P = [ 0    1 ]
    [1/2  1/2]

(i) P(he drives the car on the first day) = P(getting a 6 on the die) = 1/6 (given)


P(he catches the train on the first day) = 1 − 1/6 = 5/6

The probability distribution of the first day is given by

P(1) = (5/6, 1/6)

The probability distribution of the second day is given by

P(2) = P(1) P = (5/6, 1/6) P = (1/12, 11/12)

The probability distribution of the third day is given by

P(3) = P(2) P = (1/12, 11/12) P = (11/24, 13/24)

P(he catches the train on the third day) = 11/24

(ii) Let the steady-state distribution be π = (π1, π2)

𝜋1 + 𝜋2 = 1 . . . (1)

Condition for steady state 𝜋𝑃 = 𝜋

(π1, π2) [ 0    1 ] = (π1, π2)
         [1/2  1/2]


(1/2) π2 = π1 . . . (2) ⇒ π2 = 2 π1

π1 + (1/2) π2 = π2 . . . (3)

(1) ⇒ π1 + 2 π1 = 1

⇒ 3 π1 = 1

⇒ π1 = 1/3

(2) ⇒ π2 = 2 × (1/3) = 2/3

P(he drives to work in the long run) = 2/3
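A sketch checking both answers (my own addition; state 0 = train, state 1 = car):

import numpy as np

P = np.array([[0.0, 1.0],
              [0.5, 0.5]])
p1 = np.array([5/6, 1/6])                 # day-1 distribution from the die toss
print(p1 @ np.linalg.matrix_power(P, 2))  # day 3: ~ [0.4583, 0.5417] = (11/24, 13/24)
print(np.linalg.matrix_power(P, 200)[0])  # long run: ~ [1/3, 2/3]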

2. A fair die is tossed repeatedly. If Xn denotes the maximum of the numbers occurring in the first n tosses, find the TPM of the Markov chain. Find also P² and P(X2 = 6).

Solution:

Let the state space = {1, 2, 3, 4, 5, 6}

Xn denotes the maximum of the numbers occurring in the first n tosses.

The TPM is

P = [1/6  1/6  1/6  1/6  1/6  1/6]
    [ 0   2/6  1/6  1/6  1/6  1/6]
    [ 0    0   3/6  1/6  1/6  1/6]
    [ 0    0    0   4/6  1/6  1/6]
    [ 0    0    0    0   5/6  1/6]
    [ 0    0    0    0    0   6/6]

  = (1/6) [1  1  1  1  1  1]
          [0  2  1  1  1  1]
          [0  0  3  1  1  1]
          [0  0  0  4  1  1]
          [0  0  0  0  5  1]
          [0  0  0  0  0  6]

P² = P · P

P² = (1/36) [1  3  5   7   9  11]
            [0  4  5   7   9  11]
            [0  0  9   7   9  11]
            [0  0  0  16   9  11]
            [0  0  0   0  25  11]
            [0  0  0   0   0  36]

Since the initial probability distribution is not given, assume all states are equally likely:

∴ P(0) = (1/6, 1/6, 1/6, 1/6, 1/6, 1/6)

P[X2 = 6] = ∑_{i=1}^{6} P(X2 = 6 | X0 = i) P(X0 = i)

= P(X2 = 6 | X0 = 1) P(X0 = 1) + P(X2 = 6 | X0 = 2) P(X0 = 2)
+ P(X2 = 6 | X0 = 3) P(X0 = 3) + P(X2 = 6 | X0 = 4) P(X0 = 4)
+ P(X2 = 6 | X0 = 5) P(X0 = 5) + P(X2 = 6 | X0 = 6) P(X0 = 6)

= p16^(2) P(X0 = 1) + p26^(2) P(X0 = 2) + p36^(2) P(X0 = 3)
+ p46^(2) P(X0 = 4) + p56^(2) P(X0 = 5) + p66^(2) P(X0 = 6)

= (11/36 × 1/6) + (11/36 × 1/6) + (11/36 × 1/6) + (11/36 × 1/6) + (11/36 × 1/6) + (36/36 × 1/6)

P[X2 = 6] = 91/216
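A sketch that builds this TPM directly from the running-maximum rule and reproduces both answers (my own addition):

import numpy as np

# Row i (state i+1): the maximum stays at i+1 with probability (i+1)/6
# and jumps to each larger value with probability 1/6.
P = np.zeros((6, 6))
for i in range(6):
    P[i, i] = (i + 1) / 6
    P[i, i + 1:] = 1 / 6

P2 = np.linalg.matrix_power(P, 2)
print(np.round(P2 * 36).astype(int))   # matches the matrix above (entries times 36)

p0 = np.full(6, 1 / 6)                 # equally likely initial state
print((p0 @ P2)[5], 91 / 216)          # P(X2 = 6) = 91/216 ~ 0.4213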


3. A gambler has Rs. 2. He bets Rs. 1 at a time and wins Rs. 1 with probability 1/2. He stops playing if he loses Rs. 2 or wins Rs. 4. (i) Write down the TPM of the associated Markov chain. (ii) What is the probability that he has lost his money at the end of his 5th play? (iii) What is the probability that the game lasts more than 7 plays?

Solution:

Let Xn represent the amount with the player at the end of the nth round of the play.

Initially he has Rs. 2. He bets Rs. 1 at a time. The game ends when he loses

Rs. 2, [i.e., he has 2 − 2 = 0] or wins Rs. 4 [i.e., 2 + 4 = 6 ]

State space of {Xn} = {0, 1, 2, 3, 4, 5, 6}.

If he has Rs. 0, then he cannot play, ∴ p00 = 1.

If he has Rs. 6, then the play ends, ∴ p66 = 1.

If he has Rs. 1, then he wins and has Rs. 2 with probability 1/2, i.e., p12 = 1/2; if he loses Rs. 1, then he has Rs. 0, i.e., p10 = 1/2; and so on.

The TPM of the Markov chain is


P = [ 1    0    0    0    0    0    0 ]
    [1/2   0   1/2   0    0    0    0 ]
    [ 0   1/2   0   1/2   0    0    0 ]
    [ 0    0   1/2   0   1/2   0    0 ]
    [ 0    0    0   1/2   0   1/2   0 ]
    [ 0    0    0    0   1/2   0   1/2]
    [ 0    0    0    0    0    0    1 ]

Since initially he has Rs. 2, p^(0)(2) = 1.

The initial probability distribution of {Xn} is P(0) = (0, 0, 1, 0, 0, 0, 0)

The probability distribution at the first round is given by

P(1) = P(0) P = (0, 1/2, 0, 1/2, 0, 0, 0)

The probability distribution at the second round is given by

P(2) = P(1) P = (1/4, 0, 1/2, 0, 1/4, 0, 0)

The probability distribution at the third round is given by

P(3) = P(2) P = (1/4, 1/4, 0, 3/8, 0, 1/8, 0)

The probability distribution at the fourth round is given by

P(4) = P(3) P = (3/8, 0, 5/16, 0, 1/4, 0, 1/16)

The probability distribution at the fifth round is given by


P(5) = P(4) P = (3/8, 5/32, 0, 9/32, 0, 1/8, 1/16)

P[the man has lost his money at the end of his 5th play] = P5(0) = 3/8.

The probability distribution at the sixth round is given by

P(6) = P(5) P = (29/64, 0, 7/32, 0, 13/64, 0, 1/8)

The probability distribution at the seventh round is given by

P(7) = P(6) P = (29/64, 7/64, 0, 27/128, 0, 13/128, 1/8)

P(the game lasts more than 7 plays)

= P(the system is in none of the states 0 and 6 at the end of the 7th round)

= P7(1) + P7(2) + P7(3) + P7(4) + P7(5)

= 7/64 + 0 + 27/128 + 0 + 13/128

= 27/64
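A sketch that reproduces the whole table of round-by-round distributions in exact fractions (my own addition):

from fractions import Fraction as F

# Gambler's-ruin TPM on states 0..6 with absorbing barriers at 0 and 6.
P = [[F(0)] * 7 for _ in range(7)]
P[0][0] = P[6][6] = F(1)
for i in range(1, 6):
    P[i][i - 1] = P[i][i + 1] = F(1, 2)

p = [F(0)] * 7
p[2] = F(1)                                   # he starts with Rs. 2
for n in range(1, 8):
    p = [sum(p[i] * P[i][j] for i in range(7)) for j in range(7)]
    if n == 5:
        print("P(5):", p, "-> lost:", p[0])   # p[0] = 3/8
print("game lasts > 7 plays:", sum(p[1:6]))   # 27/64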


Problems under 'n'-Step Probability

(1) P[Xn = j | Xm = i] = pij^(n−m)

(2) Pj^(n) = P[Xn = j] = ∑_{i=0}^{k} P[Xn = j | X0 = i] P[X0 = i]

1. The transition probability matrix (TPM) of a Markov chain {Xn} with three states 1, 2 and 3 is

P = [0.1  0.5  0.4]
    [0.6  0.2  0.2]
    [0.3  0.4  0.3]

and the initial distribution is P(0) = (0.7, 0.2, 0.1).

Find (i) P[X3 = 2, X2 = 3, X1 = 3, X0 = 2] (ii) P[X3 = 1 | X2 = 3, X0 = 2] (iii) P[X2 = 3]

Solution:

State space = {1,2,3}

P = [0.1  0.5  0.4]
    [0.6  0.2  0.2]
    [0.3  0.4  0.3]

(rows and columns indexed by the states 1, 2, 3)

The given initial probability distribution is

P(0) = (0.7, 0.2, 0.1)

∴ P(X0 = 1) = 0.7, P(X0 = 2) = 0.2, P(X0 = 3) = 0.1


(i) 𝑃[𝑋3 = 2, 𝑋2 = 3, 𝑋1 = 3, 𝑋0 = 2]

= P[X3 = 2 | X2 = 3] P[X2 = 3 | X1 = 3] P[X1 = 3 | X0 = 2] P[X0 = 2]

= p32^(3−2) p33^(2−1) p23^(1−0) P[X0 = 2]

= p32^(1) p33^(1) p23^(1) P[X0 = 2] = (0.4)(0.3)(0.2)(0.2)

= 0.0048

(ii) P[X3 = 1 | X2 = 3, X0 = 2] = P[X3 = 1 | X2 = 3] [by the Markov property]

= p31^(3−2) = p31^(1)

= 0.3

(iii) P(X2 = 3) = ∑_{i=1}^{3} P(X2 = 3 | X0 = i) P(X0 = i)

= P(X2 = 3 | X0 = 1) P(X0 = 1) + P(X2 = 3 | X0 = 2) P(X0 = 2) + P(X2 = 3 | X0 = 3) P(X0 = 3)

P(X2 = 3) = p13^(2) P(X0 = 1) + p23^(2) P(X0 = 2) + p33^(2) P(X0 = 3) ... (1)

To compute p13^(2), p23^(2), p33^(2), find P²:

P² = P × P


P² = [0.43  0.31  0.26]
     [0.24  0.42  0.34]
     [0.36  0.35  0.29]

(rows and columns indexed by the states 1, 2, 3)

(1) ⇒ P[X2 = 3] = 0.26 × 0.7 + 0.34 × 0.2 + 0.29 × 0.1 = 0.279

P[X2 = 3] = 0.279
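All three parts in a few NumPy lines (my own sketch; states 1, 2, 3 map to row/column indices 0, 1, 2):

import numpy as np

P = np.array([[0.1, 0.5, 0.4],
              [0.6, 0.2, 0.2],
              [0.3, 0.4, 0.3]])
p0 = np.array([0.7, 0.2, 0.1])

# (i) chain rule plus the Markov property: p23 * p33 * p32 * P(X0 = 2)
print(P[1, 2] * P[2, 2] * P[2, 1] * p0[1])       # 0.0048
# (ii) conditioning on X0 drops out: p31
print(P[2, 0])                                   # 0.3
# (iii) two-step distribution, third component
print((p0 @ np.linalg.matrix_power(P, 2))[2])    # 0.279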

2. The TPM P of a Markov chain with three states 0, 1, 2 is

P = [3/4  1/4   0 ]
    [1/4  1/2  1/4]
    [ 0   3/4  1/4]

and the initial state distribution of the chain is P(X0 = i) = 1/3; i = 0, 1, 2.

Find (i) P[X2 = 2] (ii) P[X3 = 1, X2 = 2, X1 = 1, X0 = 2]

Solution:

State space is {0,1,2}

P = [3/4  1/4   0 ]
    [1/4  1/2  1/4]
    [ 0   3/4  1/4]

The initial probability distribution is

P(X0 = i) = 1/3; i = 0, 1, 2

∴ P(X0 = 0) = 1/3, P(X0 = 1) = 1/3, P(X0 = 2) = 1/3


(ii) P[X3 = 1, X2 = 2, X1 = 1, X0 = 2]

= P(X3 = 1 | X2 = 2) P(X2 = 2 | X1 = 1) P(X1 = 1 | X0 = 2) P(X0 = 2)

= p21^(3−2) p12^(2−1) p21^(1−0) P(X0 = 2)

= p21^(1) p12^(1) p21^(1) P(X0 = 2)

= (3/4)(1/4)(3/4)(1/3)

P(X3 = 1, X2 = 2, X1 = 1, X0 = 2) = 3/64

(i) P(X2 = 2) = ∑_{i=0}^{2} P(X2 = 2 | X0 = i) P(X0 = i)

= P(X2 = 2 | X0 = 0) P(X0 = 0) + P(X2 = 2 | X0 = 1) P(X0 = 1) + P(X2 = 2 | X0 = 2) P(X0 = 2)

= p02^(2) P(X0 = 0) + p12^(2) P(X0 = 1) + p22^(2) P(X0 = 2) ... (1)

To compute p02^(2), p12^(2), p22^(2), find P²:

P² = (1/16) [10  5  1]
            [ 5  8  3]
            [ 3  9  4]

(1) ⇒ P(X2 = 2) = (1/16)(1/3) + (3/16)(1/3) + (4/16)(1/3)


= (1/3)(8/16) = 1/6

Classification of Markov Chain

State diagram:

The pictorial representation of a Markov chain is called a state diagram.

Accessibility:

State j is said to be accessible from state i if j can be reached from i in some number of steps,

i.e., pij^(n) > 0 for some n > 0.

Communication:

If two states i and j are accessible from each other, then they are said to communicate with each other.

Irreducible Markov chain:

If every state can be reached from every other state, then the Markov chain is irreducible,

i.e., all the states communicate with each other.


Transient state:

A state is transient if and only if there is a positive probability that the process

will not return to this state.

Absorbing state:

A state i is called an absorbing state if 𝑝𝑖𝑖 = 1

Note:

All the absorbing states of a Markov chain are recurrent.

Ergodic State:

A non-null persistent (positive recurrent) and aperiodic state is called an ergodic state.

A Markov chain is said to be ergodic if all of its states are ergodic, i.e., all its states are positive recurrent and aperiodic.

An irreducible, aperiodic, non-null persistent Markov chain is ergodic.
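Accessibility and irreducibility depend only on which entries of the TPM are nonzero, so they can be checked mechanically with a boolean reachability matrix (my own sketch, not from the original notes):

import numpy as np

def reachability(P):
    """R[i, j] is True iff state j is accessible from i in some n > 0 steps."""
    A = np.asarray(P) > 0                            # one-step adjacency
    R = A.copy()
    for _ in range(len(A)):                          # paths saturate within |states| rounds
        R = R | ((R.astype(int) @ A.astype(int)) > 0)
    return R

def is_irreducible(P):
    return bool(reachability(P).all())               # every pair of states communicates

# The gambler's-ruin chain above is not irreducible: states 0 and 6 are absorbing.
P = np.zeros((7, 7))
P[0, 0] = P[6, 6] = 1.0
for i in range(1, 6):
    P[i, i - 1] = P[i, i + 1] = 0.5
print(is_irreducible(P))                             # False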
