
ENGG515
Dr. Samih Abdul-Nabi

Markov Chain
Table of contents
1 Introduction
1.1 Example of Markov chain
2 Stochastic processes
2.1 Examples
2.2 Structure of a stochastic process
3 Markov chain
3.1 Stationary transition probabilities
3.2 Transition matrix
3.2.1 Formulating the Weather Example as a Markov Chain
3.3 Chapman-Kolmogorov Equations
3.3.1 n-Step Transition Matrices for the Weather Example
3.3.2 Example
4 Unconditional State Probabilities
5 Classification of States of a Markov Chain
5.1 Accessible states
5.2 Communicating States and Communicating Classes
5.3 Recurrent, Transient and Absorbing States
5.3.1 Gambling example
6 Long-Run Properties of Markov Chains
7 Absorbing states
8 Exercises


1 Introduction
This topic presents probability models for processes that evolve over time in a probabilistic
manner. Such processes are called stochastic processes. We will focus, however, on a special
kind of stochastic process called a Markov chain. Markov chains have the special property that
probabilities involving how the process will evolve in the future depend only on the present
state of the process, and so are independent of events in the past.

1.1 Example of Markov chain

Example 1: A Markov chain can be used to model the status of equipment. Consider a machine
used in a manufacturing process. Suppose that the possible states for the machine are:
• Idle & awaiting work (I)
• Working on a job/task (W)
• Broken (B)
• In Repair (R)
Using a Markov chain, we can monitor the machine, build the transition matrix and answer
questions such as:
• Find the probability that the machine is working on a job one hour from now if the
machine is idle now.
• Find the probability that the machine is being repaired in one hour if it is broken now.
• Find the probability that the machine is idle three hours from now if the machine is
working on a job now.
Answering these questions helps manage the machine and predict the potential need for
additional machines.

2 Stochastic processes
A stochastic process is an indexed collection of random variables $\{X_t\}$ with:
• $t \in T$. Normally T is the set of nonnegative integers and t represents a unit of time (day,
week, month, year, hour, second…)
• $X_t$ a measurable characteristic of interest at time t.

2.1 Examples
- A coin tossed n times. The number of heads is a random variable which depends on n. It
is therefore a (discrete) stochastic process.
- Let X, ω and α be random variables; then $Y(t) = X \sin(\omega t + \alpha)$ is a stochastic process. It
corresponds to an oscillation with random amplitude, frequency and phase.

- The number of occupied channels in a telephone link at time t, or at the arrival time of
the nth customer. For a phone link with 24 channels (25 states), we can monitor the link
every minute; $X_t$, the number of used channels at time t, is a stochastic process.
2.2 Structure of a stochastic process
The current status of the system can fall into any one of (M + 1) mutually exclusive
categories called states. For notational convenience, these states are labeled 0, 1, …, M. The
random variable $X_t$ represents the state of the system at time t, so its only possible values are
0, 1, …, M. The system is observed at particular points of time, labeled t = 0, 1, 2, … Thus, the
stochastic process $\{X_t\}$ provides a mathematical representation of how the status of the
physical system evolves over time.

Weather example: for the weather in Beirut, the chance of being dry (no rain) tomorrow is 0.9
if it is dry today, and if it rains today, the chance of being dry tomorrow is 0.6. Define t, $X_t$ and
the states of the stochastic process.

The weather is observed each day t and its evolution is a stochastic process. The state of the
system can be either rain or dry, so we have 2 states.
State 0: Dry    State 1: Rain    (M = 1 and we have M + 1 states)
$$X_t = \begin{cases} 0 & \text{if day } t \text{ is dry} \\ 1 & \text{if day } t \text{ has rain} \end{cases}$$

Telephone line channel example: consider a circuit-switched telephone link with 24 channels
and observe the number of used channels at the start of each hour. Define t, $X_t$ and the
states of the stochastic process.
We have here 25 states, defined as follows:
State 0: no channel used
State 1: one channel used
…
State 24: 24 channels used

3 Markov chain
A stochastic process has the Markovian property if the conditional probability of any future
"event," given any past "events" and the present state $X_t = i$, is independent of the past
events and depends only upon the present state.
This means:
$$P\{X_{t+1} = j \mid X_0 = k_0, X_1 = k_1, X_2 = k_2, \ldots, X_{t-1} = k_{t-1}, X_t = i\} = P\{X_{t+1} = j \mid X_t = i\}$$

In words: I am at time t and the current state $X_t$ is i; one unit of time earlier I was in state $k_{t-1}$,
…, and at time 0 I was in state $k_0$. What is the probability of being in state j at time t+1?
This probability equals the probability that, being in state i at time t, the next state is j.
So the probability does not depend on the previous states and depends only on the
current state → the Markovian property.
In other words, the probability that $X_{t+1} = j$ knowing all previous events is the same as the
probability that $X_{t+1} = j$ knowing only that $X_t = i$ (the current state).

A stochastic process is a Markov chain if it has the Markovian property.

3.1 Stationary transition probabilities
The conditional probabilities $P\{X_{t+1} = j \mid X_t = i\}$ are called one-step transition probabilities.
If they do not depend on t, for example
$$P\{X_5 = j \mid X_4 = i\} = P\{X_{12} = j \mid X_{11} = i\},$$
that is, $P\{X_{t+1} = j \mid X_t = i\} = P\{X_1 = j \mid X_0 = i\}$ for t = 1, 2, …, then the transition probabilities
are said to be stationary. To simplify notation, we write $P_{ij} = P\{X_{t+1} = j \mid X_t = i\}$; note that
$P_{ij}$ is independent of t since it is stationary.

Similarly, if $P\{X_{t+n} = j \mid X_t = i\} = P\{X_n = j \mid X_0 = i\}$ for t = 0, 1, 2, …, then these conditional
probabilities are called n-step transition probabilities and we can write $P_{ij}^{(n)} = P\{X_{t+n} = j \mid X_t = i\}$.
Example: if $P\{X_4 = j \mid X_1 = i\} = 0.6$ and the 3-step transition probabilities are stationary,
then $P\{X_{10} = j \mid X_7 = i\} = 0.6$.
Clarification of $P_{ij}^{(n)}$:
- $P_{ij}^{(n)}$ is the conditional probability that the system will be in state j after exactly n steps
(time units) if it starts in state i at time t. At any time, if I am in state i, I will be in state j
after n steps with the same probability.
- $P_{ij}^{(n)} \ge 0$ for all i, j and n.
- $\sum_{j=0}^{M} P_{ij}^{(n)} = 1$ for all i and n (the process must move to some state).

Starting from state i, after n steps the system will be in one of the available states:
$$P_{i0}^{(n)} + P_{i1}^{(n)} + \cdots + P_{iM}^{(n)} = 1,$$
with, for example, $P_{i0}^{(2)}$ the probability of moving in two steps from state i to state 0.

3.2 Transition matrix
A convenient way of showing all the n-step transition probabilities is the n-step transition
matrix:
$$P^{(n)} = \begin{pmatrix} P_{00}^{(n)} & P_{01}^{(n)} & \cdots & P_{0M}^{(n)} \\ P_{10}^{(n)} & P_{11}^{(n)} & \cdots & P_{1M}^{(n)} \\ \vdots & \vdots & \ddots & \vdots \\ P_{M0}^{(n)} & P_{M1}^{(n)} & \cdots & P_{MM}^{(n)} \end{pmatrix}$$
It is read as the transition from the row state to the column state: $P_{ij}^{(n)}$ is the probability of
going from state i to state j in n steps.

The Markov chains to be considered in this chapter have the following properties:
- A finite number of states
- Stationary transition probabilities
3.2.1 Formulating the Weather Example as a Markov Chain
Recall the weather example: $X_t = \begin{cases} 0 & \text{if day } t \text{ is dry} \\ 1 & \text{if day } t \text{ has rain} \end{cases}$ with two states 0 and 1.
We also have $P_{00} = P\{X_{t+1} = 0 \mid X_t = 0\} = 0.9$ and $P_{10} = P\{X_{t+1} = 0 \mid X_t = 1\} = 0.6$.
Since $P_{00} + P_{01} = 1$, we get $P_{01} = 0.1$, and since $P_{10} + P_{11} = 1$, we get $P_{11} = 0.4$. Therefore
the transition matrix is:
$$P = \begin{pmatrix} P_{00} & P_{01} \\ P_{10} & P_{11} \end{pmatrix} = \begin{pmatrix} 0.9 & 0.1 \\ 0.6 & 0.4 \end{pmatrix}$$
Figure 1 shows the state transition diagram of the weather example. Nodes represent the
states and arrows show the possible transitions, each labeled with its transition probability.

Figure 1: The state transition diagram of the weather example

3.3 Chapman-Kolmogorov Equations

We introduced the n-step probability $P_{ij}^{(n)} = P\{X_{t+n} = j \mid X_t = i\}$. The following Chapman-
Kolmogorov equations provide a method for computing these n-step transition probabilities:

$$P_{ij}^{(n)} = \sum_{k=0}^{M} P_{ik}^{(m)} P_{kj}^{(n-m)} \quad \text{for all } i, j = 0, 1, \ldots, M \text{ and any } m = 1, 2, \ldots, n-1, \; n = m+1, m+2, \ldots$$
Example with 2 states and 3 steps:
$$P_{01}^{(3)} = P_{00}^{(1)} P_{01}^{(2)} + P_{01}^{(1)} P_{11}^{(2)} = P_{00}P_{00}P_{01} + P_{00}P_{01}P_{11} + P_{01}P_{10}P_{01} + P_{01}P_{11}P_{11}$$
When going from state i to state j in n steps, the system will be in some state k after m steps
and then reach state j in the remaining (n − m) steps. Summing these conditional probabilities
over all possible values of k gives $P_{ij}^{(n)}$.
Since $P_{ij}^{(n)}$ can be obtained for any m, looking at the cases m = 1 and m = n − 1 we have
$$P_{ij}^{(n)} = \sum_{k=0}^{M} P_{ik} P_{kj}^{(n-1)} = \sum_{k=0}^{M} P_{ik}^{(n-1)} P_{kj}$$
so the n-step transition matrix can be obtained recursively from the one-step transition matrix.


Example:
$$P_{ij}^{(2)} = \sum_{k=0}^{M} P_{ik} P_{kj}$$
These are the elements of the matrix $P^{(2)}$, obtained by multiplying P by P: $P_{ij}^{(2)}$ is the ith
row of P times the jth column of P. Therefore multiplying P by P gives $P^{(2)}$, the 2-step
transition matrix.
3.3.1 n-Step Transition Matrices for the Weather Example
Back to the weather example, we had:
$$P = \begin{pmatrix} 0.9 & 0.1 \\ 0.6 & 0.4 \end{pmatrix}$$
Thus
$$P^{(2)} = P \cdot P = \begin{pmatrix} 0.9 & 0.1 \\ 0.6 & 0.4 \end{pmatrix} \begin{pmatrix} 0.9 & 0.1 \\ 0.6 & 0.4 \end{pmatrix} = \begin{pmatrix} 0.87 & 0.13 \\ 0.78 & 0.22 \end{pmatrix}$$

The probability of going from state 0 to state 0 in two steps is 0.87. Since a step here is a day,
the probability of a dry day in two days, knowing that it is dry today, is 0.87 ($P_{00}^{(2)}$).
For example, if today is Wednesday and dry, the probability that tomorrow is dry is 0.9.

The probability of going from state 0 to state 1 in two steps is 0.13. So the probability of rain
in two days, knowing that it is dry today, is 0.13. This can be explained as follows: to go from
dry to rain in 2 days, we go either:
- Dry → Dry → Rain with probability 0.9 × 0.1 = 0.09
- Dry → Rain → Rain with probability 0.1 × 0.4 = 0.04
Dry to rain (in two days): 0.09 + 0.04 = 0.13
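
These by-hand calculations are easy to check numerically. Below is a minimal sketch in Python
with numpy (our choice of tool, not the course's; any matrix software works):

```python
import numpy as np

# One-step transition matrix of the weather example (state 0 = dry, state 1 = rain).
P = np.array([[0.9, 0.1],
              [0.6, 0.4]])

# Chapman-Kolmogorov: the 2-step matrix is the matrix product P * P.
P2 = P @ P
print(P2)        # [[0.87 0.13], [0.78 0.22]]
print(P2[0, 1])  # 0.13 = 0.9*0.1 + 0.1*0.4: dry today -> rain in two days
```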
3.3.2 Example

Consider the state diagram shown in Figure 2.

Figure 2: n-step transition example


a) Find the transition matrix, P.
$$P = \begin{pmatrix} 0.6 & 0.2 & 0.2 \\ 0.4 & 0 & 0.6 \\ 0 & 0.8 & 0.2 \end{pmatrix}$$
b) Find $P(X_2 = 3 \mid X_0 = 1)$.
We need the probability of going from state 1 to state 3 in 2 steps, i.e. $P_{13}^{(2)}$:
$$P_{13}^{(2)} = (0.6 \;\; 0.2 \;\; 0.2)\,(0.2 \;\; 0.6 \;\; 0.2)^T = 0.12 + 0.12 + 0.04 = 0.28$$

The requested probability means: starting from state 1, what is the probability of being in
state 3 after 2 steps? We can reach 3 from 1 in 2 steps as follows:
1-1-3 with probability 0.6 × 0.2 = 0.12
1-3-3 with probability 0.2 × 0.2 = 0.04
1-2-3 with probability 0.2 × 0.6 = 0.12
So $P(X_2 = 3 \mid X_0 = 1) = 0.28$.

This can also be obtained by considering
$$P(X_2 = 3 \mid X_0 = 1) = P_{13}^{(2)} = (0.6 \;\; 0.2 \;\; 0.2) \begin{pmatrix} 0.2 \\ 0.6 \\ 0.2 \end{pmatrix} = 0.28,$$
obtained by multiplying the first row of P by the third column of P to get the element
$P_{13}^{(2)}$ of the $P^{(2)}$ matrix.

4 Unconditional State Probabilities

One-step or n-step transition probabilities are conditional: $P_{ij}^{(n)}$ is the probability of going
to j in n steps knowing (the condition) that I am in state i.
The unconditional probability $P(X_n = j)$ is obtained by specifying the probability distribution
of the initial state, $P(X_0 = i)$ for i = 0, 1, …, M. Therefore:
$$P(X_n = j) = P(X_0 = 0)P_{0j}^{(n)} + P(X_0 = 1)P_{1j}^{(n)} + \cdots + P(X_0 = M)P_{Mj}^{(n)}$$
$$P(X_n = j) = \big( P(X_0 = 0) \;\; P(X_0 = 1) \;\; \ldots \;\; P(X_0 = M) \big) \begin{pmatrix} P_{0j}^{(n)} \\ P_{1j}^{(n)} \\ \vdots \\ P_{Mj}^{(n)} \end{pmatrix}$$
The first vector (the row vector) is the probability distribution of the initial state. The second
vector (the column vector) is the jth column of the $P^n$ matrix.
Going back to the example in 3.3.2, answer the following questions:
a) Suppose that it is equally likely to start in any state at time 0 (the same probability for
each starting state). Find the probability distribution of $X_1$.
The distribution of $X_0$ is $\pi^T = (1/3 \;\; 1/3 \;\; 1/3)$, therefore
$$X_1 \sim \pi^T P = (1/3 \;\; 1/3 \;\; 1/3) \begin{pmatrix} 0.6 & 0.2 & 0.2 \\ 0.4 & 0 & 0.6 \\ 0 & 0.8 & 0.2 \end{pmatrix} = (1/3 \;\; 1/3 \;\; 1/3) = \big( P(X_1 = 1) \;\; P(X_1 = 2) \;\; P(X_1 = 3) \big)$$
We conclude that $X_1$ is also equally likely to be in state 1, 2 or 3.

b) Suppose that we begin at vertex 1 at time 0. Find the probability distribution of $X_2$ after
two steps.

The distribution of $X_0$ is $\pi^T = (1 \;\; 0 \;\; 0)$, therefore
$$X_2 \sim \pi^T P^2 = (1 \;\; 0 \;\; 0) \begin{pmatrix} 0.6 & 0.2 & 0.2 \\ 0.4 & 0 & 0.6 \\ 0 & 0.8 & 0.2 \end{pmatrix}^2 = (0.6 \;\; 0.2 \;\; 0.2) \begin{pmatrix} 0.6 & 0.2 & 0.2 \\ 0.4 & 0 & 0.6 \\ 0 & 0.8 & 0.2 \end{pmatrix} = (0.44 \;\; 0.28 \;\; 0.28)$$
This means that starting from vertex 1 we have $P(X_2 = 1) = 0.44$, $P(X_2 = 2) = 0.28$ and
$P(X_2 = 3) = 0.28$.

c) Suppose that it is equally likely to start in any state at time 0. Find the probability of
observing the trajectory (3, 2, 1, 1, 3).
$$P(3, 2, 1, 1, 3) = P(X_0 = 3)\,P_{32}P_{21}P_{11}P_{13} = \frac{1}{3} \times 0.8 \times 0.4 \times 0.6 \times 0.2 = 0.0128$$
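
As a quick check, all three answers can be reproduced numerically. A minimal sketch (numpy
assumed, as before):

```python
import numpy as np

# Transition matrix of the example in 3.3.2 (states 1, 2, 3 -> indices 0, 1, 2).
P = np.array([[0.6, 0.2, 0.2],
              [0.4, 0.0, 0.6],
              [0.0, 0.8, 0.2]])

pi0 = np.array([1/3, 1/3, 1/3])           # a) equally likely initial state
print(pi0 @ P)                            # distribution of X1: [1/3 1/3 1/3]

e1 = np.array([1.0, 0.0, 0.0])            # b) start at vertex 1
print(e1 @ np.linalg.matrix_power(P, 2))  # distribution of X2: [0.44 0.28 0.28]

# c) trajectory (3, 2, 1, 1, 3): initial probability times the step probabilities.
print((1/3) * P[2, 1] * P[1, 0] * P[0, 0] * P[0, 2])   # 0.0128
```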


5 Classification of States of a Markov Chain


The long-run properties of a Markov chain depend greatly on the characteristics of its states
and transition matrix. The objective is to understand the convergence of the transition
probabilities. For that we need the following definitions:

5.1 Accessible states

State j is said to be accessible from state i if $P_{ij}^{(n)} > 0$ for some $n \ge 0$, i.e., it is possible for the
system to enter state j when it starts from state i.

5.2 Communicating States and Communicating Classes

Definition: Consider a Markov chain with state space S and transition matrix P, and consider
states $i, j \in S$. Then state i communicates with state j if:
1. There exists some t such that $P_{ij}^{(t)} > 0$, AND
2. There exists some u such that $P_{ji}^{(u)} > 0$.

Theorem: if state i communicates with state j and state j communicates with state k then states
i and k communicate. (This follows from the Chapman-Kolmogorov equations).

Definition: States i and j are in the same communicating class if i ↔ j, i.e. if each state is
accessible from the other. A class may consist of a single state.
It follows that every state is a member of exactly one communicating class.

Example: consider the transition diagram shown in Figure 3. Find all its communicating classes.
States 1, 2, 3 and 4 belong to the same communicating class.
States 5 and 6 belong to the same communicating class.

Note that since 2 leads to 5 but 5 does not lead back to 2, they are in different
communicating classes.

Definition: A communicating class of states is closed if it is not possible to leave that class,
i.e. the communicating class C is closed if $P_{ij}^{(n)} = 0$ whenever $i \in C$ and $j \notin C$.

Figure 3: Example of communicating classes

In the example of Figure 3, the communicating class {5, 6} is closed.

Definition: If there is only one class, i.e., all the states communicate, the Markov chain is said to
be irreducible.
The example shown in Figure 2 is irreducible.

5.3 Recurrent, Transient and Absorbing States

Definition: A state is said to be a transient state if, upon entering this state, the process might
never return to this state again. Therefore, state i is transient if and only if there exists a state j
($j \ne i$) that is accessible from state i but not vice versa, that is, state i is not accessible from
state j.

5.3.1 Gambling example

Suppose that a player has $1 and with each play of the game wins $1 with probability p > 0 or
loses $1 with probability 1 − p > 0. The game ends when the player either accumulates $3 or goes
broke. This game is a Markov chain with the states representing the player's current holding of
money, that is, $0, $1, $2, or $3, and with the transition matrix (states 0 to 3) given by:
$$P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1-p & 0 & p & 0 \\ 0 & 1-p & 0 & p \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

The state transition diagram is shown in Figure 4.

Figure 4: Gambling problem

In this example, both states 1 and 2 are transient, since the process will leave these states
sooner or later and might never return.

Definition: A state is said to be a recurrent state if, upon entering this state, the process
definitely will return to this state again. Therefore, a state is recurrent if and only if it is not
transient.

In the example of Figure 4, states 0 and 3 are a special type of recurrent state.

Definition: A state is said to be an absorbing state if, upon entering this state, the process will
never leave it again. Therefore, state i is an absorbing state if and only if $P_{ii} = 1$.

In the example of Figure 4, states 0 and 3 are absorbing states.

NOTE: in a finite irreducible Markov chain, all states are recurrent.


Example: Consider the following transition matrix (states 0 to 4):
$$P = \begin{pmatrix} 0.25 & 0.75 & 0 & 0 & 0 \\ 0.5 & 0.5 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0.33 & 0.67 & 0 \\ 1 & 0 & 0 & 0 & 0 \end{pmatrix}$$
• We can identify state 2 as an absorbing state.
• State 3 is transient; there is a positive probability of going from 3 to 2 and remaining at 2.

• State 4 is also transient; it cannot be reached unless the process starts at state 4. Once
in 4, the process leaves to 0 and never comes back.
• States 0 and 1 are recurrent states.
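
This classification can be automated with a simple reachability computation. The sketch below
(numpy assumed; the helper logic is ours, not from the text) reproduces the conclusions for
the matrix above:

```python
import numpy as np

# Transition matrix of the example above (states 0-4).
P = np.array([[0.25, 0.75, 0,    0,    0],
              [0.5,  0.5,  0,    0,    0],
              [0,    0,    1,    0,    0],
              [0,    0,    0.33, 0.67, 0],
              [1,    0,    0,    0,    0]])
n = len(P)

# reach[i, j] = 1 if state j is accessible from state i (in n >= 0 steps).
reach = ((np.eye(n) + P) > 0).astype(int)
for _ in range(n):                 # repeated squaring saturates reachability
    reach = ((reach @ reach) > 0).astype(int)

# For a finite chain, state i is recurrent iff every state accessible from i
# leads back to i (otherwise i matches the definition of a transient state).
recurrent = [i for i in range(n)
             if all(reach[j, i] for j in range(n) if reach[i, j])]
transient = [i for i in range(n) if i not in recurrent]
absorbing = [i for i in range(n) if P[i, i] == 1]
print(recurrent, transient, absorbing)   # [0, 1, 2] [3, 4] [2]
```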

6 Long-Run Properties of Markov Chains

Reconsider the weather problem and its transition matrix:
$$P = \begin{pmatrix} 0.9 & 0.1 \\ 0.6 & 0.4 \end{pmatrix} \quad P^2 = \begin{pmatrix} 0.87 & 0.13 \\ 0.78 & 0.22 \end{pmatrix} \quad P^3 = \begin{pmatrix} 0.861 & 0.139 \\ 0.834 & 0.166 \end{pmatrix}$$
$$P^4 = \begin{pmatrix} 0.8583 & 0.1417 \\ 0.8502 & 0.1498 \end{pmatrix} \quad P^5 = \begin{pmatrix} 0.8575 & 0.1425 \\ 0.8551 & 0.1449 \end{pmatrix} \quad P^6 = \begin{pmatrix} 0.8572 & 0.1428 \\ 0.8565 & 0.1435 \end{pmatrix}$$
$$P^7 = \begin{pmatrix} 0.8572 & 0.1428 \\ 0.8570 & 0.1430 \end{pmatrix} \quad P^8 = \begin{pmatrix} 0.8572 & 0.1428 \\ 0.8571 & 0.1429 \end{pmatrix} \quad P^9 = \begin{pmatrix} 0.8571 & 0.1429 \\ 0.8571 & 0.1429 \end{pmatrix}$$
$$P^{10} = \begin{pmatrix} 0.8571 & 0.1429 \\ 0.8571 & 0.1429 \end{pmatrix}$$
By the time we reach the 10-step transition matrix, all rows have identical entries. So the
probability that the system is in each state j no longer depends on the initial state of the system.
In other words, there is a limiting probability that the system will be in each state j after a large
number of transitions, and this probability is independent of the initial state.

So we can say: $\lim_{n \to \infty} P_{ij}^{(n)}$ exists and is independent of i.

$P_{ij}^{(n)}$ is the probability that the system ends in state j, starting from state i, after n
transitions. When $n \to \infty$, the probability of ending in state j becomes fixed, independently
of the starting state.

Furthermore: $\lim_{n \to \infty} P_{ij}^{(n)} = \pi_j > 0$.

The $\pi_j$ uniquely satisfy the following steady-state equations:
$$\pi_j = \sum_{i=0}^{M} \pi_i P_{ij} \quad \text{for } j = 0, 1, \ldots, M$$
$$\sum_{j=0}^{M} \pi_j = 1$$
In matrix form, $\pi = \pi P$ where $\pi = (\pi_0, \pi_1, \ldots, \pi_M)$.


The term steady-state probability means that the probability of finding the process in a certain
state, say j, after a large number of transitions tends to the value πj, independent of the
probability distribution of the initial state. It is important to note that the steady-state
probability does not imply that the process settles down into one state. On the contrary, the
process continues to make transitions from state to state, and at any step n the transition
probability from state i to state j is still Pij.
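
The steady-state equations form a small linear system. A minimal sketch of solving them for
the weather chain (numpy assumed):

```python
import numpy as np

# Weather chain: solve pi = pi P together with sum(pi) = 1.
P = np.array([[0.9, 0.1],
              [0.6, 0.4]])
M = len(P)

# pi = pi P  <=>  (P^T - I) pi^T = 0; stack the normalization equation below it.
A = np.vstack([P.T - np.eye(M), np.ones(M)])
b = np.zeros(M + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)    # [0.8571 0.1429], i.e. (6/7, 1/7), matching the rows of P^10
```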


7 Absorbing states
Recall that a state k is called an absorbing state if $P_{kk} = 1$, so that once the system visits k it
remains there forever. Given that the process starts at i, the probability of ever going to k is
called the probability of absorption into state k and is denoted by $f_{ik}$. If the system has only
one absorbing state, this probability is 1 in the long run. So we are interested in systems with 2
or more absorbing states: the process will be absorbed by one of these states and we want to
study the probability of absorption.
These probabilities can be obtained by solving a system of linear equations. If k is an absorbing
state, then:
$$f_{ik} = \sum_{j=0}^{M} P_{ij} f_{jk} \quad \text{for } i = 0, 1, \ldots, M$$
subject to the conditions
$$f_{kk} = 1, \qquad f_{ik} = 0 \text{ if state } i \text{ is recurrent and } i \ne k$$

Example: Gambling
To illustrate the use of absorption probabilities in a random walk (a model used for gambling
where, with a single transition, the system either remains at state i or moves to one of the two
states immediately adjacent to i), consider a gambling example where two players (A and B),
each having $2, agree to keep playing the game, betting $1 at a time, until one player is
broke. The probability of A winning a single bet is 1/3, so B wins the bet with probability 2/3.
The number of dollars that player A has before each bet (0, 1, 2, 3, or 4) provides the states of a
Markov chain with transition matrix (states 0 to 4):
$$P = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 2/3 & 0 & 1/3 & 0 & 0 \\ 0 & 2/3 & 0 & 1/3 & 0 \\ 0 & 0 & 2/3 & 0 & 1/3 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$
The absorption equations for state 0 are:
$$f_{00} = 1$$
$$f_{10} = \tfrac{2}{3} f_{00} + \tfrac{1}{3} f_{20}$$
$$f_{20} = \tfrac{2}{3} f_{10} + \tfrac{1}{3} f_{30}$$
$$f_{30} = \tfrac{2}{3} f_{20} + \tfrac{1}{3} f_{40}$$
$$f_{40} = 0$$
leading to $f_{20} = 4/5$.
Player A will lose with probability 4/5 → player B will win with probability 4/5.
$f_{20} + f_{24} = 1 \Rightarrow f_{24} = 1/5$.

Starting from state 2 (since initially player A has $2), the probability of absorption into state 0
(A losing all her money) is obtained by solving for $f_{20}$ in the system of equations shown
above.
In the same way, we can compute $f_{24}$ (A wins) = 1/5.

Another way to solve the problem is by rearranging the P matrix (permuting rows and columns
so that the absorbing states come first) to obtain the block form:
$$P = \begin{pmatrix} I & 0 \\ A & B \end{pmatrix}$$
where I is an identity matrix, 0 is a zero matrix, and A and B are matrices.
It can be shown that:
• The expected time spent in transient state j, starting in transient state i, is element (i, j)
of $[I - B]^{-1}$.
• The expected time to absorption is $[I - B]^{-1}\mathbf{1}$, where $\mathbf{1}$ is a unit column vector. This
column is expressed in the unit of time used (see the examples).
• The matrix of absorption probabilities is $Q = [I - B]^{-1} A$.

So for our example, reordering the states as (0, 4, 2, 3, 1), the matrix P becomes:
$$P = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1/3 & 2/3 \\ 0 & 1/3 & 2/3 & 0 & 0 \\ 2/3 & 0 & 1/3 & 0 & 0 \end{pmatrix}$$
The rearranged matrix shows that 0 and 4 are absorbing states (the identity block). Starting
from state i (in our case state 2), the probability of ending at each absorbing state is given by
$Q = FA$ with $F = [I - B]^{-1}$.
Back to our example:
The first row of F (computed below) corresponds to state 2: on average the game spends 1.8
time units in state 2, 0.6 in state 3 and 1.2 in state 1 before being absorbed by 0 or 4. Suppose
each round takes 2 minutes; how long will a game last?
Total number of rounds = 1.8 + 0.6 + 1.2 = 3.6 → total game time = 3.6 × 2 = 7.2 minutes.
$$F = \left[ \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} - \begin{pmatrix} 0 & 1/3 & 2/3 \\ 2/3 & 0 & 0 \\ 1/3 & 0 & 0 \end{pmatrix} \right]^{-1} = \begin{pmatrix} 1 & -1/3 & -2/3 \\ -2/3 & 1 & 0 \\ -1/3 & 0 & 1 \end{pmatrix}^{-1}$$


With
$$A = \begin{pmatrix} 0 & 0 \\ 0 & 1/3 \\ 2/3 & 0 \end{pmatrix} \qquad B = \begin{pmatrix} 0 & 1/3 & 2/3 \\ 2/3 & 0 & 0 \\ 1/3 & 0 & 0 \end{pmatrix}$$
we obtain
$$F = \begin{pmatrix} 9/5 & 3/5 & 6/5 \\ 6/5 & 7/5 & 4/5 \\ 3/5 & 1/5 & 7/5 \end{pmatrix} \qquad \text{giving} \qquad Q = FA = \begin{pmatrix} 4/5 & 1/5 \\ 8/15 & 7/15 \\ 14/15 & 1/15 \end{pmatrix}$$
The rows of Q correspond to states 2, 3 and 1; the columns correspond to the absorbing states
0 and 4. $Q_{ij}$ is the probability of being absorbed by j when starting from i: each row gives the
probabilities of being absorbed by the absorbing states when starting from the corresponding
state.
Therefore, starting from state 2, the probability of being absorbed by state 0 is 4/5 and the
probability of being absorbed by state 4 is 1/5. Similarly, starting from state 3, the chance of
being absorbed by state 0 is 8/15.
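
The same F and Q can be obtained in a few lines. A minimal sketch (numpy assumed), with
the transient states ordered (2, 3, 1) and the absorbing states (0, 4) as above:

```python
import numpy as np

# Blocks of the rearranged gambling matrix: transient (2, 3, 1), absorbing (0, 4).
B = np.array([[0,   1/3, 2/3],
              [2/3, 0,   0  ],
              [1/3, 0,   0  ]])
A = np.array([[0,   0  ],
              [0,   1/3],
              [2/3, 0  ]])

F = np.linalg.inv(np.eye(3) - B)   # expected time spent in each transient state
Q = F @ A                          # absorption probabilities
print(F[0])                        # [1.8 0.6 1.2]: time in states 2, 3, 1 from state 2
print(Q[0])                        # [0.8 0.2]: absorbed at 0 (A loses) or 4 (A wins)
print(F @ np.ones(3) * 2)          # expected game length in minutes (2 min per round)
```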

Example: A product is processed on two sequential machines, M1 and M2. Processing time at
machine M1 is 20 minutes, while processing time at machine M2 is 30 minutes.
Inspection takes place after a product unit is completed on either machine. There is a 5%
chance that the unit will be junked before inspection. After inspection, there is a 3% chance the
unit will be junked and a 7% chance of it being returned to the same machine for reworking;
otherwise, a unit passing inspection on both machines is good. Inspection times after M1 and
M2 are 5 and 7 minutes respectively.
a) For a part starting at machine M1, determine the probability of being junked.
b) For a part starting at machine M1, determine the average number of visits to each state.
c) For a part starting at machine M1, determine the average time until the part is disposed
of (either junked or completed).
d) If a batch of 1000 units is started on machine M1, determine the average number of
completed good units.

Solution:
The timeline of the system is as follows:
M1 (20 min), outcomes:
  Junked: 5%
  Inspected (5 min): 95%, outcomes:
    Returned to M1: 7%
    Junked: 3%
    Sent to M2 (30 min): 90%, outcomes:
      Junked: 5%
      Inspected (7 min): 95%, outcomes:
        Junked: 3%
        Returned to M2: 7%
        Completed: 90%

The production process has 6 states: start at M1 (s1), inspect after M1 (i1), start at M2 (s2),
inspect after M2 (i2), junked after inspection at M1 or M2 (J), and good after M2 (G). States
J and G are absorbing states. The transition matrix (rows and columns ordered s1, i1, s2, i2, J, G)
is:
$$P = \begin{pmatrix} 0 & 0.95 & 0 & 0 & 0.05 & 0 \\ 0.07 & 0 & 0.9 & 0 & 0.03 & 0 \\ 0 & 0 & 0 & 0.95 & 0.05 & 0 \\ 0 & 0 & 0.07 & 0 & 0.03 & 0.9 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$
We can easily deduce the matrices (transient states s1, i1, s2, i2; absorbing states J, G):
$$B = \begin{pmatrix} 0 & 0.95 & 0 & 0 \\ 0.07 & 0 & 0.9 & 0 \\ 0 & 0 & 0 & 0.95 \\ 0 & 0 & 0.07 & 0 \end{pmatrix} \qquad A = \begin{pmatrix} 0.05 & 0 \\ 0.03 & 0 \\ 0.05 & 0 \\ 0.03 & 0.9 \end{pmatrix}$$
Therefore
$$[I - B]^{-1} = \begin{pmatrix} 1.07 & 1.02 & 0.98 & 0.93 \\ 0.07 & 1.07 & 1.03 & 0.98 \\ 0 & 0 & 1.07 & 1.02 \\ 0 & 0 & 0.07 & 1.07 \end{pmatrix} \qquad [I - B]^{-1} A = \begin{pmatrix} 0.16 & 0.84 \\ 0.12 & 0.88 \\ 0.08 & 0.92 \\ 0.04 & 0.96 \end{pmatrix}$$
with rows corresponding to s1, i1, s2, i2 and, for $[I - B]^{-1}A$, columns to J and G.
Starting from M1, the probability of being junked is 0.16 and the probability of ending good is
0.84. So a batch of 1000 started units yields on average 840 good units. To obtain 1000 good
units, we must start X = 1000/0.84 ≈ 1190 units; the time needed is then 1190 × 62.41
minutes (see below).

With the processing times per visit written as a column vector (20, 5, 30, 7) for (s1, i1, s2, i2):
$$\text{Total time} = \begin{pmatrix} 1.07 & 1.02 & 0.98 & 0.93 \\ 0.07 & 1.07 & 1.03 & 0.98 \\ 0 & 0 & 1.07 & 1.02 \\ 0 & 0 & 0.07 & 1.07 \end{pmatrix} \begin{pmatrix} 20 \\ 5 \\ 30 \\ 7 \end{pmatrix} = \begin{pmatrix} 62.41 \\ 44.51 \\ 39.24 \\ 9.59 \end{pmatrix}$$

Average time spent at M1 = 1.07 visits × 20 min = 21.4 min.
Average time spent at inspection after M1 = 1.02 × 5 = 5.1 min.
Average time spent at M2 = 0.98 × 30 = 29.4 min.
Average time spent at inspection after M2 = 0.93 × 7 = 6.51 min.
Total: 21.4 + 5.1 + 29.4 + 6.51 = 62.41 minutes.
To produce 1000 good parts you need to start with about 1190 (= 1000/0.84).
Time needed to produce 1000 good units ≈ 1190 × 62.41 minutes.

a) For a part starting at M1, the probability of being junked is 0.16.

b) The top row of $[I - B]^{-1}$, corresponding to state s1, shows that for a part starting at
M1, on average, machine M1 is visited 1.07 times, inspection after M1 is visited 1.02
times, machine M2 is visited 0.98 times, and inspection after M2 is visited 0.93 times.

The reason the numbers of visits to machine M1 and inspection after M1 are greater than 1
is rework and re-inspection. Because some parts are junked, the corresponding values
for machine M2 are less than 1.

c) In this exercise, the processing time is not the same for all tasks. Consider the processing
times as a vector $PT = (20, 5, 30, 7)^T$; then
$$[I - B]^{-1} PT = \begin{pmatrix} 62.41 \\ 44.51 \\ 39.24 \\ 9.59 \end{pmatrix}$$
Therefore, a part starting at M1 takes on average 62.41 minutes to be processed.

d) Since the probability of a part being junked is 0.16, the probability of being good and
completed is 0.84. Therefore 1000 × 0.84 = 840 pieces will be completed in a starting
batch of 1000.
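
All four answers follow from $F = [I - B]^{-1}$. A minimal sketch (numpy assumed):

```python
import numpy as np

# Two-machine production example: transient (s1, i1, s2, i2), absorbing (J, G).
B = np.array([[0,    0.95, 0,    0   ],
              [0.07, 0,    0.9,  0   ],
              [0,    0,    0,    0.95],
              [0,    0,    0.07, 0   ]])
A = np.array([[0.05, 0  ],
              [0.03, 0  ],
              [0.05, 0  ],
              [0.03, 0.9]])
t = np.array([20, 5, 30, 7])       # minutes per visit to s1, i1, s2, i2

F = np.linalg.inv(np.eye(4) - B)
print(F[0])                  # b) visits from s1: approx. [1.07 1.02 0.98 0.93]
print((F @ A)[0])            # a) absorption from s1: approx. [0.16 0.84] for (J, G)
print((F @ t)[0])            # c) expected processing time from s1: approx. 62.4 min
print(1000 * (F @ A)[0, 1])  # d) expected good units out of 1000: approx. 840
```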


8 Exercises

1) Imad has a history of receiving fines (tickets) for driving violations. As soon as he has
accumulated 4 tickets, his driving license is revoked until he completes a new driver
education class, in which case he starts with a clean record. Imad is most reckless
immediately after completing the driver education class: he is habitually stopped by the
police with a 50-50 chance of being fined. After each new fine, he tries to be more careful,
which reduces the probability of a fine by 0.1.
a. Express Imad's problem as a Markov chain by identifying the states and the transition
matrix for this situation.
b. Determine the long-run probabilities.
c. What is the probability that Imad loses his license?

Solution

a. We define 5 states: 0, 1, 2, 3, 4, where state k means Imad has k tickets. The transition
matrix is then:
$$P = \begin{pmatrix} 0.5 & 0.5 & 0 & 0 & 0 \\ 0 & 0.6 & 0.4 & 0 & 0 \\ 0 & 0 & 0.7 & 0.3 & 0 \\ 0 & 0 & 0 & 0.8 & 0.2 \\ 1 & 0 & 0 & 0 & 0 \end{pmatrix}$$
From state 0, the chances are 50-50 of getting a ticket. In state 1, the chance of getting a
new ticket and moving to state 2 is reduced by 0.1 (so it is 0.4), with the remaining 0.6
being the probability of staying in the same state. When reaching state 4, the probability
of moving to state 0 (no tickets) is 1, as he is forced to follow a driver education class
and his record is cleaned.

b. We solve $\pi = \pi P$ with $\pi = (\pi_0, \pi_1, \ldots, \pi_4)$:
$$(\pi_0 \;\; \pi_1 \;\; \pi_2 \;\; \pi_3 \;\; \pi_4)\,P = \big( 0.5\pi_0 + \pi_4, \;\; 0.5\pi_0 + 0.6\pi_1, \;\; 0.4\pi_1 + 0.7\pi_2, \;\; 0.3\pi_2 + 0.8\pi_3, \;\; 0.2\pi_3 \big)$$
together with $\pi_0 + \pi_1 + \pi_2 + \pi_3 + \pi_4 = 1$.

Then $\pi_0 = 0.144578$, $\pi_1 = 0.180723$, $\pi_2 = 0.240964$, $\pi_3 = 0.361446$, $\pi_4 = 0.072289$.


c. The probability that Imad loses his license is $\pi_4 = 0.072289$.
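
A quick numerical check of parts b and c (numpy assumed, same approach as for the weather
chain):

```python
import numpy as np

# Imad's ticket chain: states 0-4 = number of accumulated tickets.
P = np.array([[0.5, 0.5, 0,   0,   0  ],
              [0,   0.6, 0.4, 0,   0  ],
              [0,   0,   0.7, 0.3, 0  ],
              [0,   0,   0,   0.8, 0.2],
              [1,   0,   0,   0,   0  ]])

A = np.vstack([P.T - np.eye(5), np.ones(5)])
b = np.zeros(6); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)       # approx. [0.1446 0.1807 0.2410 0.3614 0.0723]
print(pi[4])    # probability that Imad loses his license, approx. 0.0723
```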

2) A mouse is trapped in the 3×3 maze shown below. The mouse keeps moving from one cell
to another. When faced with several options, the mouse chooses the next cell with equal
probability. The arrow in the top-right cell indicates a one-way door that can be used to
enter the cell but not to exit it.

The maze layout (cells numbered 0 to 8):
0 1 2
3 4 5
6 7 8

a. Model the problem as a Markov chain. That is, identify the states and transition
probabilities.
b. Identify the communicating classes.
c. Identify all recurrent states (if any).
d. Identify all transient states (if any).
e. Identify all absorbing states (if any).
f. What is the probability of reaching the cheese in 4 steps if the mouse starts at the top
left corner?

Solution
a. We have 9 states, each corresponding to a cell: 0, 1, …, 8.
The transition probabilities, with rows and columns ordered 0 to 8 (entries determined by
the maze's walls and one-way door), are:
$$P = \begin{pmatrix} 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1/3 & 0 & 0 & 0 & 1/3 & 0 & 1/3 & 0 & 0 \\ 0 & 0 & 0 & 1/3 & 0 & 1/3 & 0 & 1/3 & 0 \\ 0 & 0 & 1/3 & 0 & 1/3 & 0 & 0 & 0 & 1/3 \\ 0 & 0 & 0 & 1/2 & 0 & 0 & 0 & 1/2 & 0 \\ 0 & 0 & 0 & 0 & 1/3 & 0 & 1/3 & 0 & 1/3 \\ 0 & 0 & 0 & 0 & 0 & 1/2 & 0 & 1/2 & 0 \end{pmatrix}$$

b. The communicating classes are {0, 3, 4, 5, 6, 7, 8} and {1, 2}.

c. Recurrent states: 1 and 2.
d. Transient states: 0, 3, 4, 5, 6, 7, 8.
e. There are no absorbing states.
f. We need to compute $P_{08}^{(4)}$.
The first row of P is (0 0 0 1 0 0 0 0 0); multiplying repeatedly by P gives:
First row of $P^{(2)}$: (1/3, 0, 0, 0, 1/3, 0, 1/3, 0, 0).
First row of $P^{(3)}$: (0, 0, 0, 11/18, 0, 1/9, 0, 5/18, 0).
$$P_{08}^{(4)} = \text{(first row of } P^{(3)}\text{)} \times \text{(last column of } P\text{)} = \frac{1}{9} \cdot \frac{1}{3} + \frac{5}{18} \cdot \frac{1}{3} = \frac{7}{54}$$
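
A numerical check of part f, using the transition matrix from part a (numpy assumed):

```python
import numpy as np

# Mouse maze chain: build P row by row (cells 0-8, layout as in part a).
P = np.zeros((9, 9))
P[0, 3] = 1
P[1, 2] = 1
P[2, 1] = 1
P[3, [0, 4, 6]] = 1/3
P[4, [3, 5, 7]] = 1/3
P[5, [2, 4, 8]] = 1/3
P[6, [3, 7]] = 1/2
P[7, [4, 6, 8]] = 1/3
P[8, [5, 7]] = 1/2

P4 = np.linalg.matrix_power(P, 4)
print(P4[0, 8], 7/54)   # both approx. 0.1296
```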

