Lec12 PDF
Module - 4
Discrete-time Markov chain
Lecture - 4
Limiting and Stationary Distributions
Good morning. This is module 4 of the stochastic processes video course, and in lecture 4 we are going to discuss the limiting distribution and the stationary distribution. In the last three lectures, we have discussed the time-homogeneous discrete-time Markov chain. In the last lecture, lecture 3, we discussed the concepts and definitions for the classification of states, but we have not yet discussed simple examples for that.
So, in this lecture I am planning to give a few examples for the classification of states; then I am going to give the definition of the limiting distribution, followed by the stationary distribution. Then, using the same examples, I am going to explain how to get the stationary distribution if it exists. If you recall our earlier lecture, lecture 3, we introduced a number of concepts. Through those concepts we can classify the states: a state is either a transient state or a recurrent state, and a recurrent state can be further classified into a positive recurrent state or a null recurrent state. You can find the periodicity of the states, and if the period is one, then the state is an aperiodic state.

If a state is positive recurrent and aperiodic, then that state is an ergodic state. If the one-step transition probability p_ii is equal to 1, then that state is called an absorbing state. We have also discussed the irreducible Markov chain: if the whole state space cannot be partitioned into more than one closed communicating class, then the chain is called an irreducible Markov chain; otherwise it is a reducible Markov chain.
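The partition into closed communicating classes and transient states described here can be checked numerically for a finite chain. The following is my own small sketch (not part of the lecture): it finds the communicating class of each state by mutual reachability, and marks a class closed when no positive-probability arc leaves it. For a finite chain, states in closed classes are positive recurrent and all others are transient.

```python
def reachable(P, i):
    """Set of states reachable from i (including i) along positive-probability arcs."""
    seen, stack = {i}, [i]
    while stack:
        u = stack.pop()
        for v, p in enumerate(P[u]):
            if p > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def classify(P):
    """Return (closed communicating classes, transient states) of a finite DTMC."""
    n = len(P)
    reach = [reachable(P, i) for i in range(n)]
    classes, transient, assigned = [], [], set()
    for i in range(n):
        if i in assigned:
            continue
        cls = sorted(j for j in reach[i] if i in reach[j])  # communicating class of i
        assigned.update(cls)
        # The class is closed when no positive-probability arc leaves it.
        if all(P[j][k] == 0 for j in cls for k in range(n) if k not in cls):
            classes.append(cls)
        else:
            transient.extend(cls)
    return classes, sorted(transient)

# Example 1 below: two states that swap deterministically.
classes, transient = classify([[0.0, 1.0],
                               [1.0, 0.0]])   # one closed class {0, 1}, no transient states
```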
Now I am going to give simple examples, through which we will explain the classification of states. The first example is the simplest one: it has only two states, so the state space contains only the two elements 0 and 1. The one-step transition probability that the system moves from state 0 to state 1 is 1, and the probability that it moves from state 1 to state 0 is also 1. The one-step transition probability matrix can be obtained from the state transition diagram; the matrix and the diagram carry one and the same information.
That is possible because, by seeing the state transition diagram, we can make out that in the first step the system moves from state 0 to 1, and from 1 to 0. Coming back to the same state for the first time takes exactly two steps. Therefore f_00(2) = 1, and by seeing the state transition diagram we can visualize that, since the chain comes back to the same state at exactly the second step, a first return at any later step is not possible. Therefore f_00(n) = 0 for all n greater than or equal to 3.
Now, if you try to find capital F_00, the probability of ever returning to state 0 starting from state 0, that is the summation of f_00(n) for n from 1 to infinity; if you sum it up, that is 1. Since F_00 = 1, you can conclude that state 0 is a recurrent state. Similarly, do the same exercise for state 1: compute f_11(1), f_11(2), and so on for all n, and find the summation.
You will end up with F_11 = 1 as well, so we can conclude that state 1 is also a recurrent state. After finding that the states are recurrent, we can now find out whether they are positive recurrent or null recurrent. For that we have to find the mean recurrence time, or mean passage time: mu_ii, which is the summation of n times f_ii(n), with n varying from 1 to infinity.
Here, with i = 0, the only nonzero term in the sum of n times f_00(n) is the one for n = 2, because f_00(2) = 1. Therefore you get 2 times 1, with all other terms 0, so mu_00 = 2. This is a finite quantity, so you can conclude that state 0 is a positive recurrent state. The same exercise for mu_11 also gives the value 2, so state 1 is also a positive recurrent state.
So, in this finite discrete-time Markov chain you have two states, both states are positive recurrent, and both are communicating states. Therefore you have one class with the two states: the state space is {0, 1} and the closed communicating class is also {0, 1}. You cannot partition the state space into more than one closed communicating class, so we conclude that this Markov chain is an irreducible Markov chain.
This Markov chain is irreducible because the state space has only two elements, both states communicate with each other, and we end up with only one closed communicating class. We can also find the periodicity of the states. You can find the periodicity of state 0 by evaluating d(0), which is the greatest common divisor of all possible numbers of steps in which the system can come back to the same state. If you see the state transition diagram, starting from state 0 the system can come back to the same state either in two steps, or four steps, or six steps, and so on.
We should remember that when finding the periodicity, we count all the numbers of steps in which the system can come back to the same state, not necessarily the first visit; whereas the f_00(n) used to conclude that the state is recurrent counts reaching the state for the first time exactly at step n. That is the difference. So we take the gcd of all the possible numbers of steps in which the system can come back to the same state: it can come back to state 0 in two steps, four steps, six steps, and so on. The gcd is 2, which means the period of state 0 is 2.
Similarly, you can find the period of state 1 by the same exercise; from the diagram you can make out that state 1 also returns in 2, 4, 6, 8 steps and so on, so its period is also 2. Alternatively, we can conclude this because both are communicating states: since the period of state 0 is 2, and state 1 communicates with state 0 (each is accessible from the other), state 1 has the same period. In general, if a class has more than one state, then all the states in that class have the same period.
(Refer Slide Time: 12:31)
Therefore, the period of state 1 is also 2.
That means in this example you have only two states, the chain is irreducible, and both states are positive recurrent with period 2. That is the way, using the classification of states, we come to the conclusion for this particular example.
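The quantities computed above, f_00(n), F_00, and mu_00, can be checked numerically. The sketch below is my own (not from the lecture): it computes first-return probabilities by propagating the distribution while forbidding earlier returns to the target state, then applies it to the two-state chain of this example.

```python
def first_return_probs(P, i, nmax):
    """f_ii(1), ..., f_ii(nmax): first-return probabilities to state i."""
    n = len(P)
    f, dist = [], list(P[i])        # distribution after one step from i
    for _ in range(nmax):
        f.append(dist[i])           # mass making its first return to i now
        dist[i] = 0.0               # kill paths that have already returned
        dist = [sum(dist[k] * P[k][j] for k in range(n)) for j in range(n)]
    return f

# Example 1: the two states swap deterministically.
P = [[0.0, 1.0],
     [1.0, 0.0]]
f = first_return_probs(P, 0, 6)                        # [0, 1, 0, 0, 0, 0]
F00 = sum(f)                                           # ever-return probability
mu00 = sum(n * fn for n, fn in enumerate(f, start=1))  # mean recurrence time
```

As the lecture concludes, F_00 comes out as 1 (recurrent) and mu_00 as 2 (positive recurrent).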
Later we are going to find the limiting distribution and the stationary distribution and so on, but for that we need the classification. Here we can also visualize where the system will be in the long run if it starts from state 0 or state 1. Because there are only two states, by seeing the state transition diagram we can make this out. Suppose the system starts initially in state 0; at every even number of steps it will come back to state 0, so in the long run, according to whether the step number is even or odd, the system will be in one or the other of the states. Similarly, if the system starts from state 1 initially, at every even number of steps it will be back in state 1, and at every odd number of steps it will be in state 0. In the long run the same pattern continues: for even n and odd n, the system will be in one of the two states accordingly.
In the long run the system will always be in one of these two states, because the chain is irreducible: the two states communicate with each other, so in the long run the probability that the system is in each of these states will be some value, and the system will be in one of these two states only. Later, when I give the definition of the limiting distribution, I am going to explain the same example again.
Now we move to the next example, example 2; here I am going to discuss a reducible Markov chain. Again we have only two states: the probability that the system moves from state 0 to state 0 in the next step is 1, and the probability that the system moves from state 1 to state 0 in one step is also 1. So this is the state transition diagram of a time-homogeneous discrete-time Markov chain, and I am going to write the corresponding one-step transition probability matrix for this diagram, or equivalently for this discrete-time Markov chain.
Now we try to classify state 1. If you find f_11(1), the probability that the system makes its first visit back to state 1 in exactly the first step, given that it started in state 1, that is not possible, because with probability 1 it moves to state 0; therefore f_11(1) = 0. If you compute f_11(n) for all subsequent steps, those are also 0, because once the system starts from state 1, in the very next step it goes to state 0 with probability 1 and it never comes back.
Therefore capital F_11, which is the summation of all the f_11(n), is 0. Recall the way we classify a state as recurrent or transient: we said F_ii is either equal to 1 or less than 1, and "less than 1" includes F_ii = 0. Basically our interest is whether, with a proper distribution, the system comes back to the same state with probability one, that is F_ii = 1; everything else we call a transient state, and that includes F_ii = 0. Here, with probability 0, the system comes back to state 1 if it starts from state 1. This is always a conditional probability, and F_11 = 0 implies that state 1 is a transient state.
So, for any state i, F_ii = 1 lets us conclude that the state is a recurrent state, and whenever F_ii lies below 1 (including 0, excluding 1), that state is called a transient state. Since you have only two states, the state space {0, 1} ends up with one absorbing state and one transient state. Therefore the state space is partitioned into one closed communicating class with only one element, and the set of transient states, which also has one element. That is, the state space S is partitioned into the closed communicating class C_1, which consists of the single element 0, and the collection of all transient states, which consists of the single element 1. Here capital T is the notation for the collection of all the transient states in the state space of the DTMC, and C_1 is the first closed communicating class, which has only one element. If a closed communicating class has only one element, then that element is called an absorbing state.
Therefore 0 is an absorbing state and 1 is a transient state. Since C_1 union T equals the state space S, this Markov chain is not an irreducible Markov chain; it is called a reducible Markov chain. The previous example was an irreducible Markov chain, where the two elements formed only one closed communicating class; here you have one closed communicating class with one element, and the transient state 1. In general, a reducible Markov chain can have more than one transient state.
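For this example the transience of state 1 can be seen from the n-step transition probabilities: p_11(n) = 0 for every n, so every f_11(n) = 0 as well. A quick sketch of my own (not from the lecture), using plain matrix powers:

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[1.0, 0.0],   # state 0: absorbing, p_00 = 1
     [1.0, 0.0]]   # state 1: jumps to 0 with probability 1

Pn = P
for _ in range(5):          # Pn ends up as P to the 6th power
    Pn = matmul(Pn, P)
# p_11(n) stays 0 for every n, hence every f_11(n) = 0 and F_11 = 0 < 1:
# state 1 is transient, while state 0 is absorbing.
```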
Now I am moving to the third example, through which I will explain some more concepts. Example 3 has four states: 0, 1, 2, and 3. It is easier to explain through a state transition diagram than through the one-step transition probability matrix, so I am just drawing the state transition diagram for this DTMC. From state 0, the one-step probability of staying at 0 is one-third and of moving to state 1 is two-thirds; the row sum is taken care of, as the probabilities add to 1. From state 1, the probability of moving to state 0 is 1, so that row is taken care of. From state 2, the self-loop has probability one-half and the move from state 2 to state 0 has probability one-half, so this row is also taken care of. From state 3, the self-loop has probability one-half and the move from state 3 to state 2 has probability one-half.
My interest is to classify the states of this Markov chain, whose state space S consists of the four states 0, 1, 2, and 3. We start with state 0. First find f_00(1): the system comes back to state 0 in one step, as a first visit, with probability one-third. Next, f_00(2): to come back to state 0 in exactly two steps as a first visit, the system must go to state 1 from state 0 and come back to state 0 in the next step, so f_00(2) = two-thirds times 1 = two-thirds. Then consider exactly three steps: a first return to state 0 in three steps is not possible. Note that p_00(3) is positive, but f_00(3) is not, because in three steps you cannot make a first visit.
Therefore f_00(3) = 0, and not only that: f_00(n) = 0 for all n greater than or equal to 3. Now I can find capital F_00 by adding all the values: one-third plus two-thirds plus 0 for all further terms, which gives 1. Since F_00 = 1, you can conclude that state 0 is a recurrent state. The same exercise for state 1 gives F_11 = 1 as well. Alternatively, since state 1 communicates with state 0, it must be of the same type; therefore state 1 is also a recurrent state.
So states 0 and 1 are recurrent. Now I move to state 2. If you find f_22(1), coming back to the same state in one step, that probability is one-half. Coming back in exactly two steps is not possible, so f_22(2) = 0, and not only for n = 2: all further steps are also 0. With probability one-half it takes only one step to come back, and with probability one-half it leaves and never comes back at all. Therefore f_22(n) = 0 for all n greater than or equal to 2.
Therefore, if you compute capital F_22, that is one-half plus 0 and so on, so you end up with one-half, which is less than 1. You can conclude that state 2 is a transient state. Not only state 2: if you do the similar exercise for state 3, you end up with F_33 also less than 1, whatever its exact value, so state 3 is also a transient state. We find the periodicity only for the recurrent states, not for the transient states, so now we can find the periodicity of states 0 and 1. Before that, let us find out what type of recurrent states they are, positive recurrent or null recurrent.
If you compute mu_00, that is 1 times one-third, plus 2 times two-thirds, plus 3 times 0, plus 4 times 0, and so on. Summing everything, mu_00 = 1/3 + 4/3 = 5/3, which is a finite quantity, so you can conclude that state 0 is positive recurrent. Similarly, if you calculate mu_11, you also end up with a finite quantity, so you can conclude that both states are positive recurrent states. Here the state space is classified into two positive recurrent states and two transient states; therefore this Markov chain (MC, in short form) is a reducible Markov chain.
This is because the whole state space S is partitioned into one closed communicating class, which consists of the states 0 and 1, and the transient states 2 and 3; therefore it is a reducible Markov chain. You can find the periodicity of the two recurrent states also. If we find d(0), that is the greatest common divisor of all the numbers of steps in which the system can come back, given that it starts from state 0.
It can come back in one step (the self-loop), or in two steps (0 to 1 and back), or in three steps (one self-loop and then the two-step excursion), and in four steps, and so on. It need not be a first visit, so we take the gcd of one step, two steps, three steps, and so on, which is 1.
That means state 0 is an aperiodic state. Whatever we have done for state 0, you can do for state 1 as well; its period is also 1. Therefore states 0 and 1 are positive recurrent and aperiodic, and the other two are transient states. Since states 0 and 1 are positive recurrent as well as aperiodic, these two states are also ergodic states. Later we are going to explain the ergodicity property, and for that you need to understand what an ergodic state is: whenever some states of a Markov chain are positive recurrent and aperiodic, those states are called ergodic states. So later I am going to give the definition of ergodicity and so on.
Now we move to the fourth example, which has infinitely many states. Let me draw the state transition diagram: states 0, 1, 2, and so on, and on the left-hand side the states minus 1, minus 2, and so on. So the state space of this Markov chain has countably infinitely many elements: 0, plus or minus 1, plus or minus 2, and so on. Let me give the transition probabilities: the system moves from state 0 to state 1 with probability p, and from state 0 to state minus 1 with probability 1 minus p.
Looking at the state transition diagram, in the one-step transition probability matrix each row sum is 1, with p lying between 0 and 1. Similarly for all the other states: the system moves one state forward with probability p, and one state backward with probability 1 minus p. This is the way it goes for all the states, 1 minus p backward and p forward, and you have countably infinitely many states.
Now let me take case one, in which p = 0. Suppose p takes the value 0; what happens, and how do we classify the states of this time-homogeneous discrete-time Markov chain? When p = 0 there is no forward arc: the system always moves one state down, with probability one.
So you should be able to visualize the state transition diagram corresponding to p = 0: there are no forward arrows. Whenever the system starts from some state, it keeps moving one state down at every step. You can visualize where the system will be in the long run, whether on the positive side or the negative side: starting from any finite state, it may be in some given state with some positive probability for a finite number of steps, but over infinitely many steps, in the long run, the system drifts to the negative side.
That concerns the limiting distribution, but here we are discussing the classification of the states. If the system starts from any state, it comes back to that state with probability 0. Therefore all the states are transient states. If you take any state, say state 1, and calculate f_11(1), f_11(2), and so on, you end up with capital F_11 less than 1. So state 1 is a transient state, and all other states behave the same way; all the states are transient.
Now consider case two, with p = 1. If p = 1 you have all the forward arcs and no backward arcs, which means that whenever the system starts from any state, it moves forward through the states in subsequent steps with probability 1. In the long run the system will be on the positive side, drifting toward positive infinity.
Therefore, in the long run, it will be in any given finite state with probability 0; at any finite number of steps the system is in some state, and it keeps moving to forward states as the number of steps grows. Here also all the states turn out to be transient states. In these two cases the limiting behaviour differs, one drifting to the left side and the other to the right side, but in both all the states are transient. Our real interest is the third case: p lying in the open interval between 0 and 1.
That means, if you see the previous state transition diagram, you have both the forward arcs and the backward arcs, because p lies in the open interval (0, 1), and therefore 1 minus p does also. Whenever the system starts from any state, it can come back to the same state only in an even number of steps. Visualize state 1: it can come back to state 1 not in an odd number of steps, but in an even number of steps; for instance, the system moves to state 0 in the first step and comes back to state 1 in the second step.
(Refer Slide Time: 38:17)
Similarly, the system could have moved from 1 to 2 and then come back to state 1 in the second step. So in two steps it comes back, either by going to state 0 or by going to state 2. A return in four steps is also possible; it need not be a first visit, so the walk can make two two-step loops on the left side, two on the right side, or go two steps forward and then come back.
For the return probabilities, however, the value of p matters: it is a classical fact that capital F_ii = 1 for every state exactly when p = 1/2, the symmetric walk; for p not equal to 1/2 the walk drifts, and every state is transient. In the symmetric case, then, all the states are recurrent states. If you find the periodicity of any state, in the way I discussed, it is the greatest common divisor of all n such that p_ii(n) > 0, and this is possible only for even numbers of steps: the system can come back to the same state in two steps, four steps, six steps, and so on. Therefore the gcd is 2 for this particular Markov chain, so the period is 2.
Now our interest is whether the recurrent states are positive recurrent or null recurrent. For that we need the value of p, because without it we cannot evaluate mu_ii, the sum of n times f_ii(n). In the recurrent case p = 1/2, this mean recurrence time turns out to be infinite, so the states are recurrent but null recurrent, not positive recurrent.
The state space is {0, plus or minus 1, plus or minus 2, ...}, and all the states communicate with each other; therefore you end up with only one communicating class, which is the same as the state space, and so this is an irreducible Markov chain.
The states are recurrent only in the symmetric case p = 1/2, and then null recurrent; otherwise they are transient. But in every case this is an irreducible Markov chain, and each state has period 2. Since the period is 2, the states cannot be ergodic: for an ergodic state you need positive recurrence as well as aperiodicity, and since the period is 2 we conclude that these are not ergodic states.
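The period-2 behaviour holds for every p in (0, 1), and it is easy to see by simulation. The sketch below is my own (not from the lecture): it records the steps at which the walk is back at its starting point and confirms they are all even.

```python
import random

random.seed(2)

def return_step_counts(p, nmax, trials, rng=random):
    """Count, over many runs, the steps n <= nmax at which the walk on the
    integers (up with probability p, down with 1 - p) is back at its start."""
    counts = {}
    for _ in range(trials):
        s = 0
        for n in range(1, nmax + 1):
            s += 1 if rng.random() < p else -1
            if s == 0:
                counts[n] = counts.get(n, 0) + 1
    return counts

# Whatever the value of p in (0, 1), the walk can be back at its start only
# after an even number of steps, which is why every state has period 2.
counts = return_step_counts(0.5, 20, 5000)
```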
(Refer Slide Time: 43:18)
Moving to the next example, example 5, which has the finite state space of states 0, 1, 2, 3, 4. The one-step transition probabilities: from state 0, one-third (the self-loop) and two-thirds to state 1; from state 1, with probability one it moves to state 2; from state 2, with probability one it moves to state 0; from state 3, with probability one-half it goes to state 4 and with probability one-half it goes to state 1; from state 4, with probability one-half it goes to state 3 and with probability one-half it goes to state 2. The way I have drawn the state transition diagram, each row sum is 1, so you can equivalently write down the one-step transition probability matrix.
So here I have only the state transition diagram for this DTMC. From this diagram, by calculating the f_ii(n) and the capital F_ii, you can conclude which states are recurrent and which are transient, and then whether the recurrent ones are positive recurrent or null recurrent. But whenever the Markov chain is finite, we can conclude from the diagram, without doing the calculation, which states are positive recurrent and which are transient. That is what I am going to do, but you can carry out the same exercise and get the result as well. Looking at the arcs: states 3 and 4 have outgoing arcs only toward states 1 and 2, whereas 0, 1, and 2 form a loop, and state 0 has a self-loop with probability one-third.
Sometimes, if the probabilities of the outgoing arcs do not sum to 1, you can make out that the self-loop has probability 1 minus the sum of the outgoing arcs. That is the default convention, but you should always draw the correct state transition diagram: if a state has some positive self-loop probability, you should draw the self-loop with that probability. Now you can make out that states 0, 1, and 2 form a loop, which means that if the system starts from state 0, 1, or 2, it stays within these three states for any number of steps. Even in the long run, the system will be in one of these three states only.
So these three states communicate with each other, but not with states 3 and 4: state 1 is accessible from state 3, but state 3 is not accessible from state 1, so 1 and 3 are not communicating states. Similarly, 2 and 4 are not communicating states, because accessibility holds in one direction only. Therefore you can form the set {0, 1, 2}; you cannot include any more states, and this set satisfies the properties of being closed as well as communicating. So this set is called a closed communicating class.
All three of these states communicate with each other, and if you compute capital F_00, capital F_11, capital F_22, and so on, you will come to the conclusion that those values are 1, so all three states are positive recurrent states. Whereas if the system starts from state 3 or 4, it may move back and forth between them, each arc having probability one-half; but with probability one-half at each step it leaves, going to state 2 via state 4 or to state 1 via state 3, and once it goes away from states 3 and 4 it never comes back. Therefore states 3 and 4 form the transient states.
Even though states 3 and 4 communicate with each other, they are transient states, because F_33 and F_44 are less than 1. If you find the periodicity of any one state in a class, it holds for all the other states of the same class. So find the periodicity of state 0: d(0) is the greatest common divisor of all the numbers of steps in which the system can come back to the same state. It can take one step (the self-loop), or only three steps, going around the loop 0, 1, 2 without using the self-loop.
So it can make one step, or three steps, or four steps, or five steps; five steps means two self-loop steps, then going from 0 to 1, 1 to 2, and 2 to 0, and so on. Therefore the greatest common divisor is 1. Since the period of state 0 is 1, it is an aperiodic state, and all the states in the class are aperiodic states.
Since these states are positive recurrent and aperiodic, they are also called ergodic states. The state space S is the union of the closed communicating class and the transient states; therefore this is a reducible Markov chain. So in this example we come to the conclusion that we have five states and the chain is a reducible Markov chain, with the closed communicating class consisting of the elements 0, 1, and 2, and the transient states 3 and 4.
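The aperiodicity of state 0 can be verified from matrix powers of P. The following is my own sketch (not from the lecture); note that it assumes the arc from state 4 that was cut off in the transcript is 4 to 2 with probability one-half, as the surrounding discussion suggests.

```python
from functools import reduce
from math import gcd

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Example 5's chain (rows: states 0..4); the truncated arc from state 4 is
# assumed here to be 4 -> 2 with probability one-half.
P = [[1/3, 2/3, 0.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0, 0.0],
     [1.0, 0.0, 0.0, 0.0, 0.0],
     [0.0, 0.5, 0.0, 0.0, 0.5],
     [0.0, 0.0, 0.5, 0.5, 0.0]]

# d(0) = gcd{ n : p_00(n) > 0 }; the self-loop gives n = 1, so d(0) = 1
# and the closed class {0, 1, 2} is aperiodic.
steps, Pn = [], P
for n in range(1, 9):
    if Pn[0][0] > 0:
        steps.append(n)
    Pn = matmul(Pn, P)
d0 = reduce(gcd, steps)
```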
We move to the next example, example 6. In this example I have six states, and the one-step transition probability values are as follows: from state 0, one-fourth, and with probability three-fourths it goes to state 4; from state 1 the two probabilities are one-half and one-half; from state 2, with probability three-fourths it goes to state 5 and with probability one-fourth it goes to state 4; from state 3 there is no outgoing arc; from state 4, with probability one-third it goes to state 2 and with probability two-thirds it goes to state 3. Since there is no outgoing arc from states 5 and 3, you can make out that the self-loop has probability 1, or you can draw it explicitly with probability 1.
Now we can go about classifying these states. Because of the self loop with probability 1 at states 3 and 5, you can directly see that state 3 is an absorbing state, and it forms one class C_1, a closed communicating class with only one element, state 3. Similarly, there is a second class C_2, which has only one element, state 5; that is also an absorbing state. Next we can classify the states 0, 1, 2 and 4, using the fact that this is a finite-state discrete-time Markov chain. If the system starts from state 3 or 5, it will be in state 3 or 5 forever, because both are absorbing states.
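This classification can be sketched programmatically. In the matrix below, the arcs 2 → 5 (3/4), 2 → 4 (1/4), 4 → 2 (1/3), 4 → 3 (2/3) and the self loops at 3 and 5 follow the lecture; the exact targets of the arcs out of states 0 and 1 are not fully specified in the narration, so the choices 0 → 1, 0 → 4 and 1 → 0, 1 → 2 are assumptions made for illustration.

```python
import numpy as np

P = np.array([
    [0,   1/4, 0,   0,   3/4, 0  ],  # state 0 (targets assumed)
    [1/2, 0,   1/2, 0,   0,   0  ],  # state 1 (targets assumed)
    [0,   0,   0,   0,   1/4, 3/4],  # state 2
    [0,   0,   0,   1,   0,   0  ],  # state 3: absorbing
    [0,   0,   1/3, 2/3, 0,   0  ],  # state 4
    [0,   0,   0,   0,   0,   1  ],  # state 5: absorbing
])

# Absorbing states are exactly those with a probability-1 self loop.
absorbing = [i for i in range(len(P)) if P[i, i] == 1.0]

def reaches_absorbing(P, i, absorbing):
    """Can state i reach some other, absorbing state? (DFS on the arcs)"""
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for t in np.nonzero(P[s])[0]:
            if t in absorbing and t != s:
                return True
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return False

# In a finite chain, a non-absorbing state that can reach an absorbing
# state must be transient: once absorbed, the system never returns.
transient = [i for i in range(len(P))
             if i not in absorbing and reaches_absorbing(P, i, absorbing)]
print(absorbing)   # [3, 5]
print(transient)   # [0, 1, 2, 4]
```

This matches the argument in the lecture: the finiteness of the chain plus the existence of a path into an absorbing state is enough, without computing f_{00}, f_{11}, f_{22} or f_{44} explicitly.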
If the system starts from any state other than 3 or 5, it ultimately comes to state 3 or 5 via the arcs 2 → 5 or 4 → 3, and it never comes back. Therefore, the states 0, 1, 2 and 4 form the collection of transient states, T = {0, 1, 2, 4}. I have not computed f_{00}, f_{11}, f_{22} or f_{44} explicitly; since this is a finite Markov chain and the two states 3 and 5 are absorbing, the conclusion follows directly. Whenever the system starts from state 0, 1, 2 or 4, either it makes a loop among these states or it ultimately lands in state 5 or 3 through those arcs. Therefore, these four states are transient states, and this makes a reducible Markov chain.

Suppose the system starts from state 0 or 1. Then either it goes to state 2 via the arc 1 → 2, or to state 4 via the arc 0 → 4, and after that it may keep roaming between states 2 and 4. But with the positive probabilities three-fourth and two-third it can move to state 5 or state 3, which are absorbing states; therefore the system ultimately lands in state 3 or 5. So this is one type of reducible Markov chain, in which you have transient states and a few absorbing states. Now I am moving to the next example.
(Refer Slide Time: 56:10)
This is another type of reducible Markov chain, example seven. It has finitely many states, n + 1 in all, and the transitions are like this: with probability 1 − p the system goes from state 1 to state 0, and with probability p it goes from state 1 to state 2, with the backward probability again 1 − p, and so on. So all the forward arcs carry probability p and all the backward arcs carry probability 1 − p, except that state 0 has no backward arc and state n has no forward arc. Therefore, states 0 and n are going to be absorbing states.
So here p can lie between… Later I am going to explain the same DTMC for the same problem. Here we have n + 1 states, with states 0 and n being absorbing states. We usually list absorbing states individually, because each one forms a closed communicating class by itself. Here I have written that both these states are absorbing, and all the other states, 1, 2, …, n − 1, form transient states. If the system starts from one of the states 1 to n − 1, it can keep moving among these states for a number of steps, but with the positive probability 1 − p it can reach state 0, and with the positive probability p it can reach state n, out of this group of transient states.
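This forward/backward structure is the classical gambler's-ruin chain, and the absorption behaviour described above can be checked by simulation. The values n = 5, p = 0.4 and the starting state below are illustrative assumptions; the closed-form absorption probability used as a cross-check is the standard gambler's-ruin formula for p ≠ 1/2.

```python
import random

def run_until_absorbed(n, p, start, rng):
    """Walk on states 0..n: forward w.p. p, backward w.p. 1 - p,
    until one of the absorbing states 0 or n is reached."""
    state = start
    while 0 < state < n:
        state += 1 if rng.random() < p else -1
    return state

rng = random.Random(42)                      # fixed seed for reproducibility
n, p, start, trials = 5, 0.4, 2, 20000       # illustrative parameters
hits_n = sum(run_until_absorbed(n, p, start, rng) == n for _ in range(trials))

# Exact probability of absorption at n starting from state i (p != 1/2):
#   (1 - r^i) / (1 - r^n),  where r = (1 - p) / p
r = (1 - p) / p
exact = (1 - r**start) / (1 - r**n)
print(round(hits_n / trials, 3), round(exact, 3))
```

The simulated fraction of paths absorbed at n should agree with the exact value up to Monte Carlo error, confirming that every transient state is eventually captured by one of the two absorbing states.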
So, once the system comes to state 0 or n, it stays there forever. Therefore, we can come to the conclusion that this is a reducible Markov chain of the type with transient states and a few absorbing states. With this, let me stop the examples on classification of states. That means I have given seven different examples, covering finite Markov chains as well as infinite Markov chains. A few of the Markov chains are reducible and a few are irreducible, and among the reducible Markov chains we have seen two or three types, which I have also explained. In the next class we will discuss the limiting distribution and the stationary distribution.
Thanks.