Lecture 3 Markov Chain
Lesson 1
Part III
• It represents the likelihood of the system changing state from one time period to the
next.
• Since the probability of moving to a state depends on the preceding state, a
transition probability is a conditional probability.
Transition Matrix
• A Transition Matrix, also known as a stochastic or probability matrix,
is a square (n x n) matrix representing the transition probabilities of a
stochastic system.
• It is used to predict the movement of the system from one state to the
next state.
Matrix of Transition Probabilities
• Each time a new state is reached, the system is said to have
stepped or incremented one step ahead. Each step represents a time
period or condition which could result in another possible state. The
symbol n is used to indicate the number of steps or increments; for
example, n = 0 represents the initial state. Let us define the
following notation for formulating the matrix of transition
probabilities:
• Si = state i of a system (or process); i = 1, 2, 3, ..., m
• Pij = conditional probability of moving from state Si to state Sj at the
next step (next time period), i.e. Prob(Sj | Si).
• All conditional one-step state probabilities can be represented as the
elements of a square matrix, called the matrix of transition probabilities, as
follows:
                                   Succeeding State
                                 S1    S2   ......   Sm
                          S1  [ P11   P12  ......  P1m ]
P = [Pij]m x m =          S2  [ P21   P22  ......  P2m ]
       (Initial State)    ...  [ ...    ...   ......  ...   ]
                          Sm  [ Pm1   Pm2  ......  Pmm ]
Each row of P is a probability vector. The diagonal elements represent retention,
while the off-diagonal elements represent the gains and losses between states.
[Diagram: transition probability matrix with initial states A, B, C as rows and
transition states A, B, C as columns]
How would you draw a transition
probability matrix (TPM)?
There are three warehouses A, B and C, and a certain pattern is observed in
unloading material at these warehouses. A truck which unloads material at
warehouse A will always unload the remaining material at warehouse B. A
truck which unloads material at warehouse B will always unload the
remaining material at warehouse C. However, a truck which unloads material
at warehouse C is equally likely to unload the remaining material at either
of the other warehouses (A or B). The initial probability distribution over
the three states A, B and C is [0.3, 0.4, 0.3].
i) Draw the transition matrix for the truck movement.
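The unloading rules fully determine the matrix, so the answer can be checked numerically. A minimal sketch in plain Python, assuming state order A, B, C (the matrix below is derived from the rules in the text, not given explicitly there):

```python
# Transition matrix derived from the unloading rules:
# A -> B with certainty, B -> C with certainty,
# C -> A or C -> B with equal probability 0.5.
P = [
    [0.0, 1.0, 0.0],  # from A
    [0.0, 0.0, 1.0],  # from B
    [0.5, 0.5, 0.0],  # from C
]
R0 = [0.3, 0.4, 0.3]  # initial probability distribution over A, B, C

def step(row, matrix):
    """One period: multiply a row vector of state probabilities by the matrix."""
    n = len(matrix)
    return [sum(row[i] * matrix[i][j] for i in range(n)) for j in range(n)]

R1 = step(R0, P)
print(R1)  # distribution over A, B, C after one period
```

Each row of P sums to 1, as required of a transition matrix.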
Transition Matrix
Let the present market shares of three telecom brands, Airtel, Idea and Jio
(states A, B and C), be 60%, 30% and 10% respectively. The transition matrix,
based on the shifting pattern for the year, is as follows.
Transition diagram
To find the State Probabilities for kth Period
of time
• One purpose of a Markov chain is to predict the future. If P1, P2, ..., Pm represent the
probabilities of the various states (state probabilities) in the initial period n = 0, we can
represent them by the row matrix
for n = 0:  R0 = [P1, P2, ..., Pm]
• The state probabilities for the next period (n = 1) are then obtained as follows. For
convenience, let R1 represent the state probabilities at time n = 1. After one
execution of the experiment it can be written, in row-matrix form, as
for n = 1:  R1 = R0 x P
• To compute the vector of state probabilities at any later time, keep multiplying by
the transition matrix P, that is,
for n = 2:  R2 = R1 x P = R0 x P^2
for n = 3:  R3 = R2 x P = R0 x P^3
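The recursion Rk = Rk-1 x P can be coded directly; a short sketch in plain Python (the two-state matrix and starting vector below are purely illustrative):

```python
def step(row, P):
    """Multiply a row vector of state probabilities by transition matrix P."""
    n = len(P)
    return [sum(row[i] * P[i][j] for i in range(n)) for j in range(n)]

def state_probabilities(R0, P, k):
    """Return Rk = R0 * P^k by applying the one-step recursion k times."""
    R = R0
    for _ in range(k):
        R = step(R, P)
    return R

P = [[0.9, 0.1],   # hypothetical two-state transition matrix
     [0.2, 0.8]]
R0 = [1.0, 0.0]    # system starts in state 1 with certainty
R2 = state_probabilities(R0, P, 2)
print(R2)  # R2 = R0 * P^2
```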
To find the steady-state probabilities [a b], solve [a b] = [a b] x P, where
P = [ 0.70  0.30 ]
    [ 0.90  0.10 ]
[ a b ] = [ 0.70a + 0.90b   0.30a + 0.10b ]
a = 0.70a + 0.90b --------eq. 1 and
b = 0.30a + 0.10b --------eq. 2
From eq. 1: 0.30a = 0.90b, so
a = 3b
We have a + b = 1; substituting the value of a gives 3b + b = 1, so
4b = 1, b = 1/4 = 0.25
a = 1 - 1/4 = 3/4 = 0.75
In the long run, the student is expected to be late 25%
of the time.
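This answer can be checked numerically by iterating the chain; a minimal sketch in plain Python, using the transition matrix from the equations above (the labels state 0 = on time, state 1 = late are an assumption):

```python
# Transition matrix from the worked example:
# rows are the current state, columns the next state.
P = [[0.70, 0.30],
     [0.90, 0.10]]

R = [0.5, 0.5]  # any starting distribution converges to the same limit
for _ in range(50):
    R = [sum(R[i] * P[i][j] for i in range(2)) for j in range(2)]

print(R)  # approaches the steady state [0.75, 0.25]
```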
Case II - Steady-state probabilities
• Steady-state probability: if transitions from one state to another
continue indefinitely, the system becomes stable and the state probabilities
tend to remain constant.
• This is the steady-state (equilibrium) condition.
Symbolically, Rk = Rk-1
and since Rk = Rk-1 x P, at steady state
Rk = Rk x P
• If SA and SB are the steady-state probabilities,
then [SA SB] = [SA SB] x P
Also SA + SB = 1
Steps of constructing a Matrix of
Transition Probabilities
STEP 1: To determine the retention probabilities, divide the number of customers
retained for the period under review by the number of customers at the beginning of
the period.
STEP 2: a) For those customers who switch brands, show the gains and losses among the
brands to complete the matrix of transition.
b) To convert the customer switching of brands so that all gains and losses
take the form of transition probabilities, divide the number of customers that each
entity has gained (or lost) by the original number of customers it served.
STEP 3: In a matrix of transition probabilities, the retentions (as calculated in step 1) are
shown as the values on the main diagonal. The rows of the matrix show the retention
and loss of customers, while the columns show the retention and gain of customers.
Steady State Markov Chain (Long-run Probabilities)
• The idea of a steady-state distribution is that we have reached (or
are converging to) a point in the process where the distribution will no
longer change.
• As the time period heads towards infinity, a Markov chain's state
vector stabilises. If we keep multiplying by the transition matrix, the
initial state vector becomes less important and the conditional
(transition) probabilities become more dominant in the final answer,
i.e. in the long run.
Steady State Condition
If the matrix of transition probabilities remains constant, that is, no
action is taken by anyone to alter it, a steady state will be arrived at in
due course of time. Steady state implies a state of equilibrium. For
example, the market shares of three newspapers will become steady:
while an exchange of customers still takes place, the market shares
remain stable. In other words, if the present market share is multiplied
by the transition matrix, the resulting market share will be the same.
Steady- State (Equilibrium)
Conditions
• Previously we have seen that, as the number of periods increases, further changes in
the state probabilities become smaller. This means that the state probabilities may become
constant and eventually remain unchanged. At that point, the process reaches a
steady state (or equilibrium) and will remain the same until outside actions change the
transition probabilities. That is, the system becomes independent of time, and thus the
probability of leaving any particular state must equal the probability of entering
that state.
• The Markov chain reaches the steady-state condition only when the following conditions
are met:
(i) The transition matrix elements remain positive from one period to the next. This is
often referred to as the regular property of a Markov chain.
(ii) It is possible to go from one state to another in a finite number of steps, regardless of
the present state. This is often referred to as the ergodic property of a Markov chain.
Some More States of Markov Chains: Recurrent, Absorbing and Transient States
A state i is a transient state if there exists a state j that is reachable from i, but state i is not
reachable from state j. There is some possibility (a non-zero probability) that a process
beginning in a transient state will never return to that state. By contrast, a process
beginning in a recurrent state is guaranteed to return to that state.
Weather Observations:
Suppose that we have the following weather
observations. If it is cloudy today, on 29th August, it will
be rainy tomorrow, i.e. on 30th August, with
probability 0.50, and sunny tomorrow with
probability 0.40. If it is rainy today, it will be sunny
tomorrow with probability 0.10, cloudy tomorrow with
probability 0.30, and rainy tomorrow with
probability 0.60. If it is sunny today, it will be cloudy
tomorrow with probability 0.40 and rainy tomorrow with
probability 0.10.
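These observations can be collected into a transition matrix; a sketch in plain Python (the state order Cloudy, Rainy, Sunny is an assumption, and the missing "stay the same" probabilities are filled in so that each row sums to 1):

```python
# Rows/columns in order: Cloudy, Rainy, Sunny.
# Cloudy -> Rainy 0.50 and Cloudy -> Sunny 0.40, so Cloudy -> Cloudy 0.10.
# Rainy  -> Sunny 0.10, Rainy -> Cloudy 0.30, Rainy -> Rainy 0.60 (given).
# Sunny  -> Cloudy 0.40 and Sunny -> Rainy 0.10, so Sunny -> Sunny 0.50.
P = [[0.10, 0.50, 0.40],
     [0.30, 0.60, 0.10],
     [0.40, 0.10, 0.50]]

for row in P:
    assert abs(sum(row) - 1.0) < 1e-9  # each row is a probability distribution

# If it is cloudy on 29th August, the distribution for 30th August is row 0:
tomorrow = P[0]
print(tomorrow)  # [0.10, 0.50, 0.40]
```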
[Market share in period n] = [Market share in period n-1] x [Transition Matrix]
In general, once a steady state is reached, multiplying a state condition by the transition
probabilities does not change the state condition. That is,
Pn = Pn-1 x P
for any value of n after a steady state is reached.
Step 3: Determine the steady-state condition. The steady-state condition can be determined by the use of
matrix algebra and the solution of the set of simultaneous equations obtained using the equation
given in step 2.
Markov process problem:
Company K, the manufacturer of a breakfast cereal, currently has some 25% of the
market. Data from the previous year indicates that 88% of K's customers remained
loyal that year, but 12% switched to the competition. In addition, 85% of the
competition's customers remained loyal to the competition but 15% of the
competition's customers switched to K. Assuming these trends continue determine
K's share of the market:
• in 2 years; and
• in the long-run.
This problem is an example of a brand switching problem that often arises in the
sale of consumer goods.
In order to solve this problem we make use of Markov chains or Markov processes.
• Observe that, each year, a customer can either be buying K's cereal or
the competition's. Hence we can construct a diagram as shown below
where the two circles represent the two states a customer can be in and
the arcs represent the probability that a customer makes
a transition each year between states. Note the circular arcs indicating
a "transition" from one state to the same state. This diagram is known
as the state-transition diagram (and note that all the arcs in that
diagram are directed arcs).
Data from the previous year indicates that 88% of K's customers
remained loyal that year, but 12% switched to the competition. In
addition, 85% of the competition's customers remained loyal to the
competition but 15% of the competition's customers switched to K.
Example
• There are three factories in a country producing scooters; let the manufacturers
of these factories be A, B and C respectively. It has been observed that during the
previous month manufacturer A sold a total of 100 scooters,
manufacturer B sold a total of 200 scooters, and manufacturer C sold 400 scooters.
It is known to all manufacturers that customers do not always purchase a new
scooter from the same producer who manufactured their previous scooter, because
of advertising, dissatisfaction with service and other reasons. All manufacturers
maintain records of the number of their customers and the factory from which they
obtained each new customer. The following table gives the information regarding the
movement of customers from one factory to another, given that this
month manufacturer A sold 120 scooters, manufacturer B sold 203 scooters, and
manufacturer C sold 377 scooters. Further, it is assumed that no new customer
enters the market and no old customer leaves the market.
Previously Owned        New Scooter Made by          Total
Scooter Made by        A       B       C
A                     85       8       7             100
B                     20     160      20             200
C                     15      35     350             400
Total                120     203     377             700
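Following the construction steps given earlier (divide each row of counts by the number of customers at the beginning of the period), the table yields the transition matrix directly; a sketch in plain Python:

```python
# Customer-movement counts from the table
# (rows: previous maker A, B, C; columns: new maker A, B, C).
counts = [[85, 8, 7],
          [20, 160, 20],
          [15, 35, 350]]
previous_sales = [100, 200, 400]  # row totals: last month's customers

# Divide each row by its total to turn counts into transition probabilities.
P = [[c / total for c in row] for row, total in zip(counts, previous_sales)]
print(P[0])  # A retains 0.85, loses 0.08 to B and 0.07 to C

# Check: applying P to last month's sales reproduces this month's figures.
this_month = [sum(previous_sales[i] * P[i][j] for i in range(3))
              for j in range(3)]
print(this_month)  # recovers [120, 203, 377]
```

The retentions (0.85, 0.80, 0.875) sit on the main diagonal, exactly as described in Step 3 of the construction procedure.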