Lecture 3 Markov Chain

This document provides an introduction to Markov Chains, covering key concepts such as state probabilities, transition probabilities, and transition matrices. It explains how Markov analysis can be used to predict future states based on current conditions and outlines the assumptions and properties of Markov processes. Additionally, it includes examples and applications, such as predicting market shares and analyzing consumer behaviour.

Markov Chain

Lesson 1
Part III

Only for private use. Not for circulation.


Learnings:
• Introduction: Markov chain concepts
• Finding the state probabilities for the kth period
• Steady-state probabilities
Introduction to Markov Chain
Uses:
• Describing the behaviour of consumers in terms of their brand loyalty and switching pattern
• Describing whether a machine used to manufacture a product is in a working or non-working state at any point in time
Introduction:
• Markov analysis is used to analyse the state of a system, i.e. to describe its position (or condition) at any instant of time.
• Such knowledge is useful for analysing the current state of the system in order to predict its future state.
• The Markov chain is formed by repetition of an experiment. It is discrete in time because the system is examined at regular intervals, such as every hour, day or year, and the transition probabilities are assumed constant over time.
• There is a finite set of states and the system can be in only one state at a given time.
Concepts - Markov Chain

Markov chain - A sequence of events in which the outcome of an event depends only on the immediately preceding event, and not on any other prior events.
• A random process in which the occurrence of a future state depends on the immediately preceding state, and only on it, is known as a Markov chain or Markov process.
• A Markov process is a series of experiments performed at regular time intervals which always have the same set of outcomes. These outcomes are called states, and the outcome of the current experiment is called the current state.
States: the system consists of several possible states.
E.g.
• Attendance - Absent and Present
• Machine condition - Working, Fairly working, Not working
Note: States are finite in number, exhaustive (including all possibilities) and mutually exclusive (mutually exclusive events are events that cannot occur at the same time).
Assumptions in Markov Chain
• Finite number of states
• States are mutually exclusive (the system is always in exactly one state at a time)
• States are collectively exhaustive (all states are known)
• The probability of moving from one state to another is constant over time
Transition Probability
The probability of moving from one state to another, or remaining in the same state, in a single time period is called the transition probability.
Mathematically,
Pij = P(next state is Sj at t = 1 | initial state is Si at t = 0)
where i = initial state and j = next state.
OR
The probability of the system changing from state i to state j is called the transition probability.
• It represents the likelihood of the system changing states from one time period to the next.
• Since the probability of moving to the next state depends on the preceding state, a transition probability is a conditional probability.
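As a small illustration of transition probabilities as conditional probabilities, the sketch below (in Python, with a made-up observation sequence of machine states) estimates each Pij by counting how often state j follows state i:

```python
# Illustrative only: the observation sequence below is made up.
from collections import Counter

observations = ["Working", "Working", "Not working", "Working", "Working",
                "Working", "Not working", "Not working", "Working"]

pair_counts = Counter(zip(observations, observations[1:]))  # counts of i -> j transitions
state_counts = Counter(observations[:-1])                   # how often the chain was in state i

# Pij = P(next state is Sj | current state is Si)
for (i, j), n in sorted(pair_counts.items()):
    print(f"P({j} | {i}) = {n / state_counts[i]:.2f}")
```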
Transition Matrix
• A transition matrix, also known as a stochastic or probability matrix, is a square (n x n) matrix representing the transition probabilities of a stochastic system.
• It is used to predict the movement of the system from one state to the next.
Matrix of Transition Probabilities
• Each time a new state is reached, the system is said to have stepped or incremented one step ahead. Each step represents a time period or condition which could result in another possible state. The symbol n is used to indicate the number of steps or increments; for example, n = 0 represents the initial state. Let us define the following notation for formulating the matrix of transition probabilities:
• Si = state i (initial state) of the system (or process); i = 1, 2, 3, ..., m
• Pij = conditional probability of moving from state Si to state Sj at the next step (next time period), i.e. P(Sj | Si).
• All conditional one-step state probabilities can be represented as elements of a square matrix, called the matrix of transition probabilities, as follows:
                          Succeeding State
                       S1     S2    ...    Sm
                  S1   P11    P12   ...    P1m
P = [Pij]m×m =    S2   P21    P22   ...    P2m     (each row is a probability vector)
  (Initial       ...
   State)         Sm   Pm1    Pm2   ...    Pmm

(Each row shows retention and loss; each column shows retention and gain.)
• In the transition matrix of a Markov chain, Pij = 0 means that no transition occurs from state i to state j, and Pij = 1 means that when the system is in state i it can move only to state j at the next transition. Each row of the transition matrix represents a one-step transition probability distribution over all states, which means each row sums to 1.
Assumptions of TPM
1. Each element of the TPM is a probability and hence non-negative
2. The TPM is always a square matrix
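A minimal sketch (assuming numpy is available) of checking these two assumptions plus the row-sum property for a candidate TPM; the 3x3 matrix used here is purely illustrative:

```python
import numpy as np

def is_valid_tpm(P, tol=1e-9):
    """Check that P is square, every entry is a probability, and rows sum to 1."""
    P = np.asarray(P, dtype=float)
    square = P.ndim == 2 and P.shape[0] == P.shape[1]
    entries_ok = np.all(P >= 0) and np.all(P <= 1)
    rows_ok = np.allclose(P.sum(axis=1), 1.0, atol=tol)
    return square and entries_ok and rows_ok

P = [[0.7, 0.2, 0.1],        # illustrative 3-state TPM
     [0.2, 0.6, 0.2],
     [0.1, 0.1, 0.8]]
print(is_valid_tpm(P))       # True
```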
Example
There are three warehouses A, B and C. A certain pattern is observed in unloading material at these warehouses. A truck which unloads material at warehouse A will always unload the remaining material at warehouse B. A truck which unloads material at warehouse B will always unload the remaining material at warehouse C. However, a truck which unloads material at warehouse C is equally likely to unload the remaining material at either of the other warehouses (A or B). If the initial probability distribution over the three states A, B and C is 0.3, 0.4 and 0.3 respectively, find:
i) the transition matrix.
Example
How would you draw the transition probability matrix (TPM)?

Rows represent the initial state (where the truck unloads now) and columns the transition state (where it unloads next); each entry is a transition probability.

From the problem statement: a truck at A always goes to B next, a truck at B always goes to C next, and a truck at C is equally likely to go to A or to B.

                     A     B     C
                A    0     1     0
Matrix P =      B    0     0     1
                C    0.5   0.5   0

Initial probability distribution over (A, B, C): [0.3, 0.4, 0.3]
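The same TPM and initial distribution can be written down in a short sketch (assuming numpy); the extra line computing the distribution after one more unloading is only for illustration and is not part of the question:

```python
import numpy as np

states = ["A", "B", "C"]
P = np.array([[0.0, 1.0, 0.0],    # from A: always unload next at B
              [0.0, 0.0, 1.0],    # from B: always unload next at C
              [0.5, 0.5, 0.0]])   # from C: A or B with equal probability
R0 = np.array([0.3, 0.4, 0.3])    # initial probability distribution over A, B, C

R1 = R0 @ P                       # distribution after one more unloading
print(dict(zip(states, R1.round(2).tolist())))   # {'A': 0.15, 'B': 0.45, 'C': 0.4}
```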
Transition Matrix
The present market share of three brands of telecom companies Airtel, Idea, Jio be
respectively A , B, C are 60%,30%,10%. The transition matrix on the basis of shifting
pattern for year is
Transition diagram
To find the State Probabilities for the kth Period of Time
• One purpose of a Markov chain is to predict the future. Thus if P1, P2, ..., Pm represent the probabilities of the various states (the state probabilities) in the initial period n = 0, we can represent them by the row matrix
    for n = 0:   R0 = [P1, P2, ..., Pm]
• The state probabilities for the next period (n = 1) are then obtained as follows. For convenience, let R1 represent the state probabilities at time n = 1. After one execution of the experiment they can be written as the row matrix
    for n = 1:   R1 = R0 × P
• To compute the vector of state probabilities at any later time, multiply the previous state vector by the transition matrix P, that is,
    for n = 2:   R2 = R1 × P = R0 × P^2
    for n = 3:   R3 = R2 × P = R0 × P^3
    ...
    for the kth period, n = k:   Rk = Rk-1 × P = R0 × P^k
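A minimal sketch of the relation Rk = R0 × P^k (assuming numpy), using a hypothetical two-state chain; it checks that multiplying step by step and raising P to the power k give the same answer:

```python
import numpy as np

P = np.array([[0.9, 0.1],      # hypothetical 2-state transition matrix
              [0.4, 0.6]])
R0 = np.array([0.5, 0.5])      # hypothetical initial state probabilities

R1 = R0 @ P                    # state probabilities after 1 period
R2 = R1 @ P                    # state probabilities after 2 periods
print(R2.round(3))                                      # step-by-step: [0.725 0.275]
print((R0 @ np.linalg.matrix_power(P, 2)).round(3))     # same result via R0 · P^2
```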


Case I - To find the probabilities for the kth period of time
• Q. The present market shares of the three telecom brands Airtel, Idea and Jio (A, B and C) are 60%, 30% and 10% respectively. The transition matrix based on the yearly shifting pattern is as given above.

Find the probabilities (market shares) 2 years from now.
Solution
Let a1, a2, a3 be the market shares of the three telecom brands A, B, C in the initial period (n = 0):
R0 = [a1  a2  a3]
R0 = [0.60  0.30  0.10]
The state probabilities for the next period (n = 1 year) are
R1 = R0 × P

R1 = [0.60  0.30  0.10] × | 0.7  0.2  0.1 |
                          | 0.2  0.6  0.2 |
                          | 0.1  0.1  0.8 |

R1 = [(0.42 + 0.06 + 0.01)  (0.12 + 0.18 + 0.01)  (0.06 + 0.06 + 0.08)]

R1 = [0.49  0.31  0.20]
The market shares of the three telecom brands at the end of 1 year are 49%, 31% and 20%.
The state probabilities for the following period (n = 2 years) are
R2 = R1 × P

R2 = [0.49  0.31  0.20] × | 0.7  0.2  0.1 |
                          | 0.2  0.6  0.2 |
                          | 0.1  0.1  0.8 |

R2 = [(0.343 + 0.062 + 0.02)  (0.098 + 0.186 + 0.02)  (0.049 + 0.062 + 0.16)]

R2 = [0.425  0.304  0.271]

The market shares of the three telecom brands at the end of 2 years are 42.5%, 30.4% and 27.1% respectively.
OR
• R2 = R1 × P = R0 × P × P = R0 × P^2

R2 = [0.60  0.30  0.10] × | 0.7  0.2  0.1 |   | 0.7  0.2  0.1 |
                          | 0.2  0.6  0.2 | × | 0.2  0.6  0.2 |
                          | 0.1  0.1  0.8 |   | 0.1  0.1  0.8 |

R2 = [0.425  0.304  0.271]
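The telecom calculation above can be reproduced with a few lines (assuming numpy); the matrix P is the one used in the worked arithmetic above:

```python
import numpy as np

P = np.array([[0.7, 0.2, 0.1],    # rows: from Airtel, Idea, Jio
              [0.2, 0.6, 0.2],
              [0.1, 0.1, 0.8]])
R0 = np.array([0.60, 0.30, 0.10])

R1 = R0 @ P
R2 = R0 @ np.linalg.matrix_power(P, 2)
print(R1.round(3))   # [0.49  0.31  0.2 ]  -> shares after 1 year
print(R2.round(3))   # [0.425 0.304 0.271] -> shares after 2 years
```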
• Question: A student tries to be punctual for classes. If (s)he is late on a day, (s)he is 90% sure to be on time the next day. Similarly, if (s)he is on time, there is a 30% chance that (s)he will be late the next day. How often, in the long run, is (s)he expected to be late for class?
Let a and b be the long-run probabilities of being "on time" and "late" respectively.

[a  b] = [a  b] × P

[a  b] = [a  b] × | 0.70  0.30 |
                  | 0.90  0.10 |

[a  b] = [0.70a + 0.90b   0.30a + 0.10b]
a = 0.70a + 0.90b -------- eq. 1 and
b = 0.30a + 0.10b -------- eq. 2
From eq. 1: 0.30a = 0.90b, so a = 3b
We also have a + b = 1; substituting a = 3b gives 3b + b = 1
4b = 1, so b = 1/4 = 0.25
a = 1 - 1/4 = 3/4 = 0.75
In the long run, the student is expected to be late 25% of the time.
Case II - Steady-state probabilities
• Steady-state probability: if transitions from one state to another continue indefinitely, the system becomes stable and the state probabilities tend to remain constant.
• This is the steady-state (equilibrium) condition.
Symbolically, Rk = Rk-1
Since Rk = Rk-1 × P, this gives
Rk = Rk × P
• If SA and SB are the steady-state probabilities, then

[SA  SB] = [SA  SB] × P

Also SA + SB = 1
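A minimal sketch of solving [S] = [S] × P together with SA + SB = 1 as a linear system (assuming numpy), illustrated with the punctuality chain above, states ordered (on time, late):

```python
import numpy as np

def steady_state(P):
    """Solve s = s·P together with sum(s) = 1 for a row vector s."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])   # (P^T - I) s = 0 plus normalisation row
    b = np.zeros(n + 1)
    b[-1] = 1.0
    s, *_ = np.linalg.lstsq(A, b, rcond=None)
    return s

P = [[0.70, 0.30],   # on time -> (on time, late)
     [0.90, 0.10]]   # late    -> (on time, late)
print(steady_state(P).round(2))   # [0.75 0.25]
```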
Steps for constructing a Matrix of Transition Probabilities
STEP 1: To determine the retention probabilities, divide the number of customers retained over the period under review by the number of customers at the beginning of the period.
STEP 2: a) For those customers who switch brands, show the gains and losses among the brands in order to complete the matrix of transitions.
b) To convert the customer switching of brands so that all gains and losses take the form of transition probabilities, divide the number of customers that each entity has gained (or lost) by the original number of customers it served.
STEP 3: In a matrix of transition probabilities, the retentions (as calculated in Step 1) are shown as the values on the main diagonal. The rows of the matrix show the retention and loss of customers, while the columns show the retention and gain of customers.
Steady State Markov Chain (Long-run probabilities)
• The idea of a steady-state distribution is that we have reached (or are converging to) a point in the process where the distribution no longer changes.
• The idea behind a steady-state Markov chain is that, as the time period heads towards infinity, a two-state Markov chain's state vector will stabilise. If we keep multiplying the state vector by the transition matrix, the initial state input becomes less important and the transition probabilities become more dominant in the final answer.
• i.e. in the long run
Steady State Condition
If the matrix of transition probabilities remains constant, that is, no action is taken by anyone to alter it, a steady state will be reached in due course of time. Steady state implies a state of equilibrium. For example, the market shares of three newspapers will become steady: while exchange of customers still takes place, the market shares remain stable. In other words, if the present market share is multiplied by the transition matrix, the resulting market-share vector will be the same.
Steady-State (Equilibrium) Conditions
• Previously we have seen that, as the number of periods increases, further changes in the state probabilities become smaller. This means that the state probabilities may become constant and eventually remain unchanged. At that point the process reaches a steady state (or equilibrium) and will remain the same until outside actions change the transition probabilities. That is, the system becomes independent of time, and thus the probability of leaving any particular state must equal the probability of entering that state.
• The Markov chain reaches the steady-state condition only when the following conditions are met:
(i) The transition matrix elements remain positive from one period to the next. This is often referred to as the regular property of a Markov chain.
(ii) It is possible to go from one state to another in a finite number of steps, regardless of the present state. This is often referred to as the ergodic property of a Markov chain.
Some more states of Markov Chains
Recurrent, Absorbing & Transient States

In the accompanying state-transition diagram:
• 1, 2, 3, 4 are transient states
• 6, 7, 8 are recurrent states
• 5 is a recurrent and absorbing state

A state i is a transient state if there exists a state j (e.g. 5) that is reachable from i (e.g. 3), but state i is not reachable from state j. In that case the probability that the process, once it leaves state i, never returns to it is non-zero.
There is some possibility (a non-zero probability) that a process beginning in a transient state will never return to that state. There is a guarantee that a process beginning in a recurrent state will return to that state.
Weather Observations:
Suppose that we have the following weather observations. If it is cloudy today, 29th August, it will be rainy tomorrow (30th August) with probability 0.50 and sunny tomorrow with probability 0.40. If it is rainy today, it will be sunny tomorrow with probability 0.10, cloudy tomorrow with probability 0.30, and rainy tomorrow with probability 0.60. If it is sunny today, it will be cloudy tomorrow with probability 0.40 and rainy tomorrow with probability 0.10.

What is the probability it will be rainy 2 days from now, i.e. on 31st August, if it is cloudy today?
Solution
• Suppose that we have the following weather observations:
1) If it is cloudy on 29th August, it will be rainy on 30th August with probability 0.50 and sunny on 30th August with probability 0.40.
2) If it is rainy on the 29th, it will be sunny on the 30th with probability 0.10, cloudy with probability 0.30, and rainy with probability 0.60.
3) If it is sunny on the 29th, it will be cloudy on the 30th with probability 0.40 and rainy with probability 0.10.
• Draw a transition state diagram.
Suppose that we have the following weather observations:
1) If it is cloudy on 29th August, it will be rainy on 30th August with probability 0.50 and sunny on 30th August with probability 0.40.
2) If it is rainy on the 29th, it will be sunny on the 30th with probability 0.10, cloudy with probability 0.30, and rainy with probability 0.60.
3) If it is sunny on the 29th, it will be cloudy on the 30th with probability 0.40 and rainy with probability 0.10.
What is the probability it will be rainy 2 days from now, i.e. on 31st August, if it is cloudy today?

We can make a state transition matrix to represent the probabilities of the weather events, where state S1 is Cloudy, S2 is Rainy, and S3 is Sunny. The transition matrix is represented as below:

                           To (30th)
                      S1 (C)  S2 (R)  S3 (S)
From     S1 Cloudy     0.10    0.50    0.40
(29th)   S2 Rainy      0.30    0.60    0.10
         S3 Sunny      0.40    0.10    0.50

Transition matrix from 29th August to 30th August.
Starting from 29th August: since it is cloudy today, the probability of being cloudy is 1 and of being rainy or sunny is 0, so the initial state row vector is

R0 = [1  0  0]

Multiplying by the transition matrix gives the probabilities for 30th August:

R1 = R0 × P = [1  0  0] × | 0.10  0.50  0.40 |  =  [0.1  0.5  0.4]
                          | 0.30  0.60  0.10 |
                          | 0.40  0.10  0.50 |

Multiplying again gives the probabilities for 31st August:

R2 = R1 × P = [0.1  0.5  0.4] × | 0.10  0.50  0.40 |  =  [0.32  0.39  0.29]
                                | 0.30  0.60  0.10 |
                                | 0.40  0.10  0.50 |

So the probability that it will be rainy on 31st August is 0.39.
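The whole weather calculation in a short sketch (assuming numpy), with states ordered (Cloudy, Rainy, Sunny):

```python
import numpy as np

P = np.array([[0.1, 0.5, 0.4],   # from Cloudy
              [0.3, 0.6, 0.1],   # from Rainy
              [0.4, 0.1, 0.5]])  # from Sunny
R0 = np.array([1.0, 0.0, 0.0])   # cloudy today (29th August)

R1 = R0 @ P                      # 30th August
R2 = R1 @ P                      # 31st August
print(R1.round(2))               # [0.1  0.5  0.4 ]
print(R2.round(2))               # [0.32 0.39 0.29] -> P(rainy on 31st) = 0.39
```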
ALTERNATIVELY
The same calculation can be done with column vectors, using the transpose of the transition matrix (columns = "From 29th", rows = "To 30th"):

                        From (29th)
                     S1 (C)  S2 (R)  S3 (S)
To       S1 Cloudy    0.10    0.30    0.40
(30th)   S2 Rainy     0.50    0.60    0.10
         S3 Sunny     0.40    0.10    0.50

On 29th August it is cloudy, so the initial column vector is [1, 0, 0] (transposed).

Probabilities for 30th August:

| 0.1  0.3  0.4 |   | 1 |   | 0.1 |
| 0.5  0.6  0.1 | × | 0 | = | 0.5 |
| 0.4  0.1  0.5 |   | 0 |   | 0.4 |

Probabilities for 31st August:

| 0.1  0.3  0.4 |   | 0.1 |   | 0.32 |
| 0.5  0.6  0.1 | × | 0.5 | = | 0.39 |
| 0.4  0.1  0.5 |   | 0.4 |   | 0.29 |

As before, the probability of rain on 31st August is 0.39.
Procedure for determining the Steady-State Condition
STEP 1: Formulate a state transition matrix. Develop the state transition matrix by first calculating the probabilities from the retentions, gains and losses, in the same manner as explained earlier in this presentation.
STEP 2: Calculate the future probable market share. The market share for any period n is determined by using the following equations:
[Market share in period 2] = [Market share in period 1] [Transition Matrix]
[Market share in period 3] = [Market share in period 2] [Transition Matrix]
...
[Market share in period n] = [Market share in period n-1] [Transition Matrix]
In general, once a steady state is reached, multiplication of a state condition by the transition probabilities does not change the state condition. That is,
pn = pn-1 × P
for any value of n after a steady state is reached.
STEP 3: Determine the steady-state condition. The steady-state condition can be determined by the use of matrix algebra and the solution of a set of simultaneous equations obtained using the equation given in Step 2.
Markov process problem:
Company K, the manufacturer of a breakfast cereal, currently has some 25% of the
market. Data from the previous year indicates that 88% of K's customers remained
loyal that year, but 12% switched to the competition. In addition, 85% of the
competition's customers remained loyal to the competition but 15% of the
competition's customers switched to K. Assuming these trends continue determine
K's share of the market:
• in 2 years; and
• in the long-run.
This problem is an example of a brand switching problem that often arises in the
sale of consumer goods.
In order to solve this problem we make use of Markov chains or Markov processes.
• Observe that, each year, a customer can either be buying K's cereal or
the competition's. Hence we can construct a diagram as shown below
where the two circles represent the two states a customer can be in and
the arcs represent the probability that a customer makes
a transition each year between states. Note the circular arcs indicating
a "transition" from one state to the same state. This diagram is known
as the state-transition diagram (and note that all the arcs in that
diagram are directed arcs).
Data from the previous year indicates that 88% of K's customers
remained loyal that year, but 12% switched to the competition. In
addition, 85% of the competition's customers remained loyal to the
competition but 15% of the competition's customers switched to K.

FIGURE: STATE TRANSITION DIAGRAM
Given that diagram we can construct the transition matrix (usually denoted by the
symbol P) which tells us the probability of making a transition from one state to
another state. Letting:
state 1 = customer buying K's cereal and
state 2 = customer buying competition's cereal
we have the transition matrix P for this problem given by:

                       To state
                       S1      S2
From state   S1    | 0.88   0.12 |    0.88 + 0.12 = 1
             S2    | 0.15   0.85 |    0.15 + 0.85 = 1

Note here that, as a numerical check, the elements of the state vector st should also always sum to one.
Note here that the sum of the elements in each row of the transition matrix is
one. Note also that the transition matrix is such that the rows are "From" and the
columns are "To" in terms of the state transitions.
Now we know that currently K has some 25% of the market. Hence we have the state probabilities as a row matrix representing the initial state of the system:

            K      C
R0 = s1 = [0.25, 0.75], denoted by [x1, x2]

We usually denote this row matrix by s1, indicating the state of the system in the first period (years in this particular example).
• We already know the state of the system in the initial period (R0 represents the initial state probabilities), so the state of the system in year 1, i.e. s2, is given by R1:
R1 = R0 × P = [0.25, 0.75] × | 0.88  0.12 |
                             | 0.15  0.85 |
   = [(0.25)(0.88) + (0.75)(0.15), (0.25)(0.12) + (0.75)(0.85)]
     (row matrix × 1st column, row matrix × 2nd column)
   = [0.3325, 0.6675]          (0.3325 + 0.6675 = 1)
This could be denoted as [x1, x2] (x1 = market share of K, x2 = market share of the competition).
• Note that this result makes intuitive sense, e.g. of the 25% currently
buying K's cereal 88% continue to do so, whilst of the 75% buying the
competitor's cereal 15% change to buy K's cereal - giving a
(fractional) total of (0.25)(0.88) + (0.75)(0.15) = 0.3325 buying K's
cereal.
• Hence in year one 33.25% of the people are buying K's cereal.
• In year two the state of the system, s3, is given by:
R2 = R1 × P
   = [0.3325, 0.6675] × | 0.88  0.12 |
                        | 0.15  0.85 |
   = [0.3325×0.88 + 0.6675×0.15,  0.3325×0.12 + 0.6675×0.85]
   = [0.392725, 0.607275]
This could be denoted as [x1, x2].
• Hence in year two 39.2725% of the people are buying K's cereal.
• Now it is plainly nonsense to suggest that we can predict to four decimal
places K's market share in two years time - but the calculation has enabled
us to get some insight into the change in K's share of the market over time
that we otherwise might not have had.
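A short sketch (assuming numpy) reproducing the one-year and two-year figures for K's cereal:

```python
import numpy as np

P = np.array([[0.88, 0.12],      # from K:           stay with K, switch away
              [0.15, 0.85]])     # from competition: switch to K, stay
s1 = np.array([0.25, 0.75])      # current shares (K, competition)

s2 = s1 @ P                      # year 1
s3 = s2 @ P                      # year 2
print(s2.round(4))               # [0.3325 0.6675]
print(s3.round(6))               # [0.392725 0.607275]
```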
• Long-run
• K's share of the market in the long-run. This implies that we need to calculate st as t becomes
very large (approaches infinity).
• The idea of the long-run is based on the assumption that, eventually, the system reaches
"equilibrium" (often referred to as the "steady-state") in the sense that st = st-1. This is not to
say that transitions between states do not take place, they do, but they "balance out" so that
the number in each state remains the same.
There are two basic approaches to calculating the steady-state:
• Computational - find the steady-state by calculating st for t = 1, 2, 3, ... and stop when st-1 and st are approximately the same. This is obviously very easy for a computer and is the approach used by software packages.
• Algebraic - to avoid the lengthy arithmetic needed to calculate st for t = 1, 2, 3, ..., we have an algebraic short-cut. Recall that in the steady state st = st-1 (= [x1, x2], say, for the example considered above). Then, as st = st-1 P, we have
• [x1, x2] = [x1, x2] × | 0.88  0.12 |
                        | 0.15  0.85 |
(and note also that x1 + x2 = 1, since K + C = 1: total market share is always 1). Hence we have three equations which we can solve.
• Note here that we have used the word assumption above. This is because not all systems
reach an equilibrium, e.g. the system with transition matrix
|0 1 |
|1 0 |
will never reach a steady-state.
• Adopting the algebraic approach for the K's cereal example we have the three equations:

[x1, x2] = [x1, x2] × | 0.88  0.12 |        Eq. 1:  x1 = 0.88x1 + 0.15x2
                      | 0.15  0.85 |        Eq. 2:  x2 = 0.12x1 + 0.85x2
                                            Eq. 3:  x1 + x2 = 1
• Rearranging Equation 1, i.e. x1 - 0.88x1 - 0.15x2 = 0:
0.12x1 - 0.15x2 = 0 ------------ (1)
• Rearranging Equation 2, i.e. 0 = 0.12x1 + 0.85x2 - x2:
0.12x1 - 0.15x2 = 0 ------------ (2)
(Both rearrangements give the same equation, which is why Equation 3 is also needed.)
• Solving:
0.12x1 - 0.15x2 = 0
Substitute x1 = 1 - x2 (since x1 + x2 = 1):
0.12(1 - x2) - 0.15x2 = 0
0.12 - 0.12x2 - 0.15x2 = 0
0.12 - 0.27x2 = 0, so x2 = 0.12/0.27
x2 = 0.4444
Since x1 + x2 = 1, x1 = 1 - 0.4444 = 0.5556
Hence, in the long run, K's market share will be 55.56%.
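The same long-run answer can be obtained by the "computational" route described earlier: keep multiplying by P until the state vector stops changing. A minimal sketch, assuming numpy:

```python
import numpy as np

P = np.array([[0.88, 0.12],
              [0.15, 0.85]])
s = np.array([0.25, 0.75])       # start from the current shares

while True:                      # iterate s <- s·P until it stops changing
    s_next = s @ P
    if np.allclose(s_next, s, atol=1e-10):
        break
    s = s_next

print(s.round(4))                # [0.5556 0.4444] -> K's long-run share is about 55.56%
```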
Comments:
• The above analysis is plainly simplistic. We cannot pretend to have
accurately predicted market shares into the future. Changing circumstances
will change the transition matrix over time. However what the above analysis
has given us is some insight that we did not have before. For example:
• we now have some numeric insight into how fast we might expect K's market
share to grow over time
• we now have some numeric insight into what might be the maximum market
share we can expect K to achieve
• we now have some numeric insight into whether an advertising campaign
aimed at changing transition probabilities will increase K's market share or
not
Comments:
• Such insights, based as they are on a quantitative analysis, can help us in decision
making. For example, if we wanted K to have a market share of 35% in two years
time we need do nothing (on current trends). If we wanted K to have a market share
of 50% in two years time then, on current trends, we do need to do something. How
much we need to change any given transition probability to achieve our target
market share of 50% in two years time is easily calculated.
• Moreover reflect on what all the TV/other media advertisements for consumer items
are meant to do. For many items (since the total market demand is effectively stable,
what is often called by economists a saturated market, "all the people who are
going to buy are already buying") what such advertisements are meant to achieve
is to alter consumer switching (transition) probabilities. You should be clear that
transition probabilities need not be regarded as fixed numbers - rather they are often
things that we can influence/change.
Comment:
• Any problem for which a state-transition diagram can be drawn can be analysed
using the approach given above.
The advantages and disadvantages of using Markov theory include:
• Markov theory is simple to apply and understand.
• Sensitivity calculations (i.e. "what-if" questions) are easily carried out.
• Markov theory gives us an insight into changes in the system over time.
• P may be dependent upon the current state of the system. If P is dependent upon both time and the current state of the system, i.e. P is a function of both t and st, then the basic Markov equation becomes st = st-1 P(t-1, st-1).
• Markov theory is only a simplified model of a complex decision-making process.
Some facts : Data the new gold
• Nowadays many supermarkets in the UK have their own "loyalty cards" which are swiped through
the checkout at the same time as a customer makes their purchases. These provide a mass of detailed
information from which supermarkets (or others) can deduce brand switching transition matrices:
• Do you think those matrices might be of interest (value) to other companies or not?
• For example, suppose you were a manufacturer/marketer of breakfast cereal how much would you pay
a leading supermarket for continuing and detailed electronic information on cereal brand switching?
• How much extra would you pay for exclusive rights with that supermarket (so your competitors cannot have access to
that data)?
• How useful would a continuing flow of such information be to you to judge the effect of promotional/marketing
campaigns?
• Now consider how many different products/types of products a supermarket sells. The data they are
gathering on their databases through the use of loyalty cards can be extremely valuable to them.
• Note too that the availability of such data enables more detailed models to be constructed. For
example in the cereal problem we dealt with above the competition was represented by just one state.
With more detailed data that state could be disaggregated into a number of different states - maybe
one for each competitor brand of cereal. If we have n states then we need n² transition probabilities.
Estimating these probabilities is easy if we have access to a supermarket database which tells us from
individual consumer data whether people switched or not, and if so to what.
Data
• Also we could have different models for different segments of the market - maybe brand
switching is different in rural areas from brand switching in urban areas for example. Families
with young children would obviously constitute another important brand switching segment of
the cereal market.
• Note here that if we wish to investigate brand switching in a numeric way then transition
probabilities are key. Unless we can get such numbers nothing numeric is possible.
• Consider now how, in the absence of readily available information on brand switching as
gathered by a supermarket (e.g. because we cannot afford the price the supermarkets are asking
for such information), we might get information as to transition probabilities. One way, indeed
this is how this was done before loyalty cards, is to survey customers individually. Someone
physically stands outside the supermarket and asks shoppers about their current purchases and
their previous purchases. Whilst this can be done it is plainly expensive - particularly if we need
to achieve a reasonable geographic coverage that is regularly updated as time passes.
• Both of the above ways of estimating transition matrices - buying electronic information and
manual surveys - cost money. There is however one way that is
effectively COST FREE although, as will become apparent below, it does involve some
intellectual effort. This involves estimating the transition probabilities (i.e. the entire transition
matrix) from the observed market shares. We illustrate how this can be done below.
Recap of Markov Chain
• A random process in which the occurrence of a future state depends on the immediately preceding state, and only on it, is known as a Markov chain or Markov process.
• Uses range from the behaviour of consumers in terms of their brand loyalty and switching pattern, to whether a machine used to manufacture a product is in a working or non-working state at any point.
• State: a state is a condition or location of an object in the system at a particular time.
• Assumptions: finite number of states; states are mutually exclusive; states are collectively exhaustive; the probability of moving from one state to another is constant over time.
Example
• The "School of International Studies for Population" found from its survey that the mobility of the population (in percent) of a state between village, town and city is as follows:

              To:  Village  Town  City
From  Village         50      30    20
      Town            10      70    20
      City            10      40    50

What will be the proportion of population in village, town and city after 2 years, given that the present population has proportions of 0.7, 0.2 and 0.1 in village, town and city respectively?
Example
For the first year (n = 1), the village proportion is
0.7×0.5 + 0.2×0.1 + 0.1×0.1 = 0.35 + 0.02 + 0.01 = 0.38

              To:  Village  Town  City
From  Village         50      30    20
      Town            10      70    20
      City            10      40    50

Doing the same for the town and city columns, the proportions of population in village, town and city after 1 year are [0.38, 0.39, 0.23] respectively.
Example
After the second year:

                                 To:  Village  Town  City
[0.38, 0.39, 0.23] ×  From  Village      50      30    20
                            Town         10      70    20
                            City         10      40    50

The proportions of population in village, town and city after 2 years are [0.252, 0.479, 0.269] respectively.
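A short sketch (assuming numpy) reproducing the one-year and two-year population proportions:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],   # from Village
              [0.1, 0.7, 0.2],   # from Town
              [0.1, 0.4, 0.5]])  # from City
R0 = np.array([0.7, 0.2, 0.1])   # present proportions (Village, Town, City)

R1 = R0 @ P
R2 = R1 @ P
print(R1.round(3))               # [0.38  0.39  0.23 ]
print(R2.round(3))               # [0.252 0.479 0.269]
```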
Recap: Procedure for determining the Steady-State Condition
STEP 1: Formulate a state transition matrix. Develop the state transition matrix by first calculating the probabilities from the retentions, gains and losses, in the same manner as explained earlier in this presentation.
STEP 2: Calculate the future probable market share. The market share for any period n is determined by using the following equations:
[Market share in period 2] = [Market share in period 1] [Transition Matrix]
[Market share in period 3] = [Market share in period 2] [Transition Matrix]
...
[Market share in period n] = [Market share in period n-1] [Transition Matrix]
In general, once a steady state is reached, multiplication of a state condition by the transition probabilities does not change the state condition. That is,
pn = pn-1 × P
for any value of n after a steady state is reached.
STEP 3: Determine the steady-state condition. The steady-state condition can be determined by the use of matrix algebra and the solution of a set of simultaneous equations obtained using the equation given in Step 2.
Example
• There are three factories in a country producing scooters; let the manufacturers of these factories be A, B and C respectively. It has been observed that during the previous month manufacturer A sold a total of 100 scooters, manufacturer B sold a total of 200 scooters and manufacturer C sold 400 scooters. It is known to all manufacturers that customers do not always purchase a new scooter from the same producer who manufactured their previous scooter, because of advertising, dissatisfaction with service and other reasons. All manufacturers maintain records of the number of their customers and the factory from which they obtained each new customer. The following table gives the information regarding the movement of customers from one factory to another, given that this month manufacturer A sold 120 scooters, manufacturer B sold 203 scooters and manufacturer C sold 377 scooters. Further, it is assumed that no new customer enters the market and no old customer leaves the market.
Previously Owned           New Scooter Made by            Total
Scooter Made by          A        B        C
A                       85        8        7              100
B                       20      160       20              200
C                       15       35      350              400
Total                  120      203      377              700

Manufacturers of the three factories wish to know the following:

a) Should the advertising campaign of manufacturer C be directed towards attracting previous purchasers of scooters manufactured by A or B, or should it concentrate on retaining a larger proportion of the previous purchasers of scooters manufactured by C?
b) The purchaser of a new scooter keeps the vehicle for 3 years on average. If this trend in brand switching continues, what will the market shares of the three companies be in 3 years (and in 6 years)?
Solution:
Step 1) From the data we observe that the market share of manufacturer C has declined from
400/700 = 0.571 to 377/700 = 0.539,
with most of the gain going to manufacturer A. Out of the 120 new scooters purchased from manufacturer A, 85 customers previously had a scooter manufactured by A, 20 previously had scooters from B and 15 previously had scooters from C. Of the 100 previous owners of scooters manufactured by A, 8 purchased a new scooter from B while only 7 purchased from C. The calculations for the market shares of the three manufacturers are shown below.

                              Manufacturer
                     A                   B                   C
Previous market   100/700 = 0.143    200/700 = 0.286    400/700 = 0.571
share
New market        120/700 = 0.171    203/700 = 0.290    377/700 = 0.539
share
Step 2: Using the data of the problem, the state transition matrix is obtained as shown below.
State transition matrix:
           A                    B                    C
A    85/100 = 0.8500       8/100 = 0.0800       7/100 = 0.0700
B    20/200 = 0.1000     160/200 = 0.8000      20/200 = 0.1000
C    15/400 = 0.0375      35/400 = 0.0875     350/400 = 0.8750

That is:
           A        B        C
A       0.8500   0.0800   0.0700
B       0.1000   0.8000   0.1000
C       0.0375   0.0875   0.8750

(Reading along a row gives retention and loss; reading down a column gives retention and gain; the diagonal entries are the retentions.)
These transition probabilities for the shift between periods may be interpreted as follows:
i) A retains 85% of its own customers, gains 10% of B's customers and gains 3.75% of C's customers.
ii) B gains 8% of A's customers, retains 80% of its own customers, and gains 8.75% of C's customers.
iii) C gains 7% of A's customers, gains 10% of B's customers, and retains 87.5% of its own customers.
Step 3: The transition probabilities can now be used to study the brand loyalty of customers under the following assumptions:
a) The probability that a customer will switch to another manufacturer, or purchase again from the same manufacturer, depends only on the brand of the scooter he presently has, not on earlier brands.
b) The probability that a customer will switch to another manufacturer, or purchase again from the same manufacturer, is independent of how many previous purchases he has made.
Since the new market shares of the three manufacturers are 0.171, 0.290 and 0.539 respectively, the estimate of A's market share in 3 years (one purchase cycle) can be obtained by multiplying the market shares by the corresponding numbers in the first column of the transition matrix and summing the results, as follows:
[Market share vector] × [First column of the transition matrix] = [Market share components for A]
A: 0.171 × 0.8500 = 0.145
B: 0.290 × 0.1000 = 0.029
C: 0.539 × 0.0375 = 0.020
Market share of A = 0.194
Similarly, the market shares of B and C in 3 years are obtained as:
[Market share vector] × [Second column of the transition matrix] = [Market share components for B]
A: 0.171 × 0.0800 = 0.014
B: 0.290 × 0.8000 = 0.232
C: 0.539 × 0.0875 = 0.047
Market share of B = 0.293
[Market share vector] × [Third column of the transition matrix] = [Market share components for C]
A: 0.171 × 0.0700 = 0.012
B: 0.290 × 0.1000 = 0.029
C: 0.539 × 0.8750 = 0.472
Market share of C = 0.513
[0.194  0.293  0.513]
These calculations represent a change in market share of
0.194 - 0.171 = 0.023 in the market share of A
0.293 - 0.290 = 0.003 in the market share of B
0.513 - 0.539 = -0.026 in the market share of C
i.e. a net gain of 2.3 percent for A and 0.3 percent for B, and a net loss of 2.6 percent for C.
Thus the market shares in 3 years are
A: 0.194
B: 0.293
C: 0.513
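A short sketch (assuming numpy) reproducing the projection; the tiny difference for A from the figure above comes from rounding the intermediate components:

```python
import numpy as np

P = np.array([[0.8500, 0.0800, 0.0700],   # from A
              [0.1000, 0.8000, 0.1000],   # from B
              [0.0375, 0.0875, 0.8750]])  # from C
shares = np.array([0.171, 0.290, 0.539])  # current market shares (A, B, C)

in_3_years = shares @ P                   # shares after one purchase cycle (3 years)
print(in_3_years.round(3))                # [0.195 0.293 0.513]
```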
