Chapter 8 Markov Chain Model
One can simulate from a Markov chain by noting that the moves out of any
given state (the corresponding row of the transition matrix) follow a multinomial
distribution. Simulating a Markov chain therefore reduces to repeatedly simulating
from a multinomial distribution.
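As a sketch of this idea in Python with NumPy (simulate_chain is a hypothetical helper
name, and the two-state matrix anticipates the rainy-days example introduced below):

    import numpy as np

    # Rainy-days transition matrix from this chapter (rows/columns: R, N).
    P = np.array([[0.4, 0.6],
                  [0.2, 0.8]])

    def simulate_chain(P, start, n_steps, rng=None):
        # Each step draws one multinomial trial over the current state's row;
        # argmax of the resulting one-hot count vector is the next state.
        rng = np.random.default_rng() if rng is None else rng
        path = [start]
        for _ in range(n_steps):
            path.append(int(rng.multinomial(1, P[path[-1]]).argmax()))
        return path

    print(simulate_chain(P, start=0, n_steps=10))  # e.g. [0, 1, 1, 0, 1, ...]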
A Markov chain is a particular model for keeping track of systems that change according
to given probabilities. As we'll see, a Markov chain may allow one to predict future events, but
the predictions become less useful for events farther into the future (much like predictions of the
stock market or weather).
The term Markov chain refers to any system in which there is a certain number of states and
given probabilities that the system changes from any state to another. That's a lot to take in
at once, so let's illustrate using a rainy-days example. A system could have many more than
two states, but we'll stick to two for this small example. The probabilities for our system
might be:
If it rains today (R), then there is a 40% chance it will rain tomorrow and a 60% chance of
no rain.
If it doesn't rain today (N), then there is a 20% chance it will rain tomorrow and an 80%
chance of no rain.
It may help to organize this data in what we call a state diagram. In the diagram, the left
circle represents rain (R) and the right circle represents no rain (N). The arrows indicate the
probability of changing state. For example, the arrow from R to N is labeled 0.6 because there
is a 60% chance that if it rains today, then it won't rain tomorrow.
The same probabilities can be laid out in table form and, equivalently, as a transition
matrix, with rows giving today's state and columns tomorrow's:

                        rain tomorrow   no rain tomorrow
    rain today (R)           0.4              0.6
    no rain today (N)        0.2              0.8

    P = [ 0.4  0.6 ]
        [ 0.2  0.8 ]
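The matrix view also makes multi-step predictions concrete: entry (i, j) of the matrix
power P^n is the probability of moving from state i to state j in exactly n days. A minimal
sketch in Python with NumPy, using the matrix above:

    import numpy as np

    P = np.array([[0.4, 0.6],
                  [0.2, 0.8]])

    # Entry (i, j) of P**n is the probability of going from state i
    # to state j in exactly n steps.
    for n in (1, 2, 5, 20):
        print(n)
        print(np.linalg.matrix_power(P, n))

By n = 20 both rows are essentially (0.25, 0.75), so the forecast no longer depends on
today's weather, which is one way to see why predictions become less useful for events
farther into the future.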
The Model
Formally, a Markov chain is a probabilistic automaton. The probability distribution of state
transitions is typically represented as the Markov chain's transition matrix. If the Markov chain
has N possible states, the matrix is an N x N matrix such that entry (i, j) is the probability
of transitioning from state i to state j. Additionally, the transition matrix must be a stochastic
matrix: a matrix whose entries in each row add up to exactly 1. This makes sense, since each
row represents its own probability distribution.
Figure: general view of a sample Markov chain, with states as circles and edges as transitions.
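Since a valid transition matrix must satisfy these conditions, it is worth checking them
before using a matrix in any computation. A minimal sketch in Python with NumPy
(is_stochastic is a hypothetical helper name, not from this chapter):

    import numpy as np

    def is_stochastic(P, tol=1e-9):
        # A stochastic matrix is square, has non-negative entries,
        # and each of its rows sums to 1 (up to rounding tolerance).
        P = np.asarray(P, dtype=float)
        return (P.ndim == 2 and P.shape[0] == P.shape[1]
                and (P >= 0).all()
                and np.allclose(P.sum(axis=1), 1.0, atol=tol))

    print(is_stochastic([[0.4, 0.6], [0.2, 0.8]]))  # True
    print(is_stochastic([[0.5, 0.6], [0.2, 0.8]]))  # False: first row sums to 1.1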