Chapter 8: Markov Chain Model

This document provides an overview of Markov chain modeling. It defines a Markov chain as a stochastic model in which the probability of each event depends only on the state attained in the previous event, describes how to simulate a Markov chain by sampling from the multinomial distribution given by the current state's row of the transition matrix, and covers the key ingredients of the model: states, the transition matrix of probabilities for moving between states, and the use of the transition matrix to describe movement between states over multiple time steps.


What is Markov chain simulation?


A Markov chain or Markov process is a stochastic model describing a sequence of
possible events in which the probability of each event depends only on the state
attained in the previous event.

One can simulate a Markov chain by noting that the collection of moves out of any given state (the corresponding row of the transition matrix) forms a multinomial distribution. One can thus simulate a Markov chain by repeatedly sampling from a multinomial distribution, as sketched below.
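
To make this concrete, here is a minimal Python sketch (not part of the original chapter) that draws the next state from the multinomial distribution given by the current state's row. The two-state matrix and the function name simulate_chain are illustrative choices only:

    import numpy as np

    # A placeholder two-state transition matrix; any stochastic matrix works.
    # Row i holds the probabilities of moving from state i to each state.
    P = np.array([[0.7, 0.3],
                  [0.4, 0.6]])

    rng = np.random.default_rng(seed=0)

    def simulate_chain(P, start_state, n_steps):
        """Return the list of states visited, beginning at start_state."""
        state = start_state
        path = [state]
        for _ in range(n_steps):
            # One multinomial draw over the current row selects the next state.
            draw = rng.multinomial(1, P[state])
            state = int(np.argmax(draw))
            path.append(state)
        return path

    print(simulate_chain(P, start_state=0, n_steps=10))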

Definition of Markov Chain


When we study a system that can change over time, we need a way to keep track of those
changes. A Markov chain is a particular model for keeping track of systems that change
according to given probabilities. As we'll see, a Markov chain may allow one to predict future
events, but the predictions become less useful for events farther into the future (much like
predictions of the stock market or weather). This lesson requires prior knowledge of matrix
arithmetic.
A state is any particular situation that is possible in the system. For example, if we are studying
rainy days, then there are two states:

1. It's raining today.
2. It's not raining today.

The system could have many more than two states, but we'll stick to two for this small example.
The term Markov chain refers to any system in which there are a certain number of states and
given probabilities that the system changes from any state to another state. That's a lot to take in
at once, so let's illustrate using our rainy days example. The probabilities for our system might
be:

• If it rains today (R), then there is a 40% chance it will rain tomorrow and 60% chance of no rain.
• If it doesn't rain today (N), then there is a 20% chance it will rain tomorrow and 80% chance of no rain.

It may help to organize this data in what we call a state diagram. In that diagram (not reproduced here), the left circle represents rain (R) and the right circle represents no rain (N). The arrows indicate the probabilities of changing state. For example, the arrow from R to N is labeled 0.6 because there is a 60% chance that if it rains today, then it won't rain tomorrow.

The Transition Matrix


If a Markov chain consists of k states, the transition matrix is the k by k matrix (a table of
numbers) whose entries record the probability of moving from each state to another state (in
decimal form rather than percentage). The rows of the matrix correspond to the current state and
columns correspond to the next state. For example, the entry at row 1 and column 2 records the
probability of moving from state 1 to state 2. (Note, the transition matrix could be defined the
other way around, but then the formulas would also be reversed.)
Let's build a transition matrix together. First, we choose an order for our states. Let's say R
always comes before N. That means the first row and first column will concern R while the
second row and column will concern N. Remember, rows mean "from" and columns mean "to."
As we can see here, the transition from R to R is 0.4, so we put 0.4 in the upper left of the matrix.
The transition from R to N is 0.6 (upper right). N to R is 0.2 (lower left) and N to N is 0.8 (lower
right).

In the original figure, the probabilities are shown in table form on the left and as a transition matrix on the right. Written out with the ordering chosen above (R first, then N), the transition matrix is:

            to R   to N
    from R   0.4    0.6
    from N   0.2    0.8

The Model
Formally, a Markov chain is a probabilistic automaton. The probability distribution of state
transitions is typically represented as the Markov chain’s transition matrix. If the Markov chain
has N possible states, the matrix will be an N x N matrix, such that entry (I, J) is the probability
of transitioning from state I to state J. Additionally, the transition matrix must be a stochastic
matrix, a matrix whose entries in each row must add up to exactly 1. This makes complete sense,
since each row represents its own probability distribution.

General view of a sample Markov chain, with states as circles, and edges as transitions

Sample transition matrix with 3 possible states
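
The original figure is not reproduced here; as a stand-in, the short Python sketch below builds a hypothetical 3-state transition matrix and checks the defining row-sum property. The particular numbers are made up purely for illustration:

    import numpy as np

    # A made-up 3-state transition matrix; entry (i, j) is the probability
    # of moving from state i to state j, so every row must sum to 1.
    P = np.array([[0.1, 0.6, 0.3],
                  [0.5, 0.2, 0.3],
                  [0.4, 0.4, 0.2]])

    # Verify that P is a stochastic matrix.
    assert np.allclose(P.sum(axis=1), 1.0)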


Additionally, a Markov chain also has an initial state vector, represented as an N x 1 matrix (a
vector), that describes the probability distribution of starting at each of the N possible states.
Entry I of the vector describes the probability of the chain beginning at state I.

Initial State Vector with 4 possible states
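
As an illustration only (the vector shown in the original figure is not reproduced), the sketch below pairs the two-state rain matrix from earlier with an initial state vector and propagates it one step. With rows of P meaning "from," a row vector of state probabilities is multiplied on the left of P:

    import numpy as np

    # Two-state rain example (state 0 = R, state 1 = N); illustrative only.
    P = np.array([[0.4, 0.6],
                  [0.2, 0.8]])

    # Initial state vector: suppose a 50/50 chance of rain today.
    x0 = np.array([0.5, 0.5])

    # With the row-as-"from" convention, the distribution after one step is
    # x0 @ P (equivalently P.T @ x0 if the vector is treated as a column).
    x1 = x0 @ P
    print(x1)   # [0.3 0.7] -> a 30% chance of rain tomorrow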


These two entities are typically all that is needed to represent a Markov chain.
We now know how to obtain the chance of transitioning from one state to another, but how about
finding the chance of that transition occurring over multiple steps? To formalize this, we now
want to determine the probability of moving from state I to state J over M steps. As it turns out,
this is actually very simple to find out. Given a transition matrix P, this can be determined by
calculating the value of entry (I, J) of the matrix obtained by raising P to the power of M. For
small values of M, this can easily be done by hand with repeated multiplication. However, for
large values of M, if you are familiar with basic linear algebra, a more efficient way to raise a matrix to a power is to first diagonalize the matrix.
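
Either way, the M-step probabilities can be read off P raised to the power M. The short sketch below (using the two-state rain matrix again, purely as an illustration) does this with a library matrix-power routine:

    import numpy as np

    P = np.array([[0.4, 0.6],
                  [0.2, 0.8]])

    M = 5
    P_M = np.linalg.matrix_power(P, M)

    # Entry (i, j) of P**M is the probability of being in state j after M
    # steps, given that the chain started in state i.
    print(P_M[0, 1])   # chance of no rain in 5 days, given rain today

    # As M grows, the rows of P**M become nearly identical, which is why
    # long-range forecasts say little about the starting state.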
