CH 8 Markov Chain
• Introduction
- Most probability studies assume an independent process; that is, the
outcome of the previous experiment has no influence on the outcome of
the next experiment
- What if the outcome of the previous trial has an impact on the next
trial, i.e., the trials are dependent? This is the setting of a Markov chain
- it is observed:
X(1) = X(0) P
- and we can also observe:
X(n) = X(n-1) P
X(n) = X(0) P^n
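These relations are easy to check numerically. A minimal sketch in Python
(the sunny-rainy transition probabilities below are assumed for
illustration only, not taken from this chapter):

import numpy as np

# Assumed 2-state (sunny, rainy) transition matrix -- illustrative values
P = np.array([[0.9, 0.1],   # P(sunny->sunny), P(sunny->rainy)
              [0.5, 0.5]])  # P(rainy->sunny), P(rainy->rainy)

x0 = np.array([1.0, 0.0])   # X(0): start on a sunny day

# X(n) = X(n-1) P, applied step by step
x = x0
for n in range(1, 6):
    x = x @ P
    print(f"X({n}) = {x}")

# Equivalently, X(n) = X(0) P^n
print(x0 @ np.linalg.matrix_power(P, 5))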
- however, we can expect that when n is very large:
X(n-1) ≈ X(n)
- Find X(n) in the sunny-rainy example:
Let q = X(n-1) = X(n) as n approaches infinity; therefore q = qP,
and q can be solved together with the condition that its entries sum to 1
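For a general two-state chain this system solves in closed form (a sketch
of the algebra; a and b are generic switching probabilities, not values
from this chapter):

P = \begin{pmatrix} 1-a & a \\ b & 1-b \end{pmatrix}, \qquad
q = qP,\ \ q_1 + q_2 = 1
\ \Rightarrow\ a\,q_1 = b\,q_2
\ \Rightarrow\ q = \Bigl( \frac{b}{a+b},\ \frac{a}{a+b} \Bigr)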
- Steady state:
=> the predictions for more distant days become increasingly insensitive
to, and in the limit independent of, the initial condition; the vector
tends to a steady state (long-term condition); this steady-state vector
is also called the steady-state distribution
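A short numeric check of the steady-state idea (using the same assumed
sunny-rainy matrix as in the sketch above): solve q = qP together with
the entries of q summing to 1.

import numpy as np

P = np.array([[0.9, 0.1],   # assumed sunny-rainy matrix (illustrative)
              [0.5, 0.5]])

# q = qP  <=>  (P - I)^T q^T = 0; append the normalization q_1 + q_2 = 1
A = np.vstack([(P - np.eye(2)).T, np.ones(2)])
rhs = np.array([0.0, 0.0, 1.0])
q, *_ = np.linalg.lstsq(A, rhs, rcond=None)
print("steady state q =", q)             # ~ [0.8333, 0.1667]

# Sanity check: every row of a high power of P matches q
print(np.linalg.matrix_power(P, 50))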
Example:
** Matrix Operation in Excel
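(In Excel these steps map onto the built-in array functions MMULT, for
the matrix product, and MINVERSE, for the matrix inverse; in older Excel
versions they must be entered as array formulas with Ctrl+Shift+Enter.)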
- the states 1, 2 and 3 are transient states, and from any of these, it is
possible to reach the absorbing states, which are 0 and 4.
=> the fundamental matrix: N = (I - Q)^-1
- Next, we can calculate the absorption probabilities by:
B = NR
Canonical Form
- listing the transient states first, P can be written in block
(canonical) form:
P = [ Q  R ]
    [ 0  I ]
where Q holds the transient-to-transient transitions, R the
transient-to-absorbing transitions, and I is the identity block for the
absorbing states
- Therefore, in the example the absorption probabilities can be read off
from the rows of B, as sketched below:
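A sketch of the whole N = (I - Q)^-1, B = NR pipeline. The chain below is
an assumed symmetric random walk on states 0-4 with absorbing ends; it
matches the state labels above but is not necessarily this chapter's
exact example:

import numpy as np

# Transient states 1, 2, 3 first; absorbing states 0 and 4 last
Q = np.array([[0.0, 0.5, 0.0],   # transitions among states 1, 2, 3
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
R = np.array([[0.5, 0.0],        # transitions from 1, 2, 3 into 0 and 4
              [0.0, 0.0],
              [0.0, 0.5]])

N = np.linalg.inv(np.eye(3) - Q)  # fundamental matrix N = (I - Q)^-1
B = N @ R                         # absorption probabilities
print("N =\n", N)
print("B =\n", B)  # row i: P(absorb at 0), P(absorb at 4) from state i+1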
- for large n, P^n is essentially the same for all larger n; the chain
has reached its long-run behavior
!! Interesting exercise for improving your matrix operations and coding:
how to code a program that computes the inverse of a matrix (X^-1)
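A minimal sketch of such a program, using Gauss-Jordan elimination with
partial pivoting (one standard approach; in practice np.linalg.inv does
this work for you):

def invert(X):
    """Invert a square matrix via Gauss-Jordan elimination with partial pivoting."""
    n = len(X)
    # Build the augmented matrix [X | I]
    A = [list(map(float, X[i])) + [1.0 if i == j else 0.0 for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Partial pivoting: bring up the row with the largest entry in this column
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        if abs(A[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        A[col], A[pivot] = A[pivot], A[col]
        # Scale the pivot row so the pivot entry becomes 1
        p = A[col][col]
        A[col] = [v / p for v in A[col]]
        # Eliminate this column from all other rows
        for r in range(n):
            if r != col:
                f = A[r][col]
                A[r] = [v - f * w for v, w in zip(A[r], A[col])]
    # The right half of the augmented matrix now holds X^-1
    return [row[n:] for row in A]

print(invert([[4.0, 7.0], [2.0, 6.0]]))  # expect [[0.6, -0.7], [-0.2, 0.4]]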