Ch 8 – Markov Chain

• Introduction
- Most probability studies are based on independent processes; that
is, the previous experiment has no influence on the outcome of the
next experiment

- What if the outcome of the previous trial has an impact on the next
trial, i.e., the trials are dependent?

- In 1907, A.A. Markov began the study of this type of dependent
random process; this type of process is now referred to as a Markov
chain
• Example: Weather Model

- if today is a sunny day, there is a 90% probability that tomorrow is
a sunny day as well, and a 10% probability that it is a rainy day

- on the contrary, if today is a rainy day, there is a 50% probability
that tomorrow is a rainy day as well, and a 50% probability that it is
a sunny day

- so we write the probabilities in matrix form:

Pij = transition probability; i = weather on day D, j = weather on
day D+1

- P11 = sunny, sunny (SS) = 0.9
P12 = sunny, rainy (SR) = 0.1
P21 = rainy, sunny (RS) = 0.5
P22 = rainy, rainy (RR) = 0.5

        | 0.9  0.1 |
P   =   | 0.5  0.5 |

- and today is sunny, meaning the probability vector of
(sunny, rainy) = [1, 0] = X(0),
and tomorrow it is:
(sunny, rainy) = [0.9, 0.1] = X(1)

- it is observed:

X(1) = X(0) P
- and we can also observe:

X(2) = X(0) P² = X(1) P

- Therefore, the essence of the Markov chain is:

X(n) = X(n-1) P

X(n) = X(0) Pⁿ
- however, we can expect that when n is very large:

X(n-1) = X(n)
- Find X(n) in the sunny-rainy example:
Let q = X(n-1) = X(n) as n approaches infinity; therefore

X(n) = X(n-1) P  =>  q = qP

=> q1 = 0.9 q1 + 0.5 q2, i.e., 0.1 q1 - 0.5 q2 = 0, and q1 + q2 = 1

=> [q1, q2] = [5/6, 1/6] = [0.833, 0.167]
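
- a minimal sketch in Python/NumPy (an illustration added here, not
part of the original slides) that finds the same steady state both by
iterating X(n) = X(n-1) P and by solving q = qP with q1 + q2 = 1:

    import numpy as np

    # Transition matrix of the weather model: rows = today (S, R).
    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])

    # Approach 1: iterate X(n) = X(n-1) P from X(0) = [1, 0].
    x = np.array([1.0, 0.0])
    for _ in range(1000):
        x = x @ P
    print(x)  # -> [0.8333..., 0.1666...]

    # Approach 2: solve q = qP together with q1 + q2 = 1.
    # Rows of (I - P)^T encode q(I - P) = 0; append the normalization row.
    A = np.vstack([(np.eye(2) - P).T, np.ones(2)])
    b = np.array([0.0, 0.0, 1.0])
    q, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(q)  # -> [0.8333, 0.1667]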


• Essentials of Markov Chain
- Transition matrix (P): for example, the matrix shown in the weather
model

- Transition probability, pij

- Steady state:
=> predictions for more distant days become less accurate and
increasingly independent of the initial condition; the vector tends
toward a steady state (a long-term condition), and this steady-state
vector is also called the steady-state distribution

- Governing equations of Markov Chain:

X(n) = X(n-1) P or X(n) = X(0) Pⁿ


• Matrix Operation in Excel
- Product of matrices (matrix multiplication)

- in Excel, the function to obtain the product of two matrices is
MMULT, but note that you need to select the correct output range and
press “shift + control + enter”

- MINVERSE to obtain the inverse matrix

- TRANSPOSE to obtain the transpose matrix

- The same trick (shift + control + enter) applies to MINVERSE and
TRANSPOSE as well

- MDETERM to obtain the determinant of the matrix (since the result is
a single value, the trick is not needed)
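
- for readers working outside Excel, a sketch of the same four
operations in Python/NumPy (an illustration added here, not part of
the original slides):

    import numpy as np

    A = np.array([[0.9, 0.1],
                  [0.5, 0.5]])
    B = np.eye(2)  # 2x2 identity matrix

    print(A @ B)              # MMULT: matrix product
    print(np.linalg.inv(A))   # MINVERSE: inverse matrix
    print(A.T)                # TRANSPOSE: transposed matrix
    print(np.linalg.det(A))   # MDETERM: determinant (a single value)
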
• Absorbing Markov Chain
- Definition:
=> A state (si) of a Markov chain is called absorbing if it is
impossible to leave it, i.e., pii = 1

=> In an absorbing Markov chain, a state that is not absorbing is
called a transient state

- Typical example: drunkard’s (random) walk

=> a person who can’t walk straight steps left or right with equal
probability (1/2) and stops when hitting an obstacle (say walls,
cars, ….)

=> 0 and 4 represent the walls (obstacles) and 1, 2, 3 represent the
positions in between
- therefore, the transition matrix becomes (rows and columns ordered
as states 0, 1, 2, 3, 4):

        | 1    0    0    0    0   |
        | 0.5  0    0.5  0    0   |
P   =   | 0    0.5  0    0.5  0   |
        | 0    0    0.5  0    0.5 |
        | 0    0    0    0    1   |

- the states 1, 2 and 3 are transient states, and from any of these,
it is possible to reach the absorbing states, which are 0 and 4.

- when a process reaches an absorbing state, we say that it is
absorbed

- the probability that the chain is eventually absorbed is equal to 1
(as n approaches infinity)
Absorbing Probability

- Fundamental Matrix (N)

=> from the transition matrix, the transient matrix (Q), i.e., the
transitions among the transient states 1, 2 and 3, is:

        | 0    0.5  0   |
Q   =   | 0.5  0    0.5 |
        | 0    0.5  0   |

=> N = (I - Q)⁻¹

- Next, we can calculate the absorption probabilities by:

B = NR

where R holds the transitions from the transient states to the
absorbing states 0 and 4; in the example, R becomes:

        | 0.5  0   |
R   =   | 0    0   |
        | 0    0.5 |

=> both Q and R can be obtained by transforming the transition matrix
into the canonical form

Canonical Form

        | Q  R |
P   =   | 0  I |

(transient states listed first, then absorbing states; I is an
identity matrix and 0 is a zero matrix)
- Therefore, in the example the absorption probabilities are:

              | 0.75  0.25 |
B   =  NR  =  | 0.5   0.5  |
              | 0.25  0.75 |

=> which can be interpreted as, for instance: starting from state 1,
the probabilities of absorption in states 0 and 4 are 0.75 and 0.25,
respectively
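
- a minimal sketch in Python/NumPy (an illustration added here, not
part of the original slides) that reproduces B = NR for the
drunkard’s walk:

    import numpy as np

    # Transient part Q (states 1, 2, 3) and absorbing part R (to 0 and 4).
    Q = np.array([[0.0, 0.5, 0.0],
                  [0.5, 0.0, 0.5],
                  [0.0, 0.5, 0.0]])
    R = np.array([[0.5, 0.0],
                  [0.0, 0.0],
                  [0.0, 0.5]])

    N = np.linalg.inv(np.eye(3) - Q)  # fundamental matrix N = (I - Q)^-1
    B = N @ R                         # absorption probabilities
    print(B)
    # [[0.75 0.25]
    #  [0.5  0.5 ]
    #  [0.25 0.75]]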
• Discussion: although we do not derive or prove the algorithm for the
absorption probability here, can we use other approaches to calculate
it?

• Monte Carlo Simulation
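
- one such approach, sketched below in Python (an illustration added
here, not part of the original slides): simulate many walks and count
where each one is absorbed:

    import random

    def absorbed_at(start=1, trials=100_000):
        """Estimate the drunkard's-walk absorption probabilities."""
        hits = {0: 0, 4: 0}
        for _ in range(trials):
            state = start
            while state not in (0, 4):           # walk until absorbed
                state += random.choice((-1, 1))  # left/right with prob. 1/2
            hits[state] += 1
        return {s: n / trials for s, n in hits.items()}

    print(absorbed_at(1))  # roughly {0: 0.75, 4: 0.25}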


• Interesting Notes of Markov Chain
- An interesting example: rumor propagation
=> sometimes, when you are told “yes” you pass on “no” to the next
person, and vice versa; this is a Markov chain problem, and as more
people are involved, the message could become the opposite of what
you were told.

=> with a = P(yes -> no) and b = P(no -> yes), the transition matrix
is:

        | 1-a    a  |
P   =   |  b    1-b |

=> given a = 0.1 and b = 0.05, let’s see what happens:


    | 0.9   0.1  |^100      | 0.33  0.67 |
    | 0.05  0.95 |       ≈  | 0.33  0.67 |

- if it is a “yes” at the beginning, that is X(0) = (1, 0), then after
the information has passed through 100 people, the probability of
getting a “yes” is 0.33 and of getting a “no” is 0.67

- What the first person said is not important.
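
- checking this in Python/NumPy (an illustration added here, not part
of the original slides):

    import numpy as np

    a, b = 0.1, 0.05
    P = np.array([[1 - a, a],
                  [b, 1 - b]])

    P100 = np.linalg.matrix_power(P, 100)  # pass through 100 people
    print(np.array([1.0, 0.0]) @ P100)     # "yes" start -> [0.333, 0.667]
    print(np.array([0.0, 1.0]) @ P100)     # "no" start  -> same result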


- the independent case can be seen as a special case of the Markov
chain: each row of the transition matrix is identical and equal to the
given probability vector; for example, if you have probabilities 0.5,
0.25, 0.25 to win the first, second and third place in a race, the
transition matrix is:

        | 0.5  0.25  0.25 |
P   =   | 0.5  0.25  0.25 |
        | 0.5  0.25  0.25 |

- Pⁿ is identical to P regardless of n
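
- a quick check in Python/NumPy (an illustration added here, not part
of the original slides):

    import numpy as np

    # Independent case: every row is the same probability vector.
    P = np.array([[0.5, 0.25, 0.25],
                  [0.5, 0.25, 0.25],
                  [0.5, 0.25, 0.25]])

    # P^n equals P for every n >= 1.
    print(np.allclose(np.linalg.matrix_power(P, 5), P))  # True
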
!! Interesting exercise for improving your matrix operation and coding
skills:

=> How to code a program that computes the inverse matrix (X⁻¹)?
(see the sketch below)
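
- one possible answer: a minimal sketch in Python using Gauss-Jordan
elimination with partial pivoting (an illustration added here, not
part of the original slides):

    def inverse(X):
        """Invert a square matrix by Gauss-Jordan elimination on [X | I]."""
        n = len(X)
        # Build the augmented matrix [X | I].
        M = [list(map(float, row)) +
             [1.0 if i == j else 0.0 for j in range(n)]
             for i, row in enumerate(X)]
        for col in range(n):
            # Partial pivoting: pick the row with the largest pivot.
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            if abs(M[piv][col]) < 1e-12:
                raise ValueError("matrix is singular")
            M[col], M[piv] = M[piv], M[col]
            # Scale the pivot row so the pivot becomes 1.
            p = M[col][col]
            M[col] = [v / p for v in M[col]]
            # Eliminate the column from every other row.
            for r in range(n):
                if r != col:
                    f = M[r][col]
                    M[r] = [v - f * w for v, w in zip(M[r], M[col])]
        return [row[n:] for row in M]

    # Example: invert I - Q from the drunkard's walk; the result is N.
    print(inverse([[1, -0.5, 0], [-0.5, 1, -0.5], [0, -0.5, 1]]))
    # -> [[1.5, 1.0, 0.5], [1.0, 2.0, 1.0], [0.5, 1.0, 1.5]]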
