Overview

An absorbing state is a state in a Markov chain where the probability of remaining in that state is 1. For a Markov chain to be absorbing, all states must eventually reach an absorbing state with probability 1. Transition matrices for absorbing Markov chains can be written in block form with the identity matrix in the bottom right. Quantities such as the expected number of visits to each state and the absorption probabilities can be found from the fundamental matrix and its products with other blocks of the transition matrix. Continuous-time Markov chains also have the Markov property and embedded discrete-time chains, with exponentially distributed waiting times between state changes conditional on the current state. First passage times refer to the number of steps needed to move from one state to another for the first time; when the target state equals the starting state, this is called the recurrence time.


Absorbing States

An absorbing state is a state $i$ in a Markov chain such that $\mathbb{P}(X_{t+1} = i \mid X_t = i) = 1$. Note that it is not sufficient for a Markov chain to contain an absorbing state (or even several!) in order for it to be an absorbing Markov chain. It must also have all other states eventually reach an absorbing state with probability 1.

Calculations

If the chain has $t$ transient states and $s$ absorbing states, the transition matrix $P$ for a time-homogeneous absorbing Markov chain may then be written

$$P = \begin{pmatrix} Q & R \\ 0 & I_s \end{pmatrix},$$

where $Q$ is a $t \times t$ matrix, $R$ is a $t \times s$ matrix, $0$ is the $s \times t$ zero matrix, and $I_s$ is the $s \times s$ identity matrix.
Many calculations for Markov chains are made simple by this decomposition of the transition matrix. In particular, the fundamental matrix $N = (I_t - Q)^{-1}$ contains a great deal of information. Note that

$$N = (I_t - Q)^{-1} = \sum_{k=0}^{\infty} Q^k,$$

so

$$N_{i,j} = \mathbb{P}(X_0 = j \mid X_0 = i) + \mathbb{P}(X_1 = j \mid X_0 = i) + \mathbb{P}(X_2 = j \mid X_0 = i) + \cdots = \mathbb{E}(\text{number of times } j \text{ is visited} \mid X_0 = i).$$

It follows from the linearity of expectation that, starting from state $i$, the expected number of steps before entering an absorbing state is given by the $i^\text{th}$ entry of the column vector $N\,\mathbf{1}$.

Furthermore, the probability of being absorbed by state $j$ after starting in state $i$ is given by the $(i,\,j)^\text{th}$ entry of the $t \times s$ matrix $M = N R$:

$$M_{i,j} = \sum_{k=1}^{t} N_{i,k}\, R_{k,j} = \sum_{k=1}^{t} \mathbb{E}(\text{number of times } k \text{ is visited} \mid X_0 = i)\, \mathbb{P}(\text{next state is } j \mid \text{current state is } k),$$

since $R_{k,j}$ is the one-step probability of moving from transient state $k$ to absorbing state $j$.
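As a concrete illustration, all three quantities above ($N$, $N\,\mathbf{1}$, and $M = NR$) can be computed numerically. The following is a minimal sketch in NumPy for a hypothetical gambler's-ruin chain on states $\{0, 1, 2, 3, 4\}$ (this example is not from the text): states 0 and 4 are absorbing, and each transient state moves up or down by one with probability $1/2$.

```python
import numpy as np

# Hypothetical example: fair gambler's ruin on {0, 1, 2, 3, 4}.
# Transient states (rows/columns of Q, in order): 1, 2, 3.
# Absorbing states (columns of R, in order): 0, 4.
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
R = np.array([[0.5, 0.0],
              [0.0, 0.0],
              [0.0, 0.5]])

t = Q.shape[0]
N = np.linalg.inv(np.eye(t) - Q)  # fundamental matrix: expected visit counts
steps = N @ np.ones(t)            # expected steps to absorption: 3, 4, 3
M = N @ R                         # absorption probabilities for states 0 and 4

print(steps)  # starting at 1, 2, 3: expect 3, 4, 3 steps to absorption
print(M)      # e.g. starting at 1: absorbed at 0 w.p. 0.75, at 4 w.p. 0.25
```

These match the classical gambler's-ruin answers: starting with $i$ dollars out of 4, the expected duration is $i(4-i)$ and the probability of reaching 4 before 0 is $i/4$.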

Continuous-time Markov Chains

Many processes one may wish to model occur in continuous time (e.g., disease transmission events, cell phone calls, mechanical component failure times, etc.). A discrete-time approximation may or may not be adequate.

$\{X(t),\ t \ge 0\}$ is a continuous-time Markov chain if it is a stochastic process taking values in a finite or countable set, say $\{0, 1, 2, \ldots\}$, with the Markov property that

$$\mathbb{P}[X(t+s) = j \mid X(s) = i,\ X(u) = x(u) \text{ for } 0 \le u \le s] = \mathbb{P}[X(t+s) = j \mid X(s) = i].$$

Here we consider homogeneous chains, meaning $\mathbb{P}[X(t+s) = j \mid X(s) = i] = \mathbb{P}[X(t) = j \mid X(0) = i]$ for all $s, t \ge 0$.

Write $\{X_n,\ n \ge 0\}$ for the sequence of states that $\{X(t)\}$ arrives in, and let $S_n$ be the corresponding arrival times. Set $X^A_n = S_n - S_{n-1}$.

The Markov property for $\{X(t)\}$ implies the (discrete-time) Markov property for $\{X_n\}$; thus $\{X_n\}$ is an embedded Markov chain, with transition matrix $P = [P_{ij}]$.

Similarly, the inter-arrival times $X^A_n$ must be conditionally independent given $\{X_n\}$. Why? Show that $X^A_n$ has a memoryless property conditional on $X_{n-1}$:

$$\mathbb{P}[X^A_n > t + s \mid X^A_n > s,\ X_{n-1} = x] = \mathbb{P}[X^A_n > t \mid X_{n-1} = x],$$

i.e., $X^A_n$ is conditionally exponentially distributed given $X_{n-1}$.
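The embedded-chain description suggests a direct way to simulate a continuous-time chain: draw the holding time in the current state from an exponential distribution, then draw the next state from the embedded transition matrix. Below is a minimal sketch for a hypothetical two-state machine (0 = working, 1 = broken); the rates, transition probabilities, and function name are all illustrative assumptions, not from the text.

```python
import random

# Assumed parameters for a hypothetical two-state machine.
RATES = {0: 1.0, 1: 2.0}        # rate of leaving each state (per unit time)
P = {0: {1: 1.0}, 1: {0: 1.0}}  # embedded-chain transition probabilities

def simulate(x0, horizon, seed=0):
    """Return the list of (arrival_time, state) pairs observed up to `horizon`."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    path = [(0.0, x0)]
    while True:
        t += rng.expovariate(RATES[x])  # holding time X^A_n ~ Exp(rate of x)
        if t >= horizon:
            return path
        r, acc = rng.random(), 0.0
        for y, p in P[x].items():       # sample next state from row x of P
            acc += p
            if r <= acc:
                x = y
                break
        path.append((t, x))

path = simulate(0, horizon=10.0)
```

Note how the two ingredients mirror the theory: the exponential holding time depends only on the current state (the memoryless property above), and the jump targets follow the embedded chain $\{X_n\}$.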

First Passage Times

Section 29.3 dealt with finding n-step transition probabilities from state i to state j. It
is often desirable to also make probability statements about the number of transitions made
by the process in going from state i to state j for the first time. This length of time is called
the first passage time in going from state i to state j. When j = i, this first passage time is just
the number of transitions until the process returns to the initial state i. In this case, the first
passage time is called the recurrence time for state i.

To illustrate these definitions, reconsider the inventory example introduced in Sec. 29.1, where $X_t$ is the number of cameras on hand at the end of week $t$, and where we start with $X_0 = 3$. Suppose it turns out that

$$X_0 = 3,\quad X_1 = 2,\quad X_2 = 1,\quad X_3 = 0,\quad X_4 = 3,\quad X_5 = 1.$$

In this case, the first passage time in going from state 3 to state 1 is 2 weeks, the first passage
time in going from state 3 to state 0 is 3 weeks, and the recurrence time for state 3 is 4 weeks.
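Given an observed sample path, first passage and recurrence times can be read off mechanically. A small helper sketch (the function name is an illustrative assumption, not from the text), applied to the inventory path above:

```python
def first_passage_time(path, i, j):
    """Number of transitions from the first visit to i until the next visit to j.

    When j == i this is the recurrence time for state i.
    Returns None if j is never reached after i in this path.
    """
    start = path.index(i)
    for n in range(start + 1, len(path)):
        if path[n] == j:
            return n - start
    return None

path = [3, 2, 1, 0, 3, 1]  # X_0, ..., X_5 from the inventory example
print(first_passage_time(path, 3, 1))  # 2 weeks
print(first_passage_time(path, 3, 0))  # 3 weeks
print(first_passage_time(path, 3, 3))  # recurrence time for state 3: 4 weeks
```

Keep in mind that a single path only yields one realization of each first passage time; these are random variables, and probability statements about them require the transition matrix, not just one trajectory.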
