
SF2863 Systems Engineering, 7.5 HP
- Intro to Markov Chains

Lecturer: Per Enqvist

Optimization and Systems Theory


Department of Mathematics
KTH Royal Institute of Technology

October 31, 2016

Systems Engineering (SF2863) 7.5 HP

1 Definition of Stationary Markov Chains

2 Classifications of states in Markov Chains
  - Properties of Markov Chains

3 Continuous time Markov processes
  - Markov Processes and the Exponential Distribution
  - Poisson processes

Markov Chain

Markov chains are a simple type of stochastic process that can be
used to model many different phenomena, such as queues, inventory
levels, games, music, speech, and biological systems.
We will consider the basic properties of Markov chains and in
particular focus on the theory needed for modelling later in the course.

For more information see the course book:


Introduction to Operations Research.
Edition Sections
9th 16.1-16.3.
10th 29.1-29.3.

Stochastic processes in discrete time

A stochastic process in discrete time can be thought of as a sequence
of random variables {Xt}_{t=t0}^∞, i.e. if t0 = 0:

X0, X1, X2, ...

We assume that each stochastic variable Xt takes values in some set
{1, 2, ..., M}. (M could be infinite)

Xt usually represents the state of a system at time t.

The evolution of such a process is in general very complex, so we will
consider a class of processes with a simple dependence.

Markov Chain
For Markov chains the transitions between two states only depend on
the current state.
Example: a diagram with two states, 'E' and 'A', and arrows indicating the possible transitions (figure).

Examples

Stochastic processes with state Xt where t = 0, 1, 2, ...

- Xt = wind condition {1 = calm, 2 = breeze, 3 = storm}
  at a particular place on day t.
- Xt = number of items in stock of a particular item on day t.
- Xt = accumulated sum of points after t rolls of a die.
- Xt = number of rabbits living on Gärdet at time t.
- Xt = number of complaint phone calls to the help desk on day t.
- Xt = condition of a patient {1 = stable, 2 = manic, 3 = depressive}
  on day t.

Which of these are Markov chains?

The Markovian property

The formal mathematical definition


Definition
A stochastic process {Xt} is said to have the Markovian property if

Pr(Xt+1 = j | X0 = k0, X1 = k1, ..., Xt = kt) = Pr(Xt+1 = j | Xt = kt)

for all t = 0, 1, ... and every sequence j, k0, k1, ..., kt.

It says that
“The conditional probability of a future event depends only on the
present state and not on all past states”

Markov Chains

Definition
A stochastic process {Xt } is said to be a Markov Chain if it has the
Markovian property.

Example: Finite memory

Is the stochastic process with state Xt = Xt−5 + Vt, where for
t = 0, 1, 2, ... the noise Vt is uncorrelated with Xs for s < t, a
Markov Chain?

No! Knowing Xt−5 improves the prediction of Xt beyond what Xt−1 alone gives.

Note that Pr(Xt | Xt−1, Xt−2, ..., Xt−5) ≠ Pr(Xt | Xt−1).

In this case the state space can be extended so as to obtain a Markov
Chain.

In fact, Yt = (Xt, Xt−1, Xt−2, Xt−3, Xt−4)^T is a Markov Chain.

Example: Finite memory
Note (using Xt = Xt−5 + Vt):

$$
Y_t = \begin{pmatrix} X_t \\ X_{t-1} \\ X_{t-2} \\ X_{t-3} \\ X_{t-4} \end{pmatrix}
    = \begin{pmatrix} X_{t-5} + V_t \\ X_{t-1} \\ X_{t-2} \\ X_{t-3} \\ X_{t-4} \end{pmatrix}
    = \begin{pmatrix} 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix}
      \underbrace{\begin{pmatrix} X_{t-1} \\ X_{t-2} \\ X_{t-3} \\ X_{t-4} \\ X_{t-5} \end{pmatrix}}_{Y_{t-1}}
    + \begin{pmatrix} V_t \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}
$$

Now Pr(Yt | Yt−1, Yt−2, ..., Yt−5) = Pr(Yt | Yt−1).


Stationarity

Definition
A stochastic process {Xt} has stationary transition probabilities if

Pr(Xt+1 = j | Xt = i) = Pr(X1 = j | X0 = i)

for all t = 1, 2, ... and all states i, j.

Which of the previous examples have stationary transition
probabilities?

We consider Markov Chains with stationary transition probabilities.

Define the one-step transition probabilities

pij = Pr(Xt+1 = j | Xt = i)

(which is independent of t)
The two-step transition probabilities

Let

pij^(2) = Pr(Xt+2 = j | Xt = i)

Using the law of total probability, Pr(A) = Σ_k Pr(A ∩ Bk), if the Bk form a
partition of the whole probability space (Bk disjoint, ∪Bk = Ω):

pij^(2) = Pr(Xt+2 = j | Xt = i) = Σ_{k=0}^{m} Pr(Xt+2 = j ∩ Xt+1 = k | Xt = i)

From the conditional probability equation Pr(A ∩ Bk) = Pr(A | Bk) Pr(Bk):

pij^(2) = Σ_{k=0}^{m} Pr(Xt+2 = j | Xt+1 = k ∩ Xt = i) Pr(Xt+1 = k | Xt = i)

       = Σ_{k=0}^{m} Pr(Xt+2 = j | Xt+1 = k) Pr(Xt+1 = k | Xt = i)   (by the Markovian property)

       = Σ_{k=0}^{m} pik pkj.
Transition matrices

Define the one-step transition matrix

$$
P = [p_{ij}] = \begin{pmatrix}
p_{00} & p_{01} & \cdots & p_{0m} \\
p_{10} & p_{11} & & \vdots \\
\vdots & & \ddots & \\
p_{m0} & \cdots & \cdots & p_{mm}
\end{pmatrix}
$$

Define the two-step transition matrix

$$
P^{(2)} = [p_{ij}^{(2)}] = \begin{pmatrix}
p_{00}^{(2)} & p_{01}^{(2)} & \cdots & p_{0m}^{(2)} \\
p_{10}^{(2)} & p_{11}^{(2)} & & \vdots \\
\vdots & & \ddots & \\
p_{m0}^{(2)} & \cdots & \cdots & p_{mm}^{(2)}
\end{pmatrix}
$$

Then we just showed that P^(2) = P^2.


Chapman-Kolmogorov

Theorem
For any n ≥ 0, m ≥ 0, it holds that P^(n+m) = P^(n) P^(m).

In particular P^(n) = P^n.

Next we will consider what happens when n → ∞.
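A quick numerical sanity check, as a minimal Python sketch; the two-state matrix is a made-up example, not from the course:

```python
import numpy as np

# Hypothetical two-state transition matrix (rows sum to one).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Two-step probabilities: P^(2) = P^2.
P2 = P @ P
print(P2)

# Chapman-Kolmogorov: P^(n+m) = P^(n) P^(m), here with n = 2, m = 3.
print(np.allclose(np.linalg.matrix_power(P, 5),
                  P2 @ np.linalg.matrix_power(P, 3)))   # True
```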

Unconditional state probabilities
Let

pi^(n) = Pr(Xn = i)

Then

pi^(1) = Pr(X1 = i) = Σ_{k=0}^{m} Pr(X1 = i | X0 = k) Pr(X0 = k) = Σ_{k=0}^{m} pki pk^(0)

i.e., p^(1) = p^(0) P if

p^(j) = ( p0^(j)  p1^(j)  ···  pm^(j) ).

Hence p^(n) = p^(0) P^n.

What happens when n → ∞?
In some cases P^(n) → P̄ and p^(n) → π where π are some steady-state
probabilities.
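Propagating the unconditional state probabilities is then a repeated vector-matrix product; a minimal sketch, reusing the hypothetical matrix from above:

```python
import numpy as np

P = np.array([[0.9, 0.1],     # hypothetical one-step transition matrix
              [0.5, 0.5]])
p = np.array([1.0, 0.0])      # assume the chain starts in state 0

for n in range(50):
    p = p @ P                 # p^(n+1) = p^(n) P
print(p)                      # settles near the steady-state probabilities
```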
Classifications of states in Markov Chains

It can be shown that the long-term behavior of a Markov Chain
depends on a number of properties of the states and the transitions
between states.
We will define these properties in terms of the one-step (and n-step)
transitions.

For more information see the course book:


Introduction to Operations Research.
Edition Sections
9th 16.4-16.7
10th 29.4-29.7

Notation (Reminder)

Assuming stationary Markov Chains

pij = Pr(Xt+1 = j | Xt = i)   (independent of t)

P = [pij]   (one-step transition matrix)

pij^(n) = Pr(Xt+n = j | Xt = i)

P^(n) = [pij^(n)]   (n-step transition matrix)

The Chapman-Kolmogorov equation implies that P^(n) = P^n.

Accessible states

Which state-to-state transitions are possible in a Markov Chain?
In one step, or in n steps?

Definition
State j is said to be accessible from state i if pij^(n) > 0 for some n ≥ 0.

Accessible means that it is not impossible to reach j from i.

Sometimes denoted i → j.

Communicating states

If it goes both ways:

Definition
States i and j communicate if
i is accessible from j, and
j is accessible from i.

Sometimes denoted i ↔ j.

States that communicate with each other form an equivalence class.
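Communication can be checked mechanically from the zero pattern of P via reachability; a sketch where the function and the three-state matrix are illustrative only, not from the course:

```python
import numpy as np

def communicating_classes(P):
    # States i, j communicate iff j is reachable from i and i from j.
    m = len(P)
    reach = (np.asarray(P) > 0) | np.eye(m, dtype=bool)   # n = 0 is allowed
    for _ in range(m):   # iterate until the reachability closure stabilizes
        reach = reach | ((reach.astype(int) @ reach.astype(int)) > 0)
    classes = []
    for i in range(m):
        cls = frozenset(j for j in range(m) if reach[i, j] and reach[j, i])
        if cls not in classes:
            classes.append(cls)
    return classes

P = [[0.5, 0.5, 0.0],
     [0.5, 0.5, 0.0],
     [0.0, 0.0, 1.0]]
print(communicating_classes(P))   # [frozenset({0, 1}), frozenset({2})]
```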

Equivalence Classes

A binary relation ∼ on a set X is an equivalence relation if it is

- Reflexive: a ∼ a for all a ∈ X
- Symmetric: if a ∼ b, then b ∼ a, for all a, b ∈ X
- Transitive: if a ∼ b and b ∼ c, then a ∼ c, for all a, b, c ∈ X

The equivalence class of a under ∼ is defined as

[a] = {b ∈ X | a ∼ b}

Irreducible Markov Chains

Definition
The Markov Chain is irreducible if there is only one class, i.e., all states
communicate with each other.

When we talk about classes we will from now on mean communication
equivalence classes.

Transient states

Definition
A state is said to be transient if,
upon entering this state, there is a probability > 0 that the process
never returns to this state again.

State i is transient if, and only if, there exists a state j that is accessible
from i, but i is not accessible from j, i.e., i → j but j ↛ i.
If one state in a class is transient, then all states in the class are
transient.

Recurrent states

Definition
A state is said to be recurrent if
upon entering this state, the process definitely will return to this state
again.

State i is recurrent if, and only if, it is not transient.


If one state in a class is recurrent, then all states in the class are
recurrent.

Absorbing states

Definition
A state is said to be absorbing if
upon entering this state, the process never will leave this state.

State i is absorbing if, and only if, pii = 1.

Periodicity
Let N(i) = { n ≥ 1 : pii^(n) > 0 }.

Definition
The period of state i is defined by

d(i) = gcd(N(i)) if N(i) ≠ ∅,   d(i) = 0 if N(i) = ∅.

gcd denotes the greatest common divisor.

If states i and j communicate, then d(i) = d(j).

Definition
State i is aperiodic if d(i) = 1.

State i is aperiodic if pii > 0 (but not only if).

Periodic states

Consider a Markov Chain with three states where p12 = p23 = p31 = 1.

Then N(1) = {3, 6, 9, 12, ...},
and d(1) = gcd(N(1)) = 3.

So the period of all states is 3.
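The period can also be computed numerically as the gcd of the observed return times up to some horizon; a sketch checking this cycle with 0-indexed states (the function is illustrative, not from the course):

```python
from math import gcd
import numpy as np

def period(P, i, n_max=50):
    # gcd of the return times n <= n_max with (P^n)_{ii} > 0 (truncated sketch)
    P = np.asarray(P, dtype=float)
    d, Pn = 0, np.eye(len(P))
    for n in range(1, n_max + 1):
        Pn = Pn @ P
        if Pn[i, i] > 1e-12:
            d = gcd(d, n)
    return d

# The three-state cycle above, 0-indexed: p01 = p12 = p20 = 1.
P = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]
print(period(P, 0))   # 3
```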

Periodic states

Consider a Markov Chain with ten states where arcs indicate positive
transition probabilities.

Then N(1) = {8, 10, 16, 18, 20, 24, 26, 28, 30, ...}, and
d(1) = gcd(N(1)) = 2.

So the period of all states is 2.

Ergodicity

Definition
In a finite-state Markov Chain, recurrent states that are aperiodic are
called ergodic states.
A Markov Chain is said to be ergodic if all states are ergodic states.

Steady state properties

Theorem
For any irreducible ergodic Markov Chain, lim_{n→∞} pij^(n) exists and is
independent of i.
Furthermore, lim_{n→∞} pij^(n) = πj > 0, where π = (π0, π1, ..., πM) satisfies the
steady state equations

π = πP,   Σ_{j=0}^{M} πj = 1.

Example 1

Consider a Markov Chain with two states where

$$
P = \begin{pmatrix} p_{00} & p_{01} \\ p_{10} & p_{11} \end{pmatrix}
  = \begin{pmatrix} 0 & 1 \\ 0.4 & 0.6 \end{pmatrix}
$$

Does it satisfy the criteria for the theorem?

Yes, first p01, p10 > 0, so the states are communicating and the chain is
irreducible.
Second, note that p11 > 0, so state 1 is aperiodic and then the chain is
aperiodic, hence it is ergodic.

Example 1

Do the steady state equations

π = πP,   Σ_{j=0}^{M} πj = 1,

have a solution?

π0 = 0.4π1,   π1 = π0 + 0.6π1,   π0 + π1 = 1

There is a unique solution π = (2/7, 5/7).

So there are positive probabilities to find the system in state 0 or 1.
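The steady state equations are linear, so they can be solved by replacing one (redundant) equation with the normalization condition; a minimal numpy sketch for this chain:

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [0.4, 0.6]])
m = len(P)

# pi = pi P  <=>  (P^T - I) pi^T = 0; replace the last equation with sum = 1.
A = P.T - np.eye(m)
A[-1, :] = 1.0
b = np.zeros(m)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)   # [2/7, 5/7] ≈ [0.2857, 0.7143]
```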

Example 2

Consider a Markov Chain with four states where

$$
P = \begin{pmatrix}
p_{00} & p_{01} & p_{02} & p_{03} \\
p_{10} & p_{11} & p_{12} & p_{13} \\
p_{20} & p_{21} & p_{22} & p_{23} \\
p_{30} & p_{31} & p_{32} & p_{33}
\end{pmatrix}
= \begin{pmatrix}
0.3 & 0.7 & 0 & 0 \\
0.2 & 0 & 0.4 & 0.4 \\
0 & 0 & 1 & 0 \\
0 & 0.1 & 0 & 0.9
\end{pmatrix}
$$

Does it satisfy the criteria for the theorem?
Note that p22 = 1, which means that state 2 is absorbing, hence the
chain is not irreducible. States 0, 1 and 3 are transient.

Example 2

Do the steady state equations

π = πP,   Σ_{j=0}^{M} πj = 1,

have a solution?
There is a unique solution π = (0, 0, 1, 0).
In the long run the chain will end up in the absorbing state.
Since the chain is not irreducible, πj > 0 will not hold for all j.

Example: Duel

Two persons, A and B, engage in a duel with water-guns.
They shoot at each other, taking turns. A starts.
The probability that A hits B is pA. Let qA = 1 − pA.
The probability that B hits A is pB. Let qB = 1 − pB.
This can be modelled as a Markov Chain with four states, where

Xt = 1 if it is A's turn to shoot,
     2 if it is B's turn to shoot,
     3 if A has shot B,
     4 if B has shot A.

There are two absorbing states, 3 and 4.
States 1 and 2 are both transient.
Not irreducible.

Example: Duel

   
$$
P = \begin{pmatrix}
p_{11} & p_{12} & p_{13} & p_{14} \\
p_{21} & p_{22} & p_{23} & p_{24} \\
p_{31} & p_{32} & p_{33} & p_{34} \\
p_{41} & p_{42} & p_{43} & p_{44}
\end{pmatrix}
= \begin{pmatrix}
0 & q_A & p_A & 0 \\
q_B & 0 & 0 & p_B \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
$$

Now let pA = 1/3 and pB = 1/2.

Do the steady state equations

π = πP,   Σ_j πj = 1,

have a solution?

Example: Duel
Yes, but there is no unique solution.
All π = (0, 0, π3, π4) such that π3 + π4 = 1 are solutions.
The solution depends on the starting state.

p(0) = (1, 0, 0, 0)   (A starts shooting)
p(1) = p(0)P = (0, 2/3, 1/3, 0)
p(2) = p(0)P^2 = (1/3, 0, 1/3, 1/3)
p(3) = p(0)P^3 = (0, 2/9, 4/9, 3/9)
p(∞) = p(0)P^∞ = (0, 0, 1/2, 1/2)

Which indicates that the game is fair!
Perron-Frobenius

The mathematical background of the previous theorem is

Theorem
If P is the transition matrix of a finite irreducible chain with period d,
then
1. λ0 = 1 is an eigenvalue of P,
2. the d complex roots of unity

   λ0 = 1, λ1 = ω, ..., λd−1 = ω^{d−1},

   where ω = e^{2πi/d}, are eigenvalues of P,
3. the remaining eigenvalues λd, ..., λm satisfy |λk| < 1.
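A numerical illustration of the theorem on the period-3 cycle from the earlier slide (0-indexed states); here d equals the number of states, so no eigenvalues are left over with modulus < 1:

```python
import numpy as np

P = np.array([[0, 1, 0],     # the period-3 cycle p01 = p12 = p20 = 1
              [0, 0, 1],
              [1, 0, 0]])
lam = np.linalg.eigvals(P)
print(np.round(lam, 3))      # 1 and e^{±2πi/3}: the three 3rd roots of unity
print(np.abs(lam))           # all of modulus one
```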

Expected First Passage times

Let µij = expected number of steps from state i to the first visit to state j.
Then for fixed j and i = 0, 1, ..., m,

µij = E(# steps) = Σ_{k=0}^{m} E(# steps | first step is i to k) pik
    = 1·pij + Σ_{k≠j} (1 + µkj) pik = 1 + Σ_{k≠j} µkj pik.

These are m + 1 linear equations in m + 1 unknowns, that can be
solved for the expected first passage times µij (from all states i ≠ j).
Furthermore, the expected recurrence time,
µii = expected number of steps from state i until it returns to state i,
is given by 1/πi.
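Writing the equations as (I − P₀)µ = 1, where P₀ is P with column j zeroed out, gives the whole column of first passage times at once; a sketch using the two-state chain from Example 1:

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [0.4, 0.6]])
m = len(P)
j = 0                       # target state (an arbitrary choice)

Pz = P.copy()
Pz[:, j] = 0.0              # drops the k = j terms from the sum

# mu_ij = 1 + sum_{k != j} p_ik mu_kj  for all i  <=>  (I - Pz) mu = 1
mu = np.linalg.solve(np.eye(m) - Pz, np.ones(m))
print(mu)                   # [3.5, 2.5]; mu[0] = mu_00 = 1/pi_0 = 7/2
```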

Absorbing states

Let fik = probability of absorption in state k, given that the system starts
in state i.
Then for fixed k and i = 0, 1, ..., m,

fik = Σ_{j=0}^{m} Pr(absorption in k | first step is i to j) pij = Σ_{j=0}^{m} pij fjk,

where fkk = 1 and fik = 0 if state i ≠ k is recurrent.

For the duel example:

f13 = (2/3)f23 + (1/3)·1,   f23 = (1/2)f13,

which gives f13 = 1/2 and f23 = 1/4.
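Equivalently, with T the transient-to-transient block and R the transient-to-absorbing block, the equations read (I − T)F = R; a sketch for the duel (0-indexed states, ordering assumed as on the slide):

```python
import numpy as np

pA, pB = 1/3, 1/2
# 0: A shoots, 1: B shoots, 2: A has shot B, 3: B has shot A
P = np.array([[0,      1 - pA, pA, 0 ],
              [1 - pB, 0,      0,  pB],
              [0,      0,      1,  0 ],
              [0,      0,      0,  1 ]])

T = P[:2, :2]   # transient -> transient block
R = P[:2, 2:]   # transient -> absorbing block

# f = T f + R  =>  (I - T) f = R
F = np.linalg.solve(np.eye(2) - T, R)
print(F)        # [[1/2, 1/2], [1/4, 3/4]]: f13 = 1/2, f23 = 1/4
```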

Continuous time Markov processes

Many of the stochastic processes we consider change states at
arbitrary times, so we should model time as a continuous variable.

We will review the basic properties of continuous-time Markov
processes and in particular focus on the theory needed for modelling
later in the course.

For more information see the course book:


Introduction to Operations Research.
Edition Sections
9th 16.8, 17.4
10th 29.8, 17.4

Continuous time Markov processes

Now let the state X(t) depend on a continuous variable t, which is
usually the time.

How do we describe this process?

When does a state transition occur?
Is the time between transitions a stochastic variable?
And if it is, what kind of distribution does it have?

Can we develop a similar framework as for the discrete time Markov
chains?

The Markovian property
Definition
A continuous time stochastic process {X(t) | t ≥ 0} is said to have the
Markovian property if

Pr(X(t + s) = j | X(s) = i, X(r) = k) = Pr(X(t + s) = j | X(s) = i)

for all i, j, k ∈ {0, 1, ..., M}, t > 0 and 0 ≤ r < s.

It says that
“The conditional probability of a future event depends only on the
present state and not on all past states”
Definition
A stochastic process {X (t)} is said to be a Markov process if it has the
Markovian property.
Stationarity
Definition
A stochastic process {X(t)} has stationary transition probabilities if

Pr(X(t + s) = j | X(s) = i) = Pr(X(t) = j | X(0) = i)

for all t, s > 0 and all states i, j.

Transition probabilities only depend on the relative times between
transitions.
We consider Markov processes with stationary transition probabilities.
Define the continuous time transition probability function
pij(t) = Pr(X(t) = j | X(0) = i) for t > 0.

We assume that lim_{t→0+} pij(t) = 1 if i = j, and 0 if i ≠ j.
Transition matrices

Define the transition matrix function

$$
P(t) = [p_{ij}(t)] = \begin{pmatrix}
p_{00}(t) & p_{01}(t) & \cdots & p_{0m}(t) \\
p_{10}(t) & p_{11}(t) & & \vdots \\
\vdots & & \ddots & \\
p_{m0}(t) & \cdots & \cdots & p_{mm}(t)
\end{pmatrix}
$$

Theorem (Chapman-Kolmogorov)
For any s, t ≥ 0, it holds that P(s + t) = P(s)P(t).

By assumption we have P(0) = I.

Unconditional state probabilities

Let

pi(t) = Pr(X(t) = i)

Then

pi(t + s) = Σ_{k=0}^{m} Pr(X(t + s) = i | X(t) = k) Pr(X(t) = k) = Σ_{k=0}^{m} pki(s) pk(t)

i.e., p(t + s) = p(t)P(s) if p(t) = ( p0(t)  p1(t)  ···  pm(t) ).
Hence p(t) = p(0)P(t).

What happens when t → ∞?

In some cases P(t) → P̄ and p(t) → π where π are some steady-state
probabilities.

Transition rate matrix

Take derivatives:

ṗ(t) = lim_{h→0+} [p(t + h) − p(t)] / h = lim_{h→0+} [p(t)P(h) − p(t)I] / h
     = p(t) lim_{h→0+} [P(h) − I] / h = p(t)Q

Note that Q is independent of t.
Now ṗ(t) = p(t)Q implies that p(t) = p(0)e^{Qt}.
That is, P(t) = e^{Qt}.
Q is called the transition rate matrix:

$$
Q = [q_{ij}] = \begin{pmatrix}
q_{00} & q_{01} & \cdots & q_{0m} \\
q_{10} & q_{11} & & \vdots \\
\vdots & & \ddots & \\
q_{m0} & \cdots & \cdots & q_{mm}
\end{pmatrix}
$$
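Numerically, P(t) = e^{Qt} is a matrix exponential; a minimal sketch with a made-up two-state rate matrix, using scipy:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-state rate matrix (row sums are zero).
Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])

P_t = expm(Q * 0.5)                  # P(t) = e^{Qt} at t = 0.5
print(P_t, P_t.sum(axis=1))          # rows of P(t) sum to one

h = 1e-3                             # small-step check: P(h) ≈ I + Qh
print(np.allclose(expm(Q * h), np.eye(2) + Q * h, atol=1e-5))
```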
Transition rate matrix

Since Q = lim_{h→0+} [P(h) − I] / h, we have that for a small time step h

P(h) = e^{Qh} ≈ I + Qh.

Then
pii(h) ≈ 1 + h·qii
is the probability of no jump in the next time interval of length h.
Note: qii ≤ 0 is necessary to get a probability.

And
pij(h) ≈ h·qij
is the probability of a jump from i to j in the next time interval of length h.
Note: qij ≥ 0 is necessary to get a probability.
Transition rates
qij can be interpreted as the transition rate from state i to j,
i.e. the average number of jumps from i to j in one time unit.

For qii the interpretation is a bit different.

Since all probabilities pij(h) for j = 0, ..., m add to one, we note that

1 = Σ_{j=0}^{m} pij(h) ≈ 1 + Σ_{j=0}^{m} qij h,   so   Σ_{j=0}^{m} qij = 0.

I.e., all the row sums of the Q matrix are zero.

Since qii is negative, one usually defines the transition rate out of i as

qi = −qii = Σ_{j≠i} qij,

i.e. the average number of jumps out from i in one time unit.
Stationarity

If p(t) → π as t → ∞, then ṗ(t) → 0, so we must have

πQ = 0.

When is this expression sufficient?

Classification of states

Similarly to the Markov chain case we define accessible states, and


then:
Definition
States i and j communicate if there exists t1 and t2 such that pij (t1 ) > 0
and pji (t2 ) > 0.

Definition
The Markov Chain is irreducible if all states communicate with each
other.

Stationarity

Theorem
For a finite irreducible Markov process there always exists unique
steady state probabilities π that solve the steady state equations
m
X
πQ = 0, πj = 1.
j=0

Theorem
If the process is not finite, the existence of steady-state probabilities is
equivalent to the existence of solutions to the steady state equations.

Example 1

Consider a Markov process with two states where

$$
Q = \begin{pmatrix} q_{00} & q_{01} \\ q_{10} & q_{11} \end{pmatrix}
$$

Do the steady state equations have a solution?

πQ = 0,   Σ_{j=0}^{M} πj = 1.

Balance equation at state 0:
flow out of state 0, π0 q01, = flow in to state 0, π1 q10.
Balance equation at state 1:
flow out of state 1, π1 q10, = flow in to state 1, π0 q01.
Together with π0 + π1 = 1, this gives

π0 = q10 / (q10 + q01),   π1 = q01 / (q10 + q01).
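A numerical check of the balance-equation solution, with made-up rates:

```python
import numpy as np

q01, q10 = 2.0, 1.0                  # hypothetical transition rates
Q = np.array([[-q01,  q01],
              [ q10, -q10]])

# pi Q = 0 with sum(pi) = 1: replace one equation by the normalization.
A = Q.T.copy()
A[-1, :] = 1.0
pi = np.linalg.solve(A, np.array([0.0, 1.0]))
print(pi)                                       # [1/3, 2/3]
print(q10 / (q10 + q01), q01 / (q10 + q01))     # matches the formula
```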
The time to next transition

Let Ti denote the time that the process remains in state i until the next
transition.
The cumulative distribution function of Ti is F_Ti(t) = Pr(Ti ≤ t).
Aim: Determine F_Ti(t).
Do this by determining a differential equation for F_Ti.
By definition F'_Ti(t) = lim_{h→0+} [F_Ti(t + h) − F_Ti(t)] / h.
Note that

Pr(Ti ≤ t + h | Ti > t) = Pr(Ti ≤ t + h ∩ Ti > t) / Pr(Ti > t)
                        = [F_Ti(t + h) − F_Ti(t)] / [1 − F_Ti(t)].

Then

F'_Ti(t) = lim_{h→0+} [1 − F_Ti(t)] / h · Pr(Ti ≤ t + h | Ti > t).
The time to next transition

Using that

Pr(Ti ≤ t + h | Ti > t) = Pr(jump in [t, t + h]) = h·qi + o(h)

gives that

F'_Ti(t) = (1 − F_Ti(t)) qi.

Since F_Ti(0) = 0, it follows that F_Ti(t) = 1 − e^{−qi t},
i.e., Ti is exponentially distributed with intensity qi = 1/E(Ti).

The time to transition from i to j
Let Tij denote the time that the process remains in state i before
jumping to state j, unless it jumps somewhere else first.

It can be shown that Tij is exponentially distributed with intensity
qij = 1/E(Tij).

Furthermore, Ti = min_{j≠i} Tij.

Note that Ti = min_{j≠i} Tij ∈ Exp( Σ_{j≠i} qij ) = Exp(qi).   (Property 3 - 17.4)

In addition to this,

pij = probability that the process jumps to j, if it jumps, = qij / qi.

Derivation of expression for pij

pij = probability that the process jumps to j, if it jumps.

Then

pij(h) = qij h + o(h) = Pr(i → j in time [0, h])
       = Pr(any jump in time [0, h] ∩ (i → j))

and using that these events are independent,

       = Pr(any jump in time [0, h]) Pr(i → j) = (1 − (1 + qii h + o(h))) pij.

Letting h → 0+ we get pij = qij / (−qii).

Alternative description of Markov process

Each time the process enters state i, it stays there for a stochastic time
Ti before it jumps to a new state, where

1. Ti ∈ Exp(qi)
2. pij is the probability that the next state is j
3. the next state visited after state i is independent of the time spent
   in state i

Exponential distribution

If T is exponentially distributed with rate q, i.e., T ∈ Exp(q), then

F_T(t) = Pr(T ≤ t) = 1 − e^{−qt} for t ≥ 0,

f_T(t) = q e^{−qt} if t ≥ 0, and 0 if t < 0,

E(T) = 1/q,   Var(T) = 1/q².

Lack of memory property

If T is exponentially distributed, then

Pr(T > t + ∆t | T > t) = Pr(T > ∆t).

If the event has not happened at time t, the probability that it will
happen in the next ∆t time units is the same as it was when we started
waiting.
Proof: (Property 2 - 17.4)

Pr(T > t + ∆t | T > t) = Pr((T > t + ∆t) ∩ (T > t)) / Pr(T > t)
                       = Pr(T > t + ∆t) / Pr(T > t)
                       = e^{−q(t+∆t)} / e^{−qt} = e^{−q∆t} = Pr(T > ∆t).

Poisson processes
Suppose that people arrive at a queue without service, or at a desert
island, with interarrival times Ti ∈ Exp(q) for all i = 1, 2, 3, ...
Then {X(t)}_{t≥0} is a special Markov process, the Poisson process,
where
X(t) = number of people in the queue at time t.

Then Pr(X(t) = n) = (qt)^n e^{−qt} / n!,
i.e., X(t) is Poisson distributed with expected value E(X(t)) = qt.
This is a birth process with constant birth rate q.

Three definitions of Poisson processes

Exponentially distributed interarrival times: The interarrival times are
independent and exponentially distributed with intensity q.

Birth process: In any infinitesimal interval of length h there may only
occur one arrival. The probability of an arrival is qh and is independent
of other arrivals outside the interval.

Poisson distribution: The number of arrivals in an interval of length T is
Poisson(qT), and is independent of arrivals in other non-overlapping
intervals.
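The three descriptions can be cross-checked by simulation; a sketch that builds sample paths from Exp(q) interarrival times and recovers the Poisson(qT) count statistics (the rate and horizon are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
q, T, runs = 3.0, 2.0, 10_000

# Count the arrivals landing in [0, T] along each simulated path.
counts = np.empty(runs, dtype=int)
for r in range(runs):
    t, n = rng.exponential(1 / q), 0
    while t <= T:
        n += 1
        t += rng.exponential(1 / q)
    counts[r] = n

print(counts.mean(), counts.var())   # both ≈ qT = 6, as for Poisson(qT)
```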

Aggregation and disaggregation of Poisson Processes

The sum of two Poisson processes is a Poisson process.

If {Xt}_{t≥0} is a Poisson process with rate λX and {Yt}_{t≥0} is a Poisson
process with rate λY, then {Zt = Xt + Yt}_{t≥0} is a Poisson process with
rate λZ = λX + λY.

The disaggregation of a Poisson process creates new Poisson
processes.

If {Zt}_{t≥0} is a Poisson process with rate λZ, and two new arrival
processes {Xt}_{t≥0} and {Yt}_{t≥0} are created by letting each arrival in
the Z process be allocated to the X process with probability pX and
otherwise to the Y process, then the X and Y processes are Poisson
processes with rates λX = pX λZ and λY = (1 − pX)λZ.

The hitchhiker’s paradox

A hitchhiker arrives at a road at a random time instant.
Cars pass the hitchhiker according to a Poisson process.
The mean time between passing cars is 10 minutes.

What is the mean waiting time until the next car passes the hitchhiker?

From the memoryless property the time to the next car is exponentially
distributed with mean value 10 minutes,
so the expected time is 10 minutes.

The hitchhiker’s paradox

Explanation: The probability that the hitchhiker arrives during a long
interarrival interval is greater than for a short one.

The average waiting time over a long period of time τ is

$$
W = \frac{1}{\tau}\int_0^{\tau} W(t)\,dt \approx \frac{1}{\tau}\sum_{i=1}^{n} \tfrac{1}{2} T_i^2
$$

We note that n ≈ τ/E(Ti) for large τ, and then

$$
W \approx \frac{1}{n E(T_i)} \sum_{i=1}^{n} \tfrac{1}{2} T_i^2
  \to \frac{1}{n E(T_i)} \cdot \tfrac{1}{2}\, n E(T_i^2)
  = \frac{2 E(T_i)^2}{2 E(T_i)} = E(T_i) = 10 \text{ min},
$$

where we used that E(Ti²) = 2(E(Ti))² for Ti ∈ Exp.
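The length-biasing argument is easy to reproduce by simulation; a sketch that drops many hitchhikers uniformly onto a simulated Poisson stream of cars:

```python
import numpy as np

rng = np.random.default_rng(0)
mean_gap = 10.0                            # minutes between cars

gaps = rng.exponential(mean_gap, size=1_000_000)
arrivals = np.cumsum(gaps)                 # car passing times

# Uniformly random hitchhiker arrival instants on the timeline.
times = rng.uniform(0, arrivals[-1], size=100_000)
idx = np.searchsorted(arrivals, times)     # first car after each instant
waits = arrivals[idx] - times
print(waits.mean())                        # ≈ 10 minutes, not 5
```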
