ET4060 Fundamentals of Data Communication Networks
Lecture 06 – Part 01
Discrete Markov Chains
The overall picture …
• Markov Process
• Discrete Time Markov Chains
  • Homogeneous and non-homogeneous Markov chains
  • Transient and steady state Markov chains
• Continuous Time Markov Chains
  • Homogeneous and non-homogeneous Markov chains
  • Transient and steady state Markov chains
Markov Process
• Stochastic Process
• Markov Property
What is “Discrete Time”?
[Figure: a discrete time axis with slots labeled 0, 1, 2, 3, 4, 5]
What is “Stochastic Process”?
State Space = {SUNNY, RAINY}
$X_{\text{day } i} = \text{"S"}$ or $\text{"R"}$: a RANDOM VARIABLE that varies with the DAY, e.g.
$X_{\text{day } 1} = \text{"S"}$, $X_{\text{day } 3} = \text{"R"}$, $X_{\text{day } 5} = \text{"R"}$, $X_{\text{day } 7} = \text{"S"}$
[Figure: the weather observed on Day 1 through Day 7 (THU through WED)]
Notation: $X(t_k)$ or $X_k = x_k$
Markov Processes
Markov Process
The future of a process does not depend on its past, only on its present
$$\Pr\{X(t_{k+1}) = x_{k+1} \mid X(t_k) = x_k, \ldots, X(t_0) = x_0\} = \Pr\{X(t_{k+1}) = x_{k+1} \mid X(t_k) = x_k\}$$
Since we are dealing with “chains”, $X(t_i) = X_i$ can take discrete values from a finite or a countably infinite set.
The possible values of Xi form a countable set S called the state space of the
chain
For a Discrete-Time Markov Chain (DTMC), the notation is also simplified to
$$\Pr\{X_{k+1} = x_{k+1} \mid X_k = x_k, \ldots, X_0 = x_0\} = \Pr\{X_{k+1} = x_{k+1} \mid X_k = x_k\}$$
where $X_k$ is the value of the state at the $k$-th step.
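For instance, applying the Markov property to the sunny/rainy weather example above: the probability that tomorrow is rainy depends only on today's weather, not on the whole history,
$$\Pr\{X_{k+1} = R \mid X_k = S, X_{k-1} = R, \ldots, X_0 = x_0\} = \Pr\{X_{k+1} = R \mid X_k = S\}.$$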
General Model of a Markov Chain
[Figure: state transition diagram of the weather chain between SUNNY and RAINY, with $p_{SS} = 0.7$, $p_{SR} = 0.3$, $p_{RS} = 0.6$, $p_{RR} = 0.4$]
State Space: $S = \{\text{SUNNY}, \text{RAINY}\}$
If today is sunny, what is the probability of having sunny weather after one week?
If today is rainy, what is the probability that it stays rainy for three days?
Chapman-Kolmogorov Equation
Determine the transition probabilities from one state to another after n events.
Chapman-Kolmogorov Equations
Necessary condition: for all states $i$, instants $k$, and all feasible transitions from state $i$ we have
$$\sum_{j} p_{ij}(k) = 1,$$
where the sum runs over all neighbor states $j$ of state $i$.
The $n$-step transition probability is defined as
$$p_{ij}(k, k+n) \triangleq \Pr\{X_{k+n} = j \mid X_k = i\}$$
[Figure: a path from state $x_i$ at time $k$ to state $x_j$ at time $k+n$ through one of the intermediate states $x_1, \ldots, x_R$ at time $u$, with $k \le u \le k+n$]
Chapman-Kolmogorov Equations
$$p_{ij}(k, k+n) = \Pr\{X_{k+n} = j \mid X_k = i\} = \sum_{r=1}^{R} \Pr\{X_{k+n} = j \mid X_u = r, X_k = i\}\, \Pr\{X_u = r \mid X_k = i\}$$
Chapman-Kolmogorov Equations
$$p_{ij}(k, k+n) = \sum_{r=1}^{R} p_{ir}(k, u)\, p_{rj}(u, k+n), \qquad k \le u \le k+n$$
Chapman-Kolmogorov Equations
Example on the simple weather model
$$p_{\text{sunny,rainy}}(\text{day 1}, \text{day 3}) = p_{SS}(\text{day 1}, \text{day 2})\, p_{SR}(\text{day 2}, \text{day 3}) + p_{SR}(\text{day 1}, \text{day 2})\, p_{RR}(\text{day 2}, \text{day 3})$$
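Plugging in the numbers from the weather diagram above (a quick check of the formula, assuming the same transition probabilities apply on every day):
$$p_{SR}(\text{day 1}, \text{day 3}) = (0.7)(0.3) + (0.3)(0.4) = 0.21 + 0.12 = 0.33$$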
Transition Matrix
Simplify the transition probability representation
Define the n-step transition matrix as
$$\mathbf{H}(k, k+n) \triangleq \left[\, p_{ij}(k, k+n) \,\right]$$
We can re-write the Chapman-Kolmogorov equation as follows:
$$\mathbf{H}(k, k+n) = \mathbf{H}(k, u)\, \mathbf{H}(u, k+n)$$
Choose $u = k+n-1$; then
$$\mathbf{H}(k, k+n) = \mathbf{H}(k, k+n-1)\, \mathbf{H}(k+n-1, k+n) = \mathbf{H}(k, k+n-1)\, \mathbf{P}(k+n-1)$$
where $\mathbf{P}(k) = \mathbf{H}(k, k+1)$ is the one-step transition matrix at time $k$.
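Applying this recursion repeatedly unrolls the $n$-step transition matrix into a product of one-step transition matrices (a direct consequence of the relation above):
$$\mathbf{H}(k, k+n) = \mathbf{P}(k)\, \mathbf{P}(k+1) \cdots \mathbf{P}(k+n-1)$$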
Transition Matrix
Example on the simple weather model
For the weather model, with the states ordered as (SUNNY, RAINY), the one-step transition matrix collects the probabilities from the diagram:
$$\mathbf{P} = \begin{bmatrix} 0.7 & 0.3 \\ 0.6 & 0.4 \end{bmatrix}$$
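A minimal computational sketch (my own illustration, assuming the homogeneous weather matrix above; the variable and function names are mine) that iterates the recursion to obtain the $n$-step transition matrix and answers the earlier question about sunny weather one week ahead:

```python
import numpy as np

# One-step transition matrix of the weather chain, states ordered (SUNNY, RAINY)
P = np.array([[0.7, 0.3],
              [0.6, 0.4]])

def n_step_matrix(P, n):
    """n-step transition matrix: H(k, k+n) = P^n for a homogeneous chain."""
    H = np.eye(P.shape[0])
    for _ in range(n):
        H = H @ P          # H(k, k+m) = H(k, k+m-1) P
    return H

H7 = n_step_matrix(P, 7)
print(H7[0, 0])            # Pr{sunny in 7 days | sunny today}, roughly 0.667
```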
Homogeneous Markov Chains
Markov chains with time-homogeneous transition probabilities
$$p_{ij} \triangleq \Pr\{X_{k+1} = j \mid X_k = i\} = \Pr\{X_k = j \mid X_{k-1} = i\}$$
The one-step transition probabilities are independent of time k.
$$\mathbf{P}(k) = \mathbf{P} \quad \text{for all } k, \qquad \text{or} \qquad p_{ij} = \Pr\{X_{k+1} = j \mid X_k = i\}$$
$p_{ij} = \Pr\{X_{k+1} = j \mid X_k = i\}$ is said to be a stationary transition probability.
Even though the one-step transition probability is independent of $k$, this does not mean that the joint probability of $X_{k+1}$ and $X_k$ is also independent of $k$. Observe that:
$$\Pr\{X_{k+1} = j \text{ and } X_k = i\} = \Pr\{X_{k+1} = j \mid X_k = i\}\, \Pr\{X_k = i\} = p_{ij}\, \Pr\{X_k = i\}$$
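Combined with the Chapman-Kolmogorov recursion for the transition matrix, homogeneity means the product of one-step matrices collapses into a matrix power:
$$\mathbf{H}(k, k+n) = \mathbf{P}^n$$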
Two Minutes Break
You are free to discuss the previous slides with your classmates, to refresh a bit, or to ask questions.
Example: Two Processors System
Consider a two-processor computer system where time is divided into time slots and which operates as follows:
At most one job can arrive during any time slot and this can happen with
probability α.
Jobs are served by whichever processor is available, and if both are available
then the job is given to processor 1.
If both processors are busy, then the job is lost.
When a processor is busy, it can complete the job with probability β during any
one time slot.
If a job is submitted during a slot when both processors are busy but at least
one processor completes a job, then the job is accepted
(departures occur before arrivals).
Q1. Describe the automaton that models this system (not included).
Q2. Describe the Markov Chain that models this system.
Example: Automaton (not included)
Let the state be the number of jobs currently processed by the system; then the State Space is given by X = {0, 1, 2}.
Event set:
a: job arrival,
d: job departure
Feasible event set:
If X = 0, then Γ(X) = {a}
If X = 1, 2, then Γ(X) = {a, d}
[Figure: state transition diagram over states 0, 1, 2; each arc is labeled with the event or event combination (e.g. a, d, ad, dd) that triggers the transition]
Example: Alternative Automaton (not included)
Let (X1, X2) indicate whether processor 1 and processor 2 are busy, Xi ∈ {0, 1}.
Event set:
a: job arrival, di: job departure from processor i
Feasible event set:
If X = (0,0), then Γ(X) = {a}.     If X = (0,1), then Γ(X) = {a, d2}.
If X = (1,0), then Γ(X) = {a, d1}. If X = (1,1), then Γ(X) = {a, d1, d2}.
[Figure: state transition diagram over states 00, 01, 10, 11; each arc is labeled with the event or event combination (e.g. a, d1, d2, a d1, a d2, d1 d2) that triggers the transition]
Example: Markov Chain
For the State Transition Diagram of the Markov Chain, each transition is
simply marked with the transition probability
[Figure: state transition diagram of the Markov Chain over states 0, 1, 2, with each arc labeled by its transition probability expressed in terms of the arrival probability $\alpha$ and the service-completion probability $\beta$]
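As an illustration of how these probabilities are obtained (my own worked example, following the model rules stated earlier, with departures occurring before arrivals): from state 2 the chain returns to 0 if both processors finish and no job arrives, moves to 1 if exactly one processor finishes and no job arrives or if both finish and a job arrives, and stays in 2 otherwise, giving
$$p_{20} = (1-\alpha)\beta^2, \qquad p_{21} = 2\beta(1-\beta)(1-\alpha) + \alpha\beta^2, \qquad p_{22} = (1-\beta)^2 + 2\alpha\beta(1-\beta).$$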
Example: Markov Chain
$$\mathbf{P} = [p_{ij}] = \begin{bmatrix} 0.5 & 0.5 & 0 \\ 0.35 & 0.5 & 0.15 \\ 0.245 & 0.455 & 0.3 \end{bmatrix}$$
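A small sketch (my own check; the slide does not state the parameter values, but the assumed values α = 0.5 and β = 0.7 reproduce the matrix above) that builds the transition matrix directly from the model rules, with departures occurring before arrivals:

```python
import numpy as np

alpha, beta = 0.5, 0.7   # assumed arrival / service-completion probabilities

# Transition matrix of the two-processor system (departures occur before arrivals)
P = np.array([
    # from state 0: stay empty if no arrival, go to 1 if a job arrives
    [1 - alpha, alpha, 0.0],
    # from state 1: job finishes & no arrival -> 0; finishes & arrival, or
    # neither finishes nor arrives -> 1; no finish & arrival -> 2
    [beta * (1 - alpha),
     beta * alpha + (1 - beta) * (1 - alpha),
     (1 - beta) * alpha],
    # from state 2: both finish & no arrival -> 0; one finishes & no arrival,
    # or both finish & arrival -> 1; otherwise -> 2
    [beta ** 2 * (1 - alpha),
     2 * beta * (1 - beta) * (1 - alpha) + alpha * beta ** 2,
     (1 - beta) ** 2 + 2 * alpha * beta * (1 - beta)],
])

print(np.round(P, 3))   # matches [[0.5, 0.5, 0], [0.35, 0.5, 0.15], [0.245, 0.455, 0.3]]
print(P.sum(axis=1))    # each row sums to 1
```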
State Holding Time
How long does the process stay in a given state before it transitions to another state?
State Holding Times
Useful identity: $\Pr\{A, B \mid C\} = \Pr\{A \mid B, C\}\, \Pr\{B \mid C\}$
Suppose that at step $k$ the Markov Chain has transitioned into state $X_k = i$. An interesting question is how long it will stay in state $i$.
Let $V(i)$ be the random variable that represents the number of time slots for which the chain remains in state $i$.
We are interested in the quantity $\Pr\{V(i) = n\}$.
$$\begin{aligned}
\Pr\{V(i) = n\} &= \Pr\{X_{k+n} \ne i,\, X_{k+n-1} = i, \ldots, X_{k+1} = i \mid X_k = i\} \\
&= \Pr\{X_{k+n} \ne i \mid X_{k+n-1} = i, \ldots, X_k = i\}\, \Pr\{X_{k+n-1} = i, \ldots, X_{k+1} = i \mid X_k = i\} \\
&= \Pr\{X_{k+n} \ne i \mid X_{k+n-1} = i\}\, \Pr\{X_{k+n-1} = i \mid X_{k+n-2} = i, \ldots, X_k = i\}\, \Pr\{X_{k+n-2} = i, \ldots, X_{k+1} = i \mid X_k = i\}
\end{aligned}$$
State Holding Times
$$\begin{aligned}
\Pr\{V(i) = n\} &= \Pr\{X_{k+n} \ne i \mid X_{k+n-1} = i\}\, \Pr\{X_{k+n-1} = i \mid X_{k+n-2} = i, \ldots, X_k = i\}\, \Pr\{X_{k+n-2} = i, \ldots, X_{k+1} = i \mid X_k = i\} \\
&= (1 - p_{ii})\, \Pr\{X_{k+n-1} = i \mid X_{k+n-2} = i\}\, \Pr\{X_{k+n-2} = i \mid X_{k+n-3} = i, \ldots, X_k = i\}\, \Pr\{X_{k+n-3} = i, \ldots, X_{k+1} = i \mid X_k = i\} \\
&\;\;\vdots \\
\Pr\{V(i) = n\} &= (1 - p_{ii})\, p_{ii}^{\,n-1}
\end{aligned}$$
This is the Geometric Distribution with parameter $p_{ii}$.
Clearly, $V(i)$ has the memoryless property.
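A consequence worth noting (my addition; it follows directly from the geometric distribution above): the mean holding time in state $i$ is
$$E[V(i)] = \frac{1}{1 - p_{ii}},$$
so in the weather model, for instance, a sunny spell lasts on average $1/(1-0.7) \approx 3.3$ days.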
State Probabilities
Define the state probability $\pi_j(k) \triangleq \Pr\{X_k = j\}$. By the total probability theorem,
$$\pi_j(k) = \sum_{i} \pi_i(k-1)\, p_{ij}(k-1)$$
In vector form, one can write
$$\boldsymbol{\pi}(k) = \boldsymbol{\pi}(k-1)\, \mathbf{P}(k-1)$$
or, if the chain is homogeneous,
$$\boldsymbol{\pi}(k) = \boldsymbol{\pi}(k-1)\, \mathbf{P}$$
Markov Chain State Probabilities Example
Suppose that
$$\mathbf{P} = \begin{bmatrix} 0.5 & 0.5 & 0 \\ 0.35 & 0.5 & 0.15 \\ 0.245 & 0.455 & 0.3 \end{bmatrix} \quad \text{with} \quad \boldsymbol{\pi}(0) = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}$$
Find $\boldsymbol{\pi}(k)$ for $k = 1, 2, \ldots$
$$\boldsymbol{\pi}(1) = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0.5 & 0.5 & 0 \\ 0.35 & 0.5 & 0.15 \\ 0.245 & 0.455 & 0.3 \end{bmatrix} = \begin{bmatrix} 0.5 & 0.5 & 0 \end{bmatrix}$$
Transient behavior of the system
In general, the transient behavior is obtained by solving the difference equation
$$\boldsymbol{\pi}(k) = \boldsymbol{\pi}(k-1)\, \mathbf{P}$$
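A short sketch (my own illustration using the matrix and initial distribution from the example above) that iterates the difference equation to follow the transient behavior; the state probabilities settle toward the chain's steady-state distribution as $k$ grows:

```python
import numpy as np

# Transition matrix and initial distribution from the example above
P = np.array([[0.5,   0.5,   0.0 ],
              [0.35,  0.5,   0.15],
              [0.245, 0.455, 0.3 ]])
pi = np.array([1.0, 0.0, 0.0])   # pi(0): start in state 0

# Transient behavior: iterate the difference equation pi(k) = pi(k-1) P
for k in range(1, 11):
    pi = pi @ P
    print(k, np.round(pi, 4))
# k = 1 gives [0.5, 0.5, 0]; as k grows, pi(k) approaches the
# steady-state distribution of the chain.
```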