
Pramod Parajuli

Simulation and Modeling, CS-331

Markov Chains

(C) Pramod Parajuli, 2004

Source: www.csitnepal.com

Markov Chains

A Markov chain is a sequence of random values whose probability at each time interval depends only upon the value at the previous time.
A simple example is the non-returning random walk, where the walker is restricted from returning to the location just previously visited.
In this case, each possible position is known as a state (or condition).
The controlling factor in a Markov chain is the transition probability: a conditional probability for the system to go to a particular new state, given the current state of the system.
Markov Chains

Since the probability of moving from one state to another depends on the preceding (current) state, the transition probability is a conditional probability.
(blackboard example)
A Markov chain (process) has the following properties:
1. The set of all possible states of the stochastic (probabilistic) system is finite.
2. The variables move from one state to another, and the probability of a transition from a given state depends only on the present state of the system, not on how that state was reached.
3. The probabilities of reaching the various states from any given state are measurable and remain constant over time (i.e. throughout the system's operation).
Markov Chains

Markov chains are classified by their order:
- If the probability of occurrence of each state depends only upon the immediately preceding state, the chain is known as a first-order Markov chain.
- A zero-order Markov chain is a memoryless chain.

Matrix of Transition Probabilities

Let sj (s1, s2, …, sm; j = 1, 2, …, m) be the states of a system, and let pij (p11, p12, …, pmm) be the probability of moving from state si to state sj.
This gives the square matrix of size m x m, with rows indexed by the initial state and columns by the succeeding state:

                            Succeeding state
                             s1    s2   ...   sm
                       s1  [ p11   p12  ...  p1m ]
      Initial state    s2  [ p21   p22  ...  p2m ]
                       ..  [ ..    ..   ...  ..  ]
                       sm  [ pm1   pm2  ...  pmm ]

    P = [pij]mxm
Matrix of Transition Probabilities

If there is no transition between si and sj, then pij = 0.
If only one succeeding state can be selected while advancing, then pij = 1 for that transition.
The probability is distributed over the elements of each row, therefore:

    ∑ (j = 1 to m) pij = 1   for all i,   and   0 ≤ pij ≤ 1

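As an aside (not part of the original slides), here is a minimal Python sketch of these row constraints; the matrix values below are invented purely for illustration:

```python
import numpy as np

def is_valid_transition_matrix(P, tol=1e-9):
    """Check the two constraints from this slide:
    every p_ij lies in [0, 1] and every row sums to 1."""
    P = np.asarray(P, dtype=float)
    entries_ok = np.all((P >= -tol) & (P <= 1 + tol))
    rows_ok = np.allclose(P.sum(axis=1), 1.0, atol=tol)
    return bool(entries_ok and rows_ok)

# Hypothetical 3x3 example (values chosen only for illustration)
P = [[0.5, 0.5, 0.0],
     [0.2, 0.3, 0.5],
     [0.4, 0.0, 0.6]]
print(is_valid_transition_matrix(P))                          # True
print(is_valid_transition_matrix([[0.7, 0.7], [0.1, 0.9]]))   # False: first row sums to 1.4
```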
Diagrams

Two types;
1. Transition Diagram
2. Probability Tree Diagram

[Figure: Transition diagram for a three-state system, with arcs p12 (S1 → S2), p22 (S2 → S2), p23 (S2 → S3), p31 (S3 → S1) and p33 (S3 → S3).]

The equivalent transition matrix (rows = current state, columns = succeeding state):

           S1    S2    S3
    S1  [  0     p12   0   ]
    S2  [  0     p22   p23 ]
    S3  [  p31   0     p33 ]
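To make the diagram concrete, here is a small simulation sketch of a chain with the same structure as the matrix above; the numeric values chosen for p12, p22, p23, p31 and p33 are assumptions, not taken from the slides (note p12 = 1 because S2 is the only successor of S1):

```python
import random

# Transition probabilities with the structure shown above (rows: current state).
# The numbers are hypothetical; each row still sums to 1.
states = ["S1", "S2", "S3"]
P = {
    "S1": {"S1": 0.0, "S2": 1.0, "S3": 0.0},   # p12 = 1.0 (only one possible successor)
    "S2": {"S1": 0.0, "S2": 0.4, "S3": 0.6},   # p22 = 0.4, p23 = 0.6
    "S3": {"S1": 0.7, "S2": 0.0, "S3": 0.3},   # p31 = 0.7, p33 = 0.3
}

def simulate(start, n_steps, rng=random):
    """Generate one realisation of the chain for n_steps transitions."""
    path = [start]
    current = start
    for _ in range(n_steps):
        current = rng.choices(states, weights=[P[current][s] for s in states])[0]
        path.append(current)
    return path

print(simulate("S1", 10))
```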
Diagrams
[Figure: Probability tree diagram, showing the branching of possible states (S1 or S2) at each successive step from the initial state.]
Diagrams

The probabilities of the various states can be determined recursively.
If state 1 is the initial state (using the blackboard example, whose transition probabilities are p11 = 0.7, p12 = 0.3, p21 = 0.8, p22 = 0.2):

    P(state 1, shift n+1 | state 1, shift 1)
        = 0.7 * P(state 1, shift n | state 1, shift 1)
        + 0.8 * P(state 2, shift n | state 1, shift 1),        n = 1, 2, 3, …

Example:
    P(state 1, shift 3 | state 1, shift 1) = 0.7 (0.7) + 0.8 (0.3) = 0.73
Diagrams

P(state 1, shift 4 | state 1, shift 1)
    = 0.7 P(state 1, shift 3 | state 1, shift 1) + 0.8 P(state 2, shift 3 | state 1, shift 1)
    = 0.7 (0.73) + 0.8 (0.27)
    = 0.727

Equivalently, summing over all three-step paths from state 1 back to state 1:

P(state 1, shift 4 | state 1, shift 1)
    = 0.7 (0.7)(0.7) + 0.7 (0.3)(0.8) + 0.3 (0.8)(0.7) + 0.3 (0.2)(0.8)
    = 0.727

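The recursion can be checked numerically. The sketch below assumes the underlying two-state blackboard example has transition probabilities p11 = 0.7, p12 = 0.3, p21 = 0.8, p22 = 0.2 (inferred from the coefficients used above) and reproduces the 0.73 and 0.727 values:

```python
# Assumed two-state transition probabilities (inferred from the slide's coefficients).
p11, p12 = 0.7, 0.3
p21, p22 = 0.8, 0.2

# Start in state 1 at shift 1.
prob_s1, prob_s2 = 1.0, 0.0

for shift in range(2, 5):                      # compute shifts 2, 3 and 4
    prob_s1, prob_s2 = (prob_s1 * p11 + prob_s2 * p21,
                        prob_s1 * p12 + prob_s2 * p22)
    print(f"P(state 1, shift {shift} | state 1, shift 1) = {prob_s1:.3f}")

# Expected output:
#   shift 2 -> 0.700
#   shift 3 -> 0.730
#   shift 4 -> 0.727
```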
n-Step Transition Probabilities

Let's represent the initial situation by R0 as:
    R0 = [ p11, p12, p13, …, p1m ]
and let P = [pij]mxm be the transition probability matrix at time period n = 0.
Further, let R1 represent the situation after one execution of the experiment, i.e. n = 1:
    R1 = R0 x P
Similarly,
    R2 = R1 x P = R0 x P^2
    …
    Rn = Rn-1 x P = R0 x P^n

n-Step Transition Probabilities

Rn = Rn-1 x P = R0 x P^n
Here, P^n denotes the n-step transition matrix.

Sample calculation – blackboard demonstration:
1. Example 17.1 from page 716 (handouts)
2. Samples from PC Quest

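A minimal numpy sketch of the relation Rn = R0 x P^n, reusing the assumed two-state transition matrix from the earlier shift example (the values are illustrative, not from the handout example):

```python
import numpy as np
from numpy.linalg import matrix_power

# Assumed two-state transition matrix (same illustrative values as before).
P = np.array([[0.7, 0.3],
              [0.8, 0.2]])

R0 = np.array([1.0, 0.0])          # system starts in state 1

for n in (1, 2, 3):
    Rn = R0 @ matrix_power(P, n)   # R_n = R_0 x P^n
    print(f"R{n} = {Rn}")

# R2[0] ~ 0.73 and R3[0] ~ 0.727, matching the step-by-step shift calculation.
```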
Steady-state (equilibrium conditions)

As the number of periods (stages) increases, the probabilities approach a steady state (equilibrium).
In this state, the system becomes independent of time.

A Markov chain reaches an equilibrium state if:
1. The transition matrix elements remain positive from one period to the next, i.e. some power of the transition matrix has all positive elements (the regular property of a Markov chain).
2. It is possible to go from one state to another in a finite number of steps, regardless of the present state (the ergodic property).

Note: all regular Markov chains must be ergodic Markov chains, but the converse is not true.

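One simple numeric way to see the equilibrium (again using the assumed two-state matrix from the earlier example) is to iterate Rn+1 = Rn x P until the distribution stops changing; for a regular chain the result no longer depends on the starting distribution:

```python
import numpy as np

P = np.array([[0.7, 0.3],          # assumed regular two-state chain
              [0.8, 0.2]])

R = np.array([1.0, 0.0])           # any starting distribution works for a regular chain
for _ in range(1000):
    R_next = R @ P
    if np.allclose(R_next, R, atol=1e-12):   # stop once the distribution is stationary
        break
    R = R_next

print("steady state:", R)          # approximately [0.7273, 0.2727]
```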
Steady-state behavior of Markovian Systems

- The steady-state solution can be obtained mathematically.

Infinite population:
- Arrivals are assumed to follow a Poisson process with λ arrivals per unit time, and the inter-arrival time is exponentially distributed with mean 1/λ.
- The queue discipline is FIFO.
- Mathematically, a system is said to be in steady state provided the probability that the system is in a given state is not time dependent:

    P(L(t) = n) = Pn(t) = Pn

Steady-state behavior of Markovian Systems

- For simple models, the steady-state parameter L (the time-average number of customers in the system) can be computed as:

    L = ∑ (n = 0 to ∞) n · Pn

  (Ref. Lecture 8, Slide 46)

- If L is given, then the other steady-state parameters can be computed by using Little's equation:

    L  = λ · w
    wQ = w − (1/µ)
    LQ = λ · wQ

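A small sketch of how these steady-state parameters chain together through Little's equation; the values of λ, µ and L below are hypothetical inputs chosen only to show the arithmetic:

```python
# Hypothetical inputs: arrival rate, service rate, and a known time-average
# number of customers in the system L.
lam = 2.0     # arrivals per unit time
mu = 3.0      # services per unit time
L = 2.0       # assumed given (e.g. from the sum of n * P_n)

w = L / lam            # Little's equation: L = lambda * w
wQ = w - 1.0 / mu      # time in queue = time in system - mean service time
LQ = lam * wQ          # queue length via Little's equation again

print(f"w = {w:.3f}, wQ = {wQ:.3f}, LQ = {LQ:.3f}")
# w = 1.000, wQ = 0.667, LQ = 1.333
```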
M/G/1

Single-server queue with Poisson arrivals and unlimited capacity.

Mean service time = 1/µ, service-time variance = σ².
If ρ = λ/µ < 1 (i.e. λ < µ), the M/G/1 queue has a steady-state probability distribution, and ρ is the server utilization:

    ρ  = λ/µ

    L  = ρ + ρ²(1 + σ²µ²) / (2(1 − ρ))

    LQ = ρ²(1 + σ²µ²) / (2(1 − ρ))

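These two formulas translate directly into code. The sketch below is a straightforward implementation of the expressions above; the values of λ, µ and σ² in the example call are hypothetical:

```python
def mg1_measures(lam, mu, sigma2):
    """Steady-state L and LQ for an M/G/1 queue, using the formulas on this slide.
    Requires rho = lam / mu < 1."""
    rho = lam / mu
    if rho >= 1:
        raise ValueError("no steady state: rho must be < 1")
    LQ = rho**2 * (1 + sigma2 * mu**2) / (2 * (1 - rho))
    L = rho + LQ
    return L, LQ

# Hypothetical example: lam = 1.5 per hour, mean service time 1/mu = 0.5 hour,
# service-time variance sigma2 = 0.1.
L, LQ = mg1_measures(lam=1.5, mu=2.0, sigma2=0.1)
print(f"rho = {1.5 / 2.0}, L = {L:.4f}, LQ = {LQ:.4f}")
```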
M/G/1

Single-server queue with Poisson arrivals and unlimited capacity.
Let's look at these formulas a bit more closely.
Consider first the case σ² = 0, i.e. the service times are all the same (equal to the mean).
In this case the equations for L and LQ greatly simplify; in particular:

    LQ = ρ²(1 + 0·µ²) / (2(1 − ρ)) = ρ² / (2(1 − ρ))

Here LQ depends solely upon the server utilization ρ:
- As ρ → 0 (low server utilization), LQ → 0.
- As ρ → 1 (high server utilization), LQ → ∞.

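A short numeric check of this constant-service-time case, evaluating LQ = ρ²/(2(1 − ρ)) at a few arbitrary sample utilizations to illustrate the two limits:

```python
def lq_constant_service(rho):
    """LQ for an M/G/1 queue with sigma^2 = 0 (deterministic service times)."""
    return rho**2 / (2 * (1 - rho))

for rho in (0.1, 0.5, 0.9, 0.99):
    print(f"rho = {rho:4}:  LQ = {lq_constant_service(rho):8.3f}")

# LQ grows from ~0.006 at rho = 0.1 to ~49 at rho = 0.99,
# illustrating LQ -> 0 as rho -> 0 and LQ -> infinity as rho -> 1.
```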
M/G/1

Single-server queue with Poisson arrivals and unlimited capacity.

Again, ρ = L − LQ is the time-average number of customers being served.

Example: supplements from Banks and Nicol, pages 226–227.

M/M/1

(M/G/1 with exponential service times)

• Let's look at the simplest case, an M/M/1 queue with arrival rate λ and service rate µ.
• Consider the state of the system to be the number of customers in the system.
• We can then form a state-transition diagram for this system.
• A transition from state k to state k+1 occurs with rate λ.
• A transition from state k+1 to state k occurs with rate µ.

M/M/1

(M/G/1 with exponential service times)

• From this we can obtain the state-transition diagram:

    [Figure: birth-death state-transition diagram with states 0, 1, 2, …, k−1, k, k+1, …; each arc to the right (k → k+1) is labelled λ and each arc to the left (k+1 → k) is labelled µ.]

• We also know that:

    Pk = P0 ∏ (i = 0 to k−1) (λ/µ) = P0 (λ/µ)^k

• Since the sum of the probabilities in the distribution must equal 1, this will allow us to solve for P0:

    ∑ (k = 0 to ∞) Pk = 1

M/M/1

(M/G/1 with exponential service times)

Before completing the derivation, we must note an important requirement: to be stable, the system utilization must satisfy λ/µ < 1. Then:

    ∑ (k = 0 to ∞) Pk = 1

    ∑ (k = 0 to ∞) P0 (λ/µ)^k = 1

    P0 [ 1 + ∑ (k = 1 to ∞) (λ/µ)^k ] = 1

    P0 = 1 / ( 1 + ∑ (k = 1 to ∞) (λ/µ)^k )

M/M/1
(M/G/1 with exponential service times)

Summing the geometric series (which converges because λ/µ < 1):

    P0 = 1 / ( 1 + (λ/µ)/(1 − λ/µ) )
       = 1 / ( (1 − λ/µ)/(1 − λ/µ) + (λ/µ)/(1 − λ/µ) )
       = 1 / ( 1/(1 − λ/µ) )
       = 1 − λ/µ

• This is the solution for P0 for the M/G/1 queue given in Table 6.3.
• Utilizing this, we can substitute back to get:

    Pk = P0 (λ/µ)^k = (1 − λ/µ)(λ/µ)^k = (1 − ρ) ρ^k

• This is the formula indicated in Table 6.4.
• The other values can also be derived in a similar manner.

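As a final check on the derivation, this sketch computes P0 = 1 − ρ and Pk = (1 − ρ)ρ^k for assumed values of λ and µ and confirms that the (truncated) distribution sums to approximately 1:

```python
lam, mu = 2.0, 3.0                 # hypothetical arrival and service rates
rho = lam / mu                     # utilization, must be < 1 for stability
assert rho < 1

P0 = 1 - rho                       # matches the closed form derived above
Pk = [(1 - rho) * rho**k for k in range(50)]   # P_k = (1 - rho) * rho^k

print(f"P0 = {P0:.4f}")
print(f"sum of first 50 terms = {sum(Pk):.6f}")   # ~1 (geometric series)
```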
Homework

Read about 'Steady-state behavior for the finite-population model' and write an article about it.

Deadline: February 14, 2005
