Slides2

The document provides an overview of stochastic processes and Markov chains, defining key concepts such as state space, index parameters, and stationarity. It explains the characteristics of stochastic processes, including memoryless properties and ergodicity, and categorizes them into discrete and continuous types. Additionally, it discusses the analysis of discrete-time Markov chains, including state transition probabilities, first passage times, and steady-state behavior.


Stochastic Process and Markov Chains

David Tipper
Associate Professor
Graduate Telecommunications and Networking Program
University of Pittsburgh
[email protected]
http://www.sis.pitt.edu/~dtipper/tipper.html

Stochastic Processes
• A stochastic process is a mathematical model for describing an empirical process that changes in time according to some probabilistic forces.
• A stochastic process is a family of random variables {X(t), t ∈ T} defined on a given probability space S, indexed by the parameter t, where t is in an index set T.
• For each t ∈ T, X(t) is a random variable with F(x,t) = P{X(t) ≤ x}
• A realization of X(t) is called a sample path
• Characterization of a stochastic process:
1. State Space S
2. Index set T
3. Stationarity

Telcom 2130 2

Characteristics of Stochastic Processes
• State Space
– The values assumed by a random variable X(t) are called “states” and the collection of all possible values forms the “state space S” of the process.
– If X(t) = i, then we say the process is in state i.
– Discrete-state process
• The state space is finite or countable, for example the non-negative integers {0, 1, 2, …}.
– Continuous-state process
• The state space contains finite or infinite intervals of the real number line.


Characteristics of Stochastic Processes

• Index parameter
– The index T is usually taken to be the time parameter.
– Discrete-time process
• A process changes state (or makes a “transition”) at discrete or finite countable time instants.
– Continuous-time process
• A process may change state at any instant on the time axis.
• The probability that stochastic process X takes on a value i (i ∈ S) at time t is P[X(t) = i]
• Stationarity
– A stochastic process X(t) is strict sense stationary if the statistical properties are invariant to time shifts: f(x,t) = f(x) for all t.
Characteristics
• A stochastic process X(t) is wide sense stationary if
1. Mean is constant: E{X(t)} = K for all t
2. The autocorrelation R is only a function of the time difference:
R(t1, t2) = R(t2 − t1) = R(τ)

• Ergodicity
– A stochastic process X(t) is ergodic if its ensemble averages equal time averages
– => Any statistic of X(t) can be found from a sample path

E{X(t)} = ∫ x f(x,t) dx = lim(T→∞) (1/T) ∫0^T x(t) dt


Categories of Stochastic Processes

                         State Space
Time Parameter     Discrete State                  Continuous State
Discrete Time      Discrete time stochastic        Discrete time stochastic
                   chain                           process
Continuous Time    Continuous time stochastic      Continuous time stochastic
                   chain                           process

Stochastic Processes
♦ Important Stochastic Processes for Queueing System Analysis
- Markov Chains
- Markov Process
- Counting Process - Poisson Process
- Birth-Death Process

♦ In 1907 A. A. Markov defined and investigated a particular class of stochastic processes - now known as Markov processes/chains

Markov Process
• For a Markov process {X(t), t T, S}, with state
space S, its future probabilistic development is
dependent
p only
y on the current state,, how the
process arrives at the current state is irrelevant.
• Mathematically
– The conditional probability of any future state given
an arbitrary sequence of past states and the present
state depends only on the current state

• Memoryless property - The process starts afresh


at the time of observation and has no memory of
the past.


Discrete Time Markov Chains


• The discrete time and discrete state stochastic process {X(tk), k ∈ T} is a Markov Chain if the following conditional probability holds for all i, j and k (note Xi means X(ti)):
P[ Xk+1 = j | X0 = i0, X1 = i1, …, Xk−1 = ik−1, Xk = i ]
= P[ Xk+1 = j | Xk = i ]
= pij(k)  ← state transition probability at kth time step
• The future probability development of the chain depends only on its current state (kth instant) and not on how the chain has arrived at the current state (i.e., memoryless)

Discrete Time Markov Chains (2)
• pij(k) is the (one-step) transition probability, which is the probability of the chain going from state i to state j at time step tk
• pij(k) is in general a function of time tk. If it does not vary with time (independent of k), then the chain is said to have stationary transition probabilities and is time homogeneous: pij(k) = pij for all k
pij is the one step transition probability of going from state i to state j
• The state transition matrix P = [pij] characterizes the Markov chain.

Discrete Time Markov Chains (2)


• The one step state transition matrix P = [pij] is a stochastic matrix
1. 0 ≤ pij ≤ 1  (all elements between zero and one)
2. Σ(j∈S) pij = 1  (each row sums to one and is a density function)
⇒ λ1 = 1 is an eigenvalue of P and |λj| ≤ 1 for j = 2, 3, …

      | p00   p01   p02   p03   p04  …  p0,K−1   p0,K  |
      | p10   p11   p12   p13   p14  …  p1,K−1   p1,K  |
P  =  | p20   p21   p22   p23   p24  …  p2,K−1   p2,K  |
      |  …     …    p32   p33   p34  …    …       …   |
      | pK,0  pK,1  pK,2  pK,3  pK,4 …  pK,K−1   pK,K  |
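As an illustration (not from the slides), the two stochastic-matrix conditions above can be checked with a few lines of Python; the helper name is an assumed choice, and the matrix is the 4-state example chain used later in this deck:

```python
# Sketch: check that a one-step transition matrix P = [p_ij] is a valid
# stochastic matrix: every entry in [0, 1] and every row summing to one.

def is_stochastic(P, tol=1e-9):
    for row in P:
        if any(p < -tol or p > 1 + tol for p in row):
            return False
        if abs(sum(row) - 1.0) > tol:
            return False
    return True

# The 4-state example chain that appears later in the slides
P = [[0.2, 0.5, 0.3, 0.0],
     [0.1, 0.3, 0.6, 0.0],
     [0.0, 0.4, 0.3, 0.3],
     [0.0, 0.0, 0.5, 0.5]]

print(is_stochastic(P))  # True
```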
Discrete Time Markov Chains (2)
• For a small state space one can represent the state transition matrix P = [pij] as a state transition diagram
Consider the Time Homogeneous Markov Chain with one step transition matrix for the states {0, 1, 2, 3} given below.

      | .2  .5  .3  0  |
P  =  | .1  .3  .6  0  |
      | 0   .4  .3  .3 |
      | 0   0   .5  .5 |


Markov Chain Analysis Summary


• Markov chain {x(tk) : k ∈ N}
– one step transition matrix P = [pij]
– state probabilities πj(n) = P{x(tn) = j}
• General/transient behavior
1) π(n) = π(0)P(n) = π(0)(P)^n, evaluated by iterating π(k) = π(k−1)P, with computation O(nN^2)
2) π(n) = π(0)(P)^n with (P)^n formed by repeated squaring, with computation O(log2(n) N^3)
3) π(n) = π(0)(P)^n = π(0) T^−1 diag(λi^n) T, where T is the modal matrix (of eigenvectors of P), with computation O(N^3)
Also P(n), the n step transition matrix, is P(n) = (P)^n
• Steady state behavior π = lim(n→∞) π(n)
4) π = πP and Σ(i∈S) πi = 1
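Method 1 above can be sketched in plain Python by iterating π(n) = π(n−1)P (O(nN^2) for n steps); the function name is an illustrative choice, and the chain and initial condition match the worked example later in the deck:

```python
# Sketch of transient analysis: iterate pi(n) = pi(n-1) P.

def step(pi, P):
    """One iteration: pi(n) = pi(n-1) P (vector-matrix product)."""
    N = len(P)
    return [sum(pi[i] * P[i][j] for i in range(N)) for j in range(N)]

P = [[0.2, 0.5, 0.3, 0.0],   # the 4-state example chain from the slides
     [0.1, 0.3, 0.6, 0.0],
     [0.0, 0.4, 0.3, 0.3],
     [0.0, 0.0, 0.5, 0.5]]

pi = [0.0, 0.0, 0.0, 1.0]    # initial condition pi(0): start in state 3
for n in range(4):
    pi = step(pi, P)
print([round(x, 3) for x in pi])   # pi(4) = [0.026, 0.252, 0.43, 0.292]
```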

Markov Chain Analysis Summary
• 5. First Passage Time
The first passage time Tij is the number of transitions required to go
from state i to state j for the first time (recurrence time if i = j).
Let fij(n) = P{Tij = n}, that is, the probability that the first passage time is n steps:

fij(n) = P{x(tk+n) = j, x(tk+r) ≠ j, r = 1, 2, …, n−1 | x(tk) = i}

fij(1) = pij
fij(2) = pij(2) − fij(1) pjj
fij(n) = pij(n) − Σ(k=1 to n−1) fij(k) pjj(n−k),  n = 2, 3, …
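The recursion above can be coded directly, taking pij(n) from the n-step matrix P^n (C-K equation). This is a sketch with illustrative helper names; the printed values match the f31 computation worked later in the deck:

```python
# Sketch: first-passage probabilities
#   f_ij(n) = p_ij(n) - sum_{k=1}^{n-1} f_ij(k) p_jj(n-k)

def matmul(A, B):
    N = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def first_passage(P, i, j, nmax):
    """Return [f_ij(1), ..., f_ij(nmax)]."""
    powers = [P]                       # powers[n-1] = P^n
    for _ in range(nmax - 1):
        powers.append(matmul(powers[-1], P))
    f = []
    for n in range(1, nmax + 1):
        val = powers[n - 1][i][j] - sum(f[k - 1] * powers[n - k - 1][j][j]
                                        for k in range(1, n))
        f.append(val)
    return f

P = [[0.2, 0.5, 0.3, 0.0],
     [0.1, 0.3, 0.6, 0.0],
     [0.0, 0.4, 0.3, 0.3],
     [0.0, 0.0, 0.5, 0.5]]
print([round(x, 3) for x in first_passage(P, 3, 1, 3)])  # [0.0, 0.2, 0.16]
```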

Markov Chain Analysis Summary


• 6. Mean First Passage Time
The Mean First Passage Time E{Tij} is the average number of transitions required for the Markov Chain to go from state i to state j for the first time (recurrence time if i = j).

E{Tij} = Σ(n=1 to ∞) n fij(n)

Using a probability generating function approach one gets

E[Tij] = β(0) (I − Rj)^−1 e

where β(0) = [0, …, 0, 1, 0, …, 0] has a one only in the element corresponding to the starting state i, and
Rj = [pkl], k ≠ j, l ≠ j, is the one step transition matrix P with row j and column j removed
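A sketch of the E[Tij] = β(0)(I − Rj)^−1 e computation in plain Python, with a small Gaussian-elimination solver so no libraries are needed (helper names are illustrative; the chain and the result 6 match the worked example later in the deck):

```python
# Sketch: mean first passage time via E[T_ij] = beta (I - R_j)^{-1} e,
# where R_j is P with row j and column j deleted.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def mean_first_passage(P, i, j):
    keep = [s for s in range(len(P)) if s != j]          # states of R_j
    R = [[P[r][c] for c in keep] for r in keep]
    I_minus_R = [[(1.0 if r == c else 0.0) - R[r][c] for c in range(len(R))]
                 for r in range(len(R))]
    y = solve(I_minus_R, [1.0] * len(R))                 # (I - R_j)^{-1} e
    return y[keep.index(i)]                              # pick starting state i

P = [[0.2, 0.5, 0.3, 0.0],
     [0.1, 0.3, 0.6, 0.0],
     [0.0, 0.4, 0.3, 0.3],
     [0.0, 0.0, 0.5, 0.5]]
print(round(mean_first_passage(P, 3, 1), 6))  # 6.0
```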
Markov Chain Analysis Summary

• 7. Transform approach to π(n) and P(n)

π(n) = π(n−1) P
Π(z) = Z{π(n)} = Z{π(n−1)} P
⇒ Π(z) = π(0) + z Π(z) P
⇒ Π(z) = π(0) [I − zP]^−1
⇒ P(z) = [I − zP]^−1

Markov Chain Analysis Summary

• 8. State Holding Times

Let Hi be the random variable that represents the number of time slots the Markov Chain spends in state i

P{Hi = n} = P{x(tk+n) ≠ i, x(tk+r) = i, r = 1, 2, …, n−1 | x(tk) = i}
= P{x(tk+n) ≠ i | x(tk+n−1) = i} · P{x(tk+n−1) = i | x(tk+n−2) = i} ··· P{x(tk+1) = i | x(tk) = i}
= (1 − pii) pii^(n−1)

Note the holding time has a geometric distribution, which is the only memoryless discrete distribution.

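A quick numerical check of the geometric holding-time pmf (illustrative, with pii = 0.3 borrowed from state 1 of the example chain): the pmf sums to one and its mean is 1/(1 − pii).

```python
# Sketch: holding-time pmf P{H_i = n} = (1 - p_ii) p_ii**(n - 1),
# a geometric distribution with mean 1 / (1 - p_ii).

def holding_pmf(pii, n):
    return (1.0 - pii) * pii ** (n - 1)

pii = 0.3                   # e.g. p11 = 0.3 in the example chain
pmf = [holding_pmf(pii, n) for n in range(1, 200)]
mean = sum(n * holding_pmf(pii, n) for n in range(1, 200))
print(round(sum(pmf), 6))   # 1.0        (pmf sums to one)
print(round(mean, 6))       # 1.428571   (= 1 / 0.7)
```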
Markov Chain Example
One model of a discrete time bursty ATM traffic source is a two state Markov chain with one state representing ON and the other state representing OFF. When the source is in the ON state a cell is generated in every slot; when the source is in the OFF state no cell is generated.
Let a be the probability of transition from ON to OFF
Let t be the probability of transition from OFF to ON
The probabilities of making a transition from a state back to itself are 1 − a and 1 − t respectively for ON and OFF
The state transition diagram and state transition matrix P are

(ON) --a--> (OFF),  (OFF) --t--> (ON),  self loops: ON with 1 − a, OFF with 1 − t

            ON    OFF
P =  ON  [ 1−a    a  ]
     OFF [  t    1−t ]

Markov Chain Example

Determining the steady state probabilities π = [πON, πOFF]:

π = πP ⇒  πON = πON(1 − a) + πOFF · t  ⇒  πOFF = (a/t) πON
          πOFF = πON · a + πOFF(1 − t)

π e = 1 ⇒  πON + πOFF = 1  ⇒  πON (1 + a/t) = 1

substituting from above

πON = t / (a + t),   πOFF = a / (a + t)
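The closed form πON = t/(a + t) can be checked against iterating π(n) = π(n−1)P; a and t below are assumed example values, not taken from the slides:

```python
# Sketch: two-state ON/OFF source, closed-form steady state vs power iteration.

def on_off_steady_state(a, t):
    return t / (a + t), a / (a + t)

a, t = 0.3, 0.1            # assumed transition probabilities for the demo
P = [[1 - a, a], [t, 1 - t]]
pi = [1.0, 0.0]            # start in ON
for _ in range(200):       # pi(n) = pi(n-1) P converges to the steady state
    pi = [pi[0] * P[0][0] + pi[1] * P[1][0],
          pi[0] * P[0][1] + pi[1] * P[1][1]]
pi_on, pi_off = on_off_steady_state(a, t)
print(round(pi[0], 6), round(pi_on, 6))   # both 0.25
```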
Example
Consider the Time Homogeneous Markov Chain with one step
transition matrix for the states {0, 1, 2, 3} given below.

.2 .5 .3 0 
.1 .3 .6 0 
P
 0 .4 .3 .3 
 
0 0 .5 .5 
Find P(2), P(4), and P(16); what is noticeable about P(16)?
From the C-K equation P(n) = (P)^n

P(2) = P·P = | 0.09  0.37  0.45  0.09 |
             | 0.05  0.38  0.39  0.18 |
             | 0.04  0.24  0.48  0.24 |
             | 0     0.2   0.4   0.4  |


Example
Find P(2), P(4), and P(16); what is noticeable about P(16)?
From the C-K equation P(n) = (P)^n

P(4) = P(2)·P(2) = | 0.0446  0.2999  0.4368  0.2187 |
                   | 0.0391  0.2925  0.4299  0.2385 |
                   | 0.0348  0.2692  0.438   0.258  |
                   | 0.026   0.252   0.43    0.292  |

P(16) = (P)^16 = | 0.034015  0.272114  0.433674  0.260196 |
                 | 0.034015  0.272112  0.433674  0.260200 |
                 | 0.034014  0.272109  0.433674  0.260204 |
                 | 0.034012  0.272104  0.433674  0.260210 |

Notice all rows become about the same and approach the steady state probability π
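The convergence of the rows of P^n can be reproduced with a short script (matrix values from the example; the matmul helper is an illustrative choice). P^16 is built by squaring P^2 three times:

```python
# Sketch: compute P^2 and P^16 and observe that the rows of P^16
# are nearly identical (each row approaches pi).

def matmul(A, B):
    N = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

P = [[0.2, 0.5, 0.3, 0.0],
     [0.1, 0.3, 0.6, 0.0],
     [0.0, 0.4, 0.3, 0.3],
     [0.0, 0.0, 0.5, 0.5]]

P2 = matmul(P, P)
P16 = P2
for _ in range(3):                     # squaring: P^4, P^8, P^16
    P16 = matmul(P16, P16)

print([round(x, 2) for x in P2[0]])    # [0.09, 0.37, 0.45, 0.09]
spread = max(max(col) - min(col) for col in zip(*P16))
print(spread < 1e-4)                   # True: rows of P^16 nearly equal
```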
Example
Determine (n) for n = 1, 2,… 10 given the initial condition (0) = [0,0,0,1]

From  ( n )   ( n 1) P
(1)= (0)P=[0,
P [0 0,
0 0.5,
0 5 0.5]
0 5]
(2)= (1)P=[0, 0.2, 0.4, 0.4]
(3)= (2)P=[0.02, 0.22, 0.44, 0.32]
(4)= (3)P=[0.026, 0.252, 0.43, 0.292]
(5)= (4)P=[0.0304, 0.2606, 0.434, 0.275]
(6)= (5)P=[0.03214, 0.26698, 0.43318, 0.2677]
(7)= (6)P=[0.033126, 0.269436, 0.433634, 0.263804]
(8)= (7)P=[0.033569, 0.270847, 0.433592, 0.261992]
(9)= (8)P=[0.033799, 0.271475, 0.433653, 0.261074]
(10)= (9)P=[0.033907, 0.271803, 0.433658, 0.260633]


Example
Consider the Time Homogeneous Markov Chain with one step
transition matrix for the states {0, 1, 2, 3} given below.

.2 .5 .3 0 
.1 .3 .6 0 
P
 0 .4 .3 .3 
 
0 0 .5 .5 
Determining the steady state probability vector we get   lim   n 
n 

Solving   p and  i 1
iS

Results in 

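The steady state vector can be obtained numerically as the limit of π(n), iterating π(n) = π(n−1)P until the change is negligible and then checking the normalization:

```python
# Sketch: pi = lim pi(n) by power iteration on the example chain.

P = [[0.2, 0.5, 0.3, 0.0],
     [0.1, 0.3, 0.6, 0.0],
     [0.0, 0.4, 0.3, 0.3],
     [0.0, 0.0, 0.5, 0.5]]

pi = [0.25, 0.25, 0.25, 0.25]          # any valid starting distribution works
while True:
    nxt = [sum(pi[i] * P[i][j] for i in range(4)) for j in range(4)]
    if max(abs(nxt[j] - pi[j]) for j in range(4)) < 1e-12:
        break
    pi = nxt
print([round(x, 4) for x in pi])   # [0.034, 0.2721, 0.4337, 0.2602]
print(round(sum(pi), 6))           # 1.0  (normalization condition)
```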
Example
Find the probability of the first passage time from state 3 to state 1 in 3 steps
f31(3)

. 2 . 5 . 3 0 
. 1 . 3 . 6 0 
P   
 0 .4 .3 .3 
 
 0 0 .5 .5 
Determining the first passage times we use fij  pij
(1)

fij(2)  pij(2)  fij(1) pjj


f31(1)=0

f31(2)= P31(2) - f31(1)P11(1) =0.2 n1
fij(n)  pij(n) fij(k) p(jjnk) n  2,3,....
f31(3)=P31(3)- (f31(1)P11(2)+f31(2)P11(1)) = 0.16 k1


Example

• Determine the mean first passage time from 3 to 1

E[T31] = β(0)(I − R1)^−1 e,  with β(0) = [0, 0, 1]

      | .2  .5  .3  0  |
P  =  | .1  .3  .6  0  |
      | 0   .4  .3  .3 |
      | 0   0   .5  .5 |

R1 (P with row 1 and column 1 removed):
R1 = | 0.2  0.3  0   |
     | 0    0.3  0.3 |
     | 0    0.5  0.5 |

E[T31] = β(0)(I − R1)^−1 e = 6

Markov Chain Example
• Analyze N x N non-blocking output buffered switch

• Assumptions
– Arrival streams are independent
– Bernoulli arrival process
– Service time deterministic – D
– Buffer size fixed – S
– Uniform distribution of traffic

Performance Evaluation
• Define embedded Markov Chain at slot times
πi,j = Prob{ i class ‘1’ cells, j class ‘2’ cells }
πn = [π0,n, π1,n−1, …, πn,0]
π = [π0, π1, π2, …, πK]
• Solve for steady-state probabilities
π = π · P, where P is the state transition matrix

      | α00  α01  α02  0    …  0       0    |
      | α10  α11  α12  0    …  0       0    |
P  =  | 0    α21  α22  α23  …  0       0    |
      | 0    0    α32  α33  …  0       0    |
      | …    …    …    …    …  …       …   |
      | 0    0    0    0    …  αK,K−1  αK,K |

Also use the normalization condition
π · e = 1 where eT = [1, 1, 1, …, 1]
• Exact form of P depends on the space priority scheme modeled - for details see posted Infocom paper
Performance Evaluation

• No Priority Scheme
– Cells accepted into the buffer in FCFS fashion.
– When buffer is full, all packets are rejected.


Performance Evaluation
• Partial Buffering Scheme – (Nested Thresholds)
– Define a threshold Ti for each class i
– If the number in the system ≥ Ti, all new class i packets are dropped
– Here two classes [T1, T2]; set T1 = K

Performance Evaluation

• Pushout with overwrite probability (Pow)
– Admit all packets until buffer full
– If buffer full, class ‘1’ pushes out class ‘2’ with probability 1 − Pow
– If buffer full, class ‘2’ pushes out class ‘1’ with probability Pow


Performance Evaluation

• Validate Analytical Model with Simulation
• Experiment 1
Performance Evaluation

• Experiment 2
– Define grade of service requirements
– ε1: acceptable loss probability for class ‘1’ cells
– ε2: acceptable loss probability for class ‘2’ cells
• For a specific traffic mixture (% class 1, % class 2)
– Determined maximum offered load (MOL)

Performance Evaluation
