
6. Introduction to stochastic processes

lect06.ppt S-38.145 - Introduction to Teletraffic Theory - Fall 2000 1

6. Introduction to stochastic processes

Contents

• Basic concepts
• Poisson process
• Markov processes
• Birth-death processes

2
6. Introduction to stochastic processes

Stochastic processes (1)

• Consider a teletraffic (or any) system


• It typically evolves in time randomly
– Example 1: the number of occupied channels in a telephone link
at time t or at the arrival time of the nth customer
– Example 2: the number of packets in the buffer of a statistical multiplexer
at time t or at the arrival time of the nth customer
• This kind of evolution is described by a stochastic process
– At any individual time t (or n) the system can be described by a random
variable
– Thus, the stochastic process is a collection of random variables

6. Introduction to stochastic processes

Stochastic processes (2)

• Definition: A (real-valued) stochastic process X = (Xt | t ∈ I) is a
collection of random variables Xt
– taking values in some (real-valued) set S, Xt(ω) ∈ S, and
– indexed by a real-valued (time) parameter t ∈ I.
– Stochastic processes are also called random processes
(or just processes)
• The index set I ⊂ ℜ is called the parameter space of the process
• The value set S ⊂ ℜ is called the state space of the process
– Note: Sometimes notation Xt is used to refer to the whole stochastic
process (instead of a single random variable)

4
6. Introduction to stochastic processes

Stochastic processes (3)

• Each (individual) random variable Xt is a mapping from the sample
space Ω into the real values ℜ:

Xt : Ω → ℜ, ω ↦ Xt(ω)
• Thus, a stochastic process X can be seen as a mapping from the
sample space Ω into the set of real-valued functions ℜ^I (with t ∈ I as
an argument):

X : Ω → ℜ^I, ω ↦ X(ω)
• Each sample point ω ∈ Ω is associated with a real-valued function
X(ω). Function X(ω) is called a realization (or a path or a trajectory)
of the process.

6. Introduction to stochastic processes

Summary

• Given the sample point ω ∈ Ω,
– X(ω) = (Xt(ω) | t ∈ I) is a real-valued function (of t ∈ I)
• Given the time index t ∈ I,
– Xt = (Xt(ω) | ω ∈ Ω) is a random variable (as a function of ω ∈ Ω)
• Given the sample point ω ∈ Ω and the time index t ∈ I,
– Xt(ω) is a real value

6
6. Introduction to stochastic processes

Example

• Consider the traffic process X = (Xt | t ∈ [0,T]) in a link between two
telephone exchanges during some time interval [0,T]
– Xt denotes the number of occupied channels at time t
• Sample point ω ∈ Ω tells us
– what is the number X0 of occupied channels at time 0,
– what are the remaining holding times of the calls going on at time 0,
– at what times new calls arrive, and
– what are the holding times of these new calls.
• From this information, it is possible to construct the realization X(ω) of
the traffic process X
• Note that all the randomness is included in the sample point ω
– Given the sample point, the realization of the process is just a
(deterministic) function of time
7

6. Introduction to stochastic processes

Traffic process

[Figure: a realization of the traffic process on a 6-channel link. Upper part: channel-by-channel occupation (call holding times) as a function of time, with the call arrival times and one blocked call marked. Lower part: the number of occupied channels (traffic volume), between 0 and 6, as a function of time.]

8
6. Introduction to stochastic processes

Categories of stochastic processes

• Reminder:
– Parameter space: set I of indices t ∈ I
– State space: set S of values Xt(ω) ∈ S
• Categories:
– Based on the parameter space:
• Discrete-time processes: parameter space discrete
• Continuous-time processes: parameter space continuous
– Based on the state space:
• Discrete-state processes: state space discrete
• Continuous-state processes: state space continuous
• In this course we will concentrate on the discrete-state processes
(with either a discrete or a continuous parameter space)
– Typical processes describe the number of customers in a queueing system
(the state space being thus S = {0,1,2,...})
9

6. Introduction to stochastic processes

Examples

• Discrete-time, discrete-state processes


– Example 1: the number of occupied channels in a telephone link
at the arrival time of the nth customer, n = 1,2,...
– Example 2: the number of packets in the buffer of a statistical multiplexer
at the arrival time of the nth customer, n = 1,2,...
• Continuous-time, discrete-state processes
– Example 3: the number of occupied channels in a telephone link
at time t > 0
– Example 4: the number of packets in the buffer of a statistical multiplexer
at time t > 0

10
6. Introduction to stochastic processes

Notation

• For a discrete-time process,


– the parameter space is typically the set of positive integers, I = {1,2,…}
– Index t is then (often) replaced by n: Xn, Xn(ω)

• For a continuous-time process,


– the parameter space is typically either a finite interval, I = [0, T], or all non-negative real values, I = [0, ∞)
– In this case, index t is (often) written not as a subscript but in parentheses:
X(t), X (t;ω)

11

6. Introduction to stochastic processes

Distribution

• The stochastic characterization of a stochastic process X is given by
specifying all possible finite-dimensional distributions

P{Xt1 ≤ x1, …, Xtn ≤ xn}

where t1,…, tn ∈ I, x1,…, xn ∈ S and n = 1,2,...


• In general, this is not an easy task because of dependencies between
the random variables Xt (with different values of time t)

12
6. Introduction to stochastic processes

Dependence

• The simplest (but not so interesting) example of a stochastic
process is one where all the random variables Xt are independent of
each other. In this case

P{Xt1 ≤ x1, …, Xtn ≤ xn} = P{Xt1 ≤ x1} ⋯ P{Xtn ≤ xn}

• The simplest non-trivial example is a Markov process. In this case

P{Xt1 ≤ x1, …, Xtn ≤ xn} =
P{Xt1 ≤ x1} ⋅ P{Xt2 ≤ x2 | Xt1 ≤ x1} ⋯ P{Xtn ≤ xn | Xtn−1 ≤ xn−1}

• This is related to the so-called Markov property:
– Given the current state (of the process),
the future (of the process) does not depend on the past (of the process)

13

6. Introduction to stochastic processes

Stationarity

• Definition: Stochastic process X is stationary if all finite-dimensional
distributions are invariant to time shifts, that is:

P{Xt1+∆ ≤ x1, …, Xtn+∆ ≤ xn} = P{Xt1 ≤ x1, …, Xtn ≤ xn}

for all ∆, n, t1,…, tn and x1,…, xn


• Consequence: By choosing n = 1, we see that all (individual) random
variables Xt of a stationary process are identically distributed:

P{Xt ≤ x} = F(x)

for all t ∈ I. This is called the stationary distribution of the process.

14
6. Introduction to stochastic processes

Stochastic processes in teletraffic theory

• In this course (and, more generally, in teletraffic theory) various
stochastic processes are needed to describe
– the arrivals of customers to the system (arrival process)
– the state of the system (state process, traffic process)

15

6. Introduction to stochastic processes

Arrival process

• An arrival process can be described as


– a point process (τn | n = 1,2,...) where τn tells the arrival time of the nth
customer (discrete-time, continuous-state)
• typically it is assumed that the interarrival times τn − τn-1 are
independent and identically distributed (IID) ⇒ renewal process
• then it is sufficient to specify the interarrival time distribution
• exponential IID interarrival times ⇒ Poisson process
– a counter process (A(t) | t ≥ 0) where A(t) tells the number of arrivals up
to time t (continuous-time, discrete-state)
• non-decreasing: A(t+∆) ≥ A(t) for all t,∆ ≥ 0
• thus non-stationary!
• independent and identically distributed (IID) increments A(t+∆) − A(t)
with Poisson distribution ⇒ Poisson process

16
6. Introduction to stochastic processes

State process

• In simple cases
– the state of the system is described just by an integer
• e.g. the number X(t) of calls or packets at time t
– This yields a state process that is continuous-time and discrete-state
• In more complicated cases,
– the state process is e.g. a vector of integers (cf. loss and queueing network
models)
• Now it is reasonable to ask whether the state process is stationary
– Although the state of the system did not follow the stationary distribution at
time 0, in many cases state distribution approaches the stationary
distribution as t tends to ∞

17

6. Introduction to stochastic processes

Contents

• Basic concepts
• Poisson process
• Markov processes
• Birth-death processes

18
6. Introduction to stochastic processes

Bernoulli process

• Definition: A Bernoulli process with success probability p is an infinite
series (Xn | n = 1,2,...) of independent and identical random
experiments of Bernoulli type with success probability p
• Bernoulli process is clearly discrete-time and discrete-state
– Parameter space: I = {1,2,…}
– State space: S = {0,1}
• Finite-dimensional distributions (note: the Xn’s are IID):

P{X1 = x1, …, Xn = xn} = P{X1 = x1} ⋯ P{Xn = xn}
                       = ∏i=1…n p^xi (1 − p)^(1−xi) = p^(Σi xi) (1 − p)^(n − Σi xi)
• Bernoulli process is stationary (stationary distribution: Bernoulli(p))
19
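• A minimal simulation sketch (not part of the original slides; it assumes Python with numpy, purely for illustration): one realization of a Bernoulli process, checking that the empirical success frequency agrees with the Bernoulli(p) stationary distribution.

import numpy as np

# Minimal sketch (assumption: numpy available): simulate a Bernoulli process
# with success probability p and check its stationary distribution.
rng = np.random.default_rng(0)

p = 0.3          # success probability (illustrative value)
n_steps = 10_000 # length of one realization

# One realization (X_1, ..., X_n): IID Bernoulli(p) random variables
X = rng.random(n_steps) < p

# Empirical P{X_n = 1} should be close to p for a stationary process
print("empirical success frequency:", X.mean(), "vs p =", p)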

6. Introduction to stochastic processes

Poisson process (1)

• Definition 1: A point process (τn | n = 1,2,...) is a Poisson process
with intensity λ if the probability that there is an event during a short
time interval (t, t+h] is λh + o(h) independently of the other time
intervals
– τn tells the occurrence time of the nth event
– o(h) refers to any function such that o(h)/h → 0 as h → 0
– new events happen with a constant intensity λ: (λh + o(h))/h → λ
– Poisson process can be seen as the continuous-time counterpart of a
Bernoulli process
• Defined as a point process,
Poisson process is discrete-time and continuous-state
– Parameter space: I = {1,2,…}
– State space: S = (0, ∞)

20
6. Introduction to stochastic processes

Poisson process (2)

• Consider the interarrival time τn − τn-1 between two events (τ0 = 0)


– Since the intensity at which events occur remains constant (λ), the
interarrival time distribution is clearly memoryless. On the other hand, we
know that this is a property of the exponential distribution.
– Due to the same reason, different interarrival times are also independent
– This leads to the following (second) characterization of a Poisson process
• Definition 2: A point process (τn | n = 1,2,...) is a Poisson process
with intensity λ if the interarrival times τn − τn−1 are independent and
identically distributed (IID) with common distribution Exp(λ)
– τn tells (again) the occurrence time of the nth event

21
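• A minimal sketch (assuming Python with numpy; not from the original slides) that connects Definitions 2 and 3: the process is generated from IID Exp(λ) interarrival times, and the number of events A(t) in [0, t] is checked against the Poisson(λt) mean and variance.

import numpy as np

# Minimal sketch (assumption: numpy available). Definition 2 in action: build a
# Poisson process from IID Exp(lambda) interarrival times, then check
# Definition 3: A(t) should be Poisson(lambda * t) distributed.
rng = np.random.default_rng(1)

lam, t, n_runs = 2.0, 10.0, 5_000   # intensity, horizon, replications

counts = []
for _ in range(n_runs):
    # Draw interarrival times until the cumulative sum safely exceeds t
    arrivals = np.cumsum(rng.exponential(1.0 / lam, size=int(5 * lam * t)))
    counts.append(np.searchsorted(arrivals, t))   # A(t) = number of arrivals in [0, t]

counts = np.array(counts)
print("mean of A(t):", counts.mean(), "  expected:", lam * t)
print("variance of A(t):", counts.var(), "  expected:", lam * t)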

6. Introduction to stochastic processes

Poisson process (3)

• Consider finally the number of events A(t) during time interval [0,t]
– In a Bernoulli process, the number of successes in a fixed interval would
follow a binomial distribution. As the “time slice” tends to 0, this approaches
a Poisson distribution.
– On the other hand, since the intensity at which events occur remains
constant (λ), the numbers of events occurring in disjoint time intervals are
clearly independent.
– This leads to the following (third) characterization of a Poisson process
• Definition 3: A counter process (A(t) | t ≥ 0) is a Poisson process
with intensity λ if its increments in disjoint intervals are independent
and follow a Poisson distribution as follows:

A(t + ∆) − A(t) ∼ Poisson(λ∆)

22
6. Introduction to stochastic processes

Poisson process (4)

• Defined as a counter process,
Poisson process is continuous-time and discrete-state
– Parameter space: I = [0, ∞)
– State space: S = {0,1,2,…}
• One dimensional distribution: A(t) ∼ Poisson(λt)
– E[A(t)] = λt, D2[A(t)] = λt
• Finite dimensional distributions (due to indep. of disjoint intervals):

P{A(t1) = x1, …, A(tn) = xn} =
P{A(t1) = x1} P{A(t2) − A(t1) = x2 − x1} ⋯ P{A(tn) − A(tn−1) = xn − xn−1}
• No stationary distribution (but independent and identically distributed
increments)

23

6. Introduction to stochastic processes

Three ways to characterize the Poisson process

• It is possible to show that all three definitions for a Poisson process are,
indeed, equivalent

[Figure: the three characterizations on one time axis: the counter process A(t) stepping up at each event; the event times τ1, τ2, τ3, τ4 with interarrival times such as τ4 − τ3; and, in each short interval of length h, an event with probability λh + o(h) and no event with probability 1 − λh + o(h).]
24
6. Introduction to stochastic processes

Properties (1)

• Property 1 (Sum): Let A1(t) and A2(t) be two independent Poisson
processes with intensities λ1 and λ2. Then the sum (superposition)
process A1(t) + A2(t) is a Poisson process with intensity λ1 + λ2.
• Proof: Consider a short time interval (t, t+h]
– Probability that there are no events in the superposition is

(1 − λ1h + o(h))(1 − λ2h + o(h)) = 1 − (λ1 + λ2)h + o(h)

– On the other hand, the probability that there is exactly one event is

(λ1h + o(h))(1 − λ2h + o(h)) + (1 − λ1h + o(h))(λ2h + o(h))
= (λ1 + λ2)h + o(h)

[Figure: two independent Poisson streams with intensities λ1 and λ2 merged into one stream with intensity λ1 + λ2.]
25
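• A small empirical check of Property 1 (a sketch assuming Python with numpy; the intensities 1.5 and 0.5 are illustrative): two independent Poisson streams are merged and the event rate of the superposition is compared with λ1 + λ2.

import numpy as np

# Minimal sketch (assumption: numpy available). Merging two independent Poisson
# processes should give a Poisson process whose rate is the sum of the rates.
rng = np.random.default_rng(2)

lam1, lam2, t = 1.5, 0.5, 1_000.0

# Event times of the two processes on [0, t], built from Exp interarrivals
a1 = np.cumsum(rng.exponential(1 / lam1, int(2 * lam1 * t)))
a2 = np.cumsum(rng.exponential(1 / lam2, int(2 * lam2 * t)))
merged = np.sort(np.concatenate([a1[a1 <= t], a2[a2 <= t]]))

print("events per unit time in the superposition:", len(merged) / t)
print("expected intensity lambda1 + lambda2:", lam1 + lam2)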

6. Introduction to stochastic processes

Properties (2)

• Property 2 (Random sampling): Let τn be a Poisson process with
intensity λ. Denote by σn the point process resulting from a random
and independent sampling (with probability p) of the points of τn. Then
σn is a Poisson process with intensity pλ.
• Proof: Consider a short time interval (t, t+h]
– Probability that there are no events after the random sampling is

(1 − λh + o(h)) + (1 − p)(λh + o(h)) = 1 − pλh + o(h)

– On the other hand, the probability that there is exactly one event is

p(λh + o(h)) = pλh + o(h)

[Figure: a Poisson stream with intensity λ thinned with probability p into a stream with intensity pλ.]

26
6. Introduction to stochastic processes

Properties (3)

• Property 3 (Random sorting): Let τn be a Poisson process with
intensity λ. Denote by σn(1) the point process resulting from a random
and independent sampling (with probability p) of the points of τn.
Denote by σn(2) the point process resulting from the remaining points.
Then σn(1) and σn(2) are independent Poisson processes with
intensities λp and λ(1 − p).
• Proof: Due to property 2, it is enough to prove that the resulting two
processes are independent.

[Figure: a Poisson stream with intensity λ split at random into two independent streams with intensities λp and λ(1 − p).]
27
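• A small empirical check of Properties 2 and 3 (a sketch assuming Python with numpy; λ = 2 and p = 0.3 are illustrative): a Poisson stream is split by independent coin flips and the rates of the two resulting streams are compared with pλ and (1 − p)λ.

import numpy as np

# Minimal sketch (assumption: numpy available). Thinning a Poisson(lambda)
# process with probability p yields Poisson(p*lambda); the kept and discarded
# streams should have rates p*lambda and (1-p)*lambda.
rng = np.random.default_rng(3)

lam, p, t = 2.0, 0.3, 1_000.0

arrivals = np.cumsum(rng.exponential(1 / lam, int(2 * lam * t)))
arrivals = arrivals[arrivals <= t]

keep = rng.random(len(arrivals)) < p      # independent coin flip per point
kept, dropped = arrivals[keep], arrivals[~keep]

print("rate of kept stream:", len(kept) / t, " expected:", p * lam)
print("rate of dropped stream:", len(dropped) / t, " expected:", (1 - p) * lam)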

6. Introduction to stochastic processes

Properties (4)

• Property 4 (PASTA): Consider any simple (and stable) teletraffic
model with Poisson arrivals. Let X(t) denote the state of the system at
time t (continuous-time process) and Yn denote the state of the system
seen by the nth arriving customer (discrete-time process). Then the
stationary distribution of X(t) is the same as the stationary distribution
of Yn.
• Thus, we can say that
– arriving customers see the system in the stationary state
• PASTA property is only valid for Poisson arrivals
– Consider e.g. your own PC. Whenever you start a new session, the system
is idle. In continuous time, however, the system is sometimes idle and
sometimes busy (when you use it).

28
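• A rough simulation sketch of the PASTA property (assumptions: Python with numpy, and an M/M/1 queue as the "simple teletraffic model"; the slides do not fix a specific model). It compares the time-average distribution of X(t) with the distribution seen by arriving customers Yn; by PASTA the two should agree.

import numpy as np

# Rough sketch (assumptions: numpy, and an M/M/1 queue as an example of a
# simple teletraffic model with Poisson arrivals).
rng = np.random.default_rng(4)

lam, mu, t_end = 0.7, 1.0, 100_000.0   # arrival rate, service rate, horizon

t, x = 0.0, 0                 # current time and number of customers in system
time_in_state = {}            # total time spent in each state
seen_by_arrivals = {}         # state counts at arrival instants

while t < t_end:
    # Next event: an arrival always competes; a departure only if x > 0
    rate = lam + (mu if x > 0 else 0.0)
    dt = rng.exponential(1 / rate)
    time_in_state[x] = time_in_state.get(x, 0.0) + dt
    t += dt
    if rng.random() < lam / rate:          # the event is an arrival
        seen_by_arrivals[x] = seen_by_arrivals.get(x, 0) + 1
        x += 1
    else:                                  # the event is a departure
        x -= 1

total_time = sum(time_in_state.values())
total_arrivals = sum(seen_by_arrivals.values())
for i in range(4):
    print(i,
          round(time_in_state.get(i, 0.0) / total_time, 3),        # P{X(t)=i}
          round(seen_by_arrivals.get(i, 0) / total_arrivals, 3))   # P{Y_n=i}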
6. Introduction to stochastic processes

Contents

• Basic concepts
• Poisson process
• Markov processes
• Birth-death processes

29

6. Introduction to stochastic processes

Markov process

• Consider a continuous-time and discrete-state stochastic process X(t)


– with state space S = {0,1,…,N} or S = {0,1,...}
• Definition: The process X(t) is a Markov process if

P{X(tn+1) = xn+1 | X(t1) = x1, …, X(tn) = xn} =
P{X(tn+1) = xn+1 | X(tn) = xn}

for all n, t1 < … < tn+1 and x1,…, xn+1
– This is called the Markov property
– Given the current state,
the future of the process does not depend on its past
– As regards the future of the process,
it is important to know the current state
(not how the process has evolved to this state)

30
6. Introduction to stochastic processes

Example

• A process X(t) with independent increments is always a Markov process:

X(tn) = X(tn−1) + (X(tn) − X(tn−1))

• It follows that Poisson process is a Markov process:


– according to Definition 3, the increments of a Poisson process are
independent

31

6. Introduction to stochastic processes

Time-homogeneity

• Definition: Markov process X(t) is time-homogeneous if

P{X(t + ∆) = y | X(t) = x} = P{X(∆) = y | X(0) = x}
for all t, ∆ ≥ 0 and x, y ∈ S
• In other words,
probabilities P{X(t + ∆) = y | X(t) = x} are independent of t

32
6. Introduction to stochastic processes

State transition rates

• Consider a time-homogeneous Markov process X(t)


• The state transition rates qij, where i, j ∈ S and i ≠ j, are defined as follows:

qij := lim(h↓0) (1/h) P{X(h) = j | X(0) = i}
• The initial distribution P{X(0) = i}, i ∈ S, and the state transition rates
qij together determine the state probabilities P{X(t) = i}, i ∈ S, by the
Kolmogorov (backwards/forwards) equations

33
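• A minimal sketch of solving the Kolmogorov equations numerically (assumptions: Python with numpy and scipy; the three-state generator is illustrative and happens to match the example later in this section with µ = 0.5). The state probabilities are p(t) = p(0)·e^(Qt), where Q has the rates qij off the diagonal and −qi on the diagonal.

import numpy as np
from scipy.linalg import expm

# Minimal sketch (assumptions: numpy/scipy available; illustrative generator).
# p(t) = p(0) * exp(Q t) solves the Kolmogorov forward equations.
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.0, -1.0,  1.0],
              [ 1.0,  0.5, -1.5]])

p0 = np.array([1.0, 0.0, 0.0])      # initial distribution: start in state 0

for t in (0.5, 2.0, 10.0):
    pt = p0 @ expm(Q * t)           # state probabilities P{X(t) = i}
    print(f"t = {t:5.1f}:", np.round(pt, 4))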

6. Introduction to stochastic processes

Exponential holding times

• When in state i, the conditional probability that there is a transition from
state i to state j during a short time interval (t, t+h] is qijh + o(h)
independently of the other time intervals
• Let qi denote the total transition rate out of state i, that is:

qi := Σj≠i qij
• Then, the conditional probability that there is a transition from state i to
any other state during a short time interval (t, t+h] is qih + o(h)
independently of the other time intervals
• Thus, the holding time in (any) state i is exponentially distributed with
intensity qi

34
6. Introduction to stochastic processes

State transition probabilities

• Let Ti denote the holding time in state i


• It can be seen as the minimum of independent (potential) holding times
Tij corresponding to (potential) transitions from state i to state j:

Ti = min(j≠i) Tij
• Let then pij denote the conditional probability that, when in state i, there
is a transition from state i to state j
• Since potential holding times Tij are exponentially distributed with
intensity qij, we have (by slide 5.44)
Ti ∼ Exp(qi),   pij = P{Ti = Tij} = qij / qi
35
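• A minimal sketch (assuming Python with numpy; the rate matrix reuses the illustrative generator from the previous sketch) that computes the total rates qi and the jump probabilities pij = qij/qi, and simulates the process as described above: stay in state i for an Exp(qi) holding time, then jump to state j with probability pij.

import numpy as np

# Minimal sketch (assumption: numpy available; illustrative rates).
rng = np.random.default_rng(5)

rates = np.array([[0.0, 1.0, 0.0],    # q_ij for i != j (diagonal entries zero)
                  [0.0, 0.0, 1.0],
                  [1.0, 0.5, 0.0]])

q = rates.sum(axis=1)                  # total exit rate q_i of each state
P_jump = rates / q[:, None]            # jump-chain probabilities p_ij = q_ij / q_i

state, t = 0, 0.0
for _ in range(5):
    hold = rng.exponential(1 / q[state])              # Exp(q_i) holding time
    nxt = rng.choice(len(q), p=P_jump[state])         # next state with prob p_ij
    print(f"t = {t:6.3f}: state {state} held {hold:.3f}, jump to {nxt}")
    t += hold
    state = nxt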

6. Introduction to stochastic processes

State transition diagram

• A time-homogeneous Markov process can be represented by a state
transition diagram, which is a directed graph where
– nodes correspond to states and
– one-way links correspond to potential state transitions
link from state i to state j ⇔ qij > 0
• Example: Markov process with three states, S = {0,1,2}

− + 0 0
  q20 q01
Q =  0 − +
q21
 + + −
  2
q12
1

36
6. Introduction to stochastic processes

Irreducibility

• Definition: There is a path from state i to state j (i → j) if there is a
directed path from state i to state j in the state transition diagram.
• In this case, starting from state i, the process visits state j with positive
probability
• Definition: States i and j communicate (i ↔ j) if i → j and j → i.
• Definition: Markov process is irreducible if all states i ∈ S
communicate with each other
• Example: The Markov process presented in the previous slide is
irreducible

37

6. Introduction to stochastic processes

Global balance equations, equilibrium distribution

• Consider an irreducible Markov process X(t)


• Definition: Let π = (πi | πi ≥ 0, i ∈ S) be a distribution defined on the
state space S, that is:
Σi∈S πi = 1    (N)
It is the equilibrium distribution of the process if the following global
balance equations (GBE) are satisfied for each i ∈ S:

Σj≠i πi qij = Σj≠i πj qji    (GBE)
– It is possible that no equilibrium distribution exists
– However, if the state space is finite, a unique equilibrium distribution exists
– By choosing the equilibrium distribution (if it exists) as the initial distribution,
the Markov process X(t) becomes stationary (with stationary distribution π)

38
6. Introduction to stochastic processes

Example

− 1 0 0
  1 1
Q = 0 − 1
µ
1 µ − 
 2
1
1

π 0 + π1 + π 2 = 1 (N)

π 0 ⋅1 = π 2 ⋅1
π 1 ⋅1 = π 0 ⋅1 + π 2 ⋅ µ (GBE)
π 2 ⋅ (1 + µ ) = π 1 ⋅1
1+ µ
⇒ π 0 = 3+1µ , π 1 = 3+ µ , π 2 = 3+1µ
39
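• A small numerical check of the example above (a sketch assuming Python with numpy; µ = 0.5 is an arbitrary choice): solve the global balance equations πQ = 0 together with the normalization (N) and compare with the closed-form solution.

import numpy as np

# Minimal sketch (assumption: numpy available). Solve pi * Q = 0 with
# sum(pi) = 1 and compare with (1/(3+mu), (1+mu)/(3+mu), 1/(3+mu)).
mu = 0.5

Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.0, -1.0,  1.0],
              [ 1.0,   mu, -(1.0 + mu)]])

# pi Q = 0 and pi 1 = 1: append the normalization as an extra equation
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

print("numerical: ", np.round(pi, 4))
print("closed form:", np.round([1/(3+mu), (1+mu)/(3+mu), 1/(3+mu)], 4))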

6. Introduction to stochastic processes

Local balance equations

• Consider still an irreducible Markov process X(t). Next we will give
sufficient (but not necessary) conditions for the equilibrium distribution.
• Proposition: Let π = (πi | πi ≥ 0, i ∈ S) be a distribution defined on the
state space S, that is:

Σi∈S πi = 1    (N)

If the following local balance equations (LBE) are satisfied for each
i, j ∈ S:

πi qij = πj qji    (LBE)
then π is the equilibrium distribution of the process.
– Proof: (GBE) follows from (LBE) by summing over all j ≠ i
– In this case the Markov process X(t) is called reversible (looking
stochastically the same in either direction of time)
40
6. Introduction to stochastic processes

Contents

• Basic concepts
• Poisson process
• Markov processes
• Birth-death processes

41

6. Introduction to stochastic processes

Birth-death process

• Consider a continuous-time and discrete-state Markov process X(t)


– with state space S = {0,1,…,N} or S = {0,1,...}
• Definition: The process X(t) is a birth-death process (BD) if state
transitions are possible only between neighbouring states, that is:
|i − j| > 1 ⇒ qij = 0
• In this case, we denote
µi := qi,i−1 ≥ 0
λi := qi,i+1 ≥ 0
– The former is called the death rate and the latter the birth rate.
– In particular, we define µ0 = 0 and λN = 0 (if N < ∞)

42
6. Introduction to stochastic processes

Irreducibility

• Proposition: A birth-death process is irreducible if and only if


λi > 0 for all i ∈ S\{N} and µi > 0 for all i ∈ S\{0}

• State transition diagram of an infinite-state irreducible BD process:

[Diagram: states 0, 1, 2, … in a chain, with birth rates λ0, λ1, λ2, … pointing right and death rates µ1, µ2, µ3, … pointing left.]

• State transition diagram of a finite-state irreducible BD process:

[Diagram: states 0, 1, …, N−1, N in a chain, with birth rates λ0, λ1, …, λN−1 pointing right and death rates µ1, µ2, …, µN pointing left.]

43

6. Introduction to stochastic processes

Equilibrium distribution (1)

• Consider an irreducible birth-death process X(t)


• Let π = (πi | i ∈ S) denote the equilibrium distribution (if it exists)
• Local balance equations (LBE):

πi λi = πi+1 µi+1    (LBE)

• Thus we get the following recursive formula:

πi+1 = (λi/µi+1) πi   ⇒   πi = π0 ∏j=1…i (λj−1/µj)

• Normalizing condition (N):

Σi∈S πi = π0 Σi∈S ∏j=1…i (λj−1/µj) = 1    (N)
44
6. Introduction to stochastic processes

Equilibrium distribution (2)

• Thus, the equilibrium distribution exists if and only if

Σi∈S ∏j=1…i (λj−1/µj) < ∞

• Finite state space:
The sum above is always finite, and the equilibrium distribution is

πi = π0 ∏j=1…i (λj−1/µj),   π0 = (1 + Σi=1…N ∏j=1…i (λj−1/µj))^−1

• Infinite state space:
If the sum above is finite, the equilibrium distribution is

πi = π0 ∏j=1…i (λj−1/µj),   π0 = (1 + Σi=1…∞ ∏j=1…i (λj−1/µj))^−1

45
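• A minimal sketch (assuming Python with numpy) of the product formula above for a finite birth-death process; the birth and death rates used below are illustrative, not from the slides.

import numpy as np

# Minimal sketch (assumption: numpy available). Equilibrium distribution of a
# finite irreducible birth-death process via the product formula.
def bd_equilibrium(birth, death):
    """birth[i] = lambda_i for i = 0..N-1, death[i] = mu_{i+1} for i = 0..N-1."""
    # ratios[i] = prod_{j=1..i} lambda_{j-1} / mu_j, with ratios[0] = 1
    ratios = np.concatenate([[1.0], np.cumprod(np.asarray(birth) / np.asarray(death))])
    return ratios / ratios.sum()          # normalize so the pi_i sum to 1

# Example: N = 4, constant birth rate 1.0 and death rates mu_i = i
# (chosen purely for illustration)
pi = bd_equilibrium(birth=[1.0, 1.0, 1.0, 1.0], death=[1.0, 2.0, 3.0, 4.0])
print(np.round(pi, 4), "sum =", pi.sum())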

6. Introduction to stochastic processes

Example

      ( −λ      λ      0  )
Q  =  (  µ   −(λ+µ)    λ  )
      (  0      µ     −µ  )

[State transition diagram: states 0, 1, 2 in a chain with birth rate λ (to the right) and death rate µ (to the left).]

πi λ = πi+1 µ
⇒ πi+1 = ρ πi    (ρ := λ/µ)    (LBE)
⇒ πi = π0 ρ^i

π0 + π1 + π2 = π0 (1 + ρ + ρ²) = 1    (N)

⇒ πi = ρ^i / (1 + ρ + ρ²)

46
6. Introduction to stochastic processes

Pure birth process

• Definition: A birth-death process is a pure birth process if


µi = 0 for all i ∈ S
• State transition diagram of an infinite-state pure birth process:
[Diagram: states 0, 1, 2, … in a chain with birth rates λ0, λ1, λ2, … and no death transitions.]

• State transition diagram of a finite-state pure birth BD process:

[Diagram: states 0, 1, …, N−1, N in a chain with birth rates λ0, λ1, …, λN−1 and no death transitions.]

• Example: Poisson process is a pure birth process (with constant birth
rate λi = λ for all i ∈ S = {0,1,…})
• Note: Pure birth process is never irreducible (nor stationary)!
47

6. Introduction to stochastic processes

THE END

48
