Queueing 2017 Fall

The document covers topics in computer communication networks, focusing on queueing theory, Markov processes, and switching networks. It discusses circuit and packet switching, their traffic management, and the implications of random processes for network performance. Key concepts include discrete-time Markov processes, state transition probabilities, and statistical multiplexing in network systems.

ELL 785 – Computer Communication Networks
Lecture 3: Introduction to Queueing theory

Contents
• Motivations
• Discrete-time Markov processes
• Review on Poisson process
• Continuous-time Markov processes
• Queueing systems

Circuit switching networks - I
Traffic fluctuates as calls are initiated and terminated
• Telephone calls come and go
• People's activity follows patterns: mid-morning and mid-afternoon at the office, evening at home, summer vacation, etc.
• Outlier days are extra busy (Mother's Day, Christmas, ...); disasters and other events cause surges in traffic
• Providing enough resources that call requests are always met is too expensive
• Meeting call requests most of the time is cost-effective
Switches concentrate traffic from many lines onto fewer shared trunks, so blocking of requests will occur from time to time
– Goal: minimize the number of trunks subject to a blocking probability constraint

Circuit switching networks - II
Fluctuation in Trunk Occupancy
[Figure: number of busy trunks (trunks 1–7, each shown active at various times) versus time; when all trunks are busy, new call requests are blocked]
Packet switching networks - I
Statistical multiplexing
• Dedicated lines involve no waiting for other users, but the lines are used inefficiently when user traffic is bursty
• A shared line concentrates packets onto one line; packets are buffered (delayed) when the line is not immediately available
[Figure: (a) dedicated lines, each carrying only its own packets A1, A2 / B1, B2 / C1, C2; (b) a shared line, where input lines A, B, C feed a buffer and one output line carries A1 C1 B1 A2 B2 C2]

Packet switching networks - II
Fluctuations in Packets in the System
[Figure: number of packets in the system versus time for (a) dedicated lines and (b) a shared line]

Packet switching networks - III
Delay = waiting time + service time
[Figure: packets P1–P5 arrive at the queue, wait, begin transmission, and complete transmission]
• Packet arrival process
• Packet service time
– With a transmission rate of R bps and a packet L bits long, the service time is L/R (the transmission time of a packet)
– The packet length can be a constant or a random variable

Random (or Stochastic) Processes
General notion
• Suppose a random experiment is specified by the outcomes ζ from some sample space S, and ζ ∈ S
• A random (or stochastic) process is a mapping from ζ to a function of time t: X(t, ζ)
– For a fixed t, e.g., t1, t2, ...: X(ti, ζ) is a random variable
– For ζ fixed: X(t, ζi) is a sample path or realization
– e.g., # of people in Café Coffee Day, # of rickshaws at the IIT main gate
Discrete-time Markov process I
A sequence of integer-valued random variables, Xn, n = 0, 1, ..., is called a discrete-time Markov process if the following Markov property holds:
Pr[Xn+1 = j | Xn = i, Xn−1 = in−1, ..., X0 = i0] = Pr[Xn+1 = j | Xn = i]
• State: the value of Xn at time n, taking values in the set S
• State space: the set S = {n | n = 0, 1, ...}
– An integer-valued Markov process is called a Markov chain (MC)
– With an independent Bernoulli sequence Xi with probability 1/2, is Yn = 0.5(Xn + Xn−1) a Markov process?
– Is the vector process Yn = (Xn, Xn−1) a Markov process?

Discrete-time Markov process II
Time-homogeneous: for any n,
pij = Pr[Xn+1 = j | Xn = i] (independent of time n)
which is called the one-step (state) transition probability.
State transition probability matrix P = [pij], whose i-th row is (pi0, pi1, pi2, ...):
P = [ p00 p01 p02 ··· ; p10 p11 p12 ··· ; ··· ; pi0 pi1 pi2 ··· ; ··· ]
It is called a stochastic matrix, with pij ≥ 0 and Σ_{j=0}^{∞} pij = 1 for every row.

Discrete-time Markov process III
A mouse in a maze
• A mouse in a 3×3 maze (cells numbered 1–9) chooses the next cell to visit with probability 1/k, where k is the number of adjacent cells.
• The mouse does not move any more once it is caught by the cat or it has the cheese (the two absorbing cells, 7 and 9).
P (row i = current cell i, states 1–9):
row 1: [ 0   1/2  0   1/2  0   0   0   0   0  ]
row 2: [1/3  0   1/3  0   1/3  0   0   0   0  ]
row 3: [ 0   1/2  0   0   0   1/2  0   0   0  ]
row 4: [1/3  0   0   0   1/3  0   1/3  0   0  ]
row 5: [ 0   1/4  0   1/4  0   1/4  0   1/4  0 ]
row 6: [ 0   0   1/3  0   1/3  0   0   0   1/3]
row 7: [ 0   0   0   0   0   0   1   0   0  ]
row 8: [ 0   0   0   0   1/3  0   1/3  0   1/3]
row 9: [ 0   0   0   0   0   0   0   0   1  ]

Discrete-time Markov process IV
n-step transition probability:
p(n)ij = Pr[Xl+n = j | Xl = i] for n ≥ 0, i, j ≥ 0.
– Consider a two-step transition probability:
Pr[X2 = j, X1 = k | X0 = i] = Pr[X2 = j, X1 = k, X0 = i] / Pr[X0 = i]
= Pr[X2 = j | X1 = k] Pr[X1 = k | X0 = i] Pr[X0 = i] / Pr[X0 = i]
= pik pkj
– Summing over k, we have
p(2)ij = Σ_k pik pkj
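Since cells 7 and 9 are absorbing, powers of P give the probability of each final outcome. A minimal numerical sketch in Matlab (same style as the simulation code later in these notes; the matrix is exactly the P above):

P = [0 1/2 0 1/2 0 0 0 0 0;
     1/3 0 1/3 0 1/3 0 0 0 0;
     0 1/2 0 0 0 1/2 0 0 0;
     1/3 0 0 0 1/3 0 1/3 0 0;
     0 1/4 0 1/4 0 1/4 0 1/4 0;
     0 0 1/3 0 1/3 0 0 0 1/3;
     0 0 0 0 0 0 1 0 0;
     0 0 0 0 1/3 0 1/3 0 1/3;
     0 0 0 0 0 0 0 0 1];
Pn = P^200;        % for large n the transient mass has been absorbed
disp(Pn(1,[7 9]))  % starting from cell 1: probabilities of ending in cell 7 vs cell 9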
Discrete-time Markov process IV
The Chapman-Kolmogorov equations:
p(n+m)ij = Σ_{k∈S} p(n)ik p(m)kj for n, m ≥ 0, i, j ∈ S
Proof:
Pr[Xn+m = j | X0 = i] = Σ_{k∈S} Pr[Xn+m = j | X0 = i, Xn = k] Pr[Xn = k | X0 = i]
(Markov property) = Σ_{k∈S} Pr[Xn+m = j | Xn = k] Pr[Xn = k | X0 = i]
(Time homogeneity) = Σ_{k∈S} Pr[Xm = j | X0 = k] Pr[Xn = k | X0 = i]
In matrix form: P^(n+m) = P^(n) P^(m) ⇒ P^(n+1) = P^(n) P

Discrete-time Markov process V
In a place, the weather each day is classified as sunny, cloudy or rainy. The next day's weather depends only on the weather of the present day and not on the weather of the previous days. If the present day is sunny, the next day will be sunny, cloudy or rainy with respective probabilities 0.70, 0.10 and 0.20. The transition probabilities are 0.50, 0.25 and 0.25 when the present day is cloudy, and 0.40, 0.30 and 0.30 when the present day is rainy.
[State transition diagram over Sunny, Cloudy, Rainy]
        S     C     R
P = S [ 0.7   0.1   0.2  ]
    C [ 0.5   0.25  0.25 ]
    R [ 0.4   0.3   0.3  ]
– Using the n-step transition probability matrix,
P^3 = [0.601 0.168 0.230; 0.596 0.175 0.233; 0.585 0.179 0.234]
P^12 = [0.596 0.172 0.231; 0.596 0.172 0.231; 0.596 0.172 0.231] = P^13
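The convergence of the rows of P^n is easy to reproduce; a quick check in Matlab:

P = [0.7 0.1 0.2; 0.5 0.25 0.25; 0.4 0.3 0.3];
disp(P^3)    % rows still differ slightly
disp(P^12)   % every row is ~[0.596 0.172 0.231]: the limiting distribution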

Discrete-time Markov process VI
If P^(n) has identical rows, then P^(n+1) does too. Suppose every row of P^(n) equals the same row vector r:
P^(n) = [ r ; r ; ··· ; r ]
Then row j of P P^(n) is
pj1 r + pj2 r + ··· = (Σ_k pjk) r = r
so P P^(n) = P^(n): the rows stay identical.

Discrete-time Markov process VII
State probabilities at time n
– π(n)i = Pr[Xn = i] and π(n) = [π(n)0, ..., π(n)i, ...] (row vector)
– π(0)i: the initial state probability
Pr[Xn = j] = Σ_{i∈S} Pr[Xn = j | X0 = i] Pr[X0 = i], i.e., π(n)j = Σ_{i∈S} p(n)ij π(0)i
– In matrix notation: π(n) = π(0) P^n
Limiting distribution: given an initial probability distribution π(0),
π = lim_{n→∞} π(n) = lim_{n→∞} π(0) P^n
and since π(0) lim_{n→∞} P^(n+1) = (π(0) lim_{n→∞} P^n) P, we get π = π P
Discrete-time Markov process VIII
Note that
π = lim_{n→∞} π(n) = lim_{n→∞} π(0) P^n, i.e., πj = lim_{n→∞} p(n)ij
– The system reaches "equilibrium" or "steady state"
– π is independent of π(0)
Global balance equation: π = π P ⇒ (each row)
πj Σ_i pji = Σ_i πi pij
– The LHS represents the total flow from state j into the states other than j
– The RHS is the total flow from the other states into state j

Discrete-time Markov process IX
Stationary distribution:
– zj and z = [zj] denote the probability of being in state j and its vector:
z = z·P and z·1 = 1
• If z is chosen as the initial distribution, i.e., π(0)j = zj for all j, we have π(n)j = zj for all n:
z = z·P = z·P² = z·P³ = ···
• A limiting distribution, when it exists, is always a stationary distribution, but the converse is not true:
P = [0 1; 1 0], P² = [1 0; 0 1], P³ = [0 1; 1 0], ...
(z = [1/2, 1/2] is stationary, but P^n alternates with period 2, so no limiting distribution exists)

Discrete-time Markov process X
Back to the earlier weather example
• Using π P = π, we have
π0 = 0.7π0 + 0.5π1 + 0.4π2
π1 = 0.1π0 + 0.25π1 + 0.3π2
π2 = 0.2π0 + 0.25π1 + 0.3π2
– Note that one equation is always redundant
• Using 1 = π0 + π1 + π2, we have
[ 0.3   −0.5   −0.4 ] [π0]   [0]
[ −0.1  0.75   −0.3 ] [π1] = [0]
[  1     1      1   ] [π2]   [1]
π0 = 0.596, π1 = 0.1722, π2 = 0.2318

Discrete-time Markov process XI
Classes of states:
• State j is accessible from state i if p(n)ij > 0 for some n
• States i and j communicate if they are accessible from each other
• Two states belong to the same class if they communicate with each other
• An MC having a single class is said to be irreducible
[Small example chain over states 0, 1, 2, 3]
Recurrence property
• State j is recurrent if Σ_{n=1}^{∞} p(n)jj = ∞
– Positive recurrent if πj > 0
– Null recurrent if πj = 0
• State j is transient if Σ_{n=1}^{∞} p(n)jj < ∞
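For the weather example's 3×3 linear system above, replacing the redundant balance equation with the normalization condition gives the stationary probabilities directly; a small Matlab sketch:

P = [0.7 0.1 0.2; 0.5 0.25 0.25; 0.4 0.3 0.3];
A = eye(3) - P.';   % rows of A*pi = 0 are the balance equations
A(3,:) = 1;         % replace the redundant equation by pi0 + pi1 + pi2 = 1
b = [0; 0; 1];
pi_vec = (A\b).'    % ~[0.5960 0.1722 0.2318]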
Discrete-time Markov process XII
Periodicity and aperiodicity:
• State i has period d if
p(n)ii = 0 when n is not a multiple of d,
where d is the largest integer with this property.
• State i is aperiodic if it has period d = 1.
• All states in a class have the same period
– An irreducible Markov chain is said to be aperiodic if the states in its single class have period one
[Classification: a state is either transient or recurrent; a recurrent state is positive or null recurrent, and periodic or aperiodic; positive recurrent + aperiodic = ergodic]

Discrete-time Markov process XIII
In a place, a mosquito is produced every hour with probability p, and dies with probability 1 − p
• Show the state transition diagram
[Birth-death chain over states 0, 1, 2, 3, ...]
• Using the global balance equations, find the (stationary) state probabilities:
p πi = (1 − p) πi+1 → πi+1 = (p/(1 − p)) πi, so πi = (p/(1 − p))^i π0
• All states are positive recurrent if p < 1/2, null recurrent if p = 1/2 (see Σ_{i=0}^{∞} πi = 1), and transient if p > 1/2

Discrete-time Markov process XIV
An autorickshaw driver provides service in two zones of New Delhi. Fares picked up in zone A will have destinations in zone A with probability 0.6 or in zone B with probability 0.4. Fares picked up in zone B will have destinations in zone A with probability 0.3 or in zone B with probability 0.7. The driver's expected profit for a trip entirely in zone A is 40 Rupees (Rps); for a trip entirely in zone B it is 80 Rps; and for a trip that involves both zones it is 110 Rps.
• Find the stationary probability that the driver is in each zone.
• What is the expected profit of the driver?
(40 × 0.6 + 110 × 0.4)πA + (80 × 0.7 + 110 × 0.3)πB
= 68πA + 89πB
= 68πA + 89(1 − πA) = 89 − 21πA

Discrete-time Markov process XV
Diksha possesses 5 umbrellas which she employs in going from her home to office, and vice versa. If she is at home (the office) at the beginning (end) of a day and it is raining, then she will take an umbrella with her to the office (home), provided there is one to be taken. If it is not raining, then she never takes an umbrella. Assume that, independent of the past, it rains at the beginning (end) of a day with probability p.
• By defining a Markov chain with 6 states which enables us to determine the proportion of time that our TA gets wet, draw its state transition diagram by specifying all state transition probabilities (Note: she gets wet if it is raining and all umbrellas are at her other location.)
• Find the probability that our TA gets wet.
• At what value of p can the chance for our TA to get wet be highest?
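For the autorickshaw example, the zone process has transition matrix P = [0.6 0.4; 0.3 0.7]; a quick numeric sketch (the values 3/7 and 80 Rps follow from the slide's formula but are not stated on it):

P = [0.6 0.4; 0.3 0.7];
piA = P(2,1)/(P(1,2) + P(2,1));   % detailed balance for a 2-state chain: piA = 3/7
profit = 89 - 21*piA              % = 80 Rps per trip on average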
Drift and Stability I
Suppose an irreducible, aperiodic, discrete-time MC
• The chain is 'stable' if πj > 0 for all j
• Drift is defined as
Di = E[Xn+1 − Xn | Xn = i] = Σ_{k=−i}^{∞} k Pi(i+k)
– If Di > 0, the process tends to move up to higher states from state i
– If Di < 0, the process tends to visit lower states from state i
– In the previous slide, Di = 1·p − 1·(1 − p) = 2p − 1
Pakes' lemma:
1) Di < ∞ for all i
2) For some scalar δ > 0 and integer ī ≥ 0,
Di ≤ −δ for all i > ī
Then, the MC has a stationary distribution

Drift and Stability II
Proof: let β = max_{i ≤ ī} Di (see page 264 in the textbook)
E[Xn | X0 = i] − i = E[Xn − Xn−1 + Xn−1 − Xn−2 + ··· + X1 − X0 | X0 = i]
= Σ_{k=1}^{n} E[Xk − Xk−1 | X0 = i]
= Σ_{k=1}^{n} Σ_{j=0}^{∞} E[Xk − Xk−1 | Xk−1 = j] Pr[Xk−1 = j | X0 = i]
≤ Σ_{k=1}^{n} ( Σ_{j=0}^{ī} β Pr[Xk−1 = j | X0 = i] + Σ_{j=ī+1}^{∞} E[Xk − Xk−1 | Xk−1 = j] Pr[Xk−1 = j | X0 = i] )
where each expectation in the second sum is ≤ −δ

Drift and Stability III
(Continued)
E[Xn | X0 = i] − i ≤ Σ_{k=1}^{n} ( β Σ_{j=0}^{ī} Pr[Xk−1 = j | X0 = i] − δ (1 − Σ_{j=0}^{ī} Pr[Xk−1 = j | X0 = i]) )
= (β + δ) Σ_{k=1}^{n} Σ_{j=0}^{ī} Pr[Xk−1 = j | X0 = i] − nδ
from which we can get
0 ≤ E[Xn | X0 = i] ≤ n ( (β + δ) Σ_{j=0}^{ī} (1/n) Σ_{k=1}^{n} p(k)ij − δ ) + i

Drift and Stability IV
Dividing by n and letting n → ∞ yields
0 ≤ (β + δ) Σ_{j=0}^{ī} ( (1/n) Σ_{k=1}^{n} p(k)ij ) − δ + i/n → (β + δ) Σ_{j=0}^{ī} πj − δ ≥ 0
– πj = lim_{n→∞} (1/n) Σ_{k=1}^{n} p(k)ij (Cesàro limit) = lim_{n→∞} p(n)ij
– This implies that for j ∈ {0, ..., ī}, we have πj > 0.
Kaplan's instability lemma: the converse of the stability lemma
• There exist integers ī > 0 and k such that
Di > 0 for all i > ī
and p(k)ij = 0 for all i and j such that 0 ≤ j ≤ i − k.
Then, the Markov chain does not have a stationary distribution
Review on Poisson process I
Properties of a Poisson process, Λ(t), for some finite λ (arrivals/sec):
P1) Independent increments:
The numbers of arrivals in disjoint intervals, e.g., [t1, t2] and [t3, t4], are independent random variables. The counting distribution is
Pr[Λ(t) = k] = ((λt)^k / k!) e^{−λt} for k = 0, 1, ...
where t = ti+1 − ti.
P2) Stationary increments:
The number of events (or arrivals) in (t, t + h] is independent of t.
Using the PGF of Λ(t), E[z^Λ(t)] = Σ_{k=0}^{∞} z^k Pr[Λ(t) = k] = e^{λt(z−1)}:
E[z^Λ(t+h)] = E[z^Λ(t) · z^{Λ(t+h)−Λ(t)}] = E[z^Λ(t)] · E[z^{Λ(t+h)−Λ(t)}] (due to P1)
⇒ E[z^{Λ(t+h)−Λ(t)}] = e^{λ(t+h)(z−1)} / e^{λt(z−1)} = e^{λh(z−1)}.

Review on Poisson process II
P3) Interarrival (or inter-occurrence) times between Poisson arrivals are exponentially distributed:
Suppose τ1, τ2, τ3, ... are the epochs of the first, second and third arrivals; then the interarrival times are t1 = τ1, t2 = τ2 − τ1 and t3 = τ3 − τ2, and generally tn = τn − τn−1 with τ0 = 0.
[Timeline of arrival epochs]
1. For t1, we have Pr[Λ(t) = 0] = e^{−λt} = Pr[t1 ≥ t] for t ≥ 0, which means that t1 is exponentially distributed with mean 1/λ.
2. For t2, we get
Pr[t2 > t | t1 = x] = Pr[Λ(t + x) − Λ(x) = 0] = Pr[Λ(t) = 0] = e^{−λt},
which also means that t2 is independent of t1 and has the same distribution as t1. Similarly t3, t4, ... are iid.
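P1–P3 are easy to see numerically: generate iid exponential interarrivals and count arrivals per unit interval. A minimal sketch (uses exprnd from the Statistics Toolbox, as the simulation code later in these notes does):

lambda = 2; N = 1e5;
tau = cumsum(exprnd(1/lambda, 1, N));   % arrival epochs from iid exponential gaps
K = floor(tau(end));
counts = histc(tau, 0:K); counts = counts(1:K);   % arrivals in [0,1), [1,2), ...
[mean(counts) var(counts)]              % both ~ lambda: mean = variance for Poisson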

Review on Poisson process III
P4) The converse of P3 is true:
If the sequence of interarrival times {ti} is iid with exponential density function λe^{−λt}, t ≥ 0, then the number of arrivals in the interval [0, t], Λ(t), is a Poisson process.
[Timeline of arrival epochs]
Let Y denote the sum of j independent rv's with this exponential density; then Y is Erlang-j distributed, fY(y) = λ(λy)^{j−1} e^{−λy} / (j − 1)!:
Pr[Λ(t) = j] = ∫_0^t Pr[0 arrivals in (y, t] | Y = y] fY(y) dy
= ∫_0^t e^{−λ(t−y)} fY(y) dy = (λt)^j e^{−λt} / j!.

Review on Poisson process IV
P5) For a short interval, the probability that an arrival occurs in the interval is proportional to the interval size, i.e.,
lim_{h→0} Pr[Λ(h) = 1]/h = lim_{h→0} e^{−λh}(λh)/h = λ.
Or, we have Pr[Λ(h) = 1] = λh + o(h), where lim_{h→0} o(h)/h = 0
P6) The probability of two or more arrivals in an interval of length h gets small as h → 0. For every t ≥ 0,
lim_{h→0} Pr[Λ(h) ≥ 2]/h = lim_{h→0} (1 − e^{−λh} − λh e^{−λh})/h = 0 (L'Hôpital's rule)
Review on Poisson process V
P7) Merging: If the Λi(t)'s are mutually independent Poisson processes with rates λi, the superposition process Λ(t) = Σ_{i=1}^{k} Λi(t) is a Poisson process with rate λ = Σ_{i=1}^{k} λi
Note: If the interarrival times of the i-th stream are a sequence of iid rv's but not necessarily exponentially distributed, then Λ(t) tends to a Poisson process as k → ∞. [D. Cox, Renewal Theory]
P8) Splitting: If an arrival randomly chooses the i-th branch with probability πi, the arrival process at the i-th branch, Λi(t), is Poisson with rate λi (= πi λ). Moreover, Λi(t) is independent of Λj(t) for any pair i and j (i ≠ j).
[Figure: merging of streams into one link; splitting of one stream into branches]

Continuous-time Markov process I
A stochastic process is called a continuous-time MC if it satisfies
Pr[X(tk+1) = xk+1 | X(tk) = xk, X(tk−1) = xk−1, ..., X(t1) = x1] = Pr[X(tk+1) = xk+1 | X(tk) = xk]
X(t) is a time-homogeneous continuous-time MC if
Pr[X(t + s) = j | X(s) = i] = pij(t) (independent of s)
which is analogous to pij in a discrete-time MC
[Figure: a sample path of a continuous-time MC over states 1–4; circles mark the sojourn time in a state, arrows the times of state change]

Continuous-time Markov process II
State occupancy time follows an exponential distribution
• Let Ti be the sojourn (or occupancy) time of X(t) in state i before making a transition to any other state.
– Ti is assumed to be exponentially distributed with mean 1/vi.
• For all s ≥ 0 and t ≥ 0, due to the Markov property of this process,
Pr[Ti > s + t | Ti > s] = Pr[Ti > t] = e^{−vi t}.
Only the exponential distribution satisfies this (memoryless) property.
Semi-Markov process:
• The process jumps to state j. Such a jump depends only on the previous state.
• Tj for all j follows a general (independent) distribution.

Continuous-time Markov process III
State transition rate
qii(δ) = Pr[the process remains in state i during δ sec]
= Pr[Ti > δ] = e^{−vi δ} = 1 − viδ + (viδ)²/2! − ··· = 1 − viδ + o(δ)
Or, let vi be the rate at which the process moves out of state i:
lim_{δ→0} (1 − qii(δ))/δ = lim_{δ→0} (viδ + o(δ))/δ = vi
: departures from state i form a Poisson (point) process with mean rate vi
Continuous-time Markov process IV
State probabilities πj(t) = Pr[X(t) = j]. For δ > 0,
πj(t + δ) = Pr[X(t + δ) = j]
= Σ_i Pr[X(t + δ) = j | X(t) = i] Pr[X(t) = i]
= Σ_i qij(δ) πi(t)   ⇐⇒   π(n+1)i = Σ_j pji π(n)j (DTMC)
[Figure: transitions into state j from the other states over time]

Continuous-time Markov process V
Subtracting πj(t) from both sides,
πj(t + δ) − πj(t) = Σ_i qij(δ)πi(t) − πj(t)
= Σ_{i≠j} qij(δ)πi(t) + (qjj(δ) − 1)πj(t)
Dividing both sides by δ,
lim_{δ→0} (πj(t + δ) − πj(t))/δ = dπj(t)/dt
= lim_{δ→0} (1/δ)[ Σ_{i≠j} qij(δ)πi(t) + (qjj(δ) − 1)πj(t) ] = Σ_i γij πi(t), with γjj = −vj,
which is a form of the Chapman-Kolmogorov equations:
dπj(t)/dt = Σ_i γij πi(t)

Continuous-time Markov process VI
As t → ∞, the system reaches 'equilibrium' or 'steady state':
dπj(t)/dt → 0 and πj(∞) = πj
0 = Σ_i γij πi, or vj πj = Σ_{i≠j} γij πi   (γjj = −vj, with vj = Σ_{k≠j} γjk)
which is called the global balance equation, together with Σ_j πj = 1.
[State transition rate diagram]

Continuous-time Markov process VII
In matrix form,
dπ(t)/dt = π(t) Q and π(t)·1 = 1
whose solution is given by
π(t) = π(0) e^{Qt}
As t → ∞, π(∞) ≜ π = [πi], and
π Q = 0 with π·1 = 1, where
Q = [ −v0  γ01  γ02  γ03  ··· ;
       γ10  −v1  γ12  γ13  ··· ;
       γ20  γ21  −v2  γ23  ··· ;
       ···                      ]
Q is called the infinitesimal generator or rate matrix.
Continuous-time Markov Process VIII
Comparison between discrete- and continuous-time MC
[Figure: sample paths of a discrete-time Markov process (state changes at integer times) and a continuous-time Markov process (circles mark the sojourn time in a state, arrows the times of state change)]

Two-state CTMC I
A queueing system alternates between two states. In state 0, the system is idle and waiting for a customer to arrive. This idle time is an exponential random variable with mean 1/α. In state 1, the system is busy servicing a customer. The time in the busy state is an exponential random variable with mean 1/β.
• Find the state transition rate matrix:
Q = [ γ00 γ01 ; γ10 γ11 ] = [ −α α ; β −β ]
• Draw the state transition rate diagram
[Two states, 0 and 1, with rate α from 0 to 1 and rate β from 1 to 0]

Two-state CTMC II
Find the state probabilities with initial state probabilities π0(0) and π1(0): use dπj(t)/dt = Σ_i γij πi(t):
π0′(t) = −απ0(t) + βπ1(t) and π1′(t) = απ0(t) − βπ1(t)
• Using π0(t) + π1(t) = 1, we have
π0′(t) = −απ0(t) + β(1 − π0(t)) = −(α + β)π0(t) + β, with π0(0) = p0
• Assume π0(t) = C1 e^{−at} + C2:
(a) Find the homogeneous part: π0′(t) + (α + β)π0(t) = 0
(b) Find a particular solution of π0′(t) + (α + β)π0(t) = β
(c) Using (a) and (b) and determining the coefficients, the solution is
π0(t) = β/(α + β) + C e^{−(α+β)t}, with C = p0 − β/(α + β)

Cartridge Inventory I
An office orders laser printer cartridges in batches of four cartridges. Suppose that each cartridge lasts for an exponentially distributed time with mean 1 month. Assume that a new batch of four cartridges becomes available as soon as the last cartridge in a batch runs out.
• Find the state transition rate matrix:
Q = [ −1 0 0 1 ;
       1 −1 0 0 ;
       0 1 −1 0 ;
       0 0 1 −1 ]
• Find the stationary pmf for N(t), the number of cartridges available at time t.
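The closed-form π0(t) above can be checked against the matrix-exponential solution π(t) = π(0)e^{Qt}; a sketch with assumed rates α = 1, β = 2:

alpha = 1; beta = 2; p0 = 1;          % start idle: pi(0) = [1 0]
Q = [-alpha alpha; beta -beta];
t = 0.7;
pi_t = [p0 1-p0] * expm(Q*t);         % numerical solution
pi0_cf = beta/(alpha+beta) + (p0 - beta/(alpha+beta))*exp(-(alpha+beta)*t);
[pi_t(1) pi0_cf]                      % the two values agree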
Cartridge Inventory II
• Transient behavior of πi(t): π(t) = π(0) e^{Qt} = π(0) E e^{Λt} E^{−1}
– The eigenvector and eigenvalue matrices E and Λ are given by
E = (1/2) [ 1  1   1   1 ;
            1  i  −i  −1 ;
            1 −1  −1   1 ;
            1 −i   i  −1 ]
Λ = diag(0, −1 − i, −1 + i, −2)
– note that i = √−1; use 'expm' in Matlab
[Figure: transient probabilities π1(t), ..., π4(t) versus time t]

Barber shop I
Customers arrive at a barber shop according to a Poisson process with rate λ. One barber serves the customers on a first-come first-served basis. The service time Si is exponentially distributed with mean 1/µ (sec). The number of customers in the system, N(t) for t ≥ 0, forms a Markov chain:
N(t + τ) = max(N(t) − B(τ) + A(τ), 0)
State transition probabilities (see the properties of the Poisson process):
Pr[0 arrivals (or departures) in (t, t + δ)] = 1 − λδ + o(δ) (or 1 − µδ + o(δ))
Pr[1 arrival (or departure) in (t, t + δ)] = λδ + o(δ) (or µδ + o(δ))
Pr[more than 1 arrival (or departure) in (t, t + δ)] = o(δ)
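Returning to the cartridge example above: the transient curves πi(t) in its figure can be reproduced directly with expm, as the slide suggests. A sketch (the state ordering is assumed to match the rows of Q, with the initial state taken as a fresh batch):

Q = [-1 0 0 1; 1 -1 0 0; 0 1 -1 0; 0 0 1 -1];
pi0 = [1 0 0 0];                 % assumption: start in the state matching Q's first row
t = 0:0.1:6;
pit = zeros(numel(t), 4);
for k = 1:numel(t)
    pit(k,:) = pi0 * expm(Q*t(k));
end
plot(t, pit)                     % every curve converges to the uniform stationary pmf 1/4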

Barber shop II
Find Pn(t) ≜ Pr[N(t) = n]. For n ≥ 1,
Pn(t + δ) = Pn(t) Pr[0 arrivals & 0 departures in (t, t + δ)]
+ Pn−1(t) Pr[1 arrival & 0 departures in (t, t + δ)]
+ Pn+1(t) Pr[0 arrivals & 1 departure in (t, t + δ)] + o(δ)
= Pn(t)(1 − λδ)(1 − µδ) + Pn−1(t)(λδ)(1 − µδ) + Pn+1(t)(1 − λδ)(µδ) + o(δ).
Rearranging and dividing by δ,
(Pn(t + δ) − Pn(t))/δ = −(λ + µ)Pn(t) + λPn−1(t) + µPn+1(t) + o(δ)/δ
As δ → 0, for n > 0 we have
dPn(t)/dt = −(λ + µ)Pn(t) [rate out of state n] + λPn−1(t) [rate from n − 1 to n] + µPn+1(t) [rate from n + 1 to n].

Barber shop III
For n = 0, we have
dP0(t)/dt = −λP0(t) + µP1(t).
As t → ∞, i.e., in steady state, we have Pn(∞) = πn with dPn(t)/dt = 0:
λπ0 = µπ1
(λ + µ)πn = λπn−1 + µπn+1 for n ≥ 1.
[State transition rate diagram: birth-death chain with rates λ up and µ down]
The solution of the above equations is (ρ = λ/µ):
πn = ρ^n π0 and 1 = π0 (1 + Σ_{i=1}^{∞} ρ^i) ⇒ π0 = 1 − ρ
Barber shop IV
ρ: the server's utilization (< 1, i.e., λ < µ)
Mean number of customers in the system:
E[N] = Σ_{n=0}^{∞} n πn = ρ/(1 − ρ)
= ρ (in server) + ρ²/(1 − ρ) (in queue)
[Figure: an M/M/1 system with 1/µ = 1 — number of customers in the system and mean system response time (sec) versus ρ; simulation agrees with analysis]

Barbershop V
Recall the state transition rate matrix Q of a continuous-time MC (slide Continuous-time Markov process VII):
π Q = 0 and π·1 = 1
– What are γij and vi in the M/M/1 queue?
γij = λ if j = i + 1; µ if j = i − 1; −(λ + µ) if j = i (for i ≥ 1; γ00 = −λ); 0 otherwise
If a and b denote an interarrival and a service time, respectively, then the sojourn time in state i ≥ 1 is min(a, b), which is exponentially distributed with rate vi = λ + µ (and v0 = λ).
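The analysis curves in the Barber shop IV figure come from E[N] = ρ/(1−ρ) and, via Little's theorem (later in these notes), T = E[N]/λ = 1/(µ−λ); a sketch reproducing them for 1/µ = 1:

mu = 1; rho = 0.1:0.025:0.9;
lambda = rho*mu;
EN = rho./(1-rho);        % mean number in the system
T = 1./(mu - lambda);     % mean response time, = EN./lambda by Little's theorem
plot(rho, EN, rho, T)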

Barbershop VI
The barbershop at the student activity center opens up for business at t = 0. Customers arrive at random according to a Poisson process with mean rate λ (customers/sec). Assume that there is one barber and that each haircut takes X sec, which is exponentially distributed with mean 1/µ, i.e., b(x) = µe^{−µx}.
(a) Find the probability that the second arriving customer will not have to wait
(b) Find the mean waiting time of the second arriving customer

Barbershop VII
Distribution of the sojourn time, T:
TN = S1 + S2 + ··· + SN (the customers ahead) + SN+1
An arriving customer finds N customers in the system (including the customer in the server)
– By the memoryless property of the exponential distribution, the remaining service time of the customer in service is also exponentially distributed, so TN is Erlang-(N + 1):
fT(t) = Σ_{i=0}^{∞} µ ((µt)^i / i!) e^{−µt} πi
= Σ_{i=0}^{∞} µ ((µt)^i / i!) e^{−µt} ρ^i (1 − ρ) = µ(1 − ρ) e^{−µt} e^{µρt} = µ(1 − ρ) e^{−µ(1−ρ)t}
which can also be obtained via the Laplace transform of the distribution of the Si.
Sum of independent & identical exponential R.V.s
Use the Laplace transform of an exponentially distributed R.V.:
L*(s) = ∫_0^∞ e^{−ts} µe^{−µt} dt = µ/(s + µ)
Laplace transform of the sum of N exponentially distributed R.V.s:
L*N(s) = ( µ/(s + µ) )^N
Use the inverse transform pair:
(t^n/n!) e^{−at} ⟺ 1/(s + a)^{n+1}
so the sum has the Erlang-N density fN(t) = µ(µt)^{N−1} e^{−µt}/(N − 1)!

Barbershop simulation I
Discrete event simulation
[Flowchart: sim_time = 0 → generate an arrival: sim_time = sim_time + interarrival time, Queue = Queue + 1 → schedule the next event: if the next interarrival time < service time, the next event is an arrival; if the service time < the next interarrival time, the next event is a departure: sim_time = sim_time + service time, Queue = Queue − 1; if the queue is empty, generate a new arrival]

Barbershop simulation II
clear
% Define variables
global arrival departure mservice_time
arrival = 1; departure = -1; mservice_time = 1;
% Set simulation parameters
sim_length = 30000; max_queue = 1000;
% To get delay statistics
system_queue = zeros(1,max_queue);
k = 0;
for arrival_rate = 0.1:0.025:0.97
    k = k + 1;
    % x(k) denotes utilization
    x(k) = arrival_rate*mservice_time;
    % initialize
    sim_time = 0; num_arrivals = 0; num_system = 0; upon_arrival = 0; total_delay = 0; num_served = 0;
    % Assuming that queue is empty
    event = arrival; event_time = exprnd(1/arrival_rate);
    sim_time = sim_time + event_time;
    while (sim_time < sim_length),
        % If an arrival occurs,
        if event == arrival
            num_arrivals = num_arrivals + 1;
            num_system = num_system + 1;
            % Record arrival time of the customer
            system_queue(num_system) = sim_time;
            upon_arrival = upon_arrival + num_system;
            % To see whether one new arrival comes or a new departure occurs
            [event, event_time] = schedule_next_event(arrival_rate);

Barbershop simulation III
        % If a departure occurs,
        elseif event == departure
            delay_per_arrival = sim_time - system_queue(1);
            system_queue(1:max_queue-1) = system_queue(2:max_queue);
            total_delay = total_delay + delay_per_arrival;
            num_system = num_system - 1;
            num_served = num_served + 1;
            if num_system == 0
                % nothing to serve, schedule an arrival
                event = arrival;
                event_time = exprnd(1/arrival_rate);
            elseif num_system > 0
                % still the system has customers to serve
                [event, event_time] = schedule_next_event(arrival_rate);
            end
        end
        sim_time = sim_time + event_time;
    end
    ana_queue_length(k) = (x(k)/(1-x(k)));
    ana_response_time(k) = 1/(1/mservice_time-arrival_rate);
    % Mean queue length seen by arrivals (per arrival, cf. PASTA)
    sim_queue_length(k) = upon_arrival/num_arrivals;
    sim_response_time(k) = total_delay/num_served;
end
Barbershop simulation IV
function [event, event_time] = schedule_next_event(arrival_rate)
global arrival departure mservice_time
minter_arrival = 1/arrival_rate;
inter_arrival = exprnd(minter_arrival);
service_time = exprnd(mservice_time);
if inter_arrival < service_time
    event = arrival;
    event_time = inter_arrival;
else
    event = departure;
    event_time = service_time;
end

Queueing systems I
[Figure: a generic queueing system — an arrival stream, a waiting room, and servers]
The arrival times, the size of the demand for service, the service capacity and the size of the waiting room may be (random) variables.
Queueing discipline: specifies which customer to pick next for service.
• First come first served (FCFS, or FIFO)
• Last come first served (LCFS, LIFO)
• Random order, processor sharing (PS), round robin (RR)
• Priority (preemptive: resume, non-resume; non-preemptive)
• Shortest job first (SJF) and longest job first (LJF)

Queueing systems II
Customer behavior: jockeying, reneging, balking, etc.
Kendall's notation: A/B/m/K/N
– A: arrival (interarrival time) distribution; B: service time distribution; m: number of servers; K: queue (system) size (default ∞); N: population size (default ∞)
For A and B:
• M: Markovian, exponential distribution
• D: deterministic
• GI: general independent
• Ek: Erlang-k
• Hk: mixture of k exponentials
• PH: phase-type distribution
E.g.: M/D/2, M/M/c, G/G/1, etc.; the barbershop is an M/M/1 queue.

Queueing systems III
Performance measures:
• N(t) = Nq(t) + NS(t): number in the system
• Nq(t): number in the queue
• NS(t): number in service
• W: waiting time in the queue
• T: total time (or response time) in the system
• τ: service time
• Throughput: γ ≜ mean # of customers served per unit time
1. γ for a non-blocking system = min(λ, mµ)
2. γ for a blocking system = (1 − PB)λ, where PB = blocking probability
• Utilization: ρ ≜ fraction of time the server is busy
ρ = load/capacity = lim_{T→∞} λT/(µT) = λ/µ for a single-server queue
= lim_{T→∞} λT/(mµT) = λ/(mµ) for an m-server queue
Little's theorem I
A data communication line delivers a block of information every 10 µsec. A decoder checks each block for errors and corrects the errors if necessary. It takes 1 µsec to determine whether a block has any errors. In addition to the time taken to check the block, if the block has one error it takes 5 µsec to correct it, and if it has more than one error it takes 20 µsec to correct the errors. Blocks wait in a queue when the decoder falls behind. Suppose that the decoder is initially empty and that the numbers of errors in the first ten blocks are 0, 1, 3, 1, 0, 4, 0, 1, 1, 0, 2.
(a) Using Kendall's notation, classify this queueing system.
(b) Plot the number of blocks, N(t), in the decoder as a function of time, specifying the time instants on the x-axis at which each event occurs.
(c) Find the mean number of blocks in the decoder.
(d) What percentage of the time is the decoder empty?

Little's theorem II
Any queueing system in steady state: N = λT
• N: average number of customers in the system
• λ: steady-state arrival rate; the arrivals need not be Poisson
• T: average delay per customer
[Figure: cumulative numbers of arrivals α(t) and departures β(t) versus time; the time in system of each customer is the horizontal gap between the two curves]
Proof: For a system with N(0) = 0 and N(t) = 0 as t → ∞,
Nt = (1/t) ∫_0^t N(τ) dτ = (1/t) Σ_{i=1}^{α(t)} Ti = (α(t)/t) · (Σ_i Ti / α(t)) = λt · Tt.
If N(t) ≠ 0, we have (β(t)/t) · (Σ_{i=0}^{β(t)} Ti / β(t)) ≤ Nt ≤ λt Tt.

Little's theorem III
As an alternative, in terms of the cumulative processes,
N(t) = α(t) − β(t), and let γ(t) = ∫_0^t N(τ) dτ (the area between the two curves), so that Nt = γ(t)/t
– See the variable 'num_system' in the previous Matlab code
λt = α(t)/t
– 'num_arrivals' in the code (t corresponds to 'sim_length')
Tt = γ(t)/α(t) = (γ(t)/t) · (t/α(t)) = Nt/λt
– response time per customer, from 'total_delay'
As t → ∞, we have N = λT,
valid for any queue (even with any service order) as long as the limits of λt and Tt exist as t → ∞

Little's theorem IV
[Figure: a finite queue; a network of queues — Little's theorem applies to each subsystem separately]
Applied to the waiting room and the server separately:
λT = λ(W + x̄) = Nq + ρ
Increasing the arrival and transmission rates by the same factor
In a packet transmission system,
• The arrival rate (packets/sec) is increased from λ to Kλ for K > 1
• The packet length distribution remains the same (exponential), with mean 1/µ bits
• The transmission capacity (C bps) is increased by a factor of K
Performance
• The average number of packets in the system remains the same:
N = ρ/(1 − ρ) with ρ = λ/(µC)
• The average delay per packet shrinks by a factor of K:
λW = N → W = N/(Kλ)
Aggregation is better: increasing a transmission line by K times can carry K times as many packets/sec with a K times smaller average delay per packet

Statistical multiplexing vs TDMA or FDMA
Multiplexing: m Poisson packet streams, each with rate λ/m (packets/sec), are transmitted over a communication link with a 1/µ exponentially distributed packet transmission time
a) Statistical multiplexing: T = 1/(µ − λ)
b) TDMA or FDMA: T = m/(µ − λ)
When do we need TDMA or FDMA?
– In a multiplexer, packet generation times overlap, so it must buffer and delay some of the packets

Little's theorem: example I
Estimating throughput in a time-sharing system
Suppose a time-sharing computer system with N terminals. A user logs into the system through a terminal and, after an initial reflection period of average length R, submits a job that requires an average processing time P at the computer. Jobs queue up inside the computer and are served by a single CPU according to some unspecified priority or time-sharing rule.
[Figure: N terminals connected to a time-sharing computer; to estimate the maximum attainable throughput, a departing user is assumed to be immediately replaced by a new one]
What is the maximum sustainable throughput of the system?
– Assume that there is always a user ready to take the place of a departing user, so the number of users in the system is always N

Little's theorem: example II
The average time a user spends in the system:
T = R + D → R + P ≤ T ≤ R + NP
– D: the average delay between the time a job is submitted to the computer and the time its execution is completed; D ∈ [P, NP]
Combining this with λ = N/T, we obtain
N/(R + NP) ≤ λ ≤ min{ 1/P, N/(R + P) }
– the throughput is also bounded above by 1/P, the maximum job execution rate, since the computer cannot process more than 1/P jobs per unit time in the long run
Little's theorem: example III
Using T = N/λ, we can rewrite the bounds for the average user delay when the system is fully loaded:
max{NP, R + P} ≤ T ≤ R + NP
[Figure: bounds on attainable throughput and on average user delay versus the number of terminals N; throughput is limited by the number of terminals for small N and by the CPU processing capacity 1/P for large N, while the delay grows essentially in proportion to N]

Poisson Arrivals See Time Averages (PASTA) theorem I
Suppose a random process which spends its time in different states Ej
In equilibrium, we can associate with each state Ej two different probabilities
• The probability of the state as seen by an outside random observer
– πj: the probability that the system is in state Ej at a random instant
• The probability of the state as seen by an arriving customer
– πj*: the probability that the system is in state Ej just before a (randomly chosen) arrival
In general, we have πj ≠ πj*
When the arrival process is Poisson, we have
πj = πj*

PASTA theorem II
For a stochastic process, N ≡ {N(t), t ≥ 0}, and an arbitrary subset B of its state space:
U(t) = 1 if N(t) ∈ B, 0 otherwise   ⇒   V(t) = (1/t) ∫_0^t U(τ) dτ
For a Poisson arrival process A(t),
Y(t) = ∫_0^t U(τ) dA(τ)   ⇒   Z(t) = Y(t)/A(t)
Lack of Anticipation Assumption (LAA): for each t ≥ 0, {A(t + u) − A(t), u ≥ 0} and {U(s), 0 ≤ s ≤ t} are independent: future interarrival times and the service times of previously arrived customers are independent.
Under LAA, as t → ∞, PASTA ensures
Z(t) → V(∞) w.p. 1 if V(t) → V(∞) w.p. 1

PASTA theorem III
Proof:
• For sufficiently large n, Y(t) is approximated as
Yn(t) = Σ_{k=0}^{n−1} U(k t/n) [A((k + 1)t/n) − A(k t/n)]
• LAA decouples the expectation (each increment has mean λt/n):
E[Yn(t)] = λt E[ Σ_{k=0}^{n−1} U(k t/n)/n ]
• As n → ∞, if |Yn(t)| is bounded,
lim_{n→∞} E[Yn(t)] = E[Y(t)] = λt E[V(t)] = λ E[ ∫_0^t U(τ) dτ ]  □
: the expected number of arrivals who find the system in state B equals the arrival rate times the expected length of time the system spends there.
Systems where PASTA does not hold
Ex1) D/D/1 queue
• Deterministic arrivals every 10 msec
• Deterministic service times of 9 msec
[Sample path: arrivals at t = 0, 10, 20, ...; departures at t = 9, 19, ...]
• Arrivals always find the system empty: π0* = 1
• The system is occupied on average with probability 0.9
Ex2) LAA violated: the service time of a current customer depends on an interarrival time of a future customer
• Your own PC (one customer, one server)
• Your own PC is always free when you need it: π0* = 1
• π0 = the proportion of time the PC is free (< 1)

M/M/1/K I
M/M/1/K: the system can accommodate K customers (including one in service and the waiting customers)
[State transition rate diagram: birth-death chain over 0, 1, ..., K]
• State balance equations:
λπ0 = µπ1
(λ + µ)πi = λπi−1 + µπi+1 for 1 ≤ i ≤ K − 1, and µπK = λπK−1
After rearranging, we have
λπi−1 = µπi for 1 ≤ i ≤ K
• For i ∈ {0, 1, ..., K}, the steady-state probabilities are
πn = ρ^n π0 and Σ_{n=0}^{K} πn = 1 ⇒ π0 = (1 − ρ)/(1 − ρ^{K+1})
M/M/1/K II
• πK: the probability that an arriving customer finds the system full. Due to PASTA, this is the blocking probability:
πK = ρ^K (1 − ρ)/(1 − ρ^{K+1})
• Blocking probability in simulation:
PB = (total # of blocked arrivals at arrival instants)/(total # of arrivals at the system)
[Figure: PB versus ρ for K = 5 and K = 10; analysis and simulation agree]

M/M/1/K Simulation I
clear
% Define variables
global arrival departure mservice_time
arrival = 1; departure = -1; mservice_time = 1;
% Define simulation parameters
sim_length = 30000; K = 10; system_queue = zeros(1,K);
k = 0; max_iter = 5;
for arrival_rate = 0.1:0.025:0.97
    k = k + 1;
    x(k) = arrival_rate*mservice_time;
    for iter = 1:max_iter
        % re-initialize for each independent replication
        sim_time=0; num_arrivals=0; num_system=0; upon_arrival=0; total_delay=0; num_served=0; dropped=0;
        % Assuming that queue is empty
        event = arrival; event_time = exprnd(1/arrival_rate);
        sim_time = sim_time + event_time;
        while (sim_time < sim_length),
            % If an arrival occurs,
            if event == arrival
                num_arrivals = num_arrivals + 1;
                if num_system == K
                    dropped = dropped + 1;
                else
                    num_system = num_system + 1;
                    system_queue(num_system) = sim_time;
                    upon_arrival = upon_arrival + num_system;
                end
                % To see whether one new arrival comes or a new departure occurs
                [event, event_time] = schedule_next_event(arrival_rate);
M/M/1/K Simulation II
            % If a departure occurs,
            elseif event == departure
                delay_per_arrival = sim_time - system_queue(1);
                system_queue(1:K-1) = system_queue(2:K);
                total_delay = total_delay + delay_per_arrival;
                num_system = num_system - 1;
                num_served = num_served + 1;
                if num_system == 0
                    % nothing to serve, schedule an arrival
                    event = arrival;
                    event_time = exprnd(1/arrival_rate);
                elseif num_system > 0
                    % still the system has customers to serve
                    [event, event_time] = schedule_next_event(arrival_rate);
                end
            end
            sim_time = sim_time + event_time;
        end
        Pd_iter(iter) = dropped/num_arrivals;
    end
    piK(k) = x(k)^K*(1-x(k))./(1-x(k)^(K+1));
    Pd(k) = mean(Pd_iter);
end
%%%%%%%%%%%
%% use the previous schedule_next_event function

M/M/m queue I
M/M/m: there are m parallel servers, whose service times are exponentially distributed with mean 1/µ.
[State transition rate diagram of M/M/m: birth-death chain with arrival rate λ and departure rate min(n, m)µ in state n]
When m servers are busy, the time until the next departure, X, is
X = min(τ1, τ2, ..., τm) ⇒ Pr[X > t] = Pr[min(τ1, τ2, ..., τm) > t] = Π_{i=1}^{m} Pr[τi > t] = e^{−mµt} (i.i.d.)
Global balance equations:
λπ0 = µπ1
(λ + min(n, m)µ)πn = λπn−1 + min(n + 1, m)µπn+1 for n ≥ 1

M/M/m queue II
The previous global balance equations can be rewritten as
λπn−1 = min(n, m)µ πn for n ≥ 1
Using a = λ/µ and ρ = λ/(mµ),
πn = (a^n/n!) π0 for n ≤ m, and πn = (a^m/m!) ρ^{n−m} π0 for n ≥ m
From the normalization condition, π0 is obtained:
1 = Σ_{i=0}^{∞} πi = π0 ( Σ_{i=0}^{m−1} a^i/i! + (a^m/m!) Σ_{i=m}^{∞} ρ^{i−m} )
Erlang C formula, C(m, a):
C(m, a) = Pr[W > 0] = Pr[N ≥ m] = Σ_{i=m}^{∞} πi = ((mρ)^m/m!) · π0/(1 − ρ)

M/M/c/c I
c servers, and only c customers can be accommodated
[State transition rate diagram: birth-death chain 0, ..., c with arrival rate λ and departure rate nµ in state n]
The balance equations are (a = λ/µ, the offered traffic in Erlangs):
λπn−1 = nµπn ⇒ πn = (a/n)πn−1 = (a^n/n!) π0
Using Σ_{n=0}^{c} πn = 1, we have
πn = (a^n/n!) ( Σ_{i=0}^{c} a^i/i! )^{−1}
Erlang B formula: B(c, a) = πc
– valid for the M/G/c/c system: the result depends on the service time distribution only through its mean
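Numerically, C(m, a) is most easily evaluated from B(m, a) through the standard identity C(m, a) = mB(m, a)/(m − a(1 − B(m, a))); a sketch with assumed values m = 5, a = 3:

m = 5; a = 3;
B = (a^m/factorial(m)) / sum(a.^(0:m)./factorial(0:m));   % Erlang B
C = m*B/(m - a*(1 - B))                                    % Erlang C = Pr[W > 0]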
M/M/c/c II
Erlang capacity: telephone systems with c channels
[Figure: Erlang B blocking probability B(c, a) versus the offered traffic intensity a, for c = 1, 2, ..., 10 (left) and for c = 10, 20, ..., 100 (right)]

Example: a system with blocking I
In Select-city shopping mall, customers arrive at its underground parking lot according to a Poisson process with a rate of 60 cars per hour. Parking time follows a Weibull distribution with mean 2.5 hours, and the parking lot can accommodate 150 cars. When the parking lot is full, an arriving customer has to park his car somewhere else. Find the fraction of customers finding all places occupied upon arrival.
[Figure: two distributions with the same mean — Weibull f(x) = (k/α)(x/α)^{k−1} e^{−(x/α)^k} with α = 2.7228, k = 5, and exponential f(x) = (1/α)e^{−x/α}, x in hours]
– Mean of the Weibull distribution: αΓ(1 + 1/k), where Γ(x) = ∫_0^∞ t^{x−1} e^{−t} dt is the gamma function
[Figure: blocking probability PB versus offered traffic a, for c = 3 and c = 5; analysis and simulation agree]

Example: a system with blocking II
• c = 150 and a = λ/µ = 60 × 2.5 = 150
B(c, a) = (a^c/c!) / Σ_{i=0}^{c} a^i/i!
• Dividing the numerator and the denominator by Σ_{n=0}^{c−1} a^n/n!,
B(c, a) = [(a^c/c!)/Σ_{n=0}^{c−1} a^n/n!] / [1 + (a^c/c!)/Σ_{n=0}^{c−1} a^n/n!]
= (a/c)B(c − 1, a) / (1 + (a/c)B(c − 1, a)) = aB(c − 1, a)/(c + aB(c − 1, a))
with B(0, a) = 1

Finite source population: M/M/C/C/K system I
Consider the loss system (no waiting places) in the case where the arrivals originate from a finite population of sources: the total number of customers is K
[Figure: K sources (1, 2, ..., K) sharing C channels]
• The time to the next call attempt by a customer, the so-called thinking time (idle time) of the customer, obeys an exponential distribution with mean 1/λ (sec)
• Blocked calls are lost
– a blocked call does not lead to reattempts; the customer starts a new thinking time, again with the same exponential distribution with mean 1/λ
– the call holding time is exponentially distributed with mean 1/µ
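The Erlang B recursion above is numerically stable even for c = 150, unlike evaluating 150! directly; a sketch (erlangB is a hypothetical helper name):

function B = erlangB(c, a)
% Erlang B via the recursion B(c,a) = a*B(c-1,a)/(c + a*B(c-1,a)), with B(0,a) = 1
B = 1;
for n = 1:c
    B = a*B/(n + a*B);
end
end
% e.g. erlangB(150, 150) evaluates the parking-lot example above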
M/M/C/C/K system II
If C ≥ K, each customer has its own server, i.e., no blocking.
• Each user alternates between two states: active with mean 1/µ and idle with mean 1/λ
• The probability for a user to be idle or active is
π0 = (1/λ)/((1/λ) + (1/µ)) and π1 = (1/µ)/((1/λ) + (1/µ))
• Call arrival rate: π0 λ; offered load (or carried load per source): π1 = a/(1 + a), with a = λ/µ
If C < K, the system is a birth-death chain with global balance equations
((K − i)λ + iµ)πi = (K − i + 1)λπi−1 + (i + 1)µπi+1

M/M/C/C/K system III
• For j = 1, 2, ..., C, we have
(K − j + 1)λπj−1 = jµπj ⇒ πj = [K!/(j!(K − j)!)] a^j π0.
• Applying Σ_{j=0}^{C} πj = 1,
πj = [K!/(j!(K − j)!)] a^j / Σ_{k=0}^{C} [K!/(k!(K − k)!)] a^k
Time blocking (or congestion): the proportion of time the system spends in state C; the equilibrium probability of state C is
PB = πC
– The probability of all resources being busy in a given observation period
– Insensitivity: like the Erlang B formula, this result is insensitive to the form of the holding time distribution (though the derivation above was explicitly based on the assumption of an exponential holding time distribution)

M/M/C/C/K system IV
Call blocking: the probability that an arriving call is blocked, i.e., PL
• The arrival rate is state-dependent, i.e., (K − N(t))λ: not Poisson.
• PASTA does not hold: the time blocking PB cannot represent PL
• λT: call arrivals on average,
λT ∝ Σ_{i=0}^{C} (K − i)λπi
– PL: the probability that a call finds the system blocked
– If λT = 10000 and PL = 0.01, λT PL = 100 calls are lost
• λC: call arrivals when the system is in the blocking state,
λC ∝ (K − C)λπC
– PB λC: calls blocked at the arrival instants
PL λT = PB λC
– Among the total arrivals, those that find the system blocked must equal the call arrivals that see the busy system

M/M/C/C/K system V
• Call blocking PL can be obtained by
PL λT = PB λC → PL = PB (λC/λT) ≤ PB
• Engset formula:
PL(K) = (K − C)λπC / Σ_{i=0}^{C} (K − i)λπi = (K − C)[K!/(C!(K − C)!)] a^C / Σ_{i=0}^{C} (K − i)[K!/(i!(K − i)!)] a^i
= [(K − 1)!/(C!(K − 1 − C)!)] a^C / Σ_{i=0}^{C} [(K − 1)!/(i!(K − 1 − i)!)] a^i
– The state distribution seen by an arriving customer is the same as the equilibrium distribution in a system with one less customer. It is as if the arriving customer were an "outside observer"
– PL(K) = PB(K − 1); as K → ∞, PL → PB
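The Engset formula is straightforward to evaluate in the same style as the other sketches (nchoosek is built-in; the values of K, C, a below are assumptions for illustration):

K = 10; C = 4; a = 0.2;
num = nchoosek(K-1, C) * a^C;
den = 0;
for i = 0:C
    den = den + nchoosek(K-1, i) * a^i;
end
PL = num/den    % call blocking; the time blocking of a system with K-1 sources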
Where are we?
Elementary queueing models
– M/M/1, M/M/C, M/M/C/C/K: product-form solutions
– Bulk queues (not discussed here)
Intermediate queueing models (product-form solutions)
– Time reversibility of Markov processes
– Detailed balance equations of time-reversible MCs
– Multidimensional birth-death processes
– Networks of queues: open and closed networks
Advanced queueing models
– M/G/1 type queues: embedded MC and mean-value analysis
– M/G/1 with vacations and priority queues
– G/M/m queue
More advanced queueing models (omitted)
– Algorithmic approaches to obtain steady-state solutions

Time Reversibility of discrete-time MC I
For an irreducible, aperiodic, discrete-time MC, (Xn, Xn+1, ...), having transition probabilities pij and stationary distribution πi for all i:
The time-reversed MC is defined as X*n = Xτ−n for an arbitrary τ > 0
[Figure: a forward process and its time-reversed process]
1) Transition probabilities of X*n:
p*ij = πj pji / πi
2) X*n and Xn have the same stationary distribution πi:
Σ_{i=0}^{∞} πi p*ij = πj (proof on the next slide)

Time Reversibility of discrete-time MC II
• Proof for 1), p*ij = πj pji/πi:
p*ij = Pr[Xm = j | Xm+1 = i, Xm+2 = i2, ..., Xm+k = ik]
= Pr[Xm = j, Xm+1 = i, Xm+2 = i2, ..., Xm+k = ik] / Pr[Xm+1 = i, Xm+2 = i2, ..., Xm+k = ik]
= ( Pr[Xm = j, Xm+1 = i] Pr[Xm+2 = i2, ..., Xm+k = ik | Xm = j, Xm+1 = i] ) / ( Pr[Xm+1 = i] Pr[Xm+2 = i2, ..., Xm+k = ik | Xm+1 = i] )
= Pr[Xm = j, Xm+1 = i] / Pr[Xm+1 = i]   (the two conditional factors are equal by the Markov property)
= Pr[Xm+1 = i | Xm = j] Pr[Xm = j] / Pr[Xm+1 = i]
= pji πj / πi
• Proof for 2): using the above result,
Σ_{i∈S} πi p*ij = Σ_{i∈S} πi (πj pji/πi) = πj Σ_{i∈S} pji = πj

Time Reversibility of discrete-time MC III
A Markov process, Xn, is said to be reversible if
– the transition probabilities of the forward and reversed chains are the same:
p*ij = Pr[Xm = j | Xm+1 = i] = pij = Pr[Xm+1 = j | Xm = i]
• Time reversibility ⇔ the detailed balance equations (DBEs) hold:
πi p*ij = πj pji → πi pij = πj pji (detailed balance equations)
What types of Markov processes satisfy the detailed balance equations? Discrete-time birth-death (BD) processes:
• Transitions occur only between neighboring states: pij = 0 for |i − j| > 1
[BD chain diagram over states 0, 1, 2, ...]
Time Reversibility of discrete-time MC IV
A transmitter's queue with stop-and-wait ARQ (θ = qr) from Mid-term I
• Is this process reversible?
[Birth-death-like chain over states 0, 1, 2, ...]
• Global balance equations (GBEs):
π0 = (1 − p)π0 + (1 − p)θπ1
π1 = pπ0 + (pθ + (1 − p)(1 − θ))π1 + (1 − p)θπ2
For i = 2, 3, ..., we have
πi = p(1 − θ)πi−1 + (pθ + (1 − p)(1 − θ))πi + (1 − p)θπi+1
• Instead, we can use DBEs, or simplify the GBEs using cuts between {0, ..., n} and {n + 1, n + 2, ...}:
p(1 − θ)πn = (1 − p)θπn+1 ↔ Σ_{j=0}^{n} Σ_{i=n+1}^{∞} πj pji = Σ_{j=0}^{n} Σ_{i=n+1}^{∞} πi pij

Time Reversibility of discrete-time MC V
Kolmogorov criterion
• A discrete-time Markov chain is reversible if and only if
pi1i2 pi2i3 ··· pin−1in pini1 = pi1in pinin−1 ··· pi3i2 pi2i1
for any finite sequence of states i1, i2, ..., in and any n
Proof:
• For a reversible chain, multiplying the detailed balance equations around any cycle (e.g., 0 → 1 → 2 → 3 → 0) gives the criterion directly
• Conversely, fixing two states, i1 = i and in = j, and multiplying over all intermediate states,
pii2 pi2i3 ··· pin−1j pji = pij pjin−1 ··· pi3i2 pi2i

Time Reversibility of discrete-time MC VI
• From the Kolmogorov criterion, summing over the intermediate states i2, ..., in−1 gives
p(n−1)ij pji = pij p(n−1)ji
As n → ∞, we have
lim_{n→∞} p(n−1)ij pji = lim_{n→∞} pij p(n−1)ji → πj pji = πi pij
Inspect whether the following two-state MC is reversible:
P = [ 0 1 ; 0.5 0.5 ]
– It is a small BD process
– Using the state probabilities π0 = 1/3 and π1 = 2/3,
π0 p01 = (1/3)·1 = π1 p10 = (2/3)·(1/2)

Time Reversibility of discrete-time MC VII
Inspect whether the following three-state MC is reversible:
P = [ 0 0.6 0.4 ; 0.1 0.8 0.1 ; 0.5 0 0.5 ]
• Using the Kolmogorov criterion,
p12 p23 p31 = 0.6 × 0.1 × 0.5 ≠ p13 p32 p21 = 0.4 × 0 × 0.1 = 0
so it is not reversible
• Inspecting the state transition diagram, it is not a BD process
If the state transition diagram of a Markov process is a tree, then the process is time reversible
– A generalization of BD processes: at each cut boundary, the DBE is satisfied
Exercise
Consider a transmitter that uses the STOP-and-WAIT (SW) ARQ protocol to transmit a frame. Each frame can be successfully transmitted with probability q, while the probability that the ACK to the frame arrives at the transmitter corrupted with an error or is lost is 1 − r. Assume that the ACK from the receiver always arrives at this transmitter just before the time-out, tout, if it is not lost. During each tout, at the transmitter's queue, one frame is generated from the upper layer with probability p and passed down to this queue. Note that we assume 0 < p, q, r < 1.
(a) Show that the stochastic process describing the transmitter's queue is a Markov process.
(b) Find the global balance equations of the Markov chain.
(c) Is the Markov chain periodic or aperiodic? Why?
(d) Under what condition can this Markov chain be positive recurrent, transient, or null recurrent?
(e) Determine the state probabilities πi for i ≥ 0, the probability that there are i frames in the queue, at steady state.
(f) What is the mean number of frames in the transmitter's queue? This includes one frame in service.

Relation between DTMC and CTMC I
Recall an embedded MC: each time a state, say i, is entered, an exponentially distributed state occupancy time is selected. When the time is up, the next state j is selected according to the transition probabilities pij
[Figure: a sample path of a continuous-time Markov process over states 1–4]
• Ni(n): the number of times state i occurs in the first n transitions
• Ti(j): the occupancy time of the j-th visit to state i
The proportion of time spent by X(t) in state i after the first n transitions:
(time spent in state i)/(time spent in all states) = Σ_{j=1}^{Ni(n)} Ti(j) / Σ_i Σ_{j=1}^{Ni(n)} Ti(j)

Relation between DTMC and CTMC II
As n → ∞, using πi = lim_{n→∞} Ni(n)/n, we have
[ (Ni(n)/n)·(1/Ni(n)) Σ_{j=1}^{Ni(n)} Ti(j) ] / [ Σ_i (Ni(n)/n)·(1/Ni(n)) Σ_{j=1}^{Ni(n)} Ti(j) ] → πi E[Ti] / Σ_i πi E[Ti] ≜ φi, with E[Ti] = 1/vi,
where πi is the unique pmf solution to
πj = Σ_i πi pij and Σ_j πj = 1 (∗)
The long-term proportion of time spent in state i approaches
φi = (πi/vi) / Σ_i (πi/vi); with c ≜ 1/Σ_i(πi/vi), this reads πi = vi φi/c
Substituting πi = (vi φi)/c into (∗) yields
vj φj/c = (1/c) Σ_i vi φi pij → vj φj = Σ_i φi vi pij = Σ_i φi γij

Relation between DTMC and CTMC III
Recall the M/M/1 queue
a) CTMC [birth-death chain 0, 1, 2, 3, 4, ... with rates λ up and µ down]
b) Embedded MC [chain 0, 1, 2, 3, 4, ...; from state 0 the chain moves to 1 with probability 1; from state i ≥ 1 it moves up with probability p = λ/(λ + µ) and down with probability q = µ/(λ + µ)]
In the embedded MC, we have the following global balance equations:
π0 = qπ1
π1 = π0 + qπ2
πi = pπi−1 + qπi+1 for i ≥ 2
Applying a cut between states i and i + 1: pπi = qπi+1, i.e., πi+1 = (p/q)πi for i ≥ 1
Relation between DTMC and CTMC IV
Using the normalization condition, Σ_{i=0}^{∞} πi = 1,
πi = (p/q)^{i−1} (1/q) π0 for i ≥ 1, and π0 = (1 − 2p)/(2(1 − p))
Converting the embedded MC into the CTMC (v0 = λ, vi = λ + µ for i ≥ 1):
φ0 = (c/v0)π0 = (c/λ)π0 and φi = cπi/vi = (c/(λ + µ))πi
Determine c:
Σ_{i=0}^{∞} φi = 1 → c ( π0/λ + (1/(λ + µ)) Σ_{i=1}^{∞} πi ) = 1 → c = 2λ
Finally, we get φi = ρ^i (1 − ρ) for i = 0, 1, 2, ...: the M/M/1 state probabilities.

Continuous-time reversible MC I
For a continuous-time MC, X(t), with stationary state probabilities θi, consider its discrete-time embedded Markov chain, whose stationary pmf and state transition probabilities are πi and p̃ij.
[Figure: forward process and reverse process along the embedded Markov process time line]
There is a reversed embedded MC with πi p̃ij = πj p̃*ji for all i ≠ j.
[CTMC: birth-death chain 0, 1, 2, 3, 4, ...; its embedded MC is a BD process]

Continuous-time reversible MC II
Recall the state occupancy time of the forward process:
Pr[Ti > t + s | Ti > t] = Pr[Ti > s] = e^{−vi s}
If X(t) = i, the probability that the reversed process remains in state i for an additional s seconds is
Pr[X(t′) = i, t − s ≤ t′ ≤ t | X(t) = i] = e^{−vi s}
; after staying for t, the probability that it stays s sec more is the same, by memorylessness
[Figure: forward process and reverse process]

Continuous-time reversible MC III
A continuous-time MC whose stationary probability of state i is θi, and whose state transition rate from j to i is γji, has a reversed MC whose state transition rates γ*ij satisfy
γ*ij = vi p̃*ij = vi (πj p̃ji)/πi = (using p̃ji = γji/vj and πi ∝ θi vi from the embedded MC) = θj γji / θi
– p̃*ij (= p̃ij): the state transition probability of the reversed embedded MC
– A continuous-time MC whose state occupancy times are exponentially distributed is reversible if its embedded MC is reversible
Additionally, we have vj = v*j:
Σ_{i≠j} θi γ*ij = Σ_{i≠j} θj γji = θj vj; since this sum must also equal θj v*j (flow balance in the reversed chain), v*j = vj
Continuous-time reversible MC IV
The detailed balance equation holds for continuous-time reversible MCs:
θj γji (input rate to i) = θi γij (output rate from i) for j = i + 1
– Birth-death systems, with γij = 0 for |i − j| > 1
– Since the embedded MC is reversible,
πi p̃ij = πj p̃ji → (vi θi/c) p̃ij = (vj θj/c) p̃ji → θi γij = θj γji
If there exists a set of positive numbers θi that sum to 1 and satisfy
θi γij = θj γji for i ≠ j
then the MC is reversible and θi is the unique stationary distribution
– Birth-death processes, e.g., M/M/1, M/M/c, M/M/∞
Kolmogorov criterion for continuous-time MCs:
– A continuous-time Markov chain is reversible if and only if
γi1i2 γi2i3 ··· γini1 = γi1in γinin−1 ··· γi3i2 γi2i1
– The proof is the same as in the discrete-time reversible MC

M/M/2 queue with heterogeneous servers I
Servers A and B have service rates µA and µB. When the system is empty, arrivals go to A with probability p and to B with probability 1 − p. Otherwise, the head of the queue takes the first free server.
[State transition diagram over states 0, 1A, 1B, 2, 3, ...]
Under what condition is this system time-reversible?
• For n = 2, 3, ...,
πn = π2 (λ/(µA + µB))^{n−2}
• Global balance equations along the cuts:
λπ0 = µA π1,A + µB π1,B
(µA + µB)π2 = λ(π1,A + π1,B)
(µA + λ)π1,A = pλπ0 + µB π2

M/M/2 queue with heterogeneous servers II
After some manipulation,
π1,A = (λ/µA) · (λ + p(µA + µB))/(2λ + µA + µB) · π0
π1,B = (λ/µB) · (λ + (1 − p)(µA + µB))/(2λ + µA + µB) · π0
π2 = (λ²/(µAµB)) · (λ + (1 − p)µA + pµB)/(2λ + µA + µB) · π0
π0 can be determined from π0 + π1,A + π1,B + Σ_{n=2}^{∞} πn = 1
• If the system is reversible, use the detailed balance equations instead:
(1/2)λπ0 = µA π1,A → π1,A = 0.5(λ/µA)π0
(1/2)λπ0 = µB π1,B → π1,B = 0.5(λ/µB)π0
π2 = (0.5λ²/(µAµB))π0
Comparing with the general solution, the two coincide exactly when p = 1/2: the system is time-reversible only for p = 1/2.

Multidimensional Markov chains I
Suppose that X1(t) and X2(t) are independent reversible MCs
• Then, X(t) = (X1(t), X2(t)) is a reversible MC
• Two independent M/M/1 queues, where the arrival and service rates at queue i are λi and µi
– (N1(t), N2(t)) forms an MC
[Figure: two-dimensional lattice of states (n1, n2), with rates λ1, µ1 in the n1 direction and λ2, µ2 in the n2 direction]
Stationary distribution:
p(n1, n2) = (1 − λ1/µ1)(λ1/µ1)^{n1} (1 − λ2/µ2)(λ2/µ2)^{n2}
Detailed balance equations:
µ1 p(n1 + 1, n2) = λ1 p(n1, n2)
µ2 p(n1, n2 + 1) = λ2 p(n1, n2)
Verify that the Markov chain is reversible – Kolmogorov criterion
– Is this a reversible MC?
Multidimensional Markov chains II Truncation of a Reversible Markov chain I
X (t) is a reversible Markov process with state space S and stationary
– Owing to time-reversibility, detailed balance equations hold distribution, πj for j ∈ S.
– Truncated to a set E ⊂ S such that the resulting chain Y (t) is
µ1 π(n1 + 1, n2 ) = λ1 π(n1 , n2 )
irreducible. Then, Y (t) is reversible and has the stationary
µ2 π(n1 , n2 + 1) = λ2 π(n1 , n2 ) distribution
πj
π̂j = P j∈E
– Stationary state distribution k∈E πk

λ1
   n1 
λ1 λ2
   n2
λ2 – This is the conditional prob. that. in steady state, the original
π(n1 , n2 ) = 1 − 1− process is at state j, given that it is somewhere in E
µ1 µ1 µ2 µ2
Proof:
• Can be generalized for any number of independent queues, e.g.,
πj πi
M/M/1, M/M/c or M/M/∞ π̂j qji = π̂i qij ⇒ P qji = P qij ⇒ πj qji = πi qij
π k k∈E πk
| k∈E
{z }
π(n1 , n2 , . . . , nK ) = π1 (n1 )π2 (n2 ) · · · πK (nK ) π̂j
X X πj
– ’Product form’ distribution π̂k = P =1
k∈E πk
k∈E j∈E

3-109 3-110

Truncation of a Reversible Markov chain II Truncation of a Reversible Markov chain III

Markov processes for M/M/1 and M/M/C are reversible Two independent M/M/1 queues of the previous example share a
• State probabilities of M/M/1/K queue
6-25 Example: Two Queues
common buffer ofwith
size BJoint
(=2) Buffer
• An arriving customer who finds B customers waiting is blocked
(1 − ρ)ρi (1 − ρ)ρi λ „ The two independent M/M/1 queues of λ1
πi = PK = for ρ = the previous example share a common 03 13
i=0 (1 − ρ)ρ
i 1 − ρK+1 µ buffer of size B – arrival that finds B µ1
λ2 µ2 λ2 µ2
customers waiting is blocked λ1 λ1
– Truncated version of M/M/1/∞ queue „ State space restricted to 02 12 22
µ1 µ1
• State probabilities of M/M/c/c queue E = {( n1 , n2 ) : ( n1 − 1)+ + ( n2 − 1)+ ≤ B}
λ2 µ2 λ2 µ2 λ2 µ2
„ Distribution of truncated chain: λ1 λ1 λ1
– M/M/c/∞ queue with ρ = λ/(mµ) and a = λ/µ 01 11 21 31
p( n1 , n2 ) = p(0,0) ⋅ ρ1n1 ρ 2n2 , ( n1 , n2 ) ∈ E
µ1 µ1 µ1
c
a Normalizing: λ2 µ2 λ2 µ2 λ2 µ2 λ2 µ2
πn = ρmax(0,n−c) π0 „
−1 λ1 λ1 λ1
n!  
p(0,0) =  ∑ ρ1n1 ρ 2n2  00 10 20 30
µ1 µ1 µ1
– Truncated version of M/M/c/∞ queue  1 2
( n , n )∈E 
Theorem specifies joint distribution
• State up E = {(n
space: 1 , ndiagram
State −B1)=2+ + (n2 − 1)+ ≤ B}
2 ) : (n1for
c c
an
ai to the normalization •
constant
Stationary state distribution of the truncated MC
X X
π̂n = πn / πn = / 0 Calculation of normalization constant is
n! i=0 i!
n=0 often tedious π(n1 , n2 ) = π(0, 0)ρn1 1 ρn2 2 for (n1 , n2 ) ∈ E
ρn1 1 ρn2 2
P
3-111 • π(0, 0) is obtained by π(0, 0) = 1/ (n1 ,n2 )∈E
3-112
Truncation of a Reversible Markov chain IV Truncation of a Reversible Markov chain V
Two session classes in a circuit switching system with preferential • The state probabilities can be obtained as
treatment for one class for a total of C channels ρn1 1 ρn2 2
• Type 1: Poisson arrivals with λ1 require exponentially distributed P(n1 , n2 ) = P(0, 0) for 0 ≤ n1 ≤ K , n1 +n2 ≤ C , n2 ≥ 0
n1 ! n2 !
service rate µ1 – admissible only up to K P
– P(0, 0) can be determined by n1 ,n2 P(n1 , n2 ) = 1
• Type 2: Poisson arrivals with λ2 require exponentially distributed
• Blocking probability of type 1
service rate µ2 – can be accepted until C channels are used up
PC −K ρK1 ρn22 PK−1 ρn11 ρC2 −n1
374
S = {(n1 , n2 )|0 ≤IEEE ≤ K , nON1 VEHICULAR
n1TRANSACTIONS + n2 ≤ C}
TECHNOLOGY, VOL. 51, NO. 2, MARCH 2002
n2 =0 K! · n2 ! + n1 =0 n1 ! · (C −n1 )!
Pb1 = PK n
ρ11 PC −n1 ρ22
n

n1 =0 n1 ! n2 =0 n2 !

• Blocking probability of type 2


n C −n1
PK ρ11 ρ
n1 =0 n1 ! · (C2−n1 )!
Pb2 = P ρ11
n PC −n1 ρn22
K
n1 =0 n1 ! n2 =0 n2 !

For this kind of systems, blocking probabilities are valid for a broad
class of holding time distributions
3-113 3-114
Fig. 2. Transition diagram for the new call bounding scheme.

handoff calls in the cell. Let and . From From this, the traffic intensities for new calls and handoff calls
the detailed balance equation, we obtain using the above common average channel holding time 1
are given by

Network of queues Networks of queues


From the normalization equation, we obtain
Applying these formulas in (1) and (2), we obtain similar re-
Open queueing networks sults for new call blocking probability and handoff call blocking
Two queues in tandem (BG, p.210)
probability following the traditional approach (one-dimensional
Markov chain theory), which obviously provides only an ap-
proximation. We will show later that significantly inaccurate • Assume that service time is proportional to the packet length
results are obtained using this approach, which implies that we
cannot use the traditional approach if the channel holding times Queue 1 Queue 2
for new calls and handoff calls are distinct with different av-
erage values. We observe that there is one case where these two
From this, we obtain the formulas for new call blocking proba-
approaches give the same results, i.e., when the nonprioritized
bility and handoff call blocking probability as follows:
scheme is used: . This is because we have the following
identity: .
As a final remark, this scheme may work best when the call Queue 1 Queue 2 is empty when arrives
(1) arrivals are bursty. When a big burst of calls arrives in a cell (for
Closed queueing networks example, before or after a football game), if too many new calls
accepted, the network may not be able to handle the resulting
Queue 2
(2) handoff traffic, which will lead to severe call dropping. The new
Three things are needed call bounding scheme, however, could handle the problem well
Arrivals at queue 2 get bursty
by spreading the potential bursty calls (users will try again when time
Kleinrock’s independence assumption
Obviously, when , the new call bounding scheme be- the first few tries fail). On another note, as we observe in wired
comes the nonprioritized scheme. As we expect, we obtain networks, network traffic tends to be self-similar ([15]). Wire-
– Each link works as an M /M /1 queue less network traffic will behave the same considering more data – Interarrival times at the second queue are strongly correlated with
services will be supported in the wireless networks. This scheme
Burke’s theorem will be useful in the future wireless multimedia networks. the packet length at the first queue or the service time!
– The output process at an M /M /1B. queue
As we mentioned earlier, in most literature the channel
is aScheme
Cutoff Priority Poisson process with • The first queue is an M/M/1, but the second queue cannot be
Instead of putting limitation on the number of new calls, we
mean rate λ
holding times for both new calls and handoff calls are iden-
base on the number of total on-going calls in the cell to make considered as an M/M/1
tically distributed with the same parameter. In this case, the
a decision whether a new arriving call is accepted or not. The
average channel holding time is given by
Time-reversibility scheme works as follows.
Let denote the threshold, upon a new call arrival.3-115
If the total 3-116
(3) number of busy channels is less than , the new call is accepted;
Kleinrock’s Independence Approximation I Kleinrock’s Independence Approximation II

In real networks, many queues interact with each other Suppose several packet streams, each following a unique path through
– a traffic stream departing from one or more queues enters one or the network: appropriate for virtual circuit network, e.g., ATM
more other queues, even after merging with other streams departing
from yet other queues
• Packet interarrival times are correlated with packet lengths.
• Service times at various queue are not independent, e.g.,
state-dependent flow control.
Kleinrock’s independence approximation:
• M /M /1 queueing model works for each link:
– sufficient mixing several packet streams on a transmission line
makes interarrival times and packet lengths independent
• Good approximation when: • xs : arrival rate of packet stream s
* Poisson arrivals at entry points of the network • fij (s): the fraction of the packets of stream s through link (i, j)
* Packet transmission times ‘nearly’ exponential • Total arrival rate at link (i, j)
* Several packet streams merged on each link X
* Densely connected network and moderate to heavy traffic load λij = fij (s)xs
3-117
all packet streams s 3-118
crossing link (i, j)

Kleinrock’s Independence Approximation III Kleinrock’s Independence Approximation IV

Based on M/M/1 (with Kleinrock’s Independence approximation), # In datagram networks including multiple path routing for some
of packets in queue or service at (i, j) on average is origin-destination pairs, M/M/1 approx. often fails
Sec. 3.6
λij • Node A sends trafficNetworks
to nodeof Transmission Lines 213
B along two links with service rate µ
Nij =
µij − λij V2
Figure 3.29 Poisson process with rate .\
divided among two links. If division is
done by randomization, each link behaves
– 1/µij is the average packet transmission time on link (i, j) B
like an M I JI II queue. If division is done
by metering, the whole system behaves I ike
• The average number of packets over all queues and the average V2 an 1\1 11'v112 queue.

delay per packet are – Random splitting:


where! = queue at total
L8.rS is the each linkratemay
arrival in the behave
system. If like an M/M/1
the average processing and
propagation delay el ij at link (i. j) is not negligible, this formula should be adjusted to
X 1X 1
N = Nij and T= Nij TRT = L−(λ/2
γ = -I µ . Ai)
11" - A"
+ Aij elij ) (3.103)
(i,j) (i,j) r (i.j) I) 1)

– Metering: arriving
Finally, the packets are
average delay perassigned to astream
packet of a traffic queue witha the
traversing path psmallest
is given by
P
– γ = s xs : total arrival rate in the system
• As a generalization with proc. & propag. delay, backlog → approximated Tas p = an '"'
L M /M(Ai)
.. /2
'.
. . _with
.. +a-.
ILI)(p,)
1common
+ eli) )
AI))
queue (3.104)
11i)
all (I,))
2
  on path p

TMterms
where the three = in the sum above represent < TRwaiting time in queue, average
average
X  λij 1  (2µ − λ)(1 + ρ) delay, respectively.
transmission time, and processing and propagation
Tp = + + dij 
 
In many networks, the assumption of exponentially distributed packet lengths is not
 µij (µij − λij ) µij ∗ Metering appropriate.

all packet streams s
 destroysGiven
an M/M/1 approximation
a different type of probability distribution of the packet lengths, one
may keep the approximation of independence between queues but use the P-K formula for
| {z }
crossing link (i, j)
queueing delay 3-119 average number in the system in place of the AI /M /1 formula (3.100). Equations (3.101)
3-120
to (3.104) for average delay would then be modified in an obvious way.
For virtual circuit networks (cf. Fig. 3.27), the main approximation involved in the
I\I / M /1 formula (3.101) is due to the correlation of the packet lengths and the packet
interarrival times at the various queues in the network. If somehow this correlation was
Burke’s theorem I Burke’s theorem II
For M /M /1, M /M /c, M /M /∞ with arrival rate λ (without bulk
arrivals and service):
B1. Departure process is Poisson with rate λ. B2. At each time t, the number of jobs in the system at time t is
independent of the sequence of departure times prior to time t
Forward process Reverse process
• The sequence of departure times prior to time t in the forward
process is exactly the sequence of arrival times after time t in the
reverse process.
Arrivals of forward process Departures of reverse process • Since the arrival process in the reverse process is independent
Poisson process, the future arrival process does not depend on the
• The arrival process in the forward process corresponds to the
current number in the system
departure Reverse
process in the reverse process
process • The past departure process in the forward process (which is the
– The departure process in the forward process is the arrival
future arrival in the reverse process) does not depend on the
process in the backward process
current number in the system.
• Because M /M /1 is time-reversible, the reverse process is
statistically identical to the forward process.
– The departures in forward time form a Poisson process, which is
the arrivals in backward time
3-121 3-122

Two M/M/1 Queues in Tandem Open queueing networks

Consider a network of K first-come first serve, single server queue,


The service times of a customer at the first and the second queues are each of which has unlimited queue size and exponential distribution
mutually independent as well as independent of the arrival process. with rate µk . routing path
Queue 1 Queue 2

External arrivals
• Based on Burke’s theorem B1, queue 2 in isolation is an M/M/1
– Pr[m at queue 2] = ρm
2 (1 − ρ2 )

• B2: # of customers presently in queue 1 is independent of the


sequence of departure time prior to t (earlier arrivals at queue 2)
• Traffic equation with routing probability pij or matrix P = [pij ]
– independent of # of customers presently in queue 2
K
X K
X
Pr[n at queue 1 and m at queue 2] λi = αi + λj pji and pji = 1
= Pr[n at queue 1] Pr[m at queue 2] = ρn1 (1 − ρ1 )ρm
2 (1 − ρ2 )
j=1 i=0

– pi0 : flow going to the outside


– λi can be uniquely determined by solving
3-123 3-124
λ = α + λP ⇒ λ = α(I − P)−1
Open queueing networks II Jackson’s theorem I

Let n = (n1 , . . . , nK ) denote a state (row) vector of the network. Using time-reversibility, guess detailed balance equations (DBEs) as
The limiting queue length distribution π(n) λi π(n − ei ) = µi π(n), λi π(n) = µi π(n + ei )
π(n) = lim Pr[X1 (t) = n1 , . . . , XK (t) = nK ] and λj π(n − ei ) = µj π(n + ej − ei ) based on
t→∞

Global balance equation (GBE): total rate out of n = total rate into n
 K
X  K
X K
X
α+ µi π(n) = αi π(n − ei ) + pi0 µi π(n + ei )
i=1 i=1 i=1
| {z } | {z }
external arrivals go outside from i Substituting DBEs into GBE gives us
K X
K K K K PK
j=1 pji λj
X 
X αi µi X X
+ pji µj π(n + ej − ei ) RHS =π(n) + pi0 λi + µi
i=1 j=1 i=1
λi i=1 i=1
λi
| {z } K
K K P
αi + j=1 pji λj
X 
from j to i X
=π(n) pi0 λi + µi
– ei = (0, . . . , 1, . . . , 0), i.e., the 1 is in the i th position i=1 i=1
λi
– π(n − ei ) denotes π(n1 , n2 , . . . , ni − 1, . . . , nK ) | {z }
α
PK
3-125 – in the numerator: λi = αi + j=1 pji λj 3-126

Jackson’s theorem II Summary of time-reversibility: CTMC


From DBEs, we have
λi
π(n1 , . . . , ni , . . . , nK ) = π(n1 , . . . , ni − 1, . . . , nK )
µi The forward CTMC has transition rates γij .
and The reversed chain is a CTMC with transition rates γij∗ =
φj γji
φi
λi
π(n1 , . . . , ni − 1, . . . , nK ) = π(n1 , . . . , ni − 2, . . . , nK ) If we find positive numbers φi , summing to unity and such that
µi P∞ P∞
scalars γij∗ satisfy j=0 γij = j=0 γij∗ for all i ≥ 0, then φi is the
which is finally rearranged as
  ni stationary distribution of both the forward and reverse chains.
λi – Proof
π(n1 , . . . , ni , . . . , nK ) = π(n1 , . . . , 0, . . . , nK )
µi X X X X X
φj γji = φi γij∗ = φi γij∗ = φi γij = φi γij
Repeating for i = 1, 2, . . . , K ,
j6=i j6=i j6=i j6=i j6=i
K  n
Y λi i
π(n) = π(0) : Global balance equation
i=1
µi
Q −1
K P∞
– π(0) = i=1 ni =0 ρni i and ρi = λi /µi

3-127 3-128
Jackson’s theorem: proof of DBEs I Jackson’s theorem: proof of DBEs II

Proving DBEs based on time-reversibility We need to consider the following three cases
• Construct a routing matrix, P ∗ = [pij

], of the reversed process • Arrival to server i outside the network in the forward process
• The rate from node i to j must be the same in the forward and corresponds to a departure out of the network from server i in the
reverse direction, reversed process

π(n)vn,n+ei = π(n + ei )vn+ei ,n
(forward process) λi pij = λj pji∗ (reverse process)
• Departure to the outside in the forward process corresponds to
– λj pji∗ : the output rate from server j is λj , and pji∗ is the rate of
arrival from the outside in the reversed process,
moving from j to i; αi∗ = λi pi0 ; pi0

= αi /λi

π(n)vn,n−ei = π(n − ei )vn−e
We need to show (recall θi γij = θj γji∗ ) i ,n


X X
∗ • Leaving queue i and joining queue j in the forward process
π(n)vn,m = π(m)vm,n and vn,m = vn,m
m m
(vn,n−ei +ej = µi pij ) correspond to leaving queue j and joining

queue i in the reversed process (vn−e i +ej ,n
= µj pji∗ = λi pij µj /λj )

– vn,m and vn,m denote state transition rate of the forward and

reversed process π(n)vn,n−ei +ej = π(n − ei + ej )vn−ei +ej ,n

3-129 3-130

Jackson’s theorem: proof of DBEs III Jackson’s theorem: proof of DBEs IV



1) π(n)vn,n+ei = π(n + ei )vn+ei ,n
: Rearranging the previous eqn. yields
Arrival to server i outside the network in the forward process corresponds to K
Y K
Y
a departure out of the network from server i in the reversed process, i.e., πi (ni )αi πj (nj ) = πi (ni + 1)(αi /ρi ) πj (nj )
j=1,j6=i j=1,j6=i
 K   K 

X X λj pji
vn+ei ,n
=µi 1 − pij∗ = µi 1 − After canceling, we have
j=1
∗ =λ p /λ
Use pij j=1
λi
πi (ni + 1) = ρi πi (ni ) ⇒ πi (n) = ρni (1 − ρi )
j ji i
| {z }

pi0

 K  2) π(n)vn,n−ei = π(n − ei )vn−ei ,n
: Departure to the outside in the forward
µi X
∗ process corresponds to arrival from the outside in the reversed process,
= λi − λj pji = αi /ρi (= vn,n−e ).
λi j=1
i
K K

X X λi pij
| {z } vn−ei ,n
= αi = λi − λj pji∗ = λi − λj
λj
PK
λi =αi + λj pji
j=1 j=1 j=1
| {z }
Substituting this into 1) (vn,n+ei = αi : arrival to server i from outside) Traffic eqn. for the reversed process
K
!
K
Y K
Y X

πi (ni )αi = πi (ni + 1) πj (nj )αi /ρi =λi 1− pij = λi pi0 (= vn,n+ei
).
i=1 j=1,j6=i j=1

3-131 3-132
Jackson’s theorem: proof of DBEs V Jackson’s theorem: proof of DBEs VI

Substituting this with vn,n−ei = µi pi0 (departure to the outside), Summary of transition rates of forward and reverse processes

K
Y K
Y Transition Forward vn,m Reverse vn,m Comment
ρi )ρni i ρi )ρni i −1
PK
(1 − πk (nk )µi pi0 = (1 − πk (nk )λi pi0 n → n + ei αi λi (1 − j=1 pij ) all i
PK
k=1,k6=i k=1,k6=i n → n − ei µi (1 − j=1
pij ) αi µi /λi all i: ni > 0

n → n − ei + ej µi pij λj pji µi /λi all i: ni > 0, all j
3) π(n)vn,n−ei +ej = π(n − ei + ej )vn−ei +ej ,n
: Leaving queue i and

P P
joining queue j in the forward process (vn,n−ei +ej = µi pij ) correspond to 4) Finally, we verify total rate equation, m
vn,m = m
vn,m :
leaving queue j and joining queue i in the reversed process, i.e, X  K
X  X  X 


vn−e = µj pji∗ = λi pij µj /λj , vn,m = λi 1 − pij + αi µi /λi + λj pji µi /λi
i +ej ,n
i j=1 i:ni >0 j
K
|P {z P
P }
n λi − λi pij
Y
(1 − ρi )ρni i (1 − ρj )ρj j πk (nk )µi pij i i j
X X X  αi µi µi

k=1,k6=i,j = λi − (λj − αj ) + + (λi − αi )
K λi λi
i j i:ni >0
n +1
Y
= (1 − ρi )ρni i −1 (1 − ρj )ρj j πk (nk )µj pji∗ ∗ =λ p /λ
| {z
PK
}
use pji i ij j λj =αj + λi pij
k=1,k6=i,j i=1
X X
= αi + µi = vn,m . 
3-133 3-134
i i:ni >0

Open queueing networks: Extension I Open queueing networks: Extension II

– For all state n = (n1 , . . . , nK )


The product-form solution of Jackson’s theorem is valid for the
following network of queues P̂1 (n1 )P̂2 (n2 ) · · · P̂k (nK )
P(n) = ,
• State-dependent service rate
G
P∞ P∞
– 1/µi (ni ): the mean of queue i’s service time exponentially where G = n1 =0 · · · nK =0 P̂1 (n1 ) · · · P̂k (nK )
distributed, when ni is the number of customers in the i th queue
• Multiple classes of customers
just before the customer’s departure
– Provided that the service time distribution at each queue is the
λi same for all customer classes, the product form solution is valid for
ρi (ni ) = , i = 1, . . . , K , ni = 1, 2, . . .
µi (ni ) the system with different classes of customers, i.e.,
K
– λi : total arrival rate at queue i determined by the traffic eqn. X
λj (c) = αj (c) + λi (c)pij (c)
– Define P̂j (nj ) as
i=1

1, if nj = 0, – αj (c): rate of the external arrival of class c at queue j; pij (c) the
P̂j (nj ) =
ρj (1)ρj (2) · · · ρj (nj ), if nj > 0 routing probabilities of class c
– See pp.230-231 in the textbook for more details
3-135 3-136
Open queueing networks: Performance measure Open queueing networks: example A-I

Performance measure
New programs arrive at a CPU according to a Poisson process of rate α. A
• State probability distribution has been derived program spends an exponentially distributed execution time of mean 1/µ1
• Mean # of hops traversed, h, is in the CPU. At the end of this service time, the program execution is
PK complete with probability p or it requires retrieving additional information
λ i=1 λi
h= = PK from secondary storage with probability 1 − p. Suppose that the retrieval of
α i=1 αi
information from secondary storage requires an exponentially distributed
• Throughput of queue i: λi amount of time with mean 1/µ2 . Find the mean time that each program
• Total throughput of the queueing network: α spends in the system.
• Mean number of customers at queue i (ρi = λi /µi )

N i = ρi /(1 − ρi )

• System response time T


K K K
N 1X 1X 1 X λi
T= = Ni = λi Ti = Data Communications EIEN 368 (2014년 2학기)
α α i=1 α i=1 α i=1 µi − λi QUEUEING THEORY Q–69

3-137 ∆ λi 3-138
ρi =
µi
ρi 1
Ni = ; Ti =
1 − ρi µi − λ i
M
X M
X X λi M
ρi N
N= Ni = ; T = = ( )Ti
1 − ρi γ γ
Open queueing networks: example A-II Open queueing
i=1 networks:
i=1 example B-I i=1

eg. Consider the following networks with three routers


Find the mean arrival rate, Consider the following network with three nodes
• External packet arrivals :
• Arrival rate into each queue, λ1 = α + λ2 and λ2 = (1 − p)λ1 γB
•Poisson
External process with γ:APoisson
packet arrivals = 350
λ1 = α/p and λ2 = (1 − p)α/p (packets/sec),
process with γA γB = 3.5 pack-
= 150,
L1 B L3 γCets/sec, γB = 1.5, γC = 1.5.
= 150.
• Each queue behaves like an M/M/1 system, so • Packet length : exponentially
L2
γA A C γC •distributed
Packet lengthwithexponentially
: mean 50 dis-
ρ1 ρ2 L4 tributed with mean 1000 bits/packet.
E[N1 ] = and E[N2 ] = (kbits/packet)
1 − ρ1 1 − ρ2
Assumptions:
where ρ1 = λ1 /µ1 and ρ2 = λ2 /µ2
(a) Packets moving along a path from source to destination have their
Using Little’s result, the total time spent in the system lengths selected independently at each outgoing link
  → Kleinrock’s independence assumption
E[N1 + N2 ] 1 ρ1 ρ2 (b) Channel capacity of link i: Ci = 17.5 Mbps for i = 1, 2, 3, 4
E[T ] = = +
α α 1 − ρ1 1 − ρ2 → Service rate at link i: exponentially distributed with rate
µi = Ci /50000 = 350 packets/sec.

3-139 3-140
Open queueing networks: example B-II Open queueing networks: example B-II

• Traffic matrix (packets per second) • Since α = 650 and λ = 850, the mean number of hops is
from → to A B C
A – 150 200 h = 850/650 = 1.3077
(50% through B)
(50% directly to C) • We get link utilization, mean number and response time as
B 50 – 100 L1 L2 L3 L4
C 100 50 – ρi 300/350=0.857 100/350=0.286 250/350=0.714 200/350 =0.572
Ni 300/50 100/250 250/100 200/150
• Find mean delay from A to C
Ti 1/50=0.02 1/250=0.004 1/100=0.01 1/150=0.0067
• First, we need to know link traffic
traffic type L1 L2 L3 L4 – N i = ρi /(1 − ρi ) and Ti = N i /λi
A→B 150 • Mean delay from A to C
A→C 100 100 100
B→A 50 50
B→C 100
T AC = ( T1 + T2 ) × 0.5 + T3 × 0.5 = 0.017 (sec)
|{z} |{z}
C→A 100 A to B B to C
C→B 50 50
total λ1 = 300 λ2 = 100 λ3 = 250 λ4 = 200 – propagation delay is ignored

3-141 3-142

Closed queueing networks I Closed queueing networks II

Consider a network of K first-come first serve, single server queue, π = ~π · P, and ~π · ~1 = 1, we have
• Using ~
each of which has unlimited queue size and exponential distribution λi = λ(M )πi
with rate µk . There are also a fixed number of customers, say M ,
circulate endlessly in a closed network of queues. – λ(M ): a constant of proportionality, the sum of the arrival rates
PK
in all the queues in the network, and i=1 λi 6= 1
– G(M ) (normalization constant) will take care of λ(M )
Assuming ρi (ni ) = λi /µi (ni ) < 1 for i = 1, . . . , K , we have for all
ni ≥ 0,
– Define P̂j (nj ) as

1, if nj = 0,
P̂j (nj ) =
ρj (1)ρj (2) · · · ρj (nj ), if nj > 0
• Traffic eqn.: no external arrival! The joint state probability is expressed as
K K K K
X X 1 Y X Y
λi = λj pji with pji = 1 π(n) = P̂i (ni ), and G(M ) = P̂i (ni )
G(M ) i=1
j=1 i=0 n1 +···+nK =M i=1

3-143 3-144
Closed queueing networks III Closed queueing networks IV
• ρi : no longer the actual utilization due to λ(M ), i.e., relative • GBE of open queueing networks is reduced to
utilization  K  K K
X X X
• Setting λ(M ) to a value does not change the results α+ µi π(n) = αi π(n − ei ) + pi0 µi π(n + ei )
• The maximum queue size of each queue is M i=1 i=1 i=1
| {z } | {z }
Proof: as in Jackson’s theorem for open queueing networks external arrivals go outside from i
K X
K
• Use time-reversibility: X
+ pji µj π(n − ei + ej )
– routing matrix of the reversed process, pji∗ = λi pij /λj
i=1 j=1
• For state transition between n and n0 = n − ei + ej | {z }
from j to i
π(n0 )vn∗ 0 ,n = π(n)vn,n0 (∗)
• Substituting (1) and (2) into (*), we have
• As in open queueing networks, we have for ni > 0
ρi π(n1 , . . . , ni − 1, . . . , nj + 1, . . . , nK ) = ρj π(n1 , . . . , nK )

vn−ei +ej ,n
=µj pji∗ = µj (λi pij /λj ) (1)
• The proof for the following is given on page 235 (BG)
vn,n−ei +ej =µi pij (2) X X

vn,n0 = vn,n0
Leaving queue i and joining queue j in the forward process
n0 n0
(vn,n−ei +ej = µi pij ) correspond to leaving queue j and joining queue i
in the reversed process, 3-145 3-146

Summary of time-reversibility: CTMC Closed queueing networks V


• µi (n): a service rate when queue i has n customers
• vn,n−ei +ej = µi (n)pij (leaving queue i and joining queue j)
The forward CTMC has transition rates γij . K
X X X
The reversed chain is a CTMC with transition rates γij∗ =
φj γji µi (ni )pij = µi (ni )pij
φi
{(j,i)|ni >0} j=1 {i|ni >0}
If we find positive numbers φi , summing to unity and such that K
P∞ P∞
scalars γij∗ satisfy j=0 γij = j=0 γij∗ for all i ≥ 0, then φi is the
X X X
= µi (ni ) pij = µi (ni )
stationary distribution of both the forward and reverse chains. {i|ni >0} j=1 {i|ni >0}
– Proof

X X X X X • vn,n−ei +ej
= µi (n)pij∗ = µi (n)λj pji /λi using pij∗ = λj pji /λi
φj γji = φi γij∗ = φi γij∗ = φi γij = φi γij
K
j6=i j6=i j6=i j6=i j6=i X µi (ni )λj pji X X µi (ni )λj pji
=
: Global balance equation λi j=1
λi
{(j,i)|ni >0} {i|ni >0}
K
X µi (ni ) X X
= λj pji = µi (ni )
λi j=1
{i|ni >0} {i|ni >0}
| {z }
3-147 λi 3-148
Closed queueing networks VI Closed queueing networks VII
P QK
Dealing with G(M , K ) = n1 +···+nK =M i=1 P̂i (ni ) • Since nk > 0, we change nk = nk0 + 1 for nk0 ≥ 0
Computing G(M , K ) with M customers and K queues iteratively X X n 0 +1
ρn1 1 ρn2 2 · · · ρnk k = ρn1 1 ρn2 2 · · · ρk k
G(m, k) = G(m, k − 1) + ρk G(m − 1, k) n1 +···+nk =m, n1 +···+nk0 +1=m,
nk >0
nk0 >0
with boundary conditions: G(m, 1) = ρm
1 for m = 0, 1, . . . , M , and
n0
X
G(0, k) = 1 for k = 1, 2, · · · , K =ρk ρn1 1 ρn2 2 · · · ρk k
• For m > 0 and k > 1, split the sum into two disjoint sums as n1 +···+nk0 =m−1,
X nk0 >0
G(m, k) = ρn1 1 ρn2 2 · · · ρnk k =ρk G(m − 1, k)
n1 +···+nk =m
X X
= ρn1 1 ρn2 2 · · · ρnk k + ρn1 1 ρn2 2 · · · ρnk k In a closed Jackson network with M customers, the probability that
n1 +···+nk =m, n1 +···+nk =m, at steady-state, the number of customers in station j greater than or
nk =0 nk >0
X n
X equal to m is
= ρn1 1 ρn2 2 · · · ρk−1
k−1
+ ρn1 1 ρn2 2 · · · ρnk k
n1 +···+nk =m, n1 +···+nk =m, G(M − m)
nk =0 nk >0 Pr[xj ≥ m] = ρm
j for 0≤m≤M
| {z } G(M )
G(m,k−1)

3-149 3-150

Closed queueing networks VIII Closed queueing networks IX

Implementation of G(M , K ): µi is not a function of m. • Proof: nj = nj0 + m for nj0 ≥ 0


n
m\k 1 2 ··· k−1 k ··· K
X ρn1 1 · · · ρj j · · · ρnKK
0 1 1 1 1 1 Pr[xj ≥ m] =
G(1, 2) = G(1, 1) + ρ2 G(0, 2)
G(M )
n1 +···+nj +···+nK =M,
1 ρ1
= ρ1 + ρ2 nj ≥m
2 ρ2 G(2, 2) = ρ2
1 + ρ2 G(1, 2)
1 n 0 +m
.
.
X ρn1 1 · · · ρj j · · · ρnKK
. =
m−1 G(M )
m ρm
n1 +···+nj0 +m+···+nK =M,
1
. nj0 +m≥m
.
.
ρm
j
X n0
M ρM
1 G(M, K)
= ρn1 1 · · · ρj j · · · ρnKK
G(M )
n1 +···+nj0 +···+nK =M−m,
If µi (m) = mµi (multiple servers), we generalize nj0 ≥0
m
X (λ(M )πk )i ρm
j
G(m, k) = fk (i)G(m − i, k − 1) and fk (i) = Qi = G(M − m)
G(M )
i=0 j=1 µi (j)

with fk (0) = 1 for all k. • Pr[xj = m] = Pr[xj ≥ m] − Pr[xj ≥ m + 1]


= ρm
j (G(M − m) − ρj G(M − m − 1))/G(M )
3-151 3-152
Closed queueing networks X Closed queueing networks: example A-I

In a closed Jackson network with M customers, the average number Suppose that the computer system given in the open queueing network is
of customers at queue j: now operated so that there are always I programs in the system. Note that
M M the feedback loop around the CPU signifies the completion of one job and
X X G(M − m)
Nj (M ) = Pr[xj ≥ m] = ρm
j
its instantaneous replacement by another one. Find the steady state pmf of
G(M )
m=1 m=1 the system. Find the rate at which programs are completed.
In a closed Jackson network with M customers, the average
throughput of queue j:
G(M − 1)
γj (M ) =µj Pr[xj ≥ 1] = µj ρj
G(M )
G(M − 1)
=λj
G(M )
• Using λi = λ(I )πi with ~
π = ~π P,
– Average throughput is the average rate at which customers are
serviced in the queue. For a single-server queue, the service rate is µj π1 = pπ1 + π2 , π2 = (1 − p)π1 and π1 + π2 = 1
when there are one or more customers in the queue, and 0 when the we have
queue is empty λ(I ) λ(I )(1 − p)
λ1 = λ(I )π1 = and λ2 = λ(I )π2 =
3-153 2−p 2−p 3-154

Closed queueing networks: example A-II Arrival theorem for closed networks I

• For 0 ≤ i ≤ I , ρ1 = λ1 /µ1 and ρ2 = λ2 /µ2 Theorem: In a closed Jackson network with M customers, the
(1 − ρ1 )ρi1 (1− ρ2 )ρI2−i occupancy distribution seen by a customer upon arrival at queue j is
Pr[N1 = i, N2 = I − i] = the same as the occupancy distribution in a closed network with the
S(I )
arriving customer removed, i.e., the system with M − 1 customers
• The normalization constant, S(I ), is obtained by
• In a closed network with M customers, the expected number of
I I +1
X 1 − (ρ1 /ρ2 ) customers found upon arrival by a customer at queue j is equal to
S(I ) = (1−ρ1 )(1−ρ2 ) ρi1 ρI2−i = (1−ρ1 )(1−ρ2 )ρI2
i=0
1 − (ρ1 /ρ2 ) the average number of customers at queue j, when the total
number of customers in the closed network is M − 1
• We then have for 0 ≤ i ≤ I
• An arriving customer sees the system at a state that does not
1−β include itself
Pr[N1 = i, N2 = I − i] = βi
1 − β I +1
Proof:
where β = ρ1 /ρ2 = µ2 /((1 − p)µ1 )
• X (t) = [X1 (t), X2 (t), . . . , XK (t))]: state of the network at time t
• Program completion rate: pλ1
• Tij (t): probability that a customer moves from queue i to j at
λ1 /µ1 = 1 − Pr[N1 = 0] = β(1 − β I )/(1 − β I +1 ) time t +

3-155 3-156
Arrival theorem for closed networks II Mean Value Analysis I
• For any state n with ni > 0, the conditional probability that a Performance measure for closed networks with M customers
customer moving from node i to j finds the network at state n
• Nj (M ): average number of customers in queue j
Pr[X (t) = n, Tij (t)] • Tj (M ): average time a customer spends (per visit) in queue j
αij (n) = Pr[X (t) = n|Tij (t)] =
Pr[Tij (t)] • γj (M ): average throughput of queue j
Pr[Tij (t)|X (t) = n] Pr[X (t) = n]
=P Mean-Value Analysis: Calculates Nj (M ) and Tj (M ) directly, without
m,mi >0 Pr[Tij (t)|X (t) = m] Pr[X (t) = m]
first computing G(M ) or deriving the stationary distribution of the
π(n)µi pij ρn1 1 · · · ρni i · · · ρnKK network
=P =P m1 mi mK
m,mi >0 π(m)µi pij m,mi >0 ρ1 · · · ρi · · · ρK a) The queue length observed by an arriving customer is the same as
– Changing mi = mi0 + 1, mi0 ≥ 0, the queue length in a closed network with one less customer
ρn1 1 · · · ρni i · · · ρnKK b) Little’s result is applicable throughout the network
αij (n) = P mi0 +1
m1
· · · ρm 1. Based on a)
m1 +···+mi0 +1+···+mK =M, ρ1 · · · ρi
K
K
0
mi +1>0 1
Tj (s) = (1 + Nj (s − 1)) for j = 1, . . . , K , s = 1, . . . , M
ρ1 · · · ρni i −1 · · · ρnKK
n1
ρn1 · · · ρini −1 · · · ρnKK µj
= m 0 = 1
P
ρm mK G(M − 1)
1 · · · ρi · · · ρK
1 i
m1 +···+mi0 – Tj (0) = Nj (0) = 0 for j = 1, . . . , K
0
+···+mK =M−1,mi ≥0
3-157 3-158

Mean Value Analysis II Closed queueing networks: example B

2. Based on b), we first have when there are s customers in the Gupta’s truck company owns m trucks: Gupta is interested in the
network probability that 90% of his trucks are in operation
Nj (s) = λj (s)Tj (s) = λ(s)πj Tj (s) (1) • Set a routing matrix P:
| {z }
step 2-b Op LM M
 
and Op 0 0.85 0.15
P=
LM 0.9 0 0.1 
 
K K
X X s Local maintenance
s= Nj (s) = λ(s) πj Tj (s) → λ(s) = PK (2) M 1 0 0
j=1 j=1 πj Tj (s)
j=1
| {z } • With π0 = 0.4796,
step 2-a
π1 = 0.4077, and
Combining (1) and (2) yields Manufacturer π2 = 0.1127, we have
ρ0 = λ(m)π0 /λ0 ,
λ(s)πj Tj (s)
Nj (s) = s PK ρ1 = λ(m)π1 /µ1 , and
j=1 πj Tj (s) ρ2 = λ(m)π2 /µ2
This will be iteratively done for s = 0, 1, . . . , M • We have Pr[O = i, L = j, M = k] = 1 i j k
i! ρ0 ρ1 ρ2 /G(m) and
3-159
k =m−i −j 3-160
Where are we? M/G/1 queue: Embedded MC I

Elementary queueing models


Recall that a continuous-time MC is described by (n, r):
– M/M/1, M/M/C, M/M/C/C/K, ... and bulk queues
– either product-form solutions or use PGF • n: number of customers in the system.
• r: attained or remaining service time of the customer in service.
Intermediate queueing models (product-form solution)
– Time-reversibility of Markov process Due to x, (n, x) is not a countable state space. How can we get rid of x?
– Detailed balance equations of time-reversible MCs What if we observe the system at the end of each service?
– Multidimensional Birth-death processes
– Network of queues: open- and closed networks Xn+1 = max(Xn − 1, 0) + Yn+1
Advanced queueing models
Xn : number of customers in the system left behind by a departure.
– M/G/1 type queue: Embedded MC and Mean-value analysis
Yn : number of arrivals that occur during the service time of the
– M/G/1 with vacations and Priority queues
departing customer.
More materials on queueing models (omitted)
Question: Xn is equal to the queue length seen by an arriving
– G/M/m queue, G/G/1, etc.
customer (queue length just before arrival)? Recall PASTA.
– Algorithmic approaches to get steady-state solutions

3-161 3-162

Distribution Upon Arrival or Departure M/G/1 queue: Embedded MC II

α(t), β(t): number of arrivals and departures (respectively) in (0, t) Defining probability generating function of distribution Xn+1 ,
Un (t): number of times the system goes from n to n + 1 in (0, t); Qn+1 (z) , E[z Xn+1 ] = E[z max(Xn −1,0)+Yn+1 ] = E[z max(Xn −1,0) ]E[z Yn+1 ]
number of times an arriving customer finds n customers in the system
Let Un+1 (z) = E[z Yn+1 ], as n → ∞, Un+1 (z) = U (z) (independent
Vn (t): number of times that the system goes from n + 1 to n;
of n). Then, we have
number of times a departing customer leaves n.

the transition n to n+1 cannot reoccur until after the number in the system drops to n once more X
(i.e., until after the transition n +1 to n reoccurs) Qn+1 (z) =U (z) z k Pr[max(Xn − 1, 0) = k]
k=0
h ∞
X i
=U (z) z 0 Pr[Xn = 0] + z k−1 Pr[Xn = k]
k=1
h i
=U (z) Pr[Xn = 0] + z −1 (Qn (z) − Pr[Xn = 0])

Un (t) and Vn (t) differ by at most one: |Un (t) − Vn (t)| ≤ 1. As n → ∞, we have Qn+1 (z) = Qn (z) = Q(z), and Pr[Xn = 0] = q0 ,
Un (t) Vn (t) Un (t) α(t) Vn (t) β(t) U (z)(z − 1)
lim = lim ⇒ lim = lim Q(z) = q0 .
t→∞ t t→∞ t t→∞ α(t) t t→∞ β(t) t z − U (z)

3-163 3-164
M/G/1 queue: Embedded MC III M/G/1 Queue: Embedded MC IV

We need to find U (z) and q0 . Using U (z|xi = x) = e λx(z−1) , Let fT (t) be probability density function of Tj , i.e., total delay.
Z ∞ ∞ Z ∞
(λt)k −λt
U (z|xi = x)b(x)dx = B ∗ (λ(1 − z)).
X
U (z) = Q(z) = zk e fT (t)dt = T ∗ (λ(1 − z))
0 0 k!
k=0
0 ∗
Since Q(1) = 1, we have q0 = 1 − U (1) = 1 − λ · X = 1 − ρ. where T (s) is the Laplace transform of fT (t). We have
Transform version of Pollaczek-Khinchin (P-K) formula is B ∗ (λ(1 − z))(z − 1)
T ∗ (λ(1 − z)) = (1 − ρ)
z − B ∗ (λ(1 − z))
B ∗ (λ(1 − z))(z − 1)
Q(z) = (1 − ρ)
z − B ∗ (λ(1 − z)) Let s = λ(1 − z), one gets
(1 − ρ)sB ∗ (s) (1 − ρ)s
Letting q = Q 0 (1), one gets W = q/λ − X . T ∗ (s) = = W ∗ (s)B ∗ (s) ⇒ W ∗ (s) =
s − λ + λB ∗ (s) s − λ + λB ∗ (s)
Sojourn time distribution of an M/G/1 system with FIFO service:
In an M/M/1 system, we have B ∗ (s) = µ/(s + µ):
If a customer spends Tj sec in the system, the number of customers it  
leaves behind in the system is the number of customers that arrive λ
W ∗ (s) = (1 − ρ) 1 +
during these Tj sec, due to FIFO. s+µ−λ

3-165 3-166

M/G/1 Queue: Embedded MC V Residual life time∗ I

• Taking the inverse transform of W ∗ (s) (L{Ae −at } ↔ A/(s + a)), Hitchhiker’s paradox:
 
λ
 Cars are passing at a point of a road according to a Poisson process
L−1 {W ∗ (s)} = L−1 (1 − ρ) 1 + with rate λ = 1/10, i.e., 10 min.
s+µ−λ
= (1 − ρ)δ(t) + λ(1 − ρ)e −µ(1−ρ)x , x>0 A hitchhiker arrives to the roadside point at random instant of time.

We can write W ∗ (s) in terms of R0∗ (s) Previous car Next car

time
(1 − ρ)s (1 − ρ)s
W ∗ (s) = ∗
= Hitchhiker arrives
s − λ + λB (s) s − λ(1 − B ∗ (s))
1−ρ 1−ρ What is his mean waiting time for the next car?
= =
1 − B ∗ (s) 1 − ρR0∗ (s) 1. Since he arrives randomly in an interval, it would be 5 min.
1 − λX
sX 2. Due to memoryless property of exponential distribution, it would be

X another 10 min.
= (1 − ρ) (ρR0∗ (s))k
k=0

L. Kleinrock, Queueing systems, vol.1: theory
3-167 3-168
Residual life time II Residual life time III
The distribution of an interval that the hitchhiker captures depends If we take the Laplace transform of the pdf of R0 for 0 ≤ R0 ≤ x,
on both X and fX (x): Z x −sx
0 e 1 − e −sx
E[e −R s |X 0 = x] = dy =
fX 0 (x) = CxfX (x) and C : proportional constant 0 x sx
Unconditioning over X 0 , we have R0∗ (s) and its moments as
R∞
Since 0 fX 0 (x)dx = 1, we have C = 1/E[X ] = 1/X :

xfX (x) 1 − FX (s) X (n+1)
fX 0 (x) = R0∗ (s) = ⇒ E[R0n ] =
X sX (n + 1)X
R∞
Since Pr[R0 < y|X 0 = x] = y/x for 0 ≤ y ≤ x, joint pdf of X and R0 : ∗
where FX (s) = 0
e −sx fX (t)dt
dy xfX (x)dx fX (x)dydx Mean residual time is rewritten as
Pr[y < R0 < y + dy, x < X 0 < x + dx] = =
x X X 
σ2

R = E[R0 ] = 0.5 X + X
Unconditioning over X 0 , X
dy ∞ 1 − FX (y) 1 − FX (y)
Z
fR0 (y)dy = fX (x)dx = dy ⇒ fR0 (y) = Surprisingly, the distribution of the elapsed waiting time, X 0 − R0 , is
X y X X identical to that of the remaining waiting time.
3-169 3-170

M/G/1 Queue: Embedded MC VI M/G/1 Queue: Embedded MC VII

State transition diagram of M/G/1 GBE can be expressed as


n
… …

X
πn = πn+1−k αk + αn π0 for n = 0, 1, 2, · · ·
k=0
P∞
… … … – Q(z) can be also obtained using Q(z) = n=0 πn z n
As an alternative, we define ν0 = 1 and νi = πi /π0

and its state transition probability matrix 1 − α0


ν1 =
  α0
α0 α1 α2 α3 . . . 1 − α1 α1
ν2 = ν1 −
α α α α . . .
 0 1 2 3 Z ∞ α0 α0
(λx)k −λx

P=

0 α0 α1 α2 . . . and αk =

e b(x)dx ..
  k! .
0 0 α0 α1 . . . 0

.. .. .. .. . .
 1 − α1 α2 αi−1 αi−1
. . . . . νi = νi−1 − νi−2 − · · · ν1 −
α0 α0 α0 α0
P∞ P∞ πi
– i=0 νi = 1 + i=1 π0 = 1/π0 and πi = π0 νi
3-171 3-172
M/G/1 queue: Mean value analysis I M/G/1 queue: Mean value analysis II
dW ∗ (s)
We are interested in E[Wi ] = ds (See BG p.186) A sample path of M/G/1 queue
s=0
• Wi : Waiting time in queue of customer i

# of customers
3
• Ri : Residual service time seen by customer i 2
• Xi : Service time of customer i 1
time
• Ni : Number of customers in queue found by customer i 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

i−1
X
Wi = Ri + Xj 8
j=i−Ni 7

Virtual workload
6
5
Taking expectations and using the independence among Xj , 4
3
i−1 2
h X i 1 1
E[Wi ] , W = E[Ri ] + E E[Xj |Ni ] = Ri + Nq time
µ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
j=i−Ni

Since Nq = λW , Ri = R for all i, we have 4


3
R 2
W = 1
1−ρ time
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
3-173 3-174

M/G/1 queue: Mean value analysis III M/G/1 queue: Mean value analysis IV

Time averaged residual time of r(τ ) in the interval [0, t] is From the hitchhiker’s paradox, we have E[R0 ] = E[X 2 ]/(2E[X ])

1 t
Z M(t)
1 X 1 2
PM(t)
1 M (t) i=1 Xi2 R =0 · Pr[N (t) = 0] + E[R0 ] × Pr[N (t) > 0]
R(t) = r(τ )dτ = Xi =
t 0 t i=1 2 2 t M (t) E[X 2 ] λX 2
= × λE[X ] =
2E[X ] 2
– M (t) is the number of service completion within [0, t].
P-K formula for mean waiting time in queue
Residual service time

2 2
λX 2 λ(σX +X )
W = −W ∗0 (s)|s=0 = =
2(1 − ρ) 2 +X
X 2 =σX
2 2(1 − ρ)
1 + Cx2 ρ 1 + Cx2
= X= WM/M/1
time 2 1−ρ 2
2
– Cx2 = σX
2
/X is the coefficient of variation of the service time
– e.g., upon a new service of duration X1 , r(τ ) starts at X1 and decays
– The average time in the system T = W + X
linearly for X1 time units
Eg.: since Cx = 1 in an M/M/1 and Cx = 0 in an M/D/1,
As t → ∞, limt→∞ R(t) = R = λX 2 /2
2 ∗ ρ ρ
Upon a new service of duration λX
, starts at dW
and (s)
decays linearly for time units. WM/M/1 = X > WM/D/1 = X
W = ←→ 1−ρ 2(1 − ρ)
2(1 − ρ) ds s=0 3-175 3-176
Packet arrivals
k
P{X=I+kn}=(l-p)p, k=O.I. ...

The first two moments of the service time are

(X system
Delay analysis of an ARQ x X) ::::::::'':'::':'':'::'V
M/G/1 Queue ,
with vacations I
x, x x v2 V3 X5 V4 X5

Busy period

Vacations

Suppose Go-Back-N x
ARQ system, where
(00 k a packet
00 k is2 00)
successfully Server takes a vacation at the end of each busy period Time
- 2 k 2k
transmitted with probability 1 − p; ACK arrives in+ntx time of N − 1
=(l-p)
• Take an additional vacation if no customers are found at the end of
Figure 3.12 An M/G/I1 system with vacations. At the end of a busy
period, the server goes on vacation for time V with first and second moments

frames without annote


We now error
that
Y
V and V , respectively. If the system is empty at the end of a vacation, the
each vacation: V1 , V2 , ... the durations of the successive vacations
server takes a new vacation. An arriving customer to an empty system must
X X wait until the end of the current vacation to get service.

• Packet arrivals,",pk _1_,


I-p
'"'kk _ _
to a= transmitter’s
P - (Iqueue
P_
_ )2' follows Poisson with mean • A customer finds the system idle (vacation), waits for the end of
k=O k=O P
λ (packets/slot) the vacation period
Start of effective service time
Effective service time Effective service time of packet 4
of packet 1 of packet 2
,. II .. 'II

X, X3
X,

Error Final transmission Error Final transmission Correct Error Error

-
of packet 1 of packet 2

Packets Transmitted
• Residual service time including vacation periods
• We need the first two
Figure 3.17 moments of the
Illustration of the effective service
service time
times of packets in theto
ARQuse P-K Figure 3.13 Residual service times for an M/G/1 system with vacations.
Busy periods alternate with vacation periods.
system of Example 3.15. For example, packet 2 has an effective service time of
n + 1 because there was an error in the first attempt to transmit it following the t M(t) L(t)
formula X
Z
∞ last transmission of packet 1. but no error in the second attempt. 1 1 X 1 2 1X1 2
Np r(τ )dτ = X + V
X= (1 + kN )(1 − p)pk = 1 + t 0 t i=1 2 i t i=1 2 i
1−p
k=0
∞ – M (t): # of services completed by time t
X 2Np N 2 (p + p2 )
X2 = (1 + kN )2 (1 − p)pk = 1 + + – L(t): # of vacations completed by time t
1−p (1 − p)2 3-177 3-178
k=0 II

M/G/1 Queue with vacations II FDM and TDM on a Slot Basis I

• Residual service time including vacation periods is rewritten as Suppose m traffic streams of equal-length packets according to
Poisson process with rate λ/m each
t
PM(t) 1 2
PL(t) 1 2
2 Xi i=1 2 Vi
Z
1 M (t) i=1 L(t) • If the traffic streams are frequency-division multiplexed on m
r(τ )dτ = · + ·
t 0 t }
| {z M (t) t }
| {z L(t) subchannels, the transmission time of each packet is m time units
| {z }
λ as t→∞ 1−ρ
R as t→∞
V
as t→∞ – Using P-K formula, λX 2 /(2(1 − ρ)), with ρ = λ and µ = 1/m,
λX 2 (1 − ρ)V 2 λm
= + =R WFDM =
2 2V 2(1 − λ)
• Using W = R/(1 − ρ), we have • Consider the same FDM, but packet transmissions can start only at
times, m, 2m, 3m,...: slotted FDM
λX 2 V2
W = + – This system gives stations a vacation of m slots
2(1 − ρ) 2V
!
V2
– The sum of waiting time in M/G/1 queue and mean residual WSFDM = WFDM + 0.5m =
vacation times 2V

3-179 3-180
FDM and TDM on a Slot Basis II M/G/1 Queue with Non-Preemptive Priorities I
Customers are divided into K priority classes, k = 1, . . . , K .
• m traffic streams are time-division multiplexed, where one slot
dedicated to each
Sec. 3.5 The M traffic stream as shown below
/ G /1 System 195
Non-preemptive priority
Stream 1 Stream 2 Stream 3 Stream 4
• Service of a customer completes uninterrupted, even if customers
of higher priority arrive in the meantime
--,--IttM-'-------'--...l...----..!-1-!--I__. • A separate (logical) queue is maintained for each class; each time

I. Framek

One time unit per slot


.1_. Frame (k + 1 )
t
the server becomes free, the first customer in the highest priority
queue (that is not empty) enters service
TDM with
Figure 3.20 m
TOM= 4 mtraffic
with = 4 trafficstreams
streams.
• Due to non-preemptive policy, the mean residual service time R0

2
seen by an arriving customer is the same for all priority classes if all
– Service Thus,
timetheto eachaverage
customer's queue, X : is m
total delay moreslots
favorable X 2 than
→in TDM = inmFDM (assuming
customers have the same service time distribution
that > 2). The longer average waiting time in queue for TDM is more than compensated
In
– Frame synchronization delay: m/2
by the faster service time. Contrast this with the Example 3.9, which treats TDM with
slots that are a very small portion of the packet size. Problem 3.33 outlines an altemative
– Using P-K formula, we have Notations
approach for deriving the TDM average delay. (k)
• Nq : mean number of waiting customers belonging to class k in
m
3.5.2 Wand
Reservations TDM =
Polling = WSFDM the queue.
2(1 − λ)
Organizing transmissions from several packet streams into a statistical multiplexing sys- • Wk : mean waiting time of class-k customers
tem requires some form of scheduling. In some cases, this scheduling is naturally and
– System response time: T = 1 + W
easily accomplished; in other cases, however, some TDMform of reservation or polling system
• ρk : utilization, or load of class k, ρk = λk X k .
is required. • R0 : mean residual service time in the server upon arrival
Situations of this type arise often in multiaccess channels, which will be treated
3-181 3-182
extensively in Chapter 4. For a typical example, consider a communication channel that
can be accessed by several spatially separated users; however, only one user can transmit
successfully on the channel at anyone time. The communication resource of the channel
can be divided over time into a portion used for packet transmissions and another portion
used for reservation or polling messages that coordinate the packet transmissions. In other
words, the time axis is divided into data intervals, where actual data are transmitted, and
reservation intervals, used for scheduling future data. For uniform presentation, we use
the term "reservation" even though "polling" may be more appropriate to the practical
M/G/1 Queue with Non-Preemptive Priorities II
situation.
M/G/1 Queue with Non-Preemptive Priorities III
We will consider Tn traffic streams (also called users) and assume that each data
interval contains packets of a sinfile user. Reservations for these packets are made in the
Stability condition: ρ + ρ + · · · + ρ
immediately preceding1 reservation
2 interval. K
< 1.
All users are taken up in cyclic order (see From W2 = R/((1 − ρ1 )(1 − ρ1 − ρ2 )), we can generalize
Fig. 3.21). There are several versions of this system differing in the rule for deciding
Priority 1: similar to P-K formula,
which packets are transmitted during the data interval of each user. In the fiated system,
the rule is that only those packets that arrived prior to the user's preceding reservation
R
1 R Wk =
interval are transmitted.(1)
W =R+ N
1 and N q
By contrast, in (1)
the exhaustive system, the rule is that all available
=λ W ⇒W =
1 1 data interval, 1
(1 − ρ1 − · · · − ρk−1 )(1 − ρ1 − · · · − ρk )
packets of a user are transmitted
µ duringq the corresponding 1−ρ including those
1
that arrived in this data interval or the preceding reservation interval. An intermediate
Priority 2:version, which we call the partially fiated system, results when the packets transmitted in As before, the mean residual service time R is
a user's data interval are those that arrived up to the time this data interval began (and the
corresponding reservation interval ended). A typical example of such reservation systems
1 (1) 1 (2) 1 K K
W2 = R + Nq + N + λ 1 W2 1 X 1X
µ µ q µ R = λX 2 , with λ = λi and X 2 = λi Xi2
|1 {z 2 } | 1 {z } 2 i=1
λ i=1
time needed to serve time needed to serve those customers
class-1 and class-2 customers in higher classes that arrive
ahead in the queue during the waiting time of class-2 customer Mean waiting time for class-k customers:
(2)
From Nq = λ2 W2 , PK 2
i=1 λi Xi
R + ρ1 W1 Wk =
W2 = R + ρ1 W1 + ρ2 W2 + ρ1 W2 ⇒ W2 = 2(1 − ρ1 − · · · − ρk−1 )(1 − ρ1 − · · · − ρk )
1 − ρ1 − ρ2
– Using W1 = R/(1 − ρ1 ), we have Note that average queueing time of a customer depends on the arrival
rate of lower priority customers.
R
W2 =
(1 − ρ1 )(1 − ρ1 − ρ2 ) 3-183 3-184
M/G/1 Queue with Preemptive Resume Priorities I M/G/1 Queue with Preemptive Priorities II

Preemptive resume priority (iii) Average time required to serve customers of priority higher than k
• Service of a customer is interrupted when a higher priority that arrive while the customer is in the system for Tk
customer arrives.
k−1 k−1
• It resumes from the point of interruption when all higher priority X X
λi X i Tk = ρi Tk for k > 1 and 0 if k = 1
customers have been served.
i=1 i=1
• In this case the lower priority class customers are completely
"invisible" and do not affect in any way the queues of the higher
• Combining these three terms,
classes
k−1
Waiting time of class-k customer consists of Rk X
Tk = X k + + Tk ρk
(i) The customer’s own mean service time Xk . 1 − ρ1 − · · · − ρk i=1
(ii) The mean time to serve the customers in classes 1, . . . , k, ahead in | {z }
this is zero for k=1
the queue, Pk
(1 − ρ1 − · · · − ρk )X k + i=1 λi Xi2
k ⇒Tk = .
Rk 1X (1 − ρ1 − · · · − ρk−1 )(1 − ρ1 − · · · − ρk )
Wk = and Rk = λi Xi2 .
1 − ρ1 − · · · − ρk 2
| {z }
i=1 becomes 1 if k=1

This is equal to the average waiting time in an M/G/1 system


3-185 3-186
where customers of priority lower than k are neglected

Upper Bound for G/G/1 System I Upper Bound for G/G/1 System II

Waiting time of the k th customer Notations for any random variable Y :


• Y + = max{0, Y } and Y − = − min{0, Y }
2
 2
• Y = E[Y ] and σY = E Y2 − Y
• Y = Y + − Y − and Y + · Y − = 0
• E[Y ] = Y = Y + − Y −
• Wk : Waiting time of the k th customer • 2
σY = σY2 2 + · Y−
+ + σY − + 2Y
• Xk : Service time of the k th customer
• τk : Interarrival time between the k th and the (k + 1)st customer Using the above, we can express
• Ik : Idle period between the k th and the (k + 1)st customer • Wk+1 = max{0, Wk +Xk −τk } = max{0, Wk +Vk } = (Wk +Vk )+
Wk+1 = max{0, Wk + Xk − τk } and Ik = − min{0, Wk + Xk − τk } • Ik = − min{0, Wk +Xk −τk } = − min{0, Wk +Vk } = (Wk +Vk )−

λ(σa2 +σb2 ) 2 2 2 + · (W + V )−
The average waiting time in queue: W ≤ 2(1−ρ)
σ(W k +Vk )
=σ(W k +Vk )
+ + σ(W +V )− + 2(Wk + Vk )
k k k k

• σa2 : variance of the interarrival times 2


=σW + σI2k + 2W k+1 · I k
k+1
• σb2 : variance of the service times 2 2 2
=σW k
+ σV k
= σW k
+ σa2 + σb2
• λ: average interarrival time

3-187 3-188
Upper Bound for G/G/1 System III

As k → ∞, we can see
2
σW k+1
2
+ σI2k + 2W k+1 · I k = σW k
+ σa2 + σb2

becomes
2
σW + σI2 + 2W · I = σW
2
+ σa2 + σb2

We get W as

σa2 + σb2 σ2 σ 2 + σb2 λ(σa2 + σb2 )


W = − I ≤ a =
2I 2I 2I 2(1 − ρ)
1
– The average idle time I between two successive arrival is λ (1 − ρ)

3-189

You might also like