Queueing 2017 Fall
Motivations
Queueing systems
• Call requests are met most of the time → cost-effective
• Switches concentrate traffic onto shared trunks: blocking of requests will occur from time to time
– Many lines, fewer trunks: minimize the number of trunks subject to a blocking probability constraint
[Figure: many subscriber lines, some active, concentrated onto a small number of shared trunks]
Packet switching networks

[Figure: (a) dedicated lines carry the streams A, B, C separately; (b) a shared line multiplexes the packets A1, C1, B1, A2, B2, C2 from several input lines through a buffer onto one output line]
• Input lines feed a buffer in front of a shared output line; the number of packets in the system varies over time
• Service time ≈ transmission time
– R bps transmission rate and a packet of L bits: service time L/R (the transmission time for a packet)
– Packet length can be a constant, or a random variable
• The number of packets in the system evolves in time as arrivals and departures occur
– e.g., # of people in Cafe coffee day, # of rickshaws at IIT main gate
Discrete-time Markov process I / II
Discrete-time Markov process IV

The Chapman-Kolmogorov equations:

p_{ij}^{(n+m)} = Σ_{k=0}^{∞} p_{ik}^{(n)} p_{kj}^{(m)}  for n, m ≥ 0, i, j ∈ S

Proof:
Pr[X_{n+m} = j | X_0 = i] = Σ_{k∈S} Pr[X_{n+m} = j | X_0 = i, X_n = k] Pr[X_n = k | X_0 = i]
(Markov property)  = Σ_{k∈S} Pr[X_{n+m} = j | X_n = k] Pr[X_n = k | X_0 = i]
(Time homogeneity) = Σ_{k∈S} Pr[X_m = j | X_0 = k] Pr[X_n = k | X_0 = i]

In matrix form: P^{n+m} = P^n P^m ⇒ P^{n+1} = P^n P

Discrete-time Markov process V

In a place, the weather each day is classified as sunny, cloudy or rainy. The next day's weather depends only on the weather of the present day and not on the weather of the previous days. If the present day is sunny, the next day will be sunny, cloudy or rainy with respective probabilities 0.70, 0.10 and 0.20. The transition probabilities are 0.50, 0.25 and 0.25 when the present day is cloudy; 0.40, 0.30 and 0.30 when the present day is rainy.

          S     C     R
    S  [ 0.7   0.1   0.2  ]
P = C  [ 0.5   0.25  0.25 ]
    R  [ 0.4   0.3   0.3  ]

– Using the n-step transition probability matrix (rows as shown on the slide),
P^3 has rows [0.601 0.168 0.230], [0.596 0.175 0.233], ... and
P^12 has every row ≈ [0.596 0.172 0.231], with P^12 = P^13
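A quick numerical check of the Chapman-Kolmogorov relation P^(n+m) = P^n P^m for the weather chain above, and of the convergence of P^n to a matrix with identical rows (numpy is assumed to be available):

```python
import numpy as np

P = np.array([[0.70, 0.10, 0.20],   # sunny  -> sunny/cloudy/rainy
              [0.50, 0.25, 0.25],   # cloudy -> ...
              [0.40, 0.30, 0.30]])  # rainy  -> ...

# Chapman-Kolmogorov: P^(1+2) = P^1 P^2
P3 = np.linalg.matrix_power(P, 3)
assert np.allclose(P3, P @ np.linalg.matrix_power(P, 2))

P12 = np.linalg.matrix_power(P, 12)
P13 = np.linalg.matrix_power(P, 13)
print(np.round(P12, 3))  # every row is close to the stationary distribution
```

By n = 12 all rows agree to several decimal places, which is why the slide reports P^12 = P^13.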
Discrete-time Markov process VI / VII

If P^(n) has identical rows, then P^(n+1) does too. Suppose every row of P^(n) equals the same row vector r, i.e., P^(n) = [r; r; ...; r]. Then row j of P P^(n) is

[p_{j1} p_{j2} ···] P^(n) = p_{j1} r + p_{j2} r + ··· = (Σ_k p_{jk}) r = r

State probabilities at time n
– π_i^{(n)} = Pr[X_n = i] and π^{(n)} = [π_0^{(n)}, ..., π_i^{(n)}, ...] (row vector)
– π_i^{(0)}: the initial state probability

Pr[X_n = j] = Σ_{i∈S} Pr[X_n = j | X_0 = i] Pr[X_0 = i]

Then, we have π_j^{(n)} = Σ_{i∈S} p_{ij}^{(n)} π_i^{(0)}
– In matrix notation: π^{(n)} = π^{(0)} P^n

Limiting distribution: given an initial prob. distribution π^{(0)},

π = lim_{n→∞} π^{(n)} = lim_{n→∞} π^{(0)} P^n = π^{(0)} lim_{n→∞} P^{n+1} = [π^{(0)} lim_{n→∞} P^n] P = πP
Discrete-time Markov process VIII / IX

Stationary distribution:
– z_j and z = [z_j] denote the prob. of being in state j and its vector; a stationary distribution satisfies z = zP with z·1 = 1
For the weather example, replacing one balance equation of π = πP with the normalization condition gives

[  0.3   −0.5   −0.4 ] [π0]   [0]
[ −0.1    0.75  −0.3 ] [π1] = [0]
[  1      1      1   ] [π2]   [1]

π0 = 0.596, π1 = 0.1722, π2 = 0.2318

Recurrence property
• State j is recurrent if Σ_{n=1}^{∞} p_{jj}^{(n)} = ∞
– Positive recurrent if πj > 0
– Null recurrent if πj = 0
• State j is transient if Σ_{n=1}^{∞} p_{jj}^{(n)} < ∞
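The 3×3 linear system above (two balance equations plus the normalization row) can be solved directly, assuming numpy is available:

```python
import numpy as np

# Two balance equations from pi = pi*P, plus the normalization pi0+pi1+pi2 = 1
A = np.array([[ 0.3, -0.5,  -0.4],
              [-0.1,  0.75, -0.3],
              [ 1.0,  1.0,   1.0]])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(np.round(pi, 4))  # ≈ [0.596, 0.1722, 0.2318]
```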
Discrete-time Markov process XII / XIII

Periodicity and aperiodicity:
• State i has period d if p_{ii}^{(n)} = 0 when n is not a multiple of d; if d = 1, the state is aperiodic

Example: In a place, a mosquito is produced every hour with prob. p, and dies with prob. 1 − p
• Show the state transition diagram
Example (autorickshaw): An autorickshaw driver provides service in two zones of New Delhi. Fares picked up in zone A will have destinations in zone A with probability 0.6 or in zone B with probability 0.4. Fares picked up in zone B will have destinations in zone A with probability 0.3 or in zone B with probability 0.7. The driver's expected profit for a trip entirely in zone A is 40 Rupees (Rps); for a trip entirely in zone B, 80 Rps; and for a trip that involves both zones, 110 Rps.
• Find the stationary prob. that the driver is in each zone.
• What is the expected profit of the driver?

(40 × 0.6 + 110 × 0.4)πA + (80 × 0.7 + 110 × 0.3)πB = 68πA + 89πB = 68πA + 89(1 − πA) = 89 − 21πA

Example (umbrellas): Diksha possesses 5 umbrellas which she employs in going from her home to office, and vice versa. If she is at home (the office) at the beginning (end) of a day and it is raining, then she will take an umbrella with her to the office (home), provided there is one to be taken. If it is not raining, then she never takes an umbrella. Assume that, independent of the past, it rains at the beginning (end) of a day with probability p.
• By defining a Markov chain with 6 states which enables us to determine the proportion of time that our TA gets wet, draw its state transition diagram by specifying all state transition probabilities (Note: she gets wet if it is raining and all umbrellas are at her other location.)
• Find the probability that our TA gets wet.
• At what value of p is the chance for our TA to get wet highest?
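For the autorickshaw example, the two-state chain's balance equation across the A/B cut gives πA directly, and with it the expected profit per trip:

```python
# Balance across the cut between zones: piA * 0.4 = piB * 0.3, piA + piB = 1
piA = 0.3 / 0.7
piB = 1 - piA
profit = (40*0.6 + 110*0.4)*piA + (80*0.7 + 110*0.3)*piB  # = 89 - 21*piA
print(round(piA, 4), round(profit, 2))  # 0.4286 80.0
```

So the driver spends 3/7 of the trips in zone A and earns 80 Rps per trip on average.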
Drift and Stability I / II

Drift at state i: Di = E[X_{k+1} − X_k | X_k = i]
– If Di < 0, the process tends to visit lower states from state i
– In the previous slide (mosquito example), Di = 1·p − 1·(1 − p) = 2p − 1

Pakes' lemma: an irreducible, aperiodic MC is positive recurrent if
1) Di < ∞ for all i
2) For some scalar δ > 0 and integer ī ≥ 0, Di ≤ −δ for all i > ī

Sketch (with β an upper bound on Di over the finitely many states i ≤ ī):

Σ_{k=1}^{n} Σ_{j=0}^{∞} E[X_k − X_{k−1} | X_{k−1} = j] Pr[X_{k−1} = j | X_0 = i]
  ≤ Σ_{k=1}^{n} ( Σ_{j=0}^{ī} β Pr[X_{k−1} = j | X_0 = i] + Σ_{j=ī+1}^{∞} (−δ) Pr[X_{k−1} = j | X_0 = i] )

Poisson arrivals in a short interval: Pr[Λ(h) = 1] = λh + o(h), where lim_{h→0} o(h)/h = 0
Review on Poisson process V / Continuous-time Markov process I

Merging and splitting
P8) Splitting: If an arrival randomly chooses the i-th branch with probability πi, the arrival process at the i-th branch, Λi(t), is Poisson with rate λi (= πi λ). Moreover, Λi(t) is independent of Λj(t) for any pair of i and j (i ≠ j).

[Figure: a sample path of a continuous-time MC — the state (1, 2, 3, 4, ...) is piecewise constant and jumps at the times of state change]

Semi-Markov process:
• The process jumps to state j; such a jump depends only on the previous state.
• Tj (the state occupancy time) follows a general (independent) distribution for all j.
Continuous-time Markov process IV / V

State probabilities πj(t) = Pr[X(t) = j]. For δ > 0,

πj(t + δ) = Pr[X(t + δ) = j] = Σ_i Pr[X(t + δ) = j | X(t) = i] Pr[X(t) = i] = Σ_i q_ij(δ) πi(t)
  ⇐⇒ π_i^{(n+1)} = Σ_j p_{ji} π_j^{(n)} (DTMC analogue)

: transition into state j from any other state. Subtracting πj(t) from both sides,

πj(t + δ) − πj(t) = Σ_i q_ij(δ) πi(t) − πj(t) = Σ_{i≠j} q_ij(δ) πi(t) + (q_jj(δ) − 1) πj(t)

Dividing both sides by δ,

dπj(t)/dt = lim_{δ→0} [πj(t + δ) − πj(t)]/δ = lim_{δ→0} (1/δ)[ Σ_{i≠j} q_ij(δ) πi(t) + (q_jj(δ) − 1) πj(t) ] = Σ_i γ_ij πi(t)

with γ_jj = −v_j. In steady state, π·Q = 0 with π·1 = 1, where

      [ −v0  γ01  γ02  γ03  ... ]
Q =   [ γ10  −v1  γ12  γ13  ... ]
      [ γ20  γ21  −v2  γ23  ... ]
      [  ⋮    ⋮    ⋮    ⋮       ]
The M/M/1 queue
– ρ = λ/µ: the server's utilization (< 1, i.e., λ < µ)
– Mean number of customers in the system:

E[N] = Σ_{n=0}^{∞} n πn = ρ/(1 − ρ) = ρ (in server) + ρ²/(1 − ρ) (in queue)

Recall the state transition rate matrix Q introduced earlier,

      [ −v0  γ01  γ02  γ03  ... ]
Q =   [ γ10  −v1  γ12  γ13  ... ]   with π·Q = 0 and π·1 = 1
      [ γ20  γ21  −v2  γ23  ... ]

– What are γ_ij and v_i in the M/M/1 queue?
[Figure: sample path of the number of customers in an M/M/1 system with 1/µ = 1]
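The mean-value identity above can be checked numerically by summing n·πn with πn = (1 − ρ)ρ^n over a long truncation:

```python
rho = 0.8
# E[N] = sum n * pi_n with the M/M/1 stationary distribution pi_n = (1-rho) rho^n
EN = sum(n * (1 - rho) * rho**n for n in range(10_000))
print(EN)  # close to rho/(1-rho) = 4 for rho = 0.8

# The split into "in server" + "in queue" terms is an algebraic identity
split = rho + rho**2 / (1 - rho)
```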
Sketch of one step of an event-driven simulation: sample the next inter-arrival time and the next service completion, and the smaller of the two determines the next event.

  minter_arrival = 1/arrival_rate;          % mean inter-arrival time
  inter_arrival  = exprnd(minter_arrival);  % exponential inter-arrival sample
  service_time   = exprnd(mservice_time);   % exponential service-time sample
  if inter_arrival < service_time
      event      = 'arrival';
      event_time = inter_arrival;
  else
      event      = 'departure';
      event_time = service_time;
  end

Queueing systems in general: the arrival times, the size of demand for service, the service capacity and the size of the waiting room may be (random) variables.

Queueing discipline: specifies which customer to pick next for service.
• First come first served (FCFS, or FIFO)
• Last come first served (LCFS, or LIFO)
• Random order, processor sharing (PS), round robin (RR)
• Priority (preemptive: resume, non-resume; non-preemptive)
• Shortest job first (SJF) and longest job first (LJF)
Example: A data communication line delivers a block of information every 10 µsec. A decoder checks each block for errors and corrects the errors if ...

Little's theorem — any queueing system in steady state: N = λT
• N: average number of customers in the system
• λ: average arrival rate, T: average time a customer spends in the system
Statistical multiplexing
• The average number of packets in the system remains the same:
N = ρ/(1 − ρ) with ρ = λ/(µC)
• Average delay per packet: with total service rate µ and total arrival rate λ,
T = 1/(µ − λ) (one shared line)  <  T = m/(µ − λ) (µ and λ split over m channels, as in TDMA/FDMA)
• Aggregation is better: increasing a transmission line's capacity by K times can carry K times as many packets/sec with a K times smaller average delay per packet
– by Little's theorem, (Kλ)W = N → W = N/(Kλ)
When do we need TDMA or FDMA?
– In a multiplexer, packet generation times overlap, so it must buffer and delay some of the packets
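The aggregation claim can be checked with the M/M/1 delay formula: one line of rate m·µ carrying m-fold traffic versus m separate lines of rate µ (illustrative numbers):

```python
lam, mu, m = 3.0, 5.0, 4
T_separate  = 1 / (mu - lam)          # each of the m slow lines: T = 1/(mu - lam)
T_aggregate = 1 / (m*mu - m*lam)      # one fast line, m-fold arrivals: 1/(m*(mu - lam))
print(T_separate / T_aggregate)       # delay improves by exactly a factor of m
```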
Estimating throughput in a time-sharing system

N terminals share a computer: each user alternates between thinking (average reflection time R) and having a submitted job processed (average job processing time P).

The average time a user spends in the system:
T = R + D → R + P ≤ T ≤ R + NP
– D: the average delay between the time a job is submitted to the computer and the time its execution is completed; D ranges from P (no waiting in queue) to NP (waiting behind all other jobs)

Combining this with λ = N/T,
N/(R + NP) ≤ λ ≤ min{ 1/P, N/(R + P) }
– the 1/P term is the guaranteed-throughput limit of the CPU

Little's theorem: example III
Using T = N/λ, we can rewrite
max{NP, R + P} ≤ T ≤ R + NP
– the lower bound NP is due to the limited CPU processing capacity, and R + P is the delay assuming no waiting in queue
– (cf. Data Networks, Fig. 3.5: bounds on attainable throughput and on average user delay in a fully loaded time-sharing system; the delay increases essentially in proportion to the number of terminals N. The bounds are independent of the detailed system parameters — we owe this convenient situation to the generality of Little's theorem.)

Poisson Arrivals See Time Averages (PASTA) theorem I

Suppose a random process spends its time in different states Ej. In equilibrium, we can associate with each state Ej two different probabilities:
• The probability of the state as seen by an outside random observer
– πj: prob. that the system is in the state Ej at a random instant
• The probability of the state seen by an arriving customer
– πj*: prob. that the system is in the state Ej just before a (randomly chosen) arrival
In general, we have πj ≠ πj*. When the arrival process is Poisson, we have πj = πj*.
PASTA theorem II / III

For a stochastic process N ≡ {N(t), t ≥ 0} and an arbitrary set of states B:

U(t) = 1 if N(t) ∈ B, 0 otherwise   ⇒   V(t) = (1/t) ∫_0^t U(τ) dτ  (fraction of time in B)

For a Poisson arrival process A(t),

Y(t) = ∫_0^t U(τ) dA(τ)   ⇒   Z(t) = Y(t)/A(t)  (fraction of arrivals finding the system in B)

Lack of Anticipation Assumption (LAA): For each t ≥ 0, {A(t + u) − A(t), u ≥ 0} and {U(s), 0 ≤ s ≤ t} are independent: future inter-arrival times and the service times of previously arrived customers are independent.

Under LAA, as t → ∞, PASTA ensures V(t) → V(∞) w.p. 1 if Z(t) → V(∞) w.p. 1

Proof:
• For sufficiently large n, Y(t) is approximated as
Y_n(t) = Σ_{k=0}^{n−1} U(kt/n) [A((k + 1)t/n) − A(kt/n)]
where each Poisson increment has mean (λ(k + 1)t − λkt)/n = λt/n
• LAA decouples the above as
E[Y_n(t)] = λt E[ Σ_{k=0}^{n−1} U(kt/n)/n ]
• As n → ∞, if |Y_n(t)| is bounded,
lim_{n→∞} E[Y_n(t)] = E[Y(t)] = λt E[V(t)] = λ E[ ∫_0^t U(τ) dτ ]
: the expected number of arrivals who find the system in state B equals the arrival rate times the expected length of time the system spends there.
Systems where PASTA does not hold
Ex1) D/D/1 queue
• Deterministic arrivals every 10 msec
• Deterministic service times of 9 msec
– Every arrival finds the system empty, while the time-average probability that the server is busy is 0.9

M/M/1/K I
M/M/1/K: the system can accommodate K customers (including the one in service)

πn = (1 − ρ)ρ^n / (1 − ρ^{K+1}),  n = 0, 1, ..., K
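The D/D/1 counterexample is easy to verify directly: with period 10 and service 9, every job finishes 1 ms before the next arrival, so arrivals always see an idle server even though the time-average utilization is 0.9.

```python
period, service = 10.0, 9.0
arrivals = [k * period for k in range(1000)]
horizon = arrivals[-1] + period
time_avg_busy = len(arrivals) * service / horizon   # fraction of time the server is busy
arrivals_seeing_busy = sum(
    1 for k in range(1, len(arrivals))
    if arrivals[k] < arrivals[k - 1] + service      # previous job still in service?
)
print(time_avg_busy, arrivals_seeing_busy)  # 0.9 0
```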
The previous global balance equation can be rewritten for a c-server system in which only c customers can be accommodated (M/M/c/c).
Erlang capacity: telephone systems with c channels
[Figure: blocking probability B(c, a) versus offered traffic intensity a on log scales, for c = 1 to 100]

Example: In Select-city shopping mall, customers arrive at the underground parking lot of it according to a Poisson process with a rate of 60 cars per hour. Parking time follows a Weibull distribution with mean 2.5 hours and the parking lot can accommodate 150 cars. When the parking lot is full, an arriving customer has to park his car somewhere else. Find the fraction of customers finding all places occupied upon arrival.

Two different distributions with the same mean:
– Weibull (α = 2.7228, k = 5): f(x) = (k/α)(x/α)^{k−1} e^{−(x/α)^k}
– Exponential: f(x) = (1/α) e^{−x/α}
– Mean of the Weibull distribution: αΓ(1 + 1/k), where Γ(x) = ∫_0^∞ t^{x−1} e^{−t} dt is called the gamma function
[Figure: the two densities over x (hours); the simulated blocking probability PB for c = 3 and c = 5 matches the Erlang B analysis under both distributions — the blocking depends on the holding time only through its mean]
• c = 150 and a = λ/µ = 60 × 2.5 = 150

Erlang B formula:

B(c, a) = (a^c / c!) / Σ_{i=0}^{c} a^i / i!

• Dividing the numerator and denominator by Σ_{n=0}^{c−1} a^n / n!,

B(c, a) = [(a^c/c!) / Σ_{n=0}^{c−1} a^n/n!] / [1 + (a^c/c!) / Σ_{n=0}^{c−1} a^n/n!]
        = (a/c)B(c − 1, a) / (1 + (a/c)B(c − 1, a)) = aB(c − 1, a) / (c + aB(c − 1, a))

with B(0, a) = 1

Finite-source loss system: consider the loss system (no waiting places) in the case where the arrivals originate from a finite population of sources: the total number of customers is K
• The time to the next call attempt by a customer, the so-called thinking time (idle time) of the customer, obeys an exponential distribution with mean 1/λ (sec)
• Blocked calls are lost
– a blocked call does not lead to reattempts; the customer starts a new thinking time, again. The time to the next attempt is also the same exponential distribution with mean 1/λ
– the call holding time is exponentially distributed with mean 1/µ
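The recursion above is how Erlang B is evaluated in practice: the direct formula overflows for c = 150 (factorials and a^c are astronomical), while the recursion is numerically stable. A minimal sketch:

```python
def erlang_b(c, a):
    """Erlang B via B(0,a) = 1, B(k,a) = a*B(k-1,a) / (k + a*B(k-1,a))."""
    B = 1.0
    for k in range(1, c + 1):
        B = a * B / (k + a * B)
    return B

# Parking-lot example: c = 150 channels (spaces), offered load a = 60 * 2.5 = 150
print(round(erlang_b(150, 150), 4))
```

Per the insensitivity property noted above, this blocking probability applies even though the parking times are Weibull rather than exponential.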
M/M/C/C/K system II / III

If C ≥ K, each customer has its own server, i.e., no blocking.
• Each user shows two states, active with mean 1/µ and idle with mean 1/λ
• The probability for a user to be idle or active is
π0 = (1/λ)/(1/λ + 1/µ) and π1 = (1/µ)/(1/λ + 1/µ)
• Call arrival rate: π0 λ; offered load (or carried load per source): π1 = a/(1 + a), with a = λ/µ

• For j = 1, 2, ..., K, detailed balance between states j − 1 and j gives
(K − j + 1)λ πj−1 = jµ πj  ⇒  πj = C(K, j) a^j π0   (C(K, j): binomial coefficient)
• Applying Σ_j πj = 1 over the admissible states,
πj = C(K, j) a^j / Σ_{k=0}^{C} C(K, k) a^k

If C < K, this system can be described as a birth-death chain with the global balance equation
((K − i)λ + iµ) πi = (K − i + 1)λ πi−1 + (i + 1)µ πi+1

Time blocking (or congestion): the proportion of time the system spends in the state C; the equilibrium probability of the state C is PB = πC
– The probability of all resources being busy in a given observational period
– Insensitivity: like the Erlang B formula, this result is insensitive to the form of the holding time distribution (though the derivation above was explicitly based on the assumption of an exponential holding time distribution)
Call blocking: the probability that an arriving call is blocked, i.e., PL
• Arrival rate is state-dependent, i.e., (K − N(t))λ: not Poisson.
• PASTA does not hold: time blocking PB cannot represent PL
• λT: average call arrival rate,
λT ∝ Σ_{i=0}^{C} (K − i)λ πi
– PL: the probability that a call finds the system blocked

• Call blocking PL can be obtained by equating the blocked flow:
PL λT = PB λC  →  PL = PB (λC / λT) ≤ PB, where λC = (K − C)λ is the arrival rate in state C
• Engset formula:
PL(K) = (K − C)λ πC / Σ_{i=0}^{C} (K − i)λ πi = (K − C) [K!/(C!(K − C)!)] a^C / Σ_{i=0}^{C} (K − i) [K!/(i!(K − i)!)] a^i
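A sketch of the time-blocking vs. call-blocking distinction, with hypothetical numbers (K sources, C channels, a = λ/µ). It also checks the classical identity that call blocking with K sources equals time blocking with K − 1 sources:

```python
from math import comb

K, C, a = 10, 4, 0.2                       # illustrative parameters
w = [comb(K, i) * a**i for i in range(C + 1)]
pi = [x / sum(w) for x in w]               # time-stationary distribution
PB = pi[C]                                 # time blocking
# Call blocking: weight each state by its arrival rate (K - i)*lambda
PL = (K - C) * pi[C] / sum((K - i) * pi[i] for i in range(C + 1))
print(PL <= PB)  # True: arrivals are rarer in fuller states
```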
Elementary queueing models
– M/M/1, M/M/C, M/M/C/C/K: product form solution
– Bulk queues (not discussed here)
Intermediate queueing models (product-form solution)
– Time-reversibility of Markov processes
– Detailed balance equations of time-reversible MCs
– Multidimensional birth-death processes
– Networks of queues: open and closed networks
Advanced queueing models
– M/G/1 type queue: embedded MC and mean-value analysis
– M/G/1 with vacations and priority queues
– G/M/m queue
More advanced queueing models (omitted)
– Algorithmic approaches to get steady-state solutions

Time reversibility: for an irreducible, aperiodic, discrete-time MC (Xn, Xn+1, ...) having transition probabilities pij and stationary distribution πi for all i:
The time-reversed MC is defined as X*_n = X_{τ−n} for an arbitrary τ > 0
[Figure: forward process and time-reversed process]
1) Transition probabilities of X*_n:
p*_ij = πj pji / πi
2) Xn and X*_n have the same stationary distribution πi:
Σ_{j=0}^{∞} πj p*_ji = Σ_{j=0}^{∞} πi pij = πi
Time Reversibility of discrete-time MC IV / V

• From the Kolmogorov criterion, reversibility is equivalent to having, for every loop of states,
p_{i i2} p_{i2 i3} ··· p_{i_{n−1} j} p_{ji} = p_{ij} p_{j i_{n−1}} ··· p_{i3 i2} p_{i2 i}
• In particular, p_{ij}^{(n−1)} p_{ji} = p_{ij} p_{ji}^{(n−1)}. As n → ∞, we have
lim_{n→∞} p_{ij}^{(n−1)} p_{ji} = lim_{n→∞} p_{ij} p_{ji}^{(n−1)}  →  πj pji = πi pij
If the state transition diagram of a Markov process is a tree, then the process is time reversible

Inspect whether the following three-state MC is reversible:
    [ 0    0.6  0.4 ]
P = [ 0.1  0.8  0.1 ]
    [ 0.5  0    0.5 ]
• Using the Kolmogorov criterion,
p12 p23 p31 = 0.6 × 0.1 × 0.5 ≠ p13 p32 p21 = 0.4 × 0 × 0.1 = 0  →  not reversible
• Inspecting the state transition diagram, it is not a BD process

Inspect whether the following two-state MC is reversible:
    [ 0    1   ]
P = [ 0.5  0.5 ]
– It is a small BD process
– Using the state probabilities π0 = 1/3 and π1 = 2/3,
π0 p01 = (1/3) · 1 = π1 p10 = (2/3) · (1/2)  →  reversible
– A generalization of BD processes: at each cut boundary, the DBE is satisfied
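The Kolmogorov criterion for the three-state chain above amounts to comparing the loop products in the two directions:

```python
P = [[0.0, 0.6, 0.4],
     [0.1, 0.8, 0.1],
     [0.5, 0.0, 0.5]]
forward  = P[0][1] * P[1][2] * P[2][0]   # p12 p23 p31 = 0.6 * 0.1 * 0.5
backward = P[0][2] * P[2][1] * P[1][0]   # p13 p32 p21 = 0.4 * 0 * 0.1
print(forward > 0, backward == 0.0)      # True True -> products differ, not reversible
```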
Exercise

Consider a transmitter that uses the STOP-and-WAIT (SW) ARQ protocol to transmit a frame. Each frame can be successfully transmitted with probability q, while the probability that the ACK to the frame arrives at the transmitter corrupted with an error, or is lost, is 1 − r. Assume that the ACK from the receiver always arrives at this transmitter just before the time-out, t_out, if it is not lost. During each t_out, at the transmitter's queue, one frame is generated from the upper layer with probability p and passed down to this queue. Note that we assume 0 < p, q, r < 1.
(a) Show that the stochastic process describing the transmitter's queue is a Markov process.
(b) Find the global balance equations of the Markov chain.
(c) Is the Markov chain periodic or aperiodic? Why?
(d) Under what condition can this Markov chain be positive recurrent, transient, or null recurrent?
(e) Determine the state probabilities πi for i ≥ 0, the probability that i frames are in the queue, at steady state.
(f) What is the mean number of frames in the transmitter's queue? This includes one frame in service.

Relation between DTMC and CTMC I

Recall an embedded MC: each time a state, say i, is entered, an exponentially distributed state occupancy time is selected. When the time is up, the next state j is selected according to transition probabilities pij: a continuous-time Markov process.
[Figure: sample path alternating among states 1–4 over time]
• Ni(n): the number of times state i occurs in the first n transitions
• Ti(j): the occupancy time the j-th time state i occurs
The proportion of time spent by X(t) in state i after the first n transitions:

(time spent in state i) / (time spent in all states) = Σ_{j=1}^{Ni(n)} Ti(j) / Σ_i Σ_{j=1}^{Ni(n)} Ti(j)
Relation between DTMC and CTMC II / III

As n → ∞, using πi = Ni(n)/n, we have

[ (Ni(n)/n) (1/Ni(n)) Σ_{j=1}^{Ni(n)} Ti(j) ] / [ Σ_i (Ni(n)/n) (1/Ni(n)) Σ_{j=1}^{Ni(n)} Ti(j) ]  →  πi E[Ti] / Σ_i πi E[Ti] = φi,  with E[Ti] = 1/vi

where πi is the unique pmf solution to
πj = Σ_i πi pij and Σ_j πj = 1   (∗)
The long-term proportion of time spent in state i approaches
φi = (πi/vi) / Σ_i (πi/vi)  →  πi = vi φi / c, writing c = 1 / Σ_i (πi/vi)
Substituting πi = (vi φi)/c into (∗) yields
vj φj / c = (1/c) Σ_i vi φi pij  →  vj φj = Σ_i φi vi pij = Σ_i φi γij

Recall the M/M/1 queue
a) CTMC: states 0, 1, 2, 3, 4, ... with transitions between neighbors
b) Embedded MC: from state i ≥ 1 the chain moves up with probability p = λ/(λ + µ) and down with probability q = µ/(λ + µ); state 0 moves to state 1 with probability 1
In the embedded MC, we have the following global balance equations
π0 = q π1
π1 = π0 + q π2
πi = p πi−1 + q πi+1 for i ≥ 2,  solved by πi = (p/q) πi−1
Relation between DTMC and CTMC IV

Using the normalization condition Σ_{i=0}^{∞} πi = 1,

πi = (p/q)^{i−1} (1/q) π0 for i ≥ 1,  and  π0 = (1 − 2p) / (2(1 − p))

Continuous-time reversible MC I

If X(t) = i, the probability that the reversed process remains in state i for an additional s seconds is
Pr[X(t') = i, t − s ≤ t' ≤ t | X(t) = i] = e^{−vi s}
; after staying t, the probability that it shall stay s sec more — the occupancy times are memoryless in both time directions

For a continuous-time MC X(t) with stationary state probabilities θi, the discrete-time embedded MC has stationary pmf πi and state transition probabilities p̃ij. The reversed process has rates

γ*_ij = vi p̃*_ij = vi (πj p̃ji)/πi = vi (πj γji)/(πi vj) = θj γji / θi

using p̃ji = γji/vj from the embedded MC
– p̃*_ij (= p̃ij when the embedded MC is reversible): state transition probability of the reversed embedded MC
– A continuous-time MC whose state occupancy times are exponentially distributed is reversible if its embedded MC is reversible
Additionally, we have vj = v*_j:
Σ_{i≠j} θi γ*_ij |_{γ*_ij = θj γji/θi} = Σ_{i≠j} θj γji = θj vj = θj v*_j  ⇒  Σ_{j≠i} γij = Σ_{j≠i} γ*_ij
Continuous-time reversible MC IV

Detailed balance equations hold for continuous-time reversible MCs:
θj γji (input rate to i) = θi γij (output rate from i), e.g., for j = i + 1
– Birth-death systems: γij = 0 for |i − j| > 1
– Since the embedded MC is reversible,
πi p̃ij = πj p̃ji → (vi θi / c) p̃ij = (vj θj / c) p̃ji → θi γij = θj γji

M/M/2 queue with heterogeneous servers I

Servers A and B with service rates µA and µB. When the system is empty, arrivals go to A with probability p and to B with probability 1 − p. Otherwise, the head of the queue takes the first free server.
[State diagram: 0 → 1A or 1B → 2 → 3 → ...]
After some manipulations,

π1,A = (λ/µA) (λ + p(µA + µB)) / (2λ + µA + µB) π0
π1,B = (λ/µB) (λ + (1 − p)(µA + µB)) / (2λ + µA + µB) π0
π2 = (λ²/(µA µB)) (λ + (1 − p)µA + pµB) / (2λ + µA + µB) π0

π0 can be determined by π0 + π1,A + π1,B + Σ_{n=2}^{∞} πn = 1
• If the chain is reversible, use detailed balance equations (e.g., with p = 1/2):
(1/2)λ π0 = µA π1,A → π1,A = 0.5 (λ/µA) π0
(1/2)λ π0 = µB π1,B → π1,B = 0.5 (λ/µB) π0
π2 = 0.5 λ²/(µA µB) π0
– Is this a reversible MC?

Example: two independent M/M/1 queues

Suppose that X1(t) and X2(t) are independent reversible MCs; then X(t) = (X1(t), X2(t)) is a reversible MC
• Two independent M/M/1 queues, where the arrival and service rates at queue i are λi and µi
– (N1(t), N2(t)) forms an MC
Stationary distribution:
p(n1, n2) = (1 − λ1/µ1)(λ1/µ1)^{n1} (1 − λ2/µ2)(λ2/µ2)^{n2}
Detailed balance equations:
µ1 p(n1 + 1, n2) = λ1 p(n1, n2)
µ2 p(n1, n2 + 1) = λ2 p(n1, n2)
Verify that the Markov chain is reversible – Kolmogorov criterion
[Figure: two-dimensional lattice of states (n1, n2) with rates λ1, µ1 horizontally and λ2, µ2 vertically]
Multidimensional Markov chains II

– Owing to time-reversibility, detailed balance equations hold
µ1 π(n1 + 1, n2) = λ1 π(n1, n2)
µ2 π(n1, n2 + 1) = λ2 π(n1, n2)
– Stationary state distribution
π(n1, n2) = (1 − λ1/µ1)(λ1/µ1)^{n1} (1 − λ2/µ2)(λ2/µ2)^{n2}
• Can be generalized for any number of independent queues, e.g., M/M/1, M/M/c or M/M/∞
π(n1, n2, ..., nK) = π1(n1) π2(n2) ··· πK(nK)
– 'Product form' distribution

Truncation of a Reversible Markov chain I

X(t) is a reversible Markov process with state space S and stationary distribution πj for j ∈ S.
– Truncated to a set E ⊂ S such that the resulting chain Y(t) is irreducible. Then, Y(t) is reversible and has the stationary distribution
π̂j = πj / Σ_{k∈E} πk,  j ∈ E
– This is the conditional prob. that, in steady state, the original process is at state j, given that it is somewhere in E
Proof:
π̂j qji = π̂i qij  ⇔  (πj / Σ_{k∈E} πk) qji = (πi / Σ_{k∈E} πk) qij  ⇔  πj qji = πi qij
Σ_{k∈E} π̂k = Σ_{j∈E} πj / Σ_{k∈E} πk = 1
Truncation of a Reversible Markov chain II / III

Markov processes for M/M/1 and M/M/C are reversible
• State probabilities of the M/M/1/K queue:
πi = (1 − ρ)ρ^i / Σ_{i=0}^{K} (1 − ρ)ρ^i = (1 − ρ)ρ^i / (1 − ρ^{K+1})  for ρ = λ/µ
– Truncated version of the M/M/1/∞ queue
• State probabilities of the M/M/c/c queue (a = λ/µ):
πn = (a^n/n!) / Σ_{i=0}^{c} a^i/i!
– Truncated version of the M/M/c/∞ queue, whose state probabilities are πn = (a^n/n!) π0 for n ≤ c and πn = (a^c/c!) ρ^{n−c} π0 for n > c, with ρ = λ/(cµ)

Example: the two independent M/M/1 queues of the previous example share a common buffer of size B (= 2)
• An arriving customer who finds B customers waiting is blocked
• State space restricted to E = {(n1, n2) : (n1 − 1)+ + (n2 − 1)+ ≤ B}
• Stationary state distribution of the truncated MC:
π(n1, n2) = π(0, 0) ρ1^{n1} ρ2^{n2}  for (n1, n2) ∈ E
• π(0, 0) is obtained by π(0, 0) = 1 / Σ_{(n1,n2)∈E} ρ1^{n1} ρ2^{n2}
– The truncation theorem specifies the joint distribution up to the normalization constant; calculation of the normalization constant is often tedious
[Figure: truncated lattice state diagram for B = 2]
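The "tedious" normalization is mechanical for a small buffer: enumerate E and sum the unnormalized terms. A sketch for B = 2 with illustrative loads ρ1, ρ2:

```python
rho1, rho2, B = 0.5, 0.4, 2     # illustrative utilizations and joint buffer size
# (n_i - 1)+ <= B forces n_i <= B + 1, so range(B + 2) covers all feasible values
E = [(n1, n2) for n1 in range(B + 2) for n2 in range(B + 2)
     if max(n1 - 1, 0) + max(n2 - 1, 0) <= B]
Z = sum(rho1**n1 * rho2**n2 for n1, n2 in E)
pi00 = 1 / Z                    # pi(0,0) = 1 / normalization constant
total = sum(pi00 * rho1**n1 * rho2**n2 for n1, n2 in E)
print(len(E), total)            # 13 states; probabilities sum to 1
```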
Truncation of a Reversible Markov chain IV / V

Two session classes in a circuit switching system with preferential treatment for one class, for a total of C channels
• Type 1: Poisson arrivals with λ1 require exponentially distributed service with rate µ1 – admissible only up to K
• Type 2: Poisson arrivals with λ2 require exponentially distributed service with rate µ2 – can be accepted until C channels are used up

S = {(n1, n2) | 0 ≤ n1 ≤ K, n1 + n2 ≤ C}

• The state probabilities can be obtained as
P(n1, n2) = P(0, 0) (ρ1^{n1}/n1!)(ρ2^{n2}/n2!)  for 0 ≤ n1 ≤ K, n1 + n2 ≤ C, n2 ≥ 0
– P(0, 0) can be determined by Σ_{n1,n2} P(n1, n2) = 1
• Blocking probability of type 1 (blocked when n1 = K or n1 + n2 = C):

Pb1 = [ Σ_{n2=0}^{C−K} (ρ1^K/K!)(ρ2^{n2}/n2!) + Σ_{n1=0}^{K−1} (ρ1^{n1}/n1!)(ρ2^{C−n1}/(C − n1)!) ] / [ Σ_{n1=0}^{K} (ρ1^{n1}/n1!) Σ_{n2=0}^{C−n1} (ρ2^{n2}/n2!) ]

For this kind of system, the blocking probabilities are valid for a broad class of holding time distributions
In real networks, many queues interact with each other
– a traffic stream departing from one or more queues enters one or more other queues, even after merging with other streams departing from yet other queues
• Packet interarrival times are correlated with packet lengths.
• Service times at various queues are not independent, e.g., state-dependent flow control.
Kleinrock's independence approximation:
• The M/M/1 queueing model works for each link:
– sufficient mixing of several packet streams on a transmission line makes interarrival times and packet lengths independent
• Good approximation when:
∗ Poisson arrivals at entry points of the network
∗ Packet transmission times 'nearly' exponential
∗ Several packet streams merged on each link
∗ Densely connected network and moderate to heavy traffic load

Suppose several packet streams, each following a unique path through the network: appropriate for a virtual circuit network, e.g., ATM
• xs: arrival rate of packet stream s
• fij(s): the fraction of the packets of stream s through link (i, j)
• Total arrival rate at link (i, j):
λij = Σ_{all packet streams s crossing link (i,j)} fij(s) xs
Based on M/M/1 (with Kleinrock's independence approximation), the average number of packets in queue or in service at link (i, j) is

Nij = λij / (µij − λij)

– 1/µij is the average packet transmission time on link (i, j)
• The average number of packets over all queues and the average delay per packet:
N = Σ_{(i,j)} λij/(µij − λij) and T = N/γ
– γ = Σ_s xs: total arrival rate in the system
• As a generalization with processing & propagation delay dij, the average delay per packet of a traffic stream traversing a path p is given by

Tp = Σ_{all (i,j) on path p} [ λij/(µij(µij − λij)) + 1/µij + dij ]

where the three terms in the sum represent average waiting time in queue, average transmission time, and processing and propagation delay, respectively.

In datagram networks, including multiple-path routing for some origin-destination pairs, the M/M/1 approximation often fails
• Node A sends traffic to node B along two links, each with service rate µ (cf. Data Networks, Fig. 3.29: a Poisson stream of rate λ divided among two links — if the division is done by randomization, each link behaves like an M/M/1 queue; if it is done by metering, the whole system behaves like an M/M/2 queue)
– Metering: arriving packets are assigned to the queue with the smallest backlog → approximated as an M/M/2 system with a common queue, giving
TM = 2 / ((2µ − λ)(1 + ρ)) < TR = 2 / (2µ − λ), with ρ = λ/(2µ)
∗ Metering destroys the M/M/1 approximation
Burke's theorem I / II

For M/M/1, M/M/c, M/M/∞ with arrival rate λ (without bulk arrivals and service):
B1. The departure process is Poisson with rate λ.
B2. At each time t, the number of jobs in the system at time t is independent of the sequence of departure times prior to time t.

• The arrival process in the forward process corresponds to the departure process in the reverse process
– The departure process in the forward process is the arrival process in the backward process
• Because M/M/1 is time-reversible, the reverse process is statistically identical to the forward process.
– The departures in forward time form a Poisson process, which are the arrivals in backward time

• The sequence of departure times prior to time t in the forward process is exactly the sequence of arrival times after time t in the reverse process.
• Since the arrival process in the reverse process is an independent Poisson process, the future arrival process does not depend on the current number in the system
• The past departure process in the forward process (which is the future arrival process in the reverse process) does not depend on the current number in the system.
External arrivals
• Based on Burke's theorem B1, queue 2 in isolation is an M/M/1
  – Pr[m at queue 2] = ρ_2^m (1 − ρ_2)
Let n = (n_1, ..., n_K) denote a state (row) vector of the network. The limiting queue length distribution is
  π(n) = lim_{t→∞} Pr[X_1(t) = n_1, ..., X_K(t) = n_K]
Global balance equation (GBE): total rate out of n = total rate into n:
  (α + Σ_{i=1}^K μ_i) π(n) = Σ_{i=1}^K α_i π(n − e_i)              [external arrivals]
                            + Σ_{i=1}^K p_i0 μ_i π(n + e_i)         [go outside from i]
                            + Σ_{i=1}^K Σ_{j=1}^K p_ji μ_j π(n + e_j − e_i)   [from j to i]
– e_i = (0, ..., 1, ..., 0), i.e., the 1 is in the i-th position
– π(n − e_i) denotes π(n_1, n_2, ..., n_i − 1, ..., n_K)

Using time-reversibility, guess detailed balance equations (DBEs) as
  λ_i π(n − e_i) = μ_i π(n),   λ_i π(n) = μ_i π(n + e_i),   λ_j π(n − e_i) = μ_j π(n + e_j − e_i)
Substituting the DBEs into the GBE gives
  RHS = π(n) [ Σ_{i=1}^K p_i0 λ_i + Σ_{i=1}^K (μ_i/λ_i) α_i + Σ_{i=1}^K (μ_i/λ_i) Σ_{j=1}^K p_ji λ_j ]
      = π(n) [ Σ_{i=1}^K p_i0 λ_i + Σ_{i=1}^K μ_i (α_i + Σ_{j=1}^K p_ji λ_j)/λ_i ]
      = π(n) (α + Σ_{i=1}^K μ_i) = LHS
– in the numerator: λ_i = α_i + Σ_{j=1}^K p_ji λ_j (the traffic equation); also Σ_{i=1}^K p_i0 λ_i = α
3-125 3-126
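The traffic equations and the resulting product form can be checked numerically. A minimal sketch with a hypothetical two-queue open network (rates and routing below are illustrative, not from the slides):

```python
# Solve lambda_i = alpha_i + sum_j lambda_j p_ji by fixed-point iteration,
# then form the Jackson product-form distribution and check it normalizes.
alpha = [1.0, 0.5]                 # external arrival rates (hypothetical)
P = [[0.0, 0.4],                   # routing: queue 1 -> queue 2 w.p. 0.4
     [0.2, 0.0]]                   # queue 2 -> queue 1 w.p. 0.2
mu = [4.0, 3.0]                    # service rates

lam = alpha[:]
for _ in range(100):               # converges since P is substochastic
    lam = [alpha[i] + sum(lam[j] * P[j][i] for j in range(2)) for i in range(2)]

rho = [lam[i] / mu[i] for i in range(2)]
# Jackson: pi(n1, n2) = prod_i (1 - rho_i) rho_i^{n_i}
pi = lambda n1, n2: (1 - rho[0]) * rho[0]**n1 * (1 - rho[1]) * rho[1]**n2
total = sum(pi(a, b) for a in range(60) for b in range(60))
print([round(x, 4) for x in lam], round(total, 6))
```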
3-127 3-128
Jackson's theorem: proof of DBEs I
Proving DBEs based on time-reversibility
• Construct a routing matrix, P* = [p*_ij], of the reversed process
• The rate from node i to j must be the same in the forward and reverse direction:
  λ_i p_ij (forward process) = λ_j p*_ji (reverse process)
  – λ_j p*_ji: the output rate from server j is λ_j, and p*_ji is the probability of moving from j to i; α*_i = λ_i p_i0; p*_i0 = α_i/λ_i
We need to show (recall θ_i γ_ij = θ_j γ*_ji)
  π(n) v_{n,m} = π(m) v*_{m,n}   and   Σ_m v_{n,m} = Σ_m v*_{n,m}
  – v_{n,m} and v*_{n,m} denote the state transition rates of the forward and reversed processes

Jackson's theorem: proof of DBEs II
We need to consider the following three cases
• An arrival to server i from outside the network in the forward process corresponds to a departure out of the network from server i in the reversed process:
  π(n) v_{n,n+e_i} = π(n + e_i) v*_{n+e_i,n}
• A departure to the outside in the forward process corresponds to an arrival from the outside in the reversed process:
  π(n) v_{n,n−e_i} = π(n − e_i) v*_{n−e_i,n}
• Leaving queue i and joining queue j in the forward process (v_{n,n−e_i+e_j} = μ_i p_ij) corresponds to leaving queue j and joining queue i in the reversed process (v*_{n−e_i+e_j,n} = μ_j p*_ji = λ_i p_ij μ_j/λ_j):
  π(n) v_{n,n−e_i+e_j} = π(n − e_i + e_j) v*_{n−e_i+e_j,n}
3-129 3-130
3-131 3-132
Jackson's theorem: proof of DBEs V
Substituting the product form π(n) = Π_k π_k(n_k) with v_{n,n−e_i} = μ_i p_i0 (departure to the outside),
  (1 − ρ_i) ρ_i^{n_i} Π_{k≠i} π_k(n_k) · μ_i p_i0 = (1 − ρ_i) ρ_i^{n_i−1} Π_{k≠i} π_k(n_k) · λ_i p_i0
3) π(n) v_{n,n−e_i+e_j} = π(n − e_i + e_j) v*_{n−e_i+e_j,n}: leaving queue i and joining queue j in the forward process (v_{n,n−e_i+e_j} = μ_i p_ij) corresponds to leaving queue j and joining queue i in the reversed process, i.e., v*_{n−e_i+e_j,n} = μ_j p*_ji = λ_i p_ij μ_j/λ_j:
  (1 − ρ_i) ρ_i^{n_i} (1 − ρ_j) ρ_j^{n_j} Π_{k≠i,j} π_k(n_k) · μ_i p_ij
    = (1 − ρ_i) ρ_i^{n_i−1} (1 − ρ_j) ρ_j^{n_j+1} Π_{k≠i,j} π_k(n_k) · μ_j p*_ji   [use p*_ji = λ_i p_ij/λ_j]

Jackson's theorem: proof of DBEs VI
Summary of transition rates of the forward and reverse processes:

Transition        | Forward v_{n,m}          | Reverse v*_{n,m}     | Comment
n → n + e_i       | α_i                      | λ_i (1 − Σ_{j=1}^K p_ij) | all i
n → n − e_i       | μ_i (1 − Σ_{j=1}^K p_ij) | α_i μ_i/λ_i          | all i: n_i > 0
n → n − e_i + e_j | μ_i p_ij                 | λ_j p_ji μ_i/λ_i     | all i: n_i > 0, all j

4) Finally, we verify the total rate equation, Σ_m v_{n,m} = Σ_m v*_{n,m}:
  Σ_m v*_{n,m} = Σ_i λ_i (1 − Σ_{j=1}^K p_ij) + Σ_{i: n_i>0} [ α_i μ_i/λ_i + Σ_j λ_j p_ji μ_i/λ_i ]
    = Σ_i λ_i − Σ_j (λ_j − α_j) + Σ_{i: n_i>0} [ α_i μ_i/λ_i + (λ_i − α_i) μ_i/λ_i ]   [use λ_j = α_j + Σ_{i=1}^K λ_i p_ij and Σ_j λ_j p_ji = λ_i − α_i]
    = Σ_i α_i + Σ_{i: n_i>0} μ_i = Σ_m v_{n,m}.
3-133 3-134
Performance measure
• State probability distribution has been derived
• Mean # of hops traversed, h̄, is
  h̄ = λ/α = Σ_{i=1}^K λ_i / Σ_{i=1}^K α_i
• Throughput of queue i: λ_i
• Total throughput of the queueing network: α
• Mean number of customers at queue i (ρ_i = λ_i/μ_i): N̄_i = ρ_i/(1 − ρ_i)

Open queueing networks: example A-I
New programs arrive at a CPU according to a Poisson process of rate α. A program spends an exponentially distributed execution time of mean 1/μ_1 in the CPU. At the end of this service time, the program execution is complete with probability p, or it requires retrieving additional information from secondary storage with probability 1 − p. Suppose that the retrieval of information from secondary storage requires an exponentially distributed amount of time with mean 1/μ_2. Find the mean time that each program spends in the system.
3-137 3-138

Open queueing networks: example A-II
  ρ_i ≜ λ_i/μ_i
  N̄_i = ρ_i/(1 − ρ_i);   T_i = 1/(μ_i − λ_i)
  N̄ = Σ_{i=1}^M N̄_i = Σ_{i=1}^M ρ_i/(1 − ρ_i);   T = N̄/γ = Σ_{i=1}^M (λ_i/γ) T_i

Open queueing networks: example B-I
3-139 3-140
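For the CPU example above, the traffic equations give λ_1 = α + λ_2 and λ_2 = (1 − p)λ_1, so λ_1 = α/p. A sketch with hypothetical values of α, μ_1, μ_2, p:

```python
# Hypothetical numbers for the CPU / secondary-storage example
# (alpha, mu1, mu2, p are illustrative, not from the slides).
alpha, mu1, mu2, p = 1.0, 5.0, 4.0, 0.5

lam1 = alpha / p          # closed-form solution of the traffic equations
lam2 = (1 - p) * lam1

# N_i = rho_i/(1 - rho_i) = lam_i/(mu_i - lam_i) for each M/M/1 queue
N = sum(l / (m - l) for l, m in [(lam1, mu1), (lam2, mu2)])
T = N / alpha             # Little's law over the whole network
print(round(T, 4))
```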
Open queueing networks: example B-II
• Traffic matrix (packets per second):

from → to |  A  |  B  |  C
    A     |  –  | 150 | 200 (50% through B, 50% directly to C)
    B     | 50  |  –  | 100
    C     | 100 | 50  |  –

• Find the mean delay from A to C
• First, we need to know the link traffic (links: L1 = A→B, L2 = A→C, L3 = B→C, L4 = C→A; B→A is routed via C, and C→B via A):

traffic type | L1  | L2  | L3  | L4
A→B          | 150 |     |     |
A→C          | 100 | 100 | 100 |
B→A          |     |     | 50  | 50
B→C          |     |     | 100 |
C→A          |     |     |     | 100
C→B          | 50  |     |     | 50
total        | λ1 = 300 | λ2 = 100 | λ3 = 250 | λ4 = 200

• Since α = 650 and λ = 850, the mean number of hops is h̄ = 850/650 = 1.3077
• With link capacity μ_i = 350, we get the link utilization, mean number and response time as

     | L1            | L2            | L3            | L4
ρ_i  | 300/350=0.857 | 100/350=0.286 | 250/350=0.714 | 200/350=0.571
N̄_i | 300/50        | 100/250       | 250/100       | 200/150
T_i  | 1/50=0.02     | 1/250=0.004   | 1/100=0.01    | 1/150=0.0067

– N̄_i = ρ_i/(1 − ρ_i) and T_i = N̄_i/λ_i
• Mean delay from A to C:
  T̄_AC = (T_1 + T_3) × 0.5 + T_2 × 0.5 = 0.017 (sec)
  [T_1 + T_3: A→B then B→C; T_2: direct A→C]
– propagation delay is ignored
3-141 3-142
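The example-B figures can be recomputed directly from the link flows and the per-link M/M/1 model:

```python
# Link flows from the table above; each link is M/M/1 with mu = 350 pkt/s.
lam = {"L1": 300, "L2": 100, "L3": 250, "L4": 200}
mu = 350
T = {k: 1 / (mu - v) for k, v in lam.items()}   # T_i = 1/(mu - lambda_i)

h = sum(lam.values()) / 650                     # mean hops = lambda/alpha
# Half of the A->C traffic goes A->B->C (L1 then L3), half direct on L2:
T_AC = 0.5 * (T["L1"] + T["L3"]) + 0.5 * T["L2"]
print(round(h, 4), round(T_AC, 3))
```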
Closed queueing networks I
Consider a network of K first-come first-served, single-server queues, each of which has unlimited queue size and exponential service times with rate μ_k. A fixed number of customers, say M, circulate endlessly in this closed network of queues.
• Traffic eqn. (no external arrivals!):
  λ_i = Σ_{j=1}^K λ_j p_ji,  with Σ_{i=1}^K p_ji = 1

Closed queueing networks II
• Using π⃗ = π⃗ · P and π⃗ · 1⃗ = 1, we have
  λ_i = λ(M) π_i
  – λ(M): a constant of proportionality; the sum of the arrival rates over all the queues in the network, and Σ_{i=1}^K λ_i ≠ 1 in general
  – G(M) (normalization constant) will take care of λ(M)
Assuming ρ_i(n_i) = λ_i/μ_i(n_i) < 1 for i = 1, ..., K, we have for all n_i ≥ 0:
– Define P̂_j(n_j) as
  P̂_j(n_j) = 1 if n_j = 0;  ρ_j(1) ρ_j(2) ··· ρ_j(n_j) if n_j > 0
The joint state probability is expressed as
  π(n) = (1/G(M)) Π_{i=1}^K P̂_i(n_i),  with G(M) = Σ_{n_1+···+n_K=M} Π_{i=1}^K P̂_i(n_i)
3-143 3-144
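For small networks, G(M) can be computed by brute-force enumeration of all states with n_1 + ··· + n_K = M. A sketch with a hypothetical 3-queue closed network:

```python
from itertools import product as states_of

# Hypothetical closed network: K = 3 exponential single-server queues,
# M = 4 circulating customers.
mu = [2.0, 1.0, 1.5]
P = [[0.0, 0.7, 0.3],   # routing probabilities (rows sum to 1)
     [1.0, 0.0, 0.0],
     [1.0, 0.0, 0.0]]
K, M = 3, 4

# Solve pi = pi P with a damped ("lazy") power iteration, which also
# converges for periodic routing chains such as this one.
v = [1.0 / K] * K
for _ in range(500):
    w = [sum(v[j] * P[j][i] for j in range(K)) for i in range(K)]
    v = [(a + b) / 2 for a, b in zip(v, w)]

rho = [v[i] / mu[i] for i in range(K)]     # relative utilizations
states = [n for n in states_of(range(M + 1), repeat=K) if sum(n) == M]
G = sum(rho[0]**a * rho[1]**b * rho[2]**c for a, b, c in states)
pi_n = {n: rho[0]**n[0] * rho[1]**n[1] * rho[2]**n[2] / G for n in states}
print(round(sum(pi_n.values()), 6))
```

Any scaling of λ_i works here, which illustrates the slide's point that G(M) absorbs the unknown constant λ(M).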
Closed queueing networks III
• ρ_i: no longer the actual utilization due to λ(M), i.e., a relative utilization
• Setting λ(M) to a particular value does not change the results
• The maximum queue size of each queue is M
Proof: as in Jackson's theorem for open queueing networks
• Use time-reversibility:
  – routing matrix of the reversed process: p*_ji = λ_i p_ij/λ_j
• For a state transition between n and n′ = n − e_i + e_j,
  π(n′) v*_{n′,n} = π(n) v_{n,n′}   (*)
• As in open queueing networks, we have
  v*_{n−e_i+e_j,n} = μ_j p*_ji = μ_j (λ_i p_ij/λ_j)   (1)
  v_{n,n−e_i+e_j} = μ_i p_ij   (2)
Leaving queue i and joining queue j in the forward process (v_{n,n−e_i+e_j} = μ_i p_ij) corresponds to leaving queue j and joining queue i in the reversed process.

Closed queueing networks IV
• With α = α_i = p_i0 = 0, the GBE of open queueing networks reduces to
  Σ_{i=1}^K μ_i π(n) = Σ_{i=1}^K Σ_{j=1}^K p_ji μ_j π(n − e_i + e_j)
• Substituting (1) and (2) into (*), we have for n_i > 0
  ρ_i π(n_1, ..., n_i − 1, ..., n_j + 1, ..., n_K) = ρ_j π(n_1, ..., n_K)
• The proof for the following is given on page 235 (BG):
  Σ_{n′} v_{n,n′} = Σ_{n′} v*_{n,n′}
3-149 3-150
In a closed Jackson network with M customers, the average number of customers at queue j is
  N̄_j(M) = Σ_{m=1}^M Pr[x_j ≥ m] = Σ_{m=1}^M ρ_j^m G(M − m)/G(M)
In a closed Jackson network with M customers, the average throughput of queue j is
  γ_j(M) = μ_j Pr[x_j ≥ 1] = μ_j ρ_j G(M − 1)/G(M) = λ_j G(M − 1)/G(M)
– Average throughput is the average rate at which customers are serviced in the queue. For a single-server queue, the service rate is μ_j when there are one or more customers in the queue, and 0 when the queue is empty.

Closed queueing networks: example A-I
Suppose that the computer system given in the open queueing network example is now operated so that there are always I programs in the system. Note that the feedback loop around the CPU signifies the completion of one job and its instantaneous replacement by another one. Find the steady-state pmf of the system. Find the rate at which programs are completed.
• Using λ_i = λ(I) π_i with π⃗ = π⃗ P,
  π_1 = p π_1 + π_2,   π_2 = (1 − p) π_1,   and   π_1 + π_2 = 1,
we have
  λ_1 = λ(I) π_1 = λ(I)/(2 − p)   and   λ_2 = λ(I) π_2 = λ(I)(1 − p)/(2 − p)
3-153 3-154
Closed queueing networks: example A-II
• For 0 ≤ i ≤ I, with ρ_1 = λ_1/μ_1 and ρ_2 = λ_2/μ_2,
  Pr[N_1 = i, N_2 = I − i] = (1 − ρ_1) ρ_1^i (1 − ρ_2) ρ_2^{I−i} / S(I)
• The normalization constant, S(I), is obtained by
  S(I) = (1 − ρ_1)(1 − ρ_2) Σ_{i=0}^I ρ_1^i ρ_2^{I−i} = (1 − ρ_1)(1 − ρ_2) ρ_2^I (1 − (ρ_1/ρ_2)^{I+1})/(1 − (ρ_1/ρ_2))
• We then have, for 0 ≤ i ≤ I,
  Pr[N_1 = i, N_2 = I − i] = ((1 − β)/(1 − β^{I+1})) β^i,   where β = ρ_1/ρ_2 = μ_2/((1 − p) μ_1)
• Program completion rate: p λ_1, where
  λ_1/μ_1 = 1 − Pr[N_1 = 0] = β(1 − β^I)/(1 − β^{I+1})

Arrival theorem for closed networks I
Theorem: In a closed Jackson network with M customers, the occupancy distribution seen by a customer upon arrival at queue j is the same as the occupancy distribution in a closed network with the arriving customer removed, i.e., the system with M − 1 customers.
• In a closed network with M customers, the expected number of customers found upon arrival by a customer at queue j is equal to the average number of customers at queue j when the total number of customers in the closed network is M − 1.
• An arriving customer sees the system at a state that does not include itself.
Proof:
• X(t) = [X_1(t), X_2(t), ..., X_K(t)]: state of the network at time t
• T_ij(t): the event that a customer moves from queue i to j at time t⁺
3-155 3-156
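The truncated-geometric pmf of example A-II is easy to evaluate; the sketch below uses hypothetical μ_1, μ_2, p, I and checks the completion-rate identity λ_1/μ_1 = β(1 − β^I)/(1 − β^{I+1}):

```python
# Two-queue closed example (CPU + storage), hypothetical parameters.
mu1, mu2, p, I_ = 5.0, 4.0, 0.5, 6
beta = mu2 / ((1 - p) * mu1)         # beta = rho1/rho2

# Truncated-geometric pmf of the number at queue 1:
pmf = [(1 - beta) / (1 - beta**(I_ + 1)) * beta**i for i in range(I_ + 1)]

util1 = 1 - pmf[0]                   # Pr[N1 > 0] = lambda1/mu1
completion_rate = p * mu1 * util1    # p * lambda1
print(round(sum(pmf), 6), round(completion_rate, 4))
```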
Arrival theorem for closed networks II
• For any state n with n_i > 0, the conditional probability that a customer moving from node i to j finds the network at state n is
  α_ij(n) = Pr[X(t) = n | T_ij(t)] = Pr[X(t) = n, T_ij(t)] / Pr[T_ij(t)]
          = Pr[T_ij(t) | X(t) = n] Pr[X(t) = n] / Σ_{m: m_i>0} Pr[T_ij(t) | X(t) = m] Pr[X(t) = m]
          = π(n) μ_i p_ij / Σ_{m: m_i>0} π(m) μ_i p_ij
          = ρ_1^{n_1} ··· ρ_i^{n_i} ··· ρ_K^{n_K} / Σ_{m: m_i>0} ρ_1^{m_1} ··· ρ_i^{m_i} ··· ρ_K^{m_K}
– Changing m_i = m′_i + 1, m′_i ≥ 0,
  α_ij(n) = ρ_1^{n_1} ··· ρ_i^{n_i} ··· ρ_K^{n_K} / Σ_{m_1+···+(m′_i+1)+···+m_K=M, m′_i+1>0} ρ_1^{m_1} ··· ρ_i^{m′_i+1} ··· ρ_K^{m_K}
          = ρ_1^{n_1} ··· ρ_i^{n_i−1} ··· ρ_K^{n_K} / Σ_{m_1+···+m′_i+···+m_K=M−1, m′_i≥0} ρ_1^{m_1} ··· ρ_i^{m′_i} ··· ρ_K^{m_K}

Mean Value Analysis I
Performance measures for closed networks with M customers:
• N̄_j(M): average number of customers in queue j
• T̄_j(M): average time a customer spends (per visit) in queue j
• γ_j(M): average throughput of queue j
Mean-Value Analysis calculates N̄_j(M) and T̄_j(M) directly, without first computing G(M) or deriving the stationary distribution of the network:
a) The queue length observed by an arriving customer is the same as the queue length in a closed network with one less customer
b) Little's result is applicable throughout the network
1. Based on a),
  T̄_j(s) = (1/μ_j)(1 + N̄_j(s − 1))   for j = 1, ..., K,  s = 1, ..., M
  – T̄_j(0) = N̄_j(0) = 0 for j = 1, ..., K
3-157 3-158
Mean Value Analysis II
2. Based on b), we first have, when there are s customers in the network,
  N̄_j(s) = λ_j(s) T̄_j(s) = λ(s) π_j T̄_j(s)   (1)   [step 2-b]
and
  s = Σ_{j=1}^K N̄_j(s) = λ(s) Σ_{j=1}^K π_j T̄_j(s)  →  λ(s) = s / Σ_{j=1}^K π_j T̄_j(s)   (2)   [step 2-a]
Combining (1) and (2) yields
  N̄_j(s) = s π_j T̄_j(s) / Σ_{j=1}^K π_j T̄_j(s)
This is done iteratively for s = 0, 1, ..., M.

Closed queueing networks: example (trucks)
Gupta's truck company owns m trucks; Gupta is interested in the probability that 90% of his trucks are in operation. Trucks cycle among operation (Op), local maintenance (LM), and the manufacturer (M).
• Set a routing matrix P:

from → to | Op  | LM   | M
Op        | 0   | 0.85 | 0.15
LM        | 0.9 | 0    | 0.1
M         | 1   | 0    | 0

• With π_0 = 0.4796, π_1 = 0.4077, and π_2 = 0.1127, we have ρ_0 = λ(m)π_0/μ_0, ρ_1 = λ(m)π_1/μ_1, and ρ_2 = λ(m)π_2/μ_2
• We have Pr[O = i, L = j, M = k] = (1/i!) ρ_0^i ρ_1^j ρ_2^k / G(m), with k = m − i − j
3-159 3-160
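The MVA recursion above (steps 1 and 2) can be sketched directly in code. The two-queue network below is hypothetical: visit ratios π = (1/(2−p), (1−p)/(2−p)) correspond to the CPU/storage routing with p = 0.5.

```python
def mva(mu, pi, M):
    """Mean Value Analysis for a closed network of K single-server queues.

    mu: service rates; pi: visit ratios solving pi = pi P; M: population.
    Returns (N, T, lam) for population M.
    """
    K = len(mu)
    N = [0.0] * K                                   # N_j(0) = 0
    for s in range(1, M + 1):
        T = [(1 + N[j]) / mu[j] for j in range(K)]  # arrival theorem
        lam = s / sum(pi[j] * T[j] for j in range(K))
        N = [lam * pi[j] * T[j] for j in range(K)]  # Little's law
    return N, T, lam

N, T, lam = mva(mu=[5.0, 4.0], pi=[1 / 1.5, 0.5 / 1.5], M=6)
print(round(sum(N), 6))   # queue populations must sum to M = 6
```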
Where are we?

M/G/1 queue: Embedded MC I
3-161 3-162
α(t), β(t): number of arrivals and departures (respectively) in (0, t)
U_n(t): number of times the system goes from n to n + 1 in (0, t); the number of times an arriving customer finds n customers in the system
V_n(t): number of times the system goes from n + 1 to n; the number of times a departing customer leaves n behind
– the transition n to n + 1 cannot reoccur until the number in the system drops to n once more (i.e., until the transition n + 1 to n reoccurs)
U_n(t) and V_n(t) differ by at most one: |U_n(t) − V_n(t)| ≤ 1. Hence
  lim_{t→∞} U_n(t)/t = lim_{t→∞} V_n(t)/t  ⇒  lim_{t→∞} (U_n(t)/α(t)) (α(t)/t) = lim_{t→∞} (V_n(t)/β(t)) (β(t)/t)

M/G/1 queue: Embedded MC II
Defining the probability generating function of the distribution of X_{n+1},
  Q_{n+1}(z) ≜ E[z^{X_{n+1}}] = E[z^{max(X_n−1,0)+Y_{n+1}}] = E[z^{max(X_n−1,0)}] E[z^{Y_{n+1}}]
Let U_{n+1}(z) = E[z^{Y_{n+1}}]; as n → ∞, U_{n+1}(z) = U(z) (independent of n). Then we have
  Q_{n+1}(z) = U(z) Σ_{k=0}^∞ z^k Pr[max(X_n − 1, 0) = k]
             = U(z) [ z^0 Pr[X_n = 0] + Σ_{k=1}^∞ z^{k−1} Pr[X_n = k] ]
             = U(z) [ Pr[X_n = 0] + z^{−1} (Q_n(z) − Pr[X_n = 0]) ]
As n → ∞, Q_{n+1}(z) = Q_n(z) = Q(z) and Pr[X_n = 0] = q_0, so
  Q(z) = q_0 U(z)(z − 1) / (z − U(z)).
3-163 3-164
M/G/1 queue: Embedded MC III
We need to find U(z) and q_0. Using U(z | x_i = x) = e^{λx(z−1)},
  U(z) = ∫_0^∞ U(z | x_i = x) b(x) dx = B*(λ(1 − z)).
Since Q(1) = 1, we have q_0 = 1 − U′(1) = 1 − λX̄ = 1 − ρ.
The transform version of the Pollaczek-Khinchin (P-K) formula is
  Q(z) = (1 − ρ) B*(λ(1 − z))(z − 1) / (z − B*(λ(1 − z)))
Letting q̄ = Q′(1), one gets W = q̄/λ − X̄.
Sojourn time distribution of an M/G/1 system with FIFO service: if a customer spends T_j sec in the system, the number of customers it leaves behind in the system is the number of customers that arrive during these T_j sec, due to FIFO.

M/G/1 queue: Embedded MC IV
Let f_T(t) be the probability density function of T_j, i.e., of the total delay. Then
  Q(z) = Σ_{k=0}^∞ z^k ∫_0^∞ ((λt)^k / k!) e^{−λt} f_T(t) dt = T*(λ(1 − z))
where T*(s) is the Laplace transform of f_T(t). We have
  T*(λ(1 − z)) = (1 − ρ) B*(λ(1 − z))(z − 1) / (z − B*(λ(1 − z)))
Letting s = λ(1 − z), one gets
  T*(s) = (1 − ρ) s B*(s) / (s − λ + λB*(s)) = W*(s) B*(s)  ⇒  W*(s) = (1 − ρ) s / (s − λ + λB*(s))
In an M/M/1 system, we have B*(s) = μ/(s + μ):
  W*(s) = (1 − ρ) [1 + λ/(s + μ − λ)]
3-165 3-166
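As a numerical sanity check on the transform P-K formula (hypothetical λ and μ): for exponential service, Q(z) must reduce to the geometric PGF (1 − ρ)/(1 − ρz) of the M/M/1 queue length.

```python
# For M/M/1, B*(s) = mu/(s + mu); plug into the transform P-K formula
# Q(z) = (1 - rho) B*(lam(1-z))(z - 1) / (z - B*(lam(1-z)))
# and compare with the geometric PGF (1 - rho)/(1 - rho z).
lam, mu = 2.0, 5.0
rho = lam / mu
B = lambda s: mu / (s + mu)          # LST of the exp(mu) service time

def Q(z):
    b = B(lam * (1 - z))
    return (1 - rho) * b * (z - 1) / (z - b)

zs = [0.1 * k for k in range(10)]    # z in [0, 0.9]
err = max(abs(Q(z) - (1 - rho) / (1 - rho * z)) for z in zs)
print(err < 1e-12)                   # prints True
```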
• Taking the inverse transform of W*(s) (using L{A e^{−at}} ↔ A/(s + a)),
  L^{−1}{W*(s)} = L^{−1}{(1 − ρ)[1 + λ/(s + μ − λ)]} = (1 − ρ) δ(t) + λ(1 − ρ) e^{−μ(1−ρ)t},   t > 0
We can write W*(s) in terms of R_0*(s):
  W*(s) = (1 − ρ) s / (s − λ + λB*(s)) = (1 − ρ) s / (s − λ(1 − B*(s)))
        = (1 − ρ) / (1 − λX̄ (1 − B*(s))/(sX̄))
        = (1 − ρ) / (1 − ρ R_0*(s))
        = (1 − ρ) Σ_{k=0}^∞ (ρ R_0*(s))^k

Residual life time I
Hitchhiker's paradox:
Cars pass a point on a road according to a Poisson process with rate λ = 1/10 per minute, i.e., one car every 10 min on average. A hitchhiker arrives at the roadside point at a random instant of time. What is his mean waiting time for the next car?
1. Since he arrives randomly within an interarrival interval, it would be 5 min.
2. Due to the memoryless property of the exponential distribution, it would be another 10 min.
* L. Kleinrock, Queueing Systems, vol. 1: Theory
3-167 3-168
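A quick Monte Carlo sketch of the paradox (all numbers hypothetical): a random observer falls preferentially into long gaps, so for exponential gaps with mean 10 the mean wait is 10 min, not 5.

```python
import bisect
import random

# Poisson car arrivals with mean gap 10 min; random observers.
rng = random.Random(0)
horizon = 2_000_000.0
t, arrivals = 0.0, []
while t < horizon:
    t += rng.expovariate(1 / 10)     # mean 10 min between cars
    arrivals.append(t)

waits = []
for _ in range(200_000):
    u = rng.uniform(0, horizon)
    nxt = arrivals[bisect.bisect_right(arrivals, u)]  # first car after u
    waits.append(nxt - u)
mean_wait = sum(waits) / len(waits)
print(round(mean_wait, 1))           # close to 10, the "paradoxical" answer
```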
Residual life time II
The distribution of the interval that the hitchhiker captures depends on both X and f_X(x):
  f_{X′}(x) = C x f_X(x),   C: proportionality constant
Since ∫_0^∞ f_{X′}(x) dx = 1, we have C = 1/E[X] = 1/X̄:
  f_{X′}(x) = x f_X(x)/X̄
Since Pr[R_0 < y | X′ = x] = y/x for 0 ≤ y ≤ x, the joint pdf of X′ and R_0 is
  Pr[y < R_0 < y + dy, x < X′ < x + dx] = (dy/x)(x f_X(x) dx/X̄) = f_X(x) dy dx/X̄
Unconditioning over X′,
  f_{R_0}(y) dy = (dy/X̄) ∫_y^∞ f_X(x) dx = ((1 − F_X(y))/X̄) dy  ⇒  f_{R_0}(y) = (1 − F_X(y))/X̄

Residual life time III
If we take the Laplace transform of the pdf of R_0 for 0 ≤ R_0 ≤ x,
  E[e^{−sR_0} | X′ = x] = ∫_0^x (e^{−sy}/x) dy = (1 − e^{−sx})/(sx)
Unconditioning over X′, we have R_0*(s) and its moments as
  R_0*(s) = (1 − F*_X(s))/(sX̄)  ⇒  E[R_0^n] = E[X^{n+1}]/((n + 1) X̄)
where F*_X(s) = ∫_0^∞ e^{−sx} f_X(x) dx
The mean residual time is rewritten as
  R̄ = E[R_0] = E[X²]/(2X̄) = 0.5 (X̄ + σ_X²/X̄)
Surprisingly, the distribution of the elapsed waiting time, X′ − R_0, is identical to that of the remaining waiting time.
3-169 3-170
  π_n = Σ_{k=0}^n π_{n+1−k} α_k + α_n π_0   for n = 0, 1, 2, ...
  (α_k: probability that k customers arrive during one service time)
– Q(z) can also be obtained using Q(z) = Σ_{n=0}^∞ π_n z^n
– As an alternative, we define ν_0 = 1 and ν_i = π_i/π_0

[Figure: sample paths of the number of customers in the system and of the virtual workload over the same time axis]

M/G/1 queue: Mean value analysis I
• R_i: residual service time seen by customer i
• X_i: service time of customer i
• N_i: number of customers in queue found by customer i
  W_i = R_i + Σ_{j=i−N_i}^{i−1} X_j
Taking expectations and using the independence among the X_j,
  E[W_i] ≜ W = E[R_i] + E[ Σ_{j=i−N_i}^{i−1} E[X_j | N_i] ] = R̄ + (1/μ) N̄_q
M/G/1 queue: Mean value analysis III
The time-averaged residual service time r(τ) over the interval [0, t] is
  R̄(t) = (1/t) ∫_0^t r(τ) dτ = (1/t) Σ_{i=1}^{M(t)} (1/2) X_i² = (1/2) (M(t)/t) (Σ_{i=1}^{M(t)} X_i² / M(t))
– M(t) is the number of service completions within [0, t]
– e.g., upon a new service of duration X_1, r(τ) starts at X_1 and decays linearly for X_1 time units
As t → ∞, lim_{t→∞} R̄(t) = R̄ = λX̄²/2, and
  W = λX̄²/(2(1 − ρ))  ←→  W = −dW*(s)/ds |_{s=0}

M/G/1 queue: Mean value analysis IV
From the hitchhiker's paradox, we have E[R_0] = E[X²]/(2E[X]), so
  R̄ = 0 · Pr[N(t) = 0] + E[R_0] · Pr[N(t) > 0] = (E[X²]/(2E[X])) · λE[X] = λX̄²/2
P-K formula for the mean waiting time in queue:
  W = −W*′(s)|_{s=0} = λX̄²/(2(1 − ρ)) = λ(σ_X² + (X̄)²)/(2(1 − ρ))
    = ((1 + C_x²)/2) (ρ/(1 − ρ)) X̄ = ((1 + C_x²)/2) W_{M/M/1}
– X̄² denotes E[X²] = σ_X² + (X̄)²; C_x² = σ_X²/(X̄)² is the squared coefficient of variation of the service time
– the average time in the system is T = W + X̄
E.g., since C_x = 1 in an M/M/1 and C_x = 0 in an M/D/1,
  W_{M/M/1} = (ρ/(1 − ρ)) X̄ > W_{M/D/1} = (ρ/(2(1 − ρ))) X̄
3-175 3-176
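The C_x² scaling is easy to verify numerically; the sketch below (hypothetical λ and mean service time) compares deterministic and exponential service with the same mean.

```python
# P-K mean waiting time W = lam E[X^2] / (2 (1 - rho)) for two service
# distributions with the same mean (hypothetical lam, mu).
lam, mu = 0.6, 1.0
rho = lam / mu
Xbar = 1 / mu

def pk_wait(EX2):
    return lam * EX2 / (2 * (1 - rho))

W_md1 = pk_wait(Xbar**2)        # deterministic: E[X^2] = Xbar^2 (Cx = 0)
W_mm1 = pk_wait(2 * Xbar**2)    # exponential:  E[X^2] = 2 Xbar^2 (Cx = 1)
print(round(W_md1, 3), round(W_mm1, 3), round(W_mm1 / W_md1, 1))  # → 0.75 1.5 2.0
```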
Delay analysis of an ARQ system
Suppose a Go-Back-N ARQ system, where a packet is successfully transmitted with probability 1 − p, and the ACK for a packet arrives within the transmission time of the following N − 1 packets. If a packet needs k retransmissions, its effective service time is 1 + kN slots:
  Pr[X = 1 + kN] = (1 − p) p^k,   k = 0, 1, ...
[Figure 3.17 (BG): Illustration of the effective service times of packets in the ARQ system. For example, packet 2 has an effective service time of N + 1 because there was an error in the first attempt to transmit it following the last transmission of packet 1, but no error in the second attempt.]
• We need the first two moments of the service time to use the P-K formula:
  X̄ = Σ_{k=0}^∞ (1 + kN)(1 − p) p^k = 1 + Np/(1 − p)
  X̄² = Σ_{k=0}^∞ (1 + kN)² (1 − p) p^k = 1 + 2Np/(1 − p) + N²(p + p²)/(1 − p)²

M/G/1 queue with vacations I
• The server takes a vacation at the end of each busy period
• It takes an additional vacation if no customers are found at the end of a vacation
[Figure 3.12 (BG): An M/G/1 system with vacations. At the end of a busy period, the server goes on vacation for time V with first and second moments V̄ and V̄².]
• Residual service time including vacation periods:
  (1/t) ∫_0^t r(τ) dτ = (1/t) Σ_{i=1}^{M(t)} (1/2) X_i² + (1/t) Σ_{i=1}^{L(t)} (1/2) V_i²
  – M(t): # of services completed by time t
  – L(t): # of vacations completed by time t
[Figure 3.13 (BG): Residual service times for an M/G/1 system with vacations. Busy periods alternate with vacation periods.]
3-177 3-178
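The closed-form moments of the Go-Back-N effective service time can be checked against the defining sums (N and p below are hypothetical):

```python
# Numeric check of the Go-Back-N effective-service-time moments,
# truncating the geometric sums at k = 200.
N, p = 4, 0.2
EX = sum((1 + k * N) * (1 - p) * p**k for k in range(200))
EX2 = sum((1 + k * N) ** 2 * (1 - p) * p**k for k in range(200))

EX_closed = 1 + N * p / (1 - p)
EX2_closed = 1 + 2 * N * p / (1 - p) + N**2 * (p + p**2) / (1 - p) ** 2
print(round(EX, 6), round(EX2, 6))
```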
M/G/1 queue with vacations II
• The residual service time including vacation periods is rewritten as
  (1/t) ∫_0^t r(τ) dτ = (M(t)/t) (Σ_{i=1}^{M(t)} (1/2) X_i² / M(t)) + (L(t)/t) (Σ_{i=1}^{L(t)} (1/2) V_i² / L(t))
  – as t → ∞: M(t)/t → λ and L(t)/t → (1 − ρ)/V̄, so
  R̄ = λX̄²/2 + (1 − ρ) V̄²/(2V̄)
• Using W = R̄/(1 − ρ), we have
  W = λX̄²/(2(1 − ρ)) + V̄²/(2V̄)
  – the sum of the waiting time in an M/G/1 queue and the mean residual vacation time

FDM and TDM on a slot basis I
Suppose m traffic streams of equal-length packets, each arriving according to a Poisson process with rate λ/m
• If the traffic streams are frequency-division multiplexed on m subchannels, the transmission time of each packet is m time units
  – Using the P-K formula, λX̄²/(2(1 − ρ)), with per-stream rate λ/m, ρ = λ, and μ = 1/m,
    W_FDM = λm/(2(1 − λ))
• Consider the same FDM, but packet transmissions can start only at times m, 2m, 3m, ...: slotted FDM
  – This system gives stations a vacation of m time units, so
    W_SFDM = W_FDM + V̄²/(2V̄) = W_FDM + 0.5m
3-179 3-180
FDM and TDM on a Slot Basis II
• m traffic streams are time-division multiplexed, where one slot is dedicated to each traffic stream as shown below
[Figure: a TDM frame of m slots, one per stream (Stream 1, Stream 2, Stream 3, Stream 4, ...)]
– Service time to each customer's queue: X̄ = m slots, so X̄² = m²
– Frame synchronization delay: m/2
– Using the P-K formula, we have
  W_TDM = λm/(2(1 − λ)) + m/2 = m/(2(1 − λ)) = W_SFDM
– System response time: T = 1 + W_TDM
Thus, the average total delay is more favorable in TDM than in FDM (assuming that m > 2): the longer average waiting time in queue for TDM is more than compensated by the faster service time.

M/G/1 Queue with Non-Preemptive Priorities I
Customers are divided into K priority classes, k = 1, ..., K (class 1 highest).
Non-preemptive priority:
• Service of a customer completes uninterrupted, even if customers of higher priority arrive in the meantime
• A separate (logical) queue is maintained for each class; each time the server becomes free, it takes the first customer of the highest-priority nonempty queue
Notations:
• N̄_q^(k): mean number of waiting customers belonging to class k in the queue
• W_k: mean waiting time of class-k customers
• ρ_k: utilization, or load, of class k: ρ_k = λ_k X̄_k
• R̄: mean residual service time in the server upon arrival; it is the same for all priority classes if all customers have the same service time distribution
3-181 3-182
M/G/1 Queue with Non-Preemptive Priorities II
Stability condition: ρ_1 + ρ_2 + ··· + ρ_K < 1.
Priority 1: similar to the P-K formula,
  W_1 = R̄ + (1/μ_1) N̄_q^(1),  and N̄_q^(1) = λ_1 W_1  ⇒  W_1 = R̄/(1 − ρ_1)
Priority 2:
  W_2 = R̄ + (1/μ_1) N̄_q^(1) + (1/μ_2) N̄_q^(2)   [time needed to serve the class-1 and class-2 customers ahead in the queue]
      + (1/μ_1) λ_1 W_2   [time needed to serve the higher-class customers that arrive during the waiting time of the class-2 customer]
From N̄_q^(2) = λ_2 W_2,
  W_2 = R̄ + ρ_1 W_1 + ρ_2 W_2 + ρ_1 W_2  ⇒  W_2 = (R̄ + ρ_1 W_1)/(1 − ρ_1 − ρ_2)
– Using W_1 = R̄/(1 − ρ_1), we have
  W_2 = R̄/((1 − ρ_1)(1 − ρ_1 − ρ_2))

M/G/1 Queue with Non-Preemptive Priorities III
From W_2 = R̄/((1 − ρ_1)(1 − ρ_1 − ρ_2)), we can generalize:
  W_k = R̄/((1 − ρ_1 − ··· − ρ_{k−1})(1 − ρ_1 − ··· − ρ_k))
As before, the mean residual service time R̄ is
  R̄ = (1/2) λX̄²,  with λ = Σ_{i=1}^K λ_i and X̄² = (1/λ) Σ_{i=1}^K λ_i X̄_i²
Mean waiting time for class-k customers:
  W_k = Σ_{i=1}^K λ_i X̄_i² / (2(1 − ρ_1 − ··· − ρ_{k−1})(1 − ρ_1 − ··· − ρ_k))
Note that the average queueing time of a customer depends on the arrival rate of lower priority customers (through R̄).
3-183 3-184
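The non-preemptive W_k formula above is a one-liner per class; the sketch below uses two hypothetical classes with exponential service of mean 1 (so E[X²] = 2):

```python
# Non-preemptive priority M/G/1: W_k = R / ((1 - s_{k-1})(1 - s_k)),
# where s_k = rho_1 + ... + rho_k and R = 0.5 sum_i lam_i E[X_i^2].
classes = [(0.2, 1.0, 2.0), (0.3, 1.0, 2.0)]   # (lam_i, Xbar_i, EX2_i)
R = 0.5 * sum(l * ex2 for l, _, ex2 in classes)

W, sigma = [], 0.0
for lam_i, xbar_i, _ in classes:
    rho_i = lam_i * xbar_i
    W.append(R / ((1 - sigma) * (1 - sigma - rho_i)))
    sigma += rho_i
print([round(w, 4) for w in W])   # → [0.625, 1.25]
```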
M/G/1 Queue with Preemptive Resume Priorities I
Preemptive resume priority:
• Service of a customer is interrupted when a higher priority customer arrives.
• It resumes from the point of interruption when all higher priority customers have been served.
• In this case the lower priority class customers are completely "invisible" and do not affect in any way the queues of the higher classes.
The time in the system of a class-k customer, T_k, consists of:
(i) the customer's own mean service time X̄_k;
(ii) the mean time to serve the customers in classes 1, ..., k found ahead in the queue,
  R̄_k/(1 − ρ_1 − ··· − ρ_k),  where R̄_k = (1/2) Σ_{i=1}^k λ_i X̄_i²;

M/G/1 Queue with Preemptive Resume Priorities II
(iii) the average time required to serve customers of priority higher than k that arrive during the T_k the customer is in the system,
  Σ_{i=1}^{k−1} λ_i X̄_i T_k = Σ_{i=1}^{k−1} ρ_i T_k   for k > 1 (this is zero for k = 1).
• Combining these three terms,
  T_k = X̄_k + R̄_k/(1 − ρ_1 − ··· − ρ_k) + T_k Σ_{i=1}^{k−1} ρ_i
  ⇒ T_k = ((1 − ρ_1 − ··· − ρ_k) X̄_k + R̄_k) / ((1 − ρ_1 − ··· − ρ_{k−1})(1 − ρ_1 − ··· − ρ_k))
  – the factor (1 − ρ_1 − ··· − ρ_{k−1}) becomes 1 if k = 1
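The closed-form T_k can be evaluated the same way; the same two hypothetical classes as in the non-preemptive case are reused below (note T_1 matches the plain M/G/1 result T = X̄ + λE[X²]/(2(1 − ρ)) for class 1 alone, since lower classes are invisible):

```python
# Preemptive-resume T_k from the formula above, with hypothetical
# classes given as (lam_i, Xbar_i, E[X_i^2]).
classes = [(0.2, 1.0, 2.0), (0.3, 1.0, 2.0)]

T, sigma_prev, R_k = [], 0.0, 0.0
for lam_i, xbar_i, ex2_i in classes:
    sigma = sigma_prev + lam_i * xbar_i
    R_k += 0.5 * lam_i * ex2_i              # R_k = 0.5 sum_{i<=k} lam_i E[X_i^2]
    T.append(((1 - sigma) * xbar_i + R_k) / ((1 - sigma_prev) * (1 - sigma)))
    sigma_prev = sigma
print([round(t, 4) for t in T])   # → [1.25, 2.5]
```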
Upper Bound for G/G/1 System I
The average waiting time in queue satisfies
  W ≤ λ(σ_a² + σ_b²)/(2(1 − ρ))
where σ_a² and σ_b² are the variances of the interarrival and service times.

Upper Bound for G/G/1 System II
Let V_k = X_k − A_k (service time minus interarrival time), so that W_{k+1} = (W_k + V_k)⁺ and the idle period is I_k = (W_k + V_k)⁻. Since (W_k + V_k)⁺ · (W_k + V_k)⁻ = 0,
  σ²_{W_k+V_k} = σ²_{(W_k+V_k)⁺} + σ²_{(W_k+V_k)⁻} + 2 E[(W_k + V_k)⁺] E[(W_k + V_k)⁻]
3-187 3-188

Upper Bound for G/G/1 System III
As k → ∞, we can see that
  σ²_{W_{k+1}} + σ²_{I_k} + 2 W̄_{k+1} Ī_k = σ²_{W_k} + σ_a² + σ_b²
becomes
  σ²_W + σ_I² + 2 W̄ Ī = σ²_W + σ_a² + σ_b²
We get W̄ as
  W̄ = (σ_a² + σ_b² − σ_I²)/(2Ī) ≤ λ(σ_a² + σ_b²)/(2(1 − ρ)),
using Ī = (1 − ρ)/λ and σ_I² ≥ 0.
3-189
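The bound can be illustrated by simulating the Lindley recursion W_{k+1} = (W_k + X_k − A_k)⁺ for an arbitrary G/G/1 queue; the uniform distributions below are hypothetical.

```python
import random

# G/G/1 via the Lindley recursion, compared with the upper bound
# W <= lam (sigma_a^2 + sigma_b^2) / (2 (1 - rho)).
rng = random.Random(3)
n = 400_000
W, tot = 0.0, 0.0
for _ in range(n):
    a = rng.uniform(0.5, 2.0)      # interarrival: mean 1.25 -> lam = 0.8
    x = rng.uniform(0.2, 1.2)      # service: mean 0.7 -> rho = 0.56
    W = max(0.0, W + x - a)        # waiting time of the next customer
    tot += W
w_emp = tot / n

lam, rho = 1 / 1.25, 0.7 / 1.25
sig2 = (2.0 - 0.5) ** 2 / 12 + (1.2 - 0.2) ** 2 / 12   # uniform variances
bound = lam * sig2 / (2 * (1 - rho))
print(round(w_emp, 3), "<=", round(bound, 3))
```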