
Basic Queueing Theory

Dr. János Sztrik

University of Debrecen, Faculty of Informatics


Reviewers:

Dr. József Bíró


Doctor of the Hungarian Academy of Sciences, Full Professor
Budapest University of Technology and Economics

Dr. Zalán Heszberger


PhD, Associate Professor
Budapest University of Technology and Economics

This book is dedicated to my wife without whom this
work could have been finished much earlier.

• If anything can go wrong, it will.

• If you change queues, the one you have left will start to move faster than the one
you are in now.

• Your queue always goes the slowest.

• Whatever queue you join, no matter how short it looks, it will always take the
longest for you to get served.

(Murphy's Laws on reliability and queueing)

Contents

Preface 7
1 Fundamental Concepts of Queueing Theory 9
1.1 Performance Measures of Queueing Systems . . . . . . . . . . . . . . . . 10
1.2 Kendall's Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3 Basic Relations for Birth-Death Processes . . . . . . . . . . . . . . . . . 13
1.4 Optimal Design of Queueing Systems . . . . . . . . . . . . . . . . . . . . 15
1.5 Queueing Software and Collection of Problems with Solutions . . . . . . 17

2 Infinite-Source Queueing Systems 19


2.1 The M/M/1 Queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2 The M/M/1 Queue with Balking Customers . . . . . . . . . . . . . . . . 46
2.3 The M/M/1 Priority Queues . . . . . . . . . . . . . . . . . . . . . . . . 51
2.4 The M/M/1/K Queue, Systems with Finite Capacity . . . . . . . . . . . 54
2.5 The M/M/∞ Queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
2.6 The M/M/n/n Queue, Erlang-Loss System . . . . . . . . . . . . . . . . 61
2.7 The M/M/n Queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
2.8 The M/M/c Non-preemptive Priority Queue (HOL) . . . . . . . . . . . 94
2.9 The M/M/c/K Queue - Multiserver, Finite-Capacity Systems . . . . . . 94
2.10 The M/M/c/K Queue with Balking and Reneging . . . . . . . . . . . . 99
2.11 The M/G/1 Queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
2.12 The M/G/1 Priority Queue . . . . . . . . . . . . . . . . . . . . . . . . . 114
2.13 The M/G/c Processor Sharing Queue . . . . . . . . . . . . . . . . . . . . 120

3 Finite-Source Systems 121


3.1 The M/M/r/r/n Queue, Engset-Loss System . . . . . . . . . . . . . . . 121
3.2 The M/M/1/n/n Queue . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
3.3 The Heterogeneous ~M/~M/1/n/n Queue . . . . . . . . . . . . . . . . . . 141
3.4 The M/M/r/n/n Queue . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
3.5 The M/M/r/K/n Queue . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
3.6 The M/M/c/K/n Queue with Balking and Reneging . . . . . . . . . . . 161
3.7 The M/G/1/n/n/PS Queue . . . . . . . . . . . . . . . . . . . . . . . . . 163
3.8 The ~G/M/r/n/n/FIFO Queue . . . . . . . . . . . . . . . . . . . . . . . 166

4 Exercises 175
4.1 Infinite-Source Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
4.2 Finite-Source Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192

5 Queueing Theory Formulas 195


5.1 Notations and Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . 195
5.2 Relationships between random variables . . . . . . . . . . . . . . . . . . 197
5.3 M/M/1 Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
5.4 M/M/1/K Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
5.5 M/M/c Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
5.6 M/M/2 Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
5.7 M/M/c/c Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
5.8 M/M/c/K Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
5.9 M/M/∞ Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
5.10 M/M/1/K/K Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
5.11 M/G/1/K/K Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
5.12 M/M/c/K/K Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
5.13 D/D/c/K/K Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
5.14 M/G/1 Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
5.15 GI/M/1 Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
5.16 GI/M/c Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
5.17 M/G/1 Priority queueing system . . . . . . . . . . . . . . . . . . . . . . 227
5.18 M/G/c Processor Sharing system . . . . . . . . . . . . . . . . . . . . . . 235
5.19 M/M/c Priority system . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236

Appendix 237
Bibliography 238

Preface
Modern information technologies require innovations that are based on modeling, analyzing, designing and finally implementing new systems. The whole development process assumes well-organized teamwork of experts including engineers, computer scientists, mathematicians and physicists, just to mention some of them. Modern infocommunication networks are among the most complex systems, where the reliability and efficiency of the components play a very important role. For a better understanding of the dynamic behavior of the involved processes one has to deal with the construction of mathematical models which describe the stochastic service of randomly arriving requests. Queueing theory is one of the most commonly used mathematical tools for the performance evaluation of such systems.

The aim of the book is to present the basic methods and approaches, mainly at a Markovian level, for the analysis of not too complicated systems. The main purpose is to understand how models can be constructed and how to analyze them. It is assumed that the reader has been exposed to a first course in probability theory; however, in the text I give a refresher and state the most important principles I need later on. My intention is to show what is behind the formulas and how we can derive them. It is also essential to know which kinds of questions are reasonable and then how to answer them.

My experience and advice are that, if possible, one should solve the same problem in different ways and compare the results. Sometimes very nice closed-form, analytic solutions are obtained, but the main problem is that we cannot compute them for higher values of the involved variables. In such cases algorithmic or asymptotic approaches can be very useful. My intention is to find the balance between mathematical and practitioner needs. I feel that a satisfactory middle ground has been established for understanding and applying these tools to practical systems. I hope that after understanding this book the reader will be able to create his or her own formulas if needed.

It should be underlined that most of the models are based on the assumption that the involved random variables are exponentially distributed and independent of each other. We must confess that this assumption is artificial, since in practice the exponential distribution is not so frequent. However, mathematical models based on the memoryless property of the exponential distribution greatly simplify the solution methods, resulting in computable formulas. By using these relatively simple formulas one can easily foresee the effect of a given parameter on the performance measures and hence the trends can be forecast. Clearly, instead of the exponential distribution one can use other distributions, but in that case the mathematical models will be much more complicated. The analytic results can help us in validating the results obtained by stochastic simulation. This approach is quite general when analytic expressions cannot be expected. In this case not only the model construction but also the statistical analysis of the output is important.

The primary purpose of the book is to show how to create simple models for practical problems; that is why the general theory of stochastic processes is omitted. It uses only the most important concepts and sometimes states theorems without proofs, but each time the related references are cited.

I must confess that the style of the following books greatly influenced me, even if they are at a different level and more comprehensive than this material: Allen [3], Jain [54], Kleinrock [60], Kobayashi and Mark [63], Medhi [73], Nelson [75], Stewart [95], Tijms [115], Trivedi [118].

This book is intended not only for students of computer science, engineering, operations research and mathematics, but also for those who study at business, management and planning departments, too. It covers more than one semester and has been tested by graduate students at Debrecen University over the years. It gives a very detailed analysis of the involved queueing systems by giving the density function, distribution function, generating function and Laplace-transform, respectively. Furthermore, a software package called QSA (Queueing Systems Assistance) developed in 2021 is provided to calculate and visualize the main performance measures. These applets can be run on all modern devices, including smartphones, too.

I have attempted to provide examples for better understanding, and a collection of exercises with detailed solutions helps the reader in deepening her/his knowledge. I am convinced that the book covers the basic topics in stochastic modeling of practical problems and that it supports students all over the world.

I am indebted to Professors József Bíró and Zalán Heszberger for their review, comments and suggestions which greatly improved the quality of the book. I am also very grateful to Tamás Török, Zoltán Nagy and Ferenc Veres for their help in editing.

All comments and suggestions are welcome at:


mailto:[email protected]
http://irh.inf.unideb.hu/~jsztrik

Debrecen, 2012, 2021

János Sztrik

Chapter 1
Fundamental Concepts of Queueing
Theory
Queueing theory deals with one of the most unpleasant experiences of life, waiting. Queueing is quite common in many fields, for example, at a telephone exchange, in a supermarket, at a petrol station, in computer systems, etc. I have mentioned the telephone exchange first because the first problems of queueing theory were raised by telephone calls, and Erlang was the first who treated congestion problems at the beginning of the 20th century, see Erlang [29, 30].
His works inspired engineers and mathematicians to deal with queueing problems using probabilistic methods. Queueing theory became a field of applied probability and many of its results have been used in operations research, computer science, telecommunication, traffic engineering, reliability theory, just to mention some. It should be emphasized that it is a living branch of science where experts publish a lot of papers and books. The easiest way to verify this statement is to search Google Scholar for queueing-related items. A Queueing Theory Homepage has been created where readers are informed about relevant sources, for example books, software, conferences, journals, etc. I highly recommend visiting it at

http://web2.uwindsor.ca/math/hlynka/queue.html

There are only a few books and lecture notes published in Hungarian; I would mention the works of Györfi and Páli [41], Jereb and Telek [56], Kleinrock [60], Lakatos and Szeidl, Telek [67] and Sztrik [102-105]. However, it should be noted that Hungarian engineers and mathematicians have effectively contributed to the research and its applications. First of all we have to mention Lajos Takács, who wrote his pioneering and famous book about queueing theory [112]. Other researchers are J. Tomkó, M. Arató, L. Györfi, A. Benczúr, L. Lakatos, L. Szeidl, L. Jereb, M. Telek, J. Bíró, T. Do, and J. Sztrik. The Library of the Faculty of Informatics, University of Debrecen, Hungary offers a valuable collection of queueing and performance modeling related books in English and Russian, too. Please visit:

https://irh.inf.unideb.hu/user/jsztrik/education/05/3f.html

I would like to draw your attention to the books of Takagi [109-111], where a rich collection of references is provided.

1.1 Performance Measures of Queueing Systems
To characterize a queueing system we have to identify the probabilistic properties of the incoming flow of requests, the service times and the service discipline. The arrival process can be characterized by the distribution of the interarrival times of the customers, denoted by A(t), that is
$$A(t) = P(\text{interarrival time} < t).$$
In queueing theory these interarrival times are usually assumed to be independent and identically distributed random variables. The other random variable is the service time, sometimes called the service request or work. Its distribution function is denoted by B(x), that is
$$B(x) = P(\text{service time} < x).$$
The service times and interarrival times are commonly supposed to be independent random variables.
The structure of service and the service discipline tell us the number of servers and the capacity of the system, that is the maximum number of customers staying in the system including the ones under service. The service discipline determines the rule according to which the next customer is selected. The most commonly used disciplines are

• FIFO - First In First Out: who comes earlier leaves earlier; FCFS - First Come First Served

• LIFO - Last In First Out: who comes later leaves earlier; LCFS - Last Come First Served

• RS - Random Service: the customer is selected randomly; SIRO - Service In Random Order

• Priority without Preemption or Head of Line (HOL); Priority with Preemption / Resume or Repeat

• PS - Processor Sharing

The aim of all investigations in queueing theory is to get the main performance measures of the system, which are the probabilistic properties (distribution function, density function, mean, variance) of the following random variables: number of customers in the system, number of waiting customers, utilization of the server(s), response time of a customer, waiting time of a customer, idle time of the server, busy time of a server. Of course, the answers heavily depend on the assumptions concerning the distribution of interarrival times, service times, number of servers, capacity and service discipline. It is quite rare, except for elementary or Markovian systems, that the distributions themselves can be computed. Usually only their means or transforms can be calculated.
For simplicity, consider first a single-server system. Let $\varrho$, called the traffic intensity, be defined as
$$\varrho = \frac{\text{mean service time}}{\text{mean interarrival time}}.$$

Assuming an infinite-population system with arrival intensity $\lambda$, which is the reciprocal of the mean interarrival time, and letting the mean service time be denoted by $1/\mu$, we have
$$\varrho = \text{arrival intensity} \times \text{mean service time} = \frac{\lambda}{\mu}.$$
If $\varrho > 1$ then the system is overloaded, since the requests arrive faster than they are served. It shows that more servers are needed.

Let $\chi(A)$ denote the characteristic function of event $A$, that is
$$\chi(A) = \begin{cases} 1, & \text{if } A \text{ occurs},\\ 0, & \text{if } A \text{ does not},\end{cases}$$
and furthermore let $N(t) = 0$ denote the event that at time $t$ the server is idle, that is there is no customer in the system. Then the utilization of the server during time $T$ is defined by
$$\frac{1}{T}\int_0^T \chi(N(t) \neq 0)\,dt,$$
where $T$ is a long interval of time. As $T \to \infty$ we get the utilization of the server, denoted by $U_s$, and the following relation holds with probability 1:
$$U_s = \lim_{T\to\infty}\frac{1}{T}\int_0^T \chi(N(t) \neq 0)\,dt = 1 - P_0 = \frac{E\delta}{E\delta + Ei},$$
where $P_0$ is the steady-state probability that the server is idle, and $E\delta$, $Ei$ denote the mean busy period and the mean idle period of the server, respectively.

This formula is a special case of the relationship valid for continuous-time Markov chains
and proved in Tomkó [117].

Theorem 1 Let $X(t)$ be an ergodic Markov chain and let $A$ be a subset of its state space. Then with probability 1
$$\lim_{T\to\infty}\frac{1}{T}\int_0^T \chi(X(t)\in A)\,dt = \sum_{i\in A} P_i = \frac{m(A)}{m(A)+m(\bar{A})},$$
where $m(A)$ and $m(\bar{A})$ denote the mean sojourn times of the chain in $A$ and $\bar{A}$ during a cycle, respectively. The ergodic (stationary, steady-state) distribution of $X(t)$ is denoted by $P_i$.

In an $m$-server system the mean number of arrivals to a given server during time $T$ is $\lambda T/m$, given that the arrivals are uniformly distributed over the servers. Thus the utilization of a given server is
$$U_s = \frac{\lambda}{m\mu}.$$

The other important measure of the system is the throughput of the system, which is defined as the mean number of requests serviced during a unit of time. In an $m$-server system the mean number of completed services per unit time is $m\varrho\mu$ and thus
$$\text{throughput} = m U_s \mu = \lambda.$$

However, from the point of view of a tagged customer, the waiting and response times are more important than the measures defined above. Let us denote by $W_j$, $T_j$ the waiting and response time of the $j$th customer, respectively. Clearly the waiting time is the time a customer spends in the queue waiting for service, and the response time is the time a customer spends in the system, that is
$$T_j = W_j + S_j,$$
where $S_j$ denotes its service time. Of course, $W_j$ and $T_j$ are random variables and their means, denoted by $\overline{W}_j$ and $\overline{T}_j$, are appropriate for measuring the efficiency of the system. In general it is not easy to obtain their distribution functions.

Other characteristics of the system are the queue length and the number of customers in the system. Let the random variables $Q(t)$, $N(t)$ denote the number of customers in the queue and in the system at time $t$, respectively. Clearly, in an $m$-server system we have
$$Q(t) = \max\{0, N(t) - m\}.$$
The primary aim is to get their distributions, but this is not always possible; many times we have only their mean values or their generating functions.

1.2 Kendall's Notation


Before starting the investigations of elementary queueing systems let us introduce a notation originated by Kendall to describe a queueing system.
Let us denote a system by
A/B/m/K/n/D,
where

A: distribution function of the interarrival times,

B : distribution function of the service times,

m: number of servers,

K : capacity of the system, the maximum number of customers in the system including
the one being serviced,

n: population size, number of sources of customers,

D: service discipline.

Exponentially distributed random variables are denoted by M, meaning Markovian or memoryless. Furthermore, if the population size and the capacity are infinite and the service discipline is FIFO, then they are omitted.

Hence M/M/1 denotes a system with Poisson arrivals, exponentially distributed service times and a single server. M/G/m denotes an m-server system with Poisson arrivals and generally distributed service times. M/M/r/K/n stands for a system where the customers arrive from a finite source with n elements where they stay for an exponentially distributed time, the service times are exponentially distributed, the service is carried out in order of the requests' arrival by r servers, and the system capacity is K.

David G. Kendall, 1918-2007

1.3 Basic Relations for Birth-Death Processes


Since birth-death processes play a very important role in modeling elementary queueing systems, let us consider some useful relationships for them. Clearly, arrivals mean births and service completions mean deaths.

As we have seen earlier, the steady-state distribution of a birth-death process can be obtained in a very nice closed form, that is
$$P_i = \frac{\lambda_0\cdots\lambda_{i-1}}{\mu_1\cdots\mu_i}\,P_0,\quad i = 1, 2, \ldots, \qquad P_0^{-1} = 1 + \sum_{i=1}^{\infty}\frac{\lambda_0\cdots\lambda_{i-1}}{\mu_1\cdots\mu_i}. \tag{1.1}$$
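To make formula (1.1) concrete, the short sketch below (written in Python purely for illustration; the rates, the truncation level and the function name are assumptions of the sketch, not taken from the text) computes the steady-state probabilities of a birth-death process numerically on a truncated state space.

```python
import numpy as np

def birth_death_steady_state(lam, mu, K):
    """Steady-state distribution of a birth-death process truncated at state K.

    lam[i] is the birth rate in state i (i = 0..K-1) and mu[i] is the death
    rate in state i (i = 1..K).  Implements formula (1.1):
    P_i = (lam_0 * ... * lam_{i-1}) / (mu_1 * ... * mu_i) * P_0.
    """
    ratios = np.ones(K + 1)
    for i in range(1, K + 1):
        ratios[i] = ratios[i - 1] * lam[i - 1] / mu[i]
    return ratios / ratios.sum()          # normalization fixes P_0

# Example: constant rates lambda = 0.5, mu = 1.0 (an M/M/1-type chain), K = 200
lam = [0.5] * 200
mu = [0.0] + [1.0] * 200                  # mu[0] is unused
P = birth_death_steady_state(lam, mu, 200)
print(P[:4])                              # close to (1 - 0.5) * 0.5**k
```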

Let us consider the distributions at the moments of arrivals, departures, respectively,
because we shall use them later on.

Let Na , Nd denote the state of the process at the instant of births, deaths, respectively,
and let Πk = P (Na = k), Dk = P (Nd = k), k = 0, 1, 2, . . . stand for their distributions.

By applying Bayes' theorem it is easy to see that
$$\Pi_k = \lim_{h\to 0}\frac{(\lambda_k h + o(h))P_k}{\sum_{j=0}^{\infty}(\lambda_j h + o(h))P_j} = \frac{\lambda_k P_k}{\sum_{j=0}^{\infty}\lambda_j P_j}. \tag{1.2}$$
Similarly
$$D_k = \lim_{h\to 0}\frac{(\mu_{k+1} h + o(h))P_{k+1}}{\sum_{j=1}^{\infty}(\mu_j h + o(h))P_j} = \frac{\mu_{k+1} P_{k+1}}{\sum_{j=1}^{\infty}\mu_j P_j}. \tag{1.3}$$
Since $P_{k+1} = \dfrac{\lambda_k}{\mu_{k+1}}P_k$, $k = 0, 1, \ldots$, thus
$$D_k = \frac{\lambda_k P_k}{\sum_{i=0}^{\infty}\lambda_i P_i} = \Pi_k, \quad k = 0, 1, \ldots. \tag{1.4}$$
In words, the above relation states that the steady-state distributions at the moments of births and deaths are the same. It should be underlined that this does not mean that it is equal to the steady-state distribution at a random point, as we will see later on.

A further essential observation is that in steady state the mean birth rate is equal to the mean death rate. This can be seen as follows:
$$\overline{\lambda} = \sum_{i=0}^{\infty}\lambda_i P_i = \sum_{i=0}^{\infty}\mu_{i+1}P_{i+1} = \sum_{k=1}^{\infty}\mu_k P_k = \overline{\mu}. \tag{1.5}$$

1.4 Optimal Design of Queueing Systems
The ultimate goal of modeling is to make an optimal decision on a given problem. Queueing theory may help to do that: after obtaining the corresponding formulas one can make the decision. Like the descriptive models in classical queueing theory, optimal design models may be classified according to such parameters as the arrival rate(s), the service rate(s), the number of servers, the interarrival time and service time distributions, and the queue discipline(s). In addition, the queueing system under study may be a network with several facilities and/or classes of customers, in which case the nature of the flows of the classes among the various facilities must also be specified. What distinguishes an optimal design model from a traditional descriptive model is the fact that some of the parameters are subject to decision and that this decision is made with explicit attention to economic considerations, with the preferences of the decision maker(s) as a guiding principle. The basic distinctive components of a design model are thus:

• the decision variables

• benefits/rewards and costs

• the objective function

Decision variables may include, for example, the arrival rates, the service rates, the number of servers, and the queue disciplines at the various service facilities. Typical benefits and costs include rewards to the customers from being served, waiting costs incurred by the customers while waiting for service, and costs to the facilities for providing the service. These benefits and costs may be brought together in an objective function, which quantifies the implicit trade-offs. For example, increasing the service rate will result in less time spent by the customers waiting (and thus a lower waiting cost), but a higher service cost. In each case we deal with a linear cost/reward structure, in which the objective is to minimize the expected total cost per unit time in steady state. The objective function is calculated and illustrated without further details. In a design problem, the values of the decision variables, once chosen, cannot vary with time nor in response to changes in the state of the system (e.g., the number of customers present). The decision is made with respect to only one variable.
Let us introduce the following costs and benefits/rewards

• CS - cost of service per server per unit time

• CWS - cost of waiting in the system per customer per unit time

• CI - cost of idleness per server per unit time

• CSR - cost of service rate per server per unit time

• CLC - cost of loss per customer per unit time

• R - reward per entering customer per unit time

Our aim is to minimize the following expected total cost per unit time, with objective function

E(Total cost) = (number of servers) ∗ CS
  + E(number of customers in the system) ∗ CW
  + E(number of idle servers) ∗ CI + (number of servers) ∗ CSR
  + E(arrival rate) ∗ P(loss/blocking) ∗ CLC
  − E(arrival rate) ∗ (1 − P(loss/blocking)) ∗ R.

It is quite a general cost function and it is calculated numerically by supplying the respective costs. Depending on the decision parameter, this function is illustrated and the user can determine the optimal value of the parameter and the expected total cost.
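As a hedged illustration of how such an objective function can be evaluated and optimized numerically, the Python sketch below treats the service rate µ of a single M/M/1 server as the only decision variable and minimizes a two-term linear cost (service-rate cost plus waiting cost) over a grid. The cost values, the grid and the function name are invented for the example and are not taken from the text.

```python
import numpy as np

def expected_total_cost(mu, lam, c_sr, c_w):
    """Linear cost model for an M/M/1 queue with decision variable mu.

    c_sr: cost of service rate per unit time; c_w: waiting cost per customer
    per unit time.  E(N) = rho / (1 - rho) is the mean number in the system.
    """
    rho = lam / mu
    if rho >= 1.0:
        return np.inf                     # unstable system, infinite cost
    return c_sr * mu + c_w * rho / (1.0 - rho)

lam, c_sr, c_w = 2.0, 1.0, 5.0            # illustrative values only
grid = np.linspace(2.05, 10.0, 400)
costs = [expected_total_cost(mu, lam, c_sr, c_w) for mu in grid]
best = grid[int(np.argmin(costs))]
print(f"optimal service rate ~ {best:.2f}, minimal cost ~ {min(costs):.2f}")
# For this particular cost the optimum is mu* = lam + sqrt(c_w*lam/c_sr) ~ 5.16
```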

There are several books on this type of decision making using queueing formulas. In the past years I have found the following sources very useful: Bhat [9], Gross et al. [40], Harchol-Balter [46], Hillier and Lieberman [?], Kobayashi and Mark [63], Kulkarni [66], Stidham [96], White [124], in which not only the topic is treated but different software tools support the decision, for example MATLAB, Mathematica, Excel.

1.5 Queueing Software and Collection of Problems with
Solutions
To solve practical problems the first step is to identify the appropriate queueing system and then to calculate the performance measures. Of course the level of modeling heavily depends on the assumptions. It is recommended to start with a simple system and then, if the results do not fit the problem, continue with a more complicated one. Various software packages help the interested reader at different levels. The following link is worth a visit:

http://web2.uwindsor.ca/math/hlynka/qsoft.html

I highly recommend an Excel-based software package called QTSPlus to calculate the main performance measures of basic models. It is associated with the book of Gross, Shortle, Thompson and Harris [40] and can be downloaded here:

http://mason.gmu.edu/~jshortle/fqt5th.html,
http://mason.gmu.edu/~jshortle/QtsPlus-4-0.zip
ftp://ftp.wiley.com/public/sci_tech_med/queueing_theory/

For practice-oriented teaching courses we have also developed a software package called QSA (Queueing Systems Assistance) to calculate and visualize the performance measures together with optimal decisions, not only for elementary but for more advanced queueing systems as well. It is available at

https://qsa.inf.unideb.hu

The main advantages of QSA over QTSPlus are the following:

• It runs on desktops, laptops, smartphones (due to Java)

• It calculates not only the mean but the variance of the corresponding random variables

• It gives the distribution function of the waiting/response times (if possible)

• It visualizes all the main performance measures

• It graphically supports the decision making

Besides the package I have established a Collection of Problems with Solutions teaching material, in which the problems are deliberately listed in random order, imitating practical needs. The material can be downloaded here:

https://irh.inf.unideb.hu/~jsztrik/education/16/Queueing_Problems_
Solutions_2021_Sztrik.pdf

QSA Welcome page

M/M/1 System

If the already existing systems are not suitable for your problem, then you have to create your own queueing system; this is where the creation starts, and the primary aim of the present book is to help in this process.

For further reading the interested reader is referred to the following books: Allen [3], Bose [14], Cooper [23], Daigle [25], Gnedenko and Kovalenko [39], Gross, Shortle, Thompson and Harris [40], Harchol-Balter [46], Jain [54], Kleinrock [60], Kobayashi [62, 63], Kulkarni [66], Medhi [73], Nelson [75], Stewart [95], Sztrik [102], Takagi [109-111], Tijms [115], Trivedi [118].

The present book has used some parts of Adan and Reising [1], Allen [3], Daigle [25], Gross and Harris [40], Harchol-Balter [46], Kleinrock [60], Kobayashi [63], Sztrik [102], Tijms [115], Trivedi [118].

Chapter 2
Infinite-Source Queueing Systems
Queueing systems can be classified according to the cardinality of their sources, namely finite-source and infinite-source models. In finite-source models the arrival intensity of the requests depends on the state of the system, which makes the calculations more complicated. In the case of infinite-source models, the arrivals are independent of the number of customers in the system, resulting in a mathematically tractable model. In queueing networks each node is a queueing system, and the nodes can be connected to each other in various ways. The main aim of this chapter is to understand how these nodes operate.

2.1 The M/M/1 Queue


An M/M/1 queueing system is the simplest non-trivial queue where the requests arrive according to a Poisson process with rate λ, that is the interarrival times are independent, exponentially distributed random variables with parameter λ. The service times are also assumed to be independent and exponentially distributed with parameter µ. Furthermore, all the involved random variables are supposed to be independent of each other.

Let N(t) denote the number of customers in the system at time t, and we shall say that the system is in state k if N(t) = k. Since all the involved random variables are exponentially distributed and consequently have the memoryless property, N(t) is a continuous-time Markov chain with state space 0, 1, · · ·.

In the next step let us investigate the transition probabilities during time h. It is easy to see that
$$P_{k,k+1}(h) = (\lambda h + o(h))\,\bigl(1 - (\mu h + o(h))\bigr) + \sum_{i=2}^{\infty}(\lambda h + o(h))^{i}\,(\mu h + o(h))^{i-1}, \quad k = 0, 1, 2, \ldots$$
By using the independence assumption, the first term is the probability that during h one customer has arrived and no service has been finished. The summation term is the probability that during h at least 2 customers have arrived and at the same time at least 1 has been served. It is not difficult to verify that the second term is o(h) due to the properties of the Poisson process. Thus
$$P_{k,k+1}(h) = \lambda h + o(h).$$
Similarly, the transition probability from state k into state k − 1 during h can be written as
$$P_{k,k-1}(h) = (\mu h + o(h))\,\bigl(1 - (\lambda h + o(h))\bigr) + \sum_{i=2}^{\infty}(\lambda h + o(h))^{i-1}\,(\mu h + o(h))^{i} = \mu h + o(h).$$

Furthermore, for non-neighboring states we have
$$P_{k,j}(h) = o(h), \quad |k - j| \ge 2.$$
In summary, the introduced random process N(t) is a birth-death process with rates
$$\lambda_k = \lambda, \quad k = 0, 1, 2, \ldots, \qquad \mu_k = \mu, \quad k = 1, 2, 3, \ldots$$
That is, all the birth rates are λ and all the death rates are µ. As noted, the system capacity is infinite and the service discipline is FIFO.

To get the steady-state distribution let us substitute these rates into formula (1.1) obtained for general birth-death processes. Thus we obtain
$$P_k = \prod_{i=1}^{k}\frac{\lambda}{\mu}\,P_0 = \left(\frac{\lambda}{\mu}\right)^{k} P_0, \quad k \ge 0.$$
By using the normalization condition we can see that this geometric sum is convergent iff $\lambda/\mu < 1$, and
$$P_0 = \left(1 + \sum_{k=1}^{\infty}\left(\frac{\lambda}{\mu}\right)^{k}\right)^{-1} = 1 - \frac{\lambda}{\mu} = 1 - \varrho,$$
where $\varrho = \frac{\lambda}{\mu}$. Thus
$$P_k = (1 - \varrho)\varrho^{k}, \quad k = 0, 1, 2, \ldots,$$
which is a modified geometric distribution with success parameter $1 - \varrho$.

In the following we calculate the main performance measures of the system.

• Mean number of customers in the system
$$\overline{N} = \sum_{k=0}^{\infty}k P_k = (1-\varrho)\varrho\sum_{k=1}^{\infty}k\varrho^{k-1} = (1-\varrho)\varrho\sum_{k=1}^{\infty}\frac{d\varrho^{k}}{d\varrho} = (1-\varrho)\varrho\,\frac{d}{d\varrho}\left(\frac{1}{1-\varrho}\right) = \frac{\varrho}{1-\varrho}.$$

Variance
$$\begin{aligned}
Var(N) &= \sum_{k=0}^{\infty}(k - \overline{N})^{2} P_k = \sum_{k=0}^{\infty}\left(k - \frac{\varrho}{1-\varrho}\right)^{2} P_k \\
&= \sum_{k=0}^{\infty}k^{2} P_k + \left(\frac{\varrho}{1-\varrho}\right)^{2} - 2\,\frac{\varrho}{1-\varrho}\sum_{k=0}^{\infty}k P_k \\
&= \sum_{k=0}^{\infty}k(k-1)P_k + \frac{\varrho}{1-\varrho} + \frac{\varrho^{2}}{(1-\varrho)^{2}} - 2\,\frac{\varrho^{2}}{(1-\varrho)^{2}} \\
&= (1-\varrho)\varrho^{2}\frac{d^{2}}{d\varrho^{2}}\sum_{k=0}^{\infty}\varrho^{k} + \frac{\varrho}{1-\varrho} - \frac{\varrho^{2}}{(1-\varrho)^{2}} \\
&= \frac{2\varrho^{2}}{(1-\varrho)^{2}} + \frac{\varrho}{1-\varrho} - \frac{\varrho^{2}}{(1-\varrho)^{2}} = \frac{\varrho}{(1-\varrho)^{2}}.
\end{aligned}$$

• Mean number of waiting customers, mean queue length
$$\overline{Q} = \sum_{k=1}^{\infty}(k-1)P_k = \sum_{k=1}^{\infty}k P_k - \sum_{k=1}^{\infty}P_k = \overline{N} - (1 - P_0) = \overline{N} - \varrho = \frac{\varrho^{2}}{1-\varrho}.$$
Variance
$$Var(Q) = \sum_{k=1}^{\infty}(k-1)^{2}P_k - \overline{Q}^{\,2} = \frac{\varrho^{2}(1+\varrho-\varrho^{2})}{(1-\varrho)^{2}}.$$

• Server utilization
$$U_s = 1 - P_0 = \frac{\lambda}{\mu} = \varrho.$$
By using Theorem 1 it is easy to see that
$$P_0 = \frac{\frac{1}{\lambda}}{\frac{1}{\lambda} + E\delta},$$
where $E\delta$ is the mean busy period length of the server and $\frac{1}{\lambda}$ is the mean idle time of the server, since the server is idle until a new request arrives, which happens after an exponentially distributed time with parameter λ. Hence
$$1 - \varrho = \frac{\frac{1}{\lambda}}{\frac{1}{\lambda} + E\delta},$$
and thus
$$E\delta = \frac{1}{\lambda}\,\frac{\varrho}{1-\varrho} = \frac{1}{\lambda}\,\overline{N} = \frac{1}{\mu-\lambda}.$$
In the next few lines we show how this performance measure can be obtained in a different way. To do so we need the following notation. Let $E(\nu_A)$, $E(\nu_D)$ denote the mean number of customers that have arrived and departed during the mean busy period of the server, respectively. Furthermore, let $E(\nu_S)$ denote the mean number of customers that have arrived during a mean service time. Clearly
$$E(\nu_D) = E(\delta)\mu, \qquad E(\nu_S) = \frac{\lambda}{\mu}, \qquad E(\nu_A) = E(\delta)\lambda, \qquad E(\nu_A) + 1 = E(\nu_D),$$
and thus after substitution we get
$$E(\delta) = \frac{1}{\mu-\lambda}.$$
Consequently
$$E(\nu_D) = E(\delta)\mu = \frac{1}{1-\varrho}, \qquad E(\nu_A) = E(\nu_S)E(\nu_D) = \frac{\lambda}{\mu}\,\frac{1}{1-\varrho} = \frac{\varrho}{1-\varrho}, \qquad E(\nu_A) = E(\delta)\lambda = \frac{\varrho}{1-\varrho}.$$

• Distribution of the response time of a customer

Before investigating the response time we show that in any queueing system where the arrivals are Poisson distributed
$$P_k(t) = \Pi_k(t),$$
where $P_k(t)$ denotes the probability that at time t the system is in state k, and $\Pi_k(t)$ denotes the probability that an arriving customer finds the system in state k at time t. Let
$$A(t, t+\Delta t)$$
denote the event that an arrival occurs in the interval $(t, t+\Delta t)$. Then
$$\Pi_k(t) := \lim_{\Delta t\to 0} P\left(N(t) = k \mid A(t, t+\Delta t)\right).$$
Applying the definition of conditional probability we have
$$\Pi_k(t) = \lim_{\Delta t\to 0}\frac{P\left(N(t) = k,\ A(t, t+\Delta t)\right)}{P\left(A(t, t+\Delta t)\right)} = \lim_{\Delta t\to 0}\frac{P\left(A(t, t+\Delta t)\mid N(t) = k\right)P\left(N(t) = k\right)}{P\left(A(t, t+\Delta t)\right)}.$$
However, in the case of a Poisson process the event $A(t, t+\Delta t)$ does not depend on the number of customers in the system at time t and is independent of the time t itself, thus we obtain
$$P\left(A(t, t+\Delta t)\mid N(t) = k\right) = P\left(A(t, t+\Delta t)\right),$$
hence for birth-death processes we have
$$\Pi_k(t) = P\left(N(t) = k\right).$$
That is, the probability that an arriving customer finds the system in state k is equal to the probability that the system is in state k.

In the stationary case, applying formula (1.2) with the substitutions $\lambda_i = \lambda$, $i = 0, 1, \ldots$, we have the same result.

If a customer arrives and finds the server idle, which happens with probability $P_0$, its waiting time is 0. Assume that upon the arrival of a tagged customer the system is in state n. This means that the request has to wait for the residual service time of the customer being served plus the service times of the customers in the queue. As we assumed, the service is carried out in the order of arrival of the requests. Since the service times are exponentially distributed, the remaining service time has the same distribution as the original service time. Hence the waiting time of the tagged customer is Erlang distributed with parameters (n, µ) and its response time is Erlang distributed with parameters (n + 1, µ). Just to remind you, the density function of an Erlang distribution with parameters (n, µ) is
$$f_n(x) = \frac{\mu(\mu x)^{n-1}}{(n-1)!}\,e^{-\mu x}, \quad x \ge 0.$$
Hence, applying the theorem of total probability, for the density function of the response time we have
$$f_T(x) = \sum_{n=0}^{\infty}(1-\varrho)\varrho^{n}\,\frac{\mu(\mu x)^{n}}{n!}\,e^{-\mu x} = \mu(1-\varrho)e^{-\mu x}\sum_{n=0}^{\infty}\frac{(\varrho\mu x)^{n}}{n!} = \mu(1-\varrho)e^{-\mu(1-\varrho)x}.$$
Its distribution function is
$$F_T(x) = 1 - e^{-\mu(1-\varrho)x}.$$
That is, the response time is exponentially distributed with parameter
$$\mu(1-\varrho) = \mu - \lambda.$$
Hence the expectation and variance of the response time are
$$\overline{T} = \frac{1}{\mu(1-\varrho)}, \qquad Var(T) = \left(\frac{1}{\mu(1-\varrho)}\right)^{2}.$$

Furthermore
$$\overline{T} = \frac{1}{\mu(1-\varrho)} = \frac{1}{\mu-\lambda} = E\delta.$$
• Distribution of the waiting time

Let $f_W(x)$ denote the density function of the waiting time. Similarly to the above considerations, for $x > 0$ we have
$$f_W(x) = \sum_{n=1}^{\infty}\frac{\mu(\mu x)^{n-1}}{(n-1)!}\,e^{-\mu x}\,\varrho^{n}(1-\varrho) = (1-\varrho)\varrho\mu\,e^{-\mu x}\sum_{k=0}^{\infty}\frac{(\mu x\varrho)^{k}}{k!} = (1-\varrho)\varrho\mu\,e^{-\mu(1-\varrho)x}.$$
Thus
$$f_W(x) = \begin{cases}1-\varrho, & \text{if } x = 0,\\ \varrho(1-\varrho)\mu\,e^{-\mu(1-\varrho)x}, & \text{if } x > 0.\end{cases}$$
Hence
$$F_W(x) = 1 - \varrho + \varrho\left(1 - e^{-\mu(1-\varrho)x}\right) = 1 - \varrho\,e^{-\mu(1-\varrho)x}.$$
The mean waiting time is
$$\overline{W} = \int_0^{\infty}x f_W(x)\,dx = \frac{\varrho}{\mu(1-\varrho)} = \varrho\,E\delta = \frac{1}{\mu}\,\overline{N}.$$
Since $T = W + S$ and, in addition, $W$ and $S$ are independent, we get
$$Var(T) = \frac{1}{(\mu(1-\varrho))^{2}} = Var(W) + \frac{1}{\mu^{2}},$$
thus
$$Var(W) = \frac{1}{(\mu(1-\varrho))^{2}} - \frac{1}{\mu^{2}} = \frac{2\varrho-\varrho^{2}}{(\mu(1-\varrho))^{2}},$$
that is exactly $E(W^{2}) - (EW)^{2}$.

Notice that
$$\lambda\overline{T} = \lambda\,\frac{1}{\mu(1-\varrho)} = \frac{\varrho}{1-\varrho} = \overline{N}. \tag{2.1}$$
Furthermore
$$\lambda\overline{W} = \lambda\,\frac{\varrho}{\mu(1-\varrho)} = \frac{\varrho^{2}}{1-\varrho} = \overline{Q}. \tag{2.2}$$
Relations (2.1), (2.2) are called the Little formulas (Little theorem, or Little law), which remain valid under much more general conditions.
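The closed-form measures above and the Little relations can be cross-checked with a few lines of Python; the following sketch is only an illustration (the parameter values and the sample size are arbitrary) and uses the Lindley recursion for the FIFO waiting times as an independent simulation check.

```python
import random

lam, mu = 3.0, 5.0
rho = lam / mu

# Closed-form M/M/1 measures derived above
N_mean = rho / (1 - rho)                  # mean number in system
Q_mean = rho ** 2 / (1 - rho)             # mean queue length
T_mean = 1 / (mu - lam)                   # mean response time
W_mean = rho / (mu * (1 - rho))           # mean waiting time
print("Little check:", lam * T_mean, "=", N_mean, "and", lam * W_mean, "=", Q_mean)

# Lindley recursion W_{n+1} = max(0, W_n + S_n - A_{n+1}) for FIFO waiting times
random.seed(1)
w, total, n = 0.0, 0.0, 200_000
for _ in range(n):
    s = random.expovariate(mu)            # service time of the current customer
    a = random.expovariate(lam)           # interarrival time to the next customer
    w = max(0.0, w + s - a)
    total += w
print("simulated mean waiting time:", total / n, "theory:", W_mean)
```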

Figure 2.1: John Little, 1928-

It should be noted that in many applications we are dealing with a required service, denoted by $S_R$, which does not mean service time. In these cases we involve some kind of capacity (speed, bandwidth) denoted by C. In that case $S_R = CS$ and $E(S) = E(S_R)/C$. It is easy to see that if the required service is exponentially distributed with parameter γ, then the service time is also exponentially distributed with parameter γC. That is
$$P(S < t) = P(S_R/C < t) = P(S_R < Ct) = 1 - e^{-\gamma C t}.$$
For example, router A sends 8 packets per second, on average, to router B. The mean size of a packet is 400 bytes (exponentially distributed). The line speed is 64 kbit/s. The utilization of the line (server) is ρ = 8/s × 400 × 8 bit / (64 × 1000) bit/s = 0.4. Or ρ = λ/µ, where λ = 8 packets/s and µ = 64000 bit/s / (400 × 8 bit/packet) = 20 packets/s. Thus λ/µ = 8/20 = 0.4.
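The same arithmetic in a tiny Python snippet (purely a restatement of the numbers above, added for convenience):

```python
packets_per_s = 8                 # arrival rate lambda
packet_bits = 400 * 8             # mean packet size in bits
line_bps = 64_000                 # line speed in bit/s

mu = line_bps / packet_bits       # service rate in packets/s -> 20.0
rho = packets_per_s / mu          # utilization -> 0.4
print(mu, rho, 1 / (mu - packets_per_s))   # 20.0, 0.4, mean response time 1/12 s
```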

Example 1 Economy of Scale

Consider a company that has K terminal rooms. Each terminal room is identical, containing a set of terminals/workstations connected by a concentrator to a network. Each set of terminals generates messages to be sent over the concentrator according to a Poisson process with rate λ. Each message requires an exponentially distributed amount of time to be sent by the concentrator, with rate µ. The company is considering replacing the set of K rooms and K concentrators with one large room and a concentrator that is K times faster.
Comparing the two options:

• K independent rooms
The K rooms can be modeled as K independent M/M/1 queues, each with arrival rate λ and service rate µ. The average delay at any room is E(T) = 1/(µ − λ).

• Single large room
It can be modeled as a single M/M/1 queue with arrival rate Kλ and service rate Kµ. The average delay at the large room is E(T) = 1/(Kµ − Kλ). That is, the combined system is K times faster.

Example 2 Statistical Multiplexing (SM)
There are m independent Poisson data streams, each supplying packets at rate λ/m, arriving at a common concentrator where they are mixed into a single data stream of combined rate λ.
Packet lengths are independent and exponentially distributed with mean transmission time 1/µ.
The concentrator can be viewed as an M/M/1 system which statistically multiplexes the independent data streams into a single data stream.

Example 3 Time/Frequency Division Multiplexing (TDM/FDM)

In TDM and FDM, the transmission capacity is divided equally over the m data streams so that each data stream effectively sees a dedicated line with service rate µ/m.
TDM and FDM can therefore be modeled as m M/M/1 systems operating in parallel. Each M/M/1 queue observes a packet arrival rate of λ/m and a service rate of µ/m.
It is easy to see that
$$E(T_{TDM}) = m\,E(T_{SM}).$$
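A minimal numeric sketch of this factor-m difference (the rates and the number of streams are invented for illustration; the same computation also reproduces the K-fold speedup of the economy-of-scale example):

```python
def mm1_mean_response(lam, mu):
    """Mean response time E(T) = 1/(mu - lam) of a stable M/M/1 queue."""
    assert lam < mu, "queue must be stable"
    return 1.0 / (mu - lam)

lam, mu, m = 6.0, 10.0, 4          # total arrival rate, total service rate, streams

t_sm = mm1_mean_response(lam, mu)             # one shared fast line (SM)
t_tdm = mm1_mean_response(lam / m, mu / m)    # m dedicated slow lines (TDM/FDM)
print(t_sm, t_tdm, t_tdm / t_sm)              # 0.25, 1.0, ratio = m = 4
```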


Question: Why would one ever use FDM?

Answer: Frequency-division multiplexing guarantees a specific service rate to each stream. Statistical multiplexing is unable to provide any such guarantee. More importantly, suppose the original m streams were very regular (i.e., the interarrival times were less variable than Exponential, say closer to Deterministic than Exponential). By merging the streams, we introduce lots of variability into the arrival stream. This leads to problems if an application requires a low variability in delay (e.g., voice or video).

Analysis of the busy period of the server


The system is said to be idle at time t if N(t) = 0 and busy at time t if N(t) > 0. A busy period begins at any instant in time at which the value of N(t) increases from zero to one and ends at the first instant in time, following entry into a busy period, at which the value of N(t) again reaches zero. An idle period begins when a given busy period ends and ends when the next busy period begins. From the perspective of the server, the M/M/1 queueing system alternates between two distinct types of periods: idle periods and busy periods. These types are descriptive; the busy periods are periods during which the server is busy servicing customers, and the idle periods are those during which the server is not servicing customers. For the ordinary M/M/1 queueing system, the server is never idle when there is at least one customer in the system.
Because of the memoryless property of both the exponential distribution and the Poisson process, the length of an idle period is the same as the length of time between two successive arrivals from a Poisson process with parameter λ. The length of a busy period, on the other hand, depends upon both the arrival and service processes. The busy period begins upon the arrival of its first customer. During the first service other customers may arrive, and the service and arrival processes continue until there are no longer any remaining customers; at that point in time the system returns to an idle period. Thus, the length of a busy period is the total amount of time required to serve all of the customers of all of the generations of the first customer of the busy period. Consequently, we can think of the busy period as being generated by its first customer. Alternatively, we can view the server as having to work until all of the first customer's descendants die out. The distribution of the length of a busy period is of interest in its own right, but an understanding of the behavior of busy period processes is also extremely helpful in understanding waiting time and queue length behavior in both ordinary and priority queueing systems.
Before starting the investigations we need some additional knowledge about the properties of the exponential distribution. Let us consider the following problem.

Let $X \sim Exp(\lambda)$ and $Y \sim Exp(\mu)$ be independent, and let $Z = \min(X, Y)$. Find
$$P(Z < t \mid X < Y), \qquad P(Z < t \mid Y < X).$$
Solution:
$$\begin{aligned}
P(Z < t \mid X < Y) &= \frac{P(Z < t,\, X < Y)}{P(X < Y)} = \frac{\int_0^{\infty} P(Z < t,\, X < Y \mid Y = y)\,\mu e^{-\mu y}\,dy}{P(X < Y)} \\
&= \frac{\int_0^{t} P(X < y)\,\mu e^{-\mu y}\,dy + \int_t^{\infty} P(X < t)\,\mu e^{-\mu y}\,dy}{P(X < Y)} \\
&= \frac{\lambda+\mu}{\lambda}\left(\int_0^{t}(1 - e^{-\lambda y})\,\mu e^{-\mu y}\,dy + \int_t^{\infty}(1 - e^{-\lambda t})\,\mu e^{-\mu y}\,dy\right) \\
&= \frac{\lambda+\mu}{\lambda}\left((1 - e^{-\mu t}) - \frac{\mu}{\lambda+\mu}\left(1 - e^{-(\lambda+\mu)t}\right) + e^{-\mu t}\left(1 - e^{-\lambda t}\right)\right) \\
&= \frac{\lambda+\mu}{\lambda}\left(\frac{\lambda}{\lambda+\mu} - \frac{\lambda}{\lambda+\mu}\,e^{-(\lambda+\mu)t}\right) = 1 - e^{-(\lambda+\mu)t}.
\end{aligned}$$
$P(Z < t \mid Y < X) = 1 - e^{-(\lambda+\mu)t}$ can be proved exactly the same way.

Another proof:
$$\begin{aligned}
P(Z < t \mid X < Y) &= 1 - P(Z > t \mid X < Y) = 1 - \frac{P(X > t,\, X < Y)}{P(X < Y)} \\
&= 1 - \frac{\int_t^{\infty} P(Y > x)\,\lambda e^{-\lambda x}\,dx}{P(X < Y)} = 1 - \frac{\int_t^{\infty} e^{-\mu x}\,\lambda e^{-\lambda x}\,dx}{P(X < Y)} \\
&= 1 - \frac{\lambda}{\lambda+\mu}\,e^{-(\lambda+\mu)t}\cdot\frac{\lambda+\mu}{\lambda} = 1 - e^{-(\lambda+\mu)t}.
\end{aligned}$$

An alternative and instructive way to view the busy period process is to separate the busy period into two parts: the part occurring before the first customer arrival after the busy period has started, and the part occurring after the first customer arrival after the busy period has started, if such an arrival occurs. In the latter case we have two customers in the system, and it is easy to see that the length of the busy period does not depend on the order of service. The server will be idle once all the customers have left the system, that is we have two busy periods, initiated by the generating customer and by the first customer arriving after the busy period started. These busy periods are independent of each other because the arrival and service times are independent of each other.

Due to the memoryless property of the exponential distribution, and taking into account the statements concerning the minimum of independent exponentially distributed random variables, it is not so difficult to see that for the Laplace-transform of the busy period δ we have
$$L_\delta(t) = \frac{\mu}{\lambda+\mu}\cdot\frac{\lambda+\mu}{\lambda+\mu+t} + \frac{\lambda}{\lambda+\mu}\cdot\frac{\lambda+\mu}{\lambda+\mu+t}\,(L_\delta(t))^{2},$$
that is
$$L_\delta(t) = \frac{\mu}{\lambda+\mu+t} + \frac{\lambda}{\lambda+\mu+t}\,(L_\delta(t))^{2}
\quad\Rightarrow\quad \lambda(L_\delta(t))^{2} - (\lambda+\mu+t)L_\delta(t) + \mu = 0,$$
$$L_\delta(t) = \frac{\lambda+\mu+t \pm \sqrt{(\lambda+\mu+t)^{2} - 4\lambda\mu}}{2\lambda}.$$
Since $L_\delta(0) = 1$, the root with the minus sign is the right one, that is
$$L_\delta(t) = \frac{\lambda+\mu+t - \sqrt{(\lambda+\mu+t)^{2} - 4\lambda\mu}}{2\lambda} < 1.$$
We are interested in the mean and variance of the busy period, that is why we need
$$L_\delta'(t) = \frac{1}{2\lambda}\left(1 - \frac{1}{2}\left((\lambda+\mu+t)^{2} - 4\lambda\mu\right)^{-\frac{1}{2}}\cdot 2(\lambda+\mu+t)\right),$$
$$L_\delta''(t) = \frac{1}{2\lambda}\left((\lambda+\mu+t)^{2}\left((\lambda+\mu+t)^{2} - 4\lambda\mu\right)^{-\frac{3}{2}} - \left((\lambda+\mu+t)^{2} - 4\lambda\mu\right)^{-\frac{1}{2}}\right).$$
Hence
$$L_\delta'(0) = \frac{1}{2\lambda}\left(1 - \frac{1}{2(\mu-\lambda)}\,2(\lambda+\mu)\right) = \frac{1}{2\lambda}\left(1 - \frac{\lambda+\mu}{\mu-\lambda}\right) = -\frac{1}{\mu-\lambda},$$
$$E(\delta) = \frac{1}{\mu-\lambda}.$$
To get the variance we proceed with
$$L_\delta''(0) = \frac{1}{2\lambda}\left(\frac{(\lambda+\mu)^{2}}{(\mu-\lambda)^{3}} - \frac{1}{\mu-\lambda}\right) = \frac{1}{2\lambda}\,\frac{(\lambda+\mu)^{2} - (\mu-\lambda)^{2}}{(\mu-\lambda)^{3}} = \frac{2\mu\,2\lambda}{2\lambda(\mu-\lambda)^{3}} = \frac{2\mu}{(\mu-\lambda)^{3}},$$
$$Var(\delta) = \frac{2\mu}{(\mu-\lambda)^{3}} - \left(\frac{1}{\mu-\lambda}\right)^{2} = \frac{2\mu-\mu+\lambda}{(\mu-\lambda)^{3}} = \frac{\lambda+\mu}{(\mu-\lambda)^{3}} = \frac{1+\rho}{\mu^{2}(1-\rho)^{3}}.$$
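To sanity-check these two results, the sketch below (an optional illustration with arbitrary parameters) simulates busy periods directly from their definition: start with one customer and run the birth-death dynamics until the system empties again.

```python
import random, statistics

def busy_period_length(lam, mu, rng):
    """Simulate one M/M/1 busy period: start with 1 customer, stop when empty."""
    n, t = 1, 0.0
    while n > 0:
        t += rng.expovariate(lam + mu)           # time to the next event
        if rng.random() < lam / (lam + mu):      # the event is an arrival
            n += 1
        else:                                    # the event is a service completion
            n -= 1
    return t

lam, mu = 2.0, 5.0
rho = lam / mu
rng = random.Random(42)
samples = [busy_period_length(lam, mu, rng) for _ in range(100_000)]
print(statistics.mean(samples), 1 / (mu - lam))                          # ~ 1/3
print(statistics.variance(samples), (1 + rho) / (mu**2 * (1 - rho)**3))  # ~ 0.259
```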

Let us see another solution, treated in Adan and Reising [1].

Let the random variable $T_n$ be the time till the system is empty again if there are now n customers present in the system. Clearly, $T_1$ is the length of a busy period, since a busy period starts when the first customer after an idle period arrives and it ends when the system is empty again. The random variables $T_n$ satisfy the following recursion relation. Suppose there are $n\,(>0)$ customers in the system. Then the next event occurs after an exponential time with parameter λ + µ: with probability λ/(λ + µ) a new customer arrives, and with probability µ/(λ + µ) a service is completed and a customer leaves the system. Hence, for $n = 1, 2, \ldots$,
$$T_n = Z + \begin{cases} T_{n+1} & \text{with probability } \lambda/(\lambda+\mu),\\ T_{n-1} & \text{with probability } \mu/(\lambda+\mu),\end{cases} \tag{2.3}$$
where Z is an exponential random variable with parameter λ + µ. From this relation we get for the Laplace-transform $\widetilde{T}_n(s)$ of $T_n$ that
$$\widetilde{T}_n(s) = \frac{\lambda+\mu}{\lambda+\mu+s}\left(\frac{\lambda}{\lambda+\mu}\,\widetilde{T}_{n+1}(s) + \frac{\mu}{\lambda+\mu}\,\widetilde{T}_{n-1}(s)\right),$$
and thus, after rewriting,
$$(\lambda+\mu+s)\widetilde{T}_n(s) = \lambda\widetilde{T}_{n+1}(s) + \mu\widetilde{T}_{n-1}(s), \quad n = 1, 2, \ldots$$
For fixed s this equation is a second order difference equation. Its general solution is
$$\widetilde{T}_n(s) = c_1 x_1^{n}(s) + c_2 x_2^{n}(s), \quad n = 0, 1, 2, \ldots$$
where $x_1(s)$ and $x_2(s)$ are the roots of the quadratic equation
$$(\lambda+\mu+s)x = \lambda x^{2} + \mu,$$
satisfying $0 < x_1(s) \le 1 < x_2(s)$. Since $0 \le \widetilde{T}_n(s) \le 1$ it follows that $c_2 = 0$. The coefficient $c_1$ follows from the fact that $T_0 = 0$ and hence $\widetilde{T}_0(s) = 1$, yielding $c_1 = 1$. Hence we obtain
$$\widetilde{T}_n(s) = x_1^{n}(s),$$
and in particular, for the Laplace-transform $L_\delta(s)$ of the busy period δ, we find
$$L_\delta(s) = \widetilde{T}_1(s) = x_1(s) = \frac{1}{2\lambda}\left(\lambda+\mu+s - \sqrt{(\lambda+\mu+s)^{2} - 4\lambda\mu}\right).$$

By inverting this transform we get for the density $f_\delta(t)$ of δ,
$$f_\delta(t) = \frac{1}{t\sqrt{\rho}}\,e^{-(\lambda+\mu)t}\,I_1\!\left(2t\sqrt{\lambda\mu}\right), \quad t > 0,$$
where $I_1(\cdot)$ denotes the modified Bessel function of the first kind of order one, i.e.
$$I_1(x) = \sum_{k=0}^{\infty}\frac{(x/2)^{2k+1}}{k!\,(k+1)!}.$$

As we will see later on, for an M/G/1 system we have
$$L_\delta(t) = L_S(t + \lambda - \lambda L_\delta(t)), \tag{2.4}$$
where $L_S(t)$ denotes the Laplace-transform of the service time. For exponentially distributed service times we have
$$L_\delta(t) = \frac{\mu}{\mu + t + \lambda - \lambda L_\delta(t)},$$
from which we get the same equation as before, that is
$$\lambda(L_\delta(t))^{2} - (\lambda+\mu+t)L_\delta(t) + \mu = 0.$$

Distribution of the number of customers served during the busy period

Let $N_d(\delta)$ denote the number of departed/served customers during a busy period and let $G(z) = G_{N_d(\delta)}(z)$ be its generating function.
Then, similarly to the above considerations, it is not difficult to get
$$G(z) = \frac{\mu}{\lambda+\mu}\,z + \frac{\lambda}{\lambda+\mu}\,G^{2}(z) \quad\Rightarrow\quad \lambda G^{2}(z) - (\lambda+\mu)G(z) + z\mu = 0,$$
$$G(z) = \frac{\lambda+\mu \pm \sqrt{(\lambda+\mu)^{2} - 4\lambda\mu z}}{2\lambda}.$$
Taking the root with the minus sign (the other root exceeds 1) and dividing the numerator and denominator by µ we obtain
$$G(z) = \frac{1+\rho - \sqrt{(1+\rho)^{2} - 4\rho z}}{2\rho} = \frac{1+\rho}{2\rho}\left(1 - \sqrt{1 - \frac{4\rho z}{(1+\rho)^{2}}}\right),$$
$$G(1) = \frac{1+\rho - \sqrt{(1+\rho)^{2} - 4\rho}}{2\rho} = \frac{1+\rho-1+\rho}{2\rho} = 1.$$

The mean and variance can be obtained in the following way:
$$G(z) = \frac{1+\rho - \sqrt{(1+\rho)^{2} - 4\rho z}}{2\rho},$$
$$G'(z) = \frac{-\frac{1}{2}\left((1+\rho)^{2} - 4\rho z\right)^{-\frac{1}{2}}(-4\rho)}{2\rho} = \left((1+\rho)^{2} - 4\rho z\right)^{-\frac{1}{2}}, \qquad G'(1) = \frac{1}{1-\rho},$$
$$G''(z) = -\frac{1}{2}\left((1+\rho)^{2} - 4\rho z\right)^{-\frac{3}{2}}(-4\rho) = 2\rho\left((1+\rho)^{2} - 4\rho z\right)^{-\frac{3}{2}}, \qquad G''(1) = \frac{2\rho}{(1-\rho)^{3}}.$$
Thus
$$E(N_d(\delta)) = G'(1) = \frac{1}{1-\rho},$$
and the variance is
$$Var(N_d(\delta)) = \frac{2\rho}{(1-\rho)^{3}} + \frac{1}{1-\rho} - \frac{1}{(1-\rho)^{2}} = \frac{2\rho + (1-\rho)^{2} - (1-\rho)}{(1-\rho)^{3}} = \frac{\rho+\rho^{2}}{(1-\rho)^{3}}.$$
Furthermore, the distribution of $N_d(\delta)$ can be obtained, too:
$$G_{N_d(\delta)}(z) = \frac{1+\rho}{2\rho}\left(1 - \sqrt{1 - \frac{4\rho z}{(1+\rho)^{2}}}\right),$$
$$P(N_d(\delta) = n) = \frac{1}{n}\binom{2n-2}{n-1}\frac{\rho^{n-1}}{(1+\rho)^{2n-1}}, \quad n = 1, 2, \ldots$$
Thus
$$E(N_d(\delta)) = \sum_{n=1}^{\infty}\binom{2n-2}{n-1}\frac{\rho^{n-1}}{(1+\rho)^{2n-1}},$$
which is very difficult to calculate directly. This shows that the generating function approach is very useful, since we proved that
$$E(N_d(\delta)) = \frac{1}{1-\rho}.$$
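The series can of course be summed numerically; the sketch below (an illustrative aside with an arbitrary ρ) evaluates the explicit probabilities, checks that they sum to one, and confirms that the mean agrees with the generating-function result.

```python
from math import comb

lam, mu = 2.0, 5.0
rho = lam / mu

def p_served(n):
    """P(N_d(delta) = n) = C(2n-2, n-1) * rho^(n-1) / (n * (1+rho)^(2n-1))."""
    return comb(2 * n - 2, n - 1) * rho ** (n - 1) / (n * (1 + rho) ** (2 * n - 1))

total_prob = sum(p_served(n) for n in range(1, 400))
mean = sum(n * p_served(n) for n in range(1, 400))
print(total_prob, mean, 1 / (1 - rho))     # ~1.0, ~1.6667, 1.6667
```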
As it will see later on for an M/G/1 system we have

GNd (δ) (z) = zLS (λ − λGNd (δ) (z)).

For exponentially distributed service time we have


µ
G( z) = z
µ + λ − λG( z)

31
from we we get the same equation as before, that is

λ(GNd (δ) (z))2 − (λ + µ)GNd (δ) (z) + µz = 0.

Also
ρ(1 − ρ) + λ2 E(S 2 ) ρ(1 − ρ) + 2ρ2 ρ + ρ2
V ar(Nd (δ)) = = = .
(1 − ρ)3 (1 − ρ)3 (1 − ρ)3

M/M/1 system with non-preemptive LCFS service discipline

In the following we show how the results concerning the busy period analysis of the FCFS system can be used for the investigation of the waiting and response times of a system with non-preemptive LCFS (Last-Come-First-Served) service order. This means that the arriving customer does not interrupt the service of the current customer.

Since the service times are exponentially distributed, due to the memoryless property the waiting time of a customer who arrives to a busy server is distributed as a busy period length of the server. Thus for the Laplace-transform, mean and variance we have
$$L_{W_{LCFS}}(t) = (1-\rho) + \rho L_\delta(t),$$
$$L_{T_{LCFS}}(t) = \frac{\mu}{\mu+t}\left(1 - \rho + \rho L_\delta(t)\right),$$
$$E(W_{LCFS}^{2}) = \rho E(\delta^{2}) = \rho\,\frac{2}{\mu^{2}(1-\rho)^{3}},$$
$$E(W_{LCFS}) = \rho E(\delta) = \frac{\rho}{\mu(1-\rho)},$$
$$Var(W_{LCFS}) = \rho\,\frac{2}{\mu^{2}(1-\rho)^{3}} - \left(\frac{\rho}{\mu(1-\rho)}\right)^{2} = \frac{2\rho - \rho^{2}(1-\rho)}{\mu^{2}(1-\rho)^{3}} = \frac{2\rho-\rho^{2}+\rho^{3}}{\mu^{2}(1-\rho)^{3}},$$
$$Var(T_{LCFS}) = \frac{1}{\mu^{2}} + \frac{2\rho-\rho^{2}+\rho^{3}}{\mu^{2}(1-\rho)^{3}}.$$

As we will see later on, for an M/G/1 system the Laplace-transform, mean and variance can be obtained by the following formulas, hence we can check our results for exponentially distributed service times:
$$L_{W_{LCFS}}(t) = (1-\rho) + \rho\,\frac{1 - L_\delta(t)}{(t + \lambda - \lambda L_\delta(t))E(S)},$$
$$L_{T_{LCFS}}(t) = L_{W_{LCFS}}(t)\,L_S(t),$$
$$\begin{aligned}
Var(W_{LCFS}) &= \frac{\lambda E(S^{3})}{3(1-\rho)^{2}} + \frac{\lambda^{2}(1+\rho)(E(S^{2}))^{2}}{4(1-\rho)^{3}} \\
&= \frac{6\lambda}{3\mu^{3}(1-\rho)^{2}} + \frac{4\lambda^{2}(1+\rho)}{4\mu^{4}(1-\rho)^{3}} = \frac{2\rho}{\mu^{2}(1-\rho)^{2}} + \frac{\rho^{2}(1+\rho)}{\mu^{2}(1-\rho)^{3}} \\
&= \frac{2\rho(1-\rho) + (1+\rho)\rho^{2}}{\mu^{2}(1-\rho)^{3}} = \frac{2\rho - 2\rho^{2} + \rho^{2} + \rho^{3}}{\mu^{2}(1-\rho)^{3}} = \frac{2\rho-\rho^{2}+\rho^{3}}{\mu^{2}(1-\rho)^{3}}.
\end{aligned}$$

Furthermore, it should be noted that the mean waiting and response times of an M/M/1 queue under any well-known service discipline will be the same, due to the Little formula and the fact that the service rate is always µ, resulting in the same steady-state distribution of the number of customers in the system. However, the higher moments will be different depending on the service order. It can be proved that for M/G/1 systems we have
$$Var(W_{SIRO}) = \frac{2\lambda E(S^{3})}{3(1-\rho)(2-\rho)} + \frac{\lambda^{2}(2+\rho)(E(S^{2}))^{2}}{4(1-\rho)^{2}(2-\rho)},$$
$$Var(W_{LCFS}) = \frac{\lambda E(S^{3})}{3(1-\rho)^{2}} + \frac{\lambda^{2}(1+\rho)(E(S^{2}))^{2}}{4(1-\rho)^{3}},$$
$$Var(W_{FCFS}) = \frac{\lambda E(S^{3})}{3(1-\rho)} + \frac{\lambda^{2}(E(S^{2}))^{2}}{4(1-\rho)^{2}}.$$
Comparing the formulas term by term it is not difficult to prove that
$$Var(W_{FCFS}) < Var(W_{SIRO}) < Var(W_{LCFS}),$$
$$Var(T_{FCFS}) < Var(T_{SIRO}) < Var(T_{LCFS}).$$
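For exponentially distributed service times the ordering can be verified numerically in a few lines; the sketch below (illustrative parameter values only) evaluates the three M/G/1 expressions with the exponential moments E(S²) = 2/µ² and E(S³) = 6/µ³.

```python
lam, mu = 3.0, 5.0
rho = lam / mu
ES2, ES3 = 2 / mu**2, 6 / mu**3      # second and third moments of Exp(mu)

var_fcfs = lam * ES3 / (3 * (1 - rho)) + lam**2 * ES2**2 / (4 * (1 - rho) ** 2)
var_siro = (2 * lam * ES3 / (3 * (1 - rho) * (2 - rho))
            + lam**2 * (2 + rho) * ES2**2 / (4 * (1 - rho) ** 2 * (2 - rho)))
var_lcfs = (lam * ES3 / (3 * (1 - rho) ** 2)
            + lam**2 * (1 + rho) * ES2**2 / (4 * (1 - rho) ** 3))

print(var_fcfs, var_siro, var_lcfs)   # 0.21, ~0.339, 0.66 for these parameters
assert var_fcfs < var_siro < var_lcfs
```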

Analysis of the output process


Let us examine the states of an M/M/1 system at the departure instants of the customers. Our aim is to calculate the distribution of the interdeparture times of the customers. As was proved in (1.4), at departures the distribution is
$$D_k = \frac{\lambda_k P_k}{\sum_{i=0}^{\infty}\lambda_i P_i}.$$
In the case of Poisson arrivals $\lambda_k = \lambda$, $k = 0, 1, \ldots$, hence $D_k = P_k$.

Now we are able to calculate the Laplace-transform of the interdeparture time d. Conditioning on the state of the server at the departure instants, and using the theorem of total Laplace-transforms, we have
$$L_d(s) = \varrho\,\frac{\mu}{\mu+s} + (1-\varrho)\,\frac{\lambda}{\lambda+s}\,\frac{\mu}{\mu+s},$$
since if the server is idle, a request should first arrive before the next departure can occur. Hence
$$L_d(s) = \frac{\mu\varrho(\lambda+s) + (1-\varrho)\lambda\mu}{(\lambda+s)(\mu+s)} = \frac{\lambda\mu\varrho + \lambda s + \lambda\mu - \lambda\mu\varrho}{(\lambda+s)(\mu+s)} = \frac{\lambda}{\lambda+s},$$
which shows that the distribution is exponential with parameter λ, and not with µ as one might expect. The independence follows from the memoryless property of the exponential distributions and from their independence. This means that the departure process is a Poisson process with rate λ.

This observation is very important for investigating tandem queues, that is when several simple M/M/1 queueing systems, as nodes, are connected in series with each other. Thus at each node the arrival process is a Poisson process with parameter λ and the nodes operate independently of each other. Hence if the service times have parameter $\mu_i$ at the $i$th node, then introducing the traffic intensity $\varrho_i = \lambda/\mu_i$ all the performance measures for a given node can be calculated. Consequently, the mean number of customers in the network is the sum of the mean numbers of customers in the nodes. Similarly, the mean waiting and response times for the network can be calculated as the sum of the related measures in the nodes.
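A short sketch of this additivity (the arrival rate and the per-node service rates are invented for illustration): each node behaves as an independent M/M/1 queue with arrival rate λ, so the network means are just sums of per-node means.

```python
lam = 2.0
service_rates = [5.0, 4.0, 6.0]        # mu_i of the tandem nodes (example values)

node_N = [(lam / mu) / (1 - lam / mu) for mu in service_rates]   # mean number per node
node_T = [1 / (mu - lam) for mu in service_rates]                # mean response per node

print("mean customers in network:", sum(node_N))
print("mean network response time:", sum(node_T))
print("Little check:", lam * sum(node_T))                        # equals sum(node_N)
```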

Now let us show how the density function of d can be obtained directly, without using Laplace-transforms. By applying the theorem of total probability we have
$$\begin{aligned}
f_d(x) &= \varrho\mu e^{-\mu x} + (1-\varrho)\left(\frac{\lambda\mu}{\lambda-\mu}\,e^{-\mu x} + \frac{\lambda\mu}{\mu-\lambda}\,e^{-\lambda x}\right) \\
&= \lambda e^{-\mu x} + \frac{\mu-\lambda}{\mu}\left(\frac{\lambda\mu}{\mu-\lambda}\,e^{-\lambda x} - \frac{\lambda\mu}{\mu-\lambda}\,e^{-\mu x}\right) \\
&= \lambda e^{-\mu x} + \lambda e^{-\lambda x} - \lambda e^{-\mu x} = \lambda e^{-\lambda x}.
\end{aligned}$$

Let us see a more general method that works for any system with Poisson arrivals and exponentially distributed service times. We now proceed to verify the input-output identity with a constructive proof that utilizes a simple differential-difference argument (much like that used in the development of the birth-death process), which will show that, indeed, the inter-departure times are exponential with parameter λ.

Consider an M/M/c/∞ system in steady state. Let N(t) now represent the number of customers in the system at a time t after the last departure. Since we are considering steady state, we have

$$Pr\{N(t) = n\} = p_n. \tag{2.5}$$
Furthermore, let d represent the random variable "time between successive departures" (inter-departure time), and
$$F_n(t) = Pr\{N(t) = n \text{ and } d > t\}. \tag{2.6}$$
So $F_n(t)$ is the joint probability that there are n customers in the system at a time t after the last departure and that t is less than the inter-departure time d; that is, another departure has not as yet occurred. The cumulative distribution function of the random variable d, which will be denoted as D(t), is given by
$$D(t) = Pr\{d \le t\} = 1 - \sum_{n=0}^{\infty}F_n(t), \tag{2.7}$$
since
$$\sum_{n=0}^{\infty}F_n(t) = Pr\{d > t\} \tag{2.8}$$
is the marginal complementary cumulative distribution function of d. To find D(t), it is necessary to first find $F_n(t)$.

As usual, using the law of total probability we can write the following difference equations concerning $F_n(t)$:
$$\begin{aligned}
F_n(t+\Delta t) &= (1-\lambda\Delta t)(1-c\mu\Delta t)F_n(t) + \lambda\Delta t(1-c\mu\Delta t)F_{n-1}(t) + o(\Delta t), && c \le n,\\
F_n(t+\Delta t) &= (1-\lambda\Delta t)(1-n\mu\Delta t)F_n(t) + \lambda\Delta t(1-n\mu\Delta t)F_{n-1}(t) + o(\Delta t), && 1 \le n \le c,\\
F_0(t+\Delta t) &= (1-\lambda\Delta t)F_0(t) + o(\Delta t).
\end{aligned}$$
Moving $F_n(t)$ from the right side of each of the above equations, dividing by $\Delta t$, and taking the limit as $\Delta t \to 0$, we obtain the differential-difference equations as
$$\begin{aligned}
\frac{dF_n(t)}{dt} &= -(\lambda+c\mu)F_n(t) + \lambda F_{n-1}(t), && c \le n,\\
\frac{dF_n(t)}{dt} &= -(\lambda+n\mu)F_n(t) + \lambda F_{n-1}(t), && 1 \le n \le c,\\
\frac{dF_0(t)}{dt} &= -\lambda F_0(t).
\end{aligned}$$

Using the boundary condition
$$F_n(0) \equiv Pr\{N(0) = n \text{ and } d > 0\} = Pr\{N(0) = n\} = p_n,$$
let us consider
$$F_n(t) = p_n e^{-\lambda t}. \tag{2.9}$$
The reader can easily verify that this is the solution to the above system of differential equations by substitution, recalling that for M/M/c/∞ models
$$p_{n+1} = \begin{cases}\dfrac{\lambda}{(n+1)\mu}\,p_n, & 1 \le n < c,\\[2mm] \dfrac{\lambda}{c\mu}\,p_n, & c \le n.\end{cases}$$
To obtain D(t), the cumulative distribution function of the inter-departure times, we use (2.9) in (2.7) to get
$$D(t) = 1 - \sum_{n=0}^{\infty}p_n e^{-\lambda t} = 1 - e^{-\lambda t}\sum_{n=0}^{\infty}p_n = 1 - e^{-\lambda t}, \tag{2.10}$$
thus showing that the inter-departure times are exponential.

It is easy to see that this statement is valid for any state-dependent service intensities, that is, instead of the service intensities of the M/M/c system we can write $\mu_n$, $n = 0, 1, 2, \ldots$. Thus, the statement is valid for the M/M/∞ system as well.
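The exponentiality of the inter-departure times can also be observed in simulation; the sketch below (a hedged illustration with arbitrary parameters and sample size) simulates an M/M/1 queue event by event, collects departure epochs after a warm-up, and checks that the inter-departure times have mean 1/λ and coefficient of variation close to 1.

```python
import random, statistics

lam, mu = 2.0, 5.0
rng = random.Random(7)

t, n = 0.0, 0
next_arrival = rng.expovariate(lam)
next_departure = float("inf")
departures = []

while len(departures) < 200_000:
    if next_arrival < next_departure:            # next event is an arrival
        t = next_arrival
        n += 1
        next_arrival = t + rng.expovariate(lam)
        if n == 1:                               # server was idle, start a service
            next_departure = t + rng.expovariate(mu)
    else:                                        # next event is a departure
        t = next_departure
        n -= 1
        departures.append(t)
        next_departure = t + rng.expovariate(mu) if n > 0 else float("inf")

gaps = [b - a for a, b in zip(departures[10_000:-1], departures[10_001:])]
mean = statistics.mean(gaps)
cv = statistics.stdev(gaps) / mean
print(mean, 1 / lam, cv)        # mean ~ 0.5 = 1/lambda, coefficient of variation ~ 1
```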

In the following we prove that the random variables N(d) and d are independent, and furthermore that successive inter-departure times are independent of each other. This result was first proved by Burke. So we see that the output distribution is identical to the input distribution and is not at all affected by the exponential service mechanism.
$$P(N(d) = n,\ d > t) = \int_t^{\infty}F_{n+1}(x)\,\mu_{n+1}\,dx = \int_t^{\infty}p_{n+1}\,\mu_{n+1}\,e^{-\lambda x}\,dx = \frac{p_{n+1}\mu_{n+1}}{\lambda}\,e^{-\lambda t} = p_n e^{-\lambda t} = P(N(d) = n)\,P(d > t).$$
Thus N(d) and d are independent of each other. Furthermore,
$$P(d_1 > t_1 \mid N(d_1) = n,\ d_2 > t_2) = P(d_1 > t_1 \mid N(d_1) = n) = P(d_1 > t_1).$$

36
Therefore d1 and d2 are independent of each other.
In many problems, a customer requires service from several service stations before a task
is completed. These problems require that we consider a network of queueing systems.
In such networks, the departures from some queues become the arrivals to other queues.
This is the reason why we are interested in the statistical properties of the departure
process from a queue.

Consider two queues in tandem as shown in Fig. 2.2, where the departures from the
rst queue become the arrivals at the second queue. Assume that the arrivals to the
rst queue are Poisson with rate λ and that the service time at queue 1 is exponentially
distributed with rate µ1 > λ Assume that the service time in queue 2 is also exponentially
distributed with rate µ2 > λ. The state of this system is specied by the number of
customers in the two queues, (N1 (t), N2 (t)) This state vector forms a Markov process
with the transition rate diagram shown in Fig. 2.3, and global balance equations are

(2.11) λP [N1 = 0, N2 = 0] = µ2 P [N1 = 0, N2 = 1]


(2.12) (λ + µ1 )P [N1 = n, N2 = 0] = µ2 P [N1 = n, N2 = 1]
(2.13) + λP [N1 = n − 1, N2 = 0] n > 0
(2.14) (λ + µ2 )P [N1 = 0, N2 = m] = µ2 P [N1 = 0, N2 = m + 1]
(2.15) + µ1 P [N1 = 1, N2 = m − 1] m > 0
(2.16) (λ + µ1 + µ2 )P [N1 = n, N2 = m] = µ2 P [N1 = n, N2 = m + 1]
(2.17) + µ1 P [N1 = n + 1, N2 = m − 1]
(2.18) + λP [N1 = n − 1, N2 = m]
(2.19) n > 0, m > 0.

Figure 2.2: Two tandem exponential queues with Poisson input

It is easy to verify that the following joint probabilities satisfy Eqs. 2.11 through 2.19

(2.20) P [N1 = n, N2 = m] = (1 − ρ1 )ρn1 (1 − ρ2 )ρm


2 , n ≥ 0, m ≥ 0,

where ρi = λ/µi . We know that the rst queue is an M/M/1 system, so

(2.21) P [N1 = n] = (1 − ρ1 )ρn1 , n = 0, 1, · · ·

By summing Eq. 2.20 over all n, we obtain the marginal distribution of the second queue,
that is

(2.22) P [N2 = m] = (1 − ρ2 )ρm


2 , m ≥ 0.

37
Figure 2.3: Transition rate diagram for two tandem exponential queues with Poisson
input.

Equations 2.20 through 2.22 imply that

(2.23) P [N1 = n, N2 = m] = P [N1 = n]P [N2 = m] for all n, m.

In words, the number of customers at queue 1 and the number at queue 2 at the same time
instant are independent random variables. Furthermore, the steady-state distribution at
the second queue is that of an M/M/1 system with Poisson arrival rate λ and exponential
service time µ2 .
We say that a network of queues has a product-form solution when the joint distribu-
tion of the vector of numbers of customers at the various queues is equal to the product of
the marginal distribution of the number in the individual queues. We now discuss Burke's
theorem, which states the fundamental result underlying the product-form solution in Eq.
2.23.
Burke's Theorem Consider an M/M/1, M/M/c, or M/M/∞ queueing system at
steady state with arrival rate λ then
1. The departure process is Poisson with rate λ

2. At each time t, the number of customers in the system N (t) is independent of the
sequence of departure times prior to t.

38
The product-form solution for the two tandem queues follows from Burke's theorem.
Queue 1 is an M/M/1 queue, so from part 1 of the theorem the departures from queue 1
form a Poisson process. Thus the arrivals to queue 2 are a Poisson process, so the second
queue is also an M/M/1 system with steady state pmf given by Eq. 2.22. It remains
to show that the numbers of customers in the two queues at the same time instant are
independent random variables.
The arrivals to queue 2 prior to time t are the departures from queue 1 prior to time
t. By part 2 of Burke's theorem the departures from queue 1, and hence the arrivals
to queue 2, prior to time t are independent of N1 (t). Since N2 (t) is determined by the
sequence of arrivals from queue 1 prior to time t and the independent sequence of service
times, it then follows that N1 (t) and N2 (t) are independent. Equation 2.23 then follows.
Note that Burke's theorem does not state that N1 (t) and N2 (t) are independent random
processes. This would require that N1 (t) and N2 (t) be independent random variables for
all t1 and t2 . This is clearly not the case.
Burke's theorem implies that the generalization of Eq. 2.23 holds for the tandem com-
bination of any number of M/M/1, M/M/c, M/M/∞ queues. Indeed, the result holds
for any feedforward network of queues in which a customer cannot visit any queue more
than once.

Example 4 Find the joint distribution for the network of queues shown in Fig. 2.4, where
queue 1 is driven by a Poisson process of rate λ1 , where the departures from queue 1 are
randomly routed to queues 2 and 3, and where queue 3 also has an additional independent
Poisson arrival stream of rate λ2 .

Figure 2.4: A feed-forward network of queues.

From Burke's theorem N1 (t) and N2 (t) are independent, as are N1 (t) and N3 (t). Since
the random split of a Poisson process yields independent Poisson processes, we have that
the inputs to queues 2 and 3 are independent. The input to queue 2 is Poisson with rate
λ1 /2. The input to queue 3 is Poisson of rate λ1 /2+λ2 since the merge of two independent
Poisson processes is also Poisson. Thus

P [N1 (t) = k, N2 (t) = m, N3 (t) = n]


= (1 − ρ1 )ρk1 (1 − ρ2 )ρm n
2 (1 − ρ3 )ρ3 k, m, n ≥ 0,

39
where ρ1 = λ1 /µ1 , ρ2 = λ1 /(2µ2 ), ρ3 = (λ1 /2 + λ2 )/µ3 , and where we have assumed that
all of the queues are stable.
Now let us consider an M/G/1 system and we are interested in under which service time
distribution the inter-departure time is exponentially distributed with parameterλ. First
prove that the utilization of the system is US = % = λE(S). As it is understandable for
any stationary stable G/G/1 queueing system the mean number of departures during
the mean busy period length of the server is one more than the mean number of arrivals
during the mean busy period length of the server. That is
E(δ) E(δ)
=1+ ,
E(S) E(τ )
where E(τ ) denotes the mean inter-arrival times. Hence
E(τ )
E(τ ) + E(δ) = E(δ)
E(S)
E(τ )E(S) 1
E(δ) = = E(S) ,
E(τ ) − E(S) 1−%

where % = E(S)
E(τ )
. Clearly
1 %
E(δ) E(S) 1−% 1−%
US = = = % = % < 1.
E(τ ) + E(δ) E(τ ) + E(S) 1 + 1−%
1−%

Thus the utilization for an M/G/1 system is %. It should be noted that an M/G/1 system
Dk = Pk , that is why our question can be formulated as
 
λ λ λ(1 − %)
= %LS (s) + (1 − %) LS (s) = LS (s) % +
λ+s λ+s λ+s
2 2
λ E(S) + sλE(S) + λ − λ E(S) λ(1 + sE(S))
= LS (s) = LS (s) ,
λ+s λ+s
thus
1
LS (s) = ,
1 + sE(S)
which is the Laplace-transform of an exponential distribution with mean E(S) . In sum-
mary, only exponentially distributed service times assures that Poisson arrivals involves
Poisson departures with the same parameters.

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu
Example 5 Let us consider a small post oce in a village where on the average 70
customers arrive according to a Poisson process during a day. Let us assume that the
service times are exponentially distributed with rate 10 clients per hour and the oce
operates 10 hours daily. Find the mean queue length, and the probability that the number
of waiting customer is greater than 2. What is the mean waiting time and the probability
that the waiting time is greater than 20 minutes ?

40
Solution:
Let the time unit be an hour. Then λ = 7, µ = 10, ρ = 7
10

ρ 7
N= =
1−ρ 3
7 7 70 − 21 49
Q=N −ρ= − = =
3 10 30 30
P (n > 3) = 1 − P (n ≤ 3) = 1 − P0 − P1 − P2 − P3
= 1 − 1 + ρ − (1 − ρ)(ρ + ρ2 + ρ3 ) = ρ4 = 0.343 · 0.7 = 0.2401
N 7 7
W = = = hour = 14 minutes
µ 3 · 10 30
   
1 1 1
P W > = 1 − FW = 0.7 · e−10· 3 ·0.3 = 0.7 · e−1 = 0.257
3 3

The following 4 Examples are taken from Allen [3].

Example 6 For a small batch computing system the processing time per job is exponen-
tially distributed with an average time of 3 minutes. Jobs arrive randomly at an average
rate of one job every 4 minutes and are processed on a rst-come-rst-served basis. The
manager of the installation has the following concerns.

(a) What is the probability that an arriving job will require more than 20 minutes to be
processed (the job turn-around time exceeds 20 minutes)?

(b) A queue of jobs waiting to be processed will form, occasionally. What is the average
number of jobs waiting in this queue?

(c) It is decided that, when the work load increases to the level such that the average time
in the system reaches 30 minutes, the computer system capacity will be increased.
What is the average arrival rate of jobs per hour at which this will occur? What is
the percentage increase over the present job load? What is the average number of
jobs in the system at this time?

(d) Suppose the criterion for upgrading the computer capacity is that not more than
10% of all jobs have a time in the system (turn-around time) exceeding 40 minutes.
At the arrival rate at which this criterion is reached, what is the average number of
jobs waiting to be processed?

Solution:

(a)

E(τ ) = 4 minutes, so
λ = 1/E(τ ) = 0.25 jobs/minute
and
ρ = λE(S) = 0.25 × 3 = 0.75.

41
The average time in the system, T̄ = E(S)/(1 − ρ) = 12 minutes, so
FT (t) = P (T ≤ t) = 1 − e−t/12 or P (T > t) = e−t/12
Therefore, the probability that T exceeds 20 minutes is e−20/12 = e−5/3 = 0.1889
(b) If we assume a job queue has not formed unless there is a job in it, we use the
formula
E[Q|Q > 0] = 1/(1 − ρ) = 4 jobs

If the question is interpreted to mean the average job queue length, including queues
of length zero, then we calculate
Q̄ = E(Q) = ρ2 /(1 − ρ) = (0.75)2 /0.25 = 2.25 jobs.
The most reasonable answer to the question, as stated, is 4 jobs.
(c) When T̄ = 30 minutes the system is to be upgraded, assuming the current E(S) is
3 minutes. We solve the equation
E(S) 3
30 = T̄ = =
1 − λE(S) 1 − 3λ
or

λ = 27/90 = 3/10 jobs/minute = 18 jobs/hour


The percentage increase is
100 × (18 − 15)/15 = 100/5 = 20%
When λ = 18 jobs/hour = 3/10 jobs/minute, the average number of jobs in the
system
N̄ = ρ/(1 − ρ) = 0.9/(1 − 0.9) = 9 jobs.

(d) The criterion is that πT (90) reaches 40 minutes. We solve the equation
2.3 × E[s] 2.3 × 3
40 = πT (90) = 2.8T̄ = = ,
1 − λE[s] 1 − 3λ
to obtain
33.1 33.1
λ= jobs/minute = 60 × = 16.55 jobs/hours.
120 120
That is only a [(16.55 − 15)/15] × 100 = 10.3% increase over the present arrival
rate. At this arrival rate ρ = λE[s] = 0.8275 and the average number of jobs in the
queue is
Q̄ = E[Q] = ρ2 /(1 − ρ) = 3.97
This is an increase over the current value of 2.25 jobs. The average time in the
system at this increased arrival rate is 17.39 minutes; it is only 12 minutes at the
current arrival rate.

42
In part (c) of the above example we see that increasing the arrival rate by 20% increased
the average time a job would spend in the system from 12 minutes to 30 minutes a 150%
increase! The curve of E(T )/E(S) rises sharply as ρ approaches the value 1.

That is, the slope of the curve increases rapidly as ρ grows beyond about 0.8. Since

dT̄ /dρ = E(S)(1 − λE(S)−2

a small change in ρ (due to a small change in λ, assuming E[s] is xed) causes a change
in T̄ given approximately by

(dT̄ /dρ)∆ρ = (dT̄ /dρ)E(S)∆λ = E(S)2 (1 − λE(S))−2 ∆λ.

Thus, if ρ = 0.5, a change ∆λ in λ will cause a change in T̄ of about 4E[s]2 ∆λ, while, if
ρ = 0.9, the change in T̄ will be about 100E[s]2 ∆λ, or 2.5 times the size of the change
that occurred for ρ = 0.5!
That is, when the system is operating at 90%server utilization, a small change in the sys-
tem load (arrival rate) will cause 25 times as great an increase in the average system time
as the same increase in load would cause if the system were operating at 50% utilization!
This illustrates the danger of designing a system to operate at a high utilization level - a
small increase in the load can have disastrous eects on the system performance.

Example 7 A computing facility has a large computer dedicated to a certain type of


on-line application for users who are scattered about the country. The arrival pattern of
requests to the central machine is random (Poisson), and the service time provided is
random (exponential) also, so the system is an M/M/1 queueing system. A proposal is
made that the workload be divided equally among n smaller machines - each with 1/n
times the processing power of the original machine. It is claimed that the response time
(time a request is in the system) will not change but the users will have a local computer.
Are these claims justied?

Solution: Let λ, µ be the average arrival and service rates, respectively, of the current
system so that ρ = λ/µ is the computer utilization. For each of the proposed new systems
the average arrival rate is λ/n and the average service rate is µ/n, so the server utilization
is (λ/n)/(µ/n) = λ/µ = ρ, the same value as the present system. If we assume the small
computers also provide random service, then

   
T̄proposed n/µ 1/µ
= / = n,
T̄current (1 − ρ) 1−ρ

and
   
W̄proposed ρn/µ ρ/µ
= / = n.
W̄current (1 − ρ) 1−ρ

43
Thus, the average time in the system and the average time in the queue would increase
n-fold rather than remain the same! Of course the n new computer systems, together,
process the same number of requests per hour as before, but each individual request
requires n times as long to be processed, on the average, as in the present system. Thus,
if the present system has an average service time of 2 seconds with a utilization of 0.7,
then it has an average response time of 6.67 seconds; a proposed system of 10 computers,
each providing 20 seconds service time, would yield a response time of 66.7 seconds! The
eect discussed in this example is called the "scaling eect". The result can be used to
show that centralizing a computing facility can improve the response time while providing
more computing capability for less money (economy of scale).
Example 8 A branch oce of a large engineering rm has one on-line terminal con-
nected to a central computer system for 16 hours each day. Engineers, who work through-
out the city, drive to the branch oce to use the terminal for making routine calculations.
The arrival pattern of engineers is random (Poisson) with an average of 20 persons per
day using the terminal. The distribution of time spent by an engineer at the terminal is
exponential with an average time of 30 minutes. Thus the terminal is 5/8 utilized (20 ×
1/2 = 10 hours out of 16 hours available). The branch manager receives complaints from
the sta about the length of time many of them have to wait to use the terminal. It does not
seem reasonable to the manager to procure another terminal when the present one is only
used ve-eighths of the time, on the average. How can queueing theory help this manager?

Solution:
The M/M/1 queueing system is a reasonable model with ρ = 5/8, as we computed
above. The M/M/1 formulas give the following.

T̄ = E(T ) = E(S)/(1 − ρ) = 80 minutes. Average time an engineer


spends at the branch oce.
2
Q̄ = ρ /(1 − ρ) = 1.0417. Average number of engineers
waiting in the queue.
E(Q|Q > 0) = 1/(1 − ρ) = 8/3. Average number of engineers
in nonempty queues.
W̄ = E(W ) = ρE(S)/(1 − ρ) = 50 minutes. Average waiting time
in queue.
E(W |W > 0) = E(T ) = 80 minutes. Average waiting time of
those who must wait.
πW (90) = T̄ ln 10ρ = 146.61 minutes. 90th percentile of time in
the queue.
πT (90) ≈ 2.3T̄ = 184 minutes. 90th percentile time in the
branch oce.
Since ρ = 5/8, only three-eighths of the engineers who use the terminal need not wait.
For those who must wait, the average wait for the terminal is 80 minutes - quite a long

44
wait, by most standards! Ten percent of the engineers spend over 3 hours (actually 184
minutes) in the oce to do an average of 30 minutes of computing. The probability of
waiting more than an hour to use the terminal is

5
P [W > 60] = e−60/80 = 0.295229,
8

or almost 30%

These results may seem a little startling to those not acquainted with queueing theory. It
might seem, intuitively, that adding another terminal would cut the average waiting time
in half - from 50 minutes to 25 minutes (to 40 minutes for those who must wait). The
queueing theory we have presented so far should suce to convince the manager that an
improvement is needed.

Example 9 Trac to a message switching center for one of the outgoing communication
lines arrives in a random pattern at an average rate of 240 messages per minute. The
line has a transmission rate of 800 characters per second. The message length distribution
(including control characters) is approximately exponential with an average length of 176
characters. Calculate the principal statistical measures of system performance assuming
that a very large number of message buers are provided. What is the probability that 10
or more messages are waiting to be transmitted?

Solution: The average service time is the average time to transmit a message or

average message length


E(S) =
line speed
176 characters
= = 0.22 seconds.
800 characters/second

Hence, since the average arrival rate

λ = 240 messages/minute = 4 messages/second,

the server utilization


ρ = λE(S) = 4 × 0.22 = 0.88,

that is, the communication line is transmitting outgoing messages 88% of the time.

45
N̄ = E(N ) = ρ/(1 − ρ) = 7.33 messages) Average number of messages in the
system
Q̄ = E(Q) = ρ /(1 − ρ) = 6.45 messages).
2
Average number of messages in the
queue waiting to be transmitted.
T̄ = E(T ) = E(S)/(1 − ρ) = 1.83 seconds. Average time a message spend
in the system.
W̄ = E(W ) = ρE(S)/(1 − ρ) = 1.61 seconds. Average time a message wait for
transmission.
πT (90) = 2.3T̄ = 4.209 seconds. 90th percentile time in the system.
πW (90) = T̄ ln 10ρ = 3.98 seconds. 90th percentile waiting time in
queue (90% of the messages wait
no longer than 3.98 seconds.)

Since 10 or more messages are waiting if and only if 11 or more messages are in the
system, the required probability is

P (11 or more messages in the system) = ρ11 = 0.245.


Our discussion of the M/M/1 model has been more complete than it will be for many
queueing models because it is an important but simple model. It is also a pleasant model
to study because the probability distributions of the random variables T, W, N and Q
can be calculated; for some queueing models only the averages T̄ , W̄ , N̄ , and Q̄ can be
computed, and these only with diculty. A number of systems can be modeled, at least
in a limiting sense, as an M/M/1 queueing system.

2.2 The M/M/1 Queue with Balking Customers


Let us consider a modication of an M/M/1 system in which customers are discouraged
when more and more requests are present at their arrivals. Let us denote by bk the
probability that a customers joints to the systems provided there are k customers in the
system at the moment of his arrival.
It is easy to see, that the number of customers in the system is a birth-death process
with birth rates

λ k = λ · bk , k = 0, 1, . . .

Clearly, there are various candidates for bk but we have to nd such probabilities which
result not too complicated formulas for the main performance measures. Keeping in mind
this criteria let us consider the following

1
bk = , k = 0, 1, . . .
k+1

46
Thus

ρk
Pk = P0 , k = 0, 1, . . . ,
k!
and then using the normalization condition we get

ρk −ρ
Pk = e , k = 0, 1, . . .
k!
The stability condition is ρ < ∞, that is we do not need the condition ρ < 1 as in an
M/M/1 system.
Notice that the number of customers follows a Poisson law with parameter ρ and we can
expect that the performnace measures can be obtained in a simple way.

Performance measures

US = 1 − P0 = 1 − e−ρ ,
E(δ)
US = 1 ,
λ
+ E(δ)

hence

1 US 1 1 − e−ρ
E(δ) = · = · .
λ 1 − US λ e−ρ

N = ρ,
V ar(N ) = ρ

Q = N − US = ρ − (1 − e−ρ ) = ρ + e−ρ − 1.

X ∞
X X∞ ∞
X
2 2 2
E(Q ) = (k − 1) Pk = k Pk − 2 kPk + Pk
k=1 k=1 k=1 k=1
= E(N ) − 2N + US = ρ + ρ − 2ρ + US = ρ − ρ + 1 − e−ρ .
2 2 2

Thus

V ar(Q) = E(Q2 ) − (E(Q))2 = ρ2 − ρ + 1 − e−ρ − (ρ + e−ρ − 1)2


= ρ2 − ρ + 1 − e−ρ − ρ2 − e−2ρ − 1 − 2ρe−ρ + 2ρ + 2e−ρ
= ρ − e−2ρ + e−ρ − 2ρe−ρ = ρ − e−ρ (e−ρ + 2ρ − 1).

47
ˆ The probability that an arriving customer enters/joins into the system can be ob-
tained with the help of the Bayes-formula, namely
P∞ P∞
j=0 (λj h + o(h))Pj j=0 λj Pj µ(1 − e−ρ ) 1 − e−ρ
PJ = lim P∞ = P∞ = = .
k=0 (λh + o(h))Pk k=0 λPk λ ρ
h→0

ˆ To get the distribution of the response and waiting times we have to know the
distribution of the system at the instant when an arriving customer joins to the
system.
By applying the Bayes's rule it is not dicult to see that
ρk+1
λ
k+1
· Pk (k+1)!
· e−ρ Pk+1
Πk = ∞ = ∞ = .
X λ X i+1
ρ 1 − e−ρ
· Pi e−ρ
i=0
i+1 i=0
(i + 1)!
Notice, that this time
Πk 6= Pk .
Let us rst determine T and then W .
By the law of total expectations we have
∞ ∞
X k+1 1 X (k + 1)Pk+1 1 ρ
T = Πk = −ρ
= −ρ
·N = −ρ )
.
k=0
µ µ k=0
1 − e µ(1 − e ) µ(1 − e
1 ρ + e−ρ − 1
 
1
W =T− = .
µ µ 1 − e−ρ
As we have proved in formula (1.5)

X ∞
X ∞
X
λ= λk Pk = µ k Pk = µPk = µ(1 − e−ρ ),
k=0 k=1 k=1

thus
ρ
λ · T = µ(1 − e−ρ ) · = ρ = N,
µ(1 − e−ρ )
ρ + e−ρ − 1
λ · W = µ(1 − e−ρ ) · = ρ + e−ρ − 1 = Q
µ(1 − e−ρ )
which is the Little formula for this system.
ˆ To nd the distribution of T and W we have to use the same approach as we did
earlier, namely
∞ ∞
X X µ(µx)k e−µx ρk+1 e−ρ
fT (x) = fT (x|k) · Πk = ·
k=0 k=0
k! (k + 1)! 1 − e−ρ

−(ρ+µx) X
λe (µxρ)k
= ,
1 − e−ρ k=0
k!(k + 1)!

48
which is dicult to calculate. We have the same problems with fW (x), too.
However, the Laplace-transforms LT (s) and LW (s) can be obtained and the hence
the higher moments can be derived.
Namely
k+1 ρ k+1

X X∞ 
µ (k+1)!
e−ρ
LT (s) = LT (s|k)Πk =
k=0 k=0
µ+s 1 − e−ρ
∞  k+1
e−ρ X µρ 1 e−ρ  µ+s
µρ

= = e −1 .
1 − e−ρ k=0 µ + s (k + 1)! 1 − e−ρ
µ+s
LW (s) = LT (s) · .
µ

Find T by the help of LT (s) to check the formula. It is easy to see that

e−ρ µρ
L0T (s) = · e µ+s (−µρ(µ + s)−2 )
1 − e−ρ
e−ρ ρ ρ ρ
L0T (0) = − e · = − .
1 − e−ρ µ µ(1 − e−ρ )

Hence
ρ
T = ,
µ(1 − e−ρ )

as we have obtained earlier. W can be veried similarly.


To get V ar(T ) and V ar(W ) we can use the Laplace-transform method. As we have
seen

e−ρ  µ+s
λ

LT (s) = e −1 .
1 − e−ρ
Thus

e−ρ λ
L0T (s) = · e µ+s (−1)λ(µ + s)−2 ,
1 − e−ρ
therefore

e−ρ  λ
−2 2 λ

L00T (s) = −3

· e µ+s (−1)λ(µ + s) + 2λ(µ + s) · e µ+s .
1 − e−ρ
Hence
2 !
e−ρ 1 ρ2 + 2ρ

ρ 2ρ
L00T (0) = eρ − + 2 eρ = · .
1 − e−ρ µ µ µ2 1 − e−ρ

49
Consequently

2
1 ρ2 + 2ρ

ρ
V ar(T ) = 2 · −
µ 1 − e−ρ µ(1 − e−ρ )
(ρ2 + 2ρ) (1 − e−ρ ) − ρ2 ρ2 + 2ρ − ρ2 e−ρ − 2ρe−ρ − ρ2
= =
µ2 (1 − e−ρ )2 µ2 (1 − e−ρ )2
2ρ − ρ2 e−ρ − 2ρe−ρ ρ(2 − (ρ + 2)e−ρ )
= = .
µ2 (1 − e−ρ )2 µ2 (1 − e−ρ )2

However, W and T can be considered as a random sum, too. That is

 2
1 1 1
V ar(W ) = E(Na ) 2 + V ar(Na ) = 2 (E(Na ) + V ar(Na )).
µ µ µ
∞ ∞
X X kPk+1
E(Na ) = kΠk =
k=1 k=1
1 − e−ρ
∞ ∞
!
1 X X
= (k + 1)Pk+1 − Pk+1
1 − e−ρ k=0 k=0
1
ρ + e−ρ − 1 .

=
1 − e−ρ

Since

V ar(Na ) = E(Na2 ) − (E(Na ))2

rst we have to calculate E(Na2 ), that is

∞ ∞
X X Pk+1
E(Na2 ) = 2
k Πk = k2
k=1 k=1
1 − e−ρ

1 X
(k + 1)2 − 2k − 1 Pk+1

=
1 − e−ρ k=0
∞ ∞ ∞
!
1 X
2
X X
= (k + 1) Pk+1 − 2 kPk+1 − Pk+1
1 − e−ρ k=0 k=0 k=0
1 2 −ρ
 −ρ

= ρ + ρ − 2 ρ + e − 1 − 1 − e
1 − e−ρ
1 2 −ρ

= ρ − ρ − e + 1 .
1 − e−ρ

50
Therefore
 2
1 2 −ρ
 1 −ρ
V ar(Na ) = ρ −ρ−e +1 − (ρ + e − 1)
1 − e−ρ 1 − e−ρ
 2 
1 −ρ 2 −ρ
 −ρ
2 
= (1 − e ) ρ − ρ − e + 1 − ρ + e − 1
1 − e−ρ
 2
1
= (ρ2 − ρ − e−ρ + 1 − ρ2 e−ρ + ρe−ρ + e−2ρ − e−ρ
1 − e−ρ
− ρ2 − e−2ρ − 1 − 2ρe−ρ + 2ρ − 2e−ρ )
ρ − e−ρ (ρ2 + ρ)
= .
(1 − e−ρ )2
Finally
 2 
ρ − e−ρ (ρ2 + ρ)

1 1 −ρ
V ar(W ) = (ρ + e − 1) +
µ 1 − e−ρ (1 − e−ρ )2
1
= ((ρ + e−ρ − 1)(1 − e−ρ ) + ρ − e−ρ (ρ2 + ρ)).
(µ(1 − e−ρ ))2
Thus
1
V ar(T ) = V ar(W ) + 2
µ
 2
1
V ar(T ) = (ρ + e−ρ − 1)(1 − e−ρ ) + ρ − e−ρ (ρ2 + ρ) + (1 − e−ρ )2 )
µ(1 − e−ρ )
(1 − e−ρ )(ρ + e−ρ − 1 + 1 − e−ρ ) + ρ − e−ρ (ρ2 + ρ)
=
(µ(1 − e−ρ ))2
2ρ − 2ρe−ρ − ρ2 e−ρ
=
(µ(1 − e−ρ )2
which is the same we have obtained earlier.
Java applets for direct calculations can be found at
https://qsa.inf.unideb.hu

2.3 The M/M/1 Priority Queues


In the following let us consider an M/M/1 systems with priorities. This means that we
have two classes of customers. Each type of requests arrive according to a Poisson process
with parameter λ1 , and λ2 , respectively and the processes are supposed to be independent
of each other. The service times for each class are assumed to be exponentially distributed
with parameter µ. The system is stable if
ρ1 + ρ2 < 1,
where ρi = λi /µ, i = 1, 2.
Let us assume that class 1 has priority over class 2. This section is devoted to the investi-
gation of preemptive and non-preemptive systems and some mean values are calculated.

51
Preemptive Priority

According to the discipline the service of a customer belonging to class 2 is never carried
out if there is customer belonging to class 1 in the system. In other words it means that
class 1 preempts class 2 that is if a class 2 customer is under service when a class 1 request
arrives the service stops and the service of class 1 request starts. The interrupted service
is continued only if there is no class 1 customer in the system.

Let Ni denote the number of class i customers in the system and let Ti stand for the
response time of class i requests. Our aim is to calculate E(Ni ) and E(Ti ) for i = 1, 2.
Since type 1 always preempts type 2 the service of class 1 customers is independent of
the number of class 2 customers. Thus we have
1/µ ρ1
(2.24) E(T1 ) = , E(N1 ) = .
1 − ρ1 1 − ρ1
Since for all customers the service time is exponentially distributed with the same pa-
rameter, the number of customers does not depends on the order of service. Hence for
the total number of customers in an M/M/1 we get
ρ1 + ρ2
(2.25) E(N1 ) + E(N2 ) = ,
1 − ρ1 − ρ2
and then inserting (2.24) we obtain
ρ1 + ρ2 ρ1 ρ2
E(N2 ) = − = ,
1 − ρ1 − ρ2 1 − ρ1 (1 − ρ1 )(1 − ρ1 − ρ2 )
and using the Little's law we have
E(N2 ) 1/µ
E(T2 ) = = .
λ2 (1 − ρ1 )(1 − ρ1 − ρ2 )
Example 10 Let us compare what is the dierence if preemptive priority discipline is
applied instead of FIFO.

Let λ1 = 0.5, λ2 = 0.25 and µ = 1. In FIFO case we get

E(T ) = 4.0, E(W ) = 3.0, E(N ) = 3.0

and in priority case we obtain

E(T1 ) = 2.0, E(W1 ) = 1.0, E(N1 ) = 1.0

E(T2 ) = 8.0, E(W2 ) = 7.0, E(N2 ) = 2.0

Non-preemptive Priority

The only dierence between the two disciplines is that in the case the arrival of a class
1 customer does not interrupt the service of type 2 request. That is why sometimes this
discipline is call HOL ( Head Of the Line ). Of course after nishing the service of class

52
1 starts.

By using the law of total expectations the mean response time for class 1 can be obtained
as

1 1 1
E(T1 ) = E(N1 ) + + ρ2 .
µ µ µ

The last term shows the situation when an arriving class 1 customer nd the server
busy servicing a class 2 customer. Since the service time is exponentially distributed the
residual service time has the same distribution as the original one. Furthermore, because
of the Poisson arrivals the distribution at arrival moments is the same as at random
moments, that is the probability that the server is busy with class 2 customer is ρ2 . By
using the Little's law
E(N1 ) = λ1 E(T1 ),
after substitution we get

(1 + ρ2 )/µ (1 + ρ2 )ρ1
E(T1 ) = , E(N1 ) = .
1 − ρ1 1 − ρ1

To get the means for class 2 the same procedure can be performed as in the previous
case. That is using (2.25) after substitution we obtain

(1 − ρ1 (1 − ρ1 − ρ2 ))ρ2
E(N2 ) = ,
(1 − ρ1 )(1 − ρ1 − ρ2 )
and then applying the Little's law we have

(1 − ρ1 (1 − ρ1 − ρ2 ))/µ
E(T2 ) = .
(1 − ρ1 )(1 − ρ1 − ρ2 )

Example 11 Now let us compare the dierence between the two priority disciplines.
Let λ1 = 0.5, λ2 = 0.25 and µ = 1, then

E(T1 ) = 2.5, E(W1 ) = 1.5, E(N1 ) = 1.25

E(T2 ) = 7.0, E(W2 ) = 6.0, E(N2 ) = 1.75

Of course knowing the mean response time and mean number of customers in the system
the mean waiting time and the mean number of waiting customers can be obtained in
the usual way.

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

53
2.4 The M/M/1/K Queue, Systems with Finite Capac-
ity
Let K be the capacity of an M/M/1 system, that is the maximum number of customers in
the system including the one under service. It is easy to see that the nu,ber of customers
in the systems is a birth-death process with rates λk = λ, k = 0, . . . , K − 1 és µk = µ,
k = 1, . . . , K . For the steady-state distribution we have
ρk
Pk = K
, k = 0, . . . , K,
X
ρi
i=0

that is

1
1  K+1 ,
 ρ=1
P0 = K =
1−ρ
ρ 6= 1.
X 
ρi ,

1−ρK+1
i=0

It should be noted that the system is stable for any ρ > 0 when K is xed. However, if
K → ∞ the the stability condition is ρ < 1 since the distribution of M/M/1/K converges
to the distribution of M/M/1.
It can be veried analytically since ρK → 0 then P0 → 1 − ρ.

Similarly to an M/M/1 systems after reasonable modications the performance measures


can be computed as
ˆ
US = 1 − P0 ,
1 US
E(δ) =
λ 1 − US
ˆ
K
X K
X
k
N= kρ P0 = ρP0 kρk−1
k=1 k=1
K
!0 0 0
1 − ρK ρ − ρK+1
X  
k
= ρP0 ρ = ρP0 ρ = ρP0
k=1
1−ρ 1−ρ
ρP0
1 − (K + 1)ρK (1 − ρ) + ρ − ρK+1 ·
 
=
(1 − ρ)2

ρP0 1 − (K + 1)ρK − ρ + (K + 1)ρK+1 + ρ − ρK+1
=
(1 − ρ)2

ρP0 1 − (K + 1)ρK + KρK+1
=
(1 − ρ)2

ρ 1 − (K + 1)ρK + KρK+1
= .
(1 − ρ)(1 − ρK+1 )

54
ˆ
K
X
2
E(N ) = k 2 Pk , V ar(N ) = E(N 2 ) − (E(N ))2
k=1

ˆ
K
X K
X K
X
Q= (k − 1)Pk = kPk − Pk = N − US
k=1 k=1 k=1

ˆ
K
X
E(Q2 ) = (k − 1)2 Pk , V ar(Q) = E(Q2 ) − (E(Q))2 .
k=1

ˆ To obtain the distribution of the response and waiting time we have to know the
distribution of the system at the moment when the tagged customer enters into
to system. It should be underlined that the customer should enter into the system
and it is not the same as an arriving customer. An arriving customer can join the
system or can be lost because the system is full. By using the Bayes' theorem it is
easy to see that

λPk Pk
Πk = = .
K−1
X 1 − PK
λPi
i=0

Similarly to the investigations we carried out in an M/M/1 system the mean and
the density function of the response time can be obtained by the help of the law of
total means and law of total probability, respectively.

For the expectation we have

K−1 K−1
X k + 1 ρ k P0
X k+1
T = Πk =
k=0
µ k=0
µ 1 − Pk
K−1
1 X N
= (k + 1)Pk+1 = .
λ(1 − PK ) k=0 λ(1 − PK )

Consequently

1 N 1
W =T− = − .
µ λ(1 − PK ) µ

We would like to show that the Little's law is valid in this case and the same time
we can check the correctness of the formula.

55
It can easily be seen that the average arrival rate into the system is λ = λ(1 − PK )
and thus

N
λ · T = λ(1 − PK ) = N.
λ(1 − PK )

Similarly
 
N 1 λ
λ·W =λ − =N−
λ(1 − PK ) µ µ
= N − ρ(1 − PK ) = N − US = Q,

since

λ = µ = µUS .

Since the conditional waiting time is Erlang distributed, it is easy to see that

K−1
2
X (k + k 2 )
E(W ) = Πk , V ar(W ) = E(W 2 ) − (E(W ))2 ,
k=1
µ2

V ar(T ) = V ar(W ) + 1/µ2 .

Now let us nd the density function of the response and waiting times
By using the theorem of total probability we have

K−1
X (µx)k −µx Pk
fT (x) = µ e ,
k=0
k! 1 − PK

and thus for the distribution function we get


 x 
K−1
X Z (µt)k Pk
FT (x) =  µ e−µt dt
k=0
k! 1 − PK
0
K−1 k
!
X X (µx)i −µx Pk
= 1− e
k=0 i=0
i! 1 − PK
K−1 k
!
X X (µx)i −µx Pk
=1− e .
i=0
k=0
i! 1 − PK

These formulas are more complicated due to the nite summation as in the case of
an M/M/1 system, but it is not dicult to see that in the limiting case as K → ∞
we have

fT (x) = µ(1 − ρ)e−µ(1−ρ)x .

56
For the density and distribution function of the waiting time we obtain
P0
fW (0) =
1 − PK
K−1
X (µx)k−1 Pk
fW (x) = µ e−µx , x>0
k=1
(k − 1)! 1 − P K

K−1 k−1
!
P0 X X (µx)i Pk
FW (x) = + 1− e−µx
1 − PK k=1 i=0
i! 1 − PK
K−1 k−1
!
X X (µx)i Pk
=1− e−µx · .
k=1 i=0
i! 1 − PK

These formulas can be calculated very easily by a computer.


As we can see the probability PK plays an important role in the calculations.
Notice that it is exactly the probability that an arriving customer nd the system
full that is it lost. It is called blocking or lost probability and denoted by PB .
Its correctness can be proved by the help of the Bayes's rule, namely

(λK h + o(h))PK λPK


PB = lim PK = PK = PK .
j=0 (λj h + o(h))Pj j=0 λPj
h→0

If we would like to show the dependence on K and ρ it can be denoted by

ρK
PB (K, ρ) = K
.
X
ρk
k=0

Notice that
ρρK−1 ρPB (K − 1, ρ)
PB (K, ρ) = = .
K−1
X 1 + ρPB (K − 1, ρ)
ρk + ρρK−1
k=0

ρ
Starting with the initial value PB (1, ρ) = the probability of loss can be com-
1+ρ
puted recursively. It is obvious that this sequence tends to 0 as ρ < 1. Consequently
by using the recursion we can always nd an K -t, for which

PB (K, ρ) < P ∗ ,

where P ∗ is a predened limit value for the probability of loss.


To nd the value of K without recursion we have to solve the inequality

ρK (1 − ρ)
< P∗
1 − ρK+1

57
which is more complicated task.
Alternatively can can nd an approximation method, too. Use the distribution of
an M/M/1 system and nd the probability that in the system there are at least K
customers. It is easy to see that

ρK (1 − ρ) X k
PB (K, ρ) = < ρ (1 − ρ) = ρK ,
1 − ρK+1 k=K

and thus if

ρK < P ∗ ,

then PB∗ (K, ρ) < P ∗ . That is


K ln ρ < ln P ∗
ln P ∗
K> .
ln ρ

Now let us turn our attention to the Laplace-transform of the response and wait-
ing times. First let us compute it for the response time. Similarly to the previous
arguments we have
K−1
X  µ k+1 ρk P0
LT (s) =
k=0
µ+s 1 − PK
K  l
P0 X µρ
=
ρ(1 − PK ) l=1 µ + s
 K
λ
P0 λ 1 − µ+s
= λ
ρ(1 − PK ) µ + s 1 − µ+s
 K
λ
µP0 1 − µ+s
= .
(1 − PK ) µ − λ + s
The Laplace-transform of the waiting time can be obtained as
K−1
X  µ k ρk P0
LW (s) =
k=0
µ+s 1 − PK
K−1  k
P0 X µρ
=
1 − PK k=0 µ + s
 K
λ
P0 1 − µ+s
= λ
1 − PK 1 − µ+s
  K 
λ
(µ + s) 1 − µ+s
P0
= ,
1 − PK µ−λ+s

58
which also follows from relation
µ
LT (s) = LW (s) · .
µ+s

By the help of the Laplace-transforms the higher moments of the involved random
variables can be computed, too.

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

Example 12 Consider the queue at an output port of router. The transmission link is
a T1 line (1.544Mbps), packets arrive according to a Poisson process with mean rate
λ = 659.67 packets/sec, the packet lengths are exponentially distributed with a mean
length of 2048 bits/packet. If the system size is 16 packets what is the packet loss rate?

Solution:
λ = 659.67, µ = 1.544 M bps/2048 bits/packet = 753.9 packets/sec, ρ = 0.875.
Thus the packet loss rate = blocking probability x λ = 0.0165 x 659.67 = 10.88.

Example 13 A data concentrator has 40 terminals connected to it. During the busiest
time of day each terminal is occupied and produces packets which are exponentially dis-
tributed with a mean of 1000 bits. The link connecting the concentrator to the campus
network carries trac at 1.552 Mbps. The arrival process of packets to the concentrator
forms a Poisson process with ten of the terminals producing on average 1 packet per 10
msec, twenty of the terminals producing on average 1 packet per 50 msec, and ten of the
terminals producing on average 1 packet per 0.5 second.
(a) Determine the utilization of the concentrator.
(b) Assuming the buer at the concentrator is innite, determine the average delay in the
queue.
(c) If the concentrator has a system capacity of 20 packets, determine the packet loss rate.

Solution:
(a) Determine the utilization of the concentrator.
mean service rate µ = 1.552 ∗ 10 bps/1000 bits/packet = 1552 packets/sec
mean arrival rate λ = 10 ∗ (1 packet/10 msec) + 20 ∗ (1 packet/50 msec)
+ 10 ∗ (1 packet/0.5sec) = 1420 packets/sec. Thus ρ = 1420/1552 = 0.9149.
(b) Assuming the buer at the concentrator is innite, determine the average delay in
the queue. E(W ) = 6.93 msec
(c) If concentrator has a system capacity of 20 packets, nd the packet loss rate.
The system is now modeled as a M/M/1/K queue.
Packet loss rate = the blocking probability * λ = 0.017*1420 = 24.14 packets/sec.

59
2.5 The M/M/∞ Queue
Similarly to the previous systems it is easy to see that the number of customers in the
system, that is the process (N (t), t ≥ 0) is a birth-death process with rates

λk = λ, k = 0, 1, . . .
µk = kµ, k = 1, 2, . . . .

Hence the steady-state distribution can be obtained as



%k X %k
Pk = P0 , where P0−1 = = e% ,
k! k=0
k!

That is
%k −%
Pk = e ,
k!
showing that N follows a Poisson law with parameter %.

It is easy to see that the performance measures can be computed as


1
N = %, λ = λ, T = , W = 0, r = N , µ = rµ
µ
E(δr ) 1 − e−% 1 1 − e−%
Ur = 1 − e−% , 1 = , E(δ r ) = .
λ
e−% λ e−%

It can be proved that these formulas remain valid for an M/G/∞ system as well where
1
E(S) = .
µ

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

60
Agner Krarup Erlang, 1878-1929

2.6 The M/M/n/n Queue, Erlang-Loss System


This system is the oldest and thus the most famous system in queueing theory. The ori-
gin of the trac theory or congestion theory started by the investigation of this system
and Erlang was the rst who obtained his well-reputed formulas, see for example Er-
lang [29, 30].
By assumptions customers arrive according to a Poisson process and the service times
are exponentially distributed. However, if n servers all busy when a new customer arrives
it will be lost because the system is full. The most important question is what proportion
of the customers is lost.

The process (N (t), t ≥ 0) is said to be in state k if k servers are busy, which is the same as
k customers are in the system. It is easy to see that (N (t), t ≥ 0)is a birth-death process
with rates
(
λ, if k < n,
λk =
0, if k ≥ n,

µk = kµ, k = 1, 2, ..., n.

Clearly the steady-state distribution exists since the process has a nite state space. The

61
stationary distribution can be obtained as
  
k
λ 1
, if k ≤ n,

P0

Pk = µ k!
, if k > 0.

0

Due to the normalizing condition we have


n  k
!−1
X λ 1
P0 = ,
k=0
µ k!

and thus the distribution is


 k
λ 1 %k %k −ρ
µ k! e
Pk = n  i = nk! i = nk! i , k ≤ n.
X λ 1 X% X%
−ρ
e
i=0
µ i! i=0
i! i=0
i!

which is called as a truncated Poisson distribution with parameter ρ.

The most important measure of the system is


%n
Pn = nn! k = B(n, ρ)
X%

k=0
k!

which was introduced by Erlang and it is referred to as Erlang's B-formula, or loss


formula and generally denoted by B(n, λ/µ).
By using the Bayes's rule it is easy to see that

(λn h + o(h))Pn λPn


B(n, ρ) = lim Pn = Pn = Pn .
j=0 (λj h + o(h))Pj j=0 λPj
h→0

For moderate n the probability P0 can easily be computed. For large n and small %
P0 ≈ e−% , and thus
%k
Pk ≈ e−% ,
k!
that is the Poisson distribution. For large n and large %
n
X %j
6= e% .
j=0
j!

However, in this case the central limit theorem can be used, since the denominator is the
sum of the rst (n + 1) terms of a Poisson distribution with mean %. Thus by the central

62
limit theorem this Poisson distribution can be approximated by a normal law with mean

% and dispersion % that is
n + 12 − % n − 1 + 12 − %
Φ( √ ) − Φ( √ ) n− 1 −%
Φ( √2% )
% %
Pn ≈ =1− ,
n + 12 − % Φ(
n+ 12 −%
√ )
Φ( √ ) %
%
where
Zs
1 x2
Φ(s) = √ e− 2 dx.

−∞

is the distribution function of the standard normal distribution.

Another way to calculate B(n, ρ) is to nd a recursion. This can be obtained as follows
ρn ρ ρn−1
n! n (n−1)!
B(n, ρ) = n i
= n−1 i
X ρ X ρ ρ ρn−1
+
i=0
i! i! n (n − 1)!
i=0
ρ
n
B(n − 1, ρ) ρB(n − 1, ρ)
= = .
1 + nρ B(n − 1, ρ) n + ρB(n − 1, ρ)
ρ
Using B(1, ρ) = as an initial value the probabilities B(n, ρ) can be computed for
1+ρ
any n. It is important since the direct calculation can cause a problem due to the value
of the factorial.
For example for n = 1000, ρ = 1000 the exact formula cannot be computed but the ap-
proximation and the recursion gives the value 0.024.

Due to the great importance of B(n, ρ) in practical problems so-called calculators have
been developed which can be found at
http://www.erlang.com/calculator/

To compare the approximations and the exact values please use


https://qsa.inf.unideb.hu

Now determine the main performance measures of this M/M/n/n system


ˆ Mean number of customers in the systems, mean number of busy servers
n n n−1 i
X X %j X %
N =n= jPj = j P0 = % P0 = %(1 − Pn ),
j=0 j=0
j! j=0
i!

thus the mean number of requests for a given server is


%
(1 − Pn ).
n
63
ˆ Utilization of a server
As we have seen
n
X i n̄
Us = Pi = .
i=1
n n

This case
%
Us = (1 − Pn ).
n

ˆ The mean idle period for a given server


By applying the well-known relation

1/µ
P (the server is busy ) = ,
e + 1/µ

where e is the mean idle time of the server. Thus

% 1/µ
(1 − Pn ) = ,
n e + 1/µ

hence
n 1
e= − .
λ(1 − Pn ) µ

ˆ The mean busy period of the system


Clearly
Eδr
Ur = 1 − P0 = 1 ,
λ
+ Eδr
thus
n
X %i
1 − P0 i=1
i!
Eδr = = n
!.
λP0 X %i
λ 1+
i=1
i!

It can be proved that these formulas remain valid for an M/G/n/n system as well
1
where E(S) = .
µ

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

Example 14 In busy parking lot cars arrive according to a Poisson process one in 20
seconds and stay there in the average of 10 minutes.
How many parking places are required if the probability of a loss is no to exceed 1% ?

64
Solution:
λ 10
ρ= = 1 = 30, Pn = 0.01.
µ 3
Following a normal approximation
n+ 12 −ρ n− 12 −ρ
   
ρn −ρ
e Φ √
ρ
−Φ √
ρ
Pn = 0.01 = n! 1  = .
n+ 12 −ρ
 
n+ 2 −ρ
Φ √
ρ
Φ √
ρ

Thus
n + 21 − ρ n − 21 − ρ
   
0.99Φ √ =Φ √ .
ρ ρ
It is not dicult to verify by using the Table for the standard normal distribution that
n = 41.

Thus the approximation value of P41 is 0.009917321712214377,


and the exact value is 0.01043318100246811.

Example 15 A telephone exchange consists of 50 lines and calls arrive according to a


Poisson process, the mean interarrival time is 10 minutes. The mean service time is 5
minutes.
Find the main performance measures.
Solution:
Using Poisson approximation where ρ = µλ = 0.5
P50 = 0.00000, event for n = 6
P6 = 0, 00001. This means that a call is almost never lost.
Mean number of busy lines can be obtain as
n = ρ(1 − Pn ) = ρ = 0.5 ,
The utilization of a line is
0.5 5 × 10−1
= = 10−2
50 5 × 10
The utilization of the system is
Ur = 1 − 0.606 = 0.394
The mean busy period of the system can be obtained as
(1 − P0 ) 0.394 0.394
Eδr = = = = 0.32 minutes
(λP0 ) 2 × 0.606 1.212
Mean idle period of a line is
n ρ 50 0, 5 1
e= − = − = 25 − = 24.75 minutes
λ(1 − Pn ) λ 2(1 − 0) 2 4

65
Heterogeneous Servers


In the case of an M/M /n/n system the service time distribution depends on the index
of the server. That is the service time is exponentially distributed with parameter µi for
server i. An arriving customer choose randomly among the idle servers, that is each idle
server is chosen with the same probability. Since the servers are heterogeneous it is not
enough to to the number of busy servers but we have to identify them by their index. It
means that we have to deal with general Markov-processes.

Let (i1 , . . . , ik ) denote the indexes of the busy servers, which are the combinations of n
objects taken k at a time without replacement. Thus the state space of the Markov-chain
is the set of these combinations, that is (0, (i1 , . . . , ik ) ∈ Ckn , k = 1, . . . , n).

Let us denote by

P0 = P (0),
P (i1 , . . . , ik ) = P ((i1 , . . . , ik )), (i1 , . . . , ik ) ∈ Ckn , k = 1, . . . , n
the steady-state distribution of the chain which exists since the chain has a nite state
space and it is irreducible. The set of steady-state balance equations can be written as

n
X
(2.26) λP0 = µj P (j)
j=1

k k
X λ X
(λ + µij )P (i1 , . . . , ik ) = P (i1 , . . . , ij−1 , ij+1 , . . . , ik )
n − k + 1 j=1
(2.27) j=1
X
+ µj P (i01 , . . . , i0k , j 0 )
j6=i1 ,...,ik

n
X  n
X
(2.28) µj P (1, . . . , n) = λ P (1, . . . , j − 1, j + 1, . . . , n)
j=1 j=1

where (i01 , . . . , i0k , j 0 ) denotes the ordered set i1 , . . . , ik , j , i−1 and in+1 are not dened.
Despite of the large number of unknowns, which is 2n , the solution is quite simple, namely
k
Y
(2.29) P (i1 , . . . , ik ) = (n − k)! %ij C,
j=1

λ
where %j = , j = 1, . . . , n, P0 = n!C , which can be determined by the help of the
µi
normalizing condition
n
X X
P0 + P (i1 , . . . , ik ) = 1.
k=1 (i1 ,...,ik )∈Ckn

66
Let us check the rst equation (2.26). By substitution we have
n
X λ
λn!C = µj (n − 1)!C = n!λC.
j=1
µj

Lets us check now the third equation (2.28)

n n n
λn λn−1 C λn
X  X X 
µj C=λ = µj C.
j=1
µ1 · · · µn j=1
µ 1 · · · µ j−1 µ j+1 · · · µ n µ 1 · · · µ n j=1

Finally, let us check the most complicated one, the second set of equations (2.27), namely
k
X k
Y
(λ + µij )(n − k)! %ij C
j=1 j=1
k
λ X λk−1 C
= (n − k + 1)!
n−k+1 µ · · · µij−1 µij+1 · · · µik
j=1 i1
X λk+1 µj C
+ (n − k − 1)!
j6=i1 ,...,ik
µi1 · · · µik µj
k
X µij λk C X λk C
= (n − k)! +λ (n − k − 1)!
µ · · · µik
j=1 i1 j6=i1 ,...,ik
µi1 · · · µik
k
λk C λk C
X 
= (n − k)! µij + λ(n − k)! ,
j=1
µ i 1 · · · µ i k
µ i 1 · · · µ i k

which shows the equality.

Thus the usual performance measures can be obtained as

ˆ the utilization of the j th server Uj can be calculated as


n
X X
Uj = P (i1 , . . . , ik ),
k=1 j∈(i1 ,...,ik )

and thus
1
µj
Uj = 1 ,
µj
+ E(ej )
where E(ej ) is the mean idle period of the j th server. Hence

1 1 − Uj
E(ej ) = .
µj Uj

67
Pn
ˆ N= j=1 Uj

ˆ The probability of loss is PB = P (1, . . . , n).

It should be noted that in this case the following relation also holds
n
X
λ(1 − PB ) = Uj µj .
j=1

In homogeneous case, that is when µj = µ, j = 1, . . . , n, after substitution we have

%k
%k %k
 
X n k
Pk = P (i1 , . . . , ik ) = (n − k)!% C = n!C = P0 = Pnk! %j
,
n
k k! k! j=0
(i1 ,...,ik )∈Ck j!

that is it reduces to the Erlang's formula derived earlier.

It should be noted that these formulas remains valid under generally distributed service
times with nite means with ρi = λE(Si ). In other words the Erlang's loss formula is
robust to the distribution of the service time, it does not depend on the distribution itself
but only on its mean.

2.7 The M/M/n Queue


It is a variation of the classical queue assuming that the service is provided by n servers
operating independently of each other. This modication is natural since if the mean
arrival rate is greater than the service rate the system will not be stable, that is why
the number of servers should be increased. However, in this situation we have parallel
services and we are interested in the distribution of rst service completion.
That is why we need the following observation.
Let Xi be exponentially distributed random variables with parameter µi , (i = 1, 2, ..., r)
and denote by Y their minimum. It is not dicult to see that Y is also exponentially
r
distributed with parameter µi since
P
i=1

P (Y < x) = 1 − P (Y ≥ x) = 1 − P (Xi ≥ x, i = 1, ..., r) =


r Pr
P (Xi ≥ x) = 1 − e−( µi )x
Y
=1− i=1 .
i=1

Similarly to the earlier investigations, it can easily be veried that the number of cus-
tomers in the system is a birth-death process with the following transition probabilities

Pk,k−1 (h) = (1 − (λh + o(h))) (µk h + o(h)) + o(h) = µk h + o(h),


Pk,k+1 (h) = (λh + o(h)) (1 − (µk h + o(h))) + o(h) = λh + o(h),

68
where

kµ
 , for 0 ≤ k ≤ n,
µk = min(kµ, nµ) =
nµ , for n < k .

It is understandable that the stability condition is λ/nµ < 1.


To obtain the distribution Pk we have to distinguish two cases according to as µk depends
on k . Thus if k < n, then we get
k−1  k
Y λ λ 1
Pk = P0 = P0 .
i=0
(i + 1)µ µ k!

Similarly, if k ≥ n, then we have


n−1 k−1  k
Y λ Y λ λ 1
Pk = P0 = P0 .
i=0
(i + 1)µ j=n nµ µ n!nk−n

In summary
ρk

 P
 k!
 0 , for k ≤ n,
Pk =
k n

P0 a n , for k > n,


n!
where
λ ρ
a= = < 1.
nµ n
This a is exactly the u tilization of a given server . Furthermore
n−1 k ∞
!−1
X ρ X ρk 1
P0 = 1 + + k−n
,
k=1
k! k=n
n! n

and thus !−1


n−1 k
X ρ ρn 1
P0 = + .
k=0
k! n! 1 − a
Since the arrivals follow a Poisson law the the distribution of the system at arrival instants
equals to the distribution at random moments, hence the probability that an arriving
customer has to wait is
∞ ∞
X X ρk 1
P (waiting) = Pk = P0 .
k=n
n! nk−n
k=n

that is it can be written as


ρn 1
ρn n
P (waiting) = n−1 n! 1 − a = n−1
n! n−ρ
= C(n, ρ).
X ρk ρn 1 X ρk ρn n
+ +
k=0
k! n! 1 − a k=0
k! n!(n − ρ)

69
This probability is frequently used in dierent practical problems, for example in tele-
phone systems, call centers, just to mention some of them. It is also a very famous formula
which is referred to as Erlang's C formula,or Erlang's delay formula and it is de-
noted by C(n, λ/µ).

The main performance measures of the systems can be obtained as follows

ˆ For the mean queue length we have


∞ ∞ ∞ λ n+j

X X X j µ
Q= (k − n)Pk = jPn+j = P0 =
k=n j=0 j=0
n!nj

∞ λ n λ n ∞ λ n ∞
  
X µ j µ
X daj d X j
µ
= j a P 0 = P0 a = P0 a a =
j=0
n! n! j=0
da n! da j=0
λ n

µ a ρ
= P0 = C(n, ρ).
n! (1 − a)2 n−ρ

ˆ For the m ean number of busy servers we obtain


n−1 n−2 k
!
X X ρn X 1ρ
n= kPk + nPk = P0 ρ + =
k=0 k=n k=0
k! (n − 1)! 1 − a
n−2 k !
ρn−1 ρn−1

X ρ 1
=ρ + + −1 P0 =
k=0
k! (n − 1)! (n − 1)! 1 − a
n−1 k
!
X ρ ρn 1 1
=ρ + P0 = ρ P0 = ρ.
k=0
k! n! 1 − a p0

ˆ For the mean number of customers in the system we get


X n−1
X ∞
X ∞
X
N= kPk = kPk + (k − n)Pk + nPk = n + Q
k=0 k=0 k=n k=n
ρ
=ρ+ C(n, ρ),
n−ρ
which is understandable since a customer is either in the queue or in service. Let
us denote by S -gal the mean number of idle servers. Then it is easy to see that

n = n − S,
λ
S =n− ,
µ

70
thus
N = n − S + Q,
hence
N − n = Q − S.
ˆ Distribution of the waiting time

An arriving customer has to wait if at his arrival the number of customers in the
system is at least n. In this case the time while a customer is serviced is exponentially
distributed with parameter nµ, consequently if there n + j customers in the system
the waiting time is Erlang distributed with parameters (j + 1, nµ). By applying the
theorem of total probability for the density function of the waiting time we have

X xj −nµx
fW (x) = Pn+j (nµ)j+1 e .
j=0
j!

Substituting the distribution we get


∞ λ n

X µ xj −nµx
fW (x) = P0 j
a (nµ)j+1 e
j=0
n! j!
λ n ∞

P0 µ −nµx
X (anµx)j
= nµe
n! j=0
j!
λ n

µ
= P0 nµe−(nµ−λ)x
n!
λ n
µ
= P0 nµe−nµ(1−a)x
n!
λ n
µ 1
= P0 nµ(1 − a)e−nµ(1−a)x
n! 1−a
= P (waiting)nµ(1 − a)e−nµ(1−a)x .

Hence for the complement of the distribution function we obtain


Z∞
P (W > x) = fW (u)du = P (waiting)e−nµ(1−a)x
x

= C(n, ρ) · e−µ(n−ρ)x .
Therefore the distribution function can be written as
FW (x) = 1 − P (waiting) + P (waiting) 1 − e−nµ(1−a)x


= 1 − P (waiting)e−nµ(1−a)x = 1 − C(n, ρ) · e−µ(n−ρ)x .


Consequently the mean waiting time can be calculated as
Z∞ λ n

µ 1 1
W = xfW (x)dx = P0 = C(n, ρ).
n! (1 − a)2 nµ µ(n − ρ)
0

71
It is not dicult to see that
C(n, ρ)(2 − C(n, ρ))
V ar(W ) = .
(µ(n − ρ))2

ˆ Distribution of the response time

The service immediately starts if at arrival the number of customer in the system
is than n. However, if the arriving customer has to wait then the response time is
the sum of this waiting and service times. By applying the law of total probability
for the density function of the response time we get

fT (x) = P (no waiting)µe−µx + fW +S (x)


As we have proved

fW (x) = P (waiting)e−nµ(1−a)x nµ(1 − a).

Thus
Zz
fW +S (z) = fW (x)µe−µ(z−x) dx =
0

Zz
= P (waiting)nµ(1 − a)µ e−nµ(1−a)x e−µ(z−x) dx =
0

Zz
ρn 1
= P0 nµ(1 − a)µe−zµ e−µ(n−1−λ/µ)x dx =
n! (1 − a)
0

ρn 1
e−µz 1 − e−µ(n−1−λ/µ)z .

= P0 nµ
n! n − 1 − λ/µ
Therefore   n 
λ P0
fT (x) = 1− µe−µx +
µ n!(1 − a)
λ n

µ 1
e−µx 1 − e−µ(n−1−λ/µ)x =

+ nµP0
n! n − 1 − λ/µ

λ n λ n
  !
µ
P0 µ1
= µe−µx 1 − e−µ(n−1−λ/µ)x

1− + nP0 =
n!(1 − a) n! n − 1 − λ/µ

λ n
 !
µ
P0
1 − (n − λ/µ)e−µ(n−1−λ/µ)x
= µe−µx 1+ .
n!(1 − a) n − 1 − λ/µ

72
Consequently for the complement of the distribution function of the response time
we have
Z∞
P (T > x) = fT (y)dy =
x

Z∞ λ n

P0
!
µ 1
= µe−µy + µe−µy − µ(n − λ/µ)e−µ(n−λ/µ)y dy =
n!(1 − a) n − 1 − λ/µ
x

 n
−µx λ 1
e−µx − e−µ(n−λ/µ)x =

=e + P0
µ n!(1 − a)(n − 1 − λ/µ)

λ n
 !
µ
P0
1 − e−µ(n−1−λ/µ)x
= e−µx 1+ .
n!(1 − a) n − 1 − λ/µ
Thus the distribution function can be written as

FT (x) = 1 − P (T > x).

In addition for the mean response time we obtain


Z∞ λ n

1 1 µ 1 1
T = xfT (x)dx = + P0 2
= + W,
µ nµ n! (1 − a) µ
0

as it was expected.
In stationary case the mean number of arriving customer should be equal to the
mean number of departing customers, so the mean number of customer in the system
is equal to the number of customers arrived during a mean response time. That is

λT = N = Q + n,
in addition
λW = Q.
These are the Little's formulas, that can be proved by simple calculations. As we
have seen
ρn
N = ρ + P0 a.
n!(1 − a)2
Since
λ n

1 1 µ 1
T = + P0 ,
µ nµ n! (1 − a)2
thus
λ ρn a
λT = + P0 ,
µ n! (1 − a)2

73
that is
N = λT ,
λ
because = ρ.
µ

Furthermore
Q = λW ,
since
n = ρ.

ˆ Overall utilization of the servers can be obtained as


The utilization of a single server is
n−1 ∞
X k X n̄
Us = Pk + Pk = = a.
k=1
n k=n
n

Hence the overall utilization can be written as

Un = nUs = n̄.

ˆ The mean busy period of the system can be computed as


The system is said to be idle if the is no customer in the system, otherwise the
system is busy. Let Eδr denote the mean busy period of the system. Then the
utilization of the system is

Eδr
Ur = 1 − P0 = 1 ,
λ
+ Eδr
thus
1 − P0
Eδr = .
λP0
If the individual servers are considered then we assume that a given server becomes
busy earlier if it became idle earlier. Hence if j < n customers are in the system
then the number of idle servers is n − j .
Let as consider a given server. On the condition that at the instant when it became
idle the number of customers in the system was j its mean idle time is

n−j
ej = .
λ
The probability of this situation is

Pj
aj = n−1
.
X
Pi
i=0

74
Then applying the law of total expectations for its mean idle period we have
n−1 n−1
X X (n − j)Pj S
e= aj ej = Pn−1 = ,
j=0 j=0
λ i=0 Pi λP (e)

where P (e) = 1 − C(n, ρ) denotes the probability that an arriving customer nd an
idle server.

Since

Us = a = ,
e + Eδ
thus
ae = (1 − a)Eδ,
where Eδ denotes its mean busy period.

Hence
a S
Eδ = .
1 − a λP (e)

In the case of n = 1 it reduces to


λ
S = 1 − a, P (e) = P0 = 1 − a, a= ,
µ
thus
1
Eδ = ,
µ−λ
which was obtained earlier.
In the following we are going to show what is the connection between these two famous
Erlang's formulas. Namely, rst we prove how the delay formula can be expressed by the
help of loss formula, that is
λ m
(µ )
( µλ )m 1
 
λ 1 m!
C m, = λ P λ k λ m =P λ m λ m
µ m! 1 − mµ m−1 (µ )
+
(µ ) 1 m−1 (µ )
(1 − λ
) +
(µ )
k=0 k! m! 1− λ k=0 m! mµ m!

B(m, µλ ) B(m, λ
µ
)
= λ λ λ
= λ
.
(1 − B(m, µ
))(1 − mµ ) + B(m, µ
) 1− mµ
(1 − B(m, µλ ))

As we have seen in the previous investigations the delay probability C(n, ρ), plays an
important role in determining the main performance measures. Notice that the above
formula can be rewritten as
nB(n, ρ)
C(n, ρ) = > B(n, ρ),
n − ρ + ρB(n, ρ)

75
moreover it can be proved that there exists a recursion for it, namely
ρ(n − 1 − ρ) · C(n − 1, ρ)
C(n, ρ) = ,
(n − 1)(n − ρ) − ρC(n − 1, ρ)
starting with the value C(1, ρ) = ρ.
If the quality of service parameter is C(n, ρ) then it is easy to see that there exists an
olyan n∗α , for which C(n∗α , ρ) < α. This n∗α can easily be calculated by a computer using
the above recursion.

Let us show another method for calculating this value. As we have seen earlier the prob-
ability of loss can be approximated as
 
ϕ n−ρ

ρ
B(n, ρ) ≈ √  .
ρφ n−ρ√
ρ


Let k = √ ,
n−ρ
ρ
thus n = ρ + ρk . Hence

nB(n, ρ) (ρ + k ρ) √ϕ(k)
ρφ(k)
C(n, ρ) = ≈ √
n − ρ + ρB(n, ρ) ρ + k k − ρ + ρ √ϕ(k)
ρφ(k)
√ ϕ(k) −1
ρ φ(k)

φ(k)
≈√   = 1+k .
ρ k + ϕ(k) ϕ(k)
φ(k)

That is if we would like to nd such an n∗α for which C(n∗α , ρ) < α, then we have to solve
the following equation
 −1
φ(kα )
1 + kα ≈α
ϕ(kα )
which can be rewritten as
φ(kα ) 1−α
kα =
ϕ(kα ) α
If kα is given then

n∗α = ρ + kα ρ.

It should be noted that the search for kα is independent of the value of ρ and n thus it
can be calculated for various values of α.

For example, if α = 0.8, 0.5, 0.2, 0.1,


then the corresponding kα -as are 0.1728, 0.5061, 1.062, 1.420.

The formula n∗α = ρ + kα ρ is called as square-root stang rule. As we can see in
the following Table it gives a very good approximation, see Tijms [115].

76
Table 2.1: Exact and approximated values of n∗

α = 0.5 α = 0.2 α = 0.1


exact approximation exact approximation exact approximation
ρ=1 2 2 3 3 3 3
ρ=5 7 7 8 8 9 9
ρ = 10 12 12 14 14 16 15
ρ = 50 54 54 58 58 61 61
ρ = 100 106 106 111 111 115 115
ρ = 250 259 259 268 267 274 273
ρ = 500 512 512 525 524 533 532
ρ = 1000 1017 1017 1034 1034 1046 1045

Let us see an example for illustration.


Let us consider two service centers which operate separately. Then using this rule overall
we have to use 2(ρ + kα√ρ) servers. However, if we have a joint queue, then to get the same
service level we should use 2ρ + kα√(2ρ) servers. The reduction is (2 − √2)kα√ρ, which is
the reason that the joint queue is used in practice.

C(n, ρ) is of great importance in practical problems hence so-called calculators have been
developed and can be used at the link

http://www.erlang.com/calculator/

Separated M/M/1 and common queue M/M/2 systems
\[
C(2,\rho)=\frac{\dfrac{\rho^2}{2-\rho}}{1+\rho+\dfrac{\rho^2}{2-\rho}}
=\frac{\rho^2}{2-\rho+2\rho-\rho^2+\rho^2}=\frac{\rho^2}{2+\rho}.
\]
Thus
\[
Q=\frac{\rho}{2-\rho}\,\frac{\rho^2}{2+\rho},\qquad
W=\frac{\rho^2}{\mu(4-\rho^2)}.
\]
Therefore
\[
N=\rho+Q=\frac{\rho(4-\rho^2)+\rho^3}{4-\rho^2}=\frac{4\rho}{4-\rho^2}
=\frac{\rho}{1-\left(\frac{\rho}{2}\right)^2}.
\]
Thus by using the Little formula we have
\[
T=\frac{1}{\mu}\,\frac{1}{1-\left(\frac{\rho}{2}\right)^2}=\frac{4}{\mu(4-\rho^2)}.
\]
Example 16 Let us consider 2 separated M/M/1 queues with arrival intensities λ1, λ2 and
with the same service intensity µ. Of course λ1 < µ, λ2 < µ. Aggregate the arrival
processes and consider a 2-server system with service intensity µ at each server. Assume
that λ1 ≥ λ2.

Total number of customers in the aggregated system is N1 + N2 .

1. Show that
T < T1

2. Find the condition that implies


T < T2 ,

where T1 , T2 are the mean response times for the separated queues and T denotes the
mean response time for the M/M/2 system.

78
Solution:
It is obvious that
\[
T_2=\frac{1}{\mu-\lambda_2}\le T_1=\frac{1}{\mu-\lambda_1},
\qquad
T=\frac{1}{\mu\left(1-\left(\frac{\lambda_1+\lambda_2}{2\mu}\right)^2\right)}.
\]
First we prove that
\[
\frac{1}{\mu\left(1-\left(\frac{\lambda_1+\lambda_2}{2\mu}\right)^2\right)}<\frac{1}{\mu-\lambda_1},
\]
\[
\frac{4\mu^2}{\mu\left(4\mu^2-(\lambda_1+\lambda_2)^2\right)}<\frac{1}{\mu-\lambda_1},
\]
\[
4(\mu-\lambda_1)\mu<4\mu^2-(\lambda_1+\lambda_2)^2,
\]
\[
(\lambda_1+\lambda_2)^2<4\lambda_1\mu,
\]
which holds since
\[
(\lambda_1+\lambda_2)^2\le(2\lambda_1)^2=4\lambda_1\lambda_1<4\lambda_1\mu,
\]
because λ1 ≥ λ2 and λ1 < µ.
Similarly,
\[
T<\frac{1}{\mu-\lambda_2}\quad\text{if and only if}\quad(\lambda_1+\lambda_2)^2<4\lambda_2\mu.
\]
If λ1 = λ2 = λ then the condition reads (2λ)² < 4λµ, which is valid since λ < µ.

79
Separated and common queues

For the separated queues the total number of customers in the aggregated system is N1 + N2,
while N denotes the number of customers in the common-queue M/M/2 system.

Show that N < N1 + N2.

We have to prove that
\[
\frac{2\,\frac{\rho_1+\rho_2}{2}}{1-\left(\frac{\rho_1+\rho_2}{2}\right)^2}
<\frac{\rho_1}{1-\rho_1}+\frac{\rho_2}{1-\rho_2},
\]
that is
\[
\frac{4(\rho_1+\rho_2)}{4-(\rho_1+\rho_2)^2}
<\frac{\rho_1}{1-\rho_1}+\frac{\rho_2}{1-\rho_2},
\]

 
2 2
4 ρ1 (ρ2 − 1) + ρ2 (ρ1 − 1) <
 
(ρ1 + ρ2 ) ρ21 (2ρ2 − 1) + ρ22 (2ρ1 − 1) .

80
After arrangement we get
\[
\rho_1^2\Big[(2\rho_2-1)(\rho_1+\rho_2)+4(1-\rho_2)\Big]
+\rho_2^2\Big[(2\rho_1-1)(\rho_1+\rho_2)+4(1-\rho_1)\Big]>0.
\]
We show that
\[
(2\rho_i-1)(\rho_1+\rho_2)+4(1-\rho_i)>0,\qquad i=1,2,
\]
that is
\[
(\rho_1+\rho_2)(1-2\rho_i)<4(1-\rho_i).
\]
It is easy to see that
\[
(\rho_1+\rho_2)(1-2\rho_i)<(\rho_1+\rho_2)(1-\rho_i)<2(1-\rho_i)<4(1-\rho_i),\qquad i=1,2.
\]
From this the statement follows.

If λ1 = λ2 = λ then ρ < 1, furthermore for the aggregated M/M/2 and the combined
separated M/M/1 systems we get
\[
N=\frac{2\rho}{1-\rho^2},\qquad N_1+N_2=\frac{2\rho}{1-\rho}.
\]
That is
\[
\frac{2\rho}{1-\rho^2}=\frac{2\rho}{(1-\rho)(1+\rho)}<\frac{2\rho}{1-\rho}.
\]
Hence
\[
\frac{N}{N_1+N_2}=\frac{\frac{2\rho}{1-\rho^2}}{\frac{2\rho}{1-\rho}}=\frac{1}{1+\rho}>\frac12.
\]
In other form, by using the Little formula we get
\[
N=\frac{1}{1+\rho}\,(N_1+N_2)=2\lambda T.
\]
Thus
\[
T=\frac{1}{\mu(1-\rho^2)}=\frac{1}{\mu(1-\rho)(1+\rho)}.
\]
It is easy to see that
\[
T_1=T_2=\frac{1}{\mu-\lambda}=\frac{1}{\mu(1-\rho)}.
\]
Consequently
\[
\frac{T}{T_1}=\frac{\frac{1}{\mu(1-\rho)(1+\rho)}}{\frac{1}{\mu(1-\rho)}}=\frac{1}{1+\rho}>\frac12,
\qquad T=\frac{1}{1+\rho}\,T_1.
\]

81
Separated versus common M/M/2 queues

Three configurations are compared:

1. two separated M/M/1 queues, each with arrival rate λ/2 and service rate µ;

2. a common-queue M/M/2 system with arrival rate λ and service rate µ at each server;

3. a single M/M/1 queue with arrival rate λ and service rate 2µ.

Compare the queues with respect to the mean response time with the same traffic intensity.
Solution:
\[
T_1=\frac{1}{\mu\left(1-\frac{\lambda}{2\mu}\right)}=\frac{1}{\mu\left(1-\frac{\rho}{2}\right)}=\frac{2}{\mu(2-\rho)},
\]
\[
T_2=\frac{4}{\mu(4-\rho^2)}=\frac{4}{\mu(2-\rho)(2+\rho)},
\]
\[
T_3=\frac{1}{2\mu\left(1-\frac{\rho}{2}\right)}=\frac{2}{2\mu(2-\rho)}=\frac{1}{\mu(2-\rho)}.
\]
Thus for the comparison we have
\[
\frac{1}{\mu(2-\rho)}<\frac{4}{\mu(2-\rho)(2+\rho)}=\frac{2}{\mu(2-\rho)}\,\frac{2}{2+\rho}<\frac{2}{\mu(2-\rho)},
\]
since ρ < 2, thus
\[
T_3<T_2<T_1.
\]

Server Farms and Distributed Server Systems


In the server farm shown in Figure 2.5 jobs arrive according to a Poisson process with
rate λ and are probabilistically split between two servers, with p fraction of the jobs going
to server 1, which has service rate µ1 , and q = 1 − p fraction going to server 2, which
has service rate µ2 . Assume that job sizes are exponentially distributed.
It is easy to see that the response time of an arbitrary job is hyperexponential with
parameters p, 1 − p, and µ1 − pλ, µ2 − (1 − p)λ. The number of customers in the system and in the
queues are the sums of the corresponding numbers in the separated M/M/1 systems, and
the distribution of the waiting and response times can be calculated with the help of the law
of total probability.

82
We can formulate the following two optimization problems:
ˆ If we have a total service capacity of µ for the two servers, how should we
optimally split µ between the two servers, into µ1 and µ2 , where µ = µ1 + µ2 , so
as to minimize mean response time E(T )? We assume that p ≥ 1/2.

ˆ How can we choose the probability p so as to minimize E(T ) ?

Figure 2.5: Server farms and distributed server systems

Let us see the solutions, rst the total service capacity case.
Consider a frequently used option called balanced load. In this situation
\[
\frac{\lambda p}{\mu_1}=\frac{\lambda(1-p)}{\mu_2}=\frac{\lambda(1-p)}{\mu-\mu_1},
\qquad p(\mu-\mu_1)=(1-p)\mu_1,\qquad \mu_1=p\mu.
\]
In this case
\[
E(T_b)=\frac{p}{\mu_1-\lambda p}+\frac{1-p}{\mu-\mu_1-\lambda(1-p)}
=\frac{p}{p(\mu-\lambda)}+\frac{1-p}{(1-p)(\mu-\lambda)}=\frac{2}{\mu-\lambda}.
\]
The optimization problem can be formulated as follows: minimize
\[
E(T)=\frac{p}{\mu_1-\lambda p}+\frac{1-p}{\mu_2-(1-p)\lambda}
\]
subject to
\[
\lambda p<\mu_1,\qquad \lambda(1-p)<\mu_2,\qquad \mu_1+\mu_2=\mu,\qquad p\ge 1/2.
\]
To find the optimal value we have to find the roots of the following equation
\[
\frac{dE(T)}{d\mu_1}=\frac{-p}{(\mu_1-\lambda p)^2}+\frac{1-p}{(\mu-\mu_1-(1-p)\lambda)^2}=0,
\]
that is
\[
\frac{1-p}{\big(\mu-\lambda-(\mu_1-\lambda p)\big)^2}=\frac{p}{(\mu_1-\lambda p)^2}.
\]
Let us introduce the notation
\[
x=\mu_1-\lambda p,
\]
then we can rewrite the equation as
\[
(1-p)x^2=p\left((\mu-\lambda)^2+x^2-2(\mu-\lambda)x\right),
\]
\[
(2p-1)x^2-2p(\mu-\lambda)x+p(\mu-\lambda)^2=0.
\]
83
The solution is
\[
x_{1,2}=\frac{2p(\mu-\lambda)\pm\sqrt{4p^2(\mu-\lambda)^2-4(\mu-\lambda)^2p(2p-1)}}{2(2p-1)}
=\frac{(\mu-\lambda)\left(p\pm\sqrt{p(1-p)}\right)}{2p-1}
\]
\[
=\frac{\sqrt{p}\,(\mu-\lambda)\left(\sqrt{p}\pm\sqrt{1-p}\right)}{2p-1}
=(\mu-\lambda)\,\frac{\sqrt{p}}{\sqrt{p}\mp\sqrt{1-p}}.
\]
Since
\[
\frac{\sqrt{p}}{\sqrt{p}-\sqrt{1-p}}>1,
\]
the solution is
\[
x^*=(\mu-\lambda)\,\frac{\sqrt{p}}{\sqrt{p}+\sqrt{1-p}},
\]
thus the optimal value is
\[
\mu_1^*=\lambda p+(\mu-\lambda)\,\frac{\sqrt{p}}{\sqrt{p}+\sqrt{1-p}}.
\]

Hence the extra service intensity above λp is
\[
(\mu-\lambda)\,\frac{\sqrt{p}}{\sqrt{p}+\sqrt{1-p}}.
\]
Let us show that
\[
\frac{\sqrt{p}}{\sqrt{p}+\sqrt{1-p}}\le p,
\]
that is
\[
1\le p\left(p+1-p+2\sqrt{p(1-p)}\right)=p\left(1+2\sqrt{p(1-p)}\right)\le p(1+2\cdot 1/2)=2p,
\]
hence p ≥ 1/2, which is true.

Thus the optimal mean response time is
\[
E(T_m)=\frac{p}{\mu_1^*-\lambda p}+\frac{1-p}{\mu_2^*-\lambda(1-p)}
=\frac{p}{(\mu-\lambda)\frac{\sqrt{p}}{\sqrt{p}+\sqrt{1-p}}}
+\frac{1-p}{(\mu-\lambda)\frac{\sqrt{1-p}}{\sqrt{p}+\sqrt{1-p}}}
\]
\[
=\frac{\sqrt{p}\left(\sqrt{p}+\sqrt{1-p}\right)}{\mu-\lambda}
+\frac{\sqrt{1-p}\left(\sqrt{p}+\sqrt{1-p}\right)}{\mu-\lambda}
=\frac{\left(\sqrt{p}+\sqrt{1-p}\right)^2}{\mu-\lambda}
=\frac{1+2\sqrt{p(1-p)}}{\mu-\lambda}
\le\frac{1+2\cdot\frac12}{\mu-\lambda}=\frac{2}{\mu-\lambda}=E(T_b),
\]
which was expected since it is the minimal value. At the same time we can see that the optimal
distribution of the total capacity is not proportional to p, as it is under the balanced load.
However, if p = 1/2 then the balanced load value and the minimal value are the same.
In other words, if we choose the servers with the same probability 1/2, which often
happens because we have no information about the speed of the servers, then we have to
give each server half of the total service capacity, because this minimizes the mean response time.
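The two splitting rules can be compared numerically with a short Python sketch (the parameter values below are arbitrary illustrations):

import math

def mean_T(lam, mu1, mu2, p):
    """E(T) of two parallel M/M/1 queues fed by probabilistic splitting."""
    return p / (mu1 - lam * p) + (1 - p) / (mu2 - lam * (1 - p))

lam, mu, p = 1.0, 3.0, 0.7
mu1_bal = p * mu                                           # balanced load split
mu1_opt = lam * p + (mu - lam) * math.sqrt(p) / (math.sqrt(p) + math.sqrt(1 - p))

print("balanced:", mean_T(lam, mu1_bal, mu - mu1_bal, p))   # equals 2/(mu - lam)
print("optimal :", mean_T(lam, mu1_opt, mu - mu1_opt, p))   # equals (1 + 2*sqrt(p(1-p)))/(mu - lam)
print("formulas:", 2 / (mu - lam), (1 + 2 * math.sqrt(p * (1 - p))) / (mu - lam))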

Let us see the solution to the minimization problem with respect to p.

We have the same expected response time function, namely
\[
E(T)=\frac{p}{\mu_1-p\lambda}+\frac{1-p}{\mu_2-(1-p)\lambda},
\qquad \mu_1=\alpha\mu_2,\quad \alpha\ge 1,\quad p\le 1.
\]
We have to find the root of the derivative, that is
\[
\frac{dE(T)}{dp}=\frac{(\alpha\mu_2-p\lambda)+\lambda p}{(\alpha\mu_2-p\lambda)^2}
-\frac{(\mu_2-(1-p)\lambda)+\lambda(1-p)}{(\mu_2-(1-p)\lambda)^2}=0,
\]
that is
\[
\frac{\alpha\mu_2}{(\alpha\mu_2-p\lambda)^2}=\frac{\mu_2}{(\mu_2-(1-p)\lambda)^2},
\qquad
\alpha\left(\mu_2-(1-p)\lambda\right)^2=(\alpha\mu_2-p\lambda)^2.
\]
Since
\[
\mu_2>(1-p)\lambda,\qquad \alpha\mu_2>p\lambda,\qquad p\le 1,
\]
we may take square roots:
\[
\sqrt{\alpha}\,(\mu_2-(1-p)\lambda)=\alpha\mu_2-p\lambda,
\qquad
\sqrt{\alpha}\,(\mu_2-\lambda+\lambda p)=\alpha\mu_2-\lambda p,
\qquad
(\lambda\sqrt{\alpha}+\lambda)p=\mu_2(\alpha-\sqrt{\alpha})+\lambda\sqrt{\alpha}.
\]

Thus
\[
p=\frac{\mu_2(\alpha-\sqrt{\alpha})+\lambda\sqrt{\alpha}}{\lambda(1+\sqrt{\alpha})}.
\]
In addition, since p is a probability, its value should not be greater than 1, that is we have
another condition
\[
\frac{\mu_2(\alpha-\sqrt{\alpha})+\lambda\sqrt{\alpha}}{\lambda(1+\sqrt{\alpha})}\le 1,
\]
which results in
\[
\mu_2(\alpha-\sqrt{\alpha})\le\lambda.
\]
It is easy to see that
\[
\frac{\mu_2(\alpha-\sqrt{\alpha})+\lambda\sqrt{\alpha}}{\lambda(1+\sqrt{\alpha})}\ge\frac12,
\]
since
\[
2\left(\mu_2(\alpha-\sqrt{\alpha})+\lambda\sqrt{\alpha}\right)\ge\lambda(1+\sqrt{\alpha})
\quad\Longleftrightarrow\quad
2\mu_2(\alpha-\sqrt{\alpha})+\lambda(\sqrt{\alpha}-1)\ge 0.
\]
Thus the conditions are
\[
\mu_2(\alpha-\sqrt{\alpha})\le\lambda,\qquad \lambda p<\alpha\mu_2,\qquad (1-p)\lambda<\mu_2.
\]
In other words, if µ1, µ2 are fixed, then p = 1/2 does not minimize the expected response time
unless the two rates are equal. If α = 1, then µ1 = µ2 = µ/2, and p = λ/(2λ) = 1/2.

In the case of balanced load we have
\[
\frac{p}{\mu}=\frac{1-p}{\mu},
\]
thus p = 1/2, that is the optimal value of p and the value obtained by using the balanced
load principle are the same, and thus the minimum expected response times are the same, too.

Let us calculate the minimal value, that is
\[
E(T)=\frac{p}{\mu_1-\lambda p}+\frac{1-p}{\mu_2-(1-p)\lambda},
\qquad
p=\frac{\mu_2(\alpha-\sqrt{\alpha})+\sqrt{\alpha}\,\lambda}{\lambda(1+\sqrt{\alpha})},
\qquad \mu_1=\alpha\mu_2.
\]
First,
\[
1-p=\frac{\lambda+\lambda\sqrt{\alpha}-\mu_2(\alpha-\sqrt{\alpha})-\sqrt{\alpha}\,\lambda}{\lambda(1+\sqrt{\alpha})}
=\frac{\lambda-\mu_2(\alpha-\sqrt{\alpha})}{\lambda(1+\sqrt{\alpha})}.
\]
Furthermore,
\[
\mu_1-\lambda p=\alpha\mu_2-\frac{\mu_2(\alpha-\sqrt{\alpha})+\sqrt{\alpha}\,\lambda}{1+\sqrt{\alpha}}
=\frac{\sqrt{\alpha}\left(\mu_2(1+\alpha)-\lambda\right)}{1+\sqrt{\alpha}},
\qquad
\mu_2-(1-p)\lambda=\frac{\mu_2(1+\alpha)-\lambda}{1+\sqrt{\alpha}}.
\]
Hence
\[
E(T)=\frac{\mu_2(\alpha-\sqrt{\alpha})+\sqrt{\alpha}\,\lambda}{\lambda\sqrt{\alpha}\left(\mu_2(1+\alpha)-\lambda\right)}
+\frac{\lambda-\mu_2(\alpha-\sqrt{\alpha})}{\lambda\left(\mu_2(1+\alpha)-\lambda\right)}
=\frac{\mu_2(\sqrt{\alpha}-1)+\lambda+\lambda-\mu_2(\alpha-\sqrt{\alpha})}{\lambda\left(\mu_2(1+\alpha)-\lambda\right)}
=\frac{\mu_2(2\sqrt{\alpha}-\alpha-1)+2\lambda}{\lambda\left(\mu_2(1+\alpha)-\lambda\right)}.
\]
Thus the minimum response time is
\[
E(T_m)=\frac{\mu_2(2\sqrt{\alpha}-\alpha-1)+2\lambda}{\lambda\left(\mu_2(1+\alpha)-\lambda\right)}.
\]

If α = 1 then µ1 = µ2 = µ and
\[
E(T_m)=\frac{2\lambda}{\lambda(2\mu-\lambda)}=\frac{2}{2\mu-\lambda}.
\]
In the case of balanced load we have
\[
\frac{p}{\mu_1}=\frac{1-p}{\mu_2},\qquad \frac{p}{\alpha\mu_2}=\frac{1-p}{\mu_2},\qquad p=\frac{\alpha}{\alpha+1}.
\]
Thus in this case the mean value is
\[
E(T_b)=\frac{\frac{\alpha}{\alpha+1}}{\alpha\mu_2-\frac{\alpha}{\alpha+1}\lambda}
+\frac{\frac{1}{\alpha+1}}{\mu_2-\frac{1}{1+\alpha}\lambda}
=\frac{1}{\mu_2(1+\alpha)-\lambda}+\frac{1}{\mu_2(1+\alpha)-\lambda}
=\frac{2}{\mu_2(1+\alpha)-\lambda}.
\]

87
Hence we have the final result for the two cases, namely for the balanced load
\[
E(T_b)=\frac{2}{\mu_2(1+\alpha)-\lambda},
\]
and the minimal value is
\[
E(T_m)=\frac{\mu_2(2\sqrt{\alpha}-\alpha-1)+2\lambda}{\lambda\left(\mu_2(1+\alpha)-\lambda\right)}.
\]
Comparing them we have
\[
\frac{E(T_m)}{E(T_b)}=\frac{\mu_2(2\sqrt{\alpha}-\alpha-1)+2\lambda}{2\lambda}
=\frac{2\lambda-\mu_2(\sqrt{\alpha}-1)^2}{2\lambda}\le 1.
\]
If α = 1 then µ1 = µ2 = µ and
\[
E(T_m)=E(T_b)=\frac{2}{2\mu-\lambda}.
\]

Usually the customer has no information about the service speed in advance and that is
why p = 1/2, that is the expected response time is not minimal.

Let us compare numerically the mean of the total number of customers in the hetero-
geneous system M/M/2, that is two separated queues with dierent service intensities
µ1 = 10, µ2 = 2 and λ = 1, p = 0.9, with the corresponding homogeneous system
M/M/2, when the service rate is (µ1 + µ2 )/2 = 6.
Using QSA we get N̄1 + N̄2 = 0.0989 + 0.0526 = 0.1515 and N̄ = 0.168 which means that
the combined separated system is preferable.

However, if p = 0.5 then the M/M/2 common queue system is always better with respect
to the mean total number of customers in the system. The proof is the following.
We show that
\[
\frac{\frac{\lambda}{2\mu_1}}{1-\frac{\lambda}{2\mu_1}}
+\frac{\frac{\lambda}{2\mu_2}}{1-\frac{\lambda}{2\mu_2}}
>\frac{\frac{2\lambda}{\mu_1+\mu_2}}{1-\left(\frac{\lambda}{\mu_1+\mu_2}\right)^2}.
\]
Thus,
\[
\frac{1}{2\mu_1-\lambda}+\frac{1}{2\mu_2-\lambda}
>\frac{2}{(\mu_1+\mu_2)-\frac{\lambda^2}{\mu_1+\mu_2}},
\]
\[
\frac{2\mu_1-\lambda+2\mu_2-\lambda}{(2\mu_1-\lambda)(2\mu_2-\lambda)}
>\frac{2(\mu_1+\mu_2)}{(\mu_1+\mu_2)^2-\lambda^2},
\]
\[
\frac{\mu_1+\mu_2-\lambda}{(2\mu_1-\lambda)(2\mu_2-\lambda)}
>\frac{\mu_1+\mu_2}{(\mu_1+\mu_2+\lambda)(\mu_1+\mu_2-\lambda)},
\]
\[
\frac{(\mu_1+\mu_2-\lambda)^2}{(2\mu_1-\lambda)(2\mu_2-\lambda)}
>\frac{\mu_1+\mu_2}{\mu_1+\mu_2+\lambda}.
\]
We show that
\[
\frac{(\mu_1+\mu_2-\lambda)^2}{(2\mu_1-\lambda)(2\mu_2-\lambda)}\ge 1,
\]
from which the inequality follows. That is
\[
(\mu_1+\mu_2-\lambda)^2\ge(2\mu_1-\lambda)(2\mu_2-\lambda),
\]
\[
(\mu_1+\mu_2)^2+\lambda^2-2\lambda(\mu_1+\mu_2)\ge 4\mu_1\mu_2-2\mu_1\lambda-2\mu_2\lambda+\lambda^2,
\]
\[
(\mu_1+\mu_2)^2\ge 4\mu_1\mu_2,
\qquad
(\mu_1-\mu_2)^2\ge 0,
\]
which is true. Since
\[
\frac{\mu_1+\mu_2}{\mu_1+\mu_2+\lambda}<1,
\]
we are ready with the proof.
It is not dicult to show that the probability of waiting of an arbitrary customer after
selecting a server is

pλ (1 − p)λ
P (W > 0) = PW = p + (1 − p) .
µ1 µ2

Let us consider the following example dealing with closed and open systems, see the
Figure below

Figure 2.6: Closed and open systems

It is proved that in any closed system where the number of circulated jobs N (level of
multiprogramming) is high enough the expected response/waiting time is minimized if
the balanced load principle is applied, see Harchol-Balter [46] from where the example is
taken. The optimal value is

E(Tm ) = E(Tb ) = N/(µ1 + µ2 ).

89
M/M/2 with heterogeneous servers and fastest free server service discipline
Consider a variant of the M/M/2 queue where the service rates of the two servers are
not identical. Denote the service rate of the rst server by µ1 and the service rate of the
second server by µ2 , where µ1 ≥ µ2 . In the case of heterogeneous servers, the rule is that
when both servers are idle, the faster server is scheduled for service before the slower one,
that is called FFS - Fastest Free Server But if there is only one server free when an
arrival occurs, it enters service with the free server regardless of the service rate. If both
servers are busy, the arriving customer waits in common line for service in the order of
arrival. Dene the utilization, a, for this system to be a = λ/(µ1 + µ2 ).
Let us determine the mean number of jobs in the system E(N ), mean response and
waiting time E(T ), E(W ), respectively.
It was proved, for example in Harchol-Balter [46] and Trivedi [118] that
1 µ1 µ2 (1 + 2a) 1
E(N ) = , A= + .
A(1 − a)2 λ(λ + µ2 ) 1−a
Using Little law E(T ) = E(N )/λ.

It can be shown that P (Q = i) = ai+1 /A, i = 0, 1, 2, ...∞ and thus


a2 E(Q) E(N ) − E(Q) a
E(Q) = , E(W ) = , E(S) = , P (W > 0) = .
A(1 − a)2 λ λ (1 − a)A
It is easy to see if µ1 = µ2 = µ then a = ρ/2 and there is no dierence between the servers
thus the corresponding measures are the same as in the homogeneous M/M/2 system,
that is
4ρ ρ3 1
E(N ) = , E(Q) = , E(S) = .
4 − ρ2 4 − ρ2 µ
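For a quick numerical check of the FFS formulas, a possible Python sketch is the following (the parameter values match the heterogeneous example used later in Table 2.2):

lam, mu1, mu2 = 1.0, 10.0, 2.0
a = lam / (mu1 + mu2)

A = mu1 * mu2 * (1 + 2 * a) / (lam * (lam + mu2)) + 1 / (1 - a)
EN = 1 / (A * (1 - a) ** 2)            # mean number in system
EQ = a ** 2 / (A * (1 - a) ** 2)       # mean queue length
ET = EN / lam                          # Little's law
EW = EQ / lam
PW = a / ((1 - a) * A)                 # probability of waiting

print("E(N) =", EN, " E(T) =", ET, " E(W) =", EW, " P(W>0) =", PW)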
M/M/2 with heterogeneous servers and random free server service discipline
Let us see the previous system with the exception that an arriving customer selects
between the free servers with the same probability, that is the selection probability is
0.5. Let us call this discipline RFS (Random Free Server). However, this small
modification makes the calculations rather complicated, but it can be treated numerically.
To do so, let us introduce the following notations: Let (c1 , c2 , k) be the state of the system,
where k is the number of customers in the queue, and ci is 1 if server i is busy and 0
otherwise. Let us denote by Π0,0,0 , Π0,1,0 , Π1,0,0 , Π1,1,k , k = 0, 1, 2, ... ∞ the steady-state
distribution of the system which exists if a < 1. These probabilities can be computed in
the following way
!, !
µ1 µ2 µ2 µ1 µ2
Π0,1,0 = µ1 + λ+ − Π1,1,0 ,
2λ + µ1 2 4λ + 2µ1
!
2µ2 1
Π1,0,0 = Π0,1,0 + Π1,1,0 , Π0,0,0 = (Π1,0,0 µ1 + Π0,1,0 µ2 )/λ,
2λ + µ1 2

Π1,1,k = P (Q = k) = ak Π1,1,0 , k = 0, 1, ..., ∞.

90
Of course Π1,1,0 should satisfy the normalizing condition, that is

Π0,0,0 + Π0,1,0 + Π1,0,0 + Π1,1,0 /(1 − a) = 1.

It can be obtained very easily: start the calculation with an arbitrary positive initial value
for Π1,1,0, compute the corresponding sum, and then divide each term by this sum.

If we have the distribution, the expectations can be calculated in the standard way, that
is
\[
E(N)=\Pi_{0,1,0}+\Pi_{1,0,0}+\sum_{k=0}^{\infty}(2+k)a^k\,\Pi_{1,1,0}
=\Pi_{0,1,0}+\Pi_{1,0,0}+\frac{2-a}{(1-a)^2}\,\Pi_{1,1,0},
\]
\[
E(Q)=\frac{a}{(1-a)^2}\,\Pi_{1,1,0},
\]
\[
E(T)=E(N)/\lambda,\qquad E(W)=E(Q)/\lambda,\qquad E(S)=(E(N)-E(Q))/\lambda,\qquad
P(W>0)=\frac{\Pi_{1,1,0}}{1-a}.
\]
It is not difficult to see that in the FFS and RFS cases
\[
P(W>x)=P_W\,e^{-(\mu_1+\mu_2-\lambda)x}.
\]
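Since the closed-form relations for the RFS case are somewhat involved, one may simply build the truncated Markov chain on the states (c1, c2, k) and solve the balance equations numerically; a possible Python sketch (with an arbitrary truncation level KMAX) is:

import numpy as np

lam, mu1, mu2, KMAX = 1.0, 10.0, 2.0, 200
states = [(0, 0, 0), (1, 0, 0), (0, 1, 0)] + [(1, 1, k) for k in range(KMAX + 1)]
idx = {s: i for i, s in enumerate(states)}
Q = np.zeros((len(states), len(states)))

def add(frm, to, rate):
    Q[idx[frm], idx[to]] += rate

add((0, 0, 0), (1, 0, 0), lam / 2)           # arrival picks a free server at random
add((0, 0, 0), (0, 1, 0), lam / 2)
add((1, 0, 0), (1, 1, 0), lam); add((1, 0, 0), (0, 0, 0), mu1)
add((0, 1, 0), (1, 1, 0), lam); add((0, 1, 0), (0, 0, 0), mu2)
add((1, 1, 0), (0, 1, 0), mu1); add((1, 1, 0), (1, 0, 0), mu2)
for k in range(KMAX + 1):
    if k < KMAX:
        add((1, 1, k), (1, 1, k + 1), lam)   # arrival joins the common queue
    if k >= 1:
        add((1, 1, k), (1, 1, k - 1), mu1 + mu2)

Q -= np.diag(Q.sum(axis=1))                  # generator matrix: rows sum to zero
A = np.vstack([Q.T, np.ones(len(states))])   # balance equations + normalization
pi = np.linalg.lstsq(A, np.r_[np.zeros(len(states)), 1.0], rcond=None)[0]

EN = sum(p * (s[0] + s[1] + s[2]) for s, p in zip(states, pi))
EQ = sum(p * s[2] for s, p in zip(states, pi))
print("E(N) =", EN, " E(T) =", EN / lam, " E(W) =", EQ / lam)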

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

Example 17 Compare numerically the mean response, waiting time and probability of
waiting in the heterogeneous M/M/2 systems with separated and common queues. Use
FFS and RFS discipline in the heterogeneous case, in the homogeneous M/M/2 system
the service rate is (µ1 + µ2 )/2. The input parameters are λ = 1, µ2 = 2.

Table 2.2: Comparison of systems

E(T) E(T) E(W) E(W) P(W>0) P(W>0)


Separated µ1 = 10 µ1 = 3.5 µ1 = 10 µ1 = 3.5 µ1 = 10 µ1 = 3.5

balanced 0.1818 0.4444 0.0151 0.0808 0.0833 0.1818


p=0.9 0.1515 0.3987 0.0115 0.0916 0.086 0.2361
p=0.5 0.3859 0.5 0.0859 0.1071 0.15 0.1964
optimal 0.3981 0.0989 0.2529

Common
FFS 0.1341 0.3391 0.0009 0.0112 0.0102 0.0504
RFS 0.2689 0.3964 0.0018 0.0131 0.0205 0.0589
Hom. 0.1678 0.376 0.0011 0.0124 0.0128 0.0559

91
Example 18 Consider a service center with 4 servers where λ = 6, µ = 2.
Find the performance measures of the system.
Solution:
P0 = 0.0377, Q = 1.528, N = 4.528, S = 1, n = 3,
P (W > 0) = P (n ≥ 4) = C(4, 3) = 0.509, W = 0.255 time unit, T = 0.755 time unit ,
3
Un = , e = 0.35 time unit , Eδ = 1.05 time unit ,
4
Eδr = 4.2 time unit , Ur = 0.9623.

Example 19 Find the number of runways in an airport in such a way that the probability
of waiting of an airplane should not exceed 0.1. The arrivals are supposed to be Poisson
distributed with rate λ = 27 per hour and the service times are exponentially distributed
with a mean of 2 minutes.
Solution:
First use the same time unit for the rates; let us compute in hours. Hence µ = 30, and for
stability we need λ/(nµ) < 1, which results in n ≥ 1.

Denote by Pi(W > 0) the probability of waiting for i runways. By applying the corre-
sponding formulas we get
P2(W > 0) = 0.278, P3(W > 0) = 0.070, P4(W > 0) = 0.014.
Hence the solution is n = 3. In this case P0 = 0.403 and W = 0.0665 hour, Q = 0.03.

Example 20 Consider a fast food shop where the customers arrive according to a
Poisson law, one customer in 6 seconds on the average. The service time is exponentially
distributed with a mean of 20 seconds. Assuming that the maintenance cost of a server is 100
Hungarian Forint per hour and the waiting cost is the same, find the number of servers
which minimizes the mean cost per hour.
Solution:
\[
Q=\lambda W=\frac{3600}{6}\,W,
\qquad
E(TC)=100\,n+100\cdot 600\cdot W,
\qquad
\rho=\frac{\lambda}{\mu}=\frac{1/6}{1/20}=\frac{20}{6},\ \text{thus}\ n\ge 4.
\]
Computing for the values n = 4, 5, 6, 7, 8 we have found that the minimum is achieved
at n = 5. In this case the performance measures are

W = 3.9second, P (e) = 0.66, P (W ) = 0.34 ,


Eδ = 29.7second, e = 14.9second, Q = 0.65 ,
n = 2.5, N = 3.15, S = 2.5, E(T C) = 565 HUF/hour.

92
Example 21 Let us consider the following optimization problem. Find the number of
servers which minimizes the total expected cost per unit time with the following input
parameters and costs:

λ = 2, µ = 1, CS = 1, CW S = 20, CI = 1, CSR = 5, R = 20.

The expected total cost attains its minimum, E(Total cost) ≈ 29.478, at c = 4.

Expected total cost function with respect to number of servers

Example 22 Let us consider the following optimization problem. Find the intensity of
service which minimizes the total expected cost per unit time with the following input pa-
rameters and costs:

λ = 2, n = 4, CS = 1, CW S = 20, CI = 1, CSR = 5, R = 20.

The expected total cost attains its minimum, E(Total cost) ≈ 29.478, at µ = 1.

Expected total cost function with respect to service rate

93
2.8 The M/M/c Non-preemptive Priority Queue (HOL)
There are n priority classes, each class having a Poisson arrival pattern with mean
arrival rate λi. Each customer has the same exponential service time requirement with mean
S = 1/µ. Then the overall arrival pattern is Poisson with mean λ = λ1 + λ2 + . . . + λn. The server
utilization is
\[
a=\frac{\lambda S}{c}=\frac{\lambda}{c\mu}=\frac{\rho}{c},
\qquad
W_1=\frac{C[c,\rho]\,S}{c\left(1-\lambda_1 S/c\right)},
\]
and the following equations are also true:
\[
W_j=\frac{C[c,\rho]\,S}
{c\left(1-S\sum_{i=1}^{j-1}\lambda_i/c\right)\left(1-S\sum_{i=1}^{j}\lambda_i/c\right)},
\qquad j=2,\ldots,n,
\]
\[
T_j=W_j+S,\qquad Q_j=\lambda_j W_j,\qquad N_j=\lambda_j T_j,\qquad j=1,2,\ldots,n,
\]
\[
W=\frac{\lambda_1}{\lambda}W_1+\frac{\lambda_2}{\lambda}W_2+\ldots+\frac{\lambda_n}{\lambda}W_n,
\]
\[
Q=\lambda W,\qquad T=W+S,\qquad N=\lambda T.
\]
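A small Python sketch evaluating these HOL formulas for hypothetical class rates may look as follows (the Erlang C value is obtained through the Erlang B recursion):

def erlang_c(c, rho):
    b = 1.0
    for k in range(1, c + 1):
        b = rho * b / (k + rho * b)
    return c * b / (c - rho + rho * b)

def hol_waits(lambdas, mu, c):
    """Per-class mean waiting times W_j for the M/M/c HOL priority queue."""
    S = 1.0 / mu
    lam = sum(lambdas)
    C = erlang_c(c, lam * S)
    waits, cum = [], 0.0
    for lj in lambdas:
        prev = cum
        cum += lj
        waits.append(C * S / (c * (1 - S * prev / c) * (1 - S * cum / c)))
    return waits

lambdas = [2.0, 1.0, 0.5]                     # illustrative class arrival rates
W = hol_waits(lambdas, mu=1.0, c=5)
print(W, "overall W =", sum(l * w for l, w in zip(lambdas, W)) / sum(lambdas))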
Java applets for direct calculations can be found at
https://qsa.inf.unideb.hu

2.9 The M/M/c/K Queue - Multiserver, Finite-Capacity


Systems
This queue is a variation of the multiserver system in which at most K customers are
allowed to stay in the system. As earlier, the number of customers in the system is a
birth-death process with appropriate rates, and for the steady-state distribution we have
\[
P_n=
\begin{cases}
\dfrac{\lambda^n}{n!\,\mu^n}\,P_0, & 0\le n\le c,\\[2mm]
\dfrac{\lambda^n}{c^{\,n-c}\,c!\,\mu^n}\,P_0, & c\le n\le K.
\end{cases}
\]

From the normalizing condition for P0 we have
\[
P_0=\left(\sum_{n=0}^{c-1}\frac{\lambda^n}{n!\,\mu^n}
+\sum_{n=c}^{K}\frac{\lambda^n}{c^{\,n-c}\,c!\,\mu^n}\right)^{-1}.
\]
To simplify this expression let ρ = λ/µ, a = ρ/c. Then
\[
\sum_{n=c}^{K}\frac{\rho^n}{c^{\,n-c}\,c!}
=\frac{\rho^c}{c!}\sum_{n=c}^{K}a^{\,n-c}
=\begin{cases}
\dfrac{\rho^c}{c!}\,\dfrac{1-a^{K-c+1}}{1-a}, & a\ne 1,\\[2mm]
\dfrac{\rho^c}{c!}\,(K-c+1), & a=1.
\end{cases}
\]

94
Thus
\[
P_0=
\begin{cases}
\left(\displaystyle\sum_{n=0}^{c-1}\frac{\rho^n}{n!}
+\frac{\rho^c}{c!}\,\frac{1-a^{K-c+1}}{1-a}\right)^{-1}, & a\ne 1,\\[3mm]
\left(\displaystyle\sum_{n=0}^{c-1}\frac{\rho^n}{n!}
+\frac{\rho^c}{c!}\,(K-c+1)\right)^{-1}, & a=1.
\end{cases}
\]

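For numerical work it is often simpler to normalize the unscaled terms directly instead of distinguishing the cases a ≠ 1 and a = 1; a possible Python sketch:

from math import factorial

def mmck_probs(lam, mu, c, K):
    """Steady-state distribution of the M/M/c/K queue by direct normalization."""
    rho = lam / mu
    probs = [rho ** n / factorial(n) for n in range(c + 1)]            # 0 <= n <= c
    for n in range(c + 1, K + 1):
        probs.append(rho ** n / (c ** (n - c) * factorial(c)))          # c < n <= K
    norm = sum(probs)
    return [p / norm for p in probs]

P = mmck_probs(lam=3.0, mu=1.0, c=2, K=10)
PK = P[-1]                                        # blocking probability
Q = sum((n - 2) * P[n] for n in range(3, 11))     # mean queue length
print("P0 =", P[0], " PK =", PK, " Q =", Q)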
The main performance measures can be obtained as follows

ˆ Mean and variance of queue length


\[
Q=\sum_{n=c+1}^{K}(n-c)P_n
=\sum_{n=c+1}^{K}(n-c)\frac{\lambda^n}{c^{\,n-c}c!\,\mu^n}P_0
=\frac{P_0\rho^c}{c!}\sum_{n=c+1}^{K}(n-c)a^{\,n-c}
=\frac{P_0\rho^c a}{c!}\sum_{i=1}^{K-c}i\,a^{\,i-1}
=\frac{P_0\rho^c a}{c!}\,\frac{d}{da}\left(\frac{1-a^{K-c+1}}{1-a}\right),
\]
which results in
\[
Q=\frac{P_0\rho^c a}{c!\,(1-a)^2}\left[1-a^{K-c+1}-(1-a)(K-c+1)a^{K-c}\right].
\]
In particular, if a = 1 then L'Hôpital's rule should be applied twice. Writing
\[
\bar{Q}=\frac{P_0\rho^c}{c!}\,\frac{a-a^{K-c+2}-(1-a)(K-c+1)a^{K-c+1}}{(1-a)^2},
\]
and differentiating the numerator and the denominator with respect to a, we get
\[
\bar{Q}=\frac{P_0\rho^c}{c!}\cdot
\frac{1-a^{K-c+1}-(1-a)(K-c+1)^2a^{K-c}}{-2(1-a)},
\]
which is still of the form 0/0 at a = 1. Applying L'Hôpital's rule again,
\[
\lim_{a\to 1}\bar{Q}
=\frac{P_0\rho^c}{2c!}\left[(K-c+1)^2-(K-c+1)\right]
=\frac{P_0\rho^c}{2c!}\,(K-c)(K-c+1).
\]

95
If c = 1 and ρ = 1 we get
K(K − 1)
Q̄ = .
2(K + 1)

K
X
2
E(Q ) = (k − c)2 Pk , V ar(Q) = E(Q2 ) − (E(Q))2 .
k=c

ˆ Mean and variance of number of customers in the system


It is easy to see that
\[
\bar{\lambda}=\lambda(1-P_K)=\mu\,\bar{c},
\]
and since
\[
N=Q+\bar{c},
\]
we get
\[
N=Q+\rho(1-P_K).
\]

K
X
2
E(N ) = k 2 Pk , V ar(N ) = E(N 2 ) − (E(N ))2
k=1

ˆ Distribution at the moment of arrival into the system


By applying the Bayes's rule we have

Πn = P ( there are n customers in the system


given a customer is about to enter into the system )
   
[λ∆t + o(∆t)]Pn [λ + o(∆t)/∆t]Pn
= lim PK−1 = lim PK−1
n=0 [λ∆t + o(∆t)]Pn n=0 [λ + o(∆t)/∆t]Pn
∆t→0 ∆t→0

λPn Pn
= PK−1 = , (n ≤ K − 1).
λ n=0 Pn 1 − PK

Obviously in the case of an M/M/c/∞ system Πn = Pn since PK tends to 0.

ˆ Mean and variance of response and waiting times


The mean times can be obtained by applying the Little's law, that is
\[
W=\frac{Q}{\lambda(1-P_K)}=\sum_{k=c}^{K-1}\frac{k-c+1}{c\mu}\,\Pi_k,
\qquad
T=\frac{N}{\lambda(1-P_K)}=W+1/\mu.
\]
Since the conditional waiting time is Erlang distributed, it is easy to see that
\[
E(W^2)=\sum_{k=c}^{K-1}\frac{(k-c+1)+(k-c+1)^2}{(c\mu)^2}\,\Pi_k,
\qquad
Var(W)=E(W^2)-(E(W))^2,
\]
\[
Var(T)=Var(W)+1/\mu^2.
\]

ˆ Distribution of the waiting time


As in the previous parts for FW (t) the theorem of total probability is applied re-
sulting
K−1 t
cµ(cµx)n−c −cµx
X Z
FW (t) = FW (0) + Πn e dx
n=c 0 (n − c)!
K−1 Z ∞
X  cµ(cµx)n−c −cµx

= FW (0) + Πn 1 − e dx .
n=c t (n − c)!

Since m
∞ X (λt)i e−λt
λ(λx)m −λx
Z
e dx =
t m! i=0
i!
applying substitutions m = n − c, λ = cµ we have
Z ∞ n−c
cµ(cµx)n−c −cµx X (cµt)i e−cµt
e dx = ,
t (n − c)! i=0
i!

thus
K−1 K−1 n−c
X X X (cµt)i e−cµt
FW (t) = FW (0) + Πn − Πn
n=c n=c i=0
i!
K−1 n−c
X X (cµt)i e−cµt
=1− Πn .
n=c i=0
i!

The Laplace-transform of the waiting and response times can be derived similarly, by
using the law of total Laplace-transforms.

ˆ Overall utilization of the servers can be obtained as


The utilization of a single server is
\[
U_s=\sum_{k=1}^{c-1}\frac{k}{c}\,P_k+\sum_{k=c}^{K}P_k=\frac{\bar{c}}{c}=\frac{\rho(1-P_K)}{c}.
\]
Hence the overall utilization can be written as
\[
U_n=cU_s=\bar{c}.
\]

97
ˆ The mean busy period of the system can be computed as follows
The system is said to be idle if there is no customer in the system, otherwise the
system is busy. Let Eδr denote the mean busy period of the system. Then the
utilization of the system is
\[
U_r=1-P_0=\frac{E\delta_r}{\frac{1}{\lambda}+E\delta_r},
\]
thus
\[
E\delta_r=\frac{1-P_0}{\lambda P_0}.
\]
If the individual servers are considered then we assume that a given server becomes
busy earlier if it became idle earlier. Hence if j < c customers are in the system
then the number of idle servers is c − j .
Let as consider a given server. On the condition that at the instant when it became
idle the number of customers in the system was j its mean idle time is
c−j
ej = .
λ
The probability of this situation is
Πj Pj
aj = c−1
= c−1
.
X X
Πi Pi
i=0 i=0

Then applying the law of total expectations for its mean idle period we have
c−1 c−1 c−1
X X (c − j)Πj X 1 c − ρ(1 − PK )
e= aj e j = Pc−1 = (c − j)Πj = ,
λP (e) λ c−1
P
j=0 j=0
λ i=0 Πi j=0 i=0 Pi

Pc−1
where P (e) = j=0 Πj denotes the probability that an arriving customer nds an
idle server.

Since

Us = ,
e + Eδ
thus
Us
Eδ = e
1 − Us
where Eδ denotes its mean busy period.

It is easy to see if PK = 0 than the performance measures of the M/M/c and the
M/M/c/K systems are the same which is reasonable.

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

98
2.10 The M/M/c/K Queue with Balking and Reneging
In real practice, it often happens that arrivals become discouraged or balked when
the queue is long and do not wish to wait. One such model is the M/M/c/K that is, if
people see K ahead of them in the system, they do not join. Generally, unless K is the
result of a physical restriction such as no more places to park or room to wait, people
will not act quite like that voluntarily. Rarely do all customers have exactly the same
discouragement limit all the time. Another approach to balking is to employ a series of
monotonically decreasing functions of the system size multiplying the rate λ. Let bk be
this function, so that λk = bn λ and bk+1 ≤ bk ≤ 1, k > 0, b0 = 1, that is the probability
of joining the system provided it is in state k .

A possible example that may be useful is bk = 1/(k + 1), k = 1, . . . , K. People are not
always discouraged because of queue size, but may attempt to estimate how long they
would have to wait. If the queue is moving quickly, then the person may join a long one. On
the other hand, if the queue is slow-moving, a customer may become discouraged even
if the line is short. Now if k customers are in the system, an estimate for the average
waiting time might be k/(cµ), if the customer has an idea of µ. In this case bk = e^{−k/(cµ)}.
The classical M/M/c/K system is obtained with bk = 1, k = 0, . . . , K.

Customers who tend to be impatient may not always be discouraged by excessive


queue size, but may instead join the queue to see how long their wait may become. How-
ever, they renege, abandon if their estimate of their total wait is intolerable and they
leave the system without service.

Let rk h + o(h) be the probability of reneging during h given k customers in the system, that
is, the reneging intensity is rk. Good possibilities for the reneging function rk are
rk = 0, k = 0, · · · , K (the classical system), or rk = (k − c)θ or rk = e^{k/(cµ)} for k = c, . . . , K and zero
otherwise, where θ is the parameter of the exponentially distributed impatience time of
a customer.
It is not so dicult to see, that the number of customers in the systems is a birth-death
process with

λk = λbk , k = 0, · · · , K − 1

(
kµ, k = 1, · · · , c
µk =
cµ + rk , k = c, · · · , K.

As usual, the steady-state distribution can be obtained as

K
!−1
λ0 · · · λk−1 X λ0 · · · λj−1
Pk = P0 , P0 = 1+ .
µ1 · · · µk j=1
µ1 · · · µj

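This product-form solution is easy to evaluate numerically; a possible Python sketch with the balking and reneging functions mentioned above (the parameter values are arbitrary illustrations):

def balking_reneging_probs(lam, mu, c, K, b, r):
    """Birth-death steady state with birth rates lam*b(k) and death rates
    k*mu (k <= c) or c*mu + r(k) (k > c), normalized at the end."""
    lam_k = [lam * b(k) for k in range(K)]                   # k = 0..K-1
    mu_k = [k * mu if k <= c else c * mu + r(k) for k in range(1, K + 1)]
    probs = [1.0]
    for k in range(1, K + 1):
        probs.append(probs[-1] * lam_k[k - 1] / mu_k[k - 1])
    norm = sum(probs)
    return [p / norm for p in probs]

lam, mu, c, K, theta = 2.0, 1.0, 2, 12, 0.5
b = lambda k: 1.0 if k == 0 else 1.0 / (k + 1)               # discouraged arrivals
r = lambda k: (k - c) * theta                                # exponential impatience
P = balking_reneging_probs(lam, mu, c, K, b, r)
print("P0 =", P[0], " N =", sum(k * p for k, p in enumerate(P)))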
99
The main performance measures can be calculated as follows

1 Us
Ur = 1 − P0 , E(δr ) =·
λ 1 − Us
K
X K
X
N̄ = kPk , Q̄ = (k − c)Pk
k=1 k=c
XK K
X
N¯2 = 2
k Pk , Q̄2 = (k − c)2 Pk
k=1 k=c

V ar(N ) = N¯2 − (N̄ )2 , V ar(Q) = Q̄2 − (Q̄)2


c−1
X K
X
c̄ = kPk + cPk , N̄ = Q̄ + c̄, Uc = c̄/c
k=1 k=c
K−1
X K
X
λ̄ = λk Pk , µ̄ = µk Pk , λ̄ = µ̄
k=0 k=1
T̄ = N̄ /λ̄, W̄ = Q̄/λ̄
K
X
r̄ = rk Pk , mean reneging rate
k=c

The probability that an entering customer nds k customers in the system is


λk Pk
Πk = , k = 0, . . . K − 1.
λ̄

λ̄
P (an arriving customer enters the system) = ,
λ

P (a departing customer leaves the system without service) =
µ̄
K−1
X λK P K
P (waiting) = Πk , P (blocking) = PK .
k=c k=0 λk Pk

In the case of a balking system we can calculate the variance of waiting and response
time and the distribution function of the waiting time, too.
Namely, we have

K−1
Q X (k − c + 1)
W = = Πk
λ k=c
(cµ)
N
T = = W + 1/µ
λ
100
Since the conditional waiting time is Erlang distributed, it is easy to see that

K−1
2
X (k − c + 1) + (k − c + 1)2
E(W ) = Πk , V ar(W ) = E(W 2 ) − (E(W ))2 ,
k=c
(cµ)2

V ar(T ) = V ar(W ) + 1/µ2 .

Distribution function of the waiting time


As in the previous parts for FW (t) the theorem of total probability is applied resulting
K−1 t
cµ(cµx)n−c −cµx
X Z
FW (t) = FW (0) + Πn e dx
n=c 0 (n − c)!
K−1 Z ∞
X  cµ(cµx)n−c −cµx

= FW (0) + Πn 1 − e dx .
n=c t (n − c)!

Similarly to the previous section we have


K−1 K−1 n−c
X X X (cµt)i e−cµt
FW (t) = FW (0) + Πn − Πn
n=c n=c i=0
i!
K−1 n−c
X X (cµt)i e−cµt
=1− Πn .
n=c i=0
i!

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

101
2.11 The M/G/1 Queue
So far systems with exponentially distributed service times have been treated. We must
admit that this is a restriction, since in many practical problems these times are not expo-
nentially distributed. It means that the investigation of queueing systems with generally
distributed service times is natural. It is not the aim of this book to give a detailed anal-
ysis of this important system; I concentrate only on the mean value approach, and some
practice-oriented theorems are stated without proofs. A simple proof of the Little's law
is also given.

Little's Law

As a rst step for the investigations let us give a simple proof for the Little's theorem,
Little's law, Little's formula, which states a relation between the mean number of
customers in the systems, mean arrival rate and the mean response time. Similar version
can be stated for the mean queue length, mean arrival rate and mean waiting time.

Let α(t) denote the number of customers arrived into the system in a time interval (0, t),
and let δ(t) denote the number of departed customers in (0, t). Supposing that N (0) = 0,
the number of customers in the system at time t is N (t) = α(t) − δ(t).
Let the mean arrival rate into the system during (0, t) be dened as
α(t)
λ̄t := .
t
Let γ(t) denote the overall sojourn times of the customers until t and let T t be dened
as the mean sojourn time for a request. Clearly

γ(t)
Tt = .
α(t)
Finally, let N̄t denote the mean number of customers in the system during in the interval
(0, t), that is
γ(t)
N̄t = .
t
From these relations we have
N̄t = λ̄t T̄t .
Supposing that the following limits exist

λ̄ = lim λ̄t , T̄ = lim T̄t .


t→∞ t→∞

we get
N̄ = λ̄T̄ ,
which is called Little's law .
Similar version is
Q̄ = λ̄W̄ .

102
The Embedded Markov Chain

As before let N (t) denote the number of customers in the system at time t. As time
evolves the state changes and we can see that changes to neighboring states occur, up
and down, that is from state k either to k + 1 or to k − 1. Since we have a single server
the number of k → k + 1 type transitions may dier by at most one from the number of
k + 1 → k type transitions. So if the system operate for a long time the relative frequen-
cies should be the same. It means that in stationary case the distributions at the arrival
instants and the departure instants should be the same. More formally, Πk = Dk .

For further purposes we need the following statements


Statement 1 For Poisson arrivals
P(N(t) = k) = P(an arrival at time t finds k customers in the system).

Statement 2 If in any system N(t) changes its states by one, then if either one of the
following limiting distributions exists, so does the other and they are equal:
\[
\Pi_k:=\lim_{t\to\infty}P(\text{an arrival at time } t \text{ finds } k \text{ customers in the system}),
\]
\[
D_k:=\lim_{t\to\infty}P(\text{a departure at time } t \text{ leaves } k \text{ customers behind}),
\]
\[
\Pi_k=D_k.
\]
Thus for an M/G/1 system
\[
\Pi_k=P_k=D_k,
\]
that is, in the stationary case these 3 types of distributions are the same.

Due to their importance we prove them. Let us consider first Statement 1.


Introduce the following notation

Pk (t) := P (N (t) = k) ,

Πk (t) := P (an arriving customer at instant t nds k customers in the system ) .


Let A(t, t + ∆t) be the event that one arrival occurs in the interval (t, t + ∆t). Then

Πk (t) = lim P (N (t) = k | A(t, t + ∆t)) .


∆t→0

By the denition of the conditional probability we have


P (N (t) = k, A(t, t + ∆t))
Πk (t) = lim =
∆t→0 P (A(t, t + ∆t))
P (A(t, t + ∆t) | N (t) = k) P (N (t) = k)
= lim .
∆t→0 P (A(t, t + ∆t))
Due to the memoryless property of the exponential distribution event A(t, t + ∆t) does
not depend on the number of customers in the systems and even on t itself thus

P (A(t, t + ∆t) | N (t) = k) = P (A(t, t + ∆t)) ,

103
hence
Πk (t) = lim P (N (t) = k) ,
∆t→0
that is
Πk (t) = Pk (t).
This holds for the limiting distribution as well, namely
Πk = lim Πk (t) = lim Pk (t) = Pk .
t→∞ t→∞

Let us prove Statement 2 by the help of Statement 1 .

Let R̂k (t) denote the number of arrivals into the system when it is in state k during the
time interval (0, t) and let D̂k (t) denote the number of departures that leave the system
behind in state k during (0, t). Clearly
(2.30) |R̂k (t) − D̂k (t) | ≤ 1.
Furthermore if the total number of departures is denoted by D (t), and the total number
of arrivals is denoted by R (t) then
D (t) = R (t) + N (0) − N (t) .
The distribution at the departure instants can be written as

D̂k (t)
Dk = lim .
t→∞ D (t)

It is easy to see that after simple algebra we have


D̂k (t) R̂k (t) + D̂k (t) − R̂k (t)
= .
D (t) R (t) + N (0) − N (t)
Since N (0) is nite and N (t) is also nite due to the stationarity from (2.30) and R̂ (t) →
∞, with probability one follows that
D̂k (t) R̂k (t)
Dk = lim = lim = Πk .
t→∞ D (t) t→∞ R (t)

Consequently, by using Statement 1 the equality of the three probabilities follows.

Mean Value Approach

Let S denote the service time and let R denote the residual (remaining) service time. It
can be proved that the system is stable if ρ = λE(S) < 1, furthermore P0 = 1 − ρ.
Then it can easily be seen that

X
E(W ) = (E(R) + (k − 1)E(S)) Πk
k=1
∞ ∞
!
X X
= E(R)Pk + (k − 1)Pk E(S) = E(R)ρ + E(Q)E(S),
k=1 k=1

104
Felix Pollaczek, 1892-1981 Alexander Y. Khintchine, 1894-1959

where E(R) denotes the mean residual time.


By applying the Little's law we have

E(Q) = λE(W ),

and thus
ρE(R)
(2.31) E(W ) =
1−ρ
known as Pollaczek-Khintchine mean value formula.
In subsection 2.11 we will show that
E(S 2 )
(2.32) E(R) = ,
2E(S)

which can be written as


E(S 2 ) V ar(S) + E2 (S) 1
(2.33) E(R) = = = (CS2 + 1)E(S),
2E(S) 2E(S) 2

where CS2 is the squared coecient of the service time S . It should be noted that mean
residual service time depends on the rst two moments of the service time.

105
Thus for the mean waiting time we have

ρE(R) ρ
E(W ) = = (C 2 + 1)E(S).
1−ρ 2(1 − ρ) S

By using the Little's law for the mean queue length we get

ρ2 CS2 + 1
E(Q) = .
1−ρ 2
Clearly, the mean response time and the mean number of customers in the systems can
be expressed as

ρ CS2 + 1
E(T ) = E(S) + E(S),
1−ρ 2

ρ2 CS2 + 1
E(N ) = ρ + ,
1−ρ 2
which are also referred to as Pollaczek-Khintchine mean value formulas.
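A small Python sketch of these mean value formulas (the input values are arbitrary; CS2 denotes the squared coefficient of variation of S):

def mg1_means(lam, ES, VarS):
    """Pollaczek-Khintchine mean value formulas for the M/G/1 queue."""
    rho = lam * ES
    assert rho < 1, "stability requires rho < 1"
    CS2 = VarS / ES ** 2
    EW = rho / (1 - rho) * (CS2 + 1) / 2 * ES
    return {"EW": EW, "ET": EW + ES, "EQ": lam * EW, "EN": lam * (EW + ES)}

print(mg1_means(lam=0.8, ES=1.0, VarS=1.0))   # exponential S: EW = rho/(1-rho)*E(S)
print(mg1_means(lam=0.8, ES=1.0, VarS=0.0))   # deterministic S: half of that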

Example 23 For an exponential distribution CS2 = 1, and thus E(R) = E(S) which is
evident from the memoryless property of the exponential distribution. In this case we get

ρ ρ2 1 ρ
E(W ) = E(S), E(Q) = , E(T ) = E(S), E(N ) = .
1−ρ 1−ρ 1−ρ 1−ρ

Example 24 In the case of deterministic service time CS2 = 0, thus E(R) = E(S)/2.
Consequently we have

ρ E(S) ρ2
E(W ) = , E(Q) =
1−ρ 2 2(1 − ρ)

1 E(S) ρ2
E(T ) = + E(S), E(N ) = ρ + .
1−ρ 2 2(1 − ρ)

For an M/G/1 system we have proved that

Πk = Dk = Pk , k = 0, 1, . . .

therefore the generating function of the number of customers in the system is equal to the
generating function of the number of customers at departure instant. Furthermore, it is
clear that the number of customers at departure instants is equal the number customers
arrived during the response time. In summary we have

Z∞
(λx)k −λx
Dk = Pk = e fT (x)dx.
k!
0

106
Thus the corresponding generating function can be obtained as
∞ Z∞
X (λx)k −λx
GN (z) = zk e fT (x)dx
k=0
k!
0
Z∞ X

(λxz)k −λx
= e fT (x)dx
k!
0 k=0
Z∞
= e−λ(1−z)x fT (x)dx = LT (λ(1 − z)),
0

that is it can be expressed by the help of the Laplace-transform of the response time T .
By applying the properties of the generating function and the Laplace-transform we have
(k) (k)
GN (1) = E(N (N − 1) . . . (N (−k + 1))) = (−1)k LT (0)λk = λk E(T k ).

In particular, the rst derivative results to the Little's law, that is

N = λT ,

and hence this formula can be considered as the generalization of the Little's law for an
M/G/1 queueing systems.
By the help of this relation the higher moments of N can be obtained, thus the variance
can be calculated if the second moment of T is known.

Residual Service Time

Let us suppose that the tagged customer arrives when the server is busy and denote the
total service time of the request in service by X , that is a special interval. Let fX (x)
denote the density function of X . The key observation to nd fX (x) is that it is more
likely that the tagged customer arrives in a longer service time than in a short one. Thus
the probability that X is of length x should be proportional to the length x as well as
the frequency of such service times, which is fS (x) dx. Thus we may write

P (x ≤ X ≤ x + dx) = fX (x)dx = CxfS (x)dx,

where C is a constant to normalize this density. That is


Z ∞
−1
C = xfS (x)dx = E(S),
x=0

thus
xfS (x)
fX (x) = .
E(S)
Z ∞ Z ∞
1 E(S 2 )
E(X) = xfX (x) dx = x2 fS (x) dx = .
0 E(S) 0 E(S)

107
Since the tagged customers arrives randomly in service time S hence the mean residual
can be obtained as
E(X) E(S 2 )
E(R) = =
2 2E(S)

Example 25 Let the service time be Erlang distributed with parameters (n, µ) then
n n
E(S) = , V ar(S) = ,
µ µ2
thus
n(1 + n)
E(S 2 ) = V ar(S) + E2 (S) =
µ2
hence
\[
E(R)=\frac{1+n}{2\mu}.
\]

It is easy to see that using this approach the the density function the residual service
time can be calculated. Given that the tagged customer arrives in a service time of length
x, the arrival moment will be a random point within this service time, that is it will be
uniformly distributed within the service time interval (0, x). Thus we have

\[
P(x\le X\le x+dx,\ y\le R\le y+dy)=f_X(x)\,dx\,\frac{dy}{x},\qquad 0\le y\le x.
\]
After substitution for fX (x) and integrating over x we get the desired density function
of the residual service time, that is

1 − FS (y)
fR (y) = .
E(S)

Hence
Z∞ Z∞
1 − FS (x)
E(R) = xfR (x)dx = x dx,
E(S)
0 0

Thus
E(S 2 )
E(R) = .
2E(S)
Now let us show how to calculate this type of integrals.
Let X be a non-negative random variable with nite nth moment. Then
Z∞ Zy Z∞
xn f (x)dx = xn f (x)dx + xn f (x)dx,
0 0 y

thus
Z∞ Z∞ Zy
xn f (x)dx = xn f (x)dx − xn f (x)dx.
y 0 0

108
Since
Z∞ Z∞
n n
x f (x)dx ≥ y f (x)dx = y n (1 − F (y)) ,
y y

hence
Z∞ Zy
n n
0 ≤ y (1 − F (y)) ≤ x f (x)dx − xn f (x)dx,
0 0

therefore
Z∞ Zy
0 ≤ lim y n (1 − F (y)) ≤ xn f (x)dx − lim xn f (x)dx,
y→∞ y→∞
0 0

that is
lim y n (1 − F (y)) = 0.
y→∞

Then using integration by parts keeping in mind the above relation we get
Z∞ Z∞
n−1 xn E(X n )
x (1 − F (x))dx = f (x)dx = .
n n
0 0

Let us show another way to calculate this type of integral

Z ∞ Z ∞ Z ∞ Z ∞ 
n−1 1 n−1 1 n−1
x fR (x)(x)dx = x (1 − FS (x))dx = x fS (y)d(y) dx
0 E(S) 0 E(S) x=0 y=x

∞ y ∞
yn E(S n )
Z Z  Z
1 n−1 1
= x dx fS (y)d(y) = fS (y)d(y) = .
E(S) y=0 x=0 E(S) y=0 n nE(S)
In particular, for n = 2 we obtain

E(S 2 )
E(R) = .
2E(S)

Pollaczek-Khintchine and Takács formulas

The following relations are commonly referred to as Pollaczek-Khintchine transform


equations
(1 − ρ)(1 − z)
(2.34) GN (z) = LS (λ − λz) ,
LS (λ − λz) − z

t(1 − ρ)
(2.35) LT (t) = LS (t) ,
t − λ + λLS (t)

t(1 − ρ)
(2.36) LW (t) = ,
t − λ + λLS (t)

109
with the help of which, in principle, the distribution of the number of customers in the
system, the density function of the response and waiting times can be obtained. Of course
this time we must be able to invert the involved Laplace-transforms.

Lajos Takács, 1924-2015

Takács Recurrence Theorem


k  
λ X k E(S i+1 )
(2.37) k
E(W ) = E(W k−i )
1 − ρ i=1 i i+1

that is moments of the waiting time can be obtained in terms of lower moments of the
waiting time and moments of the service time. It should be noted to get the k th moment
of W the k + 1th moment of the service time should exist.

Since W ,S are independent and T = W + S the k th moment of the response time can
also be computed by

k  
X k
(2.38) k
E(T ) = E(W l ) · E(S k−l ).
l=0
l
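The recurrence is straightforward to implement; a possible Python sketch computing the first few moments of W (illustrated with an exponential service time, for which E(S^i) = i!·E(S)^i):

from math import comb, factorial

def waiting_moments(lam, ES_moments, kmax):
    """Takacs recurrence (2.37): ES_moments[i] must hold E(S^i) for i up to kmax+1;
    returns the list [E(W^0), E(W^1), ..., E(W^kmax)]."""
    rho = lam * ES_moments[1]
    EW = [1.0] + [0.0] * kmax
    for k in range(1, kmax + 1):
        EW[k] = lam / (1 - rho) * sum(
            comb(k, i) * ES_moments[i + 1] / (i + 1) * EW[k - i]
            for i in range(1, k + 1))
    return EW

ES = 1.0
ES_moments = [1.0] + [factorial(i) * ES ** i for i in range(1, 5)]  # E(S^0)..E(S^4)
EW = waiting_moments(lam=0.5, ES_moments=ES_moments, kmax=3)
print("E(W) =", EW[1], " Var(W) =", EW[2] - EW[1] ** 2)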

110
By using these formulas the following relations it can be proved
λE(S 2 ) ρE(S) 1 + CS2
 
E(W ) = = ,
2(1 − ρ) 1−ρ 2
E(T ) = E(W ) + E(S),
λE(S 3 )
E(W 2 ) = 2(W )2 + ,
3(1 − ρ)
E(S 2 )
E(T 2 ) = E(W 2 ) + ,
1−ρ
V ar(W ) = E(W 2 ) − (E(W ))2 ,
V ar(T ) = V ar(W + S) = V ar(W ) + V ar(S).
Because

E(N (N − 1)) = λ2 E(T 2 )

after elementary but lengthy calculation we have


2
λE(S 3 ) λE(S 2 ) λ(3 − 2ρ)E(S 2 )

V ar(N ) = + + + ρ(1 − ρ).
3(1 − ρ) 2(1 − ρ) 2(1 − ρ)
Since

X ∞
X ∞
X ∞
X
E(Q2 ) = (k − 1)2 Pk = k 2 Pk − 2 kPk + Pk
k=1 k=1 k=1 k=1
2
= E(N ) − 2N + ρ
by elementary computations we can prove that
2
λE(S 3 ) λE(S 2 ) λE(S 2 )

V ar(Q) = + + .
3(1 − ρ) 2(1 − ρ) 2(1 − ρ)

Now let us turn our attention to the Laplace-transform of the busy period of the server.

Lajos Takács proved that

(2.39) Lδ (t) = LS (t + λ − λLδ (t)),

that is for the Laplace-transform Lδ (t) a function equation should be solved ( which is
usually impossible to invert ).
However, by applying this equation the moments the busy period can be calculated.

First determine E(δ). Using the properties of the Laplace-transform we have


L0δ (0) = (1 − λL0δ (0))L0S (0)
E(δ) = (1 + λE(δ))E(S)
E(S) 1 ρ
E(δ) = =
1−ρ λ1−ρ

111
which was obtained earlier by the well-known relation

E(δ)
1 = ρ.
λ
+ E(δ)

After elementary but lengthy calculations it can be proved that

E(S 2 ) (E(S))2 V ar(S) + ρ(E(S))2


V ar(δ) = − = .
(1 − ρ)3 (1 − ρ)2 (1 − ρ)3

Now let us consider the generating function of the customers served during a busy period.
It can be proved that

(2.40) GNd (δ) (z) = zLS (λ − λGNd (δ) (z))

which is again a functional equation but using derivations the higher moments can be
computed.
Thus for the mean numbers we have
E(Nd (δ)) = 1 + λE(S)E(Nd (δ))
1
E(Nd (δ)) = ,
1−ρ
which can also be obtained by relation

E(δ) = E(S)E(Nd (δ))

since
1 ρ
= E(S) · E(Nd (δ))
λ1−ρ
(2.41)
ρ 1
E(Nd (δ)) = = .
ρ(1 − ρ) 1−ρ

It can be proved that

ρ(1 − ρ) + λ2 E(S 2 )
(2.42) V ar(Nd (δ)) = .
(1 − ρ)3

It is interesting to note that the computation of V ar(δ), V ar(Nd (δ)) does not require the
existence of E(S 3 ), as it in the case of V ar(N ), V ar(Q), V ar(T ), V ar(W ).

112
M/G/1 system with non-preemptive LCFS service discipline
In the following we show how the results concerning to the busy period analysis of
a FCFS system can be used for the investigation of the waiting and response time of a
system with non-preemptive LCFS ( Last-Come- First-Served ) service order. This means
that the last customer does not interrupt the service of the current customer.

It should be noted that the mean waiting and response time of an M/G/1 under any
well-known service discipline will be the same due to the Little-formula and the fact that
the generating function of the steady-state distribution of the number of customers in the
system is the same. As a consequence the mean number of customers in the system and
the mean busy period length is the same. Moreover, the Laplace-transform of the busy
period is the same, too. However, the higher moment will be dierent depending on the
service order.

It can be proved that for M/G/1 systems we have

1 − Lδ (t)
LWLCF S (t) = (1 − ρ) + ρ
(t + λ − λLδ (t))E(S)

LTLCF S (t) = LWLCF S (t)LS (t)

λE(S 3 ) λ2 (1 + ρ)(E(S 2 ))2


V ar(WLCF S ) = +
3(1 − ρ)2 4(1 − ρ)3

λE(S 3 ) λ2 (E(S 2 ))2


V ar(WF CF S ) = +
3(1 − ρ) 4(1 − ρ)2

2λE(S 3 ) λ2 (2 + ρ)(E(S 2 ))2


V ar(WSIRO ) = +
3(1 − ρ)(2 − ρ) 4(1 − ρ)2 (2 − ρ)

Comparing the formulas term-by-term it is not dicult to prove that

V ar(WF CF S ) < V ar(WSIRO ) < V ar(WLCF S )


V ar(TF CF S ) < V ar(WSIRO ) < V ar(WLCF S )

As it is one of the most widely used queueing system the calculation of the main perfor-
mance measure is of great importance. It can be done by the help of our Java applets

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

113
2.12 The M/G/1 Priority Queue
M/G/1 Queueing Systems (classes, no priority)
There are n customer classes. Customers from class i arrive in a Poisson pattern with
mean arrival rate λi , i = 1, 2, . . . , n. Each class has its own general service time with
E[Si ] = 1/µi , E[Si2 ], E[Si3 ]. All customers served on a FCFS basis with no considera-
tion for class. The total arrival stream to the system has a Poisson arrival pattern with

λ = λ1 + λ2 + . . . + λn .

The first three moments of service time are given by

λ1 λ2 λn
S= E[S1 ] + E[S2 ] + . . . + E[Sn ],
λ λ λ
λ1 λ2 λn
E[S 2 ] = E[S12 ] + E[S22 ] + . . . + E[Sn2 ],
λ λ λ
and

λ1 λ2 λn
E[S 3 ] = E[S13 ] + E[S23 ] + . . . + E[Sn3 ],
λ λ λ
By Pollaczek's formula,

λE[S 2 ]
W = .
2(1 − ρ)
The mean time in the system for each class is given by

T i = W + E[Si ], i = 1, 2, . . . , n.

The overall mean customer time in the system,

λ1 λ2 λn
T = T 1 + T 2 + . . . + T n.
λ λ λ
The variance of the waiting time

λE[S 3 ] λ2 (E[S 2 ])2


V ar(W ) = + .
3(1 − ρ) 4(1 − ρ)2
The variance of T is given by

V ar(Ti ) = V ar(W ) + V ar(Si ), i = 1, 2, . . . , n.

114
The second moment of T by class is
2
E[Ti2 ] = V ar(Ti ) + T i , i = 1, 2, . . . , n.

Thus, the overall second moment of T is given by

λ1 λ2 λn
E[T 2 ] = E[T12 ] + E[T22 ] + . . . + E[Tn2 ],
λ λ λ
2
V ar(T ) = E[T 2 ] − T .

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu
M/G/1 Non-preemptive (HOL) Priority Queueing Systems
There are n priority classes with each class having a Poisson arrival pattern with mean
arrival rate λi . Each customer has the same exponential service time requirement. Then
the overall arrival pattern is Poisson with mean:

λ = λ1 + λ2 + . . . + λn .

The server utilization

λ1 λ2 λn
S= E[S1 ] + E[S2 ] + . . . + E[Sn ],
λ λ λ
λ1 λ2 λn
E[S 2 ] = E[S12 ] + E[S22 ] + . . . + E[Sn2 ],
λ λ λ
and

λ1 λ2 λn
E[S 3 ] = E[S13 ] + E[S23 ] + . . . + E[Sn3 ],
λ λ λ
Let

ρj = λ1 E[S1 ] + λ2 E[S2 ] + . . . + λj E[Sj ], j = 1, 2, . . . , n,

and notice that

ρn = ρ = λS .

The mean times in the queues:

λE[S 2 ]
W j = E[Wj ] = ,
2(1 − ρj−1 )(1 − ρj )
j = 1, 2, . . . , n, ρ0 = 0.

115
The mean queue lengths are

Qj = λj · W j , j = 1, 2, . . . , n.

The unied time in the queue

λ1 λ2 λn
W = E[W1 ] + E[W2 ] + . . . + E[Wn ].
λ λ λ
The mean times of staying in the system

T j = E[Tj ] = E[Wj ] + E[Sj ], j = 1, 2, . . . , n,

and the average of the customers staying at the system is

N j = λj · T j , j = 1, 2, . . . , n.

The total time in the system

T = W + S.

The total queue length

Q = λ · W,

and the average of the customers staying at the system

N = λ · T.
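A possible Python sketch of these per-class formulas (the class parameters are arbitrary illustrations; each class i is given by λi, E[Si], E[Si²]):

classes = [(0.2, 1.0, 2.0), (0.3, 0.5, 0.5), (0.1, 2.0, 8.0)]   # (lambda_i, E[S_i], E[S_i^2])

lam = sum(l for l, _, _ in classes)
ES2 = sum(l * s2 for l, _, s2 in classes) / lam                 # E[S^2] of the mixture

W, rho_prev = [], 0.0
for l, s, _ in classes:
    rho_j = rho_prev + l * s
    W.append(lam * ES2 / (2 * (1 - rho_prev) * (1 - rho_j)))    # W_j
    rho_prev = rho_j

T = [w + s for w, (_, s, _) in zip(W, classes)]                 # T_j = W_j + E[S_j]
W_total = sum(l * w for (l, _, _), w in zip(classes, W)) / lam
print("per-class W:", W, " overall W:", W_total, " per-class T:", T)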

The variance of the total time stayed in the system by class

λE[S 3 ]
V ar(Tj ) = V ar(Sj ) +
3(1 − ρj−1 )2 (1 − ρj )
 j 
λE[S 2 ] 2 λi E[Si2 ] − λE[S 2 ]
P
i=1
+
4(1 − ρj−1 )2 (1 − ρj )2
j−1
λE[S 2 ] λi E[Si2 ]
P
i=1
+ , j = 1, 2, . . . , n.
2(1 − ρj−1 )3 (1 − ρj )

The variance of the total time stayed in the system

λ1 2 λ2 2
V ar(T ) = [V ar(T1 ) + T 1 ] + [V ar(T2 ) + T 2 ]
λ λ
λn 2 2
+... + [V ar(Tn ) + T n ] − T .
λ

116
The variance of the waiting time by class

V ar(Wj ) = V ar(Tj ) − V ar(Sj ), j = 1, 2, . . . , n.


2
We know that E[Wj2 ] = V ar(Wj ) + W j , j = 1, 2, . . . , n,

so

λ1 λ2 λn
E[W 2 ] = E[W12 ] + E[W22 ] + . . . + E[Wn2 ].
λ λ λ
Finally
2
V ar(W ) = E[W 2 ] − W .

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

M/G/1 Preemptive Resume Priority Queueing Systems


There are n customer classes. Class 1 customers receive the most favorable treatment;
class n customers receive the least favorable treatment. Customers from class i arrive in
a Poisson pattern with mean arrival rate λi ,t = 1, 2, . . . , n. Each class has its own gen-
eral service time with E[Si ] = 1/µi , and nite second and third moments E[Si2 ], E[Si3 ].
The priority system is preemptive resume, which means that if a customer of class j is
receiving service when a customer of class i < j arrives, the arriving customer preempts
the server and the customer who was preempted returns to the head of the line for class
j customers. The preempted customer resumes service at the point of interruption upon
reentering the service facility. The total arrival stream to the system has a Poisson arrival
pattern with

λ = λ1 + λ2 + . . . + λn .

The rst three moment of service time are given by:

λ1 λ2 λn
S= E[S1 ] + E[S2 ] + . . . + E[Sn ],
λ λ λ
λ1 λ2 λn
E[S 2 ] = E[S12 ] + E[S22 ] + . . . + E[Sn2 ],
λ λ λ
λ1 λ2 λn
E[S 3 ] = E[S13 ] + E[S23 ] + . . . + E[Sn3 ].
λ λ λ
Let

ρj = λ1 E[S1 ] + λ2 E[S2 ] + . . . + λj E[Sj ], j = 1, 2, . . . , n,

and notice that ρn = ρ = λS .

117
The mean time in the system for each class is
 
j
λi E[Si2 ] 
P
1 
i=1
T j = E[Tj ] = E[Sj ] + ,
 
1 − ρj−1  2(1 − ρj ) 

ρ0 = 0, j = 1, 2, . . . , n.

Waiting times
W j = E[Tj ] − E[Sj ], j = 1, 2, . . . , n.

The mean length of the queue number j :

Qj = λj W j , j = 1, 2, . . . , n.

The total waiting time, W , is given by:

λ1 λ2 λn
W = E[W1 ] + E[W2 ] + . . . + E[Wn ].
λ λ λ
The mean number of customers staying in the system for each class is

N j = λj W j , j = 1, 2, . . . , n.

The mean total time is

λ1 λ2 λn
T = T 1 + T 2 + . . . + T n = W + S.
λ λ λ
The mean number of customers waiting in the queue is

Q = λ · W,

and the average number of customers staying in the system

N = λ · T.
The variance of the total time of staying in the system for each class is

j−1
λi E[Si2 ]
P
E[Sj ]
V ar(Sj ) i=1
V ar(Tj ) = +
(1 − ρj−1 )2 (1 − ρj−1 )3
j
 j 2
λi E[Si3 ] 2
P P
λi E[Si ]
i=1 i=1
+ +
3(1 − ρj−1 )2 (1 − ρj ) 4(1 − ρj−1 )2 (1 − ρj )2

118
j
  j−1 
λi E[Si2 ] λi E[Si2 ]
P P
i=1 i=1
+ , ρ0 = 0, j = 1, 2, . . . , n.
2(1 − ρj−1 )3 (1 − ρj )
The overall variance

λ1 2 λ2 2
V ar(T ) = [V ar(T1 ) + T 1 ] + [V ar(T2 ) + T 2 ]
λ λ
λn 2 2
+... + [V ar(Tn ) + T n ] − T .
λ
The variance of waiting times for each class is

V ar(Wj ) = V ar(Tj ) − V ar(Sj ), j = 1, 2, . . . , n.

Because,
2
E[Wj2 ] = V ar(Wj ) + W j , j = 1, 2, . . . , n,

so

λ1 λ2 λn
E[W 2 ] = E[W12 ] + E[W22 ] + . . . + E[Wn2 ].
λ λ λ
Finally
2
V ar(W ) = E[W 2 ] − W .

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

119
2.13 The M/G/c Processor Sharing Queue
M/G/1 Processor Sharing Queueing Systems
The Poisson arrival stream has an average arrival rate of λ and the average service rate is
µ. The service time distribution is general with the restriction that its Laplace transform
is rational, with the denominator having degree at least one higher than the numerator.
Equivalently. the service time, s, is Coxian. The priority system is processor-sharing,
which means that if a customer arrives when there are already n − 1 customers in the
system, the arriving customer (and all the others) receive service at the average rate µ/n.
Then Pn = ρn (1 − ρ), n = 0, 1, . . . , where ρ = λ/µ. We also have

ρ t S
N= , E[T |S = t] = , and T = .
1−ρ 1−ρ 1−ρ
Finally
ρt ρS
E[W |S = t] = , and W = .
1−ρ 1−ρ
M/G/c Processor Sharing Queueing Systems
The Poisson arrival stream has an average arrival rate of λ. The service time distribution
is general with the restriction that its Laplace transform is rational, with the denominator
having degree at least one higher than the numerator. Equivalently, the service time, s,
is Coxian. The priority system is processor-sharing, which works as follows. When the
number of customers in the service center, is less than c, then each customers is served
simultaneously by one server; that is, each receives service at the rate µ. When N > c.
each customer simultaneously receives service at the rate cµ/N . We nd that just as for
the M/G/l processor-sharing system.

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

120
Chapter 3
Finite-Source Systems
So far we have been dealing with such queueing systems where arrivals followed a Poisson
process, that is the source of customers is innite. In this chapter we are focusing on
the nite-source population models. They are also very important from practical point
of view since in many situation the source is nite. Let us investigate the example of the
so-called machine interference problem treated by many experts.

Let us consider n machines that operates independently of each other. The operation
times and service times are supposed to be independent random variables with given
distribution function. After failure the broken machines are repaired by a single or multi-
ple repairmen according to a certain discipline. Having been repaired the machine starts
operating again and the whole process is repeated.

This simple model has many applications in various elds, for example in manufacturing,
computer science, reliability theory, management science, just to mention some of them.
For a detailed references on the nite-source models and their application the interested
reader is recommended to visit the following link

http://irh.inf.unideb.hu/user/jsztrik/research/fsqreview.pdf

3.1 The M/M/r/r/n Queue, Engset-Loss System


As we can see, depending on the system capacity r, in an M/M/r/r/n system a customer may
find the system full. In contrast to the infinite-source model, where the customer is lost, in
the finite-source model this request returns to the source and stays there for an exponen-
tially distributed time. Since all the random variables are supposed to be exponentially
distributed, the number of customers in the system is a birth-death process with the
following rates

λk = (n − k)λ , 0 ≤ k < r,
µk = kµ , 1 ≤ k ≤ r,

121
T.O Engset, 1865-1943

hence the distribution can be obtained as


 
n k
Pk = ρ P0 , 0 ≤ k ≤ r,
k
 
n k
ρ
k
Pk = r   ,
X n
ρi
i=0
i
which is called a truncated binomial or Engset distribution .
This is the distribution of a nite-source loss or Engset system .
Specially, if r = n that is no loss and each customer has its own server the distribution
has a very nice form, namely
   
n k n k
ρ ρ
k k
Pk = n   =
X n
i
(1 + ρ)n
ρ
i=0
i
  k  n−k
n ρ ρ
= 1− ,
k 1+ρ 1+ρ
ρ
that is, we have a binomial distribution with success parameter p = ρ/(1 + ρ).
That is, p is the probability that a given request is in the system. It is easy to see that
this distribution remains valid even for a G/G/n/n/n system since
\[
p=\frac{E(S)}{E(S)+E(\tau)}=\frac{\rho}{1+\rho},
\]
where ρ = E(S)/E(τ), and E(τ) denotes the mean time a customer spends in the source.

122
As before it is easy to see that the performance measures are as follows

ˆ Mean number of customers in the system N


r
X r N
N= kPk , r = N, US = = ,
k=0
r r

ˆ Mean number of customers in the source m

m=n−N

ˆ Utilization of a source Ut
m E(τ )
Ut = = ,
n E(τ ) + µ1
thus
1 Ut
E(τ ) = .
µ 1 − Ut
This help us to calculate the mean number of retrials of a customer from the source
to enter to the system. That it we have

E(NR ) = λE(τ ),

hence the mean number of rejection is E(NR ) − 1.

The blocking probability, that is the probability that a customer finds the system full at
his arrival, can be calculated with the help of the Bayes theorem as
\[
P_B(n,r)=\frac{(n-r)P_r(n,r)}{\sum_{i=0}^{r}(n-i)P_i(n,r)}=P_r(n-1,r).
\]

This can easily be veried by

((n − r)λh + o(h))Pr (n, r) (n − r)Pr (n, r)


PB (n, r) = lim r = r
h→0 X X
((n − i)λh + o(h))Pi (n, r) (n − i)Pi (n, r)
i=0 i=0
 
n r
(n − r) ρ n!
(n − r) r!(n−r)! ρr
r
= r   =X r
X n i n!
(n − i) ρ (n − i) ρi
i=0
i i=0
i!(n − i)!
 
n−1 r
(n−1)!
ρr ρ
r!(n−1−r)! r
= r = r  = Pr (n − 1, r).
X (n − 1)! X n − 1
i i
ρ ρ
i=0
i!(n − 1 − i)! i=0
i

123
Let E(n, r, ρ) denote the blocking probability, that is E(n, r, ρ) = Pr (n − 1, r), which is
called Engset's loss formula.
In the following we show a recursion for this formula, namely
\[
E(n,r,\rho)=\frac{\binom{n-1}{r}\rho^r}{\sum_{i=0}^{r}\binom{n-1}{i}\rho^i}
=\frac{\frac{n-r}{r}\,\rho\,\binom{n-1}{r-1}\rho^{r-1}}
       {\sum_{i=0}^{r-1}\binom{n-1}{i}\rho^i+\frac{n-r}{r}\,\rho\,\binom{n-1}{r-1}\rho^{r-1}}
=\frac{\frac{n-r}{r}\,\rho\,E(n,r-1,\rho)}{1+\frac{n-r}{r}\,\rho\,E(n,r-1,\rho)}
=\frac{(n-r)\rho\,E(n,r-1,\rho)}{r+(n-r)\rho\,E(n,r-1,\rho)}.
\]
The initial value is
\[
E(n,1,\rho)=P_1(n-1,1)=\frac{(n-1)\rho}{1+(n-1)\rho}.
\]
1 + (n − 1)ρ

It is clear that

lim E(n, r, ρ) = B(n, ρ0 ),


n→∞, λ→0, nλ→λ0

where

0 λ0
ρ =
µ

which van be seen formally, too. Moreover, as (n − r)ρ → ρ0 the well-known recursion for
B(n, ρ0) is obtained, which also justifies the correctness of the recursion for E(n, r, ρ).
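The recursion is convenient for computation; a possible Python sketch (the parameter values are arbitrary):

def engset(n, r, rho):
    """Engset blocking probability E(n, r, rho) via the recursion above."""
    E = (n - 1) * rho / (1 + (n - 1) * rho)          # E(n, 1, rho)
    for k in range(2, r + 1):
        E = (n - k) * rho * E / (k + (n - k) * rho * E)
    return E

# as n grows and rho shrinks with n*rho kept fixed, E approaches the Erlang B value
print(engset(n=10, r=3, rho=0.2))
print(engset(n=1000, r=3, rho=0.002))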

In particular, if r = n then it is easy to see that N = n 1+ρ


ρ
and thus

ρ n 1 1
US = , m= , Ut = , E(τ ) = , E(NR ) = 1, PB = 0,
1+ρ 1+ρ 1+ρ λ

which was expected.

In general case
r−1
X 1
µ = rµ = λ = λ(n − k)Pk 6= λ(n − N ), T = .
k=0
µ

Let us consider the distribution of the system at the instant when an arriving customer enters the system. By using Bayes' law we have

\Pi_k = \lim_{h\to 0}\frac{(\lambda_k h + o(h))P_k}{\sum_{i=0}^{r-1}(\lambda_i h + o(h))P_i} = \frac{\lambda_k P_k}{\sum_{i=0}^{r-1}\lambda_i P_i}, \qquad k = 0, \ldots, r-1,

\overline{T} = \frac{1}{\mu}\sum_{k=0}^{r-1}\frac{\lambda_k P_k}{\sum_{i=0}^{r-1}\lambda_i P_i} = \frac{1}{\mu},

\overline{\lambda}\cdot\overline{T} = \mu\overline{r}\cdot\frac{1}{\mu} = \overline{r} = \overline{N},

which is Little's formula for the finite-source loss system.

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

3.2 The M/M/1/n/n Queue


It is the traditional machine interference problem, where the broken machines have to wait and the single repairman fixes the failed machines in FIFO order. Assume that the operating times are exponentially distributed with parameter λ and the repair rate is µ.
All random variables are supposed to be independent of each other.

Let N (t) denote the number of customers in the system at time t, which is a birth-death
process with birth rates


\lambda_k = (n-k)\lambda \ \text{ if } 0 \le k \le n, \qquad \lambda_k = 0 \ \text{ if } k > n,

and with death rates

\mu_k = \mu, \qquad k \ge 1.
Thus for the distribution we have

P_k = \frac{n!}{(n-k)!}\varrho^k P_0 = (n-k+1)\varrho P_{k-1},

where

\varrho = \frac{\lambda}{\mu},

and

P_0 = \frac{1}{1 + \sum_{k=1}^{n}\frac{n!}{(n-k)!}\varrho^k} = \frac{1}{\sum_{k=0}^{n}\frac{n!}{(n-k)!}\varrho^k}.
Since the state space is finite, the steady-state distribution always exists, but if \varrho > 1 then more repairmen are needed.

For numerical calculations other forms are preferred, which is why we introduce some notation.
Let P(k; \lambda) denote the Poisson distribution with parameter \lambda and let Q(k; \lambda) denote its cumulative distribution function, that is

P(k;\lambda) = \frac{\lambda^k}{k!}e^{-\lambda}, \quad 0 \le k < \infty; \qquad Q(k;\lambda) = \sum_{i=0}^{k} P(i;\lambda), \quad 0 \le k < \infty.
First we show that

P_k = \frac{P(n-k; R)}{Q(n; R)}, \qquad 0 \le k \le n,

where

R = \frac{\mu}{\lambda} = \varrho^{-1}.

By elementary calculations we have

\frac{P(n-k;R)}{Q(n;R)} = \frac{\frac{1}{(n-k)!}\left(\frac{\mu}{\lambda}\right)^{n-k}e^{-\frac{\mu}{\lambda}}}{\sum_{i=0}^{n}\frac{1}{i!}\left(\frac{\mu}{\lambda}\right)^{i}e^{-\frac{\mu}{\lambda}}} = \frac{\frac{n!}{(n-k)!}\left(\frac{\lambda}{\mu}\right)^{k}}{\sum_{i=0}^{n}\frac{n!}{(n-i)!}\left(\frac{\lambda}{\mu}\right)^{i}} = P_k.

Hence a very important consequence is

P_0 = B(n, R).

The main performance measures can be obtained as follows


ˆ Utilization of the server and the throughput of the system
For the utilization of the server we have
Us = 1 − P0 = 1 − B(n, R).

By using the cumulative distribution function this can be written as

U_s = \frac{Q(n-1; R)}{Q(n; R)}.

For the throughput of the system we obtain


λt = µUs .

ˆ Mean number of customers in the system \overline{N} can be calculated as

\overline{N} = \sum_{k=0}^{n} k P_k = n - \sum_{k=0}^{n}(n-k)P_k = n - \frac{1}{\varrho}\sum_{k=0}^{n-1}(n-k)\varrho P_k = n - \frac{1}{\varrho}\sum_{k=0}^{n-1}P_{k+1} = n - \frac{1}{\varrho}(1-P_0) = n - \frac{U_s}{\varrho}.

In another form

\overline{N} = n - \frac{R\,Q(n-1;R)}{Q(n;R)} = n - \frac{U_s}{\varrho}.
ˆ Mean queue length, that is, the mean number of customers waiting, can be derived as

\overline{Q} = \sum_{k=1}^{n}(k-1)P_k = \sum_{k=1}^{n}kP_k - \sum_{k=1}^{n}P_k = n - \frac{\mu}{\lambda}(1-P_0) - (1-P_0) = n - (1-P_0)\left(1+\frac{\mu}{\lambda}\right) = n - \left(1+\frac{1}{\varrho}\right)U_s.
ˆ Mean number of customers in the source can be calculated as

m = \sum_{k=0}^{n}(n-k)P_k = n - \overline{N} = \frac{\mu}{\lambda}(1-P_0) = \frac{U_s}{\varrho}.

ˆ Mean busy period of the server


Since

U_s = 1 - P_0 = \frac{E\delta}{\frac{1}{n\lambda} + E\delta},

thus

E\delta = \frac{1 - P_0}{n\lambda P_0} = \frac{U_s}{n\lambda(1 - U_s)}.

In computer science and reliability theory applications we often need the following measure.

ˆ Utilization of a given source (machine, terminal)

The utilization of the ith source is defined by

U^{(i)} = \lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\chi(\text{at time } t \text{ the } i\text{th source is active})\,dt.

Then

U^{(i)} = P(\text{there is a request in the } i\text{th source}).

Hence the overall utilization of the sources is

U_n = \sum_{k=0}^{n}(n-k)P_k = m = \frac{\mu}{\lambda}(1-P_0).

Thus the utilization of any source is

U_t = \frac{\mu}{n\lambda}(1-P_0) = \frac{m}{n}.

This can be obtained in the following way as well:

U^{(i)} = \sum_{k=0}^{n}\frac{n-k}{n}P_k = \frac{m}{n},

and since the sources are homogeneous we have

U_t = U^{(i)}.

ˆ Mean waiting time

By using the result of Tomkó we have

U_t = \frac{1/\lambda}{1/\lambda + \overline{W} + 1/\mu} = \frac{m}{n}.

Thus

\lambda m = \frac{n}{1/\lambda + \overline{W} + 1/\mu},

and

\lambda m\overline{W} = n - m\left(1 + \frac{\lambda}{\mu}\right) = n - \frac{U_s}{\varrho}(1+\varrho) = \overline{Q},

which is Little's law for the mean waiting time. Hence

\overline{W} = \frac{\overline{Q}}{\lambda m} = \frac{1}{\mu}\left(\frac{n}{U_s} - \frac{1+\varrho}{\varrho}\right).

The mean response time can be obtained as

\overline{T} = \overline{W} + \frac{1}{\mu} = \frac{1}{\mu}\left(\frac{n}{1-P_0} - \frac{1}{\varrho}\right) = \frac{1}{\mu}\left(\frac{n}{U_s} - \frac{1}{\varrho}\right).

It is easy to prove that

m\lambda\overline{T} = \overline{N},

which is Little's law for the mean response time. Clearly we have

m\lambda\left(\overline{W} + \frac{1}{\mu}\right) = \overline{Q} + m\varrho = n - (1+\varrho)\frac{U_s}{\varrho} + U_s = n - \frac{U_s}{\varrho} = \overline{N}.

ˆ Further relations
Us = 1 − P0 = n%Ut = m%,
and thus
mλ = µUs = λt .

It should be noted that the utilization of the server plays a key role in the calculation of
all the main performance measures.
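Since everything reduces to P_0 and U_s, the measures above can be evaluated in a few lines. The following Python sketch (not part of the original text; the names are illustrative only) computes the distribution and the closed-form measures of this section.

from math import factorial

def mm1nn_measures(n, lam, mu):
    """M/M/1/n/n machine interference model: steady-state distribution and
    main performance measures, using P_k = n!/(n-k)! * rho^k * P0."""
    rho = lam / mu
    weights = [factorial(n) // factorial(n - k) * rho**k for k in range(n + 1)]
    P0 = 1.0 / sum(weights)
    P = [w * P0 for w in weights]

    Us = 1 - P0                                   # utilization of the repairman
    N = n - Us / rho                              # mean number of failed machines
    Q = n - (1 + 1 / rho) * Us                    # mean queue length
    m = n - N                                     # mean number of machines in the source
    W = (1 / mu) * (n / Us - (1 + rho) / rho)     # mean waiting time
    T = W + 1 / mu                                # mean response (down) time
    return P, dict(Us=Us, N=N, Q=Q, m=m, Ut=m / n, W=W, T=T)

if __name__ == "__main__":
    dist, meas = mm1nn_measures(n=6, lam=1/40, mu=1/4)   # the data of Example 26
    print(meas)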

Distribution at the arrival instants

In the following we find the steady-state distribution of the system at arrival instants, which, in contrast to the infinite-source model, is not the same as the distribution at a random point. To show this we use Bayes' theorem, that is

\Pi_k(n) = \lim_{h\to 0}\frac{(\lambda_k h + o(h))P_k}{\sum_{j=0}^{n-1}(\lambda_j h + o(h))P_j} = \frac{\lambda_k P_k}{\sum_{j=0}^{n-1}\lambda_j P_j} = \frac{\frac{n(n-1)\cdots(n-k)\lambda^{k+1}}{\mu_1\cdots\mu_k}P_0}{\sum_{j=0}^{n-1}\frac{n(n-1)\cdots(n-j)\lambda^{j+1}}{\mu_1\cdots\mu_j}P_0}

= \frac{\frac{(n-1)\cdots(n-k)\lambda^{k}}{\mu_1\cdots\mu_k}}{1 + \sum_{i=1}^{n-1}\frac{(n-1)\cdots(n-i)\lambda^{i}}{\mu_1\cdots\mu_i}} = P_k(n-1),

irrespective of the number of servers. It should be noted that this relation shows a very important result, namely that at arrivals the distribution of the system containing n sources is not the same as its distribution at random points, but equals the random-point distribution of a system with n - 1 sources.

Distribution at the departure instants

We are interested in the distribution of the number of customers a departing customer leaves behind in the system. These calculations are independent of the number of servers.
By applying Bayes' theorem we have

D_k(n) = \lim_{h\to 0}\frac{(\mu_{k+1}h + o(h))P_{k+1}}{\sum_{j=1}^{n}(\mu_j h + o(h))P_j} = \frac{\mu_{k+1}P_{k+1}}{\sum_{j=1}^{n}\mu_j P_j} = \frac{\frac{\mu_{k+1}\,n(n-1)\cdots(n-k)\lambda^{k+1}}{\mu_1\cdots\mu_{k+1}}P_0}{\sum_{j=1}^{n}\frac{\mu_j\,n(n-1)\cdots(n-j+1)\lambda^{j}}{\mu_1\cdots\mu_j}P_0}

= \frac{\frac{(n-1)\cdots(n-k)\lambda^{k}}{\mu_1\cdots\mu_k}}{1 + \sum_{i=1}^{n-1}\frac{(n-1)\cdots(n-i)\lambda^{i}}{\mu_1\cdots\mu_i}} = P_k(n-1)

in the case when there are customers left in the system, and

D_0(n) = \frac{1}{1 + \sum_{i=1}^{n-1}\frac{(n-1)\cdots(n-i)\lambda^{i}}{\mu_1\cdots\mu_i}} = P_0(n-1)

if the system becomes empty.

Recursive Relations

Similarly to the previous arguments it is easy to see that the density function of the response time can be obtained as

f_T(x) = \sum_{k=0}^{n-1} f_T(x\mid k)\Pi_k(n) = \sum_{k=0}^{n-1}\frac{\mu(\mu x)^k}{k!}e^{-\mu x}P_k(n-1).

Hence the mean value is

\overline{T}(n) = \sum_{k=0}^{n-1}\frac{k+1}{\mu}P_k(n-1) = \frac{1}{\mu}\left(\overline{N}(n-1) + 1\right).

Similarly, for the waiting time we have

f_W(x) = \sum_{k=1}^{n-1} f_W(x\mid k)\Pi_k(n) = \sum_{k=1}^{n-1}\frac{\mu(\mu x)^{k-1}}{(k-1)!}e^{-\mu x}P_k(n-1),

thus its mean is

\overline{W}(n) = \sum_{k=0}^{n-1}\frac{k}{\mu}P_k(n-1) = \frac{1}{\mu}\overline{N}(n-1),

which is clear.
We want to verify the correctness of the formula

\overline{T}(n) = \frac{1}{\mu}\left(\overline{N}(n-1) + 1\right).

As we have shown earlier, the utilization can be expressed by Erlang's loss formula, hence

\overline{N}(n) = n - \frac{1 - B(n, \frac{1}{\varrho})}{\varrho}.

Using the well-known recursive relation we have

B\!\left(n, \frac{1}{\varrho}\right) = \frac{\frac{1}{\varrho}B(n-1,\frac{1}{\varrho})}{n + \frac{1}{\varrho}B(n-1,\frac{1}{\varrho})} = \frac{B(n-1,\frac{1}{\varrho})}{n\varrho + B(n-1,\frac{1}{\varrho})}.

Since

\overline{N}(n-1) = n - 1 - \frac{1 - B(n-1,\frac{1}{\varrho})}{\varrho},

thus

\varrho\overline{N}(n-1) = (n-1)\varrho - \left(1 - B\!\left(n-1,\frac{1}{\varrho}\right)\right),

B\!\left(n-1,\frac{1}{\varrho}\right) = 1 + \varrho\overline{N}(n-1) - (n-1)\varrho.

After substitution we have

B\!\left(n,\frac{1}{\varrho}\right) = \frac{\frac{1}{\varrho}\left(1 + \varrho\overline{N}(n-1) - (n-1)\varrho\right)}{n + \frac{1}{\varrho}\left(1 + \varrho\overline{N}(n-1) - (n-1)\varrho\right)} = \frac{1 + \varrho\overline{N}(n-1) - (n-1)\varrho}{n\varrho + 1 + \varrho\overline{N}(n-1) - (n-1)\varrho} = \frac{1 + \varrho\overline{N}(n-1) - (n-1)\varrho}{1 + \varrho\overline{N}(n-1) + \varrho}.

Therefore

\overline{N}(n) = \frac{n\varrho - 1 + B(n,\frac{1}{\varrho})}{\varrho} = \frac{n\varrho - \frac{n\varrho}{1 + \varrho\overline{N}(n-1) + \varrho}}{\varrho} = n - \frac{n}{1 + \varrho\overline{N}(n-1) + \varrho}.

Finally

n - \overline{N}(n) = \frac{n}{1 + \varrho\overline{N}(n-1) + \varrho},

1 + \varrho\left(\overline{N}(n-1) + 1\right) = \frac{n}{n - \overline{N}(n)},

\varrho\left(\overline{N}(n-1) + 1\right) = \frac{\overline{N}(n)}{n - \overline{N}(n)},

which is a recursion for the mean number of customers in the system.

Now we are able to prove our relation regarding the mean response time. Keeping in mind the recursive relation for \overline{N}(n-1) we get

\overline{T}(n) = \frac{1}{\mu}\left(\overline{N}(n-1) + 1\right),

\lambda\overline{T}(n) = \varrho\left(\overline{N}(n-1) + 1\right) = \frac{\overline{N}(n)}{n - \overline{N}(n)},

\lambda\left(n - \overline{N}(n)\right)\overline{T}(n) = \overline{N}(n),

which was proved earlier.

Now let us show how we can verify \overline{T}(n) directly. It can easily be seen that

U_S(n) = 1 - B\!\left(n,\frac{1}{\varrho}\right) = 1 - \frac{\frac{1}{\varrho}B(n-1,\frac{1}{\varrho})}{n + \frac{1}{\varrho}B(n-1,\frac{1}{\varrho})} = \frac{n}{n + \frac{1}{\varrho}B(n-1,\frac{1}{\varrho})} = \frac{n\varrho}{n\varrho + B(n-1,\frac{1}{\varrho})} = \frac{n\varrho}{n\varrho + 1 - U_S(n-1)},

that is, there is a recursion for the utilization as well. This is also very important because by using this recursion all the main performance measures can be obtained. Thus, if \lambda, \mu, n are given, we can use the recursion for U_S(n) and finally substitute it into the corresponding formula. Thus
 
U_S(n-1) = n\varrho + 1 - \frac{n\varrho}{U_S(n)} = 1 + n\varrho\left(1 - \frac{1}{U_S(n)}\right).

Since

\overline{N}(n-1) = n - 1 - \frac{U_S(n-1)}{\varrho},

we proceed

\overline{T}(n) = \frac{1}{\mu}\left(\overline{N}(n-1) + 1\right) = \frac{1}{\mu}\left(n - \frac{U_S(n-1)}{\varrho}\right) = \frac{1}{\mu}\left(n - \frac{1 + n\varrho\left(1 - \frac{1}{U_S(n)}\right)}{\varrho}\right) = \frac{1}{\mu}\left(\frac{n}{U_S(n)} - \frac{1}{\varrho}\right),

which shows the correctness of the formula.

In the following let us show how to compute \overline{T}(n), \overline{W}(n), \overline{N}(n) recursively. As we have seen,

\overline{T}(n) = \frac{1}{\mu}\left(\overline{N}(n-1) + 1\right),
\overline{W}(n) = \overline{T}(n) - \frac{1}{\mu} = \frac{1}{\mu}\overline{N}(n-1),

so we have to know how \overline{N}(n) can be expressed in terms of \overline{T}(n). It can be shown very easily, namely

\overline{N}(n) = \lambda\left(n - \overline{N}(n)\right)\overline{T}(n) = \lambda n\overline{T}(n) - \lambda\overline{N}(n)\overline{T}(n),

\overline{N}(n)\left(1 + \lambda\overline{T}(n)\right) = \lambda n\overline{T}(n),

\overline{N}(n) = \frac{\lambda n\overline{T}(n)}{1 + \lambda\overline{T}(n)}.

The initial values are

\overline{T}(1) = \frac{1}{\mu}, \qquad \overline{N}(1) = \frac{\varrho}{1+\varrho}.

Now the iteration proceeds as

\overline{W}(n) = \frac{1}{\mu}\overline{N}(n-1),
\overline{T}(n) = \frac{1}{\mu} + \overline{W}(n),
\overline{N}(n) = \frac{\lambda n\overline{T}(n)}{1 + \lambda\overline{T}(n)},
that is we use a double iteration. The main advantage is that only the mean values are
needed. This method is referred to as mean value analysis.
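The double iteration takes only a few lines of code. The sketch below (illustrative only) implements the mean value analysis just described, starting from N(0) = 0, which reproduces the initial values T(1) = 1/µ and N(1) = ρ/(1+ρ) given above.

def mva_mm1nn(n, lam, mu):
    """Mean value analysis for M/M/1/n/n: iterate
    W(k) = N(k-1)/mu,  T(k) = 1/mu + W(k),  N(k) = k*lam*T(k)/(1 + lam*T(k)),
    starting from N(0) = 0; only mean values are needed."""
    N = 0.0
    for k in range(1, n + 1):
        W = N / mu
        T = 1 / mu + W
        N = k * lam * T / (1 + lam * T)
    return W, T, N

if __name__ == "__main__":
    W, T, N = mva_mm1nn(6, 1/40, 1/4)   # the data of Example 26
    print(W, T, N)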

In the previous part we derived a recursion for U_S(n), and thus we may expect that there are direct recursive relations for the other mean values as well, since they depend on the utilization. As a next step we find a recursion for the mean number of customers in the source m(n). It is quite easy, since

m(n) = \frac{U_s(n)}{\rho} = \frac{n}{n\rho + 1 - U_s(n-1)} = \frac{\frac{n}{\rho}}{n + \frac{1}{\rho} - m(n-1)} = \frac{n}{n\rho + 1 - \rho m(n-1)}.
By using this relation the utilization of a source can be expressed as

U_t(n) = \frac{m(n)}{n} = \frac{1}{n\rho + 1 - \rho m(n-1)} = \frac{1}{n\rho + 1 - (n-1)\rho U_t(n-1)}.
For the mean number of customers in the system we have

\overline{N}(n) = n - \frac{U_s(n)}{\rho} = \frac{n\rho - U_s(n)}{\rho} = \frac{n\rho - \frac{n\rho}{n\rho + 1 - U_s(n-1)}}{\rho} = \frac{n\left(n\rho - U_s(n-1)\right)}{n\rho + 1 - U_s(n-1)}.

Since

\overline{N}(n-1) = n - 1 - \frac{U_s(n-1)}{\rho} = \frac{n\rho - U_s(n-1)}{\rho} - 1,

\rho\left(\overline{N}(n-1) + 1\right) = n\rho - U_s(n-1),

U_s(n-1) = n\rho - \rho\left(\overline{N}(n-1) + 1\right),

thus after substitution we get

\overline{N}(n) = \frac{n\rho\left(\overline{N}(n-1) + 1\right)}{1 + \rho\left(\overline{N}(n-1) + 1\right)}.

Finally, we find the recursion for the mean response time. Starting with

\overline{T}(n) = \frac{1}{\mu}\left(\frac{n}{U_s(n)} - \frac{1}{\rho}\right)

and using that

\overline{T}(n-1) = \frac{1}{\mu}\left(\frac{n-1}{U_s(n-1)} - \frac{1}{\rho}\right),

\mu\overline{T}(n-1) + \frac{1}{\rho} = \frac{n-1}{U_s(n-1)},

U_s(n-1) = \frac{(n-1)\rho}{\lambda\overline{T}(n-1) + 1},

substituting into the recursion for U_s(n) we obtain

\overline{T}(n) = \frac{1}{\mu}\cdot\frac{n\rho - U_s(n-1)}{\rho} = \frac{1}{\mu}\cdot\frac{n\lambda\overline{T}(n-1) + 1}{\lambda\overline{T}(n-1) + 1}.

Obviously the missing initial values are

m(1) = U_t(1) = \frac{1}{1+\varrho}.

Distribution Function of the Response Time and Waiting Time

This subsection is devoted to one of the major problems in finite-source queueing systems. To find the distribution function of the response and waiting times is not easy. As expected, the theorem of total probability should be used.

Let us determine the density function and then the distribution function. As we did many times in earlier chapters, the law of total probability should be applied to the conditional density functions and the distribution at the arrival instants. So we can write

f_T(n,x) = \mu\sum_{k=0}^{n-1}\frac{(\mu x)^k}{k!}e^{-\mu x}\cdot\frac{\frac{(\frac{\mu}{\lambda})^{n-1-k}}{(n-1-k)!}e^{-\frac{\mu}{\lambda}}}{\underbrace{\sum_{i=0}^{n-1}\frac{(\frac{\mu}{\lambda})^{i}}{i!}e^{-\frac{\mu}{\lambda}}}_{Q(n-1,\frac{\mu}{\lambda})}} = \mu\,\frac{\frac{(\mu x + \frac{\mu}{\lambda})^{n-1}}{(n-1)!}e^{-(\mu x + \frac{\mu}{\lambda})}}{Q(n-1,\frac{\mu}{\lambda})} = \frac{\mu P(n-1, \mu x + \frac{\mu}{\lambda})}{Q(n-1,\frac{\mu}{\lambda})}.
Similarly, for the waiting time

f_W(n,x) = \mu\sum_{k=1}^{n-1}\frac{(\mu x)^{k-1}}{(k-1)!}e^{-\mu x}P_k(n-1) = \mu\,\frac{\sum_{i=0}^{n-2}\frac{(\mu x)^{i}}{i!}e^{-\mu x}\cdot\frac{(\frac{\mu}{\lambda})^{n-2-i}}{(n-2-i)!}e^{-\frac{\mu}{\lambda}}}{Q(n-1,\frac{\mu}{\lambda})} = \frac{\mu P(n-2, \mu x + \frac{\mu}{\lambda})}{Q(n-1,\frac{\mu}{\lambda})}.

To get the distribution function we have to calculate the integral

F_T(n,x) = \int_0^x f_T(n,t)\,dt.

Using the substitution y = \mu t + \frac{\mu}{\lambda}, that is t = \left(y - \frac{\mu}{\lambda}\right)\frac{1}{\mu}, \frac{dt}{dy} = \frac{1}{\mu}, we hence obtain

F_T(n,x) = \frac{\int_{\frac{\mu}{\lambda}}^{\mu x + \frac{\mu}{\lambda}}\frac{y^{n-1}}{(n-1)!}e^{-y}\,dy}{Q(n-1,\frac{\mu}{\lambda})} = \frac{\left[1 - \sum_{i=0}^{n-1}\frac{y^i}{i!}e^{-y}\right]_{\frac{\mu}{\lambda}}^{\mu x + \frac{\mu}{\lambda}}}{Q(n-1,\frac{\mu}{\lambda})} = 1 - \frac{Q(n-1, \mu x + \frac{\mu}{\lambda})}{Q(n-1,\frac{\mu}{\lambda})}.

Similarly, for the waiting time we have

F_W(n,x) = 1 - \frac{Q(n-2, \mu x + \frac{\mu}{\lambda})}{Q(n-1,\frac{\mu}{\lambda})}.
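Both distribution functions only require the cumulative Poisson distribution Q(k;λ). The following is a minimal sketch (not part of the original text) that evaluates F_T(n,x) and F_W(n,x) from the formulas just derived.

from math import exp

def poisson_cdf(k, a):
    """Q(k; a) = sum_{i=0}^{k} a^i e^{-a} / i!, computed iteratively."""
    term, total = exp(-a), exp(-a)
    for i in range(1, k + 1):
        term *= a / i
        total += term
    return total

def response_time_cdf(n, lam, mu, x):
    """F_T(n, x) = 1 - Q(n-1, mu*x + mu/lam) / Q(n-1, mu/lam)."""
    return 1 - poisson_cdf(n - 1, mu * x + mu / lam) / poisson_cdf(n - 1, mu / lam)

def waiting_time_cdf(n, lam, mu, x):
    """F_W(n, x) = 1 - Q(n-2, mu*x + mu/lam) / Q(n-1, mu/lam)."""
    return 1 - poisson_cdf(n - 2, mu * x + mu / lam) / poisson_cdf(n - 1, mu / lam)

if __name__ == "__main__":
    print(response_time_cdf(6, 1/40, 1/4, 10.0), waiting_time_cdf(6, 1/40, 1/4, 10.0))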

Now let us determine the distribution function by the help of the conditional distribution functions. Clearly we have to know the distribution function of the Erlang distributions, thus we can proceed as

F_T(x) = \sum_{k=0}^{n-1}\left(1 - \sum_{j=0}^{k}\frac{(\mu x)^j}{j!}e^{-\mu x}\right)P_k(n-1) = 1 - \sum_{k=0}^{n-1}\sum_{j=0}^{k}\frac{(\mu x)^j}{j!}e^{-\mu x}P_k(n-1)

= 1 - \sum_{k=0}^{n-1}Q(k,\mu x)P_k(n-1) = 1 - \sum_{k=0}^{n-1}Q(k,\mu x)\,\frac{\frac{(\frac{\mu}{\lambda})^{n-1-k}}{(n-1-k)!}e^{-\frac{\mu}{\lambda}}}{Q(n-1,\frac{\mu}{\lambda})} = 1 - \frac{Q(n-1, \mu x + \frac{\mu}{\lambda})}{Q(n-1,\frac{\mu}{\lambda})}.

Meanwhile we have used that

\int_0^{\lambda}\frac{t^j}{j!}e^{-t}\,dt = 1 - \sum_{i=0}^{j}\frac{\lambda^i}{i!}e^{-\lambda},

and thus

\sum_{j=0}^{l}\frac{\mu^{l-j}}{(l-j)!}e^{-\mu}\sum_{i=0}^{j}\frac{\lambda^i}{i!}e^{-\lambda}

can be written as

\sum_{j=0}^{l}\left(1 - \int_0^{\lambda}\frac{t^j}{j!}e^{-t}\,dt\right)\frac{\mu^{l-j}}{(l-j)!}e^{-\mu} = \sum_{j=0}^{l}\frac{\mu^{l-j}}{(l-j)!}e^{-\mu} - \int_0^{\lambda}\frac{(t+\mu)^l}{l!}e^{-(t+\mu)}\,dt = Q(l,\mu) - \int_{\mu}^{\lambda+\mu}\frac{y^l}{l!}e^{-y}\,dy

= Q(l,\mu) - \left[1 - \sum_{i=0}^{l}\frac{y^i}{i!}e^{-y}\right]_{\mu}^{\lambda+\mu} = Q(l,\lambda+\mu).

During the calculations we could see that the derivative of Q(k,t) with respect to t is -P(k,t), which can be used to find the density function, that is

f_T(x) = \frac{\mu P(n-1, \mu x + \frac{\mu}{\lambda})}{Q(n-1,\frac{\mu}{\lambda})}.

Generating Function of the Customers in the System

Using the definition, the generating function G_N(s) can be calculated as

G_N(s) = \sum_{k=0}^{n}s^k P_k = \frac{\sum_{k=0}^{n}s^k\frac{(\frac{\mu}{\lambda})^{n-k}}{(n-k)!}e^{-\frac{\mu}{\lambda}}}{Q(n,\frac{1}{\rho})} = \frac{s^n\sum_{k=0}^{n}\frac{(\frac{1}{\rho s})^{n-k}}{(n-k)!}e^{-\frac{1}{\rho}}}{Q(n,\frac{1}{\rho})} = s^n e^{-\frac{1}{\rho}\left(1-\frac{1}{s}\right)}\frac{Q\left(n,\frac{1}{\rho s}\right)}{Q\left(n,\frac{1}{\rho}\right)}.

This could be derived in the following way as well. Let us denote by F the number of customers in the source. As we have proved earlier, its distribution can be obtained as the distribution of an Erlang loss system with traffic intensity \frac{1}{\rho}. Since the generating function of this system has already been obtained, we can use this fact. Thus

G_N(s) = E(s^N) = E(s^{n-F}) = s^n E(s^{-F}) = s^n G_F\!\left(\frac{1}{s}\right) = s^n e^{-\frac{1}{\rho}\left(1-\frac{1}{s}\right)}\frac{Q\left(n,\frac{1}{\rho s}\right)}{Q\left(n,\frac{1}{\rho}\right)}.

To verify the formula let us compute the mean number of customers in the system. By the property of the generating function we have

\overline{N}(n) = G'_{N(n)}(1),

G'_{N(n)}(s) = n s^{n-1}G_{F(n)}\!\left(\frac{1}{s}\right) + s^n G'_{F(n)}\!\left(\frac{1}{s}\right)\left(-\frac{1}{s^2}\right),

thus

\overline{N}(n) = nG_{F(n)}(1) - G'_{F(n)}(1) = n - \frac{1}{\rho}\left(1 - B\!\left(n,\frac{1}{\rho}\right)\right) = n - \frac{U_S(n)}{\rho}.

Laplace-transform of the Response Time and Waiting Time

Solution 1
By the law of total Laplace-transforms we have

L_T(s) = \sum_{k=0}^{n-1}\left(\frac{\mu}{\mu+s}\right)^{k+1}P_k(n-1),

since the conditional response time is Erlang distributed with parameters (k+1, \mu).
Substituting P_k(n-1) we get

L_T(s) = \sum_{k=0}^{n-1}\left(\frac{\mu}{\mu+s}\right)^{k+1}\frac{\left(\frac{\mu}{\lambda}\right)^{n-1-k}}{(n-1-k)!}\cdot\frac{e^{-\frac{\mu}{\lambda}}}{Q\left(n-1,\frac{\mu}{\lambda}\right)}

= \left(\frac{\mu}{\mu+s}\right)^{n}\frac{e^{-\frac{\mu}{\lambda}}}{Q\left(n-1,\frac{\mu}{\lambda}\right)}\sum_{k=0}^{n-1}\frac{\left(\frac{\mu+s}{\lambda}\right)^{n-1-k}}{(n-1-k)!}

= \left(\frac{\mu}{\mu+s}\right)^{n}\frac{e^{-\frac{\mu}{\lambda}}\,e^{\frac{\mu+s}{\lambda}}\,Q\left(n-1,\frac{\mu+s}{\lambda}\right)}{Q\left(n-1,\frac{\mu}{\lambda}\right)}

= \left(\frac{\mu}{\mu+s}\right)^{n}e^{\frac{s}{\lambda}}\frac{Q\left(n-1,\frac{\mu+s}{\lambda}\right)}{Q\left(n-1,\frac{\mu}{\lambda}\right)}.

Solution 2
Let us calculate L_T(s) by the help of the density function. Since the denominator is a constant, we have to determine the Laplace-transform of the numerator, that is

L_{Num}(s) = \int_0^{\infty}\mu\frac{\left(\mu x + \frac{\mu}{\lambda}\right)^{n-1}}{(n-1)!}e^{-\left(\mu x + \frac{\mu}{\lambda}\right)}e^{-sx}\,dx = e^{-\frac{\mu}{\lambda}}\int_0^{\infty}\mu\frac{\left(\mu x + \frac{\mu}{\lambda}\right)^{n-1}}{(n-1)!}e^{-(\mu+s)x}\,dx.

By using the binomial theorem we get

L_{Num}(s) = \frac{\mu e^{-\frac{\mu}{\lambda}}}{(n-1)!}\int_0^{\infty}\sum_{k=0}^{n-1}\binom{n-1}{k}(\mu x)^k\left(\frac{\mu}{\lambda}\right)^{n-1-k}e^{-(\mu+s)x}\,dx

= e^{-\frac{\mu}{\lambda}}\sum_{k=0}^{n-1}\frac{\left(\frac{\mu}{\lambda}\right)^{n-1-k}}{(n-1-k)!}\int_0^{\infty}\mu\frac{(\mu x)^k}{k!}e^{-(\mu+s)x}\,dx = e^{-\frac{\mu}{\lambda}}\sum_{k=0}^{n-1}\frac{\left(\frac{\mu}{\lambda}\right)^{n-1-k}}{(n-1-k)!}\left(\frac{\mu}{\mu+s}\right)^{k+1}

= e^{-\frac{\mu}{\lambda}}\left(\frac{\mu}{\mu+s}\right)^{n}\sum_{k=0}^{n-1}\frac{\left(\frac{\mu+s}{\lambda}\right)^{n-1-k}}{(n-1-k)!} = e^{-\frac{\mu}{\lambda}}\left(\frac{\mu}{\mu+s}\right)^{n}Q\left(n-1,\frac{\mu+s}{\lambda}\right)e^{\frac{\mu+s}{\lambda}} = \left(\frac{\mu}{\mu+s}\right)^{n}Q\left(n-1,\frac{\mu+s}{\lambda}\right)e^{\frac{s}{\lambda}}.

Since

L_T(s) = \frac{L_{Num}(s)}{Q\left(n-1,\frac{\mu}{\lambda}\right)},

thus

L_T(s) = \left(\frac{\mu}{\mu+s}\right)^{n}e^{\frac{s}{\lambda}}\frac{Q\left(n-1,\frac{\mu+s}{\lambda}\right)}{Q\left(n-1,\frac{\mu}{\lambda}\right)}.
Solution 3
The Laplace-transform of the numerator can be obtained as

L_{Num}(s) = \int_0^{\infty}\mu\frac{\left(\mu x + \frac{\mu}{\lambda}\right)^{n-1}}{(n-1)!}e^{-\left(\mu x + \frac{\mu}{\lambda}\right)}e^{-sx}\,dx.

Substituting t = \mu x + \frac{\mu}{\lambda} we get x = \frac{1}{\mu}t - \frac{1}{\lambda}, \frac{dx}{dt} = \frac{1}{\mu}, and thus

L_{Num}(s) = \int_{\frac{\mu}{\lambda}}^{\infty}\mu\frac{t^{n-1}}{(n-1)!}e^{-t}e^{-\frac{s}{\mu}\left(t-\frac{\mu}{\lambda}\right)}\frac{1}{\mu}\,dt = e^{\frac{s}{\lambda}}\int_{\frac{\mu}{\lambda}}^{\infty}\frac{t^{n-1}}{(n-1)!}e^{-\left(1+\frac{s}{\mu}\right)t}\,dt = e^{\frac{s}{\lambda}}\left(\frac{\mu}{\mu+s}\right)^{n-1}\int_{\frac{\mu}{\lambda}}^{\infty}\frac{\left(\left(1+\frac{s}{\mu}\right)t\right)^{n-1}}{(n-1)!}e^{-\left(1+\frac{s}{\mu}\right)t}\,dt.

Substituting again y = \frac{\mu+s}{\mu}t, dy = \frac{\mu+s}{\mu}\,dt, thus

L_{Num}(s) = e^{\frac{s}{\lambda}}\left(\frac{\mu}{\mu+s}\right)^{n}\int_{\frac{\mu+s}{\lambda}}^{\infty}\frac{y^{n-1}}{(n-1)!}e^{-y}\,dy = e^{\frac{s}{\lambda}}\left(\frac{\mu}{\mu+s}\right)^{n}Q\left(n-1,\frac{\mu+s}{\lambda}\right),

therefore

L_T(s) = \frac{L_{Num}(s)}{Q\left(n-1,\frac{\mu}{\lambda}\right)} = e^{\frac{s}{\lambda}}\left(\frac{\mu}{\mu+s}\right)^{n}\frac{Q\left(n-1,\frac{\mu+s}{\lambda}\right)}{Q\left(n-1,\frac{\mu}{\lambda}\right)}.

That is, all three solutions give the same result. Thus, in principle, the higher moments of the response time can be evaluated.

Since L_T(s) = L_W(s)\cdot\frac{\mu}{\mu+s}, we have

L_W(s) = \left(\frac{\mu}{\mu+s}\right)^{n-1}e^{\frac{s}{\lambda}}\frac{Q\left(n-1,\frac{\mu+s}{\lambda}\right)}{Q\left(n-1,\frac{\mu}{\lambda}\right)}.

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

Example 26 Consider 6 machines with mean lifetime of 40 hours. Let their mean repair
time be 4 hours. Find the performance measures.

Solution: \lambda = \frac{1}{40} per hour, \mu = \frac{1}{4} per hour, \rho = \frac{\lambda}{\mu} = \frac{4}{40} = 0.1, n = 6, P_0 = 0.484.

Failed machines    0      1      2      3      4      5      6
Waiting machines   0      0      1      2      3      4      5
P_k                0.484  0.290  0.145  0.058  0.017  0.003  0.000

\overline{Q} = 0.324, \quad U_s = 0.516, \quad \overline{W} = 2.51 \text{ hours}, \quad \overline{T} = \overline{W} + \frac{1}{\mu} = 2.51 + 4 = 6.51 \text{ hours},

e = 40 \text{ hours}, \quad U_t = 0.86, \quad m = n \times U_t = 5.16, \quad \overline{N} = 6 - 5.16 = 0.84,

E\delta = \frac{0.516}{6 \times \frac{1}{40} \times 0.484} = \frac{4 \times 5.16}{6 \times 0.484} \approx 7.1 \text{ hours}.
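The numbers of this example can be reproduced with the formulas of this section; the short sketch below (values rounded, names illustrative) is only a check.

from math import factorial

n, lam, mu = 6, 1/40, 1/4
rho = lam / mu
w = [factorial(n) // factorial(n - k) * rho**k for k in range(n + 1)]
P0 = 1 / sum(w)
P = [x * P0 for x in w]                      # 0.484, 0.290, 0.145, 0.058, ...

Us = 1 - P0                                  # ~0.516
N = n - Us / rho                             # ~0.84
Q = n - (1 + 1 / rho) * Us                   # ~0.32
W = (1 / mu) * (n / Us - (1 + rho) / rho)    # ~2.5 hours
T = W + 1 / mu                               # ~6.5 hours
Edelta = Us / (n * lam * P0)                 # ~7.1 hours
print([round(p, 3) for p in P], round(W, 2), round(T, 2), round(Edelta, 2))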

Example 27 Change the mean lifetime to 2 hours in the previous Example. Find the
performance measures.

Solution: \frac{1}{\lambda} = 2 hours, \frac{1}{\mu} = 4 hours, \frac{\lambda}{\mu} = 2, n = 6, P_0 = \frac{1}{75973}, which shows that a single repairman is not enough. We should increase the number of repairmen.

Failed machines    0        1         2      3      4      5      6
Waiting machines   0        0         1      2      3      4      5
P_k                1/75973  12/75973  0.001  0.012  0.075  0.303  0.606

U_s \approx 0.999, \quad \overline{Q} \approx 4.5, \quad \overline{W} \approx 22.5 \text{ hours}, \quad \overline{T} = 26.5 \text{ hours},

e = 2 \text{ hours}, \quad U_t \approx 0.08, \quad m \approx 0.5, \quad \overline{N} \approx 5.5, \quad E\delta \approx \infty.

All these measures demonstrate what we expected, because \varrho = \frac{\lambda}{\mu} = 2 is greater than 1. To decide how many repairmen are needed there are different criteria, as we shall see in Section 3.4. To avoid this congestion we must ensure the condition \frac{\lambda}{r\mu} < 1, where r is the number of repairmen.

3.3 The Heterogeneous ~M/~M/1/n/n Queue
The results of this section have been published in the paper of Csige and Tomkó [24].
The reason of its introduction is to show the importance of the service discipline.

Let us consider n heterogeneous machines with exponentially distributed operating and


repair time with parameter λk > 0 and µk > 0, respectively for the k th machine, k =
1, · · · , n. The failures are repaired by a single repairman according to Processor Sharing,
FIFO, and Preemptive Priory disciplines. All involved random variables are supposed to
be independent of each other.
Let N (t), denote the number of failed machines at time t. Due to the heterogeneity
of the machines this information is not enough to describe the behavior of the system
because we have to know which machine is under service.Thus let us introduce an N (t)-
dimensional vector with components x1 (t) , . . . , xv(t) (t) indicating the indexes of the
failed machines. Hence for N (t) > 0 using FIFO discipline machine with index x1 (t) is
under service. Under Processor Sharing discipline when all machines are serviced by a
proportional service rate, that is if N (t) = k then the proportion is 1/k the order of
indexes (x1 (t) , . . . , xn (t)) is not
 important, but a logical treatment we order them as
x1 (t) < x2 (t) < . . . < xv(t) (t) . In the case of Preemptive Priority assuming that the
smaller index means higher priority we use the same ordering as before mentioning that
in this case the machine with the rst index is under service since he has the highest
priority among the failed machines.
Due to the exponential distributions the process

X(t) (t) = v (t) ; x1 (t) , . . . , xv(t) (t) , (t ≥ 0) ,

is a continuous-time Markov where the ordering of x1 (t) , . . . , xv(t) (t) depends ot the ser-
vice discipline.

Let us consider the Processor Sharing service discipline.


Since X(t) is a finite-state Markov chain, if the parameters \lambda_k, \mu_k, (1 \le k \le n) are all positive then it is ergodic and hence the steady-state distribution exists. Of course this heavily depends on the service discipline.
Let the distribution of the Markov chain be denoted by

P_0(t),\ P_{i_1,\ldots,i_k}(t).

It is not difficult to see that for this distribution we have


" n # n
0
X X
P0 (t) = − λi P0 (t) + µi Pi (t) ,
i=1 i=1
k
0
X
Pi1 ,...,ik (t) = λir Pi1 ,...,ir−1 ,ir+1 ,...,ik (t) −
r=1
" k
#
1X X µr
− νi1 ...ik + µir Pi1 ,...,ik (t) + P 0 0 0 (t)
k r=1 r6=i ...i
k + 1 i1 i2 ...ik+1
1 k

141
0 0
where i1 , . . . , ik+1 is the ordering of the indexes i1 , . . . , ik , r and
X
νi1 ...ik = λr , k = 1, . . . , n − 1.
r6=i1 ...ik

n n
# "
0
X 1X
P1,...,n (t) = λr P1,...,r−1,r+1,]...,n (t) − µr P1,...,n (t).
r=1
n r=1

The steady-state distribution which is denoted by

P0 = lim Po (t) ,
t→∞

Pi1 ...ik = lim Pi1 ...ik (t)


t→∞

(1 ≤ i1 < i2 < . . . < ik ≤ n, 1 ≤ k ≤ n).


is the solution of the following set of equations
" n # n
X X
λi P0 = µ i Pi ,
i=1 i=1
" k
# k
1X X
νi1 ...ik + µi Pi1 ...ik = λir Pi1 ...ir−1 ir+1 ...ik +
k r=1 r r=1

X µr
+ P0 0 0 ,
r6=i1 ...ik
k + 1 i1 i2 ...ik+1
" n
# n
1X X
µr P1,...,n = λr P1...,r−1,r+1,...,n
n r=1 r=1

with normalizing condition


X
P0 + Pi1 ...ik = 1

where the summation is mean by all possible combinations of the indexes.

The surprising fact is that it can be obtained as

P_{i_1,\ldots,i_k} = C\,k!\prod_{r=1}^{k}\frac{\lambda_{i_r}}{\mu_{i_r}},

where C can be calculated from the normalizing condition.

For the FIFO and Preemptive Priority disciplines the balance equations and the solution are rather complicated and they are omitted. The interested reader is referred to the cited paper. However, for all cases the performance measures can be computed in the same way.

Performance Measures

ˆ Utilization of the server

U_s = \frac{E(\delta)}{E(\delta) + \left(\sum_{i=1}^{n}\lambda_i\right)^{-1}} = 1 - P_0.

ˆ Utilization of the machines

Let U^{(i)} denote the utilization of machine i. Then

U^{(i)} = \frac{\frac{1}{\lambda_i}}{\frac{1}{\lambda_i} + \overline{T}_i} = 1 - P^{(i)},

where \overline{T}_i denotes the mean response time for machine i, that is, the mean time while it is broken, and

P^{(i)} = \sum_{k=1}^{n}\ \sum_{i\in(i_1,\ldots,i_k)} P_{i_1,\ldots,i_k}

is the probability that the ith machine is failed. Thus

\overline{T}_i = \frac{P^{(i)}}{\lambda_i\left(1 - P^{(i)}\right)},

and in the FIFO case for the mean waiting time we have

\overline{W}_i = \overline{T}_i - \frac{1}{\mu_i}.

Furthermore, it is easy to see that the mean number of failed machines can be obtained as

\overline{N} = \sum_{i=1}^{n} P^{(i)}.

In addition

\sum_{i=1}^{n}\lambda_i\left(1 - P^{(i)}\right)\overline{T}_i = \sum_{i=1}^{n} P^{(i)},

which is Little's formula for heterogeneous customers. In particular, for the homogeneous case we have

(n - \overline{N})\lambda\overline{T} = \overline{N},

which was proved earlier.
Various generalized versions of the machine interference problem with heterogeneous ma-
chines can be found in Pósafalvi and Sztrik [81, 82].

Let us see some sample numerical results for the illustration of the influence of the service disciplines on the main performance measures.

Input parameters Machine utilizations
FIFO PROC-SHARING PRIORITY
n=3
λ1 = 0.3 µ1 = 0.7 0.57 0.57 0.70
λ2 = 0.3 µ2 = 0.7 0.75 0.57 0.74 0.57 0.74 0.58
λ3 = 0.3 µ3 = 0.7 0.57 0.57 0.44
Overall machine utilization 1.72 1.72 1.72

n=3
λ1 = 0.5 µ1 = 0.9 0.48 0.51 0.64
λ2 = 0.3 µ2 = 0.7 0.75 0.56 0.76 0.56 0.77 0.56
λ3 = 0.2 µ3 = 0.5 0.62 0.58 0.44
Overall machine utilization 1.669 1.666 1.656

n=4
λ1 = 0.5 µ1 = 0.9 0.38 0.429 0.64
λ2 = 0.4 µ2 = 0.7 0.41 0.423 0.49
0.903 0.906 0.922
λ3 = 0.3 µ3 = 0.6 0.46 0.451 0.36
λ4 = 0.2 µ4 = 0.5 0.54 0.500 0.24
Overall machine utilization 1.814 1.804 1.751

Table 3.1: Numerical results

3.4 The M/M/r/n/n Queue


Consider the homogeneous finite-source model with r, r \le n, independent servers. Denoting by N(t) the number of customers in the system at time t, similarly to the previous sections it can easily be seen that it is a birth-death process with rates

\lambda_k = (n-k)\lambda, \quad 0 \le k \le n-1, \qquad \mu_k = k\mu \ \text{ for } 1 \le k \le r, \quad \mu_k = r\mu \ \text{ for } r < k \le n.
The steady-state distribution can be obtained as

P_k = \binom{n}{k}\rho^k P_0, \quad 0 \le k \le r, \qquad P_k = \frac{k!}{r!\,r^{k-r}}\binom{n}{k}\rho^k P_0, \quad r \le k \le n,

with normalizing condition

\sum_{k=0}^{n} P_k = 1.

To determine P0 we can use the following simpler recursion.

Let a_k = \frac{P_k}{P_0}, and using the relation between consecutive elements of the birth-death process, our procedure operates as follows:

a_0 = 1,
a_k = \frac{n-k+1}{k}\varrho\,a_{k-1}, \quad 1 \le k \le r-1,
a_k = \frac{n-k+1}{r}\varrho\,a_{k-1}, \quad r \le k \le n.

Since

\sum_{k=0}^{n} P_k = 1

must be satisfied, we get

P_0 = 1 - \sum_{k=1}^{n} P_k.

Dividing both sides by P_0 we have

1 = \frac{1}{P_0} - \sum_{k=1}^{n}\frac{P_k}{P_0} = \frac{1}{P_0} - \sum_{k=1}^{n} a_k,

hence

P_0 = \frac{1}{1 + \sum_{k=1}^{n} a_k}.

Finally

P_k = a_k P_0 = P_k(n).
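The recursion for a_k is well suited for computation. The following Python sketch (not part of the original text; the names are illustrative) builds the distribution this way and evaluates some of the measures listed below.

def mmrnn_distribution(n, r, rho):
    """Steady-state distribution of the M/M/r/n/n machine interference model
    via the recursion a_k = P_k / P_0 described above."""
    a = [1.0]
    for k in range(1, n + 1):
        divisor = k if k <= r else r          # k*mu below r servers, r*mu above
        a.append(a[-1] * (n - k + 1) * rho / divisor)
    P0 = 1.0 / sum(a)
    return [x * P0 for x in a]

def mmrnn_measures(n, r, rho):
    P = mmrnn_distribution(n, r, rho)
    N = sum(k * p for k, p in enumerate(P))                  # mean in system
    Q = sum((k - r) * p for k, p in enumerate(P) if k > r)   # mean queue length
    rbar = sum(min(k, r) * p for k, p in enumerate(P))       # mean busy servers
    S = r - rbar                                             # mean idle servers
    m = n - N                                                # mean in the source
    return dict(P0=P[0], N=N, Q=Q, rbar=rbar, S=S, m=m, Ut=m / n, Us=rbar / r)

if __name__ == "__main__":
    print(mmrnn_measures(n=20, r=3, rho=0.1))   # the data of Example 28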
Let us determine the main performance measures

ˆ Mean and variance of the number of customers in the system can be computed as

\overline{N} = \sum_{k=0}^{n} k P_k, \qquad Var(N) = \sum_{k=0}^{n} k^2 P_k - (\overline{N})^2.

ˆ Mean and variance of the queue length can be obtained by

\overline{Q} = \sum_{k=r+1}^{n}(k-r)P_k, \qquad Var(Q) = \sum_{k=r+1}^{n}(k-r)^2 P_k - (\overline{Q})^2.

ˆ Mean number of customers in the source can be calculated by

m = n − N.

ˆ Utilization of the system is computed by

Ur = 1 − P0 .

ˆ Mean busy period of the system can be obtained by

E\delta(n) = \frac{1 - P_0}{n\lambda P_0} = \frac{U_r}{n\lambda P_0}.

ˆ Mean number of busy servers can be calculated by

\overline{r} = \sum_{k=1}^{r} k P_k + \sum_{k=r+1}^{n} r P_k = \sum_{k=1}^{r-1} k P_k + r\sum_{k=r}^{n} P_k.

Furthermore,

U_s = \frac{\sum_{k=1}^{r} k P_k + r\sum_{k=r+1}^{n} P_k}{r} = \frac{\overline{r}}{r}.

ˆ Mean number of idle servers

\overline{S} = r - \overline{r}.
An additional relation is

\overline{N} = \sum_{k=1}^{r} k P_k + \sum_{k=r+1}^{n}(k-r)P_k + r\sum_{k=r+1}^{n} P_k = \overline{Q} + \overline{r} = \overline{Q} + r - \overline{S} = n - m.

ˆ Utilization of the sources can be calculated by

U_t = \sum_{k=0}^{n}\frac{n-k}{n}P_k = \frac{m}{n}.

ˆ The mean waiting and response times can be derived from

U_t = \frac{\frac{1}{\lambda}}{\frac{1}{\lambda} + \overline{W} + \frac{1}{\mu}} = \frac{m}{n},

thus for the mean waiting time we have

\overline{W} = \frac{\overline{N}}{m\lambda} - \frac{1}{\mu} = \frac{1}{\mu}\left(\frac{\overline{N}}{m\varrho} - 1\right).

Hence the mean response time is

\overline{T} = \overline{W} + \frac{1}{\mu} = \frac{\overline{N}}{m\lambda},

consequently we get

m\lambda\overline{T} = \overline{N},

which is the well-known Little's formula. Thus we get

m\lambda\left(\overline{W} + \frac{1}{\mu}\right) = \overline{Q} + \overline{r},

that is

m\lambda\overline{W} + m\varrho = \overline{Q} + \overline{r}.

We show that

\overline{r} = m\varrho,

because from this it follows that

m\lambda\overline{W} = \overline{Q},

which is Little's formula for the waiting time.
Since

P_{k+1} = \frac{(n-k)\lambda}{\mu_{k+1}}P_k,

where

\mu_j = j\mu \ \text{ for } j \le r, \qquad \mu_j = r\mu \ \text{ for } j > r,

and furthermore, as is well known,

\overline{r} = \sum_{k=1}^{r-1} k P_k + r\sum_{k=r}^{n} P_k,

we can proceed as

\varrho m = \sum_{k=0}^{n}\varrho(n-k)P_k = \sum_{k=0}^{r-1}\varrho(n-k)P_k + \sum_{k=r}^{n-1}\varrho(n-k)P_k

= \sum_{k=0}^{r-1}\frac{\lambda(n-k)(k+1)}{(k+1)\mu}P_k + r\sum_{k=r}^{n-1}\frac{\lambda(n-k)}{r\mu}P_k

= \sum_{k=0}^{r-1}(k+1)P_{k+1} + r\sum_{k=r}^{n-1}P_{k+1} = \sum_{j=1}^{r} jP_j + r\sum_{j=r+1}^{n}P_j = \sum_{j=1}^{r-1} jP_j + r\sum_{j=r}^{n}P_j = \overline{r}.

Finally, we get

\varrho m = \overline{r},

or in another form

\lambda m = \mu\overline{r},

that is

mean arrival rate = mean service rate,

which was expected because the system is in steady state. Consequently

\overline{W} = \frac{\overline{Q}}{m\lambda} = \frac{\overline{Q}}{\mu\overline{r}}.

ˆ Mean idle period of a server can be computed as follows.
If the idle servers start their busy period in the order as they nished the previous
busy period, then their activity can be written as follows. If a server becomes idle
and nds other j − 1 servers idle, then his busy period start at the instant of the
arrival of the j th customer.

r−j
ēj = , j = 0,1,...,r-1
(n − j)λ
Pj (n − 1) Πj (n)
aj = Pr−1 = Pr−1
i=0 Pi (n − 1) i=0 Πi (n)
(n − j)Pj (n)
Πj (n) = Pn−1 = Pj (n − 1)
i=0 (n − i)Pi (n)
r−1
X
ē = ēj aj .
j=0

ˆ Mean busy period of the servers can be calculated as follows.


Since

U_s = \frac{\overline{r}}{r} = \frac{E\delta}{\overline{e} + E\delta},

thus

E\delta = \frac{U_s}{1 - U_s}\,\overline{e}.

Distribution Function of the Waiting and Response Time

This subsection is devoted to the most complicated problem of this system, namely to
the determination of the distribution function of the waiting and response times. First
the density function is calculated and then we obtain the distribution function. You may
remember that the distribution has been given in the form

 
n k
ρ P0


k





Pk =  
n
k!ρk



k


P0 .


r!rk−r
Introducing z = ρ1 , this can be written as
 
n −k
z P0


 k




Pk =  
n
k!z −k



k


P0


r!rk−r
thus
n
k!rr (rz)−k

k
Pk = P0
r!
n!rr (rz)n−k · e−rz
= P0
(n − k)!r!(rz)n · e−rz
rr P (n − k, rz)
= P0 , k ≥ r.
r! P (n, rz)

Since \Pi_k(n) = P_k(n-1), we have

\Pi_k(n) = \frac{r^r}{r!}\cdot\frac{P(n-1-k, rz)}{P(n-1, rz)}\,P_0(n-1), \quad \text{for } k = r, \ldots, n-1.

It is easy to see that the probability of waiting is

P_W = P(W > 0) = \sum_{k=r}^{n-1}\Pi_k(n) = \sum_{k=r}^{n-1}P_k(n-1).

Inserting z this can be rewritten as

P_W = \sum_{k=r}^{n-1}\frac{r^r}{r!}\cdot\frac{P(n-1-k, rz)}{P(n-1, rz)}\,P_0(n-1) = \frac{r^r}{r!}\cdot\frac{\sum_{i=0}^{n-1-r}P(i, rz)}{P(n-1, rz)}\,P_0(n-1) = \frac{r^r}{r!}\cdot\frac{Q(n-1-r, rz)}{P(n-1, rz)}\,P_0(n-1).

We show that the distribution function of the waiting time can be calculated as

F_W(x) = 1 - \frac{r^r\,Q(n-1-r, r(z+\mu x))}{r!\,P(n-1, rz)}\,P_0(n-1),

and thus

F_W(0) = 1 - \frac{r^r\,Q(n-1-r, rz)}{r!\,P(n-1, rz)}\,P_0(n-1),

which is the probability that an arriving customer finds an idle server. For the density function we have

f_W(0) = 1 - P_W,

f_W(x) = \mu r\,\frac{r^r\,P(n-1-r, r(z+\mu x))}{r!\,P(n-1, rz)}\,P_0(n-1), \quad x > 0.

If we calculate the integral \int_{0+}^{\infty} f_W(x)\,dx, that is, the atom at zero is not considered, then

Z∞ Z∞
rr P0 (n − 1) (r(z + µt))n−1−r −r(z+µt)
fW (x)dx = · µr e dt.
r!P (n − 1, rz) (n − 1 − r)!
0+ 0+

By the substitution y = r(z + µt) we have dt


dy
= 1
µ
for the integral part we get

Z∞
y n−1−r
e−y dy = Q(n − 1 − r, rz)
(n − 1 − r)!
rz

that is
Z∞
rr Q(n − 1 − r, rz)
fW (x)dx = P0 (n − 1) = PW ,
r!P (n − 1, rz)
0+

as it was expected. Thus

Z∞ Z∞
fW (x)dx = fW (0) + fW (x)dx = 1.
0 0+

Let us determine the density function for x > 0. That is

n−1
X (rµx)k−r −rµx
fW (x) = rµ e Pk (n − 1)
k=r
(k − r)!
n−1
X (rµx)k−r −rµx rr P (n − 1 − k, rz)
= rµ e P0 (n − 1)
k=r
(k − r)! r! P (n − 1, rz)
n−1
rµrr P0 (n − 1) X (rµx)k−r (rz)n−1−k −r(z+µx)
= e
r!P (n − 1, rz) k=r (k − r)! (n − 1 − k)!
n−1−r
rµrr P0 (n − 1)e−r(z+µx) X (rµx)i (rz)n−1−r−i
=
r!P (n − 1, rz) i=0
i! (n − 1 − r − i)!
rµrr P0 (n − 1) (r(z + µx))n−1−r −r(z+µx)
= ·e
r!P (n − 1, rz) (n − 1 − r)!
rµrr P0 (n − 1)P (n − 1 − r, r(z + µx))
= ,
r!P (n − 1, rz)

as we got earlier, but we have to remember that

fW (0) = 1 − PW .

Therefore
Z∞
P (W > x) = fW (t)dt
x
Z∞
rr P0 (n − 1) (r(z + µt))n−1−r −r(z+µt)
= rµ e dt
r!P (n − 1, rz) (n − 1 − r)!
x
Z∞
rr P0 (n − 1) y n−1−r
= e−y dy
r!P (n − 1, rz) (n − 1 − r)!
r(z+µx)
r
r P0 (n − 1)Q(n − 1 − r, r(z + µx))
= .
r!P (n − 1, rz)
Thus for the distribution function we have

FW (x) = 1 − P (W > x)

which was obtained earlier.

To verify the correctness of the formula let r = 1. After substitution we get

P(W > x) = \frac{P_0(n-1)\,Q(n-2, z+\mu x)}{P(n-1, z)},

but

P_0(n-1) = \frac{P(n-1, z)}{Q(n-1, z)},

thus

P(W > x) = \frac{Q(n-2, z+\mu x)}{Q(n-1, z)}.
The derivation of the distribution function of the response time is analogous. Because the calculation is rather lengthy it is omitted, but it can be found in the Solution Manual for Kobayashi [62].
As can be seen in Allen [3] and Kobayashi [62], the following formulas are valid for r \ge 2:

F_T(x) = 1 - C_1 e^{-\mu x} + C_2\,Q(n-r-1, r(z+\mu x)),

where

C_1 = 1 + C_2\,Q(n-r-1, rz), \qquad C_2 = \frac{r^r\,P_0(n-1)}{r!\,(r-1)(n-r-1)!\,P(n-1, rz)}.

Hence the density function can be obtained as

f_T(x) = \mu C_1 e^{-\mu x} - C_2\,r\mu\,P(n-r-1, r(z+\mu x)).
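The tail probability P(W > x) of this section can be evaluated directly from the formula P(W > x) = \frac{r^r\,Q(n-1-r, r(z+\mu x))}{r!\,P(n-1, rz)}\,P_0(n-1). The following sketch (illustrative only) recomputes P_0(n-1) with the a_k recursion of this chapter and uses elementary Poisson routines.

from math import exp, factorial

def poisson_pmf(k, a):
    """P(k; a) = a^k e^{-a} / k!"""
    p = exp(-a)
    for i in range(1, k + 1):
        p *= a / i
    return p

def poisson_cdf(k, a):
    """Q(k; a) = sum_{i=0}^{k} P(i; a)."""
    return sum(poisson_pmf(i, a) for i in range(k + 1))

def p0(n, r, rho):
    """Empty-system probability of M/M/r/n/n via the a_k recursion."""
    a, total = 1.0, 1.0
    for k in range(1, n + 1):
        a *= (n - k + 1) * rho / (k if k <= r else r)
        total += a
    return 1.0 / total

def waiting_tail(n, r, lam, mu, x):
    """P(W > x) for the M/M/r/n/n system (n sources, r repairmen), x >= 0."""
    rho, z = lam / mu, mu / lam
    c = r**r * p0(n - 1, r, rho) / (factorial(r) * poisson_pmf(n - 1, r * z))
    return c * poisson_cdf(n - 1 - r, r * (z + mu * x))

if __name__ == "__main__":
    # waiting_tail(..., 0.0) is the probability of waiting P_W.
    print(waiting_tail(20, 3, 1/50, 1/5, 0.0), waiting_tail(20, 3, 1/50, 1/5, 1.0))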

It should be noted that for the normalizing constant we have the following recursion
r−1 n−1
 
n nX 1 1
P0−1 (n) = 1 + P0−1 (n − 1) + i
− , n > r,
rz z i=0 zi i+1 r

with initial value  r


1
P0−1 (r) = 1+ . r ≥ 1.
z
Since the conditional waiting time is Erlang distributed, it is easy to see that

K−1
2
X (k − r + 1) + (k − r + 1)2
E(W ) = Πk , V ar(W ) = E(W 2 ) − (E(W ))2 ,
k=r
(rµ)2

V ar(T ) = V ar(W ) + 1/µ2 .

Laplace-transform of the Waiting and Response Times

First determine the Laplace-transform of the waiting time.


It is easy to see that by using the theorem of total Laplace-transform we have
n−1  k−r+1
X rµ
LW (s) = 1 − PW + Pk (n − 1).
k=r
rµ + s

We calculate this formula step-by-step. Namely we can proceed as

n−1  k−r+1 r
X rµ r P0 (n − 1)P (n − 1 − k, rz)
k=r
rµ + s r!P (n − 1, rz)
n−1  k−r+1
rr P0 (n − 1)e−rz X rµ (rz)n−1−k
= .
r!P (n − 1, rz) k=r rµ + s (n − 1 − k)!

Then
n−1  k−r+1
X rµ (rz)n−1−k
k=r
rµ + s (n − 1 − k)!
n−1−r
X  rµ i+1 (rz)n−1−r−i
= ,
i=0
rµ + s (n − 1 − r − i)!

where i = k − r. Thus the last equation can be written as


n−r n−1−r rµ+z n−1−r−i n−r
    
rµ X
λ rµ rµ+z rµ + z
· = e λ Q n − 1 − r, .
rµ + s i=0
(n − 1 − r − i)! rµ + s λ

Finally collecting all terms we get
n−r r
r P0 (n − 1)e−rz rµ+s
  
rµ rµ + s
LW (s) = 1 − PW + e λ Q n − 1 − r,
rµ + s r!P (n − 1, rz) λ
r λs rµ+s  n−r

r e P0 (n − 1)Q n − 1 − r, λ rµ
= 1 − PW + .
r!P (n − 1, rz) rµ + s

To verify the correctness of the formula let r = 1.


Thus after inserting we have
n−1 s
e λ P0 (n − 1)Q n − 2, µ+s
 
µ λ
LW (s) = P0 (n − 1) +
µ+s P (n − 1, z)
n−1 s
e λ Q n − 2, µ+s
 
P (n − 1, z) µ λ
= +
Q(n − 1, z) µ+s Q(n − 1, z)
 n−1    n−1 
µ µ+s
 z µ
 
µ+s s µ+s 
= e−z + e λ Q n − 2,
Q(n − 1, z) (n − 1)! λ
 

 n−1 "
µ µ+s n−1 − µ+s
 s  #
µ+s λ
e λ eλ s µ+s
= + e Q n − 2,
λ
Q(n − 1, z) (n − 1)! λ
 n−1 s
µ
e λ Q n − 1, µ+s

µ+s λ
= ,
Q (n − 1, z)

as we got earlier.

Keeping in mind the relation between the waiting time and the response time and the
properties of the Laplace-transform we have
 
µ
LT (s) = LW (s),
µ+s

which is in the case of r = 1 reduces to


n s
e λ Q n − 1, µ+s
 
µ λ
LT (s) = .
µ+s Q(n − 1, z)

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

Example 28 A factory possesses 20 machines having mean lifetime of 50 hours. The
mean repair time is 5 hours and the repairs are carried out by 3 repairmen. Find the
performance measures of the system.

Solution:

\rho = \frac{\lambda}{\mu} = \frac{1/50}{1/5} = \frac{5}{50} = \frac{1}{10} = 0.1.
By using the recursive approach we get

a_0 = 1,
a_1 = \frac{20-0}{0+1}\times 0.1\times 1 = 2,
a_2 = \frac{20-1}{1+1}\times 0.1\times 2 = 1.9,
a_3 = \frac{20-2}{2+1}\times 0.1\times 1.9 = 1.14,
a_4 = \frac{20-3}{3}\times 0.1\times 1.14 = 0.646,

and so on. Hence

P_0 = \frac{1}{1+\sum_{k=1}^{n} a_k} = \frac{1}{1+6.3394} = 0.13625.

From this

P_1 = a_1 P_0 = 2\times 0.13625 = 0.2725,
P_2 = a_2 P_0 = 1.9\times 0.13625 = 0.2589, \text{ etc.}

The distribution can be seen in the next Table for n = 20, r = 3, \rho = 0.1.

k    Machines under repair (busy repairmen)    Waiting machines (Q)    Idle repairmen (S)    Steady-state distribution P_k
0 0 0 3 0.13625
1 1 0 2 0.27250
2 2 0 1 0.25888
3 3 0 0 0.15533
4 3 1 0 0.08802
5 3 2 0 0.04694
6 3 3 0 0.02347
7 3 4 0 0.01095
8 3 5 0 0.00475
9 3 6 0 0.00190
10 3 7 0 0.00070
11 3 8 0 0.00023
12 3 9 0 0.00007

Hence the performance measures are

\overline{Q} = 0.339, \quad \overline{S} = 1.213, \quad \overline{N} = \overline{Q} + r - \overline{S} = 2.126,

P(W > 0) = 0.3323, \quad P(e) = 0.6677, \quad \overline{W} = \frac{\overline{Q}}{\lambda(n-\overline{N})} \approx 0.95 \text{ hours} \approx 57 \text{ minutes},

m = 20 - 2.126 = 17.874, \quad U(n) = 0.844,

E\delta(n) = \frac{U(n)}{n\lambda P_0} = \frac{5}{2}\times\frac{0.844}{0.136} \approx 15.5 \text{ hours}, \quad \overline{r} = 1.787, \quad \overline{s} = 1.213,

U_S = \frac{\overline{r}}{r} = \frac{1.787}{3} = 0.595, \quad \overline{e} = \frac{\overline{s}}{P(e)\lambda} = \frac{50\times 1.213}{0.667} \approx 90.8 \text{ hours},

E\delta = \frac{\overline{r}}{P(e)\lambda} = \frac{50\times 1.787}{0.667} \approx 132.1 \text{ hours},

U_t = \frac{m}{n} = \frac{17.874}{20} \approx 0.893,

\overline{T} = \overline{W} + \frac{1}{\mu} \approx 0.95 + 5 = 5.95 \text{ hours},

K_1 = \frac{\text{mean number of waiting machines}}{\text{total number of machines}} = \frac{\overline{Q}}{n} = \frac{0.339}{20} = 0.0169,

K_2 = \frac{\text{mean number of idle repairmen}}{\text{total number of repairmen}} = \frac{\overline{S}}{r} = \frac{1.213}{3} = 0.404.
Let us compare these measures to the system where we have 6 machines and a single
repairman. The lifetime and repair time characteristics remain the same. The result can
be seen in the next Table

Number of machines                          6        20
Number of repairmen                         1        3
Number of machines per repairman            6        6 2/3
Waiting coefficient for the servers K2      0.4845   0.4042
Waiting coefficient for the machines K1     0.0549   0.01694

Example 29 Let us continue the previous Example with a cost structure. Assume that the waiting cost is 18 000 Euro per machine per hour and the cost of an idle repairman is 600 Euro per hour. Find the optimal number of repairmen. It should be noted that different cost functions can be constructed.

Solution:

The mean cost per hour as a function of r can be seen in the next Table which are
calculated by the help of the distribution listed below for r = 3, 4, 5, 6, 7.

r P0 P1 P2 P3 P4 P5 P6 P7 P8
3 0.136 0.272 0.258 0.155 0.088 0.047 0.023 0.011 0.005
4 0.146 0.292 0.278 0.166 0.071 0.028 0.010 0.003 0.001
5 0.148 0.296 0.281 0.168 0.071 0.022 0.006 0.001 0.000
6 0.148 0.297 0.282 0.169 0.072 0.023 0.006 0.001 · · ·
7 0.148 0.297 0.282 0.169 0.072 0.023 0.006 ··· ···

The mean cost per hour is

r Q S E(Cost) Euro
3 0.32 1.20 6480
4 0.06 2.18 2388
5 0.01 3.17 2082
6 0 4.17 2502
7 0 5.16 3096

Hence the optimal number is r = 5.


This simple Example shows us that there are different criteria for the optimal operation.
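The search over r can be automated. The sketch below assumes, as in the table, that the hourly cost is 18 000·Q̄ + 600·S̄; since it uses exact (unrounded) values of Q̄ and S̄, the individual costs may differ slightly from the table, but the optimum r = 5 is the same.

def mmrnn_Q_S(n, r, rho):
    """Mean queue length and mean number of idle repairmen for M/M/r/n/n."""
    a = [1.0]
    for k in range(1, n + 1):
        a.append(a[-1] * (n - k + 1) * rho / (k if k <= r else r))
    P0 = 1.0 / sum(a)
    P = [x * P0 for x in a]
    Q = sum((k - r) * p for k, p in enumerate(P) if k > r)
    S = r - sum(min(k, r) * p for k, p in enumerate(P))
    return Q, S

def optimal_repairmen(n, rho, waiting_cost, idle_cost, r_max):
    """Pick r minimizing the hourly cost  waiting_cost*Q + idle_cost*S."""
    best = None
    for r in range(1, r_max + 1):
        Q, S = mmrnn_Q_S(n, r, rho)
        cost = waiting_cost * Q + idle_cost * S
        if best is None or cost < best[1]:
            best = (r, cost)
    return best

if __name__ == "__main__":
    print(optimal_repairmen(n=20, rho=0.1, waiting_cost=18000, idle_cost=600, r_max=7))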

3.5 The M/M/r/K/n Queue
This system is a combination of the finite-source systems considered in the previous sections. It is the most general system, since for K = r we have the Engset system treated in Section 3.1, for r = 1, K = n we get the system analyzed in Section 3.2, and for K = n we obtain the system of Section 3.4. For r < K < n we have a delay-loss system, that is, customers can enter the system as long as the number of customers in the system is at most K - 1, but otherwise they must return to the source because the system is full.
As before, it is easy to see that the number of customers in the system is a birth-death process with rates

\lambda_k = (n-k)\lambda, \quad 0 \le k < K, \qquad \mu_k = k\mu \ \text{ for } 1 \le k \le r, \quad \mu_k = r\mu \ \text{ for } r < k \le K,

where 1 \le r \le n, r \le K \le n. It is a rather complicated system and has not been investigated in detail yet. The main problem is that there are no closed-form formulas as before, but using computers all the performance measures can be obtained. The normalizing constant P_0(n,r,K) should satisfy the normalizing condition

\sum_{k=0}^{K} P_k(n, r, K) = 1.

As before it can easily be seen that

P_k(n,r,K) = \binom{n}{k}\rho^k P_0(n,r,K), \quad 0 \le k < r, \qquad P_k(n,r,K) = \binom{n}{k}\frac{k!\rho^k}{r!\,r^{k-r}}P_0(n,r,K), \quad r \le k \le K.
The main performance measures can be computed as

\overline{N} = \sum_{k=0}^{K} k P_k, \quad Var(N) = \sum_{k=0}^{K} k^2 P_k - (\overline{N})^2,

\overline{Q} = \sum_{k=r}^{K}(k-r)P_k, \quad Var(Q) = \sum_{k=r}^{K}(k-r)^2 P_k - (\overline{Q})^2,

\overline{r} = \sum_{k=1}^{r-1} k P_k + r\sum_{k=r}^{K} P_k, \quad m = n - \overline{N},

U_S = \frac{\overline{r}}{r}, \quad U_t = \frac{n-\overline{N}}{n}, \quad \overline{\lambda} = \overline{\mu} = \mu\overline{r},

\overline{T} = \frac{\overline{N}}{\overline{\lambda}}, \quad \overline{W} = \frac{\overline{Q}}{\overline{\lambda}}, \quad \overline{W} = \overline{T} - \frac{1}{\mu},

\frac{n-\overline{N}}{n} = \frac{E(\tau)}{E(\tau)+\overline{T}}, \quad E(\tau) = \frac{(n-\overline{N})\overline{T}}{\overline{N}}, \quad \overline{N}_R = \lambda E(\tau).

By using Bayes' rule it is easy to see that for the probability of blocking we have

P_B(n,r,K) = \frac{(n-K)P_K(n,r,K)}{\sum_{i=0}^{K}(n-i)P_i(n,r,K)} = P_K(n-1,r,K).

In particular, if K = n, then

\overline{\lambda} = \lambda(n-\overline{N}) = \mu\overline{r},

thus

\overline{T} = \frac{\overline{N}}{\lambda(n-\overline{N})}, \quad E(\tau) = \frac{1}{\lambda}, \quad P_B = 0,

as was expected.
Furthermore, by elementary calculations it can be seen that the normalizing constant P_0(n,r,K) can be expressed recursively with respect to K for fixed r and n. Namely, we have

\left(P_0(n,r,K)\right)^{-1} = \left(P_0(n,r,K-1)\right)^{-1} + \binom{n}{K}\frac{K!\rho^K}{r!\,r^{K-r}},

with initial value

\left(P_0(n,r,r)\right)^{-1} = \sum_{i=0}^{r}\binom{n}{i}\rho^i.
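Since closed-form results are scarce here, numerical evaluation is the natural route. The following sketch (names and example parameters are illustrative only) computes the M/M/r/K/n distribution and the blocking probability P_B(n,r,K) = P_K(n-1,r,K) from the formulas above.

from math import comb, factorial

def mmrkn_distribution(n, r, K, rho):
    """Steady-state distribution of the M/M/r/K/n system:
    P_k ~ C(n,k)*rho^k for k < r and C(n,k)*k!*rho^k/(r! r^(k-r)) for r <= k <= K."""
    w = []
    for k in range(K + 1):
        if k < r:
            w.append(comb(n, k) * rho**k)
        else:
            w.append(comb(n, k) * factorial(k) * rho**k / (factorial(r) * r**(k - r)))
    P0 = 1.0 / sum(w)
    return [x * P0 for x in w]

def mmrkn_blocking(n, r, K, rho):
    """P_B(n,r,K) = P_K(n-1,r,K): the state-K probability of the system
    with one source less (requires K <= n-1)."""
    return mmrkn_distribution(n - 1, r, K, rho)[K]

if __name__ == "__main__":
    P = mmrkn_distribution(n=10, r=2, K=5, rho=0.3)
    print(P[0], sum(P), mmrkn_blocking(10, 2, 5, 0.3))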

By using Bayes' rule it is easy to see that the probability that an arriving customer finds k customers in the system is

\Pi^*_k(n,r,K) = P_k(n-1,r,K), \quad k = 0, \ldots, K,

but the probability that a customer entering the system finds k customers there is

\Pi_k(n,r,K) = \frac{(n-k)P_k(n,r,K)}{\sum_{i=0}^{K-1}(n-i)P_i(n,r,K)}, \quad k = 0, \ldots, K-1.

Hence the probability of waiting and the density function of the waiting time can be expressed as

P_W(n,r,K) = \sum_{k=r}^{K-1}\Pi_k(n,r,K),

f_W(0) = 1 - P_W(n,r,K),

f_W(x) = \sum_{k=r}^{K-1}\frac{r\mu(r\mu x)^{k-r}}{(k-r)!}e^{-r\mu x}\,\Pi_k(n,r,K), \quad x > 0.

By using the Bayes's rule it can easily be veried that

Pk (n − 1, r, K)
Πk (n, r, K) = ,
1 − PK (n − 1, r, K)

and analogously to the earlier arguments for the density function we obtain

µrrr P (K − 1 − r, rz) P0 (n − 1)
fW (x) = .
r!P (K − 1, rz) 1 − PK (n − 1, r, K)

In particular, if K = n, that is all customer may enter into the system, then
PK (n − 1, r, K) = 0 and thus we got the formulas derived before.

P(W > x) = \sum_{k=r}^{K-1}\sum_{j=0}^{k-r}\frac{(r\mu x)^j}{j!}e^{-r\mu x}\,\Pi_k(n,r,K),

P(W \le x) = 1 - P(W > x),

P(W = 0) = \sum_{k=0}^{r-1}\Pi_k(n,r,K).

In M/M/1/K/n systems we have

P(T > x) = \sum_{k=0}^{K-1}\sum_{j=0}^{k}\frac{(\mu x)^j}{j!}e^{-\mu x}\,\Pi_k(n,1,K),

P(T \le x) = 1 - P(T > x),

P(T > 0) = \sum_{k=0}^{K-1}\Pi_k(n,r,K) = 1,

P(T \le 0) = 0.

By reasonable modifications, for the distribution function we have

F_W(x) = 1 - \frac{r^r\,Q(K-1-r, r(z+\mu x))}{r!\,P(K-1, rz)}\cdot\frac{P_0(n-1,r,K)}{1 - P_K(n-1,r,K)}.

The corresponding Laplace-transform can be computed as
K s rµ+s

rr e λ Q K − 1 − r, λ P0 (n − 1, r, K)


LW (s) = 1 − PW (n, r, K) + .
rµ + s r! P (K − 1, rz)(1 − PK (n − 1, r, K))

Since the conditional waiting time is Erlang distributed, it is easy to see that

E(W^2) = \sum_{k=r}^{K-1}\frac{(k-r+1) + (k-r+1)^2}{(r\mu)^2}\,\Pi_k(n,r,K), \quad Var(W) = E(W^2) - (E(W))^2,

Var(T) = Var(W) + \frac{1}{\mu^2}.


The utilization of the system is computed by

U_r = 1 - P_0.

The mean busy period of the system can be obtained by

E\delta(n,r,K) = \frac{1 - P_0}{n\lambda P_0} = \frac{U_r}{n\lambda P_0}.
The mean idle period of a server can be evaluated as follows. If the idle servers start their busy periods in the order in which they finished their previous busy periods, then their activity can be described as follows: if a server becomes idle and finds j - 1 other servers idle, then its busy period starts at the instant of the arrival of the jth customer.

\overline{e}_j = \frac{r-j}{(n-j)\lambda}, \quad j = 0, 1, \ldots, r-1,

a_j = \frac{P_j(n-1,r,K)}{\sum_{i=0}^{r-1}P_i(n-1,r,K)} = \frac{\Pi_j(n,r,K)}{\sum_{i=0}^{r-1}\Pi_i(n,r,K)},

\overline{e} = \sum_{j=0}^{r-1}\overline{e}_j a_j, \quad a = \frac{\overline{r}}{r}, \quad E(\delta) = \frac{a}{1-a}\,\overline{e}.

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

3.6 The M/M/c/K/n Queue with Balking and Reneging
Exactly the same way as we dealt with the M/M/c/K system, we can introduce balking probabilities and reneging intensities. The balking can be represented by a series of monotonically decreasing functions of the system size multiplying the corresponding arrival rate. Let b_k be this function, so that the arrival rate in state k is multiplied by b_k with b_{k+1} \le b_k \le 1, k > 0, b_0 = 1; that is, b_k is the probability of joining the system provided it is in state k.
A possible example that may be useful is b_k = 1/(k+1), k = 1, \ldots, K. Now if k customers are in the system, an estimate of the average waiting time might be k/(c\mu), if the customer has an idea of \mu; in this case b_k = e^{-\frac{k}{c\mu}}. The M/M/c/K/n system is obtained with b_k = 1, k = 0, \ldots, K.
Let r_k h + o(h) be the probability of reneging during h given k customers in the system, that is, the reneging intensity is r_k. Good possibilities for the reneging function r_k are r_k = 0, k = 0, \ldots, K (the classical system), r_k = (k-c)\theta, or r_k = e^{\frac{k}{c\mu}} for k = c, \ldots, K and zero otherwise, where \theta is the parameter of the exponentially distributed impatience time of a customer.

It is not so difficult to see that the number of customers in the system is a birth-death process with

\lambda_k = (n-k)\lambda b_k, \quad k = 0, \ldots, K-1, \qquad \mu_k = k\mu \ \text{ for } k = 1, \ldots, c, \quad \mu_k = c\mu + r_k \ \text{ for } k = c, \ldots, K.

As usual, the steady-state distribution can be obtained as

K
!−1
λ0 · · · λk−1 X λ0 · · · λj−1
Pk = P0 , P0 = 1+
µ1 · · · µk j=1
µ1 · · · µj

The main performance measures can be calculated as follows

1 Us
Ur = 1 − P0 , E(δr ) =·
λ 1 − Us
K
X K
X
N̄ = kPk , Q̄ = (k − c)Pk
k=1 k=c
XK K
X
N¯2 = 2
k Pk , Q̄2 = (k − c)2 Pk
k=1 k=c
V ar(N ) = N¯2 − (N̄ )2 , V ar(Q) = Q̄2 − (Q̄)2

c−1
X K
X
c̄ = kPk + cPk , N̄ = Q̄ + c̄, Uc = c̄/c
k=1 k=c
m̄ = n − N̄ , Ut = m̄/n
K−1
X K
X
λ̄ = λk Pk , µ̄ = µk Pk , λ̄ = µ̄
k=0 k=1
T̄ = N̄ /λ̄, W̄ = Q̄/λ̄
K
X
r̄ = rk Pk , mean reneging rate
k=c

The probability that an entering customer nds k customers in the system is

λk Pk
Πk = , k = 0, . . . K − 1.
λ̄

λ̄
P (an arriving customer enters the system) = PK−1 ,
k=0 (n − k)λPk ,


P (a departing customer leaves the system without service) =
µ̄

K−1
X λK P K
P (waiting) = Πk , P (blocking) = PK .
k=c k=0 λk Pk

In the case of a balking system we can calculate the variance of waiting and response
time and the distribution function of the waiting time, too.
Namely, we have

K−1
Q X (k − c + 1) N
W = = Πk , T = = W + 1/µ
λ k=c
(cµ) λ

Since the conditional waiting time is Erlang distributed, it is easy to see that

K−1
X (k − c + 1) + (k − c + 1)2
E(W 2 ) = Πk , V ar(W ) = E(W 2 ) − (E(W ))2 ,
k=c
(cµ)2

V ar(T ) = V ar(W ) + 1/µ2 .

Distribution function of the waiting time
As in the previous parts for FW (t) the theorem of total probability is applied resulting
K−1 t
cµ(cµx)n−c −cµx
X Z
FW (t) = FW (0) + Πn e dx
n=c 0 (n − c)!
K−1 Z ∞
X  cµ(cµx)n−c −cµx

= FW (0) + Πn 1 − e dx .
n=c t (n − c)!

Similarly to the previous section we have


K−1 K−1 n−c
X X X (cµt)i e−cµt
FW (t) = FW (0) + Πn − Πn
n=c n=c i=0
i!
K−1 n−c
X X (cµt)i e−cµt
=1− Πn .
n=c i=0
i!

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

3.7 The M/G/1/n/n/P S Queue


This system is a generalization of the M/M/1/n/n/FIFO system treated in Section 3.2. The essential differences are the distribution of the service time and the service discipline. Since the service times are not exponentially distributed, the number of customers as a stochastic process is not a Markov chain. In this Section we introduce the model which has been published in Yashkov [126].

The requests arrive from a finite source where they spend an exponentially distributed time with parameter \lambda. The required service time S is a generally distributed random variable with E(S) < \infty. Let us denote by G(x) and g(x) its distribution function and density function, respectively, assuming that G(0^+) = 0. The service discipline is Processor Sharing, that is, all customers in the service facility are being served, but each at a rate inversely proportional to the number of customers in service.

The method of supplementary variables is used for the description of the behavior of the
system.
Let us introduce the following random variables.
Let ν(t) denote the number of customers in the system at time t, and for ν(t) > 0 let
ξ1 (t), . . . , ξν(t) (t) denote the elapsed service time of the requests.

The stochastic process

X(t) = \left(\nu(t); \xi_1(t), \ldots, \xi_{\nu(t)}(t)\right)

is a continuous-time Markov process with discrete and continuous components, which is called a piecewise-linear Markov process.

It should be noted the many practical problems can be modeled by the help of these
processes and the interested reader is referred to the book of GnedenkoKovalenko [39].

Let

Pk (t, x1 , . . . , xk ) dx1 . . . dxk = P (ν(t) = k; xi ≤ ξi < xi + dxi , i = 1, . . . , k) ,

that is Pk (t, x1 , . . . , xk ), k = 1, . . . , n denotes the density function that at time t there


are k customers in the system and their elapsed service times are x1 , . . . , xk .
Let δ be a small positive real number. Then for the density functions Pk (t, x1 , . . . , xk ) we
have the following set of equations

Pk (t; x1 , . . . , xk ) =
  k
δ δ Y 1 − G (xi )
= Pk t − δ; x1 − , . . . , xk −  [1 − λ (n − k) δ] +
k k i=1 1 − G xi − kδ
Z∞  
δ δ
+ (k + 1) Pk+1 t − δ; x1 − , . . . , xk+1 − ×
k k+1
0
k δ
 
Y 1 − G (xi ) G (xk+1 ) − G xk+1 − k+1
× δ
·  δ
 dxk+1 .
i=1
1 − G xi − k+1 1 − G xk+1 − k+1
k
Dividing both sides by [1 − G (xi )] and taking the limits as δ → 0, t → ∞ we have
Q
i=1
the stationary equations, namely
" k #
1X ∂
+ λ (n − k) qk (x1 , . . . , xk ) =
k i=1 ∂xi

Z∞
qk+1 (x1 , . . . , xk+1 ) g (xk+1 ) dxk+1 , k = 1, . . . , n − 1,
0

where
k
Y
qk (x1 , . . . , xk ) = lim Pk (t; x1 , . . . , xk ) / [1 − G (xi )]
t→∞
i=1

are called normalized density functions.

Similarly, for P0 and qn (x1 , . . . , xn ) we obtain


Z∞
λnP0 = q1 (x1 ) g (x1 ) dx1 ,
0

n
1X ∂
qn (x1 , . . . , xn ) = 0.
n i=1 ∂xi

Beside these equation we need the boundary conditions which are

q1 (0) = λnP0 ,

qk (0, x1 , . . . , xk−1 ) = λ (n − k + 1) qk−1 (x1 , . . . , xk−1 ) ,


k = 1, . . . , n.
The solution to these set of integro-dierential equations is surprisingly simple, namely

qk (x1 , . . . , xk ) = P0 λk n!/ [(n − k)!] ,

which can be proved by direct substitution.


Consequently
k
kn! Y
Pk (x1 , . . . , xk ) = P0 λ [1 − G (xi )] ,
(n − k)! i=1
i = 1, . . . , n.
Let us denote by Pk the steady-state probability of the number of customers in the system.
Clearly we have
Z∞ Z∞
n!
Pk = ... Pk (x1 , . . . , xk ) dx1 . . . dxk = P0 (λES)k .
(n − k)!
0 0

n
Probability P0 can be obtained by using the normalizing condition Pi = 1.
P
i=0

Recall that it is the same as the distribution in the M/M/1/n/n system if % = λES .

It is not difficult to see that for this M/G/1/n/n/PS system the performance measures can be calculated as

(i) \overline{N} = \sum_{k=1}^{n} k P_k,

(ii) U^{(i)} = \frac{\frac{1}{\lambda}}{\frac{1}{\lambda} + \overline{T}} = \frac{n - \overline{N}}{n},

thus

\overline{T} = \frac{1}{\lambda}\cdot\frac{\overline{N}}{n - \overline{N}},

hence

\lambda(n - \overline{N})\overline{T} = \overline{N},

which is Little's formula.


Clearly, due to the Processor Sharing discipline the response time is longer than the required service time, and there is no waiting time since all customers are being served. The difference is \overline{T} - E(S).

It can be proved, see Cohen [22], that for a ~G/~G/1/n/n/PS system the steady-state probability that customers with indexes i_1, \ldots, i_k are in the system can be written as

P(i_1,\ldots,i_k) = C\cdot k!\prod_{j=1}^{k}\rho_{i_j}, \qquad \rho_i = \frac{E(S_i)}{E(\tau_i)}, \quad i = 1,\ldots,n.

For the homogeneous case we get

P_k = C\cdot k!\binom{n}{k}\rho^k.

3.8 The ~G/M/r/n/n/FIFO Queue
This section is devoted to a generalized version the nite-source model with multiple
servers where the customers are supposed to have heterogeneous generally distributed
source times and homogeneous exponentially distributed service times. They are served
according to the order of their arrivals. The detailed description of this model can be
found in Sztrik [98].

Customers arrive from a nite source of size n and are served by one of r (r ≤ n)
servers at a service facility according to a rst-come rst-served (FFIFO) discipline. If
there is no idle server, then a waiting line is formed and the units are delayed. The service
times of the units are supposed to be identically and exponentially distributed random
variables with parameter µ. After completing service, customer with index i returns to
the source and stays there for a random time τi having general distribution function Fi (x)
with density fi (x). All random variables are assumed to be independent of each other.

Determination of the steady-state distribution

As in the previous section the modeling is more difficult, since the involved random times are not all exponentially distributed and thus we have to use the method of supplementary variables.

Let the random variable ν(t) denote the number of customers staying in the source at
time t and let α1 (t) , . . . , αν(t) indicate their indexes ordered lexicographically, that is


in increasing order of their indexes.

Let us denote by β1 (t) , . . . , βn−ν(t) the indexes of the requests waiting or served at the


service facility in the order of their arrival. It is not dicult to see that the process

Y (t) = ν (t) ; α1 (t) , . . . , αν(t) ; β1 (t) , . . . , βn−ν(t) , (t ≥ 0)

is not Markovian unless the distribution functions Fi (x) , i = 1, . . . , n are exponential.

To use the supplementary variable technique let us introduce the supplementary variable
ξαi (t) to denote the elapsed source time of request with index αi , i = 1, · · · , n. Dene

X (t) = ν (t) ; α1 (t) , . . . , αν(t) ; ξα1 (t) , . . . , ξαν(t) ; β1 (t) , . . . .βn−ν(t)

This is a multicomponent piecewise linear Markov process.

Let Vkn and Ckn denote the set of all variations and combinations of order k of the in-
tegers 1, 2, . . . , n, respectively, ordered lexicographically. Then the state space of process
(X (t) , t ≥ 0) consists of the set of points

(i1 , . . . , ik ; x1 , . . . , xk ; j1 , . . . , jn−k )
where

(i1 , . . . , ik ) ∈ Ckn , (j1 , . . . , jn−k ) ∈ Vkn , xi ∈ R+ , i = 0, 1, . . . , k, k = 0, 1, . . . , n.

Process X(t) is in state (i1 , . . . , ik ; x1 , . . . , xk ; j1 , . . . , jn−k ) if k customers with indexes


(i1 , . . . , ik ) have been staying in the source for times (x1 , . . . , xk ), respectively while the
rest need service and their indexes in the order of arrival are (j1 , . . . , jn−k ).
To derive the Kolmogorov-equations we should consider the transitions that can occur in
an arbitrary time interval (t, t + h) . For 0 ≤ n − k < r the transition probabilities are
then the following

P [X (t + h) = (i1 , . . . , ik ; x1 + h, . . . , xk + h; j1 , . . . , jn−k ) |

X (t) = (i1 , . . . , ik ; x1 , . . . , xk ; j1 , . . . , jn−k )]


k
Y 1 − Fil (xl + h)
= (1 − (n − k) µh) + o (h) ,
l=1
1 − Fil (xl )

P [X (t + h) = (i1 , . . . , ik ; x1 + h, . . . , xk + h; j1 , . . . , jn−k ) |
0 0 0 0 0 0
X (t) = (i1 , . . . , jn−k , . . . , ik ; x1 , . . . , y , . . . , xk ; j1 , . . . , jn−k−1 )]
k
fjn−k (y) h Y 1 − Fil (xl + h)
= + o (h) ,
1 − Fjn−k (y) l=1 1 − Fil (xl )
0 0 0
where (i1 , . . . , jn−k , . . . , ik ) denotes the lexicographical order of indexes (i1 , . . . , ik , jn−k )
0 0
while (x1 0 , . . . , y , . . . , xk ) indicates the corresponding times.
For r ≤ n − k ≤ n the transition probabilities can be obtained as

P [X (t + h) = (i1 , . . . , ik ; x1 + h, . . . , xk + h; j1 , . . . , jn−k ) |

X (t) = (i1 , . . . , ik ; x1 , . . . , xk ; j1 , . . . , jn−k )]


k
Y 1 − Fil (xl + h)
= (1 − rµh) + o (h) ,
l=1
1 − F il (xl )

h
P X (t + h) = (i1 , . . . , ik ; x1 + h, . . . , xk + h; j1 , . . . , jn−k ) |
0 0 0 0
i
X (t) = i1 , . . . , jn−k , . . . , ik ; x1 , . . . , y , . . . , xk ; j1 , . . . , jn−k−1 =
k
fjn−k (y) h Y 1 − Fil (xl + h)
= + o (h) .
1 − Fjn−k (y) l=1 1 − Fil (xl )

For the distribution of X(t) introduce the following functions

Q0;j1 ,...,jn (t) = P (ν (t) = 0; β1 (t) = j1 , . . . , βn (t) = jn ) ,

Qi1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xk ; t) =


P (ν (t) = k; α1 (t) = i1 , . . . , αk (t) = ik ; ξi1 ≤ x1 , . . . , ξik ≤ xk ;
β1 (t) = j1 , . . . , βn−k (t) = jn−k ) .
Let λi is dened by 1/λi = E(τi ). Then we have

Theorem 2 If 1/λi < ∞, i = 1, . . . , n, then the process (X (t) , t ≥ 0) possesses a


unique limiting ( stationary, steady-state) distribution independent of the initial condi-
tions, namely
Q0;j1 ,...,jn = lim Q0;j1 ,...,jn (t) ,
t→∞

Qi1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xk ) = lim Qi1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xk ; t) .
t→∞

Notice that X(t) belongs to the class of piecewise-linear Markov processes, subject to
discontinuous changes treated by Gnedenko and Kovalenko [39]. Our statement follows
from a theorem on page 211 of this monograph.

Since by assumption Fi (x) has density function, for xed k Theorem 2 provides the
existence and uniqueness of the following limits

qi1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xk ) dx1 . . . dxk =

= P (ν (t) = k; α1 (t) = i1 , . . . , αk (t) = il ; xl ≤ ξil < xl + dxl , l = 1, . . . , k;


β1 (t) = j1 , . . . , βn−k (t) = jn−k ) , k = 1, . . . , n
where qi1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xk ) denotes the density function of state
(i1 , . . . , ik ; x1 , . . . , xk ; j1 , . . . , jn−k ) when t → ∞.

Let us introduce the so-called normed density function dened by

qi1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xk )


q̃i1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xk ) = .
(1 − Fi1 (x1 )) . . . (1 − Fik (xk ))

Then we have

Theorem 3 The normed density functions satisfy the following system of integro-dierential
equations (3.1), (3.3) with boundary conditions (3.2), (3.4)
 ∗
∂ ∂
(3.1) + ... + q̃i1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xk )
∂x1 ∂xk

= − (n − k) µq̃i1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xk ) +


Z∞  
0 0 0
+ q̃i0 ,...,j 0 0 x1 , . . . , y , . . . , xk fjn (y) dy,
1 n−k ,...,ik ;j1 ,...,jn−k−1
0

(3.2) q̃i1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xl−1 , 0, xl+1 , . . . , xk ) =


X
=µ q̃i1 ,...,il−1 ;il+1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xl−1 , xl+1 , . . . , xk )
i
Vj l ,...,j
1 n−k

for l = 1, . . . , k , 0≤n−k <r

 ∗
∂ ∂
(3.3) + ... + q̃i1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xk ) =
∂x1 ∂xk

= −rµq̃i1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xk ) +


Z∞
0 0 0
+ q̃i0 ,...,j 0 0 (x1 , . . . , y , . . . , xk )fjn (y)dy
1 n−k ,...,ik ;j1 ,...,jn−k−1
0

(3.4) q̃i1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xl−1 , 0, xl+1 , . . . , xk ) =


X
=µ q̃i1 ,...,il−1 ;il+1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xl−1 , xl+1 , . . . , xk ) ,
i
Vj l ,...,j
1 r−1

for l = 1, . . . , k, r ≤n−k <n−1


furthermore
Z∞
rµQ0;j1 ,...,jn = q̃jn ;j1 ,...,jn−1 (y)fjn (y) dy.
0

The symbol [ ] will be explained later while

Vji1l,...,js = [(il , j1 , . . . , js ) , (j1 , il , j2 , . . . , js ) , . . . , (j1 , . . . , js , il )] ∈ Vs+1


n
.

Proof: Since the process (X (t) , t ≥ 0) is Markovian its densities must satisfy the Kolmogorov-
equations. A derivation is based on the examination of the sample paths of the process
during an innitesimal interval of width h. The following relations hold

qi1 ,...,ik ;j1 ,...,jn−k (x1 + h, . . . , xk + h) =

k
Y 1 − Fil (xl + h)
= qi1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xk ) (1 − (n − k) µh) +
l=1
1 − Fil (xl )

k ∞
1 − Fil (xl + h)
Z  0 
0 0
Y
+ + q̃i0 ,...,j 0 ,...,i0 ;j1 ,...,jn−k−1 x1 , . . . , y . . . , xk ×
l=1
1 − Fil (xl ) 1 n−k k
0
0
fjn−k (y) h
× dy + o (h) ,
1 − Fjn−k (xl )
qi1 ,...,ik ;j1 ,...,jn−k (x1 + h, . . . , xl−1 + h, 0, xl+1 + h, . . . , xk + h) h =
k
Y 1 − Fis (xs + h)
= o (h) + ×
s=1
1 − Fis (xs )
s6=l
X
×µh q̃i1 ,...,il−1 ;il+1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xl−1 , xl+1 , . . . , xk )
i
Vj l ,...,j
1 n−k

for 0 ≤ n − k < r, l = 1, . . . , k .
Similarly
qi1 ,...,ik ;j1 ,...,jn−k (x1 + h, . . . , xk + h) =
k
Y 1 − Fil (xl + h)
= qi1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xk ) (1 − rµh) +
l=1
1 − F il (xl )

k ∞
1 − Fil (xl + h)
Z  0 
0 0
Y
+ q̃i0 ,...,j 0 ,...,ie0 ;j1 ,...,jn−k−1 x1 , . . . , y , . . . , xk ×
l=1
1 − Fil (xl ) 1 n−k k
0
0
fjn−k (y) h
× dy + o (h) ,
1 − Fjn−k (xl )
qi1 ,...,ik ;j1 ,...,jn−k (x1 + h, . . . , xl−1 + h, 0, xl+1 + h, . . . , xk + h) h =
k
Y 1 − Fis (xs + h)
= o (h) + ×
s=1
1 − Fis (xs )
s6=l
X
×µh q̃i1 ,...,il−1 ;il+1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xl−1 , xl+1 , . . . , xk )
i
Vj l ,...,j
1 n−k

for 0 ≤ n − k < r, l = 1, . . . , k .
Finally
Q0;j1 ,...,jn = Q0;j1 ,...,jn (1 − rµh) +
Z∞
fjn (y) h
+ q̃jn ;j1 ,...,jn−1 (y) dy + o (h) .
1 − Fjn (y)
0

Thus the statement of the theorem can easily be obtained. Dividing the equations by ∏_{l=1}^{k} (1 − Fil (xl + h)), taking into account the definition of the normed densities and letting h → 0, we get the desired result.

On the left-hand side of (3.1) and (3.3) the usual notation for partial differential quotients has been used to denote the limit obtained on the right-hand side. Strictly speaking this is not allowed, since the existence of the individual partial derivatives is not guaranteed. This is why the operator is denoted by [ ]∗ . It is in fact a directional derivative in the direction (1, 1, . . . , 1) ∈ Rk , see Cohen [22].

To determine the steady-state probabilities


 
Q0;j1 ,...,jn , Qi1 ,...,ik ;j1 ,...,jn−k ,

(i1 , . . . , ik ) ∈ Ckn , N
(j1 , . . . , jn−k ) ∈ Vn−k , k = 1, . . . , n.
we have to solve equations (3.1)(3.3) subject to the boundary conditions (3.2)(3.4).
If we set
Q0;j1 ,...,jn = c0 ,
q̃i1 ,...,ik ;j1 ,...,jn−k (x1 , . . . , xk ) = ck , k = 1, . . . , n,
then by direct substitution it can easily be veried that they satisfy these equations with
boundary conditions. Moreover these ck can be obtained by the help of cn , namely
−1
ck = r!rn−r−k µn−k cn , 0 ≤ k ≤ n − r,
−1
ck = (n − k)!µn−k cn , n − r ≤ k ≤ n.
Since these equations completely describe the system, this is the required solution.

Let Qi1 ,...,ik ;j1 ,...,jn−k denote the steady-state probability that customers with indexes
(i1 , . . . , ik ) are in the source and the order of arrivals of the rest to the service facility
is (j1 , . . . , jn−k ). Furthermore, denote by Qi1 ,...,ik the stationary probability that requests
with indexes (i1 , . . . , ik ) are staying in the source.
It can easily be seen

Qi1 ,...,ik ;j1 ,...,jn−k = (λi1 , . . . , λik )−1 ck , k = 1, . . . , n.

By using the relation we obtained for ck we have


−1
Qi1 ,...,ik = (n − k)! r!rn−r−k µn−k λi1 , . . . , λik cn ,

(i1 , . . . , ik ) ∈ Ckn , k = 0, 1, . . . , n − r.
Similarly
−1
Qi1 ,...,ik = µn−k λi1 , . . . , λik cn ,
(i1 , . . . , ik ) ∈ Ckn , k = n − r, . . . , n.

Let us denote by Q̂k and P̂l the steady-state probability of the number of customers in
the source, in the service facility, respectively. Hence it is easy to see that

Qi1 ,...,in = Q1,...,n = Q̂n ,


Q̂k = P̂n−k , k = 0, . . . , n.
Furthermore
cn = Q̂n (λ1 , . . . , λn ) ,
X
Q̂k = Qi1 ,...,ik ,
(i1 ,...,ik )∈Ckn
n
where Q̂n can be obtained by the help of the normalizing condition
P
Q̂k = 1.
k=0

In the homogeneous case these formulas reduce to


 n−k
n! λ
Q̂k = n−k−r
Q̂n , f or 0 ≤ k ≤ n − r,
r!k!r µ
  n−k
n λ
Q̂k = Q̂n , f or n − r ≤ k ≤ n,
k µ
which is the result of the paper by Bunday and Scraton [17], and for r = 1 these are the formulas obtained by Schatte [90]. Thus the distribution of the number of customers in the service
facility is
  k
n λ
P̂k = P̂0 , f or 0 ≤ k ≤ r,
k µ
 k
n! λ
P̂k = k−r
P̂0 , f or r ≤ k ≤ n.
r!(n − k)!r µ
This is exactly the same result that we got for the < M/M/r/n/n > model.

It should be underlined that the distribution of the number of customers in the system
does not depend on the form of Fi (x), only on its mean 1/λi ; that is, the distribution is insensitive (robust).

Performance Measures

ˆ Utilization of the sources


Let Q(i) denote the steady-state probability that source i is busy with generating
a new customer, that is

n
X X
(i)
Q = Qi1 ,...,ik .
k=1 i∈(i1 ,...,ik )∈Ckn

Hence the utilization of source i can be obtained as

U (i) = Q(i) .

ˆ Utilization of the servers
As we have calculated earlier the utilization of a server can be derived as
r n
!
1 X X r̄
UCP U = k P̂k + r P̂k = ,
r k=1 k=r+1
r

where r̄ denotes the mean number of busy servers. Thus the overall utilization of
the servers is r̄.

ˆ Mean waiting and response times


By the results of Tomkó [116] we have
−1
Q(i) = (1/λi ) 1/λi + W̄i + 1/µ .

Thus for the mean waiting time of the customer with index i we obtain

1 1 − Q(i) 1
W̄i = · (i)
− , i = 1, . . . , n.
λi Q µ

Consequently the mean response time T̄i of the ith request can be calculated as
−1
T̄i = W̄i + 1/µ = 1 − Q(i) λi Q(i)

, i = 1, . . . , n.

Since

        ∑_{i=1}^{n} (1 − Q(i) ) = N̄ ,

where N̄ denotes the mean number of customers at the service facility, this can be rewritten as

        ∑_{i=1}^{n} λi T̄i Q(i) = N̄ ,

which is Little's formula for the < ~G/M/r/n/n/FIFO > queueing system.

It should be noted that using the terminology of the machine interference problem U (i) ,
W̄i , T̄i denote the utilization, mean waiting time and the mean time spent in a broken
state of the ith machine.

This model can be generalized in such a way that the service intensities depend on the
number of customers in the source, see Sztrik [100, 101].
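
In the homogeneous case the above formulas are straightforward to evaluate. The following Python sketch is not part of the original text: the function and variable names are my own, and it simply implements the homogeneous distribution P̂k derived above together with the waiting-time relation based on the utilization of a source.

```python
from math import factorial

def finite_source_multiserver(n, r, lam, mu):
    """Homogeneous finite-source model: n sources, r servers,
    source (thinking) rate lam, service rate mu.
    Returns (P, measures), where P[k] = P(k customers at the facility)."""
    rho = lam / mu
    q = []
    for k in range(n + 1):
        if k <= r:
            q.append(factorial(n) / (factorial(k) * factorial(n - k)) * rho**k)
        else:
            q.append(factorial(n) / (factorial(r) * factorial(n - k) * r**(k - r)) * rho**k)
    total = sum(q)
    P = [x / total for x in q]

    N_bar = sum(k * p for k, p in enumerate(P))          # mean number at the facility
    r_bar = sum(min(k, r) * p for k, p in enumerate(P))  # mean number of busy servers
    U_server = r_bar / r                                 # utilization of a server
    U_source = 1 - N_bar / n                             # utilization of a source, Q^(i)
    # mean waiting time from (1/lam) / (1/lam + W + 1/mu) = U_source
    W_bar = (1 / lam) * (1 - U_source) / U_source - 1 / mu
    T_bar = W_bar + 1 / mu
    return P, dict(N=N_bar, r_bar=r_bar, U_server=U_server,
                   U_source=U_source, W=W_bar, T=T_bar)

if __name__ == "__main__":
    P, m = finite_source_multiserver(n=10, r=3, lam=0.5, mu=1.0)
    print(m)
    # Little's formula check: n * lam * U_source * T should equal N
    print(m["N"], 10 * 0.5 * m["U_source"] * m["T"])
```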

Chapter 4
Exercises

4.1 Innite-Source Systems


Exercise 1 Solve the following system of equations by the help of dierence equations
λP0 = µP1
(λ + µ)Pn = λPn−1 + µPn+1 , n ≥ 1.

Solution:
It is easy to see that it can be rewritten as

λPn−1 − (λ + µ)Pn + µPn+1 = 0, n = 1, 2, . . .

which is a 2-nd order dierence equation with constant coecient. Its general solution
can be obtained in the form

Pn = c1 xn1 + c2 xn2 , n = 1, 2, . . .

where x1 , x2 are the solutions to

µx2 − (λ + µ)x + λ = 0.

It can easily be veried that x1 = 1, x2 = % and thus

Pn = c1 + c2 %n , n = 1, 2, . . . .

However P1 = %P0 , and because ∑_{n=0}^{∞} Pn = 1, thus c1 = 0 and c2 = P0 = 1 − %.
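
As a quick numerical check (my own sketch, not part of the original text), one can verify that Pn = (1 − %)%^n indeed satisfies the balance equations above up to machine precision.

```python
def mm1_balance_residuals(lam=0.6, mu=1.0, n_max=50):
    """Residuals of lam*P0 = mu*P1 and (lam+mu)*Pn = lam*P(n-1) + mu*P(n+1)
    for the candidate solution Pn = (1 - rho) * rho**n."""
    rho = lam / mu
    P = [(1 - rho) * rho**n for n in range(n_max + 2)]
    res = [abs(lam * P[0] - mu * P[1])]
    res += [abs((lam + mu) * P[n] - lam * P[n - 1] - mu * P[n + 1])
            for n in range(1, n_max)]
    return max(res)

print(mm1_balance_residuals())   # should be of the order of machine precision
```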

Exercise 2 Find the generating function of the number of customers in the system for
an M/M/1 queueing system by using the steady-state balance equations. Then derive the
corresponding distribution.

Solution:
Starting with the set of equations

λP0 = µP1
(λ + µ)Pn = λPn−1 + µPn+1 , n ≥ 1

by multiplying both sides by si and then adding the terms we obtain
µ µ
λGN (s) + µGN (s) − µP0 = λsGN (s) + GN (s) − P0 .
s s
Thus we can calculate as
    
1 1
GN (s) λ(1 − s) + µ 1 − =µ 1− P0 ,
s s
      
1 1 1
GN (s) µ 1 − − λs 1 − =µ 1− P0 ,
s s s
µ
GN (s) = P0 .
µ − λs
Since GN (1) = 1, therefore
µ−λ
P0 = = 1 − %.
µ
That is
1−%
GN (s) =
,
1 − s%
which is exactly the generating function of a modied geometric distribution with pa-
rameter (1 − %). It can be proved as follows, if

P (N = k) = (1 − %)%k , k = 0, . . .

then its generating function is



X 1−%
GN (s) = sk (1 − %)%k = .
k=0
1 − s%

Exercise 3 Find the generating function of the number of customers waiting in a queue
for an M/M/1 queueing system.

Solution:
Clearly

X ∞
X ∞
X
0 k−1 k−1
GQ (s) = (P0 + P1 )s + s Pk = P 0 + s Pk = 1 − % + sk−1 %k (1 − %)
k=2 k=1 k=1

X
=1−%+% si (1 − %)%i = 1 − % + %GN (s) = 1 − %(1 − GN (s)).
i=0

For verication let us calculate the mean queue length, thus

%2
G0Q (1) = %G0N (1) = .
1−%

Exercise 4 Find the Laplace-transform of T and W for an M/M/1 queueing system.

Solution:
It is easy to see that
∞  k+1 ∞  k
X µ k µ X µ%
LT (s) = % (1 − %) = (1 − %)
k=0
µ+s µ + s k=0 µ + s
µ 1 µ µ+s µ(1 − %)
= (1 − %) µ% = (1 − %) = ,
µ + s 1 − µ+s µ + s µ(1 − %) + s µ(1 − %) + s

which was expected, since T follows an exponential distribution with parameter µ(1 − %).
To get the Laplace-transform of W we have
∞  k ∞  k
X µ k
X µ
LW (s) = % (1 − %) = 1 − % + %k (1 − %)
k=0
µ + s k=1
µ + s

%µ(1 − %)
=1−%+ ,
µ(1 − %) + s

which should be
LT (s)
µ
µ+s

since
µ
LT (s) = LW (s) .
µ+s
To show this it can be calculated that
µ+s µ(1 − %) µ + s µ(1 − %)
LW (s) = LT (s) = =1−%+% .
µ µ(1 − %) + s µ µ(1 − %) + s

Let us verify the result by deriving the mean values T and W .


1
L0T (0) = − ,
µ(1 − %)
%
L0W (0) = %L0T (0) = − ,
µ(1 − %)

thus
1 %
T = ,W = ,
µ(1 − %) µ(1 − %)
which was obtained earlier.

Exercise 5 Show that for an M/M/1/K queueing system

        lim_{K→∞} N (K) = ρ/(1 − ρ),    ρ < 1.

Solution:
It is well-known if ρ < 1 then

lim ρK = 0.
K→∞

Since N = ρ(1 − (K + 1)ρ^K + Kρ^{K+1}) / ((1 − ρ)(1 − ρ^{K+1})), it is enough to show that

lim KρK = 0.
K→∞

This can be proved by the L'Hospital's rule, namely

        lim_{K→∞} K/ρ^{−K} = ∞/∞  (indeterminate form), and

        lim_{K→∞} K/ρ^{−K} = lim_{K→∞} 1/(− ln ρ · ρ^{−K}) = lim_{K→∞} ρ^K /(− ln ρ) = 0.

Exercise 6 Show that for an M/M/1/K queueing system the Laplace-transform

        LT (s) = (µP0 /(1 − PK )) · (1 − (λ/(µ + s))^K ) / (µ − λ + s)

satisfies LT (0) = 1.

Solution:
        LT (0) = (µP0 /(1 − PK )) · (1 − ρ^K )/(µ − λ) = (P0 /(1 − PK )) · (1 − ρ^K )/(1 − ρ)

               = [(1 − ρ)/(1 − ρ^{K+1})] / [1 − ρ^K (1 − ρ)/(1 − ρ^{K+1})] · (1 − ρ^K )/(1 − ρ)

               = (1 − ρ)/(1 − ρ^{K+1}) · (1 − ρ^{K+1})/(1 − ρ^{K+1} − ρ^K + ρ^{K+1}) · (1 − ρ^K )/(1 − ρ)

               = 1.

Exercise 7 Find T by the help of the Laplace-transform for an M/M/1/K queueing
system.

Solution:

Since
 K
λ
µP0 1− µ+s
LT (s) =
1 − PK µ − λ + s
then
   K  
µ+s −(K+1) 1 λ

K λ
·λ
(µ− λ + s) − 1 − µ+s
µP0 
L0T (s)

=  
1 − PK  (µ − λ + s)2 

   
1
µP0  KρK+1 − 1
ρ
+ ρ K
− 1
L0T (0) = 
1 − PK (µ − λ)2
   
P0 ρ K+1 1 K
= Kρ −1 +ρ −1
λ(1 − PK )(1 − ρ)2 ρ
1 ρP0 (KρK − KρK+1 + ρK − 1)
= ·
λ(1 − PK ) (1 − ρ)2
1 ρP0 ((K + 1)ρK − KρK+1 − 1)
=
λ(1 − PK ) (1 − ρ)2
N
=− ,
λ(1 − PK )
that is
N
T = ,
λ(1 − PK )
which was obtained earlier.
The higher moments can be calculated, too.
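
The M/M/1/K quantities used in the last three exercises are easy to compute directly. Below is a minimal Python sketch (not from the original text; names are my own); the case ρ = 1 is treated separately, as in the formulas of Chapter 5.

```python
def mm1k_measures(lam, mu, K):
    """M/M/1/K: steady-state distribution, mean number in system and
    mean response time T = N / (lam * (1 - P_K))."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        P = [1.0 / (K + 1)] * (K + 1)
    else:
        P = [(1 - rho) * rho**n / (1 - rho**(K + 1)) for n in range(K + 1)]
    N = sum(n * p for n, p in enumerate(P))
    lam_eff = lam * (1 - P[K])        # rate of customers actually accepted
    T = N / lam_eff                   # Little's law
    return P, N, T

P, N, T = mm1k_measures(lam=0.8, mu=1.0, K=10)
print(N, T)
```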

Exercise 8 Consider a closed queueing network with 2 nodes containing K customers.


Assume that at each node the service times are exponentially distributed with parameter
µ1 and µ2 , respectively. Find the mean performance measures at each node.

Solution:
It is easy to see that the nodes operate the same way and they can be considered as an
M/M/1/K queueing system. Hence the performance measures can be computed by using
the formulas with ρ2 = µ1 /µ2 and ρ1 = µ2 /µ1 , respectively.
Furthermore, one can easily verify that

US (1)µ1 = US (2)µ2 ,

where US (i), i = 1, 2 is the utilization of the server.

Exercise 9 Find the generating function for an M/M/n/n queueing system.

Solution:
n n
X %k X (s%)k Q(n, s%)
GN (s) = sk P0 = e−s% P0 es% = e−%(1−s) .
k=0
k! k=0
k! Q(n, %)

To verify the formula let us calculate N .


Since N = G0N (1), therefore take the derivative, that is we get

Q(n, %s) %P (n, %s)


G0N (s) = %e−%(1−s) − e−%(1−s) .
Q(n, %) Q(n, %)

hence
G0N (1) = % − %B(n, %) = %(1 − B(n, %)),
which was obtained earlier.

Exercise 10 Find V ar(N ) for an M/M/n/n queueing system.

Solution:
Since V ar(N ) = E(N 2 ) − (E(N ))2 , let us calculate rst E(N 2 ). That is
n
X n
X n
X n
X
2 2
E(N ) = k Pk = (k(k − 1) + k)Pk = k(k − 1)Pk + kPk
k=1 k=1 k=1 k=1
n n−2
X %k X %i
= k(k − 1) P0 + E(N ) = %2 P0 + E(N )
k=2
k! i=0
i!
  
2 2 n
= % (1 − Pn − Pn−1 ) + E(N ) = % 1 − Pn 1 + + E(N ).
%

Since E(N ) = %(1 − B(n, %)), therefore


n
V ar(N ) = %2 (1 − B(n, %)(1 + )) − (%(1 − B(n, %)))2 + E(N )
%
% + n
= %2 (1 − B(n, %)( )) − (%(1 − B(n, %)))2 + E(N )
%
= %2 − %2 B(n, %) − n%B(n, %) − %2 − %2 B 2 (n, %) + 2%2 B(n, %) + E(N )
= E(N ) + %2 B(n, %) − n%B(n, %) − %2 B 2 (n, %)
= E(N ) − %B(n, %)(n − %(1 − B(n, %)))
= E(N ) − %B(n, %)(n − E(N )).

Exercise 11 Show that B(m, a) is a monotone decreasing sequence and its limit is 0.

Solution:
aB(m − 1, a) a
B(m, a) = < ,
m + aB(m − 1, a) m
and thus it tends to 0 as m increasing. The sequence is monotone decreasing i

B(m, a) − B(m − 1, a) < 0, ∀m

that is
aB(m − 1, a)
− B(m − 1, a) < 0
m + aB(m − 1, a)
B(m − 1, a)(a − m − aB(m − 1, a))
<0
m + aB(m − 1, a)
a − m − aB(m − 1, a) < 0
a−m
B(m − 1, a) > ,
a
which is satisfied if a ≤ m. Since 1 ≥ B(m − 1, a) ≥ 0, therefore 1 ≥ (a − m)/a ≥ 0, and thus
a ≥ m, m ≥ 0, that is 0 ≤ m ≤ a. It means that B(m, a) is monotone decreasing in m,
which was expected since as the number of servers increases the probability of loss should
decrease.

Exercise 12 Find a recursion for C(m, a).

Solution:
Let a = λ/µ, then by the help of

        C(m, a) = B(m, a) / (1 − (a/m)(1 − B(m, a)))

we should write a recursion for C(m, a) since B(m, a) can be obtained recursively. First
we show how B(m−1, a) can be expressed by the help of C(m−1, a) and then substituting
into the recursion
aB(m − 1, a)
B(m, a) =
m + aB(m − 1, a)
we get the desired formula. So let us express B(m, a) via C(m, a) that is

mB(m, a)
C(m, a) =
m − a(1 − B(m, a))

C(m, a)(m − a) + C(m, a)aB(m, a) = mB(m, a)


thus
(m − a)C(m, a)
B(m, a) = ,
m − aC(m, a)

which is positive since m > a is the stability condition for an M/M/m queueing system.
This shows that
B(m, a) < C(m, a),
which was expected because of the nature of the problem.
Consequently
(m − 1 − a)C(m − 1, a)
B(m − 1, a) = ,
m − 1 − aC(m − 1, a)
and m − 1 > a is also valid due to the stability condition. Let us rst express C(m, a) by
the help of B(m − 1, a) then substitute. To do so
aB(m−1,a) amB(m−1,a)
m+aB(m−1,a)
m m+aB(m−1,a)
C(m, a) = aB(m−1,a) 
= m+aB(m−1,a)−aB(m−1,a) 
m− a 1 − m+aB(m−1,a) m−a m+aB(m−1,a)
aB(m − 1, a) aB(m − 1, a)
= = .
m + aB(m − 1, a) − a m − a(1 − B(m − 1, a))

Now let us substitute C(m−1, a) into here. Let us express the numerator and denominator
in a simpler form, namely

(m − 1 − a)C(m − 1, a)
NUM = a
m − 1 − aC(m − 1, a)
 
(m − 1 − a)C(m − 1, a)
DEN OM = m − a 1 −
m − 1 − aC(m − 1, a)
m − 1 − aC(m − 1, a) − (m − 1)C(m − 1, a) + aC(m − 1, a)
=m−a
m − 1 − aC(m − 1, a)
(m − 1)(1 − C(m − 1, a))
=m−a
m − 1 − aC(m − 1, a)
m(m − 1) − maC(m − 1, a) − a(m − 1)(1 − C(m − 1, a))
=
m − 1 − aC(m − 1, a)
m(m − 1) − a(m − 1) − aC(m − 1, a)
=
m − 1 − aC(m − 1, a)
(m − 1)(m − a) − aC(m − 1, a)
= .
m − 1 − aC(m − 1, a)
Thus
a(m − 1 − a)C(m − 1, a)
C(m, a) = ,
(m − 1)(m − a) − aC(m − 1, a)
and the initial value is C(1) = a. Thus the probability of waiting can be computed re-
cursively. It is important because the main performance measures depends on this value.

Now, let us show that C(m, a) is a monotone decreasing sequence and tends to 0 as m,
increases which is expected. It is not dicult to see that

a(m − 1 − a)C(m − 1, a)
C(m, a) <
(m − 1)(m − a) − a

and if we show that
        a(m − 1 − a) / ((m − 1)(m − a) − a) < 1,
then we have
C(m, a) < C(m − 1, a).
To do so it is easy to see that
a(m − 1 − a) < (m − 1)(m − a) − a
m − m − ma + a − a − am + a + a2 > 0
2

m2 − (1 + 2a)m + a + a2 > 0
p
1 + 2a ± (1 + 2a)2 − 4(a2 + a)
m1,2 =
√ 2
1 + 2a ± 1 + 4a2 + 4a − 4a2 − 4a
m1,2 =
2
1 + 2a ± 1
m1,2 =
2
that is if m > a + 1 then the values of the parabola are positive. However, this condition
is satised since the stability condition is m − 1 > a.
Furthermore, since
B(m, a)
C(m, a) = a
1 − m (1 − B(m, a))
therefore limm→∞ C(m, a) = 0, which was expected.
This can be proved by direct calculations, since
%m m
C(m, %) = P0 (m)
m! m − %
and from
%m m
lim P0 (m) = e−% , lim = 0,
m→∞ m→∞ m! m − %

the limit is 0. It is clear because there is no waiting in an infinite-server system.
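
Both recursions above are easy to program. The following Python sketch is mine (not part of the original text); it computes B(m, a) and C(m, a) by the recursions of Exercises 11 and 12 and cross-checks them against the direct relation between the two formulas. Note that in the C-recursion the intermediate values with k ≤ a + 1 are not probabilities, and an integer a can make a denominator vanish, so a non-integer offered load is used in the example.

```python
def erlang_b(m, a):
    """Erlang B (loss) formula via B(k, a) = a*B(k-1, a) / (k + a*B(k-1, a)), B(0, a) = 1."""
    b = 1.0
    for k in range(1, m + 1):
        b = a * b / (k + a * b)
    return b

def erlang_c(m, a):
    """Erlang C (delay) formula via the recursion derived above, C(1, a) = a."""
    c = a
    for k in range(2, m + 1):
        c = a * (k - 1 - a) * c / ((k - 1) * (k - a) - a * c)
    return c

a, m = 3.5, 5
B = erlang_b(m, a)
print(B, erlang_c(m, a))
# cross-check: C(m, a) = B / (1 - (a/m) * (1 - B))
print(B / (1 - a / m * (1 - B)))
```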

Exercise 13 Verify that the distribution function of the response time for a M/M/r
queueing system in the case of r = 1 reduces to the formula obtained for an M/M/1
system.

Solution:
        P (T > x) = e^{−µx} [1 + C(n, %) (1 − e^{−µ(r−1−%)x})/(r − 1 − %)] = e^{−µx} [1 + % (1 − e^{µ%x})/(−%)]

                  = e^{−µx} (1 − 1 + e^{µ%x}) = e^{−µ(1−%)x} .
Thus
FT (x) = 1 − e−µ(1−%)x .

Exercise 14 Show that lim GN (z) = 1 for an M/G/1 queueing system.
z→1

Solution:
z−1 0
lim GN (z) = lim (1 − ρ)LS (λ − λz) · = ,
z→1 z→1 z − LS (λ − λz) 0

therefore the L'Hospital's rule is applied. It is easy to see that


z−1 1 1
lim = lim 0
= ,
z→1 z − LS (λ − λz) z→1 1 + λL (λ − λz) 1−ρ
S

and thus

        lim_{z→1} GN (z) = lim_{z→1} (1 − ρ)LS (λ − λz) · lim_{z→1} (z − 1)/(z − LS (λ − λz)) = (1 − ρ)/(1 − ρ) = 1.

Exercise 15 Show that if the residual service time in an M/G/1 queueing system is denoted by R then its Laplace-transform can be obtained as LR (t) = (1 − LS (t))/(t · E(S)).

Solution:

Z∞
1 − FS (x)
LR (t) = e−tx dx.
E(S)
0

Using integration by parts we have


∞ Z∞
e−tx 1 − FS (x) e−tx (−f (x))

1 − LS (t)
LR (t) = − + dx = .
t E(S) 0 t E(S) tE(S)
0

Verify the limit lim_{t→0} LR (t) = 1.
It is easy to see that

        lim_{t→0} LR (t) = lim_{t→0} (1 − LS (t))/(tE(S)) = 0/0,

therefore apply the L'Hospital's rule. Thus

−L0S (t) E(S)


lim LR (t) = lim = = 1.
t→0 t→0 E(S) E(S)

Exercise 16 By the help of LR (t) prove that if S ∈ Exp(µ),
then R ∈ Exp(µ).

Solution:

µ
1 − LS (t) 1 − µ+t µ
LR (t) = = t = ,
tE(S) µ
µ + t

thus R ∈ Exp(µ).

Exercise 17 By the help of the formulas for an M/G/1 system derive the corresponding
formulas for an M/M/1 system.

Solution:

In this case
µ
LS (t) = ,
µ+t
therefore the Laplace-transform of the response time is

t(1 − ρ)
LT (t) = LS (t)
t − λ + λLS (t)
µ t(1 − ρ)
= λµ
µ + t t − λ + µ+t
µ t(1 − ρ)(µ + t)
=
µ + t µt + t2 − λµ − λt + λµ
t(µ − λ) µ−λ µ(1 − ρ)
= = = ,
t(t + µ − λ) t+µ−λ µ(1 − ρ) + t

that is T ∈ Exp(µ(1 − ρ)), as we have seen earlier.

For the generating function of the number of customers in the system we have

(1 − ρ)(1 − z)
GN (z) = LS (λ − λz)
LS (λ − λz) − z
µ (1 − ρ)(1 − z)
= · µ
λ − λz + µ λ−λz+µ
−z
µ (1 − ρ)(1 − z)(λ − λz + µ)
= ·
λ − λz + µ µ − λz + λz 2 − µz
µ(1 − ρ)(1 − z) µ(1 − ρ) 1−ρ
= = = ,
µ(1 − z) − λz(1 − z) µ − λz 1 − ρz

as we proved in the case of an M/M/1 system.

For the mean waiting and response times we get

ρE(S) 1 + CS2 ρ 1+1 ρ


W = · = · = ,
1−ρ 2 µ(1 − ρ) 2 µ(1 − ρ)
 
1 1 ρ 1
T =W+ = +1 = .
µ µ 1−ρ µ(1 − ρ)

To calculate the variance we need

λ µ3!3
2
λE(S 3 )

2 2 ρ
E(W ) = 2(W ) + =2 +
3(1 − ρ) µ(1 − ρ) 3(1 − ρ)
2 2
ρ 2λ 2µρ + 2λ(1 − ρ)
=2 2 2
+ 3 =
µ (1 − ρ) µ (1 − ρ) µ3 (1 − ρ)2
2λρ + 2λ − 2λρ 2λ
= 3 2
= 3 ,
µ (1 − ρ) µ (1 − ρ)2

thus

 2
2λ ρ
V ar(W ) = 3 −
µ (1 − ρ)2 µ(1 − ρ)
2
2λ − µρ 2λ − λρ (2 − ρ)ρ
= 3 2
= 3 2
= 2 ,
µ (1 − ρ) µ (1 − ρ) µ (1 − ρ)2

as we have seen earlier.


Furthermore

 2
(2 − ρ)ρ 1
V ar(T ) = V ar(W ) + V ar(S) = 2 +
µ (1 − ρ)2 µ
2 2
 2
2ρ − ρ + 1 − 2ρ + ρ 1 1
= = 2 = .
µ2 (1 − ρ)2 µ (1 − ρ)2 µ(1 − ρ)

The variance of the number of customers in the system is
2
λ3 E(S 3 ) λ2 E(S 2 ) λ2 (3 − 2ρ)E(S 2 )

V ar(N ) = + + + ρ(1 − ρ)
3(1 − ρ) 2(1 − ρ) 2(1 − ρ)
!2
λ3 µ3!3 λ2 µ22 λ2 (3 − 2ρ) µ22
= + + + ρ(1 − ρ)
3(1 − ρ) 2(1 − ρ) 2(1 − ρ)
 2 2
2λ3 ρ ρ2 (3 − 2ρ)
= + + + ρ(1 − ρ)
µ3 (1 − ρ) 1−ρ 1−ρ
2ρ3 ρ4 ρ2 (3 − 2ρ)
= + + + ρ(1 − ρ)
1 − ρ (1 − ρ)2 1−ρ
2ρ3 (1 − ρ) + ρ4 + ρ2 (1 − ρ)(3 − 2ρ) ρ(1 − ρ)3
= +
(1 − ρ)2 (1 − ρ)2
2ρ3 − 2ρ4 + ρ4 + 3ρ2 − 2ρ3 − 3ρ3 + 2ρ4 + ρ + 3ρ3 − 3ρ2 − ρ4
=
(1 − ρ)2
ρ
= ,
(1 − ρ)2

as we have seen earlier.

Finally
2
λ3 E(S 3 ) λ2 E(S 2 ) λ2 E(S 2 )

V ar(Q) = + +
3(1 − ρ) 2(1 − ρ) 2(1 − ρ)
!2
λ3 µ3!3 λ2 µ22 λ2 µ22
= + +
3(1 − ρ) 2(1 − ρ) 2(1 − ρ)
2ρ3 ρ4 ρ2 2(1 − ρ)ρ3 + ρ4 + ρ2 (1 − ρ)
= + + =
1 − ρ (1 − ρ)2 1 − ρ (1 − ρ)2
2ρ3 − 2ρ4 + ρ4 + ρ2 − ρ3 ρ3 − ρ4 + ρ2 ρ2 (1 + ρ − ρ2 )
= = = .
(1 − ρ)2 (1 − ρ)2 (1 − ρ)2

These verifications help us to see that these complicated formulas reduce to the simple ones.
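
The reduction can also be checked numerically. The sketch below is my own (it only uses the mean-value formulas of this exercise with the first three moments of S); it evaluates the M/G/1 measures for an exponential service time and compares them with the direct M/M/1 expressions.

```python
def mg1_measures(lam, ES, ES2, ES3):
    """M/G/1 mean values from the first three moments of the service time S."""
    rho = lam * ES
    W = lam * ES2 / (2 * (1 - rho))                  # Pollaczek-Khinchine mean
    T = W + ES
    EW2 = 2 * W**2 + lam * ES3 / (3 * (1 - rho))
    VarW = EW2 - W**2
    VarT = VarW + (ES2 - ES**2)                      # Var(T) = Var(W) + Var(S)
    return dict(W=W, T=T, VarW=VarW, VarT=VarT, N=lam * T)

lam, mu = 0.7, 1.0
rho = lam / mu
# exponential service: E[S^k] = k!/mu^k
print(mg1_measures(lam, 1 / mu, 2 / mu**2, 6 / mu**3))
print(dict(W=rho / (mu * (1 - rho)), T=1 / (mu * (1 - rho)),
           VarW=(2 - rho) * rho / (mu**2 * (1 - rho)**2),
           VarT=1 / (mu * (1 - rho))**2, N=rho / (1 - rho)))
```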

Exercise 18 Based on the transform equation

1−z
GN (z) = LS (λ − λz)(1 − ρ)
LS (λ − λz) − z

find N .

Solution:
It is well-known that N = G0N (1), that is why we have to calculate the derivative at the

right hand side. However, the term LS (λ−λz)−z
1−z
takes an indetermined value at z = 1 hence
the L'Hospital's rule is used. Let us rst dene a function

LS (λ − λz) − z
f (z) =
1−z
.

Hence one can see that


LS (λ − λz)
LN (z) = (1 − ρ) .
f (z)

Applying the expansion procedure



X (−1)k E(S k )
LS (λ − λz) = 1 + (λ − λz)k
k=1
k!

we have

X (−1)k E(S k )
1+ (λ − λz)k − z
k=1
k!
f (z) =
1−z
E(S 2 )(1 − z)
= 1 − λE(S) + λ2 + ....
2
2 E(S 2 )
Thus f (1) = 1 − ρ and f 0 (1) = − λ 2
.

After these calculations we get

(1 − ρ)f (z)L0S (λ − λz)(−λ) − LS (λ − λz) · f 0 (z)


L0N (z) =
(f (z))2

and hence
2 2
(1 − ρ)f (1)λE(S) + λ E(S )
N= G0N (1)
= 2
(1 − ρ)2
 2 2)

(1 − ρ) (1 − ρ)ρ + λ E(S
2
= 2
(1 − ρ)
λ E(S 2 )
2
ρ2 1 + CS2
=ρ+ =ρ+ · ,
2(1 − ρ) 1−ρ 2

which was obtained in a dierent way.

Exercise 19 Find V ar(W ) by the help of LW (s) = s(1 − ρ)/(s − λ + λLS (s)).

Solution:
Let us dene a function
s − λ + λLS (s)
f (s) = ,
s
which is after expansion can be written as

λ λ λ λ X E(S i ) i
1 − + LS (s) = 1 − + · (−1)i s
s s s s i=0 i!
λE(S 2 ) λE(S 3 ) 2
= 1 − λE(S) + s− s + ....
2 3!
Therefore

λE(S 2 ) 2λE(S 3 ) 3λE(S 4 ) 2


f 0 (s) = − s+ s + ...,
2 3! 4!
2λE(S 3 ) 3 · 2λE(S 4 )
f (2) (s) = − + s + ....
3! 4!
Hence

f (0) = 1 − ρ,
λE(S 2 )
f 0 (0) = ,
2
λE(S 3 )
f 00 (0) = − .
3
Consequently, because

1−ρ
LW (s) =
f (s)

we have

f 0 (s)
L0W (s) = −(1 − ρ) ,
(f (s))2
00 f 00 (s)(f (s))2 − 2f (s)(f 0 (s))2
LW (s) = −(1 − ρ) .
(f (s))4

Thus
2)
f 0 (0) (1 − ρ) λE(S
E(W ) = −L0W (0)
= (1 − ρ) = 2
(f (0))2 (1 − ρ)2
λE(S 2 ) ρE(S) 1 + CS2
= = · .
2(1 − ρ) 1−ρ 2

Similarly

f 00 (0)(f (0))2 − 2f (0)(f 0 (0))2


E(W 2 ) = L00W (0) = −(1 − ρ)
(f (0))4
   2
2 λE(S 3 ) λE(S 2 )
(1 − ρ)(1 − ρ) − 3 − 2(1 − ρ) 2
=− 4
(1 − ρ)
3
λE(S )
= 2(E(W ))2 + .
3(1 − ρ)

Thus

V ar(W ) = E(W 2 ) − (E(W ))2


λE(S 3 )
= 2(E(W ))2 + − (E(W ))2
3(1 − ρ)
λE(S 3 )
= (E(W ))2 + .
3(1 − ρ)

Finally

V ar(T ) = V ar(W + S) = V ar(W ) + V ar(S).

Exercise 20 By using the Laplace-transform show that

E(S k+1 )
E(Rk ) = .
(k + 1)E(S)

Solution:
As we have seen earlier
1 − LS (s)
LR (s) = ,
sE(S)

and it is well-known that


∞ (i)
X L (0)
LS (s) = S
si
i=0
i!

X (−1)i E(S i )
= si .
i=0
i!

Thus for LR (s) we get



X (−1)k
LR (s) = 1 + E(Rk )sk .
k=1
k!

Therefore

X (−1)k 1 − LS (s)
LR (s) = 1 + E(Rk )sk =
k=1
k! sE(S)

!
X (−1)k
1− 1+ E(S k )sk
k=1
k!
=
sE(S)
∞ ∞
X (−1) E(S k+1 ) k
k X (−1)k E(Sk + 1) k
= ·s =1+ s .
k=1
(k + 1)!E(S) k=1
k! (k + 1)E(S)

Consequently

E(S k+1 )
E(Rk ) = , k = 1, 2, . . .
(k + 1)E(S)

Exercise 21 Find the generating function of the number of customers arrived during a
service time for an M/G/1 system.

Solution:
By applying the theorem of total probability we have
Z ∞
(λx)k −λx
P (νA (S) = k) = e fS (x) dx.
0 k!
Hence its generating function can be written as
∞ Z ∞X ∞
(λx)k −λx (zλx)k −λx
Z
k
GνA (S) (z) = z e fS (x) dx = e fS (x) dx
0 k! 0 k=0
k!
Z ∞ Z ∞
zλx −λx
= e e fS (x) dx = e−λx(1−z) fS (x) dx = LS (λ(1 − z)).
0 0

4.2 Finite-Source Systems
Exercise 22 If P (k, λ) = (λ^k /k!) e^{−λ} and Q(k, λ) = ∑_{i=0}^{k} (λ^i /i!) e^{−λ} , then show the following important formula
k
X
P (k − j, a1 )Q(j, a2 ) = Q(k, a1 + a2 ).
j=0

Solution:
It is well-known that Z ∞
Q(n, a) = P (n, y) dy,
a

therefore

        ∑_{j=0}^{k} P (k − j, a1 )Q(j, a2 ) = ∑_{j=0}^{k} (a1^{k−j}/(k − j)!) e^{−a1} ∑_{i=0}^{j} (a2^{i}/i!) e^{−a2}

        = ∑_{j=0}^{k} (a1^{k−j}/(k − j)!) e^{−a1} ∫_{a2}^{∞} (y^{j}/j!) e^{−y} dy = ∫_{a2}^{∞} ((y + a1 )^{k}/k!) e^{−(a1 +y)} dy = ∫_{a1 +a2}^{∞} (t^{k}/k!) e^{−t} dt = Q(k, a1 + a2 ),

where we introduced the substitution t = y + a1 .

Exercise 23 Find the mean response time for an M/M/1/n/n queueing system by using
the Laplace-transform.

Solution:

It is well-known that T = −L0T (0), that is why let us calculate L0T (0).
" n µ+s
 #0
µ s Q n − 1,
L0T (s) = eλ λ
µ+s Q n − 1, µλ
n 0 n 0
Q n − 1, µ+s µ+s
 


µ s
λ µ s Q n 1, λ
= eλ · + eλ · .
µ+s Q n − 1, µλ µ+s Q n − 1, µλ

Thus
n 1 1  µ n 1
L0T (o) = − + − B n − 1, = − + US (n − 1),
µ λ λ λ µ λ

and hence
n US (n − 1)
T (n) = − .
µ λ

Since
1
T (n) = (N (n − 1) + 1)
µ
 
1 US (n − 1)
= n−1− +1
µ ρ
n US (n − 1)
= − ,
µ λ

which was obtained earlier. The higher moments of T (n) can be obtained and hence
further measured can be calculated.
Similarly, the moments of W (n) can be derived.

Exercise 24 Find the mean response time, denoted by T , for an M/M/1/n/n system
by using the density function approach.

Solution:
Let z = µ/λ. Then
Z∞
1 (µx + z)n−1 −(µx+z)
T = xµ e dx
Q(n − 1, z) (n − 1)!
0
Z∞ n−1
e−z X (µx)k z n−1−k
= xµ e−µx dx
Q(n − 1, z) k=0
k! (n − 1 − k)!
0
n−1 Z∞
e−z X z n−1−k (µx)k −µx
= xµ e dx
Q(n − 1, z) k=0 (n − 1 − k)! k!
0
−z n−1 n−1−k
e X z k+1
= ·
Q(n − 1, z) k=0
(n − 1 − k)! µ
z n−1−k
1
n−1
X (n−1−k)!
· e−z
= (k + 1)
µ k=0
Q(n − 1, z)
n−1
1 X k+1 1
= Pk (n − 1) = (N (n − 1) + 1).
µ k=0
µ µ

The mean waiting time can be obtained similarly, starting the summation from 1 and
taking a Erlang distribution with one phase less.

Exercise 25 Find V ar(N ) for an M/M/1/n/n system.

Solution:

Let us denote by F the number of customers in the source. Hence F + N = n, and thus
V ar(N ) = V ar(F ).
As we have proved the distribution of F equals to the distribution of an M/M/n/n system
with trac intensity z = ρ1 we have
       
1 1 1 1 1 1
V ar(N ) = 1 − B n, − B n, n− 1 − B n,
ρ ρ ρ ρ ρ ρ
 
US 1 − US US
= − n− .
ρ ρ ρ

If the number of sources n is indicated explicitly, this formula can be written as


 
Us (n) 1 − Us (n) Us (n)
V ar(N (n)) = − n− .
ρ ρ ρ

This result helps us to determine V ar(T (n)) and V ar(W (n)). Since W (n) can be
consider as a random sum, where the summands are the exponentially distributed service
times with parameter µ, and the counting process is the number of customers in the
(n)
system at the arrival instant of a customer, denoted by NA , we can use the formula
obtained for the variance of a random sum, namely

(n) 1 1 (n)
V ar(W (n)) = E(NA ) · + V ar(NA )
µ2 µ2
1 1
= N (n − 1) · 2 + 2 V ar(N (n − 1))
µ µ
1
= 2 (N (n − 1) + V ar(N (n − 1))),
µ
where
Us (n − 1)
N (n − 1) = n − 1 −
ρ
 
Us (n − 1) 1 − Us (n − 1) Us (n − 1)
V ar(N (n − 1)) = − n−1− .
ρ ρ ρ

Similarly, since T (n) = W (n) + S , then

        V ar(T (n)) = V ar(W (n)) + 1/µ^2 .
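
The finite-source measures used in Exercises 23–25 can also be obtained directly from the steady-state distribution. A minimal Python sketch (my own naming, not part of the original text):

```python
from math import factorial

def mm1nn_measures(n, lam, mu):
    """M/M/1/n/n (machine interference, one repairman):
    P_k proportional to n!/(n-k)! * (lam/mu)^k, k = 0, ..., n."""
    rho = lam / mu
    q = [factorial(n) / factorial(n - k) * rho**k for k in range(n + 1)]
    total = sum(q)
    P = [x / total for x in q]
    N = sum(k * p for k, p in enumerate(P))
    US = 1 - P[0]                   # server utilization, US(n)
    lam_eff = lam * (n - N)         # effective arrival rate, also mu*US
    T = N / lam_eff                 # Little's law
    W = T - 1 / mu
    VarN = sum(k * k * p for k, p in enumerate(P)) - N**2
    return dict(P0=P[0], US=US, N=N, T=T, W=W, VarN=VarN)

print(mm1nn_measures(n=8, lam=0.2, mu=1.0))
# P0 equals B(n, 1/rho), the Erlang loss probability with argument z = mu/lam
```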

Chapter 5
Queueing Theory Formulas

5.1 Notations and Denitions


Table 1. Basic Queueing Theory Notations and Denitions

a Server utilization.
ai Utilization of component i in a queueing network.
A[t] Distribution function of interarrival time. A[t] = P [τ ≤ t]
b Random variable describing the busy period for a server
B[c, ρ] Erlang's B formula. Probability all servers busy in M/M/c/c system.
Also called Erlang's loss formula.
c Number of servers in a service facility.
C[c, ρ] Erlang's C formula. Probability all servers busy in
the M/M/c system. Also called Erlang's delay formula.
CX^2 Squared coefficient of variation of a positive
random variable, CX^2 = V ar[X]/E[X]^2 .
D Symbol for constant (deterministic) interarrival
or service time.
Ek Symbol for Erlang-k distribution of interarrival or service time.
E[Q|Q > 0] Expected (mean or average) queue length of nonempty queues.
E[W |W > 0] Expected queueing time.
FCFS First Come First Served queue discipline.
FIFO First In First Out queue discipline. Identical with FCFS.
FT (t) The distribution function of T , FT (t) = P [T < t].
FW (t) The distribution function of W , FW (t) = P [W < t].
G Symbol for general probability distribution of service
time. Independence usually assumed.
GI Symbol for general independent interarrival time
distribution.
H2 Symbol for two-stage hyperexponential distribution.
Can be generalized to k stages.
K Maximum number of customers allowed in queueing
system. Also size of population in nite population
models.

Table 1. Basic Queueing Theory Notations and Denitions (continued)

λ Mean arrival rate of customers into the system.


λ Actual mean arrival rate into the system when
some arrivals are turned away, e.g., in the M/M/c/c system.
λT Mean throughput of a computer system measured in
transactions or interactions per unit time.
ln(·) Natural logarithm function (log to base e).
Ns Expected steady state number of customers receiving
service, E[Ns ].
LCFS Last Come First Served queue discipline.
LIFO Last In First Out queue discipline.
Identical with LCFS.
M Symbol for exponential interarrival or
service time.
µ Mean service rate per server, that is,
the mean rate of service completions
while the server is busy.
µa , µb Parameters of the two-stage hyperexponential
distribution of T for the M/H2 /1 system.
N Expected steady state number of customers
in the queueing system, E[N ].
N [t] Random variable describing the number of
customers in the system at time t.
N Random variable describing the steady state
number of customers in the system.
Nb Random variable describing the number of customers
served by a server in one busy period.
Ns [t] Random variable describing the number of
customers receiving service at time t.
Ns Random variable describing the steady state number of
customers in the service facility.
O Operating time of a machine in a machine repair queueing model.
The time a machine remains in operation after repair
before repair is again necessary.
Pn [t] Probability there are n customers
in the system at time t.
Pn Steady state probability that there are
n customers in the system.
PRI Symbol for priority queueing discipline.
PS Symbol for processor sharing queueing discipline.
pi A parameter of a hypoexponential random variable.
πa , πb Parameters of the distribution function
of w for the M/H2 /1 queueing system.
πX [r] The rth percentile for random variable X .
Q Random variable describing the steady state
number of customers in the queue.

Table 1. Basic Queueing Theory Notations and Denitions (continued)

Q[t] Random variable describing the number of


customers in the queue at time t.
ρ ρ = λS The traffic intensity or offered load.
The international unit of this is the erlang,
named for A.K. Erlang, a queueing theory pioneer.
RSS Symbol for queueing discipline "random selection for service".
S Random variable describing the service time. E[S] = 1/µ.
S Expected customer service time, E[S] = 1/µ.
SIRO Symbol for service in random order, which is identical to RSS.
It means each customer in queue has the
same probability of being served next.
T Random variable describing the total time a customer
spends in the queueing system, T = W + S .
T Expected steady state time a customer spends
in the system, T = E[T ] = W + S .
τ Random variable describing interarrival time.
E[τ ] = 1/λ.
W Random variable describing the time a customer
spends in the queue before service begins.
W0 Random variable describing time a customer who must
queue spends in the queue before receiving service.
Also called conditional queueing time.
W Expected steady state time a customer
spends in the queue, W = E[W ] = T − S .

5.2 Relationships between random variables


Table 2. Relationships between Random Variables

a Server utilization. The probability


any particular server is busy.
N = Q + Ns Number of customers in steady state system.
N =λ·T Mean number of customers in steady state system.
This formula is often called Little's law.
Ns = λ · S Mean number of customers receiving service
in steady state system. This formula
sometimes called Little's law.
Q=λ·W Mean number in steady state queue.
Also called Little's law.
ρ = E[S]/E[τ ] = λS Traffic intensity in erlangs.
T =W +S Total waiting time in the system.
T =W +S Mean total waiting time in steady state system.

5.3 M/M/1 Formulas
Table 3. M/M/1 Queueing System

ρ = λS, Pn = P [N = n] = (1 − ρ)ρn , n = 0, 1, . . . .

P [N ≥ n] = ρn , n = 0, 1, . . . .

ρ ρ
N = E[N ] = λ · T = , V ar(N ) = .
1−ρ (1 − ρ)2

ρ2 ρ2 (1 + ρ − ρ2 )
Q = λW = , V ar(Q) = .
1−ρ (1 − ρ)2

1 ρ
E[Q|Q > 0] = , V ar[Q|Q > 0] = .
1−ρ (1 − ρ)2
   
−t −t
FT (t) = P [T ≤ t] = 1 − exp , P [T > t] = exp .
T T

S 1 2
T = E[T ] = = , V ar(T ) = T .
1−ρ µ(1 − ρ)
 
100
πT [r] = T ln , πT [90] = T ln 10, πT [95] = T ln 20
100 − r
  
−t −t
FT (t) = P [W ≤ t] = 1 − ρ exp , P [W > t] = ρ exp .
T T
2
ρS (2 − ρ)ρS
W = , V ar(W ) = .
1−ρ (1 − ρ)2
   
100ρ
πW [r] = max T ln ,0 .
100 − r

πW [90] = max{T ln(10ρ), 0}, πW [95] = max{T ln(20ρ), 0}.

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

5.4 M/M/1/K Formulas
Table 4. M/M/1/K Queueing System

        Pn = (1 − ρ)ρ^n / (1 − ρ^{K+1})   if λ 6= µ,
        Pn = 1/(K + 1)                     if λ = µ,

n = 0, 1, . . . , K , where ρ = λS .

λ = (1 − PK )λ, Mean arrival rate into system.

        N = ρ[1 − (K + 1)ρ^K + Kρ^{K+1}] / [(1 − ρ)(1 − ρ^{K+1})]   if λ 6= µ,

        N = K/2   if λ = µ.

Pn
Q = N − (1 − P0 ), Πn = , n = 0, 1, . . . , K − 1.
1 − PK
K−1
X
FT (t) = 1 − Πn Q[n; µt],
n=0
where
n
−µt
X (µt)k
Q[n; µt] = e .
k=0
k!

N Q
T = , W = .
λ λ
K−2
X
FT (t) = 1 − Πn+1 Q[n; µt].
n=0

W
E[W |W > 0] = , a = (1 − PK )ρ.
1 − Π0

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

5.5 M/M/c Formulas

Table 5. M/M/c Queueing System

ρ
ρ = λS, a=
c
" c−1 #−1
X ρn ρc c!(1 − a)P [N ≥ c]
P0 = + = .
n=0
n! c!(1 − a) ρc
 n
ρ
 n! P0 , if n ≤ c



Pn =
 ρn
P0 , if n ≥ c.



c!cn−c
 " c−1 #
k c
X ρ ρ
if n < c,

P0 +


k! c!(1 − a)



 k=n
P [N ≥ n] = " #
 c n−c
a a

if n ≥ c
 n−c
 P0 c!(1 − a) = P [N ≥ c]a


ρP [N ≥ c]
Q=λ·W = ,
c(1 − a)

where
ρc
P [N ≥ c] = C[c, ρ] = c! .
c−1 n
ρ X ρ ρc
(1 − ) +
c n=0 n! c!

aC[c, ρ][1 + a − aC[c, ρ]]


V ar(Q) = .
(1 − a)2

N = λ · T = Q + ρ.

V ar(N ) = V ar(Q) + ρ(1 + P [N ≥ c]).

W [0] = 1 − P [N ≥ c], FT (t) = 1 − P [N ≥ c] exp[−cµt(1 − a)],

P [N ≥ c]S
W = .
c(1 − a)

Table 5. M/M/c Queueing System (continued)

2
[2 − C[c, ρ]]C[c, ρ]S
V ar(W ) = .
c2 (1 − a)2  
S 100C[c, ρ]
πW [r] = max{0, ln }.
c(1 − a) 100 − r
S
πW [90] = max{0, ln(10C[c, ρ])}.
c(1 − a)
S
πW [95] = max{0, ln(20C[c, ρ])}.
c(1 − a)
 
−ct(1 − a)
P [W ≤ t|W > 0] = 1 − exp , t > 0.
S
S
E[W |W > 0] = E[W 0 ] = .
c(1 − a)
 2
S
V ar([W |W > 0] = .
c(1 − a)

 1 + C1 e−µt + C2 e−cµt(1−a) if ρ 6= c − 1

FT (t) =
1 − {1 + C[c, ρ]µt}e−µt if ρ = c − 1

where

P [N ≥ c] P [N ≥ c]
C1 = − 1, and C2 = .
1 − c(1 − a) c(1 − a) − 1

T = W + S.
 2
2P [N ≥ c][1 − c2 (1 − a)2 ]S 2
+ 2S if ρ 6= c − 1



2
(ρ + 1 − c)c (1 − a)2

E[T 2 ] =


 2{2P [N ≥ c] + 1}S 2

if ρ = c − 1

2
V ar(T ) = E[T 2 ] − T .

πT [90] ≈ T + 1.3D(T ), πT [95] ≈ T + 2D(T ) (estimates due to James Martin).


Java applets for direct calculations can be found at
https://qsa.inf.unideb.hu
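
Table 5 can be turned into a few lines of code. The sketch below is mine (not the Java applets referred to above); it evaluates P0, Erlang's C formula and the mean measures of the M/M/c queue.

```python
from math import factorial

def mmc_measures(lam, mu, c):
    """M/M/c mean measures from Table 5; requires a = lam/(c*mu) < 1."""
    rho = lam / mu                 # offered load in erlangs
    a = rho / c                    # server utilization
    P0 = 1.0 / (sum(rho**n / factorial(n) for n in range(c))
                + rho**c / (factorial(c) * (1 - a)))
    C = rho**c / factorial(c) * P0 / (1 - a)      # P[N >= c], Erlang's C formula
    Q = rho * C / (c * (1 - a))                   # mean queue length
    W = C / (c * mu * (1 - a))                    # mean waiting time
    return dict(P0=P0, C=C, Q=Q, W=W, T=W + 1 / mu, N=Q + rho)

print(mmc_measures(lam=5.0, mu=1.0, c=7))
```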

5.6 M/M/2 Formulas
Table 6. M/M/2 Queueing System

ρ
ρ = λS, a=
2

1−a
P0 = .
1+a

Pn = 2P0 an , n = 1, 2, 3, . . .

2an
P [N ≥ n] = , n = 1, 2, . . .
1+a

2a3
Q = λW = ,
1 − a2

P [N ≥ 2] = C[2, ρ] is the probability that an arriving customer


must queue for service. P [N ≥ 2] is given by

2a2
P [N ≥ 2] = C[2, ρ] = .
1+a

2a3 [(1 + a)2 − 2a3 ]


V ar(Q) = .
(1 − a2 )2

2a
N =λ·T =Q+ρ= .
1 − a2

2a(1 + a + 2a2 )
V ar(N ) = V ar(Q) + .
1+a

1 + a − 2a2
W [0] = .
1+a
2a2
FT (t) = 1 − exp[−2µt(1 − a)]
1+a

a2 S
W = .
1 − a2
2
a2 (1 + a − a2 )S
V ar(W ) = .
(1 − a2 )2

200a2
 
S
πW [r] = max{0, ln }.
2(1 − a) (100 − r)(1 + a)

Table 6. M/M/2 Queueing System (continued)

20a2
 
S
πW [90] = max{0, ln }.
2(1 − a) 1 + a
S 40a2
πW [95] = max{0, ln }.
2(1 − a) 1+a
 
−2t(1 − a)
W W 0 = P [W ≤ t|W > 0] = 1 − exp , t > 0.
S
S
E[W |W > 0] = E[W 0 ] = .
2(1 − a)
 2
S
V ar[W |W > 0] = .
2(1 − a)

2a2

1−a
1 + e −µt
+ e−2µt(1−a) where ρ 6= 1


1 − a2 − 2a2 1 − a − 2a2


FT (t) =
 1 − {1 + µt }e−µt

where ρ = 1


3
S
T =W +S = .
1 − a2
 2
a2 [1 − 4(1 − a)2 ]S 2
 (2a − 1)(1 − a)(1 − a2 ) + 2S if ρ 6= 1




E[T 2 ] =

 10 2
if ρ = 1


 S
3
2
V ar(T ) = E[T 2 ] − T .

πT [90] ≈ T + 1.3D(T ), πT [95] ≈ T + 2D(T )

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

5.7 M/M/c/c Formulas
Table 7. M/M/c/c Queueing System (M/M/c loss)

ρ = λS

ρn
Pn = n! n = 0, 1, . . . , c.
ρ2 ρc
1+ρ+ + ... +
2! c!

The probability that all servers are busy, Pc is


called Erlang's B formula, B[c, ρ], and thus

ρc
B[c, ρ] = c! .
ρ2 ρc
1+ρ+ + ... +
2! c!

λ = λ(1 − B[c, ρ]) Is the average arrival rate of customers who


actually enter the system. Thus, the true server utilization, a,
is given by

λS
a= .
c

N = λ S.

N
T = = S.
λ
 
−t
FT (t) = 1 − exp .
S

All of the formulas except the last one are true for
the M/G/c/c queueing system. For this system we have

FT (t) = FS (t).

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

5.8 M/M/c/K Formulas

Table 8. M/M/c/K Queueing System

ρ = λS.

K−c  n −1
" c
#
X ρn ρc X ρ
P0 = + .
n=0
n! c! n=1 c
 n
 ρ P0

if n = 1, 2, . . . , c,
Pn = n!n  n−c
ρ ρ

 P0 if n = c + 1, . . . , K.
c! c

The average arrival rate of customers who actually


enter the system is λ = λ(1 − PK ).

The actual mean server utilization, a, is given by:

λS
a= .
c c
ρ rP0 
1 + (K − c)rK−c+1 − (K − c + 1)rK−c ,

Q= 2
c!(1 − r)

where

ρ
r= .
c
c−1 c−1
!
X X
N = Q + E[Ns ] = Q + nPn + c 1 − Pn .
n=0 n=0

By Little's Law

Q N
W = , T = .
λ λ

Pn
Πn = , n = 0, 1, 2, . . . , K − 1,
1 − PK

Table 8. M/M/c/K Queueing System (continued)

where Πn is the probability that an arriving customer who


enters the system nds n customers already there.

W
E[W |W > 0] = c−1
.
X
1− Πn
n=0

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu
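
A numerical sketch for Table 8 (my own, not part of the original text); it builds the distribution directly and applies Little's law with the accepted arrival rate.

```python
from math import factorial

def mmck_measures(lam, mu, c, K):
    """M/M/c/K (finite capacity K >= c) measures from Table 8."""
    rho = lam / mu
    r = rho / c
    q = [rho**n / factorial(n) for n in range(c + 1)]
    q += [rho**c / factorial(c) * r**(n - c) for n in range(c + 1, K + 1)]
    total = sum(q)
    P = [x / total for x in q]
    lam_eff = lam * (1 - P[K])                        # rate of accepted customers
    Q = sum((n - c) * P[n] for n in range(c + 1, K + 1))
    Ns = sum(min(n, c) * P[n] for n in range(K + 1))  # mean number in service
    N = Q + Ns
    return dict(PK=P[K], Q=Q, N=N, W=Q / lam_eff, T=N / lam_eff)

print(mmck_measures(lam=6.0, mu=1.0, c=4, K=10))
```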

5.9 M/M/∞ Formulas
Table 9. M/M/∞ Queueing System

ρ = λS.

ρn −ρ
Pn = e , n = 0, 1, . . . .
n!

Since N has a Poisson distribution,

N = ρ and V ar(N ) = ρ.

By Little's Law

N
T = = S.
λ

Since there is no queueing for service,

W = Q = 0,

and
FT (t) = P [T ≤ t] = FS (t) = P [S ≤ t]

That is, T has the same distribution as S .


All the above formulas are true for the M/G/∞ system, also.

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

5.10 M/M/1/K/K Formulas

Table 10. M/M/1/K/K Queueing System

The mean operating time per machine (sometimes called the mean time to failure, MTTF) is
1
E[O] = .
α

The mean repair time per machine by one repairman is


1
S= .
µ

The probability, P0 , that no machines are out of service is given by


" K  k #−1
X K! S
P0 = = B[K, z],
(K − k)! E[O]
k=0

where B[·, ·] is Erlang's B formula and

E[O]
z= .
S

Then, Pn , the probability that n machines are out of service,


is given by
K!
Pn = z −n P0 , n = 0, 1, . . . , K,
(K − n)!

The formula for Pn can also be written in the form


z K−n
(K − n)!
Pn = K , n = 0, 1, . . . , K.
X zk

k=0
k!

a = 1 − P0 .

a
λ= .
S

K
T = − E[O].
λ
N = λ · T.

W = T − S.

Table 10. M/M/1/K/K Queueing System (continued)

z K−n−1
(K − n)Pn (K − n − 1)!
Πn = = K−1
, n = 0, 1, 2, . . . , K − 1,
K −N X zk

k=0
k!

where Πn is the probability that a machine that breaks down


nds n machines in the repair facility.

Q(K − 1; z + tµ)
FT (t) = P [T ≤ t] = 1 − , t ≥ 0,
Q(K − 1; z)

where

n
X xk
Q(n; x) = e−x .
k=0
k!

Q(K − 2; z + tµ)
FT (t) = P [W ≤ t] = 1 − , t ≥ 0,
Q(K − 1; z)

W
E[W |W > 0] = .
1 − Π0
Java applets for direct calculations can be found at
https://qsa.inf.unideb.hu
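
Table 10 is also easy to evaluate numerically. The following Python sketch is mine; it computes the repairman utilization and the machine downtime measures from the mean operating time E[O] and the mean repair time S.

```python
from math import factorial

def machine_repair_one_repairman(K, EO, S):
    """M/M/1/K/K machine repair model of Table 10.
    EO = mean operating time (1/alpha), S = mean repair time (1/mu)."""
    z = EO / S
    denom = sum(z**k / factorial(k) for k in range(K + 1))
    # P[n] = probability that n machines are out of service
    P = [z**(K - n) / factorial(K - n) / denom for n in range(K + 1)]
    a = 1 - P[0]                  # repairman utilization
    lam = a / S                   # throughput (repairs per unit time)
    T = K / lam - EO              # mean downtime of a machine
    return dict(P0=P[0], a=a, lam=lam, T=T, N=lam * T, W=T - S)

print(machine_repair_one_repairman(K=10, EO=20.0, S=1.0))
```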

5.11 M/G/1/K/K Formulas
Table 11. M/G/1/K/K Queueing System

The mean operating time per machine (sometimes called the mean time to failure, MTTF) is
1
E[O] = .
α
The mean repair time per machine by one repairman is
1
S= .
µ
The probability, P0 , that no machines are out of service is given by

        P0 = [ 1 + (K S/E[O]) ∑_{n=0}^{K−1} \binom{K−1}{n} Bn ]^{−1},

where

        B0 = 1,
        Bn = ∏_{i=1}^{n} (1 − S ∗ [iα]) / S ∗ [iα],   n = 1, 2, . . . , K − 1,

and S ∗ [θ] is the Laplace-Stieltjes transform of S.

a = 1 − P0 .
a
λ= .
S
K
T = − E[O].
λ
N = λ · T.

W = T − S.

Q = λ · W.

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

5.12 M/M/c/K/K Formulas
Table 12. M/M/c/K/K Queueing System

The mean operating time per machine (sometimes called the mean time to failure, MTTF) is
1
E[O] = .
α

The mean repair time per machine by one repairman is


1
S= .
µ

The probability, P0 , that no machines are out of service is given by

" c   K   #−1
X K X k! K −k
P0 = z −k + k−c
z ,
k=0
k k=c+1
c!c k

where

E[O]
z= .
S

Then,Pn , the probability that n machines are out of service is given by

K
z −n P0
 
n
n = 0, 1, . . . , c,
Pn = K
n!
c!cn−c n
z −n P0 n = c + 1, . . . , K.

K
X
Q= (n − c)Pn .
n=c+1

Q(E[O] + S)
W = .
K −Q
K
λ= .
E[O] + W + S
K
T = − E[O].
λ
N = λ T.

Table 12. M/M/c/K/K Queueing System (continued)

(K − n)Pn
Πn = ,
K −N
where Πn is the probability that a machine which breaks down nds n inoperable ma-
chines already in the repair facility. We denote Πn by Πn [K] to emphasize the fact that
there are K machines. It can be shown that

Πn [K] = Pn [K − 1], n = 0, 1, . . . , K − 1.

cc p(K − n − 1; cz)
Pn [K − 1] = P0 [K − 1],
c! p(K − 1; cz)
where, of course,

αk −α
p(k; α) = e .
k!
cc Q(K − c − 1; cz)P0 [K − 1]
FT (t) = P [W ≤ t] = 1 − , t ≥ 0,
c!p(K − 1; cz)
where

k
X αn
Q(k; α) = e−α .
n=0
n!
Q(K − c − 1; c(z + tµ))
FT (t) = P [T ≤ t] = 1 − C1 exp(−t/S) + C2 ,
Q(K − c − 1; cz)
t ≥ 0,

where C1 = 1 + C2 and

cc Q(K − c − 1; cz)
C2 = P0 [K − 1].
c!(c − 1)(K − c − 1)!p(K − 1; cz)
The probability that a machine that breaks down must wait for repair is given by

K−1
X c−1
X
D= Πn = 1 − Πn .
n=c n=0
W
E[W |W > 0] = .
D

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

5.13 D/D/c/K/K Formulas
Table 13. D/D/c/K/K Queueing System

The mean operating time per machine (sometimes called the the mean time to failure,
MTTF) is
1
E[O] = .
α

The mean repair time per machine by one repairman is


1
S= .
µ
K
a = min{1, },
c(1 + z)

where

E[O]
z= .
S
ca
λ = caµ = .
S
K
T = − E[O].
λ

N = λ T.

W = T − S.

Q = λ W.

The equations for this model are derived in "A straightforward model of computer
performance prediction" by John W. Boyse and David R. Warn in ACM Comput. Surveys,
7(2), (June 1972).

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

5.14 M/G/1 Formulas
Table 14. M/G/1 Queueing System

The z -transform of N , the steady-state number of customers in the system is given by:
∞ ∗
X (1 − ρ)(1 − z)S [λ(1 − z)]
GN (z) = Pn z n = ∗ ,
n=0 S [λ(1 − z)] − z

where S ∗ is the Laplace-Stieltjes transform of the service time S . The Laplace-Stieltjes
transforms of T and W are given by

        T ∗ [θ] = (1 − ρ)θS ∗ [θ] / (θ − λ + λS ∗ [θ]),

and

        W ∗ [θ] = (1 − ρ)θ / (θ − λ + λS ∗ [θ]).
Each of the three transforms above is called the Pollaczek-Khintchine transform equa-
tion by various authors. The probability, P0 , of no customers in the system has the simple
and intuitive equation P0 = 1 − ρ, where the server utilization ρ = λS . The probability
that the server is busy is P [N ≥ 1] = ρ.

λE[S 2 ] 1 + CS2
 
ρS
W = = (Pollaczek formula).
2(1 − ρ) 1−ρ 2
Q = λW .
2
λ3 E[S 3 ] λ2 E[S 2 ] λ2 E[S 2 ]

V ar(Q) = + + .
3(1 − ρ) 2(1 − ρ) 2(1 − ρ)
1 + CS2
 
S
E[W |W > 0] = .
1−ρ 2
2 λE[S 3 ]
E[W 2 ] = 2W + .
3(1 − ρ)
2
V ar(W ) = E[W 2 ] − W .

T = W + S.

N = λ · T = Q + ρ.
2
λ3 E[S 3 ]
 2
λ E[S 2 ] λ2 (3 − 2ρ)E[S 2 ]
V ar(N ) = + + + ρ(1 − ρ).
3(1 − ρ) 2(1 − ρ) 2(1 − ρ)
E[S 2 ]
E[T 2 ] = E[W 2 ] + .
1−ρ

Table 14. M/G/1 Queueing System (continued)
2
V ar(T ) = E[T 2 ] − T .

πT [90] ≈ T + 1.3D(T ), πT [95] ≈ T + 2D(T ).

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

Table 15. M/H2 /1 Queueing System

The z -transform of the steady-state number in the system, N , is given by



X z1 z2
GN (z) = Pn z n = C 1 + C2 ,
n=0
z1 − z z2 − z

where z1 and z2 are the roots of the next equation

a1 a2 z 2 − (a1 + a2 + a1 a2 )z + 1 + a1 + a2 − a = 0,

where

a = λS ,

λ
ai = , i = 1, 2,
µi
(z1 − 1)(1 − az2 )
C1 = ,
z1 − z2
and

(z2 − 1)(1 − az1 )


C2 = .
z2 − z1
From GN (z) we get

Pn = C1 z1−n + C2 z2−n , n = 0, 1, . . ..

Specically, P0 = 1 − a.

z1−n+1 z2−n+1
P [N ≥ n] = C1 − C2 .
z1 − 1 z2 − 1
Additionally,

P [N ≥ 1] = a.

FT (t) = P [W ≤ t] = 1 − C5 e−ρt − C6 e−bt , t ≥ 0,

where ρ = −ζ1 , b = −ζ2 , ζ1 , ζ2 are the solutions of the

θ2 + (µ1 + µ2 − λ)θ + µ1 µ2 (1 − a) = 0,

equation,

Table 15. M/H2 /1 Queueing System (continued)

λ(1 − a)ζ1 + a(1 − a)µ1 µ2


C5 =
ρ(ζ1 − ζ2 )

and

λ(1 − a)ζ2 + a(1 − a)µ1 µ2


C6 = .
ρ(ζ2 − ζ1 )

λE[S 2 ] 1 + CS2
 
aS
W = = . (Pollaczek-formula)
2(1 − a) 1−a 2

1 + CS2
 
S
E[W |W > 0] = .
1−a 2

2 λE[S 3 ]
E[W 2 ] = 2W + .
3(1 − a)

In this formula we substitute

6p1 6p2
E[S 3 ] = + 3,
µ31 µ2

then

2
V ar(W ) = E[W 2 ] − W .

FT (t) = P [T ≤ t] = 1 − πa e−µa t − πb e−µb t , t ≥ 0,

where

z1
π a = C1 ,
z1 − 1

z2
π b = C2 ,
z2 − 1

Table 15. M/H2 /1 Queueing System (continued)

µa = λ(z1 − 1),

and

µb = λ(z2 − 1).

T = W + S.

E[S 2 ]
E[T 2 ] = E[W 2 ] + ,
1−a

where of course

2p1 2p2
E[S 2 ] = + 2.
µ21 µ2
2
V ar(T ) = E[T 2 ] − T .

E[T 2 ]
CT2 = 2 − 1.
T
a2 1 + CS2
 
Q=λ·W = .
1−a 2
2
λ3 E[S 3 ]
 2
λ E[S 2 ] λ2 E[S 2 ]
V ar(Q) = + + .
3(1 − a) 2(1 − a) 2(1 − a)

N = λT = Q + a.

2
λ3 E[S 3 ] λ2 E[S 2 ] λ2 (3 − 2a)E[S 2 ]

V ar(N ) = + + + a(1 − a).
3(1 − a) 2(1 − a) 2(1 − a)

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

Table 16. M/Gamma/1 Queueing System

Since S has Gamma-distribution

β(β + 1) . . . (β + n − 1)
E[S n ] = , n = 1, 2, . . ..
αn
Since
1
CS2 = ,
β
so
2
E[S 2 ] = S (1 + CS2 ),
3
E[S 3 ] = S (1 + CS2 )(1 + 2CS2 ),

and
n−1
n Y
E[sn ] = S (1 + kCS2 ), n = 1, 2, . . ..
k=1

This time

λE[S 2 ] 1 + CS2
 
aS
W = = ,
2(1 − a) 1−a 2
Q = λ · W,

a2 (1 + CS2 ) a2 (1 + CS2 ) 2a(1 + 2CS2 )


 
V ar(Q) = 1+ + ,
2(1 − a) 2(1 − a) 3
1 + CS2
 
S
E[W |W > 0] = ,
1−a 2
2
aS (1 + CS2 )(1 + 2CS2 )
2
2
E[W ] = 2W + ,
3(1 − a)
2
V ar(W ) = E[W 2 ] − W ,

T = W + S, N = λ · T = Q + a,
2
a3 (1 + CS2 )(1 + 2CS2 ) a2 (1 + CS2 ) a2 (3 − 2a)(1 + CS2 )

V ar(N ) = + + + a(1 − a).
3(1 − a) 2(1 − a) 2(1 − a)
2
S (1 + CS2 )
2
E[T ] = E[W ] + 2
.
1−a
2
V ar(T ) = E[T 2 ] − T .

πT [90] ≈ T + 1.3D(T ), πT [95] ≈ T + 2D(T ).


Java applets for direct calculations can be found at
https://qsa.inf.unideb.hu

Table 17. M/Ek /1 Queueing System
Since S has an Erlang-k distribution, hence
    
1 2 n−1 n
n
E[S ] = 1 + 1+ ... 1 + S , n = 1, 2, . . ..
k k k
so  
1
2
2
E[S ] = S 1 + ,
k
and   
3 1 2
3
E[s ] = S 1 + 1+ .
k k
This time
1
λE[s2 ]
 
aS 1+
W = = k
. (Pollaczek's formula)
2(1 − a) 1−a 2
Q = λ · W.

a2 (1 + k) a2 (1 + k) 2a(k + 2)
 
V ar(Q) = 1+ + .
2k(1 − a) 2k(1 − a) 3k
1 + k1
 
S
E[W |W > 0] = .
1−a 2
2
aS (k + 1)(k + 2)
2
2
E[W ] = 2W + .
3k 2 (1 − a)
2
V ar(W ) = E[W 2 ] − W .

T = W + S, N = λ · T = Q + a.
2
a3 (k + 1)(k + 2) a2 (1 + k1 ) a2 (3 − 2a)(1 + k1 )

V ar(N ) = + + + a(1 − a).
3k 2 (1 − a) 2(1 − a) 2(1 − a)
2
S (1 + k1 )
2
E[T ] = E[W ] + 2
.
1−a
2
V ar(T ) = E[T 2 ] − T .

πT [90] ≈ T + 1.3D(T ), πT [95] ≈ T + 2D(T ).

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

220
Table 18. M/D/1 Queueing System

Since S has a constant distribution

n
E[S n ] = S , n = 1, 2, . . .,

so

(1 − a)(1 − z)
GN (z) = .
1 − zea(1−z)

Supposing that

|zea(1−z) | < 1,

we can expand GN (z) in the geometric series


X  a(1−z) j
gN (z) = (1 − a)(1 − z) ze .
j=0

This time we can show that,

P1 = (1 − a)(ea − 1),

and

n
X (−1)n−j (ja)n−j−1 (ja + n − j)eja
Pn = (1 − a) n = 2, 3, . . ..
j=1
(n − j)!

Additionally

k−1  
X t − (k − 1)S
FT (t) = Pn + P k ,
n=0
S

where (k − 1)S ≤ t ≤ kS, k = 1, 2, . . ..

Table 18. M/D/1 Queueing System (continued)
So,

W [0] = P0 .

aS
W = .
2(1 − a)
S
FW [W |W > 0] = .
2(1 − a)
2
aS2
2
E[W ] = 2W + .
3(1 − a)
2
V ar(W ) = E[W 2 ] − W .

a2
Q = λW = .
2(1 − a)
2
a3 a2 a2

V ar(Q) = + + .
3(1 − a) 2(1 − a) 2(1 − a)


 0 if t < S,

FT (t) = k−1  
Pn + Pk t−kS
P
if t ≥ S.


 S
n=0

where

kS ≤ t < (k + 1)S, k = 1, 2, . . ..

T = W + S.
2
S
2
E[T ] = E[W ] + 2
.
1−a
2
V ar(T ) = E[T 2 ] − T .

N = λ · T = Q + a.
2
a3 a2 a2 (3 − 2a)

V ar(N ) = + + + a(1 − a).
3(1 − a) 2(1 − a) 2(1 − a)

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

5.15 GI/M/1 Formulas
Table 19. GI/M/1 Queueing System
The steady-state probability Π0 that an arriving customer will find the system empty is
the unique solution of the equation 1 − Π0 = A∗ [µΠ0 ] such that 0 < Π0 < 1, where A∗ [θ] is
the Laplace-Stieltjes transform of the interarrival time τ . The steady-state number of customers in the system,
N , has the distribution {Pn }, where P0 = P [N = 0] = 1 − a, Pn = aΠ0 (1 − Π0 )^{n−1} , n =
1, 2, . . .; furthermore

        N = a/Π0 ,   and   V ar(N ) = a(2 − Π0 − a)/Π0^2 .

(1 − Π0 )a
Q= .
Π0
a(1 − Π0 )(2 − Π0 − a(1 − Π0 ))
V ar(Q) = .
Π20
1
E[Q|Q > 0] = .
Π0
S
T = .
Π0
FT (t) = P [T ≤ t] = 1 − exp(−t/T ).
 
100
ΠT [r] = T ln .
100 − r
ΠT [90] = T ln 10, ΠT [95] = T ln 20.

S
W = (1 − Π0 ) .
Π0
 2
S
V ar(W ) = (1 − Π20 ) .
Π0
FT (t) = P [W ≤ t] = 1 − (1 − Π0 ) exp(−t/T ).
  
100(1 − Π0 )
ΠW [r] = max 0, T ln .
100 − r
W 0 , the queueing time for those customers who must wait, has the same distribution as T .
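
The only non-explicit ingredient of Table 19 is the root Π0. A simple fixed-point iteration (my own sketch; the Erlang-2 interarrival example and the tolerance are my choices) recovers the values of Table 20, e.g. Π0 ≈ 0.26 for E2 interarrival times at a = 0.8.

```python
def gim1_pi0(A_lst, mu, tol=1e-12, max_iter=100_000):
    """Solve 1 - Pi0 = A*(mu*Pi0) for GI/M/1 by iterating
    sigma_{k+1} = A*(mu*(1 - sigma_k)), where sigma = 1 - Pi0."""
    sigma = 0.5
    for _ in range(max_iter):
        new = A_lst(mu * (1 - sigma))
        if abs(new - sigma) < tol:
            sigma = new
            break
        sigma = new
    return 1 - sigma

lam, mu = 0.8, 1.0
# Erlang-2 interarrival times with rate lam: A*(s) = (2*lam / (2*lam + s))**2
A_lst = lambda s: (2 * lam / (2 * lam + s))**2
Pi0 = gim1_pi0(A_lst, mu)
a = lam / mu
print(Pi0)                                        # compare with the E2 column of Table 20
print(dict(N=a / Pi0, W=(1 - Pi0) / (mu * Pi0), T=1 / (mu * Pi0)))
```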

1
Table 20. Π0 versus a for GI/M/1 Queueing System

a E2 E3 U D H2 H2
0.100 0.970820 0.987344 0.947214 0.999955 0.815535 0.810575
0.200 0.906226 0.940970 0.887316 0.993023 0.662348 0.624404
0.300 0.821954 0.868115 0.817247 0.959118 0.536805 0.444949
0.400 0.724695 0.776051 0.734687 0.892645 0.432456 0.281265
0.500 0.618034 0.669467 0.639232 0.796812 0.343070 0.154303
0.600 0.504159 0.551451 0.531597 0.675757 0.263941 0.081265
0.700 0.384523 0.626137 0.412839 0.533004 0.191856 0.044949
0.800 0.260147 0.289066 0.284028 0.371370 0.124695 0.024404
0.900 0.131782 0.147390 0.146133 0.193100 0.061057 0.010495
0.950 0.066288 0.074362 0.074048 0.098305 0.030252 0.004999
0.980 0.026607 0.029899 0.029849 0.039732 0.012039 0.001941
0.999 0.001333 0.001500 0.001500 0.001999 0.000600 0.000095

1 For the first H2 distribution p1 = 0.4, µ1 = 0.5λ, µ2 = 3λ. For the second H2 distribution p1 = 0.024405, µ1 = 2p1 λ, and µ2 = 2p2 λ.

5.16 GI/M/c Formulas

Table 21. GI/M/c Queueing System

Let Πn , n = 0, 1, 2, . . . be the steady state number of customers that an arriving cus-


tomer nds in the system. Then


c−1
(−1)i−n ni Ui , n = 0, 1, . . . , c − 2,
 P 


i=n
Πn =


Dω n−c , n = c − 1, c, . . . ,

where ω is the unique solution of the equation ω = A∗ [cµ(1 − ω)] such that 0 < ω < 1,
where A∗ [θ] is the Laplace-Stieltjes transform of the interarrival time τ ,

gj = A∗ [jµ], j = 1, 2, . . . , c,


 1, j = 0,
j
Cj = 
Q gi 
 1−gi
, j = 1, 2, . . . , c,
i=1

" c c
  #−1
1 X j c(1 − gj ) − j)
D= +
1 − ω j=1 Cj (1 − gj ) c(1 − ω) − j

and

c
c
  
X j c(1 − gj ) − j)
Un = DCn , n = 0, 1, . . . , c − 1.
j=n+1
Cj (1 − gj ) c(1 − ω) − j

Table 21. GI/M/c Queueing System (continued)

FT (t) = P [W ≤ t] = 1 − P [W > 0]e−cµ(1−ω)t , t ≥ 0,

where

D DS S
P [W > 0] = . W = 2
. E[W |W > 0] = .
1−ω c(1 − ω) c(1 − ω)

If c(1 − ω) 6= 1, then

FT (t) = P [ω ≤ t] = 1 + (G − 1)e−µt − Ge−cµ(1−ω)t , t ≥ 0,

where

D
G= .
(1 − ω)[1 − c(1 − ω)]

When c(1 − ω) = 1, then

 
Dµt
FT (t) = P [ω ≤ t] = 1 − 1 + e−µt , t ≥ 0.
1−ω

We also have

T = W + S.

c−1  
λS X 1 1
P0 = 1 − − λS Πj−1 − .
c j=1
j c

(
λSΠn−1
n,
n = 1, 2, . . . , c − 1,
Pn = λSΠn−1
c,
n = c, c + 1, . . . .

5.17 M/G/1 Priority queueing system
Table 22. M/G/1 Queueing System (classes, no priority)
There are n customer classes. Customers from class i arrive in a Poisson pattern with
mean arrival rate λi , i = 1, 2, . . . , n. Each class has its own general service time with
E[Si ] = 1/µi , E[Si2 ], E[Si3 ]. All customers are served on a FCFS basis with no considera-
tion for class. The total arrival stream to the system has a Poisson arrival pattern with

λ = λ1 + λ2 + . . . + λn .

The first three moments of service time are given by

λ1 λ2 λn
S= E[S1 ] + E[S2 ] + . . . + E[Sn ],
λ λ λ
λ1 λ2 λn
E[S 2 ] = E[S12 ] + E[S22 ] + . . . + E[Sn2 ],
λ λ λ
and

λ1 λ2 λn
E[S 3 ] = E[S13 ] + E[S23 ] + . . . + E[Sn3 ],
λ λ λ
By Pollaczek's formula,

λE[S 2 ]
W = .
2(1 − a)
The mean time in the system for each class is given by

T i = W + E[Si ], i = 1, 2, . . . , n.

The overall mean customer time in the system,

λ1 λ2 λn
T = T 1 + T 2 + . . . + T n.
λ λ λ
The variance of the waiting time

λE[S 3 ] λ2 (E[S 2 ])2


V ar(W ) = + .
3(1 − a) 4(1 − a)2
The variance of T is given by

V ar(Ti ) = V ar(W ) + V ar(Si ), i = 1, 2, . . . , n.

The second moment of T by class is


2
E[Ti2 ] = V ar(Ti ) + T i , i = 1, 2, . . . , n.

Table 22. M/G/1 Queueing System (classes, no priority)
(continued)
Thus, the overall second moment of T is given by

λ1 λ2 λn
E[T 2 ] = E[T12 ] + E[T22 ] + . . . + E[Tn2 ],
λ λ λ
and
2
V ar(T ) = E[T 2 ] − T .

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

Table 23. M/G/1 Nonpreemptive (HOL) Queueing System

There are n priority classes with each class having a Poisson arrival pattern with mean
arrival rate λi . Each class has its own general service time distribution. Then
the overall arrival pattern is Poisson with mean:

λ = λ1 + λ2 + . . . + λn .

The server utilization

λ1 λ2 λn
S= E[S1 ] + E[S2 ] + . . . + E[Sn ],
λ λ λ

λ1 λ2 λn
E[S 2 ] = E[S12 ] + E[S22 ] + . . . + E[Sn2 ],
λ λ λ

and

λ1 λ2 λn
E[S 3 ] = E[S13 ] + E[S23 ] + . . . + E[Sn3 ],
λ λ λ

Let

ρj = λ1 E[S1 ] + λ2 E[S2 ] + . . . + λj E[Sj ], j = 1, 2, . . . , n,

and notice that

ρn = ρ = λS .

The mean times in the queues:

λE[S 2 ]
W j = E[Wj ] = ,
2(1 − ρj−1 )(1 − ρj )

j = 1, 2, . . . , n, ρ0 = 0.

Table 23. M/G/1 Nonpreemptive (HOL) Queueing System
(continued)

The mean queue lengths are

Q j = λj · W j , j = 1, 2, . . . , n.

The unied time in the queue

λ1 λ2 λn
W = E[W1 ] + E[W2 ] + . . . + E[Wn ].
λ λ λ
The mean times of staying in the system

T j = E[Tj ] = E[Wj ] + E[Sj ], j = 1, 2, . . . , n,

and the average of the customers staying at the system is

N j = λj · T j , j = 1, 2, . . . , n.

The total time in the system

T = W + S.

The total queue length

Q = λ · W,

and the average of the customers staying at the system

N = λ · T.

The variance of the total time stayed in the system by class

λE[S 3 ]
V ar(Tj ) = V ar(Sj ) +
3(1 − ρj−1 )2 (1 − ρj )
 j 
2 2 2
P
λE[S ] 2 λi E[Si ] − λE[S ]
i=1
+
4(1 − ρj−1 )2 (1 − ρj )2
j−1
λE[S 2 ] λi E[Si2 ]
P
i=1
+ , j = 1, 2, . . . , n.
2(1 − ρj−1 )3 (1 − ρj )

Table 23. M/G/1 Nonpreemptive (HOL) Queueing System
(continued)
The variance of the total time stayed in the system

λ1 2 λ2 2
V ar(T ) = [V ar(T1 ) + T 1 ] + [V ar(T2 ) + T 2 ]
λ λ
λn 2 2
+... + [V ar(Tn ) + T n ] − T .
λ
The variance of the waiting time by class

V ar(Wj ) = V ar(Tj ) − V ar(Sj ), j = 1, 2, . . . , n.


2
We know that E[Wj2 ] = V ar(Wj ) + W j , j = 1, 2, . . . , n,

so

λ1 λ2 λn
E[W 2 ] = E[W12 ] + E[W22 ] + . . . + E[Wn2 ].
λ λ λ
Finally
2
V ar(W ) = E[W 2 ] − W .

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu
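
The per-class waiting times of Table 23 take only a few lines of code. The sketch below is mine (the function name and the two-class example are my own choices); class 1 is the highest priority.

```python
def mg1_hol_waiting_times(lams, ES, ES2):
    """Mean waiting time per class in the M/G/1 nonpreemptive (HOL) queue:
    W_j = lam * E[S^2] / (2 * (1 - rho_{j-1}) * (1 - rho_j))."""
    lam = sum(lams)
    ES2_pooled = sum(l * e2 for l, e2 in zip(lams, ES2)) / lam
    W, rho = [], 0.0
    for lj, ESj in zip(lams, ES):
        rho_prev = rho
        rho += lj * ESj
        W.append(lam * ES2_pooled / (2 * (1 - rho_prev) * (1 - rho)))
    return W

# two classes, both with exponential service of mean 1 (E[S^2] = 2)
W = mg1_hol_waiting_times([0.3, 0.4], [1.0, 1.0], [2.0, 2.0])
print(W)
print(sum(l * w for l, w in zip([0.3, 0.4], W)) / 0.7)   # overall mean waiting time
```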

Table 24. M/G/1 absolute priority Queueing System

There are n customer classes. Class 1 customers receive the most favorable treatment;
class n customers receive the least favorable treatment. Customers from class i arrive in
a Poisson pattern with mean arrival rate λi , i = 1, 2, . . . , n. Each class has its own gen-
eral service time with E[Si ] = 1/µi , and nite second and third moments E[Si2 ], E[Si3 ].
The priority system is preemptive resume, which means that if a customer of class j is
receiving service when a customer of class i < j arrives, the arriving customer preempts
the server and the customer who was preempted returns to the head of the line for class
j customers. The preempted customer resumes service at the point of interruption upon
reentering the service facility. The total arrival stream to the system has a Poisson arrival
pattern with

λ = λ1 + λ2 + . . . + λn .

The rst three moment of service time are given by:

λ1 λ2 λn
S= E[S1 ] + E[S2 ] + . . . + E[Sn ],
λ λ λ
λ1 λ2 λn
E[S 2 ] = E[S12 ] + E[S22 ] + . . . + E[Sn2 ],
λ λ λ
λ1 λ2 λn
E[S 3 ] = E[S13 ] + E[S23 ] + . . . + E[Sn3 ].
λ λ λ

Let

ρj = λ1 E[S1 ] + λ2 E[S2 ] + . . . + λj E[Sj ], j = 1, 2, . . . , n,

and notice that

ρn = ρ = λS .

The mean time in the system for each class is


 
j
λi E[Si2 ] 
P
1 
i=1
T j = E[Tj ] = E[Sj ] + ,
 
1 − ρj−1  2(1 − ρj ) 

ρ0 = 0, j = 1, 2, . . . , n.

Table 24. M/G/1 absolute priority Queueing System

(continued)

Waiting times

W j = E[Tj ] − E[Sj ], j = 1, 2, . . . , n.

The mean length of the queue number j :

Q j = λj W j , j = 1, 2, . . . , n.

The total waiting time, W , is given by:

λ1 λ2 λn
W = E[W1 ] + E[W2 ] + . . . + E[Wn ].
λ λ λ

The mean number of customers staying in the system for each class is

N j = λj W j , j = 1, 2, . . . , n.

The mean total time is

λ1 λ2 λn
T = T 1 + T 2 + . . . + T n = W + S.
λ λ λ

The mean number of customers waiting in the queue is

Q = λ · W,

and the average number of customers staying in the system

N = λ · T.

Table 24. M/G/1 absolute priority Queueing System
(continued)

The variance of the total time of staying in the system for each class is

j−1
λi E[Si2 ]
P
E[Sj ]
V ar(Sj ) i=1
V ar(Tj ) = +
(1 − ρj−1 )2 (1 − ρj−1 )3
j
 j 2
λi E[Si3 ] 2
P P
λi E[Si ]
i=1 i=1
+ +
3(1 − ρj−1 )2 (1
− ρj ) 4(1 − ρj−1 )2 (1 − ρj )2
 j  j−1 
2 2
P P
λi E[Si ] λi E[Si ]
i=1 i=1
+ , ρ0 = 0, j = 1, 2, . . . , n.
2(1 − ρj−1 )3 (1 − ρj )
The overall variance

λ1 2 λ2 2
V ar(T ) = [V ar(T1 ) + T 1 ] + [V ar(T2 ) + T 2 ]
λ λ
λn 2 2
+... + [V ar(Tn ) + T n ] − T .
λ
The variance of waiting times for each class is

V ar(Wj ) = V ar(Tj ) − V ar(Sj ), j = 1, 2, . . . , n.

Because,

2
E[Wj2 ] = V ar(Wj ) + W j , j = 1, 2, . . . , n,

so

λ1 λ2 λn
E[W 2 ] = E[W12 ] + E[W22 ] + . . . + E[Wn2 ].
λ λ λ
Finally

2
V ar(W ) = E[W 2 ] − W .

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

5.18 M/G/c Processor Sharing system
Table 25. M/G/1 processor sharing Queueing System
The Poisson arrival stream has an average arrival rate of λ and the average service
rate is µ. The service time distribution is general with the restriction that its Laplace
transform is rational, with the denominator having degree at least one higher than the
numerator. Equivalently, the service time S is Coxian. The priority system is processor-
sharing, which means that if a customer arrives when there are already n − 1 customers
in the system, the arriving customer (and all the others) receive service at the average
rate µ/n. Then Pn = ρn (1 − ρ), n = 0, 1, . . . , where ρ = λ/µ. We also have

\[
\overline{N} = \frac{\rho}{1 - \rho}, \qquad E[T \mid S = t] = \frac{t}{1 - \rho}, \qquad \overline{T} = \frac{\overline{S}}{1 - \rho}.
\]
Finally
\[
E[W \mid S = t] = \frac{\rho t}{1 - \rho}, \qquad \overline{W} = \frac{\rho \overline{S}}{1 - \rho}.
\]
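
A quick numerical check of these processor-sharing relations (a small Python sketch; the particular values of λ and µ are arbitrary):

# M/G/1 processor sharing: only the mean service time matters.
lam, mu = 3.0, 5.0
rho = lam / mu                       # server utilization
S = 1.0 / mu                         # mean service time

N = rho / (1.0 - rho)                # mean number in the system
T = S / (1.0 - rho)                  # mean time in the system
W = rho * S / (1.0 - rho)            # mean delay beyond the service requirement
Pn = [(1.0 - rho) * rho ** n for n in range(5)]   # P_0, ..., P_4

print(N, T, W, Pn)
assert abs(N - lam * T) < 1e-12      # Little's law: N = lambda * T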

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

Table 26. M/G/c processor sharing Queueing System

The Poisson arrival stream has an average arrival rate of λ. The service time distribution
is general with the restriction that its Laplace transform is rational, with the denominator
having degree at least one higher than the numerator. Equivalently, the service time, s,
is Coxian. The priority system is processor-sharing, which works as follows. When the
number of customers in the service center, N, is at most c, each customer is served
simultaneously by one server; that is, each receives service at the rate µ. When N > c,
each customer simultaneously receives service at the rate cµ/N. We find that, just as for
the M/G/1 processor-sharing system, the steady-state results depend on the service time
distribution only through its mean, so they coincide with those of the corresponding M/M/c
system.

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

5.19 M/M/c Priority system
Table 27. M/M/c relative priority (HOL) Queueing System
There are n priority classes with each class having a Poisson arrival pattern with mean
arrival rate λi . Each customer has the same exponential service time requirement. Then
the overall arrival pattern is Poisson with mean λ = λ1 + λ2 + . . . + λn. The server utilization is
\[
a = \frac{\lambda \overline{S}}{c} = \frac{\lambda}{c\mu},
\]
the mean waiting time of the highest-priority class is
\[
\overline{W}_1 = \frac{C[c, \rho]\,\overline{S}}{c\,(1 - \lambda_1 \overline{S}/c)},
\]
and these equations also hold:
\[
\overline{W}_j = \frac{C[c, \rho]\,\overline{S}}{c\left(1 - \overline{S}\sum_{i=1}^{j-1} \lambda_i/c\right)\left(1 - \overline{S}\sum_{i=1}^{j} \lambda_i/c\right)}, \qquad j = 2, \ldots, n.
\]

\[
\overline{T}_j = \overline{W}_j + \overline{S}, \qquad j = 1, 2, \ldots, n,
\]
\[
\overline{Q}_j = \lambda_j \overline{W}_j, \qquad \overline{N}_j = \lambda_j \overline{T}_j, \qquad j = 1, 2, \ldots, n,
\]
\[
\overline{W} = \frac{\lambda_1}{\lambda} \overline{W}_1 + \frac{\lambda_2}{\lambda} \overline{W}_2 + \ldots + \frac{\lambda_n}{\lambda} \overline{W}_n,
\]
\[
\overline{Q} = \lambda \overline{W}, \qquad \overline{T} = \overline{W} + \overline{S}, \qquad \overline{N} = \lambda \overline{T}.
\]
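
As an illustration, the per-class mean waiting times above can be computed as follows. This is only a sketch: the helper erlang_c and the example data are our own, and C[c, ρ] is taken to be the M/M/c probability of waiting (Erlang C formula) evaluated at the total offered load λ/µ.

from math import factorial

def erlang_c(c, r):
    # Erlang C: probability of waiting in an M/M/c queue, offered load r = lambda/mu < c.
    s = sum(r ** k / factorial(k) for k in range(c))
    last = r ** c / factorial(c) * c / (c - r)
    return last / (s + last)

def mmc_hol_waits(lams, mu, c):
    # Mean waiting times per class in the M/M/c non-preemptive (HOL) priority queue.
    lam = sum(lams)
    S = 1.0 / mu
    C = erlang_c(c, lam / mu)
    W, cum = [], 0.0                     # cum = sum of lambda_i over higher-priority classes
    for lj in lams:
        d_hi = 1.0 - S * cum / c         # 1 - S * sum_{i<j} lambda_i / c
        cum += lj
        d_lo = 1.0 - S * cum / c         # 1 - S * sum_{i<=j} lambda_i / c
        W.append(C * S / (c * d_hi * d_lo))
    return W

# Two priority classes, three servers:
S = 1.0 / 2.0
W = mmc_hol_waits([1.0, 2.0], mu=2.0, c=3)
print(W, [w + S for w in W])             # W_j and T_j = W_j + S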

Java applets for direct calculations can be found at


https://qsa.inf.unideb.hu

Appendix and Bibliography
In this Appendix some properties of the generating function, sometimes called the z-
transform, and of the Laplace-transform are listed. More properties can be found, for
example, in Kleinrock [60].

Some properties of the generating function


Sequence ⇐⇒ Generating function

1. $f_n,\ n = 0, 1, 2, \ldots$ ⇐⇒ $G(z) = \sum_{n=0}^{\infty} f_n z^n$
2. $a f_n + b g_n$ ⇐⇒ $a G(z) + b H(z)$
3. $a^n f_n$ ⇐⇒ $G(az)$
4. $f_{n/k},\ n = 0, k, 2k, \ldots$ ⇐⇒ $G(z^k)$
5. $f_{n+k},\ k > 0$ ⇐⇒ $\dfrac{G(z)}{z^k} - \sum_{i=1}^{k} z^{i-k-1} f_{i-1}$
6. $f_{n-k},\ k > 0$ ⇐⇒ $z^k G(z)$
7. $n(n-1)\cdots(n-m+1) f_n$ ⇐⇒ $z^m \dfrac{d^m}{dz^m} G(z),\ m \ge 1$
8. $f_n * g_n := \sum_{k=0}^{\infty} f_{n-k} g_k$ ⇐⇒ $G(z) H(z)$
9. $f_n - f_{n-1}$ ⇐⇒ $(1 - z) G(z)$
10. $\sum_{k=0}^{n} f_k,\ n = 0, 1, 2, \ldots$ ⇐⇒ $\dfrac{G(z)}{1 - z}$
11. $\dfrac{\partial}{\partial a} f_n$ ($a$ is a parameter) ⇐⇒ $\dfrac{\partial}{\partial a} G(z)$
12. Series sum property: $G(1) = \sum_{n=0}^{\infty} f_n$
13. Alternating sum property: $G(-1) = \sum_{n=0}^{\infty} (-1)^n f_n$
14. Initial value theorem: $G(0) = f_0$
15. Intermediate value theorem: $\dfrac{1}{n!} \left. \dfrac{d^n G(z)}{dz^n} \right|_{z=0} = f_n$
16. Final value theorem: $\lim_{z \to 1} (1 - z) G(z) = \lim_{n \to \infty} f_n$
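
As a quick illustration, the following sketch (using the sympy library; the geometric example sequence is our own choice) checks properties 7 and 12 for the M/M/1 queue-length distribution f_n = (1 − ρ)ρ^n:

import sympy as sp

z, rho = sp.symbols('z rho', positive=True)

# Generating function of f_n = (1 - rho) * rho**n:
G = (1 - rho) / (1 - rho * z)

# Property 12 (series sum): G(1) = sum_n f_n = 1
print(sp.simplify(G.subs(z, 1)))                     # -> 1

# Property 7 with m = 1: z * G'(z) at z = 1 gives sum_n n f_n = rho / (1 - rho)
mean = sp.simplify((z * sp.diff(G, z)).subs(z, 1))
print(sp.simplify(mean - rho / (1 - rho)))           # -> 0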

Some properties of the Laplace-transform
Function ⇐⇒ Transform

1. $f(t),\ t \ge 0$ ⇐⇒ $f^*(s) = \int_0^{\infty} f(t) e^{-st}\, dt$
2. $a f(t) + b g(t)$ ⇐⇒ $a f^*(s) + b g^*(s)$
3. $f\!\left(\frac{t}{a}\right),\ (a > 0)$ ⇐⇒ $a f^*(as)$
4. $f(t - a)$ ⇐⇒ $e^{-as} f^*(s)$
5. $e^{-at} f(t)$ ⇐⇒ $f^*(s + a)$
6. $t^n f(t)$ ⇐⇒ $(-1)^n \dfrac{d^n f^*(s)}{ds^n}$
7. $\dfrac{f(t)}{t}$ ⇐⇒ $\int_{s_1 = s}^{\infty} f^*(s_1)\, ds_1$
8. $\dfrac{f(t)}{t^n}$ ⇐⇒ $\int_{s_1 = s}^{\infty} ds_1 \int_{s_2 = s_1}^{\infty} ds_2 \ldots \int_{s_n = s_{n-1}}^{\infty} ds_n\, f^*(s_n)$
9. $f(t) * g(t) = \int_0^t f(t - x) g(x)\, dx$ ⇐⇒ $f^*(s) g^*(s)$
10. $\dfrac{df(t)}{dt}$ ⇐⇒ $s f^*(s) - f(0)$
11. $\dfrac{d^n f(t)}{dt^n} := f^{(n)}(t)$ ⇐⇒ $s^n f^*(s) - s^{n-1} f(0) - s^{n-2} f'(0) - \ldots - f^{(n-1)}(0)$
12. $\dfrac{\partial}{\partial a} f(t)$ ($a$ is a parameter) ⇐⇒ $\dfrac{\partial}{\partial a} f^*(s)$
13. Integral property: $f^*(0) = \int_0^{\infty} f(t)\, dt$
14. Initial value theorem: $\lim_{s \to \infty} s f^*(s) = \lim_{t \to 0} f(t)$
15. Final value theorem: $\lim_{s \to 0} s f^*(s) = \lim_{t \to \infty} f(t)$, provided $s f^*(s)$ is analytic for $\mathrm{Re}(s) \ge 0$
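
Similarly, properties 1, 13 and 14 can be checked symbolically for an exponential density (again only a small sympy sketch with an example density):

import sympy as sp

t, s, mu = sp.symbols('t s mu', positive=True)

f = mu * sp.exp(-mu * t)                                   # exponential density
f_star = sp.integrate(f * sp.exp(-s * t), (t, 0, sp.oo))   # property 1: f*(s)
print(sp.simplify(f_star))                                 # -> mu/(mu + s)

# Property 13 (integral property): f*(0) = integral of f over [0, oo) = 1
print(sp.simplify(f_star.subs(s, 0)))                      # -> 1

# Property 14 (initial value theorem): lim_{s -> oo} s f*(s) = f(0) = mu
print(sp.limit(s * f_star, s, sp.oo))                      # -> mu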

Bibliography
[1] Adan, I., and Resing, J. Queueing Theory .
http://www.win.tue.nl/~iadan/queueing.pdf, 2015.
[2] Alfa, A. S. Applied discrete-time queues. Springer, 2016.
[3] Allen, A. O. Probability, statistics, and queueing theory with computer science
applications, 2nd ed. Academic Press, Inc., Boston, MA, 1990.
[4] Anisimov, V., Zakusilo, O., and Donchenko, V. Elements of queueing theory
and asymptotic analysis of systems. Visha Skola, Kiev, 1987.
[5] Artalejo, J., and Gómez-Corral, A. Retrial queueing systems. Springer,
Berlin, 2008.

[6] Asztalos, D. Optimal control of finite source priority queues with computer
system applications. Computers & Mathematics with Applications 6 (1980), 425–431.

[7] Baron, M. Probability and statistics for computer scientists. CRC Press, 2019.
[8] Begain, K., Bolch, G., and Herold, H. Practical performance modeling, Ap-
plication of the MOSEL language. Wiley & Sons, New York, 2001.
[9] Bhat, U. N. An introduction to queueing theory: modeling and analysis in appli-
cations. Birkhäuser, 2015.
[10] Bocharov, P. P., D'Apice, C., and Pechinkin, A. Queueing theory. Walter
de Gruyter, 2011.

[11] Böhm, W. A Course on Queueing Models . Chapman and Hall/CRC, 2016.


[12] Bolch, G., Greiner, S., de Meer, H., and Trivedi, K. Queueing networks
and Markov chains, 2nd ed. Wiley & Sons, New York, 2006.
[13] Borovkov, A. Stochastic processes in queueing theory, vol. 4. Springer Science &
Business Media, 2012.

[14] Bose, S. An introduction to queueing systems. Kluwer Academic/Plenum Pub-


lishers, New York, 2002.

[15] Breuer, L., and Baum, D. An introduction to queueing theory and matrix-
analytic methods. Springer, 2005.

[16] Brockmeyer, E., Halstrom, H., and Jensen, A. The life and works of A.K.
Erlang. Academy of Technical Sciences, Copenhagen (1948).

[17] Bunday, B., and Scraton, R. The G/M/r machine interference model. European
Journal of Operational Research 4 (1980), 399–402.

[18] Chan, W. C. An elementary introduction to queueing systems. World Scientific,


2014.

[19] Chee-Hock, N., and Boon-He, S. Queueing modelling fundamentals, 2nd ed.
Wiley & Son, Chichester, 2002.

[20] Chen, H., and Yao, D. D. Fundamentals of queueing networks: Performance,


asymptotics, and optimization, vol. 46. Springer Science & Business Media, 2013.

[21] Chun, Y., et al. Fair queueing. Springer, 2016.

[22] Cohen, J. The multiple phase service network with generalized processor sharing.
Acta Informatica 12 (1979), 245–284.

[23] Cooper, R. Introduction to Queueing Theory, 3rd Edition. CEE Press, Wash-
ington, 1990.
http://web2.uwindsor.ca/math/hlynka/qonline.html.

[24] Csige, L., and Tomkó, J. Machine interference problem with exponential dis-
tributions ( in Hungarian ). Alkalmazott Matematikai Lapok (1982), 107–124.

[25] Daigle, J. Queueing theory with applications to packet telecommunication.


Springer, New York, 2005.

[26] Daigle, J. N. Queueing theory for telecommunications. Addison-Wesley, Reading,


MA, 1992.

[27] Dattatreya, G. Performance analysis of queuing and computer networks. CRC


Press, Boca Raton, 2008.

[28] Dshalalow, J. H. Frontiers in queueing : Models and applications in science and


engineering. CRC Press., Boca Raton, 1997.

[29] Erlang, A. The theory of probabilities and telephone conversations. Nyt Tidsskrift
for Matematik B 20 (1909), 33–39.

[30] Erlang, A. Solution of some problems in the theory of probabilities of significance
in automatic telephone exchanges. The Post Office Electrical Engineers' Journal
10 (1918), 189–197.

[31] Falin, G., and Templeton, J. Retrial queues. Chapman and Hall, London,
1997.

[32] Franken, P., König, D., Arndt, U., and Schmidt, V. Queues and point
processes. Akademie-Verlag, Berlin, 1981.

[33] Gautam, N. Analysis of queues: methods and applications. CRC Press, 2012.

[34] Gebali, F. Analysis of computer and communication networks. Springer, New


York, 2008.

[35] Gelenbe, E., and Mitrani, I. Analysis and synthesis of computer systems.
Academic Press, London, 1980.

[36] Gelenbe, E., and Pujolle, G. Introduction to queueing networks. Wiley &
Sons, Chichester, 1987.

[37] Gnedenko, B., Belyayev, J., and Solovyev, A. Mathematical methods of


reliability theory ( in Hungarian ). Műszaki Könyvkiadó, Budapest, 1970.

[38] Gnedenko, B., Belyayev, Y., and Solovyev, A. Mathematical methods of


reliability theory. Academic Press, New York, London, 1969.

[39] Gnedenko, B., and Kovalenko, I. Introduction to queueing theory.


Birkhäuser, Boston, MA, 1991.

[40] Gross, D., Shortle, J., Thompson, J., and Harris, C. Fundamentals of
queueing theory, 4th edition. John Wiley & Sons, New York, 2008.

[41] Györfi, L., and Páli, I. Queueing theory in informatics systems (in Hungarian).
Műegyetemi Kiadó, Budapest, 1996.

[42] Haghighi, A., and Mishev, D. Queueing models in industry and business. Nova
Science Publishers, Inc., New York, 2008.

[43] Haghighi, A. M., and Mishev, D. P. Difference and differential equations with
applications in Queueing theory. John Wiley & Sons, 2013.

[44] Haghighi, A. M., and Mishev, D. P. Delayed and network queues. John Wiley
& Sons, 2016.

[45] Hall, R. W. Queueing methods for services and manufacturing. Prentice Hall,
Englewood Cliffs, NJ, 1991.

[46] Harchol-Balter, M. Performance modeling and design of computer systems:


queueing theory in action. Cambridge University Press, 2013.

[47] Haribaskaran, G. Probability, queueing theory and reliability engineering. Laxmi


Publications, Bangalore, 2006.

[48] Hassin, R. Rational queueing. CRC press, 2016.

[49] Haverkort, B. Performance of computer communication systems: A model-based


approach. Wiley & Sons, New York, 1998.

[50] Haviv, M. A course in queueing theory, 2013.

[51] Hlynka, M. Queueing theory page.
http://web2.uwindsor.ca/math/hlynka/queue.html.
[52] Ivcsenko, G., Kastanov, V., and Kovalenko, I. Theory of queueing systems.
Nauka, Moscow, 1982.

[53] Iversen, V. Teletraffic Engineering Handbook. ITC in Cooperation with ITU-D
SG2, 2005.
http://web2.uwindsor.ca/math/hlynka/queue.html.
[54] Jain, R. The art of computer systems performance analysis. Wiley & Sons, New
York, 1991.

[55] Jaiswal, N. Priority queues. Academic Press, New York, 1969.


[56] Jereb, L., and Telek, M. Queueing systems ( in Hungarian ). teaching material,
BME Department of Telecommunication.

[57] Karlin, S., and Taylor, H. Stochastic process ( in Hungarian ). Gondolat


Kiadó, Budapest, 1985.

[58] Karlin, S., and Taylor, H. An introduction to stochastic modeling. Harcourt,


New York, 1998.

[59] Khintchine, A. Mathematical methods in the theory of queueing. Hafner, New


York, 1969.

[60] Kleinrock, L. Queueing systems. Vol. I. Theory. John Wiley & Sons, New York,
1975.

[61] Kleinrock, L. Queueing systems. Vol. II: Computer applications. John Wiley &
Sons, New York, 1976.

[62] Kobayashi, H. Modeling and Analysis: An Introduction to System Performance


Evaluation Methodology. Addison-Wesley, Reading, MA, 1978.
[63] Kobayashi, H., and Mark, B. System modeling and analysis: Foundations of
system performance evaluation. Pearson Education Inc., Upper Saddle River, 2008.
[64] Korolyuk, V., and Korolyuk, V. Stochastic models of systems. Kluwer Aca-
demic Publishers, Dordrecht, London, 1999.

[65] Kovalenko, I., Pegg, P., and Kuznetzov, N. Mathematical theory of reliabil-
ity of time dependent systems with practical applications. Wiley & Sons, New York,
1997.

[66] Kulkarni, V. Modeling, analysis, design, and control of stochastic systems.


Springer, New York, 1999.

[67] Lakatos, L., Szeidl, L., and Telek, M. Algorithms in informatics, Vol. II
(in Hungarian). ELTE Eötvös Kiadó, 2005, ch. Queueing theory ( in Hungarian ),
pp. 1298–1347.

[68] Lakatos, L., Szeidl, L., and Telek, M. Introduction to queueing systems with
telecommunication applications. Springer, 2019.

[69] Lavenberg, S., ed. Computer performance modeling handbook. Academic Press,


New York, 1983.

[70] Lee, A. M. Applied queueing theory. Macmillan International Higher Education,


2016.

[71] Lefebvre, M. Basic probability theory with applications. Springer, 2009.

[72] Leon-Garcia, A. Probability, statistics, and random processes for electrical engi-
neering. Pearson Education, 2017.

[73] Medhi, J. Stochastic models in queueing theory. Elsevier, 2002.

[74] Mieghem, P. Performance analysis of communications networks and systems.


Cambridge University Press, Cambridge, 2006.

[75] Nelson, R. Probability, stochastic processes, and queueing theory: the mathematics
of computer performance modeling. Springer Science & Business Media, 2013.

[76] Newell, C. Applications of queueing theory, vol. 4. Springer Science & Business
Media, 2013.

[77] Ovcharov, L., and Wentzel, E. Applied Problems in Probability Theory. Mir
Publishers, Moscow, 1986.

[78] Palaniammal, S. Probability and Queueing Theory. PHI Learning Pvt. Ltd.,
2011.

[79] Prabhu, N. U. Foundations of queueing theory, vol. 7. Springer Science & Business
Media, 2012.

[80] Prékopa, A. Probability theory ( in Hungarian ). Műszaki Könyvkiadó, Budapest,


1962.

[81] Pósafalvi, A., and Sztrik, J. On the heterogeneous machine interference with
limited server's availability. European Journal of Operational Research 28 (1987),
321–328.

[82] Pósafalvi, A., and Sztrik, J. A numerical approach to the repairman problem
with two different types of machines. Journal of the Operational Research Society 40
(1989), 797–803.

[83] Ravichandran, N. Stochastic Methods in Reliability Theory. John Wiley and


Sons, 1990.

[84] Reimann, J. Probability theory and statistics for engineers ( in Hungarian) .


Tankönyvkiadó, Budapest, 1992.

[85] Rényi, A. Probability theory ( in Hungarian ). Tankönyvkiadó, Budapest, 1973.

[86] Ross, S. M. Introduction to Probability Models. Academic Press, Boston, 1989.

[87] Saaty, T. Elements of Queueing Theory with Applications. McGraw-Hill, 1961.

[88] Sahner, R., Trivedi, K., and Puliafito, A. Performance and reliability anal-
ysis of computer systems – An example-based approach using the SHARPE software
package. Kluwer Academic Publisher, Boston, M.A., 1996.

[89] Sauer, C., and Chandy, K. Computer systems performance modelling. Prentice
Hall, Englewood Clis, N.J., 1981.

[90] Schatte, P. On the finite population G̃/M/1 queue and its application to mul-
tiprogrammed computers. Journal of Information Processing and Cybernetics 16
(1980), 433–441.

[91] Shortle, J. F., Thompson, J. M., Gross, D., and Harris, C. M. Funda-
mentals of queueing theory, vol. 399. John Wiley & Sons, 2018.

[92] Smith, J. M. Introduction to Queueing Networks: Theory and Practice. Springer,


2018.

[93] Smith, W. L. Queues. Chapman and Hall/CRC, 2020.

[94] Stewart, W. Introduction to the numerical solution of Markov chains. Princeton


University Press, Princeton, 1995.

[95] Stewart, W. Probability, Markov chains, queues, and simulation. Princeton


University Press, Princeton, 2009.

[96] Stidham, S. Optimal design of queueing systems. CRC Press/Taylor & Francis,
2009.

[97] Syski, R. Introduction to Congestion Theory in Telephone Systems, 2nd Edition.


North Holland, 2005.

[98] Sztrik, J. On the finite-source G̃/M/r queues. European Journal of Operational
Research 20 (1985), 261–268.

[99] Sztrik, J. On the n/G/M/1 queue and Erlang's loss formulas. Serdica 12 (1986),
321–331.

[100] Sztrik, J. On the G̃/M/r/FIFO machine interference model with state-
dependent speeds. Journal of the Operational Research Society 39 (1988), 201–201.

[101] Sztrik, J. Some contribution to the machine interference problem with hetero-
geneous machines. Journal of Information Processing and Cybernetics 24 (1988),
137–143.

[102] Sztrik, J. An introduction to queueing theory and its applications (in Hungarian).
Kossuth Egyetemi Kiadó, Debrecen, 2000.
http://irh.inf.unideb.hu/user/jsztrik/education/eNotes.htm.

[103] Sztrik, J. A key to queueing theory with applications (in Hungarian). Kossuth
Egyetemi Kiadó, Debrecen, 2004.
http://irh.inf.unideb.hu/user/jsztrik/education/eNotes.htm.

[104] Sztrik, J. Practical queueing theory. Teaching material, University of Debrecen,
Faculty of Informatics, 2005.
http://irh.inf.unideb.hu/user/jsztrik/education/09/index.html.

[105] Sztrik, J. Performance modeling of informatics systems ( in Hungarian ). EKF


Líceum Kiadó, Eger, 2007.

[106] Sztrik, J. Modeling and analysis of information technology systems, 2012.
http://irh.inf.unideb.hu/user/jsztrik/education/eNotes.htm.

[107] Sztrik, J. Basic Queuing Theory, Foundation of System Performance Modeling,


Globe Edit. Omni Scriptum GmbH & Co. KG (2016).

[108] Sztrik, J. Modeling and Analysis of Information Technology Systems. GlobeEdit,


2016.

[109] Takagi, H. Queueing analysis. A foundation of performance evaluation. Volume


1: Vacation and priority systems, part 1. North-Holland, Amsterdam, 1991.

[110] Takagi, H. Queueing analysis. A foundation of performance evaluation. Volume


2: Finite Systems. North-Holland, Amsterdam, 1993.

[111] Takagi, H. Queueing analysis. A foundation of performance evaluation. Volume


3: Discrete-Time Systems. North-Holland, Amsterdam, 1993.

[112] Takács, L. Introduction to the theory of queues. Oxford University Press, New
York, 1962.

[113] Takács, L. Combinatorial Methods in the Theory of Stochastic Processes. John


Wiley & Sons, 1977.

[114] Tijms, H. Stochastic Modelling and Analysis: A Computational Approach. Wiley


& Sons, New York, 1986.

[115] Tijms, H. A rst course in stochastic models. Wiley & Son, Chichester, 2003.

[116] Tomkó, J. On sojourn times for semi-Markov processes. Proceeding of the 14th
European Meeting of Statisticians, Wroclaw (1981).

[117] Tomkó, J. Sojourn time problems for Markov chains ( in Hungarian ). Alkalmazott
Matematikai Lapok (1982), 91–106.

[118] Trivedi, K. Probability and Statistics with Reliability, Queuing, and Computer
Science Applications, 2-nd edition. Wiley & Son, New York, 2002.

[119] Ushakov, I. A., and Harrison, R. A. Handbook of reliability engineering.


Transl. from the Russian. Updated ed. John Wiley & Sons, New York, NY, 1994.

[120] van Hoorn, M. Algorithms and approximations for queueing systems. Centrum
voor Wiskunde en Informatica, Amsterdam, 1984.

[121] Virtamo, J. Queueing Theory. Helsinki University of Technology, 2015.


http://www.netlab.tkk.fi/opetus/s383143/kalvot/english.shtml.
[122] Weber, T. Solving Performance Models Based on Basic Queueing Theory For-
mulas. Grin Publishing, 2017.

[123] Wentzel, E., and Ovcharov, L. Applied problems in probability theory. Mir
Publisher, Moscow, 1986.

[124] White, J. Analysis of queueing systems. Academic Press, New York, 1975.

[125] Wolf, R. Stochastic Modeling and the Theory of Queues. Prentice-Hall, 1989.

[126] Yashkov, S. Processor-sharing queues: some progress in analysis. Queueing Sys-


tems: Theory and Applications 2 (1987), 1–17.

[127] Zukerman, M. Introduction to queueing theory and stochastic teletraffic models.


arXiv preprint arXiv:1307.2968 (2013).

