Queueing

János Sztrik

August 7, 2001
• Introduction
  – Queueing systems
  – Performance measures for finite-source systems
• Analytical Results
  – Homogeneous M/M/r systems, the classical model
• Numerical Methods
  – A recursive method for the M/G/1 system
• Asymptotic Methods
  – Preliminary results
  – Heterogeneous multiprocessor systems
Introduction
• Queueing systems
  – Kendall's notation
  – Performance measures
Queueing systems
The mean service time E[B] is denoted by T_B and its reciprocal by the
service rate µ:

\mu = \frac{1}{T_B}.  (2)
However, there are many practical situations in which the arrivals of requests do
not form a renewal process; that is, the arrivals may depend on the number of
customers, requests, jobs, etc. staying at the service facility. This happens in
the case of finite-source queueing systems.
Example 1
Example 2
Consider a single-unloader system at which trains arrive bringing coal from
various mines. There are N trains involved in the coal transport. The coal
unloader can handle only one train at a time and the unloading time per train
has an exponential distribution with mean 1/µ. The unloader is subject to
breakdowns when the unloader is in operation. The operating time of an
unloader has an exponential distribution with mean 1/η and the time to
repair a broken unloader is exponentially distributed with mean 1/ξ. The
unloading of the train that is in service when the unloader breaks down is
resumed as soon as the repair of the unloader is completed. An unloaded
train returns to the mines for another trainload of coal. The time for a train
to complete a trip from the unloader to the mines and back is assumed to
have an exponential distribution with mean 1/λ.
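A minimal numerical sketch of this example follows. It assumes (as one reading of the description) that breakdowns can occur only while a train is actually being unloaded, models the pair (number of trains at the unloader, unloader up/down) as a finite continuous-time Markov chain, and solves the global balance equations. All parameter values are illustrative, not taken from the text.

    import numpy as np

    # Illustrative parameters (assumptions, not from the text)
    N   = 5      # number of trains
    lam = 0.2    # rate of completing a trip to the mines and back (1/mean trip time)
    mu  = 1.0    # unloading rate
    eta = 0.05   # breakdown rate while unloading (assumed only while unloading)
    xi  = 0.5    # repair rate

    # State (n, s): n = trains at the unloader, s = 1 (up) or 0 (down)
    states = [(n, s) for n in range(N + 1) for s in (1, 0)]
    index  = {st: k for k, st in enumerate(states)}

    Q = np.zeros((len(states), len(states)))
    for (n, s), i in index.items():
        if n < N:                       # a train returns from the mines
            Q[i, index[(n + 1, s)]] += (N - n) * lam
        if s == 1 and n >= 1:           # unloading completes
            Q[i, index[(n - 1, 1)]] += mu
        if s == 1 and n >= 1:           # breakdown while unloading
            Q[i, index[(n, 0)]] += eta
        if s == 0:                      # repair
            Q[i, index[(n, 1)]] += xi
    np.fill_diagonal(Q, -Q.sum(axis=1))

    # Solve pi Q = 0 together with sum(pi) = 1
    A = np.vstack([Q.T, np.ones(len(states))])
    b = np.zeros(len(states) + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]

    mean_trains = sum(n * pi[index[(n, s)]] for n, s in states)
    p_down      = sum(pi[index[(n, 0)]] for n in range(N + 1))
    print(f"mean trains at unloader: {mean_trains:.3f}, P(unloader down): {p_down:.3f}")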
Example 3
Kendall’s notation
Dynamic Priorities: The selection depends on dynamic priorities that alter with
the passing of time.
Preemption: If priority or LCFS discipline is used, then the job currently being
processed is interrupted and preempted if there is a job in the queue with a
higher priority.
The job that was interrupted is resumed only after all jobs that arrived after it
have completed service.
M/G/1/K/N
denotes the above system with different rates and service times.
Performance measures
Probability of the number of jobs in the system, P_k. The mean values
of most of the other interesting performance measures can be deduced from
P_k:

P_k = P[\text{there are } k \text{ jobs in the system}].
Utilization ρ: In the case when the source is infinite and there is no limit on the
number of jobs in the single-server queue, the server utilization is given by

\rho = \frac{\lambda}{\mu}.  (3)

The utilization of a service station with multiple servers is the mean fraction
of active (or busy) servers:

\rho = \frac{\lambda}{m\mu},  (4)

\rho < 1,  (5)

i.e., on average the number of jobs that arrive in a unit of time must be
less than the number of jobs that can be processed.
Throughput γ: the throughput is the mean number of jobs whose service is
completed in a unit of time; for a stable infinite-source system it equals the
arrival rate λ, in accordance with Eq. (4). We note that in the case of a finite
buffer or a finite-source queueing system, the throughput is usually different
from the external arrival rate.
Response time T : The response time, also known as the sojourn time, is the
total time that a job spends in the queueing system.
Waiting time W: The waiting time is the time that a job spends in a queue
waiting to be serviced. Therefore we have

T = W + B,

where B is the service time. The distribution functions of the waiting time,
F_W(x), and the response time, F_T(x), are sometimes also required.
Queue length Q: The queue length, Q, is the number of jobs in the queue.
L = E(L) = \sum_{k=1}^{\infty} k P_k.  (8)

The mean number of jobs in the queueing system, L = E(L), and the mean
queue length, Q = E(Q), can be calculated using one of the most important
theorems of queueing theory, Little's theorem (law):

L = \gamma T, \qquad Q = \gamma W.

Little's theorem is valid for all queueing disciplines and for arbitrary GI/G/m
and GI/G/K/N systems.
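As a quick numerical illustration of these definitions and of Little's law, the sketch below assumes a simple infinite-source M/M/1 queue (not the finite-source models discussed later) with illustrative rates, computes ρ, L and Q from the geometric distribution P_k = (1 − ρ)ρ^k, and recovers T and W through Little's law.

    # M/M/1 illustration of the performance measures and Little's law (L = gamma*T, Q = gamma*W)
    lam, mu = 0.8, 1.0                  # arrival and service rates (illustrative values)
    rho = lam / mu                      # utilization, Eq. (3); stability requires rho < 1

    L = rho / (1 - rho)                 # mean number in system: sum_k k*(1-rho)*rho**k
    Q = rho**2 / (1 - rho)              # mean queue length (jobs waiting, not in service)
    gamma = lam                         # throughput equals the arrival rate when rho < 1

    T = L / gamma                       # Little's law: mean response time
    W = Q / gamma                       # Little's law: mean waiting time
    assert abs(T - (W + 1/mu)) < 1e-12  # consistency: response time = waiting time + service time
    print(f"rho={rho:.2f}  L={L:.3f}  Q={Q:.3f}  T={T:.3f}  W={W:.3f}")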
• Homogeneous systems
• Asymptotic properties
• Heterogeneous systems
Homogeneous systems
For a better understanding, let us consider an M/G/1 system without server
vacations, treated in detail in [76]. One of the performance measures in our
system is the mean message response time E[T], defined as the mean time
from the arrival of a new message to its service completion, that is, the mean
time a message spends in the service facility. Since the mean time that each
message takes to complete a cycle of staying in the source and staying in the
service facility is E[T ] + 1/λ, the throughput γ of the system, which is
defined as the mean number of messages served per unit time in the whole
system, is given by N/(E[T ] + 1/λ). On the other hand, if P0 is the
probability that the server is idle at an arbitrary time, then ρ0 = 1 − P0 is the
carried load or server utilization, namely, the long run fraction of the time
that the server is busy. Thus, the throughput is also given by (1 − P0)/b. By
equating these two expressions for the throughput, we get
\gamma = \frac{N}{E[T] + 1/\lambda} = \frac{1 - P_0}{b} = \frac{\rho_0}{b}.  (9)
Hence we have

E[T] = \frac{Nb}{1 - P_0} - \frac{1}{\lambda}.  (10)
If E[L] denotes the mean number of messages in the service facility at an
arbitrary time, we also have the relationship

\gamma = \lambda (N - E[L]),  (11)

which equates the throughput to the mean number of messages arriving per
unit of time. Thus we get

E[L] = N - \frac{1 - P_0}{\lambda b} = \gamma E[T].  (12)
E = \frac{N - E[L]}{N} = \frac{\gamma}{N\lambda} = \frac{1 - P_0}{N\lambda b}.  (13)
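As a hedged numerical check of relations (9)-(13), the sketch below assumes the special case of exponential service (a finite-source M/M/1, i.e. the classical machine-interference model with one repairman), for which P_0 has the well-known closed form P_0 = [Σ_{n=0}^{N} N!/(N−n)! (λ/μ)^n]^{-1}; the parameter values are illustrative only.

    from math import factorial

    # Finite-source M/M/1 (machine interference, single repairman), illustrative values
    N, lam, mu = 4, 0.1, 1.0
    b = 1.0 / mu                                   # mean service time

    # P0 for the finite-source M/M/1: P_n = N!/(N-n)! * (lam/mu)**n * P0
    P0 = 1.0 / sum(factorial(N) // factorial(N - n) * (lam / mu) ** n for n in range(N + 1))

    gamma = (1 - P0) / b                           # Eq. (9): throughput
    ET    = N * b / (1 - P0) - 1 / lam             # Eq. (10): mean response time
    EL    = N - (1 - P0) / (lam * b)               # Eq. (12): mean number in the facility
    E     = (N - EL) / N                           # Eq. (13)

    assert abs(EL - gamma * ET) < 1e-12            # consistency with Eq. (12): E[L] = gamma*E[T]
    assert abs(gamma - lam * (N - EL)) < 1e-12     # consistency with Eq. (11)
    print(f"P0={P0:.4f}  gamma={gamma:.4f}  E[T]={ET:.4f}  E[L]={EL:.4f}  E={E:.4f}")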
Let E[Θ] be the mean length of a busy period. Since the state of the system
repeats regenerative cycles of a busy period of mean length E[Θ] and an idle
period of mean length E[I] = 1/(Nλ), the probability P0 that the server is
idle at an arbitrary time is given by

P_0 = \frac{E[I]}{E[\Theta] + E[I]} = \frac{1/(N\lambda)}{E[\Theta] + 1/(N\lambda)}.  (14)
If π0 denotes the probability that the service facility is empty after a service
completion, 1/π0 is the mean number of messages that are served during
each busy period. This can be seen by considering a long period of time
during which a large number of messages, say n, are served. Such a period
will include nπ0 busy periods on the average, because π0 is the probability
that a busy period is terminated after a service completion. Therefore, on the
average 1/π0 messages are served per busy period. Hence, the mean length of
a busy period is given by
E[\Theta] = \frac{b}{\pi_0},  (15)

P_0 = \frac{\pi_0}{\pi_0 + N\lambda b}.  (16)
Substituting (16) into (9), (10), and (12) we can express the throughput γ,
the mean message response time E[T], and the mean number E[L] of
messages in the service facility in terms of π0:

\gamma = \frac{N\lambda}{\pi_0 + N\lambda b}; \qquad E = \frac{1}{\pi_0 + N\lambda b},  (17)

E[T] = Nb - \frac{1 - \pi_0}{\lambda},  (18)

E[L] = N\left(1 - \frac{1}{\pi_0 + N\lambda b}\right).  (19)
Asymptotic properties
As N → ∞ we have

\pi_0 \to 0, \quad P_0 \to 0, \quad \gamma \to 1/b, \quad E \to 0, \quad E[T] \to Nb, \quad E[L] \to N,

and in particular

E[T] \approx Nb - \frac{1}{\lambda} \qquad \text{as } N \to \infty.  (20)

The value of N, denoted by N*, at which the two straight lines E[T] = b and
the one in (20), as functions of N, intersect each other is called the saturation
number in [42] (Sec. 4.12). It is given by

N^* = 1 + \frac{1}{\lambda b}.  (21)
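A small sketch of the saturation-number idea, under illustrative parameter values: N* from (21) marks where the light-load level E[T] ≈ b meets the heavy-usage asymptote (20).

    lam, b = 0.1, 1.0                      # illustrative per-source arrival rate and mean service time
    N_star = 1 + 1 / (lam * b)             # Eq. (21): saturation number, here N* = 11
    print(f"saturation number N* = {N_star:.0f}")
    for N in (11, 15, 20, 30):             # beyond N*, E[T] grows roughly linearly as in Eq. (20)
        print(f"N={N:2d}  asymptote Nb - 1/lambda = {N * b - 1 / lam:5.1f}  (light-load level: b = {b})")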
Heterogeneous systems
The multiple finite-source model and the single finite-source model may be
associated with flow control and congestion avoidance mechanisms in
computer communication networks. Namely, the multiple finite-source model,
in which the population size is fixed for each class, corresponds to window
flow control. Let us first assume that each of the N messages has different
characteristics. In terms of machine interference problems, each machine is
assumed to have a different breakdown rate and a different repair time
distribution. Specifically, let λi be the rate at which message i in the source
arrives at the service facility, and let Bi(x) be the distribution function (DF)
for the service time of message i, where i = 1, 2, . . . , N. We also denote by bi
and Bi*(s) the mean and the Laplace-Stieltjes transform (LST) of Bi(x),
respectively. We call this system an individual message model. The total
arrival rate when all messages are in the source is denoted by
\Lambda = \sum_{i=1}^{N} \lambda_i.  (22)
Let E[T_i] be the mean response time and γ_i the throughput of message i,
that is, the mean number of times that message i is served per unit time,
where i = 1, 2, . . . , N. These are related by

\gamma_i = \frac{1}{E[T_i] + 1/\lambda_i}, \qquad 1 \le i \le N.  (23)
If Γ(i) denotes the mean number of times that message i is served in a busy
period of length Θ, the throughput γi can also be expressed as
\gamma_i = \frac{\Gamma(i)}{E[\Theta] + E[I]}, \qquad 1 \le i \le N,  (24)
where

E[I] = \frac{1}{\Lambda}  (25)

is the mean length of an idle period I, and

E[\Theta] = \sum_{j=1}^{N} b_j \Gamma(j)  (26)

is the mean length of a busy period Θ. The carried load (total server
utilization) ρ0 is given by

\rho_0 = \frac{E[\Theta]}{E[\Theta] + E[I]} = 1 - P_0.  (27)
\gamma = \sum_{i=1}^{N} \gamma_i = \frac{\sum_{i=1}^{N} \Gamma(i)}{E[\Theta] + E[I]}.  (28)
Hence we can obtain the throughput γi and the mean response time E[Ti]
once we have calculated {Γ(j); 1 ≤ j ≤ N }, where i = 1, 2, . . . , N . The
mean waiting time of message i is given by
E[W_i] = E[T_i] - b_i = \frac{1}{\gamma_i} - \frac{1}{\lambda_i} - b_i, \qquad 1 \le i \le N.  (29)
If P(i) denotes the probability that message i is present in the service facility
at an arbitrary time, we have

P(i) = \frac{E[T_i]}{E[T_i] + 1/\lambda_i} = \gamma_i E[T_i] = 1 - \frac{\gamma_i}{\lambda_i}, \qquad 1 \le i \le N,  (30)

E[T_i] = \frac{P(i)}{\lambda_i (1 - P(i))}; \qquad \gamma_i = \lambda_i (1 - P(i)).  (31)
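The relations (23) and (29)-(31) are easy to sanity-check numerically. The short sketch below assumes arbitrary illustrative values of λ_i, b_i and P(i) for a few messages and verifies that the resulting γ_i, E[T_i] and E[W_i] are mutually consistent.

    # Per-message relations (23), (29)-(31) of the individual message model; illustrative data
    messages = [                       # (lambda_i, b_i, P(i)) -- P(i) assumed known here
        (0.10, 1.0, 0.30),
        (0.20, 0.5, 0.25),
        (0.05, 2.0, 0.40),
    ]

    for i, (lam_i, b_i, P_i) in enumerate(messages, start=1):
        gamma_i = lam_i * (1 - P_i)                     # Eq. (31)
        ET_i    = P_i / (lam_i * (1 - P_i))             # Eq. (31)
        EW_i    = ET_i - b_i                            # Eq. (29)
        # Eq. (23): gamma_i = 1 / (E[T_i] + 1/lambda_i)
        assert abs(gamma_i - 1 / (ET_i + 1 / lam_i)) < 1e-12
        # Eq. (30): P(i) = gamma_i * E[T_i] = 1 - gamma_i / lambda_i
        assert abs(P_i - gamma_i * ET_i) < 1e-12 and abs(P_i - (1 - gamma_i / lam_i)) < 1e-12
        print(f"message {i}: gamma={gamma_i:.4f}  E[T]={ET_i:.4f}  E[W]={EW_i:.4f}")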
Comprehensive books and papers: [2, 5, 9, 15, 17, 18, 19, 28, 29, 35, 37,
39, 44, 46, 48, 52, 53, 74, 75, 76, 77, 81].
Computer and communication systems: [1, 8, 16, 22, 23, 24, 34, 33, 36,
40, 41, 42, 43, 49, 51, 54, 55, 56, 59, 60, 57, 79].
It should be noted that there are many papers on machine interference and
related problems, but our aim is to refer only to the most important ones,
which are closely connected to the problems or results presented in this work.
Here they are: [6, 7, 10, 11, 13, 20, 30, 31, 38, 45, 50, 58, 78, 82, 83].
The main aim of the following chapters is to show how different methods can
be applied in the investigation of finite-source queueing systems. Thus,
analytical, numerical and asymptotic approaches are presented and in most
cases numerical results illustrate the problem in question. Furthermore, the
most important sources of information are listed to draw the attention of
interested readers. Finally, some of the works of the author are either
presented or cited.
Analytical Results
1. The time between breakdowns (or production time) of any one of the
machines is a sample from a negative exponential probability distribution
with mean 1/λ, (or mean rate λ). A breakdown is random and is independent
of the operating behavior of the other machines. Then, when there are n
machines not working at time t,
Prob (one of the N − n machines goes down in the interval (t, t + ∆t)) =
(N − n)λ∆t + o(∆t),
where ∆t is a small increment of time.
2. Any one of the n down machines requires only one of the r operators to
fix it. The service time distribution is negative exponential with mean
1/µ for each machine and each operator. The service times are mutually
independent and also independent of the number of down machines.
Then
\text{Prob}[\text{one of the } n \text{ down machines is fixed in an interval } \Delta t] =
\begin{cases}
n\mu\Delta t + o(\Delta t), & 1 \le n \le r, \\
r\mu\Delta t + o(\Delta t), & r < n \le N.
\end{cases}
Let
L(t) = the number of down machines at time t
and
Pn(t) = Prob(L(t) = n | L(0) = i), n = 0, . . . , N.
The corresponding birth and death rates are

\lambda_n = \begin{cases} (N - n)\lambda, & n = 0, 1, \ldots, N, \\ 0, & n > N, \end{cases}

\mu_n = \begin{cases} n\mu, & n = 1, 2, \ldots, r, \\ r\mu, & n = r + 1, \ldots, N. \end{cases}
This finite system of ordinary differential equations can be solved and we get
the transient probabilities. The steady-state probabilities

P_n = \lim_{t \to \infty} P_n(t)

satisfy the balance equations

N\lambda P_0 = \mu P_1,

\{(N - n)\lambda + n\mu\} P_n = (N - n + 1)\lambda P_{n-1} + (n + 1)\mu P_{n+1}, \qquad 1 \le n < r,

\{(N - n)\lambda + r\mu\} P_n = (N - n + 1)\lambda P_{n-1} + r\mu P_{n+1}, \qquad r \le n < N,

r\mu P_N = \lambda P_{N-1}.
Solving these, we obtain

P_n = \binom{N}{n} \left(\frac{\lambda}{\mu}\right)^n P_0 \qquad \text{for } 0 \le n \le r,  (32)

P_n = \frac{N!}{(N - n)!\, r!\, r^{n-r}} \left(\frac{\lambda}{\mu}\right)^n P_0 \qquad \text{for } r \le n \le N,

where P_0 is obtained by solving \sum_{n=0}^{N} P_n = 1 to get

P_0 = \left[ \sum_{n=0}^{r} \binom{N}{n} \rho^n + \sum_{n=r+1}^{N} \binom{N}{n} \frac{n!}{r!\, r^{n-r}} \rho^n \right]^{-1}, \qquad \rho = \frac{\lambda}{\mu}.
The mean number of down machines is

E[L] = \sum_{n=0}^{N} n P_n,

E[L] = N - \frac{\lambda + \mu}{\lambda}\,(1 - P_0).
The machine utilization, i.e., the percentage of average production obtained
(the fraction of total production time over all machines), is

U_m = \frac{N - E[L]}{N}.
The operator (server) utilization is

U_s = \sum_{n=0}^{r} \frac{n P_n}{r} + \sum_{n=r+1}^{N} P_n,

and the mean number of idle operators is

r - r U_s = \sum_{n=0}^{r} (r - n) P_n.
n=0
N
X
Q= (n − r)Pn
n=r+1
E(L)
T =
λ(N − E(L))
Q
W =
λ(N − E(L))
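A compact numerical sketch of these formulas follows: it computes P_n from (32), then E[L], U_m, U_s, Q, T and W, and can be used to reproduce operator utilizations of the kind shown in Table 1 (the (N, r, ρ) triples below are taken from that table; μ is set to 1, so time is measured in mean service times).

    from math import comb, factorial

    def mmr_finite_source(N, r, rho):
        """Steady-state measures of the M/M/r machine-interference model, Eq. (32) onward."""
        # Unnormalised probabilities: C(N,n) rho^n for n<=r, N!/((N-n)! r! r^(n-r)) rho^n otherwise
        q = [comb(N, n) * rho**n if n <= r
             else factorial(N) / (factorial(N - n) * factorial(r) * r**(n - r)) * rho**n
             for n in range(N + 1)]
        P0 = 1.0 / sum(q)
        P = [qi * P0 for qi in q]

        EL = sum(n * Pn for n, Pn in enumerate(P))                      # mean number of down machines
        Um = (N - EL) / N                                               # machine utilization
        Us = sum(min(n, r) * Pn for n, Pn in enumerate(P)) / r          # operator (server) utilization
        Q  = sum((n - r) * Pn for n, Pn in enumerate(P) if n > r)       # mean queue length
        lam = rho                                                       # mu = 1, so lambda = rho
        T  = EL / (lam * (N - EL))                                      # mean time a machine is down
        W  = Q / (lam * (N - EL))                                       # mean wait for an operator
        return P0, EL, Um, Us, Q, T, W

    for N, r, rho in [(4, 1, 0.45), (8, 2, 0.45), (16, 4, 0.45)]:       # rows of Table 1
        P0, EL, Um, Us, Q, T, W = mmr_finite_source(N, r, rho)
        print(f"N={N:2d} r={r}  Us={Us:.3f}  Um={Um:.3f}  E[L]={EL:.3f}  W={W:.3f}")

For the first row this reproduces Us = 0.881, in line with Table 1, and the same function illustrates the pooling effect discussed below.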
Table 1 has values for operator utilization for pairs of (N, r) parameters that
have the same machine per operator ratio (N/r = 4 and then 15).
Notice that the operator utilization is increasing for a given ρ even though the
ratio of the number of machines per operator stays the same. This is an
indication that it is better, when feasible, to pool operators rather than to
assign a particular number of machines to each operator individually. The
example considers two cases: (1) 6 machines serviced by one operator and (2)
20 machines serviced by three operators. The results show that, even though
the workload per operator increased from system 1 (6 machines/operator) to
system 2 (6 2/3 machines/operator), the machines were serviced more efficiently
in system 2. The advantages of pooling are well known.
ρ      N    r    Us
0.45   4    1    0.881
0.45   8    2    0.934
0.45   16   4    0.994
0.05   15   1    0.656
0.05   30   2    0.682
0.05   60   4    0.705

Table 1: Operator utilizations for proportional parameters
Numerical Methods
Let us define
and
\frac{\partial}{\partial t} P_0(t) = -N\lambda P_0(t) + P_1(0, t),  (36)

\left(\frac{\partial}{\partial t} - \frac{\partial}{\partial u}\right) P_1(u, t) = -(N - 1)\lambda P_1(u, t) + N\lambda P_0(t) b(u) + P_2(0, t) b(u),  (37)

\left(\frac{\partial}{\partial t} - \frac{\partial}{\partial u}\right) P_r(u, t) = -(N - r)\lambda P_r(u, t) + (N - r + 1)\lambda P_{r-1}(u, t) + \cdots
Further define
and
\int_0^\infty e^{-su} \frac{\partial}{\partial u} P_n(u)\, du = s P_n^*(s) - P_n(0).  (44)
From (36)-(44) and the fact that all derivatives with respect to t are zero, it
follows that

\sum_{r=1}^{N} P_r^*(s) = \frac{1 - B^*(s)}{s} \sum_{r=1}^{N} P_r(0),  (49)

\sum_{r=1}^{N} P_r^*(0) = b_1 \sum_{r=1}^{N} P_r(0).  (50)
From (46), we get

P_2(0) = \frac{N\lambda [1 - B^*((N-1)\lambda)]}{B^*((N-1)\lambda)} P_0  (51)

and

P_1^*(0) = \frac{1}{(N - 1)\lambda} P_2(0).  (52)

P_{r+1}(0) = \frac{1}{B^*((N - r)\lambda)} \left[ P_r(0) - (N - r + 1)\lambda P_{r-1}^*((N - r)\lambda) \right], \qquad 2 \le r \le N - 1.  (53)
P_1^*((N - r)\lambda) = \frac{1}{(r - 1)\lambda} \left[ N\lambda P_0 \{ B^*((N - r)\lambda) - 1 \} + P_2(0) B^*((N - r)\lambda) \right].  (54)

P_j^*((N - r)\lambda) = \frac{1}{(r - j)\lambda} \left[ (N - j + 1)\lambda P_{j-1}^*((N - r)\lambda) + P_{j+1}(0) B^*((N - r)\lambda) - P_j(0) \right], \qquad 2 \le j \le r - 1.  (55)
Hence P3(0), P4(0), . . . , PN (0) can be obtained recursively using (51), (54),
(55) and (53) in terms of P0.
P_r^*(0) = \frac{1}{(N - r)\lambda} \left[ (N - r + 1)\lambda P_{r-1}^*(0) + P_{r+1}(0) - P_r(0) \right], \qquad 2 \le r \le N - 1.  (56)
Now the only unknown quantity is P_N^*(0), which can be obtained from
equation (48). To obtain it, differentiate equation (48) with respect to s and
set s = 0; we get

P_N^*(0) = -\lambda P_{N-1}^{*(1)}(0).  (57)
To get P_{N-1}^{*(1)}(0), differentiate (47) and (46) with respect to s and set s = 0:

P_r^{*(1)}(0) = \frac{1}{(N - r)\lambda} \left[ (N - r + 1)\lambda P_{r-1}^{*(1)}(0) + P_{r+1}(0) B^{*(1)}(0) + P_r^*(0) \right], \qquad 2 \le r \le N - 1,  (58)

P_1^{*(1)}(0) = \frac{1}{(N - 1)\lambda} \left[ N\lambda P_0 B^{*(1)}(0) + P_2(0) B^{*(1)}(0) + P_1^*(0) \right].  (59)
As P_1^{*(1)}(0) is known completely from (59), P_r^{*(1)}(0), 2 ≤ r ≤ N − 1, can be
determined recursively from (58), and hence P_N^*(0) is known from (57). So
P_n^*(0), 1 ≤ n ≤ N, is known in terms of P_0, which can be determined using
the normalizing condition

P_0 + \sum_{n=1}^{N} P_n^*(0) = 1.  (60)

\pi_n = \frac{P_{n+1}(0)}{\sum_{r=1}^{N} P_r(0)}, \qquad n = 0, 1, \ldots, N - 1.  (61)
Consider, for example, the case N = 4 with exponential service times, i.e.,

B^*(s) = \frac{\mu}{\mu + s}.

Then

P_2(0) = \frac{4\lambda [1 - B^*(3\lambda)]}{B^*(3\lambda)} P_0,

P_3(0) = \frac{1}{B^*(2\lambda)} \left[ P_2(0) - 3\lambda P_1^*(2\lambda) \right], \qquad
P_1^*(2\lambda) = \frac{1}{\lambda} \left[ 4\lambda P_0 \{ B^*(2\lambda) - 1 \} + P_2(0) B^*(2\lambda) \right],

P_4(0) = \frac{1}{B^*(\lambda)} \left[ P_3(0) - 2\lambda P_2^*(\lambda) \right], \qquad
P_2^*(\lambda) = \frac{1}{\lambda} \left[ 3\lambda P_1^*(\lambda) + P_3(0) B^*(\lambda) - P_2(0) \right],
P_1^*(\lambda) = \frac{1}{2\lambda} \left[ 4\lambda P_0 \{ B^*(\lambda) - 1 \} + P_2(0) B^*(\lambda) \right].

Substituting B^*(s) = \mu/(\mu + s) gives

P_2(0) = 12 \frac{\lambda^2}{\mu} P_0, \qquad
P_1^*(2\lambda) = \frac{4\lambda}{\mu + 2\lambda} P_0, \qquad
P_3(0) = 24 \frac{\lambda^3}{\mu^2} P_0, \qquad
P_1^*(\lambda) = \frac{4\lambda}{\mu + \lambda} P_0,

P_2^*(\lambda) = 12 \frac{\lambda^2}{\mu(\mu + \lambda)} P_0, \qquad
P_4(0) = 24 \frac{\lambda^4}{\mu^3} P_0.

P_1^*(0) = \frac{1}{3\lambda} P_2(0) = \frac{4\lambda}{\mu} P_0, \qquad
P_2^*(0) = \frac{1}{2\lambda} P_3(0) = 12 \frac{\lambda^2}{\mu^2} P_0, \qquad
P_3^*(0) = \frac{1}{\lambda} P_4(0) = 24 \frac{\lambda^3}{\mu^3} P_0.
P_4^*(0) = -\lambda P_3^{*(1)}(0),

where P_3^{*(1)}(0) can be obtained from (58):

P_3^{*(1)}(0) = \frac{1}{\lambda} \left[ 2\lambda P_2^{*(1)}(0) + P_4(0) B^{*(1)}(0) + P_3^*(0) \right],
again P_2^{*(1)}(0) can be obtained from (58):

P_2^{*(1)}(0) = \frac{1}{2\lambda} \left[ 3\lambda P_1^{*(1)}(0) + P_3(0) B^{*(1)}(0) + P_2^*(0) \right].

To know P_2^{*(1)}(0) we require P_1^{*(1)}(0), which can be obtained from (59):

P_1^{*(1)}(0) = \frac{1}{3\lambda} \left[ 4\lambda P_0 B^{*(1)}(0) + P_2(0) B^{*(1)}(0) + P_1^*(0) \right],

and hence

P_4^*(0) = 24 \frac{\lambda^4}{\mu^4} P_0.

The normalizing condition (60) then yields

P_0 = \frac{1}{1 + 4\rho + 12\rho^2 + 24\rho^3 + 24\rho^4}, \qquad \text{where } \rho = \frac{\lambda}{\mu}.
It can easily be seen that this result matches the expression given in [29],
p. 105. The server utilization and the average number of operating machines
are U = 1 − P_0 and AOP = N − \bar{N}, respectively, where \bar{N} denotes the
mean number of machines at the repair facility.
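A quick check of this special case in code: for exponential service the model reduces to the machine-interference model with a single repairman, so the P_0 obtained above must coincide with the classical closed form P_0 = [Σ_{n=0}^{N} N!/(N−n)! ρ^n]^{-1} for N = 4. The sketch below confirms the match for a few illustrative values of ρ.

    from math import factorial

    N = 4
    for rho in (0.1, 0.3, 0.5, 1.0):
        # P0 from the recursive method specialised to N = 4, exponential service (see above)
        P0_recursive = 1.0 / (1 + 4*rho + 12*rho**2 + 24*rho**3 + 24*rho**4)
        # Classical finite-source M/M/1 form: P_n = N!/(N-n)! rho^n P0
        P0_classical = 1.0 / sum(factorial(N) / factorial(N - n) * rho**n for n in range(N + 1))
        assert abs(P0_recursive - P0_classical) < 1e-12
        print(f"rho={rho:.1f}  P0={P0_recursive:.6f}")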
Numerical results
The problem with Takács' result is that it fails for large N, whereas the proposed
method works without any difficulty. The results have also been tested against
those given in [12] for the operator utilization in the case of E5, E10 and D with
varying ρ and N. The effect of ρ on U and W for fixed N = 5 is shown in Tables 4
and 5, respectively. It is seen that as ρ increases, both U and W also increase. But
for the same value of ρ, U for H2 is less than U for M, E10 and D. It is also
observed that W for H2 is greater than W for M, E10 and D. However, for
ρ ≥ 0.5, W is almost the same for all the repair time distributions. The effect of
the number of machines (N) on U and W for the fixed value of ρ = 0.3 is
given in Table 6. As N increases, both U and W also increase irrespective of
the repair time distribution, but for N ≥ 12, U remains the same for all the
repair time distributions. The same is also true for W.
H2: µ1 = 1.41, µ2 = 5.26, α1 = 0.21, α2 = 0.79;  GE4: µ1 = 1, µ2 = 2, µ3 = 3, µ4 = 4;
fifth column: a = 0.01, b = 0.09.

P(n)   M            E10          D            H2           (a, b)       GE4
       ρ=0.5        ρ=0.7        ρ=0.7        ρ=0.3        ρ=0.9        ρ=0.3
P(0)   0.36697E-01  0.11763E-02  0.62725E-03  0.16141E+00  0.50140E-03  0.10867E+00
P(1)   0.91743E-01  0.15888E-01  0.12110E-01  0.19939E+00  0.68133E-02  0.24016E+00
P(2)   0.18349E+00  0.10269E+00  0.97234E-01  0.21607E+00  0.53685E-01  0.30048E+00
P(3)   0.27523E+00  0.31946E+00  0.33197E+00  0.19791E+00  0.23300E+00  0.23267E+00
P(4)   0.27523E+00  0.41045E+00  0.42045E+00  0.14666E+00  0.45373E+00  0.10034E+00
P(5)   0.13671E+00  0.15033E+00  0.13760E+00  0.78559E-01  0.25227E+00  0.17680E-01
Sum    0.10000E+01  0.10000E+01  0.10000E+01  0.10000E+01  0.10000E+01  0.10000E+01
N      3.07339      3.57310      3.57230      2.20470      3.88940      2.02890
SDL    1.30424      0.92744      0.89142      1.51480      0.87103      1.21130
Lq     2.11009      2.57430      2.57300      1.36610      2.88990      1.13760
T      1.59520      2.50410      2.50220      0.78872      0.19457      4.74220
W      1.09520      1.80410      1.80220      0.48872      0.14457      2.65890
U      0.96330      0.99882      0.99937      0.83859      0.99950      0.89133
AOP    1.92660      1.42690      1.42770      2.79530      1.11060      2.97110
n     Pn             πn             Pn (using Jeyachandra
      (GR method)    (GR method)    & Shanthikumar relation)
0     0.10530E-01    0.26605E-01    0.10530E-01
1     0.68335E-01    0.13812E+00    0.68335E-01
2     0.21794E+00    0.33040E+00    0.21794E+00
3     0.36235E+00    0.36621E+00    0.36235E+00
4     0.27442E+00    0.13867E+00    0.27442E+00
5     0.66432E-01                   0.66423E-01
Sum   0.10000E+01    0.10000E+01    0.10000E+01
ρ M E10 D H2
0.10 0.43605 0.44309 0.44398 0.43049
0.20 0.71513 0.74594 0.75042 0.69688
0.30 0.86079 0.90401 0.91046 0.83859
0.40 0.93027 0.96763 0.97254 0.91112
0.50 0.96330 0.98947 0.99216 0.94889
0.60 0.97961 0.99654 0.99779 0.96929
0.70 0.98808 0.99882 0.99937 0.98080
0.80 0.99270 0.99958 0.99982 0.98755
0.90 0.99535 0.99985 0.99995 0.99167
1.00 0.99693 0.99994 0.99998 0.99427
1.10 0.99791 0.99998 0.99999 0.99596
1.20 0.99854 0.99999 1.00000 0.99708
Table 4: Effect of ρ on U (N = 5)
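For the exponential column (M) of Table 4, U depends only on ρ and N and can be recomputed from the classical finite-source M/M/1 formula; a brief check:

    from math import factorial

    def util_mm1_finite(N, rho):
        """Server utilization U = 1 - P0 for the finite-source M/M/1 (exponential repair)."""
        P0 = 1.0 / sum(factorial(N) / factorial(N - n) * rho**n for n in range(N + 1))
        return 1.0 - P0

    # Reproduces the M column of Table 4 (N = 5)
    for rho in (0.10, 0.20, 0.30, 0.40, 0.50):
        print(f"rho={rho:.2f}  U={util_mm1_finite(5, rho):.5f}")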
ρ M E10 D H2
0.10 0.04666 0.02843 0.02618 0.06173
0.20 0.19834 0.14059 0.13258 0.23496
0.30 0.44258 0.35928 0.34753 0.48872
0.40 0.74992 0.66690 0.65648 0.79509
0.50 1.10952 1.10266 1.10198 1.11347
0.60 1.14624 1.14104 1.14066 1.14950
0.70 1.18422 1.18041 1.18022 1.18685
0.80 1.22294 1.22017 1.22007 1.22504
0.90 1.26210 1.26007 1.26002 1.26378
1.00 1.30154 1.30003 1.30001 1.30288
1.10 1.34115 1.34001 1.34000 1.34223
1.20 1.38080 1.38001 1.38000 1.38175
Table 5: Effect of ρ on W (N = 5)
N M E10 D H2
2 U 0.42830 0.44640 0.447490 0.432830
W 0.06923 0.04409 0.040818 0.086241
4 U 0.75742 0.79439 0.799950 0.737780
W 0.28432 0.21060 0.200100 0.326500
6 U 0.92821 0.96531 0.970200 0.908050
W 0.63921 0.56469 0.555290 0.682270
8 U 0.98641 0.99818 0.998880 0.976830
W 1.11331 1.11044 1.110270 1.115690
10 U 0.99833 0.99997 0.999990 0.995860
W 1.17050 1.17000 1.170000 1.171250
12 U 0.99986 1.00000 1.000000 0.999470
W 1.23005 1.23000 1.230000 1.230190
14 U 0.99999 1.00000 1.000000 0.999950
W 1.29000 1.29000 1.290000 1.290020
16 U 1.00000 1.00000 1.000000 1.000000
W 1.35000 1.35000 1.350000 1.350000
18 U 1.00000 1.00000 1.000000 1.000000
W 1.41000 1.41000 1.410000 1.410000
20 U 1.00000 1.00000 1.000000 1.000000
   W 1.47000 1.47000 1.470000 1.470000

Table 6: Effect of N on U and W (ρ = 0.3)
This system has been generalized to the \vec{M}/G/1/FIFO system, which can be
found in [61, 62].
Asymptotic Methods
• Preliminary results
Preliminary results
In this section a brief survey is given of the most related theoretical results
due to Anisimov [3, 5], to be applied later on.
Let (X(k), k ≥ 0) be a Markov chain with state space

\bigcup_{q=0}^{m+1} X_q, \qquad X_i \cap X_j = \emptyset, \quad i \ne j,

defined by the transition matrix p(i, j)
satisfying the following conditions:
In the sequel the set of states Xq is called the q-th level of the chain,
q = 1, . . . , m + 1. Let us single out the subset of states
\langle \alpha_m \rangle = \bigcup_{q=0}^{m} X_q
and consider the transition probabilities

\frac{p(i^{(q)}, j^{(z)})}{1 - \sum_{k^{(m+1)} \in X_{m+1}} p(i^{(q)}, k^{(m+1)})}, \qquad i^{(q)} \in X_q, \; j^{(z)} \in X_z, \; q, z \le m,

of the chain restricted to \langle \alpha_m \rangle. Let \pi(\cdot) denote the corresponding stationary
distribution and let g(\langle \alpha_m \rangle) be the steady-state probability of leaving \langle \alpha_m \rangle, that is,

g(\langle \alpha_m \rangle) = \sum_{i^{(m)} \in X_m} \pi(i^{(m)}) \sum_{j^{(m+1)} \in X_{m+1}} p(i^{(m)}, j^{(m+1)}).
Let

A^{(q)} = \left\| \alpha^{(q)}(i^{(q)}, j^{(q+1)}) \right\|, \qquad i^{(q)} \in X_q, \; j^{(q+1)} \in X_{q+1}, \; q = 0, \ldots, m,

be the matrices defined by Condition 2, where \mathbf{1} = (1, \ldots, 1)^* is a column vector;
see Anisimov et al. [5], pp. 141-153.
Let (η(t), t ≥ 0) be a Semi-Markov Process (SMP) given by the embedded
Markov chain (X(k), k ≥ 0) satisfying conditions (1)-(4). Let the times
τ(j (s), k (z)) – transition times from state j (s) to state k (z) – fulfill the
condition
Theorem 1 (cf. [5], p. 153). If the above conditions are satisfied, then

where

A(\Theta) = \frac{\sum_{j^{(0)}, k^{(0)} \in X_0} \pi_0(j^{(0)})\, p_0(j^{(0)}, k^{(0)})\, a_{jk}(0, 0, \Theta)}{\pi_0 A^{(0)} A^{(1)} \cdots A^{(m)} \mathbf{1}}.
Corollary 1. In particular, if αjk (s, z, Θ) = iΘmjk (s, z) then the limit is an
exponentially distributed random variable with mean
i.e., the instant at which the number of inactive processors reaches the
(m + 1)-th level for the first time, provided that at the beginning their
number is not greater than m, m = 1, . . . , N − 1. In particular, if m = N − 1
then the bus becomes idle since there is no active processor and, hence
Ωε(N − 1) can be referred to as the busy period length of the bus. Denote by
πo(i1, i2 : 0; k1, . . . , kN ) the steady-state probability that ξ1(t) is in state
i1, ξ2(t) is in state i2, there is no idle processor and the order of requests’
arrival to the bus is (k1, . . . , kN ). Similarly, denote by
πo(i1, i2 : 1; k2, . . . , kN ) the steady-state probability that the first random
environment is in state i1, the second one is in state i2 , processor k1 is
inactive and the other processors sent their requests in order (k2, . . . , kN ).
Clearly (k_s, \ldots, k_N) \in V_N^{N-s+1}, s = 1, 2, where V_N^{N-s+1} denotes the set of
all variations of order N − s + 1 of the integers 1, \ldots, N. Now we have:
\Lambda = \sum_{i_1=1}^{r_1} \sum_{i_2=1}^{r_2} \sum_{(k_2, \ldots, k_N) \in V_N^{N-1}} \pi_o(i_1, i_2 : 1; k_2, \ldots, k_N)

where

D = \sum_{\substack{i_1, j_1 = 1 \\ j_1 \ne i_1}}^{r_1} \sum_{\substack{i_2, j_2 = 1 \\ j_2 \ne i_2}}^{r_2} \sum_{(k_1, \ldots, k_N) \in V_N^N} \pi_o(i_1, i_2 : 0; k_1, \ldots, k_N) \times \frac{a^{(1)}_{i_1 j_1} + a^{(2)}_{i_2 j_2}}{\left(a^{(1)}_{i_1 i_1} + a^{(2)}_{i_2 i_2} + \mu_{k_1}(i_2)\right)^2}
Hence our aim is to determine the distribution of the first exit time of Z_ε(t)
from \langle \alpha_m \rangle, provided that Z_ε(t) \in \langle \alpha_m \rangle. It can easily be verified that the
transition probabilities are

p_\varepsilon[(i_1, i_2 : N; 0), (j_1, i_2 : N; 0)] = \frac{a^{(1)}_{i_1 j_1}}{a^{(1)}_{i_1 i_1} + a^{(2)}_{i_2 i_2} + \sum_{p=1}^{N} \lambda_p(i_1)/\varepsilon}, \qquad s = N,

p_\varepsilon[(i_1, i_2 : N; 0), (i_1, j_2 : N; 0)] = \frac{a^{(2)}_{i_2 j_2}}{a^{(1)}_{i_1 i_1} + a^{(2)}_{i_2 i_2} + \sum_{p=1}^{N} \lambda_p(i_1)/\varepsilon}, \qquad s = N,

p_\varepsilon[(i_1, i_2 : N; 0), (i_1, i_2 : N - 1; k)] = \frac{\lambda_k(i_1)/\varepsilon}{a^{(1)}_{i_1 i_1} + a^{(2)}_{i_2 i_2} + \sum_{p=1}^{N} \lambda_p(i_1)/\varepsilon}, \qquad s = N.
As ε → 0 this implies

p_\varepsilon[(i_1, i_2 : 0; k_1, \ldots, k_N), (j_1, i_2 : 0; k_1, \ldots, k_N)] = \frac{a^{(1)}_{i_1 j_1}}{a^{(1)}_{i_1 i_1} + a^{(2)}_{i_2 i_2} + \mu_{k_1}(i_2)}, \qquad s = 0,

p_\varepsilon[(i_1, i_2 : 0; k_1, \ldots, k_N), (i_1, j_2 : 0; k_1, \ldots, k_N)] = \frac{a^{(2)}_{i_2 j_2}}{a^{(1)}_{i_1 i_1} + a^{(2)}_{i_2 i_2} + \mu_{k_1}(i_2)}, \qquad s = 0,

p_\varepsilon[(i_1, i_2 : s; k_1, \ldots, k_{N-s}), (j_1, i_2 : s; k_1, \ldots, k_{N-s})] = o(1), \qquad s = 1, \ldots, N,

p_\varepsilon[(i_1, i_2 : s; k_1, \ldots, k_{N-s}), (i_1, j_2 : s; k_1, \ldots, k_{N-s})] = o(1), \qquad s = 1, \ldots, N,

p_\varepsilon[(i_1, i_2 : 0; k_1, \ldots, k_N), (i_1, i_2 : 1; k_2, \ldots, k_N)] = \frac{\mu_{k_1}(i_2)}{a^{(1)}_{i_1 i_1} + a^{(2)}_{i_2 i_2} + \mu_{k_1}(i_2)}, \qquad s = 0,

p_\varepsilon[(i_1, i_2 : s; k_1, \ldots, k_{N-s}), (i_1, i_2 : s + 1; k_2, \ldots, k_{N-s})] = \frac{\mu_{k_1}(i_2)\, \varepsilon}{\sum_{p \ne k_1, \ldots, k_{N-s}} \lambda_p(i_1)} (1 + o(1)), \qquad s = 1, \ldots, N - 1.
This agrees with the conditions (1)-(4), but here the zero level is the set
Since the level 0 in the limit forms an essential class, the probabilities
\pi_o(i_1, i_2 : 0; k_1, \ldots, k_N), \pi_o(i_1, i_2 : 1; k_1, \ldots, k_{N-1}), i_1 = 1, \ldots, r_1, i_2 =
1, \ldots, r_2, (k_1, \ldots, k_{N-s}) \in V_N^{N-s}, s = 0, 1, satisfy the following system of
equations

\sum_{i_1=1}^{r_1} \sum_{i_2=1}^{r_2} \sum_{(k_1, \ldots, k_N)} \left\{ \pi_o(i_1, i_2 : 0; k_1, \ldots, k_N) + \pi_o(i_1, i_2 : 1; k_1, \ldots, k_{N-1}) \right\} = 1.
g(\langle \alpha_m \rangle) = \varepsilon^m \sum_{i_1=1}^{r_1} \sum_{i_2=1}^{r_2} \sum_{(k_2, \ldots, k_N) \in V_N^{N-1}} \pi_o(i_1, i_2 : 1; k_2, \ldots, k_N)  (65)
Taking into account the exponentiality of \tau_\varepsilon(j_1, j_2 : s; k_1, \ldots, k_{N-s}), for fixed
Θ it is implied that

E \exp\{i\varepsilon^m \Theta\, \tau_\varepsilon(j_1, j_2 : 0; k_1, \ldots, k_N)\} = 1 + \varepsilon^m \frac{i\Theta}{a^{(1)}_{j_1 j_1} + a^{(2)}_{j_2 j_2} + \mu_{k_1}(j_2)} (1 + o(1)),

E \exp\{i\varepsilon^m \Theta\, \tau_\varepsilon(j_1, j_2 : s; k_1, \ldots, k_{N-s})\} = 1 + o(\varepsilon^m), \qquad s > 0.
It can easily be verified that the solution of (66) together with (67) is

\pi_o(i_1, i_2 : 0; k_1, \ldots, k_N) = B \pi^{(1)}_{i_1} \pi^{(2)}_{i_2} \left(a^{(1)}_{i_1 i_1} + a^{(2)}_{i_2 i_2} + \mu(i_2)\right),

\pi_o(i_1, i_2 : 1; k_1, \ldots, k_{N-1}) = B \pi^{(1)}_{i_1} \pi^{(2)}_{i_2} \left(a^{(1)}_{i_1 i_1} + a^{(2)}_{i_2 i_2} + \mu(i_2)\right) \times \cdots \times \frac{1}{\lambda_{k_1}(i_1) + \cdots + \lambda_{k_m}(i_1)}.
Consequently, the distribution of the time until the number of idle processors
reaches the (m + 1)-th level for the first time is approximated by an exponential
distribution. In particular, when m = N − 1, we get that the busy period length
of the bus is asymptotically an exponentially distributed random variable with
parameter

\varepsilon^{N-1} \Lambda = \varepsilon^{N-1} \frac{1}{N!} \sum_{i_1=1}^{r_1} \sum_{i_2=1}^{r_2} \pi^{(1)}_{i_1} \pi^{(2)}_{i_2}\, \mu(i_2)^N \sum_{(k_1, \ldots, k_N) \in V_N^N} \frac{1}{\lambda_{k_1}(i_1)} \times \frac{1}{\lambda_{k_1}(i_1) + \lambda_{k_2}(i_1)} \times \cdots \times \frac{1}{\lambda_{k_1}(i_1) + \cdots + \lambda_{k_{N-1}}(i_1)}.  (68)
In the case when there are no random environments, i.e., \mu(i_2) = \mu and
\lambda_p(i_1) = \lambda_p, i_1 = 1, \ldots, r_1, i_2 = 1, \ldots, r_2, p = 1, \ldots, N, from (68) it follows
that

\varepsilon^{N-1} \Lambda = \frac{\mu^N}{N!} \sum_{(k_1, \ldots, k_N) \in V_N^N} \frac{1}{\lambda_{k_1}/\varepsilon} \times \frac{1}{\lambda_{k_1}/\varepsilon + \lambda_{k_2}/\varepsilon} \times \cdots \times \frac{1}{\lambda_{k_1}/\varepsilon + \cdots + \lambda_{k_{N-1}}/\varepsilon}.  (69)

In particular, if \lambda_p = \lambda for p = 1, \ldots, N, then

\varepsilon^{N-1} \Lambda = \frac{1}{(N - 1)!} \cdot \frac{\mu^N}{(\lambda/\varepsilon)^{N-1}}.  (70)
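A small numerical sanity check of the step from (69) to (70), under illustrative values of N, λ, μ and ε: the sum over all variations is evaluated by brute force with itertools.permutations and compared with the closed form.

    from itertools import permutations
    from math import factorial, isclose

    # Illustrative parameters for the homogeneous case lambda_p = lambda, p = 1..N
    N, lam, mu, eps = 4, 2.0, 1.0, 0.1
    rates = [lam / eps] * N                      # request rates lambda_p / epsilon

    # Right-hand side of Eq. (69): sum over all orderings of products of reciprocal partial sums
    total = 0.0
    for perm in permutations(rates):
        prod, partial = 1.0, 0.0
        for r in perm[:N - 1]:                   # N - 1 factors: 1/lambda_k1, 1/(lambda_k1+lambda_k2), ...
            partial += r
            prod *= 1.0 / partial
        total += prod
    lhs = mu**N / factorial(N) * total           # epsilon^(N-1) * Lambda from Eq. (69)

    rhs = mu**N / (factorial(N - 1) * (lam / eps)**(N - 1))   # closed form, Eq. (70)
    assert isclose(lhs, rhs, rel_tol=1e-12)
    print(f"epsilon^(N-1) * Lambda = {lhs:.6e}  (matches Eq. (70): {rhs:.6e})")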
Performance measures
This section deals with the derivation of the main steady-state performance
measures relating to the heterogeneous multiprocessor model treated in the
previous section.
Utilizations
The utilization U of the bus is defined as the fraction of time during which it
is busy. The idle period of the bus starts when each processor is idle at the
end of a service completion, and terminates when a processor generates a
request. It is clear that the mean idle period length is
\sum_{i_1=1}^{r_1} \pi^{(1)}_{i_1} \frac{1}{\sum_{p=1}^{N} \lambda_p(i_1)/\varepsilon}.
U_p = U \sum_{i_1=1}^{r_1} \pi^{(1)}_{i_1} \left( \lambda_p(i_1) \Big/ \sum_{k=1}^{N} \lambda_k(i_1) \right).  (72)
Throughput
U_p = \gamma_p b_p,

and thus

\gamma_p = U_p \Big/ \sum_{i_2=1}^{r_2} \pi^{(2)}_{i_2} \frac{1}{\mu(i_2)}.
The mean delay Tp of processor p is the average time from the instant at
which a request is generated at processor p to the instant at which the bus
usage of that request has been completed. In other words, Tp is the mean
duration of an active state at processor p. Since the state of processor p
alternates between the active state of average duration Tp and the inactive
state of mean duration
\sum_{i_1=1}^{r_1} \pi^{(1)}_{i_1} \frac{1}{\lambda_p(i_1)/\varepsilon},

we have

\gamma_p = \frac{1}{T_p + \sum_{i_1=1}^{r_1} \pi^{(1)}_{i_1} \dfrac{1}{\lambda_p(i_1)/\varepsilon}}.

Thus,

T_p = \frac{1}{\gamma_p} - \sum_{i_1=1}^{r_1} \pi^{(1)}_{i_1} \frac{1}{\lambda_p(i_1)/\varepsilon}.
1
r2
X (2) 1
Wp = Tp − πi 2 .
i2 =1
µ(i2)
The mean cycle length is

C = \frac{1}{\varepsilon^{N-1} \Lambda} + \sum_{i_1=1}^{r_1} \pi^{(1)}_{i_1} \frac{1}{\sum_{p=1}^{N} \lambda_p(i_1)/\varepsilon},

\sum_{p=1}^{N} N_p = \sum_{p=1}^{N} \gamma_p C,

Q(p) = \gamma_p \sum_{i_1=1}^{r_1} \pi^{(1)}_{i_1} \frac{1}{\lambda_p(i_1)/\varepsilon},

\sum_{p=1}^{N} (1 - Q(p)) = N - \sum_{p=1}^{N} Q(p).
Numerical results
where ρ = (λ/ε)/μ. In this case relations (70)-(72) reduce to the following
approximation:

U_p = \frac{1}{N} \cdot \frac{N!}{N! + \left( \dfrac{\mu}{\lambda/\varepsilon} \right)^N}.
N = 3                                  N = 4
ρ      Up*           Up               ρ      Up*           Up
1      0.3125        0.285714286      1      0.246153846   0.24
2      0.329113924   0.326530612      2      0.249605055   0.249350649
2^2    0.332657201   0.332467532      2^2    0.249968310   0.249959317
2^3    0.333237575   0.333224862      2^3    0.249997756   0.249997457
2^4    0.333320592   0.333319771      2^4    0.249999999   0.249999999
2^5    0.333333169   0.333331638      2^5    0.25          0.25
2^6    0.333333125   0.333333121
2^7    0.333333307   0.333333307
2^8    0.333333333   0.333333333

N = 5                                  N = 6
ρ      Up*           Up               ρ      Up*           Up
1      0.199386503   0.198347107      1      0.166581502   0.166435506
2      0.199968409   0.199947930      2      0.166664473   0.166666305
2^2    0.199998732   0.199998372      2^2    0.166666623   0.166666661
2^3    0.199999955   0.199999949      2^3    0.166666666   0.166666666
2^4    0.199999998   0.199999998
2^5    0.2           0.2

N = 7                                  N = 8
ρ      Up*           Up               ρ      Up*           Up
1      0.142846715   0.142828804      1      0.124998860   0.1249969
2      0.142857009   0.142856921      2      0.124999993   0.124999988
2^2    0.142857142   0.142857141      2^2    0.125         0.125
2^3    0.142857143   0.142857143

N = 9                                  N = 10
ρ      Up*           Up               ρ      Up*           Up
1      0.111110998   0.111110805      1      0.099999999   0.099999999
2      0.111111111   0.111111111      2      0.1           0.1

Table 7: Exact and asymptotic results
It can be observed from Table 7 that the approximate values of {Up} are
very close in accuracy to the exact results {Up*}. However, the computational
complexity has been considerably reduced by the proposed approximation. As
λ/ε becomes greater than µ, the {Up} approximations, as expected, approach
the exact values {Up*}. Clearly, the greater the number of processors, the
fewer steps are needed to reach the exact results.
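The asymptotic column of Table 7 can be reproduced directly from the approximation above; a short sketch:

    from math import factorial

    def up_asymptotic(N, rho):
        """Asymptotic utilization U_p = (1/N) * N! / (N! + (mu/(lambda/eps))^N), with rho = (lambda/eps)/mu."""
        return factorial(N) / (N * (factorial(N) + (1.0 / rho) ** N))

    for N in (3, 4, 5):
        for k in range(0, 4):                      # rho = 1, 2, 2^2, 2^3 as in Table 7
            rho = 2 ** k
            print(f"N={N}  rho=2^{k}  Up={up_asymptotic(N, rho):.9f}")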
References
[7] Baccelli, F., and Makowski, A. Stability and bounds for single
server queues in random environment . Communications in Statistics:
Stochastic Models 2 (1986), 281–292.
[9] Bolch, G., Greiner, S., de Meer, H., and Trivedi, K. Queueing
Networks and Markov Chains. Wiley & Sons, New York, 1998.
[11] Bunday, B. D., and Scraton, R. E. The G/M/r machine interference model.
Eur. J. Oper. Res. 4, 9 (1980), 399–402.
[14] Chen, I.-R., and Wang, D.-C. Repairman models for replicated data
management: A case study. In Fourth International Conference on
Parallel and Distributed Information Systems (PDIS ’96) (Los Alamitos,
Ca., USA, Dec. 1996), IEEE Computer Society Press, pp. 184–195.
[36] Jain, R. The Art of Computer Systems Performance Analysis. Wiley &
Sons, New York, 1991.
[41] Kleinrock, L. Queueing systems. Vol. I. Theory. John Wiley & Sons,
New York, 1975.
[53] Peck, L., and Hazelwood, R. Finite Queueing Tables. Wiley &
Sons, New York, 1958.
[82] Wang, K., and Sivazlian, B. Comparative analysis for the G/G/r
machine repair problem. Computers and Industrial Engineering 18, 4
(1990), 511–520.